Jan 29 11:34:54.142677 kernel: Linux version 6.6.74-flatcar (build@pony-truck.infra.kinvolk.io) (x86_64-cros-linux-gnu-gcc (Gentoo Hardened 13.3.1_p20240614 p1) 13.3.1 20240614, GNU ld (Gentoo 2.42 p6) 2.42.0) #1 SMP PREEMPT_DYNAMIC Wed Jan 29 09:36:13 -00 2025
Jan 29 11:34:54.142706 kernel: Command line: BOOT_IMAGE=/flatcar/vmlinuz-a mount.usr=/dev/mapper/usr verity.usr=PARTUUID=7130c94a-213a-4e5a-8e26-6cce9662f132 rootflags=rw mount.usrflags=ro consoleblank=0 root=LABEL=ROOT console=ttyS0,115200 flatcar.first_boot=detected verity.usrhash=519b8fded83181f8e61f734d5291f916d7548bfba9487c78bcb50d002d81719d
Jan 29 11:34:54.142720 kernel: BIOS-provided physical RAM map:
Jan 29 11:34:54.142729 kernel: BIOS-e820: [mem 0x0000000000000000-0x000000000009fbff] usable
Jan 29 11:34:54.142738 kernel: BIOS-e820: [mem 0x000000000009fc00-0x000000000009ffff] reserved
Jan 29 11:34:54.142746 kernel: BIOS-e820: [mem 0x00000000000f0000-0x00000000000fffff] reserved
Jan 29 11:34:54.142757 kernel: BIOS-e820: [mem 0x0000000000100000-0x000000009cfdbfff] usable
Jan 29 11:34:54.142766 kernel: BIOS-e820: [mem 0x000000009cfdc000-0x000000009cffffff] reserved
Jan 29 11:34:54.142774 kernel: BIOS-e820: [mem 0x00000000b0000000-0x00000000bfffffff] reserved
Jan 29 11:34:54.142786 kernel: BIOS-e820: [mem 0x00000000fed1c000-0x00000000fed1ffff] reserved
Jan 29 11:34:54.142795 kernel: BIOS-e820: [mem 0x00000000feffc000-0x00000000feffffff] reserved
Jan 29 11:34:54.142803 kernel: BIOS-e820: [mem 0x00000000fffc0000-0x00000000ffffffff] reserved
Jan 29 11:34:54.142812 kernel: BIOS-e820: [mem 0x000000fd00000000-0x000000ffffffffff] reserved
Jan 29 11:34:54.142821 kernel: NX (Execute Disable) protection: active
Jan 29 11:34:54.142832 kernel: APIC: Static calls initialized
Jan 29 11:34:54.142844 kernel: SMBIOS 2.8 present.
Jan 29 11:34:54.142854 kernel: DMI: QEMU Standard PC (Q35 + ICH9, 2009), BIOS 1.16.2-debian-1.16.2-1 04/01/2014
Jan 29 11:34:54.142863 kernel: Hypervisor detected: KVM
Jan 29 11:34:54.142873 kernel: kvm-clock: Using msrs 4b564d01 and 4b564d00
Jan 29 11:34:54.142882 kernel: kvm-clock: using sched offset of 2360740302 cycles
Jan 29 11:34:54.142892 kernel: clocksource: kvm-clock: mask: 0xffffffffffffffff max_cycles: 0x1cd42e4dffb, max_idle_ns: 881590591483 ns
Jan 29 11:34:54.142902 kernel: tsc: Detected 2794.748 MHz processor
Jan 29 11:34:54.142912 kernel: e820: update [mem 0x00000000-0x00000fff] usable ==> reserved
Jan 29 11:34:54.142922 kernel: e820: remove [mem 0x000a0000-0x000fffff] usable
Jan 29 11:34:54.142932 kernel: last_pfn = 0x9cfdc max_arch_pfn = 0x400000000
Jan 29 11:34:54.142945 kernel: MTRR map: 4 entries (3 fixed + 1 variable; max 19), built from 8 variable MTRRs
Jan 29 11:34:54.142970 kernel: x86/PAT: Configuration [0-7]: WB WC UC- UC WB WP UC- WT
Jan 29 11:34:54.142982 kernel: Using GB pages for direct mapping
Jan 29 11:34:54.142991 kernel: ACPI: Early table checksum verification disabled
Jan 29 11:34:54.142999 kernel: ACPI: RSDP 0x00000000000F59D0 000014 (v00 BOCHS )
Jan 29 11:34:54.143008 kernel: ACPI: RSDT 0x000000009CFE2408 000038 (v01 BOCHS BXPC 00000001 BXPC 00000001)
Jan 29 11:34:54.143017 kernel: ACPI: FACP 0x000000009CFE21E8 0000F4 (v03 BOCHS BXPC 00000001 BXPC 00000001)
Jan 29 11:34:54.143026 kernel: ACPI: DSDT 0x000000009CFE0040 0021A8 (v01 BOCHS BXPC 00000001 BXPC 00000001)
Jan 29 11:34:54.143039 kernel: ACPI: FACS 0x000000009CFE0000 000040
Jan 29 11:34:54.143047 kernel: ACPI: APIC 0x000000009CFE22DC 000090 (v01 BOCHS BXPC 00000001 BXPC 00000001)
Jan 29 11:34:54.143056 kernel: ACPI: HPET 0x000000009CFE236C 000038 (v01 BOCHS BXPC 00000001 BXPC 00000001)
Jan 29 11:34:54.143065 kernel: ACPI: MCFG 0x000000009CFE23A4 00003C (v01 BOCHS BXPC 00000001 BXPC 00000001)
Jan 29 11:34:54.143074 kernel: ACPI: WAET 0x000000009CFE23E0 000028 (v01 BOCHS BXPC 00000001 BXPC 00000001)
Jan 29 11:34:54.143082 kernel: ACPI: Reserving FACP table memory at [mem 0x9cfe21e8-0x9cfe22db]
Jan 29 11:34:54.143092 kernel: ACPI: Reserving DSDT table memory at [mem 0x9cfe0040-0x9cfe21e7]
Jan 29 11:34:54.143106 kernel: ACPI: Reserving FACS table memory at [mem 0x9cfe0000-0x9cfe003f]
Jan 29 11:34:54.143119 kernel: ACPI: Reserving APIC table memory at [mem 0x9cfe22dc-0x9cfe236b]
Jan 29 11:34:54.143129 kernel: ACPI: Reserving HPET table memory at [mem 0x9cfe236c-0x9cfe23a3]
Jan 29 11:34:54.143139 kernel: ACPI: Reserving MCFG table memory at [mem 0x9cfe23a4-0x9cfe23df]
Jan 29 11:34:54.143149 kernel: ACPI: Reserving WAET table memory at [mem 0x9cfe23e0-0x9cfe2407]
Jan 29 11:34:54.143159 kernel: No NUMA configuration found
Jan 29 11:34:54.143170 kernel: Faking a node at [mem 0x0000000000000000-0x000000009cfdbfff]
Jan 29 11:34:54.143180 kernel: NODE_DATA(0) allocated [mem 0x9cfd6000-0x9cfdbfff]
Jan 29 11:34:54.143194 kernel: Zone ranges:
Jan 29 11:34:54.143204 kernel: DMA [mem 0x0000000000001000-0x0000000000ffffff]
Jan 29 11:34:54.143214 kernel: DMA32 [mem 0x0000000001000000-0x000000009cfdbfff]
Jan 29 11:34:54.143224 kernel: Normal empty
Jan 29 11:34:54.143234 kernel: Movable zone start for each node
Jan 29 11:34:54.143245 kernel: Early memory node ranges
Jan 29 11:34:54.143255 kernel: node 0: [mem 0x0000000000001000-0x000000000009efff]
Jan 29 11:34:54.143265 kernel: node 0: [mem 0x0000000000100000-0x000000009cfdbfff]
Jan 29 11:34:54.143276 kernel: Initmem setup node 0 [mem 0x0000000000001000-0x000000009cfdbfff]
Jan 29 11:34:54.143308 kernel: On node 0, zone DMA: 1 pages in unavailable ranges
Jan 29 11:34:54.143320 kernel: On node 0, zone DMA: 97 pages in unavailable ranges
Jan 29 11:34:54.143330 kernel: On node 0, zone DMA32: 12324 pages in unavailable ranges
Jan 29 11:34:54.143340 kernel: ACPI: PM-Timer IO Port: 0x608
Jan 29 11:34:54.143350 kernel: ACPI: LAPIC_NMI (acpi_id[0xff] dfl dfl lint[0x1])
Jan 29 11:34:54.143360 kernel: IOAPIC[0]: apic_id 0, version 17, address 0xfec00000, GSI 0-23
Jan 29 11:34:54.143371 kernel: ACPI: INT_SRC_OVR (bus 0 bus_irq 0 global_irq 2 dfl dfl)
Jan 29 11:34:54.143381 kernel: ACPI: INT_SRC_OVR (bus 0 bus_irq 5 global_irq 5 high level)
Jan 29 11:34:54.143391 kernel: ACPI: INT_SRC_OVR (bus 0 bus_irq 9 global_irq 9 high level)
Jan 29 11:34:54.143404 kernel: ACPI: INT_SRC_OVR (bus 0 bus_irq 10 global_irq 10 high level)
Jan 29 11:34:54.143414 kernel: ACPI: INT_SRC_OVR (bus 0 bus_irq 11 global_irq 11 high level)
Jan 29 11:34:54.143425 kernel: ACPI: Using ACPI (MADT) for SMP configuration information
Jan 29 11:34:54.143435 kernel: ACPI: HPET id: 0x8086a201 base: 0xfed00000
Jan 29 11:34:54.143445 kernel: TSC deadline timer available
Jan 29 11:34:54.143455 kernel: smpboot: Allowing 4 CPUs, 0 hotplug CPUs
Jan 29 11:34:54.143465 kernel: kvm-guest: APIC: eoi() replaced with kvm_guest_apic_eoi_write()
Jan 29 11:34:54.143475 kernel: kvm-guest: KVM setup pv remote TLB flush
Jan 29 11:34:54.143485 kernel: kvm-guest: setup PV sched yield
Jan 29 11:34:54.143498 kernel: [mem 0xc0000000-0xfed1bfff] available for PCI devices
Jan 29 11:34:54.143509 kernel: Booting paravirtualized kernel on KVM
Jan 29 11:34:54.143519 kernel: clocksource: refined-jiffies: mask: 0xffffffff max_cycles: 0xffffffff, max_idle_ns: 1910969940391419 ns
Jan 29 11:34:54.143530 kernel: setup_percpu: NR_CPUS:512 nr_cpumask_bits:4 nr_cpu_ids:4 nr_node_ids:1
Jan 29 11:34:54.143540 kernel: percpu: Embedded 58 pages/cpu s197032 r8192 d32344 u524288
Jan 29 11:34:54.143550 kernel: pcpu-alloc: s197032 r8192 d32344 u524288 alloc=1*2097152
Jan 29 11:34:54.143560 kernel: pcpu-alloc: [0] 0 1 2 3
Jan 29 11:34:54.143570 kernel: kvm-guest: PV spinlocks enabled
Jan 29 11:34:54.143581 kernel: PV qspinlock hash table entries: 256 (order: 0, 4096 bytes, linear)
Jan 29 11:34:54.143595 kernel: Kernel command line: rootflags=rw mount.usrflags=ro BOOT_IMAGE=/flatcar/vmlinuz-a mount.usr=/dev/mapper/usr verity.usr=PARTUUID=7130c94a-213a-4e5a-8e26-6cce9662f132 rootflags=rw mount.usrflags=ro consoleblank=0 root=LABEL=ROOT console=ttyS0,115200 flatcar.first_boot=detected verity.usrhash=519b8fded83181f8e61f734d5291f916d7548bfba9487c78bcb50d002d81719d
Jan 29 11:34:54.143606 kernel: Unknown kernel command line parameters "BOOT_IMAGE=/flatcar/vmlinuz-a", will be passed to user space.
Jan 29 11:34:54.143615 kernel: random: crng init done
Jan 29 11:34:54.143625 kernel: Dentry cache hash table entries: 524288 (order: 10, 4194304 bytes, linear)
Jan 29 11:34:54.143635 kernel: Inode-cache hash table entries: 262144 (order: 9, 2097152 bytes, linear)
Jan 29 11:34:54.143645 kernel: Fallback order for Node 0: 0
Jan 29 11:34:54.143655 kernel: Built 1 zonelists, mobility grouping on. Total pages: 632732
Jan 29 11:34:54.143664 kernel: Policy zone: DMA32
Jan 29 11:34:54.143674 kernel: mem auto-init: stack:off, heap alloc:off, heap free:off
Jan 29 11:34:54.143688 kernel: Memory: 2434592K/2571752K available (12288K kernel code, 2301K rwdata, 22736K rodata, 42972K init, 2220K bss, 136900K reserved, 0K cma-reserved)
Jan 29 11:34:54.143698 kernel: SLUB: HWalign=64, Order=0-3, MinObjects=0, CPUs=4, Nodes=1
Jan 29 11:34:54.143707 kernel: ftrace: allocating 37923 entries in 149 pages
Jan 29 11:34:54.143717 kernel: ftrace: allocated 149 pages with 4 groups
Jan 29 11:34:54.143727 kernel: Dynamic Preempt: voluntary
Jan 29 11:34:54.143737 kernel: rcu: Preemptible hierarchical RCU implementation.
Jan 29 11:34:54.143748 kernel: rcu: RCU event tracing is enabled.
Jan 29 11:34:54.143758 kernel: rcu: RCU restricting CPUs from NR_CPUS=512 to nr_cpu_ids=4.
Jan 29 11:34:54.143768 kernel: Trampoline variant of Tasks RCU enabled.
Jan 29 11:34:54.143781 kernel: Rude variant of Tasks RCU enabled.
Jan 29 11:34:54.143791 kernel: Tracing variant of Tasks RCU enabled.
Jan 29 11:34:54.143801 kernel: rcu: RCU calculated value of scheduler-enlistment delay is 100 jiffies.
Jan 29 11:34:54.143811 kernel: rcu: Adjusting geometry for rcu_fanout_leaf=16, nr_cpu_ids=4
Jan 29 11:34:54.143820 kernel: NR_IRQS: 33024, nr_irqs: 456, preallocated irqs: 16
Jan 29 11:34:54.143830 kernel: rcu: srcu_init: Setting srcu_struct sizes based on contention.
Jan 29 11:34:54.143840 kernel: Console: colour VGA+ 80x25
Jan 29 11:34:54.143849 kernel: printk: console [ttyS0] enabled
Jan 29 11:34:54.143859 kernel: ACPI: Core revision 20230628
Jan 29 11:34:54.143872 kernel: clocksource: hpet: mask: 0xffffffff max_cycles: 0xffffffff, max_idle_ns: 19112604467 ns
Jan 29 11:34:54.143882 kernel: APIC: Switch to symmetric I/O mode setup
Jan 29 11:34:54.143892 kernel: x2apic enabled
Jan 29 11:34:54.143902 kernel: APIC: Switched APIC routing to: physical x2apic
Jan 29 11:34:54.143911 kernel: kvm-guest: APIC: send_IPI_mask() replaced with kvm_send_ipi_mask()
Jan 29 11:34:54.143921 kernel: kvm-guest: APIC: send_IPI_mask_allbutself() replaced with kvm_send_ipi_mask_allbutself()
Jan 29 11:34:54.143931 kernel: kvm-guest: setup PV IPIs
Jan 29 11:34:54.143964 kernel: ..TIMER: vector=0x30 apic1=0 pin1=2 apic2=-1 pin2=-1
Jan 29 11:34:54.143974 kernel: tsc: Marking TSC unstable due to TSCs unsynchronized
Jan 29 11:34:54.143984 kernel: Calibrating delay loop (skipped) preset value.. 5589.49 BogoMIPS (lpj=2794748)
Jan 29 11:34:54.143994 kernel: x86/cpu: User Mode Instruction Prevention (UMIP) activated
Jan 29 11:34:54.144007 kernel: Last level iTLB entries: 4KB 512, 2MB 255, 4MB 127
Jan 29 11:34:54.144017 kernel: Last level dTLB entries: 4KB 512, 2MB 255, 4MB 127, 1GB 0
Jan 29 11:34:54.144028 kernel: Spectre V1 : Mitigation: usercopy/swapgs barriers and __user pointer sanitization
Jan 29 11:34:54.144038 kernel: Spectre V2 : Mitigation: Retpolines
Jan 29 11:34:54.144048 kernel: Spectre V2 : Spectre v2 / SpectreRSB mitigation: Filling RSB on context switch
Jan 29 11:34:54.144061 kernel: Spectre V2 : Spectre v2 / SpectreRSB : Filling RSB on VMEXIT
Jan 29 11:34:54.144072 kernel: Spectre V2 : Enabling Speculation Barrier for firmware calls
Jan 29 11:34:54.144082 kernel: RETBleed: Mitigation: untrained return thunk
Jan 29 11:34:54.144092 kernel: Spectre V2 : mitigation: Enabling conditional Indirect Branch Prediction Barrier
Jan 29 11:34:54.144102 kernel: Speculative Store Bypass: Mitigation: Speculative Store Bypass disabled via prctl
Jan 29 11:34:54.144113 kernel: Speculative Return Stack Overflow: IBPB-extending microcode not applied!
Jan 29 11:34:54.144125 kernel: Speculative Return Stack Overflow: WARNING: See https://kernel.org/doc/html/latest/admin-guide/hw-vuln/srso.html for mitigation options.
Jan 29 11:34:54.144135 kernel: Speculative Return Stack Overflow: Vulnerable: Safe RET, no microcode
Jan 29 11:34:54.144149 kernel: x86/fpu: Supporting XSAVE feature 0x001: 'x87 floating point registers'
Jan 29 11:34:54.144159 kernel: x86/fpu: Supporting XSAVE feature 0x002: 'SSE registers'
Jan 29 11:34:54.144170 kernel: x86/fpu: Supporting XSAVE feature 0x004: 'AVX registers'
Jan 29 11:34:54.144180 kernel: x86/fpu: xstate_offset[2]: 576, xstate_sizes[2]: 256
Jan 29 11:34:54.144191 kernel: x86/fpu: Enabled xstate features 0x7, context size is 832 bytes, using 'compacted' format.
Jan 29 11:34:54.144201 kernel: Freeing SMP alternatives memory: 32K
Jan 29 11:34:54.144212 kernel: pid_max: default: 32768 minimum: 301
Jan 29 11:34:54.144222 kernel: LSM: initializing lsm=lockdown,capability,landlock,selinux,integrity
Jan 29 11:34:54.144233 kernel: landlock: Up and running.
Jan 29 11:34:54.144247 kernel: SELinux: Initializing.
Jan 29 11:34:54.144257 kernel: Mount-cache hash table entries: 8192 (order: 4, 65536 bytes, linear)
Jan 29 11:34:54.144267 kernel: Mountpoint-cache hash table entries: 8192 (order: 4, 65536 bytes, linear)
Jan 29 11:34:54.144278 kernel: smpboot: CPU0: AMD EPYC 7402P 24-Core Processor (family: 0x17, model: 0x31, stepping: 0x0)
Jan 29 11:34:54.144289 kernel: RCU Tasks: Setting shift to 2 and lim to 1 rcu_task_cb_adjust=1 rcu_task_cpu_ids=4.
Jan 29 11:34:54.144324 kernel: RCU Tasks Rude: Setting shift to 2 and lim to 1 rcu_task_cb_adjust=1 rcu_task_cpu_ids=4.
Jan 29 11:34:54.144335 kernel: RCU Tasks Trace: Setting shift to 2 and lim to 1 rcu_task_cb_adjust=1 rcu_task_cpu_ids=4.
Jan 29 11:34:54.144345 kernel: Performance Events: Fam17h+ core perfctr, AMD PMU driver.
Jan 29 11:34:54.144355 kernel: ... version:                0
Jan 29 11:34:54.144369 kernel: ... bit width:              48
Jan 29 11:34:54.144380 kernel: ... generic registers:      6
Jan 29 11:34:54.144391 kernel: ... value mask:             0000ffffffffffff
Jan 29 11:34:54.144401 kernel: ... max period:             00007fffffffffff
Jan 29 11:34:54.144412 kernel: ... fixed-purpose events:   0
Jan 29 11:34:54.144423 kernel: ... event mask:             000000000000003f
Jan 29 11:34:54.144434 kernel: signal: max sigframe size: 1776
Jan 29 11:34:54.144445 kernel: rcu: Hierarchical SRCU implementation.
Jan 29 11:34:54.144456 kernel: rcu: Max phase no-delay instances is 400.
Jan 29 11:34:54.144470 kernel: smp: Bringing up secondary CPUs ...
Jan 29 11:34:54.144481 kernel: smpboot: x86: Booting SMP configuration:
Jan 29 11:34:54.144492 kernel: .... node #0, CPUs: #1 #2 #3
Jan 29 11:34:54.144503 kernel: smp: Brought up 1 node, 4 CPUs
Jan 29 11:34:54.144513 kernel: smpboot: Max logical packages: 1
Jan 29 11:34:54.144525 kernel: smpboot: Total of 4 processors activated (22357.98 BogoMIPS)
Jan 29 11:34:54.144536 kernel: devtmpfs: initialized
Jan 29 11:34:54.144547 kernel: x86/mm: Memory block size: 128MB
Jan 29 11:34:54.144558 kernel: clocksource: jiffies: mask: 0xffffffff max_cycles: 0xffffffff, max_idle_ns: 1911260446275000 ns
Jan 29 11:34:54.144569 kernel: futex hash table entries: 1024 (order: 4, 65536 bytes, linear)
Jan 29 11:34:54.144583 kernel: pinctrl core: initialized pinctrl subsystem
Jan 29 11:34:54.144593 kernel: NET: Registered PF_NETLINK/PF_ROUTE protocol family
Jan 29 11:34:54.144604 kernel: audit: initializing netlink subsys (disabled)
Jan 29 11:34:54.144614 kernel: audit: type=2000 audit(1738150493.141:1): state=initialized audit_enabled=0 res=1
Jan 29 11:34:54.144625 kernel: thermal_sys: Registered thermal governor 'step_wise'
Jan 29 11:34:54.144635 kernel: thermal_sys: Registered thermal governor 'user_space'
Jan 29 11:34:54.144646 kernel: cpuidle: using governor menu
Jan 29 11:34:54.144657 kernel: acpiphp: ACPI Hot Plug PCI Controller Driver version: 0.5
Jan 29 11:34:54.144667 kernel: dca service started, version 1.12.1
Jan 29 11:34:54.144681 kernel: PCI: MMCONFIG for domain 0000 [bus 00-ff] at [mem 0xb0000000-0xbfffffff] (base 0xb0000000)
Jan 29 11:34:54.144692 kernel: PCI: MMCONFIG at [mem 0xb0000000-0xbfffffff] reserved as E820 entry
Jan 29 11:34:54.144703 kernel: PCI: Using configuration type 1 for base access
Jan 29 11:34:54.144713 kernel: kprobes: kprobe jump-optimization is enabled. All kprobes are optimized if possible.
Jan 29 11:34:54.144724 kernel: HugeTLB: registered 1.00 GiB page size, pre-allocated 0 pages
Jan 29 11:34:54.144734 kernel: HugeTLB: 16380 KiB vmemmap can be freed for a 1.00 GiB page
Jan 29 11:34:54.144745 kernel: HugeTLB: registered 2.00 MiB page size, pre-allocated 0 pages
Jan 29 11:34:54.144756 kernel: HugeTLB: 28 KiB vmemmap can be freed for a 2.00 MiB page
Jan 29 11:34:54.144770 kernel: ACPI: Added _OSI(Module Device)
Jan 29 11:34:54.144780 kernel: ACPI: Added _OSI(Processor Device)
Jan 29 11:34:54.144791 kernel: ACPI: Added _OSI(3.0 _SCP Extensions)
Jan 29 11:34:54.144801 kernel: ACPI: Added _OSI(Processor Aggregator Device)
Jan 29 11:34:54.144811 kernel: ACPI: 1 ACPI AML tables successfully acquired and loaded
Jan 29 11:34:54.144822 kernel: ACPI: _OSC evaluation for CPUs failed, trying _PDC
Jan 29 11:34:54.144832 kernel: ACPI: Interpreter enabled
Jan 29 11:34:54.144842 kernel: ACPI: PM: (supports S0 S3 S5)
Jan 29 11:34:54.144853 kernel: ACPI: Using IOAPIC for interrupt routing
Jan 29 11:34:54.144865 kernel: PCI: Using host bridge windows from ACPI; if necessary, use "pci=nocrs" and report a bug
Jan 29 11:34:54.144882 kernel: PCI: Using E820 reservations for host bridge windows
Jan 29 11:34:54.144896 kernel: ACPI: Enabled 2 GPEs in block 00 to 3F
Jan 29 11:34:54.144910 kernel: ACPI: PCI Root Bridge [PCI0] (domain 0000 [bus 00-ff])
Jan 29 11:34:54.145152 kernel: acpi PNP0A08:00: _OSC: OS supports [ExtendedConfig ASPM ClockPM Segments MSI HPX-Type3]
Jan 29 11:34:54.145464 kernel: acpi PNP0A08:00: _OSC: platform does not support [PCIeHotplug LTR]
Jan 29 11:34:54.145623 kernel: acpi PNP0A08:00: _OSC: OS now controls [PME AER PCIeCapability]
Jan 29 11:34:54.145638 kernel: PCI host bridge to bus 0000:00
Jan 29 11:34:54.145796 kernel: pci_bus 0000:00: root bus resource [io 0x0000-0x0cf7 window]
Jan 29 11:34:54.145936 kernel: pci_bus 0000:00: root bus resource [io 0x0d00-0xffff window]
Jan 29 11:34:54.146126 kernel: pci_bus 0000:00: root bus resource [mem 0x000a0000-0x000bffff window]
Jan 29 11:34:54.146274 kernel: pci_bus 0000:00: root bus resource [mem 0x9d000000-0xafffffff window]
Jan 29 11:34:54.146429 kernel: pci_bus 0000:00: root bus resource [mem 0xc0000000-0xfebfffff window]
Jan 29 11:34:54.146569 kernel: pci_bus 0000:00: root bus resource [mem 0x100000000-0x8ffffffff window]
Jan 29 11:34:54.146710 kernel: pci_bus 0000:00: root bus resource [bus 00-ff]
Jan 29 11:34:54.146893 kernel: pci 0000:00:00.0: [8086:29c0] type 00 class 0x060000
Jan 29 11:34:54.147098 kernel: pci 0000:00:01.0: [1234:1111] type 00 class 0x030000
Jan 29 11:34:54.147261 kernel: pci 0000:00:01.0: reg 0x10: [mem 0xfd000000-0xfdffffff pref]
Jan 29 11:34:54.147455 kernel: pci 0000:00:01.0: reg 0x18: [mem 0xfebd0000-0xfebd0fff]
Jan 29 11:34:54.147613 kernel: pci 0000:00:01.0: reg 0x30: [mem 0xfebc0000-0xfebcffff pref]
Jan 29 11:34:54.147769 kernel: pci 0000:00:01.0: Video device with shadowed ROM at [mem 0x000c0000-0x000dffff]
Jan 29 11:34:54.147945 kernel: pci 0000:00:02.0: [1af4:1005] type 00 class 0x00ff00
Jan 29 11:34:54.148117 kernel: pci 0000:00:02.0: reg 0x10: [io 0xc0c0-0xc0df]
Jan 29 11:34:54.148273 kernel: pci 0000:00:02.0: reg 0x14: [mem 0xfebd1000-0xfebd1fff]
Jan 29 11:34:54.148454 kernel: pci 0000:00:02.0: reg 0x20: [mem 0xfe000000-0xfe003fff 64bit pref]
Jan 29 11:34:54.148617 kernel: pci 0000:00:03.0: [1af4:1001] type 00 class 0x010000
Jan 29 11:34:54.148778 kernel: pci 0000:00:03.0: reg 0x10: [io 0xc000-0xc07f]
Jan 29 11:34:54.148967 kernel: pci 0000:00:03.0: reg 0x14: [mem 0xfebd2000-0xfebd2fff]
Jan 29 11:34:54.149141 kernel: pci 0000:00:03.0: reg 0x20: [mem 0xfe004000-0xfe007fff 64bit pref]
Jan 29 11:34:54.149326 kernel: pci 0000:00:04.0: [1af4:1000] type 00 class 0x020000
Jan 29 11:34:54.149482 kernel: pci 0000:00:04.0: reg 0x10: [io 0xc0e0-0xc0ff]
Jan 29 11:34:54.149656 kernel: pci 0000:00:04.0: reg 0x14: [mem 0xfebd3000-0xfebd3fff]
Jan 29 11:34:54.149813 kernel: pci 0000:00:04.0: reg 0x20: [mem 0xfe008000-0xfe00bfff 64bit pref]
Jan 29 11:34:54.149980 kernel: pci 0000:00:04.0: reg 0x30: [mem 0xfeb80000-0xfebbffff pref]
Jan 29 11:34:54.150170 kernel: pci 0000:00:1f.0: [8086:2918] type 00 class 0x060100
Jan 29 11:34:54.150334 kernel: pci 0000:00:1f.0: quirk: [io 0x0600-0x067f] claimed by ICH6 ACPI/GPIO/TCO
Jan 29 11:34:54.150486 kernel: pci 0000:00:1f.2: [8086:2922] type 00 class 0x010601
Jan 29 11:34:54.150626 kernel: pci 0000:00:1f.2: reg 0x20: [io 0xc100-0xc11f]
Jan 29 11:34:54.150765 kernel: pci 0000:00:1f.2: reg 0x24: [mem 0xfebd4000-0xfebd4fff]
Jan 29 11:34:54.150913 kernel: pci 0000:00:1f.3: [8086:2930] type 00 class 0x0c0500
Jan 29 11:34:54.151062 kernel: pci 0000:00:1f.3: reg 0x20: [io 0x0700-0x073f]
Jan 29 11:34:54.151082 kernel: ACPI: PCI: Interrupt link LNKA configured for IRQ 10
Jan 29 11:34:54.151092 kernel: ACPI: PCI: Interrupt link LNKB configured for IRQ 10
Jan 29 11:34:54.151103 kernel: ACPI: PCI: Interrupt link LNKC configured for IRQ 11
Jan 29 11:34:54.151113 kernel: ACPI: PCI: Interrupt link LNKD configured for IRQ 11
Jan 29 11:34:54.151123 kernel: ACPI: PCI: Interrupt link LNKE configured for IRQ 10
Jan 29 11:34:54.151133 kernel: ACPI: PCI: Interrupt link LNKF configured for IRQ 10
Jan 29 11:34:54.151144 kernel: ACPI: PCI: Interrupt link LNKG configured for IRQ 11
Jan 29 11:34:54.151154 kernel: ACPI: PCI: Interrupt link LNKH configured for IRQ 11
Jan 29 11:34:54.151164 kernel: ACPI: PCI: Interrupt link GSIA configured for IRQ 16
Jan 29 11:34:54.151177 kernel: ACPI: PCI: Interrupt link GSIB configured for IRQ 17
Jan 29 11:34:54.151187 kernel: ACPI: PCI: Interrupt link GSIC configured for IRQ 18
Jan 29 11:34:54.151197 kernel: ACPI: PCI: Interrupt link GSID configured for IRQ 19
Jan 29 11:34:54.151207 kernel: ACPI: PCI: Interrupt link GSIE configured for IRQ 20
Jan 29 11:34:54.151217 kernel: ACPI: PCI: Interrupt link GSIF configured for IRQ 21
Jan 29 11:34:54.151227 kernel: ACPI: PCI: Interrupt link GSIG configured for IRQ 22
Jan 29 11:34:54.151238 kernel: ACPI: PCI: Interrupt link GSIH configured for IRQ 23
Jan 29 11:34:54.151248 kernel: iommu: Default domain type: Translated
Jan 29 11:34:54.151257 kernel: iommu: DMA domain TLB invalidation policy: lazy mode
Jan 29 11:34:54.151270 kernel: PCI: Using ACPI for IRQ routing
Jan 29 11:34:54.151280 kernel: PCI: pci_cache_line_size set to 64 bytes
Jan 29 11:34:54.151302 kernel: e820: reserve RAM buffer [mem 0x0009fc00-0x0009ffff]
Jan 29 11:34:54.151312 kernel: e820: reserve RAM buffer [mem 0x9cfdc000-0x9fffffff]
Jan 29 11:34:54.151459 kernel: pci 0000:00:01.0: vgaarb: setting as boot VGA device
Jan 29 11:34:54.151602 kernel: pci 0000:00:01.0: vgaarb: bridge control possible
Jan 29 11:34:54.151745 kernel: pci 0000:00:01.0: vgaarb: VGA device added: decodes=io+mem,owns=io+mem,locks=none
Jan 29 11:34:54.151759 kernel: vgaarb: loaded
Jan 29 11:34:54.151770 kernel: hpet0: at MMIO 0xfed00000, IRQs 2, 8, 0
Jan 29 11:34:54.151784 kernel: hpet0: 3 comparators, 64-bit 100.000000 MHz counter
Jan 29 11:34:54.151795 kernel: clocksource: Switched to clocksource kvm-clock
Jan 29 11:34:54.151805 kernel: VFS: Disk quotas dquot_6.6.0
Jan 29 11:34:54.151816 kernel: VFS: Dquot-cache hash table entries: 512 (order 0, 4096 bytes)
Jan 29 11:34:54.151826 kernel: pnp: PnP ACPI init
Jan 29 11:34:54.151996 kernel: system 00:05: [mem 0xb0000000-0xbfffffff window] has been reserved
Jan 29 11:34:54.152013 kernel: pnp: PnP ACPI: found 6 devices
Jan 29 11:34:54.152024 kernel: clocksource: acpi_pm: mask: 0xffffff max_cycles: 0xffffff, max_idle_ns: 2085701024 ns
Jan 29 11:34:54.152038 kernel: NET: Registered PF_INET protocol family
Jan 29 11:34:54.152048 kernel: IP idents hash table entries: 65536 (order: 7, 524288 bytes, linear)
Jan 29 11:34:54.152059 kernel: tcp_listen_portaddr_hash hash table entries: 2048 (order: 3, 32768 bytes, linear)
Jan 29 11:34:54.152069 kernel: Table-perturb hash table entries: 65536 (order: 6, 262144 bytes, linear)
Jan 29 11:34:54.152080 kernel: TCP established hash table entries: 32768 (order: 6, 262144 bytes, linear)
Jan 29 11:34:54.152090 kernel: TCP bind hash table entries: 32768 (order: 8, 1048576 bytes, linear)
Jan 29 11:34:54.152101 kernel: TCP: Hash tables configured (established 32768 bind 32768)
Jan 29 11:34:54.152111 kernel: UDP hash table entries: 2048 (order: 4, 65536 bytes, linear)
Jan 29 11:34:54.152124 kernel: UDP-Lite hash table entries: 2048 (order: 4, 65536 bytes, linear)
Jan 29 11:34:54.152135 kernel: NET: Registered PF_UNIX/PF_LOCAL protocol family
Jan 29 11:34:54.152145 kernel: NET: Registered PF_XDP protocol family
Jan 29 11:34:54.152282 kernel: pci_bus 0000:00: resource 4 [io 0x0000-0x0cf7 window]
Jan 29 11:34:54.152431 kernel: pci_bus 0000:00: resource 5 [io 0x0d00-0xffff window]
Jan 29 11:34:54.152565 kernel: pci_bus 0000:00: resource 6 [mem 0x000a0000-0x000bffff window]
Jan 29 11:34:54.152700 kernel: pci_bus 0000:00: resource 7 [mem 0x9d000000-0xafffffff window]
Jan 29 11:34:54.152836 kernel: pci_bus 0000:00: resource 8 [mem 0xc0000000-0xfebfffff window]
Jan 29 11:34:54.153007 kernel: pci_bus 0000:00: resource 9 [mem 0x100000000-0x8ffffffff window]
Jan 29 11:34:54.153030 kernel: PCI: CLS 0 bytes, default 64
Jan 29 11:34:54.153041 kernel: Initialise system trusted keyrings
Jan 29 11:34:54.153051 kernel: workingset: timestamp_bits=39 max_order=20 bucket_order=0
Jan 29 11:34:54.153062 kernel: Key type asymmetric registered
Jan 29 11:34:54.153073 kernel: Asymmetric key parser 'x509' registered
Jan 29 11:34:54.153083 kernel: Block layer SCSI generic (bsg) driver version 0.4 loaded (major 251)
Jan 29 11:34:54.153094 kernel: io scheduler mq-deadline registered
Jan 29 11:34:54.153105 kernel: io scheduler kyber registered
Jan 29 11:34:54.153115 kernel: io scheduler bfq registered
Jan 29 11:34:54.153129 kernel: ioatdma: Intel(R) QuickData Technology Driver 5.00
Jan 29 11:34:54.153141 kernel: ACPI: \_SB_.GSIG: Enabled at IRQ 22
Jan 29 11:34:54.153152 kernel: ACPI: \_SB_.GSIH: Enabled at IRQ 23
Jan 29 11:34:54.153163 kernel: ACPI: \_SB_.GSIE: Enabled at IRQ 20
Jan 29 11:34:54.153174 kernel: Serial: 8250/16550 driver, 4 ports, IRQ sharing enabled
Jan 29 11:34:54.153185 kernel: 00:03: ttyS0 at I/O 0x3f8 (irq = 4, base_baud = 115200) is a 16550A
Jan 29 11:34:54.153196 kernel: i8042: PNP: PS/2 Controller [PNP0303:KBD,PNP0f13:MOU] at 0x60,0x64 irq 1,12
Jan 29 11:34:54.153207 kernel: serio: i8042 KBD port at 0x60,0x64 irq 1
Jan 29 11:34:54.153217 kernel: serio: i8042 AUX port at 0x60,0x64 irq 12
Jan 29 11:34:54.153231 kernel: input: AT Translated Set 2 keyboard as /devices/platform/i8042/serio0/input/input0
Jan 29 11:34:54.153400 kernel: rtc_cmos 00:04: RTC can wake from S4
Jan 29 11:34:54.153543 kernel: rtc_cmos 00:04: registered as rtc0
Jan 29 11:34:54.153684 kernel: rtc_cmos 00:04: setting system clock to 2025-01-29T11:34:53 UTC (1738150493)
Jan 29 11:34:54.153824 kernel: rtc_cmos 00:04: alarms up to one day, y3k, 242 bytes nvram, hpet irqs
Jan 29 11:34:54.153838 kernel: amd_pstate: the _CPC object is not present in SBIOS or ACPI disabled
Jan 29 11:34:54.153850 kernel: NET: Registered PF_INET6 protocol family
Jan 29 11:34:54.153861 kernel: Segment Routing with IPv6
Jan 29 11:34:54.153876 kernel: In-situ OAM (IOAM) with IPv6
Jan 29 11:34:54.153887 kernel: NET: Registered PF_PACKET protocol family
Jan 29 11:34:54.153897 kernel: Key type dns_resolver registered
Jan 29 11:34:54.153908 kernel: IPI shorthand broadcast: enabled
Jan 29 11:34:54.153919 kernel: sched_clock: Marking stable (765003770, 115602564)->(950645570, -70039236)
Jan 29 11:34:54.153929 kernel: registered taskstats version 1
Jan 29 11:34:54.153940 kernel: Loading compiled-in X.509 certificates
Jan 29 11:34:54.153963 kernel: Loaded X.509 cert 'Kinvolk GmbH: Module signing key for 6.6.74-flatcar: de92a621108c58f5771c86c5c3ccb1aa0728ed55'
Jan 29 11:34:54.153976 kernel: Key type .fscrypt registered
Jan 29 11:34:54.153993 kernel: Key type fscrypt-provisioning registered
Jan 29 11:34:54.154007 kernel: ima: No TPM chip found, activating TPM-bypass!
Jan 29 11:34:54.154020 kernel: ima: Allocated hash algorithm: sha1
Jan 29 11:34:54.154034 kernel: ima: No architecture policies found
Jan 29 11:34:54.154047 kernel: clk: Disabling unused clocks
Jan 29 11:34:54.154060 kernel: Freeing unused kernel image (initmem) memory: 42972K
Jan 29 11:34:54.154074 kernel: Write protecting the kernel read-only data: 36864k
Jan 29 11:34:54.154087 kernel: Freeing unused kernel image (rodata/data gap) memory: 1840K
Jan 29 11:34:54.154101 kernel: Run /init as init process
Jan 29 11:34:54.154118 kernel:   with arguments:
Jan 29 11:34:54.154132 kernel:     /init
Jan 29 11:34:54.154145 kernel:   with environment:
Jan 29 11:34:54.154158 kernel:     HOME=/
Jan 29 11:34:54.154171 kernel:     TERM=linux
Jan 29 11:34:54.154184 kernel:     BOOT_IMAGE=/flatcar/vmlinuz-a
Jan 29 11:34:54.154197 systemd[1]: systemd 255 running in system mode (+PAM +AUDIT +SELINUX -APPARMOR +IMA +SMACK +SECCOMP +GCRYPT -GNUTLS +OPENSSL -ACL +BLKID +CURL +ELFUTILS -FIDO2 +IDN2 -IDN +IPTC +KMOD +LIBCRYPTSETUP +LIBFDISK +PCRE2 -PWQUALITY -P11KIT -QRENCODE +TPM2 +BZIP2 +LZ4 +XZ +ZLIB +ZSTD -BPF_FRAMEWORK -XKBCOMMON +UTMP -SYSVINIT default-hierarchy=unified)
Jan 29 11:34:54.154211 systemd[1]: Detected virtualization kvm.
Jan 29 11:34:54.154226 systemd[1]: Detected architecture x86-64.
Jan 29 11:34:54.154237 systemd[1]: Running in initrd.
Jan 29 11:34:54.154248 systemd[1]: No hostname configured, using default hostname.
Jan 29 11:34:54.154260 systemd[1]: Hostname set to .
Jan 29 11:34:54.154272 systemd[1]: Initializing machine ID from VM UUID.
Jan 29 11:34:54.154283 systemd[1]: Queued start job for default target initrd.target.
Jan 29 11:34:54.154307 systemd[1]: Started clevis-luks-askpass.path - Forward Password Requests to Clevis Directory Watch.
Jan 29 11:34:54.154318 systemd[1]: Started systemd-ask-password-console.path - Dispatch Password Requests to Console Directory Watch.
Jan 29 11:34:54.154335 systemd[1]: Expecting device dev-disk-by\x2dlabel-EFI\x2dSYSTEM.device - /dev/disk/by-label/EFI-SYSTEM...
Jan 29 11:34:54.154360 systemd[1]: Expecting device dev-disk-by\x2dlabel-OEM.device - /dev/disk/by-label/OEM...
Jan 29 11:34:54.154375 systemd[1]: Expecting device dev-disk-by\x2dlabel-ROOT.device - /dev/disk/by-label/ROOT...
Jan 29 11:34:54.154389 systemd[1]: Expecting device dev-disk-by\x2dpartlabel-USR\x2dA.device - /dev/disk/by-partlabel/USR-A...
Jan 29 11:34:54.154404 systemd[1]: Expecting device dev-disk-by\x2dpartuuid-7130c94a\x2d213a\x2d4e5a\x2d8e26\x2d6cce9662f132.device - /dev/disk/by-partuuid/7130c94a-213a-4e5a-8e26-6cce9662f132...
Jan 29 11:34:54.154418 systemd[1]: Expecting device dev-mapper-usr.device - /dev/mapper/usr...
Jan 29 11:34:54.154430 systemd[1]: Reached target cryptsetup-pre.target - Local Encrypted Volumes (Pre).
Jan 29 11:34:54.154442 systemd[1]: Reached target cryptsetup.target - Local Encrypted Volumes.
Jan 29 11:34:54.154454 systemd[1]: Reached target paths.target - Path Units.
Jan 29 11:34:54.154466 systemd[1]: Reached target slices.target - Slice Units.
Jan 29 11:34:54.154478 systemd[1]: Reached target swap.target - Swaps.
Jan 29 11:34:54.154490 systemd[1]: Reached target timers.target - Timer Units.
Jan 29 11:34:54.154502 systemd[1]: Listening on iscsid.socket - Open-iSCSI iscsid Socket.
Jan 29 11:34:54.154517 systemd[1]: Listening on iscsiuio.socket - Open-iSCSI iscsiuio Socket.
Jan 29 11:34:54.154528 systemd[1]: Listening on systemd-journald-dev-log.socket - Journal Socket (/dev/log).
Jan 29 11:34:54.154541 systemd[1]: Listening on systemd-journald.socket - Journal Socket.
Jan 29 11:34:54.154552 systemd[1]: Listening on systemd-networkd.socket - Network Service Netlink Socket.
Jan 29 11:34:54.154565 systemd[1]: Listening on systemd-udevd-control.socket - udev Control Socket.
Jan 29 11:34:54.154577 systemd[1]: Listening on systemd-udevd-kernel.socket - udev Kernel Socket.
Jan 29 11:34:54.154588 systemd[1]: Reached target sockets.target - Socket Units. Jan 29 11:34:54.154601 systemd[1]: Starting ignition-setup-pre.service - Ignition env setup... Jan 29 11:34:54.154616 systemd[1]: Starting kmod-static-nodes.service - Create List of Static Device Nodes... Jan 29 11:34:54.154628 systemd[1]: Finished network-cleanup.service - Network Cleanup. Jan 29 11:34:54.154640 systemd[1]: Starting systemd-fsck-usr.service... Jan 29 11:34:54.154652 systemd[1]: Starting systemd-journald.service - Journal Service... Jan 29 11:34:54.154663 systemd[1]: Starting systemd-modules-load.service - Load Kernel Modules... Jan 29 11:34:54.154700 systemd-journald[193]: Collecting audit messages is disabled. Jan 29 11:34:54.154731 systemd[1]: Starting systemd-vconsole-setup.service - Virtual Console Setup... Jan 29 11:34:54.154744 systemd[1]: Finished ignition-setup-pre.service - Ignition env setup. Jan 29 11:34:54.154756 systemd[1]: Finished kmod-static-nodes.service - Create List of Static Device Nodes. Jan 29 11:34:54.154768 systemd[1]: Finished systemd-fsck-usr.service. Jan 29 11:34:54.154784 systemd-journald[193]: Journal started Jan 29 11:34:54.154810 systemd-journald[193]: Runtime Journal (/run/log/journal/9ae2e4bc549c471bbcb451d347c48c59) is 6.0M, max 48.4M, 42.3M free. Jan 29 11:34:54.171583 systemd[1]: Starting systemd-tmpfiles-setup-dev-early.service - Create Static Device Nodes in /dev gracefully... Jan 29 11:34:54.174939 systemd[1]: Started systemd-journald.service - Journal Service. Jan 29 11:34:54.179441 systemd-modules-load[194]: Inserted module 'overlay' Jan 29 11:34:54.188362 systemd[1]: Starting systemd-tmpfiles-setup.service - Create System Files and Directories... Jan 29 11:34:54.228981 systemd[1]: Finished systemd-tmpfiles-setup-dev-early.service - Create Static Device Nodes in /dev gracefully. Jan 29 11:34:54.239326 kernel: bridge: filtering via arp/ip/ip6tables is no longer available by default. 
Update your scripts to load br_netfilter if you need this. Jan 29 11:34:54.245314 kernel: Bridge firewalling registered Jan 29 11:34:54.245275 systemd-modules-load[194]: Inserted module 'br_netfilter' Jan 29 11:34:54.246587 systemd[1]: Starting systemd-tmpfiles-setup-dev.service - Create Static Device Nodes in /dev... Jan 29 11:34:54.248263 systemd[1]: Finished systemd-modules-load.service - Load Kernel Modules. Jan 29 11:34:54.251463 systemd[1]: Finished systemd-vconsole-setup.service - Virtual Console Setup. Jan 29 11:34:54.260110 systemd[1]: Starting dracut-cmdline-ask.service - dracut ask for additional cmdline parameters... Jan 29 11:34:54.269164 systemd[1]: Starting systemd-sysctl.service - Apply Kernel Variables... Jan 29 11:34:54.272133 systemd[1]: Finished systemd-tmpfiles-setup.service - Create System Files and Directories. Jan 29 11:34:54.275363 systemd[1]: Finished systemd-tmpfiles-setup-dev.service - Create Static Device Nodes in /dev. Jan 29 11:34:54.286799 systemd[1]: Finished dracut-cmdline-ask.service - dracut ask for additional cmdline parameters. Jan 29 11:34:54.290168 systemd[1]: Finished systemd-sysctl.service - Apply Kernel Variables. Jan 29 11:34:54.302503 systemd[1]: Starting dracut-cmdline.service - dracut cmdline hook... Jan 29 11:34:54.308310 systemd[1]: Starting systemd-resolved.service - Network Name Resolution... Jan 29 11:34:54.322363 dracut-cmdline[229]: dracut-dracut-053 Jan 29 11:34:54.328843 dracut-cmdline[229]: Using kernel command line parameters: rd.driver.pre=btrfs rootflags=rw mount.usrflags=ro BOOT_IMAGE=/flatcar/vmlinuz-a mount.usr=/dev/mapper/usr verity.usr=PARTUUID=7130c94a-213a-4e5a-8e26-6cce9662f132 rootflags=rw mount.usrflags=ro consoleblank=0 root=LABEL=ROOT console=ttyS0,115200 flatcar.first_boot=detected verity.usrhash=519b8fded83181f8e61f734d5291f916d7548bfba9487c78bcb50d002d81719d Jan 29 11:34:54.362226 systemd-resolved[231]: Positive Trust Anchors: Jan 29 11:34:54.362245 systemd-resolved[231]: . 
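[Editor's note] The bridge warning above is the kernel telling administrators that bridged traffic no longer traverses arp/ip/ip6tables unless br_netfilter is loaded explicitly (here systemd-modules-load inserted it). On a live host that is `modprobe br_netfilter`; persisting it uses systemd's modules-load.d mechanism. A runnable sketch that writes the drop-in under a scratch directory instead of the real `/etc/modules-load.d` (the scratch path is hypothetical, so no root is needed):

```shell
# systemd-modules-load reads /etc/modules-load.d/*.conf at boot, one module
# name per line. Writing to a temp dir here so the sketch runs unprivileged;
# on a real host the target directory would be /etc/modules-load.d/.
DEMO_ROOT="$(mktemp -d)"
mkdir -p "$DEMO_ROOT/etc/modules-load.d"
printf 'br_netfilter\n' > "$DEMO_ROOT/etc/modules-load.d/br_netfilter.conf"
cat "$DEMO_ROOT/etc/modules-load.d/br_netfilter.conf"
```

On the next boot, systemd-modules-load.service would insert the module before any bridge filtering rules are needed.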
IN DS 20326 8 2 e06d44b80b8f1d39a95c0b0d7c65d08458e880409bbc683457104237c7f8ec8d Jan 29 11:34:54.362288 systemd-resolved[231]: Negative trust anchors: home.arpa 10.in-addr.arpa 16.172.in-addr.arpa 17.172.in-addr.arpa 18.172.in-addr.arpa 19.172.in-addr.arpa 20.172.in-addr.arpa 21.172.in-addr.arpa 22.172.in-addr.arpa 23.172.in-addr.arpa 24.172.in-addr.arpa 25.172.in-addr.arpa 26.172.in-addr.arpa 27.172.in-addr.arpa 28.172.in-addr.arpa 29.172.in-addr.arpa 30.172.in-addr.arpa 31.172.in-addr.arpa 170.0.0.192.in-addr.arpa 171.0.0.192.in-addr.arpa 168.192.in-addr.arpa d.f.ip6.arpa ipv4only.arpa resolver.arpa corp home internal intranet lan local private test Jan 29 11:34:54.365502 systemd-resolved[231]: Defaulting to hostname 'linux'. Jan 29 11:34:54.366713 systemd[1]: Started systemd-resolved.service - Network Name Resolution. Jan 29 11:34:54.377206 systemd[1]: Reached target nss-lookup.target - Host and Network Name Lookups. Jan 29 11:34:54.502360 kernel: SCSI subsystem initialized Jan 29 11:34:54.517514 kernel: Loading iSCSI transport class v2.0-870. Jan 29 11:34:54.543486 kernel: iscsi: registered transport (tcp) Jan 29 11:34:54.577584 kernel: iscsi: registered transport (qla4xxx) Jan 29 11:34:54.577671 kernel: QLogic iSCSI HBA Driver Jan 29 11:34:54.667322 systemd[1]: Finished dracut-cmdline.service - dracut cmdline hook. Jan 29 11:34:54.697592 systemd[1]: Starting dracut-pre-udev.service - dracut pre-udev hook... Jan 29 11:34:54.742345 kernel: device-mapper: core: CONFIG_IMA_DISABLE_HTABLE is disabled. Duplicate IMA measurements will not be recorded in the IMA log. 
Jan 29 11:34:54.742427 kernel: device-mapper: uevent: version 1.0.3 Jan 29 11:34:54.742445 kernel: device-mapper: ioctl: 4.48.0-ioctl (2023-03-01) initialised: dm-devel@redhat.com Jan 29 11:34:54.800327 kernel: raid6: avx2x4 gen() 30073 MB/s Jan 29 11:34:54.817356 kernel: raid6: avx2x2 gen() 30757 MB/s Jan 29 11:34:54.849545 kernel: raid6: avx2x1 gen() 22530 MB/s Jan 29 11:34:54.849617 kernel: raid6: using algorithm avx2x2 gen() 30757 MB/s Jan 29 11:34:54.867524 kernel: raid6: .... xor() 19121 MB/s, rmw enabled Jan 29 11:34:54.867619 kernel: raid6: using avx2x2 recovery algorithm Jan 29 11:34:54.889329 kernel: xor: automatically using best checksumming function avx Jan 29 11:34:55.056328 kernel: Btrfs loaded, zoned=no, fsverity=no Jan 29 11:34:55.068716 systemd[1]: Finished dracut-pre-udev.service - dracut pre-udev hook. Jan 29 11:34:55.077454 systemd[1]: Starting systemd-udevd.service - Rule-based Manager for Device Events and Files... Jan 29 11:34:55.089046 systemd-udevd[415]: Using default interface naming scheme 'v255'. Jan 29 11:34:55.093310 systemd[1]: Started systemd-udevd.service - Rule-based Manager for Device Events and Files. Jan 29 11:34:55.094848 systemd[1]: Starting dracut-pre-trigger.service - dracut pre-trigger hook... Jan 29 11:34:55.112697 dracut-pre-trigger[420]: rd.md=0: removing MD RAID activation Jan 29 11:34:55.146129 systemd[1]: Finished dracut-pre-trigger.service - dracut pre-trigger hook. Jan 29 11:34:55.174435 systemd[1]: Starting systemd-udev-trigger.service - Coldplug All udev Devices... Jan 29 11:34:55.238527 systemd[1]: Finished systemd-udev-trigger.service - Coldplug All udev Devices. Jan 29 11:34:55.250446 systemd[1]: Starting dracut-initqueue.service - dracut initqueue hook... Jan 29 11:34:55.263772 systemd[1]: Finished dracut-initqueue.service - dracut initqueue hook. Jan 29 11:34:55.267092 systemd[1]: Reached target remote-fs-pre.target - Preparation for Remote File Systems. 
Jan 29 11:34:55.270479 systemd[1]: Reached target remote-cryptsetup.target - Remote Encrypted Volumes. Jan 29 11:34:55.273596 systemd[1]: Reached target remote-fs.target - Remote File Systems. Jan 29 11:34:55.283491 kernel: virtio_blk virtio1: 4/0/0 default/read/poll queues Jan 29 11:34:55.300933 kernel: virtio_blk virtio1: [vda] 19775488 512-byte logical blocks (10.1 GB/9.43 GiB) Jan 29 11:34:55.301122 kernel: cryptd: max_cpu_qlen set to 1000 Jan 29 11:34:55.301139 kernel: GPT:Primary header thinks Alt. header is not at the end of the disk. Jan 29 11:34:55.301154 kernel: GPT:9289727 != 19775487 Jan 29 11:34:55.301177 kernel: GPT:Alternate GPT header not at the end of the disk. Jan 29 11:34:55.301191 kernel: GPT:9289727 != 19775487 Jan 29 11:34:55.301204 kernel: GPT: Use GNU Parted to correct GPT errors. Jan 29 11:34:55.301217 kernel: vda: vda1 vda2 vda3 vda4 vda6 vda7 vda9 Jan 29 11:34:55.284482 systemd[1]: Starting dracut-pre-mount.service - dracut pre-mount hook... Jan 29 11:34:55.303885 systemd[1]: dracut-cmdline-ask.service: Deactivated successfully. Jan 29 11:34:55.313399 kernel: AVX2 version of gcm_enc/dec engaged. Jan 29 11:34:55.313421 kernel: AES CTR mode by8 optimization enabled Jan 29 11:34:55.304017 systemd[1]: Stopped dracut-cmdline-ask.service - dracut ask for additional cmdline parameters. Jan 29 11:34:55.306045 systemd[1]: Stopping dracut-cmdline-ask.service - dracut ask for additional cmdline parameters... Jan 29 11:34:55.307607 systemd[1]: systemd-vconsole-setup.service: Deactivated successfully. Jan 29 11:34:55.307753 systemd[1]: Stopped systemd-vconsole-setup.service - Virtual Console Setup. Jan 29 11:34:55.325512 kernel: BTRFS: device label OEM devid 1 transid 14 /dev/vda6 scanned by (udev-worker) (468) Jan 29 11:34:55.309173 systemd[1]: Stopping systemd-vconsole-setup.service - Virtual Console Setup... Jan 29 11:34:55.322566 systemd[1]: Starting systemd-vconsole-setup.service - Virtual Console Setup... 
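[Editor's note] The `GPT:9289727 != 19775487` complaints above are the classic grown-disk-image signature: the primary GPT header still records the backup header at the LBA where the original, smaller image ended, while the resized virtio disk now ends later. The disk-uuid.service seen later rewrites the table; done by hand it would be roughly `sgdisk -e /dev/vda` or answering parted's "Fix" prompt (hedged: the exact tool depends on what the image ships). The mismatch itself is plain arithmetic on the numbers in the log:

```shell
# A GPT backup header belongs at the disk's last LBA (sector count - 1).
# Values taken verbatim from the log lines above:
disk_sectors=19775488          # "[vda] 19775488 512-byte logical blocks"
recorded_alt_lba=9289727       # "GPT:9289727 != 19775487"
expected_alt_lba=$((disk_sectors - 1))
echo "expected=$expected_alt_lba recorded=$recorded_alt_lba"
```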
Jan 29 11:34:55.326272 systemd[1]: Finished dracut-pre-mount.service - dracut pre-mount hook. Jan 29 11:34:55.338327 kernel: BTRFS: device fsid 5ba3c9ea-61f2-4fe6-a507-2966757f6d44 devid 1 transid 38 /dev/vda3 scanned by (udev-worker) (476) Jan 29 11:34:55.347354 kernel: libata version 3.00 loaded. Jan 29 11:34:55.352337 kernel: ahci 0000:00:1f.2: version 3.0 Jan 29 11:34:55.361951 kernel: ACPI: \_SB_.GSIA: Enabled at IRQ 16 Jan 29 11:34:55.361971 kernel: ahci 0000:00:1f.2: AHCI 0001.0000 32 slots 6 ports 1.5 Gbps 0x3f impl SATA mode Jan 29 11:34:55.362160 kernel: ahci 0000:00:1f.2: flags: 64bit ncq only Jan 29 11:34:55.362564 kernel: scsi host0: ahci Jan 29 11:34:55.362803 kernel: scsi host1: ahci Jan 29 11:34:55.363010 kernel: scsi host2: ahci Jan 29 11:34:55.363207 kernel: scsi host3: ahci Jan 29 11:34:55.363404 kernel: scsi host4: ahci Jan 29 11:34:55.363550 kernel: scsi host5: ahci Jan 29 11:34:55.363695 kernel: ata1: SATA max UDMA/133 abar m4096@0xfebd4000 port 0xfebd4100 irq 34 Jan 29 11:34:55.363707 kernel: ata2: SATA max UDMA/133 abar m4096@0xfebd4000 port 0xfebd4180 irq 34 Jan 29 11:34:55.363717 kernel: ata3: SATA max UDMA/133 abar m4096@0xfebd4000 port 0xfebd4200 irq 34 Jan 29 11:34:55.363727 kernel: ata4: SATA max UDMA/133 abar m4096@0xfebd4000 port 0xfebd4280 irq 34 Jan 29 11:34:55.363744 kernel: ata5: SATA max UDMA/133 abar m4096@0xfebd4000 port 0xfebd4300 irq 34 Jan 29 11:34:55.363754 kernel: ata6: SATA max UDMA/133 abar m4096@0xfebd4000 port 0xfebd4380 irq 34 Jan 29 11:34:55.365751 systemd[1]: Found device dev-disk-by\x2dlabel-ROOT.device - /dev/disk/by-label/ROOT. Jan 29 11:34:55.394023 systemd[1]: Found device dev-disk-by\x2dlabel-EFI\x2dSYSTEM.device - /dev/disk/by-label/EFI-SYSTEM. Jan 29 11:34:55.396819 systemd[1]: Finished systemd-vconsole-setup.service - Virtual Console Setup. Jan 29 11:34:55.403276 systemd[1]: Found device dev-disk-by\x2dlabel-OEM.device - /dev/disk/by-label/OEM. 
Jan 29 11:34:55.407628 systemd[1]: Found device dev-disk-by\x2dpartlabel-USR\x2dA.device - /dev/disk/by-partlabel/USR-A. Jan 29 11:34:55.436756 systemd[1]: Found device dev-disk-by\x2dpartuuid-7130c94a\x2d213a\x2d4e5a\x2d8e26\x2d6cce9662f132.device - /dev/disk/by-partuuid/7130c94a-213a-4e5a-8e26-6cce9662f132. Jan 29 11:34:55.449430 systemd[1]: Starting disk-uuid.service - Generate new UUID for disk GPT if necessary... Jan 29 11:34:55.492713 systemd[1]: Starting dracut-cmdline-ask.service - dracut ask for additional cmdline parameters... Jan 29 11:34:55.516584 systemd[1]: Finished dracut-cmdline-ask.service - dracut ask for additional cmdline parameters. Jan 29 11:34:55.692332 kernel: ata1: SATA link down (SStatus 0 SControl 300) Jan 29 11:34:55.692408 kernel: ata2: SATA link down (SStatus 0 SControl 300) Jan 29 11:34:55.693323 kernel: ata3: SATA link up 1.5 Gbps (SStatus 113 SControl 300) Jan 29 11:34:55.694320 kernel: ata6: SATA link down (SStatus 0 SControl 300) Jan 29 11:34:55.694347 kernel: ata4: SATA link down (SStatus 0 SControl 300) Jan 29 11:34:55.695316 kernel: ata3.00: ATAPI: QEMU DVD-ROM, 2.5+, max UDMA/100 Jan 29 11:34:55.696318 kernel: ata3.00: applying bridge limits Jan 29 11:34:55.697336 kernel: ata5: SATA link down (SStatus 0 SControl 300) Jan 29 11:34:55.697359 kernel: ata3.00: configured for UDMA/100 Jan 29 11:34:55.698317 kernel: scsi 2:0:0:0: CD-ROM QEMU QEMU DVD-ROM 2.5+ PQ: 0 ANSI: 5 Jan 29 11:34:55.760692 kernel: sr 2:0:0:0: [sr0] scsi3-mmc drive: 4x/4x cd/rw xa/form2 tray Jan 29 11:34:55.772957 kernel: cdrom: Uniform CD-ROM driver Revision: 3.20 Jan 29 11:34:55.772973 kernel: sr 2:0:0:0: Attached scsi CD-ROM sr0 Jan 29 11:34:55.921732 disk-uuid[570]: Primary Header is updated. Jan 29 11:34:55.921732 disk-uuid[570]: Secondary Entries is updated. Jan 29 11:34:55.921732 disk-uuid[570]: Secondary Header is updated. 
Jan 29 11:34:55.925942 kernel: vda: vda1 vda2 vda3 vda4 vda6 vda7 vda9 Jan 29 11:34:55.984341 kernel: vda: vda1 vda2 vda3 vda4 vda6 vda7 vda9 Jan 29 11:34:56.933975 disk-uuid[583]: The operation has completed successfully. Jan 29 11:34:56.935719 kernel: vda: vda1 vda2 vda3 vda4 vda6 vda7 vda9 Jan 29 11:34:56.962633 systemd[1]: disk-uuid.service: Deactivated successfully. Jan 29 11:34:56.962748 systemd[1]: Finished disk-uuid.service - Generate new UUID for disk GPT if necessary. Jan 29 11:34:56.986471 systemd[1]: Starting verity-setup.service - Verity Setup for /dev/mapper/usr... Jan 29 11:34:56.989383 sh[596]: Success Jan 29 11:34:57.001336 kernel: device-mapper: verity: sha256 using implementation "sha256-ni" Jan 29 11:34:57.032886 systemd[1]: Found device dev-mapper-usr.device - /dev/mapper/usr. Jan 29 11:34:57.066103 systemd[1]: Mounting sysusr-usr.mount - /sysusr/usr... Jan 29 11:34:57.068187 systemd[1]: Finished verity-setup.service - Verity Setup for /dev/mapper/usr. Jan 29 11:34:57.145892 kernel: BTRFS info (device dm-0): first mount of filesystem 5ba3c9ea-61f2-4fe6-a507-2966757f6d44 Jan 29 11:34:57.145924 kernel: BTRFS info (device dm-0): using crc32c (crc32c-intel) checksum algorithm Jan 29 11:34:57.145935 kernel: BTRFS warning (device dm-0): 'nologreplay' is deprecated, use 'rescue=nologreplay' instead Jan 29 11:34:57.147041 kernel: BTRFS info (device dm-0): disabling log replay at mount time Jan 29 11:34:57.147886 kernel: BTRFS info (device dm-0): using free space tree Jan 29 11:34:57.152849 systemd[1]: Mounted sysusr-usr.mount - /sysusr/usr. Jan 29 11:34:57.155361 systemd[1]: afterburn-network-kargs.service - Afterburn Initrd Setup Network Kernel Arguments was skipped because no trigger condition checks were met. Jan 29 11:34:57.172442 systemd[1]: Starting ignition-setup.service - Ignition (setup)... Jan 29 11:34:57.175635 systemd[1]: Starting parse-ip-for-networkd.service - Write systemd-networkd units from cmdline... 
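[Editor's note] verity-setup.service above assembles `/dev/mapper/usr` from the `verity.usrhash=` root hash on the kernel command line (visible at the top of this log). A hedged sketch of how an initrd script might extract that hash; Flatcar's actual bootengine logic is more involved, and the `cmdline` variable below just reuses the arguments from this log rather than reading `/proc/cmdline`:

```shell
# Pick the verity.usrhash= argument out of a kernel command line.
cmdline='mount.usr=/dev/mapper/usr verity.usrhash=519b8fded83181f8e61f734d5291f916d7548bfba9487c78bcb50d002d81719d root=LABEL=ROOT'
usrhash=''
for arg in $cmdline; do
  case "$arg" in
    verity.usrhash=*) usrhash="${arg#verity.usrhash=}" ;;
  esac
done
echo "$usrhash"
# The dm-verity device would then be opened roughly as:
#   veritysetup open "$data_dev" usr "$hash_dev" "$usrhash"
# (data/hash device names here are placeholders, not from the log.)
```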
Jan 29 11:34:57.184256 kernel: BTRFS info (device vda6): first mount of filesystem 46e45d4d-e07d-4ebc-bafb-221646b0ed58 Jan 29 11:34:57.184317 kernel: BTRFS info (device vda6): using crc32c (crc32c-intel) checksum algorithm Jan 29 11:34:57.184335 kernel: BTRFS info (device vda6): using free space tree Jan 29 11:34:57.187334 kernel: BTRFS info (device vda6): auto enabling async discard Jan 29 11:34:57.196438 systemd[1]: mnt-oem.mount: Deactivated successfully. Jan 29 11:34:57.198435 kernel: BTRFS info (device vda6): last unmount of filesystem 46e45d4d-e07d-4ebc-bafb-221646b0ed58 Jan 29 11:34:57.276805 systemd[1]: Finished parse-ip-for-networkd.service - Write systemd-networkd units from cmdline. Jan 29 11:34:57.319427 systemd[1]: Starting systemd-networkd.service - Network Configuration... Jan 29 11:34:57.341757 systemd-networkd[774]: lo: Link UP Jan 29 11:34:57.341767 systemd-networkd[774]: lo: Gained carrier Jan 29 11:34:57.343615 systemd-networkd[774]: Enumeration completed Jan 29 11:34:57.344063 systemd-networkd[774]: eth0: found matching network '/usr/lib/systemd/network/zz-default.network', based on potentially unpredictable interface name. Jan 29 11:34:57.344067 systemd-networkd[774]: eth0: Configuring with /usr/lib/systemd/network/zz-default.network. Jan 29 11:34:57.366489 systemd-networkd[774]: eth0: Link UP Jan 29 11:34:57.366490 systemd[1]: Started systemd-networkd.service - Network Configuration. Jan 29 11:34:57.366494 systemd-networkd[774]: eth0: Gained carrier Jan 29 11:34:57.366501 systemd-networkd[774]: eth0: found matching network '/usr/lib/systemd/network/zz-default.network', based on potentially unpredictable interface name. Jan 29 11:34:57.376784 systemd[1]: Reached target network.target - Network. Jan 29 11:34:57.393336 systemd-networkd[774]: eth0: DHCPv4 address 10.0.0.107/16, gateway 10.0.0.1 acquired from 10.0.0.1 Jan 29 11:34:57.453928 systemd[1]: Finished ignition-setup.service - Ignition (setup). 
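[Editor's note] In the networkd lines above, eth0 matched `/usr/lib/systemd/network/zz-default.network` and acquired 10.0.0.107/16 over DHCP. A minimal `.network` unit with equivalent effect, written to a temp directory; this is a hypothetical reconstruction for illustration, not the file Flatcar actually ships, which carries more options:

```shell
# Smallest systemd-networkd unit that DHCPs any interface, mirroring the
# catch-all behaviour logged above. Written to a scratch dir, not /etc.
NET_DIR="$(mktemp -d)"
cat > "$NET_DIR/zz-default.network" <<'EOF'
[Match]
Name=*

[Network]
DHCP=yes
EOF
grep '^DHCP=' "$NET_DIR/zz-default.network"
```

The `zz-` prefix matters: networkd applies the lexically first matching file, so a unit sorting later acts as the fallback the log's "potentially unpredictable interface name" warning refers to.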
Jan 29 11:34:57.469441 systemd[1]: Starting ignition-fetch-offline.service - Ignition (fetch-offline)... Jan 29 11:34:57.521340 ignition[779]: Ignition 2.20.0 Jan 29 11:34:57.521353 ignition[779]: Stage: fetch-offline Jan 29 11:34:57.521395 ignition[779]: no configs at "/usr/lib/ignition/base.d" Jan 29 11:34:57.521408 ignition[779]: no config dir at "/usr/lib/ignition/base.platform.d/qemu" Jan 29 11:34:57.521532 ignition[779]: parsed url from cmdline: "" Jan 29 11:34:57.521537 ignition[779]: no config URL provided Jan 29 11:34:57.521544 ignition[779]: reading system config file "/usr/lib/ignition/user.ign" Jan 29 11:34:57.521555 ignition[779]: no config at "/usr/lib/ignition/user.ign" Jan 29 11:34:57.521594 ignition[779]: op(1): [started] loading QEMU firmware config module Jan 29 11:34:57.521601 ignition[779]: op(1): executing: "modprobe" "qemu_fw_cfg" Jan 29 11:34:57.549540 ignition[779]: op(1): [finished] loading QEMU firmware config module Jan 29 11:34:57.588326 ignition[779]: parsing config with SHA512: 6f8aaa456c356d9199fa9c2dc2c466d5ed26ab0ae06c364086ac6281ef5fe490de281121c1cd3c60c0e25535e8f70181d7c6800ef5a472edc3b537528aa85bd2 Jan 29 11:34:57.592502 unknown[779]: fetched base config from "system" Jan 29 11:34:57.593332 ignition[779]: fetch-offline: fetch-offline passed Jan 29 11:34:57.592524 unknown[779]: fetched user config from "qemu" Jan 29 11:34:57.593514 ignition[779]: Ignition finished successfully Jan 29 11:34:57.597212 systemd[1]: Finished ignition-fetch-offline.service - Ignition (fetch-offline). Jan 29 11:34:57.600087 systemd[1]: ignition-fetch.service - Ignition (fetch) was skipped because of an unmet condition check (ConditionPathExists=!/run/ignition.json). Jan 29 11:34:57.613449 systemd[1]: Starting ignition-kargs.service - Ignition (kargs)... 
Jan 29 11:34:57.626758 ignition[789]: Ignition 2.20.0 Jan 29 11:34:57.626769 ignition[789]: Stage: kargs Jan 29 11:34:57.626934 ignition[789]: no configs at "/usr/lib/ignition/base.d" Jan 29 11:34:57.626945 ignition[789]: no config dir at "/usr/lib/ignition/base.platform.d/qemu" Jan 29 11:34:57.628100 ignition[789]: kargs: kargs passed Jan 29 11:34:57.628153 ignition[789]: Ignition finished successfully Jan 29 11:34:57.634549 systemd[1]: Finished ignition-kargs.service - Ignition (kargs). Jan 29 11:34:57.641447 systemd[1]: Starting ignition-disks.service - Ignition (disks)... Jan 29 11:34:57.652739 ignition[797]: Ignition 2.20.0 Jan 29 11:34:57.652750 ignition[797]: Stage: disks Jan 29 11:34:57.652911 ignition[797]: no configs at "/usr/lib/ignition/base.d" Jan 29 11:34:57.652922 ignition[797]: no config dir at "/usr/lib/ignition/base.platform.d/qemu" Jan 29 11:34:57.653660 ignition[797]: disks: disks passed Jan 29 11:34:57.653704 ignition[797]: Ignition finished successfully Jan 29 11:34:57.695475 systemd[1]: Finished ignition-disks.service - Ignition (disks). Jan 29 11:34:57.697655 systemd[1]: Reached target initrd-root-device.target - Initrd Root Device. Jan 29 11:34:57.697731 systemd[1]: Reached target local-fs-pre.target - Preparation for Local File Systems. Jan 29 11:34:57.700022 systemd[1]: Reached target local-fs.target - Local File Systems. Jan 29 11:34:57.703650 systemd[1]: Reached target sysinit.target - System Initialization. Jan 29 11:34:57.746029 systemd[1]: Reached target basic.target - Basic System. Jan 29 11:34:57.759554 systemd[1]: Starting systemd-fsck-root.service - File System Check on /dev/disk/by-label/ROOT... Jan 29 11:34:57.824001 systemd-fsck[807]: ROOT: clean, 14/553520 files, 52654/553472 blocks Jan 29 11:34:58.138721 systemd[1]: Finished systemd-fsck-root.service - File System Check on /dev/disk/by-label/ROOT. Jan 29 11:34:58.151385 systemd[1]: Mounting sysroot.mount - /sysroot... 
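[Editor's note] The systemd-fsck summary above ("ROOT: clean, 14/553520 files, 52654/553472 blocks") is e2fsck's used/total counts for inodes and blocks on the ROOT filesystem. Rough utilisation from those exact numbers:

```shell
# Integer-percentage view of the fsck counts logged above.
files_used=14;    files_total=553520
blocks_used=52654; blocks_total=553472
echo "inodes: $((files_used * 100 / files_total))% blocks: $((blocks_used * 100 / blocks_total))%"
```

So ROOT is nearly empty at this point in first boot, as expected before Ignition populates it.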
Jan 29 11:34:58.255239 systemd[1]: Mounted sysroot.mount - /sysroot. Jan 29 11:34:58.256834 kernel: EXT4-fs (vda9): mounted filesystem 2fbf9359-701e-4995-b3f7-74280bd2b1c9 r/w with ordered data mode. Quota mode: none. Jan 29 11:34:58.255887 systemd[1]: Reached target initrd-root-fs.target - Initrd Root File System. Jan 29 11:34:58.275366 systemd[1]: Mounting sysroot-oem.mount - /sysroot/oem... Jan 29 11:34:58.277278 systemd[1]: Mounting sysroot-usr.mount - /sysroot/usr... Jan 29 11:34:58.278698 systemd[1]: flatcar-metadata-hostname.service - Flatcar Metadata Hostname Agent was skipped because no trigger condition checks were met. Jan 29 11:34:58.278743 systemd[1]: ignition-remount-sysroot.service - Remount /sysroot read-write for Ignition was skipped because of an unmet condition check (ConditionPathIsReadWrite=!/sysroot). Jan 29 11:34:58.285662 kernel: BTRFS: device label OEM devid 1 transid 15 /dev/vda6 scanned by mount (815) Jan 29 11:34:58.278767 systemd[1]: Reached target ignition-diskful.target - Ignition Boot Disk Setup. Jan 29 11:34:58.321286 kernel: BTRFS info (device vda6): first mount of filesystem 46e45d4d-e07d-4ebc-bafb-221646b0ed58 Jan 29 11:34:58.321318 kernel: BTRFS info (device vda6): using crc32c (crc32c-intel) checksum algorithm Jan 29 11:34:58.321333 kernel: BTRFS info (device vda6): using free space tree Jan 29 11:34:58.323310 kernel: BTRFS info (device vda6): auto enabling async discard Jan 29 11:34:58.337383 systemd[1]: Mounted sysroot-oem.mount - /sysroot/oem. Jan 29 11:34:58.342669 systemd[1]: Mounted sysroot-usr.mount - /sysroot/usr. Jan 29 11:34:58.355408 systemd[1]: Starting initrd-setup-root.service - Root filesystem setup... 
Jan 29 11:34:58.416184 initrd-setup-root[839]: cut: /sysroot/etc/passwd: No such file or directory Jan 29 11:34:58.421493 initrd-setup-root[846]: cut: /sysroot/etc/group: No such file or directory Jan 29 11:34:58.426583 initrd-setup-root[853]: cut: /sysroot/etc/shadow: No such file or directory Jan 29 11:34:58.431308 initrd-setup-root[860]: cut: /sysroot/etc/gshadow: No such file or directory Jan 29 11:34:58.514124 systemd[1]: Finished initrd-setup-root.service - Root filesystem setup. Jan 29 11:34:58.560387 systemd[1]: Starting ignition-mount.service - Ignition (mount)... Jan 29 11:34:58.563685 systemd[1]: Starting sysroot-boot.service - /sysroot/boot... Jan 29 11:34:58.567605 systemd[1]: sysroot-oem.mount: Deactivated successfully. Jan 29 11:34:58.568939 kernel: BTRFS info (device vda6): last unmount of filesystem 46e45d4d-e07d-4ebc-bafb-221646b0ed58 Jan 29 11:34:58.588118 systemd[1]: Finished sysroot-boot.service - /sysroot/boot. Jan 29 11:34:58.777988 ignition[933]: INFO : Ignition 2.20.0 Jan 29 11:34:58.777988 ignition[933]: INFO : Stage: mount Jan 29 11:34:58.779762 ignition[933]: INFO : no configs at "/usr/lib/ignition/base.d" Jan 29 11:34:58.779762 ignition[933]: INFO : no config dir at "/usr/lib/ignition/base.platform.d/qemu" Jan 29 11:34:58.779762 ignition[933]: INFO : mount: mount passed Jan 29 11:34:58.779762 ignition[933]: INFO : Ignition finished successfully Jan 29 11:34:58.781137 systemd[1]: Finished ignition-mount.service - Ignition (mount). Jan 29 11:34:58.791377 systemd[1]: Starting ignition-files.service - Ignition (files)... Jan 29 11:34:59.044489 systemd-networkd[774]: eth0: Gained IPv6LL Jan 29 11:34:59.264434 systemd[1]: Mounting sysroot-oem.mount - /sysroot/oem... 
Jan 29 11:34:59.327315 kernel: BTRFS: device label OEM devid 1 transid 16 /dev/vda6 scanned by mount (943) Jan 29 11:34:59.327343 kernel: BTRFS info (device vda6): first mount of filesystem 46e45d4d-e07d-4ebc-bafb-221646b0ed58 Jan 29 11:34:59.328865 kernel: BTRFS info (device vda6): using crc32c (crc32c-intel) checksum algorithm Jan 29 11:34:59.328881 kernel: BTRFS info (device vda6): using free space tree Jan 29 11:34:59.332319 kernel: BTRFS info (device vda6): auto enabling async discard Jan 29 11:34:59.333014 systemd[1]: Mounted sysroot-oem.mount - /sysroot/oem. Jan 29 11:34:59.351664 ignition[960]: INFO : Ignition 2.20.0 Jan 29 11:34:59.351664 ignition[960]: INFO : Stage: files Jan 29 11:34:59.353366 ignition[960]: INFO : no configs at "/usr/lib/ignition/base.d" Jan 29 11:34:59.353366 ignition[960]: INFO : no config dir at "/usr/lib/ignition/base.platform.d/qemu" Jan 29 11:34:59.353366 ignition[960]: DEBUG : files: compiled without relabeling support, skipping Jan 29 11:34:59.357019 ignition[960]: INFO : files: ensureUsers: op(1): [started] creating or modifying user "core" Jan 29 11:34:59.357019 ignition[960]: DEBUG : files: ensureUsers: op(1): executing: "usermod" "--root" "/sysroot" "core" Jan 29 11:34:59.360049 ignition[960]: INFO : files: ensureUsers: op(1): [finished] creating or modifying user "core" Jan 29 11:34:59.360049 ignition[960]: INFO : files: ensureUsers: op(2): [started] adding ssh keys to user "core" Jan 29 11:34:59.360049 ignition[960]: INFO : files: ensureUsers: op(2): [finished] adding ssh keys to user "core" Jan 29 11:34:59.360049 ignition[960]: INFO : files: createFilesystemsFiles: createFiles: op(3): [started] writing file "/sysroot/opt/helm-v3.13.2-linux-amd64.tar.gz" Jan 29 11:34:59.360049 ignition[960]: INFO : files: createFilesystemsFiles: createFiles: op(3): GET https://get.helm.sh/helm-v3.13.2-linux-amd64.tar.gz: attempt #1 Jan 29 11:34:59.357683 unknown[960]: wrote ssh authorized keys file for user: core Jan 29 11:34:59.403000 
ignition[960]: INFO : files: createFilesystemsFiles: createFiles: op(3): GET result: OK Jan 29 11:34:59.574690 ignition[960]: INFO : files: createFilesystemsFiles: createFiles: op(3): [finished] writing file "/sysroot/opt/helm-v3.13.2-linux-amd64.tar.gz" Jan 29 11:34:59.577207 ignition[960]: INFO : files: createFilesystemsFiles: createFiles: op(4): [started] writing file "/sysroot/home/core/install.sh" Jan 29 11:34:59.577207 ignition[960]: INFO : files: createFilesystemsFiles: createFiles: op(4): [finished] writing file "/sysroot/home/core/install.sh" Jan 29 11:34:59.577207 ignition[960]: INFO : files: createFilesystemsFiles: createFiles: op(5): [started] writing file "/sysroot/home/core/nginx.yaml" Jan 29 11:34:59.577207 ignition[960]: INFO : files: createFilesystemsFiles: createFiles: op(5): [finished] writing file "/sysroot/home/core/nginx.yaml" Jan 29 11:34:59.577207 ignition[960]: INFO : files: createFilesystemsFiles: createFiles: op(6): [started] writing file "/sysroot/home/core/nfs-pod.yaml" Jan 29 11:34:59.577207 ignition[960]: INFO : files: createFilesystemsFiles: createFiles: op(6): [finished] writing file "/sysroot/home/core/nfs-pod.yaml" Jan 29 11:34:59.577207 ignition[960]: INFO : files: createFilesystemsFiles: createFiles: op(7): [started] writing file "/sysroot/home/core/nfs-pvc.yaml" Jan 29 11:34:59.577207 ignition[960]: INFO : files: createFilesystemsFiles: createFiles: op(7): [finished] writing file "/sysroot/home/core/nfs-pvc.yaml" Jan 29 11:34:59.577207 ignition[960]: INFO : files: createFilesystemsFiles: createFiles: op(8): [started] writing file "/sysroot/etc/flatcar/update.conf" Jan 29 11:34:59.577207 ignition[960]: INFO : files: createFilesystemsFiles: createFiles: op(8): [finished] writing file "/sysroot/etc/flatcar/update.conf" Jan 29 11:34:59.577207 ignition[960]: INFO : files: createFilesystemsFiles: createFiles: op(9): [started] writing link "/sysroot/etc/extensions/kubernetes.raw" -> 
"/opt/extensions/kubernetes/kubernetes-v1.31.0-x86-64.raw" Jan 29 11:34:59.577207 ignition[960]: INFO : files: createFilesystemsFiles: createFiles: op(9): [finished] writing link "/sysroot/etc/extensions/kubernetes.raw" -> "/opt/extensions/kubernetes/kubernetes-v1.31.0-x86-64.raw" Jan 29 11:34:59.577207 ignition[960]: INFO : files: createFilesystemsFiles: createFiles: op(a): [started] writing file "/sysroot/opt/extensions/kubernetes/kubernetes-v1.31.0-x86-64.raw" Jan 29 11:34:59.577207 ignition[960]: INFO : files: createFilesystemsFiles: createFiles: op(a): GET https://github.com/flatcar/sysext-bakery/releases/download/latest/kubernetes-v1.31.0-x86-64.raw: attempt #1 Jan 29 11:34:59.964618 ignition[960]: INFO : files: createFilesystemsFiles: createFiles: op(a): GET result: OK Jan 29 11:35:00.221341 ignition[960]: INFO : files: createFilesystemsFiles: createFiles: op(a): [finished] writing file "/sysroot/opt/extensions/kubernetes/kubernetes-v1.31.0-x86-64.raw" Jan 29 11:35:00.221341 ignition[960]: INFO : files: op(b): [started] processing unit "prepare-helm.service" Jan 29 11:35:00.225485 ignition[960]: INFO : files: op(b): op(c): [started] writing unit "prepare-helm.service" at "/sysroot/etc/systemd/system/prepare-helm.service" Jan 29 11:35:00.225485 ignition[960]: INFO : files: op(b): op(c): [finished] writing unit "prepare-helm.service" at "/sysroot/etc/systemd/system/prepare-helm.service" Jan 29 11:35:00.225485 ignition[960]: INFO : files: op(b): [finished] processing unit "prepare-helm.service" Jan 29 11:35:00.225485 ignition[960]: INFO : files: op(d): [started] processing unit "coreos-metadata.service" Jan 29 11:35:00.225485 ignition[960]: INFO : files: op(d): op(e): [started] writing unit "coreos-metadata.service" at "/sysroot/etc/systemd/system/coreos-metadata.service" Jan 29 11:35:00.225485 ignition[960]: INFO : files: op(d): op(e): [finished] writing unit "coreos-metadata.service" at "/sysroot/etc/systemd/system/coreos-metadata.service" Jan 29 
11:35:00.225485 ignition[960]: INFO : files: op(d): [finished] processing unit "coreos-metadata.service" Jan 29 11:35:00.225485 ignition[960]: INFO : files: op(f): [started] setting preset to disabled for "coreos-metadata.service" Jan 29 11:35:00.253587 ignition[960]: INFO : files: op(f): op(10): [started] removing enablement symlink(s) for "coreos-metadata.service" Jan 29 11:35:00.258976 ignition[960]: INFO : files: op(f): op(10): [finished] removing enablement symlink(s) for "coreos-metadata.service" Jan 29 11:35:00.260844 ignition[960]: INFO : files: op(f): [finished] setting preset to disabled for "coreos-metadata.service" Jan 29 11:35:00.260844 ignition[960]: INFO : files: op(11): [started] setting preset to enabled for "prepare-helm.service" Jan 29 11:35:00.260844 ignition[960]: INFO : files: op(11): [finished] setting preset to enabled for "prepare-helm.service" Jan 29 11:35:00.260844 ignition[960]: INFO : files: createResultFile: createFiles: op(12): [started] writing file "/sysroot/etc/.ignition-result.json" Jan 29 11:35:00.260844 ignition[960]: INFO : files: createResultFile: createFiles: op(12): [finished] writing file "/sysroot/etc/.ignition-result.json" Jan 29 11:35:00.260844 ignition[960]: INFO : files: files passed Jan 29 11:35:00.260844 ignition[960]: INFO : Ignition finished successfully Jan 29 11:35:00.272897 systemd[1]: Finished ignition-files.service - Ignition (files). Jan 29 11:35:00.284523 systemd[1]: Starting ignition-quench.service - Ignition (record completion)... Jan 29 11:35:00.286675 systemd[1]: Starting initrd-setup-root-after-ignition.service - Root filesystem completion... Jan 29 11:35:00.290488 systemd[1]: ignition-quench.service: Deactivated successfully. Jan 29 11:35:00.290631 systemd[1]: Finished ignition-quench.service - Ignition (record completion). 
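[Editor's note] The Ignition files-stage operations above (fetching the Helm tarball, enabling prepare-helm.service, disabling the coreos-metadata.service preset) would be driven by an Ignition config shaped roughly like the hypothetical fragment below; the real config behind this boot is not in the log, so field values are illustrative only:

```shell
# A minimal Ignition config fragment consistent with the ops logged above.
# Spec version 3.4.0 is an assumption; Ignition 2.20.0 accepts it.
IGN="$(mktemp)"
cat > "$IGN" <<'EOF'
{
  "ignition": { "version": "3.4.0" },
  "storage": {
    "files": [
      { "path": "/opt/helm-v3.13.2-linux-amd64.tar.gz",
        "contents": { "source": "https://get.helm.sh/helm-v3.13.2-linux-amd64.tar.gz" } }
    ]
  },
  "systemd": {
    "units": [
      { "name": "prepare-helm.service", "enabled": true },
      { "name": "coreos-metadata.service", "enabled": false }
    ]
  }
}
EOF
grep -c '"name"' "$IGN"
```

Each `files` entry maps to one `createFilesystemsFiles ... op(N)` pair in the log, and each `units` entry to an `op` under `processing unit`.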
Jan 29 11:35:00.297354 initrd-setup-root-after-ignition[988]: grep: /sysroot/oem/oem-release: No such file or directory
Jan 29 11:35:00.300414 initrd-setup-root-after-ignition[990]: grep: /sysroot/etc/flatcar/enabled-sysext.conf: No such file or directory
Jan 29 11:35:00.300414 initrd-setup-root-after-ignition[990]: grep: /sysroot/usr/share/flatcar/enabled-sysext.conf: No such file or directory
Jan 29 11:35:00.303587 initrd-setup-root-after-ignition[994]: grep: /sysroot/etc/flatcar/enabled-sysext.conf: No such file or directory
Jan 29 11:35:00.307121 systemd[1]: Finished initrd-setup-root-after-ignition.service - Root filesystem completion.
Jan 29 11:35:00.309946 systemd[1]: Reached target ignition-complete.target - Ignition Complete.
Jan 29 11:35:00.320507 systemd[1]: Starting initrd-parse-etc.service - Mountpoints Configured in the Real Root...
Jan 29 11:35:00.349763 systemd[1]: initrd-parse-etc.service: Deactivated successfully.
Jan 29 11:35:00.349922 systemd[1]: Finished initrd-parse-etc.service - Mountpoints Configured in the Real Root.
Jan 29 11:35:00.351209 systemd[1]: Reached target initrd-fs.target - Initrd File Systems.
Jan 29 11:35:00.354519 systemd[1]: Reached target initrd.target - Initrd Default Target.
Jan 29 11:35:00.354820 systemd[1]: dracut-mount.service - dracut mount hook was skipped because no trigger condition checks were met.
Jan 29 11:35:00.366523 systemd[1]: Starting dracut-pre-pivot.service - dracut pre-pivot and cleanup hook...
Jan 29 11:35:00.382545 systemd[1]: Finished dracut-pre-pivot.service - dracut pre-pivot and cleanup hook.
Jan 29 11:35:00.399483 systemd[1]: Starting initrd-cleanup.service - Cleaning Up and Shutting Down Daemons...
Jan 29 11:35:00.410729 systemd[1]: Stopped target nss-lookup.target - Host and Network Name Lookups.
Jan 29 11:35:00.410898 systemd[1]: Stopped target remote-cryptsetup.target - Remote Encrypted Volumes.
Jan 29 11:35:00.416192 systemd[1]: Stopped target timers.target - Timer Units.
Jan 29 11:35:00.416711 systemd[1]: dracut-pre-pivot.service: Deactivated successfully.
Jan 29 11:35:00.416892 systemd[1]: Stopped dracut-pre-pivot.service - dracut pre-pivot and cleanup hook.
Jan 29 11:35:00.420577 systemd[1]: Stopped target initrd.target - Initrd Default Target.
Jan 29 11:35:00.420974 systemd[1]: Stopped target basic.target - Basic System.
Jan 29 11:35:00.421346 systemd[1]: Stopped target ignition-complete.target - Ignition Complete.
Jan 29 11:35:00.450055 systemd[1]: Stopped target ignition-diskful.target - Ignition Boot Disk Setup.
Jan 29 11:35:00.452435 systemd[1]: Stopped target initrd-root-device.target - Initrd Root Device.
Jan 29 11:35:00.453687 systemd[1]: Stopped target remote-fs.target - Remote File Systems.
Jan 29 11:35:00.454016 systemd[1]: Stopped target remote-fs-pre.target - Preparation for Remote File Systems.
Jan 29 11:35:00.454378 systemd[1]: Stopped target sysinit.target - System Initialization.
Jan 29 11:35:00.454882 systemd[1]: Stopped target local-fs.target - Local File Systems.
Jan 29 11:35:00.455187 systemd[1]: Stopped target swap.target - Swaps.
Jan 29 11:35:00.455663 systemd[1]: dracut-pre-mount.service: Deactivated successfully.
Jan 29 11:35:00.455849 systemd[1]: Stopped dracut-pre-mount.service - dracut pre-mount hook.
Jan 29 11:35:00.465741 systemd[1]: Stopped target cryptsetup.target - Local Encrypted Volumes.
Jan 29 11:35:00.466152 systemd[1]: Stopped target cryptsetup-pre.target - Local Encrypted Volumes (Pre).
Jan 29 11:35:00.466637 systemd[1]: clevis-luks-askpass.path: Deactivated successfully.
Jan 29 11:35:00.466752 systemd[1]: Stopped clevis-luks-askpass.path - Forward Password Requests to Clevis Directory Watch.
Jan 29 11:35:00.467017 systemd[1]: dracut-initqueue.service: Deactivated successfully.
Jan 29 11:35:00.467125 systemd[1]: Stopped dracut-initqueue.service - dracut initqueue hook.
Jan 29 11:35:00.467886 systemd[1]: ignition-fetch-offline.service: Deactivated successfully.
Jan 29 11:35:00.468012 systemd[1]: Stopped ignition-fetch-offline.service - Ignition (fetch-offline).
Jan 29 11:35:00.468353 systemd[1]: Stopped target paths.target - Path Units.
Jan 29 11:35:00.468798 systemd[1]: systemd-ask-password-console.path: Deactivated successfully.
Jan 29 11:35:00.492376 systemd[1]: Stopped systemd-ask-password-console.path - Dispatch Password Requests to Console Directory Watch.
Jan 29 11:35:00.495640 systemd[1]: Stopped target slices.target - Slice Units.
Jan 29 11:35:00.498120 systemd[1]: Stopped target sockets.target - Socket Units.
Jan 29 11:35:00.500078 systemd[1]: iscsid.socket: Deactivated successfully.
Jan 29 11:35:00.500187 systemd[1]: Closed iscsid.socket - Open-iSCSI iscsid Socket.
Jan 29 11:35:00.501100 systemd[1]: iscsiuio.socket: Deactivated successfully.
Jan 29 11:35:00.501200 systemd[1]: Closed iscsiuio.socket - Open-iSCSI iscsiuio Socket.
Jan 29 11:35:00.504005 systemd[1]: initrd-setup-root-after-ignition.service: Deactivated successfully.
Jan 29 11:35:00.504148 systemd[1]: Stopped initrd-setup-root-after-ignition.service - Root filesystem completion.
Jan 29 11:35:00.506286 systemd[1]: ignition-files.service: Deactivated successfully.
Jan 29 11:35:00.506406 systemd[1]: Stopped ignition-files.service - Ignition (files).
Jan 29 11:35:00.522443 systemd[1]: Stopping ignition-mount.service - Ignition (mount)...
Jan 29 11:35:00.523509 systemd[1]: kmod-static-nodes.service: Deactivated successfully.
Jan 29 11:35:00.523627 systemd[1]: Stopped kmod-static-nodes.service - Create List of Static Device Nodes.
Jan 29 11:35:00.526684 systemd[1]: Stopping sysroot-boot.service - /sysroot/boot...
Jan 29 11:35:00.526943 systemd[1]: systemd-udev-trigger.service: Deactivated successfully.
Jan 29 11:35:00.527076 systemd[1]: Stopped systemd-udev-trigger.service - Coldplug All udev Devices.
Jan 29 11:35:00.529078 systemd[1]: dracut-pre-trigger.service: Deactivated successfully.
Jan 29 11:35:00.529175 systemd[1]: Stopped dracut-pre-trigger.service - dracut pre-trigger hook.
Jan 29 11:35:00.539238 systemd[1]: initrd-cleanup.service: Deactivated successfully.
Jan 29 11:35:00.539379 systemd[1]: Finished initrd-cleanup.service - Cleaning Up and Shutting Down Daemons.
Jan 29 11:35:00.543014 ignition[1014]: INFO : Ignition 2.20.0
Jan 29 11:35:00.543014 ignition[1014]: INFO : Stage: umount
Jan 29 11:35:00.545082 ignition[1014]: INFO : no configs at "/usr/lib/ignition/base.d"
Jan 29 11:35:00.545082 ignition[1014]: INFO : no config dir at "/usr/lib/ignition/base.platform.d/qemu"
Jan 29 11:35:00.545082 ignition[1014]: INFO : umount: umount passed
Jan 29 11:35:00.545082 ignition[1014]: INFO : Ignition finished successfully
Jan 29 11:35:00.545748 systemd[1]: ignition-mount.service: Deactivated successfully.
Jan 29 11:35:00.545884 systemd[1]: Stopped ignition-mount.service - Ignition (mount).
Jan 29 11:35:00.547426 systemd[1]: Stopped target network.target - Network.
Jan 29 11:35:00.549455 systemd[1]: ignition-disks.service: Deactivated successfully.
Jan 29 11:35:00.549510 systemd[1]: Stopped ignition-disks.service - Ignition (disks).
Jan 29 11:35:00.551428 systemd[1]: ignition-kargs.service: Deactivated successfully.
Jan 29 11:35:00.551476 systemd[1]: Stopped ignition-kargs.service - Ignition (kargs).
Jan 29 11:35:00.553425 systemd[1]: ignition-setup.service: Deactivated successfully.
Jan 29 11:35:00.553473 systemd[1]: Stopped ignition-setup.service - Ignition (setup).
Jan 29 11:35:00.553565 systemd[1]: ignition-setup-pre.service: Deactivated successfully.
Jan 29 11:35:00.553609 systemd[1]: Stopped ignition-setup-pre.service - Ignition env setup.
Jan 29 11:35:00.554038 systemd[1]: Stopping systemd-networkd.service - Network Configuration...
Jan 29 11:35:00.554436 systemd[1]: Stopping systemd-resolved.service - Network Name Resolution...
Jan 29 11:35:00.559432 systemd[1]: sysroot-boot.mount: Deactivated successfully.
Jan 29 11:35:00.565380 systemd-networkd[774]: eth0: DHCPv6 lease lost
Jan 29 11:35:00.565744 systemd[1]: systemd-resolved.service: Deactivated successfully.
Jan 29 11:35:00.565882 systemd[1]: Stopped systemd-resolved.service - Network Name Resolution.
Jan 29 11:35:00.568886 systemd[1]: systemd-networkd.service: Deactivated successfully.
Jan 29 11:35:00.569026 systemd[1]: Stopped systemd-networkd.service - Network Configuration.
Jan 29 11:35:00.572213 systemd[1]: systemd-networkd.socket: Deactivated successfully.
Jan 29 11:35:00.572259 systemd[1]: Closed systemd-networkd.socket - Network Service Netlink Socket.
Jan 29 11:35:00.582368 systemd[1]: Stopping network-cleanup.service - Network Cleanup...
Jan 29 11:35:00.584150 systemd[1]: parse-ip-for-networkd.service: Deactivated successfully.
Jan 29 11:35:00.584205 systemd[1]: Stopped parse-ip-for-networkd.service - Write systemd-networkd units from cmdline.
Jan 29 11:35:00.586658 systemd[1]: systemd-sysctl.service: Deactivated successfully.
Jan 29 11:35:00.586705 systemd[1]: Stopped systemd-sysctl.service - Apply Kernel Variables.
Jan 29 11:35:00.588699 systemd[1]: systemd-modules-load.service: Deactivated successfully.
Jan 29 11:35:00.588746 systemd[1]: Stopped systemd-modules-load.service - Load Kernel Modules.
Jan 29 11:35:00.590965 systemd[1]: systemd-tmpfiles-setup.service: Deactivated successfully.
Jan 29 11:35:00.591011 systemd[1]: Stopped systemd-tmpfiles-setup.service - Create System Files and Directories.
Jan 29 11:35:00.592439 systemd[1]: Stopping systemd-udevd.service - Rule-based Manager for Device Events and Files...
Jan 29 11:35:00.605892 systemd[1]: network-cleanup.service: Deactivated successfully.
Jan 29 11:35:00.606030 systemd[1]: Stopped network-cleanup.service - Network Cleanup.
Jan 29 11:35:00.611080 systemd[1]: systemd-udevd.service: Deactivated successfully.
Jan 29 11:35:00.611263 systemd[1]: Stopped systemd-udevd.service - Rule-based Manager for Device Events and Files.
Jan 29 11:35:00.613545 systemd[1]: systemd-udevd-control.socket: Deactivated successfully.
Jan 29 11:35:00.613594 systemd[1]: Closed systemd-udevd-control.socket - udev Control Socket.
Jan 29 11:35:00.615679 systemd[1]: systemd-udevd-kernel.socket: Deactivated successfully.
Jan 29 11:35:00.615720 systemd[1]: Closed systemd-udevd-kernel.socket - udev Kernel Socket.
Jan 29 11:35:00.617722 systemd[1]: dracut-pre-udev.service: Deactivated successfully.
Jan 29 11:35:00.617771 systemd[1]: Stopped dracut-pre-udev.service - dracut pre-udev hook.
Jan 29 11:35:00.619888 systemd[1]: dracut-cmdline.service: Deactivated successfully.
Jan 29 11:35:00.619936 systemd[1]: Stopped dracut-cmdline.service - dracut cmdline hook.
Jan 29 11:35:00.622049 systemd[1]: dracut-cmdline-ask.service: Deactivated successfully.
Jan 29 11:35:00.622096 systemd[1]: Stopped dracut-cmdline-ask.service - dracut ask for additional cmdline parameters.
Jan 29 11:35:00.631641 systemd[1]: Starting initrd-udevadm-cleanup-db.service - Cleanup udev Database...
Jan 29 11:35:00.634046 systemd[1]: systemd-tmpfiles-setup-dev.service: Deactivated successfully.
Jan 29 11:35:00.634099 systemd[1]: Stopped systemd-tmpfiles-setup-dev.service - Create Static Device Nodes in /dev.
Jan 29 11:35:00.636460 systemd[1]: systemd-vconsole-setup.service: Deactivated successfully.
Jan 29 11:35:00.636507 systemd[1]: Stopped systemd-vconsole-setup.service - Virtual Console Setup.
Jan 29 11:35:00.639010 systemd[1]: initrd-udevadm-cleanup-db.service: Deactivated successfully.
Jan 29 11:35:00.639114 systemd[1]: Finished initrd-udevadm-cleanup-db.service - Cleanup udev Database.
Jan 29 11:35:00.744698 systemd[1]: sysroot-boot.service: Deactivated successfully.
Jan 29 11:35:00.744871 systemd[1]: Stopped sysroot-boot.service - /sysroot/boot.
Jan 29 11:35:00.747152 systemd[1]: Reached target initrd-switch-root.target - Switch Root.
Jan 29 11:35:00.749130 systemd[1]: initrd-setup-root.service: Deactivated successfully.
Jan 29 11:35:00.749195 systemd[1]: Stopped initrd-setup-root.service - Root filesystem setup.
Jan 29 11:35:00.758434 systemd[1]: Starting initrd-switch-root.service - Switch Root...
Jan 29 11:35:00.766437 systemd[1]: Switching root.
Jan 29 11:35:00.802022 systemd-journald[193]: Journal stopped
Jan 29 11:35:01.940353 systemd-journald[193]: Received SIGTERM from PID 1 (systemd).
Jan 29 11:35:01.940415 kernel: SELinux: policy capability network_peer_controls=1
Jan 29 11:35:01.940439 kernel: SELinux: policy capability open_perms=1
Jan 29 11:35:01.940451 kernel: SELinux: policy capability extended_socket_class=1
Jan 29 11:35:01.940462 kernel: SELinux: policy capability always_check_network=0
Jan 29 11:35:01.940477 kernel: SELinux: policy capability cgroup_seclabel=1
Jan 29 11:35:01.940488 kernel: SELinux: policy capability nnp_nosuid_transition=1
Jan 29 11:35:01.940499 kernel: SELinux: policy capability genfs_seclabel_symlinks=0
Jan 29 11:35:01.940515 kernel: SELinux: policy capability ioctl_skip_cloexec=0
Jan 29 11:35:01.940529 kernel: audit: type=1403 audit(1738150501.200:2): auid=4294967295 ses=4294967295 lsm=selinux res=1
Jan 29 11:35:01.940547 systemd[1]: Successfully loaded SELinux policy in 39.706ms.
Jan 29 11:35:01.940564 systemd[1]: Relabeled /dev, /dev/shm, /run, /sys/fs/cgroup in 12.538ms.
Jan 29 11:35:01.940580 systemd[1]: systemd 255 running in system mode (+PAM +AUDIT +SELINUX -APPARMOR +IMA +SMACK +SECCOMP +GCRYPT -GNUTLS +OPENSSL -ACL +BLKID +CURL +ELFUTILS -FIDO2 +IDN2 -IDN +IPTC +KMOD +LIBCRYPTSETUP +LIBFDISK +PCRE2 -PWQUALITY -P11KIT -QRENCODE +TPM2 +BZIP2 +LZ4 +XZ +ZLIB +ZSTD -BPF_FRAMEWORK -XKBCOMMON +UTMP -SYSVINIT default-hierarchy=unified)
Jan 29 11:35:01.941756 systemd[1]: Detected virtualization kvm.
Jan 29 11:35:01.941772 systemd[1]: Detected architecture x86-64.
Jan 29 11:35:01.941783 systemd[1]: Detected first boot.
Jan 29 11:35:01.941801 systemd[1]: Initializing machine ID from VM UUID.
Jan 29 11:35:01.941813 zram_generator::config[1058]: No configuration found.
Jan 29 11:35:01.941829 systemd[1]: Populated /etc with preset unit settings.
Jan 29 11:35:01.941841 systemd[1]: initrd-switch-root.service: Deactivated successfully.
Jan 29 11:35:01.941852 systemd[1]: Stopped initrd-switch-root.service - Switch Root.
Jan 29 11:35:01.941864 systemd[1]: systemd-journald.service: Scheduled restart job, restart counter is at 1.
Jan 29 11:35:01.941877 systemd[1]: Created slice system-addon\x2dconfig.slice - Slice /system/addon-config.
Jan 29 11:35:01.941889 systemd[1]: Created slice system-addon\x2drun.slice - Slice /system/addon-run.
Jan 29 11:35:01.941901 systemd[1]: Created slice system-getty.slice - Slice /system/getty.
Jan 29 11:35:01.941913 systemd[1]: Created slice system-modprobe.slice - Slice /system/modprobe.
Jan 29 11:35:01.941927 systemd[1]: Created slice system-serial\x2dgetty.slice - Slice /system/serial-getty.
Jan 29 11:35:01.941939 systemd[1]: Created slice system-system\x2dcloudinit.slice - Slice /system/system-cloudinit.
Jan 29 11:35:01.941951 systemd[1]: Created slice system-systemd\x2dfsck.slice - Slice /system/systemd-fsck.
Jan 29 11:35:01.941963 systemd[1]: Created slice user.slice - User and Session Slice.
Jan 29 11:35:01.941975 systemd[1]: Started clevis-luks-askpass.path - Forward Password Requests to Clevis Directory Watch.
Jan 29 11:35:01.941987 systemd[1]: Started systemd-ask-password-console.path - Dispatch Password Requests to Console Directory Watch.
Jan 29 11:35:01.941999 systemd[1]: Started systemd-ask-password-wall.path - Forward Password Requests to Wall Directory Watch.
Jan 29 11:35:01.942011 systemd[1]: Set up automount boot.automount - Boot partition Automount Point.
Jan 29 11:35:01.942026 systemd[1]: Set up automount proc-sys-fs-binfmt_misc.automount - Arbitrary Executable File Formats File System Automount Point.
Jan 29 11:35:01.942038 systemd[1]: Expecting device dev-disk-by\x2dlabel-OEM.device - /dev/disk/by-label/OEM...
Jan 29 11:35:01.942049 systemd[1]: Expecting device dev-ttyS0.device - /dev/ttyS0...
Jan 29 11:35:01.942062 systemd[1]: Reached target cryptsetup-pre.target - Local Encrypted Volumes (Pre).
Jan 29 11:35:01.942074 systemd[1]: Stopped target initrd-switch-root.target - Switch Root.
Jan 29 11:35:01.942086 systemd[1]: Stopped target initrd-fs.target - Initrd File Systems.
Jan 29 11:35:01.942098 systemd[1]: Stopped target initrd-root-fs.target - Initrd Root File System.
Jan 29 11:35:01.942111 systemd[1]: Reached target integritysetup.target - Local Integrity Protected Volumes.
Jan 29 11:35:01.942125 systemd[1]: Reached target remote-cryptsetup.target - Remote Encrypted Volumes.
Jan 29 11:35:01.942137 systemd[1]: Reached target remote-fs.target - Remote File Systems.
Jan 29 11:35:01.942149 systemd[1]: Reached target slices.target - Slice Units.
Jan 29 11:35:01.942161 systemd[1]: Reached target swap.target - Swaps.
Jan 29 11:35:01.942172 systemd[1]: Reached target veritysetup.target - Local Verity Protected Volumes.
Jan 29 11:35:01.942184 systemd[1]: Listening on systemd-coredump.socket - Process Core Dump Socket.
Jan 29 11:35:01.942196 systemd[1]: Listening on systemd-networkd.socket - Network Service Netlink Socket.
Jan 29 11:35:01.942208 systemd[1]: Listening on systemd-udevd-control.socket - udev Control Socket.
Jan 29 11:35:01.942219 systemd[1]: Listening on systemd-udevd-kernel.socket - udev Kernel Socket.
Jan 29 11:35:01.942233 systemd[1]: Listening on systemd-userdbd.socket - User Database Manager Socket.
Jan 29 11:35:01.942245 systemd[1]: Mounting dev-hugepages.mount - Huge Pages File System...
Jan 29 11:35:01.942257 systemd[1]: Mounting dev-mqueue.mount - POSIX Message Queue File System...
Jan 29 11:35:01.942268 systemd[1]: Mounting media.mount - External Media Directory...
Jan 29 11:35:01.942281 systemd[1]: proc-xen.mount - /proc/xen was skipped because of an unmet condition check (ConditionVirtualization=xen).
Jan 29 11:35:01.942304 systemd[1]: Mounting sys-kernel-debug.mount - Kernel Debug File System...
Jan 29 11:35:01.942316 systemd[1]: Mounting sys-kernel-tracing.mount - Kernel Trace File System...
Jan 29 11:35:01.942328 systemd[1]: Mounting tmp.mount - Temporary Directory /tmp...
Jan 29 11:35:01.942340 systemd[1]: var-lib-machines.mount - Virtual Machine and Container Storage (Compatibility) was skipped because of an unmet condition check (ConditionPathExists=/var/lib/machines.raw).
Jan 29 11:35:01.942354 systemd[1]: Reached target machines.target - Containers.
Jan 29 11:35:01.942366 systemd[1]: Starting flatcar-tmpfiles.service - Create missing system files...
Jan 29 11:35:01.942380 systemd[1]: ignition-delete-config.service - Ignition (delete config) was skipped because no trigger condition checks were met.
Jan 29 11:35:01.942392 systemd[1]: Starting kmod-static-nodes.service - Create List of Static Device Nodes...
Jan 29 11:35:01.942404 systemd[1]: Starting modprobe@configfs.service - Load Kernel Module configfs...
Jan 29 11:35:01.942416 systemd[1]: Starting modprobe@dm_mod.service - Load Kernel Module dm_mod...
Jan 29 11:35:01.942428 systemd[1]: Starting modprobe@drm.service - Load Kernel Module drm...
Jan 29 11:35:01.942440 systemd[1]: Starting modprobe@efi_pstore.service - Load Kernel Module efi_pstore...
Jan 29 11:35:01.942454 systemd[1]: Starting modprobe@fuse.service - Load Kernel Module fuse...
Jan 29 11:35:01.942466 systemd[1]: Starting modprobe@loop.service - Load Kernel Module loop...
Jan 29 11:35:01.942478 systemd[1]: setup-nsswitch.service - Create /etc/nsswitch.conf was skipped because of an unmet condition check (ConditionPathExists=!/etc/nsswitch.conf).
Jan 29 11:35:01.942490 systemd[1]: systemd-fsck-root.service: Deactivated successfully.
Jan 29 11:35:01.942501 systemd[1]: Stopped systemd-fsck-root.service - File System Check on Root Device.
Jan 29 11:35:01.942513 systemd[1]: systemd-fsck-usr.service: Deactivated successfully.
Jan 29 11:35:01.942525 systemd[1]: Stopped systemd-fsck-usr.service.
Jan 29 11:35:01.942537 kernel: fuse: init (API version 7.39)
Jan 29 11:35:01.942548 systemd[1]: Starting systemd-journald.service - Journal Service...
Jan 29 11:35:01.942562 kernel: loop: module loaded
Jan 29 11:35:01.942573 systemd[1]: Starting systemd-modules-load.service - Load Kernel Modules...
Jan 29 11:35:01.942585 systemd[1]: Starting systemd-network-generator.service - Generate network units from Kernel command line...
Jan 29 11:35:01.942597 systemd[1]: Starting systemd-remount-fs.service - Remount Root and Kernel File Systems...
Jan 29 11:35:01.942608 systemd[1]: Starting systemd-udev-trigger.service - Coldplug All udev Devices...
Jan 29 11:35:01.942620 kernel: ACPI: bus type drm_connector registered
Jan 29 11:35:01.942632 systemd[1]: verity-setup.service: Deactivated successfully.
Jan 29 11:35:01.942644 systemd[1]: Stopped verity-setup.service.
Jan 29 11:35:01.942656 systemd[1]: xenserver-pv-version.service - Set fake PV driver version for XenServer was skipped because of an unmet condition check (ConditionVirtualization=xen).
Jan 29 11:35:01.942688 systemd-journald[1135]: Collecting audit messages is disabled.
Jan 29 11:35:01.943795 systemd[1]: Mounted dev-hugepages.mount - Huge Pages File System.
Jan 29 11:35:01.943807 systemd[1]: Mounted dev-mqueue.mount - POSIX Message Queue File System.
Jan 29 11:35:01.943818 systemd-journald[1135]: Journal started
Jan 29 11:35:01.943840 systemd-journald[1135]: Runtime Journal (/run/log/journal/9ae2e4bc549c471bbcb451d347c48c59) is 6.0M, max 48.4M, 42.3M free.
Jan 29 11:35:01.714406 systemd[1]: Queued start job for default target multi-user.target.
Jan 29 11:35:01.733957 systemd[1]: Unnecessary job was removed for dev-vda6.device - /dev/vda6.
Jan 29 11:35:01.734411 systemd[1]: systemd-journald.service: Deactivated successfully.
Jan 29 11:35:01.945623 systemd[1]: Mounted media.mount - External Media Directory.
Jan 29 11:35:01.947512 systemd[1]: Started systemd-journald.service - Journal Service.
Jan 29 11:35:01.948176 systemd[1]: Mounted sys-kernel-debug.mount - Kernel Debug File System.
Jan 29 11:35:01.949403 systemd[1]: Mounted sys-kernel-tracing.mount - Kernel Trace File System.
Jan 29 11:35:01.950616 systemd[1]: Mounted tmp.mount - Temporary Directory /tmp.
Jan 29 11:35:01.951941 systemd[1]: Finished flatcar-tmpfiles.service - Create missing system files.
Jan 29 11:35:01.953382 systemd[1]: Finished kmod-static-nodes.service - Create List of Static Device Nodes.
Jan 29 11:35:01.954932 systemd[1]: modprobe@configfs.service: Deactivated successfully.
Jan 29 11:35:01.955106 systemd[1]: Finished modprobe@configfs.service - Load Kernel Module configfs.
Jan 29 11:35:01.956577 systemd[1]: modprobe@dm_mod.service: Deactivated successfully.
Jan 29 11:35:01.956755 systemd[1]: Finished modprobe@dm_mod.service - Load Kernel Module dm_mod.
Jan 29 11:35:01.958192 systemd[1]: modprobe@drm.service: Deactivated successfully.
Jan 29 11:35:01.958397 systemd[1]: Finished modprobe@drm.service - Load Kernel Module drm.
Jan 29 11:35:01.959833 systemd[1]: modprobe@efi_pstore.service: Deactivated successfully.
Jan 29 11:35:01.960005 systemd[1]: Finished modprobe@efi_pstore.service - Load Kernel Module efi_pstore.
Jan 29 11:35:01.961523 systemd[1]: modprobe@fuse.service: Deactivated successfully.
Jan 29 11:35:01.961693 systemd[1]: Finished modprobe@fuse.service - Load Kernel Module fuse.
Jan 29 11:35:01.963141 systemd[1]: modprobe@loop.service: Deactivated successfully.
Jan 29 11:35:01.963321 systemd[1]: Finished modprobe@loop.service - Load Kernel Module loop.
Jan 29 11:35:01.964790 systemd[1]: Finished systemd-modules-load.service - Load Kernel Modules.
Jan 29 11:35:01.966188 systemd[1]: Finished systemd-network-generator.service - Generate network units from Kernel command line.
Jan 29 11:35:01.967719 systemd[1]: Finished systemd-remount-fs.service - Remount Root and Kernel File Systems.
Jan 29 11:35:01.983559 systemd[1]: Reached target network-pre.target - Preparation for Network.
Jan 29 11:35:01.994460 systemd[1]: Mounting sys-fs-fuse-connections.mount - FUSE Control File System...
Jan 29 11:35:01.996938 systemd[1]: Mounting sys-kernel-config.mount - Kernel Configuration File System...
Jan 29 11:35:01.998143 systemd[1]: remount-root.service - Remount Root File System was skipped because of an unmet condition check (ConditionPathIsReadWrite=!/).
Jan 29 11:35:01.998176 systemd[1]: Reached target local-fs.target - Local File Systems.
Jan 29 11:35:02.000183 systemd[1]: Listening on systemd-sysext.socket - System Extension Image Management (Varlink).
Jan 29 11:35:02.002509 systemd[1]: Starting dracut-shutdown.service - Restore /run/initramfs on shutdown...
Jan 29 11:35:02.004898 systemd[1]: Starting ldconfig.service - Rebuild Dynamic Linker Cache...
Jan 29 11:35:02.006094 systemd[1]: systemd-binfmt.service - Set Up Additional Binary Formats was skipped because no trigger condition checks were met.
Jan 29 11:35:02.009478 systemd[1]: Starting systemd-hwdb-update.service - Rebuild Hardware Database...
Jan 29 11:35:02.012821 systemd[1]: Starting systemd-journal-flush.service - Flush Journal to Persistent Storage...
Jan 29 11:35:02.014976 systemd[1]: systemd-pstore.service - Platform Persistent Storage Archival was skipped because of an unmet condition check (ConditionDirectoryNotEmpty=/sys/fs/pstore).
Jan 29 11:35:02.016361 systemd[1]: Starting systemd-random-seed.service - Load/Save OS Random Seed...
Jan 29 11:35:02.017729 systemd[1]: systemd-repart.service - Repartition Root Disk was skipped because no trigger condition checks were met.
Jan 29 11:35:02.022439 systemd[1]: Starting systemd-sysctl.service - Apply Kernel Variables...
Jan 29 11:35:02.024856 systemd-journald[1135]: Time spent on flushing to /var/log/journal/9ae2e4bc549c471bbcb451d347c48c59 is 15.170ms for 948 entries.
Jan 29 11:35:02.024856 systemd-journald[1135]: System Journal (/var/log/journal/9ae2e4bc549c471bbcb451d347c48c59) is 8.0M, max 195.6M, 187.6M free.
Jan 29 11:35:02.059539 systemd-journald[1135]: Received client request to flush runtime journal.
Jan 29 11:35:02.059587 kernel: loop0: detected capacity change from 0 to 140992
Jan 29 11:35:02.025649 systemd[1]: Starting systemd-sysext.service - Merge System Extension Images into /usr/ and /opt/...
Jan 29 11:35:02.030036 systemd[1]: Starting systemd-sysusers.service - Create System Users...
Jan 29 11:35:02.034858 systemd[1]: Finished systemd-udev-trigger.service - Coldplug All udev Devices.
Jan 29 11:35:02.036710 systemd[1]: Mounted sys-fs-fuse-connections.mount - FUSE Control File System.
Jan 29 11:35:02.038206 systemd[1]: Mounted sys-kernel-config.mount - Kernel Configuration File System.
Jan 29 11:35:02.039816 systemd[1]: Finished dracut-shutdown.service - Restore /run/initramfs on shutdown.
Jan 29 11:35:02.041433 systemd[1]: Finished systemd-random-seed.service - Load/Save OS Random Seed.
Jan 29 11:35:02.049401 systemd[1]: Reached target first-boot-complete.target - First Boot Complete.
Jan 29 11:35:02.066055 systemd[1]: Starting systemd-machine-id-commit.service - Commit a transient machine-id on disk...
Jan 29 11:35:02.071501 systemd[1]: Starting systemd-udev-settle.service - Wait for udev To Complete Device Initialization...
Jan 29 11:35:02.074401 systemd[1]: Finished systemd-journal-flush.service - Flush Journal to Persistent Storage.
Jan 29 11:35:02.078439 systemd[1]: Finished systemd-sysctl.service - Apply Kernel Variables.
Jan 29 11:35:02.083819 udevadm[1183]: systemd-udev-settle.service is deprecated. Please fix lvm2-activation-early.service, lvm2-activation.service not to pull it in.
Jan 29 11:35:02.089177 systemd[1]: Finished systemd-sysusers.service - Create System Users.
Jan 29 11:35:02.089323 kernel: squashfs: version 4.0 (2009/01/31) Phillip Lougher
Jan 29 11:35:02.091491 systemd[1]: etc-machine\x2did.mount: Deactivated successfully.
Jan 29 11:35:02.092163 systemd[1]: Finished systemd-machine-id-commit.service - Commit a transient machine-id on disk.
Jan 29 11:35:02.102537 systemd[1]: Starting systemd-tmpfiles-setup-dev.service - Create Static Device Nodes in /dev...
Jan 29 11:35:02.122322 kernel: loop1: detected capacity change from 0 to 138184
Jan 29 11:35:02.122847 systemd-tmpfiles[1191]: ACLs are not supported, ignoring.
Jan 29 11:35:02.122867 systemd-tmpfiles[1191]: ACLs are not supported, ignoring.
Jan 29 11:35:02.128889 systemd[1]: Finished systemd-tmpfiles-setup-dev.service - Create Static Device Nodes in /dev.
Jan 29 11:35:02.160690 kernel: loop2: detected capacity change from 0 to 205544
Jan 29 11:35:02.194321 kernel: loop3: detected capacity change from 0 to 140992
Jan 29 11:35:02.206313 kernel: loop4: detected capacity change from 0 to 138184
Jan 29 11:35:02.216325 kernel: loop5: detected capacity change from 0 to 205544
Jan 29 11:35:02.221865 (sd-merge)[1197]: Using extensions 'containerd-flatcar', 'docker-flatcar', 'kubernetes'.
Jan 29 11:35:02.222470 (sd-merge)[1197]: Merged extensions into '/usr'.
Jan 29 11:35:02.227578 systemd[1]: Reloading requested from client PID 1172 ('systemd-sysext') (unit systemd-sysext.service)...
Jan 29 11:35:02.227591 systemd[1]: Reloading...
Jan 29 11:35:02.288328 zram_generator::config[1226]: No configuration found.
Jan 29 11:35:02.328088 ldconfig[1167]: /sbin/ldconfig: /usr/lib/ld.so.conf is not an ELF file - it has the wrong magic bytes at the start.
Jan 29 11:35:02.396135 systemd[1]: /usr/lib/systemd/system/docker.socket:6: ListenStream= references a path below legacy directory /var/run/, updating /var/run/docker.sock → /run/docker.sock; please update the unit file accordingly.
Jan 29 11:35:02.448325 systemd[1]: Reloading finished in 220 ms.
Jan 29 11:35:02.485441 systemd[1]: Finished ldconfig.service - Rebuild Dynamic Linker Cache.
Jan 29 11:35:02.487081 systemd[1]: Finished systemd-sysext.service - Merge System Extension Images into /usr/ and /opt/.
Jan 29 11:35:02.496445 systemd[1]: Starting ensure-sysext.service...
Jan 29 11:35:02.498325 systemd[1]: Starting systemd-tmpfiles-setup.service - Create System Files and Directories...
Jan 29 11:35:02.506638 systemd[1]: Reloading requested from client PID 1260 ('systemctl') (unit ensure-sysext.service)...
Jan 29 11:35:02.506659 systemd[1]: Reloading...
Jan 29 11:35:02.520504 systemd-tmpfiles[1261]: /usr/lib/tmpfiles.d/provision.conf:20: Duplicate line for path "/root", ignoring.
Jan 29 11:35:02.520887 systemd-tmpfiles[1261]: /usr/lib/tmpfiles.d/systemd-flatcar.conf:6: Duplicate line for path "/var/log/journal", ignoring.
Jan 29 11:35:02.521940 systemd-tmpfiles[1261]: /usr/lib/tmpfiles.d/systemd.conf:29: Duplicate line for path "/var/lib/systemd", ignoring.
Jan 29 11:35:02.522233 systemd-tmpfiles[1261]: ACLs are not supported, ignoring.
Jan 29 11:35:02.522319 systemd-tmpfiles[1261]: ACLs are not supported, ignoring.
Jan 29 11:35:02.525468 systemd-tmpfiles[1261]: Detected autofs mount point /boot during canonicalization of boot.
Jan 29 11:35:02.525478 systemd-tmpfiles[1261]: Skipping /boot
Jan 29 11:35:02.535984 systemd-tmpfiles[1261]: Detected autofs mount point /boot during canonicalization of boot.
Jan 29 11:35:02.535996 systemd-tmpfiles[1261]: Skipping /boot
Jan 29 11:35:02.564336 zram_generator::config[1291]: No configuration found.
Jan 29 11:35:02.659650 systemd[1]: /usr/lib/systemd/system/docker.socket:6: ListenStream= references a path below legacy directory /var/run/, updating /var/run/docker.sock → /run/docker.sock; please update the unit file accordingly.
Jan 29 11:35:02.710211 systemd[1]: Reloading finished in 203 ms.
Jan 29 11:35:02.730602 systemd[1]: Finished systemd-hwdb-update.service - Rebuild Hardware Database.
Jan 29 11:35:02.743738 systemd[1]: Finished systemd-tmpfiles-setup.service - Create System Files and Directories.
Jan 29 11:35:02.752030 systemd[1]: Starting audit-rules.service - Load Audit Rules...
Jan 29 11:35:02.754580 systemd[1]: Starting clean-ca-certificates.service - Clean up broken links in /etc/ssl/certs...
Jan 29 11:35:02.757336 systemd[1]: Starting systemd-journal-catalog-update.service - Rebuild Journal Catalog...
Jan 29 11:35:02.762201 systemd[1]: Starting systemd-resolved.service - Network Name Resolution...
Jan 29 11:35:02.767159 systemd[1]: Starting systemd-udevd.service - Rule-based Manager for Device Events and Files...
Jan 29 11:35:02.773987 systemd[1]: Starting systemd-update-utmp.service - Record System Boot/Shutdown in UTMP...
Jan 29 11:35:02.778623 systemd[1]: proc-xen.mount - /proc/xen was skipped because of an unmet condition check (ConditionVirtualization=xen).
Jan 29 11:35:02.778870 systemd[1]: ignition-delete-config.service - Ignition (delete config) was skipped because no trigger condition checks were met.
Jan 29 11:35:02.782386 systemd[1]: Starting modprobe@dm_mod.service - Load Kernel Module dm_mod...
Jan 29 11:35:02.788205 systemd[1]: Starting modprobe@efi_pstore.service - Load Kernel Module efi_pstore...
Jan 29 11:35:02.791113 systemd[1]: Starting modprobe@loop.service - Load Kernel Module loop...
Jan 29 11:35:02.792454 systemd[1]: systemd-binfmt.service - Set Up Additional Binary Formats was skipped because no trigger condition checks were met.
Jan 29 11:35:02.797667 systemd[1]: Starting systemd-userdbd.service - User Database Manager...
Jan 29 11:35:02.799584 systemd[1]: xenserver-pv-version.service - Set fake PV driver version for XenServer was skipped because of an unmet condition check (ConditionVirtualization=xen).
Jan 29 11:35:02.801004 systemd[1]: modprobe@dm_mod.service: Deactivated successfully.
Jan 29 11:35:02.801553 systemd[1]: Finished modprobe@dm_mod.service - Load Kernel Module dm_mod.
Jan 29 11:35:02.808191 systemd[1]: Finished systemd-journal-catalog-update.service - Rebuild Journal Catalog.
Jan 29 11:35:02.811480 systemd[1]: modprobe@efi_pstore.service: Deactivated successfully.
Jan 29 11:35:02.811651 systemd[1]: Finished modprobe@efi_pstore.service - Load Kernel Module efi_pstore.
Jan 29 11:35:02.813949 systemd[1]: modprobe@loop.service: Deactivated successfully.
Jan 29 11:35:02.814287 systemd[1]: Finished modprobe@loop.service - Load Kernel Module loop.
Jan 29 11:35:02.816024 systemd-udevd[1332]: Using default interface naming scheme 'v255'.
Jan 29 11:35:02.820858 systemd[1]: proc-xen.mount - /proc/xen was skipped because of an unmet condition check (ConditionVirtualization=xen).
Jan 29 11:35:02.821066 systemd[1]: ignition-delete-config.service - Ignition (delete config) was skipped because no trigger condition checks were met.
Jan 29 11:35:02.830760 systemd[1]: Starting modprobe@dm_mod.service - Load Kernel Module dm_mod...
Jan 29 11:35:02.831449 augenrules[1361]: No rules
Jan 29 11:35:02.834514 systemd[1]: Starting modprobe@efi_pstore.service - Load Kernel Module efi_pstore...
Jan 29 11:35:02.839377 systemd[1]: Starting modprobe@loop.service - Load Kernel Module loop...
Jan 29 11:35:02.842346 systemd[1]: systemd-binfmt.service - Set Up Additional Binary Formats was skipped because no trigger condition checks were met.
Jan 29 11:35:02.844550 systemd[1]: Starting systemd-update-done.service - Update is Completed...
Jan 29 11:35:02.845956 systemd[1]: xenserver-pv-version.service - Set fake PV driver version for XenServer was skipped because of an unmet condition check (ConditionVirtualization=xen).
Jan 29 11:35:02.847145 systemd[1]: Started systemd-userdbd.service - User Database Manager.
Jan 29 11:35:02.849402 systemd[1]: audit-rules.service: Deactivated successfully.
Jan 29 11:35:02.849698 systemd[1]: Finished audit-rules.service - Load Audit Rules.
Jan 29 11:35:02.852102 systemd[1]: Finished systemd-update-utmp.service - Record System Boot/Shutdown in UTMP.
Jan 29 11:35:02.854651 systemd[1]: modprobe@dm_mod.service: Deactivated successfully.
Jan 29 11:35:02.855199 systemd[1]: Finished modprobe@dm_mod.service - Load Kernel Module dm_mod.
Jan 29 11:35:02.857478 systemd[1]: Started systemd-udevd.service - Rule-based Manager for Device Events and Files.
Jan 29 11:35:02.860405 systemd[1]: modprobe@efi_pstore.service: Deactivated successfully.
Jan 29 11:35:02.861379 systemd[1]: Finished modprobe@efi_pstore.service - Load Kernel Module efi_pstore.
Jan 29 11:35:02.869696 systemd[1]: modprobe@loop.service: Deactivated successfully.
Jan 29 11:35:02.870417 systemd[1]: Finished modprobe@loop.service - Load Kernel Module loop.
Jan 29 11:35:02.883576 systemd[1]: Finished clean-ca-certificates.service - Clean up broken links in /etc/ssl/certs.
Jan 29 11:35:02.887206 systemd[1]: Finished systemd-update-done.service - Update is Completed.
Jan 29 11:35:02.908771 systemd[1]: proc-xen.mount - /proc/xen was skipped because of an unmet condition check (ConditionVirtualization=xen).
Jan 29 11:35:02.918612 systemd[1]: Starting audit-rules.service - Load Audit Rules...
Jan 29 11:35:02.920077 systemd[1]: ignition-delete-config.service - Ignition (delete config) was skipped because no trigger condition checks were met.
Jan 29 11:35:02.925532 systemd[1]: Starting modprobe@dm_mod.service - Load Kernel Module dm_mod...
Jan 29 11:35:02.930544 systemd[1]: Starting modprobe@drm.service - Load Kernel Module drm...
Jan 29 11:35:02.935018 systemd[1]: Starting modprobe@efi_pstore.service - Load Kernel Module efi_pstore...
Jan 29 11:35:02.940491 systemd[1]: Starting modprobe@loop.service - Load Kernel Module loop...
Jan 29 11:35:02.942014 systemd[1]: systemd-binfmt.service - Set Up Additional Binary Formats was skipped because no trigger condition checks were met.
Jan 29 11:35:02.944334 kernel: BTRFS warning: duplicate device /dev/vda3 devid 1 generation 38 scanned by (udev-worker) (1384)
Jan 29 11:35:02.949489 systemd[1]: Starting systemd-networkd.service - Network Configuration...
Jan 29 11:35:02.950900 systemd[1]: update-ca-certificates.service - Update CA bundle at /etc/ssl/certs/ca-certificates.crt was skipped because of an unmet condition check (ConditionPathIsSymbolicLink=!/etc/ssl/certs/ca-certificates.crt).
Jan 29 11:35:02.950936 systemd[1]: xenserver-pv-version.service - Set fake PV driver version for XenServer was skipped because of an unmet condition check (ConditionVirtualization=xen).
Jan 29 11:35:02.951917 systemd[1]: Finished ensure-sysext.service.
Jan 29 11:35:02.953268 systemd[1]: modprobe@dm_mod.service: Deactivated successfully.
Jan 29 11:35:02.953507 systemd[1]: Finished modprobe@dm_mod.service - Load Kernel Module dm_mod.
Jan 29 11:35:02.955350 systemd[1]: modprobe@drm.service: Deactivated successfully.
Jan 29 11:35:02.955576 systemd[1]: Finished modprobe@drm.service - Load Kernel Module drm.
Jan 29 11:35:02.957394 systemd-resolved[1330]: Positive Trust Anchors:
Jan 29 11:35:02.957406 systemd-resolved[1330]: . IN DS 20326 8 2 e06d44b80b8f1d39a95c0b0d7c65d08458e880409bbc683457104237c7f8ec8d
Jan 29 11:35:02.957436 systemd-resolved[1330]: Negative trust anchors: home.arpa 10.in-addr.arpa 16.172.in-addr.arpa 17.172.in-addr.arpa 18.172.in-addr.arpa 19.172.in-addr.arpa 20.172.in-addr.arpa 21.172.in-addr.arpa 22.172.in-addr.arpa 23.172.in-addr.arpa 24.172.in-addr.arpa 25.172.in-addr.arpa 26.172.in-addr.arpa 27.172.in-addr.arpa 28.172.in-addr.arpa 29.172.in-addr.arpa 30.172.in-addr.arpa 31.172.in-addr.arpa 170.0.0.192.in-addr.arpa 171.0.0.192.in-addr.arpa 168.192.in-addr.arpa d.f.ip6.arpa ipv4only.arpa resolver.arpa corp home internal intranet lan local private test
Jan 29 11:35:02.958762 systemd[1]: modprobe@efi_pstore.service: Deactivated successfully.
Jan 29 11:35:02.958982 systemd[1]: Finished modprobe@efi_pstore.service - Load Kernel Module efi_pstore.
Jan 29 11:35:02.960928 systemd[1]: modprobe@loop.service: Deactivated successfully.
Jan 29 11:35:02.961140 systemd[1]: Finished modprobe@loop.service - Load Kernel Module loop.
Jan 29 11:35:02.962754 systemd-resolved[1330]: Defaulting to hostname 'linux'.
Jan 29 11:35:02.971056 systemd[1]: Started systemd-resolved.service - Network Name Resolution.
Jan 29 11:35:02.979346 augenrules[1402]: /sbin/augenrules: No change
Jan 29 11:35:02.981993 systemd[1]: Condition check resulted in dev-ttyS0.device - /dev/ttyS0 being skipped.
Jan 29 11:35:02.986498 augenrules[1435]: No rules
Jan 29 11:35:02.987905 systemd[1]: audit-rules.service: Deactivated successfully.
Jan 29 11:35:02.991462 kernel: input: Power Button as /devices/LNXSYSTM:00/LNXPWRBN:00/input/input2
Jan 29 11:35:02.991364 systemd[1]: Finished audit-rules.service - Load Audit Rules.
Jan 29 11:35:03.004408 kernel: ACPI: button: Power Button [PWRF]
Jan 29 11:35:03.008278 systemd[1]: Reached target nss-lookup.target - Host and Network Name Lookups.
Jan 29 11:35:03.010490 systemd[1]: systemd-pstore.service - Platform Persistent Storage Archival was skipped because of an unmet condition check (ConditionDirectoryNotEmpty=/sys/fs/pstore).
Jan 29 11:35:03.010562 systemd[1]: systemd-repart.service - Repartition Root Disk was skipped because no trigger condition checks were met.
Jan 29 11:35:03.018502 systemd[1]: Starting systemd-timesyncd.service - Network Time Synchronization...
Jan 29 11:35:03.050326 kernel: input: ImExPS/2 Generic Explorer Mouse as /devices/platform/i8042/serio1/input/input3
Jan 29 11:35:03.064506 systemd-networkd[1414]: lo: Link UP
Jan 29 11:35:03.064725 systemd-networkd[1414]: lo: Gained carrier
Jan 29 11:35:03.070609 systemd[1]: Found device dev-disk-by\x2dlabel-OEM.device - /dev/disk/by-label/OEM.
Jan 29 11:35:03.074672 systemd-networkd[1414]: Enumeration completed
Jan 29 11:35:03.077003 kernel: i801_smbus 0000:00:1f.3: SMBus using PCI interrupt
Jan 29 11:35:03.077477 kernel: i2c i2c-0: 1/1 memory slots populated (from DMI)
Jan 29 11:35:03.077665 kernel: i2c i2c-0: Memory type 0x07 not supported yet, not instantiating SPD
Jan 29 11:35:03.115327 kernel: mousedev: PS/2 mouse device common for all mice
Jan 29 11:35:03.116109 systemd[1]: Starting systemd-fsck@dev-disk-by\x2dlabel-OEM.service - File System Check on /dev/disk/by-label/OEM...
Jan 29 11:35:03.116611 systemd-networkd[1414]: eth0: found matching network '/usr/lib/systemd/network/zz-default.network', based on potentially unpredictable interface name.
Jan 29 11:35:03.116616 systemd-networkd[1414]: eth0: Configuring with /usr/lib/systemd/network/zz-default.network.
Jan 29 11:35:03.117523 systemd[1]: Started systemd-networkd.service - Network Configuration.
Jan 29 11:35:03.118070 systemd-networkd[1414]: eth0: Link UP
Jan 29 11:35:03.118075 systemd-networkd[1414]: eth0: Gained carrier
Jan 29 11:35:03.118087 systemd-networkd[1414]: eth0: found matching network '/usr/lib/systemd/network/zz-default.network', based on potentially unpredictable interface name.
Jan 29 11:35:03.119240 systemd[1]: Reached target network.target - Network.
Jan 29 11:35:03.121621 systemd[1]: Starting systemd-networkd-wait-online.service - Wait for Network to be Configured...
Jan 29 11:35:03.131682 kernel: kvm_amd: TSC scaling supported
Jan 29 11:35:03.131729 kernel: kvm_amd: Nested Virtualization enabled
Jan 29 11:35:03.131743 kernel: kvm_amd: Nested Paging enabled
Jan 29 11:35:03.131767 kernel: kvm_amd: LBR virtualization supported
Jan 29 11:35:03.132811 kernel: kvm_amd: Virtual VMLOAD VMSAVE supported
Jan 29 11:35:03.132831 kernel: kvm_amd: Virtual GIF supported
Jan 29 11:35:03.134739 systemd[1]: Starting systemd-vconsole-setup.service - Virtual Console Setup...
Jan 29 11:35:03.137553 systemd[1]: Finished systemd-fsck@dev-disk-by\x2dlabel-OEM.service - File System Check on /dev/disk/by-label/OEM.
Jan 29 11:35:03.139486 systemd-networkd[1414]: eth0: DHCPv4 address 10.0.0.107/16, gateway 10.0.0.1 acquired from 10.0.0.1
Jan 29 11:35:03.163981 kernel: EDAC MC: Ver: 3.0.0
Jan 29 11:35:03.193392 systemd[1]: Started systemd-timesyncd.service - Network Time Synchronization.
Jan 29 11:35:03.636433 systemd-resolved[1330]: Clock change detected. Flushing caches.
Jan 29 11:35:03.636480 systemd-timesyncd[1442]: Contacted time server 10.0.0.1:123 (10.0.0.1).
Jan 29 11:35:03.636521 systemd-timesyncd[1442]: Initial clock synchronization to Wed 2025-01-29 11:35:03.636386 UTC.
Jan 29 11:35:03.668061 systemd[1]: Finished systemd-udev-settle.service - Wait for udev To Complete Device Initialization.
Jan 29 11:35:03.669724 systemd[1]: Finished systemd-vconsole-setup.service - Virtual Console Setup.
Jan 29 11:35:03.672623 systemd[1]: Reached target time-set.target - System Time Set.
Jan 29 11:35:03.686005 systemd[1]: Starting lvm2-activation-early.service - Activation of LVM2 logical volumes...
Jan 29 11:35:03.694293 lvm[1459]: WARNING: Failed to connect to lvmetad. Falling back to device scanning.
Jan 29 11:35:03.723018 systemd[1]: Finished lvm2-activation-early.service - Activation of LVM2 logical volumes.
Jan 29 11:35:03.724659 systemd[1]: Reached target cryptsetup.target - Local Encrypted Volumes.
Jan 29 11:35:03.725831 systemd[1]: Reached target sysinit.target - System Initialization.
Jan 29 11:35:03.727103 systemd[1]: Started motdgen.path - Watch for update engine configuration changes.
Jan 29 11:35:03.728413 systemd[1]: Started user-cloudinit@var-lib-flatcar\x2dinstall-user_data.path - Watch for a cloud-config at /var/lib/flatcar-install/user_data.
Jan 29 11:35:03.730092 systemd[1]: Started logrotate.timer - Daily rotation of log files.
Jan 29 11:35:03.731375 systemd[1]: Started mdadm.timer - Weekly check for MD array's redundancy information..
Jan 29 11:35:03.732681 systemd[1]: Started systemd-tmpfiles-clean.timer - Daily Cleanup of Temporary Directories.
Jan 29 11:35:03.733965 systemd[1]: update-engine-stub.timer - Update Engine Stub Timer was skipped because of an unmet condition check (ConditionPathExists=/usr/.noupdate).
Jan 29 11:35:03.733995 systemd[1]: Reached target paths.target - Path Units.
Jan 29 11:35:03.734948 systemd[1]: Reached target timers.target - Timer Units.
Jan 29 11:35:03.736736 systemd[1]: Listening on dbus.socket - D-Bus System Message Bus Socket.
Jan 29 11:35:03.739674 systemd[1]: Starting docker.socket - Docker Socket for the API...
Jan 29 11:35:03.751394 systemd[1]: Listening on sshd.socket - OpenSSH Server Socket.
Jan 29 11:35:03.753865 systemd[1]: Starting lvm2-activation.service - Activation of LVM2 logical volumes...
Jan 29 11:35:03.755467 systemd[1]: Listening on docker.socket - Docker Socket for the API.
Jan 29 11:35:03.756647 systemd[1]: Reached target sockets.target - Socket Units.
Jan 29 11:35:03.757673 systemd[1]: Reached target basic.target - Basic System.
Jan 29 11:35:03.758691 systemd[1]: addon-config@oem.service - Configure Addon /oem was skipped because no trigger condition checks were met.
Jan 29 11:35:03.758721 systemd[1]: addon-run@oem.service - Run Addon /oem was skipped because no trigger condition checks were met.
Jan 29 11:35:03.759754 systemd[1]: Starting containerd.service - containerd container runtime...
Jan 29 11:35:03.761851 systemd[1]: Starting dbus.service - D-Bus System Message Bus...
Jan 29 11:35:03.766953 systemd[1]: Starting enable-oem-cloudinit.service - Enable cloudinit...
Jan 29 11:35:03.770068 systemd[1]: Starting extend-filesystems.service - Extend Filesystems...
Jan 29 11:35:03.771107 systemd[1]: flatcar-setup-environment.service - Modifies /etc/environment for CoreOS was skipped because of an unmet condition check (ConditionPathExists=/oem/bin/flatcar-setup-environment).
Jan 29 11:35:03.772313 lvm[1463]: WARNING: Failed to connect to lvmetad. Falling back to device scanning.
Jan 29 11:35:03.772676 systemd[1]: Starting motdgen.service - Generate /run/flatcar/motd...
Jan 29 11:35:03.777695 jq[1466]: false
Jan 29 11:35:03.777748 systemd[1]: Starting prepare-helm.service - Unpack helm to /opt/bin...
Jan 29 11:35:03.780061 systemd[1]: Starting ssh-key-proc-cmdline.service - Install an ssh key from /proc/cmdline...
Jan 29 11:35:03.785839 systemd[1]: Starting sshd-keygen.service - Generate sshd host keys...
Jan 29 11:35:03.793766 systemd[1]: Starting systemd-logind.service - User Login Management...
Jan 29 11:35:03.795111 dbus-daemon[1465]: [system] SELinux support is enabled
Jan 29 11:35:03.795277 systemd[1]: tcsd.service - TCG Core Services Daemon was skipped because of an unmet condition check (ConditionPathExists=/dev/tpm0).
Jan 29 11:35:03.795772 systemd[1]: cgroup compatibility translation between legacy and unified hierarchy settings activated. See cgroup-compat debug messages for details.
Jan 29 11:35:03.799680 extend-filesystems[1467]: Found loop3
Jan 29 11:35:03.799680 extend-filesystems[1467]: Found loop4
Jan 29 11:35:03.799680 extend-filesystems[1467]: Found loop5
Jan 29 11:35:03.799680 extend-filesystems[1467]: Found sr0
Jan 29 11:35:03.799680 extend-filesystems[1467]: Found vda
Jan 29 11:35:03.799680 extend-filesystems[1467]: Found vda1
Jan 29 11:35:03.799680 extend-filesystems[1467]: Found vda2
Jan 29 11:35:03.799680 extend-filesystems[1467]: Found vda3
Jan 29 11:35:03.799680 extend-filesystems[1467]: Found usr
Jan 29 11:35:03.799680 extend-filesystems[1467]: Found vda4
Jan 29 11:35:03.799680 extend-filesystems[1467]: Found vda6
Jan 29 11:35:03.799680 extend-filesystems[1467]: Found vda7
Jan 29 11:35:03.799680 extend-filesystems[1467]: Found vda9
Jan 29 11:35:03.799680 extend-filesystems[1467]: Checking size of /dev/vda9
Jan 29 11:35:03.799818 systemd[1]: Starting update-engine.service - Update Engine...
Jan 29 11:35:03.806187 systemd[1]: Starting update-ssh-keys-after-ignition.service - Run update-ssh-keys once after Ignition...
Jan 29 11:35:03.813259 systemd[1]: Started dbus.service - D-Bus System Message Bus.
Jan 29 11:35:03.826667 update_engine[1477]: I20250129 11:35:03.817096 1477 main.cc:92] Flatcar Update Engine starting
Jan 29 11:35:03.826667 update_engine[1477]: I20250129 11:35:03.819183 1477 update_check_scheduler.cc:74] Next update check in 7m44s
Jan 29 11:35:03.819996 systemd[1]: Finished lvm2-activation.service - Activation of LVM2 logical volumes.
Jan 29 11:35:03.826991 jq[1481]: true
Jan 29 11:35:03.832155 systemd[1]: enable-oem-cloudinit.service: Skipped due to 'exec-condition'.
Jan 29 11:35:03.832364 systemd[1]: Condition check resulted in enable-oem-cloudinit.service - Enable cloudinit being skipped.
Jan 29 11:35:03.832710 systemd[1]: motdgen.service: Deactivated successfully.
Jan 29 11:35:03.832916 systemd[1]: Finished motdgen.service - Generate /run/flatcar/motd.
Jan 29 11:35:03.840534 extend-filesystems[1467]: Resized partition /dev/vda9
Jan 29 11:35:03.840266 systemd[1]: ssh-key-proc-cmdline.service: Deactivated successfully.
Jan 29 11:35:03.846594 extend-filesystems[1490]: resize2fs 1.47.1 (20-May-2024)
Jan 29 11:35:03.840474 systemd[1]: Finished ssh-key-proc-cmdline.service - Install an ssh key from /proc/cmdline.
Jan 29 11:35:03.849568 (ntainerd)[1492]: containerd.service: Referenced but unset environment variable evaluates to an empty string: TORCX_IMAGEDIR, TORCX_UNPACKDIR
Jan 29 11:35:03.856081 systemd[1]: system-cloudinit@usr-share-oem-cloud\x2dconfig.yml.service - Load cloud-config from /usr/share/oem/cloud-config.yml was skipped because of an unmet condition check (ConditionFileNotEmpty=/usr/share/oem/cloud-config.yml).
Jan 29 11:35:03.856111 systemd[1]: Reached target system-config.target - Load system-provided cloud configs.
Jan 29 11:35:03.857800 systemd[1]: user-cloudinit-proc-cmdline.service - Load cloud-config from url defined in /proc/cmdline was skipped because of an unmet condition check (ConditionKernelCommandLine=cloud-config-url).
Jan 29 11:35:03.860560 jq[1491]: true
Jan 29 11:35:03.860782 kernel: BTRFS warning: duplicate device /dev/vda3 devid 1 generation 38 scanned by (udev-worker) (1376)
Jan 29 11:35:03.857833 systemd[1]: Reached target user-config.target - Load user-provided cloud configs.
Jan 29 11:35:03.862281 systemd-logind[1475]: Watching system buttons on /dev/input/event1 (Power Button)
Jan 29 11:35:03.863727 systemd-logind[1475]: Watching system buttons on /dev/input/event0 (AT Translated Set 2 keyboard)
Jan 29 11:35:03.867480 systemd-logind[1475]: New seat seat0.
Jan 29 11:35:03.868045 kernel: EXT4-fs (vda9): resizing filesystem from 553472 to 1864699 blocks
Jan 29 11:35:03.872531 systemd[1]: Started systemd-logind.service - User Login Management.
Jan 29 11:35:03.877990 systemd[1]: Started update-engine.service - Update Engine.
Jan 29 11:35:03.878252 tar[1487]: linux-amd64/helm
Jan 29 11:35:03.895233 systemd[1]: Started locksmithd.service - Cluster reboot manager.
Jan 29 11:35:04.013395 locksmithd[1506]: locksmithd starting currentOperation="UPDATE_STATUS_IDLE" strategy="reboot"
Jan 29 11:35:04.082764 sshd_keygen[1485]: ssh-keygen: generating new host keys: RSA ECDSA ED25519
Jan 29 11:35:04.107143 systemd[1]: Finished sshd-keygen.service - Generate sshd host keys.
Jan 29 11:35:04.114842 systemd[1]: Starting issuegen.service - Generate /run/issue...
Jan 29 11:35:04.123739 systemd[1]: issuegen.service: Deactivated successfully.
Jan 29 11:35:04.123964 systemd[1]: Finished issuegen.service - Generate /run/issue.
Jan 29 11:35:04.133899 systemd[1]: Starting systemd-user-sessions.service - Permit User Sessions...
Jan 29 11:35:04.134707 kernel: EXT4-fs (vda9): resized filesystem to 1864699
Jan 29 11:35:04.146260 systemd[1]: Finished systemd-user-sessions.service - Permit User Sessions.
Jan 29 11:35:04.149343 systemd[1]: Started getty@tty1.service - Getty on tty1.
Jan 29 11:35:04.151900 systemd[1]: Started serial-getty@ttyS0.service - Serial Getty on ttyS0.
Jan 29 11:35:04.153167 systemd[1]: Reached target getty.target - Login Prompts.
Jan 29 11:35:04.360946 extend-filesystems[1490]: Filesystem at /dev/vda9 is mounted on /; on-line resizing required
Jan 29 11:35:04.360946 extend-filesystems[1490]: old_desc_blocks = 1, new_desc_blocks = 1
Jan 29 11:35:04.360946 extend-filesystems[1490]: The filesystem on /dev/vda9 is now 1864699 (4k) blocks long.
Jan 29 11:35:04.366341 extend-filesystems[1467]: Resized filesystem in /dev/vda9
Jan 29 11:35:04.364707 systemd[1]: extend-filesystems.service: Deactivated successfully.
Jan 29 11:35:04.382800 containerd[1492]: time="2025-01-29T11:35:04.361728871Z" level=info msg="starting containerd" revision=9b2ad7760328148397346d10c7b2004271249db4 version=v1.7.23
Jan 29 11:35:04.364944 systemd[1]: Finished extend-filesystems.service - Extend Filesystems.
Jan 29 11:35:04.384217 containerd[1492]: time="2025-01-29T11:35:04.384086930Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.aufs\"..." type=io.containerd.snapshotter.v1
Jan 29 11:35:04.386393 containerd[1492]: time="2025-01-29T11:35:04.385737456Z" level=info msg="skip loading plugin \"io.containerd.snapshotter.v1.aufs\"..." error="aufs is not supported (modprobe aufs failed: exit status 1 \"modprobe: FATAL: Module aufs not found in directory /lib/modules/6.6.74-flatcar\\n\"): skip plugin" type=io.containerd.snapshotter.v1
Jan 29 11:35:04.386393 containerd[1492]: time="2025-01-29T11:35:04.385984429Z" level=info msg="loading plugin \"io.containerd.event.v1.exchange\"..." type=io.containerd.event.v1
Jan 29 11:35:04.386393 containerd[1492]: time="2025-01-29T11:35:04.386046535Z" level=info msg="loading plugin \"io.containerd.internal.v1.opt\"..." type=io.containerd.internal.v1
Jan 29 11:35:04.386393 containerd[1492]: time="2025-01-29T11:35:04.386248694Z" level=info msg="loading plugin \"io.containerd.warning.v1.deprecations\"..." type=io.containerd.warning.v1
Jan 29 11:35:04.386393 containerd[1492]: time="2025-01-29T11:35:04.386275495Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.blockfile\"..." type=io.containerd.snapshotter.v1
Jan 29 11:35:04.386393 containerd[1492]: time="2025-01-29T11:35:04.386355355Z" level=info msg="skip loading plugin \"io.containerd.snapshotter.v1.blockfile\"..." error="no scratch file generator: skip plugin" type=io.containerd.snapshotter.v1
Jan 29 11:35:04.386393 containerd[1492]: time="2025-01-29T11:35:04.386374841Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.btrfs\"..." type=io.containerd.snapshotter.v1
Jan 29 11:35:04.386653 containerd[1492]: time="2025-01-29T11:35:04.386596286Z" level=info msg="skip loading plugin \"io.containerd.snapshotter.v1.btrfs\"..." error="path /var/lib/containerd/io.containerd.snapshotter.v1.btrfs (ext4) must be a btrfs filesystem to be used with the btrfs snapshotter: skip plugin" type=io.containerd.snapshotter.v1
Jan 29 11:35:04.386653 containerd[1492]: time="2025-01-29T11:35:04.386640569Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.devmapper\"..." type=io.containerd.snapshotter.v1
Jan 29 11:35:04.386702 containerd[1492]: time="2025-01-29T11:35:04.386666418Z" level=info msg="skip loading plugin \"io.containerd.snapshotter.v1.devmapper\"..." error="devmapper not configured: skip plugin" type=io.containerd.snapshotter.v1
Jan 29 11:35:04.386702 containerd[1492]: time="2025-01-29T11:35:04.386682979Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.native\"..." type=io.containerd.snapshotter.v1
Jan 29 11:35:04.387046 containerd[1492]: time="2025-01-29T11:35:04.386995285Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.overlayfs\"..." type=io.containerd.snapshotter.v1
Jan 29 11:35:04.387355 containerd[1492]: time="2025-01-29T11:35:04.387333609Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.zfs\"..." type=io.containerd.snapshotter.v1
Jan 29 11:35:04.387527 containerd[1492]: time="2025-01-29T11:35:04.387500953Z" level=info msg="skip loading plugin \"io.containerd.snapshotter.v1.zfs\"..." error="path /var/lib/containerd/io.containerd.snapshotter.v1.zfs must be a zfs filesystem to be used with the zfs snapshotter: skip plugin" type=io.containerd.snapshotter.v1
Jan 29 11:35:04.387527 containerd[1492]: time="2025-01-29T11:35:04.387519207Z" level=info msg="loading plugin \"io.containerd.content.v1.content\"..." type=io.containerd.content.v1
Jan 29 11:35:04.387678 containerd[1492]: time="2025-01-29T11:35:04.387615979Z" level=info msg="loading plugin \"io.containerd.metadata.v1.bolt\"..." type=io.containerd.metadata.v1
Jan 29 11:35:04.387756 containerd[1492]: time="2025-01-29T11:35:04.387739490Z" level=info msg="metadata content store policy set" policy=shared
Jan 29 11:35:04.484763 tar[1487]: linux-amd64/LICENSE
Jan 29 11:35:04.484864 tar[1487]: linux-amd64/README.md
Jan 29 11:35:04.501856 systemd[1]: Finished prepare-helm.service - Unpack helm to /opt/bin.
Jan 29 11:35:04.659738 bash[1519]: Updated "/home/core/.ssh/authorized_keys"
Jan 29 11:35:04.662192 systemd[1]: Finished update-ssh-keys-after-ignition.service - Run update-ssh-keys once after Ignition.
Jan 29 11:35:04.664361 systemd[1]: sshkeys.service was skipped because no trigger condition checks were met.
Jan 29 11:35:04.699829 containerd[1492]: time="2025-01-29T11:35:04.699729293Z" level=info msg="loading plugin \"io.containerd.gc.v1.scheduler\"..." type=io.containerd.gc.v1
Jan 29 11:35:04.699829 containerd[1492]: time="2025-01-29T11:35:04.699828128Z" level=info msg="loading plugin \"io.containerd.differ.v1.walking\"..." type=io.containerd.differ.v1
Jan 29 11:35:04.699829 containerd[1492]: time="2025-01-29T11:35:04.699845480Z" level=info msg="loading plugin \"io.containerd.lease.v1.manager\"..." type=io.containerd.lease.v1
Jan 29 11:35:04.700054 containerd[1492]: time="2025-01-29T11:35:04.699861941Z" level=info msg="loading plugin \"io.containerd.streaming.v1.manager\"..." type=io.containerd.streaming.v1
Jan 29 11:35:04.700054 containerd[1492]: time="2025-01-29T11:35:04.699878011Z" level=info msg="loading plugin \"io.containerd.runtime.v1.linux\"..." type=io.containerd.runtime.v1
Jan 29 11:35:04.700147 containerd[1492]: time="2025-01-29T11:35:04.700116028Z" level=info msg="loading plugin \"io.containerd.monitor.v1.cgroups\"..." type=io.containerd.monitor.v1
Jan 29 11:35:04.700436 containerd[1492]: time="2025-01-29T11:35:04.700396975Z" level=info msg="loading plugin \"io.containerd.runtime.v2.task\"..." type=io.containerd.runtime.v2
Jan 29 11:35:04.700600 containerd[1492]: time="2025-01-29T11:35:04.700567284Z" level=info msg="loading plugin \"io.containerd.runtime.v2.shim\"..." type=io.containerd.runtime.v2
Jan 29 11:35:04.700646 containerd[1492]: time="2025-01-29T11:35:04.700595387Z" level=info msg="loading plugin \"io.containerd.sandbox.store.v1.local\"..." type=io.containerd.sandbox.store.v1
Jan 29 11:35:04.700646 containerd[1492]: time="2025-01-29T11:35:04.700616256Z" level=info msg="loading plugin \"io.containerd.sandbox.controller.v1.local\"..." type=io.containerd.sandbox.controller.v1
Jan 29 11:35:04.700691 containerd[1492]: time="2025-01-29T11:35:04.700659958Z" level=info msg="loading plugin \"io.containerd.service.v1.containers-service\"..." type=io.containerd.service.v1
Jan 29 11:35:04.700691 containerd[1492]: time="2025-01-29T11:35:04.700679715Z" level=info msg="loading plugin \"io.containerd.service.v1.content-service\"..." type=io.containerd.service.v1
Jan 29 11:35:04.700727 containerd[1492]: time="2025-01-29T11:35:04.700696126Z" level=info msg="loading plugin \"io.containerd.service.v1.diff-service\"..." type=io.containerd.service.v1
Jan 29 11:35:04.700727 containerd[1492]: time="2025-01-29T11:35:04.700714380Z" level=info msg="loading plugin \"io.containerd.service.v1.images-service\"..." type=io.containerd.service.v1
Jan 29 11:35:04.700768 containerd[1492]: time="2025-01-29T11:35:04.700733466Z" level=info msg="loading plugin \"io.containerd.service.v1.introspection-service\"..." type=io.containerd.service.v1
Jan 29 11:35:04.700768 containerd[1492]: time="2025-01-29T11:35:04.700751439Z" level=info msg="loading plugin \"io.containerd.service.v1.namespaces-service\"..." type=io.containerd.service.v1
Jan 29 11:35:04.700803 containerd[1492]: time="2025-01-29T11:35:04.700767439Z" level=info msg="loading plugin \"io.containerd.service.v1.snapshots-service\"..." type=io.containerd.service.v1
Jan 29 11:35:04.700803 containerd[1492]: time="2025-01-29T11:35:04.700783770Z" level=info msg="loading plugin \"io.containerd.service.v1.tasks-service\"..." type=io.containerd.service.v1
Jan 29 11:35:04.700904 containerd[1492]: time="2025-01-29T11:35:04.700807184Z" level=info msg="loading plugin \"io.containerd.grpc.v1.containers\"..." type=io.containerd.grpc.v1
Jan 29 11:35:04.700904 containerd[1492]: time="2025-01-29T11:35:04.700825218Z" level=info msg="loading plugin \"io.containerd.grpc.v1.content\"..." type=io.containerd.grpc.v1
Jan 29 11:35:04.700904 containerd[1492]: time="2025-01-29T11:35:04.700849493Z" level=info msg="loading plugin \"io.containerd.grpc.v1.diff\"..." type=io.containerd.grpc.v1
Jan 29 11:35:04.700904 containerd[1492]: time="2025-01-29T11:35:04.700865834Z" level=info msg="loading plugin \"io.containerd.grpc.v1.events\"..." type=io.containerd.grpc.v1
Jan 29 11:35:04.700904 containerd[1492]: time="2025-01-29T11:35:04.700882415Z" level=info msg="loading plugin \"io.containerd.grpc.v1.images\"..." type=io.containerd.grpc.v1
Jan 29 11:35:04.700904 containerd[1492]: time="2025-01-29T11:35:04.700900789Z" level=info msg="loading plugin \"io.containerd.grpc.v1.introspection\"..." type=io.containerd.grpc.v1
Jan 29 11:35:04.701026 containerd[1492]: time="2025-01-29T11:35:04.700917902Z" level=info msg="loading plugin \"io.containerd.grpc.v1.leases\"..." type=io.containerd.grpc.v1
Jan 29 11:35:04.701026 containerd[1492]: time="2025-01-29T11:35:04.700935945Z" level=info msg="loading plugin \"io.containerd.grpc.v1.namespaces\"..." type=io.containerd.grpc.v1
Jan 29 11:35:04.701026 containerd[1492]: time="2025-01-29T11:35:04.700952326Z" level=info msg="loading plugin \"io.containerd.grpc.v1.sandbox-controllers\"..." type=io.containerd.grpc.v1
Jan 29 11:35:04.701026 containerd[1492]: time="2025-01-29T11:35:04.700971993Z" level=info msg="loading plugin \"io.containerd.grpc.v1.sandboxes\"..." type=io.containerd.grpc.v1
Jan 29 11:35:04.701026 containerd[1492]: time="2025-01-29T11:35:04.700987472Z" level=info msg="loading plugin \"io.containerd.grpc.v1.snapshots\"..." type=io.containerd.grpc.v1
Jan 29 11:35:04.701026 containerd[1492]: time="2025-01-29T11:35:04.701003051Z" level=info msg="loading plugin \"io.containerd.grpc.v1.streaming\"..." type=io.containerd.grpc.v1
Jan 29 11:35:04.701026 containerd[1492]: time="2025-01-29T11:35:04.701027637Z" level=info msg="loading plugin \"io.containerd.grpc.v1.tasks\"..." type=io.containerd.grpc.v1
Jan 29 11:35:04.701149 containerd[1492]: time="2025-01-29T11:35:04.701047054Z" level=info msg="loading plugin \"io.containerd.transfer.v1.local\"..." type=io.containerd.transfer.v1
Jan 29 11:35:04.701149 containerd[1492]: time="2025-01-29T11:35:04.701070768Z" level=info msg="loading plugin \"io.containerd.grpc.v1.transfer\"..." type=io.containerd.grpc.v1
Jan 29 11:35:04.701149 containerd[1492]: time="2025-01-29T11:35:04.701087049Z" level=info msg="loading plugin \"io.containerd.grpc.v1.version\"..." type=io.containerd.grpc.v1
Jan 29 11:35:04.701149 containerd[1492]: time="2025-01-29T11:35:04.701100845Z" level=info msg="loading plugin \"io.containerd.internal.v1.restart\"..." type=io.containerd.internal.v1
Jan 29 11:35:04.701217 containerd[1492]: time="2025-01-29T11:35:04.701163973Z" level=info msg="loading plugin \"io.containerd.tracing.processor.v1.otlp\"..." type=io.containerd.tracing.processor.v1
Jan 29 11:35:04.701217 containerd[1492]: time="2025-01-29T11:35:04.701186295Z" level=info msg="skip loading plugin \"io.containerd.tracing.processor.v1.otlp\"..." error="skip plugin: tracing endpoint not configured" type=io.containerd.tracing.processor.v1
Jan 29 11:35:04.701217 containerd[1492]: time="2025-01-29T11:35:04.701199299Z" level=info msg="loading plugin \"io.containerd.internal.v1.tracing\"..." type=io.containerd.internal.v1
Jan 29 11:35:04.701277 containerd[1492]: time="2025-01-29T11:35:04.701218004Z" level=info msg="skip loading plugin \"io.containerd.internal.v1.tracing\"..." error="skip plugin: tracing endpoint not configured" type=io.containerd.internal.v1
Jan 29 11:35:04.701277 containerd[1492]: time="2025-01-29T11:35:04.701231169Z" level=info msg="loading plugin \"io.containerd.grpc.v1.healthcheck\"..." type=io.containerd.grpc.v1
Jan 29 11:35:04.701277 containerd[1492]: time="2025-01-29T11:35:04.701246969Z" level=info msg="loading plugin \"io.containerd.nri.v1.nri\"..." type=io.containerd.nri.v1
Jan 29 11:35:04.701277 containerd[1492]: time="2025-01-29T11:35:04.701260183Z" level=info msg="NRI interface is disabled by configuration."
Jan 29 11:35:04.701277 containerd[1492]: time="2025-01-29T11:35:04.701273108Z" level=info msg="loading plugin \"io.containerd.grpc.v1.cri\"..." type=io.containerd.grpc.v1
Jan 29 11:35:04.701650 containerd[1492]: time="2025-01-29T11:35:04.701581857Z" level=info msg="Start cri plugin with config {PluginConfig:{ContainerdConfig:{Snapshotter:overlayfs DefaultRuntimeName:runc DefaultRuntime:{Type: Path: Engine: PodAnnotations:[] ContainerAnnotations:[] Root: Options:map[] PrivilegedWithoutHostDevices:false PrivilegedWithoutHostDevicesAllDevicesAllowed:false BaseRuntimeSpec: NetworkPluginConfDir: NetworkPluginMaxConfNum:0 Snapshotter: SandboxMode:} UntrustedWorkloadRuntime:{Type: Path: Engine: PodAnnotations:[] ContainerAnnotations:[] Root: Options:map[] PrivilegedWithoutHostDevices:false PrivilegedWithoutHostDevicesAllDevicesAllowed:false BaseRuntimeSpec: NetworkPluginConfDir: NetworkPluginMaxConfNum:0 Snapshotter: SandboxMode:} Runtimes:map[runc:{Type:io.containerd.runc.v2 Path: Engine: PodAnnotations:[] ContainerAnnotations:[] Root: Options:map[SystemdCgroup:true] PrivilegedWithoutHostDevices:false PrivilegedWithoutHostDevicesAllDevicesAllowed:false BaseRuntimeSpec: NetworkPluginConfDir: NetworkPluginMaxConfNum:0 Snapshotter: SandboxMode:podsandbox}] NoPivot:false DisableSnapshotAnnotations:true DiscardUnpackedLayers:false IgnoreBlockIONotEnabledErrors:false IgnoreRdtNotEnabledErrors:false} CniConfig:{NetworkPluginBinDir:/opt/cni/bin NetworkPluginConfDir:/etc/cni/net.d NetworkPluginMaxConfNum:1 NetworkPluginSetupSerially:false NetworkPluginConfTemplate: IPPreference:} Registry:{ConfigPath: Mirrors:map[] Configs:map[] Auths:map[] Headers:map[]} ImageDecryption:{KeyModel:node} DisableTCPService:true StreamServerAddress:127.0.0.1 StreamServerPort:0 StreamIdleTimeout:4h0m0s EnableSelinux:true SelinuxCategoryRange:1024 SandboxImage:registry.k8s.io/pause:3.8 StatsCollectPeriod:10 SystemdCgroup:false EnableTLSStreaming:false X509KeyPairStreaming:{TLSCertFile: TLSKeyFile:} MaxContainerLogLineSize:16384 DisableCgroup:false DisableApparmor:false RestrictOOMScoreAdj:false MaxConcurrentDownloads:3 DisableProcMount:false
UnsetSeccompProfile: TolerateMissingHugetlbController:true DisableHugetlbController:true DeviceOwnershipFromSecurityContext:false IgnoreImageDefinedVolumes:false NetNSMountsUnderStateDir:false EnableUnprivilegedPorts:false EnableUnprivilegedICMP:false EnableCDI:false CDISpecDirs:[/etc/cdi /var/run/cdi] ImagePullProgressTimeout:5m0s DrainExecSyncIOTimeout:0s ImagePullWithSyncFs:false IgnoreDeprecationWarnings:[]} ContainerdRootDir:/var/lib/containerd ContainerdEndpoint:/run/containerd/containerd.sock RootDir:/var/lib/containerd/io.containerd.grpc.v1.cri StateDir:/run/containerd/io.containerd.grpc.v1.cri}" Jan 29 11:35:04.701782 containerd[1492]: time="2025-01-29T11:35:04.701657769Z" level=info msg="Connect containerd service" Jan 29 11:35:04.701782 containerd[1492]: time="2025-01-29T11:35:04.701693717Z" level=info msg="using legacy CRI server" Jan 29 11:35:04.701782 containerd[1492]: time="2025-01-29T11:35:04.701700599Z" level=info msg="using experimental NRI integration - disable nri plugin to prevent this" Jan 29 11:35:04.701848 containerd[1492]: time="2025-01-29T11:35:04.701804274Z" level=info msg="Get image filesystem path \"/var/lib/containerd/io.containerd.snapshotter.v1.overlayfs\"" Jan 29 11:35:04.702417 containerd[1492]: time="2025-01-29T11:35:04.702370045Z" level=error msg="failed to load cni during init, please check CRI plugin status before setting up network for pods" error="cni config load failed: no network config found in /etc/cni/net.d: cni plugin not initialized: failed to load cni config" Jan 29 11:35:04.702555 containerd[1492]: time="2025-01-29T11:35:04.702514686Z" level=info msg="Start subscribing containerd event" Jan 29 11:35:04.702555 containerd[1492]: time="2025-01-29T11:35:04.702559520Z" level=info msg="Start recovering state" Jan 29 11:35:04.702727 containerd[1492]: time="2025-01-29T11:35:04.702709451Z" level=info msg="Start event monitor" Jan 29 11:35:04.702753 containerd[1492]: time="2025-01-29T11:35:04.702739127Z" level=info msg="Start 
snapshots syncer" Jan 29 11:35:04.702753 containerd[1492]: time="2025-01-29T11:35:04.702748715Z" level=info msg="Start cni network conf syncer for default" Jan 29 11:35:04.702821 containerd[1492]: time="2025-01-29T11:35:04.702757542Z" level=info msg="Start streaming server" Jan 29 11:35:04.702870 containerd[1492]: time="2025-01-29T11:35:04.702712277Z" level=info msg=serving... address=/run/containerd/containerd.sock.ttrpc Jan 29 11:35:04.702936 containerd[1492]: time="2025-01-29T11:35:04.702918483Z" level=info msg=serving... address=/run/containerd/containerd.sock Jan 29 11:35:04.703178 containerd[1492]: time="2025-01-29T11:35:04.702979768Z" level=info msg="containerd successfully booted in 0.343140s" Jan 29 11:35:04.703081 systemd[1]: Started containerd.service - containerd container runtime. Jan 29 11:35:04.990849 systemd-networkd[1414]: eth0: Gained IPv6LL Jan 29 11:35:04.994079 systemd[1]: Finished systemd-networkd-wait-online.service - Wait for Network to be Configured. Jan 29 11:35:04.995888 systemd[1]: Reached target network-online.target - Network is Online. Jan 29 11:35:05.007835 systemd[1]: Starting coreos-metadata.service - QEMU metadata agent... Jan 29 11:35:05.010272 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent... Jan 29 11:35:05.012423 systemd[1]: Starting nvidia.service - NVIDIA Configure Service... Jan 29 11:35:05.032382 systemd[1]: coreos-metadata.service: Deactivated successfully. Jan 29 11:35:05.032641 systemd[1]: Finished coreos-metadata.service - QEMU metadata agent. Jan 29 11:35:05.034378 systemd[1]: packet-phone-home.service - Report Success to Packet was skipped because no trigger condition checks were met. Jan 29 11:35:05.036775 systemd[1]: Finished nvidia.service - NVIDIA Configure Service. Jan 29 11:35:05.622312 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent. Jan 29 11:35:05.623960 systemd[1]: Reached target multi-user.target - Multi-User System. 
Jan 29 11:35:05.625248 systemd[1]: Startup finished in 975ms (kernel) + 7.412s (initrd) + 4.020s (userspace) = 12.407s. Jan 29 11:35:05.646966 (kubelet)[1579]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS, KUBELET_KUBEADM_ARGS Jan 29 11:35:06.060586 kubelet[1579]: E0129 11:35:06.060462 1579 run.go:72] "command failed" err="failed to load kubelet config file, path: /var/lib/kubelet/config.yaml, error: failed to load Kubelet config file /var/lib/kubelet/config.yaml, error failed to read kubelet config file \"/var/lib/kubelet/config.yaml\", error: open /var/lib/kubelet/config.yaml: no such file or directory" Jan 29 11:35:06.064414 systemd[1]: kubelet.service: Main process exited, code=exited, status=1/FAILURE Jan 29 11:35:06.064615 systemd[1]: kubelet.service: Failed with result 'exit-code'. Jan 29 11:35:10.664418 systemd[1]: Created slice system-sshd.slice - Slice /system/sshd. Jan 29 11:35:10.665658 systemd[1]: Started sshd@0-10.0.0.107:22-10.0.0.1:33568.service - OpenSSH per-connection server daemon (10.0.0.1:33568). Jan 29 11:35:10.716130 sshd[1592]: Accepted publickey for core from 10.0.0.1 port 33568 ssh2: RSA SHA256:vglZfOE0APgUpJbg1gfFAEfTfpzlM1a6LSSiwuQWd4A Jan 29 11:35:10.717824 sshd-session[1592]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Jan 29 11:35:10.726342 systemd-logind[1475]: New session 1 of user core. Jan 29 11:35:10.727592 systemd[1]: Created slice user-500.slice - User Slice of UID 500. Jan 29 11:35:10.733828 systemd[1]: Starting user-runtime-dir@500.service - User Runtime Directory /run/user/500... Jan 29 11:35:10.744890 systemd[1]: Finished user-runtime-dir@500.service - User Runtime Directory /run/user/500. Jan 29 11:35:10.747764 systemd[1]: Starting user@500.service - User Manager for UID 500... 
Jan 29 11:35:10.756576 (systemd)[1596]: pam_unix(systemd-user:session): session opened for user core(uid=500) by (uid=0) Jan 29 11:35:10.882742 systemd[1596]: Queued start job for default target default.target. Jan 29 11:35:10.897005 systemd[1596]: Created slice app.slice - User Application Slice. Jan 29 11:35:10.897034 systemd[1596]: Reached target paths.target - Paths. Jan 29 11:35:10.897047 systemd[1596]: Reached target timers.target - Timers. Jan 29 11:35:10.898696 systemd[1596]: Starting dbus.socket - D-Bus User Message Bus Socket... Jan 29 11:35:10.910253 systemd[1596]: Listening on dbus.socket - D-Bus User Message Bus Socket. Jan 29 11:35:10.910384 systemd[1596]: Reached target sockets.target - Sockets. Jan 29 11:35:10.910404 systemd[1596]: Reached target basic.target - Basic System. Jan 29 11:35:10.910444 systemd[1596]: Reached target default.target - Main User Target. Jan 29 11:35:10.910480 systemd[1596]: Startup finished in 147ms. Jan 29 11:35:10.910865 systemd[1]: Started user@500.service - User Manager for UID 500. Jan 29 11:35:10.912526 systemd[1]: Started session-1.scope - Session 1 of User core. Jan 29 11:35:10.974093 systemd[1]: Started sshd@1-10.0.0.107:22-10.0.0.1:33578.service - OpenSSH per-connection server daemon (10.0.0.1:33578). Jan 29 11:35:11.023588 sshd[1607]: Accepted publickey for core from 10.0.0.1 port 33578 ssh2: RSA SHA256:vglZfOE0APgUpJbg1gfFAEfTfpzlM1a6LSSiwuQWd4A Jan 29 11:35:11.025345 sshd-session[1607]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Jan 29 11:35:11.029721 systemd-logind[1475]: New session 2 of user core. Jan 29 11:35:11.037781 systemd[1]: Started session-2.scope - Session 2 of User core. Jan 29 11:35:11.091652 sshd[1609]: Connection closed by 10.0.0.1 port 33578 Jan 29 11:35:11.092162 sshd-session[1607]: pam_unix(sshd:session): session closed for user core Jan 29 11:35:11.099403 systemd[1]: sshd@1-10.0.0.107:22-10.0.0.1:33578.service: Deactivated successfully. 
Jan 29 11:35:11.101265 systemd[1]: session-2.scope: Deactivated successfully. Jan 29 11:35:11.102912 systemd-logind[1475]: Session 2 logged out. Waiting for processes to exit. Jan 29 11:35:11.119967 systemd[1]: Started sshd@2-10.0.0.107:22-10.0.0.1:33592.service - OpenSSH per-connection server daemon (10.0.0.1:33592). Jan 29 11:35:11.121069 systemd-logind[1475]: Removed session 2. Jan 29 11:35:11.159354 sshd[1614]: Accepted publickey for core from 10.0.0.1 port 33592 ssh2: RSA SHA256:vglZfOE0APgUpJbg1gfFAEfTfpzlM1a6LSSiwuQWd4A Jan 29 11:35:11.160712 sshd-session[1614]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Jan 29 11:35:11.164757 systemd-logind[1475]: New session 3 of user core. Jan 29 11:35:11.175738 systemd[1]: Started session-3.scope - Session 3 of User core. Jan 29 11:35:11.225355 sshd[1616]: Connection closed by 10.0.0.1 port 33592 Jan 29 11:35:11.225968 sshd-session[1614]: pam_unix(sshd:session): session closed for user core Jan 29 11:35:11.238431 systemd[1]: sshd@2-10.0.0.107:22-10.0.0.1:33592.service: Deactivated successfully. Jan 29 11:35:11.240194 systemd[1]: session-3.scope: Deactivated successfully. Jan 29 11:35:11.241792 systemd-logind[1475]: Session 3 logged out. Waiting for processes to exit. Jan 29 11:35:11.251899 systemd[1]: Started sshd@3-10.0.0.107:22-10.0.0.1:33598.service - OpenSSH per-connection server daemon (10.0.0.1:33598). Jan 29 11:35:11.252720 systemd-logind[1475]: Removed session 3. Jan 29 11:35:11.286962 sshd[1621]: Accepted publickey for core from 10.0.0.1 port 33598 ssh2: RSA SHA256:vglZfOE0APgUpJbg1gfFAEfTfpzlM1a6LSSiwuQWd4A Jan 29 11:35:11.288299 sshd-session[1621]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Jan 29 11:35:11.291977 systemd-logind[1475]: New session 4 of user core. Jan 29 11:35:11.307930 systemd[1]: Started session-4.scope - Session 4 of User core. 
Jan 29 11:35:11.363407 sshd[1623]: Connection closed by 10.0.0.1 port 33598 Jan 29 11:35:11.363913 sshd-session[1621]: pam_unix(sshd:session): session closed for user core Jan 29 11:35:11.374487 systemd[1]: sshd@3-10.0.0.107:22-10.0.0.1:33598.service: Deactivated successfully. Jan 29 11:35:11.376034 systemd[1]: session-4.scope: Deactivated successfully. Jan 29 11:35:11.377400 systemd-logind[1475]: Session 4 logged out. Waiting for processes to exit. Jan 29 11:35:11.386875 systemd[1]: Started sshd@4-10.0.0.107:22-10.0.0.1:33604.service - OpenSSH per-connection server daemon (10.0.0.1:33604). Jan 29 11:35:11.387742 systemd-logind[1475]: Removed session 4. Jan 29 11:35:11.421480 sshd[1628]: Accepted publickey for core from 10.0.0.1 port 33604 ssh2: RSA SHA256:vglZfOE0APgUpJbg1gfFAEfTfpzlM1a6LSSiwuQWd4A Jan 29 11:35:11.422696 sshd-session[1628]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Jan 29 11:35:11.426445 systemd-logind[1475]: New session 5 of user core. Jan 29 11:35:11.435751 systemd[1]: Started session-5.scope - Session 5 of User core. Jan 29 11:35:11.491600 sudo[1631]: core : PWD=/home/core ; USER=root ; COMMAND=/usr/sbin/setenforce 1 Jan 29 11:35:11.492035 sudo[1631]: pam_unix(sudo:session): session opened for user root(uid=0) by core(uid=500) Jan 29 11:35:11.512304 sudo[1631]: pam_unix(sudo:session): session closed for user root Jan 29 11:35:11.513536 sshd[1630]: Connection closed by 10.0.0.1 port 33604 Jan 29 11:35:11.513876 sshd-session[1628]: pam_unix(sshd:session): session closed for user core Jan 29 11:35:11.525223 systemd[1]: sshd@4-10.0.0.107:22-10.0.0.1:33604.service: Deactivated successfully. Jan 29 11:35:11.526911 systemd[1]: session-5.scope: Deactivated successfully. Jan 29 11:35:11.528186 systemd-logind[1475]: Session 5 logged out. Waiting for processes to exit. Jan 29 11:35:11.529483 systemd[1]: Started sshd@5-10.0.0.107:22-10.0.0.1:33620.service - OpenSSH per-connection server daemon (10.0.0.1:33620). 
Jan 29 11:35:11.530325 systemd-logind[1475]: Removed session 5. Jan 29 11:35:11.575070 sshd[1636]: Accepted publickey for core from 10.0.0.1 port 33620 ssh2: RSA SHA256:vglZfOE0APgUpJbg1gfFAEfTfpzlM1a6LSSiwuQWd4A Jan 29 11:35:11.576512 sshd-session[1636]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Jan 29 11:35:11.580068 systemd-logind[1475]: New session 6 of user core. Jan 29 11:35:11.597744 systemd[1]: Started session-6.scope - Session 6 of User core. Jan 29 11:35:11.649500 sudo[1640]: core : PWD=/home/core ; USER=root ; COMMAND=/usr/sbin/rm -rf /etc/audit/rules.d/80-selinux.rules /etc/audit/rules.d/99-default.rules Jan 29 11:35:11.649833 sudo[1640]: pam_unix(sudo:session): session opened for user root(uid=0) by core(uid=500) Jan 29 11:35:11.653127 sudo[1640]: pam_unix(sudo:session): session closed for user root Jan 29 11:35:11.658516 sudo[1639]: core : PWD=/home/core ; USER=root ; COMMAND=/usr/sbin/systemctl restart audit-rules Jan 29 11:35:11.658855 sudo[1639]: pam_unix(sudo:session): session opened for user root(uid=0) by core(uid=500) Jan 29 11:35:11.677886 systemd[1]: Starting audit-rules.service - Load Audit Rules... Jan 29 11:35:11.706219 augenrules[1662]: No rules Jan 29 11:35:11.707067 systemd[1]: audit-rules.service: Deactivated successfully. Jan 29 11:35:11.707297 systemd[1]: Finished audit-rules.service - Load Audit Rules. Jan 29 11:35:11.708369 sudo[1639]: pam_unix(sudo:session): session closed for user root Jan 29 11:35:11.709838 sshd[1638]: Connection closed by 10.0.0.1 port 33620 Jan 29 11:35:11.710202 sshd-session[1636]: pam_unix(sshd:session): session closed for user core Jan 29 11:35:11.723013 systemd[1]: sshd@5-10.0.0.107:22-10.0.0.1:33620.service: Deactivated successfully. Jan 29 11:35:11.724500 systemd[1]: session-6.scope: Deactivated successfully. Jan 29 11:35:11.725725 systemd-logind[1475]: Session 6 logged out. Waiting for processes to exit. 
Jan 29 11:35:11.726858 systemd[1]: Started sshd@6-10.0.0.107:22-10.0.0.1:33622.service - OpenSSH per-connection server daemon (10.0.0.1:33622). Jan 29 11:35:11.727593 systemd-logind[1475]: Removed session 6. Jan 29 11:35:11.765579 sshd[1670]: Accepted publickey for core from 10.0.0.1 port 33622 ssh2: RSA SHA256:vglZfOE0APgUpJbg1gfFAEfTfpzlM1a6LSSiwuQWd4A Jan 29 11:35:11.766889 sshd-session[1670]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Jan 29 11:35:11.770669 systemd-logind[1475]: New session 7 of user core. Jan 29 11:35:11.780735 systemd[1]: Started session-7.scope - Session 7 of User core. Jan 29 11:35:11.832194 sudo[1673]: core : PWD=/home/core ; USER=root ; COMMAND=/home/core/install.sh Jan 29 11:35:11.832501 sudo[1673]: pam_unix(sudo:session): session opened for user root(uid=0) by core(uid=500) Jan 29 11:35:12.116875 systemd[1]: Starting docker.service - Docker Application Container Engine... Jan 29 11:35:12.116998 (dockerd)[1693]: docker.service: Referenced but unset environment variable evaluates to an empty string: DOCKER_CGROUPS, DOCKER_OPTS, DOCKER_OPT_BIP, DOCKER_OPT_IPMASQ, DOCKER_OPT_MTU Jan 29 11:35:12.403590 dockerd[1693]: time="2025-01-29T11:35:12.403446172Z" level=info msg="Starting up" Jan 29 11:35:12.516935 systemd[1]: var-lib-docker-metacopy\x2dcheck1567577412-merged.mount: Deactivated successfully. Jan 29 11:35:12.542702 dockerd[1693]: time="2025-01-29T11:35:12.542602439Z" level=info msg="Loading containers: start." Jan 29 11:35:12.705662 kernel: Initializing XFRM netlink socket Jan 29 11:35:12.781583 systemd-networkd[1414]: docker0: Link UP Jan 29 11:35:12.823151 dockerd[1693]: time="2025-01-29T11:35:12.823113688Z" level=info msg="Loading containers: done." Jan 29 11:35:12.837983 systemd[1]: var-lib-docker-overlay2-opaque\x2dbug\x2dcheck1404035214-merged.mount: Deactivated successfully. 
Jan 29 11:35:12.839968 dockerd[1693]: time="2025-01-29T11:35:12.839931928Z" level=warning msg="Not using native diff for overlay2, this may cause degraded performance for building images: kernel has CONFIG_OVERLAY_FS_REDIRECT_DIR enabled" storage-driver=overlay2 Jan 29 11:35:12.840030 dockerd[1693]: time="2025-01-29T11:35:12.840022017Z" level=info msg="Docker daemon" commit=8b539b8df24032dabeaaa099cf1d0535ef0286a3 containerd-snapshotter=false storage-driver=overlay2 version=27.2.1 Jan 29 11:35:12.840140 dockerd[1693]: time="2025-01-29T11:35:12.840120351Z" level=info msg="Daemon has completed initialization" Jan 29 11:35:12.877090 dockerd[1693]: time="2025-01-29T11:35:12.877022767Z" level=info msg="API listen on /run/docker.sock" Jan 29 11:35:12.877255 systemd[1]: Started docker.service - Docker Application Container Engine. Jan 29 11:35:13.498853 containerd[1492]: time="2025-01-29T11:35:13.498813827Z" level=info msg="PullImage \"registry.k8s.io/kube-apiserver:v1.31.5\"" Jan 29 11:35:14.155492 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount3057635998.mount: Deactivated successfully. 
Jan 29 11:35:14.990846 containerd[1492]: time="2025-01-29T11:35:14.990788302Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-apiserver:v1.31.5\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jan 29 11:35:14.991421 containerd[1492]: time="2025-01-29T11:35:14.991381003Z" level=info msg="stop pulling image registry.k8s.io/kube-apiserver:v1.31.5: active requests=0, bytes read=27976721" Jan 29 11:35:14.992697 containerd[1492]: time="2025-01-29T11:35:14.992651135Z" level=info msg="ImageCreate event name:\"sha256:2212e74642e45d72a36f297bea139f607ce4ccc4792966a8e9c4d30e04a4a6fb\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jan 29 11:35:14.995204 containerd[1492]: time="2025-01-29T11:35:14.995163226Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-apiserver@sha256:fc4b366c0036b90d147f3b58244cf7d5f1f42b0db539f0fe83a8fc6e25a434ab\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jan 29 11:35:14.996249 containerd[1492]: time="2025-01-29T11:35:14.996207645Z" level=info msg="Pulled image \"registry.k8s.io/kube-apiserver:v1.31.5\" with image id \"sha256:2212e74642e45d72a36f297bea139f607ce4ccc4792966a8e9c4d30e04a4a6fb\", repo tag \"registry.k8s.io/kube-apiserver:v1.31.5\", repo digest \"registry.k8s.io/kube-apiserver@sha256:fc4b366c0036b90d147f3b58244cf7d5f1f42b0db539f0fe83a8fc6e25a434ab\", size \"27973521\" in 1.497354424s" Jan 29 11:35:14.996288 containerd[1492]: time="2025-01-29T11:35:14.996250946Z" level=info msg="PullImage \"registry.k8s.io/kube-apiserver:v1.31.5\" returns image reference \"sha256:2212e74642e45d72a36f297bea139f607ce4ccc4792966a8e9c4d30e04a4a6fb\"" Jan 29 11:35:14.997709 containerd[1492]: time="2025-01-29T11:35:14.997683232Z" level=info msg="PullImage \"registry.k8s.io/kube-controller-manager:v1.31.5\"" Jan 29 11:35:16.113303 containerd[1492]: time="2025-01-29T11:35:16.113246171Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-controller-manager:v1.31.5\" 
labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jan 29 11:35:16.114031 containerd[1492]: time="2025-01-29T11:35:16.113965890Z" level=info msg="stop pulling image registry.k8s.io/kube-controller-manager:v1.31.5: active requests=0, bytes read=24701143" Jan 29 11:35:16.115050 containerd[1492]: time="2025-01-29T11:35:16.115016140Z" level=info msg="ImageCreate event name:\"sha256:d7fccb640e0edce9c47bd71f2b2ce328b824bea199bfe5838dda3fe2af6372f2\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jan 29 11:35:16.118462 containerd[1492]: time="2025-01-29T11:35:16.118426526Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-controller-manager@sha256:848cf42bf6c3c5ccac232b76c901c309edb3ebeac4d856885af0fc718798207e\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jan 29 11:35:16.120896 containerd[1492]: time="2025-01-29T11:35:16.120849630Z" level=info msg="Pulled image \"registry.k8s.io/kube-controller-manager:v1.31.5\" with image id \"sha256:d7fccb640e0edce9c47bd71f2b2ce328b824bea199bfe5838dda3fe2af6372f2\", repo tag \"registry.k8s.io/kube-controller-manager:v1.31.5\", repo digest \"registry.k8s.io/kube-controller-manager@sha256:848cf42bf6c3c5ccac232b76c901c309edb3ebeac4d856885af0fc718798207e\", size \"26147725\" in 1.123128166s" Jan 29 11:35:16.120933 containerd[1492]: time="2025-01-29T11:35:16.120895506Z" level=info msg="PullImage \"registry.k8s.io/kube-controller-manager:v1.31.5\" returns image reference \"sha256:d7fccb640e0edce9c47bd71f2b2ce328b824bea199bfe5838dda3fe2af6372f2\"" Jan 29 11:35:16.121325 containerd[1492]: time="2025-01-29T11:35:16.121308801Z" level=info msg="PullImage \"registry.k8s.io/kube-scheduler:v1.31.5\"" Jan 29 11:35:16.314880 systemd[1]: kubelet.service: Scheduled restart job, restart counter is at 1. Jan 29 11:35:16.328799 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent... Jan 29 11:35:16.473449 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent. 
Jan 29 11:35:16.477882 (kubelet)[1959]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS, KUBELET_KUBEADM_ARGS Jan 29 11:35:16.514490 kubelet[1959]: E0129 11:35:16.514437 1959 run.go:72] "command failed" err="failed to load kubelet config file, path: /var/lib/kubelet/config.yaml, error: failed to load Kubelet config file /var/lib/kubelet/config.yaml, error failed to read kubelet config file \"/var/lib/kubelet/config.yaml\", error: open /var/lib/kubelet/config.yaml: no such file or directory" Jan 29 11:35:16.520462 systemd[1]: kubelet.service: Main process exited, code=exited, status=1/FAILURE Jan 29 11:35:16.520715 systemd[1]: kubelet.service: Failed with result 'exit-code'. Jan 29 11:35:17.642401 containerd[1492]: time="2025-01-29T11:35:17.642341881Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-scheduler:v1.31.5\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jan 29 11:35:17.643169 containerd[1492]: time="2025-01-29T11:35:17.643093651Z" level=info msg="stop pulling image registry.k8s.io/kube-scheduler:v1.31.5: active requests=0, bytes read=18652053" Jan 29 11:35:17.644184 containerd[1492]: time="2025-01-29T11:35:17.644149842Z" level=info msg="ImageCreate event name:\"sha256:4b2fb209f5d1efc0fc980c5acda28886e4eb6ab4820173976bdd441cbd2ee09a\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jan 29 11:35:17.646842 containerd[1492]: time="2025-01-29T11:35:17.646806674Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-scheduler@sha256:0e01fd956ba32a7fa08f6b6da24e8c49015905c8e2cf752978d495e44cd4a8a9\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jan 29 11:35:17.647951 containerd[1492]: time="2025-01-29T11:35:17.647919130Z" level=info msg="Pulled image \"registry.k8s.io/kube-scheduler:v1.31.5\" with image id \"sha256:4b2fb209f5d1efc0fc980c5acda28886e4eb6ab4820173976bdd441cbd2ee09a\", repo tag \"registry.k8s.io/kube-scheduler:v1.31.5\", repo 
digest \"registry.k8s.io/kube-scheduler@sha256:0e01fd956ba32a7fa08f6b6da24e8c49015905c8e2cf752978d495e44cd4a8a9\", size \"20098653\" in 1.526585143s" Jan 29 11:35:17.647951 containerd[1492]: time="2025-01-29T11:35:17.647949527Z" level=info msg="PullImage \"registry.k8s.io/kube-scheduler:v1.31.5\" returns image reference \"sha256:4b2fb209f5d1efc0fc980c5acda28886e4eb6ab4820173976bdd441cbd2ee09a\"" Jan 29 11:35:17.648583 containerd[1492]: time="2025-01-29T11:35:17.648423927Z" level=info msg="PullImage \"registry.k8s.io/kube-proxy:v1.31.5\"" Jan 29 11:35:18.642822 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount2214179309.mount: Deactivated successfully. Jan 29 11:35:18.923979 containerd[1492]: time="2025-01-29T11:35:18.923857109Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-proxy:v1.31.5\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jan 29 11:35:18.924732 containerd[1492]: time="2025-01-29T11:35:18.924678510Z" level=info msg="stop pulling image registry.k8s.io/kube-proxy:v1.31.5: active requests=0, bytes read=30231128" Jan 29 11:35:18.925654 containerd[1492]: time="2025-01-29T11:35:18.925597794Z" level=info msg="ImageCreate event name:\"sha256:34018aef09a62f8b40bdd1d2e1bf6c48f359cab492d51059a09e20745ab02ce2\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jan 29 11:35:18.927427 containerd[1492]: time="2025-01-29T11:35:18.927392510Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-proxy@sha256:c00685cc45c1fb539c5bbd8d24d2577f96e9399efac1670f688f654b30f8c64c\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jan 29 11:35:18.928154 containerd[1492]: time="2025-01-29T11:35:18.928125334Z" level=info msg="Pulled image \"registry.k8s.io/kube-proxy:v1.31.5\" with image id \"sha256:34018aef09a62f8b40bdd1d2e1bf6c48f359cab492d51059a09e20745ab02ce2\", repo tag \"registry.k8s.io/kube-proxy:v1.31.5\", repo digest 
\"registry.k8s.io/kube-proxy@sha256:c00685cc45c1fb539c5bbd8d24d2577f96e9399efac1670f688f654b30f8c64c\", size \"30230147\" in 1.279669046s" Jan 29 11:35:18.928189 containerd[1492]: time="2025-01-29T11:35:18.928154779Z" level=info msg="PullImage \"registry.k8s.io/kube-proxy:v1.31.5\" returns image reference \"sha256:34018aef09a62f8b40bdd1d2e1bf6c48f359cab492d51059a09e20745ab02ce2\"" Jan 29 11:35:18.928649 containerd[1492]: time="2025-01-29T11:35:18.928602228Z" level=info msg="PullImage \"registry.k8s.io/coredns/coredns:v1.11.1\"" Jan 29 11:35:19.475669 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount2887819251.mount: Deactivated successfully. Jan 29 11:35:20.420885 containerd[1492]: time="2025-01-29T11:35:20.420828996Z" level=info msg="ImageCreate event name:\"registry.k8s.io/coredns/coredns:v1.11.1\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jan 29 11:35:20.421523 containerd[1492]: time="2025-01-29T11:35:20.421499063Z" level=info msg="stop pulling image registry.k8s.io/coredns/coredns:v1.11.1: active requests=0, bytes read=18185761" Jan 29 11:35:20.422791 containerd[1492]: time="2025-01-29T11:35:20.422759006Z" level=info msg="ImageCreate event name:\"sha256:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jan 29 11:35:20.425282 containerd[1492]: time="2025-01-29T11:35:20.425249466Z" level=info msg="ImageCreate event name:\"registry.k8s.io/coredns/coredns@sha256:1eeb4c7316bacb1d4c8ead65571cd92dd21e27359f0d4917f1a5822a73b75db1\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jan 29 11:35:20.426300 containerd[1492]: time="2025-01-29T11:35:20.426261374Z" level=info msg="Pulled image \"registry.k8s.io/coredns/coredns:v1.11.1\" with image id \"sha256:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4\", repo tag \"registry.k8s.io/coredns/coredns:v1.11.1\", repo digest 
\"registry.k8s.io/coredns/coredns@sha256:1eeb4c7316bacb1d4c8ead65571cd92dd21e27359f0d4917f1a5822a73b75db1\", size \"18182961\" in 1.497613369s" Jan 29 11:35:20.426329 containerd[1492]: time="2025-01-29T11:35:20.426300467Z" level=info msg="PullImage \"registry.k8s.io/coredns/coredns:v1.11.1\" returns image reference \"sha256:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4\"" Jan 29 11:35:20.426912 containerd[1492]: time="2025-01-29T11:35:20.426768044Z" level=info msg="PullImage \"registry.k8s.io/pause:3.10\"" Jan 29 11:35:20.916476 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount3842760777.mount: Deactivated successfully. Jan 29 11:35:20.946792 containerd[1492]: time="2025-01-29T11:35:20.946738685Z" level=info msg="ImageCreate event name:\"registry.k8s.io/pause:3.10\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jan 29 11:35:20.947522 containerd[1492]: time="2025-01-29T11:35:20.947481738Z" level=info msg="stop pulling image registry.k8s.io/pause:3.10: active requests=0, bytes read=321138" Jan 29 11:35:20.948564 containerd[1492]: time="2025-01-29T11:35:20.948533701Z" level=info msg="ImageCreate event name:\"sha256:873ed75102791e5b0b8a7fcd41606c92fcec98d56d05ead4ac5131650004c136\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jan 29 11:35:20.950540 containerd[1492]: time="2025-01-29T11:35:20.950502653Z" level=info msg="ImageCreate event name:\"registry.k8s.io/pause@sha256:ee6521f290b2168b6e0935a181d4cff9be1ac3f505666ef0e3c98fae8199917a\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jan 29 11:35:20.951193 containerd[1492]: time="2025-01-29T11:35:20.951165707Z" level=info msg="Pulled image \"registry.k8s.io/pause:3.10\" with image id \"sha256:873ed75102791e5b0b8a7fcd41606c92fcec98d56d05ead4ac5131650004c136\", repo tag \"registry.k8s.io/pause:3.10\", repo digest \"registry.k8s.io/pause@sha256:ee6521f290b2168b6e0935a181d4cff9be1ac3f505666ef0e3c98fae8199917a\", size \"320368\" in 524.374459ms" Jan 29 
11:35:20.951193 containerd[1492]: time="2025-01-29T11:35:20.951190914Z" level=info msg="PullImage \"registry.k8s.io/pause:3.10\" returns image reference \"sha256:873ed75102791e5b0b8a7fcd41606c92fcec98d56d05ead4ac5131650004c136\"" Jan 29 11:35:20.951656 containerd[1492]: time="2025-01-29T11:35:20.951613046Z" level=info msg="PullImage \"registry.k8s.io/etcd:3.5.15-0\"" Jan 29 11:35:21.494668 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount3344018084.mount: Deactivated successfully. Jan 29 11:35:23.233101 containerd[1492]: time="2025-01-29T11:35:23.233039004Z" level=info msg="ImageCreate event name:\"registry.k8s.io/etcd:3.5.15-0\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jan 29 11:35:23.233839 containerd[1492]: time="2025-01-29T11:35:23.233811302Z" level=info msg="stop pulling image registry.k8s.io/etcd:3.5.15-0: active requests=0, bytes read=56779973" Jan 29 11:35:23.235202 containerd[1492]: time="2025-01-29T11:35:23.235177965Z" level=info msg="ImageCreate event name:\"sha256:2e96e5913fc06e3d26915af3d0f2ca5048cc4b6327e661e80da792cbf8d8d9d4\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jan 29 11:35:23.240262 containerd[1492]: time="2025-01-29T11:35:23.240201386Z" level=info msg="ImageCreate event name:\"registry.k8s.io/etcd@sha256:a6dc63e6e8cfa0307d7851762fa6b629afb18f28d8aa3fab5a6e91b4af60026a\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jan 29 11:35:23.241471 containerd[1492]: time="2025-01-29T11:35:23.241422856Z" level=info msg="Pulled image \"registry.k8s.io/etcd:3.5.15-0\" with image id \"sha256:2e96e5913fc06e3d26915af3d0f2ca5048cc4b6327e661e80da792cbf8d8d9d4\", repo tag \"registry.k8s.io/etcd:3.5.15-0\", repo digest \"registry.k8s.io/etcd@sha256:a6dc63e6e8cfa0307d7851762fa6b629afb18f28d8aa3fab5a6e91b4af60026a\", size \"56909194\" in 2.289755899s" Jan 29 11:35:23.241527 containerd[1492]: time="2025-01-29T11:35:23.241475846Z" level=info msg="PullImage \"registry.k8s.io/etcd:3.5.15-0\" returns image 
reference \"sha256:2e96e5913fc06e3d26915af3d0f2ca5048cc4b6327e661e80da792cbf8d8d9d4\"" Jan 29 11:35:26.047958 systemd[1]: Stopped kubelet.service - kubelet: The Kubernetes Node Agent. Jan 29 11:35:26.062847 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent... Jan 29 11:35:26.085364 systemd[1]: Reloading requested from client PID 2112 ('systemctl') (unit session-7.scope)... Jan 29 11:35:26.085379 systemd[1]: Reloading... Jan 29 11:35:26.159659 zram_generator::config[2160]: No configuration found. Jan 29 11:35:26.281398 systemd[1]: /usr/lib/systemd/system/docker.socket:6: ListenStream= references a path below legacy directory /var/run/, updating /var/run/docker.sock → /run/docker.sock; please update the unit file accordingly. Jan 29 11:35:26.357157 systemd[1]: Reloading finished in 271 ms. Jan 29 11:35:26.402032 systemd[1]: kubelet.service: Deactivated successfully. Jan 29 11:35:26.402253 systemd[1]: Stopped kubelet.service - kubelet: The Kubernetes Node Agent. Jan 29 11:35:26.404478 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent... Jan 29 11:35:26.546374 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent. Jan 29 11:35:26.550741 (kubelet)[2200]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS Jan 29 11:35:26.580758 kubelet[2200]: Flag --container-runtime-endpoint has been deprecated, This parameter should be set via the config file specified by the Kubelet's --config flag. See https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/ for more information. Jan 29 11:35:26.580758 kubelet[2200]: Flag --pod-infra-container-image has been deprecated, will be removed in a future release. Image garbage collector will get sandbox image information from CRI. Jan 29 11:35:26.580758 kubelet[2200]: Flag --volume-plugin-dir has been deprecated, This parameter should be set via the config file specified by the Kubelet's --config flag. 
See https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/ for more information. Jan 29 11:35:26.581138 kubelet[2200]: I0129 11:35:26.580808 2200 server.go:206] "--pod-infra-container-image will not be pruned by the image garbage collector in kubelet and should also be set in the remote runtime" Jan 29 11:35:26.785560 kubelet[2200]: I0129 11:35:26.785460 2200 server.go:486] "Kubelet version" kubeletVersion="v1.31.0" Jan 29 11:35:26.785560 kubelet[2200]: I0129 11:35:26.785485 2200 server.go:488] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK="" Jan 29 11:35:26.785728 kubelet[2200]: I0129 11:35:26.785715 2200 server.go:929] "Client rotation is on, will bootstrap in background" Jan 29 11:35:26.804372 kubelet[2200]: I0129 11:35:26.804338 2200 dynamic_cafile_content.go:160] "Starting controller" name="client-ca-bundle::/etc/kubernetes/pki/ca.crt" Jan 29 11:35:26.804451 kubelet[2200]: E0129 11:35:26.804389 2200 certificate_manager.go:562] "Unhandled Error" err="kubernetes.io/kube-apiserver-client-kubelet: Failed while requesting a signed certificate from the control plane: cannot create certificate signing request: Post \"https://10.0.0.107:6443/apis/certificates.k8s.io/v1/certificatesigningrequests\": dial tcp 10.0.0.107:6443: connect: connection refused" logger="UnhandledError" Jan 29 11:35:26.809332 kubelet[2200]: E0129 11:35:26.809300 2200 log.go:32] "RuntimeConfig from runtime service failed" err="rpc error: code = Unimplemented desc = unknown method RuntimeConfig for service runtime.v1.RuntimeService" Jan 29 11:35:26.809332 kubelet[2200]: I0129 11:35:26.809333 2200 server.go:1403] "CRI implementation should be updated to support RuntimeConfig when KubeletCgroupDriverFromCRI feature gate has been enabled. Falling back to using cgroupDriver from kubelet config." Jan 29 11:35:26.814785 kubelet[2200]: I0129 11:35:26.814771 2200 server.go:744] "--cgroups-per-qos enabled, but --cgroup-root was not specified. 
defaulting to /" Jan 29 11:35:26.815667 kubelet[2200]: I0129 11:35:26.815649 2200 swap_util.go:113] "Swap is on" /proc/swaps contents="Filename\t\t\t\tType\t\tSize\t\tUsed\t\tPriority" Jan 29 11:35:26.815814 kubelet[2200]: I0129 11:35:26.815784 2200 container_manager_linux.go:264] "Container manager verified user specified cgroup-root exists" cgroupRoot=[] Jan 29 11:35:26.815943 kubelet[2200]: I0129 11:35:26.815807 2200 container_manager_linux.go:269] "Creating Container Manager object based on Node Config" nodeConfig={"NodeName":"localhost","RuntimeCgroupsName":"","SystemCgroupsName":"","KubeletCgroupsName":"","KubeletOOMScoreAdj":-999,"ContainerRuntime":"","CgroupsPerQOS":true,"CgroupRoot":"/","CgroupDriver":"systemd","KubeletRootDir":"/var/lib/kubelet","ProtectKernelDefaults":false,"KubeReservedCgroupName":"","SystemReservedCgroupName":"","ReservedSystemCPUs":{},"EnforceNodeAllocatable":{"pods":{}},"KubeReserved":null,"SystemReserved":null,"HardEvictionThresholds":[{"Signal":"memory.available","Operator":"LessThan","Value":{"Quantity":"100Mi","Percentage":0},"GracePeriod":0,"MinReclaim":null},{"Signal":"nodefs.available","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.1},"GracePeriod":0,"MinReclaim":null},{"Signal":"nodefs.inodesFree","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.05},"GracePeriod":0,"MinReclaim":null},{"Signal":"imagefs.available","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.15},"GracePeriod":0,"MinReclaim":null},{"Signal":"imagefs.inodesFree","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.05},"GracePeriod":0,"MinReclaim":null}],"QOSReserved":{},"CPUManagerPolicy":"none","CPUManagerPolicyOptions":null,"TopologyManagerScope":"container","CPUManagerReconcilePeriod":10000000000,"ExperimentalMemoryManagerPolicy":"None","ExperimentalMemoryManagerReservedMemory":null,"PodPidsLimit":-1,"EnforceCPULimits":true,"CPUCFSQuotaPeriod":100000000,"TopologyManagerPolicy":"none","TopologyManagerPolicyO
ptions":null,"CgroupVersion":2} Jan 29 11:35:26.816030 kubelet[2200]: I0129 11:35:26.815953 2200 topology_manager.go:138] "Creating topology manager with none policy" Jan 29 11:35:26.816030 kubelet[2200]: I0129 11:35:26.815960 2200 container_manager_linux.go:300] "Creating device plugin manager" Jan 29 11:35:26.816074 kubelet[2200]: I0129 11:35:26.816060 2200 state_mem.go:36] "Initialized new in-memory state store" Jan 29 11:35:26.817295 kubelet[2200]: I0129 11:35:26.817277 2200 kubelet.go:408] "Attempting to sync node with API server" Jan 29 11:35:26.817295 kubelet[2200]: I0129 11:35:26.817294 2200 kubelet.go:303] "Adding static pod path" path="/etc/kubernetes/manifests" Jan 29 11:35:26.817364 kubelet[2200]: I0129 11:35:26.817326 2200 kubelet.go:314] "Adding apiserver pod source" Jan 29 11:35:26.817364 kubelet[2200]: I0129 11:35:26.817340 2200 apiserver.go:42] "Waiting for node sync before watching apiserver pods" Jan 29 11:35:26.822532 kubelet[2200]: W0129 11:35:26.822494 2200 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Node: Get "https://10.0.0.107:6443/api/v1/nodes?fieldSelector=metadata.name%3Dlocalhost&limit=500&resourceVersion=0": dial tcp 10.0.0.107:6443: connect: connection refused Jan 29 11:35:26.822851 kubelet[2200]: E0129 11:35:26.822617 2200 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Node: failed to list *v1.Node: Get \"https://10.0.0.107:6443/api/v1/nodes?fieldSelector=metadata.name%3Dlocalhost&limit=500&resourceVersion=0\": dial tcp 10.0.0.107:6443: connect: connection refused" logger="UnhandledError" Jan 29 11:35:26.823966 kubelet[2200]: I0129 11:35:26.823842 2200 kuberuntime_manager.go:262] "Container runtime initialized" containerRuntime="containerd" version="v1.7.23" apiVersion="v1" Jan 29 11:35:26.824024 kubelet[2200]: W0129 11:35:26.823954 2200 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Service: Get 
"https://10.0.0.107:6443/api/v1/services?fieldSelector=spec.clusterIP%21%3DNone&limit=500&resourceVersion=0": dial tcp 10.0.0.107:6443: connect: connection refused Jan 29 11:35:26.824024 kubelet[2200]: E0129 11:35:26.823994 2200 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Service: failed to list *v1.Service: Get \"https://10.0.0.107:6443/api/v1/services?fieldSelector=spec.clusterIP%21%3DNone&limit=500&resourceVersion=0\": dial tcp 10.0.0.107:6443: connect: connection refused" logger="UnhandledError" Jan 29 11:35:26.825755 kubelet[2200]: I0129 11:35:26.825734 2200 kubelet.go:837] "Not starting ClusterTrustBundle informer because we are in static kubelet mode" Jan 29 11:35:26.826258 kubelet[2200]: W0129 11:35:26.826231 2200 probe.go:272] Flexvolume plugin directory at /opt/libexec/kubernetes/kubelet-plugins/volume/exec/ does not exist. Recreating. Jan 29 11:35:26.827014 kubelet[2200]: I0129 11:35:26.826929 2200 server.go:1269] "Started kubelet" Jan 29 11:35:26.827702 kubelet[2200]: I0129 11:35:26.827607 2200 server.go:163] "Starting to listen" address="0.0.0.0" port=10250 Jan 29 11:35:26.830844 kubelet[2200]: I0129 11:35:26.829064 2200 ratelimit.go:55] "Setting rate limiting for endpoint" service="podresources" qps=100 burstTokens=10 Jan 29 11:35:26.830844 kubelet[2200]: I0129 11:35:26.829653 2200 server.go:236] "Starting to serve the podresources API" endpoint="unix:/var/lib/kubelet/pod-resources/kubelet.sock" Jan 29 11:35:26.830844 kubelet[2200]: I0129 11:35:26.829942 2200 server.go:460] "Adding debug handlers to kubelet server" Jan 29 11:35:26.830974 kubelet[2200]: E0129 11:35:26.830852 2200 kubelet.go:1478] "Image garbage collection failed once. 
Stats initialization may not have completed yet" err="invalid capacity 0 on image filesystem" Jan 29 11:35:26.831132 kubelet[2200]: I0129 11:35:26.831115 2200 fs_resource_analyzer.go:67] "Starting FS ResourceAnalyzer" Jan 29 11:35:26.833576 kubelet[2200]: I0129 11:35:26.831195 2200 dynamic_serving_content.go:135] "Starting controller" name="kubelet-server-cert-files::/var/lib/kubelet/pki/kubelet.crt::/var/lib/kubelet/pki/kubelet.key" Jan 29 11:35:26.833576 kubelet[2200]: I0129 11:35:26.831529 2200 volume_manager.go:289] "Starting Kubelet Volume Manager" Jan 29 11:35:26.833576 kubelet[2200]: I0129 11:35:26.831607 2200 desired_state_of_world_populator.go:146] "Desired state populator starts to run" Jan 29 11:35:26.833576 kubelet[2200]: I0129 11:35:26.831687 2200 reconciler.go:26] "Reconciler: start to sync state" Jan 29 11:35:26.833576 kubelet[2200]: W0129 11:35:26.832247 2200 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.CSIDriver: Get "https://10.0.0.107:6443/apis/storage.k8s.io/v1/csidrivers?limit=500&resourceVersion=0": dial tcp 10.0.0.107:6443: connect: connection refused Jan 29 11:35:26.833576 kubelet[2200]: E0129 11:35:26.832279 2200 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.CSIDriver: failed to list *v1.CSIDriver: Get \"https://10.0.0.107:6443/apis/storage.k8s.io/v1/csidrivers?limit=500&resourceVersion=0\": dial tcp 10.0.0.107:6443: connect: connection refused" logger="UnhandledError" Jan 29 11:35:26.833576 kubelet[2200]: I0129 11:35:26.832399 2200 factory.go:221] Registration of the systemd container factory successfully Jan 29 11:35:26.833576 kubelet[2200]: I0129 11:35:26.832467 2200 factory.go:219] Registration of the crio container factory failed: Get "http://%2Fvar%2Frun%2Fcrio%2Fcrio.sock/info": dial unix /var/run/crio/crio.sock: connect: no such file or directory Jan 29 11:35:26.833576 kubelet[2200]: E0129 11:35:26.833384 2200 kubelet_node_status.go:453] 
"Error getting the current node from lister" err="node \"localhost\" not found" Jan 29 11:35:26.833936 kubelet[2200]: E0129 11:35:26.831859 2200 event.go:368] "Unable to write event (may retry after sleeping)" err="Post \"https://10.0.0.107:6443/api/v1/namespaces/default/events\": dial tcp 10.0.0.107:6443: connect: connection refused" event="&Event{ObjectMeta:{localhost.181f26b1edad04cd default 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Node,Namespace:,Name:localhost,UID:localhost,APIVersion:,ResourceVersion:,FieldPath:,},Reason:Starting,Message:Starting kubelet.,Source:EventSource{Component:kubelet,Host:localhost,},FirstTimestamp:2025-01-29 11:35:26.826906829 +0000 UTC m=+0.272780161,LastTimestamp:2025-01-29 11:35:26.826906829 +0000 UTC m=+0.272780161,Count:1,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:localhost,}" Jan 29 11:35:26.833936 kubelet[2200]: E0129 11:35:26.833471 2200 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://10.0.0.107:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/localhost?timeout=10s\": dial tcp 10.0.0.107:6443: connect: connection refused" interval="200ms" Jan 29 11:35:26.833936 kubelet[2200]: I0129 11:35:26.833650 2200 factory.go:221] Registration of the containerd container factory successfully Jan 29 11:35:26.847241 kubelet[2200]: I0129 11:35:26.847219 2200 cpu_manager.go:214] "Starting CPU manager" policy="none" Jan 29 11:35:26.847241 kubelet[2200]: I0129 11:35:26.847233 2200 cpu_manager.go:215] "Reconciling" reconcilePeriod="10s" Jan 29 11:35:26.847241 kubelet[2200]: I0129 11:35:26.847247 2200 state_mem.go:36] "Initialized new in-memory state store" Jan 29 11:35:26.847834 kubelet[2200]: I0129 11:35:26.847801 2200 kubelet_network_linux.go:50] "Initialized iptables rules." 
protocol="IPv4" Jan 29 11:35:26.848968 kubelet[2200]: I0129 11:35:26.848945 2200 kubelet_network_linux.go:50] "Initialized iptables rules." protocol="IPv6" Jan 29 11:35:26.849027 kubelet[2200]: I0129 11:35:26.848977 2200 status_manager.go:217] "Starting to sync pod status with apiserver" Jan 29 11:35:26.849027 kubelet[2200]: I0129 11:35:26.849001 2200 kubelet.go:2321] "Starting kubelet main sync loop" Jan 29 11:35:26.849096 kubelet[2200]: E0129 11:35:26.849033 2200 kubelet.go:2345] "Skipping pod synchronization" err="[container runtime status check may not have completed yet, PLEG is not healthy: pleg has yet to be successful]" Jan 29 11:35:26.849851 kubelet[2200]: W0129 11:35:26.849782 2200 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.RuntimeClass: Get "https://10.0.0.107:6443/apis/node.k8s.io/v1/runtimeclasses?limit=500&resourceVersion=0": dial tcp 10.0.0.107:6443: connect: connection refused Jan 29 11:35:26.849851 kubelet[2200]: E0129 11:35:26.849825 2200 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.RuntimeClass: failed to list *v1.RuntimeClass: Get \"https://10.0.0.107:6443/apis/node.k8s.io/v1/runtimeclasses?limit=500&resourceVersion=0\": dial tcp 10.0.0.107:6443: connect: connection refused" logger="UnhandledError" Jan 29 11:35:26.933902 kubelet[2200]: E0129 11:35:26.933878 2200 kubelet_node_status.go:453] "Error getting the current node from lister" err="node \"localhost\" not found" Jan 29 11:35:26.950067 kubelet[2200]: E0129 11:35:26.950041 2200 kubelet.go:2345] "Skipping pod synchronization" err="container runtime status check may not have completed yet" Jan 29 11:35:27.034510 kubelet[2200]: E0129 11:35:27.034483 2200 kubelet_node_status.go:453] "Error getting the current node from lister" err="node \"localhost\" not found" Jan 29 11:35:27.034816 kubelet[2200]: E0129 11:35:27.034787 2200 controller.go:145] "Failed to ensure lease exists, will retry" err="Get 
\"https://10.0.0.107:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/localhost?timeout=10s\": dial tcp 10.0.0.107:6443: connect: connection refused" interval="400ms" Jan 29 11:35:27.135072 kubelet[2200]: E0129 11:35:27.134937 2200 kubelet_node_status.go:453] "Error getting the current node from lister" err="node \"localhost\" not found" Jan 29 11:35:27.151117 kubelet[2200]: E0129 11:35:27.151080 2200 kubelet.go:2345] "Skipping pod synchronization" err="container runtime status check may not have completed yet" Jan 29 11:35:27.235780 kubelet[2200]: E0129 11:35:27.235720 2200 kubelet_node_status.go:453] "Error getting the current node from lister" err="node \"localhost\" not found" Jan 29 11:35:27.336751 kubelet[2200]: E0129 11:35:27.336706 2200 kubelet_node_status.go:453] "Error getting the current node from lister" err="node \"localhost\" not found" Jan 29 11:35:27.351206 kubelet[2200]: I0129 11:35:27.351175 2200 policy_none.go:49] "None policy: Start" Jan 29 11:35:27.351802 kubelet[2200]: I0129 11:35:27.351781 2200 memory_manager.go:170] "Starting memorymanager" policy="None" Jan 29 11:35:27.351802 kubelet[2200]: I0129 11:35:27.351803 2200 state_mem.go:35] "Initializing new in-memory state store" Jan 29 11:35:27.358465 systemd[1]: Created slice kubepods.slice - libcontainer container kubepods.slice. Jan 29 11:35:27.373447 systemd[1]: Created slice kubepods-burstable.slice - libcontainer container kubepods-burstable.slice. Jan 29 11:35:27.376269 systemd[1]: Created slice kubepods-besteffort.slice - libcontainer container kubepods-besteffort.slice. 
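The repeated `dial tcp 10.0.0.107:6443: connect: connection refused` entries above show the kubelet retrying against an API server that is not yet listening (the apiserver itself runs as a static pod the kubelet has not started yet). A minimal sketch of the same plain-TCP reachability probe, assuming only the endpoint address taken from the log:

```python
import socket

def apiserver_reachable(host: str, port: int, timeout: float = 2.0) -> bool:
    """Attempt a bare TCP connect, mirroring the 'dial tcp ... connect:
    connection refused' failures in the log; returns False instead of
    raising on any connection error."""
    try:
        with socket.create_connection((host, port), timeout=timeout):
            return True
    except OSError:
        return False

# Endpoint address taken from the log entries above; in this boot sequence
# nothing is listening there yet, so the probe would report False.
print(apiserver_reachable("10.0.0.107", 6443))
```

This is only a diagnostic sketch; the kubelet's client-go informers perform the equivalent dial internally before surfacing the `connection refused` reflector errors seen throughout this section.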
Jan 29 11:35:27.386413 kubelet[2200]: I0129 11:35:27.386329 2200 manager.go:510] "Failed to read data from checkpoint" checkpoint="kubelet_internal_checkpoint" err="checkpoint is not found" Jan 29 11:35:27.386812 kubelet[2200]: I0129 11:35:27.386517 2200 eviction_manager.go:189] "Eviction manager: starting control loop" Jan 29 11:35:27.386812 kubelet[2200]: I0129 11:35:27.386529 2200 container_log_manager.go:189] "Initializing container log rotate workers" workers=1 monitorPeriod="10s" Jan 29 11:35:27.386812 kubelet[2200]: I0129 11:35:27.386719 2200 plugin_manager.go:118] "Starting Kubelet Plugin Manager" Jan 29 11:35:27.388142 kubelet[2200]: E0129 11:35:27.388092 2200 eviction_manager.go:285] "Eviction manager: failed to get summary stats" err="failed to get node info: node \"localhost\" not found" Jan 29 11:35:27.436069 kubelet[2200]: E0129 11:35:27.436030 2200 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://10.0.0.107:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/localhost?timeout=10s\": dial tcp 10.0.0.107:6443: connect: connection refused" interval="800ms" Jan 29 11:35:27.488214 kubelet[2200]: I0129 11:35:27.488190 2200 kubelet_node_status.go:72] "Attempting to register node" node="localhost" Jan 29 11:35:27.488533 kubelet[2200]: E0129 11:35:27.488513 2200 kubelet_node_status.go:95] "Unable to register node with API server" err="Post \"https://10.0.0.107:6443/api/v1/nodes\": dial tcp 10.0.0.107:6443: connect: connection refused" node="localhost" Jan 29 11:35:27.559559 systemd[1]: Created slice kubepods-burstable-podc988230cd0d49eebfaffbefbe8c74a10.slice - libcontainer container kubepods-burstable-podc988230cd0d49eebfaffbefbe8c74a10.slice. Jan 29 11:35:27.576319 systemd[1]: Created slice kubepods-burstable-podd0b6bad116425ae2a51e9147e84e69df.slice - libcontainer container kubepods-burstable-podd0b6bad116425ae2a51e9147e84e69df.slice. 
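The `Created slice kubepods-burstable-pod<uid>.slice` entries above show the systemd cgroup driver (`"CgroupDriver":"systemd"` in the node config earlier in this section) embedding each static pod's UID in a slice name. A hedged sketch of that naming scheme; the dash-to-underscore escaping rule is an assumption based on systemd unit-name conventions, not something observable in this log (the UIDs here happen to contain no dashes):

```python
def pod_slice_name(qos_class: str, pod_uid: str) -> str:
    """Build a kubepods slice name matching the entries above: QoS class
    and pod UID concatenated under kubepods.slice. Dashes in the UID are
    assumed to be replaced with '_' to keep the systemd unit name valid."""
    uid = pod_uid.replace("-", "_")
    if qos_class == "guaranteed":  # guaranteed pods sit directly under kubepods
        return f"kubepods-pod{uid}.slice"
    return f"kubepods-{qos_class}-pod{uid}.slice"

# UID taken from the kube-scheduler slice created in the log above.
print(pod_slice_name("burstable", "c988230cd0d49eebfaffbefbe8c74a10"))
# -> kubepods-burstable-podc988230cd0d49eebfaffbefbe8c74a10.slice
```

The besteffort and burstable parent slices created a moment earlier in the log (`kubepods-burstable.slice`, `kubepods-besteffort.slice`) follow the same flattened naming, since systemd encodes slice hierarchy by dash-joining the parent names.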
Jan 29 11:35:27.589559 systemd[1]: Created slice kubepods-burstable-podfa5289f3c0ba7f1736282e713231ffc5.slice - libcontainer container kubepods-burstable-podfa5289f3c0ba7f1736282e713231ffc5.slice. Jan 29 11:35:27.635980 kubelet[2200]: I0129 11:35:27.635924 2200 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ca-certs\" (UniqueName: \"kubernetes.io/host-path/d0b6bad116425ae2a51e9147e84e69df-ca-certs\") pod \"kube-apiserver-localhost\" (UID: \"d0b6bad116425ae2a51e9147e84e69df\") " pod="kube-system/kube-apiserver-localhost" Jan 29 11:35:27.635980 kubelet[2200]: I0129 11:35:27.635970 2200 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"k8s-certs\" (UniqueName: \"kubernetes.io/host-path/d0b6bad116425ae2a51e9147e84e69df-k8s-certs\") pod \"kube-apiserver-localhost\" (UID: \"d0b6bad116425ae2a51e9147e84e69df\") " pod="kube-system/kube-apiserver-localhost" Jan 29 11:35:27.635980 kubelet[2200]: I0129 11:35:27.635989 2200 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"usr-share-ca-certificates\" (UniqueName: \"kubernetes.io/host-path/d0b6bad116425ae2a51e9147e84e69df-usr-share-ca-certificates\") pod \"kube-apiserver-localhost\" (UID: \"d0b6bad116425ae2a51e9147e84e69df\") " pod="kube-system/kube-apiserver-localhost" Jan 29 11:35:27.636499 kubelet[2200]: I0129 11:35:27.636005 2200 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"flexvolume-dir\" (UniqueName: \"kubernetes.io/host-path/fa5289f3c0ba7f1736282e713231ffc5-flexvolume-dir\") pod \"kube-controller-manager-localhost\" (UID: \"fa5289f3c0ba7f1736282e713231ffc5\") " pod="kube-system/kube-controller-manager-localhost" Jan 29 11:35:27.636499 kubelet[2200]: I0129 11:35:27.636020 2200 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"k8s-certs\" (UniqueName: 
\"kubernetes.io/host-path/fa5289f3c0ba7f1736282e713231ffc5-k8s-certs\") pod \"kube-controller-manager-localhost\" (UID: \"fa5289f3c0ba7f1736282e713231ffc5\") " pod="kube-system/kube-controller-manager-localhost" Jan 29 11:35:27.636499 kubelet[2200]: I0129 11:35:27.636036 2200 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ca-certs\" (UniqueName: \"kubernetes.io/host-path/fa5289f3c0ba7f1736282e713231ffc5-ca-certs\") pod \"kube-controller-manager-localhost\" (UID: \"fa5289f3c0ba7f1736282e713231ffc5\") " pod="kube-system/kube-controller-manager-localhost" Jan 29 11:35:27.636499 kubelet[2200]: I0129 11:35:27.636055 2200 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kubeconfig\" (UniqueName: \"kubernetes.io/host-path/fa5289f3c0ba7f1736282e713231ffc5-kubeconfig\") pod \"kube-controller-manager-localhost\" (UID: \"fa5289f3c0ba7f1736282e713231ffc5\") " pod="kube-system/kube-controller-manager-localhost" Jan 29 11:35:27.636499 kubelet[2200]: I0129 11:35:27.636074 2200 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"usr-share-ca-certificates\" (UniqueName: \"kubernetes.io/host-path/fa5289f3c0ba7f1736282e713231ffc5-usr-share-ca-certificates\") pod \"kube-controller-manager-localhost\" (UID: \"fa5289f3c0ba7f1736282e713231ffc5\") " pod="kube-system/kube-controller-manager-localhost" Jan 29 11:35:27.636638 kubelet[2200]: I0129 11:35:27.636108 2200 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kubeconfig\" (UniqueName: \"kubernetes.io/host-path/c988230cd0d49eebfaffbefbe8c74a10-kubeconfig\") pod \"kube-scheduler-localhost\" (UID: \"c988230cd0d49eebfaffbefbe8c74a10\") " pod="kube-system/kube-scheduler-localhost" Jan 29 11:35:27.692225 kubelet[2200]: I0129 11:35:27.690489 2200 kubelet_node_status.go:72] "Attempting to register node" node="localhost" Jan 29 11:35:27.692362 
kubelet[2200]: E0129 11:35:27.692265 2200 kubelet_node_status.go:95] "Unable to register node with API server" err="Post \"https://10.0.0.107:6443/api/v1/nodes\": dial tcp 10.0.0.107:6443: connect: connection refused" node="localhost" Jan 29 11:35:27.693995 kubelet[2200]: W0129 11:35:27.693954 2200 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.RuntimeClass: Get "https://10.0.0.107:6443/apis/node.k8s.io/v1/runtimeclasses?limit=500&resourceVersion=0": dial tcp 10.0.0.107:6443: connect: connection refused Jan 29 11:35:27.694035 kubelet[2200]: E0129 11:35:27.694001 2200 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.RuntimeClass: failed to list *v1.RuntimeClass: Get \"https://10.0.0.107:6443/apis/node.k8s.io/v1/runtimeclasses?limit=500&resourceVersion=0\": dial tcp 10.0.0.107:6443: connect: connection refused" logger="UnhandledError" Jan 29 11:35:27.874102 kubelet[2200]: E0129 11:35:27.874052 2200 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jan 29 11:35:27.874635 containerd[1492]: time="2025-01-29T11:35:27.874586993Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-scheduler-localhost,Uid:c988230cd0d49eebfaffbefbe8c74a10,Namespace:kube-system,Attempt:0,}" Jan 29 11:35:27.887803 kubelet[2200]: E0129 11:35:27.887729 2200 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jan 29 11:35:27.888018 containerd[1492]: time="2025-01-29T11:35:27.887989688Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-apiserver-localhost,Uid:d0b6bad116425ae2a51e9147e84e69df,Namespace:kube-system,Attempt:0,}" Jan 29 11:35:27.892248 kubelet[2200]: E0129 11:35:27.892216 2200 dns.go:153] "Nameserver limits exceeded" err="Nameserver 
limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jan 29 11:35:27.892509 containerd[1492]: time="2025-01-29T11:35:27.892483636Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-controller-manager-localhost,Uid:fa5289f3c0ba7f1736282e713231ffc5,Namespace:kube-system,Attempt:0,}" Jan 29 11:35:27.984347 kubelet[2200]: W0129 11:35:27.984273 2200 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Service: Get "https://10.0.0.107:6443/api/v1/services?fieldSelector=spec.clusterIP%21%3DNone&limit=500&resourceVersion=0": dial tcp 10.0.0.107:6443: connect: connection refused Jan 29 11:35:27.984347 kubelet[2200]: E0129 11:35:27.984332 2200 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Service: failed to list *v1.Service: Get \"https://10.0.0.107:6443/api/v1/services?fieldSelector=spec.clusterIP%21%3DNone&limit=500&resourceVersion=0\": dial tcp 10.0.0.107:6443: connect: connection refused" logger="UnhandledError" Jan 29 11:35:28.031948 kubelet[2200]: W0129 11:35:28.031894 2200 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Node: Get "https://10.0.0.107:6443/api/v1/nodes?fieldSelector=metadata.name%3Dlocalhost&limit=500&resourceVersion=0": dial tcp 10.0.0.107:6443: connect: connection refused Jan 29 11:35:28.031948 kubelet[2200]: E0129 11:35:28.031945 2200 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Node: failed to list *v1.Node: Get \"https://10.0.0.107:6443/api/v1/nodes?fieldSelector=metadata.name%3Dlocalhost&limit=500&resourceVersion=0\": dial tcp 10.0.0.107:6443: connect: connection refused" logger="UnhandledError" Jan 29 11:35:28.093352 kubelet[2200]: I0129 11:35:28.093305 2200 kubelet_node_status.go:72] "Attempting to register node" node="localhost" Jan 29 11:35:28.093602 kubelet[2200]: E0129 11:35:28.093566 2200 
kubelet_node_status.go:95] "Unable to register node with API server" err="Post \"https://10.0.0.107:6443/api/v1/nodes\": dial tcp 10.0.0.107:6443: connect: connection refused" node="localhost" Jan 29 11:35:28.236748 kubelet[2200]: E0129 11:35:28.236601 2200 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://10.0.0.107:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/localhost?timeout=10s\": dial tcp 10.0.0.107:6443: connect: connection refused" interval="1.6s" Jan 29 11:35:28.366227 kubelet[2200]: W0129 11:35:28.366156 2200 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.CSIDriver: Get "https://10.0.0.107:6443/apis/storage.k8s.io/v1/csidrivers?limit=500&resourceVersion=0": dial tcp 10.0.0.107:6443: connect: connection refused Jan 29 11:35:28.366227 kubelet[2200]: E0129 11:35:28.366227 2200 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.CSIDriver: failed to list *v1.CSIDriver: Get \"https://10.0.0.107:6443/apis/storage.k8s.io/v1/csidrivers?limit=500&resourceVersion=0\": dial tcp 10.0.0.107:6443: connect: connection refused" logger="UnhandledError" Jan 29 11:35:28.832709 kubelet[2200]: E0129 11:35:28.832660 2200 certificate_manager.go:562] "Unhandled Error" err="kubernetes.io/kube-apiserver-client-kubelet: Failed while requesting a signed certificate from the control plane: cannot create certificate signing request: Post \"https://10.0.0.107:6443/apis/certificates.k8s.io/v1/certificatesigningrequests\": dial tcp 10.0.0.107:6443: connect: connection refused" logger="UnhandledError" Jan 29 11:35:28.895126 kubelet[2200]: I0129 11:35:28.895081 2200 kubelet_node_status.go:72] "Attempting to register node" node="localhost" Jan 29 11:35:28.895397 kubelet[2200]: E0129 11:35:28.895362 2200 kubelet_node_status.go:95] "Unable to register node with API server" err="Post \"https://10.0.0.107:6443/api/v1/nodes\": dial tcp 10.0.0.107:6443: 
connect: connection refused" node="localhost" Jan 29 11:35:29.513185 kubelet[2200]: W0129 11:35:29.513150 2200 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.RuntimeClass: Get "https://10.0.0.107:6443/apis/node.k8s.io/v1/runtimeclasses?limit=500&resourceVersion=0": dial tcp 10.0.0.107:6443: connect: connection refused Jan 29 11:35:29.513185 kubelet[2200]: E0129 11:35:29.513190 2200 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.RuntimeClass: failed to list *v1.RuntimeClass: Get \"https://10.0.0.107:6443/apis/node.k8s.io/v1/runtimeclasses?limit=500&resourceVersion=0\": dial tcp 10.0.0.107:6443: connect: connection refused" logger="UnhandledError" Jan 29 11:35:29.730116 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount3862243740.mount: Deactivated successfully. Jan 29 11:35:29.742907 containerd[1492]: time="2025-01-29T11:35:29.742847308Z" level=info msg="ImageCreate event name:\"registry.k8s.io/pause:3.8\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"} labels:{key:\"io.cri-containerd.pinned\" value:\"pinned\"}" Jan 29 11:35:29.745706 containerd[1492]: time="2025-01-29T11:35:29.745668839Z" level=info msg="stop pulling image registry.k8s.io/pause:3.8: active requests=0, bytes read=312056" Jan 29 11:35:29.749716 containerd[1492]: time="2025-01-29T11:35:29.749686624Z" level=info msg="ImageUpdate event name:\"registry.k8s.io/pause:3.8\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"} labels:{key:\"io.cri-containerd.pinned\" value:\"pinned\"}" Jan 29 11:35:29.751110 containerd[1492]: time="2025-01-29T11:35:29.751084285Z" level=info msg="ImageCreate event name:\"sha256:4873874c08efc72e9729683a83ffbb7502ee729e9a5ac097723806ea7fa13517\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"} labels:{key:\"io.cri-containerd.pinned\" value:\"pinned\"}" Jan 29 11:35:29.752913 containerd[1492]: time="2025-01-29T11:35:29.752886174Z" level=info 
msg="ImageUpdate event name:\"registry.k8s.io/pause:3.8\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"} labels:{key:\"io.cri-containerd.pinned\" value:\"pinned\"}" Jan 29 11:35:29.755859 containerd[1492]: time="2025-01-29T11:35:29.755808936Z" level=info msg="stop pulling image registry.k8s.io/pause:3.8: active requests=0, bytes read=0" Jan 29 11:35:29.756848 containerd[1492]: time="2025-01-29T11:35:29.756814311Z" level=info msg="stop pulling image registry.k8s.io/pause:3.8: active requests=0, bytes read=0" Jan 29 11:35:29.792107 containerd[1492]: time="2025-01-29T11:35:29.792011378Z" level=info msg="ImageCreate event name:\"registry.k8s.io/pause@sha256:9001185023633d17a2f98ff69b6ff2615b8ea02a825adffa40422f51dfdcde9d\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"} labels:{key:\"io.cri-containerd.pinned\" value:\"pinned\"}" Jan 29 11:35:29.792796 containerd[1492]: time="2025-01-29T11:35:29.792774289Z" level=info msg="Pulled image \"registry.k8s.io/pause:3.8\" with image id \"sha256:4873874c08efc72e9729683a83ffbb7502ee729e9a5ac097723806ea7fa13517\", repo tag \"registry.k8s.io/pause:3.8\", repo digest \"registry.k8s.io/pause@sha256:9001185023633d17a2f98ff69b6ff2615b8ea02a825adffa40422f51dfdcde9d\", size \"311286\" in 1.918060919s" Jan 29 11:35:29.823982 containerd[1492]: time="2025-01-29T11:35:29.823956186Z" level=info msg="Pulled image \"registry.k8s.io/pause:3.8\" with image id \"sha256:4873874c08efc72e9729683a83ffbb7502ee729e9a5ac097723806ea7fa13517\", repo tag \"registry.k8s.io/pause:3.8\", repo digest \"registry.k8s.io/pause@sha256:9001185023633d17a2f98ff69b6ff2615b8ea02a825adffa40422f51dfdcde9d\", size \"311286\" in 1.935896477s" Jan 29 11:35:29.824801 containerd[1492]: time="2025-01-29T11:35:29.824774921Z" level=info msg="Pulled image \"registry.k8s.io/pause:3.8\" with image id \"sha256:4873874c08efc72e9729683a83ffbb7502ee729e9a5ac097723806ea7fa13517\", repo tag \"registry.k8s.io/pause:3.8\", repo digest 
\"registry.k8s.io/pause@sha256:9001185023633d17a2f98ff69b6ff2615b8ea02a825adffa40422f51dfdcde9d\", size \"311286\" in 1.9322296s" Jan 29 11:35:29.837000 kubelet[2200]: E0129 11:35:29.836967 2200 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://10.0.0.107:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/localhost?timeout=10s\": dial tcp 10.0.0.107:6443: connect: connection refused" interval="3.2s" Jan 29 11:35:30.058002 containerd[1492]: time="2025-01-29T11:35:30.057799105Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Jan 29 11:35:30.058002 containerd[1492]: time="2025-01-29T11:35:30.057841825Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Jan 29 11:35:30.058002 containerd[1492]: time="2025-01-29T11:35:30.057856062Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Jan 29 11:35:30.058002 containerd[1492]: time="2025-01-29T11:35:30.057644796Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Jan 29 11:35:30.058002 containerd[1492]: time="2025-01-29T11:35:30.057692896Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Jan 29 11:35:30.058002 containerd[1492]: time="2025-01-29T11:35:30.057705840Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Jan 29 11:35:30.058002 containerd[1492]: time="2025-01-29T11:35:30.057792102Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." 
runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Jan 29 11:35:30.058289 containerd[1492]: time="2025-01-29T11:35:30.057922426Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Jan 29 11:35:30.058565 containerd[1492]: time="2025-01-29T11:35:30.058504368Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Jan 29 11:35:30.058788 containerd[1492]: time="2025-01-29T11:35:30.058707138Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Jan 29 11:35:30.059450 containerd[1492]: time="2025-01-29T11:35:30.059409265Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Jan 29 11:35:30.059614 containerd[1492]: time="2025-01-29T11:35:30.059581538Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Jan 29 11:35:30.082784 systemd[1]: Started cri-containerd-1e3a31937fbc33761c0c37e97be663d2b7a90989aa14f05f2b3ef91db209df9d.scope - libcontainer container 1e3a31937fbc33761c0c37e97be663d2b7a90989aa14f05f2b3ef91db209df9d. Jan 29 11:35:30.084202 systemd[1]: Started cri-containerd-af158d3d245c3057cda3ea52bce907130e17cb097d92824e5e23af7e001b0871.scope - libcontainer container af158d3d245c3057cda3ea52bce907130e17cb097d92824e5e23af7e001b0871. Jan 29 11:35:30.085530 systemd[1]: Started cri-containerd-d9d16c058743022eba8d9c8b90104a9671401864052f59c4c1269aaaae1f5640.scope - libcontainer container d9d16c058743022eba8d9c8b90104a9671401864052f59c4c1269aaaae1f5640. 
Jan 29 11:35:30.121266 containerd[1492]: time="2025-01-29T11:35:30.121216105Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-apiserver-localhost,Uid:d0b6bad116425ae2a51e9147e84e69df,Namespace:kube-system,Attempt:0,} returns sandbox id \"d9d16c058743022eba8d9c8b90104a9671401864052f59c4c1269aaaae1f5640\"" Jan 29 11:35:30.123752 kubelet[2200]: E0129 11:35:30.123387 2200 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jan 29 11:35:30.128730 containerd[1492]: time="2025-01-29T11:35:30.128681795Z" level=info msg="CreateContainer within sandbox \"d9d16c058743022eba8d9c8b90104a9671401864052f59c4c1269aaaae1f5640\" for container &ContainerMetadata{Name:kube-apiserver,Attempt:0,}" Jan 29 11:35:30.130342 containerd[1492]: time="2025-01-29T11:35:30.130299279Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-controller-manager-localhost,Uid:fa5289f3c0ba7f1736282e713231ffc5,Namespace:kube-system,Attempt:0,} returns sandbox id \"af158d3d245c3057cda3ea52bce907130e17cb097d92824e5e23af7e001b0871\"" Jan 29 11:35:30.130589 containerd[1492]: time="2025-01-29T11:35:30.130565037Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-scheduler-localhost,Uid:c988230cd0d49eebfaffbefbe8c74a10,Namespace:kube-system,Attempt:0,} returns sandbox id \"1e3a31937fbc33761c0c37e97be663d2b7a90989aa14f05f2b3ef91db209df9d\"" Jan 29 11:35:30.131566 kubelet[2200]: E0129 11:35:30.131526 2200 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jan 29 11:35:30.131646 kubelet[2200]: E0129 11:35:30.131571 2200 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jan 29 11:35:30.133523 containerd[1492]: 
time="2025-01-29T11:35:30.133450268Z" level=info msg="CreateContainer within sandbox \"1e3a31937fbc33761c0c37e97be663d2b7a90989aa14f05f2b3ef91db209df9d\" for container &ContainerMetadata{Name:kube-scheduler,Attempt:0,}" Jan 29 11:35:30.133523 containerd[1492]: time="2025-01-29T11:35:30.133477539Z" level=info msg="CreateContainer within sandbox \"af158d3d245c3057cda3ea52bce907130e17cb097d92824e5e23af7e001b0871\" for container &ContainerMetadata{Name:kube-controller-manager,Attempt:0,}" Jan 29 11:35:30.149408 containerd[1492]: time="2025-01-29T11:35:30.149366116Z" level=info msg="CreateContainer within sandbox \"d9d16c058743022eba8d9c8b90104a9671401864052f59c4c1269aaaae1f5640\" for &ContainerMetadata{Name:kube-apiserver,Attempt:0,} returns container id \"1ae2f2d86a85a88eef0c52e1d2dbb348622b3204eb22a3c6a7339ebef40356df\"" Jan 29 11:35:30.150185 containerd[1492]: time="2025-01-29T11:35:30.149947476Z" level=info msg="StartContainer for \"1ae2f2d86a85a88eef0c52e1d2dbb348622b3204eb22a3c6a7339ebef40356df\"" Jan 29 11:35:30.159698 containerd[1492]: time="2025-01-29T11:35:30.159663377Z" level=info msg="CreateContainer within sandbox \"1e3a31937fbc33761c0c37e97be663d2b7a90989aa14f05f2b3ef91db209df9d\" for &ContainerMetadata{Name:kube-scheduler,Attempt:0,} returns container id \"494e12b65b6b980a04c3cfc2d9153600b71aa5c1f754cfcb2ae7ef0cac648c9d\"" Jan 29 11:35:30.160167 containerd[1492]: time="2025-01-29T11:35:30.160145902Z" level=info msg="StartContainer for \"494e12b65b6b980a04c3cfc2d9153600b71aa5c1f754cfcb2ae7ef0cac648c9d\"" Jan 29 11:35:30.161532 containerd[1492]: time="2025-01-29T11:35:30.161511873Z" level=info msg="CreateContainer within sandbox \"af158d3d245c3057cda3ea52bce907130e17cb097d92824e5e23af7e001b0871\" for &ContainerMetadata{Name:kube-controller-manager,Attempt:0,} returns container id \"295dc91a85c53d9a7afbbae6f0f06ffa739f380c76b93a4a5b0a76dccc332034\"" Jan 29 11:35:30.161848 containerd[1492]: time="2025-01-29T11:35:30.161832825Z" level=info msg="StartContainer 
for \"295dc91a85c53d9a7afbbae6f0f06ffa739f380c76b93a4a5b0a76dccc332034\"" Jan 29 11:35:30.176847 systemd[1]: Started cri-containerd-1ae2f2d86a85a88eef0c52e1d2dbb348622b3204eb22a3c6a7339ebef40356df.scope - libcontainer container 1ae2f2d86a85a88eef0c52e1d2dbb348622b3204eb22a3c6a7339ebef40356df. Jan 29 11:35:30.192755 systemd[1]: Started cri-containerd-295dc91a85c53d9a7afbbae6f0f06ffa739f380c76b93a4a5b0a76dccc332034.scope - libcontainer container 295dc91a85c53d9a7afbbae6f0f06ffa739f380c76b93a4a5b0a76dccc332034. Jan 29 11:35:30.194032 systemd[1]: Started cri-containerd-494e12b65b6b980a04c3cfc2d9153600b71aa5c1f754cfcb2ae7ef0cac648c9d.scope - libcontainer container 494e12b65b6b980a04c3cfc2d9153600b71aa5c1f754cfcb2ae7ef0cac648c9d. Jan 29 11:35:30.235100 containerd[1492]: time="2025-01-29T11:35:30.235048461Z" level=info msg="StartContainer for \"1ae2f2d86a85a88eef0c52e1d2dbb348622b3204eb22a3c6a7339ebef40356df\" returns successfully" Jan 29 11:35:30.235228 containerd[1492]: time="2025-01-29T11:35:30.235194204Z" level=info msg="StartContainer for \"295dc91a85c53d9a7afbbae6f0f06ffa739f380c76b93a4a5b0a76dccc332034\" returns successfully" Jan 29 11:35:30.245810 containerd[1492]: time="2025-01-29T11:35:30.245757664Z" level=info msg="StartContainer for \"494e12b65b6b980a04c3cfc2d9153600b71aa5c1f754cfcb2ae7ef0cac648c9d\" returns successfully" Jan 29 11:35:30.497663 kubelet[2200]: I0129 11:35:30.497522 2200 kubelet_node_status.go:72] "Attempting to register node" node="localhost" Jan 29 11:35:30.859606 kubelet[2200]: E0129 11:35:30.859491 2200 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jan 29 11:35:30.860023 kubelet[2200]: E0129 11:35:30.859960 2200 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jan 29 11:35:30.861548 kubelet[2200]: 
E0129 11:35:30.861525 2200 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jan 29 11:35:31.515645 kubelet[2200]: E0129 11:35:31.515510 2200 event.go:359] "Server rejected event (will not retry!)" err="namespaces \"default\" not found" event="&Event{ObjectMeta:{localhost.181f26b1edad04cd default 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Node,Namespace:,Name:localhost,UID:localhost,APIVersion:,ResourceVersion:,FieldPath:,},Reason:Starting,Message:Starting kubelet.,Source:EventSource{Component:kubelet,Host:localhost,},FirstTimestamp:2025-01-29 11:35:26.826906829 +0000 UTC m=+0.272780161,LastTimestamp:2025-01-29 11:35:26.826906829 +0000 UTC m=+0.272780161,Count:1,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:localhost,}" Jan 29 11:35:31.567696 kubelet[2200]: I0129 11:35:31.567652 2200 kubelet_node_status.go:75] "Successfully registered node" node="localhost" Jan 29 11:35:31.567696 kubelet[2200]: E0129 11:35:31.567710 2200 kubelet_node_status.go:535] "Error updating node status, will retry" err="error getting node \"localhost\": node \"localhost\" not found" Jan 29 11:35:31.568650 kubelet[2200]: E0129 11:35:31.568527 2200 event.go:359] "Server rejected event (will not retry!)" err="namespaces \"default\" not found" event="&Event{ObjectMeta:{localhost.181f26b1ede90e86 default 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Node,Namespace:,Name:localhost,UID:localhost,APIVersion:,ResourceVersion:,FieldPath:,},Reason:InvalidDiskCapacity,Message:invalid capacity 0 on image filesystem,Source:EventSource{Component:kubelet,Host:localhost,},FirstTimestamp:2025-01-29 11:35:26.830841478 +0000 UTC m=+0.276714810,LastTimestamp:2025-01-29 11:35:26.830841478 +0000 UTC 
m=+0.276714810,Count:1,Type:Warning,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:localhost,}" Jan 29 11:35:31.621927 kubelet[2200]: E0129 11:35:31.621793 2200 event.go:359] "Server rejected event (will not retry!)" err="namespaces \"default\" not found" event="&Event{ObjectMeta:{localhost.181f26b1eedd0a46 default 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Node,Namespace:,Name:localhost,UID:localhost,APIVersion:,ResourceVersion:,FieldPath:,},Reason:NodeHasSufficientMemory,Message:Node localhost status is now: NodeHasSufficientMemory,Source:EventSource{Component:kubelet,Host:localhost,},FirstTimestamp:2025-01-29 11:35:26.846831174 +0000 UTC m=+0.292704506,LastTimestamp:2025-01-29 11:35:26.846831174 +0000 UTC m=+0.292704506,Count:1,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:localhost,}" Jan 29 11:35:31.674816 kubelet[2200]: E0129 11:35:31.674715 2200 event.go:359] "Server rejected event (will not retry!)" err="namespaces \"default\" not found" event="&Event{ObjectMeta:{localhost.181f26b1eedd1af2 default 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Node,Namespace:,Name:localhost,UID:localhost,APIVersion:,ResourceVersion:,FieldPath:,},Reason:NodeHasNoDiskPressure,Message:Node localhost status is now: NodeHasNoDiskPressure,Source:EventSource{Component:kubelet,Host:localhost,},FirstTimestamp:2025-01-29 11:35:26.846835442 +0000 UTC m=+0.292708774,LastTimestamp:2025-01-29 11:35:26.846835442 +0000 UTC m=+0.292708774,Count:1,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:localhost,}" Jan 29 11:35:31.727171 kubelet[2200]: E0129 11:35:31.727075 2200 event.go:359] "Server rejected event (will not retry!)" err="namespaces \"default\" not found" 
event="&Event{ObjectMeta:{localhost.181f26b1eedd2714 default 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Node,Namespace:,Name:localhost,UID:localhost,APIVersion:,ResourceVersion:,FieldPath:,},Reason:NodeHasSufficientPID,Message:Node localhost status is now: NodeHasSufficientPID,Source:EventSource{Component:kubelet,Host:localhost,},FirstTimestamp:2025-01-29 11:35:26.846838548 +0000 UTC m=+0.292711880,LastTimestamp:2025-01-29 11:35:26.846838548 +0000 UTC m=+0.292711880,Count:1,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:localhost,}" Jan 29 11:35:31.826922 kubelet[2200]: I0129 11:35:31.826446 2200 apiserver.go:52] "Watching apiserver" Jan 29 11:35:31.832043 kubelet[2200]: I0129 11:35:31.832002 2200 desired_state_of_world_populator.go:154] "Finished populating initial desired state of world" Jan 29 11:35:31.866786 kubelet[2200]: E0129 11:35:31.866738 2200 kubelet.go:1915] "Failed creating a mirror pod for" err="pods \"kube-apiserver-localhost\" is forbidden: no PriorityClass with name system-node-critical was found" pod="kube-system/kube-apiserver-localhost" Jan 29 11:35:31.867262 kubelet[2200]: E0129 11:35:31.866948 2200 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jan 29 11:35:32.023734 kubelet[2200]: E0129 11:35:32.023698 2200 kubelet.go:1915] "Failed creating a mirror pod for" err="pods \"kube-controller-manager-localhost\" is forbidden: no PriorityClass with name system-node-critical was found" pod="kube-system/kube-controller-manager-localhost" Jan 29 11:35:32.023869 kubelet[2200]: E0129 11:35:32.023833 2200 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jan 29 11:35:33.484378 
systemd[1]: Reloading requested from client PID 2476 ('systemctl') (unit session-7.scope)... Jan 29 11:35:33.484392 systemd[1]: Reloading... Jan 29 11:35:33.559719 zram_generator::config[2518]: No configuration found. Jan 29 11:35:33.665745 systemd[1]: /usr/lib/systemd/system/docker.socket:6: ListenStream= references a path below legacy directory /var/run/, updating /var/run/docker.sock → /run/docker.sock; please update the unit file accordingly. Jan 29 11:35:33.756984 systemd[1]: Reloading finished in 272 ms. Jan 29 11:35:33.799286 systemd[1]: Stopping kubelet.service - kubelet: The Kubernetes Node Agent... Jan 29 11:35:33.818014 systemd[1]: kubelet.service: Deactivated successfully. Jan 29 11:35:33.818288 systemd[1]: Stopped kubelet.service - kubelet: The Kubernetes Node Agent. Jan 29 11:35:33.829843 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent... Jan 29 11:35:33.969865 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent. Jan 29 11:35:33.975329 (kubelet)[2560]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS Jan 29 11:35:34.012500 kubelet[2560]: Flag --container-runtime-endpoint has been deprecated, This parameter should be set via the config file specified by the Kubelet's --config flag. See https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/ for more information. Jan 29 11:35:34.012500 kubelet[2560]: Flag --pod-infra-container-image has been deprecated, will be removed in a future release. Image garbage collector will get sandbox image information from CRI. Jan 29 11:35:34.012500 kubelet[2560]: Flag --volume-plugin-dir has been deprecated, This parameter should be set via the config file specified by the Kubelet's --config flag. See https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/ for more information. 
Jan 29 11:35:34.012944 kubelet[2560]: I0129 11:35:34.012486 2560 server.go:206] "--pod-infra-container-image will not be pruned by the image garbage collector in kubelet and should also be set in the remote runtime" Jan 29 11:35:34.019132 kubelet[2560]: I0129 11:35:34.019096 2560 server.go:486] "Kubelet version" kubeletVersion="v1.31.0" Jan 29 11:35:34.019132 kubelet[2560]: I0129 11:35:34.019118 2560 server.go:488] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK="" Jan 29 11:35:34.019335 kubelet[2560]: I0129 11:35:34.019313 2560 server.go:929] "Client rotation is on, will bootstrap in background" Jan 29 11:35:34.020440 kubelet[2560]: I0129 11:35:34.020415 2560 certificate_store.go:130] Loading cert/key pair from "/var/lib/kubelet/pki/kubelet-client-current.pem". Jan 29 11:35:34.022054 kubelet[2560]: I0129 11:35:34.022038 2560 dynamic_cafile_content.go:160] "Starting controller" name="client-ca-bundle::/etc/kubernetes/pki/ca.crt" Jan 29 11:35:34.024905 kubelet[2560]: E0129 11:35:34.024867 2560 log.go:32] "RuntimeConfig from runtime service failed" err="rpc error: code = Unimplemented desc = unknown method RuntimeConfig for service runtime.v1.RuntimeService" Jan 29 11:35:34.024905 kubelet[2560]: I0129 11:35:34.024892 2560 server.go:1403] "CRI implementation should be updated to support RuntimeConfig when KubeletCgroupDriverFromCRI feature gate has been enabled. Falling back to using cgroupDriver from kubelet config." Jan 29 11:35:34.029839 kubelet[2560]: I0129 11:35:34.029457 2560 server.go:744] "--cgroups-per-qos enabled, but --cgroup-root was not specified. 
defaulting to /" Jan 29 11:35:34.029839 kubelet[2560]: I0129 11:35:34.029568 2560 swap_util.go:113] "Swap is on" /proc/swaps contents="Filename\t\t\t\tType\t\tSize\t\tUsed\t\tPriority" Jan 29 11:35:34.029839 kubelet[2560]: I0129 11:35:34.029712 2560 container_manager_linux.go:264] "Container manager verified user specified cgroup-root exists" cgroupRoot=[] Jan 29 11:35:34.030289 kubelet[2560]: I0129 11:35:34.029733 2560 container_manager_linux.go:269] "Creating Container Manager object based on Node Config" nodeConfig={"NodeName":"localhost","RuntimeCgroupsName":"","SystemCgroupsName":"","KubeletCgroupsName":"","KubeletOOMScoreAdj":-999,"ContainerRuntime":"","CgroupsPerQOS":true,"CgroupRoot":"/","CgroupDriver":"systemd","KubeletRootDir":"/var/lib/kubelet","ProtectKernelDefaults":false,"KubeReservedCgroupName":"","SystemReservedCgroupName":"","ReservedSystemCPUs":{},"EnforceNodeAllocatable":{"pods":{}},"KubeReserved":null,"SystemReserved":null,"HardEvictionThresholds":[{"Signal":"memory.available","Operator":"LessThan","Value":{"Quantity":"100Mi","Percentage":0},"GracePeriod":0,"MinReclaim":null},{"Signal":"nodefs.available","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.1},"GracePeriod":0,"MinReclaim":null},{"Signal":"nodefs.inodesFree","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.05},"GracePeriod":0,"MinReclaim":null},{"Signal":"imagefs.available","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.15},"GracePeriod":0,"MinReclaim":null},{"Signal":"imagefs.inodesFree","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.05},"GracePeriod":0,"MinReclaim":null}],"QOSReserved":{},"CPUManagerPolicy":"none","CPUManagerPolicyOptions":null,"TopologyManagerScope":"container","CPUManagerReconcilePeriod":10000000000,"ExperimentalMemoryManagerPolicy":"None","ExperimentalMemoryManagerReservedMemory":null,"PodPidsLimit":-1,"EnforceCPULimits":true,"CPUCFSQuotaPeriod":100000000,"TopologyManagerPolicy":"none","TopologyManagerPolicyO
ptions":null,"CgroupVersion":2} Jan 29 11:35:34.030289 kubelet[2560]: I0129 11:35:34.030287 2560 topology_manager.go:138] "Creating topology manager with none policy" Jan 29 11:35:34.030425 kubelet[2560]: I0129 11:35:34.030299 2560 container_manager_linux.go:300] "Creating device plugin manager" Jan 29 11:35:34.030425 kubelet[2560]: I0129 11:35:34.030333 2560 state_mem.go:36] "Initialized new in-memory state store" Jan 29 11:35:34.030488 kubelet[2560]: I0129 11:35:34.030445 2560 kubelet.go:408] "Attempting to sync node with API server" Jan 29 11:35:34.030488 kubelet[2560]: I0129 11:35:34.030460 2560 kubelet.go:303] "Adding static pod path" path="/etc/kubernetes/manifests" Jan 29 11:35:34.030551 kubelet[2560]: I0129 11:35:34.030493 2560 kubelet.go:314] "Adding apiserver pod source" Jan 29 11:35:34.030551 kubelet[2560]: I0129 11:35:34.030510 2560 apiserver.go:42] "Waiting for node sync before watching apiserver pods" Jan 29 11:35:34.031443 kubelet[2560]: I0129 11:35:34.031099 2560 kuberuntime_manager.go:262] "Container runtime initialized" containerRuntime="containerd" version="v1.7.23" apiVersion="v1" Jan 29 11:35:34.031515 kubelet[2560]: I0129 11:35:34.031477 2560 kubelet.go:837] "Not starting ClusterTrustBundle informer because we are in static kubelet mode" Jan 29 11:35:34.032022 kubelet[2560]: I0129 11:35:34.031989 2560 server.go:1269] "Started kubelet" Jan 29 11:35:34.032609 kubelet[2560]: I0129 11:35:34.032538 2560 ratelimit.go:55] "Setting rate limiting for endpoint" service="podresources" qps=100 burstTokens=10 Jan 29 11:35:34.033114 kubelet[2560]: I0129 11:35:34.032997 2560 server.go:236] "Starting to serve the podresources API" endpoint="unix:/var/lib/kubelet/pod-resources/kubelet.sock" Jan 29 11:35:34.033770 kubelet[2560]: I0129 11:35:34.033730 2560 server.go:163] "Starting to listen" address="0.0.0.0" port=10250 Jan 29 11:35:34.034651 kubelet[2560]: I0129 11:35:34.034620 2560 fs_resource_analyzer.go:67] "Starting FS ResourceAnalyzer" Jan 29 
11:35:34.034789 kubelet[2560]: I0129 11:35:34.034767 2560 server.go:460] "Adding debug handlers to kubelet server" Jan 29 11:35:34.034915 kubelet[2560]: E0129 11:35:34.034855 2560 kubelet.go:1478] "Image garbage collection failed once. Stats initialization may not have completed yet" err="invalid capacity 0 on image filesystem" Jan 29 11:35:34.034963 kubelet[2560]: I0129 11:35:34.034935 2560 dynamic_serving_content.go:135] "Starting controller" name="kubelet-server-cert-files::/var/lib/kubelet/pki/kubelet.crt::/var/lib/kubelet/pki/kubelet.key" Jan 29 11:35:34.041143 kubelet[2560]: E0129 11:35:34.041100 2560 kubelet_node_status.go:453] "Error getting the current node from lister" err="node \"localhost\" not found" Jan 29 11:35:34.041262 kubelet[2560]: I0129 11:35:34.041164 2560 volume_manager.go:289] "Starting Kubelet Volume Manager" Jan 29 11:35:34.041611 kubelet[2560]: I0129 11:35:34.041436 2560 desired_state_of_world_populator.go:146] "Desired state populator starts to run" Jan 29 11:35:34.041726 kubelet[2560]: I0129 11:35:34.041669 2560 reconciler.go:26] "Reconciler: start to sync state" Jan 29 11:35:34.043100 kubelet[2560]: I0129 11:35:34.042302 2560 factory.go:221] Registration of the systemd container factory successfully Jan 29 11:35:34.043100 kubelet[2560]: I0129 11:35:34.042410 2560 factory.go:219] Registration of the crio container factory failed: Get "http://%2Fvar%2Frun%2Fcrio%2Fcrio.sock/info": dial unix /var/run/crio/crio.sock: connect: no such file or directory Jan 29 11:35:34.045302 kubelet[2560]: I0129 11:35:34.045275 2560 factory.go:221] Registration of the containerd container factory successfully Jan 29 11:35:34.054482 kubelet[2560]: I0129 11:35:34.054333 2560 kubelet_network_linux.go:50] "Initialized iptables rules." protocol="IPv4" Jan 29 11:35:34.055859 kubelet[2560]: I0129 11:35:34.055841 2560 kubelet_network_linux.go:50] "Initialized iptables rules." 
protocol="IPv6" Jan 29 11:35:34.056368 kubelet[2560]: I0129 11:35:34.055936 2560 status_manager.go:217] "Starting to sync pod status with apiserver" Jan 29 11:35:34.056368 kubelet[2560]: I0129 11:35:34.055970 2560 kubelet.go:2321] "Starting kubelet main sync loop" Jan 29 11:35:34.056368 kubelet[2560]: E0129 11:35:34.056021 2560 kubelet.go:2345] "Skipping pod synchronization" err="[container runtime status check may not have completed yet, PLEG is not healthy: pleg has yet to be successful]" Jan 29 11:35:34.078871 kubelet[2560]: I0129 11:35:34.078841 2560 cpu_manager.go:214] "Starting CPU manager" policy="none" Jan 29 11:35:34.078871 kubelet[2560]: I0129 11:35:34.078858 2560 cpu_manager.go:215] "Reconciling" reconcilePeriod="10s" Jan 29 11:35:34.078871 kubelet[2560]: I0129 11:35:34.078876 2560 state_mem.go:36] "Initialized new in-memory state store" Jan 29 11:35:34.079044 kubelet[2560]: I0129 11:35:34.078999 2560 state_mem.go:88] "Updated default CPUSet" cpuSet="" Jan 29 11:35:34.079044 kubelet[2560]: I0129 11:35:34.079009 2560 state_mem.go:96] "Updated CPUSet assignments" assignments={} Jan 29 11:35:34.079044 kubelet[2560]: I0129 11:35:34.079026 2560 policy_none.go:49] "None policy: Start" Jan 29 11:35:34.079618 kubelet[2560]: I0129 11:35:34.079594 2560 memory_manager.go:170] "Starting memorymanager" policy="None" Jan 29 11:35:34.079618 kubelet[2560]: I0129 11:35:34.079614 2560 state_mem.go:35] "Initializing new in-memory state store" Jan 29 11:35:34.079776 kubelet[2560]: I0129 11:35:34.079756 2560 state_mem.go:75] "Updated machine memory state" Jan 29 11:35:34.084008 kubelet[2560]: I0129 11:35:34.083922 2560 manager.go:510] "Failed to read data from checkpoint" checkpoint="kubelet_internal_checkpoint" err="checkpoint is not found" Jan 29 11:35:34.084125 kubelet[2560]: I0129 11:35:34.084098 2560 eviction_manager.go:189] "Eviction manager: starting control loop" Jan 29 11:35:34.084176 kubelet[2560]: I0129 11:35:34.084115 2560 container_log_manager.go:189] 
"Initializing container log rotate workers" workers=1 monitorPeriod="10s" Jan 29 11:35:34.084359 kubelet[2560]: I0129 11:35:34.084338 2560 plugin_manager.go:118] "Starting Kubelet Plugin Manager" Jan 29 11:35:34.188939 kubelet[2560]: I0129 11:35:34.188893 2560 kubelet_node_status.go:72] "Attempting to register node" node="localhost" Jan 29 11:35:34.343476 kubelet[2560]: I0129 11:35:34.343323 2560 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"usr-share-ca-certificates\" (UniqueName: \"kubernetes.io/host-path/fa5289f3c0ba7f1736282e713231ffc5-usr-share-ca-certificates\") pod \"kube-controller-manager-localhost\" (UID: \"fa5289f3c0ba7f1736282e713231ffc5\") " pod="kube-system/kube-controller-manager-localhost" Jan 29 11:35:34.343476 kubelet[2560]: I0129 11:35:34.343360 2560 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kubeconfig\" (UniqueName: \"kubernetes.io/host-path/c988230cd0d49eebfaffbefbe8c74a10-kubeconfig\") pod \"kube-scheduler-localhost\" (UID: \"c988230cd0d49eebfaffbefbe8c74a10\") " pod="kube-system/kube-scheduler-localhost" Jan 29 11:35:34.343476 kubelet[2560]: I0129 11:35:34.343379 2560 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ca-certs\" (UniqueName: \"kubernetes.io/host-path/fa5289f3c0ba7f1736282e713231ffc5-ca-certs\") pod \"kube-controller-manager-localhost\" (UID: \"fa5289f3c0ba7f1736282e713231ffc5\") " pod="kube-system/kube-controller-manager-localhost" Jan 29 11:35:34.343476 kubelet[2560]: I0129 11:35:34.343394 2560 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"flexvolume-dir\" (UniqueName: \"kubernetes.io/host-path/fa5289f3c0ba7f1736282e713231ffc5-flexvolume-dir\") pod \"kube-controller-manager-localhost\" (UID: \"fa5289f3c0ba7f1736282e713231ffc5\") " pod="kube-system/kube-controller-manager-localhost" Jan 29 11:35:34.343476 kubelet[2560]: I0129 
11:35:34.343409 2560 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kubeconfig\" (UniqueName: \"kubernetes.io/host-path/fa5289f3c0ba7f1736282e713231ffc5-kubeconfig\") pod \"kube-controller-manager-localhost\" (UID: \"fa5289f3c0ba7f1736282e713231ffc5\") " pod="kube-system/kube-controller-manager-localhost" Jan 29 11:35:34.344399 kubelet[2560]: I0129 11:35:34.343428 2560 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"k8s-certs\" (UniqueName: \"kubernetes.io/host-path/fa5289f3c0ba7f1736282e713231ffc5-k8s-certs\") pod \"kube-controller-manager-localhost\" (UID: \"fa5289f3c0ba7f1736282e713231ffc5\") " pod="kube-system/kube-controller-manager-localhost" Jan 29 11:35:34.344399 kubelet[2560]: I0129 11:35:34.343440 2560 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ca-certs\" (UniqueName: \"kubernetes.io/host-path/d0b6bad116425ae2a51e9147e84e69df-ca-certs\") pod \"kube-apiserver-localhost\" (UID: \"d0b6bad116425ae2a51e9147e84e69df\") " pod="kube-system/kube-apiserver-localhost" Jan 29 11:35:34.344399 kubelet[2560]: I0129 11:35:34.343452 2560 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"k8s-certs\" (UniqueName: \"kubernetes.io/host-path/d0b6bad116425ae2a51e9147e84e69df-k8s-certs\") pod \"kube-apiserver-localhost\" (UID: \"d0b6bad116425ae2a51e9147e84e69df\") " pod="kube-system/kube-apiserver-localhost" Jan 29 11:35:34.344399 kubelet[2560]: I0129 11:35:34.343466 2560 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"usr-share-ca-certificates\" (UniqueName: \"kubernetes.io/host-path/d0b6bad116425ae2a51e9147e84e69df-usr-share-ca-certificates\") pod \"kube-apiserver-localhost\" (UID: \"d0b6bad116425ae2a51e9147e84e69df\") " pod="kube-system/kube-apiserver-localhost" Jan 29 11:35:34.347372 kubelet[2560]: I0129 11:35:34.347317 2560 
kubelet_node_status.go:111] "Node was previously registered" node="localhost" Jan 29 11:35:34.347540 kubelet[2560]: I0129 11:35:34.347418 2560 kubelet_node_status.go:75] "Successfully registered node" node="localhost" Jan 29 11:35:34.515761 kubelet[2560]: E0129 11:35:34.515723 2560 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jan 29 11:35:34.588917 kubelet[2560]: E0129 11:35:34.588823 2560 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jan 29 11:35:34.588917 kubelet[2560]: E0129 11:35:34.588857 2560 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jan 29 11:35:35.031586 kubelet[2560]: I0129 11:35:35.031532 2560 apiserver.go:52] "Watching apiserver" Jan 29 11:35:35.042472 kubelet[2560]: I0129 11:35:35.042424 2560 desired_state_of_world_populator.go:154] "Finished populating initial desired state of world" Jan 29 11:35:35.065339 kubelet[2560]: E0129 11:35:35.065299 2560 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jan 29 11:35:35.065492 kubelet[2560]: E0129 11:35:35.065463 2560 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jan 29 11:35:35.067070 kubelet[2560]: E0129 11:35:35.066055 2560 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jan 29 11:35:35.090197 kubelet[2560]: I0129 11:35:35.090127 2560 
pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/kube-apiserver-localhost" podStartSLOduration=1.090106327 podStartE2EDuration="1.090106327s" podCreationTimestamp="2025-01-29 11:35:34 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-01-29 11:35:35.089662846 +0000 UTC m=+1.110472614" watchObservedRunningTime="2025-01-29 11:35:35.090106327 +0000 UTC m=+1.110916095" Jan 29 11:35:35.090448 kubelet[2560]: I0129 11:35:35.090257 2560 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/kube-scheduler-localhost" podStartSLOduration=1.090253253 podStartE2EDuration="1.090253253s" podCreationTimestamp="2025-01-29 11:35:34 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-01-29 11:35:35.081886502 +0000 UTC m=+1.102696270" watchObservedRunningTime="2025-01-29 11:35:35.090253253 +0000 UTC m=+1.111063021" Jan 29 11:35:35.123425 kubelet[2560]: I0129 11:35:35.123333 2560 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/kube-controller-manager-localhost" podStartSLOduration=1.123310717 podStartE2EDuration="1.123310717s" podCreationTimestamp="2025-01-29 11:35:34 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-01-29 11:35:35.109162134 +0000 UTC m=+1.129971892" watchObservedRunningTime="2025-01-29 11:35:35.123310717 +0000 UTC m=+1.144120485" Jan 29 11:35:36.065927 kubelet[2560]: E0129 11:35:36.065889 2560 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jan 29 11:35:37.067087 kubelet[2560]: E0129 11:35:37.067041 2560 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were 
exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jan 29 11:35:38.707911 sudo[1673]: pam_unix(sudo:session): session closed for user root Jan 29 11:35:38.709327 sshd[1672]: Connection closed by 10.0.0.1 port 33622 Jan 29 11:35:38.709828 sshd-session[1670]: pam_unix(sshd:session): session closed for user core Jan 29 11:35:38.712901 systemd[1]: sshd@6-10.0.0.107:22-10.0.0.1:33622.service: Deactivated successfully. Jan 29 11:35:38.714955 systemd[1]: session-7.scope: Deactivated successfully. Jan 29 11:35:38.715150 systemd[1]: session-7.scope: Consumed 4.766s CPU time, 152.1M memory peak, 0B memory swap peak. Jan 29 11:35:38.716871 systemd-logind[1475]: Session 7 logged out. Waiting for processes to exit. Jan 29 11:35:38.717964 systemd-logind[1475]: Removed session 7. Jan 29 11:35:39.183011 kubelet[2560]: I0129 11:35:39.182891 2560 kuberuntime_manager.go:1633] "Updating runtime config through cri with podcidr" CIDR="192.168.0.0/24" Jan 29 11:35:39.183572 kubelet[2560]: I0129 11:35:39.183369 2560 kubelet_network.go:61] "Updating Pod CIDR" originalPodCIDR="" newPodCIDR="192.168.0.0/24" Jan 29 11:35:39.183602 containerd[1492]: time="2025-01-29T11:35:39.183161668Z" level=info msg="No cni config template is specified, wait for other system components to drop the config." Jan 29 11:35:39.812173 systemd[1]: Created slice kubepods-besteffort-pod169804bb_e2e7_401e_9941_148d7601c418.slice - libcontainer container kubepods-besteffort-pod169804bb_e2e7_401e_9941_148d7601c418.slice. 
Jan 29 11:35:39.875783 kubelet[2560]: I0129 11:35:39.875722 2560 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/169804bb-e2e7-401e-9941-148d7601c418-lib-modules\") pod \"kube-proxy-r8c7p\" (UID: \"169804bb-e2e7-401e-9941-148d7601c418\") " pod="kube-system/kube-proxy-r8c7p" Jan 29 11:35:39.875783 kubelet[2560]: I0129 11:35:39.875767 2560 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-9ksd5\" (UniqueName: \"kubernetes.io/projected/169804bb-e2e7-401e-9941-148d7601c418-kube-api-access-9ksd5\") pod \"kube-proxy-r8c7p\" (UID: \"169804bb-e2e7-401e-9941-148d7601c418\") " pod="kube-system/kube-proxy-r8c7p" Jan 29 11:35:39.875783 kubelet[2560]: I0129 11:35:39.875791 2560 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-proxy\" (UniqueName: \"kubernetes.io/configmap/169804bb-e2e7-401e-9941-148d7601c418-kube-proxy\") pod \"kube-proxy-r8c7p\" (UID: \"169804bb-e2e7-401e-9941-148d7601c418\") " pod="kube-system/kube-proxy-r8c7p" Jan 29 11:35:39.876004 kubelet[2560]: I0129 11:35:39.875810 2560 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"xtables-lock\" (UniqueName: \"kubernetes.io/host-path/169804bb-e2e7-401e-9941-148d7601c418-xtables-lock\") pod \"kube-proxy-r8c7p\" (UID: \"169804bb-e2e7-401e-9941-148d7601c418\") " pod="kube-system/kube-proxy-r8c7p" Jan 29 11:35:40.124974 kubelet[2560]: E0129 11:35:40.124816 2560 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jan 29 11:35:40.125553 containerd[1492]: time="2025-01-29T11:35:40.125499574Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-proxy-r8c7p,Uid:169804bb-e2e7-401e-9941-148d7601c418,Namespace:kube-system,Attempt:0,}" Jan 
29 11:35:40.161408 containerd[1492]: time="2025-01-29T11:35:40.161269091Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Jan 29 11:35:40.161408 containerd[1492]: time="2025-01-29T11:35:40.161363732Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Jan 29 11:35:40.161408 containerd[1492]: time="2025-01-29T11:35:40.161380814Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Jan 29 11:35:40.161810 containerd[1492]: time="2025-01-29T11:35:40.161745323Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Jan 29 11:35:40.185940 systemd[1]: Started cri-containerd-70992e26fab47a12fbe23dc6f346058f74c4bc0fdb7746ab637e328ddc1e35cb.scope - libcontainer container 70992e26fab47a12fbe23dc6f346058f74c4bc0fdb7746ab637e328ddc1e35cb. Jan 29 11:35:40.227543 systemd[1]: Created slice kubepods-besteffort-podd57dcd47_d523_44d8_a619_0cd862fb730c.slice - libcontainer container kubepods-besteffort-podd57dcd47_d523_44d8_a619_0cd862fb730c.slice. 
Jan 29 11:35:40.227858 containerd[1492]: time="2025-01-29T11:35:40.227706348Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-proxy-r8c7p,Uid:169804bb-e2e7-401e-9941-148d7601c418,Namespace:kube-system,Attempt:0,} returns sandbox id \"70992e26fab47a12fbe23dc6f346058f74c4bc0fdb7746ab637e328ddc1e35cb\"" Jan 29 11:35:40.230035 kubelet[2560]: E0129 11:35:40.229850 2560 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jan 29 11:35:40.233192 containerd[1492]: time="2025-01-29T11:35:40.233137746Z" level=info msg="CreateContainer within sandbox \"70992e26fab47a12fbe23dc6f346058f74c4bc0fdb7746ab637e328ddc1e35cb\" for container &ContainerMetadata{Name:kube-proxy,Attempt:0,}" Jan 29 11:35:40.254607 containerd[1492]: time="2025-01-29T11:35:40.254548730Z" level=info msg="CreateContainer within sandbox \"70992e26fab47a12fbe23dc6f346058f74c4bc0fdb7746ab637e328ddc1e35cb\" for &ContainerMetadata{Name:kube-proxy,Attempt:0,} returns container id \"5b7c351af3fd7a31b64362a2797306236187ae4ee9aaff7338a4d1285eed5535\"" Jan 29 11:35:40.256184 containerd[1492]: time="2025-01-29T11:35:40.256108258Z" level=info msg="StartContainer for \"5b7c351af3fd7a31b64362a2797306236187ae4ee9aaff7338a4d1285eed5535\"" Jan 29 11:35:40.278191 kubelet[2560]: I0129 11:35:40.278053 2560 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-qzmgk\" (UniqueName: \"kubernetes.io/projected/d57dcd47-d523-44d8-a619-0cd862fb730c-kube-api-access-qzmgk\") pod \"tigera-operator-76c4976dd7-vnb2n\" (UID: \"d57dcd47-d523-44d8-a619-0cd862fb730c\") " pod="tigera-operator/tigera-operator-76c4976dd7-vnb2n" Jan 29 11:35:40.278191 kubelet[2560]: I0129 11:35:40.278114 2560 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"var-lib-calico\" (UniqueName: 
\"kubernetes.io/host-path/d57dcd47-d523-44d8-a619-0cd862fb730c-var-lib-calico\") pod \"tigera-operator-76c4976dd7-vnb2n\" (UID: \"d57dcd47-d523-44d8-a619-0cd862fb730c\") " pod="tigera-operator/tigera-operator-76c4976dd7-vnb2n" Jan 29 11:35:40.287823 systemd[1]: Started cri-containerd-5b7c351af3fd7a31b64362a2797306236187ae4ee9aaff7338a4d1285eed5535.scope - libcontainer container 5b7c351af3fd7a31b64362a2797306236187ae4ee9aaff7338a4d1285eed5535. Jan 29 11:35:40.330594 containerd[1492]: time="2025-01-29T11:35:40.330540946Z" level=info msg="StartContainer for \"5b7c351af3fd7a31b64362a2797306236187ae4ee9aaff7338a4d1285eed5535\" returns successfully" Jan 29 11:35:40.447272 kubelet[2560]: E0129 11:35:40.447103 2560 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jan 29 11:35:40.532034 containerd[1492]: time="2025-01-29T11:35:40.531957352Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:tigera-operator-76c4976dd7-vnb2n,Uid:d57dcd47-d523-44d8-a619-0cd862fb730c,Namespace:tigera-operator,Attempt:0,}" Jan 29 11:35:40.566717 containerd[1492]: time="2025-01-29T11:35:40.566393023Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Jan 29 11:35:40.566717 containerd[1492]: time="2025-01-29T11:35:40.566464179Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Jan 29 11:35:40.566717 containerd[1492]: time="2025-01-29T11:35:40.566478356Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Jan 29 11:35:40.566717 containerd[1492]: time="2025-01-29T11:35:40.566572557Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." 
runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Jan 29 11:35:40.589856 systemd[1]: Started cri-containerd-06c4174cc399fc0fafda9ac4e282352183b378e7edce89e83dfd29be84897c03.scope - libcontainer container 06c4174cc399fc0fafda9ac4e282352183b378e7edce89e83dfd29be84897c03. Jan 29 11:35:40.629848 containerd[1492]: time="2025-01-29T11:35:40.629804082Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:tigera-operator-76c4976dd7-vnb2n,Uid:d57dcd47-d523-44d8-a619-0cd862fb730c,Namespace:tigera-operator,Attempt:0,} returns sandbox id \"06c4174cc399fc0fafda9ac4e282352183b378e7edce89e83dfd29be84897c03\"" Jan 29 11:35:40.632185 containerd[1492]: time="2025-01-29T11:35:40.632111113Z" level=info msg="PullImage \"quay.io/tigera/operator:v1.36.2\"" Jan 29 11:35:40.990179 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount2288691224.mount: Deactivated successfully. Jan 29 11:35:41.075791 kubelet[2560]: E0129 11:35:41.075753 2560 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jan 29 11:35:41.075960 kubelet[2560]: E0129 11:35:41.075769 2560 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jan 29 11:35:41.863152 kubelet[2560]: E0129 11:35:41.863118 2560 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jan 29 11:35:41.873551 kubelet[2560]: I0129 11:35:41.873374 2560 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/kube-proxy-r8c7p" podStartSLOduration=2.8733572069999997 podStartE2EDuration="2.873357207s" podCreationTimestamp="2025-01-29 11:35:39 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" 
observedRunningTime="2025-01-29 11:35:41.171163055 +0000 UTC m=+7.191972833" watchObservedRunningTime="2025-01-29 11:35:41.873357207 +0000 UTC m=+7.894166975" Jan 29 11:35:42.077331 kubelet[2560]: E0129 11:35:42.077297 2560 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jan 29 11:35:45.980751 kubelet[2560]: E0129 11:35:45.980682 2560 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jan 29 11:35:46.082174 kubelet[2560]: E0129 11:35:46.082141 2560 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jan 29 11:35:48.391212 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount3238056745.mount: Deactivated successfully. Jan 29 11:35:48.707507 containerd[1492]: time="2025-01-29T11:35:48.707375798Z" level=info msg="ImageCreate event name:\"quay.io/tigera/operator:v1.36.2\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jan 29 11:35:48.708370 containerd[1492]: time="2025-01-29T11:35:48.708331735Z" level=info msg="stop pulling image quay.io/tigera/operator:v1.36.2: active requests=0, bytes read=21762497" Jan 29 11:35:48.709546 containerd[1492]: time="2025-01-29T11:35:48.709513649Z" level=info msg="ImageCreate event name:\"sha256:3045aa4a360d468ed15090f280e94c54bf4678269a6e863a9ebcf5b31534a346\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jan 29 11:35:48.711855 containerd[1492]: time="2025-01-29T11:35:48.711810452Z" level=info msg="ImageCreate event name:\"quay.io/tigera/operator@sha256:fc9ea45f2475fd99db1b36d2ff180a50017b1a5ea0e82a171c6b439b3a620764\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jan 29 11:35:48.712449 containerd[1492]: 
time="2025-01-29T11:35:48.712416153Z" level=info msg="Pulled image \"quay.io/tigera/operator:v1.36.2\" with image id \"sha256:3045aa4a360d468ed15090f280e94c54bf4678269a6e863a9ebcf5b31534a346\", repo tag \"quay.io/tigera/operator:v1.36.2\", repo digest \"quay.io/tigera/operator@sha256:fc9ea45f2475fd99db1b36d2ff180a50017b1a5ea0e82a171c6b439b3a620764\", size \"21758492\" in 8.080222293s" Jan 29 11:35:48.712449 containerd[1492]: time="2025-01-29T11:35:48.712443566Z" level=info msg="PullImage \"quay.io/tigera/operator:v1.36.2\" returns image reference \"sha256:3045aa4a360d468ed15090f280e94c54bf4678269a6e863a9ebcf5b31534a346\"" Jan 29 11:35:48.714304 containerd[1492]: time="2025-01-29T11:35:48.714265506Z" level=info msg="CreateContainer within sandbox \"06c4174cc399fc0fafda9ac4e282352183b378e7edce89e83dfd29be84897c03\" for container &ContainerMetadata{Name:tigera-operator,Attempt:0,}" Jan 29 11:35:48.726233 containerd[1492]: time="2025-01-29T11:35:48.726193625Z" level=info msg="CreateContainer within sandbox \"06c4174cc399fc0fafda9ac4e282352183b378e7edce89e83dfd29be84897c03\" for &ContainerMetadata{Name:tigera-operator,Attempt:0,} returns container id \"2ab145cc84c995b3822b419ebaec785296441c3b3e9614293cc87125d423cc42\"" Jan 29 11:35:48.726772 containerd[1492]: time="2025-01-29T11:35:48.726736347Z" level=info msg="StartContainer for \"2ab145cc84c995b3822b419ebaec785296441c3b3e9614293cc87125d423cc42\"" Jan 29 11:35:48.762856 systemd[1]: Started cri-containerd-2ab145cc84c995b3822b419ebaec785296441c3b3e9614293cc87125d423cc42.scope - libcontainer container 2ab145cc84c995b3822b419ebaec785296441c3b3e9614293cc87125d423cc42. 
Jan 29 11:35:48.909558 containerd[1492]: time="2025-01-29T11:35:48.909504331Z" level=info msg="StartContainer for \"2ab145cc84c995b3822b419ebaec785296441c3b3e9614293cc87125d423cc42\" returns successfully" Jan 29 11:35:49.153901 kubelet[2560]: I0129 11:35:49.153837 2560 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="tigera-operator/tigera-operator-76c4976dd7-vnb2n" podStartSLOduration=1.07188923 podStartE2EDuration="9.153819753s" podCreationTimestamp="2025-01-29 11:35:40 +0000 UTC" firstStartedPulling="2025-01-29 11:35:40.63122249 +0000 UTC m=+6.652032258" lastFinishedPulling="2025-01-29 11:35:48.713153013 +0000 UTC m=+14.733962781" observedRunningTime="2025-01-29 11:35:49.153664809 +0000 UTC m=+15.174474577" watchObservedRunningTime="2025-01-29 11:35:49.153819753 +0000 UTC m=+15.174629531" Jan 29 11:35:49.219091 update_engine[1477]: I20250129 11:35:49.219022 1477 update_attempter.cc:509] Updating boot flags... Jan 29 11:35:49.269706 kernel: BTRFS warning: duplicate device /dev/vda3 devid 1 generation 38 scanned by (udev-worker) (2948) Jan 29 11:35:49.312660 kernel: BTRFS warning: duplicate device /dev/vda3 devid 1 generation 38 scanned by (udev-worker) (2947) Jan 29 11:35:49.524647 kernel: BTRFS warning: duplicate device /dev/vda3 devid 1 generation 38 scanned by (udev-worker) (2947) Jan 29 11:35:52.149861 systemd[1]: Created slice kubepods-besteffort-pod63e82303_f283_4880_ae72_28401c0f1e1d.slice - libcontainer container kubepods-besteffort-pod63e82303_f283_4880_ae72_28401c0f1e1d.slice. Jan 29 11:35:52.176097 systemd[1]: Created slice kubepods-besteffort-pod2baf1b3f_6711_40d8_b04b_4c08b2793a8d.slice - libcontainer container kubepods-besteffort-pod2baf1b3f_6711_40d8_b04b_4c08b2793a8d.slice. 
Jan 29 11:35:52.259573 kubelet[2560]: I0129 11:35:52.259503 2560 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"tigera-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/63e82303-f283-4880-ae72-28401c0f1e1d-tigera-ca-bundle\") pod \"calico-typha-c449f648f-wk9h8\" (UID: \"63e82303-f283-4880-ae72-28401c0f1e1d\") " pod="calico-system/calico-typha-c449f648f-wk9h8" Jan 29 11:35:52.259573 kubelet[2560]: I0129 11:35:52.259554 2560 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-5rs6p\" (UniqueName: \"kubernetes.io/projected/63e82303-f283-4880-ae72-28401c0f1e1d-kube-api-access-5rs6p\") pod \"calico-typha-c449f648f-wk9h8\" (UID: \"63e82303-f283-4880-ae72-28401c0f1e1d\") " pod="calico-system/calico-typha-c449f648f-wk9h8" Jan 29 11:35:52.259573 kubelet[2560]: I0129 11:35:52.259572 2560 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"typha-certs\" (UniqueName: \"kubernetes.io/secret/63e82303-f283-4880-ae72-28401c0f1e1d-typha-certs\") pod \"calico-typha-c449f648f-wk9h8\" (UID: \"63e82303-f283-4880-ae72-28401c0f1e1d\") " pod="calico-system/calico-typha-c449f648f-wk9h8" Jan 29 11:35:52.285370 kubelet[2560]: E0129 11:35:52.285302 2560 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" pod="calico-system/csi-node-driver-s9vh2" podUID="0ee1d7b9-9e01-4183-97ec-91d9420b2dab" Jan 29 11:35:52.359986 kubelet[2560]: I0129 11:35:52.359946 2560 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/2baf1b3f-6711-40d8-b04b-4c08b2793a8d-lib-modules\") pod \"calico-node-w84hj\" (UID: \"2baf1b3f-6711-40d8-b04b-4c08b2793a8d\") " pod="calico-system/calico-node-w84hj" 
Jan 29 11:35:52.359986 kubelet[2560]: I0129 11:35:52.359982 2560 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cni-log-dir\" (UniqueName: \"kubernetes.io/host-path/2baf1b3f-6711-40d8-b04b-4c08b2793a8d-cni-log-dir\") pod \"calico-node-w84hj\" (UID: \"2baf1b3f-6711-40d8-b04b-4c08b2793a8d\") " pod="calico-system/calico-node-w84hj" Jan 29 11:35:52.359986 kubelet[2560]: I0129 11:35:52.359998 2560 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"var-lib-calico\" (UniqueName: \"kubernetes.io/host-path/2baf1b3f-6711-40d8-b04b-4c08b2793a8d-var-lib-calico\") pod \"calico-node-w84hj\" (UID: \"2baf1b3f-6711-40d8-b04b-4c08b2793a8d\") " pod="calico-system/calico-node-w84hj" Jan 29 11:35:52.360218 kubelet[2560]: I0129 11:35:52.360024 2560 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"policysync\" (UniqueName: \"kubernetes.io/host-path/2baf1b3f-6711-40d8-b04b-4c08b2793a8d-policysync\") pod \"calico-node-w84hj\" (UID: \"2baf1b3f-6711-40d8-b04b-4c08b2793a8d\") " pod="calico-system/calico-node-w84hj" Jan 29 11:35:52.360218 kubelet[2560]: I0129 11:35:52.360042 2560 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-wlxxp\" (UniqueName: \"kubernetes.io/projected/2baf1b3f-6711-40d8-b04b-4c08b2793a8d-kube-api-access-wlxxp\") pod \"calico-node-w84hj\" (UID: \"2baf1b3f-6711-40d8-b04b-4c08b2793a8d\") " pod="calico-system/calico-node-w84hj" Jan 29 11:35:52.360218 kubelet[2560]: I0129 11:35:52.360058 2560 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"tigera-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/2baf1b3f-6711-40d8-b04b-4c08b2793a8d-tigera-ca-bundle\") pod \"calico-node-w84hj\" (UID: \"2baf1b3f-6711-40d8-b04b-4c08b2793a8d\") " pod="calico-system/calico-node-w84hj" Jan 29 11:35:52.360218 kubelet[2560]: I0129 11:35:52.360072 
2560 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"flexvol-driver-host\" (UniqueName: \"kubernetes.io/host-path/2baf1b3f-6711-40d8-b04b-4c08b2793a8d-flexvol-driver-host\") pod \"calico-node-w84hj\" (UID: \"2baf1b3f-6711-40d8-b04b-4c08b2793a8d\") " pod="calico-system/calico-node-w84hj" Jan 29 11:35:52.360218 kubelet[2560]: I0129 11:35:52.360092 2560 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"xtables-lock\" (UniqueName: \"kubernetes.io/host-path/2baf1b3f-6711-40d8-b04b-4c08b2793a8d-xtables-lock\") pod \"calico-node-w84hj\" (UID: \"2baf1b3f-6711-40d8-b04b-4c08b2793a8d\") " pod="calico-system/calico-node-w84hj" Jan 29 11:35:52.360445 kubelet[2560]: I0129 11:35:52.360133 2560 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cni-bin-dir\" (UniqueName: \"kubernetes.io/host-path/2baf1b3f-6711-40d8-b04b-4c08b2793a8d-cni-bin-dir\") pod \"calico-node-w84hj\" (UID: \"2baf1b3f-6711-40d8-b04b-4c08b2793a8d\") " pod="calico-system/calico-node-w84hj" Jan 29 11:35:52.360445 kubelet[2560]: I0129 11:35:52.360178 2560 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"node-certs\" (UniqueName: \"kubernetes.io/secret/2baf1b3f-6711-40d8-b04b-4c08b2793a8d-node-certs\") pod \"calico-node-w84hj\" (UID: \"2baf1b3f-6711-40d8-b04b-4c08b2793a8d\") " pod="calico-system/calico-node-w84hj" Jan 29 11:35:52.360445 kubelet[2560]: I0129 11:35:52.360209 2560 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"var-run-calico\" (UniqueName: \"kubernetes.io/host-path/2baf1b3f-6711-40d8-b04b-4c08b2793a8d-var-run-calico\") pod \"calico-node-w84hj\" (UID: \"2baf1b3f-6711-40d8-b04b-4c08b2793a8d\") " pod="calico-system/calico-node-w84hj" Jan 29 11:35:52.360445 kubelet[2560]: I0129 11:35:52.360229 2560 reconciler_common.go:245] 
"operationExecutor.VerifyControllerAttachedVolume started for volume \"cni-net-dir\" (UniqueName: \"kubernetes.io/host-path/2baf1b3f-6711-40d8-b04b-4c08b2793a8d-cni-net-dir\") pod \"calico-node-w84hj\" (UID: \"2baf1b3f-6711-40d8-b04b-4c08b2793a8d\") " pod="calico-system/calico-node-w84hj" Jan 29 11:35:52.456685 kubelet[2560]: E0129 11:35:52.456030 2560 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jan 29 11:35:52.457159 containerd[1492]: time="2025-01-29T11:35:52.457125638Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-typha-c449f648f-wk9h8,Uid:63e82303-f283-4880-ae72-28401c0f1e1d,Namespace:calico-system,Attempt:0,}" Jan 29 11:35:52.461151 kubelet[2560]: I0129 11:35:52.461119 2560 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"varrun\" (UniqueName: \"kubernetes.io/host-path/0ee1d7b9-9e01-4183-97ec-91d9420b2dab-varrun\") pod \"csi-node-driver-s9vh2\" (UID: \"0ee1d7b9-9e01-4183-97ec-91d9420b2dab\") " pod="calico-system/csi-node-driver-s9vh2" Jan 29 11:35:52.461208 kubelet[2560]: I0129 11:35:52.461172 2560 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"registration-dir\" (UniqueName: \"kubernetes.io/host-path/0ee1d7b9-9e01-4183-97ec-91d9420b2dab-registration-dir\") pod \"csi-node-driver-s9vh2\" (UID: \"0ee1d7b9-9e01-4183-97ec-91d9420b2dab\") " pod="calico-system/csi-node-driver-s9vh2" Jan 29 11:35:52.461241 kubelet[2560]: I0129 11:35:52.461216 2560 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-75j7z\" (UniqueName: \"kubernetes.io/projected/0ee1d7b9-9e01-4183-97ec-91d9420b2dab-kube-api-access-75j7z\") pod \"csi-node-driver-s9vh2\" (UID: \"0ee1d7b9-9e01-4183-97ec-91d9420b2dab\") " pod="calico-system/csi-node-driver-s9vh2" Jan 29 11:35:52.461591 kubelet[2560]: 
I0129 11:35:52.461337 2560 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kubelet-dir\" (UniqueName: \"kubernetes.io/host-path/0ee1d7b9-9e01-4183-97ec-91d9420b2dab-kubelet-dir\") pod \"csi-node-driver-s9vh2\" (UID: \"0ee1d7b9-9e01-4183-97ec-91d9420b2dab\") " pod="calico-system/csi-node-driver-s9vh2" Jan 29 11:35:52.461591 kubelet[2560]: I0129 11:35:52.461389 2560 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"socket-dir\" (UniqueName: \"kubernetes.io/host-path/0ee1d7b9-9e01-4183-97ec-91d9420b2dab-socket-dir\") pod \"csi-node-driver-s9vh2\" (UID: \"0ee1d7b9-9e01-4183-97ec-91d9420b2dab\") " pod="calico-system/csi-node-driver-s9vh2" Jan 29 11:35:52.464049 kubelet[2560]: E0129 11:35:52.464027 2560 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 29 11:35:52.464049 kubelet[2560]: W0129 11:35:52.464047 2560 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 29 11:35:52.464173 kubelet[2560]: E0129 11:35:52.464067 2560 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Jan 29 11:35:52.466349 kubelet[2560]: E0129 11:35:52.465810 2560 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 29 11:35:52.466349 kubelet[2560]: W0129 11:35:52.465824 2560 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 29 11:35:52.466349 kubelet[2560]: E0129 11:35:52.465836 2560 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jan 29 11:35:52.471577 kubelet[2560]: E0129 11:35:52.471211 2560 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 29 11:35:52.471577 kubelet[2560]: W0129 11:35:52.471227 2560 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 29 11:35:52.471577 kubelet[2560]: E0129 11:35:52.471244 2560 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Jan 29 11:35:52.479430 kubelet[2560]: E0129 11:35:52.479407 2560 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jan 29 11:35:52.480866 containerd[1492]: time="2025-01-29T11:35:52.480824046Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-node-w84hj,Uid:2baf1b3f-6711-40d8-b04b-4c08b2793a8d,Namespace:calico-system,Attempt:0,}" Jan 29 11:35:52.484312 containerd[1492]: time="2025-01-29T11:35:52.484220288Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Jan 29 11:35:52.484312 containerd[1492]: time="2025-01-29T11:35:52.484286404Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Jan 29 11:35:52.484312 containerd[1492]: time="2025-01-29T11:35:52.484297034Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Jan 29 11:35:52.485222 containerd[1492]: time="2025-01-29T11:35:52.484368118Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Jan 29 11:35:52.502802 systemd[1]: Started cri-containerd-554e3951591d666de3e7f69d2759a8564253d7226e2460886c3fa81e94fa8c51.scope - libcontainer container 554e3951591d666de3e7f69d2759a8564253d7226e2460886c3fa81e94fa8c51. Jan 29 11:35:52.512883 containerd[1492]: time="2025-01-29T11:35:52.512611885Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Jan 29 11:35:52.512883 containerd[1492]: time="2025-01-29T11:35:52.512687729Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." 
runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Jan 29 11:35:52.512883 containerd[1492]: time="2025-01-29T11:35:52.512701676Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Jan 29 11:35:52.512883 containerd[1492]: time="2025-01-29T11:35:52.512781436Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Jan 29 11:35:52.530797 systemd[1]: Started cri-containerd-1c1ed092f32cbd93dffbdbae963f10cdf8637329ee99fc641d84f4d0568a92d5.scope - libcontainer container 1c1ed092f32cbd93dffbdbae963f10cdf8637329ee99fc641d84f4d0568a92d5. Jan 29 11:35:52.539861 containerd[1492]: time="2025-01-29T11:35:52.539825181Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-typha-c449f648f-wk9h8,Uid:63e82303-f283-4880-ae72-28401c0f1e1d,Namespace:calico-system,Attempt:0,} returns sandbox id \"554e3951591d666de3e7f69d2759a8564253d7226e2460886c3fa81e94fa8c51\"" Jan 29 11:35:52.540538 kubelet[2560]: E0129 11:35:52.540512 2560 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jan 29 11:35:52.542096 containerd[1492]: time="2025-01-29T11:35:52.542033944Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/typha:v3.29.1\"" Jan 29 11:35:52.555714 containerd[1492]: time="2025-01-29T11:35:52.555676315Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-node-w84hj,Uid:2baf1b3f-6711-40d8-b04b-4c08b2793a8d,Namespace:calico-system,Attempt:0,} returns sandbox id \"1c1ed092f32cbd93dffbdbae963f10cdf8637329ee99fc641d84f4d0568a92d5\"" Jan 29 11:35:52.556263 kubelet[2560]: E0129 11:35:52.556237 2560 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jan 29 11:35:52.562250 
kubelet[2560]: E0129 11:35:52.562218 2560 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 29 11:35:52.562250 kubelet[2560]: W0129 11:35:52.562237 2560 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 29 11:35:52.562375 kubelet[2560]: E0129 11:35:52.562255 2560 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jan 29 11:35:52.562519 kubelet[2560]: E0129 11:35:52.562492 2560 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 29 11:35:52.562519 kubelet[2560]: W0129 11:35:52.562505 2560 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 29 11:35:52.562519 kubelet[2560]: E0129 11:35:52.562518 2560 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jan 29 11:35:52.562760 kubelet[2560]: E0129 11:35:52.562736 2560 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 29 11:35:52.562760 kubelet[2560]: W0129 11:35:52.562747 2560 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 29 11:35:52.562760 kubelet[2560]: E0129 11:35:52.562760 2560 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Jan 29 11:35:52.562985 kubelet[2560]: E0129 11:35:52.562960 2560 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 29 11:35:52.562985 kubelet[2560]: W0129 11:35:52.562972 2560 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 29 11:35:52.562985 kubelet[2560]: E0129 11:35:52.562984 2560 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jan 29 11:35:52.563209 kubelet[2560]: E0129 11:35:52.563190 2560 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 29 11:35:52.563209 kubelet[2560]: W0129 11:35:52.563203 2560 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 29 11:35:52.563284 kubelet[2560]: E0129 11:35:52.563218 2560 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Jan 29 11:35:52.563447 kubelet[2560]: E0129 11:35:52.563421 2560 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 29 11:35:52.563447 kubelet[2560]: W0129 11:35:52.563433 2560 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 29 11:35:52.563447 kubelet[2560]: E0129 11:35:52.563446 2560 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jan 29 11:35:52.563665 kubelet[2560]: E0129 11:35:52.563651 2560 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 29 11:35:52.563665 kubelet[2560]: W0129 11:35:52.563661 2560 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 29 11:35:52.563737 kubelet[2560]: E0129 11:35:52.563674 2560 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Jan 29 11:35:52.563934 kubelet[2560]: E0129 11:35:52.563912 2560 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 29 11:35:52.563934 kubelet[2560]: W0129 11:35:52.563924 2560 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 29 11:35:52.564004 kubelet[2560]: E0129 11:35:52.563951 2560 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jan 29 11:35:52.564141 kubelet[2560]: E0129 11:35:52.564123 2560 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 29 11:35:52.564141 kubelet[2560]: W0129 11:35:52.564135 2560 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 29 11:35:52.564209 kubelet[2560]: E0129 11:35:52.564160 2560 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Jan 29 11:35:52.564356 kubelet[2560]: E0129 11:35:52.564338 2560 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 29 11:35:52.564356 kubelet[2560]: W0129 11:35:52.564349 2560 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 29 11:35:52.564428 kubelet[2560]: E0129 11:35:52.564375 2560 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jan 29 11:35:52.564550 kubelet[2560]: E0129 11:35:52.564533 2560 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 29 11:35:52.564550 kubelet[2560]: W0129 11:35:52.564544 2560 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 29 11:35:52.564618 kubelet[2560]: E0129 11:35:52.564578 2560 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Jan 29 11:35:52.564780 kubelet[2560]: E0129 11:35:52.564765 2560 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 29 11:35:52.564780 kubelet[2560]: W0129 11:35:52.564777 2560 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 29 11:35:52.564879 kubelet[2560]: E0129 11:35:52.564849 2560 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jan 29 11:35:52.565028 kubelet[2560]: E0129 11:35:52.565014 2560 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 29 11:35:52.565066 kubelet[2560]: W0129 11:35:52.565036 2560 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 29 11:35:52.565066 kubelet[2560]: E0129 11:35:52.565052 2560 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Jan 29 11:35:52.565335 kubelet[2560]: E0129 11:35:52.565318 2560 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 29 11:35:52.565335 kubelet[2560]: W0129 11:35:52.565331 2560 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 29 11:35:52.565428 kubelet[2560]: E0129 11:35:52.565347 2560 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jan 29 11:35:52.565701 kubelet[2560]: E0129 11:35:52.565687 2560 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 29 11:35:52.565701 kubelet[2560]: W0129 11:35:52.565698 2560 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 29 11:35:52.565800 kubelet[2560]: E0129 11:35:52.565723 2560 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Jan 29 11:35:52.565985 kubelet[2560]: E0129 11:35:52.565971 2560 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 29 11:35:52.565985 kubelet[2560]: W0129 11:35:52.565983 2560 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 29 11:35:52.566119 kubelet[2560]: E0129 11:35:52.566101 2560 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jan 29 11:35:52.566248 kubelet[2560]: E0129 11:35:52.566224 2560 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 29 11:35:52.566248 kubelet[2560]: W0129 11:35:52.566236 2560 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 29 11:35:52.566349 kubelet[2560]: E0129 11:35:52.566315 2560 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Jan 29 11:35:52.566660 kubelet[2560]: E0129 11:35:52.566620 2560 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 29 11:35:52.566660 kubelet[2560]: W0129 11:35:52.566651 2560 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 29 11:35:52.566750 kubelet[2560]: E0129 11:35:52.566680 2560 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jan 29 11:35:52.567088 kubelet[2560]: E0129 11:35:52.567062 2560 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 29 11:35:52.567088 kubelet[2560]: W0129 11:35:52.567077 2560 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 29 11:35:52.567297 kubelet[2560]: E0129 11:35:52.567197 2560 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Jan 29 11:35:52.567343 kubelet[2560]: E0129 11:35:52.567324 2560 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 29 11:35:52.567343 kubelet[2560]: W0129 11:35:52.567333 2560 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 29 11:35:52.567409 kubelet[2560]: E0129 11:35:52.567347 2560 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jan 29 11:35:52.567843 kubelet[2560]: E0129 11:35:52.567550 2560 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 29 11:35:52.567843 kubelet[2560]: W0129 11:35:52.567562 2560 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 29 11:35:52.567843 kubelet[2560]: E0129 11:35:52.567582 2560 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Jan 29 11:35:52.567962 kubelet[2560]: E0129 11:35:52.567889 2560 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 29 11:35:52.567962 kubelet[2560]: W0129 11:35:52.567899 2560 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 29 11:35:52.567962 kubelet[2560]: E0129 11:35:52.567916 2560 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jan 29 11:35:52.568895 kubelet[2560]: E0129 11:35:52.568876 2560 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 29 11:35:52.568895 kubelet[2560]: W0129 11:35:52.568891 2560 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 29 11:35:52.568996 kubelet[2560]: E0129 11:35:52.568928 2560 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Jan 29 11:35:52.569130 kubelet[2560]: E0129 11:35:52.569112 2560 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 29 11:35:52.569204 kubelet[2560]: W0129 11:35:52.569138 2560 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 29 11:35:52.569204 kubelet[2560]: E0129 11:35:52.569151 2560 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jan 29 11:35:52.569434 kubelet[2560]: E0129 11:35:52.569410 2560 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 29 11:35:52.569489 kubelet[2560]: W0129 11:35:52.569427 2560 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 29 11:35:52.569489 kubelet[2560]: E0129 11:35:52.569465 2560 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Jan 29 11:35:52.576007 kubelet[2560]: E0129 11:35:52.575990 2560 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 29 11:35:52.576007 kubelet[2560]: W0129 11:35:52.576003 2560 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 29 11:35:52.576090 kubelet[2560]: E0129 11:35:52.576017 2560 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jan 29 11:35:54.057267 kubelet[2560]: E0129 11:35:54.057184 2560 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" pod="calico-system/csi-node-driver-s9vh2" podUID="0ee1d7b9-9e01-4183-97ec-91d9420b2dab" Jan 29 11:35:54.130607 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount176626415.mount: Deactivated successfully. 
Jan 29 11:35:55.328651 containerd[1492]: time="2025-01-29T11:35:55.328594965Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/typha:v3.29.1\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jan 29 11:35:55.400985 containerd[1492]: time="2025-01-29T11:35:55.400927042Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/typha:v3.29.1: active requests=0, bytes read=31343363" Jan 29 11:35:55.438763 containerd[1492]: time="2025-01-29T11:35:55.438710644Z" level=info msg="ImageCreate event name:\"sha256:4cb3738506f5a9c530033d1e24fd6b9ec618518a2ec8b012ded33572be06ab44\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jan 29 11:35:55.501589 containerd[1492]: time="2025-01-29T11:35:55.501515797Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/typha@sha256:768a194e1115c73bcbf35edb7afd18a63e16e08d940c79993565b6a3cca2da7c\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jan 29 11:35:55.502190 containerd[1492]: time="2025-01-29T11:35:55.502160947Z" level=info msg="Pulled image \"ghcr.io/flatcar/calico/typha:v3.29.1\" with image id \"sha256:4cb3738506f5a9c530033d1e24fd6b9ec618518a2ec8b012ded33572be06ab44\", repo tag \"ghcr.io/flatcar/calico/typha:v3.29.1\", repo digest \"ghcr.io/flatcar/calico/typha@sha256:768a194e1115c73bcbf35edb7afd18a63e16e08d940c79993565b6a3cca2da7c\", size \"31343217\" in 2.960092507s" Jan 29 11:35:55.502227 containerd[1492]: time="2025-01-29T11:35:55.502189851Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/typha:v3.29.1\" returns image reference \"sha256:4cb3738506f5a9c530033d1e24fd6b9ec618518a2ec8b012ded33572be06ab44\"" Jan 29 11:35:55.503433 containerd[1492]: time="2025-01-29T11:35:55.503399298Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/pod2daemon-flexvol:v3.29.1\"" Jan 29 11:35:55.509848 containerd[1492]: time="2025-01-29T11:35:55.509801824Z" level=info msg="CreateContainer within sandbox \"554e3951591d666de3e7f69d2759a8564253d7226e2460886c3fa81e94fa8c51\" for 
container &ContainerMetadata{Name:calico-typha,Attempt:0,}" Jan 29 11:35:56.056582 kubelet[2560]: E0129 11:35:56.056539 2560 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" pod="calico-system/csi-node-driver-s9vh2" podUID="0ee1d7b9-9e01-4183-97ec-91d9420b2dab" Jan 29 11:35:56.072266 containerd[1492]: time="2025-01-29T11:35:56.072213723Z" level=info msg="CreateContainer within sandbox \"554e3951591d666de3e7f69d2759a8564253d7226e2460886c3fa81e94fa8c51\" for &ContainerMetadata{Name:calico-typha,Attempt:0,} returns container id \"566a7f771259f96b4a1da8514dafbdb64d3306d81a8f6f6926f6a873d19aff1b\"" Jan 29 11:35:56.072665 containerd[1492]: time="2025-01-29T11:35:56.072587340Z" level=info msg="StartContainer for \"566a7f771259f96b4a1da8514dafbdb64d3306d81a8f6f6926f6a873d19aff1b\"" Jan 29 11:35:56.113752 systemd[1]: Started cri-containerd-566a7f771259f96b4a1da8514dafbdb64d3306d81a8f6f6926f6a873d19aff1b.scope - libcontainer container 566a7f771259f96b4a1da8514dafbdb64d3306d81a8f6f6926f6a873d19aff1b. 
Jan 29 11:35:56.257826 containerd[1492]: time="2025-01-29T11:35:56.257782731Z" level=info msg="StartContainer for \"566a7f771259f96b4a1da8514dafbdb64d3306d81a8f6f6926f6a873d19aff1b\" returns successfully" Jan 29 11:35:57.106993 kubelet[2560]: E0129 11:35:57.106956 2560 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jan 29 11:35:57.193391 kubelet[2560]: E0129 11:35:57.193352 2560 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 29 11:35:57.193391 kubelet[2560]: W0129 11:35:57.193376 2560 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 29 11:35:57.193391 kubelet[2560]: E0129 11:35:57.193396 2560 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jan 29 11:35:57.194049 kubelet[2560]: E0129 11:35:57.194035 2560 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 29 11:35:57.194086 kubelet[2560]: W0129 11:35:57.194048 2560 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 29 11:35:57.194086 kubelet[2560]: E0129 11:35:57.194070 2560 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Jan 29 11:35:57.194294 kubelet[2560]: E0129 11:35:57.194282 2560 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 29 11:35:57.194294 kubelet[2560]: W0129 11:35:57.194293 2560 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 29 11:35:57.194366 kubelet[2560]: E0129 11:35:57.194302 2560 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jan 29 11:35:57.194521 kubelet[2560]: E0129 11:35:57.194510 2560 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 29 11:35:57.194550 kubelet[2560]: W0129 11:35:57.194521 2560 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 29 11:35:57.194550 kubelet[2560]: E0129 11:35:57.194531 2560 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Jan 29 11:35:57.194741 kubelet[2560]: E0129 11:35:57.194729 2560 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 29 11:35:57.194741 kubelet[2560]: W0129 11:35:57.194739 2560 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 29 11:35:57.194814 kubelet[2560]: E0129 11:35:57.194749 2560 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jan 29 11:35:57.194964 kubelet[2560]: E0129 11:35:57.194953 2560 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 29 11:35:57.195005 kubelet[2560]: W0129 11:35:57.194963 2560 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 29 11:35:57.195005 kubelet[2560]: E0129 11:35:57.194972 2560 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Jan 29 11:35:57.195170 kubelet[2560]: E0129 11:35:57.195159 2560 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 29 11:35:57.195200 kubelet[2560]: W0129 11:35:57.195169 2560 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 29 11:35:57.195200 kubelet[2560]: E0129 11:35:57.195178 2560 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jan 29 11:35:57.195384 kubelet[2560]: E0129 11:35:57.195372 2560 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 29 11:35:57.195416 kubelet[2560]: W0129 11:35:57.195384 2560 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 29 11:35:57.195416 kubelet[2560]: E0129 11:35:57.195394 2560 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Jan 29 11:35:57.195604 kubelet[2560]: E0129 11:35:57.195586 2560 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 29 11:35:57.195604 kubelet[2560]: W0129 11:35:57.195598 2560 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 29 11:35:57.195669 kubelet[2560]: E0129 11:35:57.195607 2560 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jan 29 11:35:57.195812 kubelet[2560]: E0129 11:35:57.195794 2560 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 29 11:35:57.195812 kubelet[2560]: W0129 11:35:57.195807 2560 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 29 11:35:57.195812 kubelet[2560]: E0129 11:35:57.195817 2560 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Jan 29 11:35:57.196008 kubelet[2560]: E0129 11:35:57.195993 2560 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 29 11:35:57.196050 kubelet[2560]: W0129 11:35:57.196019 2560 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 29 11:35:57.196050 kubelet[2560]: E0129 11:35:57.196030 2560 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jan 29 11:35:57.196257 kubelet[2560]: E0129 11:35:57.196240 2560 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 29 11:35:57.196257 kubelet[2560]: W0129 11:35:57.196253 2560 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 29 11:35:57.196378 kubelet[2560]: E0129 11:35:57.196265 2560 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Jan 29 11:35:57.196523 kubelet[2560]: E0129 11:35:57.196501 2560 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 29 11:35:57.196523 kubelet[2560]: W0129 11:35:57.196513 2560 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 29 11:35:57.196673 kubelet[2560]: E0129 11:35:57.196524 2560 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jan 29 11:35:57.254907 kubelet[2560]: I0129 11:35:57.254844 2560 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="calico-system/calico-typha-c449f648f-wk9h8" podStartSLOduration=2.293502599 podStartE2EDuration="5.254818089s" podCreationTimestamp="2025-01-29 11:35:52 +0000 UTC" firstStartedPulling="2025-01-29 11:35:52.541810341 +0000 UTC m=+18.562620109" lastFinishedPulling="2025-01-29 11:35:55.503125831 +0000 UTC m=+21.523935599" observedRunningTime="2025-01-29 11:35:57.254740753 +0000 UTC m=+23.275550521" watchObservedRunningTime="2025-01-29 11:35:57.254818089 +0000 UTC m=+23.275627858" Jan 29 11:35:57.462871 containerd[1492]: time="2025-01-29T11:35:57.462754008Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/pod2daemon-flexvol:v3.29.1\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jan 29 11:35:57.464072 containerd[1492]: time="2025-01-29T11:35:57.464009710Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/pod2daemon-flexvol:v3.29.1: active requests=0, bytes read=5362121" Jan 29 11:35:57.465145 containerd[1492]: time="2025-01-29T11:35:57.465094189Z" level=info msg="ImageCreate event name:\"sha256:2b7452b763ec8833ca0386ada5fd066e552a9b3b02b8538a5e34cc3d6d3840a6\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jan 29 11:35:57.466972 containerd[1492]: time="2025-01-29T11:35:57.466946908Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/pod2daemon-flexvol@sha256:a63f8b4ff531912d12d143664eb263fdbc6cd7b3ff4aa777dfb6e318a090462c\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jan 29 11:35:57.467531 containerd[1492]: time="2025-01-29T11:35:57.467506766Z" level=info msg="Pulled image \"ghcr.io/flatcar/calico/pod2daemon-flexvol:v3.29.1\" with image id \"sha256:2b7452b763ec8833ca0386ada5fd066e552a9b3b02b8538a5e34cc3d6d3840a6\", repo tag 
\"ghcr.io/flatcar/calico/pod2daemon-flexvol:v3.29.1\", repo digest \"ghcr.io/flatcar/calico/pod2daemon-flexvol@sha256:a63f8b4ff531912d12d143664eb263fdbc6cd7b3ff4aa777dfb6e318a090462c\", size \"6855165\" in 1.964070628s" Jan 29 11:35:57.467569 containerd[1492]: time="2025-01-29T11:35:57.467534949Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/pod2daemon-flexvol:v3.29.1\" returns image reference \"sha256:2b7452b763ec8833ca0386ada5fd066e552a9b3b02b8538a5e34cc3d6d3840a6\"" Jan 29 11:35:57.469521 containerd[1492]: time="2025-01-29T11:35:57.469493739Z" level=info msg="CreateContainer within sandbox \"1c1ed092f32cbd93dffbdbae963f10cdf8637329ee99fc641d84f4d0568a92d5\" for container &ContainerMetadata{Name:flexvol-driver,Attempt:0,}" Jan 29 11:35:57.484447 containerd[1492]: time="2025-01-29T11:35:57.484414434Z" level=info msg="CreateContainer within sandbox \"1c1ed092f32cbd93dffbdbae963f10cdf8637329ee99fc641d84f4d0568a92d5\" for &ContainerMetadata{Name:flexvol-driver,Attempt:0,} returns container id \"35f68bdf655f833b5435a6e3f87fc0196ce21c8701d13ee2f5ac134a8afc14c2\"" Jan 29 11:35:57.484780 containerd[1492]: time="2025-01-29T11:35:57.484753985Z" level=info msg="StartContainer for \"35f68bdf655f833b5435a6e3f87fc0196ce21c8701d13ee2f5ac134a8afc14c2\"" Jan 29 11:35:57.515771 systemd[1]: Started cri-containerd-35f68bdf655f833b5435a6e3f87fc0196ce21c8701d13ee2f5ac134a8afc14c2.scope - libcontainer container 35f68bdf655f833b5435a6e3f87fc0196ce21c8701d13ee2f5ac134a8afc14c2. Jan 29 11:35:57.547129 containerd[1492]: time="2025-01-29T11:35:57.547089433Z" level=info msg="StartContainer for \"35f68bdf655f833b5435a6e3f87fc0196ce21c8701d13ee2f5ac134a8afc14c2\" returns successfully" Jan 29 11:35:57.561777 systemd[1]: cri-containerd-35f68bdf655f833b5435a6e3f87fc0196ce21c8701d13ee2f5ac134a8afc14c2.scope: Deactivated successfully. 
Jan 29 11:35:57.584853 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-35f68bdf655f833b5435a6e3f87fc0196ce21c8701d13ee2f5ac134a8afc14c2-rootfs.mount: Deactivated successfully. Jan 29 11:35:57.945721 containerd[1492]: time="2025-01-29T11:35:57.945656334Z" level=info msg="shim disconnected" id=35f68bdf655f833b5435a6e3f87fc0196ce21c8701d13ee2f5ac134a8afc14c2 namespace=k8s.io Jan 29 11:35:57.945721 containerd[1492]: time="2025-01-29T11:35:57.945715306Z" level=warning msg="cleaning up after shim disconnected" id=35f68bdf655f833b5435a6e3f87fc0196ce21c8701d13ee2f5ac134a8afc14c2 namespace=k8s.io Jan 29 11:35:57.945721 containerd[1492]: time="2025-01-29T11:35:57.945724924Z" level=info msg="cleaning up dead shim" namespace=k8s.io Jan 29 11:35:58.056873 kubelet[2560]: E0129 11:35:58.056818 2560 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" pod="calico-system/csi-node-driver-s9vh2" podUID="0ee1d7b9-9e01-4183-97ec-91d9420b2dab" Jan 29 11:35:58.109153 kubelet[2560]: I0129 11:35:58.109123 2560 prober_manager.go:312] "Failed to trigger a manual run" probe="Readiness" Jan 29 11:35:58.109549 kubelet[2560]: E0129 11:35:58.109425 2560 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jan 29 11:35:58.116386 kubelet[2560]: E0129 11:35:58.116352 2560 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jan 29 11:35:58.117130 containerd[1492]: time="2025-01-29T11:35:58.117083472Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/cni:v3.29.1\"" Jan 29 11:36:00.057868 kubelet[2560]: E0129 11:36:00.057826 2560 pod_workers.go:1301] "Error syncing pod, 
skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" pod="calico-system/csi-node-driver-s9vh2" podUID="0ee1d7b9-9e01-4183-97ec-91d9420b2dab" Jan 29 11:36:02.057346 kubelet[2560]: E0129 11:36:02.057285 2560 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" pod="calico-system/csi-node-driver-s9vh2" podUID="0ee1d7b9-9e01-4183-97ec-91d9420b2dab" Jan 29 11:36:04.174687 kubelet[2560]: E0129 11:36:04.174620 2560 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" pod="calico-system/csi-node-driver-s9vh2" podUID="0ee1d7b9-9e01-4183-97ec-91d9420b2dab" Jan 29 11:36:04.199347 containerd[1492]: time="2025-01-29T11:36:04.199301679Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/cni:v3.29.1\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jan 29 11:36:04.200121 containerd[1492]: time="2025-01-29T11:36:04.200068674Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/cni:v3.29.1: active requests=0, bytes read=96154154" Jan 29 11:36:04.201240 containerd[1492]: time="2025-01-29T11:36:04.201188513Z" level=info msg="ImageCreate event name:\"sha256:7dd6ea186aba0d7a1791a79d426fe854527ca95192b26bbd19e8baf8373f7d0e\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jan 29 11:36:04.203247 containerd[1492]: time="2025-01-29T11:36:04.203224299Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/cni@sha256:21e759d51c90dfb34fc1397dc180dd3a3fb564c2b0580d2f61ffe108f2a3c94b\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jan 29 
11:36:04.204138 containerd[1492]: time="2025-01-29T11:36:04.204116370Z" level=info msg="Pulled image \"ghcr.io/flatcar/calico/cni:v3.29.1\" with image id \"sha256:7dd6ea186aba0d7a1791a79d426fe854527ca95192b26bbd19e8baf8373f7d0e\", repo tag \"ghcr.io/flatcar/calico/cni:v3.29.1\", repo digest \"ghcr.io/flatcar/calico/cni@sha256:21e759d51c90dfb34fc1397dc180dd3a3fb564c2b0580d2f61ffe108f2a3c94b\", size \"97647238\" in 6.086985948s" Jan 29 11:36:04.204199 containerd[1492]: time="2025-01-29T11:36:04.204137730Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/cni:v3.29.1\" returns image reference \"sha256:7dd6ea186aba0d7a1791a79d426fe854527ca95192b26bbd19e8baf8373f7d0e\"" Jan 29 11:36:04.206281 containerd[1492]: time="2025-01-29T11:36:04.206252294Z" level=info msg="CreateContainer within sandbox \"1c1ed092f32cbd93dffbdbae963f10cdf8637329ee99fc641d84f4d0568a92d5\" for container &ContainerMetadata{Name:install-cni,Attempt:0,}" Jan 29 11:36:04.225970 containerd[1492]: time="2025-01-29T11:36:04.225913857Z" level=info msg="CreateContainer within sandbox \"1c1ed092f32cbd93dffbdbae963f10cdf8637329ee99fc641d84f4d0568a92d5\" for &ContainerMetadata{Name:install-cni,Attempt:0,} returns container id \"37f44dfa3c50a02cc30fe2c254c5c127d9d0891c686d8384f718345edb4d23ff\"" Jan 29 11:36:04.226750 containerd[1492]: time="2025-01-29T11:36:04.226645304Z" level=info msg="StartContainer for \"37f44dfa3c50a02cc30fe2c254c5c127d9d0891c686d8384f718345edb4d23ff\"" Jan 29 11:36:04.263788 systemd[1]: Started cri-containerd-37f44dfa3c50a02cc30fe2c254c5c127d9d0891c686d8384f718345edb4d23ff.scope - libcontainer container 37f44dfa3c50a02cc30fe2c254c5c127d9d0891c686d8384f718345edb4d23ff. 
Jan 29 11:36:04.336774 containerd[1492]: time="2025-01-29T11:36:04.336728432Z" level=info msg="StartContainer for \"37f44dfa3c50a02cc30fe2c254c5c127d9d0891c686d8384f718345edb4d23ff\" returns successfully" Jan 29 11:36:05.340149 kubelet[2560]: E0129 11:36:05.340115 2560 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jan 29 11:36:06.056786 kubelet[2560]: E0129 11:36:06.056715 2560 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" pod="calico-system/csi-node-driver-s9vh2" podUID="0ee1d7b9-9e01-4183-97ec-91d9420b2dab" Jan 29 11:36:06.081865 systemd[1]: cri-containerd-37f44dfa3c50a02cc30fe2c254c5c127d9d0891c686d8384f718345edb4d23ff.scope: Deactivated successfully. Jan 29 11:36:06.106401 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-37f44dfa3c50a02cc30fe2c254c5c127d9d0891c686d8384f718345edb4d23ff-rootfs.mount: Deactivated successfully. 
Jan 29 11:36:06.109241 containerd[1492]: time="2025-01-29T11:36:06.109186839Z" level=info msg="shim disconnected" id=37f44dfa3c50a02cc30fe2c254c5c127d9d0891c686d8384f718345edb4d23ff namespace=k8s.io Jan 29 11:36:06.109241 containerd[1492]: time="2025-01-29T11:36:06.109239469Z" level=warning msg="cleaning up after shim disconnected" id=37f44dfa3c50a02cc30fe2c254c5c127d9d0891c686d8384f718345edb4d23ff namespace=k8s.io Jan 29 11:36:06.109620 containerd[1492]: time="2025-01-29T11:36:06.109248676Z" level=info msg="cleaning up dead shim" namespace=k8s.io Jan 29 11:36:06.141678 kubelet[2560]: I0129 11:36:06.141646 2560 kubelet_node_status.go:488] "Fast updating node status as it just became ready" Jan 29 11:36:06.177423 systemd[1]: Created slice kubepods-burstable-podd32f565b_8d7d_47d2_85bf_68725ec04cff.slice - libcontainer container kubepods-burstable-podd32f565b_8d7d_47d2_85bf_68725ec04cff.slice. Jan 29 11:36:06.186484 systemd[1]: Created slice kubepods-besteffort-pod0725190d_a48f_4c98_9011_c6cdb64f50fe.slice - libcontainer container kubepods-besteffort-pod0725190d_a48f_4c98_9011_c6cdb64f50fe.slice. Jan 29 11:36:06.193400 systemd[1]: Created slice kubepods-burstable-podcf2aa93c_f4bc_4322_9163_052200dd877a.slice - libcontainer container kubepods-burstable-podcf2aa93c_f4bc_4322_9163_052200dd877a.slice. Jan 29 11:36:06.201113 systemd[1]: Created slice kubepods-besteffort-podc4ad5d31_5d68_4473_8b3c_72bfc21e63c5.slice - libcontainer container kubepods-besteffort-podc4ad5d31_5d68_4473_8b3c_72bfc21e63c5.slice. Jan 29 11:36:06.206967 systemd[1]: Created slice kubepods-besteffort-pod5e835acf_6d91_46a2_a52a_32309f48a3b4.slice - libcontainer container kubepods-besteffort-pod5e835acf_6d91_46a2_a52a_32309f48a3b4.slice. 
Jan 29 11:36:06.304803 kubelet[2560]: I0129 11:36:06.304745 2560 prober_manager.go:312] "Failed to trigger a manual run" probe="Readiness" Jan 29 11:36:06.305239 kubelet[2560]: E0129 11:36:06.305218 2560 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jan 29 11:36:06.343578 kubelet[2560]: E0129 11:36:06.343468 2560 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jan 29 11:36:06.343578 kubelet[2560]: E0129 11:36:06.343164 2560 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jan 29 11:36:06.345235 containerd[1492]: time="2025-01-29T11:36:06.345174985Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/node:v3.29.1\"" Jan 29 11:36:06.358235 kubelet[2560]: I0129 11:36:06.358182 2560 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/cf2aa93c-f4bc-4322-9163-052200dd877a-config-volume\") pod \"coredns-6f6b679f8f-qqzzk\" (UID: \"cf2aa93c-f4bc-4322-9163-052200dd877a\") " pod="kube-system/coredns-6f6b679f8f-qqzzk" Jan 29 11:36:06.358235 kubelet[2560]: I0129 11:36:06.358226 2560 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/d32f565b-8d7d-47d2-85bf-68725ec04cff-config-volume\") pod \"coredns-6f6b679f8f-8sz4x\" (UID: \"d32f565b-8d7d-47d2-85bf-68725ec04cff\") " pod="kube-system/coredns-6f6b679f8f-8sz4x" Jan 29 11:36:06.358235 kubelet[2560]: I0129 11:36:06.358243 2560 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-84575\" (UniqueName: 
\"kubernetes.io/projected/d32f565b-8d7d-47d2-85bf-68725ec04cff-kube-api-access-84575\") pod \"coredns-6f6b679f8f-8sz4x\" (UID: \"d32f565b-8d7d-47d2-85bf-68725ec04cff\") " pod="kube-system/coredns-6f6b679f8f-8sz4x" Jan 29 11:36:06.358466 kubelet[2560]: I0129 11:36:06.358259 2560 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-l7zbv\" (UniqueName: \"kubernetes.io/projected/cf2aa93c-f4bc-4322-9163-052200dd877a-kube-api-access-l7zbv\") pod \"coredns-6f6b679f8f-qqzzk\" (UID: \"cf2aa93c-f4bc-4322-9163-052200dd877a\") " pod="kube-system/coredns-6f6b679f8f-qqzzk" Jan 29 11:36:06.358466 kubelet[2560]: I0129 11:36:06.358275 2560 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"calico-apiserver-certs\" (UniqueName: \"kubernetes.io/secret/c4ad5d31-5d68-4473-8b3c-72bfc21e63c5-calico-apiserver-certs\") pod \"calico-apiserver-78d7549f7d-g9z2x\" (UID: \"c4ad5d31-5d68-4473-8b3c-72bfc21e63c5\") " pod="calico-apiserver/calico-apiserver-78d7549f7d-g9z2x" Jan 29 11:36:06.358466 kubelet[2560]: I0129 11:36:06.358299 2560 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-cbc8w\" (UniqueName: \"kubernetes.io/projected/c4ad5d31-5d68-4473-8b3c-72bfc21e63c5-kube-api-access-cbc8w\") pod \"calico-apiserver-78d7549f7d-g9z2x\" (UID: \"c4ad5d31-5d68-4473-8b3c-72bfc21e63c5\") " pod="calico-apiserver/calico-apiserver-78d7549f7d-g9z2x" Jan 29 11:36:06.358466 kubelet[2560]: I0129 11:36:06.358314 2560 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"tigera-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/5e835acf-6d91-46a2-a52a-32309f48a3b4-tigera-ca-bundle\") pod \"calico-kube-controllers-7f5f6fb96-hcxll\" (UID: \"5e835acf-6d91-46a2-a52a-32309f48a3b4\") " pod="calico-system/calico-kube-controllers-7f5f6fb96-hcxll" Jan 29 11:36:06.358466 kubelet[2560]: I0129 11:36:06.358346 2560 
reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-rp4nq\" (UniqueName: \"kubernetes.io/projected/0725190d-a48f-4c98-9011-c6cdb64f50fe-kube-api-access-rp4nq\") pod \"calico-apiserver-78d7549f7d-5n5j6\" (UID: \"0725190d-a48f-4c98-9011-c6cdb64f50fe\") " pod="calico-apiserver/calico-apiserver-78d7549f7d-5n5j6" Jan 29 11:36:06.358764 kubelet[2560]: I0129 11:36:06.358363 2560 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-vl64d\" (UniqueName: \"kubernetes.io/projected/5e835acf-6d91-46a2-a52a-32309f48a3b4-kube-api-access-vl64d\") pod \"calico-kube-controllers-7f5f6fb96-hcxll\" (UID: \"5e835acf-6d91-46a2-a52a-32309f48a3b4\") " pod="calico-system/calico-kube-controllers-7f5f6fb96-hcxll" Jan 29 11:36:06.358764 kubelet[2560]: I0129 11:36:06.358380 2560 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"calico-apiserver-certs\" (UniqueName: \"kubernetes.io/secret/0725190d-a48f-4c98-9011-c6cdb64f50fe-calico-apiserver-certs\") pod \"calico-apiserver-78d7549f7d-5n5j6\" (UID: \"0725190d-a48f-4c98-9011-c6cdb64f50fe\") " pod="calico-apiserver/calico-apiserver-78d7549f7d-5n5j6" Jan 29 11:36:06.482557 kubelet[2560]: E0129 11:36:06.482259 2560 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jan 29 11:36:06.483899 containerd[1492]: time="2025-01-29T11:36:06.483832411Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-6f6b679f8f-8sz4x,Uid:d32f565b-8d7d-47d2-85bf-68725ec04cff,Namespace:kube-system,Attempt:0,}" Jan 29 11:36:06.491009 containerd[1492]: time="2025-01-29T11:36:06.490964782Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-apiserver-78d7549f7d-5n5j6,Uid:0725190d-a48f-4c98-9011-c6cdb64f50fe,Namespace:calico-apiserver,Attempt:0,}" Jan 29 
11:36:06.498563 kubelet[2560]: E0129 11:36:06.498496 2560 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jan 29 11:36:06.499184 containerd[1492]: time="2025-01-29T11:36:06.499111692Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-6f6b679f8f-qqzzk,Uid:cf2aa93c-f4bc-4322-9163-052200dd877a,Namespace:kube-system,Attempt:0,}" Jan 29 11:36:06.507589 containerd[1492]: time="2025-01-29T11:36:06.507434133Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-apiserver-78d7549f7d-g9z2x,Uid:c4ad5d31-5d68-4473-8b3c-72bfc21e63c5,Namespace:calico-apiserver,Attempt:0,}" Jan 29 11:36:06.510483 containerd[1492]: time="2025-01-29T11:36:06.510442638Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-kube-controllers-7f5f6fb96-hcxll,Uid:5e835acf-6d91-46a2-a52a-32309f48a3b4,Namespace:calico-system,Attempt:0,}" Jan 29 11:36:06.599791 systemd[1]: Started sshd@7-10.0.0.107:22-10.0.0.1:42428.service - OpenSSH per-connection server daemon (10.0.0.1:42428). 
Jan 29 11:36:06.620659 containerd[1492]: time="2025-01-29T11:36:06.620496655Z" level=error msg="Failed to destroy network for sandbox \"7796f2b8c41cac5338ffa9b402e0614ff1aa9234cde81ec23c1a0b7905f12023\"" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/"
Jan 29 11:36:06.621083 containerd[1492]: time="2025-01-29T11:36:06.621046299Z" level=error msg="encountered an error cleaning up failed sandbox \"7796f2b8c41cac5338ffa9b402e0614ff1aa9234cde81ec23c1a0b7905f12023\", marking sandbox state as SANDBOX_UNKNOWN" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/"
Jan 29 11:36:06.621194 containerd[1492]: time="2025-01-29T11:36:06.621175272Z" level=error msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-6f6b679f8f-qqzzk,Uid:cf2aa93c-f4bc-4322-9163-052200dd877a,Namespace:kube-system,Attempt:0,} failed, error" error="failed to setup network for sandbox \"7796f2b8c41cac5338ffa9b402e0614ff1aa9234cde81ec23c1a0b7905f12023\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/"
Jan 29 11:36:06.622773 kubelet[2560]: E0129 11:36:06.621765 2560 log.go:32] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"7796f2b8c41cac5338ffa9b402e0614ff1aa9234cde81ec23c1a0b7905f12023\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/"
Jan 29 11:36:06.622773 kubelet[2560]: E0129 11:36:06.621840 2560 kuberuntime_sandbox.go:72] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"7796f2b8c41cac5338ffa9b402e0614ff1aa9234cde81ec23c1a0b7905f12023\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="kube-system/coredns-6f6b679f8f-qqzzk"
Jan 29 11:36:06.622773 kubelet[2560]: E0129 11:36:06.621861 2560 kuberuntime_manager.go:1168] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"7796f2b8c41cac5338ffa9b402e0614ff1aa9234cde81ec23c1a0b7905f12023\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="kube-system/coredns-6f6b679f8f-qqzzk"
Jan 29 11:36:06.622887 kubelet[2560]: E0129 11:36:06.621897 2560 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"coredns-6f6b679f8f-qqzzk_kube-system(cf2aa93c-f4bc-4322-9163-052200dd877a)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"coredns-6f6b679f8f-qqzzk_kube-system(cf2aa93c-f4bc-4322-9163-052200dd877a)\\\": rpc error: code = Unknown desc = failed to setup network for sandbox \\\"7796f2b8c41cac5338ffa9b402e0614ff1aa9234cde81ec23c1a0b7905f12023\\\": plugin type=\\\"calico\\\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="kube-system/coredns-6f6b679f8f-qqzzk" podUID="cf2aa93c-f4bc-4322-9163-052200dd877a"
Jan 29 11:36:06.624315 containerd[1492]: time="2025-01-29T11:36:06.624260783Z" level=error msg="Failed to destroy network for sandbox \"e4c786a1f87f123c515ee8f4dae18559dc1b81a1fbae2953ab843c3ffdeb2f38\"" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/"
Jan 29 11:36:06.625218 containerd[1492]: time="2025-01-29T11:36:06.625048937Z" level=error msg="Failed to destroy network for sandbox \"fe40cde01f9bc00e0017a79d8d4efb77eed31f6ad079f48d7d46d61b289b6a4b\"" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/"
Jan 29 11:36:06.625790 containerd[1492]: time="2025-01-29T11:36:06.625762199Z" level=error msg="encountered an error cleaning up failed sandbox \"fe40cde01f9bc00e0017a79d8d4efb77eed31f6ad079f48d7d46d61b289b6a4b\", marking sandbox state as SANDBOX_UNKNOWN" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/"
Jan 29 11:36:06.625850 containerd[1492]: time="2025-01-29T11:36:06.625815470Z" level=error msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-6f6b679f8f-8sz4x,Uid:d32f565b-8d7d-47d2-85bf-68725ec04cff,Namespace:kube-system,Attempt:0,} failed, error" error="failed to setup network for sandbox \"fe40cde01f9bc00e0017a79d8d4efb77eed31f6ad079f48d7d46d61b289b6a4b\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/"
Jan 29 11:36:06.625971 containerd[1492]: time="2025-01-29T11:36:06.625909628Z" level=error msg="encountered an error cleaning up failed sandbox \"e4c786a1f87f123c515ee8f4dae18559dc1b81a1fbae2953ab843c3ffdeb2f38\", marking sandbox state as SANDBOX_UNKNOWN" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/"
Jan 29 11:36:06.626018 containerd[1492]: time="2025-01-29T11:36:06.625974650Z" level=error msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-apiserver-78d7549f7d-5n5j6,Uid:0725190d-a48f-4c98-9011-c6cdb64f50fe,Namespace:calico-apiserver,Attempt:0,} failed, error" error="failed to setup network for sandbox \"e4c786a1f87f123c515ee8f4dae18559dc1b81a1fbae2953ab843c3ffdeb2f38\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/"
Jan 29 11:36:06.626126 kubelet[2560]: E0129 11:36:06.626097 2560 log.go:32] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"fe40cde01f9bc00e0017a79d8d4efb77eed31f6ad079f48d7d46d61b289b6a4b\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/"
Jan 29 11:36:06.626178 kubelet[2560]: E0129 11:36:06.626139 2560 kuberuntime_sandbox.go:72] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"fe40cde01f9bc00e0017a79d8d4efb77eed31f6ad079f48d7d46d61b289b6a4b\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="kube-system/coredns-6f6b679f8f-8sz4x"
Jan 29 11:36:06.626216 kubelet[2560]: E0129 11:36:06.626186 2560 kuberuntime_manager.go:1168] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"fe40cde01f9bc00e0017a79d8d4efb77eed31f6ad079f48d7d46d61b289b6a4b\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="kube-system/coredns-6f6b679f8f-8sz4x"
Jan 29 11:36:06.626442 kubelet[2560]: E0129 11:36:06.626414 2560 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"coredns-6f6b679f8f-8sz4x_kube-system(d32f565b-8d7d-47d2-85bf-68725ec04cff)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"coredns-6f6b679f8f-8sz4x_kube-system(d32f565b-8d7d-47d2-85bf-68725ec04cff)\\\": rpc error: code = Unknown desc = failed to setup network for sandbox \\\"fe40cde01f9bc00e0017a79d8d4efb77eed31f6ad079f48d7d46d61b289b6a4b\\\": plugin type=\\\"calico\\\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="kube-system/coredns-6f6b679f8f-8sz4x" podUID="d32f565b-8d7d-47d2-85bf-68725ec04cff"
Jan 29 11:36:06.626676 kubelet[2560]: E0129 11:36:06.626659 2560 log.go:32] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"e4c786a1f87f123c515ee8f4dae18559dc1b81a1fbae2953ab843c3ffdeb2f38\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/"
Jan 29 11:36:06.626778 kubelet[2560]: E0129 11:36:06.626763 2560 kuberuntime_sandbox.go:72] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"e4c786a1f87f123c515ee8f4dae18559dc1b81a1fbae2953ab843c3ffdeb2f38\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-apiserver/calico-apiserver-78d7549f7d-5n5j6"
Jan 29 11:36:06.626939 kubelet[2560]: E0129 11:36:06.626923 2560 kuberuntime_manager.go:1168] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"e4c786a1f87f123c515ee8f4dae18559dc1b81a1fbae2953ab843c3ffdeb2f38\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-apiserver/calico-apiserver-78d7549f7d-5n5j6"
Jan 29 11:36:06.627053 kubelet[2560]: E0129 11:36:06.627035 2560 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"calico-apiserver-78d7549f7d-5n5j6_calico-apiserver(0725190d-a48f-4c98-9011-c6cdb64f50fe)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"calico-apiserver-78d7549f7d-5n5j6_calico-apiserver(0725190d-a48f-4c98-9011-c6cdb64f50fe)\\\": rpc error: code = Unknown desc = failed to setup network for sandbox \\\"e4c786a1f87f123c515ee8f4dae18559dc1b81a1fbae2953ab843c3ffdeb2f38\\\": plugin type=\\\"calico\\\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="calico-apiserver/calico-apiserver-78d7549f7d-5n5j6" podUID="0725190d-a48f-4c98-9011-c6cdb64f50fe"
Jan 29 11:36:06.640600 containerd[1492]: time="2025-01-29T11:36:06.640546488Z" level=error msg="Failed to destroy network for sandbox \"f81c812ded146071c6d5357fc23680f636d22e342d877bc1c706a89aba49dd93\"" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/"
Jan 29 11:36:06.640954 containerd[1492]: time="2025-01-29T11:36:06.640926293Z" level=error msg="encountered an error cleaning up failed sandbox \"f81c812ded146071c6d5357fc23680f636d22e342d877bc1c706a89aba49dd93\", marking sandbox state as SANDBOX_UNKNOWN" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/"
Jan 29 11:36:06.641091 containerd[1492]: time="2025-01-29T11:36:06.640984823Z" level=error msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-apiserver-78d7549f7d-g9z2x,Uid:c4ad5d31-5d68-4473-8b3c-72bfc21e63c5,Namespace:calico-apiserver,Attempt:0,} failed, error" error="failed to setup network for sandbox \"f81c812ded146071c6d5357fc23680f636d22e342d877bc1c706a89aba49dd93\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/"
Jan 29 11:36:06.641427 kubelet[2560]: E0129 11:36:06.641221 2560 log.go:32] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"f81c812ded146071c6d5357fc23680f636d22e342d877bc1c706a89aba49dd93\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/"
Jan 29 11:36:06.641427 kubelet[2560]: E0129 11:36:06.641286 2560 kuberuntime_sandbox.go:72] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"f81c812ded146071c6d5357fc23680f636d22e342d877bc1c706a89aba49dd93\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-apiserver/calico-apiserver-78d7549f7d-g9z2x"
Jan 29 11:36:06.641427 kubelet[2560]: E0129 11:36:06.641307 2560 kuberuntime_manager.go:1168] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"f81c812ded146071c6d5357fc23680f636d22e342d877bc1c706a89aba49dd93\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-apiserver/calico-apiserver-78d7549f7d-g9z2x"
Jan 29 11:36:06.641556 kubelet[2560]: E0129 11:36:06.641361 2560 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"calico-apiserver-78d7549f7d-g9z2x_calico-apiserver(c4ad5d31-5d68-4473-8b3c-72bfc21e63c5)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"calico-apiserver-78d7549f7d-g9z2x_calico-apiserver(c4ad5d31-5d68-4473-8b3c-72bfc21e63c5)\\\": rpc error: code = Unknown desc = failed to setup network for sandbox \\\"f81c812ded146071c6d5357fc23680f636d22e342d877bc1c706a89aba49dd93\\\": plugin type=\\\"calico\\\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="calico-apiserver/calico-apiserver-78d7549f7d-g9z2x" podUID="c4ad5d31-5d68-4473-8b3c-72bfc21e63c5"
Jan 29 11:36:06.649924 containerd[1492]: time="2025-01-29T11:36:06.649733707Z" level=error msg="Failed to destroy network for sandbox \"b5f2937f631c80a639072f6b6e4b81c67d79bd7b8b00d51fbbb621f6dcf86772\"" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/"
Jan 29 11:36:06.650212 containerd[1492]: time="2025-01-29T11:36:06.650184596Z" level=error msg="encountered an error cleaning up failed sandbox \"b5f2937f631c80a639072f6b6e4b81c67d79bd7b8b00d51fbbb621f6dcf86772\", marking sandbox state as SANDBOX_UNKNOWN" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/"
Jan 29 11:36:06.650260 containerd[1492]: time="2025-01-29T11:36:06.650242275Z" level=error msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-kube-controllers-7f5f6fb96-hcxll,Uid:5e835acf-6d91-46a2-a52a-32309f48a3b4,Namespace:calico-system,Attempt:0,} failed, error" error="failed to setup network for sandbox \"b5f2937f631c80a639072f6b6e4b81c67d79bd7b8b00d51fbbb621f6dcf86772\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/"
Jan 29 11:36:06.650497 kubelet[2560]: E0129 11:36:06.650458 2560 log.go:32] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"b5f2937f631c80a639072f6b6e4b81c67d79bd7b8b00d51fbbb621f6dcf86772\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/"
Jan 29 11:36:06.650549 kubelet[2560]: E0129 11:36:06.650520 2560 kuberuntime_sandbox.go:72] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"b5f2937f631c80a639072f6b6e4b81c67d79bd7b8b00d51fbbb621f6dcf86772\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-system/calico-kube-controllers-7f5f6fb96-hcxll"
Jan 29 11:36:06.650549 kubelet[2560]: E0129 11:36:06.650541 2560 kuberuntime_manager.go:1168] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"b5f2937f631c80a639072f6b6e4b81c67d79bd7b8b00d51fbbb621f6dcf86772\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-system/calico-kube-controllers-7f5f6fb96-hcxll"
Jan 29 11:36:06.650644 kubelet[2560]: E0129 11:36:06.650587 2560 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"calico-kube-controllers-7f5f6fb96-hcxll_calico-system(5e835acf-6d91-46a2-a52a-32309f48a3b4)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"calico-kube-controllers-7f5f6fb96-hcxll_calico-system(5e835acf-6d91-46a2-a52a-32309f48a3b4)\\\": rpc error: code = Unknown desc = failed to setup network for sandbox \\\"b5f2937f631c80a639072f6b6e4b81c67d79bd7b8b00d51fbbb621f6dcf86772\\\": plugin type=\\\"calico\\\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="calico-system/calico-kube-controllers-7f5f6fb96-hcxll" podUID="5e835acf-6d91-46a2-a52a-32309f48a3b4"
Jan 29 11:36:06.655210 sshd[3462]: Accepted publickey for core from 10.0.0.1 port 42428 ssh2: RSA SHA256:vglZfOE0APgUpJbg1gfFAEfTfpzlM1a6LSSiwuQWd4A
Jan 29 11:36:06.657229 sshd-session[3462]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Jan 29 11:36:06.661696 systemd-logind[1475]: New session 8 of user core.
Jan 29 11:36:06.674796 systemd[1]: Started session-8.scope - Session 8 of User core.
Jan 29 11:36:06.789547 sshd[3505]: Connection closed by 10.0.0.1 port 42428
Jan 29 11:36:06.789905 sshd-session[3462]: pam_unix(sshd:session): session closed for user core
Jan 29 11:36:06.793895 systemd[1]: sshd@7-10.0.0.107:22-10.0.0.1:42428.service: Deactivated successfully.
Jan 29 11:36:06.795772 systemd[1]: session-8.scope: Deactivated successfully.
Jan 29 11:36:06.796496 systemd-logind[1475]: Session 8 logged out. Waiting for processes to exit.
Jan 29 11:36:06.797477 systemd-logind[1475]: Removed session 8.
Jan 29 11:36:07.351651 kubelet[2560]: I0129 11:36:07.348307 2560 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="fe40cde01f9bc00e0017a79d8d4efb77eed31f6ad079f48d7d46d61b289b6a4b"
Jan 29 11:36:07.352843 containerd[1492]: time="2025-01-29T11:36:07.352354095Z" level=info msg="StopPodSandbox for \"fe40cde01f9bc00e0017a79d8d4efb77eed31f6ad079f48d7d46d61b289b6a4b\""
Jan 29 11:36:07.352843 containerd[1492]: time="2025-01-29T11:36:07.352640895Z" level=info msg="Ensure that sandbox fe40cde01f9bc00e0017a79d8d4efb77eed31f6ad079f48d7d46d61b289b6a4b in task-service has been cleanup successfully"
Jan 29 11:36:07.356179 containerd[1492]: time="2025-01-29T11:36:07.356028272Z" level=info msg="TearDown network for sandbox \"fe40cde01f9bc00e0017a79d8d4efb77eed31f6ad079f48d7d46d61b289b6a4b\" successfully"
Jan 29 11:36:07.356179 containerd[1492]: time="2025-01-29T11:36:07.356060342Z" level=info msg="StopPodSandbox for \"fe40cde01f9bc00e0017a79d8d4efb77eed31f6ad079f48d7d46d61b289b6a4b\" returns successfully"
Jan 29 11:36:07.356817 kubelet[2560]: E0129 11:36:07.356594 2560 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Jan 29 11:36:07.358274 containerd[1492]: time="2025-01-29T11:36:07.357909903Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-6f6b679f8f-8sz4x,Uid:d32f565b-8d7d-47d2-85bf-68725ec04cff,Namespace:kube-system,Attempt:1,}"
Jan 29 11:36:07.358265 systemd[1]: run-netns-cni\x2de3788467\x2d098f\x2d3470\x2d4ff3\x2d67375a2d8318.mount: Deactivated successfully.
Jan 29 11:36:07.360985 kubelet[2560]: I0129 11:36:07.360948 2560 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="b5f2937f631c80a639072f6b6e4b81c67d79bd7b8b00d51fbbb621f6dcf86772"
Jan 29 11:36:07.362555 containerd[1492]: time="2025-01-29T11:36:07.362472864Z" level=info msg="StopPodSandbox for \"b5f2937f631c80a639072f6b6e4b81c67d79bd7b8b00d51fbbb621f6dcf86772\""
Jan 29 11:36:07.362854 containerd[1492]: time="2025-01-29T11:36:07.362682197Z" level=info msg="Ensure that sandbox b5f2937f631c80a639072f6b6e4b81c67d79bd7b8b00d51fbbb621f6dcf86772 in task-service has been cleanup successfully"
Jan 29 11:36:07.364294 kubelet[2560]: I0129 11:36:07.363206 2560 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="7796f2b8c41cac5338ffa9b402e0614ff1aa9234cde81ec23c1a0b7905f12023"
Jan 29 11:36:07.364699 kubelet[2560]: I0129 11:36:07.364663 2560 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="e4c786a1f87f123c515ee8f4dae18559dc1b81a1fbae2953ab843c3ffdeb2f38"
Jan 29 11:36:07.365659 containerd[1492]: time="2025-01-29T11:36:07.365610760Z" level=info msg="StopPodSandbox for \"7796f2b8c41cac5338ffa9b402e0614ff1aa9234cde81ec23c1a0b7905f12023\""
Jan 29 11:36:07.365900 containerd[1492]: time="2025-01-29T11:36:07.365867033Z" level=info msg="Ensure that sandbox 7796f2b8c41cac5338ffa9b402e0614ff1aa9234cde81ec23c1a0b7905f12023 in task-service has been cleanup successfully"
Jan 29 11:36:07.366052 containerd[1492]: time="2025-01-29T11:36:07.365610880Z" level=info msg="StopPodSandbox for \"e4c786a1f87f123c515ee8f4dae18559dc1b81a1fbae2953ab843c3ffdeb2f38\""
Jan 29 11:36:07.366345 containerd[1492]: time="2025-01-29T11:36:07.366203056Z" level=info msg="Ensure that sandbox e4c786a1f87f123c515ee8f4dae18559dc1b81a1fbae2953ab843c3ffdeb2f38 in task-service has been cleanup successfully"
Jan 29 11:36:07.366564 containerd[1492]: time="2025-01-29T11:36:07.366538106Z" level=info msg="TearDown network for sandbox \"b5f2937f631c80a639072f6b6e4b81c67d79bd7b8b00d51fbbb621f6dcf86772\" successfully"
Jan 29 11:36:07.366692 containerd[1492]: time="2025-01-29T11:36:07.366677889Z" level=info msg="StopPodSandbox for \"b5f2937f631c80a639072f6b6e4b81c67d79bd7b8b00d51fbbb621f6dcf86772\" returns successfully"
Jan 29 11:36:07.367098 kubelet[2560]: I0129 11:36:07.367072 2560 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="f81c812ded146071c6d5357fc23680f636d22e342d877bc1c706a89aba49dd93"
Jan 29 11:36:07.367352 containerd[1492]: time="2025-01-29T11:36:07.366808595Z" level=info msg="TearDown network for sandbox \"e4c786a1f87f123c515ee8f4dae18559dc1b81a1fbae2953ab843c3ffdeb2f38\" successfully"
Jan 29 11:36:07.367442 containerd[1492]: time="2025-01-29T11:36:07.367410118Z" level=info msg="StopPodSandbox for \"e4c786a1f87f123c515ee8f4dae18559dc1b81a1fbae2953ab843c3ffdeb2f38\" returns successfully"
Jan 29 11:36:07.367527 containerd[1492]: time="2025-01-29T11:36:07.367196907Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-kube-controllers-7f5f6fb96-hcxll,Uid:5e835acf-6d91-46a2-a52a-32309f48a3b4,Namespace:calico-system,Attempt:1,}"
Jan 29 11:36:07.367669 containerd[1492]: time="2025-01-29T11:36:07.366617084Z" level=info msg="TearDown network for sandbox \"7796f2b8c41cac5338ffa9b402e0614ff1aa9234cde81ec23c1a0b7905f12023\" successfully"
Jan 29 11:36:07.367669 containerd[1492]: time="2025-01-29T11:36:07.367667933Z" level=info msg="StopPodSandbox for \"7796f2b8c41cac5338ffa9b402e0614ff1aa9234cde81ec23c1a0b7905f12023\" returns successfully"
Jan 29 11:36:07.367867 containerd[1492]: time="2025-01-29T11:36:07.367785505Z" level=info msg="StopPodSandbox for \"f81c812ded146071c6d5357fc23680f636d22e342d877bc1c706a89aba49dd93\""
Jan 29 11:36:07.368138 containerd[1492]: time="2025-01-29T11:36:07.368027490Z" level=info msg="Ensure that sandbox f81c812ded146071c6d5357fc23680f636d22e342d877bc1c706a89aba49dd93 in task-service has been cleanup successfully"
Jan 29 11:36:07.368180 systemd[1]: run-netns-cni\x2d27f6e1c7\x2d43b1\x2d9f6c\x2d1579\x2d05f95126779a.mount: Deactivated successfully.
Jan 29 11:36:07.368489 containerd[1492]: time="2025-01-29T11:36:07.368251602Z" level=info msg="TearDown network for sandbox \"f81c812ded146071c6d5357fc23680f636d22e342d877bc1c706a89aba49dd93\" successfully"
Jan 29 11:36:07.368489 containerd[1492]: time="2025-01-29T11:36:07.368271098Z" level=info msg="StopPodSandbox for \"f81c812ded146071c6d5357fc23680f636d22e342d877bc1c706a89aba49dd93\" returns successfully"
Jan 29 11:36:07.369309 containerd[1492]: time="2025-01-29T11:36:07.368771040Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-apiserver-78d7549f7d-g9z2x,Uid:c4ad5d31-5d68-4473-8b3c-72bfc21e63c5,Namespace:calico-apiserver,Attempt:1,}"
Jan 29 11:36:07.369309 containerd[1492]: time="2025-01-29T11:36:07.369011852Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-apiserver-78d7549f7d-5n5j6,Uid:0725190d-a48f-4c98-9011-c6cdb64f50fe,Namespace:calico-apiserver,Attempt:1,}"
Jan 29 11:36:07.369392 kubelet[2560]: E0129 11:36:07.369190 2560 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Jan 29 11:36:07.369580 containerd[1492]: time="2025-01-29T11:36:07.369560455Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-6f6b679f8f-qqzzk,Uid:cf2aa93c-f4bc-4322-9163-052200dd877a,Namespace:kube-system,Attempt:1,}"
Jan 29 11:36:07.371772 systemd[1]: run-netns-cni\x2dde04c0c9\x2dc34d\x2d3e46\x2de36c\x2d53de1117e0dc.mount: Deactivated successfully.
Jan 29 11:36:07.371888 systemd[1]: run-netns-cni\x2d11ccb370\x2de8c5\x2d8a74\x2d3e5b\x2dbf841a056bf6.mount: Deactivated successfully.
Jan 29 11:36:07.371982 systemd[1]: run-netns-cni\x2d170d1771\x2de9fb\x2d74be\x2dc9ff\x2da6bfa6e0cb6a.mount: Deactivated successfully.
Jan 29 11:36:07.738102 containerd[1492]: time="2025-01-29T11:36:07.737960803Z" level=error msg="Failed to destroy network for sandbox \"f289a1ea20b800c633270b52ec0a7cbce53731f38497fa4630193ed068515360\"" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/"
Jan 29 11:36:07.738937 containerd[1492]: time="2025-01-29T11:36:07.738872279Z" level=error msg="encountered an error cleaning up failed sandbox \"f289a1ea20b800c633270b52ec0a7cbce53731f38497fa4630193ed068515360\", marking sandbox state as SANDBOX_UNKNOWN" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/"
Jan 29 11:36:07.739125 containerd[1492]: time="2025-01-29T11:36:07.739073037Z" level=error msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-apiserver-78d7549f7d-5n5j6,Uid:0725190d-a48f-4c98-9011-c6cdb64f50fe,Namespace:calico-apiserver,Attempt:1,} failed, error" error="failed to setup network for sandbox \"f289a1ea20b800c633270b52ec0a7cbce53731f38497fa4630193ed068515360\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/"
Jan 29 11:36:07.739738 kubelet[2560]: E0129 11:36:07.739458 2560 log.go:32] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"f289a1ea20b800c633270b52ec0a7cbce53731f38497fa4630193ed068515360\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/"
Jan 29 11:36:07.739738 kubelet[2560]: E0129 11:36:07.739533 2560 kuberuntime_sandbox.go:72] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"f289a1ea20b800c633270b52ec0a7cbce53731f38497fa4630193ed068515360\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-apiserver/calico-apiserver-78d7549f7d-5n5j6"
Jan 29 11:36:07.739738 kubelet[2560]: E0129 11:36:07.739558 2560 kuberuntime_manager.go:1168] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"f289a1ea20b800c633270b52ec0a7cbce53731f38497fa4630193ed068515360\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-apiserver/calico-apiserver-78d7549f7d-5n5j6"
Jan 29 11:36:07.739995 kubelet[2560]: E0129 11:36:07.739612 2560 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"calico-apiserver-78d7549f7d-5n5j6_calico-apiserver(0725190d-a48f-4c98-9011-c6cdb64f50fe)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"calico-apiserver-78d7549f7d-5n5j6_calico-apiserver(0725190d-a48f-4c98-9011-c6cdb64f50fe)\\\": rpc error: code = Unknown desc = failed to setup network for sandbox \\\"f289a1ea20b800c633270b52ec0a7cbce53731f38497fa4630193ed068515360\\\": plugin type=\\\"calico\\\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="calico-apiserver/calico-apiserver-78d7549f7d-5n5j6" podUID="0725190d-a48f-4c98-9011-c6cdb64f50fe"
Jan 29 11:36:07.762452 containerd[1492]: time="2025-01-29T11:36:07.761584794Z" level=error msg="Failed to destroy network for sandbox \"78be77b980959cfbde3c7658c4ee4124dfca02a33e1eb74d810982ec9ce9af2d\"" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/"
Jan 29 11:36:07.763158 containerd[1492]: time="2025-01-29T11:36:07.763131417Z" level=error msg="encountered an error cleaning up failed sandbox \"78be77b980959cfbde3c7658c4ee4124dfca02a33e1eb74d810982ec9ce9af2d\", marking sandbox state as SANDBOX_UNKNOWN" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/"
Jan 29 11:36:07.764342 containerd[1492]: time="2025-01-29T11:36:07.764310927Z" level=error msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-6f6b679f8f-8sz4x,Uid:d32f565b-8d7d-47d2-85bf-68725ec04cff,Namespace:kube-system,Attempt:1,} failed, error" error="failed to setup network for sandbox \"78be77b980959cfbde3c7658c4ee4124dfca02a33e1eb74d810982ec9ce9af2d\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/"
Jan 29 11:36:07.764717 kubelet[2560]: E0129 11:36:07.764608 2560 log.go:32] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"78be77b980959cfbde3c7658c4ee4124dfca02a33e1eb74d810982ec9ce9af2d\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/"
Jan 29 11:36:07.764717 kubelet[2560]: E0129 11:36:07.764719 2560 kuberuntime_sandbox.go:72] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"78be77b980959cfbde3c7658c4ee4124dfca02a33e1eb74d810982ec9ce9af2d\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="kube-system/coredns-6f6b679f8f-8sz4x"
Jan 29 11:36:07.764898 kubelet[2560]: E0129 11:36:07.764744 2560 kuberuntime_manager.go:1168] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"78be77b980959cfbde3c7658c4ee4124dfca02a33e1eb74d810982ec9ce9af2d\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="kube-system/coredns-6f6b679f8f-8sz4x"
Jan 29 11:36:07.764898 kubelet[2560]: E0129 11:36:07.764812 2560 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"coredns-6f6b679f8f-8sz4x_kube-system(d32f565b-8d7d-47d2-85bf-68725ec04cff)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"coredns-6f6b679f8f-8sz4x_kube-system(d32f565b-8d7d-47d2-85bf-68725ec04cff)\\\": rpc error: code = Unknown desc = failed to setup network for sandbox \\\"78be77b980959cfbde3c7658c4ee4124dfca02a33e1eb74d810982ec9ce9af2d\\\": plugin type=\\\"calico\\\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="kube-system/coredns-6f6b679f8f-8sz4x" podUID="d32f565b-8d7d-47d2-85bf-68725ec04cff"
Jan 29 11:36:07.766663 containerd[1492]: time="2025-01-29T11:36:07.766576251Z" level=error msg="Failed to destroy network for sandbox \"7cf0ee6b79c5f0ffbd1df3c4ba1fa347f2ab15775291f793b405df1eb50ddbdc\"" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/"
Jan 29 11:36:07.767116 containerd[1492]: time="2025-01-29T11:36:07.767071103Z" level=error msg="encountered an error cleaning up failed sandbox \"7cf0ee6b79c5f0ffbd1df3c4ba1fa347f2ab15775291f793b405df1eb50ddbdc\", marking sandbox state as SANDBOX_UNKNOWN" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/"
Jan 29 11:36:07.767195 containerd[1492]: time="2025-01-29T11:36:07.767161012Z" level=error msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-kube-controllers-7f5f6fb96-hcxll,Uid:5e835acf-6d91-46a2-a52a-32309f48a3b4,Namespace:calico-system,Attempt:1,} failed, error" error="failed to setup network for sandbox \"7cf0ee6b79c5f0ffbd1df3c4ba1fa347f2ab15775291f793b405df1eb50ddbdc\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/"
Jan 29 11:36:07.767835 kubelet[2560]: E0129 11:36:07.767786 2560 log.go:32] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"7cf0ee6b79c5f0ffbd1df3c4ba1fa347f2ab15775291f793b405df1eb50ddbdc\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/"
Jan 29 11:36:07.767895 kubelet[2560]: E0129 11:36:07.767844 2560 kuberuntime_sandbox.go:72] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"7cf0ee6b79c5f0ffbd1df3c4ba1fa347f2ab15775291f793b405df1eb50ddbdc\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-system/calico-kube-controllers-7f5f6fb96-hcxll"
Jan 29 11:36:07.767895 kubelet[2560]: E0129 11:36:07.767866 2560 kuberuntime_manager.go:1168] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"7cf0ee6b79c5f0ffbd1df3c4ba1fa347f2ab15775291f793b405df1eb50ddbdc\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-system/calico-kube-controllers-7f5f6fb96-hcxll"
Jan 29 11:36:07.767972 kubelet[2560]: E0129 11:36:07.767927 2560 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"calico-kube-controllers-7f5f6fb96-hcxll_calico-system(5e835acf-6d91-46a2-a52a-32309f48a3b4)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"calico-kube-controllers-7f5f6fb96-hcxll_calico-system(5e835acf-6d91-46a2-a52a-32309f48a3b4)\\\": rpc error: code = Unknown desc = failed to setup network for sandbox \\\"7cf0ee6b79c5f0ffbd1df3c4ba1fa347f2ab15775291f793b405df1eb50ddbdc\\\": plugin type=\\\"calico\\\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="calico-system/calico-kube-controllers-7f5f6fb96-hcxll" podUID="5e835acf-6d91-46a2-a52a-32309f48a3b4"
Jan 29 11:36:07.776597 containerd[1492]: time="2025-01-29T11:36:07.776527504Z" level=error msg="Failed to destroy network for sandbox \"ca71e99b7c48067d7bf825de1146deea722737ac8db1aede627b41776be5c345\"" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/"
Jan 29 11:36:07.777113 containerd[1492]: time="2025-01-29T11:36:07.777059015Z" level=error msg="Failed to destroy network for sandbox \"dd045a0823620908e29eab7f45fc540dbd8b4da70a585fad2b83ddf304ac7fc1\"" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/"
Jan 29 11:36:07.777113 containerd[1492]: time="2025-01-29T11:36:07.777112375Z" level=error msg="encountered an error cleaning up failed sandbox \"ca71e99b7c48067d7bf825de1146deea722737ac8db1aede627b41776be5c345\",
marking sandbox state as SANDBOX_UNKNOWN" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jan 29 11:36:07.777296 containerd[1492]: time="2025-01-29T11:36:07.777186575Z" level=error msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-apiserver-78d7549f7d-g9z2x,Uid:c4ad5d31-5d68-4473-8b3c-72bfc21e63c5,Namespace:calico-apiserver,Attempt:1,} failed, error" error="failed to setup network for sandbox \"ca71e99b7c48067d7bf825de1146deea722737ac8db1aede627b41776be5c345\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jan 29 11:36:07.777466 kubelet[2560]: E0129 11:36:07.777416 2560 log.go:32] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"ca71e99b7c48067d7bf825de1146deea722737ac8db1aede627b41776be5c345\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jan 29 11:36:07.777550 kubelet[2560]: E0129 11:36:07.777475 2560 kuberuntime_sandbox.go:72] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"ca71e99b7c48067d7bf825de1146deea722737ac8db1aede627b41776be5c345\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-apiserver/calico-apiserver-78d7549f7d-g9z2x" Jan 29 11:36:07.777550 kubelet[2560]: E0129 11:36:07.777499 2560 kuberuntime_manager.go:1168] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox 
\"ca71e99b7c48067d7bf825de1146deea722737ac8db1aede627b41776be5c345\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-apiserver/calico-apiserver-78d7549f7d-g9z2x" Jan 29 11:36:07.777618 containerd[1492]: time="2025-01-29T11:36:07.777525463Z" level=error msg="encountered an error cleaning up failed sandbox \"dd045a0823620908e29eab7f45fc540dbd8b4da70a585fad2b83ddf304ac7fc1\", marking sandbox state as SANDBOX_UNKNOWN" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jan 29 11:36:07.777618 containerd[1492]: time="2025-01-29T11:36:07.777591417Z" level=error msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-6f6b679f8f-qqzzk,Uid:cf2aa93c-f4bc-4322-9163-052200dd877a,Namespace:kube-system,Attempt:1,} failed, error" error="failed to setup network for sandbox \"dd045a0823620908e29eab7f45fc540dbd8b4da70a585fad2b83ddf304ac7fc1\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jan 29 11:36:07.777719 kubelet[2560]: E0129 11:36:07.777544 2560 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"calico-apiserver-78d7549f7d-g9z2x_calico-apiserver(c4ad5d31-5d68-4473-8b3c-72bfc21e63c5)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"calico-apiserver-78d7549f7d-g9z2x_calico-apiserver(c4ad5d31-5d68-4473-8b3c-72bfc21e63c5)\\\": rpc error: code = Unknown desc = failed to setup network for sandbox \\\"ca71e99b7c48067d7bf825de1146deea722737ac8db1aede627b41776be5c345\\\": plugin type=\\\"calico\\\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has 
mounted /var/lib/calico/\"" pod="calico-apiserver/calico-apiserver-78d7549f7d-g9z2x" podUID="c4ad5d31-5d68-4473-8b3c-72bfc21e63c5" Jan 29 11:36:07.777790 kubelet[2560]: E0129 11:36:07.777751 2560 log.go:32] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"dd045a0823620908e29eab7f45fc540dbd8b4da70a585fad2b83ddf304ac7fc1\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jan 29 11:36:07.777790 kubelet[2560]: E0129 11:36:07.777785 2560 kuberuntime_sandbox.go:72] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"dd045a0823620908e29eab7f45fc540dbd8b4da70a585fad2b83ddf304ac7fc1\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="kube-system/coredns-6f6b679f8f-qqzzk" Jan 29 11:36:07.777902 kubelet[2560]: E0129 11:36:07.777798 2560 kuberuntime_manager.go:1168] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"dd045a0823620908e29eab7f45fc540dbd8b4da70a585fad2b83ddf304ac7fc1\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="kube-system/coredns-6f6b679f8f-qqzzk" Jan 29 11:36:07.777902 kubelet[2560]: E0129 11:36:07.777823 2560 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"coredns-6f6b679f8f-qqzzk_kube-system(cf2aa93c-f4bc-4322-9163-052200dd877a)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"coredns-6f6b679f8f-qqzzk_kube-system(cf2aa93c-f4bc-4322-9163-052200dd877a)\\\": rpc error: code = Unknown desc = failed to setup network for 
sandbox \\\"dd045a0823620908e29eab7f45fc540dbd8b4da70a585fad2b83ddf304ac7fc1\\\": plugin type=\\\"calico\\\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="kube-system/coredns-6f6b679f8f-qqzzk" podUID="cf2aa93c-f4bc-4322-9163-052200dd877a" Jan 29 11:36:08.063959 systemd[1]: Created slice kubepods-besteffort-pod0ee1d7b9_9e01_4183_97ec_91d9420b2dab.slice - libcontainer container kubepods-besteffort-pod0ee1d7b9_9e01_4183_97ec_91d9420b2dab.slice. Jan 29 11:36:08.066253 containerd[1492]: time="2025-01-29T11:36:08.066217401Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:csi-node-driver-s9vh2,Uid:0ee1d7b9-9e01-4183-97ec-91d9420b2dab,Namespace:calico-system,Attempt:0,}" Jan 29 11:36:08.112363 systemd[1]: run-containerd-io.containerd.grpc.v1.cri-sandboxes-f289a1ea20b800c633270b52ec0a7cbce53731f38497fa4630193ed068515360-shm.mount: Deactivated successfully. Jan 29 11:36:08.153821 containerd[1492]: time="2025-01-29T11:36:08.153765881Z" level=error msg="Failed to destroy network for sandbox \"fc2896e11c36986d8fde21f00e5a96b42534c3d205cbbed9dc871d57b74e32b9\"" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jan 29 11:36:08.156099 systemd[1]: run-containerd-io.containerd.grpc.v1.cri-sandboxes-fc2896e11c36986d8fde21f00e5a96b42534c3d205cbbed9dc871d57b74e32b9-shm.mount: Deactivated successfully. 
Jan 29 11:36:08.156973 containerd[1492]: time="2025-01-29T11:36:08.156936419Z" level=error msg="encountered an error cleaning up failed sandbox \"fc2896e11c36986d8fde21f00e5a96b42534c3d205cbbed9dc871d57b74e32b9\", marking sandbox state as SANDBOX_UNKNOWN" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jan 29 11:36:08.157023 containerd[1492]: time="2025-01-29T11:36:08.157004917Z" level=error msg="RunPodSandbox for &PodSandboxMetadata{Name:csi-node-driver-s9vh2,Uid:0ee1d7b9-9e01-4183-97ec-91d9420b2dab,Namespace:calico-system,Attempt:0,} failed, error" error="failed to setup network for sandbox \"fc2896e11c36986d8fde21f00e5a96b42534c3d205cbbed9dc871d57b74e32b9\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jan 29 11:36:08.157302 kubelet[2560]: E0129 11:36:08.157257 2560 log.go:32] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"fc2896e11c36986d8fde21f00e5a96b42534c3d205cbbed9dc871d57b74e32b9\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jan 29 11:36:08.157430 kubelet[2560]: E0129 11:36:08.157327 2560 kuberuntime_sandbox.go:72] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"fc2896e11c36986d8fde21f00e5a96b42534c3d205cbbed9dc871d57b74e32b9\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-system/csi-node-driver-s9vh2" Jan 29 11:36:08.157430 kubelet[2560]: E0129 11:36:08.157360 2560 
kuberuntime_manager.go:1168] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"fc2896e11c36986d8fde21f00e5a96b42534c3d205cbbed9dc871d57b74e32b9\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-system/csi-node-driver-s9vh2" Jan 29 11:36:08.157430 kubelet[2560]: E0129 11:36:08.157408 2560 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"csi-node-driver-s9vh2_calico-system(0ee1d7b9-9e01-4183-97ec-91d9420b2dab)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"csi-node-driver-s9vh2_calico-system(0ee1d7b9-9e01-4183-97ec-91d9420b2dab)\\\": rpc error: code = Unknown desc = failed to setup network for sandbox \\\"fc2896e11c36986d8fde21f00e5a96b42534c3d205cbbed9dc871d57b74e32b9\\\": plugin type=\\\"calico\\\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="calico-system/csi-node-driver-s9vh2" podUID="0ee1d7b9-9e01-4183-97ec-91d9420b2dab" Jan 29 11:36:08.372044 kubelet[2560]: I0129 11:36:08.371923 2560 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="ca71e99b7c48067d7bf825de1146deea722737ac8db1aede627b41776be5c345" Jan 29 11:36:08.373360 containerd[1492]: time="2025-01-29T11:36:08.373183644Z" level=info msg="StopPodSandbox for \"ca71e99b7c48067d7bf825de1146deea722737ac8db1aede627b41776be5c345\"" Jan 29 11:36:08.373708 containerd[1492]: time="2025-01-29T11:36:08.373579659Z" level=info msg="Ensure that sandbox ca71e99b7c48067d7bf825de1146deea722737ac8db1aede627b41776be5c345 in task-service has been cleanup successfully" Jan 29 11:36:08.373940 containerd[1492]: time="2025-01-29T11:36:08.373902357Z" level=info msg="TearDown network for sandbox 
\"ca71e99b7c48067d7bf825de1146deea722737ac8db1aede627b41776be5c345\" successfully" Jan 29 11:36:08.374120 containerd[1492]: time="2025-01-29T11:36:08.373927384Z" level=info msg="StopPodSandbox for \"ca71e99b7c48067d7bf825de1146deea722737ac8db1aede627b41776be5c345\" returns successfully" Jan 29 11:36:08.376514 systemd[1]: run-netns-cni\x2dfa122b7d\x2d9c50\x2d59fe\x2d4a6b\x2d3ea572845219.mount: Deactivated successfully. Jan 29 11:36:08.379131 containerd[1492]: time="2025-01-29T11:36:08.376591588Z" level=info msg="StopPodSandbox for \"f81c812ded146071c6d5357fc23680f636d22e342d877bc1c706a89aba49dd93\"" Jan 29 11:36:08.379131 containerd[1492]: time="2025-01-29T11:36:08.376681938Z" level=info msg="TearDown network for sandbox \"f81c812ded146071c6d5357fc23680f636d22e342d877bc1c706a89aba49dd93\" successfully" Jan 29 11:36:08.379131 containerd[1492]: time="2025-01-29T11:36:08.376691166Z" level=info msg="StopPodSandbox for \"f81c812ded146071c6d5357fc23680f636d22e342d877bc1c706a89aba49dd93\" returns successfully" Jan 29 11:36:08.379131 containerd[1492]: time="2025-01-29T11:36:08.377311523Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-apiserver-78d7549f7d-g9z2x,Uid:c4ad5d31-5d68-4473-8b3c-72bfc21e63c5,Namespace:calico-apiserver,Attempt:2,}" Jan 29 11:36:08.381003 kubelet[2560]: I0129 11:36:08.380466 2560 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="78be77b980959cfbde3c7658c4ee4124dfca02a33e1eb74d810982ec9ce9af2d" Jan 29 11:36:08.381863 kubelet[2560]: I0129 11:36:08.381530 2560 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="fc2896e11c36986d8fde21f00e5a96b42534c3d205cbbed9dc871d57b74e32b9" Jan 29 11:36:08.381922 containerd[1492]: time="2025-01-29T11:36:08.381704530Z" level=info msg="StopPodSandbox for \"78be77b980959cfbde3c7658c4ee4124dfca02a33e1eb74d810982ec9ce9af2d\"" Jan 29 11:36:08.381988 containerd[1492]: time="2025-01-29T11:36:08.381949351Z" level=info msg="Ensure that sandbox 
78be77b980959cfbde3c7658c4ee4124dfca02a33e1eb74d810982ec9ce9af2d in task-service has been cleanup successfully" Jan 29 11:36:08.382191 containerd[1492]: time="2025-01-29T11:36:08.382169846Z" level=info msg="TearDown network for sandbox \"78be77b980959cfbde3c7658c4ee4124dfca02a33e1eb74d810982ec9ce9af2d\" successfully" Jan 29 11:36:08.382191 containerd[1492]: time="2025-01-29T11:36:08.382186869Z" level=info msg="StopPodSandbox for \"78be77b980959cfbde3c7658c4ee4124dfca02a33e1eb74d810982ec9ce9af2d\" returns successfully" Jan 29 11:36:08.382380 containerd[1492]: time="2025-01-29T11:36:08.382328064Z" level=info msg="StopPodSandbox for \"fc2896e11c36986d8fde21f00e5a96b42534c3d205cbbed9dc871d57b74e32b9\"" Jan 29 11:36:08.384794 containerd[1492]: time="2025-01-29T11:36:08.382455444Z" level=info msg="Ensure that sandbox fc2896e11c36986d8fde21f00e5a96b42534c3d205cbbed9dc871d57b74e32b9 in task-service has been cleanup successfully" Jan 29 11:36:08.384794 containerd[1492]: time="2025-01-29T11:36:08.382584396Z" level=info msg="TearDown network for sandbox \"fc2896e11c36986d8fde21f00e5a96b42534c3d205cbbed9dc871d57b74e32b9\" successfully" Jan 29 11:36:08.384794 containerd[1492]: time="2025-01-29T11:36:08.382594095Z" level=info msg="StopPodSandbox for \"fc2896e11c36986d8fde21f00e5a96b42534c3d205cbbed9dc871d57b74e32b9\" returns successfully" Jan 29 11:36:08.384794 containerd[1492]: time="2025-01-29T11:36:08.383078967Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:csi-node-driver-s9vh2,Uid:0ee1d7b9-9e01-4183-97ec-91d9420b2dab,Namespace:calico-system,Attempt:1,}" Jan 29 11:36:08.384794 containerd[1492]: time="2025-01-29T11:36:08.383271781Z" level=info msg="StopPodSandbox for \"fe40cde01f9bc00e0017a79d8d4efb77eed31f6ad079f48d7d46d61b289b6a4b\"" Jan 29 11:36:08.384794 containerd[1492]: time="2025-01-29T11:36:08.383334558Z" level=info msg="TearDown network for sandbox \"fe40cde01f9bc00e0017a79d8d4efb77eed31f6ad079f48d7d46d61b289b6a4b\" successfully" Jan 29 11:36:08.384794 
containerd[1492]: time="2025-01-29T11:36:08.383344537Z" level=info msg="StopPodSandbox for \"fe40cde01f9bc00e0017a79d8d4efb77eed31f6ad079f48d7d46d61b289b6a4b\" returns successfully" Jan 29 11:36:08.384794 containerd[1492]: time="2025-01-29T11:36:08.383771471Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-6f6b679f8f-8sz4x,Uid:d32f565b-8d7d-47d2-85bf-68725ec04cff,Namespace:kube-system,Attempt:2,}" Jan 29 11:36:08.384794 containerd[1492]: time="2025-01-29T11:36:08.384614147Z" level=info msg="StopPodSandbox for \"7cf0ee6b79c5f0ffbd1df3c4ba1fa347f2ab15775291f793b405df1eb50ddbdc\"" Jan 29 11:36:08.384794 containerd[1492]: time="2025-01-29T11:36:08.384763598Z" level=info msg="Ensure that sandbox 7cf0ee6b79c5f0ffbd1df3c4ba1fa347f2ab15775291f793b405df1eb50ddbdc in task-service has been cleanup successfully" Jan 29 11:36:08.385208 kubelet[2560]: E0129 11:36:08.383558 2560 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jan 29 11:36:08.385208 kubelet[2560]: I0129 11:36:08.384308 2560 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="7cf0ee6b79c5f0ffbd1df3c4ba1fa347f2ab15775291f793b405df1eb50ddbdc" Jan 29 11:36:08.385284 containerd[1492]: time="2025-01-29T11:36:08.384908751Z" level=info msg="TearDown network for sandbox \"7cf0ee6b79c5f0ffbd1df3c4ba1fa347f2ab15775291f793b405df1eb50ddbdc\" successfully" Jan 29 11:36:08.385284 containerd[1492]: time="2025-01-29T11:36:08.384919101Z" level=info msg="StopPodSandbox for \"7cf0ee6b79c5f0ffbd1df3c4ba1fa347f2ab15775291f793b405df1eb50ddbdc\" returns successfully" Jan 29 11:36:08.385284 containerd[1492]: time="2025-01-29T11:36:08.385261575Z" level=info msg="StopPodSandbox for \"b5f2937f631c80a639072f6b6e4b81c67d79bd7b8b00d51fbbb621f6dcf86772\"" Jan 29 11:36:08.385374 containerd[1492]: time="2025-01-29T11:36:08.385348108Z" level=info msg="TearDown network for 
sandbox \"b5f2937f631c80a639072f6b6e4b81c67d79bd7b8b00d51fbbb621f6dcf86772\" successfully" Jan 29 11:36:08.385374 containerd[1492]: time="2025-01-29T11:36:08.385357286Z" level=info msg="StopPodSandbox for \"b5f2937f631c80a639072f6b6e4b81c67d79bd7b8b00d51fbbb621f6dcf86772\" returns successfully" Jan 29 11:36:08.385779 containerd[1492]: time="2025-01-29T11:36:08.385755144Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-kube-controllers-7f5f6fb96-hcxll,Uid:5e835acf-6d91-46a2-a52a-32309f48a3b4,Namespace:calico-system,Attempt:2,}" Jan 29 11:36:08.385996 systemd[1]: run-netns-cni\x2db60adcfd\x2d910d\x2d567f\x2da508\x2dc9dd25e7cb5d.mount: Deactivated successfully. Jan 29 11:36:08.386123 kubelet[2560]: I0129 11:36:08.386106 2560 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="dd045a0823620908e29eab7f45fc540dbd8b4da70a585fad2b83ddf304ac7fc1" Jan 29 11:36:08.386211 systemd[1]: run-netns-cni\x2d66de7612\x2d9336\x2dbb49\x2d8fdd\x2da5e40af5c82a.mount: Deactivated successfully. 
Jan 29 11:36:08.386740 containerd[1492]: time="2025-01-29T11:36:08.386384118Z" level=info msg="StopPodSandbox for \"dd045a0823620908e29eab7f45fc540dbd8b4da70a585fad2b83ddf304ac7fc1\"" Jan 29 11:36:08.386740 containerd[1492]: time="2025-01-29T11:36:08.386519011Z" level=info msg="Ensure that sandbox dd045a0823620908e29eab7f45fc540dbd8b4da70a585fad2b83ddf304ac7fc1 in task-service has been cleanup successfully" Jan 29 11:36:08.387344 containerd[1492]: time="2025-01-29T11:36:08.387324488Z" level=info msg="TearDown network for sandbox \"dd045a0823620908e29eab7f45fc540dbd8b4da70a585fad2b83ddf304ac7fc1\" successfully" Jan 29 11:36:08.387344 containerd[1492]: time="2025-01-29T11:36:08.387342091Z" level=info msg="StopPodSandbox for \"dd045a0823620908e29eab7f45fc540dbd8b4da70a585fad2b83ddf304ac7fc1\" returns successfully" Jan 29 11:36:08.387771 containerd[1492]: time="2025-01-29T11:36:08.387722045Z" level=info msg="StopPodSandbox for \"7796f2b8c41cac5338ffa9b402e0614ff1aa9234cde81ec23c1a0b7905f12023\"" Jan 29 11:36:08.387911 containerd[1492]: time="2025-01-29T11:36:08.387816243Z" level=info msg="TearDown network for sandbox \"7796f2b8c41cac5338ffa9b402e0614ff1aa9234cde81ec23c1a0b7905f12023\" successfully" Jan 29 11:36:08.387911 containerd[1492]: time="2025-01-29T11:36:08.387832083Z" level=info msg="StopPodSandbox for \"7796f2b8c41cac5338ffa9b402e0614ff1aa9234cde81ec23c1a0b7905f12023\" returns successfully" Jan 29 11:36:08.390385 kubelet[2560]: E0129 11:36:08.387960 2560 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jan 29 11:36:08.390385 kubelet[2560]: I0129 11:36:08.388090 2560 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="f289a1ea20b800c633270b52ec0a7cbce53731f38497fa4630193ed068515360" Jan 29 11:36:08.390042 systemd[1]: run-netns-cni\x2d4ef182d3\x2dd6e9\x2d5f3d\x2dda00\x2d4f25c8b6b20c.mount: Deactivated 
successfully. Jan 29 11:36:08.390617 containerd[1492]: time="2025-01-29T11:36:08.388818339Z" level=info msg="StopPodSandbox for \"f289a1ea20b800c633270b52ec0a7cbce53731f38497fa4630193ed068515360\"" Jan 29 11:36:08.390617 containerd[1492]: time="2025-01-29T11:36:08.389019237Z" level=info msg="Ensure that sandbox f289a1ea20b800c633270b52ec0a7cbce53731f38497fa4630193ed068515360 in task-service has been cleanup successfully" Jan 29 11:36:08.390617 containerd[1492]: time="2025-01-29T11:36:08.389330903Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-6f6b679f8f-qqzzk,Uid:cf2aa93c-f4bc-4322-9163-052200dd877a,Namespace:kube-system,Attempt:2,}" Jan 29 11:36:08.390166 systemd[1]: run-netns-cni\x2d4133807a\x2d11c3\x2d628b\x2d0b13\x2ddad9f6944e63.mount: Deactivated successfully. Jan 29 11:36:08.390763 containerd[1492]: time="2025-01-29T11:36:08.390675364Z" level=info msg="TearDown network for sandbox \"f289a1ea20b800c633270b52ec0a7cbce53731f38497fa4630193ed068515360\" successfully" Jan 29 11:36:08.390763 containerd[1492]: time="2025-01-29T11:36:08.390692085Z" level=info msg="StopPodSandbox for \"f289a1ea20b800c633270b52ec0a7cbce53731f38497fa4630193ed068515360\" returns successfully" Jan 29 11:36:08.390962 containerd[1492]: time="2025-01-29T11:36:08.390942427Z" level=info msg="StopPodSandbox for \"e4c786a1f87f123c515ee8f4dae18559dc1b81a1fbae2953ab843c3ffdeb2f38\"" Jan 29 11:36:08.391064 containerd[1492]: time="2025-01-29T11:36:08.391041884Z" level=info msg="TearDown network for sandbox \"e4c786a1f87f123c515ee8f4dae18559dc1b81a1fbae2953ab843c3ffdeb2f38\" successfully" Jan 29 11:36:08.391064 containerd[1492]: time="2025-01-29T11:36:08.391059447Z" level=info msg="StopPodSandbox for \"e4c786a1f87f123c515ee8f4dae18559dc1b81a1fbae2953ab843c3ffdeb2f38\" returns successfully" Jan 29 11:36:08.391400 containerd[1492]: time="2025-01-29T11:36:08.391379068Z" level=info msg="RunPodSandbox for 
&PodSandboxMetadata{Name:calico-apiserver-78d7549f7d-5n5j6,Uid:0725190d-a48f-4c98-9011-c6cdb64f50fe,Namespace:calico-apiserver,Attempt:2,}" Jan 29 11:36:08.768213 containerd[1492]: time="2025-01-29T11:36:08.767974395Z" level=error msg="Failed to destroy network for sandbox \"62e6cc0503d21814956a6c95475295a49bf85000e6e767d35029c30730d88e17\"" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jan 29 11:36:08.780409 containerd[1492]: time="2025-01-29T11:36:08.780344967Z" level=error msg="encountered an error cleaning up failed sandbox \"62e6cc0503d21814956a6c95475295a49bf85000e6e767d35029c30730d88e17\", marking sandbox state as SANDBOX_UNKNOWN" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jan 29 11:36:08.780542 containerd[1492]: time="2025-01-29T11:36:08.780451508Z" level=error msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-6f6b679f8f-qqzzk,Uid:cf2aa93c-f4bc-4322-9163-052200dd877a,Namespace:kube-system,Attempt:2,} failed, error" error="failed to setup network for sandbox \"62e6cc0503d21814956a6c95475295a49bf85000e6e767d35029c30730d88e17\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jan 29 11:36:08.780855 kubelet[2560]: E0129 11:36:08.780782 2560 log.go:32] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"62e6cc0503d21814956a6c95475295a49bf85000e6e767d35029c30730d88e17\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jan 29 11:36:08.780961 kubelet[2560]: 
E0129 11:36:08.780905 2560 kuberuntime_sandbox.go:72] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"62e6cc0503d21814956a6c95475295a49bf85000e6e767d35029c30730d88e17\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="kube-system/coredns-6f6b679f8f-qqzzk"
Jan 29 11:36:08.780961 kubelet[2560]: E0129 11:36:08.780936 2560 kuberuntime_manager.go:1168] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"62e6cc0503d21814956a6c95475295a49bf85000e6e767d35029c30730d88e17\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="kube-system/coredns-6f6b679f8f-qqzzk"
Jan 29 11:36:08.781029 kubelet[2560]: E0129 11:36:08.780979 2560 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"coredns-6f6b679f8f-qqzzk_kube-system(cf2aa93c-f4bc-4322-9163-052200dd877a)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"coredns-6f6b679f8f-qqzzk_kube-system(cf2aa93c-f4bc-4322-9163-052200dd877a)\\\": rpc error: code = Unknown desc = failed to setup network for sandbox \\\"62e6cc0503d21814956a6c95475295a49bf85000e6e767d35029c30730d88e17\\\": plugin type=\\\"calico\\\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="kube-system/coredns-6f6b679f8f-qqzzk" podUID="cf2aa93c-f4bc-4322-9163-052200dd877a"
Jan 29 11:36:08.798146 containerd[1492]: time="2025-01-29T11:36:08.798082500Z" level=error msg="Failed to destroy network for sandbox \"05f9cf1c50bc395566b534f6d9d826c90680313b67695139cea702c54a97f9a4\"" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/"
Jan 29 11:36:08.798951 containerd[1492]: time="2025-01-29T11:36:08.798916189Z" level=error msg="encountered an error cleaning up failed sandbox \"05f9cf1c50bc395566b534f6d9d826c90680313b67695139cea702c54a97f9a4\", marking sandbox state as SANDBOX_UNKNOWN" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/"
Jan 29 11:36:08.799035 containerd[1492]: time="2025-01-29T11:36:08.798984207Z" level=error msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-6f6b679f8f-8sz4x,Uid:d32f565b-8d7d-47d2-85bf-68725ec04cff,Namespace:kube-system,Attempt:2,} failed, error" error="failed to setup network for sandbox \"05f9cf1c50bc395566b534f6d9d826c90680313b67695139cea702c54a97f9a4\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/"
Jan 29 11:36:08.799661 kubelet[2560]: E0129 11:36:08.799241 2560 log.go:32] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"05f9cf1c50bc395566b534f6d9d826c90680313b67695139cea702c54a97f9a4\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/"
Jan 29 11:36:08.799661 kubelet[2560]: E0129 11:36:08.799340 2560 kuberuntime_sandbox.go:72] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"05f9cf1c50bc395566b534f6d9d826c90680313b67695139cea702c54a97f9a4\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="kube-system/coredns-6f6b679f8f-8sz4x"
Jan 29 11:36:08.799661 kubelet[2560]: E0129 11:36:08.799363 2560 kuberuntime_manager.go:1168] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"05f9cf1c50bc395566b534f6d9d826c90680313b67695139cea702c54a97f9a4\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="kube-system/coredns-6f6b679f8f-8sz4x"
Jan 29 11:36:08.799794 kubelet[2560]: E0129 11:36:08.799405 2560 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"coredns-6f6b679f8f-8sz4x_kube-system(d32f565b-8d7d-47d2-85bf-68725ec04cff)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"coredns-6f6b679f8f-8sz4x_kube-system(d32f565b-8d7d-47d2-85bf-68725ec04cff)\\\": rpc error: code = Unknown desc = failed to setup network for sandbox \\\"05f9cf1c50bc395566b534f6d9d826c90680313b67695139cea702c54a97f9a4\\\": plugin type=\\\"calico\\\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="kube-system/coredns-6f6b679f8f-8sz4x" podUID="d32f565b-8d7d-47d2-85bf-68725ec04cff"
Jan 29 11:36:08.807242 containerd[1492]: time="2025-01-29T11:36:08.807180012Z" level=error msg="Failed to destroy network for sandbox \"b0603ed2dd195e5f4326b4b967ccff21260ba0e0d5173691aeb14b6a04acf563\"" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/"
Jan 29 11:36:08.807787 containerd[1492]: time="2025-01-29T11:36:08.807760033Z" level=error msg="encountered an error cleaning up failed sandbox \"b0603ed2dd195e5f4326b4b967ccff21260ba0e0d5173691aeb14b6a04acf563\", marking sandbox state as SANDBOX_UNKNOWN" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/"
Jan 29 11:36:08.807856 containerd[1492]: time="2025-01-29T11:36:08.807823984Z" level=error msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-apiserver-78d7549f7d-g9z2x,Uid:c4ad5d31-5d68-4473-8b3c-72bfc21e63c5,Namespace:calico-apiserver,Attempt:2,} failed, error" error="failed to setup network for sandbox \"b0603ed2dd195e5f4326b4b967ccff21260ba0e0d5173691aeb14b6a04acf563\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/"
Jan 29 11:36:08.808132 kubelet[2560]: E0129 11:36:08.808078 2560 log.go:32] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"b0603ed2dd195e5f4326b4b967ccff21260ba0e0d5173691aeb14b6a04acf563\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/"
Jan 29 11:36:08.808198 kubelet[2560]: E0129 11:36:08.808155 2560 kuberuntime_sandbox.go:72] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"b0603ed2dd195e5f4326b4b967ccff21260ba0e0d5173691aeb14b6a04acf563\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-apiserver/calico-apiserver-78d7549f7d-g9z2x"
Jan 29 11:36:08.808198 kubelet[2560]: E0129 11:36:08.808182 2560 kuberuntime_manager.go:1168] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"b0603ed2dd195e5f4326b4b967ccff21260ba0e0d5173691aeb14b6a04acf563\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-apiserver/calico-apiserver-78d7549f7d-g9z2x"
Jan 29 11:36:08.808287 kubelet[2560]: E0129 11:36:08.808233 2560 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"calico-apiserver-78d7549f7d-g9z2x_calico-apiserver(c4ad5d31-5d68-4473-8b3c-72bfc21e63c5)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"calico-apiserver-78d7549f7d-g9z2x_calico-apiserver(c4ad5d31-5d68-4473-8b3c-72bfc21e63c5)\\\": rpc error: code = Unknown desc = failed to setup network for sandbox \\\"b0603ed2dd195e5f4326b4b967ccff21260ba0e0d5173691aeb14b6a04acf563\\\": plugin type=\\\"calico\\\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="calico-apiserver/calico-apiserver-78d7549f7d-g9z2x" podUID="c4ad5d31-5d68-4473-8b3c-72bfc21e63c5"
Jan 29 11:36:08.818193 containerd[1492]: time="2025-01-29T11:36:08.817827752Z" level=error msg="Failed to destroy network for sandbox \"080cc56d0ab33d56eb6a1d38d66c3a2cdd67647fca21a8c854e3a90bdc763db1\"" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/"
Jan 29 11:36:08.818348 containerd[1492]: time="2025-01-29T11:36:08.818314097Z" level=error msg="encountered an error cleaning up failed sandbox \"080cc56d0ab33d56eb6a1d38d66c3a2cdd67647fca21a8c854e3a90bdc763db1\", marking sandbox state as SANDBOX_UNKNOWN" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/"
Jan 29 11:36:08.818418 containerd[1492]: time="2025-01-29T11:36:08.818381193Z" level=error msg="RunPodSandbox for &PodSandboxMetadata{Name:csi-node-driver-s9vh2,Uid:0ee1d7b9-9e01-4183-97ec-91d9420b2dab,Namespace:calico-system,Attempt:1,} failed, error" error="failed to setup network for sandbox \"080cc56d0ab33d56eb6a1d38d66c3a2cdd67647fca21a8c854e3a90bdc763db1\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/"
Jan 29 11:36:08.818645 kubelet[2560]: E0129 11:36:08.818588 2560 log.go:32] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"080cc56d0ab33d56eb6a1d38d66c3a2cdd67647fca21a8c854e3a90bdc763db1\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/"
Jan 29 11:36:08.818719 kubelet[2560]: E0129 11:36:08.818654 2560 kuberuntime_sandbox.go:72] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"080cc56d0ab33d56eb6a1d38d66c3a2cdd67647fca21a8c854e3a90bdc763db1\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-system/csi-node-driver-s9vh2"
Jan 29 11:36:08.818719 kubelet[2560]: E0129 11:36:08.818677 2560 kuberuntime_manager.go:1168] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"080cc56d0ab33d56eb6a1d38d66c3a2cdd67647fca21a8c854e3a90bdc763db1\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-system/csi-node-driver-s9vh2"
Jan 29 11:36:08.818787 kubelet[2560]: E0129 11:36:08.818716 2560 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"csi-node-driver-s9vh2_calico-system(0ee1d7b9-9e01-4183-97ec-91d9420b2dab)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"csi-node-driver-s9vh2_calico-system(0ee1d7b9-9e01-4183-97ec-91d9420b2dab)\\\": rpc error: code = Unknown desc = failed to setup network for sandbox \\\"080cc56d0ab33d56eb6a1d38d66c3a2cdd67647fca21a8c854e3a90bdc763db1\\\": plugin type=\\\"calico\\\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="calico-system/csi-node-driver-s9vh2" podUID="0ee1d7b9-9e01-4183-97ec-91d9420b2dab"
Jan 29 11:36:08.832008 containerd[1492]: time="2025-01-29T11:36:08.831955340Z" level=error msg="Failed to destroy network for sandbox \"dad5edafd75c208e507084ef0dcd7c29e4f3cd0afee0ca5f8f325c8bed2cca8a\"" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/"
Jan 29 11:36:08.846784 containerd[1492]: time="2025-01-29T11:36:08.832368978Z" level=error msg="encountered an error cleaning up failed sandbox \"dad5edafd75c208e507084ef0dcd7c29e4f3cd0afee0ca5f8f325c8bed2cca8a\", marking sandbox state as SANDBOX_UNKNOWN" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/"
Jan 29 11:36:08.846784 containerd[1492]: time="2025-01-29T11:36:08.832419685Z" level=error msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-kube-controllers-7f5f6fb96-hcxll,Uid:5e835acf-6d91-46a2-a52a-32309f48a3b4,Namespace:calico-system,Attempt:2,} failed, error" error="failed to setup network for sandbox \"dad5edafd75c208e507084ef0dcd7c29e4f3cd0afee0ca5f8f325c8bed2cca8a\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/"
Jan 29 11:36:08.846923 kubelet[2560]: E0129 11:36:08.832651 2560 log.go:32] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"dad5edafd75c208e507084ef0dcd7c29e4f3cd0afee0ca5f8f325c8bed2cca8a\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/"
Jan 29 11:36:08.846923 kubelet[2560]: E0129 11:36:08.832704 2560 kuberuntime_sandbox.go:72] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"dad5edafd75c208e507084ef0dcd7c29e4f3cd0afee0ca5f8f325c8bed2cca8a\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-system/calico-kube-controllers-7f5f6fb96-hcxll"
Jan 29 11:36:08.846923 kubelet[2560]: E0129 11:36:08.832722 2560 kuberuntime_manager.go:1168] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"dad5edafd75c208e507084ef0dcd7c29e4f3cd0afee0ca5f8f325c8bed2cca8a\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-system/calico-kube-controllers-7f5f6fb96-hcxll"
Jan 29 11:36:08.847089 kubelet[2560]: E0129 11:36:08.832764 2560 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"calico-kube-controllers-7f5f6fb96-hcxll_calico-system(5e835acf-6d91-46a2-a52a-32309f48a3b4)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"calico-kube-controllers-7f5f6fb96-hcxll_calico-system(5e835acf-6d91-46a2-a52a-32309f48a3b4)\\\": rpc error: code = Unknown desc = failed to setup network for sandbox \\\"dad5edafd75c208e507084ef0dcd7c29e4f3cd0afee0ca5f8f325c8bed2cca8a\\\": plugin type=\\\"calico\\\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="calico-system/calico-kube-controllers-7f5f6fb96-hcxll" podUID="5e835acf-6d91-46a2-a52a-32309f48a3b4"
Jan 29 11:36:08.923815 containerd[1492]: time="2025-01-29T11:36:08.923759289Z" level=error msg="Failed to destroy network for sandbox \"6b8213b783a7c465b327152c49a96b3af800823e3978928ad141cc9679eef942\"" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/"
Jan 29 11:36:08.924221 containerd[1492]: time="2025-01-29T11:36:08.924186573Z" level=error msg="encountered an error cleaning up failed sandbox \"6b8213b783a7c465b327152c49a96b3af800823e3978928ad141cc9679eef942\", marking sandbox state as SANDBOX_UNKNOWN" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/"
Jan 29 11:36:08.924270 containerd[1492]: time="2025-01-29T11:36:08.924253809Z" level=error msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-apiserver-78d7549f7d-5n5j6,Uid:0725190d-a48f-4c98-9011-c6cdb64f50fe,Namespace:calico-apiserver,Attempt:2,} failed, error" error="failed to setup network for sandbox \"6b8213b783a7c465b327152c49a96b3af800823e3978928ad141cc9679eef942\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/"
Jan 29 11:36:08.924893 kubelet[2560]: E0129 11:36:08.924490 2560 log.go:32] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"6b8213b783a7c465b327152c49a96b3af800823e3978928ad141cc9679eef942\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/"
Jan 29 11:36:08.924893 kubelet[2560]: E0129 11:36:08.924565 2560 kuberuntime_sandbox.go:72] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"6b8213b783a7c465b327152c49a96b3af800823e3978928ad141cc9679eef942\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-apiserver/calico-apiserver-78d7549f7d-5n5j6"
Jan 29 11:36:08.924893 kubelet[2560]: E0129 11:36:08.924590 2560 kuberuntime_manager.go:1168] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"6b8213b783a7c465b327152c49a96b3af800823e3978928ad141cc9679eef942\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-apiserver/calico-apiserver-78d7549f7d-5n5j6"
Jan 29 11:36:08.925132 kubelet[2560]: E0129 11:36:08.924671 2560 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"calico-apiserver-78d7549f7d-5n5j6_calico-apiserver(0725190d-a48f-4c98-9011-c6cdb64f50fe)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"calico-apiserver-78d7549f7d-5n5j6_calico-apiserver(0725190d-a48f-4c98-9011-c6cdb64f50fe)\\\": rpc error: code = Unknown desc = failed to setup network for sandbox \\\"6b8213b783a7c465b327152c49a96b3af800823e3978928ad141cc9679eef942\\\": plugin type=\\\"calico\\\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="calico-apiserver/calico-apiserver-78d7549f7d-5n5j6" podUID="0725190d-a48f-4c98-9011-c6cdb64f50fe"
Jan 29 11:36:09.108855 systemd[1]: run-netns-cni\x2d7fb86eb1\x2d3e98\x2df3e3\x2d5958\x2d2f29bf6fd9aa.mount: Deactivated successfully.
Jan 29 11:36:09.392309 kubelet[2560]: I0129 11:36:09.392186 2560 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="dad5edafd75c208e507084ef0dcd7c29e4f3cd0afee0ca5f8f325c8bed2cca8a"
Jan 29 11:36:09.392863 containerd[1492]: time="2025-01-29T11:36:09.392741337Z" level=info msg="StopPodSandbox for \"dad5edafd75c208e507084ef0dcd7c29e4f3cd0afee0ca5f8f325c8bed2cca8a\""
Jan 29 11:36:09.393124 containerd[1492]: time="2025-01-29T11:36:09.393058193Z" level=info msg="Ensure that sandbox dad5edafd75c208e507084ef0dcd7c29e4f3cd0afee0ca5f8f325c8bed2cca8a in task-service has been cleanup successfully"
Jan 29 11:36:09.393611 containerd[1492]: time="2025-01-29T11:36:09.393577710Z" level=info msg="TearDown network for sandbox \"dad5edafd75c208e507084ef0dcd7c29e4f3cd0afee0ca5f8f325c8bed2cca8a\" successfully"
Jan 29 11:36:09.393611 containerd[1492]: time="2025-01-29T11:36:09.393603510Z" level=info msg="StopPodSandbox for \"dad5edafd75c208e507084ef0dcd7c29e4f3cd0afee0ca5f8f325c8bed2cca8a\" returns successfully"
Jan 29 11:36:09.395723 containerd[1492]: time="2025-01-29T11:36:09.394056622Z" level=info msg="StopPodSandbox for \"7cf0ee6b79c5f0ffbd1df3c4ba1fa347f2ab15775291f793b405df1eb50ddbdc\""
Jan 29 11:36:09.395723 containerd[1492]: time="2025-01-29T11:36:09.394180014Z" level=info msg="TearDown network for sandbox \"7cf0ee6b79c5f0ffbd1df3c4ba1fa347f2ab15775291f793b405df1eb50ddbdc\" successfully"
Jan 29 11:36:09.395723 containerd[1492]: time="2025-01-29T11:36:09.394192968Z" level=info msg="StopPodSandbox for \"7cf0ee6b79c5f0ffbd1df3c4ba1fa347f2ab15775291f793b405df1eb50ddbdc\" returns successfully"
Jan 29 11:36:09.395723 containerd[1492]: time="2025-01-29T11:36:09.394510365Z" level=info msg="StopPodSandbox for \"b5f2937f631c80a639072f6b6e4b81c67d79bd7b8b00d51fbbb621f6dcf86772\""
Jan 29 11:36:09.395723 containerd[1492]: time="2025-01-29T11:36:09.394581560Z" level=info msg="TearDown network for sandbox \"b5f2937f631c80a639072f6b6e4b81c67d79bd7b8b00d51fbbb621f6dcf86772\" successfully"
Jan 29 11:36:09.395723 containerd[1492]: time="2025-01-29T11:36:09.394590617Z" level=info msg="StopPodSandbox for \"b5f2937f631c80a639072f6b6e4b81c67d79bd7b8b00d51fbbb621f6dcf86772\" returns successfully"
Jan 29 11:36:09.396173 systemd[1]: run-netns-cni\x2df5aa3ed0\x2d6123\x2d52e9\x2d4d52\x2debac1c9e6dd5.mount: Deactivated successfully.
Jan 29 11:36:09.396431 containerd[1492]: time="2025-01-29T11:36:09.396221896Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-kube-controllers-7f5f6fb96-hcxll,Uid:5e835acf-6d91-46a2-a52a-32309f48a3b4,Namespace:calico-system,Attempt:3,}"
Jan 29 11:36:09.396466 kubelet[2560]: I0129 11:36:09.396392 2560 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="62e6cc0503d21814956a6c95475295a49bf85000e6e767d35029c30730d88e17"
Jan 29 11:36:09.396932 containerd[1492]: time="2025-01-29T11:36:09.396902256Z" level=info msg="StopPodSandbox for \"62e6cc0503d21814956a6c95475295a49bf85000e6e767d35029c30730d88e17\""
Jan 29 11:36:09.397111 containerd[1492]: time="2025-01-29T11:36:09.397090030Z" level=info msg="Ensure that sandbox 62e6cc0503d21814956a6c95475295a49bf85000e6e767d35029c30730d88e17 in task-service has been cleanup successfully"
Jan 29 11:36:09.397330 containerd[1492]: time="2025-01-29T11:36:09.397304313Z" level=info msg="TearDown network for sandbox \"62e6cc0503d21814956a6c95475295a49bf85000e6e767d35029c30730d88e17\" successfully"
Jan 29 11:36:09.397365 containerd[1492]: time="2025-01-29T11:36:09.397324721Z" level=info msg="StopPodSandbox for \"62e6cc0503d21814956a6c95475295a49bf85000e6e767d35029c30730d88e17\" returns successfully"
Jan 29 11:36:09.398481 containerd[1492]: time="2025-01-29T11:36:09.398451541Z" level=info msg="StopPodSandbox for \"dd045a0823620908e29eab7f45fc540dbd8b4da70a585fad2b83ddf304ac7fc1\""
Jan 29 11:36:09.398552 containerd[1492]: time="2025-01-29T11:36:09.398530210Z" level=info msg="TearDown network for sandbox \"dd045a0823620908e29eab7f45fc540dbd8b4da70a585fad2b83ddf304ac7fc1\" successfully"
Jan 29 11:36:09.398552 containerd[1492]: time="2025-01-29T11:36:09.398546681Z" level=info msg="StopPodSandbox for \"dd045a0823620908e29eab7f45fc540dbd8b4da70a585fad2b83ddf304ac7fc1\" returns successfully"
Jan 29 11:36:09.400245 containerd[1492]: time="2025-01-29T11:36:09.400098921Z" level=info msg="StopPodSandbox for \"7796f2b8c41cac5338ffa9b402e0614ff1aa9234cde81ec23c1a0b7905f12023\""
Jan 29 11:36:09.400245 containerd[1492]: time="2025-01-29T11:36:09.400190854Z" level=info msg="TearDown network for sandbox \"7796f2b8c41cac5338ffa9b402e0614ff1aa9234cde81ec23c1a0b7905f12023\" successfully"
Jan 29 11:36:09.400245 containerd[1492]: time="2025-01-29T11:36:09.400203087Z" level=info msg="StopPodSandbox for \"7796f2b8c41cac5338ffa9b402e0614ff1aa9234cde81ec23c1a0b7905f12023\" returns successfully"
Jan 29 11:36:09.400705 kubelet[2560]: E0129 11:36:09.400684 2560 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Jan 29 11:36:09.400743 systemd[1]: run-netns-cni\x2d8c592b81\x2d1d71\x2d9559\x2d234d\x2d0554053008aa.mount: Deactivated successfully.
Jan 29 11:36:09.401264 containerd[1492]: time="2025-01-29T11:36:09.401222335Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-6f6b679f8f-qqzzk,Uid:cf2aa93c-f4bc-4322-9163-052200dd877a,Namespace:kube-system,Attempt:3,}"
Jan 29 11:36:09.401688 kubelet[2560]: I0129 11:36:09.401615 2560 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="6b8213b783a7c465b327152c49a96b3af800823e3978928ad141cc9679eef942"
Jan 29 11:36:09.402546 containerd[1492]: time="2025-01-29T11:36:09.402402085Z" level=info msg="StopPodSandbox for \"6b8213b783a7c465b327152c49a96b3af800823e3978928ad141cc9679eef942\""
Jan 29 11:36:09.402602 containerd[1492]: time="2025-01-29T11:36:09.402559351Z" level=info msg="Ensure that sandbox 6b8213b783a7c465b327152c49a96b3af800823e3978928ad141cc9679eef942 in task-service has been cleanup successfully"
Jan 29 11:36:09.402825 containerd[1492]: time="2025-01-29T11:36:09.402763435Z" level=info msg="TearDown network for sandbox \"6b8213b783a7c465b327152c49a96b3af800823e3978928ad141cc9679eef942\" successfully"
Jan 29 11:36:09.402825 containerd[1492]: time="2025-01-29T11:36:09.402784003Z" level=info msg="StopPodSandbox for \"6b8213b783a7c465b327152c49a96b3af800823e3978928ad141cc9679eef942\" returns successfully"
Jan 29 11:36:09.403206 containerd[1492]: time="2025-01-29T11:36:09.403179137Z" level=info msg="StopPodSandbox for \"f289a1ea20b800c633270b52ec0a7cbce53731f38497fa4630193ed068515360\""
Jan 29 11:36:09.403292 containerd[1492]: time="2025-01-29T11:36:09.403270940Z" level=info msg="TearDown network for sandbox \"f289a1ea20b800c633270b52ec0a7cbce53731f38497fa4630193ed068515360\" successfully"
Jan 29 11:36:09.403349 containerd[1492]: time="2025-01-29T11:36:09.403290336Z" level=info msg="StopPodSandbox for \"f289a1ea20b800c633270b52ec0a7cbce53731f38497fa4630193ed068515360\" returns successfully"
Jan 29 11:36:09.404041 containerd[1492]: time="2025-01-29T11:36:09.403882741Z" level=info msg="StopPodSandbox for \"e4c786a1f87f123c515ee8f4dae18559dc1b81a1fbae2953ab843c3ffdeb2f38\""
Jan 29 11:36:09.404041 containerd[1492]: time="2025-01-29T11:36:09.403965227Z" level=info msg="TearDown network for sandbox \"e4c786a1f87f123c515ee8f4dae18559dc1b81a1fbae2953ab843c3ffdeb2f38\" successfully"
Jan 29 11:36:09.404041 containerd[1492]: time="2025-01-29T11:36:09.403977429Z" level=info msg="StopPodSandbox for \"e4c786a1f87f123c515ee8f4dae18559dc1b81a1fbae2953ab843c3ffdeb2f38\" returns successfully"
Jan 29 11:36:09.404747 systemd[1]: run-netns-cni\x2db39cd433\x2d376a\x2de4d7\x2d02fd\x2d5a0b5a3c3a1e.mount: Deactivated successfully.
Jan 29 11:36:09.405793 containerd[1492]: time="2025-01-29T11:36:09.405190231Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-apiserver-78d7549f7d-5n5j6,Uid:0725190d-a48f-4c98-9011-c6cdb64f50fe,Namespace:calico-apiserver,Attempt:3,}"
Jan 29 11:36:09.405848 kubelet[2560]: I0129 11:36:09.405493 2560 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="05f9cf1c50bc395566b534f6d9d826c90680313b67695139cea702c54a97f9a4"
Jan 29 11:36:09.406274 containerd[1492]: time="2025-01-29T11:36:09.406223666Z" level=info msg="StopPodSandbox for \"05f9cf1c50bc395566b534f6d9d826c90680313b67695139cea702c54a97f9a4\""
Jan 29 11:36:09.406439 containerd[1492]: time="2025-01-29T11:36:09.406393505Z" level=info msg="Ensure that sandbox 05f9cf1c50bc395566b534f6d9d826c90680313b67695139cea702c54a97f9a4 in task-service has been cleanup successfully"
Jan 29 11:36:09.409242 containerd[1492]: time="2025-01-29T11:36:09.406577511Z" level=info msg="TearDown network for sandbox \"05f9cf1c50bc395566b534f6d9d826c90680313b67695139cea702c54a97f9a4\" successfully"
Jan 29 11:36:09.409242 containerd[1492]: time="2025-01-29T11:36:09.406595345Z" level=info msg="StopPodSandbox for \"05f9cf1c50bc395566b534f6d9d826c90680313b67695139cea702c54a97f9a4\" returns successfully"
Jan 29 11:36:09.409379 kubelet[2560]: I0129 11:36:09.408836 2560 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="080cc56d0ab33d56eb6a1d38d66c3a2cdd67647fca21a8c854e3a90bdc763db1"
Jan 29 11:36:09.408595 systemd[1]: run-netns-cni\x2dd5f1a1f8\x2d608d\x2d161f\x2dba28\x2d367d65474751.mount: Deactivated successfully.
Jan 29 11:36:09.409557 containerd[1492]: time="2025-01-29T11:36:09.409419710Z" level=info msg="StopPodSandbox for \"080cc56d0ab33d56eb6a1d38d66c3a2cdd67647fca21a8c854e3a90bdc763db1\""
Jan 29 11:36:09.409599 containerd[1492]: time="2025-01-29T11:36:09.409581935Z" level=info msg="Ensure that sandbox 080cc56d0ab33d56eb6a1d38d66c3a2cdd67647fca21a8c854e3a90bdc763db1 in task-service has been cleanup successfully"
Jan 29 11:36:09.409849 containerd[1492]: time="2025-01-29T11:36:09.409812879Z" level=info msg="StopPodSandbox for \"78be77b980959cfbde3c7658c4ee4124dfca02a33e1eb74d810982ec9ce9af2d\""
Jan 29 11:36:09.409931 containerd[1492]: time="2025-01-29T11:36:09.409886648Z" level=info msg="TearDown network for sandbox \"78be77b980959cfbde3c7658c4ee4124dfca02a33e1eb74d810982ec9ce9af2d\" successfully"
Jan 29 11:36:09.409931 containerd[1492]: time="2025-01-29T11:36:09.409895325Z" level=info msg="StopPodSandbox for \"78be77b980959cfbde3c7658c4ee4124dfca02a33e1eb74d810982ec9ce9af2d\" returns successfully"
Jan 29 11:36:09.410163 containerd[1492]: time="2025-01-29T11:36:09.410056337Z" level=info msg="TearDown network for sandbox \"080cc56d0ab33d56eb6a1d38d66c3a2cdd67647fca21a8c854e3a90bdc763db1\" successfully"
Jan 29 11:36:09.410163 containerd[1492]: time="2025-01-29T11:36:09.410071987Z" level=info msg="StopPodSandbox for \"080cc56d0ab33d56eb6a1d38d66c3a2cdd67647fca21a8c854e3a90bdc763db1\" returns successfully"
Jan 29 11:36:09.410337 containerd[1492]: time="2025-01-29T11:36:09.410197363Z" level=info msg="StopPodSandbox for \"fe40cde01f9bc00e0017a79d8d4efb77eed31f6ad079f48d7d46d61b289b6a4b\""
Jan 29 11:36:09.410337 containerd[1492]: time="2025-01-29T11:36:09.410258458Z" level=info msg="TearDown network for sandbox \"fe40cde01f9bc00e0017a79d8d4efb77eed31f6ad079f48d7d46d61b289b6a4b\" successfully"
Jan 29 11:36:09.410337 containerd[1492]: time="2025-01-29T11:36:09.410266553Z" level=info msg="StopPodSandbox for \"fe40cde01f9bc00e0017a79d8d4efb77eed31f6ad079f48d7d46d61b289b6a4b\" returns successfully"
Jan 29 11:36:09.410546 kubelet[2560]: E0129 11:36:09.410490 2560 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Jan 29 11:36:09.410830 containerd[1492]: time="2025-01-29T11:36:09.410801831Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-6f6b679f8f-8sz4x,Uid:d32f565b-8d7d-47d2-85bf-68725ec04cff,Namespace:kube-system,Attempt:3,}"
Jan 29 11:36:09.411019 containerd[1492]: time="2025-01-29T11:36:09.410985686Z" level=info msg="StopPodSandbox for \"fc2896e11c36986d8fde21f00e5a96b42534c3d205cbbed9dc871d57b74e32b9\""
Jan 29 11:36:09.411086 containerd[1492]: time="2025-01-29T11:36:09.411070766Z" level=info msg="TearDown network for sandbox \"fc2896e11c36986d8fde21f00e5a96b42534c3d205cbbed9dc871d57b74e32b9\" successfully"
Jan 29 11:36:09.411086 containerd[1492]: time="2025-01-29T11:36:09.411082628Z" level=info msg="StopPodSandbox for \"fc2896e11c36986d8fde21f00e5a96b42534c3d205cbbed9dc871d57b74e32b9\" returns successfully"
Jan 29 11:36:09.411615 containerd[1492]: time="2025-01-29T11:36:09.411453937Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:csi-node-driver-s9vh2,Uid:0ee1d7b9-9e01-4183-97ec-91d9420b2dab,Namespace:calico-system,Attempt:2,}"
Jan 29 11:36:09.412282 kubelet[2560]: I0129 11:36:09.412263 2560 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="b0603ed2dd195e5f4326b4b967ccff21260ba0e0d5173691aeb14b6a04acf563"
Jan 29 11:36:09.412720 containerd[1492]: time="2025-01-29T11:36:09.412673441Z" level=info msg="StopPodSandbox for \"b0603ed2dd195e5f4326b4b967ccff21260ba0e0d5173691aeb14b6a04acf563\""
Jan 29 11:36:09.412880 containerd[1492]: time="2025-01-29T11:36:09.412857407Z" level=info msg="Ensure that sandbox b0603ed2dd195e5f4326b4b967ccff21260ba0e0d5173691aeb14b6a04acf563 in task-service has been cleanup successfully"
Jan 29 11:36:09.413031 containerd[1492]: time="2025-01-29T11:36:09.413000547Z" level=info msg="TearDown network for sandbox \"b0603ed2dd195e5f4326b4b967ccff21260ba0e0d5173691aeb14b6a04acf563\" successfully"
Jan 29 11:36:09.413066 containerd[1492]: time="2025-01-29T11:36:09.413028470Z" level=info msg="StopPodSandbox for \"b0603ed2dd195e5f4326b4b967ccff21260ba0e0d5173691aeb14b6a04acf563\" returns successfully"
Jan 29 11:36:09.413597 containerd[1492]: time="2025-01-29T11:36:09.413384219Z" level=info msg="StopPodSandbox for \"ca71e99b7c48067d7bf825de1146deea722737ac8db1aede627b41776be5c345\""
Jan 29 11:36:09.413597 containerd[1492]: time="2025-01-29T11:36:09.413516428Z" level=info msg="TearDown network for sandbox \"ca71e99b7c48067d7bf825de1146deea722737ac8db1aede627b41776be5c345\" successfully"
Jan 29 11:36:09.413597 containerd[1492]: time="2025-01-29T11:36:09.413527649Z" level=info msg="StopPodSandbox for \"ca71e99b7c48067d7bf825de1146deea722737ac8db1aede627b41776be5c345\" returns successfully"
Jan 29 11:36:09.414154 containerd[1492]: time="2025-01-29T11:36:09.414127988Z" level=info msg="StopPodSandbox for \"f81c812ded146071c6d5357fc23680f636d22e342d877bc1c706a89aba49dd93\""
Jan 29 11:36:09.414224 containerd[1492]: time="2025-01-29T11:36:09.414205284Z" level=info msg="TearDown network for sandbox \"f81c812ded146071c6d5357fc23680f636d22e342d877bc1c706a89aba49dd93\" successfully"
Jan 29 11:36:09.414224 containerd[1492]: time="2025-01-29T11:36:09.414221114Z" level=info msg="StopPodSandbox for \"f81c812ded146071c6d5357fc23680f636d22e342d877bc1c706a89aba49dd93\" returns successfully"
Jan 29 11:36:09.414666 containerd[1492]: time="2025-01-29T11:36:09.414645583Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-apiserver-78d7549f7d-g9z2x,Uid:c4ad5d31-5d68-4473-8b3c-72bfc21e63c5,Namespace:calico-apiserver,Attempt:3,}"
Jan 29 11:36:09.532814 containerd[1492]: time="2025-01-29T11:36:09.531763174Z" level=error msg="Failed to destroy network for sandbox \"5e37377a6c7e16ee19055fe266012ffe5537a92df9268d751dcb96f4c96359da\"" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/"
Jan 29 11:36:09.532814 containerd[1492]: time="2025-01-29T11:36:09.532166422Z" level=error msg="encountered an error cleaning up failed sandbox \"5e37377a6c7e16ee19055fe266012ffe5537a92df9268d751dcb96f4c96359da\", marking sandbox state as SANDBOX_UNKNOWN" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/"
Jan 29 11:36:09.532814 containerd[1492]: time="2025-01-29T11:36:09.532215083Z" level=error msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-6f6b679f8f-qqzzk,Uid:cf2aa93c-f4bc-4322-9163-052200dd877a,Namespace:kube-system,Attempt:3,} failed, error" error="failed to setup network for sandbox \"5e37377a6c7e16ee19055fe266012ffe5537a92df9268d751dcb96f4c96359da\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/"
Jan 29 11:36:09.533192 kubelet[2560]: E0129 11:36:09.532412 2560 log.go:32] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"5e37377a6c7e16ee19055fe266012ffe5537a92df9268d751dcb96f4c96359da\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/"
Jan 29 11:36:09.533192 kubelet[2560]: E0129 11:36:09.532468 2560 kuberuntime_sandbox.go:72] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"5e37377a6c7e16ee19055fe266012ffe5537a92df9268d751dcb96f4c96359da\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="kube-system/coredns-6f6b679f8f-qqzzk"
Jan 29 11:36:09.533192 kubelet[2560]: E0129 11:36:09.532488 2560 kuberuntime_manager.go:1168] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"5e37377a6c7e16ee19055fe266012ffe5537a92df9268d751dcb96f4c96359da\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="kube-system/coredns-6f6b679f8f-qqzzk"
Jan 29 11:36:09.533320 kubelet[2560]: E0129 11:36:09.532535 2560 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"coredns-6f6b679f8f-qqzzk_kube-system(cf2aa93c-f4bc-4322-9163-052200dd877a)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"coredns-6f6b679f8f-qqzzk_kube-system(cf2aa93c-f4bc-4322-9163-052200dd877a)\\\": rpc error: code = Unknown desc = failed to setup network for sandbox \\\"5e37377a6c7e16ee19055fe266012ffe5537a92df9268d751dcb96f4c96359da\\\": plugin type=\\\"calico\\\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="kube-system/coredns-6f6b679f8f-qqzzk" podUID="cf2aa93c-f4bc-4322-9163-052200dd877a"
Jan 29 11:36:09.546051 containerd[1492]: time="2025-01-29T11:36:09.545899202Z" level=error msg="Failed to destroy network for sandbox \"02f3fae8add70e6d47c8bf8ac88f4239e8c0deb00c48770a4763038b478021c8\"" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/"
Jan 29 11:36:09.546449 containerd[1492]: time="2025-01-29T11:36:09.546428368Z" level=error msg="encountered an error cleaning up failed sandbox \"02f3fae8add70e6d47c8bf8ac88f4239e8c0deb00c48770a4763038b478021c8\", marking sandbox state as SANDBOX_UNKNOWN" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/"
Jan 29 11:36:09.557751 containerd[1492]: time="2025-01-29T11:36:09.557684046Z" level=error msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-kube-controllers-7f5f6fb96-hcxll,Uid:5e835acf-6d91-46a2-a52a-32309f48a3b4,Namespace:calico-system,Attempt:3,} failed, error" error="failed to setup network for sandbox \"02f3fae8add70e6d47c8bf8ac88f4239e8c0deb00c48770a4763038b478021c8\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/"
Jan 29 11:36:09.558141 kubelet[2560]: E0129 11:36:09.557962 2560 log.go:32] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"02f3fae8add70e6d47c8bf8ac88f4239e8c0deb00c48770a4763038b478021c8\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/"
Jan 29 11:36:09.558141 kubelet[2560]: E0129 11:36:09.558041 2560 kuberuntime_sandbox.go:72] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"02f3fae8add70e6d47c8bf8ac88f4239e8c0deb00c48770a4763038b478021c8\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the
calico/node container is running and has mounted /var/lib/calico/" pod="calico-system/calico-kube-controllers-7f5f6fb96-hcxll" Jan 29 11:36:09.558141 kubelet[2560]: E0129 11:36:09.558065 2560 kuberuntime_manager.go:1168] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"02f3fae8add70e6d47c8bf8ac88f4239e8c0deb00c48770a4763038b478021c8\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-system/calico-kube-controllers-7f5f6fb96-hcxll" Jan 29 11:36:09.558246 kubelet[2560]: E0129 11:36:09.558108 2560 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"calico-kube-controllers-7f5f6fb96-hcxll_calico-system(5e835acf-6d91-46a2-a52a-32309f48a3b4)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"calico-kube-controllers-7f5f6fb96-hcxll_calico-system(5e835acf-6d91-46a2-a52a-32309f48a3b4)\\\": rpc error: code = Unknown desc = failed to setup network for sandbox \\\"02f3fae8add70e6d47c8bf8ac88f4239e8c0deb00c48770a4763038b478021c8\\\": plugin type=\\\"calico\\\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="calico-system/calico-kube-controllers-7f5f6fb96-hcxll" podUID="5e835acf-6d91-46a2-a52a-32309f48a3b4" Jan 29 11:36:09.566711 containerd[1492]: time="2025-01-29T11:36:09.566559978Z" level=error msg="Failed to destroy network for sandbox \"287ed895e0fd544523f25485baaa91620d0f599e710587bc7ace1b80134c67ce\"" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jan 29 11:36:09.567332 containerd[1492]: time="2025-01-29T11:36:09.567305150Z" level=error msg="encountered an error 
cleaning up failed sandbox \"287ed895e0fd544523f25485baaa91620d0f599e710587bc7ace1b80134c67ce\", marking sandbox state as SANDBOX_UNKNOWN" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jan 29 11:36:09.567494 containerd[1492]: time="2025-01-29T11:36:09.567466504Z" level=error msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-6f6b679f8f-8sz4x,Uid:d32f565b-8d7d-47d2-85bf-68725ec04cff,Namespace:kube-system,Attempt:3,} failed, error" error="failed to setup network for sandbox \"287ed895e0fd544523f25485baaa91620d0f599e710587bc7ace1b80134c67ce\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jan 29 11:36:09.567924 kubelet[2560]: E0129 11:36:09.567846 2560 log.go:32] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"287ed895e0fd544523f25485baaa91620d0f599e710587bc7ace1b80134c67ce\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jan 29 11:36:09.568009 kubelet[2560]: E0129 11:36:09.567932 2560 kuberuntime_sandbox.go:72] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"287ed895e0fd544523f25485baaa91620d0f599e710587bc7ace1b80134c67ce\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="kube-system/coredns-6f6b679f8f-8sz4x" Jan 29 11:36:09.568009 kubelet[2560]: E0129 11:36:09.567961 2560 kuberuntime_manager.go:1168] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to setup network for 
sandbox \"287ed895e0fd544523f25485baaa91620d0f599e710587bc7ace1b80134c67ce\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="kube-system/coredns-6f6b679f8f-8sz4x" Jan 29 11:36:09.568081 kubelet[2560]: E0129 11:36:09.568030 2560 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"coredns-6f6b679f8f-8sz4x_kube-system(d32f565b-8d7d-47d2-85bf-68725ec04cff)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"coredns-6f6b679f8f-8sz4x_kube-system(d32f565b-8d7d-47d2-85bf-68725ec04cff)\\\": rpc error: code = Unknown desc = failed to setup network for sandbox \\\"287ed895e0fd544523f25485baaa91620d0f599e710587bc7ace1b80134c67ce\\\": plugin type=\\\"calico\\\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="kube-system/coredns-6f6b679f8f-8sz4x" podUID="d32f565b-8d7d-47d2-85bf-68725ec04cff" Jan 29 11:36:09.573877 containerd[1492]: time="2025-01-29T11:36:09.573824748Z" level=error msg="Failed to destroy network for sandbox \"0aa3e1e1cc1c6bf9e805ca1806db21a802b9119be72b5d45614a63e66a9541a6\"" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jan 29 11:36:09.575141 containerd[1492]: time="2025-01-29T11:36:09.575099416Z" level=error msg="encountered an error cleaning up failed sandbox \"0aa3e1e1cc1c6bf9e805ca1806db21a802b9119be72b5d45614a63e66a9541a6\", marking sandbox state as SANDBOX_UNKNOWN" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jan 29 11:36:09.575292 containerd[1492]: 
time="2025-01-29T11:36:09.575176461Z" level=error msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-apiserver-78d7549f7d-g9z2x,Uid:c4ad5d31-5d68-4473-8b3c-72bfc21e63c5,Namespace:calico-apiserver,Attempt:3,} failed, error" error="failed to setup network for sandbox \"0aa3e1e1cc1c6bf9e805ca1806db21a802b9119be72b5d45614a63e66a9541a6\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jan 29 11:36:09.575490 kubelet[2560]: E0129 11:36:09.575438 2560 log.go:32] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"0aa3e1e1cc1c6bf9e805ca1806db21a802b9119be72b5d45614a63e66a9541a6\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jan 29 11:36:09.576259 kubelet[2560]: E0129 11:36:09.575507 2560 kuberuntime_sandbox.go:72] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"0aa3e1e1cc1c6bf9e805ca1806db21a802b9119be72b5d45614a63e66a9541a6\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-apiserver/calico-apiserver-78d7549f7d-g9z2x" Jan 29 11:36:09.576259 kubelet[2560]: E0129 11:36:09.575535 2560 kuberuntime_manager.go:1168] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"0aa3e1e1cc1c6bf9e805ca1806db21a802b9119be72b5d45614a63e66a9541a6\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-apiserver/calico-apiserver-78d7549f7d-g9z2x" Jan 29 11:36:09.576259 
kubelet[2560]: E0129 11:36:09.575582 2560 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"calico-apiserver-78d7549f7d-g9z2x_calico-apiserver(c4ad5d31-5d68-4473-8b3c-72bfc21e63c5)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"calico-apiserver-78d7549f7d-g9z2x_calico-apiserver(c4ad5d31-5d68-4473-8b3c-72bfc21e63c5)\\\": rpc error: code = Unknown desc = failed to setup network for sandbox \\\"0aa3e1e1cc1c6bf9e805ca1806db21a802b9119be72b5d45614a63e66a9541a6\\\": plugin type=\\\"calico\\\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="calico-apiserver/calico-apiserver-78d7549f7d-g9z2x" podUID="c4ad5d31-5d68-4473-8b3c-72bfc21e63c5" Jan 29 11:36:09.582752 containerd[1492]: time="2025-01-29T11:36:09.581915170Z" level=error msg="Failed to destroy network for sandbox \"e1a02e62fa690b02079fe64e81dbe8322e67817e7a20aa91d7cc841d5e25bba6\"" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jan 29 11:36:09.582752 containerd[1492]: time="2025-01-29T11:36:09.582301798Z" level=error msg="encountered an error cleaning up failed sandbox \"e1a02e62fa690b02079fe64e81dbe8322e67817e7a20aa91d7cc841d5e25bba6\", marking sandbox state as SANDBOX_UNKNOWN" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jan 29 11:36:09.582752 containerd[1492]: time="2025-01-29T11:36:09.582354587Z" level=error msg="RunPodSandbox for &PodSandboxMetadata{Name:csi-node-driver-s9vh2,Uid:0ee1d7b9-9e01-4183-97ec-91d9420b2dab,Namespace:calico-system,Attempt:2,} failed, error" error="failed to setup network for sandbox 
\"e1a02e62fa690b02079fe64e81dbe8322e67817e7a20aa91d7cc841d5e25bba6\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jan 29 11:36:09.583014 kubelet[2560]: E0129 11:36:09.582619 2560 log.go:32] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"e1a02e62fa690b02079fe64e81dbe8322e67817e7a20aa91d7cc841d5e25bba6\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jan 29 11:36:09.583014 kubelet[2560]: E0129 11:36:09.582683 2560 kuberuntime_sandbox.go:72] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"e1a02e62fa690b02079fe64e81dbe8322e67817e7a20aa91d7cc841d5e25bba6\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-system/csi-node-driver-s9vh2" Jan 29 11:36:09.583331 kubelet[2560]: E0129 11:36:09.582706 2560 kuberuntime_manager.go:1168] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"e1a02e62fa690b02079fe64e81dbe8322e67817e7a20aa91d7cc841d5e25bba6\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-system/csi-node-driver-s9vh2" Jan 29 11:36:09.583495 kubelet[2560]: E0129 11:36:09.583425 2560 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"csi-node-driver-s9vh2_calico-system(0ee1d7b9-9e01-4183-97ec-91d9420b2dab)\" with CreatePodSandboxError: \"Failed to create sandbox for pod 
\\\"csi-node-driver-s9vh2_calico-system(0ee1d7b9-9e01-4183-97ec-91d9420b2dab)\\\": rpc error: code = Unknown desc = failed to setup network for sandbox \\\"e1a02e62fa690b02079fe64e81dbe8322e67817e7a20aa91d7cc841d5e25bba6\\\": plugin type=\\\"calico\\\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="calico-system/csi-node-driver-s9vh2" podUID="0ee1d7b9-9e01-4183-97ec-91d9420b2dab" Jan 29 11:36:09.595612 containerd[1492]: time="2025-01-29T11:36:09.595545107Z" level=error msg="Failed to destroy network for sandbox \"9c2e1fb00b0ed28c181a6f5615fe212925f0c2f757fdb4ef8583ddfde1de9f84\"" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jan 29 11:36:09.596045 containerd[1492]: time="2025-01-29T11:36:09.596014440Z" level=error msg="encountered an error cleaning up failed sandbox \"9c2e1fb00b0ed28c181a6f5615fe212925f0c2f757fdb4ef8583ddfde1de9f84\", marking sandbox state as SANDBOX_UNKNOWN" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jan 29 11:36:09.596108 containerd[1492]: time="2025-01-29T11:36:09.596082548Z" level=error msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-apiserver-78d7549f7d-5n5j6,Uid:0725190d-a48f-4c98-9011-c6cdb64f50fe,Namespace:calico-apiserver,Attempt:3,} failed, error" error="failed to setup network for sandbox \"9c2e1fb00b0ed28c181a6f5615fe212925f0c2f757fdb4ef8583ddfde1de9f84\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jan 29 11:36:09.596436 kubelet[2560]: E0129 11:36:09.596347 2560 log.go:32] "RunPodSandbox from runtime 
service failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"9c2e1fb00b0ed28c181a6f5615fe212925f0c2f757fdb4ef8583ddfde1de9f84\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jan 29 11:36:09.596436 kubelet[2560]: E0129 11:36:09.596424 2560 kuberuntime_sandbox.go:72] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"9c2e1fb00b0ed28c181a6f5615fe212925f0c2f757fdb4ef8583ddfde1de9f84\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-apiserver/calico-apiserver-78d7549f7d-5n5j6" Jan 29 11:36:09.596580 kubelet[2560]: E0129 11:36:09.596449 2560 kuberuntime_manager.go:1168] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"9c2e1fb00b0ed28c181a6f5615fe212925f0c2f757fdb4ef8583ddfde1de9f84\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-apiserver/calico-apiserver-78d7549f7d-5n5j6" Jan 29 11:36:09.596580 kubelet[2560]: E0129 11:36:09.596502 2560 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"calico-apiserver-78d7549f7d-5n5j6_calico-apiserver(0725190d-a48f-4c98-9011-c6cdb64f50fe)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"calico-apiserver-78d7549f7d-5n5j6_calico-apiserver(0725190d-a48f-4c98-9011-c6cdb64f50fe)\\\": rpc error: code = Unknown desc = failed to setup network for sandbox \\\"9c2e1fb00b0ed28c181a6f5615fe212925f0c2f757fdb4ef8583ddfde1de9f84\\\": plugin type=\\\"calico\\\" failed (add): stat /var/lib/calico/nodename: no such file or directory: 
check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="calico-apiserver/calico-apiserver-78d7549f7d-5n5j6" podUID="0725190d-a48f-4c98-9011-c6cdb64f50fe" Jan 29 11:36:10.109223 systemd[1]: run-netns-cni\x2dcf386106\x2def35\x2d33c0\x2de412\x2d2b46786a0f8d.mount: Deactivated successfully. Jan 29 11:36:10.109327 systemd[1]: run-netns-cni\x2df8a4efc5\x2d4e52\x2da38f\x2d2e7a\x2d16c24e550740.mount: Deactivated successfully. Jan 29 11:36:10.416133 kubelet[2560]: I0129 11:36:10.415997 2560 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="e1a02e62fa690b02079fe64e81dbe8322e67817e7a20aa91d7cc841d5e25bba6" Jan 29 11:36:10.416830 containerd[1492]: time="2025-01-29T11:36:10.416802569Z" level=info msg="StopPodSandbox for \"e1a02e62fa690b02079fe64e81dbe8322e67817e7a20aa91d7cc841d5e25bba6\"" Jan 29 11:36:10.419720 containerd[1492]: time="2025-01-29T11:36:10.417020048Z" level=info msg="Ensure that sandbox e1a02e62fa690b02079fe64e81dbe8322e67817e7a20aa91d7cc841d5e25bba6 in task-service has been cleanup successfully" Jan 29 11:36:10.419720 containerd[1492]: time="2025-01-29T11:36:10.417199756Z" level=info msg="TearDown network for sandbox \"e1a02e62fa690b02079fe64e81dbe8322e67817e7a20aa91d7cc841d5e25bba6\" successfully" Jan 29 11:36:10.419720 containerd[1492]: time="2025-01-29T11:36:10.417210727Z" level=info msg="StopPodSandbox for \"e1a02e62fa690b02079fe64e81dbe8322e67817e7a20aa91d7cc841d5e25bba6\" returns successfully" Jan 29 11:36:10.419720 containerd[1492]: time="2025-01-29T11:36:10.418023326Z" level=info msg="StopPodSandbox for \"02f3fae8add70e6d47c8bf8ac88f4239e8c0deb00c48770a4763038b478021c8\"" Jan 29 11:36:10.419720 containerd[1492]: time="2025-01-29T11:36:10.418154473Z" level=info msg="Ensure that sandbox 02f3fae8add70e6d47c8bf8ac88f4239e8c0deb00c48770a4763038b478021c8 in task-service has been cleanup successfully" Jan 29 11:36:10.419720 containerd[1492]: time="2025-01-29T11:36:10.418382361Z" level=info 
msg="StopPodSandbox for \"080cc56d0ab33d56eb6a1d38d66c3a2cdd67647fca21a8c854e3a90bdc763db1\"" Jan 29 11:36:10.419720 containerd[1492]: time="2025-01-29T11:36:10.418443225Z" level=info msg="TearDown network for sandbox \"080cc56d0ab33d56eb6a1d38d66c3a2cdd67647fca21a8c854e3a90bdc763db1\" successfully" Jan 29 11:36:10.419720 containerd[1492]: time="2025-01-29T11:36:10.418451832Z" level=info msg="StopPodSandbox for \"080cc56d0ab33d56eb6a1d38d66c3a2cdd67647fca21a8c854e3a90bdc763db1\" returns successfully" Jan 29 11:36:10.419720 containerd[1492]: time="2025-01-29T11:36:10.418590502Z" level=info msg="TearDown network for sandbox \"02f3fae8add70e6d47c8bf8ac88f4239e8c0deb00c48770a4763038b478021c8\" successfully" Jan 29 11:36:10.419720 containerd[1492]: time="2025-01-29T11:36:10.418660835Z" level=info msg="StopPodSandbox for \"02f3fae8add70e6d47c8bf8ac88f4239e8c0deb00c48770a4763038b478021c8\" returns successfully" Jan 29 11:36:10.419720 containerd[1492]: time="2025-01-29T11:36:10.418814304Z" level=info msg="StopPodSandbox for \"fc2896e11c36986d8fde21f00e5a96b42534c3d205cbbed9dc871d57b74e32b9\"" Jan 29 11:36:10.419720 containerd[1492]: time="2025-01-29T11:36:10.418880629Z" level=info msg="TearDown network for sandbox \"fc2896e11c36986d8fde21f00e5a96b42534c3d205cbbed9dc871d57b74e32b9\" successfully" Jan 29 11:36:10.419720 containerd[1492]: time="2025-01-29T11:36:10.418888854Z" level=info msg="StopPodSandbox for \"fc2896e11c36986d8fde21f00e5a96b42534c3d205cbbed9dc871d57b74e32b9\" returns successfully" Jan 29 11:36:10.419366 systemd[1]: run-netns-cni\x2dd9840e45\x2dab04\x2d5021\x2db82a\x2d0c2d3acdbdc6.mount: Deactivated successfully. 
Jan 29 11:36:10.420459 kubelet[2560]: I0129 11:36:10.417550 2560 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="02f3fae8add70e6d47c8bf8ac88f4239e8c0deb00c48770a4763038b478021c8" Jan 29 11:36:10.420501 containerd[1492]: time="2025-01-29T11:36:10.419939180Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:csi-node-driver-s9vh2,Uid:0ee1d7b9-9e01-4183-97ec-91d9420b2dab,Namespace:calico-system,Attempt:3,}" Jan 29 11:36:10.420501 containerd[1492]: time="2025-01-29T11:36:10.420272447Z" level=info msg="StopPodSandbox for \"dad5edafd75c208e507084ef0dcd7c29e4f3cd0afee0ca5f8f325c8bed2cca8a\"" Jan 29 11:36:10.420501 containerd[1492]: time="2025-01-29T11:36:10.420340175Z" level=info msg="TearDown network for sandbox \"dad5edafd75c208e507084ef0dcd7c29e4f3cd0afee0ca5f8f325c8bed2cca8a\" successfully" Jan 29 11:36:10.420501 containerd[1492]: time="2025-01-29T11:36:10.420349222Z" level=info msg="StopPodSandbox for \"dad5edafd75c208e507084ef0dcd7c29e4f3cd0afee0ca5f8f325c8bed2cca8a\" returns successfully" Jan 29 11:36:10.420826 containerd[1492]: time="2025-01-29T11:36:10.420802864Z" level=info msg="StopPodSandbox for \"7cf0ee6b79c5f0ffbd1df3c4ba1fa347f2ab15775291f793b405df1eb50ddbdc\"" Jan 29 11:36:10.420914 containerd[1492]: time="2025-01-29T11:36:10.420888055Z" level=info msg="TearDown network for sandbox \"7cf0ee6b79c5f0ffbd1df3c4ba1fa347f2ab15775291f793b405df1eb50ddbdc\" successfully" Jan 29 11:36:10.420961 containerd[1492]: time="2025-01-29T11:36:10.420912271Z" level=info msg="StopPodSandbox for \"7cf0ee6b79c5f0ffbd1df3c4ba1fa347f2ab15775291f793b405df1eb50ddbdc\" returns successfully" Jan 29 11:36:10.421292 containerd[1492]: time="2025-01-29T11:36:10.421265725Z" level=info msg="StopPodSandbox for \"b5f2937f631c80a639072f6b6e4b81c67d79bd7b8b00d51fbbb621f6dcf86772\"" Jan 29 11:36:10.421375 containerd[1492]: time="2025-01-29T11:36:10.421333893Z" level=info msg="TearDown network for sandbox 
\"b5f2937f631c80a639072f6b6e4b81c67d79bd7b8b00d51fbbb621f6dcf86772\" successfully" Jan 29 11:36:10.421375 containerd[1492]: time="2025-01-29T11:36:10.421346288Z" level=info msg="StopPodSandbox for \"b5f2937f631c80a639072f6b6e4b81c67d79bd7b8b00d51fbbb621f6dcf86772\" returns successfully" Jan 29 11:36:10.421857 containerd[1492]: time="2025-01-29T11:36:10.421713749Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-kube-controllers-7f5f6fb96-hcxll,Uid:5e835acf-6d91-46a2-a52a-32309f48a3b4,Namespace:calico-system,Attempt:4,}" Jan 29 11:36:10.422058 systemd[1]: run-netns-cni\x2d36c0a3a7\x2db4bf\x2d8b09\x2dea64\x2d1d4037c91fb0.mount: Deactivated successfully. Jan 29 11:36:10.423796 kubelet[2560]: I0129 11:36:10.422559 2560 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="5e37377a6c7e16ee19055fe266012ffe5537a92df9268d751dcb96f4c96359da" Jan 29 11:36:10.424422 containerd[1492]: time="2025-01-29T11:36:10.424378371Z" level=info msg="StopPodSandbox for \"5e37377a6c7e16ee19055fe266012ffe5537a92df9268d751dcb96f4c96359da\"" Jan 29 11:36:10.424811 containerd[1492]: time="2025-01-29T11:36:10.424778013Z" level=info msg="Ensure that sandbox 5e37377a6c7e16ee19055fe266012ffe5537a92df9268d751dcb96f4c96359da in task-service has been cleanup successfully" Jan 29 11:36:10.430528 systemd[1]: run-netns-cni\x2d7104311c\x2d294c\x2ddf8e\x2dfc91\x2d6e4e9311d8f6.mount: Deactivated successfully. 
Jan 29 11:36:10.431814 containerd[1492]: time="2025-01-29T11:36:10.431555093Z" level=info msg="TearDown network for sandbox \"5e37377a6c7e16ee19055fe266012ffe5537a92df9268d751dcb96f4c96359da\" successfully" Jan 29 11:36:10.431814 containerd[1492]: time="2025-01-29T11:36:10.431577444Z" level=info msg="StopPodSandbox for \"5e37377a6c7e16ee19055fe266012ffe5537a92df9268d751dcb96f4c96359da\" returns successfully" Jan 29 11:36:10.441958 containerd[1492]: time="2025-01-29T11:36:10.441911415Z" level=info msg="StopPodSandbox for \"62e6cc0503d21814956a6c95475295a49bf85000e6e767d35029c30730d88e17\"" Jan 29 11:36:10.445112 kubelet[2560]: I0129 11:36:10.445085 2560 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="9c2e1fb00b0ed28c181a6f5615fe212925f0c2f757fdb4ef8583ddfde1de9f84" Jan 29 11:36:10.447534 kubelet[2560]: I0129 11:36:10.447437 2560 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="0aa3e1e1cc1c6bf9e805ca1806db21a802b9119be72b5d45614a63e66a9541a6" Jan 29 11:36:10.449683 kubelet[2560]: I0129 11:36:10.449654 2560 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="287ed895e0fd544523f25485baaa91620d0f599e710587bc7ace1b80134c67ce" Jan 29 11:36:10.454951 containerd[1492]: time="2025-01-29T11:36:10.442045286Z" level=info msg="TearDown network for sandbox \"62e6cc0503d21814956a6c95475295a49bf85000e6e767d35029c30730d88e17\" successfully" Jan 29 11:36:10.454951 containerd[1492]: time="2025-01-29T11:36:10.454942619Z" level=info msg="StopPodSandbox for \"62e6cc0503d21814956a6c95475295a49bf85000e6e767d35029c30730d88e17\" returns successfully" Jan 29 11:36:10.455124 containerd[1492]: time="2025-01-29T11:36:10.445572893Z" level=info msg="StopPodSandbox for \"9c2e1fb00b0ed28c181a6f5615fe212925f0c2f757fdb4ef8583ddfde1de9f84\"" Jan 29 11:36:10.455196 containerd[1492]: time="2025-01-29T11:36:10.455180107Z" level=info msg="Ensure that sandbox 
9c2e1fb00b0ed28c181a6f5615fe212925f0c2f757fdb4ef8583ddfde1de9f84 in task-service has been cleanup successfully" Jan 29 11:36:10.455433 containerd[1492]: time="2025-01-29T11:36:10.455412834Z" level=info msg="StopPodSandbox for \"dd045a0823620908e29eab7f45fc540dbd8b4da70a585fad2b83ddf304ac7fc1\"" Jan 29 11:36:10.455514 containerd[1492]: time="2025-01-29T11:36:10.455483316Z" level=info msg="TearDown network for sandbox \"dd045a0823620908e29eab7f45fc540dbd8b4da70a585fad2b83ddf304ac7fc1\" successfully" Jan 29 11:36:10.455514 containerd[1492]: time="2025-01-29T11:36:10.455513223Z" level=info msg="StopPodSandbox for \"dd045a0823620908e29eab7f45fc540dbd8b4da70a585fad2b83ddf304ac7fc1\" returns successfully" Jan 29 11:36:10.455589 containerd[1492]: time="2025-01-29T11:36:10.447905842Z" level=info msg="StopPodSandbox for \"0aa3e1e1cc1c6bf9e805ca1806db21a802b9119be72b5d45614a63e66a9541a6\"" Jan 29 11:36:10.458614 containerd[1492]: time="2025-01-29T11:36:10.455686589Z" level=info msg="Ensure that sandbox 0aa3e1e1cc1c6bf9e805ca1806db21a802b9119be72b5d45614a63e66a9541a6 in task-service has been cleanup successfully" Jan 29 11:36:10.458614 containerd[1492]: time="2025-01-29T11:36:10.456687012Z" level=info msg="TearDown network for sandbox \"9c2e1fb00b0ed28c181a6f5615fe212925f0c2f757fdb4ef8583ddfde1de9f84\" successfully" Jan 29 11:36:10.458614 containerd[1492]: time="2025-01-29T11:36:10.456700517Z" level=info msg="StopPodSandbox for \"9c2e1fb00b0ed28c181a6f5615fe212925f0c2f757fdb4ef8583ddfde1de9f84\" returns successfully" Jan 29 11:36:10.458614 containerd[1492]: time="2025-01-29T11:36:10.456796617Z" level=info msg="TearDown network for sandbox \"0aa3e1e1cc1c6bf9e805ca1806db21a802b9119be72b5d45614a63e66a9541a6\" successfully" Jan 29 11:36:10.458614 containerd[1492]: time="2025-01-29T11:36:10.456826664Z" level=info msg="StopPodSandbox for \"0aa3e1e1cc1c6bf9e805ca1806db21a802b9119be72b5d45614a63e66a9541a6\" returns successfully" Jan 29 11:36:10.458614 containerd[1492]: 
time="2025-01-29T11:36:10.456850638Z" level=info msg="StopPodSandbox for \"7796f2b8c41cac5338ffa9b402e0614ff1aa9234cde81ec23c1a0b7905f12023\"" Jan 29 11:36:10.458614 containerd[1492]: time="2025-01-29T11:36:10.450412909Z" level=info msg="StopPodSandbox for \"287ed895e0fd544523f25485baaa91620d0f599e710587bc7ace1b80134c67ce\"" Jan 29 11:36:10.458614 containerd[1492]: time="2025-01-29T11:36:10.456936250Z" level=info msg="TearDown network for sandbox \"7796f2b8c41cac5338ffa9b402e0614ff1aa9234cde81ec23c1a0b7905f12023\" successfully" Jan 29 11:36:10.458614 containerd[1492]: time="2025-01-29T11:36:10.456949886Z" level=info msg="StopPodSandbox for \"7796f2b8c41cac5338ffa9b402e0614ff1aa9234cde81ec23c1a0b7905f12023\" returns successfully" Jan 29 11:36:10.458614 containerd[1492]: time="2025-01-29T11:36:10.457197863Z" level=info msg="Ensure that sandbox 287ed895e0fd544523f25485baaa91620d0f599e710587bc7ace1b80134c67ce in task-service has been cleanup successfully" Jan 29 11:36:10.458614 containerd[1492]: time="2025-01-29T11:36:10.457962170Z" level=info msg="StopPodSandbox for \"b0603ed2dd195e5f4326b4b967ccff21260ba0e0d5173691aeb14b6a04acf563\"" Jan 29 11:36:10.458614 containerd[1492]: time="2025-01-29T11:36:10.458049754Z" level=info msg="TearDown network for sandbox \"b0603ed2dd195e5f4326b4b967ccff21260ba0e0d5173691aeb14b6a04acf563\" successfully" Jan 29 11:36:10.458614 containerd[1492]: time="2025-01-29T11:36:10.458058892Z" level=info msg="StopPodSandbox for \"b0603ed2dd195e5f4326b4b967ccff21260ba0e0d5173691aeb14b6a04acf563\" returns successfully" Jan 29 11:36:10.458614 containerd[1492]: time="2025-01-29T11:36:10.458199456Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-6f6b679f8f-qqzzk,Uid:cf2aa93c-f4bc-4322-9163-052200dd877a,Namespace:kube-system,Attempt:4,}" Jan 29 11:36:10.458614 containerd[1492]: time="2025-01-29T11:36:10.458397329Z" level=info msg="StopPodSandbox for \"6b8213b783a7c465b327152c49a96b3af800823e3978928ad141cc9679eef942\"" Jan 29 
11:36:10.458614 containerd[1492]: time="2025-01-29T11:36:10.458465016Z" level=info msg="TearDown network for sandbox \"6b8213b783a7c465b327152c49a96b3af800823e3978928ad141cc9679eef942\" successfully" Jan 29 11:36:10.458614 containerd[1492]: time="2025-01-29T11:36:10.458473702Z" level=info msg="StopPodSandbox for \"6b8213b783a7c465b327152c49a96b3af800823e3978928ad141cc9679eef942\" returns successfully" Jan 29 11:36:10.457768 systemd[1]: run-netns-cni\x2d04e5ac2c\x2dc52c\x2d255a\x2d3a9d\x2db79584f0256f.mount: Deactivated successfully. Jan 29 11:36:10.459115 kubelet[2560]: E0129 11:36:10.457242 2560 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jan 29 11:36:10.457872 systemd[1]: run-netns-cni\x2da69fe188\x2ddf9f\x2da395\x2dc06a\x2db1bb4131e938.mount: Deactivated successfully. Jan 29 11:36:10.459461 containerd[1492]: time="2025-01-29T11:36:10.459323611Z" level=info msg="TearDown network for sandbox \"287ed895e0fd544523f25485baaa91620d0f599e710587bc7ace1b80134c67ce\" successfully" Jan 29 11:36:10.459461 containerd[1492]: time="2025-01-29T11:36:10.459366061Z" level=info msg="StopPodSandbox for \"287ed895e0fd544523f25485baaa91620d0f599e710587bc7ace1b80134c67ce\" returns successfully" Jan 29 11:36:10.459693 containerd[1492]: time="2025-01-29T11:36:10.459504752Z" level=info msg="StopPodSandbox for \"ca71e99b7c48067d7bf825de1146deea722737ac8db1aede627b41776be5c345\"" Jan 29 11:36:10.459693 containerd[1492]: time="2025-01-29T11:36:10.459590013Z" level=info msg="TearDown network for sandbox \"ca71e99b7c48067d7bf825de1146deea722737ac8db1aede627b41776be5c345\" successfully" Jan 29 11:36:10.459693 containerd[1492]: time="2025-01-29T11:36:10.459600162Z" level=info msg="StopPodSandbox for \"ca71e99b7c48067d7bf825de1146deea722737ac8db1aede627b41776be5c345\" returns successfully" Jan 29 11:36:10.459777 containerd[1492]: time="2025-01-29T11:36:10.459694980Z" 
level=info msg="StopPodSandbox for \"f289a1ea20b800c633270b52ec0a7cbce53731f38497fa4630193ed068515360\"" Jan 29 11:36:10.459802 containerd[1492]: time="2025-01-29T11:36:10.459775482Z" level=info msg="TearDown network for sandbox \"f289a1ea20b800c633270b52ec0a7cbce53731f38497fa4630193ed068515360\" successfully" Jan 29 11:36:10.459802 containerd[1492]: time="2025-01-29T11:36:10.459786943Z" level=info msg="StopPodSandbox for \"f289a1ea20b800c633270b52ec0a7cbce53731f38497fa4630193ed068515360\" returns successfully" Jan 29 11:36:10.460153 containerd[1492]: time="2025-01-29T11:36:10.460136811Z" level=info msg="StopPodSandbox for \"e4c786a1f87f123c515ee8f4dae18559dc1b81a1fbae2953ab843c3ffdeb2f38\"" Jan 29 11:36:10.460419 containerd[1492]: time="2025-01-29T11:36:10.460384166Z" level=info msg="TearDown network for sandbox \"e4c786a1f87f123c515ee8f4dae18559dc1b81a1fbae2953ab843c3ffdeb2f38\" successfully" Jan 29 11:36:10.460419 containerd[1492]: time="2025-01-29T11:36:10.460167749Z" level=info msg="StopPodSandbox for \"f81c812ded146071c6d5357fc23680f636d22e342d877bc1c706a89aba49dd93\"" Jan 29 11:36:10.460799 containerd[1492]: time="2025-01-29T11:36:10.460517728Z" level=info msg="TearDown network for sandbox \"f81c812ded146071c6d5357fc23680f636d22e342d877bc1c706a89aba49dd93\" successfully" Jan 29 11:36:10.460799 containerd[1492]: time="2025-01-29T11:36:10.460538507Z" level=info msg="StopPodSandbox for \"f81c812ded146071c6d5357fc23680f636d22e342d877bc1c706a89aba49dd93\" returns successfully" Jan 29 11:36:10.460799 containerd[1492]: time="2025-01-29T11:36:10.460399244Z" level=info msg="StopPodSandbox for \"e4c786a1f87f123c515ee8f4dae18559dc1b81a1fbae2953ab843c3ffdeb2f38\" returns successfully" Jan 29 11:36:10.460799 containerd[1492]: time="2025-01-29T11:36:10.460186454Z" level=info msg="StopPodSandbox for \"05f9cf1c50bc395566b534f6d9d826c90680313b67695139cea702c54a97f9a4\"" Jan 29 11:36:10.460799 containerd[1492]: time="2025-01-29T11:36:10.460777958Z" level=info msg="TearDown 
network for sandbox \"05f9cf1c50bc395566b534f6d9d826c90680313b67695139cea702c54a97f9a4\" successfully" Jan 29 11:36:10.460799 containerd[1492]: time="2025-01-29T11:36:10.460791393Z" level=info msg="StopPodSandbox for \"05f9cf1c50bc395566b534f6d9d826c90680313b67695139cea702c54a97f9a4\" returns successfully" Jan 29 11:36:10.461292 containerd[1492]: time="2025-01-29T11:36:10.461269000Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-apiserver-78d7549f7d-g9z2x,Uid:c4ad5d31-5d68-4473-8b3c-72bfc21e63c5,Namespace:calico-apiserver,Attempt:4,}" Jan 29 11:36:10.461537 containerd[1492]: time="2025-01-29T11:36:10.461519402Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-apiserver-78d7549f7d-5n5j6,Uid:0725190d-a48f-4c98-9011-c6cdb64f50fe,Namespace:calico-apiserver,Attempt:4,}" Jan 29 11:36:10.461746 containerd[1492]: time="2025-01-29T11:36:10.461724347Z" level=info msg="StopPodSandbox for \"78be77b980959cfbde3c7658c4ee4124dfca02a33e1eb74d810982ec9ce9af2d\"" Jan 29 11:36:10.461832 containerd[1492]: time="2025-01-29T11:36:10.461815529Z" level=info msg="TearDown network for sandbox \"78be77b980959cfbde3c7658c4ee4124dfca02a33e1eb74d810982ec9ce9af2d\" successfully" Jan 29 11:36:10.461892 containerd[1492]: time="2025-01-29T11:36:10.461830848Z" level=info msg="StopPodSandbox for \"78be77b980959cfbde3c7658c4ee4124dfca02a33e1eb74d810982ec9ce9af2d\" returns successfully" Jan 29 11:36:10.462239 containerd[1492]: time="2025-01-29T11:36:10.462126655Z" level=info msg="StopPodSandbox for \"fe40cde01f9bc00e0017a79d8d4efb77eed31f6ad079f48d7d46d61b289b6a4b\"" Jan 29 11:36:10.462239 containerd[1492]: time="2025-01-29T11:36:10.462224538Z" level=info msg="TearDown network for sandbox \"fe40cde01f9bc00e0017a79d8d4efb77eed31f6ad079f48d7d46d61b289b6a4b\" successfully" Jan 29 11:36:10.462239 containerd[1492]: time="2025-01-29T11:36:10.462235389Z" level=info msg="StopPodSandbox for \"fe40cde01f9bc00e0017a79d8d4efb77eed31f6ad079f48d7d46d61b289b6a4b\" returns 
successfully" Jan 29 11:36:10.462495 kubelet[2560]: E0129 11:36:10.462474 2560 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jan 29 11:36:10.462788 containerd[1492]: time="2025-01-29T11:36:10.462766458Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-6f6b679f8f-8sz4x,Uid:d32f565b-8d7d-47d2-85bf-68725ec04cff,Namespace:kube-system,Attempt:4,}" Jan 29 11:36:10.532341 containerd[1492]: time="2025-01-29T11:36:10.532209396Z" level=error msg="Failed to destroy network for sandbox \"8825c450d804aebd9a63530be345a53b152b35ff17031f6c5482c2b8008c22fd\"" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jan 29 11:36:10.532872 containerd[1492]: time="2025-01-29T11:36:10.532829573Z" level=error msg="encountered an error cleaning up failed sandbox \"8825c450d804aebd9a63530be345a53b152b35ff17031f6c5482c2b8008c22fd\", marking sandbox state as SANDBOX_UNKNOWN" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jan 29 11:36:10.533030 containerd[1492]: time="2025-01-29T11:36:10.532886009Z" level=error msg="RunPodSandbox for &PodSandboxMetadata{Name:csi-node-driver-s9vh2,Uid:0ee1d7b9-9e01-4183-97ec-91d9420b2dab,Namespace:calico-system,Attempt:3,} failed, error" error="failed to setup network for sandbox \"8825c450d804aebd9a63530be345a53b152b35ff17031f6c5482c2b8008c22fd\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jan 29 11:36:10.533165 kubelet[2560]: E0129 11:36:10.533115 2560 log.go:32] "RunPodSandbox from runtime service failed" err="rpc 
error: code = Unknown desc = failed to setup network for sandbox \"8825c450d804aebd9a63530be345a53b152b35ff17031f6c5482c2b8008c22fd\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jan 29 11:36:10.533223 kubelet[2560]: E0129 11:36:10.533197 2560 kuberuntime_sandbox.go:72] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"8825c450d804aebd9a63530be345a53b152b35ff17031f6c5482c2b8008c22fd\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-system/csi-node-driver-s9vh2" Jan 29 11:36:10.533263 kubelet[2560]: E0129 11:36:10.533228 2560 kuberuntime_manager.go:1168] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"8825c450d804aebd9a63530be345a53b152b35ff17031f6c5482c2b8008c22fd\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-system/csi-node-driver-s9vh2" Jan 29 11:36:10.533317 kubelet[2560]: E0129 11:36:10.533284 2560 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"csi-node-driver-s9vh2_calico-system(0ee1d7b9-9e01-4183-97ec-91d9420b2dab)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"csi-node-driver-s9vh2_calico-system(0ee1d7b9-9e01-4183-97ec-91d9420b2dab)\\\": rpc error: code = Unknown desc = failed to setup network for sandbox \\\"8825c450d804aebd9a63530be345a53b152b35ff17031f6c5482c2b8008c22fd\\\": plugin type=\\\"calico\\\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" 
pod="calico-system/csi-node-driver-s9vh2" podUID="0ee1d7b9-9e01-4183-97ec-91d9420b2dab" Jan 29 11:36:10.573499 containerd[1492]: time="2025-01-29T11:36:10.573431764Z" level=error msg="Failed to destroy network for sandbox \"3a3ba7ab52526116889cf321914b1c4c8be6c1d810f1d28bbb52853d080bfba4\"" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jan 29 11:36:10.577010 containerd[1492]: time="2025-01-29T11:36:10.576888187Z" level=error msg="encountered an error cleaning up failed sandbox \"3a3ba7ab52526116889cf321914b1c4c8be6c1d810f1d28bbb52853d080bfba4\", marking sandbox state as SANDBOX_UNKNOWN" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jan 29 11:36:10.579931 containerd[1492]: time="2025-01-29T11:36:10.579789265Z" level=error msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-kube-controllers-7f5f6fb96-hcxll,Uid:5e835acf-6d91-46a2-a52a-32309f48a3b4,Namespace:calico-system,Attempt:4,} failed, error" error="failed to setup network for sandbox \"3a3ba7ab52526116889cf321914b1c4c8be6c1d810f1d28bbb52853d080bfba4\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jan 29 11:36:10.581323 kubelet[2560]: E0129 11:36:10.581192 2560 log.go:32] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"3a3ba7ab52526116889cf321914b1c4c8be6c1d810f1d28bbb52853d080bfba4\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jan 29 11:36:10.581323 kubelet[2560]: E0129 11:36:10.581256 2560 
kuberuntime_sandbox.go:72] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"3a3ba7ab52526116889cf321914b1c4c8be6c1d810f1d28bbb52853d080bfba4\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-system/calico-kube-controllers-7f5f6fb96-hcxll" Jan 29 11:36:10.581323 kubelet[2560]: E0129 11:36:10.581278 2560 kuberuntime_manager.go:1168] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"3a3ba7ab52526116889cf321914b1c4c8be6c1d810f1d28bbb52853d080bfba4\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-system/calico-kube-controllers-7f5f6fb96-hcxll" Jan 29 11:36:10.583045 kubelet[2560]: E0129 11:36:10.581318 2560 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"calico-kube-controllers-7f5f6fb96-hcxll_calico-system(5e835acf-6d91-46a2-a52a-32309f48a3b4)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"calico-kube-controllers-7f5f6fb96-hcxll_calico-system(5e835acf-6d91-46a2-a52a-32309f48a3b4)\\\": rpc error: code = Unknown desc = failed to setup network for sandbox \\\"3a3ba7ab52526116889cf321914b1c4c8be6c1d810f1d28bbb52853d080bfba4\\\": plugin type=\\\"calico\\\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="calico-system/calico-kube-controllers-7f5f6fb96-hcxll" podUID="5e835acf-6d91-46a2-a52a-32309f48a3b4" Jan 29 11:36:10.655192 containerd[1492]: time="2025-01-29T11:36:10.655057462Z" level=error msg="Failed to destroy network for sandbox 
\"c813bd7f3e0a9030e60075419217b1ddcf7974cf9fe9ec3a62f9cc760570f4ff\"" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jan 29 11:36:10.656394 containerd[1492]: time="2025-01-29T11:36:10.656314146Z" level=error msg="Failed to destroy network for sandbox \"4c7481776cb0257d1f8d06312c485b245875bec0a4c771dfef98cf6f06e42854\"" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jan 29 11:36:10.656687 containerd[1492]: time="2025-01-29T11:36:10.656406669Z" level=error msg="encountered an error cleaning up failed sandbox \"c813bd7f3e0a9030e60075419217b1ddcf7974cf9fe9ec3a62f9cc760570f4ff\", marking sandbox state as SANDBOX_UNKNOWN" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jan 29 11:36:10.656764 containerd[1492]: time="2025-01-29T11:36:10.656722203Z" level=error msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-6f6b679f8f-qqzzk,Uid:cf2aa93c-f4bc-4322-9163-052200dd877a,Namespace:kube-system,Attempt:4,} failed, error" error="failed to setup network for sandbox \"c813bd7f3e0a9030e60075419217b1ddcf7974cf9fe9ec3a62f9cc760570f4ff\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jan 29 11:36:10.657602 kubelet[2560]: E0129 11:36:10.657001 2560 log.go:32] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"c813bd7f3e0a9030e60075419217b1ddcf7974cf9fe9ec3a62f9cc760570f4ff\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check 
that the calico/node container is running and has mounted /var/lib/calico/" Jan 29 11:36:10.657602 kubelet[2560]: E0129 11:36:10.657552 2560 kuberuntime_sandbox.go:72] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"c813bd7f3e0a9030e60075419217b1ddcf7974cf9fe9ec3a62f9cc760570f4ff\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="kube-system/coredns-6f6b679f8f-qqzzk" Jan 29 11:36:10.657602 kubelet[2560]: E0129 11:36:10.657575 2560 kuberuntime_manager.go:1168] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"c813bd7f3e0a9030e60075419217b1ddcf7974cf9fe9ec3a62f9cc760570f4ff\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="kube-system/coredns-6f6b679f8f-qqzzk" Jan 29 11:36:10.657806 containerd[1492]: time="2025-01-29T11:36:10.657573254Z" level=error msg="encountered an error cleaning up failed sandbox \"4c7481776cb0257d1f8d06312c485b245875bec0a4c771dfef98cf6f06e42854\", marking sandbox state as SANDBOX_UNKNOWN" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jan 29 11:36:10.657806 containerd[1492]: time="2025-01-29T11:36:10.657605375Z" level=error msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-6f6b679f8f-8sz4x,Uid:d32f565b-8d7d-47d2-85bf-68725ec04cff,Namespace:kube-system,Attempt:4,} failed, error" error="failed to setup network for sandbox \"4c7481776cb0257d1f8d06312c485b245875bec0a4c771dfef98cf6f06e42854\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node 
container is running and has mounted /var/lib/calico/" Jan 29 11:36:10.657871 kubelet[2560]: E0129 11:36:10.657646 2560 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"coredns-6f6b679f8f-qqzzk_kube-system(cf2aa93c-f4bc-4322-9163-052200dd877a)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"coredns-6f6b679f8f-qqzzk_kube-system(cf2aa93c-f4bc-4322-9163-052200dd877a)\\\": rpc error: code = Unknown desc = failed to setup network for sandbox \\\"c813bd7f3e0a9030e60075419217b1ddcf7974cf9fe9ec3a62f9cc760570f4ff\\\": plugin type=\\\"calico\\\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="kube-system/coredns-6f6b679f8f-qqzzk" podUID="cf2aa93c-f4bc-4322-9163-052200dd877a" Jan 29 11:36:10.658198 kubelet[2560]: E0129 11:36:10.658166 2560 log.go:32] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"4c7481776cb0257d1f8d06312c485b245875bec0a4c771dfef98cf6f06e42854\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jan 29 11:36:10.658369 kubelet[2560]: E0129 11:36:10.658352 2560 kuberuntime_sandbox.go:72] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"4c7481776cb0257d1f8d06312c485b245875bec0a4c771dfef98cf6f06e42854\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="kube-system/coredns-6f6b679f8f-8sz4x" Jan 29 11:36:10.658428 kubelet[2560]: E0129 11:36:10.658415 2560 kuberuntime_manager.go:1168] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox 
\"4c7481776cb0257d1f8d06312c485b245875bec0a4c771dfef98cf6f06e42854\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="kube-system/coredns-6f6b679f8f-8sz4x" Jan 29 11:36:10.658518 kubelet[2560]: E0129 11:36:10.658482 2560 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"coredns-6f6b679f8f-8sz4x_kube-system(d32f565b-8d7d-47d2-85bf-68725ec04cff)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"coredns-6f6b679f8f-8sz4x_kube-system(d32f565b-8d7d-47d2-85bf-68725ec04cff)\\\": rpc error: code = Unknown desc = failed to setup network for sandbox \\\"4c7481776cb0257d1f8d06312c485b245875bec0a4c771dfef98cf6f06e42854\\\": plugin type=\\\"calico\\\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="kube-system/coredns-6f6b679f8f-8sz4x" podUID="d32f565b-8d7d-47d2-85bf-68725ec04cff" Jan 29 11:36:10.659823 containerd[1492]: time="2025-01-29T11:36:10.659783372Z" level=error msg="Failed to destroy network for sandbox \"4467e330ce986042ee6370798d0cfac69c12394e8c48bb378f3d8274118997f5\"" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jan 29 11:36:10.660231 containerd[1492]: time="2025-01-29T11:36:10.660187382Z" level=error msg="encountered an error cleaning up failed sandbox \"4467e330ce986042ee6370798d0cfac69c12394e8c48bb378f3d8274118997f5\", marking sandbox state as SANDBOX_UNKNOWN" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jan 29 11:36:10.660880 containerd[1492]: time="2025-01-29T11:36:10.660815744Z" 
level=error msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-apiserver-78d7549f7d-5n5j6,Uid:0725190d-a48f-4c98-9011-c6cdb64f50fe,Namespace:calico-apiserver,Attempt:4,} failed, error" error="failed to setup network for sandbox \"4467e330ce986042ee6370798d0cfac69c12394e8c48bb378f3d8274118997f5\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jan 29 11:36:10.661138 kubelet[2560]: E0129 11:36:10.661108 2560 log.go:32] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"4467e330ce986042ee6370798d0cfac69c12394e8c48bb378f3d8274118997f5\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jan 29 11:36:10.661508 kubelet[2560]: E0129 11:36:10.661373 2560 kuberuntime_sandbox.go:72] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"4467e330ce986042ee6370798d0cfac69c12394e8c48bb378f3d8274118997f5\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-apiserver/calico-apiserver-78d7549f7d-5n5j6" Jan 29 11:36:10.661508 kubelet[2560]: E0129 11:36:10.661401 2560 kuberuntime_manager.go:1168] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"4467e330ce986042ee6370798d0cfac69c12394e8c48bb378f3d8274118997f5\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-apiserver/calico-apiserver-78d7549f7d-5n5j6" Jan 29 11:36:10.661508 kubelet[2560]: E0129 11:36:10.661458 2560 
pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"calico-apiserver-78d7549f7d-5n5j6_calico-apiserver(0725190d-a48f-4c98-9011-c6cdb64f50fe)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"calico-apiserver-78d7549f7d-5n5j6_calico-apiserver(0725190d-a48f-4c98-9011-c6cdb64f50fe)\\\": rpc error: code = Unknown desc = failed to setup network for sandbox \\\"4467e330ce986042ee6370798d0cfac69c12394e8c48bb378f3d8274118997f5\\\": plugin type=\\\"calico\\\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="calico-apiserver/calico-apiserver-78d7549f7d-5n5j6" podUID="0725190d-a48f-4c98-9011-c6cdb64f50fe" Jan 29 11:36:10.666791 containerd[1492]: time="2025-01-29T11:36:10.666687460Z" level=error msg="Failed to destroy network for sandbox \"9ede5520602dd767ddb315a2dea6c55d80b92b58b2e4a6098587bb67c0211e9a\"" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jan 29 11:36:10.667388 containerd[1492]: time="2025-01-29T11:36:10.667359645Z" level=error msg="encountered an error cleaning up failed sandbox \"9ede5520602dd767ddb315a2dea6c55d80b92b58b2e4a6098587bb67c0211e9a\", marking sandbox state as SANDBOX_UNKNOWN" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jan 29 11:36:10.667810 containerd[1492]: time="2025-01-29T11:36:10.667787168Z" level=error msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-apiserver-78d7549f7d-g9z2x,Uid:c4ad5d31-5d68-4473-8b3c-72bfc21e63c5,Namespace:calico-apiserver,Attempt:4,} failed, error" error="failed to setup network for sandbox \"9ede5520602dd767ddb315a2dea6c55d80b92b58b2e4a6098587bb67c0211e9a\": plugin 
type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jan 29 11:36:10.668156 kubelet[2560]: E0129 11:36:10.668117 2560 log.go:32] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"9ede5520602dd767ddb315a2dea6c55d80b92b58b2e4a6098587bb67c0211e9a\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jan 29 11:36:10.668222 kubelet[2560]: E0129 11:36:10.668172 2560 kuberuntime_sandbox.go:72] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"9ede5520602dd767ddb315a2dea6c55d80b92b58b2e4a6098587bb67c0211e9a\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-apiserver/calico-apiserver-78d7549f7d-g9z2x" Jan 29 11:36:10.668222 kubelet[2560]: E0129 11:36:10.668192 2560 kuberuntime_manager.go:1168] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"9ede5520602dd767ddb315a2dea6c55d80b92b58b2e4a6098587bb67c0211e9a\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-apiserver/calico-apiserver-78d7549f7d-g9z2x" Jan 29 11:36:10.668312 kubelet[2560]: E0129 11:36:10.668232 2560 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"calico-apiserver-78d7549f7d-g9z2x_calico-apiserver(c4ad5d31-5d68-4473-8b3c-72bfc21e63c5)\" with CreatePodSandboxError: \"Failed to create sandbox for pod 
\\\"calico-apiserver-78d7549f7d-g9z2x_calico-apiserver(c4ad5d31-5d68-4473-8b3c-72bfc21e63c5)\\\": rpc error: code = Unknown desc = failed to setup network for sandbox \\\"9ede5520602dd767ddb315a2dea6c55d80b92b58b2e4a6098587bb67c0211e9a\\\": plugin type=\\\"calico\\\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="calico-apiserver/calico-apiserver-78d7549f7d-g9z2x" podUID="c4ad5d31-5d68-4473-8b3c-72bfc21e63c5" Jan 29 11:36:11.112358 systemd[1]: run-netns-cni\x2d919e583c\x2df053\x2dd0a2\x2d4273\x2d528457b0d7dd.mount: Deactivated successfully. Jan 29 11:36:11.658716 kubelet[2560]: I0129 11:36:11.658679 2560 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="4c7481776cb0257d1f8d06312c485b245875bec0a4c771dfef98cf6f06e42854" Jan 29 11:36:11.664205 containerd[1492]: time="2025-01-29T11:36:11.659245277Z" level=info msg="StopPodSandbox for \"4c7481776cb0257d1f8d06312c485b245875bec0a4c771dfef98cf6f06e42854\"" Jan 29 11:36:11.664205 containerd[1492]: time="2025-01-29T11:36:11.659448479Z" level=info msg="Ensure that sandbox 4c7481776cb0257d1f8d06312c485b245875bec0a4c771dfef98cf6f06e42854 in task-service has been cleanup successfully" Jan 29 11:36:11.664205 containerd[1492]: time="2025-01-29T11:36:11.659663835Z" level=info msg="TearDown network for sandbox \"4c7481776cb0257d1f8d06312c485b245875bec0a4c771dfef98cf6f06e42854\" successfully" Jan 29 11:36:11.664205 containerd[1492]: time="2025-01-29T11:36:11.659675266Z" level=info msg="StopPodSandbox for \"4c7481776cb0257d1f8d06312c485b245875bec0a4c771dfef98cf6f06e42854\" returns successfully" Jan 29 11:36:11.664205 containerd[1492]: time="2025-01-29T11:36:11.663094628Z" level=info msg="StopPodSandbox for \"287ed895e0fd544523f25485baaa91620d0f599e710587bc7ace1b80134c67ce\"" Jan 29 11:36:11.664205 containerd[1492]: time="2025-01-29T11:36:11.663208030Z" level=info msg="TearDown network for 
sandbox \"287ed895e0fd544523f25485baaa91620d0f599e710587bc7ace1b80134c67ce\" successfully"
Jan 29 11:36:11.664205 containerd[1492]: time="2025-01-29T11:36:11.663218500Z" level=info msg="StopPodSandbox for \"287ed895e0fd544523f25485baaa91620d0f599e710587bc7ace1b80134c67ce\" returns successfully"
Jan 29 11:36:11.664205 containerd[1492]: time="2025-01-29T11:36:11.663581082Z" level=info msg="StopPodSandbox for \"05f9cf1c50bc395566b534f6d9d826c90680313b67695139cea702c54a97f9a4\""
Jan 29 11:36:11.664205 containerd[1492]: time="2025-01-29T11:36:11.663693193Z" level=info msg="TearDown network for sandbox \"05f9cf1c50bc395566b534f6d9d826c90680313b67695139cea702c54a97f9a4\" successfully"
Jan 29 11:36:11.664205 containerd[1492]: time="2025-01-29T11:36:11.663703072Z" level=info msg="StopPodSandbox for \"05f9cf1c50bc395566b534f6d9d826c90680313b67695139cea702c54a97f9a4\" returns successfully"
Jan 29 11:36:11.664205 containerd[1492]: time="2025-01-29T11:36:11.664147698Z" level=info msg="StopPodSandbox for \"78be77b980959cfbde3c7658c4ee4124dfca02a33e1eb74d810982ec9ce9af2d\""
Jan 29 11:36:11.663113 systemd[1]: run-netns-cni\x2dfb798073\x2df79b\x2dac88\x2d2827\x2d0fff20f2f0e8.mount: Deactivated successfully.
Jan 29 11:36:11.827234 containerd[1492]: time="2025-01-29T11:36:11.664224963Z" level=info msg="TearDown network for sandbox \"78be77b980959cfbde3c7658c4ee4124dfca02a33e1eb74d810982ec9ce9af2d\" successfully"
Jan 29 11:36:11.827234 containerd[1492]: time="2025-01-29T11:36:11.664235153Z" level=info msg="StopPodSandbox for \"78be77b980959cfbde3c7658c4ee4124dfca02a33e1eb74d810982ec9ce9af2d\" returns successfully"
Jan 29 11:36:11.827234 containerd[1492]: time="2025-01-29T11:36:11.664583879Z" level=info msg="StopPodSandbox for \"fe40cde01f9bc00e0017a79d8d4efb77eed31f6ad079f48d7d46d61b289b6a4b\""
Jan 29 11:36:11.827234 containerd[1492]: time="2025-01-29T11:36:11.664699205Z" level=info msg="TearDown network for sandbox \"fe40cde01f9bc00e0017a79d8d4efb77eed31f6ad079f48d7d46d61b289b6a4b\" successfully"
Jan 29 11:36:11.827234 containerd[1492]: time="2025-01-29T11:36:11.664711518Z" level=info msg="StopPodSandbox for \"fe40cde01f9bc00e0017a79d8d4efb77eed31f6ad079f48d7d46d61b289b6a4b\" returns successfully"
Jan 29 11:36:11.827234 containerd[1492]: time="2025-01-29T11:36:11.665200849Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-6f6b679f8f-8sz4x,Uid:d32f565b-8d7d-47d2-85bf-68725ec04cff,Namespace:kube-system,Attempt:5,}"
Jan 29 11:36:11.827234 containerd[1492]: time="2025-01-29T11:36:11.665932014Z" level=info msg="StopPodSandbox for \"c813bd7f3e0a9030e60075419217b1ddcf7974cf9fe9ec3a62f9cc760570f4ff\""
Jan 29 11:36:11.827234 containerd[1492]: time="2025-01-29T11:36:11.666090602Z" level=info msg="Ensure that sandbox c813bd7f3e0a9030e60075419217b1ddcf7974cf9fe9ec3a62f9cc760570f4ff in task-service has been cleanup successfully"
Jan 29 11:36:11.827234 containerd[1492]: time="2025-01-29T11:36:11.669004463Z" level=info msg="TearDown network for sandbox \"c813bd7f3e0a9030e60075419217b1ddcf7974cf9fe9ec3a62f9cc760570f4ff\" successfully"
Jan 29 11:36:11.827234 containerd[1492]: time="2025-01-29T11:36:11.669017617Z" level=info msg="StopPodSandbox for \"c813bd7f3e0a9030e60075419217b1ddcf7974cf9fe9ec3a62f9cc760570f4ff\" returns successfully"
Jan 29 11:36:11.827234 containerd[1492]: time="2025-01-29T11:36:11.669830827Z" level=info msg="StopPodSandbox for \"4467e330ce986042ee6370798d0cfac69c12394e8c48bb378f3d8274118997f5\""
Jan 29 11:36:11.827234 containerd[1492]: time="2025-01-29T11:36:11.670011748Z" level=info msg="Ensure that sandbox 4467e330ce986042ee6370798d0cfac69c12394e8c48bb378f3d8274118997f5 in task-service has been cleanup successfully"
Jan 29 11:36:11.827234 containerd[1492]: time="2025-01-29T11:36:11.670223305Z" level=info msg="TearDown network for sandbox \"4467e330ce986042ee6370798d0cfac69c12394e8c48bb378f3d8274118997f5\" successfully"
Jan 29 11:36:11.827234 containerd[1492]: time="2025-01-29T11:36:11.670234406Z" level=info msg="StopPodSandbox for \"4467e330ce986042ee6370798d0cfac69c12394e8c48bb378f3d8274118997f5\" returns successfully"
Jan 29 11:36:11.827234 containerd[1492]: time="2025-01-29T11:36:11.670279260Z" level=info msg="StopPodSandbox for \"5e37377a6c7e16ee19055fe266012ffe5537a92df9268d751dcb96f4c96359da\""
Jan 29 11:36:11.827234 containerd[1492]: time="2025-01-29T11:36:11.670350465Z" level=info msg="TearDown network for sandbox \"5e37377a6c7e16ee19055fe266012ffe5537a92df9268d751dcb96f4c96359da\" successfully"
Jan 29 11:36:11.827234 containerd[1492]: time="2025-01-29T11:36:11.670358310Z" level=info msg="StopPodSandbox for \"5e37377a6c7e16ee19055fe266012ffe5537a92df9268d751dcb96f4c96359da\" returns successfully"
Jan 29 11:36:11.827234 containerd[1492]: time="2025-01-29T11:36:11.671790383Z" level=info msg="StopPodSandbox for \"62e6cc0503d21814956a6c95475295a49bf85000e6e767d35029c30730d88e17\""
Jan 29 11:36:11.827234 containerd[1492]: time="2025-01-29T11:36:11.671821121Z" level=info msg="StopPodSandbox for \"9c2e1fb00b0ed28c181a6f5615fe212925f0c2f757fdb4ef8583ddfde1de9f84\""
Jan 29 11:36:11.827234 containerd[1492]: time="2025-01-29T11:36:11.671866736Z" level=info msg="TearDown network for sandbox \"62e6cc0503d21814956a6c95475295a49bf85000e6e767d35029c30730d88e17\" successfully"
Jan 29 11:36:11.827234 containerd[1492]: time="2025-01-29T11:36:11.671878198Z" level=info msg="StopPodSandbox for \"62e6cc0503d21814956a6c95475295a49bf85000e6e767d35029c30730d88e17\" returns successfully"
Jan 29 11:36:11.827234 containerd[1492]: time="2025-01-29T11:36:11.671891833Z" level=info msg="TearDown network for sandbox \"9c2e1fb00b0ed28c181a6f5615fe212925f0c2f757fdb4ef8583ddfde1de9f84\" successfully"
Jan 29 11:36:11.827234 containerd[1492]: time="2025-01-29T11:36:11.671901472Z" level=info msg="StopPodSandbox for \"9c2e1fb00b0ed28c181a6f5615fe212925f0c2f757fdb4ef8583ddfde1de9f84\" returns successfully"
Jan 29 11:36:11.827234 containerd[1492]: time="2025-01-29T11:36:11.672160028Z" level=info msg="StopPodSandbox for \"dd045a0823620908e29eab7f45fc540dbd8b4da70a585fad2b83ddf304ac7fc1\""
Jan 29 11:36:11.827234 containerd[1492]: time="2025-01-29T11:36:11.672211345Z" level=info msg="StopPodSandbox for \"6b8213b783a7c465b327152c49a96b3af800823e3978928ad141cc9679eef942\""
Jan 29 11:36:11.827234 containerd[1492]: time="2025-01-29T11:36:11.672301684Z" level=info msg="TearDown network for sandbox \"6b8213b783a7c465b327152c49a96b3af800823e3978928ad141cc9679eef942\" successfully"
Jan 29 11:36:11.827234 containerd[1492]: time="2025-01-29T11:36:11.672314508Z" level=info msg="StopPodSandbox for \"6b8213b783a7c465b327152c49a96b3af800823e3978928ad141cc9679eef942\" returns successfully"
Jan 29 11:36:11.827234 containerd[1492]: time="2025-01-29T11:36:11.672228627Z" level=info msg="TearDown network for sandbox \"dd045a0823620908e29eab7f45fc540dbd8b4da70a585fad2b83ddf304ac7fc1\" successfully"
Jan 29 11:36:11.827234 containerd[1492]: time="2025-01-29T11:36:11.672355486Z" level=info msg="StopPodSandbox for \"dd045a0823620908e29eab7f45fc540dbd8b4da70a585fad2b83ddf304ac7fc1\" returns successfully"
Jan 29 11:36:11.827234 containerd[1492]: time="2025-01-29T11:36:11.672742674Z" level=info msg="StopPodSandbox for \"7796f2b8c41cac5338ffa9b402e0614ff1aa9234cde81ec23c1a0b7905f12023\""
Jan 29 11:36:11.827234 containerd[1492]: time="2025-01-29T11:36:11.672826902Z" level=info msg="TearDown network for sandbox \"7796f2b8c41cac5338ffa9b402e0614ff1aa9234cde81ec23c1a0b7905f12023\" successfully"
Jan 29 11:36:11.827234 containerd[1492]: time="2025-01-29T11:36:11.672870784Z" level=info msg="StopPodSandbox for \"7796f2b8c41cac5338ffa9b402e0614ff1aa9234cde81ec23c1a0b7905f12023\" returns successfully"
Jan 29 11:36:11.827234 containerd[1492]: time="2025-01-29T11:36:11.672972536Z" level=info msg="StopPodSandbox for \"f289a1ea20b800c633270b52ec0a7cbce53731f38497fa4630193ed068515360\""
Jan 29 11:36:11.827234 containerd[1492]: time="2025-01-29T11:36:11.673036707Z" level=info msg="TearDown network for sandbox \"f289a1ea20b800c633270b52ec0a7cbce53731f38497fa4630193ed068515360\" successfully"
Jan 29 11:36:11.827234 containerd[1492]: time="2025-01-29T11:36:11.673045132Z" level=info msg="StopPodSandbox for \"f289a1ea20b800c633270b52ec0a7cbce53731f38497fa4630193ed068515360\" returns successfully"
Jan 29 11:36:11.827234 containerd[1492]: time="2025-01-29T11:36:11.673561353Z" level=info msg="StopPodSandbox for \"e4c786a1f87f123c515ee8f4dae18559dc1b81a1fbae2953ab843c3ffdeb2f38\""
Jan 29 11:36:11.827234 containerd[1492]: time="2025-01-29T11:36:11.673652385Z" level=info msg="TearDown network for sandbox \"e4c786a1f87f123c515ee8f4dae18559dc1b81a1fbae2953ab843c3ffdeb2f38\" successfully"
Jan 29 11:36:11.827234 containerd[1492]: time="2025-01-29T11:36:11.673662294Z" level=info msg="StopPodSandbox for \"e4c786a1f87f123c515ee8f4dae18559dc1b81a1fbae2953ab843c3ffdeb2f38\" returns successfully"
Jan 29 11:36:11.669246 systemd[1]: run-netns-cni\x2da5cdcf91\x2d68a4\x2d5c20\x2d96f9\x2d04e5e3cb8064.mount: Deactivated successfully.
Jan 29 11:36:11.829688 kubelet[2560]: E0129 11:36:11.664918 2560 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Jan 29 11:36:11.829688 kubelet[2560]: I0129 11:36:11.665459 2560 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="c813bd7f3e0a9030e60075419217b1ddcf7974cf9fe9ec3a62f9cc760570f4ff"
Jan 29 11:36:11.829688 kubelet[2560]: I0129 11:36:11.669216 2560 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="4467e330ce986042ee6370798d0cfac69c12394e8c48bb378f3d8274118997f5"
Jan 29 11:36:11.829688 kubelet[2560]: E0129 11:36:11.673252 2560 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Jan 29 11:36:11.829688 kubelet[2560]: I0129 11:36:11.674752 2560 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="3a3ba7ab52526116889cf321914b1c4c8be6c1d810f1d28bbb52853d080bfba4"
Jan 29 11:36:11.829688 kubelet[2560]: I0129 11:36:11.680371 2560 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="8825c450d804aebd9a63530be345a53b152b35ff17031f6c5482c2b8008c22fd"
Jan 29 11:36:11.829688 kubelet[2560]: I0129 11:36:11.686819 2560 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="9ede5520602dd767ddb315a2dea6c55d80b92b58b2e4a6098587bb67c0211e9a"
Jan 29 11:36:11.941790 containerd[1492]: time="2025-01-29T11:36:11.673723288Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-6f6b679f8f-qqzzk,Uid:cf2aa93c-f4bc-4322-9163-052200dd877a,Namespace:kube-system,Attempt:5,}"
Jan 29 11:36:11.941790 containerd[1492]: time="2025-01-29T11:36:11.674199194Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-apiserver-78d7549f7d-5n5j6,Uid:0725190d-a48f-4c98-9011-c6cdb64f50fe,Namespace:calico-apiserver,Attempt:5,}"
Jan 29 11:36:11.941790 containerd[1492]: time="2025-01-29T11:36:11.675252405Z" level=info msg="StopPodSandbox for \"3a3ba7ab52526116889cf321914b1c4c8be6c1d810f1d28bbb52853d080bfba4\""
Jan 29 11:36:11.941790 containerd[1492]: time="2025-01-29T11:36:11.675402536Z" level=info msg="Ensure that sandbox 3a3ba7ab52526116889cf321914b1c4c8be6c1d810f1d28bbb52853d080bfba4 in task-service has been cleanup successfully"
Jan 29 11:36:11.941790 containerd[1492]: time="2025-01-29T11:36:11.675733980Z" level=info msg="TearDown network for sandbox \"3a3ba7ab52526116889cf321914b1c4c8be6c1d810f1d28bbb52853d080bfba4\" successfully"
Jan 29 11:36:11.941790 containerd[1492]: time="2025-01-29T11:36:11.675745471Z" level=info msg="StopPodSandbox for \"3a3ba7ab52526116889cf321914b1c4c8be6c1d810f1d28bbb52853d080bfba4\" returns successfully"
Jan 29 11:36:11.941790 containerd[1492]: time="2025-01-29T11:36:11.678888584Z" level=info msg="StopPodSandbox for \"02f3fae8add70e6d47c8bf8ac88f4239e8c0deb00c48770a4763038b478021c8\""
Jan 29 11:36:11.941790 containerd[1492]: time="2025-01-29T11:36:11.678993240Z" level=info msg="TearDown network for sandbox \"02f3fae8add70e6d47c8bf8ac88f4239e8c0deb00c48770a4763038b478021c8\" successfully"
Jan 29 11:36:11.941790 containerd[1492]: time="2025-01-29T11:36:11.679006976Z" level=info msg="StopPodSandbox for \"02f3fae8add70e6d47c8bf8ac88f4239e8c0deb00c48770a4763038b478021c8\" returns successfully"
Jan 29 11:36:11.941790 containerd[1492]: time="2025-01-29T11:36:11.679317951Z" level=info msg="StopPodSandbox for \"dad5edafd75c208e507084ef0dcd7c29e4f3cd0afee0ca5f8f325c8bed2cca8a\""
Jan 29 11:36:11.941790 containerd[1492]: time="2025-01-29T11:36:11.679434471Z" level=info msg="TearDown network for sandbox \"dad5edafd75c208e507084ef0dcd7c29e4f3cd0afee0ca5f8f325c8bed2cca8a\" successfully"
Jan 29 11:36:11.941790 containerd[1492]: time="2025-01-29T11:36:11.679445682Z" level=info msg="StopPodSandbox for \"dad5edafd75c208e507084ef0dcd7c29e4f3cd0afee0ca5f8f325c8bed2cca8a\" returns successfully"
Jan 29 11:36:11.941790 containerd[1492]: time="2025-01-29T11:36:11.680091706Z" level=info msg="StopPodSandbox for \"7cf0ee6b79c5f0ffbd1df3c4ba1fa347f2ab15775291f793b405df1eb50ddbdc\""
Jan 29 11:36:11.941790 containerd[1492]: time="2025-01-29T11:36:11.680187046Z" level=info msg="TearDown network for sandbox \"7cf0ee6b79c5f0ffbd1df3c4ba1fa347f2ab15775291f793b405df1eb50ddbdc\" successfully"
Jan 29 11:36:11.941790 containerd[1492]: time="2025-01-29T11:36:11.680208356Z" level=info msg="StopPodSandbox for \"7cf0ee6b79c5f0ffbd1df3c4ba1fa347f2ab15775291f793b405df1eb50ddbdc\" returns successfully"
Jan 29 11:36:11.941790 containerd[1492]: time="2025-01-29T11:36:11.680398183Z" level=info msg="StopPodSandbox for \"b5f2937f631c80a639072f6b6e4b81c67d79bd7b8b00d51fbbb621f6dcf86772\""
Jan 29 11:36:11.941790 containerd[1492]: time="2025-01-29T11:36:11.680466741Z" level=info msg="TearDown network for sandbox \"b5f2937f631c80a639072f6b6e4b81c67d79bd7b8b00d51fbbb621f6dcf86772\" successfully"
Jan 29 11:36:11.941790 containerd[1492]: time="2025-01-29T11:36:11.680476139Z" level=info msg="StopPodSandbox for \"b5f2937f631c80a639072f6b6e4b81c67d79bd7b8b00d51fbbb621f6dcf86772\" returns successfully"
Jan 29 11:36:11.941790 containerd[1492]: time="2025-01-29T11:36:11.680758140Z" level=info msg="StopPodSandbox for \"8825c450d804aebd9a63530be345a53b152b35ff17031f6c5482c2b8008c22fd\""
Jan 29 11:36:11.941790 containerd[1492]: time="2025-01-29T11:36:11.680896911Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-kube-controllers-7f5f6fb96-hcxll,Uid:5e835acf-6d91-46a2-a52a-32309f48a3b4,Namespace:calico-system,Attempt:5,}"
Jan 29 11:36:11.941790 containerd[1492]: time="2025-01-29T11:36:11.680935614Z" level=info msg="Ensure that sandbox 8825c450d804aebd9a63530be345a53b152b35ff17031f6c5482c2b8008c22fd in task-service has been cleanup successfully"
Jan 29 11:36:11.941790 containerd[1492]: time="2025-01-29T11:36:11.681188609Z" level=info msg="TearDown network for sandbox \"8825c450d804aebd9a63530be345a53b152b35ff17031f6c5482c2b8008c22fd\" successfully"
Jan 29 11:36:11.941790 containerd[1492]: time="2025-01-29T11:36:11.681204228Z" level=info msg="StopPodSandbox for \"8825c450d804aebd9a63530be345a53b152b35ff17031f6c5482c2b8008c22fd\" returns successfully"
Jan 29 11:36:11.941790 containerd[1492]: time="2025-01-29T11:36:11.681564837Z" level=info msg="StopPodSandbox for \"e1a02e62fa690b02079fe64e81dbe8322e67817e7a20aa91d7cc841d5e25bba6\""
Jan 29 11:36:11.941790 containerd[1492]: time="2025-01-29T11:36:11.681641792Z" level=info msg="TearDown network for sandbox \"e1a02e62fa690b02079fe64e81dbe8322e67817e7a20aa91d7cc841d5e25bba6\" successfully"
Jan 29 11:36:11.941790 containerd[1492]: time="2025-01-29T11:36:11.681676036Z" level=info msg="StopPodSandbox for \"e1a02e62fa690b02079fe64e81dbe8322e67817e7a20aa91d7cc841d5e25bba6\" returns successfully"
Jan 29 11:36:11.941790 containerd[1492]: time="2025-01-29T11:36:11.681948449Z" level=info msg="StopPodSandbox for \"080cc56d0ab33d56eb6a1d38d66c3a2cdd67647fca21a8c854e3a90bdc763db1\""
Jan 29 11:36:11.941790 containerd[1492]: time="2025-01-29T11:36:11.682023310Z" level=info msg="TearDown network for sandbox \"080cc56d0ab33d56eb6a1d38d66c3a2cdd67647fca21a8c854e3a90bdc763db1\" successfully"
Jan 29 11:36:11.941790 containerd[1492]: time="2025-01-29T11:36:11.682032296Z" level=info msg="StopPodSandbox for \"080cc56d0ab33d56eb6a1d38d66c3a2cdd67647fca21a8c854e3a90bdc763db1\" returns successfully"
Jan 29 11:36:11.941790 containerd[1492]: time="2025-01-29T11:36:11.685478067Z" level=info msg="StopPodSandbox for \"fc2896e11c36986d8fde21f00e5a96b42534c3d205cbbed9dc871d57b74e32b9\""
Jan 29 11:36:11.941790 containerd[1492]: time="2025-01-29T11:36:11.685569119Z" level=info msg="TearDown network for sandbox \"fc2896e11c36986d8fde21f00e5a96b42534c3d205cbbed9dc871d57b74e32b9\" successfully"
Jan 29 11:36:11.941790 containerd[1492]: time="2025-01-29T11:36:11.685579018Z" level=info msg="StopPodSandbox for \"fc2896e11c36986d8fde21f00e5a96b42534c3d205cbbed9dc871d57b74e32b9\" returns successfully"
Jan 29 11:36:11.941790 containerd[1492]: time="2025-01-29T11:36:11.686192832Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:csi-node-driver-s9vh2,Uid:0ee1d7b9-9e01-4183-97ec-91d9420b2dab,Namespace:calico-system,Attempt:4,}"
Jan 29 11:36:11.941790 containerd[1492]: time="2025-01-29T11:36:11.687180880Z" level=info msg="StopPodSandbox for \"9ede5520602dd767ddb315a2dea6c55d80b92b58b2e4a6098587bb67c0211e9a\""
Jan 29 11:36:11.941790 containerd[1492]: time="2025-01-29T11:36:11.687351170Z" level=info msg="Ensure that sandbox 9ede5520602dd767ddb315a2dea6c55d80b92b58b2e4a6098587bb67c0211e9a in task-service has been cleanup successfully"
Jan 29 11:36:11.941790 containerd[1492]: time="2025-01-29T11:36:11.687518044Z" level=info msg="TearDown network for sandbox \"9ede5520602dd767ddb315a2dea6c55d80b92b58b2e4a6098587bb67c0211e9a\" successfully"
Jan 29 11:36:11.941790 containerd[1492]: time="2025-01-29T11:36:11.687530688Z" level=info msg="StopPodSandbox for \"9ede5520602dd767ddb315a2dea6c55d80b92b58b2e4a6098587bb67c0211e9a\" returns successfully"
Jan 29 11:36:11.942761 sshd[4403]: Accepted publickey for core from 10.0.0.1 port 50764 ssh2: RSA SHA256:vglZfOE0APgUpJbg1gfFAEfTfpzlM1a6LSSiwuQWd4A
Jan 29 11:36:11.672748 systemd[1]: run-netns-cni\x2d7e0e419d\x2d0300\x2dfb82\x2d47fe\x2d707f2c33a8c8.mount: Deactivated successfully.
Jan 29 11:36:11.850989 sshd-session[4403]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Jan 29 11:36:11.966723 containerd[1492]: time="2025-01-29T11:36:11.687970926Z" level=info msg="StopPodSandbox for \"0aa3e1e1cc1c6bf9e805ca1806db21a802b9119be72b5d45614a63e66a9541a6\""
Jan 29 11:36:11.966723 containerd[1492]: time="2025-01-29T11:36:11.688044835Z" level=info msg="TearDown network for sandbox \"0aa3e1e1cc1c6bf9e805ca1806db21a802b9119be72b5d45614a63e66a9541a6\" successfully"
Jan 29 11:36:11.966723 containerd[1492]: time="2025-01-29T11:36:11.688054152Z" level=info msg="StopPodSandbox for \"0aa3e1e1cc1c6bf9e805ca1806db21a802b9119be72b5d45614a63e66a9541a6\" returns successfully"
Jan 29 11:36:11.966723 containerd[1492]: time="2025-01-29T11:36:11.688705508Z" level=info msg="StopPodSandbox for \"b0603ed2dd195e5f4326b4b967ccff21260ba0e0d5173691aeb14b6a04acf563\""
Jan 29 11:36:11.966723 containerd[1492]: time="2025-01-29T11:36:11.688824321Z" level=info msg="TearDown network for sandbox \"b0603ed2dd195e5f4326b4b967ccff21260ba0e0d5173691aeb14b6a04acf563\" successfully"
Jan 29 11:36:11.966723 containerd[1492]: time="2025-01-29T11:36:11.688840342Z" level=info msg="StopPodSandbox for \"b0603ed2dd195e5f4326b4b967ccff21260ba0e0d5173691aeb14b6a04acf563\" returns successfully"
Jan 29 11:36:11.966723 containerd[1492]: time="2025-01-29T11:36:11.689083990Z" level=info msg="StopPodSandbox for \"ca71e99b7c48067d7bf825de1146deea722737ac8db1aede627b41776be5c345\""
Jan 29 11:36:11.966723 containerd[1492]: time="2025-01-29T11:36:11.689167136Z" level=info msg="TearDown network for sandbox \"ca71e99b7c48067d7bf825de1146deea722737ac8db1aede627b41776be5c345\" successfully"
Jan 29 11:36:11.966723 containerd[1492]: time="2025-01-29T11:36:11.689181883Z" level=info msg="StopPodSandbox for \"ca71e99b7c48067d7bf825de1146deea722737ac8db1aede627b41776be5c345\" returns successfully"
Jan 29 11:36:11.966723 containerd[1492]: time="2025-01-29T11:36:11.689739904Z" level=info msg="StopPodSandbox for \"f81c812ded146071c6d5357fc23680f636d22e342d877bc1c706a89aba49dd93\""
Jan 29 11:36:11.966723 containerd[1492]: time="2025-01-29T11:36:11.689822539Z" level=info msg="TearDown network for sandbox \"f81c812ded146071c6d5357fc23680f636d22e342d877bc1c706a89aba49dd93\" successfully"
Jan 29 11:36:11.966723 containerd[1492]: time="2025-01-29T11:36:11.689839410Z" level=info msg="StopPodSandbox for \"f81c812ded146071c6d5357fc23680f636d22e342d877bc1c706a89aba49dd93\" returns successfully"
Jan 29 11:36:11.966723 containerd[1492]: time="2025-01-29T11:36:11.690350562Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-apiserver-78d7549f7d-g9z2x,Uid:c4ad5d31-5d68-4473-8b3c-72bfc21e63c5,Namespace:calico-apiserver,Attempt:5,}"
Jan 29 11:36:11.678360 systemd[1]: run-netns-cni\x2da0f4b4c6\x2ddcc3\x2de522\x2db84d\x2d2b28e8d2f023.mount: Deactivated successfully.
Jan 29 11:36:11.802339 systemd[1]: Started sshd@8-10.0.0.107:22-10.0.0.1:50764.service - OpenSSH per-connection server daemon (10.0.0.1:50764).
Jan 29 11:36:11.855690 systemd-logind[1475]: New session 9 of user core.
Jan 29 11:36:11.861169 systemd[1]: Started session-9.scope - Session 9 of User core.
Jan 29 11:36:12.023554 sshd[4405]: Connection closed by 10.0.0.1 port 50764
Jan 29 11:36:12.030415 systemd[1]: sshd@8-10.0.0.107:22-10.0.0.1:50764.service: Deactivated successfully.
Jan 29 11:36:12.024520 sshd-session[4403]: pam_unix(sshd:session): session closed for user core
Jan 29 11:36:12.034004 systemd[1]: session-9.scope: Deactivated successfully.
Jan 29 11:36:12.035031 systemd-logind[1475]: Session 9 logged out. Waiting for processes to exit.
Jan 29 11:36:12.036226 systemd-logind[1475]: Removed session 9.
Jan 29 11:36:12.108485 systemd[1]: run-netns-cni\x2d2f9866cd\x2d0289\x2df6ce\x2de853\x2df5cf07e8efe1.mount: Deactivated successfully.
Jan 29 11:36:12.108682 systemd[1]: run-netns-cni\x2d953bd5a4\x2d69e0\x2d5142\x2de46f\x2d7f2c150abc20.mount: Deactivated successfully.
Jan 29 11:36:13.119394 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount594644652.mount: Deactivated successfully.
Jan 29 11:36:13.956318 containerd[1492]: time="2025-01-29T11:36:13.956256066Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/node:v3.29.1\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Jan 29 11:36:13.970927 containerd[1492]: time="2025-01-29T11:36:13.970851719Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/node:v3.29.1: active requests=0, bytes read=142742010"
Jan 29 11:36:13.979935 containerd[1492]: time="2025-01-29T11:36:13.979886597Z" level=info msg="ImageCreate event name:\"sha256:feb26d4585d68e875d9bd9bd6c27ea9f2d5c9ed9ef70f8b8cb0ebb0559a1d664\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Jan 29 11:36:14.027384 containerd[1492]: time="2025-01-29T11:36:14.027331394Z" level=error msg="Failed to destroy network for sandbox \"d4de1c4e4badf8994e3ce5ec4ee74b35d107c1caa3b89b472d0ec5af0549c89e\"" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/"
Jan 29 11:36:14.028166 containerd[1492]: time="2025-01-29T11:36:14.028135616Z" level=error msg="encountered an error cleaning up failed sandbox \"d4de1c4e4badf8994e3ce5ec4ee74b35d107c1caa3b89b472d0ec5af0549c89e\", marking sandbox state as SANDBOX_UNKNOWN" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/"
Jan 29 11:36:14.028295 containerd[1492]: time="2025-01-29T11:36:14.028269537Z" level=error msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-apiserver-78d7549f7d-g9z2x,Uid:c4ad5d31-5d68-4473-8b3c-72bfc21e63c5,Namespace:calico-apiserver,Attempt:5,} failed, error" error="failed to setup network for sandbox \"d4de1c4e4badf8994e3ce5ec4ee74b35d107c1caa3b89b472d0ec5af0549c89e\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/"
Jan 29 11:36:14.028717 kubelet[2560]: E0129 11:36:14.028662 2560 log.go:32] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"d4de1c4e4badf8994e3ce5ec4ee74b35d107c1caa3b89b472d0ec5af0549c89e\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/"
Jan 29 11:36:14.029158 kubelet[2560]: E0129 11:36:14.028731 2560 kuberuntime_sandbox.go:72] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"d4de1c4e4badf8994e3ce5ec4ee74b35d107c1caa3b89b472d0ec5af0549c89e\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-apiserver/calico-apiserver-78d7549f7d-g9z2x"
Jan 29 11:36:14.029158 kubelet[2560]: E0129 11:36:14.028753 2560 kuberuntime_manager.go:1168] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"d4de1c4e4badf8994e3ce5ec4ee74b35d107c1caa3b89b472d0ec5af0549c89e\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-apiserver/calico-apiserver-78d7549f7d-g9z2x"
Jan 29 11:36:14.029158 kubelet[2560]: E0129 11:36:14.028794 2560 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"calico-apiserver-78d7549f7d-g9z2x_calico-apiserver(c4ad5d31-5d68-4473-8b3c-72bfc21e63c5)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"calico-apiserver-78d7549f7d-g9z2x_calico-apiserver(c4ad5d31-5d68-4473-8b3c-72bfc21e63c5)\\\": rpc error: code = Unknown desc = failed to setup network for sandbox \\\"d4de1c4e4badf8994e3ce5ec4ee74b35d107c1caa3b89b472d0ec5af0549c89e\\\": plugin type=\\\"calico\\\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="calico-apiserver/calico-apiserver-78d7549f7d-g9z2x" podUID="c4ad5d31-5d68-4473-8b3c-72bfc21e63c5"
Jan 29 11:36:14.034256 containerd[1492]: time="2025-01-29T11:36:14.034205867Z" level=error msg="Failed to destroy network for sandbox \"c1ad62e2e4cccfb4748651dd83ebb7c29397ff496d8fe77bbfc3dafc4e0ece8e\"" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/"
Jan 29 11:36:14.034608 containerd[1492]: time="2025-01-29T11:36:14.034572146Z" level=error msg="encountered an error cleaning up failed sandbox \"c1ad62e2e4cccfb4748651dd83ebb7c29397ff496d8fe77bbfc3dafc4e0ece8e\", marking sandbox state as SANDBOX_UNKNOWN" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/"
Jan 29 11:36:14.034671 containerd[1492]: time="2025-01-29T11:36:14.034646245Z" level=error msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-6f6b679f8f-qqzzk,Uid:cf2aa93c-f4bc-4322-9163-052200dd877a,Namespace:kube-system,Attempt:5,} failed, error" error="failed to setup network for sandbox \"c1ad62e2e4cccfb4748651dd83ebb7c29397ff496d8fe77bbfc3dafc4e0ece8e\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/"
Jan 29 11:36:14.034874 kubelet[2560]: E0129 11:36:14.034832 2560 log.go:32] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"c1ad62e2e4cccfb4748651dd83ebb7c29397ff496d8fe77bbfc3dafc4e0ece8e\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/"
Jan 29 11:36:14.034935 kubelet[2560]: E0129 11:36:14.034882 2560 kuberuntime_sandbox.go:72] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"c1ad62e2e4cccfb4748651dd83ebb7c29397ff496d8fe77bbfc3dafc4e0ece8e\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="kube-system/coredns-6f6b679f8f-qqzzk"
Jan 29 11:36:14.034935 kubelet[2560]: E0129 11:36:14.034913 2560 kuberuntime_manager.go:1168] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"c1ad62e2e4cccfb4748651dd83ebb7c29397ff496d8fe77bbfc3dafc4e0ece8e\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="kube-system/coredns-6f6b679f8f-qqzzk"
Jan 29 11:36:14.034996 kubelet[2560]: E0129 11:36:14.034951 2560 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"coredns-6f6b679f8f-qqzzk_kube-system(cf2aa93c-f4bc-4322-9163-052200dd877a)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"coredns-6f6b679f8f-qqzzk_kube-system(cf2aa93c-f4bc-4322-9163-052200dd877a)\\\": rpc error: code = Unknown desc = failed to setup network for sandbox \\\"c1ad62e2e4cccfb4748651dd83ebb7c29397ff496d8fe77bbfc3dafc4e0ece8e\\\": plugin type=\\\"calico\\\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="kube-system/coredns-6f6b679f8f-qqzzk" podUID="cf2aa93c-f4bc-4322-9163-052200dd877a"
Jan 29 11:36:14.084405 containerd[1492]: time="2025-01-29T11:36:14.084338621Z" level=error msg="Failed to destroy network for sandbox \"1d8a5c622769c491a41f633c7530baca459b29e444f7e3520cae5792982531a8\"" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/"
Jan 29 11:36:14.084846 containerd[1492]: time="2025-01-29T11:36:14.084800670Z" level=error msg="encountered an error cleaning up failed sandbox \"1d8a5c622769c491a41f633c7530baca459b29e444f7e3520cae5792982531a8\", marking sandbox state as SANDBOX_UNKNOWN" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/"
Jan 29 11:36:14.085003 containerd[1492]: time="2025-01-29T11:36:14.084885430Z" level=error msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-6f6b679f8f-8sz4x,Uid:d32f565b-8d7d-47d2-85bf-68725ec04cff,Namespace:kube-system,Attempt:5,} failed, error" error="failed to setup network for sandbox \"1d8a5c622769c491a41f633c7530baca459b29e444f7e3520cae5792982531a8\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/"
Jan 29 11:36:14.085212 kubelet[2560]: E0129 11:36:14.085160 2560 log.go:32] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"1d8a5c622769c491a41f633c7530baca459b29e444f7e3520cae5792982531a8\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/"
Jan 29 11:36:14.085282 kubelet[2560]: E0129 11:36:14.085223 2560 kuberuntime_sandbox.go:72] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"1d8a5c622769c491a41f633c7530baca459b29e444f7e3520cae5792982531a8\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="kube-system/coredns-6f6b679f8f-8sz4x"
Jan 29 11:36:14.085282 kubelet[2560]: E0129 11:36:14.085241 2560 kuberuntime_manager.go:1168] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"1d8a5c622769c491a41f633c7530baca459b29e444f7e3520cae5792982531a8\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="kube-system/coredns-6f6b679f8f-8sz4x"
Jan 29 11:36:14.085351 kubelet[2560]: E0129 11:36:14.085283 2560 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"coredns-6f6b679f8f-8sz4x_kube-system(d32f565b-8d7d-47d2-85bf-68725ec04cff)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"coredns-6f6b679f8f-8sz4x_kube-system(d32f565b-8d7d-47d2-85bf-68725ec04cff)\\\": rpc error: code = Unknown desc = failed to setup network for sandbox \\\"1d8a5c622769c491a41f633c7530baca459b29e444f7e3520cae5792982531a8\\\": plugin type=\\\"calico\\\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="kube-system/coredns-6f6b679f8f-8sz4x" podUID="d32f565b-8d7d-47d2-85bf-68725ec04cff"
Jan 29 11:36:14.122104 systemd[1]: run-containerd-io.containerd.grpc.v1.cri-sandboxes-1d8a5c622769c491a41f633c7530baca459b29e444f7e3520cae5792982531a8-shm.mount: Deactivated successfully.
Jan 29 11:36:14.122215 systemd[1]: run-containerd-io.containerd.grpc.v1.cri-sandboxes-c1ad62e2e4cccfb4748651dd83ebb7c29397ff496d8fe77bbfc3dafc4e0ece8e-shm.mount: Deactivated successfully.
Jan 29 11:36:14.122296 systemd[1]: run-containerd-io.containerd.grpc.v1.cri-sandboxes-d4de1c4e4badf8994e3ce5ec4ee74b35d107c1caa3b89b472d0ec5af0549c89e-shm.mount: Deactivated successfully.
Jan 29 11:36:14.276518 containerd[1492]: time="2025-01-29T11:36:14.276321268Z" level=error msg="Failed to destroy network for sandbox \"79d39bbdd4f88a8ac890dc57de0b71341e4fe506913f34d3613e4ea9cfabf344\"" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/"
Jan 29 11:36:14.277361 containerd[1492]: time="2025-01-29T11:36:14.276846806Z" level=error msg="encountered an error cleaning up failed sandbox \"79d39bbdd4f88a8ac890dc57de0b71341e4fe506913f34d3613e4ea9cfabf344\", marking sandbox state as SANDBOX_UNKNOWN" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/"
Jan 29 11:36:14.277361 containerd[1492]: time="2025-01-29T11:36:14.276938017Z" level=error msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-kube-controllers-7f5f6fb96-hcxll,Uid:5e835acf-6d91-46a2-a52a-32309f48a3b4,Namespace:calico-system,Attempt:5,} failed, error" error="failed to setup network for sandbox \"79d39bbdd4f88a8ac890dc57de0b71341e4fe506913f34d3613e4ea9cfabf344\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/"
Jan 29 11:36:14.277515 kubelet[2560]: E0129 11:36:14.277205 2560 log.go:32] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"79d39bbdd4f88a8ac890dc57de0b71341e4fe506913f34d3613e4ea9cfabf344\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/"
Jan 29 11:36:14.277515 kubelet[2560]: E0129 11:36:14.277277 2560 kuberuntime_sandbox.go:72] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"79d39bbdd4f88a8ac890dc57de0b71341e4fe506913f34d3613e4ea9cfabf344\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-system/calico-kube-controllers-7f5f6fb96-hcxll"
Jan 29 11:36:14.277515 kubelet[2560]: E0129 11:36:14.277300 2560 kuberuntime_manager.go:1168] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"79d39bbdd4f88a8ac890dc57de0b71341e4fe506913f34d3613e4ea9cfabf344\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-system/calico-kube-controllers-7f5f6fb96-hcxll"
Jan 29 11:36:14.277681 kubelet[2560]: E0129 11:36:14.277346 2560 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"calico-kube-controllers-7f5f6fb96-hcxll_calico-system(5e835acf-6d91-46a2-a52a-32309f48a3b4)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"calico-kube-controllers-7f5f6fb96-hcxll_calico-system(5e835acf-6d91-46a2-a52a-32309f48a3b4)\\\": rpc error: code = Unknown desc = failed to setup network for sandbox \\\"79d39bbdd4f88a8ac890dc57de0b71341e4fe506913f34d3613e4ea9cfabf344\\\": plugin type=\\\"calico\\\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="calico-system/calico-kube-controllers-7f5f6fb96-hcxll" podUID="5e835acf-6d91-46a2-a52a-32309f48a3b4"
Jan 29 11:36:14.280450 systemd[1]: run-containerd-io.containerd.grpc.v1.cri-sandboxes-79d39bbdd4f88a8ac890dc57de0b71341e4fe506913f34d3613e4ea9cfabf344-shm.mount: Deactivated successfully.
Jan 29 11:36:14.429996 containerd[1492]: time="2025-01-29T11:36:14.429926038Z" level=error msg="Failed to destroy network for sandbox \"183bedfa503a9d2160a322c44bf62cdfc0109e87f48648b193fd53fe82d96642\"" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/"
Jan 29 11:36:14.430438 containerd[1492]: time="2025-01-29T11:36:14.430397004Z" level=error msg="encountered an error cleaning up failed sandbox \"183bedfa503a9d2160a322c44bf62cdfc0109e87f48648b193fd53fe82d96642\", marking sandbox state as SANDBOX_UNKNOWN" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/"
Jan 29 11:36:14.430504 containerd[1492]: time="2025-01-29T11:36:14.430470913Z" level=error msg="RunPodSandbox for &PodSandboxMetadata{Name:csi-node-driver-s9vh2,Uid:0ee1d7b9-9e01-4183-97ec-91d9420b2dab,Namespace:calico-system,Attempt:4,} failed, error" error="failed to setup network for sandbox \"183bedfa503a9d2160a322c44bf62cdfc0109e87f48648b193fd53fe82d96642\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/"
Jan 29 11:36:14.430889 kubelet[2560]: E0129 11:36:14.430834 2560 log.go:32] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"183bedfa503a9d2160a322c44bf62cdfc0109e87f48648b193fd53fe82d96642\": plugin type=\"calico\" failed (add): stat
/var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jan 29 11:36:14.430973 kubelet[2560]: E0129 11:36:14.430927 2560 kuberuntime_sandbox.go:72] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"183bedfa503a9d2160a322c44bf62cdfc0109e87f48648b193fd53fe82d96642\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-system/csi-node-driver-s9vh2" Jan 29 11:36:14.430973 kubelet[2560]: E0129 11:36:14.430954 2560 kuberuntime_manager.go:1168] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"183bedfa503a9d2160a322c44bf62cdfc0109e87f48648b193fd53fe82d96642\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-system/csi-node-driver-s9vh2" Jan 29 11:36:14.431057 kubelet[2560]: E0129 11:36:14.431012 2560 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"csi-node-driver-s9vh2_calico-system(0ee1d7b9-9e01-4183-97ec-91d9420b2dab)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"csi-node-driver-s9vh2_calico-system(0ee1d7b9-9e01-4183-97ec-91d9420b2dab)\\\": rpc error: code = Unknown desc = failed to setup network for sandbox \\\"183bedfa503a9d2160a322c44bf62cdfc0109e87f48648b193fd53fe82d96642\\\": plugin type=\\\"calico\\\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="calico-system/csi-node-driver-s9vh2" podUID="0ee1d7b9-9e01-4183-97ec-91d9420b2dab" Jan 29 11:36:14.432323 systemd[1]: 
run-containerd-io.containerd.grpc.v1.cri-sandboxes-183bedfa503a9d2160a322c44bf62cdfc0109e87f48648b193fd53fe82d96642-shm.mount: Deactivated successfully. Jan 29 11:36:14.459111 containerd[1492]: time="2025-01-29T11:36:14.459053275Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/node@sha256:99c3917516efe1f807a0cfdf2d14b628b7c5cc6bd8a9ee5a253154f31756bea1\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jan 29 11:36:14.460132 containerd[1492]: time="2025-01-29T11:36:14.460012969Z" level=info msg="Pulled image \"ghcr.io/flatcar/calico/node:v3.29.1\" with image id \"sha256:feb26d4585d68e875d9bd9bd6c27ea9f2d5c9ed9ef70f8b8cb0ebb0559a1d664\", repo tag \"ghcr.io/flatcar/calico/node:v3.29.1\", repo digest \"ghcr.io/flatcar/calico/node@sha256:99c3917516efe1f807a0cfdf2d14b628b7c5cc6bd8a9ee5a253154f31756bea1\", size \"142741872\" in 8.114773602s" Jan 29 11:36:14.460132 containerd[1492]: time="2025-01-29T11:36:14.460053635Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/node:v3.29.1\" returns image reference \"sha256:feb26d4585d68e875d9bd9bd6c27ea9f2d5c9ed9ef70f8b8cb0ebb0559a1d664\"" Jan 29 11:36:14.473704 containerd[1492]: time="2025-01-29T11:36:14.472955009Z" level=info msg="CreateContainer within sandbox \"1c1ed092f32cbd93dffbdbae963f10cdf8637329ee99fc641d84f4d0568a92d5\" for container &ContainerMetadata{Name:calico-node,Attempt:0,}" Jan 29 11:36:14.510965 containerd[1492]: time="2025-01-29T11:36:14.510883209Z" level=error msg="Failed to destroy network for sandbox \"e52c96ad530fba430b707a5dbe71a37681b523b56d3b4207670eff5fc15bfd6a\"" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jan 29 11:36:14.511612 containerd[1492]: time="2025-01-29T11:36:14.511541056Z" level=error msg="encountered an error cleaning up failed sandbox \"e52c96ad530fba430b707a5dbe71a37681b523b56d3b4207670eff5fc15bfd6a\", marking 
sandbox state as SANDBOX_UNKNOWN" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jan 29 11:36:14.511696 containerd[1492]: time="2025-01-29T11:36:14.511615396Z" level=error msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-apiserver-78d7549f7d-5n5j6,Uid:0725190d-a48f-4c98-9011-c6cdb64f50fe,Namespace:calico-apiserver,Attempt:5,} failed, error" error="failed to setup network for sandbox \"e52c96ad530fba430b707a5dbe71a37681b523b56d3b4207670eff5fc15bfd6a\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jan 29 11:36:14.511988 kubelet[2560]: E0129 11:36:14.511914 2560 log.go:32] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"e52c96ad530fba430b707a5dbe71a37681b523b56d3b4207670eff5fc15bfd6a\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jan 29 11:36:14.512061 kubelet[2560]: E0129 11:36:14.512000 2560 kuberuntime_sandbox.go:72] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"e52c96ad530fba430b707a5dbe71a37681b523b56d3b4207670eff5fc15bfd6a\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-apiserver/calico-apiserver-78d7549f7d-5n5j6" Jan 29 11:36:14.512061 kubelet[2560]: E0129 11:36:14.512022 2560 kuberuntime_manager.go:1168] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"e52c96ad530fba430b707a5dbe71a37681b523b56d3b4207670eff5fc15bfd6a\": 
plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-apiserver/calico-apiserver-78d7549f7d-5n5j6" Jan 29 11:36:14.512138 kubelet[2560]: E0129 11:36:14.512073 2560 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"calico-apiserver-78d7549f7d-5n5j6_calico-apiserver(0725190d-a48f-4c98-9011-c6cdb64f50fe)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"calico-apiserver-78d7549f7d-5n5j6_calico-apiserver(0725190d-a48f-4c98-9011-c6cdb64f50fe)\\\": rpc error: code = Unknown desc = failed to setup network for sandbox \\\"e52c96ad530fba430b707a5dbe71a37681b523b56d3b4207670eff5fc15bfd6a\\\": plugin type=\\\"calico\\\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="calico-apiserver/calico-apiserver-78d7549f7d-5n5j6" podUID="0725190d-a48f-4c98-9011-c6cdb64f50fe" Jan 29 11:36:14.587328 containerd[1492]: time="2025-01-29T11:36:14.586302399Z" level=info msg="CreateContainer within sandbox \"1c1ed092f32cbd93dffbdbae963f10cdf8637329ee99fc641d84f4d0568a92d5\" for &ContainerMetadata{Name:calico-node,Attempt:0,} returns container id \"7a05fe8ca39ccd57e64dd6676e75dfedb931e9479aceedf2d01d9c6c0ee07efc\"" Jan 29 11:36:14.588507 containerd[1492]: time="2025-01-29T11:36:14.588112572Z" level=info msg="StartContainer for \"7a05fe8ca39ccd57e64dd6676e75dfedb931e9479aceedf2d01d9c6c0ee07efc\"" Jan 29 11:36:14.686922 systemd[1]: Started cri-containerd-7a05fe8ca39ccd57e64dd6676e75dfedb931e9479aceedf2d01d9c6c0ee07efc.scope - libcontainer container 7a05fe8ca39ccd57e64dd6676e75dfedb931e9479aceedf2d01d9c6c0ee07efc. 
Jan 29 11:36:14.697173 kubelet[2560]: I0129 11:36:14.697105 2560 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="1d8a5c622769c491a41f633c7530baca459b29e444f7e3520cae5792982531a8" Jan 29 11:36:14.698107 containerd[1492]: time="2025-01-29T11:36:14.698067837Z" level=info msg="StopPodSandbox for \"1d8a5c622769c491a41f633c7530baca459b29e444f7e3520cae5792982531a8\"" Jan 29 11:36:14.698364 containerd[1492]: time="2025-01-29T11:36:14.698325511Z" level=info msg="Ensure that sandbox 1d8a5c622769c491a41f633c7530baca459b29e444f7e3520cae5792982531a8 in task-service has been cleanup successfully" Jan 29 11:36:14.698903 containerd[1492]: time="2025-01-29T11:36:14.698796868Z" level=info msg="TearDown network for sandbox \"1d8a5c622769c491a41f633c7530baca459b29e444f7e3520cae5792982531a8\" successfully" Jan 29 11:36:14.698903 containerd[1492]: time="2025-01-29T11:36:14.698820642Z" level=info msg="StopPodSandbox for \"1d8a5c622769c491a41f633c7530baca459b29e444f7e3520cae5792982531a8\" returns successfully" Jan 29 11:36:14.699332 containerd[1492]: time="2025-01-29T11:36:14.699293661Z" level=info msg="StopPodSandbox for \"4c7481776cb0257d1f8d06312c485b245875bec0a4c771dfef98cf6f06e42854\"" Jan 29 11:36:14.699682 containerd[1492]: time="2025-01-29T11:36:14.699590088Z" level=info msg="TearDown network for sandbox \"4c7481776cb0257d1f8d06312c485b245875bec0a4c771dfef98cf6f06e42854\" successfully" Jan 29 11:36:14.699798 containerd[1492]: time="2025-01-29T11:36:14.699782640Z" level=info msg="StopPodSandbox for \"4c7481776cb0257d1f8d06312c485b245875bec0a4c771dfef98cf6f06e42854\" returns successfully" Jan 29 11:36:14.700696 containerd[1492]: time="2025-01-29T11:36:14.700676230Z" level=info msg="StopPodSandbox for \"287ed895e0fd544523f25485baaa91620d0f599e710587bc7ace1b80134c67ce\"" Jan 29 11:36:14.701208 containerd[1492]: time="2025-01-29T11:36:14.701013805Z" level=info msg="TearDown network for sandbox 
\"287ed895e0fd544523f25485baaa91620d0f599e710587bc7ace1b80134c67ce\" successfully" Jan 29 11:36:14.701208 containerd[1492]: time="2025-01-29T11:36:14.701162895Z" level=info msg="StopPodSandbox for \"287ed895e0fd544523f25485baaa91620d0f599e710587bc7ace1b80134c67ce\" returns successfully" Jan 29 11:36:14.702675 containerd[1492]: time="2025-01-29T11:36:14.702503164Z" level=info msg="StopPodSandbox for \"05f9cf1c50bc395566b534f6d9d826c90680313b67695139cea702c54a97f9a4\"" Jan 29 11:36:14.702675 containerd[1492]: time="2025-01-29T11:36:14.702614093Z" level=info msg="TearDown network for sandbox \"05f9cf1c50bc395566b534f6d9d826c90680313b67695139cea702c54a97f9a4\" successfully" Jan 29 11:36:14.702675 containerd[1492]: time="2025-01-29T11:36:14.702633149Z" level=info msg="StopPodSandbox for \"05f9cf1c50bc395566b534f6d9d826c90680313b67695139cea702c54a97f9a4\" returns successfully" Jan 29 11:36:14.702794 kubelet[2560]: I0129 11:36:14.702756 2560 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="183bedfa503a9d2160a322c44bf62cdfc0109e87f48648b193fd53fe82d96642" Jan 29 11:36:14.703364 containerd[1492]: time="2025-01-29T11:36:14.703312847Z" level=info msg="StopPodSandbox for \"183bedfa503a9d2160a322c44bf62cdfc0109e87f48648b193fd53fe82d96642\"" Jan 29 11:36:14.705150 containerd[1492]: time="2025-01-29T11:36:14.705015969Z" level=info msg="Ensure that sandbox 183bedfa503a9d2160a322c44bf62cdfc0109e87f48648b193fd53fe82d96642 in task-service has been cleanup successfully" Jan 29 11:36:14.706721 containerd[1492]: time="2025-01-29T11:36:14.706683353Z" level=info msg="StopPodSandbox for \"78be77b980959cfbde3c7658c4ee4124dfca02a33e1eb74d810982ec9ce9af2d\"" Jan 29 11:36:14.708195 containerd[1492]: time="2025-01-29T11:36:14.708177261Z" level=info msg="TearDown network for sandbox \"78be77b980959cfbde3c7658c4ee4124dfca02a33e1eb74d810982ec9ce9af2d\" successfully" Jan 29 11:36:14.708268 containerd[1492]: time="2025-01-29T11:36:14.708239508Z" level=info 
msg="StopPodSandbox for \"78be77b980959cfbde3c7658c4ee4124dfca02a33e1eb74d810982ec9ce9af2d\" returns successfully" Jan 29 11:36:14.708467 containerd[1492]: time="2025-01-29T11:36:14.707053609Z" level=info msg="TearDown network for sandbox \"183bedfa503a9d2160a322c44bf62cdfc0109e87f48648b193fd53fe82d96642\" successfully" Jan 29 11:36:14.708467 containerd[1492]: time="2025-01-29T11:36:14.708459572Z" level=info msg="StopPodSandbox for \"183bedfa503a9d2160a322c44bf62cdfc0109e87f48648b193fd53fe82d96642\" returns successfully" Jan 29 11:36:14.710132 containerd[1492]: time="2025-01-29T11:36:14.709643849Z" level=info msg="StopPodSandbox for \"8825c450d804aebd9a63530be345a53b152b35ff17031f6c5482c2b8008c22fd\"" Jan 29 11:36:14.710132 containerd[1492]: time="2025-01-29T11:36:14.709773091Z" level=info msg="TearDown network for sandbox \"8825c450d804aebd9a63530be345a53b152b35ff17031f6c5482c2b8008c22fd\" successfully" Jan 29 11:36:14.710132 containerd[1492]: time="2025-01-29T11:36:14.709790574Z" level=info msg="StopPodSandbox for \"8825c450d804aebd9a63530be345a53b152b35ff17031f6c5482c2b8008c22fd\" returns successfully" Jan 29 11:36:14.710132 containerd[1492]: time="2025-01-29T11:36:14.709851218Z" level=info msg="StopPodSandbox for \"fe40cde01f9bc00e0017a79d8d4efb77eed31f6ad079f48d7d46d61b289b6a4b\"" Jan 29 11:36:14.710132 containerd[1492]: time="2025-01-29T11:36:14.709960503Z" level=info msg="TearDown network for sandbox \"fe40cde01f9bc00e0017a79d8d4efb77eed31f6ad079f48d7d46d61b289b6a4b\" successfully" Jan 29 11:36:14.710132 containerd[1492]: time="2025-01-29T11:36:14.709977555Z" level=info msg="StopPodSandbox for \"fe40cde01f9bc00e0017a79d8d4efb77eed31f6ad079f48d7d46d61b289b6a4b\" returns successfully" Jan 29 11:36:14.710790 kubelet[2560]: E0129 11:36:14.710410 2560 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jan 29 11:36:14.710950 containerd[1492]: 
time="2025-01-29T11:36:14.710919467Z" level=info msg="StopPodSandbox for \"e1a02e62fa690b02079fe64e81dbe8322e67817e7a20aa91d7cc841d5e25bba6\"" Jan 29 11:36:14.711143 containerd[1492]: time="2025-01-29T11:36:14.711027149Z" level=info msg="TearDown network for sandbox \"e1a02e62fa690b02079fe64e81dbe8322e67817e7a20aa91d7cc841d5e25bba6\" successfully" Jan 29 11:36:14.711143 containerd[1492]: time="2025-01-29T11:36:14.711042357Z" level=info msg="StopPodSandbox for \"e1a02e62fa690b02079fe64e81dbe8322e67817e7a20aa91d7cc841d5e25bba6\" returns successfully" Jan 29 11:36:14.711713 containerd[1492]: time="2025-01-29T11:36:14.711275065Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-6f6b679f8f-8sz4x,Uid:d32f565b-8d7d-47d2-85bf-68725ec04cff,Namespace:kube-system,Attempt:6,}" Jan 29 11:36:14.711713 containerd[1492]: time="2025-01-29T11:36:14.711342140Z" level=info msg="StopPodSandbox for \"080cc56d0ab33d56eb6a1d38d66c3a2cdd67647fca21a8c854e3a90bdc763db1\"" Jan 29 11:36:14.711713 containerd[1492]: time="2025-01-29T11:36:14.711435006Z" level=info msg="TearDown network for sandbox \"080cc56d0ab33d56eb6a1d38d66c3a2cdd67647fca21a8c854e3a90bdc763db1\" successfully" Jan 29 11:36:14.711713 containerd[1492]: time="2025-01-29T11:36:14.711449302Z" level=info msg="StopPodSandbox for \"080cc56d0ab33d56eb6a1d38d66c3a2cdd67647fca21a8c854e3a90bdc763db1\" returns successfully" Jan 29 11:36:14.711995 containerd[1492]: time="2025-01-29T11:36:14.711781166Z" level=info msg="StopPodSandbox for \"fc2896e11c36986d8fde21f00e5a96b42534c3d205cbbed9dc871d57b74e32b9\"" Jan 29 11:36:14.711995 containerd[1492]: time="2025-01-29T11:36:14.711868541Z" level=info msg="TearDown network for sandbox \"fc2896e11c36986d8fde21f00e5a96b42534c3d205cbbed9dc871d57b74e32b9\" successfully" Jan 29 11:36:14.711995 containerd[1492]: time="2025-01-29T11:36:14.711882657Z" level=info msg="StopPodSandbox for \"fc2896e11c36986d8fde21f00e5a96b42534c3d205cbbed9dc871d57b74e32b9\" returns successfully" Jan 29 
11:36:14.714073 containerd[1492]: time="2025-01-29T11:36:14.714032860Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:csi-node-driver-s9vh2,Uid:0ee1d7b9-9e01-4183-97ec-91d9420b2dab,Namespace:calico-system,Attempt:5,}" Jan 29 11:36:14.715647 kubelet[2560]: I0129 11:36:14.714292 2560 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="d4de1c4e4badf8994e3ce5ec4ee74b35d107c1caa3b89b472d0ec5af0549c89e" Jan 29 11:36:14.715915 containerd[1492]: time="2025-01-29T11:36:14.715566693Z" level=info msg="StopPodSandbox for \"d4de1c4e4badf8994e3ce5ec4ee74b35d107c1caa3b89b472d0ec5af0549c89e\"" Jan 29 11:36:14.716181 containerd[1492]: time="2025-01-29T11:36:14.716154968Z" level=info msg="Ensure that sandbox d4de1c4e4badf8994e3ce5ec4ee74b35d107c1caa3b89b472d0ec5af0549c89e in task-service has been cleanup successfully" Jan 29 11:36:14.717421 containerd[1492]: time="2025-01-29T11:36:14.717333284Z" level=info msg="TearDown network for sandbox \"d4de1c4e4badf8994e3ce5ec4ee74b35d107c1caa3b89b472d0ec5af0549c89e\" successfully" Jan 29 11:36:14.717479 containerd[1492]: time="2025-01-29T11:36:14.717419556Z" level=info msg="StopPodSandbox for \"d4de1c4e4badf8994e3ce5ec4ee74b35d107c1caa3b89b472d0ec5af0549c89e\" returns successfully" Jan 29 11:36:14.719418 containerd[1492]: time="2025-01-29T11:36:14.719354423Z" level=info msg="StopPodSandbox for \"9ede5520602dd767ddb315a2dea6c55d80b92b58b2e4a6098587bb67c0211e9a\"" Jan 29 11:36:14.719601 containerd[1492]: time="2025-01-29T11:36:14.719565510Z" level=info msg="TearDown network for sandbox \"9ede5520602dd767ddb315a2dea6c55d80b92b58b2e4a6098587bb67c0211e9a\" successfully" Jan 29 11:36:14.719601 containerd[1492]: time="2025-01-29T11:36:14.719597129Z" level=info msg="StopPodSandbox for \"9ede5520602dd767ddb315a2dea6c55d80b92b58b2e4a6098587bb67c0211e9a\" returns successfully" Jan 29 11:36:14.720585 containerd[1492]: time="2025-01-29T11:36:14.720548377Z" level=info msg="StopPodSandbox for 
\"0aa3e1e1cc1c6bf9e805ca1806db21a802b9119be72b5d45614a63e66a9541a6\"" Jan 29 11:36:14.723378 containerd[1492]: time="2025-01-29T11:36:14.723271868Z" level=info msg="TearDown network for sandbox \"0aa3e1e1cc1c6bf9e805ca1806db21a802b9119be72b5d45614a63e66a9541a6\" successfully" Jan 29 11:36:14.723523 kubelet[2560]: I0129 11:36:14.723398 2560 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="79d39bbdd4f88a8ac890dc57de0b71341e4fe506913f34d3613e4ea9cfabf344" Jan 29 11:36:14.723818 containerd[1492]: time="2025-01-29T11:36:14.723794760Z" level=info msg="StopPodSandbox for \"0aa3e1e1cc1c6bf9e805ca1806db21a802b9119be72b5d45614a63e66a9541a6\" returns successfully" Jan 29 11:36:14.724874 containerd[1492]: time="2025-01-29T11:36:14.724835698Z" level=info msg="StopPodSandbox for \"b0603ed2dd195e5f4326b4b967ccff21260ba0e0d5173691aeb14b6a04acf563\"" Jan 29 11:36:14.725068 containerd[1492]: time="2025-01-29T11:36:14.724964279Z" level=info msg="TearDown network for sandbox \"b0603ed2dd195e5f4326b4b967ccff21260ba0e0d5173691aeb14b6a04acf563\" successfully" Jan 29 11:36:14.725068 containerd[1492]: time="2025-01-29T11:36:14.724979177Z" level=info msg="StopPodSandbox for \"b0603ed2dd195e5f4326b4b967ccff21260ba0e0d5173691aeb14b6a04acf563\" returns successfully" Jan 29 11:36:14.725068 containerd[1492]: time="2025-01-29T11:36:14.725035704Z" level=info msg="StopPodSandbox for \"79d39bbdd4f88a8ac890dc57de0b71341e4fe506913f34d3613e4ea9cfabf344\"" Jan 29 11:36:14.725263 containerd[1492]: time="2025-01-29T11:36:14.725231111Z" level=info msg="Ensure that sandbox 79d39bbdd4f88a8ac890dc57de0b71341e4fe506913f34d3613e4ea9cfabf344 in task-service has been cleanup successfully" Jan 29 11:36:14.726930 containerd[1492]: time="2025-01-29T11:36:14.726725540Z" level=info msg="StopPodSandbox for \"ca71e99b7c48067d7bf825de1146deea722737ac8db1aede627b41776be5c345\"" Jan 29 11:36:14.726930 containerd[1492]: time="2025-01-29T11:36:14.726837942Z" level=info msg="TearDown network 
for sandbox \"ca71e99b7c48067d7bf825de1146deea722737ac8db1aede627b41776be5c345\" successfully" Jan 29 11:36:14.726930 containerd[1492]: time="2025-01-29T11:36:14.726853581Z" level=info msg="StopPodSandbox for \"ca71e99b7c48067d7bf825de1146deea722737ac8db1aede627b41776be5c345\" returns successfully" Jan 29 11:36:14.727973 containerd[1492]: time="2025-01-29T11:36:14.727724398Z" level=info msg="StopPodSandbox for \"f81c812ded146071c6d5357fc23680f636d22e342d877bc1c706a89aba49dd93\"" Jan 29 11:36:14.727973 containerd[1492]: time="2025-01-29T11:36:14.727829675Z" level=info msg="TearDown network for sandbox \"f81c812ded146071c6d5357fc23680f636d22e342d877bc1c706a89aba49dd93\" successfully" Jan 29 11:36:14.727973 containerd[1492]: time="2025-01-29T11:36:14.727845836Z" level=info msg="StopPodSandbox for \"f81c812ded146071c6d5357fc23680f636d22e342d877bc1c706a89aba49dd93\" returns successfully" Jan 29 11:36:14.727973 containerd[1492]: time="2025-01-29T11:36:14.727926187Z" level=info msg="TearDown network for sandbox \"79d39bbdd4f88a8ac890dc57de0b71341e4fe506913f34d3613e4ea9cfabf344\" successfully" Jan 29 11:36:14.727973 containerd[1492]: time="2025-01-29T11:36:14.727940755Z" level=info msg="StopPodSandbox for \"79d39bbdd4f88a8ac890dc57de0b71341e4fe506913f34d3613e4ea9cfabf344\" returns successfully" Jan 29 11:36:14.731066 containerd[1492]: time="2025-01-29T11:36:14.731018501Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-apiserver-78d7549f7d-g9z2x,Uid:c4ad5d31-5d68-4473-8b3c-72bfc21e63c5,Namespace:calico-apiserver,Attempt:6,}" Jan 29 11:36:14.731306 containerd[1492]: time="2025-01-29T11:36:14.731282887Z" level=info msg="StopPodSandbox for \"3a3ba7ab52526116889cf321914b1c4c8be6c1d810f1d28bbb52853d080bfba4\"" Jan 29 11:36:14.731426 containerd[1492]: time="2025-01-29T11:36:14.731405418Z" level=info msg="TearDown network for sandbox \"3a3ba7ab52526116889cf321914b1c4c8be6c1d810f1d28bbb52853d080bfba4\" successfully" Jan 29 11:36:14.731454 containerd[1492]: 
time="2025-01-29T11:36:14.731425205Z" level=info msg="StopPodSandbox for \"3a3ba7ab52526116889cf321914b1c4c8be6c1d810f1d28bbb52853d080bfba4\" returns successfully" Jan 29 11:36:14.732926 containerd[1492]: time="2025-01-29T11:36:14.732849262Z" level=info msg="StopPodSandbox for \"02f3fae8add70e6d47c8bf8ac88f4239e8c0deb00c48770a4763038b478021c8\"" Jan 29 11:36:14.733003 containerd[1492]: time="2025-01-29T11:36:14.732977694Z" level=info msg="TearDown network for sandbox \"02f3fae8add70e6d47c8bf8ac88f4239e8c0deb00c48770a4763038b478021c8\" successfully" Jan 29 11:36:14.733028 containerd[1492]: time="2025-01-29T11:36:14.732998352Z" level=info msg="StopPodSandbox for \"02f3fae8add70e6d47c8bf8ac88f4239e8c0deb00c48770a4763038b478021c8\" returns successfully" Jan 29 11:36:14.734492 containerd[1492]: time="2025-01-29T11:36:14.733589775Z" level=info msg="StopPodSandbox for \"dad5edafd75c208e507084ef0dcd7c29e4f3cd0afee0ca5f8f325c8bed2cca8a\"" Jan 29 11:36:14.734492 containerd[1492]: time="2025-01-29T11:36:14.734116084Z" level=info msg="TearDown network for sandbox \"dad5edafd75c208e507084ef0dcd7c29e4f3cd0afee0ca5f8f325c8bed2cca8a\" successfully" Jan 29 11:36:14.734492 containerd[1492]: time="2025-01-29T11:36:14.734133707Z" level=info msg="StopPodSandbox for \"dad5edafd75c208e507084ef0dcd7c29e4f3cd0afee0ca5f8f325c8bed2cca8a\" returns successfully" Jan 29 11:36:14.735685 kubelet[2560]: I0129 11:36:14.735651 2560 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="c1ad62e2e4cccfb4748651dd83ebb7c29397ff496d8fe77bbfc3dafc4e0ece8e" Jan 29 11:36:14.737259 containerd[1492]: time="2025-01-29T11:36:14.737221772Z" level=info msg="StopPodSandbox for \"c1ad62e2e4cccfb4748651dd83ebb7c29397ff496d8fe77bbfc3dafc4e0ece8e\"" Jan 29 11:36:14.737517 containerd[1492]: time="2025-01-29T11:36:14.737472303Z" level=info msg="Ensure that sandbox c1ad62e2e4cccfb4748651dd83ebb7c29397ff496d8fe77bbfc3dafc4e0ece8e in task-service has been cleanup successfully" Jan 29 
11:36:14.737755 containerd[1492]: time="2025-01-29T11:36:14.737731109Z" level=info msg="StopPodSandbox for \"7cf0ee6b79c5f0ffbd1df3c4ba1fa347f2ab15775291f793b405df1eb50ddbdc\"" Jan 29 11:36:14.737842 containerd[1492]: time="2025-01-29T11:36:14.737820748Z" level=info msg="TearDown network for sandbox \"7cf0ee6b79c5f0ffbd1df3c4ba1fa347f2ab15775291f793b405df1eb50ddbdc\" successfully" Jan 29 11:36:14.737874 containerd[1492]: time="2025-01-29T11:36:14.737838922Z" level=info msg="StopPodSandbox for \"7cf0ee6b79c5f0ffbd1df3c4ba1fa347f2ab15775291f793b405df1eb50ddbdc\" returns successfully" Jan 29 11:36:14.738467 containerd[1492]: time="2025-01-29T11:36:14.738367886Z" level=info msg="StopPodSandbox for \"b5f2937f631c80a639072f6b6e4b81c67d79bd7b8b00d51fbbb621f6dcf86772\"" Jan 29 11:36:14.738815 containerd[1492]: time="2025-01-29T11:36:14.738612476Z" level=info msg="TearDown network for sandbox \"b5f2937f631c80a639072f6b6e4b81c67d79bd7b8b00d51fbbb621f6dcf86772\" successfully" Jan 29 11:36:14.738815 containerd[1492]: time="2025-01-29T11:36:14.738686305Z" level=info msg="StopPodSandbox for \"b5f2937f631c80a639072f6b6e4b81c67d79bd7b8b00d51fbbb621f6dcf86772\" returns successfully" Jan 29 11:36:14.739815 containerd[1492]: time="2025-01-29T11:36:14.739771485Z" level=info msg="TearDown network for sandbox \"c1ad62e2e4cccfb4748651dd83ebb7c29397ff496d8fe77bbfc3dafc4e0ece8e\" successfully" Jan 29 11:36:14.739815 containerd[1492]: time="2025-01-29T11:36:14.739796362Z" level=info msg="StopPodSandbox for \"c1ad62e2e4cccfb4748651dd83ebb7c29397ff496d8fe77bbfc3dafc4e0ece8e\" returns successfully" Jan 29 11:36:14.740414 containerd[1492]: time="2025-01-29T11:36:14.740382483Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-kube-controllers-7f5f6fb96-hcxll,Uid:5e835acf-6d91-46a2-a52a-32309f48a3b4,Namespace:calico-system,Attempt:6,}" Jan 29 11:36:14.740854 containerd[1492]: time="2025-01-29T11:36:14.740827520Z" level=info msg="StopPodSandbox for 
\"c813bd7f3e0a9030e60075419217b1ddcf7974cf9fe9ec3a62f9cc760570f4ff\"" Jan 29 11:36:14.740995 containerd[1492]: time="2025-01-29T11:36:14.740938389Z" level=info msg="TearDown network for sandbox \"c813bd7f3e0a9030e60075419217b1ddcf7974cf9fe9ec3a62f9cc760570f4ff\" successfully" Jan 29 11:36:14.740995 containerd[1492]: time="2025-01-29T11:36:14.740957164Z" level=info msg="StopPodSandbox for \"c813bd7f3e0a9030e60075419217b1ddcf7974cf9fe9ec3a62f9cc760570f4ff\" returns successfully" Jan 29 11:36:14.741369 containerd[1492]: time="2025-01-29T11:36:14.741321980Z" level=info msg="StopPodSandbox for \"5e37377a6c7e16ee19055fe266012ffe5537a92df9268d751dcb96f4c96359da\"" Jan 29 11:36:14.741445 containerd[1492]: time="2025-01-29T11:36:14.741414544Z" level=info msg="TearDown network for sandbox \"5e37377a6c7e16ee19055fe266012ffe5537a92df9268d751dcb96f4c96359da\" successfully" Jan 29 11:36:14.741445 containerd[1492]: time="2025-01-29T11:36:14.741428480Z" level=info msg="StopPodSandbox for \"5e37377a6c7e16ee19055fe266012ffe5537a92df9268d751dcb96f4c96359da\" returns successfully" Jan 29 11:36:14.741926 containerd[1492]: time="2025-01-29T11:36:14.741879377Z" level=info msg="StopPodSandbox for \"62e6cc0503d21814956a6c95475295a49bf85000e6e767d35029c30730d88e17\"" Jan 29 11:36:14.742039 containerd[1492]: time="2025-01-29T11:36:14.741976551Z" level=info msg="TearDown network for sandbox \"62e6cc0503d21814956a6c95475295a49bf85000e6e767d35029c30730d88e17\" successfully" Jan 29 11:36:14.742039 containerd[1492]: time="2025-01-29T11:36:14.741999524Z" level=info msg="StopPodSandbox for \"62e6cc0503d21814956a6c95475295a49bf85000e6e767d35029c30730d88e17\" returns successfully" Jan 29 11:36:14.742833 containerd[1492]: time="2025-01-29T11:36:14.742804106Z" level=info msg="StopPodSandbox for \"dd045a0823620908e29eab7f45fc540dbd8b4da70a585fad2b83ddf304ac7fc1\"" Jan 29 11:36:14.742942 containerd[1492]: time="2025-01-29T11:36:14.742919413Z" level=info msg="TearDown network for sandbox 
\"dd045a0823620908e29eab7f45fc540dbd8b4da70a585fad2b83ddf304ac7fc1\" successfully" Jan 29 11:36:14.742942 containerd[1492]: time="2025-01-29T11:36:14.742938268Z" level=info msg="StopPodSandbox for \"dd045a0823620908e29eab7f45fc540dbd8b4da70a585fad2b83ddf304ac7fc1\" returns successfully" Jan 29 11:36:14.743441 containerd[1492]: time="2025-01-29T11:36:14.743402421Z" level=info msg="StopPodSandbox for \"7796f2b8c41cac5338ffa9b402e0614ff1aa9234cde81ec23c1a0b7905f12023\"" Jan 29 11:36:14.743560 containerd[1492]: time="2025-01-29T11:36:14.743498271Z" level=info msg="TearDown network for sandbox \"7796f2b8c41cac5338ffa9b402e0614ff1aa9234cde81ec23c1a0b7905f12023\" successfully" Jan 29 11:36:14.743560 containerd[1492]: time="2025-01-29T11:36:14.743517958Z" level=info msg="StopPodSandbox for \"7796f2b8c41cac5338ffa9b402e0614ff1aa9234cde81ec23c1a0b7905f12023\" returns successfully" Jan 29 11:36:14.743859 kubelet[2560]: E0129 11:36:14.743830 2560 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jan 29 11:36:14.744467 containerd[1492]: time="2025-01-29T11:36:14.744402501Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-6f6b679f8f-qqzzk,Uid:cf2aa93c-f4bc-4322-9163-052200dd877a,Namespace:kube-system,Attempt:6,}" Jan 29 11:36:14.748361 containerd[1492]: time="2025-01-29T11:36:14.748307531Z" level=info msg="StartContainer for \"7a05fe8ca39ccd57e64dd6676e75dfedb931e9479aceedf2d01d9c6c0ee07efc\" returns successfully" Jan 29 11:36:14.748590 kubelet[2560]: I0129 11:36:14.748563 2560 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="e52c96ad530fba430b707a5dbe71a37681b523b56d3b4207670eff5fc15bfd6a" Jan 29 11:36:14.750370 containerd[1492]: time="2025-01-29T11:36:14.750338731Z" level=info msg="StopPodSandbox for \"e52c96ad530fba430b707a5dbe71a37681b523b56d3b4207670eff5fc15bfd6a\"" Jan 29 11:36:14.750645 
containerd[1492]: time="2025-01-29T11:36:14.750596124Z" level=info msg="Ensure that sandbox e52c96ad530fba430b707a5dbe71a37681b523b56d3b4207670eff5fc15bfd6a in task-service has been cleanup successfully" Jan 29 11:36:14.750871 containerd[1492]: time="2025-01-29T11:36:14.750831036Z" level=info msg="TearDown network for sandbox \"e52c96ad530fba430b707a5dbe71a37681b523b56d3b4207670eff5fc15bfd6a\" successfully" Jan 29 11:36:14.750871 containerd[1492]: time="2025-01-29T11:36:14.750855281Z" level=info msg="StopPodSandbox for \"e52c96ad530fba430b707a5dbe71a37681b523b56d3b4207670eff5fc15bfd6a\" returns successfully" Jan 29 11:36:14.751161 containerd[1492]: time="2025-01-29T11:36:14.751131090Z" level=info msg="StopPodSandbox for \"4467e330ce986042ee6370798d0cfac69c12394e8c48bb378f3d8274118997f5\"" Jan 29 11:36:14.751282 containerd[1492]: time="2025-01-29T11:36:14.751262617Z" level=info msg="TearDown network for sandbox \"4467e330ce986042ee6370798d0cfac69c12394e8c48bb378f3d8274118997f5\" successfully" Jan 29 11:36:14.751314 containerd[1492]: time="2025-01-29T11:36:14.751279329Z" level=info msg="StopPodSandbox for \"4467e330ce986042ee6370798d0cfac69c12394e8c48bb378f3d8274118997f5\" returns successfully" Jan 29 11:36:14.751647 containerd[1492]: time="2025-01-29T11:36:14.751603167Z" level=info msg="StopPodSandbox for \"9c2e1fb00b0ed28c181a6f5615fe212925f0c2f757fdb4ef8583ddfde1de9f84\"" Jan 29 11:36:14.751749 containerd[1492]: time="2025-01-29T11:36:14.751728453Z" level=info msg="TearDown network for sandbox \"9c2e1fb00b0ed28c181a6f5615fe212925f0c2f757fdb4ef8583ddfde1de9f84\" successfully" Jan 29 11:36:14.751786 containerd[1492]: time="2025-01-29T11:36:14.751747489Z" level=info msg="StopPodSandbox for \"9c2e1fb00b0ed28c181a6f5615fe212925f0c2f757fdb4ef8583ddfde1de9f84\" returns successfully" Jan 29 11:36:14.752135 containerd[1492]: time="2025-01-29T11:36:14.751990065Z" level=info msg="StopPodSandbox for \"6b8213b783a7c465b327152c49a96b3af800823e3978928ad141cc9679eef942\"" Jan 29 
11:36:14.752135 containerd[1492]: time="2025-01-29T11:36:14.752083521Z" level=info msg="TearDown network for sandbox \"6b8213b783a7c465b327152c49a96b3af800823e3978928ad141cc9679eef942\" successfully" Jan 29 11:36:14.752135 containerd[1492]: time="2025-01-29T11:36:14.752096385Z" level=info msg="StopPodSandbox for \"6b8213b783a7c465b327152c49a96b3af800823e3978928ad141cc9679eef942\" returns successfully" Jan 29 11:36:14.752593 containerd[1492]: time="2025-01-29T11:36:14.752548695Z" level=info msg="StopPodSandbox for \"f289a1ea20b800c633270b52ec0a7cbce53731f38497fa4630193ed068515360\"" Jan 29 11:36:14.752823 containerd[1492]: time="2025-01-29T11:36:14.752797773Z" level=info msg="TearDown network for sandbox \"f289a1ea20b800c633270b52ec0a7cbce53731f38497fa4630193ed068515360\" successfully" Jan 29 11:36:14.752823 containerd[1492]: time="2025-01-29T11:36:14.752817260Z" level=info msg="StopPodSandbox for \"f289a1ea20b800c633270b52ec0a7cbce53731f38497fa4630193ed068515360\" returns successfully" Jan 29 11:36:14.753180 containerd[1492]: time="2025-01-29T11:36:14.753155315Z" level=info msg="StopPodSandbox for \"e4c786a1f87f123c515ee8f4dae18559dc1b81a1fbae2953ab843c3ffdeb2f38\"" Jan 29 11:36:14.753269 containerd[1492]: time="2025-01-29T11:36:14.753249733Z" level=info msg="TearDown network for sandbox \"e4c786a1f87f123c515ee8f4dae18559dc1b81a1fbae2953ab843c3ffdeb2f38\" successfully" Jan 29 11:36:14.753298 containerd[1492]: time="2025-01-29T11:36:14.753266755Z" level=info msg="StopPodSandbox for \"e4c786a1f87f123c515ee8f4dae18559dc1b81a1fbae2953ab843c3ffdeb2f38\" returns successfully" Jan 29 11:36:14.753701 containerd[1492]: time="2025-01-29T11:36:14.753676335Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-apiserver-78d7549f7d-5n5j6,Uid:0725190d-a48f-4c98-9011-c6cdb64f50fe,Namespace:calico-apiserver,Attempt:6,}" Jan 29 11:36:14.825242 kernel: wireguard: WireGuard 1.0.0 loaded. See www.wireguard.com for information. 
Jan 29 11:36:14.825365 kernel: wireguard: Copyright (C) 2015-2019 Jason A. Donenfeld . All Rights Reserved. Jan 29 11:36:15.123123 systemd[1]: run-netns-cni\x2de83b9b2b\x2d02e5\x2db9f7\x2d88ee\x2dd6d707f74117.mount: Deactivated successfully. Jan 29 11:36:15.123705 systemd[1]: run-containerd-io.containerd.grpc.v1.cri-sandboxes-e52c96ad530fba430b707a5dbe71a37681b523b56d3b4207670eff5fc15bfd6a-shm.mount: Deactivated successfully. Jan 29 11:36:15.123914 systemd[1]: run-netns-cni\x2dc8ee4d3d\x2dace0\x2dce5e\x2d2fab\x2d83f65307df8b.mount: Deactivated successfully. Jan 29 11:36:15.124062 systemd[1]: run-netns-cni\x2deee3df48\x2d358b\x2d428f\x2d2744\x2da692b097bd06.mount: Deactivated successfully. Jan 29 11:36:15.124237 systemd[1]: run-netns-cni\x2dc9af75b1\x2daf53\x2d709c\x2d5148\x2d120013a4ac5b.mount: Deactivated successfully. Jan 29 11:36:15.124381 systemd[1]: run-netns-cni\x2d3e2d662a\x2dc82f\x2db94b\x2da76a\x2d934b94a26c0f.mount: Deactivated successfully. Jan 29 11:36:15.124537 systemd[1]: run-netns-cni\x2d60e3548a\x2dfdf9\x2d15f2\x2d074c\x2da69c2ecc94cd.mount: Deactivated successfully. 
Jan 29 11:36:15.511344 systemd-networkd[1414]: cali3e0819ada2b: Link UP Jan 29 11:36:15.512247 systemd-networkd[1414]: cali3e0819ada2b: Gained carrier Jan 29 11:36:15.521896 containerd[1492]: 2025-01-29 11:36:15.204 [INFO][4742] cni-plugin/utils.go 100: File /var/lib/calico/mtu does not exist Jan 29 11:36:15.521896 containerd[1492]: 2025-01-29 11:36:15.226 [INFO][4742] cni-plugin/plugin.go 325: Calico CNI found existing endpoint: &{{WorkloadEndpoint projectcalico.org/v3} {localhost-k8s-calico--apiserver--78d7549f7d--5n5j6-eth0 calico-apiserver-78d7549f7d- calico-apiserver 0725190d-a48f-4c98-9011-c6cdb64f50fe 772 0 2025-01-29 11:35:52 +0000 UTC map[apiserver:true app.kubernetes.io/name:calico-apiserver k8s-app:calico-apiserver pod-template-hash:78d7549f7d projectcalico.org/namespace:calico-apiserver projectcalico.org/orchestrator:k8s projectcalico.org/serviceaccount:calico-apiserver] map[] [] [] []} {k8s localhost calico-apiserver-78d7549f7d-5n5j6 eth0 calico-apiserver [] [] [kns.calico-apiserver ksa.calico-apiserver.calico-apiserver] cali3e0819ada2b [] []}} ContainerID="c1a9817c1635cef0eb688e5c53d2f784687ef0541fe6a4a858cfb18ac00a2ea0" Namespace="calico-apiserver" Pod="calico-apiserver-78d7549f7d-5n5j6" WorkloadEndpoint="localhost-k8s-calico--apiserver--78d7549f7d--5n5j6-" Jan 29 11:36:15.521896 containerd[1492]: 2025-01-29 11:36:15.227 [INFO][4742] cni-plugin/k8s.go 77: Extracted identifiers for CmdAddK8s ContainerID="c1a9817c1635cef0eb688e5c53d2f784687ef0541fe6a4a858cfb18ac00a2ea0" Namespace="calico-apiserver" Pod="calico-apiserver-78d7549f7d-5n5j6" WorkloadEndpoint="localhost-k8s-calico--apiserver--78d7549f7d--5n5j6-eth0" Jan 29 11:36:15.521896 containerd[1492]: 2025-01-29 11:36:15.373 [INFO][4778] ipam/ipam_plugin.go 225: Calico CNI IPAM request count IPv4=1 IPv6=0 ContainerID="c1a9817c1635cef0eb688e5c53d2f784687ef0541fe6a4a858cfb18ac00a2ea0" HandleID="k8s-pod-network.c1a9817c1635cef0eb688e5c53d2f784687ef0541fe6a4a858cfb18ac00a2ea0" 
Workload="localhost-k8s-calico--apiserver--78d7549f7d--5n5j6-eth0" Jan 29 11:36:15.521896 containerd[1492]: 2025-01-29 11:36:15.477 [INFO][4778] ipam/ipam_plugin.go 265: Auto assigning IP ContainerID="c1a9817c1635cef0eb688e5c53d2f784687ef0541fe6a4a858cfb18ac00a2ea0" HandleID="k8s-pod-network.c1a9817c1635cef0eb688e5c53d2f784687ef0541fe6a4a858cfb18ac00a2ea0" Workload="localhost-k8s-calico--apiserver--78d7549f7d--5n5j6-eth0" assignArgs=ipam.AutoAssignArgs{Num4:1, Num6:0, HandleID:(*string)(0xc00046e070), Attrs:map[string]string{"namespace":"calico-apiserver", "node":"localhost", "pod":"calico-apiserver-78d7549f7d-5n5j6", "timestamp":"2025-01-29 11:36:15.373060034 +0000 UTC"}, Hostname:"localhost", IPv4Pools:[]net.IPNet{}, IPv6Pools:[]net.IPNet{}, MaxBlocksPerHost:0, HostReservedAttrIPv4s:(*ipam.HostReservedAttr)(nil), HostReservedAttrIPv6s:(*ipam.HostReservedAttr)(nil), IntendedUse:"Workload"} Jan 29 11:36:15.521896 containerd[1492]: 2025-01-29 11:36:15.477 [INFO][4778] ipam/ipam_plugin.go 353: About to acquire host-wide IPAM lock. Jan 29 11:36:15.521896 containerd[1492]: 2025-01-29 11:36:15.478 [INFO][4778] ipam/ipam_plugin.go 368: Acquired host-wide IPAM lock. 
Jan 29 11:36:15.521896 containerd[1492]: 2025-01-29 11:36:15.478 [INFO][4778] ipam/ipam.go 107: Auto-assign 1 ipv4, 0 ipv6 addrs for host 'localhost' Jan 29 11:36:15.521896 containerd[1492]: 2025-01-29 11:36:15.480 [INFO][4778] ipam/ipam.go 660: Looking up existing affinities for host handle="k8s-pod-network.c1a9817c1635cef0eb688e5c53d2f784687ef0541fe6a4a858cfb18ac00a2ea0" host="localhost" Jan 29 11:36:15.521896 containerd[1492]: 2025-01-29 11:36:15.484 [INFO][4778] ipam/ipam.go 372: Looking up existing affinities for host host="localhost" Jan 29 11:36:15.521896 containerd[1492]: 2025-01-29 11:36:15.488 [INFO][4778] ipam/ipam.go 489: Trying affinity for 192.168.88.128/26 host="localhost" Jan 29 11:36:15.521896 containerd[1492]: 2025-01-29 11:36:15.489 [INFO][4778] ipam/ipam.go 155: Attempting to load block cidr=192.168.88.128/26 host="localhost" Jan 29 11:36:15.521896 containerd[1492]: 2025-01-29 11:36:15.491 [INFO][4778] ipam/ipam.go 232: Affinity is confirmed and block has been loaded cidr=192.168.88.128/26 host="localhost" Jan 29 11:36:15.521896 containerd[1492]: 2025-01-29 11:36:15.491 [INFO][4778] ipam/ipam.go 1180: Attempting to assign 1 addresses from block block=192.168.88.128/26 handle="k8s-pod-network.c1a9817c1635cef0eb688e5c53d2f784687ef0541fe6a4a858cfb18ac00a2ea0" host="localhost" Jan 29 11:36:15.521896 containerd[1492]: 2025-01-29 11:36:15.492 [INFO][4778] ipam/ipam.go 1685: Creating new handle: k8s-pod-network.c1a9817c1635cef0eb688e5c53d2f784687ef0541fe6a4a858cfb18ac00a2ea0 Jan 29 11:36:15.521896 containerd[1492]: 2025-01-29 11:36:15.496 [INFO][4778] ipam/ipam.go 1203: Writing block in order to claim IPs block=192.168.88.128/26 handle="k8s-pod-network.c1a9817c1635cef0eb688e5c53d2f784687ef0541fe6a4a858cfb18ac00a2ea0" host="localhost" Jan 29 11:36:15.521896 containerd[1492]: 2025-01-29 11:36:15.500 [INFO][4778] ipam/ipam.go 1216: Successfully claimed IPs: [192.168.88.129/26] block=192.168.88.128/26 
handle="k8s-pod-network.c1a9817c1635cef0eb688e5c53d2f784687ef0541fe6a4a858cfb18ac00a2ea0" host="localhost" Jan 29 11:36:15.521896 containerd[1492]: 2025-01-29 11:36:15.501 [INFO][4778] ipam/ipam.go 847: Auto-assigned 1 out of 1 IPv4s: [192.168.88.129/26] handle="k8s-pod-network.c1a9817c1635cef0eb688e5c53d2f784687ef0541fe6a4a858cfb18ac00a2ea0" host="localhost" Jan 29 11:36:15.521896 containerd[1492]: 2025-01-29 11:36:15.501 [INFO][4778] ipam/ipam_plugin.go 374: Released host-wide IPAM lock. Jan 29 11:36:15.521896 containerd[1492]: 2025-01-29 11:36:15.501 [INFO][4778] ipam/ipam_plugin.go 283: Calico CNI IPAM assigned addresses IPv4=[192.168.88.129/26] IPv6=[] ContainerID="c1a9817c1635cef0eb688e5c53d2f784687ef0541fe6a4a858cfb18ac00a2ea0" HandleID="k8s-pod-network.c1a9817c1635cef0eb688e5c53d2f784687ef0541fe6a4a858cfb18ac00a2ea0" Workload="localhost-k8s-calico--apiserver--78d7549f7d--5n5j6-eth0" Jan 29 11:36:15.522836 containerd[1492]: 2025-01-29 11:36:15.503 [INFO][4742] cni-plugin/k8s.go 386: Populated endpoint ContainerID="c1a9817c1635cef0eb688e5c53d2f784687ef0541fe6a4a858cfb18ac00a2ea0" Namespace="calico-apiserver" Pod="calico-apiserver-78d7549f7d-5n5j6" WorkloadEndpoint="localhost-k8s-calico--apiserver--78d7549f7d--5n5j6-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"localhost-k8s-calico--apiserver--78d7549f7d--5n5j6-eth0", GenerateName:"calico-apiserver-78d7549f7d-", Namespace:"calico-apiserver", SelfLink:"", UID:"0725190d-a48f-4c98-9011-c6cdb64f50fe", ResourceVersion:"772", Generation:0, CreationTimestamp:time.Date(2025, time.January, 29, 11, 35, 52, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"apiserver":"true", "app.kubernetes.io/name":"calico-apiserver", "k8s-app":"calico-apiserver", "pod-template-hash":"78d7549f7d", "projectcalico.org/namespace":"calico-apiserver", 
"projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"calico-apiserver"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"localhost", ContainerID:"", Pod:"calico-apiserver-78d7549f7d-5n5j6", Endpoint:"eth0", ServiceAccountName:"calico-apiserver", IPNetworks:[]string{"192.168.88.129/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-apiserver", "ksa.calico-apiserver.calico-apiserver"}, InterfaceName:"cali3e0819ada2b", MAC:"", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil)}} Jan 29 11:36:15.522836 containerd[1492]: 2025-01-29 11:36:15.504 [INFO][4742] cni-plugin/k8s.go 387: Calico CNI using IPs: [192.168.88.129/32] ContainerID="c1a9817c1635cef0eb688e5c53d2f784687ef0541fe6a4a858cfb18ac00a2ea0" Namespace="calico-apiserver" Pod="calico-apiserver-78d7549f7d-5n5j6" WorkloadEndpoint="localhost-k8s-calico--apiserver--78d7549f7d--5n5j6-eth0" Jan 29 11:36:15.522836 containerd[1492]: 2025-01-29 11:36:15.504 [INFO][4742] cni-plugin/dataplane_linux.go 69: Setting the host side veth name to cali3e0819ada2b ContainerID="c1a9817c1635cef0eb688e5c53d2f784687ef0541fe6a4a858cfb18ac00a2ea0" Namespace="calico-apiserver" Pod="calico-apiserver-78d7549f7d-5n5j6" WorkloadEndpoint="localhost-k8s-calico--apiserver--78d7549f7d--5n5j6-eth0" Jan 29 11:36:15.522836 containerd[1492]: 2025-01-29 11:36:15.511 [INFO][4742] cni-plugin/dataplane_linux.go 508: Disabling IPv4 forwarding ContainerID="c1a9817c1635cef0eb688e5c53d2f784687ef0541fe6a4a858cfb18ac00a2ea0" Namespace="calico-apiserver" Pod="calico-apiserver-78d7549f7d-5n5j6" WorkloadEndpoint="localhost-k8s-calico--apiserver--78d7549f7d--5n5j6-eth0" Jan 29 11:36:15.522836 containerd[1492]: 2025-01-29 11:36:15.511 [INFO][4742] cni-plugin/k8s.go 414: Added Mac, interface name, and active 
container ID to endpoint ContainerID="c1a9817c1635cef0eb688e5c53d2f784687ef0541fe6a4a858cfb18ac00a2ea0" Namespace="calico-apiserver" Pod="calico-apiserver-78d7549f7d-5n5j6" WorkloadEndpoint="localhost-k8s-calico--apiserver--78d7549f7d--5n5j6-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"localhost-k8s-calico--apiserver--78d7549f7d--5n5j6-eth0", GenerateName:"calico-apiserver-78d7549f7d-", Namespace:"calico-apiserver", SelfLink:"", UID:"0725190d-a48f-4c98-9011-c6cdb64f50fe", ResourceVersion:"772", Generation:0, CreationTimestamp:time.Date(2025, time.January, 29, 11, 35, 52, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"apiserver":"true", "app.kubernetes.io/name":"calico-apiserver", "k8s-app":"calico-apiserver", "pod-template-hash":"78d7549f7d", "projectcalico.org/namespace":"calico-apiserver", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"calico-apiserver"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"localhost", ContainerID:"c1a9817c1635cef0eb688e5c53d2f784687ef0541fe6a4a858cfb18ac00a2ea0", Pod:"calico-apiserver-78d7549f7d-5n5j6", Endpoint:"eth0", ServiceAccountName:"calico-apiserver", IPNetworks:[]string{"192.168.88.129/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-apiserver", "ksa.calico-apiserver.calico-apiserver"}, InterfaceName:"cali3e0819ada2b", MAC:"46:77:b8:5c:b9:7d", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil)}} Jan 29 11:36:15.522836 containerd[1492]: 2025-01-29 11:36:15.518 [INFO][4742] cni-plugin/k8s.go 500: Wrote updated endpoint to datastore 
ContainerID="c1a9817c1635cef0eb688e5c53d2f784687ef0541fe6a4a858cfb18ac00a2ea0" Namespace="calico-apiserver" Pod="calico-apiserver-78d7549f7d-5n5j6" WorkloadEndpoint="localhost-k8s-calico--apiserver--78d7549f7d--5n5j6-eth0" Jan 29 11:36:15.567167 containerd[1492]: time="2025-01-29T11:36:15.567006798Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Jan 29 11:36:15.567167 containerd[1492]: time="2025-01-29T11:36:15.567111995Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Jan 29 11:36:15.567953 containerd[1492]: time="2025-01-29T11:36:15.567904315Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Jan 29 11:36:15.568074 containerd[1492]: time="2025-01-29T11:36:15.568026845Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Jan 29 11:36:15.586835 systemd[1]: Started cri-containerd-c1a9817c1635cef0eb688e5c53d2f784687ef0541fe6a4a858cfb18ac00a2ea0.scope - libcontainer container c1a9817c1635cef0eb688e5c53d2f784687ef0541fe6a4a858cfb18ac00a2ea0. 
Jan 29 11:36:15.601399 systemd-resolved[1330]: Failed to determine the local hostname and LLMNR/mDNS names, ignoring: No such device or address Jan 29 11:36:15.610059 systemd-networkd[1414]: cali79f05abc360: Link UP Jan 29 11:36:15.610786 systemd-networkd[1414]: cali79f05abc360: Gained carrier Jan 29 11:36:15.624468 containerd[1492]: 2025-01-29 11:36:15.264 [INFO][4717] cni-plugin/utils.go 100: File /var/lib/calico/mtu does not exist Jan 29 11:36:15.624468 containerd[1492]: 2025-01-29 11:36:15.283 [INFO][4717] cni-plugin/plugin.go 325: Calico CNI found existing endpoint: &{{WorkloadEndpoint projectcalico.org/v3} {localhost-k8s-calico--apiserver--78d7549f7d--g9z2x-eth0 calico-apiserver-78d7549f7d- calico-apiserver c4ad5d31-5d68-4473-8b3c-72bfc21e63c5 769 0 2025-01-29 11:35:52 +0000 UTC map[apiserver:true app.kubernetes.io/name:calico-apiserver k8s-app:calico-apiserver pod-template-hash:78d7549f7d projectcalico.org/namespace:calico-apiserver projectcalico.org/orchestrator:k8s projectcalico.org/serviceaccount:calico-apiserver] map[] [] [] []} {k8s localhost calico-apiserver-78d7549f7d-g9z2x eth0 calico-apiserver [] [] [kns.calico-apiserver ksa.calico-apiserver.calico-apiserver] cali79f05abc360 [] []}} ContainerID="93dde905cbb451fba101889b6398ee03b451ccb3620fb837b534493ec9e9e140" Namespace="calico-apiserver" Pod="calico-apiserver-78d7549f7d-g9z2x" WorkloadEndpoint="localhost-k8s-calico--apiserver--78d7549f7d--g9z2x-" Jan 29 11:36:15.624468 containerd[1492]: 2025-01-29 11:36:15.283 [INFO][4717] cni-plugin/k8s.go 77: Extracted identifiers for CmdAddK8s ContainerID="93dde905cbb451fba101889b6398ee03b451ccb3620fb837b534493ec9e9e140" Namespace="calico-apiserver" Pod="calico-apiserver-78d7549f7d-g9z2x" WorkloadEndpoint="localhost-k8s-calico--apiserver--78d7549f7d--g9z2x-eth0" Jan 29 11:36:15.624468 containerd[1492]: 2025-01-29 11:36:15.382 [INFO][4809] ipam/ipam_plugin.go 225: Calico CNI IPAM request count IPv4=1 IPv6=0 
ContainerID="93dde905cbb451fba101889b6398ee03b451ccb3620fb837b534493ec9e9e140" HandleID="k8s-pod-network.93dde905cbb451fba101889b6398ee03b451ccb3620fb837b534493ec9e9e140" Workload="localhost-k8s-calico--apiserver--78d7549f7d--g9z2x-eth0" Jan 29 11:36:15.624468 containerd[1492]: 2025-01-29 11:36:15.477 [INFO][4809] ipam/ipam_plugin.go 265: Auto assigning IP ContainerID="93dde905cbb451fba101889b6398ee03b451ccb3620fb837b534493ec9e9e140" HandleID="k8s-pod-network.93dde905cbb451fba101889b6398ee03b451ccb3620fb837b534493ec9e9e140" Workload="localhost-k8s-calico--apiserver--78d7549f7d--g9z2x-eth0" assignArgs=ipam.AutoAssignArgs{Num4:1, Num6:0, HandleID:(*string)(0xc000555490), Attrs:map[string]string{"namespace":"calico-apiserver", "node":"localhost", "pod":"calico-apiserver-78d7549f7d-g9z2x", "timestamp":"2025-01-29 11:36:15.382314539 +0000 UTC"}, Hostname:"localhost", IPv4Pools:[]net.IPNet{}, IPv6Pools:[]net.IPNet{}, MaxBlocksPerHost:0, HostReservedAttrIPv4s:(*ipam.HostReservedAttr)(nil), HostReservedAttrIPv6s:(*ipam.HostReservedAttr)(nil), IntendedUse:"Workload"} Jan 29 11:36:15.624468 containerd[1492]: 2025-01-29 11:36:15.478 [INFO][4809] ipam/ipam_plugin.go 353: About to acquire host-wide IPAM lock. Jan 29 11:36:15.624468 containerd[1492]: 2025-01-29 11:36:15.501 [INFO][4809] ipam/ipam_plugin.go 368: Acquired host-wide IPAM lock. 
Jan 29 11:36:15.624468 containerd[1492]: 2025-01-29 11:36:15.501 [INFO][4809] ipam/ipam.go 107: Auto-assign 1 ipv4, 0 ipv6 addrs for host 'localhost' Jan 29 11:36:15.624468 containerd[1492]: 2025-01-29 11:36:15.582 [INFO][4809] ipam/ipam.go 660: Looking up existing affinities for host handle="k8s-pod-network.93dde905cbb451fba101889b6398ee03b451ccb3620fb837b534493ec9e9e140" host="localhost" Jan 29 11:36:15.624468 containerd[1492]: 2025-01-29 11:36:15.586 [INFO][4809] ipam/ipam.go 372: Looking up existing affinities for host host="localhost" Jan 29 11:36:15.624468 containerd[1492]: 2025-01-29 11:36:15.590 [INFO][4809] ipam/ipam.go 489: Trying affinity for 192.168.88.128/26 host="localhost" Jan 29 11:36:15.624468 containerd[1492]: 2025-01-29 11:36:15.592 [INFO][4809] ipam/ipam.go 155: Attempting to load block cidr=192.168.88.128/26 host="localhost" Jan 29 11:36:15.624468 containerd[1492]: 2025-01-29 11:36:15.594 [INFO][4809] ipam/ipam.go 232: Affinity is confirmed and block has been loaded cidr=192.168.88.128/26 host="localhost" Jan 29 11:36:15.624468 containerd[1492]: 2025-01-29 11:36:15.594 [INFO][4809] ipam/ipam.go 1180: Attempting to assign 1 addresses from block block=192.168.88.128/26 handle="k8s-pod-network.93dde905cbb451fba101889b6398ee03b451ccb3620fb837b534493ec9e9e140" host="localhost" Jan 29 11:36:15.624468 containerd[1492]: 2025-01-29 11:36:15.595 [INFO][4809] ipam/ipam.go 1685: Creating new handle: k8s-pod-network.93dde905cbb451fba101889b6398ee03b451ccb3620fb837b534493ec9e9e140 Jan 29 11:36:15.624468 containerd[1492]: 2025-01-29 11:36:15.598 [INFO][4809] ipam/ipam.go 1203: Writing block in order to claim IPs block=192.168.88.128/26 handle="k8s-pod-network.93dde905cbb451fba101889b6398ee03b451ccb3620fb837b534493ec9e9e140" host="localhost" Jan 29 11:36:15.624468 containerd[1492]: 2025-01-29 11:36:15.603 [INFO][4809] ipam/ipam.go 1216: Successfully claimed IPs: [192.168.88.130/26] block=192.168.88.128/26 
handle="k8s-pod-network.93dde905cbb451fba101889b6398ee03b451ccb3620fb837b534493ec9e9e140" host="localhost" Jan 29 11:36:15.624468 containerd[1492]: 2025-01-29 11:36:15.603 [INFO][4809] ipam/ipam.go 847: Auto-assigned 1 out of 1 IPv4s: [192.168.88.130/26] handle="k8s-pod-network.93dde905cbb451fba101889b6398ee03b451ccb3620fb837b534493ec9e9e140" host="localhost" Jan 29 11:36:15.624468 containerd[1492]: 2025-01-29 11:36:15.603 [INFO][4809] ipam/ipam_plugin.go 374: Released host-wide IPAM lock. Jan 29 11:36:15.624468 containerd[1492]: 2025-01-29 11:36:15.603 [INFO][4809] ipam/ipam_plugin.go 283: Calico CNI IPAM assigned addresses IPv4=[192.168.88.130/26] IPv6=[] ContainerID="93dde905cbb451fba101889b6398ee03b451ccb3620fb837b534493ec9e9e140" HandleID="k8s-pod-network.93dde905cbb451fba101889b6398ee03b451ccb3620fb837b534493ec9e9e140" Workload="localhost-k8s-calico--apiserver--78d7549f7d--g9z2x-eth0" Jan 29 11:36:15.625099 containerd[1492]: 2025-01-29 11:36:15.607 [INFO][4717] cni-plugin/k8s.go 386: Populated endpoint ContainerID="93dde905cbb451fba101889b6398ee03b451ccb3620fb837b534493ec9e9e140" Namespace="calico-apiserver" Pod="calico-apiserver-78d7549f7d-g9z2x" WorkloadEndpoint="localhost-k8s-calico--apiserver--78d7549f7d--g9z2x-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"localhost-k8s-calico--apiserver--78d7549f7d--g9z2x-eth0", GenerateName:"calico-apiserver-78d7549f7d-", Namespace:"calico-apiserver", SelfLink:"", UID:"c4ad5d31-5d68-4473-8b3c-72bfc21e63c5", ResourceVersion:"769", Generation:0, CreationTimestamp:time.Date(2025, time.January, 29, 11, 35, 52, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"apiserver":"true", "app.kubernetes.io/name":"calico-apiserver", "k8s-app":"calico-apiserver", "pod-template-hash":"78d7549f7d", "projectcalico.org/namespace":"calico-apiserver", 
"projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"calico-apiserver"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"localhost", ContainerID:"", Pod:"calico-apiserver-78d7549f7d-g9z2x", Endpoint:"eth0", ServiceAccountName:"calico-apiserver", IPNetworks:[]string{"192.168.88.130/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-apiserver", "ksa.calico-apiserver.calico-apiserver"}, InterfaceName:"cali79f05abc360", MAC:"", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil)}} Jan 29 11:36:15.625099 containerd[1492]: 2025-01-29 11:36:15.607 [INFO][4717] cni-plugin/k8s.go 387: Calico CNI using IPs: [192.168.88.130/32] ContainerID="93dde905cbb451fba101889b6398ee03b451ccb3620fb837b534493ec9e9e140" Namespace="calico-apiserver" Pod="calico-apiserver-78d7549f7d-g9z2x" WorkloadEndpoint="localhost-k8s-calico--apiserver--78d7549f7d--g9z2x-eth0" Jan 29 11:36:15.625099 containerd[1492]: 2025-01-29 11:36:15.607 [INFO][4717] cni-plugin/dataplane_linux.go 69: Setting the host side veth name to cali79f05abc360 ContainerID="93dde905cbb451fba101889b6398ee03b451ccb3620fb837b534493ec9e9e140" Namespace="calico-apiserver" Pod="calico-apiserver-78d7549f7d-g9z2x" WorkloadEndpoint="localhost-k8s-calico--apiserver--78d7549f7d--g9z2x-eth0" Jan 29 11:36:15.625099 containerd[1492]: 2025-01-29 11:36:15.611 [INFO][4717] cni-plugin/dataplane_linux.go 508: Disabling IPv4 forwarding ContainerID="93dde905cbb451fba101889b6398ee03b451ccb3620fb837b534493ec9e9e140" Namespace="calico-apiserver" Pod="calico-apiserver-78d7549f7d-g9z2x" WorkloadEndpoint="localhost-k8s-calico--apiserver--78d7549f7d--g9z2x-eth0" Jan 29 11:36:15.625099 containerd[1492]: 2025-01-29 11:36:15.611 [INFO][4717] cni-plugin/k8s.go 414: Added Mac, interface name, and active 
container ID to endpoint ContainerID="93dde905cbb451fba101889b6398ee03b451ccb3620fb837b534493ec9e9e140" Namespace="calico-apiserver" Pod="calico-apiserver-78d7549f7d-g9z2x" WorkloadEndpoint="localhost-k8s-calico--apiserver--78d7549f7d--g9z2x-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"localhost-k8s-calico--apiserver--78d7549f7d--g9z2x-eth0", GenerateName:"calico-apiserver-78d7549f7d-", Namespace:"calico-apiserver", SelfLink:"", UID:"c4ad5d31-5d68-4473-8b3c-72bfc21e63c5", ResourceVersion:"769", Generation:0, CreationTimestamp:time.Date(2025, time.January, 29, 11, 35, 52, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"apiserver":"true", "app.kubernetes.io/name":"calico-apiserver", "k8s-app":"calico-apiserver", "pod-template-hash":"78d7549f7d", "projectcalico.org/namespace":"calico-apiserver", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"calico-apiserver"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"localhost", ContainerID:"93dde905cbb451fba101889b6398ee03b451ccb3620fb837b534493ec9e9e140", Pod:"calico-apiserver-78d7549f7d-g9z2x", Endpoint:"eth0", ServiceAccountName:"calico-apiserver", IPNetworks:[]string{"192.168.88.130/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-apiserver", "ksa.calico-apiserver.calico-apiserver"}, InterfaceName:"cali79f05abc360", MAC:"26:a6:44:1f:11:5c", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil)}} Jan 29 11:36:15.625099 containerd[1492]: 2025-01-29 11:36:15.621 [INFO][4717] cni-plugin/k8s.go 500: Wrote updated endpoint to datastore 
ContainerID="93dde905cbb451fba101889b6398ee03b451ccb3620fb837b534493ec9e9e140" Namespace="calico-apiserver" Pod="calico-apiserver-78d7549f7d-g9z2x" WorkloadEndpoint="localhost-k8s-calico--apiserver--78d7549f7d--g9z2x-eth0" Jan 29 11:36:15.643231 containerd[1492]: time="2025-01-29T11:36:15.643183310Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-apiserver-78d7549f7d-5n5j6,Uid:0725190d-a48f-4c98-9011-c6cdb64f50fe,Namespace:calico-apiserver,Attempt:6,} returns sandbox id \"c1a9817c1635cef0eb688e5c53d2f784687ef0541fe6a4a858cfb18ac00a2ea0\"" Jan 29 11:36:15.644902 containerd[1492]: time="2025-01-29T11:36:15.644780632Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Jan 29 11:36:15.644902 containerd[1492]: time="2025-01-29T11:36:15.644850214Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Jan 29 11:36:15.644902 containerd[1492]: time="2025-01-29T11:36:15.644874048Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Jan 29 11:36:15.645384 containerd[1492]: time="2025-01-29T11:36:15.644981660Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Jan 29 11:36:15.645522 containerd[1492]: time="2025-01-29T11:36:15.645496858Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/apiserver:v3.29.1\"" Jan 29 11:36:15.668847 systemd[1]: Started cri-containerd-93dde905cbb451fba101889b6398ee03b451ccb3620fb837b534493ec9e9e140.scope - libcontainer container 93dde905cbb451fba101889b6398ee03b451ccb3620fb837b534493ec9e9e140. 
Jan 29 11:36:15.680848 systemd-resolved[1330]: Failed to determine the local hostname and LLMNR/mDNS names, ignoring: No such device or address Jan 29 11:36:15.712061 containerd[1492]: time="2025-01-29T11:36:15.712002444Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-apiserver-78d7549f7d-g9z2x,Uid:c4ad5d31-5d68-4473-8b3c-72bfc21e63c5,Namespace:calico-apiserver,Attempt:6,} returns sandbox id \"93dde905cbb451fba101889b6398ee03b451ccb3620fb837b534493ec9e9e140\"" Jan 29 11:36:15.716514 systemd-networkd[1414]: cali50704b3a3ca: Link UP Jan 29 11:36:15.717106 systemd-networkd[1414]: cali50704b3a3ca: Gained carrier Jan 29 11:36:15.728555 containerd[1492]: 2025-01-29 11:36:15.227 [INFO][4733] cni-plugin/utils.go 100: File /var/lib/calico/mtu does not exist Jan 29 11:36:15.728555 containerd[1492]: 2025-01-29 11:36:15.243 [INFO][4733] cni-plugin/plugin.go 325: Calico CNI found existing endpoint: &{{WorkloadEndpoint projectcalico.org/v3} {localhost-k8s-calico--kube--controllers--7f5f6fb96--hcxll-eth0 calico-kube-controllers-7f5f6fb96- calico-system 5e835acf-6d91-46a2-a52a-32309f48a3b4 771 0 2025-01-29 11:35:52 +0000 UTC map[app.kubernetes.io/name:calico-kube-controllers k8s-app:calico-kube-controllers pod-template-hash:7f5f6fb96 projectcalico.org/namespace:calico-system projectcalico.org/orchestrator:k8s projectcalico.org/serviceaccount:calico-kube-controllers] map[] [] [] []} {k8s localhost calico-kube-controllers-7f5f6fb96-hcxll eth0 calico-kube-controllers [] [] [kns.calico-system ksa.calico-system.calico-kube-controllers] cali50704b3a3ca [] []}} ContainerID="db1643cc31dc292653fff6f3b1f70b1c50776745befb2152f4fc1fb12248accc" Namespace="calico-system" Pod="calico-kube-controllers-7f5f6fb96-hcxll" WorkloadEndpoint="localhost-k8s-calico--kube--controllers--7f5f6fb96--hcxll-" Jan 29 11:36:15.728555 containerd[1492]: 2025-01-29 11:36:15.243 [INFO][4733] cni-plugin/k8s.go 77: Extracted identifiers for CmdAddK8s 
ContainerID="db1643cc31dc292653fff6f3b1f70b1c50776745befb2152f4fc1fb12248accc" Namespace="calico-system" Pod="calico-kube-controllers-7f5f6fb96-hcxll" WorkloadEndpoint="localhost-k8s-calico--kube--controllers--7f5f6fb96--hcxll-eth0" Jan 29 11:36:15.728555 containerd[1492]: 2025-01-29 11:36:15.376 [INFO][4781] ipam/ipam_plugin.go 225: Calico CNI IPAM request count IPv4=1 IPv6=0 ContainerID="db1643cc31dc292653fff6f3b1f70b1c50776745befb2152f4fc1fb12248accc" HandleID="k8s-pod-network.db1643cc31dc292653fff6f3b1f70b1c50776745befb2152f4fc1fb12248accc" Workload="localhost-k8s-calico--kube--controllers--7f5f6fb96--hcxll-eth0" Jan 29 11:36:15.728555 containerd[1492]: 2025-01-29 11:36:15.477 [INFO][4781] ipam/ipam_plugin.go 265: Auto assigning IP ContainerID="db1643cc31dc292653fff6f3b1f70b1c50776745befb2152f4fc1fb12248accc" HandleID="k8s-pod-network.db1643cc31dc292653fff6f3b1f70b1c50776745befb2152f4fc1fb12248accc" Workload="localhost-k8s-calico--kube--controllers--7f5f6fb96--hcxll-eth0" assignArgs=ipam.AutoAssignArgs{Num4:1, Num6:0, HandleID:(*string)(0xc0003a7b70), Attrs:map[string]string{"namespace":"calico-system", "node":"localhost", "pod":"calico-kube-controllers-7f5f6fb96-hcxll", "timestamp":"2025-01-29 11:36:15.376485893 +0000 UTC"}, Hostname:"localhost", IPv4Pools:[]net.IPNet{}, IPv6Pools:[]net.IPNet{}, MaxBlocksPerHost:0, HostReservedAttrIPv4s:(*ipam.HostReservedAttr)(nil), HostReservedAttrIPv6s:(*ipam.HostReservedAttr)(nil), IntendedUse:"Workload"} Jan 29 11:36:15.728555 containerd[1492]: 2025-01-29 11:36:15.478 [INFO][4781] ipam/ipam_plugin.go 353: About to acquire host-wide IPAM lock. Jan 29 11:36:15.728555 containerd[1492]: 2025-01-29 11:36:15.603 [INFO][4781] ipam/ipam_plugin.go 368: Acquired host-wide IPAM lock. 
Jan 29 11:36:15.728555 containerd[1492]: 2025-01-29 11:36:15.603 [INFO][4781] ipam/ipam.go 107: Auto-assign 1 ipv4, 0 ipv6 addrs for host 'localhost' Jan 29 11:36:15.728555 containerd[1492]: 2025-01-29 11:36:15.682 [INFO][4781] ipam/ipam.go 660: Looking up existing affinities for host handle="k8s-pod-network.db1643cc31dc292653fff6f3b1f70b1c50776745befb2152f4fc1fb12248accc" host="localhost" Jan 29 11:36:15.728555 containerd[1492]: 2025-01-29 11:36:15.687 [INFO][4781] ipam/ipam.go 372: Looking up existing affinities for host host="localhost" Jan 29 11:36:15.728555 containerd[1492]: 2025-01-29 11:36:15.691 [INFO][4781] ipam/ipam.go 489: Trying affinity for 192.168.88.128/26 host="localhost" Jan 29 11:36:15.728555 containerd[1492]: 2025-01-29 11:36:15.692 [INFO][4781] ipam/ipam.go 155: Attempting to load block cidr=192.168.88.128/26 host="localhost" Jan 29 11:36:15.728555 containerd[1492]: 2025-01-29 11:36:15.695 [INFO][4781] ipam/ipam.go 232: Affinity is confirmed and block has been loaded cidr=192.168.88.128/26 host="localhost" Jan 29 11:36:15.728555 containerd[1492]: 2025-01-29 11:36:15.695 [INFO][4781] ipam/ipam.go 1180: Attempting to assign 1 addresses from block block=192.168.88.128/26 handle="k8s-pod-network.db1643cc31dc292653fff6f3b1f70b1c50776745befb2152f4fc1fb12248accc" host="localhost" Jan 29 11:36:15.728555 containerd[1492]: 2025-01-29 11:36:15.697 [INFO][4781] ipam/ipam.go 1685: Creating new handle: k8s-pod-network.db1643cc31dc292653fff6f3b1f70b1c50776745befb2152f4fc1fb12248accc Jan 29 11:36:15.728555 containerd[1492]: 2025-01-29 11:36:15.700 [INFO][4781] ipam/ipam.go 1203: Writing block in order to claim IPs block=192.168.88.128/26 handle="k8s-pod-network.db1643cc31dc292653fff6f3b1f70b1c50776745befb2152f4fc1fb12248accc" host="localhost" Jan 29 11:36:15.728555 containerd[1492]: 2025-01-29 11:36:15.708 [INFO][4781] ipam/ipam.go 1216: Successfully claimed IPs: [192.168.88.131/26] block=192.168.88.128/26 
handle="k8s-pod-network.db1643cc31dc292653fff6f3b1f70b1c50776745befb2152f4fc1fb12248accc" host="localhost" Jan 29 11:36:15.728555 containerd[1492]: 2025-01-29 11:36:15.708 [INFO][4781] ipam/ipam.go 847: Auto-assigned 1 out of 1 IPv4s: [192.168.88.131/26] handle="k8s-pod-network.db1643cc31dc292653fff6f3b1f70b1c50776745befb2152f4fc1fb12248accc" host="localhost" Jan 29 11:36:15.728555 containerd[1492]: 2025-01-29 11:36:15.708 [INFO][4781] ipam/ipam_plugin.go 374: Released host-wide IPAM lock. Jan 29 11:36:15.728555 containerd[1492]: 2025-01-29 11:36:15.708 [INFO][4781] ipam/ipam_plugin.go 283: Calico CNI IPAM assigned addresses IPv4=[192.168.88.131/26] IPv6=[] ContainerID="db1643cc31dc292653fff6f3b1f70b1c50776745befb2152f4fc1fb12248accc" HandleID="k8s-pod-network.db1643cc31dc292653fff6f3b1f70b1c50776745befb2152f4fc1fb12248accc" Workload="localhost-k8s-calico--kube--controllers--7f5f6fb96--hcxll-eth0" Jan 29 11:36:15.729138 containerd[1492]: 2025-01-29 11:36:15.712 [INFO][4733] cni-plugin/k8s.go 386: Populated endpoint ContainerID="db1643cc31dc292653fff6f3b1f70b1c50776745befb2152f4fc1fb12248accc" Namespace="calico-system" Pod="calico-kube-controllers-7f5f6fb96-hcxll" WorkloadEndpoint="localhost-k8s-calico--kube--controllers--7f5f6fb96--hcxll-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"localhost-k8s-calico--kube--controllers--7f5f6fb96--hcxll-eth0", GenerateName:"calico-kube-controllers-7f5f6fb96-", Namespace:"calico-system", SelfLink:"", UID:"5e835acf-6d91-46a2-a52a-32309f48a3b4", ResourceVersion:"771", Generation:0, CreationTimestamp:time.Date(2025, time.January, 29, 11, 35, 52, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"app.kubernetes.io/name":"calico-kube-controllers", "k8s-app":"calico-kube-controllers", "pod-template-hash":"7f5f6fb96", "projectcalico.org/namespace":"calico-system", 
"projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"calico-kube-controllers"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"localhost", ContainerID:"", Pod:"calico-kube-controllers-7f5f6fb96-hcxll", Endpoint:"eth0", ServiceAccountName:"calico-kube-controllers", IPNetworks:[]string{"192.168.88.131/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-system", "ksa.calico-system.calico-kube-controllers"}, InterfaceName:"cali50704b3a3ca", MAC:"", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil)}} Jan 29 11:36:15.729138 containerd[1492]: 2025-01-29 11:36:15.712 [INFO][4733] cni-plugin/k8s.go 387: Calico CNI using IPs: [192.168.88.131/32] ContainerID="db1643cc31dc292653fff6f3b1f70b1c50776745befb2152f4fc1fb12248accc" Namespace="calico-system" Pod="calico-kube-controllers-7f5f6fb96-hcxll" WorkloadEndpoint="localhost-k8s-calico--kube--controllers--7f5f6fb96--hcxll-eth0" Jan 29 11:36:15.729138 containerd[1492]: 2025-01-29 11:36:15.712 [INFO][4733] cni-plugin/dataplane_linux.go 69: Setting the host side veth name to cali50704b3a3ca ContainerID="db1643cc31dc292653fff6f3b1f70b1c50776745befb2152f4fc1fb12248accc" Namespace="calico-system" Pod="calico-kube-controllers-7f5f6fb96-hcxll" WorkloadEndpoint="localhost-k8s-calico--kube--controllers--7f5f6fb96--hcxll-eth0" Jan 29 11:36:15.729138 containerd[1492]: 2025-01-29 11:36:15.717 [INFO][4733] cni-plugin/dataplane_linux.go 508: Disabling IPv4 forwarding ContainerID="db1643cc31dc292653fff6f3b1f70b1c50776745befb2152f4fc1fb12248accc" Namespace="calico-system" Pod="calico-kube-controllers-7f5f6fb96-hcxll" WorkloadEndpoint="localhost-k8s-calico--kube--controllers--7f5f6fb96--hcxll-eth0" Jan 29 11:36:15.729138 containerd[1492]: 2025-01-29 11:36:15.717 [INFO][4733] 
cni-plugin/k8s.go 414: Added Mac, interface name, and active container ID to endpoint ContainerID="db1643cc31dc292653fff6f3b1f70b1c50776745befb2152f4fc1fb12248accc" Namespace="calico-system" Pod="calico-kube-controllers-7f5f6fb96-hcxll" WorkloadEndpoint="localhost-k8s-calico--kube--controllers--7f5f6fb96--hcxll-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"localhost-k8s-calico--kube--controllers--7f5f6fb96--hcxll-eth0", GenerateName:"calico-kube-controllers-7f5f6fb96-", Namespace:"calico-system", SelfLink:"", UID:"5e835acf-6d91-46a2-a52a-32309f48a3b4", ResourceVersion:"771", Generation:0, CreationTimestamp:time.Date(2025, time.January, 29, 11, 35, 52, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"app.kubernetes.io/name":"calico-kube-controllers", "k8s-app":"calico-kube-controllers", "pod-template-hash":"7f5f6fb96", "projectcalico.org/namespace":"calico-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"calico-kube-controllers"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"localhost", ContainerID:"db1643cc31dc292653fff6f3b1f70b1c50776745befb2152f4fc1fb12248accc", Pod:"calico-kube-controllers-7f5f6fb96-hcxll", Endpoint:"eth0", ServiceAccountName:"calico-kube-controllers", IPNetworks:[]string{"192.168.88.131/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-system", "ksa.calico-system.calico-kube-controllers"}, InterfaceName:"cali50704b3a3ca", MAC:"6a:3c:f1:0d:da:25", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil)}} Jan 29 11:36:15.729138 containerd[1492]: 2025-01-29 11:36:15.725 [INFO][4733] cni-plugin/k8s.go 500: Wrote updated 
endpoint to datastore ContainerID="db1643cc31dc292653fff6f3b1f70b1c50776745befb2152f4fc1fb12248accc" Namespace="calico-system" Pod="calico-kube-controllers-7f5f6fb96-hcxll" WorkloadEndpoint="localhost-k8s-calico--kube--controllers--7f5f6fb96--hcxll-eth0" Jan 29 11:36:15.748621 containerd[1492]: time="2025-01-29T11:36:15.748326287Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Jan 29 11:36:15.748621 containerd[1492]: time="2025-01-29T11:36:15.748394176Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Jan 29 11:36:15.748621 containerd[1492]: time="2025-01-29T11:36:15.748405397Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Jan 29 11:36:15.748621 containerd[1492]: time="2025-01-29T11:36:15.748482341Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." 
runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Jan 29 11:36:15.756002 kubelet[2560]: E0129 11:36:15.755964 2560 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jan 29 11:36:15.777673 kubelet[2560]: I0129 11:36:15.777519 2560 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="calico-system/calico-node-w84hj" podStartSLOduration=1.874056009 podStartE2EDuration="23.777501486s" podCreationTimestamp="2025-01-29 11:35:52 +0000 UTC" firstStartedPulling="2025-01-29 11:35:52.557313505 +0000 UTC m=+18.578123273" lastFinishedPulling="2025-01-29 11:36:14.460758982 +0000 UTC m=+40.481568750" observedRunningTime="2025-01-29 11:36:15.777145607 +0000 UTC m=+41.797955375" watchObservedRunningTime="2025-01-29 11:36:15.777501486 +0000 UTC m=+41.798311254" Jan 29 11:36:15.780916 systemd[1]: Started cri-containerd-db1643cc31dc292653fff6f3b1f70b1c50776745befb2152f4fc1fb12248accc.scope - libcontainer container db1643cc31dc292653fff6f3b1f70b1c50776745befb2152f4fc1fb12248accc. 
Jan 29 11:36:15.793830 systemd-resolved[1330]: Failed to determine the local hostname and LLMNR/mDNS names, ignoring: No such device or address Jan 29 11:36:15.813754 systemd-networkd[1414]: cali295396afbb9: Link UP Jan 29 11:36:15.814618 systemd-networkd[1414]: cali295396afbb9: Gained carrier Jan 29 11:36:15.830338 containerd[1492]: time="2025-01-29T11:36:15.829441132Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-kube-controllers-7f5f6fb96-hcxll,Uid:5e835acf-6d91-46a2-a52a-32309f48a3b4,Namespace:calico-system,Attempt:6,} returns sandbox id \"db1643cc31dc292653fff6f3b1f70b1c50776745befb2152f4fc1fb12248accc\"" Jan 29 11:36:15.830824 containerd[1492]: 2025-01-29 11:36:15.246 [INFO][4696] cni-plugin/utils.go 100: File /var/lib/calico/mtu does not exist Jan 29 11:36:15.830824 containerd[1492]: 2025-01-29 11:36:15.259 [INFO][4696] cni-plugin/plugin.go 325: Calico CNI found existing endpoint: &{{WorkloadEndpoint projectcalico.org/v3} {localhost-k8s-csi--node--driver--s9vh2-eth0 csi-node-driver- calico-system 0ee1d7b9-9e01-4183-97ec-91d9420b2dab 644 0 2025-01-29 11:35:52 +0000 UTC map[app.kubernetes.io/name:csi-node-driver controller-revision-hash:56747c9949 k8s-app:csi-node-driver name:csi-node-driver pod-template-generation:1 projectcalico.org/namespace:calico-system projectcalico.org/orchestrator:k8s projectcalico.org/serviceaccount:csi-node-driver] map[] [] [] []} {k8s localhost csi-node-driver-s9vh2 eth0 csi-node-driver [] [] [kns.calico-system ksa.calico-system.csi-node-driver] cali295396afbb9 [] []}} ContainerID="692107412fda4d04c2d07a21be5def3c6c6c707799ab0e4716a533be23b02b94" Namespace="calico-system" Pod="csi-node-driver-s9vh2" WorkloadEndpoint="localhost-k8s-csi--node--driver--s9vh2-" Jan 29 11:36:15.830824 containerd[1492]: 2025-01-29 11:36:15.259 [INFO][4696] cni-plugin/k8s.go 77: Extracted identifiers for CmdAddK8s ContainerID="692107412fda4d04c2d07a21be5def3c6c6c707799ab0e4716a533be23b02b94" Namespace="calico-system" 
Pod="csi-node-driver-s9vh2" WorkloadEndpoint="localhost-k8s-csi--node--driver--s9vh2-eth0" Jan 29 11:36:15.830824 containerd[1492]: 2025-01-29 11:36:15.361 [INFO][4796] ipam/ipam_plugin.go 225: Calico CNI IPAM request count IPv4=1 IPv6=0 ContainerID="692107412fda4d04c2d07a21be5def3c6c6c707799ab0e4716a533be23b02b94" HandleID="k8s-pod-network.692107412fda4d04c2d07a21be5def3c6c6c707799ab0e4716a533be23b02b94" Workload="localhost-k8s-csi--node--driver--s9vh2-eth0" Jan 29 11:36:15.830824 containerd[1492]: 2025-01-29 11:36:15.477 [INFO][4796] ipam/ipam_plugin.go 265: Auto assigning IP ContainerID="692107412fda4d04c2d07a21be5def3c6c6c707799ab0e4716a533be23b02b94" HandleID="k8s-pod-network.692107412fda4d04c2d07a21be5def3c6c6c707799ab0e4716a533be23b02b94" Workload="localhost-k8s-csi--node--driver--s9vh2-eth0" assignArgs=ipam.AutoAssignArgs{Num4:1, Num6:0, HandleID:(*string)(0xc0002f55c0), Attrs:map[string]string{"namespace":"calico-system", "node":"localhost", "pod":"csi-node-driver-s9vh2", "timestamp":"2025-01-29 11:36:15.361257199 +0000 UTC"}, Hostname:"localhost", IPv4Pools:[]net.IPNet{}, IPv6Pools:[]net.IPNet{}, MaxBlocksPerHost:0, HostReservedAttrIPv4s:(*ipam.HostReservedAttr)(nil), HostReservedAttrIPv6s:(*ipam.HostReservedAttr)(nil), IntendedUse:"Workload"} Jan 29 11:36:15.830824 containerd[1492]: 2025-01-29 11:36:15.477 [INFO][4796] ipam/ipam_plugin.go 353: About to acquire host-wide IPAM lock. Jan 29 11:36:15.830824 containerd[1492]: 2025-01-29 11:36:15.708 [INFO][4796] ipam/ipam_plugin.go 368: Acquired host-wide IPAM lock. 
Jan 29 11:36:15.830824 containerd[1492]: 2025-01-29 11:36:15.708 [INFO][4796] ipam/ipam.go 107: Auto-assign 1 ipv4, 0 ipv6 addrs for host 'localhost' Jan 29 11:36:15.830824 containerd[1492]: 2025-01-29 11:36:15.782 [INFO][4796] ipam/ipam.go 660: Looking up existing affinities for host handle="k8s-pod-network.692107412fda4d04c2d07a21be5def3c6c6c707799ab0e4716a533be23b02b94" host="localhost" Jan 29 11:36:15.830824 containerd[1492]: 2025-01-29 11:36:15.787 [INFO][4796] ipam/ipam.go 372: Looking up existing affinities for host host="localhost" Jan 29 11:36:15.830824 containerd[1492]: 2025-01-29 11:36:15.791 [INFO][4796] ipam/ipam.go 489: Trying affinity for 192.168.88.128/26 host="localhost" Jan 29 11:36:15.830824 containerd[1492]: 2025-01-29 11:36:15.792 [INFO][4796] ipam/ipam.go 155: Attempting to load block cidr=192.168.88.128/26 host="localhost" Jan 29 11:36:15.830824 containerd[1492]: 2025-01-29 11:36:15.794 [INFO][4796] ipam/ipam.go 232: Affinity is confirmed and block has been loaded cidr=192.168.88.128/26 host="localhost" Jan 29 11:36:15.830824 containerd[1492]: 2025-01-29 11:36:15.794 [INFO][4796] ipam/ipam.go 1180: Attempting to assign 1 addresses from block block=192.168.88.128/26 handle="k8s-pod-network.692107412fda4d04c2d07a21be5def3c6c6c707799ab0e4716a533be23b02b94" host="localhost" Jan 29 11:36:15.830824 containerd[1492]: 2025-01-29 11:36:15.796 [INFO][4796] ipam/ipam.go 1685: Creating new handle: k8s-pod-network.692107412fda4d04c2d07a21be5def3c6c6c707799ab0e4716a533be23b02b94 Jan 29 11:36:15.830824 containerd[1492]: 2025-01-29 11:36:15.799 [INFO][4796] ipam/ipam.go 1203: Writing block in order to claim IPs block=192.168.88.128/26 handle="k8s-pod-network.692107412fda4d04c2d07a21be5def3c6c6c707799ab0e4716a533be23b02b94" host="localhost" Jan 29 11:36:15.830824 containerd[1492]: 2025-01-29 11:36:15.805 [INFO][4796] ipam/ipam.go 1216: Successfully claimed IPs: [192.168.88.132/26] block=192.168.88.128/26 
handle="k8s-pod-network.692107412fda4d04c2d07a21be5def3c6c6c707799ab0e4716a533be23b02b94" host="localhost" Jan 29 11:36:15.830824 containerd[1492]: 2025-01-29 11:36:15.805 [INFO][4796] ipam/ipam.go 847: Auto-assigned 1 out of 1 IPv4s: [192.168.88.132/26] handle="k8s-pod-network.692107412fda4d04c2d07a21be5def3c6c6c707799ab0e4716a533be23b02b94" host="localhost" Jan 29 11:36:15.830824 containerd[1492]: 2025-01-29 11:36:15.805 [INFO][4796] ipam/ipam_plugin.go 374: Released host-wide IPAM lock. Jan 29 11:36:15.830824 containerd[1492]: 2025-01-29 11:36:15.805 [INFO][4796] ipam/ipam_plugin.go 283: Calico CNI IPAM assigned addresses IPv4=[192.168.88.132/26] IPv6=[] ContainerID="692107412fda4d04c2d07a21be5def3c6c6c707799ab0e4716a533be23b02b94" HandleID="k8s-pod-network.692107412fda4d04c2d07a21be5def3c6c6c707799ab0e4716a533be23b02b94" Workload="localhost-k8s-csi--node--driver--s9vh2-eth0" Jan 29 11:36:15.831508 containerd[1492]: 2025-01-29 11:36:15.808 [INFO][4696] cni-plugin/k8s.go 386: Populated endpoint ContainerID="692107412fda4d04c2d07a21be5def3c6c6c707799ab0e4716a533be23b02b94" Namespace="calico-system" Pod="csi-node-driver-s9vh2" WorkloadEndpoint="localhost-k8s-csi--node--driver--s9vh2-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"localhost-k8s-csi--node--driver--s9vh2-eth0", GenerateName:"csi-node-driver-", Namespace:"calico-system", SelfLink:"", UID:"0ee1d7b9-9e01-4183-97ec-91d9420b2dab", ResourceVersion:"644", Generation:0, CreationTimestamp:time.Date(2025, time.January, 29, 11, 35, 52, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"app.kubernetes.io/name":"csi-node-driver", "controller-revision-hash":"56747c9949", "k8s-app":"csi-node-driver", "name":"csi-node-driver", "pod-template-generation":"1", "projectcalico.org/namespace":"calico-system", "projectcalico.org/orchestrator":"k8s", 
"projectcalico.org/serviceaccount":"csi-node-driver"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"localhost", ContainerID:"", Pod:"csi-node-driver-s9vh2", Endpoint:"eth0", ServiceAccountName:"csi-node-driver", IPNetworks:[]string{"192.168.88.132/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-system", "ksa.calico-system.csi-node-driver"}, InterfaceName:"cali295396afbb9", MAC:"", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil)}} Jan 29 11:36:15.831508 containerd[1492]: 2025-01-29 11:36:15.808 [INFO][4696] cni-plugin/k8s.go 387: Calico CNI using IPs: [192.168.88.132/32] ContainerID="692107412fda4d04c2d07a21be5def3c6c6c707799ab0e4716a533be23b02b94" Namespace="calico-system" Pod="csi-node-driver-s9vh2" WorkloadEndpoint="localhost-k8s-csi--node--driver--s9vh2-eth0" Jan 29 11:36:15.831508 containerd[1492]: 2025-01-29 11:36:15.808 [INFO][4696] cni-plugin/dataplane_linux.go 69: Setting the host side veth name to cali295396afbb9 ContainerID="692107412fda4d04c2d07a21be5def3c6c6c707799ab0e4716a533be23b02b94" Namespace="calico-system" Pod="csi-node-driver-s9vh2" WorkloadEndpoint="localhost-k8s-csi--node--driver--s9vh2-eth0" Jan 29 11:36:15.831508 containerd[1492]: 2025-01-29 11:36:15.815 [INFO][4696] cni-plugin/dataplane_linux.go 508: Disabling IPv4 forwarding ContainerID="692107412fda4d04c2d07a21be5def3c6c6c707799ab0e4716a533be23b02b94" Namespace="calico-system" Pod="csi-node-driver-s9vh2" WorkloadEndpoint="localhost-k8s-csi--node--driver--s9vh2-eth0" Jan 29 11:36:15.831508 containerd[1492]: 2025-01-29 11:36:15.815 [INFO][4696] cni-plugin/k8s.go 414: Added Mac, interface name, and active container ID to endpoint ContainerID="692107412fda4d04c2d07a21be5def3c6c6c707799ab0e4716a533be23b02b94" Namespace="calico-system" 
Pod="csi-node-driver-s9vh2" WorkloadEndpoint="localhost-k8s-csi--node--driver--s9vh2-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"localhost-k8s-csi--node--driver--s9vh2-eth0", GenerateName:"csi-node-driver-", Namespace:"calico-system", SelfLink:"", UID:"0ee1d7b9-9e01-4183-97ec-91d9420b2dab", ResourceVersion:"644", Generation:0, CreationTimestamp:time.Date(2025, time.January, 29, 11, 35, 52, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"app.kubernetes.io/name":"csi-node-driver", "controller-revision-hash":"56747c9949", "k8s-app":"csi-node-driver", "name":"csi-node-driver", "pod-template-generation":"1", "projectcalico.org/namespace":"calico-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"csi-node-driver"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"localhost", ContainerID:"692107412fda4d04c2d07a21be5def3c6c6c707799ab0e4716a533be23b02b94", Pod:"csi-node-driver-s9vh2", Endpoint:"eth0", ServiceAccountName:"csi-node-driver", IPNetworks:[]string{"192.168.88.132/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-system", "ksa.calico-system.csi-node-driver"}, InterfaceName:"cali295396afbb9", MAC:"4e:d9:fe:3f:8c:d1", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil)}} Jan 29 11:36:15.831508 containerd[1492]: 2025-01-29 11:36:15.827 [INFO][4696] cni-plugin/k8s.go 500: Wrote updated endpoint to datastore ContainerID="692107412fda4d04c2d07a21be5def3c6c6c707799ab0e4716a533be23b02b94" Namespace="calico-system" Pod="csi-node-driver-s9vh2" WorkloadEndpoint="localhost-k8s-csi--node--driver--s9vh2-eth0" Jan 29 11:36:15.861576 containerd[1492]: 
time="2025-01-29T11:36:15.861214373Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Jan 29 11:36:15.861576 containerd[1492]: time="2025-01-29T11:36:15.861298492Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Jan 29 11:36:15.861576 containerd[1492]: time="2025-01-29T11:36:15.861315514Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Jan 29 11:36:15.861576 containerd[1492]: time="2025-01-29T11:36:15.861402086Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Jan 29 11:36:15.882749 systemd[1]: Started cri-containerd-692107412fda4d04c2d07a21be5def3c6c6c707799ab0e4716a533be23b02b94.scope - libcontainer container 692107412fda4d04c2d07a21be5def3c6c6c707799ab0e4716a533be23b02b94. 
Jan 29 11:36:15.899150 systemd-resolved[1330]: Failed to determine the local hostname and LLMNR/mDNS names, ignoring: No such device or address Jan 29 11:36:15.915325 containerd[1492]: time="2025-01-29T11:36:15.915267663Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:csi-node-driver-s9vh2,Uid:0ee1d7b9-9e01-4183-97ec-91d9420b2dab,Namespace:calico-system,Attempt:5,} returns sandbox id \"692107412fda4d04c2d07a21be5def3c6c6c707799ab0e4716a533be23b02b94\"" Jan 29 11:36:15.920143 systemd-networkd[1414]: calibe67a9a5a98: Link UP Jan 29 11:36:15.920333 systemd-networkd[1414]: calibe67a9a5a98: Gained carrier Jan 29 11:36:15.932676 containerd[1492]: 2025-01-29 11:36:15.145 [INFO][4685] cni-plugin/utils.go 100: File /var/lib/calico/mtu does not exist Jan 29 11:36:15.932676 containerd[1492]: 2025-01-29 11:36:15.201 [INFO][4685] cni-plugin/plugin.go 325: Calico CNI found existing endpoint: &{{WorkloadEndpoint projectcalico.org/v3} {localhost-k8s-coredns--6f6b679f8f--8sz4x-eth0 coredns-6f6b679f8f- kube-system d32f565b-8d7d-47d2-85bf-68725ec04cff 764 0 2025-01-29 11:35:40 +0000 UTC map[k8s-app:kube-dns pod-template-hash:6f6b679f8f projectcalico.org/namespace:kube-system projectcalico.org/orchestrator:k8s projectcalico.org/serviceaccount:coredns] map[] [] [] []} {k8s localhost coredns-6f6b679f8f-8sz4x eth0 coredns [] [] [kns.kube-system ksa.kube-system.coredns] calibe67a9a5a98 [{dns UDP 53 0 } {dns-tcp TCP 53 0 } {metrics TCP 9153 0 }] []}} ContainerID="be99dd72866bde31a161a15e7127b99788f637959daf55f508cb5eba3062585f" Namespace="kube-system" Pod="coredns-6f6b679f8f-8sz4x" WorkloadEndpoint="localhost-k8s-coredns--6f6b679f8f--8sz4x-" Jan 29 11:36:15.932676 containerd[1492]: 2025-01-29 11:36:15.202 [INFO][4685] cni-plugin/k8s.go 77: Extracted identifiers for CmdAddK8s ContainerID="be99dd72866bde31a161a15e7127b99788f637959daf55f508cb5eba3062585f" Namespace="kube-system" Pod="coredns-6f6b679f8f-8sz4x" WorkloadEndpoint="localhost-k8s-coredns--6f6b679f8f--8sz4x-eth0" Jan 29 
11:36:15.932676 containerd[1492]: 2025-01-29 11:36:15.361 [INFO][4779] ipam/ipam_plugin.go 225: Calico CNI IPAM request count IPv4=1 IPv6=0 ContainerID="be99dd72866bde31a161a15e7127b99788f637959daf55f508cb5eba3062585f" HandleID="k8s-pod-network.be99dd72866bde31a161a15e7127b99788f637959daf55f508cb5eba3062585f" Workload="localhost-k8s-coredns--6f6b679f8f--8sz4x-eth0" Jan 29 11:36:15.932676 containerd[1492]: 2025-01-29 11:36:15.475 [INFO][4779] ipam/ipam_plugin.go 265: Auto assigning IP ContainerID="be99dd72866bde31a161a15e7127b99788f637959daf55f508cb5eba3062585f" HandleID="k8s-pod-network.be99dd72866bde31a161a15e7127b99788f637959daf55f508cb5eba3062585f" Workload="localhost-k8s-coredns--6f6b679f8f--8sz4x-eth0" assignArgs=ipam.AutoAssignArgs{Num4:1, Num6:0, HandleID:(*string)(0xc0003099d0), Attrs:map[string]string{"namespace":"kube-system", "node":"localhost", "pod":"coredns-6f6b679f8f-8sz4x", "timestamp":"2025-01-29 11:36:15.361134008 +0000 UTC"}, Hostname:"localhost", IPv4Pools:[]net.IPNet{}, IPv6Pools:[]net.IPNet{}, MaxBlocksPerHost:0, HostReservedAttrIPv4s:(*ipam.HostReservedAttr)(nil), HostReservedAttrIPv6s:(*ipam.HostReservedAttr)(nil), IntendedUse:"Workload"} Jan 29 11:36:15.932676 containerd[1492]: 2025-01-29 11:36:15.475 [INFO][4779] ipam/ipam_plugin.go 353: About to acquire host-wide IPAM lock. Jan 29 11:36:15.932676 containerd[1492]: 2025-01-29 11:36:15.805 [INFO][4779] ipam/ipam_plugin.go 368: Acquired host-wide IPAM lock. 
Jan 29 11:36:15.932676 containerd[1492]: 2025-01-29 11:36:15.805 [INFO][4779] ipam/ipam.go 107: Auto-assign 1 ipv4, 0 ipv6 addrs for host 'localhost' Jan 29 11:36:15.932676 containerd[1492]: 2025-01-29 11:36:15.884 [INFO][4779] ipam/ipam.go 660: Looking up existing affinities for host handle="k8s-pod-network.be99dd72866bde31a161a15e7127b99788f637959daf55f508cb5eba3062585f" host="localhost" Jan 29 11:36:15.932676 containerd[1492]: 2025-01-29 11:36:15.890 [INFO][4779] ipam/ipam.go 372: Looking up existing affinities for host host="localhost" Jan 29 11:36:15.932676 containerd[1492]: 2025-01-29 11:36:15.894 [INFO][4779] ipam/ipam.go 489: Trying affinity for 192.168.88.128/26 host="localhost" Jan 29 11:36:15.932676 containerd[1492]: 2025-01-29 11:36:15.896 [INFO][4779] ipam/ipam.go 155: Attempting to load block cidr=192.168.88.128/26 host="localhost" Jan 29 11:36:15.932676 containerd[1492]: 2025-01-29 11:36:15.898 [INFO][4779] ipam/ipam.go 232: Affinity is confirmed and block has been loaded cidr=192.168.88.128/26 host="localhost" Jan 29 11:36:15.932676 containerd[1492]: 2025-01-29 11:36:15.898 [INFO][4779] ipam/ipam.go 1180: Attempting to assign 1 addresses from block block=192.168.88.128/26 handle="k8s-pod-network.be99dd72866bde31a161a15e7127b99788f637959daf55f508cb5eba3062585f" host="localhost" Jan 29 11:36:15.932676 containerd[1492]: 2025-01-29 11:36:15.900 [INFO][4779] ipam/ipam.go 1685: Creating new handle: k8s-pod-network.be99dd72866bde31a161a15e7127b99788f637959daf55f508cb5eba3062585f Jan 29 11:36:15.932676 containerd[1492]: 2025-01-29 11:36:15.903 [INFO][4779] ipam/ipam.go 1203: Writing block in order to claim IPs block=192.168.88.128/26 handle="k8s-pod-network.be99dd72866bde31a161a15e7127b99788f637959daf55f508cb5eba3062585f" host="localhost" Jan 29 11:36:15.932676 containerd[1492]: 2025-01-29 11:36:15.913 [INFO][4779] ipam/ipam.go 1216: Successfully claimed IPs: [192.168.88.133/26] block=192.168.88.128/26 
handle="k8s-pod-network.be99dd72866bde31a161a15e7127b99788f637959daf55f508cb5eba3062585f" host="localhost" Jan 29 11:36:15.932676 containerd[1492]: 2025-01-29 11:36:15.913 [INFO][4779] ipam/ipam.go 847: Auto-assigned 1 out of 1 IPv4s: [192.168.88.133/26] handle="k8s-pod-network.be99dd72866bde31a161a15e7127b99788f637959daf55f508cb5eba3062585f" host="localhost" Jan 29 11:36:15.932676 containerd[1492]: 2025-01-29 11:36:15.913 [INFO][4779] ipam/ipam_plugin.go 374: Released host-wide IPAM lock. Jan 29 11:36:15.932676 containerd[1492]: 2025-01-29 11:36:15.913 [INFO][4779] ipam/ipam_plugin.go 283: Calico CNI IPAM assigned addresses IPv4=[192.168.88.133/26] IPv6=[] ContainerID="be99dd72866bde31a161a15e7127b99788f637959daf55f508cb5eba3062585f" HandleID="k8s-pod-network.be99dd72866bde31a161a15e7127b99788f637959daf55f508cb5eba3062585f" Workload="localhost-k8s-coredns--6f6b679f8f--8sz4x-eth0" Jan 29 11:36:15.933211 containerd[1492]: 2025-01-29 11:36:15.918 [INFO][4685] cni-plugin/k8s.go 386: Populated endpoint ContainerID="be99dd72866bde31a161a15e7127b99788f637959daf55f508cb5eba3062585f" Namespace="kube-system" Pod="coredns-6f6b679f8f-8sz4x" WorkloadEndpoint="localhost-k8s-coredns--6f6b679f8f--8sz4x-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"localhost-k8s-coredns--6f6b679f8f--8sz4x-eth0", GenerateName:"coredns-6f6b679f8f-", Namespace:"kube-system", SelfLink:"", UID:"d32f565b-8d7d-47d2-85bf-68725ec04cff", ResourceVersion:"764", Generation:0, CreationTimestamp:time.Date(2025, time.January, 29, 11, 35, 40, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"k8s-app":"kube-dns", "pod-template-hash":"6f6b679f8f", "projectcalico.org/namespace":"kube-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"coredns"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), 
Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"localhost", ContainerID:"", Pod:"coredns-6f6b679f8f-8sz4x", Endpoint:"eth0", ServiceAccountName:"coredns", IPNetworks:[]string{"192.168.88.133/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.kube-system", "ksa.kube-system.coredns"}, InterfaceName:"calibe67a9a5a98", MAC:"", Ports:[]v3.WorkloadEndpointPort{v3.WorkloadEndpointPort{Name:"dns", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"UDP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"dns-tcp", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"metrics", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x23c1, HostPort:0x0, HostIP:""}}, AllowSpoofedSourcePrefixes:[]string(nil)}} Jan 29 11:36:15.933211 containerd[1492]: 2025-01-29 11:36:15.918 [INFO][4685] cni-plugin/k8s.go 387: Calico CNI using IPs: [192.168.88.133/32] ContainerID="be99dd72866bde31a161a15e7127b99788f637959daf55f508cb5eba3062585f" Namespace="kube-system" Pod="coredns-6f6b679f8f-8sz4x" WorkloadEndpoint="localhost-k8s-coredns--6f6b679f8f--8sz4x-eth0" Jan 29 11:36:15.933211 containerd[1492]: 2025-01-29 11:36:15.918 [INFO][4685] cni-plugin/dataplane_linux.go 69: Setting the host side veth name to calibe67a9a5a98 ContainerID="be99dd72866bde31a161a15e7127b99788f637959daf55f508cb5eba3062585f" Namespace="kube-system" Pod="coredns-6f6b679f8f-8sz4x" WorkloadEndpoint="localhost-k8s-coredns--6f6b679f8f--8sz4x-eth0" Jan 29 11:36:15.933211 containerd[1492]: 2025-01-29 11:36:15.920 [INFO][4685] cni-plugin/dataplane_linux.go 508: Disabling IPv4 forwarding ContainerID="be99dd72866bde31a161a15e7127b99788f637959daf55f508cb5eba3062585f" Namespace="kube-system" Pod="coredns-6f6b679f8f-8sz4x" WorkloadEndpoint="localhost-k8s-coredns--6f6b679f8f--8sz4x-eth0" Jan 29 
11:36:15.933211 containerd[1492]: 2025-01-29 11:36:15.920 [INFO][4685] cni-plugin/k8s.go 414: Added Mac, interface name, and active container ID to endpoint ContainerID="be99dd72866bde31a161a15e7127b99788f637959daf55f508cb5eba3062585f" Namespace="kube-system" Pod="coredns-6f6b679f8f-8sz4x" WorkloadEndpoint="localhost-k8s-coredns--6f6b679f8f--8sz4x-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"localhost-k8s-coredns--6f6b679f8f--8sz4x-eth0", GenerateName:"coredns-6f6b679f8f-", Namespace:"kube-system", SelfLink:"", UID:"d32f565b-8d7d-47d2-85bf-68725ec04cff", ResourceVersion:"764", Generation:0, CreationTimestamp:time.Date(2025, time.January, 29, 11, 35, 40, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"k8s-app":"kube-dns", "pod-template-hash":"6f6b679f8f", "projectcalico.org/namespace":"kube-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"coredns"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"localhost", ContainerID:"be99dd72866bde31a161a15e7127b99788f637959daf55f508cb5eba3062585f", Pod:"coredns-6f6b679f8f-8sz4x", Endpoint:"eth0", ServiceAccountName:"coredns", IPNetworks:[]string{"192.168.88.133/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.kube-system", "ksa.kube-system.coredns"}, InterfaceName:"calibe67a9a5a98", MAC:"4a:72:27:7b:33:f7", Ports:[]v3.WorkloadEndpointPort{v3.WorkloadEndpointPort{Name:"dns", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"UDP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"dns-tcp", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x35, HostPort:0x0, HostIP:""}, 
v3.WorkloadEndpointPort{Name:"metrics", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x23c1, HostPort:0x0, HostIP:""}}, AllowSpoofedSourcePrefixes:[]string(nil)}} Jan 29 11:36:15.933211 containerd[1492]: 2025-01-29 11:36:15.929 [INFO][4685] cni-plugin/k8s.go 500: Wrote updated endpoint to datastore ContainerID="be99dd72866bde31a161a15e7127b99788f637959daf55f508cb5eba3062585f" Namespace="kube-system" Pod="coredns-6f6b679f8f-8sz4x" WorkloadEndpoint="localhost-k8s-coredns--6f6b679f8f--8sz4x-eth0" Jan 29 11:36:15.960667 containerd[1492]: time="2025-01-29T11:36:15.960570565Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Jan 29 11:36:15.960667 containerd[1492]: time="2025-01-29T11:36:15.960651637Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Jan 29 11:36:15.960667 containerd[1492]: time="2025-01-29T11:36:15.960671013Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Jan 29 11:36:15.960843 containerd[1492]: time="2025-01-29T11:36:15.960750452Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Jan 29 11:36:15.985091 systemd[1]: Started cri-containerd-be99dd72866bde31a161a15e7127b99788f637959daf55f508cb5eba3062585f.scope - libcontainer container be99dd72866bde31a161a15e7127b99788f637959daf55f508cb5eba3062585f. 
Jan 29 11:36:15.998262 systemd-resolved[1330]: Failed to determine the local hostname and LLMNR/mDNS names, ignoring: No such device or address Jan 29 11:36:16.020726 containerd[1492]: time="2025-01-29T11:36:16.020683278Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-6f6b679f8f-8sz4x,Uid:d32f565b-8d7d-47d2-85bf-68725ec04cff,Namespace:kube-system,Attempt:6,} returns sandbox id \"be99dd72866bde31a161a15e7127b99788f637959daf55f508cb5eba3062585f\"" Jan 29 11:36:16.021545 kubelet[2560]: E0129 11:36:16.021485 2560 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jan 29 11:36:16.023485 containerd[1492]: time="2025-01-29T11:36:16.023436242Z" level=info msg="CreateContainer within sandbox \"be99dd72866bde31a161a15e7127b99788f637959daf55f508cb5eba3062585f\" for container &ContainerMetadata{Name:coredns,Attempt:0,}" Jan 29 11:36:16.132919 systemd-networkd[1414]: calie75087da8cb: Link UP Jan 29 11:36:16.133865 systemd-networkd[1414]: calie75087da8cb: Gained carrier Jan 29 11:36:16.191274 containerd[1492]: 2025-01-29 11:36:15.235 [INFO][4757] cni-plugin/utils.go 100: File /var/lib/calico/mtu does not exist Jan 29 11:36:16.191274 containerd[1492]: 2025-01-29 11:36:15.258 [INFO][4757] cni-plugin/plugin.go 325: Calico CNI found existing endpoint: &{{WorkloadEndpoint projectcalico.org/v3} {localhost-k8s-coredns--6f6b679f8f--qqzzk-eth0 coredns-6f6b679f8f- kube-system cf2aa93c-f4bc-4322-9163-052200dd877a 774 0 2025-01-29 11:35:40 +0000 UTC map[k8s-app:kube-dns pod-template-hash:6f6b679f8f projectcalico.org/namespace:kube-system projectcalico.org/orchestrator:k8s projectcalico.org/serviceaccount:coredns] map[] [] [] []} {k8s localhost coredns-6f6b679f8f-qqzzk eth0 coredns [] [] [kns.kube-system ksa.kube-system.coredns] calie75087da8cb [{dns UDP 53 0 } {dns-tcp TCP 53 0 } {metrics TCP 9153 0 }] []}} 
ContainerID="4ba0ffcc3c393b39ffd642095302b00e9b56a14cac8c92763ccebbea377231f3" Namespace="kube-system" Pod="coredns-6f6b679f8f-qqzzk" WorkloadEndpoint="localhost-k8s-coredns--6f6b679f8f--qqzzk-" Jan 29 11:36:16.191274 containerd[1492]: 2025-01-29 11:36:15.258 [INFO][4757] cni-plugin/k8s.go 77: Extracted identifiers for CmdAddK8s ContainerID="4ba0ffcc3c393b39ffd642095302b00e9b56a14cac8c92763ccebbea377231f3" Namespace="kube-system" Pod="coredns-6f6b679f8f-qqzzk" WorkloadEndpoint="localhost-k8s-coredns--6f6b679f8f--qqzzk-eth0" Jan 29 11:36:16.191274 containerd[1492]: 2025-01-29 11:36:15.375 [INFO][4797] ipam/ipam_plugin.go 225: Calico CNI IPAM request count IPv4=1 IPv6=0 ContainerID="4ba0ffcc3c393b39ffd642095302b00e9b56a14cac8c92763ccebbea377231f3" HandleID="k8s-pod-network.4ba0ffcc3c393b39ffd642095302b00e9b56a14cac8c92763ccebbea377231f3" Workload="localhost-k8s-coredns--6f6b679f8f--qqzzk-eth0" Jan 29 11:36:16.191274 containerd[1492]: 2025-01-29 11:36:15.480 [INFO][4797] ipam/ipam_plugin.go 265: Auto assigning IP ContainerID="4ba0ffcc3c393b39ffd642095302b00e9b56a14cac8c92763ccebbea377231f3" HandleID="k8s-pod-network.4ba0ffcc3c393b39ffd642095302b00e9b56a14cac8c92763ccebbea377231f3" Workload="localhost-k8s-coredns--6f6b679f8f--qqzzk-eth0" assignArgs=ipam.AutoAssignArgs{Num4:1, Num6:0, HandleID:(*string)(0xc000366e10), Attrs:map[string]string{"namespace":"kube-system", "node":"localhost", "pod":"coredns-6f6b679f8f-qqzzk", "timestamp":"2025-01-29 11:36:15.375678476 +0000 UTC"}, Hostname:"localhost", IPv4Pools:[]net.IPNet{}, IPv6Pools:[]net.IPNet{}, MaxBlocksPerHost:0, HostReservedAttrIPv4s:(*ipam.HostReservedAttr)(nil), HostReservedAttrIPv6s:(*ipam.HostReservedAttr)(nil), IntendedUse:"Workload"} Jan 29 11:36:16.191274 containerd[1492]: 2025-01-29 11:36:15.480 [INFO][4797] ipam/ipam_plugin.go 353: About to acquire host-wide IPAM lock. Jan 29 11:36:16.191274 containerd[1492]: 2025-01-29 11:36:15.914 [INFO][4797] ipam/ipam_plugin.go 368: Acquired host-wide IPAM lock. 
Jan 29 11:36:16.191274 containerd[1492]: 2025-01-29 11:36:15.914 [INFO][4797] ipam/ipam.go 107: Auto-assign 1 ipv4, 0 ipv6 addrs for host 'localhost' Jan 29 11:36:16.191274 containerd[1492]: 2025-01-29 11:36:15.984 [INFO][4797] ipam/ipam.go 660: Looking up existing affinities for host handle="k8s-pod-network.4ba0ffcc3c393b39ffd642095302b00e9b56a14cac8c92763ccebbea377231f3" host="localhost" Jan 29 11:36:16.191274 containerd[1492]: 2025-01-29 11:36:15.990 [INFO][4797] ipam/ipam.go 372: Looking up existing affinities for host host="localhost" Jan 29 11:36:16.191274 containerd[1492]: 2025-01-29 11:36:15.994 [INFO][4797] ipam/ipam.go 489: Trying affinity for 192.168.88.128/26 host="localhost" Jan 29 11:36:16.191274 containerd[1492]: 2025-01-29 11:36:15.996 [INFO][4797] ipam/ipam.go 155: Attempting to load block cidr=192.168.88.128/26 host="localhost" Jan 29 11:36:16.191274 containerd[1492]: 2025-01-29 11:36:15.998 [INFO][4797] ipam/ipam.go 232: Affinity is confirmed and block has been loaded cidr=192.168.88.128/26 host="localhost" Jan 29 11:36:16.191274 containerd[1492]: 2025-01-29 11:36:15.998 [INFO][4797] ipam/ipam.go 1180: Attempting to assign 1 addresses from block block=192.168.88.128/26 handle="k8s-pod-network.4ba0ffcc3c393b39ffd642095302b00e9b56a14cac8c92763ccebbea377231f3" host="localhost" Jan 29 11:36:16.191274 containerd[1492]: 2025-01-29 11:36:16.000 [INFO][4797] ipam/ipam.go 1685: Creating new handle: k8s-pod-network.4ba0ffcc3c393b39ffd642095302b00e9b56a14cac8c92763ccebbea377231f3 Jan 29 11:36:16.191274 containerd[1492]: 2025-01-29 11:36:16.022 [INFO][4797] ipam/ipam.go 1203: Writing block in order to claim IPs block=192.168.88.128/26 handle="k8s-pod-network.4ba0ffcc3c393b39ffd642095302b00e9b56a14cac8c92763ccebbea377231f3" host="localhost" Jan 29 11:36:16.191274 containerd[1492]: 2025-01-29 11:36:16.125 [INFO][4797] ipam/ipam.go 1216: Successfully claimed IPs: [192.168.88.134/26] block=192.168.88.128/26 
handle="k8s-pod-network.4ba0ffcc3c393b39ffd642095302b00e9b56a14cac8c92763ccebbea377231f3" host="localhost" Jan 29 11:36:16.191274 containerd[1492]: 2025-01-29 11:36:16.125 [INFO][4797] ipam/ipam.go 847: Auto-assigned 1 out of 1 IPv4s: [192.168.88.134/26] handle="k8s-pod-network.4ba0ffcc3c393b39ffd642095302b00e9b56a14cac8c92763ccebbea377231f3" host="localhost" Jan 29 11:36:16.191274 containerd[1492]: 2025-01-29 11:36:16.125 [INFO][4797] ipam/ipam_plugin.go 374: Released host-wide IPAM lock. Jan 29 11:36:16.191274 containerd[1492]: 2025-01-29 11:36:16.125 [INFO][4797] ipam/ipam_plugin.go 283: Calico CNI IPAM assigned addresses IPv4=[192.168.88.134/26] IPv6=[] ContainerID="4ba0ffcc3c393b39ffd642095302b00e9b56a14cac8c92763ccebbea377231f3" HandleID="k8s-pod-network.4ba0ffcc3c393b39ffd642095302b00e9b56a14cac8c92763ccebbea377231f3" Workload="localhost-k8s-coredns--6f6b679f8f--qqzzk-eth0" Jan 29 11:36:16.192058 containerd[1492]: 2025-01-29 11:36:16.129 [INFO][4757] cni-plugin/k8s.go 386: Populated endpoint ContainerID="4ba0ffcc3c393b39ffd642095302b00e9b56a14cac8c92763ccebbea377231f3" Namespace="kube-system" Pod="coredns-6f6b679f8f-qqzzk" WorkloadEndpoint="localhost-k8s-coredns--6f6b679f8f--qqzzk-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"localhost-k8s-coredns--6f6b679f8f--qqzzk-eth0", GenerateName:"coredns-6f6b679f8f-", Namespace:"kube-system", SelfLink:"", UID:"cf2aa93c-f4bc-4322-9163-052200dd877a", ResourceVersion:"774", Generation:0, CreationTimestamp:time.Date(2025, time.January, 29, 11, 35, 40, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"k8s-app":"kube-dns", "pod-template-hash":"6f6b679f8f", "projectcalico.org/namespace":"kube-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"coredns"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), 
Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"localhost", ContainerID:"", Pod:"coredns-6f6b679f8f-qqzzk", Endpoint:"eth0", ServiceAccountName:"coredns", IPNetworks:[]string{"192.168.88.134/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.kube-system", "ksa.kube-system.coredns"}, InterfaceName:"calie75087da8cb", MAC:"", Ports:[]v3.WorkloadEndpointPort{v3.WorkloadEndpointPort{Name:"dns", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"UDP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"dns-tcp", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"metrics", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x23c1, HostPort:0x0, HostIP:""}}, AllowSpoofedSourcePrefixes:[]string(nil)}} Jan 29 11:36:16.192058 containerd[1492]: 2025-01-29 11:36:16.129 [INFO][4757] cni-plugin/k8s.go 387: Calico CNI using IPs: [192.168.88.134/32] ContainerID="4ba0ffcc3c393b39ffd642095302b00e9b56a14cac8c92763ccebbea377231f3" Namespace="kube-system" Pod="coredns-6f6b679f8f-qqzzk" WorkloadEndpoint="localhost-k8s-coredns--6f6b679f8f--qqzzk-eth0" Jan 29 11:36:16.192058 containerd[1492]: 2025-01-29 11:36:16.129 [INFO][4757] cni-plugin/dataplane_linux.go 69: Setting the host side veth name to calie75087da8cb ContainerID="4ba0ffcc3c393b39ffd642095302b00e9b56a14cac8c92763ccebbea377231f3" Namespace="kube-system" Pod="coredns-6f6b679f8f-qqzzk" WorkloadEndpoint="localhost-k8s-coredns--6f6b679f8f--qqzzk-eth0" Jan 29 11:36:16.192058 containerd[1492]: 2025-01-29 11:36:16.130 [INFO][4757] cni-plugin/dataplane_linux.go 508: Disabling IPv4 forwarding ContainerID="4ba0ffcc3c393b39ffd642095302b00e9b56a14cac8c92763ccebbea377231f3" Namespace="kube-system" Pod="coredns-6f6b679f8f-qqzzk" WorkloadEndpoint="localhost-k8s-coredns--6f6b679f8f--qqzzk-eth0" Jan 29 
11:36:16.192058 containerd[1492]: 2025-01-29 11:36:16.131 [INFO][4757] cni-plugin/k8s.go 414: Added Mac, interface name, and active container ID to endpoint ContainerID="4ba0ffcc3c393b39ffd642095302b00e9b56a14cac8c92763ccebbea377231f3" Namespace="kube-system" Pod="coredns-6f6b679f8f-qqzzk" WorkloadEndpoint="localhost-k8s-coredns--6f6b679f8f--qqzzk-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"localhost-k8s-coredns--6f6b679f8f--qqzzk-eth0", GenerateName:"coredns-6f6b679f8f-", Namespace:"kube-system", SelfLink:"", UID:"cf2aa93c-f4bc-4322-9163-052200dd877a", ResourceVersion:"774", Generation:0, CreationTimestamp:time.Date(2025, time.January, 29, 11, 35, 40, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"k8s-app":"kube-dns", "pod-template-hash":"6f6b679f8f", "projectcalico.org/namespace":"kube-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"coredns"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"localhost", ContainerID:"4ba0ffcc3c393b39ffd642095302b00e9b56a14cac8c92763ccebbea377231f3", Pod:"coredns-6f6b679f8f-qqzzk", Endpoint:"eth0", ServiceAccountName:"coredns", IPNetworks:[]string{"192.168.88.134/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.kube-system", "ksa.kube-system.coredns"}, InterfaceName:"calie75087da8cb", MAC:"fa:da:bd:d4:12:a9", Ports:[]v3.WorkloadEndpointPort{v3.WorkloadEndpointPort{Name:"dns", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"UDP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"dns-tcp", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x35, HostPort:0x0, HostIP:""}, 
v3.WorkloadEndpointPort{Name:"metrics", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x23c1, HostPort:0x0, HostIP:""}}, AllowSpoofedSourcePrefixes:[]string(nil)}} Jan 29 11:36:16.192058 containerd[1492]: 2025-01-29 11:36:16.188 [INFO][4757] cni-plugin/k8s.go 500: Wrote updated endpoint to datastore ContainerID="4ba0ffcc3c393b39ffd642095302b00e9b56a14cac8c92763ccebbea377231f3" Namespace="kube-system" Pod="coredns-6f6b679f8f-qqzzk" WorkloadEndpoint="localhost-k8s-coredns--6f6b679f8f--qqzzk-eth0" Jan 29 11:36:16.392775 containerd[1492]: time="2025-01-29T11:36:16.392597422Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Jan 29 11:36:16.392775 containerd[1492]: time="2025-01-29T11:36:16.392665390Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Jan 29 11:36:16.392775 containerd[1492]: time="2025-01-29T11:36:16.392675740Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Jan 29 11:36:16.392775 containerd[1492]: time="2025-01-29T11:36:16.392737185Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Jan 29 11:36:16.413755 systemd[1]: Started cri-containerd-4ba0ffcc3c393b39ffd642095302b00e9b56a14cac8c92763ccebbea377231f3.scope - libcontainer container 4ba0ffcc3c393b39ffd642095302b00e9b56a14cac8c92763ccebbea377231f3. 
Jan 29 11:36:16.425506 systemd-resolved[1330]: Failed to determine the local hostname and LLMNR/mDNS names, ignoring: No such device or address Jan 29 11:36:16.460420 containerd[1492]: time="2025-01-29T11:36:16.460160336Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-6f6b679f8f-qqzzk,Uid:cf2aa93c-f4bc-4322-9163-052200dd877a,Namespace:kube-system,Attempt:6,} returns sandbox id \"4ba0ffcc3c393b39ffd642095302b00e9b56a14cac8c92763ccebbea377231f3\"" Jan 29 11:36:16.461402 kubelet[2560]: E0129 11:36:16.461083 2560 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jan 29 11:36:16.465557 containerd[1492]: time="2025-01-29T11:36:16.465536260Z" level=info msg="CreateContainer within sandbox \"4ba0ffcc3c393b39ffd642095302b00e9b56a14cac8c92763ccebbea377231f3\" for container &ContainerMetadata{Name:coredns,Attempt:0,}" Jan 29 11:36:16.573504 containerd[1492]: time="2025-01-29T11:36:16.573467189Z" level=info msg="CreateContainer within sandbox \"be99dd72866bde31a161a15e7127b99788f637959daf55f508cb5eba3062585f\" for &ContainerMetadata{Name:coredns,Attempt:0,} returns container id \"bfd86129326a9ede64417ec114fb5a642a637ae9cceee2249efd0ab53b3ccdab\"" Jan 29 11:36:16.576298 containerd[1492]: time="2025-01-29T11:36:16.576171040Z" level=info msg="StartContainer for \"bfd86129326a9ede64417ec114fb5a642a637ae9cceee2249efd0ab53b3ccdab\"" Jan 29 11:36:16.579036 containerd[1492]: time="2025-01-29T11:36:16.578427421Z" level=info msg="CreateContainer within sandbox \"4ba0ffcc3c393b39ffd642095302b00e9b56a14cac8c92763ccebbea377231f3\" for &ContainerMetadata{Name:coredns,Attempt:0,} returns container id \"0bcd16a8754a2eabd7032fbf1e76336ce9b8adac3daf9c4ba8a3072fccb68830\"" Jan 29 11:36:16.582196 containerd[1492]: time="2025-01-29T11:36:16.582176616Z" level=info msg="StartContainer for \"0bcd16a8754a2eabd7032fbf1e76336ce9b8adac3daf9c4ba8a3072fccb68830\"" 
Jan 29 11:36:16.636839 systemd[1]: Started cri-containerd-bfd86129326a9ede64417ec114fb5a642a637ae9cceee2249efd0ab53b3ccdab.scope - libcontainer container bfd86129326a9ede64417ec114fb5a642a637ae9cceee2249efd0ab53b3ccdab. Jan 29 11:36:16.639932 systemd[1]: Started cri-containerd-0bcd16a8754a2eabd7032fbf1e76336ce9b8adac3daf9c4ba8a3072fccb68830.scope - libcontainer container 0bcd16a8754a2eabd7032fbf1e76336ce9b8adac3daf9c4ba8a3072fccb68830. Jan 29 11:36:16.645706 kernel: bpftool[5342]: memfd_create() called without MFD_EXEC or MFD_NOEXEC_SEAL set Jan 29 11:36:16.685721 containerd[1492]: time="2025-01-29T11:36:16.685603612Z" level=info msg="StartContainer for \"0bcd16a8754a2eabd7032fbf1e76336ce9b8adac3daf9c4ba8a3072fccb68830\" returns successfully" Jan 29 11:36:16.686012 containerd[1492]: time="2025-01-29T11:36:16.685674866Z" level=info msg="StartContainer for \"bfd86129326a9ede64417ec114fb5a642a637ae9cceee2249efd0ab53b3ccdab\" returns successfully" Jan 29 11:36:16.773553 kubelet[2560]: E0129 11:36:16.773508 2560 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jan 29 11:36:16.787242 kubelet[2560]: I0129 11:36:16.787136 2560 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/coredns-6f6b679f8f-8sz4x" podStartSLOduration=36.787117611 podStartE2EDuration="36.787117611s" podCreationTimestamp="2025-01-29 11:35:40 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-01-29 11:36:16.786246935 +0000 UTC m=+42.807056703" watchObservedRunningTime="2025-01-29 11:36:16.787117611 +0000 UTC m=+42.807927379" Jan 29 11:36:16.789193 kubelet[2560]: E0129 11:36:16.789161 2560 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jan 
29 11:36:16.789726 kubelet[2560]: E0129 11:36:16.789703 2560 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jan 29 11:36:16.820695 kubelet[2560]: I0129 11:36:16.818750 2560 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/coredns-6f6b679f8f-qqzzk" podStartSLOduration=36.818732496 podStartE2EDuration="36.818732496s" podCreationTimestamp="2025-01-29 11:35:40 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-01-29 11:36:16.818016701 +0000 UTC m=+42.838826459" watchObservedRunningTime="2025-01-29 11:36:16.818732496 +0000 UTC m=+42.839542264" Jan 29 11:36:16.929787 systemd-networkd[1414]: cali50704b3a3ca: Gained IPv6LL Jan 29 11:36:16.990725 systemd-networkd[1414]: cali79f05abc360: Gained IPv6LL Jan 29 11:36:17.012329 systemd-networkd[1414]: vxlan.calico: Link UP Jan 29 11:36:17.012343 systemd-networkd[1414]: vxlan.calico: Gained carrier Jan 29 11:36:17.047933 systemd[1]: Started sshd@9-10.0.0.107:22-10.0.0.1:50776.service - OpenSSH per-connection server daemon (10.0.0.1:50776). Jan 29 11:36:17.054774 systemd-networkd[1414]: cali3e0819ada2b: Gained IPv6LL Jan 29 11:36:17.122081 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount218499587.mount: Deactivated successfully. Jan 29 11:36:17.178456 sshd[5445]: Accepted publickey for core from 10.0.0.1 port 50776 ssh2: RSA SHA256:vglZfOE0APgUpJbg1gfFAEfTfpzlM1a6LSSiwuQWd4A Jan 29 11:36:17.180117 sshd-session[5445]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Jan 29 11:36:17.184062 systemd-logind[1475]: New session 10 of user core. Jan 29 11:36:17.194825 systemd[1]: Started session-10.scope - Session 10 of User core. 
Jan 29 11:36:17.312699 systemd-networkd[1414]: cali295396afbb9: Gained IPv6LL Jan 29 11:36:17.336813 sshd[5447]: Connection closed by 10.0.0.1 port 50776 Jan 29 11:36:17.337188 sshd-session[5445]: pam_unix(sshd:session): session closed for user core Jan 29 11:36:17.341331 systemd[1]: sshd@9-10.0.0.107:22-10.0.0.1:50776.service: Deactivated successfully. Jan 29 11:36:17.344065 systemd[1]: session-10.scope: Deactivated successfully. Jan 29 11:36:17.346212 systemd-logind[1475]: Session 10 logged out. Waiting for processes to exit. Jan 29 11:36:17.347236 systemd-logind[1475]: Removed session 10. Jan 29 11:36:17.502779 systemd-networkd[1414]: calie75087da8cb: Gained IPv6LL Jan 29 11:36:17.630870 systemd-networkd[1414]: calibe67a9a5a98: Gained IPv6LL Jan 29 11:36:17.791349 kubelet[2560]: E0129 11:36:17.791317 2560 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jan 29 11:36:17.791886 kubelet[2560]: E0129 11:36:17.791414 2560 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jan 29 11:36:18.654790 systemd-networkd[1414]: vxlan.calico: Gained IPv6LL Jan 29 11:36:18.792782 kubelet[2560]: E0129 11:36:18.792749 2560 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jan 29 11:36:18.793240 kubelet[2560]: E0129 11:36:18.792882 2560 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jan 29 11:36:19.212686 containerd[1492]: time="2025-01-29T11:36:19.212621055Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/apiserver:v3.29.1\" labels:{key:\"io.cri-containerd.image\" 
value:\"managed\"}" Jan 29 11:36:19.213353 containerd[1492]: time="2025-01-29T11:36:19.213306003Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/apiserver:v3.29.1: active requests=0, bytes read=42001404" Jan 29 11:36:19.214326 containerd[1492]: time="2025-01-29T11:36:19.214265275Z" level=info msg="ImageCreate event name:\"sha256:421726ace5ed13894f7edf594dd3a462947aedc13d0f69d08525d7369477fb70\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jan 29 11:36:19.216351 containerd[1492]: time="2025-01-29T11:36:19.216311730Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/apiserver@sha256:b8c43e264fe52e0c327b0bf3ac882a0224b33bdd7f4ff58a74242da7d9b00486\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jan 29 11:36:19.217047 containerd[1492]: time="2025-01-29T11:36:19.217022254Z" level=info msg="Pulled image \"ghcr.io/flatcar/calico/apiserver:v3.29.1\" with image id \"sha256:421726ace5ed13894f7edf594dd3a462947aedc13d0f69d08525d7369477fb70\", repo tag \"ghcr.io/flatcar/calico/apiserver:v3.29.1\", repo digest \"ghcr.io/flatcar/calico/apiserver@sha256:b8c43e264fe52e0c327b0bf3ac882a0224b33bdd7f4ff58a74242da7d9b00486\", size \"43494504\" in 3.571494518s" Jan 29 11:36:19.217047 containerd[1492]: time="2025-01-29T11:36:19.217046089Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/apiserver:v3.29.1\" returns image reference \"sha256:421726ace5ed13894f7edf594dd3a462947aedc13d0f69d08525d7369477fb70\"" Jan 29 11:36:19.218023 containerd[1492]: time="2025-01-29T11:36:19.217887590Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/apiserver:v3.29.1\"" Jan 29 11:36:19.218893 containerd[1492]: time="2025-01-29T11:36:19.218865116Z" level=info msg="CreateContainer within sandbox \"c1a9817c1635cef0eb688e5c53d2f784687ef0541fe6a4a858cfb18ac00a2ea0\" for container &ContainerMetadata{Name:calico-apiserver,Attempt:0,}" Jan 29 11:36:19.232323 containerd[1492]: time="2025-01-29T11:36:19.232274858Z" level=info msg="CreateContainer within 
sandbox \"c1a9817c1635cef0eb688e5c53d2f784687ef0541fe6a4a858cfb18ac00a2ea0\" for &ContainerMetadata{Name:calico-apiserver,Attempt:0,} returns container id \"dc98453b5c0394adc684cee7ea70ca68ff86784b7e00f155dd893bcf5d67f45a\"" Jan 29 11:36:19.233693 containerd[1492]: time="2025-01-29T11:36:19.232745362Z" level=info msg="StartContainer for \"dc98453b5c0394adc684cee7ea70ca68ff86784b7e00f155dd893bcf5d67f45a\"" Jan 29 11:36:19.262799 systemd[1]: Started cri-containerd-dc98453b5c0394adc684cee7ea70ca68ff86784b7e00f155dd893bcf5d67f45a.scope - libcontainer container dc98453b5c0394adc684cee7ea70ca68ff86784b7e00f155dd893bcf5d67f45a. Jan 29 11:36:19.302717 containerd[1492]: time="2025-01-29T11:36:19.302675055Z" level=info msg="StartContainer for \"dc98453b5c0394adc684cee7ea70ca68ff86784b7e00f155dd893bcf5d67f45a\" returns successfully" Jan 29 11:36:19.596066 containerd[1492]: time="2025-01-29T11:36:19.596015126Z" level=info msg="ImageUpdate event name:\"ghcr.io/flatcar/calico/apiserver:v3.29.1\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jan 29 11:36:19.596880 containerd[1492]: time="2025-01-29T11:36:19.596844614Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/apiserver:v3.29.1: active requests=0, bytes read=77" Jan 29 11:36:19.599437 containerd[1492]: time="2025-01-29T11:36:19.599401678Z" level=info msg="Pulled image \"ghcr.io/flatcar/calico/apiserver:v3.29.1\" with image id \"sha256:421726ace5ed13894f7edf594dd3a462947aedc13d0f69d08525d7369477fb70\", repo tag \"ghcr.io/flatcar/calico/apiserver:v3.29.1\", repo digest \"ghcr.io/flatcar/calico/apiserver@sha256:b8c43e264fe52e0c327b0bf3ac882a0224b33bdd7f4ff58a74242da7d9b00486\", size \"43494504\" in 381.482117ms" Jan 29 11:36:19.599437 containerd[1492]: time="2025-01-29T11:36:19.599436494Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/apiserver:v3.29.1\" returns image reference \"sha256:421726ace5ed13894f7edf594dd3a462947aedc13d0f69d08525d7369477fb70\"" Jan 29 11:36:19.600334 containerd[1492]: 
time="2025-01-29T11:36:19.600277042Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/kube-controllers:v3.29.1\"" Jan 29 11:36:19.601501 containerd[1492]: time="2025-01-29T11:36:19.601461918Z" level=info msg="CreateContainer within sandbox \"93dde905cbb451fba101889b6398ee03b451ccb3620fb837b534493ec9e9e140\" for container &ContainerMetadata{Name:calico-apiserver,Attempt:0,}" Jan 29 11:36:19.617115 containerd[1492]: time="2025-01-29T11:36:19.617062797Z" level=info msg="CreateContainer within sandbox \"93dde905cbb451fba101889b6398ee03b451ccb3620fb837b534493ec9e9e140\" for &ContainerMetadata{Name:calico-apiserver,Attempt:0,} returns container id \"b0167c555176428f564497e6f512ef421307b3a03577d07a11661d5f97cecd33\"" Jan 29 11:36:19.618005 containerd[1492]: time="2025-01-29T11:36:19.617651924Z" level=info msg="StartContainer for \"b0167c555176428f564497e6f512ef421307b3a03577d07a11661d5f97cecd33\"" Jan 29 11:36:19.654817 systemd[1]: Started cri-containerd-b0167c555176428f564497e6f512ef421307b3a03577d07a11661d5f97cecd33.scope - libcontainer container b0167c555176428f564497e6f512ef421307b3a03577d07a11661d5f97cecd33. 
Jan 29 11:36:19.706603 containerd[1492]: time="2025-01-29T11:36:19.706568987Z" level=info msg="StartContainer for \"b0167c555176428f564497e6f512ef421307b3a03577d07a11661d5f97cecd33\" returns successfully" Jan 29 11:36:19.806287 kubelet[2560]: E0129 11:36:19.806253 2560 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jan 29 11:36:20.102947 kubelet[2560]: I0129 11:36:20.102883 2560 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="calico-apiserver/calico-apiserver-78d7549f7d-5n5j6" podStartSLOduration=24.530174048 podStartE2EDuration="28.102864122s" podCreationTimestamp="2025-01-29 11:35:52 +0000 UTC" firstStartedPulling="2025-01-29 11:36:15.645044097 +0000 UTC m=+41.665853865" lastFinishedPulling="2025-01-29 11:36:19.217734161 +0000 UTC m=+45.238543939" observedRunningTime="2025-01-29 11:36:19.935144519 +0000 UTC m=+45.955954287" watchObservedRunningTime="2025-01-29 11:36:20.102864122 +0000 UTC m=+46.123673890" Jan 29 11:36:20.103383 kubelet[2560]: I0129 11:36:20.103181 2560 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="calico-apiserver/calico-apiserver-78d7549f7d-g9z2x" podStartSLOduration=24.218498982 podStartE2EDuration="28.103177421s" podCreationTimestamp="2025-01-29 11:35:52 +0000 UTC" firstStartedPulling="2025-01-29 11:36:15.715479219 +0000 UTC m=+41.736288987" lastFinishedPulling="2025-01-29 11:36:19.600157658 +0000 UTC m=+45.620967426" observedRunningTime="2025-01-29 11:36:20.102482305 +0000 UTC m=+46.123292083" watchObservedRunningTime="2025-01-29 11:36:20.103177421 +0000 UTC m=+46.123987189" Jan 29 11:36:20.807969 kubelet[2560]: I0129 11:36:20.807930 2560 prober_manager.go:312] "Failed to trigger a manual run" probe="Readiness" Jan 29 11:36:20.807969 kubelet[2560]: I0129 11:36:20.807957 2560 prober_manager.go:312] "Failed to trigger a manual run" probe="Readiness" Jan 29 
11:36:22.350240 systemd[1]: Started sshd@10-10.0.0.107:22-10.0.0.1:33204.service - OpenSSH per-connection server daemon (10.0.0.1:33204). Jan 29 11:36:22.714817 sshd[5606]: Accepted publickey for core from 10.0.0.1 port 33204 ssh2: RSA SHA256:vglZfOE0APgUpJbg1gfFAEfTfpzlM1a6LSSiwuQWd4A Jan 29 11:36:22.716133 sshd-session[5606]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Jan 29 11:36:22.724934 systemd-logind[1475]: New session 11 of user core. Jan 29 11:36:22.731936 systemd[1]: Started session-11.scope - Session 11 of User core. Jan 29 11:36:22.980204 sshd[5614]: Connection closed by 10.0.0.1 port 33204 Jan 29 11:36:22.981272 sshd-session[5606]: pam_unix(sshd:session): session closed for user core Jan 29 11:36:22.993966 systemd[1]: sshd@10-10.0.0.107:22-10.0.0.1:33204.service: Deactivated successfully. Jan 29 11:36:22.996532 systemd[1]: session-11.scope: Deactivated successfully. Jan 29 11:36:22.999449 systemd-logind[1475]: Session 11 logged out. Waiting for processes to exit. Jan 29 11:36:23.006193 systemd[1]: Started sshd@11-10.0.0.107:22-10.0.0.1:33208.service - OpenSSH per-connection server daemon (10.0.0.1:33208). Jan 29 11:36:23.009579 systemd-logind[1475]: Removed session 11. Jan 29 11:36:23.048232 sshd[5629]: Accepted publickey for core from 10.0.0.1 port 33208 ssh2: RSA SHA256:vglZfOE0APgUpJbg1gfFAEfTfpzlM1a6LSSiwuQWd4A Jan 29 11:36:23.050356 sshd-session[5629]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Jan 29 11:36:23.055109 systemd-logind[1475]: New session 12 of user core. Jan 29 11:36:23.067014 systemd[1]: Started session-12.scope - Session 12 of User core. Jan 29 11:36:23.260176 sshd[5631]: Connection closed by 10.0.0.1 port 33208 Jan 29 11:36:23.260852 sshd-session[5629]: pam_unix(sshd:session): session closed for user core Jan 29 11:36:23.271788 systemd[1]: sshd@11-10.0.0.107:22-10.0.0.1:33208.service: Deactivated successfully. 
Jan 29 11:36:23.273723 systemd[1]: session-12.scope: Deactivated successfully. Jan 29 11:36:23.275528 systemd-logind[1475]: Session 12 logged out. Waiting for processes to exit. Jan 29 11:36:23.276895 systemd[1]: Started sshd@12-10.0.0.107:22-10.0.0.1:33216.service - OpenSSH per-connection server daemon (10.0.0.1:33216). Jan 29 11:36:23.277721 systemd-logind[1475]: Removed session 12. Jan 29 11:36:23.310675 containerd[1492]: time="2025-01-29T11:36:23.310610027Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/kube-controllers:v3.29.1\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jan 29 11:36:23.319984 sshd[5641]: Accepted publickey for core from 10.0.0.1 port 33216 ssh2: RSA SHA256:vglZfOE0APgUpJbg1gfFAEfTfpzlM1a6LSSiwuQWd4A Jan 29 11:36:23.321807 sshd-session[5641]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Jan 29 11:36:23.326132 systemd-logind[1475]: New session 13 of user core. Jan 29 11:36:23.337803 systemd[1]: Started session-13.scope - Session 13 of User core. 
Jan 29 11:36:23.359225 containerd[1492]: time="2025-01-29T11:36:23.359139820Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/kube-controllers:v3.29.1: active requests=0, bytes read=34141192" Jan 29 11:36:23.377466 containerd[1492]: time="2025-01-29T11:36:23.377408207Z" level=info msg="ImageCreate event name:\"sha256:6331715a2ae96b18a770a395cac108321d108e445e08b616e5bc9fbd1f9c21da\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jan 29 11:36:23.418790 containerd[1492]: time="2025-01-29T11:36:23.418738170Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/kube-controllers@sha256:1072d6a98167a14ca361e9ce757733f9bae36d1f1c6a9621ea10934b6b1e10d9\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jan 29 11:36:23.419655 containerd[1492]: time="2025-01-29T11:36:23.419487908Z" level=info msg="Pulled image \"ghcr.io/flatcar/calico/kube-controllers:v3.29.1\" with image id \"sha256:6331715a2ae96b18a770a395cac108321d108e445e08b616e5bc9fbd1f9c21da\", repo tag \"ghcr.io/flatcar/calico/kube-controllers:v3.29.1\", repo digest \"ghcr.io/flatcar/calico/kube-controllers@sha256:1072d6a98167a14ca361e9ce757733f9bae36d1f1c6a9621ea10934b6b1e10d9\", size \"35634244\" in 3.819183926s" Jan 29 11:36:23.419655 containerd[1492]: time="2025-01-29T11:36:23.419521601Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/kube-controllers:v3.29.1\" returns image reference \"sha256:6331715a2ae96b18a770a395cac108321d108e445e08b616e5bc9fbd1f9c21da\"" Jan 29 11:36:23.420993 containerd[1492]: time="2025-01-29T11:36:23.420750970Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/csi:v3.29.1\"" Jan 29 11:36:23.431693 containerd[1492]: time="2025-01-29T11:36:23.430902083Z" level=info msg="CreateContainer within sandbox \"db1643cc31dc292653fff6f3b1f70b1c50776745befb2152f4fc1fb12248accc\" for container &ContainerMetadata{Name:calico-kube-controllers,Attempt:0,}" Jan 29 11:36:23.559817 sshd[5643]: Connection closed by 10.0.0.1 port 33216 Jan 29 11:36:23.560168 
sshd-session[5641]: pam_unix(sshd:session): session closed for user core Jan 29 11:36:23.564013 systemd[1]: sshd@12-10.0.0.107:22-10.0.0.1:33216.service: Deactivated successfully. Jan 29 11:36:23.565872 systemd[1]: session-13.scope: Deactivated successfully. Jan 29 11:36:23.567745 systemd-logind[1475]: Session 13 logged out. Waiting for processes to exit. Jan 29 11:36:23.568698 systemd-logind[1475]: Removed session 13. Jan 29 11:36:24.117046 containerd[1492]: time="2025-01-29T11:36:24.116971350Z" level=info msg="CreateContainer within sandbox \"db1643cc31dc292653fff6f3b1f70b1c50776745befb2152f4fc1fb12248accc\" for &ContainerMetadata{Name:calico-kube-controllers,Attempt:0,} returns container id \"d8589dae18922eea77c7454d29fe6785f4ae19379a44c885df5747d35179b5ff\"" Jan 29 11:36:24.117684 containerd[1492]: time="2025-01-29T11:36:24.117617924Z" level=info msg="StartContainer for \"d8589dae18922eea77c7454d29fe6785f4ae19379a44c885df5747d35179b5ff\"" Jan 29 11:36:24.154138 systemd[1]: Started cri-containerd-d8589dae18922eea77c7454d29fe6785f4ae19379a44c885df5747d35179b5ff.scope - libcontainer container d8589dae18922eea77c7454d29fe6785f4ae19379a44c885df5747d35179b5ff. 
Jan 29 11:36:24.204026 containerd[1492]: time="2025-01-29T11:36:24.203855616Z" level=info msg="StartContainer for \"d8589dae18922eea77c7454d29fe6785f4ae19379a44c885df5747d35179b5ff\" returns successfully" Jan 29 11:36:25.018676 kubelet[2560]: I0129 11:36:25.018350 2560 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="calico-system/calico-kube-controllers-7f5f6fb96-hcxll" podStartSLOduration=25.429540049 podStartE2EDuration="33.018331183s" podCreationTimestamp="2025-01-29 11:35:52 +0000 UTC" firstStartedPulling="2025-01-29 11:36:15.831479975 +0000 UTC m=+41.852289743" lastFinishedPulling="2025-01-29 11:36:23.420271109 +0000 UTC m=+49.441080877" observedRunningTime="2025-01-29 11:36:25.017730425 +0000 UTC m=+51.038540193" watchObservedRunningTime="2025-01-29 11:36:25.018331183 +0000 UTC m=+51.039140951" Jan 29 11:36:26.310168 kubelet[2560]: I0129 11:36:26.310135 2560 prober_manager.go:312] "Failed to trigger a manual run" probe="Readiness" Jan 29 11:36:27.901934 containerd[1492]: time="2025-01-29T11:36:27.901810523Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/csi:v3.29.1\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jan 29 11:36:27.919367 containerd[1492]: time="2025-01-29T11:36:27.918674412Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/csi:v3.29.1: active requests=0, bytes read=7902632" Jan 29 11:36:27.922653 containerd[1492]: time="2025-01-29T11:36:27.922372505Z" level=info msg="ImageCreate event name:\"sha256:bda8c42e04758c4f061339e213f50ccdc7502c4176fbf631aa12357e62b63540\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jan 29 11:36:27.934692 containerd[1492]: time="2025-01-29T11:36:27.932586741Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/csi@sha256:eaa7e01fb16b603c155a67b81f16992281db7f831684c7b2081d3434587a7ff3\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jan 29 11:36:27.934692 containerd[1492]: time="2025-01-29T11:36:27.933484516Z" 
level=info msg="Pulled image \"ghcr.io/flatcar/calico/csi:v3.29.1\" with image id \"sha256:bda8c42e04758c4f061339e213f50ccdc7502c4176fbf631aa12357e62b63540\", repo tag \"ghcr.io/flatcar/calico/csi:v3.29.1\", repo digest \"ghcr.io/flatcar/calico/csi@sha256:eaa7e01fb16b603c155a67b81f16992281db7f831684c7b2081d3434587a7ff3\", size \"9395716\" in 4.51270348s" Jan 29 11:36:27.934692 containerd[1492]: time="2025-01-29T11:36:27.933509443Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/csi:v3.29.1\" returns image reference \"sha256:bda8c42e04758c4f061339e213f50ccdc7502c4176fbf631aa12357e62b63540\"" Jan 29 11:36:27.950566 containerd[1492]: time="2025-01-29T11:36:27.950464022Z" level=info msg="CreateContainer within sandbox \"692107412fda4d04c2d07a21be5def3c6c6c707799ab0e4716a533be23b02b94\" for container &ContainerMetadata{Name:calico-csi,Attempt:0,}" Jan 29 11:36:28.258429 containerd[1492]: time="2025-01-29T11:36:28.258276232Z" level=info msg="CreateContainer within sandbox \"692107412fda4d04c2d07a21be5def3c6c6c707799ab0e4716a533be23b02b94\" for &ContainerMetadata{Name:calico-csi,Attempt:0,} returns container id \"8a0f90ae72275169b83678b459457679e99521ad5e7e127cb51557d6b49cd137\"" Jan 29 11:36:28.261013 containerd[1492]: time="2025-01-29T11:36:28.259400442Z" level=info msg="StartContainer for \"8a0f90ae72275169b83678b459457679e99521ad5e7e127cb51557d6b49cd137\"" Jan 29 11:36:28.375116 systemd[1]: Started cri-containerd-8a0f90ae72275169b83678b459457679e99521ad5e7e127cb51557d6b49cd137.scope - libcontainer container 8a0f90ae72275169b83678b459457679e99521ad5e7e127cb51557d6b49cd137. 
Jan 29 11:36:28.493883 containerd[1492]: time="2025-01-29T11:36:28.489982740Z" level=info msg="StartContainer for \"8a0f90ae72275169b83678b459457679e99521ad5e7e127cb51557d6b49cd137\" returns successfully" Jan 29 11:36:28.495470 containerd[1492]: time="2025-01-29T11:36:28.495194994Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/node-driver-registrar:v3.29.1\"" Jan 29 11:36:28.607090 systemd[1]: Started sshd@13-10.0.0.107:22-10.0.0.1:58154.service - OpenSSH per-connection server daemon (10.0.0.1:58154). Jan 29 11:36:28.702252 sshd[5794]: Accepted publickey for core from 10.0.0.1 port 58154 ssh2: RSA SHA256:vglZfOE0APgUpJbg1gfFAEfTfpzlM1a6LSSiwuQWd4A Jan 29 11:36:28.703331 sshd-session[5794]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Jan 29 11:36:28.714864 systemd-logind[1475]: New session 14 of user core. Jan 29 11:36:28.725489 systemd[1]: Started session-14.scope - Session 14 of User core. Jan 29 11:36:28.870688 sshd[5796]: Connection closed by 10.0.0.1 port 58154 Jan 29 11:36:28.871078 sshd-session[5794]: pam_unix(sshd:session): session closed for user core Jan 29 11:36:28.875712 systemd[1]: sshd@13-10.0.0.107:22-10.0.0.1:58154.service: Deactivated successfully. Jan 29 11:36:28.877827 systemd[1]: session-14.scope: Deactivated successfully. Jan 29 11:36:28.878401 systemd-logind[1475]: Session 14 logged out. Waiting for processes to exit. Jan 29 11:36:28.879282 systemd-logind[1475]: Removed session 14. 
Jan 29 11:36:29.595354 kubelet[2560]: I0129 11:36:29.595287 2560 prober_manager.go:312] "Failed to trigger a manual run" probe="Readiness" Jan 29 11:36:31.023742 containerd[1492]: time="2025-01-29T11:36:31.023114617Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/node-driver-registrar:v3.29.1\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jan 29 11:36:31.024301 containerd[1492]: time="2025-01-29T11:36:31.024110165Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/node-driver-registrar:v3.29.1: active requests=0, bytes read=10501081" Jan 29 11:36:31.025469 containerd[1492]: time="2025-01-29T11:36:31.025421436Z" level=info msg="ImageCreate event name:\"sha256:8b7d18f262d5cf6a6343578ad0db68a140c4c9989d9e02c58c27cb5d2c70320f\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jan 29 11:36:31.027613 containerd[1492]: time="2025-01-29T11:36:31.027581691Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/node-driver-registrar@sha256:a338da9488cbaa83c78457c3d7354d84149969c0480e88dd768e036632ff5b76\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jan 29 11:36:31.028662 containerd[1492]: time="2025-01-29T11:36:31.028297615Z" level=info msg="Pulled image \"ghcr.io/flatcar/calico/node-driver-registrar:v3.29.1\" with image id \"sha256:8b7d18f262d5cf6a6343578ad0db68a140c4c9989d9e02c58c27cb5d2c70320f\", repo tag \"ghcr.io/flatcar/calico/node-driver-registrar:v3.29.1\", repo digest \"ghcr.io/flatcar/calico/node-driver-registrar@sha256:a338da9488cbaa83c78457c3d7354d84149969c0480e88dd768e036632ff5b76\", size \"11994117\" in 2.533060842s" Jan 29 11:36:31.028662 containerd[1492]: time="2025-01-29T11:36:31.028333342Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/node-driver-registrar:v3.29.1\" returns image reference \"sha256:8b7d18f262d5cf6a6343578ad0db68a140c4c9989d9e02c58c27cb5d2c70320f\"" Jan 29 11:36:31.032844 containerd[1492]: time="2025-01-29T11:36:31.032812818Z" level=info 
msg="CreateContainer within sandbox \"692107412fda4d04c2d07a21be5def3c6c6c707799ab0e4716a533be23b02b94\" for container &ContainerMetadata{Name:csi-node-driver-registrar,Attempt:0,}" Jan 29 11:36:31.147183 containerd[1492]: time="2025-01-29T11:36:31.147136540Z" level=info msg="CreateContainer within sandbox \"692107412fda4d04c2d07a21be5def3c6c6c707799ab0e4716a533be23b02b94\" for &ContainerMetadata{Name:csi-node-driver-registrar,Attempt:0,} returns container id \"bc264c6b277343b16ce9f9aa7db6bedfabc5ceb9a4799037f04b99fd45ab18ae\"" Jan 29 11:36:31.148319 containerd[1492]: time="2025-01-29T11:36:31.147803010Z" level=info msg="StartContainer for \"bc264c6b277343b16ce9f9aa7db6bedfabc5ceb9a4799037f04b99fd45ab18ae\"" Jan 29 11:36:31.183791 systemd[1]: Started cri-containerd-bc264c6b277343b16ce9f9aa7db6bedfabc5ceb9a4799037f04b99fd45ab18ae.scope - libcontainer container bc264c6b277343b16ce9f9aa7db6bedfabc5ceb9a4799037f04b99fd45ab18ae. Jan 29 11:36:31.343805 containerd[1492]: time="2025-01-29T11:36:31.342072560Z" level=info msg="StartContainer for \"bc264c6b277343b16ce9f9aa7db6bedfabc5ceb9a4799037f04b99fd45ab18ae\" returns successfully" Jan 29 11:36:32.024816 kubelet[2560]: I0129 11:36:32.024602 2560 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="calico-system/csi-node-driver-s9vh2" podStartSLOduration=24.912787452 podStartE2EDuration="40.024583832s" podCreationTimestamp="2025-01-29 11:35:52 +0000 UTC" firstStartedPulling="2025-01-29 11:36:15.917868952 +0000 UTC m=+41.938678720" lastFinishedPulling="2025-01-29 11:36:31.029665331 +0000 UTC m=+57.050475100" observedRunningTime="2025-01-29 11:36:32.023859372 +0000 UTC m=+58.044669140" watchObservedRunningTime="2025-01-29 11:36:32.024583832 +0000 UTC m=+58.045393600" Jan 29 11:36:32.126275 kubelet[2560]: I0129 11:36:32.125927 2560 csi_plugin.go:100] kubernetes.io/csi: Trying to validate a new CSI Driver with name: csi.tigera.io endpoint: /var/lib/kubelet/plugins/csi.tigera.io/csi.sock versions: 1.0.0 Jan 
29 11:36:32.126275 kubelet[2560]: I0129 11:36:32.126009 2560 csi_plugin.go:113] kubernetes.io/csi: Register new plugin with name: csi.tigera.io at endpoint: /var/lib/kubelet/plugins/csi.tigera.io/csi.sock Jan 29 11:36:33.892802 systemd[1]: Started sshd@14-10.0.0.107:22-10.0.0.1:58164.service - OpenSSH per-connection server daemon (10.0.0.1:58164). Jan 29 11:36:33.938458 sshd[5855]: Accepted publickey for core from 10.0.0.1 port 58164 ssh2: RSA SHA256:vglZfOE0APgUpJbg1gfFAEfTfpzlM1a6LSSiwuQWd4A Jan 29 11:36:33.941160 sshd-session[5855]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Jan 29 11:36:33.945164 systemd-logind[1475]: New session 15 of user core. Jan 29 11:36:33.951764 systemd[1]: Started session-15.scope - Session 15 of User core. Jan 29 11:36:34.038534 containerd[1492]: time="2025-01-29T11:36:34.038342661Z" level=info msg="StopPodSandbox for \"f81c812ded146071c6d5357fc23680f636d22e342d877bc1c706a89aba49dd93\"" Jan 29 11:36:34.038534 containerd[1492]: time="2025-01-29T11:36:34.038481772Z" level=info msg="TearDown network for sandbox \"f81c812ded146071c6d5357fc23680f636d22e342d877bc1c706a89aba49dd93\" successfully" Jan 29 11:36:34.038534 containerd[1492]: time="2025-01-29T11:36:34.038494176Z" level=info msg="StopPodSandbox for \"f81c812ded146071c6d5357fc23680f636d22e342d877bc1c706a89aba49dd93\" returns successfully" Jan 29 11:36:34.039129 containerd[1492]: time="2025-01-29T11:36:34.039044408Z" level=info msg="RemovePodSandbox for \"f81c812ded146071c6d5357fc23680f636d22e342d877bc1c706a89aba49dd93\"" Jan 29 11:36:34.055685 containerd[1492]: time="2025-01-29T11:36:34.055600618Z" level=info msg="Forcibly stopping sandbox \"f81c812ded146071c6d5357fc23680f636d22e342d877bc1c706a89aba49dd93\"" Jan 29 11:36:34.055838 containerd[1492]: time="2025-01-29T11:36:34.055783491Z" level=info msg="TearDown network for sandbox \"f81c812ded146071c6d5357fc23680f636d22e342d877bc1c706a89aba49dd93\" successfully" Jan 29 11:36:34.114224 sshd[5857]: 
Connection closed by 10.0.0.1 port 58164 Jan 29 11:36:34.114667 sshd-session[5855]: pam_unix(sshd:session): session closed for user core Jan 29 11:36:34.120020 systemd[1]: sshd@14-10.0.0.107:22-10.0.0.1:58164.service: Deactivated successfully. Jan 29 11:36:34.122224 systemd[1]: session-15.scope: Deactivated successfully. Jan 29 11:36:34.122920 systemd-logind[1475]: Session 15 logged out. Waiting for processes to exit. Jan 29 11:36:34.123836 systemd-logind[1475]: Removed session 15. Jan 29 11:36:34.202046 containerd[1492]: time="2025-01-29T11:36:34.201922316Z" level=warning msg="Failed to get podSandbox status for container event for sandboxID \"f81c812ded146071c6d5357fc23680f636d22e342d877bc1c706a89aba49dd93\": an error occurred when try to find sandbox: not found. Sending the event with nil podSandboxStatus." Jan 29 11:36:34.202046 containerd[1492]: time="2025-01-29T11:36:34.202005773Z" level=info msg="RemovePodSandbox \"f81c812ded146071c6d5357fc23680f636d22e342d877bc1c706a89aba49dd93\" returns successfully" Jan 29 11:36:34.202545 containerd[1492]: time="2025-01-29T11:36:34.202523734Z" level=info msg="StopPodSandbox for \"ca71e99b7c48067d7bf825de1146deea722737ac8db1aede627b41776be5c345\"" Jan 29 11:36:34.202667 containerd[1492]: time="2025-01-29T11:36:34.202648599Z" level=info msg="TearDown network for sandbox \"ca71e99b7c48067d7bf825de1146deea722737ac8db1aede627b41776be5c345\" successfully" Jan 29 11:36:34.202667 containerd[1492]: time="2025-01-29T11:36:34.202664710Z" level=info msg="StopPodSandbox for \"ca71e99b7c48067d7bf825de1146deea722737ac8db1aede627b41776be5c345\" returns successfully" Jan 29 11:36:34.203082 containerd[1492]: time="2025-01-29T11:36:34.203045023Z" level=info msg="RemovePodSandbox for \"ca71e99b7c48067d7bf825de1146deea722737ac8db1aede627b41776be5c345\"" Jan 29 11:36:34.203167 containerd[1492]: time="2025-01-29T11:36:34.203090728Z" level=info msg="Forcibly stopping sandbox \"ca71e99b7c48067d7bf825de1146deea722737ac8db1aede627b41776be5c345\"" 
Jan 29 11:36:34.203243 containerd[1492]: time="2025-01-29T11:36:34.203200084Z" level=info msg="TearDown network for sandbox \"ca71e99b7c48067d7bf825de1146deea722737ac8db1aede627b41776be5c345\" successfully" Jan 29 11:36:34.288403 containerd[1492]: time="2025-01-29T11:36:34.288357240Z" level=warning msg="Failed to get podSandbox status for container event for sandboxID \"ca71e99b7c48067d7bf825de1146deea722737ac8db1aede627b41776be5c345\": an error occurred when try to find sandbox: not found. Sending the event with nil podSandboxStatus." Jan 29 11:36:34.288690 containerd[1492]: time="2025-01-29T11:36:34.288654909Z" level=info msg="RemovePodSandbox \"ca71e99b7c48067d7bf825de1146deea722737ac8db1aede627b41776be5c345\" returns successfully" Jan 29 11:36:34.289079 containerd[1492]: time="2025-01-29T11:36:34.289057976Z" level=info msg="StopPodSandbox for \"b0603ed2dd195e5f4326b4b967ccff21260ba0e0d5173691aeb14b6a04acf563\"" Jan 29 11:36:34.289175 containerd[1492]: time="2025-01-29T11:36:34.289157412Z" level=info msg="TearDown network for sandbox \"b0603ed2dd195e5f4326b4b967ccff21260ba0e0d5173691aeb14b6a04acf563\" successfully" Jan 29 11:36:34.289175 containerd[1492]: time="2025-01-29T11:36:34.289172220Z" level=info msg="StopPodSandbox for \"b0603ed2dd195e5f4326b4b967ccff21260ba0e0d5173691aeb14b6a04acf563\" returns successfully" Jan 29 11:36:34.292848 containerd[1492]: time="2025-01-29T11:36:34.292808593Z" level=info msg="RemovePodSandbox for \"b0603ed2dd195e5f4326b4b967ccff21260ba0e0d5173691aeb14b6a04acf563\"" Jan 29 11:36:34.292916 containerd[1492]: time="2025-01-29T11:36:34.292857545Z" level=info msg="Forcibly stopping sandbox \"b0603ed2dd195e5f4326b4b967ccff21260ba0e0d5173691aeb14b6a04acf563\"" Jan 29 11:36:34.295665 containerd[1492]: time="2025-01-29T11:36:34.292973023Z" level=info msg="TearDown network for sandbox \"b0603ed2dd195e5f4326b4b967ccff21260ba0e0d5173691aeb14b6a04acf563\" successfully" Jan 29 11:36:34.347844 containerd[1492]: 
time="2025-01-29T11:36:34.347756709Z" level=warning msg="Failed to get podSandbox status for container event for sandboxID \"b0603ed2dd195e5f4326b4b967ccff21260ba0e0d5173691aeb14b6a04acf563\": an error occurred when try to find sandbox: not found. Sending the event with nil podSandboxStatus." Jan 29 11:36:34.347844 containerd[1492]: time="2025-01-29T11:36:34.347855835Z" level=info msg="RemovePodSandbox \"b0603ed2dd195e5f4326b4b967ccff21260ba0e0d5173691aeb14b6a04acf563\" returns successfully" Jan 29 11:36:34.348508 containerd[1492]: time="2025-01-29T11:36:34.348459779Z" level=info msg="StopPodSandbox for \"0aa3e1e1cc1c6bf9e805ca1806db21a802b9119be72b5d45614a63e66a9541a6\"" Jan 29 11:36:34.348707 containerd[1492]: time="2025-01-29T11:36:34.348604931Z" level=info msg="TearDown network for sandbox \"0aa3e1e1cc1c6bf9e805ca1806db21a802b9119be72b5d45614a63e66a9541a6\" successfully" Jan 29 11:36:34.348707 containerd[1492]: time="2025-01-29T11:36:34.348619669Z" level=info msg="StopPodSandbox for \"0aa3e1e1cc1c6bf9e805ca1806db21a802b9119be72b5d45614a63e66a9541a6\" returns successfully" Jan 29 11:36:34.349102 containerd[1492]: time="2025-01-29T11:36:34.349073180Z" level=info msg="RemovePodSandbox for \"0aa3e1e1cc1c6bf9e805ca1806db21a802b9119be72b5d45614a63e66a9541a6\"" Jan 29 11:36:34.349173 containerd[1492]: time="2025-01-29T11:36:34.349101744Z" level=info msg="Forcibly stopping sandbox \"0aa3e1e1cc1c6bf9e805ca1806db21a802b9119be72b5d45614a63e66a9541a6\"" Jan 29 11:36:34.349242 containerd[1492]: time="2025-01-29T11:36:34.349182405Z" level=info msg="TearDown network for sandbox \"0aa3e1e1cc1c6bf9e805ca1806db21a802b9119be72b5d45614a63e66a9541a6\" successfully" Jan 29 11:36:34.368155 containerd[1492]: time="2025-01-29T11:36:34.368099515Z" level=warning msg="Failed to get podSandbox status for container event for sandboxID \"0aa3e1e1cc1c6bf9e805ca1806db21a802b9119be72b5d45614a63e66a9541a6\": an error occurred when try to find sandbox: not found. 
Sending the event with nil podSandboxStatus." Jan 29 11:36:34.368301 containerd[1492]: time="2025-01-29T11:36:34.368190195Z" level=info msg="RemovePodSandbox \"0aa3e1e1cc1c6bf9e805ca1806db21a802b9119be72b5d45614a63e66a9541a6\" returns successfully" Jan 29 11:36:34.368701 containerd[1492]: time="2025-01-29T11:36:34.368677870Z" level=info msg="StopPodSandbox for \"9ede5520602dd767ddb315a2dea6c55d80b92b58b2e4a6098587bb67c0211e9a\"" Jan 29 11:36:34.368815 containerd[1492]: time="2025-01-29T11:36:34.368793397Z" level=info msg="TearDown network for sandbox \"9ede5520602dd767ddb315a2dea6c55d80b92b58b2e4a6098587bb67c0211e9a\" successfully" Jan 29 11:36:34.368815 containerd[1492]: time="2025-01-29T11:36:34.368810519Z" level=info msg="StopPodSandbox for \"9ede5520602dd767ddb315a2dea6c55d80b92b58b2e4a6098587bb67c0211e9a\" returns successfully" Jan 29 11:36:34.369085 containerd[1492]: time="2025-01-29T11:36:34.369062142Z" level=info msg="RemovePodSandbox for \"9ede5520602dd767ddb315a2dea6c55d80b92b58b2e4a6098587bb67c0211e9a\"" Jan 29 11:36:34.369149 containerd[1492]: time="2025-01-29T11:36:34.369087469Z" level=info msg="Forcibly stopping sandbox \"9ede5520602dd767ddb315a2dea6c55d80b92b58b2e4a6098587bb67c0211e9a\"" Jan 29 11:36:34.369220 containerd[1492]: time="2025-01-29T11:36:34.369167309Z" level=info msg="TearDown network for sandbox \"9ede5520602dd767ddb315a2dea6c55d80b92b58b2e4a6098587bb67c0211e9a\" successfully" Jan 29 11:36:34.393526 containerd[1492]: time="2025-01-29T11:36:34.393490694Z" level=warning msg="Failed to get podSandbox status for container event for sandboxID \"9ede5520602dd767ddb315a2dea6c55d80b92b58b2e4a6098587bb67c0211e9a\": an error occurred when try to find sandbox: not found. Sending the event with nil podSandboxStatus." 
Jan 29 11:36:34.393661 containerd[1492]: time="2025-01-29T11:36:34.393547410Z" level=info msg="RemovePodSandbox \"9ede5520602dd767ddb315a2dea6c55d80b92b58b2e4a6098587bb67c0211e9a\" returns successfully" Jan 29 11:36:34.393933 containerd[1492]: time="2025-01-29T11:36:34.393904000Z" level=info msg="StopPodSandbox for \"d4de1c4e4badf8994e3ce5ec4ee74b35d107c1caa3b89b472d0ec5af0549c89e\"" Jan 29 11:36:34.394028 containerd[1492]: time="2025-01-29T11:36:34.394006943Z" level=info msg="TearDown network for sandbox \"d4de1c4e4badf8994e3ce5ec4ee74b35d107c1caa3b89b472d0ec5af0549c89e\" successfully" Jan 29 11:36:34.394028 containerd[1492]: time="2025-01-29T11:36:34.394023394Z" level=info msg="StopPodSandbox for \"d4de1c4e4badf8994e3ce5ec4ee74b35d107c1caa3b89b472d0ec5af0549c89e\" returns successfully" Jan 29 11:36:34.394310 containerd[1492]: time="2025-01-29T11:36:34.394238357Z" level=info msg="RemovePodSandbox for \"d4de1c4e4badf8994e3ce5ec4ee74b35d107c1caa3b89b472d0ec5af0549c89e\"" Jan 29 11:36:34.394310 containerd[1492]: time="2025-01-29T11:36:34.394265237Z" level=info msg="Forcibly stopping sandbox \"d4de1c4e4badf8994e3ce5ec4ee74b35d107c1caa3b89b472d0ec5af0549c89e\"" Jan 29 11:36:34.394469 containerd[1492]: time="2025-01-29T11:36:34.394421641Z" level=info msg="TearDown network for sandbox \"d4de1c4e4badf8994e3ce5ec4ee74b35d107c1caa3b89b472d0ec5af0549c89e\" successfully" Jan 29 11:36:34.412948 containerd[1492]: time="2025-01-29T11:36:34.412901600Z" level=warning msg="Failed to get podSandbox status for container event for sandboxID \"d4de1c4e4badf8994e3ce5ec4ee74b35d107c1caa3b89b472d0ec5af0549c89e\": an error occurred when try to find sandbox: not found. Sending the event with nil podSandboxStatus." 
Jan 29 11:36:34.412948 containerd[1492]: time="2025-01-29T11:36:34.412946596Z" level=info msg="RemovePodSandbox \"d4de1c4e4badf8994e3ce5ec4ee74b35d107c1caa3b89b472d0ec5af0549c89e\" returns successfully" Jan 29 11:36:34.413226 containerd[1492]: time="2025-01-29T11:36:34.413203307Z" level=info msg="StopPodSandbox for \"fe40cde01f9bc00e0017a79d8d4efb77eed31f6ad079f48d7d46d61b289b6a4b\"" Jan 29 11:36:34.413300 containerd[1492]: time="2025-01-29T11:36:34.413282576Z" level=info msg="TearDown network for sandbox \"fe40cde01f9bc00e0017a79d8d4efb77eed31f6ad079f48d7d46d61b289b6a4b\" successfully" Jan 29 11:36:34.413300 containerd[1492]: time="2025-01-29T11:36:34.413294658Z" level=info msg="StopPodSandbox for \"fe40cde01f9bc00e0017a79d8d4efb77eed31f6ad079f48d7d46d61b289b6a4b\" returns successfully" Jan 29 11:36:34.413595 containerd[1492]: time="2025-01-29T11:36:34.413572850Z" level=info msg="RemovePodSandbox for \"fe40cde01f9bc00e0017a79d8d4efb77eed31f6ad079f48d7d46d61b289b6a4b\"" Jan 29 11:36:34.413659 containerd[1492]: time="2025-01-29T11:36:34.413595352Z" level=info msg="Forcibly stopping sandbox \"fe40cde01f9bc00e0017a79d8d4efb77eed31f6ad079f48d7d46d61b289b6a4b\"" Jan 29 11:36:34.413703 containerd[1492]: time="2025-01-29T11:36:34.413677907Z" level=info msg="TearDown network for sandbox \"fe40cde01f9bc00e0017a79d8d4efb77eed31f6ad079f48d7d46d61b289b6a4b\" successfully" Jan 29 11:36:34.435938 containerd[1492]: time="2025-01-29T11:36:34.435910529Z" level=warning msg="Failed to get podSandbox status for container event for sandboxID \"fe40cde01f9bc00e0017a79d8d4efb77eed31f6ad079f48d7d46d61b289b6a4b\": an error occurred when try to find sandbox: not found. Sending the event with nil podSandboxStatus." 
Jan 29 11:36:34.436013 containerd[1492]: time="2025-01-29T11:36:34.435961024Z" level=info msg="RemovePodSandbox \"fe40cde01f9bc00e0017a79d8d4efb77eed31f6ad079f48d7d46d61b289b6a4b\" returns successfully" Jan 29 11:36:34.436323 containerd[1492]: time="2025-01-29T11:36:34.436281095Z" level=info msg="StopPodSandbox for \"78be77b980959cfbde3c7658c4ee4124dfca02a33e1eb74d810982ec9ce9af2d\"" Jan 29 11:36:34.436482 containerd[1492]: time="2025-01-29T11:36:34.436445363Z" level=info msg="TearDown network for sandbox \"78be77b980959cfbde3c7658c4ee4124dfca02a33e1eb74d810982ec9ce9af2d\" successfully" Jan 29 11:36:34.436482 containerd[1492]: time="2025-01-29T11:36:34.436461413Z" level=info msg="StopPodSandbox for \"78be77b980959cfbde3c7658c4ee4124dfca02a33e1eb74d810982ec9ce9af2d\" returns successfully" Jan 29 11:36:34.436744 containerd[1492]: time="2025-01-29T11:36:34.436709208Z" level=info msg="RemovePodSandbox for \"78be77b980959cfbde3c7658c4ee4124dfca02a33e1eb74d810982ec9ce9af2d\"" Jan 29 11:36:34.436744 containerd[1492]: time="2025-01-29T11:36:34.436741368Z" level=info msg="Forcibly stopping sandbox \"78be77b980959cfbde3c7658c4ee4124dfca02a33e1eb74d810982ec9ce9af2d\"" Jan 29 11:36:34.436872 containerd[1492]: time="2025-01-29T11:36:34.436834042Z" level=info msg="TearDown network for sandbox \"78be77b980959cfbde3c7658c4ee4124dfca02a33e1eb74d810982ec9ce9af2d\" successfully" Jan 29 11:36:34.460442 containerd[1492]: time="2025-01-29T11:36:34.460292565Z" level=warning msg="Failed to get podSandbox status for container event for sandboxID \"78be77b980959cfbde3c7658c4ee4124dfca02a33e1eb74d810982ec9ce9af2d\": an error occurred when try to find sandbox: not found. Sending the event with nil podSandboxStatus." 
Jan 29 11:36:34.460442 containerd[1492]: time="2025-01-29T11:36:34.460332069Z" level=info msg="RemovePodSandbox \"78be77b980959cfbde3c7658c4ee4124dfca02a33e1eb74d810982ec9ce9af2d\" returns successfully" Jan 29 11:36:34.460621 containerd[1492]: time="2025-01-29T11:36:34.460589662Z" level=info msg="StopPodSandbox for \"05f9cf1c50bc395566b534f6d9d826c90680313b67695139cea702c54a97f9a4\"" Jan 29 11:36:34.460745 containerd[1492]: time="2025-01-29T11:36:34.460721870Z" level=info msg="TearDown network for sandbox \"05f9cf1c50bc395566b534f6d9d826c90680313b67695139cea702c54a97f9a4\" successfully" Jan 29 11:36:34.460745 containerd[1492]: time="2025-01-29T11:36:34.460739473Z" level=info msg="StopPodSandbox for \"05f9cf1c50bc395566b534f6d9d826c90680313b67695139cea702c54a97f9a4\" returns successfully" Jan 29 11:36:34.461224 containerd[1492]: time="2025-01-29T11:36:34.461200048Z" level=info msg="RemovePodSandbox for \"05f9cf1c50bc395566b534f6d9d826c90680313b67695139cea702c54a97f9a4\"" Jan 29 11:36:34.461297 containerd[1492]: time="2025-01-29T11:36:34.461223041Z" level=info msg="Forcibly stopping sandbox \"05f9cf1c50bc395566b534f6d9d826c90680313b67695139cea702c54a97f9a4\"" Jan 29 11:36:34.461365 containerd[1492]: time="2025-01-29T11:36:34.461309653Z" level=info msg="TearDown network for sandbox \"05f9cf1c50bc395566b534f6d9d826c90680313b67695139cea702c54a97f9a4\" successfully" Jan 29 11:36:34.466238 containerd[1492]: time="2025-01-29T11:36:34.466203678Z" level=warning msg="Failed to get podSandbox status for container event for sandboxID \"05f9cf1c50bc395566b534f6d9d826c90680313b67695139cea702c54a97f9a4\": an error occurred when try to find sandbox: not found. Sending the event with nil podSandboxStatus." 
Jan 29 11:36:34.466337 containerd[1492]: time="2025-01-29T11:36:34.466248101Z" level=info msg="RemovePodSandbox \"05f9cf1c50bc395566b534f6d9d826c90680313b67695139cea702c54a97f9a4\" returns successfully" Jan 29 11:36:34.466697 containerd[1492]: time="2025-01-29T11:36:34.466666987Z" level=info msg="StopPodSandbox for \"287ed895e0fd544523f25485baaa91620d0f599e710587bc7ace1b80134c67ce\"" Jan 29 11:36:34.466794 containerd[1492]: time="2025-01-29T11:36:34.466773687Z" level=info msg="TearDown network for sandbox \"287ed895e0fd544523f25485baaa91620d0f599e710587bc7ace1b80134c67ce\" successfully" Jan 29 11:36:34.466794 containerd[1492]: time="2025-01-29T11:36:34.466791100Z" level=info msg="StopPodSandbox for \"287ed895e0fd544523f25485baaa91620d0f599e710587bc7ace1b80134c67ce\" returns successfully" Jan 29 11:36:34.467091 containerd[1492]: time="2025-01-29T11:36:34.467053362Z" level=info msg="RemovePodSandbox for \"287ed895e0fd544523f25485baaa91620d0f599e710587bc7ace1b80134c67ce\"" Jan 29 11:36:34.467091 containerd[1492]: time="2025-01-29T11:36:34.467081044Z" level=info msg="Forcibly stopping sandbox \"287ed895e0fd544523f25485baaa91620d0f599e710587bc7ace1b80134c67ce\"" Jan 29 11:36:34.467212 containerd[1492]: time="2025-01-29T11:36:34.467166544Z" level=info msg="TearDown network for sandbox \"287ed895e0fd544523f25485baaa91620d0f599e710587bc7ace1b80134c67ce\" successfully" Jan 29 11:36:34.471118 containerd[1492]: time="2025-01-29T11:36:34.471087452Z" level=warning msg="Failed to get podSandbox status for container event for sandboxID \"287ed895e0fd544523f25485baaa91620d0f599e710587bc7ace1b80134c67ce\": an error occurred when try to find sandbox: not found. Sending the event with nil podSandboxStatus." 
Jan 29 11:36:34.471172 containerd[1492]: time="2025-01-29T11:36:34.471130633Z" level=info msg="RemovePodSandbox \"287ed895e0fd544523f25485baaa91620d0f599e710587bc7ace1b80134c67ce\" returns successfully" Jan 29 11:36:34.471463 containerd[1492]: time="2025-01-29T11:36:34.471439373Z" level=info msg="StopPodSandbox for \"4c7481776cb0257d1f8d06312c485b245875bec0a4c771dfef98cf6f06e42854\"" Jan 29 11:36:34.471586 containerd[1492]: time="2025-01-29T11:36:34.471549249Z" level=info msg="TearDown network for sandbox \"4c7481776cb0257d1f8d06312c485b245875bec0a4c771dfef98cf6f06e42854\" successfully" Jan 29 11:36:34.471614 containerd[1492]: time="2025-01-29T11:36:34.471580668Z" level=info msg="StopPodSandbox for \"4c7481776cb0257d1f8d06312c485b245875bec0a4c771dfef98cf6f06e42854\" returns successfully" Jan 29 11:36:34.471891 containerd[1492]: time="2025-01-29T11:36:34.471870462Z" level=info msg="RemovePodSandbox for \"4c7481776cb0257d1f8d06312c485b245875bec0a4c771dfef98cf6f06e42854\"" Jan 29 11:36:34.471932 containerd[1492]: time="2025-01-29T11:36:34.471894106Z" level=info msg="Forcibly stopping sandbox \"4c7481776cb0257d1f8d06312c485b245875bec0a4c771dfef98cf6f06e42854\"" Jan 29 11:36:34.472005 containerd[1492]: time="2025-01-29T11:36:34.471971822Z" level=info msg="TearDown network for sandbox \"4c7481776cb0257d1f8d06312c485b245875bec0a4c771dfef98cf6f06e42854\" successfully" Jan 29 11:36:34.475608 containerd[1492]: time="2025-01-29T11:36:34.475563382Z" level=warning msg="Failed to get podSandbox status for container event for sandboxID \"4c7481776cb0257d1f8d06312c485b245875bec0a4c771dfef98cf6f06e42854\": an error occurred when try to find sandbox: not found. Sending the event with nil podSandboxStatus." 
Jan 29 11:36:34.475608 containerd[1492]: time="2025-01-29T11:36:34.475607816Z" level=info msg="RemovePodSandbox \"4c7481776cb0257d1f8d06312c485b245875bec0a4c771dfef98cf6f06e42854\" returns successfully" Jan 29 11:36:34.475874 containerd[1492]: time="2025-01-29T11:36:34.475846333Z" level=info msg="StopPodSandbox for \"1d8a5c622769c491a41f633c7530baca459b29e444f7e3520cae5792982531a8\"" Jan 29 11:36:34.475944 containerd[1492]: time="2025-01-29T11:36:34.475923508Z" level=info msg="TearDown network for sandbox \"1d8a5c622769c491a41f633c7530baca459b29e444f7e3520cae5792982531a8\" successfully" Jan 29 11:36:34.475944 containerd[1492]: time="2025-01-29T11:36:34.475937394Z" level=info msg="StopPodSandbox for \"1d8a5c622769c491a41f633c7530baca459b29e444f7e3520cae5792982531a8\" returns successfully" Jan 29 11:36:34.476192 containerd[1492]: time="2025-01-29T11:36:34.476163228Z" level=info msg="RemovePodSandbox for \"1d8a5c622769c491a41f633c7530baca459b29e444f7e3520cae5792982531a8\"" Jan 29 11:36:34.476192 containerd[1492]: time="2025-01-29T11:36:34.476187644Z" level=info msg="Forcibly stopping sandbox \"1d8a5c622769c491a41f633c7530baca459b29e444f7e3520cae5792982531a8\"" Jan 29 11:36:34.476291 containerd[1492]: time="2025-01-29T11:36:34.476259599Z" level=info msg="TearDown network for sandbox \"1d8a5c622769c491a41f633c7530baca459b29e444f7e3520cae5792982531a8\" successfully" Jan 29 11:36:34.479742 containerd[1492]: time="2025-01-29T11:36:34.479713661Z" level=warning msg="Failed to get podSandbox status for container event for sandboxID \"1d8a5c622769c491a41f633c7530baca459b29e444f7e3520cae5792982531a8\": an error occurred when try to find sandbox: not found. Sending the event with nil podSandboxStatus." 
Jan 29 11:36:34.479791 containerd[1492]: time="2025-01-29T11:36:34.479753125Z" level=info msg="RemovePodSandbox \"1d8a5c622769c491a41f633c7530baca459b29e444f7e3520cae5792982531a8\" returns successfully" Jan 29 11:36:34.480078 containerd[1492]: time="2025-01-29T11:36:34.480044251Z" level=info msg="StopPodSandbox for \"b5f2937f631c80a639072f6b6e4b81c67d79bd7b8b00d51fbbb621f6dcf86772\"" Jan 29 11:36:34.480183 containerd[1492]: time="2025-01-29T11:36:34.480164196Z" level=info msg="TearDown network for sandbox \"b5f2937f631c80a639072f6b6e4b81c67d79bd7b8b00d51fbbb621f6dcf86772\" successfully" Jan 29 11:36:34.480183 containerd[1492]: time="2025-01-29T11:36:34.480179625Z" level=info msg="StopPodSandbox for \"b5f2937f631c80a639072f6b6e4b81c67d79bd7b8b00d51fbbb621f6dcf86772\" returns successfully" Jan 29 11:36:34.480453 containerd[1492]: time="2025-01-29T11:36:34.480430566Z" level=info msg="RemovePodSandbox for \"b5f2937f631c80a639072f6b6e4b81c67d79bd7b8b00d51fbbb621f6dcf86772\"" Jan 29 11:36:34.480453 containerd[1492]: time="2025-01-29T11:36:34.480451395Z" level=info msg="Forcibly stopping sandbox \"b5f2937f631c80a639072f6b6e4b81c67d79bd7b8b00d51fbbb621f6dcf86772\"" Jan 29 11:36:34.480573 containerd[1492]: time="2025-01-29T11:36:34.480522378Z" level=info msg="TearDown network for sandbox \"b5f2937f631c80a639072f6b6e4b81c67d79bd7b8b00d51fbbb621f6dcf86772\" successfully" Jan 29 11:36:34.484378 containerd[1492]: time="2025-01-29T11:36:34.484349310Z" level=warning msg="Failed to get podSandbox status for container event for sandboxID \"b5f2937f631c80a639072f6b6e4b81c67d79bd7b8b00d51fbbb621f6dcf86772\": an error occurred when try to find sandbox: not found. Sending the event with nil podSandboxStatus." 
Jan 29 11:36:34.484847 containerd[1492]: time="2025-01-29T11:36:34.484819352Z" level=info msg="RemovePodSandbox \"b5f2937f631c80a639072f6b6e4b81c67d79bd7b8b00d51fbbb621f6dcf86772\" returns successfully" Jan 29 11:36:34.487689 containerd[1492]: time="2025-01-29T11:36:34.487653491Z" level=info msg="StopPodSandbox for \"7cf0ee6b79c5f0ffbd1df3c4ba1fa347f2ab15775291f793b405df1eb50ddbdc\"" Jan 29 11:36:34.487826 containerd[1492]: time="2025-01-29T11:36:34.487776181Z" level=info msg="TearDown network for sandbox \"7cf0ee6b79c5f0ffbd1df3c4ba1fa347f2ab15775291f793b405df1eb50ddbdc\" successfully" Jan 29 11:36:34.487826 containerd[1492]: time="2025-01-29T11:36:34.487821846Z" level=info msg="StopPodSandbox for \"7cf0ee6b79c5f0ffbd1df3c4ba1fa347f2ab15775291f793b405df1eb50ddbdc\" returns successfully" Jan 29 11:36:34.488141 containerd[1492]: time="2025-01-29T11:36:34.488097313Z" level=info msg="RemovePodSandbox for \"7cf0ee6b79c5f0ffbd1df3c4ba1fa347f2ab15775291f793b405df1eb50ddbdc\"" Jan 29 11:36:34.488141 containerd[1492]: time="2025-01-29T11:36:34.488123693Z" level=info msg="Forcibly stopping sandbox \"7cf0ee6b79c5f0ffbd1df3c4ba1fa347f2ab15775291f793b405df1eb50ddbdc\"" Jan 29 11:36:34.488243 containerd[1492]: time="2025-01-29T11:36:34.488202241Z" level=info msg="TearDown network for sandbox \"7cf0ee6b79c5f0ffbd1df3c4ba1fa347f2ab15775291f793b405df1eb50ddbdc\" successfully" Jan 29 11:36:34.492396 containerd[1492]: time="2025-01-29T11:36:34.492353250Z" level=warning msg="Failed to get podSandbox status for container event for sandboxID \"7cf0ee6b79c5f0ffbd1df3c4ba1fa347f2ab15775291f793b405df1eb50ddbdc\": an error occurred when try to find sandbox: not found. Sending the event with nil podSandboxStatus." 
Jan 29 11:36:34.492477 containerd[1492]: time="2025-01-29T11:36:34.492432799Z" level=info msg="RemovePodSandbox \"7cf0ee6b79c5f0ffbd1df3c4ba1fa347f2ab15775291f793b405df1eb50ddbdc\" returns successfully" Jan 29 11:36:34.492733 containerd[1492]: time="2025-01-29T11:36:34.492707064Z" level=info msg="StopPodSandbox for \"dad5edafd75c208e507084ef0dcd7c29e4f3cd0afee0ca5f8f325c8bed2cca8a\"" Jan 29 11:36:34.492825 containerd[1492]: time="2025-01-29T11:36:34.492803535Z" level=info msg="TearDown network for sandbox \"dad5edafd75c208e507084ef0dcd7c29e4f3cd0afee0ca5f8f325c8bed2cca8a\" successfully" Jan 29 11:36:34.492825 containerd[1492]: time="2025-01-29T11:36:34.492821388Z" level=info msg="StopPodSandbox for \"dad5edafd75c208e507084ef0dcd7c29e4f3cd0afee0ca5f8f325c8bed2cca8a\" returns successfully" Jan 29 11:36:34.493061 containerd[1492]: time="2025-01-29T11:36:34.493017727Z" level=info msg="RemovePodSandbox for \"dad5edafd75c208e507084ef0dcd7c29e4f3cd0afee0ca5f8f325c8bed2cca8a\"" Jan 29 11:36:34.493061 containerd[1492]: time="2025-01-29T11:36:34.493051220Z" level=info msg="Forcibly stopping sandbox \"dad5edafd75c208e507084ef0dcd7c29e4f3cd0afee0ca5f8f325c8bed2cca8a\"" Jan 29 11:36:34.493190 containerd[1492]: time="2025-01-29T11:36:34.493118556Z" level=info msg="TearDown network for sandbox \"dad5edafd75c208e507084ef0dcd7c29e4f3cd0afee0ca5f8f325c8bed2cca8a\" successfully" Jan 29 11:36:34.496820 containerd[1492]: time="2025-01-29T11:36:34.496774497Z" level=warning msg="Failed to get podSandbox status for container event for sandboxID \"dad5edafd75c208e507084ef0dcd7c29e4f3cd0afee0ca5f8f325c8bed2cca8a\": an error occurred when try to find sandbox: not found. Sending the event with nil podSandboxStatus." 
Jan 29 11:36:34.496886 containerd[1492]: time="2025-01-29T11:36:34.496819060Z" level=info msg="RemovePodSandbox \"dad5edafd75c208e507084ef0dcd7c29e4f3cd0afee0ca5f8f325c8bed2cca8a\" returns successfully" Jan 29 11:36:34.497110 containerd[1492]: time="2025-01-29T11:36:34.497078738Z" level=info msg="StopPodSandbox for \"02f3fae8add70e6d47c8bf8ac88f4239e8c0deb00c48770a4763038b478021c8\"" Jan 29 11:36:34.497205 containerd[1492]: time="2025-01-29T11:36:34.497171522Z" level=info msg="TearDown network for sandbox \"02f3fae8add70e6d47c8bf8ac88f4239e8c0deb00c48770a4763038b478021c8\" successfully" Jan 29 11:36:34.497205 containerd[1492]: time="2025-01-29T11:36:34.497201959Z" level=info msg="StopPodSandbox for \"02f3fae8add70e6d47c8bf8ac88f4239e8c0deb00c48770a4763038b478021c8\" returns successfully" Jan 29 11:36:34.497427 containerd[1492]: time="2025-01-29T11:36:34.497383800Z" level=info msg="RemovePodSandbox for \"02f3fae8add70e6d47c8bf8ac88f4239e8c0deb00c48770a4763038b478021c8\"" Jan 29 11:36:34.497427 containerd[1492]: time="2025-01-29T11:36:34.497410090Z" level=info msg="Forcibly stopping sandbox \"02f3fae8add70e6d47c8bf8ac88f4239e8c0deb00c48770a4763038b478021c8\"" Jan 29 11:36:34.497499 containerd[1492]: time="2025-01-29T11:36:34.497466706Z" level=info msg="TearDown network for sandbox \"02f3fae8add70e6d47c8bf8ac88f4239e8c0deb00c48770a4763038b478021c8\" successfully" Jan 29 11:36:34.501159 containerd[1492]: time="2025-01-29T11:36:34.501131614Z" level=warning msg="Failed to get podSandbox status for container event for sandboxID \"02f3fae8add70e6d47c8bf8ac88f4239e8c0deb00c48770a4763038b478021c8\": an error occurred when try to find sandbox: not found. Sending the event with nil podSandboxStatus." 
Jan 29 11:36:34.501212 containerd[1492]: time="2025-01-29T11:36:34.501163584Z" level=info msg="RemovePodSandbox \"02f3fae8add70e6d47c8bf8ac88f4239e8c0deb00c48770a4763038b478021c8\" returns successfully" Jan 29 11:36:34.501437 containerd[1492]: time="2025-01-29T11:36:34.501412661Z" level=info msg="StopPodSandbox for \"3a3ba7ab52526116889cf321914b1c4c8be6c1d810f1d28bbb52853d080bfba4\"" Jan 29 11:36:34.501528 containerd[1492]: time="2025-01-29T11:36:34.501508491Z" level=info msg="TearDown network for sandbox \"3a3ba7ab52526116889cf321914b1c4c8be6c1d810f1d28bbb52853d080bfba4\" successfully" Jan 29 11:36:34.501528 containerd[1492]: time="2025-01-29T11:36:34.501525533Z" level=info msg="StopPodSandbox for \"3a3ba7ab52526116889cf321914b1c4c8be6c1d810f1d28bbb52853d080bfba4\" returns successfully" Jan 29 11:36:34.501733 containerd[1492]: time="2025-01-29T11:36:34.501712994Z" level=info msg="RemovePodSandbox for \"3a3ba7ab52526116889cf321914b1c4c8be6c1d810f1d28bbb52853d080bfba4\"" Jan 29 11:36:34.501733 containerd[1492]: time="2025-01-29T11:36:34.501733023Z" level=info msg="Forcibly stopping sandbox \"3a3ba7ab52526116889cf321914b1c4c8be6c1d810f1d28bbb52853d080bfba4\"" Jan 29 11:36:34.501818 containerd[1492]: time="2025-01-29T11:36:34.501795169Z" level=info msg="TearDown network for sandbox \"3a3ba7ab52526116889cf321914b1c4c8be6c1d810f1d28bbb52853d080bfba4\" successfully" Jan 29 11:36:34.506711 containerd[1492]: time="2025-01-29T11:36:34.506670478Z" level=warning msg="Failed to get podSandbox status for container event for sandboxID \"3a3ba7ab52526116889cf321914b1c4c8be6c1d810f1d28bbb52853d080bfba4\": an error occurred when try to find sandbox: not found. Sending the event with nil podSandboxStatus." 
Jan 29 11:36:34.506787 containerd[1492]: time="2025-01-29T11:36:34.506712607Z" level=info msg="RemovePodSandbox \"3a3ba7ab52526116889cf321914b1c4c8be6c1d810f1d28bbb52853d080bfba4\" returns successfully" Jan 29 11:36:34.506987 containerd[1492]: time="2025-01-29T11:36:34.506959961Z" level=info msg="StopPodSandbox for \"79d39bbdd4f88a8ac890dc57de0b71341e4fe506913f34d3613e4ea9cfabf344\"" Jan 29 11:36:34.507074 containerd[1492]: time="2025-01-29T11:36:34.507054258Z" level=info msg="TearDown network for sandbox \"79d39bbdd4f88a8ac890dc57de0b71341e4fe506913f34d3613e4ea9cfabf344\" successfully" Jan 29 11:36:34.507104 containerd[1492]: time="2025-01-29T11:36:34.507071059Z" level=info msg="StopPodSandbox for \"79d39bbdd4f88a8ac890dc57de0b71341e4fe506913f34d3613e4ea9cfabf344\" returns successfully" Jan 29 11:36:34.507338 containerd[1492]: time="2025-01-29T11:36:34.507314196Z" level=info msg="RemovePodSandbox for \"79d39bbdd4f88a8ac890dc57de0b71341e4fe506913f34d3613e4ea9cfabf344\"" Jan 29 11:36:34.507414 containerd[1492]: time="2025-01-29T11:36:34.507340536Z" level=info msg="Forcibly stopping sandbox \"79d39bbdd4f88a8ac890dc57de0b71341e4fe506913f34d3613e4ea9cfabf344\"" Jan 29 11:36:34.507465 containerd[1492]: time="2025-01-29T11:36:34.507431647Z" level=info msg="TearDown network for sandbox \"79d39bbdd4f88a8ac890dc57de0b71341e4fe506913f34d3613e4ea9cfabf344\" successfully" Jan 29 11:36:34.554267 containerd[1492]: time="2025-01-29T11:36:34.554228295Z" level=warning msg="Failed to get podSandbox status for container event for sandboxID \"79d39bbdd4f88a8ac890dc57de0b71341e4fe506913f34d3613e4ea9cfabf344\": an error occurred when try to find sandbox: not found. Sending the event with nil podSandboxStatus." 
Jan 29 11:36:34.554342 containerd[1492]: time="2025-01-29T11:36:34.554283048Z" level=info msg="RemovePodSandbox \"79d39bbdd4f88a8ac890dc57de0b71341e4fe506913f34d3613e4ea9cfabf344\" returns successfully" Jan 29 11:36:34.554684 containerd[1492]: time="2025-01-29T11:36:34.554649836Z" level=info msg="StopPodSandbox for \"e4c786a1f87f123c515ee8f4dae18559dc1b81a1fbae2953ab843c3ffdeb2f38\"" Jan 29 11:36:34.554792 containerd[1492]: time="2025-01-29T11:36:34.554767096Z" level=info msg="TearDown network for sandbox \"e4c786a1f87f123c515ee8f4dae18559dc1b81a1fbae2953ab843c3ffdeb2f38\" successfully" Jan 29 11:36:34.554792 containerd[1492]: time="2025-01-29T11:36:34.554784409Z" level=info msg="StopPodSandbox for \"e4c786a1f87f123c515ee8f4dae18559dc1b81a1fbae2953ab843c3ffdeb2f38\" returns successfully" Jan 29 11:36:34.555037 containerd[1492]: time="2025-01-29T11:36:34.555015021Z" level=info msg="RemovePodSandbox for \"e4c786a1f87f123c515ee8f4dae18559dc1b81a1fbae2953ab843c3ffdeb2f38\"" Jan 29 11:36:34.555081 containerd[1492]: time="2025-01-29T11:36:34.555040599Z" level=info msg="Forcibly stopping sandbox \"e4c786a1f87f123c515ee8f4dae18559dc1b81a1fbae2953ab843c3ffdeb2f38\"" Jan 29 11:36:34.555135 containerd[1492]: time="2025-01-29T11:36:34.555115961Z" level=info msg="TearDown network for sandbox \"e4c786a1f87f123c515ee8f4dae18559dc1b81a1fbae2953ab843c3ffdeb2f38\" successfully" Jan 29 11:36:34.561248 containerd[1492]: time="2025-01-29T11:36:34.561191502Z" level=warning msg="Failed to get podSandbox status for container event for sandboxID \"e4c786a1f87f123c515ee8f4dae18559dc1b81a1fbae2953ab843c3ffdeb2f38\": an error occurred when try to find sandbox: not found. Sending the event with nil podSandboxStatus." 
Jan 29 11:36:34.561301 containerd[1492]: time="2025-01-29T11:36:34.561265361Z" level=info msg="RemovePodSandbox \"e4c786a1f87f123c515ee8f4dae18559dc1b81a1fbae2953ab843c3ffdeb2f38\" returns successfully" Jan 29 11:36:34.561712 containerd[1492]: time="2025-01-29T11:36:34.561685649Z" level=info msg="StopPodSandbox for \"f289a1ea20b800c633270b52ec0a7cbce53731f38497fa4630193ed068515360\"" Jan 29 11:36:34.561812 containerd[1492]: time="2025-01-29T11:36:34.561794845Z" level=info msg="TearDown network for sandbox \"f289a1ea20b800c633270b52ec0a7cbce53731f38497fa4630193ed068515360\" successfully" Jan 29 11:36:34.561812 containerd[1492]: time="2025-01-29T11:36:34.561810154Z" level=info msg="StopPodSandbox for \"f289a1ea20b800c633270b52ec0a7cbce53731f38497fa4630193ed068515360\" returns successfully" Jan 29 11:36:34.562066 containerd[1492]: time="2025-01-29T11:36:34.562040045Z" level=info msg="RemovePodSandbox for \"f289a1ea20b800c633270b52ec0a7cbce53731f38497fa4630193ed068515360\"" Jan 29 11:36:34.562066 containerd[1492]: time="2025-01-29T11:36:34.562061234Z" level=info msg="Forcibly stopping sandbox \"f289a1ea20b800c633270b52ec0a7cbce53731f38497fa4630193ed068515360\"" Jan 29 11:36:34.562166 containerd[1492]: time="2025-01-29T11:36:34.562122109Z" level=info msg="TearDown network for sandbox \"f289a1ea20b800c633270b52ec0a7cbce53731f38497fa4630193ed068515360\" successfully" Jan 29 11:36:34.569472 containerd[1492]: time="2025-01-29T11:36:34.569430554Z" level=warning msg="Failed to get podSandbox status for container event for sandboxID \"f289a1ea20b800c633270b52ec0a7cbce53731f38497fa4630193ed068515360\": an error occurred when try to find sandbox: not found. Sending the event with nil podSandboxStatus." 
Jan 29 11:36:34.569535 containerd[1492]: time="2025-01-29T11:36:34.569491448Z" level=info msg="RemovePodSandbox \"f289a1ea20b800c633270b52ec0a7cbce53731f38497fa4630193ed068515360\" returns successfully" Jan 29 11:36:34.569970 containerd[1492]: time="2025-01-29T11:36:34.569920173Z" level=info msg="StopPodSandbox for \"6b8213b783a7c465b327152c49a96b3af800823e3978928ad141cc9679eef942\"" Jan 29 11:36:34.570059 containerd[1492]: time="2025-01-29T11:36:34.570037092Z" level=info msg="TearDown network for sandbox \"6b8213b783a7c465b327152c49a96b3af800823e3978928ad141cc9679eef942\" successfully" Jan 29 11:36:34.570097 containerd[1492]: time="2025-01-29T11:36:34.570053883Z" level=info msg="StopPodSandbox for \"6b8213b783a7c465b327152c49a96b3af800823e3978928ad141cc9679eef942\" returns successfully" Jan 29 11:36:34.570338 containerd[1492]: time="2025-01-29T11:36:34.570290998Z" level=info msg="RemovePodSandbox for \"6b8213b783a7c465b327152c49a96b3af800823e3978928ad141cc9679eef942\"" Jan 29 11:36:34.570338 containerd[1492]: time="2025-01-29T11:36:34.570314262Z" level=info msg="Forcibly stopping sandbox \"6b8213b783a7c465b327152c49a96b3af800823e3978928ad141cc9679eef942\"" Jan 29 11:36:34.570519 containerd[1492]: time="2025-01-29T11:36:34.570392779Z" level=info msg="TearDown network for sandbox \"6b8213b783a7c465b327152c49a96b3af800823e3978928ad141cc9679eef942\" successfully" Jan 29 11:36:34.574607 containerd[1492]: time="2025-01-29T11:36:34.574569217Z" level=warning msg="Failed to get podSandbox status for container event for sandboxID \"6b8213b783a7c465b327152c49a96b3af800823e3978928ad141cc9679eef942\": an error occurred when try to find sandbox: not found. Sending the event with nil podSandboxStatus." 
Jan 29 11:36:34.574673 containerd[1492]: time="2025-01-29T11:36:34.574639700Z" level=info msg="RemovePodSandbox \"6b8213b783a7c465b327152c49a96b3af800823e3978928ad141cc9679eef942\" returns successfully" Jan 29 11:36:34.574907 containerd[1492]: time="2025-01-29T11:36:34.574876654Z" level=info msg="StopPodSandbox for \"9c2e1fb00b0ed28c181a6f5615fe212925f0c2f757fdb4ef8583ddfde1de9f84\"" Jan 29 11:36:34.574988 containerd[1492]: time="2025-01-29T11:36:34.574967174Z" level=info msg="TearDown network for sandbox \"9c2e1fb00b0ed28c181a6f5615fe212925f0c2f757fdb4ef8583ddfde1de9f84\" successfully" Jan 29 11:36:34.574988 containerd[1492]: time="2025-01-29T11:36:34.574984286Z" level=info msg="StopPodSandbox for \"9c2e1fb00b0ed28c181a6f5615fe212925f0c2f757fdb4ef8583ddfde1de9f84\" returns successfully" Jan 29 11:36:34.575219 containerd[1492]: time="2025-01-29T11:36:34.575174182Z" level=info msg="RemovePodSandbox for \"9c2e1fb00b0ed28c181a6f5615fe212925f0c2f757fdb4ef8583ddfde1de9f84\"" Jan 29 11:36:34.575219 containerd[1492]: time="2025-01-29T11:36:34.575201613Z" level=info msg="Forcibly stopping sandbox \"9c2e1fb00b0ed28c181a6f5615fe212925f0c2f757fdb4ef8583ddfde1de9f84\"" Jan 29 11:36:34.575320 containerd[1492]: time="2025-01-29T11:36:34.575276514Z" level=info msg="TearDown network for sandbox \"9c2e1fb00b0ed28c181a6f5615fe212925f0c2f757fdb4ef8583ddfde1de9f84\" successfully" Jan 29 11:36:34.579507 containerd[1492]: time="2025-01-29T11:36:34.579459885Z" level=warning msg="Failed to get podSandbox status for container event for sandboxID \"9c2e1fb00b0ed28c181a6f5615fe212925f0c2f757fdb4ef8583ddfde1de9f84\": an error occurred when try to find sandbox: not found. Sending the event with nil podSandboxStatus." 
Jan 29 11:36:34.579549 containerd[1492]: time="2025-01-29T11:36:34.579509057Z" level=info msg="RemovePodSandbox \"9c2e1fb00b0ed28c181a6f5615fe212925f0c2f757fdb4ef8583ddfde1de9f84\" returns successfully" Jan 29 11:36:34.579857 containerd[1492]: time="2025-01-29T11:36:34.579824188Z" level=info msg="StopPodSandbox for \"4467e330ce986042ee6370798d0cfac69c12394e8c48bb378f3d8274118997f5\"" Jan 29 11:36:34.579938 containerd[1492]: time="2025-01-29T11:36:34.579923865Z" level=info msg="TearDown network for sandbox \"4467e330ce986042ee6370798d0cfac69c12394e8c48bb378f3d8274118997f5\" successfully" Jan 29 11:36:34.579969 containerd[1492]: time="2025-01-29T11:36:34.579938833Z" level=info msg="StopPodSandbox for \"4467e330ce986042ee6370798d0cfac69c12394e8c48bb378f3d8274118997f5\" returns successfully" Jan 29 11:36:34.580181 containerd[1492]: time="2025-01-29T11:36:34.580133508Z" level=info msg="RemovePodSandbox for \"4467e330ce986042ee6370798d0cfac69c12394e8c48bb378f3d8274118997f5\"" Jan 29 11:36:34.580181 containerd[1492]: time="2025-01-29T11:36:34.580156963Z" level=info msg="Forcibly stopping sandbox \"4467e330ce986042ee6370798d0cfac69c12394e8c48bb378f3d8274118997f5\"" Jan 29 11:36:34.580266 containerd[1492]: time="2025-01-29T11:36:34.580224851Z" level=info msg="TearDown network for sandbox \"4467e330ce986042ee6370798d0cfac69c12394e8c48bb378f3d8274118997f5\" successfully" Jan 29 11:36:34.584249 containerd[1492]: time="2025-01-29T11:36:34.584197846Z" level=warning msg="Failed to get podSandbox status for container event for sandboxID \"4467e330ce986042ee6370798d0cfac69c12394e8c48bb378f3d8274118997f5\": an error occurred when try to find sandbox: not found. Sending the event with nil podSandboxStatus." 
Jan 29 11:36:34.584249 containerd[1492]: time="2025-01-29T11:36:34.584235086Z" level=info msg="RemovePodSandbox \"4467e330ce986042ee6370798d0cfac69c12394e8c48bb378f3d8274118997f5\" returns successfully" Jan 29 11:36:34.584509 containerd[1492]: time="2025-01-29T11:36:34.584488271Z" level=info msg="StopPodSandbox for \"e52c96ad530fba430b707a5dbe71a37681b523b56d3b4207670eff5fc15bfd6a\"" Jan 29 11:36:34.584590 containerd[1492]: time="2025-01-29T11:36:34.584564003Z" level=info msg="TearDown network for sandbox \"e52c96ad530fba430b707a5dbe71a37681b523b56d3b4207670eff5fc15bfd6a\" successfully" Jan 29 11:36:34.584590 containerd[1492]: time="2025-01-29T11:36:34.584574924Z" level=info msg="StopPodSandbox for \"e52c96ad530fba430b707a5dbe71a37681b523b56d3b4207670eff5fc15bfd6a\" returns successfully" Jan 29 11:36:34.585046 containerd[1492]: time="2025-01-29T11:36:34.585025930Z" level=info msg="RemovePodSandbox for \"e52c96ad530fba430b707a5dbe71a37681b523b56d3b4207670eff5fc15bfd6a\"" Jan 29 11:36:34.585046 containerd[1492]: time="2025-01-29T11:36:34.585045748Z" level=info msg="Forcibly stopping sandbox \"e52c96ad530fba430b707a5dbe71a37681b523b56d3b4207670eff5fc15bfd6a\"" Jan 29 11:36:34.585142 containerd[1492]: time="2025-01-29T11:36:34.585115148Z" level=info msg="TearDown network for sandbox \"e52c96ad530fba430b707a5dbe71a37681b523b56d3b4207670eff5fc15bfd6a\" successfully" Jan 29 11:36:34.588701 containerd[1492]: time="2025-01-29T11:36:34.588666902Z" level=warning msg="Failed to get podSandbox status for container event for sandboxID \"e52c96ad530fba430b707a5dbe71a37681b523b56d3b4207670eff5fc15bfd6a\": an error occurred when try to find sandbox: not found. Sending the event with nil podSandboxStatus." 
Jan 29 11:36:34.588701 containerd[1492]: time="2025-01-29T11:36:34.588698462Z" level=info msg="RemovePodSandbox \"e52c96ad530fba430b707a5dbe71a37681b523b56d3b4207670eff5fc15bfd6a\" returns successfully" Jan 29 11:36:34.588979 containerd[1492]: time="2025-01-29T11:36:34.588955204Z" level=info msg="StopPodSandbox for \"7796f2b8c41cac5338ffa9b402e0614ff1aa9234cde81ec23c1a0b7905f12023\"" Jan 29 11:36:34.589072 containerd[1492]: time="2025-01-29T11:36:34.589041937Z" level=info msg="TearDown network for sandbox \"7796f2b8c41cac5338ffa9b402e0614ff1aa9234cde81ec23c1a0b7905f12023\" successfully" Jan 29 11:36:34.589072 containerd[1492]: time="2025-01-29T11:36:34.589052877Z" level=info msg="StopPodSandbox for \"7796f2b8c41cac5338ffa9b402e0614ff1aa9234cde81ec23c1a0b7905f12023\" returns successfully" Jan 29 11:36:34.589317 containerd[1492]: time="2025-01-29T11:36:34.589265396Z" level=info msg="RemovePodSandbox for \"7796f2b8c41cac5338ffa9b402e0614ff1aa9234cde81ec23c1a0b7905f12023\"" Jan 29 11:36:34.589317 containerd[1492]: time="2025-01-29T11:36:34.589287257Z" level=info msg="Forcibly stopping sandbox \"7796f2b8c41cac5338ffa9b402e0614ff1aa9234cde81ec23c1a0b7905f12023\"" Jan 29 11:36:34.589382 containerd[1492]: time="2025-01-29T11:36:34.589348672Z" level=info msg="TearDown network for sandbox \"7796f2b8c41cac5338ffa9b402e0614ff1aa9234cde81ec23c1a0b7905f12023\" successfully" Jan 29 11:36:34.592962 containerd[1492]: time="2025-01-29T11:36:34.592920004Z" level=warning msg="Failed to get podSandbox status for container event for sandboxID \"7796f2b8c41cac5338ffa9b402e0614ff1aa9234cde81ec23c1a0b7905f12023\": an error occurred when try to find sandbox: not found. Sending the event with nil podSandboxStatus." 
Jan 29 11:36:34.592962 containerd[1492]: time="2025-01-29T11:36:34.592953157Z" level=info msg="RemovePodSandbox \"7796f2b8c41cac5338ffa9b402e0614ff1aa9234cde81ec23c1a0b7905f12023\" returns successfully" Jan 29 11:36:34.593305 containerd[1492]: time="2025-01-29T11:36:34.593262707Z" level=info msg="StopPodSandbox for \"dd045a0823620908e29eab7f45fc540dbd8b4da70a585fad2b83ddf304ac7fc1\"" Jan 29 11:36:34.593449 containerd[1492]: time="2025-01-29T11:36:34.593411847Z" level=info msg="TearDown network for sandbox \"dd045a0823620908e29eab7f45fc540dbd8b4da70a585fad2b83ddf304ac7fc1\" successfully" Jan 29 11:36:34.593449 containerd[1492]: time="2025-01-29T11:36:34.593430542Z" level=info msg="StopPodSandbox for \"dd045a0823620908e29eab7f45fc540dbd8b4da70a585fad2b83ddf304ac7fc1\" returns successfully" Jan 29 11:36:34.595574 containerd[1492]: time="2025-01-29T11:36:34.593728411Z" level=info msg="RemovePodSandbox for \"dd045a0823620908e29eab7f45fc540dbd8b4da70a585fad2b83ddf304ac7fc1\"" Jan 29 11:36:34.595574 containerd[1492]: time="2025-01-29T11:36:34.593758086Z" level=info msg="Forcibly stopping sandbox \"dd045a0823620908e29eab7f45fc540dbd8b4da70a585fad2b83ddf304ac7fc1\"" Jan 29 11:36:34.595574 containerd[1492]: time="2025-01-29T11:36:34.593841724Z" level=info msg="TearDown network for sandbox \"dd045a0823620908e29eab7f45fc540dbd8b4da70a585fad2b83ddf304ac7fc1\" successfully" Jan 29 11:36:34.597461 containerd[1492]: time="2025-01-29T11:36:34.597417073Z" level=warning msg="Failed to get podSandbox status for container event for sandboxID \"dd045a0823620908e29eab7f45fc540dbd8b4da70a585fad2b83ddf304ac7fc1\": an error occurred when try to find sandbox: not found. Sending the event with nil podSandboxStatus." 
Jan 29 11:36:34.597554 containerd[1492]: time="2025-01-29T11:36:34.597471976Z" level=info msg="RemovePodSandbox \"dd045a0823620908e29eab7f45fc540dbd8b4da70a585fad2b83ddf304ac7fc1\" returns successfully" Jan 29 11:36:34.597795 containerd[1492]: time="2025-01-29T11:36:34.597763875Z" level=info msg="StopPodSandbox for \"62e6cc0503d21814956a6c95475295a49bf85000e6e767d35029c30730d88e17\"" Jan 29 11:36:34.597890 containerd[1492]: time="2025-01-29T11:36:34.597872979Z" level=info msg="TearDown network for sandbox \"62e6cc0503d21814956a6c95475295a49bf85000e6e767d35029c30730d88e17\" successfully" Jan 29 11:36:34.597927 containerd[1492]: time="2025-01-29T11:36:34.597888318Z" level=info msg="StopPodSandbox for \"62e6cc0503d21814956a6c95475295a49bf85000e6e767d35029c30730d88e17\" returns successfully" Jan 29 11:36:34.598105 containerd[1492]: time="2025-01-29T11:36:34.598083694Z" level=info msg="RemovePodSandbox for \"62e6cc0503d21814956a6c95475295a49bf85000e6e767d35029c30730d88e17\"" Jan 29 11:36:34.598142 containerd[1492]: time="2025-01-29T11:36:34.598105966Z" level=info msg="Forcibly stopping sandbox \"62e6cc0503d21814956a6c95475295a49bf85000e6e767d35029c30730d88e17\"" Jan 29 11:36:34.598218 containerd[1492]: time="2025-01-29T11:36:34.598186808Z" level=info msg="TearDown network for sandbox \"62e6cc0503d21814956a6c95475295a49bf85000e6e767d35029c30730d88e17\" successfully" Jan 29 11:36:34.602507 containerd[1492]: time="2025-01-29T11:36:34.602461619Z" level=warning msg="Failed to get podSandbox status for container event for sandboxID \"62e6cc0503d21814956a6c95475295a49bf85000e6e767d35029c30730d88e17\": an error occurred when try to find sandbox: not found. Sending the event with nil podSandboxStatus." 
Jan 29 11:36:34.602547 containerd[1492]: time="2025-01-29T11:36:34.602509329Z" level=info msg="RemovePodSandbox \"62e6cc0503d21814956a6c95475295a49bf85000e6e767d35029c30730d88e17\" returns successfully" Jan 29 11:36:34.602873 containerd[1492]: time="2025-01-29T11:36:34.602844208Z" level=info msg="StopPodSandbox for \"5e37377a6c7e16ee19055fe266012ffe5537a92df9268d751dcb96f4c96359da\"" Jan 29 11:36:34.602961 containerd[1492]: time="2025-01-29T11:36:34.602932123Z" level=info msg="TearDown network for sandbox \"5e37377a6c7e16ee19055fe266012ffe5537a92df9268d751dcb96f4c96359da\" successfully" Jan 29 11:36:34.602961 containerd[1492]: time="2025-01-29T11:36:34.602942412Z" level=info msg="StopPodSandbox for \"5e37377a6c7e16ee19055fe266012ffe5537a92df9268d751dcb96f4c96359da\" returns successfully" Jan 29 11:36:34.603254 containerd[1492]: time="2025-01-29T11:36:34.603219993Z" level=info msg="RemovePodSandbox for \"5e37377a6c7e16ee19055fe266012ffe5537a92df9268d751dcb96f4c96359da\"" Jan 29 11:36:34.603299 containerd[1492]: time="2025-01-29T11:36:34.603258976Z" level=info msg="Forcibly stopping sandbox \"5e37377a6c7e16ee19055fe266012ffe5537a92df9268d751dcb96f4c96359da\"" Jan 29 11:36:34.603394 containerd[1492]: time="2025-01-29T11:36:34.603337403Z" level=info msg="TearDown network for sandbox \"5e37377a6c7e16ee19055fe266012ffe5537a92df9268d751dcb96f4c96359da\" successfully" Jan 29 11:36:34.607328 containerd[1492]: time="2025-01-29T11:36:34.607273009Z" level=warning msg="Failed to get podSandbox status for container event for sandboxID \"5e37377a6c7e16ee19055fe266012ffe5537a92df9268d751dcb96f4c96359da\": an error occurred when try to find sandbox: not found. Sending the event with nil podSandboxStatus." 
Jan 29 11:36:34.607419 containerd[1492]: time="2025-01-29T11:36:34.607332610Z" level=info msg="RemovePodSandbox \"5e37377a6c7e16ee19055fe266012ffe5537a92df9268d751dcb96f4c96359da\" returns successfully" Jan 29 11:36:34.607727 containerd[1492]: time="2025-01-29T11:36:34.607702495Z" level=info msg="StopPodSandbox for \"c813bd7f3e0a9030e60075419217b1ddcf7974cf9fe9ec3a62f9cc760570f4ff\"" Jan 29 11:36:34.607817 containerd[1492]: time="2025-01-29T11:36:34.607797523Z" level=info msg="TearDown network for sandbox \"c813bd7f3e0a9030e60075419217b1ddcf7974cf9fe9ec3a62f9cc760570f4ff\" successfully" Jan 29 11:36:34.607817 containerd[1492]: time="2025-01-29T11:36:34.607809726Z" level=info msg="StopPodSandbox for \"c813bd7f3e0a9030e60075419217b1ddcf7974cf9fe9ec3a62f9cc760570f4ff\" returns successfully" Jan 29 11:36:34.608130 containerd[1492]: time="2025-01-29T11:36:34.608100301Z" level=info msg="RemovePodSandbox for \"c813bd7f3e0a9030e60075419217b1ddcf7974cf9fe9ec3a62f9cc760570f4ff\"" Jan 29 11:36:34.608130 containerd[1492]: time="2025-01-29T11:36:34.608120208Z" level=info msg="Forcibly stopping sandbox \"c813bd7f3e0a9030e60075419217b1ddcf7974cf9fe9ec3a62f9cc760570f4ff\"" Jan 29 11:36:34.608230 containerd[1492]: time="2025-01-29T11:36:34.608189539Z" level=info msg="TearDown network for sandbox \"c813bd7f3e0a9030e60075419217b1ddcf7974cf9fe9ec3a62f9cc760570f4ff\" successfully" Jan 29 11:36:34.615018 containerd[1492]: time="2025-01-29T11:36:34.614906955Z" level=warning msg="Failed to get podSandbox status for container event for sandboxID \"c813bd7f3e0a9030e60075419217b1ddcf7974cf9fe9ec3a62f9cc760570f4ff\": an error occurred when try to find sandbox: not found. Sending the event with nil podSandboxStatus." 
Jan 29 11:36:34.615018 containerd[1492]: time="2025-01-29T11:36:34.614963571Z" level=info msg="RemovePodSandbox \"c813bd7f3e0a9030e60075419217b1ddcf7974cf9fe9ec3a62f9cc760570f4ff\" returns successfully" Jan 29 11:36:34.615577 containerd[1492]: time="2025-01-29T11:36:34.615529523Z" level=info msg="StopPodSandbox for \"c1ad62e2e4cccfb4748651dd83ebb7c29397ff496d8fe77bbfc3dafc4e0ece8e\"" Jan 29 11:36:34.615728 containerd[1492]: time="2025-01-29T11:36:34.615659507Z" level=info msg="TearDown network for sandbox \"c1ad62e2e4cccfb4748651dd83ebb7c29397ff496d8fe77bbfc3dafc4e0ece8e\" successfully" Jan 29 11:36:34.615728 containerd[1492]: time="2025-01-29T11:36:34.615677681Z" level=info msg="StopPodSandbox for \"c1ad62e2e4cccfb4748651dd83ebb7c29397ff496d8fe77bbfc3dafc4e0ece8e\" returns successfully" Jan 29 11:36:34.616115 containerd[1492]: time="2025-01-29T11:36:34.616072662Z" level=info msg="RemovePodSandbox for \"c1ad62e2e4cccfb4748651dd83ebb7c29397ff496d8fe77bbfc3dafc4e0ece8e\"" Jan 29 11:36:34.616148 containerd[1492]: time="2025-01-29T11:36:34.616123026Z" level=info msg="Forcibly stopping sandbox \"c1ad62e2e4cccfb4748651dd83ebb7c29397ff496d8fe77bbfc3dafc4e0ece8e\"" Jan 29 11:36:34.616286 containerd[1492]: time="2025-01-29T11:36:34.616235959Z" level=info msg="TearDown network for sandbox \"c1ad62e2e4cccfb4748651dd83ebb7c29397ff496d8fe77bbfc3dafc4e0ece8e\" successfully" Jan 29 11:36:34.620613 containerd[1492]: time="2025-01-29T11:36:34.620568199Z" level=warning msg="Failed to get podSandbox status for container event for sandboxID \"c1ad62e2e4cccfb4748651dd83ebb7c29397ff496d8fe77bbfc3dafc4e0ece8e\": an error occurred when try to find sandbox: not found. Sending the event with nil podSandboxStatus." 
Jan 29 11:36:34.620613 containerd[1492]: time="2025-01-29T11:36:34.620608775Z" level=info msg="RemovePodSandbox \"c1ad62e2e4cccfb4748651dd83ebb7c29397ff496d8fe77bbfc3dafc4e0ece8e\" returns successfully"
Jan 29 11:36:34.620934 containerd[1492]: time="2025-01-29T11:36:34.620902155Z" level=info msg="StopPodSandbox for \"fc2896e11c36986d8fde21f00e5a96b42534c3d205cbbed9dc871d57b74e32b9\""
Jan 29 11:36:34.621045 containerd[1492]: time="2025-01-29T11:36:34.621015158Z" level=info msg="TearDown network for sandbox \"fc2896e11c36986d8fde21f00e5a96b42534c3d205cbbed9dc871d57b74e32b9\" successfully"
Jan 29 11:36:34.621045 containerd[1492]: time="2025-01-29T11:36:34.621029224Z" level=info msg="StopPodSandbox for \"fc2896e11c36986d8fde21f00e5a96b42534c3d205cbbed9dc871d57b74e32b9\" returns successfully"
Jan 29 11:36:34.621342 containerd[1492]: time="2025-01-29T11:36:34.621301816Z" level=info msg="RemovePodSandbox for \"fc2896e11c36986d8fde21f00e5a96b42534c3d205cbbed9dc871d57b74e32b9\""
Jan 29 11:36:34.621394 containerd[1492]: time="2025-01-29T11:36:34.621346910Z" level=info msg="Forcibly stopping sandbox \"fc2896e11c36986d8fde21f00e5a96b42534c3d205cbbed9dc871d57b74e32b9\""
Jan 29 11:36:34.621503 containerd[1492]: time="2025-01-29T11:36:34.621459772Z" level=info msg="TearDown network for sandbox \"fc2896e11c36986d8fde21f00e5a96b42534c3d205cbbed9dc871d57b74e32b9\" successfully"
Jan 29 11:36:34.625483 containerd[1492]: time="2025-01-29T11:36:34.625422588Z" level=warning msg="Failed to get podSandbox status for container event for sandboxID \"fc2896e11c36986d8fde21f00e5a96b42534c3d205cbbed9dc871d57b74e32b9\": an error occurred when try to find sandbox: not found. Sending the event with nil podSandboxStatus."
Jan 29 11:36:34.625541 containerd[1492]: time="2025-01-29T11:36:34.625496166Z" level=info msg="RemovePodSandbox \"fc2896e11c36986d8fde21f00e5a96b42534c3d205cbbed9dc871d57b74e32b9\" returns successfully"
Jan 29 11:36:34.625861 containerd[1492]: time="2025-01-29T11:36:34.625832969Z" level=info msg="StopPodSandbox for \"080cc56d0ab33d56eb6a1d38d66c3a2cdd67647fca21a8c854e3a90bdc763db1\""
Jan 29 11:36:34.625968 containerd[1492]: time="2025-01-29T11:36:34.625945380Z" level=info msg="TearDown network for sandbox \"080cc56d0ab33d56eb6a1d38d66c3a2cdd67647fca21a8c854e3a90bdc763db1\" successfully"
Jan 29 11:36:34.625968 containerd[1492]: time="2025-01-29T11:36:34.625961280Z" level=info msg="StopPodSandbox for \"080cc56d0ab33d56eb6a1d38d66c3a2cdd67647fca21a8c854e3a90bdc763db1\" returns successfully"
Jan 29 11:36:34.626196 containerd[1492]: time="2025-01-29T11:36:34.626165743Z" level=info msg="RemovePodSandbox for \"080cc56d0ab33d56eb6a1d38d66c3a2cdd67647fca21a8c854e3a90bdc763db1\""
Jan 29 11:36:34.626196 containerd[1492]: time="2025-01-29T11:36:34.626194407Z" level=info msg="Forcibly stopping sandbox \"080cc56d0ab33d56eb6a1d38d66c3a2cdd67647fca21a8c854e3a90bdc763db1\""
Jan 29 11:36:34.626309 containerd[1492]: time="2025-01-29T11:36:34.626265360Z" level=info msg="TearDown network for sandbox \"080cc56d0ab33d56eb6a1d38d66c3a2cdd67647fca21a8c854e3a90bdc763db1\" successfully"
Jan 29 11:36:34.630026 containerd[1492]: time="2025-01-29T11:36:34.629993677Z" level=warning msg="Failed to get podSandbox status for container event for sandboxID \"080cc56d0ab33d56eb6a1d38d66c3a2cdd67647fca21a8c854e3a90bdc763db1\": an error occurred when try to find sandbox: not found. Sending the event with nil podSandboxStatus."
Jan 29 11:36:34.630098 containerd[1492]: time="2025-01-29T11:36:34.630050373Z" level=info msg="RemovePodSandbox \"080cc56d0ab33d56eb6a1d38d66c3a2cdd67647fca21a8c854e3a90bdc763db1\" returns successfully"
Jan 29 11:36:34.630374 containerd[1492]: time="2025-01-29T11:36:34.630354564Z" level=info msg="StopPodSandbox for \"e1a02e62fa690b02079fe64e81dbe8322e67817e7a20aa91d7cc841d5e25bba6\""
Jan 29 11:36:34.630472 containerd[1492]: time="2025-01-29T11:36:34.630456815Z" level=info msg="TearDown network for sandbox \"e1a02e62fa690b02079fe64e81dbe8322e67817e7a20aa91d7cc841d5e25bba6\" successfully"
Jan 29 11:36:34.630508 containerd[1492]: time="2025-01-29T11:36:34.630471262Z" level=info msg="StopPodSandbox for \"e1a02e62fa690b02079fe64e81dbe8322e67817e7a20aa91d7cc841d5e25bba6\" returns successfully"
Jan 29 11:36:34.630770 containerd[1492]: time="2025-01-29T11:36:34.630738925Z" level=info msg="RemovePodSandbox for \"e1a02e62fa690b02079fe64e81dbe8322e67817e7a20aa91d7cc841d5e25bba6\""
Jan 29 11:36:34.630770 containerd[1492]: time="2025-01-29T11:36:34.630761929Z" level=info msg="Forcibly stopping sandbox \"e1a02e62fa690b02079fe64e81dbe8322e67817e7a20aa91d7cc841d5e25bba6\""
Jan 29 11:36:34.630951 containerd[1492]: time="2025-01-29T11:36:34.630859462Z" level=info msg="TearDown network for sandbox \"e1a02e62fa690b02079fe64e81dbe8322e67817e7a20aa91d7cc841d5e25bba6\" successfully"
Jan 29 11:36:34.634835 containerd[1492]: time="2025-01-29T11:36:34.634783465Z" level=warning msg="Failed to get podSandbox status for container event for sandboxID \"e1a02e62fa690b02079fe64e81dbe8322e67817e7a20aa91d7cc841d5e25bba6\": an error occurred when try to find sandbox: not found. Sending the event with nil podSandboxStatus."
Jan 29 11:36:34.634927 containerd[1492]: time="2025-01-29T11:36:34.634859348Z" level=info msg="RemovePodSandbox \"e1a02e62fa690b02079fe64e81dbe8322e67817e7a20aa91d7cc841d5e25bba6\" returns successfully"
Jan 29 11:36:34.635240 containerd[1492]: time="2025-01-29T11:36:34.635204545Z" level=info msg="StopPodSandbox for \"8825c450d804aebd9a63530be345a53b152b35ff17031f6c5482c2b8008c22fd\""
Jan 29 11:36:34.635378 containerd[1492]: time="2025-01-29T11:36:34.635345570Z" level=info msg="TearDown network for sandbox \"8825c450d804aebd9a63530be345a53b152b35ff17031f6c5482c2b8008c22fd\" successfully"
Jan 29 11:36:34.635378 containerd[1492]: time="2025-01-29T11:36:34.635367390Z" level=info msg="StopPodSandbox for \"8825c450d804aebd9a63530be345a53b152b35ff17031f6c5482c2b8008c22fd\" returns successfully"
Jan 29 11:36:34.637454 containerd[1492]: time="2025-01-29T11:36:34.635794052Z" level=info msg="RemovePodSandbox for \"8825c450d804aebd9a63530be345a53b152b35ff17031f6c5482c2b8008c22fd\""
Jan 29 11:36:34.637454 containerd[1492]: time="2025-01-29T11:36:34.635817666Z" level=info msg="Forcibly stopping sandbox \"8825c450d804aebd9a63530be345a53b152b35ff17031f6c5482c2b8008c22fd\""
Jan 29 11:36:34.637454 containerd[1492]: time="2025-01-29T11:36:34.635882518Z" level=info msg="TearDown network for sandbox \"8825c450d804aebd9a63530be345a53b152b35ff17031f6c5482c2b8008c22fd\" successfully"
Jan 29 11:36:34.639914 containerd[1492]: time="2025-01-29T11:36:34.639876022Z" level=warning msg="Failed to get podSandbox status for container event for sandboxID \"8825c450d804aebd9a63530be345a53b152b35ff17031f6c5482c2b8008c22fd\": an error occurred when try to find sandbox: not found. Sending the event with nil podSandboxStatus."
Jan 29 11:36:34.639986 containerd[1492]: time="2025-01-29T11:36:34.639933970Z" level=info msg="RemovePodSandbox \"8825c450d804aebd9a63530be345a53b152b35ff17031f6c5482c2b8008c22fd\" returns successfully"
Jan 29 11:36:34.640302 containerd[1492]: time="2025-01-29T11:36:34.640276945Z" level=info msg="StopPodSandbox for \"183bedfa503a9d2160a322c44bf62cdfc0109e87f48648b193fd53fe82d96642\""
Jan 29 11:36:34.640436 containerd[1492]: time="2025-01-29T11:36:34.640411547Z" level=info msg="TearDown network for sandbox \"183bedfa503a9d2160a322c44bf62cdfc0109e87f48648b193fd53fe82d96642\" successfully"
Jan 29 11:36:34.640436 containerd[1492]: time="2025-01-29T11:36:34.640427357Z" level=info msg="StopPodSandbox for \"183bedfa503a9d2160a322c44bf62cdfc0109e87f48648b193fd53fe82d96642\" returns successfully"
Jan 29 11:36:34.640743 containerd[1492]: time="2025-01-29T11:36:34.640710057Z" level=info msg="RemovePodSandbox for \"183bedfa503a9d2160a322c44bf62cdfc0109e87f48648b193fd53fe82d96642\""
Jan 29 11:36:34.640743 containerd[1492]: time="2025-01-29T11:36:34.640742267Z" level=info msg="Forcibly stopping sandbox \"183bedfa503a9d2160a322c44bf62cdfc0109e87f48648b193fd53fe82d96642\""
Jan 29 11:36:34.640910 containerd[1492]: time="2025-01-29T11:36:34.640839099Z" level=info msg="TearDown network for sandbox \"183bedfa503a9d2160a322c44bf62cdfc0109e87f48648b193fd53fe82d96642\" successfully"
Jan 29 11:36:34.645087 containerd[1492]: time="2025-01-29T11:36:34.645040544Z" level=warning msg="Failed to get podSandbox status for container event for sandboxID \"183bedfa503a9d2160a322c44bf62cdfc0109e87f48648b193fd53fe82d96642\": an error occurred when try to find sandbox: not found. Sending the event with nil podSandboxStatus."
Jan 29 11:36:34.645195 containerd[1492]: time="2025-01-29T11:36:34.645098503Z" level=info msg="RemovePodSandbox \"183bedfa503a9d2160a322c44bf62cdfc0109e87f48648b193fd53fe82d96642\" returns successfully"
Jan 29 11:36:39.128602 systemd[1]: Started sshd@15-10.0.0.107:22-10.0.0.1:39940.service - OpenSSH per-connection server daemon (10.0.0.1:39940).
Jan 29 11:36:39.177568 sshd[5898]: Accepted publickey for core from 10.0.0.1 port 39940 ssh2: RSA SHA256:vglZfOE0APgUpJbg1gfFAEfTfpzlM1a6LSSiwuQWd4A
Jan 29 11:36:39.178950 sshd-session[5898]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Jan 29 11:36:39.182884 systemd-logind[1475]: New session 16 of user core.
Jan 29 11:36:39.189778 systemd[1]: Started session-16.scope - Session 16 of User core.
Jan 29 11:36:39.311456 sshd[5900]: Connection closed by 10.0.0.1 port 39940
Jan 29 11:36:39.311935 sshd-session[5898]: pam_unix(sshd:session): session closed for user core
Jan 29 11:36:39.321541 systemd[1]: sshd@15-10.0.0.107:22-10.0.0.1:39940.service: Deactivated successfully.
Jan 29 11:36:39.323918 systemd[1]: session-16.scope: Deactivated successfully.
Jan 29 11:36:39.325674 systemd-logind[1475]: Session 16 logged out. Waiting for processes to exit.
Jan 29 11:36:39.334908 systemd[1]: Started sshd@16-10.0.0.107:22-10.0.0.1:39942.service - OpenSSH per-connection server daemon (10.0.0.1:39942).
Jan 29 11:36:39.335860 systemd-logind[1475]: Removed session 16.
Jan 29 11:36:39.370563 sshd[5912]: Accepted publickey for core from 10.0.0.1 port 39942 ssh2: RSA SHA256:vglZfOE0APgUpJbg1gfFAEfTfpzlM1a6LSSiwuQWd4A
Jan 29 11:36:39.372007 sshd-session[5912]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Jan 29 11:36:39.375753 systemd-logind[1475]: New session 17 of user core.
Jan 29 11:36:39.386747 systemd[1]: Started session-17.scope - Session 17 of User core.
Jan 29 11:36:39.568979 sshd[5914]: Connection closed by 10.0.0.1 port 39942
Jan 29 11:36:39.569429 sshd-session[5912]: pam_unix(sshd:session): session closed for user core
Jan 29 11:36:39.580393 systemd[1]: sshd@16-10.0.0.107:22-10.0.0.1:39942.service: Deactivated successfully.
Jan 29 11:36:39.583323 systemd[1]: session-17.scope: Deactivated successfully.
Jan 29 11:36:39.585130 systemd-logind[1475]: Session 17 logged out. Waiting for processes to exit.
Jan 29 11:36:39.586688 systemd[1]: Started sshd@17-10.0.0.107:22-10.0.0.1:39948.service - OpenSSH per-connection server daemon (10.0.0.1:39948).
Jan 29 11:36:39.587942 systemd-logind[1475]: Removed session 17.
Jan 29 11:36:39.636057 sshd[5928]: Accepted publickey for core from 10.0.0.1 port 39948 ssh2: RSA SHA256:vglZfOE0APgUpJbg1gfFAEfTfpzlM1a6LSSiwuQWd4A
Jan 29 11:36:39.639816 sshd-session[5928]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Jan 29 11:36:39.642922 kubelet[2560]: E0129 11:36:39.642900 2560 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Jan 29 11:36:39.645296 systemd-logind[1475]: New session 18 of user core.
Jan 29 11:36:39.648767 systemd[1]: Started session-18.scope - Session 18 of User core.
Jan 29 11:36:41.802722 sshd[5948]: Connection closed by 10.0.0.1 port 39948
Jan 29 11:36:41.803118 sshd-session[5928]: pam_unix(sshd:session): session closed for user core
Jan 29 11:36:41.810699 systemd[1]: sshd@17-10.0.0.107:22-10.0.0.1:39948.service: Deactivated successfully.
Jan 29 11:36:41.812576 systemd[1]: session-18.scope: Deactivated successfully.
Jan 29 11:36:41.813365 systemd-logind[1475]: Session 18 logged out. Waiting for processes to exit.
Jan 29 11:36:41.822762 systemd[1]: Started sshd@18-10.0.0.107:22-10.0.0.1:39952.service - OpenSSH per-connection server daemon (10.0.0.1:39952).
Jan 29 11:36:41.824905 systemd-logind[1475]: Removed session 18.
Jan 29 11:36:41.873551 sshd[5968]: Accepted publickey for core from 10.0.0.1 port 39952 ssh2: RSA SHA256:vglZfOE0APgUpJbg1gfFAEfTfpzlM1a6LSSiwuQWd4A
Jan 29 11:36:41.875020 sshd-session[5968]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Jan 29 11:36:41.879142 systemd-logind[1475]: New session 19 of user core.
Jan 29 11:36:41.887750 systemd[1]: Started session-19.scope - Session 19 of User core.
Jan 29 11:36:42.236747 sshd[5970]: Connection closed by 10.0.0.1 port 39952
Jan 29 11:36:42.236934 sshd-session[5968]: pam_unix(sshd:session): session closed for user core
Jan 29 11:36:42.248768 systemd[1]: sshd@18-10.0.0.107:22-10.0.0.1:39952.service: Deactivated successfully.
Jan 29 11:36:42.250879 systemd[1]: session-19.scope: Deactivated successfully.
Jan 29 11:36:42.252376 systemd-logind[1475]: Session 19 logged out. Waiting for processes to exit.
Jan 29 11:36:42.253911 systemd[1]: Started sshd@19-10.0.0.107:22-10.0.0.1:39956.service - OpenSSH per-connection server daemon (10.0.0.1:39956).
Jan 29 11:36:42.254813 systemd-logind[1475]: Removed session 19.
Jan 29 11:36:42.293030 sshd[5980]: Accepted publickey for core from 10.0.0.1 port 39956 ssh2: RSA SHA256:vglZfOE0APgUpJbg1gfFAEfTfpzlM1a6LSSiwuQWd4A
Jan 29 11:36:42.294404 sshd-session[5980]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Jan 29 11:36:42.298483 systemd-logind[1475]: New session 20 of user core.
Jan 29 11:36:42.304792 systemd[1]: Started session-20.scope - Session 20 of User core.
Jan 29 11:36:42.428257 sshd[5982]: Connection closed by 10.0.0.1 port 39956
Jan 29 11:36:42.428597 sshd-session[5980]: pam_unix(sshd:session): session closed for user core
Jan 29 11:36:42.432142 systemd[1]: sshd@19-10.0.0.107:22-10.0.0.1:39956.service: Deactivated successfully.
Jan 29 11:36:42.434197 systemd[1]: session-20.scope: Deactivated successfully.
Jan 29 11:36:42.434824 systemd-logind[1475]: Session 20 logged out. Waiting for processes to exit.
Jan 29 11:36:42.435539 systemd-logind[1475]: Removed session 20.
Jan 29 11:36:47.450205 systemd[1]: Started sshd@20-10.0.0.107:22-10.0.0.1:43024.service - OpenSSH per-connection server daemon (10.0.0.1:43024).
Jan 29 11:36:47.487060 sshd[5995]: Accepted publickey for core from 10.0.0.1 port 43024 ssh2: RSA SHA256:vglZfOE0APgUpJbg1gfFAEfTfpzlM1a6LSSiwuQWd4A
Jan 29 11:36:47.489792 sshd-session[5995]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Jan 29 11:36:47.494172 systemd-logind[1475]: New session 21 of user core.
Jan 29 11:36:47.515895 systemd[1]: Started session-21.scope - Session 21 of User core.
Jan 29 11:36:47.622257 sshd[5997]: Connection closed by 10.0.0.1 port 43024
Jan 29 11:36:47.622665 sshd-session[5995]: pam_unix(sshd:session): session closed for user core
Jan 29 11:36:47.626723 systemd[1]: sshd@20-10.0.0.107:22-10.0.0.1:43024.service: Deactivated successfully.
Jan 29 11:36:47.628557 systemd[1]: session-21.scope: Deactivated successfully.
Jan 29 11:36:47.629312 systemd-logind[1475]: Session 21 logged out. Waiting for processes to exit.
Jan 29 11:36:47.630264 systemd-logind[1475]: Removed session 21.
Jan 29 11:36:48.057471 kubelet[2560]: E0129 11:36:48.057402 2560 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Jan 29 11:36:52.635791 systemd[1]: Started sshd@21-10.0.0.107:22-10.0.0.1:43032.service - OpenSSH per-connection server daemon (10.0.0.1:43032).
Jan 29 11:36:52.680749 sshd[6012]: Accepted publickey for core from 10.0.0.1 port 43032 ssh2: RSA SHA256:vglZfOE0APgUpJbg1gfFAEfTfpzlM1a6LSSiwuQWd4A
Jan 29 11:36:52.682284 sshd-session[6012]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Jan 29 11:36:52.686212 systemd-logind[1475]: New session 22 of user core.
Jan 29 11:36:52.696759 systemd[1]: Started session-22.scope - Session 22 of User core.
Jan 29 11:36:52.799782 sshd[6014]: Connection closed by 10.0.0.1 port 43032
Jan 29 11:36:52.800156 sshd-session[6012]: pam_unix(sshd:session): session closed for user core
Jan 29 11:36:52.803874 systemd[1]: sshd@21-10.0.0.107:22-10.0.0.1:43032.service: Deactivated successfully.
Jan 29 11:36:52.807156 systemd[1]: session-22.scope: Deactivated successfully.
Jan 29 11:36:52.807900 systemd-logind[1475]: Session 22 logged out. Waiting for processes to exit.
Jan 29 11:36:52.808806 systemd-logind[1475]: Removed session 22.
Jan 29 11:36:56.057254 kubelet[2560]: E0129 11:36:56.057201 2560 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Jan 29 11:36:57.811748 systemd[1]: Started sshd@22-10.0.0.107:22-10.0.0.1:35816.service - OpenSSH per-connection server daemon (10.0.0.1:35816).
Jan 29 11:36:57.852472 sshd[6034]: Accepted publickey for core from 10.0.0.1 port 35816 ssh2: RSA SHA256:vglZfOE0APgUpJbg1gfFAEfTfpzlM1a6LSSiwuQWd4A
Jan 29 11:36:57.853896 sshd-session[6034]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Jan 29 11:36:57.857602 systemd-logind[1475]: New session 23 of user core.
Jan 29 11:36:57.863766 systemd[1]: Started session-23.scope - Session 23 of User core.
Jan 29 11:36:57.966754 sshd[6036]: Connection closed by 10.0.0.1 port 35816
Jan 29 11:36:57.967099 sshd-session[6034]: pam_unix(sshd:session): session closed for user core
Jan 29 11:36:57.971159 systemd[1]: sshd@22-10.0.0.107:22-10.0.0.1:35816.service: Deactivated successfully.
Jan 29 11:36:57.972993 systemd[1]: session-23.scope: Deactivated successfully.
Jan 29 11:36:57.973525 systemd-logind[1475]: Session 23 logged out. Waiting for processes to exit.
Jan 29 11:36:57.974416 systemd-logind[1475]: Removed session 23.
Jan 29 11:37:00.142300 systemd[1]: run-containerd-runc-k8s.io-d8589dae18922eea77c7454d29fe6785f4ae19379a44c885df5747d35179b5ff-runc.iVanxI.mount: Deactivated successfully.
Jan 29 11:37:02.057559 kubelet[2560]: E0129 11:37:02.057505 2560 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Jan 29 11:37:02.983652 systemd[1]: Started sshd@23-10.0.0.107:22-10.0.0.1:35830.service - OpenSSH per-connection server daemon (10.0.0.1:35830).
Jan 29 11:37:03.025202 sshd[6070]: Accepted publickey for core from 10.0.0.1 port 35830 ssh2: RSA SHA256:vglZfOE0APgUpJbg1gfFAEfTfpzlM1a6LSSiwuQWd4A
Jan 29 11:37:03.027015 sshd-session[6070]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Jan 29 11:37:03.031346 systemd-logind[1475]: New session 24 of user core.
Jan 29 11:37:03.041904 systemd[1]: Started session-24.scope - Session 24 of User core.
Jan 29 11:37:03.149762 sshd[6072]: Connection closed by 10.0.0.1 port 35830
Jan 29 11:37:03.150122 sshd-session[6070]: pam_unix(sshd:session): session closed for user core
Jan 29 11:37:03.153888 systemd[1]: sshd@23-10.0.0.107:22-10.0.0.1:35830.service: Deactivated successfully.
Jan 29 11:37:03.155851 systemd[1]: session-24.scope: Deactivated successfully.
Jan 29 11:37:03.156408 systemd-logind[1475]: Session 24 logged out. Waiting for processes to exit.
Jan 29 11:37:03.157340 systemd-logind[1475]: Removed session 24.