Jan 29 11:30:52.915530 kernel: Linux version 6.6.74-flatcar (build@pony-truck.infra.kinvolk.io) (x86_64-cros-linux-gnu-gcc (Gentoo Hardened 13.3.1_p20240614 p1) 13.3.1 20240614, GNU ld (Gentoo 2.42 p6) 2.42.0) #1 SMP PREEMPT_DYNAMIC Wed Jan 29 09:36:13 -00 2025
Jan 29 11:30:52.915557 kernel: Command line: BOOT_IMAGE=/flatcar/vmlinuz-a mount.usr=/dev/mapper/usr verity.usr=PARTUUID=7130c94a-213a-4e5a-8e26-6cce9662f132 rootflags=rw mount.usrflags=ro consoleblank=0 root=LABEL=ROOT console=ttyS0,115200 flatcar.first_boot=detected verity.usrhash=519b8fded83181f8e61f734d5291f916d7548bfba9487c78bcb50d002d81719d
Jan 29 11:30:52.915572 kernel: BIOS-provided physical RAM map:
Jan 29 11:30:52.915580 kernel: BIOS-e820: [mem 0x0000000000000000-0x000000000009fbff] usable
Jan 29 11:30:52.915589 kernel: BIOS-e820: [mem 0x000000000009fc00-0x000000000009ffff] reserved
Jan 29 11:30:52.915597 kernel: BIOS-e820: [mem 0x00000000000f0000-0x00000000000fffff] reserved
Jan 29 11:30:52.915607 kernel: BIOS-e820: [mem 0x0000000000100000-0x000000009cfdbfff] usable
Jan 29 11:30:52.915615 kernel: BIOS-e820: [mem 0x000000009cfdc000-0x000000009cffffff] reserved
Jan 29 11:30:52.915624 kernel: BIOS-e820: [mem 0x00000000b0000000-0x00000000bfffffff] reserved
Jan 29 11:30:52.915635 kernel: BIOS-e820: [mem 0x00000000fed1c000-0x00000000fed1ffff] reserved
Jan 29 11:30:52.915643 kernel: BIOS-e820: [mem 0x00000000feffc000-0x00000000feffffff] reserved
Jan 29 11:30:52.915652 kernel: BIOS-e820: [mem 0x00000000fffc0000-0x00000000ffffffff] reserved
Jan 29 11:30:52.915660 kernel: BIOS-e820: [mem 0x000000fd00000000-0x000000ffffffffff] reserved
Jan 29 11:30:52.915669 kernel: NX (Execute Disable) protection: active
Jan 29 11:30:52.915679 kernel: APIC: Static calls initialized
Jan 29 11:30:52.915691 kernel: SMBIOS 2.8 present.
Jan 29 11:30:52.915700 kernel: DMI: QEMU Standard PC (Q35 + ICH9, 2009), BIOS 1.16.2-debian-1.16.2-1 04/01/2014
Jan 29 11:30:52.915709 kernel: Hypervisor detected: KVM
Jan 29 11:30:52.915718 kernel: kvm-clock: Using msrs 4b564d01 and 4b564d00
Jan 29 11:30:52.915727 kernel: kvm-clock: using sched offset of 3221177016 cycles
Jan 29 11:30:52.915737 kernel: clocksource: kvm-clock: mask: 0xffffffffffffffff max_cycles: 0x1cd42e4dffb, max_idle_ns: 881590591483 ns
Jan 29 11:30:52.915746 kernel: tsc: Detected 2794.748 MHz processor
Jan 29 11:30:52.915756 kernel: e820: update [mem 0x00000000-0x00000fff] usable ==> reserved
Jan 29 11:30:52.915766 kernel: e820: remove [mem 0x000a0000-0x000fffff] usable
Jan 29 11:30:52.915775 kernel: last_pfn = 0x9cfdc max_arch_pfn = 0x400000000
Jan 29 11:30:52.915788 kernel: MTRR map: 4 entries (3 fixed + 1 variable; max 19), built from 8 variable MTRRs
Jan 29 11:30:52.915797 kernel: x86/PAT: Configuration [0-7]: WB WC UC- UC WB WP UC- WT
Jan 29 11:30:52.915806 kernel: Using GB pages for direct mapping
Jan 29 11:30:52.915816 kernel: ACPI: Early table checksum verification disabled
Jan 29 11:30:52.915825 kernel: ACPI: RSDP 0x00000000000F59D0 000014 (v00 BOCHS )
Jan 29 11:30:52.915834 kernel: ACPI: RSDT 0x000000009CFE2408 000038 (v01 BOCHS BXPC 00000001 BXPC 00000001)
Jan 29 11:30:52.915844 kernel: ACPI: FACP 0x000000009CFE21E8 0000F4 (v03 BOCHS BXPC 00000001 BXPC 00000001)
Jan 29 11:30:52.915853 kernel: ACPI: DSDT 0x000000009CFE0040 0021A8 (v01 BOCHS BXPC 00000001 BXPC 00000001)
Jan 29 11:30:52.915866 kernel: ACPI: FACS 0x000000009CFE0000 000040
Jan 29 11:30:52.915875 kernel: ACPI: APIC 0x000000009CFE22DC 000090 (v01 BOCHS BXPC 00000001 BXPC 00000001)
Jan 29 11:30:52.915885 kernel: ACPI: HPET 0x000000009CFE236C 000038 (v01 BOCHS BXPC 00000001 BXPC 00000001)
Jan 29 11:30:52.915894 kernel: ACPI: MCFG 0x000000009CFE23A4 00003C (v01 BOCHS BXPC 00000001 BXPC 00000001)
Jan 29 11:30:52.915903 kernel: ACPI: WAET 0x000000009CFE23E0 000028 (v01 BOCHS BXPC 00000001 BXPC 00000001)
Jan 29 11:30:52.915913 kernel: ACPI: Reserving FACP table memory at [mem 0x9cfe21e8-0x9cfe22db]
Jan 29 11:30:52.915922 kernel: ACPI: Reserving DSDT table memory at [mem 0x9cfe0040-0x9cfe21e7]
Jan 29 11:30:52.915936 kernel: ACPI: Reserving FACS table memory at [mem 0x9cfe0000-0x9cfe003f]
Jan 29 11:30:52.915949 kernel: ACPI: Reserving APIC table memory at [mem 0x9cfe22dc-0x9cfe236b]
Jan 29 11:30:52.915958 kernel: ACPI: Reserving HPET table memory at [mem 0x9cfe236c-0x9cfe23a3]
Jan 29 11:30:52.915968 kernel: ACPI: Reserving MCFG table memory at [mem 0x9cfe23a4-0x9cfe23df]
Jan 29 11:30:52.915978 kernel: ACPI: Reserving WAET table memory at [mem 0x9cfe23e0-0x9cfe2407]
Jan 29 11:30:52.915988 kernel: No NUMA configuration found
Jan 29 11:30:52.915997 kernel: Faking a node at [mem 0x0000000000000000-0x000000009cfdbfff]
Jan 29 11:30:52.916007 kernel: NODE_DATA(0) allocated [mem 0x9cfd6000-0x9cfdbfff]
Jan 29 11:30:52.916020 kernel: Zone ranges:
Jan 29 11:30:52.916030 kernel: DMA [mem 0x0000000000001000-0x0000000000ffffff]
Jan 29 11:30:52.916040 kernel: DMA32 [mem 0x0000000001000000-0x000000009cfdbfff]
Jan 29 11:30:52.916049 kernel: Normal empty
Jan 29 11:30:52.916059 kernel: Movable zone start for each node
Jan 29 11:30:52.916069 kernel: Early memory node ranges
Jan 29 11:30:52.916078 kernel: node 0: [mem 0x0000000000001000-0x000000000009efff]
Jan 29 11:30:52.916088 kernel: node 0: [mem 0x0000000000100000-0x000000009cfdbfff]
Jan 29 11:30:52.916100 kernel: Initmem setup node 0 [mem 0x0000000000001000-0x000000009cfdbfff]
Jan 29 11:30:52.916115 kernel: On node 0, zone DMA: 1 pages in unavailable ranges
Jan 29 11:30:52.916126 kernel: On node 0, zone DMA: 97 pages in unavailable ranges
Jan 29 11:30:52.916136 kernel: On node 0, zone DMA32: 12324 pages in unavailable ranges
Jan 29 11:30:52.916145 kernel: ACPI: PM-Timer IO Port: 0x608
Jan 29 11:30:52.916155 kernel: ACPI: LAPIC_NMI (acpi_id[0xff] dfl dfl lint[0x1])
Jan 29 11:30:52.916165 kernel: IOAPIC[0]: apic_id 0, version 17, address 0xfec00000, GSI 0-23
Jan 29 11:30:52.916175 kernel: ACPI: INT_SRC_OVR (bus 0 bus_irq 0 global_irq 2 dfl dfl)
Jan 29 11:30:52.916184 kernel: ACPI: INT_SRC_OVR (bus 0 bus_irq 5 global_irq 5 high level)
Jan 29 11:30:52.916194 kernel: ACPI: INT_SRC_OVR (bus 0 bus_irq 9 global_irq 9 high level)
Jan 29 11:30:52.916207 kernel: ACPI: INT_SRC_OVR (bus 0 bus_irq 10 global_irq 10 high level)
Jan 29 11:30:52.916216 kernel: ACPI: INT_SRC_OVR (bus 0 bus_irq 11 global_irq 11 high level)
Jan 29 11:30:52.916226 kernel: ACPI: Using ACPI (MADT) for SMP configuration information
Jan 29 11:30:52.916236 kernel: ACPI: HPET id: 0x8086a201 base: 0xfed00000
Jan 29 11:30:52.916246 kernel: TSC deadline timer available
Jan 29 11:30:52.916256 kernel: smpboot: Allowing 4 CPUs, 0 hotplug CPUs
Jan 29 11:30:52.916265 kernel: kvm-guest: APIC: eoi() replaced with kvm_guest_apic_eoi_write()
Jan 29 11:30:52.916275 kernel: kvm-guest: KVM setup pv remote TLB flush
Jan 29 11:30:52.916285 kernel: kvm-guest: setup PV sched yield
Jan 29 11:30:52.916297 kernel: [mem 0xc0000000-0xfed1bfff] available for PCI devices
Jan 29 11:30:52.916317 kernel: Booting paravirtualized kernel on KVM
Jan 29 11:30:52.916327 kernel: clocksource: refined-jiffies: mask: 0xffffffff max_cycles: 0xffffffff, max_idle_ns: 1910969940391419 ns
Jan 29 11:30:52.916337 kernel: setup_percpu: NR_CPUS:512 nr_cpumask_bits:4 nr_cpu_ids:4 nr_node_ids:1
Jan 29 11:30:52.916347 kernel: percpu: Embedded 58 pages/cpu s197032 r8192 d32344 u524288
Jan 29 11:30:52.916357 kernel: pcpu-alloc: s197032 r8192 d32344 u524288 alloc=1*2097152
Jan 29 11:30:52.916366 kernel: pcpu-alloc: [0] 0 1 2 3
Jan 29 11:30:52.916376 kernel: kvm-guest: PV spinlocks enabled
Jan 29 11:30:52.916386 kernel: PV qspinlock hash table entries: 256 (order: 0, 4096 bytes, linear)
Jan 29 11:30:52.916400 kernel: Kernel command line: rootflags=rw mount.usrflags=ro BOOT_IMAGE=/flatcar/vmlinuz-a mount.usr=/dev/mapper/usr verity.usr=PARTUUID=7130c94a-213a-4e5a-8e26-6cce9662f132 rootflags=rw mount.usrflags=ro consoleblank=0 root=LABEL=ROOT console=ttyS0,115200 flatcar.first_boot=detected verity.usrhash=519b8fded83181f8e61f734d5291f916d7548bfba9487c78bcb50d002d81719d
Jan 29 11:30:52.916410 kernel: Unknown kernel command line parameters "BOOT_IMAGE=/flatcar/vmlinuz-a", will be passed to user space.
Jan 29 11:30:52.916420 kernel: random: crng init done
Jan 29 11:30:52.916430 kernel: Dentry cache hash table entries: 524288 (order: 10, 4194304 bytes, linear)
Jan 29 11:30:52.916453 kernel: Inode-cache hash table entries: 262144 (order: 9, 2097152 bytes, linear)
Jan 29 11:30:52.916463 kernel: Fallback order for Node 0: 0
Jan 29 11:30:52.916473 kernel: Built 1 zonelists, mobility grouping on. Total pages: 632732
Jan 29 11:30:52.916483 kernel: Policy zone: DMA32
Jan 29 11:30:52.916493 kernel: mem auto-init: stack:off, heap alloc:off, heap free:off
Jan 29 11:30:52.916507 kernel: Memory: 2434592K/2571752K available (12288K kernel code, 2301K rwdata, 22736K rodata, 42972K init, 2220K bss, 136900K reserved, 0K cma-reserved)
Jan 29 11:30:52.916517 kernel: SLUB: HWalign=64, Order=0-3, MinObjects=0, CPUs=4, Nodes=1
Jan 29 11:30:52.916527 kernel: ftrace: allocating 37923 entries in 149 pages
Jan 29 11:30:52.916536 kernel: ftrace: allocated 149 pages with 4 groups
Jan 29 11:30:52.916546 kernel: Dynamic Preempt: voluntary
Jan 29 11:30:52.916556 kernel: rcu: Preemptible hierarchical RCU implementation.
Jan 29 11:30:52.916566 kernel: rcu: RCU event tracing is enabled.
Jan 29 11:30:52.916577 kernel: rcu: RCU restricting CPUs from NR_CPUS=512 to nr_cpu_ids=4.
Jan 29 11:30:52.916587 kernel: Trampoline variant of Tasks RCU enabled.
Jan 29 11:30:52.916600 kernel: Rude variant of Tasks RCU enabled.
Jan 29 11:30:52.916610 kernel: Tracing variant of Tasks RCU enabled.
Jan 29 11:30:52.916619 kernel: rcu: RCU calculated value of scheduler-enlistment delay is 100 jiffies.
Jan 29 11:30:52.916629 kernel: rcu: Adjusting geometry for rcu_fanout_leaf=16, nr_cpu_ids=4
Jan 29 11:30:52.916639 kernel: NR_IRQS: 33024, nr_irqs: 456, preallocated irqs: 16
Jan 29 11:30:52.916649 kernel: rcu: srcu_init: Setting srcu_struct sizes based on contention.
Jan 29 11:30:52.916658 kernel: Console: colour VGA+ 80x25
Jan 29 11:30:52.916668 kernel: printk: console [ttyS0] enabled
Jan 29 11:30:52.916678 kernel: ACPI: Core revision 20230628
Jan 29 11:30:52.916690 kernel: clocksource: hpet: mask: 0xffffffff max_cycles: 0xffffffff, max_idle_ns: 19112604467 ns
Jan 29 11:30:52.916700 kernel: APIC: Switch to symmetric I/O mode setup
Jan 29 11:30:52.916710 kernel: x2apic enabled
Jan 29 11:30:52.916720 kernel: APIC: Switched APIC routing to: physical x2apic
Jan 29 11:30:52.916730 kernel: kvm-guest: APIC: send_IPI_mask() replaced with kvm_send_ipi_mask()
Jan 29 11:30:52.916739 kernel: kvm-guest: APIC: send_IPI_mask_allbutself() replaced with kvm_send_ipi_mask_allbutself()
Jan 29 11:30:52.916749 kernel: kvm-guest: setup PV IPIs
Jan 29 11:30:52.916772 kernel: ..TIMER: vector=0x30 apic1=0 pin1=2 apic2=-1 pin2=-1
Jan 29 11:30:52.916782 kernel: tsc: Marking TSC unstable due to TSCs unsynchronized
Jan 29 11:30:52.916792 kernel: Calibrating delay loop (skipped) preset value.. 5589.49 BogoMIPS (lpj=2794748)
Jan 29 11:30:52.916802 kernel: x86/cpu: User Mode Instruction Prevention (UMIP) activated
Jan 29 11:30:52.916815 kernel: Last level iTLB entries: 4KB 512, 2MB 255, 4MB 127
Jan 29 11:30:52.916825 kernel: Last level dTLB entries: 4KB 512, 2MB 255, 4MB 127, 1GB 0
Jan 29 11:30:52.916836 kernel: Spectre V1 : Mitigation: usercopy/swapgs barriers and __user pointer sanitization
Jan 29 11:30:52.916846 kernel: Spectre V2 : Mitigation: Retpolines
Jan 29 11:30:52.916856 kernel: Spectre V2 : Spectre v2 / SpectreRSB mitigation: Filling RSB on context switch
Jan 29 11:30:52.916869 kernel: Spectre V2 : Spectre v2 / SpectreRSB : Filling RSB on VMEXIT
Jan 29 11:30:52.916880 kernel: Spectre V2 : Enabling Speculation Barrier for firmware calls
Jan 29 11:30:52.916890 kernel: RETBleed: Mitigation: untrained return thunk
Jan 29 11:30:52.916900 kernel: Spectre V2 : mitigation: Enabling conditional Indirect Branch Prediction Barrier
Jan 29 11:30:52.916911 kernel: Speculative Store Bypass: Mitigation: Speculative Store Bypass disabled via prctl
Jan 29 11:30:52.916921 kernel: Speculative Return Stack Overflow: IBPB-extending microcode not applied!
Jan 29 11:30:52.916932 kernel: Speculative Return Stack Overflow: WARNING: See https://kernel.org/doc/html/latest/admin-guide/hw-vuln/srso.html for mitigation options.
Jan 29 11:30:52.916942 kernel: Speculative Return Stack Overflow: Vulnerable: Safe RET, no microcode
Jan 29 11:30:52.916955 kernel: x86/fpu: Supporting XSAVE feature 0x001: 'x87 floating point registers'
Jan 29 11:30:52.916965 kernel: x86/fpu: Supporting XSAVE feature 0x002: 'SSE registers'
Jan 29 11:30:52.916976 kernel: x86/fpu: Supporting XSAVE feature 0x004: 'AVX registers'
Jan 29 11:30:52.916986 kernel: x86/fpu: xstate_offset[2]: 576, xstate_sizes[2]: 256
Jan 29 11:30:52.916996 kernel: x86/fpu: Enabled xstate features 0x7, context size is 832 bytes, using 'compacted' format.
Jan 29 11:30:52.917007 kernel: Freeing SMP alternatives memory: 32K
Jan 29 11:30:52.917017 kernel: pid_max: default: 32768 minimum: 301
Jan 29 11:30:52.917027 kernel: LSM: initializing lsm=lockdown,capability,landlock,selinux,integrity
Jan 29 11:30:52.917037 kernel: landlock: Up and running.
Jan 29 11:30:52.917050 kernel: SELinux: Initializing.
Jan 29 11:30:52.917060 kernel: Mount-cache hash table entries: 8192 (order: 4, 65536 bytes, linear)
Jan 29 11:30:52.917070 kernel: Mountpoint-cache hash table entries: 8192 (order: 4, 65536 bytes, linear)
Jan 29 11:30:52.917081 kernel: smpboot: CPU0: AMD EPYC 7402P 24-Core Processor (family: 0x17, model: 0x31, stepping: 0x0)
Jan 29 11:30:52.917091 kernel: RCU Tasks: Setting shift to 2 and lim to 1 rcu_task_cb_adjust=1 rcu_task_cpu_ids=4.
Jan 29 11:30:52.917102 kernel: RCU Tasks Rude: Setting shift to 2 and lim to 1 rcu_task_cb_adjust=1 rcu_task_cpu_ids=4.
Jan 29 11:30:52.917112 kernel: RCU Tasks Trace: Setting shift to 2 and lim to 1 rcu_task_cb_adjust=1 rcu_task_cpu_ids=4.
Jan 29 11:30:52.917122 kernel: Performance Events: Fam17h+ core perfctr, AMD PMU driver.
Jan 29 11:30:52.917133 kernel: ... version: 0
Jan 29 11:30:52.917145 kernel: ... bit width: 48
Jan 29 11:30:52.917156 kernel: ... generic registers: 6
Jan 29 11:30:52.917166 kernel: ... value mask: 0000ffffffffffff
Jan 29 11:30:52.917176 kernel: ... max period: 00007fffffffffff
Jan 29 11:30:52.917186 kernel: ... fixed-purpose events: 0
Jan 29 11:30:52.917197 kernel: ... event mask: 000000000000003f
Jan 29 11:30:52.917207 kernel: signal: max sigframe size: 1776
Jan 29 11:30:52.917217 kernel: rcu: Hierarchical SRCU implementation.
Jan 29 11:30:52.917227 kernel: rcu: Max phase no-delay instances is 400.
Jan 29 11:30:52.917240 kernel: smp: Bringing up secondary CPUs ...
Jan 29 11:30:52.917251 kernel: smpboot: x86: Booting SMP configuration:
Jan 29 11:30:52.917261 kernel: .... node #0, CPUs: #1 #2 #3
Jan 29 11:30:52.917271 kernel: smp: Brought up 1 node, 4 CPUs
Jan 29 11:30:52.917281 kernel: smpboot: Max logical packages: 1
Jan 29 11:30:52.917291 kernel: smpboot: Total of 4 processors activated (22357.98 BogoMIPS)
Jan 29 11:30:52.917313 kernel: devtmpfs: initialized
Jan 29 11:30:52.917323 kernel: x86/mm: Memory block size: 128MB
Jan 29 11:30:52.917334 kernel: clocksource: jiffies: mask: 0xffffffff max_cycles: 0xffffffff, max_idle_ns: 1911260446275000 ns
Jan 29 11:30:52.917344 kernel: futex hash table entries: 1024 (order: 4, 65536 bytes, linear)
Jan 29 11:30:52.917357 kernel: pinctrl core: initialized pinctrl subsystem
Jan 29 11:30:52.917368 kernel: NET: Registered PF_NETLINK/PF_ROUTE protocol family
Jan 29 11:30:52.917378 kernel: audit: initializing netlink subsys (disabled)
Jan 29 11:30:52.917388 kernel: audit: type=2000 audit(1738150252.908:1): state=initialized audit_enabled=0 res=1
Jan 29 11:30:52.917398 kernel: thermal_sys: Registered thermal governor 'step_wise'
Jan 29 11:30:52.917409 kernel: thermal_sys: Registered thermal governor 'user_space'
Jan 29 11:30:52.917419 kernel: cpuidle: using governor menu
Jan 29 11:30:52.917429 kernel: acpiphp: ACPI Hot Plug PCI Controller Driver version: 0.5
Jan 29 11:30:52.917451 kernel: dca service started, version 1.12.1
Jan 29 11:30:52.917466 kernel: PCI: MMCONFIG for domain 0000 [bus 00-ff] at [mem 0xb0000000-0xbfffffff] (base 0xb0000000)
Jan 29 11:30:52.917476 kernel: PCI: MMCONFIG at [mem 0xb0000000-0xbfffffff] reserved as E820 entry
Jan 29 11:30:52.917487 kernel: PCI: Using configuration type 1 for base access
Jan 29 11:30:52.917497 kernel: kprobes: kprobe jump-optimization is enabled. All kprobes are optimized if possible.
Jan 29 11:30:52.917507 kernel: HugeTLB: registered 1.00 GiB page size, pre-allocated 0 pages
Jan 29 11:30:52.917518 kernel: HugeTLB: 16380 KiB vmemmap can be freed for a 1.00 GiB page
Jan 29 11:30:52.917528 kernel: HugeTLB: registered 2.00 MiB page size, pre-allocated 0 pages
Jan 29 11:30:52.917538 kernel: HugeTLB: 28 KiB vmemmap can be freed for a 2.00 MiB page
Jan 29 11:30:52.917552 kernel: ACPI: Added _OSI(Module Device)
Jan 29 11:30:52.917562 kernel: ACPI: Added _OSI(Processor Device)
Jan 29 11:30:52.917572 kernel: ACPI: Added _OSI(3.0 _SCP Extensions)
Jan 29 11:30:52.917583 kernel: ACPI: Added _OSI(Processor Aggregator Device)
Jan 29 11:30:52.917593 kernel: ACPI: 1 ACPI AML tables successfully acquired and loaded
Jan 29 11:30:52.917603 kernel: ACPI: _OSC evaluation for CPUs failed, trying _PDC
Jan 29 11:30:52.917614 kernel: ACPI: Interpreter enabled
Jan 29 11:30:52.917624 kernel: ACPI: PM: (supports S0 S3 S5)
Jan 29 11:30:52.917634 kernel: ACPI: Using IOAPIC for interrupt routing
Jan 29 11:30:52.917644 kernel: PCI: Using host bridge windows from ACPI; if necessary, use "pci=nocrs" and report a bug
Jan 29 11:30:52.917658 kernel: PCI: Using E820 reservations for host bridge windows
Jan 29 11:30:52.917668 kernel: ACPI: Enabled 2 GPEs in block 00 to 3F
Jan 29 11:30:52.917678 kernel: ACPI: PCI Root Bridge [PCI0] (domain 0000 [bus 00-ff])
Jan 29 11:30:52.917959 kernel: acpi PNP0A08:00: _OSC: OS supports [ExtendedConfig ASPM ClockPM Segments MSI HPX-Type3]
Jan 29 11:30:52.918129 kernel: acpi PNP0A08:00: _OSC: platform does not support [PCIeHotplug LTR]
Jan 29 11:30:52.918286 kernel: acpi PNP0A08:00: _OSC: OS now controls [PME AER PCIeCapability]
Jan 29 11:30:52.918312 kernel: PCI host bridge to bus 0000:00
Jan 29 11:30:52.918502 kernel: pci_bus 0000:00: root bus resource [io 0x0000-0x0cf7 window]
Jan 29 11:30:52.918647 kernel: pci_bus 0000:00: root bus resource [io 0x0d00-0xffff window]
Jan 29 11:30:52.918787 kernel: pci_bus 0000:00: root bus resource [mem 0x000a0000-0x000bffff window]
Jan 29 11:30:52.918926 kernel: pci_bus 0000:00: root bus resource [mem 0x9d000000-0xafffffff window]
Jan 29 11:30:52.919088 kernel: pci_bus 0000:00: root bus resource [mem 0xc0000000-0xfebfffff window]
Jan 29 11:30:52.919340 kernel: pci_bus 0000:00: root bus resource [mem 0x100000000-0x8ffffffff window]
Jan 29 11:30:52.919542 kernel: pci_bus 0000:00: root bus resource [bus 00-ff]
Jan 29 11:30:52.919728 kernel: pci 0000:00:00.0: [8086:29c0] type 00 class 0x060000
Jan 29 11:30:52.919895 kernel: pci 0000:00:01.0: [1234:1111] type 00 class 0x030000
Jan 29 11:30:52.920050 kernel: pci 0000:00:01.0: reg 0x10: [mem 0xfd000000-0xfdffffff pref]
Jan 29 11:30:52.920203 kernel: pci 0000:00:01.0: reg 0x18: [mem 0xfebd0000-0xfebd0fff]
Jan 29 11:30:52.920367 kernel: pci 0000:00:01.0: reg 0x30: [mem 0xfebc0000-0xfebcffff pref]
Jan 29 11:30:52.920542 kernel: pci 0000:00:01.0: Video device with shadowed ROM at [mem 0x000c0000-0x000dffff]
Jan 29 11:30:52.920808 kernel: pci 0000:00:02.0: [1af4:1005] type 00 class 0x00ff00
Jan 29 11:30:52.920974 kernel: pci 0000:00:02.0: reg 0x10: [io 0xc0c0-0xc0df]
Jan 29 11:30:52.921134 kernel: pci 0000:00:02.0: reg 0x14: [mem 0xfebd1000-0xfebd1fff]
Jan 29 11:30:52.921288 kernel: pci 0000:00:02.0: reg 0x20: [mem 0xfe000000-0xfe003fff 64bit pref]
Jan 29 11:30:52.921491 kernel: pci 0000:00:03.0: [1af4:1001] type 00 class 0x010000
Jan 29 11:30:52.921684 kernel: pci 0000:00:03.0: reg 0x10: [io 0xc000-0xc07f]
Jan 29 11:30:52.921847 kernel: pci 0000:00:03.0: reg 0x14: [mem 0xfebd2000-0xfebd2fff]
Jan 29 11:30:52.922007 kernel: pci 0000:00:03.0: reg 0x20: [mem 0xfe004000-0xfe007fff 64bit pref]
Jan 29 11:30:52.922177 kernel: pci 0000:00:04.0: [1af4:1000] type 00 class 0x020000
Jan 29 11:30:52.922344 kernel: pci 0000:00:04.0: reg 0x10: [io 0xc0e0-0xc0ff]
Jan 29 11:30:52.922551 kernel: pci 0000:00:04.0: reg 0x14: [mem 0xfebd3000-0xfebd3fff]
Jan 29 11:30:52.922705 kernel: pci 0000:00:04.0: reg 0x20: [mem 0xfe008000-0xfe00bfff 64bit pref]
Jan 29 11:30:52.922856 kernel: pci 0000:00:04.0: reg 0x30: [mem 0xfeb80000-0xfebbffff pref]
Jan 29 11:30:52.923016 kernel: pci 0000:00:1f.0: [8086:2918] type 00 class 0x060100
Jan 29 11:30:52.923175 kernel: pci 0000:00:1f.0: quirk: [io 0x0600-0x067f] claimed by ICH6 ACPI/GPIO/TCO
Jan 29 11:30:52.923354 kernel: pci 0000:00:1f.2: [8086:2922] type 00 class 0x010601
Jan 29 11:30:52.923527 kernel: pci 0000:00:1f.2: reg 0x20: [io 0xc100-0xc11f]
Jan 29 11:30:52.923683 kernel: pci 0000:00:1f.2: reg 0x24: [mem 0xfebd4000-0xfebd4fff]
Jan 29 11:30:52.923847 kernel: pci 0000:00:1f.3: [8086:2930] type 00 class 0x0c0500
Jan 29 11:30:52.924001 kernel: pci 0000:00:1f.3: reg 0x20: [io 0x0700-0x073f]
Jan 29 11:30:52.924020 kernel: ACPI: PCI: Interrupt link LNKA configured for IRQ 10
Jan 29 11:30:52.924031 kernel: ACPI: PCI: Interrupt link LNKB configured for IRQ 10
Jan 29 11:30:52.924042 kernel: ACPI: PCI: Interrupt link LNKC configured for IRQ 11
Jan 29 11:30:52.924052 kernel: ACPI: PCI: Interrupt link LNKD configured for IRQ 11
Jan 29 11:30:52.924062 kernel: ACPI: PCI: Interrupt link LNKE configured for IRQ 10
Jan 29 11:30:52.924072 kernel: ACPI: PCI: Interrupt link LNKF configured for IRQ 10
Jan 29 11:30:52.924083 kernel: ACPI: PCI: Interrupt link LNKG configured for IRQ 11
Jan 29 11:30:52.924093 kernel: ACPI: PCI: Interrupt link LNKH configured for IRQ 11
Jan 29 11:30:52.924103 kernel: ACPI: PCI: Interrupt link GSIA configured for IRQ 16
Jan 29 11:30:52.924117 kernel: ACPI: PCI: Interrupt link GSIB configured for IRQ 17
Jan 29 11:30:52.924127 kernel: ACPI: PCI: Interrupt link GSIC configured for IRQ 18
Jan 29 11:30:52.924138 kernel: ACPI: PCI: Interrupt link GSID configured for IRQ 19
Jan 29 11:30:52.924148 kernel: ACPI: PCI: Interrupt link GSIE configured for IRQ 20
Jan 29 11:30:52.924161 kernel: ACPI: PCI: Interrupt link GSIF configured for IRQ 21
Jan 29 11:30:52.924174 kernel: ACPI: PCI: Interrupt link GSIG configured for IRQ 22
Jan 29 11:30:52.924187 kernel: ACPI: PCI: Interrupt link GSIH configured for IRQ 23
Jan 29 11:30:52.924199 kernel: iommu: Default domain type: Translated
Jan 29 11:30:52.924212 kernel: iommu: DMA domain TLB invalidation policy: lazy mode
Jan 29 11:30:52.924229 kernel: PCI: Using ACPI for IRQ routing
Jan 29 11:30:52.924242 kernel: PCI: pci_cache_line_size set to 64 bytes
Jan 29 11:30:52.924254 kernel: e820: reserve RAM buffer [mem 0x0009fc00-0x0009ffff]
Jan 29 11:30:52.924268 kernel: e820: reserve RAM buffer [mem 0x9cfdc000-0x9fffffff]
Jan 29 11:30:52.924518 kernel: pci 0000:00:01.0: vgaarb: setting as boot VGA device
Jan 29 11:30:52.924677 kernel: pci 0000:00:01.0: vgaarb: bridge control possible
Jan 29 11:30:52.924829 kernel: pci 0000:00:01.0: vgaarb: VGA device added: decodes=io+mem,owns=io+mem,locks=none
Jan 29 11:30:52.924844 kernel: vgaarb: loaded
Jan 29 11:30:52.924859 kernel: hpet0: at MMIO 0xfed00000, IRQs 2, 8, 0
Jan 29 11:30:52.924870 kernel: hpet0: 3 comparators, 64-bit 100.000000 MHz counter
Jan 29 11:30:52.924881 kernel: clocksource: Switched to clocksource kvm-clock
Jan 29 11:30:52.924891 kernel: VFS: Disk quotas dquot_6.6.0
Jan 29 11:30:52.924901 kernel: VFS: Dquot-cache hash table entries: 512 (order 0, 4096 bytes)
Jan 29 11:30:52.924912 kernel: pnp: PnP ACPI init
Jan 29 11:30:52.925076 kernel: system 00:05: [mem 0xb0000000-0xbfffffff window] has been reserved
Jan 29 11:30:52.925096 kernel: pnp: PnP ACPI: found 6 devices
Jan 29 11:30:52.925109 kernel: clocksource: acpi_pm: mask: 0xffffff max_cycles: 0xffffff, max_idle_ns: 2085701024 ns
Jan 29 11:30:52.925127 kernel: NET: Registered PF_INET protocol family
Jan 29 11:30:52.925140 kernel: IP idents hash table entries: 65536 (order: 7, 524288 bytes, linear)
Jan 29 11:30:52.925154 kernel: tcp_listen_portaddr_hash hash table entries: 2048 (order: 3, 32768 bytes, linear)
Jan 29 11:30:52.925167 kernel: Table-perturb hash table entries: 65536 (order: 6, 262144 bytes, linear)
Jan 29 11:30:52.925180 kernel: TCP established hash table entries: 32768 (order: 6, 262144 bytes, linear)
Jan 29 11:30:52.925193 kernel: TCP bind hash table entries: 32768 (order: 8, 1048576 bytes, linear)
Jan 29 11:30:52.925206 kernel: TCP: Hash tables configured (established 32768 bind 32768)
Jan 29 11:30:52.925218 kernel: UDP hash table entries: 2048 (order: 4, 65536 bytes, linear)
Jan 29 11:30:52.925231 kernel: UDP-Lite hash table entries: 2048 (order: 4, 65536 bytes, linear)
Jan 29 11:30:52.925241 kernel: NET: Registered PF_UNIX/PF_LOCAL protocol family
Jan 29 11:30:52.925252 kernel: NET: Registered PF_XDP protocol family
Jan 29 11:30:52.925402 kernel: pci_bus 0000:00: resource 4 [io 0x0000-0x0cf7 window]
Jan 29 11:30:52.925558 kernel: pci_bus 0000:00: resource 5 [io 0x0d00-0xffff window]
Jan 29 11:30:52.925697 kernel: pci_bus 0000:00: resource 6 [mem 0x000a0000-0x000bffff window]
Jan 29 11:30:52.925835 kernel: pci_bus 0000:00: resource 7 [mem 0x9d000000-0xafffffff window]
Jan 29 11:30:52.925971 kernel: pci_bus 0000:00: resource 8 [mem 0xc0000000-0xfebfffff window]
Jan 29 11:30:52.926110 kernel: pci_bus 0000:00: resource 9 [mem 0x100000000-0x8ffffffff window]
Jan 29 11:30:52.926132 kernel: PCI: CLS 0 bytes, default 64
Jan 29 11:30:52.926143 kernel: Initialise system trusted keyrings
Jan 29 11:30:52.926154 kernel: workingset: timestamp_bits=39 max_order=20 bucket_order=0
Jan 29 11:30:52.926164 kernel: Key type asymmetric registered
Jan 29 11:30:52.926174 kernel: Asymmetric key parser 'x509' registered
Jan 29 11:30:52.926185 kernel: Block layer SCSI generic (bsg) driver version 0.4 loaded (major 251)
Jan 29 11:30:52.926195 kernel: io scheduler mq-deadline registered
Jan 29 11:30:52.926205 kernel: io scheduler kyber registered
Jan 29 11:30:52.926216 kernel: io scheduler bfq registered
Jan 29 11:30:52.926229 kernel: ioatdma: Intel(R) QuickData Technology Driver 5.00
Jan 29 11:30:52.926240 kernel: ACPI: \_SB_.GSIG: Enabled at IRQ 22
Jan 29 11:30:52.926250 kernel: ACPI: \_SB_.GSIH: Enabled at IRQ 23
Jan 29 11:30:52.926260 kernel: ACPI: \_SB_.GSIE: Enabled at IRQ 20
Jan 29 11:30:52.926271 kernel: Serial: 8250/16550 driver, 4 ports, IRQ sharing enabled
Jan 29 11:30:52.926281 kernel: 00:03: ttyS0 at I/O 0x3f8 (irq = 4, base_baud = 115200) is a 16550A
Jan 29 11:30:52.926291 kernel: i8042: PNP: PS/2 Controller [PNP0303:KBD,PNP0f13:MOU] at 0x60,0x64 irq 1,12
Jan 29 11:30:52.926317 kernel: serio: i8042 KBD port at 0x60,0x64 irq 1
Jan 29 11:30:52.926328 kernel: serio: i8042 AUX port at 0x60,0x64 irq 12
Jan 29 11:30:52.926532 kernel: rtc_cmos 00:04: RTC can wake from S4
Jan 29 11:30:52.926549 kernel: input: AT Translated Set 2 keyboard as /devices/platform/i8042/serio0/input/input0
Jan 29 11:30:52.926691 kernel: rtc_cmos 00:04: registered as rtc0
Jan 29 11:30:52.926834 kernel: rtc_cmos 00:04: setting system clock to 2025-01-29T11:30:52 UTC (1738150252)
Jan 29 11:30:52.926975 kernel: rtc_cmos 00:04: alarms up to one day, y3k, 242 bytes nvram, hpet irqs
Jan 29 11:30:52.926989 kernel: amd_pstate: the _CPC object is not present in SBIOS or ACPI disabled
Jan 29 11:30:52.927000 kernel: NET: Registered PF_INET6 protocol family
Jan 29 11:30:52.927010 kernel: Segment Routing with IPv6
Jan 29 11:30:52.927024 kernel: In-situ OAM (IOAM) with IPv6
Jan 29 11:30:52.927035 kernel: NET: Registered PF_PACKET protocol family
Jan 29 11:30:52.927045 kernel: Key type dns_resolver registered
Jan 29 11:30:52.927055 kernel: IPI shorthand broadcast: enabled
Jan 29 11:30:52.927066 kernel: sched_clock: Marking stable (623002642, 115319432)->(760274516, -21952442)
Jan 29 11:30:52.927076 kernel: registered taskstats version 1
Jan 29 11:30:52.927086 kernel: Loading compiled-in X.509 certificates
Jan 29 11:30:52.927097 kernel: Loaded X.509 cert 'Kinvolk GmbH: Module signing key for 6.6.74-flatcar: de92a621108c58f5771c86c5c3ccb1aa0728ed55'
Jan 29 11:30:52.927108 kernel: Key type .fscrypt registered
Jan 29 11:30:52.927121 kernel: Key type fscrypt-provisioning registered
Jan 29 11:30:52.927131 kernel: ima: No TPM chip found, activating TPM-bypass!
Jan 29 11:30:52.927142 kernel: ima: Allocated hash algorithm: sha1
Jan 29 11:30:52.927152 kernel: ima: No architecture policies found
Jan 29 11:30:52.927162 kernel: clk: Disabling unused clocks
Jan 29 11:30:52.927172 kernel: Freeing unused kernel image (initmem) memory: 42972K
Jan 29 11:30:52.927183 kernel: Write protecting the kernel read-only data: 36864k
Jan 29 11:30:52.927193 kernel: Freeing unused kernel image (rodata/data gap) memory: 1840K
Jan 29 11:30:52.927206 kernel: Run /init as init process
Jan 29 11:30:52.927217 kernel: with arguments:
Jan 29 11:30:52.927227 kernel: /init
Jan 29 11:30:52.927237 kernel: with environment:
Jan 29 11:30:52.927247 kernel: HOME=/
Jan 29 11:30:52.927257 kernel: TERM=linux
Jan 29 11:30:52.927268 kernel: BOOT_IMAGE=/flatcar/vmlinuz-a
Jan 29 11:30:52.927280 systemd[1]: systemd 255 running in system mode (+PAM +AUDIT +SELINUX -APPARMOR +IMA +SMACK +SECCOMP +GCRYPT -GNUTLS +OPENSSL -ACL +BLKID +CURL +ELFUTILS -FIDO2 +IDN2 -IDN +IPTC +KMOD +LIBCRYPTSETUP +LIBFDISK +PCRE2 -PWQUALITY -P11KIT -QRENCODE +TPM2 +BZIP2 +LZ4 +XZ +ZLIB +ZSTD -BPF_FRAMEWORK -XKBCOMMON +UTMP -SYSVINIT default-hierarchy=unified)
Jan 29 11:30:52.927293 systemd[1]: Detected virtualization kvm.
Jan 29 11:30:52.927318 systemd[1]: Detected architecture x86-64.
Jan 29 11:30:52.927329 systemd[1]: Running in initrd.
Jan 29 11:30:52.927340 systemd[1]: No hostname configured, using default hostname.
Jan 29 11:30:52.927351 systemd[1]: Hostname set to .
Jan 29 11:30:52.927362 systemd[1]: Initializing machine ID from VM UUID.
Jan 29 11:30:52.927373 systemd[1]: Queued start job for default target initrd.target.
Jan 29 11:30:52.927384 systemd[1]: Started clevis-luks-askpass.path - Forward Password Requests to Clevis Directory Watch.
Jan 29 11:30:52.927396 systemd[1]: Started systemd-ask-password-console.path - Dispatch Password Requests to Console Directory Watch.
Jan 29 11:30:52.927411 systemd[1]: Expecting device dev-disk-by\x2dlabel-EFI\x2dSYSTEM.device - /dev/disk/by-label/EFI-SYSTEM...
Jan 29 11:30:52.927437 systemd[1]: Expecting device dev-disk-by\x2dlabel-OEM.device - /dev/disk/by-label/OEM...
Jan 29 11:30:52.927466 systemd[1]: Expecting device dev-disk-by\x2dlabel-ROOT.device - /dev/disk/by-label/ROOT...
Jan 29 11:30:52.927478 systemd[1]: Expecting device dev-disk-by\x2dpartlabel-USR\x2dA.device - /dev/disk/by-partlabel/USR-A...
Jan 29 11:30:52.927492 systemd[1]: Expecting device dev-disk-by\x2dpartuuid-7130c94a\x2d213a\x2d4e5a\x2d8e26\x2d6cce9662f132.device - /dev/disk/by-partuuid/7130c94a-213a-4e5a-8e26-6cce9662f132...
Jan 29 11:30:52.927506 systemd[1]: Expecting device dev-mapper-usr.device - /dev/mapper/usr...
Jan 29 11:30:52.927518 systemd[1]: Reached target cryptsetup-pre.target - Local Encrypted Volumes (Pre).
Jan 29 11:30:52.927529 systemd[1]: Reached target cryptsetup.target - Local Encrypted Volumes.
Jan 29 11:30:52.927541 systemd[1]: Reached target paths.target - Path Units.
Jan 29 11:30:52.927552 systemd[1]: Reached target slices.target - Slice Units.
Jan 29 11:30:52.927563 systemd[1]: Reached target swap.target - Swaps.
Jan 29 11:30:52.927575 systemd[1]: Reached target timers.target - Timer Units.
Jan 29 11:30:52.927586 systemd[1]: Listening on iscsid.socket - Open-iSCSI iscsid Socket.
Jan 29 11:30:52.927600 systemd[1]: Listening on iscsiuio.socket - Open-iSCSI iscsiuio Socket.
Jan 29 11:30:52.927612 systemd[1]: Listening on systemd-journald-dev-log.socket - Journal Socket (/dev/log).
Jan 29 11:30:52.927624 systemd[1]: Listening on systemd-journald.socket - Journal Socket.
Jan 29 11:30:52.927635 systemd[1]: Listening on systemd-networkd.socket - Network Service Netlink Socket.
Jan 29 11:30:52.927647 systemd[1]: Listening on systemd-udevd-control.socket - udev Control Socket.
Jan 29 11:30:52.927658 systemd[1]: Listening on systemd-udevd-kernel.socket - udev Kernel Socket.
Jan 29 11:30:52.927669 systemd[1]: Reached target sockets.target - Socket Units. Jan 29 11:30:52.927681 systemd[1]: Starting ignition-setup-pre.service - Ignition env setup... Jan 29 11:30:52.927695 systemd[1]: Starting kmod-static-nodes.service - Create List of Static Device Nodes... Jan 29 11:30:52.927706 systemd[1]: Finished network-cleanup.service - Network Cleanup. Jan 29 11:30:52.927718 systemd[1]: Starting systemd-fsck-usr.service... Jan 29 11:30:52.927729 systemd[1]: Starting systemd-journald.service - Journal Service... Jan 29 11:30:52.927740 systemd[1]: Starting systemd-modules-load.service - Load Kernel Modules... Jan 29 11:30:52.927752 systemd[1]: Starting systemd-vconsole-setup.service - Virtual Console Setup... Jan 29 11:30:52.927763 systemd[1]: Finished ignition-setup-pre.service - Ignition env setup. Jan 29 11:30:52.927775 systemd[1]: Finished kmod-static-nodes.service - Create List of Static Device Nodes. Jan 29 11:30:52.927786 systemd[1]: Finished systemd-fsck-usr.service. Jan 29 11:30:52.927801 systemd[1]: Starting systemd-tmpfiles-setup-dev-early.service - Create Static Device Nodes in /dev gracefully... Jan 29 11:30:52.927836 systemd-journald[194]: Collecting audit messages is disabled. Jan 29 11:30:52.927865 systemd[1]: Finished systemd-tmpfiles-setup-dev-early.service - Create Static Device Nodes in /dev gracefully. Jan 29 11:30:52.927877 systemd-journald[194]: Journal started Jan 29 11:30:52.927903 systemd-journald[194]: Runtime Journal (/run/log/journal/bd18affd420d423dbc377984be366869) is 6.0M, max 48.4M, 42.3M free. Jan 29 11:30:52.919567 systemd-modules-load[195]: Inserted module 'overlay' Jan 29 11:30:52.953291 kernel: bridge: filtering via arp/ip/ip6tables is no longer available by default. Update your scripts to load br_netfilter if you need this. 
Jan 29 11:30:52.953330 kernel: Bridge firewalling registered Jan 29 11:30:52.947199 systemd-modules-load[195]: Inserted module 'br_netfilter' Jan 29 11:30:52.956251 systemd[1]: Started systemd-journald.service - Journal Service. Jan 29 11:30:52.958577 systemd[1]: Finished systemd-modules-load.service - Load Kernel Modules. Jan 29 11:30:52.960075 systemd[1]: Finished systemd-vconsole-setup.service - Virtual Console Setup. Jan 29 11:30:52.976650 systemd[1]: Starting dracut-cmdline-ask.service - dracut ask for additional cmdline parameters... Jan 29 11:30:52.980616 systemd[1]: Starting systemd-sysctl.service - Apply Kernel Variables... Jan 29 11:30:52.981392 systemd[1]: Starting systemd-tmpfiles-setup-dev.service - Create Static Device Nodes in /dev... Jan 29 11:30:52.983587 systemd[1]: Starting systemd-tmpfiles-setup.service - Create System Files and Directories... Jan 29 11:30:52.993829 systemd[1]: Finished systemd-tmpfiles-setup-dev.service - Create Static Device Nodes in /dev. Jan 29 11:30:52.997230 systemd[1]: Finished dracut-cmdline-ask.service - dracut ask for additional cmdline parameters. Jan 29 11:30:52.998999 systemd[1]: Finished systemd-tmpfiles-setup.service - Create System Files and Directories. Jan 29 11:30:53.010597 systemd[1]: Starting dracut-cmdline.service - dracut cmdline hook... Jan 29 11:30:53.011886 systemd[1]: Finished systemd-sysctl.service - Apply Kernel Variables. Jan 29 11:30:53.016205 systemd[1]: Starting systemd-resolved.service - Network Name Resolution... 
Jan 29 11:30:53.025895 dracut-cmdline[227]: dracut-dracut-053 Jan 29 11:30:53.028492 dracut-cmdline[227]: Using kernel command line parameters: rd.driver.pre=btrfs rootflags=rw mount.usrflags=ro BOOT_IMAGE=/flatcar/vmlinuz-a mount.usr=/dev/mapper/usr verity.usr=PARTUUID=7130c94a-213a-4e5a-8e26-6cce9662f132 rootflags=rw mount.usrflags=ro consoleblank=0 root=LABEL=ROOT console=ttyS0,115200 flatcar.first_boot=detected verity.usrhash=519b8fded83181f8e61f734d5291f916d7548bfba9487c78bcb50d002d81719d Jan 29 11:30:53.049751 systemd-resolved[231]: Positive Trust Anchors: Jan 29 11:30:53.049768 systemd-resolved[231]: . IN DS 20326 8 2 e06d44b80b8f1d39a95c0b0d7c65d08458e880409bbc683457104237c7f8ec8d Jan 29 11:30:53.049799 systemd-resolved[231]: Negative trust anchors: home.arpa 10.in-addr.arpa 16.172.in-addr.arpa 17.172.in-addr.arpa 18.172.in-addr.arpa 19.172.in-addr.arpa 20.172.in-addr.arpa 21.172.in-addr.arpa 22.172.in-addr.arpa 23.172.in-addr.arpa 24.172.in-addr.arpa 25.172.in-addr.arpa 26.172.in-addr.arpa 27.172.in-addr.arpa 28.172.in-addr.arpa 29.172.in-addr.arpa 30.172.in-addr.arpa 31.172.in-addr.arpa 170.0.0.192.in-addr.arpa 171.0.0.192.in-addr.arpa 168.192.in-addr.arpa d.f.ip6.arpa ipv4only.arpa resolver.arpa corp home internal intranet lan local private test Jan 29 11:30:53.052380 systemd-resolved[231]: Defaulting to hostname 'linux'. Jan 29 11:30:53.053480 systemd[1]: Started systemd-resolved.service - Network Name Resolution. Jan 29 11:30:53.060641 systemd[1]: Reached target nss-lookup.target - Host and Network Name Lookups. Jan 29 11:30:53.112494 kernel: SCSI subsystem initialized Jan 29 11:30:53.122474 kernel: Loading iSCSI transport class v2.0-870. Jan 29 11:30:53.133478 kernel: iscsi: registered transport (tcp) Jan 29 11:30:53.158490 kernel: iscsi: registered transport (qla4xxx) Jan 29 11:30:53.158570 kernel: QLogic iSCSI HBA Driver Jan 29 11:30:53.214662 systemd[1]: Finished dracut-cmdline.service - dracut cmdline hook. 
Jan 29 11:30:53.227696 systemd[1]: Starting dracut-pre-udev.service - dracut pre-udev hook... Jan 29 11:30:53.264716 kernel: device-mapper: core: CONFIG_IMA_DISABLE_HTABLE is disabled. Duplicate IMA measurements will not be recorded in the IMA log. Jan 29 11:30:53.264785 kernel: device-mapper: uevent: version 1.0.3 Jan 29 11:30:53.264797 kernel: device-mapper: ioctl: 4.48.0-ioctl (2023-03-01) initialised: dm-devel@redhat.com Jan 29 11:30:53.309481 kernel: raid6: avx2x4 gen() 28665 MB/s Jan 29 11:30:53.326475 kernel: raid6: avx2x2 gen() 29642 MB/s Jan 29 11:30:53.343601 kernel: raid6: avx2x1 gen() 24395 MB/s Jan 29 11:30:53.343645 kernel: raid6: using algorithm avx2x2 gen() 29642 MB/s Jan 29 11:30:53.361612 kernel: raid6: .... xor() 18737 MB/s, rmw enabled Jan 29 11:30:53.361671 kernel: raid6: using avx2x2 recovery algorithm Jan 29 11:30:53.393487 kernel: xor: automatically using best checksumming function avx Jan 29 11:30:53.555480 kernel: Btrfs loaded, zoned=no, fsverity=no Jan 29 11:30:53.570492 systemd[1]: Finished dracut-pre-udev.service - dracut pre-udev hook. Jan 29 11:30:53.575735 systemd[1]: Starting systemd-udevd.service - Rule-based Manager for Device Events and Files... Jan 29 11:30:53.592603 systemd-udevd[414]: Using default interface naming scheme 'v255'. Jan 29 11:30:53.597575 systemd[1]: Started systemd-udevd.service - Rule-based Manager for Device Events and Files. Jan 29 11:30:53.606657 systemd[1]: Starting dracut-pre-trigger.service - dracut pre-trigger hook... Jan 29 11:30:53.622637 dracut-pre-trigger[422]: rd.md=0: removing MD RAID activation Jan 29 11:30:53.655917 systemd[1]: Finished dracut-pre-trigger.service - dracut pre-trigger hook. Jan 29 11:30:53.668020 systemd[1]: Starting systemd-udev-trigger.service - Coldplug All udev Devices... Jan 29 11:30:53.731608 systemd[1]: Finished systemd-udev-trigger.service - Coldplug All udev Devices. Jan 29 11:30:53.745690 systemd[1]: Starting dracut-initqueue.service - dracut initqueue hook... 
Jan 29 11:30:53.761811 systemd[1]: Finished dracut-initqueue.service - dracut initqueue hook. Jan 29 11:30:53.765962 systemd[1]: Reached target remote-fs-pre.target - Preparation for Remote File Systems. Jan 29 11:30:53.773923 kernel: virtio_blk virtio1: 4/0/0 default/read/poll queues Jan 29 11:30:53.787239 kernel: virtio_blk virtio1: [vda] 19775488 512-byte logical blocks (10.1 GB/9.43 GiB) Jan 29 11:30:53.787402 kernel: GPT:Primary header thinks Alt. header is not at the end of the disk. Jan 29 11:30:53.787424 kernel: GPT:9289727 != 19775487 Jan 29 11:30:53.787474 kernel: GPT:Alternate GPT header not at the end of the disk. Jan 29 11:30:53.787485 kernel: GPT:9289727 != 19775487 Jan 29 11:30:53.787494 kernel: GPT: Use GNU Parted to correct GPT errors. Jan 29 11:30:53.787504 kernel: vda: vda1 vda2 vda3 vda4 vda6 vda7 vda9 Jan 29 11:30:53.768050 systemd[1]: Reached target remote-cryptsetup.target - Remote Encrypted Volumes. Jan 29 11:30:53.770291 systemd[1]: Reached target remote-fs.target - Remote File Systems. Jan 29 11:30:53.787243 systemd[1]: Starting dracut-pre-mount.service - dracut pre-mount hook... Jan 29 11:30:53.792560 kernel: cryptd: max_cpu_qlen set to 1000 Jan 29 11:30:53.807287 systemd[1]: Finished dracut-pre-mount.service - dracut pre-mount hook. Jan 29 11:30:53.813956 kernel: AVX2 version of gcm_enc/dec engaged. Jan 29 11:30:53.813995 kernel: AES CTR mode by8 optimization enabled Jan 29 11:30:53.816192 systemd[1]: dracut-cmdline-ask.service: Deactivated successfully. Jan 29 11:30:53.817707 kernel: BTRFS: device label OEM devid 1 transid 14 /dev/vda6 scanned by (udev-worker) (476) Jan 29 11:30:53.822465 kernel: BTRFS: device fsid 5ba3c9ea-61f2-4fe6-a507-2966757f6d44 devid 1 transid 38 /dev/vda3 scanned by (udev-worker) (471) Jan 29 11:30:53.824578 systemd[1]: Stopped dracut-cmdline-ask.service - dracut ask for additional cmdline parameters. Jan 29 11:30:53.827782 kernel: libata version 3.00 loaded. 
Jan 29 11:30:53.827841 systemd[1]: Stopping dracut-cmdline-ask.service - dracut ask for additional cmdline parameters... Jan 29 11:30:53.830731 systemd[1]: systemd-vconsole-setup.service: Deactivated successfully. Jan 29 11:30:53.830916 systemd[1]: Stopped systemd-vconsole-setup.service - Virtual Console Setup. Jan 29 11:30:53.833585 systemd[1]: Stopping systemd-vconsole-setup.service - Virtual Console Setup... Jan 29 11:30:53.840611 kernel: ahci 0000:00:1f.2: version 3.0 Jan 29 11:30:53.855122 kernel: ACPI: \_SB_.GSIA: Enabled at IRQ 16 Jan 29 11:30:53.855150 kernel: ahci 0000:00:1f.2: AHCI 0001.0000 32 slots 6 ports 1.5 Gbps 0x3f impl SATA mode Jan 29 11:30:53.855534 kernel: ahci 0000:00:1f.2: flags: 64bit ncq only Jan 29 11:30:53.855735 kernel: scsi host0: ahci Jan 29 11:30:53.855886 kernel: scsi host1: ahci Jan 29 11:30:53.856035 kernel: scsi host2: ahci Jan 29 11:30:53.856176 kernel: scsi host3: ahci Jan 29 11:30:53.856371 kernel: scsi host4: ahci Jan 29 11:30:53.856585 kernel: scsi host5: ahci Jan 29 11:30:53.856726 kernel: ata1: SATA max UDMA/133 abar m4096@0xfebd4000 port 0xfebd4100 irq 34 Jan 29 11:30:53.856737 kernel: ata2: SATA max UDMA/133 abar m4096@0xfebd4000 port 0xfebd4180 irq 34 Jan 29 11:30:53.856748 kernel: ata3: SATA max UDMA/133 abar m4096@0xfebd4000 port 0xfebd4200 irq 34 Jan 29 11:30:53.856758 kernel: ata4: SATA max UDMA/133 abar m4096@0xfebd4000 port 0xfebd4280 irq 34 Jan 29 11:30:53.856768 kernel: ata5: SATA max UDMA/133 abar m4096@0xfebd4000 port 0xfebd4300 irq 34 Jan 29 11:30:53.856778 kernel: ata6: SATA max UDMA/133 abar m4096@0xfebd4000 port 0xfebd4380 irq 34 Jan 29 11:30:53.847771 systemd[1]: Starting systemd-vconsole-setup.service - Virtual Console Setup... Jan 29 11:30:53.857488 systemd[1]: Found device dev-disk-by\x2dlabel-EFI\x2dSYSTEM.device - /dev/disk/by-label/EFI-SYSTEM. Jan 29 11:30:53.877153 systemd[1]: Found device dev-disk-by\x2dlabel-ROOT.device - /dev/disk/by-label/ROOT. 
Jan 29 11:30:53.905897 systemd[1]: Found device dev-disk-by\x2dlabel-OEM.device - /dev/disk/by-label/OEM. Jan 29 11:30:53.906254 systemd[1]: Finished systemd-vconsole-setup.service - Virtual Console Setup. Jan 29 11:30:53.913135 systemd[1]: Found device dev-disk-by\x2dpartuuid-7130c94a\x2d213a\x2d4e5a\x2d8e26\x2d6cce9662f132.device - /dev/disk/by-partuuid/7130c94a-213a-4e5a-8e26-6cce9662f132. Jan 29 11:30:53.916384 systemd[1]: Found device dev-disk-by\x2dpartlabel-USR\x2dA.device - /dev/disk/by-partlabel/USR-A. Jan 29 11:30:53.927600 systemd[1]: Starting disk-uuid.service - Generate new UUID for disk GPT if necessary... Jan 29 11:30:53.928578 systemd[1]: Starting dracut-cmdline-ask.service - dracut ask for additional cmdline parameters... Jan 29 11:30:53.950953 systemd[1]: Finished dracut-cmdline-ask.service - dracut ask for additional cmdline parameters. Jan 29 11:30:54.015360 disk-uuid[556]: Primary Header is updated. Jan 29 11:30:54.015360 disk-uuid[556]: Secondary Entries is updated. Jan 29 11:30:54.015360 disk-uuid[556]: Secondary Header is updated. 
Jan 29 11:30:54.020464 kernel: vda: vda1 vda2 vda3 vda4 vda6 vda7 vda9 Jan 29 11:30:54.024463 kernel: vda: vda1 vda2 vda3 vda4 vda6 vda7 vda9 Jan 29 11:30:54.165941 kernel: ata6: SATA link down (SStatus 0 SControl 300) Jan 29 11:30:54.166026 kernel: ata5: SATA link down (SStatus 0 SControl 300) Jan 29 11:30:54.166059 kernel: ata3: SATA link up 1.5 Gbps (SStatus 113 SControl 300) Jan 29 11:30:54.167473 kernel: ata4: SATA link down (SStatus 0 SControl 300) Jan 29 11:30:54.220479 kernel: ata1: SATA link down (SStatus 0 SControl 300) Jan 29 11:30:54.221467 kernel: ata3.00: ATAPI: QEMU DVD-ROM, 2.5+, max UDMA/100 Jan 29 11:30:54.221495 kernel: ata3.00: applying bridge limits Jan 29 11:30:54.222465 kernel: ata2: SATA link down (SStatus 0 SControl 300) Jan 29 11:30:54.223460 kernel: ata3.00: configured for UDMA/100 Jan 29 11:30:54.224466 kernel: scsi 2:0:0:0: CD-ROM QEMU QEMU DVD-ROM 2.5+ PQ: 0 ANSI: 5 Jan 29 11:30:54.273498 kernel: sr 2:0:0:0: [sr0] scsi3-mmc drive: 4x/4x cd/rw xa/form2 tray Jan 29 11:30:54.286332 kernel: cdrom: Uniform CD-ROM driver Revision: 3.20 Jan 29 11:30:54.286352 kernel: sr 2:0:0:0: Attached scsi CD-ROM sr0 Jan 29 11:30:55.042469 kernel: vda: vda1 vda2 vda3 vda4 vda6 vda7 vda9 Jan 29 11:30:55.042602 disk-uuid[565]: The operation has completed successfully. Jan 29 11:30:55.073841 systemd[1]: disk-uuid.service: Deactivated successfully. Jan 29 11:30:55.073969 systemd[1]: Finished disk-uuid.service - Generate new UUID for disk GPT if necessary. Jan 29 11:30:55.095665 systemd[1]: Starting verity-setup.service - Verity Setup for /dev/mapper/usr... Jan 29 11:30:55.116065 sh[592]: Success Jan 29 11:30:55.129494 kernel: device-mapper: verity: sha256 using implementation "sha256-ni" Jan 29 11:30:55.164723 systemd[1]: Found device dev-mapper-usr.device - /dev/mapper/usr. Jan 29 11:30:55.179291 systemd[1]: Mounting sysusr-usr.mount - /sysusr/usr... Jan 29 11:30:55.182236 systemd[1]: Finished verity-setup.service - Verity Setup for /dev/mapper/usr. 
Jan 29 11:30:55.195402 kernel: BTRFS info (device dm-0): first mount of filesystem 5ba3c9ea-61f2-4fe6-a507-2966757f6d44 Jan 29 11:30:55.195456 kernel: BTRFS info (device dm-0): using crc32c (crc32c-intel) checksum algorithm Jan 29 11:30:55.195468 kernel: BTRFS warning (device dm-0): 'nologreplay' is deprecated, use 'rescue=nologreplay' instead Jan 29 11:30:55.195479 kernel: BTRFS info (device dm-0): disabling log replay at mount time Jan 29 11:30:55.196778 kernel: BTRFS info (device dm-0): using free space tree Jan 29 11:30:55.201278 systemd[1]: Mounted sysusr-usr.mount - /sysusr/usr. Jan 29 11:30:55.203656 systemd[1]: afterburn-network-kargs.service - Afterburn Initrd Setup Network Kernel Arguments was skipped because no trigger condition checks were met. Jan 29 11:30:55.216631 systemd[1]: Starting ignition-setup.service - Ignition (setup)... Jan 29 11:30:55.224853 systemd[1]: Starting parse-ip-for-networkd.service - Write systemd-networkd units from cmdline... Jan 29 11:30:55.230341 kernel: BTRFS info (device vda6): first mount of filesystem 46e45d4d-e07d-4ebc-bafb-221646b0ed58 Jan 29 11:30:55.230367 kernel: BTRFS info (device vda6): using crc32c (crc32c-intel) checksum algorithm Jan 29 11:30:55.230381 kernel: BTRFS info (device vda6): using free space tree Jan 29 11:30:55.232479 kernel: BTRFS info (device vda6): auto enabling async discard Jan 29 11:30:55.241572 systemd[1]: mnt-oem.mount: Deactivated successfully. Jan 29 11:30:55.243338 kernel: BTRFS info (device vda6): last unmount of filesystem 46e45d4d-e07d-4ebc-bafb-221646b0ed58 Jan 29 11:30:55.340561 systemd[1]: Finished parse-ip-for-networkd.service - Write systemd-networkd units from cmdline. Jan 29 11:30:55.598342 systemd[1]: Starting systemd-networkd.service - Network Configuration... 
Jan 29 11:30:55.629153 systemd-networkd[770]: lo: Link UP Jan 29 11:30:55.629164 systemd-networkd[770]: lo: Gained carrier Jan 29 11:30:55.630864 systemd-networkd[770]: Enumeration completed Jan 29 11:30:55.631252 systemd-networkd[770]: eth0: found matching network '/usr/lib/systemd/network/zz-default.network', based on potentially unpredictable interface name. Jan 29 11:30:55.631256 systemd-networkd[770]: eth0: Configuring with /usr/lib/systemd/network/zz-default.network. Jan 29 11:30:55.631995 systemd[1]: Started systemd-networkd.service - Network Configuration. Jan 29 11:30:55.633037 systemd-networkd[770]: eth0: Link UP Jan 29 11:30:55.633047 systemd-networkd[770]: eth0: Gained carrier Jan 29 11:30:55.633062 systemd-networkd[770]: eth0: found matching network '/usr/lib/systemd/network/zz-default.network', based on potentially unpredictable interface name. Jan 29 11:30:55.674564 systemd[1]: Reached target network.target - Network. Jan 29 11:30:55.696509 systemd-networkd[770]: eth0: DHCPv4 address 10.0.0.69/16, gateway 10.0.0.1 acquired from 10.0.0.1 Jan 29 11:30:55.699124 systemd[1]: Finished ignition-setup.service - Ignition (setup). Jan 29 11:30:55.715595 systemd[1]: Starting ignition-fetch-offline.service - Ignition (fetch-offline)... 
Jan 29 11:30:55.798184 ignition[775]: Ignition 2.20.0 Jan 29 11:30:55.798197 ignition[775]: Stage: fetch-offline Jan 29 11:30:55.798247 ignition[775]: no configs at "/usr/lib/ignition/base.d" Jan 29 11:30:55.798257 ignition[775]: no config dir at "/usr/lib/ignition/base.platform.d/qemu" Jan 29 11:30:55.798361 ignition[775]: parsed url from cmdline: "" Jan 29 11:30:55.798365 ignition[775]: no config URL provided Jan 29 11:30:55.798370 ignition[775]: reading system config file "/usr/lib/ignition/user.ign" Jan 29 11:30:55.798379 ignition[775]: no config at "/usr/lib/ignition/user.ign" Jan 29 11:30:55.798413 ignition[775]: op(1): [started] loading QEMU firmware config module Jan 29 11:30:55.798418 ignition[775]: op(1): executing: "modprobe" "qemu_fw_cfg" Jan 29 11:30:55.810500 ignition[775]: op(1): [finished] loading QEMU firmware config module Jan 29 11:30:55.849301 ignition[775]: parsing config with SHA512: b7710161b660a1006c9d27987192bdf389c080af40239cac67807ecd4187c6423bd6915488845de649b8c389f1b990cff4881d3268c20b861732c2b330a89104 Jan 29 11:30:55.853601 unknown[775]: fetched base config from "system" Jan 29 11:30:55.853615 unknown[775]: fetched user config from "qemu" Jan 29 11:30:55.854045 ignition[775]: fetch-offline: fetch-offline passed Jan 29 11:30:55.854112 ignition[775]: Ignition finished successfully Jan 29 11:30:55.856857 systemd[1]: Finished ignition-fetch-offline.service - Ignition (fetch-offline). Jan 29 11:30:55.858740 systemd[1]: ignition-fetch.service - Ignition (fetch) was skipped because of an unmet condition check (ConditionPathExists=!/run/ignition.json). Jan 29 11:30:55.863666 systemd[1]: Starting ignition-kargs.service - Ignition (kargs)... 
Jan 29 11:30:55.878052 ignition[785]: Ignition 2.20.0 Jan 29 11:30:55.878065 ignition[785]: Stage: kargs Jan 29 11:30:55.878216 ignition[785]: no configs at "/usr/lib/ignition/base.d" Jan 29 11:30:55.878238 ignition[785]: no config dir at "/usr/lib/ignition/base.platform.d/qemu" Jan 29 11:30:55.879050 ignition[785]: kargs: kargs passed Jan 29 11:30:55.879095 ignition[785]: Ignition finished successfully Jan 29 11:30:55.883125 systemd[1]: Finished ignition-kargs.service - Ignition (kargs). Jan 29 11:30:55.902617 systemd[1]: Starting ignition-disks.service - Ignition (disks)... Jan 29 11:30:55.915416 ignition[794]: Ignition 2.20.0 Jan 29 11:30:55.915427 ignition[794]: Stage: disks Jan 29 11:30:55.915621 ignition[794]: no configs at "/usr/lib/ignition/base.d" Jan 29 11:30:55.915633 ignition[794]: no config dir at "/usr/lib/ignition/base.platform.d/qemu" Jan 29 11:30:55.916628 ignition[794]: disks: disks passed Jan 29 11:30:55.919137 systemd[1]: Finished ignition-disks.service - Ignition (disks). Jan 29 11:30:55.916680 ignition[794]: Ignition finished successfully Jan 29 11:30:55.920603 systemd[1]: Reached target initrd-root-device.target - Initrd Root Device. Jan 29 11:30:55.922273 systemd[1]: Reached target local-fs-pre.target - Preparation for Local File Systems. Jan 29 11:30:55.924612 systemd[1]: Reached target local-fs.target - Local File Systems. Jan 29 11:30:55.925715 systemd[1]: Reached target sysinit.target - System Initialization. Jan 29 11:30:55.927620 systemd[1]: Reached target basic.target - Basic System. Jan 29 11:30:55.938695 systemd[1]: Starting systemd-fsck-root.service - File System Check on /dev/disk/by-label/ROOT... Jan 29 11:30:55.951622 systemd-fsck[804]: ROOT: clean, 14/553520 files, 52654/553472 blocks Jan 29 11:30:55.959020 systemd[1]: Finished systemd-fsck-root.service - File System Check on /dev/disk/by-label/ROOT. Jan 29 11:30:55.974621 systemd[1]: Mounting sysroot.mount - /sysroot... 
Jan 29 11:30:56.074465 kernel: EXT4-fs (vda9): mounted filesystem 2fbf9359-701e-4995-b3f7-74280bd2b1c9 r/w with ordered data mode. Quota mode: none. Jan 29 11:30:56.074814 systemd[1]: Mounted sysroot.mount - /sysroot. Jan 29 11:30:56.076484 systemd[1]: Reached target initrd-root-fs.target - Initrd Root File System. Jan 29 11:30:56.089542 systemd[1]: Mounting sysroot-oem.mount - /sysroot/oem... Jan 29 11:30:56.091241 systemd[1]: Mounting sysroot-usr.mount - /sysroot/usr... Jan 29 11:30:56.092344 systemd[1]: flatcar-metadata-hostname.service - Flatcar Metadata Hostname Agent was skipped because no trigger condition checks were met. Jan 29 11:30:56.092381 systemd[1]: ignition-remount-sysroot.service - Remount /sysroot read-write for Ignition was skipped because of an unmet condition check (ConditionPathIsReadWrite=!/sysroot). Jan 29 11:30:56.103554 kernel: BTRFS: device label OEM devid 1 transid 15 /dev/vda6 scanned by mount (812) Jan 29 11:30:56.103577 kernel: BTRFS info (device vda6): first mount of filesystem 46e45d4d-e07d-4ebc-bafb-221646b0ed58 Jan 29 11:30:56.103589 kernel: BTRFS info (device vda6): using crc32c (crc32c-intel) checksum algorithm Jan 29 11:30:56.103599 kernel: BTRFS info (device vda6): using free space tree Jan 29 11:30:56.092404 systemd[1]: Reached target ignition-diskful.target - Ignition Boot Disk Setup. Jan 29 11:30:56.100084 systemd[1]: Mounted sysroot-usr.mount - /sysroot/usr. Jan 29 11:30:56.107944 kernel: BTRFS info (device vda6): auto enabling async discard Jan 29 11:30:56.105324 systemd[1]: Starting initrd-setup-root.service - Root filesystem setup... Jan 29 11:30:56.110183 systemd[1]: Mounted sysroot-oem.mount - /sysroot/oem. 
Jan 29 11:30:56.145773 initrd-setup-root[836]: cut: /sysroot/etc/passwd: No such file or directory Jan 29 11:30:56.150003 initrd-setup-root[843]: cut: /sysroot/etc/group: No such file or directory Jan 29 11:30:56.153896 initrd-setup-root[850]: cut: /sysroot/etc/shadow: No such file or directory Jan 29 11:30:56.157982 initrd-setup-root[857]: cut: /sysroot/etc/gshadow: No such file or directory Jan 29 11:30:56.246417 systemd[1]: Finished initrd-setup-root.service - Root filesystem setup. Jan 29 11:30:56.254645 systemd[1]: Starting ignition-mount.service - Ignition (mount)... Jan 29 11:30:56.258493 systemd[1]: Starting sysroot-boot.service - /sysroot/boot... Jan 29 11:30:56.262628 systemd[1]: sysroot-oem.mount: Deactivated successfully. Jan 29 11:30:56.263940 kernel: BTRFS info (device vda6): last unmount of filesystem 46e45d4d-e07d-4ebc-bafb-221646b0ed58 Jan 29 11:30:56.295559 systemd[1]: Finished sysroot-boot.service - /sysroot/boot. Jan 29 11:30:56.295712 ignition[927]: INFO : Ignition 2.20.0 Jan 29 11:30:56.298236 ignition[927]: INFO : Stage: mount Jan 29 11:30:56.298236 ignition[927]: INFO : no configs at "/usr/lib/ignition/base.d" Jan 29 11:30:56.298236 ignition[927]: INFO : no config dir at "/usr/lib/ignition/base.platform.d/qemu" Jan 29 11:30:56.298236 ignition[927]: INFO : mount: mount passed Jan 29 11:30:56.298236 ignition[927]: INFO : Ignition finished successfully Jan 29 11:30:56.302013 systemd[1]: Finished ignition-mount.service - Ignition (mount). Jan 29 11:30:56.315561 systemd[1]: Starting ignition-files.service - Ignition (files)... Jan 29 11:30:56.323080 systemd[1]: Mounting sysroot-oem.mount - /sysroot/oem... 
Jan 29 11:30:56.336125 kernel: BTRFS: device label OEM devid 1 transid 16 /dev/vda6 scanned by mount (940) Jan 29 11:30:56.336160 kernel: BTRFS info (device vda6): first mount of filesystem 46e45d4d-e07d-4ebc-bafb-221646b0ed58 Jan 29 11:30:56.336172 kernel: BTRFS info (device vda6): using crc32c (crc32c-intel) checksum algorithm Jan 29 11:30:56.337657 kernel: BTRFS info (device vda6): using free space tree Jan 29 11:30:56.340469 kernel: BTRFS info (device vda6): auto enabling async discard Jan 29 11:30:56.341746 systemd[1]: Mounted sysroot-oem.mount - /sysroot/oem. Jan 29 11:30:56.362059 ignition[957]: INFO : Ignition 2.20.0 Jan 29 11:30:56.362059 ignition[957]: INFO : Stage: files Jan 29 11:30:56.363914 ignition[957]: INFO : no configs at "/usr/lib/ignition/base.d" Jan 29 11:30:56.363914 ignition[957]: INFO : no config dir at "/usr/lib/ignition/base.platform.d/qemu" Jan 29 11:30:56.363914 ignition[957]: DEBUG : files: compiled without relabeling support, skipping Jan 29 11:30:56.367933 ignition[957]: INFO : files: ensureUsers: op(1): [started] creating or modifying user "core" Jan 29 11:30:56.367933 ignition[957]: DEBUG : files: ensureUsers: op(1): executing: "usermod" "--root" "/sysroot" "core" Jan 29 11:30:56.372969 ignition[957]: INFO : files: ensureUsers: op(1): [finished] creating or modifying user "core" Jan 29 11:30:56.374704 ignition[957]: INFO : files: ensureUsers: op(2): [started] adding ssh keys to user "core" Jan 29 11:30:56.376227 ignition[957]: INFO : files: ensureUsers: op(2): [finished] adding ssh keys to user "core" Jan 29 11:30:56.375498 unknown[957]: wrote ssh authorized keys file for user: core Jan 29 11:30:56.379036 ignition[957]: INFO : files: createFilesystemsFiles: createFiles: op(3): [started] writing file "/sysroot/opt/helm-v3.13.2-linux-amd64.tar.gz" Jan 29 11:30:56.379036 ignition[957]: INFO : files: createFilesystemsFiles: createFiles: op(3): GET https://get.helm.sh/helm-v3.13.2-linux-amd64.tar.gz: attempt #1 Jan 29 11:30:56.422074 
ignition[957]: INFO : files: createFilesystemsFiles: createFiles: op(3): GET result: OK Jan 29 11:30:56.520999 ignition[957]: INFO : files: createFilesystemsFiles: createFiles: op(3): [finished] writing file "/sysroot/opt/helm-v3.13.2-linux-amd64.tar.gz" Jan 29 11:30:56.520999 ignition[957]: INFO : files: createFilesystemsFiles: createFiles: op(4): [started] writing file "/sysroot/home/core/install.sh" Jan 29 11:30:56.524938 ignition[957]: INFO : files: createFilesystemsFiles: createFiles: op(4): [finished] writing file "/sysroot/home/core/install.sh" Jan 29 11:30:56.524938 ignition[957]: INFO : files: createFilesystemsFiles: createFiles: op(5): [started] writing file "/sysroot/home/core/nginx.yaml" Jan 29 11:30:56.524938 ignition[957]: INFO : files: createFilesystemsFiles: createFiles: op(5): [finished] writing file "/sysroot/home/core/nginx.yaml" Jan 29 11:30:56.524938 ignition[957]: INFO : files: createFilesystemsFiles: createFiles: op(6): [started] writing file "/sysroot/home/core/nfs-pod.yaml" Jan 29 11:30:56.524938 ignition[957]: INFO : files: createFilesystemsFiles: createFiles: op(6): [finished] writing file "/sysroot/home/core/nfs-pod.yaml" Jan 29 11:30:56.524938 ignition[957]: INFO : files: createFilesystemsFiles: createFiles: op(7): [started] writing file "/sysroot/home/core/nfs-pvc.yaml" Jan 29 11:30:56.524938 ignition[957]: INFO : files: createFilesystemsFiles: createFiles: op(7): [finished] writing file "/sysroot/home/core/nfs-pvc.yaml" Jan 29 11:30:56.524938 ignition[957]: INFO : files: createFilesystemsFiles: createFiles: op(8): [started] writing file "/sysroot/etc/flatcar/update.conf" Jan 29 11:30:56.524938 ignition[957]: INFO : files: createFilesystemsFiles: createFiles: op(8): [finished] writing file "/sysroot/etc/flatcar/update.conf" Jan 29 11:30:56.524938 ignition[957]: INFO : files: createFilesystemsFiles: createFiles: op(9): [started] writing link "/sysroot/etc/extensions/kubernetes.raw" -> 
"/opt/extensions/kubernetes/kubernetes-v1.30.1-x86-64.raw" Jan 29 11:30:56.524938 ignition[957]: INFO : files: createFilesystemsFiles: createFiles: op(9): [finished] writing link "/sysroot/etc/extensions/kubernetes.raw" -> "/opt/extensions/kubernetes/kubernetes-v1.30.1-x86-64.raw" Jan 29 11:30:56.524938 ignition[957]: INFO : files: createFilesystemsFiles: createFiles: op(a): [started] writing file "/sysroot/opt/extensions/kubernetes/kubernetes-v1.30.1-x86-64.raw" Jan 29 11:30:56.524938 ignition[957]: INFO : files: createFilesystemsFiles: createFiles: op(a): GET https://github.com/flatcar/sysext-bakery/releases/download/latest/kubernetes-v1.30.1-x86-64.raw: attempt #1 Jan 29 11:30:57.025355 ignition[957]: INFO : files: createFilesystemsFiles: createFiles: op(a): GET result: OK Jan 29 11:30:57.270595 systemd-networkd[770]: eth0: Gained IPv6LL Jan 29 11:30:57.438579 ignition[957]: INFO : files: createFilesystemsFiles: createFiles: op(a): [finished] writing file "/sysroot/opt/extensions/kubernetes/kubernetes-v1.30.1-x86-64.raw" Jan 29 11:30:57.438579 ignition[957]: INFO : files: op(b): [started] processing unit "prepare-helm.service" Jan 29 11:30:57.442757 ignition[957]: INFO : files: op(b): op(c): [started] writing unit "prepare-helm.service" at "/sysroot/etc/systemd/system/prepare-helm.service" Jan 29 11:30:57.442757 ignition[957]: INFO : files: op(b): op(c): [finished] writing unit "prepare-helm.service" at "/sysroot/etc/systemd/system/prepare-helm.service" Jan 29 11:30:57.442757 ignition[957]: INFO : files: op(b): [finished] processing unit "prepare-helm.service" Jan 29 11:30:57.442757 ignition[957]: INFO : files: op(d): [started] processing unit "coreos-metadata.service" Jan 29 11:30:57.442757 ignition[957]: INFO : files: op(d): op(e): [started] writing unit "coreos-metadata.service" at "/sysroot/etc/systemd/system/coreos-metadata.service" Jan 29 11:30:57.442757 ignition[957]: INFO : files: op(d): op(e): [finished] writing unit "coreos-metadata.service" at 
"/sysroot/etc/systemd/system/coreos-metadata.service" Jan 29 11:30:57.442757 ignition[957]: INFO : files: op(d): [finished] processing unit "coreos-metadata.service" Jan 29 11:30:57.442757 ignition[957]: INFO : files: op(f): [started] setting preset to disabled for "coreos-metadata.service" Jan 29 11:30:57.467291 ignition[957]: INFO : files: op(f): op(10): [started] removing enablement symlink(s) for "coreos-metadata.service" Jan 29 11:30:57.472269 ignition[957]: INFO : files: op(f): op(10): [finished] removing enablement symlink(s) for "coreos-metadata.service" Jan 29 11:30:57.474039 ignition[957]: INFO : files: op(f): [finished] setting preset to disabled for "coreos-metadata.service" Jan 29 11:30:57.474039 ignition[957]: INFO : files: op(11): [started] setting preset to enabled for "prepare-helm.service" Jan 29 11:30:57.474039 ignition[957]: INFO : files: op(11): [finished] setting preset to enabled for "prepare-helm.service" Jan 29 11:30:57.474039 ignition[957]: INFO : files: createResultFile: createFiles: op(12): [started] writing file "/sysroot/etc/.ignition-result.json" Jan 29 11:30:57.474039 ignition[957]: INFO : files: createResultFile: createFiles: op(12): [finished] writing file "/sysroot/etc/.ignition-result.json" Jan 29 11:30:57.474039 ignition[957]: INFO : files: files passed Jan 29 11:30:57.474039 ignition[957]: INFO : Ignition finished successfully Jan 29 11:30:57.474978 systemd[1]: Finished ignition-files.service - Ignition (files). Jan 29 11:30:57.485611 systemd[1]: Starting ignition-quench.service - Ignition (record completion)... Jan 29 11:30:57.487952 systemd[1]: Starting initrd-setup-root-after-ignition.service - Root filesystem completion... Jan 29 11:30:57.490157 systemd[1]: ignition-quench.service: Deactivated successfully. Jan 29 11:30:57.490294 systemd[1]: Finished ignition-quench.service - Ignition (record completion). 
Jan 29 11:30:57.498292 initrd-setup-root-after-ignition[985]: grep: /sysroot/oem/oem-release: No such file or directory
Jan 29 11:30:57.501213 initrd-setup-root-after-ignition[987]: grep: /sysroot/etc/flatcar/enabled-sysext.conf: No such file or directory
Jan 29 11:30:57.502930 initrd-setup-root-after-ignition[987]: grep: /sysroot/usr/share/flatcar/enabled-sysext.conf: No such file or directory
Jan 29 11:30:57.504556 initrd-setup-root-after-ignition[991]: grep: /sysroot/etc/flatcar/enabled-sysext.conf: No such file or directory
Jan 29 11:30:57.504511 systemd[1]: Finished initrd-setup-root-after-ignition.service - Root filesystem completion.
Jan 29 11:30:57.506044 systemd[1]: Reached target ignition-complete.target - Ignition Complete.
Jan 29 11:30:57.517614 systemd[1]: Starting initrd-parse-etc.service - Mountpoints Configured in the Real Root...
Jan 29 11:30:57.542803 systemd[1]: initrd-parse-etc.service: Deactivated successfully.
Jan 29 11:30:57.542964 systemd[1]: Finished initrd-parse-etc.service - Mountpoints Configured in the Real Root.
Jan 29 11:30:57.544265 systemd[1]: Reached target initrd-fs.target - Initrd File Systems.
Jan 29 11:30:57.546489 systemd[1]: Reached target initrd.target - Initrd Default Target.
Jan 29 11:30:57.549463 systemd[1]: dracut-mount.service - dracut mount hook was skipped because no trigger condition checks were met.
Jan 29 11:30:57.552117 systemd[1]: Starting dracut-pre-pivot.service - dracut pre-pivot and cleanup hook...
Jan 29 11:30:57.573623 systemd[1]: Finished dracut-pre-pivot.service - dracut pre-pivot and cleanup hook.
Jan 29 11:30:57.584732 systemd[1]: Starting initrd-cleanup.service - Cleaning Up and Shutting Down Daemons...
Jan 29 11:30:57.594503 systemd[1]: Stopped target nss-lookup.target - Host and Network Name Lookups.
Jan 29 11:30:57.597048 systemd[1]: Stopped target remote-cryptsetup.target - Remote Encrypted Volumes.
Jan 29 11:30:57.598348 systemd[1]: Stopped target timers.target - Timer Units.
Jan 29 11:30:57.600429 systemd[1]: dracut-pre-pivot.service: Deactivated successfully.
Jan 29 11:30:57.600565 systemd[1]: Stopped dracut-pre-pivot.service - dracut pre-pivot and cleanup hook.
Jan 29 11:30:57.603138 systemd[1]: Stopped target initrd.target - Initrd Default Target.
Jan 29 11:30:57.604725 systemd[1]: Stopped target basic.target - Basic System.
Jan 29 11:30:57.607027 systemd[1]: Stopped target ignition-complete.target - Ignition Complete.
Jan 29 11:30:57.609159 systemd[1]: Stopped target ignition-diskful.target - Ignition Boot Disk Setup.
Jan 29 11:30:57.611239 systemd[1]: Stopped target initrd-root-device.target - Initrd Root Device.
Jan 29 11:30:57.613434 systemd[1]: Stopped target remote-fs.target - Remote File Systems.
Jan 29 11:30:57.615842 systemd[1]: Stopped target remote-fs-pre.target - Preparation for Remote File Systems.
Jan 29 11:30:57.618643 systemd[1]: Stopped target sysinit.target - System Initialization.
Jan 29 11:30:57.621097 systemd[1]: Stopped target local-fs.target - Local File Systems.
Jan 29 11:30:57.623853 systemd[1]: Stopped target swap.target - Swaps.
Jan 29 11:30:57.626044 systemd[1]: dracut-pre-mount.service: Deactivated successfully.
Jan 29 11:30:57.626220 systemd[1]: Stopped dracut-pre-mount.service - dracut pre-mount hook.
Jan 29 11:30:57.628872 systemd[1]: Stopped target cryptsetup.target - Local Encrypted Volumes.
Jan 29 11:30:57.630915 systemd[1]: Stopped target cryptsetup-pre.target - Local Encrypted Volumes (Pre).
Jan 29 11:30:57.633485 systemd[1]: clevis-luks-askpass.path: Deactivated successfully.
Jan 29 11:30:57.633629 systemd[1]: Stopped clevis-luks-askpass.path - Forward Password Requests to Clevis Directory Watch.
Jan 29 11:30:57.636247 systemd[1]: dracut-initqueue.service: Deactivated successfully.
Jan 29 11:30:57.636378 systemd[1]: Stopped dracut-initqueue.service - dracut initqueue hook.
Jan 29 11:30:57.639169 systemd[1]: ignition-fetch-offline.service: Deactivated successfully.
Jan 29 11:30:57.639307 systemd[1]: Stopped ignition-fetch-offline.service - Ignition (fetch-offline).
Jan 29 11:30:57.641741 systemd[1]: Stopped target paths.target - Path Units.
Jan 29 11:30:57.643931 systemd[1]: systemd-ask-password-console.path: Deactivated successfully.
Jan 29 11:30:57.648514 systemd[1]: Stopped systemd-ask-password-console.path - Dispatch Password Requests to Console Directory Watch.
Jan 29 11:30:57.650598 systemd[1]: Stopped target slices.target - Slice Units.
Jan 29 11:30:57.652749 systemd[1]: Stopped target sockets.target - Socket Units.
Jan 29 11:30:57.654584 systemd[1]: iscsid.socket: Deactivated successfully.
Jan 29 11:30:57.654684 systemd[1]: Closed iscsid.socket - Open-iSCSI iscsid Socket.
Jan 29 11:30:57.656730 systemd[1]: iscsiuio.socket: Deactivated successfully.
Jan 29 11:30:57.656831 systemd[1]: Closed iscsiuio.socket - Open-iSCSI iscsiuio Socket.
Jan 29 11:30:57.659533 systemd[1]: initrd-setup-root-after-ignition.service: Deactivated successfully.
Jan 29 11:30:57.659707 systemd[1]: Stopped initrd-setup-root-after-ignition.service - Root filesystem completion.
Jan 29 11:30:57.661684 systemd[1]: ignition-files.service: Deactivated successfully.
Jan 29 11:30:57.661827 systemd[1]: Stopped ignition-files.service - Ignition (files).
Jan 29 11:30:57.671669 systemd[1]: Stopping ignition-mount.service - Ignition (mount)...
Jan 29 11:30:57.673926 systemd[1]: Stopping sysroot-boot.service - /sysroot/boot...
Jan 29 11:30:57.675049 systemd[1]: systemd-udev-trigger.service: Deactivated successfully.
Jan 29 11:30:57.675213 systemd[1]: Stopped systemd-udev-trigger.service - Coldplug All udev Devices.
Jan 29 11:30:57.677407 systemd[1]: dracut-pre-trigger.service: Deactivated successfully.
Jan 29 11:30:57.677625 systemd[1]: Stopped dracut-pre-trigger.service - dracut pre-trigger hook.
Jan 29 11:30:57.683158 ignition[1011]: INFO : Ignition 2.20.0
Jan 29 11:30:57.683158 ignition[1011]: INFO : Stage: umount
Jan 29 11:30:57.685597 ignition[1011]: INFO : no configs at "/usr/lib/ignition/base.d"
Jan 29 11:30:57.685597 ignition[1011]: INFO : no config dir at "/usr/lib/ignition/base.platform.d/qemu"
Jan 29 11:30:57.685597 ignition[1011]: INFO : umount: umount passed
Jan 29 11:30:57.685597 ignition[1011]: INFO : Ignition finished successfully
Jan 29 11:30:57.684131 systemd[1]: initrd-cleanup.service: Deactivated successfully.
Jan 29 11:30:57.684283 systemd[1]: Finished initrd-cleanup.service - Cleaning Up and Shutting Down Daemons.
Jan 29 11:30:57.693805 systemd[1]: ignition-mount.service: Deactivated successfully.
Jan 29 11:30:57.693940 systemd[1]: Stopped ignition-mount.service - Ignition (mount).
Jan 29 11:30:57.697083 systemd[1]: Stopped target network.target - Network.
Jan 29 11:30:57.697170 systemd[1]: ignition-disks.service: Deactivated successfully.
Jan 29 11:30:57.697255 systemd[1]: Stopped ignition-disks.service - Ignition (disks).
Jan 29 11:30:57.700100 systemd[1]: ignition-kargs.service: Deactivated successfully.
Jan 29 11:30:57.700166 systemd[1]: Stopped ignition-kargs.service - Ignition (kargs).
Jan 29 11:30:57.701278 systemd[1]: ignition-setup.service: Deactivated successfully.
Jan 29 11:30:57.701328 systemd[1]: Stopped ignition-setup.service - Ignition (setup).
Jan 29 11:30:57.703431 systemd[1]: ignition-setup-pre.service: Deactivated successfully.
Jan 29 11:30:57.703523 systemd[1]: Stopped ignition-setup-pre.service - Ignition env setup.
Jan 29 11:30:57.705567 systemd[1]: Stopping systemd-networkd.service - Network Configuration...
Jan 29 11:30:57.708391 systemd[1]: Stopping systemd-resolved.service - Network Name Resolution...
Jan 29 11:30:57.712339 systemd[1]: sysroot-boot.mount: Deactivated successfully.
Jan 29 11:30:57.718303 systemd[1]: systemd-resolved.service: Deactivated successfully.
Jan 29 11:30:57.718496 systemd[1]: Stopped systemd-resolved.service - Network Name Resolution.
Jan 29 11:30:57.723064 systemd[1]: systemd-tmpfiles-setup.service: Deactivated successfully.
Jan 29 11:30:57.723145 systemd[1]: Stopped systemd-tmpfiles-setup.service - Create System Files and Directories.
Jan 29 11:30:57.725551 systemd-networkd[770]: eth0: DHCPv6 lease lost
Jan 29 11:30:57.728243 systemd[1]: systemd-networkd.service: Deactivated successfully.
Jan 29 11:30:57.728414 systemd[1]: Stopped systemd-networkd.service - Network Configuration.
Jan 29 11:30:57.730001 systemd[1]: systemd-networkd.socket: Deactivated successfully.
Jan 29 11:30:57.730043 systemd[1]: Closed systemd-networkd.socket - Network Service Netlink Socket.
Jan 29 11:30:57.738554 systemd[1]: Stopping network-cleanup.service - Network Cleanup...
Jan 29 11:30:57.739535 systemd[1]: parse-ip-for-networkd.service: Deactivated successfully.
Jan 29 11:30:57.739606 systemd[1]: Stopped parse-ip-for-networkd.service - Write systemd-networkd units from cmdline.
Jan 29 11:30:57.740245 systemd[1]: systemd-sysctl.service: Deactivated successfully.
Jan 29 11:30:57.740288 systemd[1]: Stopped systemd-sysctl.service - Apply Kernel Variables.
Jan 29 11:30:57.740791 systemd[1]: systemd-modules-load.service: Deactivated successfully.
Jan 29 11:30:57.740840 systemd[1]: Stopped systemd-modules-load.service - Load Kernel Modules.
Jan 29 11:30:57.741268 systemd[1]: Stopping systemd-udevd.service - Rule-based Manager for Device Events and Files...
Jan 29 11:30:57.754319 systemd[1]: network-cleanup.service: Deactivated successfully.
Jan 29 11:30:57.754521 systemd[1]: Stopped network-cleanup.service - Network Cleanup.
Jan 29 11:30:57.775680 systemd[1]: systemd-udevd.service: Deactivated successfully.
Jan 29 11:30:57.775938 systemd[1]: Stopped systemd-udevd.service - Rule-based Manager for Device Events and Files.
Jan 29 11:30:57.777187 systemd[1]: systemd-udevd-control.socket: Deactivated successfully.
Jan 29 11:30:57.777256 systemd[1]: Closed systemd-udevd-control.socket - udev Control Socket.
Jan 29 11:30:57.780574 systemd[1]: systemd-udevd-kernel.socket: Deactivated successfully.
Jan 29 11:30:57.780630 systemd[1]: Closed systemd-udevd-kernel.socket - udev Kernel Socket.
Jan 29 11:30:57.782766 systemd[1]: dracut-pre-udev.service: Deactivated successfully.
Jan 29 11:30:57.782839 systemd[1]: Stopped dracut-pre-udev.service - dracut pre-udev hook.
Jan 29 11:30:57.785799 systemd[1]: dracut-cmdline.service: Deactivated successfully.
Jan 29 11:30:57.785873 systemd[1]: Stopped dracut-cmdline.service - dracut cmdline hook.
Jan 29 11:30:57.787475 systemd[1]: dracut-cmdline-ask.service: Deactivated successfully.
Jan 29 11:30:57.787544 systemd[1]: Stopped dracut-cmdline-ask.service - dracut ask for additional cmdline parameters.
Jan 29 11:30:57.797645 systemd[1]: Starting initrd-udevadm-cleanup-db.service - Cleanup udev Database...
Jan 29 11:30:57.800115 systemd[1]: systemd-tmpfiles-setup-dev.service: Deactivated successfully.
Jan 29 11:30:57.800214 systemd[1]: Stopped systemd-tmpfiles-setup-dev.service - Create Static Device Nodes in /dev.
Jan 29 11:30:57.801558 systemd[1]: systemd-tmpfiles-setup-dev-early.service: Deactivated successfully.
Jan 29 11:30:57.801620 systemd[1]: Stopped systemd-tmpfiles-setup-dev-early.service - Create Static Device Nodes in /dev gracefully.
Jan 29 11:30:57.803879 systemd[1]: kmod-static-nodes.service: Deactivated successfully.
Jan 29 11:30:57.803958 systemd[1]: Stopped kmod-static-nodes.service - Create List of Static Device Nodes.
Jan 29 11:30:57.806373 systemd[1]: systemd-vconsole-setup.service: Deactivated successfully.
Jan 29 11:30:57.806463 systemd[1]: Stopped systemd-vconsole-setup.service - Virtual Console Setup.
Jan 29 11:30:57.809219 systemd[1]: initrd-udevadm-cleanup-db.service: Deactivated successfully.
Jan 29 11:30:57.809383 systemd[1]: Finished initrd-udevadm-cleanup-db.service - Cleanup udev Database.
Jan 29 11:30:57.935862 systemd[1]: sysroot-boot.service: Deactivated successfully.
Jan 29 11:30:57.936002 systemd[1]: Stopped sysroot-boot.service - /sysroot/boot.
Jan 29 11:30:57.938241 systemd[1]: Reached target initrd-switch-root.target - Switch Root.
Jan 29 11:30:57.940101 systemd[1]: initrd-setup-root.service: Deactivated successfully.
Jan 29 11:30:57.940154 systemd[1]: Stopped initrd-setup-root.service - Root filesystem setup.
Jan 29 11:30:57.954633 systemd[1]: Starting initrd-switch-root.service - Switch Root...
Jan 29 11:30:57.963655 systemd[1]: Switching root.
Jan 29 11:30:57.995977 systemd-journald[194]: Journal stopped
Jan 29 11:30:59.152734 systemd-journald[194]: Received SIGTERM from PID 1 (systemd).
Jan 29 11:30:59.152809 kernel: SELinux: policy capability network_peer_controls=1
Jan 29 11:30:59.152834 kernel: SELinux: policy capability open_perms=1
Jan 29 11:30:59.152849 kernel: SELinux: policy capability extended_socket_class=1
Jan 29 11:30:59.152864 kernel: SELinux: policy capability always_check_network=0
Jan 29 11:30:59.152883 kernel: SELinux: policy capability cgroup_seclabel=1
Jan 29 11:30:59.152900 kernel: SELinux: policy capability nnp_nosuid_transition=1
Jan 29 11:30:59.152914 kernel: SELinux: policy capability genfs_seclabel_symlinks=0
Jan 29 11:30:59.152929 kernel: SELinux: policy capability ioctl_skip_cloexec=0
Jan 29 11:30:59.152943 kernel: audit: type=1403 audit(1738150258.406:2): auid=4294967295 ses=4294967295 lsm=selinux res=1
Jan 29 11:30:59.152959 systemd[1]: Successfully loaded SELinux policy in 53.656ms.
Jan 29 11:30:59.152991 systemd[1]: Relabeled /dev, /dev/shm, /run, /sys/fs/cgroup in 13.190ms.
Jan 29 11:30:59.153010 systemd[1]: systemd 255 running in system mode (+PAM +AUDIT +SELINUX -APPARMOR +IMA +SMACK +SECCOMP +GCRYPT -GNUTLS +OPENSSL -ACL +BLKID +CURL +ELFUTILS -FIDO2 +IDN2 -IDN +IPTC +KMOD +LIBCRYPTSETUP +LIBFDISK +PCRE2 -PWQUALITY -P11KIT -QRENCODE +TPM2 +BZIP2 +LZ4 +XZ +ZLIB +ZSTD -BPF_FRAMEWORK -XKBCOMMON +UTMP -SYSVINIT default-hierarchy=unified)
Jan 29 11:30:59.153026 systemd[1]: Detected virtualization kvm.
Jan 29 11:30:59.153046 systemd[1]: Detected architecture x86-64.
Jan 29 11:30:59.153061 systemd[1]: Detected first boot.
Jan 29 11:30:59.153077 systemd[1]: Initializing machine ID from VM UUID.
Jan 29 11:30:59.153093 zram_generator::config[1056]: No configuration found.
Jan 29 11:30:59.153110 systemd[1]: Populated /etc with preset unit settings.
Jan 29 11:30:59.153127 systemd[1]: initrd-switch-root.service: Deactivated successfully.
Jan 29 11:30:59.153155 systemd[1]: Stopped initrd-switch-root.service - Switch Root.
Jan 29 11:30:59.153171 systemd[1]: systemd-journald.service: Scheduled restart job, restart counter is at 1.
Jan 29 11:30:59.153191 systemd[1]: Created slice system-addon\x2dconfig.slice - Slice /system/addon-config.
Jan 29 11:30:59.153207 systemd[1]: Created slice system-addon\x2drun.slice - Slice /system/addon-run.
Jan 29 11:30:59.153223 systemd[1]: Created slice system-getty.slice - Slice /system/getty.
Jan 29 11:30:59.153238 systemd[1]: Created slice system-modprobe.slice - Slice /system/modprobe.
Jan 29 11:30:59.153260 systemd[1]: Created slice system-serial\x2dgetty.slice - Slice /system/serial-getty.
Jan 29 11:30:59.153278 systemd[1]: Created slice system-system\x2dcloudinit.slice - Slice /system/system-cloudinit.
Jan 29 11:30:59.153294 systemd[1]: Created slice system-systemd\x2dfsck.slice - Slice /system/systemd-fsck.
Jan 29 11:30:59.153310 systemd[1]: Created slice user.slice - User and Session Slice.
Jan 29 11:30:59.153326 systemd[1]: Started clevis-luks-askpass.path - Forward Password Requests to Clevis Directory Watch.
Jan 29 11:30:59.153345 systemd[1]: Started systemd-ask-password-console.path - Dispatch Password Requests to Console Directory Watch.
Jan 29 11:30:59.153362 systemd[1]: Started systemd-ask-password-wall.path - Forward Password Requests to Wall Directory Watch.
Jan 29 11:30:59.153378 systemd[1]: Set up automount boot.automount - Boot partition Automount Point.
Jan 29 11:30:59.153394 systemd[1]: Set up automount proc-sys-fs-binfmt_misc.automount - Arbitrary Executable File Formats File System Automount Point.
Jan 29 11:30:59.153411 systemd[1]: Expecting device dev-disk-by\x2dlabel-OEM.device - /dev/disk/by-label/OEM...
Jan 29 11:30:59.153427 systemd[1]: Expecting device dev-ttyS0.device - /dev/ttyS0...
Jan 29 11:30:59.153458 systemd[1]: Reached target cryptsetup-pre.target - Local Encrypted Volumes (Pre).
Jan 29 11:30:59.153475 systemd[1]: Stopped target initrd-switch-root.target - Switch Root.
Jan 29 11:30:59.153491 systemd[1]: Stopped target initrd-fs.target - Initrd File Systems.
Jan 29 11:30:59.153511 systemd[1]: Stopped target initrd-root-fs.target - Initrd Root File System.
Jan 29 11:30:59.153528 systemd[1]: Reached target integritysetup.target - Local Integrity Protected Volumes.
Jan 29 11:30:59.153544 systemd[1]: Reached target remote-cryptsetup.target - Remote Encrypted Volumes.
Jan 29 11:30:59.153560 systemd[1]: Reached target remote-fs.target - Remote File Systems.
Jan 29 11:30:59.153576 systemd[1]: Reached target slices.target - Slice Units.
Jan 29 11:30:59.153592 systemd[1]: Reached target swap.target - Swaps.
Jan 29 11:30:59.153613 systemd[1]: Reached target veritysetup.target - Local Verity Protected Volumes.
Jan 29 11:30:59.153631 systemd[1]: Listening on systemd-coredump.socket - Process Core Dump Socket.
Jan 29 11:30:59.153650 systemd[1]: Listening on systemd-networkd.socket - Network Service Netlink Socket.
Jan 29 11:30:59.153667 systemd[1]: Listening on systemd-udevd-control.socket - udev Control Socket.
Jan 29 11:30:59.153683 systemd[1]: Listening on systemd-udevd-kernel.socket - udev Kernel Socket.
Jan 29 11:30:59.153699 systemd[1]: Listening on systemd-userdbd.socket - User Database Manager Socket.
Jan 29 11:30:59.153715 systemd[1]: Mounting dev-hugepages.mount - Huge Pages File System...
Jan 29 11:30:59.153731 systemd[1]: Mounting dev-mqueue.mount - POSIX Message Queue File System...
Jan 29 11:30:59.153747 systemd[1]: Mounting media.mount - External Media Directory...
Jan 29 11:30:59.153763 systemd[1]: proc-xen.mount - /proc/xen was skipped because of an unmet condition check (ConditionVirtualization=xen).
Jan 29 11:30:59.153782 systemd[1]: Mounting sys-kernel-debug.mount - Kernel Debug File System...
Jan 29 11:30:59.153798 systemd[1]: Mounting sys-kernel-tracing.mount - Kernel Trace File System...
Jan 29 11:30:59.153814 systemd[1]: Mounting tmp.mount - Temporary Directory /tmp...
Jan 29 11:30:59.153831 systemd[1]: var-lib-machines.mount - Virtual Machine and Container Storage (Compatibility) was skipped because of an unmet condition check (ConditionPathExists=/var/lib/machines.raw).
Jan 29 11:30:59.153848 systemd[1]: Reached target machines.target - Containers.
Jan 29 11:30:59.153864 systemd[1]: Starting flatcar-tmpfiles.service - Create missing system files...
Jan 29 11:30:59.153880 systemd[1]: ignition-delete-config.service - Ignition (delete config) was skipped because no trigger condition checks were met.
Jan 29 11:30:59.153896 systemd[1]: Starting kmod-static-nodes.service - Create List of Static Device Nodes...
Jan 29 11:30:59.153912 systemd[1]: Starting modprobe@configfs.service - Load Kernel Module configfs...
Jan 29 11:30:59.153931 systemd[1]: Starting modprobe@dm_mod.service - Load Kernel Module dm_mod...
Jan 29 11:30:59.153947 systemd[1]: Starting modprobe@drm.service - Load Kernel Module drm...
Jan 29 11:30:59.153963 systemd[1]: Starting modprobe@efi_pstore.service - Load Kernel Module efi_pstore...
Jan 29 11:30:59.153978 systemd[1]: Starting modprobe@fuse.service - Load Kernel Module fuse...
Jan 29 11:30:59.153999 systemd[1]: Starting modprobe@loop.service - Load Kernel Module loop...
Jan 29 11:30:59.154015 systemd[1]: setup-nsswitch.service - Create /etc/nsswitch.conf was skipped because of an unmet condition check (ConditionPathExists=!/etc/nsswitch.conf).
Jan 29 11:30:59.154032 systemd[1]: systemd-fsck-root.service: Deactivated successfully.
Jan 29 11:30:59.154048 systemd[1]: Stopped systemd-fsck-root.service - File System Check on Root Device.
Jan 29 11:30:59.154066 systemd[1]: systemd-fsck-usr.service: Deactivated successfully.
Jan 29 11:30:59.154083 systemd[1]: Stopped systemd-fsck-usr.service.
Jan 29 11:30:59.154098 kernel: loop: module loaded
Jan 29 11:30:59.154113 kernel: fuse: init (API version 7.39)
Jan 29 11:30:59.154129 systemd[1]: Starting systemd-journald.service - Journal Service...
Jan 29 11:30:59.154155 systemd[1]: Starting systemd-modules-load.service - Load Kernel Modules...
Jan 29 11:30:59.154171 kernel: ACPI: bus type drm_connector registered
Jan 29 11:30:59.154187 systemd[1]: Starting systemd-network-generator.service - Generate network units from Kernel command line...
Jan 29 11:30:59.154203 systemd[1]: Starting systemd-remount-fs.service - Remount Root and Kernel File Systems...
Jan 29 11:30:59.154244 systemd-journald[1126]: Collecting audit messages is disabled.
Jan 29 11:30:59.154272 systemd-journald[1126]: Journal started
Jan 29 11:30:59.154299 systemd-journald[1126]: Runtime Journal (/run/log/journal/bd18affd420d423dbc377984be366869) is 6.0M, max 48.4M, 42.3M free.
Jan 29 11:30:58.927232 systemd[1]: Queued start job for default target multi-user.target.
Jan 29 11:30:58.944299 systemd[1]: Unnecessary job was removed for dev-vda6.device - /dev/vda6.
Jan 29 11:30:58.944862 systemd[1]: systemd-journald.service: Deactivated successfully.
Jan 29 11:30:59.156520 systemd[1]: Starting systemd-udev-trigger.service - Coldplug All udev Devices...
Jan 29 11:30:59.158920 systemd[1]: verity-setup.service: Deactivated successfully.
Jan 29 11:30:59.158951 systemd[1]: Stopped verity-setup.service.
Jan 29 11:30:59.162462 systemd[1]: xenserver-pv-version.service - Set fake PV driver version for XenServer was skipped because of an unmet condition check (ConditionVirtualization=xen).
Jan 29 11:30:59.167698 systemd[1]: Started systemd-journald.service - Journal Service.
Jan 29 11:30:59.168196 systemd[1]: Mounted dev-hugepages.mount - Huge Pages File System.
Jan 29 11:30:59.169390 systemd[1]: Mounted dev-mqueue.mount - POSIX Message Queue File System.
Jan 29 11:30:59.170640 systemd[1]: Mounted media.mount - External Media Directory.
Jan 29 11:30:59.171867 systemd[1]: Mounted sys-kernel-debug.mount - Kernel Debug File System.
Jan 29 11:30:59.173198 systemd[1]: Mounted sys-kernel-tracing.mount - Kernel Trace File System.
Jan 29 11:30:59.174483 systemd[1]: Mounted tmp.mount - Temporary Directory /tmp.
Jan 29 11:30:59.175744 systemd[1]: Finished flatcar-tmpfiles.service - Create missing system files.
Jan 29 11:30:59.177301 systemd[1]: Finished kmod-static-nodes.service - Create List of Static Device Nodes.
Jan 29 11:30:59.178883 systemd[1]: modprobe@configfs.service: Deactivated successfully.
Jan 29 11:30:59.179073 systemd[1]: Finished modprobe@configfs.service - Load Kernel Module configfs.
Jan 29 11:30:59.180633 systemd[1]: modprobe@dm_mod.service: Deactivated successfully.
Jan 29 11:30:59.180821 systemd[1]: Finished modprobe@dm_mod.service - Load Kernel Module dm_mod.
Jan 29 11:30:59.182496 systemd[1]: modprobe@drm.service: Deactivated successfully.
Jan 29 11:30:59.182686 systemd[1]: Finished modprobe@drm.service - Load Kernel Module drm.
Jan 29 11:30:59.184260 systemd[1]: modprobe@efi_pstore.service: Deactivated successfully.
Jan 29 11:30:59.184460 systemd[1]: Finished modprobe@efi_pstore.service - Load Kernel Module efi_pstore.
Jan 29 11:30:59.186067 systemd[1]: modprobe@fuse.service: Deactivated successfully.
Jan 29 11:30:59.186265 systemd[1]: Finished modprobe@fuse.service - Load Kernel Module fuse.
Jan 29 11:30:59.187700 systemd[1]: modprobe@loop.service: Deactivated successfully.
Jan 29 11:30:59.187884 systemd[1]: Finished modprobe@loop.service - Load Kernel Module loop.
Jan 29 11:30:59.189312 systemd[1]: Finished systemd-modules-load.service - Load Kernel Modules.
Jan 29 11:30:59.190848 systemd[1]: Finished systemd-network-generator.service - Generate network units from Kernel command line.
Jan 29 11:30:59.192421 systemd[1]: Finished systemd-remount-fs.service - Remount Root and Kernel File Systems.
Jan 29 11:30:59.205797 systemd[1]: Reached target network-pre.target - Preparation for Network.
Jan 29 11:30:59.214530 systemd[1]: Mounting sys-fs-fuse-connections.mount - FUSE Control File System...
Jan 29 11:30:59.216918 systemd[1]: Mounting sys-kernel-config.mount - Kernel Configuration File System...
Jan 29 11:30:59.218205 systemd[1]: remount-root.service - Remount Root File System was skipped because of an unmet condition check (ConditionPathIsReadWrite=!/).
Jan 29 11:30:59.218245 systemd[1]: Reached target local-fs.target - Local File Systems.
Jan 29 11:30:59.220703 systemd[1]: Listening on systemd-sysext.socket - System Extension Image Management (Varlink).
Jan 29 11:30:59.223405 systemd[1]: Starting dracut-shutdown.service - Restore /run/initramfs on shutdown...
Jan 29 11:30:59.228611 systemd[1]: Starting ldconfig.service - Rebuild Dynamic Linker Cache...
Jan 29 11:30:59.230127 systemd[1]: systemd-binfmt.service - Set Up Additional Binary Formats was skipped because no trigger condition checks were met.
Jan 29 11:30:59.231946 systemd[1]: Starting systemd-hwdb-update.service - Rebuild Hardware Database...
Jan 29 11:30:59.236534 systemd[1]: Starting systemd-journal-flush.service - Flush Journal to Persistent Storage...
Jan 29 11:30:59.238204 systemd[1]: systemd-pstore.service - Platform Persistent Storage Archival was skipped because of an unmet condition check (ConditionDirectoryNotEmpty=/sys/fs/pstore).
Jan 29 11:30:59.241295 systemd[1]: Starting systemd-random-seed.service - Load/Save OS Random Seed...
Jan 29 11:30:59.242589 systemd[1]: systemd-repart.service - Repartition Root Disk was skipped because no trigger condition checks were met.
Jan 29 11:30:59.244118 systemd[1]: Starting systemd-sysctl.service - Apply Kernel Variables...
Jan 29 11:30:59.250679 systemd[1]: Starting systemd-sysext.service - Merge System Extension Images into /usr/ and /opt/...
Jan 29 11:30:59.256108 systemd-journald[1126]: Time spent on flushing to /var/log/journal/bd18affd420d423dbc377984be366869 is 19.412ms for 951 entries.
Jan 29 11:30:59.256108 systemd-journald[1126]: System Journal (/var/log/journal/bd18affd420d423dbc377984be366869) is 8.0M, max 195.6M, 187.6M free.
Jan 29 11:30:59.301733 systemd-journald[1126]: Received client request to flush runtime journal.
Jan 29 11:30:59.301790 kernel: loop0: detected capacity change from 0 to 140992
Jan 29 11:30:59.257784 systemd[1]: Starting systemd-tmpfiles-setup-dev-early.service - Create Static Device Nodes in /dev gracefully...
Jan 29 11:30:59.263324 systemd[1]: Mounted sys-fs-fuse-connections.mount - FUSE Control File System.
Jan 29 11:30:59.265003 systemd[1]: Mounted sys-kernel-config.mount - Kernel Configuration File System.
Jan 29 11:30:59.266976 systemd[1]: Finished dracut-shutdown.service - Restore /run/initramfs on shutdown.
Jan 29 11:30:59.271717 systemd[1]: Finished systemd-random-seed.service - Load/Save OS Random Seed.
Jan 29 11:30:59.281186 systemd[1]: Finished systemd-udev-trigger.service - Coldplug All udev Devices.
Jan 29 11:30:59.291785 systemd[1]: Finished systemd-sysctl.service - Apply Kernel Variables.
Jan 29 11:30:59.298601 systemd[1]: Reached target first-boot-complete.target - First Boot Complete.
Jan 29 11:30:59.306481 kernel: squashfs: version 4.0 (2009/01/31) Phillip Lougher
Jan 29 11:30:59.309749 systemd[1]: Starting systemd-machine-id-commit.service - Commit a transient machine-id on disk...
Jan 29 11:30:59.317566 systemd-tmpfiles[1172]: ACLs are not supported, ignoring.
Jan 29 11:30:59.317587 systemd-tmpfiles[1172]: ACLs are not supported, ignoring.
Jan 29 11:30:59.319718 systemd[1]: Starting systemd-udev-settle.service - Wait for udev To Complete Device Initialization...
Jan 29 11:30:59.322277 systemd[1]: Finished systemd-journal-flush.service - Flush Journal to Persistent Storage.
Jan 29 11:30:59.328785 systemd[1]: Finished systemd-tmpfiles-setup-dev-early.service - Create Static Device Nodes in /dev gracefully.
Jan 29 11:30:59.340787 systemd[1]: Starting systemd-sysusers.service - Create System Users...
Jan 29 11:30:59.344535 kernel: loop1: detected capacity change from 0 to 138184
Jan 29 11:30:59.343033 udevadm[1186]: systemd-udev-settle.service is deprecated. Please fix lvm2-activation.service, lvm2-activation-early.service not to pull it in.
Jan 29 11:30:59.344661 systemd[1]: etc-machine\x2did.mount: Deactivated successfully.
Jan 29 11:30:59.345486 systemd[1]: Finished systemd-machine-id-commit.service - Commit a transient machine-id on disk.
Jan 29 11:30:59.368970 systemd[1]: Finished systemd-sysusers.service - Create System Users.
Jan 29 11:30:59.376680 kernel: loop2: detected capacity change from 0 to 210664
Jan 29 11:30:59.376874 systemd[1]: Starting systemd-tmpfiles-setup-dev.service - Create Static Device Nodes in /dev...
Jan 29 11:30:59.399512 systemd-tmpfiles[1194]: ACLs are not supported, ignoring.
Jan 29 11:30:59.399532 systemd-tmpfiles[1194]: ACLs are not supported, ignoring.
Jan 29 11:30:59.405508 systemd[1]: Finished systemd-tmpfiles-setup-dev.service - Create Static Device Nodes in /dev.
Jan 29 11:30:59.411478 kernel: loop3: detected capacity change from 0 to 140992
Jan 29 11:30:59.424482 kernel: loop4: detected capacity change from 0 to 138184
Jan 29 11:30:59.437485 kernel: loop5: detected capacity change from 0 to 210664
Jan 29 11:30:59.445031 (sd-merge)[1198]: Using extensions 'containerd-flatcar', 'docker-flatcar', 'kubernetes'.
Jan 29 11:30:59.445826 (sd-merge)[1198]: Merged extensions into '/usr'.
Jan 29 11:30:59.450471 systemd[1]: Reloading requested from client PID 1170 ('systemd-sysext') (unit systemd-sysext.service)...
Jan 29 11:30:59.450489 systemd[1]: Reloading...
Jan 29 11:30:59.518492 zram_generator::config[1224]: No configuration found.
Jan 29 11:30:59.609599 ldconfig[1165]: /sbin/ldconfig: /usr/lib/ld.so.conf is not an ELF file - it has the wrong magic bytes at the start.
Jan 29 11:30:59.649262 systemd[1]: /usr/lib/systemd/system/docker.socket:6: ListenStream= references a path below legacy directory /var/run/, updating /var/run/docker.sock → /run/docker.sock; please update the unit file accordingly.
Jan 29 11:30:59.704206 systemd[1]: Reloading finished in 253 ms.
Jan 29 11:30:59.738379 systemd[1]: Finished ldconfig.service - Rebuild Dynamic Linker Cache.
Jan 29 11:30:59.740479 systemd[1]: Finished systemd-sysext.service - Merge System Extension Images into /usr/ and /opt/.
Jan 29 11:30:59.753830 systemd[1]: Starting ensure-sysext.service...
Jan 29 11:30:59.756516 systemd[1]: Starting systemd-tmpfiles-setup.service - Create System Files and Directories...
Jan 29 11:30:59.766094 systemd[1]: Reloading requested from client PID 1261 ('systemctl') (unit ensure-sysext.service)...
Jan 29 11:30:59.766125 systemd[1]: Reloading...
Jan 29 11:30:59.784047 systemd-tmpfiles[1262]: /usr/lib/tmpfiles.d/provision.conf:20: Duplicate line for path "/root", ignoring.
Jan 29 11:30:59.784615 systemd-tmpfiles[1262]: /usr/lib/tmpfiles.d/systemd-flatcar.conf:6: Duplicate line for path "/var/log/journal", ignoring.
Jan 29 11:30:59.785941 systemd-tmpfiles[1262]: /usr/lib/tmpfiles.d/systemd.conf:29: Duplicate line for path "/var/lib/systemd", ignoring.
Jan 29 11:30:59.786367 systemd-tmpfiles[1262]: ACLs are not supported, ignoring.
Jan 29 11:30:59.786738 systemd-tmpfiles[1262]: ACLs are not supported, ignoring.
Jan 29 11:30:59.791265 systemd-tmpfiles[1262]: Detected autofs mount point /boot during canonicalization of boot.
Jan 29 11:30:59.791280 systemd-tmpfiles[1262]: Skipping /boot
Jan 29 11:30:59.810012 systemd-tmpfiles[1262]: Detected autofs mount point /boot during canonicalization of boot.
Jan 29 11:30:59.810202 systemd-tmpfiles[1262]: Skipping /boot
Jan 29 11:30:59.828478 zram_generator::config[1289]: No configuration found.
Jan 29 11:30:59.949047 systemd[1]: /usr/lib/systemd/system/docker.socket:6: ListenStream= references a path below legacy directory /var/run/, updating /var/run/docker.sock → /run/docker.sock; please update the unit file accordingly.
Jan 29 11:31:00.004783 systemd[1]: Reloading finished in 238 ms.
Jan 29 11:31:00.026745 systemd[1]: Finished systemd-hwdb-update.service - Rebuild Hardware Database.
Jan 29 11:31:00.042277 systemd[1]: Finished systemd-tmpfiles-setup.service - Create System Files and Directories.
Jan 29 11:31:00.053752 systemd[1]: Starting audit-rules.service - Load Audit Rules...
Jan 29 11:31:00.058257 systemd[1]: Starting clean-ca-certificates.service - Clean up broken links in /etc/ssl/certs...
Jan 29 11:31:00.061716 systemd[1]: Starting systemd-journal-catalog-update.service - Rebuild Journal Catalog...
Jan 29 11:31:00.067017 systemd[1]: Starting systemd-resolved.service - Network Name Resolution...
Jan 29 11:31:00.073915 systemd[1]: Starting systemd-udevd.service - Rule-based Manager for Device Events and Files...
Jan 29 11:31:00.078545 systemd[1]: Starting systemd-update-utmp.service - Record System Boot/Shutdown in UTMP...
Jan 29 11:31:00.082252 systemd[1]: proc-xen.mount - /proc/xen was skipped because of an unmet condition check (ConditionVirtualization=xen). Jan 29 11:31:00.083522 systemd[1]: ignition-delete-config.service - Ignition (delete config) was skipped because no trigger condition checks were met. Jan 29 11:31:00.086579 systemd[1]: Starting modprobe@dm_mod.service - Load Kernel Module dm_mod... Jan 29 11:31:00.091279 systemd[1]: Starting modprobe@efi_pstore.service - Load Kernel Module efi_pstore... Jan 29 11:31:00.097756 systemd[1]: Starting modprobe@loop.service - Load Kernel Module loop... Jan 29 11:31:00.099632 systemd[1]: systemd-binfmt.service - Set Up Additional Binary Formats was skipped because no trigger condition checks were met. Jan 29 11:31:00.103793 systemd[1]: Starting systemd-userdbd.service - User Database Manager... Jan 29 11:31:00.106777 systemd[1]: xenserver-pv-version.service - Set fake PV driver version for XenServer was skipped because of an unmet condition check (ConditionVirtualization=xen). Jan 29 11:31:00.107781 systemd[1]: modprobe@efi_pstore.service: Deactivated successfully. Jan 29 11:31:00.107969 systemd[1]: Finished modprobe@efi_pstore.service - Load Kernel Module efi_pstore. Jan 29 11:31:00.111766 systemd[1]: modprobe@dm_mod.service: Deactivated successfully. Jan 29 11:31:00.111952 systemd[1]: Finished modprobe@dm_mod.service - Load Kernel Module dm_mod. Jan 29 11:31:00.115805 systemd[1]: modprobe@loop.service: Deactivated successfully. Jan 29 11:31:00.115998 systemd[1]: Finished modprobe@loop.service - Load Kernel Module loop. Jan 29 11:31:00.123251 systemd-udevd[1335]: Using default interface naming scheme 'v255'. Jan 29 11:31:00.123817 systemd[1]: Finished systemd-journal-catalog-update.service - Rebuild Journal Catalog. Jan 29 11:31:00.130058 systemd[1]: Finished systemd-update-utmp.service - Record System Boot/Shutdown in UTMP. 
Jan 29 11:31:00.133052 systemd[1]: proc-xen.mount - /proc/xen was skipped because of an unmet condition check (ConditionVirtualization=xen). Jan 29 11:31:00.133351 systemd[1]: ignition-delete-config.service - Ignition (delete config) was skipped because no trigger condition checks were met. Jan 29 11:31:00.139562 augenrules[1363]: No rules Jan 29 11:31:00.141833 systemd[1]: Starting modprobe@dm_mod.service - Load Kernel Module dm_mod... Jan 29 11:31:00.144815 systemd[1]: Starting modprobe@efi_pstore.service - Load Kernel Module efi_pstore... Jan 29 11:31:00.149099 systemd[1]: Starting modprobe@loop.service - Load Kernel Module loop... Jan 29 11:31:00.150551 systemd[1]: systemd-binfmt.service - Set Up Additional Binary Formats was skipped because no trigger condition checks were met. Jan 29 11:31:00.153969 systemd[1]: Starting systemd-update-done.service - Update is Completed... Jan 29 11:31:00.155144 systemd[1]: xenserver-pv-version.service - Set fake PV driver version for XenServer was skipped because of an unmet condition check (ConditionVirtualization=xen). Jan 29 11:31:00.156708 systemd[1]: audit-rules.service: Deactivated successfully. Jan 29 11:31:00.157387 systemd[1]: Finished audit-rules.service - Load Audit Rules. Jan 29 11:31:00.158341 systemd[1]: modprobe@dm_mod.service: Deactivated successfully. Jan 29 11:31:00.158982 systemd[1]: Finished modprobe@dm_mod.service - Load Kernel Module dm_mod. Jan 29 11:31:00.161151 systemd[1]: modprobe@loop.service: Deactivated successfully. Jan 29 11:31:00.161433 systemd[1]: Finished modprobe@loop.service - Load Kernel Module loop. Jan 29 11:31:00.167189 systemd[1]: Started systemd-udevd.service - Rule-based Manager for Device Events and Files. Jan 29 11:31:00.169127 systemd[1]: Started systemd-userdbd.service - User Database Manager. Jan 29 11:31:00.170951 systemd[1]: modprobe@efi_pstore.service: Deactivated successfully. 
Jan 29 11:31:00.171212 systemd[1]: Finished modprobe@efi_pstore.service - Load Kernel Module efi_pstore. Jan 29 11:31:00.185913 systemd[1]: Finished clean-ca-certificates.service - Clean up broken links in /etc/ssl/certs. Jan 29 11:31:00.189921 systemd[1]: Finished systemd-update-done.service - Update is Completed. Jan 29 11:31:00.197280 systemd[1]: Finished ensure-sysext.service. Jan 29 11:31:00.204621 systemd[1]: proc-xen.mount - /proc/xen was skipped because of an unmet condition check (ConditionVirtualization=xen). Jan 29 11:31:00.211623 systemd[1]: Starting audit-rules.service - Load Audit Rules... Jan 29 11:31:00.212785 systemd[1]: ignition-delete-config.service - Ignition (delete config) was skipped because no trigger condition checks were met. Jan 29 11:31:00.217607 systemd[1]: Starting modprobe@dm_mod.service - Load Kernel Module dm_mod... Jan 29 11:31:00.225617 systemd[1]: Starting modprobe@drm.service - Load Kernel Module drm... Jan 29 11:31:00.228274 systemd[1]: Starting modprobe@loop.service - Load Kernel Module loop... Jan 29 11:31:00.229616 systemd[1]: systemd-binfmt.service - Set Up Additional Binary Formats was skipped because no trigger condition checks were met. Jan 29 11:31:00.233505 systemd[1]: Starting systemd-networkd.service - Network Configuration... Jan 29 11:31:00.234661 systemd[1]: systemd-pstore.service - Platform Persistent Storage Archival was skipped because of an unmet condition check (ConditionDirectoryNotEmpty=/sys/fs/pstore). Jan 29 11:31:00.237652 systemd[1]: Starting systemd-timesyncd.service - Network Time Synchronization... Jan 29 11:31:00.238974 systemd[1]: update-ca-certificates.service - Update CA bundle at /etc/ssl/certs/ca-certificates.crt was skipped because of an unmet condition check (ConditionPathIsSymbolicLink=!/etc/ssl/certs/ca-certificates.crt). 
Jan 29 11:31:00.239072 systemd[1]: xenserver-pv-version.service - Set fake PV driver version for XenServer was skipped because of an unmet condition check (ConditionVirtualization=xen). Jan 29 11:31:00.239723 systemd[1]: modprobe@dm_mod.service: Deactivated successfully. Jan 29 11:31:00.240048 systemd[1]: Finished modprobe@dm_mod.service - Load Kernel Module dm_mod. Jan 29 11:31:00.242893 systemd[1]: modprobe@drm.service: Deactivated successfully. Jan 29 11:31:00.243097 systemd[1]: Finished modprobe@drm.service - Load Kernel Module drm. Jan 29 11:31:00.248856 kernel: BTRFS warning: duplicate device /dev/vda3 devid 1 generation 38 scanned by (udev-worker) (1388) Jan 29 11:31:00.247177 systemd[1]: Condition check resulted in dev-ttyS0.device - /dev/ttyS0 being skipped. Jan 29 11:31:00.260986 systemd[1]: modprobe@loop.service: Deactivated successfully. Jan 29 11:31:00.261192 systemd[1]: Finished modprobe@loop.service - Load Kernel Module loop. Jan 29 11:31:00.265036 systemd[1]: systemd-repart.service - Repartition Root Disk was skipped because no trigger condition checks were met. Jan 29 11:31:00.277464 augenrules[1400]: /sbin/augenrules: No change Jan 29 11:31:00.304512 augenrules[1434]: No rules Jan 29 11:31:00.305511 systemd[1]: audit-rules.service: Deactivated successfully. Jan 29 11:31:00.305851 systemd[1]: Finished audit-rules.service - Load Audit Rules. Jan 29 11:31:00.320468 kernel: input: Power Button as /devices/LNXSYSTM:00/LNXPWRBN:00/input/input2 Jan 29 11:31:00.320810 systemd-resolved[1331]: Positive Trust Anchors: Jan 29 11:31:00.321067 systemd-resolved[1331]: . 
IN DS 20326 8 2 e06d44b80b8f1d39a95c0b0d7c65d08458e880409bbc683457104237c7f8ec8d Jan 29 11:31:00.321179 systemd-resolved[1331]: Negative trust anchors: home.arpa 10.in-addr.arpa 16.172.in-addr.arpa 17.172.in-addr.arpa 18.172.in-addr.arpa 19.172.in-addr.arpa 20.172.in-addr.arpa 21.172.in-addr.arpa 22.172.in-addr.arpa 23.172.in-addr.arpa 24.172.in-addr.arpa 25.172.in-addr.arpa 26.172.in-addr.arpa 27.172.in-addr.arpa 28.172.in-addr.arpa 29.172.in-addr.arpa 30.172.in-addr.arpa 31.172.in-addr.arpa 170.0.0.192.in-addr.arpa 171.0.0.192.in-addr.arpa 168.192.in-addr.arpa d.f.ip6.arpa ipv4only.arpa resolver.arpa corp home internal intranet lan local private test Jan 29 11:31:00.325275 systemd-resolved[1331]: Defaulting to hostname 'linux'. Jan 29 11:31:00.325503 kernel: ACPI: button: Power Button [PWRF] Jan 29 11:31:00.327202 systemd[1]: Started systemd-resolved.service - Network Name Resolution. Jan 29 11:31:00.328690 systemd[1]: Reached target nss-lookup.target - Host and Network Name Lookups. Jan 29 11:31:00.339321 systemd[1]: Found device dev-disk-by\x2dlabel-OEM.device - /dev/disk/by-label/OEM. Jan 29 11:31:00.347541 kernel: input: ImExPS/2 Generic Explorer Mouse as /devices/platform/i8042/serio1/input/input3 Jan 29 11:31:00.346602 systemd[1]: Starting systemd-fsck@dev-disk-by\x2dlabel-OEM.service - File System Check on /dev/disk/by-label/OEM... Jan 29 11:31:00.352976 kernel: i801_smbus 0000:00:1f.3: SMBus using PCI interrupt Jan 29 11:31:00.353335 kernel: i2c i2c-0: 1/1 memory slots populated (from DMI) Jan 29 11:31:00.353766 kernel: i2c i2c-0: Memory type 0x07 not supported yet, not instantiating SPD Jan 29 11:31:00.355856 systemd-networkd[1410]: lo: Link UP Jan 29 11:31:00.355867 systemd-networkd[1410]: lo: Gained carrier Jan 29 11:31:00.357815 systemd-networkd[1410]: Enumeration completed Jan 29 11:31:00.357916 systemd[1]: Started systemd-networkd.service - Network Configuration. 
Jan 29 11:31:00.358246 systemd-networkd[1410]: eth0: found matching network '/usr/lib/systemd/network/zz-default.network', based on potentially unpredictable interface name. Jan 29 11:31:00.358251 systemd-networkd[1410]: eth0: Configuring with /usr/lib/systemd/network/zz-default.network. Jan 29 11:31:00.359155 systemd[1]: Reached target network.target - Network. Jan 29 11:31:00.360206 systemd-networkd[1410]: eth0: Link UP Jan 29 11:31:00.360211 systemd-networkd[1410]: eth0: Gained carrier Jan 29 11:31:00.360224 systemd-networkd[1410]: eth0: found matching network '/usr/lib/systemd/network/zz-default.network', based on potentially unpredictable interface name. Jan 29 11:31:00.368643 systemd[1]: Starting systemd-networkd-wait-online.service - Wait for Network to be Configured... Jan 29 11:31:00.377551 systemd-networkd[1410]: eth0: DHCPv4 address 10.0.0.69/16, gateway 10.0.0.1 acquired from 10.0.0.1 Jan 29 11:31:00.377972 systemd[1]: Finished systemd-fsck@dev-disk-by\x2dlabel-OEM.service - File System Check on /dev/disk/by-label/OEM. Jan 29 11:31:00.395054 systemd[1]: Starting systemd-vconsole-setup.service - Virtual Console Setup... Jan 29 11:31:00.396751 systemd[1]: Started systemd-timesyncd.service - Network Time Synchronization. Jan 29 11:31:01.441705 systemd-timesyncd[1413]: Contacted time server 10.0.0.1:123 (10.0.0.1). Jan 29 11:31:01.441777 systemd-timesyncd[1413]: Initial clock synchronization to Wed 2025-01-29 11:31:01.441497 UTC. Jan 29 11:31:01.444459 systemd-resolved[1331]: Clock change detected. Flushing caches. Jan 29 11:31:01.447599 systemd[1]: Reached target time-set.target - System Time Set. 
Jan 29 11:31:01.472443 kernel: mousedev: PS/2 mouse device common for all mice Jan 29 11:31:01.539459 kernel: kvm_amd: TSC scaling supported Jan 29 11:31:01.539603 kernel: kvm_amd: Nested Virtualization enabled Jan 29 11:31:01.539633 kernel: kvm_amd: Nested Paging enabled Jan 29 11:31:01.539655 kernel: kvm_amd: LBR virtualization supported Jan 29 11:31:01.539680 kernel: kvm_amd: Virtual VMLOAD VMSAVE supported Jan 29 11:31:01.539704 kernel: kvm_amd: Virtual GIF supported Jan 29 11:31:01.558445 kernel: EDAC MC: Ver: 3.0.0 Jan 29 11:31:01.571036 systemd[1]: Finished systemd-vconsole-setup.service - Virtual Console Setup. Jan 29 11:31:01.606153 systemd[1]: Finished systemd-udev-settle.service - Wait for udev To Complete Device Initialization. Jan 29 11:31:01.617586 systemd[1]: Starting lvm2-activation-early.service - Activation of LVM2 logical volumes... Jan 29 11:31:01.628059 lvm[1458]: WARNING: Failed to connect to lvmetad. Falling back to device scanning. Jan 29 11:31:01.663786 systemd[1]: Finished lvm2-activation-early.service - Activation of LVM2 logical volumes. Jan 29 11:31:01.665449 systemd[1]: Reached target cryptsetup.target - Local Encrypted Volumes. Jan 29 11:31:01.666720 systemd[1]: Reached target sysinit.target - System Initialization. Jan 29 11:31:01.668077 systemd[1]: Started motdgen.path - Watch for update engine configuration changes. Jan 29 11:31:01.669483 systemd[1]: Started user-cloudinit@var-lib-flatcar\x2dinstall-user_data.path - Watch for a cloud-config at /var/lib/flatcar-install/user_data. Jan 29 11:31:01.671209 systemd[1]: Started logrotate.timer - Daily rotation of log files. Jan 29 11:31:01.672527 systemd[1]: Started mdadm.timer - Weekly check for MD array's redundancy information.. Jan 29 11:31:01.673908 systemd[1]: Started systemd-tmpfiles-clean.timer - Daily Cleanup of Temporary Directories. 
Jan 29 11:31:01.675230 systemd[1]: update-engine-stub.timer - Update Engine Stub Timer was skipped because of an unmet condition check (ConditionPathExists=/usr/.noupdate). Jan 29 11:31:01.675265 systemd[1]: Reached target paths.target - Path Units. Jan 29 11:31:01.676278 systemd[1]: Reached target timers.target - Timer Units. Jan 29 11:31:01.678176 systemd[1]: Listening on dbus.socket - D-Bus System Message Bus Socket. Jan 29 11:31:01.681087 systemd[1]: Starting docker.socket - Docker Socket for the API... Jan 29 11:31:01.694915 systemd[1]: Listening on sshd.socket - OpenSSH Server Socket. Jan 29 11:31:01.697505 systemd[1]: Starting lvm2-activation.service - Activation of LVM2 logical volumes... Jan 29 11:31:01.699363 systemd[1]: Listening on docker.socket - Docker Socket for the API. Jan 29 11:31:01.700688 systemd[1]: Reached target sockets.target - Socket Units. Jan 29 11:31:01.701778 systemd[1]: Reached target basic.target - Basic System. Jan 29 11:31:01.702893 systemd[1]: addon-config@oem.service - Configure Addon /oem was skipped because no trigger condition checks were met. Jan 29 11:31:01.702931 systemd[1]: addon-run@oem.service - Run Addon /oem was skipped because no trigger condition checks were met. Jan 29 11:31:01.704173 systemd[1]: Starting containerd.service - containerd container runtime... Jan 29 11:31:01.706653 systemd[1]: Starting dbus.service - D-Bus System Message Bus... Jan 29 11:31:01.711505 systemd[1]: Starting enable-oem-cloudinit.service - Enable cloudinit... Jan 29 11:31:01.714047 systemd[1]: Starting extend-filesystems.service - Extend Filesystems... Jan 29 11:31:01.716185 systemd[1]: flatcar-setup-environment.service - Modifies /etc/environment for CoreOS was skipped because of an unmet condition check (ConditionPathExists=/oem/bin/flatcar-setup-environment). Jan 29 11:31:01.718369 systemd[1]: Starting motdgen.service - Generate /run/flatcar/motd... 
Jan 29 11:31:01.719741 systemd[1]: Starting prepare-helm.service - Unpack helm to /opt/bin... Jan 29 11:31:01.720111 jq[1465]: false Jan 29 11:31:01.722464 lvm[1462]: WARNING: Failed to connect to lvmetad. Falling back to device scanning. Jan 29 11:31:01.722717 systemd[1]: Starting ssh-key-proc-cmdline.service - Install an ssh key from /proc/cmdline... Jan 29 11:31:01.728595 systemd[1]: Starting sshd-keygen.service - Generate sshd host keys... Jan 29 11:31:01.733678 systemd[1]: Starting systemd-logind.service - User Login Management... Jan 29 11:31:01.738088 systemd[1]: tcsd.service - TCG Core Services Daemon was skipped because of an unmet condition check (ConditionPathExists=/dev/tpm0). Jan 29 11:31:01.738853 systemd[1]: cgroup compatibility translation between legacy and unified hierarchy settings activated. See cgroup-compat debug messages for details. Jan 29 11:31:01.742600 systemd[1]: Starting update-engine.service - Update Engine... Jan 29 11:31:01.747598 systemd[1]: Starting update-ssh-keys-after-ignition.service - Run update-ssh-keys once after Ignition... Jan 29 11:31:01.747603 dbus-daemon[1464]: [system] SELinux support is enabled Jan 29 11:31:01.749925 systemd[1]: Started dbus.service - D-Bus System Message Bus. Jan 29 11:31:01.755779 systemd[1]: Finished lvm2-activation.service - Activation of LVM2 logical volumes. Jan 29 11:31:01.761630 systemd[1]: enable-oem-cloudinit.service: Skipped due to 'exec-condition'. Jan 29 11:31:01.769687 update_engine[1475]: I20250129 11:31:01.767565 1475 main.cc:92] Flatcar Update Engine starting Jan 29 11:31:01.761951 systemd[1]: Condition check resulted in enable-oem-cloudinit.service - Enable cloudinit being skipped. Jan 29 11:31:01.762999 systemd[1]: ssh-key-proc-cmdline.service: Deactivated successfully. Jan 29 11:31:01.763279 systemd[1]: Finished ssh-key-proc-cmdline.service - Install an ssh key from /proc/cmdline. 
Jan 29 11:31:01.771644 systemd[1]: system-cloudinit@usr-share-oem-cloud\x2dconfig.yml.service - Load cloud-config from /usr/share/oem/cloud-config.yml was skipped because of an unmet condition check (ConditionFileNotEmpty=/usr/share/oem/cloud-config.yml). Jan 29 11:31:01.776514 extend-filesystems[1466]: Found loop3 Jan 29 11:31:01.776514 extend-filesystems[1466]: Found loop4 Jan 29 11:31:01.776514 extend-filesystems[1466]: Found loop5 Jan 29 11:31:01.776514 extend-filesystems[1466]: Found sr0 Jan 29 11:31:01.776514 extend-filesystems[1466]: Found vda Jan 29 11:31:01.776514 extend-filesystems[1466]: Found vda1 Jan 29 11:31:01.776514 extend-filesystems[1466]: Found vda2 Jan 29 11:31:01.776514 extend-filesystems[1466]: Found vda3 Jan 29 11:31:01.776514 extend-filesystems[1466]: Found usr Jan 29 11:31:01.776514 extend-filesystems[1466]: Found vda4 Jan 29 11:31:01.776514 extend-filesystems[1466]: Found vda6 Jan 29 11:31:01.776514 extend-filesystems[1466]: Found vda7 Jan 29 11:31:01.776514 extend-filesystems[1466]: Found vda9 Jan 29 11:31:01.776514 extend-filesystems[1466]: Checking size of /dev/vda9 Jan 29 11:31:01.959716 kernel: BTRFS warning: duplicate device /dev/vda3 devid 1 generation 38 scanned by (udev-worker) (1376) Jan 29 11:31:01.959768 kernel: EXT4-fs (vda9): resizing filesystem from 553472 to 1864699 blocks Jan 29 11:31:01.959783 kernel: EXT4-fs (vda9): resized filesystem to 1864699 Jan 29 11:31:01.959831 jq[1477]: true Jan 29 11:31:01.771714 systemd[1]: Reached target system-config.target - Load system-provided cloud configs. 
Jan 29 11:31:01.960004 tar[1484]: linux-amd64/helm Jan 29 11:31:01.960406 extend-filesystems[1466]: Resized partition /dev/vda9 Jan 29 11:31:01.963480 update_engine[1475]: I20250129 11:31:01.785713 1475 update_check_scheduler.cc:74] Next update check in 7m44s Jan 29 11:31:01.774514 systemd[1]: user-cloudinit-proc-cmdline.service - Load cloud-config from url defined in /proc/cmdline was skipped because of an unmet condition check (ConditionKernelCommandLine=cloud-config-url). Jan 29 11:31:01.963664 extend-filesystems[1499]: resize2fs 1.47.1 (20-May-2024) Jan 29 11:31:01.963664 extend-filesystems[1499]: Filesystem at /dev/vda9 is mounted on /; on-line resizing required Jan 29 11:31:01.963664 extend-filesystems[1499]: old_desc_blocks = 1, new_desc_blocks = 1 Jan 29 11:31:01.963664 extend-filesystems[1499]: The filesystem on /dev/vda9 is now 1864699 (4k) blocks long. Jan 29 11:31:01.774539 systemd[1]: Reached target user-config.target - Load user-provided cloud configs. Jan 29 11:31:01.966893 jq[1492]: true Jan 29 11:31:01.967065 extend-filesystems[1466]: Resized filesystem in /dev/vda9 Jan 29 11:31:01.777424 systemd[1]: motdgen.service: Deactivated successfully. Jan 29 11:31:01.777706 systemd[1]: Finished motdgen.service - Generate /run/flatcar/motd. Jan 29 11:31:01.793960 systemd[1]: Started update-engine.service - Update Engine. Jan 29 11:31:01.797923 (ntainerd)[1495]: containerd.service: Referenced but unset environment variable evaluates to an empty string: TORCX_IMAGEDIR, TORCX_UNPACKDIR Jan 29 11:31:01.798578 systemd[1]: Started locksmithd.service - Cluster reboot manager. Jan 29 11:31:01.810460 systemd-logind[1471]: Watching system buttons on /dev/input/event1 (Power Button) Jan 29 11:31:01.810480 systemd-logind[1471]: Watching system buttons on /dev/input/event0 (AT Translated Set 2 keyboard) Jan 29 11:31:01.815238 systemd-logind[1471]: New seat seat0. Jan 29 11:31:01.822218 systemd[1]: Started systemd-logind.service - User Login Management. 
Jan 29 11:31:01.962129 systemd[1]: extend-filesystems.service: Deactivated successfully. Jan 29 11:31:01.962395 systemd[1]: Finished extend-filesystems.service - Extend Filesystems. Jan 29 11:31:01.974937 locksmithd[1501]: locksmithd starting currentOperation="UPDATE_STATUS_IDLE" strategy="reboot" Jan 29 11:31:02.014662 sshd_keygen[1480]: ssh-keygen: generating new host keys: RSA ECDSA ED25519 Jan 29 11:31:02.036128 systemd[1]: Finished sshd-keygen.service - Generate sshd host keys. Jan 29 11:31:02.072844 systemd[1]: Starting issuegen.service - Generate /run/issue... Jan 29 11:31:02.085992 systemd[1]: issuegen.service: Deactivated successfully. Jan 29 11:31:02.086292 systemd[1]: Finished issuegen.service - Generate /run/issue. Jan 29 11:31:02.101751 systemd[1]: Starting systemd-user-sessions.service - Permit User Sessions... Jan 29 11:31:02.172583 systemd[1]: Finished systemd-user-sessions.service - Permit User Sessions. Jan 29 11:31:02.231533 systemd[1]: Started getty@tty1.service - Getty on tty1. Jan 29 11:31:02.234492 systemd[1]: Started serial-getty@ttyS0.service - Serial Getty on ttyS0. Jan 29 11:31:02.236071 systemd[1]: Reached target getty.target - Login Prompts. Jan 29 11:31:02.453808 bash[1517]: Updated "/home/core/.ssh/authorized_keys" Jan 29 11:31:02.455059 systemd[1]: Finished update-ssh-keys-after-ignition.service - Run update-ssh-keys once after Ignition. Jan 29 11:31:02.471036 systemd[1]: sshkeys.service was skipped because no trigger condition checks were met. Jan 29 11:31:02.507501 containerd[1495]: time="2025-01-29T11:31:02.507269288Z" level=info msg="starting containerd" revision=9b2ad7760328148397346d10c7b2004271249db4 version=v1.7.23 Jan 29 11:31:02.533934 containerd[1495]: time="2025-01-29T11:31:02.533879442Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.aufs\"..." 
type=io.containerd.snapshotter.v1 Jan 29 11:31:02.536083 containerd[1495]: time="2025-01-29T11:31:02.536037659Z" level=info msg="skip loading plugin \"io.containerd.snapshotter.v1.aufs\"..." error="aufs is not supported (modprobe aufs failed: exit status 1 \"modprobe: FATAL: Module aufs not found in directory /lib/modules/6.6.74-flatcar\\n\"): skip plugin" type=io.containerd.snapshotter.v1 Jan 29 11:31:02.536083 containerd[1495]: time="2025-01-29T11:31:02.536080339Z" level=info msg="loading plugin \"io.containerd.event.v1.exchange\"..." type=io.containerd.event.v1 Jan 29 11:31:02.536136 containerd[1495]: time="2025-01-29T11:31:02.536099174Z" level=info msg="loading plugin \"io.containerd.internal.v1.opt\"..." type=io.containerd.internal.v1 Jan 29 11:31:02.536361 containerd[1495]: time="2025-01-29T11:31:02.536336269Z" level=info msg="loading plugin \"io.containerd.warning.v1.deprecations\"..." type=io.containerd.warning.v1 Jan 29 11:31:02.536361 containerd[1495]: time="2025-01-29T11:31:02.536359292Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.blockfile\"..." type=io.containerd.snapshotter.v1 Jan 29 11:31:02.536522 containerd[1495]: time="2025-01-29T11:31:02.536461965Z" level=info msg="skip loading plugin \"io.containerd.snapshotter.v1.blockfile\"..." error="no scratch file generator: skip plugin" type=io.containerd.snapshotter.v1 Jan 29 11:31:02.536522 containerd[1495]: time="2025-01-29T11:31:02.536479418Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.btrfs\"..." type=io.containerd.snapshotter.v1 Jan 29 11:31:02.536717 containerd[1495]: time="2025-01-29T11:31:02.536696725Z" level=info msg="skip loading plugin \"io.containerd.snapshotter.v1.btrfs\"..." 
error="path /var/lib/containerd/io.containerd.snapshotter.v1.btrfs (ext4) must be a btrfs filesystem to be used with the btrfs snapshotter: skip plugin" type=io.containerd.snapshotter.v1 Jan 29 11:31:02.536760 containerd[1495]: time="2025-01-29T11:31:02.536717053Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.devmapper\"..." type=io.containerd.snapshotter.v1 Jan 29 11:31:02.536760 containerd[1495]: time="2025-01-29T11:31:02.536731530Z" level=info msg="skip loading plugin \"io.containerd.snapshotter.v1.devmapper\"..." error="devmapper not configured: skip plugin" type=io.containerd.snapshotter.v1 Jan 29 11:31:02.536760 containerd[1495]: time="2025-01-29T11:31:02.536742681Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.native\"..." type=io.containerd.snapshotter.v1 Jan 29 11:31:02.536926 containerd[1495]: time="2025-01-29T11:31:02.536856244Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.overlayfs\"..." type=io.containerd.snapshotter.v1 Jan 29 11:31:02.537177 containerd[1495]: time="2025-01-29T11:31:02.537149344Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.zfs\"..." type=io.containerd.snapshotter.v1 Jan 29 11:31:02.537316 containerd[1495]: time="2025-01-29T11:31:02.537289747Z" level=info msg="skip loading plugin \"io.containerd.snapshotter.v1.zfs\"..." error="path /var/lib/containerd/io.containerd.snapshotter.v1.zfs must be a zfs filesystem to be used with the zfs snapshotter: skip plugin" type=io.containerd.snapshotter.v1 Jan 29 11:31:02.537316 containerd[1495]: time="2025-01-29T11:31:02.537308292Z" level=info msg="loading plugin \"io.containerd.content.v1.content\"..." type=io.containerd.content.v1 Jan 29 11:31:02.537483 containerd[1495]: time="2025-01-29T11:31:02.537459035Z" level=info msg="loading plugin \"io.containerd.metadata.v1.bolt\"..." 
type=io.containerd.metadata.v1 Jan 29 11:31:02.537547 containerd[1495]: time="2025-01-29T11:31:02.537532412Z" level=info msg="metadata content store policy set" policy=shared Jan 29 11:31:02.570034 tar[1484]: linux-amd64/LICENSE Jan 29 11:31:02.570151 tar[1484]: linux-amd64/README.md Jan 29 11:31:02.587315 systemd[1]: Finished prepare-helm.service - Unpack helm to /opt/bin. Jan 29 11:31:02.651975 containerd[1495]: time="2025-01-29T11:31:02.651903399Z" level=info msg="loading plugin \"io.containerd.gc.v1.scheduler\"..." type=io.containerd.gc.v1 Jan 29 11:31:02.652107 containerd[1495]: time="2025-01-29T11:31:02.652054011Z" level=info msg="loading plugin \"io.containerd.differ.v1.walking\"..." type=io.containerd.differ.v1 Jan 29 11:31:02.652107 containerd[1495]: time="2025-01-29T11:31:02.652081122Z" level=info msg="loading plugin \"io.containerd.lease.v1.manager\"..." type=io.containerd.lease.v1 Jan 29 11:31:02.652191 containerd[1495]: time="2025-01-29T11:31:02.652104746Z" level=info msg="loading plugin \"io.containerd.streaming.v1.manager\"..." type=io.containerd.streaming.v1 Jan 29 11:31:02.652191 containerd[1495]: time="2025-01-29T11:31:02.652124123Z" level=info msg="loading plugin \"io.containerd.runtime.v1.linux\"..." type=io.containerd.runtime.v1 Jan 29 11:31:02.652391 containerd[1495]: time="2025-01-29T11:31:02.652354415Z" level=info msg="loading plugin \"io.containerd.monitor.v1.cgroups\"..." type=io.containerd.monitor.v1 Jan 29 11:31:02.652707 containerd[1495]: time="2025-01-29T11:31:02.652667822Z" level=info msg="loading plugin \"io.containerd.runtime.v2.task\"..." type=io.containerd.runtime.v2 Jan 29 11:31:02.652835 containerd[1495]: time="2025-01-29T11:31:02.652806492Z" level=info msg="loading plugin \"io.containerd.runtime.v2.shim\"..." type=io.containerd.runtime.v2 Jan 29 11:31:02.652835 containerd[1495]: time="2025-01-29T11:31:02.652831599Z" level=info msg="loading plugin \"io.containerd.sandbox.store.v1.local\"..." 
type=io.containerd.sandbox.store.v1 Jan 29 11:31:02.652886 containerd[1495]: time="2025-01-29T11:31:02.652850825Z" level=info msg="loading plugin \"io.containerd.sandbox.controller.v1.local\"..." type=io.containerd.sandbox.controller.v1 Jan 29 11:31:02.652886 containerd[1495]: time="2025-01-29T11:31:02.652867497Z" level=info msg="loading plugin \"io.containerd.service.v1.containers-service\"..." type=io.containerd.service.v1 Jan 29 11:31:02.652923 containerd[1495]: time="2025-01-29T11:31:02.652883928Z" level=info msg="loading plugin \"io.containerd.service.v1.content-service\"..." type=io.containerd.service.v1 Jan 29 11:31:02.652923 containerd[1495]: time="2025-01-29T11:31:02.652899787Z" level=info msg="loading plugin \"io.containerd.service.v1.diff-service\"..." type=io.containerd.service.v1 Jan 29 11:31:02.652923 containerd[1495]: time="2025-01-29T11:31:02.652915617Z" level=info msg="loading plugin \"io.containerd.service.v1.images-service\"..." type=io.containerd.service.v1 Jan 29 11:31:02.652973 containerd[1495]: time="2025-01-29T11:31:02.652931647Z" level=info msg="loading plugin \"io.containerd.service.v1.introspection-service\"..." type=io.containerd.service.v1 Jan 29 11:31:02.652973 containerd[1495]: time="2025-01-29T11:31:02.652946465Z" level=info msg="loading plugin \"io.containerd.service.v1.namespaces-service\"..." type=io.containerd.service.v1 Jan 29 11:31:02.652973 containerd[1495]: time="2025-01-29T11:31:02.652961032Z" level=info msg="loading plugin \"io.containerd.service.v1.snapshots-service\"..." type=io.containerd.service.v1 Jan 29 11:31:02.653029 containerd[1495]: time="2025-01-29T11:31:02.652975038Z" level=info msg="loading plugin \"io.containerd.service.v1.tasks-service\"..." type=io.containerd.service.v1 Jan 29 11:31:02.653029 containerd[1495]: time="2025-01-29T11:31:02.653002540Z" level=info msg="loading plugin \"io.containerd.grpc.v1.containers\"..." 
type=io.containerd.grpc.v1
Jan 29 11:31:02.653029 containerd[1495]: time="2025-01-29T11:31:02.653018330Z" level=info msg="loading plugin \"io.containerd.grpc.v1.content\"..." type=io.containerd.grpc.v1
Jan 29 11:31:02.653095 containerd[1495]: time="2025-01-29T11:31:02.653032176Z" level=info msg="loading plugin \"io.containerd.grpc.v1.diff\"..." type=io.containerd.grpc.v1
Jan 29 11:31:02.653095 containerd[1495]: time="2025-01-29T11:31:02.653057974Z" level=info msg="loading plugin \"io.containerd.grpc.v1.events\"..." type=io.containerd.grpc.v1
Jan 29 11:31:02.653095 containerd[1495]: time="2025-01-29T11:31:02.653073543Z" level=info msg="loading plugin \"io.containerd.grpc.v1.images\"..." type=io.containerd.grpc.v1
Jan 29 11:31:02.653095 containerd[1495]: time="2025-01-29T11:31:02.653088652Z" level=info msg="loading plugin \"io.containerd.grpc.v1.introspection\"..." type=io.containerd.grpc.v1
Jan 29 11:31:02.653169 containerd[1495]: time="2025-01-29T11:31:02.653102808Z" level=info msg="loading plugin \"io.containerd.grpc.v1.leases\"..." type=io.containerd.grpc.v1
Jan 29 11:31:02.653169 containerd[1495]: time="2025-01-29T11:31:02.653117446Z" level=info msg="loading plugin \"io.containerd.grpc.v1.namespaces\"..." type=io.containerd.grpc.v1
Jan 29 11:31:02.653169 containerd[1495]: time="2025-01-29T11:31:02.653133175Z" level=info msg="loading plugin \"io.containerd.grpc.v1.sandbox-controllers\"..." type=io.containerd.grpc.v1
Jan 29 11:31:02.653169 containerd[1495]: time="2025-01-29T11:31:02.653151189Z" level=info msg="loading plugin \"io.containerd.grpc.v1.sandboxes\"..." type=io.containerd.grpc.v1
Jan 29 11:31:02.653169 containerd[1495]: time="2025-01-29T11:31:02.653166407Z" level=info msg="loading plugin \"io.containerd.grpc.v1.snapshots\"..." type=io.containerd.grpc.v1
Jan 29 11:31:02.653256 containerd[1495]: time="2025-01-29T11:31:02.653183008Z" level=info msg="loading plugin \"io.containerd.grpc.v1.streaming\"..." type=io.containerd.grpc.v1
Jan 29 11:31:02.653256 containerd[1495]: time="2025-01-29T11:31:02.653197205Z" level=info msg="loading plugin \"io.containerd.grpc.v1.tasks\"..." type=io.containerd.grpc.v1
Jan 29 11:31:02.653256 containerd[1495]: time="2025-01-29T11:31:02.653214427Z" level=info msg="loading plugin \"io.containerd.transfer.v1.local\"..." type=io.containerd.transfer.v1
Jan 29 11:31:02.653256 containerd[1495]: time="2025-01-29T11:31:02.653239054Z" level=info msg="loading plugin \"io.containerd.grpc.v1.transfer\"..." type=io.containerd.grpc.v1
Jan 29 11:31:02.653335 containerd[1495]: time="2025-01-29T11:31:02.653254903Z" level=info msg="loading plugin \"io.containerd.grpc.v1.version\"..." type=io.containerd.grpc.v1
Jan 29 11:31:02.653335 containerd[1495]: time="2025-01-29T11:31:02.653268198Z" level=info msg="loading plugin \"io.containerd.internal.v1.restart\"..." type=io.containerd.internal.v1
Jan 29 11:31:02.653381 containerd[1495]: time="2025-01-29T11:31:02.653333711Z" level=info msg="loading plugin \"io.containerd.tracing.processor.v1.otlp\"..." type=io.containerd.tracing.processor.v1
Jan 29 11:31:02.653381 containerd[1495]: time="2025-01-29T11:31:02.653356644Z" level=info msg="skip loading plugin \"io.containerd.tracing.processor.v1.otlp\"..." error="skip plugin: tracing endpoint not configured" type=io.containerd.tracing.processor.v1
Jan 29 11:31:02.653381 containerd[1495]: time="2025-01-29T11:31:02.653369268Z" level=info msg="loading plugin \"io.containerd.internal.v1.tracing\"..." type=io.containerd.internal.v1
Jan 29 11:31:02.653481 containerd[1495]: time="2025-01-29T11:31:02.653385318Z" level=info msg="skip loading plugin \"io.containerd.internal.v1.tracing\"..." error="skip plugin: tracing endpoint not configured" type=io.containerd.internal.v1
Jan 29 11:31:02.653481 containerd[1495]: time="2025-01-29T11:31:02.653397060Z" level=info msg="loading plugin \"io.containerd.grpc.v1.healthcheck\"..." type=io.containerd.grpc.v1
Jan 29 11:31:02.653481 containerd[1495]: time="2025-01-29T11:31:02.653438548Z" level=info msg="loading plugin \"io.containerd.nri.v1.nri\"..." type=io.containerd.nri.v1
Jan 29 11:31:02.653481 containerd[1495]: time="2025-01-29T11:31:02.653453446Z" level=info msg="NRI interface is disabled by configuration."
Jan 29 11:31:02.653481 containerd[1495]: time="2025-01-29T11:31:02.653466129Z" level=info msg="loading plugin \"io.containerd.grpc.v1.cri\"..." type=io.containerd.grpc.v1
Jan 29 11:31:02.653855 containerd[1495]: time="2025-01-29T11:31:02.653787382Z" level=info msg="Start cri plugin with config {PluginConfig:{ContainerdConfig:{Snapshotter:overlayfs DefaultRuntimeName:runc DefaultRuntime:{Type: Path: Engine: PodAnnotations:[] ContainerAnnotations:[] Root: Options:map[] PrivilegedWithoutHostDevices:false PrivilegedWithoutHostDevicesAllDevicesAllowed:false BaseRuntimeSpec: NetworkPluginConfDir: NetworkPluginMaxConfNum:0 Snapshotter: SandboxMode:} UntrustedWorkloadRuntime:{Type: Path: Engine: PodAnnotations:[] ContainerAnnotations:[] Root: Options:map[] PrivilegedWithoutHostDevices:false PrivilegedWithoutHostDevicesAllDevicesAllowed:false BaseRuntimeSpec: NetworkPluginConfDir: NetworkPluginMaxConfNum:0 Snapshotter: SandboxMode:} Runtimes:map[runc:{Type:io.containerd.runc.v2 Path: Engine: PodAnnotations:[] ContainerAnnotations:[] Root: Options:map[SystemdCgroup:true] PrivilegedWithoutHostDevices:false PrivilegedWithoutHostDevicesAllDevicesAllowed:false BaseRuntimeSpec: NetworkPluginConfDir: NetworkPluginMaxConfNum:0 Snapshotter: SandboxMode:podsandbox}] NoPivot:false DisableSnapshotAnnotations:true DiscardUnpackedLayers:false IgnoreBlockIONotEnabledErrors:false IgnoreRdtNotEnabledErrors:false} CniConfig:{NetworkPluginBinDir:/opt/cni/bin NetworkPluginConfDir:/etc/cni/net.d NetworkPluginMaxConfNum:1 NetworkPluginSetupSerially:false NetworkPluginConfTemplate: IPPreference:} Registry:{ConfigPath: Mirrors:map[] Configs:map[] Auths:map[] Headers:map[]} ImageDecryption:{KeyModel:node} DisableTCPService:true StreamServerAddress:127.0.0.1 StreamServerPort:0 StreamIdleTimeout:4h0m0s EnableSelinux:true SelinuxCategoryRange:1024 SandboxImage:registry.k8s.io/pause:3.8 StatsCollectPeriod:10 SystemdCgroup:false EnableTLSStreaming:false X509KeyPairStreaming:{TLSCertFile: TLSKeyFile:} MaxContainerLogLineSize:16384 DisableCgroup:false DisableApparmor:false RestrictOOMScoreAdj:false MaxConcurrentDownloads:3 DisableProcMount:false UnsetSeccompProfile: TolerateMissingHugetlbController:true DisableHugetlbController:true DeviceOwnershipFromSecurityContext:false IgnoreImageDefinedVolumes:false NetNSMountsUnderStateDir:false EnableUnprivilegedPorts:false EnableUnprivilegedICMP:false EnableCDI:false CDISpecDirs:[/etc/cdi /var/run/cdi] ImagePullProgressTimeout:5m0s DrainExecSyncIOTimeout:0s ImagePullWithSyncFs:false IgnoreDeprecationWarnings:[]} ContainerdRootDir:/var/lib/containerd ContainerdEndpoint:/run/containerd/containerd.sock RootDir:/var/lib/containerd/io.containerd.grpc.v1.cri StateDir:/run/containerd/io.containerd.grpc.v1.cri}"
Jan 29 11:31:02.653855 containerd[1495]: time="2025-01-29T11:31:02.653848757Z" level=info msg="Connect containerd service"
Jan 29 11:31:02.654008 containerd[1495]: time="2025-01-29T11:31:02.653880657Z" level=info msg="using legacy CRI server"
Jan 29 11:31:02.654008 containerd[1495]: time="2025-01-29T11:31:02.653889163Z" level=info msg="using experimental NRI integration - disable nri plugin to prevent this"
Jan 29 11:31:02.654057 containerd[1495]: time="2025-01-29T11:31:02.654020389Z" level=info msg="Get image filesystem path \"/var/lib/containerd/io.containerd.snapshotter.v1.overlayfs\""
Jan 29 11:31:02.654792 containerd[1495]: time="2025-01-29T11:31:02.654739888Z" level=error msg="failed to load cni during init, please check CRI plugin status before setting up network for pods" error="cni config load failed: no network config found in /etc/cni/net.d: cni plugin not initialized: failed to load cni config"
Jan 29 11:31:02.654955 containerd[1495]: time="2025-01-29T11:31:02.654906050Z" level=info msg="Start subscribing containerd event"
Jan 29 11:31:02.654994 containerd[1495]: time="2025-01-29T11:31:02.654962205Z" level=info msg="Start recovering state"
Jan 29 11:31:02.655080 containerd[1495]: time="2025-01-29T11:31:02.655060159Z" level=info msg="Start event monitor"
Jan 29 11:31:02.655637 containerd[1495]: time="2025-01-29T11:31:02.655138856Z" level=info msg="Start snapshots syncer"
Jan 29 11:31:02.655637 containerd[1495]: time="2025-01-29T11:31:02.655156449Z" level=info msg="Start cni network conf syncer for default"
Jan 29 11:31:02.655637 containerd[1495]: time="2025-01-29T11:31:02.655167270Z" level=info msg="Start streaming server"
Jan 29 11:31:02.655637 containerd[1495]: time="2025-01-29T11:31:02.655179743Z" level=info msg=serving... address=/run/containerd/containerd.sock.ttrpc
Jan 29 11:31:02.655637 containerd[1495]: time="2025-01-29T11:31:02.655240176Z" level=info msg=serving... address=/run/containerd/containerd.sock
Jan 29 11:31:02.655401 systemd[1]: Started containerd.service - containerd container runtime.
Jan 29 11:31:02.655871 containerd[1495]: time="2025-01-29T11:31:02.655833880Z" level=info msg="containerd successfully booted in 0.150058s"
Jan 29 11:31:02.665541 systemd-networkd[1410]: eth0: Gained IPv6LL
Jan 29 11:31:02.668620 systemd[1]: Finished systemd-networkd-wait-online.service - Wait for Network to be Configured.
Jan 29 11:31:02.670602 systemd[1]: Reached target network-online.target - Network is Online.
Jan 29 11:31:02.687716 systemd[1]: Starting coreos-metadata.service - QEMU metadata agent...
Jan 29 11:31:02.690471 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent...
Jan 29 11:31:02.693000 systemd[1]: Starting nvidia.service - NVIDIA Configure Service...
Jan 29 11:31:02.714177 systemd[1]: coreos-metadata.service: Deactivated successfully.
Jan 29 11:31:02.714454 systemd[1]: Finished coreos-metadata.service - QEMU metadata agent.
Jan 29 11:31:02.717348 systemd[1]: packet-phone-home.service - Report Success to Packet was skipped because no trigger condition checks were met.
Jan 29 11:31:02.718469 systemd[1]: Finished nvidia.service - NVIDIA Configure Service.
Jan 29 11:31:03.871091 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent.
Jan 29 11:31:03.889311 systemd[1]: Reached target multi-user.target - Multi-User System.
Jan 29 11:31:03.889717 (kubelet)[1577]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS, KUBELET_KUBEADM_ARGS
Jan 29 11:31:03.892038 systemd[1]: Startup finished in 768ms (kernel) + 5.688s (initrd) + 4.493s (userspace) = 10.950s.
Jan 29 11:31:04.888251 kubelet[1577]: E0129 11:31:04.888145 1577 run.go:74] "command failed" err="failed to load kubelet config file, path: /var/lib/kubelet/config.yaml, error: failed to load Kubelet config file /var/lib/kubelet/config.yaml, error failed to read kubelet config file \"/var/lib/kubelet/config.yaml\", error: open /var/lib/kubelet/config.yaml: no such file or directory"
Jan 29 11:31:04.892213 systemd[1]: kubelet.service: Main process exited, code=exited, status=1/FAILURE
Jan 29 11:31:04.892440 systemd[1]: kubelet.service: Failed with result 'exit-code'.
Jan 29 11:31:04.892773 systemd[1]: kubelet.service: Consumed 2.029s CPU time.
Jan 29 11:31:11.586930 systemd[1]: Created slice system-sshd.slice - Slice /system/sshd.
Jan 29 11:31:11.605948 systemd[1]: Started sshd@0-10.0.0.69:22-10.0.0.1:36662.service - OpenSSH per-connection server daemon (10.0.0.1:36662).
Jan 29 11:31:11.721940 sshd[1591]: Accepted publickey for core from 10.0.0.1 port 36662 ssh2: RSA SHA256:vglZfOE0APgUpJbg1gfFAEfTfpzlM1a6LSSiwuQWd4A
Jan 29 11:31:11.724601 sshd-session[1591]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Jan 29 11:31:11.757925 systemd[1]: Created slice user-500.slice - User Slice of UID 500.
Jan 29 11:31:11.774626 systemd[1]: Starting user-runtime-dir@500.service - User Runtime Directory /run/user/500...
Jan 29 11:31:11.780191 systemd-logind[1471]: New session 1 of user core.
Jan 29 11:31:11.803504 systemd[1]: Finished user-runtime-dir@500.service - User Runtime Directory /run/user/500.
Jan 29 11:31:11.828276 systemd[1]: Starting user@500.service - User Manager for UID 500...
Jan 29 11:31:11.834446 (systemd)[1595]: pam_unix(systemd-user:session): session opened for user core(uid=500) by (uid=0)
Jan 29 11:31:12.019251 systemd[1595]: Queued start job for default target default.target.
Jan 29 11:31:12.032305 systemd[1595]: Created slice app.slice - User Application Slice.
Jan 29 11:31:12.032349 systemd[1595]: Reached target paths.target - Paths.
Jan 29 11:31:12.032370 systemd[1595]: Reached target timers.target - Timers.
Jan 29 11:31:12.035618 systemd[1595]: Starting dbus.socket - D-Bus User Message Bus Socket...
Jan 29 11:31:12.057579 systemd[1595]: Listening on dbus.socket - D-Bus User Message Bus Socket.
Jan 29 11:31:12.057791 systemd[1595]: Reached target sockets.target - Sockets.
Jan 29 11:31:12.057814 systemd[1595]: Reached target basic.target - Basic System.
Jan 29 11:31:12.057883 systemd[1595]: Reached target default.target - Main User Target.
Jan 29 11:31:12.057939 systemd[1595]: Startup finished in 211ms.
Jan 29 11:31:12.060203 systemd[1]: Started user@500.service - User Manager for UID 500.
Jan 29 11:31:12.065561 systemd[1]: Started session-1.scope - Session 1 of User core.
Jan 29 11:31:12.144826 systemd[1]: Started sshd@1-10.0.0.69:22-10.0.0.1:36678.service - OpenSSH per-connection server daemon (10.0.0.1:36678).
Jan 29 11:31:12.224691 sshd[1606]: Accepted publickey for core from 10.0.0.1 port 36678 ssh2: RSA SHA256:vglZfOE0APgUpJbg1gfFAEfTfpzlM1a6LSSiwuQWd4A
Jan 29 11:31:12.230737 sshd-session[1606]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Jan 29 11:31:12.255823 systemd-logind[1471]: New session 2 of user core.
Jan 29 11:31:12.263722 systemd[1]: Started session-2.scope - Session 2 of User core.
Jan 29 11:31:12.336991 sshd[1608]: Connection closed by 10.0.0.1 port 36678
Jan 29 11:31:12.335635 sshd-session[1606]: pam_unix(sshd:session): session closed for user core
Jan 29 11:31:12.356434 systemd[1]: sshd@1-10.0.0.69:22-10.0.0.1:36678.service: Deactivated successfully.
Jan 29 11:31:12.358647 systemd[1]: session-2.scope: Deactivated successfully.
Jan 29 11:31:12.360459 systemd-logind[1471]: Session 2 logged out. Waiting for processes to exit.
Jan 29 11:31:12.382964 systemd[1]: Started sshd@2-10.0.0.69:22-10.0.0.1:36682.service - OpenSSH per-connection server daemon (10.0.0.1:36682).
Jan 29 11:31:12.384619 systemd-logind[1471]: Removed session 2.
Jan 29 11:31:12.447991 sshd[1613]: Accepted publickey for core from 10.0.0.1 port 36682 ssh2: RSA SHA256:vglZfOE0APgUpJbg1gfFAEfTfpzlM1a6LSSiwuQWd4A
Jan 29 11:31:12.450693 sshd-session[1613]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Jan 29 11:31:12.463480 systemd-logind[1471]: New session 3 of user core.
Jan 29 11:31:12.477847 systemd[1]: Started session-3.scope - Session 3 of User core.
Jan 29 11:31:12.545663 sshd[1615]: Connection closed by 10.0.0.1 port 36682
Jan 29 11:31:12.545632 sshd-session[1613]: pam_unix(sshd:session): session closed for user core
Jan 29 11:31:12.559027 systemd[1]: sshd@2-10.0.0.69:22-10.0.0.1:36682.service: Deactivated successfully.
Jan 29 11:31:12.572011 systemd[1]: session-3.scope: Deactivated successfully.
Jan 29 11:31:12.575607 systemd-logind[1471]: Session 3 logged out. Waiting for processes to exit.
Jan 29 11:31:12.587947 systemd[1]: Started sshd@3-10.0.0.69:22-10.0.0.1:36690.service - OpenSSH per-connection server daemon (10.0.0.1:36690).
Jan 29 11:31:12.592523 systemd-logind[1471]: Removed session 3.
Jan 29 11:31:12.657369 sshd[1620]: Accepted publickey for core from 10.0.0.1 port 36690 ssh2: RSA SHA256:vglZfOE0APgUpJbg1gfFAEfTfpzlM1a6LSSiwuQWd4A
Jan 29 11:31:12.656109 sshd-session[1620]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Jan 29 11:31:12.667901 systemd-logind[1471]: New session 4 of user core.
Jan 29 11:31:12.675978 systemd[1]: Started session-4.scope - Session 4 of User core.
Jan 29 11:31:12.750067 sshd[1622]: Connection closed by 10.0.0.1 port 36690
Jan 29 11:31:12.749024 sshd-session[1620]: pam_unix(sshd:session): session closed for user core
Jan 29 11:31:12.762059 systemd[1]: sshd@3-10.0.0.69:22-10.0.0.1:36690.service: Deactivated successfully.
Jan 29 11:31:12.764718 systemd[1]: session-4.scope: Deactivated successfully.
Jan 29 11:31:12.771581 systemd-logind[1471]: Session 4 logged out. Waiting for processes to exit.
Jan 29 11:31:12.785995 systemd[1]: Started sshd@4-10.0.0.69:22-10.0.0.1:36692.service - OpenSSH per-connection server daemon (10.0.0.1:36692).
Jan 29 11:31:12.793268 systemd-logind[1471]: Removed session 4.
Jan 29 11:31:12.882037 sshd[1627]: Accepted publickey for core from 10.0.0.1 port 36692 ssh2: RSA SHA256:vglZfOE0APgUpJbg1gfFAEfTfpzlM1a6LSSiwuQWd4A
Jan 29 11:31:12.882801 sshd-session[1627]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Jan 29 11:31:12.894614 systemd-logind[1471]: New session 5 of user core.
Jan 29 11:31:12.909744 systemd[1]: Started session-5.scope - Session 5 of User core.
Jan 29 11:31:13.017607 sudo[1630]: core : PWD=/home/core ; USER=root ; COMMAND=/usr/sbin/setenforce 1
Jan 29 11:31:13.018069 sudo[1630]: pam_unix(sudo:session): session opened for user root(uid=0) by core(uid=500)
Jan 29 11:31:13.049730 sudo[1630]: pam_unix(sudo:session): session closed for user root
Jan 29 11:31:13.054915 sshd[1629]: Connection closed by 10.0.0.1 port 36692
Jan 29 11:31:13.055070 sshd-session[1627]: pam_unix(sshd:session): session closed for user core
Jan 29 11:31:13.067876 systemd[1]: sshd@4-10.0.0.69:22-10.0.0.1:36692.service: Deactivated successfully.
Jan 29 11:31:13.073653 systemd[1]: session-5.scope: Deactivated successfully.
Jan 29 11:31:13.080347 systemd-logind[1471]: Session 5 logged out. Waiting for processes to exit.
Jan 29 11:31:13.088294 systemd[1]: Started sshd@5-10.0.0.69:22-10.0.0.1:36706.service - OpenSSH per-connection server daemon (10.0.0.1:36706).
Jan 29 11:31:13.089903 systemd-logind[1471]: Removed session 5.
Jan 29 11:31:13.146007 sshd[1635]: Accepted publickey for core from 10.0.0.1 port 36706 ssh2: RSA SHA256:vglZfOE0APgUpJbg1gfFAEfTfpzlM1a6LSSiwuQWd4A
Jan 29 11:31:13.152229 sshd-session[1635]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Jan 29 11:31:13.167220 systemd-logind[1471]: New session 6 of user core.
Jan 29 11:31:13.181105 systemd[1]: Started session-6.scope - Session 6 of User core.
Jan 29 11:31:13.257167 sudo[1639]: core : PWD=/home/core ; USER=root ; COMMAND=/usr/sbin/rm -rf /etc/audit/rules.d/80-selinux.rules /etc/audit/rules.d/99-default.rules
Jan 29 11:31:13.261025 sudo[1639]: pam_unix(sudo:session): session opened for user root(uid=0) by core(uid=500)
Jan 29 11:31:13.274704 sudo[1639]: pam_unix(sudo:session): session closed for user root
Jan 29 11:31:13.283024 sudo[1638]: core : PWD=/home/core ; USER=root ; COMMAND=/usr/sbin/systemctl restart audit-rules
Jan 29 11:31:13.288779 sudo[1638]: pam_unix(sudo:session): session opened for user root(uid=0) by core(uid=500)
Jan 29 11:31:13.325019 systemd[1]: Starting audit-rules.service - Load Audit Rules...
Jan 29 11:31:13.398221 augenrules[1661]: No rules
Jan 29 11:31:13.402290 systemd[1]: audit-rules.service: Deactivated successfully.
Jan 29 11:31:13.402642 systemd[1]: Finished audit-rules.service - Load Audit Rules.
Jan 29 11:31:13.404960 sudo[1638]: pam_unix(sudo:session): session closed for user root
Jan 29 11:31:13.409380 sshd[1637]: Connection closed by 10.0.0.1 port 36706
Jan 29 11:31:13.408156 sshd-session[1635]: pam_unix(sshd:session): session closed for user core
Jan 29 11:31:13.429802 systemd[1]: sshd@5-10.0.0.69:22-10.0.0.1:36706.service: Deactivated successfully.
Jan 29 11:31:13.435713 systemd[1]: session-6.scope: Deactivated successfully.
Jan 29 11:31:13.441111 systemd-logind[1471]: Session 6 logged out. Waiting for processes to exit.
Jan 29 11:31:13.452049 systemd[1]: Started sshd@6-10.0.0.69:22-10.0.0.1:36716.service - OpenSSH per-connection server daemon (10.0.0.1:36716).
Jan 29 11:31:13.458686 systemd-logind[1471]: Removed session 6.
Jan 29 11:31:13.505630 sshd[1669]: Accepted publickey for core from 10.0.0.1 port 36716 ssh2: RSA SHA256:vglZfOE0APgUpJbg1gfFAEfTfpzlM1a6LSSiwuQWd4A
Jan 29 11:31:13.508618 sshd-session[1669]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Jan 29 11:31:13.523305 systemd-logind[1471]: New session 7 of user core.
Jan 29 11:31:13.547226 systemd[1]: Started session-7.scope - Session 7 of User core.
Jan 29 11:31:13.621268 sudo[1672]: core : PWD=/home/core ; USER=root ; COMMAND=/home/core/install.sh
Jan 29 11:31:13.622251 sudo[1672]: pam_unix(sudo:session): session opened for user root(uid=0) by core(uid=500)
Jan 29 11:31:14.209661 (dockerd)[1692]: docker.service: Referenced but unset environment variable evaluates to an empty string: DOCKER_CGROUPS, DOCKER_OPTS, DOCKER_OPT_BIP, DOCKER_OPT_IPMASQ, DOCKER_OPT_MTU
Jan 29 11:31:14.210161 systemd[1]: Starting docker.service - Docker Application Container Engine...
Jan 29 11:31:14.762381 dockerd[1692]: time="2025-01-29T11:31:14.761453798Z" level=info msg="Starting up"
Jan 29 11:31:15.021962 systemd[1]: kubelet.service: Scheduled restart job, restart counter is at 1.
Jan 29 11:31:15.044617 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent...
Jan 29 11:31:15.073612 systemd[1]: var-lib-docker-metacopy\x2dcheck575060691-merged.mount: Deactivated successfully.
Jan 29 11:31:15.391834 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent.
Jan 29 11:31:15.400478 (kubelet)[1725]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS, KUBELET_KUBEADM_ARGS
Jan 29 11:31:15.494521 dockerd[1692]: time="2025-01-29T11:31:15.494458544Z" level=info msg="Loading containers: start."
Jan 29 11:31:15.541756 kubelet[1725]: E0129 11:31:15.541176 1725 run.go:74] "command failed" err="failed to load kubelet config file, path: /var/lib/kubelet/config.yaml, error: failed to load Kubelet config file /var/lib/kubelet/config.yaml, error failed to read kubelet config file \"/var/lib/kubelet/config.yaml\", error: open /var/lib/kubelet/config.yaml: no such file or directory"
Jan 29 11:31:15.553582 systemd[1]: kubelet.service: Main process exited, code=exited, status=1/FAILURE
Jan 29 11:31:15.553865 systemd[1]: kubelet.service: Failed with result 'exit-code'.
Jan 29 11:31:16.244357 kernel: Initializing XFRM netlink socket
Jan 29 11:31:16.565792 systemd-networkd[1410]: docker0: Link UP
Jan 29 11:31:16.685332 dockerd[1692]: time="2025-01-29T11:31:16.684517850Z" level=info msg="Loading containers: done."
Jan 29 11:31:16.750298 dockerd[1692]: time="2025-01-29T11:31:16.745785008Z" level=warning msg="Not using native diff for overlay2, this may cause degraded performance for building images: kernel has CONFIG_OVERLAY_FS_REDIRECT_DIR enabled" storage-driver=overlay2
Jan 29 11:31:16.750298 dockerd[1692]: time="2025-01-29T11:31:16.749834021Z" level=info msg="Docker daemon" commit=8b539b8df24032dabeaaa099cf1d0535ef0286a3 containerd-snapshotter=false storage-driver=overlay2 version=27.2.1
Jan 29 11:31:16.751237 dockerd[1692]: time="2025-01-29T11:31:16.750889290Z" level=info msg="Daemon has completed initialization"
Jan 29 11:31:16.987904 dockerd[1692]: time="2025-01-29T11:31:16.987219323Z" level=info msg="API listen on /run/docker.sock"
Jan 29 11:31:16.988674 systemd[1]: Started docker.service - Docker Application Container Engine.
Jan 29 11:31:18.427156 containerd[1495]: time="2025-01-29T11:31:18.427091738Z" level=info msg="PullImage \"registry.k8s.io/kube-apiserver:v1.30.9\""
Jan 29 11:31:21.704052 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount2836485216.mount: Deactivated successfully.
Jan 29 11:31:23.330594 containerd[1495]: time="2025-01-29T11:31:23.330529985Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-apiserver:v1.30.9\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Jan 29 11:31:23.331281 containerd[1495]: time="2025-01-29T11:31:23.331244555Z" level=info msg="stop pulling image registry.k8s.io/kube-apiserver:v1.30.9: active requests=0, bytes read=32677012"
Jan 29 11:31:23.332429 containerd[1495]: time="2025-01-29T11:31:23.332385525Z" level=info msg="ImageCreate event name:\"sha256:4f53be91109c4dd4658bb0141e8af556b94293ec9fad72b2b62a617edb48e5c4\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Jan 29 11:31:23.335527 containerd[1495]: time="2025-01-29T11:31:23.335501648Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-apiserver@sha256:540de8f810ac963b8ed93f7393a8746d68e7e8a2c79ea58ff409ac5b9ca6a9fc\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Jan 29 11:31:23.336663 containerd[1495]: time="2025-01-29T11:31:23.336630526Z" level=info msg="Pulled image \"registry.k8s.io/kube-apiserver:v1.30.9\" with image id \"sha256:4f53be91109c4dd4658bb0141e8af556b94293ec9fad72b2b62a617edb48e5c4\", repo tag \"registry.k8s.io/kube-apiserver:v1.30.9\", repo digest \"registry.k8s.io/kube-apiserver@sha256:540de8f810ac963b8ed93f7393a8746d68e7e8a2c79ea58ff409ac5b9ca6a9fc\", size \"32673812\" in 4.909493132s"
Jan 29 11:31:23.336719 containerd[1495]: time="2025-01-29T11:31:23.336668597Z" level=info msg="PullImage \"registry.k8s.io/kube-apiserver:v1.30.9\" returns image reference \"sha256:4f53be91109c4dd4658bb0141e8af556b94293ec9fad72b2b62a617edb48e5c4\""
Jan 29 11:31:23.360735 containerd[1495]: time="2025-01-29T11:31:23.360695596Z" level=info msg="PullImage \"registry.k8s.io/kube-controller-manager:v1.30.9\""
Jan 29 11:31:25.620688 systemd[1]: kubelet.service: Scheduled restart job, restart counter is at 2.
Jan 29 11:31:25.633724 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent...
Jan 29 11:31:25.810683 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent.
Jan 29 11:31:25.810905 (kubelet)[1987]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS, KUBELET_KUBEADM_ARGS
Jan 29 11:31:25.873255 kubelet[1987]: E0129 11:31:25.873111 1987 run.go:74] "command failed" err="failed to load kubelet config file, path: /var/lib/kubelet/config.yaml, error: failed to load Kubelet config file /var/lib/kubelet/config.yaml, error failed to read kubelet config file \"/var/lib/kubelet/config.yaml\", error: open /var/lib/kubelet/config.yaml: no such file or directory"
Jan 29 11:31:25.877850 systemd[1]: kubelet.service: Main process exited, code=exited, status=1/FAILURE
Jan 29 11:31:25.878112 systemd[1]: kubelet.service: Failed with result 'exit-code'.
Jan 29 11:31:28.455721 containerd[1495]: time="2025-01-29T11:31:28.455652249Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-controller-manager:v1.30.9\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Jan 29 11:31:28.499674 containerd[1495]: time="2025-01-29T11:31:28.499617279Z" level=info msg="stop pulling image registry.k8s.io/kube-controller-manager:v1.30.9: active requests=0, bytes read=29605745"
Jan 29 11:31:28.556796 containerd[1495]: time="2025-01-29T11:31:28.556765372Z" level=info msg="ImageCreate event name:\"sha256:d4203c1bb2593a7429c3df3c040da333190e5d7e01f377d0255b7b813ca09568\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Jan 29 11:31:28.586168 containerd[1495]: time="2025-01-29T11:31:28.586106467Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-controller-manager@sha256:6350693c04956b13db2519e01ca12a0bbe58466e9f12ef8617f1429da6081f43\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Jan 29 11:31:28.587193 containerd[1495]: time="2025-01-29T11:31:28.587153100Z" level=info msg="Pulled image \"registry.k8s.io/kube-controller-manager:v1.30.9\" with image id \"sha256:d4203c1bb2593a7429c3df3c040da333190e5d7e01f377d0255b7b813ca09568\", repo tag \"registry.k8s.io/kube-controller-manager:v1.30.9\", repo digest \"registry.k8s.io/kube-controller-manager@sha256:6350693c04956b13db2519e01ca12a0bbe58466e9f12ef8617f1429da6081f43\", size \"31052327\" in 5.226418721s"
Jan 29 11:31:28.587279 containerd[1495]: time="2025-01-29T11:31:28.587186372Z" level=info msg="PullImage \"registry.k8s.io/kube-controller-manager:v1.30.9\" returns image reference \"sha256:d4203c1bb2593a7429c3df3c040da333190e5d7e01f377d0255b7b813ca09568\""
Jan 29 11:31:28.612590 containerd[1495]: time="2025-01-29T11:31:28.612546271Z" level=info msg="PullImage \"registry.k8s.io/kube-scheduler:v1.30.9\""
Jan 29 11:31:29.997376 containerd[1495]: time="2025-01-29T11:31:29.997311169Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-scheduler:v1.30.9\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Jan 29 11:31:29.998099 containerd[1495]: time="2025-01-29T11:31:29.998047069Z" level=info msg="stop pulling image registry.k8s.io/kube-scheduler:v1.30.9: active requests=0, bytes read=17783064"
Jan 29 11:31:29.999253 containerd[1495]: time="2025-01-29T11:31:29.999219578Z" level=info msg="ImageCreate event name:\"sha256:41cce68b0c8c3c4862ff55ac17be57616cce36a04e719aee733e5c7c1a24b725\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Jan 29 11:31:30.002527 containerd[1495]: time="2025-01-29T11:31:30.002020872Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-scheduler@sha256:153efd6dc89e61a38ef273cf4c4cebd2bfee68082c2ee3d4fab5da94e4ae13d3\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Jan 29 11:31:30.003166 containerd[1495]: time="2025-01-29T11:31:30.003108882Z" level=info msg="Pulled image \"registry.k8s.io/kube-scheduler:v1.30.9\" with image id \"sha256:41cce68b0c8c3c4862ff55ac17be57616cce36a04e719aee733e5c7c1a24b725\", repo tag \"registry.k8s.io/kube-scheduler:v1.30.9\", repo digest \"registry.k8s.io/kube-scheduler@sha256:153efd6dc89e61a38ef273cf4c4cebd2bfee68082c2ee3d4fab5da94e4ae13d3\", size \"19229664\" in 1.390523368s"
Jan 29 11:31:30.003166 containerd[1495]: time="2025-01-29T11:31:30.003154698Z" level=info msg="PullImage \"registry.k8s.io/kube-scheduler:v1.30.9\" returns image reference \"sha256:41cce68b0c8c3c4862ff55ac17be57616cce36a04e719aee733e5c7c1a24b725\""
Jan 29 11:31:30.026191 containerd[1495]: time="2025-01-29T11:31:30.026150674Z" level=info msg="PullImage \"registry.k8s.io/kube-proxy:v1.30.9\""
Jan 29 11:31:31.005931 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount4259153232.mount: Deactivated successfully.
Jan 29 11:31:31.867400 containerd[1495]: time="2025-01-29T11:31:31.867330223Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-proxy:v1.30.9\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Jan 29 11:31:31.868336 containerd[1495]: time="2025-01-29T11:31:31.868288820Z" level=info msg="stop pulling image registry.k8s.io/kube-proxy:v1.30.9: active requests=0, bytes read=29058337"
Jan 29 11:31:31.869502 containerd[1495]: time="2025-01-29T11:31:31.869471067Z" level=info msg="ImageCreate event name:\"sha256:4c369683c359609256b8907f424fc2355f1e7e3eeb7295b1fd8ffc5304f4cede\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Jan 29 11:31:31.871647 containerd[1495]: time="2025-01-29T11:31:31.871605230Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-proxy@sha256:d78dc40d97ff862fd8ddb47f80a5ba3feec17bc73e58a60e963885e33faa0083\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Jan 29 11:31:31.872258 containerd[1495]: time="2025-01-29T11:31:31.872216927Z" level=info msg="Pulled image \"registry.k8s.io/kube-proxy:v1.30.9\" with image id \"sha256:4c369683c359609256b8907f424fc2355f1e7e3eeb7295b1fd8ffc5304f4cede\", repo tag \"registry.k8s.io/kube-proxy:v1.30.9\", repo digest \"registry.k8s.io/kube-proxy@sha256:d78dc40d97ff862fd8ddb47f80a5ba3feec17bc73e58a60e963885e33faa0083\", size \"29057356\" in 1.846029785s"
Jan 29 11:31:31.872300 containerd[1495]: time="2025-01-29T11:31:31.872256742Z" level=info msg="PullImage \"registry.k8s.io/kube-proxy:v1.30.9\" returns image reference \"sha256:4c369683c359609256b8907f424fc2355f1e7e3eeb7295b1fd8ffc5304f4cede\""
Jan 29 11:31:31.896572 containerd[1495]: time="2025-01-29T11:31:31.896520695Z" level=info msg="PullImage \"registry.k8s.io/coredns/coredns:v1.11.1\""
Jan 29 11:31:32.438011 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount2354087821.mount: Deactivated successfully.
Jan 29 11:31:33.504822 containerd[1495]: time="2025-01-29T11:31:33.504762923Z" level=info msg="ImageCreate event name:\"registry.k8s.io/coredns/coredns:v1.11.1\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Jan 29 11:31:33.505533 containerd[1495]: time="2025-01-29T11:31:33.505492612Z" level=info msg="stop pulling image registry.k8s.io/coredns/coredns:v1.11.1: active requests=0, bytes read=18185761"
Jan 29 11:31:33.506900 containerd[1495]: time="2025-01-29T11:31:33.506867300Z" level=info msg="ImageCreate event name:\"sha256:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Jan 29 11:31:33.509565 containerd[1495]: time="2025-01-29T11:31:33.509506389Z" level=info msg="ImageCreate event name:\"registry.k8s.io/coredns/coredns@sha256:1eeb4c7316bacb1d4c8ead65571cd92dd21e27359f0d4917f1a5822a73b75db1\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Jan 29 11:31:33.510529 containerd[1495]: time="2025-01-29T11:31:33.510500854Z" level=info msg="Pulled image \"registry.k8s.io/coredns/coredns:v1.11.1\" with image id \"sha256:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4\", repo tag \"registry.k8s.io/coredns/coredns:v1.11.1\", repo digest \"registry.k8s.io/coredns/coredns@sha256:1eeb4c7316bacb1d4c8ead65571cd92dd21e27359f0d4917f1a5822a73b75db1\", size \"18182961\" in 1.613936788s"
Jan 29 11:31:33.510596 containerd[1495]: time="2025-01-29T11:31:33.510531021Z" level=info msg="PullImage \"registry.k8s.io/coredns/coredns:v1.11.1\" returns image reference \"sha256:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4\""
Jan 29 11:31:33.531396 containerd[1495]: time="2025-01-29T11:31:33.531357758Z" level=info msg="PullImage \"registry.k8s.io/pause:3.9\""
Jan 29 11:31:34.159575 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount3642855969.mount: Deactivated successfully.
Jan 29 11:31:34.166613 containerd[1495]: time="2025-01-29T11:31:34.166572209Z" level=info msg="ImageCreate event name:\"registry.k8s.io/pause:3.9\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Jan 29 11:31:34.167426 containerd[1495]: time="2025-01-29T11:31:34.167376979Z" level=info msg="stop pulling image registry.k8s.io/pause:3.9: active requests=0, bytes read=322290"
Jan 29 11:31:34.168518 containerd[1495]: time="2025-01-29T11:31:34.168475044Z" level=info msg="ImageCreate event name:\"sha256:e6f1816883972d4be47bd48879a08919b96afcd344132622e4d444987919323c\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Jan 29 11:31:34.170607 containerd[1495]: time="2025-01-29T11:31:34.170568827Z" level=info msg="ImageCreate event name:\"registry.k8s.io/pause@sha256:7031c1b283388d2c2e09b57badb803c05ebed362dc88d84b480cc47f72a21097\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Jan 29 11:31:34.171285 containerd[1495]: time="2025-01-29T11:31:34.171285748Z" level=info msg="Pulled image \"registry.k8s.io/pause:3.9\" with image id \"sha256:e6f1816883972d4be47bd48879a08919b96afcd344132622e4d444987919323c\", repo tag \"registry.k8s.io/pause:3.9\", repo digest \"registry.k8s.io/pause@sha256:7031c1b283388d2c2e09b57badb803c05ebed362dc88d84b480cc47f72a21097\", size \"321520\" in 639.892383ms"
Jan 29 11:31:34.171327 containerd[1495]: time="2025-01-29T11:31:34.171315777Z" level=info msg="PullImage \"registry.k8s.io/pause:3.9\" returns image reference \"sha256:e6f1816883972d4be47bd48879a08919b96afcd344132622e4d444987919323c\""
Jan 29 11:31:34.192141 containerd[1495]: time="2025-01-29T11:31:34.192088389Z" level=info msg="PullImage \"registry.k8s.io/etcd:3.5.12-0\""
Jan 29 11:31:35.070627 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount2789027043.mount: Deactivated successfully.
Jan 29 11:31:36.120726 systemd[1]: kubelet.service: Scheduled restart job, restart counter is at 3.
Jan 29 11:31:36.129621 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent...
Jan 29 11:31:36.265651 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent.
Jan 29 11:31:36.271319 (kubelet)[2109]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS, KUBELET_KUBEADM_ARGS
Jan 29 11:31:36.501296 kubelet[2109]: E0129 11:31:36.501149 2109 run.go:74] "command failed" err="failed to load kubelet config file, path: /var/lib/kubelet/config.yaml, error: failed to load Kubelet config file /var/lib/kubelet/config.yaml, error failed to read kubelet config file \"/var/lib/kubelet/config.yaml\", error: open /var/lib/kubelet/config.yaml: no such file or directory"
Jan 29 11:31:36.505848 systemd[1]: kubelet.service: Main process exited, code=exited, status=1/FAILURE
Jan 29 11:31:36.506058 systemd[1]: kubelet.service: Failed with result 'exit-code'.
Jan 29 11:31:38.873679 containerd[1495]: time="2025-01-29T11:31:38.873614532Z" level=info msg="ImageCreate event name:\"registry.k8s.io/etcd:3.5.12-0\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jan 29 11:31:38.874447 containerd[1495]: time="2025-01-29T11:31:38.874375659Z" level=info msg="stop pulling image registry.k8s.io/etcd:3.5.12-0: active requests=0, bytes read=57238571" Jan 29 11:31:38.875758 containerd[1495]: time="2025-01-29T11:31:38.875711266Z" level=info msg="ImageCreate event name:\"sha256:3861cfcd7c04ccac1f062788eca39487248527ef0c0cfd477a83d7691a75a899\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jan 29 11:31:38.879130 containerd[1495]: time="2025-01-29T11:31:38.879075728Z" level=info msg="ImageCreate event name:\"registry.k8s.io/etcd@sha256:44a8e24dcbba3470ee1fee21d5e88d128c936e9b55d4bc51fbef8086f8ed123b\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jan 29 11:31:38.880518 containerd[1495]: time="2025-01-29T11:31:38.880474677Z" level=info msg="Pulled image \"registry.k8s.io/etcd:3.5.12-0\" with image id \"sha256:3861cfcd7c04ccac1f062788eca39487248527ef0c0cfd477a83d7691a75a899\", repo tag \"registry.k8s.io/etcd:3.5.12-0\", repo digest \"registry.k8s.io/etcd@sha256:44a8e24dcbba3470ee1fee21d5e88d128c936e9b55d4bc51fbef8086f8ed123b\", size \"57236178\" in 4.688342783s" Jan 29 11:31:38.880563 containerd[1495]: time="2025-01-29T11:31:38.880514773Z" level=info msg="PullImage \"registry.k8s.io/etcd:3.5.12-0\" returns image reference \"sha256:3861cfcd7c04ccac1f062788eca39487248527ef0c0cfd477a83d7691a75a899\"" Jan 29 11:31:41.560078 systemd[1]: Stopped kubelet.service - kubelet: The Kubernetes Node Agent. Jan 29 11:31:41.575691 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent... Jan 29 11:31:41.594079 systemd[1]: Reloading requested from client PID 2233 ('systemctl') (unit session-7.scope)... Jan 29 11:31:41.594093 systemd[1]: Reloading... 
Jan 29 11:31:41.666439 zram_generator::config[2275]: No configuration found. Jan 29 11:31:41.870346 systemd[1]: /usr/lib/systemd/system/docker.socket:6: ListenStream= references a path below legacy directory /var/run/, updating /var/run/docker.sock → /run/docker.sock; please update the unit file accordingly. Jan 29 11:31:41.946653 systemd[1]: Reloading finished in 352 ms. Jan 29 11:31:41.998903 systemd[1]: Stopping kubelet.service - kubelet: The Kubernetes Node Agent... Jan 29 11:31:42.001897 systemd[1]: kubelet.service: Deactivated successfully. Jan 29 11:31:42.002135 systemd[1]: Stopped kubelet.service - kubelet: The Kubernetes Node Agent. Jan 29 11:31:42.003726 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent... Jan 29 11:31:42.146519 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent. Jan 29 11:31:42.150779 (kubelet)[2322]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS Jan 29 11:31:42.185344 kubelet[2322]: Flag --container-runtime-endpoint has been deprecated, This parameter should be set via the config file specified by the Kubelet's --config flag. See https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/ for more information. Jan 29 11:31:42.185344 kubelet[2322]: Flag --pod-infra-container-image has been deprecated, will be removed in a future release. Image garbage collector will get sandbox image information from CRI. Jan 29 11:31:42.185344 kubelet[2322]: Flag --volume-plugin-dir has been deprecated, This parameter should be set via the config file specified by the Kubelet's --config flag. See https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/ for more information. 
Jan 29 11:31:42.186226 kubelet[2322]: I0129 11:31:42.186187 2322 server.go:205] "--pod-infra-container-image will not be pruned by the image garbage collector in kubelet and should also be set in the remote runtime" Jan 29 11:31:42.583055 kubelet[2322]: I0129 11:31:42.582949 2322 server.go:484] "Kubelet version" kubeletVersion="v1.30.1" Jan 29 11:31:42.583055 kubelet[2322]: I0129 11:31:42.582986 2322 server.go:486] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK="" Jan 29 11:31:42.583223 kubelet[2322]: I0129 11:31:42.583201 2322 server.go:927] "Client rotation is on, will bootstrap in background" Jan 29 11:31:42.596501 kubelet[2322]: I0129 11:31:42.596457 2322 dynamic_cafile_content.go:157] "Starting controller" name="client-ca-bundle::/etc/kubernetes/pki/ca.crt" Jan 29 11:31:42.597152 kubelet[2322]: E0129 11:31:42.597108 2322 certificate_manager.go:562] kubernetes.io/kube-apiserver-client-kubelet: Failed while requesting a signed certificate from the control plane: cannot create certificate signing request: Post "https://10.0.0.69:6443/apis/certificates.k8s.io/v1/certificatesigningrequests": dial tcp 10.0.0.69:6443: connect: connection refused Jan 29 11:31:42.607012 kubelet[2322]: I0129 11:31:42.606985 2322 server.go:742] "--cgroups-per-qos enabled, but --cgroup-root was not specified. 
defaulting to /" Jan 29 11:31:42.607670 kubelet[2322]: I0129 11:31:42.607633 2322 container_manager_linux.go:265] "Container manager verified user specified cgroup-root exists" cgroupRoot=[] Jan 29 11:31:42.607831 kubelet[2322]: I0129 11:31:42.607664 2322 container_manager_linux.go:270] "Creating Container Manager object based on Node Config" nodeConfig={"NodeName":"localhost","RuntimeCgroupsName":"","SystemCgroupsName":"","KubeletCgroupsName":"","KubeletOOMScoreAdj":-999,"ContainerRuntime":"","CgroupsPerQOS":true,"CgroupRoot":"/","CgroupDriver":"systemd","KubeletRootDir":"/var/lib/kubelet","ProtectKernelDefaults":false,"KubeReservedCgroupName":"","SystemReservedCgroupName":"","ReservedSystemCPUs":{},"EnforceNodeAllocatable":{"pods":{}},"KubeReserved":null,"SystemReserved":null,"HardEvictionThresholds":[{"Signal":"memory.available","Operator":"LessThan","Value":{"Quantity":"100Mi","Percentage":0},"GracePeriod":0,"MinReclaim":null},{"Signal":"nodefs.available","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.1},"GracePeriod":0,"MinReclaim":null},{"Signal":"nodefs.inodesFree","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.05},"GracePeriod":0,"MinReclaim":null},{"Signal":"imagefs.available","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.15},"GracePeriod":0,"MinReclaim":null},{"Signal":"imagefs.inodesFree","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.05},"GracePeriod":0,"MinReclaim":null}],"QOSReserved":{},"CPUManagerPolicy":"none","CPUManagerPolicyOptions":null,"TopologyManagerScope":"container","CPUManagerReconcilePeriod":10000000000,"ExperimentalMemoryManagerPolicy":"None","ExperimentalMemoryManagerReservedMemory":null,"PodPidsLimit":-1,"EnforceCPULimits":true,"CPUCFSQuotaPeriod":100000000,"TopologyManagerPolicy":"none","TopologyManagerPolicyOptions":null} Jan 29 11:31:42.607930 kubelet[2322]: I0129 11:31:42.607842 2322 topology_manager.go:138] "Creating topology manager with none policy" Jan 29 
11:31:42.607930 kubelet[2322]: I0129 11:31:42.607850 2322 container_manager_linux.go:301] "Creating device plugin manager" Jan 29 11:31:42.607994 kubelet[2322]: I0129 11:31:42.607968 2322 state_mem.go:36] "Initialized new in-memory state store" Jan 29 11:31:42.608586 kubelet[2322]: I0129 11:31:42.608566 2322 kubelet.go:400] "Attempting to sync node with API server" Jan 29 11:31:42.608586 kubelet[2322]: I0129 11:31:42.608582 2322 kubelet.go:301] "Adding static pod path" path="/etc/kubernetes/manifests" Jan 29 11:31:42.608645 kubelet[2322]: I0129 11:31:42.608608 2322 kubelet.go:312] "Adding apiserver pod source" Jan 29 11:31:42.608645 kubelet[2322]: I0129 11:31:42.608628 2322 apiserver.go:42] "Waiting for node sync before watching apiserver pods" Jan 29 11:31:42.611518 kubelet[2322]: W0129 11:31:42.611473 2322 reflector.go:547] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Node: Get "https://10.0.0.69:6443/api/v1/nodes?fieldSelector=metadata.name%3Dlocalhost&limit=500&resourceVersion=0": dial tcp 10.0.0.69:6443: connect: connection refused Jan 29 11:31:42.611684 kubelet[2322]: W0129 11:31:42.611620 2322 reflector.go:547] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Service: Get "https://10.0.0.69:6443/api/v1/services?limit=500&resourceVersion=0": dial tcp 10.0.0.69:6443: connect: connection refused Jan 29 11:31:42.611684 kubelet[2322]: E0129 11:31:42.611682 2322 reflector.go:150] k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Service: failed to list *v1.Service: Get "https://10.0.0.69:6443/api/v1/services?limit=500&resourceVersion=0": dial tcp 10.0.0.69:6443: connect: connection refused Jan 29 11:31:42.612503 kubelet[2322]: E0129 11:31:42.612379 2322 reflector.go:150] k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Node: failed to list *v1.Node: Get "https://10.0.0.69:6443/api/v1/nodes?fieldSelector=metadata.name%3Dlocalhost&limit=500&resourceVersion=0": dial tcp 10.0.0.69:6443: connect: connection 
refused Jan 29 11:31:42.612605 kubelet[2322]: I0129 11:31:42.612574 2322 kuberuntime_manager.go:261] "Container runtime initialized" containerRuntime="containerd" version="v1.7.23" apiVersion="v1" Jan 29 11:31:42.613946 kubelet[2322]: I0129 11:31:42.613921 2322 kubelet.go:815] "Not starting ClusterTrustBundle informer because we are in static kubelet mode" Jan 29 11:31:42.614133 kubelet[2322]: W0129 11:31:42.613991 2322 probe.go:272] Flexvolume plugin directory at /opt/libexec/kubernetes/kubelet-plugins/volume/exec/ does not exist. Recreating. Jan 29 11:31:42.615142 kubelet[2322]: I0129 11:31:42.615117 2322 server.go:1264] "Started kubelet" Jan 29 11:31:42.617874 kubelet[2322]: I0129 11:31:42.616476 2322 fs_resource_analyzer.go:67] "Starting FS ResourceAnalyzer" Jan 29 11:31:42.618537 kubelet[2322]: I0129 11:31:42.618488 2322 server.go:163] "Starting to listen" address="0.0.0.0" port=10250 Jan 29 11:31:42.619676 kubelet[2322]: I0129 11:31:42.619653 2322 server.go:455] "Adding debug handlers to kubelet server" Jan 29 11:31:42.620700 kubelet[2322]: I0129 11:31:42.620650 2322 ratelimit.go:55] "Setting rate limiting for endpoint" service="podresources" qps=100 burstTokens=10 Jan 29 11:31:42.620938 kubelet[2322]: I0129 11:31:42.620914 2322 server.go:227] "Starting to serve the podresources API" endpoint="unix:/var/lib/kubelet/pod-resources/kubelet.sock" Jan 29 11:31:42.621109 kubelet[2322]: E0129 11:31:42.620990 2322 event.go:368] "Unable to write event (may retry after sleeping)" err="Post \"https://10.0.0.69:6443/api/v1/namespaces/default/events\": dial tcp 10.0.0.69:6443: connect: connection refused" event="&Event{ObjectMeta:{localhost.181f267db99c2fe8 default 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Node,Namespace:,Name:localhost,UID:localhost,APIVersion:,ResourceVersion:,FieldPath:,},Reason:Starting,Message:Starting kubelet.,Source:EventSource{Component:kubelet,Host:localhost,},FirstTimestamp:2025-01-29 
11:31:42.615089128 +0000 UTC m=+0.460541033,LastTimestamp:2025-01-29 11:31:42.615089128 +0000 UTC m=+0.460541033,Count:1,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:localhost,}" Jan 29 11:31:42.622077 kubelet[2322]: I0129 11:31:42.622057 2322 volume_manager.go:291] "Starting Kubelet Volume Manager" Jan 29 11:31:42.622282 kubelet[2322]: I0129 11:31:42.622263 2322 desired_state_of_world_populator.go:149] "Desired state populator starts to run" Jan 29 11:31:42.622499 kubelet[2322]: I0129 11:31:42.622484 2322 reconciler.go:26] "Reconciler: start to sync state" Jan 29 11:31:42.622784 kubelet[2322]: W0129 11:31:42.622637 2322 reflector.go:547] k8s.io/client-go/informers/factory.go:160: failed to list *v1.CSIDriver: Get "https://10.0.0.69:6443/apis/storage.k8s.io/v1/csidrivers?limit=500&resourceVersion=0": dial tcp 10.0.0.69:6443: connect: connection refused Jan 29 11:31:42.622784 kubelet[2322]: E0129 11:31:42.622683 2322 reflector.go:150] k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.CSIDriver: failed to list *v1.CSIDriver: Get "https://10.0.0.69:6443/apis/storage.k8s.io/v1/csidrivers?limit=500&resourceVersion=0": dial tcp 10.0.0.69:6443: connect: connection refused Jan 29 11:31:42.622869 kubelet[2322]: E0129 11:31:42.622843 2322 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://10.0.0.69:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/localhost?timeout=10s\": dial tcp 10.0.0.69:6443: connect: connection refused" interval="200ms" Jan 29 11:31:42.623209 kubelet[2322]: I0129 11:31:42.623171 2322 factory.go:221] Registration of the systemd container factory successfully Jan 29 11:31:42.623394 kubelet[2322]: I0129 11:31:42.623369 2322 factory.go:219] Registration of the crio container factory failed: Get "http://%2Fvar%2Frun%2Fcrio%2Fcrio.sock/info": dial unix /var/run/crio/crio.sock: connect: no such file or directory Jan 
29 11:31:42.623734 kubelet[2322]: E0129 11:31:42.623708 2322 kubelet.go:1467] "Image garbage collection failed once. Stats initialization may not have completed yet" err="invalid capacity 0 on image filesystem" Jan 29 11:31:42.624597 kubelet[2322]: I0129 11:31:42.624577 2322 factory.go:221] Registration of the containerd container factory successfully Jan 29 11:31:42.633376 kubelet[2322]: I0129 11:31:42.633323 2322 kubelet_network_linux.go:50] "Initialized iptables rules." protocol="IPv4" Jan 29 11:31:42.634753 kubelet[2322]: I0129 11:31:42.634725 2322 kubelet_network_linux.go:50] "Initialized iptables rules." protocol="IPv6" Jan 29 11:31:42.634753 kubelet[2322]: I0129 11:31:42.634754 2322 status_manager.go:217] "Starting to sync pod status with apiserver" Jan 29 11:31:42.634837 kubelet[2322]: I0129 11:31:42.634772 2322 kubelet.go:2337] "Starting kubelet main sync loop" Jan 29 11:31:42.634837 kubelet[2322]: E0129 11:31:42.634810 2322 kubelet.go:2361] "Skipping pod synchronization" err="[container runtime status check may not have completed yet, PLEG is not healthy: pleg has yet to be successful]" Jan 29 11:31:42.638980 kubelet[2322]: W0129 11:31:42.638945 2322 reflector.go:547] k8s.io/client-go/informers/factory.go:160: failed to list *v1.RuntimeClass: Get "https://10.0.0.69:6443/apis/node.k8s.io/v1/runtimeclasses?limit=500&resourceVersion=0": dial tcp 10.0.0.69:6443: connect: connection refused Jan 29 11:31:42.639030 kubelet[2322]: E0129 11:31:42.638988 2322 reflector.go:150] k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.RuntimeClass: failed to list *v1.RuntimeClass: Get "https://10.0.0.69:6443/apis/node.k8s.io/v1/runtimeclasses?limit=500&resourceVersion=0": dial tcp 10.0.0.69:6443: connect: connection refused Jan 29 11:31:42.639186 kubelet[2322]: I0129 11:31:42.639147 2322 cpu_manager.go:214] "Starting CPU manager" policy="none" Jan 29 11:31:42.639186 kubelet[2322]: I0129 11:31:42.639166 2322 cpu_manager.go:215] "Reconciling" 
reconcilePeriod="10s" Jan 29 11:31:42.639186 kubelet[2322]: I0129 11:31:42.639186 2322 state_mem.go:36] "Initialized new in-memory state store" Jan 29 11:31:42.723664 kubelet[2322]: I0129 11:31:42.723634 2322 kubelet_node_status.go:73] "Attempting to register node" node="localhost" Jan 29 11:31:42.724109 kubelet[2322]: E0129 11:31:42.724065 2322 kubelet_node_status.go:96] "Unable to register node with API server" err="Post \"https://10.0.0.69:6443/api/v1/nodes\": dial tcp 10.0.0.69:6443: connect: connection refused" node="localhost" Jan 29 11:31:42.735261 kubelet[2322]: E0129 11:31:42.735203 2322 kubelet.go:2361] "Skipping pod synchronization" err="container runtime status check may not have completed yet" Jan 29 11:31:42.823905 kubelet[2322]: E0129 11:31:42.823862 2322 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://10.0.0.69:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/localhost?timeout=10s\": dial tcp 10.0.0.69:6443: connect: connection refused" interval="400ms" Jan 29 11:31:42.925300 kubelet[2322]: I0129 11:31:42.925279 2322 kubelet_node_status.go:73] "Attempting to register node" node="localhost" Jan 29 11:31:42.925700 kubelet[2322]: E0129 11:31:42.925652 2322 kubelet_node_status.go:96] "Unable to register node with API server" err="Post \"https://10.0.0.69:6443/api/v1/nodes\": dial tcp 10.0.0.69:6443: connect: connection refused" node="localhost" Jan 29 11:31:42.935721 kubelet[2322]: E0129 11:31:42.935684 2322 kubelet.go:2361] "Skipping pod synchronization" err="container runtime status check may not have completed yet" Jan 29 11:31:42.989527 kubelet[2322]: I0129 11:31:42.989492 2322 policy_none.go:49] "None policy: Start" Jan 29 11:31:42.990250 kubelet[2322]: I0129 11:31:42.990214 2322 memory_manager.go:170] "Starting memorymanager" policy="None" Jan 29 11:31:42.990250 kubelet[2322]: I0129 11:31:42.990238 2322 state_mem.go:35] "Initializing new in-memory state store" Jan 29 11:31:42.998200 systemd[1]: 
Created slice kubepods.slice - libcontainer container kubepods.slice. Jan 29 11:31:43.016686 systemd[1]: Created slice kubepods-burstable.slice - libcontainer container kubepods-burstable.slice. Jan 29 11:31:43.019451 systemd[1]: Created slice kubepods-besteffort.slice - libcontainer container kubepods-besteffort.slice. Jan 29 11:31:43.035383 kubelet[2322]: I0129 11:31:43.035287 2322 manager.go:479] "Failed to read data from checkpoint" checkpoint="kubelet_internal_checkpoint" err="checkpoint is not found" Jan 29 11:31:43.035723 kubelet[2322]: I0129 11:31:43.035560 2322 container_log_manager.go:186] "Initializing container log rotate workers" workers=1 monitorPeriod="10s" Jan 29 11:31:43.035723 kubelet[2322]: I0129 11:31:43.035674 2322 plugin_manager.go:118] "Starting Kubelet Plugin Manager" Jan 29 11:31:43.036837 kubelet[2322]: E0129 11:31:43.036818 2322 eviction_manager.go:282] "Eviction manager: failed to get summary stats" err="failed to get node info: node \"localhost\" not found" Jan 29 11:31:43.177015 kubelet[2322]: E0129 11:31:43.176833 2322 event.go:368] "Unable to write event (may retry after sleeping)" err="Post \"https://10.0.0.69:6443/api/v1/namespaces/default/events\": dial tcp 10.0.0.69:6443: connect: connection refused" event="&Event{ObjectMeta:{localhost.181f267db99c2fe8 default 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Node,Namespace:,Name:localhost,UID:localhost,APIVersion:,ResourceVersion:,FieldPath:,},Reason:Starting,Message:Starting kubelet.,Source:EventSource{Component:kubelet,Host:localhost,},FirstTimestamp:2025-01-29 11:31:42.615089128 +0000 UTC m=+0.460541033,LastTimestamp:2025-01-29 11:31:42.615089128 +0000 UTC m=+0.460541033,Count:1,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:localhost,}" Jan 29 11:31:43.224683 kubelet[2322]: E0129 11:31:43.224633 2322 controller.go:145] "Failed to ensure lease exists, will 
retry" err="Get \"https://10.0.0.69:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/localhost?timeout=10s\": dial tcp 10.0.0.69:6443: connect: connection refused" interval="800ms" Jan 29 11:31:43.327436 kubelet[2322]: I0129 11:31:43.327370 2322 kubelet_node_status.go:73] "Attempting to register node" node="localhost" Jan 29 11:31:43.327846 kubelet[2322]: E0129 11:31:43.327807 2322 kubelet_node_status.go:96] "Unable to register node with API server" err="Post \"https://10.0.0.69:6443/api/v1/nodes\": dial tcp 10.0.0.69:6443: connect: connection refused" node="localhost" Jan 29 11:31:43.335983 kubelet[2322]: I0129 11:31:43.335937 2322 topology_manager.go:215] "Topology Admit Handler" podUID="4bc31ce65a6d9df3be39c5a83c947d44" podNamespace="kube-system" podName="kube-apiserver-localhost" Jan 29 11:31:43.336828 kubelet[2322]: I0129 11:31:43.336792 2322 topology_manager.go:215] "Topology Admit Handler" podUID="9b8b5886141f9311660bb6b224a0f76c" podNamespace="kube-system" podName="kube-controller-manager-localhost" Jan 29 11:31:43.337487 kubelet[2322]: I0129 11:31:43.337461 2322 topology_manager.go:215] "Topology Admit Handler" podUID="4b186e12ac9f083392bb0d1970b49be4" podNamespace="kube-system" podName="kube-scheduler-localhost" Jan 29 11:31:43.344706 systemd[1]: Created slice kubepods-burstable-pod4bc31ce65a6d9df3be39c5a83c947d44.slice - libcontainer container kubepods-burstable-pod4bc31ce65a6d9df3be39c5a83c947d44.slice. Jan 29 11:31:43.357046 systemd[1]: Created slice kubepods-burstable-pod9b8b5886141f9311660bb6b224a0f76c.slice - libcontainer container kubepods-burstable-pod9b8b5886141f9311660bb6b224a0f76c.slice. Jan 29 11:31:43.361447 systemd[1]: Created slice kubepods-burstable-pod4b186e12ac9f083392bb0d1970b49be4.slice - libcontainer container kubepods-burstable-pod4b186e12ac9f083392bb0d1970b49be4.slice. 
Jan 29 11:31:43.427408 kubelet[2322]: I0129 11:31:43.427245 2322 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"usr-share-ca-certificates\" (UniqueName: \"kubernetes.io/host-path/4bc31ce65a6d9df3be39c5a83c947d44-usr-share-ca-certificates\") pod \"kube-apiserver-localhost\" (UID: \"4bc31ce65a6d9df3be39c5a83c947d44\") " pod="kube-system/kube-apiserver-localhost" Jan 29 11:31:43.427408 kubelet[2322]: I0129 11:31:43.427319 2322 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ca-certs\" (UniqueName: \"kubernetes.io/host-path/9b8b5886141f9311660bb6b224a0f76c-ca-certs\") pod \"kube-controller-manager-localhost\" (UID: \"9b8b5886141f9311660bb6b224a0f76c\") " pod="kube-system/kube-controller-manager-localhost" Jan 29 11:31:43.427408 kubelet[2322]: I0129 11:31:43.427350 2322 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"flexvolume-dir\" (UniqueName: \"kubernetes.io/host-path/9b8b5886141f9311660bb6b224a0f76c-flexvolume-dir\") pod \"kube-controller-manager-localhost\" (UID: \"9b8b5886141f9311660bb6b224a0f76c\") " pod="kube-system/kube-controller-manager-localhost" Jan 29 11:31:43.427408 kubelet[2322]: I0129 11:31:43.427377 2322 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"k8s-certs\" (UniqueName: \"kubernetes.io/host-path/9b8b5886141f9311660bb6b224a0f76c-k8s-certs\") pod \"kube-controller-manager-localhost\" (UID: \"9b8b5886141f9311660bb6b224a0f76c\") " pod="kube-system/kube-controller-manager-localhost" Jan 29 11:31:43.427408 kubelet[2322]: I0129 11:31:43.427401 2322 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kubeconfig\" (UniqueName: \"kubernetes.io/host-path/4b186e12ac9f083392bb0d1970b49be4-kubeconfig\") pod \"kube-scheduler-localhost\" (UID: \"4b186e12ac9f083392bb0d1970b49be4\") " 
pod="kube-system/kube-scheduler-localhost" Jan 29 11:31:43.427628 kubelet[2322]: I0129 11:31:43.427443 2322 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ca-certs\" (UniqueName: \"kubernetes.io/host-path/4bc31ce65a6d9df3be39c5a83c947d44-ca-certs\") pod \"kube-apiserver-localhost\" (UID: \"4bc31ce65a6d9df3be39c5a83c947d44\") " pod="kube-system/kube-apiserver-localhost" Jan 29 11:31:43.427628 kubelet[2322]: I0129 11:31:43.427464 2322 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"k8s-certs\" (UniqueName: \"kubernetes.io/host-path/4bc31ce65a6d9df3be39c5a83c947d44-k8s-certs\") pod \"kube-apiserver-localhost\" (UID: \"4bc31ce65a6d9df3be39c5a83c947d44\") " pod="kube-system/kube-apiserver-localhost" Jan 29 11:31:43.427628 kubelet[2322]: I0129 11:31:43.427503 2322 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kubeconfig\" (UniqueName: \"kubernetes.io/host-path/9b8b5886141f9311660bb6b224a0f76c-kubeconfig\") pod \"kube-controller-manager-localhost\" (UID: \"9b8b5886141f9311660bb6b224a0f76c\") " pod="kube-system/kube-controller-manager-localhost" Jan 29 11:31:43.427628 kubelet[2322]: I0129 11:31:43.427559 2322 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"usr-share-ca-certificates\" (UniqueName: \"kubernetes.io/host-path/9b8b5886141f9311660bb6b224a0f76c-usr-share-ca-certificates\") pod \"kube-controller-manager-localhost\" (UID: \"9b8b5886141f9311660bb6b224a0f76c\") " pod="kube-system/kube-controller-manager-localhost" Jan 29 11:31:43.504027 kubelet[2322]: W0129 11:31:43.503939 2322 reflector.go:547] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Service: Get "https://10.0.0.69:6443/api/v1/services?limit=500&resourceVersion=0": dial tcp 10.0.0.69:6443: connect: connection refused Jan 29 11:31:43.504027 kubelet[2322]: E0129 11:31:43.504023 2322 reflector.go:150] 
k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Service: failed to list *v1.Service: Get "https://10.0.0.69:6443/api/v1/services?limit=500&resourceVersion=0": dial tcp 10.0.0.69:6443: connect: connection refused Jan 29 11:31:43.656186 kubelet[2322]: E0129 11:31:43.656132 2322 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jan 29 11:31:43.656698 containerd[1495]: time="2025-01-29T11:31:43.656665398Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-apiserver-localhost,Uid:4bc31ce65a6d9df3be39c5a83c947d44,Namespace:kube-system,Attempt:0,}" Jan 29 11:31:43.659925 kubelet[2322]: E0129 11:31:43.659891 2322 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jan 29 11:31:43.660205 containerd[1495]: time="2025-01-29T11:31:43.660168216Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-controller-manager-localhost,Uid:9b8b5886141f9311660bb6b224a0f76c,Namespace:kube-system,Attempt:0,}" Jan 29 11:31:43.663473 kubelet[2322]: E0129 11:31:43.663444 2322 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jan 29 11:31:43.663869 containerd[1495]: time="2025-01-29T11:31:43.663819437Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-scheduler-localhost,Uid:4b186e12ac9f083392bb0d1970b49be4,Namespace:kube-system,Attempt:0,}" Jan 29 11:31:43.674357 kubelet[2322]: W0129 11:31:43.674284 2322 reflector.go:547] k8s.io/client-go/informers/factory.go:160: failed to list *v1.RuntimeClass: Get "https://10.0.0.69:6443/apis/node.k8s.io/v1/runtimeclasses?limit=500&resourceVersion=0": dial tcp 10.0.0.69:6443: connect: connection refused Jan 29 11:31:43.674357 kubelet[2322]: 
E0129 11:31:43.674348 2322 reflector.go:150] k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.RuntimeClass: failed to list *v1.RuntimeClass: Get "https://10.0.0.69:6443/apis/node.k8s.io/v1/runtimeclasses?limit=500&resourceVersion=0": dial tcp 10.0.0.69:6443: connect: connection refused Jan 29 11:31:43.747291 kubelet[2322]: W0129 11:31:43.747113 2322 reflector.go:547] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Node: Get "https://10.0.0.69:6443/api/v1/nodes?fieldSelector=metadata.name%3Dlocalhost&limit=500&resourceVersion=0": dial tcp 10.0.0.69:6443: connect: connection refused Jan 29 11:31:43.747291 kubelet[2322]: E0129 11:31:43.747174 2322 reflector.go:150] k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Node: failed to list *v1.Node: Get "https://10.0.0.69:6443/api/v1/nodes?fieldSelector=metadata.name%3Dlocalhost&limit=500&resourceVersion=0": dial tcp 10.0.0.69:6443: connect: connection refused Jan 29 11:31:44.002621 kubelet[2322]: W0129 11:31:44.002411 2322 reflector.go:547] k8s.io/client-go/informers/factory.go:160: failed to list *v1.CSIDriver: Get "https://10.0.0.69:6443/apis/storage.k8s.io/v1/csidrivers?limit=500&resourceVersion=0": dial tcp 10.0.0.69:6443: connect: connection refused Jan 29 11:31:44.002621 kubelet[2322]: E0129 11:31:44.002492 2322 reflector.go:150] k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.CSIDriver: failed to list *v1.CSIDriver: Get "https://10.0.0.69:6443/apis/storage.k8s.io/v1/csidrivers?limit=500&resourceVersion=0": dial tcp 10.0.0.69:6443: connect: connection refused Jan 29 11:31:44.025963 kubelet[2322]: E0129 11:31:44.025911 2322 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://10.0.0.69:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/localhost?timeout=10s\": dial tcp 10.0.0.69:6443: connect: connection refused" interval="1.6s" Jan 29 11:31:44.128971 kubelet[2322]: I0129 11:31:44.128925 2322 
kubelet_node_status.go:73] "Attempting to register node" node="localhost" Jan 29 11:31:44.129284 kubelet[2322]: E0129 11:31:44.129245 2322 kubelet_node_status.go:96] "Unable to register node with API server" err="Post \"https://10.0.0.69:6443/api/v1/nodes\": dial tcp 10.0.0.69:6443: connect: connection refused" node="localhost" Jan 29 11:31:44.545767 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount2182951625.mount: Deactivated successfully. Jan 29 11:31:44.649063 kubelet[2322]: E0129 11:31:44.649005 2322 certificate_manager.go:562] kubernetes.io/kube-apiserver-client-kubelet: Failed while requesting a signed certificate from the control plane: cannot create certificate signing request: Post "https://10.0.0.69:6443/apis/certificates.k8s.io/v1/certificatesigningrequests": dial tcp 10.0.0.69:6443: connect: connection refused Jan 29 11:31:44.799541 containerd[1495]: time="2025-01-29T11:31:44.799326050Z" level=info msg="ImageCreate event name:\"registry.k8s.io/pause:3.8\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"} labels:{key:\"io.cri-containerd.pinned\" value:\"pinned\"}" Jan 29 11:31:44.803216 containerd[1495]: time="2025-01-29T11:31:44.803164902Z" level=info msg="stop pulling image registry.k8s.io/pause:3.8: active requests=0, bytes read=0" Jan 29 11:31:44.806172 containerd[1495]: time="2025-01-29T11:31:44.806136984Z" level=info msg="ImageUpdate event name:\"registry.k8s.io/pause:3.8\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"} labels:{key:\"io.cri-containerd.pinned\" value:\"pinned\"}" Jan 29 11:31:44.807407 containerd[1495]: time="2025-01-29T11:31:44.807348278Z" level=info msg="ImageCreate event name:\"sha256:4873874c08efc72e9729683a83ffbb7502ee729e9a5ac097723806ea7fa13517\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"} labels:{key:\"io.cri-containerd.pinned\" value:\"pinned\"}" Jan 29 11:31:44.808607 containerd[1495]: time="2025-01-29T11:31:44.808549974Z" level=info msg="stop pulling image 
registry.k8s.io/pause:3.8: active requests=0, bytes read=312056" Jan 29 11:31:44.809993 containerd[1495]: time="2025-01-29T11:31:44.809923286Z" level=info msg="ImageUpdate event name:\"registry.k8s.io/pause:3.8\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"} labels:{key:\"io.cri-containerd.pinned\" value:\"pinned\"}" Jan 29 11:31:44.811431 containerd[1495]: time="2025-01-29T11:31:44.811371370Z" level=info msg="stop pulling image registry.k8s.io/pause:3.8: active requests=0, bytes read=0" Jan 29 11:31:44.812950 containerd[1495]: time="2025-01-29T11:31:44.812911840Z" level=info msg="ImageCreate event name:\"registry.k8s.io/pause@sha256:9001185023633d17a2f98ff69b6ff2615b8ea02a825adffa40422f51dfdcde9d\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"} labels:{key:\"io.cri-containerd.pinned\" value:\"pinned\"}" Jan 29 11:31:44.814196 containerd[1495]: time="2025-01-29T11:31:44.814142050Z" level=info msg="Pulled image \"registry.k8s.io/pause:3.8\" with image id \"sha256:4873874c08efc72e9729683a83ffbb7502ee729e9a5ac097723806ea7fa13517\", repo tag \"registry.k8s.io/pause:3.8\", repo digest \"registry.k8s.io/pause@sha256:9001185023633d17a2f98ff69b6ff2615b8ea02a825adffa40422f51dfdcde9d\", size \"311286\" in 1.15022159s" Jan 29 11:31:44.820369 containerd[1495]: time="2025-01-29T11:31:44.820287028Z" level=info msg="Pulled image \"registry.k8s.io/pause:3.8\" with image id \"sha256:4873874c08efc72e9729683a83ffbb7502ee729e9a5ac097723806ea7fa13517\", repo tag \"registry.k8s.io/pause:3.8\", repo digest \"registry.k8s.io/pause@sha256:9001185023633d17a2f98ff69b6ff2615b8ea02a825adffa40422f51dfdcde9d\", size \"311286\" in 1.160035093s" Jan 29 11:31:44.821174 containerd[1495]: time="2025-01-29T11:31:44.821130352Z" level=info msg="Pulled image \"registry.k8s.io/pause:3.8\" with image id \"sha256:4873874c08efc72e9729683a83ffbb7502ee729e9a5ac097723806ea7fa13517\", repo tag \"registry.k8s.io/pause:3.8\", repo digest 
\"registry.k8s.io/pause@sha256:9001185023633d17a2f98ff69b6ff2615b8ea02a825adffa40422f51dfdcde9d\", size \"311286\" in 1.164372397s" Jan 29 11:31:44.953897 containerd[1495]: time="2025-01-29T11:31:44.953785586Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Jan 29 11:31:44.953897 containerd[1495]: time="2025-01-29T11:31:44.953838115Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Jan 29 11:31:44.953897 containerd[1495]: time="2025-01-29T11:31:44.953761079Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Jan 29 11:31:44.954108 containerd[1495]: time="2025-01-29T11:31:44.953859126Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Jan 29 11:31:44.956436 containerd[1495]: time="2025-01-29T11:31:44.955119283Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Jan 29 11:31:44.956436 containerd[1495]: time="2025-01-29T11:31:44.953841262Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Jan 29 11:31:44.956436 containerd[1495]: time="2025-01-29T11:31:44.955246254Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Jan 29 11:31:44.956436 containerd[1495]: time="2025-01-29T11:31:44.955343519Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Jan 29 11:31:44.956436 containerd[1495]: time="2025-01-29T11:31:44.955426858Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." 
runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Jan 29 11:31:44.956436 containerd[1495]: time="2025-01-29T11:31:44.955583004Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Jan 29 11:31:44.956820 containerd[1495]: time="2025-01-29T11:31:44.956754062Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Jan 29 11:31:44.957054 containerd[1495]: time="2025-01-29T11:31:44.956955135Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Jan 29 11:31:44.983640 systemd[1]: Started cri-containerd-77a02be521947d47a2badfe5cb3a17770d730233fe572497d8acc6e315c321be.scope - libcontainer container 77a02be521947d47a2badfe5cb3a17770d730233fe572497d8acc6e315c321be. Jan 29 11:31:44.985323 systemd[1]: Started cri-containerd-c3ea388eba60c90d4d64ca251ccf5ca489a54c0e4046972bfa6bb41ed56770b8.scope - libcontainer container c3ea388eba60c90d4d64ca251ccf5ca489a54c0e4046972bfa6bb41ed56770b8. Jan 29 11:31:44.990127 systemd[1]: Started cri-containerd-21e57668459f490ddd57f2ea2c4d2d94d9a62255c3895d050cd8948841fad054.scope - libcontainer container 21e57668459f490ddd57f2ea2c4d2d94d9a62255c3895d050cd8948841fad054. 
Jan 29 11:31:45.032902 containerd[1495]: time="2025-01-29T11:31:45.032854374Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-controller-manager-localhost,Uid:9b8b5886141f9311660bb6b224a0f76c,Namespace:kube-system,Attempt:0,} returns sandbox id \"77a02be521947d47a2badfe5cb3a17770d730233fe572497d8acc6e315c321be\"" Jan 29 11:31:45.033300 containerd[1495]: time="2025-01-29T11:31:45.033269724Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-apiserver-localhost,Uid:4bc31ce65a6d9df3be39c5a83c947d44,Namespace:kube-system,Attempt:0,} returns sandbox id \"c3ea388eba60c90d4d64ca251ccf5ca489a54c0e4046972bfa6bb41ed56770b8\"" Jan 29 11:31:45.038836 kubelet[2322]: E0129 11:31:45.038806 2322 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jan 29 11:31:45.038912 kubelet[2322]: E0129 11:31:45.038869 2322 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jan 29 11:31:45.040651 containerd[1495]: time="2025-01-29T11:31:45.040602258Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-scheduler-localhost,Uid:4b186e12ac9f083392bb0d1970b49be4,Namespace:kube-system,Attempt:0,} returns sandbox id \"21e57668459f490ddd57f2ea2c4d2d94d9a62255c3895d050cd8948841fad054\"" Jan 29 11:31:45.042283 kubelet[2322]: E0129 11:31:45.042256 2322 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jan 29 11:31:45.043121 containerd[1495]: time="2025-01-29T11:31:45.043060490Z" level=info msg="CreateContainer within sandbox \"77a02be521947d47a2badfe5cb3a17770d730233fe572497d8acc6e315c321be\" for container &ContainerMetadata{Name:kube-controller-manager,Attempt:0,}" Jan 29 11:31:45.043226 containerd[1495]: 
time="2025-01-29T11:31:45.043187030Z" level=info msg="CreateContainer within sandbox \"c3ea388eba60c90d4d64ca251ccf5ca489a54c0e4046972bfa6bb41ed56770b8\" for container &ContainerMetadata{Name:kube-apiserver,Attempt:0,}" Jan 29 11:31:45.044453 containerd[1495]: time="2025-01-29T11:31:45.044421646Z" level=info msg="CreateContainer within sandbox \"21e57668459f490ddd57f2ea2c4d2d94d9a62255c3895d050cd8948841fad054\" for container &ContainerMetadata{Name:kube-scheduler,Attempt:0,}" Jan 29 11:31:45.065294 containerd[1495]: time="2025-01-29T11:31:45.065158904Z" level=info msg="CreateContainer within sandbox \"21e57668459f490ddd57f2ea2c4d2d94d9a62255c3895d050cd8948841fad054\" for &ContainerMetadata{Name:kube-scheduler,Attempt:0,} returns container id \"111340fc7517436e70b4d8e129454a24b313e3a20106d2bba62ac557e7d5a4bc\"" Jan 29 11:31:45.066502 containerd[1495]: time="2025-01-29T11:31:45.066468924Z" level=info msg="StartContainer for \"111340fc7517436e70b4d8e129454a24b313e3a20106d2bba62ac557e7d5a4bc\"" Jan 29 11:31:45.078313 containerd[1495]: time="2025-01-29T11:31:45.078196561Z" level=info msg="CreateContainer within sandbox \"77a02be521947d47a2badfe5cb3a17770d730233fe572497d8acc6e315c321be\" for &ContainerMetadata{Name:kube-controller-manager,Attempt:0,} returns container id \"d333bb351da09a5a8d187a9cb0b127b6775d24577ffab88453235302772da97b\"" Jan 29 11:31:45.078793 containerd[1495]: time="2025-01-29T11:31:45.078769409Z" level=info msg="StartContainer for \"d333bb351da09a5a8d187a9cb0b127b6775d24577ffab88453235302772da97b\"" Jan 29 11:31:45.080560 containerd[1495]: time="2025-01-29T11:31:45.080534654Z" level=info msg="CreateContainer within sandbox \"c3ea388eba60c90d4d64ca251ccf5ca489a54c0e4046972bfa6bb41ed56770b8\" for &ContainerMetadata{Name:kube-apiserver,Attempt:0,} returns container id \"0934aa2242f482945ac6aa899e870d9a37090bb0b27c1fe646fbfed542faefdd\"" Jan 29 11:31:45.082703 containerd[1495]: time="2025-01-29T11:31:45.082661937Z" level=info msg="StartContainer for 
\"0934aa2242f482945ac6aa899e870d9a37090bb0b27c1fe646fbfed542faefdd\"" Jan 29 11:31:45.095597 systemd[1]: Started cri-containerd-111340fc7517436e70b4d8e129454a24b313e3a20106d2bba62ac557e7d5a4bc.scope - libcontainer container 111340fc7517436e70b4d8e129454a24b313e3a20106d2bba62ac557e7d5a4bc. Jan 29 11:31:45.113559 systemd[1]: Started cri-containerd-0934aa2242f482945ac6aa899e870d9a37090bb0b27c1fe646fbfed542faefdd.scope - libcontainer container 0934aa2242f482945ac6aa899e870d9a37090bb0b27c1fe646fbfed542faefdd. Jan 29 11:31:45.116603 systemd[1]: Started cri-containerd-d333bb351da09a5a8d187a9cb0b127b6775d24577ffab88453235302772da97b.scope - libcontainer container d333bb351da09a5a8d187a9cb0b127b6775d24577ffab88453235302772da97b. Jan 29 11:31:45.156983 containerd[1495]: time="2025-01-29T11:31:45.156807550Z" level=info msg="StartContainer for \"111340fc7517436e70b4d8e129454a24b313e3a20106d2bba62ac557e7d5a4bc\" returns successfully" Jan 29 11:31:45.165908 containerd[1495]: time="2025-01-29T11:31:45.165425437Z" level=info msg="StartContainer for \"d333bb351da09a5a8d187a9cb0b127b6775d24577ffab88453235302772da97b\" returns successfully" Jan 29 11:31:45.173906 containerd[1495]: time="2025-01-29T11:31:45.173865766Z" level=info msg="StartContainer for \"0934aa2242f482945ac6aa899e870d9a37090bb0b27c1fe646fbfed542faefdd\" returns successfully" Jan 29 11:31:45.648718 kubelet[2322]: E0129 11:31:45.648628 2322 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jan 29 11:31:45.650082 kubelet[2322]: E0129 11:31:45.649887 2322 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jan 29 11:31:45.651741 kubelet[2322]: E0129 11:31:45.651705 2322 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, 
the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jan 29 11:31:45.731769 kubelet[2322]: I0129 11:31:45.731457 2322 kubelet_node_status.go:73] "Attempting to register node" node="localhost" Jan 29 11:31:46.353041 kubelet[2322]: E0129 11:31:46.352997 2322 nodelease.go:49] "Failed to get node when trying to set owner ref to the node lease" err="nodes \"localhost\" not found" node="localhost" Jan 29 11:31:46.432078 kubelet[2322]: I0129 11:31:46.432018 2322 kubelet_node_status.go:76] "Successfully registered node" node="localhost" Jan 29 11:31:46.614644 kubelet[2322]: I0129 11:31:46.614534 2322 apiserver.go:52] "Watching apiserver" Jan 29 11:31:46.622582 kubelet[2322]: I0129 11:31:46.622559 2322 desired_state_of_world_populator.go:157] "Finished populating initial desired state of world" Jan 29 11:31:46.657150 kubelet[2322]: E0129 11:31:46.657119 2322 kubelet.go:1928] "Failed creating a mirror pod for" err="pods \"kube-apiserver-localhost\" is forbidden: no PriorityClass with name system-node-critical was found" pod="kube-system/kube-apiserver-localhost" Jan 29 11:31:46.657587 kubelet[2322]: E0129 11:31:46.657573 2322 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jan 29 11:31:46.944072 update_engine[1475]: I20250129 11:31:46.943997 1475 update_attempter.cc:509] Updating boot flags... 
Jan 29 11:31:46.977451 kernel: BTRFS warning: duplicate device /dev/vda3 devid 1 generation 38 scanned by (udev-worker) (2602) Jan 29 11:31:47.015319 kernel: BTRFS warning: duplicate device /dev/vda3 devid 1 generation 38 scanned by (udev-worker) (2604) Jan 29 11:31:47.046524 kernel: BTRFS warning: duplicate device /dev/vda3 devid 1 generation 38 scanned by (udev-worker) (2604) Jan 29 11:31:47.986690 kubelet[2322]: E0129 11:31:47.986650 2322 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jan 29 11:31:48.582946 systemd[1]: Reloading requested from client PID 2613 ('systemctl') (unit session-7.scope)... Jan 29 11:31:48.582964 systemd[1]: Reloading... Jan 29 11:31:48.652456 zram_generator::config[2653]: No configuration found. Jan 29 11:31:48.655432 kubelet[2322]: E0129 11:31:48.655351 2322 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jan 29 11:31:48.801680 systemd[1]: /usr/lib/systemd/system/docker.socket:6: ListenStream= references a path below legacy directory /var/run/, updating /var/run/docker.sock → /run/docker.sock; please update the unit file accordingly. Jan 29 11:31:48.899282 systemd[1]: Reloading finished in 315 ms. Jan 29 11:31:48.945161 systemd[1]: Stopping kubelet.service - kubelet: The Kubernetes Node Agent... 
Jan 29 11:31:48.945386 kubelet[2322]: E0129 11:31:48.945086 2322 event.go:319] "Unable to write event (broadcaster is shut down)" event="&Event{ObjectMeta:{localhost.181f267db99c2fe8 default 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Node,Namespace:,Name:localhost,UID:localhost,APIVersion:,ResourceVersion:,FieldPath:,},Reason:Starting,Message:Starting kubelet.,Source:EventSource{Component:kubelet,Host:localhost,},FirstTimestamp:2025-01-29 11:31:42.615089128 +0000 UTC m=+0.460541033,LastTimestamp:2025-01-29 11:31:42.615089128 +0000 UTC m=+0.460541033,Count:1,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:localhost,}" Jan 29 11:31:48.945677 kubelet[2322]: I0129 11:31:48.945371 2322 dynamic_cafile_content.go:171] "Shutting down controller" name="client-ca-bundle::/etc/kubernetes/pki/ca.crt" Jan 29 11:31:48.962127 systemd[1]: kubelet.service: Deactivated successfully. Jan 29 11:31:48.962513 systemd[1]: Stopped kubelet.service - kubelet: The Kubernetes Node Agent. Jan 29 11:31:48.975753 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent... Jan 29 11:31:49.170253 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent. Jan 29 11:31:49.175239 (kubelet)[2697]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS Jan 29 11:31:49.227152 kubelet[2697]: Flag --container-runtime-endpoint has been deprecated, This parameter should be set via the config file specified by the Kubelet's --config flag. See https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/ for more information. Jan 29 11:31:49.227152 kubelet[2697]: Flag --pod-infra-container-image has been deprecated, will be removed in a future release. Image garbage collector will get sandbox image information from CRI. 
Jan 29 11:31:49.227152 kubelet[2697]: Flag --volume-plugin-dir has been deprecated, This parameter should be set via the config file specified by the Kubelet's --config flag. See https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/ for more information. Jan 29 11:31:49.227656 kubelet[2697]: I0129 11:31:49.227179 2697 server.go:205] "--pod-infra-container-image will not be pruned by the image garbage collector in kubelet and should also be set in the remote runtime" Jan 29 11:31:49.231648 kubelet[2697]: I0129 11:31:49.231619 2697 server.go:484] "Kubelet version" kubeletVersion="v1.30.1" Jan 29 11:31:49.231648 kubelet[2697]: I0129 11:31:49.231637 2697 server.go:486] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK="" Jan 29 11:31:49.233430 kubelet[2697]: I0129 11:31:49.233399 2697 server.go:927] "Client rotation is on, will bootstrap in background" Jan 29 11:31:49.234758 kubelet[2697]: I0129 11:31:49.234740 2697 certificate_store.go:130] Loading cert/key pair from "/var/lib/kubelet/pki/kubelet-client-current.pem". Jan 29 11:31:49.235877 kubelet[2697]: I0129 11:31:49.235788 2697 dynamic_cafile_content.go:157] "Starting controller" name="client-ca-bundle::/etc/kubernetes/pki/ca.crt" Jan 29 11:31:49.244492 kubelet[2697]: I0129 11:31:49.244464 2697 server.go:742] "--cgroups-per-qos enabled, but --cgroup-root was not specified. 
defaulting to /" Jan 29 11:31:49.244780 kubelet[2697]: I0129 11:31:49.244745 2697 container_manager_linux.go:265] "Container manager verified user specified cgroup-root exists" cgroupRoot=[] Jan 29 11:31:49.245006 kubelet[2697]: I0129 11:31:49.244775 2697 container_manager_linux.go:270] "Creating Container Manager object based on Node Config" nodeConfig={"NodeName":"localhost","RuntimeCgroupsName":"","SystemCgroupsName":"","KubeletCgroupsName":"","KubeletOOMScoreAdj":-999,"ContainerRuntime":"","CgroupsPerQOS":true,"CgroupRoot":"/","CgroupDriver":"systemd","KubeletRootDir":"/var/lib/kubelet","ProtectKernelDefaults":false,"KubeReservedCgroupName":"","SystemReservedCgroupName":"","ReservedSystemCPUs":{},"EnforceNodeAllocatable":{"pods":{}},"KubeReserved":null,"SystemReserved":null,"HardEvictionThresholds":[{"Signal":"memory.available","Operator":"LessThan","Value":{"Quantity":"100Mi","Percentage":0},"GracePeriod":0,"MinReclaim":null},{"Signal":"nodefs.available","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.1},"GracePeriod":0,"MinReclaim":null},{"Signal":"nodefs.inodesFree","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.05},"GracePeriod":0,"MinReclaim":null},{"Signal":"imagefs.available","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.15},"GracePeriod":0,"MinReclaim":null},{"Signal":"imagefs.inodesFree","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.05},"GracePeriod":0,"MinReclaim":null}],"QOSReserved":{},"CPUManagerPolicy":"none","CPUManagerPolicyOptions":null,"TopologyManagerScope":"container","CPUManagerReconcilePeriod":10000000000,"ExperimentalMemoryManagerPolicy":"None","ExperimentalMemoryManagerReservedMemory":null,"PodPidsLimit":-1,"EnforceCPULimits":true,"CPUCFSQuotaPeriod":100000000,"TopologyManagerPolicy":"none","TopologyManagerPolicyOptions":null} Jan 29 11:31:49.245170 kubelet[2697]: I0129 11:31:49.245030 2697 topology_manager.go:138] "Creating topology manager with none policy" Jan 29 
11:31:49.245170 kubelet[2697]: I0129 11:31:49.245043 2697 container_manager_linux.go:301] "Creating device plugin manager" Jan 29 11:31:49.245170 kubelet[2697]: I0129 11:31:49.245106 2697 state_mem.go:36] "Initialized new in-memory state store" Jan 29 11:31:49.245251 kubelet[2697]: I0129 11:31:49.245235 2697 kubelet.go:400] "Attempting to sync node with API server" Jan 29 11:31:49.245277 kubelet[2697]: I0129 11:31:49.245258 2697 kubelet.go:301] "Adding static pod path" path="/etc/kubernetes/manifests" Jan 29 11:31:49.245310 kubelet[2697]: I0129 11:31:49.245297 2697 kubelet.go:312] "Adding apiserver pod source" Jan 29 11:31:49.245353 kubelet[2697]: I0129 11:31:49.245330 2697 apiserver.go:42] "Waiting for node sync before watching apiserver pods" Jan 29 11:31:49.246047 kubelet[2697]: I0129 11:31:49.246033 2697 kuberuntime_manager.go:261] "Container runtime initialized" containerRuntime="containerd" version="v1.7.23" apiVersion="v1" Jan 29 11:31:49.247911 kubelet[2697]: I0129 11:31:49.246517 2697 kubelet.go:815] "Not starting ClusterTrustBundle informer because we are in static kubelet mode" Jan 29 11:31:49.247911 kubelet[2697]: I0129 11:31:49.246933 2697 server.go:1264] "Started kubelet" Jan 29 11:31:49.247911 kubelet[2697]: I0129 11:31:49.247193 2697 ratelimit.go:55] "Setting rate limiting for endpoint" service="podresources" qps=100 burstTokens=10 Jan 29 11:31:49.247911 kubelet[2697]: I0129 11:31:49.247518 2697 server.go:227] "Starting to serve the podresources API" endpoint="unix:/var/lib/kubelet/pod-resources/kubelet.sock" Jan 29 11:31:49.247911 kubelet[2697]: I0129 11:31:49.247016 2697 server.go:163] "Starting to listen" address="0.0.0.0" port=10250 Jan 29 11:31:49.250968 kubelet[2697]: I0129 11:31:49.250953 2697 fs_resource_analyzer.go:67] "Starting FS ResourceAnalyzer" Jan 29 11:31:49.253528 kubelet[2697]: I0129 11:31:49.253501 2697 server.go:455] "Adding debug handlers to kubelet server" Jan 29 11:31:49.258796 kubelet[2697]: I0129 11:31:49.258767 2697 
volume_manager.go:291] "Starting Kubelet Volume Manager" Jan 29 11:31:49.259295 kubelet[2697]: I0129 11:31:49.259264 2697 desired_state_of_world_populator.go:149] "Desired state populator starts to run" Jan 29 11:31:49.259478 kubelet[2697]: I0129 11:31:49.259462 2697 reconciler.go:26] "Reconciler: start to sync state" Jan 29 11:31:49.262192 kubelet[2697]: I0129 11:31:49.262173 2697 factory.go:221] Registration of the systemd container factory successfully Jan 29 11:31:49.262275 kubelet[2697]: I0129 11:31:49.262257 2697 factory.go:219] Registration of the crio container factory failed: Get "http://%2Fvar%2Frun%2Fcrio%2Fcrio.sock/info": dial unix /var/run/crio/crio.sock: connect: no such file or directory Jan 29 11:31:49.264587 kubelet[2697]: E0129 11:31:49.264522 2697 kubelet.go:1467] "Image garbage collection failed once. Stats initialization may not have completed yet" err="invalid capacity 0 on image filesystem" Jan 29 11:31:49.264777 kubelet[2697]: I0129 11:31:49.264725 2697 factory.go:221] Registration of the containerd container factory successfully Jan 29 11:31:49.265676 kubelet[2697]: I0129 11:31:49.265589 2697 kubelet_network_linux.go:50] "Initialized iptables rules." protocol="IPv4" Jan 29 11:31:49.266723 kubelet[2697]: I0129 11:31:49.266703 2697 kubelet_network_linux.go:50] "Initialized iptables rules." 
protocol="IPv6" Jan 29 11:31:49.266760 kubelet[2697]: I0129 11:31:49.266731 2697 status_manager.go:217] "Starting to sync pod status with apiserver" Jan 29 11:31:49.266760 kubelet[2697]: I0129 11:31:49.266749 2697 kubelet.go:2337] "Starting kubelet main sync loop" Jan 29 11:31:49.266811 kubelet[2697]: E0129 11:31:49.266787 2697 kubelet.go:2361] "Skipping pod synchronization" err="[container runtime status check may not have completed yet, PLEG is not healthy: pleg has yet to be successful]" Jan 29 11:31:49.302886 kubelet[2697]: I0129 11:31:49.302848 2697 cpu_manager.go:214] "Starting CPU manager" policy="none" Jan 29 11:31:49.302886 kubelet[2697]: I0129 11:31:49.302872 2697 cpu_manager.go:215] "Reconciling" reconcilePeriod="10s" Jan 29 11:31:49.302886 kubelet[2697]: I0129 11:31:49.302898 2697 state_mem.go:36] "Initialized new in-memory state store" Jan 29 11:31:49.303117 kubelet[2697]: I0129 11:31:49.303091 2697 state_mem.go:88] "Updated default CPUSet" cpuSet="" Jan 29 11:31:49.303149 kubelet[2697]: I0129 11:31:49.303112 2697 state_mem.go:96] "Updated CPUSet assignments" assignments={} Jan 29 11:31:49.303149 kubelet[2697]: I0129 11:31:49.303135 2697 policy_none.go:49] "None policy: Start" Jan 29 11:31:49.303895 kubelet[2697]: I0129 11:31:49.303872 2697 memory_manager.go:170] "Starting memorymanager" policy="None" Jan 29 11:31:49.303935 kubelet[2697]: I0129 11:31:49.303906 2697 state_mem.go:35] "Initializing new in-memory state store" Jan 29 11:31:49.304154 kubelet[2697]: I0129 11:31:49.304111 2697 state_mem.go:75] "Updated machine memory state" Jan 29 11:31:49.308714 kubelet[2697]: I0129 11:31:49.308691 2697 manager.go:479] "Failed to read data from checkpoint" checkpoint="kubelet_internal_checkpoint" err="checkpoint is not found" Jan 29 11:31:49.308913 kubelet[2697]: I0129 11:31:49.308876 2697 container_log_manager.go:186] "Initializing container log rotate workers" workers=1 monitorPeriod="10s" Jan 29 11:31:49.309055 kubelet[2697]: I0129 11:31:49.308986 2697 
plugin_manager.go:118] "Starting Kubelet Plugin Manager" Jan 29 11:31:49.364623 kubelet[2697]: I0129 11:31:49.364585 2697 kubelet_node_status.go:73] "Attempting to register node" node="localhost" Jan 29 11:31:49.367779 kubelet[2697]: I0129 11:31:49.367742 2697 topology_manager.go:215] "Topology Admit Handler" podUID="4bc31ce65a6d9df3be39c5a83c947d44" podNamespace="kube-system" podName="kube-apiserver-localhost" Jan 29 11:31:49.367904 kubelet[2697]: I0129 11:31:49.367883 2697 topology_manager.go:215] "Topology Admit Handler" podUID="9b8b5886141f9311660bb6b224a0f76c" podNamespace="kube-system" podName="kube-controller-manager-localhost" Jan 29 11:31:49.367956 kubelet[2697]: I0129 11:31:49.367943 2697 topology_manager.go:215] "Topology Admit Handler" podUID="4b186e12ac9f083392bb0d1970b49be4" podNamespace="kube-system" podName="kube-scheduler-localhost" Jan 29 11:31:49.408343 kubelet[2697]: E0129 11:31:49.408304 2697 kubelet.go:1928] "Failed creating a mirror pod for" err="pods \"kube-apiserver-localhost\" already exists" pod="kube-system/kube-apiserver-localhost" Jan 29 11:31:49.409619 kubelet[2697]: I0129 11:31:49.409598 2697 kubelet_node_status.go:112] "Node was previously registered" node="localhost" Jan 29 11:31:49.409702 kubelet[2697]: I0129 11:31:49.409687 2697 kubelet_node_status.go:76] "Successfully registered node" node="localhost" Jan 29 11:31:49.561132 kubelet[2697]: I0129 11:31:49.560977 2697 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"flexvolume-dir\" (UniqueName: \"kubernetes.io/host-path/9b8b5886141f9311660bb6b224a0f76c-flexvolume-dir\") pod \"kube-controller-manager-localhost\" (UID: \"9b8b5886141f9311660bb6b224a0f76c\") " pod="kube-system/kube-controller-manager-localhost" Jan 29 11:31:49.561132 kubelet[2697]: I0129 11:31:49.561022 2697 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"k8s-certs\" (UniqueName: 
\"kubernetes.io/host-path/9b8b5886141f9311660bb6b224a0f76c-k8s-certs\") pod \"kube-controller-manager-localhost\" (UID: \"9b8b5886141f9311660bb6b224a0f76c\") " pod="kube-system/kube-controller-manager-localhost" Jan 29 11:31:49.561132 kubelet[2697]: I0129 11:31:49.561048 2697 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"usr-share-ca-certificates\" (UniqueName: \"kubernetes.io/host-path/9b8b5886141f9311660bb6b224a0f76c-usr-share-ca-certificates\") pod \"kube-controller-manager-localhost\" (UID: \"9b8b5886141f9311660bb6b224a0f76c\") " pod="kube-system/kube-controller-manager-localhost" Jan 29 11:31:49.561132 kubelet[2697]: I0129 11:31:49.561075 2697 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"usr-share-ca-certificates\" (UniqueName: \"kubernetes.io/host-path/4bc31ce65a6d9df3be39c5a83c947d44-usr-share-ca-certificates\") pod \"kube-apiserver-localhost\" (UID: \"4bc31ce65a6d9df3be39c5a83c947d44\") " pod="kube-system/kube-apiserver-localhost" Jan 29 11:31:49.561132 kubelet[2697]: I0129 11:31:49.561094 2697 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ca-certs\" (UniqueName: \"kubernetes.io/host-path/9b8b5886141f9311660bb6b224a0f76c-ca-certs\") pod \"kube-controller-manager-localhost\" (UID: \"9b8b5886141f9311660bb6b224a0f76c\") " pod="kube-system/kube-controller-manager-localhost" Jan 29 11:31:49.561509 kubelet[2697]: I0129 11:31:49.561114 2697 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kubeconfig\" (UniqueName: \"kubernetes.io/host-path/9b8b5886141f9311660bb6b224a0f76c-kubeconfig\") pod \"kube-controller-manager-localhost\" (UID: \"9b8b5886141f9311660bb6b224a0f76c\") " pod="kube-system/kube-controller-manager-localhost" Jan 29 11:31:49.561509 kubelet[2697]: I0129 11:31:49.561132 2697 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume 
started for volume \"kubeconfig\" (UniqueName: \"kubernetes.io/host-path/4b186e12ac9f083392bb0d1970b49be4-kubeconfig\") pod \"kube-scheduler-localhost\" (UID: \"4b186e12ac9f083392bb0d1970b49be4\") " pod="kube-system/kube-scheduler-localhost" Jan 29 11:31:49.561509 kubelet[2697]: I0129 11:31:49.561149 2697 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ca-certs\" (UniqueName: \"kubernetes.io/host-path/4bc31ce65a6d9df3be39c5a83c947d44-ca-certs\") pod \"kube-apiserver-localhost\" (UID: \"4bc31ce65a6d9df3be39c5a83c947d44\") " pod="kube-system/kube-apiserver-localhost" Jan 29 11:31:49.561509 kubelet[2697]: I0129 11:31:49.561165 2697 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"k8s-certs\" (UniqueName: \"kubernetes.io/host-path/4bc31ce65a6d9df3be39c5a83c947d44-k8s-certs\") pod \"kube-apiserver-localhost\" (UID: \"4bc31ce65a6d9df3be39c5a83c947d44\") " pod="kube-system/kube-apiserver-localhost" Jan 29 11:31:49.708033 kubelet[2697]: E0129 11:31:49.707992 2697 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jan 29 11:31:49.708204 kubelet[2697]: E0129 11:31:49.707992 2697 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jan 29 11:31:49.709687 kubelet[2697]: E0129 11:31:49.709666 2697 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jan 29 11:31:50.246145 kubelet[2697]: I0129 11:31:50.246111 2697 apiserver.go:52] "Watching apiserver" Jan 29 11:31:50.260010 kubelet[2697]: I0129 11:31:50.259977 2697 desired_state_of_world_populator.go:157] "Finished populating initial desired state of world" Jan 29 11:31:50.283662 
kubelet[2697]: E0129 11:31:50.283638 2697 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jan 29 11:31:50.283905 kubelet[2697]: E0129 11:31:50.283883 2697 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jan 29 11:31:50.420497 kubelet[2697]: E0129 11:31:50.419949 2697 kubelet.go:1928] "Failed creating a mirror pod for" err="pods \"kube-apiserver-localhost\" already exists" pod="kube-system/kube-apiserver-localhost" Jan 29 11:31:50.420497 kubelet[2697]: E0129 11:31:50.420341 2697 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jan 29 11:31:50.432122 kubelet[2697]: I0129 11:31:50.432055 2697 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/kube-controller-manager-localhost" podStartSLOduration=1.432023026 podStartE2EDuration="1.432023026s" podCreationTimestamp="2025-01-29 11:31:49 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-01-29 11:31:50.420095272 +0000 UTC m=+1.240068045" watchObservedRunningTime="2025-01-29 11:31:50.432023026 +0000 UTC m=+1.251995789" Jan 29 11:31:50.432340 kubelet[2697]: I0129 11:31:50.432168 2697 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/kube-scheduler-localhost" podStartSLOduration=1.432164705 podStartE2EDuration="1.432164705s" podCreationTimestamp="2025-01-29 11:31:49 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-01-29 11:31:50.431621546 +0000 UTC m=+1.251594309" watchObservedRunningTime="2025-01-29 
11:31:50.432164705 +0000 UTC m=+1.252137468" Jan 29 11:31:50.452401 kubelet[2697]: I0129 11:31:50.451890 2697 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/kube-apiserver-localhost" podStartSLOduration=3.4518672 podStartE2EDuration="3.4518672s" podCreationTimestamp="2025-01-29 11:31:47 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-01-29 11:31:50.442990042 +0000 UTC m=+1.262962805" watchObservedRunningTime="2025-01-29 11:31:50.4518672 +0000 UTC m=+1.271839963" Jan 29 11:31:51.285013 kubelet[2697]: E0129 11:31:51.284980 2697 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jan 29 11:31:52.238569 kubelet[2697]: E0129 11:31:52.238535 2697 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jan 29 11:31:52.286619 kubelet[2697]: E0129 11:31:52.286577 2697 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jan 29 11:31:53.695729 sudo[1672]: pam_unix(sudo:session): session closed for user root Jan 29 11:31:53.697027 sshd[1671]: Connection closed by 10.0.0.1 port 36716 Jan 29 11:31:53.697505 sshd-session[1669]: pam_unix(sshd:session): session closed for user core Jan 29 11:31:53.701452 systemd[1]: sshd@6-10.0.0.69:22-10.0.0.1:36716.service: Deactivated successfully. Jan 29 11:31:53.703348 systemd[1]: session-7.scope: Deactivated successfully. Jan 29 11:31:53.703577 systemd[1]: session-7.scope: Consumed 5.192s CPU time, 196.2M memory peak, 0B memory swap peak. Jan 29 11:31:53.704038 systemd-logind[1471]: Session 7 logged out. Waiting for processes to exit. 
Jan 29 11:31:53.705000 systemd-logind[1471]: Removed session 7.
Jan 29 11:31:54.573967 kubelet[2697]: E0129 11:31:54.573930 2697 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Jan 29 11:32:01.956941 kubelet[2697]: E0129 11:32:01.956889 2697 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Jan 29 11:32:02.243292 kubelet[2697]: E0129 11:32:02.242666 2697 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Jan 29 11:32:04.211717 kubelet[2697]: I0129 11:32:04.211671 2697 kuberuntime_manager.go:1523] "Updating runtime config through cri with podcidr" CIDR="192.168.0.0/24"
Jan 29 11:32:04.212152 kubelet[2697]: I0129 11:32:04.212119 2697 kubelet_network.go:61] "Updating Pod CIDR" originalPodCIDR="" newPodCIDR="192.168.0.0/24"
Jan 29 11:32:04.212188 containerd[1495]: time="2025-01-29T11:32:04.211967898Z" level=info msg="No cni config template is specified, wait for other system components to drop the config."
Jan 29 11:32:04.577733 kubelet[2697]: E0129 11:32:04.577608 2697 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Jan 29 11:32:05.375100 kubelet[2697]: I0129 11:32:05.374021 2697 topology_manager.go:215] "Topology Admit Handler" podUID="b5c4939a-9a98-442b-9a69-2fc1fb9186ce" podNamespace="kube-system" podName="kube-proxy-c8kbx"
Jan 29 11:32:05.380601 systemd[1]: Created slice kubepods-besteffort-podb5c4939a_9a98_442b_9a69_2fc1fb9186ce.slice - libcontainer container kubepods-besteffort-podb5c4939a_9a98_442b_9a69_2fc1fb9186ce.slice.
Jan 29 11:32:05.468974 kubelet[2697]: I0129 11:32:05.468913 2697 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/b5c4939a-9a98-442b-9a69-2fc1fb9186ce-lib-modules\") pod \"kube-proxy-c8kbx\" (UID: \"b5c4939a-9a98-442b-9a69-2fc1fb9186ce\") " pod="kube-system/kube-proxy-c8kbx"
Jan 29 11:32:05.468974 kubelet[2697]: I0129 11:32:05.468976 2697 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-8xxjh\" (UniqueName: \"kubernetes.io/projected/b5c4939a-9a98-442b-9a69-2fc1fb9186ce-kube-api-access-8xxjh\") pod \"kube-proxy-c8kbx\" (UID: \"b5c4939a-9a98-442b-9a69-2fc1fb9186ce\") " pod="kube-system/kube-proxy-c8kbx"
Jan 29 11:32:05.469222 kubelet[2697]: I0129 11:32:05.469007 2697 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"xtables-lock\" (UniqueName: \"kubernetes.io/host-path/b5c4939a-9a98-442b-9a69-2fc1fb9186ce-xtables-lock\") pod \"kube-proxy-c8kbx\" (UID: \"b5c4939a-9a98-442b-9a69-2fc1fb9186ce\") " pod="kube-system/kube-proxy-c8kbx"
Jan 29 11:32:05.469222 kubelet[2697]: I0129 11:32:05.469031 2697 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-proxy\" (UniqueName: \"kubernetes.io/configmap/b5c4939a-9a98-442b-9a69-2fc1fb9186ce-kube-proxy\") pod \"kube-proxy-c8kbx\" (UID: \"b5c4939a-9a98-442b-9a69-2fc1fb9186ce\") " pod="kube-system/kube-proxy-c8kbx"
Jan 29 11:32:05.650877 kubelet[2697]: I0129 11:32:05.650722 2697 topology_manager.go:215] "Topology Admit Handler" podUID="8c38a999-de3a-44e4-af4a-c882743e405d" podNamespace="tigera-operator" podName="tigera-operator-7bc55997bb-tpgpl"
Jan 29 11:32:05.657916 systemd[1]: Created slice kubepods-besteffort-pod8c38a999_de3a_44e4_af4a_c882743e405d.slice - libcontainer container kubepods-besteffort-pod8c38a999_de3a_44e4_af4a_c882743e405d.slice.
Jan 29 11:32:05.670847 kubelet[2697]: I0129 11:32:05.670807 2697 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"var-lib-calico\" (UniqueName: \"kubernetes.io/host-path/8c38a999-de3a-44e4-af4a-c882743e405d-var-lib-calico\") pod \"tigera-operator-7bc55997bb-tpgpl\" (UID: \"8c38a999-de3a-44e4-af4a-c882743e405d\") " pod="tigera-operator/tigera-operator-7bc55997bb-tpgpl"
Jan 29 11:32:05.670847 kubelet[2697]: I0129 11:32:05.670849 2697 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-p8pcx\" (UniqueName: \"kubernetes.io/projected/8c38a999-de3a-44e4-af4a-c882743e405d-kube-api-access-p8pcx\") pod \"tigera-operator-7bc55997bb-tpgpl\" (UID: \"8c38a999-de3a-44e4-af4a-c882743e405d\") " pod="tigera-operator/tigera-operator-7bc55997bb-tpgpl"
Jan 29 11:32:05.961109 containerd[1495]: time="2025-01-29T11:32:05.960812375Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:tigera-operator-7bc55997bb-tpgpl,Uid:8c38a999-de3a-44e4-af4a-c882743e405d,Namespace:tigera-operator,Attempt:0,}"
Jan 29 11:32:05.990452 kubelet[2697]: E0129 11:32:05.990386 2697 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Jan 29 11:32:05.992137 containerd[1495]: time="2025-01-29T11:32:05.991788810Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-proxy-c8kbx,Uid:b5c4939a-9a98-442b-9a69-2fc1fb9186ce,Namespace:kube-system,Attempt:0,}"
Jan 29 11:32:06.005693 containerd[1495]: time="2025-01-29T11:32:06.005550531Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1
Jan 29 11:32:06.005693 containerd[1495]: time="2025-01-29T11:32:06.005642534Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1
Jan 29 11:32:06.005693 containerd[1495]: time="2025-01-29T11:32:06.005668803Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
Jan 29 11:32:06.005891 containerd[1495]: time="2025-01-29T11:32:06.005819817Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
Jan 29 11:32:06.033597 systemd[1]: Started cri-containerd-41f774ea0c5420af9bfc46d606228840e1c122c8e5c308a2007b2eeebe76b286.scope - libcontainer container 41f774ea0c5420af9bfc46d606228840e1c122c8e5c308a2007b2eeebe76b286.
Jan 29 11:32:06.038918 containerd[1495]: time="2025-01-29T11:32:06.038467584Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1
Jan 29 11:32:06.038918 containerd[1495]: time="2025-01-29T11:32:06.038595585Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1
Jan 29 11:32:06.038918 containerd[1495]: time="2025-01-29T11:32:06.038619700Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
Jan 29 11:32:06.038918 containerd[1495]: time="2025-01-29T11:32:06.038732042Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
Jan 29 11:32:06.056595 systemd[1]: Started cri-containerd-58f39e28e9d6ee1f29517554a5b6c2bdec899c7d774b4a41cb96716e57a9548a.scope - libcontainer container 58f39e28e9d6ee1f29517554a5b6c2bdec899c7d774b4a41cb96716e57a9548a.
Jan 29 11:32:06.076582 containerd[1495]: time="2025-01-29T11:32:06.076496668Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:tigera-operator-7bc55997bb-tpgpl,Uid:8c38a999-de3a-44e4-af4a-c882743e405d,Namespace:tigera-operator,Attempt:0,} returns sandbox id \"41f774ea0c5420af9bfc46d606228840e1c122c8e5c308a2007b2eeebe76b286\""
Jan 29 11:32:06.079546 containerd[1495]: time="2025-01-29T11:32:06.078631013Z" level=info msg="PullImage \"quay.io/tigera/operator:v1.36.2\""
Jan 29 11:32:06.081924 containerd[1495]: time="2025-01-29T11:32:06.081896710Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-proxy-c8kbx,Uid:b5c4939a-9a98-442b-9a69-2fc1fb9186ce,Namespace:kube-system,Attempt:0,} returns sandbox id \"58f39e28e9d6ee1f29517554a5b6c2bdec899c7d774b4a41cb96716e57a9548a\""
Jan 29 11:32:06.082721 kubelet[2697]: E0129 11:32:06.082523 2697 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Jan 29 11:32:06.097080 containerd[1495]: time="2025-01-29T11:32:06.097024516Z" level=info msg="CreateContainer within sandbox \"58f39e28e9d6ee1f29517554a5b6c2bdec899c7d774b4a41cb96716e57a9548a\" for container &ContainerMetadata{Name:kube-proxy,Attempt:0,}"
Jan 29 11:32:06.465365 containerd[1495]: time="2025-01-29T11:32:06.465307374Z" level=info msg="CreateContainer within sandbox \"58f39e28e9d6ee1f29517554a5b6c2bdec899c7d774b4a41cb96716e57a9548a\" for &ContainerMetadata{Name:kube-proxy,Attempt:0,} returns container id \"62842e410168a2560fa267b993b1c55f5198ba0655842fcec92800ee8a562e3a\""
Jan 29 11:32:06.465902 containerd[1495]: time="2025-01-29T11:32:06.465869341Z" level=info msg="StartContainer for \"62842e410168a2560fa267b993b1c55f5198ba0655842fcec92800ee8a562e3a\""
Jan 29 11:32:06.502611 systemd[1]: Started cri-containerd-62842e410168a2560fa267b993b1c55f5198ba0655842fcec92800ee8a562e3a.scope - libcontainer container 62842e410168a2560fa267b993b1c55f5198ba0655842fcec92800ee8a562e3a.
Jan 29 11:32:06.534482 containerd[1495]: time="2025-01-29T11:32:06.534409520Z" level=info msg="StartContainer for \"62842e410168a2560fa267b993b1c55f5198ba0655842fcec92800ee8a562e3a\" returns successfully"
Jan 29 11:32:07.312595 kubelet[2697]: E0129 11:32:07.312553 2697 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Jan 29 11:32:07.320308 kubelet[2697]: I0129 11:32:07.320239 2697 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/kube-proxy-c8kbx" podStartSLOduration=2.320218029 podStartE2EDuration="2.320218029s" podCreationTimestamp="2025-01-29 11:32:05 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-01-29 11:32:07.319470052 +0000 UTC m=+18.139442805" watchObservedRunningTime="2025-01-29 11:32:07.320218029 +0000 UTC m=+18.140190792"
Jan 29 11:32:08.314614 kubelet[2697]: E0129 11:32:08.314583 2697 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Jan 29 11:32:08.364713 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount3641124312.mount: Deactivated successfully.
Jan 29 11:32:09.605324 containerd[1495]: time="2025-01-29T11:32:09.605262033Z" level=info msg="ImageCreate event name:\"quay.io/tigera/operator:v1.36.2\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Jan 29 11:32:09.648609 containerd[1495]: time="2025-01-29T11:32:09.648565683Z" level=info msg="stop pulling image quay.io/tigera/operator:v1.36.2: active requests=0, bytes read=21762497"
Jan 29 11:32:09.683584 containerd[1495]: time="2025-01-29T11:32:09.683518428Z" level=info msg="ImageCreate event name:\"sha256:3045aa4a360d468ed15090f280e94c54bf4678269a6e863a9ebcf5b31534a346\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Jan 29 11:32:09.709529 containerd[1495]: time="2025-01-29T11:32:09.709484341Z" level=info msg="ImageCreate event name:\"quay.io/tigera/operator@sha256:fc9ea45f2475fd99db1b36d2ff180a50017b1a5ea0e82a171c6b439b3a620764\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Jan 29 11:32:09.710480 containerd[1495]: time="2025-01-29T11:32:09.710428907Z" level=info msg="Pulled image \"quay.io/tigera/operator:v1.36.2\" with image id \"sha256:3045aa4a360d468ed15090f280e94c54bf4678269a6e863a9ebcf5b31534a346\", repo tag \"quay.io/tigera/operator:v1.36.2\", repo digest \"quay.io/tigera/operator@sha256:fc9ea45f2475fd99db1b36d2ff180a50017b1a5ea0e82a171c6b439b3a620764\", size \"21758492\" in 3.631752448s"
Jan 29 11:32:09.710480 containerd[1495]: time="2025-01-29T11:32:09.710482097Z" level=info msg="PullImage \"quay.io/tigera/operator:v1.36.2\" returns image reference \"sha256:3045aa4a360d468ed15090f280e94c54bf4678269a6e863a9ebcf5b31534a346\""
Jan 29 11:32:09.712668 containerd[1495]: time="2025-01-29T11:32:09.712642551Z" level=info msg="CreateContainer within sandbox \"41f774ea0c5420af9bfc46d606228840e1c122c8e5c308a2007b2eeebe76b286\" for container &ContainerMetadata{Name:tigera-operator,Attempt:0,}"
Jan 29 11:32:09.964661 containerd[1495]: time="2025-01-29T11:32:09.964617982Z" level=info msg="CreateContainer within sandbox \"41f774ea0c5420af9bfc46d606228840e1c122c8e5c308a2007b2eeebe76b286\" for &ContainerMetadata{Name:tigera-operator,Attempt:0,} returns container id \"e5659c52264e4c359d1f95c7568f99e5519aa41b9dac58aca2622cb141346cf7\""
Jan 29 11:32:09.965145 containerd[1495]: time="2025-01-29T11:32:09.965102233Z" level=info msg="StartContainer for \"e5659c52264e4c359d1f95c7568f99e5519aa41b9dac58aca2622cb141346cf7\""
Jan 29 11:32:09.998587 systemd[1]: Started cri-containerd-e5659c52264e4c359d1f95c7568f99e5519aa41b9dac58aca2622cb141346cf7.scope - libcontainer container e5659c52264e4c359d1f95c7568f99e5519aa41b9dac58aca2622cb141346cf7.
Jan 29 11:32:10.316099 containerd[1495]: time="2025-01-29T11:32:10.315935540Z" level=info msg="StartContainer for \"e5659c52264e4c359d1f95c7568f99e5519aa41b9dac58aca2622cb141346cf7\" returns successfully"
Jan 29 11:32:10.407083 kubelet[2697]: I0129 11:32:10.406994 2697 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="tigera-operator/tigera-operator-7bc55997bb-tpgpl" podStartSLOduration=1.774019373 podStartE2EDuration="5.406970116s" podCreationTimestamp="2025-01-29 11:32:05 +0000 UTC" firstStartedPulling="2025-01-29 11:32:06.078253002 +0000 UTC m=+16.898225765" lastFinishedPulling="2025-01-29 11:32:09.711203745 +0000 UTC m=+20.531176508" observedRunningTime="2025-01-29 11:32:10.406893191 +0000 UTC m=+21.226865954" watchObservedRunningTime="2025-01-29 11:32:10.406970116 +0000 UTC m=+21.226942879"
Jan 29 11:32:13.844586 kubelet[2697]: I0129 11:32:13.844536 2697 topology_manager.go:215] "Topology Admit Handler" podUID="063ce57a-346d-430a-9a46-b71f669b8144" podNamespace="calico-system" podName="calico-typha-64c9d8f7ff-fg9w2"
Jan 29 11:32:13.858602 systemd[1]: Created slice kubepods-besteffort-pod063ce57a_346d_430a_9a46_b71f669b8144.slice - libcontainer container kubepods-besteffort-pod063ce57a_346d_430a_9a46_b71f669b8144.slice.
Jan 29 11:32:13.927359 kubelet[2697]: I0129 11:32:13.927313 2697 topology_manager.go:215] "Topology Admit Handler" podUID="a36b6cb0-3c2f-47d0-92ce-f864e9b320c9" podNamespace="calico-system" podName="calico-node-4v26b"
Jan 29 11:32:13.928006 kubelet[2697]: I0129 11:32:13.927783 2697 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"tigera-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/063ce57a-346d-430a-9a46-b71f669b8144-tigera-ca-bundle\") pod \"calico-typha-64c9d8f7ff-fg9w2\" (UID: \"063ce57a-346d-430a-9a46-b71f669b8144\") " pod="calico-system/calico-typha-64c9d8f7ff-fg9w2"
Jan 29 11:32:13.928006 kubelet[2697]: I0129 11:32:13.927832 2697 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-x5rzn\" (UniqueName: \"kubernetes.io/projected/063ce57a-346d-430a-9a46-b71f669b8144-kube-api-access-x5rzn\") pod \"calico-typha-64c9d8f7ff-fg9w2\" (UID: \"063ce57a-346d-430a-9a46-b71f669b8144\") " pod="calico-system/calico-typha-64c9d8f7ff-fg9w2"
Jan 29 11:32:13.928006 kubelet[2697]: I0129 11:32:13.927881 2697 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"typha-certs\" (UniqueName: \"kubernetes.io/secret/063ce57a-346d-430a-9a46-b71f669b8144-typha-certs\") pod \"calico-typha-64c9d8f7ff-fg9w2\" (UID: \"063ce57a-346d-430a-9a46-b71f669b8144\") " pod="calico-system/calico-typha-64c9d8f7ff-fg9w2"
Jan 29 11:32:13.937945 systemd[1]: Created slice kubepods-besteffort-poda36b6cb0_3c2f_47d0_92ce_f864e9b320c9.slice - libcontainer container kubepods-besteffort-poda36b6cb0_3c2f_47d0_92ce_f864e9b320c9.slice.
Jan 29 11:32:14.028493 kubelet[2697]: I0129 11:32:14.028439 2697 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"xtables-lock\" (UniqueName: \"kubernetes.io/host-path/a36b6cb0-3c2f-47d0-92ce-f864e9b320c9-xtables-lock\") pod \"calico-node-4v26b\" (UID: \"a36b6cb0-3c2f-47d0-92ce-f864e9b320c9\") " pod="calico-system/calico-node-4v26b"
Jan 29 11:32:14.028493 kubelet[2697]: I0129 11:32:14.028492 2697 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"var-lib-calico\" (UniqueName: \"kubernetes.io/host-path/a36b6cb0-3c2f-47d0-92ce-f864e9b320c9-var-lib-calico\") pod \"calico-node-4v26b\" (UID: \"a36b6cb0-3c2f-47d0-92ce-f864e9b320c9\") " pod="calico-system/calico-node-4v26b"
Jan 29 11:32:14.028665 kubelet[2697]: I0129 11:32:14.028530 2697 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"var-run-calico\" (UniqueName: \"kubernetes.io/host-path/a36b6cb0-3c2f-47d0-92ce-f864e9b320c9-var-run-calico\") pod \"calico-node-4v26b\" (UID: \"a36b6cb0-3c2f-47d0-92ce-f864e9b320c9\") " pod="calico-system/calico-node-4v26b"
Jan 29 11:32:14.028665 kubelet[2697]: I0129 11:32:14.028547 2697 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"flexvol-driver-host\" (UniqueName: \"kubernetes.io/host-path/a36b6cb0-3c2f-47d0-92ce-f864e9b320c9-flexvol-driver-host\") pod \"calico-node-4v26b\" (UID: \"a36b6cb0-3c2f-47d0-92ce-f864e9b320c9\") " pod="calico-system/calico-node-4v26b"
Jan 29 11:32:14.028665 kubelet[2697]: I0129 11:32:14.028561 2697 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-w8kxc\" (UniqueName: \"kubernetes.io/projected/a36b6cb0-3c2f-47d0-92ce-f864e9b320c9-kube-api-access-w8kxc\") pod \"calico-node-4v26b\" (UID: \"a36b6cb0-3c2f-47d0-92ce-f864e9b320c9\") " pod="calico-system/calico-node-4v26b"
Jan 29 11:32:14.028665 kubelet[2697]: I0129 11:32:14.028577 2697 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cni-log-dir\" (UniqueName: \"kubernetes.io/host-path/a36b6cb0-3c2f-47d0-92ce-f864e9b320c9-cni-log-dir\") pod \"calico-node-4v26b\" (UID: \"a36b6cb0-3c2f-47d0-92ce-f864e9b320c9\") " pod="calico-system/calico-node-4v26b"
Jan 29 11:32:14.028665 kubelet[2697]: I0129 11:32:14.028608 2697 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"policysync\" (UniqueName: \"kubernetes.io/host-path/a36b6cb0-3c2f-47d0-92ce-f864e9b320c9-policysync\") pod \"calico-node-4v26b\" (UID: \"a36b6cb0-3c2f-47d0-92ce-f864e9b320c9\") " pod="calico-system/calico-node-4v26b"
Jan 29 11:32:14.028836 kubelet[2697]: I0129 11:32:14.028632 2697 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"node-certs\" (UniqueName: \"kubernetes.io/secret/a36b6cb0-3c2f-47d0-92ce-f864e9b320c9-node-certs\") pod \"calico-node-4v26b\" (UID: \"a36b6cb0-3c2f-47d0-92ce-f864e9b320c9\") " pod="calico-system/calico-node-4v26b"
Jan 29 11:32:14.028836 kubelet[2697]: I0129 11:32:14.028654 2697 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cni-bin-dir\" (UniqueName: \"kubernetes.io/host-path/a36b6cb0-3c2f-47d0-92ce-f864e9b320c9-cni-bin-dir\") pod \"calico-node-4v26b\" (UID: \"a36b6cb0-3c2f-47d0-92ce-f864e9b320c9\") " pod="calico-system/calico-node-4v26b"
Jan 29 11:32:14.028836 kubelet[2697]: I0129 11:32:14.028674 2697 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/a36b6cb0-3c2f-47d0-92ce-f864e9b320c9-lib-modules\") pod \"calico-node-4v26b\" (UID: \"a36b6cb0-3c2f-47d0-92ce-f864e9b320c9\") " pod="calico-system/calico-node-4v26b"
Jan 29 11:32:14.028836 kubelet[2697]: I0129 11:32:14.028694 2697 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"tigera-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/a36b6cb0-3c2f-47d0-92ce-f864e9b320c9-tigera-ca-bundle\") pod \"calico-node-4v26b\" (UID: \"a36b6cb0-3c2f-47d0-92ce-f864e9b320c9\") " pod="calico-system/calico-node-4v26b"
Jan 29 11:32:14.028836 kubelet[2697]: I0129 11:32:14.028713 2697 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cni-net-dir\" (UniqueName: \"kubernetes.io/host-path/a36b6cb0-3c2f-47d0-92ce-f864e9b320c9-cni-net-dir\") pod \"calico-node-4v26b\" (UID: \"a36b6cb0-3c2f-47d0-92ce-f864e9b320c9\") " pod="calico-system/calico-node-4v26b"
Jan 29 11:32:14.048284 kubelet[2697]: I0129 11:32:14.048238 2697 topology_manager.go:215] "Topology Admit Handler" podUID="eb49b472-01c5-4cb5-84d5-9a1a2c4b969d" podNamespace="calico-system" podName="csi-node-driver-qtzv2"
Jan 29 11:32:14.048528 kubelet[2697]: E0129 11:32:14.048502 2697 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" pod="calico-system/csi-node-driver-qtzv2" podUID="eb49b472-01c5-4cb5-84d5-9a1a2c4b969d"
Jan 29 11:32:14.129815 kubelet[2697]: I0129 11:32:14.129772 2697 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kubelet-dir\" (UniqueName: \"kubernetes.io/host-path/eb49b472-01c5-4cb5-84d5-9a1a2c4b969d-kubelet-dir\") pod \"csi-node-driver-qtzv2\" (UID: \"eb49b472-01c5-4cb5-84d5-9a1a2c4b969d\") " pod="calico-system/csi-node-driver-qtzv2"
Jan 29 11:32:14.129953 kubelet[2697]: I0129 11:32:14.129831 2697 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"registration-dir\" (UniqueName: \"kubernetes.io/host-path/eb49b472-01c5-4cb5-84d5-9a1a2c4b969d-registration-dir\") pod \"csi-node-driver-qtzv2\" (UID: \"eb49b472-01c5-4cb5-84d5-9a1a2c4b969d\") " pod="calico-system/csi-node-driver-qtzv2"
Jan 29 11:32:14.129953 kubelet[2697]: I0129 11:32:14.129911 2697 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-7cxdb\" (UniqueName: \"kubernetes.io/projected/eb49b472-01c5-4cb5-84d5-9a1a2c4b969d-kube-api-access-7cxdb\") pod \"csi-node-driver-qtzv2\" (UID: \"eb49b472-01c5-4cb5-84d5-9a1a2c4b969d\") " pod="calico-system/csi-node-driver-qtzv2"
Jan 29 11:32:14.129953 kubelet[2697]: I0129 11:32:14.129929 2697 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"socket-dir\" (UniqueName: \"kubernetes.io/host-path/eb49b472-01c5-4cb5-84d5-9a1a2c4b969d-socket-dir\") pod \"csi-node-driver-qtzv2\" (UID: \"eb49b472-01c5-4cb5-84d5-9a1a2c4b969d\") " pod="calico-system/csi-node-driver-qtzv2"
Jan 29 11:32:14.129953 kubelet[2697]: I0129 11:32:14.129944 2697 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"varrun\" (UniqueName: \"kubernetes.io/host-path/eb49b472-01c5-4cb5-84d5-9a1a2c4b969d-varrun\") pod \"csi-node-driver-qtzv2\" (UID: \"eb49b472-01c5-4cb5-84d5-9a1a2c4b969d\") " pod="calico-system/csi-node-driver-qtzv2"
Jan 29 11:32:14.137429 kubelet[2697]: E0129 11:32:14.136848 2697 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input
Jan 29 11:32:14.137429 kubelet[2697]: W0129 11:32:14.136877 2697 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: ""
Jan 29 11:32:14.137429 kubelet[2697]: E0129 11:32:14.136931 2697 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input"
Jan 29 11:32:14.139089 kubelet[2697]: E0129 11:32:14.139076 2697 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input
Jan 29 11:32:14.139089 kubelet[2697]: W0129 11:32:14.139087 2697 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: ""
Jan 29 11:32:14.139158 kubelet[2697]: E0129 11:32:14.139097 2697 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input"
Jan 29 11:32:14.168659 kubelet[2697]: E0129 11:32:14.168611 2697 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Jan 29 11:32:14.169240 containerd[1495]: time="2025-01-29T11:32:14.169202219Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-typha-64c9d8f7ff-fg9w2,Uid:063ce57a-346d-430a-9a46-b71f669b8144,Namespace:calico-system,Attempt:0,}"
Jan 29 11:32:14.231426 kubelet[2697]: E0129 11:32:14.231363 2697 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input
Jan 29 11:32:14.231426 kubelet[2697]: W0129 11:32:14.231391 2697 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: ""
Jan 29 11:32:14.231426 kubelet[2697]: E0129 11:32:14.231437 2697 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input"
Jan 29 11:32:14.231825 kubelet[2697]: E0129 11:32:14.231793 2697 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input
Jan 29 11:32:14.231825 kubelet[2697]: W0129 11:32:14.231820 2697 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: ""
Jan 29 11:32:14.231916 kubelet[2697]: E0129 11:32:14.231866 2697 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input"
Jan 29 11:32:14.232196 kubelet[2697]: E0129 11:32:14.232164 2697 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input
Jan 29 11:32:14.232196 kubelet[2697]: W0129 11:32:14.232180 2697 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: ""
Jan 29 11:32:14.232196 kubelet[2697]: E0129 11:32:14.232195 2697 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input"
Jan 29 11:32:14.232462 kubelet[2697]: E0129 11:32:14.232435 2697 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input
Jan 29 11:32:14.232462 kubelet[2697]: W0129 11:32:14.232470 2697 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: ""
Jan 29 11:32:14.232653 kubelet[2697]: E0129 11:32:14.232487 2697 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input"
Jan 29 11:32:14.232740 kubelet[2697]: E0129 11:32:14.232710 2697 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input
Jan 29 11:32:14.232780 kubelet[2697]: W0129 11:32:14.232738 2697 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: ""
Jan 29 11:32:14.232780 kubelet[2697]: E0129 11:32:14.232773 2697 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input"
Jan 29 11:32:14.233120 kubelet[2697]: E0129 11:32:14.233090 2697 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input
Jan 29 11:32:14.233120 kubelet[2697]: W0129 11:32:14.233105 2697 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: ""
Jan 29 11:32:14.233120 kubelet[2697]: E0129 11:32:14.233121 2697 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input"
Jan 29 11:32:14.233346 kubelet[2697]: E0129 11:32:14.233320 2697 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input
Jan 29 11:32:14.233346 kubelet[2697]: W0129 11:32:14.233334 2697 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: ""
Jan 29 11:32:14.233346 kubelet[2697]: E0129 11:32:14.233348 2697 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input"
Jan 29 11:32:14.233582 kubelet[2697]: E0129 11:32:14.233563 2697 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input
Jan 29 11:32:14.233582 kubelet[2697]: W0129 11:32:14.233578 2697 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: ""
Jan 29 11:32:14.233653 kubelet[2697]: E0129 11:32:14.233622 2697 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input"
Jan 29 11:32:14.233804 kubelet[2697]: E0129 11:32:14.233786 2697 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input
Jan 29 11:32:14.233804 kubelet[2697]: W0129 11:32:14.233798 2697 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: ""
Jan 29 11:32:14.233885 kubelet[2697]: E0129 11:32:14.233828 2697 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input"
Jan 29 11:32:14.234030 kubelet[2697]: E0129 11:32:14.234002 2697 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input
Jan 29 11:32:14.234030 kubelet[2697]: W0129 11:32:14.234017 2697 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: ""
Jan 29 11:32:14.234102 kubelet[2697]: E0129 11:32:14.234031 2697 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input"
Jan 29 11:32:14.234264 kubelet[2697]: E0129 11:32:14.234237 2697 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input
Jan 29 11:32:14.234264 kubelet[2697]: W0129 11:32:14.234252 2697 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: ""
Jan 29 11:32:14.234333 kubelet[2697]: E0129 11:32:14.234266 2697 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input"
Jan 29 11:32:14.234513 kubelet[2697]: E0129 11:32:14.234495 2697 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input
Jan 29 11:32:14.234513 kubelet[2697]: W0129 11:32:14.234507 2697 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: ""
Jan 29 11:32:14.234581 kubelet[2697]: E0129 11:32:14.234521 2697 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input"
Jan 29 11:32:14.234760 kubelet[2697]: E0129 11:32:14.234735 2697 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input
Jan 29 11:32:14.234760 kubelet[2697]: W0129 11:32:14.234747 2697 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: ""
Jan 29 11:32:14.234760 kubelet[2697]: E0129 11:32:14.234760 2697 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input"
Jan 29 11:32:14.234994 kubelet[2697]: E0129 11:32:14.234978 2697 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input
Jan 29 11:32:14.234994 kubelet[2697]: W0129 11:32:14.234987 2697 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: ""
Jan 29 11:32:14.235064 kubelet[2697]: E0129 11:32:14.235015 2697 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input"
Jan 29 11:32:14.235202 kubelet[2697]: E0129 11:32:14.235185 2697 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input
Jan 29 11:32:14.235202 kubelet[2697]: W0129 11:32:14.235196 2697 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: ""
Jan 29 11:32:14.235281 kubelet[2697]: E0129 11:32:14.235222 2697 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping.
Error: unexpected end of JSON input" Jan 29 11:32:14.235404 kubelet[2697]: E0129 11:32:14.235389 2697 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 29 11:32:14.235404 kubelet[2697]: W0129 11:32:14.235398 2697 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 29 11:32:14.235497 kubelet[2697]: E0129 11:32:14.235438 2697 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jan 29 11:32:14.235628 kubelet[2697]: E0129 11:32:14.235608 2697 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 29 11:32:14.235628 kubelet[2697]: W0129 11:32:14.235618 2697 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 29 11:32:14.235697 kubelet[2697]: E0129 11:32:14.235648 2697 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Jan 29 11:32:14.235828 kubelet[2697]: E0129 11:32:14.235811 2697 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 29 11:32:14.235828 kubelet[2697]: W0129 11:32:14.235821 2697 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 29 11:32:14.235902 kubelet[2697]: E0129 11:32:14.235833 2697 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jan 29 11:32:14.236078 kubelet[2697]: E0129 11:32:14.236062 2697 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 29 11:32:14.236078 kubelet[2697]: W0129 11:32:14.236072 2697 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 29 11:32:14.236139 kubelet[2697]: E0129 11:32:14.236084 2697 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Jan 29 11:32:14.236330 kubelet[2697]: E0129 11:32:14.236314 2697 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 29 11:32:14.236330 kubelet[2697]: W0129 11:32:14.236325 2697 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 29 11:32:14.236393 kubelet[2697]: E0129 11:32:14.236336 2697 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jan 29 11:32:14.236558 kubelet[2697]: E0129 11:32:14.236542 2697 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 29 11:32:14.236558 kubelet[2697]: W0129 11:32:14.236552 2697 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 29 11:32:14.236628 kubelet[2697]: E0129 11:32:14.236566 2697 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Jan 29 11:32:14.236894 kubelet[2697]: E0129 11:32:14.236850 2697 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 29 11:32:14.236894 kubelet[2697]: W0129 11:32:14.236874 2697 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 29 11:32:14.236894 kubelet[2697]: E0129 11:32:14.236891 2697 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jan 29 11:32:14.237148 kubelet[2697]: E0129 11:32:14.237130 2697 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 29 11:32:14.237148 kubelet[2697]: W0129 11:32:14.237142 2697 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 29 11:32:14.237211 kubelet[2697]: E0129 11:32:14.237184 2697 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Jan 29 11:32:14.237376 kubelet[2697]: E0129 11:32:14.237359 2697 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 29 11:32:14.237376 kubelet[2697]: W0129 11:32:14.237372 2697 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 29 11:32:14.237464 kubelet[2697]: E0129 11:32:14.237400 2697 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jan 29 11:32:14.237634 kubelet[2697]: E0129 11:32:14.237616 2697 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 29 11:32:14.237634 kubelet[2697]: W0129 11:32:14.237630 2697 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 29 11:32:14.237696 kubelet[2697]: E0129 11:32:14.237642 2697 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Jan 29 11:32:14.242590 kubelet[2697]: E0129 11:32:14.242106 2697 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jan 29 11:32:14.242783 containerd[1495]: time="2025-01-29T11:32:14.242743629Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-node-4v26b,Uid:a36b6cb0-3c2f-47d0-92ce-f864e9b320c9,Namespace:calico-system,Attempt:0,}" Jan 29 11:32:14.246967 kubelet[2697]: E0129 11:32:14.246937 2697 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 29 11:32:14.246967 kubelet[2697]: W0129 11:32:14.246958 2697 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 29 11:32:14.247063 kubelet[2697]: E0129 11:32:14.246975 2697 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jan 29 11:32:14.296715 containerd[1495]: time="2025-01-29T11:32:14.296606478Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Jan 29 11:32:14.296715 containerd[1495]: time="2025-01-29T11:32:14.296690326Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Jan 29 11:32:14.296836 containerd[1495]: time="2025-01-29T11:32:14.296703460Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Jan 29 11:32:14.302482 containerd[1495]: time="2025-01-29T11:32:14.302095543Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." 
runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Jan 29 11:32:14.303985 containerd[1495]: time="2025-01-29T11:32:14.303818370Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Jan 29 11:32:14.303985 containerd[1495]: time="2025-01-29T11:32:14.303902389Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Jan 29 11:32:14.305733 containerd[1495]: time="2025-01-29T11:32:14.303963063Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Jan 29 11:32:14.305733 containerd[1495]: time="2025-01-29T11:32:14.305688215Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Jan 29 11:32:14.325613 systemd[1]: Started cri-containerd-8ce5de6a4522c3de275eacebb46f8a5a5139b469cf62bc7bb6cca23e20c889a7.scope - libcontainer container 8ce5de6a4522c3de275eacebb46f8a5a5139b469cf62bc7bb6cca23e20c889a7. Jan 29 11:32:14.330428 systemd[1]: Started cri-containerd-3ee21ef4050ccd111435a0544a1fb757ac81bd2550610a120d7c78332e44e5c7.scope - libcontainer container 3ee21ef4050ccd111435a0544a1fb757ac81bd2550610a120d7c78332e44e5c7. 
Jan 29 11:32:14.361686 containerd[1495]: time="2025-01-29T11:32:14.361647854Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-node-4v26b,Uid:a36b6cb0-3c2f-47d0-92ce-f864e9b320c9,Namespace:calico-system,Attempt:0,} returns sandbox id \"3ee21ef4050ccd111435a0544a1fb757ac81bd2550610a120d7c78332e44e5c7\"" Jan 29 11:32:14.362657 kubelet[2697]: E0129 11:32:14.362627 2697 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jan 29 11:32:14.366509 containerd[1495]: time="2025-01-29T11:32:14.366462231Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/pod2daemon-flexvol:v3.29.1\"" Jan 29 11:32:14.376179 containerd[1495]: time="2025-01-29T11:32:14.376137663Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-typha-64c9d8f7ff-fg9w2,Uid:063ce57a-346d-430a-9a46-b71f669b8144,Namespace:calico-system,Attempt:0,} returns sandbox id \"8ce5de6a4522c3de275eacebb46f8a5a5139b469cf62bc7bb6cca23e20c889a7\"" Jan 29 11:32:14.377590 kubelet[2697]: E0129 11:32:14.376723 2697 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jan 29 11:32:15.913860 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount59141042.mount: Deactivated successfully. 
Jan 29 11:32:15.990491 containerd[1495]: time="2025-01-29T11:32:15.990404051Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/pod2daemon-flexvol:v3.29.1\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jan 29 11:32:15.991275 containerd[1495]: time="2025-01-29T11:32:15.991132099Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/pod2daemon-flexvol:v3.29.1: active requests=0, bytes read=6855343" Jan 29 11:32:15.993051 containerd[1495]: time="2025-01-29T11:32:15.992992615Z" level=info msg="ImageCreate event name:\"sha256:2b7452b763ec8833ca0386ada5fd066e552a9b3b02b8538a5e34cc3d6d3840a6\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jan 29 11:32:15.995721 containerd[1495]: time="2025-01-29T11:32:15.995672480Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/pod2daemon-flexvol@sha256:a63f8b4ff531912d12d143664eb263fdbc6cd7b3ff4aa777dfb6e318a090462c\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jan 29 11:32:15.996223 containerd[1495]: time="2025-01-29T11:32:15.996192768Z" level=info msg="Pulled image \"ghcr.io/flatcar/calico/pod2daemon-flexvol:v3.29.1\" with image id \"sha256:2b7452b763ec8833ca0386ada5fd066e552a9b3b02b8538a5e34cc3d6d3840a6\", repo tag \"ghcr.io/flatcar/calico/pod2daemon-flexvol:v3.29.1\", repo digest \"ghcr.io/flatcar/calico/pod2daemon-flexvol@sha256:a63f8b4ff531912d12d143664eb263fdbc6cd7b3ff4aa777dfb6e318a090462c\", size \"6855165\" in 1.629693877s" Jan 29 11:32:15.996272 containerd[1495]: time="2025-01-29T11:32:15.996226641Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/pod2daemon-flexvol:v3.29.1\" returns image reference \"sha256:2b7452b763ec8833ca0386ada5fd066e552a9b3b02b8538a5e34cc3d6d3840a6\"" Jan 29 11:32:15.997644 containerd[1495]: time="2025-01-29T11:32:15.997613266Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/typha:v3.29.1\"" Jan 29 11:32:15.998869 containerd[1495]: time="2025-01-29T11:32:15.998836435Z" level=info msg="CreateContainer within 
sandbox \"3ee21ef4050ccd111435a0544a1fb757ac81bd2550610a120d7c78332e44e5c7\" for container &ContainerMetadata{Name:flexvol-driver,Attempt:0,}" Jan 29 11:32:16.018212 containerd[1495]: time="2025-01-29T11:32:16.018148218Z" level=info msg="CreateContainer within sandbox \"3ee21ef4050ccd111435a0544a1fb757ac81bd2550610a120d7c78332e44e5c7\" for &ContainerMetadata{Name:flexvol-driver,Attempt:0,} returns container id \"bd39147a7d48fe5c47c3751ac4b8757c7151b75cbe6496547ec7dd8494848c1a\"" Jan 29 11:32:16.018652 containerd[1495]: time="2025-01-29T11:32:16.018598022Z" level=info msg="StartContainer for \"bd39147a7d48fe5c47c3751ac4b8757c7151b75cbe6496547ec7dd8494848c1a\"" Jan 29 11:32:16.048544 systemd[1]: Started cri-containerd-bd39147a7d48fe5c47c3751ac4b8757c7151b75cbe6496547ec7dd8494848c1a.scope - libcontainer container bd39147a7d48fe5c47c3751ac4b8757c7151b75cbe6496547ec7dd8494848c1a. Jan 29 11:32:16.092682 systemd[1]: cri-containerd-bd39147a7d48fe5c47c3751ac4b8757c7151b75cbe6496547ec7dd8494848c1a.scope: Deactivated successfully. Jan 29 11:32:16.267862 kubelet[2697]: E0129 11:32:16.267710 2697 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" pod="calico-system/csi-node-driver-qtzv2" podUID="eb49b472-01c5-4cb5-84d5-9a1a2c4b969d" Jan 29 11:32:16.377078 containerd[1495]: time="2025-01-29T11:32:16.377025333Z" level=info msg="StartContainer for \"bd39147a7d48fe5c47c3751ac4b8757c7151b75cbe6496547ec7dd8494848c1a\" returns successfully" Jan 29 11:32:16.399880 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-bd39147a7d48fe5c47c3751ac4b8757c7151b75cbe6496547ec7dd8494848c1a-rootfs.mount: Deactivated successfully. 
Jan 29 11:32:16.567112 containerd[1495]: time="2025-01-29T11:32:16.566954401Z" level=info msg="shim disconnected" id=bd39147a7d48fe5c47c3751ac4b8757c7151b75cbe6496547ec7dd8494848c1a namespace=k8s.io Jan 29 11:32:16.567112 containerd[1495]: time="2025-01-29T11:32:16.567028339Z" level=warning msg="cleaning up after shim disconnected" id=bd39147a7d48fe5c47c3751ac4b8757c7151b75cbe6496547ec7dd8494848c1a namespace=k8s.io Jan 29 11:32:16.567112 containerd[1495]: time="2025-01-29T11:32:16.567037486Z" level=info msg="cleaning up dead shim" namespace=k8s.io Jan 29 11:32:17.382853 kubelet[2697]: E0129 11:32:17.382816 2697 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jan 29 11:32:17.936616 systemd[1]: Started sshd@7-10.0.0.69:22-10.0.0.1:45122.service - OpenSSH per-connection server daemon (10.0.0.1:45122). Jan 29 11:32:18.084619 sshd[3272]: Accepted publickey for core from 10.0.0.1 port 45122 ssh2: RSA SHA256:vglZfOE0APgUpJbg1gfFAEfTfpzlM1a6LSSiwuQWd4A Jan 29 11:32:18.086539 sshd-session[3272]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Jan 29 11:32:18.091333 systemd-logind[1471]: New session 8 of user core. Jan 29 11:32:18.099612 systemd[1]: Started session-8.scope - Session 8 of User core. Jan 29 11:32:18.241523 sshd[3278]: Connection closed by 10.0.0.1 port 45122 Jan 29 11:32:18.242103 sshd-session[3272]: pam_unix(sshd:session): session closed for user core Jan 29 11:32:18.245673 systemd[1]: sshd@7-10.0.0.69:22-10.0.0.1:45122.service: Deactivated successfully. Jan 29 11:32:18.247795 systemd[1]: session-8.scope: Deactivated successfully. Jan 29 11:32:18.249901 systemd-logind[1471]: Session 8 logged out. Waiting for processes to exit. Jan 29 11:32:18.252031 systemd-logind[1471]: Removed session 8. 
Jan 29 11:32:18.273555 kubelet[2697]: E0129 11:32:18.273494 2697 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" pod="calico-system/csi-node-driver-qtzv2" podUID="eb49b472-01c5-4cb5-84d5-9a1a2c4b969d" Jan 29 11:32:18.461055 containerd[1495]: time="2025-01-29T11:32:18.460969678Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/typha:v3.29.1\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jan 29 11:32:18.461921 containerd[1495]: time="2025-01-29T11:32:18.461849952Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/typha:v3.29.1: active requests=0, bytes read=29850141" Jan 29 11:32:18.463162 containerd[1495]: time="2025-01-29T11:32:18.463122542Z" level=info msg="ImageCreate event name:\"sha256:4cb3738506f5a9c530033d1e24fd6b9ec618518a2ec8b012ded33572be06ab44\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jan 29 11:32:18.465810 containerd[1495]: time="2025-01-29T11:32:18.465767921Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/typha@sha256:768a194e1115c73bcbf35edb7afd18a63e16e08d940c79993565b6a3cca2da7c\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jan 29 11:32:18.466335 containerd[1495]: time="2025-01-29T11:32:18.466304469Z" level=info msg="Pulled image \"ghcr.io/flatcar/calico/typha:v3.29.1\" with image id \"sha256:4cb3738506f5a9c530033d1e24fd6b9ec618518a2ec8b012ded33572be06ab44\", repo tag \"ghcr.io/flatcar/calico/typha:v3.29.1\", repo digest \"ghcr.io/flatcar/calico/typha@sha256:768a194e1115c73bcbf35edb7afd18a63e16e08d940c79993565b6a3cca2da7c\", size \"31343217\" in 2.468662819s" Jan 29 11:32:18.466372 containerd[1495]: time="2025-01-29T11:32:18.466334485Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/typha:v3.29.1\" returns image reference 
\"sha256:4cb3738506f5a9c530033d1e24fd6b9ec618518a2ec8b012ded33572be06ab44\"" Jan 29 11:32:18.467253 containerd[1495]: time="2025-01-29T11:32:18.467205501Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/cni:v3.29.1\"" Jan 29 11:32:18.480056 containerd[1495]: time="2025-01-29T11:32:18.480015593Z" level=info msg="CreateContainer within sandbox \"8ce5de6a4522c3de275eacebb46f8a5a5139b469cf62bc7bb6cca23e20c889a7\" for container &ContainerMetadata{Name:calico-typha,Attempt:0,}" Jan 29 11:32:18.498317 containerd[1495]: time="2025-01-29T11:32:18.498223814Z" level=info msg="CreateContainer within sandbox \"8ce5de6a4522c3de275eacebb46f8a5a5139b469cf62bc7bb6cca23e20c889a7\" for &ContainerMetadata{Name:calico-typha,Attempt:0,} returns container id \"cb3260dd45d635c3fbf6998fc942ed24139716a0ffcc7d43f430220982489654\"" Jan 29 11:32:18.498898 containerd[1495]: time="2025-01-29T11:32:18.498859187Z" level=info msg="StartContainer for \"cb3260dd45d635c3fbf6998fc942ed24139716a0ffcc7d43f430220982489654\"" Jan 29 11:32:18.530554 systemd[1]: Started cri-containerd-cb3260dd45d635c3fbf6998fc942ed24139716a0ffcc7d43f430220982489654.scope - libcontainer container cb3260dd45d635c3fbf6998fc942ed24139716a0ffcc7d43f430220982489654. 
Jan 29 11:32:18.599317 containerd[1495]: time="2025-01-29T11:32:18.599264802Z" level=info msg="StartContainer for \"cb3260dd45d635c3fbf6998fc942ed24139716a0ffcc7d43f430220982489654\" returns successfully" Jan 29 11:32:19.387117 kubelet[2697]: E0129 11:32:19.387084 2697 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jan 29 11:32:19.398644 kubelet[2697]: I0129 11:32:19.398224 2697 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="calico-system/calico-typha-64c9d8f7ff-fg9w2" podStartSLOduration=2.308696123 podStartE2EDuration="6.398184142s" podCreationTimestamp="2025-01-29 11:32:13 +0000 UTC" firstStartedPulling="2025-01-29 11:32:14.377633114 +0000 UTC m=+25.197605867" lastFinishedPulling="2025-01-29 11:32:18.467121123 +0000 UTC m=+29.287093886" observedRunningTime="2025-01-29 11:32:19.397838963 +0000 UTC m=+30.217811726" watchObservedRunningTime="2025-01-29 11:32:19.398184142 +0000 UTC m=+30.218156905" Jan 29 11:32:20.267905 kubelet[2697]: E0129 11:32:20.267849 2697 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" pod="calico-system/csi-node-driver-qtzv2" podUID="eb49b472-01c5-4cb5-84d5-9a1a2c4b969d" Jan 29 11:32:20.388944 kubelet[2697]: I0129 11:32:20.388872 2697 prober_manager.go:312] "Failed to trigger a manual run" probe="Readiness" Jan 29 11:32:20.389587 kubelet[2697]: E0129 11:32:20.389546 2697 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jan 29 11:32:21.390566 kubelet[2697]: E0129 11:32:21.390538 2697 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, 
the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jan 29 11:32:22.267758 kubelet[2697]: E0129 11:32:22.267706 2697 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" pod="calico-system/csi-node-driver-qtzv2" podUID="eb49b472-01c5-4cb5-84d5-9a1a2c4b969d" Jan 29 11:32:22.392288 kubelet[2697]: E0129 11:32:22.392249 2697 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jan 29 11:32:23.255651 systemd[1]: Started sshd@8-10.0.0.69:22-10.0.0.1:45124.service - OpenSSH per-connection server daemon (10.0.0.1:45124). Jan 29 11:32:23.312725 sshd[3346]: Accepted publickey for core from 10.0.0.1 port 45124 ssh2: RSA SHA256:vglZfOE0APgUpJbg1gfFAEfTfpzlM1a6LSSiwuQWd4A Jan 29 11:32:23.314328 sshd-session[3346]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Jan 29 11:32:23.319393 systemd-logind[1471]: New session 9 of user core. Jan 29 11:32:23.323535 systemd[1]: Started session-9.scope - Session 9 of User core. Jan 29 11:32:23.441133 sshd[3348]: Connection closed by 10.0.0.1 port 45124 Jan 29 11:32:23.441464 sshd-session[3346]: pam_unix(sshd:session): session closed for user core Jan 29 11:32:23.445676 systemd[1]: sshd@8-10.0.0.69:22-10.0.0.1:45124.service: Deactivated successfully. Jan 29 11:32:23.447473 systemd[1]: session-9.scope: Deactivated successfully. Jan 29 11:32:23.448122 systemd-logind[1471]: Session 9 logged out. Waiting for processes to exit. Jan 29 11:32:23.449153 systemd-logind[1471]: Removed session 9. 
Jan 29 11:32:24.267594 kubelet[2697]: E0129 11:32:24.267555 2697 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" pod="calico-system/csi-node-driver-qtzv2" podUID="eb49b472-01c5-4cb5-84d5-9a1a2c4b969d" Jan 29 11:32:25.211326 containerd[1495]: time="2025-01-29T11:32:25.211276439Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/cni:v3.29.1\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jan 29 11:32:25.275135 containerd[1495]: time="2025-01-29T11:32:25.275043705Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/cni:v3.29.1: active requests=0, bytes read=96154154" Jan 29 11:32:25.319886 containerd[1495]: time="2025-01-29T11:32:25.319832796Z" level=info msg="ImageCreate event name:\"sha256:7dd6ea186aba0d7a1791a79d426fe854527ca95192b26bbd19e8baf8373f7d0e\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jan 29 11:32:25.402044 containerd[1495]: time="2025-01-29T11:32:25.401997890Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/cni@sha256:21e759d51c90dfb34fc1397dc180dd3a3fb564c2b0580d2f61ffe108f2a3c94b\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jan 29 11:32:25.402708 containerd[1495]: time="2025-01-29T11:32:25.402683808Z" level=info msg="Pulled image \"ghcr.io/flatcar/calico/cni:v3.29.1\" with image id \"sha256:7dd6ea186aba0d7a1791a79d426fe854527ca95192b26bbd19e8baf8373f7d0e\", repo tag \"ghcr.io/flatcar/calico/cni:v3.29.1\", repo digest \"ghcr.io/flatcar/calico/cni@sha256:21e759d51c90dfb34fc1397dc180dd3a3fb564c2b0580d2f61ffe108f2a3c94b\", size \"97647238\" in 6.935446316s" Jan 29 11:32:25.402708 containerd[1495]: time="2025-01-29T11:32:25.402708624Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/cni:v3.29.1\" returns image reference 
\"sha256:7dd6ea186aba0d7a1791a79d426fe854527ca95192b26bbd19e8baf8373f7d0e\"" Jan 29 11:32:25.404430 containerd[1495]: time="2025-01-29T11:32:25.404390973Z" level=info msg="CreateContainer within sandbox \"3ee21ef4050ccd111435a0544a1fb757ac81bd2550610a120d7c78332e44e5c7\" for container &ContainerMetadata{Name:install-cni,Attempt:0,}" Jan 29 11:32:25.714699 containerd[1495]: time="2025-01-29T11:32:25.714642251Z" level=info msg="CreateContainer within sandbox \"3ee21ef4050ccd111435a0544a1fb757ac81bd2550610a120d7c78332e44e5c7\" for &ContainerMetadata{Name:install-cni,Attempt:0,} returns container id \"f7a3f03d481c40243a461414eb2a07b2b631af9d40c7235b345b419e45df3328\"" Jan 29 11:32:25.715164 containerd[1495]: time="2025-01-29T11:32:25.715128935Z" level=info msg="StartContainer for \"f7a3f03d481c40243a461414eb2a07b2b631af9d40c7235b345b419e45df3328\"" Jan 29 11:32:25.747571 systemd[1]: Started cri-containerd-f7a3f03d481c40243a461414eb2a07b2b631af9d40c7235b345b419e45df3328.scope - libcontainer container f7a3f03d481c40243a461414eb2a07b2b631af9d40c7235b345b419e45df3328. 
Jan 29 11:32:25.783244 containerd[1495]: time="2025-01-29T11:32:25.783200261Z" level=info msg="StartContainer for \"f7a3f03d481c40243a461414eb2a07b2b631af9d40c7235b345b419e45df3328\" returns successfully" Jan 29 11:32:26.268033 kubelet[2697]: E0129 11:32:26.267960 2697 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" pod="calico-system/csi-node-driver-qtzv2" podUID="eb49b472-01c5-4cb5-84d5-9a1a2c4b969d" Jan 29 11:32:26.401138 kubelet[2697]: E0129 11:32:26.401097 2697 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jan 29 11:32:27.185218 systemd[1]: cri-containerd-f7a3f03d481c40243a461414eb2a07b2b631af9d40c7235b345b419e45df3328.scope: Deactivated successfully. Jan 29 11:32:27.206724 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-f7a3f03d481c40243a461414eb2a07b2b631af9d40c7235b345b419e45df3328-rootfs.mount: Deactivated successfully. 
Jan 29 11:32:27.237885 kubelet[2697]: I0129 11:32:27.237685 2697 kubelet_node_status.go:497] "Fast updating node status as it just became ready" Jan 29 11:32:27.309171 kubelet[2697]: I0129 11:32:27.309109 2697 topology_manager.go:215] "Topology Admit Handler" podUID="ce5f6883-5ebc-45bd-8052-20316de2d012" podNamespace="kube-system" podName="coredns-7db6d8ff4d-pgcpq" Jan 29 11:32:27.314225 kubelet[2697]: I0129 11:32:27.314178 2697 topology_manager.go:215] "Topology Admit Handler" podUID="dfb9e1ad-f94c-4aa8-a1d0-d67fe50cc0e9" podNamespace="calico-system" podName="calico-kube-controllers-748549c4c9-7d2cf" Jan 29 11:32:27.314472 kubelet[2697]: I0129 11:32:27.314400 2697 topology_manager.go:215] "Topology Admit Handler" podUID="59ad9644-a5c7-4480-bc20-dbeaa0a967d1" podNamespace="kube-system" podName="coredns-7db6d8ff4d-bpqc6" Jan 29 11:32:27.314732 kubelet[2697]: I0129 11:32:27.314688 2697 topology_manager.go:215] "Topology Admit Handler" podUID="6a3ccfc9-9edc-4b98-a77a-7df17efe2895" podNamespace="calico-apiserver" podName="calico-apiserver-7f846fb45c-qnzlv" Jan 29 11:32:27.315045 kubelet[2697]: I0129 11:32:27.315015 2697 topology_manager.go:215] "Topology Admit Handler" podUID="355edf79-8969-4232-bff0-a38923ed3709" podNamespace="calico-apiserver" podName="calico-apiserver-7f846fb45c-49zts" Jan 29 11:32:27.319808 systemd[1]: Created slice kubepods-burstable-podce5f6883_5ebc_45bd_8052_20316de2d012.slice - libcontainer container kubepods-burstable-podce5f6883_5ebc_45bd_8052_20316de2d012.slice. Jan 29 11:32:27.332596 systemd[1]: Created slice kubepods-besteffort-poddfb9e1ad_f94c_4aa8_a1d0_d67fe50cc0e9.slice - libcontainer container kubepods-besteffort-poddfb9e1ad_f94c_4aa8_a1d0_d67fe50cc0e9.slice. Jan 29 11:32:27.338888 systemd[1]: Created slice kubepods-burstable-pod59ad9644_a5c7_4480_bc20_dbeaa0a967d1.slice - libcontainer container kubepods-burstable-pod59ad9644_a5c7_4480_bc20_dbeaa0a967d1.slice. 
Jan 29 11:32:27.344323 systemd[1]: Created slice kubepods-besteffort-pod6a3ccfc9_9edc_4b98_a77a_7df17efe2895.slice - libcontainer container kubepods-besteffort-pod6a3ccfc9_9edc_4b98_a77a_7df17efe2895.slice.
Jan 29 11:32:27.348160 systemd[1]: Created slice kubepods-besteffort-pod355edf79_8969_4232_bff0_a38923ed3709.slice - libcontainer container kubepods-besteffort-pod355edf79_8969_4232_bff0_a38923ed3709.slice.
Jan 29 11:32:27.402787 kubelet[2697]: E0129 11:32:27.402756 2697 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Jan 29 11:32:27.415084 kubelet[2697]: I0129 11:32:27.415033 2697 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-77z5x\" (UniqueName: \"kubernetes.io/projected/ce5f6883-5ebc-45bd-8052-20316de2d012-kube-api-access-77z5x\") pod \"coredns-7db6d8ff4d-pgcpq\" (UID: \"ce5f6883-5ebc-45bd-8052-20316de2d012\") " pod="kube-system/coredns-7db6d8ff4d-pgcpq"
Jan 29 11:32:27.415084 kubelet[2697]: I0129 11:32:27.415083 2697 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-jv6rn\" (UniqueName: \"kubernetes.io/projected/6a3ccfc9-9edc-4b98-a77a-7df17efe2895-kube-api-access-jv6rn\") pod \"calico-apiserver-7f846fb45c-qnzlv\" (UID: \"6a3ccfc9-9edc-4b98-a77a-7df17efe2895\") " pod="calico-apiserver/calico-apiserver-7f846fb45c-qnzlv"
Jan 29 11:32:27.415198 kubelet[2697]: I0129 11:32:27.415143 2697 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"calico-apiserver-certs\" (UniqueName: \"kubernetes.io/secret/6a3ccfc9-9edc-4b98-a77a-7df17efe2895-calico-apiserver-certs\") pod \"calico-apiserver-7f846fb45c-qnzlv\" (UID: \"6a3ccfc9-9edc-4b98-a77a-7df17efe2895\") " pod="calico-apiserver/calico-apiserver-7f846fb45c-qnzlv"
Jan 29 11:32:27.415198 kubelet[2697]: I0129 11:32:27.415175 2697 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/ce5f6883-5ebc-45bd-8052-20316de2d012-config-volume\") pod \"coredns-7db6d8ff4d-pgcpq\" (UID: \"ce5f6883-5ebc-45bd-8052-20316de2d012\") " pod="kube-system/coredns-7db6d8ff4d-pgcpq"
Jan 29 11:32:27.415259 kubelet[2697]: I0129 11:32:27.415198 2697 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"tigera-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/dfb9e1ad-f94c-4aa8-a1d0-d67fe50cc0e9-tigera-ca-bundle\") pod \"calico-kube-controllers-748549c4c9-7d2cf\" (UID: \"dfb9e1ad-f94c-4aa8-a1d0-d67fe50cc0e9\") " pod="calico-system/calico-kube-controllers-748549c4c9-7d2cf"
Jan 29 11:32:27.415259 kubelet[2697]: I0129 11:32:27.415220 2697 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-dg9zt\" (UniqueName: \"kubernetes.io/projected/dfb9e1ad-f94c-4aa8-a1d0-d67fe50cc0e9-kube-api-access-dg9zt\") pod \"calico-kube-controllers-748549c4c9-7d2cf\" (UID: \"dfb9e1ad-f94c-4aa8-a1d0-d67fe50cc0e9\") " pod="calico-system/calico-kube-controllers-748549c4c9-7d2cf"
Jan 29 11:32:27.415259 kubelet[2697]: I0129 11:32:27.415247 2697 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-982wp\" (UniqueName: \"kubernetes.io/projected/355edf79-8969-4232-bff0-a38923ed3709-kube-api-access-982wp\") pod \"calico-apiserver-7f846fb45c-49zts\" (UID: \"355edf79-8969-4232-bff0-a38923ed3709\") " pod="calico-apiserver/calico-apiserver-7f846fb45c-49zts"
Jan 29 11:32:27.415334 kubelet[2697]: I0129 11:32:27.415273 2697 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/59ad9644-a5c7-4480-bc20-dbeaa0a967d1-config-volume\") pod \"coredns-7db6d8ff4d-bpqc6\" (UID: \"59ad9644-a5c7-4480-bc20-dbeaa0a967d1\") " pod="kube-system/coredns-7db6d8ff4d-bpqc6"
Jan 29 11:32:27.415334 kubelet[2697]: I0129 11:32:27.415295 2697 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"calico-apiserver-certs\" (UniqueName: \"kubernetes.io/secret/355edf79-8969-4232-bff0-a38923ed3709-calico-apiserver-certs\") pod \"calico-apiserver-7f846fb45c-49zts\" (UID: \"355edf79-8969-4232-bff0-a38923ed3709\") " pod="calico-apiserver/calico-apiserver-7f846fb45c-49zts"
Jan 29 11:32:27.415403 kubelet[2697]: I0129 11:32:27.415379 2697 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-fgmqz\" (UniqueName: \"kubernetes.io/projected/59ad9644-a5c7-4480-bc20-dbeaa0a967d1-kube-api-access-fgmqz\") pod \"coredns-7db6d8ff4d-bpqc6\" (UID: \"59ad9644-a5c7-4480-bc20-dbeaa0a967d1\") " pod="kube-system/coredns-7db6d8ff4d-bpqc6"
Jan 29 11:32:27.624781 kubelet[2697]: E0129 11:32:27.624741 2697 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Jan 29 11:32:27.641687 kubelet[2697]: E0129 11:32:27.641664 2697 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Jan 29 11:32:27.711220 containerd[1495]: time="2025-01-29T11:32:27.711175249Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-apiserver-7f846fb45c-qnzlv,Uid:6a3ccfc9-9edc-4b98-a77a-7df17efe2895,Namespace:calico-apiserver,Attempt:0,}"
Jan 29 11:32:27.711955 containerd[1495]: time="2025-01-29T11:32:27.711388389Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-7db6d8ff4d-bpqc6,Uid:59ad9644-a5c7-4480-bc20-dbeaa0a967d1,Namespace:kube-system,Attempt:0,}"
Jan 29 11:32:27.711955 containerd[1495]: time="2025-01-29T11:32:27.711611969Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-kube-controllers-748549c4c9-7d2cf,Uid:dfb9e1ad-f94c-4aa8-a1d0-d67fe50cc0e9,Namespace:calico-system,Attempt:0,}"
Jan 29 11:32:27.711955 containerd[1495]: time="2025-01-29T11:32:27.711175279Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-apiserver-7f846fb45c-49zts,Uid:355edf79-8969-4232-bff0-a38923ed3709,Namespace:calico-apiserver,Attempt:0,}"
Jan 29 11:32:27.711955 containerd[1495]: time="2025-01-29T11:32:27.711777680Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-7db6d8ff4d-pgcpq,Uid:ce5f6883-5ebc-45bd-8052-20316de2d012,Namespace:kube-system,Attempt:0,}"
Jan 29 11:32:27.861708 containerd[1495]: time="2025-01-29T11:32:27.861639965Z" level=info msg="shim disconnected" id=f7a3f03d481c40243a461414eb2a07b2b631af9d40c7235b345b419e45df3328 namespace=k8s.io
Jan 29 11:32:27.861708 containerd[1495]: time="2025-01-29T11:32:27.861701150Z" level=warning msg="cleaning up after shim disconnected" id=f7a3f03d481c40243a461414eb2a07b2b631af9d40c7235b345b419e45df3328 namespace=k8s.io
Jan 29 11:32:27.861708 containerd[1495]: time="2025-01-29T11:32:27.861710597Z" level=info msg="cleaning up dead shim" namespace=k8s.io
Jan 29 11:32:28.273287 systemd[1]: Created slice kubepods-besteffort-podeb49b472_01c5_4cb5_84d5_9a1a2c4b969d.slice - libcontainer container kubepods-besteffort-podeb49b472_01c5_4cb5_84d5_9a1a2c4b969d.slice.
Jan 29 11:32:28.275644 containerd[1495]: time="2025-01-29T11:32:28.275592573Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:csi-node-driver-qtzv2,Uid:eb49b472-01c5-4cb5-84d5-9a1a2c4b969d,Namespace:calico-system,Attempt:0,}"
Jan 29 11:32:28.406380 kubelet[2697]: E0129 11:32:28.406342 2697 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Jan 29 11:32:28.407619 containerd[1495]: time="2025-01-29T11:32:28.406976510Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/node:v3.29.1\""
Jan 29 11:32:28.454343 systemd[1]: Started sshd@9-10.0.0.69:22-10.0.0.1:56408.service - OpenSSH per-connection server daemon (10.0.0.1:56408).
Jan 29 11:32:28.601004 sshd[3437]: Accepted publickey for core from 10.0.0.1 port 56408 ssh2: RSA SHA256:vglZfOE0APgUpJbg1gfFAEfTfpzlM1a6LSSiwuQWd4A
Jan 29 11:32:28.601120 sshd-session[3437]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Jan 29 11:32:28.605270 systemd-logind[1471]: New session 10 of user core.
Jan 29 11:32:28.621555 systemd[1]: Started session-10.scope - Session 10 of User core.
Jan 29 11:32:28.782064 sshd[3439]: Connection closed by 10.0.0.1 port 56408
Jan 29 11:32:28.782438 sshd-session[3437]: pam_unix(sshd:session): session closed for user core
Jan 29 11:32:28.786325 systemd[1]: sshd@9-10.0.0.69:22-10.0.0.1:56408.service: Deactivated successfully.
Jan 29 11:32:28.788518 systemd[1]: session-10.scope: Deactivated successfully.
Jan 29 11:32:28.789156 systemd-logind[1471]: Session 10 logged out. Waiting for processes to exit.
Jan 29 11:32:28.790147 systemd-logind[1471]: Removed session 10.
Jan 29 11:32:29.024727 containerd[1495]: time="2025-01-29T11:32:29.024671940Z" level=error msg="Failed to destroy network for sandbox \"fe6ac203e82a4009391dc90c7b6ce83180175a75113cfc6c681bac96b7142886\"" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/"
Jan 29 11:32:29.025403 containerd[1495]: time="2025-01-29T11:32:29.025092348Z" level=error msg="encountered an error cleaning up failed sandbox \"fe6ac203e82a4009391dc90c7b6ce83180175a75113cfc6c681bac96b7142886\", marking sandbox state as SANDBOX_UNKNOWN" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/"
Jan 29 11:32:29.025403 containerd[1495]: time="2025-01-29T11:32:29.025148834Z" level=error msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-kube-controllers-748549c4c9-7d2cf,Uid:dfb9e1ad-f94c-4aa8-a1d0-d67fe50cc0e9,Namespace:calico-system,Attempt:0,} failed, error" error="failed to setup network for sandbox \"fe6ac203e82a4009391dc90c7b6ce83180175a75113cfc6c681bac96b7142886\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/"
Jan 29 11:32:29.025682 kubelet[2697]: E0129 11:32:29.025462 2697 remote_runtime.go:193] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"fe6ac203e82a4009391dc90c7b6ce83180175a75113cfc6c681bac96b7142886\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/"
Jan 29 11:32:29.025682 kubelet[2697]: E0129 11:32:29.025594 2697 kuberuntime_sandbox.go:72] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"fe6ac203e82a4009391dc90c7b6ce83180175a75113cfc6c681bac96b7142886\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-system/calico-kube-controllers-748549c4c9-7d2cf"
Jan 29 11:32:29.025682 kubelet[2697]: E0129 11:32:29.025633 2697 kuberuntime_manager.go:1166] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"fe6ac203e82a4009391dc90c7b6ce83180175a75113cfc6c681bac96b7142886\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-system/calico-kube-controllers-748549c4c9-7d2cf"
Jan 29 11:32:29.025814 kubelet[2697]: E0129 11:32:29.025709 2697 pod_workers.go:1298] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"calico-kube-controllers-748549c4c9-7d2cf_calico-system(dfb9e1ad-f94c-4aa8-a1d0-d67fe50cc0e9)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"calico-kube-controllers-748549c4c9-7d2cf_calico-system(dfb9e1ad-f94c-4aa8-a1d0-d67fe50cc0e9)\\\": rpc error: code = Unknown desc = failed to setup network for sandbox \\\"fe6ac203e82a4009391dc90c7b6ce83180175a75113cfc6c681bac96b7142886\\\": plugin type=\\\"calico\\\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="calico-system/calico-kube-controllers-748549c4c9-7d2cf" podUID="dfb9e1ad-f94c-4aa8-a1d0-d67fe50cc0e9"
Jan 29 11:32:29.026480 containerd[1495]: time="2025-01-29T11:32:29.026049174Z" level=error msg="Failed to destroy network for sandbox \"a57e13c259cc81ce2e477b9469ae3554a9ae4291cb2db4e7129310c60db62f90\"" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/"
Jan 29 11:32:29.026480 containerd[1495]: time="2025-01-29T11:32:29.026376839Z" level=error msg="encountered an error cleaning up failed sandbox \"a57e13c259cc81ce2e477b9469ae3554a9ae4291cb2db4e7129310c60db62f90\", marking sandbox state as SANDBOX_UNKNOWN" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/"
Jan 29 11:32:29.026480 containerd[1495]: time="2025-01-29T11:32:29.026426161Z" level=error msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-apiserver-7f846fb45c-49zts,Uid:355edf79-8969-4232-bff0-a38923ed3709,Namespace:calico-apiserver,Attempt:0,} failed, error" error="failed to setup network for sandbox \"a57e13c259cc81ce2e477b9469ae3554a9ae4291cb2db4e7129310c60db62f90\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/"
Jan 29 11:32:29.027013 kubelet[2697]: E0129 11:32:29.026841 2697 remote_runtime.go:193] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"a57e13c259cc81ce2e477b9469ae3554a9ae4291cb2db4e7129310c60db62f90\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/"
Jan 29 11:32:29.027013 kubelet[2697]: E0129 11:32:29.026909 2697 kuberuntime_sandbox.go:72] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"a57e13c259cc81ce2e477b9469ae3554a9ae4291cb2db4e7129310c60db62f90\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-apiserver/calico-apiserver-7f846fb45c-49zts"
Jan 29 11:32:29.027013 kubelet[2697]: E0129 11:32:29.026929 2697 kuberuntime_manager.go:1166] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"a57e13c259cc81ce2e477b9469ae3554a9ae4291cb2db4e7129310c60db62f90\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-apiserver/calico-apiserver-7f846fb45c-49zts"
Jan 29 11:32:29.027122 kubelet[2697]: E0129 11:32:29.026970 2697 pod_workers.go:1298] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"calico-apiserver-7f846fb45c-49zts_calico-apiserver(355edf79-8969-4232-bff0-a38923ed3709)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"calico-apiserver-7f846fb45c-49zts_calico-apiserver(355edf79-8969-4232-bff0-a38923ed3709)\\\": rpc error: code = Unknown desc = failed to setup network for sandbox \\\"a57e13c259cc81ce2e477b9469ae3554a9ae4291cb2db4e7129310c60db62f90\\\": plugin type=\\\"calico\\\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="calico-apiserver/calico-apiserver-7f846fb45c-49zts" podUID="355edf79-8969-4232-bff0-a38923ed3709"
Jan 29 11:32:29.040017 containerd[1495]: time="2025-01-29T11:32:29.039931118Z" level=error msg="Failed to destroy network for sandbox \"88b4cc24607b9cc69cf8566db1a11a31044711ee8251a11957523ee4d62e4b16\"" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/"
Jan 29 11:32:29.040567 containerd[1495]: time="2025-01-29T11:32:29.040529381Z" level=error msg="encountered an error cleaning up failed sandbox \"88b4cc24607b9cc69cf8566db1a11a31044711ee8251a11957523ee4d62e4b16\", marking sandbox state as SANDBOX_UNKNOWN" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/"
Jan 29 11:32:29.040706 containerd[1495]: time="2025-01-29T11:32:29.040670967Z" level=error msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-7db6d8ff4d-pgcpq,Uid:ce5f6883-5ebc-45bd-8052-20316de2d012,Namespace:kube-system,Attempt:0,} failed, error" error="failed to setup network for sandbox \"88b4cc24607b9cc69cf8566db1a11a31044711ee8251a11957523ee4d62e4b16\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/"
Jan 29 11:32:29.041043 kubelet[2697]: E0129 11:32:29.041006 2697 remote_runtime.go:193] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"88b4cc24607b9cc69cf8566db1a11a31044711ee8251a11957523ee4d62e4b16\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/"
Jan 29 11:32:29.041882 kubelet[2697]: E0129 11:32:29.041106 2697 kuberuntime_sandbox.go:72] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"88b4cc24607b9cc69cf8566db1a11a31044711ee8251a11957523ee4d62e4b16\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="kube-system/coredns-7db6d8ff4d-pgcpq"
Jan 29 11:32:29.041882 kubelet[2697]: E0129 11:32:29.041127 2697 kuberuntime_manager.go:1166] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"88b4cc24607b9cc69cf8566db1a11a31044711ee8251a11957523ee4d62e4b16\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="kube-system/coredns-7db6d8ff4d-pgcpq"
Jan 29 11:32:29.041882 kubelet[2697]: E0129 11:32:29.041176 2697 pod_workers.go:1298] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"coredns-7db6d8ff4d-pgcpq_kube-system(ce5f6883-5ebc-45bd-8052-20316de2d012)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"coredns-7db6d8ff4d-pgcpq_kube-system(ce5f6883-5ebc-45bd-8052-20316de2d012)\\\": rpc error: code = Unknown desc = failed to setup network for sandbox \\\"88b4cc24607b9cc69cf8566db1a11a31044711ee8251a11957523ee4d62e4b16\\\": plugin type=\\\"calico\\\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="kube-system/coredns-7db6d8ff4d-pgcpq" podUID="ce5f6883-5ebc-45bd-8052-20316de2d012"
Jan 29 11:32:29.050780 containerd[1495]: time="2025-01-29T11:32:29.050690472Z" level=error msg="Failed to destroy network for sandbox \"62fbf0cacbcea32d6948f7a7cdc9388760af0845df2b850d76ebf9e79a41f942\"" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/"
Jan 29 11:32:29.051272 containerd[1495]: time="2025-01-29T11:32:29.051230815Z" level=error msg="Failed to destroy network for sandbox \"068481ee10bd71b1783eeab1c40f9ef93d9dc158fa6a9beeeba5fd6980a6abd7\"" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/"
Jan 29 11:32:29.051318 containerd[1495]: time="2025-01-29T11:32:29.051296098Z" level=error msg="encountered an error cleaning up failed sandbox \"62fbf0cacbcea32d6948f7a7cdc9388760af0845df2b850d76ebf9e79a41f942\", marking sandbox state as SANDBOX_UNKNOWN" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/"
Jan 29 11:32:29.051380 containerd[1495]: time="2025-01-29T11:32:29.051356492Z" level=error msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-7db6d8ff4d-bpqc6,Uid:59ad9644-a5c7-4480-bc20-dbeaa0a967d1,Namespace:kube-system,Attempt:0,} failed, error" error="failed to setup network for sandbox \"62fbf0cacbcea32d6948f7a7cdc9388760af0845df2b850d76ebf9e79a41f942\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/"
Jan 29 11:32:29.051632 kubelet[2697]: E0129 11:32:29.051586 2697 remote_runtime.go:193] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"62fbf0cacbcea32d6948f7a7cdc9388760af0845df2b850d76ebf9e79a41f942\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/"
Jan 29 11:32:29.051632 kubelet[2697]: E0129 11:32:29.051628 2697 kuberuntime_sandbox.go:72] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"62fbf0cacbcea32d6948f7a7cdc9388760af0845df2b850d76ebf9e79a41f942\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="kube-system/coredns-7db6d8ff4d-bpqc6"
Jan 29 11:32:29.051632 kubelet[2697]: E0129 11:32:29.051646 2697 kuberuntime_manager.go:1166] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"62fbf0cacbcea32d6948f7a7cdc9388760af0845df2b850d76ebf9e79a41f942\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="kube-system/coredns-7db6d8ff4d-bpqc6"
Jan 29 11:32:29.051830 kubelet[2697]: E0129 11:32:29.051682 2697 pod_workers.go:1298] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"coredns-7db6d8ff4d-bpqc6_kube-system(59ad9644-a5c7-4480-bc20-dbeaa0a967d1)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"coredns-7db6d8ff4d-bpqc6_kube-system(59ad9644-a5c7-4480-bc20-dbeaa0a967d1)\\\": rpc error: code = Unknown desc = failed to setup network for sandbox \\\"62fbf0cacbcea32d6948f7a7cdc9388760af0845df2b850d76ebf9e79a41f942\\\": plugin type=\\\"calico\\\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="kube-system/coredns-7db6d8ff4d-bpqc6" podUID="59ad9644-a5c7-4480-bc20-dbeaa0a967d1"
Jan 29 11:32:29.052008 containerd[1495]: time="2025-01-29T11:32:29.051974572Z" level=error msg="encountered an error cleaning up failed sandbox \"068481ee10bd71b1783eeab1c40f9ef93d9dc158fa6a9beeeba5fd6980a6abd7\", marking sandbox state as SANDBOX_UNKNOWN" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/"
Jan 29 11:32:29.052087 containerd[1495]: time="2025-01-29T11:32:29.052059391Z" level=error msg="RunPodSandbox for &PodSandboxMetadata{Name:csi-node-driver-qtzv2,Uid:eb49b472-01c5-4cb5-84d5-9a1a2c4b969d,Namespace:calico-system,Attempt:0,} failed, error" error="failed to setup network for sandbox \"068481ee10bd71b1783eeab1c40f9ef93d9dc158fa6a9beeeba5fd6980a6abd7\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/"
Jan 29 11:32:29.052405 kubelet[2697]: E0129 11:32:29.052354 2697 remote_runtime.go:193] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"068481ee10bd71b1783eeab1c40f9ef93d9dc158fa6a9beeeba5fd6980a6abd7\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/"
Jan 29 11:32:29.052484 kubelet[2697]: E0129 11:32:29.052450 2697 kuberuntime_sandbox.go:72] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"068481ee10bd71b1783eeab1c40f9ef93d9dc158fa6a9beeeba5fd6980a6abd7\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-system/csi-node-driver-qtzv2"
Jan 29 11:32:29.052523 kubelet[2697]: E0129 11:32:29.052488 2697 kuberuntime_manager.go:1166] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"068481ee10bd71b1783eeab1c40f9ef93d9dc158fa6a9beeeba5fd6980a6abd7\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-system/csi-node-driver-qtzv2"
Jan 29 11:32:29.052622 kubelet[2697]: E0129 11:32:29.052553 2697 pod_workers.go:1298] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"csi-node-driver-qtzv2_calico-system(eb49b472-01c5-4cb5-84d5-9a1a2c4b969d)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"csi-node-driver-qtzv2_calico-system(eb49b472-01c5-4cb5-84d5-9a1a2c4b969d)\\\": rpc error: code = Unknown desc = failed to setup network for sandbox \\\"068481ee10bd71b1783eeab1c40f9ef93d9dc158fa6a9beeeba5fd6980a6abd7\\\": plugin type=\\\"calico\\\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="calico-system/csi-node-driver-qtzv2" podUID="eb49b472-01c5-4cb5-84d5-9a1a2c4b969d"
Jan 29 11:32:29.064569 containerd[1495]: time="2025-01-29T11:32:29.064513725Z" level=error msg="Failed to destroy network for sandbox \"bd1c4fbf33953e5862c77e9bcf30d3094180127b1d08f643c0857a95af354d3a\"" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/"
Jan 29 11:32:29.065065 containerd[1495]: time="2025-01-29T11:32:29.065026147Z" level=error msg="encountered an error cleaning up failed sandbox \"bd1c4fbf33953e5862c77e9bcf30d3094180127b1d08f643c0857a95af354d3a\", marking sandbox state as SANDBOX_UNKNOWN" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/"
Jan 29 11:32:29.065147 containerd[1495]: time="2025-01-29T11:32:29.065111497Z" level=error msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-apiserver-7f846fb45c-qnzlv,Uid:6a3ccfc9-9edc-4b98-a77a-7df17efe2895,Namespace:calico-apiserver,Attempt:0,} failed, error" error="failed to setup network for sandbox \"bd1c4fbf33953e5862c77e9bcf30d3094180127b1d08f643c0857a95af354d3a\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/"
Jan 29 11:32:29.065361 kubelet[2697]: E0129 11:32:29.065333 2697 remote_runtime.go:193] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"bd1c4fbf33953e5862c77e9bcf30d3094180127b1d08f643c0857a95af354d3a\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/"
Jan 29 11:32:29.065402 kubelet[2697]: E0129 11:32:29.065372 2697 kuberuntime_sandbox.go:72] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"bd1c4fbf33953e5862c77e9bcf30d3094180127b1d08f643c0857a95af354d3a\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-apiserver/calico-apiserver-7f846fb45c-qnzlv"
Jan 29 11:32:29.065402 kubelet[2697]: E0129 11:32:29.065388 2697 kuberuntime_manager.go:1166] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"bd1c4fbf33953e5862c77e9bcf30d3094180127b1d08f643c0857a95af354d3a\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-apiserver/calico-apiserver-7f846fb45c-qnzlv"
Jan 29 11:32:29.065477 kubelet[2697]: E0129 11:32:29.065459 2697 pod_workers.go:1298] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"calico-apiserver-7f846fb45c-qnzlv_calico-apiserver(6a3ccfc9-9edc-4b98-a77a-7df17efe2895)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"calico-apiserver-7f846fb45c-qnzlv_calico-apiserver(6a3ccfc9-9edc-4b98-a77a-7df17efe2895)\\\": rpc error: code = Unknown desc = failed to setup network for sandbox \\\"bd1c4fbf33953e5862c77e9bcf30d3094180127b1d08f643c0857a95af354d3a\\\": plugin type=\\\"calico\\\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="calico-apiserver/calico-apiserver-7f846fb45c-qnzlv" podUID="6a3ccfc9-9edc-4b98-a77a-7df17efe2895"
Jan 29 11:32:29.408764 kubelet[2697]: I0129 11:32:29.408708 2697 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="068481ee10bd71b1783eeab1c40f9ef93d9dc158fa6a9beeeba5fd6980a6abd7"
Jan 29 11:32:29.409876 kubelet[2697]: I0129 11:32:29.409310 2697 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="62fbf0cacbcea32d6948f7a7cdc9388760af0845df2b850d76ebf9e79a41f942"
Jan 29 11:32:29.409933 containerd[1495]: time="2025-01-29T11:32:29.409446351Z" level=info msg="StopPodSandbox for \"068481ee10bd71b1783eeab1c40f9ef93d9dc158fa6a9beeeba5fd6980a6abd7\""
Jan 29 11:32:29.409933 containerd[1495]: time="2025-01-29T11:32:29.409719393Z" level=info msg="Ensure that sandbox 068481ee10bd71b1783eeab1c40f9ef93d9dc158fa6a9beeeba5fd6980a6abd7 in task-service has been cleanup successfully"
Jan 29 11:32:29.409933 containerd[1495]: time="2025-01-29T11:32:29.409835952Z" level=info msg="StopPodSandbox for \"62fbf0cacbcea32d6948f7a7cdc9388760af0845df2b850d76ebf9e79a41f942\""
Jan 29 11:32:29.410153 containerd[1495]: time="2025-01-29T11:32:29.410069831Z" level=info msg="TearDown network for sandbox \"068481ee10bd71b1783eeab1c40f9ef93d9dc158fa6a9beeeba5fd6980a6abd7\" successfully"
Jan 29 11:32:29.410153 containerd[1495]: time="2025-01-29T11:32:29.410086292Z" level=info msg="StopPodSandbox for \"068481ee10bd71b1783eeab1c40f9ef93d9dc158fa6a9beeeba5fd6980a6abd7\" returns successfully"
Jan 29 11:32:29.410207 containerd[1495]: time="2025-01-29T11:32:29.410148459Z" level=info msg="Ensure that sandbox 62fbf0cacbcea32d6948f7a7cdc9388760af0845df2b850d76ebf9e79a41f942 in task-service has been cleanup successfully"
Jan 29 11:32:29.410335 kubelet[2697]: I0129 11:32:29.410295 2697 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="bd1c4fbf33953e5862c77e9bcf30d3094180127b1d08f643c0857a95af354d3a"
Jan 29 11:32:29.410369 containerd[1495]: time="2025-01-29T11:32:29.410319790Z" level=info msg="TearDown network for sandbox \"62fbf0cacbcea32d6948f7a7cdc9388760af0845df2b850d76ebf9e79a41f942\" successfully"
Jan 29 11:32:29.410369 containerd[1495]: time="2025-01-29T11:32:29.410331592Z" level=info msg="StopPodSandbox for \"62fbf0cacbcea32d6948f7a7cdc9388760af0845df2b850d76ebf9e79a41f942\" returns successfully"
Jan 29 11:32:29.410926 kubelet[2697]: E0129 11:32:29.410897 2697 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Jan 29 11:32:29.411237 containerd[1495]: time="2025-01-29T11:32:29.411194722Z" level=info msg="StopPodSandbox for \"bd1c4fbf33953e5862c77e9bcf30d3094180127b1d08f643c0857a95af354d3a\""
Jan 29 11:32:29.411237 containerd[1495]: time="2025-01-29T11:32:29.411219017Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-7db6d8ff4d-bpqc6,Uid:59ad9644-a5c7-4480-bc20-dbeaa0a967d1,Namespace:kube-system,Attempt:1,}"
Jan 29 11:32:29.411401 containerd[1495]: time="2025-01-29T11:32:29.411246719Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:csi-node-driver-qtzv2,Uid:eb49b472-01c5-4cb5-84d5-9a1a2c4b969d,Namespace:calico-system,Attempt:1,}"
Jan 29 11:32:29.411501 containerd[1495]: time="2025-01-29T11:32:29.411441165Z" level=info msg="Ensure that sandbox bd1c4fbf33953e5862c77e9bcf30d3094180127b1d08f643c0857a95af354d3a in task-service has been cleanup successfully"
Jan 29 11:32:29.411687 containerd[1495]: time="2025-01-29T11:32:29.411641901Z" level=info msg="TearDown network for sandbox \"bd1c4fbf33953e5862c77e9bcf30d3094180127b1d08f643c0857a95af354d3a\" successfully"
Jan 29 11:32:29.411687 containerd[1495]: time="2025-01-29T11:32:29.411661909Z" level=info msg="StopPodSandbox for \"bd1c4fbf33953e5862c77e9bcf30d3094180127b1d08f643c0857a95af354d3a\" returns successfully"
Jan 29 11:32:29.412067 containerd[1495]: time="2025-01-29T11:32:29.412029099Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-apiserver-7f846fb45c-qnzlv,Uid:6a3ccfc9-9edc-4b98-a77a-7df17efe2895,Namespace:calico-apiserver,Attempt:1,}"
Jan 29 11:32:29.412847 kubelet[2697]: I0129 11:32:29.412452 2697 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="88b4cc24607b9cc69cf8566db1a11a31044711ee8251a11957523ee4d62e4b16"
Jan 29 11:32:29.413031 containerd[1495]: time="2025-01-29T11:32:29.412877170Z" level=info msg="StopPodSandbox for \"88b4cc24607b9cc69cf8566db1a11a31044711ee8251a11957523ee4d62e4b16\""
Jan 29 11:32:29.413092 containerd[1495]: time="2025-01-29T11:32:29.413068519Z" level=info msg="Ensure that sandbox 88b4cc24607b9cc69cf8566db1a11a31044711ee8251a11957523ee4d62e4b16 in task-service has been cleanup successfully"
Jan 29 11:32:29.413299 containerd[1495]: time="2025-01-29T11:32:29.413265659Z" level=info msg="TearDown network for sandbox \"88b4cc24607b9cc69cf8566db1a11a31044711ee8251a11957523ee4d62e4b16\" successfully"
Jan 29 11:32:29.413299 containerd[1495]: time="2025-01-29T11:32:29.413288531Z" level=info msg="StopPodSandbox for \"88b4cc24607b9cc69cf8566db1a11a31044711ee8251a11957523ee4d62e4b16\" returns successfully"
Jan 29 11:32:29.413391 kubelet[2697]: I0129 11:32:29.413363 2697 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="a57e13c259cc81ce2e477b9469ae3554a9ae4291cb2db4e7129310c60db62f90"
Jan 29 11:32:29.413723 kubelet[2697]: E0129 11:32:29.413608 2697 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Jan 29 11:32:29.413812 containerd[1495]: time="2025-01-29T11:32:29.413782779Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-7db6d8ff4d-pgcpq,Uid:ce5f6883-5ebc-45bd-8052-20316de2d012,Namespace:kube-system,Attempt:1,}"
Jan 29 11:32:29.413905 containerd[1495]:
time="2025-01-29T11:32:29.413879992Z" level=info msg="StopPodSandbox for \"a57e13c259cc81ce2e477b9469ae3554a9ae4291cb2db4e7129310c60db62f90\"" Jan 29 11:32:29.414126 containerd[1495]: time="2025-01-29T11:32:29.414090628Z" level=info msg="Ensure that sandbox a57e13c259cc81ce2e477b9469ae3554a9ae4291cb2db4e7129310c60db62f90 in task-service has been cleanup successfully" Jan 29 11:32:29.414330 containerd[1495]: time="2025-01-29T11:32:29.414303377Z" level=info msg="TearDown network for sandbox \"a57e13c259cc81ce2e477b9469ae3554a9ae4291cb2db4e7129310c60db62f90\" successfully" Jan 29 11:32:29.414330 containerd[1495]: time="2025-01-29T11:32:29.414322412Z" level=info msg="StopPodSandbox for \"a57e13c259cc81ce2e477b9469ae3554a9ae4291cb2db4e7129310c60db62f90\" returns successfully" Jan 29 11:32:29.414875 containerd[1495]: time="2025-01-29T11:32:29.414816249Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-apiserver-7f846fb45c-49zts,Uid:355edf79-8969-4232-bff0-a38923ed3709,Namespace:calico-apiserver,Attempt:1,}" Jan 29 11:32:29.415077 kubelet[2697]: I0129 11:32:29.415059 2697 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="fe6ac203e82a4009391dc90c7b6ce83180175a75113cfc6c681bac96b7142886" Jan 29 11:32:29.415463 containerd[1495]: time="2025-01-29T11:32:29.415400636Z" level=info msg="StopPodSandbox for \"fe6ac203e82a4009391dc90c7b6ce83180175a75113cfc6c681bac96b7142886\"" Jan 29 11:32:29.415648 containerd[1495]: time="2025-01-29T11:32:29.415616842Z" level=info msg="Ensure that sandbox fe6ac203e82a4009391dc90c7b6ce83180175a75113cfc6c681bac96b7142886 in task-service has been cleanup successfully" Jan 29 11:32:29.415777 containerd[1495]: time="2025-01-29T11:32:29.415757576Z" level=info msg="TearDown network for sandbox \"fe6ac203e82a4009391dc90c7b6ce83180175a75113cfc6c681bac96b7142886\" successfully" Jan 29 11:32:29.415777 containerd[1495]: time="2025-01-29T11:32:29.415771231Z" level=info msg="StopPodSandbox for 
\"fe6ac203e82a4009391dc90c7b6ce83180175a75113cfc6c681bac96b7142886\" returns successfully" Jan 29 11:32:29.416194 containerd[1495]: time="2025-01-29T11:32:29.416164550Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-kube-controllers-748549c4c9-7d2cf,Uid:dfb9e1ad-f94c-4aa8-a1d0-d67fe50cc0e9,Namespace:calico-system,Attempt:1,}" Jan 29 11:32:29.900625 systemd[1]: run-netns-cni\x2d852d7d6a\x2d44f7\x2da1df\x2d36ea\x2d17e5a1396626.mount: Deactivated successfully. Jan 29 11:32:29.900784 systemd[1]: run-containerd-io.containerd.grpc.v1.cri-sandboxes-88b4cc24607b9cc69cf8566db1a11a31044711ee8251a11957523ee4d62e4b16-shm.mount: Deactivated successfully. Jan 29 11:32:29.900892 systemd[1]: run-netns-cni\x2d2befd61e\x2d1a22\x2d469b\x2d0c31\x2d7e1366a70465.mount: Deactivated successfully. Jan 29 11:32:29.900984 systemd[1]: run-containerd-io.containerd.grpc.v1.cri-sandboxes-a57e13c259cc81ce2e477b9469ae3554a9ae4291cb2db4e7129310c60db62f90-shm.mount: Deactivated successfully. Jan 29 11:32:29.901077 systemd[1]: run-netns-cni\x2d56dbcc1e\x2de2b7\x2d3dc7\x2d2ae9\x2d70d452ab1b59.mount: Deactivated successfully. Jan 29 11:32:29.901166 systemd[1]: run-containerd-io.containerd.grpc.v1.cri-sandboxes-62fbf0cacbcea32d6948f7a7cdc9388760af0845df2b850d76ebf9e79a41f942-shm.mount: Deactivated successfully. Jan 29 11:32:29.901256 systemd[1]: run-netns-cni\x2d90016541\x2d02db\x2df4f7\x2ddaf4\x2d59e6fe9a6ff8.mount: Deactivated successfully. Jan 29 11:32:29.901340 systemd[1]: run-containerd-io.containerd.grpc.v1.cri-sandboxes-bd1c4fbf33953e5862c77e9bcf30d3094180127b1d08f643c0857a95af354d3a-shm.mount: Deactivated successfully. Jan 29 11:32:29.901476 systemd[1]: run-netns-cni\x2d0a73032e\x2dd10e\x2db8ae\x2d893a\x2d0fc3ad486d0e.mount: Deactivated successfully. Jan 29 11:32:29.901568 systemd[1]: run-containerd-io.containerd.grpc.v1.cri-sandboxes-fe6ac203e82a4009391dc90c7b6ce83180175a75113cfc6c681bac96b7142886-shm.mount: Deactivated successfully. 
Jan 29 11:32:30.086670 containerd[1495]: time="2025-01-29T11:32:30.086510098Z" level=error msg="Failed to destroy network for sandbox \"f81dce53143ec3a050bc20013a34bc942f73112707290c1c5853735a992a3294\"" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jan 29 11:32:30.087892 containerd[1495]: time="2025-01-29T11:32:30.087654086Z" level=error msg="encountered an error cleaning up failed sandbox \"f81dce53143ec3a050bc20013a34bc942f73112707290c1c5853735a992a3294\", marking sandbox state as SANDBOX_UNKNOWN" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jan 29 11:32:30.087892 containerd[1495]: time="2025-01-29T11:32:30.087743473Z" level=error msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-apiserver-7f846fb45c-49zts,Uid:355edf79-8969-4232-bff0-a38923ed3709,Namespace:calico-apiserver,Attempt:1,} failed, error" error="failed to setup network for sandbox \"f81dce53143ec3a050bc20013a34bc942f73112707290c1c5853735a992a3294\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jan 29 11:32:30.088361 kubelet[2697]: E0129 11:32:30.088321 2697 remote_runtime.go:193] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"f81dce53143ec3a050bc20013a34bc942f73112707290c1c5853735a992a3294\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jan 29 11:32:30.088662 kubelet[2697]: E0129 11:32:30.088629 2697 kuberuntime_sandbox.go:72] "Failed to create sandbox for pod" err="rpc error: code = 
Unknown desc = failed to setup network for sandbox \"f81dce53143ec3a050bc20013a34bc942f73112707290c1c5853735a992a3294\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-apiserver/calico-apiserver-7f846fb45c-49zts" Jan 29 11:32:30.088768 kubelet[2697]: E0129 11:32:30.088750 2697 kuberuntime_manager.go:1166] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"f81dce53143ec3a050bc20013a34bc942f73112707290c1c5853735a992a3294\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-apiserver/calico-apiserver-7f846fb45c-49zts" Jan 29 11:32:30.090014 kubelet[2697]: E0129 11:32:30.088964 2697 pod_workers.go:1298] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"calico-apiserver-7f846fb45c-49zts_calico-apiserver(355edf79-8969-4232-bff0-a38923ed3709)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"calico-apiserver-7f846fb45c-49zts_calico-apiserver(355edf79-8969-4232-bff0-a38923ed3709)\\\": rpc error: code = Unknown desc = failed to setup network for sandbox \\\"f81dce53143ec3a050bc20013a34bc942f73112707290c1c5853735a992a3294\\\": plugin type=\\\"calico\\\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="calico-apiserver/calico-apiserver-7f846fb45c-49zts" podUID="355edf79-8969-4232-bff0-a38923ed3709" Jan 29 11:32:30.112095 containerd[1495]: time="2025-01-29T11:32:30.111921308Z" level=error msg="Failed to destroy network for sandbox \"934f5bf6aae6891083f2c20de48c400c17aaa8a980e059df118a8a6387f827de\"" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or 
directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jan 29 11:32:30.112747 containerd[1495]: time="2025-01-29T11:32:30.112565237Z" level=error msg="encountered an error cleaning up failed sandbox \"934f5bf6aae6891083f2c20de48c400c17aaa8a980e059df118a8a6387f827de\", marking sandbox state as SANDBOX_UNKNOWN" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jan 29 11:32:30.112747 containerd[1495]: time="2025-01-29T11:32:30.112634837Z" level=error msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-7db6d8ff4d-pgcpq,Uid:ce5f6883-5ebc-45bd-8052-20316de2d012,Namespace:kube-system,Attempt:1,} failed, error" error="failed to setup network for sandbox \"934f5bf6aae6891083f2c20de48c400c17aaa8a980e059df118a8a6387f827de\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jan 29 11:32:30.114060 kubelet[2697]: E0129 11:32:30.112963 2697 remote_runtime.go:193] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"934f5bf6aae6891083f2c20de48c400c17aaa8a980e059df118a8a6387f827de\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jan 29 11:32:30.114060 kubelet[2697]: E0129 11:32:30.113040 2697 kuberuntime_sandbox.go:72] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"934f5bf6aae6891083f2c20de48c400c17aaa8a980e059df118a8a6387f827de\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" 
pod="kube-system/coredns-7db6d8ff4d-pgcpq" Jan 29 11:32:30.114060 kubelet[2697]: E0129 11:32:30.113072 2697 kuberuntime_manager.go:1166] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"934f5bf6aae6891083f2c20de48c400c17aaa8a980e059df118a8a6387f827de\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="kube-system/coredns-7db6d8ff4d-pgcpq" Jan 29 11:32:30.114355 kubelet[2697]: E0129 11:32:30.113125 2697 pod_workers.go:1298] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"coredns-7db6d8ff4d-pgcpq_kube-system(ce5f6883-5ebc-45bd-8052-20316de2d012)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"coredns-7db6d8ff4d-pgcpq_kube-system(ce5f6883-5ebc-45bd-8052-20316de2d012)\\\": rpc error: code = Unknown desc = failed to setup network for sandbox \\\"934f5bf6aae6891083f2c20de48c400c17aaa8a980e059df118a8a6387f827de\\\": plugin type=\\\"calico\\\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="kube-system/coredns-7db6d8ff4d-pgcpq" podUID="ce5f6883-5ebc-45bd-8052-20316de2d012" Jan 29 11:32:30.116736 containerd[1495]: time="2025-01-29T11:32:30.116691390Z" level=error msg="Failed to destroy network for sandbox \"5811a36840dd1d42d19b253cd448b0069b1615ccdc5a9f870a7e114387802527\"" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jan 29 11:32:30.117529 containerd[1495]: time="2025-01-29T11:32:30.117501441Z" level=error msg="encountered an error cleaning up failed sandbox \"5811a36840dd1d42d19b253cd448b0069b1615ccdc5a9f870a7e114387802527\", marking sandbox state as SANDBOX_UNKNOWN" error="plugin 
type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jan 29 11:32:30.117673 containerd[1495]: time="2025-01-29T11:32:30.117644279Z" level=error msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-apiserver-7f846fb45c-qnzlv,Uid:6a3ccfc9-9edc-4b98-a77a-7df17efe2895,Namespace:calico-apiserver,Attempt:1,} failed, error" error="failed to setup network for sandbox \"5811a36840dd1d42d19b253cd448b0069b1615ccdc5a9f870a7e114387802527\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jan 29 11:32:30.118322 kubelet[2697]: E0129 11:32:30.118035 2697 remote_runtime.go:193] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"5811a36840dd1d42d19b253cd448b0069b1615ccdc5a9f870a7e114387802527\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jan 29 11:32:30.118322 kubelet[2697]: E0129 11:32:30.118123 2697 kuberuntime_sandbox.go:72] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"5811a36840dd1d42d19b253cd448b0069b1615ccdc5a9f870a7e114387802527\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-apiserver/calico-apiserver-7f846fb45c-qnzlv" Jan 29 11:32:30.118322 kubelet[2697]: E0129 11:32:30.118152 2697 kuberuntime_manager.go:1166] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"5811a36840dd1d42d19b253cd448b0069b1615ccdc5a9f870a7e114387802527\": plugin type=\"calico\" failed (add): 
stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-apiserver/calico-apiserver-7f846fb45c-qnzlv" Jan 29 11:32:30.118515 kubelet[2697]: E0129 11:32:30.118252 2697 pod_workers.go:1298] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"calico-apiserver-7f846fb45c-qnzlv_calico-apiserver(6a3ccfc9-9edc-4b98-a77a-7df17efe2895)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"calico-apiserver-7f846fb45c-qnzlv_calico-apiserver(6a3ccfc9-9edc-4b98-a77a-7df17efe2895)\\\": rpc error: code = Unknown desc = failed to setup network for sandbox \\\"5811a36840dd1d42d19b253cd448b0069b1615ccdc5a9f870a7e114387802527\\\": plugin type=\\\"calico\\\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="calico-apiserver/calico-apiserver-7f846fb45c-qnzlv" podUID="6a3ccfc9-9edc-4b98-a77a-7df17efe2895" Jan 29 11:32:30.125855 containerd[1495]: time="2025-01-29T11:32:30.124747679Z" level=error msg="Failed to destroy network for sandbox \"c0b5ead6c962ad21415289fbc7fdd4d4dbcb723931e95b4fc2ac10dc62090f23\"" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jan 29 11:32:30.125855 containerd[1495]: time="2025-01-29T11:32:30.125593498Z" level=error msg="Failed to destroy network for sandbox \"6b01d3b17008796e2599c4562b21cab96766aa5bc7d9347165898d1384559383\"" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jan 29 11:32:30.126371 containerd[1495]: time="2025-01-29T11:32:30.126342163Z" level=error msg="encountered an error cleaning up failed sandbox 
\"c0b5ead6c962ad21415289fbc7fdd4d4dbcb723931e95b4fc2ac10dc62090f23\", marking sandbox state as SANDBOX_UNKNOWN" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jan 29 11:32:30.126661 containerd[1495]: time="2025-01-29T11:32:30.126632427Z" level=error msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-7db6d8ff4d-bpqc6,Uid:59ad9644-a5c7-4480-bc20-dbeaa0a967d1,Namespace:kube-system,Attempt:1,} failed, error" error="failed to setup network for sandbox \"c0b5ead6c962ad21415289fbc7fdd4d4dbcb723931e95b4fc2ac10dc62090f23\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jan 29 11:32:30.126766 containerd[1495]: time="2025-01-29T11:32:30.126732965Z" level=error msg="encountered an error cleaning up failed sandbox \"6b01d3b17008796e2599c4562b21cab96766aa5bc7d9347165898d1384559383\", marking sandbox state as SANDBOX_UNKNOWN" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jan 29 11:32:30.126808 containerd[1495]: time="2025-01-29T11:32:30.126786816Z" level=error msg="RunPodSandbox for &PodSandboxMetadata{Name:csi-node-driver-qtzv2,Uid:eb49b472-01c5-4cb5-84d5-9a1a2c4b969d,Namespace:calico-system,Attempt:1,} failed, error" error="failed to setup network for sandbox \"6b01d3b17008796e2599c4562b21cab96766aa5bc7d9347165898d1384559383\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jan 29 11:32:30.127083 kubelet[2697]: E0129 11:32:30.127034 2697 remote_runtime.go:193] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = 
failed to setup network for sandbox \"c0b5ead6c962ad21415289fbc7fdd4d4dbcb723931e95b4fc2ac10dc62090f23\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jan 29 11:32:30.127141 kubelet[2697]: E0129 11:32:30.127099 2697 kuberuntime_sandbox.go:72] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"c0b5ead6c962ad21415289fbc7fdd4d4dbcb723931e95b4fc2ac10dc62090f23\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="kube-system/coredns-7db6d8ff4d-bpqc6" Jan 29 11:32:30.127192 kubelet[2697]: E0129 11:32:30.127139 2697 kuberuntime_manager.go:1166] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"c0b5ead6c962ad21415289fbc7fdd4d4dbcb723931e95b4fc2ac10dc62090f23\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="kube-system/coredns-7db6d8ff4d-bpqc6" Jan 29 11:32:30.127224 kubelet[2697]: E0129 11:32:30.127190 2697 pod_workers.go:1298] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"coredns-7db6d8ff4d-bpqc6_kube-system(59ad9644-a5c7-4480-bc20-dbeaa0a967d1)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"coredns-7db6d8ff4d-bpqc6_kube-system(59ad9644-a5c7-4480-bc20-dbeaa0a967d1)\\\": rpc error: code = Unknown desc = failed to setup network for sandbox \\\"c0b5ead6c962ad21415289fbc7fdd4d4dbcb723931e95b4fc2ac10dc62090f23\\\": plugin type=\\\"calico\\\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" 
pod="kube-system/coredns-7db6d8ff4d-bpqc6" podUID="59ad9644-a5c7-4480-bc20-dbeaa0a967d1" Jan 29 11:32:30.127300 kubelet[2697]: E0129 11:32:30.127042 2697 remote_runtime.go:193] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"6b01d3b17008796e2599c4562b21cab96766aa5bc7d9347165898d1384559383\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jan 29 11:32:30.127300 kubelet[2697]: E0129 11:32:30.127251 2697 kuberuntime_sandbox.go:72] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"6b01d3b17008796e2599c4562b21cab96766aa5bc7d9347165898d1384559383\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-system/csi-node-driver-qtzv2" Jan 29 11:32:30.127300 kubelet[2697]: E0129 11:32:30.127269 2697 kuberuntime_manager.go:1166] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"6b01d3b17008796e2599c4562b21cab96766aa5bc7d9347165898d1384559383\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-system/csi-node-driver-qtzv2" Jan 29 11:32:30.127536 kubelet[2697]: E0129 11:32:30.127298 2697 pod_workers.go:1298] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"csi-node-driver-qtzv2_calico-system(eb49b472-01c5-4cb5-84d5-9a1a2c4b969d)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"csi-node-driver-qtzv2_calico-system(eb49b472-01c5-4cb5-84d5-9a1a2c4b969d)\\\": rpc error: code = Unknown desc = failed to setup network for sandbox 
\\\"6b01d3b17008796e2599c4562b21cab96766aa5bc7d9347165898d1384559383\\\": plugin type=\\\"calico\\\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="calico-system/csi-node-driver-qtzv2" podUID="eb49b472-01c5-4cb5-84d5-9a1a2c4b969d" Jan 29 11:32:30.147100 containerd[1495]: time="2025-01-29T11:32:30.147028665Z" level=error msg="Failed to destroy network for sandbox \"72d092d80dc2ce8362dcfe998cac484b496878a763b26fd59103bd9f00b2119c\"" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jan 29 11:32:30.147622 containerd[1495]: time="2025-01-29T11:32:30.147577415Z" level=error msg="encountered an error cleaning up failed sandbox \"72d092d80dc2ce8362dcfe998cac484b496878a763b26fd59103bd9f00b2119c\", marking sandbox state as SANDBOX_UNKNOWN" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jan 29 11:32:30.147693 containerd[1495]: time="2025-01-29T11:32:30.147654860Z" level=error msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-kube-controllers-748549c4c9-7d2cf,Uid:dfb9e1ad-f94c-4aa8-a1d0-d67fe50cc0e9,Namespace:calico-system,Attempt:1,} failed, error" error="failed to setup network for sandbox \"72d092d80dc2ce8362dcfe998cac484b496878a763b26fd59103bd9f00b2119c\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jan 29 11:32:30.148029 kubelet[2697]: E0129 11:32:30.147970 2697 remote_runtime.go:193] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox 
\"72d092d80dc2ce8362dcfe998cac484b496878a763b26fd59103bd9f00b2119c\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jan 29 11:32:30.148102 kubelet[2697]: E0129 11:32:30.148049 2697 kuberuntime_sandbox.go:72] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"72d092d80dc2ce8362dcfe998cac484b496878a763b26fd59103bd9f00b2119c\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-system/calico-kube-controllers-748549c4c9-7d2cf" Jan 29 11:32:30.148102 kubelet[2697]: E0129 11:32:30.148078 2697 kuberuntime_manager.go:1166] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"72d092d80dc2ce8362dcfe998cac484b496878a763b26fd59103bd9f00b2119c\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-system/calico-kube-controllers-748549c4c9-7d2cf" Jan 29 11:32:30.148178 kubelet[2697]: E0129 11:32:30.148138 2697 pod_workers.go:1298] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"calico-kube-controllers-748549c4c9-7d2cf_calico-system(dfb9e1ad-f94c-4aa8-a1d0-d67fe50cc0e9)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"calico-kube-controllers-748549c4c9-7d2cf_calico-system(dfb9e1ad-f94c-4aa8-a1d0-d67fe50cc0e9)\\\": rpc error: code = Unknown desc = failed to setup network for sandbox \\\"72d092d80dc2ce8362dcfe998cac484b496878a763b26fd59103bd9f00b2119c\\\": plugin type=\\\"calico\\\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted 
/var/lib/calico/\"" pod="calico-system/calico-kube-controllers-748549c4c9-7d2cf" podUID="dfb9e1ad-f94c-4aa8-a1d0-d67fe50cc0e9"
Jan 29 11:32:30.419087 kubelet[2697]: I0129 11:32:30.419048 2697 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="f81dce53143ec3a050bc20013a34bc942f73112707290c1c5853735a992a3294"
Jan 29 11:32:30.419757 containerd[1495]: time="2025-01-29T11:32:30.419722447Z" level=info msg="StopPodSandbox for \"f81dce53143ec3a050bc20013a34bc942f73112707290c1c5853735a992a3294\""
Jan 29 11:32:30.420260 containerd[1495]: time="2025-01-29T11:32:30.420225721Z" level=info msg="Ensure that sandbox f81dce53143ec3a050bc20013a34bc942f73112707290c1c5853735a992a3294 in task-service has been cleanup successfully"
Jan 29 11:32:30.420800 containerd[1495]: time="2025-01-29T11:32:30.420774001Z" level=info msg="TearDown network for sandbox \"f81dce53143ec3a050bc20013a34bc942f73112707290c1c5853735a992a3294\" successfully"
Jan 29 11:32:30.420800 containerd[1495]: time="2025-01-29T11:32:30.420796052Z" level=info msg="StopPodSandbox for \"f81dce53143ec3a050bc20013a34bc942f73112707290c1c5853735a992a3294\" returns successfully"
Jan 29 11:32:30.421283 containerd[1495]: time="2025-01-29T11:32:30.421247909Z" level=info msg="StopPodSandbox for \"a57e13c259cc81ce2e477b9469ae3554a9ae4291cb2db4e7129310c60db62f90\""
Jan 29 11:32:30.421385 containerd[1495]: time="2025-01-29T11:32:30.421361643Z" level=info msg="TearDown network for sandbox \"a57e13c259cc81ce2e477b9469ae3554a9ae4291cb2db4e7129310c60db62f90\" successfully"
Jan 29 11:32:30.421385 containerd[1495]: time="2025-01-29T11:32:30.421381690Z" level=info msg="StopPodSandbox for \"a57e13c259cc81ce2e477b9469ae3554a9ae4291cb2db4e7129310c60db62f90\" returns successfully"
Jan 29 11:32:30.421490 kubelet[2697]: I0129 11:32:30.421376 2697 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="72d092d80dc2ce8362dcfe998cac484b496878a763b26fd59103bd9f00b2119c"
Jan 29 11:32:30.422502 containerd[1495]: time="2025-01-29T11:32:30.422075653Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-apiserver-7f846fb45c-49zts,Uid:355edf79-8969-4232-bff0-a38923ed3709,Namespace:calico-apiserver,Attempt:2,}"
Jan 29 11:32:30.422502 containerd[1495]: time="2025-01-29T11:32:30.422168949Z" level=info msg="StopPodSandbox for \"72d092d80dc2ce8362dcfe998cac484b496878a763b26fd59103bd9f00b2119c\""
Jan 29 11:32:30.423119 containerd[1495]: time="2025-01-29T11:32:30.423076201Z" level=info msg="Ensure that sandbox 72d092d80dc2ce8362dcfe998cac484b496878a763b26fd59103bd9f00b2119c in task-service has been cleanup successfully"
Jan 29 11:32:30.423514 containerd[1495]: time="2025-01-29T11:32:30.423447758Z" level=info msg="TearDown network for sandbox \"72d092d80dc2ce8362dcfe998cac484b496878a763b26fd59103bd9f00b2119c\" successfully"
Jan 29 11:32:30.423514 containerd[1495]: time="2025-01-29T11:32:30.423478846Z" level=info msg="StopPodSandbox for \"72d092d80dc2ce8362dcfe998cac484b496878a763b26fd59103bd9f00b2119c\" returns successfully"
Jan 29 11:32:30.424122 containerd[1495]: time="2025-01-29T11:32:30.424096155Z" level=info msg="StopPodSandbox for \"fe6ac203e82a4009391dc90c7b6ce83180175a75113cfc6c681bac96b7142886\""
Jan 29 11:32:30.424214 containerd[1495]: time="2025-01-29T11:32:30.424188769Z" level=info msg="TearDown network for sandbox \"fe6ac203e82a4009391dc90c7b6ce83180175a75113cfc6c681bac96b7142886\" successfully"
Jan 29 11:32:30.424214 containerd[1495]: time="2025-01-29T11:32:30.424210409Z" level=info msg="StopPodSandbox for \"fe6ac203e82a4009391dc90c7b6ce83180175a75113cfc6c681bac96b7142886\" returns successfully"
Jan 29 11:32:30.425753 kubelet[2697]: I0129 11:32:30.425683 2697 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="6b01d3b17008796e2599c4562b21cab96766aa5bc7d9347165898d1384559383"
Jan 29 11:32:30.427392 containerd[1495]: time="2025-01-29T11:32:30.427365180Z" level=info msg="StopPodSandbox for \"6b01d3b17008796e2599c4562b21cab96766aa5bc7d9347165898d1384559383\""
Jan 29 11:32:30.427914 containerd[1495]: time="2025-01-29T11:32:30.427878393Z" level=info msg="Ensure that sandbox 6b01d3b17008796e2599c4562b21cab96766aa5bc7d9347165898d1384559383 in task-service has been cleanup successfully"
Jan 29 11:32:30.428821 containerd[1495]: time="2025-01-29T11:32:30.428786668Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-kube-controllers-748549c4c9-7d2cf,Uid:dfb9e1ad-f94c-4aa8-a1d0-d67fe50cc0e9,Namespace:calico-system,Attempt:2,}"
Jan 29 11:32:30.430281 containerd[1495]: time="2025-01-29T11:32:30.430244634Z" level=info msg="TearDown network for sandbox \"6b01d3b17008796e2599c4562b21cab96766aa5bc7d9347165898d1384559383\" successfully"
Jan 29 11:32:30.430281 containerd[1495]: time="2025-01-29T11:32:30.430270022Z" level=info msg="StopPodSandbox for \"6b01d3b17008796e2599c4562b21cab96766aa5bc7d9347165898d1384559383\" returns successfully"
Jan 29 11:32:30.432115 containerd[1495]: time="2025-01-29T11:32:30.431900642Z" level=info msg="StopPodSandbox for \"068481ee10bd71b1783eeab1c40f9ef93d9dc158fa6a9beeeba5fd6980a6abd7\""
Jan 29 11:32:30.432115 containerd[1495]: time="2025-01-29T11:32:30.432008795Z" level=info msg="TearDown network for sandbox \"068481ee10bd71b1783eeab1c40f9ef93d9dc158fa6a9beeeba5fd6980a6abd7\" successfully"
Jan 29 11:32:30.432115 containerd[1495]: time="2025-01-29T11:32:30.432023362Z" level=info msg="StopPodSandbox for \"068481ee10bd71b1783eeab1c40f9ef93d9dc158fa6a9beeeba5fd6980a6abd7\" returns successfully"
Jan 29 11:32:30.432499 containerd[1495]: time="2025-01-29T11:32:30.432449763Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:csi-node-driver-qtzv2,Uid:eb49b472-01c5-4cb5-84d5-9a1a2c4b969d,Namespace:calico-system,Attempt:2,}"
Jan 29 11:32:30.432989 kubelet[2697]: I0129 11:32:30.432964 2697 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="c0b5ead6c962ad21415289fbc7fdd4d4dbcb723931e95b4fc2ac10dc62090f23"
Jan 29 11:32:30.433472 containerd[1495]: time="2025-01-29T11:32:30.433393393Z" level=info msg="StopPodSandbox for \"c0b5ead6c962ad21415289fbc7fdd4d4dbcb723931e95b4fc2ac10dc62090f23\""
Jan 29 11:32:30.433681 containerd[1495]: time="2025-01-29T11:32:30.433643783Z" level=info msg="Ensure that sandbox c0b5ead6c962ad21415289fbc7fdd4d4dbcb723931e95b4fc2ac10dc62090f23 in task-service has been cleanup successfully"
Jan 29 11:32:30.434192 containerd[1495]: time="2025-01-29T11:32:30.434110760Z" level=info msg="TearDown network for sandbox \"c0b5ead6c962ad21415289fbc7fdd4d4dbcb723931e95b4fc2ac10dc62090f23\" successfully"
Jan 29 11:32:30.434192 containerd[1495]: time="2025-01-29T11:32:30.434145485Z" level=info msg="StopPodSandbox for \"c0b5ead6c962ad21415289fbc7fdd4d4dbcb723931e95b4fc2ac10dc62090f23\" returns successfully"
Jan 29 11:32:30.435802 containerd[1495]: time="2025-01-29T11:32:30.435781164Z" level=info msg="StopPodSandbox for \"62fbf0cacbcea32d6948f7a7cdc9388760af0845df2b850d76ebf9e79a41f942\""
Jan 29 11:32:30.435875 containerd[1495]: time="2025-01-29T11:32:30.435860914Z" level=info msg="TearDown network for sandbox \"62fbf0cacbcea32d6948f7a7cdc9388760af0845df2b850d76ebf9e79a41f942\" successfully"
Jan 29 11:32:30.435913 containerd[1495]: time="2025-01-29T11:32:30.435872756Z" level=info msg="StopPodSandbox for \"62fbf0cacbcea32d6948f7a7cdc9388760af0845df2b850d76ebf9e79a41f942\" returns successfully"
Jan 29 11:32:30.436234 kubelet[2697]: E0129 11:32:30.436184 2697 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Jan 29 11:32:30.436551 containerd[1495]: time="2025-01-29T11:32:30.436532885Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-7db6d8ff4d-bpqc6,Uid:59ad9644-a5c7-4480-bc20-dbeaa0a967d1,Namespace:kube-system,Attempt:2,}"
Jan 29 11:32:30.437395 kubelet[2697]: I0129 11:32:30.437348 2697 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="5811a36840dd1d42d19b253cd448b0069b1615ccdc5a9f870a7e114387802527"
Jan 29 11:32:30.442452 containerd[1495]: time="2025-01-29T11:32:30.439246909Z" level=info msg="StopPodSandbox for \"5811a36840dd1d42d19b253cd448b0069b1615ccdc5a9f870a7e114387802527\""
Jan 29 11:32:30.442452 containerd[1495]: time="2025-01-29T11:32:30.440544114Z" level=info msg="Ensure that sandbox 5811a36840dd1d42d19b253cd448b0069b1615ccdc5a9f870a7e114387802527 in task-service has been cleanup successfully"
Jan 29 11:32:30.442452 containerd[1495]: time="2025-01-29T11:32:30.440801807Z" level=info msg="TearDown network for sandbox \"5811a36840dd1d42d19b253cd448b0069b1615ccdc5a9f870a7e114387802527\" successfully"
Jan 29 11:32:30.442452 containerd[1495]: time="2025-01-29T11:32:30.440817396Z" level=info msg="StopPodSandbox for \"5811a36840dd1d42d19b253cd448b0069b1615ccdc5a9f870a7e114387802527\" returns successfully"
Jan 29 11:32:30.442452 containerd[1495]: time="2025-01-29T11:32:30.441173415Z" level=info msg="StopPodSandbox for \"bd1c4fbf33953e5862c77e9bcf30d3094180127b1d08f643c0857a95af354d3a\""
Jan 29 11:32:30.442452 containerd[1495]: time="2025-01-29T11:32:30.441259035Z" level=info msg="TearDown network for sandbox \"bd1c4fbf33953e5862c77e9bcf30d3094180127b1d08f643c0857a95af354d3a\" successfully"
Jan 29 11:32:30.442452 containerd[1495]: time="2025-01-29T11:32:30.441275266Z" level=info msg="StopPodSandbox for \"bd1c4fbf33953e5862c77e9bcf30d3094180127b1d08f643c0857a95af354d3a\" returns successfully"
Jan 29 11:32:30.443766 kubelet[2697]: I0129 11:32:30.443734 2697 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="934f5bf6aae6891083f2c20de48c400c17aaa8a980e059df118a8a6387f827de"
Jan 29 11:32:30.445582 containerd[1495]: time="2025-01-29T11:32:30.445541432Z" level=info msg="StopPodSandbox for \"934f5bf6aae6891083f2c20de48c400c17aaa8a980e059df118a8a6387f827de\""
Jan 29 11:32:30.445771 containerd[1495]: time="2025-01-29T11:32:30.445747318Z" level=info msg="Ensure that sandbox 934f5bf6aae6891083f2c20de48c400c17aaa8a980e059df118a8a6387f827de in task-service has been cleanup successfully"
Jan 29 11:32:30.446076 containerd[1495]: time="2025-01-29T11:32:30.446052431Z" level=info msg="TearDown network for sandbox \"934f5bf6aae6891083f2c20de48c400c17aaa8a980e059df118a8a6387f827de\" successfully"
Jan 29 11:32:30.446111 containerd[1495]: time="2025-01-29T11:32:30.446074462Z" level=info msg="StopPodSandbox for \"934f5bf6aae6891083f2c20de48c400c17aaa8a980e059df118a8a6387f827de\" returns successfully"
Jan 29 11:32:30.453627 containerd[1495]: time="2025-01-29T11:32:30.453587763Z" level=info msg="StopPodSandbox for \"88b4cc24607b9cc69cf8566db1a11a31044711ee8251a11957523ee4d62e4b16\""
Jan 29 11:32:30.454328 containerd[1495]: time="2025-01-29T11:32:30.454298818Z" level=info msg="TearDown network for sandbox \"88b4cc24607b9cc69cf8566db1a11a31044711ee8251a11957523ee4d62e4b16\" successfully"
Jan 29 11:32:30.454328 containerd[1495]: time="2025-01-29T11:32:30.454319687Z" level=info msg="StopPodSandbox for \"88b4cc24607b9cc69cf8566db1a11a31044711ee8251a11957523ee4d62e4b16\" returns successfully"
Jan 29 11:32:30.454690 kubelet[2697]: E0129 11:32:30.454655 2697 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Jan 29 11:32:30.454754 containerd[1495]: time="2025-01-29T11:32:30.454738763Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-apiserver-7f846fb45c-qnzlv,Uid:6a3ccfc9-9edc-4b98-a77a-7df17efe2895,Namespace:calico-apiserver,Attempt:2,}"
Jan 29 11:32:30.454978 containerd[1495]: time="2025-01-29T11:32:30.454945662Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-7db6d8ff4d-pgcpq,Uid:ce5f6883-5ebc-45bd-8052-20316de2d012,Namespace:kube-system,Attempt:2,}"
Jan 29 11:32:30.608387 containerd[1495]: time="2025-01-29T11:32:30.608323399Z" level=error msg="Failed to destroy network for sandbox \"32d3ad80782f04b13fc8d95b06e337633874857fa4abf01127138c329bba0487\"" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/"
Jan 29 11:32:30.608545 containerd[1495]: time="2025-01-29T11:32:30.608422315Z" level=error msg="Failed to destroy network for sandbox \"cc31abcb76ed49cf0135c5d1e1ab407bf029dc7d3acca444a7ee37ff89eb9e88\"" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/"
Jan 29 11:32:30.611743 containerd[1495]: time="2025-01-29T11:32:30.611694737Z" level=error msg="encountered an error cleaning up failed sandbox \"32d3ad80782f04b13fc8d95b06e337633874857fa4abf01127138c329bba0487\", marking sandbox state as SANDBOX_UNKNOWN" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/"
Jan 29 11:32:30.611808 containerd[1495]: time="2025-01-29T11:32:30.611776280Z" level=error msg="RunPodSandbox for &PodSandboxMetadata{Name:csi-node-driver-qtzv2,Uid:eb49b472-01c5-4cb5-84d5-9a1a2c4b969d,Namespace:calico-system,Attempt:2,} failed, error" error="failed to setup network for sandbox \"32d3ad80782f04b13fc8d95b06e337633874857fa4abf01127138c329bba0487\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/"
Jan 29 11:32:30.612478 kubelet[2697]: E0129 11:32:30.612070 2697 remote_runtime.go:193] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"32d3ad80782f04b13fc8d95b06e337633874857fa4abf01127138c329bba0487\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/"
Jan 29 11:32:30.612478 kubelet[2697]: E0129 11:32:30.612142 2697 kuberuntime_sandbox.go:72] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"32d3ad80782f04b13fc8d95b06e337633874857fa4abf01127138c329bba0487\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-system/csi-node-driver-qtzv2"
Jan 29 11:32:30.612478 kubelet[2697]: E0129 11:32:30.612168 2697 kuberuntime_manager.go:1166] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"32d3ad80782f04b13fc8d95b06e337633874857fa4abf01127138c329bba0487\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-system/csi-node-driver-qtzv2"
Jan 29 11:32:30.612597 kubelet[2697]: E0129 11:32:30.612210 2697 pod_workers.go:1298] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"csi-node-driver-qtzv2_calico-system(eb49b472-01c5-4cb5-84d5-9a1a2c4b969d)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"csi-node-driver-qtzv2_calico-system(eb49b472-01c5-4cb5-84d5-9a1a2c4b969d)\\\": rpc error: code = Unknown desc = failed to setup network for sandbox \\\"32d3ad80782f04b13fc8d95b06e337633874857fa4abf01127138c329bba0487\\\": plugin type=\\\"calico\\\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="calico-system/csi-node-driver-qtzv2" podUID="eb49b472-01c5-4cb5-84d5-9a1a2c4b969d"
Jan 29 11:32:30.615137 containerd[1495]: time="2025-01-29T11:32:30.615087934Z" level=error msg="encountered an error cleaning up failed sandbox \"cc31abcb76ed49cf0135c5d1e1ab407bf029dc7d3acca444a7ee37ff89eb9e88\", marking sandbox state as SANDBOX_UNKNOWN" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/"
Jan 29 11:32:30.615183 containerd[1495]: time="2025-01-29T11:32:30.615164788Z" level=error msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-apiserver-7f846fb45c-49zts,Uid:355edf79-8969-4232-bff0-a38923ed3709,Namespace:calico-apiserver,Attempt:2,} failed, error" error="failed to setup network for sandbox \"cc31abcb76ed49cf0135c5d1e1ab407bf029dc7d3acca444a7ee37ff89eb9e88\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/"
Jan 29 11:32:30.615478 kubelet[2697]: E0129 11:32:30.615387 2697 remote_runtime.go:193] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"cc31abcb76ed49cf0135c5d1e1ab407bf029dc7d3acca444a7ee37ff89eb9e88\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/"
Jan 29 11:32:30.615547 kubelet[2697]: E0129 11:32:30.615499 2697 kuberuntime_sandbox.go:72] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"cc31abcb76ed49cf0135c5d1e1ab407bf029dc7d3acca444a7ee37ff89eb9e88\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-apiserver/calico-apiserver-7f846fb45c-49zts"
Jan 29 11:32:30.615584 kubelet[2697]: E0129 11:32:30.615551 2697 kuberuntime_manager.go:1166] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"cc31abcb76ed49cf0135c5d1e1ab407bf029dc7d3acca444a7ee37ff89eb9e88\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-apiserver/calico-apiserver-7f846fb45c-49zts"
Jan 29 11:32:30.615686 kubelet[2697]: E0129 11:32:30.615645 2697 pod_workers.go:1298] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"calico-apiserver-7f846fb45c-49zts_calico-apiserver(355edf79-8969-4232-bff0-a38923ed3709)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"calico-apiserver-7f846fb45c-49zts_calico-apiserver(355edf79-8969-4232-bff0-a38923ed3709)\\\": rpc error: code = Unknown desc = failed to setup network for sandbox \\\"cc31abcb76ed49cf0135c5d1e1ab407bf029dc7d3acca444a7ee37ff89eb9e88\\\": plugin type=\\\"calico\\\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="calico-apiserver/calico-apiserver-7f846fb45c-49zts" podUID="355edf79-8969-4232-bff0-a38923ed3709"
Jan 29 11:32:30.624914 containerd[1495]: time="2025-01-29T11:32:30.624608732Z" level=error msg="Failed to destroy network for sandbox \"89c5015df07cde96721832440960b9c6a01da3d920c81398d189bdc7a03b198c\"" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/"
Jan 29 11:32:30.627142 containerd[1495]: time="2025-01-29T11:32:30.627037861Z" level=error msg="encountered an error cleaning up failed sandbox \"89c5015df07cde96721832440960b9c6a01da3d920c81398d189bdc7a03b198c\", marking sandbox state as SANDBOX_UNKNOWN" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/"
Jan 29 11:32:30.627142 containerd[1495]: time="2025-01-29T11:32:30.627094608Z" level=error msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-kube-controllers-748549c4c9-7d2cf,Uid:dfb9e1ad-f94c-4aa8-a1d0-d67fe50cc0e9,Namespace:calico-system,Attempt:2,} failed, error" error="failed to setup network for sandbox \"89c5015df07cde96721832440960b9c6a01da3d920c81398d189bdc7a03b198c\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/"
Jan 29 11:32:30.633508 kubelet[2697]: E0129 11:32:30.630067 2697 remote_runtime.go:193] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"89c5015df07cde96721832440960b9c6a01da3d920c81398d189bdc7a03b198c\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/"
Jan 29 11:32:30.633508 kubelet[2697]: E0129 11:32:30.630490 2697 kuberuntime_sandbox.go:72] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"89c5015df07cde96721832440960b9c6a01da3d920c81398d189bdc7a03b198c\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-system/calico-kube-controllers-748549c4c9-7d2cf"
Jan 29 11:32:30.633508 kubelet[2697]: E0129 11:32:30.630516 2697 kuberuntime_manager.go:1166] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"89c5015df07cde96721832440960b9c6a01da3d920c81398d189bdc7a03b198c\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-system/calico-kube-controllers-748549c4c9-7d2cf"
Jan 29 11:32:30.633938 kubelet[2697]: E0129 11:32:30.630905 2697 pod_workers.go:1298] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"calico-kube-controllers-748549c4c9-7d2cf_calico-system(dfb9e1ad-f94c-4aa8-a1d0-d67fe50cc0e9)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"calico-kube-controllers-748549c4c9-7d2cf_calico-system(dfb9e1ad-f94c-4aa8-a1d0-d67fe50cc0e9)\\\": rpc error: code = Unknown desc = failed to setup network for sandbox \\\"89c5015df07cde96721832440960b9c6a01da3d920c81398d189bdc7a03b198c\\\": plugin type=\\\"calico\\\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="calico-system/calico-kube-controllers-748549c4c9-7d2cf" podUID="dfb9e1ad-f94c-4aa8-a1d0-d67fe50cc0e9"
Jan 29 11:32:30.638219 containerd[1495]: time="2025-01-29T11:32:30.638158391Z" level=error msg="Failed to destroy network for sandbox \"8066f0c386aa12e1020b2dbf729a0686e7cd3d2af7c54baa18b36cbe29e48a40\"" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/"
Jan 29 11:32:30.639382 containerd[1495]: time="2025-01-29T11:32:30.639288853Z" level=error msg="encountered an error cleaning up failed sandbox \"8066f0c386aa12e1020b2dbf729a0686e7cd3d2af7c54baa18b36cbe29e48a40\", marking sandbox state as SANDBOX_UNKNOWN" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/"
Jan 29 11:32:30.639557 containerd[1495]: time="2025-01-29T11:32:30.639386216Z" level=error msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-apiserver-7f846fb45c-qnzlv,Uid:6a3ccfc9-9edc-4b98-a77a-7df17efe2895,Namespace:calico-apiserver,Attempt:2,} failed, error" error="failed to setup network for sandbox \"8066f0c386aa12e1020b2dbf729a0686e7cd3d2af7c54baa18b36cbe29e48a40\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/"
Jan 29 11:32:30.639950 kubelet[2697]: E0129 11:32:30.639701 2697 remote_runtime.go:193] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"8066f0c386aa12e1020b2dbf729a0686e7cd3d2af7c54baa18b36cbe29e48a40\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/"
Jan 29 11:32:30.639950 kubelet[2697]: E0129 11:32:30.639764 2697 kuberuntime_sandbox.go:72] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"8066f0c386aa12e1020b2dbf729a0686e7cd3d2af7c54baa18b36cbe29e48a40\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-apiserver/calico-apiserver-7f846fb45c-qnzlv"
Jan 29 11:32:30.639950 kubelet[2697]: E0129 11:32:30.639789 2697 kuberuntime_manager.go:1166] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"8066f0c386aa12e1020b2dbf729a0686e7cd3d2af7c54baa18b36cbe29e48a40\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-apiserver/calico-apiserver-7f846fb45c-qnzlv"
Jan 29 11:32:30.640399 kubelet[2697]: E0129 11:32:30.639841 2697 pod_workers.go:1298] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"calico-apiserver-7f846fb45c-qnzlv_calico-apiserver(6a3ccfc9-9edc-4b98-a77a-7df17efe2895)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"calico-apiserver-7f846fb45c-qnzlv_calico-apiserver(6a3ccfc9-9edc-4b98-a77a-7df17efe2895)\\\": rpc error: code = Unknown desc = failed to setup network for sandbox \\\"8066f0c386aa12e1020b2dbf729a0686e7cd3d2af7c54baa18b36cbe29e48a40\\\": plugin type=\\\"calico\\\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="calico-apiserver/calico-apiserver-7f846fb45c-qnzlv" podUID="6a3ccfc9-9edc-4b98-a77a-7df17efe2895"
Jan 29 11:32:30.640806 containerd[1495]: time="2025-01-29T11:32:30.640092632Z" level=error msg="Failed to destroy network for sandbox \"a72ec10a20a86b9cbf5f8208218ea1a41fc5aef847107a9fb000132954894a40\"" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/"
Jan 29 11:32:30.640806 containerd[1495]: time="2025-01-29T11:32:30.640714438Z" level=error msg="encountered an error cleaning up failed sandbox \"a72ec10a20a86b9cbf5f8208218ea1a41fc5aef847107a9fb000132954894a40\", marking sandbox state as SANDBOX_UNKNOWN" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/"
Jan 29 11:32:30.640806 containerd[1495]: time="2025-01-29T11:32:30.640753993Z" level=error msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-7db6d8ff4d-bpqc6,Uid:59ad9644-a5c7-4480-bc20-dbeaa0a967d1,Namespace:kube-system,Attempt:2,} failed, error" error="failed to setup network for sandbox \"a72ec10a20a86b9cbf5f8208218ea1a41fc5aef847107a9fb000132954894a40\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/"
Jan 29 11:32:30.641043 kubelet[2697]: E0129 11:32:30.640983 2697 remote_runtime.go:193] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"a72ec10a20a86b9cbf5f8208218ea1a41fc5aef847107a9fb000132954894a40\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/"
Jan 29 11:32:30.641043 kubelet[2697]: E0129 11:32:30.641009 2697 kuberuntime_sandbox.go:72] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"a72ec10a20a86b9cbf5f8208218ea1a41fc5aef847107a9fb000132954894a40\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="kube-system/coredns-7db6d8ff4d-bpqc6"
Jan 29 11:32:30.641043 kubelet[2697]: E0129 11:32:30.641028 2697 kuberuntime_manager.go:1166] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"a72ec10a20a86b9cbf5f8208218ea1a41fc5aef847107a9fb000132954894a40\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="kube-system/coredns-7db6d8ff4d-bpqc6"
Jan 29 11:32:30.641189 kubelet[2697]: E0129 11:32:30.641054 2697 pod_workers.go:1298] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"coredns-7db6d8ff4d-bpqc6_kube-system(59ad9644-a5c7-4480-bc20-dbeaa0a967d1)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"coredns-7db6d8ff4d-bpqc6_kube-system(59ad9644-a5c7-4480-bc20-dbeaa0a967d1)\\\": rpc error: code = Unknown desc = failed to setup network for sandbox \\\"a72ec10a20a86b9cbf5f8208218ea1a41fc5aef847107a9fb000132954894a40\\\": plugin type=\\\"calico\\\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="kube-system/coredns-7db6d8ff4d-bpqc6" podUID="59ad9644-a5c7-4480-bc20-dbeaa0a967d1"
Jan 29 11:32:30.669313 containerd[1495]: time="2025-01-29T11:32:30.669192404Z" level=error msg="Failed to destroy network for sandbox \"d569cb7c5ddbed0b16a9e1171769ace0a51b3c89a0c48b2f5957786231c09e94\"" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/"
Jan 29 11:32:30.669857 containerd[1495]: time="2025-01-29T11:32:30.669835330Z" level=error msg="encountered an error cleaning up failed sandbox \"d569cb7c5ddbed0b16a9e1171769ace0a51b3c89a0c48b2f5957786231c09e94\", marking sandbox state as SANDBOX_UNKNOWN" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/"
Jan 29 11:32:30.669972 containerd[1495]: time="2025-01-29T11:32:30.669955356Z" level=error msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-7db6d8ff4d-pgcpq,Uid:ce5f6883-5ebc-45bd-8052-20316de2d012,Namespace:kube-system,Attempt:2,} failed, error" error="failed to setup network for sandbox \"d569cb7c5ddbed0b16a9e1171769ace0a51b3c89a0c48b2f5957786231c09e94\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/"
Jan 29 11:32:30.670318 kubelet[2697]: E0129 11:32:30.670258 2697 remote_runtime.go:193] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"d569cb7c5ddbed0b16a9e1171769ace0a51b3c89a0c48b2f5957786231c09e94\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/"
Jan 29 11:32:30.670374 kubelet[2697]: E0129 11:32:30.670334 2697 kuberuntime_sandbox.go:72] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"d569cb7c5ddbed0b16a9e1171769ace0a51b3c89a0c48b2f5957786231c09e94\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="kube-system/coredns-7db6d8ff4d-pgcpq"
Jan 29 11:32:30.670398 kubelet[2697]: E0129 11:32:30.670362 2697 kuberuntime_manager.go:1166] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"d569cb7c5ddbed0b16a9e1171769ace0a51b3c89a0c48b2f5957786231c09e94\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="kube-system/coredns-7db6d8ff4d-pgcpq"
Jan 29 11:32:30.670491 kubelet[2697]: E0129 11:32:30.670457 2697 pod_workers.go:1298] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"coredns-7db6d8ff4d-pgcpq_kube-system(ce5f6883-5ebc-45bd-8052-20316de2d012)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"coredns-7db6d8ff4d-pgcpq_kube-system(ce5f6883-5ebc-45bd-8052-20316de2d012)\\\": rpc error: code = Unknown desc = failed to setup network for sandbox \\\"d569cb7c5ddbed0b16a9e1171769ace0a51b3c89a0c48b2f5957786231c09e94\\\": plugin type=\\\"calico\\\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="kube-system/coredns-7db6d8ff4d-pgcpq" podUID="ce5f6883-5ebc-45bd-8052-20316de2d012"
Jan 29 11:32:30.901953 systemd[1]: run-netns-cni\x2d9ec3d834\x2d3c61\x2dd030\x2da945\x2d6835153d6eec.mount: Deactivated successfully.
Jan 29 11:32:30.902063 systemd[1]: run-containerd-io.containerd.grpc.v1.cri-sandboxes-c0b5ead6c962ad21415289fbc7fdd4d4dbcb723931e95b4fc2ac10dc62090f23-shm.mount: Deactivated successfully.
Jan 29 11:32:30.902142 systemd[1]: run-netns-cni\x2de217c521\x2d51f7\x2d1c21\x2dda52\x2d577f24f0f6ae.mount: Deactivated successfully.
Jan 29 11:32:30.902226 systemd[1]: run-containerd-io.containerd.grpc.v1.cri-sandboxes-f81dce53143ec3a050bc20013a34bc942f73112707290c1c5853735a992a3294-shm.mount: Deactivated successfully.
Jan 29 11:32:31.447829 kubelet[2697]: I0129 11:32:31.447035 2697 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="32d3ad80782f04b13fc8d95b06e337633874857fa4abf01127138c329bba0487"
Jan 29 11:32:31.448302 containerd[1495]: time="2025-01-29T11:32:31.447654817Z" level=info msg="StopPodSandbox for \"32d3ad80782f04b13fc8d95b06e337633874857fa4abf01127138c329bba0487\""
Jan 29 11:32:31.448302 containerd[1495]: time="2025-01-29T11:32:31.447916027Z" level=info msg="Ensure that sandbox 32d3ad80782f04b13fc8d95b06e337633874857fa4abf01127138c329bba0487 in task-service has been cleanup successfully"
Jan 29 11:32:31.450637 containerd[1495]: time="2025-01-29T11:32:31.450568564Z" level=info msg="TearDown network for sandbox \"32d3ad80782f04b13fc8d95b06e337633874857fa4abf01127138c329bba0487\" successfully"
Jan 29 11:32:31.450637 containerd[1495]: time="2025-01-29T11:32:31.450591768Z" level=info msg="StopPodSandbox for \"32d3ad80782f04b13fc8d95b06e337633874857fa4abf01127138c329bba0487\" returns successfully"
Jan 29 11:32:31.450822 systemd[1]: run-netns-cni\x2d1e0c789e\x2db8f8\x2dea9c\x2d612e\x2d8823f3552bc7.mount: Deactivated successfully.
Jan 29 11:32:31.450925 containerd[1495]: time="2025-01-29T11:32:31.450857287Z" level=info msg="StopPodSandbox for \"6b01d3b17008796e2599c4562b21cab96766aa5bc7d9347165898d1384559383\""
Jan 29 11:32:31.451007 containerd[1495]: time="2025-01-29T11:32:31.450983213Z" level=info msg="TearDown network for sandbox \"6b01d3b17008796e2599c4562b21cab96766aa5bc7d9347165898d1384559383\" successfully"
Jan 29 11:32:31.451039 containerd[1495]: time="2025-01-29T11:32:31.451005154Z" level=info msg="StopPodSandbox for \"6b01d3b17008796e2599c4562b21cab96766aa5bc7d9347165898d1384559383\" returns successfully"
Jan 29 11:32:31.575650 containerd[1495]: time="2025-01-29T11:32:31.575597870Z" level=info msg="StopPodSandbox for \"068481ee10bd71b1783eeab1c40f9ef93d9dc158fa6a9beeeba5fd6980a6abd7\""
Jan 29 11:32:31.575780 containerd[1495]: time="2025-01-29T11:32:31.575722053Z" level=info msg="TearDown network for sandbox \"068481ee10bd71b1783eeab1c40f9ef93d9dc158fa6a9beeeba5fd6980a6abd7\" successfully"
Jan 29 11:32:31.575780 containerd[1495]: time="2025-01-29T11:32:31.575731520Z" level=info msg="StopPodSandbox for \"068481ee10bd71b1783eeab1c40f9ef93d9dc158fa6a9beeeba5fd6980a6abd7\" returns successfully"
Jan 29 11:32:31.576441 kubelet[2697]: I0129 11:32:31.576051 2697 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="a72ec10a20a86b9cbf5f8208218ea1a41fc5aef847107a9fb000132954894a40"
Jan 29 11:32:31.576492 containerd[1495]: time="2025-01-29T11:32:31.576332889Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:csi-node-driver-qtzv2,Uid:eb49b472-01c5-4cb5-84d5-9a1a2c4b969d,Namespace:calico-system,Attempt:3,}"
Jan 29 11:32:31.576636 containerd[1495]: time="2025-01-29T11:32:31.576603477Z" level=info msg="StopPodSandbox for \"a72ec10a20a86b9cbf5f8208218ea1a41fc5aef847107a9fb000132954894a40\""
Jan 29 11:32:31.576997 containerd[1495]: time="2025-01-29T11:32:31.576777514Z" level=info msg="Ensure that sandbox a72ec10a20a86b9cbf5f8208218ea1a41fc5aef847107a9fb000132954894a40 in task-service has been cleanup successfully"
Jan 29 11:32:31.577220 containerd[1495]: time="2025-01-29T11:32:31.577162736Z" level=info msg="TearDown network for sandbox \"a72ec10a20a86b9cbf5f8208218ea1a41fc5aef847107a9fb000132954894a40\" successfully"
Jan 29 11:32:31.577301 containerd[1495]: time="2025-01-29T11:32:31.577287701Z" level=info msg="StopPodSandbox for \"a72ec10a20a86b9cbf5f8208218ea1a41fc5aef847107a9fb000132954894a40\" returns successfully"
Jan 29 11:32:31.577644 kubelet[2697]: I0129 11:32:31.577609 2697 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="8066f0c386aa12e1020b2dbf729a0686e7cd3d2af7c54baa18b36cbe29e48a40"
Jan 29 11:32:31.578496 containerd[1495]: time="2025-01-29T11:32:31.577948471Z" level=info msg="StopPodSandbox for \"c0b5ead6c962ad21415289fbc7fdd4d4dbcb723931e95b4fc2ac10dc62090f23\""
Jan 29 11:32:31.578496 containerd[1495]: time="2025-01-29T11:32:31.578047286Z" level=info msg="TearDown network for sandbox \"c0b5ead6c962ad21415289fbc7fdd4d4dbcb723931e95b4fc2ac10dc62090f23\" successfully"
Jan 29 11:32:31.578496 containerd[1495]: time="2025-01-29T11:32:31.578058006Z" level=info msg="StopPodSandbox for \"c0b5ead6c962ad21415289fbc7fdd4d4dbcb723931e95b4fc2ac10dc62090f23\" returns successfully"
Jan 29 11:32:31.578744 containerd[1495]: time="2025-01-29T11:32:31.578723445Z" level=info msg="StopPodSandbox for \"8066f0c386aa12e1020b2dbf729a0686e7cd3d2af7c54baa18b36cbe29e48a40\""
Jan 29 11:32:31.578877 containerd[1495]: time="2025-01-29T11:32:31.578732172Z" level=info msg="StopPodSandbox for \"62fbf0cacbcea32d6948f7a7cdc9388760af0845df2b850d76ebf9e79a41f942\""
Jan 29 11:32:31.579314 containerd[1495]: time="2025-01-29T11:32:31.579093550Z" level=info msg="TearDown network for sandbox \"62fbf0cacbcea32d6948f7a7cdc9388760af0845df2b850d76ebf9e79a41f942\" successfully"
Jan 29 11:32:31.579314 containerd[1495]: time="2025-01-29T11:32:31.579108839Z" level=info
msg="StopPodSandbox for \"62fbf0cacbcea32d6948f7a7cdc9388760af0845df2b850d76ebf9e79a41f942\" returns successfully" Jan 29 11:32:31.579314 containerd[1495]: time="2025-01-29T11:32:31.578934742Z" level=info msg="Ensure that sandbox 8066f0c386aa12e1020b2dbf729a0686e7cd3d2af7c54baa18b36cbe29e48a40 in task-service has been cleanup successfully" Jan 29 11:32:31.579249 systemd[1]: run-netns-cni\x2d40b30c2e\x2d40d9\x2d81b8\x2ddc6a\x2da3441ae91802.mount: Deactivated successfully. Jan 29 11:32:31.580119 kubelet[2697]: E0129 11:32:31.579640 2697 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jan 29 11:32:31.580197 containerd[1495]: time="2025-01-29T11:32:31.579655594Z" level=info msg="TearDown network for sandbox \"8066f0c386aa12e1020b2dbf729a0686e7cd3d2af7c54baa18b36cbe29e48a40\" successfully" Jan 29 11:32:31.580197 containerd[1495]: time="2025-01-29T11:32:31.579669320Z" level=info msg="StopPodSandbox for \"8066f0c386aa12e1020b2dbf729a0686e7cd3d2af7c54baa18b36cbe29e48a40\" returns successfully" Jan 29 11:32:31.580197 containerd[1495]: time="2025-01-29T11:32:31.579823240Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-7db6d8ff4d-bpqc6,Uid:59ad9644-a5c7-4480-bc20-dbeaa0a967d1,Namespace:kube-system,Attempt:3,}" Jan 29 11:32:31.580645 containerd[1495]: time="2025-01-29T11:32:31.580628050Z" level=info msg="StopPodSandbox for \"5811a36840dd1d42d19b253cd448b0069b1615ccdc5a9f870a7e114387802527\"" Jan 29 11:32:31.580916 containerd[1495]: time="2025-01-29T11:32:31.580772982Z" level=info msg="TearDown network for sandbox \"5811a36840dd1d42d19b253cd448b0069b1615ccdc5a9f870a7e114387802527\" successfully" Jan 29 11:32:31.580916 containerd[1495]: time="2025-01-29T11:32:31.580787459Z" level=info msg="StopPodSandbox for \"5811a36840dd1d42d19b253cd448b0069b1615ccdc5a9f870a7e114387802527\" returns successfully" Jan 29 11:32:31.582082 kubelet[2697]: 
I0129 11:32:31.581295 2697 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="d569cb7c5ddbed0b16a9e1171769ace0a51b3c89a0c48b2f5957786231c09e94" Jan 29 11:32:31.582135 containerd[1495]: time="2025-01-29T11:32:31.581751117Z" level=info msg="StopPodSandbox for \"d569cb7c5ddbed0b16a9e1171769ace0a51b3c89a0c48b2f5957786231c09e94\"" Jan 29 11:32:31.582135 containerd[1495]: time="2025-01-29T11:32:31.581941705Z" level=info msg="Ensure that sandbox d569cb7c5ddbed0b16a9e1171769ace0a51b3c89a0c48b2f5957786231c09e94 in task-service has been cleanup successfully" Jan 29 11:32:31.582677 containerd[1495]: time="2025-01-29T11:32:31.582656396Z" level=info msg="StopPodSandbox for \"bd1c4fbf33953e5862c77e9bcf30d3094180127b1d08f643c0857a95af354d3a\"" Jan 29 11:32:31.582957 containerd[1495]: time="2025-01-29T11:32:31.582805415Z" level=info msg="TearDown network for sandbox \"bd1c4fbf33953e5862c77e9bcf30d3094180127b1d08f643c0857a95af354d3a\" successfully" Jan 29 11:32:31.582957 containerd[1495]: time="2025-01-29T11:32:31.582819884Z" level=info msg="StopPodSandbox for \"bd1c4fbf33953e5862c77e9bcf30d3094180127b1d08f643c0857a95af354d3a\" returns successfully" Jan 29 11:32:31.583094 containerd[1495]: time="2025-01-29T11:32:31.583077507Z" level=info msg="TearDown network for sandbox \"d569cb7c5ddbed0b16a9e1171769ace0a51b3c89a0c48b2f5957786231c09e94\" successfully" Jan 29 11:32:31.583148 containerd[1495]: time="2025-01-29T11:32:31.583135816Z" level=info msg="StopPodSandbox for \"d569cb7c5ddbed0b16a9e1171769ace0a51b3c89a0c48b2f5957786231c09e94\" returns successfully" Jan 29 11:32:31.583430 systemd[1]: run-netns-cni\x2dcc715e8a\x2d4481\x2d0de5\x2d9897\x2dee8e6ddc5dba.mount: Deactivated successfully. 
Jan 29 11:32:31.584319 containerd[1495]: time="2025-01-29T11:32:31.584257100Z" level=info msg="StopPodSandbox for \"934f5bf6aae6891083f2c20de48c400c17aaa8a980e059df118a8a6387f827de\"" Jan 29 11:32:31.584367 containerd[1495]: time="2025-01-29T11:32:31.584344213Z" level=info msg="TearDown network for sandbox \"934f5bf6aae6891083f2c20de48c400c17aaa8a980e059df118a8a6387f827de\" successfully" Jan 29 11:32:31.584367 containerd[1495]: time="2025-01-29T11:32:31.584354753Z" level=info msg="StopPodSandbox for \"934f5bf6aae6891083f2c20de48c400c17aaa8a980e059df118a8a6387f827de\" returns successfully" Jan 29 11:32:31.585105 containerd[1495]: time="2025-01-29T11:32:31.585076808Z" level=info msg="StopPodSandbox for \"88b4cc24607b9cc69cf8566db1a11a31044711ee8251a11957523ee4d62e4b16\"" Jan 29 11:32:31.585186 containerd[1495]: time="2025-01-29T11:32:31.585168540Z" level=info msg="TearDown network for sandbox \"88b4cc24607b9cc69cf8566db1a11a31044711ee8251a11957523ee4d62e4b16\" successfully" Jan 29 11:32:31.585186 containerd[1495]: time="2025-01-29T11:32:31.585182426Z" level=info msg="StopPodSandbox for \"88b4cc24607b9cc69cf8566db1a11a31044711ee8251a11957523ee4d62e4b16\" returns successfully" Jan 29 11:32:31.585477 kubelet[2697]: E0129 11:32:31.585400 2697 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jan 29 11:32:31.585847 containerd[1495]: time="2025-01-29T11:32:31.585813360Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-7db6d8ff4d-pgcpq,Uid:ce5f6883-5ebc-45bd-8052-20316de2d012,Namespace:kube-system,Attempt:3,}" Jan 29 11:32:31.586630 kubelet[2697]: I0129 11:32:31.586571 2697 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="cc31abcb76ed49cf0135c5d1e1ab407bf029dc7d3acca444a7ee37ff89eb9e88" Jan 29 11:32:31.587080 containerd[1495]: time="2025-01-29T11:32:31.587019885Z" level=info msg="StopPodSandbox for 
\"cc31abcb76ed49cf0135c5d1e1ab407bf029dc7d3acca444a7ee37ff89eb9e88\"" Jan 29 11:32:31.587339 containerd[1495]: time="2025-01-29T11:32:31.587195003Z" level=info msg="Ensure that sandbox cc31abcb76ed49cf0135c5d1e1ab407bf029dc7d3acca444a7ee37ff89eb9e88 in task-service has been cleanup successfully" Jan 29 11:32:31.587479 containerd[1495]: time="2025-01-29T11:32:31.587463136Z" level=info msg="TearDown network for sandbox \"cc31abcb76ed49cf0135c5d1e1ab407bf029dc7d3acca444a7ee37ff89eb9e88\" successfully" Jan 29 11:32:31.587538 containerd[1495]: time="2025-01-29T11:32:31.587526737Z" level=info msg="StopPodSandbox for \"cc31abcb76ed49cf0135c5d1e1ab407bf029dc7d3acca444a7ee37ff89eb9e88\" returns successfully" Jan 29 11:32:31.589273 containerd[1495]: time="2025-01-29T11:32:31.588343620Z" level=info msg="StopPodSandbox for \"f81dce53143ec3a050bc20013a34bc942f73112707290c1c5853735a992a3294\"" Jan 29 11:32:31.589273 containerd[1495]: time="2025-01-29T11:32:31.588515181Z" level=info msg="TearDown network for sandbox \"f81dce53143ec3a050bc20013a34bc942f73112707290c1c5853735a992a3294\" successfully" Jan 29 11:32:31.589273 containerd[1495]: time="2025-01-29T11:32:31.588530209Z" level=info msg="StopPodSandbox for \"f81dce53143ec3a050bc20013a34bc942f73112707290c1c5853735a992a3294\" returns successfully" Jan 29 11:32:31.589273 containerd[1495]: time="2025-01-29T11:32:31.589062247Z" level=info msg="StopPodSandbox for \"a57e13c259cc81ce2e477b9469ae3554a9ae4291cb2db4e7129310c60db62f90\"" Jan 29 11:32:31.589273 containerd[1495]: time="2025-01-29T11:32:31.589138241Z" level=info msg="TearDown network for sandbox \"a57e13c259cc81ce2e477b9469ae3554a9ae4291cb2db4e7129310c60db62f90\" successfully" Jan 29 11:32:31.589273 containerd[1495]: time="2025-01-29T11:32:31.589148720Z" level=info msg="StopPodSandbox for \"a57e13c259cc81ce2e477b9469ae3554a9ae4291cb2db4e7129310c60db62f90\" returns successfully" Jan 29 11:32:31.588579 systemd[1]: 
run-netns-cni\x2d8304d498\x2ded39\x2dbb2f\x2dc801\x2d28cfe8c9c371.mount: Deactivated successfully. Jan 29 11:32:31.589671 containerd[1495]: time="2025-01-29T11:32:31.589608683Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-apiserver-7f846fb45c-49zts,Uid:355edf79-8969-4232-bff0-a38923ed3709,Namespace:calico-apiserver,Attempt:3,}" Jan 29 11:32:31.591011 kubelet[2697]: I0129 11:32:31.590749 2697 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="89c5015df07cde96721832440960b9c6a01da3d920c81398d189bdc7a03b198c" Jan 29 11:32:31.591460 containerd[1495]: time="2025-01-29T11:32:31.591435612Z" level=info msg="StopPodSandbox for \"89c5015df07cde96721832440960b9c6a01da3d920c81398d189bdc7a03b198c\"" Jan 29 11:32:31.591709 containerd[1495]: time="2025-01-29T11:32:31.591677937Z" level=info msg="Ensure that sandbox 89c5015df07cde96721832440960b9c6a01da3d920c81398d189bdc7a03b198c in task-service has been cleanup successfully" Jan 29 11:32:31.591912 containerd[1495]: time="2025-01-29T11:32:31.591886418Z" level=info msg="TearDown network for sandbox \"89c5015df07cde96721832440960b9c6a01da3d920c81398d189bdc7a03b198c\" successfully" Jan 29 11:32:31.591912 containerd[1495]: time="2025-01-29T11:32:31.591906215Z" level=info msg="StopPodSandbox for \"89c5015df07cde96721832440960b9c6a01da3d920c81398d189bdc7a03b198c\" returns successfully" Jan 29 11:32:31.592576 containerd[1495]: time="2025-01-29T11:32:31.592556345Z" level=info msg="StopPodSandbox for \"72d092d80dc2ce8362dcfe998cac484b496878a763b26fd59103bd9f00b2119c\"" Jan 29 11:32:31.592672 containerd[1495]: time="2025-01-29T11:32:31.592647917Z" level=info msg="TearDown network for sandbox \"72d092d80dc2ce8362dcfe998cac484b496878a763b26fd59103bd9f00b2119c\" successfully" Jan 29 11:32:31.592672 containerd[1495]: time="2025-01-29T11:32:31.592661502Z" level=info msg="StopPodSandbox for \"72d092d80dc2ce8362dcfe998cac484b496878a763b26fd59103bd9f00b2119c\" returns successfully" Jan 29 
11:32:31.593011 containerd[1495]: time="2025-01-29T11:32:31.592888147Z" level=info msg="StopPodSandbox for \"fe6ac203e82a4009391dc90c7b6ce83180175a75113cfc6c681bac96b7142886\"" Jan 29 11:32:31.593011 containerd[1495]: time="2025-01-29T11:32:31.592961896Z" level=info msg="TearDown network for sandbox \"fe6ac203e82a4009391dc90c7b6ce83180175a75113cfc6c681bac96b7142886\" successfully" Jan 29 11:32:31.593011 containerd[1495]: time="2025-01-29T11:32:31.592970773Z" level=info msg="StopPodSandbox for \"fe6ac203e82a4009391dc90c7b6ce83180175a75113cfc6c681bac96b7142886\" returns successfully" Jan 29 11:32:31.593995 containerd[1495]: time="2025-01-29T11:32:31.593798827Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-kube-controllers-748549c4c9-7d2cf,Uid:dfb9e1ad-f94c-4aa8-a1d0-d67fe50cc0e9,Namespace:calico-system,Attempt:3,}" Jan 29 11:32:31.594949 containerd[1495]: time="2025-01-29T11:32:31.594923998Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-apiserver-7f846fb45c-qnzlv,Uid:6a3ccfc9-9edc-4b98-a77a-7df17efe2895,Namespace:calico-apiserver,Attempt:3,}" Jan 29 11:32:31.784867 containerd[1495]: time="2025-01-29T11:32:31.784627784Z" level=error msg="Failed to destroy network for sandbox \"4e1e197df2d2535e387f27fb3ad1d9788046231aaa3dcab8b067d5acba99c1c6\"" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jan 29 11:32:31.787035 containerd[1495]: time="2025-01-29T11:32:31.786068478Z" level=error msg="encountered an error cleaning up failed sandbox \"4e1e197df2d2535e387f27fb3ad1d9788046231aaa3dcab8b067d5acba99c1c6\", marking sandbox state as SANDBOX_UNKNOWN" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jan 29 11:32:31.787035 containerd[1495]: 
time="2025-01-29T11:32:31.786126016Z" level=error msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-7db6d8ff4d-bpqc6,Uid:59ad9644-a5c7-4480-bc20-dbeaa0a967d1,Namespace:kube-system,Attempt:3,} failed, error" error="failed to setup network for sandbox \"4e1e197df2d2535e387f27fb3ad1d9788046231aaa3dcab8b067d5acba99c1c6\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jan 29 11:32:31.787115 kubelet[2697]: E0129 11:32:31.786400 2697 remote_runtime.go:193] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"4e1e197df2d2535e387f27fb3ad1d9788046231aaa3dcab8b067d5acba99c1c6\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jan 29 11:32:31.787115 kubelet[2697]: E0129 11:32:31.786569 2697 kuberuntime_sandbox.go:72] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"4e1e197df2d2535e387f27fb3ad1d9788046231aaa3dcab8b067d5acba99c1c6\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="kube-system/coredns-7db6d8ff4d-bpqc6" Jan 29 11:32:31.787115 kubelet[2697]: E0129 11:32:31.786597 2697 kuberuntime_manager.go:1166] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"4e1e197df2d2535e387f27fb3ad1d9788046231aaa3dcab8b067d5acba99c1c6\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="kube-system/coredns-7db6d8ff4d-bpqc6" Jan 29 11:32:31.787302 kubelet[2697]: E0129 11:32:31.786669 
2697 pod_workers.go:1298] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"coredns-7db6d8ff4d-bpqc6_kube-system(59ad9644-a5c7-4480-bc20-dbeaa0a967d1)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"coredns-7db6d8ff4d-bpqc6_kube-system(59ad9644-a5c7-4480-bc20-dbeaa0a967d1)\\\": rpc error: code = Unknown desc = failed to setup network for sandbox \\\"4e1e197df2d2535e387f27fb3ad1d9788046231aaa3dcab8b067d5acba99c1c6\\\": plugin type=\\\"calico\\\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="kube-system/coredns-7db6d8ff4d-bpqc6" podUID="59ad9644-a5c7-4480-bc20-dbeaa0a967d1" Jan 29 11:32:31.799966 containerd[1495]: time="2025-01-29T11:32:31.799735897Z" level=error msg="Failed to destroy network for sandbox \"959ee7c5fd0e3b26b779fc1b7c9c34bb6768bd7de43d74c8453896c45bbc7058\"" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jan 29 11:32:31.800389 containerd[1495]: time="2025-01-29T11:32:31.800345470Z" level=error msg="encountered an error cleaning up failed sandbox \"959ee7c5fd0e3b26b779fc1b7c9c34bb6768bd7de43d74c8453896c45bbc7058\", marking sandbox state as SANDBOX_UNKNOWN" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jan 29 11:32:31.800858 containerd[1495]: time="2025-01-29T11:32:31.800835410Z" level=error msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-kube-controllers-748549c4c9-7d2cf,Uid:dfb9e1ad-f94c-4aa8-a1d0-d67fe50cc0e9,Namespace:calico-system,Attempt:3,} failed, error" error="failed to setup network for sandbox \"959ee7c5fd0e3b26b779fc1b7c9c34bb6768bd7de43d74c8453896c45bbc7058\": plugin type=\"calico\" failed (add): stat 
/var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jan 29 11:32:31.801537 kubelet[2697]: E0129 11:32:31.801168 2697 remote_runtime.go:193] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"959ee7c5fd0e3b26b779fc1b7c9c34bb6768bd7de43d74c8453896c45bbc7058\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jan 29 11:32:31.801537 kubelet[2697]: E0129 11:32:31.801225 2697 kuberuntime_sandbox.go:72] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"959ee7c5fd0e3b26b779fc1b7c9c34bb6768bd7de43d74c8453896c45bbc7058\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-system/calico-kube-controllers-748549c4c9-7d2cf" Jan 29 11:32:31.801537 kubelet[2697]: E0129 11:32:31.801244 2697 kuberuntime_manager.go:1166] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"959ee7c5fd0e3b26b779fc1b7c9c34bb6768bd7de43d74c8453896c45bbc7058\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-system/calico-kube-controllers-748549c4c9-7d2cf" Jan 29 11:32:31.801669 kubelet[2697]: E0129 11:32:31.801292 2697 pod_workers.go:1298] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"calico-kube-controllers-748549c4c9-7d2cf_calico-system(dfb9e1ad-f94c-4aa8-a1d0-d67fe50cc0e9)\" with CreatePodSandboxError: \"Failed to create sandbox for pod 
\\\"calico-kube-controllers-748549c4c9-7d2cf_calico-system(dfb9e1ad-f94c-4aa8-a1d0-d67fe50cc0e9)\\\": rpc error: code = Unknown desc = failed to setup network for sandbox \\\"959ee7c5fd0e3b26b779fc1b7c9c34bb6768bd7de43d74c8453896c45bbc7058\\\": plugin type=\\\"calico\\\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="calico-system/calico-kube-controllers-748549c4c9-7d2cf" podUID="dfb9e1ad-f94c-4aa8-a1d0-d67fe50cc0e9" Jan 29 11:32:31.807933 containerd[1495]: time="2025-01-29T11:32:31.807869991Z" level=error msg="Failed to destroy network for sandbox \"5ba01231171e716c7d6bfe7ea5616f070e4f98ceb9b8630b477b92af2ec4f966\"" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jan 29 11:32:31.808476 containerd[1495]: time="2025-01-29T11:32:31.808365642Z" level=error msg="Failed to destroy network for sandbox \"b39c2348c38274ae69c6ca2882dbe664e0f843199c327bc4d8145d923c55a15b\"" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jan 29 11:32:31.808715 containerd[1495]: time="2025-01-29T11:32:31.808695110Z" level=error msg="encountered an error cleaning up failed sandbox \"5ba01231171e716c7d6bfe7ea5616f070e4f98ceb9b8630b477b92af2ec4f966\", marking sandbox state as SANDBOX_UNKNOWN" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jan 29 11:32:31.808826 containerd[1495]: time="2025-01-29T11:32:31.808804485Z" level=error msg="RunPodSandbox for 
&PodSandboxMetadata{Name:calico-apiserver-7f846fb45c-qnzlv,Uid:6a3ccfc9-9edc-4b98-a77a-7df17efe2895,Namespace:calico-apiserver,Attempt:3,} failed, error" error="failed to setup network for sandbox \"5ba01231171e716c7d6bfe7ea5616f070e4f98ceb9b8630b477b92af2ec4f966\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jan 29 11:32:31.809293 kubelet[2697]: E0129 11:32:31.809122 2697 remote_runtime.go:193] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"5ba01231171e716c7d6bfe7ea5616f070e4f98ceb9b8630b477b92af2ec4f966\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jan 29 11:32:31.809293 kubelet[2697]: E0129 11:32:31.809178 2697 kuberuntime_sandbox.go:72] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"5ba01231171e716c7d6bfe7ea5616f070e4f98ceb9b8630b477b92af2ec4f966\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-apiserver/calico-apiserver-7f846fb45c-qnzlv" Jan 29 11:32:31.809293 kubelet[2697]: E0129 11:32:31.809198 2697 kuberuntime_manager.go:1166] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"5ba01231171e716c7d6bfe7ea5616f070e4f98ceb9b8630b477b92af2ec4f966\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-apiserver/calico-apiserver-7f846fb45c-qnzlv" Jan 29 11:32:31.809404 kubelet[2697]: E0129 11:32:31.809235 2697 pod_workers.go:1298] 
"Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"calico-apiserver-7f846fb45c-qnzlv_calico-apiserver(6a3ccfc9-9edc-4b98-a77a-7df17efe2895)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"calico-apiserver-7f846fb45c-qnzlv_calico-apiserver(6a3ccfc9-9edc-4b98-a77a-7df17efe2895)\\\": rpc error: code = Unknown desc = failed to setup network for sandbox \\\"5ba01231171e716c7d6bfe7ea5616f070e4f98ceb9b8630b477b92af2ec4f966\\\": plugin type=\\\"calico\\\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="calico-apiserver/calico-apiserver-7f846fb45c-qnzlv" podUID="6a3ccfc9-9edc-4b98-a77a-7df17efe2895" Jan 29 11:32:31.816478 containerd[1495]: time="2025-01-29T11:32:31.816307956Z" level=error msg="Failed to destroy network for sandbox \"97436e09e8a3af61a6c30171fb86bac060178da354f2257a8dfbf354e1e82387\"" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jan 29 11:32:31.816765 containerd[1495]: time="2025-01-29T11:32:31.816741259Z" level=error msg="encountered an error cleaning up failed sandbox \"97436e09e8a3af61a6c30171fb86bac060178da354f2257a8dfbf354e1e82387\", marking sandbox state as SANDBOX_UNKNOWN" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jan 29 11:32:31.816866 containerd[1495]: time="2025-01-29T11:32:31.816803786Z" level=error msg="RunPodSandbox for &PodSandboxMetadata{Name:csi-node-driver-qtzv2,Uid:eb49b472-01c5-4cb5-84d5-9a1a2c4b969d,Namespace:calico-system,Attempt:3,} failed, error" error="failed to setup network for sandbox \"97436e09e8a3af61a6c30171fb86bac060178da354f2257a8dfbf354e1e82387\": plugin type=\"calico\" failed (add): stat 
/var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jan 29 11:32:31.817075 kubelet[2697]: E0129 11:32:31.817033 2697 remote_runtime.go:193] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"97436e09e8a3af61a6c30171fb86bac060178da354f2257a8dfbf354e1e82387\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jan 29 11:32:31.817122 kubelet[2697]: E0129 11:32:31.817101 2697 kuberuntime_sandbox.go:72] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"97436e09e8a3af61a6c30171fb86bac060178da354f2257a8dfbf354e1e82387\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-system/csi-node-driver-qtzv2" Jan 29 11:32:31.817150 kubelet[2697]: E0129 11:32:31.817121 2697 kuberuntime_manager.go:1166] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"97436e09e8a3af61a6c30171fb86bac060178da354f2257a8dfbf354e1e82387\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-system/csi-node-driver-qtzv2" Jan 29 11:32:31.817188 kubelet[2697]: E0129 11:32:31.817164 2697 pod_workers.go:1298] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"csi-node-driver-qtzv2_calico-system(eb49b472-01c5-4cb5-84d5-9a1a2c4b969d)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"csi-node-driver-qtzv2_calico-system(eb49b472-01c5-4cb5-84d5-9a1a2c4b969d)\\\": rpc error: code = Unknown desc = failed to setup 
network for sandbox \\\"97436e09e8a3af61a6c30171fb86bac060178da354f2257a8dfbf354e1e82387\\\": plugin type=\\\"calico\\\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="calico-system/csi-node-driver-qtzv2" podUID="eb49b472-01c5-4cb5-84d5-9a1a2c4b969d" Jan 29 11:32:31.827660 containerd[1495]: time="2025-01-29T11:32:31.827572135Z" level=error msg="encountered an error cleaning up failed sandbox \"b39c2348c38274ae69c6ca2882dbe664e0f843199c327bc4d8145d923c55a15b\", marking sandbox state as SANDBOX_UNKNOWN" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jan 29 11:32:31.827812 containerd[1495]: time="2025-01-29T11:32:31.827688935Z" level=error msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-apiserver-7f846fb45c-49zts,Uid:355edf79-8969-4232-bff0-a38923ed3709,Namespace:calico-apiserver,Attempt:3,} failed, error" error="failed to setup network for sandbox \"b39c2348c38274ae69c6ca2882dbe664e0f843199c327bc4d8145d923c55a15b\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jan 29 11:32:31.828017 kubelet[2697]: E0129 11:32:31.827932 2697 remote_runtime.go:193] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"b39c2348c38274ae69c6ca2882dbe664e0f843199c327bc4d8145d923c55a15b\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jan 29 11:32:31.828017 kubelet[2697]: E0129 11:32:31.827976 2697 kuberuntime_sandbox.go:72] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to setup 
network for sandbox \"b39c2348c38274ae69c6ca2882dbe664e0f843199c327bc4d8145d923c55a15b\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-apiserver/calico-apiserver-7f846fb45c-49zts" Jan 29 11:32:31.828017 kubelet[2697]: E0129 11:32:31.828022 2697 kuberuntime_manager.go:1166] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"b39c2348c38274ae69c6ca2882dbe664e0f843199c327bc4d8145d923c55a15b\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-apiserver/calico-apiserver-7f846fb45c-49zts" Jan 29 11:32:31.828246 kubelet[2697]: E0129 11:32:31.828058 2697 pod_workers.go:1298] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"calico-apiserver-7f846fb45c-49zts_calico-apiserver(355edf79-8969-4232-bff0-a38923ed3709)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"calico-apiserver-7f846fb45c-49zts_calico-apiserver(355edf79-8969-4232-bff0-a38923ed3709)\\\": rpc error: code = Unknown desc = failed to setup network for sandbox \\\"b39c2348c38274ae69c6ca2882dbe664e0f843199c327bc4d8145d923c55a15b\\\": plugin type=\\\"calico\\\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="calico-apiserver/calico-apiserver-7f846fb45c-49zts" podUID="355edf79-8969-4232-bff0-a38923ed3709" Jan 29 11:32:31.830570 containerd[1495]: time="2025-01-29T11:32:31.830538722Z" level=error msg="Failed to destroy network for sandbox \"c60489ec5828713ee6df69e7a33bc62e8d4338df25cba506d3f6f340dd21c518\"" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the 
calico/node container is running and has mounted /var/lib/calico/" Jan 29 11:32:31.830969 containerd[1495]: time="2025-01-29T11:32:31.830940968Z" level=error msg="encountered an error cleaning up failed sandbox \"c60489ec5828713ee6df69e7a33bc62e8d4338df25cba506d3f6f340dd21c518\", marking sandbox state as SANDBOX_UNKNOWN" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jan 29 11:32:31.831090 containerd[1495]: time="2025-01-29T11:32:31.831064790Z" level=error msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-7db6d8ff4d-pgcpq,Uid:ce5f6883-5ebc-45bd-8052-20316de2d012,Namespace:kube-system,Attempt:3,} failed, error" error="failed to setup network for sandbox \"c60489ec5828713ee6df69e7a33bc62e8d4338df25cba506d3f6f340dd21c518\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jan 29 11:32:31.831335 kubelet[2697]: E0129 11:32:31.831293 2697 remote_runtime.go:193] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"c60489ec5828713ee6df69e7a33bc62e8d4338df25cba506d3f6f340dd21c518\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jan 29 11:32:31.831391 kubelet[2697]: E0129 11:32:31.831360 2697 kuberuntime_sandbox.go:72] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"c60489ec5828713ee6df69e7a33bc62e8d4338df25cba506d3f6f340dd21c518\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="kube-system/coredns-7db6d8ff4d-pgcpq" 
Jan 29 11:32:31.831391 kubelet[2697]: E0129 11:32:31.831380 2697 kuberuntime_manager.go:1166] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"c60489ec5828713ee6df69e7a33bc62e8d4338df25cba506d3f6f340dd21c518\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="kube-system/coredns-7db6d8ff4d-pgcpq" Jan 29 11:32:31.831511 kubelet[2697]: E0129 11:32:31.831454 2697 pod_workers.go:1298] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"coredns-7db6d8ff4d-pgcpq_kube-system(ce5f6883-5ebc-45bd-8052-20316de2d012)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"coredns-7db6d8ff4d-pgcpq_kube-system(ce5f6883-5ebc-45bd-8052-20316de2d012)\\\": rpc error: code = Unknown desc = failed to setup network for sandbox \\\"c60489ec5828713ee6df69e7a33bc62e8d4338df25cba506d3f6f340dd21c518\\\": plugin type=\\\"calico\\\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="kube-system/coredns-7db6d8ff4d-pgcpq" podUID="ce5f6883-5ebc-45bd-8052-20316de2d012" Jan 29 11:32:31.902539 systemd[1]: run-netns-cni\x2d1f2130cc\x2d95f9\x2d5cb9\x2d4762\x2de48b3e3076f6.mount: Deactivated successfully. Jan 29 11:32:31.902640 systemd[1]: run-netns-cni\x2df512017c\x2d5c49\x2dbd03\x2dcf8b\x2dd1a2b9c9a430.mount: Deactivated successfully. 
Jan 29 11:32:32.596054 kubelet[2697]: I0129 11:32:32.595721 2697 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="b39c2348c38274ae69c6ca2882dbe664e0f843199c327bc4d8145d923c55a15b" Jan 29 11:32:32.598139 containerd[1495]: time="2025-01-29T11:32:32.597347650Z" level=info msg="StopPodSandbox for \"b39c2348c38274ae69c6ca2882dbe664e0f843199c327bc4d8145d923c55a15b\"" Jan 29 11:32:32.598139 containerd[1495]: time="2025-01-29T11:32:32.597582470Z" level=info msg="Ensure that sandbox b39c2348c38274ae69c6ca2882dbe664e0f843199c327bc4d8145d923c55a15b in task-service has been cleanup successfully" Jan 29 11:32:32.598139 containerd[1495]: time="2025-01-29T11:32:32.598076107Z" level=info msg="TearDown network for sandbox \"b39c2348c38274ae69c6ca2882dbe664e0f843199c327bc4d8145d923c55a15b\" successfully" Jan 29 11:32:32.598139 containerd[1495]: time="2025-01-29T11:32:32.598088711Z" level=info msg="StopPodSandbox for \"b39c2348c38274ae69c6ca2882dbe664e0f843199c327bc4d8145d923c55a15b\" returns successfully" Jan 29 11:32:32.599105 containerd[1495]: time="2025-01-29T11:32:32.598664632Z" level=info msg="StopPodSandbox for \"cc31abcb76ed49cf0135c5d1e1ab407bf029dc7d3acca444a7ee37ff89eb9e88\"" Jan 29 11:32:32.601206 kubelet[2697]: I0129 11:32:32.601169 2697 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="959ee7c5fd0e3b26b779fc1b7c9c34bb6768bd7de43d74c8453896c45bbc7058" Jan 29 11:32:32.602069 systemd[1]: run-netns-cni\x2d9f56a0ba\x2df795\x2df2e2\x2d83db\x2d9eb7b8836bcf.mount: Deactivated successfully. 
Jan 29 11:32:32.606981 kubelet[2697]: I0129 11:32:32.606123 2697 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="97436e09e8a3af61a6c30171fb86bac060178da354f2257a8dfbf354e1e82387" Jan 29 11:32:32.610305 kubelet[2697]: I0129 11:32:32.609995 2697 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="4e1e197df2d2535e387f27fb3ad1d9788046231aaa3dcab8b067d5acba99c1c6" Jan 29 11:32:32.610455 containerd[1495]: time="2025-01-29T11:32:32.598734262Z" level=info msg="TearDown network for sandbox \"cc31abcb76ed49cf0135c5d1e1ab407bf029dc7d3acca444a7ee37ff89eb9e88\" successfully" Jan 29 11:32:32.610514 containerd[1495]: time="2025-01-29T11:32:32.610437905Z" level=info msg="StopPodSandbox for \"cc31abcb76ed49cf0135c5d1e1ab407bf029dc7d3acca444a7ee37ff89eb9e88\" returns successfully" Jan 29 11:32:32.610514 containerd[1495]: time="2025-01-29T11:32:32.610465056Z" level=info msg="StopPodSandbox for \"4e1e197df2d2535e387f27fb3ad1d9788046231aaa3dcab8b067d5acba99c1c6\"" Jan 29 11:32:32.610514 containerd[1495]: time="2025-01-29T11:32:32.601986756Z" level=info msg="StopPodSandbox for \"959ee7c5fd0e3b26b779fc1b7c9c34bb6768bd7de43d74c8453896c45bbc7058\"" Jan 29 11:32:32.610777 containerd[1495]: time="2025-01-29T11:32:32.610733158Z" level=info msg="Ensure that sandbox 4e1e197df2d2535e387f27fb3ad1d9788046231aaa3dcab8b067d5acba99c1c6 in task-service has been cleanup successfully" Jan 29 11:32:32.610859 containerd[1495]: time="2025-01-29T11:32:32.610835581Z" level=info msg="StopPodSandbox for \"f81dce53143ec3a050bc20013a34bc942f73112707290c1c5853735a992a3294\"" Jan 29 11:32:32.610963 containerd[1495]: time="2025-01-29T11:32:32.610918566Z" level=info msg="TearDown network for sandbox \"f81dce53143ec3a050bc20013a34bc942f73112707290c1c5853735a992a3294\" successfully" Jan 29 11:32:32.610963 containerd[1495]: time="2025-01-29T11:32:32.610952410Z" level=info msg="StopPodSandbox for 
\"f81dce53143ec3a050bc20013a34bc942f73112707290c1c5853735a992a3294\" returns successfully" Jan 29 11:32:32.611161 containerd[1495]: time="2025-01-29T11:32:32.610750631Z" level=info msg="Ensure that sandbox 959ee7c5fd0e3b26b779fc1b7c9c34bb6768bd7de43d74c8453896c45bbc7058 in task-service has been cleanup successfully" Jan 29 11:32:32.611161 containerd[1495]: time="2025-01-29T11:32:32.607296169Z" level=info msg="StopPodSandbox for \"97436e09e8a3af61a6c30171fb86bac060178da354f2257a8dfbf354e1e82387\"" Jan 29 11:32:32.611260 containerd[1495]: time="2025-01-29T11:32:32.611235892Z" level=info msg="TearDown network for sandbox \"4e1e197df2d2535e387f27fb3ad1d9788046231aaa3dcab8b067d5acba99c1c6\" successfully" Jan 29 11:32:32.611260 containerd[1495]: time="2025-01-29T11:32:32.611254477Z" level=info msg="StopPodSandbox for \"4e1e197df2d2535e387f27fb3ad1d9788046231aaa3dcab8b067d5acba99c1c6\" returns successfully" Jan 29 11:32:32.611330 containerd[1495]: time="2025-01-29T11:32:32.611262071Z" level=info msg="Ensure that sandbox 97436e09e8a3af61a6c30171fb86bac060178da354f2257a8dfbf354e1e82387 in task-service has been cleanup successfully" Jan 29 11:32:32.611571 containerd[1495]: time="2025-01-29T11:32:32.611541766Z" level=info msg="TearDown network for sandbox \"97436e09e8a3af61a6c30171fb86bac060178da354f2257a8dfbf354e1e82387\" successfully" Jan 29 11:32:32.611571 containerd[1495]: time="2025-01-29T11:32:32.611560862Z" level=info msg="StopPodSandbox for \"97436e09e8a3af61a6c30171fb86bac060178da354f2257a8dfbf354e1e82387\" returns successfully" Jan 29 11:32:32.611975 containerd[1495]: time="2025-01-29T11:32:32.611899156Z" level=info msg="TearDown network for sandbox \"959ee7c5fd0e3b26b779fc1b7c9c34bb6768bd7de43d74c8453896c45bbc7058\" successfully" Jan 29 11:32:32.611975 containerd[1495]: time="2025-01-29T11:32:32.611910538Z" level=info msg="StopPodSandbox for \"959ee7c5fd0e3b26b779fc1b7c9c34bb6768bd7de43d74c8453896c45bbc7058\" returns successfully" Jan 29 11:32:32.612250 
containerd[1495]: time="2025-01-29T11:32:32.612231430Z" level=info msg="StopPodSandbox for \"a72ec10a20a86b9cbf5f8208218ea1a41fc5aef847107a9fb000132954894a40\"" Jan 29 11:32:32.612348 containerd[1495]: time="2025-01-29T11:32:32.612332289Z" level=info msg="TearDown network for sandbox \"a72ec10a20a86b9cbf5f8208218ea1a41fc5aef847107a9fb000132954894a40\" successfully" Jan 29 11:32:32.612371 containerd[1495]: time="2025-01-29T11:32:32.612345985Z" level=info msg="StopPodSandbox for \"a72ec10a20a86b9cbf5f8208218ea1a41fc5aef847107a9fb000132954894a40\" returns successfully" Jan 29 11:32:32.612391 containerd[1495]: time="2025-01-29T11:32:32.612364830Z" level=info msg="StopPodSandbox for \"32d3ad80782f04b13fc8d95b06e337633874857fa4abf01127138c329bba0487\"" Jan 29 11:32:32.612453 containerd[1495]: time="2025-01-29T11:32:32.612405727Z" level=info msg="StopPodSandbox for \"a57e13c259cc81ce2e477b9469ae3554a9ae4291cb2db4e7129310c60db62f90\"" Jan 29 11:32:32.612493 containerd[1495]: time="2025-01-29T11:32:32.612476460Z" level=info msg="TearDown network for sandbox \"32d3ad80782f04b13fc8d95b06e337633874857fa4abf01127138c329bba0487\" successfully" Jan 29 11:32:32.612522 containerd[1495]: time="2025-01-29T11:32:32.612491067Z" level=info msg="StopPodSandbox for \"32d3ad80782f04b13fc8d95b06e337633874857fa4abf01127138c329bba0487\" returns successfully" Jan 29 11:32:32.612552 containerd[1495]: time="2025-01-29T11:32:32.612504903Z" level=info msg="TearDown network for sandbox \"a57e13c259cc81ce2e477b9469ae3554a9ae4291cb2db4e7129310c60db62f90\" successfully" Jan 29 11:32:32.612552 containerd[1495]: time="2025-01-29T11:32:32.612530571Z" level=info msg="StopPodSandbox for \"a57e13c259cc81ce2e477b9469ae3554a9ae4291cb2db4e7129310c60db62f90\" returns successfully" Jan 29 11:32:32.613168 containerd[1495]: time="2025-01-29T11:32:32.613060406Z" level=info msg="StopPodSandbox for \"89c5015df07cde96721832440960b9c6a01da3d920c81398d189bdc7a03b198c\"" Jan 29 11:32:32.613168 containerd[1495]: 
time="2025-01-29T11:32:32.613091614Z" level=info msg="StopPodSandbox for \"6b01d3b17008796e2599c4562b21cab96766aa5bc7d9347165898d1384559383\"" Jan 29 11:32:32.613231 containerd[1495]: time="2025-01-29T11:32:32.613216549Z" level=info msg="TearDown network for sandbox \"6b01d3b17008796e2599c4562b21cab96766aa5bc7d9347165898d1384559383\" successfully" Jan 29 11:32:32.613231 containerd[1495]: time="2025-01-29T11:32:32.613228211Z" level=info msg="StopPodSandbox for \"6b01d3b17008796e2599c4562b21cab96766aa5bc7d9347165898d1384559383\" returns successfully" Jan 29 11:32:32.613282 containerd[1495]: time="2025-01-29T11:32:32.613253909Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-apiserver-7f846fb45c-49zts,Uid:355edf79-8969-4232-bff0-a38923ed3709,Namespace:calico-apiserver,Attempt:4,}" Jan 29 11:32:32.613795 containerd[1495]: time="2025-01-29T11:32:32.613766641Z" level=info msg="StopPodSandbox for \"068481ee10bd71b1783eeab1c40f9ef93d9dc158fa6a9beeeba5fd6980a6abd7\"" Jan 29 11:32:32.614098 containerd[1495]: time="2025-01-29T11:32:32.614018894Z" level=info msg="TearDown network for sandbox \"068481ee10bd71b1783eeab1c40f9ef93d9dc158fa6a9beeeba5fd6980a6abd7\" successfully" Jan 29 11:32:32.614098 containerd[1495]: time="2025-01-29T11:32:32.614058659Z" level=info msg="StopPodSandbox for \"068481ee10bd71b1783eeab1c40f9ef93d9dc158fa6a9beeeba5fd6980a6abd7\" returns successfully" Jan 29 11:32:32.614098 containerd[1495]: time="2025-01-29T11:32:32.613888680Z" level=info msg="TearDown network for sandbox \"89c5015df07cde96721832440960b9c6a01da3d920c81398d189bdc7a03b198c\" successfully" Jan 29 11:32:32.614098 containerd[1495]: time="2025-01-29T11:32:32.614091310Z" level=info msg="StopPodSandbox for \"89c5015df07cde96721832440960b9c6a01da3d920c81398d189bdc7a03b198c\" returns successfully" Jan 29 11:32:32.614198 containerd[1495]: time="2025-01-29T11:32:32.614100357Z" level=info msg="StopPodSandbox for \"c0b5ead6c962ad21415289fbc7fdd4d4dbcb723931e95b4fc2ac10dc62090f23\"" 
Jan 29 11:32:32.614198 containerd[1495]: time="2025-01-29T11:32:32.614172773Z" level=info msg="TearDown network for sandbox \"c0b5ead6c962ad21415289fbc7fdd4d4dbcb723931e95b4fc2ac10dc62090f23\" successfully" Jan 29 11:32:32.614198 containerd[1495]: time="2025-01-29T11:32:32.614182251Z" level=info msg="StopPodSandbox for \"c0b5ead6c962ad21415289fbc7fdd4d4dbcb723931e95b4fc2ac10dc62090f23\" returns successfully" Jan 29 11:32:32.614165 systemd[1]: run-netns-cni\x2dc3a1adaa\x2d4680\x2dc088\x2df7a5\x2d7ccea1b472a4.mount: Deactivated successfully. Jan 29 11:32:32.614576 containerd[1495]: time="2025-01-29T11:32:32.614556083Z" level=info msg="StopPodSandbox for \"72d092d80dc2ce8362dcfe998cac484b496878a763b26fd59103bd9f00b2119c\"" Jan 29 11:32:32.614892 containerd[1495]: time="2025-01-29T11:32:32.614640531Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:csi-node-driver-qtzv2,Uid:eb49b472-01c5-4cb5-84d5-9a1a2c4b969d,Namespace:calico-system,Attempt:4,}" Jan 29 11:32:32.614892 containerd[1495]: time="2025-01-29T11:32:32.614713328Z" level=info msg="TearDown network for sandbox \"72d092d80dc2ce8362dcfe998cac484b496878a763b26fd59103bd9f00b2119c\" successfully" Jan 29 11:32:32.614892 containerd[1495]: time="2025-01-29T11:32:32.614725220Z" level=info msg="StopPodSandbox for \"72d092d80dc2ce8362dcfe998cac484b496878a763b26fd59103bd9f00b2119c\" returns successfully" Jan 29 11:32:32.614892 containerd[1495]: time="2025-01-29T11:32:32.614779402Z" level=info msg="StopPodSandbox for \"62fbf0cacbcea32d6948f7a7cdc9388760af0845df2b850d76ebf9e79a41f942\"" Jan 29 11:32:32.614892 containerd[1495]: time="2025-01-29T11:32:32.614837310Z" level=info msg="TearDown network for sandbox \"62fbf0cacbcea32d6948f7a7cdc9388760af0845df2b850d76ebf9e79a41f942\" successfully" Jan 29 11:32:32.614892 containerd[1495]: time="2025-01-29T11:32:32.614846417Z" level=info msg="StopPodSandbox for \"62fbf0cacbcea32d6948f7a7cdc9388760af0845df2b850d76ebf9e79a41f942\" returns successfully" Jan 29 11:32:32.615034 
kubelet[2697]: I0129 11:32:32.615024 2697 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="5ba01231171e716c7d6bfe7ea5616f070e4f98ceb9b8630b477b92af2ec4f966" Jan 29 11:32:32.615103 kubelet[2697]: E0129 11:32:32.615076 2697 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jan 29 11:32:32.615675 containerd[1495]: time="2025-01-29T11:32:32.615658010Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-7db6d8ff4d-bpqc6,Uid:59ad9644-a5c7-4480-bc20-dbeaa0a967d1,Namespace:kube-system,Attempt:4,}" Jan 29 11:32:32.615913 containerd[1495]: time="2025-01-29T11:32:32.615670714Z" level=info msg="StopPodSandbox for \"fe6ac203e82a4009391dc90c7b6ce83180175a75113cfc6c681bac96b7142886\"" Jan 29 11:32:32.616029 containerd[1495]: time="2025-01-29T11:32:32.616014419Z" level=info msg="TearDown network for sandbox \"fe6ac203e82a4009391dc90c7b6ce83180175a75113cfc6c681bac96b7142886\" successfully" Jan 29 11:32:32.616082 containerd[1495]: time="2025-01-29T11:32:32.616070956Z" level=info msg="StopPodSandbox for \"fe6ac203e82a4009391dc90c7b6ce83180175a75113cfc6c681bac96b7142886\" returns successfully" Jan 29 11:32:32.616370 containerd[1495]: time="2025-01-29T11:32:32.615718243Z" level=info msg="StopPodSandbox for \"5ba01231171e716c7d6bfe7ea5616f070e4f98ceb9b8630b477b92af2ec4f966\"" Jan 29 11:32:32.616740 containerd[1495]: time="2025-01-29T11:32:32.616717689Z" level=info msg="Ensure that sandbox 5ba01231171e716c7d6bfe7ea5616f070e4f98ceb9b8630b477b92af2ec4f966 in task-service has been cleanup successfully" Jan 29 11:32:32.616922 containerd[1495]: time="2025-01-29T11:32:32.616890644Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-kube-controllers-748549c4c9-7d2cf,Uid:dfb9e1ad-f94c-4aa8-a1d0-d67fe50cc0e9,Namespace:calico-system,Attempt:4,}" Jan 29 11:32:32.617053 systemd[1]: 
run-netns-cni\x2db8018929\x2dc1bf\x2de810\x2d14e7\x2d8f9ae05bf5cf.mount: Deactivated successfully. Jan 29 11:32:32.617179 containerd[1495]: time="2025-01-29T11:32:32.617053239Z" level=info msg="TearDown network for sandbox \"5ba01231171e716c7d6bfe7ea5616f070e4f98ceb9b8630b477b92af2ec4f966\" successfully" Jan 29 11:32:32.617179 containerd[1495]: time="2025-01-29T11:32:32.617088625Z" level=info msg="StopPodSandbox for \"5ba01231171e716c7d6bfe7ea5616f070e4f98ceb9b8630b477b92af2ec4f966\" returns successfully" Jan 29 11:32:32.617200 systemd[1]: run-netns-cni\x2d8a49b756\x2d233a\x2dcb62\x2dc9cb\x2dadecaaf31c05.mount: Deactivated successfully. Jan 29 11:32:32.617588 containerd[1495]: time="2025-01-29T11:32:32.617454141Z" level=info msg="StopPodSandbox for \"8066f0c386aa12e1020b2dbf729a0686e7cd3d2af7c54baa18b36cbe29e48a40\"" Jan 29 11:32:32.617588 containerd[1495]: time="2025-01-29T11:32:32.617525685Z" level=info msg="TearDown network for sandbox \"8066f0c386aa12e1020b2dbf729a0686e7cd3d2af7c54baa18b36cbe29e48a40\" successfully" Jan 29 11:32:32.617588 containerd[1495]: time="2025-01-29T11:32:32.617535553Z" level=info msg="StopPodSandbox for \"8066f0c386aa12e1020b2dbf729a0686e7cd3d2af7c54baa18b36cbe29e48a40\" returns successfully" Jan 29 11:32:32.618066 containerd[1495]: time="2025-01-29T11:32:32.618041844Z" level=info msg="StopPodSandbox for \"5811a36840dd1d42d19b253cd448b0069b1615ccdc5a9f870a7e114387802527\"" Jan 29 11:32:32.618133 containerd[1495]: time="2025-01-29T11:32:32.618117746Z" level=info msg="TearDown network for sandbox \"5811a36840dd1d42d19b253cd448b0069b1615ccdc5a9f870a7e114387802527\" successfully" Jan 29 11:32:32.618133 containerd[1495]: time="2025-01-29T11:32:32.618130961Z" level=info msg="StopPodSandbox for \"5811a36840dd1d42d19b253cd448b0069b1615ccdc5a9f870a7e114387802527\" returns successfully" Jan 29 11:32:32.619311 containerd[1495]: time="2025-01-29T11:32:32.619185199Z" level=info msg="StopPodSandbox for 
\"bd1c4fbf33953e5862c77e9bcf30d3094180127b1d08f643c0857a95af354d3a\"" Jan 29 11:32:32.619552 containerd[1495]: time="2025-01-29T11:32:32.619497406Z" level=info msg="TearDown network for sandbox \"bd1c4fbf33953e5862c77e9bcf30d3094180127b1d08f643c0857a95af354d3a\" successfully" Jan 29 11:32:32.619552 containerd[1495]: time="2025-01-29T11:32:32.619512303Z" level=info msg="StopPodSandbox for \"bd1c4fbf33953e5862c77e9bcf30d3094180127b1d08f643c0857a95af354d3a\" returns successfully" Jan 29 11:32:32.619778 kubelet[2697]: I0129 11:32:32.619763 2697 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="c60489ec5828713ee6df69e7a33bc62e8d4338df25cba506d3f6f340dd21c518" Jan 29 11:32:32.620065 containerd[1495]: time="2025-01-29T11:32:32.620047187Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-apiserver-7f846fb45c-qnzlv,Uid:6a3ccfc9-9edc-4b98-a77a-7df17efe2895,Namespace:calico-apiserver,Attempt:4,}" Jan 29 11:32:32.620707 containerd[1495]: time="2025-01-29T11:32:32.620394258Z" level=info msg="StopPodSandbox for \"c60489ec5828713ee6df69e7a33bc62e8d4338df25cba506d3f6f340dd21c518\"" Jan 29 11:32:32.620854 containerd[1495]: time="2025-01-29T11:32:32.620835576Z" level=info msg="Ensure that sandbox c60489ec5828713ee6df69e7a33bc62e8d4338df25cba506d3f6f340dd21c518 in task-service has been cleanup successfully" Jan 29 11:32:32.621211 containerd[1495]: time="2025-01-29T11:32:32.621193959Z" level=info msg="TearDown network for sandbox \"c60489ec5828713ee6df69e7a33bc62e8d4338df25cba506d3f6f340dd21c518\" successfully" Jan 29 11:32:32.621211 containerd[1495]: time="2025-01-29T11:32:32.621208236Z" level=info msg="StopPodSandbox for \"c60489ec5828713ee6df69e7a33bc62e8d4338df25cba506d3f6f340dd21c518\" returns successfully" Jan 29 11:32:32.621528 containerd[1495]: time="2025-01-29T11:32:32.621508930Z" level=info msg="StopPodSandbox for \"d569cb7c5ddbed0b16a9e1171769ace0a51b3c89a0c48b2f5957786231c09e94\"" Jan 29 11:32:32.621750 containerd[1495]: 
time="2025-01-29T11:32:32.621699217Z" level=info msg="TearDown network for sandbox \"d569cb7c5ddbed0b16a9e1171769ace0a51b3c89a0c48b2f5957786231c09e94\" successfully" Jan 29 11:32:32.621750 containerd[1495]: time="2025-01-29T11:32:32.621713233Z" level=info msg="StopPodSandbox for \"d569cb7c5ddbed0b16a9e1171769ace0a51b3c89a0c48b2f5957786231c09e94\" returns successfully" Jan 29 11:32:32.622246 containerd[1495]: time="2025-01-29T11:32:32.622127431Z" level=info msg="StopPodSandbox for \"934f5bf6aae6891083f2c20de48c400c17aaa8a980e059df118a8a6387f827de\"" Jan 29 11:32:32.622246 containerd[1495]: time="2025-01-29T11:32:32.622195508Z" level=info msg="TearDown network for sandbox \"934f5bf6aae6891083f2c20de48c400c17aaa8a980e059df118a8a6387f827de\" successfully" Jan 29 11:32:32.622246 containerd[1495]: time="2025-01-29T11:32:32.622203584Z" level=info msg="StopPodSandbox for \"934f5bf6aae6891083f2c20de48c400c17aaa8a980e059df118a8a6387f827de\" returns successfully" Jan 29 11:32:32.623138 containerd[1495]: time="2025-01-29T11:32:32.623105135Z" level=info msg="StopPodSandbox for \"88b4cc24607b9cc69cf8566db1a11a31044711ee8251a11957523ee4d62e4b16\"" Jan 29 11:32:32.623293 containerd[1495]: time="2025-01-29T11:32:32.623201396Z" level=info msg="TearDown network for sandbox \"88b4cc24607b9cc69cf8566db1a11a31044711ee8251a11957523ee4d62e4b16\" successfully" Jan 29 11:32:32.623293 containerd[1495]: time="2025-01-29T11:32:32.623215001Z" level=info msg="StopPodSandbox for \"88b4cc24607b9cc69cf8566db1a11a31044711ee8251a11957523ee4d62e4b16\" returns successfully" Jan 29 11:32:32.623376 kubelet[2697]: E0129 11:32:32.623352 2697 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jan 29 11:32:32.623678 containerd[1495]: time="2025-01-29T11:32:32.623659316Z" level=info msg="RunPodSandbox for 
&PodSandboxMetadata{Name:coredns-7db6d8ff4d-pgcpq,Uid:ce5f6883-5ebc-45bd-8052-20316de2d012,Namespace:kube-system,Attempt:4,}" Jan 29 11:32:32.625194 systemd[1]: run-netns-cni\x2dfc36cf4d\x2daf50\x2d3af0\x2d041c\x2d655a11328bda.mount: Deactivated successfully. Jan 29 11:32:32.625307 systemd[1]: run-netns-cni\x2d8c6aed65\x2d9f6d\x2d0c08\x2de35c\x2dce24db4a1565.mount: Deactivated successfully. Jan 29 11:32:33.661525 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount4085341584.mount: Deactivated successfully. Jan 29 11:32:33.796602 systemd[1]: Started sshd@10-10.0.0.69:22-10.0.0.1:56420.service - OpenSSH per-connection server daemon (10.0.0.1:56420). Jan 29 11:32:34.114209 sshd[4342]: Accepted publickey for core from 10.0.0.1 port 56420 ssh2: RSA SHA256:vglZfOE0APgUpJbg1gfFAEfTfpzlM1a6LSSiwuQWd4A Jan 29 11:32:34.116043 sshd-session[4342]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Jan 29 11:32:34.120321 systemd-logind[1471]: New session 11 of user core. Jan 29 11:32:34.128539 systemd[1]: Started session-11.scope - Session 11 of User core. Jan 29 11:32:34.653524 sshd[4344]: Connection closed by 10.0.0.1 port 56420 Jan 29 11:32:34.654687 sshd-session[4342]: pam_unix(sshd:session): session closed for user core Jan 29 11:32:34.667252 systemd[1]: sshd@10-10.0.0.69:22-10.0.0.1:56420.service: Deactivated successfully. Jan 29 11:32:34.669580 systemd[1]: session-11.scope: Deactivated successfully. Jan 29 11:32:34.671455 systemd-logind[1471]: Session 11 logged out. Waiting for processes to exit. Jan 29 11:32:34.679700 systemd[1]: Started sshd@11-10.0.0.69:22-10.0.0.1:56430.service - OpenSSH per-connection server daemon (10.0.0.1:56430). Jan 29 11:32:34.680723 systemd-logind[1471]: Removed session 11. 
Jan 29 11:32:34.787875 sshd[4358]: Accepted publickey for core from 10.0.0.1 port 56430 ssh2: RSA SHA256:vglZfOE0APgUpJbg1gfFAEfTfpzlM1a6LSSiwuQWd4A Jan 29 11:32:34.789208 sshd-session[4358]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Jan 29 11:32:34.793496 systemd-logind[1471]: New session 12 of user core. Jan 29 11:32:34.803575 systemd[1]: Started session-12.scope - Session 12 of User core. Jan 29 11:32:35.214595 sshd[4360]: Connection closed by 10.0.0.1 port 56430 Jan 29 11:32:35.215051 sshd-session[4358]: pam_unix(sshd:session): session closed for user core Jan 29 11:32:35.223033 systemd[1]: sshd@11-10.0.0.69:22-10.0.0.1:56430.service: Deactivated successfully. Jan 29 11:32:35.227547 systemd[1]: session-12.scope: Deactivated successfully. Jan 29 11:32:35.228742 systemd-logind[1471]: Session 12 logged out. Waiting for processes to exit. Jan 29 11:32:35.237661 systemd[1]: Started sshd@12-10.0.0.69:22-10.0.0.1:56436.service - OpenSSH per-connection server daemon (10.0.0.1:56436). Jan 29 11:32:35.238972 systemd-logind[1471]: Removed session 12. Jan 29 11:32:35.351528 sshd[4377]: Accepted publickey for core from 10.0.0.1 port 56436 ssh2: RSA SHA256:vglZfOE0APgUpJbg1gfFAEfTfpzlM1a6LSSiwuQWd4A Jan 29 11:32:35.353113 sshd-session[4377]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Jan 29 11:32:35.367853 systemd-logind[1471]: New session 13 of user core. Jan 29 11:32:35.372577 systemd[1]: Started session-13.scope - Session 13 of User core. 
Jan 29 11:32:35.383348 containerd[1495]: time="2025-01-29T11:32:35.383287203Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/node:v3.29.1: active requests=0, bytes read=142742010" Jan 29 11:32:35.383774 containerd[1495]: time="2025-01-29T11:32:35.383755312Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/node:v3.29.1\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jan 29 11:32:35.386751 containerd[1495]: time="2025-01-29T11:32:35.386717851Z" level=info msg="ImageCreate event name:\"sha256:feb26d4585d68e875d9bd9bd6c27ea9f2d5c9ed9ef70f8b8cb0ebb0559a1d664\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jan 29 11:32:35.406473 containerd[1495]: time="2025-01-29T11:32:35.406430618Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/node@sha256:99c3917516efe1f807a0cfdf2d14b628b7c5cc6bd8a9ee5a253154f31756bea1\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jan 29 11:32:35.416184 containerd[1495]: time="2025-01-29T11:32:35.416044167Z" level=info msg="Pulled image \"ghcr.io/flatcar/calico/node:v3.29.1\" with image id \"sha256:feb26d4585d68e875d9bd9bd6c27ea9f2d5c9ed9ef70f8b8cb0ebb0559a1d664\", repo tag \"ghcr.io/flatcar/calico/node:v3.29.1\", repo digest \"ghcr.io/flatcar/calico/node@sha256:99c3917516efe1f807a0cfdf2d14b628b7c5cc6bd8a9ee5a253154f31756bea1\", size \"142741872\" in 7.009029416s" Jan 29 11:32:35.416184 containerd[1495]: time="2025-01-29T11:32:35.416083761Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/node:v3.29.1\" returns image reference \"sha256:feb26d4585d68e875d9bd9bd6c27ea9f2d5c9ed9ef70f8b8cb0ebb0559a1d664\"" Jan 29 11:32:35.433967 containerd[1495]: time="2025-01-29T11:32:35.433912495Z" level=info msg="CreateContainer within sandbox \"3ee21ef4050ccd111435a0544a1fb757ac81bd2550610a120d7c78332e44e5c7\" for container &ContainerMetadata{Name:calico-node,Attempt:0,}" Jan 29 11:32:35.491370 containerd[1495]: time="2025-01-29T11:32:35.490481898Z" level=error msg="Failed to 
destroy network for sandbox \"f4159f78bb2d48a1117a75aeb4e25573e0a11837294e7901b6da3c3896dae9e0\"" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jan 29 11:32:35.494764 containerd[1495]: time="2025-01-29T11:32:35.494510638Z" level=error msg="encountered an error cleaning up failed sandbox \"f4159f78bb2d48a1117a75aeb4e25573e0a11837294e7901b6da3c3896dae9e0\", marking sandbox state as SANDBOX_UNKNOWN" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jan 29 11:32:35.495258 containerd[1495]: time="2025-01-29T11:32:35.495218556Z" level=error msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-kube-controllers-748549c4c9-7d2cf,Uid:dfb9e1ad-f94c-4aa8-a1d0-d67fe50cc0e9,Namespace:calico-system,Attempt:4,} failed, error" error="failed to setup network for sandbox \"f4159f78bb2d48a1117a75aeb4e25573e0a11837294e7901b6da3c3896dae9e0\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jan 29 11:32:35.496293 kubelet[2697]: E0129 11:32:35.495623 2697 remote_runtime.go:193] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"f4159f78bb2d48a1117a75aeb4e25573e0a11837294e7901b6da3c3896dae9e0\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jan 29 11:32:35.496293 kubelet[2697]: E0129 11:32:35.495678 2697 kuberuntime_sandbox.go:72] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to setup network for sandbox 
\"f4159f78bb2d48a1117a75aeb4e25573e0a11837294e7901b6da3c3896dae9e0\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-system/calico-kube-controllers-748549c4c9-7d2cf" Jan 29 11:32:35.496293 kubelet[2697]: E0129 11:32:35.495704 2697 kuberuntime_manager.go:1166] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"f4159f78bb2d48a1117a75aeb4e25573e0a11837294e7901b6da3c3896dae9e0\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-system/calico-kube-controllers-748549c4c9-7d2cf" Jan 29 11:32:35.496736 kubelet[2697]: E0129 11:32:35.495749 2697 pod_workers.go:1298] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"calico-kube-controllers-748549c4c9-7d2cf_calico-system(dfb9e1ad-f94c-4aa8-a1d0-d67fe50cc0e9)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"calico-kube-controllers-748549c4c9-7d2cf_calico-system(dfb9e1ad-f94c-4aa8-a1d0-d67fe50cc0e9)\\\": rpc error: code = Unknown desc = failed to setup network for sandbox \\\"f4159f78bb2d48a1117a75aeb4e25573e0a11837294e7901b6da3c3896dae9e0\\\": plugin type=\\\"calico\\\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="calico-system/calico-kube-controllers-748549c4c9-7d2cf" podUID="dfb9e1ad-f94c-4aa8-a1d0-d67fe50cc0e9" Jan 29 11:32:35.506104 containerd[1495]: time="2025-01-29T11:32:35.503659344Z" level=error msg="Failed to destroy network for sandbox \"8a448304f72df29d9b001f735dc09f27534467dd06569f9ad7d03d0886fca03f\"" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the 
calico/node container is running and has mounted /var/lib/calico/" Jan 29 11:32:35.506104 containerd[1495]: time="2025-01-29T11:32:35.505699462Z" level=error msg="encountered an error cleaning up failed sandbox \"8a448304f72df29d9b001f735dc09f27534467dd06569f9ad7d03d0886fca03f\", marking sandbox state as SANDBOX_UNKNOWN" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jan 29 11:32:35.506104 containerd[1495]: time="2025-01-29T11:32:35.505756318Z" level=error msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-7db6d8ff4d-bpqc6,Uid:59ad9644-a5c7-4480-bc20-dbeaa0a967d1,Namespace:kube-system,Attempt:4,} failed, error" error="failed to setup network for sandbox \"8a448304f72df29d9b001f735dc09f27534467dd06569f9ad7d03d0886fca03f\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jan 29 11:32:35.509212 kubelet[2697]: E0129 11:32:35.507613 2697 remote_runtime.go:193] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"8a448304f72df29d9b001f735dc09f27534467dd06569f9ad7d03d0886fca03f\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jan 29 11:32:35.509212 kubelet[2697]: E0129 11:32:35.507690 2697 kuberuntime_sandbox.go:72] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"8a448304f72df29d9b001f735dc09f27534467dd06569f9ad7d03d0886fca03f\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="kube-system/coredns-7db6d8ff4d-bpqc6" 
Jan 29 11:32:35.509212 kubelet[2697]: E0129 11:32:35.507722 2697 kuberuntime_manager.go:1166] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"8a448304f72df29d9b001f735dc09f27534467dd06569f9ad7d03d0886fca03f\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="kube-system/coredns-7db6d8ff4d-bpqc6" Jan 29 11:32:35.509427 kubelet[2697]: E0129 11:32:35.507772 2697 pod_workers.go:1298] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"coredns-7db6d8ff4d-bpqc6_kube-system(59ad9644-a5c7-4480-bc20-dbeaa0a967d1)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"coredns-7db6d8ff4d-bpqc6_kube-system(59ad9644-a5c7-4480-bc20-dbeaa0a967d1)\\\": rpc error: code = Unknown desc = failed to setup network for sandbox \\\"8a448304f72df29d9b001f735dc09f27534467dd06569f9ad7d03d0886fca03f\\\": plugin type=\\\"calico\\\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="kube-system/coredns-7db6d8ff4d-bpqc6" podUID="59ad9644-a5c7-4480-bc20-dbeaa0a967d1" Jan 29 11:32:35.515356 containerd[1495]: time="2025-01-29T11:32:35.513492895Z" level=error msg="Failed to destroy network for sandbox \"4f05e1208f1add52e8a5b9089a9a0848ac49ff2f3de3a669456b522c00d2264e\"" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jan 29 11:32:35.520437 containerd[1495]: time="2025-01-29T11:32:35.515898178Z" level=error msg="encountered an error cleaning up failed sandbox \"4f05e1208f1add52e8a5b9089a9a0848ac49ff2f3de3a669456b522c00d2264e\", marking sandbox state as SANDBOX_UNKNOWN" error="plugin type=\"calico\" failed (delete): stat 
/var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jan 29 11:32:35.520437 containerd[1495]: time="2025-01-29T11:32:35.515985351Z" level=error msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-apiserver-7f846fb45c-qnzlv,Uid:6a3ccfc9-9edc-4b98-a77a-7df17efe2895,Namespace:calico-apiserver,Attempt:4,} failed, error" error="failed to setup network for sandbox \"4f05e1208f1add52e8a5b9089a9a0848ac49ff2f3de3a669456b522c00d2264e\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jan 29 11:32:35.520634 kubelet[2697]: E0129 11:32:35.516267 2697 remote_runtime.go:193] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"4f05e1208f1add52e8a5b9089a9a0848ac49ff2f3de3a669456b522c00d2264e\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jan 29 11:32:35.520634 kubelet[2697]: E0129 11:32:35.516313 2697 kuberuntime_sandbox.go:72] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"4f05e1208f1add52e8a5b9089a9a0848ac49ff2f3de3a669456b522c00d2264e\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-apiserver/calico-apiserver-7f846fb45c-qnzlv" Jan 29 11:32:35.520634 kubelet[2697]: E0129 11:32:35.516345 2697 kuberuntime_manager.go:1166] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"4f05e1208f1add52e8a5b9089a9a0848ac49ff2f3de3a669456b522c00d2264e\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such 
file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-apiserver/calico-apiserver-7f846fb45c-qnzlv" Jan 29 11:32:35.520736 kubelet[2697]: E0129 11:32:35.516382 2697 pod_workers.go:1298] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"calico-apiserver-7f846fb45c-qnzlv_calico-apiserver(6a3ccfc9-9edc-4b98-a77a-7df17efe2895)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"calico-apiserver-7f846fb45c-qnzlv_calico-apiserver(6a3ccfc9-9edc-4b98-a77a-7df17efe2895)\\\": rpc error: code = Unknown desc = failed to setup network for sandbox \\\"4f05e1208f1add52e8a5b9089a9a0848ac49ff2f3de3a669456b522c00d2264e\\\": plugin type=\\\"calico\\\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="calico-apiserver/calico-apiserver-7f846fb45c-qnzlv" podUID="6a3ccfc9-9edc-4b98-a77a-7df17efe2895" Jan 29 11:32:35.523041 containerd[1495]: time="2025-01-29T11:32:35.522893293Z" level=info msg="CreateContainer within sandbox \"3ee21ef4050ccd111435a0544a1fb757ac81bd2550610a120d7c78332e44e5c7\" for &ContainerMetadata{Name:calico-node,Attempt:0,} returns container id \"bea0775ce4483acfb1096f68e6fc2fd6fec1d0bad4a9e54a2a4adfbf8be0988d\"" Jan 29 11:32:35.526941 containerd[1495]: time="2025-01-29T11:32:35.526475856Z" level=info msg="StartContainer for \"bea0775ce4483acfb1096f68e6fc2fd6fec1d0bad4a9e54a2a4adfbf8be0988d\"" Jan 29 11:32:35.530587 containerd[1495]: time="2025-01-29T11:32:35.530552385Z" level=error msg="Failed to destroy network for sandbox \"61ea1d99928f6fca8bf5f9f3d6685899fdaf05dc52e35306cb5b54127a72d6e7\"" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jan 29 11:32:35.531055 containerd[1495]: time="2025-01-29T11:32:35.531033196Z" 
level=error msg="encountered an error cleaning up failed sandbox \"61ea1d99928f6fca8bf5f9f3d6685899fdaf05dc52e35306cb5b54127a72d6e7\", marking sandbox state as SANDBOX_UNKNOWN" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jan 29 11:32:35.531164 containerd[1495]: time="2025-01-29T11:32:35.531145677Z" level=error msg="RunPodSandbox for &PodSandboxMetadata{Name:csi-node-driver-qtzv2,Uid:eb49b472-01c5-4cb5-84d5-9a1a2c4b969d,Namespace:calico-system,Attempt:4,} failed, error" error="failed to setup network for sandbox \"61ea1d99928f6fca8bf5f9f3d6685899fdaf05dc52e35306cb5b54127a72d6e7\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jan 29 11:32:35.531709 kubelet[2697]: E0129 11:32:35.531435 2697 remote_runtime.go:193] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"61ea1d99928f6fca8bf5f9f3d6685899fdaf05dc52e35306cb5b54127a72d6e7\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jan 29 11:32:35.531709 kubelet[2697]: E0129 11:32:35.531486 2697 kuberuntime_sandbox.go:72] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"61ea1d99928f6fca8bf5f9f3d6685899fdaf05dc52e35306cb5b54127a72d6e7\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-system/csi-node-driver-qtzv2" Jan 29 11:32:35.531709 kubelet[2697]: E0129 11:32:35.531505 2697 kuberuntime_manager.go:1166] "CreatePodSandbox for pod failed" err="rpc error: code 
= Unknown desc = failed to setup network for sandbox \"61ea1d99928f6fca8bf5f9f3d6685899fdaf05dc52e35306cb5b54127a72d6e7\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-system/csi-node-driver-qtzv2" Jan 29 11:32:35.531823 kubelet[2697]: E0129 11:32:35.531542 2697 pod_workers.go:1298] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"csi-node-driver-qtzv2_calico-system(eb49b472-01c5-4cb5-84d5-9a1a2c4b969d)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"csi-node-driver-qtzv2_calico-system(eb49b472-01c5-4cb5-84d5-9a1a2c4b969d)\\\": rpc error: code = Unknown desc = failed to setup network for sandbox \\\"61ea1d99928f6fca8bf5f9f3d6685899fdaf05dc52e35306cb5b54127a72d6e7\\\": plugin type=\\\"calico\\\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="calico-system/csi-node-driver-qtzv2" podUID="eb49b472-01c5-4cb5-84d5-9a1a2c4b969d" Jan 29 11:32:35.560925 containerd[1495]: time="2025-01-29T11:32:35.560714289Z" level=error msg="Failed to destroy network for sandbox \"8ff3ec328ea81ecb5b20f9bf8dc8473fb5ca80ce1cd5e221a39baad55272eccf\"" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jan 29 11:32:35.566264 containerd[1495]: time="2025-01-29T11:32:35.564561447Z" level=error msg="encountered an error cleaning up failed sandbox \"8ff3ec328ea81ecb5b20f9bf8dc8473fb5ca80ce1cd5e221a39baad55272eccf\", marking sandbox state as SANDBOX_UNKNOWN" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jan 29 11:32:35.566264 
containerd[1495]: time="2025-01-29T11:32:35.564671765Z" level=error msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-apiserver-7f846fb45c-49zts,Uid:355edf79-8969-4232-bff0-a38923ed3709,Namespace:calico-apiserver,Attempt:4,} failed, error" error="failed to setup network for sandbox \"8ff3ec328ea81ecb5b20f9bf8dc8473fb5ca80ce1cd5e221a39baad55272eccf\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jan 29 11:32:35.566503 kubelet[2697]: E0129 11:32:35.565864 2697 remote_runtime.go:193] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"8ff3ec328ea81ecb5b20f9bf8dc8473fb5ca80ce1cd5e221a39baad55272eccf\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jan 29 11:32:35.566503 kubelet[2697]: E0129 11:32:35.565932 2697 kuberuntime_sandbox.go:72] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"8ff3ec328ea81ecb5b20f9bf8dc8473fb5ca80ce1cd5e221a39baad55272eccf\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-apiserver/calico-apiserver-7f846fb45c-49zts" Jan 29 11:32:35.566503 kubelet[2697]: E0129 11:32:35.565954 2697 kuberuntime_manager.go:1166] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"8ff3ec328ea81ecb5b20f9bf8dc8473fb5ca80ce1cd5e221a39baad55272eccf\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-apiserver/calico-apiserver-7f846fb45c-49zts" 
Jan 29 11:32:35.566617 kubelet[2697]: E0129 11:32:35.566002 2697 pod_workers.go:1298] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"calico-apiserver-7f846fb45c-49zts_calico-apiserver(355edf79-8969-4232-bff0-a38923ed3709)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"calico-apiserver-7f846fb45c-49zts_calico-apiserver(355edf79-8969-4232-bff0-a38923ed3709)\\\": rpc error: code = Unknown desc = failed to setup network for sandbox \\\"8ff3ec328ea81ecb5b20f9bf8dc8473fb5ca80ce1cd5e221a39baad55272eccf\\\": plugin type=\\\"calico\\\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="calico-apiserver/calico-apiserver-7f846fb45c-49zts" podUID="355edf79-8969-4232-bff0-a38923ed3709" Jan 29 11:32:35.575982 containerd[1495]: time="2025-01-29T11:32:35.575575082Z" level=error msg="Failed to destroy network for sandbox \"018d81fe3443755c0386db1c28cb83e6eda2e9ae48cc9dfd32950115231468d6\"" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jan 29 11:32:35.575982 containerd[1495]: time="2025-01-29T11:32:35.575979732Z" level=error msg="encountered an error cleaning up failed sandbox \"018d81fe3443755c0386db1c28cb83e6eda2e9ae48cc9dfd32950115231468d6\", marking sandbox state as SANDBOX_UNKNOWN" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jan 29 11:32:35.576137 containerd[1495]: time="2025-01-29T11:32:35.576060985Z" level=error msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-7db6d8ff4d-pgcpq,Uid:ce5f6883-5ebc-45bd-8052-20316de2d012,Namespace:kube-system,Attempt:4,} failed, error" error="failed to setup network for sandbox 
\"018d81fe3443755c0386db1c28cb83e6eda2e9ae48cc9dfd32950115231468d6\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jan 29 11:32:35.577397 kubelet[2697]: E0129 11:32:35.576391 2697 remote_runtime.go:193] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"018d81fe3443755c0386db1c28cb83e6eda2e9ae48cc9dfd32950115231468d6\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jan 29 11:32:35.577397 kubelet[2697]: E0129 11:32:35.576477 2697 kuberuntime_sandbox.go:72] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"018d81fe3443755c0386db1c28cb83e6eda2e9ae48cc9dfd32950115231468d6\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="kube-system/coredns-7db6d8ff4d-pgcpq" Jan 29 11:32:35.577397 kubelet[2697]: E0129 11:32:35.576496 2697 kuberuntime_manager.go:1166] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"018d81fe3443755c0386db1c28cb83e6eda2e9ae48cc9dfd32950115231468d6\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="kube-system/coredns-7db6d8ff4d-pgcpq" Jan 29 11:32:35.577567 kubelet[2697]: E0129 11:32:35.576540 2697 pod_workers.go:1298] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"coredns-7db6d8ff4d-pgcpq_kube-system(ce5f6883-5ebc-45bd-8052-20316de2d012)\" with CreatePodSandboxError: \"Failed to create sandbox for pod 
\\\"coredns-7db6d8ff4d-pgcpq_kube-system(ce5f6883-5ebc-45bd-8052-20316de2d012)\\\": rpc error: code = Unknown desc = failed to setup network for sandbox \\\"018d81fe3443755c0386db1c28cb83e6eda2e9ae48cc9dfd32950115231468d6\\\": plugin type=\\\"calico\\\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="kube-system/coredns-7db6d8ff4d-pgcpq" podUID="ce5f6883-5ebc-45bd-8052-20316de2d012" Jan 29 11:32:35.586223 sshd[4398]: Connection closed by 10.0.0.1 port 56436 Jan 29 11:32:35.586689 sshd-session[4377]: pam_unix(sshd:session): session closed for user core Jan 29 11:32:35.590051 systemd[1]: sshd@12-10.0.0.69:22-10.0.0.1:56436.service: Deactivated successfully. Jan 29 11:32:35.592053 systemd[1]: session-13.scope: Deactivated successfully. Jan 29 11:32:35.593537 systemd-logind[1471]: Session 13 logged out. Waiting for processes to exit. Jan 29 11:32:35.594786 systemd-logind[1471]: Removed session 13. Jan 29 11:32:35.640581 systemd[1]: Started cri-containerd-bea0775ce4483acfb1096f68e6fc2fd6fec1d0bad4a9e54a2a4adfbf8be0988d.scope - libcontainer container bea0775ce4483acfb1096f68e6fc2fd6fec1d0bad4a9e54a2a4adfbf8be0988d. 
Jan 29 11:32:35.659431 kubelet[2697]: I0129 11:32:35.659366 2697 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="f4159f78bb2d48a1117a75aeb4e25573e0a11837294e7901b6da3c3896dae9e0" Jan 29 11:32:35.660879 containerd[1495]: time="2025-01-29T11:32:35.660203436Z" level=info msg="StopPodSandbox for \"f4159f78bb2d48a1117a75aeb4e25573e0a11837294e7901b6da3c3896dae9e0\"" Jan 29 11:32:35.660879 containerd[1495]: time="2025-01-29T11:32:35.660519659Z" level=info msg="Ensure that sandbox f4159f78bb2d48a1117a75aeb4e25573e0a11837294e7901b6da3c3896dae9e0 in task-service has been cleanup successfully" Jan 29 11:32:35.662285 containerd[1495]: time="2025-01-29T11:32:35.662108930Z" level=info msg="TearDown network for sandbox \"f4159f78bb2d48a1117a75aeb4e25573e0a11837294e7901b6da3c3896dae9e0\" successfully" Jan 29 11:32:35.662377 containerd[1495]: time="2025-01-29T11:32:35.662296613Z" level=info msg="StopPodSandbox for \"f4159f78bb2d48a1117a75aeb4e25573e0a11837294e7901b6da3c3896dae9e0\" returns successfully" Jan 29 11:32:35.663294 containerd[1495]: time="2025-01-29T11:32:35.663238801Z" level=info msg="StopPodSandbox for \"959ee7c5fd0e3b26b779fc1b7c9c34bb6768bd7de43d74c8453896c45bbc7058\"" Jan 29 11:32:35.663911 containerd[1495]: time="2025-01-29T11:32:35.663873802Z" level=info msg="TearDown network for sandbox \"959ee7c5fd0e3b26b779fc1b7c9c34bb6768bd7de43d74c8453896c45bbc7058\" successfully" Jan 29 11:32:35.663911 containerd[1495]: time="2025-01-29T11:32:35.663894080Z" level=info msg="StopPodSandbox for \"959ee7c5fd0e3b26b779fc1b7c9c34bb6768bd7de43d74c8453896c45bbc7058\" returns successfully" Jan 29 11:32:35.665033 kubelet[2697]: I0129 11:32:35.665003 2697 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="61ea1d99928f6fca8bf5f9f3d6685899fdaf05dc52e35306cb5b54127a72d6e7" Jan 29 11:32:35.666025 containerd[1495]: time="2025-01-29T11:32:35.665684580Z" level=info msg="StopPodSandbox for 
\"61ea1d99928f6fca8bf5f9f3d6685899fdaf05dc52e35306cb5b54127a72d6e7\"" Jan 29 11:32:35.666025 containerd[1495]: time="2025-01-29T11:32:35.665878914Z" level=info msg="StopPodSandbox for \"89c5015df07cde96721832440960b9c6a01da3d920c81398d189bdc7a03b198c\"" Jan 29 11:32:35.666025 containerd[1495]: time="2025-01-29T11:32:35.665891908Z" level=info msg="Ensure that sandbox 61ea1d99928f6fca8bf5f9f3d6685899fdaf05dc52e35306cb5b54127a72d6e7 in task-service has been cleanup successfully" Jan 29 11:32:35.666025 containerd[1495]: time="2025-01-29T11:32:35.665949727Z" level=info msg="TearDown network for sandbox \"89c5015df07cde96721832440960b9c6a01da3d920c81398d189bdc7a03b198c\" successfully" Jan 29 11:32:35.666025 containerd[1495]: time="2025-01-29T11:32:35.665958634Z" level=info msg="StopPodSandbox for \"89c5015df07cde96721832440960b9c6a01da3d920c81398d189bdc7a03b198c\" returns successfully" Jan 29 11:32:35.666498 containerd[1495]: time="2025-01-29T11:32:35.666481555Z" level=info msg="StopPodSandbox for \"72d092d80dc2ce8362dcfe998cac484b496878a763b26fd59103bd9f00b2119c\"" Jan 29 11:32:35.666626 containerd[1495]: time="2025-01-29T11:32:35.666611619Z" level=info msg="TearDown network for sandbox \"72d092d80dc2ce8362dcfe998cac484b496878a763b26fd59103bd9f00b2119c\" successfully" Jan 29 11:32:35.666832 containerd[1495]: time="2025-01-29T11:32:35.666817966Z" level=info msg="StopPodSandbox for \"72d092d80dc2ce8362dcfe998cac484b496878a763b26fd59103bd9f00b2119c\" returns successfully" Jan 29 11:32:35.666991 containerd[1495]: time="2025-01-29T11:32:35.666976213Z" level=info msg="TearDown network for sandbox \"61ea1d99928f6fca8bf5f9f3d6685899fdaf05dc52e35306cb5b54127a72d6e7\" successfully" Jan 29 11:32:35.667126 containerd[1495]: time="2025-01-29T11:32:35.667065520Z" level=info msg="StopPodSandbox for \"61ea1d99928f6fca8bf5f9f3d6685899fdaf05dc52e35306cb5b54127a72d6e7\" returns successfully" Jan 29 11:32:35.668687 containerd[1495]: time="2025-01-29T11:32:35.668503038Z" level=info 
msg="StopPodSandbox for \"fe6ac203e82a4009391dc90c7b6ce83180175a75113cfc6c681bac96b7142886\"" Jan 29 11:32:35.668687 containerd[1495]: time="2025-01-29T11:32:35.668599068Z" level=info msg="TearDown network for sandbox \"fe6ac203e82a4009391dc90c7b6ce83180175a75113cfc6c681bac96b7142886\" successfully" Jan 29 11:32:35.668687 containerd[1495]: time="2025-01-29T11:32:35.668612393Z" level=info msg="StopPodSandbox for \"fe6ac203e82a4009391dc90c7b6ce83180175a75113cfc6c681bac96b7142886\" returns successfully" Jan 29 11:32:35.668687 containerd[1495]: time="2025-01-29T11:32:35.668638011Z" level=info msg="StopPodSandbox for \"97436e09e8a3af61a6c30171fb86bac060178da354f2257a8dfbf354e1e82387\"" Jan 29 11:32:35.668841 containerd[1495]: time="2025-01-29T11:32:35.668730695Z" level=info msg="TearDown network for sandbox \"97436e09e8a3af61a6c30171fb86bac060178da354f2257a8dfbf354e1e82387\" successfully" Jan 29 11:32:35.668841 containerd[1495]: time="2025-01-29T11:32:35.668744561Z" level=info msg="StopPodSandbox for \"97436e09e8a3af61a6c30171fb86bac060178da354f2257a8dfbf354e1e82387\" returns successfully" Jan 29 11:32:35.669739 containerd[1495]: time="2025-01-29T11:32:35.669579157Z" level=info msg="StopPodSandbox for \"32d3ad80782f04b13fc8d95b06e337633874857fa4abf01127138c329bba0487\"" Jan 29 11:32:35.669739 containerd[1495]: time="2025-01-29T11:32:35.669686308Z" level=info msg="TearDown network for sandbox \"32d3ad80782f04b13fc8d95b06e337633874857fa4abf01127138c329bba0487\" successfully" Jan 29 11:32:35.669739 containerd[1495]: time="2025-01-29T11:32:35.669700395Z" level=info msg="StopPodSandbox for \"32d3ad80782f04b13fc8d95b06e337633874857fa4abf01127138c329bba0487\" returns successfully" Jan 29 11:32:35.670066 containerd[1495]: time="2025-01-29T11:32:35.670040282Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-kube-controllers-748549c4c9-7d2cf,Uid:dfb9e1ad-f94c-4aa8-a1d0-d67fe50cc0e9,Namespace:calico-system,Attempt:5,}" Jan 29 11:32:35.670842 containerd[1495]: 
time="2025-01-29T11:32:35.670807302Z" level=info msg="StopPodSandbox for \"6b01d3b17008796e2599c4562b21cab96766aa5bc7d9347165898d1384559383\"" Jan 29 11:32:35.671035 kubelet[2697]: I0129 11:32:35.671005 2697 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="8a448304f72df29d9b001f735dc09f27534467dd06569f9ad7d03d0886fca03f" Jan 29 11:32:35.672393 containerd[1495]: time="2025-01-29T11:32:35.672367590Z" level=info msg="StopPodSandbox for \"8a448304f72df29d9b001f735dc09f27534467dd06569f9ad7d03d0886fca03f\"" Jan 29 11:32:35.672642 containerd[1495]: time="2025-01-29T11:32:35.672618420Z" level=info msg="Ensure that sandbox 8a448304f72df29d9b001f735dc09f27534467dd06569f9ad7d03d0886fca03f in task-service has been cleanup successfully" Jan 29 11:32:35.672923 containerd[1495]: time="2025-01-29T11:32:35.672892314Z" level=info msg="TearDown network for sandbox \"6b01d3b17008796e2599c4562b21cab96766aa5bc7d9347165898d1384559383\" successfully" Jan 29 11:32:35.672923 containerd[1495]: time="2025-01-29T11:32:35.672912662Z" level=info msg="StopPodSandbox for \"6b01d3b17008796e2599c4562b21cab96766aa5bc7d9347165898d1384559383\" returns successfully" Jan 29 11:32:35.674152 containerd[1495]: time="2025-01-29T11:32:35.673662890Z" level=info msg="StopPodSandbox for \"068481ee10bd71b1783eeab1c40f9ef93d9dc158fa6a9beeeba5fd6980a6abd7\"" Jan 29 11:32:35.674795 containerd[1495]: time="2025-01-29T11:32:35.674758756Z" level=info msg="TearDown network for sandbox \"068481ee10bd71b1783eeab1c40f9ef93d9dc158fa6a9beeeba5fd6980a6abd7\" successfully" Jan 29 11:32:35.674795 containerd[1495]: time="2025-01-29T11:32:35.674780226Z" level=info msg="StopPodSandbox for \"068481ee10bd71b1783eeab1c40f9ef93d9dc158fa6a9beeeba5fd6980a6abd7\" returns successfully" Jan 29 11:32:35.675479 containerd[1495]: time="2025-01-29T11:32:35.675451125Z" level=info msg="TearDown network for sandbox \"8a448304f72df29d9b001f735dc09f27534467dd06569f9ad7d03d0886fca03f\" successfully" Jan 29 
11:32:35.675534 containerd[1495]: time="2025-01-29T11:32:35.675470542Z" level=info msg="StopPodSandbox for \"8a448304f72df29d9b001f735dc09f27534467dd06569f9ad7d03d0886fca03f\" returns successfully" Jan 29 11:32:35.675626 containerd[1495]: time="2025-01-29T11:32:35.675600966Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:csi-node-driver-qtzv2,Uid:eb49b472-01c5-4cb5-84d5-9a1a2c4b969d,Namespace:calico-system,Attempt:5,}" Jan 29 11:32:35.678031 containerd[1495]: time="2025-01-29T11:32:35.676225489Z" level=info msg="StopPodSandbox for \"4e1e197df2d2535e387f27fb3ad1d9788046231aaa3dcab8b067d5acba99c1c6\"" Jan 29 11:32:35.678031 containerd[1495]: time="2025-01-29T11:32:35.676394986Z" level=info msg="TearDown network for sandbox \"4e1e197df2d2535e387f27fb3ad1d9788046231aaa3dcab8b067d5acba99c1c6\" successfully" Jan 29 11:32:35.678031 containerd[1495]: time="2025-01-29T11:32:35.676421937Z" level=info msg="StopPodSandbox for \"4e1e197df2d2535e387f27fb3ad1d9788046231aaa3dcab8b067d5acba99c1c6\" returns successfully" Jan 29 11:32:35.678031 containerd[1495]: time="2025-01-29T11:32:35.676755082Z" level=info msg="StopPodSandbox for \"a72ec10a20a86b9cbf5f8208218ea1a41fc5aef847107a9fb000132954894a40\"" Jan 29 11:32:35.678031 containerd[1495]: time="2025-01-29T11:32:35.676845511Z" level=info msg="TearDown network for sandbox \"a72ec10a20a86b9cbf5f8208218ea1a41fc5aef847107a9fb000132954894a40\" successfully" Jan 29 11:32:35.678031 containerd[1495]: time="2025-01-29T11:32:35.676856883Z" level=info msg="StopPodSandbox for \"a72ec10a20a86b9cbf5f8208218ea1a41fc5aef847107a9fb000132954894a40\" returns successfully" Jan 29 11:32:35.678031 containerd[1495]: time="2025-01-29T11:32:35.677091623Z" level=info msg="StopPodSandbox for \"c0b5ead6c962ad21415289fbc7fdd4d4dbcb723931e95b4fc2ac10dc62090f23\"" Jan 29 11:32:35.678031 containerd[1495]: time="2025-01-29T11:32:35.677175791Z" level=info msg="TearDown network for sandbox 
\"c0b5ead6c962ad21415289fbc7fdd4d4dbcb723931e95b4fc2ac10dc62090f23\" successfully" Jan 29 11:32:35.678031 containerd[1495]: time="2025-01-29T11:32:35.677189346Z" level=info msg="StopPodSandbox for \"c0b5ead6c962ad21415289fbc7fdd4d4dbcb723931e95b4fc2ac10dc62090f23\" returns successfully" Jan 29 11:32:35.678031 containerd[1495]: time="2025-01-29T11:32:35.677545655Z" level=info msg="StopPodSandbox for \"62fbf0cacbcea32d6948f7a7cdc9388760af0845df2b850d76ebf9e79a41f942\"" Jan 29 11:32:35.678031 containerd[1495]: time="2025-01-29T11:32:35.677617450Z" level=info msg="TearDown network for sandbox \"62fbf0cacbcea32d6948f7a7cdc9388760af0845df2b850d76ebf9e79a41f942\" successfully" Jan 29 11:32:35.678031 containerd[1495]: time="2025-01-29T11:32:35.677625655Z" level=info msg="StopPodSandbox for \"62fbf0cacbcea32d6948f7a7cdc9388760af0845df2b850d76ebf9e79a41f942\" returns successfully" Jan 29 11:32:35.678387 kubelet[2697]: E0129 11:32:35.677838 2697 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jan 29 11:32:35.678454 containerd[1495]: time="2025-01-29T11:32:35.678052215Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-7db6d8ff4d-bpqc6,Uid:59ad9644-a5c7-4480-bc20-dbeaa0a967d1,Namespace:kube-system,Attempt:5,}" Jan 29 11:32:35.767280 kernel: wireguard: WireGuard 1.0.0 loaded. See www.wireguard.com for information. Jan 29 11:32:35.768031 kernel: wireguard: Copyright (C) 2015-2019 Jason A. Donenfeld . All Rights Reserved. 
Jan 29 11:32:35.809069 containerd[1495]: time="2025-01-29T11:32:35.809000650Z" level=info msg="StartContainer for \"bea0775ce4483acfb1096f68e6fc2fd6fec1d0bad4a9e54a2a4adfbf8be0988d\" returns successfully"
Jan 29 11:32:35.809365 kubelet[2697]: I0129 11:32:35.809329 2697 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="4f05e1208f1add52e8a5b9089a9a0848ac49ff2f3de3a669456b522c00d2264e"
Jan 29 11:32:35.810137 containerd[1495]: time="2025-01-29T11:32:35.810087730Z" level=info msg="StopPodSandbox for \"4f05e1208f1add52e8a5b9089a9a0848ac49ff2f3de3a669456b522c00d2264e\""
Jan 29 11:32:35.810374 containerd[1495]: time="2025-01-29T11:32:35.810339663Z" level=info msg="Ensure that sandbox 4f05e1208f1add52e8a5b9089a9a0848ac49ff2f3de3a669456b522c00d2264e in task-service has been cleanup successfully"
Jan 29 11:32:35.810597 containerd[1495]: time="2025-01-29T11:32:35.810569835Z" level=info msg="TearDown network for sandbox \"4f05e1208f1add52e8a5b9089a9a0848ac49ff2f3de3a669456b522c00d2264e\" successfully"
Jan 29 11:32:35.810597 containerd[1495]: time="2025-01-29T11:32:35.810585073Z" level=info msg="StopPodSandbox for \"4f05e1208f1add52e8a5b9089a9a0848ac49ff2f3de3a669456b522c00d2264e\" returns successfully"
Jan 29 11:32:35.810954 containerd[1495]: time="2025-01-29T11:32:35.810916275Z" level=info msg="StopPodSandbox for \"5ba01231171e716c7d6bfe7ea5616f070e4f98ceb9b8630b477b92af2ec4f966\""
Jan 29 11:32:35.811053 containerd[1495]: time="2025-01-29T11:32:35.811028365Z" level=info msg="TearDown network for sandbox \"5ba01231171e716c7d6bfe7ea5616f070e4f98ceb9b8630b477b92af2ec4f966\" successfully"
Jan 29 11:32:35.811053 containerd[1495]: time="2025-01-29T11:32:35.811044806Z" level=info msg="StopPodSandbox for \"5ba01231171e716c7d6bfe7ea5616f070e4f98ceb9b8630b477b92af2ec4f966\" returns successfully"
Jan 29 11:32:35.811374 containerd[1495]: time="2025-01-29T11:32:35.811340141Z" level=info msg="StopPodSandbox for \"8066f0c386aa12e1020b2dbf729a0686e7cd3d2af7c54baa18b36cbe29e48a40\""
Jan 29 11:32:35.811469 containerd[1495]: time="2025-01-29T11:32:35.811450568Z" level=info msg="TearDown network for sandbox \"8066f0c386aa12e1020b2dbf729a0686e7cd3d2af7c54baa18b36cbe29e48a40\" successfully"
Jan 29 11:32:35.811469 containerd[1495]: time="2025-01-29T11:32:35.811464504Z" level=info msg="StopPodSandbox for \"8066f0c386aa12e1020b2dbf729a0686e7cd3d2af7c54baa18b36cbe29e48a40\" returns successfully"
Jan 29 11:32:35.811925 containerd[1495]: time="2025-01-29T11:32:35.811875525Z" level=info msg="StopPodSandbox for \"5811a36840dd1d42d19b253cd448b0069b1615ccdc5a9f870a7e114387802527\""
Jan 29 11:32:35.812041 containerd[1495]: time="2025-01-29T11:32:35.811984629Z" level=info msg="TearDown network for sandbox \"5811a36840dd1d42d19b253cd448b0069b1615ccdc5a9f870a7e114387802527\" successfully"
Jan 29 11:32:35.812041 containerd[1495]: time="2025-01-29T11:32:35.811995049Z" level=info msg="StopPodSandbox for \"5811a36840dd1d42d19b253cd448b0069b1615ccdc5a9f870a7e114387802527\" returns successfully"
Jan 29 11:32:35.812438 containerd[1495]: time="2025-01-29T11:32:35.812394248Z" level=info msg="StopPodSandbox for \"bd1c4fbf33953e5862c77e9bcf30d3094180127b1d08f643c0857a95af354d3a\""
Jan 29 11:32:35.812585 containerd[1495]: time="2025-01-29T11:32:35.812564898Z" level=info msg="TearDown network for sandbox \"bd1c4fbf33953e5862c77e9bcf30d3094180127b1d08f643c0857a95af354d3a\" successfully"
Jan 29 11:32:35.812585 containerd[1495]: time="2025-01-29T11:32:35.812580568Z" level=info msg="StopPodSandbox for \"bd1c4fbf33953e5862c77e9bcf30d3094180127b1d08f643c0857a95af354d3a\" returns successfully"
Jan 29 11:32:35.813006 containerd[1495]: time="2025-01-29T11:32:35.812982131Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-apiserver-7f846fb45c-qnzlv,Uid:6a3ccfc9-9edc-4b98-a77a-7df17efe2895,Namespace:calico-apiserver,Attempt:5,}"
Jan 29 11:32:35.814573 kubelet[2697]: I0129 11:32:35.814394 2697 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="018d81fe3443755c0386db1c28cb83e6eda2e9ae48cc9dfd32950115231468d6"
Jan 29 11:32:35.814873 containerd[1495]: time="2025-01-29T11:32:35.814848232Z" level=info msg="StopPodSandbox for \"018d81fe3443755c0386db1c28cb83e6eda2e9ae48cc9dfd32950115231468d6\""
Jan 29 11:32:35.815051 containerd[1495]: time="2025-01-29T11:32:35.815026336Z" level=info msg="Ensure that sandbox 018d81fe3443755c0386db1c28cb83e6eda2e9ae48cc9dfd32950115231468d6 in task-service has been cleanup successfully"
Jan 29 11:32:35.815290 containerd[1495]: time="2025-01-29T11:32:35.815251500Z" level=info msg="TearDown network for sandbox \"018d81fe3443755c0386db1c28cb83e6eda2e9ae48cc9dfd32950115231468d6\" successfully"
Jan 29 11:32:35.815290 containerd[1495]: time="2025-01-29T11:32:35.815279823Z" level=info msg="StopPodSandbox for \"018d81fe3443755c0386db1c28cb83e6eda2e9ae48cc9dfd32950115231468d6\" returns successfully"
Jan 29 11:32:35.815563 containerd[1495]: time="2025-01-29T11:32:35.815541544Z" level=info msg="StopPodSandbox for \"c60489ec5828713ee6df69e7a33bc62e8d4338df25cba506d3f6f340dd21c518\""
Jan 29 11:32:35.815633 containerd[1495]: time="2025-01-29T11:32:35.815616725Z" level=info msg="TearDown network for sandbox \"c60489ec5828713ee6df69e7a33bc62e8d4338df25cba506d3f6f340dd21c518\" successfully"
Jan 29 11:32:35.815633 containerd[1495]: time="2025-01-29T11:32:35.815630100Z" level=info msg="StopPodSandbox for \"c60489ec5828713ee6df69e7a33bc62e8d4338df25cba506d3f6f340dd21c518\" returns successfully"
Jan 29 11:32:35.815902 containerd[1495]: time="2025-01-29T11:32:35.815878035Z" level=info msg="StopPodSandbox for \"d569cb7c5ddbed0b16a9e1171769ace0a51b3c89a0c48b2f5957786231c09e94\""
Jan 29 11:32:35.815981 containerd[1495]: time="2025-01-29T11:32:35.815963004Z" level=info msg="TearDown network for sandbox \"d569cb7c5ddbed0b16a9e1171769ace0a51b3c89a0c48b2f5957786231c09e94\" successfully"
Jan 29 11:32:35.815981 containerd[1495]: time="2025-01-29T11:32:35.815977181Z" level=info msg="StopPodSandbox for \"d569cb7c5ddbed0b16a9e1171769ace0a51b3c89a0c48b2f5957786231c09e94\" returns successfully"
Jan 29 11:32:35.816505 containerd[1495]: time="2025-01-29T11:32:35.816197694Z" level=info msg="StopPodSandbox for \"934f5bf6aae6891083f2c20de48c400c17aaa8a980e059df118a8a6387f827de\""
Jan 29 11:32:35.816505 containerd[1495]: time="2025-01-29T11:32:35.816274879Z" level=info msg="TearDown network for sandbox \"934f5bf6aae6891083f2c20de48c400c17aaa8a980e059df118a8a6387f827de\" successfully"
Jan 29 11:32:35.816505 containerd[1495]: time="2025-01-29T11:32:35.816284778Z" level=info msg="StopPodSandbox for \"934f5bf6aae6891083f2c20de48c400c17aaa8a980e059df118a8a6387f827de\" returns successfully"
Jan 29 11:32:35.816593 kubelet[2697]: I0129 11:32:35.816222 2697 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="8ff3ec328ea81ecb5b20f9bf8dc8473fb5ca80ce1cd5e221a39baad55272eccf"
Jan 29 11:32:35.816633 containerd[1495]: time="2025-01-29T11:32:35.816594800Z" level=info msg="StopPodSandbox for \"8ff3ec328ea81ecb5b20f9bf8dc8473fb5ca80ce1cd5e221a39baad55272eccf\""
Jan 29 11:32:35.816828 containerd[1495]: time="2025-01-29T11:32:35.816806868Z" level=info msg="Ensure that sandbox 8ff3ec328ea81ecb5b20f9bf8dc8473fb5ca80ce1cd5e221a39baad55272eccf in task-service has been cleanup successfully"
Jan 29 11:32:35.817020 containerd[1495]: time="2025-01-29T11:32:35.816598807Z" level=info msg="StopPodSandbox for \"88b4cc24607b9cc69cf8566db1a11a31044711ee8251a11957523ee4d62e4b16\""
Jan 29 11:32:35.817103 containerd[1495]: time="2025-01-29T11:32:35.816987055Z" level=info msg="TearDown network for sandbox \"8ff3ec328ea81ecb5b20f9bf8dc8473fb5ca80ce1cd5e221a39baad55272eccf\" successfully"
Jan 29 11:32:35.817132 containerd[1495]: time="2025-01-29T11:32:35.817105878Z" level=info msg="StopPodSandbox for \"8ff3ec328ea81ecb5b20f9bf8dc8473fb5ca80ce1cd5e221a39baad55272eccf\" returns successfully"
Jan 29 11:32:35.817158 containerd[1495]: time="2025-01-29T11:32:35.817076934Z" level=info msg="TearDown network for sandbox \"88b4cc24607b9cc69cf8566db1a11a31044711ee8251a11957523ee4d62e4b16\" successfully"
Jan 29 11:32:35.817158 containerd[1495]: time="2025-01-29T11:32:35.817153017Z" level=info msg="StopPodSandbox for \"88b4cc24607b9cc69cf8566db1a11a31044711ee8251a11957523ee4d62e4b16\" returns successfully"
Jan 29 11:32:35.817448 kubelet[2697]: E0129 11:32:35.817312 2697 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Jan 29 11:32:35.817593 containerd[1495]: time="2025-01-29T11:32:35.817467377Z" level=info msg="StopPodSandbox for \"b39c2348c38274ae69c6ca2882dbe664e0f843199c327bc4d8145d923c55a15b\""
Jan 29 11:32:35.817593 containerd[1495]: time="2025-01-29T11:32:35.817548549Z" level=info msg="TearDown network for sandbox \"b39c2348c38274ae69c6ca2882dbe664e0f843199c327bc4d8145d923c55a15b\" successfully"
Jan 29 11:32:35.817593 containerd[1495]: time="2025-01-29T11:32:35.817553479Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-7db6d8ff4d-pgcpq,Uid:ce5f6883-5ebc-45bd-8052-20316de2d012,Namespace:kube-system,Attempt:5,}"
Jan 29 11:32:35.817782 containerd[1495]: time="2025-01-29T11:32:35.817558388Z" level=info msg="StopPodSandbox for \"b39c2348c38274ae69c6ca2882dbe664e0f843199c327bc4d8145d923c55a15b\" returns successfully"
Jan 29 11:32:35.818032 containerd[1495]: time="2025-01-29T11:32:35.818008903Z" level=info msg="StopPodSandbox for \"cc31abcb76ed49cf0135c5d1e1ab407bf029dc7d3acca444a7ee37ff89eb9e88\""
Jan 29 11:32:35.818098 containerd[1495]: time="2025-01-29T11:32:35.818080026Z" level=info msg="TearDown network for sandbox \"cc31abcb76ed49cf0135c5d1e1ab407bf029dc7d3acca444a7ee37ff89eb9e88\" successfully"
Jan 29 11:32:35.818098 containerd[1495]: time="2025-01-29T11:32:35.818093021Z" level=info msg="StopPodSandbox for \"cc31abcb76ed49cf0135c5d1e1ab407bf029dc7d3acca444a7ee37ff89eb9e88\" returns successfully"
Jan 29 11:32:35.818375 containerd[1495]: time="2025-01-29T11:32:35.818349633Z" level=info msg="StopPodSandbox for \"f81dce53143ec3a050bc20013a34bc942f73112707290c1c5853735a992a3294\""
Jan 29 11:32:35.818480 containerd[1495]: time="2025-01-29T11:32:35.818455682Z" level=info msg="TearDown network for sandbox \"f81dce53143ec3a050bc20013a34bc942f73112707290c1c5853735a992a3294\" successfully"
Jan 29 11:32:35.818480 containerd[1495]: time="2025-01-29T11:32:35.818479005Z" level=info msg="StopPodSandbox for \"f81dce53143ec3a050bc20013a34bc942f73112707290c1c5853735a992a3294\" returns successfully"
Jan 29 11:32:35.818845 containerd[1495]: time="2025-01-29T11:32:35.818819394Z" level=info msg="StopPodSandbox for \"a57e13c259cc81ce2e477b9469ae3554a9ae4291cb2db4e7129310c60db62f90\""
Jan 29 11:32:35.818919 containerd[1495]: time="2025-01-29T11:32:35.818902039Z" level=info msg="TearDown network for sandbox \"a57e13c259cc81ce2e477b9469ae3554a9ae4291cb2db4e7129310c60db62f90\" successfully"
Jan 29 11:32:35.818919 containerd[1495]: time="2025-01-29T11:32:35.818916245Z" level=info msg="StopPodSandbox for \"a57e13c259cc81ce2e477b9469ae3554a9ae4291cb2db4e7129310c60db62f90\" returns successfully"
Jan 29 11:32:35.819340 containerd[1495]: time="2025-01-29T11:32:35.819300537Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-apiserver-7f846fb45c-49zts,Uid:355edf79-8969-4232-bff0-a38923ed3709,Namespace:calico-apiserver,Attempt:5,}"
Jan 29 11:32:36.374215 systemd[1]: run-containerd-io.containerd.grpc.v1.cri-sandboxes-018d81fe3443755c0386db1c28cb83e6eda2e9ae48cc9dfd32950115231468d6-shm.mount: Deactivated successfully.
Jan 29 11:32:36.374336 systemd[1]: run-netns-cni\x2dc44c0993\x2d9454\x2d5e8c\x2d7950\x2db0c3c50e448c.mount: Deactivated successfully.
Jan 29 11:32:36.374409 systemd[1]: run-containerd-io.containerd.grpc.v1.cri-sandboxes-4f05e1208f1add52e8a5b9089a9a0848ac49ff2f3de3a669456b522c00d2264e-shm.mount: Deactivated successfully.
Jan 29 11:32:36.377637 systemd[1]: run-netns-cni\x2d0f0f7204\x2de977\x2df762\x2d85d1\x2de63b78d358a6.mount: Deactivated successfully.
Jan 29 11:32:36.377737 systemd[1]: run-containerd-io.containerd.grpc.v1.cri-sandboxes-8a448304f72df29d9b001f735dc09f27534467dd06569f9ad7d03d0886fca03f-shm.mount: Deactivated successfully.
Jan 29 11:32:36.378022 systemd[1]: run-netns-cni\x2dbb394b0b\x2db5bd\x2d528f\x2d488e\x2d103481eb58de.mount: Deactivated successfully.
Jan 29 11:32:36.378194 systemd[1]: run-containerd-io.containerd.grpc.v1.cri-sandboxes-f4159f78bb2d48a1117a75aeb4e25573e0a11837294e7901b6da3c3896dae9e0-shm.mount: Deactivated successfully.
Jan 29 11:32:36.448365 systemd-networkd[1410]: cali3688a07fa4d: Link UP
Jan 29 11:32:36.449217 systemd-networkd[1410]: cali3688a07fa4d: Gained carrier
Jan 29 11:32:36.465977 containerd[1495]: 2025-01-29 11:32:36.258 [INFO][4668] cni-plugin/utils.go 100: File /var/lib/calico/mtu does not exist
Jan 29 11:32:36.465977 containerd[1495]: 2025-01-29 11:32:36.281 [INFO][4668] cni-plugin/plugin.go 325: Calico CNI found existing endpoint: &{{WorkloadEndpoint projectcalico.org/v3} {localhost-k8s-calico--kube--controllers--748549c4c9--7d2cf-eth0 calico-kube-controllers-748549c4c9- calico-system dfb9e1ad-f94c-4aa8-a1d0-d67fe50cc0e9 786 0 2025-01-29 11:32:14 +0000 UTC map[app.kubernetes.io/name:calico-kube-controllers k8s-app:calico-kube-controllers pod-template-hash:748549c4c9 projectcalico.org/namespace:calico-system projectcalico.org/orchestrator:k8s projectcalico.org/serviceaccount:calico-kube-controllers] map[] [] [] []} {k8s localhost calico-kube-controllers-748549c4c9-7d2cf eth0 calico-kube-controllers [] [] [kns.calico-system ksa.calico-system.calico-kube-controllers] cali3688a07fa4d [] []}} ContainerID="4ad33d5f904b2102a448346a5144c9534ec0dffc40559e3997e0c372a26eef72" Namespace="calico-system" Pod="calico-kube-controllers-748549c4c9-7d2cf" WorkloadEndpoint="localhost-k8s-calico--kube--controllers--748549c4c9--7d2cf-"
Jan 29 11:32:36.465977 containerd[1495]: 2025-01-29 11:32:36.282 [INFO][4668] cni-plugin/k8s.go 77: Extracted identifiers for CmdAddK8s ContainerID="4ad33d5f904b2102a448346a5144c9534ec0dffc40559e3997e0c372a26eef72" Namespace="calico-system" Pod="calico-kube-controllers-748549c4c9-7d2cf" WorkloadEndpoint="localhost-k8s-calico--kube--controllers--748549c4c9--7d2cf-eth0"
Jan 29 11:32:36.465977 containerd[1495]: 2025-01-29 11:32:36.395 [INFO][4759] ipam/ipam_plugin.go 225: Calico CNI IPAM request count IPv4=1 IPv6=0 ContainerID="4ad33d5f904b2102a448346a5144c9534ec0dffc40559e3997e0c372a26eef72" HandleID="k8s-pod-network.4ad33d5f904b2102a448346a5144c9534ec0dffc40559e3997e0c372a26eef72" Workload="localhost-k8s-calico--kube--controllers--748549c4c9--7d2cf-eth0"
Jan 29 11:32:36.465977 containerd[1495]: 2025-01-29 11:32:36.412 [INFO][4759] ipam/ipam_plugin.go 265: Auto assigning IP ContainerID="4ad33d5f904b2102a448346a5144c9534ec0dffc40559e3997e0c372a26eef72" HandleID="k8s-pod-network.4ad33d5f904b2102a448346a5144c9534ec0dffc40559e3997e0c372a26eef72" Workload="localhost-k8s-calico--kube--controllers--748549c4c9--7d2cf-eth0" assignArgs=ipam.AutoAssignArgs{Num4:1, Num6:0, HandleID:(*string)(0xc000309980), Attrs:map[string]string{"namespace":"calico-system", "node":"localhost", "pod":"calico-kube-controllers-748549c4c9-7d2cf", "timestamp":"2025-01-29 11:32:36.395853036 +0000 UTC"}, Hostname:"localhost", IPv4Pools:[]net.IPNet{}, IPv6Pools:[]net.IPNet{}, MaxBlocksPerHost:0, HostReservedAttrIPv4s:(*ipam.HostReservedAttr)(nil), HostReservedAttrIPv6s:(*ipam.HostReservedAttr)(nil), IntendedUse:"Workload"}
Jan 29 11:32:36.465977 containerd[1495]: 2025-01-29 11:32:36.412 [INFO][4759] ipam/ipam_plugin.go 353: About to acquire host-wide IPAM lock.
Jan 29 11:32:36.465977 containerd[1495]: 2025-01-29 11:32:36.412 [INFO][4759] ipam/ipam_plugin.go 368: Acquired host-wide IPAM lock.
Jan 29 11:32:36.465977 containerd[1495]: 2025-01-29 11:32:36.412 [INFO][4759] ipam/ipam.go 107: Auto-assign 1 ipv4, 0 ipv6 addrs for host 'localhost'
Jan 29 11:32:36.465977 containerd[1495]: 2025-01-29 11:32:36.415 [INFO][4759] ipam/ipam.go 660: Looking up existing affinities for host handle="k8s-pod-network.4ad33d5f904b2102a448346a5144c9534ec0dffc40559e3997e0c372a26eef72" host="localhost"
Jan 29 11:32:36.465977 containerd[1495]: 2025-01-29 11:32:36.418 [INFO][4759] ipam/ipam.go 372: Looking up existing affinities for host host="localhost"
Jan 29 11:32:36.465977 containerd[1495]: 2025-01-29 11:32:36.421 [INFO][4759] ipam/ipam.go 489: Trying affinity for 192.168.88.128/26 host="localhost"
Jan 29 11:32:36.465977 containerd[1495]: 2025-01-29 11:32:36.423 [INFO][4759] ipam/ipam.go 155: Attempting to load block cidr=192.168.88.128/26 host="localhost"
Jan 29 11:32:36.465977 containerd[1495]: 2025-01-29 11:32:36.425 [INFO][4759] ipam/ipam.go 232: Affinity is confirmed and block has been loaded cidr=192.168.88.128/26 host="localhost"
Jan 29 11:32:36.465977 containerd[1495]: 2025-01-29 11:32:36.425 [INFO][4759] ipam/ipam.go 1180: Attempting to assign 1 addresses from block block=192.168.88.128/26 handle="k8s-pod-network.4ad33d5f904b2102a448346a5144c9534ec0dffc40559e3997e0c372a26eef72" host="localhost"
Jan 29 11:32:36.465977 containerd[1495]: 2025-01-29 11:32:36.426 [INFO][4759] ipam/ipam.go 1685: Creating new handle: k8s-pod-network.4ad33d5f904b2102a448346a5144c9534ec0dffc40559e3997e0c372a26eef72
Jan 29 11:32:36.465977 containerd[1495]: 2025-01-29 11:32:36.429 [INFO][4759] ipam/ipam.go 1203: Writing block in order to claim IPs block=192.168.88.128/26 handle="k8s-pod-network.4ad33d5f904b2102a448346a5144c9534ec0dffc40559e3997e0c372a26eef72" host="localhost"
Jan 29 11:32:36.465977 containerd[1495]: 2025-01-29 11:32:36.436 [INFO][4759] ipam/ipam.go 1216: Successfully claimed IPs: [192.168.88.129/26] block=192.168.88.128/26 handle="k8s-pod-network.4ad33d5f904b2102a448346a5144c9534ec0dffc40559e3997e0c372a26eef72" host="localhost"
Jan 29 11:32:36.465977 containerd[1495]: 2025-01-29 11:32:36.436 [INFO][4759] ipam/ipam.go 847: Auto-assigned 1 out of 1 IPv4s: [192.168.88.129/26] handle="k8s-pod-network.4ad33d5f904b2102a448346a5144c9534ec0dffc40559e3997e0c372a26eef72" host="localhost"
Jan 29 11:32:36.465977 containerd[1495]: 2025-01-29 11:32:36.436 [INFO][4759] ipam/ipam_plugin.go 374: Released host-wide IPAM lock.
Jan 29 11:32:36.465977 containerd[1495]: 2025-01-29 11:32:36.436 [INFO][4759] ipam/ipam_plugin.go 283: Calico CNI IPAM assigned addresses IPv4=[192.168.88.129/26] IPv6=[] ContainerID="4ad33d5f904b2102a448346a5144c9534ec0dffc40559e3997e0c372a26eef72" HandleID="k8s-pod-network.4ad33d5f904b2102a448346a5144c9534ec0dffc40559e3997e0c372a26eef72" Workload="localhost-k8s-calico--kube--controllers--748549c4c9--7d2cf-eth0"
Jan 29 11:32:36.467379 containerd[1495]: 2025-01-29 11:32:36.439 [INFO][4668] cni-plugin/k8s.go 386: Populated endpoint ContainerID="4ad33d5f904b2102a448346a5144c9534ec0dffc40559e3997e0c372a26eef72" Namespace="calico-system" Pod="calico-kube-controllers-748549c4c9-7d2cf" WorkloadEndpoint="localhost-k8s-calico--kube--controllers--748549c4c9--7d2cf-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"localhost-k8s-calico--kube--controllers--748549c4c9--7d2cf-eth0", GenerateName:"calico-kube-controllers-748549c4c9-", Namespace:"calico-system", SelfLink:"", UID:"dfb9e1ad-f94c-4aa8-a1d0-d67fe50cc0e9", ResourceVersion:"786", Generation:0, CreationTimestamp:time.Date(2025, time.January, 29, 11, 32, 14, 0, time.Local), DeletionTimestamp:<nil>, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"app.kubernetes.io/name":"calico-kube-controllers", "k8s-app":"calico-kube-controllers", "pod-template-hash":"748549c4c9", "projectcalico.org/namespace":"calico-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"calico-kube-controllers"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"localhost", ContainerID:"", Pod:"calico-kube-controllers-748549c4c9-7d2cf", Endpoint:"eth0", ServiceAccountName:"calico-kube-controllers", IPNetworks:[]string{"192.168.88.129/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-system", "ksa.calico-system.calico-kube-controllers"}, InterfaceName:"cali3688a07fa4d", MAC:"", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil)}}
Jan 29 11:32:36.467379 containerd[1495]: 2025-01-29 11:32:36.439 [INFO][4668] cni-plugin/k8s.go 387: Calico CNI using IPs: [192.168.88.129/32] ContainerID="4ad33d5f904b2102a448346a5144c9534ec0dffc40559e3997e0c372a26eef72" Namespace="calico-system" Pod="calico-kube-controllers-748549c4c9-7d2cf" WorkloadEndpoint="localhost-k8s-calico--kube--controllers--748549c4c9--7d2cf-eth0"
Jan 29 11:32:36.467379 containerd[1495]: 2025-01-29 11:32:36.439 [INFO][4668] cni-plugin/dataplane_linux.go 69: Setting the host side veth name to cali3688a07fa4d ContainerID="4ad33d5f904b2102a448346a5144c9534ec0dffc40559e3997e0c372a26eef72" Namespace="calico-system" Pod="calico-kube-controllers-748549c4c9-7d2cf" WorkloadEndpoint="localhost-k8s-calico--kube--controllers--748549c4c9--7d2cf-eth0"
Jan 29 11:32:36.467379 containerd[1495]: 2025-01-29 11:32:36.448 [INFO][4668] cni-plugin/dataplane_linux.go 508: Disabling IPv4 forwarding ContainerID="4ad33d5f904b2102a448346a5144c9534ec0dffc40559e3997e0c372a26eef72" Namespace="calico-system" Pod="calico-kube-controllers-748549c4c9-7d2cf" WorkloadEndpoint="localhost-k8s-calico--kube--controllers--748549c4c9--7d2cf-eth0"
Jan 29 11:32:36.467379 containerd[1495]: 2025-01-29 11:32:36.449 [INFO][4668] cni-plugin/k8s.go 414: Added Mac, interface name, and active container ID to endpoint ContainerID="4ad33d5f904b2102a448346a5144c9534ec0dffc40559e3997e0c372a26eef72" Namespace="calico-system" Pod="calico-kube-controllers-748549c4c9-7d2cf" WorkloadEndpoint="localhost-k8s-calico--kube--controllers--748549c4c9--7d2cf-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"localhost-k8s-calico--kube--controllers--748549c4c9--7d2cf-eth0", GenerateName:"calico-kube-controllers-748549c4c9-", Namespace:"calico-system", SelfLink:"", UID:"dfb9e1ad-f94c-4aa8-a1d0-d67fe50cc0e9", ResourceVersion:"786", Generation:0, CreationTimestamp:time.Date(2025, time.January, 29, 11, 32, 14, 0, time.Local), DeletionTimestamp:<nil>, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"app.kubernetes.io/name":"calico-kube-controllers", "k8s-app":"calico-kube-controllers", "pod-template-hash":"748549c4c9", "projectcalico.org/namespace":"calico-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"calico-kube-controllers"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"localhost", ContainerID:"4ad33d5f904b2102a448346a5144c9534ec0dffc40559e3997e0c372a26eef72", Pod:"calico-kube-controllers-748549c4c9-7d2cf", Endpoint:"eth0", ServiceAccountName:"calico-kube-controllers", IPNetworks:[]string{"192.168.88.129/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-system", "ksa.calico-system.calico-kube-controllers"}, InterfaceName:"cali3688a07fa4d", MAC:"e6:01:a0:5c:f6:78", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil)}}
Jan 29 11:32:36.467379 containerd[1495]: 2025-01-29 11:32:36.460 [INFO][4668] cni-plugin/k8s.go 500: Wrote updated endpoint to datastore ContainerID="4ad33d5f904b2102a448346a5144c9534ec0dffc40559e3997e0c372a26eef72" Namespace="calico-system" Pod="calico-kube-controllers-748549c4c9-7d2cf" WorkloadEndpoint="localhost-k8s-calico--kube--controllers--748549c4c9--7d2cf-eth0"
Jan 29 11:32:36.468080 systemd-networkd[1410]: calif35a999af1a: Link UP
Jan 29 11:32:36.468288 systemd-networkd[1410]: calif35a999af1a: Gained carrier
Jan 29 11:32:36.486108 containerd[1495]: 2025-01-29 11:32:36.281 [INFO][4688] cni-plugin/utils.go 100: File /var/lib/calico/mtu does not exist
Jan 29 11:32:36.486108 containerd[1495]: 2025-01-29 11:32:36.300 [INFO][4688] cni-plugin/plugin.go 325: Calico CNI found existing endpoint: &{{WorkloadEndpoint projectcalico.org/v3} {localhost-k8s-csi--node--driver--qtzv2-eth0 csi-node-driver- calico-system eb49b472-01c5-4cb5-84d5-9a1a2c4b969d 619 0 2025-01-29 11:32:14 +0000 UTC map[app.kubernetes.io/name:csi-node-driver controller-revision-hash:65bf684474 k8s-app:csi-node-driver name:csi-node-driver pod-template-generation:1 projectcalico.org/namespace:calico-system projectcalico.org/orchestrator:k8s projectcalico.org/serviceaccount:csi-node-driver] map[] [] [] []} {k8s localhost csi-node-driver-qtzv2 eth0 csi-node-driver [] [] [kns.calico-system ksa.calico-system.csi-node-driver] calif35a999af1a [] []}} ContainerID="c7031c2bd20faef7d4c5a33a0582c94920a65f9bd7f61b97d96440b21f035cbd" Namespace="calico-system" Pod="csi-node-driver-qtzv2" WorkloadEndpoint="localhost-k8s-csi--node--driver--qtzv2-"
Jan 29 11:32:36.486108 containerd[1495]: 2025-01-29 11:32:36.301 [INFO][4688] cni-plugin/k8s.go 77: Extracted identifiers for CmdAddK8s ContainerID="c7031c2bd20faef7d4c5a33a0582c94920a65f9bd7f61b97d96440b21f035cbd" Namespace="calico-system" Pod="csi-node-driver-qtzv2" WorkloadEndpoint="localhost-k8s-csi--node--driver--qtzv2-eth0"
Jan 29 11:32:36.486108 containerd[1495]: 2025-01-29 11:32:36.394 [INFO][4776] ipam/ipam_plugin.go 225: Calico CNI IPAM request count IPv4=1 IPv6=0 ContainerID="c7031c2bd20faef7d4c5a33a0582c94920a65f9bd7f61b97d96440b21f035cbd" HandleID="k8s-pod-network.c7031c2bd20faef7d4c5a33a0582c94920a65f9bd7f61b97d96440b21f035cbd" Workload="localhost-k8s-csi--node--driver--qtzv2-eth0"
Jan 29 11:32:36.486108 containerd[1495]: 2025-01-29 11:32:36.412 [INFO][4776] ipam/ipam_plugin.go 265: Auto assigning IP ContainerID="c7031c2bd20faef7d4c5a33a0582c94920a65f9bd7f61b97d96440b21f035cbd" HandleID="k8s-pod-network.c7031c2bd20faef7d4c5a33a0582c94920a65f9bd7f61b97d96440b21f035cbd" Workload="localhost-k8s-csi--node--driver--qtzv2-eth0" assignArgs=ipam.AutoAssignArgs{Num4:1, Num6:0, HandleID:(*string)(0xc0003acf20), Attrs:map[string]string{"namespace":"calico-system", "node":"localhost", "pod":"csi-node-driver-qtzv2", "timestamp":"2025-01-29 11:32:36.394910467 +0000 UTC"}, Hostname:"localhost", IPv4Pools:[]net.IPNet{}, IPv6Pools:[]net.IPNet{}, MaxBlocksPerHost:0, HostReservedAttrIPv4s:(*ipam.HostReservedAttr)(nil), HostReservedAttrIPv6s:(*ipam.HostReservedAttr)(nil), IntendedUse:"Workload"}
Jan 29 11:32:36.486108 containerd[1495]: 2025-01-29 11:32:36.413 [INFO][4776] ipam/ipam_plugin.go 353: About to acquire host-wide IPAM lock.
Jan 29 11:32:36.486108 containerd[1495]: 2025-01-29 11:32:36.436 [INFO][4776] ipam/ipam_plugin.go 368: Acquired host-wide IPAM lock.
Jan 29 11:32:36.486108 containerd[1495]: 2025-01-29 11:32:36.436 [INFO][4776] ipam/ipam.go 107: Auto-assign 1 ipv4, 0 ipv6 addrs for host 'localhost'
Jan 29 11:32:36.486108 containerd[1495]: 2025-01-29 11:32:36.438 [INFO][4776] ipam/ipam.go 660: Looking up existing affinities for host handle="k8s-pod-network.c7031c2bd20faef7d4c5a33a0582c94920a65f9bd7f61b97d96440b21f035cbd" host="localhost"
Jan 29 11:32:36.486108 containerd[1495]: 2025-01-29 11:32:36.442 [INFO][4776] ipam/ipam.go 372: Looking up existing affinities for host host="localhost"
Jan 29 11:32:36.486108 containerd[1495]: 2025-01-29 11:32:36.445 [INFO][4776] ipam/ipam.go 489: Trying affinity for 192.168.88.128/26 host="localhost"
Jan 29 11:32:36.486108 containerd[1495]: 2025-01-29 11:32:36.447 [INFO][4776] ipam/ipam.go 155: Attempting to load block cidr=192.168.88.128/26 host="localhost"
Jan 29 11:32:36.486108 containerd[1495]: 2025-01-29 11:32:36.450 [INFO][4776] ipam/ipam.go 232: Affinity is confirmed and block has been loaded cidr=192.168.88.128/26 host="localhost"
Jan 29 11:32:36.486108 containerd[1495]: 2025-01-29 11:32:36.450 [INFO][4776] ipam/ipam.go 1180: Attempting to assign 1 addresses from block block=192.168.88.128/26 handle="k8s-pod-network.c7031c2bd20faef7d4c5a33a0582c94920a65f9bd7f61b97d96440b21f035cbd" host="localhost"
Jan 29 11:32:36.486108 containerd[1495]: 2025-01-29 11:32:36.452 [INFO][4776] ipam/ipam.go 1685: Creating new handle: k8s-pod-network.c7031c2bd20faef7d4c5a33a0582c94920a65f9bd7f61b97d96440b21f035cbd
Jan 29 11:32:36.486108 containerd[1495]: 2025-01-29 11:32:36.455 [INFO][4776] ipam/ipam.go 1203: Writing block in order to claim IPs block=192.168.88.128/26 handle="k8s-pod-network.c7031c2bd20faef7d4c5a33a0582c94920a65f9bd7f61b97d96440b21f035cbd" host="localhost"
Jan 29 11:32:36.486108 containerd[1495]: 2025-01-29 11:32:36.461 [INFO][4776] ipam/ipam.go 1216: Successfully claimed IPs: [192.168.88.130/26] block=192.168.88.128/26 handle="k8s-pod-network.c7031c2bd20faef7d4c5a33a0582c94920a65f9bd7f61b97d96440b21f035cbd" host="localhost"
Jan 29 11:32:36.486108 containerd[1495]: 2025-01-29 11:32:36.461 [INFO][4776] ipam/ipam.go 847: Auto-assigned 1 out of 1 IPv4s: [192.168.88.130/26] handle="k8s-pod-network.c7031c2bd20faef7d4c5a33a0582c94920a65f9bd7f61b97d96440b21f035cbd" host="localhost"
Jan 29 11:32:36.486108 containerd[1495]: 2025-01-29 11:32:36.461 [INFO][4776] ipam/ipam_plugin.go 374: Released host-wide IPAM lock.
Jan 29 11:32:36.486108 containerd[1495]: 2025-01-29 11:32:36.461 [INFO][4776] ipam/ipam_plugin.go 283: Calico CNI IPAM assigned addresses IPv4=[192.168.88.130/26] IPv6=[] ContainerID="c7031c2bd20faef7d4c5a33a0582c94920a65f9bd7f61b97d96440b21f035cbd" HandleID="k8s-pod-network.c7031c2bd20faef7d4c5a33a0582c94920a65f9bd7f61b97d96440b21f035cbd" Workload="localhost-k8s-csi--node--driver--qtzv2-eth0"
Jan 29 11:32:36.487120 containerd[1495]: 2025-01-29 11:32:36.464 [INFO][4688] cni-plugin/k8s.go 386: Populated endpoint ContainerID="c7031c2bd20faef7d4c5a33a0582c94920a65f9bd7f61b97d96440b21f035cbd" Namespace="calico-system" Pod="csi-node-driver-qtzv2" WorkloadEndpoint="localhost-k8s-csi--node--driver--qtzv2-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"localhost-k8s-csi--node--driver--qtzv2-eth0", GenerateName:"csi-node-driver-", Namespace:"calico-system", SelfLink:"", UID:"eb49b472-01c5-4cb5-84d5-9a1a2c4b969d", ResourceVersion:"619", Generation:0, CreationTimestamp:time.Date(2025, time.January, 29, 11, 32, 14, 0, time.Local), DeletionTimestamp:<nil>, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"app.kubernetes.io/name":"csi-node-driver", "controller-revision-hash":"65bf684474", "k8s-app":"csi-node-driver", "name":"csi-node-driver", "pod-template-generation":"1", "projectcalico.org/namespace":"calico-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"csi-node-driver"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"localhost", ContainerID:"", Pod:"csi-node-driver-qtzv2", Endpoint:"eth0", ServiceAccountName:"csi-node-driver", IPNetworks:[]string{"192.168.88.130/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-system", "ksa.calico-system.csi-node-driver"}, InterfaceName:"calif35a999af1a", MAC:"", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil)}}
Jan 29 11:32:36.487120 containerd[1495]: 2025-01-29 11:32:36.464 [INFO][4688] cni-plugin/k8s.go 387: Calico CNI using IPs: [192.168.88.130/32] ContainerID="c7031c2bd20faef7d4c5a33a0582c94920a65f9bd7f61b97d96440b21f035cbd" Namespace="calico-system" Pod="csi-node-driver-qtzv2" WorkloadEndpoint="localhost-k8s-csi--node--driver--qtzv2-eth0"
Jan 29 11:32:36.487120 containerd[1495]: 2025-01-29 11:32:36.464 [INFO][4688] cni-plugin/dataplane_linux.go 69: Setting the host side veth name to calif35a999af1a ContainerID="c7031c2bd20faef7d4c5a33a0582c94920a65f9bd7f61b97d96440b21f035cbd" Namespace="calico-system" Pod="csi-node-driver-qtzv2" WorkloadEndpoint="localhost-k8s-csi--node--driver--qtzv2-eth0"
Jan 29 11:32:36.487120 containerd[1495]: 2025-01-29 11:32:36.469 [INFO][4688] cni-plugin/dataplane_linux.go 508: Disabling IPv4 forwarding ContainerID="c7031c2bd20faef7d4c5a33a0582c94920a65f9bd7f61b97d96440b21f035cbd" Namespace="calico-system" Pod="csi-node-driver-qtzv2" WorkloadEndpoint="localhost-k8s-csi--node--driver--qtzv2-eth0"
Jan 29 11:32:36.487120 containerd[1495]: 2025-01-29 11:32:36.469 [INFO][4688] cni-plugin/k8s.go 414: Added Mac, interface name, and active container ID to endpoint ContainerID="c7031c2bd20faef7d4c5a33a0582c94920a65f9bd7f61b97d96440b21f035cbd" Namespace="calico-system" Pod="csi-node-driver-qtzv2" WorkloadEndpoint="localhost-k8s-csi--node--driver--qtzv2-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"localhost-k8s-csi--node--driver--qtzv2-eth0", GenerateName:"csi-node-driver-", Namespace:"calico-system", SelfLink:"", UID:"eb49b472-01c5-4cb5-84d5-9a1a2c4b969d", ResourceVersion:"619", Generation:0, CreationTimestamp:time.Date(2025, time.January, 29, 11, 32, 14, 0, time.Local), DeletionTimestamp:<nil>, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"app.kubernetes.io/name":"csi-node-driver", "controller-revision-hash":"65bf684474", "k8s-app":"csi-node-driver", "name":"csi-node-driver", "pod-template-generation":"1", "projectcalico.org/namespace":"calico-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"csi-node-driver"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"localhost", ContainerID:"c7031c2bd20faef7d4c5a33a0582c94920a65f9bd7f61b97d96440b21f035cbd", Pod:"csi-node-driver-qtzv2", Endpoint:"eth0", ServiceAccountName:"csi-node-driver", IPNetworks:[]string{"192.168.88.130/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-system", "ksa.calico-system.csi-node-driver"}, InterfaceName:"calif35a999af1a", MAC:"4e:9c:51:eb:15:ee", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil)}}
Jan 29 11:32:36.487120 containerd[1495]: 2025-01-29 11:32:36.477 [INFO][4688] cni-plugin/k8s.go 500: Wrote updated endpoint to datastore ContainerID="c7031c2bd20faef7d4c5a33a0582c94920a65f9bd7f61b97d96440b21f035cbd" Namespace="calico-system" Pod="csi-node-driver-qtzv2" WorkloadEndpoint="localhost-k8s-csi--node--driver--qtzv2-eth0"
Jan 29 11:32:36.497806 systemd-networkd[1410]: cali4230ab0d14a: Link UP
Jan 29 11:32:36.498024 systemd-networkd[1410]: cali4230ab0d14a: Gained carrier
Jan 29 11:32:36.508395 containerd[1495]: 2025-01-29 11:32:36.291 [INFO][4711] cni-plugin/utils.go 100: File /var/lib/calico/mtu does not exist
Jan 29 11:32:36.508395 containerd[1495]: 2025-01-29 11:32:36.309 [INFO][4711] cni-plugin/plugin.go 325: Calico CNI found existing endpoint: &{{WorkloadEndpoint projectcalico.org/v3} {localhost-k8s-calico--apiserver--7f846fb45c--qnzlv-eth0 calico-apiserver-7f846fb45c- calico-apiserver 6a3ccfc9-9edc-4b98-a77a-7df17efe2895 790 0 2025-01-29 11:32:13 +0000 UTC map[apiserver:true app.kubernetes.io/name:calico-apiserver k8s-app:calico-apiserver pod-template-hash:7f846fb45c projectcalico.org/namespace:calico-apiserver projectcalico.org/orchestrator:k8s projectcalico.org/serviceaccount:calico-apiserver] map[] [] [] []} {k8s localhost calico-apiserver-7f846fb45c-qnzlv eth0 calico-apiserver [] [] [kns.calico-apiserver ksa.calico-apiserver.calico-apiserver] cali4230ab0d14a [] []}} ContainerID="638e11ec792144c0dec74ece1f329597fa473f09e56ff22d402a550f748799fc" Namespace="calico-apiserver" Pod="calico-apiserver-7f846fb45c-qnzlv" WorkloadEndpoint="localhost-k8s-calico--apiserver--7f846fb45c--qnzlv-"
Jan 29 11:32:36.508395 containerd[1495]: 2025-01-29 11:32:36.309 [INFO][4711] cni-plugin/k8s.go 77: Extracted identifiers for CmdAddK8s ContainerID="638e11ec792144c0dec74ece1f329597fa473f09e56ff22d402a550f748799fc" Namespace="calico-apiserver" Pod="calico-apiserver-7f846fb45c-qnzlv" WorkloadEndpoint="localhost-k8s-calico--apiserver--7f846fb45c--qnzlv-eth0"
Jan 29 11:32:36.508395 containerd[1495]: 2025-01-29 11:32:36.399 [INFO][4781] ipam/ipam_plugin.go 225: Calico CNI IPAM request count IPv4=1 IPv6=0 ContainerID="638e11ec792144c0dec74ece1f329597fa473f09e56ff22d402a550f748799fc" HandleID="k8s-pod-network.638e11ec792144c0dec74ece1f329597fa473f09e56ff22d402a550f748799fc" Workload="localhost-k8s-calico--apiserver--7f846fb45c--qnzlv-eth0"
Jan 29 11:32:36.508395 containerd[1495]: 2025-01-29 11:32:36.414 [INFO][4781] ipam/ipam_plugin.go 265: Auto assigning IP ContainerID="638e11ec792144c0dec74ece1f329597fa473f09e56ff22d402a550f748799fc" HandleID="k8s-pod-network.638e11ec792144c0dec74ece1f329597fa473f09e56ff22d402a550f748799fc" Workload="localhost-k8s-calico--apiserver--7f846fb45c--qnzlv-eth0" assignArgs=ipam.AutoAssignArgs{Num4:1, Num6:0, HandleID:(*string)(0xc0002f9e60), Attrs:map[string]string{"namespace":"calico-apiserver", "node":"localhost", "pod":"calico-apiserver-7f846fb45c-qnzlv", "timestamp":"2025-01-29 11:32:36.399614783 +0000 UTC"}, Hostname:"localhost", IPv4Pools:[]net.IPNet{}, IPv6Pools:[]net.IPNet{}, MaxBlocksPerHost:0, HostReservedAttrIPv4s:(*ipam.HostReservedAttr)(nil), HostReservedAttrIPv6s:(*ipam.HostReservedAttr)(nil), IntendedUse:"Workload"}
Jan 29 11:32:36.508395 containerd[1495]: 2025-01-29 11:32:36.414 [INFO][4781] ipam/ipam_plugin.go 353: About to acquire host-wide IPAM lock.
Jan 29 11:32:36.508395 containerd[1495]: 2025-01-29 11:32:36.463 [INFO][4781] ipam/ipam_plugin.go 368: Acquired host-wide IPAM lock.
Jan 29 11:32:36.508395 containerd[1495]: 2025-01-29 11:32:36.463 [INFO][4781] ipam/ipam.go 107: Auto-assign 1 ipv4, 0 ipv6 addrs for host 'localhost' Jan 29 11:32:36.508395 containerd[1495]: 2025-01-29 11:32:36.467 [INFO][4781] ipam/ipam.go 660: Looking up existing affinities for host handle="k8s-pod-network.638e11ec792144c0dec74ece1f329597fa473f09e56ff22d402a550f748799fc" host="localhost" Jan 29 11:32:36.508395 containerd[1495]: 2025-01-29 11:32:36.471 [INFO][4781] ipam/ipam.go 372: Looking up existing affinities for host host="localhost" Jan 29 11:32:36.508395 containerd[1495]: 2025-01-29 11:32:36.475 [INFO][4781] ipam/ipam.go 489: Trying affinity for 192.168.88.128/26 host="localhost" Jan 29 11:32:36.508395 containerd[1495]: 2025-01-29 11:32:36.478 [INFO][4781] ipam/ipam.go 155: Attempting to load block cidr=192.168.88.128/26 host="localhost" Jan 29 11:32:36.508395 containerd[1495]: 2025-01-29 11:32:36.480 [INFO][4781] ipam/ipam.go 232: Affinity is confirmed and block has been loaded cidr=192.168.88.128/26 host="localhost" Jan 29 11:32:36.508395 containerd[1495]: 2025-01-29 11:32:36.480 [INFO][4781] ipam/ipam.go 1180: Attempting to assign 1 addresses from block block=192.168.88.128/26 handle="k8s-pod-network.638e11ec792144c0dec74ece1f329597fa473f09e56ff22d402a550f748799fc" host="localhost" Jan 29 11:32:36.508395 containerd[1495]: 2025-01-29 11:32:36.481 [INFO][4781] ipam/ipam.go 1685: Creating new handle: k8s-pod-network.638e11ec792144c0dec74ece1f329597fa473f09e56ff22d402a550f748799fc Jan 29 11:32:36.508395 containerd[1495]: 2025-01-29 11:32:36.484 [INFO][4781] ipam/ipam.go 1203: Writing block in order to claim IPs block=192.168.88.128/26 handle="k8s-pod-network.638e11ec792144c0dec74ece1f329597fa473f09e56ff22d402a550f748799fc" host="localhost" Jan 29 11:32:36.508395 containerd[1495]: 2025-01-29 11:32:36.489 [INFO][4781] ipam/ipam.go 1216: Successfully claimed IPs: [192.168.88.131/26] block=192.168.88.128/26 
handle="k8s-pod-network.638e11ec792144c0dec74ece1f329597fa473f09e56ff22d402a550f748799fc" host="localhost" Jan 29 11:32:36.508395 containerd[1495]: 2025-01-29 11:32:36.490 [INFO][4781] ipam/ipam.go 847: Auto-assigned 1 out of 1 IPv4s: [192.168.88.131/26] handle="k8s-pod-network.638e11ec792144c0dec74ece1f329597fa473f09e56ff22d402a550f748799fc" host="localhost" Jan 29 11:32:36.508395 containerd[1495]: 2025-01-29 11:32:36.490 [INFO][4781] ipam/ipam_plugin.go 374: Released host-wide IPAM lock. Jan 29 11:32:36.508395 containerd[1495]: 2025-01-29 11:32:36.490 [INFO][4781] ipam/ipam_plugin.go 283: Calico CNI IPAM assigned addresses IPv4=[192.168.88.131/26] IPv6=[] ContainerID="638e11ec792144c0dec74ece1f329597fa473f09e56ff22d402a550f748799fc" HandleID="k8s-pod-network.638e11ec792144c0dec74ece1f329597fa473f09e56ff22d402a550f748799fc" Workload="localhost-k8s-calico--apiserver--7f846fb45c--qnzlv-eth0" Jan 29 11:32:36.509155 containerd[1495]: 2025-01-29 11:32:36.494 [INFO][4711] cni-plugin/k8s.go 386: Populated endpoint ContainerID="638e11ec792144c0dec74ece1f329597fa473f09e56ff22d402a550f748799fc" Namespace="calico-apiserver" Pod="calico-apiserver-7f846fb45c-qnzlv" WorkloadEndpoint="localhost-k8s-calico--apiserver--7f846fb45c--qnzlv-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"localhost-k8s-calico--apiserver--7f846fb45c--qnzlv-eth0", GenerateName:"calico-apiserver-7f846fb45c-", Namespace:"calico-apiserver", SelfLink:"", UID:"6a3ccfc9-9edc-4b98-a77a-7df17efe2895", ResourceVersion:"790", Generation:0, CreationTimestamp:time.Date(2025, time.January, 29, 11, 32, 13, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"apiserver":"true", "app.kubernetes.io/name":"calico-apiserver", "k8s-app":"calico-apiserver", "pod-template-hash":"7f846fb45c", "projectcalico.org/namespace":"calico-apiserver", 
"projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"calico-apiserver"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"localhost", ContainerID:"", Pod:"calico-apiserver-7f846fb45c-qnzlv", Endpoint:"eth0", ServiceAccountName:"calico-apiserver", IPNetworks:[]string{"192.168.88.131/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-apiserver", "ksa.calico-apiserver.calico-apiserver"}, InterfaceName:"cali4230ab0d14a", MAC:"", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil)}} Jan 29 11:32:36.509155 containerd[1495]: 2025-01-29 11:32:36.495 [INFO][4711] cni-plugin/k8s.go 387: Calico CNI using IPs: [192.168.88.131/32] ContainerID="638e11ec792144c0dec74ece1f329597fa473f09e56ff22d402a550f748799fc" Namespace="calico-apiserver" Pod="calico-apiserver-7f846fb45c-qnzlv" WorkloadEndpoint="localhost-k8s-calico--apiserver--7f846fb45c--qnzlv-eth0" Jan 29 11:32:36.509155 containerd[1495]: 2025-01-29 11:32:36.495 [INFO][4711] cni-plugin/dataplane_linux.go 69: Setting the host side veth name to cali4230ab0d14a ContainerID="638e11ec792144c0dec74ece1f329597fa473f09e56ff22d402a550f748799fc" Namespace="calico-apiserver" Pod="calico-apiserver-7f846fb45c-qnzlv" WorkloadEndpoint="localhost-k8s-calico--apiserver--7f846fb45c--qnzlv-eth0" Jan 29 11:32:36.509155 containerd[1495]: 2025-01-29 11:32:36.496 [INFO][4711] cni-plugin/dataplane_linux.go 508: Disabling IPv4 forwarding ContainerID="638e11ec792144c0dec74ece1f329597fa473f09e56ff22d402a550f748799fc" Namespace="calico-apiserver" Pod="calico-apiserver-7f846fb45c-qnzlv" WorkloadEndpoint="localhost-k8s-calico--apiserver--7f846fb45c--qnzlv-eth0" Jan 29 11:32:36.509155 containerd[1495]: 2025-01-29 11:32:36.496 [INFO][4711] cni-plugin/k8s.go 414: Added Mac, interface name, and active 
container ID to endpoint ContainerID="638e11ec792144c0dec74ece1f329597fa473f09e56ff22d402a550f748799fc" Namespace="calico-apiserver" Pod="calico-apiserver-7f846fb45c-qnzlv" WorkloadEndpoint="localhost-k8s-calico--apiserver--7f846fb45c--qnzlv-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"localhost-k8s-calico--apiserver--7f846fb45c--qnzlv-eth0", GenerateName:"calico-apiserver-7f846fb45c-", Namespace:"calico-apiserver", SelfLink:"", UID:"6a3ccfc9-9edc-4b98-a77a-7df17efe2895", ResourceVersion:"790", Generation:0, CreationTimestamp:time.Date(2025, time.January, 29, 11, 32, 13, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"apiserver":"true", "app.kubernetes.io/name":"calico-apiserver", "k8s-app":"calico-apiserver", "pod-template-hash":"7f846fb45c", "projectcalico.org/namespace":"calico-apiserver", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"calico-apiserver"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"localhost", ContainerID:"638e11ec792144c0dec74ece1f329597fa473f09e56ff22d402a550f748799fc", Pod:"calico-apiserver-7f846fb45c-qnzlv", Endpoint:"eth0", ServiceAccountName:"calico-apiserver", IPNetworks:[]string{"192.168.88.131/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-apiserver", "ksa.calico-apiserver.calico-apiserver"}, InterfaceName:"cali4230ab0d14a", MAC:"ee:1d:92:cd:0b:7b", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil)}} Jan 29 11:32:36.509155 containerd[1495]: 2025-01-29 11:32:36.504 [INFO][4711] cni-plugin/k8s.go 500: Wrote updated endpoint to datastore 
ContainerID="638e11ec792144c0dec74ece1f329597fa473f09e56ff22d402a550f748799fc" Namespace="calico-apiserver" Pod="calico-apiserver-7f846fb45c-qnzlv" WorkloadEndpoint="localhost-k8s-calico--apiserver--7f846fb45c--qnzlv-eth0" Jan 29 11:32:36.524774 containerd[1495]: time="2025-01-29T11:32:36.524671787Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Jan 29 11:32:36.524774 containerd[1495]: time="2025-01-29T11:32:36.524732410Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Jan 29 11:32:36.524774 containerd[1495]: time="2025-01-29T11:32:36.524745946Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Jan 29 11:32:36.525015 containerd[1495]: time="2025-01-29T11:32:36.524840764Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Jan 29 11:32:36.542004 containerd[1495]: time="2025-01-29T11:32:36.540639707Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Jan 29 11:32:36.542004 containerd[1495]: time="2025-01-29T11:32:36.540704138Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Jan 29 11:32:36.542004 containerd[1495]: time="2025-01-29T11:32:36.540717062Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Jan 29 11:32:36.542004 containerd[1495]: time="2025-01-29T11:32:36.540811840Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." 
runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Jan 29 11:32:36.545861 systemd-networkd[1410]: cali504630069c8: Link UP Jan 29 11:32:36.549628 systemd-networkd[1410]: cali504630069c8: Gained carrier Jan 29 11:32:36.568549 containerd[1495]: time="2025-01-29T11:32:36.561940414Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Jan 29 11:32:36.570058 containerd[1495]: time="2025-01-29T11:32:36.568840430Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Jan 29 11:32:36.570058 containerd[1495]: time="2025-01-29T11:32:36.569651321Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Jan 29 11:32:36.577347 containerd[1495]: 2025-01-29 11:32:36.268 [INFO][4726] cni-plugin/utils.go 100: File /var/lib/calico/mtu does not exist Jan 29 11:32:36.577347 containerd[1495]: 2025-01-29 11:32:36.281 [INFO][4726] cni-plugin/plugin.go 325: Calico CNI found existing endpoint: &{{WorkloadEndpoint projectcalico.org/v3} {localhost-k8s-coredns--7db6d8ff4d--pgcpq-eth0 coredns-7db6d8ff4d- kube-system ce5f6883-5ebc-45bd-8052-20316de2d012 783 0 2025-01-29 11:32:05 +0000 UTC map[k8s-app:kube-dns pod-template-hash:7db6d8ff4d projectcalico.org/namespace:kube-system projectcalico.org/orchestrator:k8s projectcalico.org/serviceaccount:coredns] map[] [] [] []} {k8s localhost coredns-7db6d8ff4d-pgcpq eth0 coredns [] [] [kns.kube-system ksa.kube-system.coredns] cali504630069c8 [{dns UDP 53 0 } {dns-tcp TCP 53 0 } {metrics TCP 9153 0 }] []}} ContainerID="2a63e02c9ebf7baba155265a1242e7a24f39e40692af7a79117462accef2bdb5" Namespace="kube-system" Pod="coredns-7db6d8ff4d-pgcpq" WorkloadEndpoint="localhost-k8s-coredns--7db6d8ff4d--pgcpq-" Jan 29 11:32:36.577347 containerd[1495]: 2025-01-29 11:32:36.282 [INFO][4726] cni-plugin/k8s.go 77: Extracted identifiers for 
CmdAddK8s ContainerID="2a63e02c9ebf7baba155265a1242e7a24f39e40692af7a79117462accef2bdb5" Namespace="kube-system" Pod="coredns-7db6d8ff4d-pgcpq" WorkloadEndpoint="localhost-k8s-coredns--7db6d8ff4d--pgcpq-eth0" Jan 29 11:32:36.577347 containerd[1495]: 2025-01-29 11:32:36.394 [INFO][4756] ipam/ipam_plugin.go 225: Calico CNI IPAM request count IPv4=1 IPv6=0 ContainerID="2a63e02c9ebf7baba155265a1242e7a24f39e40692af7a79117462accef2bdb5" HandleID="k8s-pod-network.2a63e02c9ebf7baba155265a1242e7a24f39e40692af7a79117462accef2bdb5" Workload="localhost-k8s-coredns--7db6d8ff4d--pgcpq-eth0" Jan 29 11:32:36.577347 containerd[1495]: 2025-01-29 11:32:36.414 [INFO][4756] ipam/ipam_plugin.go 265: Auto assigning IP ContainerID="2a63e02c9ebf7baba155265a1242e7a24f39e40692af7a79117462accef2bdb5" HandleID="k8s-pod-network.2a63e02c9ebf7baba155265a1242e7a24f39e40692af7a79117462accef2bdb5" Workload="localhost-k8s-coredns--7db6d8ff4d--pgcpq-eth0" assignArgs=ipam.AutoAssignArgs{Num4:1, Num6:0, HandleID:(*string)(0xc000120220), Attrs:map[string]string{"namespace":"kube-system", "node":"localhost", "pod":"coredns-7db6d8ff4d-pgcpq", "timestamp":"2025-01-29 11:32:36.394809167 +0000 UTC"}, Hostname:"localhost", IPv4Pools:[]net.IPNet{}, IPv6Pools:[]net.IPNet{}, MaxBlocksPerHost:0, HostReservedAttrIPv4s:(*ipam.HostReservedAttr)(nil), HostReservedAttrIPv6s:(*ipam.HostReservedAttr)(nil), IntendedUse:"Workload"} Jan 29 11:32:36.577347 containerd[1495]: 2025-01-29 11:32:36.414 [INFO][4756] ipam/ipam_plugin.go 353: About to acquire host-wide IPAM lock. Jan 29 11:32:36.577347 containerd[1495]: 2025-01-29 11:32:36.490 [INFO][4756] ipam/ipam_plugin.go 368: Acquired host-wide IPAM lock. 
Jan 29 11:32:36.577347 containerd[1495]: 2025-01-29 11:32:36.491 [INFO][4756] ipam/ipam.go 107: Auto-assign 1 ipv4, 0 ipv6 addrs for host 'localhost' Jan 29 11:32:36.577347 containerd[1495]: 2025-01-29 11:32:36.492 [INFO][4756] ipam/ipam.go 660: Looking up existing affinities for host handle="k8s-pod-network.2a63e02c9ebf7baba155265a1242e7a24f39e40692af7a79117462accef2bdb5" host="localhost" Jan 29 11:32:36.577347 containerd[1495]: 2025-01-29 11:32:36.497 [INFO][4756] ipam/ipam.go 372: Looking up existing affinities for host host="localhost" Jan 29 11:32:36.577347 containerd[1495]: 2025-01-29 11:32:36.507 [INFO][4756] ipam/ipam.go 489: Trying affinity for 192.168.88.128/26 host="localhost" Jan 29 11:32:36.577347 containerd[1495]: 2025-01-29 11:32:36.513 [INFO][4756] ipam/ipam.go 155: Attempting to load block cidr=192.168.88.128/26 host="localhost" Jan 29 11:32:36.577347 containerd[1495]: 2025-01-29 11:32:36.516 [INFO][4756] ipam/ipam.go 232: Affinity is confirmed and block has been loaded cidr=192.168.88.128/26 host="localhost" Jan 29 11:32:36.577347 containerd[1495]: 2025-01-29 11:32:36.516 [INFO][4756] ipam/ipam.go 1180: Attempting to assign 1 addresses from block block=192.168.88.128/26 handle="k8s-pod-network.2a63e02c9ebf7baba155265a1242e7a24f39e40692af7a79117462accef2bdb5" host="localhost" Jan 29 11:32:36.577347 containerd[1495]: 2025-01-29 11:32:36.518 [INFO][4756] ipam/ipam.go 1685: Creating new handle: k8s-pod-network.2a63e02c9ebf7baba155265a1242e7a24f39e40692af7a79117462accef2bdb5 Jan 29 11:32:36.577347 containerd[1495]: 2025-01-29 11:32:36.524 [INFO][4756] ipam/ipam.go 1203: Writing block in order to claim IPs block=192.168.88.128/26 handle="k8s-pod-network.2a63e02c9ebf7baba155265a1242e7a24f39e40692af7a79117462accef2bdb5" host="localhost" Jan 29 11:32:36.577347 containerd[1495]: 2025-01-29 11:32:36.529 [INFO][4756] ipam/ipam.go 1216: Successfully claimed IPs: [192.168.88.132/26] block=192.168.88.128/26 
handle="k8s-pod-network.2a63e02c9ebf7baba155265a1242e7a24f39e40692af7a79117462accef2bdb5" host="localhost" Jan 29 11:32:36.577347 containerd[1495]: 2025-01-29 11:32:36.529 [INFO][4756] ipam/ipam.go 847: Auto-assigned 1 out of 1 IPv4s: [192.168.88.132/26] handle="k8s-pod-network.2a63e02c9ebf7baba155265a1242e7a24f39e40692af7a79117462accef2bdb5" host="localhost" Jan 29 11:32:36.577347 containerd[1495]: 2025-01-29 11:32:36.529 [INFO][4756] ipam/ipam_plugin.go 374: Released host-wide IPAM lock. Jan 29 11:32:36.577347 containerd[1495]: 2025-01-29 11:32:36.529 [INFO][4756] ipam/ipam_plugin.go 283: Calico CNI IPAM assigned addresses IPv4=[192.168.88.132/26] IPv6=[] ContainerID="2a63e02c9ebf7baba155265a1242e7a24f39e40692af7a79117462accef2bdb5" HandleID="k8s-pod-network.2a63e02c9ebf7baba155265a1242e7a24f39e40692af7a79117462accef2bdb5" Workload="localhost-k8s-coredns--7db6d8ff4d--pgcpq-eth0" Jan 29 11:32:36.578049 containerd[1495]: 2025-01-29 11:32:36.539 [INFO][4726] cni-plugin/k8s.go 386: Populated endpoint ContainerID="2a63e02c9ebf7baba155265a1242e7a24f39e40692af7a79117462accef2bdb5" Namespace="kube-system" Pod="coredns-7db6d8ff4d-pgcpq" WorkloadEndpoint="localhost-k8s-coredns--7db6d8ff4d--pgcpq-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"localhost-k8s-coredns--7db6d8ff4d--pgcpq-eth0", GenerateName:"coredns-7db6d8ff4d-", Namespace:"kube-system", SelfLink:"", UID:"ce5f6883-5ebc-45bd-8052-20316de2d012", ResourceVersion:"783", Generation:0, CreationTimestamp:time.Date(2025, time.January, 29, 11, 32, 5, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"k8s-app":"kube-dns", "pod-template-hash":"7db6d8ff4d", "projectcalico.org/namespace":"kube-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"coredns"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), 
Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"localhost", ContainerID:"", Pod:"coredns-7db6d8ff4d-pgcpq", Endpoint:"eth0", ServiceAccountName:"coredns", IPNetworks:[]string{"192.168.88.132/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.kube-system", "ksa.kube-system.coredns"}, InterfaceName:"cali504630069c8", MAC:"", Ports:[]v3.WorkloadEndpointPort{v3.WorkloadEndpointPort{Name:"dns", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"UDP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"dns-tcp", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"metrics", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x23c1, HostPort:0x0, HostIP:""}}, AllowSpoofedSourcePrefixes:[]string(nil)}} Jan 29 11:32:36.578049 containerd[1495]: 2025-01-29 11:32:36.540 [INFO][4726] cni-plugin/k8s.go 387: Calico CNI using IPs: [192.168.88.132/32] ContainerID="2a63e02c9ebf7baba155265a1242e7a24f39e40692af7a79117462accef2bdb5" Namespace="kube-system" Pod="coredns-7db6d8ff4d-pgcpq" WorkloadEndpoint="localhost-k8s-coredns--7db6d8ff4d--pgcpq-eth0" Jan 29 11:32:36.578049 containerd[1495]: 2025-01-29 11:32:36.540 [INFO][4726] cni-plugin/dataplane_linux.go 69: Setting the host side veth name to cali504630069c8 ContainerID="2a63e02c9ebf7baba155265a1242e7a24f39e40692af7a79117462accef2bdb5" Namespace="kube-system" Pod="coredns-7db6d8ff4d-pgcpq" WorkloadEndpoint="localhost-k8s-coredns--7db6d8ff4d--pgcpq-eth0" Jan 29 11:32:36.578049 containerd[1495]: 2025-01-29 11:32:36.550 [INFO][4726] cni-plugin/dataplane_linux.go 508: Disabling IPv4 forwarding ContainerID="2a63e02c9ebf7baba155265a1242e7a24f39e40692af7a79117462accef2bdb5" Namespace="kube-system" Pod="coredns-7db6d8ff4d-pgcpq" WorkloadEndpoint="localhost-k8s-coredns--7db6d8ff4d--pgcpq-eth0" Jan 29 
11:32:36.578049 containerd[1495]: 2025-01-29 11:32:36.550 [INFO][4726] cni-plugin/k8s.go 414: Added Mac, interface name, and active container ID to endpoint ContainerID="2a63e02c9ebf7baba155265a1242e7a24f39e40692af7a79117462accef2bdb5" Namespace="kube-system" Pod="coredns-7db6d8ff4d-pgcpq" WorkloadEndpoint="localhost-k8s-coredns--7db6d8ff4d--pgcpq-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"localhost-k8s-coredns--7db6d8ff4d--pgcpq-eth0", GenerateName:"coredns-7db6d8ff4d-", Namespace:"kube-system", SelfLink:"", UID:"ce5f6883-5ebc-45bd-8052-20316de2d012", ResourceVersion:"783", Generation:0, CreationTimestamp:time.Date(2025, time.January, 29, 11, 32, 5, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"k8s-app":"kube-dns", "pod-template-hash":"7db6d8ff4d", "projectcalico.org/namespace":"kube-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"coredns"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"localhost", ContainerID:"2a63e02c9ebf7baba155265a1242e7a24f39e40692af7a79117462accef2bdb5", Pod:"coredns-7db6d8ff4d-pgcpq", Endpoint:"eth0", ServiceAccountName:"coredns", IPNetworks:[]string{"192.168.88.132/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.kube-system", "ksa.kube-system.coredns"}, InterfaceName:"cali504630069c8", MAC:"22:52:90:54:b5:b9", Ports:[]v3.WorkloadEndpointPort{v3.WorkloadEndpointPort{Name:"dns", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"UDP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"dns-tcp", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x35, HostPort:0x0, HostIP:""}, 
v3.WorkloadEndpointPort{Name:"metrics", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x23c1, HostPort:0x0, HostIP:""}}, AllowSpoofedSourcePrefixes:[]string(nil)}} Jan 29 11:32:36.578049 containerd[1495]: 2025-01-29 11:32:36.566 [INFO][4726] cni-plugin/k8s.go 500: Wrote updated endpoint to datastore ContainerID="2a63e02c9ebf7baba155265a1242e7a24f39e40692af7a79117462accef2bdb5" Namespace="kube-system" Pod="coredns-7db6d8ff4d-pgcpq" WorkloadEndpoint="localhost-k8s-coredns--7db6d8ff4d--pgcpq-eth0" Jan 29 11:32:36.578715 containerd[1495]: time="2025-01-29T11:32:36.572785552Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Jan 29 11:32:36.582595 systemd[1]: Started cri-containerd-c7031c2bd20faef7d4c5a33a0582c94920a65f9bd7f61b97d96440b21f035cbd.scope - libcontainer container c7031c2bd20faef7d4c5a33a0582c94920a65f9bd7f61b97d96440b21f035cbd. Jan 29 11:32:36.582818 systemd-networkd[1410]: califb45b084c0f: Link UP Jan 29 11:32:36.583001 systemd-networkd[1410]: califb45b084c0f: Gained carrier Jan 29 11:32:36.603315 containerd[1495]: 2025-01-29 11:32:36.250 [INFO][4674] cni-plugin/utils.go 100: File /var/lib/calico/mtu does not exist Jan 29 11:32:36.603315 containerd[1495]: 2025-01-29 11:32:36.281 [INFO][4674] cni-plugin/plugin.go 325: Calico CNI found existing endpoint: &{{WorkloadEndpoint projectcalico.org/v3} {localhost-k8s-calico--apiserver--7f846fb45c--49zts-eth0 calico-apiserver-7f846fb45c- calico-apiserver 355edf79-8969-4232-bff0-a38923ed3709 791 0 2025-01-29 11:32:13 +0000 UTC map[apiserver:true app.kubernetes.io/name:calico-apiserver k8s-app:calico-apiserver pod-template-hash:7f846fb45c projectcalico.org/namespace:calico-apiserver projectcalico.org/orchestrator:k8s projectcalico.org/serviceaccount:calico-apiserver] map[] [] [] []} {k8s localhost calico-apiserver-7f846fb45c-49zts eth0 calico-apiserver [] [] [kns.calico-apiserver ksa.calico-apiserver.calico-apiserver] 
califb45b084c0f [] []}} ContainerID="38aa2d247dcbac1e27ce33495b22ca6fbac8a4c4bf7050ff12511d62b16b9eb9" Namespace="calico-apiserver" Pod="calico-apiserver-7f846fb45c-49zts" WorkloadEndpoint="localhost-k8s-calico--apiserver--7f846fb45c--49zts-" Jan 29 11:32:36.603315 containerd[1495]: 2025-01-29 11:32:36.282 [INFO][4674] cni-plugin/k8s.go 77: Extracted identifiers for CmdAddK8s ContainerID="38aa2d247dcbac1e27ce33495b22ca6fbac8a4c4bf7050ff12511d62b16b9eb9" Namespace="calico-apiserver" Pod="calico-apiserver-7f846fb45c-49zts" WorkloadEndpoint="localhost-k8s-calico--apiserver--7f846fb45c--49zts-eth0" Jan 29 11:32:36.603315 containerd[1495]: 2025-01-29 11:32:36.403 [INFO][4760] ipam/ipam_plugin.go 225: Calico CNI IPAM request count IPv4=1 IPv6=0 ContainerID="38aa2d247dcbac1e27ce33495b22ca6fbac8a4c4bf7050ff12511d62b16b9eb9" HandleID="k8s-pod-network.38aa2d247dcbac1e27ce33495b22ca6fbac8a4c4bf7050ff12511d62b16b9eb9" Workload="localhost-k8s-calico--apiserver--7f846fb45c--49zts-eth0" Jan 29 11:32:36.603315 containerd[1495]: 2025-01-29 11:32:36.415 [INFO][4760] ipam/ipam_plugin.go 265: Auto assigning IP ContainerID="38aa2d247dcbac1e27ce33495b22ca6fbac8a4c4bf7050ff12511d62b16b9eb9" HandleID="k8s-pod-network.38aa2d247dcbac1e27ce33495b22ca6fbac8a4c4bf7050ff12511d62b16b9eb9" Workload="localhost-k8s-calico--apiserver--7f846fb45c--49zts-eth0" assignArgs=ipam.AutoAssignArgs{Num4:1, Num6:0, HandleID:(*string)(0xc0004002f0), Attrs:map[string]string{"namespace":"calico-apiserver", "node":"localhost", "pod":"calico-apiserver-7f846fb45c-49zts", "timestamp":"2025-01-29 11:32:36.402920265 +0000 UTC"}, Hostname:"localhost", IPv4Pools:[]net.IPNet{}, IPv6Pools:[]net.IPNet{}, MaxBlocksPerHost:0, HostReservedAttrIPv4s:(*ipam.HostReservedAttr)(nil), HostReservedAttrIPv6s:(*ipam.HostReservedAttr)(nil), IntendedUse:"Workload"} Jan 29 11:32:36.603315 containerd[1495]: 2025-01-29 11:32:36.415 [INFO][4760] ipam/ipam_plugin.go 353: About to acquire host-wide IPAM lock. 
Jan 29 11:32:36.603315 containerd[1495]: 2025-01-29 11:32:36.529 [INFO][4760] ipam/ipam_plugin.go 368: Acquired host-wide IPAM lock. Jan 29 11:32:36.603315 containerd[1495]: 2025-01-29 11:32:36.529 [INFO][4760] ipam/ipam.go 107: Auto-assign 1 ipv4, 0 ipv6 addrs for host 'localhost' Jan 29 11:32:36.603315 containerd[1495]: 2025-01-29 11:32:36.531 [INFO][4760] ipam/ipam.go 660: Looking up existing affinities for host handle="k8s-pod-network.38aa2d247dcbac1e27ce33495b22ca6fbac8a4c4bf7050ff12511d62b16b9eb9" host="localhost" Jan 29 11:32:36.603315 containerd[1495]: 2025-01-29 11:32:36.537 [INFO][4760] ipam/ipam.go 372: Looking up existing affinities for host host="localhost" Jan 29 11:32:36.603315 containerd[1495]: 2025-01-29 11:32:36.544 [INFO][4760] ipam/ipam.go 489: Trying affinity for 192.168.88.128/26 host="localhost" Jan 29 11:32:36.603315 containerd[1495]: 2025-01-29 11:32:36.546 [INFO][4760] ipam/ipam.go 155: Attempting to load block cidr=192.168.88.128/26 host="localhost" Jan 29 11:32:36.603315 containerd[1495]: 2025-01-29 11:32:36.548 [INFO][4760] ipam/ipam.go 232: Affinity is confirmed and block has been loaded cidr=192.168.88.128/26 host="localhost" Jan 29 11:32:36.603315 containerd[1495]: 2025-01-29 11:32:36.548 [INFO][4760] ipam/ipam.go 1180: Attempting to assign 1 addresses from block block=192.168.88.128/26 handle="k8s-pod-network.38aa2d247dcbac1e27ce33495b22ca6fbac8a4c4bf7050ff12511d62b16b9eb9" host="localhost" Jan 29 11:32:36.603315 containerd[1495]: 2025-01-29 11:32:36.549 [INFO][4760] ipam/ipam.go 1685: Creating new handle: k8s-pod-network.38aa2d247dcbac1e27ce33495b22ca6fbac8a4c4bf7050ff12511d62b16b9eb9 Jan 29 11:32:36.603315 containerd[1495]: 2025-01-29 11:32:36.557 [INFO][4760] ipam/ipam.go 1203: Writing block in order to claim IPs block=192.168.88.128/26 handle="k8s-pod-network.38aa2d247dcbac1e27ce33495b22ca6fbac8a4c4bf7050ff12511d62b16b9eb9" host="localhost" Jan 29 11:32:36.603315 containerd[1495]: 2025-01-29 11:32:36.566 [INFO][4760] 
ipam/ipam.go 1216: Successfully claimed IPs: [192.168.88.133/26] block=192.168.88.128/26 handle="k8s-pod-network.38aa2d247dcbac1e27ce33495b22ca6fbac8a4c4bf7050ff12511d62b16b9eb9" host="localhost" Jan 29 11:32:36.603315 containerd[1495]: 2025-01-29 11:32:36.566 [INFO][4760] ipam/ipam.go 847: Auto-assigned 1 out of 1 IPv4s: [192.168.88.133/26] handle="k8s-pod-network.38aa2d247dcbac1e27ce33495b22ca6fbac8a4c4bf7050ff12511d62b16b9eb9" host="localhost" Jan 29 11:32:36.603315 containerd[1495]: 2025-01-29 11:32:36.566 [INFO][4760] ipam/ipam_plugin.go 374: Released host-wide IPAM lock. Jan 29 11:32:36.603315 containerd[1495]: 2025-01-29 11:32:36.566 [INFO][4760] ipam/ipam_plugin.go 283: Calico CNI IPAM assigned addresses IPv4=[192.168.88.133/26] IPv6=[] ContainerID="38aa2d247dcbac1e27ce33495b22ca6fbac8a4c4bf7050ff12511d62b16b9eb9" HandleID="k8s-pod-network.38aa2d247dcbac1e27ce33495b22ca6fbac8a4c4bf7050ff12511d62b16b9eb9" Workload="localhost-k8s-calico--apiserver--7f846fb45c--49zts-eth0" Jan 29 11:32:36.603936 containerd[1495]: 2025-01-29 11:32:36.573 [INFO][4674] cni-plugin/k8s.go 386: Populated endpoint ContainerID="38aa2d247dcbac1e27ce33495b22ca6fbac8a4c4bf7050ff12511d62b16b9eb9" Namespace="calico-apiserver" Pod="calico-apiserver-7f846fb45c-49zts" WorkloadEndpoint="localhost-k8s-calico--apiserver--7f846fb45c--49zts-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"localhost-k8s-calico--apiserver--7f846fb45c--49zts-eth0", GenerateName:"calico-apiserver-7f846fb45c-", Namespace:"calico-apiserver", SelfLink:"", UID:"355edf79-8969-4232-bff0-a38923ed3709", ResourceVersion:"791", Generation:0, CreationTimestamp:time.Date(2025, time.January, 29, 11, 32, 13, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"apiserver":"true", "app.kubernetes.io/name":"calico-apiserver", "k8s-app":"calico-apiserver", 
"pod-template-hash":"7f846fb45c", "projectcalico.org/namespace":"calico-apiserver", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"calico-apiserver"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"localhost", ContainerID:"", Pod:"calico-apiserver-7f846fb45c-49zts", Endpoint:"eth0", ServiceAccountName:"calico-apiserver", IPNetworks:[]string{"192.168.88.133/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-apiserver", "ksa.calico-apiserver.calico-apiserver"}, InterfaceName:"califb45b084c0f", MAC:"", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil)}} Jan 29 11:32:36.603936 containerd[1495]: 2025-01-29 11:32:36.574 [INFO][4674] cni-plugin/k8s.go 387: Calico CNI using IPs: [192.168.88.133/32] ContainerID="38aa2d247dcbac1e27ce33495b22ca6fbac8a4c4bf7050ff12511d62b16b9eb9" Namespace="calico-apiserver" Pod="calico-apiserver-7f846fb45c-49zts" WorkloadEndpoint="localhost-k8s-calico--apiserver--7f846fb45c--49zts-eth0" Jan 29 11:32:36.603936 containerd[1495]: 2025-01-29 11:32:36.574 [INFO][4674] cni-plugin/dataplane_linux.go 69: Setting the host side veth name to califb45b084c0f ContainerID="38aa2d247dcbac1e27ce33495b22ca6fbac8a4c4bf7050ff12511d62b16b9eb9" Namespace="calico-apiserver" Pod="calico-apiserver-7f846fb45c-49zts" WorkloadEndpoint="localhost-k8s-calico--apiserver--7f846fb45c--49zts-eth0" Jan 29 11:32:36.603936 containerd[1495]: 2025-01-29 11:32:36.584 [INFO][4674] cni-plugin/dataplane_linux.go 508: Disabling IPv4 forwarding ContainerID="38aa2d247dcbac1e27ce33495b22ca6fbac8a4c4bf7050ff12511d62b16b9eb9" Namespace="calico-apiserver" Pod="calico-apiserver-7f846fb45c-49zts" WorkloadEndpoint="localhost-k8s-calico--apiserver--7f846fb45c--49zts-eth0" Jan 29 11:32:36.603936 containerd[1495]: 2025-01-29 
11:32:36.584 [INFO][4674] cni-plugin/k8s.go 414: Added Mac, interface name, and active container ID to endpoint ContainerID="38aa2d247dcbac1e27ce33495b22ca6fbac8a4c4bf7050ff12511d62b16b9eb9" Namespace="calico-apiserver" Pod="calico-apiserver-7f846fb45c-49zts" WorkloadEndpoint="localhost-k8s-calico--apiserver--7f846fb45c--49zts-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"localhost-k8s-calico--apiserver--7f846fb45c--49zts-eth0", GenerateName:"calico-apiserver-7f846fb45c-", Namespace:"calico-apiserver", SelfLink:"", UID:"355edf79-8969-4232-bff0-a38923ed3709", ResourceVersion:"791", Generation:0, CreationTimestamp:time.Date(2025, time.January, 29, 11, 32, 13, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"apiserver":"true", "app.kubernetes.io/name":"calico-apiserver", "k8s-app":"calico-apiserver", "pod-template-hash":"7f846fb45c", "projectcalico.org/namespace":"calico-apiserver", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"calico-apiserver"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"localhost", ContainerID:"38aa2d247dcbac1e27ce33495b22ca6fbac8a4c4bf7050ff12511d62b16b9eb9", Pod:"calico-apiserver-7f846fb45c-49zts", Endpoint:"eth0", ServiceAccountName:"calico-apiserver", IPNetworks:[]string{"192.168.88.133/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-apiserver", "ksa.calico-apiserver.calico-apiserver"}, InterfaceName:"califb45b084c0f", MAC:"0a:49:2b:05:6a:f4", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil)}} Jan 29 11:32:36.603936 containerd[1495]: 2025-01-29 11:32:36.598 [INFO][4674] cni-plugin/k8s.go 500: Wrote updated endpoint 
to datastore ContainerID="38aa2d247dcbac1e27ce33495b22ca6fbac8a4c4bf7050ff12511d62b16b9eb9" Namespace="calico-apiserver" Pod="calico-apiserver-7f846fb45c-49zts" WorkloadEndpoint="localhost-k8s-calico--apiserver--7f846fb45c--49zts-eth0" Jan 29 11:32:36.604579 systemd[1]: Started cri-containerd-4ad33d5f904b2102a448346a5144c9534ec0dffc40559e3997e0c372a26eef72.scope - libcontainer container 4ad33d5f904b2102a448346a5144c9534ec0dffc40559e3997e0c372a26eef72. Jan 29 11:32:36.609210 systemd[1]: Started cri-containerd-638e11ec792144c0dec74ece1f329597fa473f09e56ff22d402a550f748799fc.scope - libcontainer container 638e11ec792144c0dec74ece1f329597fa473f09e56ff22d402a550f748799fc. Jan 29 11:32:36.614968 systemd-resolved[1331]: Failed to determine the local hostname and LLMNR/mDNS names, ignoring: No such device or address Jan 29 11:32:36.623572 systemd-networkd[1410]: calidf46db22dc3: Link UP Jan 29 11:32:36.625774 systemd-networkd[1410]: calidf46db22dc3: Gained carrier Jan 29 11:32:36.635020 containerd[1495]: time="2025-01-29T11:32:36.633823427Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Jan 29 11:32:36.640323 containerd[1495]: time="2025-01-29T11:32:36.638101634Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Jan 29 11:32:36.640323 containerd[1495]: time="2025-01-29T11:32:36.640139206Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Jan 29 11:32:36.640490 containerd[1495]: time="2025-01-29T11:32:36.640302794Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." 
runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Jan 29 11:32:36.641580 containerd[1495]: time="2025-01-29T11:32:36.641549914Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:csi-node-driver-qtzv2,Uid:eb49b472-01c5-4cb5-84d5-9a1a2c4b969d,Namespace:calico-system,Attempt:5,} returns sandbox id \"c7031c2bd20faef7d4c5a33a0582c94920a65f9bd7f61b97d96440b21f035cbd\"" Jan 29 11:32:36.648916 containerd[1495]: time="2025-01-29T11:32:36.648877032Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/csi:v3.29.1\"" Jan 29 11:32:36.654137 containerd[1495]: 2025-01-29 11:32:36.271 [INFO][4693] cni-plugin/utils.go 100: File /var/lib/calico/mtu does not exist Jan 29 11:32:36.654137 containerd[1495]: 2025-01-29 11:32:36.285 [INFO][4693] cni-plugin/plugin.go 325: Calico CNI found existing endpoint: &{{WorkloadEndpoint projectcalico.org/v3} {localhost-k8s-coredns--7db6d8ff4d--bpqc6-eth0 coredns-7db6d8ff4d- kube-system 59ad9644-a5c7-4480-bc20-dbeaa0a967d1 789 0 2025-01-29 11:32:05 +0000 UTC map[k8s-app:kube-dns pod-template-hash:7db6d8ff4d projectcalico.org/namespace:kube-system projectcalico.org/orchestrator:k8s projectcalico.org/serviceaccount:coredns] map[] [] [] []} {k8s localhost coredns-7db6d8ff4d-bpqc6 eth0 coredns [] [] [kns.kube-system ksa.kube-system.coredns] calidf46db22dc3 [{dns UDP 53 0 } {dns-tcp TCP 53 0 } {metrics TCP 9153 0 }] []}} ContainerID="7438675319815f1c2e42849bf7340541e498b102627b5b1325d994d5c287353e" Namespace="kube-system" Pod="coredns-7db6d8ff4d-bpqc6" WorkloadEndpoint="localhost-k8s-coredns--7db6d8ff4d--bpqc6-" Jan 29 11:32:36.654137 containerd[1495]: 2025-01-29 11:32:36.285 [INFO][4693] cni-plugin/k8s.go 77: Extracted identifiers for CmdAddK8s ContainerID="7438675319815f1c2e42849bf7340541e498b102627b5b1325d994d5c287353e" Namespace="kube-system" Pod="coredns-7db6d8ff4d-bpqc6" WorkloadEndpoint="localhost-k8s-coredns--7db6d8ff4d--bpqc6-eth0" Jan 29 11:32:36.654137 containerd[1495]: 2025-01-29 11:32:36.403 [INFO][4758] ipam/ipam_plugin.go 
225: Calico CNI IPAM request count IPv4=1 IPv6=0 ContainerID="7438675319815f1c2e42849bf7340541e498b102627b5b1325d994d5c287353e" HandleID="k8s-pod-network.7438675319815f1c2e42849bf7340541e498b102627b5b1325d994d5c287353e" Workload="localhost-k8s-coredns--7db6d8ff4d--bpqc6-eth0" Jan 29 11:32:36.654137 containerd[1495]: 2025-01-29 11:32:36.415 [INFO][4758] ipam/ipam_plugin.go 265: Auto assigning IP ContainerID="7438675319815f1c2e42849bf7340541e498b102627b5b1325d994d5c287353e" HandleID="k8s-pod-network.7438675319815f1c2e42849bf7340541e498b102627b5b1325d994d5c287353e" Workload="localhost-k8s-coredns--7db6d8ff4d--bpqc6-eth0" assignArgs=ipam.AutoAssignArgs{Num4:1, Num6:0, HandleID:(*string)(0xc000307b60), Attrs:map[string]string{"namespace":"kube-system", "node":"localhost", "pod":"coredns-7db6d8ff4d-bpqc6", "timestamp":"2025-01-29 11:32:36.403692955 +0000 UTC"}, Hostname:"localhost", IPv4Pools:[]net.IPNet{}, IPv6Pools:[]net.IPNet{}, MaxBlocksPerHost:0, HostReservedAttrIPv4s:(*ipam.HostReservedAttr)(nil), HostReservedAttrIPv6s:(*ipam.HostReservedAttr)(nil), IntendedUse:"Workload"} Jan 29 11:32:36.654137 containerd[1495]: 2025-01-29 11:32:36.416 [INFO][4758] ipam/ipam_plugin.go 353: About to acquire host-wide IPAM lock. Jan 29 11:32:36.654137 containerd[1495]: 2025-01-29 11:32:36.566 [INFO][4758] ipam/ipam_plugin.go 368: Acquired host-wide IPAM lock. 
Jan 29 11:32:36.654137 containerd[1495]: 2025-01-29 11:32:36.567 [INFO][4758] ipam/ipam.go 107: Auto-assign 1 ipv4, 0 ipv6 addrs for host 'localhost' Jan 29 11:32:36.654137 containerd[1495]: 2025-01-29 11:32:36.574 [INFO][4758] ipam/ipam.go 660: Looking up existing affinities for host handle="k8s-pod-network.7438675319815f1c2e42849bf7340541e498b102627b5b1325d994d5c287353e" host="localhost" Jan 29 11:32:36.654137 containerd[1495]: 2025-01-29 11:32:36.580 [INFO][4758] ipam/ipam.go 372: Looking up existing affinities for host host="localhost" Jan 29 11:32:36.654137 containerd[1495]: 2025-01-29 11:32:36.586 [INFO][4758] ipam/ipam.go 489: Trying affinity for 192.168.88.128/26 host="localhost" Jan 29 11:32:36.654137 containerd[1495]: 2025-01-29 11:32:36.588 [INFO][4758] ipam/ipam.go 155: Attempting to load block cidr=192.168.88.128/26 host="localhost" Jan 29 11:32:36.654137 containerd[1495]: 2025-01-29 11:32:36.594 [INFO][4758] ipam/ipam.go 232: Affinity is confirmed and block has been loaded cidr=192.168.88.128/26 host="localhost" Jan 29 11:32:36.654137 containerd[1495]: 2025-01-29 11:32:36.594 [INFO][4758] ipam/ipam.go 1180: Attempting to assign 1 addresses from block block=192.168.88.128/26 handle="k8s-pod-network.7438675319815f1c2e42849bf7340541e498b102627b5b1325d994d5c287353e" host="localhost" Jan 29 11:32:36.654137 containerd[1495]: 2025-01-29 11:32:36.598 [INFO][4758] ipam/ipam.go 1685: Creating new handle: k8s-pod-network.7438675319815f1c2e42849bf7340541e498b102627b5b1325d994d5c287353e Jan 29 11:32:36.654137 containerd[1495]: 2025-01-29 11:32:36.607 [INFO][4758] ipam/ipam.go 1203: Writing block in order to claim IPs block=192.168.88.128/26 handle="k8s-pod-network.7438675319815f1c2e42849bf7340541e498b102627b5b1325d994d5c287353e" host="localhost" Jan 29 11:32:36.654137 containerd[1495]: 2025-01-29 11:32:36.613 [INFO][4758] ipam/ipam.go 1216: Successfully claimed IPs: [192.168.88.134/26] block=192.168.88.128/26 
handle="k8s-pod-network.7438675319815f1c2e42849bf7340541e498b102627b5b1325d994d5c287353e" host="localhost" Jan 29 11:32:36.654137 containerd[1495]: 2025-01-29 11:32:36.613 [INFO][4758] ipam/ipam.go 847: Auto-assigned 1 out of 1 IPv4s: [192.168.88.134/26] handle="k8s-pod-network.7438675319815f1c2e42849bf7340541e498b102627b5b1325d994d5c287353e" host="localhost" Jan 29 11:32:36.654137 containerd[1495]: 2025-01-29 11:32:36.613 [INFO][4758] ipam/ipam_plugin.go 374: Released host-wide IPAM lock. Jan 29 11:32:36.654137 containerd[1495]: 2025-01-29 11:32:36.613 [INFO][4758] ipam/ipam_plugin.go 283: Calico CNI IPAM assigned addresses IPv4=[192.168.88.134/26] IPv6=[] ContainerID="7438675319815f1c2e42849bf7340541e498b102627b5b1325d994d5c287353e" HandleID="k8s-pod-network.7438675319815f1c2e42849bf7340541e498b102627b5b1325d994d5c287353e" Workload="localhost-k8s-coredns--7db6d8ff4d--bpqc6-eth0" Jan 29 11:32:36.655578 containerd[1495]: 2025-01-29 11:32:36.618 [INFO][4693] cni-plugin/k8s.go 386: Populated endpoint ContainerID="7438675319815f1c2e42849bf7340541e498b102627b5b1325d994d5c287353e" Namespace="kube-system" Pod="coredns-7db6d8ff4d-bpqc6" WorkloadEndpoint="localhost-k8s-coredns--7db6d8ff4d--bpqc6-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"localhost-k8s-coredns--7db6d8ff4d--bpqc6-eth0", GenerateName:"coredns-7db6d8ff4d-", Namespace:"kube-system", SelfLink:"", UID:"59ad9644-a5c7-4480-bc20-dbeaa0a967d1", ResourceVersion:"789", Generation:0, CreationTimestamp:time.Date(2025, time.January, 29, 11, 32, 5, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"k8s-app":"kube-dns", "pod-template-hash":"7db6d8ff4d", "projectcalico.org/namespace":"kube-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"coredns"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), 
Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"localhost", ContainerID:"", Pod:"coredns-7db6d8ff4d-bpqc6", Endpoint:"eth0", ServiceAccountName:"coredns", IPNetworks:[]string{"192.168.88.134/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.kube-system", "ksa.kube-system.coredns"}, InterfaceName:"calidf46db22dc3", MAC:"", Ports:[]v3.WorkloadEndpointPort{v3.WorkloadEndpointPort{Name:"dns", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"UDP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"dns-tcp", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"metrics", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x23c1, HostPort:0x0, HostIP:""}}, AllowSpoofedSourcePrefixes:[]string(nil)}} Jan 29 11:32:36.655578 containerd[1495]: 2025-01-29 11:32:36.618 [INFO][4693] cni-plugin/k8s.go 387: Calico CNI using IPs: [192.168.88.134/32] ContainerID="7438675319815f1c2e42849bf7340541e498b102627b5b1325d994d5c287353e" Namespace="kube-system" Pod="coredns-7db6d8ff4d-bpqc6" WorkloadEndpoint="localhost-k8s-coredns--7db6d8ff4d--bpqc6-eth0" Jan 29 11:32:36.655578 containerd[1495]: 2025-01-29 11:32:36.618 [INFO][4693] cni-plugin/dataplane_linux.go 69: Setting the host side veth name to calidf46db22dc3 ContainerID="7438675319815f1c2e42849bf7340541e498b102627b5b1325d994d5c287353e" Namespace="kube-system" Pod="coredns-7db6d8ff4d-bpqc6" WorkloadEndpoint="localhost-k8s-coredns--7db6d8ff4d--bpqc6-eth0" Jan 29 11:32:36.655578 containerd[1495]: 2025-01-29 11:32:36.626 [INFO][4693] cni-plugin/dataplane_linux.go 508: Disabling IPv4 forwarding ContainerID="7438675319815f1c2e42849bf7340541e498b102627b5b1325d994d5c287353e" Namespace="kube-system" Pod="coredns-7db6d8ff4d-bpqc6" WorkloadEndpoint="localhost-k8s-coredns--7db6d8ff4d--bpqc6-eth0" Jan 29 
11:32:36.655578 containerd[1495]: 2025-01-29 11:32:36.627 [INFO][4693] cni-plugin/k8s.go 414: Added Mac, interface name, and active container ID to endpoint ContainerID="7438675319815f1c2e42849bf7340541e498b102627b5b1325d994d5c287353e" Namespace="kube-system" Pod="coredns-7db6d8ff4d-bpqc6" WorkloadEndpoint="localhost-k8s-coredns--7db6d8ff4d--bpqc6-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"localhost-k8s-coredns--7db6d8ff4d--bpqc6-eth0", GenerateName:"coredns-7db6d8ff4d-", Namespace:"kube-system", SelfLink:"", UID:"59ad9644-a5c7-4480-bc20-dbeaa0a967d1", ResourceVersion:"789", Generation:0, CreationTimestamp:time.Date(2025, time.January, 29, 11, 32, 5, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"k8s-app":"kube-dns", "pod-template-hash":"7db6d8ff4d", "projectcalico.org/namespace":"kube-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"coredns"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"localhost", ContainerID:"7438675319815f1c2e42849bf7340541e498b102627b5b1325d994d5c287353e", Pod:"coredns-7db6d8ff4d-bpqc6", Endpoint:"eth0", ServiceAccountName:"coredns", IPNetworks:[]string{"192.168.88.134/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.kube-system", "ksa.kube-system.coredns"}, InterfaceName:"calidf46db22dc3", MAC:"5a:c9:ea:d7:83:28", Ports:[]v3.WorkloadEndpointPort{v3.WorkloadEndpointPort{Name:"dns", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"UDP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"dns-tcp", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x35, HostPort:0x0, HostIP:""}, 
v3.WorkloadEndpointPort{Name:"metrics", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x23c1, HostPort:0x0, HostIP:""}}, AllowSpoofedSourcePrefixes:[]string(nil)}} Jan 29 11:32:36.655578 containerd[1495]: 2025-01-29 11:32:36.647 [INFO][4693] cni-plugin/k8s.go 500: Wrote updated endpoint to datastore ContainerID="7438675319815f1c2e42849bf7340541e498b102627b5b1325d994d5c287353e" Namespace="kube-system" Pod="coredns-7db6d8ff4d-bpqc6" WorkloadEndpoint="localhost-k8s-coredns--7db6d8ff4d--bpqc6-eth0" Jan 29 11:32:36.670362 systemd-resolved[1331]: Failed to determine the local hostname and LLMNR/mDNS names, ignoring: No such device or address Jan 29 11:32:36.673188 containerd[1495]: time="2025-01-29T11:32:36.673097756Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Jan 29 11:32:36.673350 containerd[1495]: time="2025-01-29T11:32:36.673325854Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Jan 29 11:32:36.673484 containerd[1495]: time="2025-01-29T11:32:36.673460317Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Jan 29 11:32:36.674662 containerd[1495]: time="2025-01-29T11:32:36.674575629Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Jan 29 11:32:36.676193 systemd[1]: Started cri-containerd-2a63e02c9ebf7baba155265a1242e7a24f39e40692af7a79117462accef2bdb5.scope - libcontainer container 2a63e02c9ebf7baba155265a1242e7a24f39e40692af7a79117462accef2bdb5. 
Jan 29 11:32:36.682471 systemd-resolved[1331]: Failed to determine the local hostname and LLMNR/mDNS names, ignoring: No such device or address Jan 29 11:32:36.692834 systemd-resolved[1331]: Failed to determine the local hostname and LLMNR/mDNS names, ignoring: No such device or address Jan 29 11:32:36.710960 systemd[1]: Started cri-containerd-38aa2d247dcbac1e27ce33495b22ca6fbac8a4c4bf7050ff12511d62b16b9eb9.scope - libcontainer container 38aa2d247dcbac1e27ce33495b22ca6fbac8a4c4bf7050ff12511d62b16b9eb9. Jan 29 11:32:36.714000 containerd[1495]: time="2025-01-29T11:32:36.713584992Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-kube-controllers-748549c4c9-7d2cf,Uid:dfb9e1ad-f94c-4aa8-a1d0-d67fe50cc0e9,Namespace:calico-system,Attempt:5,} returns sandbox id \"4ad33d5f904b2102a448346a5144c9534ec0dffc40559e3997e0c372a26eef72\"" Jan 29 11:32:36.727632 containerd[1495]: time="2025-01-29T11:32:36.727404381Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-7db6d8ff4d-pgcpq,Uid:ce5f6883-5ebc-45bd-8052-20316de2d012,Namespace:kube-system,Attempt:5,} returns sandbox id \"2a63e02c9ebf7baba155265a1242e7a24f39e40692af7a79117462accef2bdb5\"" Jan 29 11:32:36.728490 kubelet[2697]: E0129 11:32:36.728468 2697 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jan 29 11:32:36.736091 containerd[1495]: time="2025-01-29T11:32:36.735745431Z" level=info msg="CreateContainer within sandbox \"2a63e02c9ebf7baba155265a1242e7a24f39e40692af7a79117462accef2bdb5\" for container &ContainerMetadata{Name:coredns,Attempt:0,}" Jan 29 11:32:36.738246 systemd-resolved[1331]: Failed to determine the local hostname and LLMNR/mDNS names, ignoring: No such device or address Jan 29 11:32:36.738957 containerd[1495]: time="2025-01-29T11:32:36.736441036Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." 
runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Jan 29 11:32:36.738957 containerd[1495]: time="2025-01-29T11:32:36.736492984Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Jan 29 11:32:36.738957 containerd[1495]: time="2025-01-29T11:32:36.736507842Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Jan 29 11:32:36.738957 containerd[1495]: time="2025-01-29T11:32:36.736592170Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Jan 29 11:32:36.747407 containerd[1495]: time="2025-01-29T11:32:36.746551566Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-apiserver-7f846fb45c-qnzlv,Uid:6a3ccfc9-9edc-4b98-a77a-7df17efe2895,Namespace:calico-apiserver,Attempt:5,} returns sandbox id \"638e11ec792144c0dec74ece1f329597fa473f09e56ff22d402a550f748799fc\"" Jan 29 11:32:36.762558 systemd[1]: Started cri-containerd-7438675319815f1c2e42849bf7340541e498b102627b5b1325d994d5c287353e.scope - libcontainer container 7438675319815f1c2e42849bf7340541e498b102627b5b1325d994d5c287353e. 
Jan 29 11:32:36.773881 containerd[1495]: time="2025-01-29T11:32:36.773841561Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-apiserver-7f846fb45c-49zts,Uid:355edf79-8969-4232-bff0-a38923ed3709,Namespace:calico-apiserver,Attempt:5,} returns sandbox id \"38aa2d247dcbac1e27ce33495b22ca6fbac8a4c4bf7050ff12511d62b16b9eb9\"" Jan 29 11:32:36.778505 systemd-resolved[1331]: Failed to determine the local hostname and LLMNR/mDNS names, ignoring: No such device or address Jan 29 11:32:36.804326 containerd[1495]: time="2025-01-29T11:32:36.804278509Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-7db6d8ff4d-bpqc6,Uid:59ad9644-a5c7-4480-bc20-dbeaa0a967d1,Namespace:kube-system,Attempt:5,} returns sandbox id \"7438675319815f1c2e42849bf7340541e498b102627b5b1325d994d5c287353e\"" Jan 29 11:32:36.805098 kubelet[2697]: E0129 11:32:36.805077 2697 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jan 29 11:32:36.807164 containerd[1495]: time="2025-01-29T11:32:36.807118257Z" level=info msg="CreateContainer within sandbox \"7438675319815f1c2e42849bf7340541e498b102627b5b1325d994d5c287353e\" for container &ContainerMetadata{Name:coredns,Attempt:0,}" Jan 29 11:32:36.836015 kubelet[2697]: E0129 11:32:36.835985 2697 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jan 29 11:32:36.950010 kubelet[2697]: I0129 11:32:36.949855 2697 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="calico-system/calico-node-4v26b" podStartSLOduration=2.897779596 podStartE2EDuration="23.949832139s" podCreationTimestamp="2025-01-29 11:32:13 +0000 UTC" firstStartedPulling="2025-01-29 11:32:14.36584427 +0000 UTC m=+25.185817034" lastFinishedPulling="2025-01-29 11:32:35.417896814 +0000 UTC m=+46.237869577" 
observedRunningTime="2025-01-29 11:32:36.948663206 +0000 UTC m=+47.768635999" watchObservedRunningTime="2025-01-29 11:32:36.949832139 +0000 UTC m=+47.769804902" Jan 29 11:32:37.373107 containerd[1495]: time="2025-01-29T11:32:37.373059895Z" level=info msg="CreateContainer within sandbox \"7438675319815f1c2e42849bf7340541e498b102627b5b1325d994d5c287353e\" for &ContainerMetadata{Name:coredns,Attempt:0,} returns container id \"d283fa89ac183c682dd2dd654b44c0fc66060eeb3b93b32a4519a3a31b5cd567\"" Jan 29 11:32:37.373739 containerd[1495]: time="2025-01-29T11:32:37.373616328Z" level=info msg="StartContainer for \"d283fa89ac183c682dd2dd654b44c0fc66060eeb3b93b32a4519a3a31b5cd567\"" Jan 29 11:32:37.374889 containerd[1495]: time="2025-01-29T11:32:37.374847027Z" level=info msg="CreateContainer within sandbox \"2a63e02c9ebf7baba155265a1242e7a24f39e40692af7a79117462accef2bdb5\" for &ContainerMetadata{Name:coredns,Attempt:0,} returns container id \"08146ecc1f3608a323ca8a9f549e63bc18c53b3d30059a5ebcb28933101ba4f3\"" Jan 29 11:32:37.375340 containerd[1495]: time="2025-01-29T11:32:37.375291201Z" level=info msg="StartContainer for \"08146ecc1f3608a323ca8a9f549e63bc18c53b3d30059a5ebcb28933101ba4f3\"" Jan 29 11:32:37.410203 systemd[1]: Started cri-containerd-d283fa89ac183c682dd2dd654b44c0fc66060eeb3b93b32a4519a3a31b5cd567.scope - libcontainer container d283fa89ac183c682dd2dd654b44c0fc66060eeb3b93b32a4519a3a31b5cd567. Jan 29 11:32:37.427020 systemd[1]: Started cri-containerd-08146ecc1f3608a323ca8a9f549e63bc18c53b3d30059a5ebcb28933101ba4f3.scope - libcontainer container 08146ecc1f3608a323ca8a9f549e63bc18c53b3d30059a5ebcb28933101ba4f3. 
Jan 29 11:32:37.642610 systemd-networkd[1410]: cali4230ab0d14a: Gained IPv6LL Jan 29 11:32:37.685463 kernel: bpftool[5347]: memfd_create() called without MFD_EXEC or MFD_NOEXEC_SEAL set Jan 29 11:32:37.692067 containerd[1495]: time="2025-01-29T11:32:37.691905446Z" level=info msg="StartContainer for \"d283fa89ac183c682dd2dd654b44c0fc66060eeb3b93b32a4519a3a31b5cd567\" returns successfully" Jan 29 11:32:37.693265 containerd[1495]: time="2025-01-29T11:32:37.691978964Z" level=info msg="StartContainer for \"08146ecc1f3608a323ca8a9f549e63bc18c53b3d30059a5ebcb28933101ba4f3\" returns successfully" Jan 29 11:32:37.845280 kubelet[2697]: E0129 11:32:37.845245 2697 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jan 29 11:32:37.850527 kubelet[2697]: E0129 11:32:37.850497 2697 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jan 29 11:32:37.852334 kubelet[2697]: E0129 11:32:37.852318 2697 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jan 29 11:32:37.858820 kubelet[2697]: I0129 11:32:37.858043 2697 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/coredns-7db6d8ff4d-pgcpq" podStartSLOduration=32.858022442 podStartE2EDuration="32.858022442s" podCreationTimestamp="2025-01-29 11:32:05 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-01-29 11:32:37.856827282 +0000 UTC m=+48.676800035" watchObservedRunningTime="2025-01-29 11:32:37.858022442 +0000 UTC m=+48.677995205" Jan 29 11:32:37.875624 kubelet[2697]: I0129 11:32:37.875570 2697 pod_startup_latency_tracker.go:104] "Observed pod startup 
duration" pod="kube-system/coredns-7db6d8ff4d-bpqc6" podStartSLOduration=32.875549198 podStartE2EDuration="32.875549198s" podCreationTimestamp="2025-01-29 11:32:05 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-01-29 11:32:37.874299444 +0000 UTC m=+48.694272207" watchObservedRunningTime="2025-01-29 11:32:37.875549198 +0000 UTC m=+48.695521961" Jan 29 11:32:37.951988 systemd-networkd[1410]: vxlan.calico: Link UP Jan 29 11:32:37.951999 systemd-networkd[1410]: vxlan.calico: Gained carrier Jan 29 11:32:38.089560 systemd-networkd[1410]: cali504630069c8: Gained IPv6LL Jan 29 11:32:38.089870 systemd-networkd[1410]: calidf46db22dc3: Gained IPv6LL Jan 29 11:32:38.090094 systemd-networkd[1410]: cali3688a07fa4d: Gained IPv6LL Jan 29 11:32:38.261121 containerd[1495]: time="2025-01-29T11:32:38.261024283Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/csi:v3.29.1\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jan 29 11:32:38.261809 containerd[1495]: time="2025-01-29T11:32:38.261775128Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/csi:v3.29.1: active requests=0, bytes read=7902632" Jan 29 11:32:38.263088 containerd[1495]: time="2025-01-29T11:32:38.263056490Z" level=info msg="ImageCreate event name:\"sha256:bda8c42e04758c4f061339e213f50ccdc7502c4176fbf631aa12357e62b63540\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jan 29 11:32:38.265793 containerd[1495]: time="2025-01-29T11:32:38.265753365Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/csi@sha256:eaa7e01fb16b603c155a67b81f16992281db7f831684c7b2081d3434587a7ff3\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jan 29 11:32:38.266492 containerd[1495]: time="2025-01-29T11:32:38.266468490Z" level=info msg="Pulled image \"ghcr.io/flatcar/calico/csi:v3.29.1\" with image id \"sha256:bda8c42e04758c4f061339e213f50ccdc7502c4176fbf631aa12357e62b63540\", 
repo tag \"ghcr.io/flatcar/calico/csi:v3.29.1\", repo digest \"ghcr.io/flatcar/calico/csi@sha256:eaa7e01fb16b603c155a67b81f16992281db7f831684c7b2081d3434587a7ff3\", size \"9395716\" in 1.617473727s" Jan 29 11:32:38.266492 containerd[1495]: time="2025-01-29T11:32:38.266495934Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/csi:v3.29.1\" returns image reference \"sha256:bda8c42e04758c4f061339e213f50ccdc7502c4176fbf631aa12357e62b63540\"" Jan 29 11:32:38.267872 containerd[1495]: time="2025-01-29T11:32:38.267721007Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/kube-controllers:v3.29.1\"" Jan 29 11:32:38.268730 containerd[1495]: time="2025-01-29T11:32:38.268680064Z" level=info msg="CreateContainer within sandbox \"c7031c2bd20faef7d4c5a33a0582c94920a65f9bd7f61b97d96440b21f035cbd\" for container &ContainerMetadata{Name:calico-csi,Attempt:0,}" Jan 29 11:32:38.292365 containerd[1495]: time="2025-01-29T11:32:38.292325103Z" level=info msg="CreateContainer within sandbox \"c7031c2bd20faef7d4c5a33a0582c94920a65f9bd7f61b97d96440b21f035cbd\" for &ContainerMetadata{Name:calico-csi,Attempt:0,} returns container id \"fd2be201368256ed30dcdfd844bd5508c1d6c5972a1d0f9d36dc40045c287307\"" Jan 29 11:32:38.292831 containerd[1495]: time="2025-01-29T11:32:38.292792849Z" level=info msg="StartContainer for \"fd2be201368256ed30dcdfd844bd5508c1d6c5972a1d0f9d36dc40045c287307\"" Jan 29 11:32:38.322553 systemd[1]: Started cri-containerd-fd2be201368256ed30dcdfd844bd5508c1d6c5972a1d0f9d36dc40045c287307.scope - libcontainer container fd2be201368256ed30dcdfd844bd5508c1d6c5972a1d0f9d36dc40045c287307. 
Jan 29 11:32:38.355319 containerd[1495]: time="2025-01-29T11:32:38.355112030Z" level=info msg="StartContainer for \"fd2be201368256ed30dcdfd844bd5508c1d6c5972a1d0f9d36dc40045c287307\" returns successfully" Jan 29 11:32:38.409554 systemd-networkd[1410]: calif35a999af1a: Gained IPv6LL Jan 29 11:32:38.473595 systemd-networkd[1410]: califb45b084c0f: Gained IPv6LL Jan 29 11:32:38.854854 kubelet[2697]: E0129 11:32:38.854005 2697 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jan 29 11:32:38.855606 kubelet[2697]: E0129 11:32:38.855281 2697 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jan 29 11:32:39.113578 systemd-networkd[1410]: vxlan.calico: Gained IPv6LL Jan 29 11:32:39.856773 kubelet[2697]: E0129 11:32:39.856730 2697 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jan 29 11:32:39.857260 kubelet[2697]: E0129 11:32:39.856747 2697 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jan 29 11:32:40.602720 systemd[1]: Started sshd@13-10.0.0.69:22-10.0.0.1:47020.service - OpenSSH per-connection server daemon (10.0.0.1:47020). Jan 29 11:32:40.658764 sshd[5519]: Accepted publickey for core from 10.0.0.1 port 47020 ssh2: RSA SHA256:vglZfOE0APgUpJbg1gfFAEfTfpzlM1a6LSSiwuQWd4A Jan 29 11:32:40.660657 sshd-session[5519]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Jan 29 11:32:40.666138 systemd-logind[1471]: New session 14 of user core. Jan 29 11:32:40.678311 systemd[1]: Started session-14.scope - Session 14 of User core. 
Jan 29 11:32:40.821061 sshd[5521]: Connection closed by 10.0.0.1 port 47020
Jan 29 11:32:40.822527 sshd-session[5519]: pam_unix(sshd:session): session closed for user core
Jan 29 11:32:40.827672 systemd[1]: sshd@13-10.0.0.69:22-10.0.0.1:47020.service: Deactivated successfully.
Jan 29 11:32:40.829932 systemd[1]: session-14.scope: Deactivated successfully.
Jan 29 11:32:40.832047 systemd-logind[1471]: Session 14 logged out. Waiting for processes to exit.
Jan 29 11:32:40.833275 systemd-logind[1471]: Removed session 14.
Jan 29 11:32:40.858748 kubelet[2697]: E0129 11:32:40.858638 2697 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Jan 29 11:32:40.927896 containerd[1495]: time="2025-01-29T11:32:40.927839516Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/kube-controllers:v3.29.1\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Jan 29 11:32:40.928648 containerd[1495]: time="2025-01-29T11:32:40.928589737Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/kube-controllers:v3.29.1: active requests=0, bytes read=34141192"
Jan 29 11:32:40.929649 containerd[1495]: time="2025-01-29T11:32:40.929612986Z" level=info msg="ImageCreate event name:\"sha256:6331715a2ae96b18a770a395cac108321d108e445e08b616e5bc9fbd1f9c21da\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Jan 29 11:32:40.931482 containerd[1495]: time="2025-01-29T11:32:40.931450159Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/kube-controllers@sha256:1072d6a98167a14ca361e9ce757733f9bae36d1f1c6a9621ea10934b6b1e10d9\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Jan 29 11:32:40.932044 containerd[1495]: time="2025-01-29T11:32:40.932014601Z" level=info msg="Pulled image \"ghcr.io/flatcar/calico/kube-controllers:v3.29.1\" with image id \"sha256:6331715a2ae96b18a770a395cac108321d108e445e08b616e5bc9fbd1f9c21da\", repo tag \"ghcr.io/flatcar/calico/kube-controllers:v3.29.1\", repo digest \"ghcr.io/flatcar/calico/kube-controllers@sha256:1072d6a98167a14ca361e9ce757733f9bae36d1f1c6a9621ea10934b6b1e10d9\", size \"35634244\" in 2.664269707s"
Jan 29 11:32:40.932091 containerd[1495]: time="2025-01-29T11:32:40.932043546Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/kube-controllers:v3.29.1\" returns image reference \"sha256:6331715a2ae96b18a770a395cac108321d108e445e08b616e5bc9fbd1f9c21da\""
Jan 29 11:32:40.932985 containerd[1495]: time="2025-01-29T11:32:40.932962994Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/apiserver:v3.29.1\""
Jan 29 11:32:40.942926 containerd[1495]: time="2025-01-29T11:32:40.942756144Z" level=info msg="CreateContainer within sandbox \"4ad33d5f904b2102a448346a5144c9534ec0dffc40559e3997e0c372a26eef72\" for container &ContainerMetadata{Name:calico-kube-controllers,Attempt:0,}"
Jan 29 11:32:40.957864 containerd[1495]: time="2025-01-29T11:32:40.957823032Z" level=info msg="CreateContainer within sandbox \"4ad33d5f904b2102a448346a5144c9534ec0dffc40559e3997e0c372a26eef72\" for &ContainerMetadata{Name:calico-kube-controllers,Attempt:0,} returns container id \"bdce1cd50366bf599dffc3b0c480ee94e0f1ebe5778773751cc92f1e519e069e\""
Jan 29 11:32:40.958337 containerd[1495]: time="2025-01-29T11:32:40.958301898Z" level=info msg="StartContainer for \"bdce1cd50366bf599dffc3b0c480ee94e0f1ebe5778773751cc92f1e519e069e\""
Jan 29 11:32:40.987574 systemd[1]: Started cri-containerd-bdce1cd50366bf599dffc3b0c480ee94e0f1ebe5778773751cc92f1e519e069e.scope - libcontainer container bdce1cd50366bf599dffc3b0c480ee94e0f1ebe5778773751cc92f1e519e069e.
Jan 29 11:32:41.031714 containerd[1495]: time="2025-01-29T11:32:41.031588780Z" level=info msg="StartContainer for \"bdce1cd50366bf599dffc3b0c480ee94e0f1ebe5778773751cc92f1e519e069e\" returns successfully"
Jan 29 11:32:41.941933 kubelet[2697]: I0129 11:32:41.941651 2697 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="calico-system/calico-kube-controllers-748549c4c9-7d2cf" podStartSLOduration=23.724146381 podStartE2EDuration="27.941634913s" podCreationTimestamp="2025-01-29 11:32:14 +0000 UTC" firstStartedPulling="2025-01-29 11:32:36.715361374 +0000 UTC m=+47.535334137" lastFinishedPulling="2025-01-29 11:32:40.932849906 +0000 UTC m=+51.752822669" observedRunningTime="2025-01-29 11:32:41.941270248 +0000 UTC m=+52.761243011" watchObservedRunningTime="2025-01-29 11:32:41.941634913 +0000 UTC m=+52.761607676"
Jan 29 11:32:43.962443 containerd[1495]: time="2025-01-29T11:32:43.962376662Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/apiserver:v3.29.1\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Jan 29 11:32:43.963258 containerd[1495]: time="2025-01-29T11:32:43.963205251Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/apiserver:v3.29.1: active requests=0, bytes read=42001404"
Jan 29 11:32:43.964592 containerd[1495]: time="2025-01-29T11:32:43.964567328Z" level=info msg="ImageCreate event name:\"sha256:421726ace5ed13894f7edf594dd3a462947aedc13d0f69d08525d7369477fb70\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Jan 29 11:32:43.967117 containerd[1495]: time="2025-01-29T11:32:43.967090296Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/apiserver@sha256:b8c43e264fe52e0c327b0bf3ac882a0224b33bdd7f4ff58a74242da7d9b00486\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Jan 29 11:32:43.967955 containerd[1495]: time="2025-01-29T11:32:43.967907994Z" level=info msg="Pulled image \"ghcr.io/flatcar/calico/apiserver:v3.29.1\" with image id \"sha256:421726ace5ed13894f7edf594dd3a462947aedc13d0f69d08525d7369477fb70\", repo tag \"ghcr.io/flatcar/calico/apiserver:v3.29.1\", repo digest \"ghcr.io/flatcar/calico/apiserver@sha256:b8c43e264fe52e0c327b0bf3ac882a0224b33bdd7f4ff58a74242da7d9b00486\", size \"43494504\" in 3.034912786s"
Jan 29 11:32:43.967955 containerd[1495]: time="2025-01-29T11:32:43.967951357Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/apiserver:v3.29.1\" returns image reference \"sha256:421726ace5ed13894f7edf594dd3a462947aedc13d0f69d08525d7369477fb70\""
Jan 29 11:32:43.969409 containerd[1495]: time="2025-01-29T11:32:43.969289989Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/apiserver:v3.29.1\""
Jan 29 11:32:43.970928 containerd[1495]: time="2025-01-29T11:32:43.970866361Z" level=info msg="CreateContainer within sandbox \"638e11ec792144c0dec74ece1f329597fa473f09e56ff22d402a550f748799fc\" for container &ContainerMetadata{Name:calico-apiserver,Attempt:0,}"
Jan 29 11:32:43.987958 containerd[1495]: time="2025-01-29T11:32:43.987882038Z" level=info msg="CreateContainer within sandbox \"638e11ec792144c0dec74ece1f329597fa473f09e56ff22d402a550f748799fc\" for &ContainerMetadata{Name:calico-apiserver,Attempt:0,} returns container id \"eb5925d65bd6a411d752bdd17492a008826c0c98b3a58745173e10c77b15d931\""
Jan 29 11:32:43.989142 containerd[1495]: time="2025-01-29T11:32:43.989071302Z" level=info msg="StartContainer for \"eb5925d65bd6a411d752bdd17492a008826c0c98b3a58745173e10c77b15d931\""
Jan 29 11:32:44.026640 systemd[1]: Started cri-containerd-eb5925d65bd6a411d752bdd17492a008826c0c98b3a58745173e10c77b15d931.scope - libcontainer container eb5925d65bd6a411d752bdd17492a008826c0c98b3a58745173e10c77b15d931.
Jan 29 11:32:44.087888 containerd[1495]: time="2025-01-29T11:32:44.087276325Z" level=info msg="StartContainer for \"eb5925d65bd6a411d752bdd17492a008826c0c98b3a58745173e10c77b15d931\" returns successfully"
Jan 29 11:32:44.355241 containerd[1495]: time="2025-01-29T11:32:44.355113520Z" level=info msg="ImageUpdate event name:\"ghcr.io/flatcar/calico/apiserver:v3.29.1\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Jan 29 11:32:44.356252 containerd[1495]: time="2025-01-29T11:32:44.356185587Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/apiserver:v3.29.1: active requests=0, bytes read=77"
Jan 29 11:32:44.358355 containerd[1495]: time="2025-01-29T11:32:44.358317126Z" level=info msg="Pulled image \"ghcr.io/flatcar/calico/apiserver:v3.29.1\" with image id \"sha256:421726ace5ed13894f7edf594dd3a462947aedc13d0f69d08525d7369477fb70\", repo tag \"ghcr.io/flatcar/calico/apiserver:v3.29.1\", repo digest \"ghcr.io/flatcar/calico/apiserver@sha256:b8c43e264fe52e0c327b0bf3ac882a0224b33bdd7f4ff58a74242da7d9b00486\", size \"43494504\" in 388.996057ms"
Jan 29 11:32:44.358355 containerd[1495]: time="2025-01-29T11:32:44.358347294Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/apiserver:v3.29.1\" returns image reference \"sha256:421726ace5ed13894f7edf594dd3a462947aedc13d0f69d08525d7369477fb70\""
Jan 29 11:32:44.360801 containerd[1495]: time="2025-01-29T11:32:44.360095904Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/node-driver-registrar:v3.29.1\""
Jan 29 11:32:44.361173 containerd[1495]: time="2025-01-29T11:32:44.361105240Z" level=info msg="CreateContainer within sandbox \"38aa2d247dcbac1e27ce33495b22ca6fbac8a4c4bf7050ff12511d62b16b9eb9\" for container &ContainerMetadata{Name:calico-apiserver,Attempt:0,}"
Jan 29 11:32:44.392285 containerd[1495]: time="2025-01-29T11:32:44.392227857Z" level=info msg="CreateContainer within sandbox \"38aa2d247dcbac1e27ce33495b22ca6fbac8a4c4bf7050ff12511d62b16b9eb9\" for &ContainerMetadata{Name:calico-apiserver,Attempt:0,} returns container id \"1b96f7e5a2b40cfe634e6ed38eb3c66d4394fa4a94a8534e1d47778c78313814\""
Jan 29 11:32:44.393190 containerd[1495]: time="2025-01-29T11:32:44.393153011Z" level=info msg="StartContainer for \"1b96f7e5a2b40cfe634e6ed38eb3c66d4394fa4a94a8534e1d47778c78313814\""
Jan 29 11:32:44.422633 systemd[1]: Started cri-containerd-1b96f7e5a2b40cfe634e6ed38eb3c66d4394fa4a94a8534e1d47778c78313814.scope - libcontainer container 1b96f7e5a2b40cfe634e6ed38eb3c66d4394fa4a94a8534e1d47778c78313814.
Jan 29 11:32:44.467487 containerd[1495]: time="2025-01-29T11:32:44.467434156Z" level=info msg="StartContainer for \"1b96f7e5a2b40cfe634e6ed38eb3c66d4394fa4a94a8534e1d47778c78313814\" returns successfully"
Jan 29 11:32:44.881346 kubelet[2697]: I0129 11:32:44.881286 2697 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="calico-apiserver/calico-apiserver-7f846fb45c-49zts" podStartSLOduration=24.29687327 podStartE2EDuration="31.881266073s" podCreationTimestamp="2025-01-29 11:32:13 +0000 UTC" firstStartedPulling="2025-01-29 11:32:36.775328159 +0000 UTC m=+47.595300922" lastFinishedPulling="2025-01-29 11:32:44.359720962 +0000 UTC m=+55.179693725" observedRunningTime="2025-01-29 11:32:44.880526527 +0000 UTC m=+55.700499280" watchObservedRunningTime="2025-01-29 11:32:44.881266073 +0000 UTC m=+55.701238836"
Jan 29 11:32:45.836330 systemd[1]: Started sshd@14-10.0.0.69:22-10.0.0.1:47032.service - OpenSSH per-connection server daemon (10.0.0.1:47032).
Jan 29 11:32:45.877608 kubelet[2697]: I0129 11:32:45.877562 2697 prober_manager.go:312] "Failed to trigger a manual run" probe="Readiness"
Jan 29 11:32:45.877608 kubelet[2697]: I0129 11:32:45.877581 2697 prober_manager.go:312] "Failed to trigger a manual run" probe="Readiness"
Jan 29 11:32:45.888701 sshd[5699]: Accepted publickey for core from 10.0.0.1 port 47032 ssh2: RSA SHA256:vglZfOE0APgUpJbg1gfFAEfTfpzlM1a6LSSiwuQWd4A
Jan 29 11:32:45.890267 sshd-session[5699]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Jan 29 11:32:45.894503 systemd-logind[1471]: New session 15 of user core.
Jan 29 11:32:45.900556 systemd[1]: Started session-15.scope - Session 15 of User core.
Jan 29 11:32:46.097226 sshd[5701]: Connection closed by 10.0.0.1 port 47032
Jan 29 11:32:46.097514 sshd-session[5699]: pam_unix(sshd:session): session closed for user core
Jan 29 11:32:46.101151 systemd[1]: sshd@14-10.0.0.69:22-10.0.0.1:47032.service: Deactivated successfully.
Jan 29 11:32:46.103241 systemd[1]: session-15.scope: Deactivated successfully.
Jan 29 11:32:46.103890 systemd-logind[1471]: Session 15 logged out. Waiting for processes to exit.
Jan 29 11:32:46.104702 systemd-logind[1471]: Removed session 15.
Jan 29 11:32:46.597160 containerd[1495]: time="2025-01-29T11:32:46.597103357Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/node-driver-registrar:v3.29.1\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Jan 29 11:32:46.598558 containerd[1495]: time="2025-01-29T11:32:46.598522820Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/node-driver-registrar:v3.29.1: active requests=0, bytes read=10501081"
Jan 29 11:32:46.600271 containerd[1495]: time="2025-01-29T11:32:46.600241589Z" level=info msg="ImageCreate event name:\"sha256:8b7d18f262d5cf6a6343578ad0db68a140c4c9989d9e02c58c27cb5d2c70320f\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Jan 29 11:32:46.610961 containerd[1495]: time="2025-01-29T11:32:46.610887521Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/node-driver-registrar@sha256:a338da9488cbaa83c78457c3d7354d84149969c0480e88dd768e036632ff5b76\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Jan 29 11:32:46.611436 containerd[1495]: time="2025-01-29T11:32:46.611379107Z" level=info msg="Pulled image \"ghcr.io/flatcar/calico/node-driver-registrar:v3.29.1\" with image id \"sha256:8b7d18f262d5cf6a6343578ad0db68a140c4c9989d9e02c58c27cb5d2c70320f\", repo tag \"ghcr.io/flatcar/calico/node-driver-registrar:v3.29.1\", repo digest \"ghcr.io/flatcar/calico/node-driver-registrar@sha256:a338da9488cbaa83c78457c3d7354d84149969c0480e88dd768e036632ff5b76\", size \"11994117\" in 2.251249527s"
Jan 29 11:32:46.611436 containerd[1495]: time="2025-01-29T11:32:46.611426579Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/node-driver-registrar:v3.29.1\" returns image reference \"sha256:8b7d18f262d5cf6a6343578ad0db68a140c4c9989d9e02c58c27cb5d2c70320f\""
Jan 29 11:32:46.613453 containerd[1495]: time="2025-01-29T11:32:46.613431889Z" level=info msg="CreateContainer within sandbox \"c7031c2bd20faef7d4c5a33a0582c94920a65f9bd7f61b97d96440b21f035cbd\" for container &ContainerMetadata{Name:csi-node-driver-registrar,Attempt:0,}"
Jan 29 11:32:46.629437 containerd[1495]: time="2025-01-29T11:32:46.629385589Z" level=info msg="CreateContainer within sandbox \"c7031c2bd20faef7d4c5a33a0582c94920a65f9bd7f61b97d96440b21f035cbd\" for &ContainerMetadata{Name:csi-node-driver-registrar,Attempt:0,} returns container id \"568acb177a83b6857314cf4576011295b8ebe35d34c5934d117e6cc994efdc6c\""
Jan 29 11:32:46.629825 containerd[1495]: time="2025-01-29T11:32:46.629788184Z" level=info msg="StartContainer for \"568acb177a83b6857314cf4576011295b8ebe35d34c5934d117e6cc994efdc6c\""
Jan 29 11:32:46.665553 systemd[1]: Started cri-containerd-568acb177a83b6857314cf4576011295b8ebe35d34c5934d117e6cc994efdc6c.scope - libcontainer container 568acb177a83b6857314cf4576011295b8ebe35d34c5934d117e6cc994efdc6c.
Jan 29 11:32:46.698178 containerd[1495]: time="2025-01-29T11:32:46.698137875Z" level=info msg="StartContainer for \"568acb177a83b6857314cf4576011295b8ebe35d34c5934d117e6cc994efdc6c\" returns successfully"
Jan 29 11:32:46.984427 kubelet[2697]: I0129 11:32:46.984061 2697 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="calico-apiserver/calico-apiserver-7f846fb45c-qnzlv" podStartSLOduration=26.76341235 podStartE2EDuration="33.98403854s" podCreationTimestamp="2025-01-29 11:32:13 +0000 UTC" firstStartedPulling="2025-01-29 11:32:36.748467561 +0000 UTC m=+47.568440324" lastFinishedPulling="2025-01-29 11:32:43.969093761 +0000 UTC m=+54.789066514" observedRunningTime="2025-01-29 11:32:44.89474155 +0000 UTC m=+55.714714313" watchObservedRunningTime="2025-01-29 11:32:46.98403854 +0000 UTC m=+57.804011293"
Jan 29 11:32:46.985728 kubelet[2697]: I0129 11:32:46.985567 2697 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="calico-system/csi-node-driver-qtzv2" podStartSLOduration=23.021934084 podStartE2EDuration="32.98555586s" podCreationTimestamp="2025-01-29 11:32:14 +0000 UTC" firstStartedPulling="2025-01-29 11:32:36.648543004 +0000 UTC m=+47.468515767" lastFinishedPulling="2025-01-29 11:32:46.61216479 +0000 UTC m=+57.432137543" observedRunningTime="2025-01-29 11:32:46.984451034 +0000 UTC m=+57.804423797" watchObservedRunningTime="2025-01-29 11:32:46.98555586 +0000 UTC m=+57.805528643"
Jan 29 11:32:47.343023 kubelet[2697]: I0129 11:32:47.342899 2697 csi_plugin.go:100] kubernetes.io/csi: Trying to validate a new CSI Driver with name: csi.tigera.io endpoint: /var/lib/kubelet/plugins/csi.tigera.io/csi.sock versions: 1.0.0
Jan 29 11:32:47.343023 kubelet[2697]: I0129 11:32:47.342937 2697 csi_plugin.go:113] kubernetes.io/csi: Register new plugin with name: csi.tigera.io at endpoint: /var/lib/kubelet/plugins/csi.tigera.io/csi.sock
Jan 29 11:32:49.261806 containerd[1495]: time="2025-01-29T11:32:49.261759324Z" level=info msg="StopPodSandbox for \"fe6ac203e82a4009391dc90c7b6ce83180175a75113cfc6c681bac96b7142886\""
Jan 29 11:32:49.262269 containerd[1495]: time="2025-01-29T11:32:49.261889113Z" level=info msg="TearDown network for sandbox \"fe6ac203e82a4009391dc90c7b6ce83180175a75113cfc6c681bac96b7142886\" successfully"
Jan 29 11:32:49.262269 containerd[1495]: time="2025-01-29T11:32:49.261899733Z" level=info msg="StopPodSandbox for \"fe6ac203e82a4009391dc90c7b6ce83180175a75113cfc6c681bac96b7142886\" returns successfully"
Jan 29 11:32:49.262348 containerd[1495]: time="2025-01-29T11:32:49.262277299Z" level=info msg="RemovePodSandbox for \"fe6ac203e82a4009391dc90c7b6ce83180175a75113cfc6c681bac96b7142886\""
Jan 29 11:32:49.272827 containerd[1495]: time="2025-01-29T11:32:49.272799273Z" level=info msg="Forcibly stopping sandbox \"fe6ac203e82a4009391dc90c7b6ce83180175a75113cfc6c681bac96b7142886\""
Jan 29 11:32:49.272965 containerd[1495]: time="2025-01-29T11:32:49.272918311Z" level=info msg="TearDown network for sandbox \"fe6ac203e82a4009391dc90c7b6ce83180175a75113cfc6c681bac96b7142886\" successfully"
Jan 29 11:32:49.282542 containerd[1495]: time="2025-01-29T11:32:49.282488567Z" level=warning msg="Failed to get podSandbox status for container event for sandboxID \"fe6ac203e82a4009391dc90c7b6ce83180175a75113cfc6c681bac96b7142886\": an error occurred when try to find sandbox: not found. Sending the event with nil podSandboxStatus."
Jan 29 11:32:49.282702 containerd[1495]: time="2025-01-29T11:32:49.282573980Z" level=info msg="RemovePodSandbox \"fe6ac203e82a4009391dc90c7b6ce83180175a75113cfc6c681bac96b7142886\" returns successfully"
Jan 29 11:32:49.283081 containerd[1495]: time="2025-01-29T11:32:49.283047299Z" level=info msg="StopPodSandbox for \"72d092d80dc2ce8362dcfe998cac484b496878a763b26fd59103bd9f00b2119c\""
Jan 29 11:32:49.283208 containerd[1495]: time="2025-01-29T11:32:49.283148945Z" level=info msg="TearDown network for sandbox \"72d092d80dc2ce8362dcfe998cac484b496878a763b26fd59103bd9f00b2119c\" successfully"
Jan 29 11:32:49.283208 containerd[1495]: time="2025-01-29T11:32:49.283192559Z" level=info msg="StopPodSandbox for \"72d092d80dc2ce8362dcfe998cac484b496878a763b26fd59103bd9f00b2119c\" returns successfully"
Jan 29 11:32:49.283492 containerd[1495]: time="2025-01-29T11:32:49.283464110Z" level=info msg="RemovePodSandbox for \"72d092d80dc2ce8362dcfe998cac484b496878a763b26fd59103bd9f00b2119c\""
Jan 29 11:32:49.283561 containerd[1495]: time="2025-01-29T11:32:49.283495611Z" level=info msg="Forcibly stopping sandbox \"72d092d80dc2ce8362dcfe998cac484b496878a763b26fd59103bd9f00b2119c\""
Jan 29 11:32:49.283628 containerd[1495]: time="2025-01-29T11:32:49.283584752Z" level=info msg="TearDown network for sandbox \"72d092d80dc2ce8362dcfe998cac484b496878a763b26fd59103bd9f00b2119c\" successfully"
Jan 29 11:32:49.287321 containerd[1495]: time="2025-01-29T11:32:49.287287655Z" level=warning msg="Failed to get podSandbox status for container event for sandboxID \"72d092d80dc2ce8362dcfe998cac484b496878a763b26fd59103bd9f00b2119c\": an error occurred when try to find sandbox: not found. Sending the event with nil podSandboxStatus."
Jan 29 11:32:49.287383 containerd[1495]: time="2025-01-29T11:32:49.287334826Z" level=info msg="RemovePodSandbox \"72d092d80dc2ce8362dcfe998cac484b496878a763b26fd59103bd9f00b2119c\" returns successfully"
Jan 29 11:32:49.287705 containerd[1495]: time="2025-01-29T11:32:49.287681211Z" level=info msg="StopPodSandbox for \"89c5015df07cde96721832440960b9c6a01da3d920c81398d189bdc7a03b198c\""
Jan 29 11:32:49.287797 containerd[1495]: time="2025-01-29T11:32:49.287774601Z" level=info msg="TearDown network for sandbox \"89c5015df07cde96721832440960b9c6a01da3d920c81398d189bdc7a03b198c\" successfully"
Jan 29 11:32:49.287797 containerd[1495]: time="2025-01-29T11:32:49.287790241Z" level=info msg="StopPodSandbox for \"89c5015df07cde96721832440960b9c6a01da3d920c81398d189bdc7a03b198c\" returns successfully"
Jan 29 11:32:49.288044 containerd[1495]: time="2025-01-29T11:32:49.288020473Z" level=info msg="RemovePodSandbox for \"89c5015df07cde96721832440960b9c6a01da3d920c81398d189bdc7a03b198c\""
Jan 29 11:32:49.288080 containerd[1495]: time="2025-01-29T11:32:49.288047635Z" level=info msg="Forcibly stopping sandbox \"89c5015df07cde96721832440960b9c6a01da3d920c81398d189bdc7a03b198c\""
Jan 29 11:32:49.288172 containerd[1495]: time="2025-01-29T11:32:49.288142678Z" level=info msg="TearDown network for sandbox \"89c5015df07cde96721832440960b9c6a01da3d920c81398d189bdc7a03b198c\" successfully"
Jan 29 11:32:49.292227 containerd[1495]: time="2025-01-29T11:32:49.292198690Z" level=warning msg="Failed to get podSandbox status for container event for sandboxID \"89c5015df07cde96721832440960b9c6a01da3d920c81398d189bdc7a03b198c\": an error occurred when try to find sandbox: not found. Sending the event with nil podSandboxStatus."
Jan 29 11:32:49.292280 containerd[1495]: time="2025-01-29T11:32:49.292236062Z" level=info msg="RemovePodSandbox \"89c5015df07cde96721832440960b9c6a01da3d920c81398d189bdc7a03b198c\" returns successfully"
Jan 29 11:32:49.292495 containerd[1495]: time="2025-01-29T11:32:49.292463248Z" level=info msg="StopPodSandbox for \"959ee7c5fd0e3b26b779fc1b7c9c34bb6768bd7de43d74c8453896c45bbc7058\""
Jan 29 11:32:49.292572 containerd[1495]: time="2025-01-29T11:32:49.292552349Z" level=info msg="TearDown network for sandbox \"959ee7c5fd0e3b26b779fc1b7c9c34bb6768bd7de43d74c8453896c45bbc7058\" successfully"
Jan 29 11:32:49.292600 containerd[1495]: time="2025-01-29T11:32:49.292573279Z" level=info msg="StopPodSandbox for \"959ee7c5fd0e3b26b779fc1b7c9c34bb6768bd7de43d74c8453896c45bbc7058\" returns successfully"
Jan 29 11:32:49.292790 containerd[1495]: time="2025-01-29T11:32:49.292768304Z" level=info msg="RemovePodSandbox for \"959ee7c5fd0e3b26b779fc1b7c9c34bb6768bd7de43d74c8453896c45bbc7058\""
Jan 29 11:32:49.292831 containerd[1495]: time="2025-01-29T11:32:49.292794213Z" level=info msg="Forcibly stopping sandbox \"959ee7c5fd0e3b26b779fc1b7c9c34bb6768bd7de43d74c8453896c45bbc7058\""
Jan 29 11:32:49.292905 containerd[1495]: time="2025-01-29T11:32:49.292868086Z" level=info msg="TearDown network for sandbox \"959ee7c5fd0e3b26b779fc1b7c9c34bb6768bd7de43d74c8453896c45bbc7058\" successfully"
Jan 29 11:32:49.320870 containerd[1495]: time="2025-01-29T11:32:49.320832859Z" level=warning msg="Failed to get podSandbox status for container event for sandboxID \"959ee7c5fd0e3b26b779fc1b7c9c34bb6768bd7de43d74c8453896c45bbc7058\": an error occurred when try to find sandbox: not found. Sending the event with nil podSandboxStatus."
Jan 29 11:32:49.320870 containerd[1495]: time="2025-01-29T11:32:49.320876342Z" level=info msg="RemovePodSandbox \"959ee7c5fd0e3b26b779fc1b7c9c34bb6768bd7de43d74c8453896c45bbc7058\" returns successfully"
Jan 29 11:32:49.321192 containerd[1495]: time="2025-01-29T11:32:49.321151811Z" level=info msg="StopPodSandbox for \"f4159f78bb2d48a1117a75aeb4e25573e0a11837294e7901b6da3c3896dae9e0\""
Jan 29 11:32:49.321279 containerd[1495]: time="2025-01-29T11:32:49.321251483Z" level=info msg="TearDown network for sandbox \"f4159f78bb2d48a1117a75aeb4e25573e0a11837294e7901b6da3c3896dae9e0\" successfully"
Jan 29 11:32:49.321279 containerd[1495]: time="2025-01-29T11:32:49.321261682Z" level=info msg="StopPodSandbox for \"f4159f78bb2d48a1117a75aeb4e25573e0a11837294e7901b6da3c3896dae9e0\" returns successfully"
Jan 29 11:32:49.321529 containerd[1495]: time="2025-01-29T11:32:49.321487145Z" level=info msg="RemovePodSandbox for \"f4159f78bb2d48a1117a75aeb4e25573e0a11837294e7901b6da3c3896dae9e0\""
Jan 29 11:32:49.321529 containerd[1495]: time="2025-01-29T11:32:49.321510230Z" level=info msg="Forcibly stopping sandbox \"f4159f78bb2d48a1117a75aeb4e25573e0a11837294e7901b6da3c3896dae9e0\""
Jan 29 11:32:49.321628 containerd[1495]: time="2025-01-29T11:32:49.321583991Z" level=info msg="TearDown network for sandbox \"f4159f78bb2d48a1117a75aeb4e25573e0a11837294e7901b6da3c3896dae9e0\" successfully"
Jan 29 11:32:49.409704 containerd[1495]: time="2025-01-29T11:32:49.409645133Z" level=warning msg="Failed to get podSandbox status for container event for sandboxID \"f4159f78bb2d48a1117a75aeb4e25573e0a11837294e7901b6da3c3896dae9e0\": an error occurred when try to find sandbox: not found. Sending the event with nil podSandboxStatus."
Jan 29 11:32:49.409785 containerd[1495]: time="2025-01-29T11:32:49.409721520Z" level=info msg="RemovePodSandbox \"f4159f78bb2d48a1117a75aeb4e25573e0a11837294e7901b6da3c3896dae9e0\" returns successfully"
Jan 29 11:32:49.410154 containerd[1495]: time="2025-01-29T11:32:49.410126097Z" level=info msg="StopPodSandbox for \"bd1c4fbf33953e5862c77e9bcf30d3094180127b1d08f643c0857a95af354d3a\""
Jan 29 11:32:49.410299 containerd[1495]: time="2025-01-29T11:32:49.410228403Z" level=info msg="TearDown network for sandbox \"bd1c4fbf33953e5862c77e9bcf30d3094180127b1d08f643c0857a95af354d3a\" successfully"
Jan 29 11:32:49.410299 containerd[1495]: time="2025-01-29T11:32:49.410238654Z" level=info msg="StopPodSandbox for \"bd1c4fbf33953e5862c77e9bcf30d3094180127b1d08f643c0857a95af354d3a\" returns successfully"
Jan 29 11:32:49.411436 containerd[1495]: time="2025-01-29T11:32:49.410580710Z" level=info msg="RemovePodSandbox for \"bd1c4fbf33953e5862c77e9bcf30d3094180127b1d08f643c0857a95af354d3a\""
Jan 29 11:32:49.411436 containerd[1495]: time="2025-01-29T11:32:49.410602602Z" level=info msg="Forcibly stopping sandbox \"bd1c4fbf33953e5862c77e9bcf30d3094180127b1d08f643c0857a95af354d3a\""
Jan 29 11:32:49.411436 containerd[1495]: time="2025-01-29T11:32:49.410671514Z" level=info msg="TearDown network for sandbox \"bd1c4fbf33953e5862c77e9bcf30d3094180127b1d08f643c0857a95af354d3a\" successfully"
Jan 29 11:32:49.420997 containerd[1495]: time="2025-01-29T11:32:49.420965220Z" level=warning msg="Failed to get podSandbox status for container event for sandboxID \"bd1c4fbf33953e5862c77e9bcf30d3094180127b1d08f643c0857a95af354d3a\": an error occurred when try to find sandbox: not found. Sending the event with nil podSandboxStatus."
Jan 29 11:32:49.421053 containerd[1495]: time="2025-01-29T11:32:49.421013673Z" level=info msg="RemovePodSandbox \"bd1c4fbf33953e5862c77e9bcf30d3094180127b1d08f643c0857a95af354d3a\" returns successfully"
Jan 29 11:32:49.421442 containerd[1495]: time="2025-01-29T11:32:49.421275776Z" level=info msg="StopPodSandbox for \"5811a36840dd1d42d19b253cd448b0069b1615ccdc5a9f870a7e114387802527\""
Jan 29 11:32:49.421442 containerd[1495]: time="2025-01-29T11:32:49.421366811Z" level=info msg="TearDown network for sandbox \"5811a36840dd1d42d19b253cd448b0069b1615ccdc5a9f870a7e114387802527\" successfully"
Jan 29 11:32:49.421442 containerd[1495]: time="2025-01-29T11:32:49.421376139Z" level=info msg="StopPodSandbox for \"5811a36840dd1d42d19b253cd448b0069b1615ccdc5a9f870a7e114387802527\" returns successfully"
Jan 29 11:32:49.421676 containerd[1495]: time="2025-01-29T11:32:49.421648753Z" level=info msg="RemovePodSandbox for \"5811a36840dd1d42d19b253cd448b0069b1615ccdc5a9f870a7e114387802527\""
Jan 29 11:32:49.421676 containerd[1495]: time="2025-01-29T11:32:49.421670124Z" level=info msg="Forcibly stopping sandbox \"5811a36840dd1d42d19b253cd448b0069b1615ccdc5a9f870a7e114387802527\""
Jan 29 11:32:49.421775 containerd[1495]: time="2025-01-29T11:32:49.421737834Z" level=info msg="TearDown network for sandbox \"5811a36840dd1d42d19b253cd448b0069b1615ccdc5a9f870a7e114387802527\" successfully"
Jan 29 11:32:49.425387 containerd[1495]: time="2025-01-29T11:32:49.425356375Z" level=warning msg="Failed to get podSandbox status for container event for sandboxID \"5811a36840dd1d42d19b253cd448b0069b1615ccdc5a9f870a7e114387802527\": an error occurred when try to find sandbox: not found. Sending the event with nil podSandboxStatus."
Jan 29 11:32:49.425449 containerd[1495]: time="2025-01-29T11:32:49.425394158Z" level=info msg="RemovePodSandbox \"5811a36840dd1d42d19b253cd448b0069b1615ccdc5a9f870a7e114387802527\" returns successfully"
Jan 29 11:32:49.425736 containerd[1495]: time="2025-01-29T11:32:49.425700667Z" level=info msg="StopPodSandbox for \"8066f0c386aa12e1020b2dbf729a0686e7cd3d2af7c54baa18b36cbe29e48a40\""
Jan 29 11:32:49.425848 containerd[1495]: time="2025-01-29T11:32:49.425824626Z" level=info msg="TearDown network for sandbox \"8066f0c386aa12e1020b2dbf729a0686e7cd3d2af7c54baa18b36cbe29e48a40\" successfully"
Jan 29 11:32:49.425848 containerd[1495]: time="2025-01-29T11:32:49.425840034Z" level=info msg="StopPodSandbox for \"8066f0c386aa12e1020b2dbf729a0686e7cd3d2af7c54baa18b36cbe29e48a40\" returns successfully"
Jan 29 11:32:49.426084 containerd[1495]: time="2025-01-29T11:32:49.426065208Z" level=info msg="RemovePodSandbox for \"8066f0c386aa12e1020b2dbf729a0686e7cd3d2af7c54baa18b36cbe29e48a40\""
Jan 29 11:32:49.426136 containerd[1495]: time="2025-01-29T11:32:49.426088272Z" level=info msg="Forcibly stopping sandbox \"8066f0c386aa12e1020b2dbf729a0686e7cd3d2af7c54baa18b36cbe29e48a40\""
Jan 29 11:32:49.426184 containerd[1495]: time="2025-01-29T11:32:49.426157525Z" level=info msg="TearDown network for sandbox \"8066f0c386aa12e1020b2dbf729a0686e7cd3d2af7c54baa18b36cbe29e48a40\" successfully"
Jan 29 11:32:49.432809 containerd[1495]: time="2025-01-29T11:32:49.432759618Z" level=warning msg="Failed to get podSandbox status for container event for sandboxID \"8066f0c386aa12e1020b2dbf729a0686e7cd3d2af7c54baa18b36cbe29e48a40\": an error occurred when try to find sandbox: not found. Sending the event with nil podSandboxStatus."
Jan 29 11:32:49.432809 containerd[1495]: time="2025-01-29T11:32:49.432811558Z" level=info msg="RemovePodSandbox \"8066f0c386aa12e1020b2dbf729a0686e7cd3d2af7c54baa18b36cbe29e48a40\" returns successfully"
Jan 29 11:32:49.433185 containerd[1495]: time="2025-01-29T11:32:49.433160458Z" level=info msg="StopPodSandbox for \"5ba01231171e716c7d6bfe7ea5616f070e4f98ceb9b8630b477b92af2ec4f966\""
Jan 29 11:32:49.433299 containerd[1495]: time="2025-01-29T11:32:49.433252765Z" level=info msg="TearDown network for sandbox \"5ba01231171e716c7d6bfe7ea5616f070e4f98ceb9b8630b477b92af2ec4f966\" successfully"
Jan 29 11:32:49.433299 containerd[1495]: time="2025-01-29T11:32:49.433295327Z" level=info msg="StopPodSandbox for \"5ba01231171e716c7d6bfe7ea5616f070e4f98ceb9b8630b477b92af2ec4f966\" returns successfully"
Jan 29 11:32:49.433533 containerd[1495]: time="2025-01-29T11:32:49.433510340Z" level=info msg="RemovePodSandbox for \"5ba01231171e716c7d6bfe7ea5616f070e4f98ceb9b8630b477b92af2ec4f966\""
Jan 29 11:32:49.433584 containerd[1495]: time="2025-01-29T11:32:49.433532443Z" level=info msg="Forcibly stopping sandbox \"5ba01231171e716c7d6bfe7ea5616f070e4f98ceb9b8630b477b92af2ec4f966\""
Jan 29 11:32:49.433643 containerd[1495]: time="2025-01-29T11:32:49.433602728Z" level=info msg="TearDown network for sandbox \"5ba01231171e716c7d6bfe7ea5616f070e4f98ceb9b8630b477b92af2ec4f966\" successfully"
Jan 29 11:32:49.438445 containerd[1495]: time="2025-01-29T11:32:49.438410183Z" level=warning msg="Failed to get podSandbox status for container event for sandboxID \"5ba01231171e716c7d6bfe7ea5616f070e4f98ceb9b8630b477b92af2ec4f966\": an error occurred when try to find sandbox: not found. Sending the event with nil podSandboxStatus."
Jan 29 11:32:49.438487 containerd[1495]: time="2025-01-29T11:32:49.438462483Z" level=info msg="RemovePodSandbox \"5ba01231171e716c7d6bfe7ea5616f070e4f98ceb9b8630b477b92af2ec4f966\" returns successfully"
Jan 29 11:32:49.438784 containerd[1495]: time="2025-01-29T11:32:49.438725628Z" level=info msg="StopPodSandbox for \"4f05e1208f1add52e8a5b9089a9a0848ac49ff2f3de3a669456b522c00d2264e\""
Jan 29 11:32:49.438824 containerd[1495]: time="2025-01-29T11:32:49.438813447Z" level=info msg="TearDown network for sandbox \"4f05e1208f1add52e8a5b9089a9a0848ac49ff2f3de3a669456b522c00d2264e\" successfully"
Jan 29 11:32:49.438824 containerd[1495]: time="2025-01-29T11:32:49.438823196Z" level=info msg="StopPodSandbox for \"4f05e1208f1add52e8a5b9089a9a0848ac49ff2f3de3a669456b522c00d2264e\" returns successfully"
Jan 29 11:32:49.439065 containerd[1495]: time="2025-01-29T11:32:49.439034893Z" level=info msg="RemovePodSandbox for \"4f05e1208f1add52e8a5b9089a9a0848ac49ff2f3de3a669456b522c00d2264e\""
Jan 29 11:32:49.439065 containerd[1495]: time="2025-01-29T11:32:49.439056435Z" level=info msg="Forcibly stopping sandbox \"4f05e1208f1add52e8a5b9089a9a0848ac49ff2f3de3a669456b522c00d2264e\""
Jan 29 11:32:49.439163 containerd[1495]: time="2025-01-29T11:32:49.439120849Z" level=info msg="TearDown network for sandbox \"4f05e1208f1add52e8a5b9089a9a0848ac49ff2f3de3a669456b522c00d2264e\" successfully"
Jan 29 11:32:49.443011 containerd[1495]: time="2025-01-29T11:32:49.442974581Z" level=warning msg="Failed to get podSandbox status for container event for sandboxID \"4f05e1208f1add52e8a5b9089a9a0848ac49ff2f3de3a669456b522c00d2264e\": an error occurred when try to find sandbox: not found. Sending the event with nil podSandboxStatus."
Jan 29 11:32:49.443011 containerd[1495]: time="2025-01-29T11:32:49.443010200Z" level=info msg="RemovePodSandbox \"4f05e1208f1add52e8a5b9089a9a0848ac49ff2f3de3a669456b522c00d2264e\" returns successfully"
Jan 29 11:32:49.443579 containerd[1495]: time="2025-01-29T11:32:49.443236565Z" level=info msg="StopPodSandbox for \"88b4cc24607b9cc69cf8566db1a11a31044711ee8251a11957523ee4d62e4b16\""
Jan 29 11:32:49.443579 containerd[1495]: time="2025-01-29T11:32:49.443316028Z" level=info msg="TearDown network for sandbox \"88b4cc24607b9cc69cf8566db1a11a31044711ee8251a11957523ee4d62e4b16\" successfully"
Jan 29 11:32:49.443579 containerd[1495]: time="2025-01-29T11:32:49.443324944Z" level=info msg="StopPodSandbox for \"88b4cc24607b9cc69cf8566db1a11a31044711ee8251a11957523ee4d62e4b16\" returns successfully"
Jan 29 11:32:49.443670 containerd[1495]: time="2025-01-29T11:32:49.443594362Z" level=info msg="RemovePodSandbox for \"88b4cc24607b9cc69cf8566db1a11a31044711ee8251a11957523ee4d62e4b16\""
Jan 29 11:32:49.443670 containerd[1495]: time="2025-01-29T11:32:49.443615863Z" level=info msg="Forcibly stopping sandbox \"88b4cc24607b9cc69cf8566db1a11a31044711ee8251a11957523ee4d62e4b16\""
Jan 29 11:32:49.443727 containerd[1495]: time="2025-01-29T11:32:49.443687872Z" level=info msg="TearDown network for sandbox \"88b4cc24607b9cc69cf8566db1a11a31044711ee8251a11957523ee4d62e4b16\" successfully"
Jan 29 11:32:49.447220 containerd[1495]: time="2025-01-29T11:32:49.447194708Z" level=warning msg="Failed to get podSandbox status for container event for sandboxID \"88b4cc24607b9cc69cf8566db1a11a31044711ee8251a11957523ee4d62e4b16\": an error occurred when try to find sandbox: not found. Sending the event with nil podSandboxStatus."
Jan 29 11:32:49.447278 containerd[1495]: time="2025-01-29T11:32:49.447243944Z" level=info msg="RemovePodSandbox \"88b4cc24607b9cc69cf8566db1a11a31044711ee8251a11957523ee4d62e4b16\" returns successfully" Jan 29 11:32:49.449213 containerd[1495]: time="2025-01-29T11:32:49.447867090Z" level=info msg="StopPodSandbox for \"934f5bf6aae6891083f2c20de48c400c17aaa8a980e059df118a8a6387f827de\"" Jan 29 11:32:49.449213 containerd[1495]: time="2025-01-29T11:32:49.447946723Z" level=info msg="TearDown network for sandbox \"934f5bf6aae6891083f2c20de48c400c17aaa8a980e059df118a8a6387f827de\" successfully" Jan 29 11:32:49.449213 containerd[1495]: time="2025-01-29T11:32:49.447981379Z" level=info msg="StopPodSandbox for \"934f5bf6aae6891083f2c20de48c400c17aaa8a980e059df118a8a6387f827de\" returns successfully" Jan 29 11:32:49.449213 containerd[1495]: time="2025-01-29T11:32:49.448198416Z" level=info msg="RemovePodSandbox for \"934f5bf6aae6891083f2c20de48c400c17aaa8a980e059df118a8a6387f827de\"" Jan 29 11:32:49.449213 containerd[1495]: time="2025-01-29T11:32:49.448214738Z" level=info msg="Forcibly stopping sandbox \"934f5bf6aae6891083f2c20de48c400c17aaa8a980e059df118a8a6387f827de\"" Jan 29 11:32:49.449213 containerd[1495]: time="2025-01-29T11:32:49.448289982Z" level=info msg="TearDown network for sandbox \"934f5bf6aae6891083f2c20de48c400c17aaa8a980e059df118a8a6387f827de\" successfully" Jan 29 11:32:49.452150 containerd[1495]: time="2025-01-29T11:32:49.452117065Z" level=warning msg="Failed to get podSandbox status for container event for sandboxID \"934f5bf6aae6891083f2c20de48c400c17aaa8a980e059df118a8a6387f827de\": an error occurred when try to find sandbox: not found. Sending the event with nil podSandboxStatus." 
Jan 29 11:32:49.452150 containerd[1495]: time="2025-01-29T11:32:49.452163344Z" level=info msg="RemovePodSandbox \"934f5bf6aae6891083f2c20de48c400c17aaa8a980e059df118a8a6387f827de\" returns successfully" Jan 29 11:32:49.452507 containerd[1495]: time="2025-01-29T11:32:49.452465204Z" level=info msg="StopPodSandbox for \"d569cb7c5ddbed0b16a9e1171769ace0a51b3c89a0c48b2f5957786231c09e94\"" Jan 29 11:32:49.452622 containerd[1495]: time="2025-01-29T11:32:49.452597057Z" level=info msg="TearDown network for sandbox \"d569cb7c5ddbed0b16a9e1171769ace0a51b3c89a0c48b2f5957786231c09e94\" successfully" Jan 29 11:32:49.452622 containerd[1495]: time="2025-01-29T11:32:49.452613147Z" level=info msg="StopPodSandbox for \"d569cb7c5ddbed0b16a9e1171769ace0a51b3c89a0c48b2f5957786231c09e94\" returns successfully" Jan 29 11:32:49.453650 containerd[1495]: time="2025-01-29T11:32:49.453627927Z" level=info msg="RemovePodSandbox for \"d569cb7c5ddbed0b16a9e1171769ace0a51b3c89a0c48b2f5957786231c09e94\"" Jan 29 11:32:49.453694 containerd[1495]: time="2025-01-29T11:32:49.453653005Z" level=info msg="Forcibly stopping sandbox \"d569cb7c5ddbed0b16a9e1171769ace0a51b3c89a0c48b2f5957786231c09e94\"" Jan 29 11:32:49.453763 containerd[1495]: time="2025-01-29T11:32:49.453726797Z" level=info msg="TearDown network for sandbox \"d569cb7c5ddbed0b16a9e1171769ace0a51b3c89a0c48b2f5957786231c09e94\" successfully" Jan 29 11:32:49.458361 containerd[1495]: time="2025-01-29T11:32:49.458326293Z" level=warning msg="Failed to get podSandbox status for container event for sandboxID \"d569cb7c5ddbed0b16a9e1171769ace0a51b3c89a0c48b2f5957786231c09e94\": an error occurred when try to find sandbox: not found. Sending the event with nil podSandboxStatus." 
Jan 29 11:32:49.458459 containerd[1495]: time="2025-01-29T11:32:49.458371991Z" level=info msg="RemovePodSandbox \"d569cb7c5ddbed0b16a9e1171769ace0a51b3c89a0c48b2f5957786231c09e94\" returns successfully" Jan 29 11:32:49.458745 containerd[1495]: time="2025-01-29T11:32:49.458722574Z" level=info msg="StopPodSandbox for \"c60489ec5828713ee6df69e7a33bc62e8d4338df25cba506d3f6f340dd21c518\"" Jan 29 11:32:49.458831 containerd[1495]: time="2025-01-29T11:32:49.458809421Z" level=info msg="TearDown network for sandbox \"c60489ec5828713ee6df69e7a33bc62e8d4338df25cba506d3f6f340dd21c518\" successfully" Jan 29 11:32:49.458831 containerd[1495]: time="2025-01-29T11:32:49.458824420Z" level=info msg="StopPodSandbox for \"c60489ec5828713ee6df69e7a33bc62e8d4338df25cba506d3f6f340dd21c518\" returns successfully" Jan 29 11:32:49.459131 containerd[1495]: time="2025-01-29T11:32:49.459101954Z" level=info msg="RemovePodSandbox for \"c60489ec5828713ee6df69e7a33bc62e8d4338df25cba506d3f6f340dd21c518\"" Jan 29 11:32:49.459131 containerd[1495]: time="2025-01-29T11:32:49.459123234Z" level=info msg="Forcibly stopping sandbox \"c60489ec5828713ee6df69e7a33bc62e8d4338df25cba506d3f6f340dd21c518\"" Jan 29 11:32:49.459215 containerd[1495]: time="2025-01-29T11:32:49.459185474Z" level=info msg="TearDown network for sandbox \"c60489ec5828713ee6df69e7a33bc62e8d4338df25cba506d3f6f340dd21c518\" successfully" Jan 29 11:32:49.462492 containerd[1495]: time="2025-01-29T11:32:49.462459273Z" level=warning msg="Failed to get podSandbox status for container event for sandboxID \"c60489ec5828713ee6df69e7a33bc62e8d4338df25cba506d3f6f340dd21c518\": an error occurred when try to find sandbox: not found. Sending the event with nil podSandboxStatus." 
Jan 29 11:32:49.462538 containerd[1495]: time="2025-01-29T11:32:49.462496705Z" level=info msg="RemovePodSandbox \"c60489ec5828713ee6df69e7a33bc62e8d4338df25cba506d3f6f340dd21c518\" returns successfully" Jan 29 11:32:49.462739 containerd[1495]: time="2025-01-29T11:32:49.462715144Z" level=info msg="StopPodSandbox for \"018d81fe3443755c0386db1c28cb83e6eda2e9ae48cc9dfd32950115231468d6\"" Jan 29 11:32:49.462816 containerd[1495]: time="2025-01-29T11:32:49.462803444Z" level=info msg="TearDown network for sandbox \"018d81fe3443755c0386db1c28cb83e6eda2e9ae48cc9dfd32950115231468d6\" successfully" Jan 29 11:32:49.462840 containerd[1495]: time="2025-01-29T11:32:49.462814926Z" level=info msg="StopPodSandbox for \"018d81fe3443755c0386db1c28cb83e6eda2e9ae48cc9dfd32950115231468d6\" returns successfully" Jan 29 11:32:49.463042 containerd[1495]: time="2025-01-29T11:32:49.463023266Z" level=info msg="RemovePodSandbox for \"018d81fe3443755c0386db1c28cb83e6eda2e9ae48cc9dfd32950115231468d6\"" Jan 29 11:32:49.463083 containerd[1495]: time="2025-01-29T11:32:49.463044437Z" level=info msg="Forcibly stopping sandbox \"018d81fe3443755c0386db1c28cb83e6eda2e9ae48cc9dfd32950115231468d6\"" Jan 29 11:32:49.463134 containerd[1495]: time="2025-01-29T11:32:49.463109061Z" level=info msg="TearDown network for sandbox \"018d81fe3443755c0386db1c28cb83e6eda2e9ae48cc9dfd32950115231468d6\" successfully" Jan 29 11:32:49.466402 containerd[1495]: time="2025-01-29T11:32:49.466376318Z" level=warning msg="Failed to get podSandbox status for container event for sandboxID \"018d81fe3443755c0386db1c28cb83e6eda2e9ae48cc9dfd32950115231468d6\": an error occurred when try to find sandbox: not found. Sending the event with nil podSandboxStatus." 
Jan 29 11:32:49.466467 containerd[1495]: time="2025-01-29T11:32:49.466407107Z" level=info msg="RemovePodSandbox \"018d81fe3443755c0386db1c28cb83e6eda2e9ae48cc9dfd32950115231468d6\" returns successfully" Jan 29 11:32:49.466660 containerd[1495]: time="2025-01-29T11:32:49.466636338Z" level=info msg="StopPodSandbox for \"068481ee10bd71b1783eeab1c40f9ef93d9dc158fa6a9beeeba5fd6980a6abd7\"" Jan 29 11:32:49.466756 containerd[1495]: time="2025-01-29T11:32:49.466732532Z" level=info msg="TearDown network for sandbox \"068481ee10bd71b1783eeab1c40f9ef93d9dc158fa6a9beeeba5fd6980a6abd7\" successfully" Jan 29 11:32:49.466756 containerd[1495]: time="2025-01-29T11:32:49.466747251Z" level=info msg="StopPodSandbox for \"068481ee10bd71b1783eeab1c40f9ef93d9dc158fa6a9beeeba5fd6980a6abd7\" returns successfully" Jan 29 11:32:49.466951 containerd[1495]: time="2025-01-29T11:32:49.466924992Z" level=info msg="RemovePodSandbox for \"068481ee10bd71b1783eeab1c40f9ef93d9dc158fa6a9beeeba5fd6980a6abd7\"" Jan 29 11:32:49.467006 containerd[1495]: time="2025-01-29T11:32:49.466949439Z" level=info msg="Forcibly stopping sandbox \"068481ee10bd71b1783eeab1c40f9ef93d9dc158fa6a9beeeba5fd6980a6abd7\"" Jan 29 11:32:49.467049 containerd[1495]: time="2025-01-29T11:32:49.467022680Z" level=info msg="TearDown network for sandbox \"068481ee10bd71b1783eeab1c40f9ef93d9dc158fa6a9beeeba5fd6980a6abd7\" successfully" Jan 29 11:32:49.473052 containerd[1495]: time="2025-01-29T11:32:49.472999992Z" level=warning msg="Failed to get podSandbox status for container event for sandboxID \"068481ee10bd71b1783eeab1c40f9ef93d9dc158fa6a9beeeba5fd6980a6abd7\": an error occurred when try to find sandbox: not found. Sending the event with nil podSandboxStatus." 
Jan 29 11:32:49.473141 containerd[1495]: time="2025-01-29T11:32:49.473067953Z" level=info msg="RemovePodSandbox \"068481ee10bd71b1783eeab1c40f9ef93d9dc158fa6a9beeeba5fd6980a6abd7\" returns successfully" Jan 29 11:32:49.473456 containerd[1495]: time="2025-01-29T11:32:49.473436090Z" level=info msg="StopPodSandbox for \"6b01d3b17008796e2599c4562b21cab96766aa5bc7d9347165898d1384559383\"" Jan 29 11:32:49.473582 containerd[1495]: time="2025-01-29T11:32:49.473545911Z" level=info msg="TearDown network for sandbox \"6b01d3b17008796e2599c4562b21cab96766aa5bc7d9347165898d1384559383\" successfully" Jan 29 11:32:49.473582 containerd[1495]: time="2025-01-29T11:32:49.473562643Z" level=info msg="StopPodSandbox for \"6b01d3b17008796e2599c4562b21cab96766aa5bc7d9347165898d1384559383\" returns successfully" Jan 29 11:32:49.473829 containerd[1495]: time="2025-01-29T11:32:49.473808656Z" level=info msg="RemovePodSandbox for \"6b01d3b17008796e2599c4562b21cab96766aa5bc7d9347165898d1384559383\"" Jan 29 11:32:49.473882 containerd[1495]: time="2025-01-29T11:32:49.473835467Z" level=info msg="Forcibly stopping sandbox \"6b01d3b17008796e2599c4562b21cab96766aa5bc7d9347165898d1384559383\"" Jan 29 11:32:49.473958 containerd[1495]: time="2025-01-29T11:32:49.473916904Z" level=info msg="TearDown network for sandbox \"6b01d3b17008796e2599c4562b21cab96766aa5bc7d9347165898d1384559383\" successfully" Jan 29 11:32:49.477618 containerd[1495]: time="2025-01-29T11:32:49.477590992Z" level=warning msg="Failed to get podSandbox status for container event for sandboxID \"6b01d3b17008796e2599c4562b21cab96766aa5bc7d9347165898d1384559383\": an error occurred when try to find sandbox: not found. Sending the event with nil podSandboxStatus." 
Jan 29 11:32:49.477679 containerd[1495]: time="2025-01-29T11:32:49.477631229Z" level=info msg="RemovePodSandbox \"6b01d3b17008796e2599c4562b21cab96766aa5bc7d9347165898d1384559383\" returns successfully" Jan 29 11:32:49.477921 containerd[1495]: time="2025-01-29T11:32:49.477899695Z" level=info msg="StopPodSandbox for \"32d3ad80782f04b13fc8d95b06e337633874857fa4abf01127138c329bba0487\"" Jan 29 11:32:49.478011 containerd[1495]: time="2025-01-29T11:32:49.477991602Z" level=info msg="TearDown network for sandbox \"32d3ad80782f04b13fc8d95b06e337633874857fa4abf01127138c329bba0487\" successfully" Jan 29 11:32:49.478011 containerd[1495]: time="2025-01-29T11:32:49.478005408Z" level=info msg="StopPodSandbox for \"32d3ad80782f04b13fc8d95b06e337633874857fa4abf01127138c329bba0487\" returns successfully" Jan 29 11:32:49.478257 containerd[1495]: time="2025-01-29T11:32:49.478233046Z" level=info msg="RemovePodSandbox for \"32d3ad80782f04b13fc8d95b06e337633874857fa4abf01127138c329bba0487\"" Jan 29 11:32:49.478334 containerd[1495]: time="2025-01-29T11:32:49.478259346Z" level=info msg="Forcibly stopping sandbox \"32d3ad80782f04b13fc8d95b06e337633874857fa4abf01127138c329bba0487\"" Jan 29 11:32:49.478369 containerd[1495]: time="2025-01-29T11:32:49.478335503Z" level=info msg="TearDown network for sandbox \"32d3ad80782f04b13fc8d95b06e337633874857fa4abf01127138c329bba0487\" successfully" Jan 29 11:32:49.481867 containerd[1495]: time="2025-01-29T11:32:49.481832360Z" level=warning msg="Failed to get podSandbox status for container event for sandboxID \"32d3ad80782f04b13fc8d95b06e337633874857fa4abf01127138c329bba0487\": an error occurred when try to find sandbox: not found. Sending the event with nil podSandboxStatus." 
Jan 29 11:32:49.481913 containerd[1495]: time="2025-01-29T11:32:49.481875443Z" level=info msg="RemovePodSandbox \"32d3ad80782f04b13fc8d95b06e337633874857fa4abf01127138c329bba0487\" returns successfully" Jan 29 11:32:49.482163 containerd[1495]: time="2025-01-29T11:32:49.482141504Z" level=info msg="StopPodSandbox for \"97436e09e8a3af61a6c30171fb86bac060178da354f2257a8dfbf354e1e82387\"" Jan 29 11:32:49.482260 containerd[1495]: time="2025-01-29T11:32:49.482236727Z" level=info msg="TearDown network for sandbox \"97436e09e8a3af61a6c30171fb86bac060178da354f2257a8dfbf354e1e82387\" successfully" Jan 29 11:32:49.482260 containerd[1495]: time="2025-01-29T11:32:49.482252828Z" level=info msg="StopPodSandbox for \"97436e09e8a3af61a6c30171fb86bac060178da354f2257a8dfbf354e1e82387\" returns successfully" Jan 29 11:32:49.482498 containerd[1495]: time="2025-01-29T11:32:49.482478100Z" level=info msg="RemovePodSandbox for \"97436e09e8a3af61a6c30171fb86bac060178da354f2257a8dfbf354e1e82387\"" Jan 29 11:32:49.482596 containerd[1495]: time="2025-01-29T11:32:49.482565939Z" level=info msg="Forcibly stopping sandbox \"97436e09e8a3af61a6c30171fb86bac060178da354f2257a8dfbf354e1e82387\"" Jan 29 11:32:49.482674 containerd[1495]: time="2025-01-29T11:32:49.482642116Z" level=info msg="TearDown network for sandbox \"97436e09e8a3af61a6c30171fb86bac060178da354f2257a8dfbf354e1e82387\" successfully" Jan 29 11:32:49.486167 containerd[1495]: time="2025-01-29T11:32:49.486142300Z" level=warning msg="Failed to get podSandbox status for container event for sandboxID \"97436e09e8a3af61a6c30171fb86bac060178da354f2257a8dfbf354e1e82387\": an error occurred when try to find sandbox: not found. Sending the event with nil podSandboxStatus." 
Jan 29 11:32:49.486212 containerd[1495]: time="2025-01-29T11:32:49.486178730Z" level=info msg="RemovePodSandbox \"97436e09e8a3af61a6c30171fb86bac060178da354f2257a8dfbf354e1e82387\" returns successfully" Jan 29 11:32:49.486368 containerd[1495]: time="2025-01-29T11:32:49.486352924Z" level=info msg="StopPodSandbox for \"61ea1d99928f6fca8bf5f9f3d6685899fdaf05dc52e35306cb5b54127a72d6e7\"" Jan 29 11:32:49.486466 containerd[1495]: time="2025-01-29T11:32:49.486446674Z" level=info msg="TearDown network for sandbox \"61ea1d99928f6fca8bf5f9f3d6685899fdaf05dc52e35306cb5b54127a72d6e7\" successfully" Jan 29 11:32:49.486466 containerd[1495]: time="2025-01-29T11:32:49.486458758Z" level=info msg="StopPodSandbox for \"61ea1d99928f6fca8bf5f9f3d6685899fdaf05dc52e35306cb5b54127a72d6e7\" returns successfully" Jan 29 11:32:49.486739 containerd[1495]: time="2025-01-29T11:32:49.486716453Z" level=info msg="RemovePodSandbox for \"61ea1d99928f6fca8bf5f9f3d6685899fdaf05dc52e35306cb5b54127a72d6e7\"" Jan 29 11:32:49.486772 containerd[1495]: time="2025-01-29T11:32:49.486740419Z" level=info msg="Forcibly stopping sandbox \"61ea1d99928f6fca8bf5f9f3d6685899fdaf05dc52e35306cb5b54127a72d6e7\"" Jan 29 11:32:49.486847 containerd[1495]: time="2025-01-29T11:32:49.486806416Z" level=info msg="TearDown network for sandbox \"61ea1d99928f6fca8bf5f9f3d6685899fdaf05dc52e35306cb5b54127a72d6e7\" successfully" Jan 29 11:32:49.490717 containerd[1495]: time="2025-01-29T11:32:49.490677242Z" level=warning msg="Failed to get podSandbox status for container event for sandboxID \"61ea1d99928f6fca8bf5f9f3d6685899fdaf05dc52e35306cb5b54127a72d6e7\": an error occurred when try to find sandbox: not found. Sending the event with nil podSandboxStatus." 
Jan 29 11:32:49.490809 containerd[1495]: time="2025-01-29T11:32:49.490738089Z" level=info msg="RemovePodSandbox \"61ea1d99928f6fca8bf5f9f3d6685899fdaf05dc52e35306cb5b54127a72d6e7\" returns successfully" Jan 29 11:32:49.491087 containerd[1495]: time="2025-01-29T11:32:49.491063204Z" level=info msg="StopPodSandbox for \"a57e13c259cc81ce2e477b9469ae3554a9ae4291cb2db4e7129310c60db62f90\"" Jan 29 11:32:49.491186 containerd[1495]: time="2025-01-29T11:32:49.491166341Z" level=info msg="TearDown network for sandbox \"a57e13c259cc81ce2e477b9469ae3554a9ae4291cb2db4e7129310c60db62f90\" successfully" Jan 29 11:32:49.491186 containerd[1495]: time="2025-01-29T11:32:49.491183995Z" level=info msg="StopPodSandbox for \"a57e13c259cc81ce2e477b9469ae3554a9ae4291cb2db4e7129310c60db62f90\" returns successfully" Jan 29 11:32:49.491382 containerd[1495]: time="2025-01-29T11:32:49.491363830Z" level=info msg="RemovePodSandbox for \"a57e13c259cc81ce2e477b9469ae3554a9ae4291cb2db4e7129310c60db62f90\"" Jan 29 11:32:49.491441 containerd[1495]: time="2025-01-29T11:32:49.491384630Z" level=info msg="Forcibly stopping sandbox \"a57e13c259cc81ce2e477b9469ae3554a9ae4291cb2db4e7129310c60db62f90\"" Jan 29 11:32:49.491520 containerd[1495]: time="2025-01-29T11:32:49.491480846Z" level=info msg="TearDown network for sandbox \"a57e13c259cc81ce2e477b9469ae3554a9ae4291cb2db4e7129310c60db62f90\" successfully" Jan 29 11:32:49.495137 containerd[1495]: time="2025-01-29T11:32:49.495108765Z" level=warning msg="Failed to get podSandbox status for container event for sandboxID \"a57e13c259cc81ce2e477b9469ae3554a9ae4291cb2db4e7129310c60db62f90\": an error occurred when try to find sandbox: not found. Sending the event with nil podSandboxStatus." 
Jan 29 11:32:49.495191 containerd[1495]: time="2025-01-29T11:32:49.495152329Z" level=info msg="RemovePodSandbox \"a57e13c259cc81ce2e477b9469ae3554a9ae4291cb2db4e7129310c60db62f90\" returns successfully" Jan 29 11:32:49.495453 containerd[1495]: time="2025-01-29T11:32:49.495426866Z" level=info msg="StopPodSandbox for \"f81dce53143ec3a050bc20013a34bc942f73112707290c1c5853735a992a3294\"" Jan 29 11:32:49.495535 containerd[1495]: time="2025-01-29T11:32:49.495519224Z" level=info msg="TearDown network for sandbox \"f81dce53143ec3a050bc20013a34bc942f73112707290c1c5853735a992a3294\" successfully" Jan 29 11:32:49.495578 containerd[1495]: time="2025-01-29T11:32:49.495533250Z" level=info msg="StopPodSandbox for \"f81dce53143ec3a050bc20013a34bc942f73112707290c1c5853735a992a3294\" returns successfully" Jan 29 11:32:49.495774 containerd[1495]: time="2025-01-29T11:32:49.495752080Z" level=info msg="RemovePodSandbox for \"f81dce53143ec3a050bc20013a34bc942f73112707290c1c5853735a992a3294\"" Jan 29 11:32:49.495875 containerd[1495]: time="2025-01-29T11:32:49.495775265Z" level=info msg="Forcibly stopping sandbox \"f81dce53143ec3a050bc20013a34bc942f73112707290c1c5853735a992a3294\"" Jan 29 11:32:49.495875 containerd[1495]: time="2025-01-29T11:32:49.495849799Z" level=info msg="TearDown network for sandbox \"f81dce53143ec3a050bc20013a34bc942f73112707290c1c5853735a992a3294\" successfully" Jan 29 11:32:49.499362 containerd[1495]: time="2025-01-29T11:32:49.499331377Z" level=warning msg="Failed to get podSandbox status for container event for sandboxID \"f81dce53143ec3a050bc20013a34bc942f73112707290c1c5853735a992a3294\": an error occurred when try to find sandbox: not found. Sending the event with nil podSandboxStatus." 
Jan 29 11:32:49.499441 containerd[1495]: time="2025-01-29T11:32:49.499368829Z" level=info msg="RemovePodSandbox \"f81dce53143ec3a050bc20013a34bc942f73112707290c1c5853735a992a3294\" returns successfully" Jan 29 11:32:49.499618 containerd[1495]: time="2025-01-29T11:32:49.499598430Z" level=info msg="StopPodSandbox for \"cc31abcb76ed49cf0135c5d1e1ab407bf029dc7d3acca444a7ee37ff89eb9e88\"" Jan 29 11:32:49.499683 containerd[1495]: time="2025-01-29T11:32:49.499675227Z" level=info msg="TearDown network for sandbox \"cc31abcb76ed49cf0135c5d1e1ab407bf029dc7d3acca444a7ee37ff89eb9e88\" successfully" Jan 29 11:32:49.499707 containerd[1495]: time="2025-01-29T11:32:49.499684515Z" level=info msg="StopPodSandbox for \"cc31abcb76ed49cf0135c5d1e1ab407bf029dc7d3acca444a7ee37ff89eb9e88\" returns successfully" Jan 29 11:32:49.499891 containerd[1495]: time="2025-01-29T11:32:49.499876414Z" level=info msg="RemovePodSandbox for \"cc31abcb76ed49cf0135c5d1e1ab407bf029dc7d3acca444a7ee37ff89eb9e88\"" Jan 29 11:32:49.499935 containerd[1495]: time="2025-01-29T11:32:49.499892886Z" level=info msg="Forcibly stopping sandbox \"cc31abcb76ed49cf0135c5d1e1ab407bf029dc7d3acca444a7ee37ff89eb9e88\"" Jan 29 11:32:49.499981 containerd[1495]: time="2025-01-29T11:32:49.499948262Z" level=info msg="TearDown network for sandbox \"cc31abcb76ed49cf0135c5d1e1ab407bf029dc7d3acca444a7ee37ff89eb9e88\" successfully" Jan 29 11:32:49.503235 containerd[1495]: time="2025-01-29T11:32:49.503215148Z" level=warning msg="Failed to get podSandbox status for container event for sandboxID \"cc31abcb76ed49cf0135c5d1e1ab407bf029dc7d3acca444a7ee37ff89eb9e88\": an error occurred when try to find sandbox: not found. Sending the event with nil podSandboxStatus." 
Jan 29 11:32:49.503291 containerd[1495]: time="2025-01-29T11:32:49.503242871Z" level=info msg="RemovePodSandbox \"cc31abcb76ed49cf0135c5d1e1ab407bf029dc7d3acca444a7ee37ff89eb9e88\" returns successfully" Jan 29 11:32:49.503511 containerd[1495]: time="2025-01-29T11:32:49.503493142Z" level=info msg="StopPodSandbox for \"b39c2348c38274ae69c6ca2882dbe664e0f843199c327bc4d8145d923c55a15b\"" Jan 29 11:32:49.503594 containerd[1495]: time="2025-01-29T11:32:49.503580420Z" level=info msg="TearDown network for sandbox \"b39c2348c38274ae69c6ca2882dbe664e0f843199c327bc4d8145d923c55a15b\" successfully" Jan 29 11:32:49.503594 containerd[1495]: time="2025-01-29T11:32:49.503591811Z" level=info msg="StopPodSandbox for \"b39c2348c38274ae69c6ca2882dbe664e0f843199c327bc4d8145d923c55a15b\" returns successfully" Jan 29 11:32:49.503808 containerd[1495]: time="2025-01-29T11:32:49.503789401Z" level=info msg="RemovePodSandbox for \"b39c2348c38274ae69c6ca2882dbe664e0f843199c327bc4d8145d923c55a15b\"" Jan 29 11:32:49.503857 containerd[1495]: time="2025-01-29T11:32:49.503812446Z" level=info msg="Forcibly stopping sandbox \"b39c2348c38274ae69c6ca2882dbe664e0f843199c327bc4d8145d923c55a15b\"" Jan 29 11:32:49.503920 containerd[1495]: time="2025-01-29T11:32:49.503889844Z" level=info msg="TearDown network for sandbox \"b39c2348c38274ae69c6ca2882dbe664e0f843199c327bc4d8145d923c55a15b\" successfully" Jan 29 11:32:49.507341 containerd[1495]: time="2025-01-29T11:32:49.507304463Z" level=warning msg="Failed to get podSandbox status for container event for sandboxID \"b39c2348c38274ae69c6ca2882dbe664e0f843199c327bc4d8145d923c55a15b\": an error occurred when try to find sandbox: not found. Sending the event with nil podSandboxStatus." 
Jan 29 11:32:49.507409 containerd[1495]: time="2025-01-29T11:32:49.507346955Z" level=info msg="RemovePodSandbox \"b39c2348c38274ae69c6ca2882dbe664e0f843199c327bc4d8145d923c55a15b\" returns successfully" Jan 29 11:32:49.507646 kubelet[2697]: I0129 11:32:49.507603 2697 prober_manager.go:312] "Failed to trigger a manual run" probe="Readiness" Jan 29 11:32:49.507963 containerd[1495]: time="2025-01-29T11:32:49.507604780Z" level=info msg="StopPodSandbox for \"8ff3ec328ea81ecb5b20f9bf8dc8473fb5ca80ce1cd5e221a39baad55272eccf\"" Jan 29 11:32:49.507963 containerd[1495]: time="2025-01-29T11:32:49.507696817Z" level=info msg="TearDown network for sandbox \"8ff3ec328ea81ecb5b20f9bf8dc8473fb5ca80ce1cd5e221a39baad55272eccf\" successfully" Jan 29 11:32:49.507963 containerd[1495]: time="2025-01-29T11:32:49.507708941Z" level=info msg="StopPodSandbox for \"8ff3ec328ea81ecb5b20f9bf8dc8473fb5ca80ce1cd5e221a39baad55272eccf\" returns successfully" Jan 29 11:32:49.508514 containerd[1495]: time="2025-01-29T11:32:49.508491945Z" level=info msg="RemovePodSandbox for \"8ff3ec328ea81ecb5b20f9bf8dc8473fb5ca80ce1cd5e221a39baad55272eccf\"" Jan 29 11:32:49.508553 containerd[1495]: time="2025-01-29T11:32:49.508516142Z" level=info msg="Forcibly stopping sandbox \"8ff3ec328ea81ecb5b20f9bf8dc8473fb5ca80ce1cd5e221a39baad55272eccf\"" Jan 29 11:32:49.508613 containerd[1495]: time="2025-01-29T11:32:49.508596797Z" level=info msg="TearDown network for sandbox \"8ff3ec328ea81ecb5b20f9bf8dc8473fb5ca80ce1cd5e221a39baad55272eccf\" successfully" Jan 29 11:32:49.513939 containerd[1495]: time="2025-01-29T11:32:49.513798499Z" level=warning msg="Failed to get podSandbox status for container event for sandboxID \"8ff3ec328ea81ecb5b20f9bf8dc8473fb5ca80ce1cd5e221a39baad55272eccf\": an error occurred when try to find sandbox: not found. Sending the event with nil podSandboxStatus." 
Jan 29 11:32:49.513939 containerd[1495]: time="2025-01-29T11:32:49.513853515Z" level=info msg="RemovePodSandbox \"8ff3ec328ea81ecb5b20f9bf8dc8473fb5ca80ce1cd5e221a39baad55272eccf\" returns successfully" Jan 29 11:32:49.514470 containerd[1495]: time="2025-01-29T11:32:49.514247663Z" level=info msg="StopPodSandbox for \"62fbf0cacbcea32d6948f7a7cdc9388760af0845df2b850d76ebf9e79a41f942\"" Jan 29 11:32:49.514470 containerd[1495]: time="2025-01-29T11:32:49.514354497Z" level=info msg="TearDown network for sandbox \"62fbf0cacbcea32d6948f7a7cdc9388760af0845df2b850d76ebf9e79a41f942\" successfully" Jan 29 11:32:49.514470 containerd[1495]: time="2025-01-29T11:32:49.514367823Z" level=info msg="StopPodSandbox for \"62fbf0cacbcea32d6948f7a7cdc9388760af0845df2b850d76ebf9e79a41f942\" returns successfully" Jan 29 11:32:49.514745 containerd[1495]: time="2025-01-29T11:32:49.514687828Z" level=info msg="RemovePodSandbox for \"62fbf0cacbcea32d6948f7a7cdc9388760af0845df2b850d76ebf9e79a41f942\"" Jan 29 11:32:49.514789 containerd[1495]: time="2025-01-29T11:32:49.514746290Z" level=info msg="Forcibly stopping sandbox \"62fbf0cacbcea32d6948f7a7cdc9388760af0845df2b850d76ebf9e79a41f942\"" Jan 29 11:32:49.514919 containerd[1495]: time="2025-01-29T11:32:49.514877602Z" level=info msg="TearDown network for sandbox \"62fbf0cacbcea32d6948f7a7cdc9388760af0845df2b850d76ebf9e79a41f942\" successfully" Jan 29 11:32:49.519353 containerd[1495]: time="2025-01-29T11:32:49.519319967Z" level=warning msg="Failed to get podSandbox status for container event for sandboxID \"62fbf0cacbcea32d6948f7a7cdc9388760af0845df2b850d76ebf9e79a41f942\": an error occurred when try to find sandbox: not found. Sending the event with nil podSandboxStatus." 
Jan 29 11:32:49.519520 containerd[1495]: time="2025-01-29T11:32:49.519379231Z" level=info msg="RemovePodSandbox \"62fbf0cacbcea32d6948f7a7cdc9388760af0845df2b850d76ebf9e79a41f942\" returns successfully" Jan 29 11:32:49.519813 containerd[1495]: time="2025-01-29T11:32:49.519789067Z" level=info msg="StopPodSandbox for \"c0b5ead6c962ad21415289fbc7fdd4d4dbcb723931e95b4fc2ac10dc62090f23\"" Jan 29 11:32:49.520073 containerd[1495]: time="2025-01-29T11:32:49.520003660Z" level=info msg="TearDown network for sandbox \"c0b5ead6c962ad21415289fbc7fdd4d4dbcb723931e95b4fc2ac10dc62090f23\" successfully" Jan 29 11:32:49.520073 containerd[1495]: time="2025-01-29T11:32:49.520020993Z" level=info msg="StopPodSandbox for \"c0b5ead6c962ad21415289fbc7fdd4d4dbcb723931e95b4fc2ac10dc62090f23\" returns successfully" Jan 29 11:32:49.520324 containerd[1495]: time="2025-01-29T11:32:49.520303105Z" level=info msg="RemovePodSandbox for \"c0b5ead6c962ad21415289fbc7fdd4d4dbcb723931e95b4fc2ac10dc62090f23\"" Jan 29 11:32:49.520384 containerd[1495]: time="2025-01-29T11:32:49.520329796Z" level=info msg="Forcibly stopping sandbox \"c0b5ead6c962ad21415289fbc7fdd4d4dbcb723931e95b4fc2ac10dc62090f23\"" Jan 29 11:32:49.520478 containerd[1495]: time="2025-01-29T11:32:49.520433656Z" level=info msg="TearDown network for sandbox \"c0b5ead6c962ad21415289fbc7fdd4d4dbcb723931e95b4fc2ac10dc62090f23\" successfully" Jan 29 11:32:49.525988 containerd[1495]: time="2025-01-29T11:32:49.525920366Z" level=warning msg="Failed to get podSandbox status for container event for sandboxID \"c0b5ead6c962ad21415289fbc7fdd4d4dbcb723931e95b4fc2ac10dc62090f23\": an error occurred when try to find sandbox: not found. Sending the event with nil podSandboxStatus." 
Jan 29 11:32:49.526119 containerd[1495]: time="2025-01-29T11:32:49.526008215Z" level=info msg="RemovePodSandbox \"c0b5ead6c962ad21415289fbc7fdd4d4dbcb723931e95b4fc2ac10dc62090f23\" returns successfully" Jan 29 11:32:49.527410 containerd[1495]: time="2025-01-29T11:32:49.527228129Z" level=info msg="StopPodSandbox for \"a72ec10a20a86b9cbf5f8208218ea1a41fc5aef847107a9fb000132954894a40\"" Jan 29 11:32:49.527410 containerd[1495]: time="2025-01-29T11:32:49.527342688Z" level=info msg="TearDown network for sandbox \"a72ec10a20a86b9cbf5f8208218ea1a41fc5aef847107a9fb000132954894a40\" successfully" Jan 29 11:32:49.527410 containerd[1495]: time="2025-01-29T11:32:49.527352688Z" level=info msg="StopPodSandbox for \"a72ec10a20a86b9cbf5f8208218ea1a41fc5aef847107a9fb000132954894a40\" returns successfully" Jan 29 11:32:49.528600 containerd[1495]: time="2025-01-29T11:32:49.527581688Z" level=info msg="RemovePodSandbox for \"a72ec10a20a86b9cbf5f8208218ea1a41fc5aef847107a9fb000132954894a40\"" Jan 29 11:32:49.528600 containerd[1495]: time="2025-01-29T11:32:49.527598811Z" level=info msg="Forcibly stopping sandbox \"a72ec10a20a86b9cbf5f8208218ea1a41fc5aef847107a9fb000132954894a40\"" Jan 29 11:32:49.528600 containerd[1495]: time="2025-01-29T11:32:49.527654097Z" level=info msg="TearDown network for sandbox \"a72ec10a20a86b9cbf5f8208218ea1a41fc5aef847107a9fb000132954894a40\" successfully" Jan 29 11:32:49.533339 containerd[1495]: time="2025-01-29T11:32:49.533210491Z" level=warning msg="Failed to get podSandbox status for container event for sandboxID \"a72ec10a20a86b9cbf5f8208218ea1a41fc5aef847107a9fb000132954894a40\": an error occurred when try to find sandbox: not found. Sending the event with nil podSandboxStatus." 
Jan 29 11:32:49.533951 containerd[1495]: time="2025-01-29T11:32:49.533905366Z" level=info msg="RemovePodSandbox \"a72ec10a20a86b9cbf5f8208218ea1a41fc5aef847107a9fb000132954894a40\" returns successfully"
Jan 29 11:32:49.535389 containerd[1495]: time="2025-01-29T11:32:49.534564943Z" level=info msg="StopPodSandbox for \"4e1e197df2d2535e387f27fb3ad1d9788046231aaa3dcab8b067d5acba99c1c6\""
Jan 29 11:32:49.535389 containerd[1495]: time="2025-01-29T11:32:49.534732375Z" level=info msg="TearDown network for sandbox \"4e1e197df2d2535e387f27fb3ad1d9788046231aaa3dcab8b067d5acba99c1c6\" successfully"
Jan 29 11:32:49.535389 containerd[1495]: time="2025-01-29T11:32:49.534745770Z" level=info msg="StopPodSandbox for \"4e1e197df2d2535e387f27fb3ad1d9788046231aaa3dcab8b067d5acba99c1c6\" returns successfully"
Jan 29 11:32:49.535389 containerd[1495]: time="2025-01-29T11:32:49.535336745Z" level=info msg="RemovePodSandbox for \"4e1e197df2d2535e387f27fb3ad1d9788046231aaa3dcab8b067d5acba99c1c6\""
Jan 29 11:32:49.535389 containerd[1495]: time="2025-01-29T11:32:49.535358607Z" level=info msg="Forcibly stopping sandbox \"4e1e197df2d2535e387f27fb3ad1d9788046231aaa3dcab8b067d5acba99c1c6\""
Jan 29 11:32:49.535592 containerd[1495]: time="2025-01-29T11:32:49.535462247Z" level=info msg="TearDown network for sandbox \"4e1e197df2d2535e387f27fb3ad1d9788046231aaa3dcab8b067d5acba99c1c6\" successfully"
Jan 29 11:32:49.542436 containerd[1495]: time="2025-01-29T11:32:49.540541885Z" level=warning msg="Failed to get podSandbox status for container event for sandboxID \"4e1e197df2d2535e387f27fb3ad1d9788046231aaa3dcab8b067d5acba99c1c6\": an error occurred when try to find sandbox: not found. Sending the event with nil podSandboxStatus."
Jan 29 11:32:49.542436 containerd[1495]: time="2025-01-29T11:32:49.540631788Z" level=info msg="RemovePodSandbox \"4e1e197df2d2535e387f27fb3ad1d9788046231aaa3dcab8b067d5acba99c1c6\" returns successfully"
Jan 29 11:32:49.542436 containerd[1495]: time="2025-01-29T11:32:49.541172056Z" level=info msg="StopPodSandbox for \"8a448304f72df29d9b001f735dc09f27534467dd06569f9ad7d03d0886fca03f\""
Jan 29 11:32:49.542436 containerd[1495]: time="2025-01-29T11:32:49.541316343Z" level=info msg="TearDown network for sandbox \"8a448304f72df29d9b001f735dc09f27534467dd06569f9ad7d03d0886fca03f\" successfully"
Jan 29 11:32:49.542436 containerd[1495]: time="2025-01-29T11:32:49.541329118Z" level=info msg="StopPodSandbox for \"8a448304f72df29d9b001f735dc09f27534467dd06569f9ad7d03d0886fca03f\" returns successfully"
Jan 29 11:32:49.542436 containerd[1495]: time="2025-01-29T11:32:49.541725279Z" level=info msg="RemovePodSandbox for \"8a448304f72df29d9b001f735dc09f27534467dd06569f9ad7d03d0886fca03f\""
Jan 29 11:32:49.542436 containerd[1495]: time="2025-01-29T11:32:49.541769283Z" level=info msg="Forcibly stopping sandbox \"8a448304f72df29d9b001f735dc09f27534467dd06569f9ad7d03d0886fca03f\""
Jan 29 11:32:49.542436 containerd[1495]: time="2025-01-29T11:32:49.541893452Z" level=info msg="TearDown network for sandbox \"8a448304f72df29d9b001f735dc09f27534467dd06569f9ad7d03d0886fca03f\" successfully"
Jan 29 11:32:49.548189 containerd[1495]: time="2025-01-29T11:32:49.548131595Z" level=warning msg="Failed to get podSandbox status for container event for sandboxID \"8a448304f72df29d9b001f735dc09f27534467dd06569f9ad7d03d0886fca03f\": an error occurred when try to find sandbox: not found. Sending the event with nil podSandboxStatus."
Jan 29 11:32:49.548261 containerd[1495]: time="2025-01-29T11:32:49.548215487Z" level=info msg="RemovePodSandbox \"8a448304f72df29d9b001f735dc09f27534467dd06569f9ad7d03d0886fca03f\" returns successfully"
Jan 29 11:32:51.108373 systemd[1]: Started sshd@15-10.0.0.69:22-10.0.0.1:39882.service - OpenSSH per-connection server daemon (10.0.0.1:39882).
Jan 29 11:32:51.159363 sshd[5764]: Accepted publickey for core from 10.0.0.1 port 39882 ssh2: RSA SHA256:vglZfOE0APgUpJbg1gfFAEfTfpzlM1a6LSSiwuQWd4A
Jan 29 11:32:51.160947 sshd-session[5764]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Jan 29 11:32:51.164781 systemd-logind[1471]: New session 16 of user core.
Jan 29 11:32:51.180674 systemd[1]: Started session-16.scope - Session 16 of User core.
Jan 29 11:32:51.302507 sshd[5766]: Connection closed by 10.0.0.1 port 39882
Jan 29 11:32:51.302928 sshd-session[5764]: pam_unix(sshd:session): session closed for user core
Jan 29 11:32:51.314212 systemd[1]: sshd@15-10.0.0.69:22-10.0.0.1:39882.service: Deactivated successfully.
Jan 29 11:32:51.316040 systemd[1]: session-16.scope: Deactivated successfully.
Jan 29 11:32:51.317509 systemd-logind[1471]: Session 16 logged out. Waiting for processes to exit.
Jan 29 11:32:51.322649 systemd[1]: Started sshd@16-10.0.0.69:22-10.0.0.1:39892.service - OpenSSH per-connection server daemon (10.0.0.1:39892).
Jan 29 11:32:51.323569 systemd-logind[1471]: Removed session 16.
Jan 29 11:32:51.365905 sshd[5779]: Accepted publickey for core from 10.0.0.1 port 39892 ssh2: RSA SHA256:vglZfOE0APgUpJbg1gfFAEfTfpzlM1a6LSSiwuQWd4A
Jan 29 11:32:51.367433 sshd-session[5779]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Jan 29 11:32:51.371381 systemd-logind[1471]: New session 17 of user core.
Jan 29 11:32:51.381551 systemd[1]: Started session-17.scope - Session 17 of User core.
Jan 29 11:32:51.621300 sshd[5781]: Connection closed by 10.0.0.1 port 39892
Jan 29 11:32:51.621631 sshd-session[5779]: pam_unix(sshd:session): session closed for user core
Jan 29 11:32:51.633056 systemd[1]: sshd@16-10.0.0.69:22-10.0.0.1:39892.service: Deactivated successfully.
Jan 29 11:32:51.634834 systemd[1]: session-17.scope: Deactivated successfully.
Jan 29 11:32:51.636174 systemd-logind[1471]: Session 17 logged out. Waiting for processes to exit.
Jan 29 11:32:51.645929 systemd[1]: Started sshd@17-10.0.0.69:22-10.0.0.1:39906.service - OpenSSH per-connection server daemon (10.0.0.1:39906).
Jan 29 11:32:51.647003 systemd-logind[1471]: Removed session 17.
Jan 29 11:32:51.686704 sshd[5791]: Accepted publickey for core from 10.0.0.1 port 39906 ssh2: RSA SHA256:vglZfOE0APgUpJbg1gfFAEfTfpzlM1a6LSSiwuQWd4A
Jan 29 11:32:51.688249 sshd-session[5791]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Jan 29 11:32:51.691977 systemd-logind[1471]: New session 18 of user core.
Jan 29 11:32:51.698522 systemd[1]: Started session-18.scope - Session 18 of User core.
Jan 29 11:32:53.188803 sshd[5793]: Connection closed by 10.0.0.1 port 39906
Jan 29 11:32:53.190168 sshd-session[5791]: pam_unix(sshd:session): session closed for user core
Jan 29 11:32:53.199622 systemd[1]: sshd@17-10.0.0.69:22-10.0.0.1:39906.service: Deactivated successfully.
Jan 29 11:32:53.202119 systemd[1]: session-18.scope: Deactivated successfully.
Jan 29 11:32:53.202884 systemd-logind[1471]: Session 18 logged out. Waiting for processes to exit.
Jan 29 11:32:53.205020 systemd-logind[1471]: Removed session 18.
Jan 29 11:32:53.215175 systemd[1]: Started sshd@18-10.0.0.69:22-10.0.0.1:39908.service - OpenSSH per-connection server daemon (10.0.0.1:39908).
Jan 29 11:32:53.269950 sshd[5813]: Accepted publickey for core from 10.0.0.1 port 39908 ssh2: RSA SHA256:vglZfOE0APgUpJbg1gfFAEfTfpzlM1a6LSSiwuQWd4A
Jan 29 11:32:53.271623 sshd-session[5813]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Jan 29 11:32:53.283369 systemd-logind[1471]: New session 19 of user core.
Jan 29 11:32:53.292669 systemd[1]: Started session-19.scope - Session 19 of User core.
Jan 29 11:32:53.511221 sshd[5815]: Connection closed by 10.0.0.1 port 39908
Jan 29 11:32:53.511679 sshd-session[5813]: pam_unix(sshd:session): session closed for user core
Jan 29 11:32:53.521756 systemd[1]: sshd@18-10.0.0.69:22-10.0.0.1:39908.service: Deactivated successfully.
Jan 29 11:32:53.523894 systemd[1]: session-19.scope: Deactivated successfully.
Jan 29 11:32:53.525622 systemd-logind[1471]: Session 19 logged out. Waiting for processes to exit.
Jan 29 11:32:53.535890 systemd[1]: Started sshd@19-10.0.0.69:22-10.0.0.1:39918.service - OpenSSH per-connection server daemon (10.0.0.1:39918).
Jan 29 11:32:53.536900 systemd-logind[1471]: Removed session 19.
Jan 29 11:32:53.576105 sshd[5825]: Accepted publickey for core from 10.0.0.1 port 39918 ssh2: RSA SHA256:vglZfOE0APgUpJbg1gfFAEfTfpzlM1a6LSSiwuQWd4A
Jan 29 11:32:53.577852 sshd-session[5825]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Jan 29 11:32:53.581949 systemd-logind[1471]: New session 20 of user core.
Jan 29 11:32:53.589586 systemd[1]: Started session-20.scope - Session 20 of User core.
Jan 29 11:32:53.705816 sshd[5827]: Connection closed by 10.0.0.1 port 39918
Jan 29 11:32:53.706170 sshd-session[5825]: pam_unix(sshd:session): session closed for user core
Jan 29 11:32:53.709880 systemd[1]: sshd@19-10.0.0.69:22-10.0.0.1:39918.service: Deactivated successfully.
Jan 29 11:32:53.712023 systemd[1]: session-20.scope: Deactivated successfully.
Jan 29 11:32:53.712669 systemd-logind[1471]: Session 20 logged out. Waiting for processes to exit.
Jan 29 11:32:53.713566 systemd-logind[1471]: Removed session 20.
Jan 29 11:32:58.718513 systemd[1]: Started sshd@20-10.0.0.69:22-10.0.0.1:57126.service - OpenSSH per-connection server daemon (10.0.0.1:57126).
Jan 29 11:32:58.764669 sshd[5866]: Accepted publickey for core from 10.0.0.1 port 57126 ssh2: RSA SHA256:vglZfOE0APgUpJbg1gfFAEfTfpzlM1a6LSSiwuQWd4A
Jan 29 11:32:58.766821 sshd-session[5866]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Jan 29 11:32:58.771331 systemd-logind[1471]: New session 21 of user core.
Jan 29 11:32:58.779582 systemd[1]: Started session-21.scope - Session 21 of User core.
Jan 29 11:32:58.893112 sshd[5868]: Connection closed by 10.0.0.1 port 57126
Jan 29 11:32:58.893553 sshd-session[5866]: pam_unix(sshd:session): session closed for user core
Jan 29 11:32:58.898225 systemd[1]: sshd@20-10.0.0.69:22-10.0.0.1:57126.service: Deactivated successfully.
Jan 29 11:32:58.900302 systemd[1]: session-21.scope: Deactivated successfully.
Jan 29 11:32:58.900982 systemd-logind[1471]: Session 21 logged out. Waiting for processes to exit.
Jan 29 11:32:58.901985 systemd-logind[1471]: Removed session 21.
Jan 29 11:33:03.905279 systemd[1]: Started sshd@21-10.0.0.69:22-10.0.0.1:57128.service - OpenSSH per-connection server daemon (10.0.0.1:57128).
Jan 29 11:33:03.948845 sshd[5886]: Accepted publickey for core from 10.0.0.1 port 57128 ssh2: RSA SHA256:vglZfOE0APgUpJbg1gfFAEfTfpzlM1a6LSSiwuQWd4A
Jan 29 11:33:03.950323 sshd-session[5886]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Jan 29 11:33:03.954461 systemd-logind[1471]: New session 22 of user core.
Jan 29 11:33:03.966545 systemd[1]: Started session-22.scope - Session 22 of User core.
Jan 29 11:33:04.070523 sshd[5888]: Connection closed by 10.0.0.1 port 57128
Jan 29 11:33:04.070887 sshd-session[5886]: pam_unix(sshd:session): session closed for user core
Jan 29 11:33:04.074566 systemd[1]: sshd@21-10.0.0.69:22-10.0.0.1:57128.service: Deactivated successfully.
Jan 29 11:33:04.076623 systemd[1]: session-22.scope: Deactivated successfully.
Jan 29 11:33:04.077245 systemd-logind[1471]: Session 22 logged out. Waiting for processes to exit.
Jan 29 11:33:04.079865 systemd-logind[1471]: Removed session 22.
Jan 29 11:33:04.214521 kubelet[2697]: E0129 11:33:04.214368 2697 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Jan 29 11:33:06.268100 kubelet[2697]: E0129 11:33:06.268051 2697 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Jan 29 11:33:09.085063 systemd[1]: Started sshd@22-10.0.0.69:22-10.0.0.1:40606.service - OpenSSH per-connection server daemon (10.0.0.1:40606).
Jan 29 11:33:09.132085 sshd[5926]: Accepted publickey for core from 10.0.0.1 port 40606 ssh2: RSA SHA256:vglZfOE0APgUpJbg1gfFAEfTfpzlM1a6LSSiwuQWd4A
Jan 29 11:33:09.133978 sshd-session[5926]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Jan 29 11:33:09.139371 systemd-logind[1471]: New session 23 of user core.
Jan 29 11:33:09.144514 systemd[1]: Started session-23.scope - Session 23 of User core.
Jan 29 11:33:09.231633 kubelet[2697]: I0129 11:33:09.231335 2697 prober_manager.go:312] "Failed to trigger a manual run" probe="Readiness"
Jan 29 11:33:09.285403 sshd[5928]: Connection closed by 10.0.0.1 port 40606
Jan 29 11:33:09.287126 sshd-session[5926]: pam_unix(sshd:session): session closed for user core
Jan 29 11:33:09.290684 systemd[1]: sshd@22-10.0.0.69:22-10.0.0.1:40606.service: Deactivated successfully.
Jan 29 11:33:09.293188 systemd[1]: session-23.scope: Deactivated successfully.
Jan 29 11:33:09.295137 systemd-logind[1471]: Session 23 logged out. Waiting for processes to exit.
Jan 29 11:33:09.296921 systemd-logind[1471]: Removed session 23.
Jan 29 11:33:13.268620 kubelet[2697]: E0129 11:33:13.268568 2697 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Jan 29 11:33:14.305559 systemd[1]: Started sshd@23-10.0.0.69:22-10.0.0.1:40616.service - OpenSSH per-connection server daemon (10.0.0.1:40616).
Jan 29 11:33:14.352880 sshd[5943]: Accepted publickey for core from 10.0.0.1 port 40616 ssh2: RSA SHA256:vglZfOE0APgUpJbg1gfFAEfTfpzlM1a6LSSiwuQWd4A
Jan 29 11:33:14.354537 sshd-session[5943]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Jan 29 11:33:14.358693 systemd-logind[1471]: New session 24 of user core.
Jan 29 11:33:14.370558 systemd[1]: Started session-24.scope - Session 24 of User core.
Jan 29 11:33:14.481918 sshd[5945]: Connection closed by 10.0.0.1 port 40616
Jan 29 11:33:14.482364 sshd-session[5943]: pam_unix(sshd:session): session closed for user core
Jan 29 11:33:14.485051 systemd[1]: sshd@23-10.0.0.69:22-10.0.0.1:40616.service: Deactivated successfully.
Jan 29 11:33:14.487262 systemd[1]: session-24.scope: Deactivated successfully.
Jan 29 11:33:14.489374 systemd-logind[1471]: Session 24 logged out. Waiting for processes to exit.
Jan 29 11:33:14.490438 systemd-logind[1471]: Removed session 24.