Dec 13 13:32:53.886808 kernel: Linux version 6.6.65-flatcar (build@pony-truck.infra.kinvolk.io) (x86_64-cros-linux-gnu-gcc (Gentoo Hardened 14.2.1_p20241116 p3) 14.2.1 20241116, GNU ld (Gentoo 2.42 p6) 2.42.0) #1 SMP PREEMPT_DYNAMIC Fri Dec 13 11:52:04 -00 2024
Dec 13 13:32:53.886830 kernel: Command line: BOOT_IMAGE=/flatcar/vmlinuz-a mount.usr=/dev/mapper/usr verity.usr=PARTUUID=7130c94a-213a-4e5a-8e26-6cce9662f132 rootflags=rw mount.usrflags=ro consoleblank=0 root=LABEL=ROOT console=ttyS0,115200 flatcar.first_boot=detected verity.usrhash=7e85177266c631d417c820ba09a3204c451316d6fcf9e4e21017322aee9df3f4
Dec 13 13:32:53.886841 kernel: BIOS-provided physical RAM map:
Dec 13 13:32:53.886848 kernel: BIOS-e820: [mem 0x0000000000000000-0x000000000009fbff] usable
Dec 13 13:32:53.886855 kernel: BIOS-e820: [mem 0x000000000009fc00-0x000000000009ffff] reserved
Dec 13 13:32:53.886861 kernel: BIOS-e820: [mem 0x00000000000f0000-0x00000000000fffff] reserved
Dec 13 13:32:53.886869 kernel: BIOS-e820: [mem 0x0000000000100000-0x000000009cfdbfff] usable
Dec 13 13:32:53.886875 kernel: BIOS-e820: [mem 0x000000009cfdc000-0x000000009cffffff] reserved
Dec 13 13:32:53.886882 kernel: BIOS-e820: [mem 0x00000000b0000000-0x00000000bfffffff] reserved
Dec 13 13:32:53.886891 kernel: BIOS-e820: [mem 0x00000000fed1c000-0x00000000fed1ffff] reserved
Dec 13 13:32:53.886897 kernel: BIOS-e820: [mem 0x00000000feffc000-0x00000000feffffff] reserved
Dec 13 13:32:53.886904 kernel: BIOS-e820: [mem 0x00000000fffc0000-0x00000000ffffffff] reserved
Dec 13 13:32:53.886911 kernel: BIOS-e820: [mem 0x000000fd00000000-0x000000ffffffffff] reserved
Dec 13 13:32:53.886918 kernel: NX (Execute Disable) protection: active
Dec 13 13:32:53.886926 kernel: APIC: Static calls initialized
Dec 13 13:32:53.886938 kernel: SMBIOS 2.8 present.
Dec 13 13:32:53.886946 kernel: DMI: QEMU Standard PC (Q35 + ICH9, 2009), BIOS 1.16.2-debian-1.16.2-1 04/01/2014
Dec 13 13:32:53.886955 kernel: Hypervisor detected: KVM
Dec 13 13:32:53.886963 kernel: kvm-clock: Using msrs 4b564d01 and 4b564d00
Dec 13 13:32:53.886970 kernel: kvm-clock: using sched offset of 2269636118 cycles
Dec 13 13:32:53.886978 kernel: clocksource: kvm-clock: mask: 0xffffffffffffffff max_cycles: 0x1cd42e4dffb, max_idle_ns: 881590591483 ns
Dec 13 13:32:53.886986 kernel: tsc: Detected 2794.748 MHz processor
Dec 13 13:32:53.886993 kernel: e820: update [mem 0x00000000-0x00000fff] usable ==> reserved
Dec 13 13:32:53.887001 kernel: e820: remove [mem 0x000a0000-0x000fffff] usable
Dec 13 13:32:53.887009 kernel: last_pfn = 0x9cfdc max_arch_pfn = 0x400000000
Dec 13 13:32:53.887019 kernel: MTRR map: 4 entries (3 fixed + 1 variable; max 19), built from 8 variable MTRRs
Dec 13 13:32:53.887026 kernel: x86/PAT: Configuration [0-7]: WB WC UC- UC WB WP UC- WT
Dec 13 13:32:53.887034 kernel: Using GB pages for direct mapping
Dec 13 13:32:53.887041 kernel: ACPI: Early table checksum verification disabled
Dec 13 13:32:53.887049 kernel: ACPI: RSDP 0x00000000000F59D0 000014 (v00 BOCHS )
Dec 13 13:32:53.887056 kernel: ACPI: RSDT 0x000000009CFE2408 000038 (v01 BOCHS BXPC 00000001 BXPC 00000001)
Dec 13 13:32:53.887064 kernel: ACPI: FACP 0x000000009CFE21E8 0000F4 (v03 BOCHS BXPC 00000001 BXPC 00000001)
Dec 13 13:32:53.887071 kernel: ACPI: DSDT 0x000000009CFE0040 0021A8 (v01 BOCHS BXPC 00000001 BXPC 00000001)
Dec 13 13:32:53.887079 kernel: ACPI: FACS 0x000000009CFE0000 000040
Dec 13 13:32:53.887088 kernel: ACPI: APIC 0x000000009CFE22DC 000090 (v01 BOCHS BXPC 00000001 BXPC 00000001)
Dec 13 13:32:53.887096 kernel: ACPI: HPET 0x000000009CFE236C 000038 (v01 BOCHS BXPC 00000001 BXPC 00000001)
Dec 13 13:32:53.887103 kernel: ACPI: MCFG 0x000000009CFE23A4 00003C (v01 BOCHS BXPC 00000001 BXPC 00000001)
Dec 13 13:32:53.887111 kernel: ACPI: WAET 0x000000009CFE23E0 000028 (v01 BOCHS BXPC 00000001 BXPC 00000001)
Dec 13 13:32:53.887119 kernel: ACPI: Reserving FACP table memory at [mem 0x9cfe21e8-0x9cfe22db]
Dec 13 13:32:53.887126 kernel: ACPI: Reserving DSDT table memory at [mem 0x9cfe0040-0x9cfe21e7]
Dec 13 13:32:53.887137 kernel: ACPI: Reserving FACS table memory at [mem 0x9cfe0000-0x9cfe003f]
Dec 13 13:32:53.887147 kernel: ACPI: Reserving APIC table memory at [mem 0x9cfe22dc-0x9cfe236b]
Dec 13 13:32:53.887155 kernel: ACPI: Reserving HPET table memory at [mem 0x9cfe236c-0x9cfe23a3]
Dec 13 13:32:53.887163 kernel: ACPI: Reserving MCFG table memory at [mem 0x9cfe23a4-0x9cfe23df]
Dec 13 13:32:53.887170 kernel: ACPI: Reserving WAET table memory at [mem 0x9cfe23e0-0x9cfe2407]
Dec 13 13:32:53.887178 kernel: No NUMA configuration found
Dec 13 13:32:53.887186 kernel: Faking a node at [mem 0x0000000000000000-0x000000009cfdbfff]
Dec 13 13:32:53.887193 kernel: NODE_DATA(0) allocated [mem 0x9cfd6000-0x9cfdbfff]
Dec 13 13:32:53.887203 kernel: Zone ranges:
Dec 13 13:32:53.887211 kernel: DMA [mem 0x0000000000001000-0x0000000000ffffff]
Dec 13 13:32:53.887219 kernel: DMA32 [mem 0x0000000001000000-0x000000009cfdbfff]
Dec 13 13:32:53.887226 kernel: Normal empty
Dec 13 13:32:53.887234 kernel: Movable zone start for each node
Dec 13 13:32:53.887242 kernel: Early memory node ranges
Dec 13 13:32:53.887249 kernel: node 0: [mem 0x0000000000001000-0x000000000009efff]
Dec 13 13:32:53.887257 kernel: node 0: [mem 0x0000000000100000-0x000000009cfdbfff]
Dec 13 13:32:53.887265 kernel: Initmem setup node 0 [mem 0x0000000000001000-0x000000009cfdbfff]
Dec 13 13:32:53.887275 kernel: On node 0, zone DMA: 1 pages in unavailable ranges
Dec 13 13:32:53.887282 kernel: On node 0, zone DMA: 97 pages in unavailable ranges
Dec 13 13:32:53.887290 kernel: On node 0, zone DMA32: 12324 pages in unavailable ranges
Dec 13 13:32:53.887298 kernel: ACPI: PM-Timer IO Port: 0x608
Dec 13 13:32:53.887306 kernel: ACPI: LAPIC_NMI (acpi_id[0xff] dfl dfl lint[0x1])
Dec 13 13:32:53.887313 kernel: IOAPIC[0]: apic_id 0, version 17, address 0xfec00000, GSI 0-23
Dec 13 13:32:53.887321 kernel: ACPI: INT_SRC_OVR (bus 0 bus_irq 0 global_irq 2 dfl dfl)
Dec 13 13:32:53.887329 kernel: ACPI: INT_SRC_OVR (bus 0 bus_irq 5 global_irq 5 high level)
Dec 13 13:32:53.887337 kernel: ACPI: INT_SRC_OVR (bus 0 bus_irq 9 global_irq 9 high level)
Dec 13 13:32:53.887344 kernel: ACPI: INT_SRC_OVR (bus 0 bus_irq 10 global_irq 10 high level)
Dec 13 13:32:53.887354 kernel: ACPI: INT_SRC_OVR (bus 0 bus_irq 11 global_irq 11 high level)
Dec 13 13:32:53.887362 kernel: ACPI: Using ACPI (MADT) for SMP configuration information
Dec 13 13:32:53.887370 kernel: ACPI: HPET id: 0x8086a201 base: 0xfed00000
Dec 13 13:32:53.887377 kernel: TSC deadline timer available
Dec 13 13:32:53.887385 kernel: smpboot: Allowing 4 CPUs, 0 hotplug CPUs
Dec 13 13:32:53.887393 kernel: kvm-guest: APIC: eoi() replaced with kvm_guest_apic_eoi_write()
Dec 13 13:32:53.887400 kernel: kvm-guest: KVM setup pv remote TLB flush
Dec 13 13:32:53.887408 kernel: kvm-guest: setup PV sched yield
Dec 13 13:32:53.887416 kernel: [mem 0xc0000000-0xfed1bfff] available for PCI devices
Dec 13 13:32:53.887426 kernel: Booting paravirtualized kernel on KVM
Dec 13 13:32:53.887434 kernel: clocksource: refined-jiffies: mask: 0xffffffff max_cycles: 0xffffffff, max_idle_ns: 1910969940391419 ns
Dec 13 13:32:53.887450 kernel: setup_percpu: NR_CPUS:512 nr_cpumask_bits:4 nr_cpu_ids:4 nr_node_ids:1
Dec 13 13:32:53.887458 kernel: percpu: Embedded 58 pages/cpu s197032 r8192 d32344 u524288
Dec 13 13:32:53.887466 kernel: pcpu-alloc: s197032 r8192 d32344 u524288 alloc=1*2097152
Dec 13 13:32:53.887473 kernel: pcpu-alloc: [0] 0 1 2 3
Dec 13 13:32:53.887481 kernel: kvm-guest: PV spinlocks enabled
Dec 13 13:32:53.887488 kernel: PV qspinlock hash table entries: 256 (order: 0, 4096 bytes, linear)
Dec 13 13:32:53.887497 kernel: Kernel command line: rootflags=rw mount.usrflags=ro BOOT_IMAGE=/flatcar/vmlinuz-a mount.usr=/dev/mapper/usr verity.usr=PARTUUID=7130c94a-213a-4e5a-8e26-6cce9662f132 rootflags=rw mount.usrflags=ro consoleblank=0 root=LABEL=ROOT console=ttyS0,115200 flatcar.first_boot=detected verity.usrhash=7e85177266c631d417c820ba09a3204c451316d6fcf9e4e21017322aee9df3f4
Dec 13 13:32:53.887508 kernel: Unknown kernel command line parameters "BOOT_IMAGE=/flatcar/vmlinuz-a", will be passed to user space.
Dec 13 13:32:53.887515 kernel: random: crng init done
Dec 13 13:32:53.887523 kernel: Dentry cache hash table entries: 524288 (order: 10, 4194304 bytes, linear)
Dec 13 13:32:53.887532 kernel: Inode-cache hash table entries: 262144 (order: 9, 2097152 bytes, linear)
Dec 13 13:32:53.887539 kernel: Fallback order for Node 0: 0
Dec 13 13:32:53.887547 kernel: Built 1 zonelists, mobility grouping on. Total pages: 632732
Dec 13 13:32:53.887555 kernel: Policy zone: DMA32
Dec 13 13:32:53.887563 kernel: mem auto-init: stack:off, heap alloc:off, heap free:off
Dec 13 13:32:53.887573 kernel: Memory: 2432548K/2571752K available (14336K kernel code, 2299K rwdata, 22800K rodata, 43328K init, 1748K bss, 138948K reserved, 0K cma-reserved)
Dec 13 13:32:53.887581 kernel: SLUB: HWalign=64, Order=0-3, MinObjects=0, CPUs=4, Nodes=1
Dec 13 13:32:53.887589 kernel: ftrace: allocating 37874 entries in 148 pages
Dec 13 13:32:53.887597 kernel: ftrace: allocated 148 pages with 3 groups
Dec 13 13:32:53.887605 kernel: Dynamic Preempt: voluntary
Dec 13 13:32:53.887612 kernel: rcu: Preemptible hierarchical RCU implementation.
Dec 13 13:32:53.887621 kernel: rcu: RCU event tracing is enabled.
Dec 13 13:32:53.887629 kernel: rcu: RCU restricting CPUs from NR_CPUS=512 to nr_cpu_ids=4.
Dec 13 13:32:53.887637 kernel: Trampoline variant of Tasks RCU enabled.
Dec 13 13:32:53.887647 kernel: Rude variant of Tasks RCU enabled.
Dec 13 13:32:53.887655 kernel: Tracing variant of Tasks RCU enabled.
Dec 13 13:32:53.887662 kernel: rcu: RCU calculated value of scheduler-enlistment delay is 100 jiffies.
Dec 13 13:32:53.887670 kernel: rcu: Adjusting geometry for rcu_fanout_leaf=16, nr_cpu_ids=4
Dec 13 13:32:53.887678 kernel: NR_IRQS: 33024, nr_irqs: 456, preallocated irqs: 16
Dec 13 13:32:53.887686 kernel: rcu: srcu_init: Setting srcu_struct sizes based on contention.
Dec 13 13:32:53.887801 kernel: Console: colour VGA+ 80x25
Dec 13 13:32:53.887809 kernel: printk: console [ttyS0] enabled
Dec 13 13:32:53.887817 kernel: ACPI: Core revision 20230628
Dec 13 13:32:53.887828 kernel: clocksource: hpet: mask: 0xffffffff max_cycles: 0xffffffff, max_idle_ns: 19112604467 ns
Dec 13 13:32:53.887835 kernel: APIC: Switch to symmetric I/O mode setup
Dec 13 13:32:53.887843 kernel: x2apic enabled
Dec 13 13:32:53.887851 kernel: APIC: Switched APIC routing to: physical x2apic
Dec 13 13:32:53.887859 kernel: kvm-guest: APIC: send_IPI_mask() replaced with kvm_send_ipi_mask()
Dec 13 13:32:53.887867 kernel: kvm-guest: APIC: send_IPI_mask_allbutself() replaced with kvm_send_ipi_mask_allbutself()
Dec 13 13:32:53.887874 kernel: kvm-guest: setup PV IPIs
Dec 13 13:32:53.887891 kernel: ..TIMER: vector=0x30 apic1=0 pin1=2 apic2=-1 pin2=-1
Dec 13 13:32:53.887899 kernel: tsc: Marking TSC unstable due to TSCs unsynchronized
Dec 13 13:32:53.887908 kernel: Calibrating delay loop (skipped) preset value.. 5589.49 BogoMIPS (lpj=2794748)
Dec 13 13:32:53.887916 kernel: x86/cpu: User Mode Instruction Prevention (UMIP) activated
Dec 13 13:32:53.887924 kernel: Last level iTLB entries: 4KB 512, 2MB 255, 4MB 127
Dec 13 13:32:53.887934 kernel: Last level dTLB entries: 4KB 512, 2MB 255, 4MB 127, 1GB 0
Dec 13 13:32:53.887942 kernel: Spectre V1 : Mitigation: usercopy/swapgs barriers and __user pointer sanitization
Dec 13 13:32:53.887950 kernel: Spectre V2 : Mitigation: Retpolines
Dec 13 13:32:53.887959 kernel: Spectre V2 : Spectre v2 / SpectreRSB mitigation: Filling RSB on context switch
Dec 13 13:32:53.887967 kernel: Spectre V2 : Spectre v2 / SpectreRSB : Filling RSB on VMEXIT
Dec 13 13:32:53.887977 kernel: Spectre V2 : Enabling Speculation Barrier for firmware calls
Dec 13 13:32:53.887985 kernel: RETBleed: Mitigation: untrained return thunk
Dec 13 13:32:53.887994 kernel: Spectre V2 : mitigation: Enabling conditional Indirect Branch Prediction Barrier
Dec 13 13:32:53.888002 kernel: Speculative Store Bypass: Mitigation: Speculative Store Bypass disabled via prctl
Dec 13 13:32:53.888010 kernel: Speculative Return Stack Overflow: IBPB-extending microcode not applied!
Dec 13 13:32:53.888019 kernel: Speculative Return Stack Overflow: WARNING: See https://kernel.org/doc/html/latest/admin-guide/hw-vuln/srso.html for mitigation options.
Dec 13 13:32:53.888027 kernel: Speculative Return Stack Overflow: Vulnerable: Safe RET, no microcode
Dec 13 13:32:53.888035 kernel: x86/fpu: Supporting XSAVE feature 0x001: 'x87 floating point registers'
Dec 13 13:32:53.888046 kernel: x86/fpu: Supporting XSAVE feature 0x002: 'SSE registers'
Dec 13 13:32:53.888054 kernel: x86/fpu: Supporting XSAVE feature 0x004: 'AVX registers'
Dec 13 13:32:53.888062 kernel: x86/fpu: xstate_offset[2]: 576, xstate_sizes[2]: 256
Dec 13 13:32:53.888070 kernel: x86/fpu: Enabled xstate features 0x7, context size is 832 bytes, using 'compacted' format.
Dec 13 13:32:53.888078 kernel: Freeing SMP alternatives memory: 32K
Dec 13 13:32:53.888086 kernel: pid_max: default: 32768 minimum: 301
Dec 13 13:32:53.888094 kernel: LSM: initializing lsm=lockdown,capability,landlock,selinux,integrity
Dec 13 13:32:53.888102 kernel: landlock: Up and running.
Dec 13 13:32:53.888110 kernel: SELinux: Initializing.
Dec 13 13:32:53.888121 kernel: Mount-cache hash table entries: 8192 (order: 4, 65536 bytes, linear)
Dec 13 13:32:53.888129 kernel: Mountpoint-cache hash table entries: 8192 (order: 4, 65536 bytes, linear)
Dec 13 13:32:53.888137 kernel: smpboot: CPU0: AMD EPYC 7402P 24-Core Processor (family: 0x17, model: 0x31, stepping: 0x0)
Dec 13 13:32:53.888145 kernel: RCU Tasks: Setting shift to 2 and lim to 1 rcu_task_cb_adjust=1 rcu_task_cpu_ids=4.
Dec 13 13:32:53.888153 kernel: RCU Tasks Rude: Setting shift to 2 and lim to 1 rcu_task_cb_adjust=1 rcu_task_cpu_ids=4.
Dec 13 13:32:53.888162 kernel: RCU Tasks Trace: Setting shift to 2 and lim to 1 rcu_task_cb_adjust=1 rcu_task_cpu_ids=4.
Dec 13 13:32:53.888170 kernel: Performance Events: Fam17h+ core perfctr, AMD PMU driver.
Dec 13 13:32:53.888178 kernel: ... version: 0
Dec 13 13:32:53.888186 kernel: ... bit width: 48
Dec 13 13:32:53.888196 kernel: ... generic registers: 6
Dec 13 13:32:53.888204 kernel: ... value mask: 0000ffffffffffff
Dec 13 13:32:53.888212 kernel: ... max period: 00007fffffffffff
Dec 13 13:32:53.888220 kernel: ... fixed-purpose events: 0
Dec 13 13:32:53.888229 kernel: ... event mask: 000000000000003f
Dec 13 13:32:53.888237 kernel: signal: max sigframe size: 1776
Dec 13 13:32:53.888245 kernel: rcu: Hierarchical SRCU implementation.
Dec 13 13:32:53.888253 kernel: rcu: Max phase no-delay instances is 400.
Dec 13 13:32:53.888261 kernel: smp: Bringing up secondary CPUs ...
Dec 13 13:32:53.888271 kernel: smpboot: x86: Booting SMP configuration:
Dec 13 13:32:53.888279 kernel: .... node #0, CPUs: #1 #2 #3
Dec 13 13:32:53.888288 kernel: smp: Brought up 1 node, 4 CPUs
Dec 13 13:32:53.888296 kernel: smpboot: Max logical packages: 1
Dec 13 13:32:53.888304 kernel: smpboot: Total of 4 processors activated (22357.98 BogoMIPS)
Dec 13 13:32:53.888312 kernel: devtmpfs: initialized
Dec 13 13:32:53.888320 kernel: x86/mm: Memory block size: 128MB
Dec 13 13:32:53.888328 kernel: clocksource: jiffies: mask: 0xffffffff max_cycles: 0xffffffff, max_idle_ns: 1911260446275000 ns
Dec 13 13:32:53.888336 kernel: futex hash table entries: 1024 (order: 4, 65536 bytes, linear)
Dec 13 13:32:53.888347 kernel: pinctrl core: initialized pinctrl subsystem
Dec 13 13:32:53.888355 kernel: NET: Registered PF_NETLINK/PF_ROUTE protocol family
Dec 13 13:32:53.888363 kernel: audit: initializing netlink subsys (disabled)
Dec 13 13:32:53.888371 kernel: audit: type=2000 audit(1734096773.745:1): state=initialized audit_enabled=0 res=1
Dec 13 13:32:53.888379 kernel: thermal_sys: Registered thermal governor 'step_wise'
Dec 13 13:32:53.888387 kernel: thermal_sys: Registered thermal governor 'user_space'
Dec 13 13:32:53.888395 kernel: cpuidle: using governor menu
Dec 13 13:32:53.888403 kernel: acpiphp: ACPI Hot Plug PCI Controller Driver version: 0.5
Dec 13 13:32:53.888411 kernel: dca service started, version 1.12.1
Dec 13 13:32:53.888422 kernel: PCI: MMCONFIG for domain 0000 [bus 00-ff] at [mem 0xb0000000-0xbfffffff] (base 0xb0000000)
Dec 13 13:32:53.888430 kernel: PCI: MMCONFIG at [mem 0xb0000000-0xbfffffff] reserved as E820 entry
Dec 13 13:32:53.888444 kernel: PCI: Using configuration type 1 for base access
Dec 13 13:32:53.888452 kernel: kprobes: kprobe jump-optimization is enabled. All kprobes are optimized if possible.
Dec 13 13:32:53.888461 kernel: HugeTLB: registered 1.00 GiB page size, pre-allocated 0 pages
Dec 13 13:32:53.888469 kernel: HugeTLB: 16380 KiB vmemmap can be freed for a 1.00 GiB page
Dec 13 13:32:53.888477 kernel: HugeTLB: registered 2.00 MiB page size, pre-allocated 0 pages
Dec 13 13:32:53.888485 kernel: HugeTLB: 28 KiB vmemmap can be freed for a 2.00 MiB page
Dec 13 13:32:53.888493 kernel: ACPI: Added _OSI(Module Device)
Dec 13 13:32:53.888504 kernel: ACPI: Added _OSI(Processor Device)
Dec 13 13:32:53.888512 kernel: ACPI: Added _OSI(3.0 _SCP Extensions)
Dec 13 13:32:53.888520 kernel: ACPI: Added _OSI(Processor Aggregator Device)
Dec 13 13:32:53.888528 kernel: ACPI: 1 ACPI AML tables successfully acquired and loaded
Dec 13 13:32:53.888536 kernel: ACPI: _OSC evaluation for CPUs failed, trying _PDC
Dec 13 13:32:53.888544 kernel: ACPI: Interpreter enabled
Dec 13 13:32:53.888553 kernel: ACPI: PM: (supports S0 S3 S5)
Dec 13 13:32:53.888561 kernel: ACPI: Using IOAPIC for interrupt routing
Dec 13 13:32:53.888569 kernel: PCI: Using host bridge windows from ACPI; if necessary, use "pci=nocrs" and report a bug
Dec 13 13:32:53.888579 kernel: PCI: Using E820 reservations for host bridge windows
Dec 13 13:32:53.888587 kernel: ACPI: Enabled 2 GPEs in block 00 to 3F
Dec 13 13:32:53.888595 kernel: ACPI: PCI Root Bridge [PCI0] (domain 0000 [bus 00-ff])
Dec 13 13:32:53.888791 kernel: acpi PNP0A08:00: _OSC: OS supports [ExtendedConfig ASPM ClockPM Segments MSI HPX-Type3]
Dec 13 13:32:53.888921 kernel: acpi PNP0A08:00: _OSC: platform does not support [PCIeHotplug LTR]
Dec 13 13:32:53.889043 kernel: acpi PNP0A08:00: _OSC: OS now controls [PME AER PCIeCapability]
Dec 13 13:32:53.889054 kernel: PCI host bridge to bus 0000:00
Dec 13 13:32:53.889183 kernel: pci_bus 0000:00: root bus resource [io 0x0000-0x0cf7 window]
Dec 13 13:32:53.889296 kernel: pci_bus 0000:00: root bus resource [io 0x0d00-0xffff window]
Dec 13 13:32:53.889407 kernel: pci_bus 0000:00: root bus resource [mem 0x000a0000-0x000bffff window]
Dec 13 13:32:53.889531 kernel: pci_bus 0000:00: root bus resource [mem 0x9d000000-0xafffffff window]
Dec 13 13:32:53.889644 kernel: pci_bus 0000:00: root bus resource [mem 0xc0000000-0xfebfffff window]
Dec 13 13:32:53.889779 kernel: pci_bus 0000:00: root bus resource [mem 0x100000000-0x8ffffffff window]
Dec 13 13:32:53.889901 kernel: pci_bus 0000:00: root bus resource [bus 00-ff]
Dec 13 13:32:53.890058 kernel: pci 0000:00:00.0: [8086:29c0] type 00 class 0x060000
Dec 13 13:32:53.890191 kernel: pci 0000:00:01.0: [1234:1111] type 00 class 0x030000
Dec 13 13:32:53.890313 kernel: pci 0000:00:01.0: reg 0x10: [mem 0xfd000000-0xfdffffff pref]
Dec 13 13:32:53.890443 kernel: pci 0000:00:01.0: reg 0x18: [mem 0xfebd0000-0xfebd0fff]
Dec 13 13:32:53.890566 kernel: pci 0000:00:01.0: reg 0x30: [mem 0xfebc0000-0xfebcffff pref]
Dec 13 13:32:53.890686 kernel: pci 0000:00:01.0: Video device with shadowed ROM at [mem 0x000c0000-0x000dffff]
Dec 13 13:32:53.890840 kernel: pci 0000:00:02.0: [1af4:1005] type 00 class 0x00ff00
Dec 13 13:32:53.890969 kernel: pci 0000:00:02.0: reg 0x10: [io 0xc0c0-0xc0df]
Dec 13 13:32:53.891090 kernel: pci 0000:00:02.0: reg 0x14: [mem 0xfebd1000-0xfebd1fff]
Dec 13 13:32:53.891241 kernel: pci 0000:00:02.0: reg 0x20: [mem 0xfe000000-0xfe003fff 64bit pref]
Dec 13 13:32:53.891374 kernel: pci 0000:00:03.0: [1af4:1001] type 00 class 0x010000
Dec 13 13:32:53.891506 kernel: pci 0000:00:03.0: reg 0x10: [io 0xc000-0xc07f]
Dec 13 13:32:53.891627 kernel: pci 0000:00:03.0: reg 0x14: [mem 0xfebd2000-0xfebd2fff]
Dec 13 13:32:53.891769 kernel: pci 0000:00:03.0: reg 0x20: [mem 0xfe004000-0xfe007fff 64bit pref]
Dec 13 13:32:53.892007 kernel: pci 0000:00:04.0: [1af4:1000] type 00 class 0x020000
Dec 13 13:32:53.892179 kernel: pci 0000:00:04.0: reg 0x10: [io 0xc0e0-0xc0ff]
Dec 13 13:32:53.892303 kernel: pci 0000:00:04.0: reg 0x14: [mem 0xfebd3000-0xfebd3fff]
Dec 13 13:32:53.892425 kernel: pci 0000:00:04.0: reg 0x20: [mem 0xfe008000-0xfe00bfff 64bit pref]
Dec 13 13:32:53.892556 kernel: pci 0000:00:04.0: reg 0x30: [mem 0xfeb80000-0xfebbffff pref]
Dec 13 13:32:53.892704 kernel: pci 0000:00:1f.0: [8086:2918] type 00 class 0x060100
Dec 13 13:32:53.892857 kernel: pci 0000:00:1f.0: quirk: [io 0x0600-0x067f] claimed by ICH6 ACPI/GPIO/TCO
Dec 13 13:32:53.892994 kernel: pci 0000:00:1f.2: [8086:2922] type 00 class 0x010601
Dec 13 13:32:53.893114 kernel: pci 0000:00:1f.2: reg 0x20: [io 0xc100-0xc11f]
Dec 13 13:32:53.893322 kernel: pci 0000:00:1f.2: reg 0x24: [mem 0xfebd4000-0xfebd4fff]
Dec 13 13:32:53.893505 kernel: pci 0000:00:1f.3: [8086:2930] type 00 class 0x0c0500
Dec 13 13:32:53.893663 kernel: pci 0000:00:1f.3: reg 0x20: [io 0x0700-0x073f]
Dec 13 13:32:53.893675 kernel: ACPI: PCI: Interrupt link LNKA configured for IRQ 10
Dec 13 13:32:53.893688 kernel: ACPI: PCI: Interrupt link LNKB configured for IRQ 10
Dec 13 13:32:53.893713 kernel: ACPI: PCI: Interrupt link LNKC configured for IRQ 11
Dec 13 13:32:53.893722 kernel: ACPI: PCI: Interrupt link LNKD configured for IRQ 11
Dec 13 13:32:53.893730 kernel: ACPI: PCI: Interrupt link LNKE configured for IRQ 10
Dec 13 13:32:53.893738 kernel: ACPI: PCI: Interrupt link LNKF configured for IRQ 10
Dec 13 13:32:53.893746 kernel: ACPI: PCI: Interrupt link LNKG configured for IRQ 11
Dec 13 13:32:53.893755 kernel: ACPI: PCI: Interrupt link LNKH configured for IRQ 11
Dec 13 13:32:53.893763 kernel: ACPI: PCI: Interrupt link GSIA configured for IRQ 16
Dec 13 13:32:53.893771 kernel: ACPI: PCI: Interrupt link GSIB configured for IRQ 17
Dec 13 13:32:53.893782 kernel: ACPI: PCI: Interrupt link GSIC configured for IRQ 18
Dec 13 13:32:53.893790 kernel: ACPI: PCI: Interrupt link GSID configured for IRQ 19
Dec 13 13:32:53.893798 kernel: ACPI: PCI: Interrupt link GSIE configured for IRQ 20
Dec 13 13:32:53.893806 kernel: ACPI: PCI: Interrupt link GSIF configured for IRQ 21
Dec 13 13:32:53.893814 kernel: ACPI: PCI: Interrupt link GSIG configured for IRQ 22
Dec 13 13:32:53.893822 kernel: ACPI: PCI: Interrupt link GSIH configured for IRQ 23
Dec 13 13:32:53.893830 kernel: iommu: Default domain type: Translated
Dec 13 13:32:53.893838 kernel: iommu: DMA domain TLB invalidation policy: lazy mode
Dec 13 13:32:53.893847 kernel: PCI: Using ACPI for IRQ routing
Dec 13 13:32:53.893857 kernel: PCI: pci_cache_line_size set to 64 bytes
Dec 13 13:32:53.893865 kernel: e820: reserve RAM buffer [mem 0x0009fc00-0x0009ffff]
Dec 13 13:32:53.893873 kernel: e820: reserve RAM buffer [mem 0x9cfdc000-0x9fffffff]
Dec 13 13:32:53.894000 kernel: pci 0000:00:01.0: vgaarb: setting as boot VGA device
Dec 13 13:32:53.894121 kernel: pci 0000:00:01.0: vgaarb: bridge control possible
Dec 13 13:32:53.894240 kernel: pci 0000:00:01.0: vgaarb: VGA device added: decodes=io+mem,owns=io+mem,locks=none
Dec 13 13:32:53.894250 kernel: vgaarb: loaded
Dec 13 13:32:53.894259 kernel: hpet0: at MMIO 0xfed00000, IRQs 2, 8, 0
Dec 13 13:32:53.894270 kernel: hpet0: 3 comparators, 64-bit 100.000000 MHz counter
Dec 13 13:32:53.894279 kernel: clocksource: Switched to clocksource kvm-clock
Dec 13 13:32:53.894287 kernel: VFS: Disk quotas dquot_6.6.0
Dec 13 13:32:53.894295 kernel: VFS: Dquot-cache hash table entries: 512 (order 0, 4096 bytes)
Dec 13 13:32:53.894303 kernel: pnp: PnP ACPI init
Dec 13 13:32:53.894433 kernel: system 00:05: [mem 0xb0000000-0xbfffffff window] has been reserved
Dec 13 13:32:53.894455 kernel: pnp: PnP ACPI: found 6 devices
Dec 13 13:32:53.894463 kernel: clocksource: acpi_pm: mask: 0xffffff max_cycles: 0xffffff, max_idle_ns: 2085701024 ns
Dec 13 13:32:53.894475 kernel: NET: Registered PF_INET protocol family
Dec 13 13:32:53.894483 kernel: IP idents hash table entries: 65536 (order: 7, 524288 bytes, linear)
Dec 13 13:32:53.894491 kernel: tcp_listen_portaddr_hash hash table entries: 2048 (order: 3, 32768 bytes, linear)
Dec 13 13:32:53.894499 kernel: Table-perturb hash table entries: 65536 (order: 6, 262144 bytes, linear)
Dec 13 13:32:53.894508 kernel: TCP established hash table entries: 32768 (order: 6, 262144 bytes, linear)
Dec 13 13:32:53.894516 kernel: TCP bind hash table entries: 32768 (order: 8, 1048576 bytes, linear)
Dec 13 13:32:53.894524 kernel: TCP: Hash tables configured (established 32768 bind 32768)
Dec 13 13:32:53.894532 kernel: UDP hash table entries: 2048 (order: 4, 65536 bytes, linear)
Dec 13 13:32:53.894541 kernel: UDP-Lite hash table entries: 2048 (order: 4, 65536 bytes, linear)
Dec 13 13:32:53.894551 kernel: NET: Registered PF_UNIX/PF_LOCAL protocol family
Dec 13 13:32:53.894560 kernel: NET: Registered PF_XDP protocol family
Dec 13 13:32:53.894673 kernel: pci_bus 0000:00: resource 4 [io 0x0000-0x0cf7 window]
Dec 13 13:32:53.894907 kernel: pci_bus 0000:00: resource 5 [io 0x0d00-0xffff window]
Dec 13 13:32:53.895061 kernel: pci_bus 0000:00: resource 6 [mem 0x000a0000-0x000bffff window]
Dec 13 13:32:53.895172 kernel: pci_bus 0000:00: resource 7 [mem 0x9d000000-0xafffffff window]
Dec 13 13:32:53.895280 kernel: pci_bus 0000:00: resource 8 [mem 0xc0000000-0xfebfffff window]
Dec 13 13:32:53.895389 kernel: pci_bus 0000:00: resource 9 [mem 0x100000000-0x8ffffffff window]
Dec 13 13:32:53.895405 kernel: PCI: CLS 0 bytes, default 64
Dec 13 13:32:53.895413 kernel: Initialise system trusted keyrings
Dec 13 13:32:53.895422 kernel: workingset: timestamp_bits=39 max_order=20 bucket_order=0
Dec 13 13:32:53.895430 kernel: Key type asymmetric registered
Dec 13 13:32:53.895446 kernel: Asymmetric key parser 'x509' registered
Dec 13 13:32:53.895455 kernel: Block layer SCSI generic (bsg) driver version 0.4 loaded (major 251)
Dec 13 13:32:53.895463 kernel: io scheduler mq-deadline registered
Dec 13 13:32:53.895471 kernel: io scheduler kyber registered
Dec 13 13:32:53.895479 kernel: io scheduler bfq registered
Dec 13 13:32:53.895488 kernel: ioatdma: Intel(R) QuickData Technology Driver 5.00
Dec 13 13:32:53.895500 kernel: ACPI: \_SB_.GSIG: Enabled at IRQ 22
Dec 13 13:32:53.895508 kernel: ACPI: \_SB_.GSIH: Enabled at IRQ 23
Dec 13 13:32:53.895517 kernel: ACPI: \_SB_.GSIE: Enabled at IRQ 20
Dec 13 13:32:53.895525 kernel: Serial: 8250/16550 driver, 4 ports, IRQ sharing enabled
Dec 13 13:32:53.895533 kernel: 00:03: ttyS0 at I/O 0x3f8 (irq = 4, base_baud = 115200) is a 16550A
Dec 13 13:32:53.895541 kernel: i8042: PNP: PS/2 Controller [PNP0303:KBD,PNP0f13:MOU] at 0x60,0x64 irq 1,12
Dec 13 13:32:53.895550 kernel: serio: i8042 KBD port at 0x60,0x64 irq 1
Dec 13 13:32:53.895558 kernel: serio: i8042 AUX port at 0x60,0x64 irq 12
Dec 13 13:32:53.895685 kernel: rtc_cmos 00:04: RTC can wake from S4
Dec 13 13:32:53.895713 kernel: input: AT Translated Set 2 keyboard as /devices/platform/i8042/serio0/input/input0
Dec 13 13:32:53.895876 kernel: rtc_cmos 00:04: registered as rtc0
Dec 13 13:32:53.895991 kernel: rtc_cmos 00:04: setting system clock to 2024-12-13T13:32:53 UTC (1734096773)
Dec 13 13:32:53.896103 kernel: rtc_cmos 00:04: alarms up to one day, y3k, 242 bytes nvram, hpet irqs
Dec 13 13:32:53.896114 kernel: amd_pstate: the _CPC object is not present in SBIOS or ACPI disabled
Dec 13 13:32:53.896122 kernel: NET: Registered PF_INET6 protocol family
Dec 13 13:32:53.896130 kernel: Segment Routing with IPv6
Dec 13 13:32:53.896142 kernel: In-situ OAM (IOAM) with IPv6
Dec 13 13:32:53.896151 kernel: NET: Registered PF_PACKET protocol family
Dec 13 13:32:53.896159 kernel: Key type dns_resolver registered
Dec 13 13:32:53.896167 kernel: IPI shorthand broadcast: enabled
Dec 13 13:32:53.896175 kernel: sched_clock: Marking stable (548002871, 104238110)->(754275746, -102034765)
Dec 13 13:32:53.896183 kernel: registered taskstats version 1
Dec 13 13:32:53.896192 kernel: Loading compiled-in X.509 certificates
Dec 13 13:32:53.896200 kernel: Loaded X.509 cert 'Kinvolk GmbH: Module signing key for 6.6.65-flatcar: 87a680e70013684f1bdd04e047addefc714bd162'
Dec 13 13:32:53.896208 kernel: Key type .fscrypt registered
Dec 13 13:32:53.896218 kernel: Key type fscrypt-provisioning registered
Dec 13 13:32:53.896226 kernel: ima: No TPM chip found, activating TPM-bypass!
Dec 13 13:32:53.896235 kernel: ima: Allocated hash algorithm: sha1
Dec 13 13:32:53.896243 kernel: ima: No architecture policies found
Dec 13 13:32:53.896251 kernel: clk: Disabling unused clocks
Dec 13 13:32:53.896259 kernel: Freeing unused kernel image (initmem) memory: 43328K
Dec 13 13:32:53.896267 kernel: Write protecting the kernel read-only data: 38912k
Dec 13 13:32:53.896275 kernel: Freeing unused kernel image (rodata/data gap) memory: 1776K
Dec 13 13:32:53.896283 kernel: Run /init as init process
Dec 13 13:32:53.896293 kernel: with arguments:
Dec 13 13:32:53.896302 kernel: /init
Dec 13 13:32:53.896309 kernel: with environment:
Dec 13 13:32:53.896317 kernel: HOME=/
Dec 13 13:32:53.896325 kernel: TERM=linux
Dec 13 13:32:53.896333 kernel: BOOT_IMAGE=/flatcar/vmlinuz-a
Dec 13 13:32:53.896343 systemd[1]: systemd 255 running in system mode (+PAM +AUDIT +SELINUX -APPARMOR +IMA +SMACK +SECCOMP +GCRYPT -GNUTLS +OPENSSL -ACL +BLKID +CURL +ELFUTILS -FIDO2 +IDN2 -IDN +IPTC +KMOD +LIBCRYPTSETUP +LIBFDISK +PCRE2 -PWQUALITY -P11KIT -QRENCODE +TPM2 +BZIP2 +LZ4 +XZ +ZLIB +ZSTD -BPF_FRAMEWORK -XKBCOMMON +UTMP -SYSVINIT default-hierarchy=unified)
Dec 13 13:32:53.896354 systemd[1]: Detected virtualization kvm.
Dec 13 13:32:53.896365 systemd[1]: Detected architecture x86-64.
Dec 13 13:32:53.896373 systemd[1]: Running in initrd.
Dec 13 13:32:53.896382 systemd[1]: No hostname configured, using default hostname.
Dec 13 13:32:53.896390 systemd[1]: Hostname set to <localhost>.
Dec 13 13:32:53.896399 systemd[1]: Initializing machine ID from VM UUID.
Dec 13 13:32:53.896407 systemd[1]: Queued start job for default target initrd.target.
Dec 13 13:32:53.896416 systemd[1]: Started clevis-luks-askpass.path - Forward Password Requests to Clevis Directory Watch.
Dec 13 13:32:53.896425 systemd[1]: Started systemd-ask-password-console.path - Dispatch Password Requests to Console Directory Watch.
Dec 13 13:32:53.896447 systemd[1]: Expecting device dev-disk-by\x2dlabel-EFI\x2dSYSTEM.device - /dev/disk/by-label/EFI-SYSTEM...
Dec 13 13:32:53.896467 systemd[1]: Expecting device dev-disk-by\x2dlabel-OEM.device - /dev/disk/by-label/OEM...
Dec 13 13:32:53.896478 systemd[1]: Expecting device dev-disk-by\x2dlabel-ROOT.device - /dev/disk/by-label/ROOT...
Dec 13 13:32:53.896488 systemd[1]: Expecting device dev-disk-by\x2dpartlabel-USR\x2dA.device - /dev/disk/by-partlabel/USR-A...
Dec 13 13:32:53.896498 systemd[1]: Expecting device dev-disk-by\x2dpartuuid-7130c94a\x2d213a\x2d4e5a\x2d8e26\x2d6cce9662f132.device - /dev/disk/by-partuuid/7130c94a-213a-4e5a-8e26-6cce9662f132...
Dec 13 13:32:53.896509 systemd[1]: Expecting device dev-mapper-usr.device - /dev/mapper/usr...
Dec 13 13:32:53.896518 systemd[1]: Reached target cryptsetup-pre.target - Local Encrypted Volumes (Pre).
Dec 13 13:32:53.896527 systemd[1]: Reached target cryptsetup.target - Local Encrypted Volumes.
Dec 13 13:32:53.896537 systemd[1]: Reached target paths.target - Path Units.
Dec 13 13:32:53.896546 systemd[1]: Reached target slices.target - Slice Units.
Dec 13 13:32:53.896555 systemd[1]: Reached target swap.target - Swaps.
Dec 13 13:32:53.896564 systemd[1]: Reached target timers.target - Timer Units.
Dec 13 13:32:53.896573 systemd[1]: Listening on iscsid.socket - Open-iSCSI iscsid Socket.
Dec 13 13:32:53.896584 systemd[1]: Listening on iscsiuio.socket - Open-iSCSI iscsiuio Socket.
Dec 13 13:32:53.896594 systemd[1]: Listening on systemd-journald-dev-log.socket - Journal Socket (/dev/log).
Dec 13 13:32:53.896603 systemd[1]: Listening on systemd-journald.socket - Journal Socket.
Dec 13 13:32:53.896612 systemd[1]: Listening on systemd-networkd.socket - Network Service Netlink Socket.
Dec 13 13:32:53.896621 systemd[1]: Listening on systemd-udevd-control.socket - udev Control Socket.
Dec 13 13:32:53.896630 systemd[1]: Listening on systemd-udevd-kernel.socket - udev Kernel Socket.
Dec 13 13:32:53.896639 systemd[1]: Reached target sockets.target - Socket Units.
Dec 13 13:32:53.896648 systemd[1]: Starting ignition-setup-pre.service - Ignition env setup...
Dec 13 13:32:53.896657 systemd[1]: Starting kmod-static-nodes.service - Create List of Static Device Nodes...
Dec 13 13:32:53.896668 systemd[1]: Finished network-cleanup.service - Network Cleanup.
Dec 13 13:32:53.896677 systemd[1]: Starting systemd-fsck-usr.service...
Dec 13 13:32:53.896686 systemd[1]: Starting systemd-journald.service - Journal Service...
Dec 13 13:32:53.896707 systemd[1]: Starting systemd-modules-load.service - Load Kernel Modules...
Dec 13 13:32:53.896716 systemd[1]: Starting systemd-vconsole-setup.service - Virtual Console Setup...
Dec 13 13:32:53.896725 systemd[1]: Finished ignition-setup-pre.service - Ignition env setup.
Dec 13 13:32:53.896734 systemd[1]: Finished kmod-static-nodes.service - Create List of Static Device Nodes.
Dec 13 13:32:53.896743 systemd[1]: Finished systemd-fsck-usr.service.
Dec 13 13:32:53.896756 systemd[1]: Starting systemd-tmpfiles-setup-dev-early.service - Create Static Device Nodes in /dev gracefully...
Dec 13 13:32:53.896786 systemd-journald[193]: Collecting audit messages is disabled.
Dec 13 13:32:53.896808 systemd[1]: Finished systemd-tmpfiles-setup-dev-early.service - Create Static Device Nodes in /dev gracefully.
Dec 13 13:32:53.896817 systemd-journald[193]: Journal started
Dec 13 13:32:53.896844 systemd-journald[193]: Runtime Journal (/run/log/journal/af7ff1c7907a4d98aad02e2e490f758c) is 6.0M, max 48.3M, 42.3M free.
Dec 13 13:32:53.893661 systemd-modules-load[194]: Inserted module 'overlay'
Dec 13 13:32:53.927426 kernel: bridge: filtering via arp/ip/ip6tables is no longer available by default. Update your scripts to load br_netfilter if you need this.
Dec 13 13:32:53.927450 kernel: Bridge firewalling registered
Dec 13 13:32:53.920185 systemd-modules-load[194]: Inserted module 'br_netfilter'
Dec 13 13:32:53.930150 systemd[1]: Started systemd-journald.service - Journal Service.
Dec 13 13:32:53.930592 systemd[1]: Finished systemd-modules-load.service - Load Kernel Modules.
Dec 13 13:32:53.932927 systemd[1]: Finished systemd-vconsole-setup.service - Virtual Console Setup.
Dec 13 13:32:53.954823 systemd[1]: Starting dracut-cmdline-ask.service - dracut ask for additional cmdline parameters...
Dec 13 13:32:53.957944 systemd[1]: Starting systemd-sysctl.service - Apply Kernel Variables...
Dec 13 13:32:53.960602 systemd[1]: Starting systemd-tmpfiles-setup-dev.service - Create Static Device Nodes in /dev...
Dec 13 13:32:53.963188 systemd[1]: Starting systemd-tmpfiles-setup.service - Create System Files and Directories...
Dec 13 13:32:53.973797 systemd[1]: Finished systemd-tmpfiles-setup-dev.service - Create Static Device Nodes in /dev.
Dec 13 13:32:53.974671 systemd[1]: Finished systemd-sysctl.service - Apply Kernel Variables.
Dec 13 13:32:53.976039 systemd[1]: Finished dracut-cmdline-ask.service - dracut ask for additional cmdline parameters.
Dec 13 13:32:53.982852 systemd[1]: Starting dracut-cmdline.service - dracut cmdline hook...
Dec 13 13:32:53.984085 systemd[1]: Finished systemd-tmpfiles-setup.service - Create System Files and Directories.
Dec 13 13:32:53.987268 systemd[1]: Starting systemd-resolved.service - Network Name Resolution...
Dec 13 13:32:53.997447 dracut-cmdline[228]: dracut-dracut-053
Dec 13 13:32:54.001288 dracut-cmdline[228]: Using kernel command line parameters: rd.driver.pre=btrfs rootflags=rw mount.usrflags=ro BOOT_IMAGE=/flatcar/vmlinuz-a mount.usr=/dev/mapper/usr verity.usr=PARTUUID=7130c94a-213a-4e5a-8e26-6cce9662f132 rootflags=rw mount.usrflags=ro consoleblank=0 root=LABEL=ROOT console=ttyS0,115200 flatcar.first_boot=detected verity.usrhash=7e85177266c631d417c820ba09a3204c451316d6fcf9e4e21017322aee9df3f4
Dec 13 13:32:54.020523 systemd-resolved[233]: Positive Trust Anchors:
Dec 13 13:32:54.020542 systemd-resolved[233]: . IN DS 20326 8 2 e06d44b80b8f1d39a95c0b0d7c65d08458e880409bbc683457104237c7f8ec8d
Dec 13 13:32:54.020574 systemd-resolved[233]: Negative trust anchors: home.arpa 10.in-addr.arpa 16.172.in-addr.arpa 17.172.in-addr.arpa 18.172.in-addr.arpa 19.172.in-addr.arpa 20.172.in-addr.arpa 21.172.in-addr.arpa 22.172.in-addr.arpa 23.172.in-addr.arpa 24.172.in-addr.arpa 25.172.in-addr.arpa 26.172.in-addr.arpa 27.172.in-addr.arpa 28.172.in-addr.arpa 29.172.in-addr.arpa 30.172.in-addr.arpa 31.172.in-addr.arpa 170.0.0.192.in-addr.arpa 171.0.0.192.in-addr.arpa 168.192.in-addr.arpa d.f.ip6.arpa ipv4only.arpa resolver.arpa corp home internal intranet lan local private test
Dec 13 13:32:54.023042 systemd-resolved[233]: Defaulting to hostname 'linux'.
Dec 13 13:32:54.024077 systemd[1]: Started systemd-resolved.service - Network Name Resolution.
Dec 13 13:32:54.030739 systemd[1]: Reached target nss-lookup.target - Host and Network Name Lookups.
Dec 13 13:32:54.089726 kernel: SCSI subsystem initialized
Dec 13 13:32:54.098727 kernel: Loading iSCSI transport class v2.0-870.
Dec 13 13:32:54.132726 kernel: iscsi: registered transport (tcp)
Dec 13 13:32:54.153720 kernel: iscsi: registered transport (qla4xxx)
Dec 13 13:32:54.153759 kernel: QLogic iSCSI HBA Driver
Dec 13 13:32:54.201798 systemd[1]: Finished dracut-cmdline.service - dracut cmdline hook.
Dec 13 13:32:54.211864 systemd[1]: Starting dracut-pre-udev.service - dracut pre-udev hook...
Dec 13 13:32:54.251770 kernel: device-mapper: core: CONFIG_IMA_DISABLE_HTABLE is disabled. Duplicate IMA measurements will not be recorded in the IMA log.
Dec 13 13:32:54.251805 kernel: device-mapper: uevent: version 1.0.3
Dec 13 13:32:54.270037 kernel: device-mapper: ioctl: 4.48.0-ioctl (2023-03-01) initialised: dm-devel@redhat.com
Dec 13 13:32:54.321737 kernel: raid6: avx2x4 gen() 29833 MB/s
Dec 13 13:32:54.338730 kernel: raid6: avx2x2 gen() 30039 MB/s
Dec 13 13:32:54.355856 kernel: raid6: avx2x1 gen() 24985 MB/s
Dec 13 13:32:54.355928 kernel: raid6: using algorithm avx2x2 gen() 30039 MB/s
Dec 13 13:32:54.373835 kernel: raid6: .... xor() 19636 MB/s, rmw enabled
Dec 13 13:32:54.373875 kernel: raid6: using avx2x2 recovery algorithm
Dec 13 13:32:54.395724 kernel: xor: automatically using best checksumming function avx
Dec 13 13:32:54.540729 kernel: Btrfs loaded, zoned=no, fsverity=no
Dec 13 13:32:54.551709 systemd[1]: Finished dracut-pre-udev.service - dracut pre-udev hook.
Dec 13 13:32:54.564861 systemd[1]: Starting systemd-udevd.service - Rule-based Manager for Device Events and Files...
Dec 13 13:32:54.577587 systemd-udevd[415]: Using default interface naming scheme 'v255'.
Dec 13 13:32:54.582188 systemd[1]: Started systemd-udevd.service - Rule-based Manager for Device Events and Files.
Dec 13 13:32:54.583627 systemd[1]: Starting dracut-pre-trigger.service - dracut pre-trigger hook...
Dec 13 13:32:54.601360 dracut-pre-trigger[419]: rd.md=0: removing MD RAID activation
Dec 13 13:32:54.630966 systemd[1]: Finished dracut-pre-trigger.service - dracut pre-trigger hook.
Dec 13 13:32:54.642853 systemd[1]: Starting systemd-udev-trigger.service - Coldplug All udev Devices...
Dec 13 13:32:54.704375 systemd[1]: Finished systemd-udev-trigger.service - Coldplug All udev Devices.
Dec 13 13:32:54.722638 systemd[1]: Starting dracut-initqueue.service - dracut initqueue hook...
Dec 13 13:32:54.738870 systemd[1]: Finished dracut-initqueue.service - dracut initqueue hook.
Dec 13 13:32:54.742610 kernel: virtio_blk virtio1: 4/0/0 default/read/poll queues
Dec 13 13:32:54.766672 kernel: virtio_blk virtio1: [vda] 19775488 512-byte logical blocks (10.1 GB/9.43 GiB)
Dec 13 13:32:54.767238 kernel: cryptd: max_cpu_qlen set to 1000
Dec 13 13:32:54.767264 kernel: GPT:Primary header thinks Alt. header is not at the end of the disk.
Dec 13 13:32:54.767286 kernel: GPT:9289727 != 19775487
Dec 13 13:32:54.767309 kernel: GPT:Alternate GPT header not at the end of the disk.
Dec 13 13:32:54.767343 kernel: GPT:9289727 != 19775487
Dec 13 13:32:54.767361 kernel: GPT: Use GNU Parted to correct GPT errors.
Dec 13 13:32:54.767382 kernel: vda: vda1 vda2 vda3 vda4 vda6 vda7 vda9
Dec 13 13:32:54.767403 kernel: AVX2 version of gcm_enc/dec engaged.
Dec 13 13:32:54.767440 kernel: AES CTR mode by8 optimization enabled
Dec 13 13:32:54.747379 systemd[1]: Reached target remote-fs-pre.target - Preparation for Remote File Systems.
Dec 13 13:32:54.769849 kernel: libata version 3.00 loaded.
Dec 13 13:32:54.749556 systemd[1]: Reached target remote-cryptsetup.target - Remote Encrypted Volumes.
Dec 13 13:32:54.751226 systemd[1]: Reached target remote-fs.target - Remote File Systems.
Dec 13 13:32:54.773004 systemd[1]: Starting dracut-pre-mount.service - dracut pre-mount hook...
Dec 13 13:32:54.779661 systemd[1]: dracut-cmdline-ask.service: Deactivated successfully.
Dec 13 13:32:54.783953 kernel: ahci 0000:00:1f.2: version 3.0
Dec 13 13:32:54.802961 kernel: ACPI: \_SB_.GSIA: Enabled at IRQ 16
Dec 13 13:32:54.802985 kernel: ahci 0000:00:1f.2: AHCI 0001.0000 32 slots 6 ports 1.5 Gbps 0x3f impl SATA mode
Dec 13 13:32:54.804225 kernel: ahci 0000:00:1f.2: flags: 64bit ncq only
Dec 13 13:32:54.804402 kernel: scsi host0: ahci
Dec 13 13:32:54.804641 kernel: scsi host1: ahci
Dec 13 13:32:54.804829 kernel: scsi host2: ahci
Dec 13 13:32:54.804972 kernel: scsi host3: ahci
Dec 13 13:32:54.805158 kernel: scsi host4: ahci
Dec 13 13:32:54.807732 kernel: scsi host5: ahci
Dec 13 13:32:54.807897 kernel: ata1: SATA max UDMA/133 abar m4096@0xfebd4000 port 0xfebd4100 irq 34
Dec 13 13:32:54.807911 kernel: ata2: SATA max UDMA/133 abar m4096@0xfebd4000 port 0xfebd4180 irq 34
Dec 13 13:32:54.807922 kernel: ata3: SATA max UDMA/133 abar m4096@0xfebd4000 port 0xfebd4200 irq 34
Dec 13 13:32:54.807939 kernel: ata4: SATA max UDMA/133 abar m4096@0xfebd4000 port 0xfebd4280 irq 34
Dec 13 13:32:54.807950 kernel: ata5: SATA max UDMA/133 abar m4096@0xfebd4000 port 0xfebd4300 irq 34
Dec 13 13:32:54.807961 kernel: ata6: SATA max UDMA/133 abar m4096@0xfebd4000 port 0xfebd4380 irq 34
Dec 13 13:32:54.807972 kernel: BTRFS: device label OEM devid 1 transid 14 /dev/vda6 scanned by (udev-worker) (479)
Dec 13 13:32:54.781476 systemd[1]: Stopped dracut-cmdline-ask.service - dracut ask for additional cmdline parameters.
Dec 13 13:32:54.785921 systemd[1]: Stopping dracut-cmdline-ask.service - dracut ask for additional cmdline parameters...
Dec 13 13:32:54.788042 systemd[1]: systemd-vconsole-setup.service: Deactivated successfully.
Dec 13 13:32:54.788269 systemd[1]: Stopped systemd-vconsole-setup.service - Virtual Console Setup.
Dec 13 13:32:54.817846 kernel: BTRFS: device fsid 79c74448-2326-4c98-b9ff-09542b30ea52 devid 1 transid 36 /dev/vda3 scanned by (udev-worker) (473)
Dec 13 13:32:54.790866 systemd[1]: Stopping systemd-vconsole-setup.service - Virtual Console Setup...
Dec 13 13:32:54.805026 systemd[1]: Starting systemd-vconsole-setup.service - Virtual Console Setup...
Dec 13 13:32:54.811492 systemd[1]: Finished dracut-pre-mount.service - dracut pre-mount hook.
Dec 13 13:32:54.828256 systemd[1]: Found device dev-disk-by\x2dlabel-ROOT.device - /dev/disk/by-label/ROOT.
Dec 13 13:32:54.859282 systemd[1]: Found device dev-disk-by\x2dlabel-EFI\x2dSYSTEM.device - /dev/disk/by-label/EFI-SYSTEM.
Dec 13 13:32:54.860791 systemd[1]: Finished systemd-vconsole-setup.service - Virtual Console Setup.
Dec 13 13:32:54.878516 systemd[1]: Found device dev-disk-by\x2dlabel-OEM.device - /dev/disk/by-label/OEM.
Dec 13 13:32:54.882229 systemd[1]: Found device dev-disk-by\x2dpartlabel-USR\x2dA.device - /dev/disk/by-partlabel/USR-A.
Dec 13 13:32:54.882302 systemd[1]: Found device dev-disk-by\x2dpartuuid-7130c94a\x2d213a\x2d4e5a\x2d8e26\x2d6cce9662f132.device - /dev/disk/by-partuuid/7130c94a-213a-4e5a-8e26-6cce9662f132.
Dec 13 13:32:54.898870 systemd[1]: Starting disk-uuid.service - Generate new UUID for disk GPT if necessary...
Dec 13 13:32:54.900921 systemd[1]: Starting dracut-cmdline-ask.service - dracut ask for additional cmdline parameters...
Dec 13 13:32:54.912622 disk-uuid[563]: Primary Header is updated.
Dec 13 13:32:54.912622 disk-uuid[563]: Secondary Entries is updated.
Dec 13 13:32:54.912622 disk-uuid[563]: Secondary Header is updated.
Dec 13 13:32:54.916756 kernel: vda: vda1 vda2 vda3 vda4 vda6 vda7 vda9
Dec 13 13:32:54.917610 systemd[1]: Finished dracut-cmdline-ask.service - dracut ask for additional cmdline parameters.
Dec 13 13:32:54.921007 kernel: vda: vda1 vda2 vda3 vda4 vda6 vda7 vda9
Dec 13 13:32:55.109712 kernel: ata3: SATA link up 1.5 Gbps (SStatus 113 SControl 300)
Dec 13 13:32:55.109776 kernel: ata3.00: ATAPI: QEMU DVD-ROM, 2.5+, max UDMA/100
Dec 13 13:32:55.109796 kernel: ata3.00: applying bridge limits
Dec 13 13:32:55.109808 kernel: ata3.00: configured for UDMA/100
Dec 13 13:32:55.110707 kernel: ata1: SATA link down (SStatus 0 SControl 300)
Dec 13 13:32:55.110723 kernel: ata4: SATA link down (SStatus 0 SControl 300)
Dec 13 13:32:55.111714 kernel: ata6: SATA link down (SStatus 0 SControl 300)
Dec 13 13:32:55.112718 kernel: ata5: SATA link down (SStatus 0 SControl 300)
Dec 13 13:32:55.113711 kernel: ata2: SATA link down (SStatus 0 SControl 300)
Dec 13 13:32:55.113731 kernel: scsi 2:0:0:0: CD-ROM QEMU QEMU DVD-ROM 2.5+ PQ: 0 ANSI: 5
Dec 13 13:32:55.163235 kernel: sr 2:0:0:0: [sr0] scsi3-mmc drive: 4x/4x cd/rw xa/form2 tray
Dec 13 13:32:55.175302 kernel: cdrom: Uniform CD-ROM driver Revision: 3.20
Dec 13 13:32:55.175319 kernel: sr 2:0:0:0: Attached scsi CD-ROM sr0
Dec 13 13:32:55.922252 disk-uuid[568]: The operation has completed successfully.
Dec 13 13:32:55.923444 kernel: vda: vda1 vda2 vda3 vda4 vda6 vda7 vda9
Dec 13 13:32:55.950404 systemd[1]: disk-uuid.service: Deactivated successfully.
Dec 13 13:32:55.950521 systemd[1]: Finished disk-uuid.service - Generate new UUID for disk GPT if necessary.
Dec 13 13:32:55.980014 systemd[1]: Starting verity-setup.service - Verity Setup for /dev/mapper/usr...
Dec 13 13:32:55.983408 sh[595]: Success
Dec 13 13:32:55.996728 kernel: device-mapper: verity: sha256 using implementation "sha256-ni"
Dec 13 13:32:56.031993 systemd[1]: Found device dev-mapper-usr.device - /dev/mapper/usr.
Dec 13 13:32:56.045586 systemd[1]: Mounting sysusr-usr.mount - /sysusr/usr...
Dec 13 13:32:56.048470 systemd[1]: Finished verity-setup.service - Verity Setup for /dev/mapper/usr.
Dec 13 13:32:56.061512 kernel: BTRFS info (device dm-0): first mount of filesystem 79c74448-2326-4c98-b9ff-09542b30ea52
Dec 13 13:32:56.061542 kernel: BTRFS info (device dm-0): using crc32c (crc32c-intel) checksum algorithm
Dec 13 13:32:56.061553 kernel: BTRFS warning (device dm-0): 'nologreplay' is deprecated, use 'rescue=nologreplay' instead
Dec 13 13:32:56.062541 kernel: BTRFS info (device dm-0): disabling log replay at mount time
Dec 13 13:32:56.063315 kernel: BTRFS info (device dm-0): using free space tree
Dec 13 13:32:56.068257 systemd[1]: Mounted sysusr-usr.mount - /sysusr/usr.
Dec 13 13:32:56.070584 systemd[1]: afterburn-network-kargs.service - Afterburn Initrd Setup Network Kernel Arguments was skipped because no trigger condition checks were met.
Dec 13 13:32:56.083835 systemd[1]: Starting ignition-setup.service - Ignition (setup)...
Dec 13 13:32:56.086421 systemd[1]: Starting parse-ip-for-networkd.service - Write systemd-networkd units from cmdline...
Dec 13 13:32:56.096496 kernel: BTRFS info (device vda6): first mount of filesystem 05186a9a-6409-45c2-9e20-2eaf7a0548f0
Dec 13 13:32:56.096553 kernel: BTRFS info (device vda6): using crc32c (crc32c-intel) checksum algorithm
Dec 13 13:32:56.096565 kernel: BTRFS info (device vda6): using free space tree
Dec 13 13:32:56.099710 kernel: BTRFS info (device vda6): auto enabling async discard
Dec 13 13:32:56.108519 systemd[1]: mnt-oem.mount: Deactivated successfully.
Dec 13 13:32:56.110263 kernel: BTRFS info (device vda6): last unmount of filesystem 05186a9a-6409-45c2-9e20-2eaf7a0548f0
Dec 13 13:32:56.118876 systemd[1]: Finished ignition-setup.service - Ignition (setup).
Dec 13 13:32:56.132942 systemd[1]: Starting ignition-fetch-offline.service - Ignition (fetch-offline)...
Dec 13 13:32:56.187979 ignition[689]: Ignition 2.20.0
Dec 13 13:32:56.187994 ignition[689]: Stage: fetch-offline
Dec 13 13:32:56.188043 ignition[689]: no configs at "/usr/lib/ignition/base.d"
Dec 13 13:32:56.188057 ignition[689]: no config dir at "/usr/lib/ignition/base.platform.d/qemu"
Dec 13 13:32:56.188177 ignition[689]: parsed url from cmdline: ""
Dec 13 13:32:56.188182 ignition[689]: no config URL provided
Dec 13 13:32:56.188189 ignition[689]: reading system config file "/usr/lib/ignition/user.ign"
Dec 13 13:32:56.188199 ignition[689]: no config at "/usr/lib/ignition/user.ign"
Dec 13 13:32:56.188229 ignition[689]: op(1): [started] loading QEMU firmware config module
Dec 13 13:32:56.188235 ignition[689]: op(1): executing: "modprobe" "qemu_fw_cfg"
Dec 13 13:32:56.201333 ignition[689]: op(1): [finished] loading QEMU firmware config module
Dec 13 13:32:56.214975 systemd[1]: Finished parse-ip-for-networkd.service - Write systemd-networkd units from cmdline.
Dec 13 13:32:56.227877 systemd[1]: Starting systemd-networkd.service - Network Configuration...
Dec 13 13:32:56.244583 ignition[689]: parsing config with SHA512: fd4af5baee0d0d194246eb7a5c9838d7cdf66ae0f25d03d181e6ce3808d50192b5b297c2c10236721a052bdde4d5adbafddbd077c12a9d90b128835db0735f6c
Dec 13 13:32:56.248347 unknown[689]: fetched base config from "system"
Dec 13 13:32:56.248373 unknown[689]: fetched user config from "qemu"
Dec 13 13:32:56.250403 ignition[689]: fetch-offline: fetch-offline passed
Dec 13 13:32:56.250562 systemd-networkd[784]: lo: Link UP
Dec 13 13:32:56.250566 systemd-networkd[784]: lo: Gained carrier
Dec 13 13:32:56.251424 ignition[689]: Ignition finished successfully
Dec 13 13:32:56.252219 systemd-networkd[784]: Enumeration completed
Dec 13 13:32:56.252551 systemd[1]: Started systemd-networkd.service - Network Configuration.
Dec 13 13:32:56.252795 systemd-networkd[784]: eth0: found matching network '/usr/lib/systemd/network/zz-default.network', based on potentially unpredictable interface name.
Dec 13 13:32:56.252800 systemd-networkd[784]: eth0: Configuring with /usr/lib/systemd/network/zz-default.network.
Dec 13 13:32:56.253713 systemd[1]: Reached target network.target - Network.
Dec 13 13:32:56.255038 systemd-networkd[784]: eth0: Link UP
Dec 13 13:32:56.255043 systemd-networkd[784]: eth0: Gained carrier
Dec 13 13:32:56.255051 systemd-networkd[784]: eth0: found matching network '/usr/lib/systemd/network/zz-default.network', based on potentially unpredictable interface name.
Dec 13 13:32:56.266953 systemd[1]: Finished ignition-fetch-offline.service - Ignition (fetch-offline).
Dec 13 13:32:56.269422 systemd[1]: ignition-fetch.service - Ignition (fetch) was skipped because of an unmet condition check (ConditionPathExists=!/run/ignition.json).
Dec 13 13:32:56.279737 systemd-networkd[784]: eth0: DHCPv4 address 10.0.0.150/16, gateway 10.0.0.1 acquired from 10.0.0.1
Dec 13 13:32:56.279900 systemd[1]: Starting ignition-kargs.service - Ignition (kargs)...
Dec 13 13:32:56.293717 ignition[787]: Ignition 2.20.0
Dec 13 13:32:56.293730 ignition[787]: Stage: kargs
Dec 13 13:32:56.293878 ignition[787]: no configs at "/usr/lib/ignition/base.d"
Dec 13 13:32:56.293890 ignition[787]: no config dir at "/usr/lib/ignition/base.platform.d/qemu"
Dec 13 13:32:56.297559 ignition[787]: kargs: kargs passed
Dec 13 13:32:56.297607 ignition[787]: Ignition finished successfully
Dec 13 13:32:56.302004 systemd[1]: Finished ignition-kargs.service - Ignition (kargs).
Dec 13 13:32:56.308967 systemd[1]: Starting ignition-disks.service - Ignition (disks)...
Dec 13 13:32:56.322824 ignition[796]: Ignition 2.20.0
Dec 13 13:32:56.322835 ignition[796]: Stage: disks
Dec 13 13:32:56.322998 ignition[796]: no configs at "/usr/lib/ignition/base.d"
Dec 13 13:32:56.323009 ignition[796]: no config dir at "/usr/lib/ignition/base.platform.d/qemu"
Dec 13 13:32:56.323863 ignition[796]: disks: disks passed
Dec 13 13:32:56.323910 ignition[796]: Ignition finished successfully
Dec 13 13:32:56.344257 systemd[1]: Finished ignition-disks.service - Ignition (disks).
Dec 13 13:32:56.346847 systemd[1]: Reached target initrd-root-device.target - Initrd Root Device.
Dec 13 13:32:56.349052 systemd[1]: Reached target local-fs-pre.target - Preparation for Local File Systems.
Dec 13 13:32:56.351483 systemd[1]: Reached target local-fs.target - Local File Systems.
Dec 13 13:32:56.353453 systemd[1]: Reached target sysinit.target - System Initialization.
Dec 13 13:32:56.355452 systemd[1]: Reached target basic.target - Basic System.
Dec 13 13:32:56.367822 systemd[1]: Starting systemd-fsck-root.service - File System Check on /dev/disk/by-label/ROOT...
Dec 13 13:32:56.412298 systemd-fsck[806]: ROOT: clean, 14/553520 files, 52654/553472 blocks
Dec 13 13:32:56.608997 systemd[1]: Finished systemd-fsck-root.service - File System Check on /dev/disk/by-label/ROOT.
Dec 13 13:32:56.623786 systemd[1]: Mounting sysroot.mount - /sysroot...
Dec 13 13:32:56.707724 kernel: EXT4-fs (vda9): mounted filesystem 8801d4fe-2f40-4e12-9140-c192f2e7d668 r/w with ordered data mode. Quota mode: none.
Dec 13 13:32:56.707767 systemd[1]: Mounted sysroot.mount - /sysroot.
Dec 13 13:32:56.709931 systemd[1]: Reached target initrd-root-fs.target - Initrd Root File System.
Dec 13 13:32:56.730809 systemd[1]: Mounting sysroot-oem.mount - /sysroot/oem...
Dec 13 13:32:56.733477 systemd[1]: Mounting sysroot-usr.mount - /sysroot/usr...
Dec 13 13:32:56.735842 systemd[1]: flatcar-metadata-hostname.service - Flatcar Metadata Hostname Agent was skipped because no trigger condition checks were met.
Dec 13 13:32:56.735894 systemd[1]: ignition-remount-sysroot.service - Remount /sysroot read-write for Ignition was skipped because of an unmet condition check (ConditionPathIsReadWrite=!/sysroot).
Dec 13 13:32:56.735918 systemd[1]: Reached target ignition-diskful.target - Ignition Boot Disk Setup.
Dec 13 13:32:56.743778 kernel: BTRFS: device label OEM devid 1 transid 15 /dev/vda6 scanned by mount (814)
Dec 13 13:32:56.744659 systemd[1]: Mounted sysroot-usr.mount - /sysroot/usr.
Dec 13 13:32:56.748758 kernel: BTRFS info (device vda6): first mount of filesystem 05186a9a-6409-45c2-9e20-2eaf7a0548f0
Dec 13 13:32:56.748779 kernel: BTRFS info (device vda6): using crc32c (crc32c-intel) checksum algorithm
Dec 13 13:32:56.748800 kernel: BTRFS info (device vda6): using free space tree
Dec 13 13:32:56.748811 kernel: BTRFS info (device vda6): auto enabling async discard
Dec 13 13:32:56.750143 systemd[1]: Starting initrd-setup-root.service - Root filesystem setup...
Dec 13 13:32:56.753371 systemd[1]: Mounted sysroot-oem.mount - /sysroot/oem.
Dec 13 13:32:56.789309 initrd-setup-root[838]: cut: /sysroot/etc/passwd: No such file or directory
Dec 13 13:32:56.794973 initrd-setup-root[845]: cut: /sysroot/etc/group: No such file or directory
Dec 13 13:32:56.799332 initrd-setup-root[852]: cut: /sysroot/etc/shadow: No such file or directory
Dec 13 13:32:56.804264 initrd-setup-root[859]: cut: /sysroot/etc/gshadow: No such file or directory
Dec 13 13:32:56.895909 systemd[1]: Finished initrd-setup-root.service - Root filesystem setup.
Dec 13 13:32:56.909780 systemd[1]: Starting ignition-mount.service - Ignition (mount)...
Dec 13 13:32:56.913194 systemd[1]: Starting sysroot-boot.service - /sysroot/boot...
Dec 13 13:32:56.918714 kernel: BTRFS info (device vda6): last unmount of filesystem 05186a9a-6409-45c2-9e20-2eaf7a0548f0
Dec 13 13:32:56.938908 ignition[926]: INFO : Ignition 2.20.0
Dec 13 13:32:56.938908 ignition[926]: INFO : Stage: mount
Dec 13 13:32:56.940680 ignition[926]: INFO : no configs at "/usr/lib/ignition/base.d"
Dec 13 13:32:56.940680 ignition[926]: INFO : no config dir at "/usr/lib/ignition/base.platform.d/qemu"
Dec 13 13:32:56.940680 ignition[926]: INFO : mount: mount passed
Dec 13 13:32:56.940680 ignition[926]: INFO : Ignition finished successfully
Dec 13 13:32:56.941648 systemd[1]: Finished sysroot-boot.service - /sysroot/boot.
Dec 13 13:32:56.942049 systemd[1]: Finished ignition-mount.service - Ignition (mount).
Dec 13 13:32:56.956792 systemd[1]: Starting ignition-files.service - Ignition (files)...
Dec 13 13:32:57.060896 systemd[1]: sysroot-oem.mount: Deactivated successfully.
Dec 13 13:32:57.069899 systemd[1]: Mounting sysroot-oem.mount - /sysroot/oem...
Dec 13 13:32:57.078196 kernel: BTRFS: device label OEM devid 1 transid 16 /dev/vda6 scanned by mount (940)
Dec 13 13:32:57.078233 kernel: BTRFS info (device vda6): first mount of filesystem 05186a9a-6409-45c2-9e20-2eaf7a0548f0
Dec 13 13:32:57.078249 kernel: BTRFS info (device vda6): using crc32c (crc32c-intel) checksum algorithm
Dec 13 13:32:57.079057 kernel: BTRFS info (device vda6): using free space tree
Dec 13 13:32:57.082722 kernel: BTRFS info (device vda6): auto enabling async discard
Dec 13 13:32:57.083793 systemd[1]: Mounted sysroot-oem.mount - /sysroot/oem.
Dec 13 13:32:57.111891 ignition[957]: INFO : Ignition 2.20.0 Dec 13 13:32:57.111891 ignition[957]: INFO : Stage: files Dec 13 13:32:57.114168 ignition[957]: INFO : no configs at "/usr/lib/ignition/base.d" Dec 13 13:32:57.114168 ignition[957]: INFO : no config dir at "/usr/lib/ignition/base.platform.d/qemu" Dec 13 13:32:57.114168 ignition[957]: DEBUG : files: compiled without relabeling support, skipping Dec 13 13:32:57.114168 ignition[957]: INFO : files: ensureUsers: op(1): [started] creating or modifying user "core" Dec 13 13:32:57.114168 ignition[957]: DEBUG : files: ensureUsers: op(1): executing: "usermod" "--root" "/sysroot" "core" Dec 13 13:32:57.122207 ignition[957]: INFO : files: ensureUsers: op(1): [finished] creating or modifying user "core" Dec 13 13:32:57.122207 ignition[957]: INFO : files: ensureUsers: op(2): [started] adding ssh keys to user "core" Dec 13 13:32:57.122207 ignition[957]: INFO : files: ensureUsers: op(2): [finished] adding ssh keys to user "core" Dec 13 13:32:57.122207 ignition[957]: INFO : files: createFilesystemsFiles: createFiles: op(3): [started] writing file "/sysroot/opt/helm-v3.13.2-linux-amd64.tar.gz" Dec 13 13:32:57.122207 ignition[957]: INFO : files: createFilesystemsFiles: createFiles: op(3): GET https://get.helm.sh/helm-v3.13.2-linux-amd64.tar.gz: attempt #1 Dec 13 13:32:57.116295 unknown[957]: wrote ssh authorized keys file for user: core Dec 13 13:32:57.158879 ignition[957]: INFO : files: createFilesystemsFiles: createFiles: op(3): GET result: OK Dec 13 13:32:57.244130 ignition[957]: INFO : files: createFilesystemsFiles: createFiles: op(3): [finished] writing file "/sysroot/opt/helm-v3.13.2-linux-amd64.tar.gz" Dec 13 13:32:57.244130 ignition[957]: INFO : files: createFilesystemsFiles: createFiles: op(4): [started] writing file "/sysroot/home/core/install.sh" Dec 13 13:32:57.247879 ignition[957]: INFO : files: createFilesystemsFiles: createFiles: op(4): [finished] writing file "/sysroot/home/core/install.sh" Dec 13 13:32:57.249573 ignition[957]: INFO : files: createFilesystemsFiles: createFiles: op(5): [started] writing file "/sysroot/home/core/nginx.yaml" Dec 13 13:32:57.251343 ignition[957]: INFO : files: createFilesystemsFiles: createFiles: op(5): [finished] writing file "/sysroot/home/core/nginx.yaml" Dec 13 13:32:57.253008 ignition[957]: INFO : files: createFilesystemsFiles: createFiles: op(6): [started] writing file "/sysroot/home/core/nfs-pod.yaml" Dec 13 13:32:57.255825 ignition[957]: INFO : files: createFilesystemsFiles: createFiles: op(6): [finished] writing file "/sysroot/home/core/nfs-pod.yaml" Dec 13 13:32:57.255825 ignition[957]: INFO : files: createFilesystemsFiles: createFiles: op(7): [started] writing file "/sysroot/home/core/nfs-pvc.yaml" Dec 13 13:32:57.259634 ignition[957]: INFO : files: createFilesystemsFiles: createFiles: op(7): [finished] writing file "/sysroot/home/core/nfs-pvc.yaml" Dec 13 13:32:57.259634 ignition[957]: INFO : files: createFilesystemsFiles: createFiles: op(8): [started] writing file "/sysroot/etc/flatcar/update.conf" Dec 13 13:32:57.259634 ignition[957]: INFO : files: createFilesystemsFiles: createFiles: op(8): [finished] writing file "/sysroot/etc/flatcar/update.conf" Dec 13 13:32:57.259634 ignition[957]: INFO : files: createFilesystemsFiles: createFiles: op(9): [started] writing link "/sysroot/etc/extensions/kubernetes.raw" -> "/opt/extensions/kubernetes/kubernetes-v1.30.1-x86-64.raw" Dec 13 13:32:57.259634 ignition[957]: INFO : files: createFilesystemsFiles: createFiles: op(9): [finished] writing link 
"/sysroot/etc/extensions/kubernetes.raw" -> "/opt/extensions/kubernetes/kubernetes-v1.30.1-x86-64.raw" Dec 13 13:32:57.259634 ignition[957]: INFO : files: createFilesystemsFiles: createFiles: op(a): [started] writing file "/sysroot/opt/extensions/kubernetes/kubernetes-v1.30.1-x86-64.raw" Dec 13 13:32:57.259634 ignition[957]: INFO : files: createFilesystemsFiles: createFiles: op(a): GET https://github.com/flatcar/sysext-bakery/releases/download/latest/kubernetes-v1.30.1-x86-64.raw: attempt #1 Dec 13 13:32:57.581759 ignition[957]: INFO : files: createFilesystemsFiles: createFiles: op(a): GET result: OK Dec 13 13:32:57.942314 ignition[957]: INFO : files: createFilesystemsFiles: createFiles: op(a): [finished] writing file "/sysroot/opt/extensions/kubernetes/kubernetes-v1.30.1-x86-64.raw" Dec 13 13:32:57.942314 ignition[957]: INFO : files: op(b): [started] processing unit "prepare-helm.service" Dec 13 13:32:57.946251 ignition[957]: INFO : files: op(b): op(c): [started] writing unit "prepare-helm.service" at "/sysroot/etc/systemd/system/prepare-helm.service" Dec 13 13:32:57.946251 ignition[957]: INFO : files: op(b): op(c): [finished] writing unit "prepare-helm.service" at "/sysroot/etc/systemd/system/prepare-helm.service" Dec 13 13:32:57.946251 ignition[957]: INFO : files: op(b): [finished] processing unit "prepare-helm.service" Dec 13 13:32:57.946251 ignition[957]: INFO : files: op(d): [started] processing unit "coreos-metadata.service" Dec 13 13:32:57.946251 ignition[957]: INFO : files: op(d): op(e): [started] writing unit "coreos-metadata.service" at "/sysroot/etc/systemd/system/coreos-metadata.service" Dec 13 13:32:57.946251 ignition[957]: INFO : files: op(d): op(e): [finished] writing unit "coreos-metadata.service" at "/sysroot/etc/systemd/system/coreos-metadata.service" Dec 13 13:32:57.946251 ignition[957]: INFO : files: op(d): [finished] processing unit "coreos-metadata.service" Dec 13 13:32:57.946251 ignition[957]: INFO : files: op(f): [started] setting preset to disabled for "coreos-metadata.service" Dec 13 13:32:57.964868 ignition[957]: INFO : files: op(f): op(10): [started] removing enablement symlink(s) for "coreos-metadata.service" Dec 13 13:32:57.970711 ignition[957]: INFO : files: op(f): op(10): [finished] removing enablement symlink(s) for "coreos-metadata.service" Dec 13 13:32:57.972357 ignition[957]: INFO : files: op(f): [finished] setting preset to disabled for "coreos-metadata.service" Dec 13 13:32:57.972357 ignition[957]: INFO : files: op(11): [started] setting preset to enabled for "prepare-helm.service" Dec 13 13:32:57.972357 ignition[957]: INFO : files: op(11): [finished] setting preset to enabled for "prepare-helm.service" Dec 13 13:32:57.972357 ignition[957]: INFO : files: createResultFile: createFiles: op(12): [started] writing file "/sysroot/etc/.ignition-result.json" Dec 13 13:32:57.972357 ignition[957]: INFO : files: createResultFile: createFiles: op(12): [finished] writing file "/sysroot/etc/.ignition-result.json" Dec 13 13:32:57.972357 ignition[957]: INFO : files: files passed Dec 13 13:32:57.972357 ignition[957]: INFO : Ignition finished successfully Dec 13 13:32:57.973759 systemd[1]: Finished ignition-files.service - Ignition (files). Dec 13 13:32:57.986819 systemd[1]: Starting ignition-quench.service - Ignition (record completion)... Dec 13 13:32:58.003590 systemd[1]: Starting initrd-setup-root-after-ignition.service - Root filesystem completion... Dec 13 13:32:58.005245 systemd[1]: ignition-quench.service: Deactivated successfully. 
Dec 13 13:32:58.005356 systemd[1]: Finished ignition-quench.service - Ignition (record completion). Dec 13 13:32:58.012229 initrd-setup-root-after-ignition[985]: grep: /sysroot/oem/oem-release: No such file or directory Dec 13 13:32:58.014315 initrd-setup-root-after-ignition[987]: grep: /sysroot/etc/flatcar/enabled-sysext.conf: No such file or directory Dec 13 13:32:58.015976 initrd-setup-root-after-ignition[987]: grep: /sysroot/usr/share/flatcar/enabled-sysext.conf: No such file or directory Dec 13 13:32:58.018757 initrd-setup-root-after-ignition[991]: grep: /sysroot/etc/flatcar/enabled-sysext.conf: No such file or directory Dec 13 13:32:58.017486 systemd[1]: Finished initrd-setup-root-after-ignition.service - Root filesystem completion. Dec 13 13:32:58.019175 systemd[1]: Reached target ignition-complete.target - Ignition Complete. Dec 13 13:32:58.029811 systemd[1]: Starting initrd-parse-etc.service - Mountpoints Configured in the Real Root... Dec 13 13:32:58.052548 systemd[1]: initrd-parse-etc.service: Deactivated successfully. Dec 13 13:32:58.052666 systemd[1]: Finished initrd-parse-etc.service - Mountpoints Configured in the Real Root. Dec 13 13:32:58.054973 systemd[1]: Reached target initrd-fs.target - Initrd File Systems. Dec 13 13:32:58.057052 systemd[1]: Reached target initrd.target - Initrd Default Target. Dec 13 13:32:58.059028 systemd[1]: dracut-mount.service - dracut mount hook was skipped because no trigger condition checks were met. Dec 13 13:32:58.073820 systemd[1]: Starting dracut-pre-pivot.service - dracut pre-pivot and cleanup hook... Dec 13 13:32:58.085884 systemd-networkd[784]: eth0: Gained IPv6LL Dec 13 13:32:58.088516 systemd[1]: Finished dracut-pre-pivot.service - dracut pre-pivot and cleanup hook. Dec 13 13:32:58.098834 systemd[1]: Starting initrd-cleanup.service - Cleaning Up and Shutting Down Daemons... Dec 13 13:32:58.110575 systemd[1]: Stopped target nss-lookup.target - Host and Network Name Lookups. Dec 13 13:32:58.110754 systemd[1]: Stopped target remote-cryptsetup.target - Remote Encrypted Volumes. Dec 13 13:32:58.112918 systemd[1]: Stopped target timers.target - Timer Units. Dec 13 13:32:58.115066 systemd[1]: dracut-pre-pivot.service: Deactivated successfully. Dec 13 13:32:58.115188 systemd[1]: Stopped dracut-pre-pivot.service - dracut pre-pivot and cleanup hook. Dec 13 13:32:58.118668 systemd[1]: Stopped target initrd.target - Initrd Default Target. Dec 13 13:32:58.119838 systemd[1]: Stopped target basic.target - Basic System. Dec 13 13:32:58.120154 systemd[1]: Stopped target ignition-complete.target - Ignition Complete. Dec 13 13:32:58.123243 systemd[1]: Stopped target ignition-diskful.target - Ignition Boot Disk Setup. Dec 13 13:32:58.123573 systemd[1]: Stopped target initrd-root-device.target - Initrd Root Device. Dec 13 13:32:58.128321 systemd[1]: Stopped target remote-fs.target - Remote File Systems. Dec 13 13:32:58.131983 systemd[1]: Stopped target remote-fs-pre.target - Preparation for Remote File Systems. Dec 13 13:32:58.132332 systemd[1]: Stopped target sysinit.target - System Initialization. Dec 13 13:32:58.137023 systemd[1]: Stopped target local-fs.target - Local File Systems. Dec 13 13:32:58.139003 systemd[1]: Stopped target swap.target - Swaps. Dec 13 13:32:58.139948 systemd[1]: dracut-pre-mount.service: Deactivated successfully. Dec 13 13:32:58.140077 systemd[1]: Stopped dracut-pre-mount.service - dracut pre-mount hook. Dec 13 13:32:58.143386 systemd[1]: Stopped target cryptsetup.target - Local Encrypted Volumes. 
Dec 13 13:32:58.144454 systemd[1]: Stopped target cryptsetup-pre.target - Local Encrypted Volumes (Pre). Dec 13 13:32:58.144765 systemd[1]: clevis-luks-askpass.path: Deactivated successfully. Dec 13 13:32:58.148625 systemd[1]: Stopped clevis-luks-askpass.path - Forward Password Requests to Clevis Directory Watch. Dec 13 13:32:58.149051 systemd[1]: dracut-initqueue.service: Deactivated successfully. Dec 13 13:32:58.149175 systemd[1]: Stopped dracut-initqueue.service - dracut initqueue hook. Dec 13 13:32:58.152244 systemd[1]: ignition-fetch-offline.service: Deactivated successfully. Dec 13 13:32:58.152373 systemd[1]: Stopped ignition-fetch-offline.service - Ignition (fetch-offline). Dec 13 13:32:58.154877 systemd[1]: Stopped target paths.target - Path Units. Dec 13 13:32:58.155238 systemd[1]: systemd-ask-password-console.path: Deactivated successfully. Dec 13 13:32:58.162755 systemd[1]: Stopped systemd-ask-password-console.path - Dispatch Password Requests to Console Directory Watch. Dec 13 13:32:58.165530 systemd[1]: Stopped target slices.target - Slice Units. Dec 13 13:32:58.165691 systemd[1]: Stopped target sockets.target - Socket Units. Dec 13 13:32:58.167399 systemd[1]: iscsid.socket: Deactivated successfully. Dec 13 13:32:58.167512 systemd[1]: Closed iscsid.socket - Open-iSCSI iscsid Socket. Dec 13 13:32:58.169430 systemd[1]: iscsiuio.socket: Deactivated successfully. Dec 13 13:32:58.169534 systemd[1]: Closed iscsiuio.socket - Open-iSCSI iscsiuio Socket. Dec 13 13:32:58.172063 systemd[1]: initrd-setup-root-after-ignition.service: Deactivated successfully. Dec 13 13:32:58.172187 systemd[1]: Stopped initrd-setup-root-after-ignition.service - Root filesystem completion. Dec 13 13:32:58.172959 systemd[1]: ignition-files.service: Deactivated successfully. Dec 13 13:32:58.173076 systemd[1]: Stopped ignition-files.service - Ignition (files). Dec 13 13:32:58.188872 systemd[1]: Stopping ignition-mount.service - Ignition (mount)... Dec 13 13:32:58.190577 systemd[1]: Stopping sysroot-boot.service - /sysroot/boot... Dec 13 13:32:58.191751 systemd[1]: systemd-udev-trigger.service: Deactivated successfully. Dec 13 13:32:58.191906 systemd[1]: Stopped systemd-udev-trigger.service - Coldplug All udev Devices. Dec 13 13:32:58.193972 systemd[1]: dracut-pre-trigger.service: Deactivated successfully. Dec 13 13:32:58.194160 systemd[1]: Stopped dracut-pre-trigger.service - dracut pre-trigger hook. Dec 13 13:32:58.198679 systemd[1]: initrd-cleanup.service: Deactivated successfully. Dec 13 13:32:58.198830 systemd[1]: Finished initrd-cleanup.service - Cleaning Up and Shutting Down Daemons. Dec 13 13:32:58.208586 ignition[1011]: INFO : Ignition 2.20.0 Dec 13 13:32:58.208586 ignition[1011]: INFO : Stage: umount Dec 13 13:32:58.210459 ignition[1011]: INFO : no configs at "/usr/lib/ignition/base.d" Dec 13 13:32:58.210459 ignition[1011]: INFO : no config dir at "/usr/lib/ignition/base.platform.d/qemu" Dec 13 13:32:58.210459 ignition[1011]: INFO : umount: umount passed Dec 13 13:32:58.210459 ignition[1011]: INFO : Ignition finished successfully Dec 13 13:32:58.211888 systemd[1]: ignition-mount.service: Deactivated successfully. Dec 13 13:32:58.212041 systemd[1]: Stopped ignition-mount.service - Ignition (mount). Dec 13 13:32:58.213980 systemd[1]: Stopped target network.target - Network. Dec 13 13:32:58.215404 systemd[1]: ignition-disks.service: Deactivated successfully. Dec 13 13:32:58.215466 systemd[1]: Stopped ignition-disks.service - Ignition (disks). 
Dec 13 13:32:58.217262 systemd[1]: ignition-kargs.service: Deactivated successfully. Dec 13 13:32:58.217322 systemd[1]: Stopped ignition-kargs.service - Ignition (kargs). Dec 13 13:32:58.219427 systemd[1]: ignition-setup.service: Deactivated successfully. Dec 13 13:32:58.219473 systemd[1]: Stopped ignition-setup.service - Ignition (setup). Dec 13 13:32:58.221381 systemd[1]: ignition-setup-pre.service: Deactivated successfully. Dec 13 13:32:58.221428 systemd[1]: Stopped ignition-setup-pre.service - Ignition env setup. Dec 13 13:32:58.223427 systemd[1]: Stopping systemd-networkd.service - Network Configuration... Dec 13 13:32:58.225463 systemd[1]: Stopping systemd-resolved.service - Network Name Resolution... Dec 13 13:32:58.228407 systemd[1]: sysroot-boot.mount: Deactivated successfully. Dec 13 13:32:58.230735 systemd-networkd[784]: eth0: DHCPv6 lease lost Dec 13 13:32:58.233433 systemd[1]: systemd-networkd.service: Deactivated successfully. Dec 13 13:32:58.233579 systemd[1]: Stopped systemd-networkd.service - Network Configuration. Dec 13 13:32:58.235170 systemd[1]: systemd-networkd.socket: Deactivated successfully. Dec 13 13:32:58.235238 systemd[1]: Closed systemd-networkd.socket - Network Service Netlink Socket. Dec 13 13:32:58.245831 systemd[1]: Stopping network-cleanup.service - Network Cleanup... Dec 13 13:32:58.246863 systemd[1]: parse-ip-for-networkd.service: Deactivated successfully. Dec 13 13:32:58.246918 systemd[1]: Stopped parse-ip-for-networkd.service - Write systemd-networkd units from cmdline. Dec 13 13:32:58.249256 systemd[1]: Stopping systemd-udevd.service - Rule-based Manager for Device Events and Files... Dec 13 13:32:58.253156 systemd[1]: systemd-resolved.service: Deactivated successfully. Dec 13 13:32:58.253272 systemd[1]: Stopped systemd-resolved.service - Network Name Resolution. Dec 13 13:32:58.257958 systemd[1]: systemd-sysctl.service: Deactivated successfully. Dec 13 13:32:58.258045 systemd[1]: Stopped systemd-sysctl.service - Apply Kernel Variables. Dec 13 13:32:58.260135 systemd[1]: systemd-modules-load.service: Deactivated successfully. Dec 13 13:32:58.260198 systemd[1]: Stopped systemd-modules-load.service - Load Kernel Modules. Dec 13 13:32:58.261223 systemd[1]: systemd-tmpfiles-setup.service: Deactivated successfully. Dec 13 13:32:58.261276 systemd[1]: Stopped systemd-tmpfiles-setup.service - Create System Files and Directories. Dec 13 13:32:58.265046 systemd[1]: network-cleanup.service: Deactivated successfully. Dec 13 13:32:58.265158 systemd[1]: Stopped network-cleanup.service - Network Cleanup. Dec 13 13:32:58.267453 systemd[1]: systemd-udevd.service: Deactivated successfully. Dec 13 13:32:58.267619 systemd[1]: Stopped systemd-udevd.service - Rule-based Manager for Device Events and Files. Dec 13 13:32:58.269893 systemd[1]: systemd-udevd-control.socket: Deactivated successfully. Dec 13 13:32:58.269962 systemd[1]: Closed systemd-udevd-control.socket - udev Control Socket. Dec 13 13:32:58.271772 systemd[1]: systemd-udevd-kernel.socket: Deactivated successfully. Dec 13 13:32:58.271811 systemd[1]: Closed systemd-udevd-kernel.socket - udev Kernel Socket. Dec 13 13:32:58.273787 systemd[1]: dracut-pre-udev.service: Deactivated successfully. Dec 13 13:32:58.273833 systemd[1]: Stopped dracut-pre-udev.service - dracut pre-udev hook. Dec 13 13:32:58.275993 systemd[1]: dracut-cmdline.service: Deactivated successfully. Dec 13 13:32:58.276040 systemd[1]: Stopped dracut-cmdline.service - dracut cmdline hook. 
Dec 13 13:32:58.278044 systemd[1]: dracut-cmdline-ask.service: Deactivated successfully. Dec 13 13:32:58.278090 systemd[1]: Stopped dracut-cmdline-ask.service - dracut ask for additional cmdline parameters. Dec 13 13:32:58.292917 systemd[1]: Starting initrd-udevadm-cleanup-db.service - Cleanup udev Database... Dec 13 13:32:58.294059 systemd[1]: systemd-tmpfiles-setup-dev.service: Deactivated successfully. Dec 13 13:32:58.294133 systemd[1]: Stopped systemd-tmpfiles-setup-dev.service - Create Static Device Nodes in /dev. Dec 13 13:32:58.296428 systemd[1]: systemd-tmpfiles-setup-dev-early.service: Deactivated successfully. Dec 13 13:32:58.296487 systemd[1]: Stopped systemd-tmpfiles-setup-dev-early.service - Create Static Device Nodes in /dev gracefully. Dec 13 13:32:58.298754 systemd[1]: kmod-static-nodes.service: Deactivated successfully. Dec 13 13:32:58.298811 systemd[1]: Stopped kmod-static-nodes.service - Create List of Static Device Nodes. Dec 13 13:32:58.301466 systemd[1]: systemd-vconsole-setup.service: Deactivated successfully. Dec 13 13:32:58.301526 systemd[1]: Stopped systemd-vconsole-setup.service - Virtual Console Setup. Dec 13 13:32:58.304284 systemd[1]: initrd-udevadm-cleanup-db.service: Deactivated successfully. Dec 13 13:32:58.304402 systemd[1]: Finished initrd-udevadm-cleanup-db.service - Cleanup udev Database. Dec 13 13:32:58.785441 systemd[1]: sysroot-boot.service: Deactivated successfully. Dec 13 13:32:58.785588 systemd[1]: Stopped sysroot-boot.service - /sysroot/boot. Dec 13 13:32:58.788042 systemd[1]: Reached target initrd-switch-root.target - Switch Root. Dec 13 13:32:58.789368 systemd[1]: initrd-setup-root.service: Deactivated successfully. Dec 13 13:32:58.789430 systemd[1]: Stopped initrd-setup-root.service - Root filesystem setup. Dec 13 13:32:58.807840 systemd[1]: Starting initrd-switch-root.service - Switch Root... Dec 13 13:32:58.814735 systemd[1]: Switching root. Dec 13 13:32:58.850213 systemd-journald[193]: Journal stopped Dec 13 13:33:00.207896 systemd-journald[193]: Received SIGTERM from PID 1 (systemd). Dec 13 13:33:00.207968 kernel: SELinux: policy capability network_peer_controls=1 Dec 13 13:33:00.207985 kernel: SELinux: policy capability open_perms=1 Dec 13 13:33:00.207997 kernel: SELinux: policy capability extended_socket_class=1 Dec 13 13:33:00.208008 kernel: SELinux: policy capability always_check_network=0 Dec 13 13:33:00.208024 kernel: SELinux: policy capability cgroup_seclabel=1 Dec 13 13:33:00.208038 kernel: SELinux: policy capability nnp_nosuid_transition=1 Dec 13 13:33:00.208050 kernel: SELinux: policy capability genfs_seclabel_symlinks=0 Dec 13 13:33:00.208061 kernel: SELinux: policy capability ioctl_skip_cloexec=0 Dec 13 13:33:00.208077 kernel: audit: type=1403 audit(1734096779.392:2): auid=4294967295 ses=4294967295 lsm=selinux res=1 Dec 13 13:33:00.208090 systemd[1]: Successfully loaded SELinux policy in 38.337ms. Dec 13 13:33:00.208111 systemd[1]: Relabeled /dev, /dev/shm, /run, /sys/fs/cgroup in 12.704ms. Dec 13 13:33:00.208124 systemd[1]: systemd 255 running in system mode (+PAM +AUDIT +SELINUX -APPARMOR +IMA +SMACK +SECCOMP +GCRYPT -GNUTLS +OPENSSL -ACL +BLKID +CURL +ELFUTILS -FIDO2 +IDN2 -IDN +IPTC +KMOD +LIBCRYPTSETUP +LIBFDISK +PCRE2 -PWQUALITY -P11KIT -QRENCODE +TPM2 +BZIP2 +LZ4 +XZ +ZLIB +ZSTD -BPF_FRAMEWORK -XKBCOMMON +UTMP -SYSVINIT default-hierarchy=unified) Dec 13 13:33:00.208137 systemd[1]: Detected virtualization kvm. Dec 13 13:33:00.208149 systemd[1]: Detected architecture x86-64. 
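
One detail worth noticing in the hand-off above: the initrd journal reports "Journal stopped" at 13:32:58.850213, while its final "Received SIGTERM" message carries a stamp of 13:33:00.207896, so roughly 1.36 s elapsed before the new PID 1's journal collected it, a window covering the root switch and the SELinux policy load. A back-of-envelope check of that gap from the two timestamps above (an estimate only, since the stamps mark collection rather than a single event):

    from datetime import datetime

    fmt = "%b %d %H:%M:%S.%f"
    stopped = datetime.strptime("Dec 13 13:32:58.850213", fmt)  # "Journal stopped"
    resumed = datetime.strptime("Dec 13 13:33:00.207896", fmt)  # "Received SIGTERM"

    print((resumed - stopped).total_seconds())  # -> 1.357683
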
Dec 13 13:33:00.208163 systemd[1]: Detected first boot. Dec 13 13:33:00.208175 systemd[1]: Initializing machine ID from VM UUID. Dec 13 13:33:00.208187 zram_generator::config[1056]: No configuration found. Dec 13 13:33:00.208200 systemd[1]: Populated /etc with preset unit settings. Dec 13 13:33:00.208212 systemd[1]: initrd-switch-root.service: Deactivated successfully. Dec 13 13:33:00.208229 systemd[1]: Stopped initrd-switch-root.service - Switch Root. Dec 13 13:33:00.208249 systemd[1]: systemd-journald.service: Scheduled restart job, restart counter is at 1. Dec 13 13:33:00.208262 systemd[1]: Created slice system-addon\x2dconfig.slice - Slice /system/addon-config. Dec 13 13:33:00.208276 systemd[1]: Created slice system-addon\x2drun.slice - Slice /system/addon-run. Dec 13 13:33:00.208288 systemd[1]: Created slice system-getty.slice - Slice /system/getty. Dec 13 13:33:00.208300 systemd[1]: Created slice system-modprobe.slice - Slice /system/modprobe. Dec 13 13:33:00.208312 systemd[1]: Created slice system-serial\x2dgetty.slice - Slice /system/serial-getty. Dec 13 13:33:00.208324 systemd[1]: Created slice system-system\x2dcloudinit.slice - Slice /system/system-cloudinit. Dec 13 13:33:00.208337 systemd[1]: Created slice system-systemd\x2dfsck.slice - Slice /system/systemd-fsck. Dec 13 13:33:00.208349 systemd[1]: Created slice user.slice - User and Session Slice. Dec 13 13:33:00.208365 systemd[1]: Started clevis-luks-askpass.path - Forward Password Requests to Clevis Directory Watch. Dec 13 13:33:00.208377 systemd[1]: Started systemd-ask-password-console.path - Dispatch Password Requests to Console Directory Watch. Dec 13 13:33:00.208393 systemd[1]: Started systemd-ask-password-wall.path - Forward Password Requests to Wall Directory Watch. Dec 13 13:33:00.208405 systemd[1]: Set up automount boot.automount - Boot partition Automount Point. Dec 13 13:33:00.208417 systemd[1]: Set up automount proc-sys-fs-binfmt_misc.automount - Arbitrary Executable File Formats File System Automount Point. Dec 13 13:33:00.208429 systemd[1]: Expecting device dev-disk-by\x2dlabel-OEM.device - /dev/disk/by-label/OEM... Dec 13 13:33:00.208441 systemd[1]: Expecting device dev-ttyS0.device - /dev/ttyS0... Dec 13 13:33:00.208453 systemd[1]: Reached target cryptsetup-pre.target - Local Encrypted Volumes (Pre). Dec 13 13:33:00.208465 systemd[1]: Stopped target initrd-switch-root.target - Switch Root. Dec 13 13:33:00.208477 systemd[1]: Stopped target initrd-fs.target - Initrd File Systems. Dec 13 13:33:00.208489 systemd[1]: Stopped target initrd-root-fs.target - Initrd Root File System. Dec 13 13:33:00.208503 systemd[1]: Reached target integritysetup.target - Local Integrity Protected Volumes. Dec 13 13:33:00.208515 systemd[1]: Reached target remote-cryptsetup.target - Remote Encrypted Volumes. Dec 13 13:33:00.208532 systemd[1]: Reached target remote-fs.target - Remote File Systems. Dec 13 13:33:00.208544 systemd[1]: Reached target slices.target - Slice Units. Dec 13 13:33:00.208556 systemd[1]: Reached target swap.target - Swaps. Dec 13 13:33:00.208568 systemd[1]: Reached target veritysetup.target - Local Verity Protected Volumes. Dec 13 13:33:00.208580 systemd[1]: Listening on systemd-coredump.socket - Process Core Dump Socket. Dec 13 13:33:00.208594 systemd[1]: Listening on systemd-networkd.socket - Network Service Netlink Socket. Dec 13 13:33:00.208606 systemd[1]: Listening on systemd-udevd-control.socket - udev Control Socket. 
Dec 13 13:33:00.208618 systemd[1]: Listening on systemd-udevd-kernel.socket - udev Kernel Socket. Dec 13 13:33:00.208630 systemd[1]: Listening on systemd-userdbd.socket - User Database Manager Socket. Dec 13 13:33:00.208642 systemd[1]: Mounting dev-hugepages.mount - Huge Pages File System... Dec 13 13:33:00.208656 systemd[1]: Mounting dev-mqueue.mount - POSIX Message Queue File System... Dec 13 13:33:00.208668 systemd[1]: Mounting media.mount - External Media Directory... Dec 13 13:33:00.208680 systemd[1]: proc-xen.mount - /proc/xen was skipped because of an unmet condition check (ConditionVirtualization=xen). Dec 13 13:33:00.208704 systemd[1]: Mounting sys-kernel-debug.mount - Kernel Debug File System... Dec 13 13:33:00.208720 systemd[1]: Mounting sys-kernel-tracing.mount - Kernel Trace File System... Dec 13 13:33:00.208732 systemd[1]: Mounting tmp.mount - Temporary Directory /tmp... Dec 13 13:33:00.208745 systemd[1]: var-lib-machines.mount - Virtual Machine and Container Storage (Compatibility) was skipped because of an unmet condition check (ConditionPathExists=/var/lib/machines.raw). Dec 13 13:33:00.208757 systemd[1]: Reached target machines.target - Containers. Dec 13 13:33:00.208770 systemd[1]: Starting flatcar-tmpfiles.service - Create missing system files... Dec 13 13:33:00.208782 systemd[1]: ignition-delete-config.service - Ignition (delete config) was skipped because no trigger condition checks were met. Dec 13 13:33:00.208794 systemd[1]: Starting kmod-static-nodes.service - Create List of Static Device Nodes... Dec 13 13:33:00.208806 systemd[1]: Starting modprobe@configfs.service - Load Kernel Module configfs... Dec 13 13:33:00.208818 systemd[1]: Starting modprobe@dm_mod.service - Load Kernel Module dm_mod... Dec 13 13:33:00.208832 systemd[1]: Starting modprobe@drm.service - Load Kernel Module drm... Dec 13 13:33:00.208844 systemd[1]: Starting modprobe@efi_pstore.service - Load Kernel Module efi_pstore... Dec 13 13:33:00.208857 systemd[1]: Starting modprobe@fuse.service - Load Kernel Module fuse... Dec 13 13:33:00.208869 systemd[1]: Starting modprobe@loop.service - Load Kernel Module loop... Dec 13 13:33:00.208881 systemd[1]: setup-nsswitch.service - Create /etc/nsswitch.conf was skipped because of an unmet condition check (ConditionPathExists=!/etc/nsswitch.conf). Dec 13 13:33:00.208893 systemd[1]: systemd-fsck-root.service: Deactivated successfully. Dec 13 13:33:00.208905 systemd[1]: Stopped systemd-fsck-root.service - File System Check on Root Device. Dec 13 13:33:00.208917 systemd[1]: systemd-fsck-usr.service: Deactivated successfully. Dec 13 13:33:00.208936 systemd[1]: Stopped systemd-fsck-usr.service. Dec 13 13:33:00.208948 systemd[1]: Starting systemd-journald.service - Journal Service... Dec 13 13:33:00.208960 systemd[1]: Starting systemd-modules-load.service - Load Kernel Modules... Dec 13 13:33:00.208972 kernel: fuse: init (API version 7.39) Dec 13 13:33:00.208983 kernel: loop: module loaded Dec 13 13:33:00.208995 systemd[1]: Starting systemd-network-generator.service - Generate network units from Kernel command line... Dec 13 13:33:00.209007 systemd[1]: Starting systemd-remount-fs.service - Remount Root and Kernel File Systems... Dec 13 13:33:00.209020 systemd[1]: Starting systemd-udev-trigger.service - Coldplug All udev Devices... Dec 13 13:33:00.209032 systemd[1]: verity-setup.service: Deactivated successfully. Dec 13 13:33:00.209047 systemd[1]: Stopped verity-setup.service. 
Dec 13 13:33:00.209059 systemd[1]: xenserver-pv-version.service - Set fake PV driver version for XenServer was skipped because of an unmet condition check (ConditionVirtualization=xen). Dec 13 13:33:00.209088 systemd-journald[1119]: Collecting audit messages is disabled. Dec 13 13:33:00.209111 systemd[1]: Mounted dev-hugepages.mount - Huge Pages File System. Dec 13 13:33:00.209123 systemd-journald[1119]: Journal started Dec 13 13:33:00.209144 systemd-journald[1119]: Runtime Journal (/run/log/journal/af7ff1c7907a4d98aad02e2e490f758c) is 6.0M, max 48.3M, 42.3M free. Dec 13 13:32:59.909409 systemd[1]: Queued start job for default target multi-user.target. Dec 13 13:32:59.931424 systemd[1]: Unnecessary job was removed for dev-vda6.device - /dev/vda6. Dec 13 13:32:59.931921 systemd[1]: systemd-journald.service: Deactivated successfully. Dec 13 13:33:00.213468 systemd[1]: Started systemd-journald.service - Journal Service. Dec 13 13:33:00.214192 systemd[1]: Mounted dev-mqueue.mount - POSIX Message Queue File System. Dec 13 13:33:00.215709 systemd[1]: Mounted media.mount - External Media Directory. Dec 13 13:33:00.216816 systemd[1]: Mounted sys-kernel-debug.mount - Kernel Debug File System. Dec 13 13:33:00.217984 systemd[1]: Mounted sys-kernel-tracing.mount - Kernel Trace File System. Dec 13 13:33:00.219170 systemd[1]: Mounted tmp.mount - Temporary Directory /tmp. Dec 13 13:33:00.220381 systemd[1]: Finished kmod-static-nodes.service - Create List of Static Device Nodes. Dec 13 13:33:00.221897 systemd[1]: modprobe@configfs.service: Deactivated successfully. Dec 13 13:33:00.222111 systemd[1]: Finished modprobe@configfs.service - Load Kernel Module configfs. Dec 13 13:33:00.223627 systemd[1]: modprobe@dm_mod.service: Deactivated successfully. Dec 13 13:33:00.223816 systemd[1]: Finished modprobe@dm_mod.service - Load Kernel Module dm_mod. Dec 13 13:33:00.225292 systemd[1]: modprobe@efi_pstore.service: Deactivated successfully. Dec 13 13:33:00.225604 systemd[1]: Finished modprobe@efi_pstore.service - Load Kernel Module efi_pstore. Dec 13 13:33:00.227715 kernel: ACPI: bus type drm_connector registered Dec 13 13:33:00.227876 systemd[1]: modprobe@fuse.service: Deactivated successfully. Dec 13 13:33:00.228051 systemd[1]: Finished modprobe@fuse.service - Load Kernel Module fuse. Dec 13 13:33:00.229496 systemd[1]: modprobe@drm.service: Deactivated successfully. Dec 13 13:33:00.229666 systemd[1]: Finished modprobe@drm.service - Load Kernel Module drm. Dec 13 13:33:00.231426 systemd[1]: modprobe@loop.service: Deactivated successfully. Dec 13 13:33:00.231596 systemd[1]: Finished modprobe@loop.service - Load Kernel Module loop. Dec 13 13:33:00.233033 systemd[1]: Finished systemd-modules-load.service - Load Kernel Modules. Dec 13 13:33:00.234510 systemd[1]: Finished systemd-network-generator.service - Generate network units from Kernel command line. Dec 13 13:33:00.236068 systemd[1]: Finished systemd-remount-fs.service - Remount Root and Kernel File Systems. Dec 13 13:33:00.253471 systemd[1]: Reached target network-pre.target - Preparation for Network. Dec 13 13:33:00.260876 systemd[1]: Mounting sys-fs-fuse-connections.mount - FUSE Control File System... Dec 13 13:33:00.266319 systemd[1]: Mounting sys-kernel-config.mount - Kernel Configuration File System... Dec 13 13:33:00.267512 systemd[1]: remount-root.service - Remount Root File System was skipped because of an unmet condition check (ConditionPathIsReadWrite=!/). Dec 13 13:33:00.267558 systemd[1]: Reached target local-fs.target - Local File Systems. 
Dec 13 13:33:00.269925 systemd[1]: Listening on systemd-sysext.socket - System Extension Image Management (Varlink). Dec 13 13:33:00.275411 systemd[1]: Starting dracut-shutdown.service - Restore /run/initramfs on shutdown... Dec 13 13:33:00.278147 systemd[1]: Starting ldconfig.service - Rebuild Dynamic Linker Cache... Dec 13 13:33:00.279435 systemd[1]: systemd-binfmt.service - Set Up Additional Binary Formats was skipped because no trigger condition checks were met. Dec 13 13:33:00.282792 systemd[1]: Starting systemd-hwdb-update.service - Rebuild Hardware Database... Dec 13 13:33:00.286320 systemd[1]: Starting systemd-journal-flush.service - Flush Journal to Persistent Storage... Dec 13 13:33:00.287836 systemd[1]: systemd-pstore.service - Platform Persistent Storage Archival was skipped because of an unmet condition check (ConditionDirectoryNotEmpty=/sys/fs/pstore). Dec 13 13:33:00.296841 systemd[1]: Starting systemd-random-seed.service - Load/Save OS Random Seed... Dec 13 13:33:00.298098 systemd[1]: systemd-repart.service - Repartition Root Disk was skipped because no trigger condition checks were met. Dec 13 13:33:00.301492 systemd-journald[1119]: Time spent on flushing to /var/log/journal/af7ff1c7907a4d98aad02e2e490f758c is 20.627ms for 948 entries. Dec 13 13:33:00.301492 systemd-journald[1119]: System Journal (/var/log/journal/af7ff1c7907a4d98aad02e2e490f758c) is 8.0M, max 195.6M, 187.6M free. Dec 13 13:33:00.329034 systemd-journald[1119]: Received client request to flush runtime journal. Dec 13 13:33:00.329069 kernel: loop0: detected capacity change from 0 to 210664 Dec 13 13:33:00.300381 systemd[1]: Starting systemd-sysctl.service - Apply Kernel Variables... Dec 13 13:33:00.304317 systemd[1]: Starting systemd-sysext.service - Merge System Extension Images into /usr/ and /opt/... Dec 13 13:33:00.309978 systemd[1]: Starting systemd-tmpfiles-setup-dev-early.service - Create Static Device Nodes in /dev gracefully... Dec 13 13:33:00.316147 systemd[1]: Finished flatcar-tmpfiles.service - Create missing system files. Dec 13 13:33:00.317988 systemd[1]: Finished systemd-udev-trigger.service - Coldplug All udev Devices. Dec 13 13:33:00.319837 systemd[1]: Mounted sys-fs-fuse-connections.mount - FUSE Control File System. Dec 13 13:33:00.321446 systemd[1]: Mounted sys-kernel-config.mount - Kernel Configuration File System. Dec 13 13:33:00.323267 systemd[1]: Finished dracut-shutdown.service - Restore /run/initramfs on shutdown. Dec 13 13:33:00.327997 systemd[1]: Finished systemd-random-seed.service - Load/Save OS Random Seed. Dec 13 13:33:00.331542 systemd[1]: Finished systemd-journal-flush.service - Flush Journal to Persistent Storage. Dec 13 13:33:00.339668 systemd[1]: Reached target first-boot-complete.target - First Boot Complete. Dec 13 13:33:00.346942 systemd[1]: Starting systemd-machine-id-commit.service - Commit a transient machine-id on disk... Dec 13 13:33:00.350897 systemd[1]: Starting systemd-udev-settle.service - Wait for udev To Complete Device Initialization... Dec 13 13:33:00.352843 systemd[1]: Finished systemd-sysctl.service - Apply Kernel Variables. Dec 13 13:33:00.360708 kernel: squashfs: version 4.0 (2009/01/31) Phillip Lougher Dec 13 13:33:00.362737 systemd-tmpfiles[1170]: ACLs are not supported, ignoring. Dec 13 13:33:00.362753 systemd-tmpfiles[1170]: ACLs are not supported, ignoring. Dec 13 13:33:00.366240 udevadm[1183]: systemd-udev-settle.service is deprecated. Please fix lvm2-activation.service, lvm2-activation-early.service not to pull it in. 
Dec 13 13:33:00.372072 systemd[1]: Finished systemd-tmpfiles-setup-dev-early.service - Create Static Device Nodes in /dev gracefully. Dec 13 13:33:00.383505 systemd[1]: Starting systemd-sysusers.service - Create System Users... Dec 13 13:33:00.385723 kernel: loop1: detected capacity change from 0 to 138184 Dec 13 13:33:00.387894 systemd[1]: etc-machine\x2did.mount: Deactivated successfully. Dec 13 13:33:00.389033 systemd[1]: Finished systemd-machine-id-commit.service - Commit a transient machine-id on disk. Dec 13 13:33:00.411873 systemd[1]: Finished systemd-sysusers.service - Create System Users. Dec 13 13:33:00.419050 kernel: loop2: detected capacity change from 0 to 141000 Dec 13 13:33:00.421310 systemd[1]: Starting systemd-tmpfiles-setup-dev.service - Create Static Device Nodes in /dev... Dec 13 13:33:00.438646 systemd-tmpfiles[1193]: ACLs are not supported, ignoring. Dec 13 13:33:00.439052 systemd-tmpfiles[1193]: ACLs are not supported, ignoring. Dec 13 13:33:00.445472 systemd[1]: Finished systemd-tmpfiles-setup-dev.service - Create Static Device Nodes in /dev. Dec 13 13:33:00.467733 kernel: loop3: detected capacity change from 0 to 210664 Dec 13 13:33:00.476768 kernel: loop4: detected capacity change from 0 to 138184 Dec 13 13:33:00.489710 kernel: loop5: detected capacity change from 0 to 141000 Dec 13 13:33:00.500861 (sd-merge)[1197]: Using extensions 'containerd-flatcar', 'docker-flatcar', 'kubernetes'. Dec 13 13:33:00.501463 (sd-merge)[1197]: Merged extensions into '/usr'. Dec 13 13:33:00.506008 systemd[1]: Reloading requested from client PID 1169 ('systemd-sysext') (unit systemd-sysext.service)... Dec 13 13:33:00.506023 systemd[1]: Reloading... Dec 13 13:33:00.570887 zram_generator::config[1223]: No configuration found. Dec 13 13:33:00.630417 ldconfig[1164]: /sbin/ldconfig: /usr/lib/ld.so.conf is not an ELF file - it has the wrong magic bytes at the start. Dec 13 13:33:00.723061 systemd[1]: /usr/lib/systemd/system/docker.socket:6: ListenStream= references a path below legacy directory /var/run/, updating /var/run/docker.sock → /run/docker.sock; please update the unit file accordingly. Dec 13 13:33:00.772069 systemd[1]: Reloading finished in 265 ms. Dec 13 13:33:00.807299 systemd[1]: Finished ldconfig.service - Rebuild Dynamic Linker Cache. Dec 13 13:33:00.809006 systemd[1]: Finished systemd-sysext.service - Merge System Extension Images into /usr/ and /opt/. Dec 13 13:33:00.823952 systemd[1]: Starting ensure-sysext.service... Dec 13 13:33:00.826472 systemd[1]: Starting systemd-tmpfiles-setup.service - Create System Files and Directories... Dec 13 13:33:00.832429 systemd[1]: Reloading requested from client PID 1260 ('systemctl') (unit ensure-sysext.service)... Dec 13 13:33:00.832448 systemd[1]: Reloading... Dec 13 13:33:00.849978 systemd-tmpfiles[1261]: /usr/lib/tmpfiles.d/provision.conf:20: Duplicate line for path "/root", ignoring. Dec 13 13:33:00.850619 systemd-tmpfiles[1261]: /usr/lib/tmpfiles.d/systemd-flatcar.conf:6: Duplicate line for path "/var/log/journal", ignoring. Dec 13 13:33:00.851741 systemd-tmpfiles[1261]: /usr/lib/tmpfiles.d/systemd.conf:29: Duplicate line for path "/var/lib/systemd", ignoring. Dec 13 13:33:00.852128 systemd-tmpfiles[1261]: ACLs are not supported, ignoring. Dec 13 13:33:00.852333 systemd-tmpfiles[1261]: ACLs are not supported, ignoring. Dec 13 13:33:00.907728 zram_generator::config[1288]: No configuration found. Dec 13 13:33:00.914725 systemd-tmpfiles[1261]: Detected autofs mount point /boot during canonicalization of boot. 
Dec 13 13:33:00.914739 systemd-tmpfiles[1261]: Skipping /boot Dec 13 13:33:00.927169 systemd-tmpfiles[1261]: Detected autofs mount point /boot during canonicalization of boot. Dec 13 13:33:00.927185 systemd-tmpfiles[1261]: Skipping /boot Dec 13 13:33:00.993610 systemd[1]: /usr/lib/systemd/system/docker.socket:6: ListenStream= references a path below legacy directory /var/run/, updating /var/run/docker.sock → /run/docker.sock; please update the unit file accordingly. Dec 13 13:33:01.042249 systemd[1]: Reloading finished in 209 ms. Dec 13 13:33:01.069720 systemd[1]: Finished systemd-tmpfiles-setup.service - Create System Files and Directories. Dec 13 13:33:01.083325 systemd[1]: Finished systemd-hwdb-update.service - Rebuild Hardware Database. Dec 13 13:33:01.087177 systemd[1]: Starting audit-rules.service - Load Audit Rules... Dec 13 13:33:01.089966 systemd[1]: Starting clean-ca-certificates.service - Clean up broken links in /etc/ssl/certs... Dec 13 13:33:01.092407 systemd[1]: Starting systemd-journal-catalog-update.service - Rebuild Journal Catalog... Dec 13 13:33:01.096246 systemd[1]: Starting systemd-resolved.service - Network Name Resolution... Dec 13 13:33:01.099884 systemd[1]: Starting systemd-udevd.service - Rule-based Manager for Device Events and Files... Dec 13 13:33:01.103026 systemd[1]: Starting systemd-update-utmp.service - Record System Boot/Shutdown in UTMP... Dec 13 13:33:01.113400 systemd[1]: Starting systemd-userdbd.service - User Database Manager... Dec 13 13:33:01.119547 systemd[1]: Finished systemd-update-utmp.service - Record System Boot/Shutdown in UTMP. Dec 13 13:33:01.124377 systemd[1]: proc-xen.mount - /proc/xen was skipped because of an unmet condition check (ConditionVirtualization=xen). Dec 13 13:33:01.124557 systemd[1]: ignition-delete-config.service - Ignition (delete config) was skipped because no trigger condition checks were met. Dec 13 13:33:01.125897 systemd[1]: Starting modprobe@dm_mod.service - Load Kernel Module dm_mod... Dec 13 13:33:01.131758 systemd[1]: Starting modprobe@efi_pstore.service - Load Kernel Module efi_pstore... Dec 13 13:33:01.138524 systemd[1]: Starting modprobe@loop.service - Load Kernel Module loop... Dec 13 13:33:01.139782 systemd[1]: systemd-binfmt.service - Set Up Additional Binary Formats was skipped because no trigger condition checks were met. Dec 13 13:33:01.139883 systemd[1]: xenserver-pv-version.service - Set fake PV driver version for XenServer was skipped because of an unmet condition check (ConditionVirtualization=xen). Dec 13 13:33:01.140062 systemd-udevd[1332]: Using default interface naming scheme 'v255'. Dec 13 13:33:01.140937 systemd[1]: Finished systemd-journal-catalog-update.service - Rebuild Journal Catalog. Dec 13 13:33:01.143220 systemd[1]: modprobe@dm_mod.service: Deactivated successfully. Dec 13 13:33:01.143393 systemd[1]: Finished modprobe@dm_mod.service - Load Kernel Module dm_mod. Dec 13 13:33:01.145117 systemd[1]: modprobe@efi_pstore.service: Deactivated successfully. Dec 13 13:33:01.145324 systemd[1]: Finished modprobe@efi_pstore.service - Load Kernel Module efi_pstore. Dec 13 13:33:01.147329 systemd[1]: modprobe@loop.service: Deactivated successfully. Dec 13 13:33:01.147506 systemd[1]: Finished modprobe@loop.service - Load Kernel Module loop. Dec 13 13:33:01.152543 systemd[1]: systemd-pstore.service - Platform Persistent Storage Archival was skipped because of an unmet condition check (ConditionDirectoryNotEmpty=/sys/fs/pstore). 
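
The "(sd-merge)" entries above are systemd-sysext overlaying the containerd-flatcar, docker-flatcar and kubernetes images onto /usr (hence the matching loop0/loop3, loop1/loop4, loop2/loop5 capacity pairs as each image is mounted twice). Per the systemd-sysext documentation, each merged extension exposes an extension-release file under /usr/lib/extension-release.d/; a small sketch, untested against this system, for listing what is currently merged:

    import os

    RELEASE_DIR = "/usr/lib/extension-release.d"  # documented sysext layout

    def merged_extensions():
        """Parse extension-release.<NAME> files left visible by systemd-sysext."""
        result = {}
        if not os.path.isdir(RELEASE_DIR):
            return result  # nothing merged, or sysext not in use
        for fname in os.listdir(RELEASE_DIR):
            if not fname.startswith("extension-release."):
                continue
            name = fname.removeprefix("extension-release.")
            fields = {}
            with open(os.path.join(RELEASE_DIR, fname)) as f:
                for line in f:
                    line = line.strip()
                    if line and not line.startswith("#") and "=" in line:
                        key, _, value = line.partition("=")
                        fields[key] = value.strip('"')
            result[name] = fields  # e.g. {"ID": "flatcar", "SYSEXT_LEVEL": "1.0"}
        return result
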
Dec 13 13:33:01.152765 systemd[1]: systemd-repart.service - Repartition Root Disk was skipped because no trigger condition checks were met. Dec 13 13:33:01.158918 systemd[1]: Starting systemd-update-done.service - Update is Completed... Dec 13 13:33:01.162833 systemd[1]: proc-xen.mount - /proc/xen was skipped because of an unmet condition check (ConditionVirtualization=xen). Dec 13 13:33:01.163045 systemd[1]: ignition-delete-config.service - Ignition (delete config) was skipped because no trigger condition checks were met. Dec 13 13:33:01.164521 systemd[1]: Starting modprobe@dm_mod.service - Load Kernel Module dm_mod... Dec 13 13:33:01.166173 augenrules[1362]: No rules Dec 13 13:33:01.166885 systemd[1]: Starting modprobe@drm.service - Load Kernel Module drm... Dec 13 13:33:01.172848 systemd[1]: Starting modprobe@efi_pstore.service - Load Kernel Module efi_pstore... Dec 13 13:33:01.175182 systemd[1]: Starting modprobe@loop.service - Load Kernel Module loop... Dec 13 13:33:01.176424 systemd[1]: systemd-binfmt.service - Set Up Additional Binary Formats was skipped because no trigger condition checks were met. Dec 13 13:33:01.176558 systemd[1]: xenserver-pv-version.service - Set fake PV driver version for XenServer was skipped because of an unmet condition check (ConditionVirtualization=xen). Dec 13 13:33:01.177252 systemd[1]: Started systemd-userdbd.service - User Database Manager. Dec 13 13:33:01.180618 systemd[1]: audit-rules.service: Deactivated successfully. Dec 13 13:33:01.180877 systemd[1]: Finished audit-rules.service - Load Audit Rules. Dec 13 13:33:01.183991 systemd[1]: Finished systemd-update-done.service - Update is Completed. Dec 13 13:33:01.185610 systemd[1]: Started systemd-udevd.service - Rule-based Manager for Device Events and Files. Dec 13 13:33:01.188134 systemd[1]: modprobe@dm_mod.service: Deactivated successfully. Dec 13 13:33:01.188940 systemd[1]: Finished modprobe@dm_mod.service - Load Kernel Module dm_mod. Dec 13 13:33:01.190597 systemd[1]: modprobe@drm.service: Deactivated successfully. Dec 13 13:33:01.191451 systemd[1]: Finished modprobe@drm.service - Load Kernel Module drm. Dec 13 13:33:01.194631 systemd[1]: modprobe@efi_pstore.service: Deactivated successfully. Dec 13 13:33:01.195354 systemd[1]: Finished modprobe@efi_pstore.service - Load Kernel Module efi_pstore. Dec 13 13:33:01.197276 systemd[1]: modprobe@loop.service: Deactivated successfully. Dec 13 13:33:01.197538 systemd[1]: Finished modprobe@loop.service - Load Kernel Module loop. Dec 13 13:33:01.200473 systemd[1]: Finished clean-ca-certificates.service - Clean up broken links in /etc/ssl/certs. Dec 13 13:33:01.212026 systemd[1]: Finished ensure-sysext.service. Dec 13 13:33:01.230877 systemd[1]: Starting systemd-networkd.service - Network Configuration... Dec 13 13:33:01.233287 systemd[1]: systemd-pstore.service - Platform Persistent Storage Archival was skipped because of an unmet condition check (ConditionDirectoryNotEmpty=/sys/fs/pstore). Dec 13 13:33:01.233351 systemd[1]: systemd-repart.service - Repartition Root Disk was skipped because no trigger condition checks were met. Dec 13 13:33:01.235613 systemd[1]: Starting systemd-timesyncd.service - Network Time Synchronization... Dec 13 13:33:01.237348 systemd[1]: update-ca-certificates.service - Update CA bundle at /etc/ssl/certs/ca-certificates.crt was skipped because of an unmet condition check (ConditionPathIsSymbolicLink=!/etc/ssl/certs/ca-certificates.crt). 
Dec 13 13:33:01.240718 kernel: BTRFS info: devid 1 device path /dev/mapper/usr changed to /dev/dm-0 scanned by (udev-worker) (1391) Dec 13 13:33:01.242714 kernel: BTRFS info: devid 1 device path /dev/dm-0 changed to /dev/mapper/usr scanned by (udev-worker) (1391) Dec 13 13:33:01.247720 kernel: BTRFS warning: duplicate device /dev/vda3 devid 1 generation 36 scanned by (udev-worker) (1378) Dec 13 13:33:01.262247 systemd-resolved[1330]: Positive Trust Anchors: Dec 13 13:33:01.262274 systemd-resolved[1330]: . IN DS 20326 8 2 e06d44b80b8f1d39a95c0b0d7c65d08458e880409bbc683457104237c7f8ec8d Dec 13 13:33:01.262310 systemd-resolved[1330]: Negative trust anchors: home.arpa 10.in-addr.arpa 16.172.in-addr.arpa 17.172.in-addr.arpa 18.172.in-addr.arpa 19.172.in-addr.arpa 20.172.in-addr.arpa 21.172.in-addr.arpa 22.172.in-addr.arpa 23.172.in-addr.arpa 24.172.in-addr.arpa 25.172.in-addr.arpa 26.172.in-addr.arpa 27.172.in-addr.arpa 28.172.in-addr.arpa 29.172.in-addr.arpa 30.172.in-addr.arpa 31.172.in-addr.arpa 170.0.0.192.in-addr.arpa 171.0.0.192.in-addr.arpa 168.192.in-addr.arpa d.f.ip6.arpa ipv4only.arpa resolver.arpa corp home internal intranet lan local private test Dec 13 13:33:01.269459 systemd-resolved[1330]: Defaulting to hostname 'linux'. Dec 13 13:33:01.273779 systemd[1]: Started systemd-resolved.service - Network Name Resolution. Dec 13 13:33:01.276815 systemd[1]: Reached target nss-lookup.target - Host and Network Name Lookups. Dec 13 13:33:01.279505 systemd[1]: Condition check resulted in dev-ttyS0.device - /dev/ttyS0 being skipped. Dec 13 13:33:01.307081 systemd-networkd[1402]: lo: Link UP Dec 13 13:33:01.307094 systemd-networkd[1402]: lo: Gained carrier Dec 13 13:33:01.309439 systemd-networkd[1402]: Enumeration completed Dec 13 13:33:01.309546 systemd[1]: Started systemd-networkd.service - Network Configuration. Dec 13 13:33:01.309922 systemd-networkd[1402]: eth0: found matching network '/usr/lib/systemd/network/zz-default.network', based on potentially unpredictable interface name. Dec 13 13:33:01.309932 systemd-networkd[1402]: eth0: Configuring with /usr/lib/systemd/network/zz-default.network. Dec 13 13:33:01.311092 systemd-networkd[1402]: eth0: Link UP Dec 13 13:33:01.311102 systemd-networkd[1402]: eth0: Gained carrier Dec 13 13:33:01.311112 systemd[1]: Reached target network.target - Network. Dec 13 13:33:01.311114 systemd-networkd[1402]: eth0: found matching network '/usr/lib/systemd/network/zz-default.network', based on potentially unpredictable interface name. Dec 13 13:33:01.315943 systemd[1]: Starting systemd-networkd-wait-online.service - Wait for Network to be Configured... Dec 13 13:33:01.320139 systemd-networkd[1402]: eth0: found matching network '/usr/lib/systemd/network/zz-default.network', based on potentially unpredictable interface name. Dec 13 13:33:01.323608 systemd[1]: Found device dev-disk-by\x2dlabel-OEM.device - /dev/disk/by-label/OEM. Dec 13 13:33:01.325036 systemd[1]: Started systemd-timesyncd.service - Network Time Synchronization. Dec 13 13:33:01.326442 systemd[1]: Reached target time-set.target - System Time Set. Dec 13 13:33:01.328957 systemd-networkd[1402]: eth0: DHCPv4 address 10.0.0.150/16, gateway 10.0.0.1 acquired from 10.0.0.1 Dec 13 13:33:01.330649 systemd-timesyncd[1404]: Network configuration changed, trying to establish connection. Dec 13 13:33:02.212074 systemd-timesyncd[1404]: Contacted time server 10.0.0.1:123 (10.0.0.1). Dec 13 13:33:02.212115 systemd-timesyncd[1404]: Initial clock synchronization to Fri 2024-12-13 13:33:02.211983 UTC. 
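
The timestamp jump above is systemd-timesyncd stepping the clock on its first successful sync against 10.0.0.1: the entry before the sync is stamped 13:33:01.330649, timesyncd then sets the clock to 13:33:02.211983 UTC, and (as the next entry shows) systemd-resolved flushes its caches in response. Estimating the step that was applied, a rough figure since the two stamps do not mark the exact same instant:

    from datetime import datetime

    fmt = "%H:%M:%S.%f"
    before = datetime.strptime("13:33:01.330649", fmt)  # last entry on the unsynced clock
    after = datetime.strptime("13:33:02.211983", fmt)   # value set by systemd-timesyncd

    print(f"clock stepped forward by ~{(after - before).total_seconds():.3f} s")  # ~0.881 s
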
Dec 13 13:33:02.212445 systemd-resolved[1330]: Clock change detected. Flushing caches. Dec 13 13:33:02.214521 systemd[1]: Starting systemd-fsck@dev-disk-by\x2dlabel-OEM.service - File System Check on /dev/disk/by-label/OEM... Dec 13 13:33:02.226361 kernel: input: Power Button as /devices/LNXSYSTM:00/LNXPWRBN:00/input/input2 Dec 13 13:33:02.229933 systemd[1]: Finished systemd-fsck@dev-disk-by\x2dlabel-OEM.service - File System Check on /dev/disk/by-label/OEM. Dec 13 13:33:02.234326 kernel: ACPI: button: Power Button [PWRF] Dec 13 13:33:02.241030 kernel: i801_smbus 0000:00:1f.3: SMBus using PCI interrupt Dec 13 13:33:02.242108 kernel: i2c i2c-0: 1/1 memory slots populated (from DMI) Dec 13 13:33:02.242396 kernel: i2c i2c-0: Memory type 0x07 not supported yet, not instantiating SPD Dec 13 13:33:02.242556 kernel: input: ImExPS/2 Generic Explorer Mouse as /devices/platform/i8042/serio1/input/input3 Dec 13 13:33:02.281528 systemd[1]: Starting systemd-vconsole-setup.service - Virtual Console Setup... Dec 13 13:33:02.337433 kernel: mousedev: PS/2 mouse device common for all mice Dec 13 13:33:02.350423 kernel: kvm_amd: TSC scaling supported Dec 13 13:33:02.350459 kernel: kvm_amd: Nested Virtualization enabled Dec 13 13:33:02.350473 kernel: kvm_amd: Nested Paging enabled Dec 13 13:33:02.350485 kernel: kvm_amd: LBR virtualization supported Dec 13 13:33:02.351776 kernel: kvm_amd: Virtual VMLOAD VMSAVE supported Dec 13 13:33:02.351965 kernel: kvm_amd: Virtual GIF supported Dec 13 13:33:02.375352 kernel: EDAC MC: Ver: 3.0.0 Dec 13 13:33:02.411785 systemd[1]: Finished systemd-udev-settle.service - Wait for udev To Complete Device Initialization. Dec 13 13:33:02.422488 systemd[1]: Starting lvm2-activation-early.service - Activation of LVM2 logical volumes... Dec 13 13:33:02.424155 systemd[1]: Finished systemd-vconsole-setup.service - Virtual Console Setup. Dec 13 13:33:02.431323 lvm[1429]: WARNING: Failed to connect to lvmetad. Falling back to device scanning. Dec 13 13:33:02.467289 systemd[1]: Finished lvm2-activation-early.service - Activation of LVM2 logical volumes. Dec 13 13:33:02.468834 systemd[1]: Reached target cryptsetup.target - Local Encrypted Volumes. Dec 13 13:33:02.469965 systemd[1]: Reached target sysinit.target - System Initialization. Dec 13 13:33:02.471130 systemd[1]: Started motdgen.path - Watch for update engine configuration changes. Dec 13 13:33:02.472420 systemd[1]: Started user-cloudinit@var-lib-flatcar\x2dinstall-user_data.path - Watch for a cloud-config at /var/lib/flatcar-install/user_data. Dec 13 13:33:02.473872 systemd[1]: Started logrotate.timer - Daily rotation of log files. Dec 13 13:33:02.475044 systemd[1]: Started mdadm.timer - Weekly check for MD array's redundancy information.. Dec 13 13:33:02.476332 systemd[1]: Started systemd-tmpfiles-clean.timer - Daily Cleanup of Temporary Directories. Dec 13 13:33:02.477566 systemd[1]: update-engine-stub.timer - Update Engine Stub Timer was skipped because of an unmet condition check (ConditionPathExists=/usr/.noupdate). Dec 13 13:33:02.477592 systemd[1]: Reached target paths.target - Path Units. Dec 13 13:33:02.478492 systemd[1]: Reached target timers.target - Timer Units. Dec 13 13:33:02.480225 systemd[1]: Listening on dbus.socket - D-Bus System Message Bus Socket. Dec 13 13:33:02.483385 systemd[1]: Starting docker.socket - Docker Socket for the API... Dec 13 13:33:02.490986 systemd[1]: Listening on sshd.socket - OpenSSH Server Socket. 
Dec 13 13:33:02.493426 systemd[1]: Starting lvm2-activation.service - Activation of LVM2 logical volumes... Dec 13 13:33:02.495182 systemd[1]: Listening on docker.socket - Docker Socket for the API. Dec 13 13:33:02.496393 systemd[1]: Reached target sockets.target - Socket Units. Dec 13 13:33:02.497396 systemd[1]: Reached target basic.target - Basic System. Dec 13 13:33:02.498400 systemd[1]: addon-config@oem.service - Configure Addon /oem was skipped because no trigger condition checks were met. Dec 13 13:33:02.498442 systemd[1]: addon-run@oem.service - Run Addon /oem was skipped because no trigger condition checks were met. Dec 13 13:33:02.499655 systemd[1]: Starting containerd.service - containerd container runtime... Dec 13 13:33:02.501939 systemd[1]: Starting dbus.service - D-Bus System Message Bus... Dec 13 13:33:02.505336 lvm[1434]: WARNING: Failed to connect to lvmetad. Falling back to device scanning. Dec 13 13:33:02.505827 systemd[1]: Starting enable-oem-cloudinit.service - Enable cloudinit... Dec 13 13:33:02.508545 systemd[1]: Starting extend-filesystems.service - Extend Filesystems... Dec 13 13:33:02.509553 systemd[1]: flatcar-setup-environment.service - Modifies /etc/environment for CoreOS was skipped because of an unmet condition check (ConditionPathExists=/oem/bin/flatcar-setup-environment). Dec 13 13:33:02.512021 systemd[1]: Starting motdgen.service - Generate /run/flatcar/motd... Dec 13 13:33:02.514797 systemd[1]: Starting prepare-helm.service - Unpack helm to /opt/bin... Dec 13 13:33:02.518861 jq[1437]: false Dec 13 13:33:02.521508 systemd[1]: Starting ssh-key-proc-cmdline.service - Install an ssh key from /proc/cmdline... Dec 13 13:33:02.526538 systemd[1]: Starting sshd-keygen.service - Generate sshd host keys... Dec 13 13:33:02.534463 systemd[1]: Starting systemd-logind.service - User Login Management... Dec 13 13:33:02.536120 systemd[1]: tcsd.service - TCG Core Services Daemon was skipped because of an unmet condition check (ConditionPathExists=/dev/tpm0). Dec 13 13:33:02.536639 systemd[1]: cgroup compatibility translation between legacy and unified hierarchy settings activated. See cgroup-compat debug messages for details. Dec 13 13:33:02.538576 dbus-daemon[1436]: [system] SELinux support is enabled Dec 13 13:33:02.541157 systemd[1]: Starting update-engine.service - Update Engine... Dec 13 13:33:02.544143 extend-filesystems[1438]: Found loop3 Dec 13 13:33:02.546363 extend-filesystems[1438]: Found loop4 Dec 13 13:33:02.546363 extend-filesystems[1438]: Found loop5 Dec 13 13:33:02.546363 extend-filesystems[1438]: Found sr0 Dec 13 13:33:02.546363 extend-filesystems[1438]: Found vda Dec 13 13:33:02.546363 extend-filesystems[1438]: Found vda1 Dec 13 13:33:02.544385 systemd[1]: Starting update-ssh-keys-after-ignition.service - Run update-ssh-keys once after Ignition... Dec 13 13:33:02.551203 extend-filesystems[1438]: Found vda2 Dec 13 13:33:02.551203 extend-filesystems[1438]: Found vda3 Dec 13 13:33:02.551203 extend-filesystems[1438]: Found usr Dec 13 13:33:02.551203 extend-filesystems[1438]: Found vda4 Dec 13 13:33:02.551203 extend-filesystems[1438]: Found vda6 Dec 13 13:33:02.551203 extend-filesystems[1438]: Found vda7 Dec 13 13:33:02.551203 extend-filesystems[1438]: Found vda9 Dec 13 13:33:02.551203 extend-filesystems[1438]: Checking size of /dev/vda9 Dec 13 13:33:02.546516 systemd[1]: Started dbus.service - D-Bus System Message Bus. 
Dec 13 13:33:02.560209 jq[1453]: true Dec 13 13:33:02.564280 systemd[1]: Finished lvm2-activation.service - Activation of LVM2 logical volumes. Dec 13 13:33:02.566069 update_engine[1448]: I20241213 13:33:02.565999 1448 main.cc:92] Flatcar Update Engine starting Dec 13 13:33:02.567814 update_engine[1448]: I20241213 13:33:02.567677 1448 update_check_scheduler.cc:74] Next update check in 4m12s Dec 13 13:33:02.575746 systemd[1]: enable-oem-cloudinit.service: Skipped due to 'exec-condition'. Dec 13 13:33:02.575958 systemd[1]: Condition check resulted in enable-oem-cloudinit.service - Enable cloudinit being skipped. Dec 13 13:33:02.576348 systemd[1]: motdgen.service: Deactivated successfully. Dec 13 13:33:02.576545 systemd[1]: Finished motdgen.service - Generate /run/flatcar/motd. Dec 13 13:33:02.578449 systemd[1]: ssh-key-proc-cmdline.service: Deactivated successfully. Dec 13 13:33:02.578642 systemd[1]: Finished ssh-key-proc-cmdline.service - Install an ssh key from /proc/cmdline. Dec 13 13:33:02.592637 (ntainerd)[1460]: containerd.service: Referenced but unset environment variable evaluates to an empty string: TORCX_IMAGEDIR, TORCX_UNPACKDIR Dec 13 13:33:02.600628 systemd[1]: system-cloudinit@usr-share-oem-cloud\x2dconfig.yml.service - Load cloud-config from /usr/share/oem/cloud-config.yml was skipped because of an unmet condition check (ConditionFileNotEmpty=/usr/share/oem/cloud-config.yml). Dec 13 13:33:02.600915 systemd[1]: Reached target system-config.target - Load system-provided cloud configs. Dec 13 13:33:02.602007 tar[1458]: linux-amd64/helm Dec 13 13:33:02.602201 jq[1459]: true Dec 13 13:33:02.602443 systemd[1]: user-cloudinit-proc-cmdline.service - Load cloud-config from url defined in /proc/cmdline was skipped because of an unmet condition check (ConditionKernelCommandLine=cloud-config-url). Dec 13 13:33:02.602468 systemd[1]: Reached target user-config.target - Load user-provided cloud configs. Dec 13 13:33:02.603027 extend-filesystems[1438]: Resized partition /dev/vda9 Dec 13 13:33:02.608103 extend-filesystems[1472]: resize2fs 1.47.1 (20-May-2024) Dec 13 13:33:02.616403 kernel: EXT4-fs (vda9): resizing filesystem from 553472 to 1864699 blocks Dec 13 13:33:02.618082 systemd[1]: Started update-engine.service - Update Engine. Dec 13 13:33:02.620328 kernel: BTRFS warning: duplicate device /dev/vda3 devid 1 generation 36 scanned by (udev-worker) (1384) Dec 13 13:33:02.625590 systemd-logind[1445]: Watching system buttons on /dev/input/event1 (Power Button) Dec 13 13:33:02.625622 systemd-logind[1445]: Watching system buttons on /dev/input/event0 (AT Translated Set 2 keyboard) Dec 13 13:33:02.633166 systemd-logind[1445]: New seat seat0. Dec 13 13:33:02.633522 systemd[1]: Started locksmithd.service - Cluster reboot manager. Dec 13 13:33:02.647025 systemd[1]: Started systemd-logind.service - User Login Management. Dec 13 13:33:02.653342 kernel: EXT4-fs (vda9): resized filesystem to 1864699 Dec 13 13:33:02.691809 extend-filesystems[1472]: Filesystem at /dev/vda9 is mounted on /; on-line resizing required Dec 13 13:33:02.691809 extend-filesystems[1472]: old_desc_blocks = 1, new_desc_blocks = 1 Dec 13 13:33:02.691809 extend-filesystems[1472]: The filesystem on /dev/vda9 is now 1864699 (4k) blocks long. Dec 13 13:33:02.704246 extend-filesystems[1438]: Resized filesystem in /dev/vda9 Dec 13 13:33:02.692593 systemd[1]: extend-filesystems.service: Deactivated successfully. Dec 13 13:33:02.692936 systemd[1]: Finished extend-filesystems.service - Extend Filesystems. 
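
The extend-filesystems output above records the root ext4 filesystem growing on-line from 553472 to 1864699 blocks at 4 KiB per block, i.e. from about 2.1 GiB to about 7.1 GiB. The arithmetic, worked from the numbers in the resize2fs output:

    BLOCK = 4096             # "(4k) blocks" per the resize2fs output above
    OLD, NEW = 553472, 1864699

    gib = lambda blocks: blocks * BLOCK / 2**30
    print(f"before: {gib(OLD):.2f} GiB, after: {gib(NEW):.2f} GiB, "
          f"grown by {gib(NEW - OLD):.2f} GiB")
    # -> before: 2.11 GiB, after: 7.11 GiB, grown by 5.00 GiB
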
Dec 13 13:33:02.729039 locksmithd[1475]: locksmithd starting currentOperation="UPDATE_STATUS_IDLE" strategy="reboot"
Dec 13 13:33:02.786914 bash[1489]: Updated "/home/core/.ssh/authorized_keys"
Dec 13 13:33:02.788593 systemd[1]: Finished update-ssh-keys-after-ignition.service - Run update-ssh-keys once after Ignition.
Dec 13 13:33:02.790908 systemd[1]: sshkeys.service was skipped because no trigger condition checks were met.
Dec 13 13:33:02.831384 containerd[1460]: time="2024-12-13T13:33:02.830930856Z" level=info msg="starting containerd" revision=9b2ad7760328148397346d10c7b2004271249db4 version=v1.7.23
Dec 13 13:33:02.853995 containerd[1460]: time="2024-12-13T13:33:02.853893247Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.aufs\"..." type=io.containerd.snapshotter.v1
Dec 13 13:33:02.855672 containerd[1460]: time="2024-12-13T13:33:02.855616900Z" level=info msg="skip loading plugin \"io.containerd.snapshotter.v1.aufs\"..." error="aufs is not supported (modprobe aufs failed: exit status 1 \"modprobe: FATAL: Module aufs not found in directory /lib/modules/6.6.65-flatcar\\n\"): skip plugin" type=io.containerd.snapshotter.v1
Dec 13 13:33:02.855672 containerd[1460]: time="2024-12-13T13:33:02.855658398Z" level=info msg="loading plugin \"io.containerd.event.v1.exchange\"..." type=io.containerd.event.v1
Dec 13 13:33:02.855672 containerd[1460]: time="2024-12-13T13:33:02.855677243Z" level=info msg="loading plugin \"io.containerd.internal.v1.opt\"..." type=io.containerd.internal.v1
Dec 13 13:33:02.855885 containerd[1460]: time="2024-12-13T13:33:02.855865587Z" level=info msg="loading plugin \"io.containerd.warning.v1.deprecations\"..." type=io.containerd.warning.v1
Dec 13 13:33:02.855908 containerd[1460]: time="2024-12-13T13:33:02.855886075Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.blockfile\"..." type=io.containerd.snapshotter.v1
Dec 13 13:33:02.856027 containerd[1460]: time="2024-12-13T13:33:02.855953993Z" level=info msg="skip loading plugin \"io.containerd.snapshotter.v1.blockfile\"..." error="no scratch file generator: skip plugin" type=io.containerd.snapshotter.v1
Dec 13 13:33:02.856027 containerd[1460]: time="2024-12-13T13:33:02.855971756Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.btrfs\"..." type=io.containerd.snapshotter.v1
Dec 13 13:33:02.856188 containerd[1460]: time="2024-12-13T13:33:02.856167303Z" level=info msg="skip loading plugin \"io.containerd.snapshotter.v1.btrfs\"..." error="path /var/lib/containerd/io.containerd.snapshotter.v1.btrfs (ext4) must be a btrfs filesystem to be used with the btrfs snapshotter: skip plugin" type=io.containerd.snapshotter.v1
Dec 13 13:33:02.856188 containerd[1460]: time="2024-12-13T13:33:02.856184775Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.devmapper\"..." type=io.containerd.snapshotter.v1
Dec 13 13:33:02.856240 containerd[1460]: time="2024-12-13T13:33:02.856198451Z" level=info msg="skip loading plugin \"io.containerd.snapshotter.v1.devmapper\"..." error="devmapper not configured: skip plugin" type=io.containerd.snapshotter.v1
Dec 13 13:33:02.856240 containerd[1460]: time="2024-12-13T13:33:02.856208249Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.native\"..." type=io.containerd.snapshotter.v1
Dec 13 13:33:02.856378 containerd[1460]: time="2024-12-13T13:33:02.856306484Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.overlayfs\"..." type=io.containerd.snapshotter.v1
Dec 13 13:33:02.856624 containerd[1460]: time="2024-12-13T13:33:02.856559308Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.zfs\"..." type=io.containerd.snapshotter.v1
Dec 13 13:33:02.856699 containerd[1460]: time="2024-12-13T13:33:02.856680595Z" level=info msg="skip loading plugin \"io.containerd.snapshotter.v1.zfs\"..." error="path /var/lib/containerd/io.containerd.snapshotter.v1.zfs must be a zfs filesystem to be used with the zfs snapshotter: skip plugin" type=io.containerd.snapshotter.v1
Dec 13 13:33:02.856721 containerd[1460]: time="2024-12-13T13:33:02.856697657Z" level=info msg="loading plugin \"io.containerd.content.v1.content\"..." type=io.containerd.content.v1
Dec 13 13:33:02.856848 containerd[1460]: time="2024-12-13T13:33:02.856801663Z" level=info msg="loading plugin \"io.containerd.metadata.v1.bolt\"..." type=io.containerd.metadata.v1
Dec 13 13:33:02.856875 containerd[1460]: time="2024-12-13T13:33:02.856863939Z" level=info msg="metadata content store policy set" policy=shared
Dec 13 13:33:02.952704 sshd_keygen[1454]: ssh-keygen: generating new host keys: RSA ECDSA ED25519
Dec 13 13:33:02.976473 systemd[1]: Finished sshd-keygen.service - Generate sshd host keys.
Dec 13 13:33:02.986534 systemd[1]: Starting issuegen.service - Generate /run/issue...
Dec 13 13:33:02.994551 systemd[1]: issuegen.service: Deactivated successfully.
Dec 13 13:33:02.994769 systemd[1]: Finished issuegen.service - Generate /run/issue.
Dec 13 13:33:02.997454 systemd[1]: Starting systemd-user-sessions.service - Permit User Sessions...
Dec 13 13:33:03.007962 containerd[1460]: time="2024-12-13T13:33:03.007899619Z" level=info msg="loading plugin \"io.containerd.gc.v1.scheduler\"..." type=io.containerd.gc.v1
Dec 13 13:33:03.007962 containerd[1460]: time="2024-12-13T13:33:03.007962116Z" level=info msg="loading plugin \"io.containerd.differ.v1.walking\"..." type=io.containerd.differ.v1
Dec 13 13:33:03.008085 containerd[1460]: time="2024-12-13T13:33:03.007978607Z" level=info msg="loading plugin \"io.containerd.lease.v1.manager\"..." type=io.containerd.lease.v1
Dec 13 13:33:03.008085 containerd[1460]: time="2024-12-13T13:33:03.007996320Z" level=info msg="loading plugin \"io.containerd.streaming.v1.manager\"..." type=io.containerd.streaming.v1
Dec 13 13:33:03.008085 containerd[1460]: time="2024-12-13T13:33:03.008009906Z" level=info msg="loading plugin \"io.containerd.runtime.v1.linux\"..." type=io.containerd.runtime.v1
Dec 13 13:33:03.008261 containerd[1460]: time="2024-12-13T13:33:03.008223316Z" level=info msg="loading plugin \"io.containerd.monitor.v1.cgroups\"..." type=io.containerd.monitor.v1
Dec 13 13:33:03.008499 containerd[1460]: time="2024-12-13T13:33:03.008478024Z" level=info msg="loading plugin \"io.containerd.runtime.v2.task\"..." type=io.containerd.runtime.v2
Dec 13 13:33:03.008602 containerd[1460]: time="2024-12-13T13:33:03.008584844Z" level=info msg="loading plugin \"io.containerd.runtime.v2.shim\"..." type=io.containerd.runtime.v2
Dec 13 13:33:03.008648 containerd[1460]: time="2024-12-13T13:33:03.008613137Z" level=info msg="loading plugin \"io.containerd.sandbox.store.v1.local\"..." type=io.containerd.sandbox.store.v1
Dec 13 13:33:03.008648 containerd[1460]: time="2024-12-13T13:33:03.008631011Z" level=info msg="loading plugin \"io.containerd.sandbox.controller.v1.local\"..." type=io.containerd.sandbox.controller.v1
Dec 13 13:33:03.008648 containerd[1460]: time="2024-12-13T13:33:03.008644677Z" level=info msg="loading plugin \"io.containerd.service.v1.containers-service\"..." type=io.containerd.service.v1
Dec 13 13:33:03.008704 containerd[1460]: time="2024-12-13T13:33:03.008657210Z" level=info msg="loading plugin \"io.containerd.service.v1.content-service\"..." type=io.containerd.service.v1
Dec 13 13:33:03.008704 containerd[1460]: time="2024-12-13T13:33:03.008668952Z" level=info msg="loading plugin \"io.containerd.service.v1.diff-service\"..." type=io.containerd.service.v1
Dec 13 13:33:03.008704 containerd[1460]: time="2024-12-13T13:33:03.008683369Z" level=info msg="loading plugin \"io.containerd.service.v1.images-service\"..." type=io.containerd.service.v1
Dec 13 13:33:03.008704 containerd[1460]: time="2024-12-13T13:33:03.008701233Z" level=info msg="loading plugin \"io.containerd.service.v1.introspection-service\"..." type=io.containerd.service.v1
Dec 13 13:33:03.008780 containerd[1460]: time="2024-12-13T13:33:03.008713466Z" level=info msg="loading plugin \"io.containerd.service.v1.namespaces-service\"..." type=io.containerd.service.v1
Dec 13 13:33:03.008780 containerd[1460]: time="2024-12-13T13:33:03.008725508Z" level=info msg="loading plugin \"io.containerd.service.v1.snapshots-service\"..." type=io.containerd.service.v1
Dec 13 13:33:03.008780 containerd[1460]: time="2024-12-13T13:33:03.008737090Z" level=info msg="loading plugin \"io.containerd.service.v1.tasks-service\"..." type=io.containerd.service.v1
Dec 13 13:33:03.008780 containerd[1460]: time="2024-12-13T13:33:03.008755755Z" level=info msg="loading plugin \"io.containerd.grpc.v1.containers\"..." type=io.containerd.grpc.v1
Dec 13 13:33:03.008780 containerd[1460]: time="2024-12-13T13:33:03.008768659Z" level=info msg="loading plugin \"io.containerd.grpc.v1.content\"..." type=io.containerd.grpc.v1
Dec 13 13:33:03.008882 containerd[1460]: time="2024-12-13T13:33:03.008779329Z" level=info msg="loading plugin \"io.containerd.grpc.v1.diff\"..." type=io.containerd.grpc.v1
Dec 13 13:33:03.008882 containerd[1460]: time="2024-12-13T13:33:03.008806831Z" level=info msg="loading plugin \"io.containerd.grpc.v1.events\"..." type=io.containerd.grpc.v1
Dec 13 13:33:03.008882 containerd[1460]: time="2024-12-13T13:33:03.008818563Z" level=info msg="loading plugin \"io.containerd.grpc.v1.images\"..." type=io.containerd.grpc.v1
Dec 13 13:33:03.008882 containerd[1460]: time="2024-12-13T13:33:03.008830726Z" level=info msg="loading plugin \"io.containerd.grpc.v1.introspection\"..." type=io.containerd.grpc.v1
Dec 13 13:33:03.008882 containerd[1460]: time="2024-12-13T13:33:03.008841726Z" level=info msg="loading plugin \"io.containerd.grpc.v1.leases\"..." type=io.containerd.grpc.v1
Dec 13 13:33:03.008882 containerd[1460]: time="2024-12-13T13:33:03.008853919Z" level=info msg="loading plugin \"io.containerd.grpc.v1.namespaces\"..." type=io.containerd.grpc.v1
Dec 13 13:33:03.008882 containerd[1460]: time="2024-12-13T13:33:03.008865751Z" level=info msg="loading plugin \"io.containerd.grpc.v1.sandbox-controllers\"..." type=io.containerd.grpc.v1
Dec 13 13:33:03.008882 containerd[1460]: time="2024-12-13T13:33:03.008879066Z" level=info msg="loading plugin \"io.containerd.grpc.v1.sandboxes\"..." type=io.containerd.grpc.v1
Dec 13 13:33:03.009030 containerd[1460]: time="2024-12-13T13:33:03.008891119Z" level=info msg="loading plugin \"io.containerd.grpc.v1.snapshots\"..." type=io.containerd.grpc.v1
Dec 13 13:33:03.009030 containerd[1460]: time="2024-12-13T13:33:03.008903001Z" level=info msg="loading plugin \"io.containerd.grpc.v1.streaming\"..." type=io.containerd.grpc.v1
Dec 13 13:33:03.009030 containerd[1460]: time="2024-12-13T13:33:03.008915314Z" level=info msg="loading plugin \"io.containerd.grpc.v1.tasks\"..." type=io.containerd.grpc.v1
Dec 13 13:33:03.009030 containerd[1460]: time="2024-12-13T13:33:03.008931034Z" level=info msg="loading plugin \"io.containerd.transfer.v1.local\"..." type=io.containerd.transfer.v1
Dec 13 13:33:03.009030 containerd[1460]: time="2024-12-13T13:33:03.008959517Z" level=info msg="loading plugin \"io.containerd.grpc.v1.transfer\"..." type=io.containerd.grpc.v1
Dec 13 13:33:03.009030 containerd[1460]: time="2024-12-13T13:33:03.008978924Z" level=info msg="loading plugin \"io.containerd.grpc.v1.version\"..." type=io.containerd.grpc.v1
Dec 13 13:33:03.009030 containerd[1460]: time="2024-12-13T13:33:03.008989804Z" level=info msg="loading plugin \"io.containerd.internal.v1.restart\"..." type=io.containerd.internal.v1
Dec 13 13:33:03.009890 containerd[1460]: time="2024-12-13T13:33:03.009862571Z" level=info msg="loading plugin \"io.containerd.tracing.processor.v1.otlp\"..." type=io.containerd.tracing.processor.v1
Dec 13 13:33:03.010028 containerd[1460]: time="2024-12-13T13:33:03.009900091Z" level=info msg="skip loading plugin \"io.containerd.tracing.processor.v1.otlp\"..." error="skip plugin: tracing endpoint not configured" type=io.containerd.tracing.processor.v1
Dec 13 13:33:03.010028 containerd[1460]: time="2024-12-13T13:33:03.009917995Z" level=info msg="loading plugin \"io.containerd.internal.v1.tracing\"..." type=io.containerd.internal.v1
Dec 13 13:33:03.010028 containerd[1460]: time="2024-12-13T13:33:03.009938153Z" level=info msg="skip loading plugin \"io.containerd.internal.v1.tracing\"..." error="skip plugin: tracing endpoint not configured" type=io.containerd.internal.v1
Dec 13 13:33:03.010028 containerd[1460]: time="2024-12-13T13:33:03.009950105Z" level=info msg="loading plugin \"io.containerd.grpc.v1.healthcheck\"..." type=io.containerd.grpc.v1
Dec 13 13:33:03.010028 containerd[1460]: time="2024-12-13T13:33:03.009965354Z" level=info msg="loading plugin \"io.containerd.nri.v1.nri\"..." type=io.containerd.nri.v1
Dec 13 13:33:03.010028 containerd[1460]: time="2024-12-13T13:33:03.009978809Z" level=info msg="NRI interface is disabled by configuration."
Dec 13 13:33:03.010028 containerd[1460]: time="2024-12-13T13:33:03.009992224Z" level=info msg="loading plugin \"io.containerd.grpc.v1.cri\"..." type=io.containerd.grpc.v1
Dec 13 13:33:03.010281 containerd[1460]: time="2024-12-13T13:33:03.010239288Z" level=info msg="Start cri plugin with config {PluginConfig:{ContainerdConfig:{Snapshotter:overlayfs DefaultRuntimeName:runc DefaultRuntime:{Type: Path: Engine: PodAnnotations:[] ContainerAnnotations:[] Root: Options:map[] PrivilegedWithoutHostDevices:false PrivilegedWithoutHostDevicesAllDevicesAllowed:false BaseRuntimeSpec: NetworkPluginConfDir: NetworkPluginMaxConfNum:0 Snapshotter: SandboxMode:} UntrustedWorkloadRuntime:{Type: Path: Engine: PodAnnotations:[] ContainerAnnotations:[] Root: Options:map[] PrivilegedWithoutHostDevices:false PrivilegedWithoutHostDevicesAllDevicesAllowed:false BaseRuntimeSpec: NetworkPluginConfDir: NetworkPluginMaxConfNum:0 Snapshotter: SandboxMode:} Runtimes:map[runc:{Type:io.containerd.runc.v2 Path: Engine: PodAnnotations:[] ContainerAnnotations:[] Root: Options:map[SystemdCgroup:true] PrivilegedWithoutHostDevices:false PrivilegedWithoutHostDevicesAllDevicesAllowed:false BaseRuntimeSpec: NetworkPluginConfDir: NetworkPluginMaxConfNum:0 Snapshotter: SandboxMode:podsandbox}] NoPivot:false DisableSnapshotAnnotations:true DiscardUnpackedLayers:false IgnoreBlockIONotEnabledErrors:false IgnoreRdtNotEnabledErrors:false} CniConfig:{NetworkPluginBinDir:/opt/cni/bin NetworkPluginConfDir:/etc/cni/net.d NetworkPluginMaxConfNum:1 NetworkPluginSetupSerially:false NetworkPluginConfTemplate: IPPreference:} Registry:{ConfigPath: Mirrors:map[] Configs:map[] Auths:map[] Headers:map[]} ImageDecryption:{KeyModel:node} DisableTCPService:true StreamServerAddress:127.0.0.1 StreamServerPort:0 StreamIdleTimeout:4h0m0s EnableSelinux:true SelinuxCategoryRange:1024 SandboxImage:registry.k8s.io/pause:3.8 StatsCollectPeriod:10 SystemdCgroup:false EnableTLSStreaming:false X509KeyPairStreaming:{TLSCertFile: TLSKeyFile:} MaxContainerLogLineSize:16384 DisableCgroup:false DisableApparmor:false RestrictOOMScoreAdj:false MaxConcurrentDownloads:3 DisableProcMount:false UnsetSeccompProfile: TolerateMissingHugetlbController:true DisableHugetlbController:true DeviceOwnershipFromSecurityContext:false IgnoreImageDefinedVolumes:false NetNSMountsUnderStateDir:false EnableUnprivilegedPorts:false EnableUnprivilegedICMP:false EnableCDI:false CDISpecDirs:[/etc/cdi /var/run/cdi] ImagePullProgressTimeout:5m0s DrainExecSyncIOTimeout:0s ImagePullWithSyncFs:false IgnoreDeprecationWarnings:[]} ContainerdRootDir:/var/lib/containerd ContainerdEndpoint:/run/containerd/containerd.sock RootDir:/var/lib/containerd/io.containerd.grpc.v1.cri StateDir:/run/containerd/io.containerd.grpc.v1.cri}"
Dec 13 13:33:03.010429 containerd[1460]: time="2024-12-13T13:33:03.010285574Z" level=info msg="Connect containerd service"
Dec 13 13:33:03.010429 containerd[1460]: time="2024-12-13T13:33:03.010333735Z" level=info msg="using legacy CRI server"
Dec 13 13:33:03.010429 containerd[1460]: time="2024-12-13T13:33:03.010341038Z" level=info msg="using experimental NRI integration - disable nri plugin to prevent this"
Dec 13 13:33:03.010484 containerd[1460]: time="2024-12-13T13:33:03.010442819Z" level=info msg="Get image filesystem path \"/var/lib/containerd/io.containerd.snapshotter.v1.overlayfs\""
Dec 13 13:33:03.011066 containerd[1460]: time="2024-12-13T13:33:03.011042384Z" level=error msg="failed to load cni during init, please check CRI plugin status before setting up network for pods" error="cni config load failed: no network config found in /etc/cni/net.d: cni plugin not initialized: failed to load cni config"
Dec 13 13:33:03.011305 containerd[1460]: time="2024-12-13T13:33:03.011213475Z" level=info msg="Start subscribing containerd event"
Dec 13 13:33:03.011871 containerd[1460]: time="2024-12-13T13:33:03.011845921Z" level=info msg=serving... address=/run/containerd/containerd.sock.ttrpc
Dec 13 13:33:03.011935 containerd[1460]: time="2024-12-13T13:33:03.011916584Z" level=info msg=serving... address=/run/containerd/containerd.sock
Dec 13 13:33:03.016333 containerd[1460]: time="2024-12-13T13:33:03.013454769Z" level=info msg="Start recovering state"
Dec 13 13:33:03.016333 containerd[1460]: time="2024-12-13T13:33:03.013557642Z" level=info msg="Start event monitor"
Dec 13 13:33:03.016333 containerd[1460]: time="2024-12-13T13:33:03.013589953Z" level=info msg="Start snapshots syncer"
Dec 13 13:33:03.016333 containerd[1460]: time="2024-12-13T13:33:03.013602486Z" level=info msg="Start cni network conf syncer for default"
Dec 13 13:33:03.016333 containerd[1460]: time="2024-12-13T13:33:03.013617134Z" level=info msg="Start streaming server"
Dec 13 13:33:03.016485 containerd[1460]: time="2024-12-13T13:33:03.016458784Z" level=info msg="containerd successfully booted in 0.187878s"
Dec 13 13:33:03.016562 systemd[1]: Started containerd.service - containerd container runtime.
Dec 13 13:33:03.023576 systemd[1]: Finished systemd-user-sessions.service - Permit User Sessions.
Dec 13 13:33:03.031873 tar[1458]: linux-amd64/LICENSE
Dec 13 13:33:03.031987 tar[1458]: linux-amd64/README.md
Dec 13 13:33:03.032825 systemd[1]: Started getty@tty1.service - Getty on tty1.
Dec 13 13:33:03.035377 systemd[1]: Started serial-getty@ttyS0.service - Serial Getty on ttyS0.
Dec 13 13:33:03.036650 systemd[1]: Reached target getty.target - Login Prompts.
Dec 13 13:33:03.049447 systemd[1]: Finished prepare-helm.service - Unpack helm to /opt/bin.
Dec 13 13:33:03.829579 systemd-networkd[1402]: eth0: Gained IPv6LL
Dec 13 13:33:03.833031 systemd[1]: Finished systemd-networkd-wait-online.service - Wait for Network to be Configured.
Dec 13 13:33:03.835149 systemd[1]: Reached target network-online.target - Network is Online.
Dec 13 13:33:03.845829 systemd[1]: Starting coreos-metadata.service - QEMU metadata agent...
Dec 13 13:33:03.848997 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent...
Dec 13 13:33:03.851534 systemd[1]: Starting nvidia.service - NVIDIA Configure Service...
Dec 13 13:33:03.871009 systemd[1]: coreos-metadata.service: Deactivated successfully.
Dec 13 13:33:03.871246 systemd[1]: Finished coreos-metadata.service - QEMU metadata agent.
Dec 13 13:33:03.873196 systemd[1]: packet-phone-home.service - Report Success to Packet was skipped because no trigger condition checks were met.
Dec 13 13:33:03.880478 systemd[1]: Finished nvidia.service - NVIDIA Configure Service.
Dec 13 13:33:04.465472 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent.
Dec 13 13:33:04.467450 systemd[1]: Reached target multi-user.target - Multi-User System.
Dec 13 13:33:04.468799 systemd[1]: Startup finished in 684ms (kernel) + 5.696s (initrd) + 4.233s (userspace) = 10.614s.
Dec 13 13:33:04.469647 (kubelet)[1549]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS, KUBELET_KUBEADM_ARGS
Dec 13 13:33:04.478660 agetty[1523]: failed to open credentials directory
Dec 13 13:33:04.499955 agetty[1522]: failed to open credentials directory
Dec 13 13:33:04.909416 kubelet[1549]: E1213 13:33:04.909233 1549 run.go:74] "command failed" err="failed to load kubelet config file, path: /var/lib/kubelet/config.yaml, error: failed to load Kubelet config file /var/lib/kubelet/config.yaml, error failed to read kubelet config file \"/var/lib/kubelet/config.yaml\", error: open /var/lib/kubelet/config.yaml: no such file or directory"
Dec 13 13:33:04.913660 systemd[1]: kubelet.service: Main process exited, code=exited, status=1/FAILURE
Dec 13 13:33:04.913893 systemd[1]: kubelet.service: Failed with result 'exit-code'.
Dec 13 13:33:12.394520 systemd[1]: Created slice system-sshd.slice - Slice /system/sshd.
Dec 13 13:33:12.395799 systemd[1]: Started sshd@0-10.0.0.150:22-10.0.0.1:51682.service - OpenSSH per-connection server daemon (10.0.0.1:51682).
Dec 13 13:33:12.443336 sshd[1563]: Accepted publickey for core from 10.0.0.1 port 51682 ssh2: RSA SHA256:dZtUZWIHZCqWPvvtEO8QUYeHUprV5e9zMs000ybProI
Dec 13 13:33:12.445394 sshd-session[1563]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Dec 13 13:33:12.453946 systemd-logind[1445]: New session 1 of user core.
Dec 13 13:33:12.455260 systemd[1]: Created slice user-500.slice - User Slice of UID 500.
Dec 13 13:33:12.464517 systemd[1]: Starting user-runtime-dir@500.service - User Runtime Directory /run/user/500...
Dec 13 13:33:12.476490 systemd[1]: Finished user-runtime-dir@500.service - User Runtime Directory /run/user/500.
Dec 13 13:33:12.491824 systemd[1]: Starting user@500.service - User Manager for UID 500...
Dec 13 13:33:12.494679 (systemd)[1567]: pam_unix(systemd-user:session): session opened for user core(uid=500) by (uid=0)
Dec 13 13:33:12.607828 systemd[1567]: Queued start job for default target default.target.
Dec 13 13:33:12.620604 systemd[1567]: Created slice app.slice - User Application Slice.
Dec 13 13:33:12.620629 systemd[1567]: Reached target paths.target - Paths.
Dec 13 13:33:12.620643 systemd[1567]: Reached target timers.target - Timers.
Dec 13 13:33:12.622191 systemd[1567]: Starting dbus.socket - D-Bus User Message Bus Socket...
Dec 13 13:33:12.634559 systemd[1567]: Listening on dbus.socket - D-Bus User Message Bus Socket.
Dec 13 13:33:12.634701 systemd[1567]: Reached target sockets.target - Sockets.
Dec 13 13:33:12.634723 systemd[1567]: Reached target basic.target - Basic System.
Dec 13 13:33:12.634766 systemd[1567]: Reached target default.target - Main User Target.
Dec 13 13:33:12.634801 systemd[1567]: Startup finished in 133ms.
Dec 13 13:33:12.635232 systemd[1]: Started user@500.service - User Manager for UID 500.
Dec 13 13:33:12.636846 systemd[1]: Started session-1.scope - Session 1 of User core.
Dec 13 13:33:12.695447 systemd[1]: Started sshd@1-10.0.0.150:22-10.0.0.1:51692.service - OpenSSH per-connection server daemon (10.0.0.1:51692).
Dec 13 13:33:12.739619 sshd[1578]: Accepted publickey for core from 10.0.0.1 port 51692 ssh2: RSA SHA256:dZtUZWIHZCqWPvvtEO8QUYeHUprV5e9zMs000ybProI
Dec 13 13:33:12.740978 sshd-session[1578]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Dec 13 13:33:12.745216 systemd-logind[1445]: New session 2 of user core.
Dec 13 13:33:12.754446 systemd[1]: Started session-2.scope - Session 2 of User core.
Dec 13 13:33:12.807222 sshd[1580]: Connection closed by 10.0.0.1 port 51692
Dec 13 13:33:12.807570 sshd-session[1578]: pam_unix(sshd:session): session closed for user core
Dec 13 13:33:12.818123 systemd[1]: sshd@1-10.0.0.150:22-10.0.0.1:51692.service: Deactivated successfully.
Dec 13 13:33:12.819992 systemd[1]: session-2.scope: Deactivated successfully.
Dec 13 13:33:12.821771 systemd-logind[1445]: Session 2 logged out. Waiting for processes to exit.
Dec 13 13:33:12.831563 systemd[1]: Started sshd@2-10.0.0.150:22-10.0.0.1:51702.service - OpenSSH per-connection server daemon (10.0.0.1:51702).
Dec 13 13:33:12.832579 systemd-logind[1445]: Removed session 2.
Dec 13 13:33:12.864389 sshd[1585]: Accepted publickey for core from 10.0.0.1 port 51702 ssh2: RSA SHA256:dZtUZWIHZCqWPvvtEO8QUYeHUprV5e9zMs000ybProI
Dec 13 13:33:12.865844 sshd-session[1585]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Dec 13 13:33:12.869752 systemd-logind[1445]: New session 3 of user core.
Dec 13 13:33:12.880444 systemd[1]: Started session-3.scope - Session 3 of User core.
Dec 13 13:33:12.928976 sshd[1587]: Connection closed by 10.0.0.1 port 51702
Dec 13 13:33:12.929418 sshd-session[1585]: pam_unix(sshd:session): session closed for user core
Dec 13 13:33:12.945227 systemd[1]: sshd@2-10.0.0.150:22-10.0.0.1:51702.service: Deactivated successfully.
Dec 13 13:33:12.947266 systemd[1]: session-3.scope: Deactivated successfully.
Dec 13 13:33:12.949562 systemd-logind[1445]: Session 3 logged out. Waiting for processes to exit.
Dec 13 13:33:12.965866 systemd[1]: Started sshd@3-10.0.0.150:22-10.0.0.1:51710.service - OpenSSH per-connection server daemon (10.0.0.1:51710).
Dec 13 13:33:12.967062 systemd-logind[1445]: Removed session 3.
Dec 13 13:33:12.999202 sshd[1592]: Accepted publickey for core from 10.0.0.1 port 51710 ssh2: RSA SHA256:dZtUZWIHZCqWPvvtEO8QUYeHUprV5e9zMs000ybProI
Dec 13 13:33:13.000804 sshd-session[1592]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Dec 13 13:33:13.005452 systemd-logind[1445]: New session 4 of user core.
Dec 13 13:33:13.016431 systemd[1]: Started session-4.scope - Session 4 of User core.
Dec 13 13:33:13.069488 sshd[1594]: Connection closed by 10.0.0.1 port 51710
Dec 13 13:33:13.069903 sshd-session[1592]: pam_unix(sshd:session): session closed for user core
Dec 13 13:33:13.081124 systemd[1]: sshd@3-10.0.0.150:22-10.0.0.1:51710.service: Deactivated successfully.
Dec 13 13:33:13.082969 systemd[1]: session-4.scope: Deactivated successfully.
Dec 13 13:33:13.084650 systemd-logind[1445]: Session 4 logged out. Waiting for processes to exit.
Dec 13 13:33:13.096541 systemd[1]: Started sshd@4-10.0.0.150:22-10.0.0.1:51714.service - OpenSSH per-connection server daemon (10.0.0.1:51714).
Dec 13 13:33:13.097360 systemd-logind[1445]: Removed session 4.
Dec 13 13:33:13.129822 sshd[1599]: Accepted publickey for core from 10.0.0.1 port 51714 ssh2: RSA SHA256:dZtUZWIHZCqWPvvtEO8QUYeHUprV5e9zMs000ybProI
Dec 13 13:33:13.131393 sshd-session[1599]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Dec 13 13:33:13.135234 systemd-logind[1445]: New session 5 of user core.
Dec 13 13:33:13.145429 systemd[1]: Started session-5.scope - Session 5 of User core.
Dec 13 13:33:13.203208 sudo[1602]: core : PWD=/home/core ; USER=root ; COMMAND=/usr/sbin/setenforce 1
Dec 13 13:33:13.203569 sudo[1602]: pam_unix(sudo:session): session opened for user root(uid=0) by core(uid=500)
Dec 13 13:33:13.222001 sudo[1602]: pam_unix(sudo:session): session closed for user root
Dec 13 13:33:13.223644 sshd[1601]: Connection closed by 10.0.0.1 port 51714
Dec 13 13:33:13.224062 sshd-session[1599]: pam_unix(sshd:session): session closed for user core
Dec 13 13:33:13.241283 systemd[1]: sshd@4-10.0.0.150:22-10.0.0.1:51714.service: Deactivated successfully.
Dec 13 13:33:13.243127 systemd[1]: session-5.scope: Deactivated successfully.
Dec 13 13:33:13.244829 systemd-logind[1445]: Session 5 logged out. Waiting for processes to exit.
Dec 13 13:33:13.246203 systemd[1]: Started sshd@5-10.0.0.150:22-10.0.0.1:51730.service - OpenSSH per-connection server daemon (10.0.0.1:51730).
Dec 13 13:33:13.246974 systemd-logind[1445]: Removed session 5.
Dec 13 13:33:13.283813 sshd[1607]: Accepted publickey for core from 10.0.0.1 port 51730 ssh2: RSA SHA256:dZtUZWIHZCqWPvvtEO8QUYeHUprV5e9zMs000ybProI
Dec 13 13:33:13.285176 sshd-session[1607]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Dec 13 13:33:13.289011 systemd-logind[1445]: New session 6 of user core.
Dec 13 13:33:13.297424 systemd[1]: Started session-6.scope - Session 6 of User core.
Dec 13 13:33:13.350709 sudo[1611]: core : PWD=/home/core ; USER=root ; COMMAND=/usr/sbin/rm -rf /etc/audit/rules.d/80-selinux.rules /etc/audit/rules.d/99-default.rules
Dec 13 13:33:13.351068 sudo[1611]: pam_unix(sudo:session): session opened for user root(uid=0) by core(uid=500)
Dec 13 13:33:13.354989 sudo[1611]: pam_unix(sudo:session): session closed for user root
Dec 13 13:33:13.361425 sudo[1610]: core : PWD=/home/core ; USER=root ; COMMAND=/usr/sbin/systemctl restart audit-rules
Dec 13 13:33:13.361771 sudo[1610]: pam_unix(sudo:session): session opened for user root(uid=0) by core(uid=500)
Dec 13 13:33:13.385684 systemd[1]: Starting audit-rules.service - Load Audit Rules...
Dec 13 13:33:13.416808 augenrules[1633]: No rules
Dec 13 13:33:13.418789 systemd[1]: audit-rules.service: Deactivated successfully.
Dec 13 13:33:13.419045 systemd[1]: Finished audit-rules.service - Load Audit Rules.
Dec 13 13:33:13.420367 sudo[1610]: pam_unix(sudo:session): session closed for user root
Dec 13 13:33:13.422259 sshd[1609]: Connection closed by 10.0.0.1 port 51730
Dec 13 13:33:13.422609 sshd-session[1607]: pam_unix(sshd:session): session closed for user core
Dec 13 13:33:13.434368 systemd[1]: sshd@5-10.0.0.150:22-10.0.0.1:51730.service: Deactivated successfully.
Dec 13 13:33:13.436131 systemd[1]: session-6.scope: Deactivated successfully.
Dec 13 13:33:13.437852 systemd-logind[1445]: Session 6 logged out. Waiting for processes to exit.
Dec 13 13:33:13.449554 systemd[1]: Started sshd@6-10.0.0.150:22-10.0.0.1:51734.service - OpenSSH per-connection server daemon (10.0.0.1:51734).
Dec 13 13:33:13.450580 systemd-logind[1445]: Removed session 6.
Dec 13 13:33:13.484225 sshd[1641]: Accepted publickey for core from 10.0.0.1 port 51734 ssh2: RSA SHA256:dZtUZWIHZCqWPvvtEO8QUYeHUprV5e9zMs000ybProI
Dec 13 13:33:13.485630 sshd-session[1641]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Dec 13 13:33:13.489777 systemd-logind[1445]: New session 7 of user core.
Dec 13 13:33:13.498432 systemd[1]: Started session-7.scope - Session 7 of User core.
Dec 13 13:33:13.552137 sudo[1644]: core : PWD=/home/core ; USER=root ; COMMAND=/home/core/install.sh
Dec 13 13:33:13.552501 sudo[1644]: pam_unix(sudo:session): session opened for user root(uid=0) by core(uid=500)
Dec 13 13:33:13.813544 systemd[1]: Starting docker.service - Docker Application Container Engine...
Dec 13 13:33:13.813653 (dockerd)[1664]: docker.service: Referenced but unset environment variable evaluates to an empty string: DOCKER_CGROUPS, DOCKER_OPTS, DOCKER_OPT_BIP, DOCKER_OPT_IPMASQ, DOCKER_OPT_MTU
Dec 13 13:33:14.047982 dockerd[1664]: time="2024-12-13T13:33:14.047914471Z" level=info msg="Starting up"
Dec 13 13:33:14.423946 systemd[1]: var-lib-docker-metacopy\x2dcheck2351459610-merged.mount: Deactivated successfully.
Dec 13 13:33:14.447733 dockerd[1664]: time="2024-12-13T13:33:14.447665209Z" level=info msg="Loading containers: start."
Dec 13 13:33:14.788345 kernel: Initializing XFRM netlink socket
Dec 13 13:33:14.865525 systemd-networkd[1402]: docker0: Link UP
Dec 13 13:33:14.903647 dockerd[1664]: time="2024-12-13T13:33:14.903609551Z" level=info msg="Loading containers: done."
Dec 13 13:33:14.916381 systemd[1]: var-lib-docker-overlay2-opaque\x2dbug\x2dcheck1406542862-merged.mount: Deactivated successfully.
Dec 13 13:33:14.917327 systemd[1]: kubelet.service: Scheduled restart job, restart counter is at 1.
Dec 13 13:33:14.918974 dockerd[1664]: time="2024-12-13T13:33:14.918931250Z" level=warning msg="Not using native diff for overlay2, this may cause degraded performance for building images: kernel has CONFIG_OVERLAY_FS_REDIRECT_DIR enabled" storage-driver=overlay2
Dec 13 13:33:14.919044 dockerd[1664]: time="2024-12-13T13:33:14.919031348Z" level=info msg="Docker daemon" commit=41ca978a0a5400cc24b274137efa9f25517fcc0b containerd-snapshotter=false storage-driver=overlay2 version=27.3.1
Dec 13 13:33:14.919176 dockerd[1664]: time="2024-12-13T13:33:14.919150401Z" level=info msg="Daemon has completed initialization"
Dec 13 13:33:14.928472 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent...
Dec 13 13:33:15.078212 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent.
Dec 13 13:33:15.082618 (kubelet)[1853]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS, KUBELET_KUBEADM_ARGS
Dec 13 13:33:15.203249 kubelet[1853]: E1213 13:33:15.203091 1853 run.go:74] "command failed" err="failed to load kubelet config file, path: /var/lib/kubelet/config.yaml, error: failed to load Kubelet config file /var/lib/kubelet/config.yaml, error failed to read kubelet config file \"/var/lib/kubelet/config.yaml\", error: open /var/lib/kubelet/config.yaml: no such file or directory"
Dec 13 13:33:15.209998 systemd[1]: kubelet.service: Main process exited, code=exited, status=1/FAILURE
Dec 13 13:33:15.210190 systemd[1]: kubelet.service: Failed with result 'exit-code'.
Dec 13 13:33:15.315355 dockerd[1664]: time="2024-12-13T13:33:15.315279065Z" level=info msg="API listen on /run/docker.sock"
Dec 13 13:33:15.315529 systemd[1]: Started docker.service - Docker Application Container Engine.
Dec 13 13:33:16.022585 containerd[1460]: time="2024-12-13T13:33:16.022332996Z" level=info msg="PullImage \"registry.k8s.io/kube-apiserver:v1.30.8\""
Dec 13 13:33:16.979283 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount2929331714.mount: Deactivated successfully.
Dec 13 13:33:17.940050 containerd[1460]: time="2024-12-13T13:33:17.939998278Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-apiserver:v1.30.8\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Dec 13 13:33:17.940682 containerd[1460]: time="2024-12-13T13:33:17.940633449Z" level=info msg="stop pulling image registry.k8s.io/kube-apiserver:v1.30.8: active requests=0, bytes read=32675642"
Dec 13 13:33:17.941767 containerd[1460]: time="2024-12-13T13:33:17.941734926Z" level=info msg="ImageCreate event name:\"sha256:772392d372035bf92e430e758ad0446146d82b7192358c8651252e4fb49c43dd\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Dec 13 13:33:17.944291 containerd[1460]: time="2024-12-13T13:33:17.944255063Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-apiserver@sha256:f0e1b3de0c2e98e6c6abd73edf9d3b8e4d44460656cde0ebb92e2d9206961fcb\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Dec 13 13:33:17.945349 containerd[1460]: time="2024-12-13T13:33:17.945292328Z" level=info msg="Pulled image \"registry.k8s.io/kube-apiserver:v1.30.8\" with image id \"sha256:772392d372035bf92e430e758ad0446146d82b7192358c8651252e4fb49c43dd\", repo tag \"registry.k8s.io/kube-apiserver:v1.30.8\", repo digest \"registry.k8s.io/kube-apiserver@sha256:f0e1b3de0c2e98e6c6abd73edf9d3b8e4d44460656cde0ebb92e2d9206961fcb\", size \"32672442\" in 1.922902886s"
Dec 13 13:33:17.945393 containerd[1460]: time="2024-12-13T13:33:17.945353343Z" level=info msg="PullImage \"registry.k8s.io/kube-apiserver:v1.30.8\" returns image reference \"sha256:772392d372035bf92e430e758ad0446146d82b7192358c8651252e4fb49c43dd\""
Dec 13 13:33:17.964835 containerd[1460]: time="2024-12-13T13:33:17.964793527Z" level=info msg="PullImage \"registry.k8s.io/kube-controller-manager:v1.30.8\""
Dec 13 13:33:21.278304 containerd[1460]: time="2024-12-13T13:33:21.278226690Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-controller-manager:v1.30.8\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Dec 13 13:33:21.279976 containerd[1460]: time="2024-12-13T13:33:21.279934854Z" level=info msg="stop pulling image registry.k8s.io/kube-controller-manager:v1.30.8: active requests=0, bytes read=29606409"
Dec 13 13:33:21.281652 containerd[1460]: time="2024-12-13T13:33:21.281618913Z" level=info msg="ImageCreate event name:\"sha256:85333d41dd3ce32d8344280c6d533d4c8f66252e4c28e332a2322ba3837f7bd6\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Dec 13 13:33:21.284914 containerd[1460]: time="2024-12-13T13:33:21.284857508Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-controller-manager@sha256:124f66b7e877eb5a80a40503057299bb60e6a5f2130905f4e3293dabf194c397\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Dec 13 13:33:21.285690 containerd[1460]: time="2024-12-13T13:33:21.285653761Z" level=info msg="Pulled image \"registry.k8s.io/kube-controller-manager:v1.30.8\" with image id \"sha256:85333d41dd3ce32d8344280c6d533d4c8f66252e4c28e332a2322ba3837f7bd6\", repo tag \"registry.k8s.io/kube-controller-manager:v1.30.8\", repo digest \"registry.k8s.io/kube-controller-manager@sha256:124f66b7e877eb5a80a40503057299bb60e6a5f2130905f4e3293dabf194c397\", size \"31051521\" in 3.320819368s"
Dec 13 13:33:21.285733 containerd[1460]: time="2024-12-13T13:33:21.285691412Z" level=info msg="PullImage \"registry.k8s.io/kube-controller-manager:v1.30.8\" returns image reference \"sha256:85333d41dd3ce32d8344280c6d533d4c8f66252e4c28e332a2322ba3837f7bd6\""
Dec 13 13:33:21.308772 containerd[1460]: time="2024-12-13T13:33:21.308728703Z" level=info msg="PullImage \"registry.k8s.io/kube-scheduler:v1.30.8\""
Dec 13 13:33:22.339122 containerd[1460]: time="2024-12-13T13:33:22.339066503Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-scheduler:v1.30.8\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Dec 13 13:33:22.339844 containerd[1460]: time="2024-12-13T13:33:22.339799768Z" level=info msg="stop pulling image registry.k8s.io/kube-scheduler:v1.30.8: active requests=0, bytes read=17783035"
Dec 13 13:33:22.341129 containerd[1460]: time="2024-12-13T13:33:22.341085019Z" level=info msg="ImageCreate event name:\"sha256:eb53b988d5e03f329b5fdba21cbbbae48e1619b199689e7448095b31843b2c43\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Dec 13 13:33:22.346526 containerd[1460]: time="2024-12-13T13:33:22.346482994Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-scheduler@sha256:c8bdeac2590c99c1a77e33995423ddb6633ff90a82a2aa455442e0a8079ef8c7\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Dec 13 13:33:22.348007 containerd[1460]: time="2024-12-13T13:33:22.347947431Z" level=info msg="Pulled image \"registry.k8s.io/kube-scheduler:v1.30.8\" with image id \"sha256:eb53b988d5e03f329b5fdba21cbbbae48e1619b199689e7448095b31843b2c43\", repo tag \"registry.k8s.io/kube-scheduler:v1.30.8\", repo digest \"registry.k8s.io/kube-scheduler@sha256:c8bdeac2590c99c1a77e33995423ddb6633ff90a82a2aa455442e0a8079ef8c7\", size \"19228165\" in 1.039179724s"
Dec 13 13:33:22.348007 containerd[1460]: time="2024-12-13T13:33:22.347990271Z" level=info msg="PullImage \"registry.k8s.io/kube-scheduler:v1.30.8\" returns image reference \"sha256:eb53b988d5e03f329b5fdba21cbbbae48e1619b199689e7448095b31843b2c43\""
Dec 13 13:33:22.370425 containerd[1460]: time="2024-12-13T13:33:22.370387031Z" level=info msg="PullImage \"registry.k8s.io/kube-proxy:v1.30.8\""
Dec 13 13:33:23.465443 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount2839646463.mount: Deactivated successfully.
Dec 13 13:33:24.216434 containerd[1460]: time="2024-12-13T13:33:24.216373584Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-proxy:v1.30.8\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Dec 13 13:33:24.217174 containerd[1460]: time="2024-12-13T13:33:24.217135253Z" level=info msg="stop pulling image registry.k8s.io/kube-proxy:v1.30.8: active requests=0, bytes read=29057470"
Dec 13 13:33:24.218499 containerd[1460]: time="2024-12-13T13:33:24.218459176Z" level=info msg="ImageCreate event name:\"sha256:ce61fda67eb41cf09d2b984e7979e289b5042e3983ddfc67be678425632cc0d2\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Dec 13 13:33:24.220646 containerd[1460]: time="2024-12-13T13:33:24.220614629Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-proxy@sha256:f6d6be9417e22af78905000ac4fd134896bacd2188ea63c7cac8edd7a5d7e9b5\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Dec 13 13:33:24.221462 containerd[1460]: time="2024-12-13T13:33:24.221435208Z" level=info msg="Pulled image \"registry.k8s.io/kube-proxy:v1.30.8\" with image id \"sha256:ce61fda67eb41cf09d2b984e7979e289b5042e3983ddfc67be678425632cc0d2\", repo tag \"registry.k8s.io/kube-proxy:v1.30.8\", repo digest \"registry.k8s.io/kube-proxy@sha256:f6d6be9417e22af78905000ac4fd134896bacd2188ea63c7cac8edd7a5d7e9b5\", size \"29056489\" in 1.851008773s"
Dec 13 13:33:24.221462 containerd[1460]: time="2024-12-13T13:33:24.221461708Z" level=info msg="PullImage \"registry.k8s.io/kube-proxy:v1.30.8\" returns image reference \"sha256:ce61fda67eb41cf09d2b984e7979e289b5042e3983ddfc67be678425632cc0d2\""
Dec 13 13:33:24.242794 containerd[1460]: time="2024-12-13T13:33:24.242754517Z" level=info msg="PullImage \"registry.k8s.io/coredns/coredns:v1.11.1\""
Dec 13 13:33:24.819748 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount185045065.mount: Deactivated successfully.
Dec 13 13:33:25.460582 systemd[1]: kubelet.service: Scheduled restart job, restart counter is at 2.
Dec 13 13:33:25.469463 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent...
Dec 13 13:33:25.613700 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent.
Dec 13 13:33:25.618068 (kubelet)[2004]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS, KUBELET_KUBEADM_ARGS
Dec 13 13:33:25.656720 kubelet[2004]: E1213 13:33:25.656655 2004 run.go:74] "command failed" err="failed to load kubelet config file, path: /var/lib/kubelet/config.yaml, error: failed to load Kubelet config file /var/lib/kubelet/config.yaml, error failed to read kubelet config file \"/var/lib/kubelet/config.yaml\", error: open /var/lib/kubelet/config.yaml: no such file or directory"
Dec 13 13:33:25.661928 systemd[1]: kubelet.service: Main process exited, code=exited, status=1/FAILURE
Dec 13 13:33:25.662167 systemd[1]: kubelet.service: Failed with result 'exit-code'.
Dec 13 13:33:26.257898 containerd[1460]: time="2024-12-13T13:33:26.257847015Z" level=info msg="ImageCreate event name:\"registry.k8s.io/coredns/coredns:v1.11.1\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Dec 13 13:33:26.258540 containerd[1460]: time="2024-12-13T13:33:26.258497335Z" level=info msg="stop pulling image registry.k8s.io/coredns/coredns:v1.11.1: active requests=0, bytes read=18185761"
Dec 13 13:33:26.259569 containerd[1460]: time="2024-12-13T13:33:26.259535813Z" level=info msg="ImageCreate event name:\"sha256:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Dec 13 13:33:26.262146 containerd[1460]: time="2024-12-13T13:33:26.262106755Z" level=info msg="ImageCreate event name:\"registry.k8s.io/coredns/coredns@sha256:1eeb4c7316bacb1d4c8ead65571cd92dd21e27359f0d4917f1a5822a73b75db1\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Dec 13 13:33:26.263254 containerd[1460]: time="2024-12-13T13:33:26.263205927Z" level=info msg="Pulled image \"registry.k8s.io/coredns/coredns:v1.11.1\" with image id \"sha256:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4\", repo tag \"registry.k8s.io/coredns/coredns:v1.11.1\", repo digest \"registry.k8s.io/coredns/coredns@sha256:1eeb4c7316bacb1d4c8ead65571cd92dd21e27359f0d4917f1a5822a73b75db1\", size \"18182961\" in 2.02041447s"
Dec 13 13:33:26.263254 containerd[1460]: time="2024-12-13T13:33:26.263237165Z" level=info msg="PullImage \"registry.k8s.io/coredns/coredns:v1.11.1\" returns image reference \"sha256:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4\""
Dec 13 13:33:26.283631 containerd[1460]: time="2024-12-13T13:33:26.283592085Z" level=info msg="PullImage \"registry.k8s.io/pause:3.9\""
Dec 13 13:33:27.027897 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount624886992.mount: Deactivated successfully.
Dec 13 13:33:27.335543 containerd[1460]: time="2024-12-13T13:33:27.335375460Z" level=info msg="ImageCreate event name:\"registry.k8s.io/pause:3.9\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Dec 13 13:33:27.336271 containerd[1460]: time="2024-12-13T13:33:27.336191551Z" level=info msg="stop pulling image registry.k8s.io/pause:3.9: active requests=0, bytes read=322290"
Dec 13 13:33:27.354481 containerd[1460]: time="2024-12-13T13:33:27.354442044Z" level=info msg="ImageCreate event name:\"sha256:e6f1816883972d4be47bd48879a08919b96afcd344132622e4d444987919323c\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Dec 13 13:33:27.356682 containerd[1460]: time="2024-12-13T13:33:27.356637682Z" level=info msg="ImageCreate event name:\"registry.k8s.io/pause@sha256:7031c1b283388d2c2e09b57badb803c05ebed362dc88d84b480cc47f72a21097\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Dec 13 13:33:27.357232 containerd[1460]: time="2024-12-13T13:33:27.357196951Z" level=info msg="Pulled image \"registry.k8s.io/pause:3.9\" with image id \"sha256:e6f1816883972d4be47bd48879a08919b96afcd344132622e4d444987919323c\", repo tag \"registry.k8s.io/pause:3.9\", repo digest \"registry.k8s.io/pause@sha256:7031c1b283388d2c2e09b57badb803c05ebed362dc88d84b480cc47f72a21097\", size \"321520\" in 1.073566204s"
Dec 13 13:33:27.357232 containerd[1460]: time="2024-12-13T13:33:27.357222940Z" level=info msg="PullImage \"registry.k8s.io/pause:3.9\" returns image reference \"sha256:e6f1816883972d4be47bd48879a08919b96afcd344132622e4d444987919323c\""
Dec 13 13:33:27.379134 containerd[1460]: time="2024-12-13T13:33:27.379086199Z" level=info msg="PullImage \"registry.k8s.io/etcd:3.5.12-0\""
Dec 13 13:33:28.187117 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount2086111243.mount: Deactivated successfully.
Dec 13 13:33:31.315124 containerd[1460]: time="2024-12-13T13:33:31.315048549Z" level=info msg="ImageCreate event name:\"registry.k8s.io/etcd:3.5.12-0\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Dec 13 13:33:31.315956 containerd[1460]: time="2024-12-13T13:33:31.315918341Z" level=info msg="stop pulling image registry.k8s.io/etcd:3.5.12-0: active requests=0, bytes read=57238571"
Dec 13 13:33:31.340520 containerd[1460]: time="2024-12-13T13:33:31.340470714Z" level=info msg="ImageCreate event name:\"sha256:3861cfcd7c04ccac1f062788eca39487248527ef0c0cfd477a83d7691a75a899\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Dec 13 13:33:31.380402 containerd[1460]: time="2024-12-13T13:33:31.380369473Z" level=info msg="ImageCreate event name:\"registry.k8s.io/etcd@sha256:44a8e24dcbba3470ee1fee21d5e88d128c936e9b55d4bc51fbef8086f8ed123b\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Dec 13 13:33:31.381679 containerd[1460]: time="2024-12-13T13:33:31.381630628Z" level=info msg="Pulled image \"registry.k8s.io/etcd:3.5.12-0\" with image id \"sha256:3861cfcd7c04ccac1f062788eca39487248527ef0c0cfd477a83d7691a75a899\", repo tag \"registry.k8s.io/etcd:3.5.12-0\", repo digest \"registry.k8s.io/etcd@sha256:44a8e24dcbba3470ee1fee21d5e88d128c936e9b55d4bc51fbef8086f8ed123b\", size \"57236178\" in 4.002510035s"
Dec 13 13:33:31.381679 containerd[1460]: time="2024-12-13T13:33:31.381683948Z" level=info msg="PullImage \"registry.k8s.io/etcd:3.5.12-0\" returns image reference \"sha256:3861cfcd7c04ccac1f062788eca39487248527ef0c0cfd477a83d7691a75a899\""
Dec 13 13:33:33.516778 systemd[1]: Stopped kubelet.service - kubelet: The Kubernetes Node Agent.
Dec 13 13:33:33.532516 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent...
Dec 13 13:33:33.549126 systemd[1]: Reloading requested from client PID 2188 ('systemctl') (unit session-7.scope)...
Dec 13 13:33:33.549140 systemd[1]: Reloading...
Dec 13 13:33:33.633348 zram_generator::config[2228]: No configuration found.
Dec 13 13:33:33.922601 systemd[1]: /usr/lib/systemd/system/docker.socket:6: ListenStream= references a path below legacy directory /var/run/, updating /var/run/docker.sock → /run/docker.sock; please update the unit file accordingly.
Dec 13 13:33:33.997299 systemd[1]: Reloading finished in 447 ms.
Dec 13 13:33:34.049843 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent.
Dec 13 13:33:34.053069 systemd[1]: Stopping kubelet.service - kubelet: The Kubernetes Node Agent...
Dec 13 13:33:34.054595 systemd[1]: kubelet.service: Deactivated successfully.
Dec 13 13:33:34.054862 systemd[1]: Stopped kubelet.service - kubelet: The Kubernetes Node Agent.
Dec 13 13:33:34.067575 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent...
Dec 13 13:33:34.213537 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent.
Dec 13 13:33:34.218298 (kubelet)[2278]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS
Dec 13 13:33:34.253813 kubelet[2278]: Flag --container-runtime-endpoint has been deprecated, This parameter should be set via the config file specified by the Kubelet's --config flag. See https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/ for more information.
Dec 13 13:33:34.253813 kubelet[2278]: Flag --pod-infra-container-image has been deprecated, will be removed in a future release. Image garbage collector will get sandbox image information from CRI.
Dec 13 13:33:34.253813 kubelet[2278]: Flag --volume-plugin-dir has been deprecated, This parameter should be set via the config file specified by the Kubelet's --config flag. See https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/ for more information.
Dec 13 13:33:34.254790 kubelet[2278]: I1213 13:33:34.254734 2278 server.go:205] "--pod-infra-container-image will not be pruned by the image garbage collector in kubelet and should also be set in the remote runtime"
Dec 13 13:33:34.752265 kubelet[2278]: I1213 13:33:34.752227 2278 server.go:484] "Kubelet version" kubeletVersion="v1.30.1"
Dec 13 13:33:34.752265 kubelet[2278]: I1213 13:33:34.752259 2278 server.go:486] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK=""
Dec 13 13:33:34.752525 kubelet[2278]: I1213 13:33:34.752503 2278 server.go:927] "Client rotation is on, will bootstrap in background"
Dec 13 13:33:34.772609 kubelet[2278]: I1213 13:33:34.771568 2278 dynamic_cafile_content.go:157] "Starting controller" name="client-ca-bundle::/etc/kubernetes/pki/ca.crt"
Dec 13 13:33:34.772609 kubelet[2278]: E1213 13:33:34.772067 2278 certificate_manager.go:562] kubernetes.io/kube-apiserver-client-kubelet: Failed while requesting a signed certificate from the control plane: cannot create certificate signing request: Post "https://10.0.0.150:6443/apis/certificates.k8s.io/v1/certificatesigningrequests": dial tcp 10.0.0.150:6443: connect: connection refused
Dec 13 13:33:34.784397 kubelet[2278]: I1213 13:33:34.784370 2278 server.go:742] "--cgroups-per-qos enabled, but --cgroup-root was not specified. defaulting to /"
Dec 13 13:33:34.784637 kubelet[2278]: I1213 13:33:34.784601 2278 container_manager_linux.go:265] "Container manager verified user specified cgroup-root exists" cgroupRoot=[]
Dec 13 13:33:34.784810 kubelet[2278]: I1213 13:33:34.784631 2278 container_manager_linux.go:270] "Creating Container Manager object based on Node Config" nodeConfig={"NodeName":"localhost","RuntimeCgroupsName":"","SystemCgroupsName":"","KubeletCgroupsName":"","KubeletOOMScoreAdj":-999,"ContainerRuntime":"","CgroupsPerQOS":true,"CgroupRoot":"/","CgroupDriver":"systemd","KubeletRootDir":"/var/lib/kubelet","ProtectKernelDefaults":false,"KubeReservedCgroupName":"","SystemReservedCgroupName":"","ReservedSystemCPUs":{},"EnforceNodeAllocatable":{"pods":{}},"KubeReserved":null,"SystemReserved":null,"HardEvictionThresholds":[{"Signal":"nodefs.inodesFree","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.05},"GracePeriod":0,"MinReclaim":null},{"Signal":"imagefs.available","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.15},"GracePeriod":0,"MinReclaim":null},{"Signal":"imagefs.inodesFree","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.05},"GracePeriod":0,"MinReclaim":null},{"Signal":"memory.available","Operator":"LessThan","Value":{"Quantity":"100Mi","Percentage":0},"GracePeriod":0,"MinReclaim":null},{"Signal":"nodefs.available","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.1},"GracePeriod":0,"MinReclaim":null}],"QOSReserved":{},"CPUManagerPolicy":"none","CPUManagerPolicyOptions":null,"TopologyManagerScope":"container","CPUManagerReconcilePeriod":10000000000,"ExperimentalMemoryManagerPolicy":"None","ExperimentalMemoryManagerReservedMemory":null,"PodPidsLimit":-1,"EnforceCPULimits":true,"CPUCFSQuotaPeriod":100000000,"TopologyManagerPolicy":"none","TopologyManagerPolicyOptions":null}
Dec 13 13:33:34.785210 kubelet[2278]: I1213 13:33:34.785189 2278 topology_manager.go:138] "Creating topology manager with none policy"
Dec 13 13:33:34.785210 kubelet[2278]: I1213 13:33:34.785206 2278 container_manager_linux.go:301] "Creating device plugin manager"
Dec 13 13:33:34.785360 kubelet[2278]: I1213 13:33:34.785341 2278 state_mem.go:36] "Initialized new in-memory state store"
Dec 13 13:33:34.785922 kubelet[2278]: I1213 13:33:34.785901 2278 kubelet.go:400] "Attempting to sync node with API server"
Dec 13 13:33:34.785922 kubelet[2278]: I1213 13:33:34.785917 2278 kubelet.go:301] "Adding static pod path" path="/etc/kubernetes/manifests"
Dec 13 13:33:34.785979 kubelet[2278]: I1213 13:33:34.785940 2278 kubelet.go:312] "Adding apiserver pod source"
Dec 13 13:33:34.785979 kubelet[2278]: I1213 13:33:34.785956 2278 apiserver.go:42] "Waiting for node sync before watching apiserver pods"
Dec 13 13:33:34.786385 kubelet[2278]: W1213 13:33:34.786299 2278 reflector.go:547] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Node: Get "https://10.0.0.150:6443/api/v1/nodes?fieldSelector=metadata.name%3Dlocalhost&limit=500&resourceVersion=0": dial tcp 10.0.0.150:6443: connect: connection refused
Dec 13 13:33:34.786385 kubelet[2278]: W1213 13:33:34.786336 2278 reflector.go:547] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Service: Get "https://10.0.0.150:6443/api/v1/services?limit=500&resourceVersion=0": dial tcp 10.0.0.150:6443: connect: connection refused
Dec 13 13:33:34.786385 kubelet[2278]: E1213 13:33:34.786362 2278 reflector.go:150] k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Node: failed to list *v1.Node: Get "https://10.0.0.150:6443/api/v1/nodes?fieldSelector=metadata.name%3Dlocalhost&limit=500&resourceVersion=0": dial tcp 10.0.0.150:6443: connect: connection refused
Dec 13 13:33:34.786385 kubelet[2278]: E1213 13:33:34.786368 2278 reflector.go:150] k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Service: failed to list *v1.Service: Get "https://10.0.0.150:6443/api/v1/services?limit=500&resourceVersion=0": dial tcp 10.0.0.150:6443: connect: connection refused
Dec 13 13:33:34.788852 kubelet[2278]: I1213 13:33:34.788834 2278 kuberuntime_manager.go:261] "Container runtime initialized" containerRuntime="containerd" version="v1.7.23" apiVersion="v1"
Dec 13 13:33:34.789968 kubelet[2278]: I1213 13:33:34.789943 2278 kubelet.go:815] "Not starting ClusterTrustBundle informer because we are in static kubelet mode"
Dec 13 13:33:34.790002 kubelet[2278]: W1213 13:33:34.789995 2278 probe.go:272] Flexvolume plugin directory at /opt/libexec/kubernetes/kubelet-plugins/volume/exec/ does not exist. Recreating.
Dec 13 13:33:34.790720 kubelet[2278]: I1213 13:33:34.790612 2278 server.go:1264] "Started kubelet"
Dec 13 13:33:34.791718 kubelet[2278]: I1213 13:33:34.791186 2278 ratelimit.go:55] "Setting rate limiting for endpoint" service="podresources" qps=100 burstTokens=10
Dec 13 13:33:34.791718 kubelet[2278]: I1213 13:33:34.791540 2278 server.go:227] "Starting to serve the podresources API" endpoint="unix:/var/lib/kubelet/pod-resources/kubelet.sock"
Dec 13 13:33:34.791718 kubelet[2278]: I1213 13:33:34.791568 2278 server.go:163] "Starting to listen" address="0.0.0.0" port=10250
Dec 13 13:33:34.791718 kubelet[2278]: I1213 13:33:34.791717 2278 fs_resource_analyzer.go:67] "Starting FS ResourceAnalyzer"
Dec 13 13:33:34.792461 kubelet[2278]: I1213 13:33:34.792442 2278 server.go:455] "Adding debug handlers to kubelet server"
Dec 13 13:33:34.793229 kubelet[2278]: E1213 13:33:34.793196 2278 kubelet_node_status.go:462] "Error getting the current node from lister" err="node \"localhost\" not found"
Dec 13 13:33:34.794629 kubelet[2278]: I1213 13:33:34.794562 2278 volume_manager.go:291] "Starting Kubelet Volume Manager"
Dec 13 13:33:34.794775 kubelet[2278]: I1213 13:33:34.794752 2278 reconciler.go:26] "Reconciler: start to sync state"
Dec 13 13:33:34.796720 kubelet[2278]: E1213 13:33:34.795885 2278 event.go:368] "Unable to write event (may retry after sleeping)" err="Post \"https://10.0.0.150:6443/api/v1/namespaces/default/events\": dial tcp 10.0.0.150:6443: connect: connection refused" event="&Event{ObjectMeta:{localhost.1810bfdd8bbab288 default 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Node,Namespace:,Name:localhost,UID:localhost,APIVersion:,ResourceVersion:,FieldPath:,},Reason:Starting,Message:Starting kubelet.,Source:EventSource{Component:kubelet,Host:localhost,},FirstTimestamp:2024-12-13 13:33:34.790587016 +0000 UTC m=+0.568506557,LastTimestamp:2024-12-13 13:33:34.790587016 +0000 UTC m=+0.568506557,Count:1,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:localhost,}"
Dec 13 13:33:34.796720 kubelet[2278]: E1213 13:33:34.796008 2278 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://10.0.0.150:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/localhost?timeout=10s\": dial tcp 10.0.0.150:6443: connect: connection refused" interval="200ms"
Dec 13 13:33:34.796720 kubelet[2278]: W1213 13:33:34.796260 2278 reflector.go:547] k8s.io/client-go/informers/factory.go:160: failed to list *v1.CSIDriver: Get "https://10.0.0.150:6443/apis/storage.k8s.io/v1/csidrivers?limit=500&resourceVersion=0": dial tcp 10.0.0.150:6443: connect: connection refused
Dec 13 13:33:34.796720 kubelet[2278]: E1213 13:33:34.796293 2278 reflector.go:150] k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.CSIDriver: failed to list *v1.CSIDriver: Get "https://10.0.0.150:6443/apis/storage.k8s.io/v1/csidrivers?limit=500&resourceVersion=0": dial tcp 10.0.0.150:6443: connect: connection refused
Dec 13 13:33:34.796898 kubelet[2278]: I1213 13:33:34.796739 2278 desired_state_of_world_populator.go:149] "Desired state populator starts to run"
Dec 13 13:33:34.797198 kubelet[2278]: I1213 13:33:34.797179 2278 factory.go:221] Registration of the systemd container factory successfully
Dec 13 13:33:34.797286 kubelet[2278]: I1213 13:33:34.797269 2278 factory.go:219] Registration of the crio container factory failed: Get "http://%2Fvar%2Frun%2Fcrio%2Fcrio.sock/info": dial unix /var/run/crio/crio.sock: connect: no such file or directory
Dec 13 13:33:34.798548 kubelet[2278]: E1213 13:33:34.798531 2278 kubelet.go:1467] "Image garbage collection failed once. Stats initialization may not have completed yet" err="invalid capacity 0 on image filesystem"
Dec 13 13:33:34.798835 kubelet[2278]: I1213 13:33:34.798820 2278 factory.go:221] Registration of the containerd container factory successfully
Dec 13 13:33:34.809477 kubelet[2278]: I1213 13:33:34.809435 2278 kubelet_network_linux.go:50] "Initialized iptables rules." protocol="IPv4"
Dec 13 13:33:34.811813 kubelet[2278]: I1213 13:33:34.811792 2278 kubelet_network_linux.go:50] "Initialized iptables rules." protocol="IPv6"
Dec 13 13:33:34.811862 kubelet[2278]: I1213 13:33:34.811817 2278 status_manager.go:217] "Starting to sync pod status with apiserver"
Dec 13 13:33:34.811862 kubelet[2278]: I1213 13:33:34.811837 2278 kubelet.go:2337] "Starting kubelet main sync loop"
Dec 13 13:33:34.811903 kubelet[2278]: E1213 13:33:34.811874 2278 kubelet.go:2361] "Skipping pod synchronization" err="[container runtime status check may not have completed yet, PLEG is not healthy: pleg has yet to be successful]"
Dec 13 13:33:34.813644 kubelet[2278]: W1213 13:33:34.813603 2278 reflector.go:547] k8s.io/client-go/informers/factory.go:160: failed to list *v1.RuntimeClass: Get "https://10.0.0.150:6443/apis/node.k8s.io/v1/runtimeclasses?limit=500&resourceVersion=0": dial tcp 10.0.0.150:6443: connect: connection refused
Dec 13 13:33:34.813693 kubelet[2278]: E1213 13:33:34.813652 2278 reflector.go:150] k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.RuntimeClass: failed to list *v1.RuntimeClass: Get "https://10.0.0.150:6443/apis/node.k8s.io/v1/runtimeclasses?limit=500&resourceVersion=0": dial tcp 10.0.0.150:6443: connect: connection refused
Dec 13 13:33:34.814203 kubelet[2278]: I1213 13:33:34.814182 2278 cpu_manager.go:214] "Starting CPU manager" policy="none"
Dec 13 13:33:34.814203 kubelet[2278]: I1213 13:33:34.814199 2278 cpu_manager.go:215] "Reconciling" reconcilePeriod="10s"
Dec 13 13:33:34.814288 kubelet[2278]: I1213 13:33:34.814215 2278 state_mem.go:36] "Initialized new in-memory state store"
Dec 13 13:33:34.897098 kubelet[2278]: I1213 13:33:34.897056 2278 kubelet_node_status.go:73] "Attempting to register node" node="localhost"
Dec 13 13:33:34.897370 kubelet[2278]: E1213 13:33:34.897344 2278 kubelet_node_status.go:96] "Unable to register node with API server" err="Post \"https://10.0.0.150:6443/api/v1/nodes\": dial
tcp 10.0.0.150:6443: connect: connection refused" node="localhost" Dec 13 13:33:34.912612 kubelet[2278]: E1213 13:33:34.912585 2278 kubelet.go:2361] "Skipping pod synchronization" err="container runtime status check may not have completed yet" Dec 13 13:33:34.997131 kubelet[2278]: E1213 13:33:34.997098 2278 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://10.0.0.150:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/localhost?timeout=10s\": dial tcp 10.0.0.150:6443: connect: connection refused" interval="400ms" Dec 13 13:33:35.098577 kubelet[2278]: I1213 13:33:35.098479 2278 kubelet_node_status.go:73] "Attempting to register node" node="localhost" Dec 13 13:33:35.098747 kubelet[2278]: E1213 13:33:35.098710 2278 kubelet_node_status.go:96] "Unable to register node with API server" err="Post \"https://10.0.0.150:6443/api/v1/nodes\": dial tcp 10.0.0.150:6443: connect: connection refused" node="localhost" Dec 13 13:33:35.112893 kubelet[2278]: E1213 13:33:35.112847 2278 kubelet.go:2361] "Skipping pod synchronization" err="container runtime status check may not have completed yet" Dec 13 13:33:35.191211 kubelet[2278]: I1213 13:33:35.191181 2278 policy_none.go:49] "None policy: Start" Dec 13 13:33:35.191883 kubelet[2278]: I1213 13:33:35.191862 2278 memory_manager.go:170] "Starting memorymanager" policy="None" Dec 13 13:33:35.191952 kubelet[2278]: I1213 13:33:35.191888 2278 state_mem.go:35] "Initializing new in-memory state store" Dec 13 13:33:35.198670 systemd[1]: Created slice kubepods.slice - libcontainer container kubepods.slice. Dec 13 13:33:35.222024 systemd[1]: Created slice kubepods-burstable.slice - libcontainer container kubepods-burstable.slice. Dec 13 13:33:35.224875 systemd[1]: Created slice kubepods-besteffort.slice - libcontainer container kubepods-besteffort.slice. 
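The lease-controller entries above show the retry interval doubling on each refused connection: 200ms, then 400ms, with 800ms, 1.6s, and 3.2s following later in the log. That is plain exponential backoff against the API server at 10.0.0.150:6443. A minimal stand-alone sketch of the pattern (illustrative only, with an assumed cap; not kubelet's actual controller code):

package main

import (
	"fmt"
	"net"
	"time"
)

// retryWithBackoff models the doubling cadence visible in the log:
// 200ms -> 400ms -> 800ms -> 1.6s -> 3.2s, capped at max.
func retryWithBackoff(addr string, start, max time.Duration, attempts int) error {
	interval := start
	var lastErr error
	for i := 0; i < attempts; i++ {
		conn, err := net.DialTimeout("tcp", addr, 2*time.Second)
		if err == nil {
			conn.Close()
			return nil // API server reachable; the lease could now be created
		}
		lastErr = err
		fmt.Printf("attempt %d failed (%v); will retry, interval=%s\n", i+1, err, interval)
		time.Sleep(interval)
		if interval *= 2; interval > max {
			interval = max
		}
	}
	return fmt.Errorf("gave up after %d attempts: %w", attempts, lastErr)
}

func main() {
	// 10.0.0.150:6443 is the API server address from the records above;
	// the 7s cap is an assumption for the sketch.
	_ = retryWithBackoff("10.0.0.150:6443", 200*time.Millisecond, 7*time.Second, 5)
}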
Dec 13 13:33:35.234120 kubelet[2278]: I1213 13:33:35.234085 2278 manager.go:479] "Failed to read data from checkpoint" checkpoint="kubelet_internal_checkpoint" err="checkpoint is not found" Dec 13 13:33:35.234924 kubelet[2278]: I1213 13:33:35.234299 2278 container_log_manager.go:186] "Initializing container log rotate workers" workers=1 monitorPeriod="10s" Dec 13 13:33:35.234924 kubelet[2278]: I1213 13:33:35.234435 2278 plugin_manager.go:118] "Starting Kubelet Plugin Manager" Dec 13 13:33:35.235446 kubelet[2278]: E1213 13:33:35.235428 2278 eviction_manager.go:282] "Eviction manager: failed to get summary stats" err="failed to get node info: node \"localhost\" not found" Dec 13 13:33:35.397780 kubelet[2278]: E1213 13:33:35.397654 2278 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://10.0.0.150:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/localhost?timeout=10s\": dial tcp 10.0.0.150:6443: connect: connection refused" interval="800ms" Dec 13 13:33:35.500081 kubelet[2278]: I1213 13:33:35.500050 2278 kubelet_node_status.go:73] "Attempting to register node" node="localhost" Dec 13 13:33:35.500354 kubelet[2278]: E1213 13:33:35.500336 2278 kubelet_node_status.go:96] "Unable to register node with API server" err="Post \"https://10.0.0.150:6443/api/v1/nodes\": dial tcp 10.0.0.150:6443: connect: connection refused" node="localhost" Dec 13 13:33:35.513639 kubelet[2278]: I1213 13:33:35.513588 2278 topology_manager.go:215] "Topology Admit Handler" podUID="85197171737dd5059ff3276bd42cc394" podNamespace="kube-system" podName="kube-apiserver-localhost" Dec 13 13:33:35.514556 kubelet[2278]: I1213 13:33:35.514524 2278 topology_manager.go:215] "Topology Admit Handler" podUID="8a50003978138b3ab9890682eff4eae8" podNamespace="kube-system" podName="kube-controller-manager-localhost" Dec 13 13:33:35.515164 kubelet[2278]: I1213 13:33:35.515145 2278 topology_manager.go:215] "Topology Admit Handler" podUID="b107a98bcf27297d642d248711a3fc70" podNamespace="kube-system" podName="kube-scheduler-localhost" Dec 13 13:33:35.520538 systemd[1]: Created slice kubepods-burstable-pod85197171737dd5059ff3276bd42cc394.slice - libcontainer container kubepods-burstable-pod85197171737dd5059ff3276bd42cc394.slice. Dec 13 13:33:35.541115 systemd[1]: Created slice kubepods-burstable-pod8a50003978138b3ab9890682eff4eae8.slice - libcontainer container kubepods-burstable-pod8a50003978138b3ab9890682eff4eae8.slice. Dec 13 13:33:35.556750 systemd[1]: Created slice kubepods-burstable-podb107a98bcf27297d642d248711a3fc70.slice - libcontainer container kubepods-burstable-podb107a98bcf27297d642d248711a3fc70.slice. 
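The slice names in the systemd records above follow the kubelet's systemd cgroup-driver convention: kubepods-<qos>-pod<uid>.slice, with dashes in the pod UID rewritten to underscores because "-" is systemd's slice-hierarchy separator (guaranteed-QoS pods sit directly under kubepods.slice). A small sketch reproducing the names, using the burstable static-pod UID above and the besteffort UID of the kube-proxy pod created later in this log:

package main

import (
	"fmt"
	"strings"
)

// podSlice reconstructs the systemd slice names seen in the log.
// Dashes inside the UID become "_" so they are not parsed as
// slice-hierarchy separators by systemd.
func podSlice(qosClass, podUID string) string {
	uid := strings.ReplaceAll(podUID, "-", "_")
	if qosClass == "guaranteed" {
		return fmt.Sprintf("kubepods-pod%s.slice", uid)
	}
	return fmt.Sprintf("kubepods-%s-pod%s.slice", qosClass, uid)
}

func main() {
	fmt.Println(podSlice("burstable", "85197171737dd5059ff3276bd42cc394"))
	fmt.Println(podSlice("besteffort", "eb65c885-58e4-4e00-838a-6312b48565c7"))
}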
Dec 13 13:33:35.699085 kubelet[2278]: I1213 13:33:35.699031 2278 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"k8s-certs\" (UniqueName: \"kubernetes.io/host-path/85197171737dd5059ff3276bd42cc394-k8s-certs\") pod \"kube-apiserver-localhost\" (UID: \"85197171737dd5059ff3276bd42cc394\") " pod="kube-system/kube-apiserver-localhost" Dec 13 13:33:35.699085 kubelet[2278]: I1213 13:33:35.699089 2278 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"usr-share-ca-certificates\" (UniqueName: \"kubernetes.io/host-path/85197171737dd5059ff3276bd42cc394-usr-share-ca-certificates\") pod \"kube-apiserver-localhost\" (UID: \"85197171737dd5059ff3276bd42cc394\") " pod="kube-system/kube-apiserver-localhost" Dec 13 13:33:35.699226 kubelet[2278]: I1213 13:33:35.699110 2278 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"flexvolume-dir\" (UniqueName: \"kubernetes.io/host-path/8a50003978138b3ab9890682eff4eae8-flexvolume-dir\") pod \"kube-controller-manager-localhost\" (UID: \"8a50003978138b3ab9890682eff4eae8\") " pod="kube-system/kube-controller-manager-localhost" Dec 13 13:33:35.699226 kubelet[2278]: I1213 13:33:35.699124 2278 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"k8s-certs\" (UniqueName: \"kubernetes.io/host-path/8a50003978138b3ab9890682eff4eae8-k8s-certs\") pod \"kube-controller-manager-localhost\" (UID: \"8a50003978138b3ab9890682eff4eae8\") " pod="kube-system/kube-controller-manager-localhost" Dec 13 13:33:35.699226 kubelet[2278]: I1213 13:33:35.699138 2278 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kubeconfig\" (UniqueName: \"kubernetes.io/host-path/b107a98bcf27297d642d248711a3fc70-kubeconfig\") pod \"kube-scheduler-localhost\" (UID: \"b107a98bcf27297d642d248711a3fc70\") " pod="kube-system/kube-scheduler-localhost" Dec 13 13:33:35.699226 kubelet[2278]: I1213 13:33:35.699150 2278 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ca-certs\" (UniqueName: \"kubernetes.io/host-path/85197171737dd5059ff3276bd42cc394-ca-certs\") pod \"kube-apiserver-localhost\" (UID: \"85197171737dd5059ff3276bd42cc394\") " pod="kube-system/kube-apiserver-localhost" Dec 13 13:33:35.699226 kubelet[2278]: I1213 13:33:35.699167 2278 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ca-certs\" (UniqueName: \"kubernetes.io/host-path/8a50003978138b3ab9890682eff4eae8-ca-certs\") pod \"kube-controller-manager-localhost\" (UID: \"8a50003978138b3ab9890682eff4eae8\") " pod="kube-system/kube-controller-manager-localhost" Dec 13 13:33:35.699401 kubelet[2278]: I1213 13:33:35.699196 2278 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kubeconfig\" (UniqueName: \"kubernetes.io/host-path/8a50003978138b3ab9890682eff4eae8-kubeconfig\") pod \"kube-controller-manager-localhost\" (UID: \"8a50003978138b3ab9890682eff4eae8\") " pod="kube-system/kube-controller-manager-localhost" Dec 13 13:33:35.699401 kubelet[2278]: I1213 13:33:35.699224 2278 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"usr-share-ca-certificates\" (UniqueName: \"kubernetes.io/host-path/8a50003978138b3ab9890682eff4eae8-usr-share-ca-certificates\") pod \"kube-controller-manager-localhost\" (UID: \"8a50003978138b3ab9890682eff4eae8\") " 
pod="kube-system/kube-controller-manager-localhost" Dec 13 13:33:35.772625 kubelet[2278]: W1213 13:33:35.772580 2278 reflector.go:547] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Node: Get "https://10.0.0.150:6443/api/v1/nodes?fieldSelector=metadata.name%3Dlocalhost&limit=500&resourceVersion=0": dial tcp 10.0.0.150:6443: connect: connection refused Dec 13 13:33:35.772655 kubelet[2278]: E1213 13:33:35.772631 2278 reflector.go:150] k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Node: failed to list *v1.Node: Get "https://10.0.0.150:6443/api/v1/nodes?fieldSelector=metadata.name%3Dlocalhost&limit=500&resourceVersion=0": dial tcp 10.0.0.150:6443: connect: connection refused Dec 13 13:33:35.839697 kubelet[2278]: E1213 13:33:35.839668 2278 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Dec 13 13:33:35.840110 containerd[1460]: time="2024-12-13T13:33:35.840081588Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-apiserver-localhost,Uid:85197171737dd5059ff3276bd42cc394,Namespace:kube-system,Attempt:0,}" Dec 13 13:33:35.855303 kubelet[2278]: E1213 13:33:35.855272 2278 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Dec 13 13:33:35.855569 containerd[1460]: time="2024-12-13T13:33:35.855544184Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-controller-manager-localhost,Uid:8a50003978138b3ab9890682eff4eae8,Namespace:kube-system,Attempt:0,}" Dec 13 13:33:35.858810 kubelet[2278]: E1213 13:33:35.858762 2278 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Dec 13 13:33:35.859044 containerd[1460]: time="2024-12-13T13:33:35.859022159Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-scheduler-localhost,Uid:b107a98bcf27297d642d248711a3fc70,Namespace:kube-system,Attempt:0,}" Dec 13 13:33:35.881496 kubelet[2278]: W1213 13:33:35.881466 2278 reflector.go:547] k8s.io/client-go/informers/factory.go:160: failed to list *v1.CSIDriver: Get "https://10.0.0.150:6443/apis/storage.k8s.io/v1/csidrivers?limit=500&resourceVersion=0": dial tcp 10.0.0.150:6443: connect: connection refused Dec 13 13:33:35.881496 kubelet[2278]: E1213 13:33:35.881494 2278 reflector.go:150] k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.CSIDriver: failed to list *v1.CSIDriver: Get "https://10.0.0.150:6443/apis/storage.k8s.io/v1/csidrivers?limit=500&resourceVersion=0": dial tcp 10.0.0.150:6443: connect: connection refused Dec 13 13:33:36.105492 kubelet[2278]: W1213 13:33:36.105365 2278 reflector.go:547] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Service: Get "https://10.0.0.150:6443/api/v1/services?limit=500&resourceVersion=0": dial tcp 10.0.0.150:6443: connect: connection refused Dec 13 13:33:36.105492 kubelet[2278]: E1213 13:33:36.105420 2278 reflector.go:150] k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Service: failed to list *v1.Service: Get "https://10.0.0.150:6443/api/v1/services?limit=500&resourceVersion=0": dial tcp 10.0.0.150:6443: connect: connection refused Dec 13 13:33:36.198004 kubelet[2278]: E1213 13:33:36.197961 2278 controller.go:145] "Failed to ensure lease exists, will retry" err="Get 
\"https://10.0.0.150:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/localhost?timeout=10s\": dial tcp 10.0.0.150:6443: connect: connection refused" interval="1.6s" Dec 13 13:33:36.301924 kubelet[2278]: I1213 13:33:36.301891 2278 kubelet_node_status.go:73] "Attempting to register node" node="localhost" Dec 13 13:33:36.302157 kubelet[2278]: E1213 13:33:36.302126 2278 kubelet_node_status.go:96] "Unable to register node with API server" err="Post \"https://10.0.0.150:6443/api/v1/nodes\": dial tcp 10.0.0.150:6443: connect: connection refused" node="localhost" Dec 13 13:33:36.328506 kubelet[2278]: W1213 13:33:36.328473 2278 reflector.go:547] k8s.io/client-go/informers/factory.go:160: failed to list *v1.RuntimeClass: Get "https://10.0.0.150:6443/apis/node.k8s.io/v1/runtimeclasses?limit=500&resourceVersion=0": dial tcp 10.0.0.150:6443: connect: connection refused Dec 13 13:33:36.328506 kubelet[2278]: E1213 13:33:36.328506 2278 reflector.go:150] k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.RuntimeClass: failed to list *v1.RuntimeClass: Get "https://10.0.0.150:6443/apis/node.k8s.io/v1/runtimeclasses?limit=500&resourceVersion=0": dial tcp 10.0.0.150:6443: connect: connection refused Dec 13 13:33:36.818340 kubelet[2278]: E1213 13:33:36.818272 2278 certificate_manager.go:562] kubernetes.io/kube-apiserver-client-kubelet: Failed while requesting a signed certificate from the control plane: cannot create certificate signing request: Post "https://10.0.0.150:6443/apis/certificates.k8s.io/v1/certificatesigningrequests": dial tcp 10.0.0.150:6443: connect: connection refused Dec 13 13:33:37.477159 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount2027675046.mount: Deactivated successfully. Dec 13 13:33:37.483334 containerd[1460]: time="2024-12-13T13:33:37.483271992Z" level=info msg="ImageCreate event name:\"registry.k8s.io/pause:3.8\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"} labels:{key:\"io.cri-containerd.pinned\" value:\"pinned\"}" Dec 13 13:33:37.485206 containerd[1460]: time="2024-12-13T13:33:37.485166127Z" level=info msg="stop pulling image registry.k8s.io/pause:3.8: active requests=0, bytes read=312056" Dec 13 13:33:37.488021 containerd[1460]: time="2024-12-13T13:33:37.487968994Z" level=info msg="ImageCreate event name:\"sha256:4873874c08efc72e9729683a83ffbb7502ee729e9a5ac097723806ea7fa13517\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"} labels:{key:\"io.cri-containerd.pinned\" value:\"pinned\"}" Dec 13 13:33:37.489381 containerd[1460]: time="2024-12-13T13:33:37.489334588Z" level=info msg="ImageUpdate event name:\"registry.k8s.io/pause:3.8\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"} labels:{key:\"io.cri-containerd.pinned\" value:\"pinned\"}" Dec 13 13:33:37.490669 containerd[1460]: time="2024-12-13T13:33:37.490615229Z" level=info msg="stop pulling image registry.k8s.io/pause:3.8: active requests=0, bytes read=0" Dec 13 13:33:37.491604 containerd[1460]: time="2024-12-13T13:33:37.491571674Z" level=info msg="ImageUpdate event name:\"registry.k8s.io/pause:3.8\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"} labels:{key:\"io.cri-containerd.pinned\" value:\"pinned\"}" Dec 13 13:33:37.492574 containerd[1460]: time="2024-12-13T13:33:37.492508612Z" level=info msg="ImageCreate event name:\"registry.k8s.io/pause@sha256:9001185023633d17a2f98ff69b6ff2615b8ea02a825adffa40422f51dfdcde9d\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"} labels:{key:\"io.cri-containerd.pinned\" value:\"pinned\"}" Dec 
13 13:33:37.493343 containerd[1460]: time="2024-12-13T13:33:37.493294709Z" level=info msg="stop pulling image registry.k8s.io/pause:3.8: active requests=0, bytes read=0" Dec 13 13:33:37.493459 containerd[1460]: time="2024-12-13T13:33:37.493353783Z" level=info msg="Pulled image \"registry.k8s.io/pause:3.8\" with image id \"sha256:4873874c08efc72e9729683a83ffbb7502ee729e9a5ac097723806ea7fa13517\", repo tag \"registry.k8s.io/pause:3.8\", repo digest \"registry.k8s.io/pause@sha256:9001185023633d17a2f98ff69b6ff2615b8ea02a825adffa40422f51dfdcde9d\", size \"311286\" in 1.653194043s" Dec 13 13:33:37.497347 containerd[1460]: time="2024-12-13T13:33:37.497305747Z" level=info msg="Pulled image \"registry.k8s.io/pause:3.8\" with image id \"sha256:4873874c08efc72e9729683a83ffbb7502ee729e9a5ac097723806ea7fa13517\", repo tag \"registry.k8s.io/pause:3.8\", repo digest \"registry.k8s.io/pause@sha256:9001185023633d17a2f98ff69b6ff2615b8ea02a825adffa40422f51dfdcde9d\", size \"311286\" in 1.638235074s" Dec 13 13:33:37.498073 containerd[1460]: time="2024-12-13T13:33:37.498038330Z" level=info msg="Pulled image \"registry.k8s.io/pause:3.8\" with image id \"sha256:4873874c08efc72e9729683a83ffbb7502ee729e9a5ac097723806ea7fa13517\", repo tag \"registry.k8s.io/pause:3.8\", repo digest \"registry.k8s.io/pause@sha256:9001185023633d17a2f98ff69b6ff2615b8ea02a825adffa40422f51dfdcde9d\", size \"311286\" in 1.642434091s" Dec 13 13:33:37.671622 containerd[1460]: time="2024-12-13T13:33:37.671098993Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Dec 13 13:33:37.671622 containerd[1460]: time="2024-12-13T13:33:37.671171022Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Dec 13 13:33:37.671622 containerd[1460]: time="2024-12-13T13:33:37.671188686Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Dec 13 13:33:37.672353 containerd[1460]: time="2024-12-13T13:33:37.672053915Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Dec 13 13:33:37.672476 containerd[1460]: time="2024-12-13T13:33:37.672132618Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Dec 13 13:33:37.672476 containerd[1460]: time="2024-12-13T13:33:37.672177273Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Dec 13 13:33:37.672476 containerd[1460]: time="2024-12-13T13:33:37.672212511Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Dec 13 13:33:37.672476 containerd[1460]: time="2024-12-13T13:33:37.672346069Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Dec 13 13:33:37.673630 containerd[1460]: time="2024-12-13T13:33:37.670934746Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Dec 13 13:33:37.673630 containerd[1460]: time="2024-12-13T13:33:37.673461281Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." 
runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Dec 13 13:33:37.673630 containerd[1460]: time="2024-12-13T13:33:37.673479736Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Dec 13 13:33:37.673630 containerd[1460]: time="2024-12-13T13:33:37.673563527Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Dec 13 13:33:37.694487 systemd[1]: Started cri-containerd-fcfae072010f1a65cbb2af99ec3c673c302059aa034c38329340124088deaab1.scope - libcontainer container fcfae072010f1a65cbb2af99ec3c673c302059aa034c38329340124088deaab1. Dec 13 13:33:37.698696 systemd[1]: Started cri-containerd-6f1ec67b9980c9a56559345bdaf97f2e207dc39542b0d6da77d3288279d5362e.scope - libcontainer container 6f1ec67b9980c9a56559345bdaf97f2e207dc39542b0d6da77d3288279d5362e. Dec 13 13:33:37.700525 systemd[1]: Started cri-containerd-bf4c3eab374eab543b174d7a7bd3d2c6b579996ee4bdccf60eaa23487380a24d.scope - libcontainer container bf4c3eab374eab543b174d7a7bd3d2c6b579996ee4bdccf60eaa23487380a24d. Dec 13 13:33:37.739407 containerd[1460]: time="2024-12-13T13:33:37.739210559Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-apiserver-localhost,Uid:85197171737dd5059ff3276bd42cc394,Namespace:kube-system,Attempt:0,} returns sandbox id \"6f1ec67b9980c9a56559345bdaf97f2e207dc39542b0d6da77d3288279d5362e\"" Dec 13 13:33:37.740238 containerd[1460]: time="2024-12-13T13:33:37.740182204Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-controller-manager-localhost,Uid:8a50003978138b3ab9890682eff4eae8,Namespace:kube-system,Attempt:0,} returns sandbox id \"fcfae072010f1a65cbb2af99ec3c673c302059aa034c38329340124088deaab1\"" Dec 13 13:33:37.741015 kubelet[2278]: E1213 13:33:37.740965 2278 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Dec 13 13:33:37.742411 kubelet[2278]: E1213 13:33:37.742235 2278 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Dec 13 13:33:37.744345 containerd[1460]: time="2024-12-13T13:33:37.743766138Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-scheduler-localhost,Uid:b107a98bcf27297d642d248711a3fc70,Namespace:kube-system,Attempt:0,} returns sandbox id \"bf4c3eab374eab543b174d7a7bd3d2c6b579996ee4bdccf60eaa23487380a24d\"" Dec 13 13:33:37.744686 containerd[1460]: time="2024-12-13T13:33:37.744658871Z" level=info msg="CreateContainer within sandbox \"6f1ec67b9980c9a56559345bdaf97f2e207dc39542b0d6da77d3288279d5362e\" for container &ContainerMetadata{Name:kube-apiserver,Attempt:0,}" Dec 13 13:33:37.745303 containerd[1460]: time="2024-12-13T13:33:37.745272415Z" level=info msg="CreateContainer within sandbox \"fcfae072010f1a65cbb2af99ec3c673c302059aa034c38329340124088deaab1\" for container &ContainerMetadata{Name:kube-controller-manager,Attempt:0,}" Dec 13 13:33:37.745669 kubelet[2278]: E1213 13:33:37.745515 2278 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Dec 13 13:33:37.747581 containerd[1460]: time="2024-12-13T13:33:37.747545680Z" level=info msg="CreateContainer within sandbox \"bf4c3eab374eab543b174d7a7bd3d2c6b579996ee4bdccf60eaa23487380a24d\" for 
container &ContainerMetadata{Name:kube-scheduler,Attempt:0,}" Dec 13 13:33:37.777629 containerd[1460]: time="2024-12-13T13:33:37.777578663Z" level=info msg="CreateContainer within sandbox \"bf4c3eab374eab543b174d7a7bd3d2c6b579996ee4bdccf60eaa23487380a24d\" for &ContainerMetadata{Name:kube-scheduler,Attempt:0,} returns container id \"e8b891bf63aedc0473d2e026a23a43e2b0588c9bbecb93e4ea1b20c55ebf77a2\"" Dec 13 13:33:37.778416 containerd[1460]: time="2024-12-13T13:33:37.778384828Z" level=info msg="StartContainer for \"e8b891bf63aedc0473d2e026a23a43e2b0588c9bbecb93e4ea1b20c55ebf77a2\"" Dec 13 13:33:37.779420 containerd[1460]: time="2024-12-13T13:33:37.779384245Z" level=info msg="CreateContainer within sandbox \"6f1ec67b9980c9a56559345bdaf97f2e207dc39542b0d6da77d3288279d5362e\" for &ContainerMetadata{Name:kube-apiserver,Attempt:0,} returns container id \"b6661ac1e0b3e73d6f3bfec4a022995db19bdc1bdc41d6b00d3f1ff4626ce837\"" Dec 13 13:33:37.780155 containerd[1460]: time="2024-12-13T13:33:37.780073154Z" level=info msg="StartContainer for \"b6661ac1e0b3e73d6f3bfec4a022995db19bdc1bdc41d6b00d3f1ff4626ce837\"" Dec 13 13:33:37.781725 containerd[1460]: time="2024-12-13T13:33:37.781588128Z" level=info msg="CreateContainer within sandbox \"fcfae072010f1a65cbb2af99ec3c673c302059aa034c38329340124088deaab1\" for &ContainerMetadata{Name:kube-controller-manager,Attempt:0,} returns container id \"c51b5ca4f826b8dc1dffa21f8f350d8f70cfd1af0a0af19d1b0140b7e8236d1e\"" Dec 13 13:33:37.782228 containerd[1460]: time="2024-12-13T13:33:37.782202924Z" level=info msg="StartContainer for \"c51b5ca4f826b8dc1dffa21f8f350d8f70cfd1af0a0af19d1b0140b7e8236d1e\"" Dec 13 13:33:37.799433 kubelet[2278]: E1213 13:33:37.799375 2278 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://10.0.0.150:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/localhost?timeout=10s\": dial tcp 10.0.0.150:6443: connect: connection refused" interval="3.2s" Dec 13 13:33:37.807468 systemd[1]: Started cri-containerd-e8b891bf63aedc0473d2e026a23a43e2b0588c9bbecb93e4ea1b20c55ebf77a2.scope - libcontainer container e8b891bf63aedc0473d2e026a23a43e2b0588c9bbecb93e4ea1b20c55ebf77a2. Dec 13 13:33:37.810415 systemd[1]: Started cri-containerd-c51b5ca4f826b8dc1dffa21f8f350d8f70cfd1af0a0af19d1b0140b7e8236d1e.scope - libcontainer container c51b5ca4f826b8dc1dffa21f8f350d8f70cfd1af0a0af19d1b0140b7e8236d1e. Dec 13 13:33:37.814540 systemd[1]: Started cri-containerd-b6661ac1e0b3e73d6f3bfec4a022995db19bdc1bdc41d6b00d3f1ff4626ce837.scope - libcontainer container b6661ac1e0b3e73d6f3bfec4a022995db19bdc1bdc41d6b00d3f1ff4626ce837. 
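Taken together, the containerd records trace the CRI launch sequence for each static pod: RunPodSandbox returns a sandbox id, CreateContainer places a container inside that sandbox, and StartContainer runs it, with each container landing in its own transient cri-containerd-<id>.scope unit. The sketch below is only a schematic of that ordering with a fake in-memory runtime; the real CRI is a gRPC API with much richer request types:

package main

import "fmt"

// runtimeService is a schematic of the call ordering only,
// not the actual CRI RuntimeService interface.
type runtimeService interface {
	RunPodSandbox(name string) (sandboxID string, err error)
	CreateContainer(sandboxID, name string) (containerID string, err error)
	StartContainer(containerID string) error
}

func launchStaticPod(rt runtimeService, pod, container string) error {
	sb, err := rt.RunPodSandbox(pod)
	if err != nil {
		return fmt.Errorf("RunPodSandbox: %w", err)
	}
	cid, err := rt.CreateContainer(sb, container)
	if err != nil {
		return fmt.Errorf("CreateContainer in %q: %w", sb, err)
	}
	return rt.StartContainer(cid)
}

// fakeRuntime just hands out ids so the sequence can be exercised.
type fakeRuntime struct{ n int }

func (f *fakeRuntime) RunPodSandbox(name string) (string, error) {
	f.n++
	return fmt.Sprintf("sb-%d", f.n), nil
}
func (f *fakeRuntime) CreateContainer(sb, name string) (string, error) {
	f.n++
	return fmt.Sprintf("ctr-%d", f.n), nil
}
func (f *fakeRuntime) StartContainer(id string) error {
	fmt.Println("started", id)
	return nil
}

func main() {
	_ = launchStaticPod(&fakeRuntime{}, "kube-scheduler-localhost", "kube-scheduler")
}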
Dec 13 13:33:37.853354 containerd[1460]: time="2024-12-13T13:33:37.853288097Z" level=info msg="StartContainer for \"e8b891bf63aedc0473d2e026a23a43e2b0588c9bbecb93e4ea1b20c55ebf77a2\" returns successfully" Dec 13 13:33:37.860974 containerd[1460]: time="2024-12-13T13:33:37.860854976Z" level=info msg="StartContainer for \"c51b5ca4f826b8dc1dffa21f8f350d8f70cfd1af0a0af19d1b0140b7e8236d1e\" returns successfully" Dec 13 13:33:37.864893 containerd[1460]: time="2024-12-13T13:33:37.864848931Z" level=info msg="StartContainer for \"b6661ac1e0b3e73d6f3bfec4a022995db19bdc1bdc41d6b00d3f1ff4626ce837\" returns successfully" Dec 13 13:33:37.904104 kubelet[2278]: I1213 13:33:37.904076 2278 kubelet_node_status.go:73] "Attempting to register node" node="localhost" Dec 13 13:33:37.905262 kubelet[2278]: E1213 13:33:37.905218 2278 kubelet_node_status.go:96] "Unable to register node with API server" err="Post \"https://10.0.0.150:6443/api/v1/nodes\": dial tcp 10.0.0.150:6443: connect: connection refused" node="localhost" Dec 13 13:33:38.787653 kubelet[2278]: I1213 13:33:38.787612 2278 apiserver.go:52] "Watching apiserver" Dec 13 13:33:38.798962 kubelet[2278]: I1213 13:33:38.798937 2278 desired_state_of_world_populator.go:157] "Finished populating initial desired state of world" Dec 13 13:33:38.823810 kubelet[2278]: E1213 13:33:38.823786 2278 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Dec 13 13:33:38.824803 kubelet[2278]: E1213 13:33:38.824756 2278 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Dec 13 13:33:38.825912 kubelet[2278]: E1213 13:33:38.825863 2278 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Dec 13 13:33:39.136336 kubelet[2278]: E1213 13:33:39.136191 2278 csi_plugin.go:308] Failed to initialize CSINode: error updating CSINode annotation: timed out waiting for the condition; caused by: nodes "localhost" not found Dec 13 13:33:39.491108 kubelet[2278]: E1213 13:33:39.491079 2278 csi_plugin.go:308] Failed to initialize CSINode: error updating CSINode annotation: timed out waiting for the condition; caused by: nodes "localhost" not found Dec 13 13:33:39.828411 kubelet[2278]: E1213 13:33:39.828270 2278 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Dec 13 13:33:39.828411 kubelet[2278]: E1213 13:33:39.828279 2278 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Dec 13 13:33:39.828411 kubelet[2278]: E1213 13:33:39.828398 2278 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Dec 13 13:33:39.921619 kubelet[2278]: E1213 13:33:39.921586 2278 csi_plugin.go:308] Failed to initialize CSINode: error updating CSINode annotation: timed out waiting for the condition; caused by: nodes "localhost" not found Dec 13 13:33:40.828648 kubelet[2278]: E1213 13:33:40.828619 2278 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the 
applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Dec 13 13:33:40.847661 kubelet[2278]: E1213 13:33:40.847636 2278 csi_plugin.go:308] Failed to initialize CSINode: error updating CSINode annotation: timed out waiting for the condition; caused by: nodes "localhost" not found Dec 13 13:33:41.002786 kubelet[2278]: E1213 13:33:41.002758 2278 nodelease.go:49] "Failed to get node when trying to set owner ref to the node lease" err="nodes \"localhost\" not found" node="localhost" Dec 13 13:33:41.107349 kubelet[2278]: I1213 13:33:41.107235 2278 kubelet_node_status.go:73] "Attempting to register node" node="localhost" Dec 13 13:33:41.112808 kubelet[2278]: I1213 13:33:41.112781 2278 kubelet_node_status.go:76] "Successfully registered node" node="localhost" Dec 13 13:33:41.300540 systemd[1]: Reloading requested from client PID 2556 ('systemctl') (unit session-7.scope)... Dec 13 13:33:41.300556 systemd[1]: Reloading... Dec 13 13:33:41.378342 zram_generator::config[2600]: No configuration found. Dec 13 13:33:41.484871 systemd[1]: /usr/lib/systemd/system/docker.socket:6: ListenStream= references a path below legacy directory /var/run/, updating /var/run/docker.sock → /run/docker.sock; please update the unit file accordingly. Dec 13 13:33:41.573043 systemd[1]: Reloading finished in 272 ms. Dec 13 13:33:41.623236 systemd[1]: Stopping kubelet.service - kubelet: The Kubernetes Node Agent... Dec 13 13:33:41.640507 systemd[1]: kubelet.service: Deactivated successfully. Dec 13 13:33:41.640780 systemd[1]: Stopped kubelet.service - kubelet: The Kubernetes Node Agent. Dec 13 13:33:41.640829 systemd[1]: kubelet.service: Consumed 1.013s CPU time, 116.8M memory peak, 0B memory swap peak. Dec 13 13:33:41.650709 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent... Dec 13 13:33:41.792352 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent. Dec 13 13:33:41.796908 (kubelet)[2640]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS Dec 13 13:33:41.842577 kubelet[2640]: Flag --container-runtime-endpoint has been deprecated, This parameter should be set via the config file specified by the Kubelet's --config flag. See https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/ for more information. Dec 13 13:33:41.842577 kubelet[2640]: Flag --pod-infra-container-image has been deprecated, will be removed in a future release. Image garbage collector will get sandbox image information from CRI. Dec 13 13:33:41.842577 kubelet[2640]: Flag --volume-plugin-dir has been deprecated, This parameter should be set via the config file specified by the Kubelet's --config flag. See https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/ for more information. 
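The recurring "Nameserver limits exceeded" warnings reflect a glibc resolver limit: at most three nameserver lines are honored, so the kubelet applies the first three (1.1.1.1, 1.0.0.1, 8.8.8.8 here) and reports the rest as omitted. A rough sketch of that clipping, reading resolv.conf directly (the kubelet does this inside its DNS configurer, not with this exact code):

package main

import (
	"bufio"
	"fmt"
	"os"
	"strings"
)

// clipNameservers keeps the first `limit` nameserver entries and
// returns the rest as omitted, mirroring the log's warning.
func clipNameservers(path string, limit int) (applied, omitted []string, err error) {
	f, err := os.Open(path)
	if err != nil {
		return nil, nil, err
	}
	defer f.Close()

	var servers []string
	sc := bufio.NewScanner(f)
	for sc.Scan() {
		fields := strings.Fields(sc.Text())
		if len(fields) >= 2 && fields[0] == "nameserver" {
			servers = append(servers, fields[1])
		}
	}
	if err := sc.Err(); err != nil {
		return nil, nil, err
	}
	if len(servers) <= limit {
		return servers, nil, nil
	}
	return servers[:limit], servers[limit:], nil
}

func main() {
	applied, omitted, err := clipNameservers("/etc/resolv.conf", 3)
	if err != nil {
		fmt.Fprintln(os.Stderr, err)
		return
	}
	if len(omitted) > 0 {
		fmt.Printf("Nameserver limits exceeded: applied %v, omitted %v\n", applied, omitted)
	}
}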
Dec 13 13:33:41.843160 kubelet[2640]: I1213 13:33:41.842621 2640 server.go:205] "--pod-infra-container-image will not be pruned by the image garbage collector in kubelet and should also be set in the remote runtime" Dec 13 13:33:41.850474 kubelet[2640]: I1213 13:33:41.850026 2640 server.go:484] "Kubelet version" kubeletVersion="v1.30.1" Dec 13 13:33:41.852060 kubelet[2640]: I1213 13:33:41.850581 2640 server.go:486] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK="" Dec 13 13:33:41.852060 kubelet[2640]: I1213 13:33:41.850879 2640 server.go:927] "Client rotation is on, will bootstrap in background" Dec 13 13:33:41.852289 kubelet[2640]: I1213 13:33:41.852256 2640 certificate_store.go:130] Loading cert/key pair from "/var/lib/kubelet/pki/kubelet-client-current.pem". Dec 13 13:33:41.853660 kubelet[2640]: I1213 13:33:41.853602 2640 dynamic_cafile_content.go:157] "Starting controller" name="client-ca-bundle::/etc/kubernetes/pki/ca.crt" Dec 13 13:33:41.864209 kubelet[2640]: I1213 13:33:41.864179 2640 server.go:742] "--cgroups-per-qos enabled, but --cgroup-root was not specified. defaulting to /" Dec 13 13:33:41.864485 kubelet[2640]: I1213 13:33:41.864456 2640 container_manager_linux.go:265] "Container manager verified user specified cgroup-root exists" cgroupRoot=[] Dec 13 13:33:41.864623 kubelet[2640]: I1213 13:33:41.864485 2640 container_manager_linux.go:270] "Creating Container Manager object based on Node Config" nodeConfig={"NodeName":"localhost","RuntimeCgroupsName":"","SystemCgroupsName":"","KubeletCgroupsName":"","KubeletOOMScoreAdj":-999,"ContainerRuntime":"","CgroupsPerQOS":true,"CgroupRoot":"/","CgroupDriver":"systemd","KubeletRootDir":"/var/lib/kubelet","ProtectKernelDefaults":false,"KubeReservedCgroupName":"","SystemReservedCgroupName":"","ReservedSystemCPUs":{},"EnforceNodeAllocatable":{"pods":{}},"KubeReserved":null,"SystemReserved":null,"HardEvictionThresholds":[{"Signal":"nodefs.inodesFree","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.05},"GracePeriod":0,"MinReclaim":null},{"Signal":"imagefs.available","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.15},"GracePeriod":0,"MinReclaim":null},{"Signal":"imagefs.inodesFree","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.05},"GracePeriod":0,"MinReclaim":null},{"Signal":"memory.available","Operator":"LessThan","Value":{"Quantity":"100Mi","Percentage":0},"GracePeriod":0,"MinReclaim":null},{"Signal":"nodefs.available","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.1},"GracePeriod":0,"MinReclaim":null}],"QOSReserved":{},"CPUManagerPolicy":"none","CPUManagerPolicyOptions":null,"TopologyManagerScope":"container","CPUManagerReconcilePeriod":10000000000,"ExperimentalMemoryManagerPolicy":"None","ExperimentalMemoryManagerReservedMemory":null,"PodPidsLimit":-1,"EnforceCPULimits":true,"CPUCFSQuotaPeriod":100000000,"TopologyManagerPolicy":"none","TopologyManagerPolicyOptions":null} Dec 13 13:33:41.864700 kubelet[2640]: I1213 13:33:41.864644 2640 topology_manager.go:138] "Creating topology manager with none policy" Dec 13 13:33:41.864700 kubelet[2640]: I1213 13:33:41.864654 2640 container_manager_linux.go:301] "Creating device plugin manager" Dec 13 13:33:41.864700 kubelet[2640]: I1213 13:33:41.864695 2640 state_mem.go:36] "Initialized new in-memory state store" Dec 13 13:33:41.864800 kubelet[2640]: I1213 13:33:41.864788 2640 kubelet.go:400] "Attempting to sync node with API server" Dec 13 13:33:41.864827 kubelet[2640]: I1213 13:33:41.864807 2640 kubelet.go:301] "Adding static pod path" 
path="/etc/kubernetes/manifests" Dec 13 13:33:41.865303 kubelet[2640]: I1213 13:33:41.865210 2640 kubelet.go:312] "Adding apiserver pod source" Dec 13 13:33:41.865303 kubelet[2640]: I1213 13:33:41.865234 2640 apiserver.go:42] "Waiting for node sync before watching apiserver pods" Dec 13 13:33:41.866750 kubelet[2640]: I1213 13:33:41.866043 2640 kuberuntime_manager.go:261] "Container runtime initialized" containerRuntime="containerd" version="v1.7.23" apiVersion="v1" Dec 13 13:33:41.866750 kubelet[2640]: I1213 13:33:41.866478 2640 kubelet.go:815] "Not starting ClusterTrustBundle informer because we are in static kubelet mode" Dec 13 13:33:41.867283 kubelet[2640]: I1213 13:33:41.867243 2640 server.go:1264] "Started kubelet" Dec 13 13:33:41.870242 kubelet[2640]: I1213 13:33:41.868638 2640 ratelimit.go:55] "Setting rate limiting for endpoint" service="podresources" qps=100 burstTokens=10 Dec 13 13:33:41.870242 kubelet[2640]: I1213 13:33:41.868897 2640 server.go:227] "Starting to serve the podresources API" endpoint="unix:/var/lib/kubelet/pod-resources/kubelet.sock" Dec 13 13:33:41.870242 kubelet[2640]: I1213 13:33:41.868927 2640 server.go:163] "Starting to listen" address="0.0.0.0" port=10250 Dec 13 13:33:41.870242 kubelet[2640]: I1213 13:33:41.869118 2640 fs_resource_analyzer.go:67] "Starting FS ResourceAnalyzer" Dec 13 13:33:41.871233 kubelet[2640]: I1213 13:33:41.870736 2640 server.go:455] "Adding debug handlers to kubelet server" Dec 13 13:33:41.877540 kubelet[2640]: I1213 13:33:41.875903 2640 volume_manager.go:291] "Starting Kubelet Volume Manager" Dec 13 13:33:41.877540 kubelet[2640]: I1213 13:33:41.875996 2640 desired_state_of_world_populator.go:149] "Desired state populator starts to run" Dec 13 13:33:41.877540 kubelet[2640]: I1213 13:33:41.876106 2640 reconciler.go:26] "Reconciler: start to sync state" Dec 13 13:33:41.880374 kubelet[2640]: I1213 13:33:41.879625 2640 factory.go:221] Registration of the systemd container factory successfully Dec 13 13:33:41.880374 kubelet[2640]: I1213 13:33:41.879778 2640 factory.go:219] Registration of the crio container factory failed: Get "http://%2Fvar%2Frun%2Fcrio%2Fcrio.sock/info": dial unix /var/run/crio/crio.sock: connect: no such file or directory Dec 13 13:33:41.882766 kubelet[2640]: E1213 13:33:41.881271 2640 kubelet.go:1467] "Image garbage collection failed once. Stats initialization may not have completed yet" err="invalid capacity 0 on image filesystem" Dec 13 13:33:41.882766 kubelet[2640]: I1213 13:33:41.882288 2640 factory.go:221] Registration of the containerd container factory successfully Dec 13 13:33:41.891843 kubelet[2640]: I1213 13:33:41.891662 2640 kubelet_network_linux.go:50] "Initialized iptables rules." protocol="IPv4" Dec 13 13:33:41.892914 kubelet[2640]: I1213 13:33:41.892857 2640 kubelet_network_linux.go:50] "Initialized iptables rules." 
protocol="IPv6" Dec 13 13:33:41.892914 kubelet[2640]: I1213 13:33:41.892904 2640 status_manager.go:217] "Starting to sync pod status with apiserver" Dec 13 13:33:41.892914 kubelet[2640]: I1213 13:33:41.892924 2640 kubelet.go:2337] "Starting kubelet main sync loop" Dec 13 13:33:41.893136 kubelet[2640]: E1213 13:33:41.892967 2640 kubelet.go:2361] "Skipping pod synchronization" err="[container runtime status check may not have completed yet, PLEG is not healthy: pleg has yet to be successful]" Dec 13 13:33:41.915508 kubelet[2640]: I1213 13:33:41.915491 2640 cpu_manager.go:214] "Starting CPU manager" policy="none" Dec 13 13:33:41.915633 kubelet[2640]: I1213 13:33:41.915623 2640 cpu_manager.go:215] "Reconciling" reconcilePeriod="10s" Dec 13 13:33:41.915724 kubelet[2640]: I1213 13:33:41.915715 2640 state_mem.go:36] "Initialized new in-memory state store" Dec 13 13:33:41.915925 kubelet[2640]: I1213 13:33:41.915911 2640 state_mem.go:88] "Updated default CPUSet" cpuSet="" Dec 13 13:33:41.915993 kubelet[2640]: I1213 13:33:41.915972 2640 state_mem.go:96] "Updated CPUSet assignments" assignments={} Dec 13 13:33:41.916038 kubelet[2640]: I1213 13:33:41.916030 2640 policy_none.go:49] "None policy: Start" Dec 13 13:33:41.916682 kubelet[2640]: I1213 13:33:41.916536 2640 memory_manager.go:170] "Starting memorymanager" policy="None" Dec 13 13:33:41.916682 kubelet[2640]: I1213 13:33:41.916635 2640 state_mem.go:35] "Initializing new in-memory state store" Dec 13 13:33:41.916847 kubelet[2640]: I1213 13:33:41.916736 2640 state_mem.go:75] "Updated machine memory state" Dec 13 13:33:41.920892 kubelet[2640]: I1213 13:33:41.920859 2640 manager.go:479] "Failed to read data from checkpoint" checkpoint="kubelet_internal_checkpoint" err="checkpoint is not found" Dec 13 13:33:41.921252 kubelet[2640]: I1213 13:33:41.921202 2640 container_log_manager.go:186] "Initializing container log rotate workers" workers=1 monitorPeriod="10s" Dec 13 13:33:41.921773 kubelet[2640]: I1213 13:33:41.921745 2640 plugin_manager.go:118] "Starting Kubelet Plugin Manager" Dec 13 13:33:41.993881 kubelet[2640]: I1213 13:33:41.993822 2640 topology_manager.go:215] "Topology Admit Handler" podUID="8a50003978138b3ab9890682eff4eae8" podNamespace="kube-system" podName="kube-controller-manager-localhost" Dec 13 13:33:41.993985 kubelet[2640]: I1213 13:33:41.993941 2640 topology_manager.go:215] "Topology Admit Handler" podUID="b107a98bcf27297d642d248711a3fc70" podNamespace="kube-system" podName="kube-scheduler-localhost" Dec 13 13:33:41.994023 kubelet[2640]: I1213 13:33:41.993989 2640 topology_manager.go:215] "Topology Admit Handler" podUID="85197171737dd5059ff3276bd42cc394" podNamespace="kube-system" podName="kube-apiserver-localhost" Dec 13 13:33:42.028663 kubelet[2640]: I1213 13:33:42.028625 2640 kubelet_node_status.go:73] "Attempting to register node" node="localhost" Dec 13 13:33:42.034445 kubelet[2640]: I1213 13:33:42.034420 2640 kubelet_node_status.go:112] "Node was previously registered" node="localhost" Dec 13 13:33:42.034520 kubelet[2640]: I1213 13:33:42.034490 2640 kubelet_node_status.go:76] "Successfully registered node" node="localhost" Dec 13 13:33:42.177220 kubelet[2640]: I1213 13:33:42.177182 2640 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kubeconfig\" (UniqueName: \"kubernetes.io/host-path/b107a98bcf27297d642d248711a3fc70-kubeconfig\") pod \"kube-scheduler-localhost\" (UID: \"b107a98bcf27297d642d248711a3fc70\") " pod="kube-system/kube-scheduler-localhost" Dec 13 13:33:42.177220 
kubelet[2640]: I1213 13:33:42.177219 2640 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ca-certs\" (UniqueName: \"kubernetes.io/host-path/85197171737dd5059ff3276bd42cc394-ca-certs\") pod \"kube-apiserver-localhost\" (UID: \"85197171737dd5059ff3276bd42cc394\") " pod="kube-system/kube-apiserver-localhost" Dec 13 13:33:42.177372 kubelet[2640]: I1213 13:33:42.177239 2640 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"k8s-certs\" (UniqueName: \"kubernetes.io/host-path/85197171737dd5059ff3276bd42cc394-k8s-certs\") pod \"kube-apiserver-localhost\" (UID: \"85197171737dd5059ff3276bd42cc394\") " pod="kube-system/kube-apiserver-localhost" Dec 13 13:33:42.177372 kubelet[2640]: I1213 13:33:42.177254 2640 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ca-certs\" (UniqueName: \"kubernetes.io/host-path/8a50003978138b3ab9890682eff4eae8-ca-certs\") pod \"kube-controller-manager-localhost\" (UID: \"8a50003978138b3ab9890682eff4eae8\") " pod="kube-system/kube-controller-manager-localhost" Dec 13 13:33:42.177372 kubelet[2640]: I1213 13:33:42.177271 2640 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"usr-share-ca-certificates\" (UniqueName: \"kubernetes.io/host-path/85197171737dd5059ff3276bd42cc394-usr-share-ca-certificates\") pod \"kube-apiserver-localhost\" (UID: \"85197171737dd5059ff3276bd42cc394\") " pod="kube-system/kube-apiserver-localhost" Dec 13 13:33:42.177372 kubelet[2640]: I1213 13:33:42.177290 2640 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"flexvolume-dir\" (UniqueName: \"kubernetes.io/host-path/8a50003978138b3ab9890682eff4eae8-flexvolume-dir\") pod \"kube-controller-manager-localhost\" (UID: \"8a50003978138b3ab9890682eff4eae8\") " pod="kube-system/kube-controller-manager-localhost" Dec 13 13:33:42.177372 kubelet[2640]: I1213 13:33:42.177343 2640 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"k8s-certs\" (UniqueName: \"kubernetes.io/host-path/8a50003978138b3ab9890682eff4eae8-k8s-certs\") pod \"kube-controller-manager-localhost\" (UID: \"8a50003978138b3ab9890682eff4eae8\") " pod="kube-system/kube-controller-manager-localhost" Dec 13 13:33:42.177490 kubelet[2640]: I1213 13:33:42.177360 2640 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kubeconfig\" (UniqueName: \"kubernetes.io/host-path/8a50003978138b3ab9890682eff4eae8-kubeconfig\") pod \"kube-controller-manager-localhost\" (UID: \"8a50003978138b3ab9890682eff4eae8\") " pod="kube-system/kube-controller-manager-localhost" Dec 13 13:33:42.177490 kubelet[2640]: I1213 13:33:42.177376 2640 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"usr-share-ca-certificates\" (UniqueName: \"kubernetes.io/host-path/8a50003978138b3ab9890682eff4eae8-usr-share-ca-certificates\") pod \"kube-controller-manager-localhost\" (UID: \"8a50003978138b3ab9890682eff4eae8\") " pod="kube-system/kube-controller-manager-localhost" Dec 13 13:33:42.305772 kubelet[2640]: E1213 13:33:42.305741 2640 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Dec 13 13:33:42.306110 kubelet[2640]: E1213 13:33:42.305958 2640 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits 
were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Dec 13 13:33:42.306228 kubelet[2640]: E1213 13:33:42.306207 2640 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Dec 13 13:33:42.866760 kubelet[2640]: I1213 13:33:42.866709 2640 apiserver.go:52] "Watching apiserver" Dec 13 13:33:42.876518 kubelet[2640]: I1213 13:33:42.876478 2640 desired_state_of_world_populator.go:157] "Finished populating initial desired state of world" Dec 13 13:33:42.903707 kubelet[2640]: E1213 13:33:42.903610 2640 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Dec 13 13:33:42.903707 kubelet[2640]: E1213 13:33:42.903634 2640 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Dec 13 13:33:43.281497 kubelet[2640]: E1213 13:33:43.280790 2640 kubelet.go:1928] "Failed creating a mirror pod for" err="pods \"kube-controller-manager-localhost\" already exists" pod="kube-system/kube-controller-manager-localhost" Dec 13 13:33:43.281497 kubelet[2640]: E1213 13:33:43.281117 2640 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Dec 13 13:33:43.296836 kubelet[2640]: I1213 13:33:43.296767 2640 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/kube-apiserver-localhost" podStartSLOduration=2.29673845 podStartE2EDuration="2.29673845s" podCreationTimestamp="2024-12-13 13:33:41 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2024-12-13 13:33:43.278935569 +0000 UTC m=+1.478334920" watchObservedRunningTime="2024-12-13 13:33:43.29673845 +0000 UTC m=+1.496137801" Dec 13 13:33:43.308465 kubelet[2640]: I1213 13:33:43.307024 2640 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/kube-scheduler-localhost" podStartSLOduration=2.307008707 podStartE2EDuration="2.307008707s" podCreationTimestamp="2024-12-13 13:33:41 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2024-12-13 13:33:43.298136513 +0000 UTC m=+1.497535864" watchObservedRunningTime="2024-12-13 13:33:43.307008707 +0000 UTC m=+1.506408058" Dec 13 13:33:43.324367 kubelet[2640]: I1213 13:33:43.324272 2640 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/kube-controller-manager-localhost" podStartSLOduration=2.324252829 podStartE2EDuration="2.324252829s" podCreationTimestamp="2024-12-13 13:33:41 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2024-12-13 13:33:43.307197538 +0000 UTC m=+1.506596899" watchObservedRunningTime="2024-12-13 13:33:43.324252829 +0000 UTC m=+1.523652180" Dec 13 13:33:43.905847 kubelet[2640]: E1213 13:33:43.905815 2640 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Dec 13 13:33:43.906275 kubelet[2640]: E1213 13:33:43.906052 2640 dns.go:153] 
"Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Dec 13 13:33:46.006530 sudo[1644]: pam_unix(sudo:session): session closed for user root Dec 13 13:33:46.008166 sshd[1643]: Connection closed by 10.0.0.1 port 51734 Dec 13 13:33:46.008681 sshd-session[1641]: pam_unix(sshd:session): session closed for user core Dec 13 13:33:46.013191 systemd[1]: sshd@6-10.0.0.150:22-10.0.0.1:51734.service: Deactivated successfully. Dec 13 13:33:46.015145 systemd[1]: session-7.scope: Deactivated successfully. Dec 13 13:33:46.015351 systemd[1]: session-7.scope: Consumed 4.190s CPU time, 191.0M memory peak, 0B memory swap peak. Dec 13 13:33:46.015871 systemd-logind[1445]: Session 7 logged out. Waiting for processes to exit. Dec 13 13:33:46.016685 systemd-logind[1445]: Removed session 7. Dec 13 13:33:47.423416 kubelet[2640]: E1213 13:33:47.423380 2640 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Dec 13 13:33:47.856620 update_engine[1448]: I20241213 13:33:47.856491 1448 update_attempter.cc:509] Updating boot flags... Dec 13 13:33:47.892359 kernel: BTRFS warning: duplicate device /dev/vda3 devid 1 generation 36 scanned by (udev-worker) (2736) Dec 13 13:33:47.910935 kubelet[2640]: E1213 13:33:47.910167 2640 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Dec 13 13:33:47.931355 kernel: BTRFS warning: duplicate device /dev/vda3 devid 1 generation 36 scanned by (udev-worker) (2734) Dec 13 13:33:47.968384 kernel: BTRFS warning: duplicate device /dev/vda3 devid 1 generation 36 scanned by (udev-worker) (2734) Dec 13 13:33:48.451209 kubelet[2640]: E1213 13:33:48.451174 2640 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Dec 13 13:33:48.831992 kubelet[2640]: E1213 13:33:48.831868 2640 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Dec 13 13:33:48.910633 kubelet[2640]: E1213 13:33:48.910585 2640 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Dec 13 13:33:48.911112 kubelet[2640]: E1213 13:33:48.911001 2640 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Dec 13 13:33:48.911112 kubelet[2640]: E1213 13:33:48.911026 2640 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Dec 13 13:33:55.079871 kubelet[2640]: I1213 13:33:55.079832 2640 kuberuntime_manager.go:1523] "Updating runtime config through cri with podcidr" CIDR="192.168.0.0/24" Dec 13 13:33:55.080358 containerd[1460]: time="2024-12-13T13:33:55.080239693Z" level=info msg="No cni config template is specified, wait for other system components to drop the config." 
Dec 13 13:33:55.081063 kubelet[2640]: I1213 13:33:55.080423 2640 kubelet_network.go:61] "Updating Pod CIDR" originalPodCIDR="" newPodCIDR="192.168.0.0/24" Dec 13 13:33:55.542898 kubelet[2640]: I1213 13:33:55.542259 2640 topology_manager.go:215] "Topology Admit Handler" podUID="eb65c885-58e4-4e00-838a-6312b48565c7" podNamespace="kube-system" podName="kube-proxy-nfrmd" Dec 13 13:33:55.550113 systemd[1]: Created slice kubepods-besteffort-podeb65c885_58e4_4e00_838a_6312b48565c7.slice - libcontainer container kubepods-besteffort-podeb65c885_58e4_4e00_838a_6312b48565c7.slice. Dec 13 13:33:55.559568 kubelet[2640]: I1213 13:33:55.559530 2640 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/eb65c885-58e4-4e00-838a-6312b48565c7-lib-modules\") pod \"kube-proxy-nfrmd\" (UID: \"eb65c885-58e4-4e00-838a-6312b48565c7\") " pod="kube-system/kube-proxy-nfrmd" Dec 13 13:33:55.559568 kubelet[2640]: I1213 13:33:55.559560 2640 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-proxy\" (UniqueName: \"kubernetes.io/configmap/eb65c885-58e4-4e00-838a-6312b48565c7-kube-proxy\") pod \"kube-proxy-nfrmd\" (UID: \"eb65c885-58e4-4e00-838a-6312b48565c7\") " pod="kube-system/kube-proxy-nfrmd" Dec 13 13:33:55.559568 kubelet[2640]: I1213 13:33:55.559576 2640 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"xtables-lock\" (UniqueName: \"kubernetes.io/host-path/eb65c885-58e4-4e00-838a-6312b48565c7-xtables-lock\") pod \"kube-proxy-nfrmd\" (UID: \"eb65c885-58e4-4e00-838a-6312b48565c7\") " pod="kube-system/kube-proxy-nfrmd" Dec 13 13:33:55.559749 kubelet[2640]: I1213 13:33:55.559592 2640 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-nqwzs\" (UniqueName: \"kubernetes.io/projected/eb65c885-58e4-4e00-838a-6312b48565c7-kube-api-access-nqwzs\") pod \"kube-proxy-nfrmd\" (UID: \"eb65c885-58e4-4e00-838a-6312b48565c7\") " pod="kube-system/kube-proxy-nfrmd" Dec 13 13:33:55.664050 kubelet[2640]: E1213 13:33:55.664010 2640 projected.go:294] Couldn't get configMap kube-system/kube-root-ca.crt: configmap "kube-root-ca.crt" not found Dec 13 13:33:55.664050 kubelet[2640]: E1213 13:33:55.664043 2640 projected.go:200] Error preparing data for projected volume kube-api-access-nqwzs for pod kube-system/kube-proxy-nfrmd: configmap "kube-root-ca.crt" not found Dec 13 13:33:55.664198 kubelet[2640]: E1213 13:33:55.664104 2640 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/projected/eb65c885-58e4-4e00-838a-6312b48565c7-kube-api-access-nqwzs podName:eb65c885-58e4-4e00-838a-6312b48565c7 nodeName:}" failed. No retries permitted until 2024-12-13 13:33:56.164081783 +0000 UTC m=+14.363481144 (durationBeforeRetry 500ms). 
Error: MountVolume.SetUp failed for volume "kube-api-access-nqwzs" (UniqueName: "kubernetes.io/projected/eb65c885-58e4-4e00-838a-6312b48565c7-kube-api-access-nqwzs") pod "kube-proxy-nfrmd" (UID: "eb65c885-58e4-4e00-838a-6312b48565c7") : configmap "kube-root-ca.crt" not found Dec 13 13:33:56.203260 kubelet[2640]: I1213 13:33:56.203159 2640 topology_manager.go:215] "Topology Admit Handler" podUID="cadb4860-08a5-4b3a-aa11-223e7b1fbd53" podNamespace="tigera-operator" podName="tigera-operator-7bc55997bb-brl7f" Dec 13 13:33:56.211369 systemd[1]: Created slice kubepods-besteffort-podcadb4860_08a5_4b3a_aa11_223e7b1fbd53.slice - libcontainer container kubepods-besteffort-podcadb4860_08a5_4b3a_aa11_223e7b1fbd53.slice. Dec 13 13:33:56.264554 kubelet[2640]: I1213 13:33:56.264501 2640 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"var-lib-calico\" (UniqueName: \"kubernetes.io/host-path/cadb4860-08a5-4b3a-aa11-223e7b1fbd53-var-lib-calico\") pod \"tigera-operator-7bc55997bb-brl7f\" (UID: \"cadb4860-08a5-4b3a-aa11-223e7b1fbd53\") " pod="tigera-operator/tigera-operator-7bc55997bb-brl7f" Dec 13 13:33:56.264766 kubelet[2640]: I1213 13:33:56.264568 2640 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-gvfvl\" (UniqueName: \"kubernetes.io/projected/cadb4860-08a5-4b3a-aa11-223e7b1fbd53-kube-api-access-gvfvl\") pod \"tigera-operator-7bc55997bb-brl7f\" (UID: \"cadb4860-08a5-4b3a-aa11-223e7b1fbd53\") " pod="tigera-operator/tigera-operator-7bc55997bb-brl7f" Dec 13 13:33:56.459437 kubelet[2640]: E1213 13:33:56.459304 2640 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Dec 13 13:33:56.459960 containerd[1460]: time="2024-12-13T13:33:56.459927883Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-proxy-nfrmd,Uid:eb65c885-58e4-4e00-838a-6312b48565c7,Namespace:kube-system,Attempt:0,}" Dec 13 13:33:56.482214 containerd[1460]: time="2024-12-13T13:33:56.482137912Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Dec 13 13:33:56.482214 containerd[1460]: time="2024-12-13T13:33:56.482185322Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Dec 13 13:33:56.482214 containerd[1460]: time="2024-12-13T13:33:56.482195521Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Dec 13 13:33:56.482409 containerd[1460]: time="2024-12-13T13:33:56.482261246Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Dec 13 13:33:56.503443 systemd[1]: Started cri-containerd-08cf71d29a0406d2a56e4a8ff3260d020c5d0a320d6e9d8a9a5fcfff9a17322d.scope - libcontainer container 08cf71d29a0406d2a56e4a8ff3260d020c5d0a320d6e9d8a9a5fcfff9a17322d. 
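[Note: the MountVolume.SetUp failure above is transient. kube-controller-manager publishes the kube-root-ca.crt ConfigMap into each new namespace, and nestedpendingoperations simply schedules a retry with exponential backoff, starting at the durationBeforeRetry of 500ms shown in the log. A rough Go sketch of that retry pattern follows; the doubling factor and cap are assumptions for illustration, not values taken from the kubelet source.]

    package main

    import (
        "errors"
        "fmt"
        "time"
    )

    // retryWithBackoff retries op, doubling the delay after each
    // failure, starting at 500ms and capping at maxDelay.
    func retryWithBackoff(op func() error, maxDelay time.Duration) error {
        delay := 500 * time.Millisecond
        for {
            if err := op(); err == nil {
                return nil
            } else {
                fmt.Printf("failed: %v; no retries permitted until %s\n",
                    err, time.Now().Add(delay).Format(time.RFC3339))
                time.Sleep(delay)
            }
            if delay *= 2; delay > maxDelay {
                delay = maxDelay
            }
        }
    }

    func main() {
        attempts := 0
        _ = retryWithBackoff(func() error {
            attempts++
            if attempts < 3 {
                return errors.New(`configmap "kube-root-ca.crt" not found`)
            }
            return nil
        }, 2*time.Minute)
        fmt.Println("volume mounted after", attempts, "attempts")
    }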
Dec 13 13:33:56.514729 containerd[1460]: time="2024-12-13T13:33:56.514686476Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:tigera-operator-7bc55997bb-brl7f,Uid:cadb4860-08a5-4b3a-aa11-223e7b1fbd53,Namespace:tigera-operator,Attempt:0,}" Dec 13 13:33:56.524359 containerd[1460]: time="2024-12-13T13:33:56.524259452Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-proxy-nfrmd,Uid:eb65c885-58e4-4e00-838a-6312b48565c7,Namespace:kube-system,Attempt:0,} returns sandbox id \"08cf71d29a0406d2a56e4a8ff3260d020c5d0a320d6e9d8a9a5fcfff9a17322d\"" Dec 13 13:33:56.525010 kubelet[2640]: E1213 13:33:56.524986 2640 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Dec 13 13:33:56.526908 containerd[1460]: time="2024-12-13T13:33:56.526876944Z" level=info msg="CreateContainer within sandbox \"08cf71d29a0406d2a56e4a8ff3260d020c5d0a320d6e9d8a9a5fcfff9a17322d\" for container &ContainerMetadata{Name:kube-proxy,Attempt:0,}" Dec 13 13:33:56.541526 containerd[1460]: time="2024-12-13T13:33:56.541415753Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Dec 13 13:33:56.541526 containerd[1460]: time="2024-12-13T13:33:56.541492718Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Dec 13 13:33:56.541526 containerd[1460]: time="2024-12-13T13:33:56.541512756Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Dec 13 13:33:56.541755 containerd[1460]: time="2024-12-13T13:33:56.541660615Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Dec 13 13:33:56.543426 containerd[1460]: time="2024-12-13T13:33:56.543391490Z" level=info msg="CreateContainer within sandbox \"08cf71d29a0406d2a56e4a8ff3260d020c5d0a320d6e9d8a9a5fcfff9a17322d\" for &ContainerMetadata{Name:kube-proxy,Attempt:0,} returns container id \"81f45b8d7101d90a6508589efb121ad7fd483d4b67ae940621b7db0ab9b41acf\"" Dec 13 13:33:56.544696 containerd[1460]: time="2024-12-13T13:33:56.544152529Z" level=info msg="StartContainer for \"81f45b8d7101d90a6508589efb121ad7fd483d4b67ae940621b7db0ab9b41acf\"" Dec 13 13:33:56.560549 systemd[1]: Started cri-containerd-437c480580fa7081fa04968ba882741ad3e8969c38a3379d1adf7a21cedc1637.scope - libcontainer container 437c480580fa7081fa04968ba882741ad3e8969c38a3379d1adf7a21cedc1637. Dec 13 13:33:56.569387 systemd[1]: Started cri-containerd-81f45b8d7101d90a6508589efb121ad7fd483d4b67ae940621b7db0ab9b41acf.scope - libcontainer container 81f45b8d7101d90a6508589efb121ad7fd483d4b67ae940621b7db0ab9b41acf. 
Dec 13 13:33:56.597158 containerd[1460]: time="2024-12-13T13:33:56.597113411Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:tigera-operator-7bc55997bb-brl7f,Uid:cadb4860-08a5-4b3a-aa11-223e7b1fbd53,Namespace:tigera-operator,Attempt:0,} returns sandbox id \"437c480580fa7081fa04968ba882741ad3e8969c38a3379d1adf7a21cedc1637\"" Dec 13 13:33:56.599344 containerd[1460]: time="2024-12-13T13:33:56.599258107Z" level=info msg="PullImage \"quay.io/tigera/operator:v1.36.2\"" Dec 13 13:33:56.604047 containerd[1460]: time="2024-12-13T13:33:56.604019272Z" level=info msg="StartContainer for \"81f45b8d7101d90a6508589efb121ad7fd483d4b67ae940621b7db0ab9b41acf\" returns successfully" Dec 13 13:33:56.921461 kubelet[2640]: E1213 13:33:56.921427 2640 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Dec 13 13:33:56.929927 kubelet[2640]: I1213 13:33:56.929879 2640 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/kube-proxy-nfrmd" podStartSLOduration=1.9298648969999999 podStartE2EDuration="1.929864897s" podCreationTimestamp="2024-12-13 13:33:55 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2024-12-13 13:33:56.929395409 +0000 UTC m=+15.128794760" watchObservedRunningTime="2024-12-13 13:33:56.929864897 +0000 UTC m=+15.129264248" Dec 13 13:33:57.271556 systemd[1]: run-containerd-runc-k8s.io-08cf71d29a0406d2a56e4a8ff3260d020c5d0a320d6e9d8a9a5fcfff9a17322d-runc.wVTYer.mount: Deactivated successfully. Dec 13 13:33:58.688627 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount1308381202.mount: Deactivated successfully. 
Dec 13 13:34:00.191398 containerd[1460]: time="2024-12-13T13:34:00.191352896Z" level=info msg="ImageCreate event name:\"quay.io/tigera/operator:v1.36.2\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Dec 13 13:34:00.192059 containerd[1460]: time="2024-12-13T13:34:00.192025236Z" level=info msg="stop pulling image quay.io/tigera/operator:v1.36.2: active requests=0, bytes read=21764317" Dec 13 13:34:00.193161 containerd[1460]: time="2024-12-13T13:34:00.193136112Z" level=info msg="ImageCreate event name:\"sha256:3045aa4a360d468ed15090f280e94c54bf4678269a6e863a9ebcf5b31534a346\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Dec 13 13:34:00.195241 containerd[1460]: time="2024-12-13T13:34:00.195214616Z" level=info msg="ImageCreate event name:\"quay.io/tigera/operator@sha256:fc9ea45f2475fd99db1b36d2ff180a50017b1a5ea0e82a171c6b439b3a620764\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Dec 13 13:34:00.195915 containerd[1460]: time="2024-12-13T13:34:00.195886856Z" level=info msg="Pulled image \"quay.io/tigera/operator:v1.36.2\" with image id \"sha256:3045aa4a360d468ed15090f280e94c54bf4678269a6e863a9ebcf5b31534a346\", repo tag \"quay.io/tigera/operator:v1.36.2\", repo digest \"quay.io/tigera/operator@sha256:fc9ea45f2475fd99db1b36d2ff180a50017b1a5ea0e82a171c6b439b3a620764\", size \"21758492\" in 3.596597879s" Dec 13 13:34:00.195953 containerd[1460]: time="2024-12-13T13:34:00.195915229Z" level=info msg="PullImage \"quay.io/tigera/operator:v1.36.2\" returns image reference \"sha256:3045aa4a360d468ed15090f280e94c54bf4678269a6e863a9ebcf5b31534a346\"" Dec 13 13:34:00.197404 containerd[1460]: time="2024-12-13T13:34:00.197345369Z" level=info msg="CreateContainer within sandbox \"437c480580fa7081fa04968ba882741ad3e8969c38a3379d1adf7a21cedc1637\" for container &ContainerMetadata{Name:tigera-operator,Attempt:0,}" Dec 13 13:34:00.208188 containerd[1460]: time="2024-12-13T13:34:00.208151490Z" level=info msg="CreateContainer within sandbox \"437c480580fa7081fa04968ba882741ad3e8969c38a3379d1adf7a21cedc1637\" for &ContainerMetadata{Name:tigera-operator,Attempt:0,} returns container id \"9bdfba06b631d63e93a6aa43f6b87ae53ddc00eb38d3e1f429ede0ae8a21f9bc\"" Dec 13 13:34:00.208546 containerd[1460]: time="2024-12-13T13:34:00.208525084Z" level=info msg="StartContainer for \"9bdfba06b631d63e93a6aa43f6b87ae53ddc00eb38d3e1f429ede0ae8a21f9bc\"" Dec 13 13:34:00.238462 systemd[1]: Started cri-containerd-9bdfba06b631d63e93a6aa43f6b87ae53ddc00eb38d3e1f429ede0ae8a21f9bc.scope - libcontainer container 9bdfba06b631d63e93a6aa43f6b87ae53ddc00eb38d3e1f429ede0ae8a21f9bc. 
Dec 13 13:34:00.265576 containerd[1460]: time="2024-12-13T13:34:00.265539650Z" level=info msg="StartContainer for \"9bdfba06b631d63e93a6aa43f6b87ae53ddc00eb38d3e1f429ede0ae8a21f9bc\" returns successfully" Dec 13 13:34:00.933105 kubelet[2640]: I1213 13:34:00.933039 2640 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="tigera-operator/tigera-operator-7bc55997bb-brl7f" podStartSLOduration=1.335406699 podStartE2EDuration="4.932960234s" podCreationTimestamp="2024-12-13 13:33:56 +0000 UTC" firstStartedPulling="2024-12-13 13:33:56.598868831 +0000 UTC m=+14.798268182" lastFinishedPulling="2024-12-13 13:34:00.196422366 +0000 UTC m=+18.395821717" observedRunningTime="2024-12-13 13:34:00.932730951 +0000 UTC m=+19.132130302" watchObservedRunningTime="2024-12-13 13:34:00.932960234 +0000 UTC m=+19.132359605" Dec 13 13:34:03.183781 kubelet[2640]: I1213 13:34:03.183733 2640 topology_manager.go:215] "Topology Admit Handler" podUID="b09e7272-39ac-4770-9df6-4d637a5ab23f" podNamespace="calico-system" podName="calico-typha-6fc765cfc-l6p9z" Dec 13 13:34:03.193665 systemd[1]: Created slice kubepods-besteffort-podb09e7272_39ac_4770_9df6_4d637a5ab23f.slice - libcontainer container kubepods-besteffort-podb09e7272_39ac_4770_9df6_4d637a5ab23f.slice. Dec 13 13:34:03.200786 kubelet[2640]: I1213 13:34:03.200739 2640 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"typha-certs\" (UniqueName: \"kubernetes.io/secret/b09e7272-39ac-4770-9df6-4d637a5ab23f-typha-certs\") pod \"calico-typha-6fc765cfc-l6p9z\" (UID: \"b09e7272-39ac-4770-9df6-4d637a5ab23f\") " pod="calico-system/calico-typha-6fc765cfc-l6p9z" Dec 13 13:34:03.200786 kubelet[2640]: I1213 13:34:03.200790 2640 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-7krxh\" (UniqueName: \"kubernetes.io/projected/b09e7272-39ac-4770-9df6-4d637a5ab23f-kube-api-access-7krxh\") pod \"calico-typha-6fc765cfc-l6p9z\" (UID: \"b09e7272-39ac-4770-9df6-4d637a5ab23f\") " pod="calico-system/calico-typha-6fc765cfc-l6p9z" Dec 13 13:34:03.200965 kubelet[2640]: I1213 13:34:03.200813 2640 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"tigera-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/b09e7272-39ac-4770-9df6-4d637a5ab23f-tigera-ca-bundle\") pod \"calico-typha-6fc765cfc-l6p9z\" (UID: \"b09e7272-39ac-4770-9df6-4d637a5ab23f\") " pod="calico-system/calico-typha-6fc765cfc-l6p9z" Dec 13 13:34:03.235929 kubelet[2640]: I1213 13:34:03.235876 2640 topology_manager.go:215] "Topology Admit Handler" podUID="c70a3ad5-c2ce-4547-a571-cb14a813e3cc" podNamespace="calico-system" podName="calico-node-fqnrq" Dec 13 13:34:03.242597 systemd[1]: Created slice kubepods-besteffort-podc70a3ad5_c2ce_4547_a571_cb14a813e3cc.slice - libcontainer container kubepods-besteffort-podc70a3ad5_c2ce_4547_a571_cb14a813e3cc.slice. 
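[Note: the pod_startup_latency_tracker entries make the SLO arithmetic visible: podStartSLOduration is the end-to-end startup time with the image-pull window subtracted. For tigera-operator above, 4.932960234s minus the pull span (13:34:00.196422366 less 13:33:56.598868831) yields exactly the logged 1.335406699s. A small worked check in Go, with the timestamps copied from the log:]

    package main

    import (
        "fmt"
        "time"
    )

    func main() {
        const layout = "2006-01-02 15:04:05.999999999 -0700 MST"
        parse := func(s string) time.Time {
            t, err := time.Parse(layout, s)
            if err != nil {
                panic(err)
            }
            return t
        }
        created := parse("2024-12-13 13:33:56 +0000 UTC")
        running := parse("2024-12-13 13:34:00.932960234 +0000 UTC")
        pullStart := parse("2024-12-13 13:33:56.598868831 +0000 UTC")
        pullEnd := parse("2024-12-13 13:34:00.196422366 +0000 UTC")

        e2e := running.Sub(created)         // podStartE2EDuration
        slo := e2e - pullEnd.Sub(pullStart) // pull time excluded
        fmt.Println(e2e, slo)               // 4.932960234s 1.335406699s
    }

[For kube-proxy earlier, no pull happened (firstStartedPulling and lastFinishedPulling are the zero time), so podStartSLOduration equals podStartE2EDuration.]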
Dec 13 13:34:03.348042 kubelet[2640]: I1213 13:34:03.347994 2640 topology_manager.go:215] "Topology Admit Handler" podUID="70af0792-807b-45ba-8d22-96d81d38b5e7" podNamespace="calico-system" podName="csi-node-driver-h5jzv" Dec 13 13:34:03.348284 kubelet[2640]: E1213 13:34:03.348257 2640 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" pod="calico-system/csi-node-driver-h5jzv" podUID="70af0792-807b-45ba-8d22-96d81d38b5e7" Dec 13 13:34:03.401871 kubelet[2640]: I1213 13:34:03.401652 2640 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/c70a3ad5-c2ce-4547-a571-cb14a813e3cc-lib-modules\") pod \"calico-node-fqnrq\" (UID: \"c70a3ad5-c2ce-4547-a571-cb14a813e3cc\") " pod="calico-system/calico-node-fqnrq" Dec 13 13:34:03.401871 kubelet[2640]: I1213 13:34:03.401698 2640 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kubelet-dir\" (UniqueName: \"kubernetes.io/host-path/70af0792-807b-45ba-8d22-96d81d38b5e7-kubelet-dir\") pod \"csi-node-driver-h5jzv\" (UID: \"70af0792-807b-45ba-8d22-96d81d38b5e7\") " pod="calico-system/csi-node-driver-h5jzv" Dec 13 13:34:03.401871 kubelet[2640]: I1213 13:34:03.401718 2640 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cni-log-dir\" (UniqueName: \"kubernetes.io/host-path/c70a3ad5-c2ce-4547-a571-cb14a813e3cc-cni-log-dir\") pod \"calico-node-fqnrq\" (UID: \"c70a3ad5-c2ce-4547-a571-cb14a813e3cc\") " pod="calico-system/calico-node-fqnrq" Dec 13 13:34:03.401871 kubelet[2640]: I1213 13:34:03.401734 2640 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"varrun\" (UniqueName: \"kubernetes.io/host-path/70af0792-807b-45ba-8d22-96d81d38b5e7-varrun\") pod \"csi-node-driver-h5jzv\" (UID: \"70af0792-807b-45ba-8d22-96d81d38b5e7\") " pod="calico-system/csi-node-driver-h5jzv" Dec 13 13:34:03.401871 kubelet[2640]: I1213 13:34:03.401751 2640 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"node-certs\" (UniqueName: \"kubernetes.io/secret/c70a3ad5-c2ce-4547-a571-cb14a813e3cc-node-certs\") pod \"calico-node-fqnrq\" (UID: \"c70a3ad5-c2ce-4547-a571-cb14a813e3cc\") " pod="calico-system/calico-node-fqnrq" Dec 13 13:34:03.402132 kubelet[2640]: I1213 13:34:03.401766 2640 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"var-run-calico\" (UniqueName: \"kubernetes.io/host-path/c70a3ad5-c2ce-4547-a571-cb14a813e3cc-var-run-calico\") pod \"calico-node-fqnrq\" (UID: \"c70a3ad5-c2ce-4547-a571-cb14a813e3cc\") " pod="calico-system/calico-node-fqnrq" Dec 13 13:34:03.402132 kubelet[2640]: I1213 13:34:03.401779 2640 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"socket-dir\" (UniqueName: \"kubernetes.io/host-path/70af0792-807b-45ba-8d22-96d81d38b5e7-socket-dir\") pod \"csi-node-driver-h5jzv\" (UID: \"70af0792-807b-45ba-8d22-96d81d38b5e7\") " pod="calico-system/csi-node-driver-h5jzv" Dec 13 13:34:03.402132 kubelet[2640]: I1213 13:34:03.401792 2640 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"policysync\" (UniqueName: \"kubernetes.io/host-path/c70a3ad5-c2ce-4547-a571-cb14a813e3cc-policysync\") pod 
\"calico-node-fqnrq\" (UID: \"c70a3ad5-c2ce-4547-a571-cb14a813e3cc\") " pod="calico-system/calico-node-fqnrq" Dec 13 13:34:03.402132 kubelet[2640]: I1213 13:34:03.401809 2640 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"var-lib-calico\" (UniqueName: \"kubernetes.io/host-path/c70a3ad5-c2ce-4547-a571-cb14a813e3cc-var-lib-calico\") pod \"calico-node-fqnrq\" (UID: \"c70a3ad5-c2ce-4547-a571-cb14a813e3cc\") " pod="calico-system/calico-node-fqnrq" Dec 13 13:34:03.402132 kubelet[2640]: I1213 13:34:03.401822 2640 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"registration-dir\" (UniqueName: \"kubernetes.io/host-path/70af0792-807b-45ba-8d22-96d81d38b5e7-registration-dir\") pod \"csi-node-driver-h5jzv\" (UID: \"70af0792-807b-45ba-8d22-96d81d38b5e7\") " pod="calico-system/csi-node-driver-h5jzv" Dec 13 13:34:03.402248 kubelet[2640]: I1213 13:34:03.401837 2640 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-pw7pw\" (UniqueName: \"kubernetes.io/projected/70af0792-807b-45ba-8d22-96d81d38b5e7-kube-api-access-pw7pw\") pod \"csi-node-driver-h5jzv\" (UID: \"70af0792-807b-45ba-8d22-96d81d38b5e7\") " pod="calico-system/csi-node-driver-h5jzv" Dec 13 13:34:03.402248 kubelet[2640]: I1213 13:34:03.401866 2640 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"tigera-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/c70a3ad5-c2ce-4547-a571-cb14a813e3cc-tigera-ca-bundle\") pod \"calico-node-fqnrq\" (UID: \"c70a3ad5-c2ce-4547-a571-cb14a813e3cc\") " pod="calico-system/calico-node-fqnrq" Dec 13 13:34:03.402248 kubelet[2640]: I1213 13:34:03.401904 2640 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cni-bin-dir\" (UniqueName: \"kubernetes.io/host-path/c70a3ad5-c2ce-4547-a571-cb14a813e3cc-cni-bin-dir\") pod \"calico-node-fqnrq\" (UID: \"c70a3ad5-c2ce-4547-a571-cb14a813e3cc\") " pod="calico-system/calico-node-fqnrq" Dec 13 13:34:03.402248 kubelet[2640]: I1213 13:34:03.401926 2640 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cni-net-dir\" (UniqueName: \"kubernetes.io/host-path/c70a3ad5-c2ce-4547-a571-cb14a813e3cc-cni-net-dir\") pod \"calico-node-fqnrq\" (UID: \"c70a3ad5-c2ce-4547-a571-cb14a813e3cc\") " pod="calico-system/calico-node-fqnrq" Dec 13 13:34:03.402248 kubelet[2640]: I1213 13:34:03.401945 2640 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"flexvol-driver-host\" (UniqueName: \"kubernetes.io/host-path/c70a3ad5-c2ce-4547-a571-cb14a813e3cc-flexvol-driver-host\") pod \"calico-node-fqnrq\" (UID: \"c70a3ad5-c2ce-4547-a571-cb14a813e3cc\") " pod="calico-system/calico-node-fqnrq" Dec 13 13:34:03.402385 kubelet[2640]: I1213 13:34:03.401973 2640 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-x6q4x\" (UniqueName: \"kubernetes.io/projected/c70a3ad5-c2ce-4547-a571-cb14a813e3cc-kube-api-access-x6q4x\") pod \"calico-node-fqnrq\" (UID: \"c70a3ad5-c2ce-4547-a571-cb14a813e3cc\") " pod="calico-system/calico-node-fqnrq" Dec 13 13:34:03.402385 kubelet[2640]: I1213 13:34:03.401990 2640 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"xtables-lock\" (UniqueName: \"kubernetes.io/host-path/c70a3ad5-c2ce-4547-a571-cb14a813e3cc-xtables-lock\") pod \"calico-node-fqnrq\" 
(UID: \"c70a3ad5-c2ce-4547-a571-cb14a813e3cc\") " pod="calico-system/calico-node-fqnrq" Dec 13 13:34:03.499569 kubelet[2640]: E1213 13:34:03.499450 2640 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Dec 13 13:34:03.500151 containerd[1460]: time="2024-12-13T13:34:03.500033851Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-typha-6fc765cfc-l6p9z,Uid:b09e7272-39ac-4770-9df6-4d637a5ab23f,Namespace:calico-system,Attempt:0,}" Dec 13 13:34:03.504143 kubelet[2640]: E1213 13:34:03.503958 2640 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Dec 13 13:34:03.504143 kubelet[2640]: W1213 13:34:03.503980 2640 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Dec 13 13:34:03.504143 kubelet[2640]: E1213 13:34:03.504008 2640 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Dec 13 13:34:03.504430 kubelet[2640]: E1213 13:34:03.504292 2640 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Dec 13 13:34:03.504430 kubelet[2640]: W1213 13:34:03.504302 2640 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Dec 13 13:34:03.504430 kubelet[2640]: E1213 13:34:03.504351 2640 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Dec 13 13:34:03.504748 kubelet[2640]: E1213 13:34:03.504607 2640 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Dec 13 13:34:03.504748 kubelet[2640]: W1213 13:34:03.504617 2640 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Dec 13 13:34:03.504748 kubelet[2640]: E1213 13:34:03.504636 2640 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Dec 13 13:34:03.505254 kubelet[2640]: E1213 13:34:03.505234 2640 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Dec 13 13:34:03.505380 kubelet[2640]: W1213 13:34:03.505248 2640 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Dec 13 13:34:03.505380 kubelet[2640]: E1213 13:34:03.505283 2640 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Dec 13 13:34:03.506201 kubelet[2640]: E1213 13:34:03.505712 2640 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Dec 13 13:34:03.506201 kubelet[2640]: W1213 13:34:03.505726 2640 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Dec 13 13:34:03.506201 kubelet[2640]: E1213 13:34:03.505778 2640 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Dec 13 13:34:03.506201 kubelet[2640]: E1213 13:34:03.506005 2640 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Dec 13 13:34:03.506201 kubelet[2640]: W1213 13:34:03.506014 2640 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Dec 13 13:34:03.506201 kubelet[2640]: E1213 13:34:03.506047 2640 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Dec 13 13:34:03.506201 kubelet[2640]: E1213 13:34:03.506190 2640 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Dec 13 13:34:03.506201 kubelet[2640]: W1213 13:34:03.506198 2640 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Dec 13 13:34:03.506201 kubelet[2640]: E1213 13:34:03.506207 2640 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Dec 13 13:34:03.507086 kubelet[2640]: E1213 13:34:03.506775 2640 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Dec 13 13:34:03.507086 kubelet[2640]: W1213 13:34:03.506786 2640 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Dec 13 13:34:03.507086 kubelet[2640]: E1213 13:34:03.506794 2640 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Dec 13 13:34:03.507086 kubelet[2640]: E1213 13:34:03.506959 2640 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Dec 13 13:34:03.507086 kubelet[2640]: W1213 13:34:03.506966 2640 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Dec 13 13:34:03.507086 kubelet[2640]: E1213 13:34:03.506974 2640 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Dec 13 13:34:03.509723 kubelet[2640]: E1213 13:34:03.509692 2640 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Dec 13 13:34:03.509808 kubelet[2640]: W1213 13:34:03.509783 2640 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Dec 13 13:34:03.509989 kubelet[2640]: E1213 13:34:03.509970 2640 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Dec 13 13:34:03.510337 kubelet[2640]: E1213 13:34:03.510290 2640 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Dec 13 13:34:03.511231 kubelet[2640]: W1213 13:34:03.510340 2640 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Dec 13 13:34:03.511231 kubelet[2640]: E1213 13:34:03.510354 2640 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Dec 13 13:34:03.514903 kubelet[2640]: E1213 13:34:03.514876 2640 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Dec 13 13:34:03.514903 kubelet[2640]: W1213 13:34:03.514896 2640 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Dec 13 13:34:03.515021 kubelet[2640]: E1213 13:34:03.514913 2640 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Dec 13 13:34:03.516167 kubelet[2640]: E1213 13:34:03.516149 2640 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Dec 13 13:34:03.516229 kubelet[2640]: W1213 13:34:03.516162 2640 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Dec 13 13:34:03.516229 kubelet[2640]: E1213 13:34:03.516182 2640 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Dec 13 13:34:03.533406 containerd[1460]: time="2024-12-13T13:34:03.533256169Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Dec 13 13:34:03.533406 containerd[1460]: time="2024-12-13T13:34:03.533328625Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Dec 13 13:34:03.533406 containerd[1460]: time="2024-12-13T13:34:03.533348092Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Dec 13 13:34:03.533610 containerd[1460]: time="2024-12-13T13:34:03.533427131Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." 
runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Dec 13 13:34:03.545000 kubelet[2640]: E1213 13:34:03.544936 2640 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Dec 13 13:34:03.545947 containerd[1460]: time="2024-12-13T13:34:03.545546014Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-node-fqnrq,Uid:c70a3ad5-c2ce-4547-a571-cb14a813e3cc,Namespace:calico-system,Attempt:0,}" Dec 13 13:34:03.559527 systemd[1]: Started cri-containerd-ef46ab4c09e15d4922dca03f296ab87fe8fe05c839f77424ba23064f976f509f.scope - libcontainer container ef46ab4c09e15d4922dca03f296ab87fe8fe05c839f77424ba23064f976f509f. Dec 13 13:34:03.571212 containerd[1460]: time="2024-12-13T13:34:03.570608525Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Dec 13 13:34:03.571212 containerd[1460]: time="2024-12-13T13:34:03.570700979Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Dec 13 13:34:03.571212 containerd[1460]: time="2024-12-13T13:34:03.570724533Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Dec 13 13:34:03.571212 containerd[1460]: time="2024-12-13T13:34:03.570986628Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Dec 13 13:34:03.591532 systemd[1]: Started cri-containerd-8ee57868a7f34a07725621e7dd98ba063377757c6ade474fdbba29a39ceb1332.scope - libcontainer container 8ee57868a7f34a07725621e7dd98ba063377757c6ade474fdbba29a39ceb1332. 
Dec 13 13:34:03.597453 containerd[1460]: time="2024-12-13T13:34:03.597390728Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-typha-6fc765cfc-l6p9z,Uid:b09e7272-39ac-4770-9df6-4d637a5ab23f,Namespace:calico-system,Attempt:0,} returns sandbox id \"ef46ab4c09e15d4922dca03f296ab87fe8fe05c839f77424ba23064f976f509f\"" Dec 13 13:34:03.598187 kubelet[2640]: E1213 13:34:03.598164 2640 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Dec 13 13:34:03.599540 containerd[1460]: time="2024-12-13T13:34:03.599501770Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/typha:v3.29.1\"" Dec 13 13:34:03.615235 containerd[1460]: time="2024-12-13T13:34:03.615198329Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-node-fqnrq,Uid:c70a3ad5-c2ce-4547-a571-cb14a813e3cc,Namespace:calico-system,Attempt:0,} returns sandbox id \"8ee57868a7f34a07725621e7dd98ba063377757c6ade474fdbba29a39ceb1332\"" Dec 13 13:34:03.615943 kubelet[2640]: E1213 13:34:03.615915 2640 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Dec 13 13:34:04.893365 kubelet[2640]: E1213 13:34:04.893290 2640 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" pod="calico-system/csi-node-driver-h5jzv" podUID="70af0792-807b-45ba-8d22-96d81d38b5e7" Dec 13 13:34:05.382830 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount3233117818.mount: Deactivated successfully. 
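[Note: the containerd entries above trace the standard CRI sequence kubelet drives for every pod: RunPodSandbox returns a sandbox id, CreateContainer is issued within that sandbox and returns a container id, and StartContainer runs it. A bare-bones sketch of the same three calls against the CRI gRPC API follows; the socket path and image reference are illustrative placeholders, and this is not kubelet's actual runtime manager.]

    package main

    import (
        "context"
        "log"

        "google.golang.org/grpc"
        "google.golang.org/grpc/credentials/insecure"
        runtimeapi "k8s.io/cri-api/pkg/apis/runtime/v1"
    )

    func main() {
        // Containerd's default CRI socket; adjust for other runtimes.
        conn, err := grpc.Dial("unix:///run/containerd/containerd.sock",
            grpc.WithTransportCredentials(insecure.NewCredentials()))
        if err != nil {
            log.Fatal(err)
        }
        defer conn.Close()
        rt := runtimeapi.NewRuntimeServiceClient(conn)
        ctx := context.Background()

        sandboxCfg := &runtimeapi.PodSandboxConfig{
            Metadata: &runtimeapi.PodSandboxMetadata{
                Name:      "kube-proxy-nfrmd",
                Namespace: "kube-system",
                Uid:       "eb65c885-58e4-4e00-838a-6312b48565c7",
            },
        }

        // 1. RunPodSandbox: returns the sandbox id seen in the log.
        sb, err := rt.RunPodSandbox(ctx,
            &runtimeapi.RunPodSandboxRequest{Config: sandboxCfg})
        if err != nil {
            log.Fatal(err)
        }

        // 2. CreateContainer within that sandbox returns a container id.
        //    The image reference here is a placeholder, not from the log.
        c, err := rt.CreateContainer(ctx, &runtimeapi.CreateContainerRequest{
            PodSandboxId: sb.PodSandboxId,
            Config: &runtimeapi.ContainerConfig{
                Metadata: &runtimeapi.ContainerMetadata{Name: "kube-proxy"},
                Image:    &runtimeapi.ImageSpec{Image: "example.invalid/kube-proxy:dev"},
            },
            SandboxConfig: sandboxCfg,
        })
        if err != nil {
            log.Fatal(err)
        }

        // 3. StartContainer runs it ("returns successfully" in the log).
        if _, err := rt.StartContainer(ctx,
            &runtimeapi.StartContainerRequest{ContainerId: c.ContainerId}); err != nil {
            log.Fatal(err)
        }
    }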
Dec 13 13:34:05.972042 containerd[1460]: time="2024-12-13T13:34:05.971994300Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/typha:v3.29.1\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Dec 13 13:34:05.972757 containerd[1460]: time="2024-12-13T13:34:05.972700781Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/typha:v3.29.1: active requests=0, bytes read=31343363" Dec 13 13:34:05.973823 containerd[1460]: time="2024-12-13T13:34:05.973765417Z" level=info msg="ImageCreate event name:\"sha256:4cb3738506f5a9c530033d1e24fd6b9ec618518a2ec8b012ded33572be06ab44\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Dec 13 13:34:05.975822 containerd[1460]: time="2024-12-13T13:34:05.975787959Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/typha@sha256:768a194e1115c73bcbf35edb7afd18a63e16e08d940c79993565b6a3cca2da7c\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Dec 13 13:34:05.976383 containerd[1460]: time="2024-12-13T13:34:05.976354757Z" level=info msg="Pulled image \"ghcr.io/flatcar/calico/typha:v3.29.1\" with image id \"sha256:4cb3738506f5a9c530033d1e24fd6b9ec618518a2ec8b012ded33572be06ab44\", repo tag \"ghcr.io/flatcar/calico/typha:v3.29.1\", repo digest \"ghcr.io/flatcar/calico/typha@sha256:768a194e1115c73bcbf35edb7afd18a63e16e08d940c79993565b6a3cca2da7c\", size \"31343217\" in 2.376814025s" Dec 13 13:34:05.976383 containerd[1460]: time="2024-12-13T13:34:05.976380416Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/typha:v3.29.1\" returns image reference \"sha256:4cb3738506f5a9c530033d1e24fd6b9ec618518a2ec8b012ded33572be06ab44\"" Dec 13 13:34:05.977769 containerd[1460]: time="2024-12-13T13:34:05.977392943Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/pod2daemon-flexvol:v3.29.1\"" Dec 13 13:34:05.983871 containerd[1460]: time="2024-12-13T13:34:05.983839182Z" level=info msg="CreateContainer within sandbox \"ef46ab4c09e15d4922dca03f296ab87fe8fe05c839f77424ba23064f976f509f\" for container &ContainerMetadata{Name:calico-typha,Attempt:0,}" Dec 13 13:34:05.998224 containerd[1460]: time="2024-12-13T13:34:05.998186510Z" level=info msg="CreateContainer within sandbox \"ef46ab4c09e15d4922dca03f296ab87fe8fe05c839f77424ba23064f976f509f\" for &ContainerMetadata{Name:calico-typha,Attempt:0,} returns container id \"1ed1d5eb10274799c9082c8591c621a2c686e2ba17a98f3fcb1a7d62f2d60a76\"" Dec 13 13:34:05.998658 containerd[1460]: time="2024-12-13T13:34:05.998627381Z" level=info msg="StartContainer for \"1ed1d5eb10274799c9082c8591c621a2c686e2ba17a98f3fcb1a7d62f2d60a76\"" Dec 13 13:34:06.026445 systemd[1]: Started cri-containerd-1ed1d5eb10274799c9082c8591c621a2c686e2ba17a98f3fcb1a7d62f2d60a76.scope - libcontainer container 1ed1d5eb10274799c9082c8591c621a2c686e2ba17a98f3fcb1a7d62f2d60a76. 
Dec 13 13:34:06.064993 containerd[1460]: time="2024-12-13T13:34:06.064950542Z" level=info msg="StartContainer for \"1ed1d5eb10274799c9082c8591c621a2c686e2ba17a98f3fcb1a7d62f2d60a76\" returns successfully" Dec 13 13:34:06.894144 kubelet[2640]: E1213 13:34:06.894087 2640 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" pod="calico-system/csi-node-driver-h5jzv" podUID="70af0792-807b-45ba-8d22-96d81d38b5e7" Dec 13 13:34:06.939387 kubelet[2640]: E1213 13:34:06.939357 2640 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Dec 13 13:34:07.023157 kubelet[2640]: E1213 13:34:07.023127 2640 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Dec 13 13:34:07.023157 kubelet[2640]: W1213 13:34:07.023145 2640 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Dec 13 13:34:07.023157 kubelet[2640]: E1213 13:34:07.023163 2640 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Dec 13 13:34:07.023439 kubelet[2640]: E1213 13:34:07.023425 2640 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Dec 13 13:34:07.023439 kubelet[2640]: W1213 13:34:07.023436 2640 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Dec 13 13:34:07.023487 kubelet[2640]: E1213 13:34:07.023445 2640 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Dec 13 13:34:07.023659 kubelet[2640]: E1213 13:34:07.023645 2640 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Dec 13 13:34:07.023659 kubelet[2640]: W1213 13:34:07.023655 2640 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Dec 13 13:34:07.023719 kubelet[2640]: E1213 13:34:07.023665 2640 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Dec 13 13:34:07.023862 kubelet[2640]: E1213 13:34:07.023849 2640 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Dec 13 13:34:07.023862 kubelet[2640]: W1213 13:34:07.023859 2640 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Dec 13 13:34:07.023920 kubelet[2640]: E1213 13:34:07.023866 2640 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Dec 13 13:34:07.024088 kubelet[2640]: E1213 13:34:07.024075 2640 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Dec 13 13:34:07.024088 kubelet[2640]: W1213 13:34:07.024084 2640 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Dec 13 13:34:07.024144 kubelet[2640]: E1213 13:34:07.024092 2640 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Dec 13 13:34:07.024284 kubelet[2640]: E1213 13:34:07.024271 2640 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Dec 13 13:34:07.024284 kubelet[2640]: W1213 13:34:07.024281 2640 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Dec 13 13:34:07.024340 kubelet[2640]: E1213 13:34:07.024288 2640 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Dec 13 13:34:07.024498 kubelet[2640]: E1213 13:34:07.024485 2640 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Dec 13 13:34:07.024498 kubelet[2640]: W1213 13:34:07.024495 2640 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Dec 13 13:34:07.024555 kubelet[2640]: E1213 13:34:07.024502 2640 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Dec 13 13:34:07.024712 kubelet[2640]: E1213 13:34:07.024698 2640 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Dec 13 13:34:07.024712 kubelet[2640]: W1213 13:34:07.024708 2640 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Dec 13 13:34:07.024764 kubelet[2640]: E1213 13:34:07.024716 2640 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Dec 13 13:34:07.024927 kubelet[2640]: E1213 13:34:07.024914 2640 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Dec 13 13:34:07.024927 kubelet[2640]: W1213 13:34:07.024923 2640 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Dec 13 13:34:07.024985 kubelet[2640]: E1213 13:34:07.024931 2640 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Dec 13 13:34:07.025123 kubelet[2640]: E1213 13:34:07.025111 2640 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Dec 13 13:34:07.025123 kubelet[2640]: W1213 13:34:07.025120 2640 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Dec 13 13:34:07.025174 kubelet[2640]: E1213 13:34:07.025127 2640 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Dec 13 13:34:07.025305 kubelet[2640]: E1213 13:34:07.025292 2640 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Dec 13 13:34:07.025305 kubelet[2640]: W1213 13:34:07.025302 2640 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Dec 13 13:34:07.025386 kubelet[2640]: E1213 13:34:07.025323 2640 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Dec 13 13:34:07.025520 kubelet[2640]: E1213 13:34:07.025506 2640 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Dec 13 13:34:07.025520 kubelet[2640]: W1213 13:34:07.025516 2640 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Dec 13 13:34:07.025567 kubelet[2640]: E1213 13:34:07.025524 2640 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Dec 13 13:34:07.025736 kubelet[2640]: E1213 13:34:07.025723 2640 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Dec 13 13:34:07.025736 kubelet[2640]: W1213 13:34:07.025732 2640 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Dec 13 13:34:07.025789 kubelet[2640]: E1213 13:34:07.025740 2640 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Dec 13 13:34:07.025935 kubelet[2640]: E1213 13:34:07.025922 2640 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Dec 13 13:34:07.025935 kubelet[2640]: W1213 13:34:07.025932 2640 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Dec 13 13:34:07.025980 kubelet[2640]: E1213 13:34:07.025942 2640 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Dec 13 13:34:07.026119 kubelet[2640]: E1213 13:34:07.026105 2640 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Dec 13 13:34:07.026119 kubelet[2640]: W1213 13:34:07.026115 2640 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Dec 13 13:34:07.026175 kubelet[2640]: E1213 13:34:07.026122 2640 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Dec 13 13:34:07.026401 kubelet[2640]: E1213 13:34:07.026377 2640 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Dec 13 13:34:07.026401 kubelet[2640]: W1213 13:34:07.026388 2640 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Dec 13 13:34:07.026401 kubelet[2640]: E1213 13:34:07.026396 2640 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Dec 13 13:34:07.026629 kubelet[2640]: E1213 13:34:07.026615 2640 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Dec 13 13:34:07.026629 kubelet[2640]: W1213 13:34:07.026626 2640 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Dec 13 13:34:07.026694 kubelet[2640]: E1213 13:34:07.026639 2640 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Dec 13 13:34:07.026943 kubelet[2640]: E1213 13:34:07.026899 2640 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Dec 13 13:34:07.026943 kubelet[2640]: W1213 13:34:07.026918 2640 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Dec 13 13:34:07.026996 kubelet[2640]: E1213 13:34:07.026943 2640 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Dec 13 13:34:07.027148 kubelet[2640]: E1213 13:34:07.027134 2640 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Dec 13 13:34:07.027148 kubelet[2640]: W1213 13:34:07.027145 2640 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Dec 13 13:34:07.027206 kubelet[2640]: E1213 13:34:07.027157 2640 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Dec 13 13:34:07.027360 kubelet[2640]: E1213 13:34:07.027347 2640 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Dec 13 13:34:07.027360 kubelet[2640]: W1213 13:34:07.027356 2640 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Dec 13 13:34:07.027424 kubelet[2640]: E1213 13:34:07.027369 2640 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Dec 13 13:34:07.027621 kubelet[2640]: E1213 13:34:07.027600 2640 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Dec 13 13:34:07.027647 kubelet[2640]: W1213 13:34:07.027620 2640 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Dec 13 13:34:07.027672 kubelet[2640]: E1213 13:34:07.027644 2640 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Dec 13 13:34:07.027869 kubelet[2640]: E1213 13:34:07.027859 2640 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Dec 13 13:34:07.027869 kubelet[2640]: W1213 13:34:07.027867 2640 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Dec 13 13:34:07.027921 kubelet[2640]: E1213 13:34:07.027891 2640 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Dec 13 13:34:07.028058 kubelet[2640]: E1213 13:34:07.028047 2640 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Dec 13 13:34:07.028058 kubelet[2640]: W1213 13:34:07.028056 2640 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Dec 13 13:34:07.028106 kubelet[2640]: E1213 13:34:07.028080 2640 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Dec 13 13:34:07.028241 kubelet[2640]: E1213 13:34:07.028231 2640 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Dec 13 13:34:07.028241 kubelet[2640]: W1213 13:34:07.028239 2640 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Dec 13 13:34:07.028366 kubelet[2640]: E1213 13:34:07.028252 2640 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Dec 13 13:34:07.028496 kubelet[2640]: E1213 13:34:07.028482 2640 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Dec 13 13:34:07.028496 kubelet[2640]: W1213 13:34:07.028493 2640 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Dec 13 13:34:07.028547 kubelet[2640]: E1213 13:34:07.028506 2640 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Dec 13 13:34:07.028694 kubelet[2640]: E1213 13:34:07.028680 2640 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Dec 13 13:34:07.028694 kubelet[2640]: W1213 13:34:07.028691 2640 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Dec 13 13:34:07.028750 kubelet[2640]: E1213 13:34:07.028706 2640 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Dec 13 13:34:07.028921 kubelet[2640]: E1213 13:34:07.028908 2640 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Dec 13 13:34:07.028921 kubelet[2640]: W1213 13:34:07.028917 2640 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Dec 13 13:34:07.028971 kubelet[2640]: E1213 13:34:07.028932 2640 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Dec 13 13:34:07.029224 kubelet[2640]: E1213 13:34:07.029205 2640 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Dec 13 13:34:07.029224 kubelet[2640]: W1213 13:34:07.029222 2640 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Dec 13 13:34:07.029284 kubelet[2640]: E1213 13:34:07.029238 2640 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Dec 13 13:34:07.029454 kubelet[2640]: E1213 13:34:07.029440 2640 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Dec 13 13:34:07.029454 kubelet[2640]: W1213 13:34:07.029451 2640 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Dec 13 13:34:07.029500 kubelet[2640]: E1213 13:34:07.029464 2640 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Dec 13 13:34:07.029665 kubelet[2640]: E1213 13:34:07.029652 2640 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Dec 13 13:34:07.029665 kubelet[2640]: W1213 13:34:07.029663 2640 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Dec 13 13:34:07.029734 kubelet[2640]: E1213 13:34:07.029678 2640 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Dec 13 13:34:07.029903 kubelet[2640]: E1213 13:34:07.029891 2640 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Dec 13 13:34:07.029903 kubelet[2640]: W1213 13:34:07.029900 2640 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Dec 13 13:34:07.029960 kubelet[2640]: E1213 13:34:07.029913 2640 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Dec 13 13:34:07.030138 kubelet[2640]: E1213 13:34:07.030122 2640 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Dec 13 13:34:07.030138 kubelet[2640]: W1213 13:34:07.030135 2640 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Dec 13 13:34:07.030188 kubelet[2640]: E1213 13:34:07.030144 2640 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Dec 13 13:34:07.030530 kubelet[2640]: E1213 13:34:07.030515 2640 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Dec 13 13:34:07.030530 kubelet[2640]: W1213 13:34:07.030526 2640 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Dec 13 13:34:07.030595 kubelet[2640]: E1213 13:34:07.030534 2640 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Dec 13 13:34:07.355710 containerd[1460]: time="2024-12-13T13:34:07.355660603Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/pod2daemon-flexvol:v3.29.1\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Dec 13 13:34:07.356378 containerd[1460]: time="2024-12-13T13:34:07.356345041Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/pod2daemon-flexvol:v3.29.1: active requests=0, bytes read=5362121" Dec 13 13:34:07.357456 containerd[1460]: time="2024-12-13T13:34:07.357415128Z" level=info msg="ImageCreate event name:\"sha256:2b7452b763ec8833ca0386ada5fd066e552a9b3b02b8538a5e34cc3d6d3840a6\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Dec 13 13:34:07.359341 containerd[1460]: time="2024-12-13T13:34:07.359295829Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/pod2daemon-flexvol@sha256:a63f8b4ff531912d12d143664eb263fdbc6cd7b3ff4aa777dfb6e318a090462c\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Dec 13 13:34:07.359946 containerd[1460]: time="2024-12-13T13:34:07.359919043Z" level=info msg="Pulled image \"ghcr.io/flatcar/calico/pod2daemon-flexvol:v3.29.1\" with image id \"sha256:2b7452b763ec8833ca0386ada5fd066e552a9b3b02b8538a5e34cc3d6d3840a6\", repo tag \"ghcr.io/flatcar/calico/pod2daemon-flexvol:v3.29.1\", repo digest \"ghcr.io/flatcar/calico/pod2daemon-flexvol@sha256:a63f8b4ff531912d12d143664eb263fdbc6cd7b3ff4aa777dfb6e318a090462c\", size \"6855165\" in 1.38248928s" Dec 13 13:34:07.359974 containerd[1460]: time="2024-12-13T13:34:07.359946845Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/pod2daemon-flexvol:v3.29.1\" returns image reference \"sha256:2b7452b763ec8833ca0386ada5fd066e552a9b3b02b8538a5e34cc3d6d3840a6\"" Dec 13 13:34:07.361870 containerd[1460]: time="2024-12-13T13:34:07.361848096Z" level=info msg="CreateContainer within sandbox \"8ee57868a7f34a07725621e7dd98ba063377757c6ade474fdbba29a39ceb1332\" for container &ContainerMetadata{Name:flexvol-driver,Attempt:0,}" Dec 13 13:34:07.373751 containerd[1460]: time="2024-12-13T13:34:07.373715308Z" level=info msg="CreateContainer within sandbox \"8ee57868a7f34a07725621e7dd98ba063377757c6ade474fdbba29a39ceb1332\" for &ContainerMetadata{Name:flexvol-driver,Attempt:0,} returns container id \"cfd7c8276b658a7742b606c31ecf876c42ab5422b1473085ca4dfb8f0d24f7b4\"" Dec 13 13:34:07.374068 containerd[1460]: time="2024-12-13T13:34:07.374018719Z" level=info msg="StartContainer for \"cfd7c8276b658a7742b606c31ecf876c42ab5422b1473085ca4dfb8f0d24f7b4\"" Dec 13 13:34:07.408455 systemd[1]: Started cri-containerd-cfd7c8276b658a7742b606c31ecf876c42ab5422b1473085ca4dfb8f0d24f7b4.scope - libcontainer container cfd7c8276b658a7742b606c31ecf876c42ab5422b1473085ca4dfb8f0d24f7b4. Dec 13 13:34:07.441624 containerd[1460]: time="2024-12-13T13:34:07.441518772Z" level=info msg="StartContainer for \"cfd7c8276b658a7742b606c31ecf876c42ab5422b1473085ca4dfb8f0d24f7b4\" returns successfully" Dec 13 13:34:07.453567 systemd[1]: cri-containerd-cfd7c8276b658a7742b606c31ecf876c42ab5422b1473085ca4dfb8f0d24f7b4.scope: Deactivated successfully. 
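The storm of driver-call.go and plugins.go events above is kubelet's FlexVolume prober: on every filesystem event under the plugin directory it re-executes /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds with the argument "init" and tries to unmarshal the command's stdout as JSON. Until the flexvol-driver container pulled and started in the containerd entries above (ghcr.io/flatcar/calico/pod2daemon-flexvol:v3.29.1) installs that binary, the exec fails, stdout is empty, and the unmarshal yields "unexpected end of JSON input". A minimal sketch of the contract the probe expects, in Go; the reply format follows the documented FlexVolume protocol, but treating attach as unsupported is an assumption for illustration:

    // Hedged sketch of a FlexVolume driver entry point satisfying kubelet's
    // "init" probe seen failing above. Assumes installation as
    // /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds.
    package main

    import (
        "fmt"
        "os"
    )

    func main() {
        if len(os.Args) > 1 && os.Args[1] == "init" {
            // kubelet (driver-call.go) unmarshals stdout as JSON; an empty
            // reply is exactly the "unexpected end of JSON input" in the log.
            fmt.Println(`{"status": "Success", "capabilities": {"attach": false}}`)
            return
        }
        // Calls the driver does not implement report "Not supported",
        // mirroring the canonical example drivers.
        fmt.Println(`{"status": "Not supported"}`)
        os.Exit(1)
    }

Once a binary answering "init" this way lands in the plugin directory, the dynamic prober registers the plugin and the repeated warnings stop, which is why the errors cease after the flexvol-driver container runs.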
Dec 13 13:34:07.940890 kubelet[2640]: I1213 13:34:07.940858 2640 prober_manager.go:312] "Failed to trigger a manual run" probe="Readiness" Dec 13 13:34:07.941370 kubelet[2640]: E1213 13:34:07.941213 2640 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Dec 13 13:34:07.941370 kubelet[2640]: E1213 13:34:07.941358 2640 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Dec 13 13:34:08.027613 kubelet[2640]: I1213 13:34:08.027224 2640 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="calico-system/calico-typha-6fc765cfc-l6p9z" podStartSLOduration=2.649253757 podStartE2EDuration="5.027205867s" podCreationTimestamp="2024-12-13 13:34:03 +0000 UTC" firstStartedPulling="2024-12-13 13:34:03.599237812 +0000 UTC m=+21.798637163" lastFinishedPulling="2024-12-13 13:34:05.977189922 +0000 UTC m=+24.176589273" observedRunningTime="2024-12-13 13:34:06.947359674 +0000 UTC m=+25.146759025" watchObservedRunningTime="2024-12-13 13:34:08.027205867 +0000 UTC m=+26.226605218" Dec 13 13:34:08.370649 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-cfd7c8276b658a7742b606c31ecf876c42ab5422b1473085ca4dfb8f0d24f7b4-rootfs.mount: Deactivated successfully. Dec 13 13:34:08.414881 containerd[1460]: time="2024-12-13T13:34:08.414818997Z" level=info msg="shim disconnected" id=cfd7c8276b658a7742b606c31ecf876c42ab5422b1473085ca4dfb8f0d24f7b4 namespace=k8s.io Dec 13 13:34:08.414881 containerd[1460]: time="2024-12-13T13:34:08.414868951Z" level=warning msg="cleaning up after shim disconnected" id=cfd7c8276b658a7742b606c31ecf876c42ab5422b1473085ca4dfb8f0d24f7b4 namespace=k8s.io Dec 13 13:34:08.414881 containerd[1460]: time="2024-12-13T13:34:08.414876856Z" level=info msg="cleaning up dead shim" namespace=k8s.io Dec 13 13:34:08.893895 kubelet[2640]: E1213 13:34:08.893758 2640 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" pod="calico-system/csi-node-driver-h5jzv" podUID="70af0792-807b-45ba-8d22-96d81d38b5e7" Dec 13 13:34:08.943410 kubelet[2640]: E1213 13:34:08.943369 2640 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Dec 13 13:34:08.944252 containerd[1460]: time="2024-12-13T13:34:08.944168310Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/cni:v3.29.1\"" Dec 13 13:34:09.773349 systemd[1]: Started sshd@7-10.0.0.150:22-10.0.0.1:34716.service - OpenSSH per-connection server daemon (10.0.0.1:34716). Dec 13 13:34:09.811143 sshd[3303]: Accepted publickey for core from 10.0.0.1 port 34716 ssh2: RSA SHA256:dZtUZWIHZCqWPvvtEO8QUYeHUprV5e9zMs000ybProI Dec 13 13:34:09.812565 sshd-session[3303]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Dec 13 13:34:09.816175 systemd-logind[1445]: New session 8 of user core. Dec 13 13:34:09.828424 systemd[1]: Started session-8.scope - Session 8 of User core. 
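The recurring dns.go "Nameserver limits exceeded" events are kubelet truncating the node's resolver configuration: at most three nameservers are applied (matching the classic glibc resolver limit), and anything beyond the logged "1.1.1.1 1.0.0.1 8.8.8.8" line is silently dropped. A rough sketch of that truncation behavior; the fourth server is hypothetical and this mirrors the effect, not kubelet's actual source:

    // Hedged sketch of the nameserver truncation producing the
    // "Nameserver limits exceeded" events above. Illustrative only.
    package main

    import (
        "fmt"
        "strings"
    )

    const maxNameservers = 3 // limit enforced when building pod resolv.conf

    func applyLimit(nameservers []string) []string {
        if len(nameservers) > maxNameservers {
            return nameservers[:maxNameservers] // extras are omitted, as logged
        }
        return nameservers
    }

    func main() {
        // "9.9.9.9" is a hypothetical fourth entry; the applied line in the
        // log is "1.1.1.1 1.0.0.1 8.8.8.8".
        got := applyLimit([]string{"1.1.1.1", "1.0.0.1", "8.8.8.8", "9.9.9.9"})
        fmt.Println(strings.Join(got, " "))
    }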
Dec 13 13:34:09.935562 sshd[3305]: Connection closed by 10.0.0.1 port 34716 Dec 13 13:34:09.935884 sshd-session[3303]: pam_unix(sshd:session): session closed for user core Dec 13 13:34:09.939259 systemd[1]: sshd@7-10.0.0.150:22-10.0.0.1:34716.service: Deactivated successfully. Dec 13 13:34:09.941217 systemd[1]: session-8.scope: Deactivated successfully. Dec 13 13:34:09.941883 systemd-logind[1445]: Session 8 logged out. Waiting for processes to exit. Dec 13 13:34:09.942705 systemd-logind[1445]: Removed session 8. Dec 13 13:34:10.894670 kubelet[2640]: E1213 13:34:10.894605 2640 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" pod="calico-system/csi-node-driver-h5jzv" podUID="70af0792-807b-45ba-8d22-96d81d38b5e7" Dec 13 13:34:12.901561 kubelet[2640]: E1213 13:34:12.901483 2640 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" pod="calico-system/csi-node-driver-h5jzv" podUID="70af0792-807b-45ba-8d22-96d81d38b5e7" Dec 13 13:34:13.568631 containerd[1460]: time="2024-12-13T13:34:13.568578409Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/cni:v3.29.1\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Dec 13 13:34:13.569250 containerd[1460]: time="2024-12-13T13:34:13.569208463Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/cni:v3.29.1: active requests=0, bytes read=96154154" Dec 13 13:34:13.570338 containerd[1460]: time="2024-12-13T13:34:13.570276523Z" level=info msg="ImageCreate event name:\"sha256:7dd6ea186aba0d7a1791a79d426fe854527ca95192b26bbd19e8baf8373f7d0e\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Dec 13 13:34:13.572520 containerd[1460]: time="2024-12-13T13:34:13.572478063Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/cni@sha256:21e759d51c90dfb34fc1397dc180dd3a3fb564c2b0580d2f61ffe108f2a3c94b\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Dec 13 13:34:13.573176 containerd[1460]: time="2024-12-13T13:34:13.573145218Z" level=info msg="Pulled image \"ghcr.io/flatcar/calico/cni:v3.29.1\" with image id \"sha256:7dd6ea186aba0d7a1791a79d426fe854527ca95192b26bbd19e8baf8373f7d0e\", repo tag \"ghcr.io/flatcar/calico/cni:v3.29.1\", repo digest \"ghcr.io/flatcar/calico/cni@sha256:21e759d51c90dfb34fc1397dc180dd3a3fb564c2b0580d2f61ffe108f2a3c94b\", size \"97647238\" in 4.628885047s" Dec 13 13:34:13.573176 containerd[1460]: time="2024-12-13T13:34:13.573170556Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/cni:v3.29.1\" returns image reference \"sha256:7dd6ea186aba0d7a1791a79d426fe854527ca95192b26bbd19e8baf8373f7d0e\"" Dec 13 13:34:13.578013 containerd[1460]: time="2024-12-13T13:34:13.577982387Z" level=info msg="CreateContainer within sandbox \"8ee57868a7f34a07725621e7dd98ba063377757c6ade474fdbba29a39ceb1332\" for container &ContainerMetadata{Name:install-cni,Attempt:0,}" Dec 13 13:34:13.590723 containerd[1460]: time="2024-12-13T13:34:13.590685912Z" level=info msg="CreateContainer within sandbox \"8ee57868a7f34a07725621e7dd98ba063377757c6ade474fdbba29a39ceb1332\" for &ContainerMetadata{Name:install-cni,Attempt:0,} returns container id \"6de277dc2a91ff18707fbd261c7e5c0f31650791b9e01fa8db44e659fba1a23c\"" Dec 13 13:34:13.591154 containerd[1460]: 
time="2024-12-13T13:34:13.591126660Z" level=info msg="StartContainer for \"6de277dc2a91ff18707fbd261c7e5c0f31650791b9e01fa8db44e659fba1a23c\"" Dec 13 13:34:13.626466 systemd[1]: Started cri-containerd-6de277dc2a91ff18707fbd261c7e5c0f31650791b9e01fa8db44e659fba1a23c.scope - libcontainer container 6de277dc2a91ff18707fbd261c7e5c0f31650791b9e01fa8db44e659fba1a23c. Dec 13 13:34:13.688108 containerd[1460]: time="2024-12-13T13:34:13.688047726Z" level=info msg="StartContainer for \"6de277dc2a91ff18707fbd261c7e5c0f31650791b9e01fa8db44e659fba1a23c\" returns successfully" Dec 13 13:34:14.514276 kubelet[2640]: E1213 13:34:14.514229 2640 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Dec 13 13:34:14.894203 kubelet[2640]: E1213 13:34:14.894061 2640 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" pod="calico-system/csi-node-driver-h5jzv" podUID="70af0792-807b-45ba-8d22-96d81d38b5e7" Dec 13 13:34:14.931581 containerd[1460]: time="2024-12-13T13:34:14.931533148Z" level=error msg="failed to reload cni configuration after receiving fs change event(WRITE \"/etc/cni/net.d/calico-kubeconfig\")" error="cni config load failed: no network config found in /etc/cni/net.d: cni plugin not initialized: failed to load cni config" Dec 13 13:34:14.934117 systemd[1]: cri-containerd-6de277dc2a91ff18707fbd261c7e5c0f31650791b9e01fa8db44e659fba1a23c.scope: Deactivated successfully. Dec 13 13:34:14.945223 systemd[1]: Started sshd@8-10.0.0.150:22-10.0.0.1:34720.service - OpenSSH per-connection server daemon (10.0.0.1:34720). Dec 13 13:34:14.953950 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-6de277dc2a91ff18707fbd261c7e5c0f31650791b9e01fa8db44e659fba1a23c-rootfs.mount: Deactivated successfully. 
Dec 13 13:34:14.978935 kubelet[2640]: I1213 13:34:14.978911 2640 kubelet_node_status.go:497] "Fast updating node status as it just became ready" Dec 13 13:34:15.023442 containerd[1460]: time="2024-12-13T13:34:15.022932300Z" level=info msg="shim disconnected" id=6de277dc2a91ff18707fbd261c7e5c0f31650791b9e01fa8db44e659fba1a23c namespace=k8s.io Dec 13 13:34:15.023442 containerd[1460]: time="2024-12-13T13:34:15.022987443Z" level=warning msg="cleaning up after shim disconnected" id=6de277dc2a91ff18707fbd261c7e5c0f31650791b9e01fa8db44e659fba1a23c namespace=k8s.io Dec 13 13:34:15.023442 containerd[1460]: time="2024-12-13T13:34:15.022998394Z" level=info msg="cleaning up dead shim" namespace=k8s.io Dec 13 13:34:15.025854 kubelet[2640]: I1213 13:34:15.025562 2640 topology_manager.go:215] "Topology Admit Handler" podUID="4efa0db1-4649-47cd-847b-b2cd3ddad9b5" podNamespace="kube-system" podName="coredns-7db6d8ff4d-n488k" Dec 13 13:34:15.030289 kubelet[2640]: I1213 13:34:15.030217 2640 topology_manager.go:215] "Topology Admit Handler" podUID="bcb02bd3-79c5-4f97-892a-aafa3090dcbe" podNamespace="calico-system" podName="calico-kube-controllers-55f74d585b-gkp22" Dec 13 13:34:15.031133 sshd[3381]: Accepted publickey for core from 10.0.0.1 port 34720 ssh2: RSA SHA256:dZtUZWIHZCqWPvvtEO8QUYeHUprV5e9zMs000ybProI Dec 13 13:34:15.032548 kubelet[2640]: I1213 13:34:15.031134 2640 topology_manager.go:215] "Topology Admit Handler" podUID="bd2088f6-f886-4664-af58-213044237f3c" podNamespace="kube-system" podName="coredns-7db6d8ff4d-tdhb9" Dec 13 13:34:15.034964 sshd-session[3381]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Dec 13 13:34:15.035656 kubelet[2640]: I1213 13:34:15.035502 2640 topology_manager.go:215] "Topology Admit Handler" podUID="cadc3599-3084-49e0-99bc-626d4d423dd6" podNamespace="calico-apiserver" podName="calico-apiserver-7d87f7746b-mqntw" Dec 13 13:34:15.038223 kubelet[2640]: I1213 13:34:15.038184 2640 topology_manager.go:215] "Topology Admit Handler" podUID="5ba12381-f554-4e4f-8ceb-405dc070dc9a" podNamespace="calico-apiserver" podName="calico-apiserver-7d87f7746b-flf6n" Dec 13 13:34:15.043056 systemd[1]: Created slice kubepods-burstable-pod4efa0db1_4649_47cd_847b_b2cd3ddad9b5.slice - libcontainer container kubepods-burstable-pod4efa0db1_4649_47cd_847b_b2cd3ddad9b5.slice. Dec 13 13:34:15.045932 containerd[1460]: time="2024-12-13T13:34:15.045877284Z" level=warning msg="cleanup warnings time=\"2024-12-13T13:34:15Z\" level=warning msg=\"failed to remove runc container\" error=\"runc did not terminate successfully: exit status 255: \" runtime=io.containerd.runc.v2\n" namespace=k8s.io Dec 13 13:34:15.046532 systemd-logind[1445]: New session 9 of user core. Dec 13 13:34:15.054545 systemd[1]: Started session-9.scope - Session 9 of User core. Dec 13 13:34:15.058569 systemd[1]: Created slice kubepods-besteffort-podbcb02bd3_79c5_4f97_892a_aafa3090dcbe.slice - libcontainer container kubepods-besteffort-podbcb02bd3_79c5_4f97_892a_aafa3090dcbe.slice. Dec 13 13:34:15.067037 systemd[1]: Created slice kubepods-burstable-podbd2088f6_f886_4664_af58_213044237f3c.slice - libcontainer container kubepods-burstable-podbd2088f6_f886_4664_af58_213044237f3c.slice. Dec 13 13:34:15.073663 systemd[1]: Created slice kubepods-besteffort-podcadc3599_3084_49e0_99bc_626d4d423dd6.slice - libcontainer container kubepods-besteffort-podcadc3599_3084_49e0_99bc_626d4d423dd6.slice. 
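The "Created slice" entries above show kubelet's systemd cgroup driver at work: each admitted pod gets a systemd slice derived from its QoS class and pod UID, with the UID's dashes mapped to underscores. A small sketch reproducing the naming visible in the log; the helper name is mine, but the output can be checked directly against the kubepods-burstable slice created for pod UID 4efa0db1-4649-47cd-847b-b2cd3ddad9b5:

    // Hedged sketch of the systemd pod-slice naming used with
    // cgroupDriver=systemd. Illustrative, not kubelet's actual code.
    package main

    import (
        "fmt"
        "strings"
    )

    func podSlice(qosClass, podUID string) string {
        // systemd unit names cannot contain '-' as a path separator, so the
        // UID's dashes become underscores in the slice name.
        return fmt.Sprintf("kubepods-%s-pod%s.slice", qosClass, strings.ReplaceAll(podUID, "-", "_"))
    }

    func main() {
        // Prints "kubepods-burstable-pod4efa0db1_4649_47cd_847b_b2cd3ddad9b5.slice",
        // matching the slice created in the log above.
        fmt.Println(podSlice("burstable", "4efa0db1-4649-47cd-847b-b2cd3ddad9b5"))
    }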
Dec 13 13:34:15.076726 kubelet[2640]: I1213 13:34:15.076691 2640 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/4efa0db1-4649-47cd-847b-b2cd3ddad9b5-config-volume\") pod \"coredns-7db6d8ff4d-n488k\" (UID: \"4efa0db1-4649-47cd-847b-b2cd3ddad9b5\") " pod="kube-system/coredns-7db6d8ff4d-n488k" Dec 13 13:34:15.076726 kubelet[2640]: I1213 13:34:15.076724 2640 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/bd2088f6-f886-4664-af58-213044237f3c-config-volume\") pod \"coredns-7db6d8ff4d-tdhb9\" (UID: \"bd2088f6-f886-4664-af58-213044237f3c\") " pod="kube-system/coredns-7db6d8ff4d-tdhb9" Dec 13 13:34:15.076726 kubelet[2640]: I1213 13:34:15.076743 2640 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-jkq74\" (UniqueName: \"kubernetes.io/projected/bd2088f6-f886-4664-af58-213044237f3c-kube-api-access-jkq74\") pod \"coredns-7db6d8ff4d-tdhb9\" (UID: \"bd2088f6-f886-4664-af58-213044237f3c\") " pod="kube-system/coredns-7db6d8ff4d-tdhb9" Dec 13 13:34:15.076936 kubelet[2640]: I1213 13:34:15.076763 2640 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"calico-apiserver-certs\" (UniqueName: \"kubernetes.io/secret/cadc3599-3084-49e0-99bc-626d4d423dd6-calico-apiserver-certs\") pod \"calico-apiserver-7d87f7746b-mqntw\" (UID: \"cadc3599-3084-49e0-99bc-626d4d423dd6\") " pod="calico-apiserver/calico-apiserver-7d87f7746b-mqntw" Dec 13 13:34:15.076936 kubelet[2640]: I1213 13:34:15.076780 2640 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"calico-apiserver-certs\" (UniqueName: \"kubernetes.io/secret/5ba12381-f554-4e4f-8ceb-405dc070dc9a-calico-apiserver-certs\") pod \"calico-apiserver-7d87f7746b-flf6n\" (UID: \"5ba12381-f554-4e4f-8ceb-405dc070dc9a\") " pod="calico-apiserver/calico-apiserver-7d87f7746b-flf6n" Dec 13 13:34:15.076936 kubelet[2640]: I1213 13:34:15.076796 2640 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-6flrw\" (UniqueName: \"kubernetes.io/projected/5ba12381-f554-4e4f-8ceb-405dc070dc9a-kube-api-access-6flrw\") pod \"calico-apiserver-7d87f7746b-flf6n\" (UID: \"5ba12381-f554-4e4f-8ceb-405dc070dc9a\") " pod="calico-apiserver/calico-apiserver-7d87f7746b-flf6n" Dec 13 13:34:15.076936 kubelet[2640]: I1213 13:34:15.076813 2640 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-ff8dd\" (UniqueName: \"kubernetes.io/projected/cadc3599-3084-49e0-99bc-626d4d423dd6-kube-api-access-ff8dd\") pod \"calico-apiserver-7d87f7746b-mqntw\" (UID: \"cadc3599-3084-49e0-99bc-626d4d423dd6\") " pod="calico-apiserver/calico-apiserver-7d87f7746b-mqntw" Dec 13 13:34:15.077036 kubelet[2640]: I1213 13:34:15.076937 2640 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-xvxst\" (UniqueName: \"kubernetes.io/projected/4efa0db1-4649-47cd-847b-b2cd3ddad9b5-kube-api-access-xvxst\") pod \"coredns-7db6d8ff4d-n488k\" (UID: \"4efa0db1-4649-47cd-847b-b2cd3ddad9b5\") " pod="kube-system/coredns-7db6d8ff4d-n488k" Dec 13 13:34:15.077036 kubelet[2640]: I1213 13:34:15.076963 2640 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"tigera-ca-bundle\" 
(UniqueName: \"kubernetes.io/configmap/bcb02bd3-79c5-4f97-892a-aafa3090dcbe-tigera-ca-bundle\") pod \"calico-kube-controllers-55f74d585b-gkp22\" (UID: \"bcb02bd3-79c5-4f97-892a-aafa3090dcbe\") " pod="calico-system/calico-kube-controllers-55f74d585b-gkp22" Dec 13 13:34:15.077036 kubelet[2640]: I1213 13:34:15.076979 2640 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-fqx5w\" (UniqueName: \"kubernetes.io/projected/bcb02bd3-79c5-4f97-892a-aafa3090dcbe-kube-api-access-fqx5w\") pod \"calico-kube-controllers-55f74d585b-gkp22\" (UID: \"bcb02bd3-79c5-4f97-892a-aafa3090dcbe\") " pod="calico-system/calico-kube-controllers-55f74d585b-gkp22" Dec 13 13:34:15.078911 systemd[1]: Created slice kubepods-besteffort-pod5ba12381_f554_4e4f_8ceb_405dc070dc9a.slice - libcontainer container kubepods-besteffort-pod5ba12381_f554_4e4f_8ceb_405dc070dc9a.slice. Dec 13 13:34:15.165223 sshd[3401]: Connection closed by 10.0.0.1 port 34720 Dec 13 13:34:15.165619 sshd-session[3381]: pam_unix(sshd:session): session closed for user core Dec 13 13:34:15.169786 systemd[1]: sshd@8-10.0.0.150:22-10.0.0.1:34720.service: Deactivated successfully. Dec 13 13:34:15.171770 systemd[1]: session-9.scope: Deactivated successfully. Dec 13 13:34:15.172444 systemd-logind[1445]: Session 9 logged out. Waiting for processes to exit. Dec 13 13:34:15.173227 systemd-logind[1445]: Removed session 9. Dec 13 13:34:15.348254 kubelet[2640]: E1213 13:34:15.348205 2640 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Dec 13 13:34:15.349089 containerd[1460]: time="2024-12-13T13:34:15.348801780Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-7db6d8ff4d-n488k,Uid:4efa0db1-4649-47cd-847b-b2cd3ddad9b5,Namespace:kube-system,Attempt:0,}" Dec 13 13:34:15.362281 containerd[1460]: time="2024-12-13T13:34:15.362245208Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-kube-controllers-55f74d585b-gkp22,Uid:bcb02bd3-79c5-4f97-892a-aafa3090dcbe,Namespace:calico-system,Attempt:0,}" Dec 13 13:34:15.371758 kubelet[2640]: E1213 13:34:15.371720 2640 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Dec 13 13:34:15.373652 containerd[1460]: time="2024-12-13T13:34:15.372177364Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-7db6d8ff4d-tdhb9,Uid:bd2088f6-f886-4664-af58-213044237f3c,Namespace:kube-system,Attempt:0,}" Dec 13 13:34:15.378671 containerd[1460]: time="2024-12-13T13:34:15.378603067Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-apiserver-7d87f7746b-mqntw,Uid:cadc3599-3084-49e0-99bc-626d4d423dd6,Namespace:calico-apiserver,Attempt:0,}" Dec 13 13:34:15.384093 containerd[1460]: time="2024-12-13T13:34:15.383691765Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-apiserver-7d87f7746b-flf6n,Uid:5ba12381-f554-4e4f-8ceb-405dc070dc9a,Namespace:calico-apiserver,Attempt:0,}" Dec 13 13:34:15.468690 containerd[1460]: time="2024-12-13T13:34:15.468578399Z" level=error msg="Failed to destroy network for sandbox \"558af8a45bc01399591427b598962160a612f4630f01ffcfbcb0b3a90050035e\"" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Dec 13 
13:34:15.469675 containerd[1460]: time="2024-12-13T13:34:15.469534257Z" level=error msg="encountered an error cleaning up failed sandbox \"558af8a45bc01399591427b598962160a612f4630f01ffcfbcb0b3a90050035e\", marking sandbox state as SANDBOX_UNKNOWN" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Dec 13 13:34:15.469827 containerd[1460]: time="2024-12-13T13:34:15.469693896Z" level=error msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-7db6d8ff4d-n488k,Uid:4efa0db1-4649-47cd-847b-b2cd3ddad9b5,Namespace:kube-system,Attempt:0,} failed, error" error="failed to setup network for sandbox \"558af8a45bc01399591427b598962160a612f4630f01ffcfbcb0b3a90050035e\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Dec 13 13:34:15.470355 kubelet[2640]: E1213 13:34:15.470300 2640 remote_runtime.go:193] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"558af8a45bc01399591427b598962160a612f4630f01ffcfbcb0b3a90050035e\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Dec 13 13:34:15.471033 kubelet[2640]: E1213 13:34:15.470391 2640 kuberuntime_sandbox.go:72] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"558af8a45bc01399591427b598962160a612f4630f01ffcfbcb0b3a90050035e\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="kube-system/coredns-7db6d8ff4d-n488k" Dec 13 13:34:15.471033 kubelet[2640]: E1213 13:34:15.470415 2640 kuberuntime_manager.go:1166] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"558af8a45bc01399591427b598962160a612f4630f01ffcfbcb0b3a90050035e\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="kube-system/coredns-7db6d8ff4d-n488k" Dec 13 13:34:15.471033 kubelet[2640]: E1213 13:34:15.470459 2640 pod_workers.go:1298] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"coredns-7db6d8ff4d-n488k_kube-system(4efa0db1-4649-47cd-847b-b2cd3ddad9b5)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"coredns-7db6d8ff4d-n488k_kube-system(4efa0db1-4649-47cd-847b-b2cd3ddad9b5)\\\": rpc error: code = Unknown desc = failed to setup network for sandbox \\\"558af8a45bc01399591427b598962160a612f4630f01ffcfbcb0b3a90050035e\\\": plugin type=\\\"calico\\\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="kube-system/coredns-7db6d8ff4d-n488k" podUID="4efa0db1-4649-47cd-847b-b2cd3ddad9b5" Dec 13 13:34:15.493584 containerd[1460]: time="2024-12-13T13:34:15.493522863Z" level=error msg="Failed to destroy network for sandbox \"c62e1d6a49358b1088df68486d06de5d565abc5cb224cecb08a72fc49527b818\"" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node 
container is running and has mounted /var/lib/calico/" Dec 13 13:34:15.494187 containerd[1460]: time="2024-12-13T13:34:15.494151014Z" level=error msg="encountered an error cleaning up failed sandbox \"c62e1d6a49358b1088df68486d06de5d565abc5cb224cecb08a72fc49527b818\", marking sandbox state as SANDBOX_UNKNOWN" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Dec 13 13:34:15.494339 containerd[1460]: time="2024-12-13T13:34:15.494257554Z" level=error msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-kube-controllers-55f74d585b-gkp22,Uid:bcb02bd3-79c5-4f97-892a-aafa3090dcbe,Namespace:calico-system,Attempt:0,} failed, error" error="failed to setup network for sandbox \"c62e1d6a49358b1088df68486d06de5d565abc5cb224cecb08a72fc49527b818\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Dec 13 13:34:15.495527 kubelet[2640]: E1213 13:34:15.495146 2640 remote_runtime.go:193] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"c62e1d6a49358b1088df68486d06de5d565abc5cb224cecb08a72fc49527b818\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Dec 13 13:34:15.495527 kubelet[2640]: E1213 13:34:15.495200 2640 kuberuntime_sandbox.go:72] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"c62e1d6a49358b1088df68486d06de5d565abc5cb224cecb08a72fc49527b818\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-system/calico-kube-controllers-55f74d585b-gkp22" Dec 13 13:34:15.495527 kubelet[2640]: E1213 13:34:15.495219 2640 kuberuntime_manager.go:1166] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"c62e1d6a49358b1088df68486d06de5d565abc5cb224cecb08a72fc49527b818\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-system/calico-kube-controllers-55f74d585b-gkp22" Dec 13 13:34:15.495645 kubelet[2640]: E1213 13:34:15.495267 2640 pod_workers.go:1298] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"calico-kube-controllers-55f74d585b-gkp22_calico-system(bcb02bd3-79c5-4f97-892a-aafa3090dcbe)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"calico-kube-controllers-55f74d585b-gkp22_calico-system(bcb02bd3-79c5-4f97-892a-aafa3090dcbe)\\\": rpc error: code = Unknown desc = failed to setup network for sandbox \\\"c62e1d6a49358b1088df68486d06de5d565abc5cb224cecb08a72fc49527b818\\\": plugin type=\\\"calico\\\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="calico-system/calico-kube-controllers-55f74d585b-gkp22" podUID="bcb02bd3-79c5-4f97-892a-aafa3090dcbe" Dec 13 13:34:15.503825 containerd[1460]: time="2024-12-13T13:34:15.503716300Z" level=error msg="Failed to destroy network for sandbox 
\"21489d44f97457445adcc907a914ad67e96a4016825cb60915c67d35d31dba11\"" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Dec 13 13:34:15.504228 containerd[1460]: time="2024-12-13T13:34:15.504146629Z" level=error msg="encountered an error cleaning up failed sandbox \"21489d44f97457445adcc907a914ad67e96a4016825cb60915c67d35d31dba11\", marking sandbox state as SANDBOX_UNKNOWN" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Dec 13 13:34:15.504228 containerd[1460]: time="2024-12-13T13:34:15.504201763Z" level=error msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-7db6d8ff4d-tdhb9,Uid:bd2088f6-f886-4664-af58-213044237f3c,Namespace:kube-system,Attempt:0,} failed, error" error="failed to setup network for sandbox \"21489d44f97457445adcc907a914ad67e96a4016825cb60915c67d35d31dba11\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Dec 13 13:34:15.504523 kubelet[2640]: E1213 13:34:15.504410 2640 remote_runtime.go:193] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"21489d44f97457445adcc907a914ad67e96a4016825cb60915c67d35d31dba11\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Dec 13 13:34:15.504523 kubelet[2640]: E1213 13:34:15.504460 2640 kuberuntime_sandbox.go:72] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"21489d44f97457445adcc907a914ad67e96a4016825cb60915c67d35d31dba11\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="kube-system/coredns-7db6d8ff4d-tdhb9" Dec 13 13:34:15.504523 kubelet[2640]: E1213 13:34:15.504479 2640 kuberuntime_manager.go:1166] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"21489d44f97457445adcc907a914ad67e96a4016825cb60915c67d35d31dba11\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="kube-system/coredns-7db6d8ff4d-tdhb9" Dec 13 13:34:15.504650 kubelet[2640]: E1213 13:34:15.504514 2640 pod_workers.go:1298] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"coredns-7db6d8ff4d-tdhb9_kube-system(bd2088f6-f886-4664-af58-213044237f3c)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"coredns-7db6d8ff4d-tdhb9_kube-system(bd2088f6-f886-4664-af58-213044237f3c)\\\": rpc error: code = Unknown desc = failed to setup network for sandbox \\\"21489d44f97457445adcc907a914ad67e96a4016825cb60915c67d35d31dba11\\\": plugin type=\\\"calico\\\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="kube-system/coredns-7db6d8ff4d-tdhb9" podUID="bd2088f6-f886-4664-af58-213044237f3c" Dec 13 13:34:15.507189 containerd[1460]: 
time="2024-12-13T13:34:15.507127834Z" level=error msg="Failed to destroy network for sandbox \"3471811395b1ff1780abbbc92eeb9a683464d2b59eb4b36f7ca30acd080ae9c4\"" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Dec 13 13:34:15.507663 containerd[1460]: time="2024-12-13T13:34:15.507633375Z" level=error msg="encountered an error cleaning up failed sandbox \"3471811395b1ff1780abbbc92eeb9a683464d2b59eb4b36f7ca30acd080ae9c4\", marking sandbox state as SANDBOX_UNKNOWN" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Dec 13 13:34:15.507744 containerd[1460]: time="2024-12-13T13:34:15.507702766Z" level=error msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-apiserver-7d87f7746b-mqntw,Uid:cadc3599-3084-49e0-99bc-626d4d423dd6,Namespace:calico-apiserver,Attempt:0,} failed, error" error="failed to setup network for sandbox \"3471811395b1ff1780abbbc92eeb9a683464d2b59eb4b36f7ca30acd080ae9c4\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Dec 13 13:34:15.507885 kubelet[2640]: E1213 13:34:15.507861 2640 remote_runtime.go:193] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"3471811395b1ff1780abbbc92eeb9a683464d2b59eb4b36f7ca30acd080ae9c4\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Dec 13 13:34:15.507929 kubelet[2640]: E1213 13:34:15.507892 2640 kuberuntime_sandbox.go:72] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"3471811395b1ff1780abbbc92eeb9a683464d2b59eb4b36f7ca30acd080ae9c4\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-apiserver/calico-apiserver-7d87f7746b-mqntw" Dec 13 13:34:15.507929 kubelet[2640]: E1213 13:34:15.507909 2640 kuberuntime_manager.go:1166] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"3471811395b1ff1780abbbc92eeb9a683464d2b59eb4b36f7ca30acd080ae9c4\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-apiserver/calico-apiserver-7d87f7746b-mqntw" Dec 13 13:34:15.507987 kubelet[2640]: E1213 13:34:15.507952 2640 pod_workers.go:1298] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"calico-apiserver-7d87f7746b-mqntw_calico-apiserver(cadc3599-3084-49e0-99bc-626d4d423dd6)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"calico-apiserver-7d87f7746b-mqntw_calico-apiserver(cadc3599-3084-49e0-99bc-626d4d423dd6)\\\": rpc error: code = Unknown desc = failed to setup network for sandbox \\\"3471811395b1ff1780abbbc92eeb9a683464d2b59eb4b36f7ca30acd080ae9c4\\\": plugin type=\\\"calico\\\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" 
pod="calico-apiserver/calico-apiserver-7d87f7746b-mqntw" podUID="cadc3599-3084-49e0-99bc-626d4d423dd6" Dec 13 13:34:15.508553 containerd[1460]: time="2024-12-13T13:34:15.508513290Z" level=error msg="Failed to destroy network for sandbox \"d1a332cec12278e8a2848eda31dc70e9822bb2e5c607ef1e71dfc9da622ecebb\"" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Dec 13 13:34:15.508876 containerd[1460]: time="2024-12-13T13:34:15.508852587Z" level=error msg="encountered an error cleaning up failed sandbox \"d1a332cec12278e8a2848eda31dc70e9822bb2e5c607ef1e71dfc9da622ecebb\", marking sandbox state as SANDBOX_UNKNOWN" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Dec 13 13:34:15.508916 containerd[1460]: time="2024-12-13T13:34:15.508895609Z" level=error msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-apiserver-7d87f7746b-flf6n,Uid:5ba12381-f554-4e4f-8ceb-405dc070dc9a,Namespace:calico-apiserver,Attempt:0,} failed, error" error="failed to setup network for sandbox \"d1a332cec12278e8a2848eda31dc70e9822bb2e5c607ef1e71dfc9da622ecebb\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Dec 13 13:34:15.509110 kubelet[2640]: E1213 13:34:15.509070 2640 remote_runtime.go:193] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"d1a332cec12278e8a2848eda31dc70e9822bb2e5c607ef1e71dfc9da622ecebb\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Dec 13 13:34:15.509293 kubelet[2640]: E1213 13:34:15.509126 2640 kuberuntime_sandbox.go:72] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"d1a332cec12278e8a2848eda31dc70e9822bb2e5c607ef1e71dfc9da622ecebb\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-apiserver/calico-apiserver-7d87f7746b-flf6n" Dec 13 13:34:15.509293 kubelet[2640]: E1213 13:34:15.509150 2640 kuberuntime_manager.go:1166] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"d1a332cec12278e8a2848eda31dc70e9822bb2e5c607ef1e71dfc9da622ecebb\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-apiserver/calico-apiserver-7d87f7746b-flf6n" Dec 13 13:34:15.509293 kubelet[2640]: E1213 13:34:15.509191 2640 pod_workers.go:1298] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"calico-apiserver-7d87f7746b-flf6n_calico-apiserver(5ba12381-f554-4e4f-8ceb-405dc070dc9a)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"calico-apiserver-7d87f7746b-flf6n_calico-apiserver(5ba12381-f554-4e4f-8ceb-405dc070dc9a)\\\": rpc error: code = Unknown desc = failed to setup network for sandbox \\\"d1a332cec12278e8a2848eda31dc70e9822bb2e5c607ef1e71dfc9da622ecebb\\\": plugin type=\\\"calico\\\" failed (add): 
stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="calico-apiserver/calico-apiserver-7d87f7746b-flf6n" podUID="5ba12381-f554-4e4f-8ceb-405dc070dc9a" Dec 13 13:34:15.515868 kubelet[2640]: I1213 13:34:15.515847 2640 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="21489d44f97457445adcc907a914ad67e96a4016825cb60915c67d35d31dba11" Dec 13 13:34:15.516443 containerd[1460]: time="2024-12-13T13:34:15.516409677Z" level=info msg="StopPodSandbox for \"21489d44f97457445adcc907a914ad67e96a4016825cb60915c67d35d31dba11\"" Dec 13 13:34:15.516608 containerd[1460]: time="2024-12-13T13:34:15.516590057Z" level=info msg="Ensure that sandbox 21489d44f97457445adcc907a914ad67e96a4016825cb60915c67d35d31dba11 in task-service has been cleanup successfully" Dec 13 13:34:15.516845 containerd[1460]: time="2024-12-13T13:34:15.516824257Z" level=info msg="TearDown network for sandbox \"21489d44f97457445adcc907a914ad67e96a4016825cb60915c67d35d31dba11\" successfully" Dec 13 13:34:15.516916 containerd[1460]: time="2024-12-13T13:34:15.516895391Z" level=info msg="StopPodSandbox for \"21489d44f97457445adcc907a914ad67e96a4016825cb60915c67d35d31dba11\" returns successfully" Dec 13 13:34:15.517051 kubelet[2640]: I1213 13:34:15.516960 2640 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="558af8a45bc01399591427b598962160a612f4630f01ffcfbcb0b3a90050035e" Dec 13 13:34:15.517476 containerd[1460]: time="2024-12-13T13:34:15.517443622Z" level=info msg="StopPodSandbox for \"558af8a45bc01399591427b598962160a612f4630f01ffcfbcb0b3a90050035e\"" Dec 13 13:34:15.517558 kubelet[2640]: E1213 13:34:15.517520 2640 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Dec 13 13:34:15.517590 containerd[1460]: time="2024-12-13T13:34:15.517580600Z" level=info msg="Ensure that sandbox 558af8a45bc01399591427b598962160a612f4630f01ffcfbcb0b3a90050035e in task-service has been cleanup successfully" Dec 13 13:34:15.517893 containerd[1460]: time="2024-12-13T13:34:15.517729890Z" level=info msg="TearDown network for sandbox \"558af8a45bc01399591427b598962160a612f4630f01ffcfbcb0b3a90050035e\" successfully" Dec 13 13:34:15.517893 containerd[1460]: time="2024-12-13T13:34:15.517743877Z" level=info msg="StopPodSandbox for \"558af8a45bc01399591427b598962160a612f4630f01ffcfbcb0b3a90050035e\" returns successfully" Dec 13 13:34:15.517893 containerd[1460]: time="2024-12-13T13:34:15.517781608Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-7db6d8ff4d-tdhb9,Uid:bd2088f6-f886-4664-af58-213044237f3c,Namespace:kube-system,Attempt:1,}" Dec 13 13:34:15.517981 kubelet[2640]: E1213 13:34:15.517863 2640 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Dec 13 13:34:15.518024 containerd[1460]: time="2024-12-13T13:34:15.517997323Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-7db6d8ff4d-n488k,Uid:4efa0db1-4649-47cd-847b-b2cd3ddad9b5,Namespace:kube-system,Attempt:1,}" Dec 13 13:34:15.519152 kubelet[2640]: E1213 13:34:15.519132 2640 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Dec 13 13:34:15.519854 containerd[1460]: 
time="2024-12-13T13:34:15.519766239Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/node:v3.29.1\"" Dec 13 13:34:15.519904 kubelet[2640]: I1213 13:34:15.519853 2640 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="d1a332cec12278e8a2848eda31dc70e9822bb2e5c607ef1e71dfc9da622ecebb" Dec 13 13:34:15.520209 containerd[1460]: time="2024-12-13T13:34:15.520185217Z" level=info msg="StopPodSandbox for \"d1a332cec12278e8a2848eda31dc70e9822bb2e5c607ef1e71dfc9da622ecebb\"" Dec 13 13:34:15.520358 containerd[1460]: time="2024-12-13T13:34:15.520340509Z" level=info msg="Ensure that sandbox d1a332cec12278e8a2848eda31dc70e9822bb2e5c607ef1e71dfc9da622ecebb in task-service has been cleanup successfully" Dec 13 13:34:15.520619 containerd[1460]: time="2024-12-13T13:34:15.520597242Z" level=info msg="TearDown network for sandbox \"d1a332cec12278e8a2848eda31dc70e9822bb2e5c607ef1e71dfc9da622ecebb\" successfully" Dec 13 13:34:15.520619 containerd[1460]: time="2024-12-13T13:34:15.520613943Z" level=info msg="StopPodSandbox for \"d1a332cec12278e8a2848eda31dc70e9822bb2e5c607ef1e71dfc9da622ecebb\" returns successfully" Dec 13 13:34:15.520980 containerd[1460]: time="2024-12-13T13:34:15.520960464Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-apiserver-7d87f7746b-flf6n,Uid:5ba12381-f554-4e4f-8ceb-405dc070dc9a,Namespace:calico-apiserver,Attempt:1,}" Dec 13 13:34:15.521245 kubelet[2640]: I1213 13:34:15.521014 2640 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="3471811395b1ff1780abbbc92eeb9a683464d2b59eb4b36f7ca30acd080ae9c4" Dec 13 13:34:15.521587 containerd[1460]: time="2024-12-13T13:34:15.521507062Z" level=info msg="StopPodSandbox for \"3471811395b1ff1780abbbc92eeb9a683464d2b59eb4b36f7ca30acd080ae9c4\"" Dec 13 13:34:15.522129 containerd[1460]: time="2024-12-13T13:34:15.522005980Z" level=info msg="Ensure that sandbox 3471811395b1ff1780abbbc92eeb9a683464d2b59eb4b36f7ca30acd080ae9c4 in task-service has been cleanup successfully" Dec 13 13:34:15.522246 containerd[1460]: time="2024-12-13T13:34:15.522230362Z" level=info msg="TearDown network for sandbox \"3471811395b1ff1780abbbc92eeb9a683464d2b59eb4b36f7ca30acd080ae9c4\" successfully" Dec 13 13:34:15.523385 containerd[1460]: time="2024-12-13T13:34:15.522295514Z" level=info msg="StopPodSandbox for \"3471811395b1ff1780abbbc92eeb9a683464d2b59eb4b36f7ca30acd080ae9c4\" returns successfully" Dec 13 13:34:15.523385 containerd[1460]: time="2024-12-13T13:34:15.522914989Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-apiserver-7d87f7746b-mqntw,Uid:cadc3599-3084-49e0-99bc-626d4d423dd6,Namespace:calico-apiserver,Attempt:1,}" Dec 13 13:34:15.523385 containerd[1460]: time="2024-12-13T13:34:15.523075441Z" level=info msg="StopPodSandbox for \"c62e1d6a49358b1088df68486d06de5d565abc5cb224cecb08a72fc49527b818\"" Dec 13 13:34:15.523385 containerd[1460]: time="2024-12-13T13:34:15.523237005Z" level=info msg="Ensure that sandbox c62e1d6a49358b1088df68486d06de5d565abc5cb224cecb08a72fc49527b818 in task-service has been cleanup successfully" Dec 13 13:34:15.523502 kubelet[2640]: I1213 13:34:15.522501 2640 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="c62e1d6a49358b1088df68486d06de5d565abc5cb224cecb08a72fc49527b818" Dec 13 13:34:15.523605 containerd[1460]: time="2024-12-13T13:34:15.523577545Z" level=info msg="TearDown network for sandbox \"c62e1d6a49358b1088df68486d06de5d565abc5cb224cecb08a72fc49527b818\" successfully" Dec 13 13:34:15.523652 
containerd[1460]: time="2024-12-13T13:34:15.523641256Z" level=info msg="StopPodSandbox for \"c62e1d6a49358b1088df68486d06de5d565abc5cb224cecb08a72fc49527b818\" returns successfully" Dec 13 13:34:15.524908 containerd[1460]: time="2024-12-13T13:34:15.524881026Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-kube-controllers-55f74d585b-gkp22,Uid:bcb02bd3-79c5-4f97-892a-aafa3090dcbe,Namespace:calico-system,Attempt:1,}" Dec 13 13:34:15.685763 containerd[1460]: time="2024-12-13T13:34:15.685716379Z" level=error msg="Failed to destroy network for sandbox \"043d5f360472a5cf84f2e7fe5d34c855f1c1466ecd61a4548840ff0da56c473d\"" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Dec 13 13:34:15.686246 containerd[1460]: time="2024-12-13T13:34:15.686086104Z" level=error msg="encountered an error cleaning up failed sandbox \"043d5f360472a5cf84f2e7fe5d34c855f1c1466ecd61a4548840ff0da56c473d\", marking sandbox state as SANDBOX_UNKNOWN" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Dec 13 13:34:15.686246 containerd[1460]: time="2024-12-13T13:34:15.686138803Z" level=error msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-kube-controllers-55f74d585b-gkp22,Uid:bcb02bd3-79c5-4f97-892a-aafa3090dcbe,Namespace:calico-system,Attempt:1,} failed, error" error="failed to setup network for sandbox \"043d5f360472a5cf84f2e7fe5d34c855f1c1466ecd61a4548840ff0da56c473d\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Dec 13 13:34:15.686888 kubelet[2640]: E1213 13:34:15.686539 2640 remote_runtime.go:193] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"043d5f360472a5cf84f2e7fe5d34c855f1c1466ecd61a4548840ff0da56c473d\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Dec 13 13:34:15.686888 kubelet[2640]: E1213 13:34:15.686596 2640 kuberuntime_sandbox.go:72] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"043d5f360472a5cf84f2e7fe5d34c855f1c1466ecd61a4548840ff0da56c473d\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-system/calico-kube-controllers-55f74d585b-gkp22" Dec 13 13:34:15.686888 kubelet[2640]: E1213 13:34:15.686617 2640 kuberuntime_manager.go:1166] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"043d5f360472a5cf84f2e7fe5d34c855f1c1466ecd61a4548840ff0da56c473d\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-system/calico-kube-controllers-55f74d585b-gkp22" Dec 13 13:34:15.687022 kubelet[2640]: E1213 13:34:15.686654 2640 pod_workers.go:1298] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for 
\"calico-kube-controllers-55f74d585b-gkp22_calico-system(bcb02bd3-79c5-4f97-892a-aafa3090dcbe)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"calico-kube-controllers-55f74d585b-gkp22_calico-system(bcb02bd3-79c5-4f97-892a-aafa3090dcbe)\\\": rpc error: code = Unknown desc = failed to setup network for sandbox \\\"043d5f360472a5cf84f2e7fe5d34c855f1c1466ecd61a4548840ff0da56c473d\\\": plugin type=\\\"calico\\\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="calico-system/calico-kube-controllers-55f74d585b-gkp22" podUID="bcb02bd3-79c5-4f97-892a-aafa3090dcbe" Dec 13 13:34:15.687306 containerd[1460]: time="2024-12-13T13:34:15.687211670Z" level=error msg="Failed to destroy network for sandbox \"8e04eb66310a3917ffdcbb002bcd66f739fb7b5c6e195d72921f7f71735c6ad9\"" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Dec 13 13:34:15.687905 containerd[1460]: time="2024-12-13T13:34:15.687869958Z" level=error msg="encountered an error cleaning up failed sandbox \"8e04eb66310a3917ffdcbb002bcd66f739fb7b5c6e195d72921f7f71735c6ad9\", marking sandbox state as SANDBOX_UNKNOWN" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Dec 13 13:34:15.688036 containerd[1460]: time="2024-12-13T13:34:15.687927407Z" level=error msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-7db6d8ff4d-tdhb9,Uid:bd2088f6-f886-4664-af58-213044237f3c,Namespace:kube-system,Attempt:1,} failed, error" error="failed to setup network for sandbox \"8e04eb66310a3917ffdcbb002bcd66f739fb7b5c6e195d72921f7f71735c6ad9\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Dec 13 13:34:15.688113 kubelet[2640]: E1213 13:34:15.688077 2640 remote_runtime.go:193] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"8e04eb66310a3917ffdcbb002bcd66f739fb7b5c6e195d72921f7f71735c6ad9\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Dec 13 13:34:15.688339 kubelet[2640]: E1213 13:34:15.688289 2640 kuberuntime_sandbox.go:72] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"8e04eb66310a3917ffdcbb002bcd66f739fb7b5c6e195d72921f7f71735c6ad9\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="kube-system/coredns-7db6d8ff4d-tdhb9" Dec 13 13:34:15.688413 kubelet[2640]: E1213 13:34:15.688359 2640 kuberuntime_manager.go:1166] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"8e04eb66310a3917ffdcbb002bcd66f739fb7b5c6e195d72921f7f71735c6ad9\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="kube-system/coredns-7db6d8ff4d-tdhb9" Dec 13 13:34:15.688510 kubelet[2640]: E1213 
13:34:15.688469 2640 pod_workers.go:1298] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"coredns-7db6d8ff4d-tdhb9_kube-system(bd2088f6-f886-4664-af58-213044237f3c)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"coredns-7db6d8ff4d-tdhb9_kube-system(bd2088f6-f886-4664-af58-213044237f3c)\\\": rpc error: code = Unknown desc = failed to setup network for sandbox \\\"8e04eb66310a3917ffdcbb002bcd66f739fb7b5c6e195d72921f7f71735c6ad9\\\": plugin type=\\\"calico\\\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="kube-system/coredns-7db6d8ff4d-tdhb9" podUID="bd2088f6-f886-4664-af58-213044237f3c" Dec 13 13:34:15.691158 containerd[1460]: time="2024-12-13T13:34:15.691118196Z" level=error msg="Failed to destroy network for sandbox \"5830ddd4f895c70171dbfc66a59269f13cec0912a16a8f19a7fc422dce5b8861\"" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Dec 13 13:34:15.691574 containerd[1460]: time="2024-12-13T13:34:15.691549497Z" level=error msg="encountered an error cleaning up failed sandbox \"5830ddd4f895c70171dbfc66a59269f13cec0912a16a8f19a7fc422dce5b8861\", marking sandbox state as SANDBOX_UNKNOWN" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Dec 13 13:34:15.691634 containerd[1460]: time="2024-12-13T13:34:15.691600884Z" level=error msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-apiserver-7d87f7746b-flf6n,Uid:5ba12381-f554-4e4f-8ceb-405dc070dc9a,Namespace:calico-apiserver,Attempt:1,} failed, error" error="failed to setup network for sandbox \"5830ddd4f895c70171dbfc66a59269f13cec0912a16a8f19a7fc422dce5b8861\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Dec 13 13:34:15.691854 kubelet[2640]: E1213 13:34:15.691778 2640 remote_runtime.go:193] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"5830ddd4f895c70171dbfc66a59269f13cec0912a16a8f19a7fc422dce5b8861\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Dec 13 13:34:15.691906 kubelet[2640]: E1213 13:34:15.691869 2640 kuberuntime_sandbox.go:72] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"5830ddd4f895c70171dbfc66a59269f13cec0912a16a8f19a7fc422dce5b8861\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-apiserver/calico-apiserver-7d87f7746b-flf6n" Dec 13 13:34:15.691945 kubelet[2640]: E1213 13:34:15.691925 2640 kuberuntime_manager.go:1166] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"5830ddd4f895c70171dbfc66a59269f13cec0912a16a8f19a7fc422dce5b8861\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" 
pod="calico-apiserver/calico-apiserver-7d87f7746b-flf6n" Dec 13 13:34:15.692533 kubelet[2640]: E1213 13:34:15.692236 2640 pod_workers.go:1298] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"calico-apiserver-7d87f7746b-flf6n_calico-apiserver(5ba12381-f554-4e4f-8ceb-405dc070dc9a)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"calico-apiserver-7d87f7746b-flf6n_calico-apiserver(5ba12381-f554-4e4f-8ceb-405dc070dc9a)\\\": rpc error: code = Unknown desc = failed to setup network for sandbox \\\"5830ddd4f895c70171dbfc66a59269f13cec0912a16a8f19a7fc422dce5b8861\\\": plugin type=\\\"calico\\\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="calico-apiserver/calico-apiserver-7d87f7746b-flf6n" podUID="5ba12381-f554-4e4f-8ceb-405dc070dc9a" Dec 13 13:34:15.694014 containerd[1460]: time="2024-12-13T13:34:15.693962425Z" level=error msg="Failed to destroy network for sandbox \"706c13db8235334bc38488f0490b1867ec134804b2b05c3d93c69a593fd16e13\"" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Dec 13 13:34:15.694674 containerd[1460]: time="2024-12-13T13:34:15.694646521Z" level=error msg="encountered an error cleaning up failed sandbox \"706c13db8235334bc38488f0490b1867ec134804b2b05c3d93c69a593fd16e13\", marking sandbox state as SANDBOX_UNKNOWN" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Dec 13 13:34:15.694796 containerd[1460]: time="2024-12-13T13:34:15.694771235Z" level=error msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-7db6d8ff4d-n488k,Uid:4efa0db1-4649-47cd-847b-b2cd3ddad9b5,Namespace:kube-system,Attempt:1,} failed, error" error="failed to setup network for sandbox \"706c13db8235334bc38488f0490b1867ec134804b2b05c3d93c69a593fd16e13\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Dec 13 13:34:15.695092 kubelet[2640]: E1213 13:34:15.695040 2640 remote_runtime.go:193] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"706c13db8235334bc38488f0490b1867ec134804b2b05c3d93c69a593fd16e13\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Dec 13 13:34:15.695197 kubelet[2640]: E1213 13:34:15.695108 2640 kuberuntime_sandbox.go:72] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"706c13db8235334bc38488f0490b1867ec134804b2b05c3d93c69a593fd16e13\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="kube-system/coredns-7db6d8ff4d-n488k" Dec 13 13:34:15.695197 kubelet[2640]: E1213 13:34:15.695132 2640 kuberuntime_manager.go:1166] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"706c13db8235334bc38488f0490b1867ec134804b2b05c3d93c69a593fd16e13\": plugin type=\"calico\" failed (add): stat 
/var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="kube-system/coredns-7db6d8ff4d-n488k" Dec 13 13:34:15.695433 kubelet[2640]: E1213 13:34:15.695189 2640 pod_workers.go:1298] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"coredns-7db6d8ff4d-n488k_kube-system(4efa0db1-4649-47cd-847b-b2cd3ddad9b5)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"coredns-7db6d8ff4d-n488k_kube-system(4efa0db1-4649-47cd-847b-b2cd3ddad9b5)\\\": rpc error: code = Unknown desc = failed to setup network for sandbox \\\"706c13db8235334bc38488f0490b1867ec134804b2b05c3d93c69a593fd16e13\\\": plugin type=\\\"calico\\\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="kube-system/coredns-7db6d8ff4d-n488k" podUID="4efa0db1-4649-47cd-847b-b2cd3ddad9b5" Dec 13 13:34:15.709182 containerd[1460]: time="2024-12-13T13:34:15.709123222Z" level=error msg="Failed to destroy network for sandbox \"03f3d3da32a6bad9575866ecc071f7db7e2ef21eb15c13b5d3ccc4981ea3a6b8\"" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Dec 13 13:34:15.709619 containerd[1460]: time="2024-12-13T13:34:15.709582155Z" level=error msg="encountered an error cleaning up failed sandbox \"03f3d3da32a6bad9575866ecc071f7db7e2ef21eb15c13b5d3ccc4981ea3a6b8\", marking sandbox state as SANDBOX_UNKNOWN" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Dec 13 13:34:15.709658 containerd[1460]: time="2024-12-13T13:34:15.709644311Z" level=error msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-apiserver-7d87f7746b-mqntw,Uid:cadc3599-3084-49e0-99bc-626d4d423dd6,Namespace:calico-apiserver,Attempt:1,} failed, error" error="failed to setup network for sandbox \"03f3d3da32a6bad9575866ecc071f7db7e2ef21eb15c13b5d3ccc4981ea3a6b8\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Dec 13 13:34:15.709914 kubelet[2640]: E1213 13:34:15.709866 2640 remote_runtime.go:193] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"03f3d3da32a6bad9575866ecc071f7db7e2ef21eb15c13b5d3ccc4981ea3a6b8\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Dec 13 13:34:15.709971 kubelet[2640]: E1213 13:34:15.709930 2640 kuberuntime_sandbox.go:72] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"03f3d3da32a6bad9575866ecc071f7db7e2ef21eb15c13b5d3ccc4981ea3a6b8\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-apiserver/calico-apiserver-7d87f7746b-mqntw" Dec 13 13:34:15.709971 kubelet[2640]: E1213 13:34:15.709955 2640 kuberuntime_manager.go:1166] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox 
\"03f3d3da32a6bad9575866ecc071f7db7e2ef21eb15c13b5d3ccc4981ea3a6b8\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-apiserver/calico-apiserver-7d87f7746b-mqntw" Dec 13 13:34:15.710038 kubelet[2640]: E1213 13:34:15.710004 2640 pod_workers.go:1298] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"calico-apiserver-7d87f7746b-mqntw_calico-apiserver(cadc3599-3084-49e0-99bc-626d4d423dd6)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"calico-apiserver-7d87f7746b-mqntw_calico-apiserver(cadc3599-3084-49e0-99bc-626d4d423dd6)\\\": rpc error: code = Unknown desc = failed to setup network for sandbox \\\"03f3d3da32a6bad9575866ecc071f7db7e2ef21eb15c13b5d3ccc4981ea3a6b8\\\": plugin type=\\\"calico\\\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="calico-apiserver/calico-apiserver-7d87f7746b-mqntw" podUID="cadc3599-3084-49e0-99bc-626d4d423dd6" Dec 13 13:34:16.525960 kubelet[2640]: I1213 13:34:16.525919 2640 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="8e04eb66310a3917ffdcbb002bcd66f739fb7b5c6e195d72921f7f71735c6ad9" Dec 13 13:34:16.527481 containerd[1460]: time="2024-12-13T13:34:16.527290361Z" level=info msg="StopPodSandbox for \"8e04eb66310a3917ffdcbb002bcd66f739fb7b5c6e195d72921f7f71735c6ad9\"" Dec 13 13:34:16.528034 containerd[1460]: time="2024-12-13T13:34:16.527654466Z" level=info msg="Ensure that sandbox 8e04eb66310a3917ffdcbb002bcd66f739fb7b5c6e195d72921f7f71735c6ad9 in task-service has been cleanup successfully" Dec 13 13:34:16.530769 containerd[1460]: time="2024-12-13T13:34:16.528282677Z" level=info msg="TearDown network for sandbox \"8e04eb66310a3917ffdcbb002bcd66f739fb7b5c6e195d72921f7f71735c6ad9\" successfully" Dec 13 13:34:16.530769 containerd[1460]: time="2024-12-13T13:34:16.528302835Z" level=info msg="StopPodSandbox for \"8e04eb66310a3917ffdcbb002bcd66f739fb7b5c6e195d72921f7f71735c6ad9\" returns successfully" Dec 13 13:34:16.530769 containerd[1460]: time="2024-12-13T13:34:16.529809347Z" level=info msg="StopPodSandbox for \"21489d44f97457445adcc907a914ad67e96a4016825cb60915c67d35d31dba11\"" Dec 13 13:34:16.530769 containerd[1460]: time="2024-12-13T13:34:16.529941045Z" level=info msg="TearDown network for sandbox \"21489d44f97457445adcc907a914ad67e96a4016825cb60915c67d35d31dba11\" successfully" Dec 13 13:34:16.530769 containerd[1460]: time="2024-12-13T13:34:16.529953889Z" level=info msg="StopPodSandbox for \"21489d44f97457445adcc907a914ad67e96a4016825cb60915c67d35d31dba11\" returns successfully" Dec 13 13:34:16.531007 kubelet[2640]: E1213 13:34:16.530297 2640 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Dec 13 13:34:16.531052 containerd[1460]: time="2024-12-13T13:34:16.531023821Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-7db6d8ff4d-tdhb9,Uid:bd2088f6-f886-4664-af58-213044237f3c,Namespace:kube-system,Attempt:2,}" Dec 13 13:34:16.532109 systemd[1]: run-netns-cni\x2d3969a768\x2d8563\x2d6bb4\x2d524c\x2d2ffd2484504e.mount: Deactivated successfully. 
Dec 13 13:34:16.533150 kubelet[2640]: I1213 13:34:16.532412 2640 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="706c13db8235334bc38488f0490b1867ec134804b2b05c3d93c69a593fd16e13" Dec 13 13:34:16.534237 containerd[1460]: time="2024-12-13T13:34:16.534119340Z" level=info msg="StopPodSandbox for \"706c13db8235334bc38488f0490b1867ec134804b2b05c3d93c69a593fd16e13\"" Dec 13 13:34:16.534622 containerd[1460]: time="2024-12-13T13:34:16.534588452Z" level=info msg="Ensure that sandbox 706c13db8235334bc38488f0490b1867ec134804b2b05c3d93c69a593fd16e13 in task-service has been cleanup successfully" Dec 13 13:34:16.536382 containerd[1460]: time="2024-12-13T13:34:16.534824546Z" level=info msg="TearDown network for sandbox \"706c13db8235334bc38488f0490b1867ec134804b2b05c3d93c69a593fd16e13\" successfully" Dec 13 13:34:16.536382 containerd[1460]: time="2024-12-13T13:34:16.534844403Z" level=info msg="StopPodSandbox for \"706c13db8235334bc38488f0490b1867ec134804b2b05c3d93c69a593fd16e13\" returns successfully" Dec 13 13:34:16.536382 containerd[1460]: time="2024-12-13T13:34:16.535814076Z" level=info msg="StopPodSandbox for \"558af8a45bc01399591427b598962160a612f4630f01ffcfbcb0b3a90050035e\"" Dec 13 13:34:16.536382 containerd[1460]: time="2024-12-13T13:34:16.535936567Z" level=info msg="TearDown network for sandbox \"558af8a45bc01399591427b598962160a612f4630f01ffcfbcb0b3a90050035e\" successfully" Dec 13 13:34:16.536382 containerd[1460]: time="2024-12-13T13:34:16.535950363Z" level=info msg="StopPodSandbox for \"558af8a45bc01399591427b598962160a612f4630f01ffcfbcb0b3a90050035e\" returns successfully" Dec 13 13:34:16.536832 kubelet[2640]: I1213 13:34:16.535586 2640 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="043d5f360472a5cf84f2e7fe5d34c855f1c1466ecd61a4548840ff0da56c473d" Dec 13 13:34:16.536832 kubelet[2640]: E1213 13:34:16.536146 2640 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Dec 13 13:34:16.536978 containerd[1460]: time="2024-12-13T13:34:16.536523429Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-7db6d8ff4d-n488k,Uid:4efa0db1-4649-47cd-847b-b2cd3ddad9b5,Namespace:kube-system,Attempt:2,}" Dec 13 13:34:16.536978 containerd[1460]: time="2024-12-13T13:34:16.536781705Z" level=info msg="StopPodSandbox for \"043d5f360472a5cf84f2e7fe5d34c855f1c1466ecd61a4548840ff0da56c473d\"" Dec 13 13:34:16.537059 containerd[1460]: time="2024-12-13T13:34:16.537019142Z" level=info msg="Ensure that sandbox 043d5f360472a5cf84f2e7fe5d34c855f1c1466ecd61a4548840ff0da56c473d in task-service has been cleanup successfully" Dec 13 13:34:16.538279 containerd[1460]: time="2024-12-13T13:34:16.538241881Z" level=info msg="TearDown network for sandbox \"043d5f360472a5cf84f2e7fe5d34c855f1c1466ecd61a4548840ff0da56c473d\" successfully" Dec 13 13:34:16.538567 containerd[1460]: time="2024-12-13T13:34:16.538401862Z" level=info msg="StopPodSandbox for \"043d5f360472a5cf84f2e7fe5d34c855f1c1466ecd61a4548840ff0da56c473d\" returns successfully" Dec 13 13:34:16.538717 kubelet[2640]: I1213 13:34:16.538618 2640 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="5830ddd4f895c70171dbfc66a59269f13cec0912a16a8f19a7fc422dce5b8861" Dec 13 13:34:16.539520 containerd[1460]: time="2024-12-13T13:34:16.539299198Z" level=info msg="StopPodSandbox for \"c62e1d6a49358b1088df68486d06de5d565abc5cb224cecb08a72fc49527b818\"" Dec 13 
13:34:16.539520 containerd[1460]: time="2024-12-13T13:34:16.539433411Z" level=info msg="StopPodSandbox for \"5830ddd4f895c70171dbfc66a59269f13cec0912a16a8f19a7fc422dce5b8861\"" Dec 13 13:34:16.539520 containerd[1460]: time="2024-12-13T13:34:16.539457366Z" level=info msg="TearDown network for sandbox \"c62e1d6a49358b1088df68486d06de5d565abc5cb224cecb08a72fc49527b818\" successfully" Dec 13 13:34:16.539520 containerd[1460]: time="2024-12-13T13:34:16.539471212Z" level=info msg="StopPodSandbox for \"c62e1d6a49358b1088df68486d06de5d565abc5cb224cecb08a72fc49527b818\" returns successfully" Dec 13 13:34:16.539671 containerd[1460]: time="2024-12-13T13:34:16.539659295Z" level=info msg="Ensure that sandbox 5830ddd4f895c70171dbfc66a59269f13cec0912a16a8f19a7fc422dce5b8861 in task-service has been cleanup successfully" Dec 13 13:34:16.540359 containerd[1460]: time="2024-12-13T13:34:16.540173352Z" level=info msg="TearDown network for sandbox \"5830ddd4f895c70171dbfc66a59269f13cec0912a16a8f19a7fc422dce5b8861\" successfully" Dec 13 13:34:16.540359 containerd[1460]: time="2024-12-13T13:34:16.540195623Z" level=info msg="StopPodSandbox for \"5830ddd4f895c70171dbfc66a59269f13cec0912a16a8f19a7fc422dce5b8861\" returns successfully" Dec 13 13:34:16.540593 containerd[1460]: time="2024-12-13T13:34:16.540541665Z" level=info msg="StopPodSandbox for \"d1a332cec12278e8a2848eda31dc70e9822bb2e5c607ef1e71dfc9da622ecebb\"" Dec 13 13:34:16.540408 systemd[1]: run-netns-cni\x2dc13d8541\x2d0aac\x2d0ec1\x2d04c0\x2d964fd82290fc.mount: Deactivated successfully. Dec 13 13:34:16.540702 containerd[1460]: time="2024-12-13T13:34:16.540637304Z" level=info msg="TearDown network for sandbox \"d1a332cec12278e8a2848eda31dc70e9822bb2e5c607ef1e71dfc9da622ecebb\" successfully" Dec 13 13:34:16.540702 containerd[1460]: time="2024-12-13T13:34:16.540651941Z" level=info msg="StopPodSandbox for \"d1a332cec12278e8a2848eda31dc70e9822bb2e5c607ef1e71dfc9da622ecebb\" returns successfully" Dec 13 13:34:16.540764 containerd[1460]: time="2024-12-13T13:34:16.540741831Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-kube-controllers-55f74d585b-gkp22,Uid:bcb02bd3-79c5-4f97-892a-aafa3090dcbe,Namespace:calico-system,Attempt:2,}" Dec 13 13:34:16.541733 containerd[1460]: time="2024-12-13T13:34:16.541694602Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-apiserver-7d87f7746b-flf6n,Uid:5ba12381-f554-4e4f-8ceb-405dc070dc9a,Namespace:calico-apiserver,Attempt:2,}" Dec 13 13:34:16.543458 kubelet[2640]: I1213 13:34:16.542954 2640 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="03f3d3da32a6bad9575866ecc071f7db7e2ef21eb15c13b5d3ccc4981ea3a6b8" Dec 13 13:34:16.543598 containerd[1460]: time="2024-12-13T13:34:16.543570769Z" level=info msg="StopPodSandbox for \"03f3d3da32a6bad9575866ecc071f7db7e2ef21eb15c13b5d3ccc4981ea3a6b8\"" Dec 13 13:34:16.543809 containerd[1460]: time="2024-12-13T13:34:16.543774963Z" level=info msg="Ensure that sandbox 03f3d3da32a6bad9575866ecc071f7db7e2ef21eb15c13b5d3ccc4981ea3a6b8 in task-service has been cleanup successfully" Dec 13 13:34:16.544051 containerd[1460]: time="2024-12-13T13:34:16.544027809Z" level=info msg="TearDown network for sandbox \"03f3d3da32a6bad9575866ecc071f7db7e2ef21eb15c13b5d3ccc4981ea3a6b8\" successfully" Dec 13 13:34:16.544051 containerd[1460]: time="2024-12-13T13:34:16.544047115Z" level=info msg="StopPodSandbox for \"03f3d3da32a6bad9575866ecc071f7db7e2ef21eb15c13b5d3ccc4981ea3a6b8\" returns successfully" Dec 13 13:34:16.544326 containerd[1460]: 
time="2024-12-13T13:34:16.544284511Z" level=info msg="StopPodSandbox for \"3471811395b1ff1780abbbc92eeb9a683464d2b59eb4b36f7ca30acd080ae9c4\"" Dec 13 13:34:16.544404 systemd[1]: run-netns-cni\x2de70e6051\x2db850\x2dae01\x2d5bf7\x2db5ad05d3680b.mount: Deactivated successfully. Dec 13 13:34:16.544586 containerd[1460]: time="2024-12-13T13:34:16.544443801Z" level=info msg="TearDown network for sandbox \"3471811395b1ff1780abbbc92eeb9a683464d2b59eb4b36f7ca30acd080ae9c4\" successfully" Dec 13 13:34:16.544586 containerd[1460]: time="2024-12-13T13:34:16.544458729Z" level=info msg="StopPodSandbox for \"3471811395b1ff1780abbbc92eeb9a683464d2b59eb4b36f7ca30acd080ae9c4\" returns successfully" Dec 13 13:34:16.544825 systemd[1]: run-netns-cni\x2d0044926a\x2ddc22\x2d97dc\x2d2249\x2d9be45914f9a5.mount: Deactivated successfully. Dec 13 13:34:16.546105 containerd[1460]: time="2024-12-13T13:34:16.545114241Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-apiserver-7d87f7746b-mqntw,Uid:cadc3599-3084-49e0-99bc-626d4d423dd6,Namespace:calico-apiserver,Attempt:2,}" Dec 13 13:34:16.549352 systemd[1]: run-netns-cni\x2d6762deeb\x2d51c8\x2d54a8\x2d4b63\x2d2954b0bc84a1.mount: Deactivated successfully. Dec 13 13:34:16.667230 containerd[1460]: time="2024-12-13T13:34:16.667147277Z" level=error msg="Failed to destroy network for sandbox \"09abfaaccf32e858d49fc89ee268bc73ea6fb80d4d0a90443906199a1d27306d\"" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Dec 13 13:34:16.667723 containerd[1460]: time="2024-12-13T13:34:16.667676923Z" level=error msg="Failed to destroy network for sandbox \"e93b7769b9d88742aaf6059922f8c6c4a26e010f6ef3858da386188108ad8af8\"" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Dec 13 13:34:16.668006 containerd[1460]: time="2024-12-13T13:34:16.667980755Z" level=error msg="encountered an error cleaning up failed sandbox \"09abfaaccf32e858d49fc89ee268bc73ea6fb80d4d0a90443906199a1d27306d\", marking sandbox state as SANDBOX_UNKNOWN" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Dec 13 13:34:16.668229 containerd[1460]: time="2024-12-13T13:34:16.668208543Z" level=error msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-7db6d8ff4d-n488k,Uid:4efa0db1-4649-47cd-847b-b2cd3ddad9b5,Namespace:kube-system,Attempt:2,} failed, error" error="failed to setup network for sandbox \"09abfaaccf32e858d49fc89ee268bc73ea6fb80d4d0a90443906199a1d27306d\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Dec 13 13:34:16.668355 containerd[1460]: time="2024-12-13T13:34:16.668329270Z" level=error msg="encountered an error cleaning up failed sandbox \"e93b7769b9d88742aaf6059922f8c6c4a26e010f6ef3858da386188108ad8af8\", marking sandbox state as SANDBOX_UNKNOWN" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Dec 13 13:34:16.668404 containerd[1460]: time="2024-12-13T13:34:16.668392138Z" level=error msg="RunPodSandbox for 
&PodSandboxMetadata{Name:coredns-7db6d8ff4d-tdhb9,Uid:bd2088f6-f886-4664-af58-213044237f3c,Namespace:kube-system,Attempt:2,} failed, error" error="failed to setup network for sandbox \"e93b7769b9d88742aaf6059922f8c6c4a26e010f6ef3858da386188108ad8af8\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Dec 13 13:34:16.670727 kubelet[2640]: E1213 13:34:16.669458 2640 remote_runtime.go:193] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"e93b7769b9d88742aaf6059922f8c6c4a26e010f6ef3858da386188108ad8af8\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Dec 13 13:34:16.670727 kubelet[2640]: E1213 13:34:16.669483 2640 remote_runtime.go:193] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"09abfaaccf32e858d49fc89ee268bc73ea6fb80d4d0a90443906199a1d27306d\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Dec 13 13:34:16.670727 kubelet[2640]: E1213 13:34:16.669518 2640 kuberuntime_sandbox.go:72] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"e93b7769b9d88742aaf6059922f8c6c4a26e010f6ef3858da386188108ad8af8\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="kube-system/coredns-7db6d8ff4d-tdhb9" Dec 13 13:34:16.670727 kubelet[2640]: E1213 13:34:16.669537 2640 kuberuntime_manager.go:1166] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"e93b7769b9d88742aaf6059922f8c6c4a26e010f6ef3858da386188108ad8af8\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="kube-system/coredns-7db6d8ff4d-tdhb9" Dec 13 13:34:16.670896 kubelet[2640]: E1213 13:34:16.669545 2640 kuberuntime_sandbox.go:72] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"09abfaaccf32e858d49fc89ee268bc73ea6fb80d4d0a90443906199a1d27306d\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="kube-system/coredns-7db6d8ff4d-n488k" Dec 13 13:34:16.670896 kubelet[2640]: E1213 13:34:16.669575 2640 kuberuntime_manager.go:1166] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"09abfaaccf32e858d49fc89ee268bc73ea6fb80d4d0a90443906199a1d27306d\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="kube-system/coredns-7db6d8ff4d-n488k" Dec 13 13:34:16.670896 kubelet[2640]: E1213 13:34:16.669579 2640 pod_workers.go:1298] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"coredns-7db6d8ff4d-tdhb9_kube-system(bd2088f6-f886-4664-af58-213044237f3c)\" with 
CreatePodSandboxError: \"Failed to create sandbox for pod \\\"coredns-7db6d8ff4d-tdhb9_kube-system(bd2088f6-f886-4664-af58-213044237f3c)\\\": rpc error: code = Unknown desc = failed to setup network for sandbox \\\"e93b7769b9d88742aaf6059922f8c6c4a26e010f6ef3858da386188108ad8af8\\\": plugin type=\\\"calico\\\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="kube-system/coredns-7db6d8ff4d-tdhb9" podUID="bd2088f6-f886-4664-af58-213044237f3c" Dec 13 13:34:16.671009 kubelet[2640]: E1213 13:34:16.669632 2640 pod_workers.go:1298] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"coredns-7db6d8ff4d-n488k_kube-system(4efa0db1-4649-47cd-847b-b2cd3ddad9b5)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"coredns-7db6d8ff4d-n488k_kube-system(4efa0db1-4649-47cd-847b-b2cd3ddad9b5)\\\": rpc error: code = Unknown desc = failed to setup network for sandbox \\\"09abfaaccf32e858d49fc89ee268bc73ea6fb80d4d0a90443906199a1d27306d\\\": plugin type=\\\"calico\\\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="kube-system/coredns-7db6d8ff4d-n488k" podUID="4efa0db1-4649-47cd-847b-b2cd3ddad9b5" Dec 13 13:34:16.679073 containerd[1460]: time="2024-12-13T13:34:16.679028706Z" level=error msg="Failed to destroy network for sandbox \"8c8a323be73a25f027a42aa31bd62c7c50a5548b9c0f738623c6c902439d54ca\"" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Dec 13 13:34:16.679667 containerd[1460]: time="2024-12-13T13:34:16.679576505Z" level=error msg="encountered an error cleaning up failed sandbox \"8c8a323be73a25f027a42aa31bd62c7c50a5548b9c0f738623c6c902439d54ca\", marking sandbox state as SANDBOX_UNKNOWN" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Dec 13 13:34:16.679667 containerd[1460]: time="2024-12-13T13:34:16.679627511Z" level=error msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-apiserver-7d87f7746b-flf6n,Uid:5ba12381-f554-4e4f-8ceb-405dc070dc9a,Namespace:calico-apiserver,Attempt:2,} failed, error" error="failed to setup network for sandbox \"8c8a323be73a25f027a42aa31bd62c7c50a5548b9c0f738623c6c902439d54ca\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Dec 13 13:34:16.680300 kubelet[2640]: E1213 13:34:16.679958 2640 remote_runtime.go:193] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"8c8a323be73a25f027a42aa31bd62c7c50a5548b9c0f738623c6c902439d54ca\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Dec 13 13:34:16.680300 kubelet[2640]: E1213 13:34:16.680014 2640 kuberuntime_sandbox.go:72] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"8c8a323be73a25f027a42aa31bd62c7c50a5548b9c0f738623c6c902439d54ca\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or 
directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-apiserver/calico-apiserver-7d87f7746b-flf6n" Dec 13 13:34:16.680300 kubelet[2640]: E1213 13:34:16.680033 2640 kuberuntime_manager.go:1166] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"8c8a323be73a25f027a42aa31bd62c7c50a5548b9c0f738623c6c902439d54ca\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-apiserver/calico-apiserver-7d87f7746b-flf6n" Dec 13 13:34:16.680434 kubelet[2640]: E1213 13:34:16.680081 2640 pod_workers.go:1298] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"calico-apiserver-7d87f7746b-flf6n_calico-apiserver(5ba12381-f554-4e4f-8ceb-405dc070dc9a)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"calico-apiserver-7d87f7746b-flf6n_calico-apiserver(5ba12381-f554-4e4f-8ceb-405dc070dc9a)\\\": rpc error: code = Unknown desc = failed to setup network for sandbox \\\"8c8a323be73a25f027a42aa31bd62c7c50a5548b9c0f738623c6c902439d54ca\\\": plugin type=\\\"calico\\\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="calico-apiserver/calico-apiserver-7d87f7746b-flf6n" podUID="5ba12381-f554-4e4f-8ceb-405dc070dc9a" Dec 13 13:34:16.685880 containerd[1460]: time="2024-12-13T13:34:16.685843377Z" level=error msg="Failed to destroy network for sandbox \"41522410dfe8d457521fb9fe296f7b8dae420fe4186ddf5966e8b8e3c6ce7668\"" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Dec 13 13:34:16.686516 containerd[1460]: time="2024-12-13T13:34:16.686474033Z" level=error msg="encountered an error cleaning up failed sandbox \"41522410dfe8d457521fb9fe296f7b8dae420fe4186ddf5966e8b8e3c6ce7668\", marking sandbox state as SANDBOX_UNKNOWN" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Dec 13 13:34:16.686516 containerd[1460]: time="2024-12-13T13:34:16.686531100Z" level=error msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-kube-controllers-55f74d585b-gkp22,Uid:bcb02bd3-79c5-4f97-892a-aafa3090dcbe,Namespace:calico-system,Attempt:2,} failed, error" error="failed to setup network for sandbox \"41522410dfe8d457521fb9fe296f7b8dae420fe4186ddf5966e8b8e3c6ce7668\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Dec 13 13:34:16.686829 kubelet[2640]: E1213 13:34:16.686770 2640 remote_runtime.go:193] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"41522410dfe8d457521fb9fe296f7b8dae420fe4186ddf5966e8b8e3c6ce7668\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Dec 13 13:34:16.686891 kubelet[2640]: E1213 13:34:16.686864 2640 kuberuntime_sandbox.go:72] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to setup network for sandbox 
\"41522410dfe8d457521fb9fe296f7b8dae420fe4186ddf5966e8b8e3c6ce7668\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-system/calico-kube-controllers-55f74d585b-gkp22" Dec 13 13:34:16.686891 kubelet[2640]: E1213 13:34:16.686883 2640 kuberuntime_manager.go:1166] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"41522410dfe8d457521fb9fe296f7b8dae420fe4186ddf5966e8b8e3c6ce7668\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-system/calico-kube-controllers-55f74d585b-gkp22" Dec 13 13:34:16.686948 kubelet[2640]: E1213 13:34:16.686926 2640 pod_workers.go:1298] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"calico-kube-controllers-55f74d585b-gkp22_calico-system(bcb02bd3-79c5-4f97-892a-aafa3090dcbe)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"calico-kube-controllers-55f74d585b-gkp22_calico-system(bcb02bd3-79c5-4f97-892a-aafa3090dcbe)\\\": rpc error: code = Unknown desc = failed to setup network for sandbox \\\"41522410dfe8d457521fb9fe296f7b8dae420fe4186ddf5966e8b8e3c6ce7668\\\": plugin type=\\\"calico\\\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="calico-system/calico-kube-controllers-55f74d585b-gkp22" podUID="bcb02bd3-79c5-4f97-892a-aafa3090dcbe" Dec 13 13:34:16.687383 containerd[1460]: time="2024-12-13T13:34:16.687327037Z" level=error msg="Failed to destroy network for sandbox \"3096d2cf061035900198ce891b4e5cbd737796b92099b44cfd3fb9172cdbac54\"" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Dec 13 13:34:16.687723 containerd[1460]: time="2024-12-13T13:34:16.687690339Z" level=error msg="encountered an error cleaning up failed sandbox \"3096d2cf061035900198ce891b4e5cbd737796b92099b44cfd3fb9172cdbac54\", marking sandbox state as SANDBOX_UNKNOWN" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Dec 13 13:34:16.687761 containerd[1460]: time="2024-12-13T13:34:16.687748017Z" level=error msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-apiserver-7d87f7746b-mqntw,Uid:cadc3599-3084-49e0-99bc-626d4d423dd6,Namespace:calico-apiserver,Attempt:2,} failed, error" error="failed to setup network for sandbox \"3096d2cf061035900198ce891b4e5cbd737796b92099b44cfd3fb9172cdbac54\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Dec 13 13:34:16.687957 kubelet[2640]: E1213 13:34:16.687913 2640 remote_runtime.go:193] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"3096d2cf061035900198ce891b4e5cbd737796b92099b44cfd3fb9172cdbac54\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Dec 13 13:34:16.688023 kubelet[2640]: 
E1213 13:34:16.687979 2640 kuberuntime_sandbox.go:72] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"3096d2cf061035900198ce891b4e5cbd737796b92099b44cfd3fb9172cdbac54\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-apiserver/calico-apiserver-7d87f7746b-mqntw" Dec 13 13:34:16.688023 kubelet[2640]: E1213 13:34:16.688000 2640 kuberuntime_manager.go:1166] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"3096d2cf061035900198ce891b4e5cbd737796b92099b44cfd3fb9172cdbac54\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-apiserver/calico-apiserver-7d87f7746b-mqntw" Dec 13 13:34:16.688076 kubelet[2640]: E1213 13:34:16.688039 2640 pod_workers.go:1298] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"calico-apiserver-7d87f7746b-mqntw_calico-apiserver(cadc3599-3084-49e0-99bc-626d4d423dd6)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"calico-apiserver-7d87f7746b-mqntw_calico-apiserver(cadc3599-3084-49e0-99bc-626d4d423dd6)\\\": rpc error: code = Unknown desc = failed to setup network for sandbox \\\"3096d2cf061035900198ce891b4e5cbd737796b92099b44cfd3fb9172cdbac54\\\": plugin type=\\\"calico\\\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="calico-apiserver/calico-apiserver-7d87f7746b-mqntw" podUID="cadc3599-3084-49e0-99bc-626d4d423dd6" Dec 13 13:34:16.901381 systemd[1]: Created slice kubepods-besteffort-pod70af0792_807b_45ba_8d22_96d81d38b5e7.slice - libcontainer container kubepods-besteffort-pod70af0792_807b_45ba_8d22_96d81d38b5e7.slice. 
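
Note the Attempt counter in each RunPodSandbox call climbing (Attempt:1, then Attempt:2, with Attempt:3 below): after every "Error syncing pod, skipping", kubelet re-queues the pod and tries the sandbox again rather than giving up. The loop below is a deliberately compact, illustrative model of that retry-with-backoff pattern, with made-up delay values; kubelet's real pod workers are more involved.

package main

import (
	"errors"
	"fmt"
	"time"
)

// runPodSandbox stands in for the CRI RunPodSandbox call, which in this log
// keeps returning the same CNI error until calico/node comes up.
func runPodSandbox() error {
	return errors.New(`plugin type="calico" failed (add): stat /var/lib/calico/nodename: no such file or directory`)
}

func main() {
	delay := 200 * time.Millisecond // illustrative starting backoff, not kubelet's tuning
	for attempt := 1; attempt <= 4; attempt++ {
		err := runPodSandbox()
		if err == nil {
			fmt.Printf("Attempt:%d succeeded\n", attempt)
			return
		}
		fmt.Printf("Attempt:%d failed: %v; retrying in %s\n", attempt, err, delay)
		time.Sleep(delay)
		delay *= 2 // back off so a broken CNI doesn't get hammered
	}
	fmt.Println("still failing; kubelet itself would keep retrying indefinitely")
}
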
Dec 13 13:34:16.904778 containerd[1460]: time="2024-12-13T13:34:16.904678856Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:csi-node-driver-h5jzv,Uid:70af0792-807b-45ba-8d22-96d81d38b5e7,Namespace:calico-system,Attempt:0,}" Dec 13 13:34:17.032584 containerd[1460]: time="2024-12-13T13:34:17.032524341Z" level=error msg="Failed to destroy network for sandbox \"4ef0d56a397cd72f2356f51c144189ca5302b06a02860d08835bc16ddf93f006\"" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Dec 13 13:34:17.035545 containerd[1460]: time="2024-12-13T13:34:17.035362547Z" level=error msg="encountered an error cleaning up failed sandbox \"4ef0d56a397cd72f2356f51c144189ca5302b06a02860d08835bc16ddf93f006\", marking sandbox state as SANDBOX_UNKNOWN" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Dec 13 13:34:17.035545 containerd[1460]: time="2024-12-13T13:34:17.035421628Z" level=error msg="RunPodSandbox for &PodSandboxMetadata{Name:csi-node-driver-h5jzv,Uid:70af0792-807b-45ba-8d22-96d81d38b5e7,Namespace:calico-system,Attempt:0,} failed, error" error="failed to setup network for sandbox \"4ef0d56a397cd72f2356f51c144189ca5302b06a02860d08835bc16ddf93f006\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Dec 13 13:34:17.035877 systemd[1]: run-containerd-io.containerd.grpc.v1.cri-sandboxes-4ef0d56a397cd72f2356f51c144189ca5302b06a02860d08835bc16ddf93f006-shm.mount: Deactivated successfully. 
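
Each failed sandbox still briefly allocates host resources: a network namespace mounted at a path like /run/netns/cni-<uuid> and a per-sandbox /dev/shm under containerd's state directory. The "run-netns-cni...mount: Deactivated successfully" and "...-shm.mount: Deactivated successfully" lines are systemd confirming that containerd's cleanup unmounted them again. A quick way to verify nothing is leaking is to list what remains; the Go sketch below does that (the /run/netns path is an assumption inferred from the mount-unit names here; some hosts expose it as /var/run/netns).

package main

import (
	"fmt"
	"os"
	"strings"
)

func main() {
	// CNI plugins keep pod network namespaces bind-mounted under /run/netns
	// with a "cni-" prefix, matching the run-netns-cni...mount units above.
	entries, err := os.ReadDir("/run/netns")
	if err != nil {
		fmt.Fprintln(os.Stderr, "cannot read /run/netns:", err)
		os.Exit(1)
	}
	count := 0
	for _, e := range entries {
		if strings.HasPrefix(e.Name(), "cni-") {
			count++
			fmt.Println("leftover CNI netns:", e.Name())
		}
	}
	fmt.Printf("%d cni- namespaces currently mounted\n", count)
}

In this log the teardown is succeeding even though setup fails, so the count should stay near zero between pod churn; a steadily growing list would indicate the delete path is leaking namespaces as well.
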
Dec 13 13:34:17.037279 kubelet[2640]: E1213 13:34:17.037238 2640 remote_runtime.go:193] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"4ef0d56a397cd72f2356f51c144189ca5302b06a02860d08835bc16ddf93f006\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Dec 13 13:34:17.037537 kubelet[2640]: E1213 13:34:17.037418 2640 kuberuntime_sandbox.go:72] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"4ef0d56a397cd72f2356f51c144189ca5302b06a02860d08835bc16ddf93f006\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-system/csi-node-driver-h5jzv" Dec 13 13:34:17.037537 kubelet[2640]: E1213 13:34:17.037442 2640 kuberuntime_manager.go:1166] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"4ef0d56a397cd72f2356f51c144189ca5302b06a02860d08835bc16ddf93f006\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-system/csi-node-driver-h5jzv" Dec 13 13:34:17.037537 kubelet[2640]: E1213 13:34:17.037505 2640 pod_workers.go:1298] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"csi-node-driver-h5jzv_calico-system(70af0792-807b-45ba-8d22-96d81d38b5e7)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"csi-node-driver-h5jzv_calico-system(70af0792-807b-45ba-8d22-96d81d38b5e7)\\\": rpc error: code = Unknown desc = failed to setup network for sandbox \\\"4ef0d56a397cd72f2356f51c144189ca5302b06a02860d08835bc16ddf93f006\\\": plugin type=\\\"calico\\\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="calico-system/csi-node-driver-h5jzv" podUID="70af0792-807b-45ba-8d22-96d81d38b5e7" Dec 13 13:34:17.545850 kubelet[2640]: I1213 13:34:17.545811 2640 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="4ef0d56a397cd72f2356f51c144189ca5302b06a02860d08835bc16ddf93f006" Dec 13 13:34:17.546504 containerd[1460]: time="2024-12-13T13:34:17.546334956Z" level=info msg="StopPodSandbox for \"4ef0d56a397cd72f2356f51c144189ca5302b06a02860d08835bc16ddf93f006\"" Dec 13 13:34:17.546862 containerd[1460]: time="2024-12-13T13:34:17.546539661Z" level=info msg="Ensure that sandbox 4ef0d56a397cd72f2356f51c144189ca5302b06a02860d08835bc16ddf93f006 in task-service has been cleanup successfully" Dec 13 13:34:17.546862 containerd[1460]: time="2024-12-13T13:34:17.546836669Z" level=info msg="TearDown network for sandbox \"4ef0d56a397cd72f2356f51c144189ca5302b06a02860d08835bc16ddf93f006\" successfully" Dec 13 13:34:17.546862 containerd[1460]: time="2024-12-13T13:34:17.546849103Z" level=info msg="StopPodSandbox for \"4ef0d56a397cd72f2356f51c144189ca5302b06a02860d08835bc16ddf93f006\" returns successfully" Dec 13 13:34:17.547702 containerd[1460]: time="2024-12-13T13:34:17.547387174Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:csi-node-driver-h5jzv,Uid:70af0792-807b-45ba-8d22-96d81d38b5e7,Namespace:calico-system,Attempt:1,}" Dec 13 13:34:17.547993 kubelet[2640]: I1213 
13:34:17.547971 2640 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="8c8a323be73a25f027a42aa31bd62c7c50a5548b9c0f738623c6c902439d54ca" Dec 13 13:34:17.549204 containerd[1460]: time="2024-12-13T13:34:17.548735468Z" level=info msg="StopPodSandbox for \"8c8a323be73a25f027a42aa31bd62c7c50a5548b9c0f738623c6c902439d54ca\"" Dec 13 13:34:17.549204 containerd[1460]: time="2024-12-13T13:34:17.549059828Z" level=info msg="Ensure that sandbox 8c8a323be73a25f027a42aa31bd62c7c50a5548b9c0f738623c6c902439d54ca in task-service has been cleanup successfully" Dec 13 13:34:17.549497 containerd[1460]: time="2024-12-13T13:34:17.549479356Z" level=info msg="TearDown network for sandbox \"8c8a323be73a25f027a42aa31bd62c7c50a5548b9c0f738623c6c902439d54ca\" successfully" Dec 13 13:34:17.549535 systemd[1]: run-netns-cni\x2d15e8a90c\x2d9993\x2de866\x2d4f01\x2d89f23dcf1a6b.mount: Deactivated successfully. Dec 13 13:34:17.550157 containerd[1460]: time="2024-12-13T13:34:17.549656870Z" level=info msg="StopPodSandbox for \"8c8a323be73a25f027a42aa31bd62c7c50a5548b9c0f738623c6c902439d54ca\" returns successfully" Dec 13 13:34:17.550157 containerd[1460]: time="2024-12-13T13:34:17.549999665Z" level=info msg="StopPodSandbox for \"5830ddd4f895c70171dbfc66a59269f13cec0912a16a8f19a7fc422dce5b8861\"" Dec 13 13:34:17.550157 containerd[1460]: time="2024-12-13T13:34:17.550077111Z" level=info msg="TearDown network for sandbox \"5830ddd4f895c70171dbfc66a59269f13cec0912a16a8f19a7fc422dce5b8861\" successfully" Dec 13 13:34:17.550157 containerd[1460]: time="2024-12-13T13:34:17.550086087Z" level=info msg="StopPodSandbox for \"5830ddd4f895c70171dbfc66a59269f13cec0912a16a8f19a7fc422dce5b8861\" returns successfully" Dec 13 13:34:17.551095 kubelet[2640]: I1213 13:34:17.550428 2640 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="3096d2cf061035900198ce891b4e5cbd737796b92099b44cfd3fb9172cdbac54" Dec 13 13:34:17.551165 containerd[1460]: time="2024-12-13T13:34:17.550589604Z" level=info msg="StopPodSandbox for \"d1a332cec12278e8a2848eda31dc70e9822bb2e5c607ef1e71dfc9da622ecebb\"" Dec 13 13:34:17.551165 containerd[1460]: time="2024-12-13T13:34:17.550668101Z" level=info msg="TearDown network for sandbox \"d1a332cec12278e8a2848eda31dc70e9822bb2e5c607ef1e71dfc9da622ecebb\" successfully" Dec 13 13:34:17.551165 containerd[1460]: time="2024-12-13T13:34:17.550680204Z" level=info msg="StopPodSandbox for \"d1a332cec12278e8a2848eda31dc70e9822bb2e5c607ef1e71dfc9da622ecebb\" returns successfully" Dec 13 13:34:17.551165 containerd[1460]: time="2024-12-13T13:34:17.550846547Z" level=info msg="StopPodSandbox for \"3096d2cf061035900198ce891b4e5cbd737796b92099b44cfd3fb9172cdbac54\"" Dec 13 13:34:17.551165 containerd[1460]: time="2024-12-13T13:34:17.550980689Z" level=info msg="Ensure that sandbox 3096d2cf061035900198ce891b4e5cbd737796b92099b44cfd3fb9172cdbac54 in task-service has been cleanup successfully" Dec 13 13:34:17.551375 containerd[1460]: time="2024-12-13T13:34:17.551357357Z" level=info msg="TearDown network for sandbox \"3096d2cf061035900198ce891b4e5cbd737796b92099b44cfd3fb9172cdbac54\" successfully" Dec 13 13:34:17.551463 containerd[1460]: time="2024-12-13T13:34:17.551419864Z" level=info msg="StopPodSandbox for \"3096d2cf061035900198ce891b4e5cbd737796b92099b44cfd3fb9172cdbac54\" returns successfully" Dec 13 13:34:17.551850 containerd[1460]: time="2024-12-13T13:34:17.551488734Z" level=info msg="RunPodSandbox for 
&PodSandboxMetadata{Name:calico-apiserver-7d87f7746b-flf6n,Uid:5ba12381-f554-4e4f-8ceb-405dc070dc9a,Namespace:calico-apiserver,Attempt:3,}" Dec 13 13:34:17.551850 containerd[1460]: time="2024-12-13T13:34:17.551690693Z" level=info msg="StopPodSandbox for \"03f3d3da32a6bad9575866ecc071f7db7e2ef21eb15c13b5d3ccc4981ea3a6b8\"" Dec 13 13:34:17.551850 containerd[1460]: time="2024-12-13T13:34:17.551779080Z" level=info msg="TearDown network for sandbox \"03f3d3da32a6bad9575866ecc071f7db7e2ef21eb15c13b5d3ccc4981ea3a6b8\" successfully" Dec 13 13:34:17.551850 containerd[1460]: time="2024-12-13T13:34:17.551793076Z" level=info msg="StopPodSandbox for \"03f3d3da32a6bad9575866ecc071f7db7e2ef21eb15c13b5d3ccc4981ea3a6b8\" returns successfully" Dec 13 13:34:17.552674 containerd[1460]: time="2024-12-13T13:34:17.552645488Z" level=info msg="StopPodSandbox for \"3471811395b1ff1780abbbc92eeb9a683464d2b59eb4b36f7ca30acd080ae9c4\"" Dec 13 13:34:17.552776 containerd[1460]: time="2024-12-13T13:34:17.552721983Z" level=info msg="TearDown network for sandbox \"3471811395b1ff1780abbbc92eeb9a683464d2b59eb4b36f7ca30acd080ae9c4\" successfully" Dec 13 13:34:17.552776 containerd[1460]: time="2024-12-13T13:34:17.552733855Z" level=info msg="StopPodSandbox for \"3471811395b1ff1780abbbc92eeb9a683464d2b59eb4b36f7ca30acd080ae9c4\" returns successfully" Dec 13 13:34:17.553834 systemd[1]: run-netns-cni\x2dae815c5e\x2d0c38\x2d5e4e\x2ddaea\x2ddc8098216454.mount: Deactivated successfully. Dec 13 13:34:17.554594 containerd[1460]: time="2024-12-13T13:34:17.554067121Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-apiserver-7d87f7746b-mqntw,Uid:cadc3599-3084-49e0-99bc-626d4d423dd6,Namespace:calico-apiserver,Attempt:3,}" Dec 13 13:34:17.554670 kubelet[2640]: I1213 13:34:17.554417 2640 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="e93b7769b9d88742aaf6059922f8c6c4a26e010f6ef3858da386188108ad8af8" Dec 13 13:34:17.555812 containerd[1460]: time="2024-12-13T13:34:17.554900166Z" level=info msg="StopPodSandbox for \"e93b7769b9d88742aaf6059922f8c6c4a26e010f6ef3858da386188108ad8af8\"" Dec 13 13:34:17.555812 containerd[1460]: time="2024-12-13T13:34:17.555169684Z" level=info msg="Ensure that sandbox e93b7769b9d88742aaf6059922f8c6c4a26e010f6ef3858da386188108ad8af8 in task-service has been cleanup successfully" Dec 13 13:34:17.555812 containerd[1460]: time="2024-12-13T13:34:17.555494053Z" level=info msg="TearDown network for sandbox \"e93b7769b9d88742aaf6059922f8c6c4a26e010f6ef3858da386188108ad8af8\" successfully" Dec 13 13:34:17.555812 containerd[1460]: time="2024-12-13T13:34:17.555511095Z" level=info msg="StopPodSandbox for \"e93b7769b9d88742aaf6059922f8c6c4a26e010f6ef3858da386188108ad8af8\" returns successfully" Dec 13 13:34:17.555812 containerd[1460]: time="2024-12-13T13:34:17.555800950Z" level=info msg="StopPodSandbox for \"8e04eb66310a3917ffdcbb002bcd66f739fb7b5c6e195d72921f7f71735c6ad9\"" Dec 13 13:34:17.556010 containerd[1460]: time="2024-12-13T13:34:17.555904745Z" level=info msg="TearDown network for sandbox \"8e04eb66310a3917ffdcbb002bcd66f739fb7b5c6e195d72921f7f71735c6ad9\" successfully" Dec 13 13:34:17.556010 containerd[1460]: time="2024-12-13T13:34:17.555915636Z" level=info msg="StopPodSandbox for \"8e04eb66310a3917ffdcbb002bcd66f739fb7b5c6e195d72921f7f71735c6ad9\" returns successfully" Dec 13 13:34:17.556208 containerd[1460]: time="2024-12-13T13:34:17.556178991Z" level=info msg="StopPodSandbox for \"21489d44f97457445adcc907a914ad67e96a4016825cb60915c67d35d31dba11\"" Dec 13 
13:34:17.556447 kubelet[2640]: I1213 13:34:17.556427 2640 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="09abfaaccf32e858d49fc89ee268bc73ea6fb80d4d0a90443906199a1d27306d" Dec 13 13:34:17.556951 containerd[1460]: time="2024-12-13T13:34:17.556277165Z" level=info msg="TearDown network for sandbox \"21489d44f97457445adcc907a914ad67e96a4016825cb60915c67d35d31dba11\" successfully" Dec 13 13:34:17.556951 containerd[1460]: time="2024-12-13T13:34:17.556904194Z" level=info msg="StopPodSandbox for \"21489d44f97457445adcc907a914ad67e96a4016825cb60915c67d35d31dba11\" returns successfully" Dec 13 13:34:17.556951 containerd[1460]: time="2024-12-13T13:34:17.556808253Z" level=info msg="StopPodSandbox for \"09abfaaccf32e858d49fc89ee268bc73ea6fb80d4d0a90443906199a1d27306d\"" Dec 13 13:34:17.557175 containerd[1460]: time="2024-12-13T13:34:17.557057141Z" level=info msg="Ensure that sandbox 09abfaaccf32e858d49fc89ee268bc73ea6fb80d4d0a90443906199a1d27306d in task-service has been cleanup successfully" Dec 13 13:34:17.557195 systemd[1]: run-netns-cni\x2db2f330e4\x2d4bc7\x2d6652\x2df861\x2d2b2578d0d57c.mount: Deactivated successfully. Dec 13 13:34:17.557289 systemd[1]: run-netns-cni\x2d3ed2169c\x2daa98\x2dad0d\x2d67a8\x2dd429ff8421e8.mount: Deactivated successfully. Dec 13 13:34:17.557562 kubelet[2640]: E1213 13:34:17.557287 2640 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Dec 13 13:34:17.557747 containerd[1460]: time="2024-12-13T13:34:17.557301310Z" level=info msg="TearDown network for sandbox \"09abfaaccf32e858d49fc89ee268bc73ea6fb80d4d0a90443906199a1d27306d\" successfully" Dec 13 13:34:17.557813 containerd[1460]: time="2024-12-13T13:34:17.557749002Z" level=info msg="StopPodSandbox for \"09abfaaccf32e858d49fc89ee268bc73ea6fb80d4d0a90443906199a1d27306d\" returns successfully" Dec 13 13:34:17.557840 containerd[1460]: time="2024-12-13T13:34:17.557616983Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-7db6d8ff4d-tdhb9,Uid:bd2088f6-f886-4664-af58-213044237f3c,Namespace:kube-system,Attempt:3,}" Dec 13 13:34:17.558135 containerd[1460]: time="2024-12-13T13:34:17.558112064Z" level=info msg="StopPodSandbox for \"706c13db8235334bc38488f0490b1867ec134804b2b05c3d93c69a593fd16e13\"" Dec 13 13:34:17.558221 containerd[1460]: time="2024-12-13T13:34:17.558185513Z" level=info msg="TearDown network for sandbox \"706c13db8235334bc38488f0490b1867ec134804b2b05c3d93c69a593fd16e13\" successfully" Dec 13 13:34:17.558221 containerd[1460]: time="2024-12-13T13:34:17.558196062Z" level=info msg="StopPodSandbox for \"706c13db8235334bc38488f0490b1867ec134804b2b05c3d93c69a593fd16e13\" returns successfully" Dec 13 13:34:17.558552 containerd[1460]: time="2024-12-13T13:34:17.558528597Z" level=info msg="StopPodSandbox for \"558af8a45bc01399591427b598962160a612f4630f01ffcfbcb0b3a90050035e\"" Dec 13 13:34:17.558694 kubelet[2640]: I1213 13:34:17.558677 2640 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="41522410dfe8d457521fb9fe296f7b8dae420fe4186ddf5966e8b8e3c6ce7668" Dec 13 13:34:17.558746 containerd[1460]: time="2024-12-13T13:34:17.558708385Z" level=info msg="TearDown network for sandbox \"558af8a45bc01399591427b598962160a612f4630f01ffcfbcb0b3a90050035e\" successfully" Dec 13 13:34:17.558746 containerd[1460]: time="2024-12-13T13:34:17.558718645Z" level=info msg="StopPodSandbox for 
\"558af8a45bc01399591427b598962160a612f4630f01ffcfbcb0b3a90050035e\" returns successfully" Dec 13 13:34:17.558945 kubelet[2640]: E1213 13:34:17.558925 2640 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Dec 13 13:34:17.559167 containerd[1460]: time="2024-12-13T13:34:17.559137683Z" level=info msg="StopPodSandbox for \"41522410dfe8d457521fb9fe296f7b8dae420fe4186ddf5966e8b8e3c6ce7668\"" Dec 13 13:34:17.559292 containerd[1460]: time="2024-12-13T13:34:17.559154334Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-7db6d8ff4d-n488k,Uid:4efa0db1-4649-47cd-847b-b2cd3ddad9b5,Namespace:kube-system,Attempt:3,}" Dec 13 13:34:17.559408 containerd[1460]: time="2024-12-13T13:34:17.559387312Z" level=info msg="Ensure that sandbox 41522410dfe8d457521fb9fe296f7b8dae420fe4186ddf5966e8b8e3c6ce7668 in task-service has been cleanup successfully" Dec 13 13:34:17.559583 containerd[1460]: time="2024-12-13T13:34:17.559563602Z" level=info msg="TearDown network for sandbox \"41522410dfe8d457521fb9fe296f7b8dae420fe4186ddf5966e8b8e3c6ce7668\" successfully" Dec 13 13:34:17.559611 containerd[1460]: time="2024-12-13T13:34:17.559581536Z" level=info msg="StopPodSandbox for \"41522410dfe8d457521fb9fe296f7b8dae420fe4186ddf5966e8b8e3c6ce7668\" returns successfully" Dec 13 13:34:17.559861 containerd[1460]: time="2024-12-13T13:34:17.559840052Z" level=info msg="StopPodSandbox for \"043d5f360472a5cf84f2e7fe5d34c855f1c1466ecd61a4548840ff0da56c473d\"" Dec 13 13:34:17.559933 containerd[1460]: time="2024-12-13T13:34:17.559916186Z" level=info msg="TearDown network for sandbox \"043d5f360472a5cf84f2e7fe5d34c855f1c1466ecd61a4548840ff0da56c473d\" successfully" Dec 13 13:34:17.559933 containerd[1460]: time="2024-12-13T13:34:17.559930963Z" level=info msg="StopPodSandbox for \"043d5f360472a5cf84f2e7fe5d34c855f1c1466ecd61a4548840ff0da56c473d\" returns successfully" Dec 13 13:34:17.560368 containerd[1460]: time="2024-12-13T13:34:17.560194649Z" level=info msg="StopPodSandbox for \"c62e1d6a49358b1088df68486d06de5d565abc5cb224cecb08a72fc49527b818\"" Dec 13 13:34:17.560368 containerd[1460]: time="2024-12-13T13:34:17.560271293Z" level=info msg="TearDown network for sandbox \"c62e1d6a49358b1088df68486d06de5d565abc5cb224cecb08a72fc49527b818\" successfully" Dec 13 13:34:17.560368 containerd[1460]: time="2024-12-13T13:34:17.560279949Z" level=info msg="StopPodSandbox for \"c62e1d6a49358b1088df68486d06de5d565abc5cb224cecb08a72fc49527b818\" returns successfully" Dec 13 13:34:17.560698 containerd[1460]: time="2024-12-13T13:34:17.560679861Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-kube-controllers-55f74d585b-gkp22,Uid:bcb02bd3-79c5-4f97-892a-aafa3090dcbe,Namespace:calico-system,Attempt:3,}" Dec 13 13:34:17.924575 containerd[1460]: time="2024-12-13T13:34:17.924382548Z" level=error msg="Failed to destroy network for sandbox \"fd4a3ec4455a4558ba5951ceb7f7c3f3eeeab9ca964d38b261406ee89bda46a4\"" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Dec 13 13:34:17.925638 containerd[1460]: time="2024-12-13T13:34:17.925595207Z" level=error msg="encountered an error cleaning up failed sandbox \"fd4a3ec4455a4558ba5951ceb7f7c3f3eeeab9ca964d38b261406ee89bda46a4\", marking sandbox state as SANDBOX_UNKNOWN" error="plugin type=\"calico\" failed (delete): stat 
/var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Dec 13 13:34:17.925827 containerd[1460]: time="2024-12-13T13:34:17.925652675Z" level=error msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-apiserver-7d87f7746b-mqntw,Uid:cadc3599-3084-49e0-99bc-626d4d423dd6,Namespace:calico-apiserver,Attempt:3,} failed, error" error="failed to setup network for sandbox \"fd4a3ec4455a4558ba5951ceb7f7c3f3eeeab9ca964d38b261406ee89bda46a4\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Dec 13 13:34:17.925934 kubelet[2640]: E1213 13:34:17.925832 2640 remote_runtime.go:193] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"fd4a3ec4455a4558ba5951ceb7f7c3f3eeeab9ca964d38b261406ee89bda46a4\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Dec 13 13:34:17.925934 kubelet[2640]: E1213 13:34:17.925903 2640 kuberuntime_sandbox.go:72] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"fd4a3ec4455a4558ba5951ceb7f7c3f3eeeab9ca964d38b261406ee89bda46a4\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-apiserver/calico-apiserver-7d87f7746b-mqntw" Dec 13 13:34:17.925934 kubelet[2640]: E1213 13:34:17.925925 2640 kuberuntime_manager.go:1166] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"fd4a3ec4455a4558ba5951ceb7f7c3f3eeeab9ca964d38b261406ee89bda46a4\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-apiserver/calico-apiserver-7d87f7746b-mqntw" Dec 13 13:34:17.926079 kubelet[2640]: E1213 13:34:17.925959 2640 pod_workers.go:1298] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"calico-apiserver-7d87f7746b-mqntw_calico-apiserver(cadc3599-3084-49e0-99bc-626d4d423dd6)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"calico-apiserver-7d87f7746b-mqntw_calico-apiserver(cadc3599-3084-49e0-99bc-626d4d423dd6)\\\": rpc error: code = Unknown desc = failed to setup network for sandbox \\\"fd4a3ec4455a4558ba5951ceb7f7c3f3eeeab9ca964d38b261406ee89bda46a4\\\": plugin type=\\\"calico\\\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="calico-apiserver/calico-apiserver-7d87f7746b-mqntw" podUID="cadc3599-3084-49e0-99bc-626d4d423dd6" Dec 13 13:34:17.942954 containerd[1460]: time="2024-12-13T13:34:17.942916828Z" level=error msg="Failed to destroy network for sandbox \"a3d938a9fe1cbda4b552ace19138d55db25439b920e8a82798c7e324c7afe2f4\"" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Dec 13 13:34:17.943722 containerd[1460]: time="2024-12-13T13:34:17.943610091Z" level=error msg="encountered an error cleaning up failed sandbox 
\"a3d938a9fe1cbda4b552ace19138d55db25439b920e8a82798c7e324c7afe2f4\", marking sandbox state as SANDBOX_UNKNOWN" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Dec 13 13:34:17.943722 containerd[1460]: time="2024-12-13T13:34:17.943677708Z" level=error msg="RunPodSandbox for &PodSandboxMetadata{Name:csi-node-driver-h5jzv,Uid:70af0792-807b-45ba-8d22-96d81d38b5e7,Namespace:calico-system,Attempt:1,} failed, error" error="failed to setup network for sandbox \"a3d938a9fe1cbda4b552ace19138d55db25439b920e8a82798c7e324c7afe2f4\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Dec 13 13:34:17.944093 kubelet[2640]: E1213 13:34:17.944047 2640 remote_runtime.go:193] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"a3d938a9fe1cbda4b552ace19138d55db25439b920e8a82798c7e324c7afe2f4\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Dec 13 13:34:17.944512 kubelet[2640]: E1213 13:34:17.944193 2640 kuberuntime_sandbox.go:72] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"a3d938a9fe1cbda4b552ace19138d55db25439b920e8a82798c7e324c7afe2f4\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-system/csi-node-driver-h5jzv" Dec 13 13:34:17.944512 kubelet[2640]: E1213 13:34:17.944219 2640 kuberuntime_manager.go:1166] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"a3d938a9fe1cbda4b552ace19138d55db25439b920e8a82798c7e324c7afe2f4\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-system/csi-node-driver-h5jzv" Dec 13 13:34:17.944512 kubelet[2640]: E1213 13:34:17.944250 2640 pod_workers.go:1298] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"csi-node-driver-h5jzv_calico-system(70af0792-807b-45ba-8d22-96d81d38b5e7)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"csi-node-driver-h5jzv_calico-system(70af0792-807b-45ba-8d22-96d81d38b5e7)\\\": rpc error: code = Unknown desc = failed to setup network for sandbox \\\"a3d938a9fe1cbda4b552ace19138d55db25439b920e8a82798c7e324c7afe2f4\\\": plugin type=\\\"calico\\\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="calico-system/csi-node-driver-h5jzv" podUID="70af0792-807b-45ba-8d22-96d81d38b5e7" Dec 13 13:34:17.949008 containerd[1460]: time="2024-12-13T13:34:17.948947684Z" level=error msg="Failed to destroy network for sandbox \"72d027854b05139635b45a0e92158d8fa8f40ee284fc63177385a814ee2244d0\"" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Dec 13 13:34:17.949541 containerd[1460]: time="2024-12-13T13:34:17.949484263Z" level=error 
msg="encountered an error cleaning up failed sandbox \"72d027854b05139635b45a0e92158d8fa8f40ee284fc63177385a814ee2244d0\", marking sandbox state as SANDBOX_UNKNOWN" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Dec 13 13:34:17.949588 containerd[1460]: time="2024-12-13T13:34:17.949569022Z" level=error msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-apiserver-7d87f7746b-flf6n,Uid:5ba12381-f554-4e4f-8ceb-405dc070dc9a,Namespace:calico-apiserver,Attempt:3,} failed, error" error="failed to setup network for sandbox \"72d027854b05139635b45a0e92158d8fa8f40ee284fc63177385a814ee2244d0\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Dec 13 13:34:17.950539 kubelet[2640]: E1213 13:34:17.950480 2640 remote_runtime.go:193] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"72d027854b05139635b45a0e92158d8fa8f40ee284fc63177385a814ee2244d0\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Dec 13 13:34:17.950597 kubelet[2640]: E1213 13:34:17.950557 2640 kuberuntime_sandbox.go:72] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"72d027854b05139635b45a0e92158d8fa8f40ee284fc63177385a814ee2244d0\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-apiserver/calico-apiserver-7d87f7746b-flf6n" Dec 13 13:34:17.950679 kubelet[2640]: E1213 13:34:17.950590 2640 kuberuntime_manager.go:1166] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"72d027854b05139635b45a0e92158d8fa8f40ee284fc63177385a814ee2244d0\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-apiserver/calico-apiserver-7d87f7746b-flf6n" Dec 13 13:34:17.950679 kubelet[2640]: E1213 13:34:17.950655 2640 pod_workers.go:1298] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"calico-apiserver-7d87f7746b-flf6n_calico-apiserver(5ba12381-f554-4e4f-8ceb-405dc070dc9a)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"calico-apiserver-7d87f7746b-flf6n_calico-apiserver(5ba12381-f554-4e4f-8ceb-405dc070dc9a)\\\": rpc error: code = Unknown desc = failed to setup network for sandbox \\\"72d027854b05139635b45a0e92158d8fa8f40ee284fc63177385a814ee2244d0\\\": plugin type=\\\"calico\\\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="calico-apiserver/calico-apiserver-7d87f7746b-flf6n" podUID="5ba12381-f554-4e4f-8ceb-405dc070dc9a" Dec 13 13:34:17.959858 containerd[1460]: time="2024-12-13T13:34:17.959704514Z" level=error msg="Failed to destroy network for sandbox \"3d6b3dd240a00c2a00a729b97087fb6f836a068b396113e925387d9d8ae54c7f\"" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node 
container is running and has mounted /var/lib/calico/" Dec 13 13:34:17.960230 containerd[1460]: time="2024-12-13T13:34:17.960202801Z" level=error msg="encountered an error cleaning up failed sandbox \"3d6b3dd240a00c2a00a729b97087fb6f836a068b396113e925387d9d8ae54c7f\", marking sandbox state as SANDBOX_UNKNOWN" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Dec 13 13:34:17.960537 containerd[1460]: time="2024-12-13T13:34:17.960399962Z" level=error msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-7db6d8ff4d-tdhb9,Uid:bd2088f6-f886-4664-af58-213044237f3c,Namespace:kube-system,Attempt:3,} failed, error" error="failed to setup network for sandbox \"3d6b3dd240a00c2a00a729b97087fb6f836a068b396113e925387d9d8ae54c7f\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Dec 13 13:34:17.960860 kubelet[2640]: E1213 13:34:17.960701 2640 remote_runtime.go:193] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"3d6b3dd240a00c2a00a729b97087fb6f836a068b396113e925387d9d8ae54c7f\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Dec 13 13:34:17.960860 kubelet[2640]: E1213 13:34:17.960755 2640 kuberuntime_sandbox.go:72] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"3d6b3dd240a00c2a00a729b97087fb6f836a068b396113e925387d9d8ae54c7f\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="kube-system/coredns-7db6d8ff4d-tdhb9" Dec 13 13:34:17.960860 kubelet[2640]: E1213 13:34:17.960777 2640 kuberuntime_manager.go:1166] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"3d6b3dd240a00c2a00a729b97087fb6f836a068b396113e925387d9d8ae54c7f\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="kube-system/coredns-7db6d8ff4d-tdhb9" Dec 13 13:34:17.960974 kubelet[2640]: E1213 13:34:17.960816 2640 pod_workers.go:1298] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"coredns-7db6d8ff4d-tdhb9_kube-system(bd2088f6-f886-4664-af58-213044237f3c)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"coredns-7db6d8ff4d-tdhb9_kube-system(bd2088f6-f886-4664-af58-213044237f3c)\\\": rpc error: code = Unknown desc = failed to setup network for sandbox \\\"3d6b3dd240a00c2a00a729b97087fb6f836a068b396113e925387d9d8ae54c7f\\\": plugin type=\\\"calico\\\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="kube-system/coredns-7db6d8ff4d-tdhb9" podUID="bd2088f6-f886-4664-af58-213044237f3c" Dec 13 13:34:17.961358 systemd[1]: run-netns-cni\x2d41016586\x2de1ab\x2d631e\x2d1d0d\x2d3a78224268fc.mount: Deactivated successfully. Dec 13 13:34:17.961462 systemd[1]: run-netns-cni\x2d9cf769b9\x2d04bf\x2da4e6\x2df2ff\x2dc8608bd0e704.mount: Deactivated successfully. 
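Every failed add and delete above trips over the same precondition: the Calico CNI plugin stats /var/lib/calico/nodename before it will set up or tear down any pod network, and per the error text that file only exists once the calico/node container is running with /var/lib/calico/ mounted from the host. Until then each RunPodSandbox retry fails identically, which is why the Attempt counters for the coredns, calico-apiserver, calico-kube-controllers, and csi-node-driver pods keep climbing in the records that follow. As a minimal sketch (not part of the log; the file's contents beyond its existence are an assumption), the failing probe amounts to:

// probe_nodename.go — a sketch of the check behind the repeated
// "stat /var/lib/calico/nodename: no such file or directory" errors above.
package main

import (
	"fmt"
	"os"
)

func main() {
	const nodenameFile = "/var/lib/calico/nodename" // written by calico/node at startup

	// Mirror the failing syscall from the log: a plain stat on the file.
	if _, err := os.Stat(nodenameFile); err != nil {
		// This is the state the node is stuck in: every sandbox add/delete
		// fails here until calico/node creates the file.
		fmt.Fprintf(os.Stderr, "stat %s failed: %v\n", nodenameFile, err)
		os.Exit(1)
	}

	// On a healthy node the file holds the node's name (assumed contents).
	data, err := os.ReadFile(nodenameFile)
	if err != nil {
		fmt.Fprintln(os.Stderr, err)
		os.Exit(1)
	}
	fmt.Printf("calico nodename: %s\n", data)
}

Checking for this file (or for a Running calico-node pod) is the quickest way to confirm the CNI is ready before expecting the sandbox retries below to start succeeding.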
Dec 13 13:34:17.963487 containerd[1460]: time="2024-12-13T13:34:17.963414628Z" level=error msg="Failed to destroy network for sandbox \"e3e675aabe036a2d2aa1e1782dcf65d1cb1d86f83cf782f33096729bc19652d6\"" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Dec 13 13:34:17.964536 containerd[1460]: time="2024-12-13T13:34:17.964507503Z" level=error msg="encountered an error cleaning up failed sandbox \"e3e675aabe036a2d2aa1e1782dcf65d1cb1d86f83cf782f33096729bc19652d6\", marking sandbox state as SANDBOX_UNKNOWN" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Dec 13 13:34:17.964654 containerd[1460]: time="2024-12-13T13:34:17.964625074Z" level=error msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-7db6d8ff4d-n488k,Uid:4efa0db1-4649-47cd-847b-b2cd3ddad9b5,Namespace:kube-system,Attempt:3,} failed, error" error="failed to setup network for sandbox \"e3e675aabe036a2d2aa1e1782dcf65d1cb1d86f83cf782f33096729bc19652d6\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Dec 13 13:34:17.964776 systemd[1]: run-containerd-io.containerd.grpc.v1.cri-sandboxes-3d6b3dd240a00c2a00a729b97087fb6f836a068b396113e925387d9d8ae54c7f-shm.mount: Deactivated successfully. Dec 13 13:34:17.965115 kubelet[2640]: E1213 13:34:17.964953 2640 remote_runtime.go:193] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"e3e675aabe036a2d2aa1e1782dcf65d1cb1d86f83cf782f33096729bc19652d6\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Dec 13 13:34:17.965115 kubelet[2640]: E1213 13:34:17.965047 2640 kuberuntime_sandbox.go:72] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"e3e675aabe036a2d2aa1e1782dcf65d1cb1d86f83cf782f33096729bc19652d6\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="kube-system/coredns-7db6d8ff4d-n488k" Dec 13 13:34:17.965115 kubelet[2640]: E1213 13:34:17.965070 2640 kuberuntime_manager.go:1166] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"e3e675aabe036a2d2aa1e1782dcf65d1cb1d86f83cf782f33096729bc19652d6\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="kube-system/coredns-7db6d8ff4d-n488k" Dec 13 13:34:17.965513 kubelet[2640]: E1213 13:34:17.965424 2640 pod_workers.go:1298] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"coredns-7db6d8ff4d-n488k_kube-system(4efa0db1-4649-47cd-847b-b2cd3ddad9b5)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"coredns-7db6d8ff4d-n488k_kube-system(4efa0db1-4649-47cd-847b-b2cd3ddad9b5)\\\": rpc error: code = Unknown desc = failed to setup network for sandbox \\\"e3e675aabe036a2d2aa1e1782dcf65d1cb1d86f83cf782f33096729bc19652d6\\\": plugin type=\\\"calico\\\" 
failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="kube-system/coredns-7db6d8ff4d-n488k" podUID="4efa0db1-4649-47cd-847b-b2cd3ddad9b5" Dec 13 13:34:17.968337 systemd[1]: run-containerd-io.containerd.grpc.v1.cri-sandboxes-e3e675aabe036a2d2aa1e1782dcf65d1cb1d86f83cf782f33096729bc19652d6-shm.mount: Deactivated successfully. Dec 13 13:34:17.969039 containerd[1460]: time="2024-12-13T13:34:17.968993947Z" level=error msg="Failed to destroy network for sandbox \"c6522ad2e1a3375e607b651b292cf0023738e3078af5ef59656343f350a5c5f4\"" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Dec 13 13:34:17.970337 containerd[1460]: time="2024-12-13T13:34:17.969463329Z" level=error msg="encountered an error cleaning up failed sandbox \"c6522ad2e1a3375e607b651b292cf0023738e3078af5ef59656343f350a5c5f4\", marking sandbox state as SANDBOX_UNKNOWN" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Dec 13 13:34:17.970337 containerd[1460]: time="2024-12-13T13:34:17.969544361Z" level=error msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-kube-controllers-55f74d585b-gkp22,Uid:bcb02bd3-79c5-4f97-892a-aafa3090dcbe,Namespace:calico-system,Attempt:3,} failed, error" error="failed to setup network for sandbox \"c6522ad2e1a3375e607b651b292cf0023738e3078af5ef59656343f350a5c5f4\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Dec 13 13:34:17.970449 kubelet[2640]: E1213 13:34:17.969800 2640 remote_runtime.go:193] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"c6522ad2e1a3375e607b651b292cf0023738e3078af5ef59656343f350a5c5f4\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Dec 13 13:34:17.970449 kubelet[2640]: E1213 13:34:17.969869 2640 kuberuntime_sandbox.go:72] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"c6522ad2e1a3375e607b651b292cf0023738e3078af5ef59656343f350a5c5f4\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-system/calico-kube-controllers-55f74d585b-gkp22" Dec 13 13:34:17.970449 kubelet[2640]: E1213 13:34:17.969909 2640 kuberuntime_manager.go:1166] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"c6522ad2e1a3375e607b651b292cf0023738e3078af5ef59656343f350a5c5f4\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-system/calico-kube-controllers-55f74d585b-gkp22" Dec 13 13:34:17.970553 kubelet[2640]: E1213 13:34:17.969960 2640 pod_workers.go:1298] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"calico-kube-controllers-55f74d585b-gkp22_calico-system(bcb02bd3-79c5-4f97-892a-aafa3090dcbe)\" 
with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"calico-kube-controllers-55f74d585b-gkp22_calico-system(bcb02bd3-79c5-4f97-892a-aafa3090dcbe)\\\": rpc error: code = Unknown desc = failed to setup network for sandbox \\\"c6522ad2e1a3375e607b651b292cf0023738e3078af5ef59656343f350a5c5f4\\\": plugin type=\\\"calico\\\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="calico-system/calico-kube-controllers-55f74d585b-gkp22" podUID="bcb02bd3-79c5-4f97-892a-aafa3090dcbe" Dec 13 13:34:17.971774 systemd[1]: run-containerd-io.containerd.grpc.v1.cri-sandboxes-c6522ad2e1a3375e607b651b292cf0023738e3078af5ef59656343f350a5c5f4-shm.mount: Deactivated successfully. Dec 13 13:34:18.562796 kubelet[2640]: I1213 13:34:18.562762 2640 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="e3e675aabe036a2d2aa1e1782dcf65d1cb1d86f83cf782f33096729bc19652d6" Dec 13 13:34:18.564282 containerd[1460]: time="2024-12-13T13:34:18.563652975Z" level=info msg="StopPodSandbox for \"e3e675aabe036a2d2aa1e1782dcf65d1cb1d86f83cf782f33096729bc19652d6\"" Dec 13 13:34:18.564282 containerd[1460]: time="2024-12-13T13:34:18.563929534Z" level=info msg="Ensure that sandbox e3e675aabe036a2d2aa1e1782dcf65d1cb1d86f83cf782f33096729bc19652d6 in task-service has been cleanup successfully" Dec 13 13:34:18.564862 containerd[1460]: time="2024-12-13T13:34:18.564819316Z" level=info msg="TearDown network for sandbox \"e3e675aabe036a2d2aa1e1782dcf65d1cb1d86f83cf782f33096729bc19652d6\" successfully" Dec 13 13:34:18.564933 containerd[1460]: time="2024-12-13T13:34:18.564917911Z" level=info msg="StopPodSandbox for \"e3e675aabe036a2d2aa1e1782dcf65d1cb1d86f83cf782f33096729bc19652d6\" returns successfully" Dec 13 13:34:18.565298 containerd[1460]: time="2024-12-13T13:34:18.565249034Z" level=info msg="StopPodSandbox for \"09abfaaccf32e858d49fc89ee268bc73ea6fb80d4d0a90443906199a1d27306d\"" Dec 13 13:34:18.565596 containerd[1460]: time="2024-12-13T13:34:18.565389939Z" level=info msg="TearDown network for sandbox \"09abfaaccf32e858d49fc89ee268bc73ea6fb80d4d0a90443906199a1d27306d\" successfully" Dec 13 13:34:18.565596 containerd[1460]: time="2024-12-13T13:34:18.565406049Z" level=info msg="StopPodSandbox for \"09abfaaccf32e858d49fc89ee268bc73ea6fb80d4d0a90443906199a1d27306d\" returns successfully" Dec 13 13:34:18.566037 containerd[1460]: time="2024-12-13T13:34:18.565998994Z" level=info msg="StopPodSandbox for \"706c13db8235334bc38488f0490b1867ec134804b2b05c3d93c69a593fd16e13\"" Dec 13 13:34:18.566090 containerd[1460]: time="2024-12-13T13:34:18.566067923Z" level=info msg="TearDown network for sandbox \"706c13db8235334bc38488f0490b1867ec134804b2b05c3d93c69a593fd16e13\" successfully" Dec 13 13:34:18.566090 containerd[1460]: time="2024-12-13T13:34:18.566077471Z" level=info msg="StopPodSandbox for \"706c13db8235334bc38488f0490b1867ec134804b2b05c3d93c69a593fd16e13\" returns successfully" Dec 13 13:34:18.568219 containerd[1460]: time="2024-12-13T13:34:18.567490407Z" level=info msg="StopPodSandbox for \"558af8a45bc01399591427b598962160a612f4630f01ffcfbcb0b3a90050035e\"" Dec 13 13:34:18.568219 containerd[1460]: time="2024-12-13T13:34:18.567605383Z" level=info msg="TearDown network for sandbox \"558af8a45bc01399591427b598962160a612f4630f01ffcfbcb0b3a90050035e\" successfully" Dec 13 13:34:18.568219 containerd[1460]: time="2024-12-13T13:34:18.567618428Z" level=info msg="StopPodSandbox for 
\"558af8a45bc01399591427b598962160a612f4630f01ffcfbcb0b3a90050035e\" returns successfully" Dec 13 13:34:18.568387 kubelet[2640]: E1213 13:34:18.568015 2640 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Dec 13 13:34:18.567723 systemd[1]: run-netns-cni\x2d6273264f\x2de2e0\x2ddc36\x2d450a\x2d9f9c30b4c9d9.mount: Deactivated successfully. Dec 13 13:34:18.569350 containerd[1460]: time="2024-12-13T13:34:18.569209819Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-7db6d8ff4d-n488k,Uid:4efa0db1-4649-47cd-847b-b2cd3ddad9b5,Namespace:kube-system,Attempt:4,}" Dec 13 13:34:18.569757 kubelet[2640]: I1213 13:34:18.569723 2640 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="c6522ad2e1a3375e607b651b292cf0023738e3078af5ef59656343f350a5c5f4" Dec 13 13:34:18.570364 containerd[1460]: time="2024-12-13T13:34:18.570330304Z" level=info msg="StopPodSandbox for \"c6522ad2e1a3375e607b651b292cf0023738e3078af5ef59656343f350a5c5f4\"" Dec 13 13:34:18.570955 containerd[1460]: time="2024-12-13T13:34:18.570751206Z" level=info msg="Ensure that sandbox c6522ad2e1a3375e607b651b292cf0023738e3078af5ef59656343f350a5c5f4 in task-service has been cleanup successfully" Dec 13 13:34:18.571092 containerd[1460]: time="2024-12-13T13:34:18.571064645Z" level=info msg="TearDown network for sandbox \"c6522ad2e1a3375e607b651b292cf0023738e3078af5ef59656343f350a5c5f4\" successfully" Dec 13 13:34:18.571092 containerd[1460]: time="2024-12-13T13:34:18.571088980Z" level=info msg="StopPodSandbox for \"c6522ad2e1a3375e607b651b292cf0023738e3078af5ef59656343f350a5c5f4\" returns successfully" Dec 13 13:34:18.572223 containerd[1460]: time="2024-12-13T13:34:18.572193997Z" level=info msg="StopPodSandbox for \"41522410dfe8d457521fb9fe296f7b8dae420fe4186ddf5966e8b8e3c6ce7668\"" Dec 13 13:34:18.572396 containerd[1460]: time="2024-12-13T13:34:18.572377052Z" level=info msg="TearDown network for sandbox \"41522410dfe8d457521fb9fe296f7b8dae420fe4186ddf5966e8b8e3c6ce7668\" successfully" Dec 13 13:34:18.572440 containerd[1460]: time="2024-12-13T13:34:18.572395717Z" level=info msg="StopPodSandbox for \"41522410dfe8d457521fb9fe296f7b8dae420fe4186ddf5966e8b8e3c6ce7668\" returns successfully" Dec 13 13:34:18.572780 containerd[1460]: time="2024-12-13T13:34:18.572759129Z" level=info msg="StopPodSandbox for \"043d5f360472a5cf84f2e7fe5d34c855f1c1466ecd61a4548840ff0da56c473d\"" Dec 13 13:34:18.572864 containerd[1460]: time="2024-12-13T13:34:18.572848338Z" level=info msg="TearDown network for sandbox \"043d5f360472a5cf84f2e7fe5d34c855f1c1466ecd61a4548840ff0da56c473d\" successfully" Dec 13 13:34:18.572890 containerd[1460]: time="2024-12-13T13:34:18.572862875Z" level=info msg="StopPodSandbox for \"043d5f360472a5cf84f2e7fe5d34c855f1c1466ecd61a4548840ff0da56c473d\" returns successfully" Dec 13 13:34:18.573079 kubelet[2640]: I1213 13:34:18.573061 2640 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="a3d938a9fe1cbda4b552ace19138d55db25439b920e8a82798c7e324c7afe2f4" Dec 13 13:34:18.573529 containerd[1460]: time="2024-12-13T13:34:18.573506274Z" level=info msg="StopPodSandbox for \"c62e1d6a49358b1088df68486d06de5d565abc5cb224cecb08a72fc49527b818\"" Dec 13 13:34:18.573612 containerd[1460]: time="2024-12-13T13:34:18.573595702Z" level=info msg="TearDown network for sandbox \"c62e1d6a49358b1088df68486d06de5d565abc5cb224cecb08a72fc49527b818\" successfully" Dec 13 
13:34:18.573639 containerd[1460]: time="2024-12-13T13:34:18.573611191Z" level=info msg="StopPodSandbox for \"c62e1d6a49358b1088df68486d06de5d565abc5cb224cecb08a72fc49527b818\" returns successfully" Dec 13 13:34:18.573762 containerd[1460]: time="2024-12-13T13:34:18.573745022Z" level=info msg="StopPodSandbox for \"a3d938a9fe1cbda4b552ace19138d55db25439b920e8a82798c7e324c7afe2f4\"" Dec 13 13:34:18.573973 containerd[1460]: time="2024-12-13T13:34:18.573945189Z" level=info msg="Ensure that sandbox a3d938a9fe1cbda4b552ace19138d55db25439b920e8a82798c7e324c7afe2f4 in task-service has been cleanup successfully" Dec 13 13:34:18.574156 containerd[1460]: time="2024-12-13T13:34:18.574137521Z" level=info msg="TearDown network for sandbox \"a3d938a9fe1cbda4b552ace19138d55db25439b920e8a82798c7e324c7afe2f4\" successfully" Dec 13 13:34:18.574184 containerd[1460]: time="2024-12-13T13:34:18.574155925Z" level=info msg="StopPodSandbox for \"a3d938a9fe1cbda4b552ace19138d55db25439b920e8a82798c7e324c7afe2f4\" returns successfully" Dec 13 13:34:18.574662 containerd[1460]: time="2024-12-13T13:34:18.574573890Z" level=info msg="StopPodSandbox for \"4ef0d56a397cd72f2356f51c144189ca5302b06a02860d08835bc16ddf93f006\"" Dec 13 13:34:18.576555 kubelet[2640]: I1213 13:34:18.576525 2640 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="72d027854b05139635b45a0e92158d8fa8f40ee284fc63177385a814ee2244d0" Dec 13 13:34:18.578953 kubelet[2640]: I1213 13:34:18.578913 2640 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="fd4a3ec4455a4558ba5951ceb7f7c3f3eeeab9ca964d38b261406ee89bda46a4" Dec 13 13:34:18.581207 kubelet[2640]: I1213 13:34:18.581148 2640 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="3d6b3dd240a00c2a00a729b97087fb6f836a068b396113e925387d9d8ae54c7f" Dec 13 13:34:18.623636 containerd[1460]: time="2024-12-13T13:34:18.574684208Z" level=info msg="TearDown network for sandbox \"4ef0d56a397cd72f2356f51c144189ca5302b06a02860d08835bc16ddf93f006\" successfully" Dec 13 13:34:18.623636 containerd[1460]: time="2024-12-13T13:34:18.623633418Z" level=info msg="StopPodSandbox for \"4ef0d56a397cd72f2356f51c144189ca5302b06a02860d08835bc16ddf93f006\" returns successfully" Dec 13 13:34:18.623825 containerd[1460]: time="2024-12-13T13:34:18.574860809Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-kube-controllers-55f74d585b-gkp22,Uid:bcb02bd3-79c5-4f97-892a-aafa3090dcbe,Namespace:calico-system,Attempt:4,}" Dec 13 13:34:18.623825 containerd[1460]: time="2024-12-13T13:34:18.577178235Z" level=info msg="StopPodSandbox for \"72d027854b05139635b45a0e92158d8fa8f40ee284fc63177385a814ee2244d0\"" Dec 13 13:34:18.624948 containerd[1460]: time="2024-12-13T13:34:18.624885562Z" level=info msg="Ensure that sandbox 72d027854b05139635b45a0e92158d8fa8f40ee284fc63177385a814ee2244d0 in task-service has been cleanup successfully" Dec 13 13:34:18.626392 containerd[1460]: time="2024-12-13T13:34:18.579415711Z" level=info msg="StopPodSandbox for \"fd4a3ec4455a4558ba5951ceb7f7c3f3eeeab9ca964d38b261406ee89bda46a4\"" Dec 13 13:34:18.626392 containerd[1460]: time="2024-12-13T13:34:18.625628318Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:csi-node-driver-h5jzv,Uid:70af0792-807b-45ba-8d22-96d81d38b5e7,Namespace:calico-system,Attempt:2,}" Dec 13 13:34:18.626392 containerd[1460]: time="2024-12-13T13:34:18.626071320Z" level=info msg="Ensure that sandbox fd4a3ec4455a4558ba5951ceb7f7c3f3eeeab9ca964d38b261406ee89bda46a4 in task-service has 
been cleanup successfully" Dec 13 13:34:18.628907 containerd[1460]: time="2024-12-13T13:34:18.628834384Z" level=info msg="TearDown network for sandbox \"fd4a3ec4455a4558ba5951ceb7f7c3f3eeeab9ca964d38b261406ee89bda46a4\" successfully" Dec 13 13:34:18.629930 containerd[1460]: time="2024-12-13T13:34:18.628861204Z" level=info msg="StopPodSandbox for \"fd4a3ec4455a4558ba5951ceb7f7c3f3eeeab9ca964d38b261406ee89bda46a4\" returns successfully" Dec 13 13:34:18.629930 containerd[1460]: time="2024-12-13T13:34:18.581726563Z" level=info msg="StopPodSandbox for \"3d6b3dd240a00c2a00a729b97087fb6f836a068b396113e925387d9d8ae54c7f\"" Dec 13 13:34:18.631011 containerd[1460]: time="2024-12-13T13:34:18.630898554Z" level=info msg="Ensure that sandbox 3d6b3dd240a00c2a00a729b97087fb6f836a068b396113e925387d9d8ae54c7f in task-service has been cleanup successfully" Dec 13 13:34:18.632705 containerd[1460]: time="2024-12-13T13:34:18.631327259Z" level=info msg="TearDown network for sandbox \"3d6b3dd240a00c2a00a729b97087fb6f836a068b396113e925387d9d8ae54c7f\" successfully" Dec 13 13:34:18.632705 containerd[1460]: time="2024-12-13T13:34:18.631516916Z" level=info msg="StopPodSandbox for \"3d6b3dd240a00c2a00a729b97087fb6f836a068b396113e925387d9d8ae54c7f\" returns successfully" Dec 13 13:34:18.644356 containerd[1460]: time="2024-12-13T13:34:18.644278131Z" level=info msg="StopPodSandbox for \"3096d2cf061035900198ce891b4e5cbd737796b92099b44cfd3fb9172cdbac54\"" Dec 13 13:34:18.644664 containerd[1460]: time="2024-12-13T13:34:18.644632106Z" level=info msg="TearDown network for sandbox \"3096d2cf061035900198ce891b4e5cbd737796b92099b44cfd3fb9172cdbac54\" successfully" Dec 13 13:34:18.644664 containerd[1460]: time="2024-12-13T13:34:18.644650520Z" level=info msg="StopPodSandbox for \"3096d2cf061035900198ce891b4e5cbd737796b92099b44cfd3fb9172cdbac54\" returns successfully" Dec 13 13:34:18.644826 containerd[1460]: time="2024-12-13T13:34:18.644795333Z" level=info msg="StopPodSandbox for \"e93b7769b9d88742aaf6059922f8c6c4a26e010f6ef3858da386188108ad8af8\"" Dec 13 13:34:18.645047 containerd[1460]: time="2024-12-13T13:34:18.645008042Z" level=info msg="TearDown network for sandbox \"e93b7769b9d88742aaf6059922f8c6c4a26e010f6ef3858da386188108ad8af8\" successfully" Dec 13 13:34:18.645047 containerd[1460]: time="2024-12-13T13:34:18.645021217Z" level=info msg="StopPodSandbox for \"e93b7769b9d88742aaf6059922f8c6c4a26e010f6ef3858da386188108ad8af8\" returns successfully" Dec 13 13:34:18.646887 containerd[1460]: time="2024-12-13T13:34:18.645500488Z" level=info msg="StopPodSandbox for \"03f3d3da32a6bad9575866ecc071f7db7e2ef21eb15c13b5d3ccc4981ea3a6b8\"" Dec 13 13:34:18.646887 containerd[1460]: time="2024-12-13T13:34:18.645603863Z" level=info msg="TearDown network for sandbox \"03f3d3da32a6bad9575866ecc071f7db7e2ef21eb15c13b5d3ccc4981ea3a6b8\" successfully" Dec 13 13:34:18.646887 containerd[1460]: time="2024-12-13T13:34:18.645613200Z" level=info msg="StopPodSandbox for \"03f3d3da32a6bad9575866ecc071f7db7e2ef21eb15c13b5d3ccc4981ea3a6b8\" returns successfully" Dec 13 13:34:18.646887 containerd[1460]: time="2024-12-13T13:34:18.645698280Z" level=info msg="StopPodSandbox for \"8e04eb66310a3917ffdcbb002bcd66f739fb7b5c6e195d72921f7f71735c6ad9\"" Dec 13 13:34:18.646887 containerd[1460]: time="2024-12-13T13:34:18.645826460Z" level=info msg="TearDown network for sandbox \"8e04eb66310a3917ffdcbb002bcd66f739fb7b5c6e195d72921f7f71735c6ad9\" successfully" Dec 13 13:34:18.646887 containerd[1460]: time="2024-12-13T13:34:18.645925166Z" level=info msg="StopPodSandbox for 
\"8e04eb66310a3917ffdcbb002bcd66f739fb7b5c6e195d72921f7f71735c6ad9\" returns successfully" Dec 13 13:34:18.646887 containerd[1460]: time="2024-12-13T13:34:18.646273471Z" level=info msg="StopPodSandbox for \"3471811395b1ff1780abbbc92eeb9a683464d2b59eb4b36f7ca30acd080ae9c4\"" Dec 13 13:34:18.660267 containerd[1460]: time="2024-12-13T13:34:18.659488228Z" level=info msg="TearDown network for sandbox \"72d027854b05139635b45a0e92158d8fa8f40ee284fc63177385a814ee2244d0\" successfully" Dec 13 13:34:18.660267 containerd[1460]: time="2024-12-13T13:34:18.659563259Z" level=info msg="StopPodSandbox for \"72d027854b05139635b45a0e92158d8fa8f40ee284fc63177385a814ee2244d0\" returns successfully" Dec 13 13:34:18.661765 containerd[1460]: time="2024-12-13T13:34:18.660011521Z" level=info msg="TearDown network for sandbox \"3471811395b1ff1780abbbc92eeb9a683464d2b59eb4b36f7ca30acd080ae9c4\" successfully" Dec 13 13:34:18.661765 containerd[1460]: time="2024-12-13T13:34:18.661511841Z" level=info msg="StopPodSandbox for \"3471811395b1ff1780abbbc92eeb9a683464d2b59eb4b36f7ca30acd080ae9c4\" returns successfully" Dec 13 13:34:18.661765 containerd[1460]: time="2024-12-13T13:34:18.661738588Z" level=info msg="StopPodSandbox for \"21489d44f97457445adcc907a914ad67e96a4016825cb60915c67d35d31dba11\"" Dec 13 13:34:18.662579 containerd[1460]: time="2024-12-13T13:34:18.662503646Z" level=info msg="TearDown network for sandbox \"21489d44f97457445adcc907a914ad67e96a4016825cb60915c67d35d31dba11\" successfully" Dec 13 13:34:18.662622 containerd[1460]: time="2024-12-13T13:34:18.662571483Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-apiserver-7d87f7746b-mqntw,Uid:cadc3599-3084-49e0-99bc-626d4d423dd6,Namespace:calico-apiserver,Attempt:4,}" Dec 13 13:34:18.662779 containerd[1460]: time="2024-12-13T13:34:18.662751451Z" level=info msg="StopPodSandbox for \"8c8a323be73a25f027a42aa31bd62c7c50a5548b9c0f738623c6c902439d54ca\"" Dec 13 13:34:18.662814 containerd[1460]: time="2024-12-13T13:34:18.662574559Z" level=info msg="StopPodSandbox for \"21489d44f97457445adcc907a914ad67e96a4016825cb60915c67d35d31dba11\" returns successfully" Dec 13 13:34:18.662863 containerd[1460]: time="2024-12-13T13:34:18.662840388Z" level=info msg="TearDown network for sandbox \"8c8a323be73a25f027a42aa31bd62c7c50a5548b9c0f738623c6c902439d54ca\" successfully" Dec 13 13:34:18.663117 kubelet[2640]: E1213 13:34:18.663086 2640 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Dec 13 13:34:18.663709 containerd[1460]: time="2024-12-13T13:34:18.663012161Z" level=info msg="StopPodSandbox for \"8c8a323be73a25f027a42aa31bd62c7c50a5548b9c0f738623c6c902439d54ca\" returns successfully" Dec 13 13:34:18.664464 containerd[1460]: time="2024-12-13T13:34:18.663834156Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-7db6d8ff4d-tdhb9,Uid:bd2088f6-f886-4664-af58-213044237f3c,Namespace:kube-system,Attempt:4,}" Dec 13 13:34:18.665119 containerd[1460]: time="2024-12-13T13:34:18.664794161Z" level=info msg="StopPodSandbox for \"5830ddd4f895c70171dbfc66a59269f13cec0912a16a8f19a7fc422dce5b8861\"" Dec 13 13:34:18.665119 containerd[1460]: time="2024-12-13T13:34:18.664921670Z" level=info msg="TearDown network for sandbox \"5830ddd4f895c70171dbfc66a59269f13cec0912a16a8f19a7fc422dce5b8861\" successfully" Dec 13 13:34:18.665119 containerd[1460]: time="2024-12-13T13:34:18.664932270Z" level=info msg="StopPodSandbox for 
\"5830ddd4f895c70171dbfc66a59269f13cec0912a16a8f19a7fc422dce5b8861\" returns successfully" Dec 13 13:34:18.665526 containerd[1460]: time="2024-12-13T13:34:18.665471554Z" level=info msg="StopPodSandbox for \"d1a332cec12278e8a2848eda31dc70e9822bb2e5c607ef1e71dfc9da622ecebb\"" Dec 13 13:34:18.665715 containerd[1460]: time="2024-12-13T13:34:18.665556474Z" level=info msg="TearDown network for sandbox \"d1a332cec12278e8a2848eda31dc70e9822bb2e5c607ef1e71dfc9da622ecebb\" successfully" Dec 13 13:34:18.665715 containerd[1460]: time="2024-12-13T13:34:18.665572314Z" level=info msg="StopPodSandbox for \"d1a332cec12278e8a2848eda31dc70e9822bb2e5c607ef1e71dfc9da622ecebb\" returns successfully" Dec 13 13:34:18.667450 containerd[1460]: time="2024-12-13T13:34:18.667176318Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-apiserver-7d87f7746b-flf6n,Uid:5ba12381-f554-4e4f-8ceb-405dc070dc9a,Namespace:calico-apiserver,Attempt:4,}" Dec 13 13:34:18.954781 systemd[1]: run-netns-cni\x2d162f5601\x2d5bde\x2d159f\x2db92e\x2dc851c1496273.mount: Deactivated successfully. Dec 13 13:34:18.954899 systemd[1]: run-netns-cni\x2da4aec4b5\x2dc046\x2de825\x2d3645\x2d274242856187.mount: Deactivated successfully. Dec 13 13:34:18.954997 systemd[1]: run-netns-cni\x2ddc5ff318\x2d788d\x2df07f\x2d70ce\x2d9fc403a74dfa.mount: Deactivated successfully. Dec 13 13:34:18.955070 systemd[1]: run-netns-cni\x2d428e5167\x2d0964\x2dd551\x2db41f\x2d5537f748f7b6.mount: Deactivated successfully. Dec 13 13:34:18.955272 systemd[1]: run-netns-cni\x2d5b6fd91a\x2d4ea7\x2d3839\x2d7e66\x2d5e81847ac6ae.mount: Deactivated successfully. Dec 13 13:34:19.281208 containerd[1460]: time="2024-12-13T13:34:19.281070955Z" level=error msg="Failed to destroy network for sandbox \"ba6f8affd16e998d2e69c27c01590791060c1de1a38eb975939e57a9835e83db\"" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Dec 13 13:34:19.282030 containerd[1460]: time="2024-12-13T13:34:19.281779407Z" level=error msg="encountered an error cleaning up failed sandbox \"ba6f8affd16e998d2e69c27c01590791060c1de1a38eb975939e57a9835e83db\", marking sandbox state as SANDBOX_UNKNOWN" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Dec 13 13:34:19.282030 containerd[1460]: time="2024-12-13T13:34:19.281841463Z" level=error msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-7db6d8ff4d-n488k,Uid:4efa0db1-4649-47cd-847b-b2cd3ddad9b5,Namespace:kube-system,Attempt:4,} failed, error" error="failed to setup network for sandbox \"ba6f8affd16e998d2e69c27c01590791060c1de1a38eb975939e57a9835e83db\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Dec 13 13:34:19.282145 kubelet[2640]: E1213 13:34:19.282110 2640 remote_runtime.go:193] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"ba6f8affd16e998d2e69c27c01590791060c1de1a38eb975939e57a9835e83db\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Dec 13 13:34:19.282236 kubelet[2640]: E1213 13:34:19.282164 2640 kuberuntime_sandbox.go:72] "Failed to create 
sandbox for pod" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"ba6f8affd16e998d2e69c27c01590791060c1de1a38eb975939e57a9835e83db\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="kube-system/coredns-7db6d8ff4d-n488k" Dec 13 13:34:19.282236 kubelet[2640]: E1213 13:34:19.282189 2640 kuberuntime_manager.go:1166] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"ba6f8affd16e998d2e69c27c01590791060c1de1a38eb975939e57a9835e83db\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="kube-system/coredns-7db6d8ff4d-n488k" Dec 13 13:34:19.282331 kubelet[2640]: E1213 13:34:19.282231 2640 pod_workers.go:1298] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"coredns-7db6d8ff4d-n488k_kube-system(4efa0db1-4649-47cd-847b-b2cd3ddad9b5)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"coredns-7db6d8ff4d-n488k_kube-system(4efa0db1-4649-47cd-847b-b2cd3ddad9b5)\\\": rpc error: code = Unknown desc = failed to setup network for sandbox \\\"ba6f8affd16e998d2e69c27c01590791060c1de1a38eb975939e57a9835e83db\\\": plugin type=\\\"calico\\\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="kube-system/coredns-7db6d8ff4d-n488k" podUID="4efa0db1-4649-47cd-847b-b2cd3ddad9b5" Dec 13 13:34:19.453777 containerd[1460]: time="2024-12-13T13:34:19.453425947Z" level=error msg="Failed to destroy network for sandbox \"6a2a4f3c81b7850246110bb3a39e21303a213d976a8f97927fa282ec47803ddb\"" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Dec 13 13:34:19.455521 containerd[1460]: time="2024-12-13T13:34:19.455382424Z" level=error msg="encountered an error cleaning up failed sandbox \"6a2a4f3c81b7850246110bb3a39e21303a213d976a8f97927fa282ec47803ddb\", marking sandbox state as SANDBOX_UNKNOWN" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Dec 13 13:34:19.455733 containerd[1460]: time="2024-12-13T13:34:19.455702906Z" level=error msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-kube-controllers-55f74d585b-gkp22,Uid:bcb02bd3-79c5-4f97-892a-aafa3090dcbe,Namespace:calico-system,Attempt:4,} failed, error" error="failed to setup network for sandbox \"6a2a4f3c81b7850246110bb3a39e21303a213d976a8f97927fa282ec47803ddb\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Dec 13 13:34:19.456489 kubelet[2640]: E1213 13:34:19.456442 2640 remote_runtime.go:193] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"6a2a4f3c81b7850246110bb3a39e21303a213d976a8f97927fa282ec47803ddb\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Dec 13 13:34:19.456571 
kubelet[2640]: E1213 13:34:19.456499 2640 kuberuntime_sandbox.go:72] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"6a2a4f3c81b7850246110bb3a39e21303a213d976a8f97927fa282ec47803ddb\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-system/calico-kube-controllers-55f74d585b-gkp22" Dec 13 13:34:19.456571 kubelet[2640]: E1213 13:34:19.456520 2640 kuberuntime_manager.go:1166] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"6a2a4f3c81b7850246110bb3a39e21303a213d976a8f97927fa282ec47803ddb\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-system/calico-kube-controllers-55f74d585b-gkp22" Dec 13 13:34:19.456663 kubelet[2640]: E1213 13:34:19.456562 2640 pod_workers.go:1298] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"calico-kube-controllers-55f74d585b-gkp22_calico-system(bcb02bd3-79c5-4f97-892a-aafa3090dcbe)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"calico-kube-controllers-55f74d585b-gkp22_calico-system(bcb02bd3-79c5-4f97-892a-aafa3090dcbe)\\\": rpc error: code = Unknown desc = failed to setup network for sandbox \\\"6a2a4f3c81b7850246110bb3a39e21303a213d976a8f97927fa282ec47803ddb\\\": plugin type=\\\"calico\\\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="calico-system/calico-kube-controllers-55f74d585b-gkp22" podUID="bcb02bd3-79c5-4f97-892a-aafa3090dcbe" Dec 13 13:34:19.467412 containerd[1460]: time="2024-12-13T13:34:19.467354193Z" level=error msg="Failed to destroy network for sandbox \"9c61f38680623a070a89cebb1cf248b637d83806377c0ceb07f4313094888769\"" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Dec 13 13:34:19.467869 containerd[1460]: time="2024-12-13T13:34:19.467844915Z" level=error msg="encountered an error cleaning up failed sandbox \"9c61f38680623a070a89cebb1cf248b637d83806377c0ceb07f4313094888769\", marking sandbox state as SANDBOX_UNKNOWN" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Dec 13 13:34:19.467934 containerd[1460]: time="2024-12-13T13:34:19.467903435Z" level=error msg="RunPodSandbox for &PodSandboxMetadata{Name:csi-node-driver-h5jzv,Uid:70af0792-807b-45ba-8d22-96d81d38b5e7,Namespace:calico-system,Attempt:2,} failed, error" error="failed to setup network for sandbox \"9c61f38680623a070a89cebb1cf248b637d83806377c0ceb07f4313094888769\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Dec 13 13:34:19.468371 kubelet[2640]: E1213 13:34:19.468153 2640 remote_runtime.go:193] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"9c61f38680623a070a89cebb1cf248b637d83806377c0ceb07f4313094888769\": plugin type=\"calico\" failed (add): stat 
/var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Dec 13 13:34:19.468371 kubelet[2640]: E1213 13:34:19.468207 2640 kuberuntime_sandbox.go:72] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"9c61f38680623a070a89cebb1cf248b637d83806377c0ceb07f4313094888769\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-system/csi-node-driver-h5jzv" Dec 13 13:34:19.468371 kubelet[2640]: E1213 13:34:19.468231 2640 kuberuntime_manager.go:1166] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"9c61f38680623a070a89cebb1cf248b637d83806377c0ceb07f4313094888769\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-system/csi-node-driver-h5jzv" Dec 13 13:34:19.468690 kubelet[2640]: E1213 13:34:19.468299 2640 pod_workers.go:1298] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"csi-node-driver-h5jzv_calico-system(70af0792-807b-45ba-8d22-96d81d38b5e7)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"csi-node-driver-h5jzv_calico-system(70af0792-807b-45ba-8d22-96d81d38b5e7)\\\": rpc error: code = Unknown desc = failed to setup network for sandbox \\\"9c61f38680623a070a89cebb1cf248b637d83806377c0ceb07f4313094888769\\\": plugin type=\\\"calico\\\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="calico-system/csi-node-driver-h5jzv" podUID="70af0792-807b-45ba-8d22-96d81d38b5e7" Dec 13 13:34:19.498501 containerd[1460]: time="2024-12-13T13:34:19.486644955Z" level=error msg="Failed to destroy network for sandbox \"ea5cf49efc7cd6bceaf869c25a1e0ee305ee2c5cb1307feaaffb986d0540529c\"" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Dec 13 13:34:19.498671 containerd[1460]: time="2024-12-13T13:34:19.492999948Z" level=error msg="Failed to destroy network for sandbox \"bef88fc60cdb3838d0dc9850d23e07d807225163db3a44fc0c007e1c2aa7f9d8\"" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Dec 13 13:34:19.498954 containerd[1460]: time="2024-12-13T13:34:19.498914804Z" level=error msg="encountered an error cleaning up failed sandbox \"ea5cf49efc7cd6bceaf869c25a1e0ee305ee2c5cb1307feaaffb986d0540529c\", marking sandbox state as SANDBOX_UNKNOWN" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Dec 13 13:34:19.499034 containerd[1460]: time="2024-12-13T13:34:19.499007348Z" level=error msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-7db6d8ff4d-tdhb9,Uid:bd2088f6-f886-4664-af58-213044237f3c,Namespace:kube-system,Attempt:4,} failed, error" error="failed to setup network for sandbox \"ea5cf49efc7cd6bceaf869c25a1e0ee305ee2c5cb1307feaaffb986d0540529c\": plugin type=\"calico\" failed (add): stat 
/var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Dec 13 13:34:19.499385 kubelet[2640]: E1213 13:34:19.499285 2640 remote_runtime.go:193] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"ea5cf49efc7cd6bceaf869c25a1e0ee305ee2c5cb1307feaaffb986d0540529c\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Dec 13 13:34:19.499445 kubelet[2640]: E1213 13:34:19.499405 2640 kuberuntime_sandbox.go:72] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"ea5cf49efc7cd6bceaf869c25a1e0ee305ee2c5cb1307feaaffb986d0540529c\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="kube-system/coredns-7db6d8ff4d-tdhb9" Dec 13 13:34:19.499445 kubelet[2640]: E1213 13:34:19.499434 2640 kuberuntime_manager.go:1166] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"ea5cf49efc7cd6bceaf869c25a1e0ee305ee2c5cb1307feaaffb986d0540529c\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="kube-system/coredns-7db6d8ff4d-tdhb9" Dec 13 13:34:19.499495 containerd[1460]: time="2024-12-13T13:34:19.499416176Z" level=error msg="encountered an error cleaning up failed sandbox \"bef88fc60cdb3838d0dc9850d23e07d807225163db3a44fc0c007e1c2aa7f9d8\", marking sandbox state as SANDBOX_UNKNOWN" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Dec 13 13:34:19.499528 containerd[1460]: time="2024-12-13T13:34:19.499503931Z" level=error msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-apiserver-7d87f7746b-mqntw,Uid:cadc3599-3084-49e0-99bc-626d4d423dd6,Namespace:calico-apiserver,Attempt:4,} failed, error" error="failed to setup network for sandbox \"bef88fc60cdb3838d0dc9850d23e07d807225163db3a44fc0c007e1c2aa7f9d8\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Dec 13 13:34:19.500051 kubelet[2640]: E1213 13:34:19.499777 2640 remote_runtime.go:193] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"bef88fc60cdb3838d0dc9850d23e07d807225163db3a44fc0c007e1c2aa7f9d8\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Dec 13 13:34:19.500051 kubelet[2640]: E1213 13:34:19.499845 2640 kuberuntime_sandbox.go:72] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"bef88fc60cdb3838d0dc9850d23e07d807225163db3a44fc0c007e1c2aa7f9d8\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-apiserver/calico-apiserver-7d87f7746b-mqntw" Dec 13 13:34:19.500051 
kubelet[2640]: E1213 13:34:19.499865 2640 kuberuntime_manager.go:1166] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"bef88fc60cdb3838d0dc9850d23e07d807225163db3a44fc0c007e1c2aa7f9d8\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-apiserver/calico-apiserver-7d87f7746b-mqntw"
Dec 13 13:34:19.500142 kubelet[2640]: E1213 13:34:19.499905 2640 pod_workers.go:1298] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"calico-apiserver-7d87f7746b-mqntw_calico-apiserver(cadc3599-3084-49e0-99bc-626d4d423dd6)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"calico-apiserver-7d87f7746b-mqntw_calico-apiserver(cadc3599-3084-49e0-99bc-626d4d423dd6)\\\": rpc error: code = Unknown desc = failed to setup network for sandbox \\\"bef88fc60cdb3838d0dc9850d23e07d807225163db3a44fc0c007e1c2aa7f9d8\\\": plugin type=\\\"calico\\\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="calico-apiserver/calico-apiserver-7d87f7746b-mqntw" podUID="cadc3599-3084-49e0-99bc-626d4d423dd6"
Dec 13 13:34:19.500306 kubelet[2640]: E1213 13:34:19.499501 2640 pod_workers.go:1298] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"coredns-7db6d8ff4d-tdhb9_kube-system(bd2088f6-f886-4664-af58-213044237f3c)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"coredns-7db6d8ff4d-tdhb9_kube-system(bd2088f6-f886-4664-af58-213044237f3c)\\\": rpc error: code = Unknown desc = failed to setup network for sandbox \\\"ea5cf49efc7cd6bceaf869c25a1e0ee305ee2c5cb1307feaaffb986d0540529c\\\": plugin type=\\\"calico\\\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="kube-system/coredns-7db6d8ff4d-tdhb9" podUID="bd2088f6-f886-4664-af58-213044237f3c"
Dec 13 13:34:19.529855 containerd[1460]: time="2024-12-13T13:34:19.529796219Z" level=error msg="Failed to destroy network for sandbox \"d3f6fbb3f4076657051c373b6d68491beda4319936476f5a29f465b9cc1dee09\"" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/"
Dec 13 13:34:19.530249 containerd[1460]: time="2024-12-13T13:34:19.530218963Z" level=error msg="encountered an error cleaning up failed sandbox \"d3f6fbb3f4076657051c373b6d68491beda4319936476f5a29f465b9cc1dee09\", marking sandbox state as SANDBOX_UNKNOWN" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/"
Dec 13 13:34:19.530344 containerd[1460]: time="2024-12-13T13:34:19.530293533Z" level=error msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-apiserver-7d87f7746b-flf6n,Uid:5ba12381-f554-4e4f-8ceb-405dc070dc9a,Namespace:calico-apiserver,Attempt:4,} failed, error" error="failed to setup network for sandbox \"d3f6fbb3f4076657051c373b6d68491beda4319936476f5a29f465b9cc1dee09\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/"
Dec 13 13:34:19.530574 kubelet[2640]: E1213 13:34:19.530511 2640 remote_runtime.go:193] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"d3f6fbb3f4076657051c373b6d68491beda4319936476f5a29f465b9cc1dee09\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/"
Dec 13 13:34:19.530574 kubelet[2640]: E1213 13:34:19.530572 2640 kuberuntime_sandbox.go:72] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"d3f6fbb3f4076657051c373b6d68491beda4319936476f5a29f465b9cc1dee09\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-apiserver/calico-apiserver-7d87f7746b-flf6n"
Dec 13 13:34:19.530747 kubelet[2640]: E1213 13:34:19.530594 2640 kuberuntime_manager.go:1166] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"d3f6fbb3f4076657051c373b6d68491beda4319936476f5a29f465b9cc1dee09\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-apiserver/calico-apiserver-7d87f7746b-flf6n"
Dec 13 13:34:19.530747 kubelet[2640]: E1213 13:34:19.530637 2640 pod_workers.go:1298] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"calico-apiserver-7d87f7746b-flf6n_calico-apiserver(5ba12381-f554-4e4f-8ceb-405dc070dc9a)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"calico-apiserver-7d87f7746b-flf6n_calico-apiserver(5ba12381-f554-4e4f-8ceb-405dc070dc9a)\\\": rpc error: code = Unknown desc = failed to setup network for sandbox \\\"d3f6fbb3f4076657051c373b6d68491beda4319936476f5a29f465b9cc1dee09\\\": plugin type=\\\"calico\\\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="calico-apiserver/calico-apiserver-7d87f7746b-flf6n" podUID="5ba12381-f554-4e4f-8ceb-405dc070dc9a"
Dec 13 13:34:19.587471 kubelet[2640]: I1213 13:34:19.585567 2640 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="9c61f38680623a070a89cebb1cf248b637d83806377c0ceb07f4313094888769"
Dec 13 13:34:19.587837 containerd[1460]: time="2024-12-13T13:34:19.586397288Z" level=info msg="StopPodSandbox for \"9c61f38680623a070a89cebb1cf248b637d83806377c0ceb07f4313094888769\""
Dec 13 13:34:19.587837 containerd[1460]: time="2024-12-13T13:34:19.586594968Z" level=info msg="Ensure that sandbox 9c61f38680623a070a89cebb1cf248b637d83806377c0ceb07f4313094888769 in task-service has been cleanup successfully"
Dec 13 13:34:19.587837 containerd[1460]: time="2024-12-13T13:34:19.587586001Z" level=info msg="TearDown network for sandbox \"9c61f38680623a070a89cebb1cf248b637d83806377c0ceb07f4313094888769\" successfully"
Dec 13 13:34:19.587837 containerd[1460]: time="2024-12-13T13:34:19.587612721Z" level=info msg="StopPodSandbox for \"9c61f38680623a070a89cebb1cf248b637d83806377c0ceb07f4313094888769\" returns successfully"
Dec 13 13:34:19.589300 containerd[1460]: time="2024-12-13T13:34:19.588415971Z" level=info msg="StopPodSandbox for \"a3d938a9fe1cbda4b552ace19138d55db25439b920e8a82798c7e324c7afe2f4\""
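All of the errors above share one root cause: the Calico CNI plugin cannot find /var/lib/calico/nodename, the file calico-node writes at startup so the plugin knows which Calico node object to attach pod endpoints to. At this point in the log calico-node has not started yet, so every CNI add and delete fails the same stat. A minimal Go sketch of the failing check (illustrative only, stdlib; not the actual projectcalico plugin code):

package main

import (
	"fmt"
	"os"
	"strings"
)

// Path taken from the log lines above: calico-node writes this file on
// startup, and the CNI plugin reads it on every ADD/DEL to learn which
// Calico node object the pod endpoint belongs to.
const nodenameFile = "/var/lib/calico/nodename"

func calicoNodename() (string, error) {
	data, err := os.ReadFile(nodenameFile)
	if err != nil {
		// Until calico-node is running and has mounted /var/lib/calico/,
		// this is exactly the failure mode the log shows.
		return "", fmt.Errorf("stat %s: %w: check that the calico/node container is running and has mounted /var/lib/calico/", nodenameFile, err)
	}
	return strings.TrimSpace(string(data)), nil
}

func main() {
	name, err := calicoNodename()
	if err != nil {
		fmt.Fprintln(os.Stderr, err)
		os.Exit(1)
	}
	fmt.Println("calico nodename:", name)
}

Once calico-node starts and the file exists, the same sandboxes are retried and succeed; that transition is visible further down in the log.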
time="2024-12-13T13:34:19.588522080Z" level=info msg="TearDown network for sandbox \"a3d938a9fe1cbda4b552ace19138d55db25439b920e8a82798c7e324c7afe2f4\" successfully" Dec 13 13:34:19.589300 containerd[1460]: time="2024-12-13T13:34:19.588531848Z" level=info msg="StopPodSandbox for \"a3d938a9fe1cbda4b552ace19138d55db25439b920e8a82798c7e324c7afe2f4\" returns successfully" Dec 13 13:34:19.589300 containerd[1460]: time="2024-12-13T13:34:19.588851049Z" level=info msg="StopPodSandbox for \"4ef0d56a397cd72f2356f51c144189ca5302b06a02860d08835bc16ddf93f006\"" Dec 13 13:34:19.589300 containerd[1460]: time="2024-12-13T13:34:19.588922273Z" level=info msg="TearDown network for sandbox \"4ef0d56a397cd72f2356f51c144189ca5302b06a02860d08835bc16ddf93f006\" successfully" Dec 13 13:34:19.589300 containerd[1460]: time="2024-12-13T13:34:19.588931811Z" level=info msg="StopPodSandbox for \"4ef0d56a397cd72f2356f51c144189ca5302b06a02860d08835bc16ddf93f006\" returns successfully" Dec 13 13:34:19.589300 containerd[1460]: time="2024-12-13T13:34:19.589244408Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:csi-node-driver-h5jzv,Uid:70af0792-807b-45ba-8d22-96d81d38b5e7,Namespace:calico-system,Attempt:3,}" Dec 13 13:34:19.590436 kubelet[2640]: I1213 13:34:19.590417 2640 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="d3f6fbb3f4076657051c373b6d68491beda4319936476f5a29f465b9cc1dee09" Dec 13 13:34:19.591398 containerd[1460]: time="2024-12-13T13:34:19.590868420Z" level=info msg="StopPodSandbox for \"d3f6fbb3f4076657051c373b6d68491beda4319936476f5a29f465b9cc1dee09\"" Dec 13 13:34:19.591398 containerd[1460]: time="2024-12-13T13:34:19.591034892Z" level=info msg="Ensure that sandbox d3f6fbb3f4076657051c373b6d68491beda4319936476f5a29f465b9cc1dee09 in task-service has been cleanup successfully" Dec 13 13:34:19.591398 containerd[1460]: time="2024-12-13T13:34:19.591342380Z" level=info msg="TearDown network for sandbox \"d3f6fbb3f4076657051c373b6d68491beda4319936476f5a29f465b9cc1dee09\" successfully" Dec 13 13:34:19.591398 containerd[1460]: time="2024-12-13T13:34:19.591354693Z" level=info msg="StopPodSandbox for \"d3f6fbb3f4076657051c373b6d68491beda4319936476f5a29f465b9cc1dee09\" returns successfully" Dec 13 13:34:19.592098 containerd[1460]: time="2024-12-13T13:34:19.592069818Z" level=info msg="StopPodSandbox for \"72d027854b05139635b45a0e92158d8fa8f40ee284fc63177385a814ee2244d0\"" Dec 13 13:34:19.592176 containerd[1460]: time="2024-12-13T13:34:19.592152984Z" level=info msg="TearDown network for sandbox \"72d027854b05139635b45a0e92158d8fa8f40ee284fc63177385a814ee2244d0\" successfully" Dec 13 13:34:19.592176 containerd[1460]: time="2024-12-13T13:34:19.592170256Z" level=info msg="StopPodSandbox for \"72d027854b05139635b45a0e92158d8fa8f40ee284fc63177385a814ee2244d0\" returns successfully" Dec 13 13:34:19.592614 containerd[1460]: time="2024-12-13T13:34:19.592574075Z" level=info msg="StopPodSandbox for \"8c8a323be73a25f027a42aa31bd62c7c50a5548b9c0f738623c6c902439d54ca\"" Dec 13 13:34:19.592724 containerd[1460]: time="2024-12-13T13:34:19.592699270Z" level=info msg="TearDown network for sandbox \"8c8a323be73a25f027a42aa31bd62c7c50a5548b9c0f738623c6c902439d54ca\" successfully" Dec 13 13:34:19.592724 containerd[1460]: time="2024-12-13T13:34:19.592722233Z" level=info msg="StopPodSandbox for \"8c8a323be73a25f027a42aa31bd62c7c50a5548b9c0f738623c6c902439d54ca\" returns successfully" Dec 13 13:34:19.594666 containerd[1460]: time="2024-12-13T13:34:19.594626071Z" level=info msg="StopPodSandbox for 
\"5830ddd4f895c70171dbfc66a59269f13cec0912a16a8f19a7fc422dce5b8861\"" Dec 13 13:34:19.594893 containerd[1460]: time="2024-12-13T13:34:19.594861093Z" level=info msg="TearDown network for sandbox \"5830ddd4f895c70171dbfc66a59269f13cec0912a16a8f19a7fc422dce5b8861\" successfully" Dec 13 13:34:19.594893 containerd[1460]: time="2024-12-13T13:34:19.594895869Z" level=info msg="StopPodSandbox for \"5830ddd4f895c70171dbfc66a59269f13cec0912a16a8f19a7fc422dce5b8861\" returns successfully" Dec 13 13:34:19.595437 containerd[1460]: time="2024-12-13T13:34:19.595379828Z" level=info msg="StopPodSandbox for \"d1a332cec12278e8a2848eda31dc70e9822bb2e5c607ef1e71dfc9da622ecebb\"" Dec 13 13:34:19.595578 containerd[1460]: time="2024-12-13T13:34:19.595510032Z" level=info msg="TearDown network for sandbox \"d1a332cec12278e8a2848eda31dc70e9822bb2e5c607ef1e71dfc9da622ecebb\" successfully" Dec 13 13:34:19.595578 containerd[1460]: time="2024-12-13T13:34:19.595520242Z" level=info msg="StopPodSandbox for \"d1a332cec12278e8a2848eda31dc70e9822bb2e5c607ef1e71dfc9da622ecebb\" returns successfully" Dec 13 13:34:19.595974 kubelet[2640]: I1213 13:34:19.595944 2640 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="bef88fc60cdb3838d0dc9850d23e07d807225163db3a44fc0c007e1c2aa7f9d8" Dec 13 13:34:19.596031 containerd[1460]: time="2024-12-13T13:34:19.596006345Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-apiserver-7d87f7746b-flf6n,Uid:5ba12381-f554-4e4f-8ceb-405dc070dc9a,Namespace:calico-apiserver,Attempt:5,}" Dec 13 13:34:19.596737 containerd[1460]: time="2024-12-13T13:34:19.596592297Z" level=info msg="StopPodSandbox for \"bef88fc60cdb3838d0dc9850d23e07d807225163db3a44fc0c007e1c2aa7f9d8\"" Dec 13 13:34:19.597147 containerd[1460]: time="2024-12-13T13:34:19.597096083Z" level=info msg="Ensure that sandbox bef88fc60cdb3838d0dc9850d23e07d807225163db3a44fc0c007e1c2aa7f9d8 in task-service has been cleanup successfully" Dec 13 13:34:19.597573 containerd[1460]: time="2024-12-13T13:34:19.597524508Z" level=info msg="TearDown network for sandbox \"bef88fc60cdb3838d0dc9850d23e07d807225163db3a44fc0c007e1c2aa7f9d8\" successfully" Dec 13 13:34:19.597683 containerd[1460]: time="2024-12-13T13:34:19.597541721Z" level=info msg="StopPodSandbox for \"bef88fc60cdb3838d0dc9850d23e07d807225163db3a44fc0c007e1c2aa7f9d8\" returns successfully" Dec 13 13:34:19.598082 containerd[1460]: time="2024-12-13T13:34:19.598063451Z" level=info msg="StopPodSandbox for \"fd4a3ec4455a4558ba5951ceb7f7c3f3eeeab9ca964d38b261406ee89bda46a4\"" Dec 13 13:34:19.598222 kubelet[2640]: I1213 13:34:19.598140 2640 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="ea5cf49efc7cd6bceaf869c25a1e0ee305ee2c5cb1307feaaffb986d0540529c" Dec 13 13:34:19.598351 containerd[1460]: time="2024-12-13T13:34:19.598330593Z" level=info msg="TearDown network for sandbox \"fd4a3ec4455a4558ba5951ceb7f7c3f3eeeab9ca964d38b261406ee89bda46a4\" successfully" Dec 13 13:34:19.598351 containerd[1460]: time="2024-12-13T13:34:19.598345441Z" level=info msg="StopPodSandbox for \"fd4a3ec4455a4558ba5951ceb7f7c3f3eeeab9ca964d38b261406ee89bda46a4\" returns successfully" Dec 13 13:34:19.598599 containerd[1460]: time="2024-12-13T13:34:19.598579551Z" level=info msg="StopPodSandbox for \"3096d2cf061035900198ce891b4e5cbd737796b92099b44cfd3fb9172cdbac54\"" Dec 13 13:34:19.598674 containerd[1460]: time="2024-12-13T13:34:19.598656296Z" level=info msg="TearDown network for sandbox 
\"3096d2cf061035900198ce891b4e5cbd737796b92099b44cfd3fb9172cdbac54\" successfully" Dec 13 13:34:19.598674 containerd[1460]: time="2024-12-13T13:34:19.598670593Z" level=info msg="StopPodSandbox for \"3096d2cf061035900198ce891b4e5cbd737796b92099b44cfd3fb9172cdbac54\" returns successfully" Dec 13 13:34:19.598773 containerd[1460]: time="2024-12-13T13:34:19.598759299Z" level=info msg="StopPodSandbox for \"ea5cf49efc7cd6bceaf869c25a1e0ee305ee2c5cb1307feaaffb986d0540529c\"" Dec 13 13:34:19.598945 containerd[1460]: time="2024-12-13T13:34:19.598918398Z" level=info msg="Ensure that sandbox ea5cf49efc7cd6bceaf869c25a1e0ee305ee2c5cb1307feaaffb986d0540529c in task-service has been cleanup successfully" Dec 13 13:34:19.599102 containerd[1460]: time="2024-12-13T13:34:19.599086263Z" level=info msg="TearDown network for sandbox \"ea5cf49efc7cd6bceaf869c25a1e0ee305ee2c5cb1307feaaffb986d0540529c\" successfully" Dec 13 13:34:19.599411 containerd[1460]: time="2024-12-13T13:34:19.599102654Z" level=info msg="StopPodSandbox for \"ea5cf49efc7cd6bceaf869c25a1e0ee305ee2c5cb1307feaaffb986d0540529c\" returns successfully" Dec 13 13:34:19.599411 containerd[1460]: time="2024-12-13T13:34:19.599264038Z" level=info msg="StopPodSandbox for \"3d6b3dd240a00c2a00a729b97087fb6f836a068b396113e925387d9d8ae54c7f\"" Dec 13 13:34:19.599411 containerd[1460]: time="2024-12-13T13:34:19.599353185Z" level=info msg="TearDown network for sandbox \"3d6b3dd240a00c2a00a729b97087fb6f836a068b396113e925387d9d8ae54c7f\" successfully" Dec 13 13:34:19.599411 containerd[1460]: time="2024-12-13T13:34:19.599363284Z" level=info msg="StopPodSandbox for \"3d6b3dd240a00c2a00a729b97087fb6f836a068b396113e925387d9d8ae54c7f\" returns successfully" Dec 13 13:34:19.599411 containerd[1460]: time="2024-12-13T13:34:19.599384734Z" level=info msg="StopPodSandbox for \"03f3d3da32a6bad9575866ecc071f7db7e2ef21eb15c13b5d3ccc4981ea3a6b8\"" Dec 13 13:34:19.599533 containerd[1460]: time="2024-12-13T13:34:19.599454806Z" level=info msg="TearDown network for sandbox \"03f3d3da32a6bad9575866ecc071f7db7e2ef21eb15c13b5d3ccc4981ea3a6b8\" successfully" Dec 13 13:34:19.599533 containerd[1460]: time="2024-12-13T13:34:19.599463312Z" level=info msg="StopPodSandbox for \"03f3d3da32a6bad9575866ecc071f7db7e2ef21eb15c13b5d3ccc4981ea3a6b8\" returns successfully" Dec 13 13:34:19.599674 containerd[1460]: time="2024-12-13T13:34:19.599656465Z" level=info msg="StopPodSandbox for \"3471811395b1ff1780abbbc92eeb9a683464d2b59eb4b36f7ca30acd080ae9c4\"" Dec 13 13:34:19.599770 containerd[1460]: time="2024-12-13T13:34:19.599723501Z" level=info msg="TearDown network for sandbox \"3471811395b1ff1780abbbc92eeb9a683464d2b59eb4b36f7ca30acd080ae9c4\" successfully" Dec 13 13:34:19.599770 containerd[1460]: time="2024-12-13T13:34:19.599735203Z" level=info msg="StopPodSandbox for \"3471811395b1ff1780abbbc92eeb9a683464d2b59eb4b36f7ca30acd080ae9c4\" returns successfully" Dec 13 13:34:19.599963 containerd[1460]: time="2024-12-13T13:34:19.599831815Z" level=info msg="StopPodSandbox for \"e93b7769b9d88742aaf6059922f8c6c4a26e010f6ef3858da386188108ad8af8\"" Dec 13 13:34:19.599963 containerd[1460]: time="2024-12-13T13:34:19.599899532Z" level=info msg="TearDown network for sandbox \"e93b7769b9d88742aaf6059922f8c6c4a26e010f6ef3858da386188108ad8af8\" successfully" Dec 13 13:34:19.599963 containerd[1460]: time="2024-12-13T13:34:19.599907848Z" level=info msg="StopPodSandbox for \"e93b7769b9d88742aaf6059922f8c6c4a26e010f6ef3858da386188108ad8af8\" returns successfully" Dec 13 13:34:19.600822 containerd[1460]: 
time="2024-12-13T13:34:19.600416594Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-apiserver-7d87f7746b-mqntw,Uid:cadc3599-3084-49e0-99bc-626d4d423dd6,Namespace:calico-apiserver,Attempt:5,}" Dec 13 13:34:19.600822 containerd[1460]: time="2024-12-13T13:34:19.600420902Z" level=info msg="StopPodSandbox for \"8e04eb66310a3917ffdcbb002bcd66f739fb7b5c6e195d72921f7f71735c6ad9\"" Dec 13 13:34:19.600822 containerd[1460]: time="2024-12-13T13:34:19.600632710Z" level=info msg="TearDown network for sandbox \"8e04eb66310a3917ffdcbb002bcd66f739fb7b5c6e195d72921f7f71735c6ad9\" successfully" Dec 13 13:34:19.600822 containerd[1460]: time="2024-12-13T13:34:19.600641968Z" level=info msg="StopPodSandbox for \"8e04eb66310a3917ffdcbb002bcd66f739fb7b5c6e195d72921f7f71735c6ad9\" returns successfully" Dec 13 13:34:19.600936 containerd[1460]: time="2024-12-13T13:34:19.600832164Z" level=info msg="StopPodSandbox for \"21489d44f97457445adcc907a914ad67e96a4016825cb60915c67d35d31dba11\"" Dec 13 13:34:19.601001 containerd[1460]: time="2024-12-13T13:34:19.600964133Z" level=info msg="TearDown network for sandbox \"21489d44f97457445adcc907a914ad67e96a4016825cb60915c67d35d31dba11\" successfully" Dec 13 13:34:19.601001 containerd[1460]: time="2024-12-13T13:34:19.600984792Z" level=info msg="StopPodSandbox for \"21489d44f97457445adcc907a914ad67e96a4016825cb60915c67d35d31dba11\" returns successfully" Dec 13 13:34:19.601390 kubelet[2640]: E1213 13:34:19.601349 2640 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Dec 13 13:34:19.601929 containerd[1460]: time="2024-12-13T13:34:19.601802979Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-7db6d8ff4d-tdhb9,Uid:bd2088f6-f886-4664-af58-213044237f3c,Namespace:kube-system,Attempt:5,}" Dec 13 13:34:19.602286 kubelet[2640]: I1213 13:34:19.602254 2640 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="ba6f8affd16e998d2e69c27c01590791060c1de1a38eb975939e57a9835e83db" Dec 13 13:34:19.603379 containerd[1460]: time="2024-12-13T13:34:19.602982526Z" level=info msg="StopPodSandbox for \"ba6f8affd16e998d2e69c27c01590791060c1de1a38eb975939e57a9835e83db\"" Dec 13 13:34:19.603591 containerd[1460]: time="2024-12-13T13:34:19.603519946Z" level=info msg="Ensure that sandbox ba6f8affd16e998d2e69c27c01590791060c1de1a38eb975939e57a9835e83db in task-service has been cleanup successfully" Dec 13 13:34:19.603757 containerd[1460]: time="2024-12-13T13:34:19.603720773Z" level=info msg="TearDown network for sandbox \"ba6f8affd16e998d2e69c27c01590791060c1de1a38eb975939e57a9835e83db\" successfully" Dec 13 13:34:19.603757 containerd[1460]: time="2024-12-13T13:34:19.603738106Z" level=info msg="StopPodSandbox for \"ba6f8affd16e998d2e69c27c01590791060c1de1a38eb975939e57a9835e83db\" returns successfully" Dec 13 13:34:19.604157 containerd[1460]: time="2024-12-13T13:34:19.604131956Z" level=info msg="StopPodSandbox for \"e3e675aabe036a2d2aa1e1782dcf65d1cb1d86f83cf782f33096729bc19652d6\"" Dec 13 13:34:19.604299 containerd[1460]: time="2024-12-13T13:34:19.604209631Z" level=info msg="TearDown network for sandbox \"e3e675aabe036a2d2aa1e1782dcf65d1cb1d86f83cf782f33096729bc19652d6\" successfully" Dec 13 13:34:19.604299 containerd[1460]: time="2024-12-13T13:34:19.604223117Z" level=info msg="StopPodSandbox for \"e3e675aabe036a2d2aa1e1782dcf65d1cb1d86f83cf782f33096729bc19652d6\" returns successfully" Dec 13 13:34:19.604572 containerd[1460]: 
time="2024-12-13T13:34:19.604474389Z" level=info msg="StopPodSandbox for \"09abfaaccf32e858d49fc89ee268bc73ea6fb80d4d0a90443906199a1d27306d\"" Dec 13 13:34:19.604572 containerd[1460]: time="2024-12-13T13:34:19.604547207Z" level=info msg="TearDown network for sandbox \"09abfaaccf32e858d49fc89ee268bc73ea6fb80d4d0a90443906199a1d27306d\" successfully" Dec 13 13:34:19.604572 containerd[1460]: time="2024-12-13T13:34:19.604556654Z" level=info msg="StopPodSandbox for \"09abfaaccf32e858d49fc89ee268bc73ea6fb80d4d0a90443906199a1d27306d\" returns successfully" Dec 13 13:34:19.605088 containerd[1460]: time="2024-12-13T13:34:19.605023341Z" level=info msg="StopPodSandbox for \"706c13db8235334bc38488f0490b1867ec134804b2b05c3d93c69a593fd16e13\"" Dec 13 13:34:19.605127 containerd[1460]: time="2024-12-13T13:34:19.605095838Z" level=info msg="TearDown network for sandbox \"706c13db8235334bc38488f0490b1867ec134804b2b05c3d93c69a593fd16e13\" successfully" Dec 13 13:34:19.605127 containerd[1460]: time="2024-12-13T13:34:19.605105686Z" level=info msg="StopPodSandbox for \"706c13db8235334bc38488f0490b1867ec134804b2b05c3d93c69a593fd16e13\" returns successfully" Dec 13 13:34:19.605458 containerd[1460]: time="2024-12-13T13:34:19.605371335Z" level=info msg="StopPodSandbox for \"558af8a45bc01399591427b598962160a612f4630f01ffcfbcb0b3a90050035e\"" Dec 13 13:34:19.605513 containerd[1460]: time="2024-12-13T13:34:19.605473247Z" level=info msg="TearDown network for sandbox \"558af8a45bc01399591427b598962160a612f4630f01ffcfbcb0b3a90050035e\" successfully" Dec 13 13:34:19.605513 containerd[1460]: time="2024-12-13T13:34:19.605483546Z" level=info msg="StopPodSandbox for \"558af8a45bc01399591427b598962160a612f4630f01ffcfbcb0b3a90050035e\" returns successfully" Dec 13 13:34:19.605766 kubelet[2640]: E1213 13:34:19.605658 2640 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Dec 13 13:34:19.605925 kubelet[2640]: I1213 13:34:19.605896 2640 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="6a2a4f3c81b7850246110bb3a39e21303a213d976a8f97927fa282ec47803ddb" Dec 13 13:34:19.605959 containerd[1460]: time="2024-12-13T13:34:19.605900439Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-7db6d8ff4d-n488k,Uid:4efa0db1-4649-47cd-847b-b2cd3ddad9b5,Namespace:kube-system,Attempt:5,}" Dec 13 13:34:19.606468 containerd[1460]: time="2024-12-13T13:34:19.606445603Z" level=info msg="StopPodSandbox for \"6a2a4f3c81b7850246110bb3a39e21303a213d976a8f97927fa282ec47803ddb\"" Dec 13 13:34:19.606624 containerd[1460]: time="2024-12-13T13:34:19.606603380Z" level=info msg="Ensure that sandbox 6a2a4f3c81b7850246110bb3a39e21303a213d976a8f97927fa282ec47803ddb in task-service has been cleanup successfully" Dec 13 13:34:19.608433 containerd[1460]: time="2024-12-13T13:34:19.608396620Z" level=info msg="TearDown network for sandbox \"6a2a4f3c81b7850246110bb3a39e21303a213d976a8f97927fa282ec47803ddb\" successfully" Dec 13 13:34:19.608433 containerd[1460]: time="2024-12-13T13:34:19.608429452Z" level=info msg="StopPodSandbox for \"6a2a4f3c81b7850246110bb3a39e21303a213d976a8f97927fa282ec47803ddb\" returns successfully" Dec 13 13:34:19.609032 containerd[1460]: time="2024-12-13T13:34:19.608766756Z" level=info msg="StopPodSandbox for \"c6522ad2e1a3375e607b651b292cf0023738e3078af5ef59656343f350a5c5f4\"" Dec 13 13:34:19.609032 containerd[1460]: time="2024-12-13T13:34:19.608857537Z" level=info msg="TearDown 
network for sandbox \"c6522ad2e1a3375e607b651b292cf0023738e3078af5ef59656343f350a5c5f4\" successfully" Dec 13 13:34:19.609032 containerd[1460]: time="2024-12-13T13:34:19.608867305Z" level=info msg="StopPodSandbox for \"c6522ad2e1a3375e607b651b292cf0023738e3078af5ef59656343f350a5c5f4\" returns successfully" Dec 13 13:34:19.609406 containerd[1460]: time="2024-12-13T13:34:19.609380049Z" level=info msg="StopPodSandbox for \"41522410dfe8d457521fb9fe296f7b8dae420fe4186ddf5966e8b8e3c6ce7668\"" Dec 13 13:34:19.609496 containerd[1460]: time="2024-12-13T13:34:19.609471620Z" level=info msg="TearDown network for sandbox \"41522410dfe8d457521fb9fe296f7b8dae420fe4186ddf5966e8b8e3c6ce7668\" successfully" Dec 13 13:34:19.609496 containerd[1460]: time="2024-12-13T13:34:19.609490305Z" level=info msg="StopPodSandbox for \"41522410dfe8d457521fb9fe296f7b8dae420fe4186ddf5966e8b8e3c6ce7668\" returns successfully" Dec 13 13:34:19.610211 containerd[1460]: time="2024-12-13T13:34:19.610171085Z" level=info msg="StopPodSandbox for \"043d5f360472a5cf84f2e7fe5d34c855f1c1466ecd61a4548840ff0da56c473d\"" Dec 13 13:34:19.610284 containerd[1460]: time="2024-12-13T13:34:19.610251286Z" level=info msg="TearDown network for sandbox \"043d5f360472a5cf84f2e7fe5d34c855f1c1466ecd61a4548840ff0da56c473d\" successfully" Dec 13 13:34:19.610284 containerd[1460]: time="2024-12-13T13:34:19.610278487Z" level=info msg="StopPodSandbox for \"043d5f360472a5cf84f2e7fe5d34c855f1c1466ecd61a4548840ff0da56c473d\" returns successfully" Dec 13 13:34:19.610553 containerd[1460]: time="2024-12-13T13:34:19.610529158Z" level=info msg="StopPodSandbox for \"c62e1d6a49358b1088df68486d06de5d565abc5cb224cecb08a72fc49527b818\"" Dec 13 13:34:19.610660 containerd[1460]: time="2024-12-13T13:34:19.610617794Z" level=info msg="TearDown network for sandbox \"c62e1d6a49358b1088df68486d06de5d565abc5cb224cecb08a72fc49527b818\" successfully" Dec 13 13:34:19.610660 containerd[1460]: time="2024-12-13T13:34:19.610632542Z" level=info msg="StopPodSandbox for \"c62e1d6a49358b1088df68486d06de5d565abc5cb224cecb08a72fc49527b818\" returns successfully" Dec 13 13:34:19.611388 containerd[1460]: time="2024-12-13T13:34:19.611288064Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-kube-controllers-55f74d585b-gkp22,Uid:bcb02bd3-79c5-4f97-892a-aafa3090dcbe,Namespace:calico-system,Attempt:5,}" Dec 13 13:34:19.954592 systemd[1]: run-netns-cni\x2d4fa93b5c\x2de54a\x2d6e2f\x2d6f02\x2dee085858d843.mount: Deactivated successfully. Dec 13 13:34:19.955048 systemd[1]: run-containerd-io.containerd.grpc.v1.cri-sandboxes-6a2a4f3c81b7850246110bb3a39e21303a213d976a8f97927fa282ec47803ddb-shm.mount: Deactivated successfully. Dec 13 13:34:19.955135 systemd[1]: run-netns-cni\x2d81cf8fa5\x2d7b06\x2d4ff0\x2d7ff9\x2d9bc07c01ddb6.mount: Deactivated successfully. Dec 13 13:34:19.955208 systemd[1]: run-containerd-io.containerd.grpc.v1.cri-sandboxes-ba6f8affd16e998d2e69c27c01590791060c1de1a38eb975939e57a9835e83db-shm.mount: Deactivated successfully. Dec 13 13:34:20.094235 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount3955561822.mount: Deactivated successfully. Dec 13 13:34:20.177648 systemd[1]: Started sshd@9-10.0.0.150:22-10.0.0.1:45718.service - OpenSSH per-connection server daemon (10.0.0.1:45718). 
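The systemd messages just above are the unmount side of the same cleanup: each failed sandbox leaves a network-namespace bind mount under /run/netns/ (the cni\x2d... units; systemd escapes '-' as \x2d in unit names) and a per-sandbox shm mount under containerd's CRI state directory. A small stdlib sketch that lists such leftovers from /proc/self/mounts (mount-point patterns inferred from the unit names above; illustrative only):

package main

import (
	"bufio"
	"fmt"
	"os"
	"strings"
)

func main() {
	f, err := os.Open("/proc/self/mounts")
	if err != nil {
		fmt.Fprintln(os.Stderr, err)
		os.Exit(1)
	}
	defer f.Close()

	sc := bufio.NewScanner(f)
	for sc.Scan() {
		fields := strings.Fields(sc.Text())
		if len(fields) < 2 {
			continue
		}
		mountPoint := fields[1]
		// CNI netns bind mounts and per-sandbox shm mounts, matching the
		// run-netns-cni... and ...-shm.mount units systemd deactivates.
		if strings.HasPrefix(mountPoint, "/run/netns/cni-") ||
			(strings.Contains(mountPoint, "/io.containerd.grpc.v1.cri/sandboxes/") && strings.HasSuffix(mountPoint, "/shm")) {
			fmt.Println(mountPoint)
		}
	}
	if err := sc.Err(); err != nil {
		fmt.Fprintln(os.Stderr, err)
	}
}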
Dec 13 13:34:20.244974 sshd[4462]: Accepted publickey for core from 10.0.0.1 port 45718 ssh2: RSA SHA256:dZtUZWIHZCqWPvvtEO8QUYeHUprV5e9zMs000ybProI
Dec 13 13:34:20.246517 sshd-session[4462]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Dec 13 13:34:20.250609 systemd-logind[1445]: New session 10 of user core.
Dec 13 13:34:20.258449 systemd[1]: Started session-10.scope - Session 10 of User core.
Dec 13 13:34:20.439731 sshd[4464]: Connection closed by 10.0.0.1 port 45718
Dec 13 13:34:20.440059 sshd-session[4462]: pam_unix(sshd:session): session closed for user core
Dec 13 13:34:20.444395 systemd[1]: sshd@9-10.0.0.150:22-10.0.0.1:45718.service: Deactivated successfully.
Dec 13 13:34:20.446524 systemd[1]: session-10.scope: Deactivated successfully.
Dec 13 13:34:20.447203 systemd-logind[1445]: Session 10 logged out. Waiting for processes to exit.
Dec 13 13:34:20.448093 systemd-logind[1445]: Removed session 10.
Dec 13 13:34:20.673110 containerd[1460]: time="2024-12-13T13:34:20.673066009Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/node:v3.29.1\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Dec 13 13:34:20.692382 containerd[1460]: time="2024-12-13T13:34:20.692328895Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/node:v3.29.1: active requests=0, bytes read=142742010"
Dec 13 13:34:20.700097 containerd[1460]: time="2024-12-13T13:34:20.700060553Z" level=info msg="ImageCreate event name:\"sha256:feb26d4585d68e875d9bd9bd6c27ea9f2d5c9ed9ef70f8b8cb0ebb0559a1d664\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Dec 13 13:34:20.758679 containerd[1460]: time="2024-12-13T13:34:20.757938949Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/node@sha256:99c3917516efe1f807a0cfdf2d14b628b7c5cc6bd8a9ee5a253154f31756bea1\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Dec 13 13:34:20.758679 containerd[1460]: time="2024-12-13T13:34:20.758549727Z" level=info msg="Pulled image \"ghcr.io/flatcar/calico/node:v3.29.1\" with image id \"sha256:feb26d4585d68e875d9bd9bd6c27ea9f2d5c9ed9ef70f8b8cb0ebb0559a1d664\", repo tag \"ghcr.io/flatcar/calico/node:v3.29.1\", repo digest \"ghcr.io/flatcar/calico/node@sha256:99c3917516efe1f807a0cfdf2d14b628b7c5cc6bd8a9ee5a253154f31756bea1\", size \"142741872\" in 5.238757369s"
Dec 13 13:34:20.758679 containerd[1460]: time="2024-12-13T13:34:20.758580887Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/node:v3.29.1\" returns image reference \"sha256:feb26d4585d68e875d9bd9bd6c27ea9f2d5c9ed9ef70f8b8cb0ebb0559a1d664\""
Dec 13 13:34:20.771015 containerd[1460]: time="2024-12-13T13:34:20.770963444Z" level=info msg="CreateContainer within sandbox \"8ee57868a7f34a07725621e7dd98ba063377757c6ade474fdbba29a39ceb1332\" for container &ContainerMetadata{Name:calico-node,Attempt:0,}"
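For scale, the pull above reports bytes read=142742010 over 5.238757369s, i.e. roughly 136 MiB at about 26 MiB/s. The arithmetic as a tiny Go program:

package main

import (
	"fmt"
	"time"
)

func main() {
	// Figures taken from the pull messages above.
	const bytesRead = 142742010
	dur := 5238757369 * time.Nanosecond // "in 5.238757369s"

	mib := float64(bytesRead) / (1024 * 1024)
	fmt.Printf("pulled %.1f MiB in %s (%.1f MiB/s)\n", mib, dur, mib/dur.Seconds())
}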
Dec 13 13:34:20.799447 containerd[1460]: time="2024-12-13T13:34:20.799394657Z" level=error msg="Failed to destroy network for sandbox \"9c3a341d124d9379d933702cd59eae0a502abb56529e8f56106691cfdd5210df\"" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/"
Dec 13 13:34:20.801392 containerd[1460]: time="2024-12-13T13:34:20.801360792Z" level=error msg="Failed to destroy network for sandbox \"a51067dfa3319463394d4929f4d513a74dfa510e4dddd7a818017a13ba006fa2\"" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/"
Dec 13 13:34:20.811689 containerd[1460]: time="2024-12-13T13:34:20.811630900Z" level=error msg="encountered an error cleaning up failed sandbox \"a51067dfa3319463394d4929f4d513a74dfa510e4dddd7a818017a13ba006fa2\", marking sandbox state as SANDBOX_UNKNOWN" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/"
Dec 13 13:34:20.811757 containerd[1460]: time="2024-12-13T13:34:20.811712252Z" level=error msg="RunPodSandbox for &PodSandboxMetadata{Name:csi-node-driver-h5jzv,Uid:70af0792-807b-45ba-8d22-96d81d38b5e7,Namespace:calico-system,Attempt:3,} failed, error" error="failed to setup network for sandbox \"a51067dfa3319463394d4929f4d513a74dfa510e4dddd7a818017a13ba006fa2\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/"
Dec 13 13:34:20.812006 kubelet[2640]: E1213 13:34:20.811968 2640 remote_runtime.go:193] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"a51067dfa3319463394d4929f4d513a74dfa510e4dddd7a818017a13ba006fa2\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/"
Dec 13 13:34:20.812396 containerd[1460]: time="2024-12-13T13:34:20.812014671Z" level=error msg="encountered an error cleaning up failed sandbox \"9c3a341d124d9379d933702cd59eae0a502abb56529e8f56106691cfdd5210df\", marking sandbox state as SANDBOX_UNKNOWN" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/"
Dec 13 13:34:20.812396 containerd[1460]: time="2024-12-13T13:34:20.812049376Z" level=error msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-apiserver-7d87f7746b-mqntw,Uid:cadc3599-3084-49e0-99bc-626d4d423dd6,Namespace:calico-apiserver,Attempt:5,} failed, error" error="failed to setup network for sandbox \"9c3a341d124d9379d933702cd59eae0a502abb56529e8f56106691cfdd5210df\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/"
Dec 13 13:34:20.813076 kubelet[2640]: E1213 13:34:20.812769 2640 kuberuntime_sandbox.go:72] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"a51067dfa3319463394d4929f4d513a74dfa510e4dddd7a818017a13ba006fa2\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-system/csi-node-driver-h5jzv"
Dec 13 13:34:20.813076 kubelet[2640]: E1213 13:34:20.812801 2640 kuberuntime_manager.go:1166] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"a51067dfa3319463394d4929f4d513a74dfa510e4dddd7a818017a13ba006fa2\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-system/csi-node-driver-h5jzv"
Dec 13 13:34:20.813076 kubelet[2640]: E1213 13:34:20.812275 2640 remote_runtime.go:193] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"9c3a341d124d9379d933702cd59eae0a502abb56529e8f56106691cfdd5210df\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/"
Dec 13 13:34:20.813176 kubelet[2640]: E1213 13:34:20.812849 2640 pod_workers.go:1298] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"csi-node-driver-h5jzv_calico-system(70af0792-807b-45ba-8d22-96d81d38b5e7)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"csi-node-driver-h5jzv_calico-system(70af0792-807b-45ba-8d22-96d81d38b5e7)\\\": rpc error: code = Unknown desc = failed to setup network for sandbox \\\"a51067dfa3319463394d4929f4d513a74dfa510e4dddd7a818017a13ba006fa2\\\": plugin type=\\\"calico\\\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="calico-system/csi-node-driver-h5jzv" podUID="70af0792-807b-45ba-8d22-96d81d38b5e7"
Dec 13 13:34:20.813176 kubelet[2640]: E1213 13:34:20.812877 2640 kuberuntime_sandbox.go:72] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"9c3a341d124d9379d933702cd59eae0a502abb56529e8f56106691cfdd5210df\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-apiserver/calico-apiserver-7d87f7746b-mqntw"
Dec 13 13:34:20.813176 kubelet[2640]: E1213 13:34:20.812941 2640 kuberuntime_manager.go:1166] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"9c3a341d124d9379d933702cd59eae0a502abb56529e8f56106691cfdd5210df\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-apiserver/calico-apiserver-7d87f7746b-mqntw"
Dec 13 13:34:20.813282 kubelet[2640]: E1213 13:34:20.813027 2640 pod_workers.go:1298] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"calico-apiserver-7d87f7746b-mqntw_calico-apiserver(cadc3599-3084-49e0-99bc-626d4d423dd6)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"calico-apiserver-7d87f7746b-mqntw_calico-apiserver(cadc3599-3084-49e0-99bc-626d4d423dd6)\\\": rpc error: code = Unknown desc = failed to setup network for sandbox \\\"9c3a341d124d9379d933702cd59eae0a502abb56529e8f56106691cfdd5210df\\\": plugin type=\\\"calico\\\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="calico-apiserver/calico-apiserver-7d87f7746b-mqntw" podUID="cadc3599-3084-49e0-99bc-626d4d423dd6"
Dec 13 13:34:20.813877 containerd[1460]: time="2024-12-13T13:34:20.813834019Z" level=error msg="Failed to destroy network for sandbox \"9dc7abc46ced9ea2f1fa53d72a6c4814f3579022ef445d58cd7ed3e5773800b2\"" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/"
Dec 13 13:34:20.814253 containerd[1460]: time="2024-12-13T13:34:20.814211099Z" level=error msg="encountered an error cleaning up failed sandbox \"9dc7abc46ced9ea2f1fa53d72a6c4814f3579022ef445d58cd7ed3e5773800b2\", marking sandbox state as SANDBOX_UNKNOWN" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/"
Dec 13 13:34:20.814299 containerd[1460]: time="2024-12-13T13:34:20.814263817Z" level=error msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-apiserver-7d87f7746b-flf6n,Uid:5ba12381-f554-4e4f-8ceb-405dc070dc9a,Namespace:calico-apiserver,Attempt:5,} failed, error" error="failed to setup network for sandbox \"9dc7abc46ced9ea2f1fa53d72a6c4814f3579022ef445d58cd7ed3e5773800b2\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/"
Dec 13 13:34:20.814453 kubelet[2640]: E1213 13:34:20.814419 2640 remote_runtime.go:193] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"9dc7abc46ced9ea2f1fa53d72a6c4814f3579022ef445d58cd7ed3e5773800b2\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/"
Dec 13 13:34:20.814489 kubelet[2640]: E1213 13:34:20.814458 2640 kuberuntime_sandbox.go:72] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"9dc7abc46ced9ea2f1fa53d72a6c4814f3579022ef445d58cd7ed3e5773800b2\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-apiserver/calico-apiserver-7d87f7746b-flf6n"
Dec 13 13:34:20.814489 kubelet[2640]: E1213 13:34:20.814475 2640 kuberuntime_manager.go:1166] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"9dc7abc46ced9ea2f1fa53d72a6c4814f3579022ef445d58cd7ed3e5773800b2\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-apiserver/calico-apiserver-7d87f7746b-flf6n"
Dec 13 13:34:20.814916 kubelet[2640]: E1213 13:34:20.814503 2640 pod_workers.go:1298] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"calico-apiserver-7d87f7746b-flf6n_calico-apiserver(5ba12381-f554-4e4f-8ceb-405dc070dc9a)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"calico-apiserver-7d87f7746b-flf6n_calico-apiserver(5ba12381-f554-4e4f-8ceb-405dc070dc9a)\\\": rpc error: code = Unknown desc = failed to setup network for sandbox \\\"9dc7abc46ced9ea2f1fa53d72a6c4814f3579022ef445d58cd7ed3e5773800b2\\\": plugin type=\\\"calico\\\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="calico-apiserver/calico-apiserver-7d87f7746b-flf6n" podUID="5ba12381-f554-4e4f-8ceb-405dc070dc9a"
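Note how a single CNI failure surfaces several times per attempt as it climbs kubelet's layers (remote_runtime.go, kuberuntime_sandbox.go, kuberuntime_manager.go, pod_workers.go), each layer re-wrapping the same message. A hedged sketch of that wrap-and-inspect pattern using Go error wrapping (not kubelet's actual code):

package main

import (
	"errors"
	"fmt"
)

var errCNI = errors.New(`stat /var/lib/calico/nodename: no such file or directory`)

// runPodSandbox stands in for the remote_runtime layer.
func runPodSandbox() error {
	return fmt.Errorf("rpc error: code = Unknown desc = failed to setup network for sandbox: %w", errCNI)
}

// createSandbox stands in for the kuberuntime layer, which re-wraps.
func createSandbox(pod string) error {
	if err := runPodSandbox(); err != nil {
		return fmt.Errorf("failed to %q for %q: %w", "CreatePodSandbox", pod, err)
	}
	return nil
}

func main() {
	err := createSandbox("calico-apiserver-7d87f7746b-flf6n")
	fmt.Println("Error syncing pod, skipping:", err) // pod_workers layer
	// The root cause stays programmatically reachable through the wraps:
	fmt.Println("is CNI nodename error:", errors.Is(err, errCNI))
}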
/var/lib/calico/" Dec 13 13:34:20.821128 containerd[1460]: time="2024-12-13T13:34:20.821094413Z" level=error msg="Failed to destroy network for sandbox \"7c57ef222078b00fb1358031d13913b076e7f887d430518e050f8ecbe0260ca9\"" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Dec 13 13:34:20.821620 containerd[1460]: time="2024-12-13T13:34:20.821594391Z" level=error msg="encountered an error cleaning up failed sandbox \"aca8c409ba4808458cd390c815351d323f7da4bd417c1bcb075eb4cf65179d20\", marking sandbox state as SANDBOX_UNKNOWN" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Dec 13 13:34:20.821697 containerd[1460]: time="2024-12-13T13:34:20.821618206Z" level=error msg="encountered an error cleaning up failed sandbox \"7c57ef222078b00fb1358031d13913b076e7f887d430518e050f8ecbe0260ca9\", marking sandbox state as SANDBOX_UNKNOWN" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Dec 13 13:34:20.821697 containerd[1460]: time="2024-12-13T13:34:20.821647341Z" level=error msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-7db6d8ff4d-n488k,Uid:4efa0db1-4649-47cd-847b-b2cd3ddad9b5,Namespace:kube-system,Attempt:5,} failed, error" error="failed to setup network for sandbox \"aca8c409ba4808458cd390c815351d323f7da4bd417c1bcb075eb4cf65179d20\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Dec 13 13:34:20.821697 containerd[1460]: time="2024-12-13T13:34:20.821655287Z" level=error msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-7db6d8ff4d-tdhb9,Uid:bd2088f6-f886-4664-af58-213044237f3c,Namespace:kube-system,Attempt:5,} failed, error" error="failed to setup network for sandbox \"7c57ef222078b00fb1358031d13913b076e7f887d430518e050f8ecbe0260ca9\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Dec 13 13:34:20.821903 kubelet[2640]: E1213 13:34:20.821834 2640 remote_runtime.go:193] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"7c57ef222078b00fb1358031d13913b076e7f887d430518e050f8ecbe0260ca9\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Dec 13 13:34:20.821947 kubelet[2640]: E1213 13:34:20.821872 2640 remote_runtime.go:193] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"aca8c409ba4808458cd390c815351d323f7da4bd417c1bcb075eb4cf65179d20\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Dec 13 13:34:20.821974 kubelet[2640]: E1213 13:34:20.821944 2640 kuberuntime_sandbox.go:72] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to setup network for sandbox 
\"aca8c409ba4808458cd390c815351d323f7da4bd417c1bcb075eb4cf65179d20\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="kube-system/coredns-7db6d8ff4d-n488k" Dec 13 13:34:20.821974 kubelet[2640]: E1213 13:34:20.821952 2640 kuberuntime_sandbox.go:72] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"7c57ef222078b00fb1358031d13913b076e7f887d430518e050f8ecbe0260ca9\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="kube-system/coredns-7db6d8ff4d-tdhb9" Dec 13 13:34:20.821974 kubelet[2640]: E1213 13:34:20.821966 2640 kuberuntime_manager.go:1166] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"aca8c409ba4808458cd390c815351d323f7da4bd417c1bcb075eb4cf65179d20\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="kube-system/coredns-7db6d8ff4d-n488k" Dec 13 13:34:20.822049 kubelet[2640]: E1213 13:34:20.821973 2640 kuberuntime_manager.go:1166] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"7c57ef222078b00fb1358031d13913b076e7f887d430518e050f8ecbe0260ca9\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="kube-system/coredns-7db6d8ff4d-tdhb9" Dec 13 13:34:20.822049 kubelet[2640]: E1213 13:34:20.822022 2640 pod_workers.go:1298] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"coredns-7db6d8ff4d-n488k_kube-system(4efa0db1-4649-47cd-847b-b2cd3ddad9b5)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"coredns-7db6d8ff4d-n488k_kube-system(4efa0db1-4649-47cd-847b-b2cd3ddad9b5)\\\": rpc error: code = Unknown desc = failed to setup network for sandbox \\\"aca8c409ba4808458cd390c815351d323f7da4bd417c1bcb075eb4cf65179d20\\\": plugin type=\\\"calico\\\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="kube-system/coredns-7db6d8ff4d-n488k" podUID="4efa0db1-4649-47cd-847b-b2cd3ddad9b5" Dec 13 13:34:20.822123 kubelet[2640]: E1213 13:34:20.822065 2640 pod_workers.go:1298] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"coredns-7db6d8ff4d-tdhb9_kube-system(bd2088f6-f886-4664-af58-213044237f3c)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"coredns-7db6d8ff4d-tdhb9_kube-system(bd2088f6-f886-4664-af58-213044237f3c)\\\": rpc error: code = Unknown desc = failed to setup network for sandbox \\\"7c57ef222078b00fb1358031d13913b076e7f887d430518e050f8ecbe0260ca9\\\": plugin type=\\\"calico\\\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="kube-system/coredns-7db6d8ff4d-tdhb9" podUID="bd2088f6-f886-4664-af58-213044237f3c" Dec 13 13:34:20.833330 containerd[1460]: time="2024-12-13T13:34:20.833270702Z" level=error msg="Failed to destroy network for sandbox 
\"c361cf0d45af987326726b600a805f9395b6a442483d81b3ec5659ec3ac4bb62\"" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Dec 13 13:34:20.833685 containerd[1460]: time="2024-12-13T13:34:20.833660335Z" level=error msg="encountered an error cleaning up failed sandbox \"c361cf0d45af987326726b600a805f9395b6a442483d81b3ec5659ec3ac4bb62\", marking sandbox state as SANDBOX_UNKNOWN" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Dec 13 13:34:20.833729 containerd[1460]: time="2024-12-13T13:34:20.833709837Z" level=error msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-kube-controllers-55f74d585b-gkp22,Uid:bcb02bd3-79c5-4f97-892a-aafa3090dcbe,Namespace:calico-system,Attempt:5,} failed, error" error="failed to setup network for sandbox \"c361cf0d45af987326726b600a805f9395b6a442483d81b3ec5659ec3ac4bb62\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Dec 13 13:34:20.833934 kubelet[2640]: E1213 13:34:20.833909 2640 remote_runtime.go:193] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"c361cf0d45af987326726b600a805f9395b6a442483d81b3ec5659ec3ac4bb62\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Dec 13 13:34:20.833969 kubelet[2640]: E1213 13:34:20.833945 2640 kuberuntime_sandbox.go:72] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"c361cf0d45af987326726b600a805f9395b6a442483d81b3ec5659ec3ac4bb62\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-system/calico-kube-controllers-55f74d585b-gkp22" Dec 13 13:34:20.833969 kubelet[2640]: E1213 13:34:20.833963 2640 kuberuntime_manager.go:1166] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"c361cf0d45af987326726b600a805f9395b6a442483d81b3ec5659ec3ac4bb62\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-system/calico-kube-controllers-55f74d585b-gkp22" Dec 13 13:34:20.834027 kubelet[2640]: E1213 13:34:20.834000 2640 pod_workers.go:1298] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"calico-kube-controllers-55f74d585b-gkp22_calico-system(bcb02bd3-79c5-4f97-892a-aafa3090dcbe)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"calico-kube-controllers-55f74d585b-gkp22_calico-system(bcb02bd3-79c5-4f97-892a-aafa3090dcbe)\\\": rpc error: code = Unknown desc = failed to setup network for sandbox \\\"c361cf0d45af987326726b600a805f9395b6a442483d81b3ec5659ec3ac4bb62\\\": plugin type=\\\"calico\\\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="calico-system/calico-kube-controllers-55f74d585b-gkp22" 
podUID="bcb02bd3-79c5-4f97-892a-aafa3090dcbe" Dec 13 13:34:20.957602 systemd[1]: run-containerd-io.containerd.grpc.v1.cri-sandboxes-9dc7abc46ced9ea2f1fa53d72a6c4814f3579022ef445d58cd7ed3e5773800b2-shm.mount: Deactivated successfully. Dec 13 13:34:20.957709 systemd[1]: run-containerd-io.containerd.grpc.v1.cri-sandboxes-a51067dfa3319463394d4929f4d513a74dfa510e4dddd7a818017a13ba006fa2-shm.mount: Deactivated successfully. Dec 13 13:34:21.010723 containerd[1460]: time="2024-12-13T13:34:21.010670083Z" level=info msg="CreateContainer within sandbox \"8ee57868a7f34a07725621e7dd98ba063377757c6ade474fdbba29a39ceb1332\" for &ContainerMetadata{Name:calico-node,Attempt:0,} returns container id \"a3521161a88e70d8ffc57e8490c71eecadc26fc69c8a198c0b3f5ca9035a7d2a\"" Dec 13 13:34:21.011483 containerd[1460]: time="2024-12-13T13:34:21.011442093Z" level=info msg="StartContainer for \"a3521161a88e70d8ffc57e8490c71eecadc26fc69c8a198c0b3f5ca9035a7d2a\"" Dec 13 13:34:21.093450 systemd[1]: Started cri-containerd-a3521161a88e70d8ffc57e8490c71eecadc26fc69c8a198c0b3f5ca9035a7d2a.scope - libcontainer container a3521161a88e70d8ffc57e8490c71eecadc26fc69c8a198c0b3f5ca9035a7d2a. Dec 13 13:34:21.212494 kernel: wireguard: WireGuard 1.0.0 loaded. See www.wireguard.com for information. Dec 13 13:34:21.213018 kernel: wireguard: Copyright (C) 2015-2019 Jason A. Donenfeld . All Rights Reserved. Dec 13 13:34:21.241779 containerd[1460]: time="2024-12-13T13:34:21.241723247Z" level=info msg="StartContainer for \"a3521161a88e70d8ffc57e8490c71eecadc26fc69c8a198c0b3f5ca9035a7d2a\" returns successfully" Dec 13 13:34:21.612225 kubelet[2640]: E1213 13:34:21.611957 2640 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Dec 13 13:34:21.614242 kubelet[2640]: I1213 13:34:21.614209 2640 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="9dc7abc46ced9ea2f1fa53d72a6c4814f3579022ef445d58cd7ed3e5773800b2" Dec 13 13:34:21.614818 containerd[1460]: time="2024-12-13T13:34:21.614782016Z" level=info msg="StopPodSandbox for \"9dc7abc46ced9ea2f1fa53d72a6c4814f3579022ef445d58cd7ed3e5773800b2\"" Dec 13 13:34:21.616805 kubelet[2640]: I1213 13:34:21.616779 2640 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="c361cf0d45af987326726b600a805f9395b6a442483d81b3ec5659ec3ac4bb62" Dec 13 13:34:21.617276 containerd[1460]: time="2024-12-13T13:34:21.617142682Z" level=info msg="StopPodSandbox for \"c361cf0d45af987326726b600a805f9395b6a442483d81b3ec5659ec3ac4bb62\"" Dec 13 13:34:21.618395 kubelet[2640]: I1213 13:34:21.618371 2640 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="a51067dfa3319463394d4929f4d513a74dfa510e4dddd7a818017a13ba006fa2" Dec 13 13:34:21.618724 containerd[1460]: time="2024-12-13T13:34:21.618702231Z" level=info msg="StopPodSandbox for \"a51067dfa3319463394d4929f4d513a74dfa510e4dddd7a818017a13ba006fa2\"" Dec 13 13:34:21.619162 containerd[1460]: time="2024-12-13T13:34:21.618998969Z" level=info msg="Ensure that sandbox a51067dfa3319463394d4929f4d513a74dfa510e4dddd7a818017a13ba006fa2 in task-service has been cleanup successfully" Dec 13 13:34:21.619162 containerd[1460]: time="2024-12-13T13:34:21.619096533Z" level=info msg="Ensure that sandbox 9dc7abc46ced9ea2f1fa53d72a6c4814f3579022ef445d58cd7ed3e5773800b2 in task-service has been cleanup successfully" Dec 13 13:34:21.619246 containerd[1460]: time="2024-12-13T13:34:21.619158078Z" 
level=info msg="Ensure that sandbox c361cf0d45af987326726b600a805f9395b6a442483d81b3ec5659ec3ac4bb62 in task-service has been cleanup successfully" Dec 13 13:34:21.619514 containerd[1460]: time="2024-12-13T13:34:21.619399412Z" level=info msg="TearDown network for sandbox \"9dc7abc46ced9ea2f1fa53d72a6c4814f3579022ef445d58cd7ed3e5773800b2\" successfully" Dec 13 13:34:21.619514 containerd[1460]: time="2024-12-13T13:34:21.619425741Z" level=info msg="TearDown network for sandbox \"a51067dfa3319463394d4929f4d513a74dfa510e4dddd7a818017a13ba006fa2\" successfully" Dec 13 13:34:21.619514 containerd[1460]: time="2024-12-13T13:34:21.619446049Z" level=info msg="StopPodSandbox for \"a51067dfa3319463394d4929f4d513a74dfa510e4dddd7a818017a13ba006fa2\" returns successfully" Dec 13 13:34:21.619514 containerd[1460]: time="2024-12-13T13:34:21.619456799Z" level=info msg="TearDown network for sandbox \"c361cf0d45af987326726b600a805f9395b6a442483d81b3ec5659ec3ac4bb62\" successfully" Dec 13 13:34:21.619514 containerd[1460]: time="2024-12-13T13:34:21.619470926Z" level=info msg="StopPodSandbox for \"c361cf0d45af987326726b600a805f9395b6a442483d81b3ec5659ec3ac4bb62\" returns successfully" Dec 13 13:34:21.619514 containerd[1460]: time="2024-12-13T13:34:21.619433897Z" level=info msg="StopPodSandbox for \"9dc7abc46ced9ea2f1fa53d72a6c4814f3579022ef445d58cd7ed3e5773800b2\" returns successfully" Dec 13 13:34:21.620397 containerd[1460]: time="2024-12-13T13:34:21.619927133Z" level=info msg="StopPodSandbox for \"d3f6fbb3f4076657051c373b6d68491beda4319936476f5a29f465b9cc1dee09\"" Dec 13 13:34:21.620397 containerd[1460]: time="2024-12-13T13:34:21.619972979Z" level=info msg="StopPodSandbox for \"6a2a4f3c81b7850246110bb3a39e21303a213d976a8f97927fa282ec47803ddb\"" Dec 13 13:34:21.620397 containerd[1460]: time="2024-12-13T13:34:21.620003796Z" level=info msg="TearDown network for sandbox \"d3f6fbb3f4076657051c373b6d68491beda4319936476f5a29f465b9cc1dee09\" successfully" Dec 13 13:34:21.620397 containerd[1460]: time="2024-12-13T13:34:21.620013696Z" level=info msg="StopPodSandbox for \"d3f6fbb3f4076657051c373b6d68491beda4319936476f5a29f465b9cc1dee09\" returns successfully" Dec 13 13:34:21.620397 containerd[1460]: time="2024-12-13T13:34:21.620052308Z" level=info msg="TearDown network for sandbox \"6a2a4f3c81b7850246110bb3a39e21303a213d976a8f97927fa282ec47803ddb\" successfully" Dec 13 13:34:21.620397 containerd[1460]: time="2024-12-13T13:34:21.620064171Z" level=info msg="StopPodSandbox for \"6a2a4f3c81b7850246110bb3a39e21303a213d976a8f97927fa282ec47803ddb\" returns successfully" Dec 13 13:34:21.620397 containerd[1460]: time="2024-12-13T13:34:21.620098695Z" level=info msg="StopPodSandbox for \"9c61f38680623a070a89cebb1cf248b637d83806377c0ceb07f4313094888769\"" Dec 13 13:34:21.620397 containerd[1460]: time="2024-12-13T13:34:21.620165882Z" level=info msg="TearDown network for sandbox \"9c61f38680623a070a89cebb1cf248b637d83806377c0ceb07f4313094888769\" successfully" Dec 13 13:34:21.620397 containerd[1460]: time="2024-12-13T13:34:21.620174698Z" level=info msg="StopPodSandbox for \"9c61f38680623a070a89cebb1cf248b637d83806377c0ceb07f4313094888769\" returns successfully" Dec 13 13:34:21.620685 containerd[1460]: time="2024-12-13T13:34:21.620559140Z" level=info msg="StopPodSandbox for \"c6522ad2e1a3375e607b651b292cf0023738e3078af5ef59656343f350a5c5f4\"" Dec 13 13:34:21.620685 containerd[1460]: time="2024-12-13T13:34:21.620642638Z" level=info msg="TearDown network for sandbox \"c6522ad2e1a3375e607b651b292cf0023738e3078af5ef59656343f350a5c5f4\" 
successfully" Dec 13 13:34:21.620685 containerd[1460]: time="2024-12-13T13:34:21.620654550Z" level=info msg="StopPodSandbox for \"c6522ad2e1a3375e607b651b292cf0023738e3078af5ef59656343f350a5c5f4\" returns successfully" Dec 13 13:34:21.620763 containerd[1460]: time="2024-12-13T13:34:21.620691299Z" level=info msg="StopPodSandbox for \"a3d938a9fe1cbda4b552ace19138d55db25439b920e8a82798c7e324c7afe2f4\"" Dec 13 13:34:21.620787 containerd[1460]: time="2024-12-13T13:34:21.620773113Z" level=info msg="TearDown network for sandbox \"a3d938a9fe1cbda4b552ace19138d55db25439b920e8a82798c7e324c7afe2f4\" successfully" Dec 13 13:34:21.620815 containerd[1460]: time="2024-12-13T13:34:21.620784995Z" level=info msg="StopPodSandbox for \"a3d938a9fe1cbda4b552ace19138d55db25439b920e8a82798c7e324c7afe2f4\" returns successfully" Dec 13 13:34:21.620870 containerd[1460]: time="2024-12-13T13:34:21.620841240Z" level=info msg="StopPodSandbox for \"72d027854b05139635b45a0e92158d8fa8f40ee284fc63177385a814ee2244d0\"" Dec 13 13:34:21.621002 containerd[1460]: time="2024-12-13T13:34:21.620927793Z" level=info msg="TearDown network for sandbox \"72d027854b05139635b45a0e92158d8fa8f40ee284fc63177385a814ee2244d0\" successfully" Dec 13 13:34:21.621002 containerd[1460]: time="2024-12-13T13:34:21.620939725Z" level=info msg="StopPodSandbox for \"72d027854b05139635b45a0e92158d8fa8f40ee284fc63177385a814ee2244d0\" returns successfully" Dec 13 13:34:21.621288 containerd[1460]: time="2024-12-13T13:34:21.621267061Z" level=info msg="StopPodSandbox for \"4ef0d56a397cd72f2356f51c144189ca5302b06a02860d08835bc16ddf93f006\"" Dec 13 13:34:21.621382 containerd[1460]: time="2024-12-13T13:34:21.621364293Z" level=info msg="TearDown network for sandbox \"4ef0d56a397cd72f2356f51c144189ca5302b06a02860d08835bc16ddf93f006\" successfully" Dec 13 13:34:21.621407 containerd[1460]: time="2024-12-13T13:34:21.621379442Z" level=info msg="StopPodSandbox for \"4ef0d56a397cd72f2356f51c144189ca5302b06a02860d08835bc16ddf93f006\" returns successfully" Dec 13 13:34:21.621457 containerd[1460]: time="2024-12-13T13:34:21.621439665Z" level=info msg="StopPodSandbox for \"8c8a323be73a25f027a42aa31bd62c7c50a5548b9c0f738623c6c902439d54ca\"" Dec 13 13:34:21.621536 containerd[1460]: time="2024-12-13T13:34:21.621518493Z" level=info msg="TearDown network for sandbox \"8c8a323be73a25f027a42aa31bd62c7c50a5548b9c0f738623c6c902439d54ca\" successfully" Dec 13 13:34:21.621565 containerd[1460]: time="2024-12-13T13:34:21.621533451Z" level=info msg="StopPodSandbox for \"8c8a323be73a25f027a42aa31bd62c7c50a5548b9c0f738623c6c902439d54ca\" returns successfully" Dec 13 13:34:21.621644 containerd[1460]: time="2024-12-13T13:34:21.621624312Z" level=info msg="StopPodSandbox for \"41522410dfe8d457521fb9fe296f7b8dae420fe4186ddf5966e8b8e3c6ce7668\"" Dec 13 13:34:21.621732 containerd[1460]: time="2024-12-13T13:34:21.621705755Z" level=info msg="TearDown network for sandbox \"41522410dfe8d457521fb9fe296f7b8dae420fe4186ddf5966e8b8e3c6ce7668\" successfully" Dec 13 13:34:21.621732 containerd[1460]: time="2024-12-13T13:34:21.621724119Z" level=info msg="StopPodSandbox for \"41522410dfe8d457521fb9fe296f7b8dae420fe4186ddf5966e8b8e3c6ce7668\" returns successfully" Dec 13 13:34:21.622285 containerd[1460]: time="2024-12-13T13:34:21.622254396Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:csi-node-driver-h5jzv,Uid:70af0792-807b-45ba-8d22-96d81d38b5e7,Namespace:calico-system,Attempt:4,}" Dec 13 13:34:21.626709 containerd[1460]: time="2024-12-13T13:34:21.626677015Z" level=info msg="StopPodSandbox for 
\"043d5f360472a5cf84f2e7fe5d34c855f1c1466ecd61a4548840ff0da56c473d\"" Dec 13 13:34:21.626779 containerd[1460]: time="2024-12-13T13:34:21.626770030Z" level=info msg="TearDown network for sandbox \"043d5f360472a5cf84f2e7fe5d34c855f1c1466ecd61a4548840ff0da56c473d\" successfully" Dec 13 13:34:21.626807 containerd[1460]: time="2024-12-13T13:34:21.626780840Z" level=info msg="StopPodSandbox for \"043d5f360472a5cf84f2e7fe5d34c855f1c1466ecd61a4548840ff0da56c473d\" returns successfully" Dec 13 13:34:21.626857 containerd[1460]: time="2024-12-13T13:34:21.626839390Z" level=info msg="StopPodSandbox for \"5830ddd4f895c70171dbfc66a59269f13cec0912a16a8f19a7fc422dce5b8861\"" Dec 13 13:34:21.627038 containerd[1460]: time="2024-12-13T13:34:21.626901617Z" level=info msg="TearDown network for sandbox \"5830ddd4f895c70171dbfc66a59269f13cec0912a16a8f19a7fc422dce5b8861\" successfully" Dec 13 13:34:21.627038 containerd[1460]: time="2024-12-13T13:34:21.626913740Z" level=info msg="StopPodSandbox for \"5830ddd4f895c70171dbfc66a59269f13cec0912a16a8f19a7fc422dce5b8861\" returns successfully" Dec 13 13:34:21.627473 containerd[1460]: time="2024-12-13T13:34:21.627447903Z" level=info msg="StopPodSandbox for \"c62e1d6a49358b1088df68486d06de5d565abc5cb224cecb08a72fc49527b818\"" Dec 13 13:34:21.627722 containerd[1460]: time="2024-12-13T13:34:21.627456760Z" level=info msg="StopPodSandbox for \"d1a332cec12278e8a2848eda31dc70e9822bb2e5c607ef1e71dfc9da622ecebb\"" Dec 13 13:34:21.627722 containerd[1460]: time="2024-12-13T13:34:21.627642880Z" level=info msg="TearDown network for sandbox \"c62e1d6a49358b1088df68486d06de5d565abc5cb224cecb08a72fc49527b818\" successfully" Dec 13 13:34:21.627722 containerd[1460]: time="2024-12-13T13:34:21.627655414Z" level=info msg="StopPodSandbox for \"c62e1d6a49358b1088df68486d06de5d565abc5cb224cecb08a72fc49527b818\" returns successfully" Dec 13 13:34:21.627807 kubelet[2640]: I1213 13:34:21.627630 2640 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="9c3a341d124d9379d933702cd59eae0a502abb56529e8f56106691cfdd5210df" Dec 13 13:34:21.628164 containerd[1460]: time="2024-12-13T13:34:21.628009900Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-kube-controllers-55f74d585b-gkp22,Uid:bcb02bd3-79c5-4f97-892a-aafa3090dcbe,Namespace:calico-system,Attempt:6,}" Dec 13 13:34:21.628164 containerd[1460]: time="2024-12-13T13:34:21.628104537Z" level=info msg="StopPodSandbox for \"9c3a341d124d9379d933702cd59eae0a502abb56529e8f56106691cfdd5210df\"" Dec 13 13:34:21.628327 containerd[1460]: time="2024-12-13T13:34:21.628291108Z" level=info msg="Ensure that sandbox 9c3a341d124d9379d933702cd59eae0a502abb56529e8f56106691cfdd5210df in task-service has been cleanup successfully" Dec 13 13:34:21.628546 containerd[1460]: time="2024-12-13T13:34:21.628515840Z" level=info msg="TearDown network for sandbox \"9c3a341d124d9379d933702cd59eae0a502abb56529e8f56106691cfdd5210df\" successfully" Dec 13 13:34:21.628546 containerd[1460]: time="2024-12-13T13:34:21.628543401Z" level=info msg="StopPodSandbox for \"9c3a341d124d9379d933702cd59eae0a502abb56529e8f56106691cfdd5210df\" returns successfully" Dec 13 13:34:21.628765 containerd[1460]: time="2024-12-13T13:34:21.628589618Z" level=info msg="TearDown network for sandbox \"d1a332cec12278e8a2848eda31dc70e9822bb2e5c607ef1e71dfc9da622ecebb\" successfully" Dec 13 13:34:21.628765 containerd[1460]: time="2024-12-13T13:34:21.628764297Z" level=info msg="StopPodSandbox for \"d1a332cec12278e8a2848eda31dc70e9822bb2e5c607ef1e71dfc9da622ecebb\" returns 
successfully" Dec 13 13:34:21.628896 containerd[1460]: time="2024-12-13T13:34:21.628828096Z" level=info msg="StopPodSandbox for \"bef88fc60cdb3838d0dc9850d23e07d807225163db3a44fc0c007e1c2aa7f9d8\"" Dec 13 13:34:21.628948 containerd[1460]: time="2024-12-13T13:34:21.628906083Z" level=info msg="TearDown network for sandbox \"bef88fc60cdb3838d0dc9850d23e07d807225163db3a44fc0c007e1c2aa7f9d8\" successfully" Dec 13 13:34:21.628948 containerd[1460]: time="2024-12-13T13:34:21.628944916Z" level=info msg="StopPodSandbox for \"bef88fc60cdb3838d0dc9850d23e07d807225163db3a44fc0c007e1c2aa7f9d8\" returns successfully" Dec 13 13:34:21.629348 containerd[1460]: time="2024-12-13T13:34:21.629270318Z" level=info msg="StopPodSandbox for \"fd4a3ec4455a4558ba5951ceb7f7c3f3eeeab9ca964d38b261406ee89bda46a4\"" Dec 13 13:34:21.629387 containerd[1460]: time="2024-12-13T13:34:21.629273143Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-apiserver-7d87f7746b-flf6n,Uid:5ba12381-f554-4e4f-8ceb-405dc070dc9a,Namespace:calico-apiserver,Attempt:6,}" Dec 13 13:34:21.629494 containerd[1460]: time="2024-12-13T13:34:21.629474871Z" level=info msg="TearDown network for sandbox \"fd4a3ec4455a4558ba5951ceb7f7c3f3eeeab9ca964d38b261406ee89bda46a4\" successfully" Dec 13 13:34:21.629494 containerd[1460]: time="2024-12-13T13:34:21.629490781Z" level=info msg="StopPodSandbox for \"fd4a3ec4455a4558ba5951ceb7f7c3f3eeeab9ca964d38b261406ee89bda46a4\" returns successfully" Dec 13 13:34:21.630283 containerd[1460]: time="2024-12-13T13:34:21.630261440Z" level=info msg="StopPodSandbox for \"3096d2cf061035900198ce891b4e5cbd737796b92099b44cfd3fb9172cdbac54\"" Dec 13 13:34:21.630359 containerd[1460]: time="2024-12-13T13:34:21.630350527Z" level=info msg="TearDown network for sandbox \"3096d2cf061035900198ce891b4e5cbd737796b92099b44cfd3fb9172cdbac54\" successfully" Dec 13 13:34:21.630387 containerd[1460]: time="2024-12-13T13:34:21.630360826Z" level=info msg="StopPodSandbox for \"3096d2cf061035900198ce891b4e5cbd737796b92099b44cfd3fb9172cdbac54\" returns successfully" Dec 13 13:34:21.630641 containerd[1460]: time="2024-12-13T13:34:21.630621857Z" level=info msg="StopPodSandbox for \"03f3d3da32a6bad9575866ecc071f7db7e2ef21eb15c13b5d3ccc4981ea3a6b8\"" Dec 13 13:34:21.630712 containerd[1460]: time="2024-12-13T13:34:21.630696377Z" level=info msg="TearDown network for sandbox \"03f3d3da32a6bad9575866ecc071f7db7e2ef21eb15c13b5d3ccc4981ea3a6b8\" successfully" Dec 13 13:34:21.630712 containerd[1460]: time="2024-12-13T13:34:21.630708469Z" level=info msg="StopPodSandbox for \"03f3d3da32a6bad9575866ecc071f7db7e2ef21eb15c13b5d3ccc4981ea3a6b8\" returns successfully" Dec 13 13:34:21.631026 containerd[1460]: time="2024-12-13T13:34:21.631006489Z" level=info msg="StopPodSandbox for \"3471811395b1ff1780abbbc92eeb9a683464d2b59eb4b36f7ca30acd080ae9c4\"" Dec 13 13:34:21.631091 containerd[1460]: time="2024-12-13T13:34:21.631075769Z" level=info msg="TearDown network for sandbox \"3471811395b1ff1780abbbc92eeb9a683464d2b59eb4b36f7ca30acd080ae9c4\" successfully" Dec 13 13:34:21.631091 containerd[1460]: time="2024-12-13T13:34:21.631088303Z" level=info msg="StopPodSandbox for \"3471811395b1ff1780abbbc92eeb9a683464d2b59eb4b36f7ca30acd080ae9c4\" returns successfully" Dec 13 13:34:21.632027 kubelet[2640]: I1213 13:34:21.631390 2640 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="7c57ef222078b00fb1358031d13913b076e7f887d430518e050f8ecbe0260ca9" Dec 13 13:34:21.632091 containerd[1460]: time="2024-12-13T13:34:21.631458769Z" level=info 
msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-apiserver-7d87f7746b-mqntw,Uid:cadc3599-3084-49e0-99bc-626d4d423dd6,Namespace:calico-apiserver,Attempt:6,}" Dec 13 13:34:21.632091 containerd[1460]: time="2024-12-13T13:34:21.631761648Z" level=info msg="StopPodSandbox for \"7c57ef222078b00fb1358031d13913b076e7f887d430518e050f8ecbe0260ca9\"" Dec 13 13:34:21.632091 containerd[1460]: time="2024-12-13T13:34:21.631910769Z" level=info msg="Ensure that sandbox 7c57ef222078b00fb1358031d13913b076e7f887d430518e050f8ecbe0260ca9 in task-service has been cleanup successfully" Dec 13 13:34:21.632378 containerd[1460]: time="2024-12-13T13:34:21.632358109Z" level=info msg="TearDown network for sandbox \"7c57ef222078b00fb1358031d13913b076e7f887d430518e050f8ecbe0260ca9\" successfully" Dec 13 13:34:21.632378 containerd[1460]: time="2024-12-13T13:34:21.632373457Z" level=info msg="StopPodSandbox for \"7c57ef222078b00fb1358031d13913b076e7f887d430518e050f8ecbe0260ca9\" returns successfully" Dec 13 13:34:21.632753 containerd[1460]: time="2024-12-13T13:34:21.632730508Z" level=info msg="StopPodSandbox for \"ea5cf49efc7cd6bceaf869c25a1e0ee305ee2c5cb1307feaaffb986d0540529c\"" Dec 13 13:34:21.632816 containerd[1460]: time="2024-12-13T13:34:21.632805590Z" level=info msg="TearDown network for sandbox \"ea5cf49efc7cd6bceaf869c25a1e0ee305ee2c5cb1307feaaffb986d0540529c\" successfully" Dec 13 13:34:21.632840 containerd[1460]: time="2024-12-13T13:34:21.632816110Z" level=info msg="StopPodSandbox for \"ea5cf49efc7cd6bceaf869c25a1e0ee305ee2c5cb1307feaaffb986d0540529c\" returns successfully" Dec 13 13:34:21.633045 containerd[1460]: time="2024-12-13T13:34:21.633024621Z" level=info msg="StopPodSandbox for \"3d6b3dd240a00c2a00a729b97087fb6f836a068b396113e925387d9d8ae54c7f\"" Dec 13 13:34:21.633123 containerd[1460]: time="2024-12-13T13:34:21.633097629Z" level=info msg="TearDown network for sandbox \"3d6b3dd240a00c2a00a729b97087fb6f836a068b396113e925387d9d8ae54c7f\" successfully" Dec 13 13:34:21.633123 containerd[1460]: time="2024-12-13T13:34:21.633116003Z" level=info msg="StopPodSandbox for \"3d6b3dd240a00c2a00a729b97087fb6f836a068b396113e925387d9d8ae54c7f\" returns successfully" Dec 13 13:34:21.633379 containerd[1460]: time="2024-12-13T13:34:21.633358548Z" level=info msg="StopPodSandbox for \"e93b7769b9d88742aaf6059922f8c6c4a26e010f6ef3858da386188108ad8af8\"" Dec 13 13:34:21.633443 containerd[1460]: time="2024-12-13T13:34:21.633426727Z" level=info msg="TearDown network for sandbox \"e93b7769b9d88742aaf6059922f8c6c4a26e010f6ef3858da386188108ad8af8\" successfully" Dec 13 13:34:21.633443 containerd[1460]: time="2024-12-13T13:34:21.633439531Z" level=info msg="StopPodSandbox for \"e93b7769b9d88742aaf6059922f8c6c4a26e010f6ef3858da386188108ad8af8\" returns successfully" Dec 13 13:34:21.633784 containerd[1460]: time="2024-12-13T13:34:21.633761756Z" level=info msg="StopPodSandbox for \"8e04eb66310a3917ffdcbb002bcd66f739fb7b5c6e195d72921f7f71735c6ad9\"" Dec 13 13:34:21.633850 containerd[1460]: time="2024-12-13T13:34:21.633833852Z" level=info msg="TearDown network for sandbox \"8e04eb66310a3917ffdcbb002bcd66f739fb7b5c6e195d72921f7f71735c6ad9\" successfully" Dec 13 13:34:21.633850 containerd[1460]: time="2024-12-13T13:34:21.633843169Z" level=info msg="StopPodSandbox for \"8e04eb66310a3917ffdcbb002bcd66f739fb7b5c6e195d72921f7f71735c6ad9\" returns successfully" Dec 13 13:34:21.634150 containerd[1460]: time="2024-12-13T13:34:21.634118046Z" level=info msg="StopPodSandbox for \"21489d44f97457445adcc907a914ad67e96a4016825cb60915c67d35d31dba11\"" 
Dec 13 13:34:21.656346 containerd[1460]: time="2024-12-13T13:34:21.656289462Z" level=info msg="TearDown network for sandbox \"21489d44f97457445adcc907a914ad67e96a4016825cb60915c67d35d31dba11\" successfully" Dec 13 13:34:21.656346 containerd[1460]: time="2024-12-13T13:34:21.656335138Z" level=info msg="StopPodSandbox for \"21489d44f97457445adcc907a914ad67e96a4016825cb60915c67d35d31dba11\" returns successfully" Dec 13 13:34:21.656907 kubelet[2640]: E1213 13:34:21.656886 2640 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Dec 13 13:34:21.658049 kubelet[2640]: I1213 13:34:21.657781 2640 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="calico-system/calico-node-fqnrq" podStartSLOduration=1.5144829450000001 podStartE2EDuration="18.657755517s" podCreationTimestamp="2024-12-13 13:34:03 +0000 UTC" firstStartedPulling="2024-12-13 13:34:03.616453526 +0000 UTC m=+21.815852877" lastFinishedPulling="2024-12-13 13:34:20.759726098 +0000 UTC m=+38.959125449" observedRunningTime="2024-12-13 13:34:21.65580391 +0000 UTC m=+39.855203261" watchObservedRunningTime="2024-12-13 13:34:21.657755517 +0000 UTC m=+39.857154868" Dec 13 13:34:21.662057 containerd[1460]: time="2024-12-13T13:34:21.661962411Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-7db6d8ff4d-tdhb9,Uid:bd2088f6-f886-4664-af58-213044237f3c,Namespace:kube-system,Attempt:6,}" Dec 13 13:34:21.678474 kubelet[2640]: I1213 13:34:21.677999 2640 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="aca8c409ba4808458cd390c815351d323f7da4bd417c1bcb075eb4cf65179d20" Dec 13 13:34:21.678909 containerd[1460]: time="2024-12-13T13:34:21.678878864Z" level=info msg="StopPodSandbox for \"aca8c409ba4808458cd390c815351d323f7da4bd417c1bcb075eb4cf65179d20\"" Dec 13 13:34:21.680557 containerd[1460]: time="2024-12-13T13:34:21.679090402Z" level=info msg="Ensure that sandbox aca8c409ba4808458cd390c815351d323f7da4bd417c1bcb075eb4cf65179d20 in task-service has been cleanup successfully" Dec 13 13:34:21.680557 containerd[1460]: time="2024-12-13T13:34:21.679455017Z" level=info msg="TearDown network for sandbox \"aca8c409ba4808458cd390c815351d323f7da4bd417c1bcb075eb4cf65179d20\" successfully" Dec 13 13:34:21.680557 containerd[1460]: time="2024-12-13T13:34:21.679495713Z" level=info msg="StopPodSandbox for \"aca8c409ba4808458cd390c815351d323f7da4bd417c1bcb075eb4cf65179d20\" returns successfully" Dec 13 13:34:21.680557 containerd[1460]: time="2024-12-13T13:34:21.679836643Z" level=info msg="StopPodSandbox for \"ba6f8affd16e998d2e69c27c01590791060c1de1a38eb975939e57a9835e83db\"" Dec 13 13:34:21.680557 containerd[1460]: time="2024-12-13T13:34:21.679923406Z" level=info msg="TearDown network for sandbox \"ba6f8affd16e998d2e69c27c01590791060c1de1a38eb975939e57a9835e83db\" successfully" Dec 13 13:34:21.680557 containerd[1460]: time="2024-12-13T13:34:21.679938596Z" level=info msg="StopPodSandbox for \"ba6f8affd16e998d2e69c27c01590791060c1de1a38eb975939e57a9835e83db\" returns successfully" Dec 13 13:34:21.680557 containerd[1460]: time="2024-12-13T13:34:21.680195958Z" level=info msg="StopPodSandbox for \"e3e675aabe036a2d2aa1e1782dcf65d1cb1d86f83cf782f33096729bc19652d6\"" Dec 13 13:34:21.680557 containerd[1460]: time="2024-12-13T13:34:21.680361610Z" level=info msg="TearDown network for sandbox \"e3e675aabe036a2d2aa1e1782dcf65d1cb1d86f83cf782f33096729bc19652d6\" successfully" Dec 13 13:34:21.680557 
containerd[1460]: time="2024-12-13T13:34:21.680372260Z" level=info msg="StopPodSandbox for \"e3e675aabe036a2d2aa1e1782dcf65d1cb1d86f83cf782f33096729bc19652d6\" returns successfully" Dec 13 13:34:21.680765 containerd[1460]: time="2024-12-13T13:34:21.680626979Z" level=info msg="StopPodSandbox for \"09abfaaccf32e858d49fc89ee268bc73ea6fb80d4d0a90443906199a1d27306d\"" Dec 13 13:34:21.680765 containerd[1460]: time="2024-12-13T13:34:21.680756081Z" level=info msg="TearDown network for sandbox \"09abfaaccf32e858d49fc89ee268bc73ea6fb80d4d0a90443906199a1d27306d\" successfully" Dec 13 13:34:21.680810 containerd[1460]: time="2024-12-13T13:34:21.680766060Z" level=info msg="StopPodSandbox for \"09abfaaccf32e858d49fc89ee268bc73ea6fb80d4d0a90443906199a1d27306d\" returns successfully" Dec 13 13:34:21.681469 containerd[1460]: time="2024-12-13T13:34:21.680998246Z" level=info msg="StopPodSandbox for \"706c13db8235334bc38488f0490b1867ec134804b2b05c3d93c69a593fd16e13\"" Dec 13 13:34:21.681469 containerd[1460]: time="2024-12-13T13:34:21.681076754Z" level=info msg="TearDown network for sandbox \"706c13db8235334bc38488f0490b1867ec134804b2b05c3d93c69a593fd16e13\" successfully" Dec 13 13:34:21.681469 containerd[1460]: time="2024-12-13T13:34:21.681091742Z" level=info msg="StopPodSandbox for \"706c13db8235334bc38488f0490b1867ec134804b2b05c3d93c69a593fd16e13\" returns successfully" Dec 13 13:34:21.681469 containerd[1460]: time="2024-12-13T13:34:21.681331773Z" level=info msg="StopPodSandbox for \"558af8a45bc01399591427b598962160a612f4630f01ffcfbcb0b3a90050035e\"" Dec 13 13:34:21.681469 containerd[1460]: time="2024-12-13T13:34:21.681415871Z" level=info msg="TearDown network for sandbox \"558af8a45bc01399591427b598962160a612f4630f01ffcfbcb0b3a90050035e\" successfully" Dec 13 13:34:21.681469 containerd[1460]: time="2024-12-13T13:34:21.681425178Z" level=info msg="StopPodSandbox for \"558af8a45bc01399591427b598962160a612f4630f01ffcfbcb0b3a90050035e\" returns successfully" Dec 13 13:34:21.681654 kubelet[2640]: E1213 13:34:21.681595 2640 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Dec 13 13:34:21.682482 containerd[1460]: time="2024-12-13T13:34:21.682444794Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-7db6d8ff4d-n488k,Uid:4efa0db1-4649-47cd-847b-b2cd3ddad9b5,Namespace:kube-system,Attempt:6,}" Dec 13 13:34:21.964666 systemd[1]: run-netns-cni\x2d50d086b1\x2d85e7\x2d7b2e\x2d1eea\x2d35bda686bdf3.mount: Deactivated successfully. Dec 13 13:34:21.965218 systemd[1]: run-netns-cni\x2d3ce7ec2c\x2d36c5\x2d74b2\x2df631\x2d99339903d15e.mount: Deactivated successfully. Dec 13 13:34:21.965294 systemd[1]: run-netns-cni\x2da206add5\x2df82c\x2dd1fa\x2d19d8\x2de130e2b6bee5.mount: Deactivated successfully. Dec 13 13:34:21.965376 systemd[1]: run-netns-cni\x2d168d8ca0\x2db376\x2d7e3d\x2df504\x2d1fd0499e88af.mount: Deactivated successfully. Dec 13 13:34:21.965443 systemd[1]: run-netns-cni\x2d7ea64831\x2d655a\x2d29c8\x2dcd43\x2d9f8e8b5f2d36.mount: Deactivated successfully. Dec 13 13:34:21.965526 systemd[1]: run-netns-cni\x2d6da28960\x2d3ac0\x2dffae\x2d05ce\x2dea48c57113a1.mount: Deactivated successfully. 
Dec 13 13:34:22.034804 systemd-networkd[1402]: calibdb04e42e10: Link UP Dec 13 13:34:22.035702 systemd-networkd[1402]: calibdb04e42e10: Gained carrier Dec 13 13:34:22.047846 containerd[1460]: 2024-12-13 13:34:21.918 [INFO][4844] cni-plugin/utils.go 100: File /var/lib/calico/mtu does not exist Dec 13 13:34:22.047846 containerd[1460]: 2024-12-13 13:34:21.936 [INFO][4844] cni-plugin/plugin.go 325: Calico CNI found existing endpoint: &{{WorkloadEndpoint projectcalico.org/v3} {localhost-k8s-coredns--7db6d8ff4d--n488k-eth0 coredns-7db6d8ff4d- kube-system 4efa0db1-4649-47cd-847b-b2cd3ddad9b5 804 0 2024-12-13 13:33:56 +0000 UTC map[k8s-app:kube-dns pod-template-hash:7db6d8ff4d projectcalico.org/namespace:kube-system projectcalico.org/orchestrator:k8s projectcalico.org/serviceaccount:coredns] map[] [] [] []} {k8s localhost coredns-7db6d8ff4d-n488k eth0 coredns [] [] [kns.kube-system ksa.kube-system.coredns] calibdb04e42e10 [{dns UDP 53 0 } {dns-tcp TCP 53 0 } {metrics TCP 9153 0 }] []}} ContainerID="9096b2d20114e03dd6176b2ee390ed4348d6d76baef9c659a0f7eec1c0a11cf1" Namespace="kube-system" Pod="coredns-7db6d8ff4d-n488k" WorkloadEndpoint="localhost-k8s-coredns--7db6d8ff4d--n488k-" Dec 13 13:34:22.047846 containerd[1460]: 2024-12-13 13:34:21.936 [INFO][4844] cni-plugin/k8s.go 77: Extracted identifiers for CmdAddK8s ContainerID="9096b2d20114e03dd6176b2ee390ed4348d6d76baef9c659a0f7eec1c0a11cf1" Namespace="kube-system" Pod="coredns-7db6d8ff4d-n488k" WorkloadEndpoint="localhost-k8s-coredns--7db6d8ff4d--n488k-eth0" Dec 13 13:34:22.047846 containerd[1460]: 2024-12-13 13:34:21.991 [INFO][4905] ipam/ipam_plugin.go 225: Calico CNI IPAM request count IPv4=1 IPv6=0 ContainerID="9096b2d20114e03dd6176b2ee390ed4348d6d76baef9c659a0f7eec1c0a11cf1" HandleID="k8s-pod-network.9096b2d20114e03dd6176b2ee390ed4348d6d76baef9c659a0f7eec1c0a11cf1" Workload="localhost-k8s-coredns--7db6d8ff4d--n488k-eth0" Dec 13 13:34:22.047846 containerd[1460]: 2024-12-13 13:34:22.000 [INFO][4905] ipam/ipam_plugin.go 265: Auto assigning IP ContainerID="9096b2d20114e03dd6176b2ee390ed4348d6d76baef9c659a0f7eec1c0a11cf1" HandleID="k8s-pod-network.9096b2d20114e03dd6176b2ee390ed4348d6d76baef9c659a0f7eec1c0a11cf1" Workload="localhost-k8s-coredns--7db6d8ff4d--n488k-eth0" assignArgs=ipam.AutoAssignArgs{Num4:1, Num6:0, HandleID:(*string)(0xc000292960), Attrs:map[string]string{"namespace":"kube-system", "node":"localhost", "pod":"coredns-7db6d8ff4d-n488k", "timestamp":"2024-12-13 13:34:21.991551317 +0000 UTC"}, Hostname:"localhost", IPv4Pools:[]net.IPNet{}, IPv6Pools:[]net.IPNet{}, MaxBlocksPerHost:0, HostReservedAttrIPv4s:(*ipam.HostReservedAttr)(nil), HostReservedAttrIPv6s:(*ipam.HostReservedAttr)(nil), IntendedUse:"Workload"} Dec 13 13:34:22.047846 containerd[1460]: 2024-12-13 13:34:22.000 [INFO][4905] ipam/ipam_plugin.go 353: About to acquire host-wide IPAM lock. Dec 13 13:34:22.047846 containerd[1460]: 2024-12-13 13:34:22.000 [INFO][4905] ipam/ipam_plugin.go 368: Acquired host-wide IPAM lock. 
Dec 13 13:34:22.047846 containerd[1460]: 2024-12-13 13:34:22.000 [INFO][4905] ipam/ipam.go 107: Auto-assign 1 ipv4, 0 ipv6 addrs for host 'localhost' Dec 13 13:34:22.047846 containerd[1460]: 2024-12-13 13:34:22.003 [INFO][4905] ipam/ipam.go 660: Looking up existing affinities for host handle="k8s-pod-network.9096b2d20114e03dd6176b2ee390ed4348d6d76baef9c659a0f7eec1c0a11cf1" host="localhost" Dec 13 13:34:22.047846 containerd[1460]: 2024-12-13 13:34:22.008 [INFO][4905] ipam/ipam.go 372: Looking up existing affinities for host host="localhost" Dec 13 13:34:22.047846 containerd[1460]: 2024-12-13 13:34:22.011 [INFO][4905] ipam/ipam.go 489: Trying affinity for 192.168.88.128/26 host="localhost" Dec 13 13:34:22.047846 containerd[1460]: 2024-12-13 13:34:22.012 [INFO][4905] ipam/ipam.go 155: Attempting to load block cidr=192.168.88.128/26 host="localhost" Dec 13 13:34:22.047846 containerd[1460]: 2024-12-13 13:34:22.014 [INFO][4905] ipam/ipam.go 232: Affinity is confirmed and block has been loaded cidr=192.168.88.128/26 host="localhost" Dec 13 13:34:22.047846 containerd[1460]: 2024-12-13 13:34:22.014 [INFO][4905] ipam/ipam.go 1180: Attempting to assign 1 addresses from block block=192.168.88.128/26 handle="k8s-pod-network.9096b2d20114e03dd6176b2ee390ed4348d6d76baef9c659a0f7eec1c0a11cf1" host="localhost" Dec 13 13:34:22.047846 containerd[1460]: 2024-12-13 13:34:22.015 [INFO][4905] ipam/ipam.go 1685: Creating new handle: k8s-pod-network.9096b2d20114e03dd6176b2ee390ed4348d6d76baef9c659a0f7eec1c0a11cf1 Dec 13 13:34:22.047846 containerd[1460]: 2024-12-13 13:34:22.018 [INFO][4905] ipam/ipam.go 1203: Writing block in order to claim IPs block=192.168.88.128/26 handle="k8s-pod-network.9096b2d20114e03dd6176b2ee390ed4348d6d76baef9c659a0f7eec1c0a11cf1" host="localhost" Dec 13 13:34:22.047846 containerd[1460]: 2024-12-13 13:34:22.024 [INFO][4905] ipam/ipam.go 1216: Successfully claimed IPs: [192.168.88.129/26] block=192.168.88.128/26 handle="k8s-pod-network.9096b2d20114e03dd6176b2ee390ed4348d6d76baef9c659a0f7eec1c0a11cf1" host="localhost" Dec 13 13:34:22.047846 containerd[1460]: 2024-12-13 13:34:22.024 [INFO][4905] ipam/ipam.go 847: Auto-assigned 1 out of 1 IPv4s: [192.168.88.129/26] handle="k8s-pod-network.9096b2d20114e03dd6176b2ee390ed4348d6d76baef9c659a0f7eec1c0a11cf1" host="localhost" Dec 13 13:34:22.047846 containerd[1460]: 2024-12-13 13:34:22.024 [INFO][4905] ipam/ipam_plugin.go 374: Released host-wide IPAM lock. 
Dec 13 13:34:22.047846 containerd[1460]: 2024-12-13 13:34:22.024 [INFO][4905] ipam/ipam_plugin.go 283: Calico CNI IPAM assigned addresses IPv4=[192.168.88.129/26] IPv6=[] ContainerID="9096b2d20114e03dd6176b2ee390ed4348d6d76baef9c659a0f7eec1c0a11cf1" HandleID="k8s-pod-network.9096b2d20114e03dd6176b2ee390ed4348d6d76baef9c659a0f7eec1c0a11cf1" Workload="localhost-k8s-coredns--7db6d8ff4d--n488k-eth0" Dec 13 13:34:22.048487 containerd[1460]: 2024-12-13 13:34:22.027 [INFO][4844] cni-plugin/k8s.go 386: Populated endpoint ContainerID="9096b2d20114e03dd6176b2ee390ed4348d6d76baef9c659a0f7eec1c0a11cf1" Namespace="kube-system" Pod="coredns-7db6d8ff4d-n488k" WorkloadEndpoint="localhost-k8s-coredns--7db6d8ff4d--n488k-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"localhost-k8s-coredns--7db6d8ff4d--n488k-eth0", GenerateName:"coredns-7db6d8ff4d-", Namespace:"kube-system", SelfLink:"", UID:"4efa0db1-4649-47cd-847b-b2cd3ddad9b5", ResourceVersion:"804", Generation:0, CreationTimestamp:time.Date(2024, time.December, 13, 13, 33, 56, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"k8s-app":"kube-dns", "pod-template-hash":"7db6d8ff4d", "projectcalico.org/namespace":"kube-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"coredns"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"localhost", ContainerID:"", Pod:"coredns-7db6d8ff4d-n488k", Endpoint:"eth0", ServiceAccountName:"coredns", IPNetworks:[]string{"192.168.88.129/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.kube-system", "ksa.kube-system.coredns"}, InterfaceName:"calibdb04e42e10", MAC:"", Ports:[]v3.WorkloadEndpointPort{v3.WorkloadEndpointPort{Name:"dns", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"UDP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"dns-tcp", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"metrics", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x23c1, HostPort:0x0, HostIP:""}}, AllowSpoofedSourcePrefixes:[]string(nil)}} Dec 13 13:34:22.048487 containerd[1460]: 2024-12-13 13:34:22.027 [INFO][4844] cni-plugin/k8s.go 387: Calico CNI using IPs: [192.168.88.129/32] ContainerID="9096b2d20114e03dd6176b2ee390ed4348d6d76baef9c659a0f7eec1c0a11cf1" Namespace="kube-system" Pod="coredns-7db6d8ff4d-n488k" WorkloadEndpoint="localhost-k8s-coredns--7db6d8ff4d--n488k-eth0" Dec 13 13:34:22.048487 containerd[1460]: 2024-12-13 13:34:22.027 [INFO][4844] cni-plugin/dataplane_linux.go 69: Setting the host side veth name to calibdb04e42e10 ContainerID="9096b2d20114e03dd6176b2ee390ed4348d6d76baef9c659a0f7eec1c0a11cf1" Namespace="kube-system" Pod="coredns-7db6d8ff4d-n488k" WorkloadEndpoint="localhost-k8s-coredns--7db6d8ff4d--n488k-eth0" Dec 13 13:34:22.048487 containerd[1460]: 2024-12-13 13:34:22.035 [INFO][4844] cni-plugin/dataplane_linux.go 508: Disabling IPv4 forwarding ContainerID="9096b2d20114e03dd6176b2ee390ed4348d6d76baef9c659a0f7eec1c0a11cf1" Namespace="kube-system" Pod="coredns-7db6d8ff4d-n488k" WorkloadEndpoint="localhost-k8s-coredns--7db6d8ff4d--n488k-eth0" Dec 13 13:34:22.048487 containerd[1460]: 2024-12-13 13:34:22.035 
[INFO][4844] cni-plugin/k8s.go 414: Added Mac, interface name, and active container ID to endpoint ContainerID="9096b2d20114e03dd6176b2ee390ed4348d6d76baef9c659a0f7eec1c0a11cf1" Namespace="kube-system" Pod="coredns-7db6d8ff4d-n488k" WorkloadEndpoint="localhost-k8s-coredns--7db6d8ff4d--n488k-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"localhost-k8s-coredns--7db6d8ff4d--n488k-eth0", GenerateName:"coredns-7db6d8ff4d-", Namespace:"kube-system", SelfLink:"", UID:"4efa0db1-4649-47cd-847b-b2cd3ddad9b5", ResourceVersion:"804", Generation:0, CreationTimestamp:time.Date(2024, time.December, 13, 13, 33, 56, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"k8s-app":"kube-dns", "pod-template-hash":"7db6d8ff4d", "projectcalico.org/namespace":"kube-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"coredns"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"localhost", ContainerID:"9096b2d20114e03dd6176b2ee390ed4348d6d76baef9c659a0f7eec1c0a11cf1", Pod:"coredns-7db6d8ff4d-n488k", Endpoint:"eth0", ServiceAccountName:"coredns", IPNetworks:[]string{"192.168.88.129/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.kube-system", "ksa.kube-system.coredns"}, InterfaceName:"calibdb04e42e10", MAC:"36:2c:5c:cb:0a:ba", Ports:[]v3.WorkloadEndpointPort{v3.WorkloadEndpointPort{Name:"dns", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"UDP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"dns-tcp", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"metrics", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x23c1, HostPort:0x0, HostIP:""}}, AllowSpoofedSourcePrefixes:[]string(nil)}} Dec 13 13:34:22.048487 containerd[1460]: 2024-12-13 13:34:22.044 [INFO][4844] cni-plugin/k8s.go 500: Wrote updated endpoint to datastore ContainerID="9096b2d20114e03dd6176b2ee390ed4348d6d76baef9c659a0f7eec1c0a11cf1" Namespace="kube-system" Pod="coredns-7db6d8ff4d-n488k" WorkloadEndpoint="localhost-k8s-coredns--7db6d8ff4d--n488k-eth0" Dec 13 13:34:22.062504 systemd-networkd[1402]: cali47bfc7cde53: Link UP Dec 13 13:34:22.062704 systemd-networkd[1402]: cali47bfc7cde53: Gained carrier Dec 13 13:34:22.077790 containerd[1460]: 2024-12-13 13:34:21.879 [INFO][4817] cni-plugin/utils.go 100: File /var/lib/calico/mtu does not exist Dec 13 13:34:22.077790 containerd[1460]: 2024-12-13 13:34:21.893 [INFO][4817] cni-plugin/plugin.go 325: Calico CNI found existing endpoint: &{{WorkloadEndpoint projectcalico.org/v3} {localhost-k8s-calico--apiserver--7d87f7746b--flf6n-eth0 calico-apiserver-7d87f7746b- calico-apiserver 5ba12381-f554-4e4f-8ceb-405dc070dc9a 810 0 2024-12-13 13:34:03 +0000 UTC map[apiserver:true app.kubernetes.io/name:calico-apiserver k8s-app:calico-apiserver pod-template-hash:7d87f7746b projectcalico.org/namespace:calico-apiserver projectcalico.org/orchestrator:k8s projectcalico.org/serviceaccount:calico-apiserver] map[] [] [] []} {k8s localhost calico-apiserver-7d87f7746b-flf6n eth0 calico-apiserver [] [] [kns.calico-apiserver ksa.calico-apiserver.calico-apiserver] cali47bfc7cde53 [] []}} 
ContainerID="15bd886bfad9865c2cbf05528f4845081be4b7ea2734d0168e1b88e3c239ed4f" Namespace="calico-apiserver" Pod="calico-apiserver-7d87f7746b-flf6n" WorkloadEndpoint="localhost-k8s-calico--apiserver--7d87f7746b--flf6n-" Dec 13 13:34:22.077790 containerd[1460]: 2024-12-13 13:34:21.893 [INFO][4817] cni-plugin/k8s.go 77: Extracted identifiers for CmdAddK8s ContainerID="15bd886bfad9865c2cbf05528f4845081be4b7ea2734d0168e1b88e3c239ed4f" Namespace="calico-apiserver" Pod="calico-apiserver-7d87f7746b-flf6n" WorkloadEndpoint="localhost-k8s-calico--apiserver--7d87f7746b--flf6n-eth0" Dec 13 13:34:22.077790 containerd[1460]: 2024-12-13 13:34:21.991 [INFO][4867] ipam/ipam_plugin.go 225: Calico CNI IPAM request count IPv4=1 IPv6=0 ContainerID="15bd886bfad9865c2cbf05528f4845081be4b7ea2734d0168e1b88e3c239ed4f" HandleID="k8s-pod-network.15bd886bfad9865c2cbf05528f4845081be4b7ea2734d0168e1b88e3c239ed4f" Workload="localhost-k8s-calico--apiserver--7d87f7746b--flf6n-eth0" Dec 13 13:34:22.077790 containerd[1460]: 2024-12-13 13:34:22.002 [INFO][4867] ipam/ipam_plugin.go 265: Auto assigning IP ContainerID="15bd886bfad9865c2cbf05528f4845081be4b7ea2734d0168e1b88e3c239ed4f" HandleID="k8s-pod-network.15bd886bfad9865c2cbf05528f4845081be4b7ea2734d0168e1b88e3c239ed4f" Workload="localhost-k8s-calico--apiserver--7d87f7746b--flf6n-eth0" assignArgs=ipam.AutoAssignArgs{Num4:1, Num6:0, HandleID:(*string)(0xc000315220), Attrs:map[string]string{"namespace":"calico-apiserver", "node":"localhost", "pod":"calico-apiserver-7d87f7746b-flf6n", "timestamp":"2024-12-13 13:34:21.991706499 +0000 UTC"}, Hostname:"localhost", IPv4Pools:[]net.IPNet{}, IPv6Pools:[]net.IPNet{}, MaxBlocksPerHost:0, HostReservedAttrIPv4s:(*ipam.HostReservedAttr)(nil), HostReservedAttrIPv6s:(*ipam.HostReservedAttr)(nil), IntendedUse:"Workload"} Dec 13 13:34:22.077790 containerd[1460]: 2024-12-13 13:34:22.002 [INFO][4867] ipam/ipam_plugin.go 353: About to acquire host-wide IPAM lock. Dec 13 13:34:22.077790 containerd[1460]: 2024-12-13 13:34:22.025 [INFO][4867] ipam/ipam_plugin.go 368: Acquired host-wide IPAM lock. 
Dec 13 13:34:22.077790 containerd[1460]: 2024-12-13 13:34:22.025 [INFO][4867] ipam/ipam.go 107: Auto-assign 1 ipv4, 0 ipv6 addrs for host 'localhost' Dec 13 13:34:22.077790 containerd[1460]: 2024-12-13 13:34:22.027 [INFO][4867] ipam/ipam.go 660: Looking up existing affinities for host handle="k8s-pod-network.15bd886bfad9865c2cbf05528f4845081be4b7ea2734d0168e1b88e3c239ed4f" host="localhost" Dec 13 13:34:22.077790 containerd[1460]: 2024-12-13 13:34:22.031 [INFO][4867] ipam/ipam.go 372: Looking up existing affinities for host host="localhost" Dec 13 13:34:22.077790 containerd[1460]: 2024-12-13 13:34:22.037 [INFO][4867] ipam/ipam.go 489: Trying affinity for 192.168.88.128/26 host="localhost" Dec 13 13:34:22.077790 containerd[1460]: 2024-12-13 13:34:22.044 [INFO][4867] ipam/ipam.go 155: Attempting to load block cidr=192.168.88.128/26 host="localhost" Dec 13 13:34:22.077790 containerd[1460]: 2024-12-13 13:34:22.046 [INFO][4867] ipam/ipam.go 232: Affinity is confirmed and block has been loaded cidr=192.168.88.128/26 host="localhost" Dec 13 13:34:22.077790 containerd[1460]: 2024-12-13 13:34:22.046 [INFO][4867] ipam/ipam.go 1180: Attempting to assign 1 addresses from block block=192.168.88.128/26 handle="k8s-pod-network.15bd886bfad9865c2cbf05528f4845081be4b7ea2734d0168e1b88e3c239ed4f" host="localhost" Dec 13 13:34:22.077790 containerd[1460]: 2024-12-13 13:34:22.047 [INFO][4867] ipam/ipam.go 1685: Creating new handle: k8s-pod-network.15bd886bfad9865c2cbf05528f4845081be4b7ea2734d0168e1b88e3c239ed4f Dec 13 13:34:22.077790 containerd[1460]: 2024-12-13 13:34:22.051 [INFO][4867] ipam/ipam.go 1203: Writing block in order to claim IPs block=192.168.88.128/26 handle="k8s-pod-network.15bd886bfad9865c2cbf05528f4845081be4b7ea2734d0168e1b88e3c239ed4f" host="localhost" Dec 13 13:34:22.077790 containerd[1460]: 2024-12-13 13:34:22.056 [INFO][4867] ipam/ipam.go 1216: Successfully claimed IPs: [192.168.88.130/26] block=192.168.88.128/26 handle="k8s-pod-network.15bd886bfad9865c2cbf05528f4845081be4b7ea2734d0168e1b88e3c239ed4f" host="localhost" Dec 13 13:34:22.077790 containerd[1460]: 2024-12-13 13:34:22.056 [INFO][4867] ipam/ipam.go 847: Auto-assigned 1 out of 1 IPv4s: [192.168.88.130/26] handle="k8s-pod-network.15bd886bfad9865c2cbf05528f4845081be4b7ea2734d0168e1b88e3c239ed4f" host="localhost" Dec 13 13:34:22.077790 containerd[1460]: 2024-12-13 13:34:22.057 [INFO][4867] ipam/ipam_plugin.go 374: Released host-wide IPAM lock. 
Dec 13 13:34:22.077790 containerd[1460]: 2024-12-13 13:34:22.057 [INFO][4867] ipam/ipam_plugin.go 283: Calico CNI IPAM assigned addresses IPv4=[192.168.88.130/26] IPv6=[] ContainerID="15bd886bfad9865c2cbf05528f4845081be4b7ea2734d0168e1b88e3c239ed4f" HandleID="k8s-pod-network.15bd886bfad9865c2cbf05528f4845081be4b7ea2734d0168e1b88e3c239ed4f" Workload="localhost-k8s-calico--apiserver--7d87f7746b--flf6n-eth0" Dec 13 13:34:22.078572 containerd[1460]: 2024-12-13 13:34:22.059 [INFO][4817] cni-plugin/k8s.go 386: Populated endpoint ContainerID="15bd886bfad9865c2cbf05528f4845081be4b7ea2734d0168e1b88e3c239ed4f" Namespace="calico-apiserver" Pod="calico-apiserver-7d87f7746b-flf6n" WorkloadEndpoint="localhost-k8s-calico--apiserver--7d87f7746b--flf6n-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"localhost-k8s-calico--apiserver--7d87f7746b--flf6n-eth0", GenerateName:"calico-apiserver-7d87f7746b-", Namespace:"calico-apiserver", SelfLink:"", UID:"5ba12381-f554-4e4f-8ceb-405dc070dc9a", ResourceVersion:"810", Generation:0, CreationTimestamp:time.Date(2024, time.December, 13, 13, 34, 3, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"apiserver":"true", "app.kubernetes.io/name":"calico-apiserver", "k8s-app":"calico-apiserver", "pod-template-hash":"7d87f7746b", "projectcalico.org/namespace":"calico-apiserver", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"calico-apiserver"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"localhost", ContainerID:"", Pod:"calico-apiserver-7d87f7746b-flf6n", Endpoint:"eth0", ServiceAccountName:"calico-apiserver", IPNetworks:[]string{"192.168.88.130/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-apiserver", "ksa.calico-apiserver.calico-apiserver"}, InterfaceName:"cali47bfc7cde53", MAC:"", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil)}} Dec 13 13:34:22.078572 containerd[1460]: 2024-12-13 13:34:22.060 [INFO][4817] cni-plugin/k8s.go 387: Calico CNI using IPs: [192.168.88.130/32] ContainerID="15bd886bfad9865c2cbf05528f4845081be4b7ea2734d0168e1b88e3c239ed4f" Namespace="calico-apiserver" Pod="calico-apiserver-7d87f7746b-flf6n" WorkloadEndpoint="localhost-k8s-calico--apiserver--7d87f7746b--flf6n-eth0" Dec 13 13:34:22.078572 containerd[1460]: 2024-12-13 13:34:22.060 [INFO][4817] cni-plugin/dataplane_linux.go 69: Setting the host side veth name to cali47bfc7cde53 ContainerID="15bd886bfad9865c2cbf05528f4845081be4b7ea2734d0168e1b88e3c239ed4f" Namespace="calico-apiserver" Pod="calico-apiserver-7d87f7746b-flf6n" WorkloadEndpoint="localhost-k8s-calico--apiserver--7d87f7746b--flf6n-eth0" Dec 13 13:34:22.078572 containerd[1460]: 2024-12-13 13:34:22.062 [INFO][4817] cni-plugin/dataplane_linux.go 508: Disabling IPv4 forwarding ContainerID="15bd886bfad9865c2cbf05528f4845081be4b7ea2734d0168e1b88e3c239ed4f" Namespace="calico-apiserver" Pod="calico-apiserver-7d87f7746b-flf6n" WorkloadEndpoint="localhost-k8s-calico--apiserver--7d87f7746b--flf6n-eth0" Dec 13 13:34:22.078572 containerd[1460]: 2024-12-13 13:34:22.063 [INFO][4817] cni-plugin/k8s.go 414: Added Mac, interface name, and active container ID to endpoint 
ContainerID="15bd886bfad9865c2cbf05528f4845081be4b7ea2734d0168e1b88e3c239ed4f" Namespace="calico-apiserver" Pod="calico-apiserver-7d87f7746b-flf6n" WorkloadEndpoint="localhost-k8s-calico--apiserver--7d87f7746b--flf6n-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"localhost-k8s-calico--apiserver--7d87f7746b--flf6n-eth0", GenerateName:"calico-apiserver-7d87f7746b-", Namespace:"calico-apiserver", SelfLink:"", UID:"5ba12381-f554-4e4f-8ceb-405dc070dc9a", ResourceVersion:"810", Generation:0, CreationTimestamp:time.Date(2024, time.December, 13, 13, 34, 3, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"apiserver":"true", "app.kubernetes.io/name":"calico-apiserver", "k8s-app":"calico-apiserver", "pod-template-hash":"7d87f7746b", "projectcalico.org/namespace":"calico-apiserver", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"calico-apiserver"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"localhost", ContainerID:"15bd886bfad9865c2cbf05528f4845081be4b7ea2734d0168e1b88e3c239ed4f", Pod:"calico-apiserver-7d87f7746b-flf6n", Endpoint:"eth0", ServiceAccountName:"calico-apiserver", IPNetworks:[]string{"192.168.88.130/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-apiserver", "ksa.calico-apiserver.calico-apiserver"}, InterfaceName:"cali47bfc7cde53", MAC:"22:f8:51:10:cd:0a", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil)}} Dec 13 13:34:22.078572 containerd[1460]: 2024-12-13 13:34:22.074 [INFO][4817] cni-plugin/k8s.go 500: Wrote updated endpoint to datastore ContainerID="15bd886bfad9865c2cbf05528f4845081be4b7ea2734d0168e1b88e3c239ed4f" Namespace="calico-apiserver" Pod="calico-apiserver-7d87f7746b-flf6n" WorkloadEndpoint="localhost-k8s-calico--apiserver--7d87f7746b--flf6n-eth0" Dec 13 13:34:22.099856 systemd-networkd[1402]: cali170f32b928e: Link UP Dec 13 13:34:22.100074 systemd-networkd[1402]: cali170f32b928e: Gained carrier Dec 13 13:34:22.105625 containerd[1460]: time="2024-12-13T13:34:22.105215063Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Dec 13 13:34:22.105625 containerd[1460]: time="2024-12-13T13:34:22.105294202Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Dec 13 13:34:22.105625 containerd[1460]: time="2024-12-13T13:34:22.105352522Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Dec 13 13:34:22.105625 containerd[1460]: time="2024-12-13T13:34:22.105462819Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Dec 13 13:34:22.124625 containerd[1460]: time="2024-12-13T13:34:22.120052288Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Dec 13 13:34:22.124625 containerd[1460]: time="2024-12-13T13:34:22.120136936Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." 
runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Dec 13 13:34:22.124625 containerd[1460]: time="2024-12-13T13:34:22.120149340Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Dec 13 13:34:22.124625 containerd[1460]: time="2024-12-13T13:34:22.120249678Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Dec 13 13:34:22.124625 containerd[1460]: 2024-12-13 13:34:21.838 [INFO][4783] cni-plugin/utils.go 100: File /var/lib/calico/mtu does not exist Dec 13 13:34:22.124625 containerd[1460]: 2024-12-13 13:34:21.884 [INFO][4783] cni-plugin/plugin.go 325: Calico CNI found existing endpoint: &{{WorkloadEndpoint projectcalico.org/v3} {localhost-k8s-csi--node--driver--h5jzv-eth0 csi-node-driver- calico-system 70af0792-807b-45ba-8d22-96d81d38b5e7 662 0 2024-12-13 13:34:03 +0000 UTC map[app.kubernetes.io/name:csi-node-driver controller-revision-hash:65bf684474 k8s-app:csi-node-driver name:csi-node-driver pod-template-generation:1 projectcalico.org/namespace:calico-system projectcalico.org/orchestrator:k8s projectcalico.org/serviceaccount:csi-node-driver] map[] [] [] []} {k8s localhost csi-node-driver-h5jzv eth0 csi-node-driver [] [] [kns.calico-system ksa.calico-system.csi-node-driver] cali170f32b928e [] []}} ContainerID="eebc8245c67c09382ddf9863b64b3bfc1c4564b4b28cabda982433ba51a45c95" Namespace="calico-system" Pod="csi-node-driver-h5jzv" WorkloadEndpoint="localhost-k8s-csi--node--driver--h5jzv-" Dec 13 13:34:22.124625 containerd[1460]: 2024-12-13 13:34:21.885 [INFO][4783] cni-plugin/k8s.go 77: Extracted identifiers for CmdAddK8s ContainerID="eebc8245c67c09382ddf9863b64b3bfc1c4564b4b28cabda982433ba51a45c95" Namespace="calico-system" Pod="csi-node-driver-h5jzv" WorkloadEndpoint="localhost-k8s-csi--node--driver--h5jzv-eth0" Dec 13 13:34:22.124625 containerd[1460]: 2024-12-13 13:34:21.995 [INFO][4868] ipam/ipam_plugin.go 225: Calico CNI IPAM request count IPv4=1 IPv6=0 ContainerID="eebc8245c67c09382ddf9863b64b3bfc1c4564b4b28cabda982433ba51a45c95" HandleID="k8s-pod-network.eebc8245c67c09382ddf9863b64b3bfc1c4564b4b28cabda982433ba51a45c95" Workload="localhost-k8s-csi--node--driver--h5jzv-eth0" Dec 13 13:34:22.124625 containerd[1460]: 2024-12-13 13:34:22.004 [INFO][4868] ipam/ipam_plugin.go 265: Auto assigning IP ContainerID="eebc8245c67c09382ddf9863b64b3bfc1c4564b4b28cabda982433ba51a45c95" HandleID="k8s-pod-network.eebc8245c67c09382ddf9863b64b3bfc1c4564b4b28cabda982433ba51a45c95" Workload="localhost-k8s-csi--node--driver--h5jzv-eth0" assignArgs=ipam.AutoAssignArgs{Num4:1, Num6:0, HandleID:(*string)(0xc0003767d0), Attrs:map[string]string{"namespace":"calico-system", "node":"localhost", "pod":"csi-node-driver-h5jzv", "timestamp":"2024-12-13 13:34:21.995219189 +0000 UTC"}, Hostname:"localhost", IPv4Pools:[]net.IPNet{}, IPv6Pools:[]net.IPNet{}, MaxBlocksPerHost:0, HostReservedAttrIPv4s:(*ipam.HostReservedAttr)(nil), HostReservedAttrIPv6s:(*ipam.HostReservedAttr)(nil), IntendedUse:"Workload"} Dec 13 13:34:22.124625 containerd[1460]: 2024-12-13 13:34:22.004 [INFO][4868] ipam/ipam_plugin.go 353: About to acquire host-wide IPAM lock. Dec 13 13:34:22.124625 containerd[1460]: 2024-12-13 13:34:22.057 [INFO][4868] ipam/ipam_plugin.go 368: Acquired host-wide IPAM lock. 
Dec 13 13:34:22.124625 containerd[1460]: 2024-12-13 13:34:22.057 [INFO][4868] ipam/ipam.go 107: Auto-assign 1 ipv4, 0 ipv6 addrs for host 'localhost' Dec 13 13:34:22.124625 containerd[1460]: 2024-12-13 13:34:22.059 [INFO][4868] ipam/ipam.go 660: Looking up existing affinities for host handle="k8s-pod-network.eebc8245c67c09382ddf9863b64b3bfc1c4564b4b28cabda982433ba51a45c95" host="localhost" Dec 13 13:34:22.124625 containerd[1460]: 2024-12-13 13:34:22.064 [INFO][4868] ipam/ipam.go 372: Looking up existing affinities for host host="localhost" Dec 13 13:34:22.124625 containerd[1460]: 2024-12-13 13:34:22.068 [INFO][4868] ipam/ipam.go 489: Trying affinity for 192.168.88.128/26 host="localhost" Dec 13 13:34:22.124625 containerd[1460]: 2024-12-13 13:34:22.069 [INFO][4868] ipam/ipam.go 155: Attempting to load block cidr=192.168.88.128/26 host="localhost" Dec 13 13:34:22.124625 containerd[1460]: 2024-12-13 13:34:22.073 [INFO][4868] ipam/ipam.go 232: Affinity is confirmed and block has been loaded cidr=192.168.88.128/26 host="localhost" Dec 13 13:34:22.124625 containerd[1460]: 2024-12-13 13:34:22.073 [INFO][4868] ipam/ipam.go 1180: Attempting to assign 1 addresses from block block=192.168.88.128/26 handle="k8s-pod-network.eebc8245c67c09382ddf9863b64b3bfc1c4564b4b28cabda982433ba51a45c95" host="localhost" Dec 13 13:34:22.124625 containerd[1460]: 2024-12-13 13:34:22.075 [INFO][4868] ipam/ipam.go 1685: Creating new handle: k8s-pod-network.eebc8245c67c09382ddf9863b64b3bfc1c4564b4b28cabda982433ba51a45c95 Dec 13 13:34:22.124625 containerd[1460]: 2024-12-13 13:34:22.083 [INFO][4868] ipam/ipam.go 1203: Writing block in order to claim IPs block=192.168.88.128/26 handle="k8s-pod-network.eebc8245c67c09382ddf9863b64b3bfc1c4564b4b28cabda982433ba51a45c95" host="localhost" Dec 13 13:34:22.124625 containerd[1460]: 2024-12-13 13:34:22.089 [INFO][4868] ipam/ipam.go 1216: Successfully claimed IPs: [192.168.88.131/26] block=192.168.88.128/26 handle="k8s-pod-network.eebc8245c67c09382ddf9863b64b3bfc1c4564b4b28cabda982433ba51a45c95" host="localhost" Dec 13 13:34:22.124625 containerd[1460]: 2024-12-13 13:34:22.089 [INFO][4868] ipam/ipam.go 847: Auto-assigned 1 out of 1 IPv4s: [192.168.88.131/26] handle="k8s-pod-network.eebc8245c67c09382ddf9863b64b3bfc1c4564b4b28cabda982433ba51a45c95" host="localhost" Dec 13 13:34:22.124625 containerd[1460]: 2024-12-13 13:34:22.089 [INFO][4868] ipam/ipam_plugin.go 374: Released host-wide IPAM lock. 
Dec 13 13:34:22.124625 containerd[1460]: 2024-12-13 13:34:22.089 [INFO][4868] ipam/ipam_plugin.go 283: Calico CNI IPAM assigned addresses IPv4=[192.168.88.131/26] IPv6=[] ContainerID="eebc8245c67c09382ddf9863b64b3bfc1c4564b4b28cabda982433ba51a45c95" HandleID="k8s-pod-network.eebc8245c67c09382ddf9863b64b3bfc1c4564b4b28cabda982433ba51a45c95" Workload="localhost-k8s-csi--node--driver--h5jzv-eth0" Dec 13 13:34:22.125486 containerd[1460]: 2024-12-13 13:34:22.094 [INFO][4783] cni-plugin/k8s.go 386: Populated endpoint ContainerID="eebc8245c67c09382ddf9863b64b3bfc1c4564b4b28cabda982433ba51a45c95" Namespace="calico-system" Pod="csi-node-driver-h5jzv" WorkloadEndpoint="localhost-k8s-csi--node--driver--h5jzv-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"localhost-k8s-csi--node--driver--h5jzv-eth0", GenerateName:"csi-node-driver-", Namespace:"calico-system", SelfLink:"", UID:"70af0792-807b-45ba-8d22-96d81d38b5e7", ResourceVersion:"662", Generation:0, CreationTimestamp:time.Date(2024, time.December, 13, 13, 34, 3, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"app.kubernetes.io/name":"csi-node-driver", "controller-revision-hash":"65bf684474", "k8s-app":"csi-node-driver", "name":"csi-node-driver", "pod-template-generation":"1", "projectcalico.org/namespace":"calico-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"csi-node-driver"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"localhost", ContainerID:"", Pod:"csi-node-driver-h5jzv", Endpoint:"eth0", ServiceAccountName:"csi-node-driver", IPNetworks:[]string{"192.168.88.131/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-system", "ksa.calico-system.csi-node-driver"}, InterfaceName:"cali170f32b928e", MAC:"", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil)}} Dec 13 13:34:22.125486 containerd[1460]: 2024-12-13 13:34:22.096 [INFO][4783] cni-plugin/k8s.go 387: Calico CNI using IPs: [192.168.88.131/32] ContainerID="eebc8245c67c09382ddf9863b64b3bfc1c4564b4b28cabda982433ba51a45c95" Namespace="calico-system" Pod="csi-node-driver-h5jzv" WorkloadEndpoint="localhost-k8s-csi--node--driver--h5jzv-eth0" Dec 13 13:34:22.125486 containerd[1460]: 2024-12-13 13:34:22.096 [INFO][4783] cni-plugin/dataplane_linux.go 69: Setting the host side veth name to cali170f32b928e ContainerID="eebc8245c67c09382ddf9863b64b3bfc1c4564b4b28cabda982433ba51a45c95" Namespace="calico-system" Pod="csi-node-driver-h5jzv" WorkloadEndpoint="localhost-k8s-csi--node--driver--h5jzv-eth0" Dec 13 13:34:22.125486 containerd[1460]: 2024-12-13 13:34:22.100 [INFO][4783] cni-plugin/dataplane_linux.go 508: Disabling IPv4 forwarding ContainerID="eebc8245c67c09382ddf9863b64b3bfc1c4564b4b28cabda982433ba51a45c95" Namespace="calico-system" Pod="csi-node-driver-h5jzv" WorkloadEndpoint="localhost-k8s-csi--node--driver--h5jzv-eth0" Dec 13 13:34:22.125486 containerd[1460]: 2024-12-13 13:34:22.100 [INFO][4783] cni-plugin/k8s.go 414: Added Mac, interface name, and active container ID to endpoint ContainerID="eebc8245c67c09382ddf9863b64b3bfc1c4564b4b28cabda982433ba51a45c95" Namespace="calico-system" Pod="csi-node-driver-h5jzv" WorkloadEndpoint="localhost-k8s-csi--node--driver--h5jzv-eth0" 
endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"localhost-k8s-csi--node--driver--h5jzv-eth0", GenerateName:"csi-node-driver-", Namespace:"calico-system", SelfLink:"", UID:"70af0792-807b-45ba-8d22-96d81d38b5e7", ResourceVersion:"662", Generation:0, CreationTimestamp:time.Date(2024, time.December, 13, 13, 34, 3, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"app.kubernetes.io/name":"csi-node-driver", "controller-revision-hash":"65bf684474", "k8s-app":"csi-node-driver", "name":"csi-node-driver", "pod-template-generation":"1", "projectcalico.org/namespace":"calico-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"csi-node-driver"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"localhost", ContainerID:"eebc8245c67c09382ddf9863b64b3bfc1c4564b4b28cabda982433ba51a45c95", Pod:"csi-node-driver-h5jzv", Endpoint:"eth0", ServiceAccountName:"csi-node-driver", IPNetworks:[]string{"192.168.88.131/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-system", "ksa.calico-system.csi-node-driver"}, InterfaceName:"cali170f32b928e", MAC:"a6:9e:d4:07:05:ec", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil)}} Dec 13 13:34:22.125486 containerd[1460]: 2024-12-13 13:34:22.110 [INFO][4783] cni-plugin/k8s.go 500: Wrote updated endpoint to datastore ContainerID="eebc8245c67c09382ddf9863b64b3bfc1c4564b4b28cabda982433ba51a45c95" Namespace="calico-system" Pod="csi-node-driver-h5jzv" WorkloadEndpoint="localhost-k8s-csi--node--driver--h5jzv-eth0" Dec 13 13:34:22.137065 systemd-networkd[1402]: cali929f90214b0: Link UP Dec 13 13:34:22.137298 systemd-networkd[1402]: cali929f90214b0: Gained carrier Dec 13 13:34:22.149460 containerd[1460]: 2024-12-13 13:34:21.860 [INFO][4796] cni-plugin/utils.go 100: File /var/lib/calico/mtu does not exist Dec 13 13:34:22.149460 containerd[1460]: 2024-12-13 13:34:21.888 [INFO][4796] cni-plugin/plugin.go 325: Calico CNI found existing endpoint: &{{WorkloadEndpoint projectcalico.org/v3} {localhost-k8s-coredns--7db6d8ff4d--tdhb9-eth0 coredns-7db6d8ff4d- kube-system bd2088f6-f886-4664-af58-213044237f3c 809 0 2024-12-13 13:33:56 +0000 UTC map[k8s-app:kube-dns pod-template-hash:7db6d8ff4d projectcalico.org/namespace:kube-system projectcalico.org/orchestrator:k8s projectcalico.org/serviceaccount:coredns] map[] [] [] []} {k8s localhost coredns-7db6d8ff4d-tdhb9 eth0 coredns [] [] [kns.kube-system ksa.kube-system.coredns] cali929f90214b0 [{dns UDP 53 0 } {dns-tcp TCP 53 0 } {metrics TCP 9153 0 }] []}} ContainerID="b674573071878cf88e90118562c552750697888ea87021c6207f1ff0bd7bb094" Namespace="kube-system" Pod="coredns-7db6d8ff4d-tdhb9" WorkloadEndpoint="localhost-k8s-coredns--7db6d8ff4d--tdhb9-" Dec 13 13:34:22.149460 containerd[1460]: 2024-12-13 13:34:21.889 [INFO][4796] cni-plugin/k8s.go 77: Extracted identifiers for CmdAddK8s ContainerID="b674573071878cf88e90118562c552750697888ea87021c6207f1ff0bd7bb094" Namespace="kube-system" Pod="coredns-7db6d8ff4d-tdhb9" WorkloadEndpoint="localhost-k8s-coredns--7db6d8ff4d--tdhb9-eth0" Dec 13 13:34:22.149460 containerd[1460]: 2024-12-13 13:34:21.994 [INFO][4865] ipam/ipam_plugin.go 225: Calico CNI IPAM request count IPv4=1 IPv6=0 
ContainerID="b674573071878cf88e90118562c552750697888ea87021c6207f1ff0bd7bb094" HandleID="k8s-pod-network.b674573071878cf88e90118562c552750697888ea87021c6207f1ff0bd7bb094" Workload="localhost-k8s-coredns--7db6d8ff4d--tdhb9-eth0" Dec 13 13:34:22.149460 containerd[1460]: 2024-12-13 13:34:22.005 [INFO][4865] ipam/ipam_plugin.go 265: Auto assigning IP ContainerID="b674573071878cf88e90118562c552750697888ea87021c6207f1ff0bd7bb094" HandleID="k8s-pod-network.b674573071878cf88e90118562c552750697888ea87021c6207f1ff0bd7bb094" Workload="localhost-k8s-coredns--7db6d8ff4d--tdhb9-eth0" assignArgs=ipam.AutoAssignArgs{Num4:1, Num6:0, HandleID:(*string)(0xc000378ac0), Attrs:map[string]string{"namespace":"kube-system", "node":"localhost", "pod":"coredns-7db6d8ff4d-tdhb9", "timestamp":"2024-12-13 13:34:21.993982776 +0000 UTC"}, Hostname:"localhost", IPv4Pools:[]net.IPNet{}, IPv6Pools:[]net.IPNet{}, MaxBlocksPerHost:0, HostReservedAttrIPv4s:(*ipam.HostReservedAttr)(nil), HostReservedAttrIPv6s:(*ipam.HostReservedAttr)(nil), IntendedUse:"Workload"} Dec 13 13:34:22.149460 containerd[1460]: 2024-12-13 13:34:22.005 [INFO][4865] ipam/ipam_plugin.go 353: About to acquire host-wide IPAM lock. Dec 13 13:34:22.149460 containerd[1460]: 2024-12-13 13:34:22.089 [INFO][4865] ipam/ipam_plugin.go 368: Acquired host-wide IPAM lock. Dec 13 13:34:22.149460 containerd[1460]: 2024-12-13 13:34:22.089 [INFO][4865] ipam/ipam.go 107: Auto-assign 1 ipv4, 0 ipv6 addrs for host 'localhost' Dec 13 13:34:22.149460 containerd[1460]: 2024-12-13 13:34:22.091 [INFO][4865] ipam/ipam.go 660: Looking up existing affinities for host handle="k8s-pod-network.b674573071878cf88e90118562c552750697888ea87021c6207f1ff0bd7bb094" host="localhost" Dec 13 13:34:22.149460 containerd[1460]: 2024-12-13 13:34:22.097 [INFO][4865] ipam/ipam.go 372: Looking up existing affinities for host host="localhost" Dec 13 13:34:22.149460 containerd[1460]: 2024-12-13 13:34:22.106 [INFO][4865] ipam/ipam.go 489: Trying affinity for 192.168.88.128/26 host="localhost" Dec 13 13:34:22.149460 containerd[1460]: 2024-12-13 13:34:22.110 [INFO][4865] ipam/ipam.go 155: Attempting to load block cidr=192.168.88.128/26 host="localhost" Dec 13 13:34:22.149460 containerd[1460]: 2024-12-13 13:34:22.112 [INFO][4865] ipam/ipam.go 232: Affinity is confirmed and block has been loaded cidr=192.168.88.128/26 host="localhost" Dec 13 13:34:22.149460 containerd[1460]: 2024-12-13 13:34:22.112 [INFO][4865] ipam/ipam.go 1180: Attempting to assign 1 addresses from block block=192.168.88.128/26 handle="k8s-pod-network.b674573071878cf88e90118562c552750697888ea87021c6207f1ff0bd7bb094" host="localhost" Dec 13 13:34:22.149460 containerd[1460]: 2024-12-13 13:34:22.113 [INFO][4865] ipam/ipam.go 1685: Creating new handle: k8s-pod-network.b674573071878cf88e90118562c552750697888ea87021c6207f1ff0bd7bb094 Dec 13 13:34:22.149460 containerd[1460]: 2024-12-13 13:34:22.117 [INFO][4865] ipam/ipam.go 1203: Writing block in order to claim IPs block=192.168.88.128/26 handle="k8s-pod-network.b674573071878cf88e90118562c552750697888ea87021c6207f1ff0bd7bb094" host="localhost" Dec 13 13:34:22.149460 containerd[1460]: 2024-12-13 13:34:22.123 [INFO][4865] ipam/ipam.go 1216: Successfully claimed IPs: [192.168.88.132/26] block=192.168.88.128/26 handle="k8s-pod-network.b674573071878cf88e90118562c552750697888ea87021c6207f1ff0bd7bb094" host="localhost" Dec 13 13:34:22.149460 containerd[1460]: 2024-12-13 13:34:22.123 [INFO][4865] ipam/ipam.go 847: Auto-assigned 1 out of 1 IPv4s: [192.168.88.132/26] 
handle="k8s-pod-network.b674573071878cf88e90118562c552750697888ea87021c6207f1ff0bd7bb094" host="localhost" Dec 13 13:34:22.149460 containerd[1460]: 2024-12-13 13:34:22.124 [INFO][4865] ipam/ipam_plugin.go 374: Released host-wide IPAM lock. Dec 13 13:34:22.149460 containerd[1460]: 2024-12-13 13:34:22.124 [INFO][4865] ipam/ipam_plugin.go 283: Calico CNI IPAM assigned addresses IPv4=[192.168.88.132/26] IPv6=[] ContainerID="b674573071878cf88e90118562c552750697888ea87021c6207f1ff0bd7bb094" HandleID="k8s-pod-network.b674573071878cf88e90118562c552750697888ea87021c6207f1ff0bd7bb094" Workload="localhost-k8s-coredns--7db6d8ff4d--tdhb9-eth0" Dec 13 13:34:22.150045 containerd[1460]: 2024-12-13 13:34:22.131 [INFO][4796] cni-plugin/k8s.go 386: Populated endpoint ContainerID="b674573071878cf88e90118562c552750697888ea87021c6207f1ff0bd7bb094" Namespace="kube-system" Pod="coredns-7db6d8ff4d-tdhb9" WorkloadEndpoint="localhost-k8s-coredns--7db6d8ff4d--tdhb9-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"localhost-k8s-coredns--7db6d8ff4d--tdhb9-eth0", GenerateName:"coredns-7db6d8ff4d-", Namespace:"kube-system", SelfLink:"", UID:"bd2088f6-f886-4664-af58-213044237f3c", ResourceVersion:"809", Generation:0, CreationTimestamp:time.Date(2024, time.December, 13, 13, 33, 56, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"k8s-app":"kube-dns", "pod-template-hash":"7db6d8ff4d", "projectcalico.org/namespace":"kube-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"coredns"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"localhost", ContainerID:"", Pod:"coredns-7db6d8ff4d-tdhb9", Endpoint:"eth0", ServiceAccountName:"coredns", IPNetworks:[]string{"192.168.88.132/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.kube-system", "ksa.kube-system.coredns"}, InterfaceName:"cali929f90214b0", MAC:"", Ports:[]v3.WorkloadEndpointPort{v3.WorkloadEndpointPort{Name:"dns", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"UDP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"dns-tcp", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"metrics", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x23c1, HostPort:0x0, HostIP:""}}, AllowSpoofedSourcePrefixes:[]string(nil)}} Dec 13 13:34:22.150045 containerd[1460]: 2024-12-13 13:34:22.132 [INFO][4796] cni-plugin/k8s.go 387: Calico CNI using IPs: [192.168.88.132/32] ContainerID="b674573071878cf88e90118562c552750697888ea87021c6207f1ff0bd7bb094" Namespace="kube-system" Pod="coredns-7db6d8ff4d-tdhb9" WorkloadEndpoint="localhost-k8s-coredns--7db6d8ff4d--tdhb9-eth0" Dec 13 13:34:22.150045 containerd[1460]: 2024-12-13 13:34:22.132 [INFO][4796] cni-plugin/dataplane_linux.go 69: Setting the host side veth name to cali929f90214b0 ContainerID="b674573071878cf88e90118562c552750697888ea87021c6207f1ff0bd7bb094" Namespace="kube-system" Pod="coredns-7db6d8ff4d-tdhb9" WorkloadEndpoint="localhost-k8s-coredns--7db6d8ff4d--tdhb9-eth0" Dec 13 13:34:22.150045 containerd[1460]: 2024-12-13 13:34:22.135 [INFO][4796] cni-plugin/dataplane_linux.go 508: Disabling IPv4 forwarding 
ContainerID="b674573071878cf88e90118562c552750697888ea87021c6207f1ff0bd7bb094" Namespace="kube-system" Pod="coredns-7db6d8ff4d-tdhb9" WorkloadEndpoint="localhost-k8s-coredns--7db6d8ff4d--tdhb9-eth0" Dec 13 13:34:22.150045 containerd[1460]: 2024-12-13 13:34:22.135 [INFO][4796] cni-plugin/k8s.go 414: Added Mac, interface name, and active container ID to endpoint ContainerID="b674573071878cf88e90118562c552750697888ea87021c6207f1ff0bd7bb094" Namespace="kube-system" Pod="coredns-7db6d8ff4d-tdhb9" WorkloadEndpoint="localhost-k8s-coredns--7db6d8ff4d--tdhb9-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"localhost-k8s-coredns--7db6d8ff4d--tdhb9-eth0", GenerateName:"coredns-7db6d8ff4d-", Namespace:"kube-system", SelfLink:"", UID:"bd2088f6-f886-4664-af58-213044237f3c", ResourceVersion:"809", Generation:0, CreationTimestamp:time.Date(2024, time.December, 13, 13, 33, 56, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"k8s-app":"kube-dns", "pod-template-hash":"7db6d8ff4d", "projectcalico.org/namespace":"kube-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"coredns"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"localhost", ContainerID:"b674573071878cf88e90118562c552750697888ea87021c6207f1ff0bd7bb094", Pod:"coredns-7db6d8ff4d-tdhb9", Endpoint:"eth0", ServiceAccountName:"coredns", IPNetworks:[]string{"192.168.88.132/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.kube-system", "ksa.kube-system.coredns"}, InterfaceName:"cali929f90214b0", MAC:"9a:38:89:c8:9e:28", Ports:[]v3.WorkloadEndpointPort{v3.WorkloadEndpointPort{Name:"dns", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"UDP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"dns-tcp", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"metrics", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x23c1, HostPort:0x0, HostIP:""}}, AllowSpoofedSourcePrefixes:[]string(nil)}} Dec 13 13:34:22.150045 containerd[1460]: 2024-12-13 13:34:22.146 [INFO][4796] cni-plugin/k8s.go 500: Wrote updated endpoint to datastore ContainerID="b674573071878cf88e90118562c552750697888ea87021c6207f1ff0bd7bb094" Namespace="kube-system" Pod="coredns-7db6d8ff4d-tdhb9" WorkloadEndpoint="localhost-k8s-coredns--7db6d8ff4d--tdhb9-eth0" Dec 13 13:34:22.150471 systemd[1]: Started cri-containerd-15bd886bfad9865c2cbf05528f4845081be4b7ea2734d0168e1b88e3c239ed4f.scope - libcontainer container 15bd886bfad9865c2cbf05528f4845081be4b7ea2734d0168e1b88e3c239ed4f. Dec 13 13:34:22.157826 systemd[1]: Started cri-containerd-9096b2d20114e03dd6176b2ee390ed4348d6d76baef9c659a0f7eec1c0a11cf1.scope - libcontainer container 9096b2d20114e03dd6176b2ee390ed4348d6d76baef9c659a0f7eec1c0a11cf1. Dec 13 13:34:22.171944 containerd[1460]: time="2024-12-13T13:34:22.170808470Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Dec 13 13:34:22.171944 containerd[1460]: time="2024-12-13T13:34:22.171357162Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." 
runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Dec 13 13:34:22.171944 containerd[1460]: time="2024-12-13T13:34:22.171573598Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Dec 13 13:34:22.172073 containerd[1460]: time="2024-12-13T13:34:22.171966075Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Dec 13 13:34:22.173479 systemd-resolved[1330]: Failed to determine the local hostname and LLMNR/mDNS names, ignoring: No such device or address Dec 13 13:34:22.177466 systemd-resolved[1330]: Failed to determine the local hostname and LLMNR/mDNS names, ignoring: No such device or address Dec 13 13:34:22.181593 systemd-networkd[1402]: calic7dc8a8f002: Link UP Dec 13 13:34:22.181794 systemd-networkd[1402]: calic7dc8a8f002: Gained carrier Dec 13 13:34:22.194106 containerd[1460]: time="2024-12-13T13:34:22.193860417Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Dec 13 13:34:22.194106 containerd[1460]: time="2024-12-13T13:34:22.193917023Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Dec 13 13:34:22.194106 containerd[1460]: time="2024-12-13T13:34:22.193932052Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Dec 13 13:34:22.194106 containerd[1460]: time="2024-12-13T13:34:22.194009958Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Dec 13 13:34:22.205800 containerd[1460]: 2024-12-13 13:34:21.883 [INFO][4807] cni-plugin/utils.go 100: File /var/lib/calico/mtu does not exist Dec 13 13:34:22.205800 containerd[1460]: 2024-12-13 13:34:21.902 [INFO][4807] cni-plugin/plugin.go 325: Calico CNI found existing endpoint: &{{WorkloadEndpoint projectcalico.org/v3} {localhost-k8s-calico--kube--controllers--55f74d585b--gkp22-eth0 calico-kube-controllers-55f74d585b- calico-system bcb02bd3-79c5-4f97-892a-aafa3090dcbe 808 0 2024-12-13 13:34:03 +0000 UTC map[app.kubernetes.io/name:calico-kube-controllers k8s-app:calico-kube-controllers pod-template-hash:55f74d585b projectcalico.org/namespace:calico-system projectcalico.org/orchestrator:k8s projectcalico.org/serviceaccount:calico-kube-controllers] map[] [] [] []} {k8s localhost calico-kube-controllers-55f74d585b-gkp22 eth0 calico-kube-controllers [] [] [kns.calico-system ksa.calico-system.calico-kube-controllers] calic7dc8a8f002 [] []}} ContainerID="507a8a50479890f6939e3183f9edd31e384c272bac973a078b682a0de21067c2" Namespace="calico-system" Pod="calico-kube-controllers-55f74d585b-gkp22" WorkloadEndpoint="localhost-k8s-calico--kube--controllers--55f74d585b--gkp22-" Dec 13 13:34:22.205800 containerd[1460]: 2024-12-13 13:34:21.902 [INFO][4807] cni-plugin/k8s.go 77: Extracted identifiers for CmdAddK8s ContainerID="507a8a50479890f6939e3183f9edd31e384c272bac973a078b682a0de21067c2" Namespace="calico-system" Pod="calico-kube-controllers-55f74d585b-gkp22" WorkloadEndpoint="localhost-k8s-calico--kube--controllers--55f74d585b--gkp22-eth0" Dec 13 13:34:22.205800 containerd[1460]: 2024-12-13 13:34:21.995 [INFO][4872] ipam/ipam_plugin.go 225: Calico CNI IPAM request count IPv4=1 IPv6=0 ContainerID="507a8a50479890f6939e3183f9edd31e384c272bac973a078b682a0de21067c2" 
HandleID="k8s-pod-network.507a8a50479890f6939e3183f9edd31e384c272bac973a078b682a0de21067c2" Workload="localhost-k8s-calico--kube--controllers--55f74d585b--gkp22-eth0" Dec 13 13:34:22.205800 containerd[1460]: 2024-12-13 13:34:22.005 [INFO][4872] ipam/ipam_plugin.go 265: Auto assigning IP ContainerID="507a8a50479890f6939e3183f9edd31e384c272bac973a078b682a0de21067c2" HandleID="k8s-pod-network.507a8a50479890f6939e3183f9edd31e384c272bac973a078b682a0de21067c2" Workload="localhost-k8s-calico--kube--controllers--55f74d585b--gkp22-eth0" assignArgs=ipam.AutoAssignArgs{Num4:1, Num6:0, HandleID:(*string)(0xc0002cd1f0), Attrs:map[string]string{"namespace":"calico-system", "node":"localhost", "pod":"calico-kube-controllers-55f74d585b-gkp22", "timestamp":"2024-12-13 13:34:21.99561345 +0000 UTC"}, Hostname:"localhost", IPv4Pools:[]net.IPNet{}, IPv6Pools:[]net.IPNet{}, MaxBlocksPerHost:0, HostReservedAttrIPv4s:(*ipam.HostReservedAttr)(nil), HostReservedAttrIPv6s:(*ipam.HostReservedAttr)(nil), IntendedUse:"Workload"} Dec 13 13:34:22.205800 containerd[1460]: 2024-12-13 13:34:22.005 [INFO][4872] ipam/ipam_plugin.go 353: About to acquire host-wide IPAM lock. Dec 13 13:34:22.205800 containerd[1460]: 2024-12-13 13:34:22.123 [INFO][4872] ipam/ipam_plugin.go 368: Acquired host-wide IPAM lock. Dec 13 13:34:22.205800 containerd[1460]: 2024-12-13 13:34:22.124 [INFO][4872] ipam/ipam.go 107: Auto-assign 1 ipv4, 0 ipv6 addrs for host 'localhost' Dec 13 13:34:22.205800 containerd[1460]: 2024-12-13 13:34:22.126 [INFO][4872] ipam/ipam.go 660: Looking up existing affinities for host handle="k8s-pod-network.507a8a50479890f6939e3183f9edd31e384c272bac973a078b682a0de21067c2" host="localhost" Dec 13 13:34:22.205800 containerd[1460]: 2024-12-13 13:34:22.131 [INFO][4872] ipam/ipam.go 372: Looking up existing affinities for host host="localhost" Dec 13 13:34:22.205800 containerd[1460]: 2024-12-13 13:34:22.135 [INFO][4872] ipam/ipam.go 489: Trying affinity for 192.168.88.128/26 host="localhost" Dec 13 13:34:22.205800 containerd[1460]: 2024-12-13 13:34:22.139 [INFO][4872] ipam/ipam.go 155: Attempting to load block cidr=192.168.88.128/26 host="localhost" Dec 13 13:34:22.205800 containerd[1460]: 2024-12-13 13:34:22.146 [INFO][4872] ipam/ipam.go 232: Affinity is confirmed and block has been loaded cidr=192.168.88.128/26 host="localhost" Dec 13 13:34:22.205800 containerd[1460]: 2024-12-13 13:34:22.146 [INFO][4872] ipam/ipam.go 1180: Attempting to assign 1 addresses from block block=192.168.88.128/26 handle="k8s-pod-network.507a8a50479890f6939e3183f9edd31e384c272bac973a078b682a0de21067c2" host="localhost" Dec 13 13:34:22.205800 containerd[1460]: 2024-12-13 13:34:22.148 [INFO][4872] ipam/ipam.go 1685: Creating new handle: k8s-pod-network.507a8a50479890f6939e3183f9edd31e384c272bac973a078b682a0de21067c2 Dec 13 13:34:22.205800 containerd[1460]: 2024-12-13 13:34:22.158 [INFO][4872] ipam/ipam.go 1203: Writing block in order to claim IPs block=192.168.88.128/26 handle="k8s-pod-network.507a8a50479890f6939e3183f9edd31e384c272bac973a078b682a0de21067c2" host="localhost" Dec 13 13:34:22.205800 containerd[1460]: 2024-12-13 13:34:22.166 [INFO][4872] ipam/ipam.go 1216: Successfully claimed IPs: [192.168.88.133/26] block=192.168.88.128/26 handle="k8s-pod-network.507a8a50479890f6939e3183f9edd31e384c272bac973a078b682a0de21067c2" host="localhost" Dec 13 13:34:22.205800 containerd[1460]: 2024-12-13 13:34:22.166 [INFO][4872] ipam/ipam.go 847: Auto-assigned 1 out of 1 IPv4s: [192.168.88.133/26] 
handle="k8s-pod-network.507a8a50479890f6939e3183f9edd31e384c272bac973a078b682a0de21067c2" host="localhost" Dec 13 13:34:22.205800 containerd[1460]: 2024-12-13 13:34:22.166 [INFO][4872] ipam/ipam_plugin.go 374: Released host-wide IPAM lock. Dec 13 13:34:22.205800 containerd[1460]: 2024-12-13 13:34:22.166 [INFO][4872] ipam/ipam_plugin.go 283: Calico CNI IPAM assigned addresses IPv4=[192.168.88.133/26] IPv6=[] ContainerID="507a8a50479890f6939e3183f9edd31e384c272bac973a078b682a0de21067c2" HandleID="k8s-pod-network.507a8a50479890f6939e3183f9edd31e384c272bac973a078b682a0de21067c2" Workload="localhost-k8s-calico--kube--controllers--55f74d585b--gkp22-eth0" Dec 13 13:34:22.206409 containerd[1460]: 2024-12-13 13:34:22.175 [INFO][4807] cni-plugin/k8s.go 386: Populated endpoint ContainerID="507a8a50479890f6939e3183f9edd31e384c272bac973a078b682a0de21067c2" Namespace="calico-system" Pod="calico-kube-controllers-55f74d585b-gkp22" WorkloadEndpoint="localhost-k8s-calico--kube--controllers--55f74d585b--gkp22-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"localhost-k8s-calico--kube--controllers--55f74d585b--gkp22-eth0", GenerateName:"calico-kube-controllers-55f74d585b-", Namespace:"calico-system", SelfLink:"", UID:"bcb02bd3-79c5-4f97-892a-aafa3090dcbe", ResourceVersion:"808", Generation:0, CreationTimestamp:time.Date(2024, time.December, 13, 13, 34, 3, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"app.kubernetes.io/name":"calico-kube-controllers", "k8s-app":"calico-kube-controllers", "pod-template-hash":"55f74d585b", "projectcalico.org/namespace":"calico-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"calico-kube-controllers"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"localhost", ContainerID:"", Pod:"calico-kube-controllers-55f74d585b-gkp22", Endpoint:"eth0", ServiceAccountName:"calico-kube-controllers", IPNetworks:[]string{"192.168.88.133/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-system", "ksa.calico-system.calico-kube-controllers"}, InterfaceName:"calic7dc8a8f002", MAC:"", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil)}} Dec 13 13:34:22.206409 containerd[1460]: 2024-12-13 13:34:22.175 [INFO][4807] cni-plugin/k8s.go 387: Calico CNI using IPs: [192.168.88.133/32] ContainerID="507a8a50479890f6939e3183f9edd31e384c272bac973a078b682a0de21067c2" Namespace="calico-system" Pod="calico-kube-controllers-55f74d585b-gkp22" WorkloadEndpoint="localhost-k8s-calico--kube--controllers--55f74d585b--gkp22-eth0" Dec 13 13:34:22.206409 containerd[1460]: 2024-12-13 13:34:22.175 [INFO][4807] cni-plugin/dataplane_linux.go 69: Setting the host side veth name to calic7dc8a8f002 ContainerID="507a8a50479890f6939e3183f9edd31e384c272bac973a078b682a0de21067c2" Namespace="calico-system" Pod="calico-kube-controllers-55f74d585b-gkp22" WorkloadEndpoint="localhost-k8s-calico--kube--controllers--55f74d585b--gkp22-eth0" Dec 13 13:34:22.206409 containerd[1460]: 2024-12-13 13:34:22.182 [INFO][4807] cni-plugin/dataplane_linux.go 508: Disabling IPv4 forwarding ContainerID="507a8a50479890f6939e3183f9edd31e384c272bac973a078b682a0de21067c2" Namespace="calico-system" 
Pod="calico-kube-controllers-55f74d585b-gkp22" WorkloadEndpoint="localhost-k8s-calico--kube--controllers--55f74d585b--gkp22-eth0" Dec 13 13:34:22.206409 containerd[1460]: 2024-12-13 13:34:22.185 [INFO][4807] cni-plugin/k8s.go 414: Added Mac, interface name, and active container ID to endpoint ContainerID="507a8a50479890f6939e3183f9edd31e384c272bac973a078b682a0de21067c2" Namespace="calico-system" Pod="calico-kube-controllers-55f74d585b-gkp22" WorkloadEndpoint="localhost-k8s-calico--kube--controllers--55f74d585b--gkp22-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"localhost-k8s-calico--kube--controllers--55f74d585b--gkp22-eth0", GenerateName:"calico-kube-controllers-55f74d585b-", Namespace:"calico-system", SelfLink:"", UID:"bcb02bd3-79c5-4f97-892a-aafa3090dcbe", ResourceVersion:"808", Generation:0, CreationTimestamp:time.Date(2024, time.December, 13, 13, 34, 3, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"app.kubernetes.io/name":"calico-kube-controllers", "k8s-app":"calico-kube-controllers", "pod-template-hash":"55f74d585b", "projectcalico.org/namespace":"calico-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"calico-kube-controllers"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"localhost", ContainerID:"507a8a50479890f6939e3183f9edd31e384c272bac973a078b682a0de21067c2", Pod:"calico-kube-controllers-55f74d585b-gkp22", Endpoint:"eth0", ServiceAccountName:"calico-kube-controllers", IPNetworks:[]string{"192.168.88.133/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-system", "ksa.calico-system.calico-kube-controllers"}, InterfaceName:"calic7dc8a8f002", MAC:"e6:9b:b8:42:03:a3", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil)}} Dec 13 13:34:22.206409 containerd[1460]: 2024-12-13 13:34:22.198 [INFO][4807] cni-plugin/k8s.go 500: Wrote updated endpoint to datastore ContainerID="507a8a50479890f6939e3183f9edd31e384c272bac973a078b682a0de21067c2" Namespace="calico-system" Pod="calico-kube-controllers-55f74d585b-gkp22" WorkloadEndpoint="localhost-k8s-calico--kube--controllers--55f74d585b--gkp22-eth0" Dec 13 13:34:22.216096 containerd[1460]: time="2024-12-13T13:34:22.215975223Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-apiserver-7d87f7746b-flf6n,Uid:5ba12381-f554-4e4f-8ceb-405dc070dc9a,Namespace:calico-apiserver,Attempt:6,} returns sandbox id \"15bd886bfad9865c2cbf05528f4845081be4b7ea2734d0168e1b88e3c239ed4f\"" Dec 13 13:34:22.225710 containerd[1460]: time="2024-12-13T13:34:22.225643756Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/apiserver:v3.29.1\"" Dec 13 13:34:22.226568 systemd[1]: Started cri-containerd-eebc8245c67c09382ddf9863b64b3bfc1c4564b4b28cabda982433ba51a45c95.scope - libcontainer container eebc8245c67c09382ddf9863b64b3bfc1c4564b4b28cabda982433ba51a45c95. 
Dec 13 13:34:22.231256 containerd[1460]: time="2024-12-13T13:34:22.231210906Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-7db6d8ff4d-n488k,Uid:4efa0db1-4649-47cd-847b-b2cd3ddad9b5,Namespace:kube-system,Attempt:6,} returns sandbox id \"9096b2d20114e03dd6176b2ee390ed4348d6d76baef9c659a0f7eec1c0a11cf1\"" Dec 13 13:34:22.231884 kubelet[2640]: E1213 13:34:22.231851 2640 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Dec 13 13:34:22.231989 systemd-networkd[1402]: cali80c268274f2: Link UP Dec 13 13:34:22.232462 systemd-networkd[1402]: cali80c268274f2: Gained carrier Dec 13 13:34:22.234381 systemd[1]: Started cri-containerd-b674573071878cf88e90118562c552750697888ea87021c6207f1ff0bd7bb094.scope - libcontainer container b674573071878cf88e90118562c552750697888ea87021c6207f1ff0bd7bb094. Dec 13 13:34:22.243296 containerd[1460]: time="2024-12-13T13:34:22.243239373Z" level=info msg="CreateContainer within sandbox \"9096b2d20114e03dd6176b2ee390ed4348d6d76baef9c659a0f7eec1c0a11cf1\" for container &ContainerMetadata{Name:coredns,Attempt:0,}" Dec 13 13:34:22.254780 containerd[1460]: 2024-12-13 13:34:21.914 [INFO][4828] cni-plugin/utils.go 100: File /var/lib/calico/mtu does not exist Dec 13 13:34:22.254780 containerd[1460]: 2024-12-13 13:34:21.928 [INFO][4828] cni-plugin/plugin.go 325: Calico CNI found existing endpoint: &{{WorkloadEndpoint projectcalico.org/v3} {localhost-k8s-calico--apiserver--7d87f7746b--mqntw-eth0 calico-apiserver-7d87f7746b- calico-apiserver cadc3599-3084-49e0-99bc-626d4d423dd6 807 0 2024-12-13 13:34:03 +0000 UTC map[apiserver:true app.kubernetes.io/name:calico-apiserver k8s-app:calico-apiserver pod-template-hash:7d87f7746b projectcalico.org/namespace:calico-apiserver projectcalico.org/orchestrator:k8s projectcalico.org/serviceaccount:calico-apiserver] map[] [] [] []} {k8s localhost calico-apiserver-7d87f7746b-mqntw eth0 calico-apiserver [] [] [kns.calico-apiserver ksa.calico-apiserver.calico-apiserver] cali80c268274f2 [] []}} ContainerID="4e13fdab269067716759091d8e4b2f3b7241fd6258b33d2aa74f0333bfee1bbc" Namespace="calico-apiserver" Pod="calico-apiserver-7d87f7746b-mqntw" WorkloadEndpoint="localhost-k8s-calico--apiserver--7d87f7746b--mqntw-" Dec 13 13:34:22.254780 containerd[1460]: 2024-12-13 13:34:21.928 [INFO][4828] cni-plugin/k8s.go 77: Extracted identifiers for CmdAddK8s ContainerID="4e13fdab269067716759091d8e4b2f3b7241fd6258b33d2aa74f0333bfee1bbc" Namespace="calico-apiserver" Pod="calico-apiserver-7d87f7746b-mqntw" WorkloadEndpoint="localhost-k8s-calico--apiserver--7d87f7746b--mqntw-eth0" Dec 13 13:34:22.254780 containerd[1460]: 2024-12-13 13:34:21.991 [INFO][4899] ipam/ipam_plugin.go 225: Calico CNI IPAM request count IPv4=1 IPv6=0 ContainerID="4e13fdab269067716759091d8e4b2f3b7241fd6258b33d2aa74f0333bfee1bbc" HandleID="k8s-pod-network.4e13fdab269067716759091d8e4b2f3b7241fd6258b33d2aa74f0333bfee1bbc" Workload="localhost-k8s-calico--apiserver--7d87f7746b--mqntw-eth0" Dec 13 13:34:22.254780 containerd[1460]: 2024-12-13 13:34:22.005 [INFO][4899] ipam/ipam_plugin.go 265: Auto assigning IP ContainerID="4e13fdab269067716759091d8e4b2f3b7241fd6258b33d2aa74f0333bfee1bbc" HandleID="k8s-pod-network.4e13fdab269067716759091d8e4b2f3b7241fd6258b33d2aa74f0333bfee1bbc" Workload="localhost-k8s-calico--apiserver--7d87f7746b--mqntw-eth0" assignArgs=ipam.AutoAssignArgs{Num4:1, Num6:0, HandleID:(*string)(0xc00027e2f0), 
Attrs:map[string]string{"namespace":"calico-apiserver", "node":"localhost", "pod":"calico-apiserver-7d87f7746b-mqntw", "timestamp":"2024-12-13 13:34:21.991764488 +0000 UTC"}, Hostname:"localhost", IPv4Pools:[]net.IPNet{}, IPv6Pools:[]net.IPNet{}, MaxBlocksPerHost:0, HostReservedAttrIPv4s:(*ipam.HostReservedAttr)(nil), HostReservedAttrIPv6s:(*ipam.HostReservedAttr)(nil), IntendedUse:"Workload"} Dec 13 13:34:22.254780 containerd[1460]: 2024-12-13 13:34:22.005 [INFO][4899] ipam/ipam_plugin.go 353: About to acquire host-wide IPAM lock. Dec 13 13:34:22.254780 containerd[1460]: 2024-12-13 13:34:22.167 [INFO][4899] ipam/ipam_plugin.go 368: Acquired host-wide IPAM lock. Dec 13 13:34:22.254780 containerd[1460]: 2024-12-13 13:34:22.168 [INFO][4899] ipam/ipam.go 107: Auto-assign 1 ipv4, 0 ipv6 addrs for host 'localhost' Dec 13 13:34:22.254780 containerd[1460]: 2024-12-13 13:34:22.173 [INFO][4899] ipam/ipam.go 660: Looking up existing affinities for host handle="k8s-pod-network.4e13fdab269067716759091d8e4b2f3b7241fd6258b33d2aa74f0333bfee1bbc" host="localhost" Dec 13 13:34:22.254780 containerd[1460]: 2024-12-13 13:34:22.182 [INFO][4899] ipam/ipam.go 372: Looking up existing affinities for host host="localhost" Dec 13 13:34:22.254780 containerd[1460]: 2024-12-13 13:34:22.193 [INFO][4899] ipam/ipam.go 489: Trying affinity for 192.168.88.128/26 host="localhost" Dec 13 13:34:22.254780 containerd[1460]: 2024-12-13 13:34:22.196 [INFO][4899] ipam/ipam.go 155: Attempting to load block cidr=192.168.88.128/26 host="localhost" Dec 13 13:34:22.254780 containerd[1460]: 2024-12-13 13:34:22.198 [INFO][4899] ipam/ipam.go 232: Affinity is confirmed and block has been loaded cidr=192.168.88.128/26 host="localhost" Dec 13 13:34:22.254780 containerd[1460]: 2024-12-13 13:34:22.198 [INFO][4899] ipam/ipam.go 1180: Attempting to assign 1 addresses from block block=192.168.88.128/26 handle="k8s-pod-network.4e13fdab269067716759091d8e4b2f3b7241fd6258b33d2aa74f0333bfee1bbc" host="localhost" Dec 13 13:34:22.254780 containerd[1460]: 2024-12-13 13:34:22.200 [INFO][4899] ipam/ipam.go 1685: Creating new handle: k8s-pod-network.4e13fdab269067716759091d8e4b2f3b7241fd6258b33d2aa74f0333bfee1bbc Dec 13 13:34:22.254780 containerd[1460]: 2024-12-13 13:34:22.211 [INFO][4899] ipam/ipam.go 1203: Writing block in order to claim IPs block=192.168.88.128/26 handle="k8s-pod-network.4e13fdab269067716759091d8e4b2f3b7241fd6258b33d2aa74f0333bfee1bbc" host="localhost" Dec 13 13:34:22.254780 containerd[1460]: 2024-12-13 13:34:22.221 [INFO][4899] ipam/ipam.go 1216: Successfully claimed IPs: [192.168.88.134/26] block=192.168.88.128/26 handle="k8s-pod-network.4e13fdab269067716759091d8e4b2f3b7241fd6258b33d2aa74f0333bfee1bbc" host="localhost" Dec 13 13:34:22.254780 containerd[1460]: 2024-12-13 13:34:22.222 [INFO][4899] ipam/ipam.go 847: Auto-assigned 1 out of 1 IPv4s: [192.168.88.134/26] handle="k8s-pod-network.4e13fdab269067716759091d8e4b2f3b7241fd6258b33d2aa74f0333bfee1bbc" host="localhost" Dec 13 13:34:22.254780 containerd[1460]: 2024-12-13 13:34:22.222 [INFO][4899] ipam/ipam_plugin.go 374: Released host-wide IPAM lock. 
Dec 13 13:34:22.254780 containerd[1460]: 2024-12-13 13:34:22.222 [INFO][4899] ipam/ipam_plugin.go 283: Calico CNI IPAM assigned addresses IPv4=[192.168.88.134/26] IPv6=[] ContainerID="4e13fdab269067716759091d8e4b2f3b7241fd6258b33d2aa74f0333bfee1bbc" HandleID="k8s-pod-network.4e13fdab269067716759091d8e4b2f3b7241fd6258b33d2aa74f0333bfee1bbc" Workload="localhost-k8s-calico--apiserver--7d87f7746b--mqntw-eth0" Dec 13 13:34:22.255333 containerd[1460]: 2024-12-13 13:34:22.229 [INFO][4828] cni-plugin/k8s.go 386: Populated endpoint ContainerID="4e13fdab269067716759091d8e4b2f3b7241fd6258b33d2aa74f0333bfee1bbc" Namespace="calico-apiserver" Pod="calico-apiserver-7d87f7746b-mqntw" WorkloadEndpoint="localhost-k8s-calico--apiserver--7d87f7746b--mqntw-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"localhost-k8s-calico--apiserver--7d87f7746b--mqntw-eth0", GenerateName:"calico-apiserver-7d87f7746b-", Namespace:"calico-apiserver", SelfLink:"", UID:"cadc3599-3084-49e0-99bc-626d4d423dd6", ResourceVersion:"807", Generation:0, CreationTimestamp:time.Date(2024, time.December, 13, 13, 34, 3, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"apiserver":"true", "app.kubernetes.io/name":"calico-apiserver", "k8s-app":"calico-apiserver", "pod-template-hash":"7d87f7746b", "projectcalico.org/namespace":"calico-apiserver", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"calico-apiserver"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"localhost", ContainerID:"", Pod:"calico-apiserver-7d87f7746b-mqntw", Endpoint:"eth0", ServiceAccountName:"calico-apiserver", IPNetworks:[]string{"192.168.88.134/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-apiserver", "ksa.calico-apiserver.calico-apiserver"}, InterfaceName:"cali80c268274f2", MAC:"", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil)}} Dec 13 13:34:22.255333 containerd[1460]: 2024-12-13 13:34:22.229 [INFO][4828] cni-plugin/k8s.go 387: Calico CNI using IPs: [192.168.88.134/32] ContainerID="4e13fdab269067716759091d8e4b2f3b7241fd6258b33d2aa74f0333bfee1bbc" Namespace="calico-apiserver" Pod="calico-apiserver-7d87f7746b-mqntw" WorkloadEndpoint="localhost-k8s-calico--apiserver--7d87f7746b--mqntw-eth0" Dec 13 13:34:22.255333 containerd[1460]: 2024-12-13 13:34:22.229 [INFO][4828] cni-plugin/dataplane_linux.go 69: Setting the host side veth name to cali80c268274f2 ContainerID="4e13fdab269067716759091d8e4b2f3b7241fd6258b33d2aa74f0333bfee1bbc" Namespace="calico-apiserver" Pod="calico-apiserver-7d87f7746b-mqntw" WorkloadEndpoint="localhost-k8s-calico--apiserver--7d87f7746b--mqntw-eth0" Dec 13 13:34:22.255333 containerd[1460]: 2024-12-13 13:34:22.233 [INFO][4828] cni-plugin/dataplane_linux.go 508: Disabling IPv4 forwarding ContainerID="4e13fdab269067716759091d8e4b2f3b7241fd6258b33d2aa74f0333bfee1bbc" Namespace="calico-apiserver" Pod="calico-apiserver-7d87f7746b-mqntw" WorkloadEndpoint="localhost-k8s-calico--apiserver--7d87f7746b--mqntw-eth0" Dec 13 13:34:22.255333 containerd[1460]: 2024-12-13 13:34:22.236 [INFO][4828] cni-plugin/k8s.go 414: Added Mac, interface name, and active container ID to endpoint 
ContainerID="4e13fdab269067716759091d8e4b2f3b7241fd6258b33d2aa74f0333bfee1bbc" Namespace="calico-apiserver" Pod="calico-apiserver-7d87f7746b-mqntw" WorkloadEndpoint="localhost-k8s-calico--apiserver--7d87f7746b--mqntw-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"localhost-k8s-calico--apiserver--7d87f7746b--mqntw-eth0", GenerateName:"calico-apiserver-7d87f7746b-", Namespace:"calico-apiserver", SelfLink:"", UID:"cadc3599-3084-49e0-99bc-626d4d423dd6", ResourceVersion:"807", Generation:0, CreationTimestamp:time.Date(2024, time.December, 13, 13, 34, 3, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"apiserver":"true", "app.kubernetes.io/name":"calico-apiserver", "k8s-app":"calico-apiserver", "pod-template-hash":"7d87f7746b", "projectcalico.org/namespace":"calico-apiserver", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"calico-apiserver"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"localhost", ContainerID:"4e13fdab269067716759091d8e4b2f3b7241fd6258b33d2aa74f0333bfee1bbc", Pod:"calico-apiserver-7d87f7746b-mqntw", Endpoint:"eth0", ServiceAccountName:"calico-apiserver", IPNetworks:[]string{"192.168.88.134/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-apiserver", "ksa.calico-apiserver.calico-apiserver"}, InterfaceName:"cali80c268274f2", MAC:"5a:d5:06:c7:26:d4", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil)}} Dec 13 13:34:22.255333 containerd[1460]: 2024-12-13 13:34:22.247 [INFO][4828] cni-plugin/k8s.go 500: Wrote updated endpoint to datastore ContainerID="4e13fdab269067716759091d8e4b2f3b7241fd6258b33d2aa74f0333bfee1bbc" Namespace="calico-apiserver" Pod="calico-apiserver-7d87f7746b-mqntw" WorkloadEndpoint="localhost-k8s-calico--apiserver--7d87f7746b--mqntw-eth0" Dec 13 13:34:22.254787 systemd-resolved[1330]: Failed to determine the local hostname and LLMNR/mDNS names, ignoring: No such device or address Dec 13 13:34:22.257772 containerd[1460]: time="2024-12-13T13:34:22.257563543Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Dec 13 13:34:22.257772 containerd[1460]: time="2024-12-13T13:34:22.257620830Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Dec 13 13:34:22.257772 containerd[1460]: time="2024-12-13T13:34:22.257635127Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Dec 13 13:34:22.257772 containerd[1460]: time="2024-12-13T13:34:22.257707073Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." 
runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Dec 13 13:34:22.258302 systemd-resolved[1330]: Failed to determine the local hostname and LLMNR/mDNS names, ignoring: No such device or address Dec 13 13:34:22.274027 containerd[1460]: time="2024-12-13T13:34:22.273468944Z" level=info msg="CreateContainer within sandbox \"9096b2d20114e03dd6176b2ee390ed4348d6d76baef9c659a0f7eec1c0a11cf1\" for &ContainerMetadata{Name:coredns,Attempt:0,} returns container id \"a8cad7176a0cf0f6c835f229237e8805f2953e37f224b7cb9282287c00a9fde8\"" Dec 13 13:34:22.276554 containerd[1460]: time="2024-12-13T13:34:22.276516429Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:csi-node-driver-h5jzv,Uid:70af0792-807b-45ba-8d22-96d81d38b5e7,Namespace:calico-system,Attempt:4,} returns sandbox id \"eebc8245c67c09382ddf9863b64b3bfc1c4564b4b28cabda982433ba51a45c95\"" Dec 13 13:34:22.281484 systemd[1]: Started cri-containerd-507a8a50479890f6939e3183f9edd31e384c272bac973a078b682a0de21067c2.scope - libcontainer container 507a8a50479890f6939e3183f9edd31e384c272bac973a078b682a0de21067c2. Dec 13 13:34:22.292105 containerd[1460]: time="2024-12-13T13:34:22.292058397Z" level=info msg="StartContainer for \"a8cad7176a0cf0f6c835f229237e8805f2953e37f224b7cb9282287c00a9fde8\"" Dec 13 13:34:22.303848 containerd[1460]: time="2024-12-13T13:34:22.303668678Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-7db6d8ff4d-tdhb9,Uid:bd2088f6-f886-4664-af58-213044237f3c,Namespace:kube-system,Attempt:6,} returns sandbox id \"b674573071878cf88e90118562c552750697888ea87021c6207f1ff0bd7bb094\"" Dec 13 13:34:22.304030 systemd-resolved[1330]: Failed to determine the local hostname and LLMNR/mDNS names, ignoring: No such device or address Dec 13 13:34:22.305288 containerd[1460]: time="2024-12-13T13:34:22.305175759Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Dec 13 13:34:22.305647 containerd[1460]: time="2024-12-13T13:34:22.305586010Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Dec 13 13:34:22.305724 containerd[1460]: time="2024-12-13T13:34:22.305623941Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Dec 13 13:34:22.305914 containerd[1460]: time="2024-12-13T13:34:22.305874262Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Dec 13 13:34:22.307115 kubelet[2640]: E1213 13:34:22.306903 2640 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Dec 13 13:34:22.314810 containerd[1460]: time="2024-12-13T13:34:22.314775113Z" level=info msg="CreateContainer within sandbox \"b674573071878cf88e90118562c552750697888ea87021c6207f1ff0bd7bb094\" for container &ContainerMetadata{Name:coredns,Attempt:0,}" Dec 13 13:34:22.332573 systemd[1]: Started cri-containerd-4e13fdab269067716759091d8e4b2f3b7241fd6258b33d2aa74f0333bfee1bbc.scope - libcontainer container 4e13fdab269067716759091d8e4b2f3b7241fd6258b33d2aa74f0333bfee1bbc. 
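A small side note on reading the endpoint dumps: they are printed with Go's %#v-style verb, so port numbers appear in hex. In the coredns-7db6d8ff4d-tdhb9 dumps above, Port:0x35 is 53 (the dns and dns-tcp entries) and Port:0x23c1 is 9153 (metrics), matching the decimal [{dns UDP 53 …} {metrics TCP 9153 …}] summary printed when the endpoint was first found:

    package main

    import "fmt"

    func main() {
    	// Hex port values from the WorkloadEndpointPort dumps above.
    	fmt.Println(0x35, 0x23c1) // 53 9153 — CoreDNS's DNS and metrics ports
    }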
Dec 13 13:34:22.335762 containerd[1460]: time="2024-12-13T13:34:22.335633319Z" level=info msg="CreateContainer within sandbox \"b674573071878cf88e90118562c552750697888ea87021c6207f1ff0bd7bb094\" for &ContainerMetadata{Name:coredns,Attempt:0,} returns container id \"7d329dbdebfe1934f12a8611fd605a137949deecc2a7b75fa953e559db6ada1b\"" Dec 13 13:34:22.338644 containerd[1460]: time="2024-12-13T13:34:22.338279189Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-kube-controllers-55f74d585b-gkp22,Uid:bcb02bd3-79c5-4f97-892a-aafa3090dcbe,Namespace:calico-system,Attempt:6,} returns sandbox id \"507a8a50479890f6939e3183f9edd31e384c272bac973a078b682a0de21067c2\"" Dec 13 13:34:22.339011 containerd[1460]: time="2024-12-13T13:34:22.338814134Z" level=info msg="StartContainer for \"7d329dbdebfe1934f12a8611fd605a137949deecc2a7b75fa953e559db6ada1b\"" Dec 13 13:34:22.342413 kubelet[2640]: E1213 13:34:22.342288 2640 cadvisor_stats_provider.go:500] "Partial failure issuing cadvisor.ContainerInfoV2" err="partial failures: [\"/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-podbcb02bd3_79c5_4f97_892a_aafa3090dcbe.slice/cri-containerd-507a8a50479890f6939e3183f9edd31e384c272bac973a078b682a0de21067c2.scope\": RecentStats: unable to find data in memory cache]" Dec 13 13:34:22.345507 systemd[1]: Started cri-containerd-a8cad7176a0cf0f6c835f229237e8805f2953e37f224b7cb9282287c00a9fde8.scope - libcontainer container a8cad7176a0cf0f6c835f229237e8805f2953e37f224b7cb9282287c00a9fde8. Dec 13 13:34:22.350268 systemd-resolved[1330]: Failed to determine the local hostname and LLMNR/mDNS names, ignoring: No such device or address Dec 13 13:34:22.369445 systemd[1]: Started cri-containerd-7d329dbdebfe1934f12a8611fd605a137949deecc2a7b75fa953e559db6ada1b.scope - libcontainer container 7d329dbdebfe1934f12a8611fd605a137949deecc2a7b75fa953e559db6ada1b. 
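The recurring kubelet dns.go:153 error is a warning rather than a failure: the resolver configuration handed to pods honors at most three nameserver entries (the classic glibc limit), so a node resolv.conf listing more upstreams is truncated to the first three — here "1.1.1.1 1.0.0.1 8.8.8.8". A sketch of that truncation, with a hypothetical fourth upstream added to show the drop:

    package main

    import (
    	"fmt"
    	"strings"
    )

    const maxNS = 3 // glibc's resolver honors at most 3 "nameserver" lines

    // applyNameserverLimit mimics the behaviour behind kubelet's warning above:
    // when the node lists more servers than the resolver supports, the extras
    // are omitted and only the first three are applied.
    func applyNameserverLimit(servers []string) []string {
    	if len(servers) > maxNS {
    		return servers[:maxNS]
    	}
    	return servers
    }

    func main() {
    	// Hypothetical resolv.conf with four upstreams; 8.8.4.4 is invented here.
    	servers := strings.Fields("1.1.1.1 1.0.0.1 8.8.8.8 8.8.4.4")
    	fmt.Println(applyNameserverLimit(servers)) // [1.1.1.1 1.0.0.1 8.8.8.8]
    }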
Dec 13 13:34:22.382517 containerd[1460]: time="2024-12-13T13:34:22.382481040Z" level=info msg="StartContainer for \"a8cad7176a0cf0f6c835f229237e8805f2953e37f224b7cb9282287c00a9fde8\" returns successfully" Dec 13 13:34:22.387448 containerd[1460]: time="2024-12-13T13:34:22.387420969Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-apiserver-7d87f7746b-mqntw,Uid:cadc3599-3084-49e0-99bc-626d4d423dd6,Namespace:calico-apiserver,Attempt:6,} returns sandbox id \"4e13fdab269067716759091d8e4b2f3b7241fd6258b33d2aa74f0333bfee1bbc\"" Dec 13 13:34:22.403037 containerd[1460]: time="2024-12-13T13:34:22.402949793Z" level=info msg="StartContainer for \"7d329dbdebfe1934f12a8611fd605a137949deecc2a7b75fa953e559db6ada1b\" returns successfully" Dec 13 13:34:22.682647 kubelet[2640]: E1213 13:34:22.682372 2640 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Dec 13 13:34:22.686249 kubelet[2640]: E1213 13:34:22.686224 2640 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Dec 13 13:34:22.700133 kubelet[2640]: E1213 13:34:22.700104 2640 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Dec 13 13:34:22.709721 kubelet[2640]: I1213 13:34:22.706912 2640 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/coredns-7db6d8ff4d-n488k" podStartSLOduration=26.706891854 podStartE2EDuration="26.706891854s" podCreationTimestamp="2024-12-13 13:33:56 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2024-12-13 13:34:22.706844795 +0000 UTC m=+40.906244156" watchObservedRunningTime="2024-12-13 13:34:22.706891854 +0000 UTC m=+40.906291205" Dec 13 13:34:22.709721 kubelet[2640]: I1213 13:34:22.707015 2640 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/coredns-7db6d8ff4d-tdhb9" podStartSLOduration=26.707011248 podStartE2EDuration="26.707011248s" podCreationTimestamp="2024-12-13 13:33:56 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2024-12-13 13:34:22.692529671 +0000 UTC m=+40.891929032" watchObservedRunningTime="2024-12-13 13:34:22.707011248 +0000 UTC m=+40.906410609" Dec 13 13:34:22.722630 kubelet[2640]: I1213 13:34:22.722601 2640 prober_manager.go:312] "Failed to trigger a manual run" probe="Readiness" Dec 13 13:34:22.727824 kubelet[2640]: E1213 13:34:22.727029 2640 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Dec 13 13:34:23.120348 kernel: bpftool[5469]: memfd_create() called without MFD_EXEC or MFD_NOEXEC_SEAL set Dec 13 13:34:23.317532 systemd-networkd[1402]: cali47bfc7cde53: Gained IPv6LL Dec 13 13:34:23.363751 systemd-networkd[1402]: vxlan.calico: Link UP Dec 13 13:34:23.363762 systemd-networkd[1402]: vxlan.calico: Gained carrier Dec 13 13:34:23.383941 systemd-networkd[1402]: cali929f90214b0: Gained IPv6LL Dec 13 13:34:23.711341 kubelet[2640]: E1213 13:34:23.709083 2640 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, 
the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Dec 13 13:34:23.711341 kubelet[2640]: E1213 13:34:23.709798 2640 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Dec 13 13:34:23.711341 kubelet[2640]: E1213 13:34:23.710252 2640 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Dec 13 13:34:23.766866 systemd-networkd[1402]: cali80c268274f2: Gained IPv6LL Dec 13 13:34:23.957454 systemd-networkd[1402]: calic7dc8a8f002: Gained IPv6LL Dec 13 13:34:23.958597 systemd-networkd[1402]: calibdb04e42e10: Gained IPv6LL Dec 13 13:34:24.086440 systemd-networkd[1402]: cali170f32b928e: Gained IPv6LL Dec 13 13:34:24.710193 kubelet[2640]: E1213 13:34:24.710141 2640 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Dec 13 13:34:24.710889 kubelet[2640]: E1213 13:34:24.710451 2640 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Dec 13 13:34:25.173440 systemd-networkd[1402]: vxlan.calico: Gained IPv6LL Dec 13 13:34:25.396387 containerd[1460]: time="2024-12-13T13:34:25.396333568Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/apiserver:v3.29.1\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Dec 13 13:34:25.397067 containerd[1460]: time="2024-12-13T13:34:25.397041407Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/apiserver:v3.29.1: active requests=0, bytes read=42001404" Dec 13 13:34:25.398363 containerd[1460]: time="2024-12-13T13:34:25.398336320Z" level=info msg="ImageCreate event name:\"sha256:421726ace5ed13894f7edf594dd3a462947aedc13d0f69d08525d7369477fb70\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Dec 13 13:34:25.414716 containerd[1460]: time="2024-12-13T13:34:25.414676819Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/apiserver@sha256:b8c43e264fe52e0c327b0bf3ac882a0224b33bdd7f4ff58a74242da7d9b00486\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Dec 13 13:34:25.415321 containerd[1460]: time="2024-12-13T13:34:25.415257319Z" level=info msg="Pulled image \"ghcr.io/flatcar/calico/apiserver:v3.29.1\" with image id \"sha256:421726ace5ed13894f7edf594dd3a462947aedc13d0f69d08525d7369477fb70\", repo tag \"ghcr.io/flatcar/calico/apiserver:v3.29.1\", repo digest \"ghcr.io/flatcar/calico/apiserver@sha256:b8c43e264fe52e0c327b0bf3ac882a0224b33bdd7f4ff58a74242da7d9b00486\", size \"43494504\" in 3.18955924s" Dec 13 13:34:25.415321 containerd[1460]: time="2024-12-13T13:34:25.415284661Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/apiserver:v3.29.1\" returns image reference \"sha256:421726ace5ed13894f7edf594dd3a462947aedc13d0f69d08525d7369477fb70\"" Dec 13 13:34:25.416415 containerd[1460]: time="2024-12-13T13:34:25.416364889Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/csi:v3.29.1\"" Dec 13 13:34:25.420850 containerd[1460]: time="2024-12-13T13:34:25.420794328Z" level=info msg="CreateContainer within sandbox \"15bd886bfad9865c2cbf05528f4845081be4b7ea2734d0168e1b88e3c239ed4f\" for container &ContainerMetadata{Name:calico-apiserver,Attempt:0,}" Dec 13 13:34:25.458531 containerd[1460]: time="2024-12-13T13:34:25.458267027Z" 
level=info msg="CreateContainer within sandbox \"15bd886bfad9865c2cbf05528f4845081be4b7ea2734d0168e1b88e3c239ed4f\" for &ContainerMetadata{Name:calico-apiserver,Attempt:0,} returns container id \"4d0768dc2368275cedd37212483e604e1943d112917a0bbf7d1b7b288bc09089\"" Dec 13 13:34:25.463405 containerd[1460]: time="2024-12-13T13:34:25.463303496Z" level=info msg="StartContainer for \"4d0768dc2368275cedd37212483e604e1943d112917a0bbf7d1b7b288bc09089\"" Dec 13 13:34:25.463628 systemd[1]: Started sshd@10-10.0.0.150:22-10.0.0.1:45724.service - OpenSSH per-connection server daemon (10.0.0.1:45724). Dec 13 13:34:25.494567 systemd[1]: Started cri-containerd-4d0768dc2368275cedd37212483e604e1943d112917a0bbf7d1b7b288bc09089.scope - libcontainer container 4d0768dc2368275cedd37212483e604e1943d112917a0bbf7d1b7b288bc09089. Dec 13 13:34:25.515592 sshd[5565]: Accepted publickey for core from 10.0.0.1 port 45724 ssh2: RSA SHA256:dZtUZWIHZCqWPvvtEO8QUYeHUprV5e9zMs000ybProI Dec 13 13:34:25.517339 sshd-session[5565]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Dec 13 13:34:25.521399 systemd-logind[1445]: New session 11 of user core. Dec 13 13:34:25.534552 systemd[1]: Started session-11.scope - Session 11 of User core. Dec 13 13:34:25.550326 containerd[1460]: time="2024-12-13T13:34:25.550274590Z" level=info msg="StartContainer for \"4d0768dc2368275cedd37212483e604e1943d112917a0bbf7d1b7b288bc09089\" returns successfully" Dec 13 13:34:25.645485 sshd[5594]: Connection closed by 10.0.0.1 port 45724 Dec 13 13:34:25.645798 sshd-session[5565]: pam_unix(sshd:session): session closed for user core Dec 13 13:34:25.653942 systemd[1]: sshd@10-10.0.0.150:22-10.0.0.1:45724.service: Deactivated successfully. Dec 13 13:34:25.655741 systemd[1]: session-11.scope: Deactivated successfully. Dec 13 13:34:25.657209 systemd-logind[1445]: Session 11 logged out. Waiting for processes to exit. Dec 13 13:34:25.663607 systemd[1]: Started sshd@11-10.0.0.150:22-10.0.0.1:45734.service - OpenSSH per-connection server daemon (10.0.0.1:45734). Dec 13 13:34:25.664610 systemd-logind[1445]: Removed session 11. Dec 13 13:34:25.695507 sshd[5619]: Accepted publickey for core from 10.0.0.1 port 45734 ssh2: RSA SHA256:dZtUZWIHZCqWPvvtEO8QUYeHUprV5e9zMs000ybProI Dec 13 13:34:25.696878 sshd-session[5619]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Dec 13 13:34:25.700615 systemd-logind[1445]: New session 12 of user core. Dec 13 13:34:25.706524 systemd[1]: Started session-12.scope - Session 12 of User core. Dec 13 13:34:25.773632 kubelet[2640]: I1213 13:34:25.773479 2640 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="calico-apiserver/calico-apiserver-7d87f7746b-flf6n" podStartSLOduration=19.581812286999998 podStartE2EDuration="22.773460672s" podCreationTimestamp="2024-12-13 13:34:03 +0000 UTC" firstStartedPulling="2024-12-13 13:34:22.224544221 +0000 UTC m=+40.423943572" lastFinishedPulling="2024-12-13 13:34:25.416192606 +0000 UTC m=+43.615591957" observedRunningTime="2024-12-13 13:34:25.773014675 +0000 UTC m=+43.972414056" watchObservedRunningTime="2024-12-13 13:34:25.773460672 +0000 UTC m=+43.972860023" Dec 13 13:34:25.994398 sshd[5621]: Connection closed by 10.0.0.1 port 45734 Dec 13 13:34:26.000472 sshd-session[5619]: pam_unix(sshd:session): session closed for user core Dec 13 13:34:26.006513 systemd[1]: sshd@11-10.0.0.150:22-10.0.0.1:45734.service: Deactivated successfully. Dec 13 13:34:26.008288 systemd[1]: session-12.scope: Deactivated successfully. 
Dec 13 13:34:26.010803 systemd-logind[1445]: Session 12 logged out. Waiting for processes to exit. Dec 13 13:34:26.028230 systemd[1]: Started sshd@12-10.0.0.150:22-10.0.0.1:36900.service - OpenSSH per-connection server daemon (10.0.0.1:36900). Dec 13 13:34:26.029802 systemd-logind[1445]: Removed session 12. Dec 13 13:34:26.084015 sshd[5634]: Accepted publickey for core from 10.0.0.1 port 36900 ssh2: RSA SHA256:dZtUZWIHZCqWPvvtEO8QUYeHUprV5e9zMs000ybProI Dec 13 13:34:26.085676 sshd-session[5634]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Dec 13 13:34:26.090156 systemd-logind[1445]: New session 13 of user core. Dec 13 13:34:26.097505 systemd[1]: Started session-13.scope - Session 13 of User core. Dec 13 13:34:26.231335 sshd[5636]: Connection closed by 10.0.0.1 port 36900 Dec 13 13:34:26.231735 sshd-session[5634]: pam_unix(sshd:session): session closed for user core Dec 13 13:34:26.235745 systemd[1]: sshd@12-10.0.0.150:22-10.0.0.1:36900.service: Deactivated successfully. Dec 13 13:34:26.237773 systemd[1]: session-13.scope: Deactivated successfully. Dec 13 13:34:26.238514 systemd-logind[1445]: Session 13 logged out. Waiting for processes to exit. Dec 13 13:34:26.239489 systemd-logind[1445]: Removed session 13. Dec 13 13:34:26.724613 kubelet[2640]: I1213 13:34:26.724581 2640 prober_manager.go:312] "Failed to trigger a manual run" probe="Readiness" Dec 13 13:34:27.083384 containerd[1460]: time="2024-12-13T13:34:27.083270454Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/csi:v3.29.1\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Dec 13 13:34:27.084100 containerd[1460]: time="2024-12-13T13:34:27.084061880Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/csi:v3.29.1: active requests=0, bytes read=7902632" Dec 13 13:34:27.085201 containerd[1460]: time="2024-12-13T13:34:27.085180441Z" level=info msg="ImageCreate event name:\"sha256:bda8c42e04758c4f061339e213f50ccdc7502c4176fbf631aa12357e62b63540\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Dec 13 13:34:27.087257 containerd[1460]: time="2024-12-13T13:34:27.087232133Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/csi@sha256:eaa7e01fb16b603c155a67b81f16992281db7f831684c7b2081d3434587a7ff3\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Dec 13 13:34:27.103430 containerd[1460]: time="2024-12-13T13:34:27.103393953Z" level=info msg="Pulled image \"ghcr.io/flatcar/calico/csi:v3.29.1\" with image id \"sha256:bda8c42e04758c4f061339e213f50ccdc7502c4176fbf631aa12357e62b63540\", repo tag \"ghcr.io/flatcar/calico/csi:v3.29.1\", repo digest \"ghcr.io/flatcar/calico/csi@sha256:eaa7e01fb16b603c155a67b81f16992281db7f831684c7b2081d3434587a7ff3\", size \"9395716\" in 1.686999188s" Dec 13 13:34:27.103430 containerd[1460]: time="2024-12-13T13:34:27.103420633Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/csi:v3.29.1\" returns image reference \"sha256:bda8c42e04758c4f061339e213f50ccdc7502c4176fbf631aa12357e62b63540\"" Dec 13 13:34:27.104280 containerd[1460]: time="2024-12-13T13:34:27.104083117Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/kube-controllers:v3.29.1\"" Dec 13 13:34:27.105028 containerd[1460]: time="2024-12-13T13:34:27.105003175Z" level=info msg="CreateContainer within sandbox \"eebc8245c67c09382ddf9863b64b3bfc1c4564b4b28cabda982433ba51a45c95\" for container &ContainerMetadata{Name:calico-csi,Attempt:0,}" Dec 13 13:34:27.131348 containerd[1460]: time="2024-12-13T13:34:27.131303072Z" level=info msg="CreateContainer 
within sandbox \"eebc8245c67c09382ddf9863b64b3bfc1c4564b4b28cabda982433ba51a45c95\" for &ContainerMetadata{Name:calico-csi,Attempt:0,} returns container id \"f8e498ab33d23804de28eced32a1d27e03bc1d17327ebd5a93dd8060126d2518\"" Dec 13 13:34:27.131765 containerd[1460]: time="2024-12-13T13:34:27.131742597Z" level=info msg="StartContainer for \"f8e498ab33d23804de28eced32a1d27e03bc1d17327ebd5a93dd8060126d2518\"" Dec 13 13:34:27.162439 systemd[1]: Started cri-containerd-f8e498ab33d23804de28eced32a1d27e03bc1d17327ebd5a93dd8060126d2518.scope - libcontainer container f8e498ab33d23804de28eced32a1d27e03bc1d17327ebd5a93dd8060126d2518. Dec 13 13:34:27.193245 containerd[1460]: time="2024-12-13T13:34:27.193200696Z" level=info msg="StartContainer for \"f8e498ab33d23804de28eced32a1d27e03bc1d17327ebd5a93dd8060126d2518\" returns successfully" Dec 13 13:34:29.110778 containerd[1460]: time="2024-12-13T13:34:29.110721600Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/kube-controllers:v3.29.1\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Dec 13 13:34:29.111619 containerd[1460]: time="2024-12-13T13:34:29.111554905Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/kube-controllers:v3.29.1: active requests=0, bytes read=34141192" Dec 13 13:34:29.112685 containerd[1460]: time="2024-12-13T13:34:29.112653318Z" level=info msg="ImageCreate event name:\"sha256:6331715a2ae96b18a770a395cac108321d108e445e08b616e5bc9fbd1f9c21da\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Dec 13 13:34:29.114964 containerd[1460]: time="2024-12-13T13:34:29.114933418Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/kube-controllers@sha256:1072d6a98167a14ca361e9ce757733f9bae36d1f1c6a9621ea10934b6b1e10d9\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Dec 13 13:34:29.115731 containerd[1460]: time="2024-12-13T13:34:29.115703876Z" level=info msg="Pulled image \"ghcr.io/flatcar/calico/kube-controllers:v3.29.1\" with image id \"sha256:6331715a2ae96b18a770a395cac108321d108e445e08b616e5bc9fbd1f9c21da\", repo tag \"ghcr.io/flatcar/calico/kube-controllers:v3.29.1\", repo digest \"ghcr.io/flatcar/calico/kube-controllers@sha256:1072d6a98167a14ca361e9ce757733f9bae36d1f1c6a9621ea10934b6b1e10d9\", size \"35634244\" in 2.011597024s" Dec 13 13:34:29.115731 containerd[1460]: time="2024-12-13T13:34:29.115734643Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/kube-controllers:v3.29.1\" returns image reference \"sha256:6331715a2ae96b18a770a395cac108321d108e445e08b616e5bc9fbd1f9c21da\"" Dec 13 13:34:29.116822 containerd[1460]: time="2024-12-13T13:34:29.116791307Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/apiserver:v3.29.1\"" Dec 13 13:34:29.126654 containerd[1460]: time="2024-12-13T13:34:29.126399307Z" level=info msg="CreateContainer within sandbox \"507a8a50479890f6939e3183f9edd31e384c272bac973a078b682a0de21067c2\" for container &ContainerMetadata{Name:calico-kube-controllers,Attempt:0,}" Dec 13 13:34:29.144892 containerd[1460]: time="2024-12-13T13:34:29.144840712Z" level=info msg="CreateContainer within sandbox \"507a8a50479890f6939e3183f9edd31e384c272bac973a078b682a0de21067c2\" for &ContainerMetadata{Name:calico-kube-controllers,Attempt:0,} returns container id \"f3509c8b522932da6fc417105071ceb31538129ba4fcbad9ac050ac3c3625370\"" Dec 13 13:34:29.145596 containerd[1460]: time="2024-12-13T13:34:29.145570321Z" level=info msg="StartContainer for \"f3509c8b522932da6fc417105071ceb31538129ba4fcbad9ac050ac3c3625370\"" Dec 13 13:34:29.174455 systemd[1]: Started 
cri-containerd-f3509c8b522932da6fc417105071ceb31538129ba4fcbad9ac050ac3c3625370.scope - libcontainer container f3509c8b522932da6fc417105071ceb31538129ba4fcbad9ac050ac3c3625370. Dec 13 13:34:29.217139 containerd[1460]: time="2024-12-13T13:34:29.217095506Z" level=info msg="StartContainer for \"f3509c8b522932da6fc417105071ceb31538129ba4fcbad9ac050ac3c3625370\" returns successfully" Dec 13 13:34:29.694155 containerd[1460]: time="2024-12-13T13:34:29.694086486Z" level=info msg="ImageUpdate event name:\"ghcr.io/flatcar/calico/apiserver:v3.29.1\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Dec 13 13:34:29.694986 containerd[1460]: time="2024-12-13T13:34:29.694899863Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/apiserver:v3.29.1: active requests=0, bytes read=77" Dec 13 13:34:29.697053 containerd[1460]: time="2024-12-13T13:34:29.697011237Z" level=info msg="Pulled image \"ghcr.io/flatcar/calico/apiserver:v3.29.1\" with image id \"sha256:421726ace5ed13894f7edf594dd3a462947aedc13d0f69d08525d7369477fb70\", repo tag \"ghcr.io/flatcar/calico/apiserver:v3.29.1\", repo digest \"ghcr.io/flatcar/calico/apiserver@sha256:b8c43e264fe52e0c327b0bf3ac882a0224b33bdd7f4ff58a74242da7d9b00486\", size \"43494504\" in 580.189312ms" Dec 13 13:34:29.697053 containerd[1460]: time="2024-12-13T13:34:29.697039330Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/apiserver:v3.29.1\" returns image reference \"sha256:421726ace5ed13894f7edf594dd3a462947aedc13d0f69d08525d7369477fb70\"" Dec 13 13:34:29.697926 containerd[1460]: time="2024-12-13T13:34:29.697882403Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/node-driver-registrar:v3.29.1\"" Dec 13 13:34:29.699572 containerd[1460]: time="2024-12-13T13:34:29.699546157Z" level=info msg="CreateContainer within sandbox \"4e13fdab269067716759091d8e4b2f3b7241fd6258b33d2aa74f0333bfee1bbc\" for container &ContainerMetadata{Name:calico-apiserver,Attempt:0,}" Dec 13 13:34:29.713183 containerd[1460]: time="2024-12-13T13:34:29.713152553Z" level=info msg="CreateContainer within sandbox \"4e13fdab269067716759091d8e4b2f3b7241fd6258b33d2aa74f0333bfee1bbc\" for &ContainerMetadata{Name:calico-apiserver,Attempt:0,} returns container id \"68b60a4ca3b7c3cdeb7f948f122550c6b9483c1451b8865c3de330cf2cd52bc0\"" Dec 13 13:34:29.713636 containerd[1460]: time="2024-12-13T13:34:29.713598701Z" level=info msg="StartContainer for \"68b60a4ca3b7c3cdeb7f948f122550c6b9483c1451b8865c3de330cf2cd52bc0\"" Dec 13 13:34:29.741081 systemd[1]: Started cri-containerd-68b60a4ca3b7c3cdeb7f948f122550c6b9483c1451b8865c3de330cf2cd52bc0.scope - libcontainer container 68b60a4ca3b7c3cdeb7f948f122550c6b9483c1451b8865c3de330cf2cd52bc0. 
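Worth noticing in the pull records above: the second apiserver pull completes in 580.189312ms with only 77 bytes read and emits ImageUpdate rather than ImageCreate events, consistent with the image already sitting in the local content store from the 13:34:25 pull, leaving only the registry check to run. The reported duration also lines up with the surrounding record timestamps, within log-emission skew:

    package main

    import (
    	"fmt"
    	"time"
    )

    func main() {
    	// PullImage / Pulled timestamps copied from the two containerd records
    	// above; the delta sits within emission skew of "580.189312ms".
    	start, _ := time.Parse(time.RFC3339Nano, "2024-12-13T13:34:29.116791307Z")
    	done, _ := time.Parse(time.RFC3339Nano, "2024-12-13T13:34:29.697011237Z")
    	fmt.Println(done.Sub(start)) // 580.21993ms
    }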
Dec 13 13:34:29.757387 kubelet[2640]: I1213 13:34:29.757237 2640 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="calico-system/calico-kube-controllers-55f74d585b-gkp22" podStartSLOduration=19.984187179 podStartE2EDuration="26.757221026s" podCreationTimestamp="2024-12-13 13:34:03 +0000 UTC" firstStartedPulling="2024-12-13 13:34:22.343328675 +0000 UTC m=+40.542728026" lastFinishedPulling="2024-12-13 13:34:29.116362522 +0000 UTC m=+47.315761873" observedRunningTime="2024-12-13 13:34:29.756998969 +0000 UTC m=+47.956398330" watchObservedRunningTime="2024-12-13 13:34:29.757221026 +0000 UTC m=+47.956620377" Dec 13 13:34:29.790918 containerd[1460]: time="2024-12-13T13:34:29.790866168Z" level=info msg="StartContainer for \"68b60a4ca3b7c3cdeb7f948f122550c6b9483c1451b8865c3de330cf2cd52bc0\" returns successfully" Dec 13 13:34:30.759661 kubelet[2640]: I1213 13:34:30.759595 2640 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="calico-apiserver/calico-apiserver-7d87f7746b-mqntw" podStartSLOduration=20.45024467 podStartE2EDuration="27.759573827s" podCreationTimestamp="2024-12-13 13:34:03 +0000 UTC" firstStartedPulling="2024-12-13 13:34:22.388425897 +0000 UTC m=+40.587825248" lastFinishedPulling="2024-12-13 13:34:29.697755054 +0000 UTC m=+47.897154405" observedRunningTime="2024-12-13 13:34:30.759346942 +0000 UTC m=+48.958746313" watchObservedRunningTime="2024-12-13 13:34:30.759573827 +0000 UTC m=+48.958973178" Dec 13 13:34:31.247937 systemd[1]: Started sshd@13-10.0.0.150:22-10.0.0.1:36910.service - OpenSSH per-connection server daemon (10.0.0.1:36910). Dec 13 13:34:31.318420 sshd[5808]: Accepted publickey for core from 10.0.0.1 port 36910 ssh2: RSA SHA256:dZtUZWIHZCqWPvvtEO8QUYeHUprV5e9zMs000ybProI Dec 13 13:34:31.321059 sshd-session[5808]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Dec 13 13:34:31.326406 systemd-logind[1445]: New session 14 of user core. Dec 13 13:34:31.335558 systemd[1]: Started session-14.scope - Session 14 of User core. Dec 13 13:34:31.750510 kubelet[2640]: I1213 13:34:31.750275 2640 prober_manager.go:312] "Failed to trigger a manual run" probe="Readiness" Dec 13 13:34:31.765153 sshd[5810]: Connection closed by 10.0.0.1 port 36910 Dec 13 13:34:31.765609 sshd-session[5808]: pam_unix(sshd:session): session closed for user core Dec 13 13:34:31.770885 systemd[1]: sshd@13-10.0.0.150:22-10.0.0.1:36910.service: Deactivated successfully. Dec 13 13:34:31.773140 systemd[1]: session-14.scope: Deactivated successfully. Dec 13 13:34:31.773974 systemd-logind[1445]: Session 14 logged out. Waiting for processes to exit. Dec 13 13:34:31.775579 systemd-logind[1445]: Removed session 14. 
Dec 13 13:34:32.023270 containerd[1460]: time="2024-12-13T13:34:32.023115209Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/node-driver-registrar:v3.29.1\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Dec 13 13:34:32.045881 containerd[1460]: time="2024-12-13T13:34:32.045791758Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/node-driver-registrar:v3.29.1: active requests=0, bytes read=10501081" Dec 13 13:34:32.066711 containerd[1460]: time="2024-12-13T13:34:32.066640677Z" level=info msg="ImageCreate event name:\"sha256:8b7d18f262d5cf6a6343578ad0db68a140c4c9989d9e02c58c27cb5d2c70320f\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Dec 13 13:34:32.110047 containerd[1460]: time="2024-12-13T13:34:32.110001687Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/node-driver-registrar@sha256:a338da9488cbaa83c78457c3d7354d84149969c0480e88dd768e036632ff5b76\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Dec 13 13:34:32.110678 containerd[1460]: time="2024-12-13T13:34:32.110646146Z" level=info msg="Pulled image \"ghcr.io/flatcar/calico/node-driver-registrar:v3.29.1\" with image id \"sha256:8b7d18f262d5cf6a6343578ad0db68a140c4c9989d9e02c58c27cb5d2c70320f\", repo tag \"ghcr.io/flatcar/calico/node-driver-registrar:v3.29.1\", repo digest \"ghcr.io/flatcar/calico/node-driver-registrar@sha256:a338da9488cbaa83c78457c3d7354d84149969c0480e88dd768e036632ff5b76\", size \"11994117\" in 2.412724871s" Dec 13 13:34:32.110678 containerd[1460]: time="2024-12-13T13:34:32.110674099Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/node-driver-registrar:v3.29.1\" returns image reference \"sha256:8b7d18f262d5cf6a6343578ad0db68a140c4c9989d9e02c58c27cb5d2c70320f\"" Dec 13 13:34:32.112547 containerd[1460]: time="2024-12-13T13:34:32.112523741Z" level=info msg="CreateContainer within sandbox \"eebc8245c67c09382ddf9863b64b3bfc1c4564b4b28cabda982433ba51a45c95\" for container &ContainerMetadata{Name:csi-node-driver-registrar,Attempt:0,}" Dec 13 13:34:32.128855 containerd[1460]: time="2024-12-13T13:34:32.128779347Z" level=info msg="CreateContainer within sandbox \"eebc8245c67c09382ddf9863b64b3bfc1c4564b4b28cabda982433ba51a45c95\" for &ContainerMetadata{Name:csi-node-driver-registrar,Attempt:0,} returns container id \"d142232075e108efaa8e6bb17a78f20f3c7dbc345b37c1557976ac22b7069806\"" Dec 13 13:34:32.129684 containerd[1460]: time="2024-12-13T13:34:32.129426181Z" level=info msg="StartContainer for \"d142232075e108efaa8e6bb17a78f20f3c7dbc345b37c1557976ac22b7069806\"" Dec 13 13:34:32.169498 systemd[1]: Started cri-containerd-d142232075e108efaa8e6bb17a78f20f3c7dbc345b37c1557976ac22b7069806.scope - libcontainer container d142232075e108efaa8e6bb17a78f20f3c7dbc345b37c1557976ac22b7069806. 
Dec 13 13:34:32.201496 containerd[1460]: time="2024-12-13T13:34:32.201460237Z" level=info msg="StartContainer for \"d142232075e108efaa8e6bb17a78f20f3c7dbc345b37c1557976ac22b7069806\" returns successfully" Dec 13 13:34:32.763459 kubelet[2640]: I1213 13:34:32.763390 2640 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="calico-system/csi-node-driver-h5jzv" podStartSLOduration=19.945083652 podStartE2EDuration="29.763372159s" podCreationTimestamp="2024-12-13 13:34:03 +0000 UTC" firstStartedPulling="2024-12-13 13:34:22.292936035 +0000 UTC m=+40.492335386" lastFinishedPulling="2024-12-13 13:34:32.111224542 +0000 UTC m=+50.310623893" observedRunningTime="2024-12-13 13:34:32.762778575 +0000 UTC m=+50.962177946" watchObservedRunningTime="2024-12-13 13:34:32.763372159 +0000 UTC m=+50.962771510" Dec 13 13:34:32.953617 kubelet[2640]: I1213 13:34:32.953580 2640 csi_plugin.go:100] kubernetes.io/csi: Trying to validate a new CSI Driver with name: csi.tigera.io endpoint: /var/lib/kubelet/plugins/csi.tigera.io/csi.sock versions: 1.0.0 Dec 13 13:34:32.953617 kubelet[2640]: I1213 13:34:32.953610 2640 csi_plugin.go:113] kubernetes.io/csi: Register new plugin with name: csi.tigera.io at endpoint: /var/lib/kubelet/plugins/csi.tigera.io/csi.sock Dec 13 13:34:36.780858 systemd[1]: Started sshd@14-10.0.0.150:22-10.0.0.1:40890.service - OpenSSH per-connection server daemon (10.0.0.1:40890). Dec 13 13:34:36.819722 sshd[5874]: Accepted publickey for core from 10.0.0.1 port 40890 ssh2: RSA SHA256:dZtUZWIHZCqWPvvtEO8QUYeHUprV5e9zMs000ybProI Dec 13 13:34:36.821094 sshd-session[5874]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Dec 13 13:34:36.824835 systemd-logind[1445]: New session 15 of user core. Dec 13 13:34:36.835451 systemd[1]: Started session-15.scope - Session 15 of User core. Dec 13 13:34:36.959592 sshd[5876]: Connection closed by 10.0.0.1 port 40890 Dec 13 13:34:36.959980 sshd-session[5874]: pam_unix(sshd:session): session closed for user core Dec 13 13:34:36.971062 systemd[1]: sshd@14-10.0.0.150:22-10.0.0.1:40890.service: Deactivated successfully. Dec 13 13:34:36.973048 systemd[1]: session-15.scope: Deactivated successfully. Dec 13 13:34:36.974465 systemd-logind[1445]: Session 15 logged out. Waiting for processes to exit. Dec 13 13:34:36.980551 systemd[1]: Started sshd@15-10.0.0.150:22-10.0.0.1:40900.service - OpenSSH per-connection server daemon (10.0.0.1:40900). Dec 13 13:34:36.981640 systemd-logind[1445]: Removed session 15. Dec 13 13:34:37.014361 sshd[5888]: Accepted publickey for core from 10.0.0.1 port 40900 ssh2: RSA SHA256:dZtUZWIHZCqWPvvtEO8QUYeHUprV5e9zMs000ybProI Dec 13 13:34:37.015664 sshd-session[5888]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Dec 13 13:34:37.019480 systemd-logind[1445]: New session 16 of user core. Dec 13 13:34:37.028439 systemd[1]: Started session-16.scope - Session 16 of User core. Dec 13 13:34:37.201822 sshd[5890]: Connection closed by 10.0.0.1 port 40900 Dec 13 13:34:37.202837 sshd-session[5888]: pam_unix(sshd:session): session closed for user core Dec 13 13:34:37.212330 systemd[1]: sshd@15-10.0.0.150:22-10.0.0.1:40900.service: Deactivated successfully. Dec 13 13:34:37.214138 systemd[1]: session-16.scope: Deactivated successfully. Dec 13 13:34:37.215568 systemd-logind[1445]: Session 16 logged out. Waiting for processes to exit. Dec 13 13:34:37.222545 systemd[1]: Started sshd@16-10.0.0.150:22-10.0.0.1:40906.service - OpenSSH per-connection server daemon (10.0.0.1:40906). 
Dec 13 13:34:37.223460 systemd-logind[1445]: Removed session 16. Dec 13 13:34:37.264857 sshd[5900]: Accepted publickey for core from 10.0.0.1 port 40906 ssh2: RSA SHA256:dZtUZWIHZCqWPvvtEO8QUYeHUprV5e9zMs000ybProI Dec 13 13:34:37.266152 sshd-session[5900]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Dec 13 13:34:37.269861 systemd-logind[1445]: New session 17 of user core. Dec 13 13:34:37.277421 systemd[1]: Started session-17.scope - Session 17 of User core. Dec 13 13:34:38.418156 sshd[5902]: Connection closed by 10.0.0.1 port 40906 Dec 13 13:34:38.418786 sshd-session[5900]: pam_unix(sshd:session): session closed for user core Dec 13 13:34:38.431902 systemd[1]: sshd@16-10.0.0.150:22-10.0.0.1:40906.service: Deactivated successfully. Dec 13 13:34:38.433522 systemd[1]: session-17.scope: Deactivated successfully. Dec 13 13:34:38.436136 systemd-logind[1445]: Session 17 logged out. Waiting for processes to exit. Dec 13 13:34:38.444684 systemd[1]: Started sshd@17-10.0.0.150:22-10.0.0.1:40914.service - OpenSSH per-connection server daemon (10.0.0.1:40914). Dec 13 13:34:38.445683 systemd-logind[1445]: Removed session 17. Dec 13 13:34:38.477180 sshd[5923]: Accepted publickey for core from 10.0.0.1 port 40914 ssh2: RSA SHA256:dZtUZWIHZCqWPvvtEO8QUYeHUprV5e9zMs000ybProI Dec 13 13:34:38.478463 sshd-session[5923]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Dec 13 13:34:38.482203 systemd-logind[1445]: New session 18 of user core. Dec 13 13:34:38.491431 systemd[1]: Started session-18.scope - Session 18 of User core. Dec 13 13:34:38.694456 sshd[5925]: Connection closed by 10.0.0.1 port 40914 Dec 13 13:34:38.695925 sshd-session[5923]: pam_unix(sshd:session): session closed for user core Dec 13 13:34:38.707224 systemd[1]: sshd@17-10.0.0.150:22-10.0.0.1:40914.service: Deactivated successfully. Dec 13 13:34:38.708967 systemd[1]: session-18.scope: Deactivated successfully. Dec 13 13:34:38.710305 systemd-logind[1445]: Session 18 logged out. Waiting for processes to exit. Dec 13 13:34:38.719526 systemd[1]: Started sshd@18-10.0.0.150:22-10.0.0.1:40916.service - OpenSSH per-connection server daemon (10.0.0.1:40916). Dec 13 13:34:38.720300 systemd-logind[1445]: Removed session 18. Dec 13 13:34:38.752906 sshd[5935]: Accepted publickey for core from 10.0.0.1 port 40916 ssh2: RSA SHA256:dZtUZWIHZCqWPvvtEO8QUYeHUprV5e9zMs000ybProI Dec 13 13:34:38.754269 sshd-session[5935]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Dec 13 13:34:38.757914 systemd-logind[1445]: New session 19 of user core. Dec 13 13:34:38.770435 systemd[1]: Started session-19.scope - Session 19 of User core. Dec 13 13:34:38.883910 sshd[5937]: Connection closed by 10.0.0.1 port 40916 Dec 13 13:34:38.884289 sshd-session[5935]: pam_unix(sshd:session): session closed for user core Dec 13 13:34:38.888713 systemd[1]: sshd@18-10.0.0.150:22-10.0.0.1:40916.service: Deactivated successfully. Dec 13 13:34:38.891460 systemd[1]: session-19.scope: Deactivated successfully. Dec 13 13:34:38.892127 systemd-logind[1445]: Session 19 logged out. Waiting for processes to exit. Dec 13 13:34:38.893075 systemd-logind[1445]: Removed session 19. 
Dec 13 13:34:41.881907 containerd[1460]: time="2024-12-13T13:34:41.881865036Z" level=info msg="StopPodSandbox for \"c62e1d6a49358b1088df68486d06de5d565abc5cb224cecb08a72fc49527b818\"" Dec 13 13:34:41.882339 containerd[1460]: time="2024-12-13T13:34:41.882003965Z" level=info msg="TearDown network for sandbox \"c62e1d6a49358b1088df68486d06de5d565abc5cb224cecb08a72fc49527b818\" successfully" Dec 13 13:34:41.882339 containerd[1460]: time="2024-12-13T13:34:41.882018053Z" level=info msg="StopPodSandbox for \"c62e1d6a49358b1088df68486d06de5d565abc5cb224cecb08a72fc49527b818\" returns successfully" Dec 13 13:34:41.887946 containerd[1460]: time="2024-12-13T13:34:41.887912898Z" level=info msg="RemovePodSandbox for \"c62e1d6a49358b1088df68486d06de5d565abc5cb224cecb08a72fc49527b818\"" Dec 13 13:34:41.900748 containerd[1460]: time="2024-12-13T13:34:41.900697342Z" level=info msg="Forcibly stopping sandbox \"c62e1d6a49358b1088df68486d06de5d565abc5cb224cecb08a72fc49527b818\"" Dec 13 13:34:41.900912 containerd[1460]: time="2024-12-13T13:34:41.900827305Z" level=info msg="TearDown network for sandbox \"c62e1d6a49358b1088df68486d06de5d565abc5cb224cecb08a72fc49527b818\" successfully" Dec 13 13:34:42.026387 containerd[1460]: time="2024-12-13T13:34:42.026295580Z" level=warning msg="Failed to get podSandbox status for container event for sandboxID \"c62e1d6a49358b1088df68486d06de5d565abc5cb224cecb08a72fc49527b818\": an error occurred when try to find sandbox: not found. Sending the event with nil podSandboxStatus." Dec 13 13:34:42.026387 containerd[1460]: time="2024-12-13T13:34:42.026409110Z" level=info msg="RemovePodSandbox \"c62e1d6a49358b1088df68486d06de5d565abc5cb224cecb08a72fc49527b818\" returns successfully" Dec 13 13:34:42.027155 containerd[1460]: time="2024-12-13T13:34:42.027100192Z" level=info msg="StopPodSandbox for \"043d5f360472a5cf84f2e7fe5d34c855f1c1466ecd61a4548840ff0da56c473d\"" Dec 13 13:34:42.027338 containerd[1460]: time="2024-12-13T13:34:42.027262897Z" level=info msg="TearDown network for sandbox \"043d5f360472a5cf84f2e7fe5d34c855f1c1466ecd61a4548840ff0da56c473d\" successfully" Dec 13 13:34:42.027338 containerd[1460]: time="2024-12-13T13:34:42.027278227Z" level=info msg="StopPodSandbox for \"043d5f360472a5cf84f2e7fe5d34c855f1c1466ecd61a4548840ff0da56c473d\" returns successfully" Dec 13 13:34:42.027638 containerd[1460]: time="2024-12-13T13:34:42.027598699Z" level=info msg="RemovePodSandbox for \"043d5f360472a5cf84f2e7fe5d34c855f1c1466ecd61a4548840ff0da56c473d\"" Dec 13 13:34:42.027638 containerd[1460]: time="2024-12-13T13:34:42.027627365Z" level=info msg="Forcibly stopping sandbox \"043d5f360472a5cf84f2e7fe5d34c855f1c1466ecd61a4548840ff0da56c473d\"" Dec 13 13:34:42.027760 containerd[1460]: time="2024-12-13T13:34:42.027707401Z" level=info msg="TearDown network for sandbox \"043d5f360472a5cf84f2e7fe5d34c855f1c1466ecd61a4548840ff0da56c473d\" successfully" Dec 13 13:34:42.036011 containerd[1460]: time="2024-12-13T13:34:42.035933821Z" level=warning msg="Failed to get podSandbox status for container event for sandboxID \"043d5f360472a5cf84f2e7fe5d34c855f1c1466ecd61a4548840ff0da56c473d\": an error occurred when try to find sandbox: not found. Sending the event with nil podSandboxStatus." 
Dec 13 13:34:42.036011 containerd[1460]: time="2024-12-13T13:34:42.036016112Z" level=info msg="RemovePodSandbox \"043d5f360472a5cf84f2e7fe5d34c855f1c1466ecd61a4548840ff0da56c473d\" returns successfully" Dec 13 13:34:42.038162 containerd[1460]: time="2024-12-13T13:34:42.038037446Z" level=info msg="StopPodSandbox for \"41522410dfe8d457521fb9fe296f7b8dae420fe4186ddf5966e8b8e3c6ce7668\"" Dec 13 13:34:42.038236 containerd[1460]: time="2024-12-13T13:34:42.038188790Z" level=info msg="TearDown network for sandbox \"41522410dfe8d457521fb9fe296f7b8dae420fe4186ddf5966e8b8e3c6ce7668\" successfully" Dec 13 13:34:42.038236 containerd[1460]: time="2024-12-13T13:34:42.038201364Z" level=info msg="StopPodSandbox for \"41522410dfe8d457521fb9fe296f7b8dae420fe4186ddf5966e8b8e3c6ce7668\" returns successfully" Dec 13 13:34:42.039344 containerd[1460]: time="2024-12-13T13:34:42.039191737Z" level=info msg="RemovePodSandbox for \"41522410dfe8d457521fb9fe296f7b8dae420fe4186ddf5966e8b8e3c6ce7668\"" Dec 13 13:34:42.039344 containerd[1460]: time="2024-12-13T13:34:42.039221835Z" level=info msg="Forcibly stopping sandbox \"41522410dfe8d457521fb9fe296f7b8dae420fe4186ddf5966e8b8e3c6ce7668\"" Dec 13 13:34:42.039549 containerd[1460]: time="2024-12-13T13:34:42.039306539Z" level=info msg="TearDown network for sandbox \"41522410dfe8d457521fb9fe296f7b8dae420fe4186ddf5966e8b8e3c6ce7668\" successfully" Dec 13 13:34:42.045482 containerd[1460]: time="2024-12-13T13:34:42.045419187Z" level=warning msg="Failed to get podSandbox status for container event for sandboxID \"41522410dfe8d457521fb9fe296f7b8dae420fe4186ddf5966e8b8e3c6ce7668\": an error occurred when try to find sandbox: not found. Sending the event with nil podSandboxStatus." Dec 13 13:34:42.045576 containerd[1460]: time="2024-12-13T13:34:42.045496166Z" level=info msg="RemovePodSandbox \"41522410dfe8d457521fb9fe296f7b8dae420fe4186ddf5966e8b8e3c6ce7668\" returns successfully" Dec 13 13:34:42.045948 containerd[1460]: time="2024-12-13T13:34:42.045924177Z" level=info msg="StopPodSandbox for \"c6522ad2e1a3375e607b651b292cf0023738e3078af5ef59656343f350a5c5f4\"" Dec 13 13:34:42.046062 containerd[1460]: time="2024-12-13T13:34:42.046045222Z" level=info msg="TearDown network for sandbox \"c6522ad2e1a3375e607b651b292cf0023738e3078af5ef59656343f350a5c5f4\" successfully" Dec 13 13:34:42.046134 containerd[1460]: time="2024-12-13T13:34:42.046061874Z" level=info msg="StopPodSandbox for \"c6522ad2e1a3375e607b651b292cf0023738e3078af5ef59656343f350a5c5f4\" returns successfully" Dec 13 13:34:42.046339 containerd[1460]: time="2024-12-13T13:34:42.046299286Z" level=info msg="RemovePodSandbox for \"c6522ad2e1a3375e607b651b292cf0023738e3078af5ef59656343f350a5c5f4\"" Dec 13 13:34:42.046457 containerd[1460]: time="2024-12-13T13:34:42.046345435Z" level=info msg="Forcibly stopping sandbox \"c6522ad2e1a3375e607b651b292cf0023738e3078af5ef59656343f350a5c5f4\"" Dec 13 13:34:42.046495 containerd[1460]: time="2024-12-13T13:34:42.046427254Z" level=info msg="TearDown network for sandbox \"c6522ad2e1a3375e607b651b292cf0023738e3078af5ef59656343f350a5c5f4\" successfully" Dec 13 13:34:42.050351 containerd[1460]: time="2024-12-13T13:34:42.050321294Z" level=warning msg="Failed to get podSandbox status for container event for sandboxID \"c6522ad2e1a3375e607b651b292cf0023738e3078af5ef59656343f350a5c5f4\": an error occurred when try to find sandbox: not found. Sending the event with nil podSandboxStatus." 
Dec 13 13:34:42.050408 containerd[1460]: time="2024-12-13T13:34:42.050362674Z" level=info msg="RemovePodSandbox \"c6522ad2e1a3375e607b651b292cf0023738e3078af5ef59656343f350a5c5f4\" returns successfully" Dec 13 13:34:42.050617 containerd[1460]: time="2024-12-13T13:34:42.050589925Z" level=info msg="StopPodSandbox for \"6a2a4f3c81b7850246110bb3a39e21303a213d976a8f97927fa282ec47803ddb\"" Dec 13 13:34:42.050712 containerd[1460]: time="2024-12-13T13:34:42.050683397Z" level=info msg="TearDown network for sandbox \"6a2a4f3c81b7850246110bb3a39e21303a213d976a8f97927fa282ec47803ddb\" successfully" Dec 13 13:34:42.050712 containerd[1460]: time="2024-12-13T13:34:42.050701863Z" level=info msg="StopPodSandbox for \"6a2a4f3c81b7850246110bb3a39e21303a213d976a8f97927fa282ec47803ddb\" returns successfully" Dec 13 13:34:42.050913 containerd[1460]: time="2024-12-13T13:34:42.050884618Z" level=info msg="RemovePodSandbox for \"6a2a4f3c81b7850246110bb3a39e21303a213d976a8f97927fa282ec47803ddb\"" Dec 13 13:34:42.050913 containerd[1460]: time="2024-12-13T13:34:42.050909736Z" level=info msg="Forcibly stopping sandbox \"6a2a4f3c81b7850246110bb3a39e21303a213d976a8f97927fa282ec47803ddb\"" Dec 13 13:34:42.051017 containerd[1460]: time="2024-12-13T13:34:42.050984351Z" level=info msg="TearDown network for sandbox \"6a2a4f3c81b7850246110bb3a39e21303a213d976a8f97927fa282ec47803ddb\" successfully" Dec 13 13:34:42.054412 containerd[1460]: time="2024-12-13T13:34:42.054374894Z" level=warning msg="Failed to get podSandbox status for container event for sandboxID \"6a2a4f3c81b7850246110bb3a39e21303a213d976a8f97927fa282ec47803ddb\": an error occurred when try to find sandbox: not found. Sending the event with nil podSandboxStatus." Dec 13 13:34:42.054466 containerd[1460]: time="2024-12-13T13:34:42.054430421Z" level=info msg="RemovePodSandbox \"6a2a4f3c81b7850246110bb3a39e21303a213d976a8f97927fa282ec47803ddb\" returns successfully" Dec 13 13:34:42.054699 containerd[1460]: time="2024-12-13T13:34:42.054656190Z" level=info msg="StopPodSandbox for \"c361cf0d45af987326726b600a805f9395b6a442483d81b3ec5659ec3ac4bb62\"" Dec 13 13:34:42.054814 containerd[1460]: time="2024-12-13T13:34:42.054774600Z" level=info msg="TearDown network for sandbox \"c361cf0d45af987326726b600a805f9395b6a442483d81b3ec5659ec3ac4bb62\" successfully" Dec 13 13:34:42.054814 containerd[1460]: time="2024-12-13T13:34:42.054794769Z" level=info msg="StopPodSandbox for \"c361cf0d45af987326726b600a805f9395b6a442483d81b3ec5659ec3ac4bb62\" returns successfully" Dec 13 13:34:42.055047 containerd[1460]: time="2024-12-13T13:34:42.055011128Z" level=info msg="RemovePodSandbox for \"c361cf0d45af987326726b600a805f9395b6a442483d81b3ec5659ec3ac4bb62\"" Dec 13 13:34:42.055047 containerd[1460]: time="2024-12-13T13:34:42.055036478Z" level=info msg="Forcibly stopping sandbox \"c361cf0d45af987326726b600a805f9395b6a442483d81b3ec5659ec3ac4bb62\"" Dec 13 13:34:42.055168 containerd[1460]: time="2024-12-13T13:34:42.055116343Z" level=info msg="TearDown network for sandbox \"c361cf0d45af987326726b600a805f9395b6a442483d81b3ec5659ec3ac4bb62\" successfully" Dec 13 13:34:42.058753 containerd[1460]: time="2024-12-13T13:34:42.058722835Z" level=warning msg="Failed to get podSandbox status for container event for sandboxID \"c361cf0d45af987326726b600a805f9395b6a442483d81b3ec5659ec3ac4bb62\": an error occurred when try to find sandbox: not found. Sending the event with nil podSandboxStatus." 
Dec 13 13:34:42.058818 containerd[1460]: time="2024-12-13T13:34:42.058763885Z" level=info msg="RemovePodSandbox \"c361cf0d45af987326726b600a805f9395b6a442483d81b3ec5659ec3ac4bb62\" returns successfully" Dec 13 13:34:42.059033 containerd[1460]: time="2024-12-13T13:34:42.059011966Z" level=info msg="StopPodSandbox for \"558af8a45bc01399591427b598962160a612f4630f01ffcfbcb0b3a90050035e\"" Dec 13 13:34:42.059132 containerd[1460]: time="2024-12-13T13:34:42.059109295Z" level=info msg="TearDown network for sandbox \"558af8a45bc01399591427b598962160a612f4630f01ffcfbcb0b3a90050035e\" successfully" Dec 13 13:34:42.059160 containerd[1460]: time="2024-12-13T13:34:42.059128633Z" level=info msg="StopPodSandbox for \"558af8a45bc01399591427b598962160a612f4630f01ffcfbcb0b3a90050035e\" returns successfully" Dec 13 13:34:42.059481 containerd[1460]: time="2024-12-13T13:34:42.059449165Z" level=info msg="RemovePodSandbox for \"558af8a45bc01399591427b598962160a612f4630f01ffcfbcb0b3a90050035e\"" Dec 13 13:34:42.059481 containerd[1460]: time="2024-12-13T13:34:42.059473482Z" level=info msg="Forcibly stopping sandbox \"558af8a45bc01399591427b598962160a612f4630f01ffcfbcb0b3a90050035e\"" Dec 13 13:34:42.059581 containerd[1460]: time="2024-12-13T13:34:42.059548087Z" level=info msg="TearDown network for sandbox \"558af8a45bc01399591427b598962160a612f4630f01ffcfbcb0b3a90050035e\" successfully" Dec 13 13:34:42.063149 containerd[1460]: time="2024-12-13T13:34:42.063118939Z" level=warning msg="Failed to get podSandbox status for container event for sandboxID \"558af8a45bc01399591427b598962160a612f4630f01ffcfbcb0b3a90050035e\": an error occurred when try to find sandbox: not found. Sending the event with nil podSandboxStatus." Dec 13 13:34:42.063205 containerd[1460]: time="2024-12-13T13:34:42.063159158Z" level=info msg="RemovePodSandbox \"558af8a45bc01399591427b598962160a612f4630f01ffcfbcb0b3a90050035e\" returns successfully" Dec 13 13:34:42.063457 containerd[1460]: time="2024-12-13T13:34:42.063430685Z" level=info msg="StopPodSandbox for \"706c13db8235334bc38488f0490b1867ec134804b2b05c3d93c69a593fd16e13\"" Dec 13 13:34:42.063557 containerd[1460]: time="2024-12-13T13:34:42.063537412Z" level=info msg="TearDown network for sandbox \"706c13db8235334bc38488f0490b1867ec134804b2b05c3d93c69a593fd16e13\" successfully" Dec 13 13:34:42.063590 containerd[1460]: time="2024-12-13T13:34:42.063556289Z" level=info msg="StopPodSandbox for \"706c13db8235334bc38488f0490b1867ec134804b2b05c3d93c69a593fd16e13\" returns successfully" Dec 13 13:34:42.063834 containerd[1460]: time="2024-12-13T13:34:42.063801414Z" level=info msg="RemovePodSandbox for \"706c13db8235334bc38488f0490b1867ec134804b2b05c3d93c69a593fd16e13\"" Dec 13 13:34:42.063902 containerd[1460]: time="2024-12-13T13:34:42.063838607Z" level=info msg="Forcibly stopping sandbox \"706c13db8235334bc38488f0490b1867ec134804b2b05c3d93c69a593fd16e13\"" Dec 13 13:34:42.063953 containerd[1460]: time="2024-12-13T13:34:42.063918041Z" level=info msg="TearDown network for sandbox \"706c13db8235334bc38488f0490b1867ec134804b2b05c3d93c69a593fd16e13\" successfully" Dec 13 13:34:42.068235 containerd[1460]: time="2024-12-13T13:34:42.068193192Z" level=warning msg="Failed to get podSandbox status for container event for sandboxID \"706c13db8235334bc38488f0490b1867ec134804b2b05c3d93c69a593fd16e13\": an error occurred when try to find sandbox: not found. Sending the event with nil podSandboxStatus." 
Dec 13 13:34:42.068302 containerd[1460]: time="2024-12-13T13:34:42.068240814Z" level=info msg="RemovePodSandbox \"706c13db8235334bc38488f0490b1867ec134804b2b05c3d93c69a593fd16e13\" returns successfully" Dec 13 13:34:42.068575 containerd[1460]: time="2024-12-13T13:34:42.068545766Z" level=info msg="StopPodSandbox for \"09abfaaccf32e858d49fc89ee268bc73ea6fb80d4d0a90443906199a1d27306d\"" Dec 13 13:34:42.068670 containerd[1460]: time="2024-12-13T13:34:42.068650700Z" level=info msg="TearDown network for sandbox \"09abfaaccf32e858d49fc89ee268bc73ea6fb80d4d0a90443906199a1d27306d\" successfully" Dec 13 13:34:42.068723 containerd[1460]: time="2024-12-13T13:34:42.068668614Z" level=info msg="StopPodSandbox for \"09abfaaccf32e858d49fc89ee268bc73ea6fb80d4d0a90443906199a1d27306d\" returns successfully" Dec 13 13:34:42.068959 containerd[1460]: time="2024-12-13T13:34:42.068931304Z" level=info msg="RemovePodSandbox for \"09abfaaccf32e858d49fc89ee268bc73ea6fb80d4d0a90443906199a1d27306d\"" Dec 13 13:34:42.068959 containerd[1460]: time="2024-12-13T13:34:42.068953357Z" level=info msg="Forcibly stopping sandbox \"09abfaaccf32e858d49fc89ee268bc73ea6fb80d4d0a90443906199a1d27306d\"" Dec 13 13:34:42.069067 containerd[1460]: time="2024-12-13T13:34:42.069026018Z" level=info msg="TearDown network for sandbox \"09abfaaccf32e858d49fc89ee268bc73ea6fb80d4d0a90443906199a1d27306d\" successfully" Dec 13 13:34:42.072527 containerd[1460]: time="2024-12-13T13:34:42.072492869Z" level=warning msg="Failed to get podSandbox status for container event for sandboxID \"09abfaaccf32e858d49fc89ee268bc73ea6fb80d4d0a90443906199a1d27306d\": an error occurred when try to find sandbox: not found. Sending the event with nil podSandboxStatus." Dec 13 13:34:42.072654 containerd[1460]: time="2024-12-13T13:34:42.072538117Z" level=info msg="RemovePodSandbox \"09abfaaccf32e858d49fc89ee268bc73ea6fb80d4d0a90443906199a1d27306d\" returns successfully" Dec 13 13:34:42.072857 containerd[1460]: time="2024-12-13T13:34:42.072835133Z" level=info msg="StopPodSandbox for \"e3e675aabe036a2d2aa1e1782dcf65d1cb1d86f83cf782f33096729bc19652d6\"" Dec 13 13:34:42.075331 containerd[1460]: time="2024-12-13T13:34:42.073233085Z" level=info msg="TearDown network for sandbox \"e3e675aabe036a2d2aa1e1782dcf65d1cb1d86f83cf782f33096729bc19652d6\" successfully" Dec 13 13:34:42.075331 containerd[1460]: time="2024-12-13T13:34:42.073253175Z" level=info msg="StopPodSandbox for \"e3e675aabe036a2d2aa1e1782dcf65d1cb1d86f83cf782f33096729bc19652d6\" returns successfully" Dec 13 13:34:42.075474 containerd[1460]: time="2024-12-13T13:34:42.075384893Z" level=info msg="RemovePodSandbox for \"e3e675aabe036a2d2aa1e1782dcf65d1cb1d86f83cf782f33096729bc19652d6\"" Dec 13 13:34:42.075500 containerd[1460]: time="2024-12-13T13:34:42.075485519Z" level=info msg="Forcibly stopping sandbox \"e3e675aabe036a2d2aa1e1782dcf65d1cb1d86f83cf782f33096729bc19652d6\"" Dec 13 13:34:42.075699 containerd[1460]: time="2024-12-13T13:34:42.075632444Z" level=info msg="TearDown network for sandbox \"e3e675aabe036a2d2aa1e1782dcf65d1cb1d86f83cf782f33096729bc19652d6\" successfully" Dec 13 13:34:42.079602 containerd[1460]: time="2024-12-13T13:34:42.079574768Z" level=warning msg="Failed to get podSandbox status for container event for sandboxID \"e3e675aabe036a2d2aa1e1782dcf65d1cb1d86f83cf782f33096729bc19652d6\": an error occurred when try to find sandbox: not found. Sending the event with nil podSandboxStatus." 
Dec 13 13:34:42.079656 containerd[1460]: time="2024-12-13T13:34:42.079611749Z" level=info msg="RemovePodSandbox \"e3e675aabe036a2d2aa1e1782dcf65d1cb1d86f83cf782f33096729bc19652d6\" returns successfully" Dec 13 13:34:42.080015 containerd[1460]: time="2024-12-13T13:34:42.079989473Z" level=info msg="StopPodSandbox for \"ba6f8affd16e998d2e69c27c01590791060c1de1a38eb975939e57a9835e83db\"" Dec 13 13:34:42.080106 containerd[1460]: time="2024-12-13T13:34:42.080067764Z" level=info msg="TearDown network for sandbox \"ba6f8affd16e998d2e69c27c01590791060c1de1a38eb975939e57a9835e83db\" successfully" Dec 13 13:34:42.080106 containerd[1460]: time="2024-12-13T13:34:42.080102763Z" level=info msg="StopPodSandbox for \"ba6f8affd16e998d2e69c27c01590791060c1de1a38eb975939e57a9835e83db\" returns successfully" Dec 13 13:34:42.080409 containerd[1460]: time="2024-12-13T13:34:42.080345665Z" level=info msg="RemovePodSandbox for \"ba6f8affd16e998d2e69c27c01590791060c1de1a38eb975939e57a9835e83db\"" Dec 13 13:34:42.080409 containerd[1460]: time="2024-12-13T13:34:42.080368709Z" level=info msg="Forcibly stopping sandbox \"ba6f8affd16e998d2e69c27c01590791060c1de1a38eb975939e57a9835e83db\"" Dec 13 13:34:42.080479 containerd[1460]: time="2024-12-13T13:34:42.080449036Z" level=info msg="TearDown network for sandbox \"ba6f8affd16e998d2e69c27c01590791060c1de1a38eb975939e57a9835e83db\" successfully" Dec 13 13:34:42.084594 containerd[1460]: time="2024-12-13T13:34:42.084562270Z" level=warning msg="Failed to get podSandbox status for container event for sandboxID \"ba6f8affd16e998d2e69c27c01590791060c1de1a38eb975939e57a9835e83db\": an error occurred when try to find sandbox: not found. Sending the event with nil podSandboxStatus." Dec 13 13:34:42.084659 containerd[1460]: time="2024-12-13T13:34:42.084606035Z" level=info msg="RemovePodSandbox \"ba6f8affd16e998d2e69c27c01590791060c1de1a38eb975939e57a9835e83db\" returns successfully" Dec 13 13:34:42.084884 containerd[1460]: time="2024-12-13T13:34:42.084859909Z" level=info msg="StopPodSandbox for \"aca8c409ba4808458cd390c815351d323f7da4bd417c1bcb075eb4cf65179d20\"" Dec 13 13:34:42.085004 containerd[1460]: time="2024-12-13T13:34:42.084977547Z" level=info msg="TearDown network for sandbox \"aca8c409ba4808458cd390c815351d323f7da4bd417c1bcb075eb4cf65179d20\" successfully" Dec 13 13:34:42.085004 containerd[1460]: time="2024-12-13T13:34:42.084997015Z" level=info msg="StopPodSandbox for \"aca8c409ba4808458cd390c815351d323f7da4bd417c1bcb075eb4cf65179d20\" returns successfully" Dec 13 13:34:42.085248 containerd[1460]: time="2024-12-13T13:34:42.085223835Z" level=info msg="RemovePodSandbox for \"aca8c409ba4808458cd390c815351d323f7da4bd417c1bcb075eb4cf65179d20\"" Dec 13 13:34:42.085300 containerd[1460]: time="2024-12-13T13:34:42.085249095Z" level=info msg="Forcibly stopping sandbox \"aca8c409ba4808458cd390c815351d323f7da4bd417c1bcb075eb4cf65179d20\"" Dec 13 13:34:42.085401 containerd[1460]: time="2024-12-13T13:34:42.085366782Z" level=info msg="TearDown network for sandbox \"aca8c409ba4808458cd390c815351d323f7da4bd417c1bcb075eb4cf65179d20\" successfully" Dec 13 13:34:42.089166 containerd[1460]: time="2024-12-13T13:34:42.089121873Z" level=warning msg="Failed to get podSandbox status for container event for sandboxID \"aca8c409ba4808458cd390c815351d323f7da4bd417c1bcb075eb4cf65179d20\": an error occurred when try to find sandbox: not found. Sending the event with nil podSandboxStatus." 
Dec 13 13:34:42.089166 containerd[1460]: time="2024-12-13T13:34:42.089157202Z" level=info msg="RemovePodSandbox \"aca8c409ba4808458cd390c815351d323f7da4bd417c1bcb075eb4cf65179d20\" returns successfully" Dec 13 13:34:42.089410 containerd[1460]: time="2024-12-13T13:34:42.089378041Z" level=info msg="StopPodSandbox for \"21489d44f97457445adcc907a914ad67e96a4016825cb60915c67d35d31dba11\"" Dec 13 13:34:42.089522 containerd[1460]: time="2024-12-13T13:34:42.089473897Z" level=info msg="TearDown network for sandbox \"21489d44f97457445adcc907a914ad67e96a4016825cb60915c67d35d31dba11\" successfully" Dec 13 13:34:42.089522 containerd[1460]: time="2024-12-13T13:34:42.089511249Z" level=info msg="StopPodSandbox for \"21489d44f97457445adcc907a914ad67e96a4016825cb60915c67d35d31dba11\" returns successfully" Dec 13 13:34:42.089741 containerd[1460]: time="2024-12-13T13:34:42.089712690Z" level=info msg="RemovePodSandbox for \"21489d44f97457445adcc907a914ad67e96a4016825cb60915c67d35d31dba11\"" Dec 13 13:34:42.089741 containerd[1460]: time="2024-12-13T13:34:42.089737969Z" level=info msg="Forcibly stopping sandbox \"21489d44f97457445adcc907a914ad67e96a4016825cb60915c67d35d31dba11\"" Dec 13 13:34:42.089851 containerd[1460]: time="2024-12-13T13:34:42.089822263Z" level=info msg="TearDown network for sandbox \"21489d44f97457445adcc907a914ad67e96a4016825cb60915c67d35d31dba11\" successfully" Dec 13 13:34:42.093208 containerd[1460]: time="2024-12-13T13:34:42.093178719Z" level=warning msg="Failed to get podSandbox status for container event for sandboxID \"21489d44f97457445adcc907a914ad67e96a4016825cb60915c67d35d31dba11\": an error occurred when try to find sandbox: not found. Sending the event with nil podSandboxStatus." Dec 13 13:34:42.093248 containerd[1460]: time="2024-12-13T13:34:42.093218005Z" level=info msg="RemovePodSandbox \"21489d44f97457445adcc907a914ad67e96a4016825cb60915c67d35d31dba11\" returns successfully" Dec 13 13:34:42.093486 containerd[1460]: time="2024-12-13T13:34:42.093456148Z" level=info msg="StopPodSandbox for \"8e04eb66310a3917ffdcbb002bcd66f739fb7b5c6e195d72921f7f71735c6ad9\"" Dec 13 13:34:42.093569 containerd[1460]: time="2024-12-13T13:34:42.093555882Z" level=info msg="TearDown network for sandbox \"8e04eb66310a3917ffdcbb002bcd66f739fb7b5c6e195d72921f7f71735c6ad9\" successfully" Dec 13 13:34:42.093596 containerd[1460]: time="2024-12-13T13:34:42.093567734Z" level=info msg="StopPodSandbox for \"8e04eb66310a3917ffdcbb002bcd66f739fb7b5c6e195d72921f7f71735c6ad9\" returns successfully" Dec 13 13:34:42.093791 containerd[1460]: time="2024-12-13T13:34:42.093765900Z" level=info msg="RemovePodSandbox for \"8e04eb66310a3917ffdcbb002bcd66f739fb7b5c6e195d72921f7f71735c6ad9\"" Dec 13 13:34:42.093857 containerd[1460]: time="2024-12-13T13:34:42.093793653Z" level=info msg="Forcibly stopping sandbox \"8e04eb66310a3917ffdcbb002bcd66f739fb7b5c6e195d72921f7f71735c6ad9\"" Dec 13 13:34:42.093911 containerd[1460]: time="2024-12-13T13:34:42.093878207Z" level=info msg="TearDown network for sandbox \"8e04eb66310a3917ffdcbb002bcd66f739fb7b5c6e195d72921f7f71735c6ad9\" successfully" Dec 13 13:34:42.097412 containerd[1460]: time="2024-12-13T13:34:42.097378853Z" level=warning msg="Failed to get podSandbox status for container event for sandboxID \"8e04eb66310a3917ffdcbb002bcd66f739fb7b5c6e195d72921f7f71735c6ad9\": an error occurred when try to find sandbox: not found. Sending the event with nil podSandboxStatus." 
Dec 13 13:34:42.097475 containerd[1460]: time="2024-12-13T13:34:42.097422318Z" level=info msg="RemovePodSandbox \"8e04eb66310a3917ffdcbb002bcd66f739fb7b5c6e195d72921f7f71735c6ad9\" returns successfully" Dec 13 13:34:42.097721 containerd[1460]: time="2024-12-13T13:34:42.097698053Z" level=info msg="StopPodSandbox for \"e93b7769b9d88742aaf6059922f8c6c4a26e010f6ef3858da386188108ad8af8\"" Dec 13 13:34:42.097813 containerd[1460]: time="2024-12-13T13:34:42.097789039Z" level=info msg="TearDown network for sandbox \"e93b7769b9d88742aaf6059922f8c6c4a26e010f6ef3858da386188108ad8af8\" successfully" Dec 13 13:34:42.097848 containerd[1460]: time="2024-12-13T13:34:42.097804018Z" level=info msg="StopPodSandbox for \"e93b7769b9d88742aaf6059922f8c6c4a26e010f6ef3858da386188108ad8af8\" returns successfully" Dec 13 13:34:42.098074 containerd[1460]: time="2024-12-13T13:34:42.098049907Z" level=info msg="RemovePodSandbox for \"e93b7769b9d88742aaf6059922f8c6c4a26e010f6ef3858da386188108ad8af8\"" Dec 13 13:34:42.098074 containerd[1460]: time="2024-12-13T13:34:42.098071398Z" level=info msg="Forcibly stopping sandbox \"e93b7769b9d88742aaf6059922f8c6c4a26e010f6ef3858da386188108ad8af8\"" Dec 13 13:34:42.098173 containerd[1460]: time="2024-12-13T13:34:42.098136053Z" level=info msg="TearDown network for sandbox \"e93b7769b9d88742aaf6059922f8c6c4a26e010f6ef3858da386188108ad8af8\" successfully" Dec 13 13:34:42.101532 containerd[1460]: time="2024-12-13T13:34:42.101498953Z" level=warning msg="Failed to get podSandbox status for container event for sandboxID \"e93b7769b9d88742aaf6059922f8c6c4a26e010f6ef3858da386188108ad8af8\": an error occurred when try to find sandbox: not found. Sending the event with nil podSandboxStatus." Dec 13 13:34:42.101587 containerd[1460]: time="2024-12-13T13:34:42.101534732Z" level=info msg="RemovePodSandbox \"e93b7769b9d88742aaf6059922f8c6c4a26e010f6ef3858da386188108ad8af8\" returns successfully" Dec 13 13:34:42.101754 containerd[1460]: time="2024-12-13T13:34:42.101720733Z" level=info msg="StopPodSandbox for \"3d6b3dd240a00c2a00a729b97087fb6f836a068b396113e925387d9d8ae54c7f\"" Dec 13 13:34:42.101871 containerd[1460]: time="2024-12-13T13:34:42.101826929Z" level=info msg="TearDown network for sandbox \"3d6b3dd240a00c2a00a729b97087fb6f836a068b396113e925387d9d8ae54c7f\" successfully" Dec 13 13:34:42.101871 containerd[1460]: time="2024-12-13T13:34:42.101840996Z" level=info msg="StopPodSandbox for \"3d6b3dd240a00c2a00a729b97087fb6f836a068b396113e925387d9d8ae54c7f\" returns successfully" Dec 13 13:34:42.102084 containerd[1460]: time="2024-12-13T13:34:42.102047828Z" level=info msg="RemovePodSandbox for \"3d6b3dd240a00c2a00a729b97087fb6f836a068b396113e925387d9d8ae54c7f\"" Dec 13 13:34:42.102084 containerd[1460]: time="2024-12-13T13:34:42.102075081Z" level=info msg="Forcibly stopping sandbox \"3d6b3dd240a00c2a00a729b97087fb6f836a068b396113e925387d9d8ae54c7f\"" Dec 13 13:34:42.102180 containerd[1460]: time="2024-12-13T13:34:42.102146930Z" level=info msg="TearDown network for sandbox \"3d6b3dd240a00c2a00a729b97087fb6f836a068b396113e925387d9d8ae54c7f\" successfully" Dec 13 13:34:42.105389 containerd[1460]: time="2024-12-13T13:34:42.105365699Z" level=warning msg="Failed to get podSandbox status for container event for sandboxID \"3d6b3dd240a00c2a00a729b97087fb6f836a068b396113e925387d9d8ae54c7f\": an error occurred when try to find sandbox: not found. Sending the event with nil podSandboxStatus." 
Dec 13 13:34:42.105445 containerd[1460]: time="2024-12-13T13:34:42.105399826Z" level=info msg="RemovePodSandbox \"3d6b3dd240a00c2a00a729b97087fb6f836a068b396113e925387d9d8ae54c7f\" returns successfully" Dec 13 13:34:42.105619 containerd[1460]: time="2024-12-13T13:34:42.105598432Z" level=info msg="StopPodSandbox for \"ea5cf49efc7cd6bceaf869c25a1e0ee305ee2c5cb1307feaaffb986d0540529c\"" Dec 13 13:34:42.105722 containerd[1460]: time="2024-12-13T13:34:42.105700269Z" level=info msg="TearDown network for sandbox \"ea5cf49efc7cd6bceaf869c25a1e0ee305ee2c5cb1307feaaffb986d0540529c\" successfully" Dec 13 13:34:42.105722 containerd[1460]: time="2024-12-13T13:34:42.105717232Z" level=info msg="StopPodSandbox for \"ea5cf49efc7cd6bceaf869c25a1e0ee305ee2c5cb1307feaaffb986d0540529c\" returns successfully" Dec 13 13:34:42.105990 containerd[1460]: time="2024-12-13T13:34:42.105970584Z" level=info msg="RemovePodSandbox for \"ea5cf49efc7cd6bceaf869c25a1e0ee305ee2c5cb1307feaaffb986d0540529c\"" Dec 13 13:34:42.105990 containerd[1460]: time="2024-12-13T13:34:42.105989089Z" level=info msg="Forcibly stopping sandbox \"ea5cf49efc7cd6bceaf869c25a1e0ee305ee2c5cb1307feaaffb986d0540529c\"" Dec 13 13:34:42.106075 containerd[1460]: time="2024-12-13T13:34:42.106049787Z" level=info msg="TearDown network for sandbox \"ea5cf49efc7cd6bceaf869c25a1e0ee305ee2c5cb1307feaaffb986d0540529c\" successfully" Dec 13 13:34:42.109266 containerd[1460]: time="2024-12-13T13:34:42.109236634Z" level=warning msg="Failed to get podSandbox status for container event for sandboxID \"ea5cf49efc7cd6bceaf869c25a1e0ee305ee2c5cb1307feaaffb986d0540529c\": an error occurred when try to find sandbox: not found. Sending the event with nil podSandboxStatus." Dec 13 13:34:42.109337 containerd[1460]: time="2024-12-13T13:34:42.109268506Z" level=info msg="RemovePodSandbox \"ea5cf49efc7cd6bceaf869c25a1e0ee305ee2c5cb1307feaaffb986d0540529c\" returns successfully" Dec 13 13:34:42.109518 containerd[1460]: time="2024-12-13T13:34:42.109485238Z" level=info msg="StopPodSandbox for \"7c57ef222078b00fb1358031d13913b076e7f887d430518e050f8ecbe0260ca9\"" Dec 13 13:34:42.109603 containerd[1460]: time="2024-12-13T13:34:42.109581063Z" level=info msg="TearDown network for sandbox \"7c57ef222078b00fb1358031d13913b076e7f887d430518e050f8ecbe0260ca9\" successfully" Dec 13 13:34:42.109627 containerd[1460]: time="2024-12-13T13:34:42.109600290Z" level=info msg="StopPodSandbox for \"7c57ef222078b00fb1358031d13913b076e7f887d430518e050f8ecbe0260ca9\" returns successfully" Dec 13 13:34:42.109852 containerd[1460]: time="2024-12-13T13:34:42.109831078Z" level=info msg="RemovePodSandbox for \"7c57ef222078b00fb1358031d13913b076e7f887d430518e050f8ecbe0260ca9\"" Dec 13 13:34:42.109907 containerd[1460]: time="2024-12-13T13:34:42.109869543Z" level=info msg="Forcibly stopping sandbox \"7c57ef222078b00fb1358031d13913b076e7f887d430518e050f8ecbe0260ca9\"" Dec 13 13:34:42.109978 containerd[1460]: time="2024-12-13T13:34:42.109944910Z" level=info msg="TearDown network for sandbox \"7c57ef222078b00fb1358031d13913b076e7f887d430518e050f8ecbe0260ca9\" successfully" Dec 13 13:34:42.114180 containerd[1460]: time="2024-12-13T13:34:42.114152569Z" level=warning msg="Failed to get podSandbox status for container event for sandboxID \"7c57ef222078b00fb1358031d13913b076e7f887d430518e050f8ecbe0260ca9\": an error occurred when try to find sandbox: not found. Sending the event with nil podSandboxStatus." 
Dec 13 13:34:42.114241 containerd[1460]: time="2024-12-13T13:34:42.114193979Z" level=info msg="RemovePodSandbox \"7c57ef222078b00fb1358031d13913b076e7f887d430518e050f8ecbe0260ca9\" returns successfully" Dec 13 13:34:42.114577 containerd[1460]: time="2024-12-13T13:34:42.114438103Z" level=info msg="StopPodSandbox for \"3471811395b1ff1780abbbc92eeb9a683464d2b59eb4b36f7ca30acd080ae9c4\"" Dec 13 13:34:42.114577 containerd[1460]: time="2024-12-13T13:34:42.114523809Z" level=info msg="TearDown network for sandbox \"3471811395b1ff1780abbbc92eeb9a683464d2b59eb4b36f7ca30acd080ae9c4\" successfully" Dec 13 13:34:42.114577 containerd[1460]: time="2024-12-13T13:34:42.114533528Z" level=info msg="StopPodSandbox for \"3471811395b1ff1780abbbc92eeb9a683464d2b59eb4b36f7ca30acd080ae9c4\" returns successfully" Dec 13 13:34:42.114760 containerd[1460]: time="2024-12-13T13:34:42.114734097Z" level=info msg="RemovePodSandbox for \"3471811395b1ff1780abbbc92eeb9a683464d2b59eb4b36f7ca30acd080ae9c4\"" Dec 13 13:34:42.114826 containerd[1460]: time="2024-12-13T13:34:42.114756311Z" level=info msg="Forcibly stopping sandbox \"3471811395b1ff1780abbbc92eeb9a683464d2b59eb4b36f7ca30acd080ae9c4\"" Dec 13 13:34:42.114889 containerd[1460]: time="2024-12-13T13:34:42.114863729Z" level=info msg="TearDown network for sandbox \"3471811395b1ff1780abbbc92eeb9a683464d2b59eb4b36f7ca30acd080ae9c4\" successfully" Dec 13 13:34:42.118288 containerd[1460]: time="2024-12-13T13:34:42.118255153Z" level=warning msg="Failed to get podSandbox status for container event for sandboxID \"3471811395b1ff1780abbbc92eeb9a683464d2b59eb4b36f7ca30acd080ae9c4\": an error occurred when try to find sandbox: not found. Sending the event with nil podSandboxStatus." Dec 13 13:34:42.118371 containerd[1460]: time="2024-12-13T13:34:42.118288548Z" level=info msg="RemovePodSandbox \"3471811395b1ff1780abbbc92eeb9a683464d2b59eb4b36f7ca30acd080ae9c4\" returns successfully" Dec 13 13:34:42.118536 containerd[1460]: time="2024-12-13T13:34:42.118508846Z" level=info msg="StopPodSandbox for \"03f3d3da32a6bad9575866ecc071f7db7e2ef21eb15c13b5d3ccc4981ea3a6b8\"" Dec 13 13:34:42.118632 containerd[1460]: time="2024-12-13T13:34:42.118608289Z" level=info msg="TearDown network for sandbox \"03f3d3da32a6bad9575866ecc071f7db7e2ef21eb15c13b5d3ccc4981ea3a6b8\" successfully" Dec 13 13:34:42.118632 containerd[1460]: time="2024-12-13T13:34:42.118627246Z" level=info msg="StopPodSandbox for \"03f3d3da32a6bad9575866ecc071f7db7e2ef21eb15c13b5d3ccc4981ea3a6b8\" returns successfully" Dec 13 13:34:42.118855 containerd[1460]: time="2024-12-13T13:34:42.118826533Z" level=info msg="RemovePodSandbox for \"03f3d3da32a6bad9575866ecc071f7db7e2ef21eb15c13b5d3ccc4981ea3a6b8\"" Dec 13 13:34:42.118855 containerd[1460]: time="2024-12-13T13:34:42.118844057Z" level=info msg="Forcibly stopping sandbox \"03f3d3da32a6bad9575866ecc071f7db7e2ef21eb15c13b5d3ccc4981ea3a6b8\"" Dec 13 13:34:42.118925 containerd[1460]: time="2024-12-13T13:34:42.118904214Z" level=info msg="TearDown network for sandbox \"03f3d3da32a6bad9575866ecc071f7db7e2ef21eb15c13b5d3ccc4981ea3a6b8\" successfully" Dec 13 13:34:42.122279 containerd[1460]: time="2024-12-13T13:34:42.122246312Z" level=warning msg="Failed to get podSandbox status for container event for sandboxID \"03f3d3da32a6bad9575866ecc071f7db7e2ef21eb15c13b5d3ccc4981ea3a6b8\": an error occurred when try to find sandbox: not found. Sending the event with nil podSandboxStatus." 
Dec 13 13:34:42.122368 containerd[1460]: time="2024-12-13T13:34:42.122290398Z" level=info msg="RemovePodSandbox \"03f3d3da32a6bad9575866ecc071f7db7e2ef21eb15c13b5d3ccc4981ea3a6b8\" returns successfully" Dec 13 13:34:42.122582 containerd[1460]: time="2024-12-13T13:34:42.122559511Z" level=info msg="StopPodSandbox for \"3096d2cf061035900198ce891b4e5cbd737796b92099b44cfd3fb9172cdbac54\"" Dec 13 13:34:42.122675 containerd[1460]: time="2024-12-13T13:34:42.122656859Z" level=info msg="TearDown network for sandbox \"3096d2cf061035900198ce891b4e5cbd737796b92099b44cfd3fb9172cdbac54\" successfully" Dec 13 13:34:42.122705 containerd[1460]: time="2024-12-13T13:34:42.122674363Z" level=info msg="StopPodSandbox for \"3096d2cf061035900198ce891b4e5cbd737796b92099b44cfd3fb9172cdbac54\" returns successfully" Dec 13 13:34:42.123790 containerd[1460]: time="2024-12-13T13:34:42.123014754Z" level=info msg="RemovePodSandbox for \"3096d2cf061035900198ce891b4e5cbd737796b92099b44cfd3fb9172cdbac54\"" Dec 13 13:34:42.123790 containerd[1460]: time="2024-12-13T13:34:42.123039832Z" level=info msg="Forcibly stopping sandbox \"3096d2cf061035900198ce891b4e5cbd737796b92099b44cfd3fb9172cdbac54\"" Dec 13 13:34:42.123790 containerd[1460]: time="2024-12-13T13:34:42.123102193Z" level=info msg="TearDown network for sandbox \"3096d2cf061035900198ce891b4e5cbd737796b92099b44cfd3fb9172cdbac54\" successfully" Dec 13 13:34:42.126374 containerd[1460]: time="2024-12-13T13:34:42.126344098Z" level=warning msg="Failed to get podSandbox status for container event for sandboxID \"3096d2cf061035900198ce891b4e5cbd737796b92099b44cfd3fb9172cdbac54\": an error occurred when try to find sandbox: not found. Sending the event with nil podSandboxStatus." Dec 13 13:34:42.126427 containerd[1460]: time="2024-12-13T13:34:42.126374707Z" level=info msg="RemovePodSandbox \"3096d2cf061035900198ce891b4e5cbd737796b92099b44cfd3fb9172cdbac54\" returns successfully" Dec 13 13:34:42.126621 containerd[1460]: time="2024-12-13T13:34:42.126599314Z" level=info msg="StopPodSandbox for \"fd4a3ec4455a4558ba5951ceb7f7c3f3eeeab9ca964d38b261406ee89bda46a4\"" Dec 13 13:34:42.126704 containerd[1460]: time="2024-12-13T13:34:42.126690300Z" level=info msg="TearDown network for sandbox \"fd4a3ec4455a4558ba5951ceb7f7c3f3eeeab9ca964d38b261406ee89bda46a4\" successfully" Dec 13 13:34:42.126734 containerd[1460]: time="2024-12-13T13:34:42.126703365Z" level=info msg="StopPodSandbox for \"fd4a3ec4455a4558ba5951ceb7f7c3f3eeeab9ca964d38b261406ee89bda46a4\" returns successfully" Dec 13 13:34:42.126955 containerd[1460]: time="2024-12-13T13:34:42.126932991Z" level=info msg="RemovePodSandbox for \"fd4a3ec4455a4558ba5951ceb7f7c3f3eeeab9ca964d38b261406ee89bda46a4\"" Dec 13 13:34:42.127023 containerd[1460]: time="2024-12-13T13:34:42.126957148Z" level=info msg="Forcibly stopping sandbox \"fd4a3ec4455a4558ba5951ceb7f7c3f3eeeab9ca964d38b261406ee89bda46a4\"" Dec 13 13:34:42.127055 containerd[1460]: time="2024-12-13T13:34:42.127030731Z" level=info msg="TearDown network for sandbox \"fd4a3ec4455a4558ba5951ceb7f7c3f3eeeab9ca964d38b261406ee89bda46a4\" successfully" Dec 13 13:34:42.130386 containerd[1460]: time="2024-12-13T13:34:42.130360615Z" level=warning msg="Failed to get podSandbox status for container event for sandboxID \"fd4a3ec4455a4558ba5951ceb7f7c3f3eeeab9ca964d38b261406ee89bda46a4\": an error occurred when try to find sandbox: not found. Sending the event with nil podSandboxStatus." 
Dec 13 13:34:42.130458 containerd[1460]: time="2024-12-13T13:34:42.130400243Z" level=info msg="RemovePodSandbox \"fd4a3ec4455a4558ba5951ceb7f7c3f3eeeab9ca964d38b261406ee89bda46a4\" returns successfully" Dec 13 13:34:42.130683 containerd[1460]: time="2024-12-13T13:34:42.130650889Z" level=info msg="StopPodSandbox for \"bef88fc60cdb3838d0dc9850d23e07d807225163db3a44fc0c007e1c2aa7f9d8\"" Dec 13 13:34:42.130768 containerd[1460]: time="2024-12-13T13:34:42.130746936Z" level=info msg="TearDown network for sandbox \"bef88fc60cdb3838d0dc9850d23e07d807225163db3a44fc0c007e1c2aa7f9d8\" successfully" Dec 13 13:34:42.130768 containerd[1460]: time="2024-12-13T13:34:42.130765071Z" level=info msg="StopPodSandbox for \"bef88fc60cdb3838d0dc9850d23e07d807225163db3a44fc0c007e1c2aa7f9d8\" returns successfully" Dec 13 13:34:42.131015 containerd[1460]: time="2024-12-13T13:34:42.130971362Z" level=info msg="RemovePodSandbox for \"bef88fc60cdb3838d0dc9850d23e07d807225163db3a44fc0c007e1c2aa7f9d8\"" Dec 13 13:34:42.131015 containerd[1460]: time="2024-12-13T13:34:42.130990188Z" level=info msg="Forcibly stopping sandbox \"bef88fc60cdb3838d0dc9850d23e07d807225163db3a44fc0c007e1c2aa7f9d8\"" Dec 13 13:34:42.131084 containerd[1460]: time="2024-12-13T13:34:42.131053381Z" level=info msg="TearDown network for sandbox \"bef88fc60cdb3838d0dc9850d23e07d807225163db3a44fc0c007e1c2aa7f9d8\" successfully" Dec 13 13:34:42.134547 containerd[1460]: time="2024-12-13T13:34:42.134421209Z" level=warning msg="Failed to get podSandbox status for container event for sandboxID \"bef88fc60cdb3838d0dc9850d23e07d807225163db3a44fc0c007e1c2aa7f9d8\": an error occurred when try to find sandbox: not found. Sending the event with nil podSandboxStatus." Dec 13 13:34:42.134547 containerd[1460]: time="2024-12-13T13:34:42.134477408Z" level=info msg="RemovePodSandbox \"bef88fc60cdb3838d0dc9850d23e07d807225163db3a44fc0c007e1c2aa7f9d8\" returns successfully" Dec 13 13:34:42.134958 containerd[1460]: time="2024-12-13T13:34:42.134931549Z" level=info msg="StopPodSandbox for \"9c3a341d124d9379d933702cd59eae0a502abb56529e8f56106691cfdd5210df\"" Dec 13 13:34:42.135062 containerd[1460]: time="2024-12-13T13:34:42.135038217Z" level=info msg="TearDown network for sandbox \"9c3a341d124d9379d933702cd59eae0a502abb56529e8f56106691cfdd5210df\" successfully" Dec 13 13:34:42.135062 containerd[1460]: time="2024-12-13T13:34:42.135058286Z" level=info msg="StopPodSandbox for \"9c3a341d124d9379d933702cd59eae0a502abb56529e8f56106691cfdd5210df\" returns successfully" Dec 13 13:34:42.135362 containerd[1460]: time="2024-12-13T13:34:42.135340263Z" level=info msg="RemovePodSandbox for \"9c3a341d124d9379d933702cd59eae0a502abb56529e8f56106691cfdd5210df\"" Dec 13 13:34:42.135421 containerd[1460]: time="2024-12-13T13:34:42.135364841Z" level=info msg="Forcibly stopping sandbox \"9c3a341d124d9379d933702cd59eae0a502abb56529e8f56106691cfdd5210df\"" Dec 13 13:34:42.135485 containerd[1460]: time="2024-12-13T13:34:42.135441630Z" level=info msg="TearDown network for sandbox \"9c3a341d124d9379d933702cd59eae0a502abb56529e8f56106691cfdd5210df\" successfully" Dec 13 13:34:42.164410 containerd[1460]: time="2024-12-13T13:34:42.164358611Z" level=warning msg="Failed to get podSandbox status for container event for sandboxID \"9c3a341d124d9379d933702cd59eae0a502abb56529e8f56106691cfdd5210df\": an error occurred when try to find sandbox: not found. Sending the event with nil podSandboxStatus." 
Dec 13 13:34:42.164523 containerd[1460]: time="2024-12-13T13:34:42.164423977Z" level=info msg="RemovePodSandbox \"9c3a341d124d9379d933702cd59eae0a502abb56529e8f56106691cfdd5210df\" returns successfully" Dec 13 13:34:42.164981 containerd[1460]: time="2024-12-13T13:34:42.164796832Z" level=info msg="StopPodSandbox for \"4ef0d56a397cd72f2356f51c144189ca5302b06a02860d08835bc16ddf93f006\"" Dec 13 13:34:42.164981 containerd[1460]: time="2024-12-13T13:34:42.164913719Z" level=info msg="TearDown network for sandbox \"4ef0d56a397cd72f2356f51c144189ca5302b06a02860d08835bc16ddf93f006\" successfully" Dec 13 13:34:42.164981 containerd[1460]: time="2024-12-13T13:34:42.164926503Z" level=info msg="StopPodSandbox for \"4ef0d56a397cd72f2356f51c144189ca5302b06a02860d08835bc16ddf93f006\" returns successfully" Dec 13 13:34:42.165208 containerd[1460]: time="2024-12-13T13:34:42.165173633Z" level=info msg="RemovePodSandbox for \"4ef0d56a397cd72f2356f51c144189ca5302b06a02860d08835bc16ddf93f006\"" Dec 13 13:34:42.165208 containerd[1460]: time="2024-12-13T13:34:42.165201498Z" level=info msg="Forcibly stopping sandbox \"4ef0d56a397cd72f2356f51c144189ca5302b06a02860d08835bc16ddf93f006\"" Dec 13 13:34:42.165335 containerd[1460]: time="2024-12-13T13:34:42.165278236Z" level=info msg="TearDown network for sandbox \"4ef0d56a397cd72f2356f51c144189ca5302b06a02860d08835bc16ddf93f006\" successfully" Dec 13 13:34:42.265015 containerd[1460]: time="2024-12-13T13:34:42.264962377Z" level=warning msg="Failed to get podSandbox status for container event for sandboxID \"4ef0d56a397cd72f2356f51c144189ca5302b06a02860d08835bc16ddf93f006\": an error occurred when try to find sandbox: not found. Sending the event with nil podSandboxStatus." Dec 13 13:34:42.265135 containerd[1460]: time="2024-12-13T13:34:42.265026752Z" level=info msg="RemovePodSandbox \"4ef0d56a397cd72f2356f51c144189ca5302b06a02860d08835bc16ddf93f006\" returns successfully" Dec 13 13:34:42.265350 containerd[1460]: time="2024-12-13T13:34:42.265306435Z" level=info msg="StopPodSandbox for \"a3d938a9fe1cbda4b552ace19138d55db25439b920e8a82798c7e324c7afe2f4\"" Dec 13 13:34:42.265461 containerd[1460]: time="2024-12-13T13:34:42.265437609Z" level=info msg="TearDown network for sandbox \"a3d938a9fe1cbda4b552ace19138d55db25439b920e8a82798c7e324c7afe2f4\" successfully" Dec 13 13:34:42.265461 containerd[1460]: time="2024-12-13T13:34:42.265453300Z" level=info msg="StopPodSandbox for \"a3d938a9fe1cbda4b552ace19138d55db25439b920e8a82798c7e324c7afe2f4\" returns successfully" Dec 13 13:34:42.265684 containerd[1460]: time="2024-12-13T13:34:42.265657727Z" level=info msg="RemovePodSandbox for \"a3d938a9fe1cbda4b552ace19138d55db25439b920e8a82798c7e324c7afe2f4\"" Dec 13 13:34:42.265684 containerd[1460]: time="2024-12-13T13:34:42.265679970Z" level=info msg="Forcibly stopping sandbox \"a3d938a9fe1cbda4b552ace19138d55db25439b920e8a82798c7e324c7afe2f4\"" Dec 13 13:34:42.265893 containerd[1460]: time="2024-12-13T13:34:42.265750387Z" level=info msg="TearDown network for sandbox \"a3d938a9fe1cbda4b552ace19138d55db25439b920e8a82798c7e324c7afe2f4\" successfully" Dec 13 13:34:42.354157 containerd[1460]: time="2024-12-13T13:34:42.354113980Z" level=warning msg="Failed to get podSandbox status for container event for sandboxID \"a3d938a9fe1cbda4b552ace19138d55db25439b920e8a82798c7e324c7afe2f4\": an error occurred when try to find sandbox: not found. Sending the event with nil podSandboxStatus." 
Dec 13 13:34:42.354254 containerd[1460]: time="2024-12-13T13:34:42.354172743Z" level=info msg="RemovePodSandbox \"a3d938a9fe1cbda4b552ace19138d55db25439b920e8a82798c7e324c7afe2f4\" returns successfully" Dec 13 13:34:42.354627 containerd[1460]: time="2024-12-13T13:34:42.354479099Z" level=info msg="StopPodSandbox for \"9c61f38680623a070a89cebb1cf248b637d83806377c0ceb07f4313094888769\"" Dec 13 13:34:42.354627 containerd[1460]: time="2024-12-13T13:34:42.354566388Z" level=info msg="TearDown network for sandbox \"9c61f38680623a070a89cebb1cf248b637d83806377c0ceb07f4313094888769\" successfully" Dec 13 13:34:42.354627 containerd[1460]: time="2024-12-13T13:34:42.354575265Z" level=info msg="StopPodSandbox for \"9c61f38680623a070a89cebb1cf248b637d83806377c0ceb07f4313094888769\" returns successfully" Dec 13 13:34:42.354823 containerd[1460]: time="2024-12-13T13:34:42.354798569Z" level=info msg="RemovePodSandbox for \"9c61f38680623a070a89cebb1cf248b637d83806377c0ceb07f4313094888769\"" Dec 13 13:34:42.354864 containerd[1460]: time="2024-12-13T13:34:42.354826122Z" level=info msg="Forcibly stopping sandbox \"9c61f38680623a070a89cebb1cf248b637d83806377c0ceb07f4313094888769\"" Dec 13 13:34:42.354916 containerd[1460]: time="2024-12-13T13:34:42.354886479Z" level=info msg="TearDown network for sandbox \"9c61f38680623a070a89cebb1cf248b637d83806377c0ceb07f4313094888769\" successfully" Dec 13 13:34:42.498553 containerd[1460]: time="2024-12-13T13:34:42.498501881Z" level=warning msg="Failed to get podSandbox status for container event for sandboxID \"9c61f38680623a070a89cebb1cf248b637d83806377c0ceb07f4313094888769\": an error occurred when try to find sandbox: not found. Sending the event with nil podSandboxStatus." Dec 13 13:34:42.498553 containerd[1460]: time="2024-12-13T13:34:42.498556606Z" level=info msg="RemovePodSandbox \"9c61f38680623a070a89cebb1cf248b637d83806377c0ceb07f4313094888769\" returns successfully" Dec 13 13:34:42.498868 containerd[1460]: time="2024-12-13T13:34:42.498843834Z" level=info msg="StopPodSandbox for \"a51067dfa3319463394d4929f4d513a74dfa510e4dddd7a818017a13ba006fa2\"" Dec 13 13:34:42.498949 containerd[1460]: time="2024-12-13T13:34:42.498930873Z" level=info msg="TearDown network for sandbox \"a51067dfa3319463394d4929f4d513a74dfa510e4dddd7a818017a13ba006fa2\" successfully" Dec 13 13:34:42.498949 containerd[1460]: time="2024-12-13T13:34:42.498942125Z" level=info msg="StopPodSandbox for \"a51067dfa3319463394d4929f4d513a74dfa510e4dddd7a818017a13ba006fa2\" returns successfully" Dec 13 13:34:42.499269 containerd[1460]: time="2024-12-13T13:34:42.499230445Z" level=info msg="RemovePodSandbox for \"a51067dfa3319463394d4929f4d513a74dfa510e4dddd7a818017a13ba006fa2\"" Dec 13 13:34:42.499269 containerd[1460]: time="2024-12-13T13:34:42.499255653Z" level=info msg="Forcibly stopping sandbox \"a51067dfa3319463394d4929f4d513a74dfa510e4dddd7a818017a13ba006fa2\"" Dec 13 13:34:42.499401 containerd[1460]: time="2024-12-13T13:34:42.499354075Z" level=info msg="TearDown network for sandbox \"a51067dfa3319463394d4929f4d513a74dfa510e4dddd7a818017a13ba006fa2\" successfully" Dec 13 13:34:42.503671 containerd[1460]: time="2024-12-13T13:34:42.503643061Z" level=warning msg="Failed to get podSandbox status for container event for sandboxID \"a51067dfa3319463394d4929f4d513a74dfa510e4dddd7a818017a13ba006fa2\": an error occurred when try to find sandbox: not found. Sending the event with nil podSandboxStatus." 
Dec 13 13:34:42.503723 containerd[1460]: time="2024-12-13T13:34:42.503682518Z" level=info msg="RemovePodSandbox \"a51067dfa3319463394d4929f4d513a74dfa510e4dddd7a818017a13ba006fa2\" returns successfully"
Dec 13 13:34:42.503921 containerd[1460]: time="2024-12-13T13:34:42.503901483Z" level=info msg="StopPodSandbox for \"d1a332cec12278e8a2848eda31dc70e9822bb2e5c607ef1e71dfc9da622ecebb\""
Dec 13 13:34:42.504004 containerd[1460]: time="2024-12-13T13:34:42.503987390Z" level=info msg="TearDown network for sandbox \"d1a332cec12278e8a2848eda31dc70e9822bb2e5c607ef1e71dfc9da622ecebb\" successfully"
Dec 13 13:34:42.504004 containerd[1460]: time="2024-12-13T13:34:42.504000947Z" level=info msg="StopPodSandbox for \"d1a332cec12278e8a2848eda31dc70e9822bb2e5c607ef1e71dfc9da622ecebb\" returns successfully"
Dec 13 13:34:42.504192 containerd[1460]: time="2024-12-13T13:34:42.504163982Z" level=info msg="RemovePodSandbox for \"d1a332cec12278e8a2848eda31dc70e9822bb2e5c607ef1e71dfc9da622ecebb\""
Dec 13 13:34:42.504192 containerd[1460]: time="2024-12-13T13:34:42.504186927Z" level=info msg="Forcibly stopping sandbox \"d1a332cec12278e8a2848eda31dc70e9822bb2e5c607ef1e71dfc9da622ecebb\""
Dec 13 13:34:42.504288 containerd[1460]: time="2024-12-13T13:34:42.504259138Z" level=info msg="TearDown network for sandbox \"d1a332cec12278e8a2848eda31dc70e9822bb2e5c607ef1e71dfc9da622ecebb\" successfully"
Dec 13 13:34:42.508630 containerd[1460]: time="2024-12-13T13:34:42.508601358Z" level=warning msg="Failed to get podSandbox status for container event for sandboxID \"d1a332cec12278e8a2848eda31dc70e9822bb2e5c607ef1e71dfc9da622ecebb\": an error occurred when try to find sandbox: not found. Sending the event with nil podSandboxStatus."
Dec 13 13:34:42.508677 containerd[1460]: time="2024-12-13T13:34:42.508641676Z" level=info msg="RemovePodSandbox \"d1a332cec12278e8a2848eda31dc70e9822bb2e5c607ef1e71dfc9da622ecebb\" returns successfully"
Dec 13 13:34:42.508866 containerd[1460]: time="2024-12-13T13:34:42.508842355Z" level=info msg="StopPodSandbox for \"5830ddd4f895c70171dbfc66a59269f13cec0912a16a8f19a7fc422dce5b8861\""
Dec 13 13:34:42.508938 containerd[1460]: time="2024-12-13T13:34:42.508914525Z" level=info msg="TearDown network for sandbox \"5830ddd4f895c70171dbfc66a59269f13cec0912a16a8f19a7fc422dce5b8861\" successfully"
Dec 13 13:34:42.508938 containerd[1460]: time="2024-12-13T13:34:42.508925828Z" level=info msg="StopPodSandbox for \"5830ddd4f895c70171dbfc66a59269f13cec0912a16a8f19a7fc422dce5b8861\" returns successfully"
Dec 13 13:34:42.509122 containerd[1460]: time="2024-12-13T13:34:42.509107310Z" level=info msg="RemovePodSandbox for \"5830ddd4f895c70171dbfc66a59269f13cec0912a16a8f19a7fc422dce5b8861\""
Dec 13 13:34:42.509153 containerd[1460]: time="2024-12-13T13:34:42.509123582Z" level=info msg="Forcibly stopping sandbox \"5830ddd4f895c70171dbfc66a59269f13cec0912a16a8f19a7fc422dce5b8861\""
Dec 13 13:34:42.509207 containerd[1460]: time="2024-12-13T13:34:42.509181925Z" level=info msg="TearDown network for sandbox \"5830ddd4f895c70171dbfc66a59269f13cec0912a16a8f19a7fc422dce5b8861\" successfully"
Dec 13 13:34:42.512992 containerd[1460]: time="2024-12-13T13:34:42.512964719Z" level=warning msg="Failed to get podSandbox status for container event for sandboxID \"5830ddd4f895c70171dbfc66a59269f13cec0912a16a8f19a7fc422dce5b8861\": an error occurred when try to find sandbox: not found. Sending the event with nil podSandboxStatus."
Dec 13 13:34:42.513035 containerd[1460]: time="2024-12-13T13:34:42.512994066Z" level=info msg="RemovePodSandbox \"5830ddd4f895c70171dbfc66a59269f13cec0912a16a8f19a7fc422dce5b8861\" returns successfully"
Dec 13 13:34:42.513199 containerd[1460]: time="2024-12-13T13:34:42.513180969Z" level=info msg="StopPodSandbox for \"8c8a323be73a25f027a42aa31bd62c7c50a5548b9c0f738623c6c902439d54ca\""
Dec 13 13:34:42.513266 containerd[1460]: time="2024-12-13T13:34:42.513249753Z" level=info msg="TearDown network for sandbox \"8c8a323be73a25f027a42aa31bd62c7c50a5548b9c0f738623c6c902439d54ca\" successfully"
Dec 13 13:34:42.513266 containerd[1460]: time="2024-12-13T13:34:42.513260312Z" level=info msg="StopPodSandbox for \"8c8a323be73a25f027a42aa31bd62c7c50a5548b9c0f738623c6c902439d54ca\" returns successfully"
Dec 13 13:34:42.513505 containerd[1460]: time="2024-12-13T13:34:42.513465010Z" level=info msg="RemovePodSandbox for \"8c8a323be73a25f027a42aa31bd62c7c50a5548b9c0f738623c6c902439d54ca\""
Dec 13 13:34:42.513505 containerd[1460]: time="2024-12-13T13:34:42.513488936Z" level=info msg="Forcibly stopping sandbox \"8c8a323be73a25f027a42aa31bd62c7c50a5548b9c0f738623c6c902439d54ca\""
Dec 13 13:34:42.513602 containerd[1460]: time="2024-12-13T13:34:42.513562219Z" level=info msg="TearDown network for sandbox \"8c8a323be73a25f027a42aa31bd62c7c50a5548b9c0f738623c6c902439d54ca\" successfully"
Dec 13 13:34:42.517333 containerd[1460]: time="2024-12-13T13:34:42.517290377Z" level=warning msg="Failed to get podSandbox status for container event for sandboxID \"8c8a323be73a25f027a42aa31bd62c7c50a5548b9c0f738623c6c902439d54ca\": an error occurred when try to find sandbox: not found. Sending the event with nil podSandboxStatus."
Dec 13 13:34:42.517387 containerd[1460]: time="2024-12-13T13:34:42.517342999Z" level=info msg="RemovePodSandbox \"8c8a323be73a25f027a42aa31bd62c7c50a5548b9c0f738623c6c902439d54ca\" returns successfully"
Dec 13 13:34:42.517560 containerd[1460]: time="2024-12-13T13:34:42.517537086Z" level=info msg="StopPodSandbox for \"72d027854b05139635b45a0e92158d8fa8f40ee284fc63177385a814ee2244d0\""
Dec 13 13:34:42.517622 containerd[1460]: time="2024-12-13T13:34:42.517610128Z" level=info msg="TearDown network for sandbox \"72d027854b05139635b45a0e92158d8fa8f40ee284fc63177385a814ee2244d0\" successfully"
Dec 13 13:34:42.517622 containerd[1460]: time="2024-12-13T13:34:42.517619365Z" level=info msg="StopPodSandbox for \"72d027854b05139635b45a0e92158d8fa8f40ee284fc63177385a814ee2244d0\" returns successfully"
Dec 13 13:34:42.517881 containerd[1460]: time="2024-12-13T13:34:42.517861295Z" level=info msg="RemovePodSandbox for \"72d027854b05139635b45a0e92158d8fa8f40ee284fc63177385a814ee2244d0\""
Dec 13 13:34:42.517920 containerd[1460]: time="2024-12-13T13:34:42.517881244Z" level=info msg="Forcibly stopping sandbox \"72d027854b05139635b45a0e92158d8fa8f40ee284fc63177385a814ee2244d0\""
Dec 13 13:34:42.517995 containerd[1460]: time="2024-12-13T13:34:42.517971179Z" level=info msg="TearDown network for sandbox \"72d027854b05139635b45a0e92158d8fa8f40ee284fc63177385a814ee2244d0\" successfully"
Dec 13 13:34:42.521478 containerd[1460]: time="2024-12-13T13:34:42.521459471Z" level=warning msg="Failed to get podSandbox status for container event for sandboxID \"72d027854b05139635b45a0e92158d8fa8f40ee284fc63177385a814ee2244d0\": an error occurred when try to find sandbox: not found. Sending the event with nil podSandboxStatus."
Dec 13 13:34:42.521512 containerd[1460]: time="2024-12-13T13:34:42.521489439Z" level=info msg="RemovePodSandbox \"72d027854b05139635b45a0e92158d8fa8f40ee284fc63177385a814ee2244d0\" returns successfully"
Dec 13 13:34:42.521703 containerd[1460]: time="2024-12-13T13:34:42.521685379Z" level=info msg="StopPodSandbox for \"d3f6fbb3f4076657051c373b6d68491beda4319936476f5a29f465b9cc1dee09\""
Dec 13 13:34:42.521780 containerd[1460]: time="2024-12-13T13:34:42.521766016Z" level=info msg="TearDown network for sandbox \"d3f6fbb3f4076657051c373b6d68491beda4319936476f5a29f465b9cc1dee09\" successfully"
Dec 13 13:34:42.521780 containerd[1460]: time="2024-12-13T13:34:42.521774292Z" level=info msg="StopPodSandbox for \"d3f6fbb3f4076657051c373b6d68491beda4319936476f5a29f465b9cc1dee09\" returns successfully"
Dec 13 13:34:42.521966 containerd[1460]: time="2024-12-13T13:34:42.521945214Z" level=info msg="RemovePodSandbox for \"d3f6fbb3f4076657051c373b6d68491beda4319936476f5a29f465b9cc1dee09\""
Dec 13 13:34:42.522004 containerd[1460]: time="2024-12-13T13:34:42.521970974Z" level=info msg="Forcibly stopping sandbox \"d3f6fbb3f4076657051c373b6d68491beda4319936476f5a29f465b9cc1dee09\""
Dec 13 13:34:42.522068 containerd[1460]: time="2024-12-13T13:34:42.522042433Z" level=info msg="TearDown network for sandbox \"d3f6fbb3f4076657051c373b6d68491beda4319936476f5a29f465b9cc1dee09\" successfully"
Dec 13 13:34:42.525296 containerd[1460]: time="2024-12-13T13:34:42.525267895Z" level=warning msg="Failed to get podSandbox status for container event for sandboxID \"d3f6fbb3f4076657051c373b6d68491beda4319936476f5a29f465b9cc1dee09\": an error occurred when try to find sandbox: not found. Sending the event with nil podSandboxStatus."
Dec 13 13:34:42.525358 containerd[1460]: time="2024-12-13T13:34:42.525301760Z" level=info msg="RemovePodSandbox \"d3f6fbb3f4076657051c373b6d68491beda4319936476f5a29f465b9cc1dee09\" returns successfully"
Dec 13 13:34:42.525548 containerd[1460]: time="2024-12-13T13:34:42.525510706Z" level=info msg="StopPodSandbox for \"9dc7abc46ced9ea2f1fa53d72a6c4814f3579022ef445d58cd7ed3e5773800b2\""
Dec 13 13:34:42.525632 containerd[1460]: time="2024-12-13T13:34:42.525614207Z" level=info msg="TearDown network for sandbox \"9dc7abc46ced9ea2f1fa53d72a6c4814f3579022ef445d58cd7ed3e5773800b2\" successfully"
Dec 13 13:34:42.525658 containerd[1460]: time="2024-12-13T13:34:42.525631691Z" level=info msg="StopPodSandbox for \"9dc7abc46ced9ea2f1fa53d72a6c4814f3579022ef445d58cd7ed3e5773800b2\" returns successfully"
Dec 13 13:34:42.525868 containerd[1460]: time="2024-12-13T13:34:42.525844244Z" level=info msg="RemovePodSandbox for \"9dc7abc46ced9ea2f1fa53d72a6c4814f3579022ef445d58cd7ed3e5773800b2\""
Dec 13 13:34:42.525914 containerd[1460]: time="2024-12-13T13:34:42.525868140Z" level=info msg="Forcibly stopping sandbox \"9dc7abc46ced9ea2f1fa53d72a6c4814f3579022ef445d58cd7ed3e5773800b2\""
Dec 13 13:34:42.525980 containerd[1460]: time="2024-12-13T13:34:42.525946793Z" level=info msg="TearDown network for sandbox \"9dc7abc46ced9ea2f1fa53d72a6c4814f3579022ef445d58cd7ed3e5773800b2\" successfully"
Dec 13 13:34:42.529773 containerd[1460]: time="2024-12-13T13:34:42.529739757Z" level=warning msg="Failed to get podSandbox status for container event for sandboxID \"9dc7abc46ced9ea2f1fa53d72a6c4814f3579022ef445d58cd7ed3e5773800b2\": an error occurred when try to find sandbox: not found. Sending the event with nil podSandboxStatus."
Dec 13 13:34:42.529838 containerd[1460]: time="2024-12-13T13:34:42.529772591Z" level=info msg="RemovePodSandbox \"9dc7abc46ced9ea2f1fa53d72a6c4814f3579022ef445d58cd7ed3e5773800b2\" returns successfully"
Dec 13 13:34:43.898700 systemd[1]: Started sshd@19-10.0.0.150:22-10.0.0.1:40920.service - OpenSSH per-connection server daemon (10.0.0.1:40920).
Dec 13 13:34:43.933673 sshd[5962]: Accepted publickey for core from 10.0.0.1 port 40920 ssh2: RSA SHA256:dZtUZWIHZCqWPvvtEO8QUYeHUprV5e9zMs000ybProI
Dec 13 13:34:43.934955 sshd-session[5962]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Dec 13 13:34:43.938539 systemd-logind[1445]: New session 20 of user core.
Dec 13 13:34:43.946427 systemd[1]: Started session-20.scope - Session 20 of User core.
Dec 13 13:34:44.047705 sshd[5964]: Connection closed by 10.0.0.1 port 40920
Dec 13 13:34:44.048062 sshd-session[5962]: pam_unix(sshd:session): session closed for user core
Dec 13 13:34:44.051450 systemd[1]: sshd@19-10.0.0.150:22-10.0.0.1:40920.service: Deactivated successfully.
Dec 13 13:34:44.053233 systemd[1]: session-20.scope: Deactivated successfully.
Dec 13 13:34:44.053828 systemd-logind[1445]: Session 20 logged out. Waiting for processes to exit.
Dec 13 13:34:44.054654 systemd-logind[1445]: Removed session 20.
Dec 13 13:34:46.945050 kubelet[2640]: E1213 13:34:46.945010 2640 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Dec 13 13:34:49.059169 systemd[1]: Started sshd@20-10.0.0.150:22-10.0.0.1:48552.service - OpenSSH per-connection server daemon (10.0.0.1:48552).
Dec 13 13:34:49.106652 sshd[6019]: Accepted publickey for core from 10.0.0.1 port 48552 ssh2: RSA SHA256:dZtUZWIHZCqWPvvtEO8QUYeHUprV5e9zMs000ybProI
Dec 13 13:34:49.108156 sshd-session[6019]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Dec 13 13:34:49.111991 systemd-logind[1445]: New session 21 of user core.
Dec 13 13:34:49.117450 systemd[1]: Started session-21.scope - Session 21 of User core.
Dec 13 13:34:49.221166 sshd[6021]: Connection closed by 10.0.0.1 port 48552
Dec 13 13:34:49.221529 sshd-session[6019]: pam_unix(sshd:session): session closed for user core
Dec 13 13:34:49.224983 systemd[1]: sshd@20-10.0.0.150:22-10.0.0.1:48552.service: Deactivated successfully.
Dec 13 13:34:49.227172 systemd[1]: session-21.scope: Deactivated successfully.
Dec 13 13:34:49.227793 systemd-logind[1445]: Session 21 logged out. Waiting for processes to exit.
Dec 13 13:34:49.228742 systemd-logind[1445]: Removed session 21.
Dec 13 13:34:53.894441 kubelet[2640]: E1213 13:34:53.894404 2640 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Dec 13 13:34:54.236087 systemd[1]: Started sshd@21-10.0.0.150:22-10.0.0.1:48568.service - OpenSSH per-connection server daemon (10.0.0.1:48568).
Dec 13 13:34:54.272493 sshd[6034]: Accepted publickey for core from 10.0.0.1 port 48568 ssh2: RSA SHA256:dZtUZWIHZCqWPvvtEO8QUYeHUprV5e9zMs000ybProI
Dec 13 13:34:54.273728 sshd-session[6034]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Dec 13 13:34:54.277189 systemd-logind[1445]: New session 22 of user core.
Dec 13 13:34:54.287424 systemd[1]: Started session-22.scope - Session 22 of User core.
Dec 13 13:34:54.389423 sshd[6036]: Connection closed by 10.0.0.1 port 48568
Dec 13 13:34:54.389751 sshd-session[6034]: pam_unix(sshd:session): session closed for user core
Dec 13 13:34:54.393605 systemd[1]: sshd@21-10.0.0.150:22-10.0.0.1:48568.service: Deactivated successfully.
Dec 13 13:34:54.395567 systemd[1]: session-22.scope: Deactivated successfully.
Dec 13 13:34:54.396208 systemd-logind[1445]: Session 22 logged out. Waiting for processes to exit.
Dec 13 13:34:54.396991 systemd-logind[1445]: Removed session 22.
Dec 13 13:34:55.962266 kubelet[2640]: I1213 13:34:55.962230 2640 prober_manager.go:312] "Failed to trigger a manual run" probe="Readiness"
Dec 13 13:34:59.402398 systemd[1]: Started sshd@22-10.0.0.150:22-10.0.0.1:52002.service - OpenSSH per-connection server daemon (10.0.0.1:52002).
Dec 13 13:34:59.449974 sshd[6054]: Accepted publickey for core from 10.0.0.1 port 52002 ssh2: RSA SHA256:dZtUZWIHZCqWPvvtEO8QUYeHUprV5e9zMs000ybProI
Dec 13 13:34:59.451443 sshd-session[6054]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Dec 13 13:34:59.455128 systemd-logind[1445]: New session 23 of user core.
Dec 13 13:34:59.467438 systemd[1]: Started session-23.scope - Session 23 of User core.
Dec 13 13:34:59.583214 sshd[6056]: Connection closed by 10.0.0.1 port 52002
Dec 13 13:34:59.583561 sshd-session[6054]: pam_unix(sshd:session): session closed for user core
Dec 13 13:34:59.587253 systemd[1]: sshd@22-10.0.0.150:22-10.0.0.1:52002.service: Deactivated successfully.
Dec 13 13:34:59.589301 systemd[1]: session-23.scope: Deactivated successfully.
Dec 13 13:34:59.589914 systemd-logind[1445]: Session 23 logged out. Waiting for processes to exit.
Dec 13 13:34:59.590736 systemd-logind[1445]: Removed session 23.
Dec 13 13:35:00.894096 kubelet[2640]: E1213 13:35:00.894060 2640 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"