Jan 17 12:23:35.880808 kernel: Linux version 6.6.71-flatcar (build@pony-truck.infra.kinvolk.io) (x86_64-cros-linux-gnu-gcc (Gentoo Hardened 13.3.1_p20240614 p17) 13.3.1 20240614, GNU ld (Gentoo 2.42 p3) 2.42.0) #1 SMP PREEMPT_DYNAMIC Fri Jan 17 10:39:07 -00 2025
Jan 17 12:23:35.880828 kernel: Command line: BOOT_IMAGE=/flatcar/vmlinuz-a mount.usr=/dev/mapper/usr verity.usr=PARTUUID=7130c94a-213a-4e5a-8e26-6cce9662f132 rootflags=rw mount.usrflags=ro consoleblank=0 root=LABEL=ROOT console=ttyS0,115200 flatcar.first_boot=detected verity.usrhash=bf1e0d81a0170850ab02d370c1a7c7a3f5983c980b3730f748240a3bda2dbb2e
Jan 17 12:23:35.880839 kernel: BIOS-provided physical RAM map:
Jan 17 12:23:35.880846 kernel: BIOS-e820: [mem 0x0000000000000000-0x000000000009fbff] usable
Jan 17 12:23:35.880852 kernel: BIOS-e820: [mem 0x000000000009fc00-0x000000000009ffff] reserved
Jan 17 12:23:35.880858 kernel: BIOS-e820: [mem 0x00000000000f0000-0x00000000000fffff] reserved
Jan 17 12:23:35.880865 kernel: BIOS-e820: [mem 0x0000000000100000-0x000000009cfdbfff] usable
Jan 17 12:23:35.880871 kernel: BIOS-e820: [mem 0x000000009cfdc000-0x000000009cffffff] reserved
Jan 17 12:23:35.880878 kernel: BIOS-e820: [mem 0x00000000b0000000-0x00000000bfffffff] reserved
Jan 17 12:23:35.880886 kernel: BIOS-e820: [mem 0x00000000fed1c000-0x00000000fed1ffff] reserved
Jan 17 12:23:35.880892 kernel: BIOS-e820: [mem 0x00000000feffc000-0x00000000feffffff] reserved
Jan 17 12:23:35.880898 kernel: BIOS-e820: [mem 0x00000000fffc0000-0x00000000ffffffff] reserved
Jan 17 12:23:35.880905 kernel: BIOS-e820: [mem 0x000000fd00000000-0x000000ffffffffff] reserved
Jan 17 12:23:35.880911 kernel: NX (Execute Disable) protection: active
Jan 17 12:23:35.880918 kernel: APIC: Static calls initialized
Jan 17 12:23:35.880927 kernel: SMBIOS 2.8 present.
Jan 17 12:23:35.880934 kernel: DMI: QEMU Standard PC (Q35 + ICH9, 2009), BIOS 1.16.2-debian-1.16.2-1 04/01/2014
Jan 17 12:23:35.880941 kernel: Hypervisor detected: KVM
Jan 17 12:23:35.880949 kernel: kvm-clock: Using msrs 4b564d01 and 4b564d00
Jan 17 12:23:35.880957 kernel: kvm-clock: using sched offset of 2190148680 cycles
Jan 17 12:23:35.880964 kernel: clocksource: kvm-clock: mask: 0xffffffffffffffff max_cycles: 0x1cd42e4dffb, max_idle_ns: 881590591483 ns
Jan 17 12:23:35.880971 kernel: tsc: Detected 2794.748 MHz processor
Jan 17 12:23:35.880980 kernel: e820: update [mem 0x00000000-0x00000fff] usable ==> reserved
Jan 17 12:23:35.880988 kernel: e820: remove [mem 0x000a0000-0x000fffff] usable
Jan 17 12:23:35.880997 kernel: last_pfn = 0x9cfdc max_arch_pfn = 0x400000000
Jan 17 12:23:35.881008 kernel: MTRR map: 4 entries (3 fixed + 1 variable; max 19), built from 8 variable MTRRs
Jan 17 12:23:35.881015 kernel: x86/PAT: Configuration [0-7]: WB WC UC- UC WB WP UC- WT
Jan 17 12:23:35.881022 kernel: Using GB pages for direct mapping
Jan 17 12:23:35.881029 kernel: ACPI: Early table checksum verification disabled
Jan 17 12:23:35.881047 kernel: ACPI: RSDP 0x00000000000F59D0 000014 (v00 BOCHS )
Jan 17 12:23:35.881054 kernel: ACPI: RSDT 0x000000009CFE2408 000038 (v01 BOCHS BXPC 00000001 BXPC 00000001)
Jan 17 12:23:35.881061 kernel: ACPI: FACP 0x000000009CFE21E8 0000F4 (v03 BOCHS BXPC 00000001 BXPC 00000001)
Jan 17 12:23:35.881068 kernel: ACPI: DSDT 0x000000009CFE0040 0021A8 (v01 BOCHS BXPC 00000001 BXPC 00000001)
Jan 17 12:23:35.881078 kernel: ACPI: FACS 0x000000009CFE0000 000040
Jan 17 12:23:35.881085 kernel: ACPI: APIC 0x000000009CFE22DC 000090 (v01 BOCHS BXPC 00000001 BXPC 00000001)
Jan 17 12:23:35.881092 kernel: ACPI: HPET 0x000000009CFE236C 000038 (v01 BOCHS BXPC 00000001 BXPC 00000001)
Jan 17 12:23:35.881099 kernel: ACPI: MCFG 0x000000009CFE23A4 00003C (v01 BOCHS BXPC 00000001 BXPC 00000001)
Jan 17 12:23:35.881106 kernel: ACPI: WAET 0x000000009CFE23E0 000028 (v01 BOCHS BXPC 00000001 BXPC 00000001)
Jan 17 12:23:35.881113 kernel: ACPI: Reserving FACP table memory at [mem 0x9cfe21e8-0x9cfe22db]
Jan 17 12:23:35.881120 kernel: ACPI: Reserving DSDT table memory at [mem 0x9cfe0040-0x9cfe21e7]
Jan 17 12:23:35.881130 kernel: ACPI: Reserving FACS table memory at [mem 0x9cfe0000-0x9cfe003f]
Jan 17 12:23:35.881139 kernel: ACPI: Reserving APIC table memory at [mem 0x9cfe22dc-0x9cfe236b]
Jan 17 12:23:35.881146 kernel: ACPI: Reserving HPET table memory at [mem 0x9cfe236c-0x9cfe23a3]
Jan 17 12:23:35.881154 kernel: ACPI: Reserving MCFG table memory at [mem 0x9cfe23a4-0x9cfe23df]
Jan 17 12:23:35.881161 kernel: ACPI: Reserving WAET table memory at [mem 0x9cfe23e0-0x9cfe2407]
Jan 17 12:23:35.881168 kernel: No NUMA configuration found
Jan 17 12:23:35.881175 kernel: Faking a node at [mem 0x0000000000000000-0x000000009cfdbfff]
Jan 17 12:23:35.881182 kernel: NODE_DATA(0) allocated [mem 0x9cfd6000-0x9cfdbfff]
Jan 17 12:23:35.881191 kernel: Zone ranges:
Jan 17 12:23:35.881199 kernel: DMA [mem 0x0000000000001000-0x0000000000ffffff]
Jan 17 12:23:35.881206 kernel: DMA32 [mem 0x0000000001000000-0x000000009cfdbfff]
Jan 17 12:23:35.881213 kernel: Normal empty
Jan 17 12:23:35.881220 kernel: Movable zone start for each node
Jan 17 12:23:35.881227 kernel: Early memory node ranges
Jan 17 12:23:35.881234 kernel: node 0: [mem 0x0000000000001000-0x000000000009efff]
Jan 17 12:23:35.881241 kernel: node 0: [mem 0x0000000000100000-0x000000009cfdbfff]
Jan 17 12:23:35.881248 kernel: Initmem setup node 0 [mem 0x0000000000001000-0x000000009cfdbfff]
Jan 17 12:23:35.881258 kernel: On node 0, zone DMA: 1 pages in unavailable ranges
Jan 17 12:23:35.881265 kernel: On node 0, zone DMA: 97 pages in unavailable ranges
Jan 17 12:23:35.881272 kernel: On node 0, zone DMA32: 12324 pages in unavailable ranges
Jan 17 12:23:35.881279 kernel: ACPI: PM-Timer IO Port: 0x608
Jan 17 12:23:35.881287 kernel: ACPI: LAPIC_NMI (acpi_id[0xff] dfl dfl lint[0x1])
Jan 17 12:23:35.881294 kernel: IOAPIC[0]: apic_id 0, version 17, address 0xfec00000, GSI 0-23
Jan 17 12:23:35.881301 kernel: ACPI: INT_SRC_OVR (bus 0 bus_irq 0 global_irq 2 dfl dfl)
Jan 17 12:23:35.881308 kernel: ACPI: INT_SRC_OVR (bus 0 bus_irq 5 global_irq 5 high level)
Jan 17 12:23:35.881315 kernel: ACPI: INT_SRC_OVR (bus 0 bus_irq 9 global_irq 9 high level)
Jan 17 12:23:35.881325 kernel: ACPI: INT_SRC_OVR (bus 0 bus_irq 10 global_irq 10 high level)
Jan 17 12:23:35.881332 kernel: ACPI: INT_SRC_OVR (bus 0 bus_irq 11 global_irq 11 high level)
Jan 17 12:23:35.881339 kernel: ACPI: Using ACPI (MADT) for SMP configuration information
Jan 17 12:23:35.881346 kernel: ACPI: HPET id: 0x8086a201 base: 0xfed00000
Jan 17 12:23:35.881353 kernel: TSC deadline timer available
Jan 17 12:23:35.881361 kernel: smpboot: Allowing 4 CPUs, 0 hotplug CPUs
Jan 17 12:23:35.881368 kernel: kvm-guest: APIC: eoi() replaced with kvm_guest_apic_eoi_write()
Jan 17 12:23:35.881375 kernel: kvm-guest: KVM setup pv remote TLB flush
Jan 17 12:23:35.881382 kernel: kvm-guest: setup PV sched yield
Jan 17 12:23:35.881389 kernel: [mem 0xc0000000-0xfed1bfff] available for PCI devices
Jan 17 12:23:35.881399 kernel: Booting paravirtualized kernel on KVM
Jan 17 12:23:35.881406 kernel: clocksource: refined-jiffies: mask: 0xffffffff max_cycles: 0xffffffff, max_idle_ns: 1910969940391419 ns
Jan 17 12:23:35.881413 kernel: setup_percpu: NR_CPUS:512 nr_cpumask_bits:4 nr_cpu_ids:4 nr_node_ids:1
Jan 17 12:23:35.881421 kernel: percpu: Embedded 58 pages/cpu s197032 r8192 d32344 u524288
Jan 17 12:23:35.881428 kernel: pcpu-alloc: s197032 r8192 d32344 u524288 alloc=1*2097152
Jan 17 12:23:35.881435 kernel: pcpu-alloc: [0] 0 1 2 3
Jan 17 12:23:35.881442 kernel: kvm-guest: PV spinlocks enabled
Jan 17 12:23:35.881449 kernel: PV qspinlock hash table entries: 256 (order: 0, 4096 bytes, linear)
Jan 17 12:23:35.881458 kernel: Kernel command line: rootflags=rw mount.usrflags=ro BOOT_IMAGE=/flatcar/vmlinuz-a mount.usr=/dev/mapper/usr verity.usr=PARTUUID=7130c94a-213a-4e5a-8e26-6cce9662f132 rootflags=rw mount.usrflags=ro consoleblank=0 root=LABEL=ROOT console=ttyS0,115200 flatcar.first_boot=detected verity.usrhash=bf1e0d81a0170850ab02d370c1a7c7a3f5983c980b3730f748240a3bda2dbb2e
Jan 17 12:23:35.881468 kernel: Unknown kernel command line parameters "BOOT_IMAGE=/flatcar/vmlinuz-a", will be passed to user space.
Jan 17 12:23:35.881475 kernel: random: crng init done
Jan 17 12:23:35.881482 kernel: Dentry cache hash table entries: 524288 (order: 10, 4194304 bytes, linear)
Jan 17 12:23:35.881489 kernel: Inode-cache hash table entries: 262144 (order: 9, 2097152 bytes, linear)
Jan 17 12:23:35.881497 kernel: Fallback order for Node 0: 0
Jan 17 12:23:35.881504 kernel: Built 1 zonelists, mobility grouping on. Total pages: 632732
Jan 17 12:23:35.881511 kernel: Policy zone: DMA32
Jan 17 12:23:35.881518 kernel: mem auto-init: stack:off, heap alloc:off, heap free:off
Jan 17 12:23:35.881528 kernel: Memory: 2434592K/2571752K available (12288K kernel code, 2299K rwdata, 22728K rodata, 42848K init, 2344K bss, 136900K reserved, 0K cma-reserved)
Jan 17 12:23:35.881535 kernel: SLUB: HWalign=64, Order=0-3, MinObjects=0, CPUs=4, Nodes=1
Jan 17 12:23:35.881543 kernel: ftrace: allocating 37918 entries in 149 pages
Jan 17 12:23:35.881550 kernel: ftrace: allocated 149 pages with 4 groups
Jan 17 12:23:35.881557 kernel: Dynamic Preempt: voluntary
Jan 17 12:23:35.881564 kernel: rcu: Preemptible hierarchical RCU implementation.
Jan 17 12:23:35.881582 kernel: rcu: RCU event tracing is enabled.
Jan 17 12:23:35.881590 kernel: rcu: RCU restricting CPUs from NR_CPUS=512 to nr_cpu_ids=4.
Jan 17 12:23:35.881598 kernel: Trampoline variant of Tasks RCU enabled.
Jan 17 12:23:35.881607 kernel: Rude variant of Tasks RCU enabled.
Jan 17 12:23:35.881615 kernel: Tracing variant of Tasks RCU enabled.
Jan 17 12:23:35.881622 kernel: rcu: RCU calculated value of scheduler-enlistment delay is 100 jiffies.
Jan 17 12:23:35.881630 kernel: rcu: Adjusting geometry for rcu_fanout_leaf=16, nr_cpu_ids=4
Jan 17 12:23:35.881637 kernel: NR_IRQS: 33024, nr_irqs: 456, preallocated irqs: 16
Jan 17 12:23:35.881645 kernel: rcu: srcu_init: Setting srcu_struct sizes based on contention.
Jan 17 12:23:35.881652 kernel: Console: colour VGA+ 80x25
Jan 17 12:23:35.881660 kernel: printk: console [ttyS0] enabled
Jan 17 12:23:35.881667 kernel: ACPI: Core revision 20230628
Jan 17 12:23:35.881677 kernel: clocksource: hpet: mask: 0xffffffff max_cycles: 0xffffffff, max_idle_ns: 19112604467 ns
Jan 17 12:23:35.881684 kernel: APIC: Switch to symmetric I/O mode setup
Jan 17 12:23:35.881691 kernel: x2apic enabled
Jan 17 12:23:35.881699 kernel: APIC: Switched APIC routing to: physical x2apic
Jan 17 12:23:35.881706 kernel: kvm-guest: APIC: send_IPI_mask() replaced with kvm_send_ipi_mask()
Jan 17 12:23:35.881714 kernel: kvm-guest: APIC: send_IPI_mask_allbutself() replaced with kvm_send_ipi_mask_allbutself()
Jan 17 12:23:35.881721 kernel: kvm-guest: setup PV IPIs
Jan 17 12:23:35.881738 kernel: ..TIMER: vector=0x30 apic1=0 pin1=2 apic2=-1 pin2=-1
Jan 17 12:23:35.881745 kernel: tsc: Marking TSC unstable due to TSCs unsynchronized
Jan 17 12:23:35.881753 kernel: Calibrating delay loop (skipped) preset value.. 5589.49 BogoMIPS (lpj=2794748)
Jan 17 12:23:35.881760 kernel: x86/cpu: User Mode Instruction Prevention (UMIP) activated
Jan 17 12:23:35.881768 kernel: Last level iTLB entries: 4KB 512, 2MB 255, 4MB 127
Jan 17 12:23:35.881777 kernel: Last level dTLB entries: 4KB 512, 2MB 255, 4MB 127, 1GB 0
Jan 17 12:23:35.881785 kernel: Spectre V1 : Mitigation: usercopy/swapgs barriers and __user pointer sanitization
Jan 17 12:23:35.881793 kernel: Spectre V2 : Mitigation: Retpolines
Jan 17 12:23:35.881800 kernel: Spectre V2 : Spectre v2 / SpectreRSB mitigation: Filling RSB on context switch
Jan 17 12:23:35.881810 kernel: Spectre V2 : Spectre v2 / SpectreRSB : Filling RSB on VMEXIT
Jan 17 12:23:35.881818 kernel: Spectre V2 : Enabling Speculation Barrier for firmware calls
Jan 17 12:23:35.881825 kernel: RETBleed: Mitigation: untrained return thunk
Jan 17 12:23:35.881833 kernel: Spectre V2 : mitigation: Enabling conditional Indirect Branch Prediction Barrier
Jan 17 12:23:35.881841 kernel: Speculative Store Bypass: Mitigation: Speculative Store Bypass disabled via prctl
Jan 17 12:23:35.881848 kernel: Speculative Return Stack Overflow: IBPB-extending microcode not applied!
Jan 17 12:23:35.881857 kernel: Speculative Return Stack Overflow: WARNING: See https://kernel.org/doc/html/latest/admin-guide/hw-vuln/srso.html for mitigation options.
Jan 17 12:23:35.881864 kernel: Speculative Return Stack Overflow: Vulnerable: Safe RET, no microcode
Jan 17 12:23:35.881872 kernel: x86/fpu: Supporting XSAVE feature 0x001: 'x87 floating point registers'
Jan 17 12:23:35.881882 kernel: x86/fpu: Supporting XSAVE feature 0x002: 'SSE registers'
Jan 17 12:23:35.881890 kernel: x86/fpu: Supporting XSAVE feature 0x004: 'AVX registers'
Jan 17 12:23:35.881897 kernel: x86/fpu: xstate_offset[2]: 576, xstate_sizes[2]: 256
Jan 17 12:23:35.881907 kernel: x86/fpu: Enabled xstate features 0x7, context size is 832 bytes, using 'compacted' format.
Jan 17 12:23:35.881915 kernel: Freeing SMP alternatives memory: 32K
Jan 17 12:23:35.881923 kernel: pid_max: default: 32768 minimum: 301
Jan 17 12:23:35.881930 kernel: LSM: initializing lsm=lockdown,capability,landlock,selinux,integrity
Jan 17 12:23:35.881938 kernel: landlock: Up and running.
Jan 17 12:23:35.881945 kernel: SELinux: Initializing.
Jan 17 12:23:35.881955 kernel: Mount-cache hash table entries: 8192 (order: 4, 65536 bytes, linear)
Jan 17 12:23:35.881963 kernel: Mountpoint-cache hash table entries: 8192 (order: 4, 65536 bytes, linear)
Jan 17 12:23:35.881971 kernel: smpboot: CPU0: AMD EPYC 7402P 24-Core Processor (family: 0x17, model: 0x31, stepping: 0x0)
Jan 17 12:23:35.881979 kernel: RCU Tasks: Setting shift to 2 and lim to 1 rcu_task_cb_adjust=1 rcu_task_cpu_ids=4.
Jan 17 12:23:35.881986 kernel: RCU Tasks Rude: Setting shift to 2 and lim to 1 rcu_task_cb_adjust=1 rcu_task_cpu_ids=4.
Jan 17 12:23:35.881994 kernel: RCU Tasks Trace: Setting shift to 2 and lim to 1 rcu_task_cb_adjust=1 rcu_task_cpu_ids=4.
Jan 17 12:23:35.882002 kernel: Performance Events: Fam17h+ core perfctr, AMD PMU driver.
Jan 17 12:23:35.882009 kernel: ... version: 0
Jan 17 12:23:35.882019 kernel: ... bit width: 48
Jan 17 12:23:35.882026 kernel: ... generic registers: 6
Jan 17 12:23:35.882077 kernel: ... value mask: 0000ffffffffffff
Jan 17 12:23:35.882084 kernel: ... max period: 00007fffffffffff
Jan 17 12:23:35.882092 kernel: ... fixed-purpose events: 0
Jan 17 12:23:35.882099 kernel: ... event mask: 000000000000003f
Jan 17 12:23:35.882107 kernel: signal: max sigframe size: 1776
Jan 17 12:23:35.882115 kernel: rcu: Hierarchical SRCU implementation.
Jan 17 12:23:35.882122 kernel: rcu: Max phase no-delay instances is 400.
Jan 17 12:23:35.882130 kernel: smp: Bringing up secondary CPUs ...
Jan 17 12:23:35.882140 kernel: smpboot: x86: Booting SMP configuration:
Jan 17 12:23:35.882148 kernel: .... node #0, CPUs: #1 #2 #3
Jan 17 12:23:35.882155 kernel: smp: Brought up 1 node, 4 CPUs
Jan 17 12:23:35.882163 kernel: smpboot: Max logical packages: 1
Jan 17 12:23:35.882170 kernel: smpboot: Total of 4 processors activated (22357.98 BogoMIPS)
Jan 17 12:23:35.882178 kernel: devtmpfs: initialized
Jan 17 12:23:35.882185 kernel: x86/mm: Memory block size: 128MB
Jan 17 12:23:35.882193 kernel: clocksource: jiffies: mask: 0xffffffff max_cycles: 0xffffffff, max_idle_ns: 1911260446275000 ns
Jan 17 12:23:35.882201 kernel: futex hash table entries: 1024 (order: 4, 65536 bytes, linear)
Jan 17 12:23:35.882210 kernel: pinctrl core: initialized pinctrl subsystem
Jan 17 12:23:35.882218 kernel: NET: Registered PF_NETLINK/PF_ROUTE protocol family
Jan 17 12:23:35.882225 kernel: audit: initializing netlink subsys (disabled)
Jan 17 12:23:35.882233 kernel: audit: type=2000 audit(1737116616.095:1): state=initialized audit_enabled=0 res=1
Jan 17 12:23:35.882240 kernel: thermal_sys: Registered thermal governor 'step_wise'
Jan 17 12:23:35.882248 kernel: thermal_sys: Registered thermal governor 'user_space'
Jan 17 12:23:35.882255 kernel: cpuidle: using governor menu
Jan 17 12:23:35.882263 kernel: acpiphp: ACPI Hot Plug PCI Controller Driver version: 0.5
Jan 17 12:23:35.882270 kernel: dca service started, version 1.12.1
Jan 17 12:23:35.882280 kernel: PCI: MMCONFIG for domain 0000 [bus 00-ff] at [mem 0xb0000000-0xbfffffff] (base 0xb0000000)
Jan 17 12:23:35.882288 kernel: PCI: MMCONFIG at [mem 0xb0000000-0xbfffffff] reserved as E820 entry
Jan 17 12:23:35.882296 kernel: PCI: Using configuration type 1 for base access
Jan 17 12:23:35.882303 kernel: kprobes: kprobe jump-optimization is enabled. All kprobes are optimized if possible.
Jan 17 12:23:35.882311 kernel: HugeTLB: registered 1.00 GiB page size, pre-allocated 0 pages
Jan 17 12:23:35.882318 kernel: HugeTLB: 16380 KiB vmemmap can be freed for a 1.00 GiB page
Jan 17 12:23:35.882326 kernel: HugeTLB: registered 2.00 MiB page size, pre-allocated 0 pages
Jan 17 12:23:35.882333 kernel: HugeTLB: 28 KiB vmemmap can be freed for a 2.00 MiB page
Jan 17 12:23:35.882341 kernel: ACPI: Added _OSI(Module Device)
Jan 17 12:23:35.882351 kernel: ACPI: Added _OSI(Processor Device)
Jan 17 12:23:35.882358 kernel: ACPI: Added _OSI(3.0 _SCP Extensions)
Jan 17 12:23:35.882366 kernel: ACPI: Added _OSI(Processor Aggregator Device)
Jan 17 12:23:35.882373 kernel: ACPI: 1 ACPI AML tables successfully acquired and loaded
Jan 17 12:23:35.882381 kernel: ACPI: _OSC evaluation for CPUs failed, trying _PDC
Jan 17 12:23:35.882388 kernel: ACPI: Interpreter enabled
Jan 17 12:23:35.882396 kernel: ACPI: PM: (supports S0 S3 S5)
Jan 17 12:23:35.882403 kernel: ACPI: Using IOAPIC for interrupt routing
Jan 17 12:23:35.882411 kernel: PCI: Using host bridge windows from ACPI; if necessary, use "pci=nocrs" and report a bug
Jan 17 12:23:35.882420 kernel: PCI: Using E820 reservations for host bridge windows
Jan 17 12:23:35.882428 kernel: ACPI: Enabled 2 GPEs in block 00 to 3F
Jan 17 12:23:35.882435 kernel: ACPI: PCI Root Bridge [PCI0] (domain 0000 [bus 00-ff])
Jan 17 12:23:35.882618 kernel: acpi PNP0A08:00: _OSC: OS supports [ExtendedConfig ASPM ClockPM Segments MSI HPX-Type3]
Jan 17 12:23:35.882747 kernel: acpi PNP0A08:00: _OSC: platform does not support [PCIeHotplug LTR]
Jan 17 12:23:35.882867 kernel: acpi PNP0A08:00: _OSC: OS now controls [PME AER PCIeCapability]
Jan 17 12:23:35.882877 kernel: PCI host bridge to bus 0000:00
Jan 17 12:23:35.883007 kernel: pci_bus 0000:00: root bus resource [io 0x0000-0x0cf7 window]
Jan 17 12:23:35.883134 kernel: pci_bus 0000:00: root bus resource [io 0x0d00-0xffff window]
Jan 17 12:23:35.883245 kernel: pci_bus 0000:00: root bus resource [mem 0x000a0000-0x000bffff window]
Jan 17 12:23:35.883353 kernel: pci_bus 0000:00: root bus resource [mem 0x9d000000-0xafffffff window]
Jan 17 12:23:35.883461 kernel: pci_bus 0000:00: root bus resource [mem 0xc0000000-0xfebfffff window]
Jan 17 12:23:35.883569 kernel: pci_bus 0000:00: root bus resource [mem 0x100000000-0x8ffffffff window]
Jan 17 12:23:35.883687 kernel: pci_bus 0000:00: root bus resource [bus 00-ff]
Jan 17 12:23:35.883828 kernel: pci 0000:00:00.0: [8086:29c0] type 00 class 0x060000
Jan 17 12:23:35.883957 kernel: pci 0000:00:01.0: [1234:1111] type 00 class 0x030000
Jan 17 12:23:35.884101 kernel: pci 0000:00:01.0: reg 0x10: [mem 0xfd000000-0xfdffffff pref]
Jan 17 12:23:35.884222 kernel: pci 0000:00:01.0: reg 0x18: [mem 0xfebd0000-0xfebd0fff]
Jan 17 12:23:35.884340 kernel: pci 0000:00:01.0: reg 0x30: [mem 0xfebc0000-0xfebcffff pref]
Jan 17 12:23:35.884458 kernel: pci 0000:00:01.0: Video device with shadowed ROM at [mem 0x000c0000-0x000dffff]
Jan 17 12:23:35.884593 kernel: pci 0000:00:02.0: [1af4:1005] type 00 class 0x00ff00
Jan 17 12:23:35.884721 kernel: pci 0000:00:02.0: reg 0x10: [io 0xc0c0-0xc0df]
Jan 17 12:23:35.884840 kernel: pci 0000:00:02.0: reg 0x14: [mem 0xfebd1000-0xfebd1fff]
Jan 17 12:23:35.884961 kernel: pci 0000:00:02.0: reg 0x20: [mem 0xfe000000-0xfe003fff 64bit pref]
Jan 17 12:23:35.885114 kernel: pci 0000:00:03.0: [1af4:1001] type 00 class 0x010000
Jan 17 12:23:35.885237 kernel: pci 0000:00:03.0: reg 0x10: [io 0xc000-0xc07f]
Jan 17 12:23:35.885356 kernel: pci 0000:00:03.0: reg 0x14: [mem 0xfebd2000-0xfebd2fff]
Jan 17 12:23:35.885481 kernel: pci 0000:00:03.0: reg 0x20: [mem 0xfe004000-0xfe007fff 64bit pref]
Jan 17 12:23:35.885618 kernel: pci 0000:00:04.0: [1af4:1000] type 00 class 0x020000
Jan 17 12:23:35.885739 kernel: pci 0000:00:04.0: reg 0x10: [io 0xc0e0-0xc0ff]
Jan 17 12:23:35.885859 kernel: pci 0000:00:04.0: reg 0x14: [mem 0xfebd3000-0xfebd3fff]
Jan 17 12:23:35.885981 kernel: pci 0000:00:04.0: reg 0x20: [mem 0xfe008000-0xfe00bfff 64bit pref]
Jan 17 12:23:35.886115 kernel: pci 0000:00:04.0: reg 0x30: [mem 0xfeb80000-0xfebbffff pref]
Jan 17 12:23:35.886241 kernel: pci 0000:00:1f.0: [8086:2918] type 00 class 0x060100
Jan 17 12:23:35.886364 kernel: pci 0000:00:1f.0: quirk: [io 0x0600-0x067f] claimed by ICH6 ACPI/GPIO/TCO
Jan 17 12:23:35.886491 kernel: pci 0000:00:1f.2: [8086:2922] type 00 class 0x010601
Jan 17 12:23:35.886618 kernel: pci 0000:00:1f.2: reg 0x20: [io 0xc100-0xc11f]
Jan 17 12:23:35.886737 kernel: pci 0000:00:1f.2: reg 0x24: [mem 0xfebd4000-0xfebd4fff]
Jan 17 12:23:35.886863 kernel: pci 0000:00:1f.3: [8086:2930] type 00 class 0x0c0500
Jan 17 12:23:35.886981 kernel: pci 0000:00:1f.3: reg 0x20: [io 0x0700-0x073f]
Jan 17 12:23:35.886991 kernel: ACPI: PCI: Interrupt link LNKA configured for IRQ 10
Jan 17 12:23:35.887002 kernel: ACPI: PCI: Interrupt link LNKB configured for IRQ 10
Jan 17 12:23:35.887010 kernel: ACPI: PCI: Interrupt link LNKC configured for IRQ 11
Jan 17 12:23:35.887018 kernel: ACPI: PCI: Interrupt link LNKD configured for IRQ 11
Jan 17 12:23:35.887025 kernel: ACPI: PCI: Interrupt link LNKE configured for IRQ 10
Jan 17 12:23:35.887045 kernel: ACPI: PCI: Interrupt link LNKF configured for IRQ 10
Jan 17 12:23:35.887053 kernel: ACPI: PCI: Interrupt link LNKG configured for IRQ 11
Jan 17 12:23:35.887061 kernel: ACPI: PCI: Interrupt link LNKH configured for IRQ 11
Jan 17 12:23:35.887068 kernel: ACPI: PCI: Interrupt link GSIA configured for IRQ 16
Jan 17 12:23:35.887076 kernel: ACPI: PCI: Interrupt link GSIB configured for IRQ 17
Jan 17 12:23:35.887086 kernel: ACPI: PCI: Interrupt link GSIC configured for IRQ 18
Jan 17 12:23:35.887094 kernel: ACPI: PCI: Interrupt link GSID configured for IRQ 19
Jan 17 12:23:35.887101 kernel: ACPI: PCI: Interrupt link GSIE configured for IRQ 20
Jan 17 12:23:35.887109 kernel: ACPI: PCI: Interrupt link GSIF configured for IRQ 21
Jan 17 12:23:35.887116 kernel: ACPI: PCI: Interrupt link GSIG configured for IRQ 22
Jan 17 12:23:35.887124 kernel: ACPI: PCI: Interrupt link GSIH configured for IRQ 23
Jan 17 12:23:35.887131 kernel: iommu: Default domain type: Translated
Jan 17 12:23:35.887139 kernel: iommu: DMA domain TLB invalidation policy: lazy mode
Jan 17 12:23:35.887146 kernel: PCI: Using ACPI for IRQ routing
Jan 17 12:23:35.887156 kernel: PCI: pci_cache_line_size set to 64 bytes
Jan 17 12:23:35.887163 kernel: e820: reserve RAM buffer [mem 0x0009fc00-0x0009ffff]
Jan 17 12:23:35.887171 kernel: e820: reserve RAM buffer [mem 0x9cfdc000-0x9fffffff]
Jan 17 12:23:35.887292 kernel: pci 0000:00:01.0: vgaarb: setting as boot VGA device
Jan 17 12:23:35.887411 kernel: pci 0000:00:01.0: vgaarb: bridge control possible
Jan 17 12:23:35.887530 kernel: pci 0000:00:01.0: vgaarb: VGA device added: decodes=io+mem,owns=io+mem,locks=none
Jan 17 12:23:35.887540 kernel: vgaarb: loaded
Jan 17 12:23:35.887547 kernel: hpet0: at MMIO 0xfed00000, IRQs 2, 8, 0
Jan 17 12:23:35.887558 kernel: hpet0: 3 comparators, 64-bit 100.000000 MHz counter
Jan 17 12:23:35.887566 kernel: clocksource: Switched to clocksource kvm-clock
Jan 17 12:23:35.887581 kernel: VFS: Disk quotas dquot_6.6.0
Jan 17 12:23:35.887589 kernel: VFS: Dquot-cache hash table entries: 512 (order 0, 4096 bytes)
Jan 17 12:23:35.887596 kernel: pnp: PnP ACPI init
Jan 17 12:23:35.887728 kernel: system 00:05: [mem 0xb0000000-0xbfffffff window] has been reserved
Jan 17 12:23:35.887739 kernel: pnp: PnP ACPI: found 6 devices
Jan 17 12:23:35.887747 kernel: clocksource: acpi_pm: mask: 0xffffff max_cycles: 0xffffff, max_idle_ns: 2085701024 ns
Jan 17 12:23:35.887758 kernel: NET: Registered PF_INET protocol family
Jan 17 12:23:35.887766 kernel: IP idents hash table entries: 65536 (order: 7, 524288 bytes, linear)
Jan 17 12:23:35.887773 kernel: tcp_listen_portaddr_hash hash table entries: 2048 (order: 3, 32768 bytes, linear)
Jan 17 12:23:35.887781 kernel: Table-perturb hash table entries: 65536 (order: 6, 262144 bytes, linear)
Jan 17 12:23:35.887789 kernel: TCP established hash table entries: 32768 (order: 6, 262144 bytes, linear)
Jan 17 12:23:35.887796 kernel: TCP bind hash table entries: 32768 (order: 8, 1048576 bytes, linear)
Jan 17 12:23:35.887804 kernel: TCP: Hash tables configured (established 32768 bind 32768)
Jan 17 12:23:35.887812 kernel: UDP hash table entries: 2048 (order: 4, 65536 bytes, linear)
Jan 17 12:23:35.887819 kernel: UDP-Lite hash table entries: 2048 (order: 4, 65536 bytes, linear)
Jan 17 12:23:35.887829 kernel: NET: Registered PF_UNIX/PF_LOCAL protocol family
Jan 17 12:23:35.887837 kernel: NET: Registered PF_XDP protocol family
Jan 17 12:23:35.887948 kernel: pci_bus 0000:00: resource 4 [io 0x0000-0x0cf7 window]
Jan 17 12:23:35.888118 kernel: pci_bus 0000:00: resource 5 [io 0x0d00-0xffff window]
Jan 17 12:23:35.888229 kernel: pci_bus 0000:00: resource 6 [mem 0x000a0000-0x000bffff window]
Jan 17 12:23:35.888336 kernel: pci_bus 0000:00: resource 7 [mem 0x9d000000-0xafffffff window]
Jan 17 12:23:35.888443 kernel: pci_bus 0000:00: resource 8 [mem 0xc0000000-0xfebfffff window]
Jan 17 12:23:35.888550 kernel: pci_bus 0000:00: resource 9 [mem 0x100000000-0x8ffffffff window]
Jan 17 12:23:35.888564 kernel: PCI: CLS 0 bytes, default 64
Jan 17 12:23:35.888578 kernel: Initialise system trusted keyrings
Jan 17 12:23:35.888586 kernel: workingset: timestamp_bits=39 max_order=20 bucket_order=0
Jan 17 12:23:35.888594 kernel: Key type asymmetric registered
Jan 17 12:23:35.888601 kernel: Asymmetric key parser 'x509' registered
Jan 17 12:23:35.888608 kernel: Block layer SCSI generic (bsg) driver version 0.4 loaded (major 251)
Jan 17 12:23:35.888616 kernel: io scheduler mq-deadline registered
Jan 17 12:23:35.888624 kernel: io scheduler kyber registered
Jan 17 12:23:35.888631 kernel: io scheduler bfq registered
Jan 17 12:23:35.888641 kernel: ioatdma: Intel(R) QuickData Technology Driver 5.00
Jan 17 12:23:35.888649 kernel: ACPI: \_SB_.GSIG: Enabled at IRQ 22
Jan 17 12:23:35.888657 kernel: ACPI: \_SB_.GSIH: Enabled at IRQ 23
Jan 17 12:23:35.888664 kernel: ACPI: \_SB_.GSIE: Enabled at IRQ 20
Jan 17 12:23:35.888672 kernel: Serial: 8250/16550 driver, 4 ports, IRQ sharing enabled
Jan 17 12:23:35.888679 kernel: 00:03: ttyS0 at I/O 0x3f8 (irq = 4, base_baud = 115200) is a 16550A
Jan 17 12:23:35.888687 kernel: i8042: PNP: PS/2 Controller [PNP0303:KBD,PNP0f13:MOU] at 0x60,0x64 irq 1,12
Jan 17 12:23:35.888695 kernel: serio: i8042 KBD port at 0x60,0x64 irq 1
Jan 17 12:23:35.888702 kernel: serio: i8042 AUX port at 0x60,0x64 irq 12
Jan 17 12:23:35.888831 kernel: rtc_cmos 00:04: RTC can wake from S4
Jan 17 12:23:35.888842 kernel: input: AT Translated Set 2 keyboard as /devices/platform/i8042/serio0/input/input0
Jan 17 12:23:35.888952 kernel: rtc_cmos 00:04: registered as rtc0
Jan 17 12:23:35.889078 kernel: rtc_cmos 00:04: setting system clock to 2025-01-17T12:23:35 UTC (1737116615)
Jan 17 12:23:35.889190 kernel: rtc_cmos 00:04: alarms up to one day, y3k, 242 bytes nvram, hpet irqs
Jan 17 12:23:35.889200 kernel: amd_pstate: the _CPC object is not present in SBIOS or ACPI disabled
Jan 17 12:23:35.889208 kernel: NET: Registered PF_INET6 protocol family
Jan 17 12:23:35.889216 kernel: Segment Routing with IPv6
Jan 17 12:23:35.889227 kernel: In-situ OAM (IOAM) with IPv6
Jan 17 12:23:35.889234 kernel: NET: Registered PF_PACKET protocol family
Jan 17 12:23:35.889242 kernel: Key type dns_resolver registered
Jan 17 12:23:35.889249 kernel: IPI shorthand broadcast: enabled
Jan 17 12:23:35.889257 kernel: sched_clock: Marking stable (615002776, 105553072)->(755170294, -34614446)
Jan 17 12:23:35.889264 kernel: registered taskstats version 1
Jan 17 12:23:35.889272 kernel: Loading compiled-in X.509 certificates
Jan 17 12:23:35.889279 kernel: Loaded X.509 cert 'Kinvolk GmbH: Module signing key for 6.6.71-flatcar: 6baa290b0089ed5c4c5f7248306af816ac8c7f80'
Jan 17 12:23:35.889287 kernel: Key type .fscrypt registered
Jan 17 12:23:35.889296 kernel: Key type fscrypt-provisioning registered
Jan 17 12:23:35.889304 kernel: ima: No TPM chip found, activating TPM-bypass!
Jan 17 12:23:35.889312 kernel: ima: Allocated hash algorithm: sha1
Jan 17 12:23:35.889319 kernel: ima: No architecture policies found
Jan 17 12:23:35.889327 kernel: clk: Disabling unused clocks
Jan 17 12:23:35.889334 kernel: Freeing unused kernel image (initmem) memory: 42848K
Jan 17 12:23:35.889342 kernel: Write protecting the kernel read-only data: 36864k
Jan 17 12:23:35.889349 kernel: Freeing unused kernel image (rodata/data gap) memory: 1848K
Jan 17 12:23:35.889357 kernel: Run /init as init process
Jan 17 12:23:35.889366 kernel: with arguments:
Jan 17 12:23:35.889374 kernel: /init
Jan 17 12:23:35.889381 kernel: with environment:
Jan 17 12:23:35.889388 kernel: HOME=/
Jan 17 12:23:35.889396 kernel: TERM=linux
Jan 17 12:23:35.889403 kernel: BOOT_IMAGE=/flatcar/vmlinuz-a
Jan 17 12:23:35.889413 systemd[1]: systemd 255 running in system mode (+PAM +AUDIT +SELINUX -APPARMOR +IMA +SMACK +SECCOMP +GCRYPT -GNUTLS +OPENSSL -ACL +BLKID +CURL +ELFUTILS -FIDO2 +IDN2 -IDN +IPTC +KMOD +LIBCRYPTSETUP +LIBFDISK +PCRE2 -PWQUALITY -P11KIT -QRENCODE +TPM2 +BZIP2 +LZ4 +XZ +ZLIB +ZSTD -BPF_FRAMEWORK -XKBCOMMON +UTMP -SYSVINIT default-hierarchy=unified)
Jan 17 12:23:35.889422 systemd[1]: Detected virtualization kvm.
Jan 17 12:23:35.889433 systemd[1]: Detected architecture x86-64.
Jan 17 12:23:35.889441 systemd[1]: Running in initrd.
Jan 17 12:23:35.889449 systemd[1]: No hostname configured, using default hostname.
Jan 17 12:23:35.889456 systemd[1]: Hostname set to .
Jan 17 12:23:35.889465 systemd[1]: Initializing machine ID from VM UUID.
Jan 17 12:23:35.889473 systemd[1]: Queued start job for default target initrd.target.
Jan 17 12:23:35.889481 systemd[1]: Started clevis-luks-askpass.path - Forward Password Requests to Clevis Directory Watch.
Jan 17 12:23:35.889489 systemd[1]: Started systemd-ask-password-console.path - Dispatch Password Requests to Console Directory Watch.
Jan 17 12:23:35.889500 systemd[1]: Expecting device dev-disk-by\x2dlabel-EFI\x2dSYSTEM.device - /dev/disk/by-label/EFI-SYSTEM...
Jan 17 12:23:35.889520 systemd[1]: Expecting device dev-disk-by\x2dlabel-OEM.device - /dev/disk/by-label/OEM...
Jan 17 12:23:35.889531 systemd[1]: Expecting device dev-disk-by\x2dlabel-ROOT.device - /dev/disk/by-label/ROOT...
Jan 17 12:23:35.889540 systemd[1]: Expecting device dev-disk-by\x2dpartlabel-USR\x2dA.device - /dev/disk/by-partlabel/USR-A...
Jan 17 12:23:35.889550 systemd[1]: Expecting device dev-disk-by\x2dpartuuid-7130c94a\x2d213a\x2d4e5a\x2d8e26\x2d6cce9662f132.device - /dev/disk/by-partuuid/7130c94a-213a-4e5a-8e26-6cce9662f132...
Jan 17 12:23:35.889560 systemd[1]: Expecting device dev-mapper-usr.device - /dev/mapper/usr...
Jan 17 12:23:35.889569 systemd[1]: Reached target cryptsetup-pre.target - Local Encrypted Volumes (Pre).
Jan 17 12:23:35.889583 systemd[1]: Reached target cryptsetup.target - Local Encrypted Volumes.
Jan 17 12:23:35.889592 systemd[1]: Reached target paths.target - Path Units.
Jan 17 12:23:35.889600 systemd[1]: Reached target slices.target - Slice Units.
Jan 17 12:23:35.889608 systemd[1]: Reached target swap.target - Swaps.
Jan 17 12:23:35.889617 systemd[1]: Reached target timers.target - Timer Units.
Jan 17 12:23:35.889625 systemd[1]: Listening on iscsid.socket - Open-iSCSI iscsid Socket.
Jan 17 12:23:35.889635 systemd[1]: Listening on iscsiuio.socket - Open-iSCSI iscsiuio Socket.
Jan 17 12:23:35.889644 systemd[1]: Listening on systemd-journald-dev-log.socket - Journal Socket (/dev/log).
Jan 17 12:23:35.889652 systemd[1]: Listening on systemd-journald.socket - Journal Socket.
Jan 17 12:23:35.889661 systemd[1]: Listening on systemd-networkd.socket - Network Service Netlink Socket.
Jan 17 12:23:35.889669 systemd[1]: Listening on systemd-udevd-control.socket - udev Control Socket.
Jan 17 12:23:35.889678 systemd[1]: Listening on systemd-udevd-kernel.socket - udev Kernel Socket.
Jan 17 12:23:35.889686 systemd[1]: Reached target sockets.target - Socket Units.
Jan 17 12:23:35.889694 systemd[1]: Starting ignition-setup-pre.service - Ignition env setup...
Jan 17 12:23:35.889702 systemd[1]: Starting kmod-static-nodes.service - Create List of Static Device Nodes...
Jan 17 12:23:35.889713 systemd[1]: Finished network-cleanup.service - Network Cleanup.
Jan 17 12:23:35.889721 systemd[1]: Starting systemd-fsck-usr.service...
Jan 17 12:23:35.889729 systemd[1]: Starting systemd-journald.service - Journal Service...
Jan 17 12:23:35.889738 systemd[1]: Starting systemd-modules-load.service - Load Kernel Modules...
Jan 17 12:23:35.889746 systemd[1]: Starting systemd-vconsole-setup.service - Virtual Console Setup...
Jan 17 12:23:35.889756 systemd[1]: Finished ignition-setup-pre.service - Ignition env setup.
Jan 17 12:23:35.889765 systemd[1]: Finished kmod-static-nodes.service - Create List of Static Device Nodes.
Jan 17 12:23:35.889774 systemd[1]: Finished systemd-fsck-usr.service.
Jan 17 12:23:35.889801 systemd-journald[193]: Collecting audit messages is disabled.
Jan 17 12:23:35.889822 systemd[1]: Starting systemd-tmpfiles-setup-dev-early.service - Create Static Device Nodes in /dev gracefully...
Jan 17 12:23:35.889833 systemd-journald[193]: Journal started
Jan 17 12:23:35.889852 systemd-journald[193]: Runtime Journal (/run/log/journal/98dd0b4626b840549741eafdf4bca87d) is 6.0M, max 48.4M, 42.3M free.
Jan 17 12:23:35.890046 systemd[1]: Started systemd-journald.service - Journal Service.
Jan 17 12:23:35.893964 systemd-modules-load[194]: Inserted module 'overlay'
Jan 17 12:23:35.932059 kernel: bridge: filtering via arp/ip/ip6tables is no longer available by default. Update your scripts to load br_netfilter if you need this.
Jan 17 12:23:35.933161 systemd[1]: Finished systemd-vconsole-setup.service - Virtual Console Setup.
Jan 17 12:23:35.935851 systemd[1]: Starting dracut-cmdline-ask.service - dracut ask for additional cmdline parameters...
Jan 17 12:23:35.937544 kernel: Bridge firewalling registered
Jan 17 12:23:35.937534 systemd-modules-load[194]: Inserted module 'br_netfilter'
Jan 17 12:23:35.940255 systemd[1]: Starting systemd-tmpfiles-setup.service - Create System Files and Directories...
Jan 17 12:23:35.940922 systemd[1]: Finished systemd-modules-load.service - Load Kernel Modules.
Jan 17 12:23:35.948158 systemd[1]: Starting systemd-sysctl.service - Apply Kernel Variables...
Jan 17 12:23:35.948748 systemd[1]: Finished systemd-tmpfiles-setup-dev-early.service - Create Static Device Nodes in /dev gracefully.
Jan 17 12:23:35.956202 systemd[1]: Starting systemd-tmpfiles-setup-dev.service - Create Static Device Nodes in /dev...
Jan 17 12:23:35.961297 systemd[1]: Finished systemd-sysctl.service - Apply Kernel Variables.
Jan 17 12:23:35.962849 systemd[1]: Finished dracut-cmdline-ask.service - dracut ask for additional cmdline parameters.
Jan 17 12:23:35.966426 systemd[1]: Finished systemd-tmpfiles-setup.service - Create System Files and Directories.
Jan 17 12:23:35.966889 systemd[1]: Finished systemd-tmpfiles-setup-dev.service - Create Static Device Nodes in /dev.
Jan 17 12:23:35.971292 systemd[1]: Starting dracut-cmdline.service - dracut cmdline hook...
Jan 17 12:23:35.973162 systemd[1]: Starting systemd-resolved.service - Network Name Resolution...
Jan 17 12:23:35.987178 dracut-cmdline[227]: dracut-dracut-053
Jan 17 12:23:35.990827 dracut-cmdline[227]: Using kernel command line parameters: rd.driver.pre=btrfs rootflags=rw mount.usrflags=ro BOOT_IMAGE=/flatcar/vmlinuz-a mount.usr=/dev/mapper/usr verity.usr=PARTUUID=7130c94a-213a-4e5a-8e26-6cce9662f132 rootflags=rw mount.usrflags=ro consoleblank=0 root=LABEL=ROOT console=ttyS0,115200 flatcar.first_boot=detected verity.usrhash=bf1e0d81a0170850ab02d370c1a7c7a3f5983c980b3730f748240a3bda2dbb2e
Jan 17 12:23:36.022108 systemd-resolved[228]: Positive Trust Anchors:
Jan 17 12:23:36.022134 systemd-resolved[228]: . IN DS 20326 8 2 e06d44b80b8f1d39a95c0b0d7c65d08458e880409bbc683457104237c7f8ec8d
Jan 17 12:23:36.022185 systemd-resolved[228]: Negative trust anchors: home.arpa 10.in-addr.arpa 16.172.in-addr.arpa 17.172.in-addr.arpa 18.172.in-addr.arpa 19.172.in-addr.arpa 20.172.in-addr.arpa 21.172.in-addr.arpa 22.172.in-addr.arpa 23.172.in-addr.arpa 24.172.in-addr.arpa 25.172.in-addr.arpa 26.172.in-addr.arpa 27.172.in-addr.arpa 28.172.in-addr.arpa 29.172.in-addr.arpa 30.172.in-addr.arpa 31.172.in-addr.arpa 170.0.0.192.in-addr.arpa 171.0.0.192.in-addr.arpa 168.192.in-addr.arpa d.f.ip6.arpa ipv4only.arpa resolver.arpa corp home internal intranet lan local private test
Jan 17 12:23:36.025820 systemd-resolved[228]: Defaulting to hostname 'linux'.
Jan 17 12:23:36.027186 systemd[1]: Started systemd-resolved.service - Network Name Resolution.
Jan 17 12:23:36.031460 systemd[1]: Reached target nss-lookup.target - Host and Network Name Lookups.
Jan 17 12:23:36.091072 kernel: SCSI subsystem initialized
Jan 17 12:23:36.101055 kernel: Loading iSCSI transport class v2.0-870.
Jan 17 12:23:36.112064 kernel: iscsi: registered transport (tcp)
Jan 17 12:23:36.133060 kernel: iscsi: registered transport (qla4xxx)
Jan 17 12:23:36.133090 kernel: QLogic iSCSI HBA Driver
Jan 17 12:23:36.180264 systemd[1]: Finished dracut-cmdline.service - dracut cmdline hook.
Jan 17 12:23:36.196166 systemd[1]: Starting dracut-pre-udev.service - dracut pre-udev hook...
Jan 17 12:23:36.220682 kernel: device-mapper: core: CONFIG_IMA_DISABLE_HTABLE is disabled. Duplicate IMA measurements will not be recorded in the IMA log.
Jan 17 12:23:36.220745 kernel: device-mapper: uevent: version 1.0.3
Jan 17 12:23:36.220758 kernel: device-mapper: ioctl: 4.48.0-ioctl (2023-03-01) initialised: dm-devel@redhat.com
Jan 17 12:23:36.262070 kernel: raid6: avx2x4 gen() 30395 MB/s
Jan 17 12:23:36.279058 kernel: raid6: avx2x2 gen() 30975 MB/s
Jan 17 12:23:36.296120 kernel: raid6: avx2x1 gen() 26000 MB/s
Jan 17 12:23:36.296140 kernel: raid6: using algorithm avx2x2 gen() 30975 MB/s
Jan 17 12:23:36.314137 kernel: raid6: .... xor() 19884 MB/s, rmw enabled
Jan 17 12:23:36.314168 kernel: raid6: using avx2x2 recovery algorithm
Jan 17 12:23:36.335059 kernel: xor: automatically using best checksumming function avx
Jan 17 12:23:36.490082 kernel: Btrfs loaded, zoned=no, fsverity=no
Jan 17 12:23:36.503913 systemd[1]: Finished dracut-pre-udev.service - dracut pre-udev hook.
Jan 17 12:23:36.515266 systemd[1]: Starting systemd-udevd.service - Rule-based Manager for Device Events and Files...
Jan 17 12:23:36.527314 systemd-udevd[411]: Using default interface naming scheme 'v255'.
Jan 17 12:23:36.531944 systemd[1]: Started systemd-udevd.service - Rule-based Manager for Device Events and Files.
Jan 17 12:23:36.543184 systemd[1]: Starting dracut-pre-trigger.service - dracut pre-trigger hook...
Jan 17 12:23:36.555377 dracut-pre-trigger[420]: rd.md=0: removing MD RAID activation
Jan 17 12:23:36.591203 systemd[1]: Finished dracut-pre-trigger.service - dracut pre-trigger hook.
Jan 17 12:23:36.605220 systemd[1]: Starting systemd-udev-trigger.service - Coldplug All udev Devices...
Jan 17 12:23:36.668540 systemd[1]: Finished systemd-udev-trigger.service - Coldplug All udev Devices.
Jan 17 12:23:36.678421 systemd[1]: Starting dracut-initqueue.service - dracut initqueue hook...
Jan 17 12:23:36.691981 systemd[1]: Finished dracut-initqueue.service - dracut initqueue hook.
Jan 17 12:23:36.695056 systemd[1]: Reached target remote-fs-pre.target - Preparation for Remote File Systems.
Jan 17 12:23:36.697676 systemd[1]: Reached target remote-cryptsetup.target - Remote Encrypted Volumes.
Jan 17 12:23:36.700125 systemd[1]: Reached target remote-fs.target - Remote File Systems.
Jan 17 12:23:36.705055 kernel: cryptd: max_cpu_qlen set to 1000
Jan 17 12:23:36.707636 systemd[1]: Starting dracut-pre-mount.service - dracut pre-mount hook...
Jan 17 12:23:36.713901 kernel: virtio_blk virtio1: 4/0/0 default/read/poll queues
Jan 17 12:23:36.735115 kernel: virtio_blk virtio1: [vda] 19775488 512-byte logical blocks (10.1 GB/9.43 GiB)
Jan 17 12:23:36.735320 kernel: AVX2 version of gcm_enc/dec engaged.
Jan 17 12:23:36.735337 kernel: AES CTR mode by8 optimization enabled
Jan 17 12:23:36.735352 kernel: GPT:Primary header thinks Alt. header is not at the end of the disk.
Jan 17 12:23:36.735366 kernel: GPT:9289727 != 19775487
Jan 17 12:23:36.735380 kernel: GPT:Alternate GPT header not at the end of the disk.
Jan 17 12:23:36.735394 kernel: GPT:9289727 != 19775487
Jan 17 12:23:36.735407 kernel: GPT: Use GNU Parted to correct GPT errors.
Jan 17 12:23:36.735421 kernel: vda: vda1 vda2 vda3 vda4 vda6 vda7 vda9
Jan 17 12:23:36.719712 systemd[1]: Finished dracut-pre-mount.service - dracut pre-mount hook.
Jan 17 12:23:36.737011 kernel: libata version 3.00 loaded.
Jan 17 12:23:36.741857 systemd[1]: dracut-cmdline-ask.service: Deactivated successfully.
Jan 17 12:23:36.750519 kernel: ahci 0000:00:1f.2: version 3.0
Jan 17 12:23:36.780962 kernel: ACPI: \_SB_.GSIA: Enabled at IRQ 16
Jan 17 12:23:36.780985 kernel: ahci 0000:00:1f.2: AHCI 0001.0000 32 slots 6 ports 1.5 Gbps 0x3f impl SATA mode
Jan 17 12:23:36.781161 kernel: ahci 0000:00:1f.2: flags: 64bit ncq only
Jan 17 12:23:36.781298 kernel: scsi host0: ahci
Jan 17 12:23:36.781448 kernel: scsi host1: ahci
Jan 17 12:23:36.781601 kernel: scsi host2: ahci
Jan 17 12:23:36.781749 kernel: BTRFS: device fsid e459b8ee-f1f7-4c3d-a087-3f1955f52c85 devid 1 transid 36 /dev/vda3 scanned by (udev-worker) (472)
Jan 17 12:23:36.781765 kernel: BTRFS: device label OEM devid 1 transid 12 /dev/vda6 scanned by (udev-worker) (464)
Jan 17 12:23:36.781778 kernel: scsi host3: ahci
Jan 17 12:23:36.781922 kernel: scsi host4: ahci
Jan 17 12:23:36.782093 kernel: scsi host5: ahci
Jan 17 12:23:36.782248 kernel: ata1: SATA max UDMA/133 abar m4096@0xfebd4000 port 0xfebd4100 irq 34
Jan 17 12:23:36.782259 kernel: ata2: SATA max UDMA/133 abar m4096@0xfebd4000 port 0xfebd4180 irq 34
Jan 17 12:23:36.782270 kernel: ata3: SATA max UDMA/133 abar m4096@0xfebd4000 port 0xfebd4200 irq 34
Jan 17 12:23:36.782284 kernel: ata4: SATA max UDMA/133 abar m4096@0xfebd4000 port 0xfebd4280 irq 34
Jan 17 12:23:36.782295 kernel: ata5: SATA max UDMA/133 abar m4096@0xfebd4000 port 0xfebd4300 irq 34
Jan 17 12:23:36.782305 kernel: ata6: SATA max UDMA/133 abar m4096@0xfebd4000 port 0xfebd4380 irq 34
Jan 17 12:23:36.741942 systemd[1]: Stopped dracut-cmdline-ask.service - dracut ask for additional cmdline parameters.
Jan 17 12:23:36.745653 systemd[1]: Stopping dracut-cmdline-ask.service - dracut ask for additional cmdline parameters...
Jan 17 12:23:36.750482 systemd[1]: systemd-vconsole-setup.service: Deactivated successfully.
Jan 17 12:23:36.750557 systemd[1]: Stopped systemd-vconsole-setup.service - Virtual Console Setup.
Jan 17 12:23:36.752867 systemd[1]: Stopping systemd-vconsole-setup.service - Virtual Console Setup...
Jan 17 12:23:36.762277 systemd[1]: Starting systemd-vconsole-setup.service - Virtual Console Setup...
Jan 17 12:23:36.788254 systemd[1]: Found device dev-disk-by\x2dlabel-ROOT.device - /dev/disk/by-label/ROOT.
Jan 17 12:23:36.821827 systemd[1]: Finished systemd-vconsole-setup.service - Virtual Console Setup.
Jan 17 12:23:36.835021 systemd[1]: Found device dev-disk-by\x2dlabel-EFI\x2dSYSTEM.device - /dev/disk/by-label/EFI-SYSTEM.
Jan 17 12:23:36.846147 systemd[1]: Found device dev-disk-by\x2dlabel-OEM.device - /dev/disk/by-label/OEM.
Jan 17 12:23:36.850848 systemd[1]: Found device dev-disk-by\x2dpartuuid-7130c94a\x2d213a\x2d4e5a\x2d8e26\x2d6cce9662f132.device - /dev/disk/by-partuuid/7130c94a-213a-4e5a-8e26-6cce9662f132.
Jan 17 12:23:36.851309 systemd[1]: Found device dev-disk-by\x2dpartlabel-USR\x2dA.device - /dev/disk/by-partlabel/USR-A.
Jan 17 12:23:36.863246 systemd[1]: Starting disk-uuid.service - Generate new UUID for disk GPT if necessary...
Jan 17 12:23:36.865842 systemd[1]: Starting dracut-cmdline-ask.service - dracut ask for additional cmdline parameters...
Jan 17 12:23:36.874300 disk-uuid[565]: Primary Header is updated.
Jan 17 12:23:36.874300 disk-uuid[565]: Secondary Entries is updated.
Jan 17 12:23:36.874300 disk-uuid[565]: Secondary Header is updated.
Jan 17 12:23:36.877652 kernel: vda: vda1 vda2 vda3 vda4 vda6 vda7 vda9
Jan 17 12:23:36.882079 kernel: vda: vda1 vda2 vda3 vda4 vda6 vda7 vda9
Jan 17 12:23:36.888076 kernel: vda: vda1 vda2 vda3 vda4 vda6 vda7 vda9
Jan 17 12:23:36.893678 systemd[1]: Finished dracut-cmdline-ask.service - dracut ask for additional cmdline parameters.
Jan 17 12:23:37.091059 kernel: ata2: SATA link down (SStatus 0 SControl 300)
Jan 17 12:23:37.091112 kernel: ata5: SATA link down (SStatus 0 SControl 300)
Jan 17 12:23:37.092055 kernel: ata3: SATA link up 1.5 Gbps (SStatus 113 SControl 300)
Jan 17 12:23:37.092070 kernel: ata1: SATA link down (SStatus 0 SControl 300)
Jan 17 12:23:37.093055 kernel: ata6: SATA link down (SStatus 0 SControl 300)
Jan 17 12:23:37.094079 kernel: ata4: SATA link down (SStatus 0 SControl 300)
Jan 17 12:23:37.095068 kernel: ata3.00: ATAPI: QEMU DVD-ROM, 2.5+, max UDMA/100
Jan 17 12:23:37.095096 kernel: ata3.00: applying bridge limits
Jan 17 12:23:37.096145 kernel: ata3.00: configured for UDMA/100
Jan 17 12:23:37.097062 kernel: scsi 2:0:0:0: CD-ROM QEMU QEMU DVD-ROM 2.5+ PQ: 0 ANSI: 5
Jan 17 12:23:37.135617 kernel: sr 2:0:0:0: [sr0] scsi3-mmc drive: 4x/4x cd/rw xa/form2 tray
Jan 17 12:23:37.147692 kernel: cdrom: Uniform CD-ROM driver Revision: 3.20
Jan 17 12:23:37.147706 kernel: sr 2:0:0:0: Attached scsi CD-ROM sr0
Jan 17 12:23:37.904607 disk-uuid[567]: The operation has completed successfully.
Jan 17 12:23:37.906148 kernel: vda: vda1 vda2 vda3 vda4 vda6 vda7 vda9
Jan 17 12:23:37.933139 systemd[1]: disk-uuid.service: Deactivated successfully.
Jan 17 12:23:37.933281 systemd[1]: Finished disk-uuid.service - Generate new UUID for disk GPT if necessary.
Jan 17 12:23:37.956360 systemd[1]: Starting verity-setup.service - Verity Setup for /dev/mapper/usr...
Jan 17 12:23:37.959108 sh[592]: Success
Jan 17 12:23:37.972063 kernel: device-mapper: verity: sha256 using implementation "sha256-ni"
Jan 17 12:23:38.003369 systemd[1]: Found device dev-mapper-usr.device - /dev/mapper/usr.
Jan 17 12:23:38.017459 systemd[1]: Mounting sysusr-usr.mount - /sysusr/usr...
Jan 17 12:23:38.021795 systemd[1]: Finished verity-setup.service - Verity Setup for /dev/mapper/usr.
Jan 17 12:23:38.032586 kernel: BTRFS info (device dm-0): first mount of filesystem e459b8ee-f1f7-4c3d-a087-3f1955f52c85
Jan 17 12:23:38.032615 kernel: BTRFS info (device dm-0): using crc32c (crc32c-intel) checksum algorithm
Jan 17 12:23:38.032633 kernel: BTRFS warning (device dm-0): 'nologreplay' is deprecated, use 'rescue=nologreplay' instead
Jan 17 12:23:38.033625 kernel: BTRFS info (device dm-0): disabling log replay at mount time
Jan 17 12:23:38.034466 kernel: BTRFS info (device dm-0): using free space tree
Jan 17 12:23:38.038932 systemd[1]: Mounted sysusr-usr.mount - /sysusr/usr.
Jan 17 12:23:38.041260 systemd[1]: afterburn-network-kargs.service - Afterburn Initrd Setup Network Kernel Arguments was skipped because no trigger condition checks were met.
Jan 17 12:23:38.051171 systemd[1]: Starting ignition-setup.service - Ignition (setup)...
Jan 17 12:23:38.053682 systemd[1]: Starting parse-ip-for-networkd.service - Write systemd-networkd units from cmdline...
Jan 17 12:23:38.061344 kernel: BTRFS info (device vda6): first mount of filesystem a70a40d6-5ab2-4665-81b1-b8e9f58c5ff8
Jan 17 12:23:38.061379 kernel: BTRFS info (device vda6): using crc32c (crc32c-intel) checksum algorithm
Jan 17 12:23:38.061394 kernel: BTRFS info (device vda6): using free space tree
Jan 17 12:23:38.065065 kernel: BTRFS info (device vda6): auto enabling async discard
Jan 17 12:23:38.073171 systemd[1]: mnt-oem.mount: Deactivated successfully.
Jan 17 12:23:38.075183 kernel: BTRFS info (device vda6): last unmount of filesystem a70a40d6-5ab2-4665-81b1-b8e9f58c5ff8
Jan 17 12:23:38.085307 systemd[1]: Finished ignition-setup.service - Ignition (setup).
Jan 17 12:23:38.095176 systemd[1]: Starting ignition-fetch-offline.service - Ignition (fetch-offline)...
Jan 17 12:23:38.151402 ignition[690]: Ignition 2.19.0
Jan 17 12:23:38.151414 ignition[690]: Stage: fetch-offline
Jan 17 12:23:38.151466 ignition[690]: no configs at "/usr/lib/ignition/base.d"
Jan 17 12:23:38.151480 ignition[690]: no config dir at "/usr/lib/ignition/base.platform.d/qemu"
Jan 17 12:23:38.151618 ignition[690]: parsed url from cmdline: ""
Jan 17 12:23:38.151624 ignition[690]: no config URL provided
Jan 17 12:23:38.151631 ignition[690]: reading system config file "/usr/lib/ignition/user.ign"
Jan 17 12:23:38.151643 ignition[690]: no config at "/usr/lib/ignition/user.ign"
Jan 17 12:23:38.151672 ignition[690]: op(1): [started] loading QEMU firmware config module
Jan 17 12:23:38.151678 ignition[690]: op(1): executing: "modprobe" "qemu_fw_cfg"
Jan 17 12:23:38.163299 ignition[690]: op(1): [finished] loading QEMU firmware config module
Jan 17 12:23:38.163325 ignition[690]: QEMU firmware config was not found. Ignoring...
Jan 17 12:23:38.166212 ignition[690]: parsing config with SHA512: 7ca005dd022e0e0e42db3c0b34e323a236992a082e93c33f5d0d24ca0ade44a9d4bf946660100a793288c50f157472a9372ea150ecf085385dfde0503770cc58
Jan 17 12:23:38.169639 unknown[690]: fetched base config from "system"
Jan 17 12:23:38.169653 unknown[690]: fetched user config from "qemu"
Jan 17 12:23:38.170048 ignition[690]: fetch-offline: fetch-offline passed
Jan 17 12:23:38.170139 ignition[690]: Ignition finished successfully
Jan 17 12:23:38.173858 systemd[1]: Finished parse-ip-for-networkd.service - Write systemd-networkd units from cmdline.
Jan 17 12:23:38.176684 systemd[1]: Finished ignition-fetch-offline.service - Ignition (fetch-offline).
Jan 17 12:23:38.194173 systemd[1]: Starting systemd-networkd.service - Network Configuration...
Jan 17 12:23:38.216423 systemd-networkd[782]: lo: Link UP
Jan 17 12:23:38.216434 systemd-networkd[782]: lo: Gained carrier
Jan 17 12:23:38.218019 systemd-networkd[782]: Enumeration completed
Jan 17 12:23:38.218140 systemd[1]: Started systemd-networkd.service - Network Configuration.
Jan 17 12:23:38.218454 systemd-networkd[782]: eth0: found matching network '/usr/lib/systemd/network/zz-default.network', based on potentially unpredictable interface name.
Jan 17 12:23:38.218458 systemd-networkd[782]: eth0: Configuring with /usr/lib/systemd/network/zz-default.network.
Jan 17 12:23:38.218779 systemd[1]: Reached target network.target - Network.
Jan 17 12:23:38.221595 systemd-networkd[782]: eth0: Link UP
Jan 17 12:23:38.221600 systemd-networkd[782]: eth0: Gained carrier
Jan 17 12:23:38.221607 systemd-networkd[782]: eth0: found matching network '/usr/lib/systemd/network/zz-default.network', based on potentially unpredictable interface name.
Jan 17 12:23:38.221647 systemd[1]: ignition-fetch.service - Ignition (fetch) was skipped because of an unmet condition check (ConditionPathExists=!/run/ignition.json).
Jan 17 12:23:38.227181 systemd[1]: Starting ignition-kargs.service - Ignition (kargs)...
Jan 17 12:23:38.234095 systemd-networkd[782]: eth0: DHCPv4 address 10.0.0.160/16, gateway 10.0.0.1 acquired from 10.0.0.1
Jan 17 12:23:38.241249 ignition[784]: Ignition 2.19.0
Jan 17 12:23:38.241260 ignition[784]: Stage: kargs
Jan 17 12:23:38.241425 ignition[784]: no configs at "/usr/lib/ignition/base.d"
Jan 17 12:23:38.241436 ignition[784]: no config dir at "/usr/lib/ignition/base.platform.d/qemu"
Jan 17 12:23:38.242019 ignition[784]: kargs: kargs passed
Jan 17 12:23:38.242076 ignition[784]: Ignition finished successfully
Jan 17 12:23:38.245433 systemd[1]: Finished ignition-kargs.service - Ignition (kargs).
Jan 17 12:23:38.254210 systemd[1]: Starting ignition-disks.service - Ignition (disks)...
Jan 17 12:23:38.267012 ignition[793]: Ignition 2.19.0
Jan 17 12:23:38.267024 ignition[793]: Stage: disks
Jan 17 12:23:38.267210 ignition[793]: no configs at "/usr/lib/ignition/base.d"
Jan 17 12:23:38.267222 ignition[793]: no config dir at "/usr/lib/ignition/base.platform.d/qemu"
Jan 17 12:23:38.270132 systemd[1]: Finished ignition-disks.service - Ignition (disks).
Jan 17 12:23:38.268014 ignition[793]: disks: disks passed
Jan 17 12:23:38.271882 systemd[1]: Reached target initrd-root-device.target - Initrd Root Device.
Jan 17 12:23:38.268078 ignition[793]: Ignition finished successfully
Jan 17 12:23:38.273813 systemd[1]: Reached target local-fs-pre.target - Preparation for Local File Systems.
Jan 17 12:23:38.275720 systemd[1]: Reached target local-fs.target - Local File Systems.
Jan 17 12:23:38.277799 systemd[1]: Reached target sysinit.target - System Initialization.
Jan 17 12:23:38.278385 systemd[1]: Reached target basic.target - Basic System.
Jan 17 12:23:38.285163 systemd[1]: Starting systemd-fsck-root.service - File System Check on /dev/disk/by-label/ROOT...
Jan 17 12:23:38.296028 systemd-fsck[803]: ROOT: clean, 14/553520 files, 52654/553472 blocks
Jan 17 12:23:38.302333 systemd[1]: Finished systemd-fsck-root.service - File System Check on /dev/disk/by-label/ROOT.
Jan 17 12:23:38.309135 systemd[1]: Mounting sysroot.mount - /sysroot...
Jan 17 12:23:38.397070 kernel: EXT4-fs (vda9): mounted filesystem 0ba4fe0e-76d7-406f-b570-4642d86198f6 r/w with ordered data mode. Quota mode: none.
Jan 17 12:23:38.397561 systemd[1]: Mounted sysroot.mount - /sysroot.
Jan 17 12:23:38.398425 systemd[1]: Reached target initrd-root-fs.target - Initrd Root File System.
Jan 17 12:23:38.411106 systemd[1]: Mounting sysroot-oem.mount - /sysroot/oem...
Jan 17 12:23:38.413356 systemd[1]: Mounting sysroot-usr.mount - /sysroot/usr...
Jan 17 12:23:38.414322 systemd[1]: flatcar-metadata-hostname.service - Flatcar Metadata Hostname Agent was skipped because no trigger condition checks were met.
Jan 17 12:23:38.414361 systemd[1]: ignition-remount-sysroot.service - Remount /sysroot read-write for Ignition was skipped because of an unmet condition check (ConditionPathIsReadWrite=!/sysroot).
Jan 17 12:23:38.422806 kernel: BTRFS: device label OEM devid 1 transid 13 /dev/vda6 scanned by mount (811)
Jan 17 12:23:38.414381 systemd[1]: Reached target ignition-diskful.target - Ignition Boot Disk Setup.
Jan 17 12:23:38.426732 kernel: BTRFS info (device vda6): first mount of filesystem a70a40d6-5ab2-4665-81b1-b8e9f58c5ff8
Jan 17 12:23:38.426751 kernel: BTRFS info (device vda6): using crc32c (crc32c-intel) checksum algorithm
Jan 17 12:23:38.426764 kernel: BTRFS info (device vda6): using free space tree
Jan 17 12:23:38.428057 kernel: BTRFS info (device vda6): auto enabling async discard
Jan 17 12:23:38.429895 systemd[1]: Mounted sysroot-oem.mount - /sysroot/oem.
Jan 17 12:23:38.461535 systemd[1]: Mounted sysroot-usr.mount - /sysroot/usr.
Jan 17 12:23:38.475195 systemd[1]: Starting initrd-setup-root.service - Root filesystem setup...
Jan 17 12:23:38.507929 initrd-setup-root[835]: cut: /sysroot/etc/passwd: No such file or directory
Jan 17 12:23:38.513647 initrd-setup-root[842]: cut: /sysroot/etc/group: No such file or directory
Jan 17 12:23:38.519116 initrd-setup-root[849]: cut: /sysroot/etc/shadow: No such file or directory
Jan 17 12:23:38.524129 initrd-setup-root[856]: cut: /sysroot/etc/gshadow: No such file or directory
Jan 17 12:23:38.615477 systemd[1]: Finished initrd-setup-root.service - Root filesystem setup.
Jan 17 12:23:38.623226 systemd[1]: Starting ignition-mount.service - Ignition (mount)...
Jan 17 12:23:38.626237 systemd[1]: Starting sysroot-boot.service - /sysroot/boot...
Jan 17 12:23:38.631055 kernel: BTRFS info (device vda6): last unmount of filesystem a70a40d6-5ab2-4665-81b1-b8e9f58c5ff8
Jan 17 12:23:38.650263 systemd[1]: Finished sysroot-boot.service - /sysroot/boot.
Jan 17 12:23:38.650699 ignition[924]: INFO : Ignition 2.19.0
Jan 17 12:23:38.650699 ignition[924]: INFO : Stage: mount
Jan 17 12:23:38.651405 ignition[924]: INFO : no configs at "/usr/lib/ignition/base.d"
Jan 17 12:23:38.651405 ignition[924]: INFO : no config dir at "/usr/lib/ignition/base.platform.d/qemu"
Jan 17 12:23:38.651755 ignition[924]: INFO : mount: mount passed
Jan 17 12:23:38.651755 ignition[924]: INFO : Ignition finished successfully
Jan 17 12:23:38.656969 systemd[1]: Finished ignition-mount.service - Ignition (mount).
Jan 17 12:23:38.672117 systemd[1]: Starting ignition-files.service - Ignition (files)...
Jan 17 12:23:39.032417 systemd[1]: sysroot-oem.mount: Deactivated successfully.
Jan 17 12:23:39.049243 systemd[1]: Mounting sysroot-oem.mount - /sysroot/oem...
Jan 17 12:23:39.057062 kernel: BTRFS: device label OEM devid 1 transid 14 /dev/vda6 scanned by mount (939)
Jan 17 12:23:39.059141 kernel: BTRFS info (device vda6): first mount of filesystem a70a40d6-5ab2-4665-81b1-b8e9f58c5ff8
Jan 17 12:23:39.059162 kernel: BTRFS info (device vda6): using crc32c (crc32c-intel) checksum algorithm
Jan 17 12:23:39.059176 kernel: BTRFS info (device vda6): using free space tree
Jan 17 12:23:39.063061 kernel: BTRFS info (device vda6): auto enabling async discard
Jan 17 12:23:39.063993 systemd[1]: Mounted sysroot-oem.mount - /sysroot/oem.
Jan 17 12:23:39.088722 ignition[957]: INFO : Ignition 2.19.0 Jan 17 12:23:39.088722 ignition[957]: INFO : Stage: files Jan 17 12:23:39.090809 ignition[957]: INFO : no configs at "/usr/lib/ignition/base.d" Jan 17 12:23:39.090809 ignition[957]: INFO : no config dir at "/usr/lib/ignition/base.platform.d/qemu" Jan 17 12:23:39.093189 ignition[957]: DEBUG : files: compiled without relabeling support, skipping Jan 17 12:23:39.094970 ignition[957]: INFO : files: ensureUsers: op(1): [started] creating or modifying user "core" Jan 17 12:23:39.094970 ignition[957]: DEBUG : files: ensureUsers: op(1): executing: "usermod" "--root" "/sysroot" "core" Jan 17 12:23:39.098205 ignition[957]: INFO : files: ensureUsers: op(1): [finished] creating or modifying user "core" Jan 17 12:23:39.099819 ignition[957]: INFO : files: ensureUsers: op(2): [started] adding ssh keys to user "core" Jan 17 12:23:39.101195 ignition[957]: INFO : files: ensureUsers: op(2): [finished] adding ssh keys to user "core" Jan 17 12:23:39.100283 unknown[957]: wrote ssh authorized keys file for user: core Jan 17 12:23:39.103997 ignition[957]: INFO : files: createFilesystemsFiles: createFiles: op(3): [started] writing file "/sysroot/home/core/install.sh" Jan 17 12:23:39.103997 ignition[957]: INFO : files: createFilesystemsFiles: createFiles: op(3): [finished] writing file "/sysroot/home/core/install.sh" Jan 17 12:23:39.103997 ignition[957]: INFO : files: createFilesystemsFiles: createFiles: op(4): [started] writing file "/sysroot/etc/flatcar/update.conf" Jan 17 12:23:39.103997 ignition[957]: INFO : files: createFilesystemsFiles: createFiles: op(4): [finished] writing file "/sysroot/etc/flatcar/update.conf" Jan 17 12:23:39.103997 ignition[957]: INFO : files: createFilesystemsFiles: createFiles: op(5): [started] writing link "/sysroot/etc/extensions/kubernetes.raw" -> "/opt/extensions/kubernetes/kubernetes-v1.29.2-x86-64.raw" Jan 17 12:23:39.103997 ignition[957]: INFO : files: createFilesystemsFiles: createFiles: op(5): [finished] writing link "/sysroot/etc/extensions/kubernetes.raw" -> "/opt/extensions/kubernetes/kubernetes-v1.29.2-x86-64.raw" Jan 17 12:23:39.103997 ignition[957]: INFO : files: createFilesystemsFiles: createFiles: op(6): [started] writing file "/sysroot/opt/extensions/kubernetes/kubernetes-v1.29.2-x86-64.raw" Jan 17 12:23:39.103997 ignition[957]: INFO : files: createFilesystemsFiles: createFiles: op(6): GET https://github.com/flatcar/sysext-bakery/releases/download/latest/kubernetes-v1.29.2-x86-64.raw: attempt #1 Jan 17 12:23:39.465852 ignition[957]: INFO : files: createFilesystemsFiles: createFiles: op(6): GET result: OK Jan 17 12:23:39.735868 ignition[957]: INFO : files: createFilesystemsFiles: createFiles: op(6): [finished] writing file "/sysroot/opt/extensions/kubernetes/kubernetes-v1.29.2-x86-64.raw" Jan 17 12:23:39.735868 ignition[957]: INFO : files: op(7): [started] processing unit "coreos-metadata.service" Jan 17 12:23:39.739623 ignition[957]: INFO : files: op(7): op(8): [started] writing unit "coreos-metadata.service" at "/sysroot/etc/systemd/system/coreos-metadata.service" Jan 17 12:23:39.739623 ignition[957]: INFO : files: op(7): op(8): [finished] writing unit "coreos-metadata.service" at "/sysroot/etc/systemd/system/coreos-metadata.service" Jan 17 12:23:39.739623 ignition[957]: INFO : files: op(7): [finished] processing unit "coreos-metadata.service" Jan 17 12:23:39.739623 ignition[957]: INFO : files: op(9): [started] setting preset to disabled for "coreos-metadata.service" Jan 17 12:23:39.755845 ignition[957]: 
INFO : files: op(9): op(a): [started] removing enablement symlink(s) for "coreos-metadata.service" Jan 17 12:23:39.760680 ignition[957]: INFO : files: op(9): op(a): [finished] removing enablement symlink(s) for "coreos-metadata.service" Jan 17 12:23:39.762390 ignition[957]: INFO : files: op(9): [finished] setting preset to disabled for "coreos-metadata.service" Jan 17 12:23:39.763955 ignition[957]: INFO : files: createResultFile: createFiles: op(b): [started] writing file "/sysroot/etc/.ignition-result.json" Jan 17 12:23:39.765754 ignition[957]: INFO : files: createResultFile: createFiles: op(b): [finished] writing file "/sysroot/etc/.ignition-result.json" Jan 17 12:23:39.767435 ignition[957]: INFO : files: files passed Jan 17 12:23:39.768219 ignition[957]: INFO : Ignition finished successfully Jan 17 12:23:39.771510 systemd[1]: Finished ignition-files.service - Ignition (files). Jan 17 12:23:39.792259 systemd[1]: Starting ignition-quench.service - Ignition (record completion)... Jan 17 12:23:39.793445 systemd[1]: Starting initrd-setup-root-after-ignition.service - Root filesystem completion... Jan 17 12:23:39.800850 systemd[1]: ignition-quench.service: Deactivated successfully. Jan 17 12:23:39.801004 systemd[1]: Finished ignition-quench.service - Ignition (record completion). Jan 17 12:23:39.806385 initrd-setup-root-after-ignition[984]: grep: /sysroot/oem/oem-release: No such file or directory Jan 17 12:23:39.810451 initrd-setup-root-after-ignition[986]: grep: /sysroot/etc/flatcar/enabled-sysext.conf: No such file or directory Jan 17 12:23:39.810451 initrd-setup-root-after-ignition[986]: grep: /sysroot/usr/share/flatcar/enabled-sysext.conf: No such file or directory Jan 17 12:23:39.814095 initrd-setup-root-after-ignition[990]: grep: /sysroot/etc/flatcar/enabled-sysext.conf: No such file or directory Jan 17 12:23:39.817204 systemd[1]: Finished initrd-setup-root-after-ignition.service - Root filesystem completion. Jan 17 12:23:39.817861 systemd[1]: Reached target ignition-complete.target - Ignition Complete. Jan 17 12:23:39.835209 systemd[1]: Starting initrd-parse-etc.service - Mountpoints Configured in the Real Root... Jan 17 12:23:39.858462 systemd[1]: initrd-parse-etc.service: Deactivated successfully. Jan 17 12:23:39.858620 systemd[1]: Finished initrd-parse-etc.service - Mountpoints Configured in the Real Root. Jan 17 12:23:39.859190 systemd[1]: Reached target initrd-fs.target - Initrd File Systems. Jan 17 12:23:39.859499 systemd[1]: Reached target initrd.target - Initrd Default Target. Jan 17 12:23:39.859878 systemd[1]: dracut-mount.service - dracut mount hook was skipped because no trigger condition checks were met. Jan 17 12:23:39.867003 systemd[1]: Starting dracut-pre-pivot.service - dracut pre-pivot and cleanup hook... Jan 17 12:23:39.886419 systemd[1]: Finished dracut-pre-pivot.service - dracut pre-pivot and cleanup hook. Jan 17 12:23:39.899203 systemd[1]: Starting initrd-cleanup.service - Cleaning Up and Shutting Down Daemons... Jan 17 12:23:39.909158 systemd[1]: Stopped target nss-lookup.target - Host and Network Name Lookups. Jan 17 12:23:39.909702 systemd[1]: Stopped target remote-cryptsetup.target - Remote Encrypted Volumes. Jan 17 12:23:39.910064 systemd[1]: Stopped target timers.target - Timer Units. 
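Annotation: the "files" stage that just finished is the visible effect of a provisioning config the log itself never prints; the sketch below is a hypothetical Butane config that would produce roughly these operations (user "core" with SSH keys, the kubernetes sysext download and symlink, coreos-metadata.service disabled) — every literal value is illustrative, not recovered from the log:

cat <<'EOF' > config.bu
# hypothetical reconstruction; the real config is not shown in the log
variant: flatcar
version: 1.0.0
passwd:
  users:
    - name: core
      ssh_authorized_keys:
        - ssh-ed25519 AAAA...   # placeholder key
storage:
  files:
    - path: /opt/extensions/kubernetes/kubernetes-v1.29.2-x86-64.raw
      contents:
        source: https://github.com/flatcar/sysext-bakery/releases/download/latest/kubernetes-v1.29.2-x86-64.raw
  links:
    - path: /etc/extensions/kubernetes.raw
      target: /opt/extensions/kubernetes/kubernetes-v1.29.2-x86-64.raw
systemd:
  units:
    - name: coreos-metadata.service
      enabled: false
EOF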
Jan 17 12:23:39.945343 ignition[1011]: INFO : Ignition 2.19.0 Jan 17 12:23:39.945343 ignition[1011]: INFO : Stage: umount Jan 17 12:23:39.945343 ignition[1011]: INFO : no configs at "/usr/lib/ignition/base.d" Jan 17 12:23:39.945343 ignition[1011]: INFO : no config dir at "/usr/lib/ignition/base.platform.d/qemu" Jan 17 12:23:39.910358 systemd[1]: dracut-pre-pivot.service: Deactivated successfully. Jan 17 12:23:39.953295 ignition[1011]: INFO : umount: umount passed Jan 17 12:23:39.953295 ignition[1011]: INFO : Ignition finished successfully Jan 17 12:23:39.910493 systemd[1]: Stopped dracut-pre-pivot.service - dracut pre-pivot and cleanup hook. Jan 17 12:23:39.911214 systemd[1]: Stopped target initrd.target - Initrd Default Target. Jan 17 12:23:39.911560 systemd[1]: Stopped target basic.target - Basic System. Jan 17 12:23:39.911899 systemd[1]: Stopped target ignition-complete.target - Ignition Complete. Jan 17 12:23:39.912405 systemd[1]: Stopped target ignition-diskful.target - Ignition Boot Disk Setup. Jan 17 12:23:39.912760 systemd[1]: Stopped target initrd-root-device.target - Initrd Root Device. Jan 17 12:23:39.913110 systemd[1]: Stopped target remote-fs.target - Remote File Systems. Jan 17 12:23:39.913432 systemd[1]: Stopped target remote-fs-pre.target - Preparation for Remote File Systems. Jan 17 12:23:39.913800 systemd[1]: Stopped target sysinit.target - System Initialization. Jan 17 12:23:39.914158 systemd[1]: Stopped target local-fs.target - Local File Systems. Jan 17 12:23:39.914481 systemd[1]: Stopped target swap.target - Swaps. Jan 17 12:23:39.914786 systemd[1]: dracut-pre-mount.service: Deactivated successfully. Jan 17 12:23:39.914916 systemd[1]: Stopped dracut-pre-mount.service - dracut pre-mount hook. Jan 17 12:23:39.915494 systemd[1]: Stopped target cryptsetup.target - Local Encrypted Volumes. Jan 17 12:23:39.915846 systemd[1]: Stopped target cryptsetup-pre.target - Local Encrypted Volumes (Pre). Jan 17 12:23:39.916311 systemd[1]: clevis-luks-askpass.path: Deactivated successfully. Jan 17 12:23:39.916444 systemd[1]: Stopped clevis-luks-askpass.path - Forward Password Requests to Clevis Directory Watch. Jan 17 12:23:39.916831 systemd[1]: dracut-initqueue.service: Deactivated successfully. Jan 17 12:23:39.916954 systemd[1]: Stopped dracut-initqueue.service - dracut initqueue hook. Jan 17 12:23:39.917491 systemd[1]: ignition-fetch-offline.service: Deactivated successfully. Jan 17 12:23:39.917622 systemd[1]: Stopped ignition-fetch-offline.service - Ignition (fetch-offline). Jan 17 12:23:39.918114 systemd[1]: Stopped target paths.target - Path Units. Jan 17 12:23:39.918498 systemd[1]: systemd-ask-password-console.path: Deactivated successfully. Jan 17 12:23:39.922088 systemd[1]: Stopped systemd-ask-password-console.path - Dispatch Password Requests to Console Directory Watch. Jan 17 12:23:39.922408 systemd[1]: Stopped target slices.target - Slice Units. Jan 17 12:23:39.922732 systemd[1]: Stopped target sockets.target - Socket Units. Jan 17 12:23:39.923082 systemd[1]: iscsid.socket: Deactivated successfully. Jan 17 12:23:39.923190 systemd[1]: Closed iscsid.socket - Open-iSCSI iscsid Socket. Jan 17 12:23:39.923623 systemd[1]: iscsiuio.socket: Deactivated successfully. Jan 17 12:23:39.923759 systemd[1]: Closed iscsiuio.socket - Open-iSCSI iscsiuio Socket. Jan 17 12:23:39.924338 systemd[1]: initrd-setup-root-after-ignition.service: Deactivated successfully. Jan 17 12:23:39.924476 systemd[1]: Stopped initrd-setup-root-after-ignition.service - Root filesystem completion. 
Jan 17 12:23:39.924867 systemd[1]: ignition-files.service: Deactivated successfully. Jan 17 12:23:39.924989 systemd[1]: Stopped ignition-files.service - Ignition (files). Jan 17 12:23:39.926170 systemd[1]: Stopping ignition-mount.service - Ignition (mount)... Jan 17 12:23:39.927156 systemd[1]: Stopping sysroot-boot.service - /sysroot/boot... Jan 17 12:23:39.927509 systemd[1]: systemd-udev-trigger.service: Deactivated successfully. Jan 17 12:23:39.927650 systemd[1]: Stopped systemd-udev-trigger.service - Coldplug All udev Devices. Jan 17 12:23:39.928120 systemd[1]: dracut-pre-trigger.service: Deactivated successfully. Jan 17 12:23:39.928253 systemd[1]: Stopped dracut-pre-trigger.service - dracut pre-trigger hook. Jan 17 12:23:39.932148 systemd[1]: initrd-cleanup.service: Deactivated successfully. Jan 17 12:23:39.932289 systemd[1]: Finished initrd-cleanup.service - Cleaning Up and Shutting Down Daemons. Jan 17 12:23:39.947744 systemd[1]: ignition-mount.service: Deactivated successfully. Jan 17 12:23:39.947879 systemd[1]: Stopped ignition-mount.service - Ignition (mount). Jan 17 12:23:39.949493 systemd[1]: Stopped target network.target - Network. Jan 17 12:23:39.951229 systemd[1]: ignition-disks.service: Deactivated successfully. Jan 17 12:23:39.951294 systemd[1]: Stopped ignition-disks.service - Ignition (disks). Jan 17 12:23:39.953330 systemd[1]: ignition-kargs.service: Deactivated successfully. Jan 17 12:23:39.953386 systemd[1]: Stopped ignition-kargs.service - Ignition (kargs). Jan 17 12:23:39.955138 systemd[1]: ignition-setup.service: Deactivated successfully. Jan 17 12:23:39.955192 systemd[1]: Stopped ignition-setup.service - Ignition (setup). Jan 17 12:23:39.957197 systemd[1]: ignition-setup-pre.service: Deactivated successfully. Jan 17 12:23:39.957247 systemd[1]: Stopped ignition-setup-pre.service - Ignition env setup. Jan 17 12:23:39.959636 systemd[1]: Stopping systemd-networkd.service - Network Configuration... Jan 17 12:23:39.961679 systemd[1]: Stopping systemd-resolved.service - Network Name Resolution... Jan 17 12:23:39.964373 systemd[1]: sysroot-boot.mount: Deactivated successfully. Jan 17 12:23:39.967083 systemd-networkd[782]: eth0: DHCPv6 lease lost Jan 17 12:23:39.970381 systemd[1]: systemd-networkd.service: Deactivated successfully. Jan 17 12:23:39.970540 systemd[1]: Stopped systemd-networkd.service - Network Configuration. Jan 17 12:23:39.972301 systemd[1]: systemd-resolved.service: Deactivated successfully. Jan 17 12:23:39.972422 systemd[1]: Stopped systemd-resolved.service - Network Name Resolution. Jan 17 12:23:39.975984 systemd[1]: systemd-networkd.socket: Deactivated successfully. Jan 17 12:23:39.976054 systemd[1]: Closed systemd-networkd.socket - Network Service Netlink Socket. Jan 17 12:23:39.993494 systemd[1]: Stopping network-cleanup.service - Network Cleanup... Jan 17 12:23:39.995431 systemd[1]: parse-ip-for-networkd.service: Deactivated successfully. Jan 17 12:23:39.995520 systemd[1]: Stopped parse-ip-for-networkd.service - Write systemd-networkd units from cmdline. Jan 17 12:23:40.031357 systemd[1]: systemd-sysctl.service: Deactivated successfully. Jan 17 12:23:40.031417 systemd[1]: Stopped systemd-sysctl.service - Apply Kernel Variables. Jan 17 12:23:40.034362 systemd[1]: systemd-modules-load.service: Deactivated successfully. Jan 17 12:23:40.034416 systemd[1]: Stopped systemd-modules-load.service - Load Kernel Modules. Jan 17 12:23:40.036504 systemd[1]: systemd-tmpfiles-setup.service: Deactivated successfully. 
Jan 17 12:23:40.037536 systemd[1]: Stopped systemd-tmpfiles-setup.service - Create System Files and Directories. Jan 17 12:23:40.040960 systemd[1]: Stopping systemd-udevd.service - Rule-based Manager for Device Events and Files... Jan 17 12:23:40.057107 systemd[1]: network-cleanup.service: Deactivated successfully. Jan 17 12:23:40.057249 systemd[1]: Stopped network-cleanup.service - Network Cleanup. Jan 17 12:23:40.059405 systemd[1]: systemd-udevd.service: Deactivated successfully. Jan 17 12:23:40.059584 systemd[1]: Stopped systemd-udevd.service - Rule-based Manager for Device Events and Files. Jan 17 12:23:40.061151 systemd[1]: systemd-udevd-control.socket: Deactivated successfully. Jan 17 12:23:40.061231 systemd[1]: Closed systemd-udevd-control.socket - udev Control Socket. Jan 17 12:23:40.062355 systemd[1]: systemd-udevd-kernel.socket: Deactivated successfully. Jan 17 12:23:40.062395 systemd[1]: Closed systemd-udevd-kernel.socket - udev Kernel Socket. Jan 17 12:23:40.066271 systemd[1]: dracut-pre-udev.service: Deactivated successfully. Jan 17 12:23:40.066320 systemd[1]: Stopped dracut-pre-udev.service - dracut pre-udev hook. Jan 17 12:23:40.069180 systemd[1]: dracut-cmdline.service: Deactivated successfully. Jan 17 12:23:40.069227 systemd[1]: Stopped dracut-cmdline.service - dracut cmdline hook. Jan 17 12:23:40.071941 systemd[1]: dracut-cmdline-ask.service: Deactivated successfully. Jan 17 12:23:40.071998 systemd[1]: Stopped dracut-cmdline-ask.service - dracut ask for additional cmdline parameters. Jan 17 12:23:40.093221 systemd[1]: Starting initrd-udevadm-cleanup-db.service - Cleanup udev Database... Jan 17 12:23:40.093683 systemd[1]: systemd-tmpfiles-setup-dev.service: Deactivated successfully. Jan 17 12:23:40.093739 systemd[1]: Stopped systemd-tmpfiles-setup-dev.service - Create Static Device Nodes in /dev. Jan 17 12:23:40.096643 systemd[1]: systemd-tmpfiles-setup-dev-early.service: Deactivated successfully. Jan 17 12:23:40.096692 systemd[1]: Stopped systemd-tmpfiles-setup-dev-early.service - Create Static Device Nodes in /dev gracefully. Jan 17 12:23:40.097255 systemd[1]: kmod-static-nodes.service: Deactivated successfully. Jan 17 12:23:40.097299 systemd[1]: Stopped kmod-static-nodes.service - Create List of Static Device Nodes. Jan 17 12:23:40.097594 systemd[1]: systemd-vconsole-setup.service: Deactivated successfully. Jan 17 12:23:40.097638 systemd[1]: Stopped systemd-vconsole-setup.service - Virtual Console Setup. Jan 17 12:23:40.101221 systemd[1]: initrd-udevadm-cleanup-db.service: Deactivated successfully. Jan 17 12:23:40.101341 systemd[1]: Finished initrd-udevadm-cleanup-db.service - Cleanup udev Database. Jan 17 12:23:40.121786 systemd[1]: sysroot-boot.service: Deactivated successfully. Jan 17 12:23:40.121916 systemd[1]: Stopped sysroot-boot.service - /sysroot/boot. Jan 17 12:23:40.123938 systemd[1]: Reached target initrd-switch-root.target - Switch Root. Jan 17 12:23:40.125722 systemd[1]: initrd-setup-root.service: Deactivated successfully. Jan 17 12:23:40.125776 systemd[1]: Stopped initrd-setup-root.service - Root filesystem setup. Jan 17 12:23:40.143206 systemd[1]: Starting initrd-switch-root.service - Switch Root... Jan 17 12:23:40.150192 systemd[1]: Switching root. Jan 17 12:23:40.176340 systemd-journald[193]: Journal stopped Jan 17 12:23:41.233953 systemd-journald[193]: Received SIGTERM from PID 1 (systemd). 
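Annotation: "Switching root" is the initrd handing PID 1 over to the prepared /sysroot, which is why the journal stops here and restarts on the other side; from a shell in the initrd the equivalent step is approximately:

# pivot from the initramfs into the real root (what initrd-switch-root.service performs)
systemctl switch-root /sysroot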
Jan 17 12:23:41.234053 kernel: SELinux: policy capability network_peer_controls=1 Jan 17 12:23:41.234073 kernel: SELinux: policy capability open_perms=1 Jan 17 12:23:41.234088 kernel: SELinux: policy capability extended_socket_class=1 Jan 17 12:23:41.234108 kernel: SELinux: policy capability always_check_network=0 Jan 17 12:23:41.234132 kernel: SELinux: policy capability cgroup_seclabel=1 Jan 17 12:23:41.234147 kernel: SELinux: policy capability nnp_nosuid_transition=1 Jan 17 12:23:41.234162 kernel: SELinux: policy capability genfs_seclabel_symlinks=0 Jan 17 12:23:41.234177 kernel: SELinux: policy capability ioctl_skip_cloexec=0 Jan 17 12:23:41.234191 kernel: audit: type=1403 audit(1737116620.505:2): auid=4294967295 ses=4294967295 lsm=selinux res=1 Jan 17 12:23:41.234213 systemd[1]: Successfully loaded SELinux policy in 38.653ms. Jan 17 12:23:41.234244 systemd[1]: Relabeled /dev, /dev/shm, /run, /sys/fs/cgroup in 12.391ms. Jan 17 12:23:41.234264 systemd[1]: systemd 255 running in system mode (+PAM +AUDIT +SELINUX -APPARMOR +IMA +SMACK +SECCOMP +GCRYPT -GNUTLS +OPENSSL -ACL +BLKID +CURL +ELFUTILS -FIDO2 +IDN2 -IDN +IPTC +KMOD +LIBCRYPTSETUP +LIBFDISK +PCRE2 -PWQUALITY -P11KIT -QRENCODE +TPM2 +BZIP2 +LZ4 +XZ +ZLIB +ZSTD -BPF_FRAMEWORK -XKBCOMMON +UTMP -SYSVINIT default-hierarchy=unified) Jan 17 12:23:41.234286 systemd[1]: Detected virtualization kvm. Jan 17 12:23:41.234307 systemd[1]: Detected architecture x86-64. Jan 17 12:23:41.234323 systemd[1]: Detected first boot. Jan 17 12:23:41.234339 systemd[1]: Initializing machine ID from VM UUID. Jan 17 12:23:41.234355 zram_generator::config[1056]: No configuration found. Jan 17 12:23:41.234372 systemd[1]: Populated /etc with preset unit settings. Jan 17 12:23:41.234389 systemd[1]: initrd-switch-root.service: Deactivated successfully. Jan 17 12:23:41.234407 systemd[1]: Stopped initrd-switch-root.service - Switch Root. Jan 17 12:23:41.234423 systemd[1]: systemd-journald.service: Scheduled restart job, restart counter is at 1. Jan 17 12:23:41.234452 systemd[1]: Created slice system-addon\x2dconfig.slice - Slice /system/addon-config. Jan 17 12:23:41.234469 systemd[1]: Created slice system-addon\x2drun.slice - Slice /system/addon-run. Jan 17 12:23:41.234485 systemd[1]: Created slice system-getty.slice - Slice /system/getty. Jan 17 12:23:41.234501 systemd[1]: Created slice system-modprobe.slice - Slice /system/modprobe. Jan 17 12:23:41.234524 systemd[1]: Created slice system-serial\x2dgetty.slice - Slice /system/serial-getty. Jan 17 12:23:41.234541 systemd[1]: Created slice system-system\x2dcloudinit.slice - Slice /system/system-cloudinit. Jan 17 12:23:41.234557 systemd[1]: Created slice system-systemd\x2dfsck.slice - Slice /system/systemd-fsck. Jan 17 12:23:41.234573 systemd[1]: Created slice user.slice - User and Session Slice. Jan 17 12:23:41.234589 systemd[1]: Started clevis-luks-askpass.path - Forward Password Requests to Clevis Directory Watch. Jan 17 12:23:41.234605 systemd[1]: Started systemd-ask-password-console.path - Dispatch Password Requests to Console Directory Watch. Jan 17 12:23:41.234621 systemd[1]: Started systemd-ask-password-wall.path - Forward Password Requests to Wall Directory Watch. Jan 17 12:23:41.234637 systemd[1]: Set up automount boot.automount - Boot partition Automount Point. Jan 17 12:23:41.234653 systemd[1]: Set up automount proc-sys-fs-binfmt_misc.automount - Arbitrary Executable File Formats File System Automount Point. 
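Annotation: "Initializing machine ID from VM UUID" above means the first boot seeded /etc/machine-id from the UUID the hypervisor exposes over DMI; one way to compare the two on a KVM guest (paths are standard, but exact equality is not guaranteed on every platform):

# the SMBIOS product UUID provided by QEMU/KVM
cat /sys/class/dmi/id/product_uuid
# the machine ID systemd derived from it on first boot
cat /etc/machine-id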
Jan 17 12:23:41.234673 systemd[1]: Expecting device dev-disk-by\x2dlabel-OEM.device - /dev/disk/by-label/OEM... Jan 17 12:23:41.234689 systemd[1]: Expecting device dev-ttyS0.device - /dev/ttyS0... Jan 17 12:23:41.234706 systemd[1]: Reached target cryptsetup-pre.target - Local Encrypted Volumes (Pre). Jan 17 12:23:41.234722 systemd[1]: Stopped target initrd-switch-root.target - Switch Root. Jan 17 12:23:41.234738 systemd[1]: Stopped target initrd-fs.target - Initrd File Systems. Jan 17 12:23:41.234754 systemd[1]: Stopped target initrd-root-fs.target - Initrd Root File System. Jan 17 12:23:41.234769 systemd[1]: Reached target integritysetup.target - Local Integrity Protected Volumes. Jan 17 12:23:41.234789 systemd[1]: Reached target remote-cryptsetup.target - Remote Encrypted Volumes. Jan 17 12:23:41.234806 systemd[1]: Reached target remote-fs.target - Remote File Systems. Jan 17 12:23:41.234822 systemd[1]: Reached target slices.target - Slice Units. Jan 17 12:23:41.234838 systemd[1]: Reached target swap.target - Swaps. Jan 17 12:23:41.234854 systemd[1]: Reached target veritysetup.target - Local Verity Protected Volumes. Jan 17 12:23:41.234872 systemd[1]: Listening on systemd-coredump.socket - Process Core Dump Socket. Jan 17 12:23:41.234888 systemd[1]: Listening on systemd-networkd.socket - Network Service Netlink Socket. Jan 17 12:23:41.234904 systemd[1]: Listening on systemd-udevd-control.socket - udev Control Socket. Jan 17 12:23:41.234919 systemd[1]: Listening on systemd-udevd-kernel.socket - udev Kernel Socket. Jan 17 12:23:41.234935 systemd[1]: Listening on systemd-userdbd.socket - User Database Manager Socket. Jan 17 12:23:41.234955 systemd[1]: Mounting dev-hugepages.mount - Huge Pages File System... Jan 17 12:23:41.234971 systemd[1]: Mounting dev-mqueue.mount - POSIX Message Queue File System... Jan 17 12:23:41.234987 systemd[1]: Mounting media.mount - External Media Directory... Jan 17 12:23:41.235003 systemd[1]: proc-xen.mount - /proc/xen was skipped because of an unmet condition check (ConditionVirtualization=xen). Jan 17 12:23:41.235019 systemd[1]: Mounting sys-kernel-debug.mount - Kernel Debug File System... Jan 17 12:23:41.235051 systemd[1]: Mounting sys-kernel-tracing.mount - Kernel Trace File System... Jan 17 12:23:41.235068 systemd[1]: Mounting tmp.mount - Temporary Directory /tmp... Jan 17 12:23:41.235085 systemd[1]: var-lib-machines.mount - Virtual Machine and Container Storage (Compatibility) was skipped because of an unmet condition check (ConditionPathExists=/var/lib/machines.raw). Jan 17 12:23:41.235104 systemd[1]: Reached target machines.target - Containers. Jan 17 12:23:41.235120 systemd[1]: Starting flatcar-tmpfiles.service - Create missing system files... Jan 17 12:23:41.235137 systemd[1]: ignition-delete-config.service - Ignition (delete config) was skipped because no trigger condition checks were met. Jan 17 12:23:41.235153 systemd[1]: Starting kmod-static-nodes.service - Create List of Static Device Nodes... Jan 17 12:23:41.235168 systemd[1]: Starting modprobe@configfs.service - Load Kernel Module configfs... Jan 17 12:23:41.235184 systemd[1]: Starting modprobe@dm_mod.service - Load Kernel Module dm_mod... Jan 17 12:23:41.235200 systemd[1]: Starting modprobe@drm.service - Load Kernel Module drm... Jan 17 12:23:41.235216 systemd[1]: Starting modprobe@efi_pstore.service - Load Kernel Module efi_pstore... Jan 17 12:23:41.235234 systemd[1]: Starting modprobe@fuse.service - Load Kernel Module fuse... 
Jan 17 12:23:41.235253 systemd[1]: Starting modprobe@loop.service - Load Kernel Module loop... Jan 17 12:23:41.235270 systemd[1]: setup-nsswitch.service - Create /etc/nsswitch.conf was skipped because of an unmet condition check (ConditionPathExists=!/etc/nsswitch.conf). Jan 17 12:23:41.235286 systemd[1]: systemd-fsck-root.service: Deactivated successfully. Jan 17 12:23:41.235302 systemd[1]: Stopped systemd-fsck-root.service - File System Check on Root Device. Jan 17 12:23:41.235318 systemd[1]: systemd-fsck-usr.service: Deactivated successfully. Jan 17 12:23:41.235333 systemd[1]: Stopped systemd-fsck-usr.service. Jan 17 12:23:41.235349 kernel: fuse: init (API version 7.39) Jan 17 12:23:41.235364 kernel: loop: module loaded Jan 17 12:23:41.235383 systemd[1]: Starting systemd-journald.service - Journal Service... Jan 17 12:23:41.235399 systemd[1]: Starting systemd-modules-load.service - Load Kernel Modules... Jan 17 12:23:41.235415 systemd[1]: Starting systemd-network-generator.service - Generate network units from Kernel command line... Jan 17 12:23:41.235441 systemd[1]: Starting systemd-remount-fs.service - Remount Root and Kernel File Systems... Jan 17 12:23:41.235458 systemd[1]: Starting systemd-udev-trigger.service - Coldplug All udev Devices... Jan 17 12:23:41.235474 systemd[1]: verity-setup.service: Deactivated successfully. Jan 17 12:23:41.235490 systemd[1]: Stopped verity-setup.service. Jan 17 12:23:41.235527 systemd-journald[1126]: Collecting audit messages is disabled. Jan 17 12:23:41.235560 systemd[1]: xenserver-pv-version.service - Set fake PV driver version for XenServer was skipped because of an unmet condition check (ConditionVirtualization=xen). Jan 17 12:23:41.235577 systemd[1]: Mounted dev-hugepages.mount - Huge Pages File System. Jan 17 12:23:41.235592 systemd-journald[1126]: Journal started Jan 17 12:23:41.235620 systemd-journald[1126]: Runtime Journal (/run/log/journal/98dd0b4626b840549741eafdf4bca87d) is 6.0M, max 48.4M, 42.3M free. Jan 17 12:23:41.020824 systemd[1]: Queued start job for default target multi-user.target. Jan 17 12:23:41.036215 systemd[1]: Unnecessary job was removed for dev-vda6.device - /dev/vda6. Jan 17 12:23:41.036678 systemd[1]: systemd-journald.service: Deactivated successfully. Jan 17 12:23:41.241084 kernel: ACPI: bus type drm_connector registered Jan 17 12:23:41.241119 systemd[1]: Started systemd-journald.service - Journal Service. Jan 17 12:23:41.243563 systemd[1]: Mounted dev-mqueue.mount - POSIX Message Queue File System. Jan 17 12:23:41.244798 systemd[1]: Mounted media.mount - External Media Directory. Jan 17 12:23:41.245887 systemd[1]: Mounted sys-kernel-debug.mount - Kernel Debug File System. Jan 17 12:23:41.247123 systemd[1]: Mounted sys-kernel-tracing.mount - Kernel Trace File System. Jan 17 12:23:41.248381 systemd[1]: Mounted tmp.mount - Temporary Directory /tmp. Jan 17 12:23:41.249662 systemd[1]: Finished flatcar-tmpfiles.service - Create missing system files. Jan 17 12:23:41.251123 systemd[1]: Finished kmod-static-nodes.service - Create List of Static Device Nodes. Jan 17 12:23:41.252673 systemd[1]: modprobe@configfs.service: Deactivated successfully. Jan 17 12:23:41.252850 systemd[1]: Finished modprobe@configfs.service - Load Kernel Module configfs. Jan 17 12:23:41.254534 systemd[1]: modprobe@dm_mod.service: Deactivated successfully. Jan 17 12:23:41.254720 systemd[1]: Finished modprobe@dm_mod.service - Load Kernel Module dm_mod. Jan 17 12:23:41.256196 systemd[1]: modprobe@drm.service: Deactivated successfully. 
Jan 17 12:23:41.256378 systemd[1]: Finished modprobe@drm.service - Load Kernel Module drm. Jan 17 12:23:41.257749 systemd[1]: modprobe@efi_pstore.service: Deactivated successfully. Jan 17 12:23:41.257925 systemd[1]: Finished modprobe@efi_pstore.service - Load Kernel Module efi_pstore. Jan 17 12:23:41.259474 systemd[1]: modprobe@fuse.service: Deactivated successfully. Jan 17 12:23:41.259649 systemd[1]: Finished modprobe@fuse.service - Load Kernel Module fuse. Jan 17 12:23:41.261028 systemd[1]: modprobe@loop.service: Deactivated successfully. Jan 17 12:23:41.261216 systemd[1]: Finished modprobe@loop.service - Load Kernel Module loop. Jan 17 12:23:41.262848 systemd[1]: Finished systemd-modules-load.service - Load Kernel Modules. Jan 17 12:23:41.264473 systemd[1]: Finished systemd-network-generator.service - Generate network units from Kernel command line. Jan 17 12:23:41.266184 systemd[1]: Finished systemd-remount-fs.service - Remount Root and Kernel File Systems. Jan 17 12:23:41.282139 systemd[1]: Reached target network-pre.target - Preparation for Network. Jan 17 12:23:41.295130 systemd[1]: Mounting sys-fs-fuse-connections.mount - FUSE Control File System... Jan 17 12:23:41.297440 systemd[1]: Mounting sys-kernel-config.mount - Kernel Configuration File System... Jan 17 12:23:41.298584 systemd[1]: remount-root.service - Remount Root File System was skipped because of an unmet condition check (ConditionPathIsReadWrite=!/). Jan 17 12:23:41.298609 systemd[1]: Reached target local-fs.target - Local File Systems. Jan 17 12:23:41.300609 systemd[1]: Listening on systemd-sysext.socket - System Extension Image Management (Varlink). Jan 17 12:23:41.302999 systemd[1]: Starting dracut-shutdown.service - Restore /run/initramfs on shutdown... Jan 17 12:23:41.305721 systemd[1]: Starting ldconfig.service - Rebuild Dynamic Linker Cache... Jan 17 12:23:41.307009 systemd[1]: systemd-binfmt.service - Set Up Additional Binary Formats was skipped because no trigger condition checks were met. Jan 17 12:23:41.310474 systemd[1]: Starting systemd-hwdb-update.service - Rebuild Hardware Database... Jan 17 12:23:41.315704 systemd[1]: Starting systemd-journal-flush.service - Flush Journal to Persistent Storage... Jan 17 12:23:41.316965 systemd[1]: systemd-pstore.service - Platform Persistent Storage Archival was skipped because of an unmet condition check (ConditionDirectoryNotEmpty=/sys/fs/pstore). Jan 17 12:23:41.323519 systemd[1]: Starting systemd-random-seed.service - Load/Save OS Random Seed... Jan 17 12:23:41.324786 systemd[1]: systemd-repart.service - Repartition Root Disk was skipped because no trigger condition checks were met. Jan 17 12:23:41.326823 systemd[1]: Starting systemd-sysctl.service - Apply Kernel Variables... Jan 17 12:23:41.330335 systemd-journald[1126]: Time spent on flushing to /var/log/journal/98dd0b4626b840549741eafdf4bca87d is 23.471ms for 935 entries. Jan 17 12:23:41.330335 systemd-journald[1126]: System Journal (/var/log/journal/98dd0b4626b840549741eafdf4bca87d) is 8.0M, max 195.6M, 187.6M free. Jan 17 12:23:41.362815 systemd-journald[1126]: Received client request to flush runtime journal. Jan 17 12:23:41.362867 kernel: loop0: detected capacity change from 0 to 140768 Jan 17 12:23:41.332542 systemd[1]: Starting systemd-sysext.service - Merge System Extension Images into /usr/ and /opt/... Jan 17 12:23:41.336205 systemd[1]: Starting systemd-tmpfiles-setup-dev-early.service - Create Static Device Nodes in /dev gracefully... 
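Annotation: the modprobe@*.service entries above are instances of one templated unit whose instance name selects the module, so each start/finish pair reduces to a single module load:

# what the templated instances above amount to (module list taken from the log)
modprobe configfs
modprobe dm_mod
modprobe drm
modprobe efi_pstore
modprobe fuse
modprobe loop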
Jan 17 12:23:41.340482 systemd[1]: Mounted sys-fs-fuse-connections.mount - FUSE Control File System. Jan 17 12:23:41.341941 systemd[1]: Mounted sys-kernel-config.mount - Kernel Configuration File System. Jan 17 12:23:41.343545 systemd[1]: Finished dracut-shutdown.service - Restore /run/initramfs on shutdown. Jan 17 12:23:41.353124 systemd[1]: Finished systemd-random-seed.service - Load/Save OS Random Seed. Jan 17 12:23:41.355600 systemd[1]: Finished systemd-udev-trigger.service - Coldplug All udev Devices. Jan 17 12:23:41.360828 systemd[1]: Reached target first-boot-complete.target - First Boot Complete. Jan 17 12:23:41.367824 systemd[1]: Starting systemd-machine-id-commit.service - Commit a transient machine-id on disk... Jan 17 12:23:41.372224 systemd[1]: Starting systemd-udev-settle.service - Wait for udev To Complete Device Initialization... Jan 17 12:23:41.374253 systemd[1]: Finished systemd-journal-flush.service - Flush Journal to Persistent Storage. Jan 17 12:23:41.376477 systemd[1]: Finished systemd-sysctl.service - Apply Kernel Variables. Jan 17 12:23:41.388063 kernel: squashfs: version 4.0 (2009/01/31) Phillip Lougher Jan 17 12:23:41.398849 systemd[1]: etc-machine\x2did.mount: Deactivated successfully. Jan 17 12:23:41.401972 systemd[1]: Finished systemd-machine-id-commit.service - Commit a transient machine-id on disk. Jan 17 12:23:41.404090 systemd-tmpfiles[1171]: ACLs are not supported, ignoring. Jan 17 12:23:41.404114 systemd-tmpfiles[1171]: ACLs are not supported, ignoring. Jan 17 12:23:41.405842 udevadm[1182]: systemd-udev-settle.service is deprecated. Please fix lvm2-activation-early.service, lvm2-activation.service not to pull it in. Jan 17 12:23:41.411503 systemd[1]: Finished systemd-tmpfiles-setup-dev-early.service - Create Static Device Nodes in /dev gracefully. Jan 17 12:23:41.416056 kernel: loop1: detected capacity change from 0 to 142488 Jan 17 12:23:41.421845 systemd[1]: Starting systemd-sysusers.service - Create System Users... Jan 17 12:23:41.449004 systemd[1]: Finished systemd-sysusers.service - Create System Users. Jan 17 12:23:41.453106 kernel: loop2: detected capacity change from 0 to 211296 Jan 17 12:23:41.455213 systemd[1]: Starting systemd-tmpfiles-setup-dev.service - Create Static Device Nodes in /dev... Jan 17 12:23:41.480915 systemd-tmpfiles[1195]: ACLs are not supported, ignoring. Jan 17 12:23:41.480944 systemd-tmpfiles[1195]: ACLs are not supported, ignoring. Jan 17 12:23:41.486565 systemd[1]: Finished systemd-tmpfiles-setup-dev.service - Create Static Device Nodes in /dev. Jan 17 12:23:41.495067 kernel: loop3: detected capacity change from 0 to 140768 Jan 17 12:23:41.508058 kernel: loop4: detected capacity change from 0 to 142488 Jan 17 12:23:41.522065 kernel: loop5: detected capacity change from 0 to 211296 Jan 17 12:23:41.529599 (sd-merge)[1200]: Using extensions 'containerd-flatcar', 'docker-flatcar', 'kubernetes'. Jan 17 12:23:41.530334 (sd-merge)[1200]: Merged extensions into '/usr'. Jan 17 12:23:41.534921 systemd[1]: Reloading requested from client PID 1170 ('systemd-sysext') (unit systemd-sysext.service)... Jan 17 12:23:41.534938 systemd[1]: Reloading... Jan 17 12:23:41.601094 zram_generator::config[1222]: No configuration found. Jan 17 12:23:41.675919 ldconfig[1165]: /sbin/ldconfig: /lib/ld.so.conf is not an ELF file - it has the wrong magic bytes at the start. 
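Annotation: the (sd-merge) lines record systemd-sysext overlaying the containerd-flatcar, docker-flatcar and kubernetes extension images onto /usr, which is what triggers the reload that follows; the merge state can be inspected or redone manually:

# show which extension images are currently merged
systemd-sysext status
# re-merge after adding or removing images under /etc/extensions or /var/lib/extensions
systemd-sysext refresh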
Jan 17 12:23:41.734948 systemd[1]: /usr/lib/systemd/system/docker.socket:6: ListenStream= references a path below legacy directory /var/run/, updating /var/run/docker.sock → /run/docker.sock; please update the unit file accordingly. Jan 17 12:23:41.784349 systemd[1]: Reloading finished in 248 ms. Jan 17 12:23:41.819000 systemd[1]: Finished ldconfig.service - Rebuild Dynamic Linker Cache. Jan 17 12:23:41.820754 systemd[1]: Finished systemd-sysext.service - Merge System Extension Images into /usr/ and /opt/. Jan 17 12:23:41.833335 systemd[1]: Starting ensure-sysext.service... Jan 17 12:23:41.835633 systemd[1]: Starting systemd-tmpfiles-setup.service - Create System Files and Directories... Jan 17 12:23:41.843545 systemd[1]: Reloading requested from client PID 1263 ('systemctl') (unit ensure-sysext.service)... Jan 17 12:23:41.843564 systemd[1]: Reloading... Jan 17 12:23:41.856965 systemd-tmpfiles[1264]: /usr/lib/tmpfiles.d/provision.conf:20: Duplicate line for path "/root", ignoring. Jan 17 12:23:41.857350 systemd-tmpfiles[1264]: /usr/lib/tmpfiles.d/systemd-flatcar.conf:6: Duplicate line for path "/var/log/journal", ignoring. Jan 17 12:23:41.858342 systemd-tmpfiles[1264]: /usr/lib/tmpfiles.d/systemd.conf:29: Duplicate line for path "/var/lib/systemd", ignoring. Jan 17 12:23:41.858644 systemd-tmpfiles[1264]: ACLs are not supported, ignoring. Jan 17 12:23:41.858729 systemd-tmpfiles[1264]: ACLs are not supported, ignoring. Jan 17 12:23:41.871118 systemd-tmpfiles[1264]: Detected autofs mount point /boot during canonicalization of boot. Jan 17 12:23:41.871130 systemd-tmpfiles[1264]: Skipping /boot Jan 17 12:23:41.883571 systemd-tmpfiles[1264]: Detected autofs mount point /boot during canonicalization of boot. Jan 17 12:23:41.883585 systemd-tmpfiles[1264]: Skipping /boot Jan 17 12:23:41.908063 zram_generator::config[1294]: No configuration found. Jan 17 12:23:42.017300 systemd[1]: /usr/lib/systemd/system/docker.socket:6: ListenStream= references a path below legacy directory /var/run/, updating /var/run/docker.sock → /run/docker.sock; please update the unit file accordingly. Jan 17 12:23:42.067084 systemd[1]: Reloading finished in 223 ms. Jan 17 12:23:42.085440 systemd[1]: Finished systemd-hwdb-update.service - Rebuild Hardware Database. Jan 17 12:23:42.104781 systemd[1]: Finished systemd-tmpfiles-setup.service - Create System Files and Directories. Jan 17 12:23:42.114174 systemd[1]: Starting audit-rules.service - Load Security Auditing Rules... Jan 17 12:23:42.117169 systemd[1]: Starting clean-ca-certificates.service - Clean up broken links in /etc/ssl/certs... Jan 17 12:23:42.119989 systemd[1]: Starting systemd-journal-catalog-update.service - Rebuild Journal Catalog... Jan 17 12:23:42.125805 systemd[1]: Starting systemd-resolved.service - Network Name Resolution... Jan 17 12:23:42.131326 systemd[1]: Starting systemd-udevd.service - Rule-based Manager for Device Events and Files... Jan 17 12:23:42.137098 systemd[1]: Starting systemd-update-utmp.service - Record System Boot/Shutdown in UTMP... Jan 17 12:23:42.140494 systemd[1]: proc-xen.mount - /proc/xen was skipped because of an unmet condition check (ConditionVirtualization=xen). Jan 17 12:23:42.140656 systemd[1]: ignition-delete-config.service - Ignition (delete config) was skipped because no trigger condition checks were met. Jan 17 12:23:42.143601 systemd[1]: Starting modprobe@dm_mod.service - Load Kernel Module dm_mod... Jan 17 12:23:42.148109 systemd[1]: Starting modprobe@efi_pstore.service - Load Kernel Module efi_pstore... 
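Annotation: the "Duplicate line for path" warnings above mean two tmpfiles.d fragments declare the same path, so the later one is ignored; tmpfiles.d entries follow the "Type Path Mode User Group Age Argument" layout, e.g. (fragment and values illustrative):

cat <<'EOF' > /etc/tmpfiles.d/example.conf
# hypothetical fragment; duplicate declarations of /root, /var/log/journal and
# /var/lib/systemd in shipped fragments caused the warnings above
d /root 0700 root root - -
EOF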
Jan 17 12:23:42.151439 systemd[1]: Starting modprobe@loop.service - Load Kernel Module loop... Jan 17 12:23:42.154172 systemd[1]: systemd-binfmt.service - Set Up Additional Binary Formats was skipped because no trigger condition checks were met. Jan 17 12:23:42.163473 systemd[1]: Starting systemd-userdbd.service - User Database Manager... Jan 17 12:23:42.165010 augenrules[1355]: No rules Jan 17 12:23:42.174868 systemd[1]: xenserver-pv-version.service - Set fake PV driver version for XenServer was skipped because of an unmet condition check (ConditionVirtualization=xen). Jan 17 12:23:42.175329 systemd-udevd[1336]: Using default interface naming scheme 'v255'. Jan 17 12:23:42.177868 systemd[1]: Finished audit-rules.service - Load Security Auditing Rules. Jan 17 12:23:42.179904 systemd[1]: Finished systemd-journal-catalog-update.service - Rebuild Journal Catalog. Jan 17 12:23:42.181987 systemd[1]: modprobe@dm_mod.service: Deactivated successfully. Jan 17 12:23:42.182303 systemd[1]: Finished modprobe@dm_mod.service - Load Kernel Module dm_mod. Jan 17 12:23:42.184535 systemd[1]: modprobe@efi_pstore.service: Deactivated successfully. Jan 17 12:23:42.184842 systemd[1]: Finished modprobe@efi_pstore.service - Load Kernel Module efi_pstore. Jan 17 12:23:42.186869 systemd[1]: modprobe@loop.service: Deactivated successfully. Jan 17 12:23:42.187424 systemd[1]: Finished modprobe@loop.service - Load Kernel Module loop. Jan 17 12:23:42.201495 systemd[1]: Started systemd-udevd.service - Rule-based Manager for Device Events and Files. Jan 17 12:23:42.204230 systemd[1]: proc-xen.mount - /proc/xen was skipped because of an unmet condition check (ConditionVirtualization=xen). Jan 17 12:23:42.204537 systemd[1]: ignition-delete-config.service - Ignition (delete config) was skipped because no trigger condition checks were met. Jan 17 12:23:42.216316 systemd[1]: Starting modprobe@dm_mod.service - Load Kernel Module dm_mod... Jan 17 12:23:42.221678 systemd[1]: Starting modprobe@efi_pstore.service - Load Kernel Module efi_pstore... Jan 17 12:23:42.226606 systemd[1]: Starting modprobe@loop.service - Load Kernel Module loop... Jan 17 12:23:42.229315 systemd[1]: systemd-binfmt.service - Set Up Additional Binary Formats was skipped because no trigger condition checks were met. Jan 17 12:23:42.235828 systemd[1]: Starting systemd-networkd.service - Network Configuration... Jan 17 12:23:42.244289 systemd[1]: Starting systemd-update-done.service - Update is Completed... Jan 17 12:23:42.245439 systemd[1]: xenserver-pv-version.service - Set fake PV driver version for XenServer was skipped because of an unmet condition check (ConditionVirtualization=xen). Jan 17 12:23:42.247682 systemd[1]: Started systemd-userdbd.service - User Database Manager. Jan 17 12:23:42.249879 systemd[1]: Finished clean-ca-certificates.service - Clean up broken links in /etc/ssl/certs. Jan 17 12:23:42.251620 systemd[1]: Finished systemd-update-utmp.service - Record System Boot/Shutdown in UTMP. Jan 17 12:23:42.253337 systemd[1]: modprobe@dm_mod.service: Deactivated successfully. Jan 17 12:23:42.253533 systemd[1]: Finished modprobe@dm_mod.service - Load Kernel Module dm_mod. Jan 17 12:23:42.255661 systemd[1]: modprobe@efi_pstore.service: Deactivated successfully. Jan 17 12:23:42.255844 systemd[1]: Finished modprobe@efi_pstore.service - Load Kernel Module efi_pstore. Jan 17 12:23:42.257556 systemd[1]: modprobe@loop.service: Deactivated successfully. Jan 17 12:23:42.257736 systemd[1]: Finished modprobe@loop.service - Load Kernel Module loop. 
Jan 17 12:23:42.262773 systemd[1]: Finished systemd-update-done.service - Update is Completed. Jan 17 12:23:42.280379 systemd[1]: Condition check resulted in dev-ttyS0.device - /dev/ttyS0 being skipped. Jan 17 12:23:42.280738 systemd[1]: proc-xen.mount - /proc/xen was skipped because of an unmet condition check (ConditionVirtualization=xen). Jan 17 12:23:42.280931 systemd[1]: ignition-delete-config.service - Ignition (delete config) was skipped because no trigger condition checks were met. Jan 17 12:23:42.288280 systemd[1]: Starting modprobe@dm_mod.service - Load Kernel Module dm_mod... Jan 17 12:23:42.291270 systemd[1]: Starting modprobe@drm.service - Load Kernel Module drm... Jan 17 12:23:42.295435 systemd[1]: Starting modprobe@efi_pstore.service - Load Kernel Module efi_pstore... Jan 17 12:23:42.299131 systemd[1]: Starting modprobe@loop.service - Load Kernel Module loop... Jan 17 12:23:42.300663 systemd[1]: systemd-binfmt.service - Set Up Additional Binary Formats was skipped because no trigger condition checks were met. Jan 17 12:23:42.300797 systemd[1]: update-ca-certificates.service - Update CA bundle at /etc/ssl/certs/ca-certificates.crt was skipped because of an unmet condition check (ConditionPathIsSymbolicLink=!/etc/ssl/certs/ca-certificates.crt). Jan 17 12:23:42.300879 systemd[1]: xenserver-pv-version.service - Set fake PV driver version for XenServer was skipped because of an unmet condition check (ConditionVirtualization=xen). Jan 17 12:23:42.301826 systemd[1]: modprobe@dm_mod.service: Deactivated successfully. Jan 17 12:23:42.303778 systemd[1]: Finished modprobe@dm_mod.service - Load Kernel Module dm_mod. Jan 17 12:23:42.308868 systemd[1]: modprobe@efi_pstore.service: Deactivated successfully. Jan 17 12:23:42.309855 systemd[1]: Finished modprobe@efi_pstore.service - Load Kernel Module efi_pstore. Jan 17 12:23:42.313058 systemd[1]: Finished ensure-sysext.service. Jan 17 12:23:42.315361 systemd[1]: modprobe@loop.service: Deactivated successfully. Jan 17 12:23:42.316318 systemd[1]: Finished modprobe@loop.service - Load Kernel Module loop. Jan 17 12:23:42.322446 systemd[1]: modprobe@drm.service: Deactivated successfully. Jan 17 12:23:42.324943 systemd[1]: Finished modprobe@drm.service - Load Kernel Module drm. Jan 17 12:23:42.331449 systemd[1]: systemd-pstore.service - Platform Persistent Storage Archival was skipped because of an unmet condition check (ConditionDirectoryNotEmpty=/sys/fs/pstore). Jan 17 12:23:42.331526 systemd[1]: systemd-repart.service - Repartition Root Disk was skipped because no trigger condition checks were met. Jan 17 12:23:42.333058 kernel: BTRFS warning: duplicate device /dev/vda3 devid 1 generation 36 scanned by (udev-worker) (1376) Jan 17 12:23:42.342245 systemd[1]: Starting systemd-timesyncd.service - Network Time Synchronization... Jan 17 12:23:42.343637 systemd-networkd[1390]: lo: Link UP Jan 17 12:23:42.343649 systemd-networkd[1390]: lo: Gained carrier Jan 17 12:23:42.345386 systemd-networkd[1390]: Enumeration completed Jan 17 12:23:42.345789 systemd-networkd[1390]: eth0: found matching network '/usr/lib/systemd/network/zz-default.network', based on potentially unpredictable interface name. Jan 17 12:23:42.345799 systemd-networkd[1390]: eth0: Configuring with /usr/lib/systemd/network/zz-default.network. 
Jan 17 12:23:42.346499 systemd-networkd[1390]: eth0: Link UP Jan 17 12:23:42.346509 systemd-networkd[1390]: eth0: Gained carrier Jan 17 12:23:42.346520 systemd-networkd[1390]: eth0: found matching network '/usr/lib/systemd/network/zz-default.network', based on potentially unpredictable interface name. Jan 17 12:23:42.346623 systemd[1]: Started systemd-networkd.service - Network Configuration. Jan 17 12:23:42.349335 systemd-resolved[1334]: Positive Trust Anchors: Jan 17 12:23:42.349355 systemd-resolved[1334]: . IN DS 20326 8 2 e06d44b80b8f1d39a95c0b0d7c65d08458e880409bbc683457104237c7f8ec8d Jan 17 12:23:42.349387 systemd-resolved[1334]: Negative trust anchors: home.arpa 10.in-addr.arpa 16.172.in-addr.arpa 17.172.in-addr.arpa 18.172.in-addr.arpa 19.172.in-addr.arpa 20.172.in-addr.arpa 21.172.in-addr.arpa 22.172.in-addr.arpa 23.172.in-addr.arpa 24.172.in-addr.arpa 25.172.in-addr.arpa 26.172.in-addr.arpa 27.172.in-addr.arpa 28.172.in-addr.arpa 29.172.in-addr.arpa 30.172.in-addr.arpa 31.172.in-addr.arpa 170.0.0.192.in-addr.arpa 171.0.0.192.in-addr.arpa 168.192.in-addr.arpa d.f.ip6.arpa ipv4only.arpa resolver.arpa corp home internal intranet lan local private test Jan 17 12:23:42.353947 systemd-resolved[1334]: Defaulting to hostname 'linux'. Jan 17 12:23:42.360140 systemd-networkd[1390]: eth0: DHCPv4 address 10.0.0.160/16, gateway 10.0.0.1 acquired from 10.0.0.1 Jan 17 12:23:42.362164 systemd[1]: Starting systemd-networkd-wait-online.service - Wait for Network to be Configured... Jan 17 12:23:42.364165 systemd[1]: Started systemd-resolved.service - Network Name Resolution. Jan 17 12:23:42.365456 systemd[1]: Reached target network.target - Network. Jan 17 12:23:42.366611 systemd[1]: Reached target nss-lookup.target - Host and Network Name Lookups. Jan 17 12:23:42.376063 kernel: i801_smbus 0000:00:1f.3: SMBus using PCI interrupt Jan 17 12:23:42.381911 kernel: i2c i2c-0: 1/1 memory slots populated (from DMI) Jan 17 12:23:42.382694 kernel: i2c i2c-0: Memory type 0x07 not supported yet, not instantiating SPD Jan 17 12:23:42.399059 kernel: input: ImExPS/2 Generic Explorer Mouse as /devices/platform/i8042/serio1/input/input3 Jan 17 12:23:43.643536 systemd-timesyncd[1416]: Contacted time server 10.0.0.1:123 (10.0.0.1). Jan 17 12:23:43.643584 systemd-timesyncd[1416]: Initial clock synchronization to Fri 2025-01-17 12:23:43.643443 UTC. Jan 17 12:23:43.643782 systemd-resolved[1334]: Clock change detected. Flushing caches. Jan 17 12:23:43.656367 systemd[1]: Found device dev-disk-by\x2dlabel-OEM.device - /dev/disk/by-label/OEM. Jan 17 12:23:43.659090 systemd[1]: Started systemd-timesyncd.service - Network Time Synchronization. Jan 17 12:23:43.659711 kernel: input: Power Button as /devices/LNXSYSTM:00/LNXPWRBN:00/input/input4 Jan 17 12:23:43.660737 systemd[1]: Reached target time-set.target - System Time Set. Jan 17 12:23:43.672718 kernel: ACPI: button: Power Button [PWRF] Jan 17 12:23:43.673713 kernel: mousedev: PS/2 mouse device common for all mice Jan 17 12:23:43.675868 systemd[1]: Starting systemd-fsck@dev-disk-by\x2dlabel-OEM.service - File System Check on /dev/disk/by-label/OEM... Jan 17 12:23:43.727074 systemd[1]: Starting systemd-vconsole-setup.service - Virtual Console Setup... Jan 17 12:23:43.729919 systemd[1]: Finished systemd-fsck@dev-disk-by\x2dlabel-OEM.service - File System Check on /dev/disk/by-label/OEM. 
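Annotation: eth0 above matched /usr/lib/systemd/network/zz-default.network and obtained 10.0.0.160/16 over DHCP; the contents of that file are not shown in the log, but a minimal .network unit of the same shape looks like this (filename and match pattern assumed):

cat <<'EOF' > /etc/systemd/network/10-dhcp.network
[Match]
# zz-default.network acts as a catch-all; the pattern here is an assumption
Name=*

[Network]
DHCP=yes
EOF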
Jan 17 12:23:43.758112 kernel: kvm_amd: TSC scaling supported Jan 17 12:23:43.758257 kernel: kvm_amd: Nested Virtualization enabled Jan 17 12:23:43.758271 kernel: kvm_amd: Nested Paging enabled Jan 17 12:23:43.758283 kernel: kvm_amd: LBR virtualization supported Jan 17 12:23:43.759333 kernel: kvm_amd: Virtual VMLOAD VMSAVE supported Jan 17 12:23:43.759348 kernel: kvm_amd: Virtual GIF supported Jan 17 12:23:43.783613 kernel: EDAC MC: Ver: 3.0.0 Jan 17 12:23:43.823239 systemd[1]: Finished systemd-udev-settle.service - Wait for udev To Complete Device Initialization. Jan 17 12:23:43.824985 systemd[1]: Finished systemd-vconsole-setup.service - Virtual Console Setup. Jan 17 12:23:43.835867 systemd[1]: Starting lvm2-activation-early.service - Activation of LVM2 logical volumes... Jan 17 12:23:43.845748 lvm[1434]: WARNING: Failed to connect to lvmetad. Falling back to device scanning. Jan 17 12:23:43.876807 systemd[1]: Finished lvm2-activation-early.service - Activation of LVM2 logical volumes. Jan 17 12:23:43.878320 systemd[1]: Reached target cryptsetup.target - Local Encrypted Volumes. Jan 17 12:23:43.879443 systemd[1]: Reached target sysinit.target - System Initialization. Jan 17 12:23:43.880614 systemd[1]: Started motdgen.path - Watch for update engine configuration changes. Jan 17 12:23:43.881876 systemd[1]: Started user-cloudinit@var-lib-flatcar\x2dinstall-user_data.path - Watch for a cloud-config at /var/lib/flatcar-install/user_data. Jan 17 12:23:43.883336 systemd[1]: Started logrotate.timer - Daily rotation of log files. Jan 17 12:23:43.884530 systemd[1]: Started mdadm.timer - Weekly check for MD array's redundancy information.. Jan 17 12:23:43.885792 systemd[1]: Started systemd-tmpfiles-clean.timer - Daily Cleanup of Temporary Directories. Jan 17 12:23:43.887047 systemd[1]: update-engine-stub.timer - Update Engine Stub Timer was skipped because of an unmet condition check (ConditionPathExists=/usr/.noupdate). Jan 17 12:23:43.887080 systemd[1]: Reached target paths.target - Path Units. Jan 17 12:23:43.887995 systemd[1]: Reached target timers.target - Timer Units. Jan 17 12:23:43.889606 systemd[1]: Listening on dbus.socket - D-Bus System Message Bus Socket. Jan 17 12:23:43.892245 systemd[1]: Starting docker.socket - Docker Socket for the API... Jan 17 12:23:43.909528 systemd[1]: Listening on sshd.socket - OpenSSH Server Socket. Jan 17 12:23:43.912043 systemd[1]: Starting lvm2-activation.service - Activation of LVM2 logical volumes... Jan 17 12:23:43.913804 systemd[1]: Listening on docker.socket - Docker Socket for the API. Jan 17 12:23:43.915028 systemd[1]: Reached target sockets.target - Socket Units. Jan 17 12:23:43.916017 systemd[1]: Reached target basic.target - Basic System. Jan 17 12:23:43.916982 systemd[1]: addon-config@oem.service - Configure Addon /oem was skipped because no trigger condition checks were met. Jan 17 12:23:43.917010 systemd[1]: addon-run@oem.service - Run Addon /oem was skipped because no trigger condition checks were met. Jan 17 12:23:43.917955 systemd[1]: Starting containerd.service - containerd container runtime... Jan 17 12:23:43.919991 systemd[1]: Starting dbus.service - D-Bus System Message Bus... Jan 17 12:23:43.924798 systemd[1]: Starting enable-oem-cloudinit.service - Enable cloudinit... Jan 17 12:23:43.926622 lvm[1438]: WARNING: Failed to connect to lvmetad. Falling back to device scanning. Jan 17 12:23:43.928221 systemd[1]: Starting extend-filesystems.service - Extend Filesystems... 
Jan 17 12:23:43.929326 systemd[1]: flatcar-setup-environment.service - Modifies /etc/environment for CoreOS was skipped because of an unmet condition check (ConditionPathExists=/oem/bin/flatcar-setup-environment). Jan 17 12:23:43.931785 jq[1441]: false Jan 17 12:23:43.935107 systemd[1]: Starting motdgen.service - Generate /run/flatcar/motd... Jan 17 12:23:43.939847 systemd[1]: Starting ssh-key-proc-cmdline.service - Install an ssh key from /proc/cmdline... Jan 17 12:23:43.944011 systemd[1]: Starting sshd-keygen.service - Generate sshd host keys... Jan 17 12:23:43.946604 extend-filesystems[1442]: Found loop3 Jan 17 12:23:43.947747 extend-filesystems[1442]: Found loop4 Jan 17 12:23:43.947747 extend-filesystems[1442]: Found loop5 Jan 17 12:23:43.947747 extend-filesystems[1442]: Found sr0 Jan 17 12:23:43.947747 extend-filesystems[1442]: Found vda Jan 17 12:23:43.947747 extend-filesystems[1442]: Found vda1 Jan 17 12:23:43.947747 extend-filesystems[1442]: Found vda2 Jan 17 12:23:43.947747 extend-filesystems[1442]: Found vda3 Jan 17 12:23:43.947747 extend-filesystems[1442]: Found usr Jan 17 12:23:43.966084 extend-filesystems[1442]: Found vda4 Jan 17 12:23:43.966084 extend-filesystems[1442]: Found vda6 Jan 17 12:23:43.966084 extend-filesystems[1442]: Found vda7 Jan 17 12:23:43.966084 extend-filesystems[1442]: Found vda9 Jan 17 12:23:43.966084 extend-filesystems[1442]: Checking size of /dev/vda9 Jan 17 12:23:43.959046 dbus-daemon[1440]: [system] SELinux support is enabled Jan 17 12:23:43.947990 systemd[1]: Starting systemd-logind.service - User Login Management... Jan 17 12:23:43.949858 systemd[1]: tcsd.service - TCG Core Services Daemon was skipped because of an unmet condition check (ConditionPathExists=/dev/tpm0). Jan 17 12:23:43.950443 systemd[1]: cgroup compatibility translation between legacy and unified hierarchy settings activated. See cgroup-compat debug messages for details. Jan 17 12:23:43.954840 systemd[1]: Starting update-engine.service - Update Engine... Jan 17 12:23:43.957521 systemd[1]: Starting update-ssh-keys-after-ignition.service - Run update-ssh-keys once after Ignition... Jan 17 12:23:43.960085 systemd[1]: Started dbus.service - D-Bus System Message Bus. Jan 17 12:23:43.971188 systemd[1]: Finished lvm2-activation.service - Activation of LVM2 logical volumes. Jan 17 12:23:43.973474 systemd[1]: enable-oem-cloudinit.service: Skipped due to 'exec-condition'. Jan 17 12:23:43.973685 systemd[1]: Condition check resulted in enable-oem-cloudinit.service - Enable cloudinit being skipped. Jan 17 12:23:43.975130 systemd[1]: ssh-key-proc-cmdline.service: Deactivated successfully. Jan 17 12:23:43.975325 systemd[1]: Finished ssh-key-proc-cmdline.service - Install an ssh key from /proc/cmdline. Jan 17 12:23:43.976838 extend-filesystems[1442]: Resized partition /dev/vda9 Jan 17 12:23:43.982420 kernel: BTRFS warning: duplicate device /dev/vda3 devid 1 generation 36 scanned by (udev-worker) (1367) Jan 17 12:23:43.982470 jq[1455]: true Jan 17 12:23:43.985063 (ntainerd)[1462]: containerd.service: Referenced but unset environment variable evaluates to an empty string: TORCX_IMAGEDIR, TORCX_UNPACKDIR Jan 17 12:23:43.992397 systemd[1]: motdgen.service: Deactivated successfully. Jan 17 12:23:43.992744 systemd[1]: Finished motdgen.service - Generate /run/flatcar/motd. 
Jan 17 12:23:43.997934 extend-filesystems[1467]: resize2fs 1.47.1 (20-May-2024) Jan 17 12:23:44.003307 update_engine[1451]: I20250117 12:23:43.999415 1451 main.cc:92] Flatcar Update Engine starting Jan 17 12:23:44.012852 update_engine[1451]: I20250117 12:23:44.010669 1451 update_check_scheduler.cc:74] Next update check in 2m53s Jan 17 12:23:44.024481 systemd[1]: Started update-engine.service - Update Engine. Jan 17 12:23:44.026047 systemd[1]: system-cloudinit@usr-share-oem-cloud\x2dconfig.yml.service - Load cloud-config from /usr/share/oem/cloud-config.yml was skipped because of an unmet condition check (ConditionFileNotEmpty=/usr/share/oem/cloud-config.yml). Jan 17 12:23:44.026085 systemd[1]: Reached target system-config.target - Load system-provided cloud configs. Jan 17 12:23:44.028758 kernel: EXT4-fs (vda9): resizing filesystem from 553472 to 1864699 blocks Jan 17 12:23:44.028216 systemd[1]: user-cloudinit-proc-cmdline.service - Load cloud-config from url defined in /proc/cmdline was skipped because of an unmet condition check (ConditionKernelCommandLine=cloud-config-url). Jan 17 12:23:44.028232 systemd[1]: Reached target user-config.target - Load user-provided cloud configs. Jan 17 12:23:44.032237 jq[1466]: true Jan 17 12:23:44.038880 systemd[1]: Started locksmithd.service - Cluster reboot manager. Jan 17 12:23:44.050847 systemd-logind[1448]: Watching system buttons on /dev/input/event2 (Power Button) Jan 17 12:23:44.051278 systemd-logind[1448]: Watching system buttons on /dev/input/event0 (AT Translated Set 2 keyboard) Jan 17 12:23:44.052813 systemd-logind[1448]: New seat seat0. Jan 17 12:23:44.055352 systemd[1]: Started systemd-logind.service - User Login Management. Jan 17 12:23:44.072736 kernel: EXT4-fs (vda9): resized filesystem to 1864699 Jan 17 12:23:44.075375 locksmithd[1474]: locksmithd starting currentOperation="UPDATE_STATUS_IDLE" strategy="reboot" Jan 17 12:23:44.096107 extend-filesystems[1467]: Filesystem at /dev/vda9 is mounted on /; on-line resizing required Jan 17 12:23:44.096107 extend-filesystems[1467]: old_desc_blocks = 1, new_desc_blocks = 1 Jan 17 12:23:44.096107 extend-filesystems[1467]: The filesystem on /dev/vda9 is now 1864699 (4k) blocks long. Jan 17 12:23:44.101287 extend-filesystems[1442]: Resized filesystem in /dev/vda9 Jan 17 12:23:44.101034 systemd[1]: extend-filesystems.service: Deactivated successfully. Jan 17 12:23:44.101282 systemd[1]: Finished extend-filesystems.service - Extend Filesystems. Jan 17 12:23:44.109333 bash[1492]: Updated "/home/core/.ssh/authorized_keys" Jan 17 12:23:44.111814 systemd[1]: Finished update-ssh-keys-after-ignition.service - Run update-ssh-keys once after Ignition. Jan 17 12:23:44.113850 systemd[1]: sshkeys.service was skipped because no trigger condition checks were met. Jan 17 12:23:44.202139 containerd[1462]: time="2025-01-17T12:23:44.202000435Z" level=info msg="starting containerd" revision=174e0d1785eeda18dc2beba45e1d5a188771636b version=v1.7.21 Jan 17 12:23:44.207381 sshd_keygen[1468]: ssh-keygen: generating new host keys: RSA ECDSA ED25519 Jan 17 12:23:44.225471 containerd[1462]: time="2025-01-17T12:23:44.225399487Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.aufs\"..." type=io.containerd.snapshotter.v1 Jan 17 12:23:44.227632 containerd[1462]: time="2025-01-17T12:23:44.227159618Z" level=info msg="skip loading plugin \"io.containerd.snapshotter.v1.aufs\"..." 
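Annotation: the extend-filesystems output above shows an online ext4 grow of the root from 553472 to 1864699 4k blocks; done by hand, with the device taken from the log, this is approximately:

# grow the mounted ext4 root to fill its (already enlarged) partition
resize2fs /dev/vda9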
error="aufs is not supported (modprobe aufs failed: exit status 1 \"modprobe: FATAL: Module aufs not found in directory /lib/modules/6.6.71-flatcar\\n\"): skip plugin" type=io.containerd.snapshotter.v1 Jan 17 12:23:44.227632 containerd[1462]: time="2025-01-17T12:23:44.227206556Z" level=info msg="loading plugin \"io.containerd.event.v1.exchange\"..." type=io.containerd.event.v1 Jan 17 12:23:44.227632 containerd[1462]: time="2025-01-17T12:23:44.227226834Z" level=info msg="loading plugin \"io.containerd.internal.v1.opt\"..." type=io.containerd.internal.v1 Jan 17 12:23:44.227632 containerd[1462]: time="2025-01-17T12:23:44.227418373Z" level=info msg="loading plugin \"io.containerd.warning.v1.deprecations\"..." type=io.containerd.warning.v1 Jan 17 12:23:44.227632 containerd[1462]: time="2025-01-17T12:23:44.227433732Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.blockfile\"..." type=io.containerd.snapshotter.v1 Jan 17 12:23:44.227632 containerd[1462]: time="2025-01-17T12:23:44.227499565Z" level=info msg="skip loading plugin \"io.containerd.snapshotter.v1.blockfile\"..." error="no scratch file generator: skip plugin" type=io.containerd.snapshotter.v1 Jan 17 12:23:44.227632 containerd[1462]: time="2025-01-17T12:23:44.227514273Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.btrfs\"..." type=io.containerd.snapshotter.v1 Jan 17 12:23:44.227914 containerd[1462]: time="2025-01-17T12:23:44.227726661Z" level=info msg="skip loading plugin \"io.containerd.snapshotter.v1.btrfs\"..." error="path /var/lib/containerd/io.containerd.snapshotter.v1.btrfs (ext4) must be a btrfs filesystem to be used with the btrfs snapshotter: skip plugin" type=io.containerd.snapshotter.v1 Jan 17 12:23:44.227914 containerd[1462]: time="2025-01-17T12:23:44.227742391Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.devmapper\"..." type=io.containerd.snapshotter.v1 Jan 17 12:23:44.227914 containerd[1462]: time="2025-01-17T12:23:44.227754954Z" level=info msg="skip loading plugin \"io.containerd.snapshotter.v1.devmapper\"..." error="devmapper not configured: skip plugin" type=io.containerd.snapshotter.v1 Jan 17 12:23:44.227914 containerd[1462]: time="2025-01-17T12:23:44.227764983Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.native\"..." type=io.containerd.snapshotter.v1 Jan 17 12:23:44.227914 containerd[1462]: time="2025-01-17T12:23:44.227854531Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.overlayfs\"..." type=io.containerd.snapshotter.v1 Jan 17 12:23:44.228136 containerd[1462]: time="2025-01-17T12:23:44.228102917Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.zfs\"..." type=io.containerd.snapshotter.v1 Jan 17 12:23:44.228267 containerd[1462]: time="2025-01-17T12:23:44.228235275Z" level=info msg="skip loading plugin \"io.containerd.snapshotter.v1.zfs\"..." error="path /var/lib/containerd/io.containerd.snapshotter.v1.zfs must be a zfs filesystem to be used with the zfs snapshotter: skip plugin" type=io.containerd.snapshotter.v1 Jan 17 12:23:44.228267 containerd[1462]: time="2025-01-17T12:23:44.228253389Z" level=info msg="loading plugin \"io.containerd.content.v1.content\"..." type=io.containerd.content.v1 Jan 17 12:23:44.228371 containerd[1462]: time="2025-01-17T12:23:44.228347746Z" level=info msg="loading plugin \"io.containerd.metadata.v1.bolt\"..." 
type=io.containerd.metadata.v1 Jan 17 12:23:44.228421 containerd[1462]: time="2025-01-17T12:23:44.228405795Z" level=info msg="metadata content store policy set" policy=shared Jan 17 12:23:44.233965 systemd[1]: Finished sshd-keygen.service - Generate sshd host keys. Jan 17 12:23:44.235664 containerd[1462]: time="2025-01-17T12:23:44.235564220Z" level=info msg="loading plugin \"io.containerd.gc.v1.scheduler\"..." type=io.containerd.gc.v1 Jan 17 12:23:44.235664 containerd[1462]: time="2025-01-17T12:23:44.235617069Z" level=info msg="loading plugin \"io.containerd.differ.v1.walking\"..." type=io.containerd.differ.v1 Jan 17 12:23:44.235664 containerd[1462]: time="2025-01-17T12:23:44.235632177Z" level=info msg="loading plugin \"io.containerd.lease.v1.manager\"..." type=io.containerd.lease.v1 Jan 17 12:23:44.235664 containerd[1462]: time="2025-01-17T12:23:44.235648728Z" level=info msg="loading plugin \"io.containerd.streaming.v1.manager\"..." type=io.containerd.streaming.v1 Jan 17 12:23:44.235664 containerd[1462]: time="2025-01-17T12:23:44.235664578Z" level=info msg="loading plugin \"io.containerd.runtime.v1.linux\"..." type=io.containerd.runtime.v1 Jan 17 12:23:44.236970 containerd[1462]: time="2025-01-17T12:23:44.236914261Z" level=info msg="loading plugin \"io.containerd.monitor.v1.cgroups\"..." type=io.containerd.monitor.v1 Jan 17 12:23:44.237210 containerd[1462]: time="2025-01-17T12:23:44.237177535Z" level=info msg="loading plugin \"io.containerd.runtime.v2.task\"..." type=io.containerd.runtime.v2 Jan 17 12:23:44.237323 containerd[1462]: time="2025-01-17T12:23:44.237294104Z" level=info msg="loading plugin \"io.containerd.runtime.v2.shim\"..." type=io.containerd.runtime.v2 Jan 17 12:23:44.237323 containerd[1462]: time="2025-01-17T12:23:44.237317638Z" level=info msg="loading plugin \"io.containerd.sandbox.store.v1.local\"..." type=io.containerd.sandbox.store.v1 Jan 17 12:23:44.237394 containerd[1462]: time="2025-01-17T12:23:44.237332666Z" level=info msg="loading plugin \"io.containerd.sandbox.controller.v1.local\"..." type=io.containerd.sandbox.controller.v1 Jan 17 12:23:44.237394 containerd[1462]: time="2025-01-17T12:23:44.237347664Z" level=info msg="loading plugin \"io.containerd.service.v1.containers-service\"..." type=io.containerd.service.v1 Jan 17 12:23:44.237394 containerd[1462]: time="2025-01-17T12:23:44.237366469Z" level=info msg="loading plugin \"io.containerd.service.v1.content-service\"..." type=io.containerd.service.v1 Jan 17 12:23:44.237394 containerd[1462]: time="2025-01-17T12:23:44.237378973Z" level=info msg="loading plugin \"io.containerd.service.v1.diff-service\"..." type=io.containerd.service.v1 Jan 17 12:23:44.237394 containerd[1462]: time="2025-01-17T12:23:44.237393851Z" level=info msg="loading plugin \"io.containerd.service.v1.images-service\"..." type=io.containerd.service.v1 Jan 17 12:23:44.237477 containerd[1462]: time="2025-01-17T12:23:44.237409300Z" level=info msg="loading plugin \"io.containerd.service.v1.introspection-service\"..." type=io.containerd.service.v1 Jan 17 12:23:44.237477 containerd[1462]: time="2025-01-17T12:23:44.237423757Z" level=info msg="loading plugin \"io.containerd.service.v1.namespaces-service\"..." type=io.containerd.service.v1 Jan 17 12:23:44.237477 containerd[1462]: time="2025-01-17T12:23:44.237436901Z" level=info msg="loading plugin \"io.containerd.service.v1.snapshots-service\"..." 
type=io.containerd.service.v1 Jan 17 12:23:44.237477 containerd[1462]: time="2025-01-17T12:23:44.237449445Z" level=info msg="loading plugin \"io.containerd.service.v1.tasks-service\"..." type=io.containerd.service.v1 Jan 17 12:23:44.237477 containerd[1462]: time="2025-01-17T12:23:44.237468741Z" level=info msg="loading plugin \"io.containerd.grpc.v1.containers\"..." type=io.containerd.grpc.v1 Jan 17 12:23:44.237565 containerd[1462]: time="2025-01-17T12:23:44.237482387Z" level=info msg="loading plugin \"io.containerd.grpc.v1.content\"..." type=io.containerd.grpc.v1 Jan 17 12:23:44.237565 containerd[1462]: time="2025-01-17T12:23:44.237496924Z" level=info msg="loading plugin \"io.containerd.grpc.v1.diff\"..." type=io.containerd.grpc.v1 Jan 17 12:23:44.237565 containerd[1462]: time="2025-01-17T12:23:44.237514667Z" level=info msg="loading plugin \"io.containerd.grpc.v1.events\"..." type=io.containerd.grpc.v1 Jan 17 12:23:44.237565 containerd[1462]: time="2025-01-17T12:23:44.237527081Z" level=info msg="loading plugin \"io.containerd.grpc.v1.images\"..." type=io.containerd.grpc.v1 Jan 17 12:23:44.237565 containerd[1462]: time="2025-01-17T12:23:44.237539574Z" level=info msg="loading plugin \"io.containerd.grpc.v1.introspection\"..." type=io.containerd.grpc.v1 Jan 17 12:23:44.237565 containerd[1462]: time="2025-01-17T12:23:44.237551316Z" level=info msg="loading plugin \"io.containerd.grpc.v1.leases\"..." type=io.containerd.grpc.v1 Jan 17 12:23:44.237565 containerd[1462]: time="2025-01-17T12:23:44.237563519Z" level=info msg="loading plugin \"io.containerd.grpc.v1.namespaces\"..." type=io.containerd.grpc.v1 Jan 17 12:23:44.237732 containerd[1462]: time="2025-01-17T12:23:44.237579198Z" level=info msg="loading plugin \"io.containerd.grpc.v1.sandbox-controllers\"..." type=io.containerd.grpc.v1 Jan 17 12:23:44.237732 containerd[1462]: time="2025-01-17T12:23:44.237593545Z" level=info msg="loading plugin \"io.containerd.grpc.v1.sandboxes\"..." type=io.containerd.grpc.v1 Jan 17 12:23:44.237732 containerd[1462]: time="2025-01-17T12:23:44.237610717Z" level=info msg="loading plugin \"io.containerd.grpc.v1.snapshots\"..." type=io.containerd.grpc.v1 Jan 17 12:23:44.237732 containerd[1462]: time="2025-01-17T12:23:44.237623371Z" level=info msg="loading plugin \"io.containerd.grpc.v1.streaming\"..." type=io.containerd.grpc.v1 Jan 17 12:23:44.237732 containerd[1462]: time="2025-01-17T12:23:44.237635995Z" level=info msg="loading plugin \"io.containerd.grpc.v1.tasks\"..." type=io.containerd.grpc.v1 Jan 17 12:23:44.237732 containerd[1462]: time="2025-01-17T12:23:44.237653037Z" level=info msg="loading plugin \"io.containerd.transfer.v1.local\"..." type=io.containerd.transfer.v1 Jan 17 12:23:44.237732 containerd[1462]: time="2025-01-17T12:23:44.237678114Z" level=info msg="loading plugin \"io.containerd.grpc.v1.transfer\"..." type=io.containerd.grpc.v1 Jan 17 12:23:44.237732 containerd[1462]: time="2025-01-17T12:23:44.237735832Z" level=info msg="loading plugin \"io.containerd.grpc.v1.version\"..." type=io.containerd.grpc.v1 Jan 17 12:23:44.237886 containerd[1462]: time="2025-01-17T12:23:44.237749087Z" level=info msg="loading plugin \"io.containerd.internal.v1.restart\"..." type=io.containerd.internal.v1 Jan 17 12:23:44.237886 containerd[1462]: time="2025-01-17T12:23:44.237806605Z" level=info msg="loading plugin \"io.containerd.tracing.processor.v1.otlp\"..." 
type=io.containerd.tracing.processor.v1 Jan 17 12:23:44.237886 containerd[1462]: time="2025-01-17T12:23:44.237823997Z" level=info msg="skip loading plugin \"io.containerd.tracing.processor.v1.otlp\"..." error="skip plugin: tracing endpoint not configured" type=io.containerd.tracing.processor.v1 Jan 17 12:23:44.237886 containerd[1462]: time="2025-01-17T12:23:44.237835319Z" level=info msg="loading plugin \"io.containerd.internal.v1.tracing\"..." type=io.containerd.internal.v1 Jan 17 12:23:44.237886 containerd[1462]: time="2025-01-17T12:23:44.237847642Z" level=info msg="skip loading plugin \"io.containerd.internal.v1.tracing\"..." error="skip plugin: tracing endpoint not configured" type=io.containerd.internal.v1 Jan 17 12:23:44.237886 containerd[1462]: time="2025-01-17T12:23:44.237857590Z" level=info msg="loading plugin \"io.containerd.grpc.v1.healthcheck\"..." type=io.containerd.grpc.v1 Jan 17 12:23:44.237886 containerd[1462]: time="2025-01-17T12:23:44.237870444Z" level=info msg="loading plugin \"io.containerd.nri.v1.nri\"..." type=io.containerd.nri.v1 Jan 17 12:23:44.237886 containerd[1462]: time="2025-01-17T12:23:44.237886114Z" level=info msg="NRI interface is disabled by configuration." Jan 17 12:23:44.238028 containerd[1462]: time="2025-01-17T12:23:44.237910830Z" level=info msg="loading plugin \"io.containerd.grpc.v1.cri\"..." type=io.containerd.grpc.v1 Jan 17 12:23:44.238335 containerd[1462]: time="2025-01-17T12:23:44.238276726Z" level=info msg="Start cri plugin with config {PluginConfig:{ContainerdConfig:{Snapshotter:overlayfs DefaultRuntimeName:runc DefaultRuntime:{Type: Path: Engine: PodAnnotations:[] ContainerAnnotations:[] Root: Options:map[] PrivilegedWithoutHostDevices:false PrivilegedWithoutHostDevicesAllDevicesAllowed:false BaseRuntimeSpec: NetworkPluginConfDir: NetworkPluginMaxConfNum:0 Snapshotter: SandboxMode:} UntrustedWorkloadRuntime:{Type: Path: Engine: PodAnnotations:[] ContainerAnnotations:[] Root: Options:map[] PrivilegedWithoutHostDevices:false PrivilegedWithoutHostDevicesAllDevicesAllowed:false BaseRuntimeSpec: NetworkPluginConfDir: NetworkPluginMaxConfNum:0 Snapshotter: SandboxMode:} Runtimes:map[runc:{Type:io.containerd.runc.v2 Path: Engine: PodAnnotations:[] ContainerAnnotations:[] Root: Options:map[SystemdCgroup:true] PrivilegedWithoutHostDevices:false PrivilegedWithoutHostDevicesAllDevicesAllowed:false BaseRuntimeSpec: NetworkPluginConfDir: NetworkPluginMaxConfNum:0 Snapshotter: SandboxMode:podsandbox}] NoPivot:false DisableSnapshotAnnotations:true DiscardUnpackedLayers:false IgnoreBlockIONotEnabledErrors:false IgnoreRdtNotEnabledErrors:false} CniConfig:{NetworkPluginBinDir:/opt/cni/bin NetworkPluginConfDir:/etc/cni/net.d NetworkPluginMaxConfNum:1 NetworkPluginSetupSerially:false NetworkPluginConfTemplate: IPPreference:} Registry:{ConfigPath: Mirrors:map[] Configs:map[] Auths:map[] Headers:map[]} ImageDecryption:{KeyModel:node} DisableTCPService:true StreamServerAddress:127.0.0.1 StreamServerPort:0 StreamIdleTimeout:4h0m0s EnableSelinux:true SelinuxCategoryRange:1024 SandboxImage:registry.k8s.io/pause:3.8 StatsCollectPeriod:10 SystemdCgroup:false EnableTLSStreaming:false X509KeyPairStreaming:{TLSCertFile: TLSKeyFile:} MaxContainerLogLineSize:16384 DisableCgroup:false DisableApparmor:false RestrictOOMScoreAdj:false MaxConcurrentDownloads:3 DisableProcMount:false UnsetSeccompProfile: TolerateMissingHugetlbController:true DisableHugetlbController:true DeviceOwnershipFromSecurityContext:false IgnoreImageDefinedVolumes:false NetNSMountsUnderStateDir:false 
EnableUnprivilegedPorts:false EnableUnprivilegedICMP:false EnableCDI:false CDISpecDirs:[/etc/cdi /var/run/cdi] ImagePullProgressTimeout:5m0s DrainExecSyncIOTimeout:0s ImagePullWithSyncFs:false IgnoreDeprecationWarnings:[]} ContainerdRootDir:/var/lib/containerd ContainerdEndpoint:/run/containerd/containerd.sock RootDir:/var/lib/containerd/io.containerd.grpc.v1.cri StateDir:/run/containerd/io.containerd.grpc.v1.cri}" Jan 17 12:23:44.238335 containerd[1462]: time="2025-01-17T12:23:44.238332661Z" level=info msg="Connect containerd service" Jan 17 12:23:44.238485 containerd[1462]: time="2025-01-17T12:23:44.238370081Z" level=info msg="using legacy CRI server" Jan 17 12:23:44.238485 containerd[1462]: time="2025-01-17T12:23:44.238377175Z" level=info msg="using experimental NRI integration - disable nri plugin to prevent this" Jan 17 12:23:44.238522 containerd[1462]: time="2025-01-17T12:23:44.238484546Z" level=info msg="Get image filesystem path \"/var/lib/containerd/io.containerd.snapshotter.v1.overlayfs\"" Jan 17 12:23:44.239102 containerd[1462]: time="2025-01-17T12:23:44.239070936Z" level=error msg="failed to load cni during init, please check CRI plugin status before setting up network for pods" error="cni config load failed: no network config found in /etc/cni/net.d: cni plugin not initialized: failed to load cni config" Jan 17 12:23:44.239386 containerd[1462]: time="2025-01-17T12:23:44.239255262Z" level=info msg="Start subscribing containerd event" Jan 17 12:23:44.239420 containerd[1462]: time="2025-01-17T12:23:44.239400474Z" level=info msg=serving... address=/run/containerd/containerd.sock.ttrpc Jan 17 12:23:44.239441 containerd[1462]: time="2025-01-17T12:23:44.239422916Z" level=info msg="Start recovering state" Jan 17 12:23:44.239461 containerd[1462]: time="2025-01-17T12:23:44.239448624Z" level=info msg=serving... address=/run/containerd/containerd.sock Jan 17 12:23:44.241820 containerd[1462]: time="2025-01-17T12:23:44.241793321Z" level=info msg="Start event monitor" Jan 17 12:23:44.241870 containerd[1462]: time="2025-01-17T12:23:44.241820683Z" level=info msg="Start snapshots syncer" Jan 17 12:23:44.241870 containerd[1462]: time="2025-01-17T12:23:44.241832956Z" level=info msg="Start cni network conf syncer for default" Jan 17 12:23:44.241870 containerd[1462]: time="2025-01-17T12:23:44.241844207Z" level=info msg="Start streaming server" Jan 17 12:23:44.241923 containerd[1462]: time="2025-01-17T12:23:44.241909299Z" level=info msg="containerd successfully booted in 0.041112s" Jan 17 12:23:44.244488 systemd[1]: Starting issuegen.service - Generate /run/issue... Jan 17 12:23:44.245628 systemd[1]: Started containerd.service - containerd container runtime. Jan 17 12:23:44.253302 systemd[1]: issuegen.service: Deactivated successfully. Jan 17 12:23:44.253557 systemd[1]: Finished issuegen.service - Generate /run/issue. Jan 17 12:23:44.256497 systemd[1]: Starting systemd-user-sessions.service - Permit User Sessions... Jan 17 12:23:44.272522 systemd[1]: Finished systemd-user-sessions.service - Permit User Sessions. Jan 17 12:23:44.276277 systemd[1]: Started getty@tty1.service - Getty on tty1. Jan 17 12:23:44.278424 systemd[1]: Started serial-getty@ttyS0.service - Serial Getty on ttyS0. Jan 17 12:23:44.279706 systemd[1]: Reached target getty.target - Login Prompts. Jan 17 12:23:45.219970 systemd-networkd[1390]: eth0: Gained IPv6LL Jan 17 12:23:45.223458 systemd[1]: Finished systemd-networkd-wait-online.service - Wait for Network to be Configured. 
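containerd comes up in about 41 ms, but its CRI plugin logs a CNI load error because /etc/cni/net.d contains no network config yet; that is expected on first boot, before a network add-on (Calico, later in this log) installs its conflist. A sketch of the failing check, assuming the usual file extensions (the real loader, libcni, also parses and validates each file):

```go
// Reproduce the check behind "no network config found in /etc/cni/net.d":
// look for CNI network configuration files in the conf dir.
package main

import (
	"fmt"
	"os"
	"path/filepath"
)

func main() {
	dir := "/etc/cni/net.d"
	var found []string
	for _, pat := range []string{"*.conf", "*.conflist", "*.json"} {
		m, _ := filepath.Glob(filepath.Join(dir, pat)) // patterns are constant, Glob can't fail here
		found = append(found, m...)
	}
	if len(found) == 0 {
		fmt.Printf("no network config found in %s: cni plugin not initialized\n", dir)
		os.Exit(1)
	}
	for _, f := range found {
		fmt.Println("cni config:", f)
	}
}
```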
Jan 17 12:23:45.225392 systemd[1]: Reached target network-online.target - Network is Online. Jan 17 12:23:45.239083 systemd[1]: Starting coreos-metadata.service - QEMU metadata agent... Jan 17 12:23:45.241909 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent... Jan 17 12:23:45.244180 systemd[1]: Starting nvidia.service - NVIDIA Configure Service... Jan 17 12:23:45.272026 systemd[1]: Finished nvidia.service - NVIDIA Configure Service. Jan 17 12:23:45.273798 systemd[1]: coreos-metadata.service: Deactivated successfully. Jan 17 12:23:45.274003 systemd[1]: Finished coreos-metadata.service - QEMU metadata agent. Jan 17 12:23:45.277135 systemd[1]: packet-phone-home.service - Report Success to Packet was skipped because no trigger condition checks were met. Jan 17 12:23:45.858947 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent. Jan 17 12:23:45.860855 systemd[1]: Reached target multi-user.target - Multi-User System. Jan 17 12:23:45.873351 systemd[1]: Startup finished in 757ms (kernel) + 4.802s (initrd) + 4.174s (userspace) = 9.734s. Jan 17 12:23:45.882113 (kubelet)[1544]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS, KUBELET_KUBEADM_ARGS Jan 17 12:23:46.367971 kubelet[1544]: E0117 12:23:46.367878 1544 run.go:74] "command failed" err="failed to load kubelet config file, path: /var/lib/kubelet/config.yaml, error: failed to load Kubelet config file /var/lib/kubelet/config.yaml, error failed to read kubelet config file \"/var/lib/kubelet/config.yaml\", error: open /var/lib/kubelet/config.yaml: no such file or directory" Jan 17 12:23:46.372325 systemd[1]: kubelet.service: Main process exited, code=exited, status=1/FAILURE Jan 17 12:23:46.372564 systemd[1]: kubelet.service: Failed with result 'exit-code'. Jan 17 12:23:53.782872 systemd[1]: Created slice system-sshd.slice - Slice /system/sshd. Jan 17 12:23:53.784049 systemd[1]: Started sshd@0-10.0.0.160:22-10.0.0.1:56198.service - OpenSSH per-connection server daemon (10.0.0.1:56198). Jan 17 12:23:53.830974 sshd[1558]: Accepted publickey for core from 10.0.0.1 port 56198 ssh2: RSA SHA256:SlDwm7Or6/NzPo2pwmoc3QpDgnxlCMQ0MaN4S0v55gM Jan 17 12:23:53.832871 sshd[1558]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Jan 17 12:23:53.841723 systemd-logind[1448]: New session 1 of user core. Jan 17 12:23:53.843269 systemd[1]: Created slice user-500.slice - User Slice of UID 500. Jan 17 12:23:53.860963 systemd[1]: Starting user-runtime-dir@500.service - User Runtime Directory /run/user/500... Jan 17 12:23:53.872530 systemd[1]: Finished user-runtime-dir@500.service - User Runtime Directory /run/user/500. Jan 17 12:23:53.874486 systemd[1]: Starting user@500.service - User Manager for UID 500... Jan 17 12:23:53.882016 (systemd)[1562]: pam_unix(systemd-user:session): session opened for user core(uid=500) by (uid=0) Jan 17 12:23:53.990952 systemd[1562]: Queued start job for default target default.target. Jan 17 12:23:53.998974 systemd[1562]: Created slice app.slice - User Application Slice. Jan 17 12:23:53.998998 systemd[1562]: Reached target paths.target - Paths. Jan 17 12:23:53.999012 systemd[1562]: Reached target timers.target - Timers. Jan 17 12:23:54.000585 systemd[1562]: Starting dbus.socket - D-Bus User Message Bus Socket... Jan 17 12:23:54.011964 systemd[1562]: Listening on dbus.socket - D-Bus User Message Bus Socket. Jan 17 12:23:54.012092 systemd[1562]: Reached target sockets.target - Sockets. 
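The first kubelet start dies immediately: /var/lib/kubelet/config.yaml does not exist, which is the normal state on a node that has not yet run kubeadm init/join (kubeadm writes that file). A sketch of the failing check; the KubeletConfiguration stanza in the comment is an illustrative assumption, not what this host later received:

```go
// The kubelet failure above is the expected pre-kubeadm state.
// An illustrative minimal config kubeadm would later place there
// (field values are assumptions):
//
//   apiVersion: kubelet.config.k8s.io/v1beta1
//   kind: KubeletConfiguration
//   cgroupDriver: systemd
package main

import (
	"errors"
	"fmt"
	"os"
)

func main() {
	const path = "/var/lib/kubelet/config.yaml"
	if _, err := os.Stat(path); errors.Is(err, os.ErrNotExist) {
		fmt.Printf("failed to load Kubelet config file %s: %v\n", path, err)
		os.Exit(1)
	}
	fmt.Println("kubelet config present:", path)
}
```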
Jan 17 12:23:54.012110 systemd[1562]: Reached target basic.target - Basic System. Jan 17 12:23:54.012147 systemd[1562]: Reached target default.target - Main User Target. Jan 17 12:23:54.012181 systemd[1562]: Startup finished in 124ms. Jan 17 12:23:54.012603 systemd[1]: Started user@500.service - User Manager for UID 500. Jan 17 12:23:54.014043 systemd[1]: Started session-1.scope - Session 1 of User core. Jan 17 12:23:54.074120 systemd[1]: Started sshd@1-10.0.0.160:22-10.0.0.1:56200.service - OpenSSH per-connection server daemon (10.0.0.1:56200). Jan 17 12:23:54.111603 sshd[1573]: Accepted publickey for core from 10.0.0.1 port 56200 ssh2: RSA SHA256:SlDwm7Or6/NzPo2pwmoc3QpDgnxlCMQ0MaN4S0v55gM Jan 17 12:23:54.113160 sshd[1573]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Jan 17 12:23:54.117082 systemd-logind[1448]: New session 2 of user core. Jan 17 12:23:54.126834 systemd[1]: Started session-2.scope - Session 2 of User core. Jan 17 12:23:54.179821 sshd[1573]: pam_unix(sshd:session): session closed for user core Jan 17 12:23:54.191105 systemd[1]: sshd@1-10.0.0.160:22-10.0.0.1:56200.service: Deactivated successfully. Jan 17 12:23:54.192591 systemd[1]: session-2.scope: Deactivated successfully. Jan 17 12:23:54.193884 systemd-logind[1448]: Session 2 logged out. Waiting for processes to exit. Jan 17 12:23:54.195037 systemd[1]: Started sshd@2-10.0.0.160:22-10.0.0.1:56210.service - OpenSSH per-connection server daemon (10.0.0.1:56210). Jan 17 12:23:54.195639 systemd-logind[1448]: Removed session 2. Jan 17 12:23:54.230774 sshd[1580]: Accepted publickey for core from 10.0.0.1 port 56210 ssh2: RSA SHA256:SlDwm7Or6/NzPo2pwmoc3QpDgnxlCMQ0MaN4S0v55gM Jan 17 12:23:54.232156 sshd[1580]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Jan 17 12:23:54.235916 systemd-logind[1448]: New session 3 of user core. Jan 17 12:23:54.245850 systemd[1]: Started session-3.scope - Session 3 of User core. Jan 17 12:23:54.294961 sshd[1580]: pam_unix(sshd:session): session closed for user core Jan 17 12:23:54.304539 systemd[1]: sshd@2-10.0.0.160:22-10.0.0.1:56210.service: Deactivated successfully. Jan 17 12:23:54.306230 systemd[1]: session-3.scope: Deactivated successfully. Jan 17 12:23:54.307767 systemd-logind[1448]: Session 3 logged out. Waiting for processes to exit. Jan 17 12:23:54.316961 systemd[1]: Started sshd@3-10.0.0.160:22-10.0.0.1:56222.service - OpenSSH per-connection server daemon (10.0.0.1:56222). Jan 17 12:23:54.317851 systemd-logind[1448]: Removed session 3. Jan 17 12:23:54.349284 sshd[1587]: Accepted publickey for core from 10.0.0.1 port 56222 ssh2: RSA SHA256:SlDwm7Or6/NzPo2pwmoc3QpDgnxlCMQ0MaN4S0v55gM Jan 17 12:23:54.350913 sshd[1587]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Jan 17 12:23:54.354799 systemd-logind[1448]: New session 4 of user core. Jan 17 12:23:54.364820 systemd[1]: Started session-4.scope - Session 4 of User core. Jan 17 12:23:54.418600 sshd[1587]: pam_unix(sshd:session): session closed for user core Jan 17 12:23:54.432559 systemd[1]: sshd@3-10.0.0.160:22-10.0.0.1:56222.service: Deactivated successfully. Jan 17 12:23:54.434356 systemd[1]: session-4.scope: Deactivated successfully. Jan 17 12:23:54.435968 systemd-logind[1448]: Session 4 logged out. Waiting for processes to exit. Jan 17 12:23:54.444932 systemd[1]: Started sshd@4-10.0.0.160:22-10.0.0.1:56238.service - OpenSSH per-connection server daemon (10.0.0.1:56238). Jan 17 12:23:54.445813 systemd-logind[1448]: Removed session 4. 
Jan 17 12:23:54.475944 sshd[1594]: Accepted publickey for core from 10.0.0.1 port 56238 ssh2: RSA SHA256:SlDwm7Or6/NzPo2pwmoc3QpDgnxlCMQ0MaN4S0v55gM Jan 17 12:23:54.477334 sshd[1594]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Jan 17 12:23:54.480897 systemd-logind[1448]: New session 5 of user core. Jan 17 12:23:54.497822 systemd[1]: Started session-5.scope - Session 5 of User core. Jan 17 12:23:54.555779 sudo[1597]: core : PWD=/home/core ; USER=root ; COMMAND=/usr/sbin/setenforce 1 Jan 17 12:23:54.556098 sudo[1597]: pam_unix(sudo:session): session opened for user root(uid=0) by core(uid=500) Jan 17 12:23:54.571710 sudo[1597]: pam_unix(sudo:session): session closed for user root Jan 17 12:23:54.573588 sshd[1594]: pam_unix(sshd:session): session closed for user core Jan 17 12:23:54.590746 systemd[1]: sshd@4-10.0.0.160:22-10.0.0.1:56238.service: Deactivated successfully. Jan 17 12:23:54.592638 systemd[1]: session-5.scope: Deactivated successfully. Jan 17 12:23:54.594406 systemd-logind[1448]: Session 5 logged out. Waiting for processes to exit. Jan 17 12:23:54.595748 systemd[1]: Started sshd@5-10.0.0.160:22-10.0.0.1:56250.service - OpenSSH per-connection server daemon (10.0.0.1:56250). Jan 17 12:23:54.596515 systemd-logind[1448]: Removed session 5. Jan 17 12:23:54.643749 sshd[1602]: Accepted publickey for core from 10.0.0.1 port 56250 ssh2: RSA SHA256:SlDwm7Or6/NzPo2pwmoc3QpDgnxlCMQ0MaN4S0v55gM Jan 17 12:23:54.645073 sshd[1602]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Jan 17 12:23:54.649474 systemd-logind[1448]: New session 6 of user core. Jan 17 12:23:54.658845 systemd[1]: Started session-6.scope - Session 6 of User core. Jan 17 12:23:54.714439 sudo[1606]: core : PWD=/home/core ; USER=root ; COMMAND=/usr/bin/rm -rf /etc/audit/rules.d/80-selinux.rules /etc/audit/rules.d/99-default.rules Jan 17 12:23:54.714866 sudo[1606]: pam_unix(sudo:session): session opened for user root(uid=0) by core(uid=500) Jan 17 12:23:54.719419 sudo[1606]: pam_unix(sudo:session): session closed for user root Jan 17 12:23:54.726308 sudo[1605]: core : PWD=/home/core ; USER=root ; COMMAND=/usr/bin/systemctl restart audit-rules Jan 17 12:23:54.726738 sudo[1605]: pam_unix(sudo:session): session opened for user root(uid=0) by core(uid=500) Jan 17 12:23:54.758026 systemd[1]: Stopping audit-rules.service - Load Security Auditing Rules... Jan 17 12:23:54.759786 auditctl[1609]: No rules Jan 17 12:23:54.760220 systemd[1]: audit-rules.service: Deactivated successfully. Jan 17 12:23:54.760450 systemd[1]: Stopped audit-rules.service - Load Security Auditing Rules. Jan 17 12:23:54.763092 systemd[1]: Starting audit-rules.service - Load Security Auditing Rules... Jan 17 12:23:54.794752 augenrules[1627]: No rules Jan 17 12:23:54.796573 systemd[1]: Finished audit-rules.service - Load Security Auditing Rules. Jan 17 12:23:54.798347 sudo[1605]: pam_unix(sudo:session): session closed for user root Jan 17 12:23:54.800291 sshd[1602]: pam_unix(sshd:session): session closed for user core Jan 17 12:23:54.819892 systemd[1]: sshd@5-10.0.0.160:22-10.0.0.1:56250.service: Deactivated successfully. Jan 17 12:23:54.821798 systemd[1]: session-6.scope: Deactivated successfully. Jan 17 12:23:54.823504 systemd-logind[1448]: Session 6 logged out. Waiting for processes to exit. Jan 17 12:23:54.832972 systemd[1]: Started sshd@6-10.0.0.160:22-10.0.0.1:56262.service - OpenSSH per-connection server daemon (10.0.0.1:56262). Jan 17 12:23:54.834028 systemd-logind[1448]: Removed session 6. 
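The session-6 sequence above is the usual way to change auditd rules atomically: stop audit-rules.service, delete the generated files under /etc/audit/rules.d, then restart, after which both augenrules and auditctl report "No rules". A sketch of verifying the result, assuming the audit userspace tools are installed (needs root):

```go
// Query the kernel's active audit rule list, as the restarted
// audit-rules.service does; with the rule files removed above,
// auditctl typically prints "No rules".
package main

import (
	"fmt"
	"os"
	"os/exec"
)

func main() {
	out, err := exec.Command("auditctl", "-l").CombinedOutput()
	if err != nil {
		fmt.Fprintln(os.Stderr, err)
		os.Exit(1)
	}
	fmt.Print(string(out))
}
```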
Jan 17 12:23:54.869018 sshd[1635]: Accepted publickey for core from 10.0.0.1 port 56262 ssh2: RSA SHA256:SlDwm7Or6/NzPo2pwmoc3QpDgnxlCMQ0MaN4S0v55gM Jan 17 12:23:54.870567 sshd[1635]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Jan 17 12:23:54.874371 systemd-logind[1448]: New session 7 of user core. Jan 17 12:23:54.885960 systemd[1]: Started session-7.scope - Session 7 of User core. Jan 17 12:23:54.939257 sudo[1638]: core : PWD=/home/core ; USER=root ; COMMAND=/home/core/install.sh Jan 17 12:23:54.939585 sudo[1638]: pam_unix(sudo:session): session opened for user root(uid=0) by core(uid=500) Jan 17 12:23:54.964072 systemd[1]: Starting coreos-metadata.service - QEMU metadata agent... Jan 17 12:23:54.985244 systemd[1]: coreos-metadata.service: Deactivated successfully. Jan 17 12:23:54.985505 systemd[1]: Finished coreos-metadata.service - QEMU metadata agent. Jan 17 12:23:55.479456 systemd[1]: Stopped kubelet.service - kubelet: The Kubernetes Node Agent. Jan 17 12:23:55.491927 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent... Jan 17 12:23:55.510675 systemd[1]: Reloading requested from client PID 1686 ('systemctl') (unit session-7.scope)... Jan 17 12:23:55.510690 systemd[1]: Reloading... Jan 17 12:23:55.571868 zram_generator::config[1725]: No configuration found. Jan 17 12:23:56.444464 systemd[1]: /usr/lib/systemd/system/docker.socket:6: ListenStream= references a path below legacy directory /var/run/, updating /var/run/docker.sock → /run/docker.sock; please update the unit file accordingly. Jan 17 12:23:56.521383 systemd[1]: Reloading finished in 1010 ms. Jan 17 12:23:56.577093 systemd[1]: Stopping kubelet.service - kubelet: The Kubernetes Node Agent... Jan 17 12:23:56.580775 systemd[1]: kubelet.service: Deactivated successfully. Jan 17 12:23:56.581132 systemd[1]: Stopped kubelet.service - kubelet: The Kubernetes Node Agent. Jan 17 12:23:56.583494 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent... Jan 17 12:23:56.727627 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent. Jan 17 12:23:56.731992 (kubelet)[1774]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS Jan 17 12:23:56.777931 kubelet[1774]: Flag --container-runtime-endpoint has been deprecated, This parameter should be set via the config file specified by the Kubelet's --config flag. See https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/ for more information. Jan 17 12:23:56.777931 kubelet[1774]: Flag --pod-infra-container-image has been deprecated, will be removed in a future release. Image garbage collector will get sandbox image information from CRI. Jan 17 12:23:56.777931 kubelet[1774]: Flag --volume-plugin-dir has been deprecated, This parameter should be set via the config file specified by the Kubelet's --config flag. See https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/ for more information. 
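The docker.socket warning during the reload is about a compatibility path: /var/run is a symlink to /run on current systems, so systemd rewrites ListenStream=/var/run/docker.sock to /run/docker.sock and asks for the unit file to be updated. A quick check of the alias:

```go
// Show where the legacy /var/run path actually points.
package main

import (
	"fmt"
	"os"
)

func main() {
	target, err := os.Readlink("/var/run")
	if err != nil {
		fmt.Println("/var/run is not a symlink:", err)
		return
	}
	fmt.Println("/var/run ->", target) // typically "../run" or "/run"
}
```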
Jan 17 12:23:56.778364 kubelet[1774]: I0117 12:23:56.777966 1774 server.go:204] "--pod-infra-container-image will not be pruned by the image garbage collector in kubelet and should also be set in the remote runtime" Jan 17 12:23:57.326565 kubelet[1774]: I0117 12:23:57.326528 1774 server.go:487] "Kubelet version" kubeletVersion="v1.29.2" Jan 17 12:23:57.326565 kubelet[1774]: I0117 12:23:57.326559 1774 server.go:489] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK="" Jan 17 12:23:57.326801 kubelet[1774]: I0117 12:23:57.326784 1774 server.go:919] "Client rotation is on, will bootstrap in background" Jan 17 12:23:57.341036 kubelet[1774]: I0117 12:23:57.340999 1774 dynamic_cafile_content.go:157] "Starting controller" name="client-ca-bundle::/etc/kubernetes/pki/ca.crt" Jan 17 12:23:57.354840 kubelet[1774]: I0117 12:23:57.354808 1774 server.go:745] "--cgroups-per-qos enabled, but --cgroup-root was not specified. defaulting to /" Jan 17 12:23:57.355083 kubelet[1774]: I0117 12:23:57.355057 1774 container_manager_linux.go:265] "Container manager verified user specified cgroup-root exists" cgroupRoot=[] Jan 17 12:23:57.355243 kubelet[1774]: I0117 12:23:57.355216 1774 container_manager_linux.go:270] "Creating Container Manager object based on Node Config" nodeConfig={"RuntimeCgroupsName":"","SystemCgroupsName":"","KubeletCgroupsName":"","KubeletOOMScoreAdj":-999,"ContainerRuntime":"","CgroupsPerQOS":true,"CgroupRoot":"/","CgroupDriver":"systemd","KubeletRootDir":"/var/lib/kubelet","ProtectKernelDefaults":false,"KubeReservedCgroupName":"","SystemReservedCgroupName":"","ReservedSystemCPUs":{},"EnforceNodeAllocatable":{"pods":{}},"KubeReserved":null,"SystemReserved":null,"HardEvictionThresholds":[{"Signal":"nodefs.inodesFree","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.05},"GracePeriod":0,"MinReclaim":null},{"Signal":"imagefs.available","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.15},"GracePeriod":0,"MinReclaim":null},{"Signal":"memory.available","Operator":"LessThan","Value":{"Quantity":"100Mi","Percentage":0},"GracePeriod":0,"MinReclaim":null},{"Signal":"nodefs.available","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.1},"GracePeriod":0,"MinReclaim":null}],"QOSReserved":{},"CPUManagerPolicy":"none","CPUManagerPolicyOptions":null,"TopologyManagerScope":"container","CPUManagerReconcilePeriod":10000000000,"ExperimentalMemoryManagerPolicy":"None","ExperimentalMemoryManagerReservedMemory":null,"PodPidsLimit":-1,"EnforceCPULimits":true,"CPUCFSQuotaPeriod":100000000,"TopologyManagerPolicy":"none","TopologyManagerPolicyOptions":null} Jan 17 12:23:57.355324 kubelet[1774]: I0117 12:23:57.355245 1774 topology_manager.go:138] "Creating topology manager with none policy" Jan 17 12:23:57.355324 kubelet[1774]: I0117 12:23:57.355256 1774 container_manager_linux.go:301] "Creating device plugin manager" Jan 17 12:23:57.355386 kubelet[1774]: I0117 12:23:57.355368 1774 state_mem.go:36] "Initialized new in-memory state store" Jan 17 12:23:57.355475 kubelet[1774]: I0117 12:23:57.355459 1774 kubelet.go:396] "Attempting to sync node with API server" Jan 17 12:23:57.355510 kubelet[1774]: I0117 12:23:57.355477 1774 kubelet.go:301] "Adding static pod path" path="/etc/kubernetes/manifests" Jan 17 12:23:57.355510 kubelet[1774]: I0117 12:23:57.355501 1774 kubelet.go:312] "Adding apiserver pod source" Jan 17 12:23:57.355551 kubelet[1774]: I0117 12:23:57.355515 1774 apiserver.go:42] "Waiting for node sync before watching apiserver pods" Jan 17 12:23:57.355643 kubelet[1774]: E0117 
12:23:57.355613 1774 file.go:98] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Jan 17 12:23:57.355727 kubelet[1774]: E0117 12:23:57.355673 1774 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Jan 17 12:23:57.356607 kubelet[1774]: I0117 12:23:57.356568 1774 kuberuntime_manager.go:258] "Container runtime initialized" containerRuntime="containerd" version="v1.7.21" apiVersion="v1" Jan 17 12:23:57.358876 kubelet[1774]: I0117 12:23:57.358843 1774 kubelet.go:809] "Not starting ClusterTrustBundle informer because we are in static kubelet mode" Jan 17 12:23:57.359566 kubelet[1774]: W0117 12:23:57.359548 1774 reflector.go:539] vendor/k8s.io/client-go/informers/factory.go:159: failed to list *v1.Service: services is forbidden: User "system:anonymous" cannot list resource "services" in API group "" at the cluster scope Jan 17 12:23:57.359615 kubelet[1774]: E0117 12:23:57.359590 1774 reflector.go:147] vendor/k8s.io/client-go/informers/factory.go:159: Failed to watch *v1.Service: failed to list *v1.Service: services is forbidden: User "system:anonymous" cannot list resource "services" in API group "" at the cluster scope Jan 17 12:23:57.359615 kubelet[1774]: W0117 12:23:57.359549 1774 reflector.go:539] vendor/k8s.io/client-go/informers/factory.go:159: failed to list *v1.Node: nodes "10.0.0.160" is forbidden: User "system:anonymous" cannot list resource "nodes" in API group "" at the cluster scope Jan 17 12:23:57.359615 kubelet[1774]: E0117 12:23:57.359609 1774 reflector.go:147] vendor/k8s.io/client-go/informers/factory.go:159: Failed to watch *v1.Node: failed to list *v1.Node: nodes "10.0.0.160" is forbidden: User "system:anonymous" cannot list resource "nodes" in API group "" at the cluster scope Jan 17 12:23:57.359897 kubelet[1774]: W0117 12:23:57.359872 1774 probe.go:268] Flexvolume plugin directory at /opt/libexec/kubernetes/kubelet-plugins/volume/exec/ does not exist. Recreating. 
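All of the "is forbidden ... system:anonymous" errors here and below share one cause: the kubelet noted earlier that client rotation is on and it will bootstrap in the background, so until TLS bootstrap hands it a client certificate its API requests are unauthenticated and RBAC evaluates them as system:anonymous. The failures map onto missing verbs per resource; a sketch of that mapping as implied by the log (illustrative data only, not a policy to grant — the fix is letting bootstrap complete, not giving anonymous users these rights):

```go
// Summarize the RBAC denials logged by the kubelet's reflectors and
// lease controller before its bootstrap certificate is issued.
package main

import "fmt"

type rule struct {
	apiGroup, resource string
	verbs              []string
}

func main() {
	denied := []rule{
		{"", "services", []string{"list", "watch"}},
		{"", "nodes", []string{"list", "watch"}},
		{"storage.k8s.io", "csidrivers", []string{"list", "watch"}},
		{"coordination.k8s.io", "leases", []string{"get"}},
	}
	for _, r := range denied {
		fmt.Printf("user system:anonymous lacks %v on %s (group %q)\n",
			r.verbs, r.resource, r.apiGroup)
	}
}
```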
Jan 17 12:23:57.360492 kubelet[1774]: I0117 12:23:57.360470 1774 server.go:1256] "Started kubelet" Jan 17 12:23:57.361095 kubelet[1774]: I0117 12:23:57.361041 1774 server.go:162] "Starting to listen" address="0.0.0.0" port=10250 Jan 17 12:23:57.361131 kubelet[1774]: I0117 12:23:57.361097 1774 ratelimit.go:55] "Setting rate limiting for endpoint" service="podresources" qps=100 burstTokens=10 Jan 17 12:23:57.361906 kubelet[1774]: I0117 12:23:57.361345 1774 server.go:233] "Starting to serve the podresources API" endpoint="unix:/var/lib/kubelet/pod-resources/kubelet.sock" Jan 17 12:23:57.361906 kubelet[1774]: I0117 12:23:57.361579 1774 fs_resource_analyzer.go:67] "Starting FS ResourceAnalyzer" Jan 17 12:23:57.361906 kubelet[1774]: I0117 12:23:57.361807 1774 server.go:461] "Adding debug handlers to kubelet server" Jan 17 12:23:57.364192 kubelet[1774]: E0117 12:23:57.363575 1774 kubelet_node_status.go:462] "Error getting the current node from lister" err="node \"10.0.0.160\" not found" Jan 17 12:23:57.364192 kubelet[1774]: I0117 12:23:57.363607 1774 volume_manager.go:291] "Starting Kubelet Volume Manager" Jan 17 12:23:57.364192 kubelet[1774]: I0117 12:23:57.363666 1774 desired_state_of_world_populator.go:151] "Desired state populator starts to run" Jan 17 12:23:57.364192 kubelet[1774]: I0117 12:23:57.363754 1774 reconciler_new.go:29] "Reconciler: start to sync state" Jan 17 12:23:57.365229 kubelet[1774]: I0117 12:23:57.365210 1774 factory.go:221] Registration of the systemd container factory successfully Jan 17 12:23:57.365319 kubelet[1774]: I0117 12:23:57.365292 1774 factory.go:219] Registration of the crio container factory failed: Get "http://%2Fvar%2Frun%2Fcrio%2Fcrio.sock/info": dial unix /var/run/crio/crio.sock: connect: no such file or directory Jan 17 12:23:57.365845 kubelet[1774]: E0117 12:23:57.365821 1774 kubelet.go:1462] "Image garbage collection failed once. 
Stats initialization may not have completed yet" err="invalid capacity 0 on image filesystem" Jan 17 12:23:57.367347 kubelet[1774]: I0117 12:23:57.367321 1774 factory.go:221] Registration of the containerd container factory successfully Jan 17 12:23:57.367568 kubelet[1774]: E0117 12:23:57.367535 1774 controller.go:145] "Failed to ensure lease exists, will retry" err="leases.coordination.k8s.io \"10.0.0.160\" is forbidden: User \"system:anonymous\" cannot get resource \"leases\" in API group \"coordination.k8s.io\" in the namespace \"kube-node-lease\"" interval="200ms" Jan 17 12:23:57.367641 kubelet[1774]: W0117 12:23:57.367621 1774 reflector.go:539] vendor/k8s.io/client-go/informers/factory.go:159: failed to list *v1.CSIDriver: csidrivers.storage.k8s.io is forbidden: User "system:anonymous" cannot list resource "csidrivers" in API group "storage.k8s.io" at the cluster scope Jan 17 12:23:57.367641 kubelet[1774]: E0117 12:23:57.367642 1774 reflector.go:147] vendor/k8s.io/client-go/informers/factory.go:159: Failed to watch *v1.CSIDriver: failed to list *v1.CSIDriver: csidrivers.storage.k8s.io is forbidden: User "system:anonymous" cannot list resource "csidrivers" in API group "storage.k8s.io" at the cluster scope Jan 17 12:23:57.369148 kubelet[1774]: E0117 12:23:57.369109 1774 event.go:346] "Server rejected event (will not retry!)" err="events is forbidden: User \"system:anonymous\" cannot create resource \"events\" in API group \"\" in the namespace \"default\"" event="&Event{ObjectMeta:{10.0.0.160.181b7a60c74a60ad default 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Node,Namespace:,Name:10.0.0.160,UID:10.0.0.160,APIVersion:,ResourceVersion:,FieldPath:,},Reason:Starting,Message:Starting kubelet.,Source:EventSource{Component:kubelet,Host:10.0.0.160,},FirstTimestamp:2025-01-17 12:23:57.360447661 +0000 UTC m=+0.624403460,LastTimestamp:2025-01-17 12:23:57.360447661 +0000 UTC m=+0.624403460,Count:1,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:10.0.0.160,}" Jan 17 12:23:57.379341 kubelet[1774]: E0117 12:23:57.379287 1774 event.go:346] "Server rejected event (will not retry!)" err="events is forbidden: User \"system:anonymous\" cannot create resource \"events\" in API group \"\" in the namespace \"default\"" event="&Event{ObjectMeta:{10.0.0.160.181b7a60c79c3ac7 default 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Node,Namespace:,Name:10.0.0.160,UID:10.0.0.160,APIVersion:,ResourceVersion:,FieldPath:,},Reason:InvalidDiskCapacity,Message:invalid capacity 0 on image filesystem,Source:EventSource{Component:kubelet,Host:10.0.0.160,},FirstTimestamp:2025-01-17 12:23:57.365811911 +0000 UTC m=+0.629767700,LastTimestamp:2025-01-17 12:23:57.365811911 +0000 UTC m=+0.629767700,Count:1,Type:Warning,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:10.0.0.160,}" Jan 17 12:23:57.382859 kubelet[1774]: I0117 12:23:57.382824 1774 cpu_manager.go:214] "Starting CPU manager" policy="none" Jan 17 12:23:57.382859 kubelet[1774]: I0117 12:23:57.382856 1774 cpu_manager.go:215] "Reconciling" reconcilePeriod="10s" Jan 17 12:23:57.382964 kubelet[1774]: I0117 12:23:57.382873 1774 state_mem.go:36] "Initialized new in-memory state store" Jan 17 12:23:57.383761 kubelet[1774]: E0117 12:23:57.383722 1774 event.go:346] "Server rejected event (will not retry!)" err="events is forbidden: User 
\"system:anonymous\" cannot create resource \"events\" in API group \"\" in the namespace \"default\"" event="&Event{ObjectMeta:{10.0.0.160.181b7a60c86c672e default 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Node,Namespace:,Name:10.0.0.160,UID:10.0.0.160,APIVersion:,ResourceVersion:,FieldPath:,},Reason:NodeHasSufficientMemory,Message:Node 10.0.0.160 status is now: NodeHasSufficientMemory,Source:EventSource{Component:kubelet,Host:10.0.0.160,},FirstTimestamp:2025-01-17 12:23:57.379454766 +0000 UTC m=+0.643410565,LastTimestamp:2025-01-17 12:23:57.379454766 +0000 UTC m=+0.643410565,Count:1,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:10.0.0.160,}" Jan 17 12:23:57.387802 kubelet[1774]: E0117 12:23:57.387774 1774 event.go:346] "Server rejected event (will not retry!)" err="events is forbidden: User \"system:anonymous\" cannot create resource \"events\" in API group \"\" in the namespace \"default\"" event="&Event{ObjectMeta:{10.0.0.160.181b7a60c86c749a default 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Node,Namespace:,Name:10.0.0.160,UID:10.0.0.160,APIVersion:,ResourceVersion:,FieldPath:,},Reason:NodeHasNoDiskPressure,Message:Node 10.0.0.160 status is now: NodeHasNoDiskPressure,Source:EventSource{Component:kubelet,Host:10.0.0.160,},FirstTimestamp:2025-01-17 12:23:57.379458202 +0000 UTC m=+0.643414002,LastTimestamp:2025-01-17 12:23:57.379458202 +0000 UTC m=+0.643414002,Count:1,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:10.0.0.160,}" Jan 17 12:23:57.391071 kubelet[1774]: E0117 12:23:57.391032 1774 event.go:346] "Server rejected event (will not retry!)" err="events is forbidden: User \"system:anonymous\" cannot create resource \"events\" in API group \"\" in the namespace \"default\"" event="&Event{ObjectMeta:{10.0.0.160.181b7a60c86c8094 default 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Node,Namespace:,Name:10.0.0.160,UID:10.0.0.160,APIVersion:,ResourceVersion:,FieldPath:,},Reason:NodeHasSufficientPID,Message:Node 10.0.0.160 status is now: NodeHasSufficientPID,Source:EventSource{Component:kubelet,Host:10.0.0.160,},FirstTimestamp:2025-01-17 12:23:57.379461268 +0000 UTC m=+0.643417067,LastTimestamp:2025-01-17 12:23:57.379461268 +0000 UTC m=+0.643417067,Count:1,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:10.0.0.160,}" Jan 17 12:23:57.464971 kubelet[1774]: I0117 12:23:57.464943 1774 kubelet_node_status.go:73] "Attempting to register node" node="10.0.0.160" Jan 17 12:23:57.468574 kubelet[1774]: E0117 12:23:57.468552 1774 kubelet_node_status.go:96] "Unable to register node with API server" err="nodes is forbidden: User \"system:anonymous\" cannot create resource \"nodes\" in API group \"\" at the cluster scope" node="10.0.0.160" Jan 17 12:23:57.468672 kubelet[1774]: E0117 12:23:57.468616 1774 event.go:346] "Server rejected event (will not retry!)" err="events \"10.0.0.160.181b7a60c86c672e\" is forbidden: User \"system:anonymous\" cannot patch resource \"events\" in API group \"\" in the namespace \"default\"" event="&Event{ObjectMeta:{10.0.0.160.181b7a60c86c672e default 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] 
[]},InvolvedObject:ObjectReference{Kind:Node,Namespace:,Name:10.0.0.160,UID:10.0.0.160,APIVersion:,ResourceVersion:,FieldPath:,},Reason:NodeHasSufficientMemory,Message:Node 10.0.0.160 status is now: NodeHasSufficientMemory,Source:EventSource{Component:kubelet,Host:10.0.0.160,},FirstTimestamp:2025-01-17 12:23:57.379454766 +0000 UTC m=+0.643410565,LastTimestamp:2025-01-17 12:23:57.464895849 +0000 UTC m=+0.728851648,Count:2,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:10.0.0.160,}" Jan 17 12:23:57.471669 kubelet[1774]: E0117 12:23:57.471650 1774 event.go:346] "Server rejected event (will not retry!)" err="events \"10.0.0.160.181b7a60c86c749a\" is forbidden: User \"system:anonymous\" cannot patch resource \"events\" in API group \"\" in the namespace \"default\"" event="&Event{ObjectMeta:{10.0.0.160.181b7a60c86c749a default 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Node,Namespace:,Name:10.0.0.160,UID:10.0.0.160,APIVersion:,ResourceVersion:,FieldPath:,},Reason:NodeHasNoDiskPressure,Message:Node 10.0.0.160 status is now: NodeHasNoDiskPressure,Source:EventSource{Component:kubelet,Host:10.0.0.160,},FirstTimestamp:2025-01-17 12:23:57.379458202 +0000 UTC m=+0.643414002,LastTimestamp:2025-01-17 12:23:57.464902441 +0000 UTC m=+0.728858240,Count:2,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:10.0.0.160,}" Jan 17 12:23:57.475006 kubelet[1774]: E0117 12:23:57.474983 1774 event.go:346] "Server rejected event (will not retry!)" err="events \"10.0.0.160.181b7a60c86c8094\" is forbidden: User \"system:anonymous\" cannot patch resource \"events\" in API group \"\" in the namespace \"default\"" event="&Event{ObjectMeta:{10.0.0.160.181b7a60c86c8094 default 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Node,Namespace:,Name:10.0.0.160,UID:10.0.0.160,APIVersion:,ResourceVersion:,FieldPath:,},Reason:NodeHasSufficientPID,Message:Node 10.0.0.160 status is now: NodeHasSufficientPID,Source:EventSource{Component:kubelet,Host:10.0.0.160,},FirstTimestamp:2025-01-17 12:23:57.379461268 +0000 UTC m=+0.643417067,LastTimestamp:2025-01-17 12:23:57.464906649 +0000 UTC m=+0.728862448,Count:2,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:10.0.0.160,}" Jan 17 12:23:57.571958 kubelet[1774]: E0117 12:23:57.571923 1774 controller.go:145] "Failed to ensure lease exists, will retry" err="leases.coordination.k8s.io \"10.0.0.160\" is forbidden: User \"system:anonymous\" cannot get resource \"leases\" in API group \"coordination.k8s.io\" in the namespace \"kube-node-lease\"" interval="400ms" Jan 17 12:23:57.669494 kubelet[1774]: I0117 12:23:57.669335 1774 kubelet_node_status.go:73] "Attempting to register node" node="10.0.0.160" Jan 17 12:23:57.674095 kubelet[1774]: E0117 12:23:57.674039 1774 kubelet_node_status.go:96] "Unable to register node with API server" err="nodes is forbidden: User \"system:anonymous\" cannot create resource \"nodes\" in API group \"\" at the cluster scope" node="10.0.0.160" Jan 17 12:23:57.674153 kubelet[1774]: E0117 12:23:57.674108 1774 event.go:346] "Server rejected event (will not retry!)" err="events \"10.0.0.160.181b7a60c86c672e\" is forbidden: User \"system:anonymous\" cannot patch resource \"events\" in API group \"\" in the namespace \"default\"" 
event="&Event{ObjectMeta:{10.0.0.160.181b7a60c86c672e default 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Node,Namespace:,Name:10.0.0.160,UID:10.0.0.160,APIVersion:,ResourceVersion:,FieldPath:,},Reason:NodeHasSufficientMemory,Message:Node 10.0.0.160 status is now: NodeHasSufficientMemory,Source:EventSource{Component:kubelet,Host:10.0.0.160,},FirstTimestamp:2025-01-17 12:23:57.379454766 +0000 UTC m=+0.643410565,LastTimestamp:2025-01-17 12:23:57.669288688 +0000 UTC m=+0.933244497,Count:3,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:10.0.0.160,}" Jan 17 12:23:57.677676 kubelet[1774]: E0117 12:23:57.677650 1774 event.go:346] "Server rejected event (will not retry!)" err="events \"10.0.0.160.181b7a60c86c749a\" is forbidden: User \"system:anonymous\" cannot patch resource \"events\" in API group \"\" in the namespace \"default\"" event="&Event{ObjectMeta:{10.0.0.160.181b7a60c86c749a default 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Node,Namespace:,Name:10.0.0.160,UID:10.0.0.160,APIVersion:,ResourceVersion:,FieldPath:,},Reason:NodeHasNoDiskPressure,Message:Node 10.0.0.160 status is now: NodeHasNoDiskPressure,Source:EventSource{Component:kubelet,Host:10.0.0.160,},FirstTimestamp:2025-01-17 12:23:57.379458202 +0000 UTC m=+0.643414002,LastTimestamp:2025-01-17 12:23:57.66930036 +0000 UTC m=+0.933256159,Count:3,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:10.0.0.160,}" Jan 17 12:23:57.681533 kubelet[1774]: E0117 12:23:57.681484 1774 event.go:346] "Server rejected event (will not retry!)" err="events \"10.0.0.160.181b7a60c86c8094\" is forbidden: User \"system:anonymous\" cannot patch resource \"events\" in API group \"\" in the namespace \"default\"" event="&Event{ObjectMeta:{10.0.0.160.181b7a60c86c8094 default 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Node,Namespace:,Name:10.0.0.160,UID:10.0.0.160,APIVersion:,ResourceVersion:,FieldPath:,},Reason:NodeHasSufficientPID,Message:Node 10.0.0.160 status is now: NodeHasSufficientPID,Source:EventSource{Component:kubelet,Host:10.0.0.160,},FirstTimestamp:2025-01-17 12:23:57.379461268 +0000 UTC m=+0.643417067,LastTimestamp:2025-01-17 12:23:57.669303546 +0000 UTC m=+0.933259345,Count:3,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:10.0.0.160,}" Jan 17 12:23:58.075080 kubelet[1774]: I0117 12:23:58.074963 1774 kubelet_node_status.go:73] "Attempting to register node" node="10.0.0.160" Jan 17 12:23:58.192252 kubelet[1774]: I0117 12:23:58.192207 1774 kubelet_network_linux.go:50] "Initialized iptables rules." protocol="IPv4" Jan 17 12:23:58.193459 kubelet[1774]: I0117 12:23:58.193435 1774 kubelet_network_linux.go:50] "Initialized iptables rules." 
protocol="IPv6" Jan 17 12:23:58.193496 kubelet[1774]: I0117 12:23:58.193476 1774 status_manager.go:217] "Starting to sync pod status with apiserver" Jan 17 12:23:58.193496 kubelet[1774]: I0117 12:23:58.193494 1774 kubelet.go:2329] "Starting kubelet main sync loop" Jan 17 12:23:58.193876 kubelet[1774]: E0117 12:23:58.193606 1774 kubelet.go:2353] "Skipping pod synchronization" err="[container runtime status check may not have completed yet, PLEG is not healthy: pleg has yet to be successful]" Jan 17 12:23:58.220421 kubelet[1774]: I0117 12:23:58.220383 1774 policy_none.go:49] "None policy: Start" Jan 17 12:23:58.221128 kubelet[1774]: I0117 12:23:58.221100 1774 memory_manager.go:170] "Starting memorymanager" policy="None" Jan 17 12:23:58.221128 kubelet[1774]: I0117 12:23:58.221124 1774 state_mem.go:35] "Initializing new in-memory state store" Jan 17 12:23:58.277039 kubelet[1774]: I0117 12:23:58.276990 1774 kubelet_node_status.go:76] "Successfully registered node" node="10.0.0.160" Jan 17 12:23:58.293790 kubelet[1774]: E0117 12:23:58.293765 1774 kubelet.go:2353] "Skipping pod synchronization" err="container runtime status check may not have completed yet" Jan 17 12:23:58.295803 systemd[1]: Created slice kubepods.slice - libcontainer container kubepods.slice. Jan 17 12:23:58.310554 systemd[1]: Created slice kubepods-besteffort.slice - libcontainer container kubepods-besteffort.slice. Jan 17 12:23:58.322928 systemd[1]: Created slice kubepods-burstable.slice - libcontainer container kubepods-burstable.slice. Jan 17 12:23:58.324188 kubelet[1774]: I0117 12:23:58.324125 1774 manager.go:479] "Failed to read data from checkpoint" checkpoint="kubelet_internal_checkpoint" err="checkpoint is not found" Jan 17 12:23:58.324493 kubelet[1774]: I0117 12:23:58.324409 1774 plugin_manager.go:118] "Starting Kubelet Plugin Manager" Jan 17 12:23:58.325917 kubelet[1774]: E0117 12:23:58.325832 1774 eviction_manager.go:282] "Eviction manager: failed to get summary stats" err="failed to get node info: node \"10.0.0.160\" not found" Jan 17 12:23:58.329059 kubelet[1774]: I0117 12:23:58.329024 1774 transport.go:147] "Certificate rotation detected, shutting down client connections to start using new credentials" Jan 17 12:23:58.329226 kubelet[1774]: W0117 12:23:58.329154 1774 reflector.go:462] vendor/k8s.io/client-go/informers/factory.go:159: watch of *v1.RuntimeClass ended with: very short watch: vendor/k8s.io/client-go/informers/factory.go:159: Unexpected watch close - watch lasted less than a second and no items received Jan 17 12:23:58.329226 kubelet[1774]: W0117 12:23:58.329199 1774 reflector.go:462] vendor/k8s.io/client-go/informers/factory.go:159: watch of *v1.Service ended with: very short watch: vendor/k8s.io/client-go/informers/factory.go:159: Unexpected watch close - watch lasted less than a second and no items received Jan 17 12:23:58.329305 kubelet[1774]: E0117 12:23:58.329244 1774 controller.go:145] "Failed to ensure lease exists, will retry" err="Post \"https://10.0.0.158:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases?timeout=10s\": read tcp 10.0.0.160:36318->10.0.0.158:6443: use of closed network connection" interval="800ms" Jan 17 12:23:58.329366 kubelet[1774]: E0117 12:23:58.329345 1774 kubelet_node_status.go:544] "Error updating node status, will retry" err="failed to patch status 
\"{\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"type\\\":\\\"DiskPressure\\\"},{\\\"type\\\":\\\"PIDPressure\\\"},{\\\"type\\\":\\\"Ready\\\"}],\\\"conditions\\\":[{\\\"message\\\":\\\"[container runtime status check may not have completed yet, container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized, CSINode is not yet initialized]\\\",\\\"type\\\":\\\"Ready\\\"}]}}\" for node \"10.0.0.160\": Patch \"https://10.0.0.158:6443/api/v1/nodes/10.0.0.160/status?timeout=10s\": read tcp 10.0.0.160:36318->10.0.0.158:6443: use of closed network connection" Jan 17 12:23:58.329366 kubelet[1774]: E0117 12:23:58.329343 1774 event.go:355] "Unable to write event (may retry after sleeping)" err="Post \"https://10.0.0.158:6443/api/v1/namespaces/default/events\": read tcp 10.0.0.160:36318->10.0.0.158:6443: use of closed network connection" event="&Event{ObjectMeta:{10.0.0.160.181b7a60c86c672e default 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Node,Namespace:,Name:10.0.0.160,UID:10.0.0.160,APIVersion:,ResourceVersion:,FieldPath:,},Reason:NodeHasSufficientMemory,Message:Node 10.0.0.160 status is now: NodeHasSufficientMemory,Source:EventSource{Component:kubelet,Host:10.0.0.160,},FirstTimestamp:2025-01-17 12:23:57.379454766 +0000 UTC m=+0.643410565,LastTimestamp:2025-01-17 12:23:58.074915193 +0000 UTC m=+1.338870992,Count:4,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:10.0.0.160,}" Jan 17 12:23:58.355873 kubelet[1774]: E0117 12:23:58.355802 1774 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Jan 17 12:23:58.416248 kubelet[1774]: I0117 12:23:58.416206 1774 kuberuntime_manager.go:1529] "Updating runtime config through cri with podcidr" CIDR="192.168.1.0/24" Jan 17 12:23:58.416762 containerd[1462]: time="2025-01-17T12:23:58.416646551Z" level=info msg="No cni config template is specified, wait for other system components to drop the config." Jan 17 12:23:58.417208 kubelet[1774]: I0117 12:23:58.416858 1774 kubelet_network.go:61] "Updating Pod CIDR" originalPodCIDR="" newPodCIDR="192.168.1.0/24" Jan 17 12:23:58.517457 sudo[1638]: pam_unix(sudo:session): session closed for user root Jan 17 12:23:58.519615 sshd[1635]: pam_unix(sshd:session): session closed for user core Jan 17 12:23:58.523482 systemd[1]: sshd@6-10.0.0.160:22-10.0.0.1:56262.service: Deactivated successfully. Jan 17 12:23:58.525457 systemd[1]: session-7.scope: Deactivated successfully. Jan 17 12:23:58.527163 systemd-logind[1448]: Session 7 logged out. Waiting for processes to exit. Jan 17 12:23:58.528336 systemd-logind[1448]: Removed session 7. 
Jan 17 12:23:58.531131 kubelet[1774]: E0117 12:23:58.531097 1774 kubelet_node_status.go:462] "Error getting the current node from lister" err="node \"10.0.0.160\" not found" Jan 17 12:23:59.356022 kubelet[1774]: I0117 12:23:59.355987 1774 apiserver.go:52] "Watching apiserver" Jan 17 12:23:59.356535 kubelet[1774]: E0117 12:23:59.356031 1774 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Jan 17 12:23:59.359084 kubelet[1774]: I0117 12:23:59.359058 1774 topology_manager.go:215] "Topology Admit Handler" podUID="23215448-caff-4f20-82b0-3b3f56d251cf" podNamespace="calico-system" podName="calico-node-pn6jm" Jan 17 12:23:59.359159 kubelet[1774]: I0117 12:23:59.359143 1774 topology_manager.go:215] "Topology Admit Handler" podUID="dda61e94-21b3-42b1-baef-aafab38d13ff" podNamespace="calico-system" podName="csi-node-driver-b66m9" Jan 17 12:23:59.359206 kubelet[1774]: I0117 12:23:59.359191 1774 topology_manager.go:215] "Topology Admit Handler" podUID="3726337c-da45-43f9-b78b-6c023c291293" podNamespace="kube-system" podName="kube-proxy-vvkxc" Jan 17 12:23:59.359812 kubelet[1774]: E0117 12:23:59.359767 1774 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" pod="calico-system/csi-node-driver-b66m9" podUID="dda61e94-21b3-42b1-baef-aafab38d13ff" Jan 17 12:23:59.365069 kubelet[1774]: I0117 12:23:59.364148 1774 desired_state_of_world_populator.go:159] "Finished populating initial desired state of world" Jan 17 12:23:59.369240 systemd[1]: Created slice kubepods-besteffort-pod3726337c_da45_43f9_b78b_6c023c291293.slice - libcontainer container kubepods-besteffort-pod3726337c_da45_43f9_b78b_6c023c291293.slice. 
Jan 17 12:23:59.374085 kubelet[1774]: I0117 12:23:59.374051 1774 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"node-certs\" (UniqueName: \"kubernetes.io/secret/23215448-caff-4f20-82b0-3b3f56d251cf-node-certs\") pod \"calico-node-pn6jm\" (UID: \"23215448-caff-4f20-82b0-3b3f56d251cf\") " pod="calico-system/calico-node-pn6jm" Jan 17 12:23:59.374085 kubelet[1774]: I0117 12:23:59.374087 1774 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-proxy\" (UniqueName: \"kubernetes.io/configmap/3726337c-da45-43f9-b78b-6c023c291293-kube-proxy\") pod \"kube-proxy-vvkxc\" (UID: \"3726337c-da45-43f9-b78b-6c023c291293\") " pod="kube-system/kube-proxy-vvkxc" Jan 17 12:23:59.374197 kubelet[1774]: I0117 12:23:59.374114 1774 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/3726337c-da45-43f9-b78b-6c023c291293-lib-modules\") pod \"kube-proxy-vvkxc\" (UID: \"3726337c-da45-43f9-b78b-6c023c291293\") " pod="kube-system/kube-proxy-vvkxc" Jan 17 12:23:59.374264 kubelet[1774]: I0117 12:23:59.374227 1774 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"tigera-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/23215448-caff-4f20-82b0-3b3f56d251cf-tigera-ca-bundle\") pod \"calico-node-pn6jm\" (UID: \"23215448-caff-4f20-82b0-3b3f56d251cf\") " pod="calico-system/calico-node-pn6jm" Jan 17 12:23:59.374305 kubelet[1774]: I0117 12:23:59.374277 1774 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"var-lib-calico\" (UniqueName: \"kubernetes.io/host-path/23215448-caff-4f20-82b0-3b3f56d251cf-var-lib-calico\") pod \"calico-node-pn6jm\" (UID: \"23215448-caff-4f20-82b0-3b3f56d251cf\") " pod="calico-system/calico-node-pn6jm" Jan 17 12:23:59.374347 kubelet[1774]: I0117 12:23:59.374308 1774 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-4nq4t\" (UniqueName: \"kubernetes.io/projected/23215448-caff-4f20-82b0-3b3f56d251cf-kube-api-access-4nq4t\") pod \"calico-node-pn6jm\" (UID: \"23215448-caff-4f20-82b0-3b3f56d251cf\") " pod="calico-system/calico-node-pn6jm" Jan 17 12:23:59.374347 kubelet[1774]: I0117 12:23:59.374334 1774 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"registration-dir\" (UniqueName: \"kubernetes.io/host-path/dda61e94-21b3-42b1-baef-aafab38d13ff-registration-dir\") pod \"csi-node-driver-b66m9\" (UID: \"dda61e94-21b3-42b1-baef-aafab38d13ff\") " pod="calico-system/csi-node-driver-b66m9" Jan 17 12:23:59.374409 kubelet[1774]: I0117 12:23:59.374360 1774 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-b7t4b\" (UniqueName: \"kubernetes.io/projected/dda61e94-21b3-42b1-baef-aafab38d13ff-kube-api-access-b7t4b\") pod \"csi-node-driver-b66m9\" (UID: \"dda61e94-21b3-42b1-baef-aafab38d13ff\") " pod="calico-system/csi-node-driver-b66m9" Jan 17 12:23:59.374409 kubelet[1774]: I0117 12:23:59.374391 1774 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/23215448-caff-4f20-82b0-3b3f56d251cf-lib-modules\") pod \"calico-node-pn6jm\" (UID: \"23215448-caff-4f20-82b0-3b3f56d251cf\") " pod="calico-system/calico-node-pn6jm" Jan 17 12:23:59.374481 kubelet[1774]: I0117 
12:23:59.374425 1774 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cni-bin-dir\" (UniqueName: \"kubernetes.io/host-path/23215448-caff-4f20-82b0-3b3f56d251cf-cni-bin-dir\") pod \"calico-node-pn6jm\" (UID: \"23215448-caff-4f20-82b0-3b3f56d251cf\") " pod="calico-system/calico-node-pn6jm" Jan 17 12:23:59.374481 kubelet[1774]: I0117 12:23:59.374468 1774 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cni-net-dir\" (UniqueName: \"kubernetes.io/host-path/23215448-caff-4f20-82b0-3b3f56d251cf-cni-net-dir\") pod \"calico-node-pn6jm\" (UID: \"23215448-caff-4f20-82b0-3b3f56d251cf\") " pod="calico-system/calico-node-pn6jm" Jan 17 12:23:59.374545 kubelet[1774]: I0117 12:23:59.374514 1774 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cni-log-dir\" (UniqueName: \"kubernetes.io/host-path/23215448-caff-4f20-82b0-3b3f56d251cf-cni-log-dir\") pod \"calico-node-pn6jm\" (UID: \"23215448-caff-4f20-82b0-3b3f56d251cf\") " pod="calico-system/calico-node-pn6jm" Jan 17 12:23:59.374591 kubelet[1774]: I0117 12:23:59.374556 1774 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"socket-dir\" (UniqueName: \"kubernetes.io/host-path/dda61e94-21b3-42b1-baef-aafab38d13ff-socket-dir\") pod \"csi-node-driver-b66m9\" (UID: \"dda61e94-21b3-42b1-baef-aafab38d13ff\") " pod="calico-system/csi-node-driver-b66m9" Jan 17 12:23:59.374623 kubelet[1774]: I0117 12:23:59.374593 1774 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"xtables-lock\" (UniqueName: \"kubernetes.io/host-path/3726337c-da45-43f9-b78b-6c023c291293-xtables-lock\") pod \"kube-proxy-vvkxc\" (UID: \"3726337c-da45-43f9-b78b-6c023c291293\") " pod="kube-system/kube-proxy-vvkxc" Jan 17 12:23:59.374668 kubelet[1774]: I0117 12:23:59.374630 1774 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"xtables-lock\" (UniqueName: \"kubernetes.io/host-path/23215448-caff-4f20-82b0-3b3f56d251cf-xtables-lock\") pod \"calico-node-pn6jm\" (UID: \"23215448-caff-4f20-82b0-3b3f56d251cf\") " pod="calico-system/calico-node-pn6jm" Jan 17 12:23:59.374721 kubelet[1774]: I0117 12:23:59.374708 1774 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"var-run-calico\" (UniqueName: \"kubernetes.io/host-path/23215448-caff-4f20-82b0-3b3f56d251cf-var-run-calico\") pod \"calico-node-pn6jm\" (UID: \"23215448-caff-4f20-82b0-3b3f56d251cf\") " pod="calico-system/calico-node-pn6jm" Jan 17 12:23:59.374761 kubelet[1774]: I0117 12:23:59.374746 1774 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"flexvol-driver-host\" (UniqueName: \"kubernetes.io/host-path/23215448-caff-4f20-82b0-3b3f56d251cf-flexvol-driver-host\") pod \"calico-node-pn6jm\" (UID: \"23215448-caff-4f20-82b0-3b3f56d251cf\") " pod="calico-system/calico-node-pn6jm" Jan 17 12:23:59.374792 kubelet[1774]: I0117 12:23:59.374777 1774 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"varrun\" (UniqueName: \"kubernetes.io/host-path/dda61e94-21b3-42b1-baef-aafab38d13ff-varrun\") pod \"csi-node-driver-b66m9\" (UID: \"dda61e94-21b3-42b1-baef-aafab38d13ff\") " pod="calico-system/csi-node-driver-b66m9" Jan 17 12:23:59.374824 kubelet[1774]: I0117 12:23:59.374805 1774 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume 
started for volume \"kubelet-dir\" (UniqueName: \"kubernetes.io/host-path/dda61e94-21b3-42b1-baef-aafab38d13ff-kubelet-dir\") pod \"csi-node-driver-b66m9\" (UID: \"dda61e94-21b3-42b1-baef-aafab38d13ff\") " pod="calico-system/csi-node-driver-b66m9" Jan 17 12:23:59.374853 kubelet[1774]: I0117 12:23:59.374834 1774 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-cwnvs\" (UniqueName: \"kubernetes.io/projected/3726337c-da45-43f9-b78b-6c023c291293-kube-api-access-cwnvs\") pod \"kube-proxy-vvkxc\" (UID: \"3726337c-da45-43f9-b78b-6c023c291293\") " pod="kube-system/kube-proxy-vvkxc" Jan 17 12:23:59.374891 kubelet[1774]: I0117 12:23:59.374864 1774 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"policysync\" (UniqueName: \"kubernetes.io/host-path/23215448-caff-4f20-82b0-3b3f56d251cf-policysync\") pod \"calico-node-pn6jm\" (UID: \"23215448-caff-4f20-82b0-3b3f56d251cf\") " pod="calico-system/calico-node-pn6jm" Jan 17 12:23:59.388415 systemd[1]: Created slice kubepods-besteffort-pod23215448_caff_4f20_82b0_3b3f56d251cf.slice - libcontainer container kubepods-besteffort-pod23215448_caff_4f20_82b0_3b3f56d251cf.slice. Jan 17 12:23:59.477678 kubelet[1774]: E0117 12:23:59.477626 1774 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 17 12:23:59.477678 kubelet[1774]: W0117 12:23:59.477653 1774 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 17 12:23:59.477678 kubelet[1774]: E0117 12:23:59.477669 1774 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jan 17 12:23:59.480686 kubelet[1774]: E0117 12:23:59.480535 1774 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 17 12:23:59.480686 kubelet[1774]: W0117 12:23:59.480557 1774 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 17 12:23:59.480686 kubelet[1774]: E0117 12:23:59.480581 1774 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jan 17 12:23:59.483312 kubelet[1774]: E0117 12:23:59.483285 1774 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 17 12:23:59.483312 kubelet[1774]: W0117 12:23:59.483302 1774 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 17 12:23:59.483447 kubelet[1774]: E0117 12:23:59.483314 1774 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Jan 17 12:23:59.484678 kubelet[1774]: E0117 12:23:59.484615 1774 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 17 12:23:59.484678 kubelet[1774]: W0117 12:23:59.484627 1774 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 17 12:23:59.484678 kubelet[1774]: E0117 12:23:59.484649 1774 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jan 17 12:23:59.485282 kubelet[1774]: E0117 12:23:59.485255 1774 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 17 12:23:59.485282 kubelet[1774]: W0117 12:23:59.485275 1774 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 17 12:23:59.485354 kubelet[1774]: E0117 12:23:59.485291 1774 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jan 17 12:23:59.686943 kubelet[1774]: E0117 12:23:59.686822 1774 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jan 17 12:23:59.687668 containerd[1462]: time="2025-01-17T12:23:59.687591356Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-proxy-vvkxc,Uid:3726337c-da45-43f9-b78b-6c023c291293,Namespace:kube-system,Attempt:0,}" Jan 17 12:23:59.692090 kubelet[1774]: E0117 12:23:59.691356 1774 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jan 17 12:23:59.692534 containerd[1462]: time="2025-01-17T12:23:59.692498028Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-node-pn6jm,Uid:23215448-caff-4f20-82b0-3b3f56d251cf,Namespace:calico-system,Attempt:0,}" Jan 17 12:24:00.356777 kubelet[1774]: E0117 12:24:00.356724 1774 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Jan 17 12:24:01.194844 kubelet[1774]: E0117 12:24:01.194812 1774 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" pod="calico-system/csi-node-driver-b66m9" podUID="dda61e94-21b3-42b1-baef-aafab38d13ff" Jan 17 12:24:01.340911 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount848913156.mount: Deactivated successfully. 
Jan 17 12:24:01.357275 kubelet[1774]: E0117 12:24:01.357208 1774 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Jan 17 12:24:01.522230 containerd[1462]: time="2025-01-17T12:24:01.522172501Z" level=info msg="ImageCreate event name:\"registry.k8s.io/pause:3.8\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"} labels:{key:\"io.cri-containerd.pinned\" value:\"pinned\"}" Jan 17 12:24:01.523542 containerd[1462]: time="2025-01-17T12:24:01.523485613Z" level=info msg="ImageUpdate event name:\"registry.k8s.io/pause:3.8\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"} labels:{key:\"io.cri-containerd.pinned\" value:\"pinned\"}" Jan 17 12:24:01.524550 containerd[1462]: time="2025-01-17T12:24:01.524491289Z" level=info msg="stop pulling image registry.k8s.io/pause:3.8: active requests=0, bytes read=0" Jan 17 12:24:01.525594 containerd[1462]: time="2025-01-17T12:24:01.525540717Z" level=info msg="stop pulling image registry.k8s.io/pause:3.8: active requests=0, bytes read=312056" Jan 17 12:24:01.526835 containerd[1462]: time="2025-01-17T12:24:01.526805168Z" level=info msg="ImageCreate event name:\"sha256:4873874c08efc72e9729683a83ffbb7502ee729e9a5ac097723806ea7fa13517\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"} labels:{key:\"io.cri-containerd.pinned\" value:\"pinned\"}" Jan 17 12:24:01.529332 containerd[1462]: time="2025-01-17T12:24:01.529303985Z" level=info msg="ImageCreate event name:\"registry.k8s.io/pause@sha256:9001185023633d17a2f98ff69b6ff2615b8ea02a825adffa40422f51dfdcde9d\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"} labels:{key:\"io.cri-containerd.pinned\" value:\"pinned\"}" Jan 17 12:24:01.530096 containerd[1462]: time="2025-01-17T12:24:01.530060203Z" level=info msg="Pulled image \"registry.k8s.io/pause:3.8\" with image id \"sha256:4873874c08efc72e9729683a83ffbb7502ee729e9a5ac097723806ea7fa13517\", repo tag \"registry.k8s.io/pause:3.8\", repo digest \"registry.k8s.io/pause@sha256:9001185023633d17a2f98ff69b6ff2615b8ea02a825adffa40422f51dfdcde9d\", size \"311286\" in 1.837461045s" Jan 17 12:24:01.532315 containerd[1462]: time="2025-01-17T12:24:01.532284504Z" level=info msg="Pulled image \"registry.k8s.io/pause:3.8\" with image id \"sha256:4873874c08efc72e9729683a83ffbb7502ee729e9a5ac097723806ea7fa13517\", repo tag \"registry.k8s.io/pause:3.8\", repo digest \"registry.k8s.io/pause@sha256:9001185023633d17a2f98ff69b6ff2615b8ea02a825adffa40422f51dfdcde9d\", size \"311286\" in 1.844513882s" Jan 17 12:24:01.637482 containerd[1462]: time="2025-01-17T12:24:01.637398771Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Jan 17 12:24:01.637482 containerd[1462]: time="2025-01-17T12:24:01.637443204Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Jan 17 12:24:01.637482 containerd[1462]: time="2025-01-17T12:24:01.637453143Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Jan 17 12:24:01.637482 containerd[1462]: time="2025-01-17T12:24:01.637422115Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Jan 17 12:24:01.637482 containerd[1462]: time="2025-01-17T12:24:01.637472159Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." 
runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Jan 17 12:24:01.637753 containerd[1462]: time="2025-01-17T12:24:01.637483931Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Jan 17 12:24:01.637753 containerd[1462]: time="2025-01-17T12:24:01.637558100Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Jan 17 12:24:01.637971 containerd[1462]: time="2025-01-17T12:24:01.637881677Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Jan 17 12:24:01.696846 systemd[1]: Started cri-containerd-042623025d32332d5bad39431a6b5691badb6c90ccb542ad679672f857c7d40a.scope - libcontainer container 042623025d32332d5bad39431a6b5691badb6c90ccb542ad679672f857c7d40a. Jan 17 12:24:01.698467 systemd[1]: Started cri-containerd-dcc6ab575120eb14f97bf7dbe0451c1b57d2d48a5aa0c64b710b89494687c235.scope - libcontainer container dcc6ab575120eb14f97bf7dbe0451c1b57d2d48a5aa0c64b710b89494687c235. Jan 17 12:24:01.717927 containerd[1462]: time="2025-01-17T12:24:01.717506722Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-node-pn6jm,Uid:23215448-caff-4f20-82b0-3b3f56d251cf,Namespace:calico-system,Attempt:0,} returns sandbox id \"042623025d32332d5bad39431a6b5691badb6c90ccb542ad679672f857c7d40a\"" Jan 17 12:24:01.718902 kubelet[1774]: E0117 12:24:01.718882 1774 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jan 17 12:24:01.720753 containerd[1462]: time="2025-01-17T12:24:01.720630690Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/pod2daemon-flexvol:v3.29.1\"" Jan 17 12:24:01.722107 containerd[1462]: time="2025-01-17T12:24:01.722087673Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-proxy-vvkxc,Uid:3726337c-da45-43f9-b78b-6c023c291293,Namespace:kube-system,Attempt:0,} returns sandbox id \"dcc6ab575120eb14f97bf7dbe0451c1b57d2d48a5aa0c64b710b89494687c235\"" Jan 17 12:24:01.722684 kubelet[1774]: E0117 12:24:01.722666 1774 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jan 17 12:24:02.357353 kubelet[1774]: E0117 12:24:02.357309 1774 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Jan 17 12:24:03.193953 kubelet[1774]: E0117 12:24:03.193908 1774 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" pod="calico-system/csi-node-driver-b66m9" podUID="dda61e94-21b3-42b1-baef-aafab38d13ff" Jan 17 12:24:03.202459 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount4293449155.mount: Deactivated successfully. 
Jan 17 12:24:03.264747 containerd[1462]: time="2025-01-17T12:24:03.264668783Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/pod2daemon-flexvol:v3.29.1\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jan 17 12:24:03.265383 containerd[1462]: time="2025-01-17T12:24:03.265348357Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/pod2daemon-flexvol:v3.29.1: active requests=0, bytes read=6855343" Jan 17 12:24:03.266388 containerd[1462]: time="2025-01-17T12:24:03.266357540Z" level=info msg="ImageCreate event name:\"sha256:2b7452b763ec8833ca0386ada5fd066e552a9b3b02b8538a5e34cc3d6d3840a6\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jan 17 12:24:03.270514 containerd[1462]: time="2025-01-17T12:24:03.270464552Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/pod2daemon-flexvol@sha256:a63f8b4ff531912d12d143664eb263fdbc6cd7b3ff4aa777dfb6e318a090462c\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jan 17 12:24:03.271058 containerd[1462]: time="2025-01-17T12:24:03.271029411Z" level=info msg="Pulled image \"ghcr.io/flatcar/calico/pod2daemon-flexvol:v3.29.1\" with image id \"sha256:2b7452b763ec8833ca0386ada5fd066e552a9b3b02b8538a5e34cc3d6d3840a6\", repo tag \"ghcr.io/flatcar/calico/pod2daemon-flexvol:v3.29.1\", repo digest \"ghcr.io/flatcar/calico/pod2daemon-flexvol@sha256:a63f8b4ff531912d12d143664eb263fdbc6cd7b3ff4aa777dfb6e318a090462c\", size \"6855165\" in 1.550369637s" Jan 17 12:24:03.271094 containerd[1462]: time="2025-01-17T12:24:03.271059458Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/pod2daemon-flexvol:v3.29.1\" returns image reference \"sha256:2b7452b763ec8833ca0386ada5fd066e552a9b3b02b8538a5e34cc3d6d3840a6\"" Jan 17 12:24:03.271980 containerd[1462]: time="2025-01-17T12:24:03.271958614Z" level=info msg="PullImage \"registry.k8s.io/kube-proxy:v1.29.13\"" Jan 17 12:24:03.272648 containerd[1462]: time="2025-01-17T12:24:03.272626276Z" level=info msg="CreateContainer within sandbox \"042623025d32332d5bad39431a6b5691badb6c90ccb542ad679672f857c7d40a\" for container &ContainerMetadata{Name:flexvol-driver,Attempt:0,}" Jan 17 12:24:03.289194 containerd[1462]: time="2025-01-17T12:24:03.289155524Z" level=info msg="CreateContainer within sandbox \"042623025d32332d5bad39431a6b5691badb6c90ccb542ad679672f857c7d40a\" for &ContainerMetadata{Name:flexvol-driver,Attempt:0,} returns container id \"de0c1d564148853f11c6a5566a631eb2220775eb12f8f26fa30e6358badd8bf4\"" Jan 17 12:24:03.289786 containerd[1462]: time="2025-01-17T12:24:03.289762152Z" level=info msg="StartContainer for \"de0c1d564148853f11c6a5566a631eb2220775eb12f8f26fa30e6358badd8bf4\"" Jan 17 12:24:03.317809 systemd[1]: Started cri-containerd-de0c1d564148853f11c6a5566a631eb2220775eb12f8f26fa30e6358badd8bf4.scope - libcontainer container de0c1d564148853f11c6a5566a631eb2220775eb12f8f26fa30e6358badd8bf4. Jan 17 12:24:03.345236 containerd[1462]: time="2025-01-17T12:24:03.345185861Z" level=info msg="StartContainer for \"de0c1d564148853f11c6a5566a631eb2220775eb12f8f26fa30e6358badd8bf4\" returns successfully" Jan 17 12:24:03.356145 systemd[1]: cri-containerd-de0c1d564148853f11c6a5566a631eb2220775eb12f8f26fa30e6358badd8bf4.scope: Deactivated successfully. 
Jan 17 12:24:03.358385 kubelet[1774]: E0117 12:24:03.358353 1774 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Jan 17 12:24:03.375283 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-de0c1d564148853f11c6a5566a631eb2220775eb12f8f26fa30e6358badd8bf4-rootfs.mount: Deactivated successfully. Jan 17 12:24:03.447755 containerd[1462]: time="2025-01-17T12:24:03.447604062Z" level=info msg="shim disconnected" id=de0c1d564148853f11c6a5566a631eb2220775eb12f8f26fa30e6358badd8bf4 namespace=k8s.io Jan 17 12:24:03.447755 containerd[1462]: time="2025-01-17T12:24:03.447652783Z" level=warning msg="cleaning up after shim disconnected" id=de0c1d564148853f11c6a5566a631eb2220775eb12f8f26fa30e6358badd8bf4 namespace=k8s.io Jan 17 12:24:03.447755 containerd[1462]: time="2025-01-17T12:24:03.447660708Z" level=info msg="cleaning up dead shim" namespace=k8s.io Jan 17 12:24:04.206425 kubelet[1774]: E0117 12:24:04.206391 1774 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jan 17 12:24:04.330040 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount4136703551.mount: Deactivated successfully. Jan 17 12:24:04.359100 kubelet[1774]: E0117 12:24:04.359071 1774 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Jan 17 12:24:04.570085 containerd[1462]: time="2025-01-17T12:24:04.569716443Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-proxy:v1.29.13\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jan 17 12:24:04.570586 containerd[1462]: time="2025-01-17T12:24:04.570544787Z" level=info msg="stop pulling image registry.k8s.io/kube-proxy:v1.29.13: active requests=0, bytes read=28620941" Jan 17 12:24:04.571886 containerd[1462]: time="2025-01-17T12:24:04.571335279Z" level=info msg="ImageCreate event name:\"sha256:f20cf1600da6cce7b7d3fdd3b5ff91243983ea8be3907cccaee1a956770a2f15\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jan 17 12:24:04.573970 containerd[1462]: time="2025-01-17T12:24:04.573943631Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-proxy@sha256:dd45846de733434501e436638a7a240f2d379bf0a6bb0404a7684e0cf52c4011\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jan 17 12:24:04.574570 containerd[1462]: time="2025-01-17T12:24:04.574528538Z" level=info msg="Pulled image \"registry.k8s.io/kube-proxy:v1.29.13\" with image id \"sha256:f20cf1600da6cce7b7d3fdd3b5ff91243983ea8be3907cccaee1a956770a2f15\", repo tag \"registry.k8s.io/kube-proxy:v1.29.13\", repo digest \"registry.k8s.io/kube-proxy@sha256:dd45846de733434501e436638a7a240f2d379bf0a6bb0404a7684e0cf52c4011\", size \"28619960\" in 1.302539667s" Jan 17 12:24:04.574570 containerd[1462]: time="2025-01-17T12:24:04.574563473Z" level=info msg="PullImage \"registry.k8s.io/kube-proxy:v1.29.13\" returns image reference \"sha256:f20cf1600da6cce7b7d3fdd3b5ff91243983ea8be3907cccaee1a956770a2f15\"" Jan 17 12:24:04.575483 containerd[1462]: time="2025-01-17T12:24:04.575443954Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/cni:v3.29.1\"" Jan 17 12:24:04.576282 containerd[1462]: time="2025-01-17T12:24:04.576258372Z" level=info msg="CreateContainer within sandbox \"dcc6ab575120eb14f97bf7dbe0451c1b57d2d48a5aa0c64b710b89494687c235\" for container &ContainerMetadata{Name:kube-proxy,Attempt:0,}" Jan 17 12:24:04.593228 containerd[1462]: 
time="2025-01-17T12:24:04.593185626Z" level=info msg="CreateContainer within sandbox \"dcc6ab575120eb14f97bf7dbe0451c1b57d2d48a5aa0c64b710b89494687c235\" for &ContainerMetadata{Name:kube-proxy,Attempt:0,} returns container id \"f56827ede39ffe58741906702726f2bd9cbbc4947d8fbd48f55456e11e318f84\"" Jan 17 12:24:04.593633 containerd[1462]: time="2025-01-17T12:24:04.593594603Z" level=info msg="StartContainer for \"f56827ede39ffe58741906702726f2bd9cbbc4947d8fbd48f55456e11e318f84\"" Jan 17 12:24:04.626845 systemd[1]: Started cri-containerd-f56827ede39ffe58741906702726f2bd9cbbc4947d8fbd48f55456e11e318f84.scope - libcontainer container f56827ede39ffe58741906702726f2bd9cbbc4947d8fbd48f55456e11e318f84. Jan 17 12:24:04.655404 containerd[1462]: time="2025-01-17T12:24:04.654729844Z" level=info msg="StartContainer for \"f56827ede39ffe58741906702726f2bd9cbbc4947d8fbd48f55456e11e318f84\" returns successfully" Jan 17 12:24:05.193805 kubelet[1774]: E0117 12:24:05.193759 1774 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" pod="calico-system/csi-node-driver-b66m9" podUID="dda61e94-21b3-42b1-baef-aafab38d13ff" Jan 17 12:24:05.208458 kubelet[1774]: E0117 12:24:05.208421 1774 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jan 17 12:24:05.215932 kubelet[1774]: I0117 12:24:05.215899 1774 pod_startup_latency_tracker.go:102] "Observed pod startup duration" pod="kube-system/kube-proxy-vvkxc" podStartSLOduration=4.364093473 podStartE2EDuration="7.215864762s" podCreationTimestamp="2025-01-17 12:23:58 +0000 UTC" firstStartedPulling="2025-01-17 12:24:01.723123736 +0000 UTC m=+4.987079535" lastFinishedPulling="2025-01-17 12:24:04.574895025 +0000 UTC m=+7.838850824" observedRunningTime="2025-01-17 12:24:05.215675918 +0000 UTC m=+8.479631717" watchObservedRunningTime="2025-01-17 12:24:05.215864762 +0000 UTC m=+8.479820561" Jan 17 12:24:05.359809 kubelet[1774]: E0117 12:24:05.359765 1774 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Jan 17 12:24:06.209892 kubelet[1774]: E0117 12:24:06.209853 1774 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jan 17 12:24:06.360096 kubelet[1774]: E0117 12:24:06.360066 1774 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Jan 17 12:24:07.194397 kubelet[1774]: E0117 12:24:07.194348 1774 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" pod="calico-system/csi-node-driver-b66m9" podUID="dda61e94-21b3-42b1-baef-aafab38d13ff" Jan 17 12:24:07.360449 kubelet[1774]: E0117 12:24:07.360392 1774 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Jan 17 12:24:08.361326 kubelet[1774]: E0117 12:24:08.361296 1774 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Jan 17 12:24:08.430873 containerd[1462]: time="2025-01-17T12:24:08.430799983Z" 
level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/cni:v3.29.1\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jan 17 12:24:08.431958 containerd[1462]: time="2025-01-17T12:24:08.431904635Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/cni:v3.29.1: active requests=0, bytes read=96154154" Jan 17 12:24:08.433302 containerd[1462]: time="2025-01-17T12:24:08.433268793Z" level=info msg="ImageCreate event name:\"sha256:7dd6ea186aba0d7a1791a79d426fe854527ca95192b26bbd19e8baf8373f7d0e\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jan 17 12:24:08.435528 containerd[1462]: time="2025-01-17T12:24:08.435492493Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/cni@sha256:21e759d51c90dfb34fc1397dc180dd3a3fb564c2b0580d2f61ffe108f2a3c94b\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jan 17 12:24:08.436154 containerd[1462]: time="2025-01-17T12:24:08.436129668Z" level=info msg="Pulled image \"ghcr.io/flatcar/calico/cni:v3.29.1\" with image id \"sha256:7dd6ea186aba0d7a1791a79d426fe854527ca95192b26bbd19e8baf8373f7d0e\", repo tag \"ghcr.io/flatcar/calico/cni:v3.29.1\", repo digest \"ghcr.io/flatcar/calico/cni@sha256:21e759d51c90dfb34fc1397dc180dd3a3fb564c2b0580d2f61ffe108f2a3c94b\", size \"97647238\" in 3.860631412s" Jan 17 12:24:08.436194 containerd[1462]: time="2025-01-17T12:24:08.436157380Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/cni:v3.29.1\" returns image reference \"sha256:7dd6ea186aba0d7a1791a79d426fe854527ca95192b26bbd19e8baf8373f7d0e\"" Jan 17 12:24:08.437617 containerd[1462]: time="2025-01-17T12:24:08.437581962Z" level=info msg="CreateContainer within sandbox \"042623025d32332d5bad39431a6b5691badb6c90ccb542ad679672f857c7d40a\" for container &ContainerMetadata{Name:install-cni,Attempt:0,}" Jan 17 12:24:08.451859 containerd[1462]: time="2025-01-17T12:24:08.451816046Z" level=info msg="CreateContainer within sandbox \"042623025d32332d5bad39431a6b5691badb6c90ccb542ad679672f857c7d40a\" for &ContainerMetadata{Name:install-cni,Attempt:0,} returns container id \"1ac63eed8aa9941c2f75700bd99e9310b4ea38e89a101f4cc234e45252d9db9b\"" Jan 17 12:24:08.452356 containerd[1462]: time="2025-01-17T12:24:08.452322786Z" level=info msg="StartContainer for \"1ac63eed8aa9941c2f75700bd99e9310b4ea38e89a101f4cc234e45252d9db9b\"" Jan 17 12:24:08.480880 systemd[1]: Started cri-containerd-1ac63eed8aa9941c2f75700bd99e9310b4ea38e89a101f4cc234e45252d9db9b.scope - libcontainer container 1ac63eed8aa9941c2f75700bd99e9310b4ea38e89a101f4cc234e45252d9db9b. 
Jan 17 12:24:08.508488 containerd[1462]: time="2025-01-17T12:24:08.508443522Z" level=info msg="StartContainer for \"1ac63eed8aa9941c2f75700bd99e9310b4ea38e89a101f4cc234e45252d9db9b\" returns successfully" Jan 17 12:24:09.194728 kubelet[1774]: E0117 12:24:09.194666 1774 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" pod="calico-system/csi-node-driver-b66m9" podUID="dda61e94-21b3-42b1-baef-aafab38d13ff" Jan 17 12:24:09.214740 kubelet[1774]: E0117 12:24:09.214684 1774 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jan 17 12:24:09.362090 kubelet[1774]: E0117 12:24:09.362040 1774 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Jan 17 12:24:10.216183 kubelet[1774]: E0117 12:24:10.216149 1774 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jan 17 12:24:10.362610 kubelet[1774]: E0117 12:24:10.362561 1774 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Jan 17 12:24:10.580728 systemd[1]: cri-containerd-1ac63eed8aa9941c2f75700bd99e9310b4ea38e89a101f4cc234e45252d9db9b.scope: Deactivated successfully. Jan 17 12:24:10.602577 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-1ac63eed8aa9941c2f75700bd99e9310b4ea38e89a101f4cc234e45252d9db9b-rootfs.mount: Deactivated successfully. Jan 17 12:24:10.638580 kubelet[1774]: I0117 12:24:10.638524 1774 kubelet_node_status.go:497] "Fast updating node status as it just became ready" Jan 17 12:24:11.079188 kubelet[1774]: I0117 12:24:11.079112 1774 topology_manager.go:215] "Topology Admit Handler" podUID="03c15592-0c9f-4dd8-ac51-f2e8e4fe7f05" podNamespace="calico-system" podName="calico-kube-controllers-9fc77b6d-kp6sc" Jan 17 12:24:11.080192 kubelet[1774]: I0117 12:24:11.080158 1774 topology_manager.go:215] "Topology Admit Handler" podUID="01f58051-b831-4680-bcde-ee72a125416f" podNamespace="kube-system" podName="coredns-76f75df574-q4bsh" Jan 17 12:24:11.080402 kubelet[1774]: I0117 12:24:11.080292 1774 topology_manager.go:215] "Topology Admit Handler" podUID="d48ea218-f377-4f0d-a97e-292b17df6d89" podNamespace="calico-apiserver" podName="calico-apiserver-68f79c84d4-xr6b6" Jan 17 12:24:11.080495 kubelet[1774]: I0117 12:24:11.080415 1774 topology_manager.go:215] "Topology Admit Handler" podUID="3229aaed-5c2c-4c71-a248-e6141b8a2e7f" podNamespace="calico-apiserver" podName="calico-apiserver-68f79c84d4-6mrf5" Jan 17 12:24:11.080806 kubelet[1774]: I0117 12:24:11.080779 1774 topology_manager.go:215] "Topology Admit Handler" podUID="04af4684-e94d-4b3d-a664-20ff360fb4b6" podNamespace="kube-system" podName="coredns-76f75df574-nlhgn" Jan 17 12:24:11.082079 kubelet[1774]: W0117 12:24:11.082049 1774 reflector.go:539] object-"calico-apiserver"/"kube-root-ca.crt": failed to list *v1.ConfigMap: configmaps "kube-root-ca.crt" is forbidden: User "system:node:10.0.0.160" cannot list resource "configmaps" in API group "" in the namespace "calico-apiserver": no relationship found between node '10.0.0.160' and this object Jan 17 12:24:11.082079 kubelet[1774]: E0117 12:24:11.082075 1774 reflector.go:147] 
object-"calico-apiserver"/"kube-root-ca.crt": Failed to watch *v1.ConfigMap: failed to list *v1.ConfigMap: configmaps "kube-root-ca.crt" is forbidden: User "system:node:10.0.0.160" cannot list resource "configmaps" in API group "" in the namespace "calico-apiserver": no relationship found between node '10.0.0.160' and this object Jan 17 12:24:11.087088 systemd[1]: Created slice kubepods-besteffort-pod03c15592_0c9f_4dd8_ac51_f2e8e4fe7f05.slice - libcontainer container kubepods-besteffort-pod03c15592_0c9f_4dd8_ac51_f2e8e4fe7f05.slice. Jan 17 12:24:11.091125 systemd[1]: Created slice kubepods-besteffort-podd48ea218_f377_4f0d_a97e_292b17df6d89.slice - libcontainer container kubepods-besteffort-podd48ea218_f377_4f0d_a97e_292b17df6d89.slice. Jan 17 12:24:11.094914 systemd[1]: Created slice kubepods-besteffort-pod3229aaed_5c2c_4c71_a248_e6141b8a2e7f.slice - libcontainer container kubepods-besteffort-pod3229aaed_5c2c_4c71_a248_e6141b8a2e7f.slice. Jan 17 12:24:11.098465 systemd[1]: Created slice kubepods-burstable-pod01f58051_b831_4680_bcde_ee72a125416f.slice - libcontainer container kubepods-burstable-pod01f58051_b831_4680_bcde_ee72a125416f.slice. Jan 17 12:24:11.115051 systemd[1]: Created slice kubepods-burstable-pod04af4684_e94d_4b3d_a664_20ff360fb4b6.slice - libcontainer container kubepods-burstable-pod04af4684_e94d_4b3d_a664_20ff360fb4b6.slice. Jan 17 12:24:11.209486 systemd[1]: Created slice kubepods-besteffort-poddda61e94_21b3_42b1_baef_aafab38d13ff.slice - libcontainer container kubepods-besteffort-poddda61e94_21b3_42b1_baef_aafab38d13ff.slice. Jan 17 12:24:11.211838 containerd[1462]: time="2025-01-17T12:24:11.211799039Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:csi-node-driver-b66m9,Uid:dda61e94-21b3-42b1-baef-aafab38d13ff,Namespace:calico-system,Attempt:0,}" Jan 17 12:24:11.242877 kubelet[1774]: I0117 12:24:11.242845 1774 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/04af4684-e94d-4b3d-a664-20ff360fb4b6-config-volume\") pod \"coredns-76f75df574-nlhgn\" (UID: \"04af4684-e94d-4b3d-a664-20ff360fb4b6\") " pod="kube-system/coredns-76f75df574-nlhgn" Jan 17 12:24:11.242877 kubelet[1774]: I0117 12:24:11.242880 1774 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"tigera-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/03c15592-0c9f-4dd8-ac51-f2e8e4fe7f05-tigera-ca-bundle\") pod \"calico-kube-controllers-9fc77b6d-kp6sc\" (UID: \"03c15592-0c9f-4dd8-ac51-f2e8e4fe7f05\") " pod="calico-system/calico-kube-controllers-9fc77b6d-kp6sc" Jan 17 12:24:11.243121 kubelet[1774]: I0117 12:24:11.242901 1774 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-vglvq\" (UniqueName: \"kubernetes.io/projected/01f58051-b831-4680-bcde-ee72a125416f-kube-api-access-vglvq\") pod \"coredns-76f75df574-q4bsh\" (UID: \"01f58051-b831-4680-bcde-ee72a125416f\") " pod="kube-system/coredns-76f75df574-q4bsh" Jan 17 12:24:11.243121 kubelet[1774]: I0117 12:24:11.242939 1774 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-69c2q\" (UniqueName: \"kubernetes.io/projected/03c15592-0c9f-4dd8-ac51-f2e8e4fe7f05-kube-api-access-69c2q\") pod \"calico-kube-controllers-9fc77b6d-kp6sc\" (UID: \"03c15592-0c9f-4dd8-ac51-f2e8e4fe7f05\") " pod="calico-system/calico-kube-controllers-9fc77b6d-kp6sc" Jan 17 12:24:11.243121 kubelet[1774]: I0117 12:24:11.242995 
1774 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"calico-apiserver-certs\" (UniqueName: \"kubernetes.io/secret/3229aaed-5c2c-4c71-a248-e6141b8a2e7f-calico-apiserver-certs\") pod \"calico-apiserver-68f79c84d4-6mrf5\" (UID: \"3229aaed-5c2c-4c71-a248-e6141b8a2e7f\") " pod="calico-apiserver/calico-apiserver-68f79c84d4-6mrf5" Jan 17 12:24:11.243121 kubelet[1774]: I0117 12:24:11.243029 1774 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/01f58051-b831-4680-bcde-ee72a125416f-config-volume\") pod \"coredns-76f75df574-q4bsh\" (UID: \"01f58051-b831-4680-bcde-ee72a125416f\") " pod="kube-system/coredns-76f75df574-q4bsh" Jan 17 12:24:11.243263 kubelet[1774]: I0117 12:24:11.243159 1774 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"calico-apiserver-certs\" (UniqueName: \"kubernetes.io/secret/d48ea218-f377-4f0d-a97e-292b17df6d89-calico-apiserver-certs\") pod \"calico-apiserver-68f79c84d4-xr6b6\" (UID: \"d48ea218-f377-4f0d-a97e-292b17df6d89\") " pod="calico-apiserver/calico-apiserver-68f79c84d4-xr6b6" Jan 17 12:24:11.243263 kubelet[1774]: I0117 12:24:11.243202 1774 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-hfn4s\" (UniqueName: \"kubernetes.io/projected/d48ea218-f377-4f0d-a97e-292b17df6d89-kube-api-access-hfn4s\") pod \"calico-apiserver-68f79c84d4-xr6b6\" (UID: \"d48ea218-f377-4f0d-a97e-292b17df6d89\") " pod="calico-apiserver/calico-apiserver-68f79c84d4-xr6b6" Jan 17 12:24:11.243263 kubelet[1774]: I0117 12:24:11.243228 1774 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-kxwjw\" (UniqueName: \"kubernetes.io/projected/3229aaed-5c2c-4c71-a248-e6141b8a2e7f-kube-api-access-kxwjw\") pod \"calico-apiserver-68f79c84d4-6mrf5\" (UID: \"3229aaed-5c2c-4c71-a248-e6141b8a2e7f\") " pod="calico-apiserver/calico-apiserver-68f79c84d4-6mrf5" Jan 17 12:24:11.243263 kubelet[1774]: I0117 12:24:11.243258 1774 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-k2spp\" (UniqueName: \"kubernetes.io/projected/04af4684-e94d-4b3d-a664-20ff360fb4b6-kube-api-access-k2spp\") pod \"coredns-76f75df574-nlhgn\" (UID: \"04af4684-e94d-4b3d-a664-20ff360fb4b6\") " pod="kube-system/coredns-76f75df574-nlhgn" Jan 17 12:24:11.363532 kubelet[1774]: E0117 12:24:11.363436 1774 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Jan 17 12:24:11.690621 containerd[1462]: time="2025-01-17T12:24:11.690514952Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-kube-controllers-9fc77b6d-kp6sc,Uid:03c15592-0c9f-4dd8-ac51-f2e8e4fe7f05,Namespace:calico-system,Attempt:0,}" Jan 17 12:24:11.714032 kubelet[1774]: E0117 12:24:11.713973 1774 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jan 17 12:24:11.714581 containerd[1462]: time="2025-01-17T12:24:11.714524569Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-76f75df574-q4bsh,Uid:01f58051-b831-4680-bcde-ee72a125416f,Namespace:kube-system,Attempt:0,}" Jan 17 12:24:11.717860 kubelet[1774]: E0117 12:24:11.717816 1774 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some 
nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jan 17 12:24:11.718244 containerd[1462]: time="2025-01-17T12:24:11.718191977Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-76f75df574-nlhgn,Uid:04af4684-e94d-4b3d-a664-20ff360fb4b6,Namespace:kube-system,Attempt:0,}" Jan 17 12:24:11.850878 containerd[1462]: time="2025-01-17T12:24:11.850787170Z" level=info msg="shim disconnected" id=1ac63eed8aa9941c2f75700bd99e9310b4ea38e89a101f4cc234e45252d9db9b namespace=k8s.io Jan 17 12:24:11.850878 containerd[1462]: time="2025-01-17T12:24:11.850844237Z" level=warning msg="cleaning up after shim disconnected" id=1ac63eed8aa9941c2f75700bd99e9310b4ea38e89a101f4cc234e45252d9db9b namespace=k8s.io Jan 17 12:24:11.850878 containerd[1462]: time="2025-01-17T12:24:11.850855197Z" level=info msg="cleaning up dead shim" namespace=k8s.io Jan 17 12:24:11.959245 containerd[1462]: time="2025-01-17T12:24:11.958861418Z" level=error msg="Failed to destroy network for sandbox \"99d57e40516b29852a2d6a4523182e419c922871aee1301552de5016f9d08f20\"" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jan 17 12:24:11.961078 containerd[1462]: time="2025-01-17T12:24:11.960886014Z" level=error msg="encountered an error cleaning up failed sandbox \"99d57e40516b29852a2d6a4523182e419c922871aee1301552de5016f9d08f20\", marking sandbox state as SANDBOX_UNKNOWN" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jan 17 12:24:11.961078 containerd[1462]: time="2025-01-17T12:24:11.960943041Z" level=error msg="RunPodSandbox for &PodSandboxMetadata{Name:csi-node-driver-b66m9,Uid:dda61e94-21b3-42b1-baef-aafab38d13ff,Namespace:calico-system,Attempt:0,} failed, error" error="failed to setup network for sandbox \"99d57e40516b29852a2d6a4523182e419c922871aee1301552de5016f9d08f20\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jan 17 12:24:11.961289 kubelet[1774]: E0117 12:24:11.961252 1774 remote_runtime.go:193] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"99d57e40516b29852a2d6a4523182e419c922871aee1301552de5016f9d08f20\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jan 17 12:24:11.961399 kubelet[1774]: E0117 12:24:11.961376 1774 kuberuntime_sandbox.go:72] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"99d57e40516b29852a2d6a4523182e419c922871aee1301552de5016f9d08f20\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-system/csi-node-driver-b66m9" Jan 17 12:24:11.961462 kubelet[1774]: E0117 12:24:11.961408 1774 kuberuntime_manager.go:1172] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"99d57e40516b29852a2d6a4523182e419c922871aee1301552de5016f9d08f20\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file 
or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-system/csi-node-driver-b66m9" Jan 17 12:24:11.961578 kubelet[1774]: E0117 12:24:11.961555 1774 pod_workers.go:1298] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"csi-node-driver-b66m9_calico-system(dda61e94-21b3-42b1-baef-aafab38d13ff)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"csi-node-driver-b66m9_calico-system(dda61e94-21b3-42b1-baef-aafab38d13ff)\\\": rpc error: code = Unknown desc = failed to setup network for sandbox \\\"99d57e40516b29852a2d6a4523182e419c922871aee1301552de5016f9d08f20\\\": plugin type=\\\"calico\\\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="calico-system/csi-node-driver-b66m9" podUID="dda61e94-21b3-42b1-baef-aafab38d13ff" Jan 17 12:24:11.961907 systemd[1]: run-containerd-io.containerd.grpc.v1.cri-sandboxes-99d57e40516b29852a2d6a4523182e419c922871aee1301552de5016f9d08f20-shm.mount: Deactivated successfully. Jan 17 12:24:11.967833 containerd[1462]: time="2025-01-17T12:24:11.967670748Z" level=error msg="Failed to destroy network for sandbox \"d0deefa24fa0792e646c4fbdea8c0d20b27684802cc23609c0ddf681ecfe89ef\"" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jan 17 12:24:11.968160 containerd[1462]: time="2025-01-17T12:24:11.968121183Z" level=error msg="encountered an error cleaning up failed sandbox \"d0deefa24fa0792e646c4fbdea8c0d20b27684802cc23609c0ddf681ecfe89ef\", marking sandbox state as SANDBOX_UNKNOWN" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jan 17 12:24:11.968232 containerd[1462]: time="2025-01-17T12:24:11.968203247Z" level=error msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-kube-controllers-9fc77b6d-kp6sc,Uid:03c15592-0c9f-4dd8-ac51-f2e8e4fe7f05,Namespace:calico-system,Attempt:0,} failed, error" error="failed to setup network for sandbox \"d0deefa24fa0792e646c4fbdea8c0d20b27684802cc23609c0ddf681ecfe89ef\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jan 17 12:24:11.968498 kubelet[1774]: E0117 12:24:11.968467 1774 remote_runtime.go:193] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"d0deefa24fa0792e646c4fbdea8c0d20b27684802cc23609c0ddf681ecfe89ef\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jan 17 12:24:11.968558 kubelet[1774]: E0117 12:24:11.968533 1774 kuberuntime_sandbox.go:72] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"d0deefa24fa0792e646c4fbdea8c0d20b27684802cc23609c0ddf681ecfe89ef\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-system/calico-kube-controllers-9fc77b6d-kp6sc" Jan 17 12:24:11.968605 kubelet[1774]: E0117 12:24:11.968559 1774 
kuberuntime_manager.go:1172] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"d0deefa24fa0792e646c4fbdea8c0d20b27684802cc23609c0ddf681ecfe89ef\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-system/calico-kube-controllers-9fc77b6d-kp6sc" Jan 17 12:24:11.968638 kubelet[1774]: E0117 12:24:11.968620 1774 pod_workers.go:1298] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"calico-kube-controllers-9fc77b6d-kp6sc_calico-system(03c15592-0c9f-4dd8-ac51-f2e8e4fe7f05)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"calico-kube-controllers-9fc77b6d-kp6sc_calico-system(03c15592-0c9f-4dd8-ac51-f2e8e4fe7f05)\\\": rpc error: code = Unknown desc = failed to setup network for sandbox \\\"d0deefa24fa0792e646c4fbdea8c0d20b27684802cc23609c0ddf681ecfe89ef\\\": plugin type=\\\"calico\\\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="calico-system/calico-kube-controllers-9fc77b6d-kp6sc" podUID="03c15592-0c9f-4dd8-ac51-f2e8e4fe7f05" Jan 17 12:24:11.971462 containerd[1462]: time="2025-01-17T12:24:11.971396455Z" level=error msg="Failed to destroy network for sandbox \"918cbe2b4bc05e9f8d9574f765c81dbb59033a6738cf6996840e718becf3a788\"" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jan 17 12:24:11.971916 containerd[1462]: time="2025-01-17T12:24:11.971873430Z" level=error msg="encountered an error cleaning up failed sandbox \"918cbe2b4bc05e9f8d9574f765c81dbb59033a6738cf6996840e718becf3a788\", marking sandbox state as SANDBOX_UNKNOWN" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jan 17 12:24:11.971981 containerd[1462]: time="2025-01-17T12:24:11.971928182Z" level=error msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-76f75df574-q4bsh,Uid:01f58051-b831-4680-bcde-ee72a125416f,Namespace:kube-system,Attempt:0,} failed, error" error="failed to setup network for sandbox \"918cbe2b4bc05e9f8d9574f765c81dbb59033a6738cf6996840e718becf3a788\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jan 17 12:24:11.972166 kubelet[1774]: E0117 12:24:11.972140 1774 remote_runtime.go:193] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"918cbe2b4bc05e9f8d9574f765c81dbb59033a6738cf6996840e718becf3a788\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jan 17 12:24:11.972217 kubelet[1774]: E0117 12:24:11.972197 1774 kuberuntime_sandbox.go:72] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"918cbe2b4bc05e9f8d9574f765c81dbb59033a6738cf6996840e718becf3a788\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted 
/var/lib/calico/" pod="kube-system/coredns-76f75df574-q4bsh" Jan 17 12:24:11.972266 kubelet[1774]: E0117 12:24:11.972219 1774 kuberuntime_manager.go:1172] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"918cbe2b4bc05e9f8d9574f765c81dbb59033a6738cf6996840e718becf3a788\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="kube-system/coredns-76f75df574-q4bsh" Jan 17 12:24:11.972301 kubelet[1774]: E0117 12:24:11.972271 1774 pod_workers.go:1298] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"coredns-76f75df574-q4bsh_kube-system(01f58051-b831-4680-bcde-ee72a125416f)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"coredns-76f75df574-q4bsh_kube-system(01f58051-b831-4680-bcde-ee72a125416f)\\\": rpc error: code = Unknown desc = failed to setup network for sandbox \\\"918cbe2b4bc05e9f8d9574f765c81dbb59033a6738cf6996840e718becf3a788\\\": plugin type=\\\"calico\\\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="kube-system/coredns-76f75df574-q4bsh" podUID="01f58051-b831-4680-bcde-ee72a125416f" Jan 17 12:24:11.979157 containerd[1462]: time="2025-01-17T12:24:11.979102607Z" level=error msg="Failed to destroy network for sandbox \"7a7a852025bfa9adc75020f278431ddbe53f151a058f596183613c9f48b6d0e2\"" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jan 17 12:24:11.979496 containerd[1462]: time="2025-01-17T12:24:11.979463794Z" level=error msg="encountered an error cleaning up failed sandbox \"7a7a852025bfa9adc75020f278431ddbe53f151a058f596183613c9f48b6d0e2\", marking sandbox state as SANDBOX_UNKNOWN" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jan 17 12:24:11.979546 containerd[1462]: time="2025-01-17T12:24:11.979508729Z" level=error msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-76f75df574-nlhgn,Uid:04af4684-e94d-4b3d-a664-20ff360fb4b6,Namespace:kube-system,Attempt:0,} failed, error" error="failed to setup network for sandbox \"7a7a852025bfa9adc75020f278431ddbe53f151a058f596183613c9f48b6d0e2\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jan 17 12:24:11.979714 kubelet[1774]: E0117 12:24:11.979678 1774 remote_runtime.go:193] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"7a7a852025bfa9adc75020f278431ddbe53f151a058f596183613c9f48b6d0e2\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jan 17 12:24:11.979762 kubelet[1774]: E0117 12:24:11.979731 1774 kuberuntime_sandbox.go:72] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"7a7a852025bfa9adc75020f278431ddbe53f151a058f596183613c9f48b6d0e2\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check 
that the calico/node container is running and has mounted /var/lib/calico/" pod="kube-system/coredns-76f75df574-nlhgn" Jan 17 12:24:11.979762 kubelet[1774]: E0117 12:24:11.979752 1774 kuberuntime_manager.go:1172] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"7a7a852025bfa9adc75020f278431ddbe53f151a058f596183613c9f48b6d0e2\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="kube-system/coredns-76f75df574-nlhgn" Jan 17 12:24:11.979809 kubelet[1774]: E0117 12:24:11.979802 1774 pod_workers.go:1298] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"coredns-76f75df574-nlhgn_kube-system(04af4684-e94d-4b3d-a664-20ff360fb4b6)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"coredns-76f75df574-nlhgn_kube-system(04af4684-e94d-4b3d-a664-20ff360fb4b6)\\\": rpc error: code = Unknown desc = failed to setup network for sandbox \\\"7a7a852025bfa9adc75020f278431ddbe53f151a058f596183613c9f48b6d0e2\\\": plugin type=\\\"calico\\\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="kube-system/coredns-76f75df574-nlhgn" podUID="04af4684-e94d-4b3d-a664-20ff360fb4b6" Jan 17 12:24:12.219018 kubelet[1774]: I0117 12:24:12.218992 1774 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="99d57e40516b29852a2d6a4523182e419c922871aee1301552de5016f9d08f20" Jan 17 12:24:12.219714 containerd[1462]: time="2025-01-17T12:24:12.219659056Z" level=info msg="StopPodSandbox for \"99d57e40516b29852a2d6a4523182e419c922871aee1301552de5016f9d08f20\"" Jan 17 12:24:12.220079 containerd[1462]: time="2025-01-17T12:24:12.219860765Z" level=info msg="Ensure that sandbox 99d57e40516b29852a2d6a4523182e419c922871aee1301552de5016f9d08f20 in task-service has been cleanup successfully" Jan 17 12:24:12.221348 kubelet[1774]: E0117 12:24:12.221208 1774 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jan 17 12:24:12.221834 kubelet[1774]: I0117 12:24:12.221803 1774 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="7a7a852025bfa9adc75020f278431ddbe53f151a058f596183613c9f48b6d0e2" Jan 17 12:24:12.221911 containerd[1462]: time="2025-01-17T12:24:12.221835167Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/node:v3.29.1\"" Jan 17 12:24:12.224513 kubelet[1774]: I0117 12:24:12.224415 1774 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="918cbe2b4bc05e9f8d9574f765c81dbb59033a6738cf6996840e718becf3a788" Jan 17 12:24:12.225461 kubelet[1774]: I0117 12:24:12.225442 1774 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="d0deefa24fa0792e646c4fbdea8c0d20b27684802cc23609c0ddf681ecfe89ef" Jan 17 12:24:12.226130 containerd[1462]: time="2025-01-17T12:24:12.226095747Z" level=info msg="StopPodSandbox for \"d0deefa24fa0792e646c4fbdea8c0d20b27684802cc23609c0ddf681ecfe89ef\"" Jan 17 12:24:12.226381 containerd[1462]: time="2025-01-17T12:24:12.226342660Z" level=info msg="Ensure that sandbox d0deefa24fa0792e646c4fbdea8c0d20b27684802cc23609c0ddf681ecfe89ef in task-service has been cleanup successfully" Jan 17 12:24:12.240435 containerd[1462]: time="2025-01-17T12:24:12.238933543Z" 
level=info msg="StopPodSandbox for \"7a7a852025bfa9adc75020f278431ddbe53f151a058f596183613c9f48b6d0e2\"" Jan 17 12:24:12.240435 containerd[1462]: time="2025-01-17T12:24:12.239145941Z" level=info msg="Ensure that sandbox 7a7a852025bfa9adc75020f278431ddbe53f151a058f596183613c9f48b6d0e2 in task-service has been cleanup successfully" Jan 17 12:24:12.243991 containerd[1462]: time="2025-01-17T12:24:12.243756878Z" level=info msg="StopPodSandbox for \"918cbe2b4bc05e9f8d9574f765c81dbb59033a6738cf6996840e718becf3a788\"" Jan 17 12:24:12.244475 containerd[1462]: time="2025-01-17T12:24:12.244451781Z" level=info msg="Ensure that sandbox 918cbe2b4bc05e9f8d9574f765c81dbb59033a6738cf6996840e718becf3a788 in task-service has been cleanup successfully" Jan 17 12:24:12.260915 containerd[1462]: time="2025-01-17T12:24:12.260864421Z" level=error msg="StopPodSandbox for \"99d57e40516b29852a2d6a4523182e419c922871aee1301552de5016f9d08f20\" failed" error="failed to destroy network for sandbox \"99d57e40516b29852a2d6a4523182e419c922871aee1301552de5016f9d08f20\": plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jan 17 12:24:12.261141 kubelet[1774]: E0117 12:24:12.261116 1774 remote_runtime.go:222] "StopPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to destroy network for sandbox \"99d57e40516b29852a2d6a4523182e419c922871aee1301552de5016f9d08f20\": plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" podSandboxID="99d57e40516b29852a2d6a4523182e419c922871aee1301552de5016f9d08f20" Jan 17 12:24:12.261230 kubelet[1774]: E0117 12:24:12.261193 1774 kuberuntime_manager.go:1381] "Failed to stop sandbox" podSandboxID={"Type":"containerd","ID":"99d57e40516b29852a2d6a4523182e419c922871aee1301552de5016f9d08f20"} Jan 17 12:24:12.261230 kubelet[1774]: E0117 12:24:12.261227 1774 kuberuntime_manager.go:1081] "killPodWithSyncResult failed" err="failed to \"KillPodSandbox\" for \"dda61e94-21b3-42b1-baef-aafab38d13ff\" with KillPodSandboxError: \"rpc error: code = Unknown desc = failed to destroy network for sandbox \\\"99d57e40516b29852a2d6a4523182e419c922871aee1301552de5016f9d08f20\\\": plugin type=\\\"calico\\\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" Jan 17 12:24:12.261355 kubelet[1774]: E0117 12:24:12.261257 1774 pod_workers.go:1298] "Error syncing pod, skipping" err="failed to \"KillPodSandbox\" for \"dda61e94-21b3-42b1-baef-aafab38d13ff\" with KillPodSandboxError: \"rpc error: code = Unknown desc = failed to destroy network for sandbox \\\"99d57e40516b29852a2d6a4523182e419c922871aee1301552de5016f9d08f20\\\": plugin type=\\\"calico\\\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="calico-system/csi-node-driver-b66m9" podUID="dda61e94-21b3-42b1-baef-aafab38d13ff" Jan 17 12:24:12.264808 containerd[1462]: time="2025-01-17T12:24:12.264748916Z" level=error msg="StopPodSandbox for \"d0deefa24fa0792e646c4fbdea8c0d20b27684802cc23609c0ddf681ecfe89ef\" failed" error="failed to destroy network for sandbox \"d0deefa24fa0792e646c4fbdea8c0d20b27684802cc23609c0ddf681ecfe89ef\": plugin type=\"calico\" failed (delete): stat 
/var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jan 17 12:24:12.265042 kubelet[1774]: E0117 12:24:12.265003 1774 remote_runtime.go:222] "StopPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to destroy network for sandbox \"d0deefa24fa0792e646c4fbdea8c0d20b27684802cc23609c0ddf681ecfe89ef\": plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" podSandboxID="d0deefa24fa0792e646c4fbdea8c0d20b27684802cc23609c0ddf681ecfe89ef" Jan 17 12:24:12.265107 kubelet[1774]: E0117 12:24:12.265062 1774 kuberuntime_manager.go:1381] "Failed to stop sandbox" podSandboxID={"Type":"containerd","ID":"d0deefa24fa0792e646c4fbdea8c0d20b27684802cc23609c0ddf681ecfe89ef"} Jan 17 12:24:12.265135 kubelet[1774]: E0117 12:24:12.265108 1774 kuberuntime_manager.go:1081] "killPodWithSyncResult failed" err="failed to \"KillPodSandbox\" for \"03c15592-0c9f-4dd8-ac51-f2e8e4fe7f05\" with KillPodSandboxError: \"rpc error: code = Unknown desc = failed to destroy network for sandbox \\\"d0deefa24fa0792e646c4fbdea8c0d20b27684802cc23609c0ddf681ecfe89ef\\\": plugin type=\\\"calico\\\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" Jan 17 12:24:12.265201 kubelet[1774]: E0117 12:24:12.265136 1774 pod_workers.go:1298] "Error syncing pod, skipping" err="failed to \"KillPodSandbox\" for \"03c15592-0c9f-4dd8-ac51-f2e8e4fe7f05\" with KillPodSandboxError: \"rpc error: code = Unknown desc = failed to destroy network for sandbox \\\"d0deefa24fa0792e646c4fbdea8c0d20b27684802cc23609c0ddf681ecfe89ef\\\": plugin type=\\\"calico\\\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="calico-system/calico-kube-controllers-9fc77b6d-kp6sc" podUID="03c15592-0c9f-4dd8-ac51-f2e8e4fe7f05" Jan 17 12:24:12.268643 containerd[1462]: time="2025-01-17T12:24:12.268609285Z" level=error msg="StopPodSandbox for \"7a7a852025bfa9adc75020f278431ddbe53f151a058f596183613c9f48b6d0e2\" failed" error="failed to destroy network for sandbox \"7a7a852025bfa9adc75020f278431ddbe53f151a058f596183613c9f48b6d0e2\": plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jan 17 12:24:12.268878 kubelet[1774]: E0117 12:24:12.268811 1774 remote_runtime.go:222] "StopPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to destroy network for sandbox \"7a7a852025bfa9adc75020f278431ddbe53f151a058f596183613c9f48b6d0e2\": plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" podSandboxID="7a7a852025bfa9adc75020f278431ddbe53f151a058f596183613c9f48b6d0e2" Jan 17 12:24:12.268878 kubelet[1774]: E0117 12:24:12.268839 1774 kuberuntime_manager.go:1381] "Failed to stop sandbox" podSandboxID={"Type":"containerd","ID":"7a7a852025bfa9adc75020f278431ddbe53f151a058f596183613c9f48b6d0e2"} Jan 17 12:24:12.268878 kubelet[1774]: E0117 12:24:12.268870 1774 kuberuntime_manager.go:1081] "killPodWithSyncResult failed" err="failed to \"KillPodSandbox\" for 
\"04af4684-e94d-4b3d-a664-20ff360fb4b6\" with KillPodSandboxError: \"rpc error: code = Unknown desc = failed to destroy network for sandbox \\\"7a7a852025bfa9adc75020f278431ddbe53f151a058f596183613c9f48b6d0e2\\\": plugin type=\\\"calico\\\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" Jan 17 12:24:12.269020 kubelet[1774]: E0117 12:24:12.268893 1774 pod_workers.go:1298] "Error syncing pod, skipping" err="failed to \"KillPodSandbox\" for \"04af4684-e94d-4b3d-a664-20ff360fb4b6\" with KillPodSandboxError: \"rpc error: code = Unknown desc = failed to destroy network for sandbox \\\"7a7a852025bfa9adc75020f278431ddbe53f151a058f596183613c9f48b6d0e2\\\": plugin type=\\\"calico\\\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="kube-system/coredns-76f75df574-nlhgn" podUID="04af4684-e94d-4b3d-a664-20ff360fb4b6" Jan 17 12:24:12.279331 containerd[1462]: time="2025-01-17T12:24:12.279278434Z" level=error msg="StopPodSandbox for \"918cbe2b4bc05e9f8d9574f765c81dbb59033a6738cf6996840e718becf3a788\" failed" error="failed to destroy network for sandbox \"918cbe2b4bc05e9f8d9574f765c81dbb59033a6738cf6996840e718becf3a788\": plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jan 17 12:24:12.279534 kubelet[1774]: E0117 12:24:12.279505 1774 remote_runtime.go:222] "StopPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to destroy network for sandbox \"918cbe2b4bc05e9f8d9574f765c81dbb59033a6738cf6996840e718becf3a788\": plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" podSandboxID="918cbe2b4bc05e9f8d9574f765c81dbb59033a6738cf6996840e718becf3a788" Jan 17 12:24:12.279592 kubelet[1774]: E0117 12:24:12.279548 1774 kuberuntime_manager.go:1381] "Failed to stop sandbox" podSandboxID={"Type":"containerd","ID":"918cbe2b4bc05e9f8d9574f765c81dbb59033a6738cf6996840e718becf3a788"} Jan 17 12:24:12.279628 kubelet[1774]: E0117 12:24:12.279605 1774 kuberuntime_manager.go:1081] "killPodWithSyncResult failed" err="failed to \"KillPodSandbox\" for \"01f58051-b831-4680-bcde-ee72a125416f\" with KillPodSandboxError: \"rpc error: code = Unknown desc = failed to destroy network for sandbox \\\"918cbe2b4bc05e9f8d9574f765c81dbb59033a6738cf6996840e718becf3a788\\\": plugin type=\\\"calico\\\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" Jan 17 12:24:12.279716 kubelet[1774]: E0117 12:24:12.279642 1774 pod_workers.go:1298] "Error syncing pod, skipping" err="failed to \"KillPodSandbox\" for \"01f58051-b831-4680-bcde-ee72a125416f\" with KillPodSandboxError: \"rpc error: code = Unknown desc = failed to destroy network for sandbox \\\"918cbe2b4bc05e9f8d9574f765c81dbb59033a6738cf6996840e718becf3a788\\\": plugin type=\\\"calico\\\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="kube-system/coredns-76f75df574-q4bsh" podUID="01f58051-b831-4680-bcde-ee72a125416f" Jan 17 12:24:12.363899 kubelet[1774]: E0117 
12:24:12.363875 1774 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Jan 17 12:24:12.421943 kubelet[1774]: E0117 12:24:12.421917 1774 projected.go:294] Couldn't get configMap calico-apiserver/kube-root-ca.crt: failed to sync configmap cache: timed out waiting for the condition Jan 17 12:24:12.421943 kubelet[1774]: E0117 12:24:12.421940 1774 projected.go:200] Error preparing data for projected volume kube-api-access-hfn4s for pod calico-apiserver/calico-apiserver-68f79c84d4-xr6b6: failed to sync configmap cache: timed out waiting for the condition Jan 17 12:24:12.422019 kubelet[1774]: E0117 12:24:12.422006 1774 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/projected/d48ea218-f377-4f0d-a97e-292b17df6d89-kube-api-access-hfn4s podName:d48ea218-f377-4f0d-a97e-292b17df6d89 nodeName:}" failed. No retries permitted until 2025-01-17 12:24:12.921983126 +0000 UTC m=+16.185938925 (durationBeforeRetry 500ms). Error: MountVolume.SetUp failed for volume "kube-api-access-hfn4s" (UniqueName: "kubernetes.io/projected/d48ea218-f377-4f0d-a97e-292b17df6d89-kube-api-access-hfn4s") pod "calico-apiserver-68f79c84d4-xr6b6" (UID: "d48ea218-f377-4f0d-a97e-292b17df6d89") : failed to sync configmap cache: timed out waiting for the condition Jan 17 12:24:12.423592 kubelet[1774]: E0117 12:24:12.423556 1774 projected.go:294] Couldn't get configMap calico-apiserver/kube-root-ca.crt: failed to sync configmap cache: timed out waiting for the condition Jan 17 12:24:12.423592 kubelet[1774]: E0117 12:24:12.423575 1774 projected.go:200] Error preparing data for projected volume kube-api-access-kxwjw for pod calico-apiserver/calico-apiserver-68f79c84d4-6mrf5: failed to sync configmap cache: timed out waiting for the condition Jan 17 12:24:12.423657 kubelet[1774]: E0117 12:24:12.423615 1774 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/projected/3229aaed-5c2c-4c71-a248-e6141b8a2e7f-kube-api-access-kxwjw podName:3229aaed-5c2c-4c71-a248-e6141b8a2e7f nodeName:}" failed. No retries permitted until 2025-01-17 12:24:12.923603285 +0000 UTC m=+16.187559084 (durationBeforeRetry 500ms). Error: MountVolume.SetUp failed for volume "kube-api-access-kxwjw" (UniqueName: "kubernetes.io/projected/3229aaed-5c2c-4c71-a248-e6141b8a2e7f-kube-api-access-kxwjw") pod "calico-apiserver-68f79c84d4-6mrf5" (UID: "3229aaed-5c2c-4c71-a248-e6141b8a2e7f") : failed to sync configmap cache: timed out waiting for the condition Jan 17 12:24:12.878458 systemd[1]: run-containerd-io.containerd.grpc.v1.cri-sandboxes-7a7a852025bfa9adc75020f278431ddbe53f151a058f596183613c9f48b6d0e2-shm.mount: Deactivated successfully. Jan 17 12:24:12.878583 systemd[1]: run-containerd-io.containerd.grpc.v1.cri-sandboxes-918cbe2b4bc05e9f8d9574f765c81dbb59033a6738cf6996840e718becf3a788-shm.mount: Deactivated successfully. Jan 17 12:24:12.878655 systemd[1]: run-containerd-io.containerd.grpc.v1.cri-sandboxes-d0deefa24fa0792e646c4fbdea8c0d20b27684802cc23609c0ddf681ecfe89ef-shm.mount: Deactivated successfully. 
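Every sandbox create and teardown failure above reduces to the same precondition: Calico's CNI plugin stats /var/lib/calico/nodename before it will perform any ADD or DEL, and that file only appears once the calico/node container is running and has written it into the hostPath-mounted /var/lib/calico/ directory. A minimal Go sketch of that gate (the exact plugin internals are assumed; the error string is the one repeated throughout the log):

```go
// Sketch only: approximates the nodename gate Calico's CNI plugin applies
// before handling any sandbox ADD or DEL.
package main

import (
	"fmt"
	"os"
)

const nodenameFile = "/var/lib/calico/nodename"

func nodename() (string, error) {
	data, err := os.ReadFile(nodenameFile)
	if os.IsNotExist(err) {
		// Matches the hint containerd and the kubelet keep relaying above.
		return "", fmt.Errorf("stat %s: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/", nodenameFile)
	}
	return string(data), err
}

func main() {
	name, err := nodename()
	if err != nil {
		fmt.Fprintln(os.Stderr, err)
		os.Exit(1)
	}
	fmt.Println("CNI ADD/DEL may proceed for node", name)
}
```

Until that file exists, both directions fail: RunPodSandbox cannot set networking up ("failed (add)") and the cleanup path cannot tear it down ("failed (delete)"), which is why the kubelet is left holding sandboxes in SANDBOX_UNKNOWN and retrying StopPodSandbox indefinitely.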
Jan 17 12:24:13.193723 containerd[1462]: time="2025-01-17T12:24:13.193590896Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-apiserver-68f79c84d4-xr6b6,Uid:d48ea218-f377-4f0d-a97e-292b17df6d89,Namespace:calico-apiserver,Attempt:0,}" Jan 17 12:24:13.197238 containerd[1462]: time="2025-01-17T12:24:13.197181790Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-apiserver-68f79c84d4-6mrf5,Uid:3229aaed-5c2c-4c71-a248-e6141b8a2e7f,Namespace:calico-apiserver,Attempt:0,}" Jan 17 12:24:13.364172 kubelet[1774]: E0117 12:24:13.364137 1774 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Jan 17 12:24:13.493784 containerd[1462]: time="2025-01-17T12:24:13.493727630Z" level=error msg="Failed to destroy network for sandbox \"c73f963120c36550314792c43b88b2d6f709a08f08037497692b316525e358bf\"" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jan 17 12:24:13.494351 containerd[1462]: time="2025-01-17T12:24:13.494314530Z" level=error msg="encountered an error cleaning up failed sandbox \"c73f963120c36550314792c43b88b2d6f709a08f08037497692b316525e358bf\", marking sandbox state as SANDBOX_UNKNOWN" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jan 17 12:24:13.494409 containerd[1462]: time="2025-01-17T12:24:13.494372519Z" level=error msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-apiserver-68f79c84d4-xr6b6,Uid:d48ea218-f377-4f0d-a97e-292b17df6d89,Namespace:calico-apiserver,Attempt:0,} failed, error" error="failed to setup network for sandbox \"c73f963120c36550314792c43b88b2d6f709a08f08037497692b316525e358bf\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jan 17 12:24:13.494689 kubelet[1774]: E0117 12:24:13.494664 1774 remote_runtime.go:193] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"c73f963120c36550314792c43b88b2d6f709a08f08037497692b316525e358bf\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jan 17 12:24:13.494752 kubelet[1774]: E0117 12:24:13.494744 1774 kuberuntime_sandbox.go:72] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"c73f963120c36550314792c43b88b2d6f709a08f08037497692b316525e358bf\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-apiserver/calico-apiserver-68f79c84d4-xr6b6" Jan 17 12:24:13.494776 kubelet[1774]: E0117 12:24:13.494765 1774 kuberuntime_manager.go:1172] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"c73f963120c36550314792c43b88b2d6f709a08f08037497692b316525e358bf\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-apiserver/calico-apiserver-68f79c84d4-xr6b6" Jan 17 
12:24:13.494850 kubelet[1774]: E0117 12:24:13.494837 1774 pod_workers.go:1298] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"calico-apiserver-68f79c84d4-xr6b6_calico-apiserver(d48ea218-f377-4f0d-a97e-292b17df6d89)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"calico-apiserver-68f79c84d4-xr6b6_calico-apiserver(d48ea218-f377-4f0d-a97e-292b17df6d89)\\\": rpc error: code = Unknown desc = failed to setup network for sandbox \\\"c73f963120c36550314792c43b88b2d6f709a08f08037497692b316525e358bf\\\": plugin type=\\\"calico\\\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="calico-apiserver/calico-apiserver-68f79c84d4-xr6b6" podUID="d48ea218-f377-4f0d-a97e-292b17df6d89" Jan 17 12:24:13.496138 containerd[1462]: time="2025-01-17T12:24:13.496098847Z" level=error msg="Failed to destroy network for sandbox \"501de16cec8e1798d41d55d10c4e0c1ba21f146fe195d70d8c252862106d0d5b\"" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jan 17 12:24:13.496424 containerd[1462]: time="2025-01-17T12:24:13.496395242Z" level=error msg="encountered an error cleaning up failed sandbox \"501de16cec8e1798d41d55d10c4e0c1ba21f146fe195d70d8c252862106d0d5b\", marking sandbox state as SANDBOX_UNKNOWN" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jan 17 12:24:13.496457 containerd[1462]: time="2025-01-17T12:24:13.496431560Z" level=error msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-apiserver-68f79c84d4-6mrf5,Uid:3229aaed-5c2c-4c71-a248-e6141b8a2e7f,Namespace:calico-apiserver,Attempt:0,} failed, error" error="failed to setup network for sandbox \"501de16cec8e1798d41d55d10c4e0c1ba21f146fe195d70d8c252862106d0d5b\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jan 17 12:24:13.496586 kubelet[1774]: E0117 12:24:13.496561 1774 remote_runtime.go:193] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"501de16cec8e1798d41d55d10c4e0c1ba21f146fe195d70d8c252862106d0d5b\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jan 17 12:24:13.496624 kubelet[1774]: E0117 12:24:13.496600 1774 kuberuntime_sandbox.go:72] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"501de16cec8e1798d41d55d10c4e0c1ba21f146fe195d70d8c252862106d0d5b\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-apiserver/calico-apiserver-68f79c84d4-6mrf5" Jan 17 12:24:13.496624 kubelet[1774]: E0117 12:24:13.496616 1774 kuberuntime_manager.go:1172] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"501de16cec8e1798d41d55d10c4e0c1ba21f146fe195d70d8c252862106d0d5b\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check 
that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-apiserver/calico-apiserver-68f79c84d4-6mrf5" Jan 17 12:24:13.496689 kubelet[1774]: E0117 12:24:13.496650 1774 pod_workers.go:1298] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"calico-apiserver-68f79c84d4-6mrf5_calico-apiserver(3229aaed-5c2c-4c71-a248-e6141b8a2e7f)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"calico-apiserver-68f79c84d4-6mrf5_calico-apiserver(3229aaed-5c2c-4c71-a248-e6141b8a2e7f)\\\": rpc error: code = Unknown desc = failed to setup network for sandbox \\\"501de16cec8e1798d41d55d10c4e0c1ba21f146fe195d70d8c252862106d0d5b\\\": plugin type=\\\"calico\\\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="calico-apiserver/calico-apiserver-68f79c84d4-6mrf5" podUID="3229aaed-5c2c-4c71-a248-e6141b8a2e7f" Jan 17 12:24:13.896418 systemd[1]: run-containerd-io.containerd.grpc.v1.cri-sandboxes-501de16cec8e1798d41d55d10c4e0c1ba21f146fe195d70d8c252862106d0d5b-shm.mount: Deactivated successfully. Jan 17 12:24:13.896821 systemd[1]: run-containerd-io.containerd.grpc.v1.cri-sandboxes-c73f963120c36550314792c43b88b2d6f709a08f08037497692b316525e358bf-shm.mount: Deactivated successfully. Jan 17 12:24:14.229488 kubelet[1774]: I0117 12:24:14.229447 1774 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="c73f963120c36550314792c43b88b2d6f709a08f08037497692b316525e358bf" Jan 17 12:24:14.229943 containerd[1462]: time="2025-01-17T12:24:14.229912539Z" level=info msg="StopPodSandbox for \"c73f963120c36550314792c43b88b2d6f709a08f08037497692b316525e358bf\"" Jan 17 12:24:14.230095 kubelet[1774]: I0117 12:24:14.230074 1774 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="501de16cec8e1798d41d55d10c4e0c1ba21f146fe195d70d8c252862106d0d5b" Jan 17 12:24:14.230145 containerd[1462]: time="2025-01-17T12:24:14.230081436Z" level=info msg="Ensure that sandbox c73f963120c36550314792c43b88b2d6f709a08f08037497692b316525e358bf in task-service has been cleanup successfully" Jan 17 12:24:14.230690 containerd[1462]: time="2025-01-17T12:24:14.230409421Z" level=info msg="StopPodSandbox for \"501de16cec8e1798d41d55d10c4e0c1ba21f146fe195d70d8c252862106d0d5b\"" Jan 17 12:24:14.230690 containerd[1462]: time="2025-01-17T12:24:14.230518215Z" level=info msg="Ensure that sandbox 501de16cec8e1798d41d55d10c4e0c1ba21f146fe195d70d8c252862106d0d5b in task-service has been cleanup successfully" Jan 17 12:24:14.255691 containerd[1462]: time="2025-01-17T12:24:14.255605873Z" level=error msg="StopPodSandbox for \"501de16cec8e1798d41d55d10c4e0c1ba21f146fe195d70d8c252862106d0d5b\" failed" error="failed to destroy network for sandbox \"501de16cec8e1798d41d55d10c4e0c1ba21f146fe195d70d8c252862106d0d5b\": plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jan 17 12:24:14.256011 kubelet[1774]: E0117 12:24:14.255985 1774 remote_runtime.go:222] "StopPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to destroy network for sandbox \"501de16cec8e1798d41d55d10c4e0c1ba21f146fe195d70d8c252862106d0d5b\": plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" 
podSandboxID="501de16cec8e1798d41d55d10c4e0c1ba21f146fe195d70d8c252862106d0d5b" Jan 17 12:24:14.256064 kubelet[1774]: E0117 12:24:14.256032 1774 kuberuntime_manager.go:1381] "Failed to stop sandbox" podSandboxID={"Type":"containerd","ID":"501de16cec8e1798d41d55d10c4e0c1ba21f146fe195d70d8c252862106d0d5b"} Jan 17 12:24:14.256089 kubelet[1774]: E0117 12:24:14.256068 1774 kuberuntime_manager.go:1081] "killPodWithSyncResult failed" err="failed to \"KillPodSandbox\" for \"3229aaed-5c2c-4c71-a248-e6141b8a2e7f\" with KillPodSandboxError: \"rpc error: code = Unknown desc = failed to destroy network for sandbox \\\"501de16cec8e1798d41d55d10c4e0c1ba21f146fe195d70d8c252862106d0d5b\\\": plugin type=\\\"calico\\\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" Jan 17 12:24:14.256148 kubelet[1774]: E0117 12:24:14.256101 1774 pod_workers.go:1298] "Error syncing pod, skipping" err="failed to \"KillPodSandbox\" for \"3229aaed-5c2c-4c71-a248-e6141b8a2e7f\" with KillPodSandboxError: \"rpc error: code = Unknown desc = failed to destroy network for sandbox \\\"501de16cec8e1798d41d55d10c4e0c1ba21f146fe195d70d8c252862106d0d5b\\\": plugin type=\\\"calico\\\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="calico-apiserver/calico-apiserver-68f79c84d4-6mrf5" podUID="3229aaed-5c2c-4c71-a248-e6141b8a2e7f" Jan 17 12:24:14.257463 containerd[1462]: time="2025-01-17T12:24:14.257421358Z" level=error msg="StopPodSandbox for \"c73f963120c36550314792c43b88b2d6f709a08f08037497692b316525e358bf\" failed" error="failed to destroy network for sandbox \"c73f963120c36550314792c43b88b2d6f709a08f08037497692b316525e358bf\": plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jan 17 12:24:14.257552 kubelet[1774]: E0117 12:24:14.257540 1774 remote_runtime.go:222] "StopPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to destroy network for sandbox \"c73f963120c36550314792c43b88b2d6f709a08f08037497692b316525e358bf\": plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" podSandboxID="c73f963120c36550314792c43b88b2d6f709a08f08037497692b316525e358bf" Jan 17 12:24:14.257591 kubelet[1774]: E0117 12:24:14.257560 1774 kuberuntime_manager.go:1381] "Failed to stop sandbox" podSandboxID={"Type":"containerd","ID":"c73f963120c36550314792c43b88b2d6f709a08f08037497692b316525e358bf"} Jan 17 12:24:14.257591 kubelet[1774]: E0117 12:24:14.257586 1774 kuberuntime_manager.go:1081] "killPodWithSyncResult failed" err="failed to \"KillPodSandbox\" for \"d48ea218-f377-4f0d-a97e-292b17df6d89\" with KillPodSandboxError: \"rpc error: code = Unknown desc = failed to destroy network for sandbox \\\"c73f963120c36550314792c43b88b2d6f709a08f08037497692b316525e358bf\\\": plugin type=\\\"calico\\\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" Jan 17 12:24:14.257655 kubelet[1774]: E0117 12:24:14.257608 1774 pod_workers.go:1298] "Error syncing pod, skipping" err="failed to \"KillPodSandbox\" for \"d48ea218-f377-4f0d-a97e-292b17df6d89\" with 
KillPodSandboxError: \"rpc error: code = Unknown desc = failed to destroy network for sandbox \\\"c73f963120c36550314792c43b88b2d6f709a08f08037497692b316525e358bf\\\": plugin type=\\\"calico\\\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="calico-apiserver/calico-apiserver-68f79c84d4-xr6b6" podUID="d48ea218-f377-4f0d-a97e-292b17df6d89" Jan 17 12:24:14.364981 kubelet[1774]: E0117 12:24:14.364925 1774 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Jan 17 12:24:15.366121 kubelet[1774]: E0117 12:24:15.366073 1774 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Jan 17 12:24:15.642801 kubelet[1774]: I0117 12:24:15.642656 1774 topology_manager.go:215] "Topology Admit Handler" podUID="cf4115fc-1224-431d-895d-b145d63715d3" podNamespace="default" podName="nginx-deployment-6d5f899847-869v5" Jan 17 12:24:15.655427 systemd[1]: Created slice kubepods-besteffort-podcf4115fc_1224_431d_895d_b145d63715d3.slice - libcontainer container kubepods-besteffort-podcf4115fc_1224_431d_895d_b145d63715d3.slice. Jan 17 12:24:15.673381 kubelet[1774]: I0117 12:24:15.673344 1774 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-vxdn7\" (UniqueName: \"kubernetes.io/projected/cf4115fc-1224-431d-895d-b145d63715d3-kube-api-access-vxdn7\") pod \"nginx-deployment-6d5f899847-869v5\" (UID: \"cf4115fc-1224-431d-895d-b145d63715d3\") " pod="default/nginx-deployment-6d5f899847-869v5" Jan 17 12:24:15.959821 containerd[1462]: time="2025-01-17T12:24:15.959717280Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:nginx-deployment-6d5f899847-869v5,Uid:cf4115fc-1224-431d-895d-b145d63715d3,Namespace:default,Attempt:0,}" Jan 17 12:24:16.367005 kubelet[1774]: E0117 12:24:16.366947 1774 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Jan 17 12:24:16.658674 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount3138444046.mount: Deactivated successfully. 
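The MountVolume.SetUp failures at 12:24:12.42 above are a separate, transient problem: the kube-api-access-* projected volumes bundle the service-account token with the kube-root-ca.crt ConfigMap, and setup cannot complete until the kubelet's ConfigMap cache syncs. Each failed operation is re-queued with an exponentially growing durationBeforeRetry, starting at the 500ms visible in the log. A sketch of that schedule, assuming the upstream kubelet defaults (500ms initial delay, doubling per failure, capped at 2m2s):

```go
// Sketch of the kubelet's volume-operation retry schedule; the 500ms
// initial delay is taken from the log, the doubling and the 2m2s cap are
// assumed upstream defaults.
package main

import (
	"fmt"
	"time"
)

func main() {
	const maxDelay = 2*time.Minute + 2*time.Second
	delay := 500 * time.Millisecond
	for attempt := 1; attempt <= 10; attempt++ {
		fmt.Printf("attempt %d fails -> no retries permitted for %s\n", attempt, delay)
		if delay *= 2; delay > maxDelay {
			delay = maxDelay
		}
	}
}
```

Here the retries succeed on the first repeat: both calico-apiserver pods get their volumes and reach RunPodSandbox at 12:24:13, where they then hit the same Calico nodename failure as everything else.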
Jan 17 12:24:17.300129 containerd[1462]: time="2025-01-17T12:24:17.300060663Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/node:v3.29.1\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jan 17 12:24:17.303904 containerd[1462]: time="2025-01-17T12:24:17.303845604Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/node:v3.29.1: active requests=0, bytes read=142742010" Jan 17 12:24:17.305417 containerd[1462]: time="2025-01-17T12:24:17.305119361Z" level=info msg="ImageCreate event name:\"sha256:feb26d4585d68e875d9bd9bd6c27ea9f2d5c9ed9ef70f8b8cb0ebb0559a1d664\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jan 17 12:24:17.310995 containerd[1462]: time="2025-01-17T12:24:17.310948688Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/node@sha256:99c3917516efe1f807a0cfdf2d14b628b7c5cc6bd8a9ee5a253154f31756bea1\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jan 17 12:24:17.311598 containerd[1462]: time="2025-01-17T12:24:17.311482735Z" level=info msg="Pulled image \"ghcr.io/flatcar/calico/node:v3.29.1\" with image id \"sha256:feb26d4585d68e875d9bd9bd6c27ea9f2d5c9ed9ef70f8b8cb0ebb0559a1d664\", repo tag \"ghcr.io/flatcar/calico/node:v3.29.1\", repo digest \"ghcr.io/flatcar/calico/node@sha256:99c3917516efe1f807a0cfdf2d14b628b7c5cc6bd8a9ee5a253154f31756bea1\", size \"142741872\" in 5.089612221s" Jan 17 12:24:17.311598 containerd[1462]: time="2025-01-17T12:24:17.311513283Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/node:v3.29.1\" returns image reference \"sha256:feb26d4585d68e875d9bd9bd6c27ea9f2d5c9ed9ef70f8b8cb0ebb0559a1d664\"" Jan 17 12:24:17.323949 containerd[1462]: time="2025-01-17T12:24:17.323897723Z" level=info msg="CreateContainer within sandbox \"042623025d32332d5bad39431a6b5691badb6c90ccb542ad679672f857c7d40a\" for container &ContainerMetadata{Name:calico-node,Attempt:0,}" Jan 17 12:24:17.340299 containerd[1462]: time="2025-01-17T12:24:17.340253652Z" level=info msg="CreateContainer within sandbox \"042623025d32332d5bad39431a6b5691badb6c90ccb542ad679672f857c7d40a\" for &ContainerMetadata{Name:calico-node,Attempt:0,} returns container id \"bb3aec4c691425f909bec7ccecabec5bd89ccae1a2a1ae7a4da0e84ff903af52\"" Jan 17 12:24:17.340993 containerd[1462]: time="2025-01-17T12:24:17.340916726Z" level=info msg="StartContainer for \"bb3aec4c691425f909bec7ccecabec5bd89ccae1a2a1ae7a4da0e84ff903af52\"" Jan 17 12:24:17.356207 kubelet[1774]: E0117 12:24:17.356171 1774 file.go:104] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Jan 17 12:24:17.360400 containerd[1462]: time="2025-01-17T12:24:17.360311894Z" level=error msg="Failed to destroy network for sandbox \"b77c50140c75fd0d884c30355bde678a53ffcce3b94eb7e67597452a7e311262\"" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jan 17 12:24:17.360712 containerd[1462]: time="2025-01-17T12:24:17.360655703Z" level=error msg="encountered an error cleaning up failed sandbox \"b77c50140c75fd0d884c30355bde678a53ffcce3b94eb7e67597452a7e311262\", marking sandbox state as SANDBOX_UNKNOWN" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jan 17 12:24:17.360764 containerd[1462]: time="2025-01-17T12:24:17.360721470Z" level=error msg="RunPodSandbox for 
&PodSandboxMetadata{Name:nginx-deployment-6d5f899847-869v5,Uid:cf4115fc-1224-431d-895d-b145d63715d3,Namespace:default,Attempt:0,} failed, error" error="failed to setup network for sandbox \"b77c50140c75fd0d884c30355bde678a53ffcce3b94eb7e67597452a7e311262\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jan 17 12:24:17.360968 kubelet[1774]: E0117 12:24:17.360939 1774 remote_runtime.go:193] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"b77c50140c75fd0d884c30355bde678a53ffcce3b94eb7e67597452a7e311262\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jan 17 12:24:17.361025 kubelet[1774]: E0117 12:24:17.360992 1774 kuberuntime_sandbox.go:72] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"b77c50140c75fd0d884c30355bde678a53ffcce3b94eb7e67597452a7e311262\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="default/nginx-deployment-6d5f899847-869v5" Jan 17 12:24:17.361025 kubelet[1774]: E0117 12:24:17.361014 1774 kuberuntime_manager.go:1172] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"b77c50140c75fd0d884c30355bde678a53ffcce3b94eb7e67597452a7e311262\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="default/nginx-deployment-6d5f899847-869v5" Jan 17 12:24:17.361083 kubelet[1774]: E0117 12:24:17.361069 1774 pod_workers.go:1298] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"nginx-deployment-6d5f899847-869v5_default(cf4115fc-1224-431d-895d-b145d63715d3)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"nginx-deployment-6d5f899847-869v5_default(cf4115fc-1224-431d-895d-b145d63715d3)\\\": rpc error: code = Unknown desc = failed to setup network for sandbox \\\"b77c50140c75fd0d884c30355bde678a53ffcce3b94eb7e67597452a7e311262\\\": plugin type=\\\"calico\\\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="default/nginx-deployment-6d5f899847-869v5" podUID="cf4115fc-1224-431d-895d-b145d63715d3" Jan 17 12:24:17.367202 kubelet[1774]: E0117 12:24:17.367060 1774 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Jan 17 12:24:17.374837 systemd[1]: Started cri-containerd-bb3aec4c691425f909bec7ccecabec5bd89ccae1a2a1ae7a4da0e84ff903af52.scope - libcontainer container bb3aec4c691425f909bec7ccecabec5bd89ccae1a2a1ae7a4da0e84ff903af52. Jan 17 12:24:17.405136 containerd[1462]: time="2025-01-17T12:24:17.405082887Z" level=info msg="StartContainer for \"bb3aec4c691425f909bec7ccecabec5bd89ccae1a2a1ae7a4da0e84ff903af52\" returns successfully" Jan 17 12:24:17.480117 kernel: wireguard: WireGuard 1.0.0 loaded. See www.wireguard.com for information. Jan 17 12:24:17.480298 kernel: wireguard: Copyright (C) 2015-2019 Jason A. Donenfeld <Jason@zx2c4.com>. All Rights Reserved.
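The "in 5.089612221s" reported for the calico/node pull is the wall time from the PullImage request logged at 12:24:12.221; subtracting the two log timestamps reproduces it to within logging overhead:

```go
// Cross-checks the reported pull duration against the two log timestamps.
package main

import (
	"fmt"
	"time"
)

func main() {
	start, _ := time.Parse(time.RFC3339Nano, "2025-01-17T12:24:12.221835167Z") // PullImage request
	done, _ := time.Parse(time.RFC3339Nano, "2025-01-17T12:24:17.311482735Z")  // "Pulled image" event
	fmt.Println(done.Sub(start)) // 5.089647568s, vs. the reported 5.089612221s
}
```

The WireGuard banner that follows the successful StartContainer is likely collateral of the same event: calico-node probes for WireGuard support at startup (for its optional encrypted-traffic mode), which would load the module here, though that attribution is inferred from the timing rather than stated in the log.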
Jan 17 12:24:17.502005 systemd[1]: cri-containerd-bb3aec4c691425f909bec7ccecabec5bd89ccae1a2a1ae7a4da0e84ff903af52.scope: Deactivated successfully. Jan 17 12:24:18.240622 kubelet[1774]: I0117 12:24:18.240570 1774 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="b77c50140c75fd0d884c30355bde678a53ffcce3b94eb7e67597452a7e311262" Jan 17 12:24:18.241109 containerd[1462]: time="2025-01-17T12:24:18.240977727Z" level=info msg="StopPodSandbox for \"b77c50140c75fd0d884c30355bde678a53ffcce3b94eb7e67597452a7e311262\"" Jan 17 12:24:18.241201 containerd[1462]: time="2025-01-17T12:24:18.241135069Z" level=info msg="Ensure that sandbox b77c50140c75fd0d884c30355bde678a53ffcce3b94eb7e67597452a7e311262 in task-service has been cleanup successfully" Jan 17 12:24:18.243355 kubelet[1774]: E0117 12:24:18.243330 1774 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jan 17 12:24:18.268203 containerd[1462]: time="2025-01-17T12:24:18.268133544Z" level=error msg="StopPodSandbox for \"b77c50140c75fd0d884c30355bde678a53ffcce3b94eb7e67597452a7e311262\" failed" error="failed to destroy network for sandbox \"b77c50140c75fd0d884c30355bde678a53ffcce3b94eb7e67597452a7e311262\": plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jan 17 12:24:18.268464 kubelet[1774]: E0117 12:24:18.268435 1774 remote_runtime.go:222] "StopPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to destroy network for sandbox \"b77c50140c75fd0d884c30355bde678a53ffcce3b94eb7e67597452a7e311262\": plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" podSandboxID="b77c50140c75fd0d884c30355bde678a53ffcce3b94eb7e67597452a7e311262" Jan 17 12:24:18.268540 kubelet[1774]: E0117 12:24:18.268482 1774 kuberuntime_manager.go:1381] "Failed to stop sandbox" podSandboxID={"Type":"containerd","ID":"b77c50140c75fd0d884c30355bde678a53ffcce3b94eb7e67597452a7e311262"} Jan 17 12:24:18.268540 kubelet[1774]: E0117 12:24:18.268516 1774 kuberuntime_manager.go:1081] "killPodWithSyncResult failed" err="failed to \"KillPodSandbox\" for \"cf4115fc-1224-431d-895d-b145d63715d3\" with KillPodSandboxError: \"rpc error: code = Unknown desc = failed to destroy network for sandbox \\\"b77c50140c75fd0d884c30355bde678a53ffcce3b94eb7e67597452a7e311262\\\": plugin type=\\\"calico\\\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" Jan 17 12:24:18.268649 kubelet[1774]: E0117 12:24:18.268543 1774 pod_workers.go:1298] "Error syncing pod, skipping" err="failed to \"KillPodSandbox\" for \"cf4115fc-1224-431d-895d-b145d63715d3\" with KillPodSandboxError: \"rpc error: code = Unknown desc = failed to destroy network for sandbox \\\"b77c50140c75fd0d884c30355bde678a53ffcce3b94eb7e67597452a7e311262\\\": plugin type=\\\"calico\\\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="default/nginx-deployment-6d5f899847-869v5" podUID="cf4115fc-1224-431d-895d-b145d63715d3" Jan 17 12:24:18.305144 systemd[1]: 
run-containerd-io.containerd.grpc.v1.cri-sandboxes-b77c50140c75fd0d884c30355bde678a53ffcce3b94eb7e67597452a7e311262-shm.mount: Deactivated successfully. Jan 17 12:24:18.352444 kubelet[1774]: I0117 12:24:18.352399 1774 pod_startup_latency_tracker.go:102] "Observed pod startup duration" pod="calico-system/calico-node-pn6jm" podStartSLOduration=4.759904502 podStartE2EDuration="20.352362935s" podCreationTimestamp="2025-01-17 12:23:58 +0000 UTC" firstStartedPulling="2025-01-17 12:24:01.719377751 +0000 UTC m=+4.983333550" lastFinishedPulling="2025-01-17 12:24:17.311836184 +0000 UTC m=+20.575791983" observedRunningTime="2025-01-17 12:24:18.352111914 +0000 UTC m=+21.616067713" watchObservedRunningTime="2025-01-17 12:24:18.352362935 +0000 UTC m=+21.616318734" Jan 17 12:24:18.368264 kubelet[1774]: E0117 12:24:18.368206 1774 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Jan 17 12:24:18.709531 containerd[1462]: time="2025-01-17T12:24:18.709367201Z" level=info msg="shim disconnected" id=bb3aec4c691425f909bec7ccecabec5bd89ccae1a2a1ae7a4da0e84ff903af52 namespace=k8s.io Jan 17 12:24:18.709531 containerd[1462]: time="2025-01-17T12:24:18.709433378Z" level=warning msg="cleaning up after shim disconnected" id=bb3aec4c691425f909bec7ccecabec5bd89ccae1a2a1ae7a4da0e84ff903af52 namespace=k8s.io Jan 17 12:24:18.709531 containerd[1462]: time="2025-01-17T12:24:18.709446613Z" level=info msg="cleaning up dead shim" namespace=k8s.io Jan 17 12:24:19.246891 kubelet[1774]: I0117 12:24:19.246854 1774 scope.go:117] "RemoveContainer" containerID="bb3aec4c691425f909bec7ccecabec5bd89ccae1a2a1ae7a4da0e84ff903af52" Jan 17 12:24:19.247038 kubelet[1774]: E0117 12:24:19.246930 1774 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jan 17 12:24:19.249192 containerd[1462]: time="2025-01-17T12:24:19.249162006Z" level=info msg="CreateContainer within sandbox \"042623025d32332d5bad39431a6b5691badb6c90ccb542ad679672f857c7d40a\" for container &ContainerMetadata{Name:calico-node,Attempt:1,}" Jan 17 12:24:19.368465 kubelet[1774]: E0117 12:24:19.368409 1774 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Jan 17 12:24:19.387111 containerd[1462]: time="2025-01-17T12:24:19.387047369Z" level=info msg="CreateContainer within sandbox \"042623025d32332d5bad39431a6b5691badb6c90ccb542ad679672f857c7d40a\" for &ContainerMetadata{Name:calico-node,Attempt:1,} returns container id \"b3d683b0f7d8cee3b1c0566f7be42a21ea9efce0339d72c8521bd59bd575d5a8\"" Jan 17 12:24:19.387649 containerd[1462]: time="2025-01-17T12:24:19.387611378Z" level=info msg="StartContainer for \"b3d683b0f7d8cee3b1c0566f7be42a21ea9efce0339d72c8521bd59bd575d5a8\"" Jan 17 12:24:19.418825 systemd[1]: Started cri-containerd-b3d683b0f7d8cee3b1c0566f7be42a21ea9efce0339d72c8521bd59bd575d5a8.scope - libcontainer container b3d683b0f7d8cee3b1c0566f7be42a21ea9efce0339d72c8521bd59bd575d5a8. Jan 17 12:24:19.515765 containerd[1462]: time="2025-01-17T12:24:19.515572451Z" level=info msg="StartContainer for \"b3d683b0f7d8cee3b1c0566f7be42a21ea9efce0339d72c8521bd59bd575d5a8\" returns successfully" Jan 17 12:24:19.572313 systemd[1]: cri-containerd-b3d683b0f7d8cee3b1c0566f7be42a21ea9efce0339d72c8521bd59bd575d5a8.scope: Deactivated successfully. 
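By this point calico-node has crashed twice in a row (bb3aec4c... was replaced by b3d683b0..., which also exited), so the kubelet stops restarting it immediately and applies its crash-loop back-off, logged above as CrashLoopBackOff with "back-off 10s". Assuming the upstream defaults (10s base, doubling per restart, capped at 5 minutes), the schedule it will follow is:

```go
// Sketch of the kubelet's container restart back-off; the 10s base is
// visible in the log, the doubling and the 5-minute cap are assumed
// upstream defaults (MaxContainerBackOff).
package main

import (
	"fmt"
	"time"
)

func main() {
	const maxBackOff = 5 * time.Minute
	backOff := 10 * time.Second
	for restart := 1; restart <= 7; restart++ {
		fmt.Printf("restart %d: back-off %s\n", restart, backOff)
		if backOff *= 2; backOff > maxBackOff {
			backOff = maxBackOff
		}
	}
}
```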
Jan 17 12:24:19.593614 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-b3d683b0f7d8cee3b1c0566f7be42a21ea9efce0339d72c8521bd59bd575d5a8-rootfs.mount: Deactivated successfully. Jan 17 12:24:19.599965 containerd[1462]: time="2025-01-17T12:24:19.599907000Z" level=info msg="shim disconnected" id=b3d683b0f7d8cee3b1c0566f7be42a21ea9efce0339d72c8521bd59bd575d5a8 namespace=k8s.io Jan 17 12:24:19.599965 containerd[1462]: time="2025-01-17T12:24:19.599965181Z" level=warning msg="cleaning up after shim disconnected" id=b3d683b0f7d8cee3b1c0566f7be42a21ea9efce0339d72c8521bd59bd575d5a8 namespace=k8s.io Jan 17 12:24:19.600086 containerd[1462]: time="2025-01-17T12:24:19.599974308Z" level=info msg="cleaning up dead shim" namespace=k8s.io Jan 17 12:24:20.250500 kubelet[1774]: I0117 12:24:20.250469 1774 scope.go:117] "RemoveContainer" containerID="bb3aec4c691425f909bec7ccecabec5bd89ccae1a2a1ae7a4da0e84ff903af52" Jan 17 12:24:20.250776 kubelet[1774]: I0117 12:24:20.250756 1774 scope.go:117] "RemoveContainer" containerID="b3d683b0f7d8cee3b1c0566f7be42a21ea9efce0339d72c8521bd59bd575d5a8" Jan 17 12:24:20.250831 kubelet[1774]: E0117 12:24:20.250817 1774 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jan 17 12:24:20.251231 kubelet[1774]: E0117 12:24:20.251210 1774 pod_workers.go:1298] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"calico-node\" with CrashLoopBackOff: \"back-off 10s restarting failed container=calico-node pod=calico-node-pn6jm_calico-system(23215448-caff-4f20-82b0-3b3f56d251cf)\"" pod="calico-system/calico-node-pn6jm" podUID="23215448-caff-4f20-82b0-3b3f56d251cf" Jan 17 12:24:20.251635 containerd[1462]: time="2025-01-17T12:24:20.251603193Z" level=info msg="RemoveContainer for \"bb3aec4c691425f909bec7ccecabec5bd89ccae1a2a1ae7a4da0e84ff903af52\"" Jan 17 12:24:20.345226 containerd[1462]: time="2025-01-17T12:24:20.345183366Z" level=info msg="RemoveContainer for \"bb3aec4c691425f909bec7ccecabec5bd89ccae1a2a1ae7a4da0e84ff903af52\" returns successfully" Jan 17 12:24:20.368834 kubelet[1774]: E0117 12:24:20.368777 1774 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Jan 17 12:24:21.283872 kubelet[1774]: I0117 12:24:21.283836 1774 scope.go:117] "RemoveContainer" containerID="b3d683b0f7d8cee3b1c0566f7be42a21ea9efce0339d72c8521bd59bd575d5a8" Jan 17 12:24:21.284035 kubelet[1774]: E0117 12:24:21.283905 1774 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jan 17 12:24:21.284338 kubelet[1774]: E0117 12:24:21.284311 1774 pod_workers.go:1298] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"calico-node\" with CrashLoopBackOff: \"back-off 10s restarting failed container=calico-node pod=calico-node-pn6jm_calico-system(23215448-caff-4f20-82b0-3b3f56d251cf)\"" pod="calico-system/calico-node-pn6jm" podUID="23215448-caff-4f20-82b0-3b3f56d251cf" Jan 17 12:24:21.368981 kubelet[1774]: E0117 12:24:21.368932 1774 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Jan 17 12:24:22.369759 kubelet[1774]: E0117 12:24:22.369710 1774 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Jan 17 12:24:23.370791 kubelet[1774]: E0117 
12:24:23.370755 1774 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests"
Jan 17 12:24:24.371593 kubelet[1774]: E0117 12:24:24.371519 1774 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests"
Jan 17 12:24:25.195242 containerd[1462]: time="2025-01-17T12:24:25.195197237Z" level=info msg="StopPodSandbox for \"99d57e40516b29852a2d6a4523182e419c922871aee1301552de5016f9d08f20\""
Jan 17 12:24:25.222720 containerd[1462]: time="2025-01-17T12:24:25.222642893Z" level=error msg="StopPodSandbox for \"99d57e40516b29852a2d6a4523182e419c922871aee1301552de5016f9d08f20\" failed" error="failed to destroy network for sandbox \"99d57e40516b29852a2d6a4523182e419c922871aee1301552de5016f9d08f20\": plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/"
Jan 17 12:24:25.222912 kubelet[1774]: E0117 12:24:25.222889 1774 remote_runtime.go:222] "StopPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to destroy network for sandbox \"99d57e40516b29852a2d6a4523182e419c922871aee1301552de5016f9d08f20\": plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" podSandboxID="99d57e40516b29852a2d6a4523182e419c922871aee1301552de5016f9d08f20"
Jan 17 12:24:25.222957 kubelet[1774]: E0117 12:24:25.222936 1774 kuberuntime_manager.go:1381] "Failed to stop sandbox" podSandboxID={"Type":"containerd","ID":"99d57e40516b29852a2d6a4523182e419c922871aee1301552de5016f9d08f20"}
Jan 17 12:24:25.222978 kubelet[1774]: E0117 12:24:25.222968 1774 kuberuntime_manager.go:1081] "killPodWithSyncResult failed" err="failed to \"KillPodSandbox\" for \"dda61e94-21b3-42b1-baef-aafab38d13ff\" with KillPodSandboxError: \"rpc error: code = Unknown desc = failed to destroy network for sandbox \\\"99d57e40516b29852a2d6a4523182e419c922871aee1301552de5016f9d08f20\\\": plugin type=\\\"calico\\\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\""
Jan 17 12:24:25.223037 kubelet[1774]: E0117 12:24:25.222994 1774 pod_workers.go:1298] "Error syncing pod, skipping" err="failed to \"KillPodSandbox\" for \"dda61e94-21b3-42b1-baef-aafab38d13ff\" with KillPodSandboxError: \"rpc error: code = Unknown desc = failed to destroy network for sandbox \\\"99d57e40516b29852a2d6a4523182e419c922871aee1301552de5016f9d08f20\\\": plugin type=\\\"calico\\\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="calico-system/csi-node-driver-b66m9" podUID="dda61e94-21b3-42b1-baef-aafab38d13ff"
Jan 17 12:24:25.372083 kubelet[1774]: E0117 12:24:25.372037 1774 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests"
Jan 17 12:24:26.194778 containerd[1462]: time="2025-01-17T12:24:26.194553109Z" level=info msg="StopPodSandbox for \"d0deefa24fa0792e646c4fbdea8c0d20b27684802cc23609c0ddf681ecfe89ef\""
Jan 17 12:24:26.199915 containerd[1462]: time="2025-01-17T12:24:26.199878471Z" level=info msg="StopPodSandbox for \"918cbe2b4bc05e9f8d9574f765c81dbb59033a6738cf6996840e718becf3a788\""
Jan 17 12:24:26.223133 containerd[1462]: time="2025-01-17T12:24:26.223067189Z" level=error msg="StopPodSandbox for \"d0deefa24fa0792e646c4fbdea8c0d20b27684802cc23609c0ddf681ecfe89ef\" failed" error="failed to destroy network for sandbox \"d0deefa24fa0792e646c4fbdea8c0d20b27684802cc23609c0ddf681ecfe89ef\": plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/"
Jan 17 12:24:26.223351 kubelet[1774]: E0117 12:24:26.223303 1774 remote_runtime.go:222] "StopPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to destroy network for sandbox \"d0deefa24fa0792e646c4fbdea8c0d20b27684802cc23609c0ddf681ecfe89ef\": plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" podSandboxID="d0deefa24fa0792e646c4fbdea8c0d20b27684802cc23609c0ddf681ecfe89ef"
Jan 17 12:24:26.223351 kubelet[1774]: E0117 12:24:26.223361 1774 kuberuntime_manager.go:1381] "Failed to stop sandbox" podSandboxID={"Type":"containerd","ID":"d0deefa24fa0792e646c4fbdea8c0d20b27684802cc23609c0ddf681ecfe89ef"}
Jan 17 12:24:26.223541 kubelet[1774]: E0117 12:24:26.223395 1774 kuberuntime_manager.go:1081] "killPodWithSyncResult failed" err="failed to \"KillPodSandbox\" for \"03c15592-0c9f-4dd8-ac51-f2e8e4fe7f05\" with KillPodSandboxError: \"rpc error: code = Unknown desc = failed to destroy network for sandbox \\\"d0deefa24fa0792e646c4fbdea8c0d20b27684802cc23609c0ddf681ecfe89ef\\\": plugin type=\\\"calico\\\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\""
Jan 17 12:24:26.223541 kubelet[1774]: E0117 12:24:26.223424 1774 pod_workers.go:1298] "Error syncing pod, skipping" err="failed to \"KillPodSandbox\" for \"03c15592-0c9f-4dd8-ac51-f2e8e4fe7f05\" with KillPodSandboxError: \"rpc error: code = Unknown desc = failed to destroy network for sandbox \\\"d0deefa24fa0792e646c4fbdea8c0d20b27684802cc23609c0ddf681ecfe89ef\\\": plugin type=\\\"calico\\\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="calico-system/calico-kube-controllers-9fc77b6d-kp6sc" podUID="03c15592-0c9f-4dd8-ac51-f2e8e4fe7f05"
Jan 17 12:24:26.225512 containerd[1462]: time="2025-01-17T12:24:26.225474766Z" level=error msg="StopPodSandbox for \"918cbe2b4bc05e9f8d9574f765c81dbb59033a6738cf6996840e718becf3a788\" failed" error="failed to destroy network for sandbox \"918cbe2b4bc05e9f8d9574f765c81dbb59033a6738cf6996840e718becf3a788\": plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/"
Jan 17 12:24:26.225736 kubelet[1774]: E0117 12:24:26.225708 1774 remote_runtime.go:222] "StopPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to destroy network for sandbox \"918cbe2b4bc05e9f8d9574f765c81dbb59033a6738cf6996840e718becf3a788\": plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" podSandboxID="918cbe2b4bc05e9f8d9574f765c81dbb59033a6738cf6996840e718becf3a788"
Jan 17 12:24:26.225796 kubelet[1774]: E0117 12:24:26.225762 1774 kuberuntime_manager.go:1381] "Failed to stop sandbox" podSandboxID={"Type":"containerd","ID":"918cbe2b4bc05e9f8d9574f765c81dbb59033a6738cf6996840e718becf3a788"}
Jan 17 12:24:26.225823 kubelet[1774]: E0117 12:24:26.225800 1774 kuberuntime_manager.go:1081] "killPodWithSyncResult failed" err="failed to \"KillPodSandbox\" for \"01f58051-b831-4680-bcde-ee72a125416f\" with KillPodSandboxError: \"rpc error: code = Unknown desc = failed to destroy network for sandbox \\\"918cbe2b4bc05e9f8d9574f765c81dbb59033a6738cf6996840e718becf3a788\\\": plugin type=\\\"calico\\\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\""
Jan 17 12:24:26.225885 kubelet[1774]: E0117 12:24:26.225829 1774 pod_workers.go:1298] "Error syncing pod, skipping" err="failed to \"KillPodSandbox\" for \"01f58051-b831-4680-bcde-ee72a125416f\" with KillPodSandboxError: \"rpc error: code = Unknown desc = failed to destroy network for sandbox \\\"918cbe2b4bc05e9f8d9574f765c81dbb59033a6738cf6996840e718becf3a788\\\": plugin type=\\\"calico\\\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="kube-system/coredns-76f75df574-q4bsh" podUID="01f58051-b831-4680-bcde-ee72a125416f"
Jan 17 12:24:26.372935 kubelet[1774]: E0117 12:24:26.372850 1774 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests"
Jan 17 12:24:27.195034 containerd[1462]: time="2025-01-17T12:24:27.194963902Z" level=info msg="StopPodSandbox for \"7a7a852025bfa9adc75020f278431ddbe53f151a058f596183613c9f48b6d0e2\""
Jan 17 12:24:27.221663 containerd[1462]: time="2025-01-17T12:24:27.221601184Z" level=error msg="StopPodSandbox for \"7a7a852025bfa9adc75020f278431ddbe53f151a058f596183613c9f48b6d0e2\" failed" error="failed to destroy network for sandbox \"7a7a852025bfa9adc75020f278431ddbe53f151a058f596183613c9f48b6d0e2\": plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/"
Jan 17 12:24:27.222055 kubelet[1774]: E0117 12:24:27.221889 1774 remote_runtime.go:222] "StopPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to destroy network for sandbox \"7a7a852025bfa9adc75020f278431ddbe53f151a058f596183613c9f48b6d0e2\": plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" podSandboxID="7a7a852025bfa9adc75020f278431ddbe53f151a058f596183613c9f48b6d0e2"
Jan 17 12:24:27.222055 kubelet[1774]: E0117 12:24:27.221933 1774 kuberuntime_manager.go:1381] "Failed to stop sandbox" podSandboxID={"Type":"containerd","ID":"7a7a852025bfa9adc75020f278431ddbe53f151a058f596183613c9f48b6d0e2"}
Jan 17 12:24:27.222055 kubelet[1774]: E0117 12:24:27.221966 1774 kuberuntime_manager.go:1081] "killPodWithSyncResult failed" err="failed to \"KillPodSandbox\" for \"04af4684-e94d-4b3d-a664-20ff360fb4b6\" with KillPodSandboxError: \"rpc error: code = Unknown desc = failed to destroy network for sandbox \\\"7a7a852025bfa9adc75020f278431ddbe53f151a058f596183613c9f48b6d0e2\\\": plugin type=\\\"calico\\\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\""
Jan 17 12:24:27.222055 kubelet[1774]: E0117 12:24:27.221996 1774 pod_workers.go:1298] "Error syncing pod, skipping" err="failed to \"KillPodSandbox\" for \"04af4684-e94d-4b3d-a664-20ff360fb4b6\" with KillPodSandboxError: \"rpc error: code = Unknown desc = failed to destroy network for sandbox \\\"7a7a852025bfa9adc75020f278431ddbe53f151a058f596183613c9f48b6d0e2\\\": plugin type=\\\"calico\\\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="kube-system/coredns-76f75df574-nlhgn" podUID="04af4684-e94d-4b3d-a664-20ff360fb4b6"
Jan 17 12:24:27.373041 kubelet[1774]: E0117 12:24:27.372999 1774 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests"
Jan 17 12:24:28.194659 containerd[1462]: time="2025-01-17T12:24:28.194544105Z" level=info msg="StopPodSandbox for \"501de16cec8e1798d41d55d10c4e0c1ba21f146fe195d70d8c252862106d0d5b\""
Jan 17 12:24:28.194845 containerd[1462]: time="2025-01-17T12:24:28.194776025Z" level=info msg="StopPodSandbox for \"c73f963120c36550314792c43b88b2d6f709a08f08037497692b316525e358bf\""
Jan 17 12:24:28.219824 containerd[1462]: time="2025-01-17T12:24:28.219769359Z" level=error msg="StopPodSandbox for \"c73f963120c36550314792c43b88b2d6f709a08f08037497692b316525e358bf\" failed" error="failed to destroy network for sandbox \"c73f963120c36550314792c43b88b2d6f709a08f08037497692b316525e358bf\": plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/"
Jan 17 12:24:28.220072 kubelet[1774]: E0117 12:24:28.220036 1774 remote_runtime.go:222] "StopPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to destroy network for sandbox \"c73f963120c36550314792c43b88b2d6f709a08f08037497692b316525e358bf\": plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" podSandboxID="c73f963120c36550314792c43b88b2d6f709a08f08037497692b316525e358bf"
Jan 17 12:24:28.220125 kubelet[1774]: E0117 12:24:28.220085 1774 kuberuntime_manager.go:1381] "Failed to stop sandbox" podSandboxID={"Type":"containerd","ID":"c73f963120c36550314792c43b88b2d6f709a08f08037497692b316525e358bf"}
Jan 17 12:24:28.220157 kubelet[1774]: E0117 12:24:28.220125 1774 kuberuntime_manager.go:1081] "killPodWithSyncResult failed" err="failed to \"KillPodSandbox\" for \"d48ea218-f377-4f0d-a97e-292b17df6d89\" with KillPodSandboxError: \"rpc error: code = Unknown desc = failed to destroy network for sandbox \\\"c73f963120c36550314792c43b88b2d6f709a08f08037497692b316525e358bf\\\": plugin type=\\\"calico\\\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\""
Jan 17 12:24:28.220231 kubelet[1774]: E0117 12:24:28.220158 1774 pod_workers.go:1298] "Error syncing pod, skipping" err="failed to \"KillPodSandbox\" for \"d48ea218-f377-4f0d-a97e-292b17df6d89\" with KillPodSandboxError: \"rpc error: code = Unknown desc = failed to destroy network for sandbox \\\"c73f963120c36550314792c43b88b2d6f709a08f08037497692b316525e358bf\\\": plugin type=\\\"calico\\\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="calico-apiserver/calico-apiserver-68f79c84d4-xr6b6" podUID="d48ea218-f377-4f0d-a97e-292b17df6d89"
Jan 17 12:24:28.220299 containerd[1462]: time="2025-01-17T12:24:28.220256533Z" level=error msg="StopPodSandbox for \"501de16cec8e1798d41d55d10c4e0c1ba21f146fe195d70d8c252862106d0d5b\" failed" error="failed to destroy network for sandbox \"501de16cec8e1798d41d55d10c4e0c1ba21f146fe195d70d8c252862106d0d5b\": plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/"
Jan 17 12:24:28.220448 kubelet[1774]: E0117 12:24:28.220430 1774 remote_runtime.go:222] "StopPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to destroy network for sandbox \"501de16cec8e1798d41d55d10c4e0c1ba21f146fe195d70d8c252862106d0d5b\": plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" podSandboxID="501de16cec8e1798d41d55d10c4e0c1ba21f146fe195d70d8c252862106d0d5b"
Jan 17 12:24:28.220520 kubelet[1774]: E0117 12:24:28.220453 1774 kuberuntime_manager.go:1381] "Failed to stop sandbox" podSandboxID={"Type":"containerd","ID":"501de16cec8e1798d41d55d10c4e0c1ba21f146fe195d70d8c252862106d0d5b"}
Jan 17 12:24:28.220520 kubelet[1774]: E0117 12:24:28.220480 1774 kuberuntime_manager.go:1081] "killPodWithSyncResult failed" err="failed to \"KillPodSandbox\" for \"3229aaed-5c2c-4c71-a248-e6141b8a2e7f\" with KillPodSandboxError: \"rpc error: code = Unknown desc = failed to destroy network for sandbox \\\"501de16cec8e1798d41d55d10c4e0c1ba21f146fe195d70d8c252862106d0d5b\\\": plugin type=\\\"calico\\\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\""
Jan 17 12:24:28.220520 kubelet[1774]: E0117 12:24:28.220500 1774 pod_workers.go:1298] "Error syncing pod, skipping" err="failed to \"KillPodSandbox\" for \"3229aaed-5c2c-4c71-a248-e6141b8a2e7f\" with KillPodSandboxError: \"rpc error: code = Unknown desc = failed to destroy network for sandbox \\\"501de16cec8e1798d41d55d10c4e0c1ba21f146fe195d70d8c252862106d0d5b\\\": plugin type=\\\"calico\\\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="calico-apiserver/calico-apiserver-68f79c84d4-6mrf5" podUID="3229aaed-5c2c-4c71-a248-e6141b8a2e7f"
Jan 17 12:24:28.373242 kubelet[1774]: E0117 12:24:28.373182 1774 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests"
Jan 17 12:24:29.148530 update_engine[1451]: I20250117 12:24:29.148418 1451 update_attempter.cc:509] Updating boot flags...
Jan 17 12:24:29.172742 kernel: BTRFS warning: duplicate device /dev/vda3 devid 1 generation 36 scanned by (udev-worker) (2940)
Jan 17 12:24:29.195109 containerd[1462]: time="2025-01-17T12:24:29.194972625Z" level=info msg="StopPodSandbox for \"b77c50140c75fd0d884c30355bde678a53ffcce3b94eb7e67597452a7e311262\""
Jan 17 12:24:29.202803 kernel: BTRFS warning: duplicate device /dev/vda3 devid 1 generation 36 scanned by (udev-worker) (2943)
Jan 17 12:24:29.239787 kernel: BTRFS warning: duplicate device /dev/vda3 devid 1 generation 36 scanned by (udev-worker) (2943)
Jan 17 12:24:29.255032 containerd[1462]: time="2025-01-17T12:24:29.254984541Z" level=error msg="StopPodSandbox for \"b77c50140c75fd0d884c30355bde678a53ffcce3b94eb7e67597452a7e311262\" failed" error="failed to destroy network for sandbox \"b77c50140c75fd0d884c30355bde678a53ffcce3b94eb7e67597452a7e311262\": plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/"
Jan 17 12:24:29.255192 kubelet[1774]: E0117 12:24:29.255164 1774 remote_runtime.go:222] "StopPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to destroy network for sandbox \"b77c50140c75fd0d884c30355bde678a53ffcce3b94eb7e67597452a7e311262\": plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" podSandboxID="b77c50140c75fd0d884c30355bde678a53ffcce3b94eb7e67597452a7e311262"
Jan 17 12:24:29.255252 kubelet[1774]: E0117 12:24:29.255208 1774 kuberuntime_manager.go:1381] "Failed to stop sandbox" podSandboxID={"Type":"containerd","ID":"b77c50140c75fd0d884c30355bde678a53ffcce3b94eb7e67597452a7e311262"}
Jan 17 12:24:29.255288 kubelet[1774]: E0117 12:24:29.255241 1774 kuberuntime_manager.go:1081] "killPodWithSyncResult failed" err="failed to \"KillPodSandbox\" for \"cf4115fc-1224-431d-895d-b145d63715d3\" with KillPodSandboxError: \"rpc error: code = Unknown desc = failed to destroy network for sandbox \\\"b77c50140c75fd0d884c30355bde678a53ffcce3b94eb7e67597452a7e311262\\\": plugin type=\\\"calico\\\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\""
Jan 17 12:24:29.255362 kubelet[1774]: E0117 12:24:29.255290 1774 pod_workers.go:1298] "Error syncing pod, skipping" err="failed to \"KillPodSandbox\" for \"cf4115fc-1224-431d-895d-b145d63715d3\" with KillPodSandboxError: \"rpc error: code = Unknown desc = failed to destroy network for sandbox \\\"b77c50140c75fd0d884c30355bde678a53ffcce3b94eb7e67597452a7e311262\\\": plugin type=\\\"calico\\\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="default/nginx-deployment-6d5f899847-869v5" podUID="cf4115fc-1224-431d-895d-b145d63715d3"
Jan 17 12:24:29.374342 kubelet[1774]: E0117 12:24:29.374286 1774 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests"
Jan 17 12:24:30.375255 kubelet[1774]: E0117 12:24:30.375183 1774 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests"
Jan 17 12:24:31.375849 kubelet[1774]: E0117 12:24:31.375795 1774 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests"
Jan 17 12:24:32.194307 kubelet[1774]: I0117 12:24:32.194258 1774 scope.go:117] "RemoveContainer" containerID="b3d683b0f7d8cee3b1c0566f7be42a21ea9efce0339d72c8521bd59bd575d5a8"
Jan 17 12:24:32.196715 kubelet[1774]: E0117 12:24:32.194523 1774 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Jan 17 12:24:32.199382 containerd[1462]: time="2025-01-17T12:24:32.199349754Z" level=info msg="CreateContainer within sandbox \"042623025d32332d5bad39431a6b5691badb6c90ccb542ad679672f857c7d40a\" for container &ContainerMetadata{Name:calico-node,Attempt:2,}"
Jan 17 12:24:32.212384 containerd[1462]: time="2025-01-17T12:24:32.212346772Z" level=info msg="CreateContainer within sandbox \"042623025d32332d5bad39431a6b5691badb6c90ccb542ad679672f857c7d40a\" for &ContainerMetadata{Name:calico-node,Attempt:2,} returns container id \"98bb31d10477b002ad29e655c0d2da12f041763fffaba6cd0ca05a1e82ec4d71\""
Jan 17 12:24:32.212706 containerd[1462]: time="2025-01-17T12:24:32.212670245Z" level=info msg="StartContainer for \"98bb31d10477b002ad29e655c0d2da12f041763fffaba6cd0ca05a1e82ec4d71\""
Jan 17 12:24:32.242825 systemd[1]: Started cri-containerd-98bb31d10477b002ad29e655c0d2da12f041763fffaba6cd0ca05a1e82ec4d71.scope - libcontainer container 98bb31d10477b002ad29e655c0d2da12f041763fffaba6cd0ca05a1e82ec4d71.
Jan 17 12:24:32.273826 containerd[1462]: time="2025-01-17T12:24:32.273624382Z" level=info msg="StartContainer for \"98bb31d10477b002ad29e655c0d2da12f041763fffaba6cd0ca05a1e82ec4d71\" returns successfully"
Jan 17 12:24:32.322453 systemd[1]: cri-containerd-98bb31d10477b002ad29e655c0d2da12f041763fffaba6cd0ca05a1e82ec4d71.scope: Deactivated successfully.
Jan 17 12:24:32.348675 containerd[1462]: time="2025-01-17T12:24:32.348613573Z" level=info msg="shim disconnected" id=98bb31d10477b002ad29e655c0d2da12f041763fffaba6cd0ca05a1e82ec4d71 namespace=k8s.io
Jan 17 12:24:32.348675 containerd[1462]: time="2025-01-17T12:24:32.348659138Z" level=warning msg="cleaning up after shim disconnected" id=98bb31d10477b002ad29e655c0d2da12f041763fffaba6cd0ca05a1e82ec4d71 namespace=k8s.io
Jan 17 12:24:32.348675 containerd[1462]: time="2025-01-17T12:24:32.348667173Z" level=info msg="cleaning up dead shim" namespace=k8s.io
Jan 17 12:24:32.375995 kubelet[1774]: E0117 12:24:32.375948 1774 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests"
Jan 17 12:24:33.208758 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-98bb31d10477b002ad29e655c0d2da12f041763fffaba6cd0ca05a1e82ec4d71-rootfs.mount: Deactivated successfully.
Jan 17 12:24:33.280050 kubelet[1774]: I0117 12:24:33.280029 1774 scope.go:117] "RemoveContainer" containerID="b3d683b0f7d8cee3b1c0566f7be42a21ea9efce0339d72c8521bd59bd575d5a8"
Jan 17 12:24:33.280313 kubelet[1774]: I0117 12:24:33.280296 1774 scope.go:117] "RemoveContainer" containerID="98bb31d10477b002ad29e655c0d2da12f041763fffaba6cd0ca05a1e82ec4d71"
Jan 17 12:24:33.280387 kubelet[1774]: E0117 12:24:33.280368 1774 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Jan 17 12:24:33.280792 kubelet[1774]: E0117 12:24:33.280772 1774 pod_workers.go:1298] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"calico-node\" with CrashLoopBackOff: \"back-off 20s restarting failed container=calico-node pod=calico-node-pn6jm_calico-system(23215448-caff-4f20-82b0-3b3f56d251cf)\"" pod="calico-system/calico-node-pn6jm" podUID="23215448-caff-4f20-82b0-3b3f56d251cf"
Jan 17 12:24:33.280983 containerd[1462]: time="2025-01-17T12:24:33.280954501Z" level=info msg="RemoveContainer for \"b3d683b0f7d8cee3b1c0566f7be42a21ea9efce0339d72c8521bd59bd575d5a8\""
Jan 17 12:24:33.284166 containerd[1462]: time="2025-01-17T12:24:33.284135178Z" level=info msg="RemoveContainer for \"b3d683b0f7d8cee3b1c0566f7be42a21ea9efce0339d72c8521bd59bd575d5a8\" returns successfully"
Jan 17 12:24:33.376060 kubelet[1774]: E0117 12:24:33.376026 1774 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests"
Jan 17 12:24:34.283582 kubelet[1774]: I0117 12:24:34.283550 1774 scope.go:117] "RemoveContainer" containerID="98bb31d10477b002ad29e655c0d2da12f041763fffaba6cd0ca05a1e82ec4d71"
Jan 17 12:24:34.283767 kubelet[1774]: E0117 12:24:34.283619 1774 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Jan 17 12:24:34.284053 kubelet[1774]: E0117 12:24:34.284026 1774 pod_workers.go:1298] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"calico-node\" with CrashLoopBackOff: \"back-off 20s restarting failed container=calico-node pod=calico-node-pn6jm_calico-system(23215448-caff-4f20-82b0-3b3f56d251cf)\"" pod="calico-system/calico-node-pn6jm" podUID="23215448-caff-4f20-82b0-3b3f56d251cf"
Jan 17 12:24:34.376982 kubelet[1774]: E0117 12:24:34.376957 1774 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests"
Jan 17 12:24:35.377792 kubelet[1774]: E0117 12:24:35.377745 1774 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests"
Jan 17 12:24:36.378581 kubelet[1774]: E0117 12:24:36.378531 1774 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests"
Jan 17 12:24:37.355670 kubelet[1774]: E0117 12:24:37.355623 1774 file.go:104] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests"
Jan 17 12:24:37.378772 kubelet[1774]: E0117 12:24:37.378746 1774 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests"
Jan 17 12:24:38.195318 containerd[1462]: time="2025-01-17T12:24:38.195268018Z" level=info msg="StopPodSandbox for \"d0deefa24fa0792e646c4fbdea8c0d20b27684802cc23609c0ddf681ecfe89ef\""
Jan 17 12:24:38.220781 containerd[1462]: time="2025-01-17T12:24:38.220721452Z" level=error msg="StopPodSandbox for \"d0deefa24fa0792e646c4fbdea8c0d20b27684802cc23609c0ddf681ecfe89ef\" failed" error="failed to destroy network for sandbox \"d0deefa24fa0792e646c4fbdea8c0d20b27684802cc23609c0ddf681ecfe89ef\": plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/"
Jan 17 12:24:38.220985 kubelet[1774]: E0117 12:24:38.220962 1774 remote_runtime.go:222] "StopPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to destroy network for sandbox \"d0deefa24fa0792e646c4fbdea8c0d20b27684802cc23609c0ddf681ecfe89ef\": plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" podSandboxID="d0deefa24fa0792e646c4fbdea8c0d20b27684802cc23609c0ddf681ecfe89ef"
Jan 17 12:24:38.221029 kubelet[1774]: E0117 12:24:38.221004 1774 kuberuntime_manager.go:1381] "Failed to stop sandbox" podSandboxID={"Type":"containerd","ID":"d0deefa24fa0792e646c4fbdea8c0d20b27684802cc23609c0ddf681ecfe89ef"}
Jan 17 12:24:38.221074 kubelet[1774]: E0117 12:24:38.221061 1774 kuberuntime_manager.go:1081] "killPodWithSyncResult failed" err="failed to \"KillPodSandbox\" for \"03c15592-0c9f-4dd8-ac51-f2e8e4fe7f05\" with KillPodSandboxError: \"rpc error: code = Unknown desc = failed to destroy network for sandbox \\\"d0deefa24fa0792e646c4fbdea8c0d20b27684802cc23609c0ddf681ecfe89ef\\\": plugin type=\\\"calico\\\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\""
Jan 17 12:24:38.221138 kubelet[1774]: E0117 12:24:38.221092 1774 pod_workers.go:1298] "Error syncing pod, skipping" err="failed to \"KillPodSandbox\" for \"03c15592-0c9f-4dd8-ac51-f2e8e4fe7f05\" with KillPodSandboxError: \"rpc error: code = Unknown desc = failed to destroy network for sandbox \\\"d0deefa24fa0792e646c4fbdea8c0d20b27684802cc23609c0ddf681ecfe89ef\\\": plugin type=\\\"calico\\\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="calico-system/calico-kube-controllers-9fc77b6d-kp6sc" podUID="03c15592-0c9f-4dd8-ac51-f2e8e4fe7f05"
Jan 17 12:24:38.379838 kubelet[1774]: E0117 12:24:38.379810 1774 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests"
Jan 17 12:24:39.194968 containerd[1462]: time="2025-01-17T12:24:39.194916671Z" level=info msg="StopPodSandbox for \"7a7a852025bfa9adc75020f278431ddbe53f151a058f596183613c9f48b6d0e2\""
Jan 17 12:24:39.220836 containerd[1462]: time="2025-01-17T12:24:39.220790119Z" level=error msg="StopPodSandbox for \"7a7a852025bfa9adc75020f278431ddbe53f151a058f596183613c9f48b6d0e2\" failed" error="failed to destroy network for sandbox \"7a7a852025bfa9adc75020f278431ddbe53f151a058f596183613c9f48b6d0e2\": plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/"
Jan 17 12:24:39.221189 kubelet[1774]: E0117 12:24:39.221011 1774 remote_runtime.go:222] "StopPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to destroy network for sandbox \"7a7a852025bfa9adc75020f278431ddbe53f151a058f596183613c9f48b6d0e2\": plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" podSandboxID="7a7a852025bfa9adc75020f278431ddbe53f151a058f596183613c9f48b6d0e2"
Jan 17 12:24:39.221189 kubelet[1774]: E0117 12:24:39.221052 1774 kuberuntime_manager.go:1381] "Failed to stop sandbox" podSandboxID={"Type":"containerd","ID":"7a7a852025bfa9adc75020f278431ddbe53f151a058f596183613c9f48b6d0e2"}
Jan 17 12:24:39.221189 kubelet[1774]: E0117 12:24:39.221088 1774 kuberuntime_manager.go:1081] "killPodWithSyncResult failed" err="failed to \"KillPodSandbox\" for \"04af4684-e94d-4b3d-a664-20ff360fb4b6\" with KillPodSandboxError: \"rpc error: code = Unknown desc = failed to destroy network for sandbox \\\"7a7a852025bfa9adc75020f278431ddbe53f151a058f596183613c9f48b6d0e2\\\": plugin type=\\\"calico\\\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\""
Jan 17 12:24:39.221189 kubelet[1774]: E0117 12:24:39.221114 1774 pod_workers.go:1298] "Error syncing pod, skipping" err="failed to \"KillPodSandbox\" for \"04af4684-e94d-4b3d-a664-20ff360fb4b6\" with KillPodSandboxError: \"rpc error: code = Unknown desc = failed to destroy network for sandbox \\\"7a7a852025bfa9adc75020f278431ddbe53f151a058f596183613c9f48b6d0e2\\\": plugin type=\\\"calico\\\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="kube-system/coredns-76f75df574-nlhgn" podUID="04af4684-e94d-4b3d-a664-20ff360fb4b6"
Jan 17 12:24:39.380588 kubelet[1774]: E0117 12:24:39.380558 1774 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests"
Jan 17 12:24:40.195633 containerd[1462]: time="2025-01-17T12:24:40.195104853Z" level=info msg="StopPodSandbox for \"918cbe2b4bc05e9f8d9574f765c81dbb59033a6738cf6996840e718becf3a788\""
Jan 17 12:24:40.195633 containerd[1462]: time="2025-01-17T12:24:40.195299500Z" level=info msg="StopPodSandbox for \"99d57e40516b29852a2d6a4523182e419c922871aee1301552de5016f9d08f20\""
Jan 17 12:24:40.220452 containerd[1462]: time="2025-01-17T12:24:40.220402586Z" level=error msg="StopPodSandbox for \"99d57e40516b29852a2d6a4523182e419c922871aee1301552de5016f9d08f20\" failed" error="failed to destroy network for sandbox \"99d57e40516b29852a2d6a4523182e419c922871aee1301552de5016f9d08f20\": plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/"
Jan 17 12:24:40.220669 kubelet[1774]: E0117 12:24:40.220644 1774 remote_runtime.go:222] "StopPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to destroy network for sandbox \"99d57e40516b29852a2d6a4523182e419c922871aee1301552de5016f9d08f20\": plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" podSandboxID="99d57e40516b29852a2d6a4523182e419c922871aee1301552de5016f9d08f20"
Jan 17 12:24:40.220735 kubelet[1774]: E0117 12:24:40.220704 1774 kuberuntime_manager.go:1381] "Failed to stop sandbox" podSandboxID={"Type":"containerd","ID":"99d57e40516b29852a2d6a4523182e419c922871aee1301552de5016f9d08f20"}
Jan 17 12:24:40.220770 kubelet[1774]: E0117 12:24:40.220740 1774 kuberuntime_manager.go:1081] "killPodWithSyncResult failed" err="failed to \"KillPodSandbox\" for \"dda61e94-21b3-42b1-baef-aafab38d13ff\" with KillPodSandboxError: \"rpc error: code = Unknown desc = failed to destroy network for sandbox \\\"99d57e40516b29852a2d6a4523182e419c922871aee1301552de5016f9d08f20\\\": plugin type=\\\"calico\\\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\""
Jan 17 12:24:40.220770 kubelet[1774]: E0117 12:24:40.220769 1774 pod_workers.go:1298] "Error syncing pod, skipping" err="failed to \"KillPodSandbox\" for \"dda61e94-21b3-42b1-baef-aafab38d13ff\" with KillPodSandboxError: \"rpc error: code = Unknown desc = failed to destroy network for sandbox \\\"99d57e40516b29852a2d6a4523182e419c922871aee1301552de5016f9d08f20\\\": plugin type=\\\"calico\\\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="calico-system/csi-node-driver-b66m9" podUID="dda61e94-21b3-42b1-baef-aafab38d13ff"
Jan 17 12:24:40.220912 containerd[1462]: time="2025-01-17T12:24:40.220854729Z" level=error msg="StopPodSandbox for \"918cbe2b4bc05e9f8d9574f765c81dbb59033a6738cf6996840e718becf3a788\" failed" error="failed to destroy network for sandbox \"918cbe2b4bc05e9f8d9574f765c81dbb59033a6738cf6996840e718becf3a788\": plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/"
Jan 17 12:24:40.221220 kubelet[1774]: E0117 12:24:40.221032 1774 remote_runtime.go:222] "StopPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to destroy network for sandbox \"918cbe2b4bc05e9f8d9574f765c81dbb59033a6738cf6996840e718becf3a788\": plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" podSandboxID="918cbe2b4bc05e9f8d9574f765c81dbb59033a6738cf6996840e718becf3a788"
Jan 17 12:24:40.221220 kubelet[1774]: E0117 12:24:40.221051 1774 kuberuntime_manager.go:1381] "Failed to stop sandbox" podSandboxID={"Type":"containerd","ID":"918cbe2b4bc05e9f8d9574f765c81dbb59033a6738cf6996840e718becf3a788"}
Jan 17 12:24:40.221220 kubelet[1774]: E0117 12:24:40.221077 1774 kuberuntime_manager.go:1081] "killPodWithSyncResult failed" err="failed to \"KillPodSandbox\" for \"01f58051-b831-4680-bcde-ee72a125416f\" with KillPodSandboxError: \"rpc error: code = Unknown desc = failed to destroy network for sandbox \\\"918cbe2b4bc05e9f8d9574f765c81dbb59033a6738cf6996840e718becf3a788\\\": plugin type=\\\"calico\\\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\""
Jan 17 12:24:40.221220 kubelet[1774]: E0117 12:24:40.221098 1774 pod_workers.go:1298] "Error syncing pod, skipping" err="failed to \"KillPodSandbox\" for \"01f58051-b831-4680-bcde-ee72a125416f\" with KillPodSandboxError: \"rpc error: code = Unknown desc = failed to destroy network for sandbox \\\"918cbe2b4bc05e9f8d9574f765c81dbb59033a6738cf6996840e718becf3a788\\\": plugin type=\\\"calico\\\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="kube-system/coredns-76f75df574-q4bsh" podUID="01f58051-b831-4680-bcde-ee72a125416f"
Jan 17 12:24:40.381368 kubelet[1774]: E0117 12:24:40.381307 1774 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests"
Jan 17 12:24:41.194625 containerd[1462]: time="2025-01-17T12:24:41.194523314Z" level=info msg="StopPodSandbox for \"501de16cec8e1798d41d55d10c4e0c1ba21f146fe195d70d8c252862106d0d5b\""
Jan 17 12:24:41.194840 containerd[1462]: time="2025-01-17T12:24:41.194798133Z" level=info msg="StopPodSandbox for \"b77c50140c75fd0d884c30355bde678a53ffcce3b94eb7e67597452a7e311262\""
Jan 17 12:24:41.219201 containerd[1462]: time="2025-01-17T12:24:41.219138705Z" level=error msg="StopPodSandbox for \"b77c50140c75fd0d884c30355bde678a53ffcce3b94eb7e67597452a7e311262\" failed" error="failed to destroy network for sandbox \"b77c50140c75fd0d884c30355bde678a53ffcce3b94eb7e67597452a7e311262\": plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/"
Jan 17 12:24:41.219386 kubelet[1774]: E0117 12:24:41.219356 1774 remote_runtime.go:222] "StopPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to destroy network for sandbox \"b77c50140c75fd0d884c30355bde678a53ffcce3b94eb7e67597452a7e311262\": plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" podSandboxID="b77c50140c75fd0d884c30355bde678a53ffcce3b94eb7e67597452a7e311262"
Jan 17 12:24:41.219428 kubelet[1774]: E0117 12:24:41.219400 1774 kuberuntime_manager.go:1381] "Failed to stop sandbox" podSandboxID={"Type":"containerd","ID":"b77c50140c75fd0d884c30355bde678a53ffcce3b94eb7e67597452a7e311262"}
Jan 17 12:24:41.219456 kubelet[1774]: E0117 12:24:41.219432 1774 kuberuntime_manager.go:1081] "killPodWithSyncResult failed" err="failed to \"KillPodSandbox\" for \"cf4115fc-1224-431d-895d-b145d63715d3\" with KillPodSandboxError: \"rpc error: code = Unknown desc = failed to destroy network for sandbox \\\"b77c50140c75fd0d884c30355bde678a53ffcce3b94eb7e67597452a7e311262\\\": plugin type=\\\"calico\\\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\""
Jan 17 12:24:41.219519 kubelet[1774]: E0117 12:24:41.219461 1774 pod_workers.go:1298] "Error syncing pod, skipping" err="failed to \"KillPodSandbox\" for \"cf4115fc-1224-431d-895d-b145d63715d3\" with KillPodSandboxError: \"rpc error: code = Unknown desc = failed to destroy network for sandbox \\\"b77c50140c75fd0d884c30355bde678a53ffcce3b94eb7e67597452a7e311262\\\": plugin type=\\\"calico\\\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="default/nginx-deployment-6d5f899847-869v5" podUID="cf4115fc-1224-431d-895d-b145d63715d3"
Jan 17 12:24:41.219773 containerd[1462]: time="2025-01-17T12:24:41.219738827Z" level=error msg="StopPodSandbox for \"501de16cec8e1798d41d55d10c4e0c1ba21f146fe195d70d8c252862106d0d5b\" failed" error="failed to destroy network for sandbox \"501de16cec8e1798d41d55d10c4e0c1ba21f146fe195d70d8c252862106d0d5b\": plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/"
Jan 17 12:24:41.219969 kubelet[1774]: E0117 12:24:41.219938 1774 remote_runtime.go:222] "StopPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to destroy network for sandbox \"501de16cec8e1798d41d55d10c4e0c1ba21f146fe195d70d8c252862106d0d5b\": plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" podSandboxID="501de16cec8e1798d41d55d10c4e0c1ba21f146fe195d70d8c252862106d0d5b"
Jan 17 12:24:41.220012 kubelet[1774]: E0117 12:24:41.219987 1774 kuberuntime_manager.go:1381] "Failed to stop sandbox" podSandboxID={"Type":"containerd","ID":"501de16cec8e1798d41d55d10c4e0c1ba21f146fe195d70d8c252862106d0d5b"}
Jan 17 12:24:41.220042 kubelet[1774]: E0117 12:24:41.220021 1774 kuberuntime_manager.go:1081] "killPodWithSyncResult failed" err="failed to \"KillPodSandbox\" for \"3229aaed-5c2c-4c71-a248-e6141b8a2e7f\" with KillPodSandboxError: \"rpc error: code = Unknown desc = failed to destroy network for sandbox \\\"501de16cec8e1798d41d55d10c4e0c1ba21f146fe195d70d8c252862106d0d5b\\\": plugin type=\\\"calico\\\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\""
Jan 17 12:24:41.220097 kubelet[1774]: E0117 12:24:41.220056 1774 pod_workers.go:1298] "Error syncing pod, skipping" err="failed to \"KillPodSandbox\" for \"3229aaed-5c2c-4c71-a248-e6141b8a2e7f\" with KillPodSandboxError: \"rpc error: code = Unknown desc = failed to destroy network for sandbox \\\"501de16cec8e1798d41d55d10c4e0c1ba21f146fe195d70d8c252862106d0d5b\\\": plugin type=\\\"calico\\\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="calico-apiserver/calico-apiserver-68f79c84d4-6mrf5" podUID="3229aaed-5c2c-4c71-a248-e6141b8a2e7f"
Jan 17 12:24:41.381869 kubelet[1774]: E0117 12:24:41.381836 1774 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests"
Jan 17 12:24:42.382789 kubelet[1774]: E0117 12:24:42.382750 1774 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests"
Jan 17 12:24:43.194956 containerd[1462]: time="2025-01-17T12:24:43.194903765Z" level=info msg="StopPodSandbox for \"c73f963120c36550314792c43b88b2d6f709a08f08037497692b316525e358bf\""
Jan 17 12:24:43.221410 containerd[1462]: time="2025-01-17T12:24:43.221359365Z" level=error msg="StopPodSandbox for \"c73f963120c36550314792c43b88b2d6f709a08f08037497692b316525e358bf\" failed" error="failed to destroy network for sandbox \"c73f963120c36550314792c43b88b2d6f709a08f08037497692b316525e358bf\": plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/"
Jan 17 12:24:43.221595 kubelet[1774]: E0117 12:24:43.221558 1774 remote_runtime.go:222] "StopPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to destroy network for sandbox \"c73f963120c36550314792c43b88b2d6f709a08f08037497692b316525e358bf\": plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" podSandboxID="c73f963120c36550314792c43b88b2d6f709a08f08037497692b316525e358bf"
Jan 17 12:24:43.221595 kubelet[1774]: E0117 12:24:43.221598 1774 kuberuntime_manager.go:1381] "Failed to stop sandbox" podSandboxID={"Type":"containerd","ID":"c73f963120c36550314792c43b88b2d6f709a08f08037497692b316525e358bf"}
Jan 17 12:24:43.221780 kubelet[1774]: E0117 12:24:43.221628 1774 kuberuntime_manager.go:1081] "killPodWithSyncResult failed" err="failed to \"KillPodSandbox\" for \"d48ea218-f377-4f0d-a97e-292b17df6d89\" with KillPodSandboxError: \"rpc error: code = Unknown desc = failed to destroy network for sandbox \\\"c73f963120c36550314792c43b88b2d6f709a08f08037497692b316525e358bf\\\": plugin type=\\\"calico\\\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\""
Jan 17 12:24:43.221780 kubelet[1774]: E0117 12:24:43.221653 1774 pod_workers.go:1298] "Error syncing pod, skipping" err="failed to \"KillPodSandbox\" for \"d48ea218-f377-4f0d-a97e-292b17df6d89\" with KillPodSandboxError: \"rpc error: code = Unknown desc = failed to destroy network for sandbox \\\"c73f963120c36550314792c43b88b2d6f709a08f08037497692b316525e358bf\\\": plugin type=\\\"calico\\\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="calico-apiserver/calico-apiserver-68f79c84d4-xr6b6" podUID="d48ea218-f377-4f0d-a97e-292b17df6d89"
Jan 17 12:24:43.383135 kubelet[1774]: E0117 12:24:43.383097 1774 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests"
Jan 17 12:24:44.383279 kubelet[1774]: E0117 12:24:44.383213 1774 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests"
Jan 17 12:24:45.384298 kubelet[1774]: E0117 12:24:45.384224 1774 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests"
Jan 17 12:24:46.384569 kubelet[1774]: E0117 12:24:46.384510 1774 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests"
Jan 17 12:24:47.385351 kubelet[1774]: E0117 12:24:47.385301 1774 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests"
Jan 17 12:24:48.385784 kubelet[1774]: E0117 12:24:48.385741 1774 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests"
Jan 17 12:24:49.194786 kubelet[1774]: I0117 12:24:49.194726 1774 scope.go:117] "RemoveContainer" containerID="98bb31d10477b002ad29e655c0d2da12f041763fffaba6cd0ca05a1e82ec4d71"
Jan 17 12:24:49.194945 kubelet[1774]: E0117 12:24:49.194821 1774 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Jan 17 12:24:49.195284 kubelet[1774]: E0117 12:24:49.195251 1774 pod_workers.go:1298] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"calico-node\" with CrashLoopBackOff: \"back-off 20s restarting failed container=calico-node pod=calico-node-pn6jm_calico-system(23215448-caff-4f20-82b0-3b3f56d251cf)\"" pod="calico-system/calico-node-pn6jm" podUID="23215448-caff-4f20-82b0-3b3f56d251cf"
Jan 17 12:24:49.386053 kubelet[1774]: E0117 12:24:49.385974 1774 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests"
Jan 17 12:24:50.195297 containerd[1462]: time="2025-01-17T12:24:50.195242674Z" level=info msg="StopPodSandbox for \"d0deefa24fa0792e646c4fbdea8c0d20b27684802cc23609c0ddf681ecfe89ef\""
Jan 17 12:24:50.218458 containerd[1462]: time="2025-01-17T12:24:50.218333542Z" level=error msg="StopPodSandbox for \"d0deefa24fa0792e646c4fbdea8c0d20b27684802cc23609c0ddf681ecfe89ef\" failed" error="failed to destroy network for sandbox \"d0deefa24fa0792e646c4fbdea8c0d20b27684802cc23609c0ddf681ecfe89ef\": plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/"
Jan 17 12:24:50.218611 kubelet[1774]: E0117 12:24:50.218588 1774 remote_runtime.go:222] "StopPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to destroy network for sandbox \"d0deefa24fa0792e646c4fbdea8c0d20b27684802cc23609c0ddf681ecfe89ef\": plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" podSandboxID="d0deefa24fa0792e646c4fbdea8c0d20b27684802cc23609c0ddf681ecfe89ef"
Jan 17 12:24:50.218651 kubelet[1774]: E0117 12:24:50.218636 1774 kuberuntime_manager.go:1381] "Failed to stop sandbox" podSandboxID={"Type":"containerd","ID":"d0deefa24fa0792e646c4fbdea8c0d20b27684802cc23609c0ddf681ecfe89ef"}
Jan 17 12:24:50.218676 kubelet[1774]: E0117 12:24:50.218670 1774 kuberuntime_manager.go:1081] "killPodWithSyncResult failed" err="failed to \"KillPodSandbox\" for \"03c15592-0c9f-4dd8-ac51-f2e8e4fe7f05\" with KillPodSandboxError: \"rpc error: code = Unknown desc = failed to destroy network for sandbox \\\"d0deefa24fa0792e646c4fbdea8c0d20b27684802cc23609c0ddf681ecfe89ef\\\": plugin type=\\\"calico\\\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\""
Jan 17 12:24:50.218752 kubelet[1774]: E0117 12:24:50.218717 1774 pod_workers.go:1298] "Error syncing pod, skipping" err="failed to \"KillPodSandbox\" for \"03c15592-0c9f-4dd8-ac51-f2e8e4fe7f05\" with KillPodSandboxError: \"rpc error: code = Unknown desc = failed to destroy network for sandbox \\\"d0deefa24fa0792e646c4fbdea8c0d20b27684802cc23609c0ddf681ecfe89ef\\\": plugin type=\\\"calico\\\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="calico-system/calico-kube-controllers-9fc77b6d-kp6sc" podUID="03c15592-0c9f-4dd8-ac51-f2e8e4fe7f05"
Jan 17 12:24:50.386102 kubelet[1774]: E0117 12:24:50.386066 1774 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests"
Jan 17 12:24:51.194835 containerd[1462]: time="2025-01-17T12:24:51.194787593Z" level=info msg="StopPodSandbox for \"99d57e40516b29852a2d6a4523182e419c922871aee1301552de5016f9d08f20\""
Jan 17 12:24:51.220018 containerd[1462]: time="2025-01-17T12:24:51.219935533Z" level=error msg="StopPodSandbox for \"99d57e40516b29852a2d6a4523182e419c922871aee1301552de5016f9d08f20\" failed" error="failed to destroy network for sandbox \"99d57e40516b29852a2d6a4523182e419c922871aee1301552de5016f9d08f20\": plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/"
Jan 17 12:24:51.220386 kubelet[1774]: E0117 12:24:51.220220 1774 remote_runtime.go:222] "StopPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to destroy network for sandbox \"99d57e40516b29852a2d6a4523182e419c922871aee1301552de5016f9d08f20\": plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" podSandboxID="99d57e40516b29852a2d6a4523182e419c922871aee1301552de5016f9d08f20"
Jan 17 12:24:51.220386 kubelet[1774]: E0117 12:24:51.220269 1774 kuberuntime_manager.go:1381] "Failed to stop sandbox" podSandboxID={"Type":"containerd","ID":"99d57e40516b29852a2d6a4523182e419c922871aee1301552de5016f9d08f20"}
Jan 17 12:24:51.220386 kubelet[1774]: E0117 12:24:51.220316 1774 kuberuntime_manager.go:1081] "killPodWithSyncResult failed" err="failed to \"KillPodSandbox\" for \"dda61e94-21b3-42b1-baef-aafab38d13ff\" with KillPodSandboxError: \"rpc error: code = Unknown desc = failed to destroy network for sandbox \\\"99d57e40516b29852a2d6a4523182e419c922871aee1301552de5016f9d08f20\\\": plugin type=\\\"calico\\\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\""
Jan 17 12:24:51.220386 kubelet[1774]: E0117 12:24:51.220353 1774 pod_workers.go:1298] "Error syncing pod, skipping" err="failed to \"KillPodSandbox\" for \"dda61e94-21b3-42b1-baef-aafab38d13ff\" with KillPodSandboxError: \"rpc error: code = Unknown desc = failed to destroy network for sandbox \\\"99d57e40516b29852a2d6a4523182e419c922871aee1301552de5016f9d08f20\\\": plugin type=\\\"calico\\\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="calico-system/csi-node-driver-b66m9" podUID="dda61e94-21b3-42b1-baef-aafab38d13ff"
Jan 17 12:24:51.283983 kubelet[1774]: I0117 12:24:51.283950 1774 scope.go:117] "RemoveContainer" containerID="98bb31d10477b002ad29e655c0d2da12f041763fffaba6cd0ca05a1e82ec4d71"
Jan 17 12:24:51.284063 kubelet[1774]: E0117 12:24:51.284042 1774 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Jan 17 12:24:51.284528 kubelet[1774]: E0117 12:24:51.284504 1774 pod_workers.go:1298] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"calico-node\" with CrashLoopBackOff: \"back-off 20s restarting failed container=calico-node pod=calico-node-pn6jm_calico-system(23215448-caff-4f20-82b0-3b3f56d251cf)\"" pod="calico-system/calico-node-pn6jm" podUID="23215448-caff-4f20-82b0-3b3f56d251cf"
Jan 17 12:24:51.386595 kubelet[1774]: E0117 12:24:51.386576 1774 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests"
Jan 17 12:24:52.195076 containerd[1462]: time="2025-01-17T12:24:52.195043497Z" level=info msg="StopPodSandbox for \"918cbe2b4bc05e9f8d9574f765c81dbb59033a6738cf6996840e718becf3a788\""
Jan 17 12:24:52.219446 containerd[1462]: time="2025-01-17T12:24:52.219388408Z" level=error msg="StopPodSandbox for \"918cbe2b4bc05e9f8d9574f765c81dbb59033a6738cf6996840e718becf3a788\" failed" error="failed to destroy network for sandbox \"918cbe2b4bc05e9f8d9574f765c81dbb59033a6738cf6996840e718becf3a788\": plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/"
Jan 17 12:24:52.219655 kubelet[1774]: E0117 12:24:52.219632 1774 remote_runtime.go:222] "StopPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to destroy network for sandbox \"918cbe2b4bc05e9f8d9574f765c81dbb59033a6738cf6996840e718becf3a788\": plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" podSandboxID="918cbe2b4bc05e9f8d9574f765c81dbb59033a6738cf6996840e718becf3a788"
Jan 17 12:24:52.219721 kubelet[1774]: E0117 12:24:52.219681 1774 kuberuntime_manager.go:1381] "Failed to stop sandbox" podSandboxID={"Type":"containerd","ID":"918cbe2b4bc05e9f8d9574f765c81dbb59033a6738cf6996840e718becf3a788"}
Jan 17 12:24:52.219751 kubelet[1774]: E0117 12:24:52.219729 1774 kuberuntime_manager.go:1081] "killPodWithSyncResult failed" err="failed to \"KillPodSandbox\" for \"01f58051-b831-4680-bcde-ee72a125416f\" with KillPodSandboxError: \"rpc error: code = Unknown desc = failed to destroy network for sandbox \\\"918cbe2b4bc05e9f8d9574f765c81dbb59033a6738cf6996840e718becf3a788\\\": plugin type=\\\"calico\\\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\""
Jan 17 12:24:52.219824 kubelet[1774]: E0117 12:24:52.219758 1774 pod_workers.go:1298] "Error syncing pod, skipping" err="failed to \"KillPodSandbox\" for \"01f58051-b831-4680-bcde-ee72a125416f\" with KillPodSandboxError: \"rpc error: code = Unknown desc = failed to destroy network for sandbox \\\"918cbe2b4bc05e9f8d9574f765c81dbb59033a6738cf6996840e718becf3a788\\\": plugin type=\\\"calico\\\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="kube-system/coredns-76f75df574-q4bsh" podUID="01f58051-b831-4680-bcde-ee72a125416f"
Jan 17 12:24:52.387740 kubelet[1774]: E0117 12:24:52.387681 1774 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests"
Jan 17 12:24:53.194945 containerd[1462]: time="2025-01-17T12:24:53.194899856Z" level=info msg="StopPodSandbox for \"7a7a852025bfa9adc75020f278431ddbe53f151a058f596183613c9f48b6d0e2\""
Jan 17 12:24:53.195297 containerd[1462]: time="2025-01-17T12:24:53.194913934Z" level=info msg="StopPodSandbox for \"b77c50140c75fd0d884c30355bde678a53ffcce3b94eb7e67597452a7e311262\""
Jan 17 12:24:53.218517 containerd[1462]: time="2025-01-17T12:24:53.218449898Z" level=error msg="StopPodSandbox for \"7a7a852025bfa9adc75020f278431ddbe53f151a058f596183613c9f48b6d0e2\" failed" error="failed to destroy network for sandbox \"7a7a852025bfa9adc75020f278431ddbe53f151a058f596183613c9f48b6d0e2\": plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/"
Jan 17 12:24:53.218747 kubelet[1774]: E0117 12:24:53.218715 1774 remote_runtime.go:222] "StopPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to destroy network for sandbox \"7a7a852025bfa9adc75020f278431ddbe53f151a058f596183613c9f48b6d0e2\": plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" podSandboxID="7a7a852025bfa9adc75020f278431ddbe53f151a058f596183613c9f48b6d0e2"
Jan 17 12:24:53.218807 kubelet[1774]: E0117 12:24:53.218765 1774 kuberuntime_manager.go:1381] "Failed to stop sandbox" podSandboxID={"Type":"containerd","ID":"7a7a852025bfa9adc75020f278431ddbe53f151a058f596183613c9f48b6d0e2"}
Jan 17 12:24:53.218807 kubelet[1774]: E0117 12:24:53.218802 1774 kuberuntime_manager.go:1081] "killPodWithSyncResult failed" err="failed to \"KillPodSandbox\" for \"04af4684-e94d-4b3d-a664-20ff360fb4b6\" with KillPodSandboxError: \"rpc error: code = Unknown desc = failed to destroy network for sandbox \\\"7a7a852025bfa9adc75020f278431ddbe53f151a058f596183613c9f48b6d0e2\\\": plugin type=\\\"calico\\\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\""
Jan 17 12:24:53.218893 kubelet[1774]: E0117 12:24:53.218830 1774 pod_workers.go:1298] "Error syncing pod, skipping" err="failed to \"KillPodSandbox\" for \"04af4684-e94d-4b3d-a664-20ff360fb4b6\" with KillPodSandboxError: \"rpc error: code = Unknown desc = failed to destroy network for sandbox \\\"7a7a852025bfa9adc75020f278431ddbe53f151a058f596183613c9f48b6d0e2\\\": plugin type=\\\"calico\\\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="kube-system/coredns-76f75df574-nlhgn" podUID="04af4684-e94d-4b3d-a664-20ff360fb4b6"
Jan 17 12:24:53.220845 containerd[1462]: time="2025-01-17T12:24:53.220794835Z" level=error msg="StopPodSandbox for \"b77c50140c75fd0d884c30355bde678a53ffcce3b94eb7e67597452a7e311262\" failed" error="failed to destroy network for sandbox \"b77c50140c75fd0d884c30355bde678a53ffcce3b94eb7e67597452a7e311262\": plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/"
Jan 17 12:24:53.220974 kubelet[1774]: E0117 12:24:53.220919 1774 remote_runtime.go:222] "StopPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to destroy network for sandbox \"b77c50140c75fd0d884c30355bde678a53ffcce3b94eb7e67597452a7e311262\": plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" podSandboxID="b77c50140c75fd0d884c30355bde678a53ffcce3b94eb7e67597452a7e311262"
Jan 17 12:24:53.220974 kubelet[1774]: E0117 12:24:53.220950 1774 kuberuntime_manager.go:1381] "Failed to stop sandbox" podSandboxID={"Type":"containerd","ID":"b77c50140c75fd0d884c30355bde678a53ffcce3b94eb7e67597452a7e311262"}
Jan 17 12:24:53.220974 kubelet[1774]: E0117 12:24:53.220982 1774 kuberuntime_manager.go:1081] "killPodWithSyncResult failed" err="failed to \"KillPodSandbox\" for \"cf4115fc-1224-431d-895d-b145d63715d3\" with KillPodSandboxError: \"rpc error: code = Unknown desc = failed to destroy network for sandbox \\\"b77c50140c75fd0d884c30355bde678a53ffcce3b94eb7e67597452a7e311262\\\": plugin type=\\\"calico\\\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\""
Jan 17 12:24:53.221197 kubelet[1774]: E0117 12:24:53.221004 1774 pod_workers.go:1298] "Error syncing pod, skipping" err="failed to \"KillPodSandbox\" for \"cf4115fc-1224-431d-895d-b145d63715d3\" with KillPodSandboxError: \"rpc error: code = Unknown desc = failed to destroy network for sandbox \\\"b77c50140c75fd0d884c30355bde678a53ffcce3b94eb7e67597452a7e311262\\\": plugin type=\\\"calico\\\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="default/nginx-deployment-6d5f899847-869v5" podUID="cf4115fc-1224-431d-895d-b145d63715d3"
Jan 17 12:24:53.388809 kubelet[1774]: E0117 12:24:53.388774 1774 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests"
Jan 17 12:24:54.195308 containerd[1462]: time="2025-01-17T12:24:54.195245620Z" level=info msg="StopPodSandbox for \"501de16cec8e1798d41d55d10c4e0c1ba21f146fe195d70d8c252862106d0d5b\""
Jan 17 12:24:54.219727 containerd[1462]: time="2025-01-17T12:24:54.219664326Z" level=error msg="StopPodSandbox for \"501de16cec8e1798d41d55d10c4e0c1ba21f146fe195d70d8c252862106d0d5b\" failed" error="failed to destroy network for sandbox \"501de16cec8e1798d41d55d10c4e0c1ba21f146fe195d70d8c252862106d0d5b\": plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/"
Jan 17 12:24:54.219899 kubelet[1774]: E0117 12:24:54.219873 1774 remote_runtime.go:222] "StopPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to destroy network for sandbox \"501de16cec8e1798d41d55d10c4e0c1ba21f146fe195d70d8c252862106d0d5b\": plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" podSandboxID="501de16cec8e1798d41d55d10c4e0c1ba21f146fe195d70d8c252862106d0d5b"
Jan 17 12:24:54.219966 kubelet[1774]: E0117 12:24:54.219921 1774 kuberuntime_manager.go:1381] "Failed to stop sandbox" podSandboxID={"Type":"containerd","ID":"501de16cec8e1798d41d55d10c4e0c1ba21f146fe195d70d8c252862106d0d5b"}
Jan 17 12:24:54.219966 kubelet[1774]: E0117 12:24:54.219955 1774 kuberuntime_manager.go:1081] "killPodWithSyncResult failed" err="failed to \"KillPodSandbox\" for \"3229aaed-5c2c-4c71-a248-e6141b8a2e7f\" with KillPodSandboxError: \"rpc error: code = Unknown desc = failed to destroy network for sandbox \\\"501de16cec8e1798d41d55d10c4e0c1ba21f146fe195d70d8c252862106d0d5b\\\": plugin type=\\\"calico\\\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\""
Jan 17 12:24:54.220077 kubelet[1774]: E0117 12:24:54.219981 1774 pod_workers.go:1298] "Error syncing pod, skipping" err="failed to \"KillPodSandbox\" for \"3229aaed-5c2c-4c71-a248-e6141b8a2e7f\" with KillPodSandboxError: \"rpc error: code = Unknown desc = failed to destroy network for sandbox \\\"501de16cec8e1798d41d55d10c4e0c1ba21f146fe195d70d8c252862106d0d5b\\\": plugin type=\\\"calico\\\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="calico-apiserver/calico-apiserver-68f79c84d4-6mrf5" podUID="3229aaed-5c2c-4c71-a248-e6141b8a2e7f"
Jan 17 12:24:54.389045 kubelet[1774]: E0117 12:24:54.389017 1774 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests"
Jan 17 12:24:55.358906 kubelet[1774]: I0117 12:24:55.358859 1774 scope.go:117] "RemoveContainer" containerID="98bb31d10477b002ad29e655c0d2da12f041763fffaba6cd0ca05a1e82ec4d71"
Jan 17 12:24:55.359066 kubelet[1774]: E0117 12:24:55.358952 1774 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Jan 17 12:24:55.361224 containerd[1462]: time="2025-01-17T12:24:55.361191447Z" level=info msg="CreateContainer within sandbox \"042623025d32332d5bad39431a6b5691badb6c90ccb542ad679672f857c7d40a\" for container &ContainerMetadata{Name:calico-node,Attempt:3,}"
Jan 17 12:24:55.373956 containerd[1462]: time="2025-01-17T12:24:55.373915790Z" level=info msg="CreateContainer within sandbox \"042623025d32332d5bad39431a6b5691badb6c90ccb542ad679672f857c7d40a\" for &ContainerMetadata{Name:calico-node,Attempt:3,} returns container id \"2b7e7727433bd628c02d5b334574c08cb93694a4904048fcd8e7761ad9594d67\""
Jan 17 12:24:55.374848 containerd[1462]: time="2025-01-17T12:24:55.374224480Z" level=info msg="StartContainer for \"2b7e7727433bd628c02d5b334574c08cb93694a4904048fcd8e7761ad9594d67\""
Jan 17 12:24:55.389424 kubelet[1774]: E0117 12:24:55.389390 1774 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests"
Jan 17 12:24:55.396265 systemd[1]: run-containerd-runc-k8s.io-2b7e7727433bd628c02d5b334574c08cb93694a4904048fcd8e7761ad9594d67-runc.AFci7y.mount: Deactivated successfully.
Jan 17 12:24:55.405818 systemd[1]: Started cri-containerd-2b7e7727433bd628c02d5b334574c08cb93694a4904048fcd8e7761ad9594d67.scope - libcontainer container 2b7e7727433bd628c02d5b334574c08cb93694a4904048fcd8e7761ad9594d67.
Jan 17 12:24:55.435097 containerd[1462]: time="2025-01-17T12:24:55.435055940Z" level=info msg="StartContainer for \"2b7e7727433bd628c02d5b334574c08cb93694a4904048fcd8e7761ad9594d67\" returns successfully"
Jan 17 12:24:55.479797 systemd[1]: cri-containerd-2b7e7727433bd628c02d5b334574c08cb93694a4904048fcd8e7761ad9594d67.scope: Deactivated successfully.
Jan 17 12:24:55.504629 containerd[1462]: time="2025-01-17T12:24:55.504573412Z" level=info msg="shim disconnected" id=2b7e7727433bd628c02d5b334574c08cb93694a4904048fcd8e7761ad9594d67 namespace=k8s.io Jan 17 12:24:55.504629 containerd[1462]: time="2025-01-17T12:24:55.504624649Z" level=warning msg="cleaning up after shim disconnected" id=2b7e7727433bd628c02d5b334574c08cb93694a4904048fcd8e7761ad9594d67 namespace=k8s.io Jan 17 12:24:55.504629 containerd[1462]: time="2025-01-17T12:24:55.504633115Z" level=info msg="cleaning up dead shim" namespace=k8s.io Jan 17 12:24:56.317766 kubelet[1774]: I0117 12:24:56.317724 1774 scope.go:117] "RemoveContainer" containerID="98bb31d10477b002ad29e655c0d2da12f041763fffaba6cd0ca05a1e82ec4d71" Jan 17 12:24:56.318106 kubelet[1774]: I0117 12:24:56.318076 1774 scope.go:117] "RemoveContainer" containerID="2b7e7727433bd628c02d5b334574c08cb93694a4904048fcd8e7761ad9594d67" Jan 17 12:24:56.318247 kubelet[1774]: E0117 12:24:56.318146 1774 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jan 17 12:24:56.318564 kubelet[1774]: E0117 12:24:56.318547 1774 pod_workers.go:1298] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"calico-node\" with CrashLoopBackOff: \"back-off 40s restarting failed container=calico-node pod=calico-node-pn6jm_calico-system(23215448-caff-4f20-82b0-3b3f56d251cf)\"" pod="calico-system/calico-node-pn6jm" podUID="23215448-caff-4f20-82b0-3b3f56d251cf" Jan 17 12:24:56.318645 containerd[1462]: time="2025-01-17T12:24:56.318612743Z" level=info msg="RemoveContainer for \"98bb31d10477b002ad29e655c0d2da12f041763fffaba6cd0ca05a1e82ec4d71\"" Jan 17 12:24:56.324556 containerd[1462]: time="2025-01-17T12:24:56.324523530Z" level=info msg="RemoveContainer for \"98bb31d10477b002ad29e655c0d2da12f041763fffaba6cd0ca05a1e82ec4d71\" returns successfully" Jan 17 12:24:56.370822 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-2b7e7727433bd628c02d5b334574c08cb93694a4904048fcd8e7761ad9594d67-rootfs.mount: Deactivated successfully. 
Jan 17 12:24:56.390005 kubelet[1774]: E0117 12:24:56.389970 1774 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Jan 17 12:24:57.356683 kubelet[1774]: E0117 12:24:57.356645 1774 file.go:104] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Jan 17 12:24:57.390313 kubelet[1774]: E0117 12:24:57.390279 1774 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Jan 17 12:24:57.954410 containerd[1462]: time="2025-01-17T12:24:57.954364188Z" level=info msg="StopPodSandbox for \"042623025d32332d5bad39431a6b5691badb6c90ccb542ad679672f857c7d40a\"" Jan 17 12:24:57.954410 containerd[1462]: time="2025-01-17T12:24:57.954408130Z" level=info msg="Container to stop \"2b7e7727433bd628c02d5b334574c08cb93694a4904048fcd8e7761ad9594d67\" must be in running or unknown state, current state \"CONTAINER_EXITED\"" Jan 17 12:24:57.954942 containerd[1462]: time="2025-01-17T12:24:57.954420002Z" level=info msg="Container to stop \"de0c1d564148853f11c6a5566a631eb2220775eb12f8f26fa30e6358badd8bf4\" must be in running or unknown state, current state \"CONTAINER_EXITED\"" Jan 17 12:24:57.954942 containerd[1462]: time="2025-01-17T12:24:57.954430652Z" level=info msg="Container to stop \"1ac63eed8aa9941c2f75700bd99e9310b4ea38e89a101f4cc234e45252d9db9b\" must be in running or unknown state, current state \"CONTAINER_EXITED\"" Jan 17 12:24:57.957048 systemd[1]: run-containerd-io.containerd.grpc.v1.cri-sandboxes-042623025d32332d5bad39431a6b5691badb6c90ccb542ad679672f857c7d40a-shm.mount: Deactivated successfully. Jan 17 12:24:57.960134 systemd[1]: cri-containerd-042623025d32332d5bad39431a6b5691badb6c90ccb542ad679672f857c7d40a.scope: Deactivated successfully. Jan 17 12:24:57.981533 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-042623025d32332d5bad39431a6b5691badb6c90ccb542ad679672f857c7d40a-rootfs.mount: Deactivated successfully. 
Jan 17 12:24:57.988216 containerd[1462]: time="2025-01-17T12:24:57.988156768Z" level=info msg="shim disconnected" id=042623025d32332d5bad39431a6b5691badb6c90ccb542ad679672f857c7d40a namespace=k8s.io Jan 17 12:24:57.988443 containerd[1462]: time="2025-01-17T12:24:57.988408421Z" level=warning msg="cleaning up after shim disconnected" id=042623025d32332d5bad39431a6b5691badb6c90ccb542ad679672f857c7d40a namespace=k8s.io Jan 17 12:24:57.988443 containerd[1462]: time="2025-01-17T12:24:57.988428679Z" level=info msg="cleaning up dead shim" namespace=k8s.io Jan 17 12:24:58.001481 containerd[1462]: time="2025-01-17T12:24:58.001433893Z" level=info msg="TearDown network for sandbox \"042623025d32332d5bad39431a6b5691badb6c90ccb542ad679672f857c7d40a\" successfully" Jan 17 12:24:58.001481 containerd[1462]: time="2025-01-17T12:24:58.001472666Z" level=info msg="StopPodSandbox for \"042623025d32332d5bad39431a6b5691badb6c90ccb542ad679672f857c7d40a\" returns successfully" Jan 17 12:24:58.051741 kubelet[1774]: I0117 12:24:58.051667 1774 topology_manager.go:215] "Topology Admit Handler" podUID="cb99553a-14f0-4828-9cfe-14bede5eaa5e" podNamespace="calico-system" podName="calico-node-jjmkv" Jan 17 12:24:58.051741 kubelet[1774]: E0117 12:24:58.051743 1774 cpu_manager.go:395] "RemoveStaleState: removing container" podUID="23215448-caff-4f20-82b0-3b3f56d251cf" containerName="calico-node" Jan 17 12:24:58.051940 kubelet[1774]: E0117 12:24:58.051757 1774 cpu_manager.go:395] "RemoveStaleState: removing container" podUID="23215448-caff-4f20-82b0-3b3f56d251cf" containerName="calico-node" Jan 17 12:24:58.051940 kubelet[1774]: E0117 12:24:58.051765 1774 cpu_manager.go:395] "RemoveStaleState: removing container" podUID="23215448-caff-4f20-82b0-3b3f56d251cf" containerName="flexvol-driver" Jan 17 12:24:58.051940 kubelet[1774]: E0117 12:24:58.051771 1774 cpu_manager.go:395] "RemoveStaleState: removing container" podUID="23215448-caff-4f20-82b0-3b3f56d251cf" containerName="install-cni" Jan 17 12:24:58.051940 kubelet[1774]: I0117 12:24:58.051793 1774 memory_manager.go:354] "RemoveStaleState removing state" podUID="23215448-caff-4f20-82b0-3b3f56d251cf" containerName="calico-node" Jan 17 12:24:58.051940 kubelet[1774]: I0117 12:24:58.051802 1774 memory_manager.go:354] "RemoveStaleState removing state" podUID="23215448-caff-4f20-82b0-3b3f56d251cf" containerName="calico-node" Jan 17 12:24:58.051940 kubelet[1774]: I0117 12:24:58.051810 1774 memory_manager.go:354] "RemoveStaleState removing state" podUID="23215448-caff-4f20-82b0-3b3f56d251cf" containerName="calico-node" Jan 17 12:24:58.051940 kubelet[1774]: E0117 12:24:58.051840 1774 cpu_manager.go:395] "RemoveStaleState: removing container" podUID="23215448-caff-4f20-82b0-3b3f56d251cf" containerName="calico-node" Jan 17 12:24:58.051940 kubelet[1774]: I0117 12:24:58.051856 1774 memory_manager.go:354] "RemoveStaleState removing state" podUID="23215448-caff-4f20-82b0-3b3f56d251cf" containerName="calico-node" Jan 17 12:24:58.051940 kubelet[1774]: E0117 12:24:58.051875 1774 cpu_manager.go:395] "RemoveStaleState: removing container" podUID="23215448-caff-4f20-82b0-3b3f56d251cf" containerName="calico-node" Jan 17 12:24:58.057173 systemd[1]: Created slice kubepods-besteffort-podcb99553a_14f0_4828_9cfe_14bede5eaa5e.slice - libcontainer container kubepods-besteffort-podcb99553a_14f0_4828_9cfe_14bede5eaa5e.slice. 
Jan 17 12:24:58.099762 kubelet[1774]: I0117 12:24:58.099724 1774 reconciler_common.go:172] "operationExecutor.UnmountVolume started for volume \"node-certs\" (UniqueName: \"kubernetes.io/secret/23215448-caff-4f20-82b0-3b3f56d251cf-node-certs\") pod \"23215448-caff-4f20-82b0-3b3f56d251cf\" (UID: \"23215448-caff-4f20-82b0-3b3f56d251cf\") " Jan 17 12:24:58.099762 kubelet[1774]: I0117 12:24:58.099762 1774 reconciler_common.go:172] "operationExecutor.UnmountVolume started for volume \"var-lib-calico\" (UniqueName: \"kubernetes.io/host-path/23215448-caff-4f20-82b0-3b3f56d251cf-var-lib-calico\") pod \"23215448-caff-4f20-82b0-3b3f56d251cf\" (UID: \"23215448-caff-4f20-82b0-3b3f56d251cf\") " Jan 17 12:24:58.099885 kubelet[1774]: I0117 12:24:58.099783 1774 reconciler_common.go:172] "operationExecutor.UnmountVolume started for volume \"flexvol-driver-host\" (UniqueName: \"kubernetes.io/host-path/23215448-caff-4f20-82b0-3b3f56d251cf-flexvol-driver-host\") pod \"23215448-caff-4f20-82b0-3b3f56d251cf\" (UID: \"23215448-caff-4f20-82b0-3b3f56d251cf\") " Jan 17 12:24:58.099885 kubelet[1774]: I0117 12:24:58.099807 1774 reconciler_common.go:172] "operationExecutor.UnmountVolume started for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/23215448-caff-4f20-82b0-3b3f56d251cf-lib-modules\") pod \"23215448-caff-4f20-82b0-3b3f56d251cf\" (UID: \"23215448-caff-4f20-82b0-3b3f56d251cf\") " Jan 17 12:24:58.099885 kubelet[1774]: I0117 12:24:58.099836 1774 reconciler_common.go:172] "operationExecutor.UnmountVolume started for volume \"policysync\" (UniqueName: \"kubernetes.io/host-path/23215448-caff-4f20-82b0-3b3f56d251cf-policysync\") pod \"23215448-caff-4f20-82b0-3b3f56d251cf\" (UID: \"23215448-caff-4f20-82b0-3b3f56d251cf\") " Jan 17 12:24:58.099885 kubelet[1774]: I0117 12:24:58.099858 1774 reconciler_common.go:172] "operationExecutor.UnmountVolume started for volume \"cni-bin-dir\" (UniqueName: \"kubernetes.io/host-path/23215448-caff-4f20-82b0-3b3f56d251cf-cni-bin-dir\") pod \"23215448-caff-4f20-82b0-3b3f56d251cf\" (UID: \"23215448-caff-4f20-82b0-3b3f56d251cf\") " Jan 17 12:24:58.099885 kubelet[1774]: I0117 12:24:58.099849 1774 operation_generator.go:887] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/23215448-caff-4f20-82b0-3b3f56d251cf-var-lib-calico" (OuterVolumeSpecName: "var-lib-calico") pod "23215448-caff-4f20-82b0-3b3f56d251cf" (UID: "23215448-caff-4f20-82b0-3b3f56d251cf"). InnerVolumeSpecName "var-lib-calico". PluginName "kubernetes.io/host-path", VolumeGidValue "" Jan 17 12:24:58.100013 kubelet[1774]: I0117 12:24:58.099874 1774 reconciler_common.go:172] "operationExecutor.UnmountVolume started for volume \"cni-log-dir\" (UniqueName: \"kubernetes.io/host-path/23215448-caff-4f20-82b0-3b3f56d251cf-cni-log-dir\") pod \"23215448-caff-4f20-82b0-3b3f56d251cf\" (UID: \"23215448-caff-4f20-82b0-3b3f56d251cf\") " Jan 17 12:24:58.100013 kubelet[1774]: I0117 12:24:58.099909 1774 operation_generator.go:887] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/23215448-caff-4f20-82b0-3b3f56d251cf-cni-log-dir" (OuterVolumeSpecName: "cni-log-dir") pod "23215448-caff-4f20-82b0-3b3f56d251cf" (UID: "23215448-caff-4f20-82b0-3b3f56d251cf"). InnerVolumeSpecName "cni-log-dir". 
PluginName "kubernetes.io/host-path", VolumeGidValue "" Jan 17 12:24:58.100013 kubelet[1774]: I0117 12:24:58.099934 1774 reconciler_common.go:172] "operationExecutor.UnmountVolume started for volume \"tigera-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/23215448-caff-4f20-82b0-3b3f56d251cf-tigera-ca-bundle\") pod \"23215448-caff-4f20-82b0-3b3f56d251cf\" (UID: \"23215448-caff-4f20-82b0-3b3f56d251cf\") " Jan 17 12:24:58.100013 kubelet[1774]: I0117 12:24:58.099938 1774 operation_generator.go:887] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/23215448-caff-4f20-82b0-3b3f56d251cf-flexvol-driver-host" (OuterVolumeSpecName: "flexvol-driver-host") pod "23215448-caff-4f20-82b0-3b3f56d251cf" (UID: "23215448-caff-4f20-82b0-3b3f56d251cf"). InnerVolumeSpecName "flexvol-driver-host". PluginName "kubernetes.io/host-path", VolumeGidValue "" Jan 17 12:24:58.100013 kubelet[1774]: I0117 12:24:58.099962 1774 reconciler_common.go:172] "operationExecutor.UnmountVolume started for volume \"kube-api-access-4nq4t\" (UniqueName: \"kubernetes.io/projected/23215448-caff-4f20-82b0-3b3f56d251cf-kube-api-access-4nq4t\") pod \"23215448-caff-4f20-82b0-3b3f56d251cf\" (UID: \"23215448-caff-4f20-82b0-3b3f56d251cf\") " Jan 17 12:24:58.100830 kubelet[1774]: I0117 12:24:58.099977 1774 operation_generator.go:887] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/23215448-caff-4f20-82b0-3b3f56d251cf-policysync" (OuterVolumeSpecName: "policysync") pod "23215448-caff-4f20-82b0-3b3f56d251cf" (UID: "23215448-caff-4f20-82b0-3b3f56d251cf"). InnerVolumeSpecName "policysync". PluginName "kubernetes.io/host-path", VolumeGidValue "" Jan 17 12:24:58.100830 kubelet[1774]: I0117 12:24:58.099985 1774 reconciler_common.go:172] "operationExecutor.UnmountVolume started for volume \"cni-net-dir\" (UniqueName: \"kubernetes.io/host-path/23215448-caff-4f20-82b0-3b3f56d251cf-cni-net-dir\") pod \"23215448-caff-4f20-82b0-3b3f56d251cf\" (UID: \"23215448-caff-4f20-82b0-3b3f56d251cf\") " Jan 17 12:24:58.100830 kubelet[1774]: I0117 12:24:58.099998 1774 operation_generator.go:887] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/23215448-caff-4f20-82b0-3b3f56d251cf-cni-bin-dir" (OuterVolumeSpecName: "cni-bin-dir") pod "23215448-caff-4f20-82b0-3b3f56d251cf" (UID: "23215448-caff-4f20-82b0-3b3f56d251cf"). InnerVolumeSpecName "cni-bin-dir". 
PluginName "kubernetes.io/host-path", VolumeGidValue "" Jan 17 12:24:58.100830 kubelet[1774]: I0117 12:24:58.100006 1774 reconciler_common.go:172] "operationExecutor.UnmountVolume started for volume \"xtables-lock\" (UniqueName: \"kubernetes.io/host-path/23215448-caff-4f20-82b0-3b3f56d251cf-xtables-lock\") pod \"23215448-caff-4f20-82b0-3b3f56d251cf\" (UID: \"23215448-caff-4f20-82b0-3b3f56d251cf\") " Jan 17 12:24:58.100830 kubelet[1774]: I0117 12:24:58.100025 1774 reconciler_common.go:172] "operationExecutor.UnmountVolume started for volume \"var-run-calico\" (UniqueName: \"kubernetes.io/host-path/23215448-caff-4f20-82b0-3b3f56d251cf-var-run-calico\") pod \"23215448-caff-4f20-82b0-3b3f56d251cf\" (UID: \"23215448-caff-4f20-82b0-3b3f56d251cf\") " Jan 17 12:24:58.100990 kubelet[1774]: I0117 12:24:58.100078 1774 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"var-run-calico\" (UniqueName: \"kubernetes.io/host-path/cb99553a-14f0-4828-9cfe-14bede5eaa5e-var-run-calico\") pod \"calico-node-jjmkv\" (UID: \"cb99553a-14f0-4828-9cfe-14bede5eaa5e\") " pod="calico-system/calico-node-jjmkv" Jan 17 12:24:58.100990 kubelet[1774]: I0117 12:24:58.100100 1774 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cni-net-dir\" (UniqueName: \"kubernetes.io/host-path/cb99553a-14f0-4828-9cfe-14bede5eaa5e-cni-net-dir\") pod \"calico-node-jjmkv\" (UID: \"cb99553a-14f0-4828-9cfe-14bede5eaa5e\") " pod="calico-system/calico-node-jjmkv" Jan 17 12:24:58.100990 kubelet[1774]: I0117 12:24:58.100120 1774 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"xtables-lock\" (UniqueName: \"kubernetes.io/host-path/cb99553a-14f0-4828-9cfe-14bede5eaa5e-xtables-lock\") pod \"calico-node-jjmkv\" (UID: \"cb99553a-14f0-4828-9cfe-14bede5eaa5e\") " pod="calico-system/calico-node-jjmkv" Jan 17 12:24:58.100990 kubelet[1774]: I0117 12:24:58.100139 1774 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"tigera-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/cb99553a-14f0-4828-9cfe-14bede5eaa5e-tigera-ca-bundle\") pod \"calico-node-jjmkv\" (UID: \"cb99553a-14f0-4828-9cfe-14bede5eaa5e\") " pod="calico-system/calico-node-jjmkv" Jan 17 12:24:58.100990 kubelet[1774]: I0117 12:24:58.100157 1774 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"var-lib-calico\" (UniqueName: \"kubernetes.io/host-path/cb99553a-14f0-4828-9cfe-14bede5eaa5e-var-lib-calico\") pod \"calico-node-jjmkv\" (UID: \"cb99553a-14f0-4828-9cfe-14bede5eaa5e\") " pod="calico-system/calico-node-jjmkv" Jan 17 12:24:58.101105 kubelet[1774]: I0117 12:24:58.100178 1774 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-9q4h8\" (UniqueName: \"kubernetes.io/projected/cb99553a-14f0-4828-9cfe-14bede5eaa5e-kube-api-access-9q4h8\") pod \"calico-node-jjmkv\" (UID: \"cb99553a-14f0-4828-9cfe-14bede5eaa5e\") " pod="calico-system/calico-node-jjmkv" Jan 17 12:24:58.101105 kubelet[1774]: I0117 12:24:58.100196 1774 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"node-certs\" (UniqueName: \"kubernetes.io/secret/cb99553a-14f0-4828-9cfe-14bede5eaa5e-node-certs\") pod \"calico-node-jjmkv\" (UID: \"cb99553a-14f0-4828-9cfe-14bede5eaa5e\") " pod="calico-system/calico-node-jjmkv" Jan 17 12:24:58.101105 kubelet[1774]: I0117 12:24:58.100215 1774 reconciler_common.go:258] 
"operationExecutor.VerifyControllerAttachedVolume started for volume \"cni-log-dir\" (UniqueName: \"kubernetes.io/host-path/cb99553a-14f0-4828-9cfe-14bede5eaa5e-cni-log-dir\") pod \"calico-node-jjmkv\" (UID: \"cb99553a-14f0-4828-9cfe-14bede5eaa5e\") " pod="calico-system/calico-node-jjmkv" Jan 17 12:24:58.101105 kubelet[1774]: I0117 12:24:58.100234 1774 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/cb99553a-14f0-4828-9cfe-14bede5eaa5e-lib-modules\") pod \"calico-node-jjmkv\" (UID: \"cb99553a-14f0-4828-9cfe-14bede5eaa5e\") " pod="calico-system/calico-node-jjmkv" Jan 17 12:24:58.101105 kubelet[1774]: I0117 12:24:58.100255 1774 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"flexvol-driver-host\" (UniqueName: \"kubernetes.io/host-path/cb99553a-14f0-4828-9cfe-14bede5eaa5e-flexvol-driver-host\") pod \"calico-node-jjmkv\" (UID: \"cb99553a-14f0-4828-9cfe-14bede5eaa5e\") " pod="calico-system/calico-node-jjmkv" Jan 17 12:24:58.101216 kubelet[1774]: I0117 12:24:58.100276 1774 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"policysync\" (UniqueName: \"kubernetes.io/host-path/cb99553a-14f0-4828-9cfe-14bede5eaa5e-policysync\") pod \"calico-node-jjmkv\" (UID: \"cb99553a-14f0-4828-9cfe-14bede5eaa5e\") " pod="calico-system/calico-node-jjmkv" Jan 17 12:24:58.101216 kubelet[1774]: I0117 12:24:58.100298 1774 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cni-bin-dir\" (UniqueName: \"kubernetes.io/host-path/cb99553a-14f0-4828-9cfe-14bede5eaa5e-cni-bin-dir\") pod \"calico-node-jjmkv\" (UID: \"cb99553a-14f0-4828-9cfe-14bede5eaa5e\") " pod="calico-system/calico-node-jjmkv" Jan 17 12:24:58.101216 kubelet[1774]: I0117 12:24:58.100320 1774 reconciler_common.go:300] "Volume detached for volume \"var-lib-calico\" (UniqueName: \"kubernetes.io/host-path/23215448-caff-4f20-82b0-3b3f56d251cf-var-lib-calico\") on node \"10.0.0.160\" DevicePath \"\"" Jan 17 12:24:58.101216 kubelet[1774]: I0117 12:24:58.100331 1774 reconciler_common.go:300] "Volume detached for volume \"flexvol-driver-host\" (UniqueName: \"kubernetes.io/host-path/23215448-caff-4f20-82b0-3b3f56d251cf-flexvol-driver-host\") on node \"10.0.0.160\" DevicePath \"\"" Jan 17 12:24:58.101216 kubelet[1774]: I0117 12:24:58.100341 1774 reconciler_common.go:300] "Volume detached for volume \"policysync\" (UniqueName: \"kubernetes.io/host-path/23215448-caff-4f20-82b0-3b3f56d251cf-policysync\") on node \"10.0.0.160\" DevicePath \"\"" Jan 17 12:24:58.101216 kubelet[1774]: I0117 12:24:58.100351 1774 reconciler_common.go:300] "Volume detached for volume \"cni-bin-dir\" (UniqueName: \"kubernetes.io/host-path/23215448-caff-4f20-82b0-3b3f56d251cf-cni-bin-dir\") on node \"10.0.0.160\" DevicePath \"\"" Jan 17 12:24:58.101216 kubelet[1774]: I0117 12:24:58.100360 1774 reconciler_common.go:300] "Volume detached for volume \"cni-log-dir\" (UniqueName: \"kubernetes.io/host-path/23215448-caff-4f20-82b0-3b3f56d251cf-cni-log-dir\") on node \"10.0.0.160\" DevicePath \"\"" Jan 17 12:24:58.101374 kubelet[1774]: I0117 12:24:58.099961 1774 operation_generator.go:887] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/23215448-caff-4f20-82b0-3b3f56d251cf-lib-modules" (OuterVolumeSpecName: "lib-modules") pod "23215448-caff-4f20-82b0-3b3f56d251cf" (UID: "23215448-caff-4f20-82b0-3b3f56d251cf"). InnerVolumeSpecName "lib-modules". 
PluginName "kubernetes.io/host-path", VolumeGidValue "" Jan 17 12:24:58.101374 kubelet[1774]: I0117 12:24:58.100415 1774 operation_generator.go:887] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/23215448-caff-4f20-82b0-3b3f56d251cf-cni-net-dir" (OuterVolumeSpecName: "cni-net-dir") pod "23215448-caff-4f20-82b0-3b3f56d251cf" (UID: "23215448-caff-4f20-82b0-3b3f56d251cf"). InnerVolumeSpecName "cni-net-dir". PluginName "kubernetes.io/host-path", VolumeGidValue "" Jan 17 12:24:58.101374 kubelet[1774]: I0117 12:24:58.100434 1774 operation_generator.go:887] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/23215448-caff-4f20-82b0-3b3f56d251cf-xtables-lock" (OuterVolumeSpecName: "xtables-lock") pod "23215448-caff-4f20-82b0-3b3f56d251cf" (UID: "23215448-caff-4f20-82b0-3b3f56d251cf"). InnerVolumeSpecName "xtables-lock". PluginName "kubernetes.io/host-path", VolumeGidValue "" Jan 17 12:24:58.101374 kubelet[1774]: I0117 12:24:58.100451 1774 operation_generator.go:887] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/23215448-caff-4f20-82b0-3b3f56d251cf-var-run-calico" (OuterVolumeSpecName: "var-run-calico") pod "23215448-caff-4f20-82b0-3b3f56d251cf" (UID: "23215448-caff-4f20-82b0-3b3f56d251cf"). InnerVolumeSpecName "var-run-calico". PluginName "kubernetes.io/host-path", VolumeGidValue "" Jan 17 12:24:58.102737 kubelet[1774]: I0117 12:24:58.102690 1774 operation_generator.go:887] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/23215448-caff-4f20-82b0-3b3f56d251cf-node-certs" (OuterVolumeSpecName: "node-certs") pod "23215448-caff-4f20-82b0-3b3f56d251cf" (UID: "23215448-caff-4f20-82b0-3b3f56d251cf"). InnerVolumeSpecName "node-certs". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 17 12:24:58.103458 kubelet[1774]: I0117 12:24:58.103417 1774 operation_generator.go:887] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/23215448-caff-4f20-82b0-3b3f56d251cf-tigera-ca-bundle" (OuterVolumeSpecName: "tigera-ca-bundle") pod "23215448-caff-4f20-82b0-3b3f56d251cf" (UID: "23215448-caff-4f20-82b0-3b3f56d251cf"). InnerVolumeSpecName "tigera-ca-bundle". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 17 12:24:58.103540 kubelet[1774]: I0117 12:24:58.103510 1774 operation_generator.go:887] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/23215448-caff-4f20-82b0-3b3f56d251cf-kube-api-access-4nq4t" (OuterVolumeSpecName: "kube-api-access-4nq4t") pod "23215448-caff-4f20-82b0-3b3f56d251cf" (UID: "23215448-caff-4f20-82b0-3b3f56d251cf"). InnerVolumeSpecName "kube-api-access-4nq4t". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 17 12:24:58.104042 systemd[1]: var-lib-kubelet-pods-23215448\x2dcaff\x2d4f20\x2d82b0\x2d3b3f56d251cf-volume\x2dsubpaths-tigera\x2dca\x2dbundle-calico\x2dnode-1.mount: Deactivated successfully. Jan 17 12:24:58.104161 systemd[1]: var-lib-kubelet-pods-23215448\x2dcaff\x2d4f20\x2d82b0\x2d3b3f56d251cf-volumes-kubernetes.io\x7esecret-node\x2dcerts.mount: Deactivated successfully. Jan 17 12:24:58.106138 systemd[1]: var-lib-kubelet-pods-23215448\x2dcaff\x2d4f20\x2d82b0\x2d3b3f56d251cf-volumes-kubernetes.io\x7eprojected-kube\x2dapi\x2daccess\x2d4nq4t.mount: Deactivated successfully. 
Jan 17 12:24:58.195106 containerd[1462]: time="2025-01-17T12:24:58.195044193Z" level=info msg="StopPodSandbox for \"c73f963120c36550314792c43b88b2d6f709a08f08037497692b316525e358bf\"" Jan 17 12:24:58.200965 kubelet[1774]: I0117 12:24:58.200944 1774 reconciler_common.go:300] "Volume detached for volume \"tigera-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/23215448-caff-4f20-82b0-3b3f56d251cf-tigera-ca-bundle\") on node \"10.0.0.160\" DevicePath \"\"" Jan 17 12:24:58.201506 kubelet[1774]: I0117 12:24:58.201424 1774 reconciler_common.go:300] "Volume detached for volume \"kube-api-access-4nq4t\" (UniqueName: \"kubernetes.io/projected/23215448-caff-4f20-82b0-3b3f56d251cf-kube-api-access-4nq4t\") on node \"10.0.0.160\" DevicePath \"\"" Jan 17 12:24:58.201506 kubelet[1774]: I0117 12:24:58.201440 1774 reconciler_common.go:300] "Volume detached for volume \"cni-net-dir\" (UniqueName: \"kubernetes.io/host-path/23215448-caff-4f20-82b0-3b3f56d251cf-cni-net-dir\") on node \"10.0.0.160\" DevicePath \"\"" Jan 17 12:24:58.201506 kubelet[1774]: I0117 12:24:58.201450 1774 reconciler_common.go:300] "Volume detached for volume \"xtables-lock\" (UniqueName: \"kubernetes.io/host-path/23215448-caff-4f20-82b0-3b3f56d251cf-xtables-lock\") on node \"10.0.0.160\" DevicePath \"\"" Jan 17 12:24:58.201506 kubelet[1774]: I0117 12:24:58.201459 1774 reconciler_common.go:300] "Volume detached for volume \"var-run-calico\" (UniqueName: \"kubernetes.io/host-path/23215448-caff-4f20-82b0-3b3f56d251cf-var-run-calico\") on node \"10.0.0.160\" DevicePath \"\"" Jan 17 12:24:58.201506 kubelet[1774]: I0117 12:24:58.201469 1774 reconciler_common.go:300] "Volume detached for volume \"node-certs\" (UniqueName: \"kubernetes.io/secret/23215448-caff-4f20-82b0-3b3f56d251cf-node-certs\") on node \"10.0.0.160\" DevicePath \"\"" Jan 17 12:24:58.201506 kubelet[1774]: I0117 12:24:58.201479 1774 reconciler_common.go:300] "Volume detached for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/23215448-caff-4f20-82b0-3b3f56d251cf-lib-modules\") on node \"10.0.0.160\" DevicePath \"\"" Jan 17 12:24:58.203123 systemd[1]: Removed slice kubepods-besteffort-pod23215448_caff_4f20_82b0_3b3f56d251cf.slice - libcontainer container kubepods-besteffort-pod23215448_caff_4f20_82b0_3b3f56d251cf.slice. 
Jan 17 12:24:58.223021 containerd[1462]: time="2025-01-17T12:24:58.222977047Z" level=error msg="StopPodSandbox for \"c73f963120c36550314792c43b88b2d6f709a08f08037497692b316525e358bf\" failed" error="failed to destroy network for sandbox \"c73f963120c36550314792c43b88b2d6f709a08f08037497692b316525e358bf\": plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jan 17 12:24:58.223232 kubelet[1774]: E0117 12:24:58.223199 1774 remote_runtime.go:222] "StopPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to destroy network for sandbox \"c73f963120c36550314792c43b88b2d6f709a08f08037497692b316525e358bf\": plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" podSandboxID="c73f963120c36550314792c43b88b2d6f709a08f08037497692b316525e358bf" Jan 17 12:24:58.223340 kubelet[1774]: E0117 12:24:58.223247 1774 kuberuntime_manager.go:1381] "Failed to stop sandbox" podSandboxID={"Type":"containerd","ID":"c73f963120c36550314792c43b88b2d6f709a08f08037497692b316525e358bf"} Jan 17 12:24:58.223340 kubelet[1774]: E0117 12:24:58.223285 1774 kuberuntime_manager.go:1081] "killPodWithSyncResult failed" err="failed to \"KillPodSandbox\" for \"d48ea218-f377-4f0d-a97e-292b17df6d89\" with KillPodSandboxError: \"rpc error: code = Unknown desc = failed to destroy network for sandbox \\\"c73f963120c36550314792c43b88b2d6f709a08f08037497692b316525e358bf\\\": plugin type=\\\"calico\\\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" Jan 17 12:24:58.223340 kubelet[1774]: E0117 12:24:58.223316 1774 pod_workers.go:1298] "Error syncing pod, skipping" err="failed to \"KillPodSandbox\" for \"d48ea218-f377-4f0d-a97e-292b17df6d89\" with KillPodSandboxError: \"rpc error: code = Unknown desc = failed to destroy network for sandbox \\\"c73f963120c36550314792c43b88b2d6f709a08f08037497692b316525e358bf\\\": plugin type=\\\"calico\\\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="calico-apiserver/calico-apiserver-68f79c84d4-xr6b6" podUID="d48ea218-f377-4f0d-a97e-292b17df6d89" Jan 17 12:24:58.324440 kubelet[1774]: I0117 12:24:58.324417 1774 scope.go:117] "RemoveContainer" containerID="2b7e7727433bd628c02d5b334574c08cb93694a4904048fcd8e7761ad9594d67" Jan 17 12:24:58.325312 containerd[1462]: time="2025-01-17T12:24:58.325274151Z" level=info msg="RemoveContainer for \"2b7e7727433bd628c02d5b334574c08cb93694a4904048fcd8e7761ad9594d67\"" Jan 17 12:24:58.328823 containerd[1462]: time="2025-01-17T12:24:58.328767814Z" level=info msg="RemoveContainer for \"2b7e7727433bd628c02d5b334574c08cb93694a4904048fcd8e7761ad9594d67\" returns successfully" Jan 17 12:24:58.328965 kubelet[1774]: I0117 12:24:58.328940 1774 scope.go:117] "RemoveContainer" containerID="1ac63eed8aa9941c2f75700bd99e9310b4ea38e89a101f4cc234e45252d9db9b" Jan 17 12:24:58.329826 containerd[1462]: time="2025-01-17T12:24:58.329772862Z" level=info msg="RemoveContainer for \"1ac63eed8aa9941c2f75700bd99e9310b4ea38e89a101f4cc234e45252d9db9b\"" Jan 17 12:24:58.332726 containerd[1462]: time="2025-01-17T12:24:58.332682047Z" level=info msg="RemoveContainer for 
\"1ac63eed8aa9941c2f75700bd99e9310b4ea38e89a101f4cc234e45252d9db9b\" returns successfully" Jan 17 12:24:58.332955 kubelet[1774]: I0117 12:24:58.332828 1774 scope.go:117] "RemoveContainer" containerID="de0c1d564148853f11c6a5566a631eb2220775eb12f8f26fa30e6358badd8bf4" Jan 17 12:24:58.334000 containerd[1462]: time="2025-01-17T12:24:58.333961230Z" level=info msg="RemoveContainer for \"de0c1d564148853f11c6a5566a631eb2220775eb12f8f26fa30e6358badd8bf4\"" Jan 17 12:24:58.338579 containerd[1462]: time="2025-01-17T12:24:58.336849917Z" level=info msg="RemoveContainer for \"de0c1d564148853f11c6a5566a631eb2220775eb12f8f26fa30e6358badd8bf4\" returns successfully" Jan 17 12:24:58.359757 kubelet[1774]: E0117 12:24:58.359735 1774 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jan 17 12:24:58.360122 containerd[1462]: time="2025-01-17T12:24:58.360091848Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-node-jjmkv,Uid:cb99553a-14f0-4828-9cfe-14bede5eaa5e,Namespace:calico-system,Attempt:0,}" Jan 17 12:24:58.379335 containerd[1462]: time="2025-01-17T12:24:58.379221245Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Jan 17 12:24:58.379335 containerd[1462]: time="2025-01-17T12:24:58.379290945Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Jan 17 12:24:58.379335 containerd[1462]: time="2025-01-17T12:24:58.379302377Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Jan 17 12:24:58.379531 containerd[1462]: time="2025-01-17T12:24:58.379389892Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Jan 17 12:24:58.390867 kubelet[1774]: E0117 12:24:58.390823 1774 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Jan 17 12:24:58.395941 systemd[1]: Started cri-containerd-6c1ea6b32b2bd77db28f9ec87f3927ec1e90925cb0f2b98e5558159a07f2e0c1.scope - libcontainer container 6c1ea6b32b2bd77db28f9ec87f3927ec1e90925cb0f2b98e5558159a07f2e0c1. 
Jan 17 12:24:58.415823 containerd[1462]: time="2025-01-17T12:24:58.415782499Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-node-jjmkv,Uid:cb99553a-14f0-4828-9cfe-14bede5eaa5e,Namespace:calico-system,Attempt:0,} returns sandbox id \"6c1ea6b32b2bd77db28f9ec87f3927ec1e90925cb0f2b98e5558159a07f2e0c1\"" Jan 17 12:24:58.416461 kubelet[1774]: E0117 12:24:58.416424 1774 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jan 17 12:24:58.418177 containerd[1462]: time="2025-01-17T12:24:58.418134166Z" level=info msg="CreateContainer within sandbox \"6c1ea6b32b2bd77db28f9ec87f3927ec1e90925cb0f2b98e5558159a07f2e0c1\" for container &ContainerMetadata{Name:flexvol-driver,Attempt:0,}" Jan 17 12:24:58.431996 containerd[1462]: time="2025-01-17T12:24:58.431964688Z" level=info msg="CreateContainer within sandbox \"6c1ea6b32b2bd77db28f9ec87f3927ec1e90925cb0f2b98e5558159a07f2e0c1\" for &ContainerMetadata{Name:flexvol-driver,Attempt:0,} returns container id \"223641c197db27f182b9b21fd26dd077f37e68202a3101d13a0ea7ffa15647ae\"" Jan 17 12:24:58.432274 containerd[1462]: time="2025-01-17T12:24:58.432251717Z" level=info msg="StartContainer for \"223641c197db27f182b9b21fd26dd077f37e68202a3101d13a0ea7ffa15647ae\"" Jan 17 12:24:58.462831 systemd[1]: Started cri-containerd-223641c197db27f182b9b21fd26dd077f37e68202a3101d13a0ea7ffa15647ae.scope - libcontainer container 223641c197db27f182b9b21fd26dd077f37e68202a3101d13a0ea7ffa15647ae. Jan 17 12:24:58.489222 containerd[1462]: time="2025-01-17T12:24:58.488403093Z" level=info msg="StartContainer for \"223641c197db27f182b9b21fd26dd077f37e68202a3101d13a0ea7ffa15647ae\" returns successfully" Jan 17 12:24:58.503181 systemd[1]: cri-containerd-223641c197db27f182b9b21fd26dd077f37e68202a3101d13a0ea7ffa15647ae.scope: Deactivated successfully. 
Jan 17 12:24:58.541233 containerd[1462]: time="2025-01-17T12:24:58.541155754Z" level=info msg="shim disconnected" id=223641c197db27f182b9b21fd26dd077f37e68202a3101d13a0ea7ffa15647ae namespace=k8s.io Jan 17 12:24:58.541233 containerd[1462]: time="2025-01-17T12:24:58.541222339Z" level=warning msg="cleaning up after shim disconnected" id=223641c197db27f182b9b21fd26dd077f37e68202a3101d13a0ea7ffa15647ae namespace=k8s.io Jan 17 12:24:58.541233 containerd[1462]: time="2025-01-17T12:24:58.541231175Z" level=info msg="cleaning up dead shim" namespace=k8s.io Jan 17 12:24:59.327933 kubelet[1774]: E0117 12:24:59.327902 1774 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jan 17 12:24:59.329454 containerd[1462]: time="2025-01-17T12:24:59.329423852Z" level=info msg="CreateContainer within sandbox \"6c1ea6b32b2bd77db28f9ec87f3927ec1e90925cb0f2b98e5558159a07f2e0c1\" for container &ContainerMetadata{Name:install-cni,Attempt:0,}" Jan 17 12:24:59.342499 containerd[1462]: time="2025-01-17T12:24:59.342460369Z" level=info msg="CreateContainer within sandbox \"6c1ea6b32b2bd77db28f9ec87f3927ec1e90925cb0f2b98e5558159a07f2e0c1\" for &ContainerMetadata{Name:install-cni,Attempt:0,} returns container id \"d69f3fdd6a0fb962fd4a4dc90383946eb7645d7a3b73859e803f10b8a614dda4\"" Jan 17 12:24:59.342850 containerd[1462]: time="2025-01-17T12:24:59.342827810Z" level=info msg="StartContainer for \"d69f3fdd6a0fb962fd4a4dc90383946eb7645d7a3b73859e803f10b8a614dda4\"" Jan 17 12:24:59.374840 systemd[1]: Started cri-containerd-d69f3fdd6a0fb962fd4a4dc90383946eb7645d7a3b73859e803f10b8a614dda4.scope - libcontainer container d69f3fdd6a0fb962fd4a4dc90383946eb7645d7a3b73859e803f10b8a614dda4. Jan 17 12:24:59.391671 kubelet[1774]: E0117 12:24:59.391624 1774 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Jan 17 12:24:59.403407 containerd[1462]: time="2025-01-17T12:24:59.403350906Z" level=info msg="StartContainer for \"d69f3fdd6a0fb962fd4a4dc90383946eb7645d7a3b73859e803f10b8a614dda4\" returns successfully" Jan 17 12:24:59.654714 systemd[1]: cri-containerd-d69f3fdd6a0fb962fd4a4dc90383946eb7645d7a3b73859e803f10b8a614dda4.scope: Deactivated successfully. Jan 17 12:24:59.679923 containerd[1462]: time="2025-01-17T12:24:59.679861030Z" level=info msg="shim disconnected" id=d69f3fdd6a0fb962fd4a4dc90383946eb7645d7a3b73859e803f10b8a614dda4 namespace=k8s.io Jan 17 12:24:59.679923 containerd[1462]: time="2025-01-17T12:24:59.679906575Z" level=warning msg="cleaning up after shim disconnected" id=d69f3fdd6a0fb962fd4a4dc90383946eb7645d7a3b73859e803f10b8a614dda4 namespace=k8s.io Jan 17 12:24:59.679923 containerd[1462]: time="2025-01-17T12:24:59.679914590Z" level=info msg="cleaning up dead shim" namespace=k8s.io Jan 17 12:24:59.956270 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-d69f3fdd6a0fb962fd4a4dc90383946eb7645d7a3b73859e803f10b8a614dda4-rootfs.mount: Deactivated successfully. 
Jan 17 12:25:00.196521 kubelet[1774]: I0117 12:25:00.196486 1774 kubelet_volumes.go:161] "Cleaned up orphaned pod volumes dir" podUID="23215448-caff-4f20-82b0-3b3f56d251cf" path="/var/lib/kubelet/pods/23215448-caff-4f20-82b0-3b3f56d251cf/volumes" Jan 17 12:25:00.331881 kubelet[1774]: E0117 12:25:00.331849 1774 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jan 17 12:25:00.338482 containerd[1462]: time="2025-01-17T12:25:00.338434173Z" level=info msg="CreateContainer within sandbox \"6c1ea6b32b2bd77db28f9ec87f3927ec1e90925cb0f2b98e5558159a07f2e0c1\" for container &ContainerMetadata{Name:calico-node,Attempt:0,}" Jan 17 12:25:00.352806 containerd[1462]: time="2025-01-17T12:25:00.352740493Z" level=info msg="CreateContainer within sandbox \"6c1ea6b32b2bd77db28f9ec87f3927ec1e90925cb0f2b98e5558159a07f2e0c1\" for &ContainerMetadata{Name:calico-node,Attempt:0,} returns container id \"85aa3b47e2499bc731d9a732049ed5b78e8fa8fd743ae24c8324a3c86b146d78\"" Jan 17 12:25:00.353279 containerd[1462]: time="2025-01-17T12:25:00.353243007Z" level=info msg="StartContainer for \"85aa3b47e2499bc731d9a732049ed5b78e8fa8fd743ae24c8324a3c86b146d78\"" Jan 17 12:25:00.381873 systemd[1]: Started cri-containerd-85aa3b47e2499bc731d9a732049ed5b78e8fa8fd743ae24c8324a3c86b146d78.scope - libcontainer container 85aa3b47e2499bc731d9a732049ed5b78e8fa8fd743ae24c8324a3c86b146d78. Jan 17 12:25:00.392411 kubelet[1774]: E0117 12:25:00.392369 1774 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Jan 17 12:25:00.412677 containerd[1462]: time="2025-01-17T12:25:00.412635488Z" level=info msg="StartContainer for \"85aa3b47e2499bc731d9a732049ed5b78e8fa8fd743ae24c8324a3c86b146d78\" returns successfully" Jan 17 12:25:01.335551 kubelet[1774]: E0117 12:25:01.335517 1774 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jan 17 12:25:01.347079 kubelet[1774]: I0117 12:25:01.346842 1774 pod_startup_latency_tracker.go:102] "Observed pod startup duration" pod="calico-system/calico-node-jjmkv" podStartSLOduration=3.346808251 podStartE2EDuration="3.346808251s" podCreationTimestamp="2025-01-17 12:24:58 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-01-17 12:25:01.346258087 +0000 UTC m=+64.610213886" watchObservedRunningTime="2025-01-17 12:25:01.346808251 +0000 UTC m=+64.610764050" Jan 17 12:25:01.393296 kubelet[1774]: E0117 12:25:01.393264 1774 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Jan 17 12:25:01.719715 kernel: bpftool[3837]: memfd_create() called without MFD_EXEC or MFD_NOEXEC_SEAL set Jan 17 12:25:01.937206 systemd-networkd[1390]: vxlan.calico: Link UP Jan 17 12:25:01.937218 systemd-networkd[1390]: vxlan.calico: Gained carrier Jan 17 12:25:02.337357 kubelet[1774]: E0117 12:25:02.337332 1774 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jan 17 12:25:02.393375 kubelet[1774]: E0117 12:25:02.393331 1774 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Jan 17 12:25:03.235876 
systemd-networkd[1390]: vxlan.calico: Gained IPv6LL Jan 17 12:25:03.393539 kubelet[1774]: E0117 12:25:03.393455 1774 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Jan 17 12:25:04.195810 containerd[1462]: time="2025-01-17T12:25:04.195614043Z" level=info msg="StopPodSandbox for \"b77c50140c75fd0d884c30355bde678a53ffcce3b94eb7e67597452a7e311262\"" Jan 17 12:25:04.201928 containerd[1462]: time="2025-01-17T12:25:04.201663190Z" level=info msg="StopPodSandbox for \"918cbe2b4bc05e9f8d9574f765c81dbb59033a6738cf6996840e718becf3a788\"" Jan 17 12:25:04.264153 containerd[1462]: 2025-01-17 12:25:04.235 [INFO][3953] cni-plugin/k8s.go 608: Cleaning up netns ContainerID="b77c50140c75fd0d884c30355bde678a53ffcce3b94eb7e67597452a7e311262" Jan 17 12:25:04.264153 containerd[1462]: 2025-01-17 12:25:04.235 [INFO][3953] cni-plugin/dataplane_linux.go 559: Deleting workload's device in netns. ContainerID="b77c50140c75fd0d884c30355bde678a53ffcce3b94eb7e67597452a7e311262" iface="eth0" netns="/var/run/netns/cni-5a77ada1-d716-dd45-cd7c-d1f4b265cf25" Jan 17 12:25:04.264153 containerd[1462]: 2025-01-17 12:25:04.235 [INFO][3953] cni-plugin/dataplane_linux.go 570: Entered netns, deleting veth. ContainerID="b77c50140c75fd0d884c30355bde678a53ffcce3b94eb7e67597452a7e311262" iface="eth0" netns="/var/run/netns/cni-5a77ada1-d716-dd45-cd7c-d1f4b265cf25" Jan 17 12:25:04.264153 containerd[1462]: 2025-01-17 12:25:04.235 [INFO][3953] cni-plugin/dataplane_linux.go 597: Workload's veth was already gone. Nothing to do. ContainerID="b77c50140c75fd0d884c30355bde678a53ffcce3b94eb7e67597452a7e311262" iface="eth0" netns="/var/run/netns/cni-5a77ada1-d716-dd45-cd7c-d1f4b265cf25" Jan 17 12:25:04.264153 containerd[1462]: 2025-01-17 12:25:04.235 [INFO][3953] cni-plugin/k8s.go 615: Releasing IP address(es) ContainerID="b77c50140c75fd0d884c30355bde678a53ffcce3b94eb7e67597452a7e311262" Jan 17 12:25:04.264153 containerd[1462]: 2025-01-17 12:25:04.235 [INFO][3953] cni-plugin/utils.go 188: Calico CNI releasing IP address ContainerID="b77c50140c75fd0d884c30355bde678a53ffcce3b94eb7e67597452a7e311262" Jan 17 12:25:04.264153 containerd[1462]: 2025-01-17 12:25:04.253 [INFO][3981] ipam/ipam_plugin.go 412: Releasing address using handleID ContainerID="b77c50140c75fd0d884c30355bde678a53ffcce3b94eb7e67597452a7e311262" HandleID="k8s-pod-network.b77c50140c75fd0d884c30355bde678a53ffcce3b94eb7e67597452a7e311262" Workload="10.0.0.160-k8s-nginx--deployment--6d5f899847--869v5-eth0" Jan 17 12:25:04.264153 containerd[1462]: 2025-01-17 12:25:04.253 [INFO][3981] ipam/ipam_plugin.go 353: About to acquire host-wide IPAM lock. Jan 17 12:25:04.264153 containerd[1462]: 2025-01-17 12:25:04.253 [INFO][3981] ipam/ipam_plugin.go 368: Acquired host-wide IPAM lock. Jan 17 12:25:04.264153 containerd[1462]: 2025-01-17 12:25:04.258 [WARNING][3981] ipam/ipam_plugin.go 429: Asked to release address but it doesn't exist. 
Ignoring ContainerID="b77c50140c75fd0d884c30355bde678a53ffcce3b94eb7e67597452a7e311262" HandleID="k8s-pod-network.b77c50140c75fd0d884c30355bde678a53ffcce3b94eb7e67597452a7e311262" Workload="10.0.0.160-k8s-nginx--deployment--6d5f899847--869v5-eth0" Jan 17 12:25:04.264153 containerd[1462]: 2025-01-17 12:25:04.258 [INFO][3981] ipam/ipam_plugin.go 440: Releasing address using workloadID ContainerID="b77c50140c75fd0d884c30355bde678a53ffcce3b94eb7e67597452a7e311262" HandleID="k8s-pod-network.b77c50140c75fd0d884c30355bde678a53ffcce3b94eb7e67597452a7e311262" Workload="10.0.0.160-k8s-nginx--deployment--6d5f899847--869v5-eth0" Jan 17 12:25:04.264153 containerd[1462]: 2025-01-17 12:25:04.260 [INFO][3981] ipam/ipam_plugin.go 374: Released host-wide IPAM lock. Jan 17 12:25:04.264153 containerd[1462]: 2025-01-17 12:25:04.262 [INFO][3953] cni-plugin/k8s.go 621: Teardown processing complete. ContainerID="b77c50140c75fd0d884c30355bde678a53ffcce3b94eb7e67597452a7e311262" Jan 17 12:25:04.264576 containerd[1462]: time="2025-01-17T12:25:04.264304381Z" level=info msg="TearDown network for sandbox \"b77c50140c75fd0d884c30355bde678a53ffcce3b94eb7e67597452a7e311262\" successfully" Jan 17 12:25:04.264576 containerd[1462]: time="2025-01-17T12:25:04.264330702Z" level=info msg="StopPodSandbox for \"b77c50140c75fd0d884c30355bde678a53ffcce3b94eb7e67597452a7e311262\" returns successfully" Jan 17 12:25:04.265057 containerd[1462]: time="2025-01-17T12:25:04.264962277Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:nginx-deployment-6d5f899847-869v5,Uid:cf4115fc-1224-431d-895d-b145d63715d3,Namespace:default,Attempt:1,}" Jan 17 12:25:04.266311 systemd[1]: run-netns-cni\x2d5a77ada1\x2dd716\x2ddd45\x2dcd7c\x2dd1f4b265cf25.mount: Deactivated successfully. Jan 17 12:25:04.271252 containerd[1462]: 2025-01-17 12:25:04.239 [INFO][3969] cni-plugin/k8s.go 608: Cleaning up netns ContainerID="918cbe2b4bc05e9f8d9574f765c81dbb59033a6738cf6996840e718becf3a788" Jan 17 12:25:04.271252 containerd[1462]: 2025-01-17 12:25:04.239 [INFO][3969] cni-plugin/dataplane_linux.go 559: Deleting workload's device in netns. ContainerID="918cbe2b4bc05e9f8d9574f765c81dbb59033a6738cf6996840e718becf3a788" iface="eth0" netns="/var/run/netns/cni-36e132bf-e58d-e82c-5d25-4a07d67cb490" Jan 17 12:25:04.271252 containerd[1462]: 2025-01-17 12:25:04.239 [INFO][3969] cni-plugin/dataplane_linux.go 570: Entered netns, deleting veth. ContainerID="918cbe2b4bc05e9f8d9574f765c81dbb59033a6738cf6996840e718becf3a788" iface="eth0" netns="/var/run/netns/cni-36e132bf-e58d-e82c-5d25-4a07d67cb490" Jan 17 12:25:04.271252 containerd[1462]: 2025-01-17 12:25:04.239 [INFO][3969] cni-plugin/dataplane_linux.go 597: Workload's veth was already gone. Nothing to do. 
ContainerID="918cbe2b4bc05e9f8d9574f765c81dbb59033a6738cf6996840e718becf3a788" iface="eth0" netns="/var/run/netns/cni-36e132bf-e58d-e82c-5d25-4a07d67cb490" Jan 17 12:25:04.271252 containerd[1462]: 2025-01-17 12:25:04.239 [INFO][3969] cni-plugin/k8s.go 615: Releasing IP address(es) ContainerID="918cbe2b4bc05e9f8d9574f765c81dbb59033a6738cf6996840e718becf3a788" Jan 17 12:25:04.271252 containerd[1462]: 2025-01-17 12:25:04.239 [INFO][3969] cni-plugin/utils.go 188: Calico CNI releasing IP address ContainerID="918cbe2b4bc05e9f8d9574f765c81dbb59033a6738cf6996840e718becf3a788" Jan 17 12:25:04.271252 containerd[1462]: 2025-01-17 12:25:04.259 [INFO][3985] ipam/ipam_plugin.go 412: Releasing address using handleID ContainerID="918cbe2b4bc05e9f8d9574f765c81dbb59033a6738cf6996840e718becf3a788" HandleID="k8s-pod-network.918cbe2b4bc05e9f8d9574f765c81dbb59033a6738cf6996840e718becf3a788" Workload="10.0.0.160-k8s-coredns--76f75df574--q4bsh-eth0" Jan 17 12:25:04.271252 containerd[1462]: 2025-01-17 12:25:04.260 [INFO][3985] ipam/ipam_plugin.go 353: About to acquire host-wide IPAM lock. Jan 17 12:25:04.271252 containerd[1462]: 2025-01-17 12:25:04.260 [INFO][3985] ipam/ipam_plugin.go 368: Acquired host-wide IPAM lock. Jan 17 12:25:04.271252 containerd[1462]: 2025-01-17 12:25:04.265 [WARNING][3985] ipam/ipam_plugin.go 429: Asked to release address but it doesn't exist. Ignoring ContainerID="918cbe2b4bc05e9f8d9574f765c81dbb59033a6738cf6996840e718becf3a788" HandleID="k8s-pod-network.918cbe2b4bc05e9f8d9574f765c81dbb59033a6738cf6996840e718becf3a788" Workload="10.0.0.160-k8s-coredns--76f75df574--q4bsh-eth0" Jan 17 12:25:04.271252 containerd[1462]: 2025-01-17 12:25:04.265 [INFO][3985] ipam/ipam_plugin.go 440: Releasing address using workloadID ContainerID="918cbe2b4bc05e9f8d9574f765c81dbb59033a6738cf6996840e718becf3a788" HandleID="k8s-pod-network.918cbe2b4bc05e9f8d9574f765c81dbb59033a6738cf6996840e718becf3a788" Workload="10.0.0.160-k8s-coredns--76f75df574--q4bsh-eth0" Jan 17 12:25:04.271252 containerd[1462]: 2025-01-17 12:25:04.266 [INFO][3985] ipam/ipam_plugin.go 374: Released host-wide IPAM lock. Jan 17 12:25:04.271252 containerd[1462]: 2025-01-17 12:25:04.269 [INFO][3969] cni-plugin/k8s.go 621: Teardown processing complete. ContainerID="918cbe2b4bc05e9f8d9574f765c81dbb59033a6738cf6996840e718becf3a788" Jan 17 12:25:04.271754 containerd[1462]: time="2025-01-17T12:25:04.271377752Z" level=info msg="TearDown network for sandbox \"918cbe2b4bc05e9f8d9574f765c81dbb59033a6738cf6996840e718becf3a788\" successfully" Jan 17 12:25:04.271754 containerd[1462]: time="2025-01-17T12:25:04.271394634Z" level=info msg="StopPodSandbox for \"918cbe2b4bc05e9f8d9574f765c81dbb59033a6738cf6996840e718becf3a788\" returns successfully" Jan 17 12:25:04.271811 kubelet[1774]: E0117 12:25:04.271613 1774 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jan 17 12:25:04.272378 containerd[1462]: time="2025-01-17T12:25:04.272164510Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-76f75df574-q4bsh,Uid:01f58051-b831-4680-bcde-ee72a125416f,Namespace:kube-system,Attempt:1,}" Jan 17 12:25:04.273041 systemd[1]: run-netns-cni\x2d36e132bf\x2de58d\x2de82c\x2d5d25\x2d4a07d67cb490.mount: Deactivated successfully. 
Jan 17 12:25:04.370836 systemd-networkd[1390]: cali8a4ec19a811: Link UP Jan 17 12:25:04.372126 systemd-networkd[1390]: cali8a4ec19a811: Gained carrier Jan 17 12:25:04.379481 containerd[1462]: 2025-01-17 12:25:04.311 [INFO][3995] cni-plugin/plugin.go 325: Calico CNI found existing endpoint: &{{WorkloadEndpoint projectcalico.org/v3} {10.0.0.160-k8s-nginx--deployment--6d5f899847--869v5-eth0 nginx-deployment-6d5f899847- default cf4115fc-1224-431d-895d-b145d63715d3 1098 0 2025-01-17 12:24:15 +0000 UTC map[app:nginx pod-template-hash:6d5f899847 projectcalico.org/namespace:default projectcalico.org/orchestrator:k8s projectcalico.org/serviceaccount:default] map[] [] [] []} {k8s 10.0.0.160 nginx-deployment-6d5f899847-869v5 eth0 default [] [] [kns.default ksa.default.default] cali8a4ec19a811 [] []}} ContainerID="ff5741c06f0144fa618883b93d2c7f79c46d887c06a03018be9da38513b4908b" Namespace="default" Pod="nginx-deployment-6d5f899847-869v5" WorkloadEndpoint="10.0.0.160-k8s-nginx--deployment--6d5f899847--869v5-" Jan 17 12:25:04.379481 containerd[1462]: 2025-01-17 12:25:04.311 [INFO][3995] cni-plugin/k8s.go 77: Extracted identifiers for CmdAddK8s ContainerID="ff5741c06f0144fa618883b93d2c7f79c46d887c06a03018be9da38513b4908b" Namespace="default" Pod="nginx-deployment-6d5f899847-869v5" WorkloadEndpoint="10.0.0.160-k8s-nginx--deployment--6d5f899847--869v5-eth0" Jan 17 12:25:04.379481 containerd[1462]: 2025-01-17 12:25:04.336 [INFO][4021] ipam/ipam_plugin.go 225: Calico CNI IPAM request count IPv4=1 IPv6=0 ContainerID="ff5741c06f0144fa618883b93d2c7f79c46d887c06a03018be9da38513b4908b" HandleID="k8s-pod-network.ff5741c06f0144fa618883b93d2c7f79c46d887c06a03018be9da38513b4908b" Workload="10.0.0.160-k8s-nginx--deployment--6d5f899847--869v5-eth0" Jan 17 12:25:04.379481 containerd[1462]: 2025-01-17 12:25:04.346 [INFO][4021] ipam/ipam_plugin.go 265: Auto assigning IP ContainerID="ff5741c06f0144fa618883b93d2c7f79c46d887c06a03018be9da38513b4908b" HandleID="k8s-pod-network.ff5741c06f0144fa618883b93d2c7f79c46d887c06a03018be9da38513b4908b" Workload="10.0.0.160-k8s-nginx--deployment--6d5f899847--869v5-eth0" assignArgs=ipam.AutoAssignArgs{Num4:1, Num6:0, HandleID:(*string)(0xc000405430), Attrs:map[string]string{"namespace":"default", "node":"10.0.0.160", "pod":"nginx-deployment-6d5f899847-869v5", "timestamp":"2025-01-17 12:25:04.336576777 +0000 UTC"}, Hostname:"10.0.0.160", IPv4Pools:[]net.IPNet{}, IPv6Pools:[]net.IPNet{}, MaxBlocksPerHost:0, HostReservedAttrIPv4s:(*ipam.HostReservedAttr)(nil), HostReservedAttrIPv6s:(*ipam.HostReservedAttr)(nil), IntendedUse:"Workload"} Jan 17 12:25:04.379481 containerd[1462]: 2025-01-17 12:25:04.346 [INFO][4021] ipam/ipam_plugin.go 353: About to acquire host-wide IPAM lock. Jan 17 12:25:04.379481 containerd[1462]: 2025-01-17 12:25:04.346 [INFO][4021] ipam/ipam_plugin.go 368: Acquired host-wide IPAM lock. 
Jan 17 12:25:04.379481 containerd[1462]: 2025-01-17 12:25:04.346 [INFO][4021] ipam/ipam.go 107: Auto-assign 1 ipv4, 0 ipv6 addrs for host '10.0.0.160' Jan 17 12:25:04.379481 containerd[1462]: 2025-01-17 12:25:04.347 [INFO][4021] ipam/ipam.go 660: Looking up existing affinities for host handle="k8s-pod-network.ff5741c06f0144fa618883b93d2c7f79c46d887c06a03018be9da38513b4908b" host="10.0.0.160" Jan 17 12:25:04.379481 containerd[1462]: 2025-01-17 12:25:04.350 [INFO][4021] ipam/ipam.go 372: Looking up existing affinities for host host="10.0.0.160" Jan 17 12:25:04.379481 containerd[1462]: 2025-01-17 12:25:04.354 [INFO][4021] ipam/ipam.go 489: Trying affinity for 192.168.44.0/26 host="10.0.0.160" Jan 17 12:25:04.379481 containerd[1462]: 2025-01-17 12:25:04.355 [INFO][4021] ipam/ipam.go 155: Attempting to load block cidr=192.168.44.0/26 host="10.0.0.160" Jan 17 12:25:04.379481 containerd[1462]: 2025-01-17 12:25:04.357 [INFO][4021] ipam/ipam.go 232: Affinity is confirmed and block has been loaded cidr=192.168.44.0/26 host="10.0.0.160" Jan 17 12:25:04.379481 containerd[1462]: 2025-01-17 12:25:04.357 [INFO][4021] ipam/ipam.go 1180: Attempting to assign 1 addresses from block block=192.168.44.0/26 handle="k8s-pod-network.ff5741c06f0144fa618883b93d2c7f79c46d887c06a03018be9da38513b4908b" host="10.0.0.160" Jan 17 12:25:04.379481 containerd[1462]: 2025-01-17 12:25:04.358 [INFO][4021] ipam/ipam.go 1685: Creating new handle: k8s-pod-network.ff5741c06f0144fa618883b93d2c7f79c46d887c06a03018be9da38513b4908b Jan 17 12:25:04.379481 containerd[1462]: 2025-01-17 12:25:04.362 [INFO][4021] ipam/ipam.go 1203: Writing block in order to claim IPs block=192.168.44.0/26 handle="k8s-pod-network.ff5741c06f0144fa618883b93d2c7f79c46d887c06a03018be9da38513b4908b" host="10.0.0.160" Jan 17 12:25:04.379481 containerd[1462]: 2025-01-17 12:25:04.365 [INFO][4021] ipam/ipam.go 1216: Successfully claimed IPs: [192.168.44.1/26] block=192.168.44.0/26 handle="k8s-pod-network.ff5741c06f0144fa618883b93d2c7f79c46d887c06a03018be9da38513b4908b" host="10.0.0.160" Jan 17 12:25:04.379481 containerd[1462]: 2025-01-17 12:25:04.365 [INFO][4021] ipam/ipam.go 847: Auto-assigned 1 out of 1 IPv4s: [192.168.44.1/26] handle="k8s-pod-network.ff5741c06f0144fa618883b93d2c7f79c46d887c06a03018be9da38513b4908b" host="10.0.0.160" Jan 17 12:25:04.379481 containerd[1462]: 2025-01-17 12:25:04.365 [INFO][4021] ipam/ipam_plugin.go 374: Released host-wide IPAM lock. 
Jan 17 12:25:04.379481 containerd[1462]: 2025-01-17 12:25:04.365 [INFO][4021] ipam/ipam_plugin.go 283: Calico CNI IPAM assigned addresses IPv4=[192.168.44.1/26] IPv6=[] ContainerID="ff5741c06f0144fa618883b93d2c7f79c46d887c06a03018be9da38513b4908b" HandleID="k8s-pod-network.ff5741c06f0144fa618883b93d2c7f79c46d887c06a03018be9da38513b4908b" Workload="10.0.0.160-k8s-nginx--deployment--6d5f899847--869v5-eth0" Jan 17 12:25:04.380522 containerd[1462]: 2025-01-17 12:25:04.367 [INFO][3995] cni-plugin/k8s.go 386: Populated endpoint ContainerID="ff5741c06f0144fa618883b93d2c7f79c46d887c06a03018be9da38513b4908b" Namespace="default" Pod="nginx-deployment-6d5f899847-869v5" WorkloadEndpoint="10.0.0.160-k8s-nginx--deployment--6d5f899847--869v5-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"10.0.0.160-k8s-nginx--deployment--6d5f899847--869v5-eth0", GenerateName:"nginx-deployment-6d5f899847-", Namespace:"default", SelfLink:"", UID:"cf4115fc-1224-431d-895d-b145d63715d3", ResourceVersion:"1098", Generation:0, CreationTimestamp:time.Date(2025, time.January, 17, 12, 24, 15, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"app":"nginx", "pod-template-hash":"6d5f899847", "projectcalico.org/namespace":"default", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"default"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"10.0.0.160", ContainerID:"", Pod:"nginx-deployment-6d5f899847-869v5", Endpoint:"eth0", ServiceAccountName:"default", IPNetworks:[]string{"192.168.44.1/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.default", "ksa.default.default"}, InterfaceName:"cali8a4ec19a811", MAC:"", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil)}} Jan 17 12:25:04.380522 containerd[1462]: 2025-01-17 12:25:04.368 [INFO][3995] cni-plugin/k8s.go 387: Calico CNI using IPs: [192.168.44.1/32] ContainerID="ff5741c06f0144fa618883b93d2c7f79c46d887c06a03018be9da38513b4908b" Namespace="default" Pod="nginx-deployment-6d5f899847-869v5" WorkloadEndpoint="10.0.0.160-k8s-nginx--deployment--6d5f899847--869v5-eth0" Jan 17 12:25:04.380522 containerd[1462]: 2025-01-17 12:25:04.368 [INFO][3995] cni-plugin/dataplane_linux.go 69: Setting the host side veth name to cali8a4ec19a811 ContainerID="ff5741c06f0144fa618883b93d2c7f79c46d887c06a03018be9da38513b4908b" Namespace="default" Pod="nginx-deployment-6d5f899847-869v5" WorkloadEndpoint="10.0.0.160-k8s-nginx--deployment--6d5f899847--869v5-eth0" Jan 17 12:25:04.380522 containerd[1462]: 2025-01-17 12:25:04.370 [INFO][3995] cni-plugin/dataplane_linux.go 508: Disabling IPv4 forwarding ContainerID="ff5741c06f0144fa618883b93d2c7f79c46d887c06a03018be9da38513b4908b" Namespace="default" Pod="nginx-deployment-6d5f899847-869v5" WorkloadEndpoint="10.0.0.160-k8s-nginx--deployment--6d5f899847--869v5-eth0" Jan 17 12:25:04.380522 containerd[1462]: 2025-01-17 12:25:04.371 [INFO][3995] cni-plugin/k8s.go 414: Added Mac, interface name, and active container ID to endpoint ContainerID="ff5741c06f0144fa618883b93d2c7f79c46d887c06a03018be9da38513b4908b" Namespace="default" Pod="nginx-deployment-6d5f899847-869v5" WorkloadEndpoint="10.0.0.160-k8s-nginx--deployment--6d5f899847--869v5-eth0" 
endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"10.0.0.160-k8s-nginx--deployment--6d5f899847--869v5-eth0", GenerateName:"nginx-deployment-6d5f899847-", Namespace:"default", SelfLink:"", UID:"cf4115fc-1224-431d-895d-b145d63715d3", ResourceVersion:"1098", Generation:0, CreationTimestamp:time.Date(2025, time.January, 17, 12, 24, 15, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"app":"nginx", "pod-template-hash":"6d5f899847", "projectcalico.org/namespace":"default", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"default"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"10.0.0.160", ContainerID:"ff5741c06f0144fa618883b93d2c7f79c46d887c06a03018be9da38513b4908b", Pod:"nginx-deployment-6d5f899847-869v5", Endpoint:"eth0", ServiceAccountName:"default", IPNetworks:[]string{"192.168.44.1/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.default", "ksa.default.default"}, InterfaceName:"cali8a4ec19a811", MAC:"ba:5a:a7:65:4d:9f", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil)}} Jan 17 12:25:04.380522 containerd[1462]: 2025-01-17 12:25:04.376 [INFO][3995] cni-plugin/k8s.go 500: Wrote updated endpoint to datastore ContainerID="ff5741c06f0144fa618883b93d2c7f79c46d887c06a03018be9da38513b4908b" Namespace="default" Pod="nginx-deployment-6d5f899847-869v5" WorkloadEndpoint="10.0.0.160-k8s-nginx--deployment--6d5f899847--869v5-eth0" Jan 17 12:25:04.394582 kubelet[1774]: E0117 12:25:04.394513 1774 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Jan 17 12:25:04.398306 systemd-networkd[1390]: calibb2bee070f4: Link UP Jan 17 12:25:04.400228 containerd[1462]: time="2025-01-17T12:25:04.400134690Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Jan 17 12:25:04.400228 containerd[1462]: time="2025-01-17T12:25:04.400197047Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Jan 17 12:25:04.400228 containerd[1462]: time="2025-01-17T12:25:04.400209091Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Jan 17 12:25:04.400402 containerd[1462]: time="2025-01-17T12:25:04.400291575Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." 
runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Jan 17 12:25:04.401945 systemd-networkd[1390]: calibb2bee070f4: Gained carrier Jan 17 12:25:04.413751 containerd[1462]: 2025-01-17 12:25:04.319 [INFO][4007] cni-plugin/plugin.go 325: Calico CNI found existing endpoint: &{{WorkloadEndpoint projectcalico.org/v3} {10.0.0.160-k8s-coredns--76f75df574--q4bsh-eth0 coredns-76f75df574- kube-system 01f58051-b831-4680-bcde-ee72a125416f 1099 0 2025-01-17 12:23:44 +0000 UTC map[k8s-app:kube-dns pod-template-hash:76f75df574 projectcalico.org/namespace:kube-system projectcalico.org/orchestrator:k8s projectcalico.org/serviceaccount:coredns] map[] [] [] []} {k8s 10.0.0.160 coredns-76f75df574-q4bsh eth0 coredns [] [] [kns.kube-system ksa.kube-system.coredns] calibb2bee070f4 [{dns UDP 53 0 } {dns-tcp TCP 53 0 } {metrics TCP 9153 0 }] []}} ContainerID="3f3df8bece8cb989ff367ee9bf7d4b7d8a16bd5dd32a08a3c9d496825a27df27" Namespace="kube-system" Pod="coredns-76f75df574-q4bsh" WorkloadEndpoint="10.0.0.160-k8s-coredns--76f75df574--q4bsh-" Jan 17 12:25:04.413751 containerd[1462]: 2025-01-17 12:25:04.319 [INFO][4007] cni-plugin/k8s.go 77: Extracted identifiers for CmdAddK8s ContainerID="3f3df8bece8cb989ff367ee9bf7d4b7d8a16bd5dd32a08a3c9d496825a27df27" Namespace="kube-system" Pod="coredns-76f75df574-q4bsh" WorkloadEndpoint="10.0.0.160-k8s-coredns--76f75df574--q4bsh-eth0" Jan 17 12:25:04.413751 containerd[1462]: 2025-01-17 12:25:04.344 [INFO][4026] ipam/ipam_plugin.go 225: Calico CNI IPAM request count IPv4=1 IPv6=0 ContainerID="3f3df8bece8cb989ff367ee9bf7d4b7d8a16bd5dd32a08a3c9d496825a27df27" HandleID="k8s-pod-network.3f3df8bece8cb989ff367ee9bf7d4b7d8a16bd5dd32a08a3c9d496825a27df27" Workload="10.0.0.160-k8s-coredns--76f75df574--q4bsh-eth0" Jan 17 12:25:04.413751 containerd[1462]: 2025-01-17 12:25:04.350 [INFO][4026] ipam/ipam_plugin.go 265: Auto assigning IP ContainerID="3f3df8bece8cb989ff367ee9bf7d4b7d8a16bd5dd32a08a3c9d496825a27df27" HandleID="k8s-pod-network.3f3df8bece8cb989ff367ee9bf7d4b7d8a16bd5dd32a08a3c9d496825a27df27" Workload="10.0.0.160-k8s-coredns--76f75df574--q4bsh-eth0" assignArgs=ipam.AutoAssignArgs{Num4:1, Num6:0, HandleID:(*string)(0xc0002ddaa0), Attrs:map[string]string{"namespace":"kube-system", "node":"10.0.0.160", "pod":"coredns-76f75df574-q4bsh", "timestamp":"2025-01-17 12:25:04.34443978 +0000 UTC"}, Hostname:"10.0.0.160", IPv4Pools:[]net.IPNet{}, IPv6Pools:[]net.IPNet{}, MaxBlocksPerHost:0, HostReservedAttrIPv4s:(*ipam.HostReservedAttr)(nil), HostReservedAttrIPv6s:(*ipam.HostReservedAttr)(nil), IntendedUse:"Workload"} Jan 17 12:25:04.413751 containerd[1462]: 2025-01-17 12:25:04.350 [INFO][4026] ipam/ipam_plugin.go 353: About to acquire host-wide IPAM lock. Jan 17 12:25:04.413751 containerd[1462]: 2025-01-17 12:25:04.365 [INFO][4026] ipam/ipam_plugin.go 368: Acquired host-wide IPAM lock. 
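[Editor's note] Each containerd line here wraps the CNI plugin's own record: after the journal prefix ("Jan 17 ... containerd[1462]:") comes an inner record of the form "2025-01-17 12:25:04.346 [INFO][4021] ipam/ipam.go 107: message". A small, purely illustrative Go parser for that inner shape; the regexp and field names are this note's assumptions, not anything containerd or Calico ship.

package main

import (
	"fmt"
	"regexp"
)

// inner matches "<date time> [LEVEL][instance] <file> <line>: <message>".
var inner = regexp.MustCompile(
	`(\d{4}-\d{2}-\d{2} \d{2}:\d{2}:\d{2}\.\d+) \[(\w+)\]\[(\d+)\] (\S+) (\d+): (.*)`)

func main() {
	line := `2025-01-17 12:25:04.346 [INFO][4021] ipam/ipam.go 107: Auto-assign 1 ipv4, 0 ipv6 addrs for host '10.0.0.160'`
	m := inner.FindStringSubmatch(line)
	if m == nil {
		return
	}
	fmt.Printf("time=%s level=%s id=%s file=%s line=%s msg=%q\n",
		m[1], m[2], m[3], m[4], m[5], m[6])
}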
Jan 17 12:25:04.413751 containerd[1462]: 2025-01-17 12:25:04.366 [INFO][4026] ipam/ipam.go 107: Auto-assign 1 ipv4, 0 ipv6 addrs for host '10.0.0.160' Jan 17 12:25:04.413751 containerd[1462]: 2025-01-17 12:25:04.367 [INFO][4026] ipam/ipam.go 660: Looking up existing affinities for host handle="k8s-pod-network.3f3df8bece8cb989ff367ee9bf7d4b7d8a16bd5dd32a08a3c9d496825a27df27" host="10.0.0.160" Jan 17 12:25:04.413751 containerd[1462]: 2025-01-17 12:25:04.371 [INFO][4026] ipam/ipam.go 372: Looking up existing affinities for host host="10.0.0.160" Jan 17 12:25:04.413751 containerd[1462]: 2025-01-17 12:25:04.376 [INFO][4026] ipam/ipam.go 489: Trying affinity for 192.168.44.0/26 host="10.0.0.160" Jan 17 12:25:04.413751 containerd[1462]: 2025-01-17 12:25:04.379 [INFO][4026] ipam/ipam.go 155: Attempting to load block cidr=192.168.44.0/26 host="10.0.0.160" Jan 17 12:25:04.413751 containerd[1462]: 2025-01-17 12:25:04.382 [INFO][4026] ipam/ipam.go 232: Affinity is confirmed and block has been loaded cidr=192.168.44.0/26 host="10.0.0.160" Jan 17 12:25:04.413751 containerd[1462]: 2025-01-17 12:25:04.382 [INFO][4026] ipam/ipam.go 1180: Attempting to assign 1 addresses from block block=192.168.44.0/26 handle="k8s-pod-network.3f3df8bece8cb989ff367ee9bf7d4b7d8a16bd5dd32a08a3c9d496825a27df27" host="10.0.0.160" Jan 17 12:25:04.413751 containerd[1462]: 2025-01-17 12:25:04.384 [INFO][4026] ipam/ipam.go 1685: Creating new handle: k8s-pod-network.3f3df8bece8cb989ff367ee9bf7d4b7d8a16bd5dd32a08a3c9d496825a27df27 Jan 17 12:25:04.413751 containerd[1462]: 2025-01-17 12:25:04.388 [INFO][4026] ipam/ipam.go 1203: Writing block in order to claim IPs block=192.168.44.0/26 handle="k8s-pod-network.3f3df8bece8cb989ff367ee9bf7d4b7d8a16bd5dd32a08a3c9d496825a27df27" host="10.0.0.160" Jan 17 12:25:04.413751 containerd[1462]: 2025-01-17 12:25:04.392 [INFO][4026] ipam/ipam.go 1216: Successfully claimed IPs: [192.168.44.2/26] block=192.168.44.0/26 handle="k8s-pod-network.3f3df8bece8cb989ff367ee9bf7d4b7d8a16bd5dd32a08a3c9d496825a27df27" host="10.0.0.160" Jan 17 12:25:04.413751 containerd[1462]: 2025-01-17 12:25:04.392 [INFO][4026] ipam/ipam.go 847: Auto-assigned 1 out of 1 IPv4s: [192.168.44.2/26] handle="k8s-pod-network.3f3df8bece8cb989ff367ee9bf7d4b7d8a16bd5dd32a08a3c9d496825a27df27" host="10.0.0.160" Jan 17 12:25:04.413751 containerd[1462]: 2025-01-17 12:25:04.392 [INFO][4026] ipam/ipam_plugin.go 374: Released host-wide IPAM lock. 
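[Editor's note] Note the interleaving: invocation [4026] (coredns) logged "About to acquire host-wide IPAM lock" at 12:25:04.350 but only acquired it at 12:25:04.365, the instant [4021] (nginx) released it. That serialization is why the two pods received distinct, consecutive addresses. A toy Go demonstration; pod names come from the log, everything else is invented.

package main

import (
	"fmt"
	"sync"
)

// Two concurrent CNI ADDs serialize on one mutex, so each pod gets a
// distinct next address. Goroutine order is not fixed here; in the log
// above, the nginx ADD happened to win.
func main() {
	var (
		mu   sync.Mutex
		next = 1
		wg   sync.WaitGroup
	)
	assign := func(pod string) {
		defer wg.Done()
		mu.Lock()         // "About to acquire..." -> "Acquired host-wide IPAM lock."
		defer mu.Unlock() // "Released host-wide IPAM lock."
		fmt.Printf("%s -> 192.168.44.%d\n", pod, next)
		next++
	}
	wg.Add(2)
	go assign("nginx-deployment-6d5f899847-869v5")
	go assign("coredns-76f75df574-q4bsh")
	wg.Wait()
}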
Jan 17 12:25:04.413751 containerd[1462]: 2025-01-17 12:25:04.392 [INFO][4026] ipam/ipam_plugin.go 283: Calico CNI IPAM assigned addresses IPv4=[192.168.44.2/26] IPv6=[] ContainerID="3f3df8bece8cb989ff367ee9bf7d4b7d8a16bd5dd32a08a3c9d496825a27df27" HandleID="k8s-pod-network.3f3df8bece8cb989ff367ee9bf7d4b7d8a16bd5dd32a08a3c9d496825a27df27" Workload="10.0.0.160-k8s-coredns--76f75df574--q4bsh-eth0" Jan 17 12:25:04.414266 containerd[1462]: 2025-01-17 12:25:04.395 [INFO][4007] cni-plugin/k8s.go 386: Populated endpoint ContainerID="3f3df8bece8cb989ff367ee9bf7d4b7d8a16bd5dd32a08a3c9d496825a27df27" Namespace="kube-system" Pod="coredns-76f75df574-q4bsh" WorkloadEndpoint="10.0.0.160-k8s-coredns--76f75df574--q4bsh-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"10.0.0.160-k8s-coredns--76f75df574--q4bsh-eth0", GenerateName:"coredns-76f75df574-", Namespace:"kube-system", SelfLink:"", UID:"01f58051-b831-4680-bcde-ee72a125416f", ResourceVersion:"1099", Generation:0, CreationTimestamp:time.Date(2025, time.January, 17, 12, 23, 44, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"k8s-app":"kube-dns", "pod-template-hash":"76f75df574", "projectcalico.org/namespace":"kube-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"coredns"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"10.0.0.160", ContainerID:"", Pod:"coredns-76f75df574-q4bsh", Endpoint:"eth0", ServiceAccountName:"coredns", IPNetworks:[]string{"192.168.44.2/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.kube-system", "ksa.kube-system.coredns"}, InterfaceName:"calibb2bee070f4", MAC:"", Ports:[]v3.WorkloadEndpointPort{v3.WorkloadEndpointPort{Name:"dns", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"UDP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"dns-tcp", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"metrics", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x23c1, HostPort:0x0, HostIP:""}}, AllowSpoofedSourcePrefixes:[]string(nil)}} Jan 17 12:25:04.414266 containerd[1462]: 2025-01-17 12:25:04.395 [INFO][4007] cni-plugin/k8s.go 387: Calico CNI using IPs: [192.168.44.2/32] ContainerID="3f3df8bece8cb989ff367ee9bf7d4b7d8a16bd5dd32a08a3c9d496825a27df27" Namespace="kube-system" Pod="coredns-76f75df574-q4bsh" WorkloadEndpoint="10.0.0.160-k8s-coredns--76f75df574--q4bsh-eth0" Jan 17 12:25:04.414266 containerd[1462]: 2025-01-17 12:25:04.395 [INFO][4007] cni-plugin/dataplane_linux.go 69: Setting the host side veth name to calibb2bee070f4 ContainerID="3f3df8bece8cb989ff367ee9bf7d4b7d8a16bd5dd32a08a3c9d496825a27df27" Namespace="kube-system" Pod="coredns-76f75df574-q4bsh" WorkloadEndpoint="10.0.0.160-k8s-coredns--76f75df574--q4bsh-eth0" Jan 17 12:25:04.414266 containerd[1462]: 2025-01-17 12:25:04.402 [INFO][4007] cni-plugin/dataplane_linux.go 508: Disabling IPv4 forwarding ContainerID="3f3df8bece8cb989ff367ee9bf7d4b7d8a16bd5dd32a08a3c9d496825a27df27" Namespace="kube-system" Pod="coredns-76f75df574-q4bsh" WorkloadEndpoint="10.0.0.160-k8s-coredns--76f75df574--q4bsh-eth0" Jan 17 12:25:04.414266 containerd[1462]: 2025-01-17 
12:25:04.402 [INFO][4007] cni-plugin/k8s.go 414: Added Mac, interface name, and active container ID to endpoint ContainerID="3f3df8bece8cb989ff367ee9bf7d4b7d8a16bd5dd32a08a3c9d496825a27df27" Namespace="kube-system" Pod="coredns-76f75df574-q4bsh" WorkloadEndpoint="10.0.0.160-k8s-coredns--76f75df574--q4bsh-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"10.0.0.160-k8s-coredns--76f75df574--q4bsh-eth0", GenerateName:"coredns-76f75df574-", Namespace:"kube-system", SelfLink:"", UID:"01f58051-b831-4680-bcde-ee72a125416f", ResourceVersion:"1099", Generation:0, CreationTimestamp:time.Date(2025, time.January, 17, 12, 23, 44, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"k8s-app":"kube-dns", "pod-template-hash":"76f75df574", "projectcalico.org/namespace":"kube-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"coredns"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"10.0.0.160", ContainerID:"3f3df8bece8cb989ff367ee9bf7d4b7d8a16bd5dd32a08a3c9d496825a27df27", Pod:"coredns-76f75df574-q4bsh", Endpoint:"eth0", ServiceAccountName:"coredns", IPNetworks:[]string{"192.168.44.2/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.kube-system", "ksa.kube-system.coredns"}, InterfaceName:"calibb2bee070f4", MAC:"ee:0c:82:eb:76:12", Ports:[]v3.WorkloadEndpointPort{v3.WorkloadEndpointPort{Name:"dns", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"UDP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"dns-tcp", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"metrics", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x23c1, HostPort:0x0, HostIP:""}}, AllowSpoofedSourcePrefixes:[]string(nil)}} Jan 17 12:25:04.414266 containerd[1462]: 2025-01-17 12:25:04.409 [INFO][4007] cni-plugin/k8s.go 500: Wrote updated endpoint to datastore ContainerID="3f3df8bece8cb989ff367ee9bf7d4b7d8a16bd5dd32a08a3c9d496825a27df27" Namespace="kube-system" Pod="coredns-76f75df574-q4bsh" WorkloadEndpoint="10.0.0.160-k8s-coredns--76f75df574--q4bsh-eth0" Jan 17 12:25:04.420854 systemd[1]: Started cri-containerd-ff5741c06f0144fa618883b93d2c7f79c46d887c06a03018be9da38513b4908b.scope - libcontainer container ff5741c06f0144fa618883b93d2c7f79c46d887c06a03018be9da38513b4908b. Jan 17 12:25:04.432579 systemd-resolved[1334]: Failed to determine the local hostname and LLMNR/mDNS names, ignoring: No such device or address Jan 17 12:25:04.440213 containerd[1462]: time="2025-01-17T12:25:04.440084708Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Jan 17 12:25:04.440213 containerd[1462]: time="2025-01-17T12:25:04.440148308Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Jan 17 12:25:04.440213 containerd[1462]: time="2025-01-17T12:25:04.440160631Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." 
runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Jan 17 12:25:04.440364 containerd[1462]: time="2025-01-17T12:25:04.440254187Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Jan 17 12:25:04.459848 systemd[1]: Started cri-containerd-3f3df8bece8cb989ff367ee9bf7d4b7d8a16bd5dd32a08a3c9d496825a27df27.scope - libcontainer container 3f3df8bece8cb989ff367ee9bf7d4b7d8a16bd5dd32a08a3c9d496825a27df27. Jan 17 12:25:04.460882 containerd[1462]: time="2025-01-17T12:25:04.460841060Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:nginx-deployment-6d5f899847-869v5,Uid:cf4115fc-1224-431d-895d-b145d63715d3,Namespace:default,Attempt:1,} returns sandbox id \"ff5741c06f0144fa618883b93d2c7f79c46d887c06a03018be9da38513b4908b\"" Jan 17 12:25:04.463457 containerd[1462]: time="2025-01-17T12:25:04.462980557Z" level=info msg="PullImage \"ghcr.io/flatcar/nginx:latest\"" Jan 17 12:25:04.471962 systemd-resolved[1334]: Failed to determine the local hostname and LLMNR/mDNS names, ignoring: No such device or address Jan 17 12:25:04.493427 containerd[1462]: time="2025-01-17T12:25:04.493394211Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-76f75df574-q4bsh,Uid:01f58051-b831-4680-bcde-ee72a125416f,Namespace:kube-system,Attempt:1,} returns sandbox id \"3f3df8bece8cb989ff367ee9bf7d4b7d8a16bd5dd32a08a3c9d496825a27df27\"" Jan 17 12:25:04.494036 kubelet[1774]: E0117 12:25:04.494010 1774 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jan 17 12:25:05.194904 containerd[1462]: time="2025-01-17T12:25:05.194762289Z" level=info msg="StopPodSandbox for \"99d57e40516b29852a2d6a4523182e419c922871aee1301552de5016f9d08f20\"" Jan 17 12:25:05.195241 containerd[1462]: time="2025-01-17T12:25:05.195186536Z" level=info msg="StopPodSandbox for \"501de16cec8e1798d41d55d10c4e0c1ba21f146fe195d70d8c252862106d0d5b\"" Jan 17 12:25:05.195341 containerd[1462]: time="2025-01-17T12:25:05.195208267Z" level=info msg="StopPodSandbox for \"d0deefa24fa0792e646c4fbdea8c0d20b27684802cc23609c0ddf681ecfe89ef\"" Jan 17 12:25:05.282830 containerd[1462]: 2025-01-17 12:25:05.250 [INFO][4197] cni-plugin/k8s.go 608: Cleaning up netns ContainerID="99d57e40516b29852a2d6a4523182e419c922871aee1301552de5016f9d08f20" Jan 17 12:25:05.282830 containerd[1462]: 2025-01-17 12:25:05.251 [INFO][4197] cni-plugin/dataplane_linux.go 559: Deleting workload's device in netns. ContainerID="99d57e40516b29852a2d6a4523182e419c922871aee1301552de5016f9d08f20" iface="eth0" netns="/var/run/netns/cni-67ef328f-cfa6-585c-7f22-1c1e83861785" Jan 17 12:25:05.282830 containerd[1462]: 2025-01-17 12:25:05.251 [INFO][4197] cni-plugin/dataplane_linux.go 570: Entered netns, deleting veth. ContainerID="99d57e40516b29852a2d6a4523182e419c922871aee1301552de5016f9d08f20" iface="eth0" netns="/var/run/netns/cni-67ef328f-cfa6-585c-7f22-1c1e83861785" Jan 17 12:25:05.282830 containerd[1462]: 2025-01-17 12:25:05.252 [INFO][4197] cni-plugin/dataplane_linux.go 597: Workload's veth was already gone. Nothing to do. 
ContainerID="99d57e40516b29852a2d6a4523182e419c922871aee1301552de5016f9d08f20" iface="eth0" netns="/var/run/netns/cni-67ef328f-cfa6-585c-7f22-1c1e83861785" Jan 17 12:25:05.282830 containerd[1462]: 2025-01-17 12:25:05.252 [INFO][4197] cni-plugin/k8s.go 615: Releasing IP address(es) ContainerID="99d57e40516b29852a2d6a4523182e419c922871aee1301552de5016f9d08f20" Jan 17 12:25:05.282830 containerd[1462]: 2025-01-17 12:25:05.252 [INFO][4197] cni-plugin/utils.go 188: Calico CNI releasing IP address ContainerID="99d57e40516b29852a2d6a4523182e419c922871aee1301552de5016f9d08f20" Jan 17 12:25:05.282830 containerd[1462]: 2025-01-17 12:25:05.271 [INFO][4224] ipam/ipam_plugin.go 412: Releasing address using handleID ContainerID="99d57e40516b29852a2d6a4523182e419c922871aee1301552de5016f9d08f20" HandleID="k8s-pod-network.99d57e40516b29852a2d6a4523182e419c922871aee1301552de5016f9d08f20" Workload="10.0.0.160-k8s-csi--node--driver--b66m9-eth0" Jan 17 12:25:05.282830 containerd[1462]: 2025-01-17 12:25:05.271 [INFO][4224] ipam/ipam_plugin.go 353: About to acquire host-wide IPAM lock. Jan 17 12:25:05.282830 containerd[1462]: 2025-01-17 12:25:05.271 [INFO][4224] ipam/ipam_plugin.go 368: Acquired host-wide IPAM lock. Jan 17 12:25:05.282830 containerd[1462]: 2025-01-17 12:25:05.277 [WARNING][4224] ipam/ipam_plugin.go 429: Asked to release address but it doesn't exist. Ignoring ContainerID="99d57e40516b29852a2d6a4523182e419c922871aee1301552de5016f9d08f20" HandleID="k8s-pod-network.99d57e40516b29852a2d6a4523182e419c922871aee1301552de5016f9d08f20" Workload="10.0.0.160-k8s-csi--node--driver--b66m9-eth0" Jan 17 12:25:05.282830 containerd[1462]: 2025-01-17 12:25:05.277 [INFO][4224] ipam/ipam_plugin.go 440: Releasing address using workloadID ContainerID="99d57e40516b29852a2d6a4523182e419c922871aee1301552de5016f9d08f20" HandleID="k8s-pod-network.99d57e40516b29852a2d6a4523182e419c922871aee1301552de5016f9d08f20" Workload="10.0.0.160-k8s-csi--node--driver--b66m9-eth0" Jan 17 12:25:05.282830 containerd[1462]: 2025-01-17 12:25:05.279 [INFO][4224] ipam/ipam_plugin.go 374: Released host-wide IPAM lock. Jan 17 12:25:05.282830 containerd[1462]: 2025-01-17 12:25:05.281 [INFO][4197] cni-plugin/k8s.go 621: Teardown processing complete. ContainerID="99d57e40516b29852a2d6a4523182e419c922871aee1301552de5016f9d08f20" Jan 17 12:25:05.283580 containerd[1462]: time="2025-01-17T12:25:05.283003104Z" level=info msg="TearDown network for sandbox \"99d57e40516b29852a2d6a4523182e419c922871aee1301552de5016f9d08f20\" successfully" Jan 17 12:25:05.283580 containerd[1462]: time="2025-01-17T12:25:05.283032309Z" level=info msg="StopPodSandbox for \"99d57e40516b29852a2d6a4523182e419c922871aee1301552de5016f9d08f20\" returns successfully" Jan 17 12:25:05.285789 containerd[1462]: time="2025-01-17T12:25:05.285051559Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:csi-node-driver-b66m9,Uid:dda61e94-21b3-42b1-baef-aafab38d13ff,Namespace:calico-system,Attempt:1,}" Jan 17 12:25:05.285178 systemd[1]: run-netns-cni\x2d67ef328f\x2dcfa6\x2d585c\x2d7f22\x2d1c1e83861785.mount: Deactivated successfully. Jan 17 12:25:05.290425 containerd[1462]: 2025-01-17 12:25:05.246 [INFO][4199] cni-plugin/k8s.go 608: Cleaning up netns ContainerID="501de16cec8e1798d41d55d10c4e0c1ba21f146fe195d70d8c252862106d0d5b" Jan 17 12:25:05.290425 containerd[1462]: 2025-01-17 12:25:05.246 [INFO][4199] cni-plugin/dataplane_linux.go 559: Deleting workload's device in netns. 
ContainerID="501de16cec8e1798d41d55d10c4e0c1ba21f146fe195d70d8c252862106d0d5b" iface="eth0" netns="/var/run/netns/cni-8d35b2b1-125e-f6df-9ea3-c916784950d2" Jan 17 12:25:05.290425 containerd[1462]: 2025-01-17 12:25:05.247 [INFO][4199] cni-plugin/dataplane_linux.go 570: Entered netns, deleting veth. ContainerID="501de16cec8e1798d41d55d10c4e0c1ba21f146fe195d70d8c252862106d0d5b" iface="eth0" netns="/var/run/netns/cni-8d35b2b1-125e-f6df-9ea3-c916784950d2" Jan 17 12:25:05.290425 containerd[1462]: 2025-01-17 12:25:05.247 [INFO][4199] cni-plugin/dataplane_linux.go 597: Workload's veth was already gone. Nothing to do. ContainerID="501de16cec8e1798d41d55d10c4e0c1ba21f146fe195d70d8c252862106d0d5b" iface="eth0" netns="/var/run/netns/cni-8d35b2b1-125e-f6df-9ea3-c916784950d2" Jan 17 12:25:05.290425 containerd[1462]: 2025-01-17 12:25:05.247 [INFO][4199] cni-plugin/k8s.go 615: Releasing IP address(es) ContainerID="501de16cec8e1798d41d55d10c4e0c1ba21f146fe195d70d8c252862106d0d5b" Jan 17 12:25:05.290425 containerd[1462]: 2025-01-17 12:25:05.247 [INFO][4199] cni-plugin/utils.go 188: Calico CNI releasing IP address ContainerID="501de16cec8e1798d41d55d10c4e0c1ba21f146fe195d70d8c252862106d0d5b" Jan 17 12:25:05.290425 containerd[1462]: 2025-01-17 12:25:05.278 [INFO][4218] ipam/ipam_plugin.go 412: Releasing address using handleID ContainerID="501de16cec8e1798d41d55d10c4e0c1ba21f146fe195d70d8c252862106d0d5b" HandleID="k8s-pod-network.501de16cec8e1798d41d55d10c4e0c1ba21f146fe195d70d8c252862106d0d5b" Workload="10.0.0.160-k8s-calico--apiserver--68f79c84d4--6mrf5-eth0" Jan 17 12:25:05.290425 containerd[1462]: 2025-01-17 12:25:05.278 [INFO][4218] ipam/ipam_plugin.go 353: About to acquire host-wide IPAM lock. Jan 17 12:25:05.290425 containerd[1462]: 2025-01-17 12:25:05.279 [INFO][4218] ipam/ipam_plugin.go 368: Acquired host-wide IPAM lock. Jan 17 12:25:05.290425 containerd[1462]: 2025-01-17 12:25:05.285 [WARNING][4218] ipam/ipam_plugin.go 429: Asked to release address but it doesn't exist. Ignoring ContainerID="501de16cec8e1798d41d55d10c4e0c1ba21f146fe195d70d8c252862106d0d5b" HandleID="k8s-pod-network.501de16cec8e1798d41d55d10c4e0c1ba21f146fe195d70d8c252862106d0d5b" Workload="10.0.0.160-k8s-calico--apiserver--68f79c84d4--6mrf5-eth0" Jan 17 12:25:05.290425 containerd[1462]: 2025-01-17 12:25:05.285 [INFO][4218] ipam/ipam_plugin.go 440: Releasing address using workloadID ContainerID="501de16cec8e1798d41d55d10c4e0c1ba21f146fe195d70d8c252862106d0d5b" HandleID="k8s-pod-network.501de16cec8e1798d41d55d10c4e0c1ba21f146fe195d70d8c252862106d0d5b" Workload="10.0.0.160-k8s-calico--apiserver--68f79c84d4--6mrf5-eth0" Jan 17 12:25:05.290425 containerd[1462]: 2025-01-17 12:25:05.287 [INFO][4218] ipam/ipam_plugin.go 374: Released host-wide IPAM lock. Jan 17 12:25:05.290425 containerd[1462]: 2025-01-17 12:25:05.288 [INFO][4199] cni-plugin/k8s.go 621: Teardown processing complete. 
ContainerID="501de16cec8e1798d41d55d10c4e0c1ba21f146fe195d70d8c252862106d0d5b" Jan 17 12:25:05.290787 containerd[1462]: time="2025-01-17T12:25:05.290621016Z" level=info msg="TearDown network for sandbox \"501de16cec8e1798d41d55d10c4e0c1ba21f146fe195d70d8c252862106d0d5b\" successfully" Jan 17 12:25:05.290787 containerd[1462]: time="2025-01-17T12:25:05.290663346Z" level=info msg="StopPodSandbox for \"501de16cec8e1798d41d55d10c4e0c1ba21f146fe195d70d8c252862106d0d5b\" returns successfully" Jan 17 12:25:05.291504 containerd[1462]: time="2025-01-17T12:25:05.291463958Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-apiserver-68f79c84d4-6mrf5,Uid:3229aaed-5c2c-4c71-a248-e6141b8a2e7f,Namespace:calico-apiserver,Attempt:1,}" Jan 17 12:25:05.292400 systemd[1]: run-netns-cni\x2d8d35b2b1\x2d125e\x2df6df\x2d9ea3\x2dc916784950d2.mount: Deactivated successfully. Jan 17 12:25:05.299838 containerd[1462]: 2025-01-17 12:25:05.249 [INFO][4198] cni-plugin/k8s.go 608: Cleaning up netns ContainerID="d0deefa24fa0792e646c4fbdea8c0d20b27684802cc23609c0ddf681ecfe89ef" Jan 17 12:25:05.299838 containerd[1462]: 2025-01-17 12:25:05.250 [INFO][4198] cni-plugin/dataplane_linux.go 559: Deleting workload's device in netns. ContainerID="d0deefa24fa0792e646c4fbdea8c0d20b27684802cc23609c0ddf681ecfe89ef" iface="eth0" netns="/var/run/netns/cni-d123ecd3-431f-c030-2acf-783578305a7d" Jan 17 12:25:05.299838 containerd[1462]: 2025-01-17 12:25:05.250 [INFO][4198] cni-plugin/dataplane_linux.go 570: Entered netns, deleting veth. ContainerID="d0deefa24fa0792e646c4fbdea8c0d20b27684802cc23609c0ddf681ecfe89ef" iface="eth0" netns="/var/run/netns/cni-d123ecd3-431f-c030-2acf-783578305a7d" Jan 17 12:25:05.299838 containerd[1462]: 2025-01-17 12:25:05.250 [INFO][4198] cni-plugin/dataplane_linux.go 597: Workload's veth was already gone. Nothing to do. ContainerID="d0deefa24fa0792e646c4fbdea8c0d20b27684802cc23609c0ddf681ecfe89ef" iface="eth0" netns="/var/run/netns/cni-d123ecd3-431f-c030-2acf-783578305a7d" Jan 17 12:25:05.299838 containerd[1462]: 2025-01-17 12:25:05.250 [INFO][4198] cni-plugin/k8s.go 615: Releasing IP address(es) ContainerID="d0deefa24fa0792e646c4fbdea8c0d20b27684802cc23609c0ddf681ecfe89ef" Jan 17 12:25:05.299838 containerd[1462]: 2025-01-17 12:25:05.250 [INFO][4198] cni-plugin/utils.go 188: Calico CNI releasing IP address ContainerID="d0deefa24fa0792e646c4fbdea8c0d20b27684802cc23609c0ddf681ecfe89ef" Jan 17 12:25:05.299838 containerd[1462]: 2025-01-17 12:25:05.287 [INFO][4223] ipam/ipam_plugin.go 412: Releasing address using handleID ContainerID="d0deefa24fa0792e646c4fbdea8c0d20b27684802cc23609c0ddf681ecfe89ef" HandleID="k8s-pod-network.d0deefa24fa0792e646c4fbdea8c0d20b27684802cc23609c0ddf681ecfe89ef" Workload="10.0.0.160-k8s-calico--kube--controllers--9fc77b6d--kp6sc-eth0" Jan 17 12:25:05.299838 containerd[1462]: 2025-01-17 12:25:05.287 [INFO][4223] ipam/ipam_plugin.go 353: About to acquire host-wide IPAM lock. Jan 17 12:25:05.299838 containerd[1462]: 2025-01-17 12:25:05.287 [INFO][4223] ipam/ipam_plugin.go 368: Acquired host-wide IPAM lock. Jan 17 12:25:05.299838 containerd[1462]: 2025-01-17 12:25:05.293 [WARNING][4223] ipam/ipam_plugin.go 429: Asked to release address but it doesn't exist. 
Ignoring ContainerID="d0deefa24fa0792e646c4fbdea8c0d20b27684802cc23609c0ddf681ecfe89ef" HandleID="k8s-pod-network.d0deefa24fa0792e646c4fbdea8c0d20b27684802cc23609c0ddf681ecfe89ef" Workload="10.0.0.160-k8s-calico--kube--controllers--9fc77b6d--kp6sc-eth0" Jan 17 12:25:05.299838 containerd[1462]: 2025-01-17 12:25:05.293 [INFO][4223] ipam/ipam_plugin.go 440: Releasing address using workloadID ContainerID="d0deefa24fa0792e646c4fbdea8c0d20b27684802cc23609c0ddf681ecfe89ef" HandleID="k8s-pod-network.d0deefa24fa0792e646c4fbdea8c0d20b27684802cc23609c0ddf681ecfe89ef" Workload="10.0.0.160-k8s-calico--kube--controllers--9fc77b6d--kp6sc-eth0" Jan 17 12:25:05.299838 containerd[1462]: 2025-01-17 12:25:05.295 [INFO][4223] ipam/ipam_plugin.go 374: Released host-wide IPAM lock. Jan 17 12:25:05.299838 containerd[1462]: 2025-01-17 12:25:05.297 [INFO][4198] cni-plugin/k8s.go 621: Teardown processing complete. ContainerID="d0deefa24fa0792e646c4fbdea8c0d20b27684802cc23609c0ddf681ecfe89ef" Jan 17 12:25:05.300376 containerd[1462]: time="2025-01-17T12:25:05.299994985Z" level=info msg="TearDown network for sandbox \"d0deefa24fa0792e646c4fbdea8c0d20b27684802cc23609c0ddf681ecfe89ef\" successfully" Jan 17 12:25:05.300376 containerd[1462]: time="2025-01-17T12:25:05.300019140Z" level=info msg="StopPodSandbox for \"d0deefa24fa0792e646c4fbdea8c0d20b27684802cc23609c0ddf681ecfe89ef\" returns successfully" Jan 17 12:25:05.300744 containerd[1462]: time="2025-01-17T12:25:05.300712151Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-kube-controllers-9fc77b6d-kp6sc,Uid:03c15592-0c9f-4dd8-ac51-f2e8e4fe7f05,Namespace:calico-system,Attempt:1,}" Jan 17 12:25:05.301751 systemd[1]: run-netns-cni\x2dd123ecd3\x2d431f\x2dc030\x2d2acf\x2d783578305a7d.mount: Deactivated successfully. 
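[Editor's note] The three sandbox teardowns above share a pattern: release by handle ID, log a WARNING when nothing is recorded there ("Asked to release address but it doesn't exist. Ignoring"), then fall back to the workload ID, so a repeated StopPodSandbox stays idempotent. A hedged Go sketch of that fallback; store, release and the truncated IDs are illustrative, not Calico's code.

package main

import (
	"errors"
	"fmt"
)

var errNotFound = errors.New("no allocation recorded")

// store maps a release key (handle ID or workload ID) to allocated IPs.
type store map[string][]string

func (s store) releaseByKey(key string) error {
	if _, ok := s[key]; !ok {
		return errNotFound
	}
	delete(s, key)
	return nil
}

// release mirrors the teardown above: try the handle ID, warn and fall back
// to the workload ID when the handle is unknown, and treat "already gone"
// as success so a repeated DEL cannot fail.
func release(s store, handleID, workloadID string) {
	if err := s.releaseByKey(handleID); errors.Is(err, errNotFound) {
		fmt.Println("WARNING: Asked to release address but it doesn't exist. Ignoring")
		_ = s.releaseByKey(workloadID) // best effort; not-found here is fine too
	}
}

func main() {
	// IDs truncated from the log for brevity.
	release(store{}, "k8s-pod-network.99d57e40", "10.0.0.160-k8s-csi--node--driver--b66m9-eth0")
}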
Jan 17 12:25:05.395344 kubelet[1774]: E0117 12:25:05.395076 1774 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Jan 17 12:25:05.436572 systemd-networkd[1390]: calib4ac08fd5d4: Link UP Jan 17 12:25:05.437438 systemd-networkd[1390]: calib4ac08fd5d4: Gained carrier Jan 17 12:25:05.447129 containerd[1462]: 2025-01-17 12:25:05.372 [INFO][4244] cni-plugin/plugin.go 325: Calico CNI found existing endpoint: &{{WorkloadEndpoint projectcalico.org/v3} {10.0.0.160-k8s-csi--node--driver--b66m9-eth0 csi-node-driver- calico-system dda61e94-21b3-42b1-baef-aafab38d13ff 1117 0 2025-01-17 12:23:58 +0000 UTC map[app.kubernetes.io/name:csi-node-driver controller-revision-hash:55b695c467 k8s-app:csi-node-driver name:csi-node-driver pod-template-generation:1 projectcalico.org/namespace:calico-system projectcalico.org/orchestrator:k8s projectcalico.org/serviceaccount:csi-node-driver] map[] [] [] []} {k8s 10.0.0.160 csi-node-driver-b66m9 eth0 csi-node-driver [] [] [kns.calico-system ksa.calico-system.csi-node-driver] calib4ac08fd5d4 [] []}} ContainerID="405a158c85be99c863d2dfdde9542760680bf737e6a4e11ad97fed9c1857a9b9" Namespace="calico-system" Pod="csi-node-driver-b66m9" WorkloadEndpoint="10.0.0.160-k8s-csi--node--driver--b66m9-" Jan 17 12:25:05.447129 containerd[1462]: 2025-01-17 12:25:05.372 [INFO][4244] cni-plugin/k8s.go 77: Extracted identifiers for CmdAddK8s ContainerID="405a158c85be99c863d2dfdde9542760680bf737e6a4e11ad97fed9c1857a9b9" Namespace="calico-system" Pod="csi-node-driver-b66m9" WorkloadEndpoint="10.0.0.160-k8s-csi--node--driver--b66m9-eth0" Jan 17 12:25:05.447129 containerd[1462]: 2025-01-17 12:25:05.400 [INFO][4289] ipam/ipam_plugin.go 225: Calico CNI IPAM request count IPv4=1 IPv6=0 ContainerID="405a158c85be99c863d2dfdde9542760680bf737e6a4e11ad97fed9c1857a9b9" HandleID="k8s-pod-network.405a158c85be99c863d2dfdde9542760680bf737e6a4e11ad97fed9c1857a9b9" Workload="10.0.0.160-k8s-csi--node--driver--b66m9-eth0" Jan 17 12:25:05.447129 containerd[1462]: 2025-01-17 12:25:05.410 [INFO][4289] ipam/ipam_plugin.go 265: Auto assigning IP ContainerID="405a158c85be99c863d2dfdde9542760680bf737e6a4e11ad97fed9c1857a9b9" HandleID="k8s-pod-network.405a158c85be99c863d2dfdde9542760680bf737e6a4e11ad97fed9c1857a9b9" Workload="10.0.0.160-k8s-csi--node--driver--b66m9-eth0" assignArgs=ipam.AutoAssignArgs{Num4:1, Num6:0, HandleID:(*string)(0xc0002dd090), Attrs:map[string]string{"namespace":"calico-system", "node":"10.0.0.160", "pod":"csi-node-driver-b66m9", "timestamp":"2025-01-17 12:25:05.400469988 +0000 UTC"}, Hostname:"10.0.0.160", IPv4Pools:[]net.IPNet{}, IPv6Pools:[]net.IPNet{}, MaxBlocksPerHost:0, HostReservedAttrIPv4s:(*ipam.HostReservedAttr)(nil), HostReservedAttrIPv6s:(*ipam.HostReservedAttr)(nil), IntendedUse:"Workload"} Jan 17 12:25:05.447129 containerd[1462]: 2025-01-17 12:25:05.410 [INFO][4289] ipam/ipam_plugin.go 353: About to acquire host-wide IPAM lock. Jan 17 12:25:05.447129 containerd[1462]: 2025-01-17 12:25:05.410 [INFO][4289] ipam/ipam_plugin.go 368: Acquired host-wide IPAM lock. 
Jan 17 12:25:05.447129 containerd[1462]: 2025-01-17 12:25:05.410 [INFO][4289] ipam/ipam.go 107: Auto-assign 1 ipv4, 0 ipv6 addrs for host '10.0.0.160' Jan 17 12:25:05.447129 containerd[1462]: 2025-01-17 12:25:05.411 [INFO][4289] ipam/ipam.go 660: Looking up existing affinities for host handle="k8s-pod-network.405a158c85be99c863d2dfdde9542760680bf737e6a4e11ad97fed9c1857a9b9" host="10.0.0.160" Jan 17 12:25:05.447129 containerd[1462]: 2025-01-17 12:25:05.414 [INFO][4289] ipam/ipam.go 372: Looking up existing affinities for host host="10.0.0.160" Jan 17 12:25:05.447129 containerd[1462]: 2025-01-17 12:25:05.418 [INFO][4289] ipam/ipam.go 489: Trying affinity for 192.168.44.0/26 host="10.0.0.160" Jan 17 12:25:05.447129 containerd[1462]: 2025-01-17 12:25:05.420 [INFO][4289] ipam/ipam.go 155: Attempting to load block cidr=192.168.44.0/26 host="10.0.0.160" Jan 17 12:25:05.447129 containerd[1462]: 2025-01-17 12:25:05.421 [INFO][4289] ipam/ipam.go 232: Affinity is confirmed and block has been loaded cidr=192.168.44.0/26 host="10.0.0.160" Jan 17 12:25:05.447129 containerd[1462]: 2025-01-17 12:25:05.421 [INFO][4289] ipam/ipam.go 1180: Attempting to assign 1 addresses from block block=192.168.44.0/26 handle="k8s-pod-network.405a158c85be99c863d2dfdde9542760680bf737e6a4e11ad97fed9c1857a9b9" host="10.0.0.160" Jan 17 12:25:05.447129 containerd[1462]: 2025-01-17 12:25:05.423 [INFO][4289] ipam/ipam.go 1685: Creating new handle: k8s-pod-network.405a158c85be99c863d2dfdde9542760680bf737e6a4e11ad97fed9c1857a9b9 Jan 17 12:25:05.447129 containerd[1462]: 2025-01-17 12:25:05.426 [INFO][4289] ipam/ipam.go 1203: Writing block in order to claim IPs block=192.168.44.0/26 handle="k8s-pod-network.405a158c85be99c863d2dfdde9542760680bf737e6a4e11ad97fed9c1857a9b9" host="10.0.0.160" Jan 17 12:25:05.447129 containerd[1462]: 2025-01-17 12:25:05.430 [INFO][4289] ipam/ipam.go 1216: Successfully claimed IPs: [192.168.44.3/26] block=192.168.44.0/26 handle="k8s-pod-network.405a158c85be99c863d2dfdde9542760680bf737e6a4e11ad97fed9c1857a9b9" host="10.0.0.160" Jan 17 12:25:05.447129 containerd[1462]: 2025-01-17 12:25:05.430 [INFO][4289] ipam/ipam.go 847: Auto-assigned 1 out of 1 IPv4s: [192.168.44.3/26] handle="k8s-pod-network.405a158c85be99c863d2dfdde9542760680bf737e6a4e11ad97fed9c1857a9b9" host="10.0.0.160" Jan 17 12:25:05.447129 containerd[1462]: 2025-01-17 12:25:05.430 [INFO][4289] ipam/ipam_plugin.go 374: Released host-wide IPAM lock. 
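[Editor's note] The assignArgs dump above spells out the request shape: one IPv4 and no IPv6, a handle derived from the container ID, namespace/node/pod/timestamp attributes, and IntendedUse "Workload". The struct below re-declares those fields locally for illustration; it mirrors the printed ipam.AutoAssignArgs rather than importing Calico's actual package.

package main

import (
	"fmt"
	"time"
)

// autoAssignArgs mirrors the fields containerd printed for
// ipam.AutoAssignArgs; a local re-declaration, not libcalico-go.
type autoAssignArgs struct {
	Num4, Num6  int
	HandleID    *string
	Attrs       map[string]string
	Hostname    string
	IntendedUse string
}

func requestFor(node, namespace, pod, handle string) autoAssignArgs {
	return autoAssignArgs{
		Num4:     1, // "Auto assigning IP ... Num4:1, Num6:0"
		HandleID: &handle,
		Attrs: map[string]string{
			"namespace": namespace,
			"node":      node,
			"pod":       pod,
			"timestamp": time.Now().UTC().String(),
		},
		Hostname:    node,
		IntendedUse: "Workload",
	}
}

func main() {
	args := requestFor("10.0.0.160", "calico-system", "csi-node-driver-b66m9",
		"k8s-pod-network.405a158c85be99c863d2dfdde9542760680bf737e6a4e11ad97fed9c1857a9b9")
	fmt.Printf("%+v\n", args)
}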
Jan 17 12:25:05.447129 containerd[1462]: 2025-01-17 12:25:05.430 [INFO][4289] ipam/ipam_plugin.go 283: Calico CNI IPAM assigned addresses IPv4=[192.168.44.3/26] IPv6=[] ContainerID="405a158c85be99c863d2dfdde9542760680bf737e6a4e11ad97fed9c1857a9b9" HandleID="k8s-pod-network.405a158c85be99c863d2dfdde9542760680bf737e6a4e11ad97fed9c1857a9b9" Workload="10.0.0.160-k8s-csi--node--driver--b66m9-eth0" Jan 17 12:25:05.447918 containerd[1462]: 2025-01-17 12:25:05.432 [INFO][4244] cni-plugin/k8s.go 386: Populated endpoint ContainerID="405a158c85be99c863d2dfdde9542760680bf737e6a4e11ad97fed9c1857a9b9" Namespace="calico-system" Pod="csi-node-driver-b66m9" WorkloadEndpoint="10.0.0.160-k8s-csi--node--driver--b66m9-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"10.0.0.160-k8s-csi--node--driver--b66m9-eth0", GenerateName:"csi-node-driver-", Namespace:"calico-system", SelfLink:"", UID:"dda61e94-21b3-42b1-baef-aafab38d13ff", ResourceVersion:"1117", Generation:0, CreationTimestamp:time.Date(2025, time.January, 17, 12, 23, 58, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"app.kubernetes.io/name":"csi-node-driver", "controller-revision-hash":"55b695c467", "k8s-app":"csi-node-driver", "name":"csi-node-driver", "pod-template-generation":"1", "projectcalico.org/namespace":"calico-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"csi-node-driver"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"10.0.0.160", ContainerID:"", Pod:"csi-node-driver-b66m9", Endpoint:"eth0", ServiceAccountName:"csi-node-driver", IPNetworks:[]string{"192.168.44.3/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-system", "ksa.calico-system.csi-node-driver"}, InterfaceName:"calib4ac08fd5d4", MAC:"", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil)}} Jan 17 12:25:05.447918 containerd[1462]: 2025-01-17 12:25:05.433 [INFO][4244] cni-plugin/k8s.go 387: Calico CNI using IPs: [192.168.44.3/32] ContainerID="405a158c85be99c863d2dfdde9542760680bf737e6a4e11ad97fed9c1857a9b9" Namespace="calico-system" Pod="csi-node-driver-b66m9" WorkloadEndpoint="10.0.0.160-k8s-csi--node--driver--b66m9-eth0" Jan 17 12:25:05.447918 containerd[1462]: 2025-01-17 12:25:05.433 [INFO][4244] cni-plugin/dataplane_linux.go 69: Setting the host side veth name to calib4ac08fd5d4 ContainerID="405a158c85be99c863d2dfdde9542760680bf737e6a4e11ad97fed9c1857a9b9" Namespace="calico-system" Pod="csi-node-driver-b66m9" WorkloadEndpoint="10.0.0.160-k8s-csi--node--driver--b66m9-eth0" Jan 17 12:25:05.447918 containerd[1462]: 2025-01-17 12:25:05.436 [INFO][4244] cni-plugin/dataplane_linux.go 508: Disabling IPv4 forwarding ContainerID="405a158c85be99c863d2dfdde9542760680bf737e6a4e11ad97fed9c1857a9b9" Namespace="calico-system" Pod="csi-node-driver-b66m9" WorkloadEndpoint="10.0.0.160-k8s-csi--node--driver--b66m9-eth0" Jan 17 12:25:05.447918 containerd[1462]: 2025-01-17 12:25:05.437 [INFO][4244] cni-plugin/k8s.go 414: Added Mac, interface name, and active container ID to endpoint ContainerID="405a158c85be99c863d2dfdde9542760680bf737e6a4e11ad97fed9c1857a9b9" Namespace="calico-system" Pod="csi-node-driver-b66m9" 
WorkloadEndpoint="10.0.0.160-k8s-csi--node--driver--b66m9-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"10.0.0.160-k8s-csi--node--driver--b66m9-eth0", GenerateName:"csi-node-driver-", Namespace:"calico-system", SelfLink:"", UID:"dda61e94-21b3-42b1-baef-aafab38d13ff", ResourceVersion:"1117", Generation:0, CreationTimestamp:time.Date(2025, time.January, 17, 12, 23, 58, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"app.kubernetes.io/name":"csi-node-driver", "controller-revision-hash":"55b695c467", "k8s-app":"csi-node-driver", "name":"csi-node-driver", "pod-template-generation":"1", "projectcalico.org/namespace":"calico-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"csi-node-driver"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"10.0.0.160", ContainerID:"405a158c85be99c863d2dfdde9542760680bf737e6a4e11ad97fed9c1857a9b9", Pod:"csi-node-driver-b66m9", Endpoint:"eth0", ServiceAccountName:"csi-node-driver", IPNetworks:[]string{"192.168.44.3/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-system", "ksa.calico-system.csi-node-driver"}, InterfaceName:"calib4ac08fd5d4", MAC:"da:cf:7c:3c:b9:78", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil)}} Jan 17 12:25:05.447918 containerd[1462]: 2025-01-17 12:25:05.444 [INFO][4244] cni-plugin/k8s.go 500: Wrote updated endpoint to datastore ContainerID="405a158c85be99c863d2dfdde9542760680bf737e6a4e11ad97fed9c1857a9b9" Namespace="calico-system" Pod="csi-node-driver-b66m9" WorkloadEndpoint="10.0.0.160-k8s-csi--node--driver--b66m9-eth0" Jan 17 12:25:05.465122 systemd-networkd[1390]: cali35c0d98c471: Link UP Jan 17 12:25:05.465803 systemd-networkd[1390]: cali35c0d98c471: Gained carrier Jan 17 12:25:05.475762 containerd[1462]: time="2025-01-17T12:25:05.474938126Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Jan 17 12:25:05.475762 containerd[1462]: time="2025-01-17T12:25:05.474991436Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Jan 17 12:25:05.475762 containerd[1462]: time="2025-01-17T12:25:05.475006133Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Jan 17 12:25:05.475762 containerd[1462]: time="2025-01-17T12:25:05.475103015Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." 
runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Jan 17 12:25:05.476284 containerd[1462]: 2025-01-17 12:25:05.371 [INFO][4258] cni-plugin/plugin.go 325: Calico CNI found existing endpoint: &{{WorkloadEndpoint projectcalico.org/v3} {10.0.0.160-k8s-calico--apiserver--68f79c84d4--6mrf5-eth0 calico-apiserver-68f79c84d4- calico-apiserver 3229aaed-5c2c-4c71-a248-e6141b8a2e7f 1115 0 2025-01-17 12:23:56 +0000 UTC map[apiserver:true app.kubernetes.io/name:calico-apiserver k8s-app:calico-apiserver pod-template-hash:68f79c84d4 projectcalico.org/namespace:calico-apiserver projectcalico.org/orchestrator:k8s projectcalico.org/serviceaccount:calico-apiserver] map[] [] [] []} {k8s 10.0.0.160 calico-apiserver-68f79c84d4-6mrf5 eth0 calico-apiserver [] [] [kns.calico-apiserver ksa.calico-apiserver.calico-apiserver] cali35c0d98c471 [] []}} ContainerID="705b9e1c6b5bbf8ef7aa026983d5709d6aab3efa37a4109de5e3084938d7c205" Namespace="calico-apiserver" Pod="calico-apiserver-68f79c84d4-6mrf5" WorkloadEndpoint="10.0.0.160-k8s-calico--apiserver--68f79c84d4--6mrf5-" Jan 17 12:25:05.476284 containerd[1462]: 2025-01-17 12:25:05.371 [INFO][4258] cni-plugin/k8s.go 77: Extracted identifiers for CmdAddK8s ContainerID="705b9e1c6b5bbf8ef7aa026983d5709d6aab3efa37a4109de5e3084938d7c205" Namespace="calico-apiserver" Pod="calico-apiserver-68f79c84d4-6mrf5" WorkloadEndpoint="10.0.0.160-k8s-calico--apiserver--68f79c84d4--6mrf5-eth0" Jan 17 12:25:05.476284 containerd[1462]: 2025-01-17 12:25:05.403 [INFO][4288] ipam/ipam_plugin.go 225: Calico CNI IPAM request count IPv4=1 IPv6=0 ContainerID="705b9e1c6b5bbf8ef7aa026983d5709d6aab3efa37a4109de5e3084938d7c205" HandleID="k8s-pod-network.705b9e1c6b5bbf8ef7aa026983d5709d6aab3efa37a4109de5e3084938d7c205" Workload="10.0.0.160-k8s-calico--apiserver--68f79c84d4--6mrf5-eth0" Jan 17 12:25:05.476284 containerd[1462]: 2025-01-17 12:25:05.411 [INFO][4288] ipam/ipam_plugin.go 265: Auto assigning IP ContainerID="705b9e1c6b5bbf8ef7aa026983d5709d6aab3efa37a4109de5e3084938d7c205" HandleID="k8s-pod-network.705b9e1c6b5bbf8ef7aa026983d5709d6aab3efa37a4109de5e3084938d7c205" Workload="10.0.0.160-k8s-calico--apiserver--68f79c84d4--6mrf5-eth0" assignArgs=ipam.AutoAssignArgs{Num4:1, Num6:0, HandleID:(*string)(0xc000051400), Attrs:map[string]string{"namespace":"calico-apiserver", "node":"10.0.0.160", "pod":"calico-apiserver-68f79c84d4-6mrf5", "timestamp":"2025-01-17 12:25:05.403378368 +0000 UTC"}, Hostname:"10.0.0.160", IPv4Pools:[]net.IPNet{}, IPv6Pools:[]net.IPNet{}, MaxBlocksPerHost:0, HostReservedAttrIPv4s:(*ipam.HostReservedAttr)(nil), HostReservedAttrIPv6s:(*ipam.HostReservedAttr)(nil), IntendedUse:"Workload"} Jan 17 12:25:05.476284 containerd[1462]: 2025-01-17 12:25:05.411 [INFO][4288] ipam/ipam_plugin.go 353: About to acquire host-wide IPAM lock. Jan 17 12:25:05.476284 containerd[1462]: 2025-01-17 12:25:05.430 [INFO][4288] ipam/ipam_plugin.go 368: Acquired host-wide IPAM lock. 
Jan 17 12:25:05.476284 containerd[1462]: 2025-01-17 12:25:05.430 [INFO][4288] ipam/ipam.go 107: Auto-assign 1 ipv4, 0 ipv6 addrs for host '10.0.0.160' Jan 17 12:25:05.476284 containerd[1462]: 2025-01-17 12:25:05.432 [INFO][4288] ipam/ipam.go 660: Looking up existing affinities for host handle="k8s-pod-network.705b9e1c6b5bbf8ef7aa026983d5709d6aab3efa37a4109de5e3084938d7c205" host="10.0.0.160" Jan 17 12:25:05.476284 containerd[1462]: 2025-01-17 12:25:05.437 [INFO][4288] ipam/ipam.go 372: Looking up existing affinities for host host="10.0.0.160" Jan 17 12:25:05.476284 containerd[1462]: 2025-01-17 12:25:05.440 [INFO][4288] ipam/ipam.go 489: Trying affinity for 192.168.44.0/26 host="10.0.0.160" Jan 17 12:25:05.476284 containerd[1462]: 2025-01-17 12:25:05.442 [INFO][4288] ipam/ipam.go 155: Attempting to load block cidr=192.168.44.0/26 host="10.0.0.160" Jan 17 12:25:05.476284 containerd[1462]: 2025-01-17 12:25:05.447 [INFO][4288] ipam/ipam.go 232: Affinity is confirmed and block has been loaded cidr=192.168.44.0/26 host="10.0.0.160" Jan 17 12:25:05.476284 containerd[1462]: 2025-01-17 12:25:05.447 [INFO][4288] ipam/ipam.go 1180: Attempting to assign 1 addresses from block block=192.168.44.0/26 handle="k8s-pod-network.705b9e1c6b5bbf8ef7aa026983d5709d6aab3efa37a4109de5e3084938d7c205" host="10.0.0.160" Jan 17 12:25:05.476284 containerd[1462]: 2025-01-17 12:25:05.450 [INFO][4288] ipam/ipam.go 1685: Creating new handle: k8s-pod-network.705b9e1c6b5bbf8ef7aa026983d5709d6aab3efa37a4109de5e3084938d7c205 Jan 17 12:25:05.476284 containerd[1462]: 2025-01-17 12:25:05.453 [INFO][4288] ipam/ipam.go 1203: Writing block in order to claim IPs block=192.168.44.0/26 handle="k8s-pod-network.705b9e1c6b5bbf8ef7aa026983d5709d6aab3efa37a4109de5e3084938d7c205" host="10.0.0.160" Jan 17 12:25:05.476284 containerd[1462]: 2025-01-17 12:25:05.459 [INFO][4288] ipam/ipam.go 1216: Successfully claimed IPs: [192.168.44.4/26] block=192.168.44.0/26 handle="k8s-pod-network.705b9e1c6b5bbf8ef7aa026983d5709d6aab3efa37a4109de5e3084938d7c205" host="10.0.0.160" Jan 17 12:25:05.476284 containerd[1462]: 2025-01-17 12:25:05.459 [INFO][4288] ipam/ipam.go 847: Auto-assigned 1 out of 1 IPv4s: [192.168.44.4/26] handle="k8s-pod-network.705b9e1c6b5bbf8ef7aa026983d5709d6aab3efa37a4109de5e3084938d7c205" host="10.0.0.160" Jan 17 12:25:05.476284 containerd[1462]: 2025-01-17 12:25:05.459 [INFO][4288] ipam/ipam_plugin.go 374: Released host-wide IPAM lock. 
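[Editor's note] All four addresses so far (.1 nginx, .2 coredns, .3 csi-node-driver, .4 calico-apiserver) come from the single affine block 192.168.44.0/26, and a fifth follows below. A /26 spans 2^(32-26) = 64 addresses, so this host can place up to 64 pod IPs in the block before claiming another; allocation in this log simply started at .1. A short Go check of that arithmetic:

package main

import (
	"fmt"
	"net"
)

func main() {
	_, cidr, _ := net.ParseCIDR("192.168.44.0/26")
	ones, bits := cidr.Mask.Size()
	fmt.Printf("block %s holds %d addresses\n", cidr, 1<<(bits-ones)) // 64
}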
Jan 17 12:25:05.476284 containerd[1462]: 2025-01-17 12:25:05.459 [INFO][4288] ipam/ipam_plugin.go 283: Calico CNI IPAM assigned addresses IPv4=[192.168.44.4/26] IPv6=[] ContainerID="705b9e1c6b5bbf8ef7aa026983d5709d6aab3efa37a4109de5e3084938d7c205" HandleID="k8s-pod-network.705b9e1c6b5bbf8ef7aa026983d5709d6aab3efa37a4109de5e3084938d7c205" Workload="10.0.0.160-k8s-calico--apiserver--68f79c84d4--6mrf5-eth0" Jan 17 12:25:05.476776 containerd[1462]: 2025-01-17 12:25:05.462 [INFO][4258] cni-plugin/k8s.go 386: Populated endpoint ContainerID="705b9e1c6b5bbf8ef7aa026983d5709d6aab3efa37a4109de5e3084938d7c205" Namespace="calico-apiserver" Pod="calico-apiserver-68f79c84d4-6mrf5" WorkloadEndpoint="10.0.0.160-k8s-calico--apiserver--68f79c84d4--6mrf5-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"10.0.0.160-k8s-calico--apiserver--68f79c84d4--6mrf5-eth0", GenerateName:"calico-apiserver-68f79c84d4-", Namespace:"calico-apiserver", SelfLink:"", UID:"3229aaed-5c2c-4c71-a248-e6141b8a2e7f", ResourceVersion:"1115", Generation:0, CreationTimestamp:time.Date(2025, time.January, 17, 12, 23, 56, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"apiserver":"true", "app.kubernetes.io/name":"calico-apiserver", "k8s-app":"calico-apiserver", "pod-template-hash":"68f79c84d4", "projectcalico.org/namespace":"calico-apiserver", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"calico-apiserver"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"10.0.0.160", ContainerID:"", Pod:"calico-apiserver-68f79c84d4-6mrf5", Endpoint:"eth0", ServiceAccountName:"calico-apiserver", IPNetworks:[]string{"192.168.44.4/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-apiserver", "ksa.calico-apiserver.calico-apiserver"}, InterfaceName:"cali35c0d98c471", MAC:"", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil)}} Jan 17 12:25:05.476776 containerd[1462]: 2025-01-17 12:25:05.462 [INFO][4258] cni-plugin/k8s.go 387: Calico CNI using IPs: [192.168.44.4/32] ContainerID="705b9e1c6b5bbf8ef7aa026983d5709d6aab3efa37a4109de5e3084938d7c205" Namespace="calico-apiserver" Pod="calico-apiserver-68f79c84d4-6mrf5" WorkloadEndpoint="10.0.0.160-k8s-calico--apiserver--68f79c84d4--6mrf5-eth0" Jan 17 12:25:05.476776 containerd[1462]: 2025-01-17 12:25:05.462 [INFO][4258] cni-plugin/dataplane_linux.go 69: Setting the host side veth name to cali35c0d98c471 ContainerID="705b9e1c6b5bbf8ef7aa026983d5709d6aab3efa37a4109de5e3084938d7c205" Namespace="calico-apiserver" Pod="calico-apiserver-68f79c84d4-6mrf5" WorkloadEndpoint="10.0.0.160-k8s-calico--apiserver--68f79c84d4--6mrf5-eth0" Jan 17 12:25:05.476776 containerd[1462]: 2025-01-17 12:25:05.465 [INFO][4258] cni-plugin/dataplane_linux.go 508: Disabling IPv4 forwarding ContainerID="705b9e1c6b5bbf8ef7aa026983d5709d6aab3efa37a4109de5e3084938d7c205" Namespace="calico-apiserver" Pod="calico-apiserver-68f79c84d4-6mrf5" WorkloadEndpoint="10.0.0.160-k8s-calico--apiserver--68f79c84d4--6mrf5-eth0" Jan 17 12:25:05.476776 containerd[1462]: 2025-01-17 12:25:05.467 [INFO][4258] cni-plugin/k8s.go 414: Added Mac, interface name, and active container ID to endpoint 
ContainerID="705b9e1c6b5bbf8ef7aa026983d5709d6aab3efa37a4109de5e3084938d7c205" Namespace="calico-apiserver" Pod="calico-apiserver-68f79c84d4-6mrf5" WorkloadEndpoint="10.0.0.160-k8s-calico--apiserver--68f79c84d4--6mrf5-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"10.0.0.160-k8s-calico--apiserver--68f79c84d4--6mrf5-eth0", GenerateName:"calico-apiserver-68f79c84d4-", Namespace:"calico-apiserver", SelfLink:"", UID:"3229aaed-5c2c-4c71-a248-e6141b8a2e7f", ResourceVersion:"1115", Generation:0, CreationTimestamp:time.Date(2025, time.January, 17, 12, 23, 56, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"apiserver":"true", "app.kubernetes.io/name":"calico-apiserver", "k8s-app":"calico-apiserver", "pod-template-hash":"68f79c84d4", "projectcalico.org/namespace":"calico-apiserver", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"calico-apiserver"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"10.0.0.160", ContainerID:"705b9e1c6b5bbf8ef7aa026983d5709d6aab3efa37a4109de5e3084938d7c205", Pod:"calico-apiserver-68f79c84d4-6mrf5", Endpoint:"eth0", ServiceAccountName:"calico-apiserver", IPNetworks:[]string{"192.168.44.4/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-apiserver", "ksa.calico-apiserver.calico-apiserver"}, InterfaceName:"cali35c0d98c471", MAC:"aa:9d:a2:94:b1:c3", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil)}} Jan 17 12:25:05.476776 containerd[1462]: 2025-01-17 12:25:05.473 [INFO][4258] cni-plugin/k8s.go 500: Wrote updated endpoint to datastore ContainerID="705b9e1c6b5bbf8ef7aa026983d5709d6aab3efa37a4109de5e3084938d7c205" Namespace="calico-apiserver" Pod="calico-apiserver-68f79c84d4-6mrf5" WorkloadEndpoint="10.0.0.160-k8s-calico--apiserver--68f79c84d4--6mrf5-eth0" Jan 17 12:25:05.494055 systemd[1]: Started cri-containerd-405a158c85be99c863d2dfdde9542760680bf737e6a4e11ad97fed9c1857a9b9.scope - libcontainer container 405a158c85be99c863d2dfdde9542760680bf737e6a4e11ad97fed9c1857a9b9. Jan 17 12:25:05.495199 systemd-networkd[1390]: cali3cccdc13232: Link UP Jan 17 12:25:05.496349 systemd-networkd[1390]: cali3cccdc13232: Gained carrier Jan 17 12:25:05.506719 containerd[1462]: time="2025-01-17T12:25:05.506436551Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Jan 17 12:25:05.506719 containerd[1462]: time="2025-01-17T12:25:05.506483830Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Jan 17 12:25:05.506719 containerd[1462]: time="2025-01-17T12:25:05.506499600Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Jan 17 12:25:05.506719 containerd[1462]: time="2025-01-17T12:25:05.506570533Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." 
runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Jan 17 12:25:05.507886 containerd[1462]: 2025-01-17 12:25:05.377 [INFO][4270] cni-plugin/plugin.go 325: Calico CNI found existing endpoint: &{{WorkloadEndpoint projectcalico.org/v3} {10.0.0.160-k8s-calico--kube--controllers--9fc77b6d--kp6sc-eth0 calico-kube-controllers-9fc77b6d- calico-system 03c15592-0c9f-4dd8-ac51-f2e8e4fe7f05 1116 0 2025-01-17 12:23:57 +0000 UTC map[app.kubernetes.io/name:calico-kube-controllers k8s-app:calico-kube-controllers pod-template-hash:9fc77b6d projectcalico.org/namespace:calico-system projectcalico.org/orchestrator:k8s projectcalico.org/serviceaccount:calico-kube-controllers] map[] [] [] []} {k8s 10.0.0.160 calico-kube-controllers-9fc77b6d-kp6sc eth0 calico-kube-controllers [] [] [kns.calico-system ksa.calico-system.calico-kube-controllers] cali3cccdc13232 [] []}} ContainerID="46127e5ff968b2917de40166ec0cbb391eb42dcf45c2a8acc1e490aa998ce0d5" Namespace="calico-system" Pod="calico-kube-controllers-9fc77b6d-kp6sc" WorkloadEndpoint="10.0.0.160-k8s-calico--kube--controllers--9fc77b6d--kp6sc-" Jan 17 12:25:05.507886 containerd[1462]: 2025-01-17 12:25:05.377 [INFO][4270] cni-plugin/k8s.go 77: Extracted identifiers for CmdAddK8s ContainerID="46127e5ff968b2917de40166ec0cbb391eb42dcf45c2a8acc1e490aa998ce0d5" Namespace="calico-system" Pod="calico-kube-controllers-9fc77b6d-kp6sc" WorkloadEndpoint="10.0.0.160-k8s-calico--kube--controllers--9fc77b6d--kp6sc-eth0" Jan 17 12:25:05.507886 containerd[1462]: 2025-01-17 12:25:05.406 [INFO][4298] ipam/ipam_plugin.go 225: Calico CNI IPAM request count IPv4=1 IPv6=0 ContainerID="46127e5ff968b2917de40166ec0cbb391eb42dcf45c2a8acc1e490aa998ce0d5" HandleID="k8s-pod-network.46127e5ff968b2917de40166ec0cbb391eb42dcf45c2a8acc1e490aa998ce0d5" Workload="10.0.0.160-k8s-calico--kube--controllers--9fc77b6d--kp6sc-eth0" Jan 17 12:25:05.507886 containerd[1462]: 2025-01-17 12:25:05.413 [INFO][4298] ipam/ipam_plugin.go 265: Auto assigning IP ContainerID="46127e5ff968b2917de40166ec0cbb391eb42dcf45c2a8acc1e490aa998ce0d5" HandleID="k8s-pod-network.46127e5ff968b2917de40166ec0cbb391eb42dcf45c2a8acc1e490aa998ce0d5" Workload="10.0.0.160-k8s-calico--kube--controllers--9fc77b6d--kp6sc-eth0" assignArgs=ipam.AutoAssignArgs{Num4:1, Num6:0, HandleID:(*string)(0xc0002dd410), Attrs:map[string]string{"namespace":"calico-system", "node":"10.0.0.160", "pod":"calico-kube-controllers-9fc77b6d-kp6sc", "timestamp":"2025-01-17 12:25:05.406039875 +0000 UTC"}, Hostname:"10.0.0.160", IPv4Pools:[]net.IPNet{}, IPv6Pools:[]net.IPNet{}, MaxBlocksPerHost:0, HostReservedAttrIPv4s:(*ipam.HostReservedAttr)(nil), HostReservedAttrIPv6s:(*ipam.HostReservedAttr)(nil), IntendedUse:"Workload"} Jan 17 12:25:05.507886 containerd[1462]: 2025-01-17 12:25:05.413 [INFO][4298] ipam/ipam_plugin.go 353: About to acquire host-wide IPAM lock. Jan 17 12:25:05.507886 containerd[1462]: 2025-01-17 12:25:05.459 [INFO][4298] ipam/ipam_plugin.go 368: Acquired host-wide IPAM lock. 
Jan 17 12:25:05.507886 containerd[1462]: 2025-01-17 12:25:05.460 [INFO][4298] ipam/ipam.go 107: Auto-assign 1 ipv4, 0 ipv6 addrs for host '10.0.0.160' Jan 17 12:25:05.507886 containerd[1462]: 2025-01-17 12:25:05.462 [INFO][4298] ipam/ipam.go 660: Looking up existing affinities for host handle="k8s-pod-network.46127e5ff968b2917de40166ec0cbb391eb42dcf45c2a8acc1e490aa998ce0d5" host="10.0.0.160" Jan 17 12:25:05.507886 containerd[1462]: 2025-01-17 12:25:05.467 [INFO][4298] ipam/ipam.go 372: Looking up existing affinities for host host="10.0.0.160" Jan 17 12:25:05.507886 containerd[1462]: 2025-01-17 12:25:05.470 [INFO][4298] ipam/ipam.go 489: Trying affinity for 192.168.44.0/26 host="10.0.0.160" Jan 17 12:25:05.507886 containerd[1462]: 2025-01-17 12:25:05.473 [INFO][4298] ipam/ipam.go 155: Attempting to load block cidr=192.168.44.0/26 host="10.0.0.160" Jan 17 12:25:05.507886 containerd[1462]: 2025-01-17 12:25:05.475 [INFO][4298] ipam/ipam.go 232: Affinity is confirmed and block has been loaded cidr=192.168.44.0/26 host="10.0.0.160" Jan 17 12:25:05.507886 containerd[1462]: 2025-01-17 12:25:05.475 [INFO][4298] ipam/ipam.go 1180: Attempting to assign 1 addresses from block block=192.168.44.0/26 handle="k8s-pod-network.46127e5ff968b2917de40166ec0cbb391eb42dcf45c2a8acc1e490aa998ce0d5" host="10.0.0.160" Jan 17 12:25:05.507886 containerd[1462]: 2025-01-17 12:25:05.477 [INFO][4298] ipam/ipam.go 1685: Creating new handle: k8s-pod-network.46127e5ff968b2917de40166ec0cbb391eb42dcf45c2a8acc1e490aa998ce0d5 Jan 17 12:25:05.507886 containerd[1462]: 2025-01-17 12:25:05.480 [INFO][4298] ipam/ipam.go 1203: Writing block in order to claim IPs block=192.168.44.0/26 handle="k8s-pod-network.46127e5ff968b2917de40166ec0cbb391eb42dcf45c2a8acc1e490aa998ce0d5" host="10.0.0.160" Jan 17 12:25:05.507886 containerd[1462]: 2025-01-17 12:25:05.485 [INFO][4298] ipam/ipam.go 1216: Successfully claimed IPs: [192.168.44.5/26] block=192.168.44.0/26 handle="k8s-pod-network.46127e5ff968b2917de40166ec0cbb391eb42dcf45c2a8acc1e490aa998ce0d5" host="10.0.0.160" Jan 17 12:25:05.507886 containerd[1462]: 2025-01-17 12:25:05.485 [INFO][4298] ipam/ipam.go 847: Auto-assigned 1 out of 1 IPv4s: [192.168.44.5/26] handle="k8s-pod-network.46127e5ff968b2917de40166ec0cbb391eb42dcf45c2a8acc1e490aa998ce0d5" host="10.0.0.160" Jan 17 12:25:05.507886 containerd[1462]: 2025-01-17 12:25:05.485 [INFO][4298] ipam/ipam_plugin.go 374: Released host-wide IPAM lock. 
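[Editor's note] Every host-side interface in this log is "cali" plus 11 hex characters (cali8a4ec19a811, calibb2bee070f4, calib4ac08fd5d4, cali35c0d98c471, cali3cccdc13232). One plausible derivation, assumed here rather than taken from Calico's source, is hashing the workload endpoint key and truncating so the name stays within Linux's 15-character interface-name limit:

package main

import (
	"crypto/sha1"
	"encoding/hex"
	"fmt"
)

// vethName builds a stable "cali" + 11-hex-char name from an endpoint key.
// The SHA-1 choice is this sketch's assumption; only the name shape and
// the 15-byte IFNAMSIZ budget (4 + 11 = 15) are taken from the log.
func vethName(endpointKey string) string {
	sum := sha1.Sum([]byte(endpointKey))
	return "cali" + hex.EncodeToString(sum[:])[:11]
}

func main() {
	fmt.Println(vethName("10.0.0.160-k8s-calico--kube--controllers--9fc77b6d--kp6sc-eth0"))
}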
Jan 17 12:25:05.507886 containerd[1462]: 2025-01-17 12:25:05.485 [INFO][4298] ipam/ipam_plugin.go 283: Calico CNI IPAM assigned addresses IPv4=[192.168.44.5/26] IPv6=[] ContainerID="46127e5ff968b2917de40166ec0cbb391eb42dcf45c2a8acc1e490aa998ce0d5" HandleID="k8s-pod-network.46127e5ff968b2917de40166ec0cbb391eb42dcf45c2a8acc1e490aa998ce0d5" Workload="10.0.0.160-k8s-calico--kube--controllers--9fc77b6d--kp6sc-eth0" Jan 17 12:25:05.508307 systemd-resolved[1334]: Failed to determine the local hostname and LLMNR/mDNS names, ignoring: No such device or address Jan 17 12:25:05.508864 containerd[1462]: 2025-01-17 12:25:05.489 [INFO][4270] cni-plugin/k8s.go 386: Populated endpoint ContainerID="46127e5ff968b2917de40166ec0cbb391eb42dcf45c2a8acc1e490aa998ce0d5" Namespace="calico-system" Pod="calico-kube-controllers-9fc77b6d-kp6sc" WorkloadEndpoint="10.0.0.160-k8s-calico--kube--controllers--9fc77b6d--kp6sc-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"10.0.0.160-k8s-calico--kube--controllers--9fc77b6d--kp6sc-eth0", GenerateName:"calico-kube-controllers-9fc77b6d-", Namespace:"calico-system", SelfLink:"", UID:"03c15592-0c9f-4dd8-ac51-f2e8e4fe7f05", ResourceVersion:"1116", Generation:0, CreationTimestamp:time.Date(2025, time.January, 17, 12, 23, 57, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"app.kubernetes.io/name":"calico-kube-controllers", "k8s-app":"calico-kube-controllers", "pod-template-hash":"9fc77b6d", "projectcalico.org/namespace":"calico-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"calico-kube-controllers"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"10.0.0.160", ContainerID:"", Pod:"calico-kube-controllers-9fc77b6d-kp6sc", Endpoint:"eth0", ServiceAccountName:"calico-kube-controllers", IPNetworks:[]string{"192.168.44.5/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-system", "ksa.calico-system.calico-kube-controllers"}, InterfaceName:"cali3cccdc13232", MAC:"", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil)}} Jan 17 12:25:05.508864 containerd[1462]: 2025-01-17 12:25:05.491 [INFO][4270] cni-plugin/k8s.go 387: Calico CNI using IPs: [192.168.44.5/32] ContainerID="46127e5ff968b2917de40166ec0cbb391eb42dcf45c2a8acc1e490aa998ce0d5" Namespace="calico-system" Pod="calico-kube-controllers-9fc77b6d-kp6sc" WorkloadEndpoint="10.0.0.160-k8s-calico--kube--controllers--9fc77b6d--kp6sc-eth0" Jan 17 12:25:05.508864 containerd[1462]: 2025-01-17 12:25:05.491 [INFO][4270] cni-plugin/dataplane_linux.go 69: Setting the host side veth name to cali3cccdc13232 ContainerID="46127e5ff968b2917de40166ec0cbb391eb42dcf45c2a8acc1e490aa998ce0d5" Namespace="calico-system" Pod="calico-kube-controllers-9fc77b6d-kp6sc" WorkloadEndpoint="10.0.0.160-k8s-calico--kube--controllers--9fc77b6d--kp6sc-eth0" Jan 17 12:25:05.508864 containerd[1462]: 2025-01-17 12:25:05.496 [INFO][4270] cni-plugin/dataplane_linux.go 508: Disabling IPv4 forwarding ContainerID="46127e5ff968b2917de40166ec0cbb391eb42dcf45c2a8acc1e490aa998ce0d5" Namespace="calico-system" Pod="calico-kube-controllers-9fc77b6d-kp6sc" WorkloadEndpoint="10.0.0.160-k8s-calico--kube--controllers--9fc77b6d--kp6sc-eth0" Jan 17 12:25:05.508864 
containerd[1462]: 2025-01-17 12:25:05.496 [INFO][4270] cni-plugin/k8s.go 414: Added Mac, interface name, and active container ID to endpoint ContainerID="46127e5ff968b2917de40166ec0cbb391eb42dcf45c2a8acc1e490aa998ce0d5" Namespace="calico-system" Pod="calico-kube-controllers-9fc77b6d-kp6sc" WorkloadEndpoint="10.0.0.160-k8s-calico--kube--controllers--9fc77b6d--kp6sc-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"10.0.0.160-k8s-calico--kube--controllers--9fc77b6d--kp6sc-eth0", GenerateName:"calico-kube-controllers-9fc77b6d-", Namespace:"calico-system", SelfLink:"", UID:"03c15592-0c9f-4dd8-ac51-f2e8e4fe7f05", ResourceVersion:"1116", Generation:0, CreationTimestamp:time.Date(2025, time.January, 17, 12, 23, 57, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"app.kubernetes.io/name":"calico-kube-controllers", "k8s-app":"calico-kube-controllers", "pod-template-hash":"9fc77b6d", "projectcalico.org/namespace":"calico-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"calico-kube-controllers"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"10.0.0.160", ContainerID:"46127e5ff968b2917de40166ec0cbb391eb42dcf45c2a8acc1e490aa998ce0d5", Pod:"calico-kube-controllers-9fc77b6d-kp6sc", Endpoint:"eth0", ServiceAccountName:"calico-kube-controllers", IPNetworks:[]string{"192.168.44.5/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-system", "ksa.calico-system.calico-kube-controllers"}, InterfaceName:"cali3cccdc13232", MAC:"52:cf:e4:81:4e:9b", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil)}} Jan 17 12:25:05.508864 containerd[1462]: 2025-01-17 12:25:05.503 [INFO][4270] cni-plugin/k8s.go 500: Wrote updated endpoint to datastore ContainerID="46127e5ff968b2917de40166ec0cbb391eb42dcf45c2a8acc1e490aa998ce0d5" Namespace="calico-system" Pod="calico-kube-controllers-9fc77b6d-kp6sc" WorkloadEndpoint="10.0.0.160-k8s-calico--kube--controllers--9fc77b6d--kp6sc-eth0" Jan 17 12:25:05.526024 containerd[1462]: time="2025-01-17T12:25:05.525951840Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:csi-node-driver-b66m9,Uid:dda61e94-21b3-42b1-baef-aafab38d13ff,Namespace:calico-system,Attempt:1,} returns sandbox id \"405a158c85be99c863d2dfdde9542760680bf737e6a4e11ad97fed9c1857a9b9\"" Jan 17 12:25:05.530858 systemd[1]: Started cri-containerd-705b9e1c6b5bbf8ef7aa026983d5709d6aab3efa37a4109de5e3084938d7c205.scope - libcontainer container 705b9e1c6b5bbf8ef7aa026983d5709d6aab3efa37a4109de5e3084938d7c205. Jan 17 12:25:05.534389 containerd[1462]: time="2025-01-17T12:25:05.534309260Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Jan 17 12:25:05.534389 containerd[1462]: time="2025-01-17T12:25:05.534362540Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Jan 17 12:25:05.534389 containerd[1462]: time="2025-01-17T12:25:05.534376586Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." 
runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Jan 17 12:25:05.534605 containerd[1462]: time="2025-01-17T12:25:05.534447660Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Jan 17 12:25:05.543949 systemd-resolved[1334]: Failed to determine the local hostname and LLMNR/mDNS names, ignoring: No such device or address Jan 17 12:25:05.558834 systemd[1]: Started cri-containerd-46127e5ff968b2917de40166ec0cbb391eb42dcf45c2a8acc1e490aa998ce0d5.scope - libcontainer container 46127e5ff968b2917de40166ec0cbb391eb42dcf45c2a8acc1e490aa998ce0d5. Jan 17 12:25:05.568912 containerd[1462]: time="2025-01-17T12:25:05.568867981Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-apiserver-68f79c84d4-6mrf5,Uid:3229aaed-5c2c-4c71-a248-e6141b8a2e7f,Namespace:calico-apiserver,Attempt:1,} returns sandbox id \"705b9e1c6b5bbf8ef7aa026983d5709d6aab3efa37a4109de5e3084938d7c205\"" Jan 17 12:25:05.570137 systemd-resolved[1334]: Failed to determine the local hostname and LLMNR/mDNS names, ignoring: No such device or address Jan 17 12:25:05.591535 containerd[1462]: time="2025-01-17T12:25:05.591494941Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-kube-controllers-9fc77b6d-kp6sc,Uid:03c15592-0c9f-4dd8-ac51-f2e8e4fe7f05,Namespace:calico-system,Attempt:1,} returns sandbox id \"46127e5ff968b2917de40166ec0cbb391eb42dcf45c2a8acc1e490aa998ce0d5\"" Jan 17 12:25:05.667847 systemd-networkd[1390]: calibb2bee070f4: Gained IPv6LL Jan 17 12:25:06.115889 systemd-networkd[1390]: cali8a4ec19a811: Gained IPv6LL Jan 17 12:25:06.395452 kubelet[1774]: E0117 12:25:06.395352 1774 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Jan 17 12:25:06.563849 systemd-networkd[1390]: cali35c0d98c471: Gained IPv6LL Jan 17 12:25:06.819839 systemd-networkd[1390]: cali3cccdc13232: Gained IPv6LL Jan 17 12:25:07.395991 kubelet[1774]: E0117 12:25:07.395965 1774 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Jan 17 12:25:07.459840 systemd-networkd[1390]: calib4ac08fd5d4: Gained IPv6LL Jan 17 12:25:08.195531 containerd[1462]: time="2025-01-17T12:25:08.195482210Z" level=info msg="StopPodSandbox for \"7a7a852025bfa9adc75020f278431ddbe53f151a058f596183613c9f48b6d0e2\"" Jan 17 12:25:08.396616 kubelet[1774]: E0117 12:25:08.396389 1774 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Jan 17 12:25:08.420811 containerd[1462]: 2025-01-17 12:25:08.389 [INFO][4502] cni-plugin/k8s.go 608: Cleaning up netns ContainerID="7a7a852025bfa9adc75020f278431ddbe53f151a058f596183613c9f48b6d0e2" Jan 17 12:25:08.420811 containerd[1462]: 2025-01-17 12:25:08.389 [INFO][4502] cni-plugin/dataplane_linux.go 559: Deleting workload's device in netns. ContainerID="7a7a852025bfa9adc75020f278431ddbe53f151a058f596183613c9f48b6d0e2" iface="eth0" netns="/var/run/netns/cni-9ce1a404-6c6b-40a9-8fc1-d188cc23ea50" Jan 17 12:25:08.420811 containerd[1462]: 2025-01-17 12:25:08.390 [INFO][4502] cni-plugin/dataplane_linux.go 570: Entered netns, deleting veth. ContainerID="7a7a852025bfa9adc75020f278431ddbe53f151a058f596183613c9f48b6d0e2" iface="eth0" netns="/var/run/netns/cni-9ce1a404-6c6b-40a9-8fc1-d188cc23ea50" Jan 17 12:25:08.420811 containerd[1462]: 2025-01-17 12:25:08.390 [INFO][4502] cni-plugin/dataplane_linux.go 597: Workload's veth was already gone. Nothing to do. 
ContainerID="7a7a852025bfa9adc75020f278431ddbe53f151a058f596183613c9f48b6d0e2" iface="eth0" netns="/var/run/netns/cni-9ce1a404-6c6b-40a9-8fc1-d188cc23ea50" Jan 17 12:25:08.420811 containerd[1462]: 2025-01-17 12:25:08.390 [INFO][4502] cni-plugin/k8s.go 615: Releasing IP address(es) ContainerID="7a7a852025bfa9adc75020f278431ddbe53f151a058f596183613c9f48b6d0e2" Jan 17 12:25:08.420811 containerd[1462]: 2025-01-17 12:25:08.390 [INFO][4502] cni-plugin/utils.go 188: Calico CNI releasing IP address ContainerID="7a7a852025bfa9adc75020f278431ddbe53f151a058f596183613c9f48b6d0e2" Jan 17 12:25:08.420811 containerd[1462]: 2025-01-17 12:25:08.409 [INFO][4510] ipam/ipam_plugin.go 412: Releasing address using handleID ContainerID="7a7a852025bfa9adc75020f278431ddbe53f151a058f596183613c9f48b6d0e2" HandleID="k8s-pod-network.7a7a852025bfa9adc75020f278431ddbe53f151a058f596183613c9f48b6d0e2" Workload="10.0.0.160-k8s-coredns--76f75df574--nlhgn-eth0" Jan 17 12:25:08.420811 containerd[1462]: 2025-01-17 12:25:08.409 [INFO][4510] ipam/ipam_plugin.go 353: About to acquire host-wide IPAM lock. Jan 17 12:25:08.420811 containerd[1462]: 2025-01-17 12:25:08.409 [INFO][4510] ipam/ipam_plugin.go 368: Acquired host-wide IPAM lock. Jan 17 12:25:08.420811 containerd[1462]: 2025-01-17 12:25:08.414 [WARNING][4510] ipam/ipam_plugin.go 429: Asked to release address but it doesn't exist. Ignoring ContainerID="7a7a852025bfa9adc75020f278431ddbe53f151a058f596183613c9f48b6d0e2" HandleID="k8s-pod-network.7a7a852025bfa9adc75020f278431ddbe53f151a058f596183613c9f48b6d0e2" Workload="10.0.0.160-k8s-coredns--76f75df574--nlhgn-eth0" Jan 17 12:25:08.420811 containerd[1462]: 2025-01-17 12:25:08.414 [INFO][4510] ipam/ipam_plugin.go 440: Releasing address using workloadID ContainerID="7a7a852025bfa9adc75020f278431ddbe53f151a058f596183613c9f48b6d0e2" HandleID="k8s-pod-network.7a7a852025bfa9adc75020f278431ddbe53f151a058f596183613c9f48b6d0e2" Workload="10.0.0.160-k8s-coredns--76f75df574--nlhgn-eth0" Jan 17 12:25:08.420811 containerd[1462]: 2025-01-17 12:25:08.416 [INFO][4510] ipam/ipam_plugin.go 374: Released host-wide IPAM lock. Jan 17 12:25:08.420811 containerd[1462]: 2025-01-17 12:25:08.418 [INFO][4502] cni-plugin/k8s.go 621: Teardown processing complete. ContainerID="7a7a852025bfa9adc75020f278431ddbe53f151a058f596183613c9f48b6d0e2" Jan 17 12:25:08.421255 containerd[1462]: time="2025-01-17T12:25:08.420994288Z" level=info msg="TearDown network for sandbox \"7a7a852025bfa9adc75020f278431ddbe53f151a058f596183613c9f48b6d0e2\" successfully" Jan 17 12:25:08.421255 containerd[1462]: time="2025-01-17T12:25:08.421025757Z" level=info msg="StopPodSandbox for \"7a7a852025bfa9adc75020f278431ddbe53f151a058f596183613c9f48b6d0e2\" returns successfully" Jan 17 12:25:08.421431 kubelet[1774]: E0117 12:25:08.421403 1774 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jan 17 12:25:08.422101 containerd[1462]: time="2025-01-17T12:25:08.422075869Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-76f75df574-nlhgn,Uid:04af4684-e94d-4b3d-a664-20ff360fb4b6,Namespace:kube-system,Attempt:1,}" Jan 17 12:25:08.422993 systemd[1]: run-netns-cni\x2d9ce1a404\x2d6c6b\x2d40a9\x2d8fc1\x2dd188cc23ea50.mount: Deactivated successfully. 
Jan 17 12:25:08.559091 systemd-networkd[1390]: cali9c3c618bae5: Link UP Jan 17 12:25:08.559359 systemd-networkd[1390]: cali9c3c618bae5: Gained carrier Jan 17 12:25:08.573364 containerd[1462]: 2025-01-17 12:25:08.496 [INFO][4521] cni-plugin/plugin.go 325: Calico CNI found existing endpoint: &{{WorkloadEndpoint projectcalico.org/v3} {10.0.0.160-k8s-coredns--76f75df574--nlhgn-eth0 coredns-76f75df574- kube-system 04af4684-e94d-4b3d-a664-20ff360fb4b6 1142 0 2025-01-17 12:23:44 +0000 UTC map[k8s-app:kube-dns pod-template-hash:76f75df574 projectcalico.org/namespace:kube-system projectcalico.org/orchestrator:k8s projectcalico.org/serviceaccount:coredns] map[] [] [] []} {k8s 10.0.0.160 coredns-76f75df574-nlhgn eth0 coredns [] [] [kns.kube-system ksa.kube-system.coredns] cali9c3c618bae5 [{dns UDP 53 0 } {dns-tcp TCP 53 0 } {metrics TCP 9153 0 }] []}} ContainerID="3c8b1143d7533e7f37538df9b1a6dfcea840a2766c890dbdf7f9b173584dcb0b" Namespace="kube-system" Pod="coredns-76f75df574-nlhgn" WorkloadEndpoint="10.0.0.160-k8s-coredns--76f75df574--nlhgn-" Jan 17 12:25:08.573364 containerd[1462]: 2025-01-17 12:25:08.496 [INFO][4521] cni-plugin/k8s.go 77: Extracted identifiers for CmdAddK8s ContainerID="3c8b1143d7533e7f37538df9b1a6dfcea840a2766c890dbdf7f9b173584dcb0b" Namespace="kube-system" Pod="coredns-76f75df574-nlhgn" WorkloadEndpoint="10.0.0.160-k8s-coredns--76f75df574--nlhgn-eth0" Jan 17 12:25:08.573364 containerd[1462]: 2025-01-17 12:25:08.522 [INFO][4535] ipam/ipam_plugin.go 225: Calico CNI IPAM request count IPv4=1 IPv6=0 ContainerID="3c8b1143d7533e7f37538df9b1a6dfcea840a2766c890dbdf7f9b173584dcb0b" HandleID="k8s-pod-network.3c8b1143d7533e7f37538df9b1a6dfcea840a2766c890dbdf7f9b173584dcb0b" Workload="10.0.0.160-k8s-coredns--76f75df574--nlhgn-eth0" Jan 17 12:25:08.573364 containerd[1462]: 2025-01-17 12:25:08.529 [INFO][4535] ipam/ipam_plugin.go 265: Auto assigning IP ContainerID="3c8b1143d7533e7f37538df9b1a6dfcea840a2766c890dbdf7f9b173584dcb0b" HandleID="k8s-pod-network.3c8b1143d7533e7f37538df9b1a6dfcea840a2766c890dbdf7f9b173584dcb0b" Workload="10.0.0.160-k8s-coredns--76f75df574--nlhgn-eth0" assignArgs=ipam.AutoAssignArgs{Num4:1, Num6:0, HandleID:(*string)(0xc0002dccc0), Attrs:map[string]string{"namespace":"kube-system", "node":"10.0.0.160", "pod":"coredns-76f75df574-nlhgn", "timestamp":"2025-01-17 12:25:08.522655091 +0000 UTC"}, Hostname:"10.0.0.160", IPv4Pools:[]net.IPNet{}, IPv6Pools:[]net.IPNet{}, MaxBlocksPerHost:0, HostReservedAttrIPv4s:(*ipam.HostReservedAttr)(nil), HostReservedAttrIPv6s:(*ipam.HostReservedAttr)(nil), IntendedUse:"Workload"} Jan 17 12:25:08.573364 containerd[1462]: 2025-01-17 12:25:08.529 [INFO][4535] ipam/ipam_plugin.go 353: About to acquire host-wide IPAM lock. Jan 17 12:25:08.573364 containerd[1462]: 2025-01-17 12:25:08.529 [INFO][4535] ipam/ipam_plugin.go 368: Acquired host-wide IPAM lock. 
Jan 17 12:25:08.573364 containerd[1462]: 2025-01-17 12:25:08.529 [INFO][4535] ipam/ipam.go 107: Auto-assign 1 ipv4, 0 ipv6 addrs for host '10.0.0.160' Jan 17 12:25:08.573364 containerd[1462]: 2025-01-17 12:25:08.531 [INFO][4535] ipam/ipam.go 660: Looking up existing affinities for host handle="k8s-pod-network.3c8b1143d7533e7f37538df9b1a6dfcea840a2766c890dbdf7f9b173584dcb0b" host="10.0.0.160" Jan 17 12:25:08.573364 containerd[1462]: 2025-01-17 12:25:08.534 [INFO][4535] ipam/ipam.go 372: Looking up existing affinities for host host="10.0.0.160" Jan 17 12:25:08.573364 containerd[1462]: 2025-01-17 12:25:08.537 [INFO][4535] ipam/ipam.go 489: Trying affinity for 192.168.44.0/26 host="10.0.0.160" Jan 17 12:25:08.573364 containerd[1462]: 2025-01-17 12:25:08.538 [INFO][4535] ipam/ipam.go 155: Attempting to load block cidr=192.168.44.0/26 host="10.0.0.160" Jan 17 12:25:08.573364 containerd[1462]: 2025-01-17 12:25:08.540 [INFO][4535] ipam/ipam.go 232: Affinity is confirmed and block has been loaded cidr=192.168.44.0/26 host="10.0.0.160" Jan 17 12:25:08.573364 containerd[1462]: 2025-01-17 12:25:08.540 [INFO][4535] ipam/ipam.go 1180: Attempting to assign 1 addresses from block block=192.168.44.0/26 handle="k8s-pod-network.3c8b1143d7533e7f37538df9b1a6dfcea840a2766c890dbdf7f9b173584dcb0b" host="10.0.0.160" Jan 17 12:25:08.573364 containerd[1462]: 2025-01-17 12:25:08.541 [INFO][4535] ipam/ipam.go 1685: Creating new handle: k8s-pod-network.3c8b1143d7533e7f37538df9b1a6dfcea840a2766c890dbdf7f9b173584dcb0b Jan 17 12:25:08.573364 containerd[1462]: 2025-01-17 12:25:08.545 [INFO][4535] ipam/ipam.go 1203: Writing block in order to claim IPs block=192.168.44.0/26 handle="k8s-pod-network.3c8b1143d7533e7f37538df9b1a6dfcea840a2766c890dbdf7f9b173584dcb0b" host="10.0.0.160" Jan 17 12:25:08.573364 containerd[1462]: 2025-01-17 12:25:08.552 [INFO][4535] ipam/ipam.go 1216: Successfully claimed IPs: [192.168.44.6/26] block=192.168.44.0/26 handle="k8s-pod-network.3c8b1143d7533e7f37538df9b1a6dfcea840a2766c890dbdf7f9b173584dcb0b" host="10.0.0.160" Jan 17 12:25:08.573364 containerd[1462]: 2025-01-17 12:25:08.552 [INFO][4535] ipam/ipam.go 847: Auto-assigned 1 out of 1 IPv4s: [192.168.44.6/26] handle="k8s-pod-network.3c8b1143d7533e7f37538df9b1a6dfcea840a2766c890dbdf7f9b173584dcb0b" host="10.0.0.160" Jan 17 12:25:08.573364 containerd[1462]: 2025-01-17 12:25:08.552 [INFO][4535] ipam/ipam_plugin.go 374: Released host-wide IPAM lock. 
Jan 17 12:25:08.573364 containerd[1462]: 2025-01-17 12:25:08.552 [INFO][4535] ipam/ipam_plugin.go 283: Calico CNI IPAM assigned addresses IPv4=[192.168.44.6/26] IPv6=[] ContainerID="3c8b1143d7533e7f37538df9b1a6dfcea840a2766c890dbdf7f9b173584dcb0b" HandleID="k8s-pod-network.3c8b1143d7533e7f37538df9b1a6dfcea840a2766c890dbdf7f9b173584dcb0b" Workload="10.0.0.160-k8s-coredns--76f75df574--nlhgn-eth0" Jan 17 12:25:08.573988 containerd[1462]: 2025-01-17 12:25:08.555 [INFO][4521] cni-plugin/k8s.go 386: Populated endpoint ContainerID="3c8b1143d7533e7f37538df9b1a6dfcea840a2766c890dbdf7f9b173584dcb0b" Namespace="kube-system" Pod="coredns-76f75df574-nlhgn" WorkloadEndpoint="10.0.0.160-k8s-coredns--76f75df574--nlhgn-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"10.0.0.160-k8s-coredns--76f75df574--nlhgn-eth0", GenerateName:"coredns-76f75df574-", Namespace:"kube-system", SelfLink:"", UID:"04af4684-e94d-4b3d-a664-20ff360fb4b6", ResourceVersion:"1142", Generation:0, CreationTimestamp:time.Date(2025, time.January, 17, 12, 23, 44, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"k8s-app":"kube-dns", "pod-template-hash":"76f75df574", "projectcalico.org/namespace":"kube-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"coredns"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"10.0.0.160", ContainerID:"", Pod:"coredns-76f75df574-nlhgn", Endpoint:"eth0", ServiceAccountName:"coredns", IPNetworks:[]string{"192.168.44.6/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.kube-system", "ksa.kube-system.coredns"}, InterfaceName:"cali9c3c618bae5", MAC:"", Ports:[]v3.WorkloadEndpointPort{v3.WorkloadEndpointPort{Name:"dns", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"UDP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"dns-tcp", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"metrics", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x23c1, HostPort:0x0, HostIP:""}}, AllowSpoofedSourcePrefixes:[]string(nil)}} Jan 17 12:25:08.573988 containerd[1462]: 2025-01-17 12:25:08.555 [INFO][4521] cni-plugin/k8s.go 387: Calico CNI using IPs: [192.168.44.6/32] ContainerID="3c8b1143d7533e7f37538df9b1a6dfcea840a2766c890dbdf7f9b173584dcb0b" Namespace="kube-system" Pod="coredns-76f75df574-nlhgn" WorkloadEndpoint="10.0.0.160-k8s-coredns--76f75df574--nlhgn-eth0" Jan 17 12:25:08.573988 containerd[1462]: 2025-01-17 12:25:08.555 [INFO][4521] cni-plugin/dataplane_linux.go 69: Setting the host side veth name to cali9c3c618bae5 ContainerID="3c8b1143d7533e7f37538df9b1a6dfcea840a2766c890dbdf7f9b173584dcb0b" Namespace="kube-system" Pod="coredns-76f75df574-nlhgn" WorkloadEndpoint="10.0.0.160-k8s-coredns--76f75df574--nlhgn-eth0" Jan 17 12:25:08.573988 containerd[1462]: 2025-01-17 12:25:08.559 [INFO][4521] cni-plugin/dataplane_linux.go 508: Disabling IPv4 forwarding ContainerID="3c8b1143d7533e7f37538df9b1a6dfcea840a2766c890dbdf7f9b173584dcb0b" Namespace="kube-system" Pod="coredns-76f75df574-nlhgn" WorkloadEndpoint="10.0.0.160-k8s-coredns--76f75df574--nlhgn-eth0" Jan 17 12:25:08.573988 containerd[1462]: 2025-01-17 
12:25:08.560 [INFO][4521] cni-plugin/k8s.go 414: Added Mac, interface name, and active container ID to endpoint ContainerID="3c8b1143d7533e7f37538df9b1a6dfcea840a2766c890dbdf7f9b173584dcb0b" Namespace="kube-system" Pod="coredns-76f75df574-nlhgn" WorkloadEndpoint="10.0.0.160-k8s-coredns--76f75df574--nlhgn-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"10.0.0.160-k8s-coredns--76f75df574--nlhgn-eth0", GenerateName:"coredns-76f75df574-", Namespace:"kube-system", SelfLink:"", UID:"04af4684-e94d-4b3d-a664-20ff360fb4b6", ResourceVersion:"1142", Generation:0, CreationTimestamp:time.Date(2025, time.January, 17, 12, 23, 44, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"k8s-app":"kube-dns", "pod-template-hash":"76f75df574", "projectcalico.org/namespace":"kube-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"coredns"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"10.0.0.160", ContainerID:"3c8b1143d7533e7f37538df9b1a6dfcea840a2766c890dbdf7f9b173584dcb0b", Pod:"coredns-76f75df574-nlhgn", Endpoint:"eth0", ServiceAccountName:"coredns", IPNetworks:[]string{"192.168.44.6/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.kube-system", "ksa.kube-system.coredns"}, InterfaceName:"cali9c3c618bae5", MAC:"be:74:a9:7a:1a:72", Ports:[]v3.WorkloadEndpointPort{v3.WorkloadEndpointPort{Name:"dns", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"UDP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"dns-tcp", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"metrics", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x23c1, HostPort:0x0, HostIP:""}}, AllowSpoofedSourcePrefixes:[]string(nil)}} Jan 17 12:25:08.573988 containerd[1462]: 2025-01-17 12:25:08.568 [INFO][4521] cni-plugin/k8s.go 500: Wrote updated endpoint to datastore ContainerID="3c8b1143d7533e7f37538df9b1a6dfcea840a2766c890dbdf7f9b173584dcb0b" Namespace="kube-system" Pod="coredns-76f75df574-nlhgn" WorkloadEndpoint="10.0.0.160-k8s-coredns--76f75df574--nlhgn-eth0" Jan 17 12:25:08.618713 containerd[1462]: time="2025-01-17T12:25:08.618624190Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Jan 17 12:25:08.618713 containerd[1462]: time="2025-01-17T12:25:08.618673753Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Jan 17 12:25:08.618713 containerd[1462]: time="2025-01-17T12:25:08.618690595Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Jan 17 12:25:08.618837 containerd[1462]: time="2025-01-17T12:25:08.618790914Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Jan 17 12:25:08.665925 systemd[1]: Started cri-containerd-3c8b1143d7533e7f37538df9b1a6dfcea840a2766c890dbdf7f9b173584dcb0b.scope - libcontainer container 3c8b1143d7533e7f37538df9b1a6dfcea840a2766c890dbdf7f9b173584dcb0b. 
Jan 17 12:25:08.677429 systemd-resolved[1334]: Failed to determine the local hostname and LLMNR/mDNS names, ignoring: No such device or address Jan 17 12:25:08.700190 containerd[1462]: time="2025-01-17T12:25:08.700100166Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-76f75df574-nlhgn,Uid:04af4684-e94d-4b3d-a664-20ff360fb4b6,Namespace:kube-system,Attempt:1,} returns sandbox id \"3c8b1143d7533e7f37538df9b1a6dfcea840a2766c890dbdf7f9b173584dcb0b\"" Jan 17 12:25:08.701015 kubelet[1774]: E0117 12:25:08.700923 1774 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jan 17 12:25:09.194404 containerd[1462]: time="2025-01-17T12:25:09.194360387Z" level=info msg="StopPodSandbox for \"c73f963120c36550314792c43b88b2d6f709a08f08037497692b316525e358bf\"" Jan 17 12:25:09.265357 containerd[1462]: 2025-01-17 12:25:09.237 [INFO][4613] cni-plugin/k8s.go 608: Cleaning up netns ContainerID="c73f963120c36550314792c43b88b2d6f709a08f08037497692b316525e358bf" Jan 17 12:25:09.265357 containerd[1462]: 2025-01-17 12:25:09.237 [INFO][4613] cni-plugin/dataplane_linux.go 559: Deleting workload's device in netns. ContainerID="c73f963120c36550314792c43b88b2d6f709a08f08037497692b316525e358bf" iface="eth0" netns="/var/run/netns/cni-cedb32b6-4caa-c5b7-22e1-4a1e14aeaeda" Jan 17 12:25:09.265357 containerd[1462]: 2025-01-17 12:25:09.237 [INFO][4613] cni-plugin/dataplane_linux.go 570: Entered netns, deleting veth. ContainerID="c73f963120c36550314792c43b88b2d6f709a08f08037497692b316525e358bf" iface="eth0" netns="/var/run/netns/cni-cedb32b6-4caa-c5b7-22e1-4a1e14aeaeda" Jan 17 12:25:09.265357 containerd[1462]: 2025-01-17 12:25:09.237 [INFO][4613] cni-plugin/dataplane_linux.go 597: Workload's veth was already gone. Nothing to do. ContainerID="c73f963120c36550314792c43b88b2d6f709a08f08037497692b316525e358bf" iface="eth0" netns="/var/run/netns/cni-cedb32b6-4caa-c5b7-22e1-4a1e14aeaeda" Jan 17 12:25:09.265357 containerd[1462]: 2025-01-17 12:25:09.237 [INFO][4613] cni-plugin/k8s.go 615: Releasing IP address(es) ContainerID="c73f963120c36550314792c43b88b2d6f709a08f08037497692b316525e358bf" Jan 17 12:25:09.265357 containerd[1462]: 2025-01-17 12:25:09.237 [INFO][4613] cni-plugin/utils.go 188: Calico CNI releasing IP address ContainerID="c73f963120c36550314792c43b88b2d6f709a08f08037497692b316525e358bf" Jan 17 12:25:09.265357 containerd[1462]: 2025-01-17 12:25:09.255 [INFO][4621] ipam/ipam_plugin.go 412: Releasing address using handleID ContainerID="c73f963120c36550314792c43b88b2d6f709a08f08037497692b316525e358bf" HandleID="k8s-pod-network.c73f963120c36550314792c43b88b2d6f709a08f08037497692b316525e358bf" Workload="10.0.0.160-k8s-calico--apiserver--68f79c84d4--xr6b6-eth0" Jan 17 12:25:09.265357 containerd[1462]: 2025-01-17 12:25:09.255 [INFO][4621] ipam/ipam_plugin.go 353: About to acquire host-wide IPAM lock. Jan 17 12:25:09.265357 containerd[1462]: 2025-01-17 12:25:09.255 [INFO][4621] ipam/ipam_plugin.go 368: Acquired host-wide IPAM lock. Jan 17 12:25:09.265357 containerd[1462]: 2025-01-17 12:25:09.259 [WARNING][4621] ipam/ipam_plugin.go 429: Asked to release address but it doesn't exist. 
Ignoring ContainerID="c73f963120c36550314792c43b88b2d6f709a08f08037497692b316525e358bf" HandleID="k8s-pod-network.c73f963120c36550314792c43b88b2d6f709a08f08037497692b316525e358bf" Workload="10.0.0.160-k8s-calico--apiserver--68f79c84d4--xr6b6-eth0" Jan 17 12:25:09.265357 containerd[1462]: 2025-01-17 12:25:09.259 [INFO][4621] ipam/ipam_plugin.go 440: Releasing address using workloadID ContainerID="c73f963120c36550314792c43b88b2d6f709a08f08037497692b316525e358bf" HandleID="k8s-pod-network.c73f963120c36550314792c43b88b2d6f709a08f08037497692b316525e358bf" Workload="10.0.0.160-k8s-calico--apiserver--68f79c84d4--xr6b6-eth0" Jan 17 12:25:09.265357 containerd[1462]: 2025-01-17 12:25:09.261 [INFO][4621] ipam/ipam_plugin.go 374: Released host-wide IPAM lock. Jan 17 12:25:09.265357 containerd[1462]: 2025-01-17 12:25:09.263 [INFO][4613] cni-plugin/k8s.go 621: Teardown processing complete. ContainerID="c73f963120c36550314792c43b88b2d6f709a08f08037497692b316525e358bf" Jan 17 12:25:09.266018 containerd[1462]: time="2025-01-17T12:25:09.265524539Z" level=info msg="TearDown network for sandbox \"c73f963120c36550314792c43b88b2d6f709a08f08037497692b316525e358bf\" successfully" Jan 17 12:25:09.266018 containerd[1462]: time="2025-01-17T12:25:09.265599991Z" level=info msg="StopPodSandbox for \"c73f963120c36550314792c43b88b2d6f709a08f08037497692b316525e358bf\" returns successfully" Jan 17 12:25:09.266345 containerd[1462]: time="2025-01-17T12:25:09.266320333Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-apiserver-68f79c84d4-xr6b6,Uid:d48ea218-f377-4f0d-a97e-292b17df6d89,Namespace:calico-apiserver,Attempt:1,}" Jan 17 12:25:09.390118 systemd-networkd[1390]: cali59f5e5d8307: Link UP Jan 17 12:25:09.391160 systemd-networkd[1390]: cali59f5e5d8307: Gained carrier Jan 17 12:25:09.396948 kubelet[1774]: E0117 12:25:09.396926 1774 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Jan 17 12:25:09.402717 containerd[1462]: 2025-01-17 12:25:09.329 [INFO][4633] cni-plugin/plugin.go 325: Calico CNI found existing endpoint: &{{WorkloadEndpoint projectcalico.org/v3} {10.0.0.160-k8s-calico--apiserver--68f79c84d4--xr6b6-eth0 calico-apiserver-68f79c84d4- calico-apiserver d48ea218-f377-4f0d-a97e-292b17df6d89 1152 0 2025-01-17 12:23:56 +0000 UTC map[apiserver:true app.kubernetes.io/name:calico-apiserver k8s-app:calico-apiserver pod-template-hash:68f79c84d4 projectcalico.org/namespace:calico-apiserver projectcalico.org/orchestrator:k8s projectcalico.org/serviceaccount:calico-apiserver] map[] [] [] []} {k8s 10.0.0.160 calico-apiserver-68f79c84d4-xr6b6 eth0 calico-apiserver [] [] [kns.calico-apiserver ksa.calico-apiserver.calico-apiserver] cali59f5e5d8307 [] []}} ContainerID="a17ce83b27e1ce8b503d39d1d540f008a21074a4f6f99a463ed780afe1ec4fb3" Namespace="calico-apiserver" Pod="calico-apiserver-68f79c84d4-xr6b6" WorkloadEndpoint="10.0.0.160-k8s-calico--apiserver--68f79c84d4--xr6b6-" Jan 17 12:25:09.402717 containerd[1462]: 2025-01-17 12:25:09.329 [INFO][4633] cni-plugin/k8s.go 77: Extracted identifiers for CmdAddK8s ContainerID="a17ce83b27e1ce8b503d39d1d540f008a21074a4f6f99a463ed780afe1ec4fb3" Namespace="calico-apiserver" Pod="calico-apiserver-68f79c84d4-xr6b6" WorkloadEndpoint="10.0.0.160-k8s-calico--apiserver--68f79c84d4--xr6b6-eth0" Jan 17 12:25:09.402717 containerd[1462]: 2025-01-17 12:25:09.355 [INFO][4649] ipam/ipam_plugin.go 225: Calico CNI IPAM request count IPv4=1 IPv6=0 ContainerID="a17ce83b27e1ce8b503d39d1d540f008a21074a4f6f99a463ed780afe1ec4fb3" 
HandleID="k8s-pod-network.a17ce83b27e1ce8b503d39d1d540f008a21074a4f6f99a463ed780afe1ec4fb3" Workload="10.0.0.160-k8s-calico--apiserver--68f79c84d4--xr6b6-eth0" Jan 17 12:25:09.402717 containerd[1462]: 2025-01-17 12:25:09.363 [INFO][4649] ipam/ipam_plugin.go 265: Auto assigning IP ContainerID="a17ce83b27e1ce8b503d39d1d540f008a21074a4f6f99a463ed780afe1ec4fb3" HandleID="k8s-pod-network.a17ce83b27e1ce8b503d39d1d540f008a21074a4f6f99a463ed780afe1ec4fb3" Workload="10.0.0.160-k8s-calico--apiserver--68f79c84d4--xr6b6-eth0" assignArgs=ipam.AutoAssignArgs{Num4:1, Num6:0, HandleID:(*string)(0xc000309570), Attrs:map[string]string{"namespace":"calico-apiserver", "node":"10.0.0.160", "pod":"calico-apiserver-68f79c84d4-xr6b6", "timestamp":"2025-01-17 12:25:09.355416842 +0000 UTC"}, Hostname:"10.0.0.160", IPv4Pools:[]net.IPNet{}, IPv6Pools:[]net.IPNet{}, MaxBlocksPerHost:0, HostReservedAttrIPv4s:(*ipam.HostReservedAttr)(nil), HostReservedAttrIPv6s:(*ipam.HostReservedAttr)(nil), IntendedUse:"Workload"} Jan 17 12:25:09.402717 containerd[1462]: 2025-01-17 12:25:09.363 [INFO][4649] ipam/ipam_plugin.go 353: About to acquire host-wide IPAM lock. Jan 17 12:25:09.402717 containerd[1462]: 2025-01-17 12:25:09.363 [INFO][4649] ipam/ipam_plugin.go 368: Acquired host-wide IPAM lock. Jan 17 12:25:09.402717 containerd[1462]: 2025-01-17 12:25:09.363 [INFO][4649] ipam/ipam.go 107: Auto-assign 1 ipv4, 0 ipv6 addrs for host '10.0.0.160' Jan 17 12:25:09.402717 containerd[1462]: 2025-01-17 12:25:09.365 [INFO][4649] ipam/ipam.go 660: Looking up existing affinities for host handle="k8s-pod-network.a17ce83b27e1ce8b503d39d1d540f008a21074a4f6f99a463ed780afe1ec4fb3" host="10.0.0.160" Jan 17 12:25:09.402717 containerd[1462]: 2025-01-17 12:25:09.368 [INFO][4649] ipam/ipam.go 372: Looking up existing affinities for host host="10.0.0.160" Jan 17 12:25:09.402717 containerd[1462]: 2025-01-17 12:25:09.371 [INFO][4649] ipam/ipam.go 489: Trying affinity for 192.168.44.0/26 host="10.0.0.160" Jan 17 12:25:09.402717 containerd[1462]: 2025-01-17 12:25:09.372 [INFO][4649] ipam/ipam.go 155: Attempting to load block cidr=192.168.44.0/26 host="10.0.0.160" Jan 17 12:25:09.402717 containerd[1462]: 2025-01-17 12:25:09.374 [INFO][4649] ipam/ipam.go 232: Affinity is confirmed and block has been loaded cidr=192.168.44.0/26 host="10.0.0.160" Jan 17 12:25:09.402717 containerd[1462]: 2025-01-17 12:25:09.374 [INFO][4649] ipam/ipam.go 1180: Attempting to assign 1 addresses from block block=192.168.44.0/26 handle="k8s-pod-network.a17ce83b27e1ce8b503d39d1d540f008a21074a4f6f99a463ed780afe1ec4fb3" host="10.0.0.160" Jan 17 12:25:09.402717 containerd[1462]: 2025-01-17 12:25:09.375 [INFO][4649] ipam/ipam.go 1685: Creating new handle: k8s-pod-network.a17ce83b27e1ce8b503d39d1d540f008a21074a4f6f99a463ed780afe1ec4fb3 Jan 17 12:25:09.402717 containerd[1462]: 2025-01-17 12:25:09.379 [INFO][4649] ipam/ipam.go 1203: Writing block in order to claim IPs block=192.168.44.0/26 handle="k8s-pod-network.a17ce83b27e1ce8b503d39d1d540f008a21074a4f6f99a463ed780afe1ec4fb3" host="10.0.0.160" Jan 17 12:25:09.402717 containerd[1462]: 2025-01-17 12:25:09.383 [INFO][4649] ipam/ipam.go 1216: Successfully claimed IPs: [192.168.44.7/26] block=192.168.44.0/26 handle="k8s-pod-network.a17ce83b27e1ce8b503d39d1d540f008a21074a4f6f99a463ed780afe1ec4fb3" host="10.0.0.160" Jan 17 12:25:09.402717 containerd[1462]: 2025-01-17 12:25:09.384 [INFO][4649] ipam/ipam.go 847: Auto-assigned 1 out of 1 IPv4s: [192.168.44.7/26] 
handle="k8s-pod-network.a17ce83b27e1ce8b503d39d1d540f008a21074a4f6f99a463ed780afe1ec4fb3" host="10.0.0.160" Jan 17 12:25:09.402717 containerd[1462]: 2025-01-17 12:25:09.384 [INFO][4649] ipam/ipam_plugin.go 374: Released host-wide IPAM lock. Jan 17 12:25:09.402717 containerd[1462]: 2025-01-17 12:25:09.384 [INFO][4649] ipam/ipam_plugin.go 283: Calico CNI IPAM assigned addresses IPv4=[192.168.44.7/26] IPv6=[] ContainerID="a17ce83b27e1ce8b503d39d1d540f008a21074a4f6f99a463ed780afe1ec4fb3" HandleID="k8s-pod-network.a17ce83b27e1ce8b503d39d1d540f008a21074a4f6f99a463ed780afe1ec4fb3" Workload="10.0.0.160-k8s-calico--apiserver--68f79c84d4--xr6b6-eth0" Jan 17 12:25:09.403250 containerd[1462]: 2025-01-17 12:25:09.386 [INFO][4633] cni-plugin/k8s.go 386: Populated endpoint ContainerID="a17ce83b27e1ce8b503d39d1d540f008a21074a4f6f99a463ed780afe1ec4fb3" Namespace="calico-apiserver" Pod="calico-apiserver-68f79c84d4-xr6b6" WorkloadEndpoint="10.0.0.160-k8s-calico--apiserver--68f79c84d4--xr6b6-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"10.0.0.160-k8s-calico--apiserver--68f79c84d4--xr6b6-eth0", GenerateName:"calico-apiserver-68f79c84d4-", Namespace:"calico-apiserver", SelfLink:"", UID:"d48ea218-f377-4f0d-a97e-292b17df6d89", ResourceVersion:"1152", Generation:0, CreationTimestamp:time.Date(2025, time.January, 17, 12, 23, 56, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"apiserver":"true", "app.kubernetes.io/name":"calico-apiserver", "k8s-app":"calico-apiserver", "pod-template-hash":"68f79c84d4", "projectcalico.org/namespace":"calico-apiserver", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"calico-apiserver"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"10.0.0.160", ContainerID:"", Pod:"calico-apiserver-68f79c84d4-xr6b6", Endpoint:"eth0", ServiceAccountName:"calico-apiserver", IPNetworks:[]string{"192.168.44.7/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-apiserver", "ksa.calico-apiserver.calico-apiserver"}, InterfaceName:"cali59f5e5d8307", MAC:"", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil)}} Jan 17 12:25:09.403250 containerd[1462]: 2025-01-17 12:25:09.386 [INFO][4633] cni-plugin/k8s.go 387: Calico CNI using IPs: [192.168.44.7/32] ContainerID="a17ce83b27e1ce8b503d39d1d540f008a21074a4f6f99a463ed780afe1ec4fb3" Namespace="calico-apiserver" Pod="calico-apiserver-68f79c84d4-xr6b6" WorkloadEndpoint="10.0.0.160-k8s-calico--apiserver--68f79c84d4--xr6b6-eth0" Jan 17 12:25:09.403250 containerd[1462]: 2025-01-17 12:25:09.386 [INFO][4633] cni-plugin/dataplane_linux.go 69: Setting the host side veth name to cali59f5e5d8307 ContainerID="a17ce83b27e1ce8b503d39d1d540f008a21074a4f6f99a463ed780afe1ec4fb3" Namespace="calico-apiserver" Pod="calico-apiserver-68f79c84d4-xr6b6" WorkloadEndpoint="10.0.0.160-k8s-calico--apiserver--68f79c84d4--xr6b6-eth0" Jan 17 12:25:09.403250 containerd[1462]: 2025-01-17 12:25:09.390 [INFO][4633] cni-plugin/dataplane_linux.go 508: Disabling IPv4 forwarding ContainerID="a17ce83b27e1ce8b503d39d1d540f008a21074a4f6f99a463ed780afe1ec4fb3" Namespace="calico-apiserver" Pod="calico-apiserver-68f79c84d4-xr6b6" 
WorkloadEndpoint="10.0.0.160-k8s-calico--apiserver--68f79c84d4--xr6b6-eth0" Jan 17 12:25:09.403250 containerd[1462]: 2025-01-17 12:25:09.391 [INFO][4633] cni-plugin/k8s.go 414: Added Mac, interface name, and active container ID to endpoint ContainerID="a17ce83b27e1ce8b503d39d1d540f008a21074a4f6f99a463ed780afe1ec4fb3" Namespace="calico-apiserver" Pod="calico-apiserver-68f79c84d4-xr6b6" WorkloadEndpoint="10.0.0.160-k8s-calico--apiserver--68f79c84d4--xr6b6-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"10.0.0.160-k8s-calico--apiserver--68f79c84d4--xr6b6-eth0", GenerateName:"calico-apiserver-68f79c84d4-", Namespace:"calico-apiserver", SelfLink:"", UID:"d48ea218-f377-4f0d-a97e-292b17df6d89", ResourceVersion:"1152", Generation:0, CreationTimestamp:time.Date(2025, time.January, 17, 12, 23, 56, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"apiserver":"true", "app.kubernetes.io/name":"calico-apiserver", "k8s-app":"calico-apiserver", "pod-template-hash":"68f79c84d4", "projectcalico.org/namespace":"calico-apiserver", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"calico-apiserver"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"10.0.0.160", ContainerID:"a17ce83b27e1ce8b503d39d1d540f008a21074a4f6f99a463ed780afe1ec4fb3", Pod:"calico-apiserver-68f79c84d4-xr6b6", Endpoint:"eth0", ServiceAccountName:"calico-apiserver", IPNetworks:[]string{"192.168.44.7/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-apiserver", "ksa.calico-apiserver.calico-apiserver"}, InterfaceName:"cali59f5e5d8307", MAC:"36:2c:ae:18:e6:ad", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil)}} Jan 17 12:25:09.403250 containerd[1462]: 2025-01-17 12:25:09.398 [INFO][4633] cni-plugin/k8s.go 500: Wrote updated endpoint to datastore ContainerID="a17ce83b27e1ce8b503d39d1d540f008a21074a4f6f99a463ed780afe1ec4fb3" Namespace="calico-apiserver" Pod="calico-apiserver-68f79c84d4-xr6b6" WorkloadEndpoint="10.0.0.160-k8s-calico--apiserver--68f79c84d4--xr6b6-eth0" Jan 17 12:25:09.427127 containerd[1462]: time="2025-01-17T12:25:09.423762814Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Jan 17 12:25:09.427127 containerd[1462]: time="2025-01-17T12:25:09.424165210Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Jan 17 12:25:09.427127 containerd[1462]: time="2025-01-17T12:25:09.424355186Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Jan 17 12:25:09.427127 containerd[1462]: time="2025-01-17T12:25:09.424443502Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Jan 17 12:25:09.424707 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount1257319510.mount: Deactivated successfully. Jan 17 12:25:09.424816 systemd[1]: run-netns-cni\x2dcedb32b6\x2d4caa\x2dc5b7\x2d22e1\x2d4a1e14aeaeda.mount: Deactivated successfully. 
Jan 17 12:25:09.450837 systemd[1]: Started cri-containerd-a17ce83b27e1ce8b503d39d1d540f008a21074a4f6f99a463ed780afe1ec4fb3.scope - libcontainer container a17ce83b27e1ce8b503d39d1d540f008a21074a4f6f99a463ed780afe1ec4fb3. Jan 17 12:25:09.462666 systemd-resolved[1334]: Failed to determine the local hostname and LLMNR/mDNS names, ignoring: No such device or address Jan 17 12:25:09.484890 containerd[1462]: time="2025-01-17T12:25:09.484859649Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-apiserver-68f79c84d4-xr6b6,Uid:d48ea218-f377-4f0d-a97e-292b17df6d89,Namespace:calico-apiserver,Attempt:1,} returns sandbox id \"a17ce83b27e1ce8b503d39d1d540f008a21074a4f6f99a463ed780afe1ec4fb3\"" Jan 17 12:25:09.700881 systemd-networkd[1390]: cali9c3c618bae5: Gained IPv6LL Jan 17 12:25:10.088591 containerd[1462]: time="2025-01-17T12:25:10.088532323Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/nginx:latest\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jan 17 12:25:10.089216 containerd[1462]: time="2025-01-17T12:25:10.089165552Z" level=info msg="stop pulling image ghcr.io/flatcar/nginx:latest: active requests=0, bytes read=71036018" Jan 17 12:25:10.090271 containerd[1462]: time="2025-01-17T12:25:10.090240018Z" level=info msg="ImageCreate event name:\"sha256:29ef6eaebfc53650f3a4609edbf9d35e866f56b2c5e01d32d93439031b300f0b\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jan 17 12:25:10.092684 containerd[1462]: time="2025-01-17T12:25:10.092646184Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/nginx@sha256:eca1d1ff18c7af45f86b7e0b572090f563a676ddca3da2ecff678390366335ad\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jan 17 12:25:10.093452 containerd[1462]: time="2025-01-17T12:25:10.093409807Z" level=info msg="Pulled image \"ghcr.io/flatcar/nginx:latest\" with image id \"sha256:29ef6eaebfc53650f3a4609edbf9d35e866f56b2c5e01d32d93439031b300f0b\", repo tag \"ghcr.io/flatcar/nginx:latest\", repo digest \"ghcr.io/flatcar/nginx@sha256:eca1d1ff18c7af45f86b7e0b572090f563a676ddca3da2ecff678390366335ad\", size \"71035896\" in 5.630398122s" Jan 17 12:25:10.093489 containerd[1462]: time="2025-01-17T12:25:10.093452858Z" level=info msg="PullImage \"ghcr.io/flatcar/nginx:latest\" returns image reference \"sha256:29ef6eaebfc53650f3a4609edbf9d35e866f56b2c5e01d32d93439031b300f0b\"" Jan 17 12:25:10.094082 containerd[1462]: time="2025-01-17T12:25:10.093949881Z" level=info msg="PullImage \"registry.k8s.io/coredns/coredns:v1.11.1\"" Jan 17 12:25:10.094764 containerd[1462]: time="2025-01-17T12:25:10.094739293Z" level=info msg="CreateContainer within sandbox \"ff5741c06f0144fa618883b93d2c7f79c46d887c06a03018be9da38513b4908b\" for container &ContainerMetadata{Name:nginx,Attempt:0,}" Jan 17 12:25:10.106441 containerd[1462]: time="2025-01-17T12:25:10.106406220Z" level=info msg="CreateContainer within sandbox \"ff5741c06f0144fa618883b93d2c7f79c46d887c06a03018be9da38513b4908b\" for &ContainerMetadata{Name:nginx,Attempt:0,} returns container id \"58c39edd6ff68fb0f65e803f76070356850beda0d0351e4f6889df82befb8b9a\"" Jan 17 12:25:10.106789 containerd[1462]: time="2025-01-17T12:25:10.106769181Z" level=info msg="StartContainer for \"58c39edd6ff68fb0f65e803f76070356850beda0d0351e4f6889df82befb8b9a\"" Jan 17 12:25:10.132829 systemd[1]: Started cri-containerd-58c39edd6ff68fb0f65e803f76070356850beda0d0351e4f6889df82befb8b9a.scope - libcontainer container 58c39edd6ff68fb0f65e803f76070356850beda0d0351e4f6889df82befb8b9a. 
Jan 17 12:25:10.204299 containerd[1462]: time="2025-01-17T12:25:10.204249287Z" level=info msg="StartContainer for \"58c39edd6ff68fb0f65e803f76070356850beda0d0351e4f6889df82befb8b9a\" returns successfully" Jan 17 12:25:10.397744 kubelet[1774]: E0117 12:25:10.397617 1774 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Jan 17 12:25:10.622897 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount1239247973.mount: Deactivated successfully. Jan 17 12:25:10.723920 systemd-networkd[1390]: cali59f5e5d8307: Gained IPv6LL Jan 17 12:25:11.194512 kubelet[1774]: E0117 12:25:11.194375 1774 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jan 17 12:25:11.398529 kubelet[1774]: E0117 12:25:11.398484 1774 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Jan 17 12:25:11.650984 containerd[1462]: time="2025-01-17T12:25:11.650930156Z" level=info msg="ImageCreate event name:\"registry.k8s.io/coredns/coredns:v1.11.1\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jan 17 12:25:11.651819 containerd[1462]: time="2025-01-17T12:25:11.651791081Z" level=info msg="stop pulling image registry.k8s.io/coredns/coredns:v1.11.1: active requests=0, bytes read=18185761" Jan 17 12:25:11.653073 containerd[1462]: time="2025-01-17T12:25:11.653047860Z" level=info msg="ImageCreate event name:\"sha256:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jan 17 12:25:11.655620 containerd[1462]: time="2025-01-17T12:25:11.655595752Z" level=info msg="ImageCreate event name:\"registry.k8s.io/coredns/coredns@sha256:1eeb4c7316bacb1d4c8ead65571cd92dd21e27359f0d4917f1a5822a73b75db1\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jan 17 12:25:11.656558 containerd[1462]: time="2025-01-17T12:25:11.656529885Z" level=info msg="Pulled image \"registry.k8s.io/coredns/coredns:v1.11.1\" with image id \"sha256:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4\", repo tag \"registry.k8s.io/coredns/coredns:v1.11.1\", repo digest \"registry.k8s.io/coredns/coredns@sha256:1eeb4c7316bacb1d4c8ead65571cd92dd21e27359f0d4917f1a5822a73b75db1\", size \"18182961\" in 1.562553534s" Jan 17 12:25:11.656608 containerd[1462]: time="2025-01-17T12:25:11.656562005Z" level=info msg="PullImage \"registry.k8s.io/coredns/coredns:v1.11.1\" returns image reference \"sha256:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4\"" Jan 17 12:25:11.657059 containerd[1462]: time="2025-01-17T12:25:11.656993145Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/csi:v3.29.1\"" Jan 17 12:25:11.658171 containerd[1462]: time="2025-01-17T12:25:11.658147731Z" level=info msg="CreateContainer within sandbox \"3f3df8bece8cb989ff367ee9bf7d4b7d8a16bd5dd32a08a3c9d496825a27df27\" for container &ContainerMetadata{Name:coredns,Attempt:0,}" Jan 17 12:25:11.672679 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount265132084.mount: Deactivated successfully. 
Jan 17 12:25:11.673108 containerd[1462]: time="2025-01-17T12:25:11.672682019Z" level=info msg="CreateContainer within sandbox \"3f3df8bece8cb989ff367ee9bf7d4b7d8a16bd5dd32a08a3c9d496825a27df27\" for &ContainerMetadata{Name:coredns,Attempt:0,} returns container id \"e164491599ce490d4309d70f60563e1d0414685fe7992a0d2e732853c685c644\"" Jan 17 12:25:11.673208 containerd[1462]: time="2025-01-17T12:25:11.673172659Z" level=info msg="StartContainer for \"e164491599ce490d4309d70f60563e1d0414685fe7992a0d2e732853c685c644\"" Jan 17 12:25:11.697857 systemd[1]: Started cri-containerd-e164491599ce490d4309d70f60563e1d0414685fe7992a0d2e732853c685c644.scope - libcontainer container e164491599ce490d4309d70f60563e1d0414685fe7992a0d2e732853c685c644. Jan 17 12:25:11.751283 containerd[1462]: time="2025-01-17T12:25:11.751210157Z" level=info msg="StartContainer for \"e164491599ce490d4309d70f60563e1d0414685fe7992a0d2e732853c685c644\" returns successfully" Jan 17 12:25:12.368142 kubelet[1774]: E0117 12:25:12.368069 1774 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jan 17 12:25:12.376303 kubelet[1774]: I0117 12:25:12.376257 1774 pod_startup_latency_tracker.go:102] "Observed pod startup duration" pod="kube-system/coredns-76f75df574-q4bsh" podStartSLOduration=81.213831672 podStartE2EDuration="1m28.376221507s" podCreationTimestamp="2025-01-17 12:23:44 +0000 UTC" firstStartedPulling="2025-01-17 12:25:04.494467024 +0000 UTC m=+67.758422823" lastFinishedPulling="2025-01-17 12:25:11.656856849 +0000 UTC m=+74.920812658" observedRunningTime="2025-01-17 12:25:12.375866551 +0000 UTC m=+75.639822350" watchObservedRunningTime="2025-01-17 12:25:12.376221507 +0000 UTC m=+75.640177306" Jan 17 12:25:12.376523 kubelet[1774]: I0117 12:25:12.376339 1774 pod_startup_latency_tracker.go:102] "Observed pod startup duration" pod="default/nginx-deployment-6d5f899847-869v5" podStartSLOduration=51.744851932 podStartE2EDuration="57.376319602s" podCreationTimestamp="2025-01-17 12:24:15 +0000 UTC" firstStartedPulling="2025-01-17 12:25:04.462276445 +0000 UTC m=+67.726232244" lastFinishedPulling="2025-01-17 12:25:10.093744115 +0000 UTC m=+73.357699914" observedRunningTime="2025-01-17 12:25:10.368629994 +0000 UTC m=+73.632585793" watchObservedRunningTime="2025-01-17 12:25:12.376319602 +0000 UTC m=+75.640275401" Jan 17 12:25:12.398866 kubelet[1774]: E0117 12:25:12.398807 1774 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Jan 17 12:25:13.251471 containerd[1462]: time="2025-01-17T12:25:13.251406939Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/csi:v3.29.1\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jan 17 12:25:13.252081 containerd[1462]: time="2025-01-17T12:25:13.252033224Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/csi:v3.29.1: active requests=0, bytes read=7902632" Jan 17 12:25:13.253086 containerd[1462]: time="2025-01-17T12:25:13.253050543Z" level=info msg="ImageCreate event name:\"sha256:bda8c42e04758c4f061339e213f50ccdc7502c4176fbf631aa12357e62b63540\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jan 17 12:25:13.255255 containerd[1462]: time="2025-01-17T12:25:13.255223441Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/csi@sha256:eaa7e01fb16b603c155a67b81f16992281db7f831684c7b2081d3434587a7ff3\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jan 17 
12:25:13.255785 containerd[1462]: time="2025-01-17T12:25:13.255751902Z" level=info msg="Pulled image \"ghcr.io/flatcar/calico/csi:v3.29.1\" with image id \"sha256:bda8c42e04758c4f061339e213f50ccdc7502c4176fbf631aa12357e62b63540\", repo tag \"ghcr.io/flatcar/calico/csi:v3.29.1\", repo digest \"ghcr.io/flatcar/calico/csi@sha256:eaa7e01fb16b603c155a67b81f16992281db7f831684c7b2081d3434587a7ff3\", size \"9395716\" in 1.598734372s" Jan 17 12:25:13.255785 containerd[1462]: time="2025-01-17T12:25:13.255780937Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/csi:v3.29.1\" returns image reference \"sha256:bda8c42e04758c4f061339e213f50ccdc7502c4176fbf631aa12357e62b63540\"" Jan 17 12:25:13.256395 containerd[1462]: time="2025-01-17T12:25:13.256292427Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/apiserver:v3.29.1\"" Jan 17 12:25:13.257531 containerd[1462]: time="2025-01-17T12:25:13.257502398Z" level=info msg="CreateContainer within sandbox \"405a158c85be99c863d2dfdde9542760680bf737e6a4e11ad97fed9c1857a9b9\" for container &ContainerMetadata{Name:calico-csi,Attempt:0,}" Jan 17 12:25:13.273876 containerd[1462]: time="2025-01-17T12:25:13.273839105Z" level=info msg="CreateContainer within sandbox \"405a158c85be99c863d2dfdde9542760680bf737e6a4e11ad97fed9c1857a9b9\" for &ContainerMetadata{Name:calico-csi,Attempt:0,} returns container id \"9dd9f3d6c74fea1ee180f317b57028f07b947d1ea065fd8fcd7c981a5a30fe70\"" Jan 17 12:25:13.274283 containerd[1462]: time="2025-01-17T12:25:13.274259413Z" level=info msg="StartContainer for \"9dd9f3d6c74fea1ee180f317b57028f07b947d1ea065fd8fcd7c981a5a30fe70\"" Jan 17 12:25:13.312996 systemd[1]: Started cri-containerd-9dd9f3d6c74fea1ee180f317b57028f07b947d1ea065fd8fcd7c981a5a30fe70.scope - libcontainer container 9dd9f3d6c74fea1ee180f317b57028f07b947d1ea065fd8fcd7c981a5a30fe70. Jan 17 12:25:13.382348 containerd[1462]: time="2025-01-17T12:25:13.382278300Z" level=info msg="StartContainer for \"9dd9f3d6c74fea1ee180f317b57028f07b947d1ea065fd8fcd7c981a5a30fe70\" returns successfully" Jan 17 12:25:13.384724 kubelet[1774]: E0117 12:25:13.384674 1774 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jan 17 12:25:13.399758 kubelet[1774]: E0117 12:25:13.399689 1774 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Jan 17 12:25:14.386321 kubelet[1774]: E0117 12:25:14.386277 1774 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jan 17 12:25:14.400257 kubelet[1774]: E0117 12:25:14.400197 1774 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Jan 17 12:25:15.194523 kubelet[1774]: I0117 12:25:15.194488 1774 topology_manager.go:215] "Topology Admit Handler" podUID="0f399b97-3681-49cd-b14d-9783777091f7" podNamespace="default" podName="nfs-server-provisioner-0" Jan 17 12:25:15.200892 systemd[1]: Created slice kubepods-besteffort-pod0f399b97_3681_49cd_b14d_9783777091f7.slice - libcontainer container kubepods-besteffort-pod0f399b97_3681_49cd_b14d_9783777091f7.slice. 
Jan 17 12:25:15.288104 kubelet[1774]: I0117 12:25:15.288058 1774 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"data\" (UniqueName: \"kubernetes.io/empty-dir/0f399b97-3681-49cd-b14d-9783777091f7-data\") pod \"nfs-server-provisioner-0\" (UID: \"0f399b97-3681-49cd-b14d-9783777091f7\") " pod="default/nfs-server-provisioner-0" Jan 17 12:25:15.288236 kubelet[1774]: I0117 12:25:15.288129 1774 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-n9qrz\" (UniqueName: \"kubernetes.io/projected/0f399b97-3681-49cd-b14d-9783777091f7-kube-api-access-n9qrz\") pod \"nfs-server-provisioner-0\" (UID: \"0f399b97-3681-49cd-b14d-9783777091f7\") " pod="default/nfs-server-provisioner-0" Jan 17 12:25:15.401020 kubelet[1774]: E0117 12:25:15.400984 1774 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Jan 17 12:25:15.401790 containerd[1462]: time="2025-01-17T12:25:15.401751728Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/apiserver:v3.29.1\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jan 17 12:25:15.402422 containerd[1462]: time="2025-01-17T12:25:15.402361322Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/apiserver:v3.29.1: active requests=0, bytes read=42001404" Jan 17 12:25:15.403372 containerd[1462]: time="2025-01-17T12:25:15.403339618Z" level=info msg="ImageCreate event name:\"sha256:421726ace5ed13894f7edf594dd3a462947aedc13d0f69d08525d7369477fb70\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jan 17 12:25:15.405523 containerd[1462]: time="2025-01-17T12:25:15.405488761Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/apiserver@sha256:b8c43e264fe52e0c327b0bf3ac882a0224b33bdd7f4ff58a74242da7d9b00486\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jan 17 12:25:15.406196 containerd[1462]: time="2025-01-17T12:25:15.406157466Z" level=info msg="Pulled image \"ghcr.io/flatcar/calico/apiserver:v3.29.1\" with image id \"sha256:421726ace5ed13894f7edf594dd3a462947aedc13d0f69d08525d7369477fb70\", repo tag \"ghcr.io/flatcar/calico/apiserver:v3.29.1\", repo digest \"ghcr.io/flatcar/calico/apiserver@sha256:b8c43e264fe52e0c327b0bf3ac882a0224b33bdd7f4ff58a74242da7d9b00486\", size \"43494504\" in 2.149836686s" Jan 17 12:25:15.406232 containerd[1462]: time="2025-01-17T12:25:15.406193984Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/apiserver:v3.29.1\" returns image reference \"sha256:421726ace5ed13894f7edf594dd3a462947aedc13d0f69d08525d7369477fb70\"" Jan 17 12:25:15.406729 containerd[1462]: time="2025-01-17T12:25:15.406689995Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/kube-controllers:v3.29.1\"" Jan 17 12:25:15.407773 containerd[1462]: time="2025-01-17T12:25:15.407742210Z" level=info msg="CreateContainer within sandbox \"705b9e1c6b5bbf8ef7aa026983d5709d6aab3efa37a4109de5e3084938d7c205\" for container &ContainerMetadata{Name:calico-apiserver,Attempt:0,}" Jan 17 12:25:15.419341 containerd[1462]: time="2025-01-17T12:25:15.419301279Z" level=info msg="CreateContainer within sandbox \"705b9e1c6b5bbf8ef7aa026983d5709d6aab3efa37a4109de5e3084938d7c205\" for &ContainerMetadata{Name:calico-apiserver,Attempt:0,} returns container id \"ba60ed14026d12642bb8b3f7cf4bf8d0cce7531c34d277203f69a311b45d1b09\"" Jan 17 12:25:15.419685 containerd[1462]: time="2025-01-17T12:25:15.419659061Z" level=info msg="StartContainer for 
\"ba60ed14026d12642bb8b3f7cf4bf8d0cce7531c34d277203f69a311b45d1b09\"" Jan 17 12:25:15.451835 systemd[1]: Started cri-containerd-ba60ed14026d12642bb8b3f7cf4bf8d0cce7531c34d277203f69a311b45d1b09.scope - libcontainer container ba60ed14026d12642bb8b3f7cf4bf8d0cce7531c34d277203f69a311b45d1b09. Jan 17 12:25:15.490266 containerd[1462]: time="2025-01-17T12:25:15.490225995Z" level=info msg="StartContainer for \"ba60ed14026d12642bb8b3f7cf4bf8d0cce7531c34d277203f69a311b45d1b09\" returns successfully" Jan 17 12:25:15.504473 containerd[1462]: time="2025-01-17T12:25:15.504427673Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:nfs-server-provisioner-0,Uid:0f399b97-3681-49cd-b14d-9783777091f7,Namespace:default,Attempt:0,}" Jan 17 12:25:15.627973 systemd-networkd[1390]: cali60e51b789ff: Link UP Jan 17 12:25:15.628163 systemd-networkd[1390]: cali60e51b789ff: Gained carrier Jan 17 12:25:15.641735 containerd[1462]: 2025-01-17 12:25:15.560 [INFO][4979] cni-plugin/plugin.go 325: Calico CNI found existing endpoint: &{{WorkloadEndpoint projectcalico.org/v3} {10.0.0.160-k8s-nfs--server--provisioner--0-eth0 nfs-server-provisioner- default 0f399b97-3681-49cd-b14d-9783777091f7 1224 0 2025-01-17 12:25:15 +0000 UTC map[app:nfs-server-provisioner apps.kubernetes.io/pod-index:0 chart:nfs-server-provisioner-1.8.0 controller-revision-hash:nfs-server-provisioner-d5cbb7f57 heritage:Helm projectcalico.org/namespace:default projectcalico.org/orchestrator:k8s projectcalico.org/serviceaccount:nfs-server-provisioner release:nfs-server-provisioner statefulset.kubernetes.io/pod-name:nfs-server-provisioner-0] map[] [] [] []} {k8s 10.0.0.160 nfs-server-provisioner-0 eth0 nfs-server-provisioner [] [] [kns.default ksa.default.nfs-server-provisioner] cali60e51b789ff [{nfs TCP 2049 0 } {nfs-udp UDP 2049 0 } {nlockmgr TCP 32803 0 } {nlockmgr-udp UDP 32803 0 } {mountd TCP 20048 0 } {mountd-udp UDP 20048 0 } {rquotad TCP 875 0 } {rquotad-udp UDP 875 0 } {rpcbind TCP 111 0 } {rpcbind-udp UDP 111 0 } {statd TCP 662 0 } {statd-udp UDP 662 0 }] []}} ContainerID="1d98e7a4e3f09c48f89ec764395331851db6b80ef8ee0062dcb6afd92e236eae" Namespace="default" Pod="nfs-server-provisioner-0" WorkloadEndpoint="10.0.0.160-k8s-nfs--server--provisioner--0-" Jan 17 12:25:15.641735 containerd[1462]: 2025-01-17 12:25:15.560 [INFO][4979] cni-plugin/k8s.go 77: Extracted identifiers for CmdAddK8s ContainerID="1d98e7a4e3f09c48f89ec764395331851db6b80ef8ee0062dcb6afd92e236eae" Namespace="default" Pod="nfs-server-provisioner-0" WorkloadEndpoint="10.0.0.160-k8s-nfs--server--provisioner--0-eth0" Jan 17 12:25:15.641735 containerd[1462]: 2025-01-17 12:25:15.587 [INFO][4993] ipam/ipam_plugin.go 225: Calico CNI IPAM request count IPv4=1 IPv6=0 ContainerID="1d98e7a4e3f09c48f89ec764395331851db6b80ef8ee0062dcb6afd92e236eae" HandleID="k8s-pod-network.1d98e7a4e3f09c48f89ec764395331851db6b80ef8ee0062dcb6afd92e236eae" Workload="10.0.0.160-k8s-nfs--server--provisioner--0-eth0" Jan 17 12:25:15.641735 containerd[1462]: 2025-01-17 12:25:15.596 [INFO][4993] ipam/ipam_plugin.go 265: Auto assigning IP ContainerID="1d98e7a4e3f09c48f89ec764395331851db6b80ef8ee0062dcb6afd92e236eae" HandleID="k8s-pod-network.1d98e7a4e3f09c48f89ec764395331851db6b80ef8ee0062dcb6afd92e236eae" Workload="10.0.0.160-k8s-nfs--server--provisioner--0-eth0" assignArgs=ipam.AutoAssignArgs{Num4:1, Num6:0, HandleID:(*string)(0xc0002dcaf0), Attrs:map[string]string{"namespace":"default", "node":"10.0.0.160", "pod":"nfs-server-provisioner-0", "timestamp":"2025-01-17 12:25:15.587930922 +0000 UTC"}, 
Hostname:"10.0.0.160", IPv4Pools:[]net.IPNet{}, IPv6Pools:[]net.IPNet{}, MaxBlocksPerHost:0, HostReservedAttrIPv4s:(*ipam.HostReservedAttr)(nil), HostReservedAttrIPv6s:(*ipam.HostReservedAttr)(nil), IntendedUse:"Workload"} Jan 17 12:25:15.641735 containerd[1462]: 2025-01-17 12:25:15.596 [INFO][4993] ipam/ipam_plugin.go 353: About to acquire host-wide IPAM lock. Jan 17 12:25:15.641735 containerd[1462]: 2025-01-17 12:25:15.596 [INFO][4993] ipam/ipam_plugin.go 368: Acquired host-wide IPAM lock. Jan 17 12:25:15.641735 containerd[1462]: 2025-01-17 12:25:15.596 [INFO][4993] ipam/ipam.go 107: Auto-assign 1 ipv4, 0 ipv6 addrs for host '10.0.0.160' Jan 17 12:25:15.641735 containerd[1462]: 2025-01-17 12:25:15.598 [INFO][4993] ipam/ipam.go 660: Looking up existing affinities for host handle="k8s-pod-network.1d98e7a4e3f09c48f89ec764395331851db6b80ef8ee0062dcb6afd92e236eae" host="10.0.0.160" Jan 17 12:25:15.641735 containerd[1462]: 2025-01-17 12:25:15.602 [INFO][4993] ipam/ipam.go 372: Looking up existing affinities for host host="10.0.0.160" Jan 17 12:25:15.641735 containerd[1462]: 2025-01-17 12:25:15.606 [INFO][4993] ipam/ipam.go 489: Trying affinity for 192.168.44.0/26 host="10.0.0.160" Jan 17 12:25:15.641735 containerd[1462]: 2025-01-17 12:25:15.608 [INFO][4993] ipam/ipam.go 155: Attempting to load block cidr=192.168.44.0/26 host="10.0.0.160" Jan 17 12:25:15.641735 containerd[1462]: 2025-01-17 12:25:15.610 [INFO][4993] ipam/ipam.go 232: Affinity is confirmed and block has been loaded cidr=192.168.44.0/26 host="10.0.0.160" Jan 17 12:25:15.641735 containerd[1462]: 2025-01-17 12:25:15.610 [INFO][4993] ipam/ipam.go 1180: Attempting to assign 1 addresses from block block=192.168.44.0/26 handle="k8s-pod-network.1d98e7a4e3f09c48f89ec764395331851db6b80ef8ee0062dcb6afd92e236eae" host="10.0.0.160" Jan 17 12:25:15.641735 containerd[1462]: 2025-01-17 12:25:15.611 [INFO][4993] ipam/ipam.go 1685: Creating new handle: k8s-pod-network.1d98e7a4e3f09c48f89ec764395331851db6b80ef8ee0062dcb6afd92e236eae Jan 17 12:25:15.641735 containerd[1462]: 2025-01-17 12:25:15.614 [INFO][4993] ipam/ipam.go 1203: Writing block in order to claim IPs block=192.168.44.0/26 handle="k8s-pod-network.1d98e7a4e3f09c48f89ec764395331851db6b80ef8ee0062dcb6afd92e236eae" host="10.0.0.160" Jan 17 12:25:15.641735 containerd[1462]: 2025-01-17 12:25:15.622 [INFO][4993] ipam/ipam.go 1216: Successfully claimed IPs: [192.168.44.8/26] block=192.168.44.0/26 handle="k8s-pod-network.1d98e7a4e3f09c48f89ec764395331851db6b80ef8ee0062dcb6afd92e236eae" host="10.0.0.160" Jan 17 12:25:15.641735 containerd[1462]: 2025-01-17 12:25:15.622 [INFO][4993] ipam/ipam.go 847: Auto-assigned 1 out of 1 IPv4s: [192.168.44.8/26] handle="k8s-pod-network.1d98e7a4e3f09c48f89ec764395331851db6b80ef8ee0062dcb6afd92e236eae" host="10.0.0.160" Jan 17 12:25:15.641735 containerd[1462]: 2025-01-17 12:25:15.622 [INFO][4993] ipam/ipam_plugin.go 374: Released host-wide IPAM lock. 
Jan 17 12:25:15.641735 containerd[1462]: 2025-01-17 12:25:15.622 [INFO][4993] ipam/ipam_plugin.go 283: Calico CNI IPAM assigned addresses IPv4=[192.168.44.8/26] IPv6=[] ContainerID="1d98e7a4e3f09c48f89ec764395331851db6b80ef8ee0062dcb6afd92e236eae" HandleID="k8s-pod-network.1d98e7a4e3f09c48f89ec764395331851db6b80ef8ee0062dcb6afd92e236eae" Workload="10.0.0.160-k8s-nfs--server--provisioner--0-eth0" Jan 17 12:25:15.643108 containerd[1462]: 2025-01-17 12:25:15.624 [INFO][4979] cni-plugin/k8s.go 386: Populated endpoint ContainerID="1d98e7a4e3f09c48f89ec764395331851db6b80ef8ee0062dcb6afd92e236eae" Namespace="default" Pod="nfs-server-provisioner-0" WorkloadEndpoint="10.0.0.160-k8s-nfs--server--provisioner--0-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"10.0.0.160-k8s-nfs--server--provisioner--0-eth0", GenerateName:"nfs-server-provisioner-", Namespace:"default", SelfLink:"", UID:"0f399b97-3681-49cd-b14d-9783777091f7", ResourceVersion:"1224", Generation:0, CreationTimestamp:time.Date(2025, time.January, 17, 12, 25, 15, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"app":"nfs-server-provisioner", "apps.kubernetes.io/pod-index":"0", "chart":"nfs-server-provisioner-1.8.0", "controller-revision-hash":"nfs-server-provisioner-d5cbb7f57", "heritage":"Helm", "projectcalico.org/namespace":"default", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"nfs-server-provisioner", "release":"nfs-server-provisioner", "statefulset.kubernetes.io/pod-name":"nfs-server-provisioner-0"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"10.0.0.160", ContainerID:"", Pod:"nfs-server-provisioner-0", Endpoint:"eth0", ServiceAccountName:"nfs-server-provisioner", IPNetworks:[]string{"192.168.44.8/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.default", "ksa.default.nfs-server-provisioner"}, InterfaceName:"cali60e51b789ff", MAC:"", Ports:[]v3.WorkloadEndpointPort{v3.WorkloadEndpointPort{Name:"nfs", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x801, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"nfs-udp", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"UDP"}, Port:0x801, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"nlockmgr", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x8023, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"nlockmgr-udp", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"UDP"}, Port:0x8023, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"mountd", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x4e50, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"mountd-udp", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"UDP"}, Port:0x4e50, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"rquotad", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x36b, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"rquotad-udp", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"UDP"}, Port:0x36b, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"rpcbind", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x6f, HostPort:0x0, HostIP:""}, 
v3.WorkloadEndpointPort{Name:"rpcbind-udp", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"UDP"}, Port:0x6f, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"statd", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x296, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"statd-udp", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"UDP"}, Port:0x296, HostPort:0x0, HostIP:""}}, AllowSpoofedSourcePrefixes:[]string(nil)}} Jan 17 12:25:15.643108 containerd[1462]: 2025-01-17 12:25:15.625 [INFO][4979] cni-plugin/k8s.go 387: Calico CNI using IPs: [192.168.44.8/32] ContainerID="1d98e7a4e3f09c48f89ec764395331851db6b80ef8ee0062dcb6afd92e236eae" Namespace="default" Pod="nfs-server-provisioner-0" WorkloadEndpoint="10.0.0.160-k8s-nfs--server--provisioner--0-eth0" Jan 17 12:25:15.643108 containerd[1462]: 2025-01-17 12:25:15.625 [INFO][4979] cni-plugin/dataplane_linux.go 69: Setting the host side veth name to cali60e51b789ff ContainerID="1d98e7a4e3f09c48f89ec764395331851db6b80ef8ee0062dcb6afd92e236eae" Namespace="default" Pod="nfs-server-provisioner-0" WorkloadEndpoint="10.0.0.160-k8s-nfs--server--provisioner--0-eth0" Jan 17 12:25:15.643108 containerd[1462]: 2025-01-17 12:25:15.628 [INFO][4979] cni-plugin/dataplane_linux.go 508: Disabling IPv4 forwarding ContainerID="1d98e7a4e3f09c48f89ec764395331851db6b80ef8ee0062dcb6afd92e236eae" Namespace="default" Pod="nfs-server-provisioner-0" WorkloadEndpoint="10.0.0.160-k8s-nfs--server--provisioner--0-eth0" Jan 17 12:25:15.643247 containerd[1462]: 2025-01-17 12:25:15.629 [INFO][4979] cni-plugin/k8s.go 414: Added Mac, interface name, and active container ID to endpoint ContainerID="1d98e7a4e3f09c48f89ec764395331851db6b80ef8ee0062dcb6afd92e236eae" Namespace="default" Pod="nfs-server-provisioner-0" WorkloadEndpoint="10.0.0.160-k8s-nfs--server--provisioner--0-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"10.0.0.160-k8s-nfs--server--provisioner--0-eth0", GenerateName:"nfs-server-provisioner-", Namespace:"default", SelfLink:"", UID:"0f399b97-3681-49cd-b14d-9783777091f7", ResourceVersion:"1224", Generation:0, CreationTimestamp:time.Date(2025, time.January, 17, 12, 25, 15, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"app":"nfs-server-provisioner", "apps.kubernetes.io/pod-index":"0", "chart":"nfs-server-provisioner-1.8.0", "controller-revision-hash":"nfs-server-provisioner-d5cbb7f57", "heritage":"Helm", "projectcalico.org/namespace":"default", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"nfs-server-provisioner", "release":"nfs-server-provisioner", "statefulset.kubernetes.io/pod-name":"nfs-server-provisioner-0"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"10.0.0.160", ContainerID:"1d98e7a4e3f09c48f89ec764395331851db6b80ef8ee0062dcb6afd92e236eae", Pod:"nfs-server-provisioner-0", Endpoint:"eth0", ServiceAccountName:"nfs-server-provisioner", IPNetworks:[]string{"192.168.44.8/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.default", "ksa.default.nfs-server-provisioner"}, InterfaceName:"cali60e51b789ff", MAC:"e6:b9:6f:a0:90:46", Ports:[]v3.WorkloadEndpointPort{v3.WorkloadEndpointPort{Name:"nfs", 
Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x801, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"nfs-udp", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"UDP"}, Port:0x801, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"nlockmgr", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x8023, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"nlockmgr-udp", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"UDP"}, Port:0x8023, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"mountd", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x4e50, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"mountd-udp", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"UDP"}, Port:0x4e50, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"rquotad", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x36b, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"rquotad-udp", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"UDP"}, Port:0x36b, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"rpcbind", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x6f, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"rpcbind-udp", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"UDP"}, Port:0x6f, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"statd", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x296, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"statd-udp", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"UDP"}, Port:0x296, HostPort:0x0, HostIP:""}}, AllowSpoofedSourcePrefixes:[]string(nil)}} Jan 17 12:25:15.643247 containerd[1462]: 2025-01-17 12:25:15.637 [INFO][4979] cni-plugin/k8s.go 500: Wrote updated endpoint to datastore ContainerID="1d98e7a4e3f09c48f89ec764395331851db6b80ef8ee0062dcb6afd92e236eae" Namespace="default" Pod="nfs-server-provisioner-0" WorkloadEndpoint="10.0.0.160-k8s-nfs--server--provisioner--0-eth0" Jan 17 12:25:15.981673 containerd[1462]: time="2025-01-17T12:25:15.981471046Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Jan 17 12:25:15.982318 containerd[1462]: time="2025-01-17T12:25:15.981589648Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Jan 17 12:25:15.982318 containerd[1462]: time="2025-01-17T12:25:15.982212366Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Jan 17 12:25:15.982542 containerd[1462]: time="2025-01-17T12:25:15.982327904Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Jan 17 12:25:16.003828 systemd[1]: Started cri-containerd-1d98e7a4e3f09c48f89ec764395331851db6b80ef8ee0062dcb6afd92e236eae.scope - libcontainer container 1d98e7a4e3f09c48f89ec764395331851db6b80ef8ee0062dcb6afd92e236eae. 
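
The WorkloadEndpoint dump above prints ports in hex (Port:0x801 and so on). Decoding them confirms they are exactly the NFS provisioner ports declared at the start of the CNI trace ({nfs TCP 2049 0} and friends):

package main

import "fmt"

func main() {
	// Hex values from the endpoint dump, decimal values from the
	// endpoint's declared port list earlier in the same trace.
	checks := []struct {
		name string
		hex  uint16
		dec  uint16
	}{
		{"nfs", 0x801, 2049},
		{"nlockmgr", 0x8023, 32803},
		{"mountd", 0x4e50, 20048},
		{"rquotad", 0x36b, 875},
		{"rpcbind", 0x6f, 111},
		{"statd", 0x296, 662},
	}
	for _, c := range checks {
		fmt.Printf("%-8s 0x%x = %d (match: %v)\n", c.name, c.hex, c.dec, c.hex == c.dec)
	}
}
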
Jan 17 12:25:16.017728 systemd-resolved[1334]: Failed to determine the local hostname and LLMNR/mDNS names, ignoring: No such device or address Jan 17 12:25:16.040597 containerd[1462]: time="2025-01-17T12:25:16.040562384Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:nfs-server-provisioner-0,Uid:0f399b97-3681-49cd-b14d-9783777091f7,Namespace:default,Attempt:0,} returns sandbox id \"1d98e7a4e3f09c48f89ec764395331851db6b80ef8ee0062dcb6afd92e236eae\"" Jan 17 12:25:16.401178 kubelet[1774]: E0117 12:25:16.401082 1774 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Jan 17 12:25:16.401545 kubelet[1774]: I0117 12:25:16.401185 1774 pod_startup_latency_tracker.go:102] "Observed pod startup duration" pod="calico-apiserver/calico-apiserver-68f79c84d4-6mrf5" podStartSLOduration=70.565032657 podStartE2EDuration="1m20.401146814s" podCreationTimestamp="2025-01-17 12:23:56 +0000 UTC" firstStartedPulling="2025-01-17 12:25:05.570372827 +0000 UTC m=+68.834328626" lastFinishedPulling="2025-01-17 12:25:15.406486984 +0000 UTC m=+78.670442783" observedRunningTime="2025-01-17 12:25:16.400948331 +0000 UTC m=+79.664904130" watchObservedRunningTime="2025-01-17 12:25:16.401146814 +0000 UTC m=+79.665102613" Jan 17 12:25:17.187992 systemd-networkd[1390]: cali60e51b789ff: Gained IPv6LL Jan 17 12:25:17.236733 containerd[1462]: time="2025-01-17T12:25:17.236681891Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/kube-controllers:v3.29.1\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jan 17 12:25:17.237532 containerd[1462]: time="2025-01-17T12:25:17.237487342Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/kube-controllers:v3.29.1: active requests=0, bytes read=34141192" Jan 17 12:25:17.238523 containerd[1462]: time="2025-01-17T12:25:17.238492739Z" level=info msg="ImageCreate event name:\"sha256:6331715a2ae96b18a770a395cac108321d108e445e08b616e5bc9fbd1f9c21da\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jan 17 12:25:17.240510 containerd[1462]: time="2025-01-17T12:25:17.240458217Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/kube-controllers@sha256:1072d6a98167a14ca361e9ce757733f9bae36d1f1c6a9621ea10934b6b1e10d9\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jan 17 12:25:17.241047 containerd[1462]: time="2025-01-17T12:25:17.241011585Z" level=info msg="Pulled image \"ghcr.io/flatcar/calico/kube-controllers:v3.29.1\" with image id \"sha256:6331715a2ae96b18a770a395cac108321d108e445e08b616e5bc9fbd1f9c21da\", repo tag \"ghcr.io/flatcar/calico/kube-controllers:v3.29.1\", repo digest \"ghcr.io/flatcar/calico/kube-controllers@sha256:1072d6a98167a14ca361e9ce757733f9bae36d1f1c6a9621ea10934b6b1e10d9\", size \"35634244\" in 1.834256347s" Jan 17 12:25:17.241084 containerd[1462]: time="2025-01-17T12:25:17.241050738Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/kube-controllers:v3.29.1\" returns image reference \"sha256:6331715a2ae96b18a770a395cac108321d108e445e08b616e5bc9fbd1f9c21da\"" Jan 17 12:25:17.241589 containerd[1462]: time="2025-01-17T12:25:17.241556177Z" level=info msg="PullImage \"registry.k8s.io/coredns/coredns:v1.11.1\"" Jan 17 12:25:17.247671 containerd[1462]: time="2025-01-17T12:25:17.247647777Z" level=info msg="CreateContainer within sandbox \"46127e5ff968b2917de40166ec0cbb391eb42dcf45c2a8acc1e490aa998ce0d5\" for container &ContainerMetadata{Name:calico-kube-controllers,Attempt:0,}" Jan 17 12:25:17.261507 containerd[1462]: 
time="2025-01-17T12:25:17.261466545Z" level=info msg="CreateContainer within sandbox \"46127e5ff968b2917de40166ec0cbb391eb42dcf45c2a8acc1e490aa998ce0d5\" for &ContainerMetadata{Name:calico-kube-controllers,Attempt:0,} returns container id \"2f313d11e208a7c96956e9f239c921cf15ecc24e7f133e9848680c2d736f7255\"" Jan 17 12:25:17.261868 containerd[1462]: time="2025-01-17T12:25:17.261826921Z" level=info msg="StartContainer for \"2f313d11e208a7c96956e9f239c921cf15ecc24e7f133e9848680c2d736f7255\"" Jan 17 12:25:17.289811 systemd[1]: Started cri-containerd-2f313d11e208a7c96956e9f239c921cf15ecc24e7f133e9848680c2d736f7255.scope - libcontainer container 2f313d11e208a7c96956e9f239c921cf15ecc24e7f133e9848680c2d736f7255. Jan 17 12:25:17.326975 containerd[1462]: time="2025-01-17T12:25:17.326935564Z" level=info msg="StartContainer for \"2f313d11e208a7c96956e9f239c921cf15ecc24e7f133e9848680c2d736f7255\" returns successfully" Jan 17 12:25:17.356378 kubelet[1774]: E0117 12:25:17.356349 1774 file.go:104] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Jan 17 12:25:17.376257 containerd[1462]: time="2025-01-17T12:25:17.376210910Z" level=info msg="ImageUpdate event name:\"registry.k8s.io/coredns/coredns:v1.11.1\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jan 17 12:25:17.378771 containerd[1462]: time="2025-01-17T12:25:17.378713716Z" level=info msg="stop pulling image registry.k8s.io/coredns/coredns:v1.11.1: active requests=0, bytes read=0" Jan 17 12:25:17.382158 containerd[1462]: time="2025-01-17T12:25:17.381894524Z" level=info msg="Pulled image \"registry.k8s.io/coredns/coredns:v1.11.1\" with image id \"sha256:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4\", repo tag \"registry.k8s.io/coredns/coredns:v1.11.1\", repo digest \"registry.k8s.io/coredns/coredns@sha256:1eeb4c7316bacb1d4c8ead65571cd92dd21e27359f0d4917f1a5822a73b75db1\", size \"18182961\" in 140.310244ms" Jan 17 12:25:17.382158 containerd[1462]: time="2025-01-17T12:25:17.381930782Z" level=info msg="PullImage \"registry.k8s.io/coredns/coredns:v1.11.1\" returns image reference \"sha256:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4\"" Jan 17 12:25:17.383461 containerd[1462]: time="2025-01-17T12:25:17.382689596Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/apiserver:v3.29.1\"" Jan 17 12:25:17.384479 containerd[1462]: time="2025-01-17T12:25:17.384442004Z" level=info msg="CreateContainer within sandbox \"3c8b1143d7533e7f37538df9b1a6dfcea840a2766c890dbdf7f9b173584dcb0b\" for container &ContainerMetadata{Name:coredns,Attempt:0,}" Jan 17 12:25:17.399782 containerd[1462]: time="2025-01-17T12:25:17.399730650Z" level=info msg="CreateContainer within sandbox \"3c8b1143d7533e7f37538df9b1a6dfcea840a2766c890dbdf7f9b173584dcb0b\" for &ContainerMetadata{Name:coredns,Attempt:0,} returns container id \"6fa6a140884a8687f39332df1f2580c0c0626620950932e68b8872f31a60fef0\"" Jan 17 12:25:17.400261 containerd[1462]: time="2025-01-17T12:25:17.400220640Z" level=info msg="StartContainer for \"6fa6a140884a8687f39332df1f2580c0c0626620950932e68b8872f31a60fef0\"" Jan 17 12:25:17.401479 kubelet[1774]: E0117 12:25:17.401445 1774 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Jan 17 12:25:17.403756 kubelet[1774]: I0117 12:25:17.403737 1774 pod_startup_latency_tracker.go:102] "Observed pod startup duration" pod="calico-system/calico-kube-controllers-9fc77b6d-kp6sc" podStartSLOduration=68.754901182 
podStartE2EDuration="1m20.40370628s" podCreationTimestamp="2025-01-17 12:23:57 +0000 UTC" firstStartedPulling="2025-01-17 12:25:05.592547577 +0000 UTC m=+68.856503376" lastFinishedPulling="2025-01-17 12:25:17.241352685 +0000 UTC m=+80.505308474" observedRunningTime="2025-01-17 12:25:17.403382783 +0000 UTC m=+80.667338572" watchObservedRunningTime="2025-01-17 12:25:17.40370628 +0000 UTC m=+80.667662079" Jan 17 12:25:17.435833 systemd[1]: Started cri-containerd-6fa6a140884a8687f39332df1f2580c0c0626620950932e68b8872f31a60fef0.scope - libcontainer container 6fa6a140884a8687f39332df1f2580c0c0626620950932e68b8872f31a60fef0. Jan 17 12:25:17.462348 containerd[1462]: time="2025-01-17T12:25:17.462166359Z" level=info msg="StartContainer for \"6fa6a140884a8687f39332df1f2580c0c0626620950932e68b8872f31a60fef0\" returns successfully" Jan 17 12:25:17.874083 containerd[1462]: time="2025-01-17T12:25:17.874021455Z" level=info msg="ImageUpdate event name:\"ghcr.io/flatcar/calico/apiserver:v3.29.1\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jan 17 12:25:17.874847 containerd[1462]: time="2025-01-17T12:25:17.874763997Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/apiserver:v3.29.1: active requests=0, bytes read=77" Jan 17 12:25:17.876802 containerd[1462]: time="2025-01-17T12:25:17.876769240Z" level=info msg="Pulled image \"ghcr.io/flatcar/calico/apiserver:v3.29.1\" with image id \"sha256:421726ace5ed13894f7edf594dd3a462947aedc13d0f69d08525d7369477fb70\", repo tag \"ghcr.io/flatcar/calico/apiserver:v3.29.1\", repo digest \"ghcr.io/flatcar/calico/apiserver@sha256:b8c43e264fe52e0c327b0bf3ac882a0224b33bdd7f4ff58a74242da7d9b00486\", size \"43494504\" in 494.042444ms" Jan 17 12:25:17.876802 containerd[1462]: time="2025-01-17T12:25:17.876799497Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/apiserver:v3.29.1\" returns image reference \"sha256:421726ace5ed13894f7edf594dd3a462947aedc13d0f69d08525d7369477fb70\"" Jan 17 12:25:17.877343 containerd[1462]: time="2025-01-17T12:25:17.877307060Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/node-driver-registrar:v3.29.1\"" Jan 17 12:25:17.878666 containerd[1462]: time="2025-01-17T12:25:17.878608601Z" level=info msg="CreateContainer within sandbox \"a17ce83b27e1ce8b503d39d1d540f008a21074a4f6f99a463ed780afe1ec4fb3\" for container &ContainerMetadata{Name:calico-apiserver,Attempt:0,}" Jan 17 12:25:17.891949 containerd[1462]: time="2025-01-17T12:25:17.891906994Z" level=info msg="CreateContainer within sandbox \"a17ce83b27e1ce8b503d39d1d540f008a21074a4f6f99a463ed780afe1ec4fb3\" for &ContainerMetadata{Name:calico-apiserver,Attempt:0,} returns container id \"189d4a46cb2ef1083975caffa5e4da8804acfb316d4bb5e8e5d9aacf7fd44ce6\"" Jan 17 12:25:17.892324 containerd[1462]: time="2025-01-17T12:25:17.892290623Z" level=info msg="StartContainer for \"189d4a46cb2ef1083975caffa5e4da8804acfb316d4bb5e8e5d9aacf7fd44ce6\"" Jan 17 12:25:17.917825 systemd[1]: Started cri-containerd-189d4a46cb2ef1083975caffa5e4da8804acfb316d4bb5e8e5d9aacf7fd44ce6.scope - libcontainer container 189d4a46cb2ef1083975caffa5e4da8804acfb316d4bb5e8e5d9aacf7fd44ce6. 
Jan 17 12:25:17.954621 containerd[1462]: time="2025-01-17T12:25:17.954583944Z" level=info msg="StartContainer for \"189d4a46cb2ef1083975caffa5e4da8804acfb316d4bb5e8e5d9aacf7fd44ce6\" returns successfully" Jan 17 12:25:18.401304 kubelet[1774]: E0117 12:25:18.401277 1774 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jan 17 12:25:18.402050 kubelet[1774]: E0117 12:25:18.401505 1774 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Jan 17 12:25:18.410671 kubelet[1774]: I0117 12:25:18.410646 1774 pod_startup_latency_tracker.go:102] "Observed pod startup duration" pod="kube-system/coredns-76f75df574-nlhgn" podStartSLOduration=85.730308668 podStartE2EDuration="1m34.410602691s" podCreationTimestamp="2025-01-17 12:23:44 +0000 UTC" firstStartedPulling="2025-01-17 12:25:08.701904933 +0000 UTC m=+71.965860722" lastFinishedPulling="2025-01-17 12:25:17.382198946 +0000 UTC m=+80.646154745" observedRunningTime="2025-01-17 12:25:18.410384723 +0000 UTC m=+81.674340522" watchObservedRunningTime="2025-01-17 12:25:18.410602691 +0000 UTC m=+81.674558490" Jan 17 12:25:18.424168 kubelet[1774]: I0117 12:25:18.423837 1774 pod_startup_latency_tracker.go:102] "Observed pod startup duration" pod="calico-apiserver/calico-apiserver-68f79c84d4-xr6b6" podStartSLOduration=74.032872094 podStartE2EDuration="1m22.423801666s" podCreationTimestamp="2025-01-17 12:23:56 +0000 UTC" firstStartedPulling="2025-01-17 12:25:09.486169598 +0000 UTC m=+72.750125397" lastFinishedPulling="2025-01-17 12:25:17.87709917 +0000 UTC m=+81.141054969" observedRunningTime="2025-01-17 12:25:18.423420351 +0000 UTC m=+81.687376150" watchObservedRunningTime="2025-01-17 12:25:18.423801666 +0000 UTC m=+81.687757465" Jan 17 12:25:18.430580 systemd[1]: run-containerd-runc-k8s.io-2f313d11e208a7c96956e9f239c921cf15ecc24e7f133e9848680c2d736f7255-runc.4Loms0.mount: Deactivated successfully. 
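
The pod_startup_latency_tracker entries report two durations: podStartE2EDuration is observedRunningTime minus podCreationTimestamp, while podStartSLOduration additionally excludes the time spent pulling images (lastFinishedPulling minus firstStartedPulling). Reproducing the arithmetic for coredns-76f75df574-nlhgn from the timestamps logged above:

package main

import (
	"fmt"
	"time"
)

const layout = "2006-01-02 15:04:05.999999999 -0700 MST"

func mustParse(s string) time.Time {
	t, err := time.Parse(layout, s)
	if err != nil {
		panic(err)
	}
	return t
}

func main() {
	// Timestamps copied from the coredns-76f75df574-nlhgn tracker entry.
	created := mustParse("2025-01-17 12:23:44 +0000 UTC")
	firstPull := mustParse("2025-01-17 12:25:08.701904933 +0000 UTC")
	lastPull := mustParse("2025-01-17 12:25:17.382198946 +0000 UTC")
	running := mustParse("2025-01-17 12:25:18.410384723 +0000 UTC")

	e2e := running.Sub(created)
	slo := e2e - lastPull.Sub(firstPull) // image-pull time does not count against the SLO figure
	fmt.Println("podStartE2EDuration:", e2e) // ~1m34.41s, as logged (small drift from the monotonic clock)
	fmt.Println("podStartSLOduration:", slo) // ~85.73s, matching the logged 85.730308668
}
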
Jan 17 12:25:19.195597 containerd[1462]: time="2025-01-17T12:25:19.195534357Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/node-driver-registrar:v3.29.1\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jan 17 12:25:19.196572 containerd[1462]: time="2025-01-17T12:25:19.196532590Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/node-driver-registrar:v3.29.1: active requests=0, bytes read=10501081" Jan 17 12:25:19.197975 containerd[1462]: time="2025-01-17T12:25:19.197944439Z" level=info msg="ImageCreate event name:\"sha256:8b7d18f262d5cf6a6343578ad0db68a140c4c9989d9e02c58c27cb5d2c70320f\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jan 17 12:25:19.200631 containerd[1462]: time="2025-01-17T12:25:19.200598548Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/node-driver-registrar@sha256:a338da9488cbaa83c78457c3d7354d84149969c0480e88dd768e036632ff5b76\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jan 17 12:25:19.201528 containerd[1462]: time="2025-01-17T12:25:19.201475564Z" level=info msg="Pulled image \"ghcr.io/flatcar/calico/node-driver-registrar:v3.29.1\" with image id \"sha256:8b7d18f262d5cf6a6343578ad0db68a140c4c9989d9e02c58c27cb5d2c70320f\", repo tag \"ghcr.io/flatcar/calico/node-driver-registrar:v3.29.1\", repo digest \"ghcr.io/flatcar/calico/node-driver-registrar@sha256:a338da9488cbaa83c78457c3d7354d84149969c0480e88dd768e036632ff5b76\", size \"11994117\" in 1.324121547s" Jan 17 12:25:19.201562 containerd[1462]: time="2025-01-17T12:25:19.201534114Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/node-driver-registrar:v3.29.1\" returns image reference \"sha256:8b7d18f262d5cf6a6343578ad0db68a140c4c9989d9e02c58c27cb5d2c70320f\"" Jan 17 12:25:19.202172 containerd[1462]: time="2025-01-17T12:25:19.202122447Z" level=info msg="PullImage \"registry.k8s.io/sig-storage/nfs-provisioner:v4.0.8\"" Jan 17 12:25:19.203401 containerd[1462]: time="2025-01-17T12:25:19.203363856Z" level=info msg="CreateContainer within sandbox \"405a158c85be99c863d2dfdde9542760680bf737e6a4e11ad97fed9c1857a9b9\" for container &ContainerMetadata{Name:csi-node-driver-registrar,Attempt:0,}" Jan 17 12:25:19.218892 containerd[1462]: time="2025-01-17T12:25:19.218843108Z" level=info msg="CreateContainer within sandbox \"405a158c85be99c863d2dfdde9542760680bf737e6a4e11ad97fed9c1857a9b9\" for &ContainerMetadata{Name:csi-node-driver-registrar,Attempt:0,} returns container id \"82aeeaeac37f8c1bf20fa3aee354a97ad507e21da2649bb9f91dbd968b41b82f\"" Jan 17 12:25:19.219383 containerd[1462]: time="2025-01-17T12:25:19.219361371Z" level=info msg="StartContainer for \"82aeeaeac37f8c1bf20fa3aee354a97ad507e21da2649bb9f91dbd968b41b82f\"" Jan 17 12:25:19.249826 systemd[1]: Started cri-containerd-82aeeaeac37f8c1bf20fa3aee354a97ad507e21da2649bb9f91dbd968b41b82f.scope - libcontainer container 82aeeaeac37f8c1bf20fa3aee354a97ad507e21da2649bb9f91dbd968b41b82f. 
Jan 17 12:25:19.278555 containerd[1462]: time="2025-01-17T12:25:19.278502621Z" level=info msg="StartContainer for \"82aeeaeac37f8c1bf20fa3aee354a97ad507e21da2649bb9f91dbd968b41b82f\" returns successfully" Jan 17 12:25:19.370473 kubelet[1774]: I0117 12:25:19.370444 1774 csi_plugin.go:99] kubernetes.io/csi: Trying to validate a new CSI Driver with name: csi.tigera.io endpoint: /var/lib/kubelet/plugins/csi.tigera.io/csi.sock versions: 1.0.0 Jan 17 12:25:19.371374 kubelet[1774]: I0117 12:25:19.371352 1774 csi_plugin.go:112] kubernetes.io/csi: Register new plugin with name: csi.tigera.io at endpoint: /var/lib/kubelet/plugins/csi.tigera.io/csi.sock Jan 17 12:25:19.402327 kubelet[1774]: E0117 12:25:19.402267 1774 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Jan 17 12:25:19.409744 kubelet[1774]: E0117 12:25:19.408994 1774 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jan 17 12:25:19.417452 kubelet[1774]: I0117 12:25:19.417418 1774 pod_startup_latency_tracker.go:102] "Observed pod startup duration" pod="calico-system/csi-node-driver-b66m9" podStartSLOduration=67.743559014 podStartE2EDuration="1m21.41738442s" podCreationTimestamp="2025-01-17 12:23:58 +0000 UTC" firstStartedPulling="2025-01-17 12:25:05.528025893 +0000 UTC m=+68.791981682" lastFinishedPulling="2025-01-17 12:25:19.201851279 +0000 UTC m=+82.465807088" observedRunningTime="2025-01-17 12:25:19.41714434 +0000 UTC m=+82.681100129" watchObservedRunningTime="2025-01-17 12:25:19.41738442 +0000 UTC m=+82.681340219" Jan 17 12:25:20.402467 kubelet[1774]: E0117 12:25:20.402410 1774 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Jan 17 12:25:20.411961 kubelet[1774]: E0117 12:25:20.411936 1774 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jan 17 12:25:21.020511 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount131864836.mount: Deactivated successfully. 
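
The transient unit name above, var-lib-containerd-tmpmounts-containerd\x2dmount131864836.mount, shows systemd's path escaping: the path separator "/" becomes "-" and a literal "-" in a path component becomes "\x2d", which is what systemd-escape --path produces. A simplified sketch of that encoding for ASCII paths (real systemd also special-cases a leading "."):

package main

import (
	"fmt"
	"strings"
)

func systemdEscapePath(p string) string {
	p = strings.Trim(p, "/")
	var b strings.Builder
	for _, c := range []byte(p) {
		switch {
		case c == '/':
			b.WriteByte('-')
		case c >= 'a' && c <= 'z', c >= 'A' && c <= 'Z',
			c >= '0' && c <= '9', c == '_', c == '.':
			b.WriteByte(c)
		default:
			fmt.Fprintf(&b, `\x%02x`, c) // a literal "-" lands here
		}
	}
	return b.String()
}

func main() {
	fmt.Println(systemdEscapePath("/var/lib/containerd/tmpmounts/containerd-mount131864836") + ".mount")
	// var-lib-containerd-tmpmounts-containerd\x2dmount131864836.mount
}
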
Jan 17 12:25:21.403331 kubelet[1774]: E0117 12:25:21.403220 1774 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Jan 17 12:25:22.404325 kubelet[1774]: E0117 12:25:22.404270 1774 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Jan 17 12:25:22.992223 containerd[1462]: time="2025-01-17T12:25:22.992164408Z" level=info msg="ImageCreate event name:\"registry.k8s.io/sig-storage/nfs-provisioner:v4.0.8\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jan 17 12:25:22.993028 containerd[1462]: time="2025-01-17T12:25:22.992959995Z" level=info msg="stop pulling image registry.k8s.io/sig-storage/nfs-provisioner:v4.0.8: active requests=0, bytes read=91039406" Jan 17 12:25:22.994140 containerd[1462]: time="2025-01-17T12:25:22.994109595Z" level=info msg="ImageCreate event name:\"sha256:fd0b16f70b66b72bcb2f91d556fa33eba02729c44ffc5f2c16130e7f9fbed3c4\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jan 17 12:25:22.997039 containerd[1462]: time="2025-01-17T12:25:22.996980199Z" level=info msg="ImageCreate event name:\"registry.k8s.io/sig-storage/nfs-provisioner@sha256:c825f3d5e28bde099bd7a3daace28772d412c9157ad47fa752a9ad0baafc118d\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jan 17 12:25:22.997879 containerd[1462]: time="2025-01-17T12:25:22.997838638Z" level=info msg="Pulled image \"registry.k8s.io/sig-storage/nfs-provisioner:v4.0.8\" with image id \"sha256:fd0b16f70b66b72bcb2f91d556fa33eba02729c44ffc5f2c16130e7f9fbed3c4\", repo tag \"registry.k8s.io/sig-storage/nfs-provisioner:v4.0.8\", repo digest \"registry.k8s.io/sig-storage/nfs-provisioner@sha256:c825f3d5e28bde099bd7a3daace28772d412c9157ad47fa752a9ad0baafc118d\", size \"91036984\" in 3.795678499s" Jan 17 12:25:22.997879 containerd[1462]: time="2025-01-17T12:25:22.997874085Z" level=info msg="PullImage \"registry.k8s.io/sig-storage/nfs-provisioner:v4.0.8\" returns image reference \"sha256:fd0b16f70b66b72bcb2f91d556fa33eba02729c44ffc5f2c16130e7f9fbed3c4\"" Jan 17 12:25:22.999759 containerd[1462]: time="2025-01-17T12:25:22.999731704Z" level=info msg="CreateContainer within sandbox \"1d98e7a4e3f09c48f89ec764395331851db6b80ef8ee0062dcb6afd92e236eae\" for container &ContainerMetadata{Name:nfs-server-provisioner,Attempt:0,}" Jan 17 12:25:23.012842 containerd[1462]: time="2025-01-17T12:25:23.012808971Z" level=info msg="CreateContainer within sandbox \"1d98e7a4e3f09c48f89ec764395331851db6b80ef8ee0062dcb6afd92e236eae\" for &ContainerMetadata{Name:nfs-server-provisioner,Attempt:0,} returns container id \"779fe7ef511d53b28c4e3a5b0955cec133707eaa880a4a666f775c72cb806eca\"" Jan 17 12:25:23.013279 containerd[1462]: time="2025-01-17T12:25:23.013111045Z" level=info msg="StartContainer for \"779fe7ef511d53b28c4e3a5b0955cec133707eaa880a4a666f775c72cb806eca\"" Jan 17 12:25:23.044828 systemd[1]: Started cri-containerd-779fe7ef511d53b28c4e3a5b0955cec133707eaa880a4a666f775c72cb806eca.scope - libcontainer container 779fe7ef511d53b28c4e3a5b0955cec133707eaa880a4a666f775c72cb806eca. 
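
The provisioner pull above reads 91039406 bytes in 3.795678499s; a quick back-of-the-envelope throughput check on those two logged numbers:

package main

import (
	"fmt"
	"time"
)

func main() {
	bytesRead := 91039406.0                  // "bytes read" from the log
	elapsed := 3795678499 * time.Nanosecond  // "in 3.795678499s" from the log
	mibps := bytesRead / elapsed.Seconds() / (1 << 20)
	fmt.Printf("%.1f MiB/s\n", mibps) // about 22.9 MiB/s
}
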
Jan 17 12:25:23.070184 containerd[1462]: time="2025-01-17T12:25:23.070149095Z" level=info msg="StartContainer for \"779fe7ef511d53b28c4e3a5b0955cec133707eaa880a4a666f775c72cb806eca\" returns successfully" Jan 17 12:25:23.404667 kubelet[1774]: E0117 12:25:23.404555 1774 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Jan 17 12:25:23.427232 kubelet[1774]: I0117 12:25:23.427188 1774 pod_startup_latency_tracker.go:102] "Observed pod startup duration" pod="default/nfs-server-provisioner-0" podStartSLOduration=1.470978837 podStartE2EDuration="8.427157296s" podCreationTimestamp="2025-01-17 12:25:15 +0000 UTC" firstStartedPulling="2025-01-17 12:25:16.04194071 +0000 UTC m=+79.305896509" lastFinishedPulling="2025-01-17 12:25:22.998119169 +0000 UTC m=+86.262074968" observedRunningTime="2025-01-17 12:25:23.426945167 +0000 UTC m=+86.690900976" watchObservedRunningTime="2025-01-17 12:25:23.427157296 +0000 UTC m=+86.691113095" Jan 17 12:25:24.405085 kubelet[1774]: E0117 12:25:24.405042 1774 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Jan 17 12:25:25.405783 kubelet[1774]: E0117 12:25:25.405740 1774 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Jan 17 12:25:26.406872 kubelet[1774]: E0117 12:25:26.406827 1774 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Jan 17 12:25:27.407755 kubelet[1774]: E0117 12:25:27.407721 1774 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Jan 17 12:25:28.407923 kubelet[1774]: E0117 12:25:28.407863 1774 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Jan 17 12:25:28.419079 kubelet[1774]: E0117 12:25:28.419052 1774 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jan 17 12:25:29.408350 kubelet[1774]: E0117 12:25:29.408297 1774 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Jan 17 12:25:30.408627 kubelet[1774]: E0117 12:25:30.408572 1774 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Jan 17 12:25:31.408707 kubelet[1774]: E0117 12:25:31.408666 1774 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Jan 17 12:25:32.409680 kubelet[1774]: E0117 12:25:32.409633 1774 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Jan 17 12:25:33.410231 kubelet[1774]: E0117 12:25:33.410188 1774 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Jan 17 12:25:34.410318 kubelet[1774]: E0117 12:25:34.410283 1774 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Jan 17 12:25:35.410641 kubelet[1774]: E0117 12:25:35.410595 1774 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Jan 17 12:25:36.410964 kubelet[1774]: E0117 12:25:36.410931 1774 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" 
path="/etc/kubernetes/manifests" Jan 17 12:25:36.841704 kubelet[1774]: I0117 12:25:36.841673 1774 topology_manager.go:215] "Topology Admit Handler" podUID="8ca60de1-530d-4692-b0b4-fd0a1854685b" podNamespace="default" podName="test-pod-1" Jan 17 12:25:36.846935 systemd[1]: Created slice kubepods-besteffort-pod8ca60de1_530d_4692_b0b4_fd0a1854685b.slice - libcontainer container kubepods-besteffort-pod8ca60de1_530d_4692_b0b4_fd0a1854685b.slice. Jan 17 12:25:36.904791 kubelet[1774]: I0117 12:25:36.904768 1774 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"pvc-a46b90b5-bf2d-43b5-ae19-41da0b2c8f51\" (UniqueName: \"kubernetes.io/nfs/8ca60de1-530d-4692-b0b4-fd0a1854685b-pvc-a46b90b5-bf2d-43b5-ae19-41da0b2c8f51\") pod \"test-pod-1\" (UID: \"8ca60de1-530d-4692-b0b4-fd0a1854685b\") " pod="default/test-pod-1" Jan 17 12:25:36.908115 kubelet[1774]: I0117 12:25:36.908081 1774 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-6msng\" (UniqueName: \"kubernetes.io/projected/8ca60de1-530d-4692-b0b4-fd0a1854685b-kube-api-access-6msng\") pod \"test-pod-1\" (UID: \"8ca60de1-530d-4692-b0b4-fd0a1854685b\") " pod="default/test-pod-1" Jan 17 12:25:37.036733 kernel: FS-Cache: Loaded Jan 17 12:25:37.105036 kernel: RPC: Registered named UNIX socket transport module. Jan 17 12:25:37.105091 kernel: RPC: Registered udp transport module. Jan 17 12:25:37.105115 kernel: RPC: Registered tcp transport module. Jan 17 12:25:37.105136 kernel: RPC: Registered tcp-with-tls transport module. Jan 17 12:25:37.105866 kernel: RPC: Registered tcp NFSv4.1 backchannel transport module. Jan 17 12:25:37.356743 kubelet[1774]: E0117 12:25:37.356596 1774 file.go:104] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Jan 17 12:25:37.362153 kernel: NFS: Registering the id_resolver key type Jan 17 12:25:37.362209 kernel: Key type id_resolver registered Jan 17 12:25:37.362233 kernel: Key type id_legacy registered Jan 17 12:25:37.388968 nfsidmap[5427]: nss_getpwnam: name 'root@nfs-server-provisioner.default.svc.cluster.local' does not map into domain 'localdomain' Jan 17 12:25:37.393796 nfsidmap[5430]: nss_name_to_gid: name 'root@nfs-server-provisioner.default.svc.cluster.local' does not map into domain 'localdomain' Jan 17 12:25:37.411742 kubelet[1774]: E0117 12:25:37.411691 1774 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Jan 17 12:25:37.450401 containerd[1462]: time="2025-01-17T12:25:37.450353436Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:test-pod-1,Uid:8ca60de1-530d-4692-b0b4-fd0a1854685b,Namespace:default,Attempt:0,}" Jan 17 12:25:37.566189 systemd-networkd[1390]: cali5ec59c6bf6e: Link UP Jan 17 12:25:37.566988 systemd-networkd[1390]: cali5ec59c6bf6e: Gained carrier Jan 17 12:25:37.576541 containerd[1462]: 2025-01-17 12:25:37.505 [INFO][5434] cni-plugin/plugin.go 325: Calico CNI found existing endpoint: &{{WorkloadEndpoint projectcalico.org/v3} {10.0.0.160-k8s-test--pod--1-eth0 default 8ca60de1-530d-4692-b0b4-fd0a1854685b 1375 0 2025-01-17 12:25:15 +0000 UTC map[projectcalico.org/namespace:default projectcalico.org/orchestrator:k8s projectcalico.org/serviceaccount:default] map[] [] [] []} {k8s 10.0.0.160 test-pod-1 eth0 default [] [] [kns.default ksa.default.default] cali5ec59c6bf6e [] []}} ContainerID="fbf625681b4fab031943010d087060f147aaf2667f2cce95d0b8b5ec5043f7dd" Namespace="default" 
Pod="test-pod-1" WorkloadEndpoint="10.0.0.160-k8s-test--pod--1-" Jan 17 12:25:37.576541 containerd[1462]: 2025-01-17 12:25:37.505 [INFO][5434] cni-plugin/k8s.go 77: Extracted identifiers for CmdAddK8s ContainerID="fbf625681b4fab031943010d087060f147aaf2667f2cce95d0b8b5ec5043f7dd" Namespace="default" Pod="test-pod-1" WorkloadEndpoint="10.0.0.160-k8s-test--pod--1-eth0" Jan 17 12:25:37.576541 containerd[1462]: 2025-01-17 12:25:37.531 [INFO][5447] ipam/ipam_plugin.go 225: Calico CNI IPAM request count IPv4=1 IPv6=0 ContainerID="fbf625681b4fab031943010d087060f147aaf2667f2cce95d0b8b5ec5043f7dd" HandleID="k8s-pod-network.fbf625681b4fab031943010d087060f147aaf2667f2cce95d0b8b5ec5043f7dd" Workload="10.0.0.160-k8s-test--pod--1-eth0" Jan 17 12:25:37.576541 containerd[1462]: 2025-01-17 12:25:37.538 [INFO][5447] ipam/ipam_plugin.go 265: Auto assigning IP ContainerID="fbf625681b4fab031943010d087060f147aaf2667f2cce95d0b8b5ec5043f7dd" HandleID="k8s-pod-network.fbf625681b4fab031943010d087060f147aaf2667f2cce95d0b8b5ec5043f7dd" Workload="10.0.0.160-k8s-test--pod--1-eth0" assignArgs=ipam.AutoAssignArgs{Num4:1, Num6:0, HandleID:(*string)(0xc0002df330), Attrs:map[string]string{"namespace":"default", "node":"10.0.0.160", "pod":"test-pod-1", "timestamp":"2025-01-17 12:25:37.531863793 +0000 UTC"}, Hostname:"10.0.0.160", IPv4Pools:[]net.IPNet{}, IPv6Pools:[]net.IPNet{}, MaxBlocksPerHost:0, HostReservedAttrIPv4s:(*ipam.HostReservedAttr)(nil), HostReservedAttrIPv6s:(*ipam.HostReservedAttr)(nil), IntendedUse:"Workload"} Jan 17 12:25:37.576541 containerd[1462]: 2025-01-17 12:25:37.539 [INFO][5447] ipam/ipam_plugin.go 353: About to acquire host-wide IPAM lock. Jan 17 12:25:37.576541 containerd[1462]: 2025-01-17 12:25:37.539 [INFO][5447] ipam/ipam_plugin.go 368: Acquired host-wide IPAM lock. 
Jan 17 12:25:37.576541 containerd[1462]: 2025-01-17 12:25:37.539 [INFO][5447] ipam/ipam.go 107: Auto-assign 1 ipv4, 0 ipv6 addrs for host '10.0.0.160' Jan 17 12:25:37.576541 containerd[1462]: 2025-01-17 12:25:37.540 [INFO][5447] ipam/ipam.go 660: Looking up existing affinities for host handle="k8s-pod-network.fbf625681b4fab031943010d087060f147aaf2667f2cce95d0b8b5ec5043f7dd" host="10.0.0.160" Jan 17 12:25:37.576541 containerd[1462]: 2025-01-17 12:25:37.543 [INFO][5447] ipam/ipam.go 372: Looking up existing affinities for host host="10.0.0.160" Jan 17 12:25:37.576541 containerd[1462]: 2025-01-17 12:25:37.546 [INFO][5447] ipam/ipam.go 489: Trying affinity for 192.168.44.0/26 host="10.0.0.160" Jan 17 12:25:37.576541 containerd[1462]: 2025-01-17 12:25:37.548 [INFO][5447] ipam/ipam.go 155: Attempting to load block cidr=192.168.44.0/26 host="10.0.0.160" Jan 17 12:25:37.576541 containerd[1462]: 2025-01-17 12:25:37.550 [INFO][5447] ipam/ipam.go 232: Affinity is confirmed and block has been loaded cidr=192.168.44.0/26 host="10.0.0.160" Jan 17 12:25:37.576541 containerd[1462]: 2025-01-17 12:25:37.550 [INFO][5447] ipam/ipam.go 1180: Attempting to assign 1 addresses from block block=192.168.44.0/26 handle="k8s-pod-network.fbf625681b4fab031943010d087060f147aaf2667f2cce95d0b8b5ec5043f7dd" host="10.0.0.160" Jan 17 12:25:37.576541 containerd[1462]: 2025-01-17 12:25:37.551 [INFO][5447] ipam/ipam.go 1685: Creating new handle: k8s-pod-network.fbf625681b4fab031943010d087060f147aaf2667f2cce95d0b8b5ec5043f7dd Jan 17 12:25:37.576541 containerd[1462]: 2025-01-17 12:25:37.554 [INFO][5447] ipam/ipam.go 1203: Writing block in order to claim IPs block=192.168.44.0/26 handle="k8s-pod-network.fbf625681b4fab031943010d087060f147aaf2667f2cce95d0b8b5ec5043f7dd" host="10.0.0.160" Jan 17 12:25:37.576541 containerd[1462]: 2025-01-17 12:25:37.560 [INFO][5447] ipam/ipam.go 1216: Successfully claimed IPs: [192.168.44.9/26] block=192.168.44.0/26 handle="k8s-pod-network.fbf625681b4fab031943010d087060f147aaf2667f2cce95d0b8b5ec5043f7dd" host="10.0.0.160" Jan 17 12:25:37.576541 containerd[1462]: 2025-01-17 12:25:37.561 [INFO][5447] ipam/ipam.go 847: Auto-assigned 1 out of 1 IPv4s: [192.168.44.9/26] handle="k8s-pod-network.fbf625681b4fab031943010d087060f147aaf2667f2cce95d0b8b5ec5043f7dd" host="10.0.0.160" Jan 17 12:25:37.576541 containerd[1462]: 2025-01-17 12:25:37.561 [INFO][5447] ipam/ipam_plugin.go 374: Released host-wide IPAM lock. 
Jan 17 12:25:37.576541 containerd[1462]: 2025-01-17 12:25:37.561 [INFO][5447] ipam/ipam_plugin.go 283: Calico CNI IPAM assigned addresses IPv4=[192.168.44.9/26] IPv6=[] ContainerID="fbf625681b4fab031943010d087060f147aaf2667f2cce95d0b8b5ec5043f7dd" HandleID="k8s-pod-network.fbf625681b4fab031943010d087060f147aaf2667f2cce95d0b8b5ec5043f7dd" Workload="10.0.0.160-k8s-test--pod--1-eth0" Jan 17 12:25:37.576541 containerd[1462]: 2025-01-17 12:25:37.563 [INFO][5434] cni-plugin/k8s.go 386: Populated endpoint ContainerID="fbf625681b4fab031943010d087060f147aaf2667f2cce95d0b8b5ec5043f7dd" Namespace="default" Pod="test-pod-1" WorkloadEndpoint="10.0.0.160-k8s-test--pod--1-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"10.0.0.160-k8s-test--pod--1-eth0", GenerateName:"", Namespace:"default", SelfLink:"", UID:"8ca60de1-530d-4692-b0b4-fd0a1854685b", ResourceVersion:"1375", Generation:0, CreationTimestamp:time.Date(2025, time.January, 17, 12, 25, 15, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"projectcalico.org/namespace":"default", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"default"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"10.0.0.160", ContainerID:"", Pod:"test-pod-1", Endpoint:"eth0", ServiceAccountName:"default", IPNetworks:[]string{"192.168.44.9/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.default", "ksa.default.default"}, InterfaceName:"cali5ec59c6bf6e", MAC:"", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil)}} Jan 17 12:25:37.577143 containerd[1462]: 2025-01-17 12:25:37.563 [INFO][5434] cni-plugin/k8s.go 387: Calico CNI using IPs: [192.168.44.9/32] ContainerID="fbf625681b4fab031943010d087060f147aaf2667f2cce95d0b8b5ec5043f7dd" Namespace="default" Pod="test-pod-1" WorkloadEndpoint="10.0.0.160-k8s-test--pod--1-eth0" Jan 17 12:25:37.577143 containerd[1462]: 2025-01-17 12:25:37.563 [INFO][5434] cni-plugin/dataplane_linux.go 69: Setting the host side veth name to cali5ec59c6bf6e ContainerID="fbf625681b4fab031943010d087060f147aaf2667f2cce95d0b8b5ec5043f7dd" Namespace="default" Pod="test-pod-1" WorkloadEndpoint="10.0.0.160-k8s-test--pod--1-eth0" Jan 17 12:25:37.577143 containerd[1462]: 2025-01-17 12:25:37.567 [INFO][5434] cni-plugin/dataplane_linux.go 508: Disabling IPv4 forwarding ContainerID="fbf625681b4fab031943010d087060f147aaf2667f2cce95d0b8b5ec5043f7dd" Namespace="default" Pod="test-pod-1" WorkloadEndpoint="10.0.0.160-k8s-test--pod--1-eth0" Jan 17 12:25:37.577143 containerd[1462]: 2025-01-17 12:25:37.567 [INFO][5434] cni-plugin/k8s.go 414: Added Mac, interface name, and active container ID to endpoint ContainerID="fbf625681b4fab031943010d087060f147aaf2667f2cce95d0b8b5ec5043f7dd" Namespace="default" Pod="test-pod-1" WorkloadEndpoint="10.0.0.160-k8s-test--pod--1-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"10.0.0.160-k8s-test--pod--1-eth0", GenerateName:"", Namespace:"default", SelfLink:"", UID:"8ca60de1-530d-4692-b0b4-fd0a1854685b", ResourceVersion:"1375", Generation:0, CreationTimestamp:time.Date(2025, time.January, 17, 12, 25, 15, 0, time.Local), DeletionTimestamp:, 
DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"projectcalico.org/namespace":"default", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"default"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"10.0.0.160", ContainerID:"fbf625681b4fab031943010d087060f147aaf2667f2cce95d0b8b5ec5043f7dd", Pod:"test-pod-1", Endpoint:"eth0", ServiceAccountName:"default", IPNetworks:[]string{"192.168.44.9/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.default", "ksa.default.default"}, InterfaceName:"cali5ec59c6bf6e", MAC:"96:89:30:88:01:b9", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil)}} Jan 17 12:25:37.577143 containerd[1462]: 2025-01-17 12:25:37.574 [INFO][5434] cni-plugin/k8s.go 500: Wrote updated endpoint to datastore ContainerID="fbf625681b4fab031943010d087060f147aaf2667f2cce95d0b8b5ec5043f7dd" Namespace="default" Pod="test-pod-1" WorkloadEndpoint="10.0.0.160-k8s-test--pod--1-eth0" Jan 17 12:25:37.600726 containerd[1462]: time="2025-01-17T12:25:37.600522651Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Jan 17 12:25:37.600726 containerd[1462]: time="2025-01-17T12:25:37.600614628Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Jan 17 12:25:37.601479 containerd[1462]: time="2025-01-17T12:25:37.600658581Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Jan 17 12:25:37.601479 containerd[1462]: time="2025-01-17T12:25:37.601392946Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Jan 17 12:25:37.620840 systemd[1]: Started cri-containerd-fbf625681b4fab031943010d087060f147aaf2667f2cce95d0b8b5ec5043f7dd.scope - libcontainer container fbf625681b4fab031943010d087060f147aaf2667f2cce95d0b8b5ec5043f7dd. 
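
The nfsidmap warnings a few entries back ("does not map into domain 'localdomain'") are NFSv4 ID mapping at work: the owner principal carries the server's domain, the client compares it against its own idmapd domain, and on mismatch the ID falls back to the anonymous user. A toy sketch of that comparison (real idmapd reads /etc/idmapd.conf and consults NSS, which is where the nss_getpwnam in the log comes from):

package main

import (
	"fmt"
	"strings"
)

// mapPrincipal accepts user@domain only when the domain matches the
// local idmapd domain; otherwise the id is unmappable.
func mapPrincipal(principal, localDomain string) (user string, ok bool) {
	name, domain, found := strings.Cut(principal, "@")
	if !found || !strings.EqualFold(domain, localDomain) {
		return "nobody", false // fall back to the anonymous user
	}
	return name, true
}

func main() {
	user, ok := mapPrincipal("root@nfs-server-provisioner.default.svc.cluster.local", "localdomain")
	fmt.Println(user, ok) // nobody false, which is why the log warns
}
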
Jan 17 12:25:37.632515 systemd-resolved[1334]: Failed to determine the local hostname and LLMNR/mDNS names, ignoring: No such device or address
Jan 17 12:25:37.653820 containerd[1462]: time="2025-01-17T12:25:37.653786846Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:test-pod-1,Uid:8ca60de1-530d-4692-b0b4-fd0a1854685b,Namespace:default,Attempt:0,} returns sandbox id \"fbf625681b4fab031943010d087060f147aaf2667f2cce95d0b8b5ec5043f7dd\""
Jan 17 12:25:37.655099 containerd[1462]: time="2025-01-17T12:25:37.655072845Z" level=info msg="PullImage \"ghcr.io/flatcar/nginx:latest\""
Jan 17 12:25:38.026023 containerd[1462]: time="2025-01-17T12:25:38.025982341Z" level=info msg="ImageUpdate event name:\"ghcr.io/flatcar/nginx:latest\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Jan 17 12:25:38.026683 containerd[1462]: time="2025-01-17T12:25:38.026612416Z" level=info msg="stop pulling image ghcr.io/flatcar/nginx:latest: active requests=0, bytes read=61"
Jan 17 12:25:38.029280 containerd[1462]: time="2025-01-17T12:25:38.029244296Z" level=info msg="Pulled image \"ghcr.io/flatcar/nginx:latest\" with image id \"sha256:29ef6eaebfc53650f3a4609edbf9d35e866f56b2c5e01d32d93439031b300f0b\", repo tag \"ghcr.io/flatcar/nginx:latest\", repo digest \"ghcr.io/flatcar/nginx@sha256:eca1d1ff18c7af45f86b7e0b572090f563a676ddca3da2ecff678390366335ad\", size \"71035896\" in 374.142747ms"
Jan 17 12:25:38.029318 containerd[1462]: time="2025-01-17T12:25:38.029277741Z" level=info msg="PullImage \"ghcr.io/flatcar/nginx:latest\" returns image reference \"sha256:29ef6eaebfc53650f3a4609edbf9d35e866f56b2c5e01d32d93439031b300f0b\""
Jan 17 12:25:38.030718 containerd[1462]: time="2025-01-17T12:25:38.030676966Z" level=info msg="CreateContainer within sandbox \"fbf625681b4fab031943010d087060f147aaf2667f2cce95d0b8b5ec5043f7dd\" for container &ContainerMetadata{Name:test,Attempt:0,}"
Jan 17 12:25:38.052242 containerd[1462]: time="2025-01-17T12:25:38.052200330Z" level=info msg="CreateContainer within sandbox \"fbf625681b4fab031943010d087060f147aaf2667f2cce95d0b8b5ec5043f7dd\" for &ContainerMetadata{Name:test,Attempt:0,} returns container id \"ac6c49c889c75acae8bb6a55ddb615f90a1f4c165dd8723245895b53bd5235ec\""
Jan 17 12:25:38.052643 containerd[1462]: time="2025-01-17T12:25:38.052607268Z" level=info msg="StartContainer for \"ac6c49c889c75acae8bb6a55ddb615f90a1f4c165dd8723245895b53bd5235ec\""
Jan 17 12:25:38.078819 systemd[1]: Started cri-containerd-ac6c49c889c75acae8bb6a55ddb615f90a1f4c165dd8723245895b53bd5235ec.scope - libcontainer container ac6c49c889c75acae8bb6a55ddb615f90a1f4c165dd8723245895b53bd5235ec.
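RunPodSandbox, PullImage, CreateContainer, and StartContainer above are containerd's side of the Kubernetes CRI (the gRPC API the kubelet drives). A minimal client sketch that replays just the PullImage step against the same image, assuming containerd's default CRI socket at /run/containerd/containerd.sock (adjust if the node is configured differently):

package main

import (
	"context"
	"log"
	"time"

	"google.golang.org/grpc"
	"google.golang.org/grpc/credentials/insecure"
	runtime "k8s.io/cri-api/pkg/apis/runtime/v1"
)

func main() {
	ctx, cancel := context.WithTimeout(context.Background(), time.Minute)
	defer cancel()

	// Dial containerd's CRI endpoint over the local unix socket.
	conn, err := grpc.DialContext(ctx, "unix:///run/containerd/containerd.sock",
		grpc.WithTransportCredentials(insecure.NewCredentials()))
	if err != nil {
		log.Fatal(err)
	}
	defer conn.Close()

	// The same RPC behind the "PullImage \"ghcr.io/flatcar/nginx:latest\"" entry.
	images := runtime.NewImageServiceClient(conn)
	resp, err := images.PullImage(ctx, &runtime.PullImageRequest{
		Image: &runtime.ImageSpec{Image: "ghcr.io/flatcar/nginx:latest"},
	})
	if err != nil {
		log.Fatal(err)
	}
	// On this node the returned ref was the sha256:29ef6e... image id logged above.
	log.Printf("pulled image ref: %s", resp.ImageRef)
}

Sandbox and container creation follow the same pattern through runtime.NewRuntimeServiceClient and its RunPodSandbox/CreateContainer/StartContainer calls; the "Started cri-containerd-<id>.scope" systemd entries are the transient scopes containerd-shim-runc-v2 places each container into.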
Jan 17 12:25:38.102655 containerd[1462]: time="2025-01-17T12:25:38.102597473Z" level=info msg="StartContainer for \"ac6c49c889c75acae8bb6a55ddb615f90a1f4c165dd8723245895b53bd5235ec\" returns successfully"
Jan 17 12:25:38.412020 kubelet[1774]: E0117 12:25:38.411917 1774 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests"
Jan 17 12:25:38.486846 kubelet[1774]: I0117 12:25:38.486801 1774 pod_startup_latency_tracker.go:102] "Observed pod startup duration" pod="default/test-pod-1" podStartSLOduration=23.112100755 podStartE2EDuration="23.486760591s" podCreationTimestamp="2025-01-17 12:25:15 +0000 UTC" firstStartedPulling="2025-01-17 12:25:37.654806987 +0000 UTC m=+100.918762786" lastFinishedPulling="2025-01-17 12:25:38.029466822 +0000 UTC m=+101.293422622" observedRunningTime="2025-01-17 12:25:38.486749981 +0000 UTC m=+101.750705770" watchObservedRunningTime="2025-01-17 12:25:38.486760591 +0000 UTC m=+101.750716380"
Jan 17 12:25:39.075881 systemd-networkd[1390]: cali5ec59c6bf6e: Gained IPv6LL
Jan 17 12:25:39.412793 kubelet[1774]: E0117 12:25:39.412626 1774 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests"
Jan 17 12:25:40.412884 kubelet[1774]: E0117 12:25:40.412850 1774 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests"
Jan 17 12:25:41.413293 kubelet[1774]: E0117 12:25:41.413248 1774 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests"
Jan 17 12:25:42.413651 kubelet[1774]: E0117 12:25:42.413608 1774 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests"
Jan 17 12:25:43.414353 kubelet[1774]: E0117 12:25:43.414321 1774 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests"
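The pod_startup_latency_tracker entry carries its own arithmetic: podStartE2EDuration is observedRunningTime minus podCreationTimestamp, and podStartSLOduration further subtracts the image-pull window (lastFinishedPulling minus firstStartedPulling). The kubelet takes that window from the monotonic m=+ offsets, which is why the last digit differs by one nanosecond from wall-clock subtraction. A small sketch reproducing the numbers from the logged timestamps:

package main

import (
	"fmt"
	"time"
)

func main() {
	// Timestamps copied from the "Observed pod startup duration" entry above.
	created := time.Date(2025, time.January, 17, 12, 25, 15, 0, time.UTC)
	firstPull := time.Date(2025, time.January, 17, 12, 25, 37, 654806987, time.UTC)
	lastPull := time.Date(2025, time.January, 17, 12, 25, 38, 29466822, time.UTC)
	running := time.Date(2025, time.January, 17, 12, 25, 38, 486760591, time.UTC)

	e2e := running.Sub(created)    // 23.486760591s = podStartE2EDuration
	pull := lastPull.Sub(firstPull) // 374.659835ms wall clock (monotonic reading: 374.659836ms)
	// 23.112100756s here; the log shows 23.112100755 because the kubelet
	// subtracts the monotonic pull window.
	fmt.Println(e2e, e2e-pull)
}

The recurring file_linux.go:61 errors are the kubelet's static-pod file source probing /etc/kubernetes/manifests once per loop; the path simply does not exist on this node, so each probe is logged and ignored.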