Feb 13 19:58:27.902404 kernel: Linux version 6.6.74-flatcar (build@pony-truck.infra.kinvolk.io) (x86_64-cros-linux-gnu-gcc (Gentoo Hardened 13.3.1_p20240614 p17) 13.3.1 20240614, GNU ld (Gentoo 2.42 p3) 2.42.0) #1 SMP PREEMPT_DYNAMIC Thu Feb 13 18:03:41 -00 2025
Feb 13 19:58:27.902431 kernel: Command line: BOOT_IMAGE=/flatcar/vmlinuz-a mount.usr=/dev/mapper/usr verity.usr=PARTUUID=7130c94a-213a-4e5a-8e26-6cce9662f132 rootflags=rw mount.usrflags=ro consoleblank=0 root=LABEL=ROOT console=ttyS0,115200 flatcar.first_boot=detected verity.usrhash=a8740cbac5121ade856b040634ad9badacd879298c24f899668a59d96c178b13
Feb 13 19:58:27.902445 kernel: BIOS-provided physical RAM map:
Feb 13 19:58:27.902454 kernel: BIOS-e820: [mem 0x0000000000000000-0x000000000009fbff] usable
Feb 13 19:58:27.902462 kernel: BIOS-e820: [mem 0x000000000009fc00-0x000000000009ffff] reserved
Feb 13 19:58:27.902470 kernel: BIOS-e820: [mem 0x00000000000f0000-0x00000000000fffff] reserved
Feb 13 19:58:27.902480 kernel: BIOS-e820: [mem 0x0000000000100000-0x000000009cfdbfff] usable
Feb 13 19:58:27.902489 kernel: BIOS-e820: [mem 0x000000009cfdc000-0x000000009cffffff] reserved
Feb 13 19:58:27.902497 kernel: BIOS-e820: [mem 0x00000000b0000000-0x00000000bfffffff] reserved
Feb 13 19:58:27.902508 kernel: BIOS-e820: [mem 0x00000000fed1c000-0x00000000fed1ffff] reserved
Feb 13 19:58:27.902517 kernel: BIOS-e820: [mem 0x00000000feffc000-0x00000000feffffff] reserved
Feb 13 19:58:27.902525 kernel: BIOS-e820: [mem 0x00000000fffc0000-0x00000000ffffffff] reserved
Feb 13 19:58:27.902538 kernel: BIOS-e820: [mem 0x000000fd00000000-0x000000ffffffffff] reserved
Feb 13 19:58:27.902547 kernel: NX (Execute Disable) protection: active
Feb 13 19:58:27.902557 kernel: APIC: Static calls initialized
Feb 13 19:58:27.902572 kernel: SMBIOS 2.8 present.
Feb 13 19:58:27.902581 kernel: DMI: QEMU Standard PC (Q35 + ICH9, 2009), BIOS 1.16.2-debian-1.16.2-1 04/01/2014
Feb 13 19:58:27.902590 kernel: Hypervisor detected: KVM
Feb 13 19:58:27.902599 kernel: kvm-clock: Using msrs 4b564d01 and 4b564d00
Feb 13 19:58:27.902608 kernel: kvm-clock: using sched offset of 2620047289 cycles
Feb 13 19:58:27.902618 kernel: clocksource: kvm-clock: mask: 0xffffffffffffffff max_cycles: 0x1cd42e4dffb, max_idle_ns: 881590591483 ns
Feb 13 19:58:27.902628 kernel: tsc: Detected 2794.750 MHz processor
Feb 13 19:58:27.902637 kernel: e820: update [mem 0x00000000-0x00000fff] usable ==> reserved
Feb 13 19:58:27.902647 kernel: e820: remove [mem 0x000a0000-0x000fffff] usable
Feb 13 19:58:27.902656 kernel: last_pfn = 0x9cfdc max_arch_pfn = 0x400000000
Feb 13 19:58:27.902669 kernel: MTRR map: 4 entries (3 fixed + 1 variable; max 19), built from 8 variable MTRRs
Feb 13 19:58:27.902678 kernel: x86/PAT: Configuration [0-7]: WB WC UC- UC WB WP UC- WT
Feb 13 19:58:27.902688 kernel: Using GB pages for direct mapping
Feb 13 19:58:27.902697 kernel: ACPI: Early table checksum verification disabled
Feb 13 19:58:27.902707 kernel: ACPI: RSDP 0x00000000000F59D0 000014 (v00 BOCHS )
Feb 13 19:58:27.902716 kernel: ACPI: RSDT 0x000000009CFE2408 000038 (v01 BOCHS BXPC 00000001 BXPC 00000001)
Feb 13 19:58:27.902726 kernel: ACPI: FACP 0x000000009CFE21E8 0000F4 (v03 BOCHS BXPC 00000001 BXPC 00000001)
Feb 13 19:58:27.902735 kernel: ACPI: DSDT 0x000000009CFE0040 0021A8 (v01 BOCHS BXPC 00000001 BXPC 00000001)
Feb 13 19:58:27.902747 kernel: ACPI: FACS 0x000000009CFE0000 000040
Feb 13 19:58:27.902758 kernel: ACPI: APIC 0x000000009CFE22DC 000090 (v01 BOCHS BXPC 00000001 BXPC 00000001)
Feb 13 19:58:27.902769 kernel: ACPI: HPET 0x000000009CFE236C 000038 (v01 BOCHS BXPC 00000001 BXPC 00000001)
Feb 13 19:58:27.902779 kernel: ACPI: MCFG 0x000000009CFE23A4 00003C (v01 BOCHS BXPC 00000001 BXPC 00000001)
Feb 13 19:58:27.902790 kernel: ACPI: WAET 0x000000009CFE23E0 000028 (v01 BOCHS BXPC 00000001 BXPC 00000001)
Feb 13 19:58:27.902801 kernel: ACPI: Reserving FACP table memory at [mem 0x9cfe21e8-0x9cfe22db]
Feb 13 19:58:27.902813 kernel: ACPI: Reserving DSDT table memory at [mem 0x9cfe0040-0x9cfe21e7]
Feb 13 19:58:27.902827 kernel: ACPI: Reserving FACS table memory at [mem 0x9cfe0000-0x9cfe003f]
Feb 13 19:58:27.902839 kernel: ACPI: Reserving APIC table memory at [mem 0x9cfe22dc-0x9cfe236b]
Feb 13 19:58:27.902849 kernel: ACPI: Reserving HPET table memory at [mem 0x9cfe236c-0x9cfe23a3]
Feb 13 19:58:27.902859 kernel: ACPI: Reserving MCFG table memory at [mem 0x9cfe23a4-0x9cfe23df]
Feb 13 19:58:27.902869 kernel: ACPI: Reserving WAET table memory at [mem 0x9cfe23e0-0x9cfe2407]
Feb 13 19:58:27.902881 kernel: No NUMA configuration found
Feb 13 19:58:27.902891 kernel: Faking a node at [mem 0x0000000000000000-0x000000009cfdbfff]
Feb 13 19:58:27.902901 kernel: NODE_DATA(0) allocated [mem 0x9cfd6000-0x9cfdbfff]
Feb 13 19:58:27.902933 kernel: Zone ranges:
Feb 13 19:58:27.902943 kernel: DMA [mem 0x0000000000001000-0x0000000000ffffff]
Feb 13 19:58:27.902953 kernel: DMA32 [mem 0x0000000001000000-0x000000009cfdbfff]
Feb 13 19:58:27.902963 kernel: Normal empty
Feb 13 19:58:27.902973 kernel: Movable zone start for each node
Feb 13 19:58:27.902982 kernel: Early memory node ranges
Feb 13 19:58:27.902992 kernel: node 0: [mem 0x0000000000001000-0x000000000009efff]
Feb 13 19:58:27.903002 kernel: node 0: [mem 0x0000000000100000-0x000000009cfdbfff]
Feb 13 19:58:27.903012 kernel: Initmem setup node 0 [mem 0x0000000000001000-0x000000009cfdbfff]
Feb 13 19:58:27.903025 kernel: On node 0, zone DMA: 1 pages in unavailable ranges
Feb 13 19:58:27.903038 kernel: On node 0, zone DMA: 97 pages in unavailable ranges
Feb 13 19:58:27.903048 kernel: On node 0, zone DMA32: 12324 pages in unavailable ranges
Feb 13 19:58:27.903058 kernel: ACPI: PM-Timer IO Port: 0x608
Feb 13 19:58:27.903067 kernel: ACPI: LAPIC_NMI (acpi_id[0xff] dfl dfl lint[0x1])
Feb 13 19:58:27.903077 kernel: IOAPIC[0]: apic_id 0, version 17, address 0xfec00000, GSI 0-23
Feb 13 19:58:27.903087 kernel: ACPI: INT_SRC_OVR (bus 0 bus_irq 0 global_irq 2 dfl dfl)
Feb 13 19:58:27.903097 kernel: ACPI: INT_SRC_OVR (bus 0 bus_irq 5 global_irq 5 high level)
Feb 13 19:58:27.903107 kernel: ACPI: INT_SRC_OVR (bus 0 bus_irq 9 global_irq 9 high level)
Feb 13 19:58:27.903121 kernel: ACPI: INT_SRC_OVR (bus 0 bus_irq 10 global_irq 10 high level)
Feb 13 19:58:27.903131 kernel: ACPI: INT_SRC_OVR (bus 0 bus_irq 11 global_irq 11 high level)
Feb 13 19:58:27.903141 kernel: ACPI: Using ACPI (MADT) for SMP configuration information
Feb 13 19:58:27.903151 kernel: ACPI: HPET id: 0x8086a201 base: 0xfed00000
Feb 13 19:58:27.903160 kernel: TSC deadline timer available
Feb 13 19:58:27.903170 kernel: smpboot: Allowing 4 CPUs, 0 hotplug CPUs
Feb 13 19:58:27.903180 kernel: kvm-guest: APIC: eoi() replaced with kvm_guest_apic_eoi_write()
Feb 13 19:58:27.903190 kernel: kvm-guest: KVM setup pv remote TLB flush
Feb 13 19:58:27.903202 kernel: kvm-guest: setup PV sched yield
Feb 13 19:58:27.903215 kernel: [mem 0xc0000000-0xfed1bfff] available for PCI devices
Feb 13 19:58:27.903225 kernel: Booting paravirtualized kernel on KVM
Feb 13 19:58:27.903235 kernel: clocksource: refined-jiffies: mask: 0xffffffff max_cycles: 0xffffffff, max_idle_ns: 1910969940391419 ns
Feb 13 19:58:27.903245 kernel: setup_percpu: NR_CPUS:512 nr_cpumask_bits:4 nr_cpu_ids:4 nr_node_ids:1
Feb 13 19:58:27.903255 kernel: percpu: Embedded 58 pages/cpu s197032 r8192 d32344 u524288
Feb 13 19:58:27.903274 kernel: pcpu-alloc: s197032 r8192 d32344 u524288 alloc=1*2097152
Feb 13 19:58:27.903283 kernel: pcpu-alloc: [0] 0 1 2 3
Feb 13 19:58:27.903293 kernel: kvm-guest: PV spinlocks enabled
Feb 13 19:58:27.903303 kernel: PV qspinlock hash table entries: 256 (order: 0, 4096 bytes, linear)
Feb 13 19:58:27.903318 kernel: Kernel command line: rootflags=rw mount.usrflags=ro BOOT_IMAGE=/flatcar/vmlinuz-a mount.usr=/dev/mapper/usr verity.usr=PARTUUID=7130c94a-213a-4e5a-8e26-6cce9662f132 rootflags=rw mount.usrflags=ro consoleblank=0 root=LABEL=ROOT console=ttyS0,115200 flatcar.first_boot=detected verity.usrhash=a8740cbac5121ade856b040634ad9badacd879298c24f899668a59d96c178b13
Feb 13 19:58:27.903340 kernel: Unknown kernel command line parameters "BOOT_IMAGE=/flatcar/vmlinuz-a", will be passed to user space.
Feb 13 19:58:27.903351 kernel: random: crng init done
Feb 13 19:58:27.903361 kernel: Dentry cache hash table entries: 524288 (order: 10, 4194304 bytes, linear)
Feb 13 19:58:27.903390 kernel: Inode-cache hash table entries: 262144 (order: 9, 2097152 bytes, linear)
Feb 13 19:58:27.903410 kernel: Fallback order for Node 0: 0
Feb 13 19:58:27.903437 kernel: Built 1 zonelists, mobility grouping on. Total pages: 632732
Feb 13 19:58:27.903457 kernel: Policy zone: DMA32
Feb 13 19:58:27.903488 kernel: mem auto-init: stack:off, heap alloc:off, heap free:off
Feb 13 19:58:27.903501 kernel: Memory: 2434592K/2571752K available (12288K kernel code, 2301K rwdata, 22728K rodata, 42840K init, 2352K bss, 136900K reserved, 0K cma-reserved)
Feb 13 19:58:27.903512 kernel: SLUB: HWalign=64, Order=0-3, MinObjects=0, CPUs=4, Nodes=1
Feb 13 19:58:27.903522 kernel: ftrace: allocating 37921 entries in 149 pages
Feb 13 19:58:27.903533 kernel: ftrace: allocated 149 pages with 4 groups
Feb 13 19:58:27.903543 kernel: Dynamic Preempt: voluntary
Feb 13 19:58:27.903554 kernel: rcu: Preemptible hierarchical RCU implementation.
Feb 13 19:58:27.903566 kernel: rcu: RCU event tracing is enabled.
Feb 13 19:58:27.903576 kernel: rcu: RCU restricting CPUs from NR_CPUS=512 to nr_cpu_ids=4.
Feb 13 19:58:27.903592 kernel: Trampoline variant of Tasks RCU enabled.
Feb 13 19:58:27.903602 kernel: Rude variant of Tasks RCU enabled.
Feb 13 19:58:27.903613 kernel: Tracing variant of Tasks RCU enabled.
Feb 13 19:58:27.903623 kernel: rcu: RCU calculated value of scheduler-enlistment delay is 100 jiffies.
Feb 13 19:58:27.903637 kernel: rcu: Adjusting geometry for rcu_fanout_leaf=16, nr_cpu_ids=4
Feb 13 19:58:27.903648 kernel: NR_IRQS: 33024, nr_irqs: 456, preallocated irqs: 16
Feb 13 19:58:27.903658 kernel: rcu: srcu_init: Setting srcu_struct sizes based on contention.
Feb 13 19:58:27.903669 kernel: Console: colour VGA+ 80x25
Feb 13 19:58:27.903679 kernel: printk: console [ttyS0] enabled
Feb 13 19:58:27.903693 kernel: ACPI: Core revision 20230628
Feb 13 19:58:27.903704 kernel: clocksource: hpet: mask: 0xffffffff max_cycles: 0xffffffff, max_idle_ns: 19112604467 ns
Feb 13 19:58:27.903716 kernel: APIC: Switch to symmetric I/O mode setup
Feb 13 19:58:27.903727 kernel: x2apic enabled
Feb 13 19:58:27.903737 kernel: APIC: Switched APIC routing to: physical x2apic
Feb 13 19:58:27.903748 kernel: kvm-guest: APIC: send_IPI_mask() replaced with kvm_send_ipi_mask()
Feb 13 19:58:27.903758 kernel: kvm-guest: APIC: send_IPI_mask_allbutself() replaced with kvm_send_ipi_mask_allbutself()
Feb 13 19:58:27.903769 kernel: kvm-guest: setup PV IPIs
Feb 13 19:58:27.903792 kernel: ..TIMER: vector=0x30 apic1=0 pin1=2 apic2=-1 pin2=-1
Feb 13 19:58:27.903803 kernel: tsc: Marking TSC unstable due to TSCs unsynchronized
Feb 13 19:58:27.903814 kernel: Calibrating delay loop (skipped) preset value.. 5589.50 BogoMIPS (lpj=2794750)
Feb 13 19:58:27.903825 kernel: x86/cpu: User Mode Instruction Prevention (UMIP) activated
Feb 13 19:58:27.903839 kernel: Last level iTLB entries: 4KB 512, 2MB 255, 4MB 127
Feb 13 19:58:27.903849 kernel: Last level dTLB entries: 4KB 512, 2MB 255, 4MB 127, 1GB 0
Feb 13 19:58:27.903860 kernel: Spectre V1 : Mitigation: usercopy/swapgs barriers and __user pointer sanitization
Feb 13 19:58:27.903871 kernel: Spectre V2 : Mitigation: Retpolines
Feb 13 19:58:27.903882 kernel: Spectre V2 : Spectre v2 / SpectreRSB mitigation: Filling RSB on context switch
Feb 13 19:58:27.903896 kernel: Spectre V2 : Spectre v2 / SpectreRSB : Filling RSB on VMEXIT
Feb 13 19:58:27.903927 kernel: Spectre V2 : Enabling Speculation Barrier for firmware calls
Feb 13 19:58:27.903941 kernel: RETBleed: Mitigation: untrained return thunk
Feb 13 19:58:27.903953 kernel: Spectre V2 : mitigation: Enabling conditional Indirect Branch Prediction Barrier
Feb 13 19:58:27.903964 kernel: Speculative Store Bypass: Mitigation: Speculative Store Bypass disabled via prctl
Feb 13 19:58:27.903974 kernel: Speculative Return Stack Overflow: IBPB-extending microcode not applied!
Feb 13 19:58:27.903986 kernel: Speculative Return Stack Overflow: WARNING: See https://kernel.org/doc/html/latest/admin-guide/hw-vuln/srso.html for mitigation options.
Feb 13 19:58:27.903997 kernel: Speculative Return Stack Overflow: Vulnerable: Safe RET, no microcode
Feb 13 19:58:27.904012 kernel: x86/fpu: Supporting XSAVE feature 0x001: 'x87 floating point registers'
Feb 13 19:58:27.904023 kernel: x86/fpu: Supporting XSAVE feature 0x002: 'SSE registers'
Feb 13 19:58:27.904034 kernel: x86/fpu: Supporting XSAVE feature 0x004: 'AVX registers'
Feb 13 19:58:27.904045 kernel: x86/fpu: xstate_offset[2]: 576, xstate_sizes[2]: 256
Feb 13 19:58:27.904056 kernel: x86/fpu: Enabled xstate features 0x7, context size is 832 bytes, using 'compacted' format.
Feb 13 19:58:27.904067 kernel: Freeing SMP alternatives memory: 32K
Feb 13 19:58:27.904078 kernel: pid_max: default: 32768 minimum: 301
Feb 13 19:58:27.904088 kernel: LSM: initializing lsm=lockdown,capability,landlock,selinux,integrity
Feb 13 19:58:27.904099 kernel: landlock: Up and running.
Feb 13 19:58:27.904113 kernel: SELinux: Initializing.
Feb 13 19:58:27.904124 kernel: Mount-cache hash table entries: 8192 (order: 4, 65536 bytes, linear)
Feb 13 19:58:27.904134 kernel: Mountpoint-cache hash table entries: 8192 (order: 4, 65536 bytes, linear)
Feb 13 19:58:27.904145 kernel: smpboot: CPU0: AMD EPYC 7402P 24-Core Processor (family: 0x17, model: 0x31, stepping: 0x0)
Feb 13 19:58:27.904156 kernel: RCU Tasks: Setting shift to 2 and lim to 1 rcu_task_cb_adjust=1 rcu_task_cpu_ids=4.
Feb 13 19:58:27.904167 kernel: RCU Tasks Rude: Setting shift to 2 and lim to 1 rcu_task_cb_adjust=1 rcu_task_cpu_ids=4.
Feb 13 19:58:27.904178 kernel: RCU Tasks Trace: Setting shift to 2 and lim to 1 rcu_task_cb_adjust=1 rcu_task_cpu_ids=4.
Feb 13 19:58:27.904189 kernel: Performance Events: Fam17h+ core perfctr, AMD PMU driver.
Feb 13 19:58:27.904203 kernel: ... version: 0
Feb 13 19:58:27.904217 kernel: ... bit width: 48
Feb 13 19:58:27.904227 kernel: ... generic registers: 6
Feb 13 19:58:27.904238 kernel: ... value mask: 0000ffffffffffff
Feb 13 19:58:27.904249 kernel: ... max period: 00007fffffffffff
Feb 13 19:58:27.904269 kernel: ... fixed-purpose events: 0
Feb 13 19:58:27.904280 kernel: ... event mask: 000000000000003f
Feb 13 19:58:27.904291 kernel: signal: max sigframe size: 1776
Feb 13 19:58:27.904302 kernel: rcu: Hierarchical SRCU implementation.
Feb 13 19:58:27.904313 kernel: rcu: Max phase no-delay instances is 400.
Feb 13 19:58:27.904327 kernel: smp: Bringing up secondary CPUs ...
Feb 13 19:58:27.904338 kernel: smpboot: x86: Booting SMP configuration:
Feb 13 19:58:27.904349 kernel: .... node #0, CPUs: #1 #2 #3
Feb 13 19:58:27.904359 kernel: smp: Brought up 1 node, 4 CPUs
Feb 13 19:58:27.904370 kernel: smpboot: Max logical packages: 1
Feb 13 19:58:27.904381 kernel: smpboot: Total of 4 processors activated (22358.00 BogoMIPS)
Feb 13 19:58:27.904392 kernel: devtmpfs: initialized
Feb 13 19:58:27.904402 kernel: x86/mm: Memory block size: 128MB
Feb 13 19:58:27.904413 kernel: clocksource: jiffies: mask: 0xffffffff max_cycles: 0xffffffff, max_idle_ns: 1911260446275000 ns
Feb 13 19:58:27.904427 kernel: futex hash table entries: 1024 (order: 4, 65536 bytes, linear)
Feb 13 19:58:27.904438 kernel: pinctrl core: initialized pinctrl subsystem
Feb 13 19:58:27.904449 kernel: NET: Registered PF_NETLINK/PF_ROUTE protocol family
Feb 13 19:58:27.904460 kernel: audit: initializing netlink subsys (disabled)
Feb 13 19:58:27.904471 kernel: audit: type=2000 audit(1739476707.741:1): state=initialized audit_enabled=0 res=1
Feb 13 19:58:27.904482 kernel: thermal_sys: Registered thermal governor 'step_wise'
Feb 13 19:58:27.904492 kernel: thermal_sys: Registered thermal governor 'user_space'
Feb 13 19:58:27.904503 kernel: cpuidle: using governor menu
Feb 13 19:58:27.904514 kernel: acpiphp: ACPI Hot Plug PCI Controller Driver version: 0.5
Feb 13 19:58:27.904528 kernel: dca service started, version 1.12.1
Feb 13 19:58:27.904539 kernel: PCI: MMCONFIG for domain 0000 [bus 00-ff] at [mem 0xb0000000-0xbfffffff] (base 0xb0000000)
Feb 13 19:58:27.904550 kernel: PCI: MMCONFIG at [mem 0xb0000000-0xbfffffff] reserved as E820 entry
Feb 13 19:58:27.904561 kernel: PCI: Using configuration type 1 for base access
Feb 13 19:58:27.904572 kernel: kprobes: kprobe jump-optimization is enabled. All kprobes are optimized if possible.
Feb 13 19:58:27.904583 kernel: HugeTLB: registered 1.00 GiB page size, pre-allocated 0 pages
Feb 13 19:58:27.904594 kernel: HugeTLB: 16380 KiB vmemmap can be freed for a 1.00 GiB page
Feb 13 19:58:27.904605 kernel: HugeTLB: registered 2.00 MiB page size, pre-allocated 0 pages
Feb 13 19:58:27.904615 kernel: HugeTLB: 28 KiB vmemmap can be freed for a 2.00 MiB page
Feb 13 19:58:27.904629 kernel: ACPI: Added _OSI(Module Device)
Feb 13 19:58:27.904640 kernel: ACPI: Added _OSI(Processor Device)
Feb 13 19:58:27.904650 kernel: ACPI: Added _OSI(3.0 _SCP Extensions)
Feb 13 19:58:27.904661 kernel: ACPI: Added _OSI(Processor Aggregator Device)
Feb 13 19:58:27.904672 kernel: ACPI: 1 ACPI AML tables successfully acquired and loaded
Feb 13 19:58:27.904683 kernel: ACPI: _OSC evaluation for CPUs failed, trying _PDC
Feb 13 19:58:27.904694 kernel: ACPI: Interpreter enabled
Feb 13 19:58:27.904705 kernel: ACPI: PM: (supports S0 S3 S5)
Feb 13 19:58:27.904715 kernel: ACPI: Using IOAPIC for interrupt routing
Feb 13 19:58:27.904730 kernel: PCI: Using host bridge windows from ACPI; if necessary, use "pci=nocrs" and report a bug
Feb 13 19:58:27.904741 kernel: PCI: Using E820 reservations for host bridge windows
Feb 13 19:58:27.904752 kernel: ACPI: Enabled 2 GPEs in block 00 to 3F
Feb 13 19:58:27.904762 kernel: ACPI: PCI Root Bridge [PCI0] (domain 0000 [bus 00-ff])
Feb 13 19:58:27.905041 kernel: acpi PNP0A08:00: _OSC: OS supports [ExtendedConfig ASPM ClockPM Segments MSI HPX-Type3]
Feb 13 19:58:27.905308 kernel: acpi PNP0A08:00: _OSC: platform does not support [PCIeHotplug LTR]
Feb 13 19:58:27.905471 kernel: acpi PNP0A08:00: _OSC: OS now controls [PME AER PCIeCapability]
Feb 13 19:58:27.905487 kernel: PCI host bridge to bus 0000:00
Feb 13 19:58:27.905700 kernel: pci_bus 0000:00: root bus resource [io 0x0000-0x0cf7 window]
Feb 13 19:58:27.905863 kernel: pci_bus 0000:00: root bus resource [io 0x0d00-0xffff window]
Feb 13 19:58:27.906035 kernel: pci_bus 0000:00: root bus resource [mem 0x000a0000-0x000bffff window]
Feb 13 19:58:27.906186 kernel: pci_bus 0000:00: root bus resource [mem 0x9d000000-0xafffffff window]
Feb 13 19:58:27.906344 kernel: pci_bus 0000:00: root bus resource [mem 0xc0000000-0xfebfffff window]
Feb 13 19:58:27.906491 kernel: pci_bus 0000:00: root bus resource [mem 0x100000000-0x8ffffffff window]
Feb 13 19:58:27.906644 kernel: pci_bus 0000:00: root bus resource [bus 00-ff]
Feb 13 19:58:27.906869 kernel: pci 0000:00:00.0: [8086:29c0] type 00 class 0x060000
Feb 13 19:58:27.907087 kernel: pci 0000:00:01.0: [1234:1111] type 00 class 0x030000
Feb 13 19:58:27.907344 kernel: pci 0000:00:01.0: reg 0x10: [mem 0xfd000000-0xfdffffff pref]
Feb 13 19:58:27.907522 kernel: pci 0000:00:01.0: reg 0x18: [mem 0xfebd0000-0xfebd0fff]
Feb 13 19:58:27.907689 kernel: pci 0000:00:01.0: reg 0x30: [mem 0xfebc0000-0xfebcffff pref]
Feb 13 19:58:27.907851 kernel: pci 0000:00:01.0: Video device with shadowed ROM at [mem 0x000c0000-0x000dffff]
Feb 13 19:58:27.908108 kernel: pci 0000:00:02.0: [1af4:1005] type 00 class 0x00ff00
Feb 13 19:58:27.908284 kernel: pci 0000:00:02.0: reg 0x10: [io 0xc0c0-0xc0df]
Feb 13 19:58:27.908508 kernel: pci 0000:00:02.0: reg 0x14: [mem 0xfebd1000-0xfebd1fff]
Feb 13 19:58:27.908683 kernel: pci 0000:00:02.0: reg 0x20: [mem 0xfe000000-0xfe003fff 64bit pref]
Feb 13 19:58:27.908870 kernel: pci 0000:00:03.0: [1af4:1001] type 00 class 0x010000
Feb 13 19:58:27.909059 kernel: pci 0000:00:03.0: reg 0x10: [io 0xc000-0xc07f]
Feb 13 19:58:27.909225 kernel: pci 0000:00:03.0: reg 0x14: [mem 0xfebd2000-0xfebd2fff]
Feb 13 19:58:27.909408 kernel: pci 0000:00:03.0: reg 0x20: [mem 0xfe004000-0xfe007fff 64bit pref]
Feb 13 19:58:27.909591 kernel: pci 0000:00:04.0: [1af4:1000] type 00 class 0x020000
Feb 13 19:58:27.909757 kernel: pci 0000:00:04.0: reg 0x10: [io 0xc0e0-0xc0ff]
Feb 13 19:58:27.909938 kernel: pci 0000:00:04.0: reg 0x14: [mem 0xfebd3000-0xfebd3fff]
Feb 13 19:58:27.910111 kernel: pci 0000:00:04.0: reg 0x20: [mem 0xfe008000-0xfe00bfff 64bit pref]
Feb 13 19:58:27.910287 kernel: pci 0000:00:04.0: reg 0x30: [mem 0xfeb80000-0xfebbffff pref]
Feb 13 19:58:27.910479 kernel: pci 0000:00:1f.0: [8086:2918] type 00 class 0x060100
Feb 13 19:58:27.910654 kernel: pci 0000:00:1f.0: quirk: [io 0x0600-0x067f] claimed by ICH6 ACPI/GPIO/TCO
Feb 13 19:58:27.910839 kernel: pci 0000:00:1f.2: [8086:2922] type 00 class 0x010601
Feb 13 19:58:27.911023 kernel: pci 0000:00:1f.2: reg 0x20: [io 0xc100-0xc11f]
Feb 13 19:58:27.911188 kernel: pci 0000:00:1f.2: reg 0x24: [mem 0xfebd4000-0xfebd4fff]
Feb 13 19:58:27.911380 kernel: pci 0000:00:1f.3: [8086:2930] type 00 class 0x0c0500
Feb 13 19:58:27.911543 kernel: pci 0000:00:1f.3: reg 0x20: [io 0x0700-0x073f]
Feb 13 19:58:27.911566 kernel: ACPI: PCI: Interrupt link LNKA configured for IRQ 10
Feb 13 19:58:27.911578 kernel: ACPI: PCI: Interrupt link LNKB configured for IRQ 10
Feb 13 19:58:27.911589 kernel: ACPI: PCI: Interrupt link LNKC configured for IRQ 11
Feb 13 19:58:27.911600 kernel: ACPI: PCI: Interrupt link LNKD configured for IRQ 11
Feb 13 19:58:27.911611 kernel: ACPI: PCI: Interrupt link LNKE configured for IRQ 10
Feb 13 19:58:27.911622 kernel: ACPI: PCI: Interrupt link LNKF configured for IRQ 10
Feb 13 19:58:27.911633 kernel: ACPI: PCI: Interrupt link LNKG configured for IRQ 11
Feb 13 19:58:27.911644 kernel: ACPI: PCI: Interrupt link LNKH configured for IRQ 11
Feb 13 19:58:27.911656 kernel: ACPI: PCI: Interrupt link GSIA configured for IRQ 16
Feb 13 19:58:27.911670 kernel: ACPI: PCI: Interrupt link GSIB configured for IRQ 17
Feb 13 19:58:27.911681 kernel: ACPI: PCI: Interrupt link GSIC configured for IRQ 18
Feb 13 19:58:27.911692 kernel: ACPI: PCI: Interrupt link GSID configured for IRQ 19
Feb 13 19:58:27.911703 kernel: ACPI: PCI: Interrupt link GSIE configured for IRQ 20
Feb 13 19:58:27.911714 kernel: ACPI: PCI: Interrupt link GSIF configured for IRQ 21
Feb 13 19:58:27.911725 kernel: ACPI: PCI: Interrupt link GSIG configured for IRQ 22
Feb 13 19:58:27.911736 kernel: ACPI: PCI: Interrupt link GSIH configured for IRQ 23
Feb 13 19:58:27.911748 kernel: iommu: Default domain type: Translated
Feb 13 19:58:27.911759 kernel: iommu: DMA domain TLB invalidation policy: lazy mode
Feb 13 19:58:27.911773 kernel: PCI: Using ACPI for IRQ routing
Feb 13 19:58:27.911785 kernel: PCI: pci_cache_line_size set to 64 bytes
Feb 13 19:58:27.911796 kernel: e820: reserve RAM buffer [mem 0x0009fc00-0x0009ffff]
Feb 13 19:58:27.911807 kernel: e820: reserve RAM buffer [mem 0x9cfdc000-0x9fffffff]
Feb 13 19:58:27.912016 kernel: pci 0000:00:01.0: vgaarb: setting as boot VGA device
Feb 13 19:58:27.912185 kernel: pci 0000:00:01.0: vgaarb: bridge control possible
Feb 13 19:58:27.912359 kernel: pci 0000:00:01.0: vgaarb: VGA device added: decodes=io+mem,owns=io+mem,locks=none
Feb 13 19:58:27.912375 kernel: vgaarb: loaded
Feb 13 19:58:27.912393 kernel: hpet0: at MMIO 0xfed00000, IRQs 2, 8, 0
Feb 13 19:58:27.912405 kernel: hpet0: 3 comparators, 64-bit 100.000000 MHz counter
Feb 13 19:58:27.912416 kernel: clocksource: Switched to clocksource kvm-clock
Feb 13 19:58:27.912427 kernel: VFS: Disk quotas dquot_6.6.0
Feb 13 19:58:27.912438 kernel: VFS: Dquot-cache hash table entries: 512 (order 0, 4096 bytes)
Feb 13 19:58:27.912449 kernel: pnp: PnP ACPI init
Feb 13 19:58:27.912643 kernel: system 00:05: [mem 0xb0000000-0xbfffffff window] has been reserved
Feb 13 19:58:27.912661 kernel: pnp: PnP ACPI: found 6 devices
Feb 13 19:58:27.912673 kernel: clocksource: acpi_pm: mask: 0xffffff max_cycles: 0xffffff, max_idle_ns: 2085701024 ns
Feb 13 19:58:27.912689 kernel: NET: Registered PF_INET protocol family
Feb 13 19:58:27.912701 kernel: IP idents hash table entries: 65536 (order: 7, 524288 bytes, linear)
Feb 13 19:58:27.912713 kernel: tcp_listen_portaddr_hash hash table entries: 2048 (order: 3, 32768 bytes, linear)
Feb 13 19:58:27.912726 kernel: Table-perturb hash table entries: 65536 (order: 6, 262144 bytes, linear)
Feb 13 19:58:27.912740 kernel: TCP established hash table entries: 32768 (order: 6, 262144 bytes, linear)
Feb 13 19:58:27.912752 kernel: TCP bind hash table entries: 32768 (order: 8, 1048576 bytes, linear)
Feb 13 19:58:27.912763 kernel: TCP: Hash tables configured (established 32768 bind 32768)
Feb 13 19:58:27.912775 kernel: UDP hash table entries: 2048 (order: 4, 65536 bytes, linear)
Feb 13 19:58:27.912790 kernel: UDP-Lite hash table entries: 2048 (order: 4, 65536 bytes, linear)
Feb 13 19:58:27.912801 kernel: NET: Registered PF_UNIX/PF_LOCAL protocol family
Feb 13 19:58:27.912813 kernel: NET: Registered PF_XDP protocol family
Feb 13 19:58:27.912983 kernel: pci_bus 0000:00: resource 4 [io 0x0000-0x0cf7 window]
Feb 13 19:58:27.913135 kernel: pci_bus 0000:00: resource 5 [io 0x0d00-0xffff window]
Feb 13 19:58:27.913295 kernel: pci_bus 0000:00: resource 6 [mem 0x000a0000-0x000bffff window]
Feb 13 19:58:27.913448 kernel: pci_bus 0000:00: resource 7 [mem 0x9d000000-0xafffffff window]
Feb 13 19:58:27.913597 kernel: pci_bus 0000:00: resource 8 [mem 0xc0000000-0xfebfffff window]
Feb 13 19:58:27.913747 kernel: pci_bus 0000:00: resource 9 [mem 0x100000000-0x8ffffffff window]
Feb 13 19:58:27.913768 kernel: PCI: CLS 0 bytes, default 64
Feb 13 19:58:27.913780 kernel: Initialise system trusted keyrings
Feb 13 19:58:27.913791 kernel: workingset: timestamp_bits=39 max_order=20 bucket_order=0
Feb 13 19:58:27.913803 kernel: Key type asymmetric registered
Feb 13 19:58:27.913815 kernel: Asymmetric key parser 'x509' registered
Feb 13 19:58:27.913826 kernel: Block layer SCSI generic (bsg) driver version 0.4 loaded (major 251)
Feb 13 19:58:27.913838 kernel: io scheduler mq-deadline registered
Feb 13 19:58:27.913849 kernel: io scheduler kyber registered
Feb 13 19:58:27.913860 kernel: io scheduler bfq registered
Feb 13 19:58:27.913875 kernel: ioatdma: Intel(R) QuickData Technology Driver 5.00
Feb 13 19:58:27.913887 kernel: ACPI: \_SB_.GSIG: Enabled at IRQ 22
Feb 13 19:58:27.913898 kernel: ACPI: \_SB_.GSIH: Enabled at IRQ 23
Feb 13 19:58:27.913986 kernel: ACPI: \_SB_.GSIE: Enabled at IRQ 20
Feb 13 19:58:27.913997 kernel: Serial: 8250/16550 driver, 4 ports, IRQ sharing enabled
Feb 13 19:58:27.914008 kernel: 00:03: ttyS0 at I/O 0x3f8 (irq = 4, base_baud = 115200) is a 16550A
Feb 13 19:58:27.914020 kernel: i8042: PNP: PS/2 Controller [PNP0303:KBD,PNP0f13:MOU] at 0x60,0x64 irq 1,12
Feb 13 19:58:27.914031 kernel: serio: i8042 KBD port at 0x60,0x64 irq 1
Feb 13 19:58:27.914042 kernel: serio: i8042 AUX port at 0x60,0x64 irq 12
Feb 13 19:58:27.914223 kernel: rtc_cmos 00:04: RTC can wake from S4
Feb 13 19:58:27.914391 kernel: rtc_cmos 00:04: registered as rtc0
Feb 13 19:58:27.914545 kernel: rtc_cmos 00:04: setting system clock to 2025-02-13T19:58:27 UTC (1739476707)
Feb 13 19:58:27.914698 kernel: rtc_cmos 00:04: alarms up to one day, y3k, 242 bytes nvram, hpet irqs
Feb 13 19:58:27.914714 kernel: amd_pstate: the _CPC object is not present in SBIOS or ACPI disabled
Feb 13 19:58:27.914726 kernel: input: AT Translated Set 2 keyboard as /devices/platform/i8042/serio0/input/input0
Feb 13 19:58:27.914738 kernel: NET: Registered PF_INET6 protocol family
Feb 13 19:58:27.914749 kernel: Segment Routing with IPv6
Feb 13 19:58:27.914765 kernel: In-situ OAM (IOAM) with IPv6
Feb 13 19:58:27.914777 kernel: NET: Registered PF_PACKET protocol family
Feb 13 19:58:27.914788 kernel: Key type dns_resolver registered
Feb 13 19:58:27.914800 kernel: IPI shorthand broadcast: enabled
Feb 13 19:58:27.914811 kernel: sched_clock: Marking stable (983003357, 153501295)->(1178210585, -41705933)
Feb 13 19:58:27.914823 kernel: registered taskstats version 1
Feb 13 19:58:27.914834 kernel: Loading compiled-in X.509 certificates
Feb 13 19:58:27.914845 kernel: Loaded X.509 cert 'Kinvolk GmbH: Module signing key for 6.6.74-flatcar: 6e17590ca2768b672aa48f3e0cedc4061febfe93'
Feb 13 19:58:27.914856 kernel: Key type .fscrypt registered
Feb 13 19:58:27.914871 kernel: Key type fscrypt-provisioning registered
Feb 13 19:58:27.914883 kernel: ima: No TPM chip found, activating TPM-bypass!
Feb 13 19:58:27.914894 kernel: ima: Allocated hash algorithm: sha1
Feb 13 19:58:27.914921 kernel: ima: No architecture policies found
Feb 13 19:58:27.914933 kernel: clk: Disabling unused clocks
Feb 13 19:58:27.914944 kernel: Freeing unused kernel image (initmem) memory: 42840K
Feb 13 19:58:27.914956 kernel: Write protecting the kernel read-only data: 36864k
Feb 13 19:58:27.914967 kernel: Freeing unused kernel image (rodata/data gap) memory: 1848K
Feb 13 19:58:27.914983 kernel: Run /init as init process
Feb 13 19:58:27.914995 kernel: with arguments:
Feb 13 19:58:27.915006 kernel: /init
Feb 13 19:58:27.915017 kernel: with environment:
Feb 13 19:58:27.915028 kernel: HOME=/
Feb 13 19:58:27.915039 kernel: TERM=linux
Feb 13 19:58:27.915050 kernel: BOOT_IMAGE=/flatcar/vmlinuz-a
Feb 13 19:58:27.915063 systemd[1]: systemd 255 running in system mode (+PAM +AUDIT +SELINUX -APPARMOR +IMA +SMACK +SECCOMP +GCRYPT -GNUTLS +OPENSSL -ACL +BLKID +CURL +ELFUTILS -FIDO2 +IDN2 -IDN +IPTC +KMOD +LIBCRYPTSETUP +LIBFDISK +PCRE2 -PWQUALITY -P11KIT -QRENCODE +TPM2 +BZIP2 +LZ4 +XZ +ZLIB +ZSTD -BPF_FRAMEWORK -XKBCOMMON +UTMP -SYSVINIT default-hierarchy=unified)
Feb 13 19:58:27.915081 systemd[1]: Detected virtualization kvm.
Feb 13 19:58:27.915093 systemd[1]: Detected architecture x86-64.
Feb 13 19:58:27.915105 systemd[1]: Running in initrd.
Feb 13 19:58:27.915117 systemd[1]: No hostname configured, using default hostname.
Feb 13 19:58:27.915129 systemd[1]: Hostname set to <localhost>.
Feb 13 19:58:27.915141 systemd[1]: Initializing machine ID from VM UUID.
Feb 13 19:58:27.915153 systemd[1]: Queued start job for default target initrd.target.
Feb 13 19:58:27.915166 systemd[1]: Started clevis-luks-askpass.path - Forward Password Requests to Clevis Directory Watch.
Feb 13 19:58:27.915182 systemd[1]: Started systemd-ask-password-console.path - Dispatch Password Requests to Console Directory Watch.
Feb 13 19:58:27.915195 systemd[1]: Expecting device dev-disk-by\x2dlabel-EFI\x2dSYSTEM.device - /dev/disk/by-label/EFI-SYSTEM...
Feb 13 19:58:27.915223 systemd[1]: Expecting device dev-disk-by\x2dlabel-OEM.device - /dev/disk/by-label/OEM...
Feb 13 19:58:27.915240 systemd[1]: Expecting device dev-disk-by\x2dlabel-ROOT.device - /dev/disk/by-label/ROOT...
Feb 13 19:58:27.915253 systemd[1]: Expecting device dev-disk-by\x2dpartlabel-USR\x2dA.device - /dev/disk/by-partlabel/USR-A...
Feb 13 19:58:27.915282 systemd[1]: Expecting device dev-disk-by\x2dpartuuid-7130c94a\x2d213a\x2d4e5a\x2d8e26\x2d6cce9662f132.device - /dev/disk/by-partuuid/7130c94a-213a-4e5a-8e26-6cce9662f132...
Feb 13 19:58:27.915295 systemd[1]: Expecting device dev-mapper-usr.device - /dev/mapper/usr...
Feb 13 19:58:27.915308 systemd[1]: Reached target cryptsetup-pre.target - Local Encrypted Volumes (Pre).
Feb 13 19:58:27.915320 systemd[1]: Reached target cryptsetup.target - Local Encrypted Volumes.
Feb 13 19:58:27.915332 systemd[1]: Reached target paths.target - Path Units.
Feb 13 19:58:27.915344 systemd[1]: Reached target slices.target - Slice Units.
Feb 13 19:58:27.915356 systemd[1]: Reached target swap.target - Swaps.
Feb 13 19:58:27.915368 systemd[1]: Reached target timers.target - Timer Units.
Feb 13 19:58:27.915384 systemd[1]: Listening on iscsid.socket - Open-iSCSI iscsid Socket.
Feb 13 19:58:27.915396 systemd[1]: Listening on iscsiuio.socket - Open-iSCSI iscsiuio Socket.
Feb 13 19:58:27.915409 systemd[1]: Listening on systemd-journald-dev-log.socket - Journal Socket (/dev/log).
Feb 13 19:58:27.915421 systemd[1]: Listening on systemd-journald.socket - Journal Socket.
Feb 13 19:58:27.915433 systemd[1]: Listening on systemd-networkd.socket - Network Service Netlink Socket.
Feb 13 19:58:27.915446 systemd[1]: Listening on systemd-udevd-control.socket - udev Control Socket.
Feb 13 19:58:27.915459 systemd[1]: Listening on systemd-udevd-kernel.socket - udev Kernel Socket.
Feb 13 19:58:27.915471 systemd[1]: Reached target sockets.target - Socket Units.
Feb 13 19:58:27.915483 systemd[1]: Starting ignition-setup-pre.service - Ignition env setup...
Feb 13 19:58:27.915500 systemd[1]: Starting kmod-static-nodes.service - Create List of Static Device Nodes...
Feb 13 19:58:27.915513 systemd[1]: Finished network-cleanup.service - Network Cleanup.
Feb 13 19:58:27.915525 systemd[1]: Starting systemd-fsck-usr.service...
Feb 13 19:58:27.915537 systemd[1]: Starting systemd-journald.service - Journal Service...
Feb 13 19:58:27.915550 systemd[1]: Starting systemd-modules-load.service - Load Kernel Modules...
Feb 13 19:58:27.915563 systemd[1]: Starting systemd-vconsole-setup.service - Virtual Console Setup...
Feb 13 19:58:27.915575 systemd[1]: Finished ignition-setup-pre.service - Ignition env setup.
Feb 13 19:58:27.915588 systemd[1]: Finished kmod-static-nodes.service - Create List of Static Device Nodes.
Feb 13 19:58:27.915600 systemd[1]: Finished systemd-fsck-usr.service.
Feb 13 19:58:27.915617 systemd[1]: Starting systemd-tmpfiles-setup-dev-early.service - Create Static Device Nodes in /dev gracefully...
Feb 13 19:58:27.915655 systemd-journald[193]: Collecting audit messages is disabled.
Feb 13 19:58:27.915688 systemd[1]: Finished systemd-tmpfiles-setup-dev-early.service - Create Static Device Nodes in /dev gracefully.
Feb 13 19:58:27.915701 systemd-journald[193]: Journal started
Feb 13 19:58:27.915730 systemd-journald[193]: Runtime Journal (/run/log/journal/c18d3bdb7a92444ab61dbcba090ef959) is 6.0M, max 48.4M, 42.3M free.
Feb 13 19:58:27.905628 systemd-modules-load[194]: Inserted module 'overlay'
Feb 13 19:58:27.949245 kernel: bridge: filtering via arp/ip/ip6tables is no longer available by default. Update your scripts to load br_netfilter if you need this.
Feb 13 19:58:27.949283 kernel: Bridge firewalling registered
Feb 13 19:58:27.934969 systemd-modules-load[194]: Inserted module 'br_netfilter'
Feb 13 19:58:27.952121 systemd[1]: Started systemd-journald.service - Journal Service.
Feb 13 19:58:27.952583 systemd[1]: Finished systemd-modules-load.service - Load Kernel Modules.
Feb 13 19:58:27.954976 systemd[1]: Finished systemd-vconsole-setup.service - Virtual Console Setup.
Feb 13 19:58:27.968140 systemd[1]: Starting dracut-cmdline-ask.service - dracut ask for additional cmdline parameters...
Feb 13 19:58:27.971472 systemd[1]: Starting systemd-sysctl.service - Apply Kernel Variables...
Feb 13 19:58:27.974391 systemd[1]: Starting systemd-tmpfiles-setup-dev.service - Create Static Device Nodes in /dev...
Feb 13 19:58:27.981598 systemd[1]: Starting systemd-tmpfiles-setup.service - Create System Files and Directories...
Feb 13 19:58:27.985846 systemd[1]: Finished systemd-sysctl.service - Apply Kernel Variables.
Feb 13 19:58:28.001226 systemd[1]: Finished dracut-cmdline-ask.service - dracut ask for additional cmdline parameters.
Feb 13 19:58:28.001774 systemd[1]: Finished systemd-tmpfiles-setup-dev.service - Create Static Device Nodes in /dev.
Feb 13 19:58:28.007045 systemd[1]: Starting dracut-cmdline.service - dracut cmdline hook...
Feb 13 19:58:28.018223 systemd[1]: Finished systemd-tmpfiles-setup.service - Create System Files and Directories.
Feb 13 19:58:28.020070 systemd[1]: Starting systemd-resolved.service - Network Name Resolution...
Feb 13 19:58:28.025855 dracut-cmdline[228]: dracut-dracut-053
Feb 13 19:58:28.028839 dracut-cmdline[228]: Using kernel command line parameters: rd.driver.pre=btrfs rootflags=rw mount.usrflags=ro BOOT_IMAGE=/flatcar/vmlinuz-a mount.usr=/dev/mapper/usr verity.usr=PARTUUID=7130c94a-213a-4e5a-8e26-6cce9662f132 rootflags=rw mount.usrflags=ro consoleblank=0 root=LABEL=ROOT console=ttyS0,115200 flatcar.first_boot=detected verity.usrhash=a8740cbac5121ade856b040634ad9badacd879298c24f899668a59d96c178b13
Feb 13 19:58:28.067022 systemd-resolved[234]: Positive Trust Anchors:
Feb 13 19:58:28.067042 systemd-resolved[234]: . IN DS 20326 8 2 e06d44b80b8f1d39a95c0b0d7c65d08458e880409bbc683457104237c7f8ec8d
Feb 13 19:58:28.067082 systemd-resolved[234]: Negative trust anchors: home.arpa 10.in-addr.arpa 16.172.in-addr.arpa 17.172.in-addr.arpa 18.172.in-addr.arpa 19.172.in-addr.arpa 20.172.in-addr.arpa 21.172.in-addr.arpa 22.172.in-addr.arpa 23.172.in-addr.arpa 24.172.in-addr.arpa 25.172.in-addr.arpa 26.172.in-addr.arpa 27.172.in-addr.arpa 28.172.in-addr.arpa 29.172.in-addr.arpa 30.172.in-addr.arpa 31.172.in-addr.arpa 170.0.0.192.in-addr.arpa 171.0.0.192.in-addr.arpa 168.192.in-addr.arpa d.f.ip6.arpa ipv4only.arpa resolver.arpa corp home internal intranet lan local private test
Feb 13 19:58:28.069733 systemd-resolved[234]: Defaulting to hostname 'linux'.
Feb 13 19:58:28.071010 systemd[1]: Started systemd-resolved.service - Network Name Resolution.
Feb 13 19:58:28.077158 systemd[1]: Reached target nss-lookup.target - Host and Network Name Lookups.
Feb 13 19:58:28.114947 kernel: SCSI subsystem initialized
Feb 13 19:58:28.124929 kernel: Loading iSCSI transport class v2.0-870.
Feb 13 19:58:28.135936 kernel: iscsi: registered transport (tcp)
Feb 13 19:58:28.156933 kernel: iscsi: registered transport (qla4xxx)
Feb 13 19:58:28.156958 kernel: QLogic iSCSI HBA Driver
Feb 13 19:58:28.207706 systemd[1]: Finished dracut-cmdline.service - dracut cmdline hook.
Feb 13 19:58:28.216104 systemd[1]: Starting dracut-pre-udev.service - dracut pre-udev hook...
Feb 13 19:58:28.240415 kernel: device-mapper: core: CONFIG_IMA_DISABLE_HTABLE is disabled. Duplicate IMA measurements will not be recorded in the IMA log.
Feb 13 19:58:28.240447 kernel: device-mapper: uevent: version 1.0.3
Feb 13 19:58:28.241453 kernel: device-mapper: ioctl: 4.48.0-ioctl (2023-03-01) initialised: dm-devel@redhat.com
Feb 13 19:58:28.283927 kernel: raid6: avx2x4 gen() 29538 MB/s
Feb 13 19:58:28.300925 kernel: raid6: avx2x2 gen() 30224 MB/s
Feb 13 19:58:28.318055 kernel: raid6: avx2x1 gen() 25395 MB/s
Feb 13 19:58:28.318071 kernel: raid6: using algorithm avx2x2 gen() 30224 MB/s
Feb 13 19:58:28.336022 kernel: raid6: .... xor() 19383 MB/s, rmw enabled
Feb 13 19:58:28.336056 kernel: raid6: using avx2x2 recovery algorithm
Feb 13 19:58:28.356930 kernel: xor: automatically using best checksumming function avx
Feb 13 19:58:28.516944 kernel: Btrfs loaded, zoned=no, fsverity=no
Feb 13 19:58:28.531685 systemd[1]: Finished dracut-pre-udev.service - dracut pre-udev hook.
Feb 13 19:58:28.547076 systemd[1]: Starting systemd-udevd.service - Rule-based Manager for Device Events and Files...
Feb 13 19:58:28.560161 systemd-udevd[414]: Using default interface naming scheme 'v255'.
Feb 13 19:58:28.565111 systemd[1]: Started systemd-udevd.service - Rule-based Manager for Device Events and Files.
Feb 13 19:58:28.573061 systemd[1]: Starting dracut-pre-trigger.service - dracut pre-trigger hook...
Feb 13 19:58:28.590195 dracut-pre-trigger[418]: rd.md=0: removing MD RAID activation
Feb 13 19:58:28.626437 systemd[1]: Finished dracut-pre-trigger.service - dracut pre-trigger hook.
Feb 13 19:58:28.635090 systemd[1]: Starting systemd-udev-trigger.service - Coldplug All udev Devices...
Feb 13 19:58:28.705981 systemd[1]: Finished systemd-udev-trigger.service - Coldplug All udev Devices.
Feb 13 19:58:28.712418 systemd[1]: Starting dracut-initqueue.service - dracut initqueue hook...
Feb 13 19:58:28.728856 systemd[1]: Finished dracut-initqueue.service - dracut initqueue hook.
Feb 13 19:58:28.730684 systemd[1]: Reached target remote-fs-pre.target - Preparation for Remote File Systems.
Feb 13 19:58:28.733663 systemd[1]: Reached target remote-cryptsetup.target - Remote Encrypted Volumes.
Feb 13 19:58:28.736129 systemd[1]: Reached target remote-fs.target - Remote File Systems.
Feb 13 19:58:28.745301 kernel: virtio_blk virtio1: 4/0/0 default/read/poll queues
Feb 13 19:58:28.774023 kernel: virtio_blk virtio1: [vda] 19775488 512-byte logical blocks (10.1 GB/9.43 GiB)
Feb 13 19:58:28.774228 kernel: GPT:Primary header thinks Alt. header is not at the end of the disk.
Feb 13 19:58:28.774256 kernel: GPT:9289727 != 19775487
Feb 13 19:58:28.774272 kernel: GPT:Alternate GPT header not at the end of the disk.
Feb 13 19:58:28.774286 kernel: GPT:9289727 != 19775487
Feb 13 19:58:28.774310 kernel: GPT: Use GNU Parted to correct GPT errors.
Feb 13 19:58:28.774324 kernel: vda: vda1 vda2 vda3 vda4 vda6 vda7 vda9
Feb 13 19:58:28.748137 systemd[1]: Starting dracut-pre-mount.service - dracut pre-mount hook...
Feb 13 19:58:28.777168 kernel: cryptd: max_cpu_qlen set to 1000
Feb 13 19:58:28.763698 systemd[1]: Finished dracut-pre-mount.service - dracut pre-mount hook.
Feb 13 19:58:28.779923 systemd[1]: dracut-cmdline-ask.service: Deactivated successfully.
Feb 13 19:58:28.780001 systemd[1]: Stopped dracut-cmdline-ask.service - dracut ask for additional cmdline parameters.
Feb 13 19:58:28.785344 systemd[1]: Stopping dracut-cmdline-ask.service - dracut ask for additional cmdline parameters...
Feb 13 19:58:28.788145 systemd[1]: systemd-vconsole-setup.service: Deactivated successfully.
Feb 13 19:58:28.788220 systemd[1]: Stopped systemd-vconsole-setup.service - Virtual Console Setup.
Feb 13 19:58:28.810307 kernel: BTRFS: device fsid 892c7470-7713-4b0f-880a-4c5f7bf5b72d devid 1 transid 37 /dev/vda3 scanned by (udev-worker) (457)
Feb 13 19:58:28.792473 systemd[1]: Stopping systemd-vconsole-setup.service - Virtual Console Setup...
Feb 13 19:58:28.813362 systemd[1]: Starting systemd-vconsole-setup.service - Virtual Console Setup...
Feb 13 19:58:28.820319 kernel: BTRFS: device label OEM devid 1 transid 12 /dev/vda6 scanned by (udev-worker) (461)
Feb 13 19:58:28.820340 kernel: libata version 3.00 loaded.
Feb 13 19:58:28.836415 kernel: AVX2 version of gcm_enc/dec engaged.
Feb 13 19:58:28.836475 kernel: AES CTR mode by8 optimization enabled
Feb 13 19:58:28.844955 kernel: ahci 0000:00:1f.2: version 3.0
Feb 13 19:58:28.852571 kernel: ACPI: \_SB_.GSIA: Enabled at IRQ 16
Feb 13 19:58:28.852594 kernel: ahci 0000:00:1f.2: AHCI 0001.0000 32 slots 6 ports 1.5 Gbps 0x3f impl SATA mode
Feb 13 19:58:28.852804 kernel: ahci 0000:00:1f.2: flags: 64bit ncq only
Feb 13 19:58:28.853080 kernel: scsi host0: ahci
Feb 13 19:58:28.853303 kernel: scsi host1: ahci
Feb 13 19:58:28.853515 kernel: scsi host2: ahci
Feb 13 19:58:28.853727 kernel: scsi host3: ahci
Feb 13 19:58:28.853969 kernel: scsi host4: ahci
Feb 13 19:58:28.854166 kernel: scsi host5: ahci
Feb 13 19:58:28.854368 kernel: ata1: SATA max UDMA/133 abar m4096@0xfebd4000 port 0xfebd4100 irq 34
Feb 13 19:58:28.854384 kernel: ata2: SATA max UDMA/133 abar m4096@0xfebd4000 port 0xfebd4180 irq 34
Feb 13 19:58:28.854398 kernel: ata3: SATA max UDMA/133 abar m4096@0xfebd4000 port 0xfebd4200 irq 34
Feb 13 19:58:28.854411 kernel: ata4: SATA max UDMA/133 abar m4096@0xfebd4000 port 0xfebd4280 irq 34
Feb 13 19:58:28.854425 kernel: ata5: SATA max UDMA/133 abar m4096@0xfebd4000 port 0xfebd4300 irq 34
Feb 13 19:58:28.854444 kernel: ata6: SATA max UDMA/133 abar m4096@0xfebd4000 port 0xfebd4380 irq 34
Feb 13 19:58:28.849875 systemd[1]: Found device dev-disk-by\x2dlabel-ROOT.device - /dev/disk/by-label/ROOT.
Feb 13 19:58:28.888656 systemd[1]: Finished systemd-vconsole-setup.service - Virtual Console Setup.
Feb 13 19:58:28.896644 systemd[1]: Found device dev-disk-by\x2dlabel-EFI\x2dSYSTEM.device - /dev/disk/by-label/EFI-SYSTEM.
Feb 13 19:58:28.906944 systemd[1]: Found device dev-disk-by\x2dlabel-OEM.device - /dev/disk/by-label/OEM.
Feb 13 19:58:28.915241 systemd[1]: Found device dev-disk-by\x2dpartuuid-7130c94a\x2d213a\x2d4e5a\x2d8e26\x2d6cce9662f132.device - /dev/disk/by-partuuid/7130c94a-213a-4e5a-8e26-6cce9662f132.
Feb 13 19:58:28.919112 systemd[1]: Found device dev-disk-by\x2dpartlabel-USR\x2dA.device - /dev/disk/by-partlabel/USR-A.
Feb 13 19:58:28.935121 systemd[1]: Starting disk-uuid.service - Generate new UUID for disk GPT if necessary...
Feb 13 19:58:28.938930 systemd[1]: Starting dracut-cmdline-ask.service - dracut ask for additional cmdline parameters...
Feb 13 19:58:28.947086 disk-uuid[554]: Primary Header is updated.
Feb 13 19:58:28.947086 disk-uuid[554]: Secondary Entries is updated.
Feb 13 19:58:28.947086 disk-uuid[554]: Secondary Header is updated.
Feb 13 19:58:28.951930 kernel: vda: vda1 vda2 vda3 vda4 vda6 vda7 vda9
Feb 13 19:58:28.957920 kernel: vda: vda1 vda2 vda3 vda4 vda6 vda7 vda9
Feb 13 19:58:28.967311 systemd[1]: Finished dracut-cmdline-ask.service - dracut ask for additional cmdline parameters.
Feb 13 19:58:29.163103 kernel: ata4: SATA link down (SStatus 0 SControl 300)
Feb 13 19:58:29.163180 kernel: ata1: SATA link down (SStatus 0 SControl 300)
Feb 13 19:58:29.163208 kernel: ata2: SATA link down (SStatus 0 SControl 300)
Feb 13 19:58:29.164934 kernel: ata5: SATA link down (SStatus 0 SControl 300)
Feb 13 19:58:29.165031 kernel: ata3: SATA link up 1.5 Gbps (SStatus 113 SControl 300)
Feb 13 19:58:29.165935 kernel: ata6: SATA link down (SStatus 0 SControl 300)
Feb 13 19:58:29.166942 kernel: ata3.00: ATAPI: QEMU DVD-ROM, 2.5+, max UDMA/100
Feb 13 19:58:29.168182 kernel: ata3.00: applying bridge limits
Feb 13 19:58:29.168206 kernel: ata3.00: configured for UDMA/100
Feb 13 19:58:29.168939 kernel: scsi 2:0:0:0: CD-ROM QEMU QEMU DVD-ROM 2.5+ PQ: 0 ANSI: 5
Feb 13 19:58:29.213945 kernel: sr 2:0:0:0: [sr0] scsi3-mmc drive: 4x/4x cd/rw xa/form2 tray
Feb 13 19:58:29.227728 kernel: cdrom: Uniform CD-ROM driver Revision: 3.20
Feb 13 19:58:29.227747 kernel: sr 2:0:0:0: Attached scsi CD-ROM sr0
Feb 13 19:58:29.957930 kernel: vda: vda1 vda2 vda3 vda4 vda6 vda7 vda9
Feb 13 19:58:29.958132 disk-uuid[555]: The operation has completed successfully.
Feb 13 19:58:29.993363 systemd[1]: disk-uuid.service: Deactivated successfully.
Feb 13 19:58:29.993491 systemd[1]: Finished disk-uuid.service - Generate new UUID for disk GPT if necessary.
Feb 13 19:58:30.015081 systemd[1]: Starting verity-setup.service - Verity Setup for /dev/mapper/usr...
Feb 13 19:58:30.018969 sh[591]: Success
Feb 13 19:58:30.031938 kernel: device-mapper: verity: sha256 using implementation "sha256-ni"
Feb 13 19:58:30.068449 systemd[1]: Found device dev-mapper-usr.device - /dev/mapper/usr.
Feb 13 19:58:30.087557 systemd[1]: Mounting sysusr-usr.mount - /sysusr/usr...
Feb 13 19:58:30.090763 systemd[1]: Finished verity-setup.service - Verity Setup for /dev/mapper/usr.
Feb 13 19:58:30.102953 kernel: BTRFS info (device dm-0): first mount of filesystem 892c7470-7713-4b0f-880a-4c5f7bf5b72d
Feb 13 19:58:30.102983 kernel: BTRFS info (device dm-0): using crc32c (crc32c-intel) checksum algorithm
Feb 13 19:58:30.102995 kernel: BTRFS warning (device dm-0): 'nologreplay' is deprecated, use 'rescue=nologreplay' instead
Feb 13 19:58:30.105463 kernel: BTRFS info (device dm-0): disabling log replay at mount time
Feb 13 19:58:30.105496 kernel: BTRFS info (device dm-0): using free space tree
Feb 13 19:58:30.110067 systemd[1]: Mounted sysusr-usr.mount - /sysusr/usr.
Feb 13 19:58:30.110867 systemd[1]: afterburn-network-kargs.service - Afterburn Initrd Setup Network Kernel Arguments was skipped because no trigger condition checks were met.
Feb 13 19:58:30.117218 systemd[1]: Starting ignition-setup.service - Ignition (setup)...
Feb 13 19:58:30.120056 systemd[1]: Starting parse-ip-for-networkd.service - Write systemd-networkd units from cmdline...
Feb 13 19:58:30.128958 kernel: BTRFS info (device vda6): first mount of filesystem b405b664-b121-4411-9ed3-1128bc9da790
Feb 13 19:58:30.129018 kernel: BTRFS info (device vda6): using crc32c (crc32c-intel) checksum algorithm
Feb 13 19:58:30.129030 kernel: BTRFS info (device vda6): using free space tree
Feb 13 19:58:30.131937 kernel: BTRFS info (device vda6): auto enabling async discard
Feb 13 19:58:30.142092 systemd[1]: mnt-oem.mount: Deactivated successfully.
Feb 13 19:58:30.143782 kernel: BTRFS info (device vda6): last unmount of filesystem b405b664-b121-4411-9ed3-1128bc9da790
Feb 13 19:58:30.154518 systemd[1]: Finished ignition-setup.service - Ignition (setup).
Feb 13 19:58:30.163142 systemd[1]: Starting ignition-fetch-offline.service - Ignition (fetch-offline)...
Feb 13 19:58:30.258695 systemd[1]: Finished parse-ip-for-networkd.service - Write systemd-networkd units from cmdline.
Feb 13 19:58:30.269026 ignition[683]: Ignition 2.19.0
Feb 13 19:58:30.269039 ignition[683]: Stage: fetch-offline
Feb 13 19:58:30.270089 systemd[1]: Starting systemd-networkd.service - Network Configuration...
Feb 13 19:58:30.269108 ignition[683]: no configs at "/usr/lib/ignition/base.d"
Feb 13 19:58:30.269121 ignition[683]: no config dir at "/usr/lib/ignition/base.platform.d/qemu"
Feb 13 19:58:30.269248 ignition[683]: parsed url from cmdline: ""
Feb 13 19:58:30.269253 ignition[683]: no config URL provided
Feb 13 19:58:30.269260 ignition[683]: reading system config file "/usr/lib/ignition/user.ign"
Feb 13 19:58:30.269273 ignition[683]: no config at "/usr/lib/ignition/user.ign"
Feb 13 19:58:30.269310 ignition[683]: op(1): [started] loading QEMU firmware config module
Feb 13 19:58:30.269319 ignition[683]: op(1): executing: "modprobe" "qemu_fw_cfg"
Feb 13 19:58:30.276624 ignition[683]: op(1): [finished] loading QEMU firmware config module
Feb 13 19:58:30.279767 ignition[683]: parsing config with SHA512: b987d5ecf793e989b068d8fd36bbb4bb33ff0e9fb3593a3b218ea87d68b57cd45d94847763a4021e5450cc0f520960a394b51ff321b2c388c3153c1f8b8b940b
Feb 13 19:58:30.282462 unknown[683]: fetched base config from "system"
Feb 13 19:58:30.282476 unknown[683]: fetched user config from "qemu"
Feb 13 19:58:30.282742 ignition[683]: fetch-offline: fetch-offline passed
Feb 13 19:58:30.282804 ignition[683]: Ignition finished successfully
Feb 13 19:58:30.285419 systemd[1]: Finished ignition-fetch-offline.service - Ignition (fetch-offline).
Feb 13 19:58:30.296005 systemd-networkd[777]: lo: Link UP
Feb 13 19:58:30.296020 systemd-networkd[777]: lo: Gained carrier
Feb 13 19:58:30.297831 systemd-networkd[777]: Enumeration completed
Feb 13 19:58:30.297990 systemd[1]: Started systemd-networkd.service - Network Configuration.
Feb 13 19:58:30.298343 systemd-networkd[777]: eth0: found matching network '/usr/lib/systemd/network/zz-default.network', based on potentially unpredictable interface name.
Feb 13 19:58:30.298347 systemd-networkd[777]: eth0: Configuring with /usr/lib/systemd/network/zz-default.network.
Feb 13 19:58:30.299412 systemd-networkd[777]: eth0: Link UP
Feb 13 19:58:30.299416 systemd-networkd[777]: eth0: Gained carrier
Feb 13 19:58:30.299424 systemd-networkd[777]: eth0: found matching network '/usr/lib/systemd/network/zz-default.network', based on potentially unpredictable interface name.
Feb 13 19:58:30.300322 systemd[1]: Reached target network.target - Network.
Feb 13 19:58:30.302224 systemd[1]: ignition-fetch.service - Ignition (fetch) was skipped because of an unmet condition check (ConditionPathExists=!/run/ignition.json).
Feb 13 19:58:30.312062 systemd[1]: Starting ignition-kargs.service - Ignition (kargs)...
Feb 13 19:58:30.318975 systemd-networkd[777]: eth0: DHCPv4 address 10.0.0.102/16, gateway 10.0.0.1 acquired from 10.0.0.1
Feb 13 19:58:30.328393 ignition[782]: Ignition 2.19.0
Feb 13 19:58:30.328404 ignition[782]: Stage: kargs
Feb 13 19:58:30.328574 ignition[782]: no configs at "/usr/lib/ignition/base.d"
Feb 13 19:58:30.328587 ignition[782]: no config dir at "/usr/lib/ignition/base.platform.d/qemu"
Feb 13 19:58:30.329344 ignition[782]: kargs: kargs passed
Feb 13 19:58:30.329387 ignition[782]: Ignition finished successfully
Feb 13 19:58:30.332881 systemd[1]: Finished ignition-kargs.service - Ignition (kargs).
Feb 13 19:58:30.350045 systemd[1]: Starting ignition-disks.service - Ignition (disks)...
Feb 13 19:58:30.365199 ignition[791]: Ignition 2.19.0
Feb 13 19:58:30.365212 ignition[791]: Stage: disks
Feb 13 19:58:30.365398 ignition[791]: no configs at "/usr/lib/ignition/base.d"
Feb 13 19:58:30.365410 ignition[791]: no config dir at "/usr/lib/ignition/base.platform.d/qemu"
Feb 13 19:58:30.368327 systemd[1]: Finished ignition-disks.service - Ignition (disks).
Feb 13 19:58:30.366109 ignition[791]: disks: disks passed
Feb 13 19:58:30.370019 systemd[1]: Reached target initrd-root-device.target - Initrd Root Device.
Feb 13 19:58:30.366161 ignition[791]: Ignition finished successfully
Feb 13 19:58:30.371862 systemd[1]: Reached target local-fs-pre.target - Preparation for Local File Systems.
Feb 13 19:58:30.373773 systemd[1]: Reached target local-fs.target - Local File Systems.
Feb 13 19:58:30.375893 systemd[1]: Reached target sysinit.target - System Initialization.
Feb 13 19:58:30.377820 systemd[1]: Reached target basic.target - Basic System.
Feb 13 19:58:30.400062 systemd[1]: Starting systemd-fsck-root.service - File System Check on /dev/disk/by-label/ROOT...
Feb 13 19:58:30.413465 systemd-fsck[802]: ROOT: clean, 14/553520 files, 52654/553472 blocks
Feb 13 19:58:30.420274 systemd[1]: Finished systemd-fsck-root.service - File System Check on /dev/disk/by-label/ROOT.
Feb 13 19:58:30.434011 systemd[1]: Mounting sysroot.mount - /sysroot...
Feb 13 19:58:30.519940 kernel: EXT4-fs (vda9): mounted filesystem 85215ce4-0be3-4782-863e-8dde129924f0 r/w with ordered data mode. Quota mode: none.
Feb 13 19:58:30.520421 systemd[1]: Mounted sysroot.mount - /sysroot.
Feb 13 19:58:30.521956 systemd[1]: Reached target initrd-root-fs.target - Initrd Root File System.
Feb 13 19:58:30.534009 systemd[1]: Mounting sysroot-oem.mount - /sysroot/oem...
Feb 13 19:58:30.535761 systemd[1]: Mounting sysroot-usr.mount - /sysroot/usr...
Feb 13 19:58:30.537021 systemd[1]: flatcar-metadata-hostname.service - Flatcar Metadata Hostname Agent was skipped because no trigger condition checks were met.
Feb 13 19:58:30.542452 kernel: BTRFS: device label OEM devid 1 transid 13 /dev/vda6 scanned by mount (810)
Feb 13 19:58:30.537071 systemd[1]: ignition-remount-sysroot.service - Remount /sysroot read-write for Ignition was skipped because of an unmet condition check (ConditionPathIsReadWrite=!/sysroot).
Feb 13 19:58:30.549156 kernel: BTRFS info (device vda6): first mount of filesystem b405b664-b121-4411-9ed3-1128bc9da790
Feb 13 19:58:30.549175 kernel: BTRFS info (device vda6): using crc32c (crc32c-intel) checksum algorithm
Feb 13 19:58:30.549196 kernel: BTRFS info (device vda6): using free space tree
Feb 13 19:58:30.549207 kernel: BTRFS info (device vda6): auto enabling async discard
Feb 13 19:58:30.537099 systemd[1]: Reached target ignition-diskful.target - Ignition Boot Disk Setup.
Feb 13 19:58:30.545979 systemd[1]: Mounted sysroot-usr.mount - /sysroot/usr.
Feb 13 19:58:30.550108 systemd[1]: Starting initrd-setup-root.service - Root filesystem setup...
Feb 13 19:58:30.552937 systemd[1]: Mounted sysroot-oem.mount - /sysroot/oem.
Feb 13 19:58:30.589667 initrd-setup-root[834]: cut: /sysroot/etc/passwd: No such file or directory
Feb 13 19:58:30.595400 initrd-setup-root[841]: cut: /sysroot/etc/group: No such file or directory
Feb 13 19:58:30.599731 initrd-setup-root[848]: cut: /sysroot/etc/shadow: No such file or directory
Feb 13 19:58:30.604266 initrd-setup-root[855]: cut: /sysroot/etc/gshadow: No such file or directory
Feb 13 19:58:30.704863 systemd[1]: Finished initrd-setup-root.service - Root filesystem setup.
Feb 13 19:58:30.713147 systemd[1]: Starting ignition-mount.service - Ignition (mount)...
Feb 13 19:58:30.715119 systemd[1]: Starting sysroot-boot.service - /sysroot/boot...
Feb 13 19:58:30.727012 kernel: BTRFS info (device vda6): last unmount of filesystem b405b664-b121-4411-9ed3-1128bc9da790
Feb 13 19:58:30.742344 systemd[1]: Finished sysroot-boot.service - /sysroot/boot.
Feb 13 19:58:30.774126 ignition[925]: INFO : Ignition 2.19.0
Feb 13 19:58:30.774126 ignition[925]: INFO : Stage: mount
Feb 13 19:58:30.776228 ignition[925]: INFO : no configs at "/usr/lib/ignition/base.d"
Feb 13 19:58:30.776228 ignition[925]: INFO : no config dir at "/usr/lib/ignition/base.platform.d/qemu"
Feb 13 19:58:30.776228 ignition[925]: INFO : mount: mount passed
Feb 13 19:58:30.776228 ignition[925]: INFO : Ignition finished successfully
Feb 13 19:58:30.778358 systemd[1]: Finished ignition-mount.service - Ignition (mount).
Feb 13 19:58:30.784216 systemd[1]: Starting ignition-files.service - Ignition (files)...
Feb 13 19:58:31.102391 systemd[1]: sysroot-oem.mount: Deactivated successfully.
Feb 13 19:58:31.114274 systemd[1]: Mounting sysroot-oem.mount - /sysroot/oem...
Feb 13 19:58:31.122932 kernel: BTRFS: device label OEM devid 1 transid 14 /dev/vda6 scanned by mount (937)
Feb 13 19:58:31.125116 kernel: BTRFS info (device vda6): first mount of filesystem b405b664-b121-4411-9ed3-1128bc9da790
Feb 13 19:58:31.125139 kernel: BTRFS info (device vda6): using crc32c (crc32c-intel) checksum algorithm
Feb 13 19:58:31.125151 kernel: BTRFS info (device vda6): using free space tree
Feb 13 19:58:31.128926 kernel: BTRFS info (device vda6): auto enabling async discard
Feb 13 19:58:31.130070 systemd[1]: Mounted sysroot-oem.mount - /sysroot/oem.
Feb 13 19:58:31.166321 ignition[954]: INFO : Ignition 2.19.0
Feb 13 19:58:31.166321 ignition[954]: INFO : Stage: files
Feb 13 19:58:31.168162 ignition[954]: INFO : no configs at "/usr/lib/ignition/base.d"
Feb 13 19:58:31.168162 ignition[954]: INFO : no config dir at "/usr/lib/ignition/base.platform.d/qemu"
Feb 13 19:58:31.170730 ignition[954]: DEBUG : files: compiled without relabeling support, skipping
Feb 13 19:58:31.172119 ignition[954]: INFO : files: ensureUsers: op(1): [started] creating or modifying user "core"
Feb 13 19:58:31.172119 ignition[954]: DEBUG : files: ensureUsers: op(1): executing: "usermod" "--root" "/sysroot" "core"
Feb 13 19:58:31.176117 ignition[954]: INFO : files: ensureUsers: op(1): [finished] creating or modifying user "core"
Feb 13 19:58:31.177604 ignition[954]: INFO : files: ensureUsers: op(2): [started] adding ssh keys to user "core"
Feb 13 19:58:31.179440 unknown[954]: wrote ssh authorized keys file for user: core
Feb 13 19:58:31.180701 ignition[954]: INFO : files: ensureUsers: op(2): [finished] adding ssh keys to user "core"
Feb 13 19:58:31.183002 ignition[954]: INFO : files: createFilesystemsFiles: createFiles: op(3): [started] writing file "/sysroot/etc/flatcar-cgroupv1"
Feb 13 19:58:31.184790 ignition[954]: INFO : files: createFilesystemsFiles: createFiles: op(3): [finished] writing file "/sysroot/etc/flatcar-cgroupv1"
Feb 13 19:58:31.186512 ignition[954]: INFO : files: createFilesystemsFiles: createFiles: op(4): [started] writing file "/sysroot/home/core/install.sh"
Feb 13 19:58:31.188282 ignition[954]: INFO : files: createFilesystemsFiles: createFiles: op(4): [finished] writing file "/sysroot/home/core/install.sh"
Feb 13 19:58:31.190125 ignition[954]: INFO : files: createFilesystemsFiles: createFiles: op(5): [started] writing file "/sysroot/etc/flatcar/update.conf"
Feb 13 19:58:31.192004 ignition[954]: INFO : files: createFilesystemsFiles: createFiles: op(5): [finished] writing file "/sysroot/etc/flatcar/update.conf"
Feb 13 19:58:31.193836 ignition[954]: INFO : files: createFilesystemsFiles: createFiles: op(6): [started] writing link "/sysroot/etc/extensions/kubernetes.raw" -> "/opt/extensions/kubernetes/kubernetes-v1.30.1-x86-64.raw"
Feb 13 19:58:31.196360 ignition[954]: INFO : files: createFilesystemsFiles: createFiles: op(6): [finished] writing link "/sysroot/etc/extensions/kubernetes.raw" -> "/opt/extensions/kubernetes/kubernetes-v1.30.1-x86-64.raw"
Feb 13 19:58:31.198804 ignition[954]: INFO : files: createFilesystemsFiles: createFiles: op(7): [started] writing file "/sysroot/opt/extensions/kubernetes/kubernetes-v1.30.1-x86-64.raw"
Feb 13 19:58:31.200940 ignition[954]: INFO : files: createFilesystemsFiles: createFiles: op(7): GET https://github.com/flatcar/sysext-bakery/releases/download/latest/kubernetes-v1.30.1-x86-64.raw: attempt #1
Feb 13 19:58:31.755033 ignition[954]: INFO : files: createFilesystemsFiles: createFiles: op(7): GET result: OK
Feb 13 19:58:31.948120 systemd-networkd[777]: eth0: Gained IPv6LL
Feb 13 19:58:32.407236 ignition[954]: INFO : files: createFilesystemsFiles: createFiles: op(7): [finished] writing file "/sysroot/opt/extensions/kubernetes/kubernetes-v1.30.1-x86-64.raw"
Feb 13 19:58:32.407236 ignition[954]: INFO : files: op(8): [started] processing unit "containerd.service"
Feb 13 19:58:32.411347 ignition[954]: INFO : files: op(8): op(9): [started] writing systemd drop-in "10-use-cgroupfs.conf" at "/sysroot/etc/systemd/system/containerd.service.d/10-use-cgroupfs.conf"
Feb 13 19:58:32.411347 ignition[954]: INFO : files: op(8): op(9): [finished] writing systemd drop-in "10-use-cgroupfs.conf" at "/sysroot/etc/systemd/system/containerd.service.d/10-use-cgroupfs.conf"
Feb 13 19:58:32.411347 ignition[954]: INFO : files: op(8): [finished] processing unit "containerd.service"
Feb 13 19:58:32.411347 ignition[954]: INFO : files: op(a): [started] processing unit "coreos-metadata.service"
Feb 13 19:58:32.411347 ignition[954]: INFO : files: op(a): op(b): [started] writing unit "coreos-metadata.service" at "/sysroot/etc/systemd/system/coreos-metadata.service"
Feb 13 19:58:32.411347 ignition[954]: INFO : files: op(a): op(b): [finished] writing unit "coreos-metadata.service" at "/sysroot/etc/systemd/system/coreos-metadata.service"
Feb 13 19:58:32.411347 ignition[954]: INFO : files: op(a): [finished] processing unit "coreos-metadata.service"
Feb 13 19:58:32.411347 ignition[954]: INFO : files: op(c): [started] setting preset to disabled for "coreos-metadata.service"
Feb 13 19:58:32.448999 ignition[954]: INFO : files: op(c): op(d): [started] removing enablement symlink(s) for "coreos-metadata.service"
Feb 13 19:58:32.455263 ignition[954]: INFO : files: op(c): op(d): [finished] removing enablement symlink(s) for "coreos-metadata.service"
Feb 13 19:58:32.456930 ignition[954]: INFO : files: op(c): [finished] setting preset to disabled for "coreos-metadata.service"
Feb 13 19:58:32.456930 ignition[954]: INFO : files: createResultFile: createFiles: op(e): [started] writing file "/sysroot/etc/.ignition-result.json"
Feb 13 19:58:32.456930 ignition[954]: INFO : files: createResultFile: createFiles: op(e): [finished] writing file "/sysroot/etc/.ignition-result.json"
Feb 13 19:58:32.456930 ignition[954]: INFO : files: files passed
Feb 13 19:58:32.456930 ignition[954]: INFO : Ignition finished successfully
Feb 13 19:58:32.465534 systemd[1]: Finished ignition-files.service - Ignition (files).
Feb 13 19:58:32.482159 systemd[1]: Starting ignition-quench.service - Ignition (record completion)...
Feb 13 19:58:32.485702 systemd[1]: Starting initrd-setup-root-after-ignition.service - Root filesystem completion...
Feb 13 19:58:32.488572 systemd[1]: ignition-quench.service: Deactivated successfully.
Feb 13 19:58:32.489622 systemd[1]: Finished ignition-quench.service - Ignition (record completion).
Feb 13 19:58:32.497511 initrd-setup-root-after-ignition[982]: grep: /sysroot/oem/oem-release: No such file or directory
Feb 13 19:58:32.501647 initrd-setup-root-after-ignition[984]: grep: /sysroot/etc/flatcar/enabled-sysext.conf: No such file or directory
Feb 13 19:58:32.501647 initrd-setup-root-after-ignition[984]: grep: /sysroot/usr/share/flatcar/enabled-sysext.conf: No such file or directory
Feb 13 19:58:32.505047 initrd-setup-root-after-ignition[988]: grep: /sysroot/etc/flatcar/enabled-sysext.conf: No such file or directory
Feb 13 19:58:32.508619 systemd[1]: Finished initrd-setup-root-after-ignition.service - Root filesystem completion.
Feb 13 19:58:32.511715 systemd[1]: Reached target ignition-complete.target - Ignition Complete.
Feb 13 19:58:32.522045 systemd[1]: Starting initrd-parse-etc.service - Mountpoints Configured in the Real Root...
Feb 13 19:58:32.549973 systemd[1]: initrd-parse-etc.service: Deactivated successfully.
Feb 13 19:58:32.550176 systemd[1]: Finished initrd-parse-etc.service - Mountpoints Configured in the Real Root.
Feb 13 19:58:32.552654 systemd[1]: Reached target initrd-fs.target - Initrd File Systems.
Feb 13 19:58:32.553687 systemd[1]: Reached target initrd.target - Initrd Default Target.
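The files stage above (user "core" with ssh keys, /etc/flatcar/update.conf, the kubernetes sysext link, the containerd drop-in, and the disabled coreos-metadata.service preset) is driven entirely by the Ignition config the machine booted with. A hedged Python sketch of a config of that shape, with field names following the Ignition v3 spec; the spec version, key, and file contents below are illustrative placeholders, not values recovered from this log:

    #!/usr/bin/env python3
    # Sketch of an Ignition-style config matching the operations logged above.
    import json

    config = {
        "ignition": {"version": "3.3.0"},  # placeholder spec version
        "passwd": {
            "users": [
                # Placeholder key; the real key is not present in the log.
                {"name": "core", "sshAuthorizedKeys": ["ssh-ed25519 AAAA... user@host"]}
            ]
        },
        "storage": {
            "files": [
                {
                    "path": "/etc/flatcar/update.conf",
                    "mode": 420,  # 0644
                    "contents": {"source": "data:,REBOOT_STRATEGY=off%0A"},
                }
            ],
            "links": [
                {
                    "path": "/etc/extensions/kubernetes.raw",
                    "target": "/opt/extensions/kubernetes/kubernetes-v1.30.1-x86-64.raw",
                }
            ],
        },
        "systemd": {
            "units": [
                {"name": "coreos-metadata.service", "enabled": False},
                {
                    "name": "containerd.service",
                    "dropins": [
                        {
                            # Drop-in body is illustrative; the log only
                            # records the drop-in's name and path.
                            "name": "10-use-cgroupfs.conf",
                            "contents": "[Service]\n# cgroupfs-related overrides go here\n",
                        }
                    ],
                },
            ]
        },
    }

    print(json.dumps(config, indent=2))

Ignition applies such a config once, on first boot, against the not-yet-pivoted root, which is why every path in the log above carries the /sysroot prefix.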
Feb 13 19:58:32.555699 systemd[1]: dracut-mount.service - dracut mount hook was skipped because no trigger condition checks were met.
Feb 13 19:58:32.576174 systemd[1]: Starting dracut-pre-pivot.service - dracut pre-pivot and cleanup hook...
Feb 13 19:58:32.594469 systemd[1]: Finished dracut-pre-pivot.service - dracut pre-pivot and cleanup hook.
Feb 13 19:58:32.607107 systemd[1]: Starting initrd-cleanup.service - Cleaning Up and Shutting Down Daemons...
Feb 13 19:58:32.619856 systemd[1]: Stopped target nss-lookup.target - Host and Network Name Lookups.
Feb 13 19:58:32.621468 systemd[1]: Stopped target remote-cryptsetup.target - Remote Encrypted Volumes.
Feb 13 19:58:32.624375 systemd[1]: Stopped target timers.target - Timer Units.
Feb 13 19:58:32.626973 systemd[1]: dracut-pre-pivot.service: Deactivated successfully.
Feb 13 19:58:32.627095 systemd[1]: Stopped dracut-pre-pivot.service - dracut pre-pivot and cleanup hook.
Feb 13 19:58:32.630774 systemd[1]: Stopped target initrd.target - Initrd Default Target.
Feb 13 19:58:32.633504 systemd[1]: Stopped target basic.target - Basic System.
Feb 13 19:58:32.644061 systemd[1]: Stopped target ignition-complete.target - Ignition Complete.
Feb 13 19:58:32.646213 systemd[1]: Stopped target ignition-diskful.target - Ignition Boot Disk Setup.
Feb 13 19:58:32.649077 systemd[1]: Stopped target initrd-root-device.target - Initrd Root Device.
Feb 13 19:58:32.651913 systemd[1]: Stopped target remote-fs.target - Remote File Systems.
Feb 13 19:58:32.654502 systemd[1]: Stopped target remote-fs-pre.target - Preparation for Remote File Systems.
Feb 13 19:58:32.657166 systemd[1]: Stopped target sysinit.target - System Initialization.
Feb 13 19:58:32.659806 systemd[1]: Stopped target local-fs.target - Local File Systems.
Feb 13 19:58:32.670788 systemd[1]: Stopped target swap.target - Swaps.
Feb 13 19:58:32.672578 systemd[1]: dracut-pre-mount.service: Deactivated successfully.
Feb 13 19:58:32.672725 systemd[1]: Stopped dracut-pre-mount.service - dracut pre-mount hook.
Feb 13 19:58:32.675509 systemd[1]: Stopped target cryptsetup.target - Local Encrypted Volumes.
Feb 13 19:58:32.677629 systemd[1]: Stopped target cryptsetup-pre.target - Local Encrypted Volumes (Pre).
Feb 13 19:58:32.679816 systemd[1]: clevis-luks-askpass.path: Deactivated successfully.
Feb 13 19:58:32.679971 systemd[1]: Stopped clevis-luks-askpass.path - Forward Password Requests to Clevis Directory Watch.
Feb 13 19:58:32.682239 systemd[1]: dracut-initqueue.service: Deactivated successfully.
Feb 13 19:58:32.682379 systemd[1]: Stopped dracut-initqueue.service - dracut initqueue hook.
Feb 13 19:58:32.686056 systemd[1]: ignition-fetch-offline.service: Deactivated successfully.
Feb 13 19:58:32.686208 systemd[1]: Stopped ignition-fetch-offline.service - Ignition (fetch-offline).
Feb 13 19:58:32.688717 systemd[1]: Stopped target paths.target - Path Units.
Feb 13 19:58:32.690933 systemd[1]: systemd-ask-password-console.path: Deactivated successfully.
Feb 13 19:58:32.692169 systemd[1]: Stopped systemd-ask-password-console.path - Dispatch Password Requests to Console Directory Watch.
Feb 13 19:58:32.695074 systemd[1]: Stopped target slices.target - Slice Units.
Feb 13 19:58:32.697288 systemd[1]: Stopped target sockets.target - Socket Units.
Feb 13 19:58:32.699701 systemd[1]: iscsid.socket: Deactivated successfully.
Feb 13 19:58:32.699839 systemd[1]: Closed iscsid.socket - Open-iSCSI iscsid Socket.
Feb 13 19:58:32.702249 systemd[1]: iscsiuio.socket: Deactivated successfully.
Feb 13 19:58:32.702370 systemd[1]: Closed iscsiuio.socket - Open-iSCSI iscsiuio Socket.
Feb 13 19:58:32.704161 systemd[1]: initrd-setup-root-after-ignition.service: Deactivated successfully.
Feb 13 19:58:32.704308 systemd[1]: Stopped initrd-setup-root-after-ignition.service - Root filesystem completion.
Feb 13 19:58:32.706308 systemd[1]: ignition-files.service: Deactivated successfully.
Feb 13 19:58:32.706446 systemd[1]: Stopped ignition-files.service - Ignition (files).
Feb 13 19:58:32.720085 systemd[1]: Stopping ignition-mount.service - Ignition (mount)...
Feb 13 19:58:32.720957 systemd[1]: Stopping sysroot-boot.service - /sysroot/boot...
Feb 13 19:58:32.722855 systemd[1]: systemd-udev-trigger.service: Deactivated successfully.
Feb 13 19:58:32.723044 systemd[1]: Stopped systemd-udev-trigger.service - Coldplug All udev Devices.
Feb 13 19:58:32.725117 systemd[1]: dracut-pre-trigger.service: Deactivated successfully.
Feb 13 19:58:32.725271 systemd[1]: Stopped dracut-pre-trigger.service - dracut pre-trigger hook.
Feb 13 19:58:32.733562 systemd[1]: initrd-cleanup.service: Deactivated successfully.
Feb 13 19:58:32.733721 systemd[1]: Finished initrd-cleanup.service - Cleaning Up and Shutting Down Daemons.
Feb 13 19:58:32.746772 ignition[1008]: INFO : Ignition 2.19.0
Feb 13 19:58:32.746772 ignition[1008]: INFO : Stage: umount
Feb 13 19:58:32.748653 ignition[1008]: INFO : no configs at "/usr/lib/ignition/base.d"
Feb 13 19:58:32.748653 ignition[1008]: INFO : no config dir at "/usr/lib/ignition/base.platform.d/qemu"
Feb 13 19:58:32.751424 ignition[1008]: INFO : umount: umount passed
Feb 13 19:58:32.752322 ignition[1008]: INFO : Ignition finished successfully
Feb 13 19:58:32.755369 systemd[1]: ignition-mount.service: Deactivated successfully.
Feb 13 19:58:32.755517 systemd[1]: Stopped ignition-mount.service - Ignition (mount).
Feb 13 19:58:32.757615 systemd[1]: sysroot-boot.mount: Deactivated successfully.
Feb 13 19:58:32.758027 systemd[1]: Stopped target network.target - Network.
Feb 13 19:58:32.758328 systemd[1]: ignition-disks.service: Deactivated successfully.
Feb 13 19:58:32.758386 systemd[1]: Stopped ignition-disks.service - Ignition (disks).
Feb 13 19:58:32.760254 systemd[1]: ignition-kargs.service: Deactivated successfully.
Feb 13 19:58:32.760311 systemd[1]: Stopped ignition-kargs.service - Ignition (kargs).
Feb 13 19:58:32.760573 systemd[1]: ignition-setup.service: Deactivated successfully.
Feb 13 19:58:32.760617 systemd[1]: Stopped ignition-setup.service - Ignition (setup).
Feb 13 19:58:32.760922 systemd[1]: ignition-setup-pre.service: Deactivated successfully.
Feb 13 19:58:32.760968 systemd[1]: Stopped ignition-setup-pre.service - Ignition env setup.
Feb 13 19:58:32.766660 systemd[1]: Stopping systemd-networkd.service - Network Configuration...
Feb 13 19:58:32.768627 systemd[1]: Stopping systemd-resolved.service - Network Name Resolution...
Feb 13 19:58:32.777970 systemd-networkd[777]: eth0: DHCPv6 lease lost
Feb 13 19:58:32.780220 systemd[1]: systemd-resolved.service: Deactivated successfully.
Feb 13 19:58:32.780376 systemd[1]: Stopped systemd-resolved.service - Network Name Resolution.
Feb 13 19:58:32.784178 systemd[1]: systemd-networkd.service: Deactivated successfully.
Feb 13 19:58:32.784360 systemd[1]: Stopped systemd-networkd.service - Network Configuration.
Feb 13 19:58:32.786227 systemd[1]: systemd-networkd.socket: Deactivated successfully.
Feb 13 19:58:32.786288 systemd[1]: Closed systemd-networkd.socket - Network Service Netlink Socket.
Feb 13 19:58:32.797115 systemd[1]: Stopping network-cleanup.service - Network Cleanup...
Feb 13 19:58:32.798082 systemd[1]: parse-ip-for-networkd.service: Deactivated successfully.
Feb 13 19:58:32.798154 systemd[1]: Stopped parse-ip-for-networkd.service - Write systemd-networkd units from cmdline.
Feb 13 19:58:32.800608 systemd[1]: systemd-sysctl.service: Deactivated successfully.
Feb 13 19:58:32.800690 systemd[1]: Stopped systemd-sysctl.service - Apply Kernel Variables.
Feb 13 19:58:32.803215 systemd[1]: systemd-modules-load.service: Deactivated successfully.
Feb 13 19:58:32.803275 systemd[1]: Stopped systemd-modules-load.service - Load Kernel Modules.
Feb 13 19:58:32.805784 systemd[1]: systemd-tmpfiles-setup.service: Deactivated successfully.
Feb 13 19:58:32.805850 systemd[1]: Stopped systemd-tmpfiles-setup.service - Create System Files and Directories.
Feb 13 19:58:32.808300 systemd[1]: Stopping systemd-udevd.service - Rule-based Manager for Device Events and Files...
Feb 13 19:58:32.818584 systemd[1]: network-cleanup.service: Deactivated successfully.
Feb 13 19:58:32.818780 systemd[1]: Stopped network-cleanup.service - Network Cleanup.
Feb 13 19:58:32.822854 systemd[1]: systemd-udevd.service: Deactivated successfully.
Feb 13 19:58:32.823053 systemd[1]: Stopped systemd-udevd.service - Rule-based Manager for Device Events and Files.
Feb 13 19:58:32.825312 systemd[1]: systemd-udevd-control.socket: Deactivated successfully.
Feb 13 19:58:32.825361 systemd[1]: Closed systemd-udevd-control.socket - udev Control Socket.
Feb 13 19:58:32.827562 systemd[1]: systemd-udevd-kernel.socket: Deactivated successfully.
Feb 13 19:58:32.827605 systemd[1]: Closed systemd-udevd-kernel.socket - udev Kernel Socket.
Feb 13 19:58:32.830086 systemd[1]: dracut-pre-udev.service: Deactivated successfully.
Feb 13 19:58:32.830148 systemd[1]: Stopped dracut-pre-udev.service - dracut pre-udev hook.
Feb 13 19:58:32.832862 systemd[1]: dracut-cmdline.service: Deactivated successfully.
Feb 13 19:58:32.832928 systemd[1]: Stopped dracut-cmdline.service - dracut cmdline hook.
Feb 13 19:58:32.835337 systemd[1]: dracut-cmdline-ask.service: Deactivated successfully.
Feb 13 19:58:32.835387 systemd[1]: Stopped dracut-cmdline-ask.service - dracut ask for additional cmdline parameters.
Feb 13 19:58:32.851292 systemd[1]: Starting initrd-udevadm-cleanup-db.service - Cleanup udev Database...
Feb 13 19:58:32.854093 systemd[1]: systemd-tmpfiles-setup-dev.service: Deactivated successfully.
Feb 13 19:58:32.855481 systemd[1]: Stopped systemd-tmpfiles-setup-dev.service - Create Static Device Nodes in /dev.
Feb 13 19:58:32.858679 systemd[1]: systemd-tmpfiles-setup-dev-early.service: Deactivated successfully.
Feb 13 19:58:32.860106 systemd[1]: Stopped systemd-tmpfiles-setup-dev-early.service - Create Static Device Nodes in /dev gracefully.
Feb 13 19:58:32.863479 systemd[1]: kmod-static-nodes.service: Deactivated successfully.
Feb 13 19:58:32.864711 systemd[1]: Stopped kmod-static-nodes.service - Create List of Static Device Nodes.
Feb 13 19:58:32.867769 systemd[1]: systemd-vconsole-setup.service: Deactivated successfully.
Feb 13 19:58:32.869074 systemd[1]: Stopped systemd-vconsole-setup.service - Virtual Console Setup.
Feb 13 19:58:32.872442 systemd[1]: initrd-udevadm-cleanup-db.service: Deactivated successfully.
Feb 13 19:58:32.873885 systemd[1]: Finished initrd-udevadm-cleanup-db.service - Cleanup udev Database.
Feb 13 19:58:33.262656 systemd[1]: sysroot-boot.service: Deactivated successfully.
Feb 13 19:58:33.262813 systemd[1]: Stopped sysroot-boot.service - /sysroot/boot.
Feb 13 19:58:33.265925 systemd[1]: Reached target initrd-switch-root.target - Switch Root.
Feb 13 19:58:33.268052 systemd[1]: initrd-setup-root.service: Deactivated successfully.
Feb 13 19:58:33.268124 systemd[1]: Stopped initrd-setup-root.service - Root filesystem setup.
Feb 13 19:58:33.282126 systemd[1]: Starting initrd-switch-root.service - Switch Root...
Feb 13 19:58:33.291466 systemd[1]: Switching root.
Feb 13 19:58:33.326731 systemd-journald[193]: Journal stopped
Feb 13 19:58:34.998509 systemd-journald[193]: Received SIGTERM from PID 1 (systemd).
Feb 13 19:58:34.998590 kernel: SELinux: policy capability network_peer_controls=1
Feb 13 19:58:34.998610 kernel: SELinux: policy capability open_perms=1
Feb 13 19:58:34.998622 kernel: SELinux: policy capability extended_socket_class=1
Feb 13 19:58:34.998633 kernel: SELinux: policy capability always_check_network=0
Feb 13 19:58:34.998644 kernel: SELinux: policy capability cgroup_seclabel=1
Feb 13 19:58:34.998656 kernel: SELinux: policy capability nnp_nosuid_transition=1
Feb 13 19:58:34.998668 kernel: SELinux: policy capability genfs_seclabel_symlinks=0
Feb 13 19:58:34.998683 kernel: SELinux: policy capability ioctl_skip_cloexec=0
Feb 13 19:58:34.998695 kernel: audit: type=1403 audit(1739476714.136:2): auid=4294967295 ses=4294967295 lsm=selinux res=1
Feb 13 19:58:34.998708 systemd[1]: Successfully loaded SELinux policy in 42.426ms.
Feb 13 19:58:34.998736 systemd[1]: Relabeled /dev, /dev/shm, /run, /sys/fs/cgroup in 14.118ms.
Feb 13 19:58:34.998753 systemd[1]: systemd 255 running in system mode (+PAM +AUDIT +SELINUX -APPARMOR +IMA +SMACK +SECCOMP +GCRYPT -GNUTLS +OPENSSL -ACL +BLKID +CURL +ELFUTILS -FIDO2 +IDN2 -IDN +IPTC +KMOD +LIBCRYPTSETUP +LIBFDISK +PCRE2 -PWQUALITY -P11KIT -QRENCODE +TPM2 +BZIP2 +LZ4 +XZ +ZLIB +ZSTD -BPF_FRAMEWORK -XKBCOMMON +UTMP -SYSVINIT default-hierarchy=unified)
Feb 13 19:58:34.998765 systemd[1]: Detected virtualization kvm.
Feb 13 19:58:34.998780 systemd[1]: Detected architecture x86-64.
Feb 13 19:58:34.998791 systemd[1]: Detected first boot.
Feb 13 19:58:34.998803 systemd[1]: Initializing machine ID from VM UUID.
Feb 13 19:58:34.998816 zram_generator::config[1074]: No configuration found.
Feb 13 19:58:34.998829 systemd[1]: Populated /etc with preset unit settings.
Feb 13 19:58:34.998846 systemd[1]: Queued start job for default target multi-user.target.
Feb 13 19:58:34.998858 systemd[1]: Unnecessary job was removed for dev-vda6.device - /dev/vda6.
Feb 13 19:58:34.998872 systemd[1]: Created slice system-addon\x2dconfig.slice - Slice /system/addon-config.
Feb 13 19:58:34.998888 systemd[1]: Created slice system-addon\x2drun.slice - Slice /system/addon-run.
Feb 13 19:58:34.998916 systemd[1]: Created slice system-getty.slice - Slice /system/getty.
Feb 13 19:58:34.998929 systemd[1]: Created slice system-modprobe.slice - Slice /system/modprobe.
Feb 13 19:58:34.998945 systemd[1]: Created slice system-serial\x2dgetty.slice - Slice /system/serial-getty.
Feb 13 19:58:34.998962 systemd[1]: Created slice system-system\x2dcloudinit.slice - Slice /system/system-cloudinit.
Feb 13 19:58:34.998979 systemd[1]: Created slice system-systemd\x2dfsck.slice - Slice /system/systemd-fsck.
Feb 13 19:58:34.998992 systemd[1]: Created slice user.slice - User and Session Slice.
Feb 13 19:58:34.999005 systemd[1]: Started clevis-luks-askpass.path - Forward Password Requests to Clevis Directory Watch.
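After the switch into the real root, the kernel logged the SELinux policy capabilities it enabled while systemd loaded the policy. Those values can be read back from selinuxfs; a small sketch, assuming selinuxfs is mounted at /sys/fs/selinux (its usual location on systemd systems):

    #!/usr/bin/env python3
    # Sketch: read back the SELinux state the kernel reported above.
    from pathlib import Path

    SELINUXFS = Path("/sys/fs/selinux")

    def dump_selinux_state() -> None:
        enforce = (SELINUXFS / "enforce").read_text().strip()
        print(f"enforcing={enforce}")  # 0 = permissive, 1 = enforcing
        for cap in sorted((SELINUXFS / "policy_capabilities").iterdir()):
            # Mirrors lines like "policy capability open_perms=1" in the log.
            print(f"policy capability {cap.name}={cap.read_text().strip()}")

    if __name__ == "__main__":
        dump_selinux_state()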
Feb 13 19:58:34.999023 systemd[1]: Started systemd-ask-password-console.path - Dispatch Password Requests to Console Directory Watch.
Feb 13 19:58:34.999036 systemd[1]: Started systemd-ask-password-wall.path - Forward Password Requests to Wall Directory Watch.
Feb 13 19:58:34.999053 systemd[1]: Set up automount boot.automount - Boot partition Automount Point.
Feb 13 19:58:34.999074 systemd[1]: Set up automount proc-sys-fs-binfmt_misc.automount - Arbitrary Executable File Formats File System Automount Point.
Feb 13 19:58:34.999108 systemd[1]: Expecting device dev-disk-by\x2dlabel-OEM.device - /dev/disk/by-label/OEM...
Feb 13 19:58:34.999121 systemd[1]: Expecting device dev-ttyS0.device - /dev/ttyS0...
Feb 13 19:58:34.999133 systemd[1]: Reached target cryptsetup-pre.target - Local Encrypted Volumes (Pre).
Feb 13 19:58:34.999145 systemd[1]: Reached target integritysetup.target - Local Integrity Protected Volumes.
Feb 13 19:58:34.999157 systemd[1]: Reached target remote-cryptsetup.target - Remote Encrypted Volumes.
Feb 13 19:58:34.999172 systemd[1]: Reached target remote-fs.target - Remote File Systems.
Feb 13 19:58:34.999185 systemd[1]: Reached target slices.target - Slice Units.
Feb 13 19:58:34.999197 systemd[1]: Reached target swap.target - Swaps.
Feb 13 19:58:34.999209 systemd[1]: Reached target veritysetup.target - Local Verity Protected Volumes.
Feb 13 19:58:34.999222 systemd[1]: Listening on systemd-coredump.socket - Process Core Dump Socket.
Feb 13 19:58:34.999234 systemd[1]: Listening on systemd-journald-dev-log.socket - Journal Socket (/dev/log).
Feb 13 19:58:34.999246 systemd[1]: Listening on systemd-journald.socket - Journal Socket.
Feb 13 19:58:34.999259 systemd[1]: Listening on systemd-networkd.socket - Network Service Netlink Socket.
Feb 13 19:58:34.999274 systemd[1]: Listening on systemd-udevd-control.socket - udev Control Socket.
Feb 13 19:58:34.999286 systemd[1]: Listening on systemd-udevd-kernel.socket - udev Kernel Socket.
Feb 13 19:58:34.999298 systemd[1]: Listening on systemd-userdbd.socket - User Database Manager Socket.
Feb 13 19:58:34.999311 systemd[1]: Mounting dev-hugepages.mount - Huge Pages File System...
Feb 13 19:58:34.999323 systemd[1]: Mounting dev-mqueue.mount - POSIX Message Queue File System...
Feb 13 19:58:34.999335 systemd[1]: Mounting media.mount - External Media Directory...
Feb 13 19:58:34.999347 systemd[1]: proc-xen.mount - /proc/xen was skipped because of an unmet condition check (ConditionVirtualization=xen).
Feb 13 19:58:34.999359 systemd[1]: Mounting sys-kernel-debug.mount - Kernel Debug File System...
Feb 13 19:58:34.999378 systemd[1]: Mounting sys-kernel-tracing.mount - Kernel Trace File System...
Feb 13 19:58:34.999392 systemd[1]: Mounting tmp.mount - Temporary Directory /tmp...
Feb 13 19:58:34.999404 systemd[1]: Starting flatcar-tmpfiles.service - Create missing system files...
Feb 13 19:58:34.999416 systemd[1]: ignition-delete-config.service - Ignition (delete config) was skipped because no trigger condition checks were met.
Feb 13 19:58:34.999429 systemd[1]: Starting kmod-static-nodes.service - Create List of Static Device Nodes...
Feb 13 19:58:34.999441 systemd[1]: Starting modprobe@configfs.service - Load Kernel Module configfs...
Feb 13 19:58:34.999453 systemd[1]: Starting modprobe@dm_mod.service - Load Kernel Module dm_mod...
Feb 13 19:58:34.999468 systemd[1]: Starting modprobe@drm.service - Load Kernel Module drm...
Feb 13 19:58:34.999481 systemd[1]: Starting modprobe@efi_pstore.service - Load Kernel Module efi_pstore...
Feb 13 19:58:34.999493 systemd[1]: Starting modprobe@fuse.service - Load Kernel Module fuse...
Feb 13 19:58:34.999508 systemd[1]: Starting modprobe@loop.service - Load Kernel Module loop...
Feb 13 19:58:34.999520 systemd[1]: setup-nsswitch.service - Create /etc/nsswitch.conf was skipped because of an unmet condition check (ConditionPathExists=!/etc/nsswitch.conf).
Feb 13 19:58:34.999532 systemd[1]: systemd-journald.service: unit configures an IP firewall, but the local system does not support BPF/cgroup firewalling.
Feb 13 19:58:34.999545 systemd[1]: systemd-journald.service: (This warning is only shown for the first unit using IP firewalling.)
Feb 13 19:58:34.999557 systemd[1]: Starting systemd-journald.service - Journal Service...
Feb 13 19:58:34.999569 systemd[1]: Starting systemd-modules-load.service - Load Kernel Modules...
Feb 13 19:58:34.999581 systemd[1]: Starting systemd-network-generator.service - Generate network units from Kernel command line...
Feb 13 19:58:34.999614 systemd-journald[1148]: Collecting audit messages is disabled.
Feb 13 19:58:34.999639 kernel: fuse: init (API version 7.39)
Feb 13 19:58:34.999651 systemd-journald[1148]: Journal started
Feb 13 19:58:34.999672 systemd-journald[1148]: Runtime Journal (/run/log/journal/c18d3bdb7a92444ab61dbcba090ef959) is 6.0M, max 48.4M, 42.3M free.
Feb 13 19:58:35.001971 systemd[1]: Starting systemd-remount-fs.service - Remount Root and Kernel File Systems...
Feb 13 19:58:35.006634 kernel: loop: module loaded
Feb 13 19:58:35.006669 systemd[1]: Starting systemd-udev-trigger.service - Coldplug All udev Devices...
Feb 13 19:58:35.009959 systemd[1]: xenserver-pv-version.service - Set fake PV driver version for XenServer was skipped because of an unmet condition check (ConditionVirtualization=xen).
Feb 13 19:58:35.014073 systemd[1]: Started systemd-journald.service - Journal Service.
Feb 13 19:58:35.016659 systemd[1]: Mounted dev-hugepages.mount - Huge Pages File System.
Feb 13 19:58:35.020153 systemd[1]: Mounted dev-mqueue.mount - POSIX Message Queue File System.
Feb 13 19:58:35.021930 systemd[1]: Mounted media.mount - External Media Directory.
Feb 13 19:58:35.023323 systemd[1]: Mounted sys-kernel-debug.mount - Kernel Debug File System.
Feb 13 19:58:35.025091 systemd[1]: Mounted sys-kernel-tracing.mount - Kernel Trace File System.
Feb 13 19:58:35.027088 systemd[1]: Mounted tmp.mount - Temporary Directory /tmp.
Feb 13 19:58:35.029454 systemd[1]: Finished kmod-static-nodes.service - Create List of Static Device Nodes.
Feb 13 19:58:35.029970 kernel: ACPI: bus type drm_connector registered
Feb 13 19:58:35.031675 systemd[1]: modprobe@configfs.service: Deactivated successfully.
Feb 13 19:58:35.032016 systemd[1]: Finished modprobe@configfs.service - Load Kernel Module configfs.
Feb 13 19:58:35.033706 systemd[1]: modprobe@dm_mod.service: Deactivated successfully.
Feb 13 19:58:35.033971 systemd[1]: Finished modprobe@dm_mod.service - Load Kernel Module dm_mod.
Feb 13 19:58:35.035482 systemd[1]: modprobe@drm.service: Deactivated successfully.
Feb 13 19:58:35.035706 systemd[1]: Finished modprobe@drm.service - Load Kernel Module drm.
Feb 13 19:58:35.037573 systemd[1]: modprobe@efi_pstore.service: Deactivated successfully.
Feb 13 19:58:35.037820 systemd[1]: Finished modprobe@efi_pstore.service - Load Kernel Module efi_pstore.
Feb 13 19:58:35.039396 systemd[1]: modprobe@fuse.service: Deactivated successfully.
Feb 13 19:58:35.039612 systemd[1]: Finished modprobe@fuse.service - Load Kernel Module fuse.
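The runtime journal above was capped at 48.4M. journald's documented default for RuntimeMaxUse is 15% of the backing filesystem (capped at 4G), which would put the /run tmpfs at roughly 48.4M / 0.15 ≈ 320M on this VM. A sketch of the same arithmetic against a live system (the 15% factor is the documented default; exact caps vary by journald version):

    #!/usr/bin/env python3
    # Sketch: reproduce journald's runtime-journal sizing math.
    import os

    def runtime_journal_budget(path: str = "/run/log/journal") -> None:
        st = os.statvfs(path)
        fs_bytes = st.f_frsize * st.f_blocks
        free_bytes = st.f_frsize * st.f_bavail
        max_use = fs_bytes * 15 // 100  # documented default for RuntimeMaxUse
        print(f"filesystem={fs_bytes / 2**20:.1f}M "
              f"free={free_bytes / 2**20:.1f}M "
              f"default RuntimeMaxUse~{max_use / 2**20:.1f}M")

    if __name__ == "__main__":
        runtime_journal_budget()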
Feb 13 19:58:35.041108 systemd[1]: modprobe@loop.service: Deactivated successfully.
Feb 13 19:58:35.041344 systemd[1]: Finished modprobe@loop.service - Load Kernel Module loop.
Feb 13 19:58:35.042845 systemd[1]: Finished systemd-modules-load.service - Load Kernel Modules.
Feb 13 19:58:35.044427 systemd[1]: Finished systemd-network-generator.service - Generate network units from Kernel command line.
Feb 13 19:58:35.046511 systemd[1]: Finished systemd-remount-fs.service - Remount Root and Kernel File Systems.
Feb 13 19:58:35.061256 systemd[1]: Reached target network-pre.target - Preparation for Network.
Feb 13 19:58:35.074006 systemd[1]: Mounting sys-fs-fuse-connections.mount - FUSE Control File System...
Feb 13 19:58:35.076784 systemd[1]: Mounting sys-kernel-config.mount - Kernel Configuration File System...
Feb 13 19:58:35.078019 systemd[1]: remount-root.service - Remount Root File System was skipped because of an unmet condition check (ConditionPathIsReadWrite=!/).
Feb 13 19:58:35.082313 systemd[1]: Starting systemd-hwdb-update.service - Rebuild Hardware Database...
Feb 13 19:58:35.085108 systemd[1]: Starting systemd-journal-flush.service - Flush Journal to Persistent Storage...
Feb 13 19:58:35.086518 systemd[1]: systemd-pstore.service - Platform Persistent Storage Archival was skipped because of an unmet condition check (ConditionDirectoryNotEmpty=/sys/fs/pstore).
Feb 13 19:58:35.091046 systemd[1]: Starting systemd-random-seed.service - Load/Save OS Random Seed...
Feb 13 19:58:35.094272 systemd[1]: systemd-repart.service - Repartition Root Disk was skipped because no trigger condition checks were met.
Feb 13 19:58:35.096094 systemd[1]: Starting systemd-sysctl.service - Apply Kernel Variables...
Feb 13 19:58:35.099164 systemd-journald[1148]: Time spent on flushing to /var/log/journal/c18d3bdb7a92444ab61dbcba090ef959 is 18.629ms for 923 entries.
Feb 13 19:58:35.099164 systemd-journald[1148]: System Journal (/var/log/journal/c18d3bdb7a92444ab61dbcba090ef959) is 8.0M, max 195.6M, 187.6M free.
Feb 13 19:58:35.144513 systemd-journald[1148]: Received client request to flush runtime journal.
Feb 13 19:58:35.105562 systemd[1]: Starting systemd-tmpfiles-setup-dev-early.service - Create Static Device Nodes in /dev gracefully...
Feb 13 19:58:35.112404 systemd[1]: Mounted sys-fs-fuse-connections.mount - FUSE Control File System.
Feb 13 19:58:35.113795 systemd[1]: Mounted sys-kernel-config.mount - Kernel Configuration File System.
Feb 13 19:58:35.144592 systemd[1]: Finished systemd-sysctl.service - Apply Kernel Variables.
Feb 13 19:58:35.148636 systemd[1]: Finished systemd-journal-flush.service - Flush Journal to Persistent Storage.
Feb 13 19:58:35.163644 systemd[1]: Finished systemd-udev-trigger.service - Coldplug All udev Devices.
Feb 13 19:58:35.172256 systemd[1]: Starting systemd-udev-settle.service - Wait for udev To Complete Device Initialization...
Feb 13 19:58:35.181734 systemd-tmpfiles[1192]: ACLs are not supported, ignoring.
Feb 13 19:58:35.181759 systemd-tmpfiles[1192]: ACLs are not supported, ignoring.
Feb 13 19:58:35.186103 udevadm[1219]: systemd-udev-settle.service is deprecated. Please fix lvm2-activation.service, lvm2-activation-early.service not to pull it in.
Feb 13 19:58:35.188987 systemd[1]: Finished systemd-tmpfiles-setup-dev-early.service - Create Static Device Nodes in /dev gracefully.
Feb 13 19:58:35.218655 systemd[1]: Finished systemd-random-seed.service - Load/Save OS Random Seed.
Feb 13 19:58:35.220245 systemd[1]: Reached target first-boot-complete.target - First Boot Complete.
Feb 13 19:58:35.271219 systemd[1]: Finished flatcar-tmpfiles.service - Create missing system files.
Feb 13 19:58:35.278095 systemd[1]: Starting systemd-sysusers.service - Create System Users...
Feb 13 19:58:35.307979 systemd[1]: Finished systemd-sysusers.service - Create System Users.
Feb 13 19:58:35.319167 systemd[1]: Starting systemd-tmpfiles-setup-dev.service - Create Static Device Nodes in /dev...
Feb 13 19:58:35.339413 systemd-tmpfiles[1230]: ACLs are not supported, ignoring.
Feb 13 19:58:35.339437 systemd-tmpfiles[1230]: ACLs are not supported, ignoring.
Feb 13 19:58:35.345748 systemd[1]: Finished systemd-tmpfiles-setup-dev.service - Create Static Device Nodes in /dev.
Feb 13 19:58:35.805319 systemd[1]: Finished systemd-hwdb-update.service - Rebuild Hardware Database.
Feb 13 19:58:35.825194 systemd[1]: Starting systemd-udevd.service - Rule-based Manager for Device Events and Files...
Feb 13 19:58:35.854020 systemd-udevd[1236]: Using default interface naming scheme 'v255'.
Feb 13 19:58:35.874384 systemd[1]: Started systemd-udevd.service - Rule-based Manager for Device Events and Files.
Feb 13 19:58:35.885107 systemd[1]: Starting systemd-networkd.service - Network Configuration...
Feb 13 19:58:35.898072 systemd[1]: Starting systemd-userdbd.service - User Database Manager...
Feb 13 19:58:35.918934 kernel: BTRFS warning: duplicate device /dev/vda3 devid 1 generation 37 scanned by (udev-worker) (1238)
Feb 13 19:58:35.919869 systemd[1]: Found device dev-ttyS0.device - /dev/ttyS0.
Feb 13 19:58:35.952014 systemd[1]: Started systemd-userdbd.service - User Database Manager.
Feb 13 19:58:35.996941 kernel: input: Power Button as /devices/LNXSYSTM:00/LNXPWRBN:00/input/input2
Feb 13 19:58:36.001927 kernel: ACPI: button: Power Button [PWRF]
Feb 13 19:58:36.041020 kernel: i801_smbus 0000:00:1f.3: SMBus using PCI interrupt
Feb 13 19:58:36.041443 kernel: i2c i2c-0: 1/1 memory slots populated (from DMI)
Feb 13 19:58:36.041637 kernel: i2c i2c-0: Memory type 0x07 not supported yet, not instantiating SPD
Feb 13 19:58:36.041813 kernel: input: ImExPS/2 Generic Explorer Mouse as /devices/platform/i8042/serio1/input/input3
Feb 13 19:58:36.037374 systemd[1]: Found device dev-disk-by\x2dlabel-OEM.device - /dev/disk/by-label/OEM.
Feb 13 19:58:36.085490 systemd-networkd[1240]: lo: Link UP
Feb 13 19:58:36.085506 systemd-networkd[1240]: lo: Gained carrier
Feb 13 19:58:36.088939 kernel: mousedev: PS/2 mouse device common for all mice
Feb 13 19:58:36.091884 systemd-networkd[1240]: Enumeration completed
Feb 13 19:58:36.092175 systemd[1]: Started systemd-networkd.service - Network Configuration.
Feb 13 19:58:36.094006 systemd-networkd[1240]: eth0: found matching network '/usr/lib/systemd/network/zz-default.network', based on potentially unpredictable interface name.
Feb 13 19:58:36.094020 systemd-networkd[1240]: eth0: Configuring with /usr/lib/systemd/network/zz-default.network.
Feb 13 19:58:36.094800 systemd-networkd[1240]: eth0: Link UP
Feb 13 19:58:36.094813 systemd-networkd[1240]: eth0: Gained carrier
Feb 13 19:58:36.094824 systemd-networkd[1240]: eth0: found matching network '/usr/lib/systemd/network/zz-default.network', based on potentially unpredictable interface name.
Feb 13 19:58:36.105067 systemd[1]: Starting systemd-networkd-wait-online.service - Wait for Network to be Configured...
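systemd-networkd's "Link UP" / "Gained carrier" transitions above mirror kernel state that is also visible in sysfs. A small sketch that reads it back (reading carrier on an administratively down link raises EINVAL, which the sketch tolerates):

    #!/usr/bin/env python3
    # Sketch: read link state for every interface from sysfs.
    from pathlib import Path

    def link_states() -> None:
        for iface in sorted(Path("/sys/class/net").iterdir()):
            operstate = (iface / "operstate").read_text().strip()
            try:
                carrier = (iface / "carrier").read_text().strip()
            except OSError:
                carrier = "?"  # EINVAL while the link is down
            print(f"{iface.name}: operstate={operstate} carrier={carrier}")

    if __name__ == "__main__":
        link_states()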
Feb 13 19:58:36.137662 systemd-networkd[1240]: eth0: DHCPv4 address 10.0.0.102/16, gateway 10.0.0.1 acquired from 10.0.0.1
Feb 13 19:58:36.145765 systemd[1]: Starting systemd-vconsole-setup.service - Virtual Console Setup...
Feb 13 19:58:36.213707 kernel: kvm_amd: TSC scaling supported
Feb 13 19:58:36.213808 kernel: kvm_amd: Nested Virtualization enabled
Feb 13 19:58:36.213822 kernel: kvm_amd: Nested Paging enabled
Feb 13 19:58:36.214325 kernel: kvm_amd: LBR virtualization supported
Feb 13 19:58:36.215018 kernel: kvm_amd: Virtual VMLOAD VMSAVE supported
Feb 13 19:58:36.216402 kernel: kvm_amd: Virtual GIF supported
Feb 13 19:58:36.241973 kernel: EDAC MC: Ver: 3.0.0
Feb 13 19:58:36.273435 systemd[1]: Finished systemd-udev-settle.service - Wait for udev To Complete Device Initialization.
Feb 13 19:58:36.286072 systemd[1]: Starting lvm2-activation-early.service - Activation of LVM2 logical volumes...
Feb 13 19:58:36.287833 systemd[1]: Finished systemd-vconsole-setup.service - Virtual Console Setup.
Feb 13 19:58:36.301820 lvm[1280]: WARNING: Failed to connect to lvmetad. Falling back to device scanning.
Feb 13 19:58:36.339476 systemd[1]: Finished lvm2-activation-early.service - Activation of LVM2 logical volumes.
Feb 13 19:58:36.341101 systemd[1]: Reached target cryptsetup.target - Local Encrypted Volumes.
Feb 13 19:58:36.356186 systemd[1]: Starting lvm2-activation.service - Activation of LVM2 logical volumes...
Feb 13 19:58:36.361914 lvm[1285]: WARNING: Failed to connect to lvmetad. Falling back to device scanning.
Feb 13 19:58:36.400542 systemd[1]: Finished lvm2-activation.service - Activation of LVM2 logical volumes.
Feb 13 19:58:36.402133 systemd[1]: Reached target local-fs-pre.target - Preparation for Local File Systems.
Feb 13 19:58:36.403446 systemd[1]: var-lib-machines.mount - Virtual Machine and Container Storage (Compatibility) was skipped because of an unmet condition check (ConditionPathExists=/var/lib/machines.raw).
Feb 13 19:58:36.403474 systemd[1]: Reached target local-fs.target - Local File Systems.
Feb 13 19:58:36.404572 systemd[1]: Reached target machines.target - Containers.
Feb 13 19:58:36.406742 systemd[1]: Listening on systemd-sysext.socket - System Extension Image Management (Varlink).
Feb 13 19:58:36.418207 systemd[1]: Starting dracut-shutdown.service - Restore /run/initramfs on shutdown...
Feb 13 19:58:36.421336 systemd[1]: Starting ldconfig.service - Rebuild Dynamic Linker Cache...
Feb 13 19:58:36.422580 systemd[1]: systemd-binfmt.service - Set Up Additional Binary Formats was skipped because no trigger condition checks were met.
Feb 13 19:58:36.423599 systemd[1]: Starting systemd-fsck@dev-disk-by\x2dlabel-OEM.service - File System Check on /dev/disk/by-label/OEM...
Feb 13 19:58:36.426628 systemd[1]: Starting systemd-machine-id-commit.service - Commit a transient machine-id on disk...
Feb 13 19:58:36.430689 systemd[1]: Starting systemd-sysext.service - Merge System Extension Images into /usr/ and /opt/...
Feb 13 19:58:36.433093 systemd[1]: Finished dracut-shutdown.service - Restore /run/initramfs on shutdown.
Feb 13 19:58:36.444112 kernel: loop0: detected capacity change from 0 to 210664
Feb 13 19:58:36.444845 systemd[1]: Finished systemd-fsck@dev-disk-by\x2dlabel-OEM.service - File System Check on /dev/disk/by-label/OEM.
Feb 13 19:58:36.460886 systemd[1]: etc-machine\x2did.mount: Deactivated successfully.
Feb 13 19:58:36.461834 systemd[1]: Finished systemd-machine-id-commit.service - Commit a transient machine-id on disk.
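The DHCPv4 lease above (10.0.0.102/16 via 10.0.0.1) can be confirmed from userspace with iproute2's JSON output; a sketch assuming a reasonably recent ip binary (-j has been available since iproute2 4.13 or so):

    #!/usr/bin/env python3
    # Sketch: list the addresses on an interface via "ip -j addr show".
    import json
    import subprocess

    def show_addrs(dev: str = "eth0") -> None:
        out = subprocess.run(["ip", "-j", "addr", "show", "dev", dev],
                             capture_output=True, text=True, check=True).stdout
        for link in json.loads(out):
            for addr in link.get("addr_info", []):
                # e.g. "eth0 inet 10.0.0.102/16", matching the lease above
                print(f"{dev} {addr['family']} {addr['local']}/{addr['prefixlen']}")

    if __name__ == "__main__":
        show_addrs()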
Feb 13 19:58:36.466029 kernel: squashfs: version 4.0 (2009/01/31) Phillip Lougher
Feb 13 19:58:36.492938 kernel: loop1: detected capacity change from 0 to 140768
Feb 13 19:58:36.520929 kernel: loop2: detected capacity change from 0 to 142488
Feb 13 19:58:36.558950 kernel: loop3: detected capacity change from 0 to 210664
Feb 13 19:58:36.571218 kernel: loop4: detected capacity change from 0 to 140768
Feb 13 19:58:36.581932 kernel: loop5: detected capacity change from 0 to 142488
Feb 13 19:58:36.590492 (sd-merge)[1305]: Using extensions 'containerd-flatcar', 'docker-flatcar', 'kubernetes'.
Feb 13 19:58:36.591125 (sd-merge)[1305]: Merged extensions into '/usr'.
Feb 13 19:58:36.594858 systemd[1]: Reloading requested from client PID 1293 ('systemd-sysext') (unit systemd-sysext.service)...
Feb 13 19:58:36.594875 systemd[1]: Reloading...
Feb 13 19:58:36.677934 zram_generator::config[1335]: No configuration found.
Feb 13 19:58:36.806258 ldconfig[1289]: /sbin/ldconfig: /lib/ld.so.conf is not an ELF file - it has the wrong magic bytes at the start.
Feb 13 19:58:36.827184 systemd[1]: /usr/lib/systemd/system/docker.socket:6: ListenStream= references a path below legacy directory /var/run/, updating /var/run/docker.sock → /run/docker.sock; please update the unit file accordingly.
Feb 13 19:58:36.894329 systemd[1]: Reloading finished in 298 ms.
Feb 13 19:58:36.912760 systemd[1]: Finished ldconfig.service - Rebuild Dynamic Linker Cache.
Feb 13 19:58:36.914576 systemd[1]: Finished systemd-sysext.service - Merge System Extension Images into /usr/ and /opt/.
Feb 13 19:58:36.931064 systemd[1]: Starting ensure-sysext.service...
Feb 13 19:58:36.933154 systemd[1]: Starting systemd-tmpfiles-setup.service - Create System Files and Directories...
Feb 13 19:58:36.938395 systemd[1]: Reloading requested from client PID 1377 ('systemctl') (unit ensure-sysext.service)...
Feb 13 19:58:36.938418 systemd[1]: Reloading...
Feb 13 19:58:36.965525 systemd-tmpfiles[1378]: /usr/lib/tmpfiles.d/provision.conf:20: Duplicate line for path "/root", ignoring.
Feb 13 19:58:36.965937 systemd-tmpfiles[1378]: /usr/lib/tmpfiles.d/systemd-flatcar.conf:6: Duplicate line for path "/var/log/journal", ignoring.
Feb 13 19:58:36.966980 systemd-tmpfiles[1378]: /usr/lib/tmpfiles.d/systemd.conf:29: Duplicate line for path "/var/lib/systemd", ignoring.
Feb 13 19:58:36.967292 systemd-tmpfiles[1378]: ACLs are not supported, ignoring.
Feb 13 19:58:36.967377 systemd-tmpfiles[1378]: ACLs are not supported, ignoring.
Feb 13 19:58:36.974212 systemd-tmpfiles[1378]: Detected autofs mount point /boot during canonicalization of boot.
Feb 13 19:58:36.974224 systemd-tmpfiles[1378]: Skipping /boot
Feb 13 19:58:36.991274 systemd-tmpfiles[1378]: Detected autofs mount point /boot during canonicalization of boot.
Feb 13 19:58:36.991411 systemd-tmpfiles[1378]: Skipping /boot
Feb 13 19:58:36.997938 zram_generator::config[1408]: No configuration found.
Feb 13 19:58:37.123692 systemd[1]: /usr/lib/systemd/system/docker.socket:6: ListenStream= references a path below legacy directory /var/run/, updating /var/run/docker.sock → /run/docker.sock; please update the unit file accordingly.
Feb 13 19:58:37.193647 systemd[1]: Reloading finished in 254 ms.
Feb 13 19:58:37.214472 systemd[1]: Finished systemd-tmpfiles-setup.service - Create System Files and Directories.
Feb 13 19:58:37.232674 systemd[1]: Starting audit-rules.service - Load Security Auditing Rules...
Feb 13 19:58:37.235807 systemd[1]: Starting clean-ca-certificates.service - Clean up broken links in /etc/ssl/certs...
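(sd-merge) above overlaid the containerd-flatcar, docker-flatcar, and kubernetes system extensions onto /usr; the loop0-loop5 capacity changes are those extension images being attached. A sketch for inspecting the merge afterwards, using the directories systemd-sysext searches plus its "status" verb:

    #!/usr/bin/env python3
    # Sketch: list sysext images and show the current merge status.
    import subprocess
    from pathlib import Path

    def list_extension_images() -> None:
        # Directories systemd-sysext scans for extension images.
        for d in ("/etc/extensions", "/run/extensions", "/var/lib/extensions"):
            p = Path(d)
            if p.is_dir():
                for img in sorted(p.iterdir()):
                    print(f"{d}: {img.name}")  # e.g. kubernetes.raw from the log
        subprocess.run(["systemd-sysext", "status"], check=False)

    if __name__ == "__main__":
        list_extension_images()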
Feb 13 19:58:37.238315 systemd[1]: Starting systemd-journal-catalog-update.service - Rebuild Journal Catalog...
Feb 13 19:58:37.244210 systemd[1]: Starting systemd-resolved.service - Network Name Resolution...
Feb 13 19:58:37.249886 systemd[1]: Starting systemd-update-utmp.service - Record System Boot/Shutdown in UTMP...
Feb 13 19:58:37.259350 systemd[1]: proc-xen.mount - /proc/xen was skipped because of an unmet condition check (ConditionVirtualization=xen).
Feb 13 19:58:37.259554 systemd[1]: ignition-delete-config.service - Ignition (delete config) was skipped because no trigger condition checks were met.
Feb 13 19:58:37.261284 systemd[1]: Starting modprobe@dm_mod.service - Load Kernel Module dm_mod...
Feb 13 19:58:37.266311 systemd[1]: Starting modprobe@efi_pstore.service - Load Kernel Module efi_pstore...
Feb 13 19:58:37.272316 systemd[1]: Starting modprobe@loop.service - Load Kernel Module loop...
Feb 13 19:58:37.274514 systemd[1]: systemd-binfmt.service - Set Up Additional Binary Formats was skipped because no trigger condition checks were met.
Feb 13 19:58:37.274684 systemd[1]: xenserver-pv-version.service - Set fake PV driver version for XenServer was skipped because of an unmet condition check (ConditionVirtualization=xen).
Feb 13 19:58:37.276432 systemd[1]: modprobe@dm_mod.service: Deactivated successfully.
Feb 13 19:58:37.276767 systemd[1]: Finished modprobe@dm_mod.service - Load Kernel Module dm_mod.
Feb 13 19:58:37.284193 systemd[1]: modprobe@efi_pstore.service: Deactivated successfully.
Feb 13 19:58:37.285377 systemd[1]: Finished modprobe@efi_pstore.service - Load Kernel Module efi_pstore.
Feb 13 19:58:37.287450 systemd[1]: modprobe@loop.service: Deactivated successfully.
Feb 13 19:58:37.287674 systemd[1]: Finished modprobe@loop.service - Load Kernel Module loop.
Feb 13 19:58:37.289538 systemd[1]: Finished systemd-journal-catalog-update.service - Rebuild Journal Catalog.
Feb 13 19:58:37.300155 systemd[1]: Finished systemd-update-utmp.service - Record System Boot/Shutdown in UTMP.
Feb 13 19:58:37.303079 systemd[1]: proc-xen.mount - /proc/xen was skipped because of an unmet condition check (ConditionVirtualization=xen).
Feb 13 19:58:37.303707 systemd[1]: ignition-delete-config.service - Ignition (delete config) was skipped because no trigger condition checks were met.
Feb 13 19:58:37.322310 systemd[1]: Starting modprobe@dm_mod.service - Load Kernel Module dm_mod...
Feb 13 19:58:37.324615 augenrules[1488]: No rules
Feb 13 19:58:37.325000 systemd-networkd[1240]: eth0: Gained IPv6LL
Feb 13 19:58:37.326203 systemd[1]: Starting modprobe@drm.service - Load Kernel Module drm...
Feb 13 19:58:37.333147 systemd[1]: Starting modprobe@efi_pstore.service - Load Kernel Module efi_pstore...
Feb 13 19:58:37.339260 systemd[1]: Starting modprobe@loop.service - Load Kernel Module loop...
Feb 13 19:58:37.340601 systemd[1]: systemd-binfmt.service - Set Up Additional Binary Formats was skipped because no trigger condition checks were met.
Feb 13 19:58:37.345076 systemd[1]: Starting systemd-update-done.service - Update is Completed...
Feb 13 19:58:37.346384 systemd[1]: xenserver-pv-version.service - Set fake PV driver version for XenServer was skipped because of an unmet condition check (ConditionVirtualization=xen).
Feb 13 19:58:37.347758 systemd[1]: Finished systemd-networkd-wait-online.service - Wait for Network to be Configured.
Feb 13 19:58:37.351013 systemd[1]: Finished ensure-sysext.service.
Feb 13 19:58:37.352599 systemd[1]: Finished audit-rules.service - Load Security Auditing Rules.
Feb 13 19:58:37.354564 systemd[1]: Finished clean-ca-certificates.service - Clean up broken links in /etc/ssl/certs.
Feb 13 19:58:37.356436 systemd[1]: modprobe@dm_mod.service: Deactivated successfully.
Feb 13 19:58:37.356674 systemd[1]: Finished modprobe@dm_mod.service - Load Kernel Module dm_mod.
Feb 13 19:58:37.358478 systemd[1]: modprobe@drm.service: Deactivated successfully.
Feb 13 19:58:37.358705 systemd[1]: Finished modprobe@drm.service - Load Kernel Module drm.
Feb 13 19:58:37.359635 systemd-resolved[1455]: Positive Trust Anchors:
Feb 13 19:58:37.359659 systemd-resolved[1455]: . IN DS 20326 8 2 e06d44b80b8f1d39a95c0b0d7c65d08458e880409bbc683457104237c7f8ec8d
Feb 13 19:58:37.359693 systemd-resolved[1455]: Negative trust anchors: home.arpa 10.in-addr.arpa 16.172.in-addr.arpa 17.172.in-addr.arpa 18.172.in-addr.arpa 19.172.in-addr.arpa 20.172.in-addr.arpa 21.172.in-addr.arpa 22.172.in-addr.arpa 23.172.in-addr.arpa 24.172.in-addr.arpa 25.172.in-addr.arpa 26.172.in-addr.arpa 27.172.in-addr.arpa 28.172.in-addr.arpa 29.172.in-addr.arpa 30.172.in-addr.arpa 31.172.in-addr.arpa 170.0.0.192.in-addr.arpa 171.0.0.192.in-addr.arpa 168.192.in-addr.arpa d.f.ip6.arpa ipv4only.arpa resolver.arpa corp home internal intranet lan local private test
Feb 13 19:58:37.360432 systemd[1]: modprobe@efi_pstore.service: Deactivated successfully.
Feb 13 19:58:37.360655 systemd[1]: Finished modprobe@efi_pstore.service - Load Kernel Module efi_pstore.
Feb 13 19:58:37.362650 systemd[1]: modprobe@loop.service: Deactivated successfully.
Feb 13 19:58:37.362916 systemd[1]: Finished modprobe@loop.service - Load Kernel Module loop.
Feb 13 19:58:37.364043 systemd-resolved[1455]: Defaulting to hostname 'linux'.
Feb 13 19:58:37.366585 systemd[1]: Finished systemd-update-done.service - Update is Completed.
Feb 13 19:58:37.368197 systemd[1]: Started systemd-resolved.service - Network Name Resolution.
Feb 13 19:58:37.376848 systemd[1]: Reached target network.target - Network.
Feb 13 19:58:37.377865 systemd[1]: Reached target network-online.target - Network is Online.
Feb 13 19:58:37.379024 systemd[1]: Reached target nss-lookup.target - Host and Network Name Lookups.
Feb 13 19:58:37.380286 systemd[1]: systemd-pstore.service - Platform Persistent Storage Archival was skipped because of an unmet condition check (ConditionDirectoryNotEmpty=/sys/fs/pstore).
Feb 13 19:58:37.380377 systemd[1]: systemd-repart.service - Repartition Root Disk was skipped because no trigger condition checks were met.
Feb 13 19:58:37.392221 systemd[1]: Starting systemd-timesyncd.service - Network Time Synchronization...
Feb 13 19:58:37.393399 systemd[1]: update-ca-certificates.service - Update CA bundle at /etc/ssl/certs/ca-certificates.crt was skipped because of an unmet condition check (ConditionPathIsSymbolicLink=!/etc/ssl/certs/ca-certificates.crt).
Feb 13 19:58:37.462210 systemd[1]: Started systemd-timesyncd.service - Network Time Synchronization.
Feb 13 19:58:37.463761 systemd[1]: Reached target sysinit.target - System Initialization.
Feb 13 19:58:38.723107 systemd-resolved[1455]: Clock change detected. Flushing caches.
Feb 13 19:58:38.723136 systemd-timesyncd[1515]: Contacted time server 10.0.0.1:123 (10.0.0.1).
Feb 13 19:58:38.723182 systemd-timesyncd[1515]: Initial clock synchronization to Thu 2025-02-13 19:58:38.723036 UTC.
Feb 13 19:58:38.723937 systemd[1]: Started motdgen.path - Watch for update engine configuration changes.
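The positive trust anchor that systemd-resolved printed above is the DNSSEC root key in DS-record form (RFC 4034): owner and record type, then key tag, algorithm, digest type, and digest. A sketch that decodes those fields (algorithm 8 is RSASHA256, digest type 2 is SHA-256):

    #!/usr/bin/env python3
    # Sketch: decode the DS-record trust anchor logged above.
    ALGOS = {8: "RSASHA256"}
    DIGESTS = {2: "SHA-256"}

    def parse_ds(record: str) -> None:
        owner, _cls, _rtype, tag, algo, digest_type, digest = record.split()
        print(f"owner={owner} key_tag={tag} "
              f"algorithm={ALGOS.get(int(algo), algo)} "
              f"digest_type={DIGESTS.get(int(digest_type), digest_type)} "
              f"digest={digest[:16]}...")

    if __name__ == "__main__":
        parse_ds(". IN DS 20326 8 2 "
                 "e06d44b80b8f1d39a95c0b0d7c65d08458e880409bbc683457104237c7f8ec8d")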
Feb 13 19:58:38.725259 systemd[1]: Started user-cloudinit@var-lib-flatcar\x2dinstall-user_data.path - Watch for a cloud-config at /var/lib/flatcar-install/user_data.
Feb 13 19:58:38.726574 systemd[1]: Started systemd-tmpfiles-clean.timer - Daily Cleanup of Temporary Directories.
Feb 13 19:58:38.727882 systemd[1]: update-engine-stub.timer - Update Engine Stub Timer was skipped because of an unmet condition check (ConditionPathExists=/usr/.noupdate).
Feb 13 19:58:38.727907 systemd[1]: Reached target paths.target - Path Units.
Feb 13 19:58:38.728832 systemd[1]: Reached target time-set.target - System Time Set.
Feb 13 19:58:38.730124 systemd[1]: Started logrotate.timer - Daily rotation of log files.
Feb 13 19:58:38.731342 systemd[1]: Started mdadm.timer - Weekly check for MD array's redundancy information..
Feb 13 19:58:38.732624 systemd[1]: Reached target timers.target - Timer Units.
Feb 13 19:58:38.734901 systemd[1]: Listening on dbus.socket - D-Bus System Message Bus Socket.
Feb 13 19:58:38.739092 systemd[1]: Starting docker.socket - Docker Socket for the API...
Feb 13 19:58:38.742065 systemd[1]: Listening on sshd.socket - OpenSSH Server Socket.
Feb 13 19:58:38.747410 systemd[1]: Listening on docker.socket - Docker Socket for the API.
Feb 13 19:58:38.748689 systemd[1]: Reached target sockets.target - Socket Units.
Feb 13 19:58:38.749734 systemd[1]: Reached target basic.target - Basic System.
Feb 13 19:58:38.750993 systemd[1]: System is tainted: cgroupsv1
Feb 13 19:58:38.751038 systemd[1]: addon-config@oem.service - Configure Addon /oem was skipped because no trigger condition checks were met.
Feb 13 19:58:38.751063 systemd[1]: addon-run@oem.service - Run Addon /oem was skipped because no trigger condition checks were met.
Feb 13 19:58:38.752624 systemd[1]: Starting containerd.service - containerd container runtime...
Feb 13 19:58:38.755063 systemd[1]: Starting coreos-metadata.service - QEMU metadata agent...
Feb 13 19:58:38.757432 systemd[1]: Starting dbus.service - D-Bus System Message Bus...
Feb 13 19:58:38.761843 systemd[1]: Starting enable-oem-cloudinit.service - Enable cloudinit...
Feb 13 19:58:38.765010 systemd[1]: Starting extend-filesystems.service - Extend Filesystems...
Feb 13 19:58:38.766905 systemd[1]: flatcar-setup-environment.service - Modifies /etc/environment for CoreOS was skipped because of an unmet condition check (ConditionPathExists=/oem/bin/flatcar-setup-environment).
Feb 13 19:58:38.771917 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent...
Feb 13 19:58:38.774841 jq[1522]: false
Feb 13 19:58:38.775255 systemd[1]: Starting motdgen.service - Generate /run/flatcar/motd...
Feb 13 19:58:38.780485 systemd[1]: Starting nvidia.service - NVIDIA Configure Service...
Feb 13 19:58:38.788908 systemd[1]: Starting ssh-key-proc-cmdline.service - Install an ssh key from /proc/cmdline...
Feb 13 19:58:38.792803 extend-filesystems[1525]: Found loop3
Feb 13 19:58:38.792803 extend-filesystems[1525]: Found loop4
Feb 13 19:58:38.792803 extend-filesystems[1525]: Found loop5
Feb 13 19:58:38.792803 extend-filesystems[1525]: Found sr0
Feb 13 19:58:38.792803 extend-filesystems[1525]: Found vda
Feb 13 19:58:38.792803 extend-filesystems[1525]: Found vda1
Feb 13 19:58:38.792803 extend-filesystems[1525]: Found vda2
Feb 13 19:58:38.792803 extend-filesystems[1525]: Found vda3
Feb 13 19:58:38.792803 extend-filesystems[1525]: Found usr
Feb 13 19:58:38.792803 extend-filesystems[1525]: Found vda4
Feb 13 19:58:38.792803 extend-filesystems[1525]: Found vda6
Feb 13 19:58:38.792803 extend-filesystems[1525]: Found vda7
Feb 13 19:58:38.792803 extend-filesystems[1525]: Found vda9
Feb 13 19:58:38.792803 extend-filesystems[1525]: Checking size of /dev/vda9
Feb 13 19:58:38.794152 systemd[1]: Starting sshd-keygen.service - Generate sshd host keys...
Feb 13 19:58:38.798206 dbus-daemon[1521]: [system] SELinux support is enabled
Feb 13 19:58:38.802591 systemd[1]: Starting systemd-logind.service - User Login Management...
Feb 13 19:58:38.805281 systemd[1]: tcsd.service - TCG Core Services Daemon was skipped because of an unmet condition check (ConditionPathExists=/dev/tpm0).
Feb 13 19:58:38.809015 systemd[1]: Starting update-engine.service - Update Engine...
Feb 13 19:58:38.814864 systemd[1]: Starting update-ssh-keys-after-ignition.service - Run update-ssh-keys once after Ignition...
Feb 13 19:58:38.819517 systemd[1]: Started dbus.service - D-Bus System Message Bus.
Feb 13 19:58:38.826045 systemd[1]: enable-oem-cloudinit.service: Skipped due to 'exec-condition'.
Feb 13 19:58:38.827882 extend-filesystems[1525]: Resized partition /dev/vda9
Feb 13 19:58:38.829782 jq[1545]: true
Feb 13 19:58:38.826376 systemd[1]: Condition check resulted in enable-oem-cloudinit.service - Enable cloudinit being skipped.
Feb 13 19:58:38.832466 systemd[1]: ssh-key-proc-cmdline.service: Deactivated successfully.
Feb 13 19:58:38.835246 extend-filesystems[1554]: resize2fs 1.47.1 (20-May-2024)
Feb 13 19:58:38.835344 systemd[1]: Finished ssh-key-proc-cmdline.service - Install an ssh key from /proc/cmdline.
Feb 13 19:58:38.837292 systemd[1]: motdgen.service: Deactivated successfully.
Feb 13 19:58:38.837695 systemd[1]: Finished motdgen.service - Generate /run/flatcar/motd.
Feb 13 19:58:38.848136 kernel: EXT4-fs (vda9): resizing filesystem from 553472 to 1864699 blocks
Feb 13 19:58:38.861411 (ntainerd)[1561]: containerd.service: Referenced but unset environment variable evaluates to an empty string: TORCX_IMAGEDIR, TORCX_UNPACKDIR
Feb 13 19:58:38.863989 kernel: BTRFS warning: duplicate device /dev/vda3 devid 1 generation 37 scanned by (udev-worker) (1238)
Feb 13 19:58:38.866806 kernel: EXT4-fs (vda9): resized filesystem to 1864699
Feb 13 19:58:38.881185 systemd[1]: coreos-metadata.service: Deactivated successfully.
Feb 13 19:58:38.881650 systemd[1]: Finished coreos-metadata.service - QEMU metadata agent.
Feb 13 19:58:38.890520 systemd[1]: Finished nvidia.service - NVIDIA Configure Service.
Feb 13 19:58:38.892065 jq[1558]: true
Feb 13 19:58:38.897052 systemd[1]: extend-filesystems.service: Deactivated successfully.
Feb 13 19:58:38.902172 extend-filesystems[1554]: Filesystem at /dev/vda9 is mounted on /; on-line resizing required
Feb 13 19:58:38.902172 extend-filesystems[1554]: old_desc_blocks = 1, new_desc_blocks = 1
Feb 13 19:58:38.902172 extend-filesystems[1554]: The filesystem on /dev/vda9 is now 1864699 (4k) blocks long.
Feb 13 19:58:38.897404 systemd[1]: Finished extend-filesystems.service - Extend Filesystems.
Feb 13 19:58:38.907624 update_engine[1543]: I20250213 19:58:38.907128 1543 main.cc:92] Flatcar Update Engine starting
Feb 13 19:58:38.912182 extend-filesystems[1525]: Resized filesystem in /dev/vda9
Feb 13 19:58:38.916625 update_engine[1543]: I20250213 19:58:38.911903 1543 update_check_scheduler.cc:74] Next update check in 2m55s
Feb 13 19:58:38.928392 systemd[1]: Started update-engine.service - Update Engine.
Feb 13 19:58:38.930395 systemd[1]: packet-phone-home.service - Report Success to Packet was skipped because no trigger condition checks were met.
Feb 13 19:58:38.930524 systemd[1]: system-cloudinit@usr-share-oem-cloud\x2dconfig.yml.service - Load cloud-config from /usr/share/oem/cloud-config.yml was skipped because of an unmet condition check (ConditionFileNotEmpty=/usr/share/oem/cloud-config.yml).
Feb 13 19:58:38.930550 systemd[1]: Reached target system-config.target - Load system-provided cloud configs.
Feb 13 19:58:38.931994 systemd[1]: user-cloudinit-proc-cmdline.service - Load cloud-config from url defined in /proc/cmdline was skipped because of an unmet condition check (ConditionKernelCommandLine=cloud-config-url).
Feb 13 19:58:38.932011 systemd[1]: Reached target user-config.target - Load user-provided cloud configs.
Feb 13 19:58:38.935476 systemd[1]: cgroup compatibility translation between legacy and unified hierarchy settings activated. See cgroup-compat debug messages for details.
Feb 13 19:58:38.940843 systemd-logind[1538]: Watching system buttons on /dev/input/event1 (Power Button)
Feb 13 19:58:38.940873 systemd-logind[1538]: Watching system buttons on /dev/input/event0 (AT Translated Set 2 keyboard)
Feb 13 19:58:38.943024 systemd[1]: Started locksmithd.service - Cluster reboot manager.
Feb 13 19:58:38.943561 systemd-logind[1538]: New seat seat0.
Feb 13 19:58:38.947586 systemd[1]: Started systemd-logind.service - User Login Management.
Feb 13 19:58:38.955046 sshd_keygen[1559]: ssh-keygen: generating new host keys: RSA ECDSA ED25519
Feb 13 19:58:38.971858 bash[1600]: Updated "/home/core/.ssh/authorized_keys"
Feb 13 19:58:38.975301 systemd[1]: Finished update-ssh-keys-after-ignition.service - Run update-ssh-keys once after Ignition.
Feb 13 19:58:38.978262 systemd[1]: sshkeys.service was skipped because no trigger condition checks were met.
Feb 13 19:58:38.984381 systemd[1]: Finished sshd-keygen.service - Generate sshd host keys.
Feb 13 19:58:38.986552 locksmithd[1596]: locksmithd starting currentOperation="UPDATE_STATUS_IDLE" strategy="reboot"
Feb 13 19:58:38.995303 systemd[1]: Starting issuegen.service - Generate /run/issue...
Feb 13 19:58:39.005098 systemd[1]: issuegen.service: Deactivated successfully.
Feb 13 19:58:39.005458 systemd[1]: Finished issuegen.service - Generate /run/issue.
Feb 13 19:58:39.017312 systemd[1]: Starting systemd-user-sessions.service - Permit User Sessions...
Feb 13 19:58:39.029194 systemd[1]: Finished systemd-user-sessions.service - Permit User Sessions.
Feb 13 19:58:39.034713 systemd[1]: Started getty@tty1.service - Getty on tty1.
Feb 13 19:58:39.038003 systemd[1]: Started serial-getty@ttyS0.service - Serial Getty on ttyS0.
Feb 13 19:58:39.040802 systemd[1]: Reached target getty.target - Login Prompts.
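The online resize logged above grew the root filesystem from 553472 to 1864699 blocks at a 4 KiB block size. The worked arithmetic:

    #!/usr/bin/env python3
    # Worked arithmetic for the ext4 resize of /dev/vda9 logged above.
    BLOCK = 4096  # "(4k) blocks" per the resize2fs output

    for label, blocks in (("before", 553472), ("after", 1864699)):
        size = blocks * BLOCK
        print(f"{label}: {blocks} blocks = {size / 2**30:.2f} GiB")
    # before: 553472 blocks = 2.11 GiB
    # after:  1864699 blocks = 7.11 GiB

So extend-filesystems took the root partition from about 2.1 GiB to about 7.1 GiB of usable filesystem.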
Feb 13 19:58:39.115982 containerd[1561]: time="2025-02-13T19:58:39.115889436Z" level=info msg="starting containerd" revision=174e0d1785eeda18dc2beba45e1d5a188771636b version=v1.7.21 Feb 13 19:58:39.139432 containerd[1561]: time="2025-02-13T19:58:39.139357048Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.aufs\"..." type=io.containerd.snapshotter.v1 Feb 13 19:58:39.141552 containerd[1561]: time="2025-02-13T19:58:39.141512730Z" level=info msg="skip loading plugin \"io.containerd.snapshotter.v1.aufs\"..." error="aufs is not supported (modprobe aufs failed: exit status 1 \"modprobe: FATAL: Module aufs not found in directory /lib/modules/6.6.74-flatcar\\n\"): skip plugin" type=io.containerd.snapshotter.v1 Feb 13 19:58:39.141552 containerd[1561]: time="2025-02-13T19:58:39.141549309Z" level=info msg="loading plugin \"io.containerd.event.v1.exchange\"..." type=io.containerd.event.v1 Feb 13 19:58:39.141610 containerd[1561]: time="2025-02-13T19:58:39.141569056Z" level=info msg="loading plugin \"io.containerd.internal.v1.opt\"..." type=io.containerd.internal.v1 Feb 13 19:58:39.141852 containerd[1561]: time="2025-02-13T19:58:39.141828192Z" level=info msg="loading plugin \"io.containerd.warning.v1.deprecations\"..." type=io.containerd.warning.v1 Feb 13 19:58:39.141889 containerd[1561]: time="2025-02-13T19:58:39.141852868Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.blockfile\"..." type=io.containerd.snapshotter.v1 Feb 13 19:58:39.141961 containerd[1561]: time="2025-02-13T19:58:39.141941514Z" level=info msg="skip loading plugin \"io.containerd.snapshotter.v1.blockfile\"..." error="no scratch file generator: skip plugin" type=io.containerd.snapshotter.v1 Feb 13 19:58:39.141981 containerd[1561]: time="2025-02-13T19:58:39.141962083Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.btrfs\"..." type=io.containerd.snapshotter.v1 Feb 13 19:58:39.142341 containerd[1561]: time="2025-02-13T19:58:39.142312490Z" level=info msg="skip loading plugin \"io.containerd.snapshotter.v1.btrfs\"..." error="path /var/lib/containerd/io.containerd.snapshotter.v1.btrfs (ext4) must be a btrfs filesystem to be used with the btrfs snapshotter: skip plugin" type=io.containerd.snapshotter.v1 Feb 13 19:58:39.142368 containerd[1561]: time="2025-02-13T19:58:39.142337136Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.devmapper\"..." type=io.containerd.snapshotter.v1 Feb 13 19:58:39.142368 containerd[1561]: time="2025-02-13T19:58:39.142361602Z" level=info msg="skip loading plugin \"io.containerd.snapshotter.v1.devmapper\"..." error="devmapper not configured: skip plugin" type=io.containerd.snapshotter.v1 Feb 13 19:58:39.142412 containerd[1561]: time="2025-02-13T19:58:39.142378654Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.native\"..." type=io.containerd.snapshotter.v1 Feb 13 19:58:39.142534 containerd[1561]: time="2025-02-13T19:58:39.142509920Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.overlayfs\"..." type=io.containerd.snapshotter.v1 Feb 13 19:58:39.142887 containerd[1561]: time="2025-02-13T19:58:39.142861860Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.zfs\"..." type=io.containerd.snapshotter.v1 Feb 13 19:58:39.143135 containerd[1561]: time="2025-02-13T19:58:39.143107580Z" level=info msg="skip loading plugin \"io.containerd.snapshotter.v1.zfs\"..." 
error="path /var/lib/containerd/io.containerd.snapshotter.v1.zfs must be a zfs filesystem to be used with the zfs snapshotter: skip plugin" type=io.containerd.snapshotter.v1 Feb 13 19:58:39.143135 containerd[1561]: time="2025-02-13T19:58:39.143131355Z" level=info msg="loading plugin \"io.containerd.content.v1.content\"..." type=io.containerd.content.v1 Feb 13 19:58:39.143285 containerd[1561]: time="2025-02-13T19:58:39.143263212Z" level=info msg="loading plugin \"io.containerd.metadata.v1.bolt\"..." type=io.containerd.metadata.v1 Feb 13 19:58:39.143357 containerd[1561]: time="2025-02-13T19:58:39.143339716Z" level=info msg="metadata content store policy set" policy=shared Feb 13 19:58:39.149306 containerd[1561]: time="2025-02-13T19:58:39.149240179Z" level=info msg="loading plugin \"io.containerd.gc.v1.scheduler\"..." type=io.containerd.gc.v1 Feb 13 19:58:39.149351 containerd[1561]: time="2025-02-13T19:58:39.149331480Z" level=info msg="loading plugin \"io.containerd.differ.v1.walking\"..." type=io.containerd.differ.v1 Feb 13 19:58:39.149386 containerd[1561]: time="2025-02-13T19:58:39.149350145Z" level=info msg="loading plugin \"io.containerd.lease.v1.manager\"..." type=io.containerd.lease.v1 Feb 13 19:58:39.149406 containerd[1561]: time="2025-02-13T19:58:39.149396151Z" level=info msg="loading plugin \"io.containerd.streaming.v1.manager\"..." type=io.containerd.streaming.v1 Feb 13 19:58:39.149446 containerd[1561]: time="2025-02-13T19:58:39.149419425Z" level=info msg="loading plugin \"io.containerd.runtime.v1.linux\"..." type=io.containerd.runtime.v1 Feb 13 19:58:39.149653 containerd[1561]: time="2025-02-13T19:58:39.149622115Z" level=info msg="loading plugin \"io.containerd.monitor.v1.cgroups\"..." type=io.containerd.monitor.v1 Feb 13 19:58:39.150170 containerd[1561]: time="2025-02-13T19:58:39.150137381Z" level=info msg="loading plugin \"io.containerd.runtime.v2.task\"..." type=io.containerd.runtime.v2 Feb 13 19:58:39.150291 containerd[1561]: time="2025-02-13T19:58:39.150273366Z" level=info msg="loading plugin \"io.containerd.runtime.v2.shim\"..." type=io.containerd.runtime.v2 Feb 13 19:58:39.150313 containerd[1561]: time="2025-02-13T19:58:39.150293835Z" level=info msg="loading plugin \"io.containerd.sandbox.store.v1.local\"..." type=io.containerd.sandbox.store.v1 Feb 13 19:58:39.150313 containerd[1561]: time="2025-02-13T19:58:39.150307701Z" level=info msg="loading plugin \"io.containerd.sandbox.controller.v1.local\"..." type=io.containerd.sandbox.controller.v1 Feb 13 19:58:39.150349 containerd[1561]: time="2025-02-13T19:58:39.150323049Z" level=info msg="loading plugin \"io.containerd.service.v1.containers-service\"..." type=io.containerd.service.v1 Feb 13 19:58:39.150349 containerd[1561]: time="2025-02-13T19:58:39.150336595Z" level=info msg="loading plugin \"io.containerd.service.v1.content-service\"..." type=io.containerd.service.v1 Feb 13 19:58:39.150392 containerd[1561]: time="2025-02-13T19:58:39.150350040Z" level=info msg="loading plugin \"io.containerd.service.v1.diff-service\"..." type=io.containerd.service.v1 Feb 13 19:58:39.150392 containerd[1561]: time="2025-02-13T19:58:39.150364537Z" level=info msg="loading plugin \"io.containerd.service.v1.images-service\"..." type=io.containerd.service.v1 Feb 13 19:58:39.150392 containerd[1561]: time="2025-02-13T19:58:39.150378744Z" level=info msg="loading plugin \"io.containerd.service.v1.introspection-service\"..." 
type=io.containerd.service.v1 Feb 13 19:58:39.150392 containerd[1561]: time="2025-02-13T19:58:39.150391418Z" level=info msg="loading plugin \"io.containerd.service.v1.namespaces-service\"..." type=io.containerd.service.v1 Feb 13 19:58:39.150460 containerd[1561]: time="2025-02-13T19:58:39.150404422Z" level=info msg="loading plugin \"io.containerd.service.v1.snapshots-service\"..." type=io.containerd.service.v1 Feb 13 19:58:39.150460 containerd[1561]: time="2025-02-13T19:58:39.150417366Z" level=info msg="loading plugin \"io.containerd.service.v1.tasks-service\"..." type=io.containerd.service.v1 Feb 13 19:58:39.150460 containerd[1561]: time="2025-02-13T19:58:39.150438155Z" level=info msg="loading plugin \"io.containerd.grpc.v1.containers\"..." type=io.containerd.grpc.v1 Feb 13 19:58:39.150460 containerd[1561]: time="2025-02-13T19:58:39.150452212Z" level=info msg="loading plugin \"io.containerd.grpc.v1.content\"..." type=io.containerd.grpc.v1 Feb 13 19:58:39.150533 containerd[1561]: time="2025-02-13T19:58:39.150467650Z" level=info msg="loading plugin \"io.containerd.grpc.v1.diff\"..." type=io.containerd.grpc.v1 Feb 13 19:58:39.150533 containerd[1561]: time="2025-02-13T19:58:39.150487237Z" level=info msg="loading plugin \"io.containerd.grpc.v1.events\"..." type=io.containerd.grpc.v1 Feb 13 19:58:39.150533 containerd[1561]: time="2025-02-13T19:58:39.150500853Z" level=info msg="loading plugin \"io.containerd.grpc.v1.images\"..." type=io.containerd.grpc.v1 Feb 13 19:58:39.150533 containerd[1561]: time="2025-02-13T19:58:39.150514158Z" level=info msg="loading plugin \"io.containerd.grpc.v1.introspection\"..." type=io.containerd.grpc.v1 Feb 13 19:58:39.150533 containerd[1561]: time="2025-02-13T19:58:39.150525870Z" level=info msg="loading plugin \"io.containerd.grpc.v1.leases\"..." type=io.containerd.grpc.v1 Feb 13 19:58:39.150628 containerd[1561]: time="2025-02-13T19:58:39.150538503Z" level=info msg="loading plugin \"io.containerd.grpc.v1.namespaces\"..." type=io.containerd.grpc.v1 Feb 13 19:58:39.150628 containerd[1561]: time="2025-02-13T19:58:39.150551428Z" level=info msg="loading plugin \"io.containerd.grpc.v1.sandbox-controllers\"..." type=io.containerd.grpc.v1 Feb 13 19:58:39.150628 containerd[1561]: time="2025-02-13T19:58:39.150565865Z" level=info msg="loading plugin \"io.containerd.grpc.v1.sandboxes\"..." type=io.containerd.grpc.v1 Feb 13 19:58:39.150628 containerd[1561]: time="2025-02-13T19:58:39.150577967Z" level=info msg="loading plugin \"io.containerd.grpc.v1.snapshots\"..." type=io.containerd.grpc.v1 Feb 13 19:58:39.150628 containerd[1561]: time="2025-02-13T19:58:39.150590801Z" level=info msg="loading plugin \"io.containerd.grpc.v1.streaming\"..." type=io.containerd.grpc.v1 Feb 13 19:58:39.150628 containerd[1561]: time="2025-02-13T19:58:39.150602343Z" level=info msg="loading plugin \"io.containerd.grpc.v1.tasks\"..." type=io.containerd.grpc.v1 Feb 13 19:58:39.150628 containerd[1561]: time="2025-02-13T19:58:39.150616910Z" level=info msg="loading plugin \"io.containerd.transfer.v1.local\"..." type=io.containerd.transfer.v1 Feb 13 19:58:39.150770 containerd[1561]: time="2025-02-13T19:58:39.150641166Z" level=info msg="loading plugin \"io.containerd.grpc.v1.transfer\"..." type=io.containerd.grpc.v1 Feb 13 19:58:39.150770 containerd[1561]: time="2025-02-13T19:58:39.150654601Z" level=info msg="loading plugin \"io.containerd.grpc.v1.version\"..." 
type=io.containerd.grpc.v1 Feb 13 19:58:39.150770 containerd[1561]: time="2025-02-13T19:58:39.150664860Z" level=info msg="loading plugin \"io.containerd.internal.v1.restart\"..." type=io.containerd.internal.v1 Feb 13 19:58:39.150770 containerd[1561]: time="2025-02-13T19:58:39.150712189Z" level=info msg="loading plugin \"io.containerd.tracing.processor.v1.otlp\"..." type=io.containerd.tracing.processor.v1 Feb 13 19:58:39.150770 containerd[1561]: time="2025-02-13T19:58:39.150730093Z" level=info msg="skip loading plugin \"io.containerd.tracing.processor.v1.otlp\"..." error="skip plugin: tracing endpoint not configured" type=io.containerd.tracing.processor.v1 Feb 13 19:58:39.150770 containerd[1561]: time="2025-02-13T19:58:39.150760790Z" level=info msg="loading plugin \"io.containerd.internal.v1.tracing\"..." type=io.containerd.internal.v1 Feb 13 19:58:39.150896 containerd[1561]: time="2025-02-13T19:58:39.150773825Z" level=info msg="skip loading plugin \"io.containerd.internal.v1.tracing\"..." error="skip plugin: tracing endpoint not configured" type=io.containerd.internal.v1 Feb 13 19:58:39.150896 containerd[1561]: time="2025-02-13T19:58:39.150784174Z" level=info msg="loading plugin \"io.containerd.grpc.v1.healthcheck\"..." type=io.containerd.grpc.v1 Feb 13 19:58:39.150896 containerd[1561]: time="2025-02-13T19:58:39.150796347Z" level=info msg="loading plugin \"io.containerd.nri.v1.nri\"..." type=io.containerd.nri.v1 Feb 13 19:58:39.150896 containerd[1561]: time="2025-02-13T19:58:39.150822045Z" level=info msg="NRI interface is disabled by configuration." Feb 13 19:58:39.150896 containerd[1561]: time="2025-02-13T19:58:39.150832805Z" level=info msg="loading plugin \"io.containerd.grpc.v1.cri\"..." type=io.containerd.grpc.v1 Feb 13 19:58:39.151151 containerd[1561]: time="2025-02-13T19:58:39.151089416Z" level=info msg="Start cri plugin with config {PluginConfig:{ContainerdConfig:{Snapshotter:overlayfs DefaultRuntimeName:runc DefaultRuntime:{Type: Path: Engine: PodAnnotations:[] ContainerAnnotations:[] Root: Options:map[] PrivilegedWithoutHostDevices:false PrivilegedWithoutHostDevicesAllDevicesAllowed:false BaseRuntimeSpec: NetworkPluginConfDir: NetworkPluginMaxConfNum:0 Snapshotter: SandboxMode:} UntrustedWorkloadRuntime:{Type: Path: Engine: PodAnnotations:[] ContainerAnnotations:[] Root: Options:map[] PrivilegedWithoutHostDevices:false PrivilegedWithoutHostDevicesAllDevicesAllowed:false BaseRuntimeSpec: NetworkPluginConfDir: NetworkPluginMaxConfNum:0 Snapshotter: SandboxMode:} Runtimes:map[runc:{Type:io.containerd.runc.v2 Path: Engine: PodAnnotations:[] ContainerAnnotations:[] Root: Options:map[SystemdCgroup:false] PrivilegedWithoutHostDevices:false PrivilegedWithoutHostDevicesAllDevicesAllowed:false BaseRuntimeSpec: NetworkPluginConfDir: NetworkPluginMaxConfNum:0 Snapshotter: SandboxMode:podsandbox}] NoPivot:false DisableSnapshotAnnotations:true DiscardUnpackedLayers:false IgnoreBlockIONotEnabledErrors:false IgnoreRdtNotEnabledErrors:false} CniConfig:{NetworkPluginBinDir:/opt/cni/bin NetworkPluginConfDir:/etc/cni/net.d NetworkPluginMaxConfNum:1 NetworkPluginSetupSerially:false NetworkPluginConfTemplate: IPPreference:} Registry:{ConfigPath: Mirrors:map[] Configs:map[] Auths:map[] Headers:map[]} ImageDecryption:{KeyModel:node} DisableTCPService:true StreamServerAddress:127.0.0.1 StreamServerPort:0 StreamIdleTimeout:4h0m0s EnableSelinux:false SelinuxCategoryRange:1024 SandboxImage:registry.k8s.io/pause:3.8 StatsCollectPeriod:10 SystemdCgroup:false EnableTLSStreaming:false 
X509KeyPairStreaming:{TLSCertFile: TLSKeyFile:} MaxContainerLogLineSize:16384 DisableCgroup:false DisableApparmor:false RestrictOOMScoreAdj:false MaxConcurrentDownloads:3 DisableProcMount:false UnsetSeccompProfile: TolerateMissingHugetlbController:true DisableHugetlbController:true DeviceOwnershipFromSecurityContext:false IgnoreImageDefinedVolumes:false NetNSMountsUnderStateDir:false EnableUnprivilegedPorts:false EnableUnprivilegedICMP:false EnableCDI:false CDISpecDirs:[/etc/cdi /var/run/cdi] ImagePullProgressTimeout:5m0s DrainExecSyncIOTimeout:0s ImagePullWithSyncFs:false IgnoreDeprecationWarnings:[]} ContainerdRootDir:/var/lib/containerd ContainerdEndpoint:/run/containerd/containerd.sock RootDir:/var/lib/containerd/io.containerd.grpc.v1.cri StateDir:/run/containerd/io.containerd.grpc.v1.cri}" Feb 13 19:58:39.151151 containerd[1561]: time="2025-02-13T19:58:39.151150561Z" level=info msg="Connect containerd service" Feb 13 19:58:39.151303 containerd[1561]: time="2025-02-13T19:58:39.151182421Z" level=info msg="using legacy CRI server" Feb 13 19:58:39.151303 containerd[1561]: time="2025-02-13T19:58:39.151189444Z" level=info msg="using experimental NRI integration - disable nri plugin to prevent this" Feb 13 19:58:39.151303 containerd[1561]: time="2025-02-13T19:58:39.151281016Z" level=info msg="Get image filesystem path \"/var/lib/containerd/io.containerd.snapshotter.v1.overlayfs\"" Feb 13 19:58:39.151938 containerd[1561]: time="2025-02-13T19:58:39.151904234Z" level=error msg="failed to load cni during init, please check CRI plugin status before setting up network for pods" error="cni config load failed: no network config found in /etc/cni/net.d: cni plugin not initialized: failed to load cni config" Feb 13 19:58:39.152122 containerd[1561]: time="2025-02-13T19:58:39.152067841Z" level=info msg="Start subscribing containerd event" Feb 13 19:58:39.152150 containerd[1561]: time="2025-02-13T19:58:39.152142451Z" level=info msg="Start recovering state" Feb 13 19:58:39.152491 containerd[1561]: time="2025-02-13T19:58:39.152443605Z" level=info msg=serving... address=/run/containerd/containerd.sock.ttrpc Feb 13 19:58:39.152525 containerd[1561]: time="2025-02-13T19:58:39.152509409Z" level=info msg=serving... address=/run/containerd/containerd.sock Feb 13 19:58:39.153674 containerd[1561]: time="2025-02-13T19:58:39.153647803Z" level=info msg="Start event monitor" Feb 13 19:58:39.153860 containerd[1561]: time="2025-02-13T19:58:39.153724918Z" level=info msg="Start snapshots syncer" Feb 13 19:58:39.153860 containerd[1561]: time="2025-02-13T19:58:39.153754734Z" level=info msg="Start cni network conf syncer for default" Feb 13 19:58:39.153860 containerd[1561]: time="2025-02-13T19:58:39.153765614Z" level=info msg="Start streaming server" Feb 13 19:58:39.154131 containerd[1561]: time="2025-02-13T19:58:39.154109459Z" level=info msg="containerd successfully booted in 0.039667s" Feb 13 19:58:39.154137 systemd[1]: Started containerd.service - containerd container runtime. Feb 13 19:58:39.860049 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent. Feb 13 19:58:39.861957 systemd[1]: Reached target multi-user.target - Multi-User System. Feb 13 19:58:39.863867 systemd[1]: Startup finished in 7.549s (kernel) + 4.508s (userspace) = 12.058s. 
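containerd's only startup error above is the CRI plugin finding no network config in /etc/cni/net.d, which is expected this early: per the config dump, it watches NetworkPluginConfDir=/etc/cni/net.d, loads at most one conflist (NetworkPluginMaxConfNum:1), and on this host Calico drops the real config later (see the "wait for other system components to drop the config" message further down). Purely for illustration, the shape of a conflist the plugin scans for; the file name and contents below are assumptions, not what Calico installs:

    import json, pathlib

    # Minimal CNI conflist of the kind the CRI plugin looks for in
    # /etc/cni/net.d (illustrative only; writing there needs root).
    conflist = {
        "cniVersion": "0.3.1",
        "name": "example-net",
        "plugins": [{"type": "loopback"}],
    }
    net_d = pathlib.Path("/etc/cni/net.d")
    net_d.mkdir(parents=True, exist_ok=True)
    (net_d / "10-example.conflist").write_text(json.dumps(conflist, indent=2))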
Feb 13 19:58:39.893335 (kubelet)[1645]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS, KUBELET_KUBEADM_ARGS Feb 13 19:58:40.544510 kubelet[1645]: E0213 19:58:40.544393 1645 run.go:74] "command failed" err="failed to load kubelet config file, path: /var/lib/kubelet/config.yaml, error: failed to load Kubelet config file /var/lib/kubelet/config.yaml, error failed to read kubelet config file \"/var/lib/kubelet/config.yaml\", error: open /var/lib/kubelet/config.yaml: no such file or directory" Feb 13 19:58:40.549393 systemd[1]: kubelet.service: Main process exited, code=exited, status=1/FAILURE Feb 13 19:58:40.549707 systemd[1]: kubelet.service: Failed with result 'exit-code'. Feb 13 19:58:48.009867 systemd[1]: Created slice system-sshd.slice - Slice /system/sshd. Feb 13 19:58:48.021120 systemd[1]: Started sshd@0-10.0.0.102:22-10.0.0.1:34368.service - OpenSSH per-connection server daemon (10.0.0.1:34368). Feb 13 19:58:48.062481 sshd[1659]: Accepted publickey for core from 10.0.0.1 port 34368 ssh2: RSA SHA256:1AKUQv4hMaRYqQWlpL9sCc1VFFYvBMLLM0QK6OFmV8g Feb 13 19:58:48.064924 sshd[1659]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Feb 13 19:58:48.075207 systemd-logind[1538]: New session 1 of user core. Feb 13 19:58:48.076438 systemd[1]: Created slice user-500.slice - User Slice of UID 500. Feb 13 19:58:48.089266 systemd[1]: Starting user-runtime-dir@500.service - User Runtime Directory /run/user/500... Feb 13 19:58:48.104474 systemd[1]: Finished user-runtime-dir@500.service - User Runtime Directory /run/user/500. Feb 13 19:58:48.113174 systemd[1]: Starting user@500.service - User Manager for UID 500... Feb 13 19:58:48.118659 (systemd)[1665]: pam_unix(systemd-user:session): session opened for user core(uid=500) by (uid=0) Feb 13 19:58:48.272938 systemd[1665]: Queued start job for default target default.target. Feb 13 19:58:48.273538 systemd[1665]: Created slice app.slice - User Application Slice. Feb 13 19:58:48.273575 systemd[1665]: Reached target paths.target - Paths. Feb 13 19:58:48.273595 systemd[1665]: Reached target timers.target - Timers. Feb 13 19:58:48.284850 systemd[1665]: Starting dbus.socket - D-Bus User Message Bus Socket... Feb 13 19:58:48.294184 systemd[1665]: Listening on dbus.socket - D-Bus User Message Bus Socket. Feb 13 19:58:48.294296 systemd[1665]: Reached target sockets.target - Sockets. Feb 13 19:58:48.294316 systemd[1665]: Reached target basic.target - Basic System. Feb 13 19:58:48.294381 systemd[1665]: Reached target default.target - Main User Target. Feb 13 19:58:48.294431 systemd[1665]: Startup finished in 164ms. Feb 13 19:58:48.295021 systemd[1]: Started user@500.service - User Manager for UID 500. Feb 13 19:58:48.297432 systemd[1]: Started session-1.scope - Session 1 of User core. Feb 13 19:58:48.356021 systemd[1]: Started sshd@1-10.0.0.102:22-10.0.0.1:34384.service - OpenSSH per-connection server daemon (10.0.0.1:34384). Feb 13 19:58:48.385157 sshd[1677]: Accepted publickey for core from 10.0.0.1 port 34384 ssh2: RSA SHA256:1AKUQv4hMaRYqQWlpL9sCc1VFFYvBMLLM0QK6OFmV8g Feb 13 19:58:48.387154 sshd[1677]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Feb 13 19:58:48.392075 systemd-logind[1538]: New session 2 of user core. Feb 13 19:58:48.409352 systemd[1]: Started session-2.scope - Session 2 of User core. 
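The kubelet failure at the top of this stretch is mundane: nothing has provisioned /var/lib/kubelet/config.yaml yet, so the process exits and systemd records exit-code 1. For illustration, a minimal KubeletConfiguration of the sort that file holds; the field values are assumptions (on a kubeadm cluster the file is written during init/join), though cgroupDriver: cgroupfs matches what this kubelet later reports:

    import pathlib, textwrap

    # Minimal illustrative /var/lib/kubelet/config.yaml (writing needs root).
    CONFIG = textwrap.dedent("""\
        apiVersion: kubelet.config.k8s.io/v1beta1
        kind: KubeletConfiguration
        cgroupDriver: cgroupfs
        containerRuntimeEndpoint: unix:///run/containerd/containerd.sock
    """)
    kubelet_dir = pathlib.Path("/var/lib/kubelet")
    kubelet_dir.mkdir(parents=True, exist_ok=True)
    (kubelet_dir / "config.yaml").write_text(CONFIG)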
Feb 13 19:58:48.467734 sshd[1677]: pam_unix(sshd:session): session closed for user core Feb 13 19:58:48.482129 systemd[1]: Started sshd@2-10.0.0.102:22-10.0.0.1:34390.service - OpenSSH per-connection server daemon (10.0.0.1:34390). Feb 13 19:58:48.482791 systemd[1]: sshd@1-10.0.0.102:22-10.0.0.1:34384.service: Deactivated successfully. Feb 13 19:58:48.485867 systemd-logind[1538]: Session 2 logged out. Waiting for processes to exit. Feb 13 19:58:48.486775 systemd[1]: session-2.scope: Deactivated successfully. Feb 13 19:58:48.488542 systemd-logind[1538]: Removed session 2. Feb 13 19:58:48.513610 sshd[1682]: Accepted publickey for core from 10.0.0.1 port 34390 ssh2: RSA SHA256:1AKUQv4hMaRYqQWlpL9sCc1VFFYvBMLLM0QK6OFmV8g Feb 13 19:58:48.515626 sshd[1682]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Feb 13 19:58:48.520183 systemd-logind[1538]: New session 3 of user core. Feb 13 19:58:48.539300 systemd[1]: Started session-3.scope - Session 3 of User core. Feb 13 19:58:48.593004 sshd[1682]: pam_unix(sshd:session): session closed for user core Feb 13 19:58:48.610197 systemd[1]: Started sshd@3-10.0.0.102:22-10.0.0.1:34404.service - OpenSSH per-connection server daemon (10.0.0.1:34404). Feb 13 19:58:48.610939 systemd[1]: sshd@2-10.0.0.102:22-10.0.0.1:34390.service: Deactivated successfully. Feb 13 19:58:48.614673 systemd-logind[1538]: Session 3 logged out. Waiting for processes to exit. Feb 13 19:58:48.615130 systemd[1]: session-3.scope: Deactivated successfully. Feb 13 19:58:48.616777 systemd-logind[1538]: Removed session 3. Feb 13 19:58:48.640309 sshd[1690]: Accepted publickey for core from 10.0.0.1 port 34404 ssh2: RSA SHA256:1AKUQv4hMaRYqQWlpL9sCc1VFFYvBMLLM0QK6OFmV8g Feb 13 19:58:48.642227 sshd[1690]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Feb 13 19:58:48.647166 systemd-logind[1538]: New session 4 of user core. Feb 13 19:58:48.659071 systemd[1]: Started session-4.scope - Session 4 of User core. Feb 13 19:58:48.715610 sshd[1690]: pam_unix(sshd:session): session closed for user core Feb 13 19:58:48.730084 systemd[1]: Started sshd@4-10.0.0.102:22-10.0.0.1:34406.service - OpenSSH per-connection server daemon (10.0.0.1:34406). Feb 13 19:58:48.730768 systemd[1]: sshd@3-10.0.0.102:22-10.0.0.1:34404.service: Deactivated successfully. Feb 13 19:58:48.733776 systemd[1]: session-4.scope: Deactivated successfully. Feb 13 19:58:48.734659 systemd-logind[1538]: Session 4 logged out. Waiting for processes to exit. Feb 13 19:58:48.736345 systemd-logind[1538]: Removed session 4. Feb 13 19:58:48.761434 sshd[1698]: Accepted publickey for core from 10.0.0.1 port 34406 ssh2: RSA SHA256:1AKUQv4hMaRYqQWlpL9sCc1VFFYvBMLLM0QK6OFmV8g Feb 13 19:58:48.763460 sshd[1698]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Feb 13 19:58:48.768387 systemd-logind[1538]: New session 5 of user core. Feb 13 19:58:48.778168 systemd[1]: Started session-5.scope - Session 5 of User core. Feb 13 19:58:48.838951 sudo[1705]: core : PWD=/home/core ; USER=root ; COMMAND=/usr/sbin/setenforce 1 Feb 13 19:58:48.839325 sudo[1705]: pam_unix(sudo:session): session opened for user root(uid=0) by core(uid=500) Feb 13 19:58:48.860977 sudo[1705]: pam_unix(sudo:session): session closed for user root Feb 13 19:58:48.863100 sshd[1698]: pam_unix(sshd:session): session closed for user core Feb 13 19:58:48.872028 systemd[1]: Started sshd@5-10.0.0.102:22-10.0.0.1:34418.service - OpenSSH per-connection server daemon (10.0.0.1:34418). 
Feb 13 19:58:48.873041 systemd[1]: sshd@4-10.0.0.102:22-10.0.0.1:34406.service: Deactivated successfully. Feb 13 19:58:48.875758 systemd-logind[1538]: Session 5 logged out. Waiting for processes to exit. Feb 13 19:58:48.877255 systemd[1]: session-5.scope: Deactivated successfully. Feb 13 19:58:48.878404 systemd-logind[1538]: Removed session 5. Feb 13 19:58:48.900824 sshd[1707]: Accepted publickey for core from 10.0.0.1 port 34418 ssh2: RSA SHA256:1AKUQv4hMaRYqQWlpL9sCc1VFFYvBMLLM0QK6OFmV8g Feb 13 19:58:48.902583 sshd[1707]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Feb 13 19:58:48.906937 systemd-logind[1538]: New session 6 of user core. Feb 13 19:58:48.917002 systemd[1]: Started session-6.scope - Session 6 of User core. Feb 13 19:58:48.973439 sudo[1715]: core : PWD=/home/core ; USER=root ; COMMAND=/usr/bin/rm -rf /etc/audit/rules.d/80-selinux.rules /etc/audit/rules.d/99-default.rules Feb 13 19:58:48.973817 sudo[1715]: pam_unix(sudo:session): session opened for user root(uid=0) by core(uid=500) Feb 13 19:58:48.977730 sudo[1715]: pam_unix(sudo:session): session closed for user root Feb 13 19:58:48.984417 sudo[1714]: core : PWD=/home/core ; USER=root ; COMMAND=/usr/bin/systemctl restart audit-rules Feb 13 19:58:48.984797 sudo[1714]: pam_unix(sudo:session): session opened for user root(uid=0) by core(uid=500) Feb 13 19:58:49.005043 systemd[1]: Stopping audit-rules.service - Load Security Auditing Rules... Feb 13 19:58:49.006973 auditctl[1718]: No rules Feb 13 19:58:49.008523 systemd[1]: audit-rules.service: Deactivated successfully. Feb 13 19:58:49.008929 systemd[1]: Stopped audit-rules.service - Load Security Auditing Rules. Feb 13 19:58:49.010997 systemd[1]: Starting audit-rules.service - Load Security Auditing Rules... Feb 13 19:58:49.043855 augenrules[1737]: No rules Feb 13 19:58:49.044911 systemd[1]: Finished audit-rules.service - Load Security Auditing Rules. Feb 13 19:58:49.046408 sudo[1714]: pam_unix(sudo:session): session closed for user root Feb 13 19:58:49.048918 sshd[1707]: pam_unix(sshd:session): session closed for user core Feb 13 19:58:49.068146 systemd[1]: Started sshd@6-10.0.0.102:22-10.0.0.1:34424.service - OpenSSH per-connection server daemon (10.0.0.1:34424). Feb 13 19:58:49.068957 systemd[1]: sshd@5-10.0.0.102:22-10.0.0.1:34418.service: Deactivated successfully. Feb 13 19:58:49.071633 systemd-logind[1538]: Session 6 logged out. Waiting for processes to exit. Feb 13 19:58:49.072504 systemd[1]: session-6.scope: Deactivated successfully. Feb 13 19:58:49.073928 systemd-logind[1538]: Removed session 6. Feb 13 19:58:49.096625 sshd[1743]: Accepted publickey for core from 10.0.0.1 port 34424 ssh2: RSA SHA256:1AKUQv4hMaRYqQWlpL9sCc1VFFYvBMLLM0QK6OFmV8g Feb 13 19:58:49.098244 sshd[1743]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Feb 13 19:58:49.102241 systemd-logind[1538]: New session 7 of user core. Feb 13 19:58:49.112027 systemd[1]: Started session-7.scope - Session 7 of User core. Feb 13 19:58:49.167561 sudo[1750]: core : PWD=/home/core ; USER=root ; COMMAND=/home/core/install.sh Feb 13 19:58:49.167996 sudo[1750]: pam_unix(sudo:session): session opened for user root(uid=0) by core(uid=500) Feb 13 19:58:49.197008 systemd[1]: Starting coreos-metadata.service - QEMU metadata agent... Feb 13 19:58:49.220098 systemd[1]: coreos-metadata.service: Deactivated successfully. Feb 13 19:58:49.220521 systemd[1]: Finished coreos-metadata.service - QEMU metadata agent. 
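The sudo sequence above deliberately empties the audit rule set: the rules.d snippets are removed, audit-rules.service is bounced, and both auditctl and augenrules report "No rules". The same round-trip, sketched with the audit userspace tools directly rather than through systemd (a sketch; requires root):

    import subprocess

    # Flush whatever rules are loaded, then rebuild from /etc/audit/rules.d
    # (empty here, so augenrules reports "No rules", as in the log).
    subprocess.run(["auditctl", "-D"], check=True)
    subprocess.run(["augenrules", "--load"], check=True)
    print(subprocess.run(["auditctl", "-l"],
                         capture_output=True, text=True).stdout)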
Feb 13 19:58:49.808001 systemd[1]: Stopped kubelet.service - kubelet: The Kubernetes Node Agent. Feb 13 19:58:49.821024 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent... Feb 13 19:58:49.860866 systemd[1]: Reloading requested from client PID 1803 ('systemctl') (unit session-7.scope)... Feb 13 19:58:49.860889 systemd[1]: Reloading... Feb 13 19:58:49.946778 zram_generator::config[1847]: No configuration found. Feb 13 19:58:50.316219 systemd[1]: /usr/lib/systemd/system/docker.socket:6: ListenStream= references a path below legacy directory /var/run/, updating /var/run/docker.sock → /run/docker.sock; please update the unit file accordingly. Feb 13 19:58:50.399785 systemd[1]: Reloading finished in 538 ms. Feb 13 19:58:50.448576 systemd[1]: kubelet.service: Control process exited, code=killed, status=15/TERM Feb 13 19:58:50.448689 systemd[1]: kubelet.service: Failed with result 'signal'. Feb 13 19:58:50.449066 systemd[1]: Stopped kubelet.service - kubelet: The Kubernetes Node Agent. Feb 13 19:58:50.467401 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent... Feb 13 19:58:50.617170 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent. Feb 13 19:58:50.622736 (kubelet)[1901]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS Feb 13 19:58:50.662832 kubelet[1901]: Flag --container-runtime-endpoint has been deprecated, This parameter should be set via the config file specified by the Kubelet's --config flag. See https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/ for more information. Feb 13 19:58:50.662832 kubelet[1901]: Flag --pod-infra-container-image has been deprecated, will be removed in a future release. Image garbage collector will get sandbox image information from CRI. Feb 13 19:58:50.662832 kubelet[1901]: Flag --volume-plugin-dir has been deprecated, This parameter should be set via the config file specified by the Kubelet's --config flag. See https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/ for more information. Feb 13 19:58:50.663956 kubelet[1901]: I0213 19:58:50.663908 1901 server.go:205] "--pod-infra-container-image will not be pruned by the image garbage collector in kubelet and should also be set in the remote runtime" Feb 13 19:58:50.967556 kubelet[1901]: I0213 19:58:50.967431 1901 server.go:484] "Kubelet version" kubeletVersion="v1.30.1" Feb 13 19:58:50.967556 kubelet[1901]: I0213 19:58:50.967463 1901 server.go:486] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK="" Feb 13 19:58:50.967686 kubelet[1901]: I0213 19:58:50.967671 1901 server.go:927] "Client rotation is on, will bootstrap in background" Feb 13 19:58:50.985483 kubelet[1901]: I0213 19:58:50.985412 1901 dynamic_cafile_content.go:157] "Starting controller" name="client-ca-bundle::/etc/kubernetes/pki/ca.crt" Feb 13 19:58:50.997071 kubelet[1901]: I0213 19:58:50.997023 1901 server.go:742] "--cgroups-per-qos enabled, but --cgroup-root was not specified. 
defaulting to /" Feb 13 19:58:50.998915 kubelet[1901]: I0213 19:58:50.998872 1901 container_manager_linux.go:265] "Container manager verified user specified cgroup-root exists" cgroupRoot=[] Feb 13 19:58:50.999072 kubelet[1901]: I0213 19:58:50.998905 1901 container_manager_linux.go:270] "Creating Container Manager object based on Node Config" nodeConfig={"NodeName":"10.0.0.102","RuntimeCgroupsName":"","SystemCgroupsName":"","KubeletCgroupsName":"","KubeletOOMScoreAdj":-999,"ContainerRuntime":"","CgroupsPerQOS":true,"CgroupRoot":"/","CgroupDriver":"cgroupfs","KubeletRootDir":"/var/lib/kubelet","ProtectKernelDefaults":false,"KubeReservedCgroupName":"","SystemReservedCgroupName":"","ReservedSystemCPUs":{},"EnforceNodeAllocatable":{"pods":{}},"KubeReserved":null,"SystemReserved":null,"HardEvictionThresholds":[{"Signal":"memory.available","Operator":"LessThan","Value":{"Quantity":"100Mi","Percentage":0},"GracePeriod":0,"MinReclaim":null},{"Signal":"nodefs.available","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.1},"GracePeriod":0,"MinReclaim":null},{"Signal":"nodefs.inodesFree","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.05},"GracePeriod":0,"MinReclaim":null},{"Signal":"imagefs.available","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.15},"GracePeriod":0,"MinReclaim":null},{"Signal":"imagefs.inodesFree","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.05},"GracePeriod":0,"MinReclaim":null}],"QOSReserved":{},"CPUManagerPolicy":"none","CPUManagerPolicyOptions":null,"TopologyManagerScope":"container","CPUManagerReconcilePeriod":10000000000,"ExperimentalMemoryManagerPolicy":"None","ExperimentalMemoryManagerReservedMemory":null,"PodPidsLimit":-1,"EnforceCPULimits":true,"CPUCFSQuotaPeriod":100000000,"TopologyManagerPolicy":"none","TopologyManagerPolicyOptions":null} Feb 13 19:58:50.999461 kubelet[1901]: I0213 19:58:50.999440 1901 topology_manager.go:138] "Creating topology manager with none policy" Feb 13 19:58:50.999461 kubelet[1901]: I0213 19:58:50.999455 1901 container_manager_linux.go:301] "Creating device plugin manager" Feb 13 19:58:50.999622 kubelet[1901]: I0213 19:58:50.999598 1901 state_mem.go:36] "Initialized new in-memory state store" Feb 13 19:58:51.000223 kubelet[1901]: I0213 19:58:51.000193 1901 kubelet.go:400] "Attempting to sync node with API server" Feb 13 19:58:51.000223 kubelet[1901]: I0213 19:58:51.000210 1901 kubelet.go:301] "Adding static pod path" path="/etc/kubernetes/manifests" Feb 13 19:58:51.000276 kubelet[1901]: I0213 19:58:51.000230 1901 kubelet.go:312] "Adding apiserver pod source" Feb 13 19:58:51.000276 kubelet[1901]: I0213 19:58:51.000249 1901 apiserver.go:42] "Waiting for node sync before watching apiserver pods" Feb 13 19:58:51.000373 kubelet[1901]: E0213 19:58:51.000346 1901 file.go:98] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Feb 13 19:58:51.000393 kubelet[1901]: E0213 19:58:51.000384 1901 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Feb 13 19:58:51.004297 kubelet[1901]: I0213 19:58:51.004186 1901 kuberuntime_manager.go:261] "Container runtime initialized" containerRuntime="containerd" version="v1.7.21" apiVersion="v1" Feb 13 19:58:51.005078 kubelet[1901]: W0213 19:58:51.005050 1901 reflector.go:547] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Node: nodes "10.0.0.102" is forbidden: User "system:anonymous" cannot list resource "nodes" in API group "" at the 
cluster scope Feb 13 19:58:51.005130 kubelet[1901]: E0213 19:58:51.005085 1901 reflector.go:150] k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Node: failed to list *v1.Node: nodes "10.0.0.102" is forbidden: User "system:anonymous" cannot list resource "nodes" in API group "" at the cluster scope Feb 13 19:58:51.005130 kubelet[1901]: W0213 19:58:51.005050 1901 reflector.go:547] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Service: services is forbidden: User "system:anonymous" cannot list resource "services" in API group "" at the cluster scope Feb 13 19:58:51.005130 kubelet[1901]: E0213 19:58:51.005122 1901 reflector.go:150] k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Service: failed to list *v1.Service: services is forbidden: User "system:anonymous" cannot list resource "services" in API group "" at the cluster scope Feb 13 19:58:51.005666 kubelet[1901]: I0213 19:58:51.005633 1901 kubelet.go:815] "Not starting ClusterTrustBundle informer because we are in static kubelet mode" Feb 13 19:58:51.005709 kubelet[1901]: W0213 19:58:51.005698 1901 probe.go:272] Flexvolume plugin directory at /opt/libexec/kubernetes/kubelet-plugins/volume/exec/ does not exist. Recreating. Feb 13 19:58:51.006466 kubelet[1901]: I0213 19:58:51.006437 1901 server.go:1264] "Started kubelet" Feb 13 19:58:51.007398 kubelet[1901]: I0213 19:58:51.006605 1901 ratelimit.go:55] "Setting rate limiting for endpoint" service="podresources" qps=100 burstTokens=10 Feb 13 19:58:51.007398 kubelet[1901]: I0213 19:58:51.007003 1901 server.go:227] "Starting to serve the podresources API" endpoint="unix:/var/lib/kubelet/pod-resources/kubelet.sock" Feb 13 19:58:51.007398 kubelet[1901]: I0213 19:58:51.007041 1901 server.go:163] "Starting to listen" address="0.0.0.0" port=10250 Feb 13 19:58:51.008371 kubelet[1901]: I0213 19:58:51.007821 1901 fs_resource_analyzer.go:67] "Starting FS ResourceAnalyzer" Feb 13 19:58:51.008371 kubelet[1901]: I0213 19:58:51.008078 1901 server.go:455] "Adding debug handlers to kubelet server" Feb 13 19:58:51.011142 kubelet[1901]: I0213 19:58:51.010438 1901 volume_manager.go:291] "Starting Kubelet Volume Manager" Feb 13 19:58:51.011142 kubelet[1901]: I0213 19:58:51.010552 1901 desired_state_of_world_populator.go:149] "Desired state populator starts to run" Feb 13 19:58:51.011142 kubelet[1901]: I0213 19:58:51.010620 1901 reconciler.go:26] "Reconciler: start to sync state" Feb 13 19:58:51.011766 kubelet[1901]: E0213 19:58:51.011726 1901 kubelet.go:1467] "Image garbage collection failed once. 
Stats initialization may not have completed yet" err="invalid capacity 0 on image filesystem" Feb 13 19:58:51.011925 kubelet[1901]: I0213 19:58:51.011854 1901 factory.go:221] Registration of the systemd container factory successfully Feb 13 19:58:51.012067 kubelet[1901]: I0213 19:58:51.012034 1901 factory.go:219] Registration of the crio container factory failed: Get "http://%2Fvar%2Frun%2Fcrio%2Fcrio.sock/info": dial unix /var/run/crio/crio.sock: connect: no such file or directory Feb 13 19:58:51.013282 kubelet[1901]: I0213 19:58:51.013257 1901 factory.go:221] Registration of the containerd container factory successfully Feb 13 19:58:51.020270 kubelet[1901]: E0213 19:58:51.019362 1901 event.go:359] "Server rejected event (will not retry!)" err="events is forbidden: User \"system:anonymous\" cannot create resource \"events\" in API group \"\" in the namespace \"default\"" event="&Event{ObjectMeta:{10.0.0.102.1823dcdee6b1f495 default 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Node,Namespace:,Name:10.0.0.102,UID:10.0.0.102,APIVersion:,ResourceVersion:,FieldPath:,},Reason:Starting,Message:Starting kubelet.,Source:EventSource{Component:kubelet,Host:10.0.0.102,},FirstTimestamp:2025-02-13 19:58:51.006416021 +0000 UTC m=+0.379252079,LastTimestamp:2025-02-13 19:58:51.006416021 +0000 UTC m=+0.379252079,Count:1,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:10.0.0.102,}" Feb 13 19:58:51.020270 kubelet[1901]: E0213 19:58:51.019913 1901 controller.go:145] "Failed to ensure lease exists, will retry" err="leases.coordination.k8s.io \"10.0.0.102\" is forbidden: User \"system:anonymous\" cannot get resource \"leases\" in API group \"coordination.k8s.io\" in the namespace \"kube-node-lease\"" interval="200ms" Feb 13 19:58:51.020270 kubelet[1901]: W0213 19:58:51.019990 1901 reflector.go:547] k8s.io/client-go/informers/factory.go:160: failed to list *v1.CSIDriver: csidrivers.storage.k8s.io is forbidden: User "system:anonymous" cannot list resource "csidrivers" in API group "storage.k8s.io" at the cluster scope Feb 13 19:58:51.020270 kubelet[1901]: E0213 19:58:51.020022 1901 reflector.go:150] k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.CSIDriver: failed to list *v1.CSIDriver: csidrivers.storage.k8s.io is forbidden: User "system:anonymous" cannot list resource "csidrivers" in API group "storage.k8s.io" at the cluster scope Feb 13 19:58:51.023330 kubelet[1901]: E0213 19:58:51.023214 1901 event.go:359] "Server rejected event (will not retry!)" err="events is forbidden: User \"system:anonymous\" cannot create resource \"events\" in API group \"\" in the namespace \"default\"" event="&Event{ObjectMeta:{10.0.0.102.1823dcdee702cf28 default 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Node,Namespace:,Name:10.0.0.102,UID:10.0.0.102,APIVersion:,ResourceVersion:,FieldPath:,},Reason:InvalidDiskCapacity,Message:invalid capacity 0 on image filesystem,Source:EventSource{Component:kubelet,Host:10.0.0.102,},FirstTimestamp:2025-02-13 19:58:51.011714856 +0000 UTC m=+0.384550914,LastTimestamp:2025-02-13 19:58:51.011714856 +0000 UTC m=+0.384550914,Count:1,Type:Warning,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:10.0.0.102,}" Feb 13 19:58:51.035163 kubelet[1901]: I0213 19:58:51.035124 1901 cpu_manager.go:214] "Starting CPU manager" policy="none" Feb 13 19:58:51.035163 
kubelet[1901]: I0213 19:58:51.035143 1901 cpu_manager.go:215] "Reconciling" reconcilePeriod="10s" Feb 13 19:58:51.035163 kubelet[1901]: I0213 19:58:51.035168 1901 state_mem.go:36] "Initialized new in-memory state store" Feb 13 19:58:51.038096 kubelet[1901]: E0213 19:58:51.037995 1901 event.go:359] "Server rejected event (will not retry!)" err="events is forbidden: User \"system:anonymous\" cannot create resource \"events\" in API group \"\" in the namespace \"default\"" event="&Event{ObjectMeta:{10.0.0.102.1823dcdee85d25c4 default 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Node,Namespace:,Name:10.0.0.102,UID:10.0.0.102,APIVersion:,ResourceVersion:,FieldPath:,},Reason:NodeHasSufficientMemory,Message:Node 10.0.0.102 status is now: NodeHasSufficientMemory,Source:EventSource{Component:kubelet,Host:10.0.0.102,},FirstTimestamp:2025-02-13 19:58:51.034412484 +0000 UTC m=+0.407248542,LastTimestamp:2025-02-13 19:58:51.034412484 +0000 UTC m=+0.407248542,Count:1,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:10.0.0.102,}" Feb 13 19:58:51.041343 kubelet[1901]: E0213 19:58:51.041261 1901 event.go:359] "Server rejected event (will not retry!)" err="events is forbidden: User \"system:anonymous\" cannot create resource \"events\" in API group \"\" in the namespace \"default\"" event="&Event{ObjectMeta:{10.0.0.102.1823dcdee85d4a1f default 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Node,Namespace:,Name:10.0.0.102,UID:10.0.0.102,APIVersion:,ResourceVersion:,FieldPath:,},Reason:NodeHasNoDiskPressure,Message:Node 10.0.0.102 status is now: NodeHasNoDiskPressure,Source:EventSource{Component:kubelet,Host:10.0.0.102,},FirstTimestamp:2025-02-13 19:58:51.034421791 +0000 UTC m=+0.407257849,LastTimestamp:2025-02-13 19:58:51.034421791 +0000 UTC m=+0.407257849,Count:1,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:10.0.0.102,}" Feb 13 19:58:51.044588 kubelet[1901]: E0213 19:58:51.044504 1901 event.go:359] "Server rejected event (will not retry!)" err="events is forbidden: User \"system:anonymous\" cannot create resource \"events\" in API group \"\" in the namespace \"default\"" event="&Event{ObjectMeta:{10.0.0.102.1823dcdee85d5804 default 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Node,Namespace:,Name:10.0.0.102,UID:10.0.0.102,APIVersion:,ResourceVersion:,FieldPath:,},Reason:NodeHasSufficientPID,Message:Node 10.0.0.102 status is now: NodeHasSufficientPID,Source:EventSource{Component:kubelet,Host:10.0.0.102,},FirstTimestamp:2025-02-13 19:58:51.034425348 +0000 UTC m=+0.407261406,LastTimestamp:2025-02-13 19:58:51.034425348 +0000 UTC m=+0.407261406,Count:1,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:10.0.0.102,}" Feb 13 19:58:51.078518 kubelet[1901]: I0213 19:58:51.078451 1901 kubelet_network_linux.go:50] "Initialized iptables rules." protocol="IPv4" Feb 13 19:58:51.079772 kubelet[1901]: I0213 19:58:51.079735 1901 kubelet_network_linux.go:50] "Initialized iptables rules." 
protocol="IPv6" Feb 13 19:58:51.079824 kubelet[1901]: I0213 19:58:51.079792 1901 status_manager.go:217] "Starting to sync pod status with apiserver" Feb 13 19:58:51.079824 kubelet[1901]: I0213 19:58:51.079814 1901 kubelet.go:2337] "Starting kubelet main sync loop" Feb 13 19:58:51.080167 kubelet[1901]: E0213 19:58:51.079868 1901 kubelet.go:2361] "Skipping pod synchronization" err="[container runtime status check may not have completed yet, PLEG is not healthy: pleg has yet to be successful]" Feb 13 19:58:51.084113 kubelet[1901]: W0213 19:58:51.084088 1901 reflector.go:547] k8s.io/client-go/informers/factory.go:160: failed to list *v1.RuntimeClass: runtimeclasses.node.k8s.io is forbidden: User "system:anonymous" cannot list resource "runtimeclasses" in API group "node.k8s.io" at the cluster scope Feb 13 19:58:51.084113 kubelet[1901]: E0213 19:58:51.084116 1901 reflector.go:150] k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.RuntimeClass: failed to list *v1.RuntimeClass: runtimeclasses.node.k8s.io is forbidden: User "system:anonymous" cannot list resource "runtimeclasses" in API group "node.k8s.io" at the cluster scope Feb 13 19:58:51.111666 kubelet[1901]: I0213 19:58:51.111629 1901 kubelet_node_status.go:73] "Attempting to register node" node="10.0.0.102" Feb 13 19:58:51.115057 kubelet[1901]: E0213 19:58:51.114973 1901 event.go:359] "Server rejected event (will not retry!)" err="events \"10.0.0.102.1823dcdee85d25c4\" is forbidden: User \"system:anonymous\" cannot patch resource \"events\" in API group \"\" in the namespace \"default\"" event="&Event{ObjectMeta:{10.0.0.102.1823dcdee85d25c4 default 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Node,Namespace:,Name:10.0.0.102,UID:10.0.0.102,APIVersion:,ResourceVersion:,FieldPath:,},Reason:NodeHasSufficientMemory,Message:Node 10.0.0.102 status is now: NodeHasSufficientMemory,Source:EventSource{Component:kubelet,Host:10.0.0.102,},FirstTimestamp:2025-02-13 19:58:51.034412484 +0000 UTC m=+0.407248542,LastTimestamp:2025-02-13 19:58:51.111601443 +0000 UTC m=+0.484437501,Count:2,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:10.0.0.102,}" Feb 13 19:58:51.115161 kubelet[1901]: E0213 19:58:51.115118 1901 kubelet_node_status.go:96] "Unable to register node with API server" err="nodes is forbidden: User \"system:anonymous\" cannot create resource \"nodes\" in API group \"\" at the cluster scope" node="10.0.0.102" Feb 13 19:58:51.116579 kubelet[1901]: E0213 19:58:51.116520 1901 event.go:359] "Server rejected event (will not retry!)" err="events \"10.0.0.102.1823dcdee85d4a1f\" is forbidden: User \"system:anonymous\" cannot patch resource \"events\" in API group \"\" in the namespace \"default\"" event="&Event{ObjectMeta:{10.0.0.102.1823dcdee85d4a1f default 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Node,Namespace:,Name:10.0.0.102,UID:10.0.0.102,APIVersion:,ResourceVersion:,FieldPath:,},Reason:NodeHasNoDiskPressure,Message:Node 10.0.0.102 status is now: NodeHasNoDiskPressure,Source:EventSource{Component:kubelet,Host:10.0.0.102,},FirstTimestamp:2025-02-13 19:58:51.034421791 +0000 UTC m=+0.407257849,LastTimestamp:2025-02-13 19:58:51.111606873 +0000 UTC m=+0.484442931,Count:2,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:10.0.0.102,}" Feb 13 19:58:51.118020 kubelet[1901]: E0213 19:58:51.117960 
1901 event.go:359] "Server rejected event (will not retry!)" err="events \"10.0.0.102.1823dcdee85d5804\" is forbidden: User \"system:anonymous\" cannot patch resource \"events\" in API group \"\" in the namespace \"default\"" event="&Event{ObjectMeta:{10.0.0.102.1823dcdee85d5804 default 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Node,Namespace:,Name:10.0.0.102,UID:10.0.0.102,APIVersion:,ResourceVersion:,FieldPath:,},Reason:NodeHasSufficientPID,Message:Node 10.0.0.102 status is now: NodeHasSufficientPID,Source:EventSource{Component:kubelet,Host:10.0.0.102,},FirstTimestamp:2025-02-13 19:58:51.034425348 +0000 UTC m=+0.407261406,LastTimestamp:2025-02-13 19:58:51.111609879 +0000 UTC m=+0.484445937,Count:2,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:10.0.0.102,}" Feb 13 19:58:51.180101 kubelet[1901]: E0213 19:58:51.180039 1901 kubelet.go:2361] "Skipping pod synchronization" err="container runtime status check may not have completed yet" Feb 13 19:58:51.222159 kubelet[1901]: E0213 19:58:51.221996 1901 controller.go:145] "Failed to ensure lease exists, will retry" err="leases.coordination.k8s.io \"10.0.0.102\" is forbidden: User \"system:anonymous\" cannot get resource \"leases\" in API group \"coordination.k8s.io\" in the namespace \"kube-node-lease\"" interval="400ms" Feb 13 19:58:51.316170 kubelet[1901]: I0213 19:58:51.316136 1901 kubelet_node_status.go:73] "Attempting to register node" node="10.0.0.102" Feb 13 19:58:51.320941 kubelet[1901]: E0213 19:58:51.320902 1901 kubelet_node_status.go:96] "Unable to register node with API server" err="nodes is forbidden: User \"system:anonymous\" cannot create resource \"nodes\" in API group \"\" at the cluster scope" node="10.0.0.102" Feb 13 19:58:51.321456 kubelet[1901]: E0213 19:58:51.321070 1901 event.go:359] "Server rejected event (will not retry!)" err="events \"10.0.0.102.1823dcdee85d25c4\" is forbidden: User \"system:anonymous\" cannot patch resource \"events\" in API group \"\" in the namespace \"default\"" event="&Event{ObjectMeta:{10.0.0.102.1823dcdee85d25c4 default 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Node,Namespace:,Name:10.0.0.102,UID:10.0.0.102,APIVersion:,ResourceVersion:,FieldPath:,},Reason:NodeHasSufficientMemory,Message:Node 10.0.0.102 status is now: NodeHasSufficientMemory,Source:EventSource{Component:kubelet,Host:10.0.0.102,},FirstTimestamp:2025-02-13 19:58:51.034412484 +0000 UTC m=+0.407248542,LastTimestamp:2025-02-13 19:58:51.316096742 +0000 UTC m=+0.688932800,Count:3,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:10.0.0.102,}" Feb 13 19:58:51.325344 kubelet[1901]: E0213 19:58:51.325142 1901 event.go:359] "Server rejected event (will not retry!)" err="events \"10.0.0.102.1823dcdee85d4a1f\" is forbidden: User \"system:anonymous\" cannot patch resource \"events\" in API group \"\" in the namespace \"default\"" event="&Event{ObjectMeta:{10.0.0.102.1823dcdee85d4a1f default 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Node,Namespace:,Name:10.0.0.102,UID:10.0.0.102,APIVersion:,ResourceVersion:,FieldPath:,},Reason:NodeHasNoDiskPressure,Message:Node 10.0.0.102 status is now: NodeHasNoDiskPressure,Source:EventSource{Component:kubelet,Host:10.0.0.102,},FirstTimestamp:2025-02-13 19:58:51.034421791 +0000 UTC 
m=+0.407257849,LastTimestamp:2025-02-13 19:58:51.316103675 +0000 UTC m=+0.688939723,Count:3,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:10.0.0.102,}" Feb 13 19:58:51.328736 kubelet[1901]: E0213 19:58:51.328660 1901 event.go:359] "Server rejected event (will not retry!)" err="events \"10.0.0.102.1823dcdee85d5804\" is forbidden: User \"system:anonymous\" cannot patch resource \"events\" in API group \"\" in the namespace \"default\"" event="&Event{ObjectMeta:{10.0.0.102.1823dcdee85d5804 default 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Node,Namespace:,Name:10.0.0.102,UID:10.0.0.102,APIVersion:,ResourceVersion:,FieldPath:,},Reason:NodeHasSufficientPID,Message:Node 10.0.0.102 status is now: NodeHasSufficientPID,Source:EventSource{Component:kubelet,Host:10.0.0.102,},FirstTimestamp:2025-02-13 19:58:51.034425348 +0000 UTC m=+0.407261406,LastTimestamp:2025-02-13 19:58:51.316106169 +0000 UTC m=+0.688942228,Count:3,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:10.0.0.102,}" Feb 13 19:58:51.380945 kubelet[1901]: E0213 19:58:51.380869 1901 kubelet.go:2361] "Skipping pod synchronization" err="container runtime status check may not have completed yet" Feb 13 19:58:51.626817 kubelet[1901]: E0213 19:58:51.626605 1901 controller.go:145] "Failed to ensure lease exists, will retry" err="leases.coordination.k8s.io \"10.0.0.102\" is forbidden: User \"system:anonymous\" cannot get resource \"leases\" in API group \"coordination.k8s.io\" in the namespace \"kube-node-lease\"" interval="800ms" Feb 13 19:58:51.722308 kubelet[1901]: I0213 19:58:51.722234 1901 kubelet_node_status.go:73] "Attempting to register node" node="10.0.0.102" Feb 13 19:58:51.781070 kubelet[1901]: E0213 19:58:51.780991 1901 kubelet.go:2361] "Skipping pod synchronization" err="container runtime status check may not have completed yet" Feb 13 19:58:51.970733 kubelet[1901]: I0213 19:58:51.970571 1901 transport.go:147] "Certificate rotation detected, shutting down client connections to start using new credentials" Feb 13 19:58:51.970897 kubelet[1901]: W0213 19:58:51.970852 1901 reflector.go:470] k8s.io/client-go/informers/factory.go:160: watch of *v1.CSIDriver ended with: very short watch: k8s.io/client-go/informers/factory.go:160: Unexpected watch close - watch lasted less than a second and no items received Feb 13 19:58:51.970972 kubelet[1901]: E0213 19:58:51.970846 1901 event.go:368] "Unable to write event (may retry after sleeping)" err="Post \"https://10.0.0.91:6443/api/v1/namespaces/default/events\": read tcp 10.0.0.102:60188->10.0.0.91:6443: use of closed network connection" event="&Event{ObjectMeta:{10.0.0.102.1823dcdee85d25c4 default 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Node,Namespace:,Name:10.0.0.102,UID:10.0.0.102,APIVersion:,ResourceVersion:,FieldPath:,},Reason:NodeHasSufficientMemory,Message:Node 10.0.0.102 status is now: NodeHasSufficientMemory,Source:EventSource{Component:kubelet,Host:10.0.0.102,},FirstTimestamp:2025-02-13 19:58:51.034412484 +0000 UTC m=+0.407248542,LastTimestamp:2025-02-13 19:58:51.722173756 +0000 UTC m=+1.095009824,Count:4,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:10.0.0.102,}" Feb 13 19:58:52.000997 kubelet[1901]: E0213 19:58:52.000955 1901 file_linux.go:61] 
"Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Feb 13 19:58:52.574066 kubelet[1901]: I0213 19:58:52.574001 1901 kubelet_node_status.go:76] "Successfully registered node" node="10.0.0.102" Feb 13 19:58:52.575064 kubelet[1901]: I0213 19:58:52.575010 1901 kuberuntime_manager.go:1523] "Updating runtime config through cri with podcidr" CIDR="192.168.1.0/24" Feb 13 19:58:52.575635 containerd[1561]: time="2025-02-13T19:58:52.575584731Z" level=info msg="No cni config template is specified, wait for other system components to drop the config." Feb 13 19:58:52.576225 kubelet[1901]: I0213 19:58:52.575859 1901 kubelet_network.go:61] "Updating Pod CIDR" originalPodCIDR="" newPodCIDR="192.168.1.0/24" Feb 13 19:58:52.579776 kubelet[1901]: I0213 19:58:52.579727 1901 policy_none.go:49] "None policy: Start" Feb 13 19:58:52.580481 kubelet[1901]: I0213 19:58:52.580455 1901 memory_manager.go:170] "Starting memorymanager" policy="None" Feb 13 19:58:52.580542 kubelet[1901]: I0213 19:58:52.580510 1901 state_mem.go:35] "Initializing new in-memory state store" Feb 13 19:58:52.581636 kubelet[1901]: E0213 19:58:52.581612 1901 kubelet.go:2361] "Skipping pod synchronization" err="container runtime status check may not have completed yet" Feb 13 19:58:52.663785 kubelet[1901]: I0213 19:58:52.662943 1901 manager.go:479] "Failed to read data from checkpoint" checkpoint="kubelet_internal_checkpoint" err="checkpoint is not found" Feb 13 19:58:52.663785 kubelet[1901]: I0213 19:58:52.663204 1901 container_log_manager.go:186] "Initializing container log rotate workers" workers=1 monitorPeriod="10s" Feb 13 19:58:52.663785 kubelet[1901]: I0213 19:58:52.663366 1901 plugin_manager.go:118] "Starting Kubelet Plugin Manager" Feb 13 19:58:52.670800 kubelet[1901]: E0213 19:58:52.670771 1901 cadvisor_stats_provider.go:500] "Partial failure issuing cadvisor.ContainerInfoV2" err="partial failures: [\"/kubepods/besteffort\": RecentStats: unable to find data in memory cache]" Feb 13 19:58:52.757479 sudo[1750]: pam_unix(sudo:session): session closed for user root Feb 13 19:58:52.760041 sshd[1743]: pam_unix(sshd:session): session closed for user core Feb 13 19:58:52.765065 systemd[1]: sshd@6-10.0.0.102:22-10.0.0.1:34424.service: Deactivated successfully. Feb 13 19:58:52.767671 systemd-logind[1538]: Session 7 logged out. Waiting for processes to exit. Feb 13 19:58:52.767806 systemd[1]: session-7.scope: Deactivated successfully. Feb 13 19:58:52.769184 systemd-logind[1538]: Removed session 7. 
Feb 13 19:58:53.002061 kubelet[1901]: I0213 19:58:53.001871 1901 apiserver.go:52] "Watching apiserver" Feb 13 19:58:53.002061 kubelet[1901]: E0213 19:58:53.001894 1901 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Feb 13 19:58:54.002901 kubelet[1901]: E0213 19:58:54.002853 1901 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Feb 13 19:58:54.182826 kubelet[1901]: I0213 19:58:54.182730 1901 topology_manager.go:215] "Topology Admit Handler" podUID="52cf5fc6-a76a-426e-b244-4d1b397f40fe" podNamespace="calico-system" podName="calico-node-jvhrh" Feb 13 19:58:54.182989 kubelet[1901]: I0213 19:58:54.182928 1901 topology_manager.go:215] "Topology Admit Handler" podUID="c7cccdaa-623e-4b78-b7f9-71b591d49e20" podNamespace="calico-system" podName="csi-node-driver-2d48g" Feb 13 19:58:54.183025 kubelet[1901]: I0213 19:58:54.183005 1901 topology_manager.go:215] "Topology Admit Handler" podUID="9511cf27-bc69-4aaf-9a69-d3bc7ca1360a" podNamespace="kube-system" podName="kube-proxy-rzchk" Feb 13 19:58:54.183441 kubelet[1901]: E0213 19:58:54.183373 1901 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" pod="calico-system/csi-node-driver-2d48g" podUID="c7cccdaa-623e-4b78-b7f9-71b591d49e20" Feb 13 19:58:54.211475 kubelet[1901]: I0213 19:58:54.211424 1901 desired_state_of_world_populator.go:157] "Finished populating initial desired state of world" Feb 13 19:58:54.283500 kubelet[1901]: I0213 19:58:54.283288 1901 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"registration-dir\" (UniqueName: \"kubernetes.io/host-path/c7cccdaa-623e-4b78-b7f9-71b591d49e20-registration-dir\") pod \"csi-node-driver-2d48g\" (UID: \"c7cccdaa-623e-4b78-b7f9-71b591d49e20\") " pod="calico-system/csi-node-driver-2d48g" Feb 13 19:58:54.283500 kubelet[1901]: I0213 19:58:54.283343 1901 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cni-net-dir\" (UniqueName: \"kubernetes.io/host-path/52cf5fc6-a76a-426e-b244-4d1b397f40fe-cni-net-dir\") pod \"calico-node-jvhrh\" (UID: \"52cf5fc6-a76a-426e-b244-4d1b397f40fe\") " pod="calico-system/calico-node-jvhrh" Feb 13 19:58:54.283500 kubelet[1901]: I0213 19:58:54.283362 1901 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cni-log-dir\" (UniqueName: \"kubernetes.io/host-path/52cf5fc6-a76a-426e-b244-4d1b397f40fe-cni-log-dir\") pod \"calico-node-jvhrh\" (UID: \"52cf5fc6-a76a-426e-b244-4d1b397f40fe\") " pod="calico-system/calico-node-jvhrh" Feb 13 19:58:54.283500 kubelet[1901]: I0213 19:58:54.283381 1901 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"flexvol-driver-host\" (UniqueName: \"kubernetes.io/host-path/52cf5fc6-a76a-426e-b244-4d1b397f40fe-flexvol-driver-host\") pod \"calico-node-jvhrh\" (UID: \"52cf5fc6-a76a-426e-b244-4d1b397f40fe\") " pod="calico-system/calico-node-jvhrh" Feb 13 19:58:54.283500 kubelet[1901]: I0213 19:58:54.283491 1901 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-wkpdr\" (UniqueName: \"kubernetes.io/projected/52cf5fc6-a76a-426e-b244-4d1b397f40fe-kube-api-access-wkpdr\") pod \"calico-node-jvhrh\" (UID: 
\"52cf5fc6-a76a-426e-b244-4d1b397f40fe\") " pod="calico-system/calico-node-jvhrh" Feb 13 19:58:54.283731 kubelet[1901]: I0213 19:58:54.283511 1901 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/9511cf27-bc69-4aaf-9a69-d3bc7ca1360a-lib-modules\") pod \"kube-proxy-rzchk\" (UID: \"9511cf27-bc69-4aaf-9a69-d3bc7ca1360a\") " pod="kube-system/kube-proxy-rzchk" Feb 13 19:58:54.283731 kubelet[1901]: I0213 19:58:54.283531 1901 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"xtables-lock\" (UniqueName: \"kubernetes.io/host-path/52cf5fc6-a76a-426e-b244-4d1b397f40fe-xtables-lock\") pod \"calico-node-jvhrh\" (UID: \"52cf5fc6-a76a-426e-b244-4d1b397f40fe\") " pod="calico-system/calico-node-jvhrh" Feb 13 19:58:54.283731 kubelet[1901]: I0213 19:58:54.283553 1901 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"policysync\" (UniqueName: \"kubernetes.io/host-path/52cf5fc6-a76a-426e-b244-4d1b397f40fe-policysync\") pod \"calico-node-jvhrh\" (UID: \"52cf5fc6-a76a-426e-b244-4d1b397f40fe\") " pod="calico-system/calico-node-jvhrh" Feb 13 19:58:54.283731 kubelet[1901]: I0213 19:58:54.283573 1901 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cni-bin-dir\" (UniqueName: \"kubernetes.io/host-path/52cf5fc6-a76a-426e-b244-4d1b397f40fe-cni-bin-dir\") pod \"calico-node-jvhrh\" (UID: \"52cf5fc6-a76a-426e-b244-4d1b397f40fe\") " pod="calico-system/calico-node-jvhrh" Feb 13 19:58:54.283731 kubelet[1901]: I0213 19:58:54.283587 1901 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-proxy\" (UniqueName: \"kubernetes.io/configmap/9511cf27-bc69-4aaf-9a69-d3bc7ca1360a-kube-proxy\") pod \"kube-proxy-rzchk\" (UID: \"9511cf27-bc69-4aaf-9a69-d3bc7ca1360a\") " pod="kube-system/kube-proxy-rzchk" Feb 13 19:58:54.283864 kubelet[1901]: I0213 19:58:54.283603 1901 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kubelet-dir\" (UniqueName: \"kubernetes.io/host-path/c7cccdaa-623e-4b78-b7f9-71b591d49e20-kubelet-dir\") pod \"csi-node-driver-2d48g\" (UID: \"c7cccdaa-623e-4b78-b7f9-71b591d49e20\") " pod="calico-system/csi-node-driver-2d48g" Feb 13 19:58:54.283864 kubelet[1901]: I0213 19:58:54.283619 1901 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"xtables-lock\" (UniqueName: \"kubernetes.io/host-path/9511cf27-bc69-4aaf-9a69-d3bc7ca1360a-xtables-lock\") pod \"kube-proxy-rzchk\" (UID: \"9511cf27-bc69-4aaf-9a69-d3bc7ca1360a\") " pod="kube-system/kube-proxy-rzchk" Feb 13 19:58:54.283864 kubelet[1901]: I0213 19:58:54.283692 1901 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/52cf5fc6-a76a-426e-b244-4d1b397f40fe-lib-modules\") pod \"calico-node-jvhrh\" (UID: \"52cf5fc6-a76a-426e-b244-4d1b397f40fe\") " pod="calico-system/calico-node-jvhrh" Feb 13 19:58:54.283864 kubelet[1901]: I0213 19:58:54.283781 1901 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"tigera-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/52cf5fc6-a76a-426e-b244-4d1b397f40fe-tigera-ca-bundle\") pod \"calico-node-jvhrh\" (UID: \"52cf5fc6-a76a-426e-b244-4d1b397f40fe\") " pod="calico-system/calico-node-jvhrh" Feb 13 19:58:54.283864 kubelet[1901]: 
I0213 19:58:54.283821 1901 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"node-certs\" (UniqueName: \"kubernetes.io/secret/52cf5fc6-a76a-426e-b244-4d1b397f40fe-node-certs\") pod \"calico-node-jvhrh\" (UID: \"52cf5fc6-a76a-426e-b244-4d1b397f40fe\") " pod="calico-system/calico-node-jvhrh" Feb 13 19:58:54.284134 kubelet[1901]: I0213 19:58:54.283844 1901 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"var-run-calico\" (UniqueName: \"kubernetes.io/host-path/52cf5fc6-a76a-426e-b244-4d1b397f40fe-var-run-calico\") pod \"calico-node-jvhrh\" (UID: \"52cf5fc6-a76a-426e-b244-4d1b397f40fe\") " pod="calico-system/calico-node-jvhrh" Feb 13 19:58:54.284134 kubelet[1901]: I0213 19:58:54.283886 1901 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-2d6gh\" (UniqueName: \"kubernetes.io/projected/9511cf27-bc69-4aaf-9a69-d3bc7ca1360a-kube-api-access-2d6gh\") pod \"kube-proxy-rzchk\" (UID: \"9511cf27-bc69-4aaf-9a69-d3bc7ca1360a\") " pod="kube-system/kube-proxy-rzchk" Feb 13 19:58:54.284134 kubelet[1901]: I0213 19:58:54.283920 1901 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"var-lib-calico\" (UniqueName: \"kubernetes.io/host-path/52cf5fc6-a76a-426e-b244-4d1b397f40fe-var-lib-calico\") pod \"calico-node-jvhrh\" (UID: \"52cf5fc6-a76a-426e-b244-4d1b397f40fe\") " pod="calico-system/calico-node-jvhrh" Feb 13 19:58:54.284134 kubelet[1901]: I0213 19:58:54.283961 1901 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"varrun\" (UniqueName: \"kubernetes.io/host-path/c7cccdaa-623e-4b78-b7f9-71b591d49e20-varrun\") pod \"csi-node-driver-2d48g\" (UID: \"c7cccdaa-623e-4b78-b7f9-71b591d49e20\") " pod="calico-system/csi-node-driver-2d48g" Feb 13 19:58:54.284134 kubelet[1901]: I0213 19:58:54.284016 1901 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"socket-dir\" (UniqueName: \"kubernetes.io/host-path/c7cccdaa-623e-4b78-b7f9-71b591d49e20-socket-dir\") pod \"csi-node-driver-2d48g\" (UID: \"c7cccdaa-623e-4b78-b7f9-71b591d49e20\") " pod="calico-system/csi-node-driver-2d48g" Feb 13 19:58:54.284259 kubelet[1901]: I0213 19:58:54.284055 1901 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-m2jdr\" (UniqueName: \"kubernetes.io/projected/c7cccdaa-623e-4b78-b7f9-71b591d49e20-kube-api-access-m2jdr\") pod \"csi-node-driver-2d48g\" (UID: \"c7cccdaa-623e-4b78-b7f9-71b591d49e20\") " pod="calico-system/csi-node-driver-2d48g" Feb 13 19:58:54.387022 kubelet[1901]: E0213 19:58:54.386729 1901 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Feb 13 19:58:54.387022 kubelet[1901]: W0213 19:58:54.386771 1901 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Feb 13 19:58:54.387022 kubelet[1901]: E0213 19:58:54.386813 1901 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Feb 13 19:58:54.387290 kubelet[1901]: E0213 19:58:54.387251 1901 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Feb 13 19:58:54.387290 kubelet[1901]: W0213 19:58:54.387276 1901 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Feb 13 19:58:54.387470 kubelet[1901]: E0213 19:58:54.387310 1901 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Feb 13 19:58:54.387891 kubelet[1901]: E0213 19:58:54.387873 1901 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Feb 13 19:58:54.387891 kubelet[1901]: W0213 19:58:54.387889 1901 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Feb 13 19:58:54.387947 kubelet[1901]: E0213 19:58:54.387907 1901 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Feb 13 19:58:54.388187 kubelet[1901]: E0213 19:58:54.388169 1901 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Feb 13 19:58:54.388187 kubelet[1901]: W0213 19:58:54.388185 1901 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Feb 13 19:58:54.388255 kubelet[1901]: E0213 19:58:54.388232 1901 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Feb 13 19:58:54.388521 kubelet[1901]: E0213 19:58:54.388487 1901 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Feb 13 19:58:54.388521 kubelet[1901]: W0213 19:58:54.388502 1901 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Feb 13 19:58:54.388614 kubelet[1901]: E0213 19:58:54.388565 1901 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Feb 13 19:58:54.388859 kubelet[1901]: E0213 19:58:54.388826 1901 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Feb 13 19:58:54.388859 kubelet[1901]: W0213 19:58:54.388840 1901 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Feb 13 19:58:54.389031 kubelet[1901]: E0213 19:58:54.388936 1901 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Feb 13 19:58:54.389683 kubelet[1901]: E0213 19:58:54.389662 1901 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Feb 13 19:58:54.389683 kubelet[1901]: W0213 19:58:54.389678 1901 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Feb 13 19:58:54.389765 kubelet[1901]: E0213 19:58:54.389695 1901 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Feb 13 19:58:54.389972 kubelet[1901]: E0213 19:58:54.389943 1901 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Feb 13 19:58:54.390013 kubelet[1901]: W0213 19:58:54.389962 1901 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Feb 13 19:58:54.390013 kubelet[1901]: E0213 19:58:54.389991 1901 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Feb 13 19:58:54.390225 kubelet[1901]: E0213 19:58:54.390209 1901 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Feb 13 19:58:54.390266 kubelet[1901]: W0213 19:58:54.390226 1901 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Feb 13 19:58:54.390287 kubelet[1901]: E0213 19:58:54.390268 1901 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Feb 13 19:58:54.390617 kubelet[1901]: E0213 19:58:54.390600 1901 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Feb 13 19:58:54.390617 kubelet[1901]: W0213 19:58:54.390614 1901 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Feb 13 19:58:54.390705 kubelet[1901]: E0213 19:58:54.390644 1901 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Feb 13 19:58:54.390906 kubelet[1901]: E0213 19:58:54.390891 1901 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Feb 13 19:58:54.390906 kubelet[1901]: W0213 19:58:54.390904 1901 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Feb 13 19:58:54.390960 kubelet[1901]: E0213 19:58:54.390915 1901 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Feb 13 19:58:54.427663 kubelet[1901]: E0213 19:58:54.427539 1901 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Feb 13 19:58:54.427663 kubelet[1901]: W0213 19:58:54.427565 1901 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Feb 13 19:58:54.427663 kubelet[1901]: E0213 19:58:54.427587 1901 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Feb 13 19:58:54.428314 kubelet[1901]: E0213 19:58:54.428278 1901 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Feb 13 19:58:54.428314 kubelet[1901]: W0213 19:58:54.428309 1901 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Feb 13 19:58:54.428372 kubelet[1901]: E0213 19:58:54.428337 1901 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Feb 13 19:58:54.428794 kubelet[1901]: E0213 19:58:54.428700 1901 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Feb 13 19:58:54.428794 kubelet[1901]: W0213 19:58:54.428715 1901 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Feb 13 19:58:54.428794 kubelet[1901]: E0213 19:58:54.428725 1901 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Feb 13 19:58:54.487244 kubelet[1901]: E0213 19:58:54.487185 1901 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Feb 13 19:58:54.488291 containerd[1561]: time="2025-02-13T19:58:54.488255669Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-node-jvhrh,Uid:52cf5fc6-a76a-426e-b244-4d1b397f40fe,Namespace:calico-system,Attempt:0,}" Feb 13 19:58:54.489425 kubelet[1901]: E0213 19:58:54.489359 1901 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Feb 13 19:58:54.489775 containerd[1561]: time="2025-02-13T19:58:54.489707581Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-proxy-rzchk,Uid:9511cf27-bc69-4aaf-9a69-d3bc7ca1360a,Namespace:kube-system,Attempt:0,}" Feb 13 19:58:55.003843 kubelet[1901]: E0213 19:58:55.003765 1901 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Feb 13 19:58:55.492029 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount927990271.mount: Deactivated successfully. 
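The burst of paired errors above, repeated for each probe, has one root cause: the kubelet FlexVolume prober execs /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds with the single argument init, the binary is not there yet (Calico's flexvol-driver container installs it later in this log), so the call produces empty output, and unmarshalling "" as JSON fails with "unexpected end of JSON input". A short Go reproduction of that failure chain (a sketch under those assumptions, not kubelet's driver-call code):

    package main

    import (
        "encoding/json"
        "fmt"
        "os/exec"
    )

    // probeFlexVolume reproduces the failure chain in the log: exec the driver
    // with "init"; a missing binary leaves the output empty, and json.Unmarshal
    // on empty input fails with "unexpected end of JSON input".
    func probeFlexVolume(driver string) error {
        out, err := exec.Command(driver, "init").CombinedOutput()
        if err != nil {
            fmt.Printf("driver call failed: %v, output: %q\n", err, out)
        }
        var status map[string]any
        if err := json.Unmarshal(out, &status); err != nil {
            return fmt.Errorf("failed to unmarshal output for command init: %w", err)
        }
        return nil
    }

    func main() {
        fmt.Println(probeFlexVolume("/opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds"))
    }

Once the flexvol-driver container seen below installs the uds binary, these probe failures stop.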
Feb 13 19:58:55.507245 containerd[1561]: time="2025-02-13T19:58:55.507152671Z" level=info msg="ImageCreate event name:\"registry.k8s.io/pause:3.8\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"} labels:{key:\"io.cri-containerd.pinned\" value:\"pinned\"}" Feb 13 19:58:55.509156 containerd[1561]: time="2025-02-13T19:58:55.509088831Z" level=info msg="ImageUpdate event name:\"registry.k8s.io/pause:3.8\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"} labels:{key:\"io.cri-containerd.pinned\" value:\"pinned\"}" Feb 13 19:58:55.510157 containerd[1561]: time="2025-02-13T19:58:55.510060974Z" level=info msg="stop pulling image registry.k8s.io/pause:3.8: active requests=0, bytes read=0" Feb 13 19:58:55.511045 containerd[1561]: time="2025-02-13T19:58:55.511009943Z" level=info msg="stop pulling image registry.k8s.io/pause:3.8: active requests=0, bytes read=312056" Feb 13 19:58:55.512191 containerd[1561]: time="2025-02-13T19:58:55.512146805Z" level=info msg="ImageCreate event name:\"sha256:4873874c08efc72e9729683a83ffbb7502ee729e9a5ac097723806ea7fa13517\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"} labels:{key:\"io.cri-containerd.pinned\" value:\"pinned\"}" Feb 13 19:58:55.516123 containerd[1561]: time="2025-02-13T19:58:55.516073647Z" level=info msg="ImageCreate event name:\"registry.k8s.io/pause@sha256:9001185023633d17a2f98ff69b6ff2615b8ea02a825adffa40422f51dfdcde9d\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"} labels:{key:\"io.cri-containerd.pinned\" value:\"pinned\"}" Feb 13 19:58:55.516977 containerd[1561]: time="2025-02-13T19:58:55.516940373Z" level=info msg="Pulled image \"registry.k8s.io/pause:3.8\" with image id \"sha256:4873874c08efc72e9729683a83ffbb7502ee729e9a5ac097723806ea7fa13517\", repo tag \"registry.k8s.io/pause:3.8\", repo digest \"registry.k8s.io/pause@sha256:9001185023633d17a2f98ff69b6ff2615b8ea02a825adffa40422f51dfdcde9d\", size \"311286\" in 1.027167639s" Feb 13 19:58:55.520989 containerd[1561]: time="2025-02-13T19:58:55.520920155Z" level=info msg="Pulled image \"registry.k8s.io/pause:3.8\" with image id \"sha256:4873874c08efc72e9729683a83ffbb7502ee729e9a5ac097723806ea7fa13517\", repo tag \"registry.k8s.io/pause:3.8\", repo digest \"registry.k8s.io/pause@sha256:9001185023633d17a2f98ff69b6ff2615b8ea02a825adffa40422f51dfdcde9d\", size \"311286\" in 1.032578414s" Feb 13 19:58:55.635682 containerd[1561]: time="2025-02-13T19:58:55.635389925Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Feb 13 19:58:55.635682 containerd[1561]: time="2025-02-13T19:58:55.635459235Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Feb 13 19:58:55.635682 containerd[1561]: time="2025-02-13T19:58:55.635473412Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Feb 13 19:58:55.635682 containerd[1561]: time="2025-02-13T19:58:55.635582647Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Feb 13 19:58:55.636445 containerd[1561]: time="2025-02-13T19:58:55.635736846Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Feb 13 19:58:55.636445 containerd[1561]: time="2025-02-13T19:58:55.635808791Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." 
runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Feb 13 19:58:55.636445 containerd[1561]: time="2025-02-13T19:58:55.635822797Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Feb 13 19:58:55.636445 containerd[1561]: time="2025-02-13T19:58:55.635906574Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Feb 13 19:58:55.722392 containerd[1561]: time="2025-02-13T19:58:55.722336380Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-proxy-rzchk,Uid:9511cf27-bc69-4aaf-9a69-d3bc7ca1360a,Namespace:kube-system,Attempt:0,} returns sandbox id \"87d6830b4aa0bae7eebb088c5ba171581830363268a3cf4f662c2077f09fc718\"" Feb 13 19:58:55.722870 containerd[1561]: time="2025-02-13T19:58:55.722762950Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-node-jvhrh,Uid:52cf5fc6-a76a-426e-b244-4d1b397f40fe,Namespace:calico-system,Attempt:0,} returns sandbox id \"2c562c077f183462007d2ae58b40e02d5d45894dc1f17d72ef6dba27df507786\"" Feb 13 19:58:55.725470 kubelet[1901]: E0213 19:58:55.725443 1901 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Feb 13 19:58:55.725726 kubelet[1901]: E0213 19:58:55.725602 1901 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Feb 13 19:58:55.726412 containerd[1561]: time="2025-02-13T19:58:55.726390271Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/pod2daemon-flexvol:v3.29.1\"" Feb 13 19:58:56.004609 kubelet[1901]: E0213 19:58:56.004531 1901 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Feb 13 19:58:56.080805 kubelet[1901]: E0213 19:58:56.080703 1901 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" pod="calico-system/csi-node-driver-2d48g" podUID="c7cccdaa-623e-4b78-b7f9-71b591d49e20" Feb 13 19:58:57.005610 kubelet[1901]: E0213 19:58:57.005539 1901 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Feb 13 19:58:58.006767 kubelet[1901]: E0213 19:58:58.006676 1901 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Feb 13 19:58:58.080994 kubelet[1901]: E0213 19:58:58.080943 1901 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" pod="calico-system/csi-node-driver-2d48g" podUID="c7cccdaa-623e-4b78-b7f9-71b591d49e20" Feb 13 19:58:59.007466 kubelet[1901]: E0213 19:58:59.007392 1901 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Feb 13 19:58:59.575999 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount3061252060.mount: Deactivated successfully. 
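The "Nameserver limits exceeded" errors sprinkled through this log mean the node's resolv.conf lists more nameservers than the three the glibc resolver supports, so kubelet keeps only the first three: 1.1.1.1, 1.0.0.1, 8.8.8.8. A sketch of that clamp (a hypothetical helper; the fourth server below is invented for illustration, since the log only shows the three that were applied):

    package main

    import "fmt"

    // clampNameservers keeps the first three resolvers, the limit behind the
    // kubelet warning when resolv.conf lists more than glibc supports.
    func clampNameservers(servers []string) []string {
        const maxNameservers = 3
        if len(servers) > maxNameservers {
            return servers[:maxNameservers]
        }
        return servers
    }

    func main() {
        applied := clampNameservers([]string{"1.1.1.1", "1.0.0.1", "8.8.8.8", "9.9.9.9"})
        fmt.Println(applied) // [1.1.1.1 1.0.0.1 8.8.8.8], matching the log line
    }

The warning is cosmetic for most workloads, but pods inheriting the truncated list will never query the omitted servers.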
Feb 13 19:59:00.007622 kubelet[1901]: E0213 19:59:00.007486 1901 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Feb 13 19:59:00.080714 kubelet[1901]: E0213 19:59:00.080614 1901 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" pod="calico-system/csi-node-driver-2d48g" podUID="c7cccdaa-623e-4b78-b7f9-71b591d49e20" Feb 13 19:59:00.409781 containerd[1561]: time="2025-02-13T19:59:00.409561877Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/pod2daemon-flexvol:v3.29.1\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Feb 13 19:59:00.458187 containerd[1561]: time="2025-02-13T19:59:00.458091780Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/pod2daemon-flexvol:v3.29.1: active requests=0, bytes read=6855343" Feb 13 19:59:00.511630 containerd[1561]: time="2025-02-13T19:59:00.511548481Z" level=info msg="ImageCreate event name:\"sha256:2b7452b763ec8833ca0386ada5fd066e552a9b3b02b8538a5e34cc3d6d3840a6\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Feb 13 19:59:00.575782 containerd[1561]: time="2025-02-13T19:59:00.575672394Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/pod2daemon-flexvol@sha256:a63f8b4ff531912d12d143664eb263fdbc6cd7b3ff4aa777dfb6e318a090462c\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Feb 13 19:59:00.576505 containerd[1561]: time="2025-02-13T19:59:00.576455232Z" level=info msg="Pulled image \"ghcr.io/flatcar/calico/pod2daemon-flexvol:v3.29.1\" with image id \"sha256:2b7452b763ec8833ca0386ada5fd066e552a9b3b02b8538a5e34cc3d6d3840a6\", repo tag \"ghcr.io/flatcar/calico/pod2daemon-flexvol:v3.29.1\", repo digest \"ghcr.io/flatcar/calico/pod2daemon-flexvol@sha256:a63f8b4ff531912d12d143664eb263fdbc6cd7b3ff4aa777dfb6e318a090462c\", size \"6855165\" in 4.850033962s" Feb 13 19:59:00.576505 containerd[1561]: time="2025-02-13T19:59:00.576502160Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/pod2daemon-flexvol:v3.29.1\" returns image reference \"sha256:2b7452b763ec8833ca0386ada5fd066e552a9b3b02b8538a5e34cc3d6d3840a6\"" Feb 13 19:59:00.578005 containerd[1561]: time="2025-02-13T19:59:00.577978688Z" level=info msg="PullImage \"registry.k8s.io/kube-proxy:v1.30.10\"" Feb 13 19:59:00.579666 containerd[1561]: time="2025-02-13T19:59:00.579632719Z" level=info msg="CreateContainer within sandbox \"2c562c077f183462007d2ae58b40e02d5d45894dc1f17d72ef6dba27df507786\" for container &ContainerMetadata{Name:flexvol-driver,Attempt:0,}" Feb 13 19:59:01.008387 kubelet[1901]: E0213 19:59:01.008331 1901 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Feb 13 19:59:01.695760 containerd[1561]: time="2025-02-13T19:59:01.695683614Z" level=info msg="CreateContainer within sandbox \"2c562c077f183462007d2ae58b40e02d5d45894dc1f17d72ef6dba27df507786\" for &ContainerMetadata{Name:flexvol-driver,Attempt:0,} returns container id \"fa69824e3fa6c5c7e63613ffc3220027c23ac9661c8820c123a15070f3aebad9\"" Feb 13 19:59:01.696591 containerd[1561]: time="2025-02-13T19:59:01.696566420Z" level=info msg="StartContainer for \"fa69824e3fa6c5c7e63613ffc3220027c23ac9661c8820c123a15070f3aebad9\"" Feb 13 19:59:01.842951 containerd[1561]: time="2025-02-13T19:59:01.842888213Z" level=info msg="StartContainer for 
\"fa69824e3fa6c5c7e63613ffc3220027c23ac9661c8820c123a15070f3aebad9\" returns successfully" Feb 13 19:59:02.262020 kubelet[1901]: E0213 19:59:02.009207 1901 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Feb 13 19:59:02.262020 kubelet[1901]: E0213 19:59:02.080720 1901 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" pod="calico-system/csi-node-driver-2d48g" podUID="c7cccdaa-623e-4b78-b7f9-71b591d49e20" Feb 13 19:59:02.262020 kubelet[1901]: E0213 19:59:02.105567 1901 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Feb 13 19:59:02.200581 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-fa69824e3fa6c5c7e63613ffc3220027c23ac9661c8820c123a15070f3aebad9-rootfs.mount: Deactivated successfully. Feb 13 19:59:02.677107 containerd[1561]: time="2025-02-13T19:59:02.676955158Z" level=error msg="collecting metrics for fa69824e3fa6c5c7e63613ffc3220027c23ac9661c8820c123a15070f3aebad9" error="cgroups: cgroup deleted: unknown" Feb 13 19:59:02.865846 containerd[1561]: time="2025-02-13T19:59:02.865738046Z" level=info msg="shim disconnected" id=fa69824e3fa6c5c7e63613ffc3220027c23ac9661c8820c123a15070f3aebad9 namespace=k8s.io Feb 13 19:59:02.865846 containerd[1561]: time="2025-02-13T19:59:02.865820541Z" level=warning msg="cleaning up after shim disconnected" id=fa69824e3fa6c5c7e63613ffc3220027c23ac9661c8820c123a15070f3aebad9 namespace=k8s.io Feb 13 19:59:02.865846 containerd[1561]: time="2025-02-13T19:59:02.865831301Z" level=info msg="cleaning up dead shim" namespace=k8s.io Feb 13 19:59:03.009889 kubelet[1901]: E0213 19:59:03.009838 1901 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Feb 13 19:59:03.291162 kubelet[1901]: E0213 19:59:03.291016 1901 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Feb 13 19:59:04.010700 kubelet[1901]: E0213 19:59:04.010629 1901 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Feb 13 19:59:04.080294 kubelet[1901]: E0213 19:59:04.080248 1901 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" pod="calico-system/csi-node-driver-2d48g" podUID="c7cccdaa-623e-4b78-b7f9-71b591d49e20" Feb 13 19:59:05.011096 kubelet[1901]: E0213 19:59:05.011024 1901 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Feb 13 19:59:06.011681 kubelet[1901]: E0213 19:59:06.011618 1901 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Feb 13 19:59:06.080357 kubelet[1901]: E0213 19:59:06.080282 1901 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" pod="calico-system/csi-node-driver-2d48g" 
podUID="c7cccdaa-623e-4b78-b7f9-71b591d49e20" Feb 13 19:59:07.012733 kubelet[1901]: E0213 19:59:07.012660 1901 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Feb 13 19:59:07.459615 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount3405098825.mount: Deactivated successfully. Feb 13 19:59:07.985173 containerd[1561]: time="2025-02-13T19:59:07.985118493Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-proxy:v1.30.10\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Feb 13 19:59:07.986299 containerd[1561]: time="2025-02-13T19:59:07.986255034Z" level=info msg="stop pulling image registry.k8s.io/kube-proxy:v1.30.10: active requests=0, bytes read=29057858" Feb 13 19:59:07.987658 containerd[1561]: time="2025-02-13T19:59:07.987626285Z" level=info msg="ImageCreate event name:\"sha256:a21d1b47e857207628486a387f670f224051a16b74b06a1b76d07a96e738ab54\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Feb 13 19:59:07.991318 containerd[1561]: time="2025-02-13T19:59:07.991239710Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-proxy@sha256:d112e804e548fce28d9f1e3282c9ce54e374451e6a2c41b1ca9d7fca5d1fcc48\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Feb 13 19:59:07.991824 containerd[1561]: time="2025-02-13T19:59:07.991785474Z" level=info msg="Pulled image \"registry.k8s.io/kube-proxy:v1.30.10\" with image id \"sha256:a21d1b47e857207628486a387f670f224051a16b74b06a1b76d07a96e738ab54\", repo tag \"registry.k8s.io/kube-proxy:v1.30.10\", repo digest \"registry.k8s.io/kube-proxy@sha256:d112e804e548fce28d9f1e3282c9ce54e374451e6a2c41b1ca9d7fca5d1fcc48\", size \"29056877\" in 7.413773403s" Feb 13 19:59:07.991824 containerd[1561]: time="2025-02-13T19:59:07.991814428Z" level=info msg="PullImage \"registry.k8s.io/kube-proxy:v1.30.10\" returns image reference \"sha256:a21d1b47e857207628486a387f670f224051a16b74b06a1b76d07a96e738ab54\"" Feb 13 19:59:07.993081 containerd[1561]: time="2025-02-13T19:59:07.992901807Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/cni:v3.29.1\"" Feb 13 19:59:07.994549 containerd[1561]: time="2025-02-13T19:59:07.994506215Z" level=info msg="CreateContainer within sandbox \"87d6830b4aa0bae7eebb088c5ba171581830363268a3cf4f662c2077f09fc718\" for container &ContainerMetadata{Name:kube-proxy,Attempt:0,}" Feb 13 19:59:08.013164 kubelet[1901]: E0213 19:59:08.013098 1901 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Feb 13 19:59:08.048040 containerd[1561]: time="2025-02-13T19:59:08.047975881Z" level=info msg="CreateContainer within sandbox \"87d6830b4aa0bae7eebb088c5ba171581830363268a3cf4f662c2077f09fc718\" for &ContainerMetadata{Name:kube-proxy,Attempt:0,} returns container id \"e3c8bf3e6571a268aa48d755b78f1a55f308fdd1f3de8ac940ca2bcde150e350\"" Feb 13 19:59:08.048555 containerd[1561]: time="2025-02-13T19:59:08.048514821Z" level=info msg="StartContainer for \"e3c8bf3e6571a268aa48d755b78f1a55f308fdd1f3de8ac940ca2bcde150e350\"" Feb 13 19:59:08.080465 kubelet[1901]: E0213 19:59:08.080403 1901 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" pod="calico-system/csi-node-driver-2d48g" podUID="c7cccdaa-623e-4b78-b7f9-71b591d49e20" Feb 13 19:59:08.112546 containerd[1561]: time="2025-02-13T19:59:08.112495245Z" 
level=info msg="StartContainer for \"e3c8bf3e6571a268aa48d755b78f1a55f308fdd1f3de8ac940ca2bcde150e350\" returns successfully" Feb 13 19:59:08.305979 kubelet[1901]: E0213 19:59:08.304239 1901 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Feb 13 19:59:09.013929 kubelet[1901]: E0213 19:59:09.013853 1901 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Feb 13 19:59:09.305457 kubelet[1901]: E0213 19:59:09.305300 1901 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Feb 13 19:59:10.014330 kubelet[1901]: E0213 19:59:10.014234 1901 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Feb 13 19:59:10.080399 kubelet[1901]: E0213 19:59:10.080328 1901 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" pod="calico-system/csi-node-driver-2d48g" podUID="c7cccdaa-623e-4b78-b7f9-71b591d49e20" Feb 13 19:59:11.001078 kubelet[1901]: E0213 19:59:11.000993 1901 file.go:104] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Feb 13 19:59:11.014556 kubelet[1901]: E0213 19:59:11.014477 1901 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Feb 13 19:59:12.014972 kubelet[1901]: E0213 19:59:12.014908 1901 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Feb 13 19:59:12.080931 kubelet[1901]: E0213 19:59:12.080871 1901 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" pod="calico-system/csi-node-driver-2d48g" podUID="c7cccdaa-623e-4b78-b7f9-71b591d49e20" Feb 13 19:59:12.462879 containerd[1561]: time="2025-02-13T19:59:12.462799668Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/cni:v3.29.1\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Feb 13 19:59:12.463519 containerd[1561]: time="2025-02-13T19:59:12.463454273Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/cni:v3.29.1: active requests=0, bytes read=96154154" Feb 13 19:59:12.466770 containerd[1561]: time="2025-02-13T19:59:12.465784957Z" level=info msg="ImageCreate event name:\"sha256:7dd6ea186aba0d7a1791a79d426fe854527ca95192b26bbd19e8baf8373f7d0e\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Feb 13 19:59:12.468697 containerd[1561]: time="2025-02-13T19:59:12.468652899Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/cni@sha256:21e759d51c90dfb34fc1397dc180dd3a3fb564c2b0580d2f61ffe108f2a3c94b\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Feb 13 19:59:12.469409 containerd[1561]: time="2025-02-13T19:59:12.469370325Z" level=info msg="Pulled image \"ghcr.io/flatcar/calico/cni:v3.29.1\" with image id \"sha256:7dd6ea186aba0d7a1791a79d426fe854527ca95192b26bbd19e8baf8373f7d0e\", repo tag \"ghcr.io/flatcar/calico/cni:v3.29.1\", repo digest 
\"ghcr.io/flatcar/calico/cni@sha256:21e759d51c90dfb34fc1397dc180dd3a3fb564c2b0580d2f61ffe108f2a3c94b\", size \"97647238\" in 4.476433423s" Feb 13 19:59:12.469478 containerd[1561]: time="2025-02-13T19:59:12.469410474Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/cni:v3.29.1\" returns image reference \"sha256:7dd6ea186aba0d7a1791a79d426fe854527ca95192b26bbd19e8baf8373f7d0e\"" Feb 13 19:59:12.471541 containerd[1561]: time="2025-02-13T19:59:12.471506423Z" level=info msg="CreateContainer within sandbox \"2c562c077f183462007d2ae58b40e02d5d45894dc1f17d72ef6dba27df507786\" for container &ContainerMetadata{Name:install-cni,Attempt:0,}" Feb 13 19:59:12.486551 containerd[1561]: time="2025-02-13T19:59:12.486499227Z" level=info msg="CreateContainer within sandbox \"2c562c077f183462007d2ae58b40e02d5d45894dc1f17d72ef6dba27df507786\" for &ContainerMetadata{Name:install-cni,Attempt:0,} returns container id \"c58cf0e62a1efdeee07b2f175ea580c6952ac44448d92c96305b3bad05a55f5f\"" Feb 13 19:59:12.487262 containerd[1561]: time="2025-02-13T19:59:12.487218787Z" level=info msg="StartContainer for \"c58cf0e62a1efdeee07b2f175ea580c6952ac44448d92c96305b3bad05a55f5f\"" Feb 13 19:59:12.550488 containerd[1561]: time="2025-02-13T19:59:12.550434342Z" level=info msg="StartContainer for \"c58cf0e62a1efdeee07b2f175ea580c6952ac44448d92c96305b3bad05a55f5f\" returns successfully" Feb 13 19:59:13.015244 kubelet[1901]: E0213 19:59:13.015210 1901 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Feb 13 19:59:13.313365 kubelet[1901]: E0213 19:59:13.313209 1901 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Feb 13 19:59:13.431848 kubelet[1901]: I0213 19:59:13.431725 1901 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/kube-proxy-rzchk" podStartSLOduration=9.164977501 podStartE2EDuration="21.431705102s" podCreationTimestamp="2025-02-13 19:58:52 +0000 UTC" firstStartedPulling="2025-02-13 19:58:55.726057677 +0000 UTC m=+5.098893735" lastFinishedPulling="2025-02-13 19:59:07.992785278 +0000 UTC m=+17.365621336" observedRunningTime="2025-02-13 19:59:08.330496888 +0000 UTC m=+17.703332966" watchObservedRunningTime="2025-02-13 19:59:13.431705102 +0000 UTC m=+22.804541160" Feb 13 19:59:14.015911 kubelet[1901]: E0213 19:59:14.015839 1901 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Feb 13 19:59:14.081175 kubelet[1901]: E0213 19:59:14.081110 1901 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" pod="calico-system/csi-node-driver-2d48g" podUID="c7cccdaa-623e-4b78-b7f9-71b591d49e20" Feb 13 19:59:14.314671 kubelet[1901]: E0213 19:59:14.314535 1901 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Feb 13 19:59:15.016962 kubelet[1901]: E0213 19:59:15.016927 1901 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Feb 13 19:59:15.294122 containerd[1561]: time="2025-02-13T19:59:15.293968403Z" level=error msg="failed to reload cni configuration after receiving fs change event(WRITE 
\"/etc/cni/net.d/calico-kubeconfig\")" error="cni config load failed: no network config found in /etc/cni/net.d: cni plugin not initialized: failed to load cni config" Feb 13 19:59:15.315282 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-c58cf0e62a1efdeee07b2f175ea580c6952ac44448d92c96305b3bad05a55f5f-rootfs.mount: Deactivated successfully. Feb 13 19:59:15.321316 kubelet[1901]: I0213 19:59:15.321257 1901 kubelet_node_status.go:497] "Fast updating node status as it just became ready" Feb 13 19:59:15.360007 containerd[1561]: time="2025-02-13T19:59:15.359920870Z" level=info msg="shim disconnected" id=c58cf0e62a1efdeee07b2f175ea580c6952ac44448d92c96305b3bad05a55f5f namespace=k8s.io Feb 13 19:59:15.360007 containerd[1561]: time="2025-02-13T19:59:15.359970225Z" level=warning msg="cleaning up after shim disconnected" id=c58cf0e62a1efdeee07b2f175ea580c6952ac44448d92c96305b3bad05a55f5f namespace=k8s.io Feb 13 19:59:15.360007 containerd[1561]: time="2025-02-13T19:59:15.359981497Z" level=info msg="cleaning up dead shim" namespace=k8s.io Feb 13 19:59:16.017252 kubelet[1901]: E0213 19:59:16.017180 1901 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Feb 13 19:59:16.083909 containerd[1561]: time="2025-02-13T19:59:16.083850068Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:csi-node-driver-2d48g,Uid:c7cccdaa-623e-4b78-b7f9-71b591d49e20,Namespace:calico-system,Attempt:0,}" Feb 13 19:59:16.151431 containerd[1561]: time="2025-02-13T19:59:16.151370559Z" level=error msg="Failed to destroy network for sandbox \"44e1caa37b87750a9f2adb47a266dbd737b6af7b655a57d9c7aa4956675f8dcf\"" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Feb 13 19:59:16.151815 containerd[1561]: time="2025-02-13T19:59:16.151790335Z" level=error msg="encountered an error cleaning up failed sandbox \"44e1caa37b87750a9f2adb47a266dbd737b6af7b655a57d9c7aa4956675f8dcf\", marking sandbox state as SANDBOX_UNKNOWN" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Feb 13 19:59:16.151866 containerd[1561]: time="2025-02-13T19:59:16.151834349Z" level=error msg="RunPodSandbox for &PodSandboxMetadata{Name:csi-node-driver-2d48g,Uid:c7cccdaa-623e-4b78-b7f9-71b591d49e20,Namespace:calico-system,Attempt:0,} failed, error" error="failed to setup network for sandbox \"44e1caa37b87750a9f2adb47a266dbd737b6af7b655a57d9c7aa4956675f8dcf\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Feb 13 19:59:16.152135 kubelet[1901]: E0213 19:59:16.152067 1901 remote_runtime.go:193] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"44e1caa37b87750a9f2adb47a266dbd737b6af7b655a57d9c7aa4956675f8dcf\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Feb 13 19:59:16.152259 kubelet[1901]: E0213 19:59:16.152147 1901 kuberuntime_sandbox.go:72] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to setup network for sandbox 
\"44e1caa37b87750a9f2adb47a266dbd737b6af7b655a57d9c7aa4956675f8dcf\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-system/csi-node-driver-2d48g" Feb 13 19:59:16.152259 kubelet[1901]: E0213 19:59:16.152168 1901 kuberuntime_manager.go:1166] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"44e1caa37b87750a9f2adb47a266dbd737b6af7b655a57d9c7aa4956675f8dcf\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-system/csi-node-driver-2d48g" Feb 13 19:59:16.152259 kubelet[1901]: E0213 19:59:16.152217 1901 pod_workers.go:1298] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"csi-node-driver-2d48g_calico-system(c7cccdaa-623e-4b78-b7f9-71b591d49e20)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"csi-node-driver-2d48g_calico-system(c7cccdaa-623e-4b78-b7f9-71b591d49e20)\\\": rpc error: code = Unknown desc = failed to setup network for sandbox \\\"44e1caa37b87750a9f2adb47a266dbd737b6af7b655a57d9c7aa4956675f8dcf\\\": plugin type=\\\"calico\\\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="calico-system/csi-node-driver-2d48g" podUID="c7cccdaa-623e-4b78-b7f9-71b591d49e20" Feb 13 19:59:16.153919 systemd[1]: run-containerd-io.containerd.grpc.v1.cri-sandboxes-44e1caa37b87750a9f2adb47a266dbd737b6af7b655a57d9c7aa4956675f8dcf-shm.mount: Deactivated successfully. Feb 13 19:59:16.318116 kubelet[1901]: I0213 19:59:16.318003 1901 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="44e1caa37b87750a9f2adb47a266dbd737b6af7b655a57d9c7aa4956675f8dcf" Feb 13 19:59:16.318868 containerd[1561]: time="2025-02-13T19:59:16.318822962Z" level=info msg="StopPodSandbox for \"44e1caa37b87750a9f2adb47a266dbd737b6af7b655a57d9c7aa4956675f8dcf\"" Feb 13 19:59:16.319346 containerd[1561]: time="2025-02-13T19:59:16.318995312Z" level=info msg="Ensure that sandbox 44e1caa37b87750a9f2adb47a266dbd737b6af7b655a57d9c7aa4956675f8dcf in task-service has been cleanup successfully" Feb 13 19:59:16.320570 kubelet[1901]: E0213 19:59:16.320537 1901 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Feb 13 19:59:16.321536 containerd[1561]: time="2025-02-13T19:59:16.321505370Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/node:v3.29.1\"" Feb 13 19:59:16.347760 containerd[1561]: time="2025-02-13T19:59:16.347678267Z" level=error msg="StopPodSandbox for \"44e1caa37b87750a9f2adb47a266dbd737b6af7b655a57d9c7aa4956675f8dcf\" failed" error="failed to destroy network for sandbox \"44e1caa37b87750a9f2adb47a266dbd737b6af7b655a57d9c7aa4956675f8dcf\": plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Feb 13 19:59:16.347963 kubelet[1901]: E0213 19:59:16.347919 1901 remote_runtime.go:222] "StopPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to destroy network for sandbox \"44e1caa37b87750a9f2adb47a266dbd737b6af7b655a57d9c7aa4956675f8dcf\": plugin 
type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" podSandboxID="44e1caa37b87750a9f2adb47a266dbd737b6af7b655a57d9c7aa4956675f8dcf" Feb 13 19:59:16.348066 kubelet[1901]: E0213 19:59:16.347973 1901 kuberuntime_manager.go:1375] "Failed to stop sandbox" podSandboxID={"Type":"containerd","ID":"44e1caa37b87750a9f2adb47a266dbd737b6af7b655a57d9c7aa4956675f8dcf"} Feb 13 19:59:16.348066 kubelet[1901]: E0213 19:59:16.348032 1901 kuberuntime_manager.go:1075] "killPodWithSyncResult failed" err="failed to \"KillPodSandbox\" for \"c7cccdaa-623e-4b78-b7f9-71b591d49e20\" with KillPodSandboxError: \"rpc error: code = Unknown desc = failed to destroy network for sandbox \\\"44e1caa37b87750a9f2adb47a266dbd737b6af7b655a57d9c7aa4956675f8dcf\\\": plugin type=\\\"calico\\\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" Feb 13 19:59:16.348183 kubelet[1901]: E0213 19:59:16.348064 1901 pod_workers.go:1298] "Error syncing pod, skipping" err="failed to \"KillPodSandbox\" for \"c7cccdaa-623e-4b78-b7f9-71b591d49e20\" with KillPodSandboxError: \"rpc error: code = Unknown desc = failed to destroy network for sandbox \\\"44e1caa37b87750a9f2adb47a266dbd737b6af7b655a57d9c7aa4956675f8dcf\\\": plugin type=\\\"calico\\\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="calico-system/csi-node-driver-2d48g" podUID="c7cccdaa-623e-4b78-b7f9-71b591d49e20" Feb 13 19:59:16.581399 kubelet[1901]: I0213 19:59:16.581223 1901 topology_manager.go:215] "Topology Admit Handler" podUID="565a1071-1bcc-4669-b9a9-4e955e40c368" podNamespace="default" podName="nginx-deployment-85f456d6dd-gl9h2" Feb 13 19:59:16.763140 kubelet[1901]: I0213 19:59:16.762894 1901 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-sg4rf\" (UniqueName: \"kubernetes.io/projected/565a1071-1bcc-4669-b9a9-4e955e40c368-kube-api-access-sg4rf\") pod \"nginx-deployment-85f456d6dd-gl9h2\" (UID: \"565a1071-1bcc-4669-b9a9-4e955e40c368\") " pod="default/nginx-deployment-85f456d6dd-gl9h2" Feb 13 19:59:16.885592 containerd[1561]: time="2025-02-13T19:59:16.885473366Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:nginx-deployment-85f456d6dd-gl9h2,Uid:565a1071-1bcc-4669-b9a9-4e955e40c368,Namespace:default,Attempt:0,}" Feb 13 19:59:16.949190 containerd[1561]: time="2025-02-13T19:59:16.949126314Z" level=error msg="Failed to destroy network for sandbox \"db29ef37f428b422ce2f19bf8be18b3d8ba6fa38a70fb16f37d683d1eb537828\"" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Feb 13 19:59:16.949698 containerd[1561]: time="2025-02-13T19:59:16.949655800Z" level=error msg="encountered an error cleaning up failed sandbox \"db29ef37f428b422ce2f19bf8be18b3d8ba6fa38a70fb16f37d683d1eb537828\", marking sandbox state as SANDBOX_UNKNOWN" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Feb 13 19:59:16.949758 containerd[1561]: time="2025-02-13T19:59:16.949716086Z" level=error msg="RunPodSandbox for 
&PodSandboxMetadata{Name:nginx-deployment-85f456d6dd-gl9h2,Uid:565a1071-1bcc-4669-b9a9-4e955e40c368,Namespace:default,Attempt:0,} failed, error" error="failed to setup network for sandbox \"db29ef37f428b422ce2f19bf8be18b3d8ba6fa38a70fb16f37d683d1eb537828\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Feb 13 19:59:16.950093 kubelet[1901]: E0213 19:59:16.950008 1901 remote_runtime.go:193] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"db29ef37f428b422ce2f19bf8be18b3d8ba6fa38a70fb16f37d683d1eb537828\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Feb 13 19:59:16.950093 kubelet[1901]: E0213 19:59:16.950088 1901 kuberuntime_sandbox.go:72] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"db29ef37f428b422ce2f19bf8be18b3d8ba6fa38a70fb16f37d683d1eb537828\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="default/nginx-deployment-85f456d6dd-gl9h2" Feb 13 19:59:16.950307 kubelet[1901]: E0213 19:59:16.950114 1901 kuberuntime_manager.go:1166] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"db29ef37f428b422ce2f19bf8be18b3d8ba6fa38a70fb16f37d683d1eb537828\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="default/nginx-deployment-85f456d6dd-gl9h2" Feb 13 19:59:16.950307 kubelet[1901]: E0213 19:59:16.950163 1901 pod_workers.go:1298] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"nginx-deployment-85f456d6dd-gl9h2_default(565a1071-1bcc-4669-b9a9-4e955e40c368)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"nginx-deployment-85f456d6dd-gl9h2_default(565a1071-1bcc-4669-b9a9-4e955e40c368)\\\": rpc error: code = Unknown desc = failed to setup network for sandbox \\\"db29ef37f428b422ce2f19bf8be18b3d8ba6fa38a70fb16f37d683d1eb537828\\\": plugin type=\\\"calico\\\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="default/nginx-deployment-85f456d6dd-gl9h2" podUID="565a1071-1bcc-4669-b9a9-4e955e40c368" Feb 13 19:59:16.951685 systemd[1]: run-containerd-io.containerd.grpc.v1.cri-sandboxes-db29ef37f428b422ce2f19bf8be18b3d8ba6fa38a70fb16f37d683d1eb537828-shm.mount: Deactivated successfully. 
Feb 13 19:59:17.017767 kubelet[1901]: E0213 19:59:17.017710 1901 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Feb 13 19:59:17.322989 kubelet[1901]: I0213 19:59:17.322951 1901 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="db29ef37f428b422ce2f19bf8be18b3d8ba6fa38a70fb16f37d683d1eb537828" Feb 13 19:59:17.323691 containerd[1561]: time="2025-02-13T19:59:17.323645857Z" level=info msg="StopPodSandbox for \"db29ef37f428b422ce2f19bf8be18b3d8ba6fa38a70fb16f37d683d1eb537828\"" Feb 13 19:59:17.324157 containerd[1561]: time="2025-02-13T19:59:17.323902560Z" level=info msg="Ensure that sandbox db29ef37f428b422ce2f19bf8be18b3d8ba6fa38a70fb16f37d683d1eb537828 in task-service has been cleanup successfully" Feb 13 19:59:17.350692 containerd[1561]: time="2025-02-13T19:59:17.350613194Z" level=error msg="StopPodSandbox for \"db29ef37f428b422ce2f19bf8be18b3d8ba6fa38a70fb16f37d683d1eb537828\" failed" error="failed to destroy network for sandbox \"db29ef37f428b422ce2f19bf8be18b3d8ba6fa38a70fb16f37d683d1eb537828\": plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Feb 13 19:59:17.350932 kubelet[1901]: E0213 19:59:17.350886 1901 remote_runtime.go:222] "StopPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to destroy network for sandbox \"db29ef37f428b422ce2f19bf8be18b3d8ba6fa38a70fb16f37d683d1eb537828\": plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" podSandboxID="db29ef37f428b422ce2f19bf8be18b3d8ba6fa38a70fb16f37d683d1eb537828" Feb 13 19:59:17.351004 kubelet[1901]: E0213 19:59:17.350946 1901 kuberuntime_manager.go:1375] "Failed to stop sandbox" podSandboxID={"Type":"containerd","ID":"db29ef37f428b422ce2f19bf8be18b3d8ba6fa38a70fb16f37d683d1eb537828"} Feb 13 19:59:17.351033 kubelet[1901]: E0213 19:59:17.351006 1901 kuberuntime_manager.go:1075] "killPodWithSyncResult failed" err="failed to \"KillPodSandbox\" for \"565a1071-1bcc-4669-b9a9-4e955e40c368\" with KillPodSandboxError: \"rpc error: code = Unknown desc = failed to destroy network for sandbox \\\"db29ef37f428b422ce2f19bf8be18b3d8ba6fa38a70fb16f37d683d1eb537828\\\": plugin type=\\\"calico\\\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" Feb 13 19:59:17.351111 kubelet[1901]: E0213 19:59:17.351031 1901 pod_workers.go:1298] "Error syncing pod, skipping" err="failed to \"KillPodSandbox\" for \"565a1071-1bcc-4669-b9a9-4e955e40c368\" with KillPodSandboxError: \"rpc error: code = Unknown desc = failed to destroy network for sandbox \\\"db29ef37f428b422ce2f19bf8be18b3d8ba6fa38a70fb16f37d683d1eb537828\\\": plugin type=\\\"calico\\\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="default/nginx-deployment-85f456d6dd-gl9h2" podUID="565a1071-1bcc-4669-b9a9-4e955e40c368" Feb 13 19:59:18.018867 kubelet[1901]: E0213 19:59:18.018806 1901 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Feb 13 19:59:19.019525 kubelet[1901]: E0213 19:59:19.019409 1901 file_linux.go:61] "Unable to read 
config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Feb 13 19:59:20.020272 kubelet[1901]: E0213 19:59:20.020212 1901 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Feb 13 19:59:20.547906 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount530829307.mount: Deactivated successfully. Feb 13 19:59:21.020977 kubelet[1901]: E0213 19:59:21.020925 1901 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Feb 13 19:59:21.351413 containerd[1561]: time="2025-02-13T19:59:21.351208187Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/node:v3.29.1\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Feb 13 19:59:21.352046 containerd[1561]: time="2025-02-13T19:59:21.352022209Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/node:v3.29.1: active requests=0, bytes read=142742010" Feb 13 19:59:21.353461 containerd[1561]: time="2025-02-13T19:59:21.353403163Z" level=info msg="ImageCreate event name:\"sha256:feb26d4585d68e875d9bd9bd6c27ea9f2d5c9ed9ef70f8b8cb0ebb0559a1d664\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Feb 13 19:59:21.356114 containerd[1561]: time="2025-02-13T19:59:21.356071652Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/node@sha256:99c3917516efe1f807a0cfdf2d14b628b7c5cc6bd8a9ee5a253154f31756bea1\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Feb 13 19:59:21.356948 containerd[1561]: time="2025-02-13T19:59:21.356880635Z" level=info msg="Pulled image \"ghcr.io/flatcar/calico/node:v3.29.1\" with image id \"sha256:feb26d4585d68e875d9bd9bd6c27ea9f2d5c9ed9ef70f8b8cb0ebb0559a1d664\", repo tag \"ghcr.io/flatcar/calico/node:v3.29.1\", repo digest \"ghcr.io/flatcar/calico/node@sha256:99c3917516efe1f807a0cfdf2d14b628b7c5cc6bd8a9ee5a253154f31756bea1\", size \"142741872\" in 5.035333816s" Feb 13 19:59:21.357014 containerd[1561]: time="2025-02-13T19:59:21.356944577Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/node:v3.29.1\" returns image reference \"sha256:feb26d4585d68e875d9bd9bd6c27ea9f2d5c9ed9ef70f8b8cb0ebb0559a1d664\"" Feb 13 19:59:21.365821 containerd[1561]: time="2025-02-13T19:59:21.365784577Z" level=info msg="CreateContainer within sandbox \"2c562c077f183462007d2ae58b40e02d5d45894dc1f17d72ef6dba27df507786\" for container &ContainerMetadata{Name:calico-node,Attempt:0,}" Feb 13 19:59:21.382193 containerd[1561]: time="2025-02-13T19:59:21.382136773Z" level=info msg="CreateContainer within sandbox \"2c562c077f183462007d2ae58b40e02d5d45894dc1f17d72ef6dba27df507786\" for &ContainerMetadata{Name:calico-node,Attempt:0,} returns container id \"3b30fffc6ceac64b6b59cd6ba731a968124e164dbbb13c6a57079ef96c151ec0\"" Feb 13 19:59:21.382895 containerd[1561]: time="2025-02-13T19:59:21.382853329Z" level=info msg="StartContainer for \"3b30fffc6ceac64b6b59cd6ba731a968124e164dbbb13c6a57079ef96c151ec0\"" Feb 13 19:59:21.453571 containerd[1561]: time="2025-02-13T19:59:21.453521286Z" level=info msg="StartContainer for \"3b30fffc6ceac64b6b59cd6ba731a968124e164dbbb13c6a57079ef96c151ec0\" returns successfully" Feb 13 19:59:21.519942 kernel: wireguard: WireGuard 1.0.0 loaded. See www.wireguard.com for information. Feb 13 19:59:21.520063 kernel: wireguard: Copyright (C) 2015-2019 Jason A. Donenfeld . All Rights Reserved. 
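
The pull entry above reports both the image size (142741872 bytes) and the elapsed time (5.035333816s), which pins the effective pull rate at roughly 27 MiB/s. A throwaway calculation, using only the two figures copied from the log:

```go
// Back-of-the-envelope throughput for the calico/node image pull above.
// Both constants are taken from the log entry; nothing else is assumed.
package main

import "fmt"

func main() {
	const bytesPulled = 142741872 // repo "size" reported by containerd
	const elapsedSec = 5.035333816
	fmt.Printf("pull rate ≈ %.1f MiB/s\n", float64(bytesPulled)/elapsedSec/(1<<20)) // ≈ 27.0
}
```
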
Feb 13 19:59:22.021109 kubelet[1901]: E0213 19:59:22.021031 1901 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Feb 13 19:59:22.335163 kubelet[1901]: E0213 19:59:22.335018 1901 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Feb 13 19:59:22.351613 kubelet[1901]: I0213 19:59:22.351550 1901 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="calico-system/calico-node-jvhrh" podStartSLOduration=5.719859292 podStartE2EDuration="31.351534495s" podCreationTimestamp="2025-02-13 19:58:51 +0000 UTC" firstStartedPulling="2025-02-13 19:58:55.726038441 +0000 UTC m=+5.098874499" lastFinishedPulling="2025-02-13 19:59:21.357713644 +0000 UTC m=+30.730549702" observedRunningTime="2025-02-13 19:59:22.351446668 +0000 UTC m=+31.724282736" watchObservedRunningTime="2025-02-13 19:59:22.351534495 +0000 UTC m=+31.724370543" Feb 13 19:59:22.902782 kernel: bpftool[2707]: memfd_create() called without MFD_EXEC or MFD_NOEXEC_SEAL set Feb 13 19:59:23.021704 kubelet[1901]: E0213 19:59:23.021646 1901 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Feb 13 19:59:23.151366 systemd-networkd[1240]: vxlan.calico: Link UP Feb 13 19:59:23.151377 systemd-networkd[1240]: vxlan.calico: Gained carrier Feb 13 19:59:23.336641 kubelet[1901]: E0213 19:59:23.336595 1901 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Feb 13 19:59:23.356961 systemd[1]: run-containerd-runc-k8s.io-3b30fffc6ceac64b6b59cd6ba731a968124e164dbbb13c6a57079ef96c151ec0-runc.vCU2Gl.mount: Deactivated successfully. Feb 13 19:59:23.837498 update_engine[1543]: I20250213 19:59:23.837354 1543 update_attempter.cc:509] Updating boot flags... 
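
The pod_startup_latency_tracker entry above is internally consistent: podStartE2EDuration (31.351534495s) is exactly the gap between podCreationTimestamp (19:58:51) and watchObservedRunningTime (19:59:22.351534495), and podStartSLOduration (5.719859292s) is that same gap minus the 25.631675203s image-pull window bracketed by firstStartedPulling and lastFinishedPulling. Whether kubelet computes the SLO figure exactly this way is an inference from the numbers; the sketch below only shows that they agree, with every timestamp copied from the entry.

```go
// Reproduces the tracker's two duration figures from the timestamps it quotes.
package main

import (
	"fmt"
	"time"
)

func main() {
	const layout = "2006-01-02 15:04:05.999999999 -0700 MST"
	parse := func(s string) time.Time { t, _ := time.Parse(layout, s); return t } // inputs known-good here
	created := parse("2025-02-13 19:58:51 +0000 UTC")
	running := parse("2025-02-13 19:59:22.351534495 +0000 UTC")
	pullStart := parse("2025-02-13 19:58:55.726038441 +0000 UTC")
	pullEnd := parse("2025-02-13 19:59:21.357713644 +0000 UTC")
	e2e := running.Sub(created)
	fmt.Println("podStartE2EDuration:", e2e)                        // 31.351534495s
	fmt.Println("podStartSLOduration:", e2e-pullEnd.Sub(pullStart)) // 5.719859292s
}
```
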
Feb 13 19:59:23.863881 kernel: BTRFS warning: duplicate device /dev/vda3 devid 1 generation 37 scanned by (udev-worker) (2746) Feb 13 19:59:23.890804 kernel: BTRFS warning: duplicate device /dev/vda3 devid 1 generation 37 scanned by (udev-worker) (2746) Feb 13 19:59:23.918917 kernel: BTRFS warning: duplicate device /dev/vda3 devid 1 generation 37 scanned by (udev-worker) (2746) Feb 13 19:59:24.022222 kubelet[1901]: E0213 19:59:24.022093 1901 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Feb 13 19:59:24.342953 systemd-networkd[1240]: vxlan.calico: Gained IPv6LL Feb 13 19:59:25.022931 kubelet[1901]: E0213 19:59:25.022838 1901 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Feb 13 19:59:26.023057 kubelet[1901]: E0213 19:59:26.022988 1901 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Feb 13 19:59:27.023530 kubelet[1901]: E0213 19:59:27.023455 1901 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Feb 13 19:59:27.081563 containerd[1561]: time="2025-02-13T19:59:27.081471684Z" level=info msg="StopPodSandbox for \"44e1caa37b87750a9f2adb47a266dbd737b6af7b655a57d9c7aa4956675f8dcf\"" Feb 13 19:59:27.267943 containerd[1561]: 2025-02-13 19:59:27.193 [INFO][2834] cni-plugin/k8s.go 608: Cleaning up netns ContainerID="44e1caa37b87750a9f2adb47a266dbd737b6af7b655a57d9c7aa4956675f8dcf" Feb 13 19:59:27.267943 containerd[1561]: 2025-02-13 19:59:27.193 [INFO][2834] cni-plugin/dataplane_linux.go 559: Deleting workload's device in netns. ContainerID="44e1caa37b87750a9f2adb47a266dbd737b6af7b655a57d9c7aa4956675f8dcf" iface="eth0" netns="/var/run/netns/cni-bd993d57-4913-cecf-8b69-32a85ba32688" Feb 13 19:59:27.267943 containerd[1561]: 2025-02-13 19:59:27.193 [INFO][2834] cni-plugin/dataplane_linux.go 570: Entered netns, deleting veth. ContainerID="44e1caa37b87750a9f2adb47a266dbd737b6af7b655a57d9c7aa4956675f8dcf" iface="eth0" netns="/var/run/netns/cni-bd993d57-4913-cecf-8b69-32a85ba32688" Feb 13 19:59:27.267943 containerd[1561]: 2025-02-13 19:59:27.193 [INFO][2834] cni-plugin/dataplane_linux.go 597: Workload's veth was already gone. Nothing to do. ContainerID="44e1caa37b87750a9f2adb47a266dbd737b6af7b655a57d9c7aa4956675f8dcf" iface="eth0" netns="/var/run/netns/cni-bd993d57-4913-cecf-8b69-32a85ba32688" Feb 13 19:59:27.267943 containerd[1561]: 2025-02-13 19:59:27.194 [INFO][2834] cni-plugin/k8s.go 615: Releasing IP address(es) ContainerID="44e1caa37b87750a9f2adb47a266dbd737b6af7b655a57d9c7aa4956675f8dcf" Feb 13 19:59:27.267943 containerd[1561]: 2025-02-13 19:59:27.194 [INFO][2834] cni-plugin/utils.go 188: Calico CNI releasing IP address ContainerID="44e1caa37b87750a9f2adb47a266dbd737b6af7b655a57d9c7aa4956675f8dcf" Feb 13 19:59:27.267943 containerd[1561]: 2025-02-13 19:59:27.218 [INFO][2843] ipam/ipam_plugin.go 412: Releasing address using handleID ContainerID="44e1caa37b87750a9f2adb47a266dbd737b6af7b655a57d9c7aa4956675f8dcf" HandleID="k8s-pod-network.44e1caa37b87750a9f2adb47a266dbd737b6af7b655a57d9c7aa4956675f8dcf" Workload="10.0.0.102-k8s-csi--node--driver--2d48g-eth0" Feb 13 19:59:27.267943 containerd[1561]: 2025-02-13 19:59:27.218 [INFO][2843] ipam/ipam_plugin.go 353: About to acquire host-wide IPAM lock. Feb 13 19:59:27.267943 containerd[1561]: 2025-02-13 19:59:27.218 [INFO][2843] ipam/ipam_plugin.go 368: Acquired host-wide IPAM lock. 
Feb 13 19:59:27.267943 containerd[1561]: 2025-02-13 19:59:27.258 [WARNING][2843] ipam/ipam_plugin.go 429: Asked to release address but it doesn't exist. Ignoring ContainerID="44e1caa37b87750a9f2adb47a266dbd737b6af7b655a57d9c7aa4956675f8dcf" HandleID="k8s-pod-network.44e1caa37b87750a9f2adb47a266dbd737b6af7b655a57d9c7aa4956675f8dcf" Workload="10.0.0.102-k8s-csi--node--driver--2d48g-eth0" Feb 13 19:59:27.267943 containerd[1561]: 2025-02-13 19:59:27.258 [INFO][2843] ipam/ipam_plugin.go 440: Releasing address using workloadID ContainerID="44e1caa37b87750a9f2adb47a266dbd737b6af7b655a57d9c7aa4956675f8dcf" HandleID="k8s-pod-network.44e1caa37b87750a9f2adb47a266dbd737b6af7b655a57d9c7aa4956675f8dcf" Workload="10.0.0.102-k8s-csi--node--driver--2d48g-eth0" Feb 13 19:59:27.267943 containerd[1561]: 2025-02-13 19:59:27.260 [INFO][2843] ipam/ipam_plugin.go 374: Released host-wide IPAM lock. Feb 13 19:59:27.267943 containerd[1561]: 2025-02-13 19:59:27.265 [INFO][2834] cni-plugin/k8s.go 621: Teardown processing complete. ContainerID="44e1caa37b87750a9f2adb47a266dbd737b6af7b655a57d9c7aa4956675f8dcf" Feb 13 19:59:27.268484 containerd[1561]: time="2025-02-13T19:59:27.268177876Z" level=info msg="TearDown network for sandbox \"44e1caa37b87750a9f2adb47a266dbd737b6af7b655a57d9c7aa4956675f8dcf\" successfully" Feb 13 19:59:27.268484 containerd[1561]: time="2025-02-13T19:59:27.268218343Z" level=info msg="StopPodSandbox for \"44e1caa37b87750a9f2adb47a266dbd737b6af7b655a57d9c7aa4956675f8dcf\" returns successfully" Feb 13 19:59:27.269002 containerd[1561]: time="2025-02-13T19:59:27.268970350Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:csi-node-driver-2d48g,Uid:c7cccdaa-623e-4b78-b7f9-71b591d49e20,Namespace:calico-system,Attempt:1,}" Feb 13 19:59:27.270709 systemd[1]: run-netns-cni\x2dbd993d57\x2d4913\x2dcecf\x2d8b69\x2d32a85ba32688.mount: Deactivated successfully. 
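
The WARNING/Ignoring pair in the teardown above shows Calico IPAM treating the release of an unrecorded address as a no-op rather than an error: the sandbox's original CNI ADD failed before any IP was written, so the later CNI DEL finds nothing under the handle and still completes, letting kubelet retry with a fresh RunPodSandbox at Attempt:1. A toy model of that idempotent-delete contract (hypothetical names throughout; not Calico's implementation):

```go
// Toy model of the idempotent release seen above: deleting an allocation
// that was never recorded warns and succeeds instead of failing the DEL.
package main

import "fmt"

type ipam map[string]string // handleID -> assigned address (hypothetical shape)

func (p ipam) release(handleID string) {
	if _, ok := p[handleID]; !ok {
		// Mirrors: "Asked to release address but it doesn't exist. Ignoring"
		fmt.Printf("WARNING: no address for %q, ignoring\n", handleID)
		return
	}
	delete(p, handleID)
}

func main() {
	p := ipam{}
	// The ADD never recorded an IP, so the DEL releases an unknown handle
	// and must not fail; teardown then reports success, as in the log.
	p.release("k8s-pod-network.example-handle") // hypothetical handle ID
}
```
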
Feb 13 19:59:27.547915 systemd-networkd[1240]: cali01d70187a4a: Link UP Feb 13 19:59:27.549069 systemd-networkd[1240]: cali01d70187a4a: Gained carrier Feb 13 19:59:27.671876 containerd[1561]: 2025-02-13 19:59:27.466 [INFO][2851] cni-plugin/plugin.go 325: Calico CNI found existing endpoint: &{{WorkloadEndpoint projectcalico.org/v3} {10.0.0.102-k8s-csi--node--driver--2d48g-eth0 csi-node-driver- calico-system c7cccdaa-623e-4b78-b7f9-71b591d49e20 1104 0 2025-02-13 19:58:51 +0000 UTC map[app.kubernetes.io/name:csi-node-driver controller-revision-hash:65bf684474 k8s-app:csi-node-driver name:csi-node-driver pod-template-generation:1 projectcalico.org/namespace:calico-system projectcalico.org/orchestrator:k8s projectcalico.org/serviceaccount:csi-node-driver] map[] [] [] []} {k8s 10.0.0.102 csi-node-driver-2d48g eth0 csi-node-driver [] [] [kns.calico-system ksa.calico-system.csi-node-driver] cali01d70187a4a [] []}} ContainerID="742b6efeaa6c16c9a2ce63af7073ace12c0d1201d61f1d3d9c4958fafde6f229" Namespace="calico-system" Pod="csi-node-driver-2d48g" WorkloadEndpoint="10.0.0.102-k8s-csi--node--driver--2d48g-" Feb 13 19:59:27.671876 containerd[1561]: 2025-02-13 19:59:27.466 [INFO][2851] cni-plugin/k8s.go 77: Extracted identifiers for CmdAddK8s ContainerID="742b6efeaa6c16c9a2ce63af7073ace12c0d1201d61f1d3d9c4958fafde6f229" Namespace="calico-system" Pod="csi-node-driver-2d48g" WorkloadEndpoint="10.0.0.102-k8s-csi--node--driver--2d48g-eth0" Feb 13 19:59:27.671876 containerd[1561]: 2025-02-13 19:59:27.494 [INFO][2864] ipam/ipam_plugin.go 225: Calico CNI IPAM request count IPv4=1 IPv6=0 ContainerID="742b6efeaa6c16c9a2ce63af7073ace12c0d1201d61f1d3d9c4958fafde6f229" HandleID="k8s-pod-network.742b6efeaa6c16c9a2ce63af7073ace12c0d1201d61f1d3d9c4958fafde6f229" Workload="10.0.0.102-k8s-csi--node--driver--2d48g-eth0" Feb 13 19:59:27.671876 containerd[1561]: 2025-02-13 19:59:27.503 [INFO][2864] ipam/ipam_plugin.go 265: Auto assigning IP ContainerID="742b6efeaa6c16c9a2ce63af7073ace12c0d1201d61f1d3d9c4958fafde6f229" HandleID="k8s-pod-network.742b6efeaa6c16c9a2ce63af7073ace12c0d1201d61f1d3d9c4958fafde6f229" Workload="10.0.0.102-k8s-csi--node--driver--2d48g-eth0" assignArgs=ipam.AutoAssignArgs{Num4:1, Num6:0, HandleID:(*string)(0xc0003040d0), Attrs:map[string]string{"namespace":"calico-system", "node":"10.0.0.102", "pod":"csi-node-driver-2d48g", "timestamp":"2025-02-13 19:59:27.494077543 +0000 UTC"}, Hostname:"10.0.0.102", IPv4Pools:[]net.IPNet{}, IPv6Pools:[]net.IPNet{}, MaxBlocksPerHost:0, HostReservedAttrIPv4s:(*ipam.HostReservedAttr)(nil), HostReservedAttrIPv6s:(*ipam.HostReservedAttr)(nil), IntendedUse:"Workload"} Feb 13 19:59:27.671876 containerd[1561]: 2025-02-13 19:59:27.504 [INFO][2864] ipam/ipam_plugin.go 353: About to acquire host-wide IPAM lock. Feb 13 19:59:27.671876 containerd[1561]: 2025-02-13 19:59:27.504 [INFO][2864] ipam/ipam_plugin.go 368: Acquired host-wide IPAM lock. 
Feb 13 19:59:27.671876 containerd[1561]: 2025-02-13 19:59:27.504 [INFO][2864] ipam/ipam.go 107: Auto-assign 1 ipv4, 0 ipv6 addrs for host '10.0.0.102' Feb 13 19:59:27.671876 containerd[1561]: 2025-02-13 19:59:27.507 [INFO][2864] ipam/ipam.go 660: Looking up existing affinities for host handle="k8s-pod-network.742b6efeaa6c16c9a2ce63af7073ace12c0d1201d61f1d3d9c4958fafde6f229" host="10.0.0.102" Feb 13 19:59:27.671876 containerd[1561]: 2025-02-13 19:59:27.512 [INFO][2864] ipam/ipam.go 372: Looking up existing affinities for host host="10.0.0.102" Feb 13 19:59:27.671876 containerd[1561]: 2025-02-13 19:59:27.516 [INFO][2864] ipam/ipam.go 489: Trying affinity for 192.168.54.192/26 host="10.0.0.102" Feb 13 19:59:27.671876 containerd[1561]: 2025-02-13 19:59:27.518 [INFO][2864] ipam/ipam.go 155: Attempting to load block cidr=192.168.54.192/26 host="10.0.0.102" Feb 13 19:59:27.671876 containerd[1561]: 2025-02-13 19:59:27.521 [INFO][2864] ipam/ipam.go 232: Affinity is confirmed and block has been loaded cidr=192.168.54.192/26 host="10.0.0.102" Feb 13 19:59:27.671876 containerd[1561]: 2025-02-13 19:59:27.521 [INFO][2864] ipam/ipam.go 1180: Attempting to assign 1 addresses from block block=192.168.54.192/26 handle="k8s-pod-network.742b6efeaa6c16c9a2ce63af7073ace12c0d1201d61f1d3d9c4958fafde6f229" host="10.0.0.102" Feb 13 19:59:27.671876 containerd[1561]: 2025-02-13 19:59:27.523 [INFO][2864] ipam/ipam.go 1685: Creating new handle: k8s-pod-network.742b6efeaa6c16c9a2ce63af7073ace12c0d1201d61f1d3d9c4958fafde6f229 Feb 13 19:59:27.671876 containerd[1561]: 2025-02-13 19:59:27.527 [INFO][2864] ipam/ipam.go 1203: Writing block in order to claim IPs block=192.168.54.192/26 handle="k8s-pod-network.742b6efeaa6c16c9a2ce63af7073ace12c0d1201d61f1d3d9c4958fafde6f229" host="10.0.0.102" Feb 13 19:59:27.671876 containerd[1561]: 2025-02-13 19:59:27.542 [INFO][2864] ipam/ipam.go 1216: Successfully claimed IPs: [192.168.54.193/26] block=192.168.54.192/26 handle="k8s-pod-network.742b6efeaa6c16c9a2ce63af7073ace12c0d1201d61f1d3d9c4958fafde6f229" host="10.0.0.102" Feb 13 19:59:27.671876 containerd[1561]: 2025-02-13 19:59:27.542 [INFO][2864] ipam/ipam.go 847: Auto-assigned 1 out of 1 IPv4s: [192.168.54.193/26] handle="k8s-pod-network.742b6efeaa6c16c9a2ce63af7073ace12c0d1201d61f1d3d9c4958fafde6f229" host="10.0.0.102" Feb 13 19:59:27.671876 containerd[1561]: 2025-02-13 19:59:27.542 [INFO][2864] ipam/ipam_plugin.go 374: Released host-wide IPAM lock. 
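
The IPAM sequence above is a block-affinity walk: host 10.0.0.102 confirms its affinity for 192.168.54.192/26, loads the block, and claims the first free address in it, 192.168.54.193 (the nginx pod receives .194 by the same procedure further down). A sketch of just the address-selection arithmetic, assuming only that the block's base address is not handed out; Calico's real allocator also consults handles, reservations, and the datastore under the host-wide lock seen in the log.

```go
// Sketch of first-free-address selection inside the affine /26 block.
// The prefix and the result (.193) match the log; the bookkeeping is toy.
package main

import (
	"fmt"
	"net/netip"
)

func main() {
	block := netip.MustParsePrefix("192.168.54.192/26")
	allocated := map[netip.Addr]bool{block.Addr(): true} // base .192 treated as unavailable
	for a := block.Addr(); block.Contains(a); a = a.Next() {
		if !allocated[a] {
			fmt.Println("assign:", a) // 192.168.54.193, the csi-node-driver pod's IP
			return
		}
	}
}
```
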
Feb 13 19:59:27.671876 containerd[1561]: 2025-02-13 19:59:27.542 [INFO][2864] ipam/ipam_plugin.go 283: Calico CNI IPAM assigned addresses IPv4=[192.168.54.193/26] IPv6=[] ContainerID="742b6efeaa6c16c9a2ce63af7073ace12c0d1201d61f1d3d9c4958fafde6f229" HandleID="k8s-pod-network.742b6efeaa6c16c9a2ce63af7073ace12c0d1201d61f1d3d9c4958fafde6f229" Workload="10.0.0.102-k8s-csi--node--driver--2d48g-eth0" Feb 13 19:59:27.672722 containerd[1561]: 2025-02-13 19:59:27.545 [INFO][2851] cni-plugin/k8s.go 386: Populated endpoint ContainerID="742b6efeaa6c16c9a2ce63af7073ace12c0d1201d61f1d3d9c4958fafde6f229" Namespace="calico-system" Pod="csi-node-driver-2d48g" WorkloadEndpoint="10.0.0.102-k8s-csi--node--driver--2d48g-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"10.0.0.102-k8s-csi--node--driver--2d48g-eth0", GenerateName:"csi-node-driver-", Namespace:"calico-system", SelfLink:"", UID:"c7cccdaa-623e-4b78-b7f9-71b591d49e20", ResourceVersion:"1104", Generation:0, CreationTimestamp:time.Date(2025, time.February, 13, 19, 58, 51, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"app.kubernetes.io/name":"csi-node-driver", "controller-revision-hash":"65bf684474", "k8s-app":"csi-node-driver", "name":"csi-node-driver", "pod-template-generation":"1", "projectcalico.org/namespace":"calico-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"csi-node-driver"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"10.0.0.102", ContainerID:"", Pod:"csi-node-driver-2d48g", Endpoint:"eth0", ServiceAccountName:"csi-node-driver", IPNetworks:[]string{"192.168.54.193/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-system", "ksa.calico-system.csi-node-driver"}, InterfaceName:"cali01d70187a4a", MAC:"", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil)}} Feb 13 19:59:27.672722 containerd[1561]: 2025-02-13 19:59:27.545 [INFO][2851] cni-plugin/k8s.go 387: Calico CNI using IPs: [192.168.54.193/32] ContainerID="742b6efeaa6c16c9a2ce63af7073ace12c0d1201d61f1d3d9c4958fafde6f229" Namespace="calico-system" Pod="csi-node-driver-2d48g" WorkloadEndpoint="10.0.0.102-k8s-csi--node--driver--2d48g-eth0" Feb 13 19:59:27.672722 containerd[1561]: 2025-02-13 19:59:27.545 [INFO][2851] cni-plugin/dataplane_linux.go 69: Setting the host side veth name to cali01d70187a4a ContainerID="742b6efeaa6c16c9a2ce63af7073ace12c0d1201d61f1d3d9c4958fafde6f229" Namespace="calico-system" Pod="csi-node-driver-2d48g" WorkloadEndpoint="10.0.0.102-k8s-csi--node--driver--2d48g-eth0" Feb 13 19:59:27.672722 containerd[1561]: 2025-02-13 19:59:27.549 [INFO][2851] cni-plugin/dataplane_linux.go 508: Disabling IPv4 forwarding ContainerID="742b6efeaa6c16c9a2ce63af7073ace12c0d1201d61f1d3d9c4958fafde6f229" Namespace="calico-system" Pod="csi-node-driver-2d48g" WorkloadEndpoint="10.0.0.102-k8s-csi--node--driver--2d48g-eth0" Feb 13 19:59:27.672722 containerd[1561]: 2025-02-13 19:59:27.549 [INFO][2851] cni-plugin/k8s.go 414: Added Mac, interface name, and active container ID to endpoint ContainerID="742b6efeaa6c16c9a2ce63af7073ace12c0d1201d61f1d3d9c4958fafde6f229" Namespace="calico-system" Pod="csi-node-driver-2d48g" 
WorkloadEndpoint="10.0.0.102-k8s-csi--node--driver--2d48g-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"10.0.0.102-k8s-csi--node--driver--2d48g-eth0", GenerateName:"csi-node-driver-", Namespace:"calico-system", SelfLink:"", UID:"c7cccdaa-623e-4b78-b7f9-71b591d49e20", ResourceVersion:"1104", Generation:0, CreationTimestamp:time.Date(2025, time.February, 13, 19, 58, 51, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"app.kubernetes.io/name":"csi-node-driver", "controller-revision-hash":"65bf684474", "k8s-app":"csi-node-driver", "name":"csi-node-driver", "pod-template-generation":"1", "projectcalico.org/namespace":"calico-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"csi-node-driver"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"10.0.0.102", ContainerID:"742b6efeaa6c16c9a2ce63af7073ace12c0d1201d61f1d3d9c4958fafde6f229", Pod:"csi-node-driver-2d48g", Endpoint:"eth0", ServiceAccountName:"csi-node-driver", IPNetworks:[]string{"192.168.54.193/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-system", "ksa.calico-system.csi-node-driver"}, InterfaceName:"cali01d70187a4a", MAC:"8e:bc:fc:35:30:60", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil)}} Feb 13 19:59:27.672722 containerd[1561]: 2025-02-13 19:59:27.669 [INFO][2851] cni-plugin/k8s.go 500: Wrote updated endpoint to datastore ContainerID="742b6efeaa6c16c9a2ce63af7073ace12c0d1201d61f1d3d9c4958fafde6f229" Namespace="calico-system" Pod="csi-node-driver-2d48g" WorkloadEndpoint="10.0.0.102-k8s-csi--node--driver--2d48g-eth0" Feb 13 19:59:27.728510 containerd[1561]: time="2025-02-13T19:59:27.728180991Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Feb 13 19:59:27.728510 containerd[1561]: time="2025-02-13T19:59:27.728278025Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Feb 13 19:59:27.728510 containerd[1561]: time="2025-02-13T19:59:27.728293063Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Feb 13 19:59:27.728510 containerd[1561]: time="2025-02-13T19:59:27.728425765Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." 
runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Feb 13 19:59:27.758365 systemd-resolved[1455]: Failed to determine the local hostname and LLMNR/mDNS names, ignoring: No such device or address Feb 13 19:59:27.772047 containerd[1561]: time="2025-02-13T19:59:27.771980992Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:csi-node-driver-2d48g,Uid:c7cccdaa-623e-4b78-b7f9-71b591d49e20,Namespace:calico-system,Attempt:1,} returns sandbox id \"742b6efeaa6c16c9a2ce63af7073ace12c0d1201d61f1d3d9c4958fafde6f229\"" Feb 13 19:59:27.773892 containerd[1561]: time="2025-02-13T19:59:27.773836803Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/csi:v3.29.1\"" Feb 13 19:59:28.024378 kubelet[1901]: E0213 19:59:28.024301 1901 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Feb 13 19:59:28.081693 containerd[1561]: time="2025-02-13T19:59:28.081625705Z" level=info msg="StopPodSandbox for \"db29ef37f428b422ce2f19bf8be18b3d8ba6fa38a70fb16f37d683d1eb537828\"" Feb 13 19:59:28.566942 systemd-networkd[1240]: cali01d70187a4a: Gained IPv6LL Feb 13 19:59:28.797719 containerd[1561]: 2025-02-13 19:59:28.186 [INFO][2944] cni-plugin/k8s.go 608: Cleaning up netns ContainerID="db29ef37f428b422ce2f19bf8be18b3d8ba6fa38a70fb16f37d683d1eb537828" Feb 13 19:59:28.797719 containerd[1561]: 2025-02-13 19:59:28.187 [INFO][2944] cni-plugin/dataplane_linux.go 559: Deleting workload's device in netns. ContainerID="db29ef37f428b422ce2f19bf8be18b3d8ba6fa38a70fb16f37d683d1eb537828" iface="eth0" netns="/var/run/netns/cni-fc58c8d8-81aa-6d64-228d-6bf1df3a3f0e" Feb 13 19:59:28.797719 containerd[1561]: 2025-02-13 19:59:28.187 [INFO][2944] cni-plugin/dataplane_linux.go 570: Entered netns, deleting veth. ContainerID="db29ef37f428b422ce2f19bf8be18b3d8ba6fa38a70fb16f37d683d1eb537828" iface="eth0" netns="/var/run/netns/cni-fc58c8d8-81aa-6d64-228d-6bf1df3a3f0e" Feb 13 19:59:28.797719 containerd[1561]: 2025-02-13 19:59:28.187 [INFO][2944] cni-plugin/dataplane_linux.go 597: Workload's veth was already gone. Nothing to do. ContainerID="db29ef37f428b422ce2f19bf8be18b3d8ba6fa38a70fb16f37d683d1eb537828" iface="eth0" netns="/var/run/netns/cni-fc58c8d8-81aa-6d64-228d-6bf1df3a3f0e" Feb 13 19:59:28.797719 containerd[1561]: 2025-02-13 19:59:28.187 [INFO][2944] cni-plugin/k8s.go 615: Releasing IP address(es) ContainerID="db29ef37f428b422ce2f19bf8be18b3d8ba6fa38a70fb16f37d683d1eb537828" Feb 13 19:59:28.797719 containerd[1561]: 2025-02-13 19:59:28.187 [INFO][2944] cni-plugin/utils.go 188: Calico CNI releasing IP address ContainerID="db29ef37f428b422ce2f19bf8be18b3d8ba6fa38a70fb16f37d683d1eb537828" Feb 13 19:59:28.797719 containerd[1561]: 2025-02-13 19:59:28.208 [INFO][2951] ipam/ipam_plugin.go 412: Releasing address using handleID ContainerID="db29ef37f428b422ce2f19bf8be18b3d8ba6fa38a70fb16f37d683d1eb537828" HandleID="k8s-pod-network.db29ef37f428b422ce2f19bf8be18b3d8ba6fa38a70fb16f37d683d1eb537828" Workload="10.0.0.102-k8s-nginx--deployment--85f456d6dd--gl9h2-eth0" Feb 13 19:59:28.797719 containerd[1561]: 2025-02-13 19:59:28.208 [INFO][2951] ipam/ipam_plugin.go 353: About to acquire host-wide IPAM lock. Feb 13 19:59:28.797719 containerd[1561]: 2025-02-13 19:59:28.208 [INFO][2951] ipam/ipam_plugin.go 368: Acquired host-wide IPAM lock. Feb 13 19:59:28.797719 containerd[1561]: 2025-02-13 19:59:28.791 [WARNING][2951] ipam/ipam_plugin.go 429: Asked to release address but it doesn't exist. 
Ignoring ContainerID="db29ef37f428b422ce2f19bf8be18b3d8ba6fa38a70fb16f37d683d1eb537828" HandleID="k8s-pod-network.db29ef37f428b422ce2f19bf8be18b3d8ba6fa38a70fb16f37d683d1eb537828" Workload="10.0.0.102-k8s-nginx--deployment--85f456d6dd--gl9h2-eth0" Feb 13 19:59:28.797719 containerd[1561]: 2025-02-13 19:59:28.791 [INFO][2951] ipam/ipam_plugin.go 440: Releasing address using workloadID ContainerID="db29ef37f428b422ce2f19bf8be18b3d8ba6fa38a70fb16f37d683d1eb537828" HandleID="k8s-pod-network.db29ef37f428b422ce2f19bf8be18b3d8ba6fa38a70fb16f37d683d1eb537828" Workload="10.0.0.102-k8s-nginx--deployment--85f456d6dd--gl9h2-eth0" Feb 13 19:59:28.797719 containerd[1561]: 2025-02-13 19:59:28.793 [INFO][2951] ipam/ipam_plugin.go 374: Released host-wide IPAM lock. Feb 13 19:59:28.797719 containerd[1561]: 2025-02-13 19:59:28.795 [INFO][2944] cni-plugin/k8s.go 621: Teardown processing complete. ContainerID="db29ef37f428b422ce2f19bf8be18b3d8ba6fa38a70fb16f37d683d1eb537828" Feb 13 19:59:28.798139 containerd[1561]: time="2025-02-13T19:59:28.797934293Z" level=info msg="TearDown network for sandbox \"db29ef37f428b422ce2f19bf8be18b3d8ba6fa38a70fb16f37d683d1eb537828\" successfully" Feb 13 19:59:28.798139 containerd[1561]: time="2025-02-13T19:59:28.797963278Z" level=info msg="StopPodSandbox for \"db29ef37f428b422ce2f19bf8be18b3d8ba6fa38a70fb16f37d683d1eb537828\" returns successfully" Feb 13 19:59:28.798628 containerd[1561]: time="2025-02-13T19:59:28.798603492Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:nginx-deployment-85f456d6dd-gl9h2,Uid:565a1071-1bcc-4669-b9a9-4e955e40c368,Namespace:default,Attempt:1,}" Feb 13 19:59:28.800549 systemd[1]: run-netns-cni\x2dfc58c8d8\x2d81aa\x2d6d64\x2d228d\x2d6bf1df3a3f0e.mount: Deactivated successfully. Feb 13 19:59:29.025224 kubelet[1901]: E0213 19:59:29.025157 1901 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Feb 13 19:59:29.424695 systemd-networkd[1240]: cali237bfe8e030: Link UP Feb 13 19:59:29.425408 systemd-networkd[1240]: cali237bfe8e030: Gained carrier Feb 13 19:59:29.621632 containerd[1561]: 2025-02-13 19:59:29.194 [INFO][2965] cni-plugin/plugin.go 325: Calico CNI found existing endpoint: &{{WorkloadEndpoint projectcalico.org/v3} {10.0.0.102-k8s-nginx--deployment--85f456d6dd--gl9h2-eth0 nginx-deployment-85f456d6dd- default 565a1071-1bcc-4669-b9a9-4e955e40c368 1111 0 2025-02-13 19:59:16 +0000 UTC map[app:nginx pod-template-hash:85f456d6dd projectcalico.org/namespace:default projectcalico.org/orchestrator:k8s projectcalico.org/serviceaccount:default] map[] [] [] []} {k8s 10.0.0.102 nginx-deployment-85f456d6dd-gl9h2 eth0 default [] [] [kns.default ksa.default.default] cali237bfe8e030 [] []}} ContainerID="70aea53b3a965186ff7807502c909e3be7cc21f29e320a9e56fe0ec9afd0493b" Namespace="default" Pod="nginx-deployment-85f456d6dd-gl9h2" WorkloadEndpoint="10.0.0.102-k8s-nginx--deployment--85f456d6dd--gl9h2-" Feb 13 19:59:29.621632 containerd[1561]: 2025-02-13 19:59:29.194 [INFO][2965] cni-plugin/k8s.go 77: Extracted identifiers for CmdAddK8s ContainerID="70aea53b3a965186ff7807502c909e3be7cc21f29e320a9e56fe0ec9afd0493b" Namespace="default" Pod="nginx-deployment-85f456d6dd-gl9h2" WorkloadEndpoint="10.0.0.102-k8s-nginx--deployment--85f456d6dd--gl9h2-eth0" Feb 13 19:59:29.621632 containerd[1561]: 2025-02-13 19:59:29.219 [INFO][2978] ipam/ipam_plugin.go 225: Calico CNI IPAM request count IPv4=1 IPv6=0 ContainerID="70aea53b3a965186ff7807502c909e3be7cc21f29e320a9e56fe0ec9afd0493b" 
HandleID="k8s-pod-network.70aea53b3a965186ff7807502c909e3be7cc21f29e320a9e56fe0ec9afd0493b" Workload="10.0.0.102-k8s-nginx--deployment--85f456d6dd--gl9h2-eth0" Feb 13 19:59:29.621632 containerd[1561]: 2025-02-13 19:59:29.228 [INFO][2978] ipam/ipam_plugin.go 265: Auto assigning IP ContainerID="70aea53b3a965186ff7807502c909e3be7cc21f29e320a9e56fe0ec9afd0493b" HandleID="k8s-pod-network.70aea53b3a965186ff7807502c909e3be7cc21f29e320a9e56fe0ec9afd0493b" Workload="10.0.0.102-k8s-nginx--deployment--85f456d6dd--gl9h2-eth0" assignArgs=ipam.AutoAssignArgs{Num4:1, Num6:0, HandleID:(*string)(0xc0002f40b0), Attrs:map[string]string{"namespace":"default", "node":"10.0.0.102", "pod":"nginx-deployment-85f456d6dd-gl9h2", "timestamp":"2025-02-13 19:59:29.219711306 +0000 UTC"}, Hostname:"10.0.0.102", IPv4Pools:[]net.IPNet{}, IPv6Pools:[]net.IPNet{}, MaxBlocksPerHost:0, HostReservedAttrIPv4s:(*ipam.HostReservedAttr)(nil), HostReservedAttrIPv6s:(*ipam.HostReservedAttr)(nil), IntendedUse:"Workload"} Feb 13 19:59:29.621632 containerd[1561]: 2025-02-13 19:59:29.228 [INFO][2978] ipam/ipam_plugin.go 353: About to acquire host-wide IPAM lock. Feb 13 19:59:29.621632 containerd[1561]: 2025-02-13 19:59:29.229 [INFO][2978] ipam/ipam_plugin.go 368: Acquired host-wide IPAM lock. Feb 13 19:59:29.621632 containerd[1561]: 2025-02-13 19:59:29.229 [INFO][2978] ipam/ipam.go 107: Auto-assign 1 ipv4, 0 ipv6 addrs for host '10.0.0.102' Feb 13 19:59:29.621632 containerd[1561]: 2025-02-13 19:59:29.231 [INFO][2978] ipam/ipam.go 660: Looking up existing affinities for host handle="k8s-pod-network.70aea53b3a965186ff7807502c909e3be7cc21f29e320a9e56fe0ec9afd0493b" host="10.0.0.102" Feb 13 19:59:29.621632 containerd[1561]: 2025-02-13 19:59:29.235 [INFO][2978] ipam/ipam.go 372: Looking up existing affinities for host host="10.0.0.102" Feb 13 19:59:29.621632 containerd[1561]: 2025-02-13 19:59:29.241 [INFO][2978] ipam/ipam.go 489: Trying affinity for 192.168.54.192/26 host="10.0.0.102" Feb 13 19:59:29.621632 containerd[1561]: 2025-02-13 19:59:29.243 [INFO][2978] ipam/ipam.go 155: Attempting to load block cidr=192.168.54.192/26 host="10.0.0.102" Feb 13 19:59:29.621632 containerd[1561]: 2025-02-13 19:59:29.245 [INFO][2978] ipam/ipam.go 232: Affinity is confirmed and block has been loaded cidr=192.168.54.192/26 host="10.0.0.102" Feb 13 19:59:29.621632 containerd[1561]: 2025-02-13 19:59:29.245 [INFO][2978] ipam/ipam.go 1180: Attempting to assign 1 addresses from block block=192.168.54.192/26 handle="k8s-pod-network.70aea53b3a965186ff7807502c909e3be7cc21f29e320a9e56fe0ec9afd0493b" host="10.0.0.102" Feb 13 19:59:29.621632 containerd[1561]: 2025-02-13 19:59:29.247 [INFO][2978] ipam/ipam.go 1685: Creating new handle: k8s-pod-network.70aea53b3a965186ff7807502c909e3be7cc21f29e320a9e56fe0ec9afd0493b Feb 13 19:59:29.621632 containerd[1561]: 2025-02-13 19:59:29.389 [INFO][2978] ipam/ipam.go 1203: Writing block in order to claim IPs block=192.168.54.192/26 handle="k8s-pod-network.70aea53b3a965186ff7807502c909e3be7cc21f29e320a9e56fe0ec9afd0493b" host="10.0.0.102" Feb 13 19:59:29.621632 containerd[1561]: 2025-02-13 19:59:29.419 [INFO][2978] ipam/ipam.go 1216: Successfully claimed IPs: [192.168.54.194/26] block=192.168.54.192/26 handle="k8s-pod-network.70aea53b3a965186ff7807502c909e3be7cc21f29e320a9e56fe0ec9afd0493b" host="10.0.0.102" Feb 13 19:59:29.621632 containerd[1561]: 2025-02-13 19:59:29.419 [INFO][2978] ipam/ipam.go 847: Auto-assigned 1 out of 1 IPv4s: [192.168.54.194/26] 
handle="k8s-pod-network.70aea53b3a965186ff7807502c909e3be7cc21f29e320a9e56fe0ec9afd0493b" host="10.0.0.102" Feb 13 19:59:29.621632 containerd[1561]: 2025-02-13 19:59:29.419 [INFO][2978] ipam/ipam_plugin.go 374: Released host-wide IPAM lock. Feb 13 19:59:29.621632 containerd[1561]: 2025-02-13 19:59:29.419 [INFO][2978] ipam/ipam_plugin.go 283: Calico CNI IPAM assigned addresses IPv4=[192.168.54.194/26] IPv6=[] ContainerID="70aea53b3a965186ff7807502c909e3be7cc21f29e320a9e56fe0ec9afd0493b" HandleID="k8s-pod-network.70aea53b3a965186ff7807502c909e3be7cc21f29e320a9e56fe0ec9afd0493b" Workload="10.0.0.102-k8s-nginx--deployment--85f456d6dd--gl9h2-eth0" Feb 13 19:59:29.622856 containerd[1561]: 2025-02-13 19:59:29.422 [INFO][2965] cni-plugin/k8s.go 386: Populated endpoint ContainerID="70aea53b3a965186ff7807502c909e3be7cc21f29e320a9e56fe0ec9afd0493b" Namespace="default" Pod="nginx-deployment-85f456d6dd-gl9h2" WorkloadEndpoint="10.0.0.102-k8s-nginx--deployment--85f456d6dd--gl9h2-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"10.0.0.102-k8s-nginx--deployment--85f456d6dd--gl9h2-eth0", GenerateName:"nginx-deployment-85f456d6dd-", Namespace:"default", SelfLink:"", UID:"565a1071-1bcc-4669-b9a9-4e955e40c368", ResourceVersion:"1111", Generation:0, CreationTimestamp:time.Date(2025, time.February, 13, 19, 59, 16, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"app":"nginx", "pod-template-hash":"85f456d6dd", "projectcalico.org/namespace":"default", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"default"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"10.0.0.102", ContainerID:"", Pod:"nginx-deployment-85f456d6dd-gl9h2", Endpoint:"eth0", ServiceAccountName:"default", IPNetworks:[]string{"192.168.54.194/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.default", "ksa.default.default"}, InterfaceName:"cali237bfe8e030", MAC:"", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil)}} Feb 13 19:59:29.622856 containerd[1561]: 2025-02-13 19:59:29.422 [INFO][2965] cni-plugin/k8s.go 387: Calico CNI using IPs: [192.168.54.194/32] ContainerID="70aea53b3a965186ff7807502c909e3be7cc21f29e320a9e56fe0ec9afd0493b" Namespace="default" Pod="nginx-deployment-85f456d6dd-gl9h2" WorkloadEndpoint="10.0.0.102-k8s-nginx--deployment--85f456d6dd--gl9h2-eth0" Feb 13 19:59:29.622856 containerd[1561]: 2025-02-13 19:59:29.422 [INFO][2965] cni-plugin/dataplane_linux.go 69: Setting the host side veth name to cali237bfe8e030 ContainerID="70aea53b3a965186ff7807502c909e3be7cc21f29e320a9e56fe0ec9afd0493b" Namespace="default" Pod="nginx-deployment-85f456d6dd-gl9h2" WorkloadEndpoint="10.0.0.102-k8s-nginx--deployment--85f456d6dd--gl9h2-eth0" Feb 13 19:59:29.622856 containerd[1561]: 2025-02-13 19:59:29.424 [INFO][2965] cni-plugin/dataplane_linux.go 508: Disabling IPv4 forwarding ContainerID="70aea53b3a965186ff7807502c909e3be7cc21f29e320a9e56fe0ec9afd0493b" Namespace="default" Pod="nginx-deployment-85f456d6dd-gl9h2" WorkloadEndpoint="10.0.0.102-k8s-nginx--deployment--85f456d6dd--gl9h2-eth0" Feb 13 19:59:29.622856 containerd[1561]: 2025-02-13 19:59:29.424 [INFO][2965] cni-plugin/k8s.go 414: Added Mac, interface name, and active container ID to 
endpoint ContainerID="70aea53b3a965186ff7807502c909e3be7cc21f29e320a9e56fe0ec9afd0493b" Namespace="default" Pod="nginx-deployment-85f456d6dd-gl9h2" WorkloadEndpoint="10.0.0.102-k8s-nginx--deployment--85f456d6dd--gl9h2-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"10.0.0.102-k8s-nginx--deployment--85f456d6dd--gl9h2-eth0", GenerateName:"nginx-deployment-85f456d6dd-", Namespace:"default", SelfLink:"", UID:"565a1071-1bcc-4669-b9a9-4e955e40c368", ResourceVersion:"1111", Generation:0, CreationTimestamp:time.Date(2025, time.February, 13, 19, 59, 16, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"app":"nginx", "pod-template-hash":"85f456d6dd", "projectcalico.org/namespace":"default", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"default"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"10.0.0.102", ContainerID:"70aea53b3a965186ff7807502c909e3be7cc21f29e320a9e56fe0ec9afd0493b", Pod:"nginx-deployment-85f456d6dd-gl9h2", Endpoint:"eth0", ServiceAccountName:"default", IPNetworks:[]string{"192.168.54.194/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.default", "ksa.default.default"}, InterfaceName:"cali237bfe8e030", MAC:"02:84:23:98:9d:52", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil)}} Feb 13 19:59:29.622856 containerd[1561]: 2025-02-13 19:59:29.618 [INFO][2965] cni-plugin/k8s.go 500: Wrote updated endpoint to datastore ContainerID="70aea53b3a965186ff7807502c909e3be7cc21f29e320a9e56fe0ec9afd0493b" Namespace="default" Pod="nginx-deployment-85f456d6dd-gl9h2" WorkloadEndpoint="10.0.0.102-k8s-nginx--deployment--85f456d6dd--gl9h2-eth0" Feb 13 19:59:29.708041 containerd[1561]: time="2025-02-13T19:59:29.707221287Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Feb 13 19:59:29.708041 containerd[1561]: time="2025-02-13T19:59:29.707991847Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Feb 13 19:59:29.708041 containerd[1561]: time="2025-02-13T19:59:29.708006504Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Feb 13 19:59:29.708255 containerd[1561]: time="2025-02-13T19:59:29.708122785Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." 
runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Feb 13 19:59:29.744578 systemd-resolved[1455]: Failed to determine the local hostname and LLMNR/mDNS names, ignoring: No such device or address Feb 13 19:59:29.775673 containerd[1561]: time="2025-02-13T19:59:29.775618345Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:nginx-deployment-85f456d6dd-gl9h2,Uid:565a1071-1bcc-4669-b9a9-4e955e40c368,Namespace:default,Attempt:1,} returns sandbox id \"70aea53b3a965186ff7807502c909e3be7cc21f29e320a9e56fe0ec9afd0493b\"" Feb 13 19:59:30.026290 kubelet[1901]: E0213 19:59:30.026120 1901 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Feb 13 19:59:30.935074 systemd-networkd[1240]: cali237bfe8e030: Gained IPv6LL Feb 13 19:59:31.000614 kubelet[1901]: E0213 19:59:31.000551 1901 file.go:104] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Feb 13 19:59:31.026987 kubelet[1901]: E0213 19:59:31.026915 1901 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Feb 13 19:59:31.068690 containerd[1561]: time="2025-02-13T19:59:31.068615246Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/csi:v3.29.1\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Feb 13 19:59:31.069695 containerd[1561]: time="2025-02-13T19:59:31.069636367Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/csi:v3.29.1: active requests=0, bytes read=7902632" Feb 13 19:59:31.071346 containerd[1561]: time="2025-02-13T19:59:31.071304082Z" level=info msg="ImageCreate event name:\"sha256:bda8c42e04758c4f061339e213f50ccdc7502c4176fbf631aa12357e62b63540\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Feb 13 19:59:31.074086 containerd[1561]: time="2025-02-13T19:59:31.074027454Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/csi@sha256:eaa7e01fb16b603c155a67b81f16992281db7f831684c7b2081d3434587a7ff3\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Feb 13 19:59:31.074531 containerd[1561]: time="2025-02-13T19:59:31.074499167Z" level=info msg="Pulled image \"ghcr.io/flatcar/calico/csi:v3.29.1\" with image id \"sha256:bda8c42e04758c4f061339e213f50ccdc7502c4176fbf631aa12357e62b63540\", repo tag \"ghcr.io/flatcar/calico/csi:v3.29.1\", repo digest \"ghcr.io/flatcar/calico/csi@sha256:eaa7e01fb16b603c155a67b81f16992281db7f831684c7b2081d3434587a7ff3\", size \"9395716\" in 3.300626014s" Feb 13 19:59:31.074531 containerd[1561]: time="2025-02-13T19:59:31.074527741Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/csi:v3.29.1\" returns image reference \"sha256:bda8c42e04758c4f061339e213f50ccdc7502c4176fbf631aa12357e62b63540\"" Feb 13 19:59:31.075518 containerd[1561]: time="2025-02-13T19:59:31.075490472Z" level=info msg="PullImage \"ghcr.io/flatcar/nginx:latest\"" Feb 13 19:59:31.076837 containerd[1561]: time="2025-02-13T19:59:31.076802464Z" level=info msg="CreateContainer within sandbox \"742b6efeaa6c16c9a2ce63af7073ace12c0d1201d61f1d3d9c4958fafde6f229\" for container &ContainerMetadata{Name:calico-csi,Attempt:0,}" Feb 13 19:59:31.094695 containerd[1561]: time="2025-02-13T19:59:31.094627478Z" level=info msg="CreateContainer within sandbox \"742b6efeaa6c16c9a2ce63af7073ace12c0d1201d61f1d3d9c4958fafde6f229\" for &ContainerMetadata{Name:calico-csi,Attempt:0,} returns container id \"90a6ffe568bb443088aa1af010337f3ccbb2fe6be46af0ca592efb3e480da675\"" Feb 13 19:59:31.095367 containerd[1561]: 
time="2025-02-13T19:59:31.095256527Z" level=info msg="StartContainer for \"90a6ffe568bb443088aa1af010337f3ccbb2fe6be46af0ca592efb3e480da675\"" Feb 13 19:59:31.161574 containerd[1561]: time="2025-02-13T19:59:31.161506053Z" level=info msg="StartContainer for \"90a6ffe568bb443088aa1af010337f3ccbb2fe6be46af0ca592efb3e480da675\" returns successfully" Feb 13 19:59:32.027185 kubelet[1901]: E0213 19:59:32.027120 1901 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Feb 13 19:59:33.027725 kubelet[1901]: E0213 19:59:33.027651 1901 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Feb 13 19:59:34.026652 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount1743165186.mount: Deactivated successfully. Feb 13 19:59:34.028693 kubelet[1901]: E0213 19:59:34.028632 1901 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Feb 13 19:59:35.029190 kubelet[1901]: E0213 19:59:35.029116 1901 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Feb 13 19:59:35.750959 containerd[1561]: time="2025-02-13T19:59:35.750888396Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/nginx:latest\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Feb 13 19:59:35.751975 containerd[1561]: time="2025-02-13T19:59:35.751914794Z" level=info msg="stop pulling image ghcr.io/flatcar/nginx:latest: active requests=0, bytes read=73054493" Feb 13 19:59:35.753303 containerd[1561]: time="2025-02-13T19:59:35.753267257Z" level=info msg="ImageCreate event name:\"sha256:fe94eb5f0c9c8d0ca277aa8cd5940f1faf5970175bf373932babc578545deda8\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Feb 13 19:59:35.758545 containerd[1561]: time="2025-02-13T19:59:35.758475791Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/nginx@sha256:d9bc3da999da9f147f1277c7b18292486847e8f39f95fcf81d914d0c22815faf\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Feb 13 19:59:35.759398 containerd[1561]: time="2025-02-13T19:59:35.759342608Z" level=info msg="Pulled image \"ghcr.io/flatcar/nginx:latest\" with image id \"sha256:fe94eb5f0c9c8d0ca277aa8cd5940f1faf5970175bf373932babc578545deda8\", repo tag \"ghcr.io/flatcar/nginx:latest\", repo digest \"ghcr.io/flatcar/nginx@sha256:d9bc3da999da9f147f1277c7b18292486847e8f39f95fcf81d914d0c22815faf\", size \"73054371\" in 4.683813083s" Feb 13 19:59:35.759446 containerd[1561]: time="2025-02-13T19:59:35.759403804Z" level=info msg="PullImage \"ghcr.io/flatcar/nginx:latest\" returns image reference \"sha256:fe94eb5f0c9c8d0ca277aa8cd5940f1faf5970175bf373932babc578545deda8\"" Feb 13 19:59:35.760803 containerd[1561]: time="2025-02-13T19:59:35.760764803Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/node-driver-registrar:v3.29.1\"" Feb 13 19:59:35.761941 containerd[1561]: time="2025-02-13T19:59:35.761905246Z" level=info msg="CreateContainer within sandbox \"70aea53b3a965186ff7807502c909e3be7cc21f29e320a9e56fe0ec9afd0493b\" for container &ContainerMetadata{Name:nginx,Attempt:0,}" Feb 13 19:59:35.775599 containerd[1561]: time="2025-02-13T19:59:35.775538727Z" level=info msg="CreateContainer within sandbox \"70aea53b3a965186ff7807502c909e3be7cc21f29e320a9e56fe0ec9afd0493b\" for &ContainerMetadata{Name:nginx,Attempt:0,} returns container id \"acd43eab527f8f2ab5639ed79271570fcdcb18f0591e2f96c9a8b72e8f79c54a\"" Feb 13 19:59:35.776034 
systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount1106732784.mount: Deactivated successfully. Feb 13 19:59:35.776486 containerd[1561]: time="2025-02-13T19:59:35.776159509Z" level=info msg="StartContainer for \"acd43eab527f8f2ab5639ed79271570fcdcb18f0591e2f96c9a8b72e8f79c54a\"" Feb 13 19:59:35.983321 containerd[1561]: time="2025-02-13T19:59:35.983240074Z" level=info msg="StartContainer for \"acd43eab527f8f2ab5639ed79271570fcdcb18f0591e2f96c9a8b72e8f79c54a\" returns successfully" Feb 13 19:59:36.029932 kubelet[1901]: E0213 19:59:36.029659 1901 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Feb 13 19:59:37.030102 kubelet[1901]: E0213 19:59:37.030046 1901 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Feb 13 19:59:38.030878 kubelet[1901]: E0213 19:59:38.030799 1901 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Feb 13 19:59:38.654923 containerd[1561]: time="2025-02-13T19:59:38.654854208Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/node-driver-registrar:v3.29.1\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Feb 13 19:59:38.678787 containerd[1561]: time="2025-02-13T19:59:38.678722307Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/node-driver-registrar:v3.29.1: active requests=0, bytes read=10501081" Feb 13 19:59:38.765172 containerd[1561]: time="2025-02-13T19:59:38.765108138Z" level=info msg="ImageCreate event name:\"sha256:8b7d18f262d5cf6a6343578ad0db68a140c4c9989d9e02c58c27cb5d2c70320f\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Feb 13 19:59:38.875428 containerd[1561]: time="2025-02-13T19:59:38.875351148Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/node-driver-registrar@sha256:a338da9488cbaa83c78457c3d7354d84149969c0480e88dd768e036632ff5b76\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Feb 13 19:59:38.876085 containerd[1561]: time="2025-02-13T19:59:38.876030209Z" level=info msg="Pulled image \"ghcr.io/flatcar/calico/node-driver-registrar:v3.29.1\" with image id \"sha256:8b7d18f262d5cf6a6343578ad0db68a140c4c9989d9e02c58c27cb5d2c70320f\", repo tag \"ghcr.io/flatcar/calico/node-driver-registrar:v3.29.1\", repo digest \"ghcr.io/flatcar/calico/node-driver-registrar@sha256:a338da9488cbaa83c78457c3d7354d84149969c0480e88dd768e036632ff5b76\", size \"11994117\" in 3.115227664s" Feb 13 19:59:38.876085 containerd[1561]: time="2025-02-13T19:59:38.876083940Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/node-driver-registrar:v3.29.1\" returns image reference \"sha256:8b7d18f262d5cf6a6343578ad0db68a140c4c9989d9e02c58c27cb5d2c70320f\"" Feb 13 19:59:38.878437 containerd[1561]: time="2025-02-13T19:59:38.878367567Z" level=info msg="CreateContainer within sandbox \"742b6efeaa6c16c9a2ce63af7073ace12c0d1201d61f1d3d9c4958fafde6f229\" for container &ContainerMetadata{Name:csi-node-driver-registrar,Attempt:0,}" Feb 13 19:59:39.031425 kubelet[1901]: E0213 19:59:39.031360 1901 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Feb 13 19:59:39.514915 containerd[1561]: time="2025-02-13T19:59:39.514839234Z" level=info msg="CreateContainer within sandbox \"742b6efeaa6c16c9a2ce63af7073ace12c0d1201d61f1d3d9c4958fafde6f229\" for &ContainerMetadata{Name:csi-node-driver-registrar,Attempt:0,} returns container id 
\"ffc2073799152005c51821785bf0719d2f44e7cb5151f3f6347595ad34ff592e\"" Feb 13 19:59:39.515512 containerd[1561]: time="2025-02-13T19:59:39.515482788Z" level=info msg="StartContainer for \"ffc2073799152005c51821785bf0719d2f44e7cb5151f3f6347595ad34ff592e\"" Feb 13 19:59:39.693148 kubelet[1901]: I0213 19:59:39.693104 1901 csi_plugin.go:100] kubernetes.io/csi: Trying to validate a new CSI Driver with name: csi.tigera.io endpoint: /var/lib/kubelet/plugins/csi.tigera.io/csi.sock versions: 1.0.0 Feb 13 19:59:39.693148 kubelet[1901]: I0213 19:59:39.693148 1901 csi_plugin.go:113] kubernetes.io/csi: Register new plugin with name: csi.tigera.io at endpoint: /var/lib/kubelet/plugins/csi.tigera.io/csi.sock Feb 13 19:59:39.961331 containerd[1561]: time="2025-02-13T19:59:39.961265510Z" level=info msg="StartContainer for \"ffc2073799152005c51821785bf0719d2f44e7cb5151f3f6347595ad34ff592e\" returns successfully" Feb 13 19:59:40.032558 kubelet[1901]: E0213 19:59:40.032514 1901 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Feb 13 19:59:41.033378 kubelet[1901]: E0213 19:59:41.033339 1901 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Feb 13 19:59:42.034241 kubelet[1901]: E0213 19:59:42.034175 1901 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Feb 13 19:59:42.269878 kubelet[1901]: I0213 19:59:42.269793 1901 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="calico-system/csi-node-driver-2d48g" podStartSLOduration=40.166192259 podStartE2EDuration="51.269773597s" podCreationTimestamp="2025-02-13 19:58:51 +0000 UTC" firstStartedPulling="2025-02-13 19:59:27.773377612 +0000 UTC m=+37.146213660" lastFinishedPulling="2025-02-13 19:59:38.87695894 +0000 UTC m=+48.249794998" observedRunningTime="2025-02-13 19:59:42.269040937 +0000 UTC m=+51.641876995" watchObservedRunningTime="2025-02-13 19:59:42.269773597 +0000 UTC m=+51.642609655" Feb 13 19:59:42.270116 kubelet[1901]: I0213 19:59:42.270043 1901 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="default/nginx-deployment-85f456d6dd-gl9h2" podStartSLOduration=20.286667628 podStartE2EDuration="26.270033686s" podCreationTimestamp="2025-02-13 19:59:16 +0000 UTC" firstStartedPulling="2025-02-13 19:59:29.777185243 +0000 UTC m=+39.150021301" lastFinishedPulling="2025-02-13 19:59:35.760551301 +0000 UTC m=+45.133387359" observedRunningTime="2025-02-13 19:59:36.379662738 +0000 UTC m=+45.752498796" watchObservedRunningTime="2025-02-13 19:59:42.270033686 +0000 UTC m=+51.642869744" Feb 13 19:59:43.035420 kubelet[1901]: E0213 19:59:43.035308 1901 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Feb 13 19:59:43.379242 kubelet[1901]: I0213 19:59:43.379058 1901 topology_manager.go:215] "Topology Admit Handler" podUID="0aba2456-0b1e-482e-9d76-da7a976f97e8" podNamespace="calico-system" podName="calico-typha-689c6764c5-6mnjk" Feb 13 19:59:43.496955 kubelet[1901]: I0213 19:59:43.496907 1901 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-jf86j\" (UniqueName: \"kubernetes.io/projected/0aba2456-0b1e-482e-9d76-da7a976f97e8-kube-api-access-jf86j\") pod \"calico-typha-689c6764c5-6mnjk\" (UID: \"0aba2456-0b1e-482e-9d76-da7a976f97e8\") " pod="calico-system/calico-typha-689c6764c5-6mnjk" Feb 13 19:59:43.496955 
kubelet[1901]: I0213 19:59:43.496947 1901 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"typha-certs\" (UniqueName: \"kubernetes.io/secret/0aba2456-0b1e-482e-9d76-da7a976f97e8-typha-certs\") pod \"calico-typha-689c6764c5-6mnjk\" (UID: \"0aba2456-0b1e-482e-9d76-da7a976f97e8\") " pod="calico-system/calico-typha-689c6764c5-6mnjk" Feb 13 19:59:43.496955 kubelet[1901]: I0213 19:59:43.496967 1901 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"tigera-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/0aba2456-0b1e-482e-9d76-da7a976f97e8-tigera-ca-bundle\") pod \"calico-typha-689c6764c5-6mnjk\" (UID: \"0aba2456-0b1e-482e-9d76-da7a976f97e8\") " pod="calico-system/calico-typha-689c6764c5-6mnjk" Feb 13 19:59:43.684075 kubelet[1901]: E0213 19:59:43.683896 1901 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Feb 13 19:59:43.684788 containerd[1561]: time="2025-02-13T19:59:43.684528779Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-typha-689c6764c5-6mnjk,Uid:0aba2456-0b1e-482e-9d76-da7a976f97e8,Namespace:calico-system,Attempt:0,}" Feb 13 19:59:43.945693 containerd[1561]: time="2025-02-13T19:59:43.945525989Z" level=info msg="StopContainer for \"3b30fffc6ceac64b6b59cd6ba731a968124e164dbbb13c6a57079ef96c151ec0\" with timeout 5 (s)" Feb 13 19:59:43.945980 containerd[1561]: time="2025-02-13T19:59:43.945793843Z" level=info msg="Stop container \"3b30fffc6ceac64b6b59cd6ba731a968124e164dbbb13c6a57079ef96c151ec0\" with signal terminated" Feb 13 19:59:43.981960 containerd[1561]: time="2025-02-13T19:59:43.981797400Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Feb 13 19:59:43.981960 containerd[1561]: time="2025-02-13T19:59:43.981875597Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Feb 13 19:59:43.981960 containerd[1561]: time="2025-02-13T19:59:43.981894532Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Feb 13 19:59:43.982269 containerd[1561]: time="2025-02-13T19:59:43.982003698Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." 
runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Feb 13 19:59:43.985170 containerd[1561]: time="2025-02-13T19:59:43.985105989Z" level=info msg="shim disconnected" id=3b30fffc6ceac64b6b59cd6ba731a968124e164dbbb13c6a57079ef96c151ec0 namespace=k8s.io Feb 13 19:59:43.985236 containerd[1561]: time="2025-02-13T19:59:43.985171913Z" level=warning msg="cleaning up after shim disconnected" id=3b30fffc6ceac64b6b59cd6ba731a968124e164dbbb13c6a57079ef96c151ec0 namespace=k8s.io Feb 13 19:59:43.985236 containerd[1561]: time="2025-02-13T19:59:43.985181171Z" level=info msg="cleaning up dead shim" namespace=k8s.io Feb 13 19:59:44.035177 containerd[1561]: time="2025-02-13T19:59:44.035122282Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-typha-689c6764c5-6mnjk,Uid:0aba2456-0b1e-482e-9d76-da7a976f97e8,Namespace:calico-system,Attempt:0,} returns sandbox id \"c7522ba92e33e719548feec7d94c863dc0c33604f96d173f90fed19a50651e73\"" Feb 13 19:59:44.035447 kubelet[1901]: E0213 19:59:44.035419 1901 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Feb 13 19:59:44.035929 kubelet[1901]: E0213 19:59:44.035910 1901 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Feb 13 19:59:44.036777 containerd[1561]: time="2025-02-13T19:59:44.036731171Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/typha:v3.29.1\"" Feb 13 19:59:44.117702 containerd[1561]: time="2025-02-13T19:59:44.117620308Z" level=info msg="StopContainer for \"3b30fffc6ceac64b6b59cd6ba731a968124e164dbbb13c6a57079ef96c151ec0\" returns successfully" Feb 13 19:59:44.118505 containerd[1561]: time="2025-02-13T19:59:44.118461362Z" level=info msg="StopPodSandbox for \"2c562c077f183462007d2ae58b40e02d5d45894dc1f17d72ef6dba27df507786\"" Feb 13 19:59:44.118562 containerd[1561]: time="2025-02-13T19:59:44.118524130Z" level=info msg="Container to stop \"fa69824e3fa6c5c7e63613ffc3220027c23ac9661c8820c123a15070f3aebad9\" must be in running or unknown state, current state \"CONTAINER_EXITED\"" Feb 13 19:59:44.118562 containerd[1561]: time="2025-02-13T19:59:44.118543165Z" level=info msg="Container to stop \"c58cf0e62a1efdeee07b2f175ea580c6952ac44448d92c96305b3bad05a55f5f\" must be in running or unknown state, current state \"CONTAINER_EXITED\"" Feb 13 19:59:44.118562 containerd[1561]: time="2025-02-13T19:59:44.118553705Z" level=info msg="Container to stop \"3b30fffc6ceac64b6b59cd6ba731a968124e164dbbb13c6a57079ef96c151ec0\" must be in running or unknown state, current state \"CONTAINER_EXITED\"" Feb 13 19:59:44.144266 containerd[1561]: time="2025-02-13T19:59:44.143669901Z" level=info msg="shim disconnected" id=2c562c077f183462007d2ae58b40e02d5d45894dc1f17d72ef6dba27df507786 namespace=k8s.io Feb 13 19:59:44.144266 containerd[1561]: time="2025-02-13T19:59:44.143731356Z" level=warning msg="cleaning up after shim disconnected" id=2c562c077f183462007d2ae58b40e02d5d45894dc1f17d72ef6dba27df507786 namespace=k8s.io Feb 13 19:59:44.144266 containerd[1561]: time="2025-02-13T19:59:44.143777984Z" level=info msg="cleaning up dead shim" namespace=k8s.io Feb 13 19:59:44.273146 containerd[1561]: time="2025-02-13T19:59:44.273089005Z" level=info msg="TearDown network for sandbox \"2c562c077f183462007d2ae58b40e02d5d45894dc1f17d72ef6dba27df507786\" successfully" Feb 13 19:59:44.273146 containerd[1561]: time="2025-02-13T19:59:44.273137076Z" level=info msg="StopPodSandbox for 
\"2c562c077f183462007d2ae58b40e02d5d45894dc1f17d72ef6dba27df507786\" returns successfully" Feb 13 19:59:44.387998 kubelet[1901]: I0213 19:59:44.387958 1901 scope.go:117] "RemoveContainer" containerID="3b30fffc6ceac64b6b59cd6ba731a968124e164dbbb13c6a57079ef96c151ec0" Feb 13 19:59:44.389158 containerd[1561]: time="2025-02-13T19:59:44.389122113Z" level=info msg="RemoveContainer for \"3b30fffc6ceac64b6b59cd6ba731a968124e164dbbb13c6a57079ef96c151ec0\"" Feb 13 19:59:44.402481 kubelet[1901]: I0213 19:59:44.402410 1901 reconciler_common.go:161] "operationExecutor.UnmountVolume started for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/52cf5fc6-a76a-426e-b244-4d1b397f40fe-lib-modules\") pod \"52cf5fc6-a76a-426e-b244-4d1b397f40fe\" (UID: \"52cf5fc6-a76a-426e-b244-4d1b397f40fe\") " Feb 13 19:59:44.402481 kubelet[1901]: I0213 19:59:44.402450 1901 reconciler_common.go:161] "operationExecutor.UnmountVolume started for volume \"flexvol-driver-host\" (UniqueName: \"kubernetes.io/host-path/52cf5fc6-a76a-426e-b244-4d1b397f40fe-flexvol-driver-host\") pod \"52cf5fc6-a76a-426e-b244-4d1b397f40fe\" (UID: \"52cf5fc6-a76a-426e-b244-4d1b397f40fe\") " Feb 13 19:59:44.402481 kubelet[1901]: I0213 19:59:44.402474 1901 reconciler_common.go:161] "operationExecutor.UnmountVolume started for volume \"kube-api-access-wkpdr\" (UniqueName: \"kubernetes.io/projected/52cf5fc6-a76a-426e-b244-4d1b397f40fe-kube-api-access-wkpdr\") pod \"52cf5fc6-a76a-426e-b244-4d1b397f40fe\" (UID: \"52cf5fc6-a76a-426e-b244-4d1b397f40fe\") " Feb 13 19:59:44.402481 kubelet[1901]: I0213 19:59:44.402488 1901 reconciler_common.go:161] "operationExecutor.UnmountVolume started for volume \"xtables-lock\" (UniqueName: \"kubernetes.io/host-path/52cf5fc6-a76a-426e-b244-4d1b397f40fe-xtables-lock\") pod \"52cf5fc6-a76a-426e-b244-4d1b397f40fe\" (UID: \"52cf5fc6-a76a-426e-b244-4d1b397f40fe\") " Feb 13 19:59:44.402893 kubelet[1901]: I0213 19:59:44.402535 1901 reconciler_common.go:161] "operationExecutor.UnmountVolume started for volume \"node-certs\" (UniqueName: \"kubernetes.io/secret/52cf5fc6-a76a-426e-b244-4d1b397f40fe-node-certs\") pod \"52cf5fc6-a76a-426e-b244-4d1b397f40fe\" (UID: \"52cf5fc6-a76a-426e-b244-4d1b397f40fe\") " Feb 13 19:59:44.402893 kubelet[1901]: I0213 19:59:44.402550 1901 reconciler_common.go:161] "operationExecutor.UnmountVolume started for volume \"cni-log-dir\" (UniqueName: \"kubernetes.io/host-path/52cf5fc6-a76a-426e-b244-4d1b397f40fe-cni-log-dir\") pod \"52cf5fc6-a76a-426e-b244-4d1b397f40fe\" (UID: \"52cf5fc6-a76a-426e-b244-4d1b397f40fe\") " Feb 13 19:59:44.402893 kubelet[1901]: I0213 19:59:44.402546 1901 operation_generator.go:887] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/52cf5fc6-a76a-426e-b244-4d1b397f40fe-flexvol-driver-host" (OuterVolumeSpecName: "flexvol-driver-host") pod "52cf5fc6-a76a-426e-b244-4d1b397f40fe" (UID: "52cf5fc6-a76a-426e-b244-4d1b397f40fe"). InnerVolumeSpecName "flexvol-driver-host". 
PluginName "kubernetes.io/host-path", VolumeGidValue "" Feb 13 19:59:44.402893 kubelet[1901]: I0213 19:59:44.402569 1901 reconciler_common.go:161] "operationExecutor.UnmountVolume started for volume \"tigera-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/52cf5fc6-a76a-426e-b244-4d1b397f40fe-tigera-ca-bundle\") pod \"52cf5fc6-a76a-426e-b244-4d1b397f40fe\" (UID: \"52cf5fc6-a76a-426e-b244-4d1b397f40fe\") " Feb 13 19:59:44.402893 kubelet[1901]: I0213 19:59:44.402583 1901 reconciler_common.go:161] "operationExecutor.UnmountVolume started for volume \"var-lib-calico\" (UniqueName: \"kubernetes.io/host-path/52cf5fc6-a76a-426e-b244-4d1b397f40fe-var-lib-calico\") pod \"52cf5fc6-a76a-426e-b244-4d1b397f40fe\" (UID: \"52cf5fc6-a76a-426e-b244-4d1b397f40fe\") " Feb 13 19:59:44.403073 kubelet[1901]: I0213 19:59:44.402601 1901 operation_generator.go:887] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/52cf5fc6-a76a-426e-b244-4d1b397f40fe-lib-modules" (OuterVolumeSpecName: "lib-modules") pod "52cf5fc6-a76a-426e-b244-4d1b397f40fe" (UID: "52cf5fc6-a76a-426e-b244-4d1b397f40fe"). InnerVolumeSpecName "lib-modules". PluginName "kubernetes.io/host-path", VolumeGidValue "" Feb 13 19:59:44.403073 kubelet[1901]: I0213 19:59:44.402605 1901 reconciler_common.go:161] "operationExecutor.UnmountVolume started for volume \"var-run-calico\" (UniqueName: \"kubernetes.io/host-path/52cf5fc6-a76a-426e-b244-4d1b397f40fe-var-run-calico\") pod \"52cf5fc6-a76a-426e-b244-4d1b397f40fe\" (UID: \"52cf5fc6-a76a-426e-b244-4d1b397f40fe\") " Feb 13 19:59:44.403073 kubelet[1901]: I0213 19:59:44.402646 1901 reconciler_common.go:161] "operationExecutor.UnmountVolume started for volume \"cni-net-dir\" (UniqueName: \"kubernetes.io/host-path/52cf5fc6-a76a-426e-b244-4d1b397f40fe-cni-net-dir\") pod \"52cf5fc6-a76a-426e-b244-4d1b397f40fe\" (UID: \"52cf5fc6-a76a-426e-b244-4d1b397f40fe\") " Feb 13 19:59:44.403073 kubelet[1901]: I0213 19:59:44.402651 1901 operation_generator.go:887] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/52cf5fc6-a76a-426e-b244-4d1b397f40fe-var-run-calico" (OuterVolumeSpecName: "var-run-calico") pod "52cf5fc6-a76a-426e-b244-4d1b397f40fe" (UID: "52cf5fc6-a76a-426e-b244-4d1b397f40fe"). InnerVolumeSpecName "var-run-calico". PluginName "kubernetes.io/host-path", VolumeGidValue "" Feb 13 19:59:44.403073 kubelet[1901]: I0213 19:59:44.402667 1901 reconciler_common.go:161] "operationExecutor.UnmountVolume started for volume \"policysync\" (UniqueName: \"kubernetes.io/host-path/52cf5fc6-a76a-426e-b244-4d1b397f40fe-policysync\") pod \"52cf5fc6-a76a-426e-b244-4d1b397f40fe\" (UID: \"52cf5fc6-a76a-426e-b244-4d1b397f40fe\") " Feb 13 19:59:44.403242 kubelet[1901]: I0213 19:59:44.402683 1901 operation_generator.go:887] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/52cf5fc6-a76a-426e-b244-4d1b397f40fe-xtables-lock" (OuterVolumeSpecName: "xtables-lock") pod "52cf5fc6-a76a-426e-b244-4d1b397f40fe" (UID: "52cf5fc6-a76a-426e-b244-4d1b397f40fe"). InnerVolumeSpecName "xtables-lock". 
PluginName "kubernetes.io/host-path", VolumeGidValue "" Feb 13 19:59:44.403242 kubelet[1901]: I0213 19:59:44.402685 1901 reconciler_common.go:161] "operationExecutor.UnmountVolume started for volume \"cni-bin-dir\" (UniqueName: \"kubernetes.io/host-path/52cf5fc6-a76a-426e-b244-4d1b397f40fe-cni-bin-dir\") pod \"52cf5fc6-a76a-426e-b244-4d1b397f40fe\" (UID: \"52cf5fc6-a76a-426e-b244-4d1b397f40fe\") " Feb 13 19:59:44.403242 kubelet[1901]: I0213 19:59:44.402701 1901 operation_generator.go:887] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/52cf5fc6-a76a-426e-b244-4d1b397f40fe-cni-bin-dir" (OuterVolumeSpecName: "cni-bin-dir") pod "52cf5fc6-a76a-426e-b244-4d1b397f40fe" (UID: "52cf5fc6-a76a-426e-b244-4d1b397f40fe"). InnerVolumeSpecName "cni-bin-dir". PluginName "kubernetes.io/host-path", VolumeGidValue "" Feb 13 19:59:44.403242 kubelet[1901]: I0213 19:59:44.402719 1901 operation_generator.go:887] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/52cf5fc6-a76a-426e-b244-4d1b397f40fe-cni-net-dir" (OuterVolumeSpecName: "cni-net-dir") pod "52cf5fc6-a76a-426e-b244-4d1b397f40fe" (UID: "52cf5fc6-a76a-426e-b244-4d1b397f40fe"). InnerVolumeSpecName "cni-net-dir". PluginName "kubernetes.io/host-path", VolumeGidValue "" Feb 13 19:59:44.403242 kubelet[1901]: I0213 19:59:44.402728 1901 reconciler_common.go:289] "Volume detached for volume \"cni-bin-dir\" (UniqueName: \"kubernetes.io/host-path/52cf5fc6-a76a-426e-b244-4d1b397f40fe-cni-bin-dir\") on node \"10.0.0.102\" DevicePath \"\"" Feb 13 19:59:44.403413 kubelet[1901]: I0213 19:59:44.402758 1901 operation_generator.go:887] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/52cf5fc6-a76a-426e-b244-4d1b397f40fe-policysync" (OuterVolumeSpecName: "policysync") pod "52cf5fc6-a76a-426e-b244-4d1b397f40fe" (UID: "52cf5fc6-a76a-426e-b244-4d1b397f40fe"). InnerVolumeSpecName "policysync". PluginName "kubernetes.io/host-path", VolumeGidValue "" Feb 13 19:59:44.403413 kubelet[1901]: I0213 19:59:44.402760 1901 reconciler_common.go:289] "Volume detached for volume \"var-run-calico\" (UniqueName: \"kubernetes.io/host-path/52cf5fc6-a76a-426e-b244-4d1b397f40fe-var-run-calico\") on node \"10.0.0.102\" DevicePath \"\"" Feb 13 19:59:44.403413 kubelet[1901]: I0213 19:59:44.402784 1901 reconciler_common.go:289] "Volume detached for volume \"flexvol-driver-host\" (UniqueName: \"kubernetes.io/host-path/52cf5fc6-a76a-426e-b244-4d1b397f40fe-flexvol-driver-host\") on node \"10.0.0.102\" DevicePath \"\"" Feb 13 19:59:44.403413 kubelet[1901]: I0213 19:59:44.402795 1901 reconciler_common.go:289] "Volume detached for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/52cf5fc6-a76a-426e-b244-4d1b397f40fe-lib-modules\") on node \"10.0.0.102\" DevicePath \"\"" Feb 13 19:59:44.403413 kubelet[1901]: I0213 19:59:44.402804 1901 reconciler_common.go:289] "Volume detached for volume \"xtables-lock\" (UniqueName: \"kubernetes.io/host-path/52cf5fc6-a76a-426e-b244-4d1b397f40fe-xtables-lock\") on node \"10.0.0.102\" DevicePath \"\"" Feb 13 19:59:44.403413 kubelet[1901]: I0213 19:59:44.403077 1901 operation_generator.go:887] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/52cf5fc6-a76a-426e-b244-4d1b397f40fe-cni-log-dir" (OuterVolumeSpecName: "cni-log-dir") pod "52cf5fc6-a76a-426e-b244-4d1b397f40fe" (UID: "52cf5fc6-a76a-426e-b244-4d1b397f40fe"). InnerVolumeSpecName "cni-log-dir". 
PluginName "kubernetes.io/host-path", VolumeGidValue "" Feb 13 19:59:44.403622 kubelet[1901]: I0213 19:59:44.403135 1901 operation_generator.go:887] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/52cf5fc6-a76a-426e-b244-4d1b397f40fe-var-lib-calico" (OuterVolumeSpecName: "var-lib-calico") pod "52cf5fc6-a76a-426e-b244-4d1b397f40fe" (UID: "52cf5fc6-a76a-426e-b244-4d1b397f40fe"). InnerVolumeSpecName "var-lib-calico". PluginName "kubernetes.io/host-path", VolumeGidValue "" Feb 13 19:59:44.406368 kubelet[1901]: I0213 19:59:44.406330 1901 operation_generator.go:887] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/52cf5fc6-a76a-426e-b244-4d1b397f40fe-node-certs" (OuterVolumeSpecName: "node-certs") pod "52cf5fc6-a76a-426e-b244-4d1b397f40fe" (UID: "52cf5fc6-a76a-426e-b244-4d1b397f40fe"). InnerVolumeSpecName "node-certs". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 13 19:59:44.406502 kubelet[1901]: I0213 19:59:44.406453 1901 operation_generator.go:887] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/52cf5fc6-a76a-426e-b244-4d1b397f40fe-kube-api-access-wkpdr" (OuterVolumeSpecName: "kube-api-access-wkpdr") pod "52cf5fc6-a76a-426e-b244-4d1b397f40fe" (UID: "52cf5fc6-a76a-426e-b244-4d1b397f40fe"). InnerVolumeSpecName "kube-api-access-wkpdr". PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 13 19:59:44.407170 kubelet[1901]: I0213 19:59:44.407132 1901 operation_generator.go:887] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/52cf5fc6-a76a-426e-b244-4d1b397f40fe-tigera-ca-bundle" (OuterVolumeSpecName: "tigera-ca-bundle") pod "52cf5fc6-a76a-426e-b244-4d1b397f40fe" (UID: "52cf5fc6-a76a-426e-b244-4d1b397f40fe"). InnerVolumeSpecName "tigera-ca-bundle". PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 13 19:59:44.469566 containerd[1561]: time="2025-02-13T19:59:44.469500009Z" level=info msg="RemoveContainer for \"3b30fffc6ceac64b6b59cd6ba731a968124e164dbbb13c6a57079ef96c151ec0\" returns successfully" Feb 13 19:59:44.469908 kubelet[1901]: I0213 19:59:44.469861 1901 scope.go:117] "RemoveContainer" containerID="c58cf0e62a1efdeee07b2f175ea580c6952ac44448d92c96305b3bad05a55f5f" Feb 13 19:59:44.470971 containerd[1561]: time="2025-02-13T19:59:44.470939247Z" level=info msg="RemoveContainer for \"c58cf0e62a1efdeee07b2f175ea580c6952ac44448d92c96305b3bad05a55f5f\"" Feb 13 19:59:44.503464 kubelet[1901]: I0213 19:59:44.503392 1901 reconciler_common.go:289] "Volume detached for volume \"kube-api-access-wkpdr\" (UniqueName: \"kubernetes.io/projected/52cf5fc6-a76a-426e-b244-4d1b397f40fe-kube-api-access-wkpdr\") on node \"10.0.0.102\" DevicePath \"\"" Feb 13 19:59:44.503464 kubelet[1901]: I0213 19:59:44.503441 1901 reconciler_common.go:289] "Volume detached for volume \"node-certs\" (UniqueName: \"kubernetes.io/secret/52cf5fc6-a76a-426e-b244-4d1b397f40fe-node-certs\") on node \"10.0.0.102\" DevicePath \"\"" Feb 13 19:59:44.503464 kubelet[1901]: I0213 19:59:44.503453 1901 reconciler_common.go:289] "Volume detached for volume \"cni-log-dir\" (UniqueName: \"kubernetes.io/host-path/52cf5fc6-a76a-426e-b244-4d1b397f40fe-cni-log-dir\") on node \"10.0.0.102\" DevicePath \"\"" Feb 13 19:59:44.503464 kubelet[1901]: I0213 19:59:44.503463 1901 reconciler_common.go:289] "Volume detached for volume \"tigera-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/52cf5fc6-a76a-426e-b244-4d1b397f40fe-tigera-ca-bundle\") on node \"10.0.0.102\" DevicePath \"\"" Feb 13 19:59:44.503464 kubelet[1901]: I0213 19:59:44.503472 1901 
reconciler_common.go:289] "Volume detached for volume \"var-lib-calico\" (UniqueName: \"kubernetes.io/host-path/52cf5fc6-a76a-426e-b244-4d1b397f40fe-var-lib-calico\") on node \"10.0.0.102\" DevicePath \"\"" Feb 13 19:59:44.503464 kubelet[1901]: I0213 19:59:44.503479 1901 reconciler_common.go:289] "Volume detached for volume \"cni-net-dir\" (UniqueName: \"kubernetes.io/host-path/52cf5fc6-a76a-426e-b244-4d1b397f40fe-cni-net-dir\") on node \"10.0.0.102\" DevicePath \"\"" Feb 13 19:59:44.503464 kubelet[1901]: I0213 19:59:44.503488 1901 reconciler_common.go:289] "Volume detached for volume \"policysync\" (UniqueName: \"kubernetes.io/host-path/52cf5fc6-a76a-426e-b244-4d1b397f40fe-policysync\") on node \"10.0.0.102\" DevicePath \"\"" Feb 13 19:59:44.531284 containerd[1561]: time="2025-02-13T19:59:44.531132529Z" level=info msg="RemoveContainer for \"c58cf0e62a1efdeee07b2f175ea580c6952ac44448d92c96305b3bad05a55f5f\" returns successfully" Feb 13 19:59:44.531513 kubelet[1901]: I0213 19:59:44.531471 1901 scope.go:117] "RemoveContainer" containerID="fa69824e3fa6c5c7e63613ffc3220027c23ac9661c8820c123a15070f3aebad9" Feb 13 19:59:44.533057 containerd[1561]: time="2025-02-13T19:59:44.533030902Z" level=info msg="RemoveContainer for \"fa69824e3fa6c5c7e63613ffc3220027c23ac9661c8820c123a15070f3aebad9\"" Feb 13 19:59:44.604164 kubelet[1901]: I0213 19:59:44.604112 1901 topology_manager.go:215] "Topology Admit Handler" podUID="926a38b3-6c36-49d5-b73a-3aa837ab20c3" podNamespace="calico-system" podName="calico-node-hkfch" Feb 13 19:59:44.604164 kubelet[1901]: E0213 19:59:44.604177 1901 cpu_manager.go:395] "RemoveStaleState: removing container" podUID="52cf5fc6-a76a-426e-b244-4d1b397f40fe" containerName="flexvol-driver" Feb 13 19:59:44.604381 kubelet[1901]: E0213 19:59:44.604188 1901 cpu_manager.go:395] "RemoveStaleState: removing container" podUID="52cf5fc6-a76a-426e-b244-4d1b397f40fe" containerName="install-cni" Feb 13 19:59:44.604381 kubelet[1901]: E0213 19:59:44.604195 1901 cpu_manager.go:395] "RemoveStaleState: removing container" podUID="52cf5fc6-a76a-426e-b244-4d1b397f40fe" containerName="calico-node" Feb 13 19:59:44.604381 kubelet[1901]: I0213 19:59:44.604216 1901 memory_manager.go:354] "RemoveStaleState removing state" podUID="52cf5fc6-a76a-426e-b244-4d1b397f40fe" containerName="calico-node" Feb 13 19:59:44.610164 containerd[1561]: time="2025-02-13T19:59:44.610104908Z" level=info msg="RemoveContainer for \"fa69824e3fa6c5c7e63613ffc3220027c23ac9661c8820c123a15070f3aebad9\" returns successfully" Feb 13 19:59:44.610217 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-3b30fffc6ceac64b6b59cd6ba731a968124e164dbbb13c6a57079ef96c151ec0-rootfs.mount: Deactivated successfully. Feb 13 19:59:44.610485 systemd[1]: var-lib-kubelet-pods-52cf5fc6\x2da76a\x2d426e\x2db244\x2d4d1b397f40fe-volume\x2dsubpaths-tigera\x2dca\x2dbundle-calico\x2dnode-1.mount: Deactivated successfully. 
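[Annotation] The "var-lib-kubelet-pods-52cf5fc6\x2d..." mount units in the systemd lines above are not corrupted: systemd escapes unit names by turning each "-" inside a path component into \x2d and each "/" into "-". This can be reproduced with the standard systemd-escape tool; the path below is built from the pod UID in the log, and the output shown is the expected escaped form matching the unit names above (a sketch, not captured output):

    $ systemd-escape --path /var/lib/kubelet/pods/52cf5fc6-a76a-426e-b244-4d1b397f40fe/volumes
    var-lib-kubelet-pods-52cf5fc6\x2da76a\x2d426e\x2db244\x2d4d1b397f40fe-volumes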
Feb 13 19:59:44.610870 containerd[1561]: time="2025-02-13T19:59:44.610719054Z" level=error msg="ContainerStatus for \"3b30fffc6ceac64b6b59cd6ba731a968124e164dbbb13c6a57079ef96c151ec0\" failed" error="rpc error: code = NotFound desc = an error occurred when try to find container \"3b30fffc6ceac64b6b59cd6ba731a968124e164dbbb13c6a57079ef96c151ec0\": not found" Feb 13 19:59:44.610907 kubelet[1901]: I0213 19:59:44.610479 1901 scope.go:117] "RemoveContainer" containerID="3b30fffc6ceac64b6b59cd6ba731a968124e164dbbb13c6a57079ef96c151ec0" Feb 13 19:59:44.610907 kubelet[1901]: E0213 19:59:44.610877 1901 remote_runtime.go:432] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = an error occurred when try to find container \"3b30fffc6ceac64b6b59cd6ba731a968124e164dbbb13c6a57079ef96c151ec0\": not found" containerID="3b30fffc6ceac64b6b59cd6ba731a968124e164dbbb13c6a57079ef96c151ec0" Feb 13 19:59:44.610731 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-2c562c077f183462007d2ae58b40e02d5d45894dc1f17d72ef6dba27df507786-rootfs.mount: Deactivated successfully. Feb 13 19:59:44.611052 kubelet[1901]: I0213 19:59:44.610902 1901 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"containerd","ID":"3b30fffc6ceac64b6b59cd6ba731a968124e164dbbb13c6a57079ef96c151ec0"} err="failed to get container status \"3b30fffc6ceac64b6b59cd6ba731a968124e164dbbb13c6a57079ef96c151ec0\": rpc error: code = NotFound desc = an error occurred when try to find container \"3b30fffc6ceac64b6b59cd6ba731a968124e164dbbb13c6a57079ef96c151ec0\": not found" Feb 13 19:59:44.611052 kubelet[1901]: I0213 19:59:44.610920 1901 scope.go:117] "RemoveContainer" containerID="c58cf0e62a1efdeee07b2f175ea580c6952ac44448d92c96305b3bad05a55f5f" Feb 13 19:59:44.610941 systemd[1]: run-containerd-io.containerd.grpc.v1.cri-sandboxes-2c562c077f183462007d2ae58b40e02d5d45894dc1f17d72ef6dba27df507786-shm.mount: Deactivated successfully. 
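[Annotation] The "ContainerStatus ... not found" errors immediately after the successful RemoveContainer calls are benign: the kubelet re-queries the status of a container it has just deleted, and treats a gRPC NotFound answer as "already gone" rather than a failure. A minimal Go sketch of that tolerance, assuming an error shaped like the ones in the log (the helper name alreadyRemoved is hypothetical, not kubelet code):

    package main

    import (
    	"fmt"

    	"google.golang.org/grpc/codes"
    	"google.golang.org/grpc/status"
    )

    // alreadyRemoved reports whether a status query failed only because the
    // container no longer exists, which a cleanup path can safely ignore.
    func alreadyRemoved(err error) bool {
    	st, ok := status.FromError(err)
    	return ok && st.Code() == codes.NotFound
    }

    func main() {
    	// Simulate the runtime's reply for an ID that was just deleted; the
    	// message text mirrors the log lines above, the flow is an assumption.
    	err := status.Error(codes.NotFound, "an error occurred when try to find container: not found")
    	if alreadyRemoved(err) {
    		fmt.Println("container already removed; treat deletion as complete")
    	}
    }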
Feb 13 19:59:44.611200 containerd[1561]: time="2025-02-13T19:59:44.611090023Z" level=error msg="ContainerStatus for \"c58cf0e62a1efdeee07b2f175ea580c6952ac44448d92c96305b3bad05a55f5f\" failed" error="rpc error: code = NotFound desc = an error occurred when try to find container \"c58cf0e62a1efdeee07b2f175ea580c6952ac44448d92c96305b3bad05a55f5f\": not found" Feb 13 19:59:44.611248 kubelet[1901]: E0213 19:59:44.611184 1901 remote_runtime.go:432] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = an error occurred when try to find container \"c58cf0e62a1efdeee07b2f175ea580c6952ac44448d92c96305b3bad05a55f5f\": not found" containerID="c58cf0e62a1efdeee07b2f175ea580c6952ac44448d92c96305b3bad05a55f5f" Feb 13 19:59:44.611248 kubelet[1901]: I0213 19:59:44.611221 1901 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"containerd","ID":"c58cf0e62a1efdeee07b2f175ea580c6952ac44448d92c96305b3bad05a55f5f"} err="failed to get container status \"c58cf0e62a1efdeee07b2f175ea580c6952ac44448d92c96305b3bad05a55f5f\": rpc error: code = NotFound desc = an error occurred when try to find container \"c58cf0e62a1efdeee07b2f175ea580c6952ac44448d92c96305b3bad05a55f5f\": not found" Feb 13 19:59:44.611248 kubelet[1901]: I0213 19:59:44.611235 1901 scope.go:117] "RemoveContainer" containerID="fa69824e3fa6c5c7e63613ffc3220027c23ac9661c8820c123a15070f3aebad9" Feb 13 19:59:44.611127 systemd[1]: var-lib-kubelet-pods-52cf5fc6\x2da76a\x2d426e\x2db244\x2d4d1b397f40fe-volumes-kubernetes.io\x7eprojected-kube\x2dapi\x2daccess\x2dwkpdr.mount: Deactivated successfully. Feb 13 19:59:44.611306 systemd[1]: var-lib-kubelet-pods-52cf5fc6\x2da76a\x2d426e\x2db244\x2d4d1b397f40fe-volumes-kubernetes.io\x7esecret-node\x2dcerts.mount: Deactivated successfully. 
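[Annotation] For orientation, the volumes being unmounted above for pod 52cf5fc6-... (and re-attached just below for the replacement calico-node-hkfch) are the usual calico-node mount set. A sketch of how that set would appear in the DaemonSet spec; the volume names are taken from the log, while the host paths are conventional Calico defaults and are assumptions here, not read from this node:

    volumes:
    - name: lib-modules
      hostPath: { path: /lib/modules }
    - name: xtables-lock
      hostPath: { path: /run/xtables.lock, type: FileOrCreate }
    - name: var-run-calico
      hostPath: { path: /var/run/calico }
    - name: var-lib-calico
      hostPath: { path: /var/lib/calico }
    - name: cni-bin-dir
      hostPath: { path: /opt/cni/bin }
    - name: cni-net-dir
      hostPath: { path: /etc/cni/net.d }
    - name: cni-log-dir
      hostPath: { path: /var/log/calico/cni }
    - name: policysync
      hostPath: { path: /var/run/nodeagent, type: DirectoryOrCreate }
    - name: flexvol-driver-host
      hostPath: { path: /usr/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds }
    - name: node-certs
      secret: { secretName: node-certs }
    - name: tigera-ca-bundle
      configMap: { name: tigera-ca-bundle }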
Feb 13 19:59:44.611416 containerd[1561]: time="2025-02-13T19:59:44.611365271Z" level=error msg="ContainerStatus for \"fa69824e3fa6c5c7e63613ffc3220027c23ac9661c8820c123a15070f3aebad9\" failed" error="rpc error: code = NotFound desc = an error occurred when try to find container \"fa69824e3fa6c5c7e63613ffc3220027c23ac9661c8820c123a15070f3aebad9\": not found" Feb 13 19:59:44.611494 kubelet[1901]: E0213 19:59:44.611471 1901 remote_runtime.go:432] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = an error occurred when try to find container \"fa69824e3fa6c5c7e63613ffc3220027c23ac9661c8820c123a15070f3aebad9\": not found" containerID="fa69824e3fa6c5c7e63613ffc3220027c23ac9661c8820c123a15070f3aebad9" Feb 13 19:59:44.611527 kubelet[1901]: I0213 19:59:44.611493 1901 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"containerd","ID":"fa69824e3fa6c5c7e63613ffc3220027c23ac9661c8820c123a15070f3aebad9"} err="failed to get container status \"fa69824e3fa6c5c7e63613ffc3220027c23ac9661c8820c123a15070f3aebad9\": rpc error: code = NotFound desc = an error occurred when try to find container \"fa69824e3fa6c5c7e63613ffc3220027c23ac9661c8820c123a15070f3aebad9\": not found" Feb 13 19:59:44.804311 kubelet[1901]: I0213 19:59:44.804144 1901 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"xtables-lock\" (UniqueName: \"kubernetes.io/host-path/926a38b3-6c36-49d5-b73a-3aa837ab20c3-xtables-lock\") pod \"calico-node-hkfch\" (UID: \"926a38b3-6c36-49d5-b73a-3aa837ab20c3\") " pod="calico-system/calico-node-hkfch" Feb 13 19:59:44.804311 kubelet[1901]: I0213 19:59:44.804200 1901 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-klhrq\" (UniqueName: \"kubernetes.io/projected/926a38b3-6c36-49d5-b73a-3aa837ab20c3-kube-api-access-klhrq\") pod \"calico-node-hkfch\" (UID: \"926a38b3-6c36-49d5-b73a-3aa837ab20c3\") " pod="calico-system/calico-node-hkfch" Feb 13 19:59:44.804311 kubelet[1901]: I0213 19:59:44.804224 1901 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"flexvol-driver-host\" (UniqueName: \"kubernetes.io/host-path/926a38b3-6c36-49d5-b73a-3aa837ab20c3-flexvol-driver-host\") pod \"calico-node-hkfch\" (UID: \"926a38b3-6c36-49d5-b73a-3aa837ab20c3\") " pod="calico-system/calico-node-hkfch" Feb 13 19:59:44.804311 kubelet[1901]: I0213 19:59:44.804239 1901 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/926a38b3-6c36-49d5-b73a-3aa837ab20c3-lib-modules\") pod \"calico-node-hkfch\" (UID: \"926a38b3-6c36-49d5-b73a-3aa837ab20c3\") " pod="calico-system/calico-node-hkfch" Feb 13 19:59:44.804311 kubelet[1901]: I0213 19:59:44.804277 1901 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"node-certs\" (UniqueName: \"kubernetes.io/secret/926a38b3-6c36-49d5-b73a-3aa837ab20c3-node-certs\") pod \"calico-node-hkfch\" (UID: \"926a38b3-6c36-49d5-b73a-3aa837ab20c3\") " pod="calico-system/calico-node-hkfch" Feb 13 19:59:44.804605 kubelet[1901]: I0213 19:59:44.804300 1901 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cni-net-dir\" (UniqueName: \"kubernetes.io/host-path/926a38b3-6c36-49d5-b73a-3aa837ab20c3-cni-net-dir\") pod \"calico-node-hkfch\" (UID: \"926a38b3-6c36-49d5-b73a-3aa837ab20c3\") " 
pod="calico-system/calico-node-hkfch" Feb 13 19:59:44.804605 kubelet[1901]: I0213 19:59:44.804350 1901 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"policysync\" (UniqueName: \"kubernetes.io/host-path/926a38b3-6c36-49d5-b73a-3aa837ab20c3-policysync\") pod \"calico-node-hkfch\" (UID: \"926a38b3-6c36-49d5-b73a-3aa837ab20c3\") " pod="calico-system/calico-node-hkfch" Feb 13 19:59:44.804605 kubelet[1901]: I0213 19:59:44.804385 1901 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"tigera-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/926a38b3-6c36-49d5-b73a-3aa837ab20c3-tigera-ca-bundle\") pod \"calico-node-hkfch\" (UID: \"926a38b3-6c36-49d5-b73a-3aa837ab20c3\") " pod="calico-system/calico-node-hkfch" Feb 13 19:59:44.804605 kubelet[1901]: I0213 19:59:44.804415 1901 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cni-bin-dir\" (UniqueName: \"kubernetes.io/host-path/926a38b3-6c36-49d5-b73a-3aa837ab20c3-cni-bin-dir\") pod \"calico-node-hkfch\" (UID: \"926a38b3-6c36-49d5-b73a-3aa837ab20c3\") " pod="calico-system/calico-node-hkfch" Feb 13 19:59:44.804605 kubelet[1901]: I0213 19:59:44.804440 1901 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cni-log-dir\" (UniqueName: \"kubernetes.io/host-path/926a38b3-6c36-49d5-b73a-3aa837ab20c3-cni-log-dir\") pod \"calico-node-hkfch\" (UID: \"926a38b3-6c36-49d5-b73a-3aa837ab20c3\") " pod="calico-system/calico-node-hkfch" Feb 13 19:59:44.804735 kubelet[1901]: I0213 19:59:44.804468 1901 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"var-lib-calico\" (UniqueName: \"kubernetes.io/host-path/926a38b3-6c36-49d5-b73a-3aa837ab20c3-var-lib-calico\") pod \"calico-node-hkfch\" (UID: \"926a38b3-6c36-49d5-b73a-3aa837ab20c3\") " pod="calico-system/calico-node-hkfch" Feb 13 19:59:44.804735 kubelet[1901]: I0213 19:59:44.804493 1901 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"var-run-calico\" (UniqueName: \"kubernetes.io/host-path/926a38b3-6c36-49d5-b73a-3aa837ab20c3-var-run-calico\") pod \"calico-node-hkfch\" (UID: \"926a38b3-6c36-49d5-b73a-3aa837ab20c3\") " pod="calico-system/calico-node-hkfch" Feb 13 19:59:45.036438 kubelet[1901]: E0213 19:59:45.036379 1901 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Feb 13 19:59:45.082830 kubelet[1901]: I0213 19:59:45.082703 1901 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="52cf5fc6-a76a-426e-b244-4d1b397f40fe" path="/var/lib/kubelet/pods/52cf5fc6-a76a-426e-b244-4d1b397f40fe/volumes" Feb 13 19:59:45.208241 kubelet[1901]: E0213 19:59:45.208185 1901 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Feb 13 19:59:45.208692 containerd[1561]: time="2025-02-13T19:59:45.208637848Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-node-hkfch,Uid:926a38b3-6c36-49d5-b73a-3aa837ab20c3,Namespace:calico-system,Attempt:0,}" Feb 13 19:59:45.229429 containerd[1561]: time="2025-02-13T19:59:45.229282288Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." 
runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Feb 13 19:59:45.229582 containerd[1561]: time="2025-02-13T19:59:45.229405821Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Feb 13 19:59:45.229582 containerd[1561]: time="2025-02-13T19:59:45.229482735Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Feb 13 19:59:45.229683 containerd[1561]: time="2025-02-13T19:59:45.229620284Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Feb 13 19:59:45.272600 containerd[1561]: time="2025-02-13T19:59:45.272551283Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-node-hkfch,Uid:926a38b3-6c36-49d5-b73a-3aa837ab20c3,Namespace:calico-system,Attempt:0,} returns sandbox id \"284aa0efd7b7badcc5918ce26c861f33d35c9c2d1d228e5b075e18091d267163\"" Feb 13 19:59:45.273516 kubelet[1901]: E0213 19:59:45.273277 1901 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Feb 13 19:59:45.275376 containerd[1561]: time="2025-02-13T19:59:45.275338167Z" level=info msg="CreateContainer within sandbox \"284aa0efd7b7badcc5918ce26c861f33d35c9c2d1d228e5b075e18091d267163\" for container &ContainerMetadata{Name:flexvol-driver,Attempt:0,}" Feb 13 19:59:45.288771 containerd[1561]: time="2025-02-13T19:59:45.288684834Z" level=info msg="CreateContainer within sandbox \"284aa0efd7b7badcc5918ce26c861f33d35c9c2d1d228e5b075e18091d267163\" for &ContainerMetadata{Name:flexvol-driver,Attempt:0,} returns container id \"d3883b675b5abc1c6b182b1a1f709475ae10350d9cf7e9f12587ad71b9df50a1\"" Feb 13 19:59:45.289311 containerd[1561]: time="2025-02-13T19:59:45.289267220Z" level=info msg="StartContainer for \"d3883b675b5abc1c6b182b1a1f709475ae10350d9cf7e9f12587ad71b9df50a1\"" Feb 13 19:59:45.350867 containerd[1561]: time="2025-02-13T19:59:45.350754509Z" level=info msg="StartContainer for \"d3883b675b5abc1c6b182b1a1f709475ae10350d9cf7e9f12587ad71b9df50a1\" returns successfully" Feb 13 19:59:45.392128 kubelet[1901]: E0213 19:59:45.392066 1901 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Feb 13 19:59:45.414641 containerd[1561]: time="2025-02-13T19:59:45.414575920Z" level=info msg="shim disconnected" id=d3883b675b5abc1c6b182b1a1f709475ae10350d9cf7e9f12587ad71b9df50a1 namespace=k8s.io Feb 13 19:59:45.414641 containerd[1561]: time="2025-02-13T19:59:45.414626986Z" level=warning msg="cleaning up after shim disconnected" id=d3883b675b5abc1c6b182b1a1f709475ae10350d9cf7e9f12587ad71b9df50a1 namespace=k8s.io Feb 13 19:59:45.414641 containerd[1561]: time="2025-02-13T19:59:45.414636003Z" level=info msg="cleaning up dead shim" namespace=k8s.io Feb 13 19:59:46.037238 kubelet[1901]: E0213 19:59:46.037178 1901 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Feb 13 19:59:46.074519 containerd[1561]: time="2025-02-13T19:59:46.074463550Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/typha:v3.29.1\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Feb 13 19:59:46.075321 containerd[1561]: time="2025-02-13T19:59:46.075225854Z" level=info msg="stop pulling image 
ghcr.io/flatcar/calico/typha:v3.29.1: active requests=0, bytes read=29850141" Feb 13 19:59:46.076364 containerd[1561]: time="2025-02-13T19:59:46.076307509Z" level=info msg="ImageCreate event name:\"sha256:4cb3738506f5a9c530033d1e24fd6b9ec618518a2ec8b012ded33572be06ab44\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Feb 13 19:59:46.078449 containerd[1561]: time="2025-02-13T19:59:46.078410245Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/typha@sha256:768a194e1115c73bcbf35edb7afd18a63e16e08d940c79993565b6a3cca2da7c\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Feb 13 19:59:46.079084 containerd[1561]: time="2025-02-13T19:59:46.079036774Z" level=info msg="Pulled image \"ghcr.io/flatcar/calico/typha:v3.29.1\" with image id \"sha256:4cb3738506f5a9c530033d1e24fd6b9ec618518a2ec8b012ded33572be06ab44\", repo tag \"ghcr.io/flatcar/calico/typha:v3.29.1\", repo digest \"ghcr.io/flatcar/calico/typha@sha256:768a194e1115c73bcbf35edb7afd18a63e16e08d940c79993565b6a3cca2da7c\", size \"31343217\" in 2.042249557s" Feb 13 19:59:46.079084 containerd[1561]: time="2025-02-13T19:59:46.079077820Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/typha:v3.29.1\" returns image reference \"sha256:4cb3738506f5a9c530033d1e24fd6b9ec618518a2ec8b012ded33572be06ab44\"" Feb 13 19:59:46.087056 containerd[1561]: time="2025-02-13T19:59:46.087021075Z" level=info msg="CreateContainer within sandbox \"c7522ba92e33e719548feec7d94c863dc0c33604f96d173f90fed19a50651e73\" for container &ContainerMetadata{Name:calico-typha,Attempt:0,}" Feb 13 19:59:46.099238 containerd[1561]: time="2025-02-13T19:59:46.099183247Z" level=info msg="CreateContainer within sandbox \"c7522ba92e33e719548feec7d94c863dc0c33604f96d173f90fed19a50651e73\" for &ContainerMetadata{Name:calico-typha,Attempt:0,} returns container id \"c5cad21c0a9c17f5675a141fd540c57b0a5fcb1b8924cede020ad66332bfda0c\"" Feb 13 19:59:46.099732 containerd[1561]: time="2025-02-13T19:59:46.099693377Z" level=info msg="StartContainer for \"c5cad21c0a9c17f5675a141fd540c57b0a5fcb1b8924cede020ad66332bfda0c\"" Feb 13 19:59:46.169194 containerd[1561]: time="2025-02-13T19:59:46.169146888Z" level=info msg="StartContainer for \"c5cad21c0a9c17f5675a141fd540c57b0a5fcb1b8924cede020ad66332bfda0c\" returns successfully" Feb 13 19:59:46.396675 kubelet[1901]: E0213 19:59:46.395857 1901 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Feb 13 19:59:46.398207 kubelet[1901]: E0213 19:59:46.398164 1901 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Feb 13 19:59:46.400404 containerd[1561]: time="2025-02-13T19:59:46.400350866Z" level=info msg="CreateContainer within sandbox \"284aa0efd7b7badcc5918ce26c861f33d35c9c2d1d228e5b075e18091d267163\" for container &ContainerMetadata{Name:install-cni,Attempt:0,}" Feb 13 19:59:46.404984 kubelet[1901]: I0213 19:59:46.404909 1901 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="calico-system/calico-typha-689c6764c5-6mnjk" podStartSLOduration=1.361485966 podStartE2EDuration="3.404884656s" podCreationTimestamp="2025-02-13 19:59:43 +0000 UTC" firstStartedPulling="2025-02-13 19:59:44.036403454 +0000 UTC m=+53.409239512" lastFinishedPulling="2025-02-13 19:59:46.079802154 +0000 UTC m=+55.452638202" observedRunningTime="2025-02-13 19:59:46.404838951 +0000 UTC 
m=+55.777675029" watchObservedRunningTime="2025-02-13 19:59:46.404884656 +0000 UTC m=+55.777720714" Feb 13 19:59:46.420599 containerd[1561]: time="2025-02-13T19:59:46.420515936Z" level=info msg="CreateContainer within sandbox \"284aa0efd7b7badcc5918ce26c861f33d35c9c2d1d228e5b075e18091d267163\" for &ContainerMetadata{Name:install-cni,Attempt:0,} returns container id \"509d707dc745d8531043e895016942a544f3100d622b122866ef6c53be2d6b6c\"" Feb 13 19:59:46.421622 containerd[1561]: time="2025-02-13T19:59:46.421541805Z" level=info msg="StartContainer for \"509d707dc745d8531043e895016942a544f3100d622b122866ef6c53be2d6b6c\"" Feb 13 19:59:46.497238 containerd[1561]: time="2025-02-13T19:59:46.497191483Z" level=info msg="StartContainer for \"509d707dc745d8531043e895016942a544f3100d622b122866ef6c53be2d6b6c\" returns successfully" Feb 13 19:59:46.841921 containerd[1561]: time="2025-02-13T19:59:46.841859185Z" level=error msg="failed to reload cni configuration after receiving fs change event(WRITE \"/etc/cni/net.d/calico-kubeconfig\")" error="cni config load failed: failed to load CNI config list file /etc/cni/net.d/10-calico.conflist: error parsing configuration list: unexpected end of JSON input: invalid cni config: failed to load cni config" Feb 13 19:59:46.865389 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-509d707dc745d8531043e895016942a544f3100d622b122866ef6c53be2d6b6c-rootfs.mount: Deactivated successfully. Feb 13 19:59:47.006352 containerd[1561]: time="2025-02-13T19:59:47.006271310Z" level=info msg="shim disconnected" id=509d707dc745d8531043e895016942a544f3100d622b122866ef6c53be2d6b6c namespace=k8s.io Feb 13 19:59:47.006352 containerd[1561]: time="2025-02-13T19:59:47.006344677Z" level=warning msg="cleaning up after shim disconnected" id=509d707dc745d8531043e895016942a544f3100d622b122866ef6c53be2d6b6c namespace=k8s.io Feb 13 19:59:47.006352 containerd[1561]: time="2025-02-13T19:59:47.006355438Z" level=info msg="cleaning up dead shim" namespace=k8s.io Feb 13 19:59:47.038367 kubelet[1901]: E0213 19:59:47.038301 1901 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Feb 13 19:59:47.402069 kubelet[1901]: E0213 19:59:47.402010 1901 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Feb 13 19:59:47.402218 kubelet[1901]: E0213 19:59:47.402174 1901 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Feb 13 19:59:47.411587 containerd[1561]: time="2025-02-13T19:59:47.411440940Z" level=info msg="CreateContainer within sandbox \"284aa0efd7b7badcc5918ce26c861f33d35c9c2d1d228e5b075e18091d267163\" for container &ContainerMetadata{Name:calico-node,Attempt:0,}" Feb 13 19:59:47.429730 containerd[1561]: time="2025-02-13T19:59:47.429675389Z" level=info msg="CreateContainer within sandbox \"284aa0efd7b7badcc5918ce26c861f33d35c9c2d1d228e5b075e18091d267163\" for &ContainerMetadata{Name:calico-node,Attempt:0,} returns container id \"10b63848b146eac6865e255a7a3092bdde63cd15dbbe643fafa6ad95f88fbcd6\"" Feb 13 19:59:47.430322 containerd[1561]: time="2025-02-13T19:59:47.430283302Z" level=info msg="StartContainer for \"10b63848b146eac6865e255a7a3092bdde63cd15dbbe643fafa6ad95f88fbcd6\"" Feb 13 19:59:47.498624 containerd[1561]: time="2025-02-13T19:59:47.498558447Z" level=info msg="StartContainer for 
\"10b63848b146eac6865e255a7a3092bdde63cd15dbbe643fafa6ad95f88fbcd6\" returns successfully" Feb 13 19:59:47.578020 kubelet[1901]: I0213 19:59:47.577930 1901 topology_manager.go:215] "Topology Admit Handler" podUID="49017dee-7f45-452a-9e57-4a3aac91eda2" podNamespace="default" podName="nfs-server-provisioner-0" Feb 13 19:59:47.622239 kubelet[1901]: I0213 19:59:47.622180 1901 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"data\" (UniqueName: \"kubernetes.io/empty-dir/49017dee-7f45-452a-9e57-4a3aac91eda2-data\") pod \"nfs-server-provisioner-0\" (UID: \"49017dee-7f45-452a-9e57-4a3aac91eda2\") " pod="default/nfs-server-provisioner-0" Feb 13 19:59:47.622239 kubelet[1901]: I0213 19:59:47.622226 1901 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-4z4r2\" (UniqueName: \"kubernetes.io/projected/49017dee-7f45-452a-9e57-4a3aac91eda2-kube-api-access-4z4r2\") pod \"nfs-server-provisioner-0\" (UID: \"49017dee-7f45-452a-9e57-4a3aac91eda2\") " pod="default/nfs-server-provisioner-0" Feb 13 19:59:47.882409 containerd[1561]: time="2025-02-13T19:59:47.882349331Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:nfs-server-provisioner-0,Uid:49017dee-7f45-452a-9e57-4a3aac91eda2,Namespace:default,Attempt:0,}" Feb 13 19:59:48.000726 systemd-networkd[1240]: cali60e51b789ff: Link UP Feb 13 19:59:48.001467 systemd-networkd[1240]: cali60e51b789ff: Gained carrier Feb 13 19:59:48.013975 containerd[1561]: 2025-02-13 19:59:47.928 [INFO][3670] cni-plugin/plugin.go 325: Calico CNI found existing endpoint: &{{WorkloadEndpoint projectcalico.org/v3} {10.0.0.102-k8s-nfs--server--provisioner--0-eth0 nfs-server-provisioner- default 49017dee-7f45-452a-9e57-4a3aac91eda2 1371 0 2025-02-13 19:59:44 +0000 UTC map[app:nfs-server-provisioner apps.kubernetes.io/pod-index:0 chart:nfs-server-provisioner-1.8.0 controller-revision-hash:nfs-server-provisioner-d5cbb7f57 heritage:Helm projectcalico.org/namespace:default projectcalico.org/orchestrator:k8s projectcalico.org/serviceaccount:nfs-server-provisioner release:nfs-server-provisioner statefulset.kubernetes.io/pod-name:nfs-server-provisioner-0] map[] [] [] []} {k8s 10.0.0.102 nfs-server-provisioner-0 eth0 nfs-server-provisioner [] [] [kns.default ksa.default.nfs-server-provisioner] cali60e51b789ff [{nfs TCP 2049 0 } {nfs-udp UDP 2049 0 } {nlockmgr TCP 32803 0 } {nlockmgr-udp UDP 32803 0 } {mountd TCP 20048 0 } {mountd-udp UDP 20048 0 } {rquotad TCP 875 0 } {rquotad-udp UDP 875 0 } {rpcbind TCP 111 0 } {rpcbind-udp UDP 111 0 } {statd TCP 662 0 } {statd-udp UDP 662 0 }] []}} ContainerID="aa439e0f858729b67fe74d69c63a61f299f697c99469399a8a7d1bd97e06ae44" Namespace="default" Pod="nfs-server-provisioner-0" WorkloadEndpoint="10.0.0.102-k8s-nfs--server--provisioner--0-" Feb 13 19:59:48.013975 containerd[1561]: 2025-02-13 19:59:47.928 [INFO][3670] cni-plugin/k8s.go 77: Extracted identifiers for CmdAddK8s ContainerID="aa439e0f858729b67fe74d69c63a61f299f697c99469399a8a7d1bd97e06ae44" Namespace="default" Pod="nfs-server-provisioner-0" WorkloadEndpoint="10.0.0.102-k8s-nfs--server--provisioner--0-eth0" Feb 13 19:59:48.013975 containerd[1561]: 2025-02-13 19:59:47.958 [INFO][3683] ipam/ipam_plugin.go 225: Calico CNI IPAM request count IPv4=1 IPv6=0 ContainerID="aa439e0f858729b67fe74d69c63a61f299f697c99469399a8a7d1bd97e06ae44" HandleID="k8s-pod-network.aa439e0f858729b67fe74d69c63a61f299f697c99469399a8a7d1bd97e06ae44" Workload="10.0.0.102-k8s-nfs--server--provisioner--0-eth0" Feb 13 
19:59:48.013975 containerd[1561]: 2025-02-13 19:59:47.970 [INFO][3683] ipam/ipam_plugin.go 265: Auto assigning IP ContainerID="aa439e0f858729b67fe74d69c63a61f299f697c99469399a8a7d1bd97e06ae44" HandleID="k8s-pod-network.aa439e0f858729b67fe74d69c63a61f299f697c99469399a8a7d1bd97e06ae44" Workload="10.0.0.102-k8s-nfs--server--provisioner--0-eth0" assignArgs=ipam.AutoAssignArgs{Num4:1, Num6:0, HandleID:(*string)(0xc000360b60), Attrs:map[string]string{"namespace":"default", "node":"10.0.0.102", "pod":"nfs-server-provisioner-0", "timestamp":"2025-02-13 19:59:47.958797139 +0000 UTC"}, Hostname:"10.0.0.102", IPv4Pools:[]net.IPNet{}, IPv6Pools:[]net.IPNet{}, MaxBlocksPerHost:0, HostReservedAttrIPv4s:(*ipam.HostReservedAttr)(nil), HostReservedAttrIPv6s:(*ipam.HostReservedAttr)(nil), IntendedUse:"Workload"} Feb 13 19:59:48.013975 containerd[1561]: 2025-02-13 19:59:47.970 [INFO][3683] ipam/ipam_plugin.go 353: About to acquire host-wide IPAM lock. Feb 13 19:59:48.013975 containerd[1561]: 2025-02-13 19:59:47.970 [INFO][3683] ipam/ipam_plugin.go 368: Acquired host-wide IPAM lock. Feb 13 19:59:48.013975 containerd[1561]: 2025-02-13 19:59:47.970 [INFO][3683] ipam/ipam.go 107: Auto-assign 1 ipv4, 0 ipv6 addrs for host '10.0.0.102' Feb 13 19:59:48.013975 containerd[1561]: 2025-02-13 19:59:47.972 [INFO][3683] ipam/ipam.go 660: Looking up existing affinities for host handle="k8s-pod-network.aa439e0f858729b67fe74d69c63a61f299f697c99469399a8a7d1bd97e06ae44" host="10.0.0.102" Feb 13 19:59:48.013975 containerd[1561]: 2025-02-13 19:59:47.977 [INFO][3683] ipam/ipam.go 372: Looking up existing affinities for host host="10.0.0.102" Feb 13 19:59:48.013975 containerd[1561]: 2025-02-13 19:59:47.980 [INFO][3683] ipam/ipam.go 489: Trying affinity for 192.168.54.192/26 host="10.0.0.102" Feb 13 19:59:48.013975 containerd[1561]: 2025-02-13 19:59:47.982 [INFO][3683] ipam/ipam.go 155: Attempting to load block cidr=192.168.54.192/26 host="10.0.0.102" Feb 13 19:59:48.013975 containerd[1561]: 2025-02-13 19:59:47.985 [INFO][3683] ipam/ipam.go 232: Affinity is confirmed and block has been loaded cidr=192.168.54.192/26 host="10.0.0.102" Feb 13 19:59:48.013975 containerd[1561]: 2025-02-13 19:59:47.985 [INFO][3683] ipam/ipam.go 1180: Attempting to assign 1 addresses from block block=192.168.54.192/26 handle="k8s-pod-network.aa439e0f858729b67fe74d69c63a61f299f697c99469399a8a7d1bd97e06ae44" host="10.0.0.102" Feb 13 19:59:48.013975 containerd[1561]: 2025-02-13 19:59:47.987 [INFO][3683] ipam/ipam.go 1685: Creating new handle: k8s-pod-network.aa439e0f858729b67fe74d69c63a61f299f697c99469399a8a7d1bd97e06ae44 Feb 13 19:59:48.013975 containerd[1561]: 2025-02-13 19:59:47.990 [INFO][3683] ipam/ipam.go 1203: Writing block in order to claim IPs block=192.168.54.192/26 handle="k8s-pod-network.aa439e0f858729b67fe74d69c63a61f299f697c99469399a8a7d1bd97e06ae44" host="10.0.0.102" Feb 13 19:59:48.013975 containerd[1561]: 2025-02-13 19:59:47.996 [INFO][3683] ipam/ipam.go 1216: Successfully claimed IPs: [192.168.54.195/26] block=192.168.54.192/26 handle="k8s-pod-network.aa439e0f858729b67fe74d69c63a61f299f697c99469399a8a7d1bd97e06ae44" host="10.0.0.102" Feb 13 19:59:48.013975 containerd[1561]: 2025-02-13 19:59:47.996 [INFO][3683] ipam/ipam.go 847: Auto-assigned 1 out of 1 IPv4s: [192.168.54.195/26] handle="k8s-pod-network.aa439e0f858729b67fe74d69c63a61f299f697c99469399a8a7d1bd97e06ae44" host="10.0.0.102" Feb 13 19:59:48.013975 containerd[1561]: 2025-02-13 19:59:47.996 [INFO][3683] ipam/ipam_plugin.go 374: Released host-wide IPAM lock. 
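[Annotation] A note on the IPAM lines above and the endpoint dumps below: the affine block 192.168.54.192/26 spans 192.168.54.192-192.168.54.255 (2^(32-26) = 64 addresses), and this request claims 192.168.54.195/26; the csi-node-driver endpoint further down in the log already holds 192.168.54.193. The Ports arrays in the v3.WorkloadEndpoint dumps print the same NFS service ports twice, once in decimal and once in hex, and they decode consistently:

    0x801  = 2049   (nfs, nfs-udp)
    0x8023 = 32803  (nlockmgr)
    0x4e50 = 20048  (mountd)
    0x36b  = 875    (rquotad)
    0x6f   = 111    (rpcbind)
    0x296  = 662    (statd)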
Feb 13 19:59:48.013975 containerd[1561]: 2025-02-13 19:59:47.996 [INFO][3683] ipam/ipam_plugin.go 283: Calico CNI IPAM assigned addresses IPv4=[192.168.54.195/26] IPv6=[] ContainerID="aa439e0f858729b67fe74d69c63a61f299f697c99469399a8a7d1bd97e06ae44" HandleID="k8s-pod-network.aa439e0f858729b67fe74d69c63a61f299f697c99469399a8a7d1bd97e06ae44" Workload="10.0.0.102-k8s-nfs--server--provisioner--0-eth0" Feb 13 19:59:48.014886 containerd[1561]: 2025-02-13 19:59:47.998 [INFO][3670] cni-plugin/k8s.go 386: Populated endpoint ContainerID="aa439e0f858729b67fe74d69c63a61f299f697c99469399a8a7d1bd97e06ae44" Namespace="default" Pod="nfs-server-provisioner-0" WorkloadEndpoint="10.0.0.102-k8s-nfs--server--provisioner--0-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"10.0.0.102-k8s-nfs--server--provisioner--0-eth0", GenerateName:"nfs-server-provisioner-", Namespace:"default", SelfLink:"", UID:"49017dee-7f45-452a-9e57-4a3aac91eda2", ResourceVersion:"1371", Generation:0, CreationTimestamp:time.Date(2025, time.February, 13, 19, 59, 44, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"app":"nfs-server-provisioner", "apps.kubernetes.io/pod-index":"0", "chart":"nfs-server-provisioner-1.8.0", "controller-revision-hash":"nfs-server-provisioner-d5cbb7f57", "heritage":"Helm", "projectcalico.org/namespace":"default", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"nfs-server-provisioner", "release":"nfs-server-provisioner", "statefulset.kubernetes.io/pod-name":"nfs-server-provisioner-0"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"10.0.0.102", ContainerID:"", Pod:"nfs-server-provisioner-0", Endpoint:"eth0", ServiceAccountName:"nfs-server-provisioner", IPNetworks:[]string{"192.168.54.195/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.default", "ksa.default.nfs-server-provisioner"}, InterfaceName:"cali60e51b789ff", MAC:"", Ports:[]v3.WorkloadEndpointPort{v3.WorkloadEndpointPort{Name:"nfs", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x801, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"nfs-udp", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"UDP"}, Port:0x801, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"nlockmgr", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x8023, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"nlockmgr-udp", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"UDP"}, Port:0x8023, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"mountd", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x4e50, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"mountd-udp", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"UDP"}, Port:0x4e50, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"rquotad", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x36b, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"rquotad-udp", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"UDP"}, Port:0x36b, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"rpcbind", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x6f, HostPort:0x0, HostIP:""}, 
v3.WorkloadEndpointPort{Name:"rpcbind-udp", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"UDP"}, Port:0x6f, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"statd", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x296, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"statd-udp", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"UDP"}, Port:0x296, HostPort:0x0, HostIP:""}}, AllowSpoofedSourcePrefixes:[]string(nil)}} Feb 13 19:59:48.014886 containerd[1561]: 2025-02-13 19:59:47.999 [INFO][3670] cni-plugin/k8s.go 387: Calico CNI using IPs: [192.168.54.195/32] ContainerID="aa439e0f858729b67fe74d69c63a61f299f697c99469399a8a7d1bd97e06ae44" Namespace="default" Pod="nfs-server-provisioner-0" WorkloadEndpoint="10.0.0.102-k8s-nfs--server--provisioner--0-eth0" Feb 13 19:59:48.014886 containerd[1561]: 2025-02-13 19:59:47.999 [INFO][3670] cni-plugin/dataplane_linux.go 69: Setting the host side veth name to cali60e51b789ff ContainerID="aa439e0f858729b67fe74d69c63a61f299f697c99469399a8a7d1bd97e06ae44" Namespace="default" Pod="nfs-server-provisioner-0" WorkloadEndpoint="10.0.0.102-k8s-nfs--server--provisioner--0-eth0" Feb 13 19:59:48.014886 containerd[1561]: 2025-02-13 19:59:48.001 [INFO][3670] cni-plugin/dataplane_linux.go 508: Disabling IPv4 forwarding ContainerID="aa439e0f858729b67fe74d69c63a61f299f697c99469399a8a7d1bd97e06ae44" Namespace="default" Pod="nfs-server-provisioner-0" WorkloadEndpoint="10.0.0.102-k8s-nfs--server--provisioner--0-eth0" Feb 13 19:59:48.015110 containerd[1561]: 2025-02-13 19:59:48.002 [INFO][3670] cni-plugin/k8s.go 414: Added Mac, interface name, and active container ID to endpoint ContainerID="aa439e0f858729b67fe74d69c63a61f299f697c99469399a8a7d1bd97e06ae44" Namespace="default" Pod="nfs-server-provisioner-0" WorkloadEndpoint="10.0.0.102-k8s-nfs--server--provisioner--0-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"10.0.0.102-k8s-nfs--server--provisioner--0-eth0", GenerateName:"nfs-server-provisioner-", Namespace:"default", SelfLink:"", UID:"49017dee-7f45-452a-9e57-4a3aac91eda2", ResourceVersion:"1371", Generation:0, CreationTimestamp:time.Date(2025, time.February, 13, 19, 59, 44, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"app":"nfs-server-provisioner", "apps.kubernetes.io/pod-index":"0", "chart":"nfs-server-provisioner-1.8.0", "controller-revision-hash":"nfs-server-provisioner-d5cbb7f57", "heritage":"Helm", "projectcalico.org/namespace":"default", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"nfs-server-provisioner", "release":"nfs-server-provisioner", "statefulset.kubernetes.io/pod-name":"nfs-server-provisioner-0"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"10.0.0.102", ContainerID:"aa439e0f858729b67fe74d69c63a61f299f697c99469399a8a7d1bd97e06ae44", Pod:"nfs-server-provisioner-0", Endpoint:"eth0", ServiceAccountName:"nfs-server-provisioner", IPNetworks:[]string{"192.168.54.195/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.default", "ksa.default.nfs-server-provisioner"}, InterfaceName:"cali60e51b789ff", MAC:"96:b9:7f:82:cc:4c", Ports:[]v3.WorkloadEndpointPort{v3.WorkloadEndpointPort{Name:"nfs", 
Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x801, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"nfs-udp", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"UDP"}, Port:0x801, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"nlockmgr", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x8023, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"nlockmgr-udp", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"UDP"}, Port:0x8023, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"mountd", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x4e50, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"mountd-udp", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"UDP"}, Port:0x4e50, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"rquotad", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x36b, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"rquotad-udp", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"UDP"}, Port:0x36b, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"rpcbind", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x6f, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"rpcbind-udp", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"UDP"}, Port:0x6f, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"statd", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x296, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"statd-udp", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"UDP"}, Port:0x296, HostPort:0x0, HostIP:""}}, AllowSpoofedSourcePrefixes:[]string(nil)}} Feb 13 19:59:48.015110 containerd[1561]: 2025-02-13 19:59:48.011 [INFO][3670] cni-plugin/k8s.go 500: Wrote updated endpoint to datastore ContainerID="aa439e0f858729b67fe74d69c63a61f299f697c99469399a8a7d1bd97e06ae44" Namespace="default" Pod="nfs-server-provisioner-0" WorkloadEndpoint="10.0.0.102-k8s-nfs--server--provisioner--0-eth0" Feb 13 19:59:48.038303 containerd[1561]: time="2025-02-13T19:59:48.038174345Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Feb 13 19:59:48.038303 containerd[1561]: time="2025-02-13T19:59:48.038243886Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Feb 13 19:59:48.038303 containerd[1561]: time="2025-02-13T19:59:48.038259445Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Feb 13 19:59:48.038604 containerd[1561]: time="2025-02-13T19:59:48.038378739Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." 
runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Feb 13 19:59:48.039423 kubelet[1901]: E0213 19:59:48.039377 1901 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Feb 13 19:59:48.073609 systemd-resolved[1455]: Failed to determine the local hostname and LLMNR/mDNS names, ignoring: No such device or address Feb 13 19:59:48.104124 containerd[1561]: time="2025-02-13T19:59:48.104060963Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:nfs-server-provisioner-0,Uid:49017dee-7f45-452a-9e57-4a3aac91eda2,Namespace:default,Attempt:0,} returns sandbox id \"aa439e0f858729b67fe74d69c63a61f299f697c99469399a8a7d1bd97e06ae44\"" Feb 13 19:59:48.105810 containerd[1561]: time="2025-02-13T19:59:48.105708332Z" level=info msg="PullImage \"registry.k8s.io/sig-storage/nfs-provisioner:v4.0.8\"" Feb 13 19:59:48.407082 kubelet[1901]: E0213 19:59:48.407029 1901 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Feb 13 19:59:48.408912 kubelet[1901]: E0213 19:59:48.408858 1901 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Feb 13 19:59:48.423339 kubelet[1901]: I0213 19:59:48.423276 1901 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="calico-system/calico-node-hkfch" podStartSLOduration=4.423256072 podStartE2EDuration="4.423256072s" podCreationTimestamp="2025-02-13 19:59:44 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-02-13 19:59:48.422910883 +0000 UTC m=+57.795746941" watchObservedRunningTime="2025-02-13 19:59:48.423256072 +0000 UTC m=+57.796092130" Feb 13 19:59:49.040028 kubelet[1901]: E0213 19:59:49.039891 1901 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Feb 13 19:59:49.417977 kubelet[1901]: E0213 19:59:49.414097 1901 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Feb 13 19:59:49.431028 systemd-networkd[1240]: cali60e51b789ff: Gained IPv6LL Feb 13 19:59:50.040659 kubelet[1901]: E0213 19:59:50.040587 1901 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Feb 13 19:59:50.211037 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount1316037665.mount: Deactivated successfully. Feb 13 19:59:51.000516 kubelet[1901]: E0213 19:59:51.000461 1901 file.go:104] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Feb 13 19:59:51.014145 containerd[1561]: time="2025-02-13T19:59:51.014083582Z" level=info msg="StopPodSandbox for \"44e1caa37b87750a9f2adb47a266dbd737b6af7b655a57d9c7aa4956675f8dcf\"" Feb 13 19:59:51.041792 kubelet[1901]: E0213 19:59:51.041474 1901 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Feb 13 19:59:51.123869 containerd[1561]: 2025-02-13 19:59:51.076 [WARNING][4011] cni-plugin/k8s.go 572: CNI_CONTAINERID does not match WorkloadEndpoint ContainerID, don't delete WEP. 
ContainerID="44e1caa37b87750a9f2adb47a266dbd737b6af7b655a57d9c7aa4956675f8dcf" WorkloadEndpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"10.0.0.102-k8s-csi--node--driver--2d48g-eth0", GenerateName:"csi-node-driver-", Namespace:"calico-system", SelfLink:"", UID:"c7cccdaa-623e-4b78-b7f9-71b591d49e20", ResourceVersion:"1196", Generation:0, CreationTimestamp:time.Date(2025, time.February, 13, 19, 58, 51, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"app.kubernetes.io/name":"csi-node-driver", "controller-revision-hash":"65bf684474", "k8s-app":"csi-node-driver", "name":"csi-node-driver", "pod-template-generation":"1", "projectcalico.org/namespace":"calico-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"csi-node-driver"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"10.0.0.102", ContainerID:"742b6efeaa6c16c9a2ce63af7073ace12c0d1201d61f1d3d9c4958fafde6f229", Pod:"csi-node-driver-2d48g", Endpoint:"eth0", ServiceAccountName:"csi-node-driver", IPNetworks:[]string{"192.168.54.193/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-system", "ksa.calico-system.csi-node-driver"}, InterfaceName:"cali01d70187a4a", MAC:"", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil)}} Feb 13 19:59:51.123869 containerd[1561]: 2025-02-13 19:59:51.076 [INFO][4011] cni-plugin/k8s.go 608: Cleaning up netns ContainerID="44e1caa37b87750a9f2adb47a266dbd737b6af7b655a57d9c7aa4956675f8dcf" Feb 13 19:59:51.123869 containerd[1561]: 2025-02-13 19:59:51.076 [INFO][4011] cni-plugin/dataplane_linux.go 555: CleanUpNamespace called with no netns name, ignoring. ContainerID="44e1caa37b87750a9f2adb47a266dbd737b6af7b655a57d9c7aa4956675f8dcf" iface="eth0" netns="" Feb 13 19:59:51.123869 containerd[1561]: 2025-02-13 19:59:51.077 [INFO][4011] cni-plugin/k8s.go 615: Releasing IP address(es) ContainerID="44e1caa37b87750a9f2adb47a266dbd737b6af7b655a57d9c7aa4956675f8dcf" Feb 13 19:59:51.123869 containerd[1561]: 2025-02-13 19:59:51.077 [INFO][4011] cni-plugin/utils.go 188: Calico CNI releasing IP address ContainerID="44e1caa37b87750a9f2adb47a266dbd737b6af7b655a57d9c7aa4956675f8dcf" Feb 13 19:59:51.123869 containerd[1561]: 2025-02-13 19:59:51.108 [INFO][4019] ipam/ipam_plugin.go 412: Releasing address using handleID ContainerID="44e1caa37b87750a9f2adb47a266dbd737b6af7b655a57d9c7aa4956675f8dcf" HandleID="k8s-pod-network.44e1caa37b87750a9f2adb47a266dbd737b6af7b655a57d9c7aa4956675f8dcf" Workload="10.0.0.102-k8s-csi--node--driver--2d48g-eth0" Feb 13 19:59:51.123869 containerd[1561]: 2025-02-13 19:59:51.108 [INFO][4019] ipam/ipam_plugin.go 353: About to acquire host-wide IPAM lock. Feb 13 19:59:51.123869 containerd[1561]: 2025-02-13 19:59:51.108 [INFO][4019] ipam/ipam_plugin.go 368: Acquired host-wide IPAM lock. Feb 13 19:59:51.123869 containerd[1561]: 2025-02-13 19:59:51.116 [WARNING][4019] ipam/ipam_plugin.go 429: Asked to release address but it doesn't exist. 
Ignoring ContainerID="44e1caa37b87750a9f2adb47a266dbd737b6af7b655a57d9c7aa4956675f8dcf" HandleID="k8s-pod-network.44e1caa37b87750a9f2adb47a266dbd737b6af7b655a57d9c7aa4956675f8dcf" Workload="10.0.0.102-k8s-csi--node--driver--2d48g-eth0" Feb 13 19:59:51.123869 containerd[1561]: 2025-02-13 19:59:51.116 [INFO][4019] ipam/ipam_plugin.go 440: Releasing address using workloadID ContainerID="44e1caa37b87750a9f2adb47a266dbd737b6af7b655a57d9c7aa4956675f8dcf" HandleID="k8s-pod-network.44e1caa37b87750a9f2adb47a266dbd737b6af7b655a57d9c7aa4956675f8dcf" Workload="10.0.0.102-k8s-csi--node--driver--2d48g-eth0" Feb 13 19:59:51.123869 containerd[1561]: 2025-02-13 19:59:51.118 [INFO][4019] ipam/ipam_plugin.go 374: Released host-wide IPAM lock. Feb 13 19:59:51.123869 containerd[1561]: 2025-02-13 19:59:51.121 [INFO][4011] cni-plugin/k8s.go 621: Teardown processing complete. ContainerID="44e1caa37b87750a9f2adb47a266dbd737b6af7b655a57d9c7aa4956675f8dcf" Feb 13 19:59:51.124398 containerd[1561]: time="2025-02-13T19:59:51.123911379Z" level=info msg="TearDown network for sandbox \"44e1caa37b87750a9f2adb47a266dbd737b6af7b655a57d9c7aa4956675f8dcf\" successfully" Feb 13 19:59:51.124398 containerd[1561]: time="2025-02-13T19:59:51.123943840Z" level=info msg="StopPodSandbox for \"44e1caa37b87750a9f2adb47a266dbd737b6af7b655a57d9c7aa4956675f8dcf\" returns successfully" Feb 13 19:59:51.124602 containerd[1561]: time="2025-02-13T19:59:51.124545361Z" level=info msg="RemovePodSandbox for \"44e1caa37b87750a9f2adb47a266dbd737b6af7b655a57d9c7aa4956675f8dcf\"" Feb 13 19:59:51.124650 containerd[1561]: time="2025-02-13T19:59:51.124606906Z" level=info msg="Forcibly stopping sandbox \"44e1caa37b87750a9f2adb47a266dbd737b6af7b655a57d9c7aa4956675f8dcf\"" Feb 13 19:59:51.207346 containerd[1561]: 2025-02-13 19:59:51.162 [WARNING][4044] cni-plugin/k8s.go 572: CNI_CONTAINERID does not match WorkloadEndpoint ContainerID, don't delete WEP. 
ContainerID="44e1caa37b87750a9f2adb47a266dbd737b6af7b655a57d9c7aa4956675f8dcf" WorkloadEndpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"10.0.0.102-k8s-csi--node--driver--2d48g-eth0", GenerateName:"csi-node-driver-", Namespace:"calico-system", SelfLink:"", UID:"c7cccdaa-623e-4b78-b7f9-71b591d49e20", ResourceVersion:"1196", Generation:0, CreationTimestamp:time.Date(2025, time.February, 13, 19, 58, 51, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"app.kubernetes.io/name":"csi-node-driver", "controller-revision-hash":"65bf684474", "k8s-app":"csi-node-driver", "name":"csi-node-driver", "pod-template-generation":"1", "projectcalico.org/namespace":"calico-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"csi-node-driver"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"10.0.0.102", ContainerID:"742b6efeaa6c16c9a2ce63af7073ace12c0d1201d61f1d3d9c4958fafde6f229", Pod:"csi-node-driver-2d48g", Endpoint:"eth0", ServiceAccountName:"csi-node-driver", IPNetworks:[]string{"192.168.54.193/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-system", "ksa.calico-system.csi-node-driver"}, InterfaceName:"cali01d70187a4a", MAC:"", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil)}} Feb 13 19:59:51.207346 containerd[1561]: 2025-02-13 19:59:51.162 [INFO][4044] cni-plugin/k8s.go 608: Cleaning up netns ContainerID="44e1caa37b87750a9f2adb47a266dbd737b6af7b655a57d9c7aa4956675f8dcf" Feb 13 19:59:51.207346 containerd[1561]: 2025-02-13 19:59:51.162 [INFO][4044] cni-plugin/dataplane_linux.go 555: CleanUpNamespace called with no netns name, ignoring. ContainerID="44e1caa37b87750a9f2adb47a266dbd737b6af7b655a57d9c7aa4956675f8dcf" iface="eth0" netns="" Feb 13 19:59:51.207346 containerd[1561]: 2025-02-13 19:59:51.162 [INFO][4044] cni-plugin/k8s.go 615: Releasing IP address(es) ContainerID="44e1caa37b87750a9f2adb47a266dbd737b6af7b655a57d9c7aa4956675f8dcf" Feb 13 19:59:51.207346 containerd[1561]: 2025-02-13 19:59:51.162 [INFO][4044] cni-plugin/utils.go 188: Calico CNI releasing IP address ContainerID="44e1caa37b87750a9f2adb47a266dbd737b6af7b655a57d9c7aa4956675f8dcf" Feb 13 19:59:51.207346 containerd[1561]: 2025-02-13 19:59:51.191 [INFO][4052] ipam/ipam_plugin.go 412: Releasing address using handleID ContainerID="44e1caa37b87750a9f2adb47a266dbd737b6af7b655a57d9c7aa4956675f8dcf" HandleID="k8s-pod-network.44e1caa37b87750a9f2adb47a266dbd737b6af7b655a57d9c7aa4956675f8dcf" Workload="10.0.0.102-k8s-csi--node--driver--2d48g-eth0" Feb 13 19:59:51.207346 containerd[1561]: 2025-02-13 19:59:51.191 [INFO][4052] ipam/ipam_plugin.go 353: About to acquire host-wide IPAM lock. Feb 13 19:59:51.207346 containerd[1561]: 2025-02-13 19:59:51.192 [INFO][4052] ipam/ipam_plugin.go 368: Acquired host-wide IPAM lock. Feb 13 19:59:51.207346 containerd[1561]: 2025-02-13 19:59:51.199 [WARNING][4052] ipam/ipam_plugin.go 429: Asked to release address but it doesn't exist. 
Ignoring ContainerID="44e1caa37b87750a9f2adb47a266dbd737b6af7b655a57d9c7aa4956675f8dcf" HandleID="k8s-pod-network.44e1caa37b87750a9f2adb47a266dbd737b6af7b655a57d9c7aa4956675f8dcf" Workload="10.0.0.102-k8s-csi--node--driver--2d48g-eth0" Feb 13 19:59:51.207346 containerd[1561]: 2025-02-13 19:59:51.199 [INFO][4052] ipam/ipam_plugin.go 440: Releasing address using workloadID ContainerID="44e1caa37b87750a9f2adb47a266dbd737b6af7b655a57d9c7aa4956675f8dcf" HandleID="k8s-pod-network.44e1caa37b87750a9f2adb47a266dbd737b6af7b655a57d9c7aa4956675f8dcf" Workload="10.0.0.102-k8s-csi--node--driver--2d48g-eth0" Feb 13 19:59:51.207346 containerd[1561]: 2025-02-13 19:59:51.200 [INFO][4052] ipam/ipam_plugin.go 374: Released host-wide IPAM lock. Feb 13 19:59:51.207346 containerd[1561]: 2025-02-13 19:59:51.203 [INFO][4044] cni-plugin/k8s.go 621: Teardown processing complete. ContainerID="44e1caa37b87750a9f2adb47a266dbd737b6af7b655a57d9c7aa4956675f8dcf" Feb 13 19:59:51.207346 containerd[1561]: time="2025-02-13T19:59:51.206898805Z" level=info msg="TearDown network for sandbox \"44e1caa37b87750a9f2adb47a266dbd737b6af7b655a57d9c7aa4956675f8dcf\" successfully" Feb 13 19:59:51.380854 containerd[1561]: time="2025-02-13T19:59:51.379407249Z" level=warning msg="Failed to get podSandbox status for container event for sandboxID \"44e1caa37b87750a9f2adb47a266dbd737b6af7b655a57d9c7aa4956675f8dcf\": an error occurred when try to find sandbox: not found. Sending the event with nil podSandboxStatus." Feb 13 19:59:51.380854 containerd[1561]: time="2025-02-13T19:59:51.379484574Z" level=info msg="RemovePodSandbox \"44e1caa37b87750a9f2adb47a266dbd737b6af7b655a57d9c7aa4956675f8dcf\" returns successfully" Feb 13 19:59:51.380854 containerd[1561]: time="2025-02-13T19:59:51.380135468Z" level=info msg="StopPodSandbox for \"db29ef37f428b422ce2f19bf8be18b3d8ba6fa38a70fb16f37d683d1eb537828\"" Feb 13 19:59:51.461353 containerd[1561]: 2025-02-13 19:59:51.417 [WARNING][4075] cni-plugin/k8s.go 572: CNI_CONTAINERID does not match WorkloadEndpoint ContainerID, don't delete WEP. 
ContainerID="db29ef37f428b422ce2f19bf8be18b3d8ba6fa38a70fb16f37d683d1eb537828" WorkloadEndpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"10.0.0.102-k8s-nginx--deployment--85f456d6dd--gl9h2-eth0", GenerateName:"nginx-deployment-85f456d6dd-", Namespace:"default", SelfLink:"", UID:"565a1071-1bcc-4669-b9a9-4e955e40c368", ResourceVersion:"1177", Generation:0, CreationTimestamp:time.Date(2025, time.February, 13, 19, 59, 16, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"app":"nginx", "pod-template-hash":"85f456d6dd", "projectcalico.org/namespace":"default", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"default"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"10.0.0.102", ContainerID:"70aea53b3a965186ff7807502c909e3be7cc21f29e320a9e56fe0ec9afd0493b", Pod:"nginx-deployment-85f456d6dd-gl9h2", Endpoint:"eth0", ServiceAccountName:"default", IPNetworks:[]string{"192.168.54.194/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.default", "ksa.default.default"}, InterfaceName:"cali237bfe8e030", MAC:"", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil)}} Feb 13 19:59:51.461353 containerd[1561]: 2025-02-13 19:59:51.417 [INFO][4075] cni-plugin/k8s.go 608: Cleaning up netns ContainerID="db29ef37f428b422ce2f19bf8be18b3d8ba6fa38a70fb16f37d683d1eb537828" Feb 13 19:59:51.461353 containerd[1561]: 2025-02-13 19:59:51.417 [INFO][4075] cni-plugin/dataplane_linux.go 555: CleanUpNamespace called with no netns name, ignoring. ContainerID="db29ef37f428b422ce2f19bf8be18b3d8ba6fa38a70fb16f37d683d1eb537828" iface="eth0" netns="" Feb 13 19:59:51.461353 containerd[1561]: 2025-02-13 19:59:51.417 [INFO][4075] cni-plugin/k8s.go 615: Releasing IP address(es) ContainerID="db29ef37f428b422ce2f19bf8be18b3d8ba6fa38a70fb16f37d683d1eb537828" Feb 13 19:59:51.461353 containerd[1561]: 2025-02-13 19:59:51.417 [INFO][4075] cni-plugin/utils.go 188: Calico CNI releasing IP address ContainerID="db29ef37f428b422ce2f19bf8be18b3d8ba6fa38a70fb16f37d683d1eb537828" Feb 13 19:59:51.461353 containerd[1561]: 2025-02-13 19:59:51.447 [INFO][4083] ipam/ipam_plugin.go 412: Releasing address using handleID ContainerID="db29ef37f428b422ce2f19bf8be18b3d8ba6fa38a70fb16f37d683d1eb537828" HandleID="k8s-pod-network.db29ef37f428b422ce2f19bf8be18b3d8ba6fa38a70fb16f37d683d1eb537828" Workload="10.0.0.102-k8s-nginx--deployment--85f456d6dd--gl9h2-eth0" Feb 13 19:59:51.461353 containerd[1561]: 2025-02-13 19:59:51.447 [INFO][4083] ipam/ipam_plugin.go 353: About to acquire host-wide IPAM lock. Feb 13 19:59:51.461353 containerd[1561]: 2025-02-13 19:59:51.447 [INFO][4083] ipam/ipam_plugin.go 368: Acquired host-wide IPAM lock. Feb 13 19:59:51.461353 containerd[1561]: 2025-02-13 19:59:51.454 [WARNING][4083] ipam/ipam_plugin.go 429: Asked to release address but it doesn't exist. 
Ignoring ContainerID="db29ef37f428b422ce2f19bf8be18b3d8ba6fa38a70fb16f37d683d1eb537828" HandleID="k8s-pod-network.db29ef37f428b422ce2f19bf8be18b3d8ba6fa38a70fb16f37d683d1eb537828" Workload="10.0.0.102-k8s-nginx--deployment--85f456d6dd--gl9h2-eth0" Feb 13 19:59:51.461353 containerd[1561]: 2025-02-13 19:59:51.454 [INFO][4083] ipam/ipam_plugin.go 440: Releasing address using workloadID ContainerID="db29ef37f428b422ce2f19bf8be18b3d8ba6fa38a70fb16f37d683d1eb537828" HandleID="k8s-pod-network.db29ef37f428b422ce2f19bf8be18b3d8ba6fa38a70fb16f37d683d1eb537828" Workload="10.0.0.102-k8s-nginx--deployment--85f456d6dd--gl9h2-eth0" Feb 13 19:59:51.461353 containerd[1561]: 2025-02-13 19:59:51.456 [INFO][4083] ipam/ipam_plugin.go 374: Released host-wide IPAM lock. Feb 13 19:59:51.461353 containerd[1561]: 2025-02-13 19:59:51.458 [INFO][4075] cni-plugin/k8s.go 621: Teardown processing complete. ContainerID="db29ef37f428b422ce2f19bf8be18b3d8ba6fa38a70fb16f37d683d1eb537828" Feb 13 19:59:51.461982 containerd[1561]: time="2025-02-13T19:59:51.461373717Z" level=info msg="TearDown network for sandbox \"db29ef37f428b422ce2f19bf8be18b3d8ba6fa38a70fb16f37d683d1eb537828\" successfully" Feb 13 19:59:51.461982 containerd[1561]: time="2025-02-13T19:59:51.461404144Z" level=info msg="StopPodSandbox for \"db29ef37f428b422ce2f19bf8be18b3d8ba6fa38a70fb16f37d683d1eb537828\" returns successfully" Feb 13 19:59:51.462199 containerd[1561]: time="2025-02-13T19:59:51.462156378Z" level=info msg="RemovePodSandbox for \"db29ef37f428b422ce2f19bf8be18b3d8ba6fa38a70fb16f37d683d1eb537828\"" Feb 13 19:59:51.462255 containerd[1561]: time="2025-02-13T19:59:51.462203817Z" level=info msg="Forcibly stopping sandbox \"db29ef37f428b422ce2f19bf8be18b3d8ba6fa38a70fb16f37d683d1eb537828\"" Feb 13 19:59:51.540336 containerd[1561]: 2025-02-13 19:59:51.502 [WARNING][4107] cni-plugin/k8s.go 572: CNI_CONTAINERID does not match WorkloadEndpoint ContainerID, don't delete WEP. 
ContainerID="db29ef37f428b422ce2f19bf8be18b3d8ba6fa38a70fb16f37d683d1eb537828" WorkloadEndpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"10.0.0.102-k8s-nginx--deployment--85f456d6dd--gl9h2-eth0", GenerateName:"nginx-deployment-85f456d6dd-", Namespace:"default", SelfLink:"", UID:"565a1071-1bcc-4669-b9a9-4e955e40c368", ResourceVersion:"1177", Generation:0, CreationTimestamp:time.Date(2025, time.February, 13, 19, 59, 16, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"app":"nginx", "pod-template-hash":"85f456d6dd", "projectcalico.org/namespace":"default", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"default"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"10.0.0.102", ContainerID:"70aea53b3a965186ff7807502c909e3be7cc21f29e320a9e56fe0ec9afd0493b", Pod:"nginx-deployment-85f456d6dd-gl9h2", Endpoint:"eth0", ServiceAccountName:"default", IPNetworks:[]string{"192.168.54.194/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.default", "ksa.default.default"}, InterfaceName:"cali237bfe8e030", MAC:"", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil)}} Feb 13 19:59:51.540336 containerd[1561]: 2025-02-13 19:59:51.502 [INFO][4107] cni-plugin/k8s.go 608: Cleaning up netns ContainerID="db29ef37f428b422ce2f19bf8be18b3d8ba6fa38a70fb16f37d683d1eb537828" Feb 13 19:59:51.540336 containerd[1561]: 2025-02-13 19:59:51.502 [INFO][4107] cni-plugin/dataplane_linux.go 555: CleanUpNamespace called with no netns name, ignoring. ContainerID="db29ef37f428b422ce2f19bf8be18b3d8ba6fa38a70fb16f37d683d1eb537828" iface="eth0" netns="" Feb 13 19:59:51.540336 containerd[1561]: 2025-02-13 19:59:51.502 [INFO][4107] cni-plugin/k8s.go 615: Releasing IP address(es) ContainerID="db29ef37f428b422ce2f19bf8be18b3d8ba6fa38a70fb16f37d683d1eb537828" Feb 13 19:59:51.540336 containerd[1561]: 2025-02-13 19:59:51.502 [INFO][4107] cni-plugin/utils.go 188: Calico CNI releasing IP address ContainerID="db29ef37f428b422ce2f19bf8be18b3d8ba6fa38a70fb16f37d683d1eb537828" Feb 13 19:59:51.540336 containerd[1561]: 2025-02-13 19:59:51.527 [INFO][4119] ipam/ipam_plugin.go 412: Releasing address using handleID ContainerID="db29ef37f428b422ce2f19bf8be18b3d8ba6fa38a70fb16f37d683d1eb537828" HandleID="k8s-pod-network.db29ef37f428b422ce2f19bf8be18b3d8ba6fa38a70fb16f37d683d1eb537828" Workload="10.0.0.102-k8s-nginx--deployment--85f456d6dd--gl9h2-eth0" Feb 13 19:59:51.540336 containerd[1561]: 2025-02-13 19:59:51.527 [INFO][4119] ipam/ipam_plugin.go 353: About to acquire host-wide IPAM lock. Feb 13 19:59:51.540336 containerd[1561]: 2025-02-13 19:59:51.528 [INFO][4119] ipam/ipam_plugin.go 368: Acquired host-wide IPAM lock. Feb 13 19:59:51.540336 containerd[1561]: 2025-02-13 19:59:51.533 [WARNING][4119] ipam/ipam_plugin.go 429: Asked to release address but it doesn't exist. 
Ignoring ContainerID="db29ef37f428b422ce2f19bf8be18b3d8ba6fa38a70fb16f37d683d1eb537828" HandleID="k8s-pod-network.db29ef37f428b422ce2f19bf8be18b3d8ba6fa38a70fb16f37d683d1eb537828" Workload="10.0.0.102-k8s-nginx--deployment--85f456d6dd--gl9h2-eth0" Feb 13 19:59:51.540336 containerd[1561]: 2025-02-13 19:59:51.533 [INFO][4119] ipam/ipam_plugin.go 440: Releasing address using workloadID ContainerID="db29ef37f428b422ce2f19bf8be18b3d8ba6fa38a70fb16f37d683d1eb537828" HandleID="k8s-pod-network.db29ef37f428b422ce2f19bf8be18b3d8ba6fa38a70fb16f37d683d1eb537828" Workload="10.0.0.102-k8s-nginx--deployment--85f456d6dd--gl9h2-eth0" Feb 13 19:59:51.540336 containerd[1561]: 2025-02-13 19:59:51.534 [INFO][4119] ipam/ipam_plugin.go 374: Released host-wide IPAM lock. Feb 13 19:59:51.540336 containerd[1561]: 2025-02-13 19:59:51.537 [INFO][4107] cni-plugin/k8s.go 621: Teardown processing complete. ContainerID="db29ef37f428b422ce2f19bf8be18b3d8ba6fa38a70fb16f37d683d1eb537828" Feb 13 19:59:51.541043 containerd[1561]: time="2025-02-13T19:59:51.540377675Z" level=info msg="TearDown network for sandbox \"db29ef37f428b422ce2f19bf8be18b3d8ba6fa38a70fb16f37d683d1eb537828\" successfully" Feb 13 19:59:51.545068 containerd[1561]: time="2025-02-13T19:59:51.545037505Z" level=warning msg="Failed to get podSandbox status for container event for sandboxID \"db29ef37f428b422ce2f19bf8be18b3d8ba6fa38a70fb16f37d683d1eb537828\": an error occurred when try to find sandbox: not found. Sending the event with nil podSandboxStatus." Feb 13 19:59:51.545168 containerd[1561]: time="2025-02-13T19:59:51.545086306Z" level=info msg="RemovePodSandbox \"db29ef37f428b422ce2f19bf8be18b3d8ba6fa38a70fb16f37d683d1eb537828\" returns successfully" Feb 13 19:59:51.545722 containerd[1561]: time="2025-02-13T19:59:51.545618136Z" level=info msg="StopPodSandbox for \"2c562c077f183462007d2ae58b40e02d5d45894dc1f17d72ef6dba27df507786\"" Feb 13 19:59:51.545722 containerd[1561]: time="2025-02-13T19:59:51.545711301Z" level=info msg="TearDown network for sandbox \"2c562c077f183462007d2ae58b40e02d5d45894dc1f17d72ef6dba27df507786\" successfully" Feb 13 19:59:51.545817 containerd[1561]: time="2025-02-13T19:59:51.545726509Z" level=info msg="StopPodSandbox for \"2c562c077f183462007d2ae58b40e02d5d45894dc1f17d72ef6dba27df507786\" returns successfully" Feb 13 19:59:51.546356 containerd[1561]: time="2025-02-13T19:59:51.546327048Z" level=info msg="RemovePodSandbox for \"2c562c077f183462007d2ae58b40e02d5d45894dc1f17d72ef6dba27df507786\"" Feb 13 19:59:51.546503 containerd[1561]: time="2025-02-13T19:59:51.546437567Z" level=info msg="Forcibly stopping sandbox \"2c562c077f183462007d2ae58b40e02d5d45894dc1f17d72ef6dba27df507786\"" Feb 13 19:59:51.546597 containerd[1561]: time="2025-02-13T19:59:51.546581376Z" level=info msg="TearDown network for sandbox \"2c562c077f183462007d2ae58b40e02d5d45894dc1f17d72ef6dba27df507786\" successfully" Feb 13 19:59:51.550331 containerd[1561]: time="2025-02-13T19:59:51.550253859Z" level=warning msg="Failed to get podSandbox status for container event for sandboxID \"2c562c077f183462007d2ae58b40e02d5d45894dc1f17d72ef6dba27df507786\": an error occurred when try to find sandbox: not found. Sending the event with nil podSandboxStatus." 
Feb 13 19:59:51.550331 containerd[1561]: time="2025-02-13T19:59:51.550328149Z" level=info msg="RemovePodSandbox \"2c562c077f183462007d2ae58b40e02d5d45894dc1f17d72ef6dba27df507786\" returns successfully" Feb 13 19:59:51.924559 containerd[1561]: time="2025-02-13T19:59:51.924461626Z" level=info msg="ImageCreate event name:\"registry.k8s.io/sig-storage/nfs-provisioner:v4.0.8\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Feb 13 19:59:51.925288 containerd[1561]: time="2025-02-13T19:59:51.925206716Z" level=info msg="stop pulling image registry.k8s.io/sig-storage/nfs-provisioner:v4.0.8: active requests=0, bytes read=91039406" Feb 13 19:59:51.926502 containerd[1561]: time="2025-02-13T19:59:51.926467326Z" level=info msg="ImageCreate event name:\"sha256:fd0b16f70b66b72bcb2f91d556fa33eba02729c44ffc5f2c16130e7f9fbed3c4\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Feb 13 19:59:51.929197 containerd[1561]: time="2025-02-13T19:59:51.929163834Z" level=info msg="ImageCreate event name:\"registry.k8s.io/sig-storage/nfs-provisioner@sha256:c825f3d5e28bde099bd7a3daace28772d412c9157ad47fa752a9ad0baafc118d\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Feb 13 19:59:51.930052 containerd[1561]: time="2025-02-13T19:59:51.930002731Z" level=info msg="Pulled image \"registry.k8s.io/sig-storage/nfs-provisioner:v4.0.8\" with image id \"sha256:fd0b16f70b66b72bcb2f91d556fa33eba02729c44ffc5f2c16130e7f9fbed3c4\", repo tag \"registry.k8s.io/sig-storage/nfs-provisioner:v4.0.8\", repo digest \"registry.k8s.io/sig-storage/nfs-provisioner@sha256:c825f3d5e28bde099bd7a3daace28772d412c9157ad47fa752a9ad0baafc118d\", size \"91036984\" in 3.824260375s" Feb 13 19:59:51.930089 containerd[1561]: time="2025-02-13T19:59:51.930052986Z" level=info msg="PullImage \"registry.k8s.io/sig-storage/nfs-provisioner:v4.0.8\" returns image reference \"sha256:fd0b16f70b66b72bcb2f91d556fa33eba02729c44ffc5f2c16130e7f9fbed3c4\"" Feb 13 19:59:51.932527 containerd[1561]: time="2025-02-13T19:59:51.932487362Z" level=info msg="CreateContainer within sandbox \"aa439e0f858729b67fe74d69c63a61f299f697c99469399a8a7d1bd97e06ae44\" for container &ContainerMetadata{Name:nfs-server-provisioner,Attempt:0,}" Feb 13 19:59:51.945938 containerd[1561]: time="2025-02-13T19:59:51.945882189Z" level=info msg="CreateContainer within sandbox \"aa439e0f858729b67fe74d69c63a61f299f697c99469399a8a7d1bd97e06ae44\" for &ContainerMetadata{Name:nfs-server-provisioner,Attempt:0,} returns container id \"bd0471b0a9c82bb3313bc21395def8f160dab32ae7dbeede057223d0ca1e69e4\"" Feb 13 19:59:51.946475 containerd[1561]: time="2025-02-13T19:59:51.946443514Z" level=info msg="StartContainer for \"bd0471b0a9c82bb3313bc21395def8f160dab32ae7dbeede057223d0ca1e69e4\"" Feb 13 19:59:52.008048 containerd[1561]: time="2025-02-13T19:59:52.007879179Z" level=info msg="StartContainer for \"bd0471b0a9c82bb3313bc21395def8f160dab32ae7dbeede057223d0ca1e69e4\" returns successfully" Feb 13 19:59:52.042245 kubelet[1901]: E0213 19:59:52.042167 1901 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Feb 13 19:59:53.042765 kubelet[1901]: E0213 19:59:53.042672 1901 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Feb 13 19:59:54.043829 kubelet[1901]: E0213 19:59:54.043734 1901 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Feb 13 19:59:55.044336 kubelet[1901]: E0213 19:59:55.044269 
1901 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Feb 13 19:59:56.045074 kubelet[1901]: E0213 19:59:56.044998 1901 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Feb 13 19:59:57.045671 kubelet[1901]: E0213 19:59:57.045618 1901 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Feb 13 19:59:58.045968 kubelet[1901]: E0213 19:59:58.045911 1901 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Feb 13 19:59:59.046890 kubelet[1901]: E0213 19:59:59.046827 1901 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Feb 13 20:00:00.047557 kubelet[1901]: E0213 20:00:00.047478 1901 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Feb 13 20:00:01.048618 kubelet[1901]: E0213 20:00:01.048552 1901 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Feb 13 20:00:02.049256 kubelet[1901]: E0213 20:00:02.049194 1901 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Feb 13 20:00:03.049779 kubelet[1901]: E0213 20:00:03.049689 1901 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Feb 13 20:00:04.050143 kubelet[1901]: E0213 20:00:04.050070 1901 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Feb 13 20:00:05.050382 kubelet[1901]: E0213 20:00:05.050311 1901 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Feb 13 20:00:05.985738 kubelet[1901]: I0213 20:00:05.985643 1901 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="default/nfs-server-provisioner-0" podStartSLOduration=18.160142427 podStartE2EDuration="21.985614221s" podCreationTimestamp="2025-02-13 19:59:44 +0000 UTC" firstStartedPulling="2025-02-13 19:59:48.1054844 +0000 UTC m=+57.478320458" lastFinishedPulling="2025-02-13 19:59:51.930956194 +0000 UTC m=+61.303792252" observedRunningTime="2025-02-13 19:59:52.435467498 +0000 UTC m=+61.808303556" watchObservedRunningTime="2025-02-13 20:00:05.985614221 +0000 UTC m=+75.358450279" Feb 13 20:00:05.986203 kubelet[1901]: I0213 20:00:05.986171 1901 topology_manager.go:215] "Topology Admit Handler" podUID="32fe5049-4e4e-47cd-ac8e-8427f969a07b" podNamespace="default" podName="test-pod-1" Feb 13 20:00:06.051373 kubelet[1901]: E0213 20:00:06.051294 1901 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Feb 13 20:00:06.124062 kubelet[1901]: I0213 20:00:06.123998 1901 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"pvc-376ecf95-085c-42cc-a3ac-236692975bb8\" (UniqueName: \"kubernetes.io/nfs/32fe5049-4e4e-47cd-ac8e-8427f969a07b-pvc-376ecf95-085c-42cc-a3ac-236692975bb8\") pod \"test-pod-1\" (UID: \"32fe5049-4e4e-47cd-ac8e-8427f969a07b\") " pod="default/test-pod-1" Feb 13 20:00:06.124062 kubelet[1901]: I0213 20:00:06.124056 1901 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-d75xk\" (UniqueName: 
\"kubernetes.io/projected/32fe5049-4e4e-47cd-ac8e-8427f969a07b-kube-api-access-d75xk\") pod \"test-pod-1\" (UID: \"32fe5049-4e4e-47cd-ac8e-8427f969a07b\") " pod="default/test-pod-1" Feb 13 20:00:06.249795 kernel: FS-Cache: Loaded Feb 13 20:00:06.315843 kernel: RPC: Registered named UNIX socket transport module. Feb 13 20:00:06.315986 kernel: RPC: Registered udp transport module. Feb 13 20:00:06.316008 kernel: RPC: Registered tcp transport module. Feb 13 20:00:06.317101 kernel: RPC: Registered tcp-with-tls transport module. Feb 13 20:00:06.317123 kernel: RPC: Registered tcp NFSv4.1 backchannel transport module. Feb 13 20:00:06.601155 kernel: NFS: Registering the id_resolver key type Feb 13 20:00:06.601364 kernel: Key type id_resolver registered Feb 13 20:00:06.601395 kernel: Key type id_legacy registered Feb 13 20:00:06.629173 nfsidmap[4247]: nss_getpwnam: name 'root@nfs-server-provisioner.default.svc.cluster.local' does not map into domain 'localdomain' Feb 13 20:00:06.634161 nfsidmap[4250]: nss_name_to_gid: name 'root@nfs-server-provisioner.default.svc.cluster.local' does not map into domain 'localdomain' Feb 13 20:00:06.891843 containerd[1561]: time="2025-02-13T20:00:06.891688880Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:test-pod-1,Uid:32fe5049-4e4e-47cd-ac8e-8427f969a07b,Namespace:default,Attempt:0,}" Feb 13 20:00:06.994055 systemd-networkd[1240]: cali5ec59c6bf6e: Link UP Feb 13 20:00:06.994585 systemd-networkd[1240]: cali5ec59c6bf6e: Gained carrier Feb 13 20:00:07.005890 containerd[1561]: 2025-02-13 20:00:06.934 [INFO][4253] cni-plugin/plugin.go 325: Calico CNI found existing endpoint: &{{WorkloadEndpoint projectcalico.org/v3} {10.0.0.102-k8s-test--pod--1-eth0 default 32fe5049-4e4e-47cd-ac8e-8427f969a07b 1445 0 2025-02-13 19:59:44 +0000 UTC map[projectcalico.org/namespace:default projectcalico.org/orchestrator:k8s projectcalico.org/serviceaccount:default] map[] [] [] []} {k8s 10.0.0.102 test-pod-1 eth0 default [] [] [kns.default ksa.default.default] cali5ec59c6bf6e [] []}} ContainerID="40838bc66f8bd48adc0ed00b2019b4685d0cb6eb4c8b72623ad3c93ae9748347" Namespace="default" Pod="test-pod-1" WorkloadEndpoint="10.0.0.102-k8s-test--pod--1-" Feb 13 20:00:07.005890 containerd[1561]: 2025-02-13 20:00:06.935 [INFO][4253] cni-plugin/k8s.go 77: Extracted identifiers for CmdAddK8s ContainerID="40838bc66f8bd48adc0ed00b2019b4685d0cb6eb4c8b72623ad3c93ae9748347" Namespace="default" Pod="test-pod-1" WorkloadEndpoint="10.0.0.102-k8s-test--pod--1-eth0" Feb 13 20:00:07.005890 containerd[1561]: 2025-02-13 20:00:06.959 [INFO][4265] ipam/ipam_plugin.go 225: Calico CNI IPAM request count IPv4=1 IPv6=0 ContainerID="40838bc66f8bd48adc0ed00b2019b4685d0cb6eb4c8b72623ad3c93ae9748347" HandleID="k8s-pod-network.40838bc66f8bd48adc0ed00b2019b4685d0cb6eb4c8b72623ad3c93ae9748347" Workload="10.0.0.102-k8s-test--pod--1-eth0" Feb 13 20:00:07.005890 containerd[1561]: 2025-02-13 20:00:06.967 [INFO][4265] ipam/ipam_plugin.go 265: Auto assigning IP ContainerID="40838bc66f8bd48adc0ed00b2019b4685d0cb6eb4c8b72623ad3c93ae9748347" HandleID="k8s-pod-network.40838bc66f8bd48adc0ed00b2019b4685d0cb6eb4c8b72623ad3c93ae9748347" Workload="10.0.0.102-k8s-test--pod--1-eth0" assignArgs=ipam.AutoAssignArgs{Num4:1, Num6:0, HandleID:(*string)(0xc0006805c0), Attrs:map[string]string{"namespace":"default", "node":"10.0.0.102", "pod":"test-pod-1", "timestamp":"2025-02-13 20:00:06.95917156 +0000 UTC"}, Hostname:"10.0.0.102", IPv4Pools:[]net.IPNet{}, IPv6Pools:[]net.IPNet{}, MaxBlocksPerHost:0, 
HostReservedAttrIPv4s:(*ipam.HostReservedAttr)(nil), HostReservedAttrIPv6s:(*ipam.HostReservedAttr)(nil), IntendedUse:"Workload"} Feb 13 20:00:07.005890 containerd[1561]: 2025-02-13 20:00:06.967 [INFO][4265] ipam/ipam_plugin.go 353: About to acquire host-wide IPAM lock. Feb 13 20:00:07.005890 containerd[1561]: 2025-02-13 20:00:06.967 [INFO][4265] ipam/ipam_plugin.go 368: Acquired host-wide IPAM lock. Feb 13 20:00:07.005890 containerd[1561]: 2025-02-13 20:00:06.967 [INFO][4265] ipam/ipam.go 107: Auto-assign 1 ipv4, 0 ipv6 addrs for host '10.0.0.102' Feb 13 20:00:07.005890 containerd[1561]: 2025-02-13 20:00:06.968 [INFO][4265] ipam/ipam.go 660: Looking up existing affinities for host handle="k8s-pod-network.40838bc66f8bd48adc0ed00b2019b4685d0cb6eb4c8b72623ad3c93ae9748347" host="10.0.0.102" Feb 13 20:00:07.005890 containerd[1561]: 2025-02-13 20:00:06.972 [INFO][4265] ipam/ipam.go 372: Looking up existing affinities for host host="10.0.0.102" Feb 13 20:00:07.005890 containerd[1561]: 2025-02-13 20:00:06.975 [INFO][4265] ipam/ipam.go 489: Trying affinity for 192.168.54.192/26 host="10.0.0.102" Feb 13 20:00:07.005890 containerd[1561]: 2025-02-13 20:00:06.976 [INFO][4265] ipam/ipam.go 155: Attempting to load block cidr=192.168.54.192/26 host="10.0.0.102" Feb 13 20:00:07.005890 containerd[1561]: 2025-02-13 20:00:06.978 [INFO][4265] ipam/ipam.go 232: Affinity is confirmed and block has been loaded cidr=192.168.54.192/26 host="10.0.0.102" Feb 13 20:00:07.005890 containerd[1561]: 2025-02-13 20:00:06.978 [INFO][4265] ipam/ipam.go 1180: Attempting to assign 1 addresses from block block=192.168.54.192/26 handle="k8s-pod-network.40838bc66f8bd48adc0ed00b2019b4685d0cb6eb4c8b72623ad3c93ae9748347" host="10.0.0.102" Feb 13 20:00:07.005890 containerd[1561]: 2025-02-13 20:00:06.980 [INFO][4265] ipam/ipam.go 1685: Creating new handle: k8s-pod-network.40838bc66f8bd48adc0ed00b2019b4685d0cb6eb4c8b72623ad3c93ae9748347 Feb 13 20:00:07.005890 containerd[1561]: 2025-02-13 20:00:06.983 [INFO][4265] ipam/ipam.go 1203: Writing block in order to claim IPs block=192.168.54.192/26 handle="k8s-pod-network.40838bc66f8bd48adc0ed00b2019b4685d0cb6eb4c8b72623ad3c93ae9748347" host="10.0.0.102" Feb 13 20:00:07.005890 containerd[1561]: 2025-02-13 20:00:06.988 [INFO][4265] ipam/ipam.go 1216: Successfully claimed IPs: [192.168.54.196/26] block=192.168.54.192/26 handle="k8s-pod-network.40838bc66f8bd48adc0ed00b2019b4685d0cb6eb4c8b72623ad3c93ae9748347" host="10.0.0.102" Feb 13 20:00:07.005890 containerd[1561]: 2025-02-13 20:00:06.988 [INFO][4265] ipam/ipam.go 847: Auto-assigned 1 out of 1 IPv4s: [192.168.54.196/26] handle="k8s-pod-network.40838bc66f8bd48adc0ed00b2019b4685d0cb6eb4c8b72623ad3c93ae9748347" host="10.0.0.102" Feb 13 20:00:07.005890 containerd[1561]: 2025-02-13 20:00:06.988 [INFO][4265] ipam/ipam_plugin.go 374: Released host-wide IPAM lock. 
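The ipam/ipam.go walk above is Calico's block-affinity allocation in miniature: the plugin takes the host-wide IPAM lock, confirms that node 10.0.0.102 holds an affinity for the block 192.168.54.192/26, and claims the next free address, 192.168.54.196, for test-pod-1; earlier endpoints in this trace already hold .193 (csi-node-driver-2d48g) and .194 (nginx-deployment-85f456d6dd-gl9h2). A toy Go version of that next-free-in-affine-block step (a sketch under those assumptions, not Calico's implementation):

```go
// Toy model of assigning the next free address from a node's affine /26,
// under the same host-wide lock the log entries above acquire and release.
package main

import (
	"fmt"
	"net"
	"sync"
)

type block struct {
	mu   sync.Mutex // stands in for the "host-wide IPAM lock"
	cidr *net.IPNet // the node's affine block, e.g. 192.168.54.192/26
	used map[string]bool
}

func (b *block) assign() (net.IP, error) {
	b.mu.Lock()
	defer b.mu.Unlock()
	base := b.cidr.IP.Mask(b.cidr.Mask)
	// Start at 1 to skip the block's own base address .192, which no pod
	// in this trace receives (an assumption for the sketch).
	for i := 1; i < 64; i++ { // a /26 spans 64 addresses
		cand := make(net.IP, len(base))
		copy(cand, base)
		cand[len(cand)-1] += byte(i)
		if !b.used[cand.String()] {
			b.used[cand.String()] = true
			return cand, nil
		}
	}
	return nil, fmt.Errorf("block %s exhausted", b.cidr)
}

func main() {
	_, cidr, _ := net.ParseCIDR("192.168.54.192/26")
	b := &block{cidr: cidr, used: map[string]bool{
		"192.168.54.193": true, // csi-node-driver-2d48g
		"192.168.54.194": true, // nginx-deployment-85f456d6dd-gl9h2
		"192.168.54.195": true, // assumed taken by nfs-server-provisioner-0
	}}
	ip, _ := b.assign()
	fmt.Println("assigned", ip) // 192.168.54.196, matching test-pod-1 above
}
```

Per-node block affinity is what keeps routing scalable here: peers advertise one route per /26 block rather than one per pod.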
Feb 13 20:00:07.005890 containerd[1561]: 2025-02-13 20:00:06.988 [INFO][4265] ipam/ipam_plugin.go 283: Calico CNI IPAM assigned addresses IPv4=[192.168.54.196/26] IPv6=[] ContainerID="40838bc66f8bd48adc0ed00b2019b4685d0cb6eb4c8b72623ad3c93ae9748347" HandleID="k8s-pod-network.40838bc66f8bd48adc0ed00b2019b4685d0cb6eb4c8b72623ad3c93ae9748347" Workload="10.0.0.102-k8s-test--pod--1-eth0" Feb 13 20:00:07.005890 containerd[1561]: 2025-02-13 20:00:06.991 [INFO][4253] cni-plugin/k8s.go 386: Populated endpoint ContainerID="40838bc66f8bd48adc0ed00b2019b4685d0cb6eb4c8b72623ad3c93ae9748347" Namespace="default" Pod="test-pod-1" WorkloadEndpoint="10.0.0.102-k8s-test--pod--1-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"10.0.0.102-k8s-test--pod--1-eth0", GenerateName:"", Namespace:"default", SelfLink:"", UID:"32fe5049-4e4e-47cd-ac8e-8427f969a07b", ResourceVersion:"1445", Generation:0, CreationTimestamp:time.Date(2025, time.February, 13, 19, 59, 44, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"projectcalico.org/namespace":"default", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"default"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"10.0.0.102", ContainerID:"", Pod:"test-pod-1", Endpoint:"eth0", ServiceAccountName:"default", IPNetworks:[]string{"192.168.54.196/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.default", "ksa.default.default"}, InterfaceName:"cali5ec59c6bf6e", MAC:"", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil)}} Feb 13 20:00:07.006779 containerd[1561]: 2025-02-13 20:00:06.991 [INFO][4253] cni-plugin/k8s.go 387: Calico CNI using IPs: [192.168.54.196/32] ContainerID="40838bc66f8bd48adc0ed00b2019b4685d0cb6eb4c8b72623ad3c93ae9748347" Namespace="default" Pod="test-pod-1" WorkloadEndpoint="10.0.0.102-k8s-test--pod--1-eth0" Feb 13 20:00:07.006779 containerd[1561]: 2025-02-13 20:00:06.991 [INFO][4253] cni-plugin/dataplane_linux.go 69: Setting the host side veth name to cali5ec59c6bf6e ContainerID="40838bc66f8bd48adc0ed00b2019b4685d0cb6eb4c8b72623ad3c93ae9748347" Namespace="default" Pod="test-pod-1" WorkloadEndpoint="10.0.0.102-k8s-test--pod--1-eth0" Feb 13 20:00:07.006779 containerd[1561]: 2025-02-13 20:00:06.994 [INFO][4253] cni-plugin/dataplane_linux.go 508: Disabling IPv4 forwarding ContainerID="40838bc66f8bd48adc0ed00b2019b4685d0cb6eb4c8b72623ad3c93ae9748347" Namespace="default" Pod="test-pod-1" WorkloadEndpoint="10.0.0.102-k8s-test--pod--1-eth0" Feb 13 20:00:07.006779 containerd[1561]: 2025-02-13 20:00:06.995 [INFO][4253] cni-plugin/k8s.go 414: Added Mac, interface name, and active container ID to endpoint ContainerID="40838bc66f8bd48adc0ed00b2019b4685d0cb6eb4c8b72623ad3c93ae9748347" Namespace="default" Pod="test-pod-1" WorkloadEndpoint="10.0.0.102-k8s-test--pod--1-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"10.0.0.102-k8s-test--pod--1-eth0", GenerateName:"", Namespace:"default", SelfLink:"", UID:"32fe5049-4e4e-47cd-ac8e-8427f969a07b", ResourceVersion:"1445", Generation:0, CreationTimestamp:time.Date(2025, time.February, 13, 19, 59, 44, 0, time.Local), 
DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"projectcalico.org/namespace":"default", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"default"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"10.0.0.102", ContainerID:"40838bc66f8bd48adc0ed00b2019b4685d0cb6eb4c8b72623ad3c93ae9748347", Pod:"test-pod-1", Endpoint:"eth0", ServiceAccountName:"default", IPNetworks:[]string{"192.168.54.196/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.default", "ksa.default.default"}, InterfaceName:"cali5ec59c6bf6e", MAC:"32:c9:60:f6:1a:9b", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil)}} Feb 13 20:00:07.006779 containerd[1561]: 2025-02-13 20:00:07.002 [INFO][4253] cni-plugin/k8s.go 500: Wrote updated endpoint to datastore ContainerID="40838bc66f8bd48adc0ed00b2019b4685d0cb6eb4c8b72623ad3c93ae9748347" Namespace="default" Pod="test-pod-1" WorkloadEndpoint="10.0.0.102-k8s-test--pod--1-eth0" Feb 13 20:00:07.028687 containerd[1561]: time="2025-02-13T20:00:07.028596070Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Feb 13 20:00:07.028687 containerd[1561]: time="2025-02-13T20:00:07.028654168Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Feb 13 20:00:07.028687 containerd[1561]: time="2025-02-13T20:00:07.028666592Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Feb 13 20:00:07.028957 containerd[1561]: time="2025-02-13T20:00:07.028797378Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." 
runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Feb 13 20:00:07.051693 kubelet[1901]: E0213 20:00:07.051649 1901 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Feb 13 20:00:07.054601 systemd-resolved[1455]: Failed to determine the local hostname and LLMNR/mDNS names, ignoring: No such device or address Feb 13 20:00:07.082768 containerd[1561]: time="2025-02-13T20:00:07.082703801Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:test-pod-1,Uid:32fe5049-4e4e-47cd-ac8e-8427f969a07b,Namespace:default,Attempt:0,} returns sandbox id \"40838bc66f8bd48adc0ed00b2019b4685d0cb6eb4c8b72623ad3c93ae9748347\"" Feb 13 20:00:07.083805 containerd[1561]: time="2025-02-13T20:00:07.083782265Z" level=info msg="PullImage \"ghcr.io/flatcar/nginx:latest\"" Feb 13 20:00:08.052569 kubelet[1901]: E0213 20:00:08.052483 1901 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Feb 13 20:00:08.439125 systemd-networkd[1240]: cali5ec59c6bf6e: Gained IPv6LL Feb 13 20:00:09.052933 kubelet[1901]: E0213 20:00:09.052859 1901 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Feb 13 20:00:10.053163 kubelet[1901]: E0213 20:00:10.053062 1901 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Feb 13 20:00:10.471039 containerd[1561]: time="2025-02-13T20:00:10.470377167Z" level=info msg="ImageUpdate event name:\"ghcr.io/flatcar/nginx:latest\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Feb 13 20:00:10.473013 containerd[1561]: time="2025-02-13T20:00:10.472874714Z" level=info msg="stop pulling image ghcr.io/flatcar/nginx:latest: active requests=0, bytes read=61" Feb 13 20:00:10.475839 containerd[1561]: time="2025-02-13T20:00:10.475725482Z" level=info msg="Pulled image \"ghcr.io/flatcar/nginx:latest\" with image id \"sha256:fe94eb5f0c9c8d0ca277aa8cd5940f1faf5970175bf373932babc578545deda8\", repo tag \"ghcr.io/flatcar/nginx:latest\", repo digest \"ghcr.io/flatcar/nginx@sha256:d9bc3da999da9f147f1277c7b18292486847e8f39f95fcf81d914d0c22815faf\", size \"73054371\" in 3.391900678s" Feb 13 20:00:10.475839 containerd[1561]: time="2025-02-13T20:00:10.475834498Z" level=info msg="PullImage \"ghcr.io/flatcar/nginx:latest\" returns image reference \"sha256:fe94eb5f0c9c8d0ca277aa8cd5940f1faf5970175bf373932babc578545deda8\"" Feb 13 20:00:10.478534 containerd[1561]: time="2025-02-13T20:00:10.478457669Z" level=info msg="CreateContainer within sandbox \"40838bc66f8bd48adc0ed00b2019b4685d0cb6eb4c8b72623ad3c93ae9748347\" for container &ContainerMetadata{Name:test,Attempt:0,}" Feb 13 20:00:10.501196 containerd[1561]: time="2025-02-13T20:00:10.501122958Z" level=info msg="CreateContainer within sandbox \"40838bc66f8bd48adc0ed00b2019b4685d0cb6eb4c8b72623ad3c93ae9748347\" for &ContainerMetadata{Name:test,Attempt:0,} returns container id \"8001216426c61a5fcae7f7811a878aba815171a3d52c343aa73252364a6e045c\"" Feb 13 20:00:10.501959 containerd[1561]: time="2025-02-13T20:00:10.501860141Z" level=info msg="StartContainer for \"8001216426c61a5fcae7f7811a878aba815171a3d52c343aa73252364a6e045c\"" Feb 13 20:00:10.562159 containerd[1561]: time="2025-02-13T20:00:10.562020353Z" level=info msg="StartContainer for \"8001216426c61a5fcae7f7811a878aba815171a3d52c343aa73252364a6e045c\" returns successfully" Feb 13 20:00:11.001272 kubelet[1901]: E0213 20:00:11.001194 1901 file.go:104] 
"Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Feb 13 20:00:11.053781 kubelet[1901]: E0213 20:00:11.053724 1901 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Feb 13 20:00:11.475533 kubelet[1901]: I0213 20:00:11.475471 1901 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="default/test-pod-1" podStartSLOduration=24.08219064 podStartE2EDuration="27.475455898s" podCreationTimestamp="2025-02-13 19:59:44 +0000 UTC" firstStartedPulling="2025-02-13 20:00:07.083573723 +0000 UTC m=+76.456409782" lastFinishedPulling="2025-02-13 20:00:10.476838982 +0000 UTC m=+79.849675040" observedRunningTime="2025-02-13 20:00:11.475234633 +0000 UTC m=+80.848070691" watchObservedRunningTime="2025-02-13 20:00:11.475455898 +0000 UTC m=+80.848291956" Feb 13 20:00:12.055033 kubelet[1901]: E0213 20:00:12.054948 1901 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Feb 13 20:00:13.055602 kubelet[1901]: E0213 20:00:13.055538 1901 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Feb 13 20:00:14.056165 kubelet[1901]: E0213 20:00:14.056072 1901 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Feb 13 20:00:15.056430 kubelet[1901]: E0213 20:00:15.056381 1901 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Feb 13 20:00:15.273969 kubelet[1901]: E0213 20:00:15.273916 1901 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Feb 13 20:00:16.056700 kubelet[1901]: E0213 20:00:16.056650 1901 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Feb 13 20:00:17.057783 kubelet[1901]: E0213 20:00:17.057723 1901 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Feb 13 20:00:18.058067 kubelet[1901]: E0213 20:00:18.057998 1901 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests"