Jan 29 11:31:58.044522 kernel: Linux version 6.6.74-flatcar (build@pony-truck.infra.kinvolk.io) (x86_64-cros-linux-gnu-gcc (Gentoo Hardened 13.3.1_p20240614 p1) 13.3.1 20240614, GNU ld (Gentoo 2.42 p6) 2.42.0) #1 SMP PREEMPT_DYNAMIC Wed Jan 29 09:36:13 -00 2025
Jan 29 11:31:58.044545 kernel: Command line: BOOT_IMAGE=/flatcar/vmlinuz-a mount.usr=/dev/mapper/usr verity.usr=PARTUUID=7130c94a-213a-4e5a-8e26-6cce9662f132 rootflags=rw mount.usrflags=ro consoleblank=0 root=LABEL=ROOT console=ttyS0,115200 flatcar.first_boot=detected verity.usrhash=519b8fded83181f8e61f734d5291f916d7548bfba9487c78bcb50d002d81719d
Jan 29 11:31:58.044557 kernel: BIOS-provided physical RAM map:
Jan 29 11:31:58.044563 kernel: BIOS-e820: [mem 0x0000000000000000-0x000000000009fbff] usable
Jan 29 11:31:58.044569 kernel: BIOS-e820: [mem 0x000000000009fc00-0x000000000009ffff] reserved
Jan 29 11:31:58.044575 kernel: BIOS-e820: [mem 0x00000000000f0000-0x00000000000fffff] reserved
Jan 29 11:31:58.044583 kernel: BIOS-e820: [mem 0x0000000000100000-0x000000009cfdbfff] usable
Jan 29 11:31:58.044589 kernel: BIOS-e820: [mem 0x000000009cfdc000-0x000000009cffffff] reserved
Jan 29 11:31:58.044595 kernel: BIOS-e820: [mem 0x00000000b0000000-0x00000000bfffffff] reserved
Jan 29 11:31:58.044604 kernel: BIOS-e820: [mem 0x00000000fed1c000-0x00000000fed1ffff] reserved
Jan 29 11:31:58.044610 kernel: BIOS-e820: [mem 0x00000000feffc000-0x00000000feffffff] reserved
Jan 29 11:31:58.044616 kernel: BIOS-e820: [mem 0x00000000fffc0000-0x00000000ffffffff] reserved
Jan 29 11:31:58.044626 kernel: BIOS-e820: [mem 0x000000fd00000000-0x000000ffffffffff] reserved
Jan 29 11:31:58.044633 kernel: NX (Execute Disable) protection: active
Jan 29 11:31:58.044640 kernel: APIC: Static calls initialized
Jan 29 11:31:58.044652 kernel: SMBIOS 2.8 present.
Jan 29 11:31:58.044660 kernel: DMI: QEMU Standard PC (Q35 + ICH9, 2009), BIOS 1.16.2-debian-1.16.2-1 04/01/2014
Jan 29 11:31:58.044666 kernel: Hypervisor detected: KVM
Jan 29 11:31:58.044673 kernel: kvm-clock: Using msrs 4b564d01 and 4b564d00
Jan 29 11:31:58.044680 kernel: kvm-clock: using sched offset of 4063450962 cycles
Jan 29 11:31:58.044687 kernel: clocksource: kvm-clock: mask: 0xffffffffffffffff max_cycles: 0x1cd42e4dffb, max_idle_ns: 881590591483 ns
Jan 29 11:31:58.044694 kernel: tsc: Detected 2794.748 MHz processor
Jan 29 11:31:58.044701 kernel: e820: update [mem 0x00000000-0x00000fff] usable ==> reserved
Jan 29 11:31:58.044709 kernel: e820: remove [mem 0x000a0000-0x000fffff] usable
Jan 29 11:31:58.044716 kernel: last_pfn = 0x9cfdc max_arch_pfn = 0x400000000
Jan 29 11:31:58.044725 kernel: MTRR map: 4 entries (3 fixed + 1 variable; max 19), built from 8 variable MTRRs
Jan 29 11:31:58.044732 kernel: x86/PAT: Configuration [0-7]: WB WC UC- UC WB WP UC- WT
Jan 29 11:31:58.044739 kernel: Using GB pages for direct mapping
Jan 29 11:31:58.044746 kernel: ACPI: Early table checksum verification disabled
Jan 29 11:31:58.044753 kernel: ACPI: RSDP 0x00000000000F59D0 000014 (v00 BOCHS )
Jan 29 11:31:58.044760 kernel: ACPI: RSDT 0x000000009CFE2408 000038 (v01 BOCHS BXPC 00000001 BXPC 00000001)
Jan 29 11:31:58.044767 kernel: ACPI: FACP 0x000000009CFE21E8 0000F4 (v03 BOCHS BXPC 00000001 BXPC 00000001)
Jan 29 11:31:58.044774 kernel: ACPI: DSDT 0x000000009CFE0040 0021A8 (v01 BOCHS BXPC 00000001 BXPC 00000001)
Jan 29 11:31:58.044783 kernel: ACPI: FACS 0x000000009CFE0000 000040
Jan 29 11:31:58.044790 kernel: ACPI: APIC 0x000000009CFE22DC 000090 (v01 BOCHS BXPC 00000001 BXPC 00000001)
Jan 29 11:31:58.044797 kernel: ACPI: HPET 0x000000009CFE236C 000038 (v01 BOCHS BXPC 00000001 BXPC 00000001)
Jan 29 11:31:58.044804 kernel: ACPI: MCFG 0x000000009CFE23A4 00003C (v01 BOCHS BXPC 00000001 BXPC 00000001)
Jan 29 11:31:58.044818 kernel: ACPI: WAET 0x000000009CFE23E0 000028 (v01 BOCHS BXPC 00000001 BXPC 00000001)
Jan 29 11:31:58.044824 kernel: ACPI: Reserving FACP table memory at [mem 0x9cfe21e8-0x9cfe22db]
Jan 29 11:31:58.044832 kernel: ACPI: Reserving DSDT table memory at [mem 0x9cfe0040-0x9cfe21e7]
Jan 29 11:31:58.044843 kernel: ACPI: Reserving FACS table memory at [mem 0x9cfe0000-0x9cfe003f]
Jan 29 11:31:58.044852 kernel: ACPI: Reserving APIC table memory at [mem 0x9cfe22dc-0x9cfe236b]
Jan 29 11:31:58.044859 kernel: ACPI: Reserving HPET table memory at [mem 0x9cfe236c-0x9cfe23a3]
Jan 29 11:31:58.044867 kernel: ACPI: Reserving MCFG table memory at [mem 0x9cfe23a4-0x9cfe23df]
Jan 29 11:31:58.044874 kernel: ACPI: Reserving WAET table memory at [mem 0x9cfe23e0-0x9cfe2407]
Jan 29 11:31:58.044883 kernel: No NUMA configuration found
Jan 29 11:31:58.044890 kernel: Faking a node at [mem 0x0000000000000000-0x000000009cfdbfff]
Jan 29 11:31:58.044898 kernel: NODE_DATA(0) allocated [mem 0x9cfd6000-0x9cfdbfff]
Jan 29 11:31:58.044908 kernel: Zone ranges:
Jan 29 11:31:58.044915 kernel: DMA [mem 0x0000000000001000-0x0000000000ffffff]
Jan 29 11:31:58.044922 kernel: DMA32 [mem 0x0000000001000000-0x000000009cfdbfff]
Jan 29 11:31:58.044929 kernel: Normal empty
Jan 29 11:31:58.044936 kernel: Movable zone start for each node
Jan 29 11:31:58.044944 kernel: Early memory node ranges
Jan 29 11:31:58.044951 kernel: node 0: [mem 0x0000000000001000-0x000000000009efff]
Jan 29 11:31:58.044958 kernel: node 0: [mem 0x0000000000100000-0x000000009cfdbfff]
Jan 29 11:31:58.044965 kernel: Initmem setup node 0 [mem 0x0000000000001000-0x000000009cfdbfff]
Jan 29 11:31:58.044975 kernel: On node 0, zone DMA: 1 pages in unavailable ranges
Jan 29 11:31:58.044984 kernel: On node 0, zone DMA: 97 pages in unavailable ranges
Jan 29 11:31:58.044991 kernel: On node 0, zone DMA32: 12324 pages in unavailable ranges
Jan 29 11:31:58.044998 kernel: ACPI: PM-Timer IO Port: 0x608
Jan 29 11:31:58.045005 kernel: ACPI: LAPIC_NMI (acpi_id[0xff] dfl dfl lint[0x1])
Jan 29 11:31:58.045012 kernel: IOAPIC[0]: apic_id 0, version 17, address 0xfec00000, GSI 0-23
Jan 29 11:31:58.045020 kernel: ACPI: INT_SRC_OVR (bus 0 bus_irq 0 global_irq 2 dfl dfl)
Jan 29 11:31:58.045027 kernel: ACPI: INT_SRC_OVR (bus 0 bus_irq 5 global_irq 5 high level)
Jan 29 11:31:58.045034 kernel: ACPI: INT_SRC_OVR (bus 0 bus_irq 9 global_irq 9 high level)
Jan 29 11:31:58.045044 kernel: ACPI: INT_SRC_OVR (bus 0 bus_irq 10 global_irq 10 high level)
Jan 29 11:31:58.045051 kernel: ACPI: INT_SRC_OVR (bus 0 bus_irq 11 global_irq 11 high level)
Jan 29 11:31:58.045058 kernel: ACPI: Using ACPI (MADT) for SMP configuration information
Jan 29 11:31:58.045066 kernel: ACPI: HPET id: 0x8086a201 base: 0xfed00000
Jan 29 11:31:58.045073 kernel: TSC deadline timer available
Jan 29 11:31:58.045080 kernel: smpboot: Allowing 4 CPUs, 0 hotplug CPUs
Jan 29 11:31:58.045087 kernel: kvm-guest: APIC: eoi() replaced with kvm_guest_apic_eoi_write()
Jan 29 11:31:58.045094 kernel: kvm-guest: KVM setup pv remote TLB flush
Jan 29 11:31:58.045103 kernel: kvm-guest: setup PV sched yield
Jan 29 11:31:58.045113 kernel: [mem 0xc0000000-0xfed1bfff] available for PCI devices
Jan 29 11:31:58.045120 kernel: Booting paravirtualized kernel on KVM
Jan 29 11:31:58.045128 kernel: clocksource: refined-jiffies: mask: 0xffffffff max_cycles: 0xffffffff, max_idle_ns: 1910969940391419 ns
Jan 29 11:31:58.045135 kernel: setup_percpu: NR_CPUS:512 nr_cpumask_bits:4 nr_cpu_ids:4 nr_node_ids:1
Jan 29 11:31:58.045142 kernel: percpu: Embedded 58 pages/cpu s197032 r8192 d32344 u524288
Jan 29 11:31:58.045150 kernel: pcpu-alloc: s197032 r8192 d32344 u524288 alloc=1*2097152
Jan 29 11:31:58.045157 kernel: pcpu-alloc: [0] 0 1 2 3
Jan 29 11:31:58.045164 kernel: kvm-guest: PV spinlocks enabled
Jan 29 11:31:58.045171 kernel: PV qspinlock hash table entries: 256 (order: 0, 4096 bytes, linear)
Jan 29 11:31:58.045179 kernel: Kernel command line: rootflags=rw mount.usrflags=ro BOOT_IMAGE=/flatcar/vmlinuz-a mount.usr=/dev/mapper/usr verity.usr=PARTUUID=7130c94a-213a-4e5a-8e26-6cce9662f132 rootflags=rw mount.usrflags=ro consoleblank=0 root=LABEL=ROOT console=ttyS0,115200 flatcar.first_boot=detected verity.usrhash=519b8fded83181f8e61f734d5291f916d7548bfba9487c78bcb50d002d81719d
Jan 29 11:31:58.045190 kernel: Unknown kernel command line parameters "BOOT_IMAGE=/flatcar/vmlinuz-a", will be passed to user space.
Jan 29 11:31:58.045197 kernel: random: crng init done
Jan 29 11:31:58.045204 kernel: Dentry cache hash table entries: 524288 (order: 10, 4194304 bytes, linear)
Jan 29 11:31:58.045211 kernel: Inode-cache hash table entries: 262144 (order: 9, 2097152 bytes, linear)
Jan 29 11:31:58.045218 kernel: Fallback order for Node 0: 0
Jan 29 11:31:58.045226 kernel: Built 1 zonelists, mobility grouping on. Total pages: 632732
Jan 29 11:31:58.045233 kernel: Policy zone: DMA32
Jan 29 11:31:58.045240 kernel: mem auto-init: stack:off, heap alloc:off, heap free:off
Jan 29 11:31:58.045377 kernel: Memory: 2434592K/2571752K available (12288K kernel code, 2301K rwdata, 22736K rodata, 42972K init, 2220K bss, 136900K reserved, 0K cma-reserved)
Jan 29 11:31:58.045384 kernel: SLUB: HWalign=64, Order=0-3, MinObjects=0, CPUs=4, Nodes=1
Jan 29 11:31:58.045392 kernel: ftrace: allocating 37923 entries in 149 pages
Jan 29 11:31:58.045399 kernel: ftrace: allocated 149 pages with 4 groups
Jan 29 11:31:58.045406 kernel: Dynamic Preempt: voluntary
Jan 29 11:31:58.045413 kernel: rcu: Preemptible hierarchical RCU implementation.
Jan 29 11:31:58.045421 kernel: rcu: RCU event tracing is enabled.
Jan 29 11:31:58.045428 kernel: rcu: RCU restricting CPUs from NR_CPUS=512 to nr_cpu_ids=4.
Jan 29 11:31:58.045436 kernel: Trampoline variant of Tasks RCU enabled.
Jan 29 11:31:58.045447 kernel: Rude variant of Tasks RCU enabled.
Jan 29 11:31:58.045454 kernel: Tracing variant of Tasks RCU enabled.
Jan 29 11:31:58.045461 kernel: rcu: RCU calculated value of scheduler-enlistment delay is 100 jiffies.
Jan 29 11:31:58.045471 kernel: rcu: Adjusting geometry for rcu_fanout_leaf=16, nr_cpu_ids=4
Jan 29 11:31:58.045478 kernel: NR_IRQS: 33024, nr_irqs: 456, preallocated irqs: 16
Jan 29 11:31:58.045486 kernel: rcu: srcu_init: Setting srcu_struct sizes based on contention.
Jan 29 11:31:58.045493 kernel: Console: colour VGA+ 80x25
Jan 29 11:31:58.045500 kernel: printk: console [ttyS0] enabled
Jan 29 11:31:58.045507 kernel: ACPI: Core revision 20230628
Jan 29 11:31:58.045517 kernel: clocksource: hpet: mask: 0xffffffff max_cycles: 0xffffffff, max_idle_ns: 19112604467 ns
Jan 29 11:31:58.045525 kernel: APIC: Switch to symmetric I/O mode setup
Jan 29 11:31:58.045532 kernel: x2apic enabled
Jan 29 11:31:58.045539 kernel: APIC: Switched APIC routing to: physical x2apic
Jan 29 11:31:58.045546 kernel: kvm-guest: APIC: send_IPI_mask() replaced with kvm_send_ipi_mask()
Jan 29 11:31:58.045554 kernel: kvm-guest: APIC: send_IPI_mask_allbutself() replaced with kvm_send_ipi_mask_allbutself()
Jan 29 11:31:58.045561 kernel: kvm-guest: setup PV IPIs
Jan 29 11:31:58.045579 kernel: ..TIMER: vector=0x30 apic1=0 pin1=2 apic2=-1 pin2=-1
Jan 29 11:31:58.045586 kernel: tsc: Marking TSC unstable due to TSCs unsynchronized
Jan 29 11:31:58.045594 kernel: Calibrating delay loop (skipped) preset value.. 5589.49 BogoMIPS (lpj=2794748)
Jan 29 11:31:58.045601 kernel: x86/cpu: User Mode Instruction Prevention (UMIP) activated
Jan 29 11:31:58.045609 kernel: Last level iTLB entries: 4KB 512, 2MB 255, 4MB 127
Jan 29 11:31:58.045619 kernel: Last level dTLB entries: 4KB 512, 2MB 255, 4MB 127, 1GB 0
Jan 29 11:31:58.045627 kernel: Spectre V1 : Mitigation: usercopy/swapgs barriers and __user pointer sanitization
Jan 29 11:31:58.045634 kernel: Spectre V2 : Mitigation: Retpolines
Jan 29 11:31:58.045642 kernel: Spectre V2 : Spectre v2 / SpectreRSB mitigation: Filling RSB on context switch
Jan 29 11:31:58.045650 kernel: Spectre V2 : Spectre v2 / SpectreRSB : Filling RSB on VMEXIT
Jan 29 11:31:58.045660 kernel: Spectre V2 : Enabling Speculation Barrier for firmware calls
Jan 29 11:31:58.045670 kernel: RETBleed: Mitigation: untrained return thunk
Jan 29 11:31:58.045678 kernel: Spectre V2 : mitigation: Enabling conditional Indirect Branch Prediction Barrier
Jan 29 11:31:58.045685 kernel: Speculative Store Bypass: Mitigation: Speculative Store Bypass disabled via prctl
Jan 29 11:31:58.045693 kernel: Speculative Return Stack Overflow: IBPB-extending microcode not applied!
Jan 29 11:31:58.045701 kernel: Speculative Return Stack Overflow: WARNING: See https://kernel.org/doc/html/latest/admin-guide/hw-vuln/srso.html for mitigation options.
Jan 29 11:31:58.045709 kernel: Speculative Return Stack Overflow: Vulnerable: Safe RET, no microcode
Jan 29 11:31:58.045716 kernel: x86/fpu: Supporting XSAVE feature 0x001: 'x87 floating point registers'
Jan 29 11:31:58.045727 kernel: x86/fpu: Supporting XSAVE feature 0x002: 'SSE registers'
Jan 29 11:31:58.045734 kernel: x86/fpu: Supporting XSAVE feature 0x004: 'AVX registers'
Jan 29 11:31:58.045742 kernel: x86/fpu: xstate_offset[2]: 576, xstate_sizes[2]: 256
Jan 29 11:31:58.045749 kernel: x86/fpu: Enabled xstate features 0x7, context size is 832 bytes, using 'compacted' format.
Jan 29 11:31:58.045757 kernel: Freeing SMP alternatives memory: 32K
Jan 29 11:31:58.045765 kernel: pid_max: default: 32768 minimum: 301
Jan 29 11:31:58.045773 kernel: LSM: initializing lsm=lockdown,capability,landlock,selinux,integrity
Jan 29 11:31:58.045782 kernel: landlock: Up and running.
Jan 29 11:31:58.045790 kernel: SELinux: Initializing.
Jan 29 11:31:58.045803 kernel: Mount-cache hash table entries: 8192 (order: 4, 65536 bytes, linear)
Jan 29 11:31:58.045818 kernel: Mountpoint-cache hash table entries: 8192 (order: 4, 65536 bytes, linear)
Jan 29 11:31:58.045826 kernel: smpboot: CPU0: AMD EPYC 7402P 24-Core Processor (family: 0x17, model: 0x31, stepping: 0x0)
Jan 29 11:31:58.045834 kernel: RCU Tasks: Setting shift to 2 and lim to 1 rcu_task_cb_adjust=1 rcu_task_cpu_ids=4.
Jan 29 11:31:58.045841 kernel: RCU Tasks Rude: Setting shift to 2 and lim to 1 rcu_task_cb_adjust=1 rcu_task_cpu_ids=4.
Jan 29 11:31:58.045849 kernel: RCU Tasks Trace: Setting shift to 2 and lim to 1 rcu_task_cb_adjust=1 rcu_task_cpu_ids=4.
Jan 29 11:31:58.045857 kernel: Performance Events: Fam17h+ core perfctr, AMD PMU driver.
Jan 29 11:31:58.045866 kernel: ... version: 0
Jan 29 11:31:58.045874 kernel: ... bit width: 48
Jan 29 11:31:58.045884 kernel: ... generic registers: 6
Jan 29 11:31:58.045892 kernel: ... value mask: 0000ffffffffffff
Jan 29 11:31:58.045899 kernel: ... max period: 00007fffffffffff
Jan 29 11:31:58.045907 kernel: ... fixed-purpose events: 0
Jan 29 11:31:58.045914 kernel: ... event mask: 000000000000003f
Jan 29 11:31:58.045922 kernel: signal: max sigframe size: 1776
Jan 29 11:31:58.045929 kernel: rcu: Hierarchical SRCU implementation.
Jan 29 11:31:58.045937 kernel: rcu: Max phase no-delay instances is 400.
Jan 29 11:31:58.045944 kernel: smp: Bringing up secondary CPUs ...
Jan 29 11:31:58.045954 kernel: smpboot: x86: Booting SMP configuration:
Jan 29 11:31:58.045962 kernel: .... node #0, CPUs: #1 #2 #3
Jan 29 11:31:58.045969 kernel: smp: Brought up 1 node, 4 CPUs
Jan 29 11:31:58.045977 kernel: smpboot: Max logical packages: 1
Jan 29 11:31:58.045985 kernel: smpboot: Total of 4 processors activated (22357.98 BogoMIPS)
Jan 29 11:31:58.045992 kernel: devtmpfs: initialized
Jan 29 11:31:58.045999 kernel: x86/mm: Memory block size: 128MB
Jan 29 11:31:58.046007 kernel: clocksource: jiffies: mask: 0xffffffff max_cycles: 0xffffffff, max_idle_ns: 1911260446275000 ns
Jan 29 11:31:58.046015 kernel: futex hash table entries: 1024 (order: 4, 65536 bytes, linear)
Jan 29 11:31:58.046025 kernel: pinctrl core: initialized pinctrl subsystem
Jan 29 11:31:58.046032 kernel: NET: Registered PF_NETLINK/PF_ROUTE protocol family
Jan 29 11:31:58.046040 kernel: audit: initializing netlink subsys (disabled)
Jan 29 11:31:58.046047 kernel: audit: type=2000 audit(1738150318.107:1): state=initialized audit_enabled=0 res=1
Jan 29 11:31:58.046055 kernel: thermal_sys: Registered thermal governor 'step_wise'
Jan 29 11:31:58.046062 kernel: thermal_sys: Registered thermal governor 'user_space'
Jan 29 11:31:58.046070 kernel: cpuidle: using governor menu
Jan 29 11:31:58.046077 kernel: acpiphp: ACPI Hot Plug PCI Controller Driver version: 0.5
Jan 29 11:31:58.046085 kernel: dca service started, version 1.12.1
Jan 29 11:31:58.046095 kernel: PCI: MMCONFIG for domain 0000 [bus 00-ff] at [mem 0xb0000000-0xbfffffff] (base 0xb0000000)
Jan 29 11:31:58.046103 kernel: PCI: MMCONFIG at [mem 0xb0000000-0xbfffffff] reserved as E820 entry
Jan 29 11:31:58.046110 kernel: PCI: Using configuration type 1 for base access
Jan 29 11:31:58.046118 kernel: kprobes: kprobe jump-optimization is enabled. All kprobes are optimized if possible.
Jan 29 11:31:58.046126 kernel: HugeTLB: registered 1.00 GiB page size, pre-allocated 0 pages
Jan 29 11:31:58.046133 kernel: HugeTLB: 16380 KiB vmemmap can be freed for a 1.00 GiB page
Jan 29 11:31:58.046141 kernel: HugeTLB: registered 2.00 MiB page size, pre-allocated 0 pages
Jan 29 11:31:58.046148 kernel: HugeTLB: 28 KiB vmemmap can be freed for a 2.00 MiB page
Jan 29 11:31:58.046156 kernel: ACPI: Added _OSI(Module Device)
Jan 29 11:31:58.046166 kernel: ACPI: Added _OSI(Processor Device)
Jan 29 11:31:58.046174 kernel: ACPI: Added _OSI(3.0 _SCP Extensions)
Jan 29 11:31:58.046181 kernel: ACPI: Added _OSI(Processor Aggregator Device)
Jan 29 11:31:58.046189 kernel: ACPI: 1 ACPI AML tables successfully acquired and loaded
Jan 29 11:31:58.046196 kernel: ACPI: _OSC evaluation for CPUs failed, trying _PDC
Jan 29 11:31:58.046204 kernel: ACPI: Interpreter enabled
Jan 29 11:31:58.046211 kernel: ACPI: PM: (supports S0 S3 S5)
Jan 29 11:31:58.046219 kernel: ACPI: Using IOAPIC for interrupt routing
Jan 29 11:31:58.046226 kernel: PCI: Using host bridge windows from ACPI; if necessary, use "pci=nocrs" and report a bug
Jan 29 11:31:58.046236 kernel: PCI: Using E820 reservations for host bridge windows
Jan 29 11:31:58.046244 kernel: ACPI: Enabled 2 GPEs in block 00 to 3F
Jan 29 11:31:58.046263 kernel: ACPI: PCI Root Bridge [PCI0] (domain 0000 [bus 00-ff])
Jan 29 11:31:58.046454 kernel: acpi PNP0A08:00: _OSC: OS supports [ExtendedConfig ASPM ClockPM Segments MSI HPX-Type3]
Jan 29 11:31:58.046590 kernel: acpi PNP0A08:00: _OSC: platform does not support [PCIeHotplug LTR]
Jan 29 11:31:58.046718 kernel: acpi PNP0A08:00: _OSC: OS now controls [PME AER PCIeCapability]
Jan 29 11:31:58.046728 kernel: PCI host bridge to bus 0000:00
Jan 29 11:31:58.046892 kernel: pci_bus 0000:00: root bus resource [io 0x0000-0x0cf7 window]
Jan 29 11:31:58.047011 kernel: pci_bus 0000:00: root bus resource [io 0x0d00-0xffff window]
Jan 29 11:31:58.047127 kernel: pci_bus 0000:00: root bus resource [mem 0x000a0000-0x000bffff window]
Jan 29 11:31:58.047241 kernel: pci_bus 0000:00: root bus resource [mem 0x9d000000-0xafffffff window]
Jan 29 11:31:58.047375 kernel: pci_bus 0000:00: root bus resource [mem 0xc0000000-0xfebfffff window]
Jan 29 11:31:58.047490 kernel: pci_bus 0000:00: root bus resource [mem 0x100000000-0x8ffffffff window]
Jan 29 11:31:58.047604 kernel: pci_bus 0000:00: root bus resource [bus 00-ff]
Jan 29 11:31:58.047752 kernel: pci 0000:00:00.0: [8086:29c0] type 00 class 0x060000
Jan 29 11:31:58.047899 kernel: pci 0000:00:01.0: [1234:1111] type 00 class 0x030000
Jan 29 11:31:58.048028 kernel: pci 0000:00:01.0: reg 0x10: [mem 0xfd000000-0xfdffffff pref]
Jan 29 11:31:58.048155 kernel: pci 0000:00:01.0: reg 0x18: [mem 0xfebd0000-0xfebd0fff]
Jan 29 11:31:58.048298 kernel: pci 0000:00:01.0: reg 0x30: [mem 0xfebc0000-0xfebcffff pref]
Jan 29 11:31:58.048426 kernel: pci 0000:00:01.0: Video device with shadowed ROM at [mem 0x000c0000-0x000dffff]
Jan 29 11:31:58.048564 kernel: pci 0000:00:02.0: [1af4:1005] type 00 class 0x00ff00
Jan 29 11:31:58.048696 kernel: pci 0000:00:02.0: reg 0x10: [io 0xc0c0-0xc0df]
Jan 29 11:31:58.048833 kernel: pci 0000:00:02.0: reg 0x14: [mem 0xfebd1000-0xfebd1fff]
Jan 29 11:31:58.048961 kernel: pci 0000:00:02.0: reg 0x20: [mem 0xfe000000-0xfe003fff 64bit pref]
Jan 29 11:31:58.049098 kernel: pci 0000:00:03.0: [1af4:1001] type 00 class 0x010000
Jan 29 11:31:58.049226 kernel: pci 0000:00:03.0: reg 0x10: [io 0xc000-0xc07f]
Jan 29 11:31:58.049404 kernel: pci 0000:00:03.0: reg 0x14: [mem 0xfebd2000-0xfebd2fff]
Jan 29 11:31:58.049552 kernel: pci 0000:00:03.0: reg 0x20: [mem 0xfe004000-0xfe007fff 64bit pref]
Jan 29 11:31:58.049723 kernel: pci 0000:00:04.0: [1af4:1000] type 00 class 0x020000
Jan 29 11:31:58.049864 kernel: pci 0000:00:04.0: reg 0x10: [io 0xc0e0-0xc0ff]
Jan 29 11:31:58.049994 kernel: pci 0000:00:04.0: reg 0x14: [mem 0xfebd3000-0xfebd3fff]
Jan 29 11:31:58.050120 kernel: pci 0000:00:04.0: reg 0x20: [mem 0xfe008000-0xfe00bfff 64bit pref]
Jan 29 11:31:58.050274 kernel: pci 0000:00:04.0: reg 0x30: [mem 0xfeb80000-0xfebbffff pref]
Jan 29 11:31:58.050419 kernel: pci 0000:00:1f.0: [8086:2918] type 00 class 0x060100
Jan 29 11:31:58.050553 kernel: pci 0000:00:1f.0: quirk: [io 0x0600-0x067f] claimed by ICH6 ACPI/GPIO/TCO
Jan 29 11:31:58.050688 kernel: pci 0000:00:1f.2: [8086:2922] type 00 class 0x010601
Jan 29 11:31:58.050822 kernel: pci 0000:00:1f.2: reg 0x20: [io 0xc100-0xc11f]
Jan 29 11:31:58.050954 kernel: pci 0000:00:1f.2: reg 0x24: [mem 0xfebd4000-0xfebd4fff]
Jan 29 11:31:58.051104 kernel: pci 0000:00:1f.3: [8086:2930] type 00 class 0x0c0500
Jan 29 11:31:58.051231 kernel: pci 0000:00:1f.3: reg 0x20: [io 0x0700-0x073f]
Jan 29 11:31:58.051242 kernel: ACPI: PCI: Interrupt link LNKA configured for IRQ 10
Jan 29 11:31:58.051272 kernel: ACPI: PCI: Interrupt link LNKB configured for IRQ 10
Jan 29 11:31:58.051280 kernel: ACPI: PCI: Interrupt link LNKC configured for IRQ 11
Jan 29 11:31:58.051288 kernel: ACPI: PCI: Interrupt link LNKD configured for IRQ 11
Jan 29 11:31:58.051296 kernel: ACPI: PCI: Interrupt link LNKE configured for IRQ 10
Jan 29 11:31:58.051304 kernel: ACPI: PCI: Interrupt link LNKF configured for IRQ 10
Jan 29 11:31:58.051312 kernel: ACPI: PCI: Interrupt link LNKG configured for IRQ 11
Jan 29 11:31:58.051320 kernel: ACPI: PCI: Interrupt link LNKH configured for IRQ 11
Jan 29 11:31:58.051327 kernel: ACPI: PCI: Interrupt link GSIA configured for IRQ 16
Jan 29 11:31:58.051335 kernel: ACPI: PCI: Interrupt link GSIB configured for IRQ 17
Jan 29 11:31:58.051346 kernel: ACPI: PCI: Interrupt link GSIC configured for IRQ 18
Jan 29 11:31:58.051354 kernel: ACPI: PCI: Interrupt link GSID configured for IRQ 19
Jan 29 11:31:58.051362 kernel: ACPI: PCI: Interrupt link GSIE configured for IRQ 20
Jan 29 11:31:58.051370 kernel: ACPI: PCI: Interrupt link GSIF configured for IRQ 21
Jan 29 11:31:58.051377 kernel: ACPI: PCI: Interrupt link GSIG configured for IRQ 22
Jan 29 11:31:58.051385 kernel: ACPI: PCI: Interrupt link GSIH configured for IRQ 23
Jan 29 11:31:58.051393 kernel: iommu: Default domain type: Translated
Jan 29 11:31:58.051401 kernel: iommu: DMA domain TLB invalidation policy: lazy mode
Jan 29 11:31:58.051408 kernel: PCI: Using ACPI for IRQ routing
Jan 29 11:31:58.051418 kernel: PCI: pci_cache_line_size set to 64 bytes
Jan 29 11:31:58.051426 kernel: e820: reserve RAM buffer [mem 0x0009fc00-0x0009ffff]
Jan 29 11:31:58.051434 kernel: e820: reserve RAM buffer [mem 0x9cfdc000-0x9fffffff]
Jan 29 11:31:58.051564 kernel: pci 0000:00:01.0: vgaarb: setting as boot VGA device
Jan 29 11:31:58.051687 kernel: pci 0000:00:01.0: vgaarb: bridge control possible
Jan 29 11:31:58.051842 kernel: pci 0000:00:01.0: vgaarb: VGA device added: decodes=io+mem,owns=io+mem,locks=none
Jan 29 11:31:58.051855 kernel: vgaarb: loaded
Jan 29 11:31:58.051864 kernel: hpet0: at MMIO 0xfed00000, IRQs 2, 8, 0
Jan 29 11:31:58.051879 kernel: hpet0: 3 comparators, 64-bit 100.000000 MHz counter
Jan 29 11:31:58.051889 kernel: clocksource: Switched to clocksource kvm-clock
Jan 29 11:31:58.051897 kernel: VFS: Disk quotas dquot_6.6.0
Jan 29 11:31:58.051905 kernel: VFS: Dquot-cache hash table entries: 512 (order 0, 4096 bytes)
Jan 29 11:31:58.051912 kernel: pnp: PnP ACPI init
Jan 29 11:31:58.052049 kernel: system 00:05: [mem 0xb0000000-0xbfffffff window] has been reserved
Jan 29 11:31:58.052061 kernel: pnp: PnP ACPI: found 6 devices
Jan 29 11:31:58.052069 kernel: clocksource: acpi_pm: mask: 0xffffff max_cycles: 0xffffff, max_idle_ns: 2085701024 ns
Jan 29 11:31:58.052081 kernel: NET: Registered PF_INET protocol family
Jan 29 11:31:58.052089 kernel: IP idents hash table entries: 65536 (order: 7, 524288 bytes, linear)
Jan 29 11:31:58.052097 kernel: tcp_listen_portaddr_hash hash table entries: 2048 (order: 3, 32768 bytes, linear)
Jan 29 11:31:58.052105 kernel: Table-perturb hash table entries: 65536 (order: 6, 262144 bytes, linear)
Jan 29 11:31:58.052113 kernel: TCP established hash table entries: 32768 (order: 6, 262144 bytes, linear)
Jan 29 11:31:58.052121 kernel: TCP bind hash table entries: 32768 (order: 8, 1048576 bytes, linear)
Jan 29 11:31:58.052129 kernel: TCP: Hash tables configured (established 32768 bind 32768)
Jan 29 11:31:58.052136 kernel: UDP hash table entries: 2048 (order: 4, 65536 bytes, linear)
Jan 29 11:31:58.052144 kernel: UDP-Lite hash table entries: 2048 (order: 4, 65536 bytes, linear)
Jan 29 11:31:58.052155 kernel: NET: Registered PF_UNIX/PF_LOCAL protocol family
Jan 29 11:31:58.052162 kernel: NET: Registered PF_XDP protocol family
Jan 29 11:31:58.052303 kernel: pci_bus 0000:00: resource 4 [io 0x0000-0x0cf7 window]
Jan 29 11:31:58.052420 kernel: pci_bus 0000:00: resource 5 [io 0x0d00-0xffff window]
Jan 29 11:31:58.052536 kernel: pci_bus 0000:00: resource 6 [mem 0x000a0000-0x000bffff window]
Jan 29 11:31:58.052651 kernel: pci_bus 0000:00: resource 7 [mem 0x9d000000-0xafffffff window]
Jan 29 11:31:58.052765 kernel: pci_bus 0000:00: resource 8 [mem 0xc0000000-0xfebfffff window]
Jan 29 11:31:58.052911 kernel: pci_bus 0000:00: resource 9 [mem 0x100000000-0x8ffffffff window]
Jan 29 11:31:58.052930 kernel: PCI: CLS 0 bytes, default 64
Jan 29 11:31:58.052940 kernel: Initialise system trusted keyrings
Jan 29 11:31:58.052949 kernel: workingset: timestamp_bits=39 max_order=20 bucket_order=0
Jan 29 11:31:58.052959 kernel: Key type asymmetric registered
Jan 29 11:31:58.052969 kernel: Asymmetric key parser 'x509' registered
Jan 29 11:31:58.052979 kernel: Block layer SCSI generic (bsg) driver version 0.4 loaded (major 251)
Jan 29 11:31:58.052988 kernel: io scheduler mq-deadline registered
Jan 29 11:31:58.052998 kernel: io scheduler kyber registered
Jan 29 11:31:58.053008 kernel: io scheduler bfq registered
Jan 29 11:31:58.053017 kernel: ioatdma: Intel(R) QuickData Technology Driver 5.00
Jan 29 11:31:58.053031 kernel: ACPI: \_SB_.GSIG: Enabled at IRQ 22
Jan 29 11:31:58.053041 kernel: ACPI: \_SB_.GSIH: Enabled at IRQ 23
Jan 29 11:31:58.053049 kernel: ACPI: \_SB_.GSIE: Enabled at IRQ 20
Jan 29 11:31:58.053057 kernel: Serial: 8250/16550 driver, 4 ports, IRQ sharing enabled
Jan 29 11:31:58.053064 kernel: 00:03: ttyS0 at I/O 0x3f8 (irq = 4, base_baud = 115200) is a 16550A
Jan 29 11:31:58.053072 kernel: i8042: PNP: PS/2 Controller [PNP0303:KBD,PNP0f13:MOU] at 0x60,0x64 irq 1,12
Jan 29 11:31:58.053080 kernel: serio: i8042 KBD port at 0x60,0x64 irq 1
Jan 29 11:31:58.053088 kernel: serio: i8042 AUX port at 0x60,0x64 irq 12
Jan 29 11:31:58.053218 kernel: rtc_cmos 00:04: RTC can wake from S4
Jan 29 11:31:58.053234 kernel: input: AT Translated Set 2 keyboard as /devices/platform/i8042/serio0/input/input0
Jan 29 11:31:58.053370 kernel: rtc_cmos 00:04: registered as rtc0
Jan 29 11:31:58.053491 kernel: rtc_cmos 00:04: setting system clock to 2025-01-29T11:31:57 UTC (1738150317)
Jan 29 11:31:58.053609 kernel: rtc_cmos 00:04: alarms up to one day, y3k, 242 bytes nvram, hpet irqs
Jan 29 11:31:58.053619 kernel: amd_pstate: the _CPC object is not present in SBIOS or ACPI disabled
Jan 29 11:31:58.053627 kernel: hpet: Lost 5 RTC interrupts
Jan 29 11:31:58.053635 kernel: NET: Registered PF_INET6 protocol family
Jan 29 11:31:58.053643 kernel: Segment Routing with IPv6
Jan 29 11:31:58.053655 kernel: In-situ OAM (IOAM) with IPv6
Jan 29 11:31:58.053663 kernel: NET: Registered PF_PACKET protocol family
Jan 29 11:31:58.053670 kernel: Key type dns_resolver registered
Jan 29 11:31:58.053678 kernel: IPI shorthand broadcast: enabled
Jan 29 11:31:58.053686 kernel: sched_clock: Marking stable (845003263, 109053101)->(977630826, -23574462)
Jan 29 11:31:58.053694 kernel: registered taskstats version 1
Jan 29 11:31:58.053702 kernel: Loading compiled-in X.509 certificates
Jan 29 11:31:58.053709 kernel: Loaded X.509 cert 'Kinvolk GmbH: Module signing key for 6.6.74-flatcar: de92a621108c58f5771c86c5c3ccb1aa0728ed55'
Jan 29 11:31:58.053717 kernel: Key type .fscrypt registered
Jan 29 11:31:58.053728 kernel: Key type fscrypt-provisioning registered
Jan 29 11:31:58.053736 kernel: ima: No TPM chip found, activating TPM-bypass!
Jan 29 11:31:58.053743 kernel: ima: Allocated hash algorithm: sha1
Jan 29 11:31:58.053751 kernel: ima: No architecture policies found
Jan 29 11:31:58.053759 kernel: clk: Disabling unused clocks
Jan 29 11:31:58.053767 kernel: Freeing unused kernel image (initmem) memory: 42972K
Jan 29 11:31:58.053775 kernel: Write protecting the kernel read-only data: 36864k
Jan 29 11:31:58.053783 kernel: Freeing unused kernel image (rodata/data gap) memory: 1840K
Jan 29 11:31:58.053793 kernel: Run /init as init process
Jan 29 11:31:58.053801 kernel: with arguments:
Jan 29 11:31:58.053817 kernel: /init
Jan 29 11:31:58.053825 kernel: with environment:
Jan 29 11:31:58.053833 kernel: HOME=/
Jan 29 11:31:58.053840 kernel: TERM=linux
Jan 29 11:31:58.053848 kernel: BOOT_IMAGE=/flatcar/vmlinuz-a
Jan 29 11:31:58.053858 systemd[1]: systemd 255 running in system mode (+PAM +AUDIT +SELINUX -APPARMOR +IMA +SMACK +SECCOMP +GCRYPT -GNUTLS +OPENSSL -ACL +BLKID +CURL +ELFUTILS -FIDO2 +IDN2 -IDN +IPTC +KMOD +LIBCRYPTSETUP +LIBFDISK +PCRE2 -PWQUALITY -P11KIT -QRENCODE +TPM2 +BZIP2 +LZ4 +XZ +ZLIB +ZSTD -BPF_FRAMEWORK -XKBCOMMON +UTMP -SYSVINIT default-hierarchy=unified)
Jan 29 11:31:58.053887 systemd[1]: Detected virtualization kvm.
Jan 29 11:31:58.053896 systemd[1]: Detected architecture x86-64.
Jan 29 11:31:58.053913 systemd[1]: Running in initrd.
Jan 29 11:31:58.053930 systemd[1]: No hostname configured, using default hostname.
Jan 29 11:31:58.053947 systemd[1]: Hostname set to .
Jan 29 11:31:58.053965 systemd[1]: Initializing machine ID from VM UUID.
Jan 29 11:31:58.053990 systemd[1]: Queued start job for default target initrd.target.
Jan 29 11:31:58.054013 systemd[1]: Started clevis-luks-askpass.path - Forward Password Requests to Clevis Directory Watch.
Jan 29 11:31:58.054042 systemd[1]: Started systemd-ask-password-console.path - Dispatch Password Requests to Console Directory Watch.
Jan 29 11:31:58.054084 systemd[1]: Expecting device dev-disk-by\x2dlabel-EFI\x2dSYSTEM.device - /dev/disk/by-label/EFI-SYSTEM...
Jan 29 11:31:58.054095 systemd[1]: Expecting device dev-disk-by\x2dlabel-OEM.device - /dev/disk/by-label/OEM...
Jan 29 11:31:58.054104 systemd[1]: Expecting device dev-disk-by\x2dlabel-ROOT.device - /dev/disk/by-label/ROOT...
Jan 29 11:31:58.054113 systemd[1]: Expecting device dev-disk-by\x2dpartlabel-USR\x2dA.device - /dev/disk/by-partlabel/USR-A...
Jan 29 11:31:58.054126 systemd[1]: Expecting device dev-disk-by\x2dpartuuid-7130c94a\x2d213a\x2d4e5a\x2d8e26\x2d6cce9662f132.device - /dev/disk/by-partuuid/7130c94a-213a-4e5a-8e26-6cce9662f132...
Jan 29 11:31:58.054135 systemd[1]: Expecting device dev-mapper-usr.device - /dev/mapper/usr...
Jan 29 11:31:58.054143 systemd[1]: Reached target cryptsetup-pre.target - Local Encrypted Volumes (Pre).
Jan 29 11:31:58.054152 systemd[1]: Reached target cryptsetup.target - Local Encrypted Volumes.
Jan 29 11:31:58.054160 systemd[1]: Reached target paths.target - Path Units.
Jan 29 11:31:58.054168 systemd[1]: Reached target slices.target - Slice Units.
Jan 29 11:31:58.054177 systemd[1]: Reached target swap.target - Swaps.
Jan 29 11:31:58.054185 systemd[1]: Reached target timers.target - Timer Units.
Jan 29 11:31:58.054196 systemd[1]: Listening on iscsid.socket - Open-iSCSI iscsid Socket.
Jan 29 11:31:58.054205 systemd[1]: Listening on iscsiuio.socket - Open-iSCSI iscsiuio Socket.
Jan 29 11:31:58.054213 systemd[1]: Listening on systemd-journald-dev-log.socket - Journal Socket (/dev/log).
Jan 29 11:31:58.054222 systemd[1]: Listening on systemd-journald.socket - Journal Socket.
Jan 29 11:31:58.054230 systemd[1]: Listening on systemd-networkd.socket - Network Service Netlink Socket.
Jan 29 11:31:58.054239 systemd[1]: Listening on systemd-udevd-control.socket - udev Control Socket.
Jan 29 11:31:58.054262 systemd[1]: Listening on systemd-udevd-kernel.socket - udev Kernel Socket.
Jan 29 11:31:58.054271 systemd[1]: Reached target sockets.target - Socket Units.
Jan 29 11:31:58.054279 systemd[1]: Starting ignition-setup-pre.service - Ignition env setup...
Jan 29 11:31:58.054291 systemd[1]: Starting kmod-static-nodes.service - Create List of Static Device Nodes...
Jan 29 11:31:58.054299 systemd[1]: Finished network-cleanup.service - Network Cleanup.
Jan 29 11:31:58.054308 systemd[1]: Starting systemd-fsck-usr.service...
Jan 29 11:31:58.054316 systemd[1]: Starting systemd-journald.service - Journal Service...
Jan 29 11:31:58.054324 systemd[1]: Starting systemd-modules-load.service - Load Kernel Modules...
Jan 29 11:31:58.054333 systemd[1]: Starting systemd-vconsole-setup.service - Virtual Console Setup...
Jan 29 11:31:58.054341 systemd[1]: Finished ignition-setup-pre.service - Ignition env setup.
Jan 29 11:31:58.054350 systemd[1]: Finished kmod-static-nodes.service - Create List of Static Device Nodes.
Jan 29 11:31:58.054358 systemd[1]: Finished systemd-fsck-usr.service.
Jan 29 11:31:58.054370 systemd[1]: Starting systemd-tmpfiles-setup-dev-early.service - Create Static Device Nodes in /dev gracefully...
Jan 29 11:31:58.054405 systemd-journald[194]: Collecting audit messages is disabled.
Jan 29 11:31:58.054429 systemd[1]: Finished systemd-tmpfiles-setup-dev-early.service - Create Static Device Nodes in /dev gracefully.
Jan 29 11:31:58.054438 systemd[1]: Starting systemd-tmpfiles-setup-dev.service - Create Static Device Nodes in /dev...
Jan 29 11:31:58.054447 systemd-journald[194]: Journal started
Jan 29 11:31:58.054468 systemd-journald[194]: Runtime Journal (/run/log/journal/5df393c955c3400dadb81ad71720e186) is 6.0M, max 48.4M, 42.3M free.
Jan 29 11:31:58.055704 systemd-modules-load[195]: Inserted module 'overlay'
Jan 29 11:31:58.087344 kernel: bridge: filtering via arp/ip/ip6tables is no longer available by default. Update your scripts to load br_netfilter if you need this.
Jan 29 11:31:58.087366 kernel: Bridge firewalling registered
Jan 29 11:31:58.088269 systemd[1]: Started systemd-journald.service - Journal Service.
Jan 29 11:31:58.088611 systemd-modules-load[195]: Inserted module 'br_netfilter'
Jan 29 11:31:58.102782 systemd[1]: Finished systemd-modules-load.service - Load Kernel Modules.
Jan 29 11:31:58.104246 systemd[1]: Finished systemd-vconsole-setup.service - Virtual Console Setup.
Jan 29 11:31:58.118460 systemd[1]: Starting dracut-cmdline-ask.service - dracut ask for additional cmdline parameters...
Jan 29 11:31:58.119750 systemd[1]: Starting systemd-sysctl.service - Apply Kernel Variables...
Jan 29 11:31:58.122313 systemd[1]: Starting systemd-tmpfiles-setup.service - Create System Files and Directories...
Jan 29 11:31:58.123864 systemd[1]: Finished systemd-tmpfiles-setup-dev.service - Create Static Device Nodes in /dev.
Jan 29 11:31:58.136415 systemd[1]: Finished systemd-sysctl.service - Apply Kernel Variables.
Jan 29 11:31:58.138082 systemd[1]: Finished dracut-cmdline-ask.service - dracut ask for additional cmdline parameters.
Jan 29 11:31:58.152470 systemd[1]: Starting dracut-cmdline.service - dracut cmdline hook...
Jan 29 11:31:58.154743 systemd[1]: Finished systemd-tmpfiles-setup.service - Create System Files and Directories.
Jan 29 11:31:58.159326 systemd[1]: Starting systemd-resolved.service - Network Name Resolution...
Jan 29 11:31:58.165190 dracut-cmdline[229]: dracut-dracut-053
Jan 29 11:31:58.169061 dracut-cmdline[229]: Using kernel command line parameters: rd.driver.pre=btrfs rootflags=rw mount.usrflags=ro BOOT_IMAGE=/flatcar/vmlinuz-a mount.usr=/dev/mapper/usr verity.usr=PARTUUID=7130c94a-213a-4e5a-8e26-6cce9662f132 rootflags=rw mount.usrflags=ro consoleblank=0 root=LABEL=ROOT console=ttyS0,115200 flatcar.first_boot=detected verity.usrhash=519b8fded83181f8e61f734d5291f916d7548bfba9487c78bcb50d002d81719d
Jan 29 11:31:58.196744 systemd-resolved[235]: Positive Trust Anchors:
Jan 29 11:31:58.196761 systemd-resolved[235]: . IN DS 20326 8 2 e06d44b80b8f1d39a95c0b0d7c65d08458e880409bbc683457104237c7f8ec8d
Jan 29 11:31:58.196814 systemd-resolved[235]: Negative trust anchors: home.arpa 10.in-addr.arpa 16.172.in-addr.arpa 17.172.in-addr.arpa 18.172.in-addr.arpa 19.172.in-addr.arpa 20.172.in-addr.arpa 21.172.in-addr.arpa 22.172.in-addr.arpa 23.172.in-addr.arpa 24.172.in-addr.arpa 25.172.in-addr.arpa 26.172.in-addr.arpa 27.172.in-addr.arpa 28.172.in-addr.arpa 29.172.in-addr.arpa 30.172.in-addr.arpa 31.172.in-addr.arpa 170.0.0.192.in-addr.arpa 171.0.0.192.in-addr.arpa 168.192.in-addr.arpa d.f.ip6.arpa ipv4only.arpa resolver.arpa corp home internal intranet lan local private test
Jan 29 11:31:58.200213 systemd-resolved[235]: Defaulting to hostname 'linux'.
Jan 29 11:31:58.201637 systemd[1]: Started systemd-resolved.service - Network Name Resolution.
Jan 29 11:31:58.207023 systemd[1]: Reached target nss-lookup.target - Host and Network Name Lookups.
Jan 29 11:31:58.282293 kernel: SCSI subsystem initialized
Jan 29 11:31:58.291293 kernel: Loading iSCSI transport class v2.0-870.
Jan 29 11:31:58.302289 kernel: iscsi: registered transport (tcp)
Jan 29 11:31:58.325453 kernel: iscsi: registered transport (qla4xxx)
Jan 29 11:31:58.325525 kernel: QLogic iSCSI HBA Driver
Jan 29 11:31:58.382108 systemd[1]: Finished dracut-cmdline.service - dracut cmdline hook.
Jan 29 11:31:58.394472 systemd[1]: Starting dracut-pre-udev.service - dracut pre-udev hook...
Jan 29 11:31:58.421295 kernel: device-mapper: core: CONFIG_IMA_DISABLE_HTABLE is disabled. Duplicate IMA measurements will not be recorded in the IMA log.
Jan 29 11:31:58.421390 kernel: device-mapper: uevent: version 1.0.3
Jan 29 11:31:58.422907 kernel: device-mapper: ioctl: 4.48.0-ioctl (2023-03-01) initialised: dm-devel@redhat.com
Jan 29 11:31:58.466303 kernel: raid6: avx2x4 gen() 27791 MB/s
Jan 29 11:31:58.483298 kernel: raid6: avx2x2 gen() 30105 MB/s
Jan 29 11:31:58.500396 kernel: raid6: avx2x1 gen() 23859 MB/s
Jan 29 11:31:58.500488 kernel: raid6: using algorithm avx2x2 gen() 30105 MB/s
Jan 29 11:31:58.518411 kernel: raid6: .... xor() 18892 MB/s, rmw enabled
Jan 29 11:31:58.518513 kernel: raid6: using avx2x2 recovery algorithm
Jan 29 11:31:58.539306 kernel: xor: automatically using best checksumming function avx
Jan 29 11:31:58.696320 kernel: Btrfs loaded, zoned=no, fsverity=no
Jan 29 11:31:58.711554 systemd[1]: Finished dracut-pre-udev.service - dracut pre-udev hook.
Jan 29 11:31:58.730492 systemd[1]: Starting systemd-udevd.service - Rule-based Manager for Device Events and Files...
Jan 29 11:31:58.747458 systemd-udevd[414]: Using default interface naming scheme 'v255'.
Jan 29 11:31:58.752822 systemd[1]: Started systemd-udevd.service - Rule-based Manager for Device Events and Files.
Jan 29 11:31:58.763509 systemd[1]: Starting dracut-pre-trigger.service - dracut pre-trigger hook...
Jan 29 11:31:58.781329 dracut-pre-trigger[419]: rd.md=0: removing MD RAID activation
Jan 29 11:31:58.821979 systemd[1]: Finished dracut-pre-trigger.service - dracut pre-trigger hook.
Jan 29 11:31:58.830612 systemd[1]: Starting systemd-udev-trigger.service - Coldplug All udev Devices...
Jan 29 11:31:58.902957 systemd[1]: Finished systemd-udev-trigger.service - Coldplug All udev Devices.
Jan 29 11:31:58.911522 systemd[1]: Starting dracut-initqueue.service - dracut initqueue hook...
Jan 29 11:31:58.925791 systemd[1]: Finished dracut-initqueue.service - dracut initqueue hook.
Jan 29 11:31:58.931296 systemd[1]: Reached target remote-fs-pre.target - Preparation for Remote File Systems.
Jan 29 11:31:58.934160 systemd[1]: Reached target remote-cryptsetup.target - Remote Encrypted Volumes.
Jan 29 11:31:58.936690 systemd[1]: Reached target remote-fs.target - Remote File Systems.
Jan 29 11:31:58.942075 kernel: virtio_blk virtio1: 4/0/0 default/read/poll queues
Jan 29 11:31:58.969419 kernel: cryptd: max_cpu_qlen set to 1000
Jan 29 11:31:58.969440 kernel: virtio_blk virtio1: [vda] 19775488 512-byte logical blocks (10.1 GB/9.43 GiB)
Jan 29 11:31:58.969591 kernel: GPT:Primary header thinks Alt. header is not at the end of the disk.
Jan 29 11:31:58.969603 kernel: GPT:9289727 != 19775487
Jan 29 11:31:58.969613 kernel: GPT:Alternate GPT header not at the end of the disk.
Jan 29 11:31:58.969624 kernel: GPT:9289727 != 19775487
Jan 29 11:31:58.969638 kernel: GPT: Use GNU Parted to correct GPT errors.
Jan 29 11:31:58.969648 kernel: vda: vda1 vda2 vda3 vda4 vda6 vda7 vda9
Jan 29 11:31:58.947423 systemd[1]: Starting dracut-pre-mount.service - dracut pre-mount hook...
Jan 29 11:31:58.967051 systemd[1]: Finished dracut-pre-mount.service - dracut pre-mount hook.
Jan 29 11:31:59.015278 kernel: libata version 3.00 loaded.
Jan 29 11:31:59.022562 systemd[1]: dracut-cmdline-ask.service: Deactivated successfully.
Jan 29 11:31:59.029234 kernel: AVX2 version of gcm_enc/dec engaged.
Jan 29 11:31:59.029271 kernel: AES CTR mode by8 optimization enabled
Jan 29 11:31:59.025024 systemd[1]: Stopped dracut-cmdline-ask.service - dracut ask for additional cmdline parameters.
Jan 29 11:31:59.031931 systemd[1]: Stopping dracut-cmdline-ask.service - dracut ask for additional cmdline parameters...
Jan 29 11:31:59.038690 kernel: ahci 0000:00:1f.2: version 3.0
Jan 29 11:31:59.068899 kernel: ACPI: \_SB_.GSIA: Enabled at IRQ 16
Jan 29 11:31:59.068917 kernel: ahci 0000:00:1f.2: AHCI 0001.0000 32 slots 6 ports 1.5 Gbps 0x3f impl SATA mode
Jan 29 11:31:59.069074 kernel: ahci 0000:00:1f.2: flags: 64bit ncq only
Jan 29 11:31:59.069217 kernel: scsi host0: ahci
Jan 29 11:31:59.069567 kernel: scsi host1: ahci
Jan 29 11:31:59.069752 kernel: scsi host2: ahci
Jan 29 11:31:59.069931 kernel: BTRFS: device fsid 5ba3c9ea-61f2-4fe6-a507-2966757f6d44 devid 1 transid 38 /dev/vda3 scanned by (udev-worker) (461)
Jan 29 11:31:59.069944 kernel: scsi host3: ahci
Jan 29 11:31:59.070096 kernel: scsi host4: ahci
Jan 29 11:31:59.070259 kernel: scsi host5: ahci
Jan 29 11:31:59.070430 kernel: BTRFS: device label OEM devid 1 transid 14 /dev/vda6 scanned by (udev-worker) (471)
Jan 29 11:31:59.070446 kernel: ata1: SATA max UDMA/133 abar m4096@0xfebd4000 port 0xfebd4100 irq 34
Jan 29 11:31:59.070457 kernel: ata2: SATA max UDMA/133 abar m4096@0xfebd4000 port 0xfebd4180 irq 34
Jan 29 11:31:59.070467 kernel: ata3: SATA max UDMA/133 abar m4096@0xfebd4000 port 0xfebd4200 irq 34
Jan 29 11:31:59.070478 kernel: ata4: SATA max UDMA/133 abar m4096@0xfebd4000 port 0xfebd4280 irq 34
Jan 29 11:31:59.070488 kernel: ata5: SATA max UDMA/133 abar m4096@0xfebd4000 port 0xfebd4300 irq 34
Jan 29 11:31:59.070498 kernel: ata6: SATA max UDMA/133 abar m4096@0xfebd4000 port 0xfebd4380 irq 34
Jan 29 11:31:59.033826 systemd[1]: systemd-vconsole-setup.service: Deactivated successfully.
Jan 29 11:31:59.034015 systemd[1]: Stopped systemd-vconsole-setup.service - Virtual Console Setup.
Jan 29 11:31:59.035458 systemd[1]: Stopping systemd-vconsole-setup.service - Virtual Console Setup...
Jan 29 11:31:59.048023 systemd[1]: Starting systemd-vconsole-setup.service - Virtual Console Setup...
Jan 29 11:31:59.084608 systemd[1]: Found device dev-disk-by\x2dlabel-ROOT.device - /dev/disk/by-label/ROOT.
Jan 29 11:31:59.090973 systemd[1]: Found device dev-disk-by\x2dlabel-EFI\x2dSYSTEM.device - /dev/disk/by-label/EFI-SYSTEM.
Jan 29 11:31:59.096106 systemd[1]: Found device dev-disk-by\x2dpartuuid-7130c94a\x2d213a\x2d4e5a\x2d8e26\x2d6cce9662f132.device - /dev/disk/by-partuuid/7130c94a-213a-4e5a-8e26-6cce9662f132.
Jan 29 11:31:59.096584 systemd[1]: Found device dev-disk-by\x2dpartlabel-USR\x2dA.device - /dev/disk/by-partlabel/USR-A.
Jan 29 11:31:59.102736 systemd[1]: Found device dev-disk-by\x2dlabel-OEM.device - /dev/disk/by-label/OEM.
Jan 29 11:31:59.194715 systemd[1]: Starting disk-uuid.service - Generate new UUID for disk GPT if necessary...
Jan 29 11:31:59.220790 systemd[1]: Finished systemd-vconsole-setup.service - Virtual Console Setup.
Jan 29 11:31:59.222429 systemd[1]: Starting dracut-cmdline-ask.service - dracut ask for additional cmdline parameters...
Jan 29 11:31:59.246939 systemd[1]: Finished dracut-cmdline-ask.service - dracut ask for additional cmdline parameters.
Jan 29 11:31:59.344428 disk-uuid[552]: Primary Header is updated.
Jan 29 11:31:59.344428 disk-uuid[552]: Secondary Entries is updated.
Jan 29 11:31:59.344428 disk-uuid[552]: Secondary Header is updated.
Jan 29 11:31:59.349269 kernel: vda: vda1 vda2 vda3 vda4 vda6 vda7 vda9
Jan 29 11:31:59.353264 kernel: vda: vda1 vda2 vda3 vda4 vda6 vda7 vda9
Jan 29 11:31:59.382718 kernel: ata6: SATA link down (SStatus 0 SControl 300)
Jan 29 11:31:59.382789 kernel: ata3: SATA link up 1.5 Gbps (SStatus 113 SControl 300)
Jan 29 11:31:59.382805 kernel: ata1: SATA link down (SStatus 0 SControl 300)
Jan 29 11:31:59.382819 kernel: ata2: SATA link down (SStatus 0 SControl 300)
Jan 29 11:31:59.382842 kernel: ata5: SATA link down (SStatus 0 SControl 300)
Jan 29 11:31:59.382856 kernel: ata4: SATA link down (SStatus 0 SControl 300)
Jan 29 11:31:59.385530 kernel: ata3.00: ATAPI: QEMU DVD-ROM, 2.5+, max UDMA/100
Jan 29 11:31:59.385558 kernel: ata3.00: applying bridge limits
Jan 29 11:31:59.389352 kernel: ata3.00: configured for UDMA/100
Jan 29 11:31:59.389380 kernel: scsi 2:0:0:0: CD-ROM QEMU QEMU DVD-ROM 2.5+ PQ: 0 ANSI: 5
Jan 29 11:31:59.677292 kernel: sr 2:0:0:0: [sr0] scsi3-mmc drive: 4x/4x cd/rw xa/form2 tray
Jan 29 11:31:59.696057 kernel: cdrom: Uniform CD-ROM driver Revision: 3.20
Jan 29 11:31:59.696095 kernel: sr 2:0:0:0: Attached scsi CD-ROM sr0
Jan 29 11:32:00.354293 kernel: vda: vda1 vda2 vda3 vda4 vda6 vda7 vda9
Jan 29 11:32:00.355078 disk-uuid[563]: The operation has completed successfully.
Jan 29 11:32:00.387453 systemd[1]: disk-uuid.service: Deactivated successfully.
Jan 29 11:32:00.387625 systemd[1]: Finished disk-uuid.service - Generate new UUID for disk GPT if necessary.
Jan 29 11:32:00.414451 systemd[1]: Starting verity-setup.service - Verity Setup for /dev/mapper/usr...
Jan 29 11:32:00.418469 sh[591]: Success
Jan 29 11:32:00.431281 kernel: device-mapper: verity: sha256 using implementation "sha256-ni"
Jan 29 11:32:00.465784 systemd[1]: Found device dev-mapper-usr.device - /dev/mapper/usr.
Jan 29 11:32:00.483851 systemd[1]: Mounting sysusr-usr.mount - /sysusr/usr...
Jan 29 11:32:00.486313 systemd[1]: Finished verity-setup.service - Verity Setup for /dev/mapper/usr.
Jan 29 11:32:01.003848 kernel: BTRFS info (device dm-0): first mount of filesystem 5ba3c9ea-61f2-4fe6-a507-2966757f6d44
Jan 29 11:32:01.003932 kernel: BTRFS info (device dm-0): using crc32c (crc32c-intel) checksum algorithm
Jan 29 11:32:01.003943 kernel: BTRFS warning (device dm-0): 'nologreplay' is deprecated, use 'rescue=nologreplay' instead
Jan 29 11:32:01.004862 kernel: BTRFS info (device dm-0): disabling log replay at mount time
Jan 29 11:32:01.006275 kernel: BTRFS info (device dm-0): using free space tree
Jan 29 11:32:01.010662 systemd[1]: Mounted sysusr-usr.mount - /sysusr/usr.
Jan 29 11:32:01.011809 systemd[1]: afterburn-network-kargs.service - Afterburn Initrd Setup Network Kernel Arguments was skipped because no trigger condition checks were met.
Jan 29 11:32:01.022414 systemd[1]: Starting ignition-setup.service - Ignition (setup)...
Jan 29 11:32:01.023815 systemd[1]: Starting parse-ip-for-networkd.service - Write systemd-networkd units from cmdline...
Jan 29 11:32:01.035688 kernel: BTRFS info (device vda6): first mount of filesystem 46e45d4d-e07d-4ebc-bafb-221646b0ed58
Jan 29 11:32:01.035737 kernel: BTRFS info (device vda6): using crc32c (crc32c-intel) checksum algorithm
Jan 29 11:32:01.035749 kernel: BTRFS info (device vda6): using free space tree
Jan 29 11:32:01.039271 kernel: BTRFS info (device vda6): auto enabling async discard
Jan 29 11:32:01.048930 systemd[1]: mnt-oem.mount: Deactivated successfully.
Jan 29 11:32:01.050792 kernel: BTRFS info (device vda6): last unmount of filesystem 46e45d4d-e07d-4ebc-bafb-221646b0ed58
Jan 29 11:32:01.060080 systemd[1]: Finished ignition-setup.service - Ignition (setup).
Jan 29 11:32:01.067448 systemd[1]: Starting ignition-fetch-offline.service - Ignition (fetch-offline)...
Jan 29 11:32:01.253854 systemd[1]: Finished parse-ip-for-networkd.service - Write systemd-networkd units from cmdline.
Jan 29 11:32:01.258783 ignition[687]: Ignition 2.20.0
Jan 29 11:32:01.258949 ignition[687]: Stage: fetch-offline
Jan 29 11:32:01.263529 systemd[1]: Starting systemd-networkd.service - Network Configuration...
Jan 29 11:32:01.258989 ignition[687]: no configs at "/usr/lib/ignition/base.d"
Jan 29 11:32:01.259001 ignition[687]: no config dir at "/usr/lib/ignition/base.platform.d/qemu"
Jan 29 11:32:01.259111 ignition[687]: parsed url from cmdline: ""
Jan 29 11:32:01.259115 ignition[687]: no config URL provided
Jan 29 11:32:01.259121 ignition[687]: reading system config file "/usr/lib/ignition/user.ign"
Jan 29 11:32:01.259130 ignition[687]: no config at "/usr/lib/ignition/user.ign"
Jan 29 11:32:01.259160 ignition[687]: op(1): [started] loading QEMU firmware config module
Jan 29 11:32:01.278917 unknown[687]: fetched base config from "system"
Jan 29 11:32:01.259165 ignition[687]: op(1): executing: "modprobe" "qemu_fw_cfg"
Jan 29 11:32:01.278925 unknown[687]: fetched user config from "qemu"
Jan 29 11:32:01.274000 ignition[687]: op(1): [finished] loading QEMU firmware config module
Jan 29 11:32:01.281367 systemd[1]: Finished ignition-fetch-offline.service - Ignition (fetch-offline).
Jan 29 11:32:01.276230 ignition[687]: parsing config with SHA512: 0178856ae9d590cbcda33f61ba6ea0255b32f0095d1a2ca2dcc9cbc291194f759484cbfdd0acfc81aae7f9d4a07822ae5603039fa59e94b3a66b3e9b8fa2971f
Jan 29 11:32:01.279151 ignition[687]: fetch-offline: fetch-offline passed
Jan 29 11:32:01.279218 ignition[687]: Ignition finished successfully
Jan 29 11:32:01.321459 systemd-networkd[777]: lo: Link UP
Jan 29 11:32:01.321469 systemd-networkd[777]: lo: Gained carrier
Jan 29 11:32:01.324590 systemd-networkd[777]: Enumeration completed
Jan 29 11:32:01.324710 systemd[1]: Started systemd-networkd.service - Network Configuration.
Jan 29 11:32:01.326657 systemd[1]: Reached target network.target - Network.
Jan 29 11:32:01.327209 systemd[1]: ignition-fetch.service - Ignition (fetch) was skipped because of an unmet condition check (ConditionPathExists=!/run/ignition.json).
Jan 29 11:32:01.327999 systemd-networkd[777]: eth0: found matching network '/usr/lib/systemd/network/zz-default.network', based on potentially unpredictable interface name.
Jan 29 11:32:01.328006 systemd-networkd[777]: eth0: Configuring with /usr/lib/systemd/network/zz-default.network.
Jan 29 11:32:01.329159 systemd-networkd[777]: eth0: Link UP
Jan 29 11:32:01.329164 systemd-networkd[777]: eth0: Gained carrier
Jan 29 11:32:01.329173 systemd-networkd[777]: eth0: found matching network '/usr/lib/systemd/network/zz-default.network', based on potentially unpredictable interface name.
Jan 29 11:32:01.333382 systemd[1]: Starting ignition-kargs.service - Ignition (kargs)...
Jan 29 11:32:01.352335 systemd-networkd[777]: eth0: DHCPv4 address 10.0.0.79/16, gateway 10.0.0.1 acquired from 10.0.0.1
Jan 29 11:32:01.358378 ignition[783]: Ignition 2.20.0
Jan 29 11:32:01.358395 ignition[783]: Stage: kargs
Jan 29 11:32:01.358544 ignition[783]: no configs at "/usr/lib/ignition/base.d"
Jan 29 11:32:01.358556 ignition[783]: no config dir at "/usr/lib/ignition/base.platform.d/qemu"
Jan 29 11:32:01.362611 ignition[783]: kargs: kargs passed
Jan 29 11:32:01.362663 ignition[783]: Ignition finished successfully
Jan 29 11:32:01.367400 systemd[1]: Finished ignition-kargs.service - Ignition (kargs).
Jan 29 11:32:01.822573 systemd[1]: Starting ignition-disks.service - Ignition (disks)...
Jan 29 11:32:01.852358 ignition[792]: Ignition 2.20.0
Jan 29 11:32:01.852370 ignition[792]: Stage: disks
Jan 29 11:32:01.852545 ignition[792]: no configs at "/usr/lib/ignition/base.d"
Jan 29 11:32:01.852556 ignition[792]: no config dir at "/usr/lib/ignition/base.platform.d/qemu"
Jan 29 11:32:01.855942 systemd[1]: Finished ignition-disks.service - Ignition (disks).
Jan 29 11:32:01.853197 ignition[792]: disks: disks passed
Jan 29 11:32:01.857919 systemd[1]: Reached target initrd-root-device.target - Initrd Root Device.
Jan 29 11:32:01.853242 ignition[792]: Ignition finished successfully
Jan 29 11:32:01.860176 systemd[1]: Reached target local-fs-pre.target - Preparation for Local File Systems.
Jan 29 11:32:01.862229 systemd[1]: Reached target local-fs.target - Local File Systems.
Jan 29 11:32:01.864561 systemd[1]: Reached target sysinit.target - System Initialization.
Jan 29 11:32:01.865771 systemd[1]: Reached target basic.target - Basic System.
Jan 29 11:32:01.879471 systemd[1]: Starting systemd-fsck-root.service - File System Check on /dev/disk/by-label/ROOT...
Jan 29 11:32:01.896980 systemd-fsck[802]: ROOT: clean, 14/553520 files, 52654/553472 blocks
Jan 29 11:32:01.982389 systemd[1]: Finished systemd-fsck-root.service - File System Check on /dev/disk/by-label/ROOT.
Jan 29 11:32:01.993367 systemd[1]: Mounting sysroot.mount - /sysroot...
Jan 29 11:32:02.079288 kernel: EXT4-fs (vda9): mounted filesystem 2fbf9359-701e-4995-b3f7-74280bd2b1c9 r/w with ordered data mode. Quota mode: none.
Jan 29 11:32:02.079786 systemd[1]: Mounted sysroot.mount - /sysroot.
Jan 29 11:32:02.082101 systemd[1]: Reached target initrd-root-fs.target - Initrd Root File System.
Jan 29 11:32:02.096331 systemd[1]: Mounting sysroot-oem.mount - /sysroot/oem...
Jan 29 11:32:02.099289 systemd[1]: Mounting sysroot-usr.mount - /sysroot/usr...
Jan 29 11:32:02.102268 systemd[1]: flatcar-metadata-hostname.service - Flatcar Metadata Hostname Agent was skipped because no trigger condition checks were met.
Jan 29 11:32:02.109203 kernel: BTRFS: device label OEM devid 1 transid 15 /dev/vda6 scanned by mount (811)
Jan 29 11:32:02.109236 kernel: BTRFS info (device vda6): first mount of filesystem 46e45d4d-e07d-4ebc-bafb-221646b0ed58
Jan 29 11:32:02.109272 kernel: BTRFS info (device vda6): using crc32c (crc32c-intel) checksum algorithm
Jan 29 11:32:02.109288 kernel: BTRFS info (device vda6): using free space tree
Jan 29 11:32:02.102329 systemd[1]: ignition-remount-sysroot.service - Remount /sysroot read-write for Ignition was skipped because of an unmet condition check (ConditionPathIsReadWrite=!/sysroot).
Jan 29 11:32:02.112502 kernel: BTRFS info (device vda6): auto enabling async discard
Jan 29 11:32:02.102356 systemd[1]: Reached target ignition-diskful.target - Ignition Boot Disk Setup.
Jan 29 11:32:02.115738 systemd[1]: Mounted sysroot-oem.mount - /sysroot/oem.
Jan 29 11:32:02.117721 systemd[1]: Mounted sysroot-usr.mount - /sysroot/usr.
Jan 29 11:32:02.121332 systemd[1]: Starting initrd-setup-root.service - Root filesystem setup...
Jan 29 11:32:02.158622 initrd-setup-root[835]: cut: /sysroot/etc/passwd: No such file or directory
Jan 29 11:32:02.163672 initrd-setup-root[842]: cut: /sysroot/etc/group: No such file or directory
Jan 29 11:32:02.167298 initrd-setup-root[849]: cut: /sysroot/etc/shadow: No such file or directory
Jan 29 11:32:02.171845 initrd-setup-root[856]: cut: /sysroot/etc/gshadow: No such file or directory
Jan 29 11:32:02.259469 systemd[1]: Finished initrd-setup-root.service - Root filesystem setup.
Jan 29 11:32:02.269346 systemd[1]: Starting ignition-mount.service - Ignition (mount)...
Jan 29 11:32:02.272243 systemd[1]: Starting sysroot-boot.service - /sysroot/boot...
Jan 29 11:32:02.279843 systemd[1]: sysroot-oem.mount: Deactivated successfully.
Jan 29 11:32:02.282004 kernel: BTRFS info (device vda6): last unmount of filesystem 46e45d4d-e07d-4ebc-bafb-221646b0ed58
Jan 29 11:32:02.605112 systemd[1]: Finished sysroot-boot.service - /sysroot/boot.
Jan 29 11:32:02.627903 ignition[926]: INFO : Ignition 2.20.0
Jan 29 11:32:02.627903 ignition[926]: INFO : Stage: mount
Jan 29 11:32:02.629845 ignition[926]: INFO : no configs at "/usr/lib/ignition/base.d"
Jan 29 11:32:02.629845 ignition[926]: INFO : no config dir at "/usr/lib/ignition/base.platform.d/qemu"
Jan 29 11:32:02.629845 ignition[926]: INFO : mount: mount passed
Jan 29 11:32:02.629845 ignition[926]: INFO : Ignition finished successfully
Jan 29 11:32:02.631968 systemd[1]: Finished ignition-mount.service - Ignition (mount).
Jan 29 11:32:02.639458 systemd[1]: Starting ignition-files.service - Ignition (files)...
Jan 29 11:32:02.844523 systemd-networkd[777]: eth0: Gained IPv6LL
Jan 29 11:32:03.096424 systemd[1]: Mounting sysroot-oem.mount - /sysroot/oem...
Jan 29 11:32:03.105168 kernel: BTRFS: device label OEM devid 1 transid 16 /dev/vda6 scanned by mount (937)
Jan 29 11:32:03.105198 kernel: BTRFS info (device vda6): first mount of filesystem 46e45d4d-e07d-4ebc-bafb-221646b0ed58
Jan 29 11:32:03.105217 kernel: BTRFS info (device vda6): using crc32c (crc32c-intel) checksum algorithm
Jan 29 11:32:03.106280 kernel: BTRFS info (device vda6): using free space tree
Jan 29 11:32:03.110276 kernel: BTRFS info (device vda6): auto enabling async discard
Jan 29 11:32:03.112194 systemd[1]: Mounted sysroot-oem.mount - /sysroot/oem.
Jan 29 11:32:03.164034 ignition[954]: INFO : Ignition 2.20.0 Jan 29 11:32:03.164034 ignition[954]: INFO : Stage: files Jan 29 11:32:03.166098 ignition[954]: INFO : no configs at "/usr/lib/ignition/base.d" Jan 29 11:32:03.166098 ignition[954]: INFO : no config dir at "/usr/lib/ignition/base.platform.d/qemu" Jan 29 11:32:03.166098 ignition[954]: DEBUG : files: compiled without relabeling support, skipping Jan 29 11:32:03.166098 ignition[954]: INFO : files: ensureUsers: op(1): [started] creating or modifying user "core" Jan 29 11:32:03.166098 ignition[954]: DEBUG : files: ensureUsers: op(1): executing: "usermod" "--root" "/sysroot" "core" Jan 29 11:32:03.174241 ignition[954]: INFO : files: ensureUsers: op(1): [finished] creating or modifying user "core" Jan 29 11:32:03.174241 ignition[954]: INFO : files: ensureUsers: op(2): [started] adding ssh keys to user "core" Jan 29 11:32:03.174241 ignition[954]: INFO : files: ensureUsers: op(2): [finished] adding ssh keys to user "core" Jan 29 11:32:03.174241 ignition[954]: INFO : files: createFilesystemsFiles: createFiles: op(3): [started] writing file "/sysroot/home/core/install.sh" Jan 29 11:32:03.174241 ignition[954]: INFO : files: createFilesystemsFiles: createFiles: op(3): [finished] writing file "/sysroot/home/core/install.sh" Jan 29 11:32:03.174241 ignition[954]: INFO : files: createFilesystemsFiles: createFiles: op(4): [started] writing file "/sysroot/etc/flatcar/update.conf" Jan 29 11:32:03.174241 ignition[954]: INFO : files: createFilesystemsFiles: createFiles: op(4): [finished] writing file "/sysroot/etc/flatcar/update.conf" Jan 29 11:32:03.174241 ignition[954]: INFO : files: createFilesystemsFiles: createFiles: op(5): [started] writing link "/sysroot/etc/extensions/kubernetes.raw" -> "/opt/extensions/kubernetes/kubernetes-v1.30.1-x86-64.raw" Jan 29 11:32:03.174241 ignition[954]: INFO : files: createFilesystemsFiles: createFiles: op(5): [finished] writing link "/sysroot/etc/extensions/kubernetes.raw" -> "/opt/extensions/kubernetes/kubernetes-v1.30.1-x86-64.raw" Jan 29 11:32:03.174241 ignition[954]: INFO : files: createFilesystemsFiles: createFiles: op(6): [started] writing file "/sysroot/opt/extensions/kubernetes/kubernetes-v1.30.1-x86-64.raw" Jan 29 11:32:03.174241 ignition[954]: INFO : files: createFilesystemsFiles: createFiles: op(6): GET https://github.com/flatcar/sysext-bakery/releases/download/latest/kubernetes-v1.30.1-x86-64.raw: attempt #1 Jan 29 11:32:03.170147 unknown[954]: wrote ssh authorized keys file for user: core Jan 29 11:32:03.560439 ignition[954]: INFO : files: createFilesystemsFiles: createFiles: op(6): GET result: OK Jan 29 11:32:04.152661 ignition[954]: INFO : files: createFilesystemsFiles: createFiles: op(6): [finished] writing file "/sysroot/opt/extensions/kubernetes/kubernetes-v1.30.1-x86-64.raw" Jan 29 11:32:04.152661 ignition[954]: INFO : files: op(7): [started] processing unit "coreos-metadata.service" Jan 29 11:32:04.157377 ignition[954]: INFO : files: op(7): op(8): [started] writing unit "coreos-metadata.service" at "/sysroot/etc/systemd/system/coreos-metadata.service" Jan 29 11:32:04.157377 ignition[954]: INFO : files: op(7): op(8): [finished] writing unit "coreos-metadata.service" at "/sysroot/etc/systemd/system/coreos-metadata.service" Jan 29 11:32:04.157377 ignition[954]: INFO : files: op(7): [finished] processing unit "coreos-metadata.service" Jan 29 11:32:04.157377 ignition[954]: INFO : files: op(9): [started] setting preset to disabled for "coreos-metadata.service" Jan 29 11:32:04.198706 ignition[954]: 
INFO : files: op(9): op(a): [started] removing enablement symlink(s) for "coreos-metadata.service" Jan 29 11:32:04.206916 ignition[954]: INFO : files: op(9): op(a): [finished] removing enablement symlink(s) for "coreos-metadata.service" Jan 29 11:32:04.208759 ignition[954]: INFO : files: op(9): [finished] setting preset to disabled for "coreos-metadata.service" Jan 29 11:32:04.208759 ignition[954]: INFO : files: createResultFile: createFiles: op(b): [started] writing file "/sysroot/etc/.ignition-result.json" Jan 29 11:32:04.208759 ignition[954]: INFO : files: createResultFile: createFiles: op(b): [finished] writing file "/sysroot/etc/.ignition-result.json" Jan 29 11:32:04.208759 ignition[954]: INFO : files: files passed Jan 29 11:32:04.208759 ignition[954]: INFO : Ignition finished successfully Jan 29 11:32:04.216991 systemd[1]: Finished ignition-files.service - Ignition (files). Jan 29 11:32:04.226414 systemd[1]: Starting ignition-quench.service - Ignition (record completion)... Jan 29 11:32:04.229681 systemd[1]: Starting initrd-setup-root-after-ignition.service - Root filesystem completion... Jan 29 11:32:04.235337 systemd[1]: ignition-quench.service: Deactivated successfully. Jan 29 11:32:04.235483 systemd[1]: Finished ignition-quench.service - Ignition (record completion). Jan 29 11:32:04.243677 initrd-setup-root-after-ignition[982]: grep: /sysroot/oem/oem-release: No such file or directory Jan 29 11:32:04.248848 initrd-setup-root-after-ignition[984]: grep: /sysroot/etc/flatcar/enabled-sysext.conf: No such file or directory Jan 29 11:32:04.250969 initrd-setup-root-after-ignition[984]: grep: /sysroot/usr/share/flatcar/enabled-sysext.conf: No such file or directory Jan 29 11:32:04.252908 initrd-setup-root-after-ignition[988]: grep: /sysroot/etc/flatcar/enabled-sysext.conf: No such file or directory Jan 29 11:32:04.256002 systemd[1]: Finished initrd-setup-root-after-ignition.service - Root filesystem completion. Jan 29 11:32:04.256710 systemd[1]: Reached target ignition-complete.target - Ignition Complete. Jan 29 11:32:04.273396 systemd[1]: Starting initrd-parse-etc.service - Mountpoints Configured in the Real Root... Jan 29 11:32:04.303278 systemd[1]: initrd-parse-etc.service: Deactivated successfully. Jan 29 11:32:04.303412 systemd[1]: Finished initrd-parse-etc.service - Mountpoints Configured in the Real Root. Jan 29 11:32:04.305811 systemd[1]: Reached target initrd-fs.target - Initrd File Systems. Jan 29 11:32:04.306147 systemd[1]: Reached target initrd.target - Initrd Default Target. Jan 29 11:32:04.306709 systemd[1]: dracut-mount.service - dracut mount hook was skipped because no trigger condition checks were met. Jan 29 11:32:04.307511 systemd[1]: Starting dracut-pre-pivot.service - dracut pre-pivot and cleanup hook... Jan 29 11:32:04.330815 systemd[1]: Finished dracut-pre-pivot.service - dracut pre-pivot and cleanup hook. Jan 29 11:32:04.342481 systemd[1]: Starting initrd-cleanup.service - Cleaning Up and Shutting Down Daemons... Jan 29 11:32:04.354501 systemd[1]: Stopped target nss-lookup.target - Host and Network Name Lookups. Jan 29 11:32:04.356929 systemd[1]: Stopped target remote-cryptsetup.target - Remote Encrypted Volumes. Jan 29 11:32:04.357302 systemd[1]: Stopped target timers.target - Timer Units. Jan 29 11:32:04.357791 systemd[1]: dracut-pre-pivot.service: Deactivated successfully. Jan 29 11:32:04.357917 systemd[1]: Stopped dracut-pre-pivot.service - dracut pre-pivot and cleanup hook. 
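The Ignition "files" stage above is driven by a host-provided config that is not included in this log. Purely as an illustration of the kind of config that would produce these operations (an SSH key for "core", /home/core/install.sh, /etc/flatcar/update.conf, the kubernetes sysext link and image download, and a disabled preset for coreos-metadata.service), a hypothetical Ignition spec-3 style document is sketched here in Python. The file paths and the download URL are taken from the log; everything else, including the spec version, modes, key material and file contents, is an assumption.

    import json

    # Hypothetical sketch only -- not the config actually applied on this host.
    config = {
        "ignition": {"version": "3.3.0"},  # assumed spec version
        "passwd": {
            "users": [{"name": "core", "sshAuthorizedKeys": ["ssh-ed25519 AAAA... example-key"]}]
        },
        "storage": {
            "files": [
                {"path": "/home/core/install.sh", "mode": 0o755,
                 "contents": {"source": "data:,%23%21%2Fbin%2Fsh%0A"}},          # "#!/bin/sh\n", placeholder
                {"path": "/etc/flatcar/update.conf",
                 "contents": {"source": "data:,GROUP%3Dstable%0A"}},             # "GROUP=stable\n", placeholder
                {"path": "/opt/extensions/kubernetes/kubernetes-v1.30.1-x86-64.raw",
                 "contents": {"source": "https://github.com/flatcar/sysext-bakery/releases/download/latest/kubernetes-v1.30.1-x86-64.raw"}},
            ],
            "links": [
                {"path": "/etc/extensions/kubernetes.raw",
                 "target": "/opt/extensions/kubernetes/kubernetes-v1.30.1-x86-64.raw", "hard": False},
            ],
        },
        # The real config also supplied the unit's contents (written as op(8) above); omitted in this sketch.
        "systemd": {"units": [{"name": "coreos-metadata.service", "enabled": False}]},
    }
    print(json.dumps(config, indent=2))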
Jan 29 11:32:04.362530 systemd[1]: Stopped target initrd.target - Initrd Default Target. Jan 29 11:32:04.363092 systemd[1]: Stopped target basic.target - Basic System. Jan 29 11:32:04.363594 systemd[1]: Stopped target ignition-complete.target - Ignition Complete. Jan 29 11:32:04.363935 systemd[1]: Stopped target ignition-diskful.target - Ignition Boot Disk Setup. Jan 29 11:32:04.364282 systemd[1]: Stopped target initrd-root-device.target - Initrd Root Device. Jan 29 11:32:04.364785 systemd[1]: Stopped target remote-fs.target - Remote File Systems. Jan 29 11:32:04.365159 systemd[1]: Stopped target remote-fs-pre.target - Preparation for Remote File Systems. Jan 29 11:32:04.365679 systemd[1]: Stopped target sysinit.target - System Initialization. Jan 29 11:32:04.365987 systemd[1]: Stopped target local-fs.target - Local File Systems. Jan 29 11:32:04.366333 systemd[1]: Stopped target swap.target - Swaps. Jan 29 11:32:04.367560 systemd[1]: dracut-pre-mount.service: Deactivated successfully. Jan 29 11:32:04.367701 systemd[1]: Stopped dracut-pre-mount.service - dracut pre-mount hook. Jan 29 11:32:04.386461 systemd[1]: Stopped target cryptsetup.target - Local Encrypted Volumes. Jan 29 11:32:04.386896 systemd[1]: Stopped target cryptsetup-pre.target - Local Encrypted Volumes (Pre). Jan 29 11:32:04.387179 systemd[1]: clevis-luks-askpass.path: Deactivated successfully. Jan 29 11:32:04.391558 systemd[1]: Stopped clevis-luks-askpass.path - Forward Password Requests to Clevis Directory Watch. Jan 29 11:32:04.392224 systemd[1]: dracut-initqueue.service: Deactivated successfully. Jan 29 11:32:04.392405 systemd[1]: Stopped dracut-initqueue.service - dracut initqueue hook. Jan 29 11:32:04.393050 systemd[1]: ignition-fetch-offline.service: Deactivated successfully. Jan 29 11:32:04.393193 systemd[1]: Stopped ignition-fetch-offline.service - Ignition (fetch-offline). Jan 29 11:32:04.398371 systemd[1]: Stopped target paths.target - Path Units. Jan 29 11:32:04.398662 systemd[1]: systemd-ask-password-console.path: Deactivated successfully. Jan 29 11:32:04.406435 systemd[1]: Stopped systemd-ask-password-console.path - Dispatch Password Requests to Console Directory Watch. Jan 29 11:32:04.409321 systemd[1]: Stopped target slices.target - Slice Units. Jan 29 11:32:04.410199 systemd[1]: Stopped target sockets.target - Socket Units. Jan 29 11:32:04.410546 systemd[1]: iscsid.socket: Deactivated successfully. Jan 29 11:32:04.410682 systemd[1]: Closed iscsid.socket - Open-iSCSI iscsid Socket. Jan 29 11:32:04.413745 systemd[1]: iscsiuio.socket: Deactivated successfully. Jan 29 11:32:04.413846 systemd[1]: Closed iscsiuio.socket - Open-iSCSI iscsiuio Socket. Jan 29 11:32:04.415710 systemd[1]: initrd-setup-root-after-ignition.service: Deactivated successfully. Jan 29 11:32:04.415898 systemd[1]: Stopped initrd-setup-root-after-ignition.service - Root filesystem completion. Jan 29 11:32:04.417974 systemd[1]: ignition-files.service: Deactivated successfully. Jan 29 11:32:04.418098 systemd[1]: Stopped ignition-files.service - Ignition (files). Jan 29 11:32:04.427436 systemd[1]: Stopping ignition-mount.service - Ignition (mount)... Jan 29 11:32:04.429570 systemd[1]: Stopping sysroot-boot.service - /sysroot/boot... Jan 29 11:32:04.431390 systemd[1]: systemd-udev-trigger.service: Deactivated successfully. Jan 29 11:32:04.431552 systemd[1]: Stopped systemd-udev-trigger.service - Coldplug All udev Devices. Jan 29 11:32:04.432173 systemd[1]: dracut-pre-trigger.service: Deactivated successfully. 
Jan 29 11:32:04.432348 systemd[1]: Stopped dracut-pre-trigger.service - dracut pre-trigger hook. Jan 29 11:32:04.442093 systemd[1]: initrd-cleanup.service: Deactivated successfully. Jan 29 11:32:04.442276 systemd[1]: Finished initrd-cleanup.service - Cleaning Up and Shutting Down Daemons. Jan 29 11:32:04.445658 ignition[1008]: INFO : Ignition 2.20.0 Jan 29 11:32:04.445658 ignition[1008]: INFO : Stage: umount Jan 29 11:32:04.445658 ignition[1008]: INFO : no configs at "/usr/lib/ignition/base.d" Jan 29 11:32:04.445658 ignition[1008]: INFO : no config dir at "/usr/lib/ignition/base.platform.d/qemu" Jan 29 11:32:04.445658 ignition[1008]: INFO : umount: umount passed Jan 29 11:32:04.445658 ignition[1008]: INFO : Ignition finished successfully Jan 29 11:32:04.446597 systemd[1]: ignition-mount.service: Deactivated successfully. Jan 29 11:32:04.446736 systemd[1]: Stopped ignition-mount.service - Ignition (mount). Jan 29 11:32:04.449130 systemd[1]: Stopped target network.target - Network. Jan 29 11:32:04.450922 systemd[1]: ignition-disks.service: Deactivated successfully. Jan 29 11:32:04.451041 systemd[1]: Stopped ignition-disks.service - Ignition (disks). Jan 29 11:32:04.453090 systemd[1]: ignition-kargs.service: Deactivated successfully. Jan 29 11:32:04.453145 systemd[1]: Stopped ignition-kargs.service - Ignition (kargs). Jan 29 11:32:04.454899 systemd[1]: ignition-setup.service: Deactivated successfully. Jan 29 11:32:04.454952 systemd[1]: Stopped ignition-setup.service - Ignition (setup). Jan 29 11:32:04.456769 systemd[1]: ignition-setup-pre.service: Deactivated successfully. Jan 29 11:32:04.456828 systemd[1]: Stopped ignition-setup-pre.service - Ignition env setup. Jan 29 11:32:04.459099 systemd[1]: Stopping systemd-networkd.service - Network Configuration... Jan 29 11:32:04.461050 systemd[1]: Stopping systemd-resolved.service - Network Name Resolution... Jan 29 11:32:04.464320 systemd-networkd[777]: eth0: DHCPv6 lease lost Jan 29 11:32:04.466757 systemd[1]: systemd-networkd.service: Deactivated successfully. Jan 29 11:32:04.466918 systemd[1]: Stopped systemd-networkd.service - Network Configuration. Jan 29 11:32:04.470735 systemd[1]: sysroot-boot.mount: Deactivated successfully. Jan 29 11:32:04.471221 systemd[1]: systemd-resolved.service: Deactivated successfully. Jan 29 11:32:04.471365 systemd[1]: Stopped systemd-resolved.service - Network Name Resolution. Jan 29 11:32:04.474981 systemd[1]: systemd-networkd.socket: Deactivated successfully. Jan 29 11:32:04.475045 systemd[1]: Closed systemd-networkd.socket - Network Service Netlink Socket. Jan 29 11:32:04.485387 systemd[1]: Stopping network-cleanup.service - Network Cleanup... Jan 29 11:32:04.486622 systemd[1]: parse-ip-for-networkd.service: Deactivated successfully. Jan 29 11:32:04.486696 systemd[1]: Stopped parse-ip-for-networkd.service - Write systemd-networkd units from cmdline. Jan 29 11:32:04.489431 systemd[1]: systemd-sysctl.service: Deactivated successfully. Jan 29 11:32:04.489499 systemd[1]: Stopped systemd-sysctl.service - Apply Kernel Variables. Jan 29 11:32:04.491922 systemd[1]: systemd-modules-load.service: Deactivated successfully. Jan 29 11:32:04.492003 systemd[1]: Stopped systemd-modules-load.service - Load Kernel Modules. Jan 29 11:32:04.494583 systemd[1]: systemd-tmpfiles-setup.service: Deactivated successfully. Jan 29 11:32:04.494662 systemd[1]: Stopped systemd-tmpfiles-setup.service - Create System Files and Directories. 
Jan 29 11:32:04.496937 systemd[1]: Stopping systemd-udevd.service - Rule-based Manager for Device Events and Files... Jan 29 11:32:04.506819 systemd[1]: network-cleanup.service: Deactivated successfully. Jan 29 11:32:04.506958 systemd[1]: Stopped network-cleanup.service - Network Cleanup. Jan 29 11:32:04.523169 systemd[1]: systemd-udevd.service: Deactivated successfully. Jan 29 11:32:04.523387 systemd[1]: Stopped systemd-udevd.service - Rule-based Manager for Device Events and Files. Jan 29 11:32:04.525757 systemd[1]: systemd-udevd-control.socket: Deactivated successfully. Jan 29 11:32:04.525812 systemd[1]: Closed systemd-udevd-control.socket - udev Control Socket. Jan 29 11:32:04.527775 systemd[1]: systemd-udevd-kernel.socket: Deactivated successfully. Jan 29 11:32:04.527816 systemd[1]: Closed systemd-udevd-kernel.socket - udev Kernel Socket. Jan 29 11:32:04.529956 systemd[1]: dracut-pre-udev.service: Deactivated successfully. Jan 29 11:32:04.530010 systemd[1]: Stopped dracut-pre-udev.service - dracut pre-udev hook. Jan 29 11:32:04.532628 systemd[1]: dracut-cmdline.service: Deactivated successfully. Jan 29 11:32:04.532705 systemd[1]: Stopped dracut-cmdline.service - dracut cmdline hook. Jan 29 11:32:04.534496 systemd[1]: dracut-cmdline-ask.service: Deactivated successfully. Jan 29 11:32:04.534555 systemd[1]: Stopped dracut-cmdline-ask.service - dracut ask for additional cmdline parameters. Jan 29 11:32:04.548441 systemd[1]: Starting initrd-udevadm-cleanup-db.service - Cleanup udev Database... Jan 29 11:32:04.549563 systemd[1]: systemd-tmpfiles-setup-dev.service: Deactivated successfully. Jan 29 11:32:04.549640 systemd[1]: Stopped systemd-tmpfiles-setup-dev.service - Create Static Device Nodes in /dev. Jan 29 11:32:04.552012 systemd[1]: systemd-tmpfiles-setup-dev-early.service: Deactivated successfully. Jan 29 11:32:04.552074 systemd[1]: Stopped systemd-tmpfiles-setup-dev-early.service - Create Static Device Nodes in /dev gracefully. Jan 29 11:32:04.554279 systemd[1]: kmod-static-nodes.service: Deactivated successfully. Jan 29 11:32:04.554339 systemd[1]: Stopped kmod-static-nodes.service - Create List of Static Device Nodes. Jan 29 11:32:04.556751 systemd[1]: systemd-vconsole-setup.service: Deactivated successfully. Jan 29 11:32:04.556811 systemd[1]: Stopped systemd-vconsole-setup.service - Virtual Console Setup. Jan 29 11:32:04.559427 systemd[1]: initrd-udevadm-cleanup-db.service: Deactivated successfully. Jan 29 11:32:04.559564 systemd[1]: Finished initrd-udevadm-cleanup-db.service - Cleanup udev Database. Jan 29 11:32:04.620281 systemd[1]: sysroot-boot.service: Deactivated successfully. Jan 29 11:32:04.620452 systemd[1]: Stopped sysroot-boot.service - /sysroot/boot. Jan 29 11:32:04.623006 systemd[1]: Reached target initrd-switch-root.target - Switch Root. Jan 29 11:32:04.624151 systemd[1]: initrd-setup-root.service: Deactivated successfully. Jan 29 11:32:04.624229 systemd[1]: Stopped initrd-setup-root.service - Root filesystem setup. Jan 29 11:32:04.641685 systemd[1]: Starting initrd-switch-root.service - Switch Root... Jan 29 11:32:04.652452 systemd[1]: Switching root. Jan 29 11:32:04.686281 systemd-journald[194]: Received SIGTERM from PID 1 (systemd). 
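At this point the initrd journal stops and systemd switches into the real root. For post-processing a capture like this one, most entries follow a simple "timestamp source[pid]: message" shape; the sketch below parses that shape. The regex and the handling of the missing year are assumptions of the sketch, and sources that themselves contain a colon (for example zram_generator::config) will not match this simple pattern.

    import re
    from datetime import datetime

    LINE = re.compile(
        r"^(?P<ts>\w{3} \d{2} \d{2}:\d{2}:\d{2}\.\d{6}) "   # e.g. "Jan 29 11:32:04.686281"
        r"(?P<src>[^\s\[:]+)(?:\[(?P<pid>\d+)\])?: "        # e.g. "systemd-journald[194]" or "kernel"
        r"(?P<msg>.*)$"
    )

    def parse(line: str):
        m = LINE.match(line)
        if m is None:
            return None
        # The log carries no year, so strptime defaults it to 1900; callers can fix that up.
        ts = datetime.strptime(m["ts"], "%b %d %H:%M:%S.%f")
        return ts, m["src"], m["pid"], m["msg"]

    print(parse("Jan 29 11:32:04.686281 systemd-journald[194]: Received SIGTERM from PID 1 (systemd)."))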
Jan 29 11:32:04.686358 systemd-journald[194]: Journal stopped Jan 29 11:32:05.933118 kernel: SELinux: policy capability network_peer_controls=1 Jan 29 11:32:05.933187 kernel: SELinux: policy capability open_perms=1 Jan 29 11:32:05.933199 kernel: SELinux: policy capability extended_socket_class=1 Jan 29 11:32:05.933211 kernel: SELinux: policy capability always_check_network=0 Jan 29 11:32:05.933223 kernel: SELinux: policy capability cgroup_seclabel=1 Jan 29 11:32:05.933234 kernel: SELinux: policy capability nnp_nosuid_transition=1 Jan 29 11:32:05.933264 kernel: SELinux: policy capability genfs_seclabel_symlinks=0 Jan 29 11:32:05.933284 kernel: SELinux: policy capability ioctl_skip_cloexec=0 Jan 29 11:32:05.933301 kernel: audit: type=1403 audit(1738150325.175:2): auid=4294967295 ses=4294967295 lsm=selinux res=1 Jan 29 11:32:05.933319 systemd[1]: Successfully loaded SELinux policy in 42.900ms. Jan 29 11:32:05.933348 systemd[1]: Relabeled /dev, /dev/shm, /run, /sys/fs/cgroup in 13.142ms. Jan 29 11:32:05.933362 systemd[1]: systemd 255 running in system mode (+PAM +AUDIT +SELINUX -APPARMOR +IMA +SMACK +SECCOMP +GCRYPT -GNUTLS +OPENSSL -ACL +BLKID +CURL +ELFUTILS -FIDO2 +IDN2 -IDN +IPTC +KMOD +LIBCRYPTSETUP +LIBFDISK +PCRE2 -PWQUALITY -P11KIT -QRENCODE +TPM2 +BZIP2 +LZ4 +XZ +ZLIB +ZSTD -BPF_FRAMEWORK -XKBCOMMON +UTMP -SYSVINIT default-hierarchy=unified) Jan 29 11:32:05.933374 systemd[1]: Detected virtualization kvm. Jan 29 11:32:05.933386 systemd[1]: Detected architecture x86-64. Jan 29 11:32:05.933399 systemd[1]: Detected first boot. Jan 29 11:32:05.933414 systemd[1]: Initializing machine ID from VM UUID. Jan 29 11:32:05.933429 zram_generator::config[1053]: No configuration found. Jan 29 11:32:05.933442 systemd[1]: Populated /etc with preset unit settings. Jan 29 11:32:05.933455 systemd[1]: initrd-switch-root.service: Deactivated successfully. Jan 29 11:32:05.933467 systemd[1]: Stopped initrd-switch-root.service - Switch Root. Jan 29 11:32:05.933481 systemd[1]: systemd-journald.service: Scheduled restart job, restart counter is at 1. Jan 29 11:32:05.933494 systemd[1]: Created slice system-addon\x2dconfig.slice - Slice /system/addon-config. Jan 29 11:32:05.933506 systemd[1]: Created slice system-addon\x2drun.slice - Slice /system/addon-run. Jan 29 11:32:05.933518 systemd[1]: Created slice system-getty.slice - Slice /system/getty. Jan 29 11:32:05.933530 systemd[1]: Created slice system-modprobe.slice - Slice /system/modprobe. Jan 29 11:32:05.933548 systemd[1]: Created slice system-serial\x2dgetty.slice - Slice /system/serial-getty. Jan 29 11:32:05.933560 systemd[1]: Created slice system-system\x2dcloudinit.slice - Slice /system/system-cloudinit. Jan 29 11:32:05.933574 systemd[1]: Created slice system-systemd\x2dfsck.slice - Slice /system/systemd-fsck. Jan 29 11:32:05.933586 systemd[1]: Created slice user.slice - User and Session Slice. Jan 29 11:32:05.933608 systemd[1]: Started clevis-luks-askpass.path - Forward Password Requests to Clevis Directory Watch. Jan 29 11:32:05.933621 systemd[1]: Started systemd-ask-password-console.path - Dispatch Password Requests to Console Directory Watch. Jan 29 11:32:05.933634 systemd[1]: Started systemd-ask-password-wall.path - Forward Password Requests to Wall Directory Watch. Jan 29 11:32:05.933647 systemd[1]: Set up automount boot.automount - Boot partition Automount Point. Jan 29 11:32:05.933659 systemd[1]: Set up automount proc-sys-fs-binfmt_misc.automount - Arbitrary Executable File Formats File System Automount Point. 
Jan 29 11:32:05.933672 systemd[1]: Expecting device dev-disk-by\x2dlabel-OEM.device - /dev/disk/by-label/OEM... Jan 29 11:32:05.933684 systemd[1]: Expecting device dev-ttyS0.device - /dev/ttyS0... Jan 29 11:32:05.933697 systemd[1]: Reached target cryptsetup-pre.target - Local Encrypted Volumes (Pre). Jan 29 11:32:05.933709 systemd[1]: Stopped target initrd-switch-root.target - Switch Root. Jan 29 11:32:05.933723 systemd[1]: Stopped target initrd-fs.target - Initrd File Systems. Jan 29 11:32:05.933735 systemd[1]: Stopped target initrd-root-fs.target - Initrd Root File System. Jan 29 11:32:05.933747 systemd[1]: Reached target integritysetup.target - Local Integrity Protected Volumes. Jan 29 11:32:05.933759 systemd[1]: Reached target remote-cryptsetup.target - Remote Encrypted Volumes. Jan 29 11:32:05.933772 systemd[1]: Reached target remote-fs.target - Remote File Systems. Jan 29 11:32:05.933784 systemd[1]: Reached target slices.target - Slice Units. Jan 29 11:32:05.933796 systemd[1]: Reached target swap.target - Swaps. Jan 29 11:32:05.933807 systemd[1]: Reached target veritysetup.target - Local Verity Protected Volumes. Jan 29 11:32:05.933822 systemd[1]: Listening on systemd-coredump.socket - Process Core Dump Socket. Jan 29 11:32:05.933834 systemd[1]: Listening on systemd-networkd.socket - Network Service Netlink Socket. Jan 29 11:32:05.933847 systemd[1]: Listening on systemd-udevd-control.socket - udev Control Socket. Jan 29 11:32:05.933860 systemd[1]: Listening on systemd-udevd-kernel.socket - udev Kernel Socket. Jan 29 11:32:05.933875 systemd[1]: Listening on systemd-userdbd.socket - User Database Manager Socket. Jan 29 11:32:05.933887 systemd[1]: Mounting dev-hugepages.mount - Huge Pages File System... Jan 29 11:32:05.933900 systemd[1]: Mounting dev-mqueue.mount - POSIX Message Queue File System... Jan 29 11:32:05.933917 systemd[1]: Mounting media.mount - External Media Directory... Jan 29 11:32:05.933929 systemd[1]: proc-xen.mount - /proc/xen was skipped because of an unmet condition check (ConditionVirtualization=xen). Jan 29 11:32:05.933944 systemd[1]: Mounting sys-kernel-debug.mount - Kernel Debug File System... Jan 29 11:32:05.933956 systemd[1]: Mounting sys-kernel-tracing.mount - Kernel Trace File System... Jan 29 11:32:05.933967 systemd[1]: Mounting tmp.mount - Temporary Directory /tmp... Jan 29 11:32:05.933980 systemd[1]: var-lib-machines.mount - Virtual Machine and Container Storage (Compatibility) was skipped because of an unmet condition check (ConditionPathExists=/var/lib/machines.raw). Jan 29 11:32:05.933992 systemd[1]: Reached target machines.target - Containers. Jan 29 11:32:05.934004 systemd[1]: Starting flatcar-tmpfiles.service - Create missing system files... Jan 29 11:32:05.934017 systemd[1]: ignition-delete-config.service - Ignition (delete config) was skipped because no trigger condition checks were met. Jan 29 11:32:05.934029 systemd[1]: Starting kmod-static-nodes.service - Create List of Static Device Nodes... Jan 29 11:32:05.934044 systemd[1]: Starting modprobe@configfs.service - Load Kernel Module configfs... Jan 29 11:32:05.934056 systemd[1]: Starting modprobe@dm_mod.service - Load Kernel Module dm_mod... Jan 29 11:32:05.934068 systemd[1]: Starting modprobe@drm.service - Load Kernel Module drm... Jan 29 11:32:05.934080 systemd[1]: Starting modprobe@efi_pstore.service - Load Kernel Module efi_pstore... Jan 29 11:32:05.934093 systemd[1]: Starting modprobe@fuse.service - Load Kernel Module fuse... 
Jan 29 11:32:05.934104 systemd[1]: Starting modprobe@loop.service - Load Kernel Module loop... Jan 29 11:32:05.934119 systemd[1]: setup-nsswitch.service - Create /etc/nsswitch.conf was skipped because of an unmet condition check (ConditionPathExists=!/etc/nsswitch.conf). Jan 29 11:32:05.934132 systemd[1]: systemd-fsck-root.service: Deactivated successfully. Jan 29 11:32:05.934147 systemd[1]: Stopped systemd-fsck-root.service - File System Check on Root Device. Jan 29 11:32:05.934159 systemd[1]: systemd-fsck-usr.service: Deactivated successfully. Jan 29 11:32:05.934171 systemd[1]: Stopped systemd-fsck-usr.service. Jan 29 11:32:05.934183 kernel: loop: module loaded Jan 29 11:32:05.934195 systemd[1]: Starting systemd-journald.service - Journal Service... Jan 29 11:32:05.934207 kernel: fuse: init (API version 7.39) Jan 29 11:32:05.934218 systemd[1]: Starting systemd-modules-load.service - Load Kernel Modules... Jan 29 11:32:05.934230 systemd[1]: Starting systemd-network-generator.service - Generate network units from Kernel command line... Jan 29 11:32:05.934243 systemd[1]: Starting systemd-remount-fs.service - Remount Root and Kernel File Systems... Jan 29 11:32:05.934290 systemd-journald[1123]: Collecting audit messages is disabled. Jan 29 11:32:05.934318 systemd[1]: Starting systemd-udev-trigger.service - Coldplug All udev Devices... Jan 29 11:32:05.934330 systemd-journald[1123]: Journal started Jan 29 11:32:05.934351 systemd-journald[1123]: Runtime Journal (/run/log/journal/5df393c955c3400dadb81ad71720e186) is 6.0M, max 48.4M, 42.3M free. Jan 29 11:32:05.702833 systemd[1]: Queued start job for default target multi-user.target. Jan 29 11:32:05.721581 systemd[1]: Unnecessary job was removed for dev-vda6.device - /dev/vda6. Jan 29 11:32:05.722026 systemd[1]: systemd-journald.service: Deactivated successfully. Jan 29 11:32:05.939106 systemd[1]: verity-setup.service: Deactivated successfully. Jan 29 11:32:05.939141 systemd[1]: Stopped verity-setup.service. Jan 29 11:32:05.939157 systemd[1]: xenserver-pv-version.service - Set fake PV driver version for XenServer was skipped because of an unmet condition check (ConditionVirtualization=xen). Jan 29 11:32:05.941968 kernel: ACPI: bus type drm_connector registered Jan 29 11:32:05.942026 systemd[1]: Started systemd-journald.service - Journal Service. Jan 29 11:32:05.945031 systemd[1]: Mounted dev-hugepages.mount - Huge Pages File System. Jan 29 11:32:05.946404 systemd[1]: Mounted dev-mqueue.mount - POSIX Message Queue File System. Jan 29 11:32:05.947751 systemd[1]: Mounted media.mount - External Media Directory. Jan 29 11:32:05.948906 systemd[1]: Mounted sys-kernel-debug.mount - Kernel Debug File System. Jan 29 11:32:05.950160 systemd[1]: Mounted sys-kernel-tracing.mount - Kernel Trace File System. Jan 29 11:32:05.951417 systemd[1]: Mounted tmp.mount - Temporary Directory /tmp. Jan 29 11:32:05.952761 systemd[1]: Finished flatcar-tmpfiles.service - Create missing system files. Jan 29 11:32:05.954402 systemd[1]: Finished kmod-static-nodes.service - Create List of Static Device Nodes. Jan 29 11:32:05.956005 systemd[1]: modprobe@configfs.service: Deactivated successfully. Jan 29 11:32:05.956190 systemd[1]: Finished modprobe@configfs.service - Load Kernel Module configfs. Jan 29 11:32:05.957684 systemd[1]: modprobe@dm_mod.service: Deactivated successfully. Jan 29 11:32:05.957865 systemd[1]: Finished modprobe@dm_mod.service - Load Kernel Module dm_mod. Jan 29 11:32:05.959421 systemd[1]: modprobe@drm.service: Deactivated successfully. 
Jan 29 11:32:05.959611 systemd[1]: Finished modprobe@drm.service - Load Kernel Module drm. Jan 29 11:32:05.961083 systemd[1]: modprobe@efi_pstore.service: Deactivated successfully. Jan 29 11:32:05.961275 systemd[1]: Finished modprobe@efi_pstore.service - Load Kernel Module efi_pstore. Jan 29 11:32:05.962884 systemd[1]: modprobe@fuse.service: Deactivated successfully. Jan 29 11:32:05.963062 systemd[1]: Finished modprobe@fuse.service - Load Kernel Module fuse. Jan 29 11:32:05.964457 systemd[1]: modprobe@loop.service: Deactivated successfully. Jan 29 11:32:05.964641 systemd[1]: Finished modprobe@loop.service - Load Kernel Module loop. Jan 29 11:32:05.966119 systemd[1]: Finished systemd-modules-load.service - Load Kernel Modules. Jan 29 11:32:05.967600 systemd[1]: Finished systemd-network-generator.service - Generate network units from Kernel command line. Jan 29 11:32:05.969146 systemd[1]: Finished systemd-remount-fs.service - Remount Root and Kernel File Systems. Jan 29 11:32:05.983977 systemd[1]: Reached target network-pre.target - Preparation for Network. Jan 29 11:32:05.995346 systemd[1]: Mounting sys-fs-fuse-connections.mount - FUSE Control File System... Jan 29 11:32:05.997692 systemd[1]: Mounting sys-kernel-config.mount - Kernel Configuration File System... Jan 29 11:32:05.998839 systemd[1]: remount-root.service - Remount Root File System was skipped because of an unmet condition check (ConditionPathIsReadWrite=!/). Jan 29 11:32:05.998870 systemd[1]: Reached target local-fs.target - Local File Systems. Jan 29 11:32:06.000857 systemd[1]: Listening on systemd-sysext.socket - System Extension Image Management (Varlink). Jan 29 11:32:06.003198 systemd[1]: Starting dracut-shutdown.service - Restore /run/initramfs on shutdown... Jan 29 11:32:06.007561 systemd[1]: Starting ldconfig.service - Rebuild Dynamic Linker Cache... Jan 29 11:32:06.009350 systemd[1]: systemd-binfmt.service - Set Up Additional Binary Formats was skipped because no trigger condition checks were met. Jan 29 11:32:06.011201 systemd[1]: Starting systemd-hwdb-update.service - Rebuild Hardware Database... Jan 29 11:32:06.014417 systemd[1]: Starting systemd-journal-flush.service - Flush Journal to Persistent Storage... Jan 29 11:32:06.015696 systemd[1]: systemd-pstore.service - Platform Persistent Storage Archival was skipped because of an unmet condition check (ConditionDirectoryNotEmpty=/sys/fs/pstore). Jan 29 11:32:06.018358 systemd[1]: Starting systemd-random-seed.service - Load/Save OS Random Seed... Jan 29 11:32:06.019860 systemd[1]: systemd-repart.service - Repartition Root Disk was skipped because no trigger condition checks were met. Jan 29 11:32:06.021104 systemd[1]: Starting systemd-sysctl.service - Apply Kernel Variables... Jan 29 11:32:06.026078 systemd[1]: Starting systemd-sysext.service - Merge System Extension Images into /usr/ and /opt/... Jan 29 11:32:06.028731 systemd[1]: Starting systemd-tmpfiles-setup-dev-early.service - Create Static Device Nodes in /dev gracefully... Jan 29 11:32:06.033085 systemd[1]: Mounted sys-fs-fuse-connections.mount - FUSE Control File System. Jan 29 11:32:06.033786 systemd[1]: Mounted sys-kernel-config.mount - Kernel Configuration File System. Jan 29 11:32:06.036658 systemd[1]: Finished dracut-shutdown.service - Restore /run/initramfs on shutdown. Jan 29 11:32:06.044357 systemd-journald[1123]: Time spent on flushing to /var/log/journal/5df393c955c3400dadb81ad71720e186 is 17.313ms for 940 entries. 
Jan 29 11:32:06.044357 systemd-journald[1123]: System Journal (/var/log/journal/5df393c955c3400dadb81ad71720e186) is 8.0M, max 195.6M, 187.6M free. Jan 29 11:32:06.085105 systemd-journald[1123]: Received client request to flush runtime journal. Jan 29 11:32:06.085163 kernel: loop0: detected capacity change from 0 to 138184 Jan 29 11:32:06.056743 systemd[1]: Finished systemd-random-seed.service - Load/Save OS Random Seed. Jan 29 11:32:06.059813 systemd[1]: Reached target first-boot-complete.target - First Boot Complete. Jan 29 11:32:06.072426 systemd[1]: Starting systemd-machine-id-commit.service - Commit a transient machine-id on disk... Jan 29 11:32:06.077757 systemd[1]: Finished systemd-sysctl.service - Apply Kernel Variables. Jan 29 11:32:06.078324 systemd-tmpfiles[1168]: ACLs are not supported, ignoring. Jan 29 11:32:06.078338 systemd-tmpfiles[1168]: ACLs are not supported, ignoring. Jan 29 11:32:06.082120 systemd[1]: Finished systemd-udev-trigger.service - Coldplug All udev Devices. Jan 29 11:32:06.084786 systemd[1]: Finished systemd-tmpfiles-setup-dev-early.service - Create Static Device Nodes in /dev gracefully. Jan 29 11:32:06.086704 systemd[1]: Finished systemd-journal-flush.service - Flush Journal to Persistent Storage. Jan 29 11:32:06.097308 kernel: squashfs: version 4.0 (2009/01/31) Phillip Lougher Jan 29 11:32:06.099464 systemd[1]: Starting systemd-sysusers.service - Create System Users... Jan 29 11:32:06.105041 systemd[1]: Starting systemd-udev-settle.service - Wait for udev To Complete Device Initialization... Jan 29 11:32:06.108487 systemd[1]: etc-machine\x2did.mount: Deactivated successfully. Jan 29 11:32:06.109376 systemd[1]: Finished systemd-machine-id-commit.service - Commit a transient machine-id on disk. Jan 29 11:32:06.123785 udevadm[1185]: systemd-udev-settle.service is deprecated. Please fix lvm2-activation-early.service, lvm2-activation.service not to pull it in. Jan 29 11:32:06.132761 systemd[1]: Finished systemd-sysusers.service - Create System Users. Jan 29 11:32:06.136561 kernel: loop1: detected capacity change from 0 to 140992 Jan 29 11:32:06.142496 systemd[1]: Starting systemd-tmpfiles-setup-dev.service - Create Static Device Nodes in /dev... Jan 29 11:32:06.166435 systemd-tmpfiles[1189]: ACLs are not supported, ignoring. Jan 29 11:32:06.166854 systemd-tmpfiles[1189]: ACLs are not supported, ignoring. Jan 29 11:32:06.172278 kernel: loop2: detected capacity change from 0 to 210664 Jan 29 11:32:06.173852 systemd[1]: Finished systemd-tmpfiles-setup-dev.service - Create Static Device Nodes in /dev. Jan 29 11:32:06.209280 kernel: loop3: detected capacity change from 0 to 138184 Jan 29 11:32:06.222353 kernel: loop4: detected capacity change from 0 to 140992 Jan 29 11:32:06.232324 kernel: loop5: detected capacity change from 0 to 210664 Jan 29 11:32:06.240394 (sd-merge)[1196]: Using extensions 'containerd-flatcar', 'docker-flatcar', 'kubernetes'. Jan 29 11:32:06.241745 (sd-merge)[1196]: Merged extensions into '/usr'. Jan 29 11:32:06.248956 systemd[1]: Reloading requested from client PID 1167 ('systemd-sysext') (unit systemd-sysext.service)... Jan 29 11:32:06.248975 systemd[1]: Reloading... Jan 29 11:32:06.312288 zram_generator::config[1225]: No configuration found. Jan 29 11:32:06.374152 ldconfig[1162]: /sbin/ldconfig: /usr/lib/ld.so.conf is not an ELF file - it has the wrong magic bytes at the start. 
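The (sd-merge) lines above show systemd-sysext overlaying the 'containerd-flatcar', 'docker-flatcar' and 'kubernetes' extension images onto /usr; the loop0 through loop5 capacity changes are those images being attached. A rough way to inspect the result from a booted system might look like the sketch below. The /etc/extensions and /usr/lib/extension-release.d paths follow general systemd-sysext conventions and are assumptions, not values taken from this log.

    import pathlib

    # Sketch: list the sysext images Ignition placed under /etc/extensions and the
    # extension-release files that become visible once systemd-sysext has merged them into /usr.
    for image in sorted(pathlib.Path("/etc/extensions").glob("*.raw")):
        print("extension image:", image)

    release_dir = pathlib.Path("/usr/lib/extension-release.d")
    if release_dir.is_dir():
        for release in sorted(release_dir.glob("extension-release.*")):
            # The first lines typically carry ID= / SYSEXT_LEVEL= style metadata.
            print(release.name, *release.read_text().splitlines()[:2])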
Jan 29 11:32:06.439733 systemd[1]: /usr/lib/systemd/system/docker.socket:6: ListenStream= references a path below legacy directory /var/run/, updating /var/run/docker.sock → /run/docker.sock; please update the unit file accordingly. Jan 29 11:32:06.489737 systemd[1]: Reloading finished in 240 ms. Jan 29 11:32:06.523928 systemd[1]: Finished ldconfig.service - Rebuild Dynamic Linker Cache. Jan 29 11:32:06.525588 systemd[1]: Finished systemd-sysext.service - Merge System Extension Images into /usr/ and /opt/. Jan 29 11:32:06.542407 systemd[1]: Starting ensure-sysext.service... Jan 29 11:32:06.544533 systemd[1]: Starting systemd-tmpfiles-setup.service - Create System Files and Directories... Jan 29 11:32:06.550807 systemd[1]: Reloading requested from client PID 1259 ('systemctl') (unit ensure-sysext.service)... Jan 29 11:32:06.550824 systemd[1]: Reloading... Jan 29 11:32:06.575105 systemd-tmpfiles[1260]: /usr/lib/tmpfiles.d/provision.conf:20: Duplicate line for path "/root", ignoring. Jan 29 11:32:06.575507 systemd-tmpfiles[1260]: /usr/lib/tmpfiles.d/systemd-flatcar.conf:6: Duplicate line for path "/var/log/journal", ignoring. Jan 29 11:32:06.576531 systemd-tmpfiles[1260]: /usr/lib/tmpfiles.d/systemd.conf:29: Duplicate line for path "/var/lib/systemd", ignoring. Jan 29 11:32:06.576841 systemd-tmpfiles[1260]: ACLs are not supported, ignoring. Jan 29 11:32:06.576917 systemd-tmpfiles[1260]: ACLs are not supported, ignoring. Jan 29 11:32:06.584401 systemd-tmpfiles[1260]: Detected autofs mount point /boot during canonicalization of boot. Jan 29 11:32:06.584412 systemd-tmpfiles[1260]: Skipping /boot Jan 29 11:32:06.600290 zram_generator::config[1287]: No configuration found. Jan 29 11:32:06.601601 systemd-tmpfiles[1260]: Detected autofs mount point /boot during canonicalization of boot. Jan 29 11:32:06.601679 systemd-tmpfiles[1260]: Skipping /boot Jan 29 11:32:06.718984 systemd[1]: /usr/lib/systemd/system/docker.socket:6: ListenStream= references a path below legacy directory /var/run/, updating /var/run/docker.sock → /run/docker.sock; please update the unit file accordingly. Jan 29 11:32:06.769272 systemd[1]: Reloading finished in 218 ms. Jan 29 11:32:06.790351 systemd[1]: Finished systemd-hwdb-update.service - Rebuild Hardware Database. Jan 29 11:32:06.805744 systemd[1]: Finished systemd-tmpfiles-setup.service - Create System Files and Directories. Jan 29 11:32:06.813157 systemd[1]: Starting audit-rules.service - Load Audit Rules... Jan 29 11:32:06.815863 systemd[1]: Starting clean-ca-certificates.service - Clean up broken links in /etc/ssl/certs... Jan 29 11:32:06.818309 systemd[1]: Starting systemd-journal-catalog-update.service - Rebuild Journal Catalog... Jan 29 11:32:06.822845 systemd[1]: Starting systemd-resolved.service - Network Name Resolution... Jan 29 11:32:06.826208 systemd[1]: Starting systemd-udevd.service - Rule-based Manager for Device Events and Files... Jan 29 11:32:06.831478 systemd[1]: Starting systemd-update-utmp.service - Record System Boot/Shutdown in UTMP... Jan 29 11:32:06.836354 systemd[1]: proc-xen.mount - /proc/xen was skipped because of an unmet condition check (ConditionVirtualization=xen). Jan 29 11:32:06.836521 systemd[1]: ignition-delete-config.service - Ignition (delete config) was skipped because no trigger condition checks were met. Jan 29 11:32:06.839959 systemd[1]: Starting modprobe@dm_mod.service - Load Kernel Module dm_mod... Jan 29 11:32:06.846335 systemd[1]: Starting modprobe@efi_pstore.service - Load Kernel Module efi_pstore... 
Jan 29 11:32:06.849047 systemd[1]: Starting modprobe@loop.service - Load Kernel Module loop... Jan 29 11:32:06.850461 systemd[1]: systemd-binfmt.service - Set Up Additional Binary Formats was skipped because no trigger condition checks were met. Jan 29 11:32:06.850582 systemd[1]: xenserver-pv-version.service - Set fake PV driver version for XenServer was skipped because of an unmet condition check (ConditionVirtualization=xen). Jan 29 11:32:06.851819 systemd[1]: modprobe@dm_mod.service: Deactivated successfully. Jan 29 11:32:06.852031 systemd[1]: Finished modprobe@dm_mod.service - Load Kernel Module dm_mod. Jan 29 11:32:06.854407 systemd[1]: modprobe@efi_pstore.service: Deactivated successfully. Jan 29 11:32:06.854604 systemd[1]: Finished modprobe@efi_pstore.service - Load Kernel Module efi_pstore. Jan 29 11:32:06.859784 systemd[1]: modprobe@loop.service: Deactivated successfully. Jan 29 11:32:06.860157 systemd[1]: Finished modprobe@loop.service - Load Kernel Module loop. Jan 29 11:32:06.862666 systemd[1]: Finished systemd-journal-catalog-update.service - Rebuild Journal Catalog. Jan 29 11:32:06.863087 systemd-udevd[1330]: Using default interface naming scheme 'v255'. Jan 29 11:32:06.871126 systemd[1]: proc-xen.mount - /proc/xen was skipped because of an unmet condition check (ConditionVirtualization=xen). Jan 29 11:32:06.871365 systemd[1]: ignition-delete-config.service - Ignition (delete config) was skipped because no trigger condition checks were met. Jan 29 11:32:06.879682 systemd[1]: Starting modprobe@dm_mod.service - Load Kernel Module dm_mod... Jan 29 11:32:06.881645 augenrules[1358]: No rules Jan 29 11:32:06.890865 systemd[1]: Starting modprobe@efi_pstore.service - Load Kernel Module efi_pstore... Jan 29 11:32:06.894472 systemd[1]: Starting modprobe@loop.service - Load Kernel Module loop... Jan 29 11:32:06.895889 systemd[1]: systemd-binfmt.service - Set Up Additional Binary Formats was skipped because no trigger condition checks were met. Jan 29 11:32:06.902359 systemd[1]: Starting systemd-update-done.service - Update is Completed... Jan 29 11:32:06.914137 systemd[1]: Starting systemd-userdbd.service - User Database Manager... Jan 29 11:32:06.915507 systemd[1]: xenserver-pv-version.service - Set fake PV driver version for XenServer was skipped because of an unmet condition check (ConditionVirtualization=xen). Jan 29 11:32:06.916916 systemd[1]: Started systemd-udevd.service - Rule-based Manager for Device Events and Files. Jan 29 11:32:06.919605 systemd[1]: audit-rules.service: Deactivated successfully. Jan 29 11:32:06.919892 systemd[1]: Finished audit-rules.service - Load Audit Rules. Jan 29 11:32:06.921977 systemd[1]: Finished clean-ca-certificates.service - Clean up broken links in /etc/ssl/certs. Jan 29 11:32:06.924910 systemd[1]: Finished systemd-update-utmp.service - Record System Boot/Shutdown in UTMP. Jan 29 11:32:06.926980 systemd[1]: modprobe@dm_mod.service: Deactivated successfully. Jan 29 11:32:06.927193 systemd[1]: Finished modprobe@dm_mod.service - Load Kernel Module dm_mod. Jan 29 11:32:06.929042 systemd[1]: modprobe@efi_pstore.service: Deactivated successfully. Jan 29 11:32:06.929294 systemd[1]: Finished modprobe@efi_pstore.service - Load Kernel Module efi_pstore. Jan 29 11:32:06.931205 systemd[1]: modprobe@loop.service: Deactivated successfully. Jan 29 11:32:06.931542 systemd[1]: Finished modprobe@loop.service - Load Kernel Module loop. Jan 29 11:32:06.933622 systemd[1]: Finished systemd-update-done.service - Update is Completed. 
Jan 29 11:32:06.955283 kernel: BTRFS warning: duplicate device /dev/vda3 devid 1 generation 38 scanned by (udev-worker) (1369) Jan 29 11:32:06.965554 systemd[1]: Condition check resulted in dev-ttyS0.device - /dev/ttyS0 being skipped. Jan 29 11:32:06.980787 systemd[1]: Finished ensure-sysext.service. Jan 29 11:32:06.987120 systemd[1]: proc-xen.mount - /proc/xen was skipped because of an unmet condition check (ConditionVirtualization=xen). Jan 29 11:32:06.997631 systemd[1]: Starting audit-rules.service - Load Audit Rules... Jan 29 11:32:06.999151 systemd[1]: ignition-delete-config.service - Ignition (delete config) was skipped because no trigger condition checks were met. Jan 29 11:32:07.003632 systemd[1]: Starting modprobe@dm_mod.service - Load Kernel Module dm_mod... Jan 29 11:32:07.010749 systemd[1]: Starting modprobe@drm.service - Load Kernel Module drm... Jan 29 11:32:07.015489 systemd[1]: Starting modprobe@efi_pstore.service - Load Kernel Module efi_pstore... Jan 29 11:32:07.020748 systemd[1]: Starting modprobe@loop.service - Load Kernel Module loop... Jan 29 11:32:07.022273 systemd[1]: systemd-binfmt.service - Set Up Additional Binary Formats was skipped because no trigger condition checks were met. Jan 29 11:32:07.024902 systemd[1]: Starting systemd-networkd.service - Network Configuration... Jan 29 11:32:07.045338 kernel: input: Power Button as /devices/LNXSYSTM:00/LNXPWRBN:00/input/input3 Jan 29 11:32:07.049451 systemd[1]: Starting systemd-timesyncd.service - Network Time Synchronization... Jan 29 11:32:07.053558 systemd[1]: update-ca-certificates.service - Update CA bundle at /etc/ssl/certs/ca-certificates.crt was skipped because of an unmet condition check (ConditionPathIsSymbolicLink=!/etc/ssl/certs/ca-certificates.crt). Jan 29 11:32:07.053618 systemd[1]: xenserver-pv-version.service - Set fake PV driver version for XenServer was skipped because of an unmet condition check (ConditionVirtualization=xen). Jan 29 11:32:07.054201 systemd[1]: Started systemd-userdbd.service - User Database Manager. Jan 29 11:32:07.056291 systemd[1]: modprobe@dm_mod.service: Deactivated successfully. Jan 29 11:32:07.056617 systemd[1]: Finished modprobe@dm_mod.service - Load Kernel Module dm_mod. Jan 29 11:32:07.058534 systemd[1]: modprobe@drm.service: Deactivated successfully. Jan 29 11:32:07.058777 systemd[1]: Finished modprobe@drm.service - Load Kernel Module drm. Jan 29 11:32:07.059908 kernel: ACPI: button: Power Button [PWRF] Jan 29 11:32:07.060455 systemd[1]: modprobe@efi_pstore.service: Deactivated successfully. Jan 29 11:32:07.060712 systemd[1]: Finished modprobe@efi_pstore.service - Load Kernel Module efi_pstore. Jan 29 11:32:07.062841 systemd[1]: modprobe@loop.service: Deactivated successfully. Jan 29 11:32:07.063091 systemd[1]: Finished modprobe@loop.service - Load Kernel Module loop. Jan 29 11:32:07.063673 augenrules[1400]: /sbin/augenrules: No change Jan 29 11:32:07.076833 augenrules[1434]: No rules Jan 29 11:32:07.083544 kernel: input: ImExPS/2 Generic Explorer Mouse as /devices/platform/i8042/serio1/input/input4 Jan 29 11:32:07.078460 systemd[1]: Found device dev-disk-by\x2dlabel-OEM.device - /dev/disk/by-label/OEM. Jan 29 11:32:07.084883 systemd[1]: audit-rules.service: Deactivated successfully. Jan 29 11:32:07.085187 systemd[1]: Finished audit-rules.service - Load Audit Rules. Jan 29 11:32:07.091756 systemd[1]: Starting systemd-fsck@dev-disk-by\x2dlabel-OEM.service - File System Check on /dev/disk/by-label/OEM... 
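Unit names such as dev-disk-by\x2dlabel-OEM.device and systemd-fsck@dev-disk-by\x2dlabel-OEM.service are systemd's path-escaped form of /dev/disk/by-label/OEM: slashes become dashes, and literal dashes and most other special characters become \xNN escapes. A simplified re-implementation of that mapping is sketched below; the real rules in systemd-escape cover additional corner cases.

    def escape_path(path: str) -> str:
        # Simplified sketch of systemd's path escaping (compare systemd-escape --path):
        # trim slashes, turn "/" into "-", keep ASCII alphanumerics, "_" and non-leading ".",
        # and hex-escape everything else as \xNN.
        trimmed = path.strip("/")
        out = []
        for i, ch in enumerate(trimmed):
            if ch == "/":
                out.append("-")
            elif (ch.isascii() and ch.isalnum()) or ch == "_" or (ch == "." and i > 0):
                out.append(ch)
            else:
                out.append("\\x%02x" % ord(ch))
        return "".join(out)

    print(escape_path("/dev/disk/by-label/OEM") + ".device")
    # -> dev-disk-by\x2dlabel-OEM.device, matching the unit names in the log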
Jan 29 11:32:07.093293 systemd[1]: systemd-pstore.service - Platform Persistent Storage Archival was skipped because of an unmet condition check (ConditionDirectoryNotEmpty=/sys/fs/pstore). Jan 29 11:32:07.093381 systemd[1]: systemd-repart.service - Repartition Root Disk was skipped because no trigger condition checks were met. Jan 29 11:32:07.102714 kernel: i801_smbus 0000:00:1f.3: SMBus using PCI interrupt Jan 29 11:32:07.103243 kernel: i2c i2c-0: 1/1 memory slots populated (from DMI) Jan 29 11:32:07.120386 kernel: i2c i2c-0: Memory type 0x07 not supported yet, not instantiating SPD Jan 29 11:32:07.114337 systemd[1]: Finished systemd-fsck@dev-disk-by\x2dlabel-OEM.service - File System Check on /dev/disk/by-label/OEM. Jan 29 11:32:07.138477 systemd[1]: Starting systemd-vconsole-setup.service - Virtual Console Setup... Jan 29 11:32:07.154382 kernel: mousedev: PS/2 mouse device common for all mice Jan 29 11:32:07.175348 systemd-resolved[1328]: Positive Trust Anchors: Jan 29 11:32:07.175370 systemd-resolved[1328]: . IN DS 20326 8 2 e06d44b80b8f1d39a95c0b0d7c65d08458e880409bbc683457104237c7f8ec8d Jan 29 11:32:07.175402 systemd-resolved[1328]: Negative trust anchors: home.arpa 10.in-addr.arpa 16.172.in-addr.arpa 17.172.in-addr.arpa 18.172.in-addr.arpa 19.172.in-addr.arpa 20.172.in-addr.arpa 21.172.in-addr.arpa 22.172.in-addr.arpa 23.172.in-addr.arpa 24.172.in-addr.arpa 25.172.in-addr.arpa 26.172.in-addr.arpa 27.172.in-addr.arpa 28.172.in-addr.arpa 29.172.in-addr.arpa 30.172.in-addr.arpa 31.172.in-addr.arpa 170.0.0.192.in-addr.arpa 171.0.0.192.in-addr.arpa 168.192.in-addr.arpa d.f.ip6.arpa ipv4only.arpa resolver.arpa corp home internal intranet lan local private test Jan 29 11:32:07.184593 systemd-resolved[1328]: Defaulting to hostname 'linux'. Jan 29 11:32:07.186536 systemd[1]: Started systemd-resolved.service - Network Name Resolution. Jan 29 11:32:07.186909 systemd[1]: Reached target nss-lookup.target - Host and Network Name Lookups. Jan 29 11:32:07.196101 systemd-networkd[1416]: lo: Link UP Jan 29 11:32:07.196115 systemd-networkd[1416]: lo: Gained carrier Jan 29 11:32:07.199625 systemd-networkd[1416]: Enumeration completed Jan 29 11:32:07.200061 systemd-networkd[1416]: eth0: found matching network '/usr/lib/systemd/network/zz-default.network', based on potentially unpredictable interface name. Jan 29 11:32:07.200066 systemd-networkd[1416]: eth0: Configuring with /usr/lib/systemd/network/zz-default.network. Jan 29 11:32:07.200839 systemd[1]: Started systemd-networkd.service - Network Configuration. Jan 29 11:32:07.201240 systemd[1]: Reached target network.target - Network. Jan 29 11:32:07.202026 systemd-networkd[1416]: eth0: Link UP Jan 29 11:32:07.202033 systemd-networkd[1416]: eth0: Gained carrier Jan 29 11:32:07.202047 systemd-networkd[1416]: eth0: found matching network '/usr/lib/systemd/network/zz-default.network', based on potentially unpredictable interface name. Jan 29 11:32:07.245695 systemd[1]: Starting systemd-networkd-wait-online.service - Wait for Network to be Configured... Jan 29 11:32:07.251387 systemd-networkd[1416]: eth0: DHCPv4 address 10.0.0.79/16, gateway 10.0.0.1 acquired from 10.0.0.1 Jan 29 11:32:07.252206 systemd-timesyncd[1419]: Network configuration changed, trying to establish connection. Jan 29 11:32:08.620832 systemd-resolved[1328]: Clock change detected. Flushing caches. Jan 29 11:32:08.620891 systemd-timesyncd[1419]: Contacted time server 10.0.0.1:123 (10.0.0.1). 
Jan 29 11:32:08.620934 systemd-timesyncd[1419]: Initial clock synchronization to Wed 2025-01-29 11:32:08.620788 UTC. Jan 29 11:32:08.628791 kernel: kvm_amd: TSC scaling supported Jan 29 11:32:08.628877 kernel: kvm_amd: Nested Virtualization enabled Jan 29 11:32:08.628931 kernel: kvm_amd: Nested Paging enabled Jan 29 11:32:08.628958 kernel: kvm_amd: LBR virtualization supported Jan 29 11:32:08.628983 kernel: kvm_amd: Virtual VMLOAD VMSAVE supported Jan 29 11:32:08.629006 kernel: kvm_amd: Virtual GIF supported Jan 29 11:32:08.644719 systemd[1]: Started systemd-timesyncd.service - Network Time Synchronization. Jan 29 11:32:08.647621 systemd[1]: Finished systemd-vconsole-setup.service - Virtual Console Setup. Jan 29 11:32:08.649747 systemd[1]: Reached target time-set.target - System Time Set. Jan 29 11:32:08.657783 kernel: EDAC MC: Ver: 3.0.0 Jan 29 11:32:08.690170 systemd[1]: Finished systemd-udev-settle.service - Wait for udev To Complete Device Initialization. Jan 29 11:32:08.703150 systemd[1]: Starting lvm2-activation-early.service - Activation of LVM2 logical volumes... Jan 29 11:32:08.712878 lvm[1456]: WARNING: Failed to connect to lvmetad. Falling back to device scanning. Jan 29 11:32:08.745902 systemd[1]: Finished lvm2-activation-early.service - Activation of LVM2 logical volumes. Jan 29 11:32:08.747597 systemd[1]: Reached target cryptsetup.target - Local Encrypted Volumes. Jan 29 11:32:08.748739 systemd[1]: Reached target sysinit.target - System Initialization. Jan 29 11:32:08.749922 systemd[1]: Started motdgen.path - Watch for update engine configuration changes. Jan 29 11:32:08.751260 systemd[1]: Started user-cloudinit@var-lib-flatcar\x2dinstall-user_data.path - Watch for a cloud-config at /var/lib/flatcar-install/user_data. Jan 29 11:32:08.752734 systemd[1]: Started logrotate.timer - Daily rotation of log files. Jan 29 11:32:08.754158 systemd[1]: Started mdadm.timer - Weekly check for MD array's redundancy information.. Jan 29 11:32:08.755482 systemd[1]: Started systemd-tmpfiles-clean.timer - Daily Cleanup of Temporary Directories. Jan 29 11:32:08.756832 systemd[1]: update-engine-stub.timer - Update Engine Stub Timer was skipped because of an unmet condition check (ConditionPathExists=/usr/.noupdate). Jan 29 11:32:08.756868 systemd[1]: Reached target paths.target - Path Units. Jan 29 11:32:08.757850 systemd[1]: Reached target timers.target - Timer Units. Jan 29 11:32:08.759730 systemd[1]: Listening on dbus.socket - D-Bus System Message Bus Socket. Jan 29 11:32:08.762746 systemd[1]: Starting docker.socket - Docker Socket for the API... Jan 29 11:32:08.773235 systemd[1]: Listening on sshd.socket - OpenSSH Server Socket. Jan 29 11:32:08.775811 systemd[1]: Starting lvm2-activation.service - Activation of LVM2 logical volumes... Jan 29 11:32:08.777431 systemd[1]: Listening on docker.socket - Docker Socket for the API. Jan 29 11:32:08.778666 systemd[1]: Reached target sockets.target - Socket Units. Jan 29 11:32:08.779695 systemd[1]: Reached target basic.target - Basic System. Jan 29 11:32:08.780729 systemd[1]: addon-config@oem.service - Configure Addon /oem was skipped because no trigger condition checks were met. Jan 29 11:32:08.780778 systemd[1]: addon-run@oem.service - Run Addon /oem was skipped because no trigger condition checks were met. Jan 29 11:32:08.781840 systemd[1]: Starting containerd.service - containerd container runtime... Jan 29 11:32:08.784054 systemd[1]: Starting dbus.service - D-Bus System Message Bus... 
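The jump from the 11:32:07.25 timestamps to 11:32:08.62 around the "Clock change detected. Flushing caches." message above is systemd-timesyncd stepping the wall clock after contacting 10.0.0.1:123. The gap between those two log entries gives a rough upper bound on the size of the step, since some real time also elapsed between them; it can be checked directly from the timestamps in the log.

    from datetime import datetime

    fmt = "%b %d %H:%M:%S.%f"
    before_step = datetime.strptime("Jan 29 11:32:07.252206", fmt)  # last entry before the clock change
    after_step  = datetime.strptime("Jan 29 11:32:08.620832", fmt)  # "Clock change detected. Flushing caches."
    gap = (after_step - before_step).total_seconds()
    print(f"wall clock moved forward by at most ~{gap:.3f} s")       # ~1.369 s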
Jan 29 11:32:08.786767 lvm[1460]: WARNING: Failed to connect to lvmetad. Falling back to device scanning. Jan 29 11:32:08.788865 systemd[1]: Starting enable-oem-cloudinit.service - Enable cloudinit... Jan 29 11:32:08.792565 systemd[1]: Starting extend-filesystems.service - Extend Filesystems... Jan 29 11:32:08.793694 systemd[1]: flatcar-setup-environment.service - Modifies /etc/environment for CoreOS was skipped because of an unmet condition check (ConditionPathExists=/oem/bin/flatcar-setup-environment). Jan 29 11:32:08.797427 jq[1463]: false Jan 29 11:32:08.797911 systemd[1]: Starting motdgen.service - Generate /run/flatcar/motd... Jan 29 11:32:08.800717 systemd[1]: Starting ssh-key-proc-cmdline.service - Install an ssh key from /proc/cmdline... Jan 29 11:32:08.803337 systemd[1]: Starting sshd-keygen.service - Generate sshd host keys... Jan 29 11:32:08.809946 systemd[1]: Starting systemd-logind.service - User Login Management... Jan 29 11:32:08.811632 systemd[1]: tcsd.service - TCG Core Services Daemon was skipped because of an unmet condition check (ConditionPathExists=/dev/tpm0). Jan 29 11:32:08.812249 systemd[1]: cgroup compatibility translation between legacy and unified hierarchy settings activated. See cgroup-compat debug messages for details. Jan 29 11:32:08.814157 systemd[1]: Starting update-engine.service - Update Engine... Jan 29 11:32:08.815309 dbus-daemon[1462]: [system] SELinux support is enabled Jan 29 11:32:08.817839 systemd[1]: Starting update-ssh-keys-after-ignition.service - Run update-ssh-keys once after Ignition... Jan 29 11:32:08.819607 systemd[1]: Started dbus.service - D-Bus System Message Bus. Jan 29 11:32:08.823883 systemd[1]: Finished lvm2-activation.service - Activation of LVM2 logical volumes. Jan 29 11:32:08.828146 jq[1474]: true Jan 29 11:32:08.828874 systemd[1]: enable-oem-cloudinit.service: Skipped due to 'exec-condition'. Jan 29 11:32:08.829098 systemd[1]: Condition check resulted in enable-oem-cloudinit.service - Enable cloudinit being skipped. Jan 29 11:32:08.830240 extend-filesystems[1464]: Found loop3 Jan 29 11:32:08.830240 extend-filesystems[1464]: Found loop4 Jan 29 11:32:08.830240 extend-filesystems[1464]: Found loop5 Jan 29 11:32:08.830240 extend-filesystems[1464]: Found sr0 Jan 29 11:32:08.830240 extend-filesystems[1464]: Found vda Jan 29 11:32:08.830240 extend-filesystems[1464]: Found vda1 Jan 29 11:32:08.829477 systemd[1]: ssh-key-proc-cmdline.service: Deactivated successfully. Jan 29 11:32:08.836480 extend-filesystems[1464]: Found vda2 Jan 29 11:32:08.836480 extend-filesystems[1464]: Found vda3 Jan 29 11:32:08.836480 extend-filesystems[1464]: Found usr Jan 29 11:32:08.836480 extend-filesystems[1464]: Found vda4 Jan 29 11:32:08.836480 extend-filesystems[1464]: Found vda6 Jan 29 11:32:08.836480 extend-filesystems[1464]: Found vda7 Jan 29 11:32:08.836480 extend-filesystems[1464]: Found vda9 Jan 29 11:32:08.836480 extend-filesystems[1464]: Checking size of /dev/vda9 Jan 29 11:32:08.829675 systemd[1]: Finished ssh-key-proc-cmdline.service - Install an ssh key from /proc/cmdline. Jan 29 11:32:08.832890 systemd[1]: motdgen.service: Deactivated successfully. Jan 29 11:32:08.833113 systemd[1]: Finished motdgen.service - Generate /run/flatcar/motd. 
Jan 29 11:32:08.853083 jq[1482]: true Jan 29 11:32:08.855717 update_engine[1471]: I20250129 11:32:08.855630 1471 main.cc:92] Flatcar Update Engine starting Jan 29 11:32:08.856812 (ntainerd)[1484]: containerd.service: Referenced but unset environment variable evaluates to an empty string: TORCX_IMAGEDIR, TORCX_UNPACKDIR Jan 29 11:32:08.857131 update_engine[1471]: I20250129 11:32:08.856937 1471 update_check_scheduler.cc:74] Next update check in 7m41s Jan 29 11:32:08.862016 systemd[1]: system-cloudinit@usr-share-oem-cloud\x2dconfig.yml.service - Load cloud-config from /usr/share/oem/cloud-config.yml was skipped because of an unmet condition check (ConditionFileNotEmpty=/usr/share/oem/cloud-config.yml). Jan 29 11:32:08.862086 systemd[1]: Reached target system-config.target - Load system-provided cloud configs. Jan 29 11:32:08.863664 systemd[1]: user-cloudinit-proc-cmdline.service - Load cloud-config from url defined in /proc/cmdline was skipped because of an unmet condition check (ConditionKernelCommandLine=cloud-config-url). Jan 29 11:32:08.863691 systemd[1]: Reached target user-config.target - Load user-provided cloud configs. Jan 29 11:32:08.865458 systemd[1]: Started update-engine.service - Update Engine. Jan 29 11:32:08.870273 extend-filesystems[1464]: Resized partition /dev/vda9 Jan 29 11:32:08.873987 systemd[1]: Started locksmithd.service - Cluster reboot manager. Jan 29 11:32:08.877945 extend-filesystems[1498]: resize2fs 1.47.1 (20-May-2024) Jan 29 11:32:08.885783 kernel: BTRFS warning: duplicate device /dev/vda3 devid 1 generation 38 scanned by (udev-worker) (1369) Jan 29 11:32:08.917579 systemd-logind[1469]: Watching system buttons on /dev/input/event1 (Power Button) Jan 29 11:32:08.917612 systemd-logind[1469]: Watching system buttons on /dev/input/event0 (AT Translated Set 2 keyboard) Jan 29 11:32:08.918941 systemd-logind[1469]: New seat seat0. Jan 29 11:32:08.919824 systemd[1]: Started systemd-logind.service - User Login Management. Jan 29 11:32:08.949024 sshd_keygen[1481]: ssh-keygen: generating new host keys: RSA ECDSA ED25519 Jan 29 11:32:08.956865 locksmithd[1496]: locksmithd starting currentOperation="UPDATE_STATUS_IDLE" strategy="reboot" Jan 29 11:32:08.973981 systemd[1]: Finished sshd-keygen.service - Generate sshd host keys. Jan 29 11:32:08.994186 systemd[1]: Starting issuegen.service - Generate /run/issue... Jan 29 11:32:08.994785 kernel: EXT4-fs (vda9): resizing filesystem from 553472 to 1864699 blocks Jan 29 11:32:09.001914 systemd[1]: issuegen.service: Deactivated successfully. Jan 29 11:32:09.002231 systemd[1]: Finished issuegen.service - Generate /run/issue. Jan 29 11:32:09.005400 systemd[1]: Starting systemd-user-sessions.service - Permit User Sessions... Jan 29 11:32:09.044681 systemd[1]: Finished systemd-user-sessions.service - Permit User Sessions. Jan 29 11:32:09.053030 systemd[1]: Started getty@tty1.service - Getty on tty1. Jan 29 11:32:09.056139 systemd[1]: Started serial-getty@ttyS0.service - Serial Getty on ttyS0. Jan 29 11:32:09.058290 systemd[1]: Reached target getty.target - Login Prompts. Jan 29 11:32:09.081843 kernel: EXT4-fs (vda9): resized filesystem to 1864699 Jan 29 11:32:09.140418 extend-filesystems[1498]: Filesystem at /dev/vda9 is mounted on /; on-line resizing required Jan 29 11:32:09.140418 extend-filesystems[1498]: old_desc_blocks = 1, new_desc_blocks = 1 Jan 29 11:32:09.140418 extend-filesystems[1498]: The filesystem on /dev/vda9 is now 1864699 (4k) blocks long. 
Jan 29 11:32:09.146160 extend-filesystems[1464]: Resized filesystem in /dev/vda9 Jan 29 11:32:09.142698 systemd[1]: extend-filesystems.service: Deactivated successfully. Jan 29 11:32:09.143043 systemd[1]: Finished extend-filesystems.service - Extend Filesystems. Jan 29 11:32:09.149913 bash[1512]: Updated "/home/core/.ssh/authorized_keys" Jan 29 11:32:09.152714 systemd[1]: Finished update-ssh-keys-after-ignition.service - Run update-ssh-keys once after Ignition. Jan 29 11:32:09.154947 systemd[1]: sshkeys.service was skipped because no trigger condition checks were met. Jan 29 11:32:09.196022 containerd[1484]: time="2025-01-29T11:32:09.195929614Z" level=info msg="starting containerd" revision=9b2ad7760328148397346d10c7b2004271249db4 version=v1.7.23 Jan 29 11:32:09.223278 containerd[1484]: time="2025-01-29T11:32:09.223226756Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.aufs\"..." type=io.containerd.snapshotter.v1 Jan 29 11:32:09.225060 containerd[1484]: time="2025-01-29T11:32:09.225010821Z" level=info msg="skip loading plugin \"io.containerd.snapshotter.v1.aufs\"..." error="aufs is not supported (modprobe aufs failed: exit status 1 \"modprobe: FATAL: Module aufs not found in directory /lib/modules/6.6.74-flatcar\\n\"): skip plugin" type=io.containerd.snapshotter.v1 Jan 29 11:32:09.225060 containerd[1484]: time="2025-01-29T11:32:09.225040597Z" level=info msg="loading plugin \"io.containerd.event.v1.exchange\"..." type=io.containerd.event.v1 Jan 29 11:32:09.225138 containerd[1484]: time="2025-01-29T11:32:09.225063851Z" level=info msg="loading plugin \"io.containerd.internal.v1.opt\"..." type=io.containerd.internal.v1 Jan 29 11:32:09.225259 containerd[1484]: time="2025-01-29T11:32:09.225237557Z" level=info msg="loading plugin \"io.containerd.warning.v1.deprecations\"..." type=io.containerd.warning.v1 Jan 29 11:32:09.225280 containerd[1484]: time="2025-01-29T11:32:09.225257374Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.blockfile\"..." type=io.containerd.snapshotter.v1 Jan 29 11:32:09.225339 containerd[1484]: time="2025-01-29T11:32:09.225323758Z" level=info msg="skip loading plugin \"io.containerd.snapshotter.v1.blockfile\"..." error="no scratch file generator: skip plugin" type=io.containerd.snapshotter.v1 Jan 29 11:32:09.225365 containerd[1484]: time="2025-01-29T11:32:09.225338626Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.btrfs\"..." type=io.containerd.snapshotter.v1 Jan 29 11:32:09.225549 containerd[1484]: time="2025-01-29T11:32:09.225526959Z" level=info msg="skip loading plugin \"io.containerd.snapshotter.v1.btrfs\"..." error="path /var/lib/containerd/io.containerd.snapshotter.v1.btrfs (ext4) must be a btrfs filesystem to be used with the btrfs snapshotter: skip plugin" type=io.containerd.snapshotter.v1 Jan 29 11:32:09.225549 containerd[1484]: time="2025-01-29T11:32:09.225545244Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.devmapper\"..." type=io.containerd.snapshotter.v1 Jan 29 11:32:09.225585 containerd[1484]: time="2025-01-29T11:32:09.225559020Z" level=info msg="skip loading plugin \"io.containerd.snapshotter.v1.devmapper\"..." error="devmapper not configured: skip plugin" type=io.containerd.snapshotter.v1 Jan 29 11:32:09.225585 containerd[1484]: time="2025-01-29T11:32:09.225568267Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.native\"..." 
type=io.containerd.snapshotter.v1 Jan 29 11:32:09.225675 containerd[1484]: time="2025-01-29T11:32:09.225661241Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.overlayfs\"..." type=io.containerd.snapshotter.v1 Jan 29 11:32:09.225955 containerd[1484]: time="2025-01-29T11:32:09.225930997Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.zfs\"..." type=io.containerd.snapshotter.v1 Jan 29 11:32:09.226072 containerd[1484]: time="2025-01-29T11:32:09.226056222Z" level=info msg="skip loading plugin \"io.containerd.snapshotter.v1.zfs\"..." error="path /var/lib/containerd/io.containerd.snapshotter.v1.zfs must be a zfs filesystem to be used with the zfs snapshotter: skip plugin" type=io.containerd.snapshotter.v1 Jan 29 11:32:09.226093 containerd[1484]: time="2025-01-29T11:32:09.226071050Z" level=info msg="loading plugin \"io.containerd.content.v1.content\"..." type=io.containerd.content.v1 Jan 29 11:32:09.226182 containerd[1484]: time="2025-01-29T11:32:09.226168573Z" level=info msg="loading plugin \"io.containerd.metadata.v1.bolt\"..." type=io.containerd.metadata.v1 Jan 29 11:32:09.226236 containerd[1484]: time="2025-01-29T11:32:09.226224427Z" level=info msg="metadata content store policy set" policy=shared Jan 29 11:32:09.269632 containerd[1484]: time="2025-01-29T11:32:09.269579624Z" level=info msg="loading plugin \"io.containerd.gc.v1.scheduler\"..." type=io.containerd.gc.v1 Jan 29 11:32:09.269714 containerd[1484]: time="2025-01-29T11:32:09.269644195Z" level=info msg="loading plugin \"io.containerd.differ.v1.walking\"..." type=io.containerd.differ.v1 Jan 29 11:32:09.269714 containerd[1484]: time="2025-01-29T11:32:09.269660015Z" level=info msg="loading plugin \"io.containerd.lease.v1.manager\"..." type=io.containerd.lease.v1 Jan 29 11:32:09.270420 containerd[1484]: time="2025-01-29T11:32:09.269942414Z" level=info msg="loading plugin \"io.containerd.streaming.v1.manager\"..." type=io.containerd.streaming.v1 Jan 29 11:32:09.270420 containerd[1484]: time="2025-01-29T11:32:09.270099088Z" level=info msg="loading plugin \"io.containerd.runtime.v1.linux\"..." type=io.containerd.runtime.v1 Jan 29 11:32:09.270420 containerd[1484]: time="2025-01-29T11:32:09.270301578Z" level=info msg="loading plugin \"io.containerd.monitor.v1.cgroups\"..." type=io.containerd.monitor.v1 Jan 29 11:32:09.270928 containerd[1484]: time="2025-01-29T11:32:09.270854405Z" level=info msg="loading plugin \"io.containerd.runtime.v2.task\"..." type=io.containerd.runtime.v2 Jan 29 11:32:09.271304 containerd[1484]: time="2025-01-29T11:32:09.271257891Z" level=info msg="loading plugin \"io.containerd.runtime.v2.shim\"..." type=io.containerd.runtime.v2 Jan 29 11:32:09.271304 containerd[1484]: time="2025-01-29T11:32:09.271285142Z" level=info msg="loading plugin \"io.containerd.sandbox.store.v1.local\"..." type=io.containerd.sandbox.store.v1 Jan 29 11:32:09.271304 containerd[1484]: time="2025-01-29T11:32:09.271304749Z" level=info msg="loading plugin \"io.containerd.sandbox.controller.v1.local\"..." type=io.containerd.sandbox.controller.v1 Jan 29 11:32:09.271419 containerd[1484]: time="2025-01-29T11:32:09.271321861Z" level=info msg="loading plugin \"io.containerd.service.v1.containers-service\"..." type=io.containerd.service.v1 Jan 29 11:32:09.271419 containerd[1484]: time="2025-01-29T11:32:09.271336348Z" level=info msg="loading plugin \"io.containerd.service.v1.content-service\"..." 
type=io.containerd.service.v1 Jan 29 11:32:09.271419 containerd[1484]: time="2025-01-29T11:32:09.271349804Z" level=info msg="loading plugin \"io.containerd.service.v1.diff-service\"..." type=io.containerd.service.v1 Jan 29 11:32:09.271419 containerd[1484]: time="2025-01-29T11:32:09.271369621Z" level=info msg="loading plugin \"io.containerd.service.v1.images-service\"..." type=io.containerd.service.v1 Jan 29 11:32:09.271419 containerd[1484]: time="2025-01-29T11:32:09.271384479Z" level=info msg="loading plugin \"io.containerd.service.v1.introspection-service\"..." type=io.containerd.service.v1 Jan 29 11:32:09.271419 containerd[1484]: time="2025-01-29T11:32:09.271397303Z" level=info msg="loading plugin \"io.containerd.service.v1.namespaces-service\"..." type=io.containerd.service.v1 Jan 29 11:32:09.271419 containerd[1484]: time="2025-01-29T11:32:09.271409836Z" level=info msg="loading plugin \"io.containerd.service.v1.snapshots-service\"..." type=io.containerd.service.v1 Jan 29 11:32:09.271419 containerd[1484]: time="2025-01-29T11:32:09.271422700Z" level=info msg="loading plugin \"io.containerd.service.v1.tasks-service\"..." type=io.containerd.service.v1 Jan 29 11:32:09.271613 containerd[1484]: time="2025-01-29T11:32:09.271443810Z" level=info msg="loading plugin \"io.containerd.grpc.v1.containers\"..." type=io.containerd.grpc.v1 Jan 29 11:32:09.271613 containerd[1484]: time="2025-01-29T11:32:09.271458267Z" level=info msg="loading plugin \"io.containerd.grpc.v1.content\"..." type=io.containerd.grpc.v1 Jan 29 11:32:09.271613 containerd[1484]: time="2025-01-29T11:32:09.271471672Z" level=info msg="loading plugin \"io.containerd.grpc.v1.diff\"..." type=io.containerd.grpc.v1 Jan 29 11:32:09.271613 containerd[1484]: time="2025-01-29T11:32:09.271484176Z" level=info msg="loading plugin \"io.containerd.grpc.v1.events\"..." type=io.containerd.grpc.v1 Jan 29 11:32:09.271613 containerd[1484]: time="2025-01-29T11:32:09.271498112Z" level=info msg="loading plugin \"io.containerd.grpc.v1.images\"..." type=io.containerd.grpc.v1 Jan 29 11:32:09.271613 containerd[1484]: time="2025-01-29T11:32:09.271511567Z" level=info msg="loading plugin \"io.containerd.grpc.v1.introspection\"..." type=io.containerd.grpc.v1 Jan 29 11:32:09.271613 containerd[1484]: time="2025-01-29T11:32:09.271523880Z" level=info msg="loading plugin \"io.containerd.grpc.v1.leases\"..." type=io.containerd.grpc.v1 Jan 29 11:32:09.271613 containerd[1484]: time="2025-01-29T11:32:09.271537696Z" level=info msg="loading plugin \"io.containerd.grpc.v1.namespaces\"..." type=io.containerd.grpc.v1 Jan 29 11:32:09.271613 containerd[1484]: time="2025-01-29T11:32:09.271550750Z" level=info msg="loading plugin \"io.containerd.grpc.v1.sandbox-controllers\"..." type=io.containerd.grpc.v1 Jan 29 11:32:09.271613 containerd[1484]: time="2025-01-29T11:32:09.271566690Z" level=info msg="loading plugin \"io.containerd.grpc.v1.sandboxes\"..." type=io.containerd.grpc.v1 Jan 29 11:32:09.271613 containerd[1484]: time="2025-01-29T11:32:09.271579945Z" level=info msg="loading plugin \"io.containerd.grpc.v1.snapshots\"..." type=io.containerd.grpc.v1 Jan 29 11:32:09.271613 containerd[1484]: time="2025-01-29T11:32:09.271591717Z" level=info msg="loading plugin \"io.containerd.grpc.v1.streaming\"..." type=io.containerd.grpc.v1 Jan 29 11:32:09.271613 containerd[1484]: time="2025-01-29T11:32:09.271605042Z" level=info msg="loading plugin \"io.containerd.grpc.v1.tasks\"..." 
type=io.containerd.grpc.v1 Jan 29 11:32:09.271613 containerd[1484]: time="2025-01-29T11:32:09.271618768Z" level=info msg="loading plugin \"io.containerd.transfer.v1.local\"..." type=io.containerd.transfer.v1 Jan 29 11:32:09.271984 containerd[1484]: time="2025-01-29T11:32:09.271642002Z" level=info msg="loading plugin \"io.containerd.grpc.v1.transfer\"..." type=io.containerd.grpc.v1 Jan 29 11:32:09.271984 containerd[1484]: time="2025-01-29T11:32:09.271655216Z" level=info msg="loading plugin \"io.containerd.grpc.v1.version\"..." type=io.containerd.grpc.v1 Jan 29 11:32:09.271984 containerd[1484]: time="2025-01-29T11:32:09.271669513Z" level=info msg="loading plugin \"io.containerd.internal.v1.restart\"..." type=io.containerd.internal.v1 Jan 29 11:32:09.271984 containerd[1484]: time="2025-01-29T11:32:09.271728293Z" level=info msg="loading plugin \"io.containerd.tracing.processor.v1.otlp\"..." type=io.containerd.tracing.processor.v1 Jan 29 11:32:09.271984 containerd[1484]: time="2025-01-29T11:32:09.271772136Z" level=info msg="skip loading plugin \"io.containerd.tracing.processor.v1.otlp\"..." error="skip plugin: tracing endpoint not configured" type=io.containerd.tracing.processor.v1 Jan 29 11:32:09.271984 containerd[1484]: time="2025-01-29T11:32:09.271786723Z" level=info msg="loading plugin \"io.containerd.internal.v1.tracing\"..." type=io.containerd.internal.v1 Jan 29 11:32:09.271984 containerd[1484]: time="2025-01-29T11:32:09.271800479Z" level=info msg="skip loading plugin \"io.containerd.internal.v1.tracing\"..." error="skip plugin: tracing endpoint not configured" type=io.containerd.internal.v1 Jan 29 11:32:09.271984 containerd[1484]: time="2025-01-29T11:32:09.271810217Z" level=info msg="loading plugin \"io.containerd.grpc.v1.healthcheck\"..." type=io.containerd.grpc.v1 Jan 29 11:32:09.271984 containerd[1484]: time="2025-01-29T11:32:09.271823712Z" level=info msg="loading plugin \"io.containerd.nri.v1.nri\"..." type=io.containerd.nri.v1 Jan 29 11:32:09.271984 containerd[1484]: time="2025-01-29T11:32:09.271834493Z" level=info msg="NRI interface is disabled by configuration." Jan 29 11:32:09.271984 containerd[1484]: time="2025-01-29T11:32:09.271845092Z" level=info msg="loading plugin \"io.containerd.grpc.v1.cri\"..." 
type=io.containerd.grpc.v1 Jan 29 11:32:09.272280 containerd[1484]: time="2025-01-29T11:32:09.272120689Z" level=info msg="Start cri plugin with config {PluginConfig:{ContainerdConfig:{Snapshotter:overlayfs DefaultRuntimeName:runc DefaultRuntime:{Type: Path: Engine: PodAnnotations:[] ContainerAnnotations:[] Root: Options:map[] PrivilegedWithoutHostDevices:false PrivilegedWithoutHostDevicesAllDevicesAllowed:false BaseRuntimeSpec: NetworkPluginConfDir: NetworkPluginMaxConfNum:0 Snapshotter: SandboxMode:} UntrustedWorkloadRuntime:{Type: Path: Engine: PodAnnotations:[] ContainerAnnotations:[] Root: Options:map[] PrivilegedWithoutHostDevices:false PrivilegedWithoutHostDevicesAllDevicesAllowed:false BaseRuntimeSpec: NetworkPluginConfDir: NetworkPluginMaxConfNum:0 Snapshotter: SandboxMode:} Runtimes:map[runc:{Type:io.containerd.runc.v2 Path: Engine: PodAnnotations:[] ContainerAnnotations:[] Root: Options:map[SystemdCgroup:true] PrivilegedWithoutHostDevices:false PrivilegedWithoutHostDevicesAllDevicesAllowed:false BaseRuntimeSpec: NetworkPluginConfDir: NetworkPluginMaxConfNum:0 Snapshotter: SandboxMode:podsandbox}] NoPivot:false DisableSnapshotAnnotations:true DiscardUnpackedLayers:false IgnoreBlockIONotEnabledErrors:false IgnoreRdtNotEnabledErrors:false} CniConfig:{NetworkPluginBinDir:/opt/cni/bin NetworkPluginConfDir:/etc/cni/net.d NetworkPluginMaxConfNum:1 NetworkPluginSetupSerially:false NetworkPluginConfTemplate: IPPreference:} Registry:{ConfigPath: Mirrors:map[] Configs:map[] Auths:map[] Headers:map[]} ImageDecryption:{KeyModel:node} DisableTCPService:true StreamServerAddress:127.0.0.1 StreamServerPort:0 StreamIdleTimeout:4h0m0s EnableSelinux:true SelinuxCategoryRange:1024 SandboxImage:registry.k8s.io/pause:3.8 StatsCollectPeriod:10 SystemdCgroup:false EnableTLSStreaming:false X509KeyPairStreaming:{TLSCertFile: TLSKeyFile:} MaxContainerLogLineSize:16384 DisableCgroup:false DisableApparmor:false RestrictOOMScoreAdj:false MaxConcurrentDownloads:3 DisableProcMount:false UnsetSeccompProfile: TolerateMissingHugetlbController:true DisableHugetlbController:true DeviceOwnershipFromSecurityContext:false IgnoreImageDefinedVolumes:false NetNSMountsUnderStateDir:false EnableUnprivilegedPorts:false EnableUnprivilegedICMP:false EnableCDI:false CDISpecDirs:[/etc/cdi /var/run/cdi] ImagePullProgressTimeout:5m0s DrainExecSyncIOTimeout:0s ImagePullWithSyncFs:false IgnoreDeprecationWarnings:[]} ContainerdRootDir:/var/lib/containerd ContainerdEndpoint:/run/containerd/containerd.sock RootDir:/var/lib/containerd/io.containerd.grpc.v1.cri StateDir:/run/containerd/io.containerd.grpc.v1.cri}" Jan 29 11:32:09.272280 containerd[1484]: time="2025-01-29T11:32:09.272163339Z" level=info msg="Connect containerd service" Jan 29 11:32:09.272280 containerd[1484]: time="2025-01-29T11:32:09.272198054Z" level=info msg="using legacy CRI server" Jan 29 11:32:09.272280 containerd[1484]: time="2025-01-29T11:32:09.272205759Z" level=info msg="using experimental NRI integration - disable nri plugin to prevent this" Jan 29 11:32:09.272528 containerd[1484]: time="2025-01-29T11:32:09.272349478Z" level=info msg="Get image filesystem path \"/var/lib/containerd/io.containerd.snapshotter.v1.overlayfs\"" Jan 29 11:32:09.273053 containerd[1484]: time="2025-01-29T11:32:09.273004938Z" level=error msg="failed to load cni during init, please check CRI plugin status before setting up network for pods" error="cni config load failed: no network config found in /etc/cni/net.d: cni plugin not initialized: failed to load cni config" Jan 29 11:32:09.273192 
containerd[1484]: time="2025-01-29T11:32:09.273140732Z" level=info msg="Start subscribing containerd event" Jan 29 11:32:09.273232 containerd[1484]: time="2025-01-29T11:32:09.273207918Z" level=info msg="Start recovering state" Jan 29 11:32:09.273409 containerd[1484]: time="2025-01-29T11:32:09.273369141Z" level=info msg=serving... address=/run/containerd/containerd.sock.ttrpc Jan 29 11:32:09.273409 containerd[1484]: time="2025-01-29T11:32:09.273371275Z" level=info msg="Start event monitor" Jan 29 11:32:09.273492 containerd[1484]: time="2025-01-29T11:32:09.273418173Z" level=info msg="Start snapshots syncer" Jan 29 11:32:09.273492 containerd[1484]: time="2025-01-29T11:32:09.273433181Z" level=info msg="Start cni network conf syncer for default" Jan 29 11:32:09.273492 containerd[1484]: time="2025-01-29T11:32:09.273445404Z" level=info msg="Start streaming server" Jan 29 11:32:09.273561 containerd[1484]: time="2025-01-29T11:32:09.273438661Z" level=info msg=serving... address=/run/containerd/containerd.sock Jan 29 11:32:09.273583 containerd[1484]: time="2025-01-29T11:32:09.273568514Z" level=info msg="containerd successfully booted in 0.079223s" Jan 29 11:32:09.273725 systemd[1]: Started containerd.service - containerd container runtime. Jan 29 11:32:10.419919 systemd-networkd[1416]: eth0: Gained IPv6LL Jan 29 11:32:10.423337 systemd[1]: Finished systemd-networkd-wait-online.service - Wait for Network to be Configured. Jan 29 11:32:10.425341 systemd[1]: Reached target network-online.target - Network is Online. Jan 29 11:32:10.434002 systemd[1]: Starting coreos-metadata.service - QEMU metadata agent... Jan 29 11:32:10.436687 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent... Jan 29 11:32:10.439051 systemd[1]: Starting nvidia.service - NVIDIA Configure Service... Jan 29 11:32:10.460486 systemd[1]: coreos-metadata.service: Deactivated successfully. Jan 29 11:32:10.460728 systemd[1]: Finished coreos-metadata.service - QEMU metadata agent. Jan 29 11:32:10.462449 systemd[1]: packet-phone-home.service - Report Success to Packet was skipped because no trigger condition checks were met. Jan 29 11:32:10.466601 systemd[1]: Finished nvidia.service - NVIDIA Configure Service. Jan 29 11:32:11.057510 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent. Jan 29 11:32:11.059371 systemd[1]: Reached target multi-user.target - Multi-User System. Jan 29 11:32:11.061546 systemd[1]: Startup finished in 1.023s (kernel) + 7.337s (initrd) + 4.560s (userspace) = 12.921s. Jan 29 11:32:11.074516 (kubelet)[1567]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS, KUBELET_KUBEADM_ARGS Jan 29 11:32:11.492634 kubelet[1567]: E0129 11:32:11.492486 1567 run.go:74] "command failed" err="failed to load kubelet config file, path: /var/lib/kubelet/config.yaml, error: failed to load Kubelet config file /var/lib/kubelet/config.yaml, error failed to read kubelet config file \"/var/lib/kubelet/config.yaml\", error: open /var/lib/kubelet/config.yaml: no such file or directory" Jan 29 11:32:11.496473 systemd[1]: kubelet.service: Main process exited, code=exited, status=1/FAILURE Jan 29 11:32:11.496685 systemd[1]: kubelet.service: Failed with result 'exit-code'. Jan 29 11:32:13.908305 systemd[1]: Created slice system-sshd.slice - Slice /system/sshd. Jan 29 11:32:13.909654 systemd[1]: Started sshd@0-10.0.0.79:22-10.0.0.1:56698.service - OpenSSH per-connection server daemon (10.0.0.1:56698). 
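Note: the kubelet exit logged above is expected on a first boot; /var/lib/kubelet/config.yaml does not exist until kubeadm (or an operator) writes it, so the unit fails and is restarted later. For reference only, a minimal KubeletConfiguration of the kind that would satisfy that check is sketched here; every value is an illustrative assumption and not taken from this host.

# /var/lib/kubelet/config.yaml   (illustrative sketch; the real file is generated later and will differ)
apiVersion: kubelet.config.k8s.io/v1beta1
kind: KubeletConfiguration
cgroupDriver: systemd                     # matches the SystemdCgroup=true runc option containerd logs above
staticPodPath: /etc/kubernetes/manifests  # the path the kubelet is seen polling later in this log
clusterDomain: cluster.local
clusterDNS:
  - 10.96.0.10                            # assumed cluster DNS service IP (kubeadm default), not read from this host
authentication:
  anonymous:
    enabled: false
  webhook:
    enabled: true
  x509:
    clientCAFile: /etc/kubernetes/pki/ca.crt
authorization:
  mode: Webhook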
Jan 29 11:32:13.959459 sshd[1581]: Accepted publickey for core from 10.0.0.1 port 56698 ssh2: RSA SHA256:vglZfOE0APgUpJbg1gfFAEfTfpzlM1a6LSSiwuQWd4A Jan 29 11:32:13.961512 sshd-session[1581]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Jan 29 11:32:13.971041 systemd-logind[1469]: New session 1 of user core. Jan 29 11:32:13.972369 systemd[1]: Created slice user-500.slice - User Slice of UID 500. Jan 29 11:32:13.984048 systemd[1]: Starting user-runtime-dir@500.service - User Runtime Directory /run/user/500... Jan 29 11:32:13.996108 systemd[1]: Finished user-runtime-dir@500.service - User Runtime Directory /run/user/500. Jan 29 11:32:13.998836 systemd[1]: Starting user@500.service - User Manager for UID 500... Jan 29 11:32:14.007641 (systemd)[1585]: pam_unix(systemd-user:session): session opened for user core(uid=500) by (uid=0) Jan 29 11:32:14.123120 systemd[1585]: Queued start job for default target default.target. Jan 29 11:32:14.134125 systemd[1585]: Created slice app.slice - User Application Slice. Jan 29 11:32:14.134153 systemd[1585]: Reached target paths.target - Paths. Jan 29 11:32:14.134167 systemd[1585]: Reached target timers.target - Timers. Jan 29 11:32:14.135790 systemd[1585]: Starting dbus.socket - D-Bus User Message Bus Socket... Jan 29 11:32:14.147508 systemd[1585]: Listening on dbus.socket - D-Bus User Message Bus Socket. Jan 29 11:32:14.147645 systemd[1585]: Reached target sockets.target - Sockets. Jan 29 11:32:14.147664 systemd[1585]: Reached target basic.target - Basic System. Jan 29 11:32:14.147704 systemd[1585]: Reached target default.target - Main User Target. Jan 29 11:32:14.147741 systemd[1585]: Startup finished in 133ms. Jan 29 11:32:14.148307 systemd[1]: Started user@500.service - User Manager for UID 500. Jan 29 11:32:14.150185 systemd[1]: Started session-1.scope - Session 1 of User core. Jan 29 11:32:14.212641 systemd[1]: Started sshd@1-10.0.0.79:22-10.0.0.1:56712.service - OpenSSH per-connection server daemon (10.0.0.1:56712). Jan 29 11:32:14.257182 sshd[1596]: Accepted publickey for core from 10.0.0.1 port 56712 ssh2: RSA SHA256:vglZfOE0APgUpJbg1gfFAEfTfpzlM1a6LSSiwuQWd4A Jan 29 11:32:14.259130 sshd-session[1596]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Jan 29 11:32:14.263540 systemd-logind[1469]: New session 2 of user core. Jan 29 11:32:14.273927 systemd[1]: Started session-2.scope - Session 2 of User core. Jan 29 11:32:14.329719 sshd[1598]: Connection closed by 10.0.0.1 port 56712 Jan 29 11:32:14.330140 sshd-session[1596]: pam_unix(sshd:session): session closed for user core Jan 29 11:32:14.345588 systemd[1]: sshd@1-10.0.0.79:22-10.0.0.1:56712.service: Deactivated successfully. Jan 29 11:32:14.347296 systemd[1]: session-2.scope: Deactivated successfully. Jan 29 11:32:14.349053 systemd-logind[1469]: Session 2 logged out. Waiting for processes to exit. Jan 29 11:32:14.357201 systemd[1]: Started sshd@2-10.0.0.79:22-10.0.0.1:56716.service - OpenSSH per-connection server daemon (10.0.0.1:56716). Jan 29 11:32:14.358305 systemd-logind[1469]: Removed session 2. Jan 29 11:32:14.392159 sshd[1603]: Accepted publickey for core from 10.0.0.1 port 56716 ssh2: RSA SHA256:vglZfOE0APgUpJbg1gfFAEfTfpzlM1a6LSSiwuQWd4A Jan 29 11:32:14.393735 sshd-session[1603]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Jan 29 11:32:14.397900 systemd-logind[1469]: New session 3 of user core. Jan 29 11:32:14.409990 systemd[1]: Started session-3.scope - Session 3 of User core. 
Jan 29 11:32:14.461111 sshd[1605]: Connection closed by 10.0.0.1 port 56716 Jan 29 11:32:14.461507 sshd-session[1603]: pam_unix(sshd:session): session closed for user core Jan 29 11:32:14.478649 systemd[1]: sshd@2-10.0.0.79:22-10.0.0.1:56716.service: Deactivated successfully. Jan 29 11:32:14.480432 systemd[1]: session-3.scope: Deactivated successfully. Jan 29 11:32:14.482048 systemd-logind[1469]: Session 3 logged out. Waiting for processes to exit. Jan 29 11:32:14.492271 systemd[1]: Started sshd@3-10.0.0.79:22-10.0.0.1:56730.service - OpenSSH per-connection server daemon (10.0.0.1:56730). Jan 29 11:32:14.493726 systemd-logind[1469]: Removed session 3. Jan 29 11:32:14.526409 sshd[1610]: Accepted publickey for core from 10.0.0.1 port 56730 ssh2: RSA SHA256:vglZfOE0APgUpJbg1gfFAEfTfpzlM1a6LSSiwuQWd4A Jan 29 11:32:14.528017 sshd-session[1610]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Jan 29 11:32:14.532065 systemd-logind[1469]: New session 4 of user core. Jan 29 11:32:14.541928 systemd[1]: Started session-4.scope - Session 4 of User core. Jan 29 11:32:14.598982 sshd[1612]: Connection closed by 10.0.0.1 port 56730 Jan 29 11:32:14.599374 sshd-session[1610]: pam_unix(sshd:session): session closed for user core Jan 29 11:32:14.620286 systemd[1]: sshd@3-10.0.0.79:22-10.0.0.1:56730.service: Deactivated successfully. Jan 29 11:32:14.622573 systemd[1]: session-4.scope: Deactivated successfully. Jan 29 11:32:14.624726 systemd-logind[1469]: Session 4 logged out. Waiting for processes to exit. Jan 29 11:32:14.643385 systemd[1]: Started sshd@4-10.0.0.79:22-10.0.0.1:56742.service - OpenSSH per-connection server daemon (10.0.0.1:56742). Jan 29 11:32:14.644748 systemd-logind[1469]: Removed session 4. Jan 29 11:32:14.680136 sshd[1617]: Accepted publickey for core from 10.0.0.1 port 56742 ssh2: RSA SHA256:vglZfOE0APgUpJbg1gfFAEfTfpzlM1a6LSSiwuQWd4A Jan 29 11:32:14.681933 sshd-session[1617]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Jan 29 11:32:14.686731 systemd-logind[1469]: New session 5 of user core. Jan 29 11:32:14.696967 systemd[1]: Started session-5.scope - Session 5 of User core. Jan 29 11:32:14.759543 sudo[1620]: core : PWD=/home/core ; USER=root ; COMMAND=/usr/sbin/setenforce 1 Jan 29 11:32:14.760066 sudo[1620]: pam_unix(sudo:session): session opened for user root(uid=0) by core(uid=500) Jan 29 11:32:14.784964 sudo[1620]: pam_unix(sudo:session): session closed for user root Jan 29 11:32:14.787152 sshd[1619]: Connection closed by 10.0.0.1 port 56742 Jan 29 11:32:14.787581 sshd-session[1617]: pam_unix(sshd:session): session closed for user core Jan 29 11:32:14.803399 systemd[1]: sshd@4-10.0.0.79:22-10.0.0.1:56742.service: Deactivated successfully. Jan 29 11:32:14.805413 systemd[1]: session-5.scope: Deactivated successfully. Jan 29 11:32:14.807142 systemd-logind[1469]: Session 5 logged out. Waiting for processes to exit. Jan 29 11:32:14.824242 systemd[1]: Started sshd@5-10.0.0.79:22-10.0.0.1:56750.service - OpenSSH per-connection server daemon (10.0.0.1:56750). Jan 29 11:32:14.825565 systemd-logind[1469]: Removed session 5. Jan 29 11:32:14.861966 sshd[1625]: Accepted publickey for core from 10.0.0.1 port 56750 ssh2: RSA SHA256:vglZfOE0APgUpJbg1gfFAEfTfpzlM1a6LSSiwuQWd4A Jan 29 11:32:14.863916 sshd-session[1625]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Jan 29 11:32:14.868240 systemd-logind[1469]: New session 6 of user core. 
Jan 29 11:32:14.878878 systemd[1]: Started session-6.scope - Session 6 of User core. Jan 29 11:32:14.934113 sudo[1629]: core : PWD=/home/core ; USER=root ; COMMAND=/usr/sbin/rm -rf /etc/audit/rules.d/80-selinux.rules /etc/audit/rules.d/99-default.rules Jan 29 11:32:14.934448 sudo[1629]: pam_unix(sudo:session): session opened for user root(uid=0) by core(uid=500) Jan 29 11:32:14.940325 sudo[1629]: pam_unix(sudo:session): session closed for user root Jan 29 11:32:14.946791 sudo[1628]: core : PWD=/home/core ; USER=root ; COMMAND=/usr/sbin/systemctl restart audit-rules Jan 29 11:32:14.947129 sudo[1628]: pam_unix(sudo:session): session opened for user root(uid=0) by core(uid=500) Jan 29 11:32:14.969064 systemd[1]: Starting audit-rules.service - Load Audit Rules... Jan 29 11:32:14.999877 augenrules[1651]: No rules Jan 29 11:32:15.001635 systemd[1]: audit-rules.service: Deactivated successfully. Jan 29 11:32:15.001932 systemd[1]: Finished audit-rules.service - Load Audit Rules. Jan 29 11:32:15.003087 sudo[1628]: pam_unix(sudo:session): session closed for user root Jan 29 11:32:15.004845 sshd[1627]: Connection closed by 10.0.0.1 port 56750 Jan 29 11:32:15.005343 sshd-session[1625]: pam_unix(sshd:session): session closed for user core Jan 29 11:32:15.014622 systemd[1]: sshd@5-10.0.0.79:22-10.0.0.1:56750.service: Deactivated successfully. Jan 29 11:32:15.016577 systemd[1]: session-6.scope: Deactivated successfully. Jan 29 11:32:15.017956 systemd-logind[1469]: Session 6 logged out. Waiting for processes to exit. Jan 29 11:32:15.027142 systemd[1]: Started sshd@6-10.0.0.79:22-10.0.0.1:56754.service - OpenSSH per-connection server daemon (10.0.0.1:56754). Jan 29 11:32:15.027988 systemd-logind[1469]: Removed session 6. Jan 29 11:32:15.059675 sshd[1659]: Accepted publickey for core from 10.0.0.1 port 56754 ssh2: RSA SHA256:vglZfOE0APgUpJbg1gfFAEfTfpzlM1a6LSSiwuQWd4A Jan 29 11:32:15.061234 sshd-session[1659]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Jan 29 11:32:15.064823 systemd-logind[1469]: New session 7 of user core. Jan 29 11:32:15.078871 systemd[1]: Started session-7.scope - Session 7 of User core. Jan 29 11:32:15.132873 sudo[1663]: core : PWD=/home/core ; USER=root ; COMMAND=/home/core/install.sh Jan 29 11:32:15.133311 sudo[1663]: pam_unix(sudo:session): session opened for user root(uid=0) by core(uid=500) Jan 29 11:32:15.155161 systemd[1]: Starting coreos-metadata.service - QEMU metadata agent... Jan 29 11:32:15.178311 systemd[1]: coreos-metadata.service: Deactivated successfully. Jan 29 11:32:15.178565 systemd[1]: Finished coreos-metadata.service - QEMU metadata agent. Jan 29 11:32:15.908928 systemd[1]: Stopped kubelet.service - kubelet: The Kubernetes Node Agent. Jan 29 11:32:15.917061 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent... Jan 29 11:32:15.950410 systemd[1]: Reloading requested from client PID 1711 ('systemctl') (unit session-7.scope)... Jan 29 11:32:15.950426 systemd[1]: Reloading... Jan 29 11:32:16.030797 zram_generator::config[1752]: No configuration found. Jan 29 11:32:16.627232 systemd[1]: /usr/lib/systemd/system/docker.socket:6: ListenStream= references a path below legacy directory /var/run/, updating /var/run/docker.sock → /run/docker.sock; please update the unit file accordingly. Jan 29 11:32:16.702494 systemd[1]: Reloading finished in 751 ms. Jan 29 11:32:16.750613 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent. 
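Note: the $KUBELET_EXTRA_ARGS / $KUBELET_KUBEADM_ARGS variables that kubelet.service reports as referenced but unset are normally supplied through a systemd drop-in plus environment files. A kubeadm-style drop-in (for example /etc/systemd/system/kubelet.service.d/10-kubeadm.conf) is sketched below purely for reference; the kubelet binary path and file names are assumptions and were not read from this host.

[Service]
Environment="KUBELET_KUBECONFIG_ARGS=--bootstrap-kubeconfig=/etc/kubernetes/bootstrap-kubelet.conf --kubeconfig=/etc/kubernetes/kubelet.conf"
Environment="KUBELET_CONFIG_ARGS=--config=/var/lib/kubelet/config.yaml"
# kubeadm writes its generated flags to kubeadm-flags.env; an admin may set KUBELET_EXTRA_ARGS in the second file.
EnvironmentFile=-/var/lib/kubelet/kubeadm-flags.env
EnvironmentFile=-/etc/default/kubelet
ExecStart=
ExecStart=/usr/bin/kubelet $KUBELET_KUBECONFIG_ARGS $KUBELET_CONFIG_ARGS $KUBELET_KUBEADM_ARGS $KUBELET_EXTRA_ARGS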
Jan 29 11:32:16.754143 systemd[1]: Stopping kubelet.service - kubelet: The Kubernetes Node Agent... Jan 29 11:32:16.755608 systemd[1]: kubelet.service: Deactivated successfully. Jan 29 11:32:16.755928 systemd[1]: Stopped kubelet.service - kubelet: The Kubernetes Node Agent. Jan 29 11:32:16.757808 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent... Jan 29 11:32:16.906858 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent. Jan 29 11:32:16.913139 (kubelet)[1799]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS Jan 29 11:32:16.972742 kubelet[1799]: Flag --container-runtime-endpoint has been deprecated, This parameter should be set via the config file specified by the Kubelet's --config flag. See https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/ for more information. Jan 29 11:32:16.972742 kubelet[1799]: Flag --pod-infra-container-image has been deprecated, will be removed in a future release. Image garbage collector will get sandbox image information from CRI. Jan 29 11:32:16.972742 kubelet[1799]: Flag --volume-plugin-dir has been deprecated, This parameter should be set via the config file specified by the Kubelet's --config flag. See https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/ for more information. Jan 29 11:32:16.973866 kubelet[1799]: I0129 11:32:16.973781 1799 server.go:205] "--pod-infra-container-image will not be pruned by the image garbage collector in kubelet and should also be set in the remote runtime" Jan 29 11:32:17.259532 kubelet[1799]: I0129 11:32:17.259350 1799 server.go:484] "Kubelet version" kubeletVersion="v1.30.1" Jan 29 11:32:17.259532 kubelet[1799]: I0129 11:32:17.259392 1799 server.go:486] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK="" Jan 29 11:32:17.259688 kubelet[1799]: I0129 11:32:17.259633 1799 server.go:927] "Client rotation is on, will bootstrap in background" Jan 29 11:32:17.300934 kubelet[1799]: I0129 11:32:17.300773 1799 dynamic_cafile_content.go:157] "Starting controller" name="client-ca-bundle::/etc/kubernetes/pki/ca.crt" Jan 29 11:32:17.318994 kubelet[1799]: I0129 11:32:17.318954 1799 server.go:742] "--cgroups-per-qos enabled, but --cgroup-root was not specified. 
defaulting to /" Jan 29 11:32:17.320809 kubelet[1799]: I0129 11:32:17.320740 1799 container_manager_linux.go:265] "Container manager verified user specified cgroup-root exists" cgroupRoot=[] Jan 29 11:32:17.320979 kubelet[1799]: I0129 11:32:17.320793 1799 container_manager_linux.go:270] "Creating Container Manager object based on Node Config" nodeConfig={"NodeName":"10.0.0.79","RuntimeCgroupsName":"","SystemCgroupsName":"","KubeletCgroupsName":"","KubeletOOMScoreAdj":-999,"ContainerRuntime":"","CgroupsPerQOS":true,"CgroupRoot":"/","CgroupDriver":"systemd","KubeletRootDir":"/var/lib/kubelet","ProtectKernelDefaults":false,"KubeReservedCgroupName":"","SystemReservedCgroupName":"","ReservedSystemCPUs":{},"EnforceNodeAllocatable":{"pods":{}},"KubeReserved":null,"SystemReserved":null,"HardEvictionThresholds":[{"Signal":"memory.available","Operator":"LessThan","Value":{"Quantity":"100Mi","Percentage":0},"GracePeriod":0,"MinReclaim":null},{"Signal":"nodefs.available","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.1},"GracePeriod":0,"MinReclaim":null},{"Signal":"nodefs.inodesFree","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.05},"GracePeriod":0,"MinReclaim":null},{"Signal":"imagefs.available","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.15},"GracePeriod":0,"MinReclaim":null},{"Signal":"imagefs.inodesFree","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.05},"GracePeriod":0,"MinReclaim":null}],"QOSReserved":{},"CPUManagerPolicy":"none","CPUManagerPolicyOptions":null,"TopologyManagerScope":"container","CPUManagerReconcilePeriod":10000000000,"ExperimentalMemoryManagerPolicy":"None","ExperimentalMemoryManagerReservedMemory":null,"PodPidsLimit":-1,"EnforceCPULimits":true,"CPUCFSQuotaPeriod":100000000,"TopologyManagerPolicy":"none","TopologyManagerPolicyOptions":null} Jan 29 11:32:17.320979 kubelet[1799]: I0129 11:32:17.320980 1799 topology_manager.go:138] "Creating topology manager with none policy" Jan 29 11:32:17.321221 kubelet[1799]: I0129 11:32:17.320990 1799 container_manager_linux.go:301] "Creating device plugin manager" Jan 29 11:32:17.321221 kubelet[1799]: I0129 11:32:17.321112 1799 state_mem.go:36] "Initialized new in-memory state store" Jan 29 11:32:17.327942 kubelet[1799]: I0129 11:32:17.327877 1799 kubelet.go:400] "Attempting to sync node with API server" Jan 29 11:32:17.327942 kubelet[1799]: I0129 11:32:17.327921 1799 kubelet.go:301] "Adding static pod path" path="/etc/kubernetes/manifests" Jan 29 11:32:17.327942 kubelet[1799]: I0129 11:32:17.327952 1799 kubelet.go:312] "Adding apiserver pod source" Jan 29 11:32:17.328101 kubelet[1799]: I0129 11:32:17.327974 1799 apiserver.go:42] "Waiting for node sync before watching apiserver pods" Jan 29 11:32:17.328175 kubelet[1799]: E0129 11:32:17.328130 1799 file.go:98] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Jan 29 11:32:17.328255 kubelet[1799]: E0129 11:32:17.328225 1799 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Jan 29 11:32:17.333187 kubelet[1799]: I0129 11:32:17.333143 1799 kuberuntime_manager.go:261] "Container runtime initialized" containerRuntime="containerd" version="v1.7.23" apiVersion="v1" Jan 29 11:32:17.335024 kubelet[1799]: I0129 11:32:17.334970 1799 kubelet.go:815] "Not starting ClusterTrustBundle informer because we are in static kubelet mode" Jan 29 11:32:17.335122 kubelet[1799]: W0129 11:32:17.335054 1799 probe.go:272] Flexvolume plugin 
directory at /opt/libexec/kubernetes/kubelet-plugins/volume/exec/ does not exist. Recreating. Jan 29 11:32:17.335952 kubelet[1799]: I0129 11:32:17.335788 1799 server.go:1264] "Started kubelet" Jan 29 11:32:17.336621 kubelet[1799]: I0129 11:32:17.336150 1799 server.go:163] "Starting to listen" address="0.0.0.0" port=10250 Jan 29 11:32:17.337198 kubelet[1799]: I0129 11:32:17.337135 1799 ratelimit.go:55] "Setting rate limiting for endpoint" service="podresources" qps=100 burstTokens=10 Jan 29 11:32:17.337445 kubelet[1799]: I0129 11:32:17.337418 1799 server.go:455] "Adding debug handlers to kubelet server" Jan 29 11:32:17.337943 kubelet[1799]: I0129 11:32:17.337527 1799 server.go:227] "Starting to serve the podresources API" endpoint="unix:/var/lib/kubelet/pod-resources/kubelet.sock" Jan 29 11:32:17.339612 kubelet[1799]: I0129 11:32:17.339082 1799 fs_resource_analyzer.go:67] "Starting FS ResourceAnalyzer" Jan 29 11:32:17.341112 kubelet[1799]: I0129 11:32:17.340912 1799 volume_manager.go:291] "Starting Kubelet Volume Manager" Jan 29 11:32:17.341112 kubelet[1799]: I0129 11:32:17.341043 1799 desired_state_of_world_populator.go:149] "Desired state populator starts to run" Jan 29 11:32:17.341888 kubelet[1799]: I0129 11:32:17.341140 1799 reconciler.go:26] "Reconciler: start to sync state" Jan 29 11:32:17.342359 kubelet[1799]: I0129 11:32:17.342131 1799 factory.go:221] Registration of the systemd container factory successfully Jan 29 11:32:17.342359 kubelet[1799]: I0129 11:32:17.342223 1799 factory.go:219] Registration of the crio container factory failed: Get "http://%2Fvar%2Frun%2Fcrio%2Fcrio.sock/info": dial unix /var/run/crio/crio.sock: connect: no such file or directory Jan 29 11:32:17.342454 kubelet[1799]: E0129 11:32:17.342407 1799 kubelet.go:1467] "Image garbage collection failed once. 
Stats initialization may not have completed yet" err="invalid capacity 0 on image filesystem" Jan 29 11:32:17.343900 kubelet[1799]: I0129 11:32:17.343862 1799 factory.go:221] Registration of the containerd container factory successfully Jan 29 11:32:17.347152 kubelet[1799]: E0129 11:32:17.347129 1799 nodelease.go:49] "Failed to get node when trying to set owner ref to the node lease" err="nodes \"10.0.0.79\" not found" node="10.0.0.79" Jan 29 11:32:17.361704 kubelet[1799]: I0129 11:32:17.361649 1799 cpu_manager.go:214] "Starting CPU manager" policy="none" Jan 29 11:32:17.361704 kubelet[1799]: I0129 11:32:17.361676 1799 cpu_manager.go:215] "Reconciling" reconcilePeriod="10s" Jan 29 11:32:17.361817 kubelet[1799]: I0129 11:32:17.361722 1799 state_mem.go:36] "Initialized new in-memory state store" Jan 29 11:32:17.442519 kubelet[1799]: I0129 11:32:17.442468 1799 kubelet_node_status.go:73] "Attempting to register node" node="10.0.0.79" Jan 29 11:32:17.757706 kubelet[1799]: I0129 11:32:17.757368 1799 policy_none.go:49] "None policy: Start" Jan 29 11:32:17.757706 kubelet[1799]: I0129 11:32:17.757406 1799 kubelet_node_status.go:76] "Successfully registered node" node="10.0.0.79" Jan 29 11:32:17.759087 kubelet[1799]: I0129 11:32:17.758747 1799 memory_manager.go:170] "Starting memorymanager" policy="None" Jan 29 11:32:17.759087 kubelet[1799]: I0129 11:32:17.758803 1799 state_mem.go:35] "Initializing new in-memory state store" Jan 29 11:32:17.759087 kubelet[1799]: I0129 11:32:17.758798 1799 kuberuntime_manager.go:1523] "Updating runtime config through cri with podcidr" CIDR="192.168.1.0/24" Jan 29 11:32:17.759414 containerd[1484]: time="2025-01-29T11:32:17.759330096Z" level=info msg="No cni config template is specified, wait for other system components to drop the config." Jan 29 11:32:17.759929 kubelet[1799]: I0129 11:32:17.759586 1799 kubelet_network.go:61] "Updating Pod CIDR" originalPodCIDR="" newPodCIDR="192.168.1.0/24" Jan 29 11:32:17.772129 systemd[1]: Created slice kubepods.slice - libcontainer container kubepods.slice. Jan 29 11:32:17.781919 systemd[1]: Created slice kubepods-burstable.slice - libcontainer container kubepods-burstable.slice. Jan 29 11:32:17.785364 systemd[1]: Created slice kubepods-besteffort.slice - libcontainer container kubepods-besteffort.slice. Jan 29 11:32:17.791573 kubelet[1799]: I0129 11:32:17.791508 1799 kubelet_network_linux.go:50] "Initialized iptables rules." protocol="IPv4" Jan 29 11:32:17.792871 kubelet[1799]: I0129 11:32:17.792838 1799 manager.go:479] "Failed to read data from checkpoint" checkpoint="kubelet_internal_checkpoint" err="checkpoint is not found" Jan 29 11:32:17.793564 kubelet[1799]: I0129 11:32:17.793075 1799 container_log_manager.go:186] "Initializing container log rotate workers" workers=1 monitorPeriod="10s" Jan 29 11:32:17.793607 kubelet[1799]: I0129 11:32:17.793594 1799 plugin_manager.go:118] "Starting Kubelet Plugin Manager" Jan 29 11:32:17.793776 kubelet[1799]: I0129 11:32:17.793729 1799 kubelet_network_linux.go:50] "Initialized iptables rules." 
protocol="IPv6" Jan 29 11:32:17.794073 kubelet[1799]: I0129 11:32:17.794059 1799 status_manager.go:217] "Starting to sync pod status with apiserver" Jan 29 11:32:17.794154 kubelet[1799]: I0129 11:32:17.794143 1799 kubelet.go:2337] "Starting kubelet main sync loop" Jan 29 11:32:17.794249 kubelet[1799]: E0129 11:32:17.794235 1799 kubelet.go:2361] "Skipping pod synchronization" err="PLEG is not healthy: pleg has yet to be successful" Jan 29 11:32:17.922430 sudo[1663]: pam_unix(sudo:session): session closed for user root Jan 29 11:32:17.923899 sshd[1662]: Connection closed by 10.0.0.1 port 56754 Jan 29 11:32:17.924368 sshd-session[1659]: pam_unix(sshd:session): session closed for user core Jan 29 11:32:17.928954 systemd[1]: sshd@6-10.0.0.79:22-10.0.0.1:56754.service: Deactivated successfully. Jan 29 11:32:17.931046 systemd[1]: session-7.scope: Deactivated successfully. Jan 29 11:32:17.931646 systemd-logind[1469]: Session 7 logged out. Waiting for processes to exit. Jan 29 11:32:17.932529 systemd-logind[1469]: Removed session 7. Jan 29 11:32:18.283985 kubelet[1799]: I0129 11:32:18.283933 1799 transport.go:147] "Certificate rotation detected, shutting down client connections to start using new credentials" Jan 29 11:32:18.284550 kubelet[1799]: W0129 11:32:18.284210 1799 reflector.go:470] k8s.io/client-go/informers/factory.go:160: watch of *v1.CSIDriver ended with: very short watch: k8s.io/client-go/informers/factory.go:160: Unexpected watch close - watch lasted less than a second and no items received Jan 29 11:32:18.284550 kubelet[1799]: W0129 11:32:18.284210 1799 reflector.go:470] k8s.io/client-go/informers/factory.go:160: watch of *v1.RuntimeClass ended with: very short watch: k8s.io/client-go/informers/factory.go:160: Unexpected watch close - watch lasted less than a second and no items received Jan 29 11:32:18.284550 kubelet[1799]: W0129 11:32:18.284256 1799 reflector.go:470] k8s.io/client-go/informers/factory.go:160: watch of *v1.Service ended with: very short watch: k8s.io/client-go/informers/factory.go:160: Unexpected watch close - watch lasted less than a second and no items received Jan 29 11:32:18.328907 kubelet[1799]: E0129 11:32:18.328851 1799 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Jan 29 11:32:18.328907 kubelet[1799]: I0129 11:32:18.328878 1799 apiserver.go:52] "Watching apiserver" Jan 29 11:32:18.331725 kubelet[1799]: I0129 11:32:18.331685 1799 topology_manager.go:215] "Topology Admit Handler" podUID="f7e550cc-f714-4ac0-83d7-9dab0f53ba79" podNamespace="calico-system" podName="calico-node-bhlzn" Jan 29 11:32:18.331851 kubelet[1799]: I0129 11:32:18.331819 1799 topology_manager.go:215] "Topology Admit Handler" podUID="168d8980-ba5c-4483-9146-b7dc7884186d" podNamespace="calico-system" podName="csi-node-driver-7rbs5" Jan 29 11:32:18.331900 kubelet[1799]: I0129 11:32:18.331890 1799 topology_manager.go:215] "Topology Admit Handler" podUID="b99a0998-abda-45d5-b908-5d108cae604b" podNamespace="kube-system" podName="kube-proxy-f6jjd" Jan 29 11:32:18.332096 kubelet[1799]: E0129 11:32:18.332052 1799 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" pod="calico-system/csi-node-driver-7rbs5" podUID="168d8980-ba5c-4483-9146-b7dc7884186d" Jan 29 11:32:18.340006 systemd[1]: Created slice 
kubepods-besteffort-podb99a0998_abda_45d5_b908_5d108cae604b.slice - libcontainer container kubepods-besteffort-podb99a0998_abda_45d5_b908_5d108cae604b.slice. Jan 29 11:32:18.343506 kubelet[1799]: I0129 11:32:18.343471 1799 desired_state_of_world_populator.go:157] "Finished populating initial desired state of world" Jan 29 11:32:18.347325 kubelet[1799]: I0129 11:32:18.347288 1799 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"tigera-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/f7e550cc-f714-4ac0-83d7-9dab0f53ba79-tigera-ca-bundle\") pod \"calico-node-bhlzn\" (UID: \"f7e550cc-f714-4ac0-83d7-9dab0f53ba79\") " pod="calico-system/calico-node-bhlzn" Jan 29 11:32:18.347325 kubelet[1799]: I0129 11:32:18.347328 1799 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"var-run-calico\" (UniqueName: \"kubernetes.io/host-path/f7e550cc-f714-4ac0-83d7-9dab0f53ba79-var-run-calico\") pod \"calico-node-bhlzn\" (UID: \"f7e550cc-f714-4ac0-83d7-9dab0f53ba79\") " pod="calico-system/calico-node-bhlzn" Jan 29 11:32:18.347433 kubelet[1799]: I0129 11:32:18.347350 1799 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cni-log-dir\" (UniqueName: \"kubernetes.io/host-path/f7e550cc-f714-4ac0-83d7-9dab0f53ba79-cni-log-dir\") pod \"calico-node-bhlzn\" (UID: \"f7e550cc-f714-4ac0-83d7-9dab0f53ba79\") " pod="calico-system/calico-node-bhlzn" Jan 29 11:32:18.347433 kubelet[1799]: I0129 11:32:18.347370 1799 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"flexvol-driver-host\" (UniqueName: \"kubernetes.io/host-path/f7e550cc-f714-4ac0-83d7-9dab0f53ba79-flexvol-driver-host\") pod \"calico-node-bhlzn\" (UID: \"f7e550cc-f714-4ac0-83d7-9dab0f53ba79\") " pod="calico-system/calico-node-bhlzn" Jan 29 11:32:18.347433 kubelet[1799]: I0129 11:32:18.347405 1799 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"registration-dir\" (UniqueName: \"kubernetes.io/host-path/168d8980-ba5c-4483-9146-b7dc7884186d-registration-dir\") pod \"csi-node-driver-7rbs5\" (UID: \"168d8980-ba5c-4483-9146-b7dc7884186d\") " pod="calico-system/csi-node-driver-7rbs5" Jan 29 11:32:18.347433 kubelet[1799]: I0129 11:32:18.347430 1799 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-proxy\" (UniqueName: \"kubernetes.io/configmap/b99a0998-abda-45d5-b908-5d108cae604b-kube-proxy\") pod \"kube-proxy-f6jjd\" (UID: \"b99a0998-abda-45d5-b908-5d108cae604b\") " pod="kube-system/kube-proxy-f6jjd" Jan 29 11:32:18.347528 kubelet[1799]: I0129 11:32:18.347445 1799 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cni-net-dir\" (UniqueName: \"kubernetes.io/host-path/f7e550cc-f714-4ac0-83d7-9dab0f53ba79-cni-net-dir\") pod \"calico-node-bhlzn\" (UID: \"f7e550cc-f714-4ac0-83d7-9dab0f53ba79\") " pod="calico-system/calico-node-bhlzn" Jan 29 11:32:18.347528 kubelet[1799]: I0129 11:32:18.347512 1799 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"xtables-lock\" (UniqueName: \"kubernetes.io/host-path/b99a0998-abda-45d5-b908-5d108cae604b-xtables-lock\") pod \"kube-proxy-f6jjd\" (UID: \"b99a0998-abda-45d5-b908-5d108cae604b\") " pod="kube-system/kube-proxy-f6jjd" Jan 29 11:32:18.347568 kubelet[1799]: I0129 11:32:18.347540 1799 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume 
started for volume \"policysync\" (UniqueName: \"kubernetes.io/host-path/f7e550cc-f714-4ac0-83d7-9dab0f53ba79-policysync\") pod \"calico-node-bhlzn\" (UID: \"f7e550cc-f714-4ac0-83d7-9dab0f53ba79\") " pod="calico-system/calico-node-bhlzn" Jan 29 11:32:18.347568 kubelet[1799]: I0129 11:32:18.347559 1799 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-jvv4k\" (UniqueName: \"kubernetes.io/projected/f7e550cc-f714-4ac0-83d7-9dab0f53ba79-kube-api-access-jvv4k\") pod \"calico-node-bhlzn\" (UID: \"f7e550cc-f714-4ac0-83d7-9dab0f53ba79\") " pod="calico-system/calico-node-bhlzn" Jan 29 11:32:18.347620 kubelet[1799]: I0129 11:32:18.347578 1799 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-sb69c\" (UniqueName: \"kubernetes.io/projected/168d8980-ba5c-4483-9146-b7dc7884186d-kube-api-access-sb69c\") pod \"csi-node-driver-7rbs5\" (UID: \"168d8980-ba5c-4483-9146-b7dc7884186d\") " pod="calico-system/csi-node-driver-7rbs5" Jan 29 11:32:18.347620 kubelet[1799]: I0129 11:32:18.347595 1799 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"xtables-lock\" (UniqueName: \"kubernetes.io/host-path/f7e550cc-f714-4ac0-83d7-9dab0f53ba79-xtables-lock\") pod \"calico-node-bhlzn\" (UID: \"f7e550cc-f714-4ac0-83d7-9dab0f53ba79\") " pod="calico-system/calico-node-bhlzn" Jan 29 11:32:18.347620 kubelet[1799]: I0129 11:32:18.347611 1799 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-r6jzg\" (UniqueName: \"kubernetes.io/projected/b99a0998-abda-45d5-b908-5d108cae604b-kube-api-access-r6jzg\") pod \"kube-proxy-f6jjd\" (UID: \"b99a0998-abda-45d5-b908-5d108cae604b\") " pod="kube-system/kube-proxy-f6jjd" Jan 29 11:32:18.347684 kubelet[1799]: I0129 11:32:18.347628 1799 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/f7e550cc-f714-4ac0-83d7-9dab0f53ba79-lib-modules\") pod \"calico-node-bhlzn\" (UID: \"f7e550cc-f714-4ac0-83d7-9dab0f53ba79\") " pod="calico-system/calico-node-bhlzn" Jan 29 11:32:18.347684 kubelet[1799]: I0129 11:32:18.347648 1799 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"node-certs\" (UniqueName: \"kubernetes.io/secret/f7e550cc-f714-4ac0-83d7-9dab0f53ba79-node-certs\") pod \"calico-node-bhlzn\" (UID: \"f7e550cc-f714-4ac0-83d7-9dab0f53ba79\") " pod="calico-system/calico-node-bhlzn" Jan 29 11:32:18.347684 kubelet[1799]: I0129 11:32:18.347665 1799 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"var-lib-calico\" (UniqueName: \"kubernetes.io/host-path/f7e550cc-f714-4ac0-83d7-9dab0f53ba79-var-lib-calico\") pod \"calico-node-bhlzn\" (UID: \"f7e550cc-f714-4ac0-83d7-9dab0f53ba79\") " pod="calico-system/calico-node-bhlzn" Jan 29 11:32:18.347851 kubelet[1799]: I0129 11:32:18.347696 1799 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cni-bin-dir\" (UniqueName: \"kubernetes.io/host-path/f7e550cc-f714-4ac0-83d7-9dab0f53ba79-cni-bin-dir\") pod \"calico-node-bhlzn\" (UID: \"f7e550cc-f714-4ac0-83d7-9dab0f53ba79\") " pod="calico-system/calico-node-bhlzn" Jan 29 11:32:18.347851 kubelet[1799]: I0129 11:32:18.347735 1799 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"varrun\" (UniqueName: 
\"kubernetes.io/host-path/168d8980-ba5c-4483-9146-b7dc7884186d-varrun\") pod \"csi-node-driver-7rbs5\" (UID: \"168d8980-ba5c-4483-9146-b7dc7884186d\") " pod="calico-system/csi-node-driver-7rbs5" Jan 29 11:32:18.347851 kubelet[1799]: I0129 11:32:18.347783 1799 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kubelet-dir\" (UniqueName: \"kubernetes.io/host-path/168d8980-ba5c-4483-9146-b7dc7884186d-kubelet-dir\") pod \"csi-node-driver-7rbs5\" (UID: \"168d8980-ba5c-4483-9146-b7dc7884186d\") " pod="calico-system/csi-node-driver-7rbs5" Jan 29 11:32:18.347851 kubelet[1799]: I0129 11:32:18.347811 1799 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/b99a0998-abda-45d5-b908-5d108cae604b-lib-modules\") pod \"kube-proxy-f6jjd\" (UID: \"b99a0998-abda-45d5-b908-5d108cae604b\") " pod="kube-system/kube-proxy-f6jjd" Jan 29 11:32:18.347851 kubelet[1799]: I0129 11:32:18.347830 1799 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"socket-dir\" (UniqueName: \"kubernetes.io/host-path/168d8980-ba5c-4483-9146-b7dc7884186d-socket-dir\") pod \"csi-node-driver-7rbs5\" (UID: \"168d8980-ba5c-4483-9146-b7dc7884186d\") " pod="calico-system/csi-node-driver-7rbs5" Jan 29 11:32:18.351501 systemd[1]: Created slice kubepods-besteffort-podf7e550cc_f714_4ac0_83d7_9dab0f53ba79.slice - libcontainer container kubepods-besteffort-podf7e550cc_f714_4ac0_83d7_9dab0f53ba79.slice. Jan 29 11:32:18.451114 kubelet[1799]: E0129 11:32:18.451059 1799 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 29 11:32:18.451114 kubelet[1799]: W0129 11:32:18.451085 1799 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 29 11:32:18.451114 kubelet[1799]: E0129 11:32:18.451107 1799 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jan 29 11:32:18.452859 kubelet[1799]: E0129 11:32:18.452821 1799 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 29 11:32:18.452859 kubelet[1799]: W0129 11:32:18.452844 1799 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 29 11:32:18.452937 kubelet[1799]: E0129 11:32:18.452865 1799 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jan 29 11:32:18.458194 kubelet[1799]: E0129 11:32:18.458142 1799 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 29 11:32:18.458194 kubelet[1799]: W0129 11:32:18.458178 1799 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 29 11:32:18.458314 kubelet[1799]: E0129 11:32:18.458210 1799 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Jan 29 11:32:18.458507 kubelet[1799]: E0129 11:32:18.458480 1799 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 29 11:32:18.458507 kubelet[1799]: W0129 11:32:18.458498 1799 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 29 11:32:18.458507 kubelet[1799]: E0129 11:32:18.458507 1799 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jan 29 11:32:18.459227 kubelet[1799]: E0129 11:32:18.459195 1799 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 29 11:32:18.459227 kubelet[1799]: W0129 11:32:18.459218 1799 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 29 11:32:18.459303 kubelet[1799]: E0129 11:32:18.459233 1799 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jan 29 11:32:18.650522 kubelet[1799]: E0129 11:32:18.650321 1799 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jan 29 11:32:18.651278 containerd[1484]: time="2025-01-29T11:32:18.651210300Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-proxy-f6jjd,Uid:b99a0998-abda-45d5-b908-5d108cae604b,Namespace:kube-system,Attempt:0,}" Jan 29 11:32:18.654440 kubelet[1799]: E0129 11:32:18.654417 1799 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jan 29 11:32:18.654970 containerd[1484]: time="2025-01-29T11:32:18.654916010Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-node-bhlzn,Uid:f7e550cc-f714-4ac0-83d7-9dab0f53ba79,Namespace:calico-system,Attempt:0,}" Jan 29 11:32:19.185737 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount2173729685.mount: Deactivated successfully. 
Jan 29 11:32:19.196747 containerd[1484]: time="2025-01-29T11:32:19.196691162Z" level=info msg="ImageCreate event name:\"registry.k8s.io/pause:3.8\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"} labels:{key:\"io.cri-containerd.pinned\" value:\"pinned\"}" Jan 29 11:32:19.198292 containerd[1484]: time="2025-01-29T11:32:19.198234466Z" level=info msg="stop pulling image registry.k8s.io/pause:3.8: active requests=0, bytes read=312056" Jan 29 11:32:19.199196 containerd[1484]: time="2025-01-29T11:32:19.199157797Z" level=info msg="ImageUpdate event name:\"registry.k8s.io/pause:3.8\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"} labels:{key:\"io.cri-containerd.pinned\" value:\"pinned\"}" Jan 29 11:32:19.200296 containerd[1484]: time="2025-01-29T11:32:19.200239055Z" level=info msg="ImageCreate event name:\"sha256:4873874c08efc72e9729683a83ffbb7502ee729e9a5ac097723806ea7fa13517\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"} labels:{key:\"io.cri-containerd.pinned\" value:\"pinned\"}" Jan 29 11:32:19.203195 containerd[1484]: time="2025-01-29T11:32:19.203135136Z" level=info msg="stop pulling image registry.k8s.io/pause:3.8: active requests=0, bytes read=0" Jan 29 11:32:19.204840 containerd[1484]: time="2025-01-29T11:32:19.204804777Z" level=info msg="ImageCreate event name:\"registry.k8s.io/pause@sha256:9001185023633d17a2f98ff69b6ff2615b8ea02a825adffa40422f51dfdcde9d\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"} labels:{key:\"io.cri-containerd.pinned\" value:\"pinned\"}" Jan 29 11:32:19.205654 containerd[1484]: time="2025-01-29T11:32:19.205627070Z" level=info msg="Pulled image \"registry.k8s.io/pause:3.8\" with image id \"sha256:4873874c08efc72e9729683a83ffbb7502ee729e9a5ac097723806ea7fa13517\", repo tag \"registry.k8s.io/pause:3.8\", repo digest \"registry.k8s.io/pause@sha256:9001185023633d17a2f98ff69b6ff2615b8ea02a825adffa40422f51dfdcde9d\", size \"311286\" in 550.594551ms" Jan 29 11:32:19.207106 containerd[1484]: time="2025-01-29T11:32:19.207075426Z" level=info msg="Pulled image \"registry.k8s.io/pause:3.8\" with image id \"sha256:4873874c08efc72e9729683a83ffbb7502ee729e9a5ac097723806ea7fa13517\", repo tag \"registry.k8s.io/pause:3.8\", repo digest \"registry.k8s.io/pause@sha256:9001185023633d17a2f98ff69b6ff2615b8ea02a825adffa40422f51dfdcde9d\", size \"311286\" in 555.731194ms" Jan 29 11:32:19.330054 kubelet[1799]: E0129 11:32:19.329897 1799 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Jan 29 11:32:19.341672 containerd[1484]: time="2025-01-29T11:32:19.341126059Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Jan 29 11:32:19.341672 containerd[1484]: time="2025-01-29T11:32:19.341337425Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Jan 29 11:32:19.341672 containerd[1484]: time="2025-01-29T11:32:19.341403479Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Jan 29 11:32:19.341672 containerd[1484]: time="2025-01-29T11:32:19.341422815Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Jan 29 11:32:19.341672 containerd[1484]: time="2025-01-29T11:32:19.341530968Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." 
runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Jan 29 11:32:19.341976 containerd[1484]: time="2025-01-29T11:32:19.341918495Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Jan 29 11:32:19.341976 containerd[1484]: time="2025-01-29T11:32:19.341945565Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Jan 29 11:32:19.342081 containerd[1484]: time="2025-01-29T11:32:19.342035985Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Jan 29 11:32:19.445907 systemd[1]: Started cri-containerd-01407be4dcb53d7ab2da775d628b9be0e750963f9d1b6c9aba072e02beee02fb.scope - libcontainer container 01407be4dcb53d7ab2da775d628b9be0e750963f9d1b6c9aba072e02beee02fb. Jan 29 11:32:19.447433 systemd[1]: Started cri-containerd-1ee4676dbe9f9072fb742ee5e317b0c72187a9324d7d478f0b8e63efe2057d82.scope - libcontainer container 1ee4676dbe9f9072fb742ee5e317b0c72187a9324d7d478f0b8e63efe2057d82. Jan 29 11:32:19.480222 containerd[1484]: time="2025-01-29T11:32:19.480085066Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-proxy-f6jjd,Uid:b99a0998-abda-45d5-b908-5d108cae604b,Namespace:kube-system,Attempt:0,} returns sandbox id \"01407be4dcb53d7ab2da775d628b9be0e750963f9d1b6c9aba072e02beee02fb\"" Jan 29 11:32:19.480222 containerd[1484]: time="2025-01-29T11:32:19.480212595Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-node-bhlzn,Uid:f7e550cc-f714-4ac0-83d7-9dab0f53ba79,Namespace:calico-system,Attempt:0,} returns sandbox id \"1ee4676dbe9f9072fb742ee5e317b0c72187a9324d7d478f0b8e63efe2057d82\"" Jan 29 11:32:19.481014 kubelet[1799]: E0129 11:32:19.480992 1799 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jan 29 11:32:19.481117 kubelet[1799]: E0129 11:32:19.481081 1799 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jan 29 11:32:19.482156 containerd[1484]: time="2025-01-29T11:32:19.482131013Z" level=info msg="PullImage \"registry.k8s.io/kube-proxy:v1.30.9\"" Jan 29 11:32:19.794945 kubelet[1799]: E0129 11:32:19.794788 1799 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" pod="calico-system/csi-node-driver-7rbs5" podUID="168d8980-ba5c-4483-9146-b7dc7884186d" Jan 29 11:32:20.330311 kubelet[1799]: E0129 11:32:20.330244 1799 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Jan 29 11:32:20.580575 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount1386131475.mount: Deactivated successfully. 
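The recurring dns.go:153 "Nameserver limits exceeded" entries mean the node's resolv.conf lists more nameservers than the kubelet will propagate into pod resolv.conf files, so it keeps 1.1.1.1, 1.0.0.1 and 8.8.8.8 and drops the rest. A rough sketch of that trimming, assuming a cap of three nameservers (the cap, the parsing and the function names are assumptions for illustration, not kubelet source):

```go
package main

import (
	"bufio"
	"fmt"
	"strings"
)

// maxNameservers reflects the usual resolv.conf limit of three nameservers;
// treating it as the kubelet's cap here is an assumption for illustration.
const maxNameservers = 3

// trimNameservers parses resolv.conf content, keeps at most maxNameservers
// entries, and reports whether any had to be dropped.
func trimNameservers(resolvConf string) (kept []string, exceeded bool) {
	sc := bufio.NewScanner(strings.NewReader(resolvConf))
	for sc.Scan() {
		fields := strings.Fields(sc.Text())
		if len(fields) >= 2 && fields[0] == "nameserver" {
			kept = append(kept, fields[1])
		}
	}
	if len(kept) > maxNameservers {
		kept = kept[:maxNameservers]
		exceeded = true
	}
	return kept, exceeded
}

func main() {
	conf := "nameserver 1.1.1.1\nnameserver 1.0.0.1\nnameserver 8.8.8.8\nnameserver 9.9.9.9\n"
	kept, exceeded := trimNameservers(conf)
	if exceeded {
		// Comparable in spirit to the kubelet's "Nameserver limits exceeded" warning.
		fmt.Printf("nameserver limits exceeded, applied nameserver line is: %s\n", strings.Join(kept, " "))
	}
}
```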
Jan 29 11:32:21.330794 kubelet[1799]: E0129 11:32:21.330711 1799 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Jan 29 11:32:21.334082 containerd[1484]: time="2025-01-29T11:32:21.333999508Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-proxy:v1.30.9\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jan 29 11:32:21.339189 containerd[1484]: time="2025-01-29T11:32:21.339139928Z" level=info msg="stop pulling image registry.k8s.io/kube-proxy:v1.30.9: active requests=0, bytes read=29058337" Jan 29 11:32:21.344724 containerd[1484]: time="2025-01-29T11:32:21.344657946Z" level=info msg="ImageCreate event name:\"sha256:4c369683c359609256b8907f424fc2355f1e7e3eeb7295b1fd8ffc5304f4cede\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jan 29 11:32:21.347779 containerd[1484]: time="2025-01-29T11:32:21.347712995Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-proxy@sha256:d78dc40d97ff862fd8ddb47f80a5ba3feec17bc73e58a60e963885e33faa0083\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jan 29 11:32:21.348393 containerd[1484]: time="2025-01-29T11:32:21.348331575Z" level=info msg="Pulled image \"registry.k8s.io/kube-proxy:v1.30.9\" with image id \"sha256:4c369683c359609256b8907f424fc2355f1e7e3eeb7295b1fd8ffc5304f4cede\", repo tag \"registry.k8s.io/kube-proxy:v1.30.9\", repo digest \"registry.k8s.io/kube-proxy@sha256:d78dc40d97ff862fd8ddb47f80a5ba3feec17bc73e58a60e963885e33faa0083\", size \"29057356\" in 1.866169984s" Jan 29 11:32:21.348545 containerd[1484]: time="2025-01-29T11:32:21.348395846Z" level=info msg="PullImage \"registry.k8s.io/kube-proxy:v1.30.9\" returns image reference \"sha256:4c369683c359609256b8907f424fc2355f1e7e3eeb7295b1fd8ffc5304f4cede\"" Jan 29 11:32:21.349812 containerd[1484]: time="2025-01-29T11:32:21.349521276Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/pod2daemon-flexvol:v3.29.1\"" Jan 29 11:32:21.351199 containerd[1484]: time="2025-01-29T11:32:21.351164318Z" level=info msg="CreateContainer within sandbox \"01407be4dcb53d7ab2da775d628b9be0e750963f9d1b6c9aba072e02beee02fb\" for container &ContainerMetadata{Name:kube-proxy,Attempt:0,}" Jan 29 11:32:21.369569 containerd[1484]: time="2025-01-29T11:32:21.369520773Z" level=info msg="CreateContainer within sandbox \"01407be4dcb53d7ab2da775d628b9be0e750963f9d1b6c9aba072e02beee02fb\" for &ContainerMetadata{Name:kube-proxy,Attempt:0,} returns container id \"e1452b30b0310d9e073a631a605603f05900583e8fd95d753c2abc6a9df38ffc\"" Jan 29 11:32:21.370166 containerd[1484]: time="2025-01-29T11:32:21.370138591Z" level=info msg="StartContainer for \"e1452b30b0310d9e073a631a605603f05900583e8fd95d753c2abc6a9df38ffc\"" Jan 29 11:32:21.410891 systemd[1]: Started cri-containerd-e1452b30b0310d9e073a631a605603f05900583e8fd95d753c2abc6a9df38ffc.scope - libcontainer container e1452b30b0310d9e073a631a605603f05900583e8fd95d753c2abc6a9df38ffc. 
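As a quick sanity check on the kube-proxy pull recorded above (a derived figure, not something the log itself reports): dividing the logged image size by the reported pull duration gives an effective transfer rate of roughly 15.6 MB/s.

```go
package main

import "fmt"

func main() {
	// Figures copied from the kube-proxy pull above: logged image size in
	// bytes and the reported pull duration in seconds.
	const sizeBytes = 29057356.0
	const pullSeconds = 1.866169984

	mbPerSec := sizeBytes / pullSeconds / 1e6
	fmt.Printf("effective pull rate: %.1f MB/s\n", mbPerSec) // roughly 15.6 MB/s
}
```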
Jan 29 11:32:21.449139 containerd[1484]: time="2025-01-29T11:32:21.449095403Z" level=info msg="StartContainer for \"e1452b30b0310d9e073a631a605603f05900583e8fd95d753c2abc6a9df38ffc\" returns successfully" Jan 29 11:32:21.794872 kubelet[1799]: E0129 11:32:21.794518 1799 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" pod="calico-system/csi-node-driver-7rbs5" podUID="168d8980-ba5c-4483-9146-b7dc7884186d" Jan 29 11:32:21.806417 kubelet[1799]: E0129 11:32:21.806366 1799 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jan 29 11:32:21.817691 kubelet[1799]: I0129 11:32:21.817596 1799 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/kube-proxy-f6jjd" podStartSLOduration=2.949893095 podStartE2EDuration="4.817576217s" podCreationTimestamp="2025-01-29 11:32:17 +0000 UTC" firstStartedPulling="2025-01-29 11:32:19.481711026 +0000 UTC m=+2.554050247" lastFinishedPulling="2025-01-29 11:32:21.349394148 +0000 UTC m=+4.421733369" observedRunningTime="2025-01-29 11:32:21.817202596 +0000 UTC m=+4.889541817" watchObservedRunningTime="2025-01-29 11:32:21.817576217 +0000 UTC m=+4.889915438" Jan 29 11:32:21.866907 kubelet[1799]: E0129 11:32:21.866866 1799 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 29 11:32:21.866907 kubelet[1799]: W0129 11:32:21.866890 1799 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 29 11:32:21.866907 kubelet[1799]: E0129 11:32:21.866910 1799 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jan 29 11:32:21.867120 kubelet[1799]: E0129 11:32:21.867106 1799 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 29 11:32:21.867120 kubelet[1799]: W0129 11:32:21.867115 1799 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 29 11:32:21.867199 kubelet[1799]: E0129 11:32:21.867123 1799 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jan 29 11:32:21.867322 kubelet[1799]: E0129 11:32:21.867304 1799 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 29 11:32:21.867322 kubelet[1799]: W0129 11:32:21.867315 1799 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 29 11:32:21.867322 kubelet[1799]: E0129 11:32:21.867323 1799 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Jan 29 11:32:21.867622 kubelet[1799]: E0129 11:32:21.867597 1799 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 29 11:32:21.867622 kubelet[1799]: W0129 11:32:21.867611 1799 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 29 11:32:21.867622 kubelet[1799]: E0129 11:32:21.867621 1799 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jan 29 11:32:21.867935 kubelet[1799]: E0129 11:32:21.867920 1799 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 29 11:32:21.867935 kubelet[1799]: W0129 11:32:21.867930 1799 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 29 11:32:21.868007 kubelet[1799]: E0129 11:32:21.867939 1799 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jan 29 11:32:21.868147 kubelet[1799]: E0129 11:32:21.868132 1799 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 29 11:32:21.868147 kubelet[1799]: W0129 11:32:21.868141 1799 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 29 11:32:21.868147 kubelet[1799]: E0129 11:32:21.868148 1799 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jan 29 11:32:21.868330 kubelet[1799]: E0129 11:32:21.868316 1799 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 29 11:32:21.868330 kubelet[1799]: W0129 11:32:21.868325 1799 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 29 11:32:21.868330 kubelet[1799]: E0129 11:32:21.868332 1799 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jan 29 11:32:21.868514 kubelet[1799]: E0129 11:32:21.868500 1799 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 29 11:32:21.868514 kubelet[1799]: W0129 11:32:21.868509 1799 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 29 11:32:21.868587 kubelet[1799]: E0129 11:32:21.868517 1799 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Jan 29 11:32:21.868732 kubelet[1799]: E0129 11:32:21.868718 1799 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 29 11:32:21.868732 kubelet[1799]: W0129 11:32:21.868728 1799 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 29 11:32:21.868808 kubelet[1799]: E0129 11:32:21.868736 1799 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jan 29 11:32:21.868943 kubelet[1799]: E0129 11:32:21.868915 1799 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 29 11:32:21.868943 kubelet[1799]: W0129 11:32:21.868926 1799 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 29 11:32:21.868943 kubelet[1799]: E0129 11:32:21.868934 1799 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jan 29 11:32:21.869128 kubelet[1799]: E0129 11:32:21.869113 1799 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 29 11:32:21.869128 kubelet[1799]: W0129 11:32:21.869119 1799 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 29 11:32:21.869128 kubelet[1799]: E0129 11:32:21.869127 1799 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jan 29 11:32:21.869313 kubelet[1799]: E0129 11:32:21.869289 1799 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 29 11:32:21.869313 kubelet[1799]: W0129 11:32:21.869301 1799 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 29 11:32:21.869313 kubelet[1799]: E0129 11:32:21.869308 1799 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jan 29 11:32:21.869499 kubelet[1799]: E0129 11:32:21.869485 1799 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 29 11:32:21.869499 kubelet[1799]: W0129 11:32:21.869495 1799 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 29 11:32:21.869541 kubelet[1799]: E0129 11:32:21.869502 1799 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Jan 29 11:32:21.869719 kubelet[1799]: E0129 11:32:21.869694 1799 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 29 11:32:21.869719 kubelet[1799]: W0129 11:32:21.869716 1799 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 29 11:32:21.869801 kubelet[1799]: E0129 11:32:21.869725 1799 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jan 29 11:32:21.869941 kubelet[1799]: E0129 11:32:21.869910 1799 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 29 11:32:21.869941 kubelet[1799]: W0129 11:32:21.869921 1799 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 29 11:32:21.869941 kubelet[1799]: E0129 11:32:21.869928 1799 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jan 29 11:32:21.870105 kubelet[1799]: E0129 11:32:21.870088 1799 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 29 11:32:21.870105 kubelet[1799]: W0129 11:32:21.870098 1799 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 29 11:32:21.870105 kubelet[1799]: E0129 11:32:21.870105 1799 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jan 29 11:32:21.870314 kubelet[1799]: E0129 11:32:21.870298 1799 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 29 11:32:21.870314 kubelet[1799]: W0129 11:32:21.870309 1799 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 29 11:32:21.870366 kubelet[1799]: E0129 11:32:21.870317 1799 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jan 29 11:32:21.870491 kubelet[1799]: E0129 11:32:21.870474 1799 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 29 11:32:21.870491 kubelet[1799]: W0129 11:32:21.870484 1799 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 29 11:32:21.870554 kubelet[1799]: E0129 11:32:21.870497 1799 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Jan 29 11:32:21.870673 kubelet[1799]: E0129 11:32:21.870658 1799 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 29 11:32:21.870673 kubelet[1799]: W0129 11:32:21.870669 1799 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 29 11:32:21.870723 kubelet[1799]: E0129 11:32:21.870676 1799 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jan 29 11:32:21.870877 kubelet[1799]: E0129 11:32:21.870863 1799 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 29 11:32:21.870877 kubelet[1799]: W0129 11:32:21.870873 1799 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 29 11:32:21.870920 kubelet[1799]: E0129 11:32:21.870881 1799 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jan 29 11:32:21.871105 kubelet[1799]: E0129 11:32:21.871087 1799 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 29 11:32:21.871105 kubelet[1799]: W0129 11:32:21.871098 1799 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 29 11:32:21.871105 kubelet[1799]: E0129 11:32:21.871105 1799 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jan 29 11:32:21.871323 kubelet[1799]: E0129 11:32:21.871302 1799 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 29 11:32:21.871323 kubelet[1799]: W0129 11:32:21.871313 1799 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 29 11:32:21.871323 kubelet[1799]: E0129 11:32:21.871324 1799 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jan 29 11:32:21.871533 kubelet[1799]: E0129 11:32:21.871517 1799 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 29 11:32:21.871533 kubelet[1799]: W0129 11:32:21.871528 1799 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 29 11:32:21.871588 kubelet[1799]: E0129 11:32:21.871539 1799 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Jan 29 11:32:21.871778 kubelet[1799]: E0129 11:32:21.871747 1799 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 29 11:32:21.871778 kubelet[1799]: W0129 11:32:21.871772 1799 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 29 11:32:21.871861 kubelet[1799]: E0129 11:32:21.871785 1799 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jan 29 11:32:21.872005 kubelet[1799]: E0129 11:32:21.871986 1799 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 29 11:32:21.872005 kubelet[1799]: W0129 11:32:21.872000 1799 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 29 11:32:21.872079 kubelet[1799]: E0129 11:32:21.872018 1799 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jan 29 11:32:21.872258 kubelet[1799]: E0129 11:32:21.872240 1799 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 29 11:32:21.872258 kubelet[1799]: W0129 11:32:21.872251 1799 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 29 11:32:21.872345 kubelet[1799]: E0129 11:32:21.872267 1799 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jan 29 11:32:21.872548 kubelet[1799]: E0129 11:32:21.872530 1799 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 29 11:32:21.872548 kubelet[1799]: W0129 11:32:21.872542 1799 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 29 11:32:21.872625 kubelet[1799]: E0129 11:32:21.872555 1799 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jan 29 11:32:21.872818 kubelet[1799]: E0129 11:32:21.872803 1799 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 29 11:32:21.872818 kubelet[1799]: W0129 11:32:21.872813 1799 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 29 11:32:21.872882 kubelet[1799]: E0129 11:32:21.872824 1799 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Jan 29 11:32:21.873042 kubelet[1799]: E0129 11:32:21.873029 1799 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 29 11:32:21.873042 kubelet[1799]: W0129 11:32:21.873038 1799 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 29 11:32:21.873099 kubelet[1799]: E0129 11:32:21.873049 1799 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jan 29 11:32:21.873271 kubelet[1799]: E0129 11:32:21.873258 1799 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 29 11:32:21.873271 kubelet[1799]: W0129 11:32:21.873267 1799 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 29 11:32:21.873337 kubelet[1799]: E0129 11:32:21.873278 1799 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jan 29 11:32:21.873667 kubelet[1799]: E0129 11:32:21.873626 1799 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 29 11:32:21.873667 kubelet[1799]: W0129 11:32:21.873656 1799 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 29 11:32:21.873738 kubelet[1799]: E0129 11:32:21.873685 1799 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jan 29 11:32:21.873965 kubelet[1799]: E0129 11:32:21.873939 1799 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 29 11:32:21.873965 kubelet[1799]: W0129 11:32:21.873952 1799 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 29 11:32:21.873965 kubelet[1799]: E0129 11:32:21.873960 1799 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Jan 29 11:32:22.331800 kubelet[1799]: E0129 11:32:22.331748 1799 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Jan 29 11:32:22.808015 kubelet[1799]: E0129 11:32:22.807874 1799 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jan 29 11:32:22.876506 kubelet[1799]: E0129 11:32:22.876426 1799 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 29 11:32:22.876506 kubelet[1799]: W0129 11:32:22.876461 1799 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 29 11:32:22.876506 kubelet[1799]: E0129 11:32:22.876487 1799 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jan 29 11:32:22.876920 kubelet[1799]: E0129 11:32:22.876875 1799 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 29 11:32:22.876920 kubelet[1799]: W0129 11:32:22.876907 1799 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 29 11:32:22.876984 kubelet[1799]: E0129 11:32:22.876935 1799 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jan 29 11:32:22.877253 kubelet[1799]: E0129 11:32:22.877233 1799 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 29 11:32:22.877253 kubelet[1799]: W0129 11:32:22.877246 1799 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 29 11:32:22.877338 kubelet[1799]: E0129 11:32:22.877257 1799 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jan 29 11:32:22.877471 kubelet[1799]: E0129 11:32:22.877437 1799 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 29 11:32:22.877471 kubelet[1799]: W0129 11:32:22.877458 1799 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 29 11:32:22.877471 kubelet[1799]: E0129 11:32:22.877468 1799 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Jan 29 11:32:22.877725 kubelet[1799]: E0129 11:32:22.877707 1799 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 29 11:32:22.877725 kubelet[1799]: W0129 11:32:22.877718 1799 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 29 11:32:22.877826 kubelet[1799]: E0129 11:32:22.877728 1799 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jan 29 11:32:22.877939 kubelet[1799]: E0129 11:32:22.877923 1799 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 29 11:32:22.877939 kubelet[1799]: W0129 11:32:22.877933 1799 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 29 11:32:22.878008 kubelet[1799]: E0129 11:32:22.877942 1799 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jan 29 11:32:22.878176 kubelet[1799]: E0129 11:32:22.878138 1799 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 29 11:32:22.878176 kubelet[1799]: W0129 11:32:22.878158 1799 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 29 11:32:22.878176 kubelet[1799]: E0129 11:32:22.878170 1799 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jan 29 11:32:22.878383 kubelet[1799]: E0129 11:32:22.878366 1799 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 29 11:32:22.878383 kubelet[1799]: W0129 11:32:22.878376 1799 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 29 11:32:22.878456 kubelet[1799]: E0129 11:32:22.878386 1799 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jan 29 11:32:22.878604 kubelet[1799]: E0129 11:32:22.878587 1799 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 29 11:32:22.878604 kubelet[1799]: W0129 11:32:22.878598 1799 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 29 11:32:22.878687 kubelet[1799]: E0129 11:32:22.878607 1799 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Jan 29 11:32:22.878832 kubelet[1799]: E0129 11:32:22.878815 1799 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 29 11:32:22.878832 kubelet[1799]: W0129 11:32:22.878826 1799 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 29 11:32:22.878911 kubelet[1799]: E0129 11:32:22.878836 1799 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jan 29 11:32:22.879098 kubelet[1799]: E0129 11:32:22.879081 1799 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 29 11:32:22.879098 kubelet[1799]: W0129 11:32:22.879092 1799 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 29 11:32:22.879169 kubelet[1799]: E0129 11:32:22.879102 1799 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jan 29 11:32:22.879297 kubelet[1799]: E0129 11:32:22.879280 1799 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 29 11:32:22.879297 kubelet[1799]: W0129 11:32:22.879291 1799 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 29 11:32:22.879368 kubelet[1799]: E0129 11:32:22.879300 1799 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jan 29 11:32:22.879515 kubelet[1799]: E0129 11:32:22.879498 1799 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 29 11:32:22.879515 kubelet[1799]: W0129 11:32:22.879509 1799 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 29 11:32:22.879588 kubelet[1799]: E0129 11:32:22.879519 1799 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jan 29 11:32:22.879738 kubelet[1799]: E0129 11:32:22.879721 1799 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 29 11:32:22.879738 kubelet[1799]: W0129 11:32:22.879734 1799 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 29 11:32:22.879826 kubelet[1799]: E0129 11:32:22.879744 1799 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Jan 29 11:32:22.880001 kubelet[1799]: E0129 11:32:22.879982 1799 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 29 11:32:22.880001 kubelet[1799]: W0129 11:32:22.879993 1799 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 29 11:32:22.880001 kubelet[1799]: E0129 11:32:22.880002 1799 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jan 29 11:32:22.880208 kubelet[1799]: E0129 11:32:22.880192 1799 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 29 11:32:22.880208 kubelet[1799]: W0129 11:32:22.880202 1799 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 29 11:32:22.880289 kubelet[1799]: E0129 11:32:22.880213 1799 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jan 29 11:32:22.880461 kubelet[1799]: E0129 11:32:22.880441 1799 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 29 11:32:22.880461 kubelet[1799]: W0129 11:32:22.880452 1799 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 29 11:32:22.880540 kubelet[1799]: E0129 11:32:22.880463 1799 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jan 29 11:32:22.880664 kubelet[1799]: E0129 11:32:22.880648 1799 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 29 11:32:22.880664 kubelet[1799]: W0129 11:32:22.880658 1799 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 29 11:32:22.880772 kubelet[1799]: E0129 11:32:22.880667 1799 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jan 29 11:32:22.880896 kubelet[1799]: E0129 11:32:22.880878 1799 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 29 11:32:22.880896 kubelet[1799]: W0129 11:32:22.880891 1799 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 29 11:32:22.880978 kubelet[1799]: E0129 11:32:22.880900 1799 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Jan 29 11:32:22.881087 kubelet[1799]: E0129 11:32:22.881071 1799 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 29 11:32:22.881087 kubelet[1799]: W0129 11:32:22.881081 1799 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 29 11:32:22.881160 kubelet[1799]: E0129 11:32:22.881090 1799 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jan 29 11:32:22.881366 kubelet[1799]: E0129 11:32:22.881349 1799 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 29 11:32:22.881366 kubelet[1799]: W0129 11:32:22.881360 1799 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 29 11:32:22.881430 kubelet[1799]: E0129 11:32:22.881369 1799 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jan 29 11:32:22.881630 kubelet[1799]: E0129 11:32:22.881614 1799 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 29 11:32:22.881682 kubelet[1799]: W0129 11:32:22.881642 1799 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 29 11:32:22.881682 kubelet[1799]: E0129 11:32:22.881658 1799 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jan 29 11:32:22.881947 kubelet[1799]: E0129 11:32:22.881926 1799 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 29 11:32:22.881947 kubelet[1799]: W0129 11:32:22.881944 1799 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 29 11:32:22.882017 kubelet[1799]: E0129 11:32:22.881964 1799 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jan 29 11:32:22.882224 kubelet[1799]: E0129 11:32:22.882207 1799 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 29 11:32:22.882224 kubelet[1799]: W0129 11:32:22.882220 1799 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 29 11:32:22.882311 kubelet[1799]: E0129 11:32:22.882236 1799 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Jan 29 11:32:22.882471 kubelet[1799]: E0129 11:32:22.882453 1799 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 29 11:32:22.882471 kubelet[1799]: W0129 11:32:22.882469 1799 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 29 11:32:22.882537 kubelet[1799]: E0129 11:32:22.882485 1799 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jan 29 11:32:22.882720 kubelet[1799]: E0129 11:32:22.882703 1799 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 29 11:32:22.882720 kubelet[1799]: W0129 11:32:22.882716 1799 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 29 11:32:22.882819 kubelet[1799]: E0129 11:32:22.882732 1799 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jan 29 11:32:22.883070 kubelet[1799]: E0129 11:32:22.883049 1799 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 29 11:32:22.883070 kubelet[1799]: W0129 11:32:22.883066 1799 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 29 11:32:22.883144 kubelet[1799]: E0129 11:32:22.883084 1799 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jan 29 11:32:22.883438 kubelet[1799]: E0129 11:32:22.883420 1799 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 29 11:32:22.883438 kubelet[1799]: W0129 11:32:22.883434 1799 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 29 11:32:22.883529 kubelet[1799]: E0129 11:32:22.883452 1799 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jan 29 11:32:22.883697 kubelet[1799]: E0129 11:32:22.883672 1799 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 29 11:32:22.883697 kubelet[1799]: W0129 11:32:22.883695 1799 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 29 11:32:22.883786 kubelet[1799]: E0129 11:32:22.883710 1799 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Jan 29 11:32:22.884002 kubelet[1799]: E0129 11:32:22.883985 1799 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 29 11:32:22.884002 kubelet[1799]: W0129 11:32:22.883997 1799 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 29 11:32:22.884075 kubelet[1799]: E0129 11:32:22.884014 1799 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jan 29 11:32:22.884408 kubelet[1799]: E0129 11:32:22.884382 1799 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 29 11:32:22.884408 kubelet[1799]: W0129 11:32:22.884400 1799 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 29 11:32:22.884479 kubelet[1799]: E0129 11:32:22.884417 1799 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jan 29 11:32:22.884714 kubelet[1799]: E0129 11:32:22.884671 1799 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 29 11:32:22.884714 kubelet[1799]: W0129 11:32:22.884696 1799 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 29 11:32:22.884714 kubelet[1799]: E0129 11:32:22.884708 1799 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jan 29 11:32:23.147974 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount553868681.mount: Deactivated successfully. 
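The pod_startup_latency_tracker entry for kube-proxy-f6jjd a few lines above is internally consistent: podStartE2EDuration equals watchObservedRunningTime minus podCreationTimestamp (about 4.818 s), and podStartSLOduration is that figure minus the image-pull window (lastFinishedPulling minus firstStartedPulling, about 1.868 s), leaving about 2.950 s. The small Go check below reproduces the arithmetic from the logged timestamps; the decomposition is inferred from the values themselves, not taken from kubelet source:

```go
package main

import (
	"fmt"
	"time"
)

// layout matches Go's default time.Time formatting used in the log
// (the "m=+..." monotonic suffix from the log lines is dropped here).
const layout = "2006-01-02 15:04:05.999999999 -0700 MST"

func mustParse(s string) time.Time {
	t, err := time.Parse(layout, s)
	if err != nil {
		panic(err)
	}
	return t
}

func main() {
	// Timestamps copied from the pod_startup_latency_tracker entry above.
	created := mustParse("2025-01-29 11:32:17 +0000 UTC")
	firstPull := mustParse("2025-01-29 11:32:19.481711026 +0000 UTC")
	lastPull := mustParse("2025-01-29 11:32:21.349394148 +0000 UTC")
	observedRunning := mustParse("2025-01-29 11:32:21.817576217 +0000 UTC")

	e2e := observedRunning.Sub(created) // podStartE2EDuration, about 4.817576217s
	pull := lastPull.Sub(firstPull)     // image-pull window, about 1.867683122s
	slo := e2e - pull                   // podStartSLOduration, about 2.949893095s

	fmt.Printf("e2e=%v pull=%v slo=%v\n", e2e, pull, slo)
}
```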
Jan 29 11:32:23.332365 kubelet[1799]: E0129 11:32:23.332189 1799 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Jan 29 11:32:23.742893 containerd[1484]: time="2025-01-29T11:32:23.742796582Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/pod2daemon-flexvol:v3.29.1\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jan 29 11:32:23.755275 containerd[1484]: time="2025-01-29T11:32:23.755217626Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/pod2daemon-flexvol:v3.29.1: active requests=0, bytes read=6855343" Jan 29 11:32:23.787097 containerd[1484]: time="2025-01-29T11:32:23.787049452Z" level=info msg="ImageCreate event name:\"sha256:2b7452b763ec8833ca0386ada5fd066e552a9b3b02b8538a5e34cc3d6d3840a6\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jan 29 11:32:23.795462 kubelet[1799]: E0129 11:32:23.795404 1799 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" pod="calico-system/csi-node-driver-7rbs5" podUID="168d8980-ba5c-4483-9146-b7dc7884186d" Jan 29 11:32:23.806120 containerd[1484]: time="2025-01-29T11:32:23.806078428Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/pod2daemon-flexvol@sha256:a63f8b4ff531912d12d143664eb263fdbc6cd7b3ff4aa777dfb6e318a090462c\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jan 29 11:32:23.806747 containerd[1484]: time="2025-01-29T11:32:23.806718308Z" level=info msg="Pulled image \"ghcr.io/flatcar/calico/pod2daemon-flexvol:v3.29.1\" with image id \"sha256:2b7452b763ec8833ca0386ada5fd066e552a9b3b02b8538a5e34cc3d6d3840a6\", repo tag \"ghcr.io/flatcar/calico/pod2daemon-flexvol:v3.29.1\", repo digest \"ghcr.io/flatcar/calico/pod2daemon-flexvol@sha256:a63f8b4ff531912d12d143664eb263fdbc6cd7b3ff4aa777dfb6e318a090462c\", size \"6855165\" in 2.457164411s" Jan 29 11:32:23.806797 containerd[1484]: time="2025-01-29T11:32:23.806745569Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/pod2daemon-flexvol:v3.29.1\" returns image reference \"sha256:2b7452b763ec8833ca0386ada5fd066e552a9b3b02b8538a5e34cc3d6d3840a6\"" Jan 29 11:32:23.808722 containerd[1484]: time="2025-01-29T11:32:23.808682522Z" level=info msg="CreateContainer within sandbox \"1ee4676dbe9f9072fb742ee5e317b0c72187a9324d7d478f0b8e63efe2057d82\" for container &ContainerMetadata{Name:flexvol-driver,Attempt:0,}" Jan 29 11:32:24.223079 containerd[1484]: time="2025-01-29T11:32:24.222925695Z" level=info msg="CreateContainer within sandbox \"1ee4676dbe9f9072fb742ee5e317b0c72187a9324d7d478f0b8e63efe2057d82\" for &ContainerMetadata{Name:flexvol-driver,Attempt:0,} returns container id \"29bcf67fcec50edb7d6d2ae3be86dc173eca8f107d9a3cf7cd9228438ebbf92e\"" Jan 29 11:32:24.223949 containerd[1484]: time="2025-01-29T11:32:24.223896225Z" level=info msg="StartContainer for \"29bcf67fcec50edb7d6d2ae3be86dc173eca8f107d9a3cf7cd9228438ebbf92e\"" Jan 29 11:32:24.293917 systemd[1]: Started cri-containerd-29bcf67fcec50edb7d6d2ae3be86dc173eca8f107d9a3cf7cd9228438ebbf92e.scope - libcontainer container 29bcf67fcec50edb7d6d2ae3be86dc173eca8f107d9a3cf7cd9228438ebbf92e. 
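In a typical Calico deployment, the flexvol-driver container started here is what eventually populates the nodeagent~uds FlexVolume directory the kubelet has been failing to probe: it copies a small uds driver binary into the plugin directory mounted from the host. A simplified sketch of that install step (the source path, file layout and helper are assumptions, not Calico's actual code):

```go
package main

import (
	"fmt"
	"io"
	"os"
	"path/filepath"
)

// Paths are assumptions for illustration: srcBinary is where the container
// image would ship its driver, destDir is the kubelet FlexVolume plugin
// directory seen in the probe errors earlier in this log.
const (
	srcBinary = "/usr/local/bin/flexvol/uds"
	destDir   = "/opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds"
)

// installDriver copies the driver binary into the plugin directory and marks
// it executable, which is what lets the kubelet's next probe succeed.
func installDriver() error {
	if err := os.MkdirAll(destDir, 0o755); err != nil {
		return err
	}
	src, err := os.Open(srcBinary)
	if err != nil {
		return err
	}
	defer src.Close()

	dest := filepath.Join(destDir, "uds")
	out, err := os.OpenFile(dest, os.O_CREATE|os.O_WRONLY|os.O_TRUNC, 0o755)
	if err != nil {
		return err
	}
	defer out.Close()

	_, err = io.Copy(out, src)
	return err
}

func main() {
	if err := installDriver(); err != nil {
		fmt.Println("install failed:", err)
		return
	}
	fmt.Println("flexvolume driver installed")
}
```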
Jan 29 11:32:24.334968 kubelet[1799]: E0129 11:32:24.334883 1799 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Jan 29 11:32:24.357901 systemd[1]: cri-containerd-29bcf67fcec50edb7d6d2ae3be86dc173eca8f107d9a3cf7cd9228438ebbf92e.scope: Deactivated successfully. Jan 29 11:32:24.467561 containerd[1484]: time="2025-01-29T11:32:24.467490231Z" level=info msg="StartContainer for \"29bcf67fcec50edb7d6d2ae3be86dc173eca8f107d9a3cf7cd9228438ebbf92e\" returns successfully" Jan 29 11:32:24.486053 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-29bcf67fcec50edb7d6d2ae3be86dc173eca8f107d9a3cf7cd9228438ebbf92e-rootfs.mount: Deactivated successfully. Jan 29 11:32:24.811260 kubelet[1799]: E0129 11:32:24.811141 1799 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jan 29 11:32:25.335957 kubelet[1799]: E0129 11:32:25.335906 1799 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Jan 29 11:32:25.661441 containerd[1484]: time="2025-01-29T11:32:25.661266157Z" level=info msg="shim disconnected" id=29bcf67fcec50edb7d6d2ae3be86dc173eca8f107d9a3cf7cd9228438ebbf92e namespace=k8s.io Jan 29 11:32:25.661441 containerd[1484]: time="2025-01-29T11:32:25.661324316Z" level=warning msg="cleaning up after shim disconnected" id=29bcf67fcec50edb7d6d2ae3be86dc173eca8f107d9a3cf7cd9228438ebbf92e namespace=k8s.io Jan 29 11:32:25.661441 containerd[1484]: time="2025-01-29T11:32:25.661333974Z" level=info msg="cleaning up dead shim" namespace=k8s.io Jan 29 11:32:25.795527 kubelet[1799]: E0129 11:32:25.795481 1799 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" pod="calico-system/csi-node-driver-7rbs5" podUID="168d8980-ba5c-4483-9146-b7dc7884186d" Jan 29 11:32:25.813558 kubelet[1799]: E0129 11:32:25.813522 1799 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jan 29 11:32:25.814166 containerd[1484]: time="2025-01-29T11:32:25.814124872Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/cni:v3.29.1\"" Jan 29 11:32:26.337068 kubelet[1799]: E0129 11:32:26.336997 1799 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Jan 29 11:32:27.338214 kubelet[1799]: E0129 11:32:27.338148 1799 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Jan 29 11:32:27.795155 kubelet[1799]: E0129 11:32:27.795009 1799 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" pod="calico-system/csi-node-driver-7rbs5" podUID="168d8980-ba5c-4483-9146-b7dc7884186d" Jan 29 11:32:28.338345 kubelet[1799]: E0129 11:32:28.338295 1799 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Jan 29 11:32:29.339050 kubelet[1799]: E0129 11:32:29.338958 1799 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" 
path="/etc/kubernetes/manifests" Jan 29 11:32:29.949845 kubelet[1799]: E0129 11:32:29.949000 1799 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" pod="calico-system/csi-node-driver-7rbs5" podUID="168d8980-ba5c-4483-9146-b7dc7884186d" Jan 29 11:32:30.341830 kubelet[1799]: E0129 11:32:30.341631 1799 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Jan 29 11:32:30.998630 containerd[1484]: time="2025-01-29T11:32:30.998569857Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/cni:v3.29.1\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jan 29 11:32:30.999406 containerd[1484]: time="2025-01-29T11:32:30.999367473Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/cni:v3.29.1: active requests=0, bytes read=96154154" Jan 29 11:32:31.000503 containerd[1484]: time="2025-01-29T11:32:31.000467846Z" level=info msg="ImageCreate event name:\"sha256:7dd6ea186aba0d7a1791a79d426fe854527ca95192b26bbd19e8baf8373f7d0e\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jan 29 11:32:31.003021 containerd[1484]: time="2025-01-29T11:32:31.002978004Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/cni@sha256:21e759d51c90dfb34fc1397dc180dd3a3fb564c2b0580d2f61ffe108f2a3c94b\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jan 29 11:32:31.003635 containerd[1484]: time="2025-01-29T11:32:31.003604859Z" level=info msg="Pulled image \"ghcr.io/flatcar/calico/cni:v3.29.1\" with image id \"sha256:7dd6ea186aba0d7a1791a79d426fe854527ca95192b26bbd19e8baf8373f7d0e\", repo tag \"ghcr.io/flatcar/calico/cni:v3.29.1\", repo digest \"ghcr.io/flatcar/calico/cni@sha256:21e759d51c90dfb34fc1397dc180dd3a3fb564c2b0580d2f61ffe108f2a3c94b\", size \"97647238\" in 5.189440373s" Jan 29 11:32:31.003677 containerd[1484]: time="2025-01-29T11:32:31.003633082Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/cni:v3.29.1\" returns image reference \"sha256:7dd6ea186aba0d7a1791a79d426fe854527ca95192b26bbd19e8baf8373f7d0e\"" Jan 29 11:32:31.005910 containerd[1484]: time="2025-01-29T11:32:31.005883753Z" level=info msg="CreateContainer within sandbox \"1ee4676dbe9f9072fb742ee5e317b0c72187a9324d7d478f0b8e63efe2057d82\" for container &ContainerMetadata{Name:install-cni,Attempt:0,}" Jan 29 11:32:31.021348 containerd[1484]: time="2025-01-29T11:32:31.021251904Z" level=info msg="CreateContainer within sandbox \"1ee4676dbe9f9072fb742ee5e317b0c72187a9324d7d478f0b8e63efe2057d82\" for &ContainerMetadata{Name:install-cni,Attempt:0,} returns container id \"4586563fb84fa3cd390cf5559223a13eab68e91b74318d544977d9cad5eb2fd2\"" Jan 29 11:32:31.021638 containerd[1484]: time="2025-01-29T11:32:31.021601559Z" level=info msg="StartContainer for \"4586563fb84fa3cd390cf5559223a13eab68e91b74318d544977d9cad5eb2fd2\"" Jan 29 11:32:31.057807 systemd[1]: run-containerd-runc-k8s.io-4586563fb84fa3cd390cf5559223a13eab68e91b74318d544977d9cad5eb2fd2-runc.l0M7k5.mount: Deactivated successfully. Jan 29 11:32:31.076890 systemd[1]: Started cri-containerd-4586563fb84fa3cd390cf5559223a13eab68e91b74318d544977d9cad5eb2fd2.scope - libcontainer container 4586563fb84fa3cd390cf5559223a13eab68e91b74318d544977d9cad5eb2fd2. 
Jan 29 11:32:31.342730 kubelet[1799]: E0129 11:32:31.342550 1799 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Jan 29 11:32:31.352658 containerd[1484]: time="2025-01-29T11:32:31.352615439Z" level=info msg="StartContainer for \"4586563fb84fa3cd390cf5559223a13eab68e91b74318d544977d9cad5eb2fd2\" returns successfully" Jan 29 11:32:31.794692 kubelet[1799]: E0129 11:32:31.794622 1799 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" pod="calico-system/csi-node-driver-7rbs5" podUID="168d8980-ba5c-4483-9146-b7dc7884186d" Jan 29 11:32:31.964056 kubelet[1799]: E0129 11:32:31.964011 1799 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jan 29 11:32:32.343341 kubelet[1799]: E0129 11:32:32.343272 1799 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Jan 29 11:32:32.965437 kubelet[1799]: E0129 11:32:32.965375 1799 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jan 29 11:32:33.344638 kubelet[1799]: E0129 11:32:33.344461 1799 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Jan 29 11:32:33.657454 containerd[1484]: time="2025-01-29T11:32:33.657193906Z" level=error msg="failed to reload cni configuration after receiving fs change event(WRITE \"/etc/cni/net.d/calico-kubeconfig\")" error="cni config load failed: no network config found in /etc/cni/net.d: cni plugin not initialized: failed to load cni config" Jan 29 11:32:33.660922 systemd[1]: cri-containerd-4586563fb84fa3cd390cf5559223a13eab68e91b74318d544977d9cad5eb2fd2.scope: Deactivated successfully. Jan 29 11:32:33.661874 systemd[1]: cri-containerd-4586563fb84fa3cd390cf5559223a13eab68e91b74318d544977d9cad5eb2fd2.scope: Consumed 1.399s CPU time. Jan 29 11:32:33.671647 kubelet[1799]: I0129 11:32:33.671451 1799 kubelet_node_status.go:497] "Fast updating node status as it just became ready" Jan 29 11:32:33.690763 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-4586563fb84fa3cd390cf5559223a13eab68e91b74318d544977d9cad5eb2fd2-rootfs.mount: Deactivated successfully. Jan 29 11:32:33.805646 systemd[1]: Created slice kubepods-besteffort-pod168d8980_ba5c_4483_9146_b7dc7884186d.slice - libcontainer container kubepods-besteffort-pod168d8980_ba5c_4483_9146_b7dc7884186d.slice. 
Jan 29 11:32:33.810232 containerd[1484]: time="2025-01-29T11:32:33.809875969Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:csi-node-driver-7rbs5,Uid:168d8980-ba5c-4483-9146-b7dc7884186d,Namespace:calico-system,Attempt:0,}" Jan 29 11:32:34.345467 kubelet[1799]: E0129 11:32:34.345394 1799 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Jan 29 11:32:35.123339 containerd[1484]: time="2025-01-29T11:32:35.123273438Z" level=info msg="shim disconnected" id=4586563fb84fa3cd390cf5559223a13eab68e91b74318d544977d9cad5eb2fd2 namespace=k8s.io Jan 29 11:32:35.123339 containerd[1484]: time="2025-01-29T11:32:35.123327450Z" level=warning msg="cleaning up after shim disconnected" id=4586563fb84fa3cd390cf5559223a13eab68e91b74318d544977d9cad5eb2fd2 namespace=k8s.io Jan 29 11:32:35.123339 containerd[1484]: time="2025-01-29T11:32:35.123336376Z" level=info msg="cleaning up dead shim" namespace=k8s.io Jan 29 11:32:35.238188 kubelet[1799]: I0129 11:32:35.238137 1799 topology_manager.go:215] "Topology Admit Handler" podUID="7c2ca4ff-6c88-4156-83da-a023debbfb5e" podNamespace="default" podName="nginx-deployment-85f456d6dd-bccrr" Jan 29 11:32:35.244916 systemd[1]: Created slice kubepods-besteffort-pod7c2ca4ff_6c88_4156_83da_a023debbfb5e.slice - libcontainer container kubepods-besteffort-pod7c2ca4ff_6c88_4156_83da_a023debbfb5e.slice. Jan 29 11:32:35.270427 containerd[1484]: time="2025-01-29T11:32:35.270330904Z" level=error msg="Failed to destroy network for sandbox \"16f501e638a3ac116c003fcafbce0b8473a1e496ec5313be850130e687ffcd05\"" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jan 29 11:32:35.270869 containerd[1484]: time="2025-01-29T11:32:35.270835911Z" level=error msg="encountered an error cleaning up failed sandbox \"16f501e638a3ac116c003fcafbce0b8473a1e496ec5313be850130e687ffcd05\", marking sandbox state as SANDBOX_UNKNOWN" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jan 29 11:32:35.270949 containerd[1484]: time="2025-01-29T11:32:35.270922413Z" level=error msg="RunPodSandbox for &PodSandboxMetadata{Name:csi-node-driver-7rbs5,Uid:168d8980-ba5c-4483-9146-b7dc7884186d,Namespace:calico-system,Attempt:0,} failed, error" error="failed to setup network for sandbox \"16f501e638a3ac116c003fcafbce0b8473a1e496ec5313be850130e687ffcd05\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jan 29 11:32:35.271225 kubelet[1799]: E0129 11:32:35.271178 1799 remote_runtime.go:193] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"16f501e638a3ac116c003fcafbce0b8473a1e496ec5313be850130e687ffcd05\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jan 29 11:32:35.271311 kubelet[1799]: E0129 11:32:35.271249 1799 kuberuntime_sandbox.go:72] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"16f501e638a3ac116c003fcafbce0b8473a1e496ec5313be850130e687ffcd05\": plugin type=\"calico\" failed (add): stat 
/var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-system/csi-node-driver-7rbs5" Jan 29 11:32:35.271311 kubelet[1799]: E0129 11:32:35.271272 1799 kuberuntime_manager.go:1166] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"16f501e638a3ac116c003fcafbce0b8473a1e496ec5313be850130e687ffcd05\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-system/csi-node-driver-7rbs5" Jan 29 11:32:35.271404 kubelet[1799]: E0129 11:32:35.271311 1799 pod_workers.go:1298] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"csi-node-driver-7rbs5_calico-system(168d8980-ba5c-4483-9146-b7dc7884186d)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"csi-node-driver-7rbs5_calico-system(168d8980-ba5c-4483-9146-b7dc7884186d)\\\": rpc error: code = Unknown desc = failed to setup network for sandbox \\\"16f501e638a3ac116c003fcafbce0b8473a1e496ec5313be850130e687ffcd05\\\": plugin type=\\\"calico\\\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="calico-system/csi-node-driver-7rbs5" podUID="168d8980-ba5c-4483-9146-b7dc7884186d" Jan 29 11:32:35.272388 systemd[1]: run-containerd-io.containerd.grpc.v1.cri-sandboxes-16f501e638a3ac116c003fcafbce0b8473a1e496ec5313be850130e687ffcd05-shm.mount: Deactivated successfully. Jan 29 11:32:35.345834 kubelet[1799]: E0129 11:32:35.345772 1799 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Jan 29 11:32:35.380183 kubelet[1799]: I0129 11:32:35.380029 1799 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-bfbbl\" (UniqueName: \"kubernetes.io/projected/7c2ca4ff-6c88-4156-83da-a023debbfb5e-kube-api-access-bfbbl\") pod \"nginx-deployment-85f456d6dd-bccrr\" (UID: \"7c2ca4ff-6c88-4156-83da-a023debbfb5e\") " pod="default/nginx-deployment-85f456d6dd-bccrr" Jan 29 11:32:35.549586 containerd[1484]: time="2025-01-29T11:32:35.549535983Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:nginx-deployment-85f456d6dd-bccrr,Uid:7c2ca4ff-6c88-4156-83da-a023debbfb5e,Namespace:default,Attempt:0,}" Jan 29 11:32:35.622622 containerd[1484]: time="2025-01-29T11:32:35.622566741Z" level=error msg="Failed to destroy network for sandbox \"d29d91f6c3e4b1c8ae17cab21bff6e4fe9bd910ffc7eb3fdbe04273555fde998\"" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jan 29 11:32:35.622989 containerd[1484]: time="2025-01-29T11:32:35.622968355Z" level=error msg="encountered an error cleaning up failed sandbox \"d29d91f6c3e4b1c8ae17cab21bff6e4fe9bd910ffc7eb3fdbe04273555fde998\", marking sandbox state as SANDBOX_UNKNOWN" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jan 29 11:32:35.623037 containerd[1484]: time="2025-01-29T11:32:35.623022246Z" level=error msg="RunPodSandbox for 
&PodSandboxMetadata{Name:nginx-deployment-85f456d6dd-bccrr,Uid:7c2ca4ff-6c88-4156-83da-a023debbfb5e,Namespace:default,Attempt:0,} failed, error" error="failed to setup network for sandbox \"d29d91f6c3e4b1c8ae17cab21bff6e4fe9bd910ffc7eb3fdbe04273555fde998\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jan 29 11:32:35.623268 kubelet[1799]: E0129 11:32:35.623227 1799 remote_runtime.go:193] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"d29d91f6c3e4b1c8ae17cab21bff6e4fe9bd910ffc7eb3fdbe04273555fde998\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jan 29 11:32:35.623354 kubelet[1799]: E0129 11:32:35.623289 1799 kuberuntime_sandbox.go:72] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"d29d91f6c3e4b1c8ae17cab21bff6e4fe9bd910ffc7eb3fdbe04273555fde998\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="default/nginx-deployment-85f456d6dd-bccrr" Jan 29 11:32:35.623354 kubelet[1799]: E0129 11:32:35.623309 1799 kuberuntime_manager.go:1166] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"d29d91f6c3e4b1c8ae17cab21bff6e4fe9bd910ffc7eb3fdbe04273555fde998\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="default/nginx-deployment-85f456d6dd-bccrr" Jan 29 11:32:35.623469 kubelet[1799]: E0129 11:32:35.623358 1799 pod_workers.go:1298] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"nginx-deployment-85f456d6dd-bccrr_default(7c2ca4ff-6c88-4156-83da-a023debbfb5e)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"nginx-deployment-85f456d6dd-bccrr_default(7c2ca4ff-6c88-4156-83da-a023debbfb5e)\\\": rpc error: code = Unknown desc = failed to setup network for sandbox \\\"d29d91f6c3e4b1c8ae17cab21bff6e4fe9bd910ffc7eb3fdbe04273555fde998\\\": plugin type=\\\"calico\\\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="default/nginx-deployment-85f456d6dd-bccrr" podUID="7c2ca4ff-6c88-4156-83da-a023debbfb5e" Jan 29 11:32:35.973229 kubelet[1799]: E0129 11:32:35.973160 1799 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jan 29 11:32:35.974624 kubelet[1799]: I0129 11:32:35.973771 1799 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="d29d91f6c3e4b1c8ae17cab21bff6e4fe9bd910ffc7eb3fdbe04273555fde998" Jan 29 11:32:35.974686 containerd[1484]: time="2025-01-29T11:32:35.973977682Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/node:v3.29.1\"" Jan 29 11:32:35.974686 containerd[1484]: time="2025-01-29T11:32:35.974262657Z" level=info msg="StopPodSandbox for \"d29d91f6c3e4b1c8ae17cab21bff6e4fe9bd910ffc7eb3fdbe04273555fde998\"" Jan 29 11:32:35.974686 containerd[1484]: time="2025-01-29T11:32:35.974471388Z" 
level=info msg="Ensure that sandbox d29d91f6c3e4b1c8ae17cab21bff6e4fe9bd910ffc7eb3fdbe04273555fde998 in task-service has been cleanup successfully" Jan 29 11:32:35.974934 containerd[1484]: time="2025-01-29T11:32:35.974911253Z" level=info msg="TearDown network for sandbox \"d29d91f6c3e4b1c8ae17cab21bff6e4fe9bd910ffc7eb3fdbe04273555fde998\" successfully" Jan 29 11:32:35.974934 containerd[1484]: time="2025-01-29T11:32:35.974931000Z" level=info msg="StopPodSandbox for \"d29d91f6c3e4b1c8ae17cab21bff6e4fe9bd910ffc7eb3fdbe04273555fde998\" returns successfully" Jan 29 11:32:35.975235 kubelet[1799]: I0129 11:32:35.975201 1799 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="16f501e638a3ac116c003fcafbce0b8473a1e496ec5313be850130e687ffcd05" Jan 29 11:32:35.975588 containerd[1484]: time="2025-01-29T11:32:35.975561873Z" level=info msg="StopPodSandbox for \"16f501e638a3ac116c003fcafbce0b8473a1e496ec5313be850130e687ffcd05\"" Jan 29 11:32:35.975687 containerd[1484]: time="2025-01-29T11:32:35.975662582Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:nginx-deployment-85f456d6dd-bccrr,Uid:7c2ca4ff-6c88-4156-83da-a023debbfb5e,Namespace:default,Attempt:1,}" Jan 29 11:32:35.975767 containerd[1484]: time="2025-01-29T11:32:35.975719539Z" level=info msg="Ensure that sandbox 16f501e638a3ac116c003fcafbce0b8473a1e496ec5313be850130e687ffcd05 in task-service has been cleanup successfully" Jan 29 11:32:35.975938 containerd[1484]: time="2025-01-29T11:32:35.975907852Z" level=info msg="TearDown network for sandbox \"16f501e638a3ac116c003fcafbce0b8473a1e496ec5313be850130e687ffcd05\" successfully" Jan 29 11:32:35.975938 containerd[1484]: time="2025-01-29T11:32:35.975928250Z" level=info msg="StopPodSandbox for \"16f501e638a3ac116c003fcafbce0b8473a1e496ec5313be850130e687ffcd05\" returns successfully" Jan 29 11:32:35.976284 containerd[1484]: time="2025-01-29T11:32:35.976259271Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:csi-node-driver-7rbs5,Uid:168d8980-ba5c-4483-9146-b7dc7884186d,Namespace:calico-system,Attempt:1,}" Jan 29 11:32:36.199232 systemd[1]: run-containerd-io.containerd.grpc.v1.cri-sandboxes-d29d91f6c3e4b1c8ae17cab21bff6e4fe9bd910ffc7eb3fdbe04273555fde998-shm.mount: Deactivated successfully. Jan 29 11:32:36.199403 systemd[1]: run-netns-cni\x2dc4a50387\x2dc7b0\x2d8786\x2d1bc9\x2da02d20b3c21d.mount: Deactivated successfully. 
Jan 29 11:32:36.232893 containerd[1484]: time="2025-01-29T11:32:36.232692655Z" level=error msg="Failed to destroy network for sandbox \"412254714bd9d4bf93c35b4635361e829d9373a2ab247e5c41b015d665cda9b8\"" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jan 29 11:32:36.233326 containerd[1484]: time="2025-01-29T11:32:36.233212130Z" level=error msg="encountered an error cleaning up failed sandbox \"412254714bd9d4bf93c35b4635361e829d9373a2ab247e5c41b015d665cda9b8\", marking sandbox state as SANDBOX_UNKNOWN" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jan 29 11:32:36.233326 containerd[1484]: time="2025-01-29T11:32:36.233283073Z" level=error msg="RunPodSandbox for &PodSandboxMetadata{Name:csi-node-driver-7rbs5,Uid:168d8980-ba5c-4483-9146-b7dc7884186d,Namespace:calico-system,Attempt:1,} failed, error" error="failed to setup network for sandbox \"412254714bd9d4bf93c35b4635361e829d9373a2ab247e5c41b015d665cda9b8\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jan 29 11:32:36.233867 kubelet[1799]: E0129 11:32:36.233666 1799 remote_runtime.go:193] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"412254714bd9d4bf93c35b4635361e829d9373a2ab247e5c41b015d665cda9b8\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jan 29 11:32:36.233867 kubelet[1799]: E0129 11:32:36.233773 1799 kuberuntime_sandbox.go:72] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"412254714bd9d4bf93c35b4635361e829d9373a2ab247e5c41b015d665cda9b8\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-system/csi-node-driver-7rbs5" Jan 29 11:32:36.233867 kubelet[1799]: E0129 11:32:36.233802 1799 kuberuntime_manager.go:1166] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"412254714bd9d4bf93c35b4635361e829d9373a2ab247e5c41b015d665cda9b8\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-system/csi-node-driver-7rbs5" Jan 29 11:32:36.233981 kubelet[1799]: E0129 11:32:36.233847 1799 pod_workers.go:1298] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"csi-node-driver-7rbs5_calico-system(168d8980-ba5c-4483-9146-b7dc7884186d)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"csi-node-driver-7rbs5_calico-system(168d8980-ba5c-4483-9146-b7dc7884186d)\\\": rpc error: code = Unknown desc = failed to setup network for sandbox \\\"412254714bd9d4bf93c35b4635361e829d9373a2ab247e5c41b015d665cda9b8\\\": plugin type=\\\"calico\\\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" 
pod="calico-system/csi-node-driver-7rbs5" podUID="168d8980-ba5c-4483-9146-b7dc7884186d" Jan 29 11:32:36.235972 systemd[1]: run-containerd-io.containerd.grpc.v1.cri-sandboxes-412254714bd9d4bf93c35b4635361e829d9373a2ab247e5c41b015d665cda9b8-shm.mount: Deactivated successfully. Jan 29 11:32:36.243811 containerd[1484]: time="2025-01-29T11:32:36.243741115Z" level=error msg="Failed to destroy network for sandbox \"921a137634cbe4a220d4eae738f0f2c5ad00d53c38e31d8425095022f60f272a\"" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jan 29 11:32:36.244244 containerd[1484]: time="2025-01-29T11:32:36.244207109Z" level=error msg="encountered an error cleaning up failed sandbox \"921a137634cbe4a220d4eae738f0f2c5ad00d53c38e31d8425095022f60f272a\", marking sandbox state as SANDBOX_UNKNOWN" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jan 29 11:32:36.244284 containerd[1484]: time="2025-01-29T11:32:36.244271590Z" level=error msg="RunPodSandbox for &PodSandboxMetadata{Name:nginx-deployment-85f456d6dd-bccrr,Uid:7c2ca4ff-6c88-4156-83da-a023debbfb5e,Namespace:default,Attempt:1,} failed, error" error="failed to setup network for sandbox \"921a137634cbe4a220d4eae738f0f2c5ad00d53c38e31d8425095022f60f272a\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jan 29 11:32:36.244537 kubelet[1799]: E0129 11:32:36.244499 1799 remote_runtime.go:193] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"921a137634cbe4a220d4eae738f0f2c5ad00d53c38e31d8425095022f60f272a\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jan 29 11:32:36.244582 kubelet[1799]: E0129 11:32:36.244559 1799 kuberuntime_sandbox.go:72] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"921a137634cbe4a220d4eae738f0f2c5ad00d53c38e31d8425095022f60f272a\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="default/nginx-deployment-85f456d6dd-bccrr" Jan 29 11:32:36.244656 kubelet[1799]: E0129 11:32:36.244584 1799 kuberuntime_manager.go:1166] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"921a137634cbe4a220d4eae738f0f2c5ad00d53c38e31d8425095022f60f272a\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="default/nginx-deployment-85f456d6dd-bccrr" Jan 29 11:32:36.244656 kubelet[1799]: E0129 11:32:36.244626 1799 pod_workers.go:1298] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"nginx-deployment-85f456d6dd-bccrr_default(7c2ca4ff-6c88-4156-83da-a023debbfb5e)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"nginx-deployment-85f456d6dd-bccrr_default(7c2ca4ff-6c88-4156-83da-a023debbfb5e)\\\": rpc error: code = Unknown desc = failed to setup network 
for sandbox \\\"921a137634cbe4a220d4eae738f0f2c5ad00d53c38e31d8425095022f60f272a\\\": plugin type=\\\"calico\\\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="default/nginx-deployment-85f456d6dd-bccrr" podUID="7c2ca4ff-6c88-4156-83da-a023debbfb5e" Jan 29 11:32:36.245841 systemd[1]: run-containerd-io.containerd.grpc.v1.cri-sandboxes-921a137634cbe4a220d4eae738f0f2c5ad00d53c38e31d8425095022f60f272a-shm.mount: Deactivated successfully. Jan 29 11:32:36.346529 kubelet[1799]: E0129 11:32:36.346463 1799 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Jan 29 11:32:36.978138 kubelet[1799]: I0129 11:32:36.978097 1799 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="412254714bd9d4bf93c35b4635361e829d9373a2ab247e5c41b015d665cda9b8" Jan 29 11:32:36.978984 containerd[1484]: time="2025-01-29T11:32:36.978640975Z" level=info msg="StopPodSandbox for \"412254714bd9d4bf93c35b4635361e829d9373a2ab247e5c41b015d665cda9b8\"" Jan 29 11:32:36.978984 containerd[1484]: time="2025-01-29T11:32:36.978858252Z" level=info msg="Ensure that sandbox 412254714bd9d4bf93c35b4635361e829d9373a2ab247e5c41b015d665cda9b8 in task-service has been cleanup successfully" Jan 29 11:32:36.979207 containerd[1484]: time="2025-01-29T11:32:36.979166260Z" level=info msg="TearDown network for sandbox \"412254714bd9d4bf93c35b4635361e829d9373a2ab247e5c41b015d665cda9b8\" successfully" Jan 29 11:32:36.979207 containerd[1484]: time="2025-01-29T11:32:36.979202578Z" level=info msg="StopPodSandbox for \"412254714bd9d4bf93c35b4635361e829d9373a2ab247e5c41b015d665cda9b8\" returns successfully" Jan 29 11:32:36.980892 systemd[1]: run-netns-cni\x2dada7d3a4\x2d8a4a\x2da8d8\x2dd561\x2d02d64becceed.mount: Deactivated successfully. 
Jan 29 11:32:36.981107 containerd[1484]: time="2025-01-29T11:32:36.981045404Z" level=info msg="StopPodSandbox for \"16f501e638a3ac116c003fcafbce0b8473a1e496ec5313be850130e687ffcd05\"" Jan 29 11:32:36.981191 containerd[1484]: time="2025-01-29T11:32:36.981164467Z" level=info msg="TearDown network for sandbox \"16f501e638a3ac116c003fcafbce0b8473a1e496ec5313be850130e687ffcd05\" successfully" Jan 29 11:32:36.981191 containerd[1484]: time="2025-01-29T11:32:36.981184024Z" level=info msg="StopPodSandbox for \"16f501e638a3ac116c003fcafbce0b8473a1e496ec5313be850130e687ffcd05\" returns successfully" Jan 29 11:32:36.981342 kubelet[1799]: I0129 11:32:36.981170 1799 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="921a137634cbe4a220d4eae738f0f2c5ad00d53c38e31d8425095022f60f272a" Jan 29 11:32:36.981634 containerd[1484]: time="2025-01-29T11:32:36.981603440Z" level=info msg="StopPodSandbox for \"921a137634cbe4a220d4eae738f0f2c5ad00d53c38e31d8425095022f60f272a\"" Jan 29 11:32:36.981790 containerd[1484]: time="2025-01-29T11:32:36.981744816Z" level=info msg="Ensure that sandbox 921a137634cbe4a220d4eae738f0f2c5ad00d53c38e31d8425095022f60f272a in task-service has been cleanup successfully" Jan 29 11:32:36.982526 containerd[1484]: time="2025-01-29T11:32:36.982494481Z" level=info msg="TearDown network for sandbox \"921a137634cbe4a220d4eae738f0f2c5ad00d53c38e31d8425095022f60f272a\" successfully" Jan 29 11:32:36.982526 containerd[1484]: time="2025-01-29T11:32:36.982518256Z" level=info msg="StopPodSandbox for \"921a137634cbe4a220d4eae738f0f2c5ad00d53c38e31d8425095022f60f272a\" returns successfully" Jan 29 11:32:36.982673 containerd[1484]: time="2025-01-29T11:32:36.982506264Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:csi-node-driver-7rbs5,Uid:168d8980-ba5c-4483-9146-b7dc7884186d,Namespace:calico-system,Attempt:2,}" Jan 29 11:32:36.983189 containerd[1484]: time="2025-01-29T11:32:36.983153367Z" level=info msg="StopPodSandbox for \"d29d91f6c3e4b1c8ae17cab21bff6e4fe9bd910ffc7eb3fdbe04273555fde998\"" Jan 29 11:32:36.983261 containerd[1484]: time="2025-01-29T11:32:36.983242464Z" level=info msg="TearDown network for sandbox \"d29d91f6c3e4b1c8ae17cab21bff6e4fe9bd910ffc7eb3fdbe04273555fde998\" successfully" Jan 29 11:32:36.983261 containerd[1484]: time="2025-01-29T11:32:36.983252202Z" level=info msg="StopPodSandbox for \"d29d91f6c3e4b1c8ae17cab21bff6e4fe9bd910ffc7eb3fdbe04273555fde998\" returns successfully" Jan 29 11:32:36.983357 systemd[1]: run-netns-cni\x2d3a6af6cf\x2d08e9\x2d5119\x2d6fdf\x2d8ba3e7a258f0.mount: Deactivated successfully. 
Jan 29 11:32:36.983616 containerd[1484]: time="2025-01-29T11:32:36.983583754Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:nginx-deployment-85f456d6dd-bccrr,Uid:7c2ca4ff-6c88-4156-83da-a023debbfb5e,Namespace:default,Attempt:2,}" Jan 29 11:32:37.328235 kubelet[1799]: E0129 11:32:37.328047 1799 file.go:104] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Jan 29 11:32:37.347435 kubelet[1799]: E0129 11:32:37.347396 1799 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Jan 29 11:32:37.457078 containerd[1484]: time="2025-01-29T11:32:37.457017491Z" level=error msg="Failed to destroy network for sandbox \"3dc6f7d7aad1971a0324d00bc76d6e4b98dee66fff128002850bcc9e0b05f74e\"" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jan 29 11:32:37.458361 containerd[1484]: time="2025-01-29T11:32:37.458151388Z" level=error msg="encountered an error cleaning up failed sandbox \"3dc6f7d7aad1971a0324d00bc76d6e4b98dee66fff128002850bcc9e0b05f74e\", marking sandbox state as SANDBOX_UNKNOWN" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jan 29 11:32:37.458361 containerd[1484]: time="2025-01-29T11:32:37.458231999Z" level=error msg="RunPodSandbox for &PodSandboxMetadata{Name:nginx-deployment-85f456d6dd-bccrr,Uid:7c2ca4ff-6c88-4156-83da-a023debbfb5e,Namespace:default,Attempt:2,} failed, error" error="failed to setup network for sandbox \"3dc6f7d7aad1971a0324d00bc76d6e4b98dee66fff128002850bcc9e0b05f74e\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jan 29 11:32:37.458361 containerd[1484]: time="2025-01-29T11:32:37.458318632Z" level=error msg="Failed to destroy network for sandbox \"ca328d2245f1621d4a9b8f56f547e175394c05b8b52cd90a410ce0f9cd82ea75\"" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jan 29 11:32:37.458556 kubelet[1799]: E0129 11:32:37.458519 1799 remote_runtime.go:193] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"3dc6f7d7aad1971a0324d00bc76d6e4b98dee66fff128002850bcc9e0b05f74e\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jan 29 11:32:37.458626 kubelet[1799]: E0129 11:32:37.458583 1799 kuberuntime_sandbox.go:72] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"3dc6f7d7aad1971a0324d00bc76d6e4b98dee66fff128002850bcc9e0b05f74e\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="default/nginx-deployment-85f456d6dd-bccrr" Jan 29 11:32:37.458626 kubelet[1799]: E0129 11:32:37.458604 1799 kuberuntime_manager.go:1166] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox 
\"3dc6f7d7aad1971a0324d00bc76d6e4b98dee66fff128002850bcc9e0b05f74e\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="default/nginx-deployment-85f456d6dd-bccrr" Jan 29 11:32:37.458702 kubelet[1799]: E0129 11:32:37.458644 1799 pod_workers.go:1298] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"nginx-deployment-85f456d6dd-bccrr_default(7c2ca4ff-6c88-4156-83da-a023debbfb5e)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"nginx-deployment-85f456d6dd-bccrr_default(7c2ca4ff-6c88-4156-83da-a023debbfb5e)\\\": rpc error: code = Unknown desc = failed to setup network for sandbox \\\"3dc6f7d7aad1971a0324d00bc76d6e4b98dee66fff128002850bcc9e0b05f74e\\\": plugin type=\\\"calico\\\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="default/nginx-deployment-85f456d6dd-bccrr" podUID="7c2ca4ff-6c88-4156-83da-a023debbfb5e" Jan 29 11:32:37.458802 containerd[1484]: time="2025-01-29T11:32:37.458739290Z" level=error msg="encountered an error cleaning up failed sandbox \"ca328d2245f1621d4a9b8f56f547e175394c05b8b52cd90a410ce0f9cd82ea75\", marking sandbox state as SANDBOX_UNKNOWN" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jan 29 11:32:37.458853 containerd[1484]: time="2025-01-29T11:32:37.458803431Z" level=error msg="RunPodSandbox for &PodSandboxMetadata{Name:csi-node-driver-7rbs5,Uid:168d8980-ba5c-4483-9146-b7dc7884186d,Namespace:calico-system,Attempt:2,} failed, error" error="failed to setup network for sandbox \"ca328d2245f1621d4a9b8f56f547e175394c05b8b52cd90a410ce0f9cd82ea75\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jan 29 11:32:37.458955 kubelet[1799]: E0129 11:32:37.458894 1799 remote_runtime.go:193] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"ca328d2245f1621d4a9b8f56f547e175394c05b8b52cd90a410ce0f9cd82ea75\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jan 29 11:32:37.458955 kubelet[1799]: E0129 11:32:37.458938 1799 kuberuntime_sandbox.go:72] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"ca328d2245f1621d4a9b8f56f547e175394c05b8b52cd90a410ce0f9cd82ea75\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-system/csi-node-driver-7rbs5" Jan 29 11:32:37.459070 kubelet[1799]: E0129 11:32:37.458957 1799 kuberuntime_manager.go:1166] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"ca328d2245f1621d4a9b8f56f547e175394c05b8b52cd90a410ce0f9cd82ea75\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-system/csi-node-driver-7rbs5" Jan 29 11:32:37.459070 
kubelet[1799]: E0129 11:32:37.458982 1799 pod_workers.go:1298] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"csi-node-driver-7rbs5_calico-system(168d8980-ba5c-4483-9146-b7dc7884186d)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"csi-node-driver-7rbs5_calico-system(168d8980-ba5c-4483-9146-b7dc7884186d)\\\": rpc error: code = Unknown desc = failed to setup network for sandbox \\\"ca328d2245f1621d4a9b8f56f547e175394c05b8b52cd90a410ce0f9cd82ea75\\\": plugin type=\\\"calico\\\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="calico-system/csi-node-driver-7rbs5" podUID="168d8980-ba5c-4483-9146-b7dc7884186d" Jan 29 11:32:37.459261 systemd[1]: run-containerd-io.containerd.grpc.v1.cri-sandboxes-3dc6f7d7aad1971a0324d00bc76d6e4b98dee66fff128002850bcc9e0b05f74e-shm.mount: Deactivated successfully. Jan 29 11:32:38.000827 kubelet[1799]: I0129 11:32:38.000777 1799 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="ca328d2245f1621d4a9b8f56f547e175394c05b8b52cd90a410ce0f9cd82ea75" Jan 29 11:32:38.001426 containerd[1484]: time="2025-01-29T11:32:38.001381358Z" level=info msg="StopPodSandbox for \"ca328d2245f1621d4a9b8f56f547e175394c05b8b52cd90a410ce0f9cd82ea75\"" Jan 29 11:32:38.001617 containerd[1484]: time="2025-01-29T11:32:38.001579249Z" level=info msg="Ensure that sandbox ca328d2245f1621d4a9b8f56f547e175394c05b8b52cd90a410ce0f9cd82ea75 in task-service has been cleanup successfully" Jan 29 11:32:38.002697 containerd[1484]: time="2025-01-29T11:32:38.002050242Z" level=info msg="TearDown network for sandbox \"ca328d2245f1621d4a9b8f56f547e175394c05b8b52cd90a410ce0f9cd82ea75\" successfully" Jan 29 11:32:38.002697 containerd[1484]: time="2025-01-29T11:32:38.002069869Z" level=info msg="StopPodSandbox for \"ca328d2245f1621d4a9b8f56f547e175394c05b8b52cd90a410ce0f9cd82ea75\" returns successfully" Jan 29 11:32:38.002697 containerd[1484]: time="2025-01-29T11:32:38.002325408Z" level=info msg="StopPodSandbox for \"412254714bd9d4bf93c35b4635361e829d9373a2ab247e5c41b015d665cda9b8\"" Jan 29 11:32:38.002697 containerd[1484]: time="2025-01-29T11:32:38.002566791Z" level=info msg="TearDown network for sandbox \"412254714bd9d4bf93c35b4635361e829d9373a2ab247e5c41b015d665cda9b8\" successfully" Jan 29 11:32:38.002697 containerd[1484]: time="2025-01-29T11:32:38.002584154Z" level=info msg="StopPodSandbox for \"412254714bd9d4bf93c35b4635361e829d9373a2ab247e5c41b015d665cda9b8\" returns successfully" Jan 29 11:32:38.003369 containerd[1484]: time="2025-01-29T11:32:38.003145907Z" level=info msg="StopPodSandbox for \"3dc6f7d7aad1971a0324d00bc76d6e4b98dee66fff128002850bcc9e0b05f74e\"" Jan 29 11:32:38.003369 containerd[1484]: time="2025-01-29T11:32:38.003323941Z" level=info msg="Ensure that sandbox 3dc6f7d7aad1971a0324d00bc76d6e4b98dee66fff128002850bcc9e0b05f74e in task-service has been cleanup successfully" Jan 29 11:32:38.003833 kubelet[1799]: I0129 11:32:38.002691 1799 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="3dc6f7d7aad1971a0324d00bc76d6e4b98dee66fff128002850bcc9e0b05f74e" Jan 29 11:32:38.003876 containerd[1484]: time="2025-01-29T11:32:38.003550336Z" level=info msg="TearDown network for sandbox \"3dc6f7d7aad1971a0324d00bc76d6e4b98dee66fff128002850bcc9e0b05f74e\" successfully" Jan 29 11:32:38.003876 containerd[1484]: time="2025-01-29T11:32:38.003649672Z" level=info msg="StopPodSandbox for 
\"3dc6f7d7aad1971a0324d00bc76d6e4b98dee66fff128002850bcc9e0b05f74e\" returns successfully" Jan 29 11:32:38.003876 containerd[1484]: time="2025-01-29T11:32:38.003611580Z" level=info msg="StopPodSandbox for \"16f501e638a3ac116c003fcafbce0b8473a1e496ec5313be850130e687ffcd05\"" Jan 29 11:32:38.003876 containerd[1484]: time="2025-01-29T11:32:38.003809712Z" level=info msg="TearDown network for sandbox \"16f501e638a3ac116c003fcafbce0b8473a1e496ec5313be850130e687ffcd05\" successfully" Jan 29 11:32:38.003876 containerd[1484]: time="2025-01-29T11:32:38.003825251Z" level=info msg="StopPodSandbox for \"16f501e638a3ac116c003fcafbce0b8473a1e496ec5313be850130e687ffcd05\" returns successfully" Jan 29 11:32:38.003876 containerd[1484]: time="2025-01-29T11:32:38.003829169Z" level=info msg="StopPodSandbox for \"921a137634cbe4a220d4eae738f0f2c5ad00d53c38e31d8425095022f60f272a\"" Jan 29 11:32:38.004005 containerd[1484]: time="2025-01-29T11:32:38.003949364Z" level=info msg="TearDown network for sandbox \"921a137634cbe4a220d4eae738f0f2c5ad00d53c38e31d8425095022f60f272a\" successfully" Jan 29 11:32:38.004005 containerd[1484]: time="2025-01-29T11:32:38.003962068Z" level=info msg="StopPodSandbox for \"921a137634cbe4a220d4eae738f0f2c5ad00d53c38e31d8425095022f60f272a\" returns successfully" Jan 29 11:32:38.004237 containerd[1484]: time="2025-01-29T11:32:38.004189665Z" level=info msg="StopPodSandbox for \"d29d91f6c3e4b1c8ae17cab21bff6e4fe9bd910ffc7eb3fdbe04273555fde998\"" Jan 29 11:32:38.005092 containerd[1484]: time="2025-01-29T11:32:38.004283310Z" level=info msg="TearDown network for sandbox \"d29d91f6c3e4b1c8ae17cab21bff6e4fe9bd910ffc7eb3fdbe04273555fde998\" successfully" Jan 29 11:32:38.005092 containerd[1484]: time="2025-01-29T11:32:38.004298088Z" level=info msg="StopPodSandbox for \"d29d91f6c3e4b1c8ae17cab21bff6e4fe9bd910ffc7eb3fdbe04273555fde998\" returns successfully" Jan 29 11:32:38.005092 containerd[1484]: time="2025-01-29T11:32:38.004385301Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:csi-node-driver-7rbs5,Uid:168d8980-ba5c-4483-9146-b7dc7884186d,Namespace:calico-system,Attempt:3,}" Jan 29 11:32:38.006010 containerd[1484]: time="2025-01-29T11:32:38.005223083Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:nginx-deployment-85f456d6dd-bccrr,Uid:7c2ca4ff-6c88-4156-83da-a023debbfb5e,Namespace:default,Attempt:3,}" Jan 29 11:32:38.158186 containerd[1484]: time="2025-01-29T11:32:38.158124157Z" level=error msg="Failed to destroy network for sandbox \"c93ece865a072c0cb14d2f1bfe13ae827efec40e9f66919ed53be157201e987f\"" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jan 29 11:32:38.158644 containerd[1484]: time="2025-01-29T11:32:38.158600891Z" level=error msg="encountered an error cleaning up failed sandbox \"c93ece865a072c0cb14d2f1bfe13ae827efec40e9f66919ed53be157201e987f\", marking sandbox state as SANDBOX_UNKNOWN" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jan 29 11:32:38.158703 containerd[1484]: time="2025-01-29T11:32:38.158685519Z" level=error msg="RunPodSandbox for &PodSandboxMetadata{Name:nginx-deployment-85f456d6dd-bccrr,Uid:7c2ca4ff-6c88-4156-83da-a023debbfb5e,Namespace:default,Attempt:3,} failed, error" error="failed to setup network for sandbox 
\"c93ece865a072c0cb14d2f1bfe13ae827efec40e9f66919ed53be157201e987f\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jan 29 11:32:38.159322 kubelet[1799]: E0129 11:32:38.158913 1799 remote_runtime.go:193] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"c93ece865a072c0cb14d2f1bfe13ae827efec40e9f66919ed53be157201e987f\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jan 29 11:32:38.159322 kubelet[1799]: E0129 11:32:38.158974 1799 kuberuntime_sandbox.go:72] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"c93ece865a072c0cb14d2f1bfe13ae827efec40e9f66919ed53be157201e987f\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="default/nginx-deployment-85f456d6dd-bccrr" Jan 29 11:32:38.159322 kubelet[1799]: E0129 11:32:38.158996 1799 kuberuntime_manager.go:1166] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"c93ece865a072c0cb14d2f1bfe13ae827efec40e9f66919ed53be157201e987f\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="default/nginx-deployment-85f456d6dd-bccrr" Jan 29 11:32:38.161431 kubelet[1799]: E0129 11:32:38.159034 1799 pod_workers.go:1298] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"nginx-deployment-85f456d6dd-bccrr_default(7c2ca4ff-6c88-4156-83da-a023debbfb5e)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"nginx-deployment-85f456d6dd-bccrr_default(7c2ca4ff-6c88-4156-83da-a023debbfb5e)\\\": rpc error: code = Unknown desc = failed to setup network for sandbox \\\"c93ece865a072c0cb14d2f1bfe13ae827efec40e9f66919ed53be157201e987f\\\": plugin type=\\\"calico\\\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="default/nginx-deployment-85f456d6dd-bccrr" podUID="7c2ca4ff-6c88-4156-83da-a023debbfb5e" Jan 29 11:32:38.171331 containerd[1484]: time="2025-01-29T11:32:38.171293935Z" level=error msg="Failed to destroy network for sandbox \"5de5a9b89ac8c6b5f995d32eee7fe17833d74b22a0f409de2ab8cfcb16427df8\"" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jan 29 11:32:38.171681 containerd[1484]: time="2025-01-29T11:32:38.171653900Z" level=error msg="encountered an error cleaning up failed sandbox \"5de5a9b89ac8c6b5f995d32eee7fe17833d74b22a0f409de2ab8cfcb16427df8\", marking sandbox state as SANDBOX_UNKNOWN" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jan 29 11:32:38.171727 containerd[1484]: time="2025-01-29T11:32:38.171706679Z" level=error msg="RunPodSandbox for 
&PodSandboxMetadata{Name:csi-node-driver-7rbs5,Uid:168d8980-ba5c-4483-9146-b7dc7884186d,Namespace:calico-system,Attempt:3,} failed, error" error="failed to setup network for sandbox \"5de5a9b89ac8c6b5f995d32eee7fe17833d74b22a0f409de2ab8cfcb16427df8\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jan 29 11:32:38.172022 kubelet[1799]: E0129 11:32:38.171961 1799 remote_runtime.go:193] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"5de5a9b89ac8c6b5f995d32eee7fe17833d74b22a0f409de2ab8cfcb16427df8\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jan 29 11:32:38.172075 kubelet[1799]: E0129 11:32:38.172034 1799 kuberuntime_sandbox.go:72] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"5de5a9b89ac8c6b5f995d32eee7fe17833d74b22a0f409de2ab8cfcb16427df8\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-system/csi-node-driver-7rbs5" Jan 29 11:32:38.172075 kubelet[1799]: E0129 11:32:38.172059 1799 kuberuntime_manager.go:1166] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"5de5a9b89ac8c6b5f995d32eee7fe17833d74b22a0f409de2ab8cfcb16427df8\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-system/csi-node-driver-7rbs5" Jan 29 11:32:38.172129 kubelet[1799]: E0129 11:32:38.172110 1799 pod_workers.go:1298] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"csi-node-driver-7rbs5_calico-system(168d8980-ba5c-4483-9146-b7dc7884186d)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"csi-node-driver-7rbs5_calico-system(168d8980-ba5c-4483-9146-b7dc7884186d)\\\": rpc error: code = Unknown desc = failed to setup network for sandbox \\\"5de5a9b89ac8c6b5f995d32eee7fe17833d74b22a0f409de2ab8cfcb16427df8\\\": plugin type=\\\"calico\\\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="calico-system/csi-node-driver-7rbs5" podUID="168d8980-ba5c-4483-9146-b7dc7884186d" Jan 29 11:32:38.348888 kubelet[1799]: E0129 11:32:38.348192 1799 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Jan 29 11:32:38.386018 systemd[1]: run-netns-cni\x2d84837590\x2d2d34\x2db6f5\x2d452e\x2d90e84e4fa6ee.mount: Deactivated successfully. Jan 29 11:32:38.386125 systemd[1]: run-netns-cni\x2d5110edb5\x2de18a\x2de0be\x2d3a95\x2dfe333a7de2db.mount: Deactivated successfully. Jan 29 11:32:38.386196 systemd[1]: run-containerd-io.containerd.grpc.v1.cri-sandboxes-ca328d2245f1621d4a9b8f56f547e175394c05b8b52cd90a410ce0f9cd82ea75-shm.mount: Deactivated successfully. 
Jan 29 11:32:39.005569 kubelet[1799]: I0129 11:32:39.005536 1799 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="c93ece865a072c0cb14d2f1bfe13ae827efec40e9f66919ed53be157201e987f" Jan 29 11:32:39.006281 containerd[1484]: time="2025-01-29T11:32:39.005948760Z" level=info msg="StopPodSandbox for \"c93ece865a072c0cb14d2f1bfe13ae827efec40e9f66919ed53be157201e987f\"" Jan 29 11:32:39.006281 containerd[1484]: time="2025-01-29T11:32:39.006135641Z" level=info msg="Ensure that sandbox c93ece865a072c0cb14d2f1bfe13ae827efec40e9f66919ed53be157201e987f in task-service has been cleanup successfully" Jan 29 11:32:39.006679 containerd[1484]: time="2025-01-29T11:32:39.006660375Z" level=info msg="TearDown network for sandbox \"c93ece865a072c0cb14d2f1bfe13ae827efec40e9f66919ed53be157201e987f\" successfully" Jan 29 11:32:39.006726 containerd[1484]: time="2025-01-29T11:32:39.006678759Z" level=info msg="StopPodSandbox for \"c93ece865a072c0cb14d2f1bfe13ae827efec40e9f66919ed53be157201e987f\" returns successfully" Jan 29 11:32:39.007868 containerd[1484]: time="2025-01-29T11:32:39.007740000Z" level=info msg="StopPodSandbox for \"3dc6f7d7aad1971a0324d00bc76d6e4b98dee66fff128002850bcc9e0b05f74e\"" Jan 29 11:32:39.007868 containerd[1484]: time="2025-01-29T11:32:39.007831451Z" level=info msg="TearDown network for sandbox \"3dc6f7d7aad1971a0324d00bc76d6e4b98dee66fff128002850bcc9e0b05f74e\" successfully" Jan 29 11:32:39.007868 containerd[1484]: time="2025-01-29T11:32:39.007841630Z" level=info msg="StopPodSandbox for \"3dc6f7d7aad1971a0324d00bc76d6e4b98dee66fff128002850bcc9e0b05f74e\" returns successfully" Jan 29 11:32:39.007973 systemd[1]: run-netns-cni\x2d2111977c\x2db272\x2d4d94\x2dfe52\x2dd3ce9acb7b30.mount: Deactivated successfully. Jan 29 11:32:39.008180 containerd[1484]: time="2025-01-29T11:32:39.008032608Z" level=info msg="StopPodSandbox for \"921a137634cbe4a220d4eae738f0f2c5ad00d53c38e31d8425095022f60f272a\"" Jan 29 11:32:39.008180 containerd[1484]: time="2025-01-29T11:32:39.008099594Z" level=info msg="TearDown network for sandbox \"921a137634cbe4a220d4eae738f0f2c5ad00d53c38e31d8425095022f60f272a\" successfully" Jan 29 11:32:39.008180 containerd[1484]: time="2025-01-29T11:32:39.008109132Z" level=info msg="StopPodSandbox for \"921a137634cbe4a220d4eae738f0f2c5ad00d53c38e31d8425095022f60f272a\" returns successfully" Jan 29 11:32:39.008593 containerd[1484]: time="2025-01-29T11:32:39.008374019Z" level=info msg="StopPodSandbox for \"d29d91f6c3e4b1c8ae17cab21bff6e4fe9bd910ffc7eb3fdbe04273555fde998\"" Jan 29 11:32:39.008593 containerd[1484]: time="2025-01-29T11:32:39.008445803Z" level=info msg="TearDown network for sandbox \"d29d91f6c3e4b1c8ae17cab21bff6e4fe9bd910ffc7eb3fdbe04273555fde998\" successfully" Jan 29 11:32:39.008593 containerd[1484]: time="2025-01-29T11:32:39.008455071Z" level=info msg="StopPodSandbox for \"d29d91f6c3e4b1c8ae17cab21bff6e4fe9bd910ffc7eb3fdbe04273555fde998\" returns successfully" Jan 29 11:32:39.008676 kubelet[1799]: I0129 11:32:39.008648 1799 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="5de5a9b89ac8c6b5f995d32eee7fe17833d74b22a0f409de2ab8cfcb16427df8" Jan 29 11:32:39.008998 containerd[1484]: time="2025-01-29T11:32:39.008974194Z" level=info msg="StopPodSandbox for \"5de5a9b89ac8c6b5f995d32eee7fe17833d74b22a0f409de2ab8cfcb16427df8\"" Jan 29 11:32:39.009072 containerd[1484]: time="2025-01-29T11:32:39.009051449Z" level=info msg="RunPodSandbox for 
&PodSandboxMetadata{Name:nginx-deployment-85f456d6dd-bccrr,Uid:7c2ca4ff-6c88-4156-83da-a023debbfb5e,Namespace:default,Attempt:4,}" Jan 29 11:32:39.009122 containerd[1484]: time="2025-01-29T11:32:39.009107815Z" level=info msg="Ensure that sandbox 5de5a9b89ac8c6b5f995d32eee7fe17833d74b22a0f409de2ab8cfcb16427df8 in task-service has been cleanup successfully" Jan 29 11:32:39.009302 containerd[1484]: time="2025-01-29T11:32:39.009285448Z" level=info msg="TearDown network for sandbox \"5de5a9b89ac8c6b5f995d32eee7fe17833d74b22a0f409de2ab8cfcb16427df8\" successfully" Jan 29 11:32:39.009302 containerd[1484]: time="2025-01-29T11:32:39.009299615Z" level=info msg="StopPodSandbox for \"5de5a9b89ac8c6b5f995d32eee7fe17833d74b22a0f409de2ab8cfcb16427df8\" returns successfully" Jan 29 11:32:39.009503 containerd[1484]: time="2025-01-29T11:32:39.009486565Z" level=info msg="StopPodSandbox for \"ca328d2245f1621d4a9b8f56f547e175394c05b8b52cd90a410ce0f9cd82ea75\"" Jan 29 11:32:39.009567 containerd[1484]: time="2025-01-29T11:32:39.009555394Z" level=info msg="TearDown network for sandbox \"ca328d2245f1621d4a9b8f56f547e175394c05b8b52cd90a410ce0f9cd82ea75\" successfully" Jan 29 11:32:39.009605 containerd[1484]: time="2025-01-29T11:32:39.009566184Z" level=info msg="StopPodSandbox for \"ca328d2245f1621d4a9b8f56f547e175394c05b8b52cd90a410ce0f9cd82ea75\" returns successfully" Jan 29 11:32:39.009760 containerd[1484]: time="2025-01-29T11:32:39.009732877Z" level=info msg="StopPodSandbox for \"412254714bd9d4bf93c35b4635361e829d9373a2ab247e5c41b015d665cda9b8\"" Jan 29 11:32:39.009857 containerd[1484]: time="2025-01-29T11:32:39.009843995Z" level=info msg="TearDown network for sandbox \"412254714bd9d4bf93c35b4635361e829d9373a2ab247e5c41b015d665cda9b8\" successfully" Jan 29 11:32:39.009887 containerd[1484]: time="2025-01-29T11:32:39.009856419Z" level=info msg="StopPodSandbox for \"412254714bd9d4bf93c35b4635361e829d9373a2ab247e5c41b015d665cda9b8\" returns successfully" Jan 29 11:32:39.010068 containerd[1484]: time="2025-01-29T11:32:39.010049441Z" level=info msg="StopPodSandbox for \"16f501e638a3ac116c003fcafbce0b8473a1e496ec5313be850130e687ffcd05\"" Jan 29 11:32:39.010268 containerd[1484]: time="2025-01-29T11:32:39.010233957Z" level=info msg="TearDown network for sandbox \"16f501e638a3ac116c003fcafbce0b8473a1e496ec5313be850130e687ffcd05\" successfully" Jan 29 11:32:39.010268 containerd[1484]: time="2025-01-29T11:32:39.010255157Z" level=info msg="StopPodSandbox for \"16f501e638a3ac116c003fcafbce0b8473a1e496ec5313be850130e687ffcd05\" returns successfully" Jan 29 11:32:39.010519 systemd[1]: run-netns-cni\x2d74c1f4d8\x2d1f20\x2da690\x2d20c3\x2df15307347d76.mount: Deactivated successfully. 
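For readers correlating the interleaved sources: the kubelet entries use the standard klog header (severity letter, MMDD, wall-clock time, PID, file:line), while the containerd entries are logrus key=value records. A throwaway parser for the kubelet form, shown against one of the lines above (illustrative sketch; the field names are my own):

    import re

    # klog header: <sev>MMDD hh:mm:ss.uuuuuu <pid> <file:line>] <message>
    KLOG = re.compile(
        r'(?P<sev>[IWEF])(?P<mmdd>\d{4}) '
        r'(?P<time>\d{2}:\d{2}:\d{2}\.\d+) +(?P<pid>\d+) '
        r'(?P<src>[\w.]+:\d+)\] (?P<msg>.*)'
    )

    line = ('E0129 11:32:38.171961 1799 remote_runtime.go:193] '
            '"RunPodSandbox from runtime service failed"')
    m = KLOG.match(line)
    assert m is not None
    print(m.group('sev'), m.group('src'), m.group('msg'))
    # E remote_runtime.go:193 "RunPodSandbox from runtime service failed"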
Jan 29 11:32:39.010584 containerd[1484]: time="2025-01-29T11:32:39.010514493Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:csi-node-driver-7rbs5,Uid:168d8980-ba5c-4483-9146-b7dc7884186d,Namespace:calico-system,Attempt:4,}" Jan 29 11:32:39.348830 kubelet[1799]: E0129 11:32:39.348709 1799 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Jan 29 11:32:39.606531 containerd[1484]: time="2025-01-29T11:32:39.606374844Z" level=error msg="Failed to destroy network for sandbox \"9068ccd9f3ae0f6c39874f0fc07ce6e9739aeb96cb25bd1513317e5ce99db9d5\"" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jan 29 11:32:39.607297 containerd[1484]: time="2025-01-29T11:32:39.607123037Z" level=error msg="encountered an error cleaning up failed sandbox \"9068ccd9f3ae0f6c39874f0fc07ce6e9739aeb96cb25bd1513317e5ce99db9d5\", marking sandbox state as SANDBOX_UNKNOWN" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jan 29 11:32:39.607297 containerd[1484]: time="2025-01-29T11:32:39.607181647Z" level=error msg="RunPodSandbox for &PodSandboxMetadata{Name:csi-node-driver-7rbs5,Uid:168d8980-ba5c-4483-9146-b7dc7884186d,Namespace:calico-system,Attempt:4,} failed, error" error="failed to setup network for sandbox \"9068ccd9f3ae0f6c39874f0fc07ce6e9739aeb96cb25bd1513317e5ce99db9d5\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jan 29 11:32:39.607972 kubelet[1799]: E0129 11:32:39.607935 1799 remote_runtime.go:193] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"9068ccd9f3ae0f6c39874f0fc07ce6e9739aeb96cb25bd1513317e5ce99db9d5\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jan 29 11:32:39.608453 kubelet[1799]: E0129 11:32:39.608417 1799 kuberuntime_sandbox.go:72] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"9068ccd9f3ae0f6c39874f0fc07ce6e9739aeb96cb25bd1513317e5ce99db9d5\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-system/csi-node-driver-7rbs5" Jan 29 11:32:39.608453 kubelet[1799]: E0129 11:32:39.608445 1799 kuberuntime_manager.go:1166] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"9068ccd9f3ae0f6c39874f0fc07ce6e9739aeb96cb25bd1513317e5ce99db9d5\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-system/csi-node-driver-7rbs5" Jan 29 11:32:39.608541 kubelet[1799]: E0129 11:32:39.608488 1799 pod_workers.go:1298] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"csi-node-driver-7rbs5_calico-system(168d8980-ba5c-4483-9146-b7dc7884186d)\" with CreatePodSandboxError: \"Failed to create sandbox for pod 
\\\"csi-node-driver-7rbs5_calico-system(168d8980-ba5c-4483-9146-b7dc7884186d)\\\": rpc error: code = Unknown desc = failed to setup network for sandbox \\\"9068ccd9f3ae0f6c39874f0fc07ce6e9739aeb96cb25bd1513317e5ce99db9d5\\\": plugin type=\\\"calico\\\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="calico-system/csi-node-driver-7rbs5" podUID="168d8980-ba5c-4483-9146-b7dc7884186d" Jan 29 11:32:39.608632 systemd[1]: run-containerd-io.containerd.grpc.v1.cri-sandboxes-9068ccd9f3ae0f6c39874f0fc07ce6e9739aeb96cb25bd1513317e5ce99db9d5-shm.mount: Deactivated successfully. Jan 29 11:32:39.635907 containerd[1484]: time="2025-01-29T11:32:39.635851162Z" level=error msg="Failed to destroy network for sandbox \"75a514a2254dd50be524a2ed1b22dca4ae719ba1a08352aa2192066d83ffc3bb\"" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jan 29 11:32:39.636431 containerd[1484]: time="2025-01-29T11:32:39.636401084Z" level=error msg="encountered an error cleaning up failed sandbox \"75a514a2254dd50be524a2ed1b22dca4ae719ba1a08352aa2192066d83ffc3bb\", marking sandbox state as SANDBOX_UNKNOWN" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jan 29 11:32:39.636501 containerd[1484]: time="2025-01-29T11:32:39.636478218Z" level=error msg="RunPodSandbox for &PodSandboxMetadata{Name:nginx-deployment-85f456d6dd-bccrr,Uid:7c2ca4ff-6c88-4156-83da-a023debbfb5e,Namespace:default,Attempt:4,} failed, error" error="failed to setup network for sandbox \"75a514a2254dd50be524a2ed1b22dca4ae719ba1a08352aa2192066d83ffc3bb\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jan 29 11:32:39.636852 kubelet[1799]: E0129 11:32:39.636817 1799 remote_runtime.go:193] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"75a514a2254dd50be524a2ed1b22dca4ae719ba1a08352aa2192066d83ffc3bb\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jan 29 11:32:39.636917 kubelet[1799]: E0129 11:32:39.636877 1799 kuberuntime_sandbox.go:72] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"75a514a2254dd50be524a2ed1b22dca4ae719ba1a08352aa2192066d83ffc3bb\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="default/nginx-deployment-85f456d6dd-bccrr" Jan 29 11:32:39.636917 kubelet[1799]: E0129 11:32:39.636899 1799 kuberuntime_manager.go:1166] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"75a514a2254dd50be524a2ed1b22dca4ae719ba1a08352aa2192066d83ffc3bb\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="default/nginx-deployment-85f456d6dd-bccrr" Jan 29 11:32:39.636979 kubelet[1799]: 
E0129 11:32:39.636935 1799 pod_workers.go:1298] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"nginx-deployment-85f456d6dd-bccrr_default(7c2ca4ff-6c88-4156-83da-a023debbfb5e)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"nginx-deployment-85f456d6dd-bccrr_default(7c2ca4ff-6c88-4156-83da-a023debbfb5e)\\\": rpc error: code = Unknown desc = failed to setup network for sandbox \\\"75a514a2254dd50be524a2ed1b22dca4ae719ba1a08352aa2192066d83ffc3bb\\\": plugin type=\\\"calico\\\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="default/nginx-deployment-85f456d6dd-bccrr" podUID="7c2ca4ff-6c88-4156-83da-a023debbfb5e" Jan 29 11:32:39.638331 systemd[1]: run-containerd-io.containerd.grpc.v1.cri-sandboxes-75a514a2254dd50be524a2ed1b22dca4ae719ba1a08352aa2192066d83ffc3bb-shm.mount: Deactivated successfully. Jan 29 11:32:40.013976 kubelet[1799]: I0129 11:32:40.013937 1799 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="75a514a2254dd50be524a2ed1b22dca4ae719ba1a08352aa2192066d83ffc3bb" Jan 29 11:32:40.014656 containerd[1484]: time="2025-01-29T11:32:40.014625199Z" level=info msg="StopPodSandbox for \"75a514a2254dd50be524a2ed1b22dca4ae719ba1a08352aa2192066d83ffc3bb\"" Jan 29 11:32:40.015292 containerd[1484]: time="2025-01-29T11:32:40.014852315Z" level=info msg="Ensure that sandbox 75a514a2254dd50be524a2ed1b22dca4ae719ba1a08352aa2192066d83ffc3bb in task-service has been cleanup successfully" Jan 29 11:32:40.015292 containerd[1484]: time="2025-01-29T11:32:40.015040297Z" level=info msg="TearDown network for sandbox \"75a514a2254dd50be524a2ed1b22dca4ae719ba1a08352aa2192066d83ffc3bb\" successfully" Jan 29 11:32:40.015292 containerd[1484]: time="2025-01-29T11:32:40.015051418Z" level=info msg="StopPodSandbox for \"75a514a2254dd50be524a2ed1b22dca4ae719ba1a08352aa2192066d83ffc3bb\" returns successfully" Jan 29 11:32:40.015441 containerd[1484]: time="2025-01-29T11:32:40.015400934Z" level=info msg="StopPodSandbox for \"c93ece865a072c0cb14d2f1bfe13ae827efec40e9f66919ed53be157201e987f\"" Jan 29 11:32:40.015574 containerd[1484]: time="2025-01-29T11:32:40.015519766Z" level=info msg="TearDown network for sandbox \"c93ece865a072c0cb14d2f1bfe13ae827efec40e9f66919ed53be157201e987f\" successfully" Jan 29 11:32:40.015574 containerd[1484]: time="2025-01-29T11:32:40.015568668Z" level=info msg="StopPodSandbox for \"c93ece865a072c0cb14d2f1bfe13ae827efec40e9f66919ed53be157201e987f\" returns successfully" Jan 29 11:32:40.016193 containerd[1484]: time="2025-01-29T11:32:40.016067965Z" level=info msg="StopPodSandbox for \"3dc6f7d7aad1971a0324d00bc76d6e4b98dee66fff128002850bcc9e0b05f74e\"" Jan 29 11:32:40.016193 containerd[1484]: time="2025-01-29T11:32:40.016149988Z" level=info msg="TearDown network for sandbox \"3dc6f7d7aad1971a0324d00bc76d6e4b98dee66fff128002850bcc9e0b05f74e\" successfully" Jan 29 11:32:40.016193 containerd[1484]: time="2025-01-29T11:32:40.016160518Z" level=info msg="StopPodSandbox for \"3dc6f7d7aad1971a0324d00bc76d6e4b98dee66fff128002850bcc9e0b05f74e\" returns successfully" Jan 29 11:32:40.016825 containerd[1484]: time="2025-01-29T11:32:40.016810086Z" level=info msg="StopPodSandbox for \"921a137634cbe4a220d4eae738f0f2c5ad00d53c38e31d8425095022f60f272a\"" Jan 29 11:32:40.017421 containerd[1484]: time="2025-01-29T11:32:40.017270390Z" level=info msg="TearDown network for sandbox 
\"921a137634cbe4a220d4eae738f0f2c5ad00d53c38e31d8425095022f60f272a\" successfully" Jan 29 11:32:40.017421 containerd[1484]: time="2025-01-29T11:32:40.017285788Z" level=info msg="StopPodSandbox for \"921a137634cbe4a220d4eae738f0f2c5ad00d53c38e31d8425095022f60f272a\" returns successfully" Jan 29 11:32:40.017500 kubelet[1799]: I0129 11:32:40.017465 1799 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="9068ccd9f3ae0f6c39874f0fc07ce6e9739aeb96cb25bd1513317e5ce99db9d5" Jan 29 11:32:40.017648 containerd[1484]: time="2025-01-29T11:32:40.017629964Z" level=info msg="StopPodSandbox for \"d29d91f6c3e4b1c8ae17cab21bff6e4fe9bd910ffc7eb3fdbe04273555fde998\"" Jan 29 11:32:40.017719 containerd[1484]: time="2025-01-29T11:32:40.017705946Z" level=info msg="TearDown network for sandbox \"d29d91f6c3e4b1c8ae17cab21bff6e4fe9bd910ffc7eb3fdbe04273555fde998\" successfully" Jan 29 11:32:40.017745 containerd[1484]: time="2025-01-29T11:32:40.017717278Z" level=info msg="StopPodSandbox for \"d29d91f6c3e4b1c8ae17cab21bff6e4fe9bd910ffc7eb3fdbe04273555fde998\" returns successfully" Jan 29 11:32:40.017887 containerd[1484]: time="2025-01-29T11:32:40.017868601Z" level=info msg="StopPodSandbox for \"9068ccd9f3ae0f6c39874f0fc07ce6e9739aeb96cb25bd1513317e5ce99db9d5\"" Jan 29 11:32:40.018000 containerd[1484]: time="2025-01-29T11:32:40.017985250Z" level=info msg="Ensure that sandbox 9068ccd9f3ae0f6c39874f0fc07ce6e9739aeb96cb25bd1513317e5ce99db9d5 in task-service has been cleanup successfully" Jan 29 11:32:40.018140 containerd[1484]: time="2025-01-29T11:32:40.018107780Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:nginx-deployment-85f456d6dd-bccrr,Uid:7c2ca4ff-6c88-4156-83da-a023debbfb5e,Namespace:default,Attempt:5,}" Jan 29 11:32:40.018386 containerd[1484]: time="2025-01-29T11:32:40.018159186Z" level=info msg="TearDown network for sandbox \"9068ccd9f3ae0f6c39874f0fc07ce6e9739aeb96cb25bd1513317e5ce99db9d5\" successfully" Jan 29 11:32:40.018386 containerd[1484]: time="2025-01-29T11:32:40.018172591Z" level=info msg="StopPodSandbox for \"9068ccd9f3ae0f6c39874f0fc07ce6e9739aeb96cb25bd1513317e5ce99db9d5\" returns successfully" Jan 29 11:32:40.018386 containerd[1484]: time="2025-01-29T11:32:40.018350816Z" level=info msg="StopPodSandbox for \"5de5a9b89ac8c6b5f995d32eee7fe17833d74b22a0f409de2ab8cfcb16427df8\"" Jan 29 11:32:40.018492 containerd[1484]: time="2025-01-29T11:32:40.018413644Z" level=info msg="TearDown network for sandbox \"5de5a9b89ac8c6b5f995d32eee7fe17833d74b22a0f409de2ab8cfcb16427df8\" successfully" Jan 29 11:32:40.018492 containerd[1484]: time="2025-01-29T11:32:40.018422891Z" level=info msg="StopPodSandbox for \"5de5a9b89ac8c6b5f995d32eee7fe17833d74b22a0f409de2ab8cfcb16427df8\" returns successfully" Jan 29 11:32:40.018894 containerd[1484]: time="2025-01-29T11:32:40.018792674Z" level=info msg="StopPodSandbox for \"ca328d2245f1621d4a9b8f56f547e175394c05b8b52cd90a410ce0f9cd82ea75\"" Jan 29 11:32:40.018953 containerd[1484]: time="2025-01-29T11:32:40.018898643Z" level=info msg="TearDown network for sandbox \"ca328d2245f1621d4a9b8f56f547e175394c05b8b52cd90a410ce0f9cd82ea75\" successfully" Jan 29 11:32:40.018953 containerd[1484]: time="2025-01-29T11:32:40.018908321Z" level=info msg="StopPodSandbox for \"ca328d2245f1621d4a9b8f56f547e175394c05b8b52cd90a410ce0f9cd82ea75\" returns successfully" Jan 29 11:32:40.019193 containerd[1484]: time="2025-01-29T11:32:40.019165463Z" level=info msg="StopPodSandbox for \"412254714bd9d4bf93c35b4635361e829d9373a2ab247e5c41b015d665cda9b8\"" Jan 29 
11:32:40.019256 containerd[1484]: time="2025-01-29T11:32:40.019247778Z" level=info msg="TearDown network for sandbox \"412254714bd9d4bf93c35b4635361e829d9373a2ab247e5c41b015d665cda9b8\" successfully" Jan 29 11:32:40.019283 containerd[1484]: time="2025-01-29T11:32:40.019258067Z" level=info msg="StopPodSandbox for \"412254714bd9d4bf93c35b4635361e829d9373a2ab247e5c41b015d665cda9b8\" returns successfully" Jan 29 11:32:40.019731 containerd[1484]: time="2025-01-29T11:32:40.019669889Z" level=info msg="StopPodSandbox for \"16f501e638a3ac116c003fcafbce0b8473a1e496ec5313be850130e687ffcd05\"" Jan 29 11:32:40.019795 containerd[1484]: time="2025-01-29T11:32:40.019766661Z" level=info msg="TearDown network for sandbox \"16f501e638a3ac116c003fcafbce0b8473a1e496ec5313be850130e687ffcd05\" successfully" Jan 29 11:32:40.019795 containerd[1484]: time="2025-01-29T11:32:40.019779856Z" level=info msg="StopPodSandbox for \"16f501e638a3ac116c003fcafbce0b8473a1e496ec5313be850130e687ffcd05\" returns successfully" Jan 29 11:32:40.020251 containerd[1484]: time="2025-01-29T11:32:40.020194082Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:csi-node-driver-7rbs5,Uid:168d8980-ba5c-4483-9146-b7dc7884186d,Namespace:calico-system,Attempt:5,}" Jan 29 11:32:40.164887 containerd[1484]: time="2025-01-29T11:32:40.164824135Z" level=error msg="Failed to destroy network for sandbox \"002769d5a6aa5c81b8842cd7446be2a5369fe5aa1bf1f66e4547c1e7a30a1547\"" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jan 29 11:32:40.253090 containerd[1484]: time="2025-01-29T11:32:40.253021918Z" level=error msg="encountered an error cleaning up failed sandbox \"002769d5a6aa5c81b8842cd7446be2a5369fe5aa1bf1f66e4547c1e7a30a1547\", marking sandbox state as SANDBOX_UNKNOWN" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jan 29 11:32:40.253224 containerd[1484]: time="2025-01-29T11:32:40.253112969Z" level=error msg="RunPodSandbox for &PodSandboxMetadata{Name:csi-node-driver-7rbs5,Uid:168d8980-ba5c-4483-9146-b7dc7884186d,Namespace:calico-system,Attempt:5,} failed, error" error="failed to setup network for sandbox \"002769d5a6aa5c81b8842cd7446be2a5369fe5aa1bf1f66e4547c1e7a30a1547\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jan 29 11:32:40.253401 kubelet[1799]: E0129 11:32:40.253349 1799 remote_runtime.go:193] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"002769d5a6aa5c81b8842cd7446be2a5369fe5aa1bf1f66e4547c1e7a30a1547\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jan 29 11:32:40.253454 kubelet[1799]: E0129 11:32:40.253418 1799 kuberuntime_sandbox.go:72] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"002769d5a6aa5c81b8842cd7446be2a5369fe5aa1bf1f66e4547c1e7a30a1547\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" 
pod="calico-system/csi-node-driver-7rbs5" Jan 29 11:32:40.253454 kubelet[1799]: E0129 11:32:40.253438 1799 kuberuntime_manager.go:1166] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"002769d5a6aa5c81b8842cd7446be2a5369fe5aa1bf1f66e4547c1e7a30a1547\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-system/csi-node-driver-7rbs5" Jan 29 11:32:40.253505 kubelet[1799]: E0129 11:32:40.253477 1799 pod_workers.go:1298] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"csi-node-driver-7rbs5_calico-system(168d8980-ba5c-4483-9146-b7dc7884186d)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"csi-node-driver-7rbs5_calico-system(168d8980-ba5c-4483-9146-b7dc7884186d)\\\": rpc error: code = Unknown desc = failed to setup network for sandbox \\\"002769d5a6aa5c81b8842cd7446be2a5369fe5aa1bf1f66e4547c1e7a30a1547\\\": plugin type=\\\"calico\\\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="calico-system/csi-node-driver-7rbs5" podUID="168d8980-ba5c-4483-9146-b7dc7884186d" Jan 29 11:32:40.276401 containerd[1484]: time="2025-01-29T11:32:40.276268333Z" level=error msg="Failed to destroy network for sandbox \"b5af3bf449dc596e76072a4303d7abb9d897f35a91879d6247c2a55c67d9c059\"" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jan 29 11:32:40.277962 containerd[1484]: time="2025-01-29T11:32:40.277927495Z" level=error msg="encountered an error cleaning up failed sandbox \"b5af3bf449dc596e76072a4303d7abb9d897f35a91879d6247c2a55c67d9c059\", marking sandbox state as SANDBOX_UNKNOWN" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jan 29 11:32:40.278056 containerd[1484]: time="2025-01-29T11:32:40.278007555Z" level=error msg="RunPodSandbox for &PodSandboxMetadata{Name:nginx-deployment-85f456d6dd-bccrr,Uid:7c2ca4ff-6c88-4156-83da-a023debbfb5e,Namespace:default,Attempt:5,} failed, error" error="failed to setup network for sandbox \"b5af3bf449dc596e76072a4303d7abb9d897f35a91879d6247c2a55c67d9c059\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jan 29 11:32:40.278707 kubelet[1799]: E0129 11:32:40.278651 1799 remote_runtime.go:193] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"b5af3bf449dc596e76072a4303d7abb9d897f35a91879d6247c2a55c67d9c059\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jan 29 11:32:40.278773 kubelet[1799]: E0129 11:32:40.278734 1799 kuberuntime_sandbox.go:72] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"b5af3bf449dc596e76072a4303d7abb9d897f35a91879d6247c2a55c67d9c059\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the 
calico/node container is running and has mounted /var/lib/calico/" pod="default/nginx-deployment-85f456d6dd-bccrr" Jan 29 11:32:40.278812 kubelet[1799]: E0129 11:32:40.278780 1799 kuberuntime_manager.go:1166] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"b5af3bf449dc596e76072a4303d7abb9d897f35a91879d6247c2a55c67d9c059\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="default/nginx-deployment-85f456d6dd-bccrr" Jan 29 11:32:40.278872 kubelet[1799]: E0129 11:32:40.278836 1799 pod_workers.go:1298] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"nginx-deployment-85f456d6dd-bccrr_default(7c2ca4ff-6c88-4156-83da-a023debbfb5e)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"nginx-deployment-85f456d6dd-bccrr_default(7c2ca4ff-6c88-4156-83da-a023debbfb5e)\\\": rpc error: code = Unknown desc = failed to setup network for sandbox \\\"b5af3bf449dc596e76072a4303d7abb9d897f35a91879d6247c2a55c67d9c059\\\": plugin type=\\\"calico\\\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="default/nginx-deployment-85f456d6dd-bccrr" podUID="7c2ca4ff-6c88-4156-83da-a023debbfb5e" Jan 29 11:32:40.349023 kubelet[1799]: E0129 11:32:40.348922 1799 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Jan 29 11:32:40.392252 systemd[1]: run-netns-cni\x2da4803cda\x2df9cc\x2dad39\x2d2b3c\x2dfe23f4ea3662.mount: Deactivated successfully. Jan 29 11:32:40.392385 systemd[1]: run-netns-cni\x2d579a72ff\x2d2f62\x2de25b\x2de3ba\x2d910e90d52f43.mount: Deactivated successfully. 
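The pattern repeating across Attempt:3 through Attempt:7 is kubelet retrying RunPodSandbox with an incremented attempt counter while first re-issuing StopPodSandbox/TearDown for every previously failed sandbox ID of that pod; because the CNI DEL needs the same missing nodename file, the leftover netns and shm mounts end up being cleaned by systemd instead. A rough model of that loop, illustrative only and not kubelet or containerd source (sandbox IDs are placeholders for the 64-character IDs in the log):

    # Every failed attempt leaves a sandbox ID behind; each new attempt first tears
    # all of them down again, then creates a fresh sandbox, which fails the same way.
    failed = []  # sandbox IDs of previous failed attempts

    def retry(attempt, new_sandbox_id):
        for sandbox in failed:
            print(f'StopPodSandbox for "{sandbox}"')
            print(f'TearDown network for sandbox "{sandbox}"')
        print(f"RunPodSandbox ... Attempt:{attempt},")
        failed.append(new_sandbox_id)  # the fresh sandbox joins the teardown list

    for attempt, sid in enumerate(["sandbox-a", "sandbox-b", "sandbox-c"], start=4):
        retry(attempt, sid)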
Jan 29 11:32:41.025238 kubelet[1799]: I0129 11:32:41.025198 1799 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="b5af3bf449dc596e76072a4303d7abb9d897f35a91879d6247c2a55c67d9c059" Jan 29 11:32:41.026237 containerd[1484]: time="2025-01-29T11:32:41.025670338Z" level=info msg="StopPodSandbox for \"b5af3bf449dc596e76072a4303d7abb9d897f35a91879d6247c2a55c67d9c059\"" Jan 29 11:32:41.026237 containerd[1484]: time="2025-01-29T11:32:41.025977069Z" level=info msg="Ensure that sandbox b5af3bf449dc596e76072a4303d7abb9d897f35a91879d6247c2a55c67d9c059 in task-service has been cleanup successfully" Jan 29 11:32:41.026524 containerd[1484]: time="2025-01-29T11:32:41.026401788Z" level=info msg="TearDown network for sandbox \"b5af3bf449dc596e76072a4303d7abb9d897f35a91879d6247c2a55c67d9c059\" successfully" Jan 29 11:32:41.026524 containerd[1484]: time="2025-01-29T11:32:41.026417709Z" level=info msg="StopPodSandbox for \"b5af3bf449dc596e76072a4303d7abb9d897f35a91879d6247c2a55c67d9c059\" returns successfully" Jan 29 11:32:41.027950 containerd[1484]: time="2025-01-29T11:32:41.027925724Z" level=info msg="StopPodSandbox for \"75a514a2254dd50be524a2ed1b22dca4ae719ba1a08352aa2192066d83ffc3bb\"" Jan 29 11:32:41.028031 containerd[1484]: time="2025-01-29T11:32:41.028014034Z" level=info msg="TearDown network for sandbox \"75a514a2254dd50be524a2ed1b22dca4ae719ba1a08352aa2192066d83ffc3bb\" successfully" Jan 29 11:32:41.028053 containerd[1484]: time="2025-01-29T11:32:41.028029805Z" level=info msg="StopPodSandbox for \"75a514a2254dd50be524a2ed1b22dca4ae719ba1a08352aa2192066d83ffc3bb\" returns successfully" Jan 29 11:32:41.028454 containerd[1484]: time="2025-01-29T11:32:41.028397844Z" level=info msg="StopPodSandbox for \"c93ece865a072c0cb14d2f1bfe13ae827efec40e9f66919ed53be157201e987f\"" Jan 29 11:32:41.028600 containerd[1484]: time="2025-01-29T11:32:41.028566239Z" level=info msg="TearDown network for sandbox \"c93ece865a072c0cb14d2f1bfe13ae827efec40e9f66919ed53be157201e987f\" successfully" Jan 29 11:32:41.028600 containerd[1484]: time="2025-01-29T11:32:41.028587139Z" level=info msg="StopPodSandbox for \"c93ece865a072c0cb14d2f1bfe13ae827efec40e9f66919ed53be157201e987f\" returns successfully" Jan 29 11:32:41.029311 containerd[1484]: time="2025-01-29T11:32:41.029292288Z" level=info msg="StopPodSandbox for \"3dc6f7d7aad1971a0324d00bc76d6e4b98dee66fff128002850bcc9e0b05f74e\"" Jan 29 11:32:41.029353 systemd[1]: run-netns-cni\x2d9d4d21d2\x2da880\x2db3e3\x2de90e\x2dc43702e9efa0.mount: Deactivated successfully. 
Jan 29 11:32:41.029514 containerd[1484]: time="2025-01-29T11:32:41.029491412Z" level=info msg="TearDown network for sandbox \"3dc6f7d7aad1971a0324d00bc76d6e4b98dee66fff128002850bcc9e0b05f74e\" successfully" Jan 29 11:32:41.029514 containerd[1484]: time="2025-01-29T11:32:41.029506771Z" level=info msg="StopPodSandbox for \"3dc6f7d7aad1971a0324d00bc76d6e4b98dee66fff128002850bcc9e0b05f74e\" returns successfully" Jan 29 11:32:41.029787 containerd[1484]: time="2025-01-29T11:32:41.029681258Z" level=info msg="StopPodSandbox for \"921a137634cbe4a220d4eae738f0f2c5ad00d53c38e31d8425095022f60f272a\"" Jan 29 11:32:41.029824 containerd[1484]: time="2025-01-29T11:32:41.029812000Z" level=info msg="TearDown network for sandbox \"921a137634cbe4a220d4eae738f0f2c5ad00d53c38e31d8425095022f60f272a\" successfully" Jan 29 11:32:41.029873 containerd[1484]: time="2025-01-29T11:32:41.029827189Z" level=info msg="StopPodSandbox for \"921a137634cbe4a220d4eae738f0f2c5ad00d53c38e31d8425095022f60f272a\" returns successfully" Jan 29 11:32:41.030027 kubelet[1799]: I0129 11:32:41.030004 1799 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="002769d5a6aa5c81b8842cd7446be2a5369fe5aa1bf1f66e4547c1e7a30a1547" Jan 29 11:32:41.030399 containerd[1484]: time="2025-01-29T11:32:41.030346991Z" level=info msg="StopPodSandbox for \"d29d91f6c3e4b1c8ae17cab21bff6e4fe9bd910ffc7eb3fdbe04273555fde998\"" Jan 29 11:32:41.030797 containerd[1484]: time="2025-01-29T11:32:41.030591191Z" level=info msg="StopPodSandbox for \"002769d5a6aa5c81b8842cd7446be2a5369fe5aa1bf1f66e4547c1e7a30a1547\"" Jan 29 11:32:41.030964 containerd[1484]: time="2025-01-29T11:32:41.030916418Z" level=info msg="Ensure that sandbox 002769d5a6aa5c81b8842cd7446be2a5369fe5aa1bf1f66e4547c1e7a30a1547 in task-service has been cleanup successfully" Jan 29 11:32:41.031151 containerd[1484]: time="2025-01-29T11:32:41.031107145Z" level=info msg="TearDown network for sandbox \"d29d91f6c3e4b1c8ae17cab21bff6e4fe9bd910ffc7eb3fdbe04273555fde998\" successfully" Jan 29 11:32:41.031151 containerd[1484]: time="2025-01-29T11:32:41.031126603Z" level=info msg="StopPodSandbox for \"d29d91f6c3e4b1c8ae17cab21bff6e4fe9bd910ffc7eb3fdbe04273555fde998\" returns successfully" Jan 29 11:32:41.032082 containerd[1484]: time="2025-01-29T11:32:41.032051956Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:nginx-deployment-85f456d6dd-bccrr,Uid:7c2ca4ff-6c88-4156-83da-a023debbfb5e,Namespace:default,Attempt:6,}" Jan 29 11:32:41.033520 containerd[1484]: time="2025-01-29T11:32:41.033493154Z" level=info msg="TearDown network for sandbox \"002769d5a6aa5c81b8842cd7446be2a5369fe5aa1bf1f66e4547c1e7a30a1547\" successfully" Jan 29 11:32:41.033651 containerd[1484]: time="2025-01-29T11:32:41.033520115Z" level=info msg="StopPodSandbox for \"002769d5a6aa5c81b8842cd7446be2a5369fe5aa1bf1f66e4547c1e7a30a1547\" returns successfully" Jan 29 11:32:41.033796 containerd[1484]: time="2025-01-29T11:32:41.033768174Z" level=info msg="StopPodSandbox for \"9068ccd9f3ae0f6c39874f0fc07ce6e9739aeb96cb25bd1513317e5ce99db9d5\"" Jan 29 11:32:41.033884 containerd[1484]: time="2025-01-29T11:32:41.033865611Z" level=info msg="TearDown network for sandbox \"9068ccd9f3ae0f6c39874f0fc07ce6e9739aeb96cb25bd1513317e5ce99db9d5\" successfully" Jan 29 11:32:41.033914 containerd[1484]: time="2025-01-29T11:32:41.033884017Z" level=info msg="StopPodSandbox for \"9068ccd9f3ae0f6c39874f0fc07ce6e9739aeb96cb25bd1513317e5ce99db9d5\" returns successfully" Jan 29 11:32:41.033950 systemd[1]: 
run-netns-cni\x2d9f0bc2f1\x2d92fa\x2de641\x2d8765\x2d20a165431964.mount: Deactivated successfully. Jan 29 11:32:41.034360 containerd[1484]: time="2025-01-29T11:32:41.034337982Z" level=info msg="StopPodSandbox for \"5de5a9b89ac8c6b5f995d32eee7fe17833d74b22a0f409de2ab8cfcb16427df8\"" Jan 29 11:32:41.034469 containerd[1484]: time="2025-01-29T11:32:41.034452071Z" level=info msg="TearDown network for sandbox \"5de5a9b89ac8c6b5f995d32eee7fe17833d74b22a0f409de2ab8cfcb16427df8\" successfully" Jan 29 11:32:41.034492 containerd[1484]: time="2025-01-29T11:32:41.034468182Z" level=info msg="StopPodSandbox for \"5de5a9b89ac8c6b5f995d32eee7fe17833d74b22a0f409de2ab8cfcb16427df8\" returns successfully" Jan 29 11:32:41.034698 containerd[1484]: time="2025-01-29T11:32:41.034668609Z" level=info msg="StopPodSandbox for \"ca328d2245f1621d4a9b8f56f547e175394c05b8b52cd90a410ce0f9cd82ea75\"" Jan 29 11:32:41.035052 containerd[1484]: time="2025-01-29T11:32:41.034788069Z" level=info msg="TearDown network for sandbox \"ca328d2245f1621d4a9b8f56f547e175394c05b8b52cd90a410ce0f9cd82ea75\" successfully" Jan 29 11:32:41.035052 containerd[1484]: time="2025-01-29T11:32:41.034806835Z" level=info msg="StopPodSandbox for \"ca328d2245f1621d4a9b8f56f547e175394c05b8b52cd90a410ce0f9cd82ea75\" returns successfully" Jan 29 11:32:41.035436 containerd[1484]: time="2025-01-29T11:32:41.035286009Z" level=info msg="StopPodSandbox for \"412254714bd9d4bf93c35b4635361e829d9373a2ab247e5c41b015d665cda9b8\"" Jan 29 11:32:41.035674 containerd[1484]: time="2025-01-29T11:32:41.035651263Z" level=info msg="TearDown network for sandbox \"412254714bd9d4bf93c35b4635361e829d9373a2ab247e5c41b015d665cda9b8\" successfully" Jan 29 11:32:41.035674 containerd[1484]: time="2025-01-29T11:32:41.035672013Z" level=info msg="StopPodSandbox for \"412254714bd9d4bf93c35b4635361e829d9373a2ab247e5c41b015d665cda9b8\" returns successfully" Jan 29 11:32:41.036295 containerd[1484]: time="2025-01-29T11:32:41.036269815Z" level=info msg="StopPodSandbox for \"16f501e638a3ac116c003fcafbce0b8473a1e496ec5313be850130e687ffcd05\"" Jan 29 11:32:41.036358 containerd[1484]: time="2025-01-29T11:32:41.036341123Z" level=info msg="TearDown network for sandbox \"16f501e638a3ac116c003fcafbce0b8473a1e496ec5313be850130e687ffcd05\" successfully" Jan 29 11:32:41.036358 containerd[1484]: time="2025-01-29T11:32:41.036354728Z" level=info msg="StopPodSandbox for \"16f501e638a3ac116c003fcafbce0b8473a1e496ec5313be850130e687ffcd05\" returns successfully" Jan 29 11:32:41.037024 containerd[1484]: time="2025-01-29T11:32:41.036992798Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:csi-node-driver-7rbs5,Uid:168d8980-ba5c-4483-9146-b7dc7884186d,Namespace:calico-system,Attempt:6,}" Jan 29 11:32:41.349669 kubelet[1799]: E0129 11:32:41.349457 1799 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Jan 29 11:32:42.264469 containerd[1484]: time="2025-01-29T11:32:42.264402972Z" level=error msg="Failed to destroy network for sandbox \"d69a5086be4617be22031da2ba4a51dd38b8d89e1e7cce3813029a5f6609b513\"" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jan 29 11:32:42.265073 containerd[1484]: time="2025-01-29T11:32:42.265043945Z" level=error msg="encountered an error cleaning up failed sandbox \"d69a5086be4617be22031da2ba4a51dd38b8d89e1e7cce3813029a5f6609b513\", marking sandbox state as SANDBOX_UNKNOWN" 
error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jan 29 11:32:42.265126 containerd[1484]: time="2025-01-29T11:32:42.265105754Z" level=error msg="RunPodSandbox for &PodSandboxMetadata{Name:nginx-deployment-85f456d6dd-bccrr,Uid:7c2ca4ff-6c88-4156-83da-a023debbfb5e,Namespace:default,Attempt:6,} failed, error" error="failed to setup network for sandbox \"d69a5086be4617be22031da2ba4a51dd38b8d89e1e7cce3813029a5f6609b513\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jan 29 11:32:42.265411 kubelet[1799]: E0129 11:32:42.265363 1799 remote_runtime.go:193] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"d69a5086be4617be22031da2ba4a51dd38b8d89e1e7cce3813029a5f6609b513\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jan 29 11:32:42.265824 kubelet[1799]: E0129 11:32:42.265427 1799 kuberuntime_sandbox.go:72] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"d69a5086be4617be22031da2ba4a51dd38b8d89e1e7cce3813029a5f6609b513\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="default/nginx-deployment-85f456d6dd-bccrr" Jan 29 11:32:42.265824 kubelet[1799]: E0129 11:32:42.265449 1799 kuberuntime_manager.go:1166] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"d69a5086be4617be22031da2ba4a51dd38b8d89e1e7cce3813029a5f6609b513\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="default/nginx-deployment-85f456d6dd-bccrr" Jan 29 11:32:42.265824 kubelet[1799]: E0129 11:32:42.265492 1799 pod_workers.go:1298] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"nginx-deployment-85f456d6dd-bccrr_default(7c2ca4ff-6c88-4156-83da-a023debbfb5e)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"nginx-deployment-85f456d6dd-bccrr_default(7c2ca4ff-6c88-4156-83da-a023debbfb5e)\\\": rpc error: code = Unknown desc = failed to setup network for sandbox \\\"d69a5086be4617be22031da2ba4a51dd38b8d89e1e7cce3813029a5f6609b513\\\": plugin type=\\\"calico\\\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="default/nginx-deployment-85f456d6dd-bccrr" podUID="7c2ca4ff-6c88-4156-83da-a023debbfb5e" Jan 29 11:32:42.266740 systemd[1]: run-containerd-io.containerd.grpc.v1.cri-sandboxes-d69a5086be4617be22031da2ba4a51dd38b8d89e1e7cce3813029a5f6609b513-shm.mount: Deactivated successfully. 
Jan 29 11:32:42.271432 containerd[1484]: time="2025-01-29T11:32:42.271383261Z" level=error msg="Failed to destroy network for sandbox \"94cc6525ae3229a11269958ee8281492a355f5a391307b289717091ddbb01238\"" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jan 29 11:32:42.271924 containerd[1484]: time="2025-01-29T11:32:42.271778301Z" level=error msg="encountered an error cleaning up failed sandbox \"94cc6525ae3229a11269958ee8281492a355f5a391307b289717091ddbb01238\", marking sandbox state as SANDBOX_UNKNOWN" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jan 29 11:32:42.271924 containerd[1484]: time="2025-01-29T11:32:42.271839739Z" level=error msg="RunPodSandbox for &PodSandboxMetadata{Name:csi-node-driver-7rbs5,Uid:168d8980-ba5c-4483-9146-b7dc7884186d,Namespace:calico-system,Attempt:6,} failed, error" error="failed to setup network for sandbox \"94cc6525ae3229a11269958ee8281492a355f5a391307b289717091ddbb01238\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jan 29 11:32:42.272737 kubelet[1799]: E0129 11:32:42.272117 1799 remote_runtime.go:193] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"94cc6525ae3229a11269958ee8281492a355f5a391307b289717091ddbb01238\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jan 29 11:32:42.272737 kubelet[1799]: E0129 11:32:42.272157 1799 kuberuntime_sandbox.go:72] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"94cc6525ae3229a11269958ee8281492a355f5a391307b289717091ddbb01238\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-system/csi-node-driver-7rbs5" Jan 29 11:32:42.272737 kubelet[1799]: E0129 11:32:42.272174 1799 kuberuntime_manager.go:1166] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"94cc6525ae3229a11269958ee8281492a355f5a391307b289717091ddbb01238\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-system/csi-node-driver-7rbs5" Jan 29 11:32:42.272885 kubelet[1799]: E0129 11:32:42.272198 1799 pod_workers.go:1298] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"csi-node-driver-7rbs5_calico-system(168d8980-ba5c-4483-9146-b7dc7884186d)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"csi-node-driver-7rbs5_calico-system(168d8980-ba5c-4483-9146-b7dc7884186d)\\\": rpc error: code = Unknown desc = failed to setup network for sandbox \\\"94cc6525ae3229a11269958ee8281492a355f5a391307b289717091ddbb01238\\\": plugin type=\\\"calico\\\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" 
pod="calico-system/csi-node-driver-7rbs5" podUID="168d8980-ba5c-4483-9146-b7dc7884186d" Jan 29 11:32:42.274239 systemd[1]: run-containerd-io.containerd.grpc.v1.cri-sandboxes-94cc6525ae3229a11269958ee8281492a355f5a391307b289717091ddbb01238-shm.mount: Deactivated successfully. Jan 29 11:32:42.350616 kubelet[1799]: E0129 11:32:42.350567 1799 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Jan 29 11:32:42.403945 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount4197183194.mount: Deactivated successfully. Jan 29 11:32:42.449899 containerd[1484]: time="2025-01-29T11:32:42.449822454Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/node:v3.29.1\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jan 29 11:32:42.450720 containerd[1484]: time="2025-01-29T11:32:42.450535977Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/node:v3.29.1: active requests=0, bytes read=142742010" Jan 29 11:32:42.453771 containerd[1484]: time="2025-01-29T11:32:42.452343805Z" level=info msg="ImageCreate event name:\"sha256:feb26d4585d68e875d9bd9bd6c27ea9f2d5c9ed9ef70f8b8cb0ebb0559a1d664\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jan 29 11:32:42.455027 containerd[1484]: time="2025-01-29T11:32:42.454984014Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/node@sha256:99c3917516efe1f807a0cfdf2d14b628b7c5cc6bd8a9ee5a253154f31756bea1\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jan 29 11:32:42.455729 containerd[1484]: time="2025-01-29T11:32:42.455680815Z" level=info msg="Pulled image \"ghcr.io/flatcar/calico/node:v3.29.1\" with image id \"sha256:feb26d4585d68e875d9bd9bd6c27ea9f2d5c9ed9ef70f8b8cb0ebb0559a1d664\", repo tag \"ghcr.io/flatcar/calico/node:v3.29.1\", repo digest \"ghcr.io/flatcar/calico/node@sha256:99c3917516efe1f807a0cfdf2d14b628b7c5cc6bd8a9ee5a253154f31756bea1\", size \"142741872\" in 6.48166498s" Jan 29 11:32:42.455729 containerd[1484]: time="2025-01-29T11:32:42.455722635Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/node:v3.29.1\" returns image reference \"sha256:feb26d4585d68e875d9bd9bd6c27ea9f2d5c9ed9ef70f8b8cb0ebb0559a1d664\"" Jan 29 11:32:42.463644 containerd[1484]: time="2025-01-29T11:32:42.463599209Z" level=info msg="CreateContainer within sandbox \"1ee4676dbe9f9072fb742ee5e317b0c72187a9324d7d478f0b8e63efe2057d82\" for container &ContainerMetadata{Name:calico-node,Attempt:0,}" Jan 29 11:32:42.485382 containerd[1484]: time="2025-01-29T11:32:42.485241096Z" level=info msg="CreateContainer within sandbox \"1ee4676dbe9f9072fb742ee5e317b0c72187a9324d7d478f0b8e63efe2057d82\" for &ContainerMetadata{Name:calico-node,Attempt:0,} returns container id \"d1878da98b745ed1f0a8b271dd55e2f15eff796b759d673bc4757b5f886291f9\"" Jan 29 11:32:42.485893 containerd[1484]: time="2025-01-29T11:32:42.485813638Z" level=info msg="StartContainer for \"d1878da98b745ed1f0a8b271dd55e2f15eff796b759d673bc4757b5f886291f9\"" Jan 29 11:32:42.526897 systemd[1]: Started cri-containerd-d1878da98b745ed1f0a8b271dd55e2f15eff796b759d673bc4757b5f886291f9.scope - libcontainer container d1878da98b745ed1f0a8b271dd55e2f15eff796b759d673bc4757b5f886291f9. Jan 29 11:32:42.561244 containerd[1484]: time="2025-01-29T11:32:42.561184726Z" level=info msg="StartContainer for \"d1878da98b745ed1f0a8b271dd55e2f15eff796b759d673bc4757b5f886291f9\" returns successfully" Jan 29 11:32:42.643904 kernel: wireguard: WireGuard 1.0.0 loaded. See www.wireguard.com for information. 
Jan 29 11:32:42.644052 kernel: wireguard: Copyright (C) 2015-2019 Jason A. Donenfeld . All Rights Reserved. Jan 29 11:32:43.036982 kubelet[1799]: E0129 11:32:43.036954 1799 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jan 29 11:32:43.040508 kubelet[1799]: I0129 11:32:43.040477 1799 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="d69a5086be4617be22031da2ba4a51dd38b8d89e1e7cce3813029a5f6609b513" Jan 29 11:32:43.041118 containerd[1484]: time="2025-01-29T11:32:43.041087841Z" level=info msg="StopPodSandbox for \"d69a5086be4617be22031da2ba4a51dd38b8d89e1e7cce3813029a5f6609b513\"" Jan 29 11:32:43.041265 containerd[1484]: time="2025-01-29T11:32:43.041248128Z" level=info msg="Ensure that sandbox d69a5086be4617be22031da2ba4a51dd38b8d89e1e7cce3813029a5f6609b513 in task-service has been cleanup successfully" Jan 29 11:32:43.041528 containerd[1484]: time="2025-01-29T11:32:43.041443353Z" level=info msg="TearDown network for sandbox \"d69a5086be4617be22031da2ba4a51dd38b8d89e1e7cce3813029a5f6609b513\" successfully" Jan 29 11:32:43.041528 containerd[1484]: time="2025-01-29T11:32:43.041461418Z" level=info msg="StopPodSandbox for \"d69a5086be4617be22031da2ba4a51dd38b8d89e1e7cce3813029a5f6609b513\" returns successfully" Jan 29 11:32:43.041883 containerd[1484]: time="2025-01-29T11:32:43.041863420Z" level=info msg="StopPodSandbox for \"b5af3bf449dc596e76072a4303d7abb9d897f35a91879d6247c2a55c67d9c059\"" Jan 29 11:32:43.042125 containerd[1484]: time="2025-01-29T11:32:43.042081409Z" level=info msg="TearDown network for sandbox \"b5af3bf449dc596e76072a4303d7abb9d897f35a91879d6247c2a55c67d9c059\" successfully" Jan 29 11:32:43.042125 containerd[1484]: time="2025-01-29T11:32:43.042098612Z" level=info msg="StopPodSandbox for \"b5af3bf449dc596e76072a4303d7abb9d897f35a91879d6247c2a55c67d9c059\" returns successfully" Jan 29 11:32:43.042614 containerd[1484]: time="2025-01-29T11:32:43.042573966Z" level=info msg="StopPodSandbox for \"75a514a2254dd50be524a2ed1b22dca4ae719ba1a08352aa2192066d83ffc3bb\"" Jan 29 11:32:43.042715 containerd[1484]: time="2025-01-29T11:32:43.042694577Z" level=info msg="TearDown network for sandbox \"75a514a2254dd50be524a2ed1b22dca4ae719ba1a08352aa2192066d83ffc3bb\" successfully" Jan 29 11:32:43.042715 containerd[1484]: time="2025-01-29T11:32:43.042712481Z" level=info msg="StopPodSandbox for \"75a514a2254dd50be524a2ed1b22dca4ae719ba1a08352aa2192066d83ffc3bb\" returns successfully" Jan 29 11:32:43.043089 containerd[1484]: time="2025-01-29T11:32:43.043057134Z" level=info msg="StopPodSandbox for \"c93ece865a072c0cb14d2f1bfe13ae827efec40e9f66919ed53be157201e987f\"" Jan 29 11:32:43.043181 containerd[1484]: time="2025-01-29T11:32:43.043162196Z" level=info msg="TearDown network for sandbox \"c93ece865a072c0cb14d2f1bfe13ae827efec40e9f66919ed53be157201e987f\" successfully" Jan 29 11:32:43.043181 containerd[1484]: time="2025-01-29T11:32:43.043176122Z" level=info msg="StopPodSandbox for \"c93ece865a072c0cb14d2f1bfe13ae827efec40e9f66919ed53be157201e987f\" returns successfully" Jan 29 11:32:43.043528 containerd[1484]: time="2025-01-29T11:32:43.043465157Z" level=info msg="StopPodSandbox for \"3dc6f7d7aad1971a0324d00bc76d6e4b98dee66fff128002850bcc9e0b05f74e\"" Jan 29 11:32:43.043607 containerd[1484]: time="2025-01-29T11:32:43.043582864Z" level=info msg="TearDown network for sandbox \"3dc6f7d7aad1971a0324d00bc76d6e4b98dee66fff128002850bcc9e0b05f74e\" 
successfully" Jan 29 11:32:43.043607 containerd[1484]: time="2025-01-29T11:32:43.043601670Z" level=info msg="StopPodSandbox for \"3dc6f7d7aad1971a0324d00bc76d6e4b98dee66fff128002850bcc9e0b05f74e\" returns successfully" Jan 29 11:32:43.043973 containerd[1484]: time="2025-01-29T11:32:43.043938356Z" level=info msg="StopPodSandbox for \"921a137634cbe4a220d4eae738f0f2c5ad00d53c38e31d8425095022f60f272a\"" Jan 29 11:32:43.044097 containerd[1484]: time="2025-01-29T11:32:43.044025303Z" level=info msg="TearDown network for sandbox \"921a137634cbe4a220d4eae738f0f2c5ad00d53c38e31d8425095022f60f272a\" successfully" Jan 29 11:32:43.044097 containerd[1484]: time="2025-01-29T11:32:43.044036044Z" level=info msg="StopPodSandbox for \"921a137634cbe4a220d4eae738f0f2c5ad00d53c38e31d8425095022f60f272a\" returns successfully" Jan 29 11:32:43.044226 kubelet[1799]: I0129 11:32:43.044202 1799 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="94cc6525ae3229a11269958ee8281492a355f5a391307b289717091ddbb01238" Jan 29 11:32:43.046789 containerd[1484]: time="2025-01-29T11:32:43.044908159Z" level=info msg="StopPodSandbox for \"94cc6525ae3229a11269958ee8281492a355f5a391307b289717091ddbb01238\"" Jan 29 11:32:43.046789 containerd[1484]: time="2025-01-29T11:32:43.044975628Z" level=info msg="StopPodSandbox for \"d29d91f6c3e4b1c8ae17cab21bff6e4fe9bd910ffc7eb3fdbe04273555fde998\"" Jan 29 11:32:43.046789 containerd[1484]: time="2025-01-29T11:32:43.045062145Z" level=info msg="TearDown network for sandbox \"d29d91f6c3e4b1c8ae17cab21bff6e4fe9bd910ffc7eb3fdbe04273555fde998\" successfully" Jan 29 11:32:43.046789 containerd[1484]: time="2025-01-29T11:32:43.045072184Z" level=info msg="StopPodSandbox for \"d29d91f6c3e4b1c8ae17cab21bff6e4fe9bd910ffc7eb3fdbe04273555fde998\" returns successfully" Jan 29 11:32:43.046789 containerd[1484]: time="2025-01-29T11:32:43.045100849Z" level=info msg="Ensure that sandbox 94cc6525ae3229a11269958ee8281492a355f5a391307b289717091ddbb01238 in task-service has been cleanup successfully" Jan 29 11:32:43.046789 containerd[1484]: time="2025-01-29T11:32:43.045535284Z" level=info msg="TearDown network for sandbox \"94cc6525ae3229a11269958ee8281492a355f5a391307b289717091ddbb01238\" successfully" Jan 29 11:32:43.046789 containerd[1484]: time="2025-01-29T11:32:43.045551796Z" level=info msg="StopPodSandbox for \"94cc6525ae3229a11269958ee8281492a355f5a391307b289717091ddbb01238\" returns successfully" Jan 29 11:32:43.046789 containerd[1484]: time="2025-01-29T11:32:43.045581222Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:nginx-deployment-85f456d6dd-bccrr,Uid:7c2ca4ff-6c88-4156-83da-a023debbfb5e,Namespace:default,Attempt:7,}" Jan 29 11:32:43.046789 containerd[1484]: time="2025-01-29T11:32:43.045810924Z" level=info msg="StopPodSandbox for \"002769d5a6aa5c81b8842cd7446be2a5369fe5aa1bf1f66e4547c1e7a30a1547\"" Jan 29 11:32:43.046789 containerd[1484]: time="2025-01-29T11:32:43.045916526Z" level=info msg="TearDown network for sandbox \"002769d5a6aa5c81b8842cd7446be2a5369fe5aa1bf1f66e4547c1e7a30a1547\" successfully" Jan 29 11:32:43.046789 containerd[1484]: time="2025-01-29T11:32:43.045930533Z" level=info msg="StopPodSandbox for \"002769d5a6aa5c81b8842cd7446be2a5369fe5aa1bf1f66e4547c1e7a30a1547\" returns successfully" Jan 29 11:32:43.046789 containerd[1484]: time="2025-01-29T11:32:43.046175453Z" level=info msg="StopPodSandbox for \"9068ccd9f3ae0f6c39874f0fc07ce6e9739aeb96cb25bd1513317e5ce99db9d5\"" Jan 29 11:32:43.046789 containerd[1484]: time="2025-01-29T11:32:43.046307277Z" 
level=info msg="TearDown network for sandbox \"9068ccd9f3ae0f6c39874f0fc07ce6e9739aeb96cb25bd1513317e5ce99db9d5\" successfully" Jan 29 11:32:43.046789 containerd[1484]: time="2025-01-29T11:32:43.046318078Z" level=info msg="StopPodSandbox for \"9068ccd9f3ae0f6c39874f0fc07ce6e9739aeb96cb25bd1513317e5ce99db9d5\" returns successfully" Jan 29 11:32:43.046789 containerd[1484]: time="2025-01-29T11:32:43.046547157Z" level=info msg="StopPodSandbox for \"5de5a9b89ac8c6b5f995d32eee7fe17833d74b22a0f409de2ab8cfcb16427df8\"" Jan 29 11:32:43.046789 containerd[1484]: time="2025-01-29T11:32:43.046619586Z" level=info msg="TearDown network for sandbox \"5de5a9b89ac8c6b5f995d32eee7fe17833d74b22a0f409de2ab8cfcb16427df8\" successfully" Jan 29 11:32:43.046789 containerd[1484]: time="2025-01-29T11:32:43.046629025Z" level=info msg="StopPodSandbox for \"5de5a9b89ac8c6b5f995d32eee7fe17833d74b22a0f409de2ab8cfcb16427df8\" returns successfully" Jan 29 11:32:43.047180 containerd[1484]: time="2025-01-29T11:32:43.046922989Z" level=info msg="StopPodSandbox for \"ca328d2245f1621d4a9b8f56f547e175394c05b8b52cd90a410ce0f9cd82ea75\"" Jan 29 11:32:43.047180 containerd[1484]: time="2025-01-29T11:32:43.047040646Z" level=info msg="TearDown network for sandbox \"ca328d2245f1621d4a9b8f56f547e175394c05b8b52cd90a410ce0f9cd82ea75\" successfully" Jan 29 11:32:43.047180 containerd[1484]: time="2025-01-29T11:32:43.047052538Z" level=info msg="StopPodSandbox for \"ca328d2245f1621d4a9b8f56f547e175394c05b8b52cd90a410ce0f9cd82ea75\" returns successfully" Jan 29 11:32:43.047653 containerd[1484]: time="2025-01-29T11:32:43.047624757Z" level=info msg="StopPodSandbox for \"412254714bd9d4bf93c35b4635361e829d9373a2ab247e5c41b015d665cda9b8\"" Jan 29 11:32:43.047775 containerd[1484]: time="2025-01-29T11:32:43.047707927Z" level=info msg="TearDown network for sandbox \"412254714bd9d4bf93c35b4635361e829d9373a2ab247e5c41b015d665cda9b8\" successfully" Jan 29 11:32:43.047775 containerd[1484]: time="2025-01-29T11:32:43.047722305Z" level=info msg="StopPodSandbox for \"412254714bd9d4bf93c35b4635361e829d9373a2ab247e5c41b015d665cda9b8\" returns successfully" Jan 29 11:32:43.048328 containerd[1484]: time="2025-01-29T11:32:43.048294474Z" level=info msg="StopPodSandbox for \"16f501e638a3ac116c003fcafbce0b8473a1e496ec5313be850130e687ffcd05\"" Jan 29 11:32:43.048444 containerd[1484]: time="2025-01-29T11:32:43.048403844Z" level=info msg="TearDown network for sandbox \"16f501e638a3ac116c003fcafbce0b8473a1e496ec5313be850130e687ffcd05\" successfully" Jan 29 11:32:43.048444 containerd[1484]: time="2025-01-29T11:32:43.048439733Z" level=info msg="StopPodSandbox for \"16f501e638a3ac116c003fcafbce0b8473a1e496ec5313be850130e687ffcd05\" returns successfully" Jan 29 11:32:43.049065 containerd[1484]: time="2025-01-29T11:32:43.049039325Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:csi-node-driver-7rbs5,Uid:168d8980-ba5c-4483-9146-b7dc7884186d,Namespace:calico-system,Attempt:7,}" Jan 29 11:32:43.130419 systemd[1]: run-netns-cni\x2dc8d59b0a\x2d2e71\x2d2fc0\x2dce44\x2dde1b914a1469.mount: Deactivated successfully. Jan 29 11:32:43.130703 systemd[1]: run-netns-cni\x2d2d194422\x2db3d8\x2d206f\x2d7aa4\x2db56481ddcfb9.mount: Deactivated successfully. 
Jan 29 11:32:43.351198 kubelet[1799]: E0129 11:32:43.351056 1799 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Jan 29 11:32:43.397261 systemd-networkd[1416]: cali07b7466dba9: Link UP Jan 29 11:32:43.397515 systemd-networkd[1416]: cali07b7466dba9: Gained carrier Jan 29 11:32:43.471818 systemd-networkd[1416]: caliabc7ae8453e: Link UP Jan 29 11:32:43.472442 systemd-networkd[1416]: caliabc7ae8453e: Gained carrier Jan 29 11:32:43.472922 kubelet[1799]: I0129 11:32:43.472858 1799 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="calico-system/calico-node-bhlzn" podStartSLOduration=3.498223347 podStartE2EDuration="26.472837222s" podCreationTimestamp="2025-01-29 11:32:17 +0000 UTC" firstStartedPulling="2025-01-29 11:32:19.481844426 +0000 UTC m=+2.554183647" lastFinishedPulling="2025-01-29 11:32:42.456458311 +0000 UTC m=+25.528797522" observedRunningTime="2025-01-29 11:32:43.050424345 +0000 UTC m=+26.122763566" watchObservedRunningTime="2025-01-29 11:32:43.472837222 +0000 UTC m=+26.545176443" Jan 29 11:32:43.475958 containerd[1484]: 2025-01-29 11:32:43.109 [INFO][2942] cni-plugin/utils.go 100: File /var/lib/calico/mtu does not exist Jan 29 11:32:43.475958 containerd[1484]: 2025-01-29 11:32:43.120 [INFO][2942] cni-plugin/plugin.go 325: Calico CNI found existing endpoint: &{{WorkloadEndpoint projectcalico.org/v3} {10.0.0.79-k8s-csi--node--driver--7rbs5-eth0 csi-node-driver- calico-system 168d8980-ba5c-4483-9146-b7dc7884186d 694 0 2025-01-29 11:32:17 +0000 UTC map[app.kubernetes.io/name:csi-node-driver controller-revision-hash:65bf684474 k8s-app:csi-node-driver name:csi-node-driver pod-template-generation:1 projectcalico.org/namespace:calico-system projectcalico.org/orchestrator:k8s projectcalico.org/serviceaccount:csi-node-driver] map[] [] [] []} {k8s 10.0.0.79 csi-node-driver-7rbs5 eth0 csi-node-driver [] [] [kns.calico-system ksa.calico-system.csi-node-driver] cali07b7466dba9 [] []}} ContainerID="f41182c9c76bd71a0faf3b483c90aa5e567cec7a374b95412f72f12d1ebab60d" Namespace="calico-system" Pod="csi-node-driver-7rbs5" WorkloadEndpoint="10.0.0.79-k8s-csi--node--driver--7rbs5-" Jan 29 11:32:43.475958 containerd[1484]: 2025-01-29 11:32:43.120 [INFO][2942] cni-plugin/k8s.go 77: Extracted identifiers for CmdAddK8s ContainerID="f41182c9c76bd71a0faf3b483c90aa5e567cec7a374b95412f72f12d1ebab60d" Namespace="calico-system" Pod="csi-node-driver-7rbs5" WorkloadEndpoint="10.0.0.79-k8s-csi--node--driver--7rbs5-eth0" Jan 29 11:32:43.475958 containerd[1484]: 2025-01-29 11:32:43.155 [INFO][2961] ipam/ipam_plugin.go 225: Calico CNI IPAM request count IPv4=1 IPv6=0 ContainerID="f41182c9c76bd71a0faf3b483c90aa5e567cec7a374b95412f72f12d1ebab60d" HandleID="k8s-pod-network.f41182c9c76bd71a0faf3b483c90aa5e567cec7a374b95412f72f12d1ebab60d" Workload="10.0.0.79-k8s-csi--node--driver--7rbs5-eth0" Jan 29 11:32:43.475958 containerd[1484]: 2025-01-29 11:32:43.168 [INFO][2961] ipam/ipam_plugin.go 265: Auto assigning IP ContainerID="f41182c9c76bd71a0faf3b483c90aa5e567cec7a374b95412f72f12d1ebab60d" HandleID="k8s-pod-network.f41182c9c76bd71a0faf3b483c90aa5e567cec7a374b95412f72f12d1ebab60d" Workload="10.0.0.79-k8s-csi--node--driver--7rbs5-eth0" assignArgs=ipam.AutoAssignArgs{Num4:1, Num6:0, HandleID:(*string)(0xc0003e1590), Attrs:map[string]string{"namespace":"calico-system", "node":"10.0.0.79", "pod":"csi-node-driver-7rbs5", "timestamp":"2025-01-29 11:32:43.155735807 +0000 UTC"}, Hostname:"10.0.0.79", IPv4Pools:[]net.IPNet{}, 
IPv6Pools:[]net.IPNet{}, MaxBlocksPerHost:0, HostReservedAttrIPv4s:(*ipam.HostReservedAttr)(nil), HostReservedAttrIPv6s:(*ipam.HostReservedAttr)(nil), IntendedUse:"Workload"} Jan 29 11:32:43.475958 containerd[1484]: 2025-01-29 11:32:43.168 [INFO][2961] ipam/ipam_plugin.go 353: About to acquire host-wide IPAM lock. Jan 29 11:32:43.475958 containerd[1484]: 2025-01-29 11:32:43.168 [INFO][2961] ipam/ipam_plugin.go 368: Acquired host-wide IPAM lock. Jan 29 11:32:43.475958 containerd[1484]: 2025-01-29 11:32:43.168 [INFO][2961] ipam/ipam.go 107: Auto-assign 1 ipv4, 0 ipv6 addrs for host '10.0.0.79' Jan 29 11:32:43.475958 containerd[1484]: 2025-01-29 11:32:43.170 [INFO][2961] ipam/ipam.go 660: Looking up existing affinities for host handle="k8s-pod-network.f41182c9c76bd71a0faf3b483c90aa5e567cec7a374b95412f72f12d1ebab60d" host="10.0.0.79" Jan 29 11:32:43.475958 containerd[1484]: 2025-01-29 11:32:43.173 [INFO][2961] ipam/ipam.go 372: Looking up existing affinities for host host="10.0.0.79" Jan 29 11:32:43.475958 containerd[1484]: 2025-01-29 11:32:43.176 [INFO][2961] ipam/ipam.go 489: Trying affinity for 192.168.68.192/26 host="10.0.0.79" Jan 29 11:32:43.475958 containerd[1484]: 2025-01-29 11:32:43.178 [INFO][2961] ipam/ipam.go 155: Attempting to load block cidr=192.168.68.192/26 host="10.0.0.79" Jan 29 11:32:43.475958 containerd[1484]: 2025-01-29 11:32:43.180 [INFO][2961] ipam/ipam.go 232: Affinity is confirmed and block has been loaded cidr=192.168.68.192/26 host="10.0.0.79" Jan 29 11:32:43.475958 containerd[1484]: 2025-01-29 11:32:43.180 [INFO][2961] ipam/ipam.go 1180: Attempting to assign 1 addresses from block block=192.168.68.192/26 handle="k8s-pod-network.f41182c9c76bd71a0faf3b483c90aa5e567cec7a374b95412f72f12d1ebab60d" host="10.0.0.79" Jan 29 11:32:43.475958 containerd[1484]: 2025-01-29 11:32:43.183 [INFO][2961] ipam/ipam.go 1685: Creating new handle: k8s-pod-network.f41182c9c76bd71a0faf3b483c90aa5e567cec7a374b95412f72f12d1ebab60d Jan 29 11:32:43.475958 containerd[1484]: 2025-01-29 11:32:43.367 [INFO][2961] ipam/ipam.go 1203: Writing block in order to claim IPs block=192.168.68.192/26 handle="k8s-pod-network.f41182c9c76bd71a0faf3b483c90aa5e567cec7a374b95412f72f12d1ebab60d" host="10.0.0.79" Jan 29 11:32:43.475958 containerd[1484]: 2025-01-29 11:32:43.385 [INFO][2961] ipam/ipam.go 1216: Successfully claimed IPs: [192.168.68.193/26] block=192.168.68.192/26 handle="k8s-pod-network.f41182c9c76bd71a0faf3b483c90aa5e567cec7a374b95412f72f12d1ebab60d" host="10.0.0.79" Jan 29 11:32:43.475958 containerd[1484]: 2025-01-29 11:32:43.385 [INFO][2961] ipam/ipam.go 847: Auto-assigned 1 out of 1 IPv4s: [192.168.68.193/26] handle="k8s-pod-network.f41182c9c76bd71a0faf3b483c90aa5e567cec7a374b95412f72f12d1ebab60d" host="10.0.0.79" Jan 29 11:32:43.475958 containerd[1484]: 2025-01-29 11:32:43.385 [INFO][2961] ipam/ipam_plugin.go 374: Released host-wide IPAM lock. 
Jan 29 11:32:43.475958 containerd[1484]: 2025-01-29 11:32:43.385 [INFO][2961] ipam/ipam_plugin.go 283: Calico CNI IPAM assigned addresses IPv4=[192.168.68.193/26] IPv6=[] ContainerID="f41182c9c76bd71a0faf3b483c90aa5e567cec7a374b95412f72f12d1ebab60d" HandleID="k8s-pod-network.f41182c9c76bd71a0faf3b483c90aa5e567cec7a374b95412f72f12d1ebab60d" Workload="10.0.0.79-k8s-csi--node--driver--7rbs5-eth0" Jan 29 11:32:43.476894 containerd[1484]: 2025-01-29 11:32:43.389 [INFO][2942] cni-plugin/k8s.go 386: Populated endpoint ContainerID="f41182c9c76bd71a0faf3b483c90aa5e567cec7a374b95412f72f12d1ebab60d" Namespace="calico-system" Pod="csi-node-driver-7rbs5" WorkloadEndpoint="10.0.0.79-k8s-csi--node--driver--7rbs5-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"10.0.0.79-k8s-csi--node--driver--7rbs5-eth0", GenerateName:"csi-node-driver-", Namespace:"calico-system", SelfLink:"", UID:"168d8980-ba5c-4483-9146-b7dc7884186d", ResourceVersion:"694", Generation:0, CreationTimestamp:time.Date(2025, time.January, 29, 11, 32, 17, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"app.kubernetes.io/name":"csi-node-driver", "controller-revision-hash":"65bf684474", "k8s-app":"csi-node-driver", "name":"csi-node-driver", "pod-template-generation":"1", "projectcalico.org/namespace":"calico-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"csi-node-driver"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"10.0.0.79", ContainerID:"", Pod:"csi-node-driver-7rbs5", Endpoint:"eth0", ServiceAccountName:"csi-node-driver", IPNetworks:[]string{"192.168.68.193/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-system", "ksa.calico-system.csi-node-driver"}, InterfaceName:"cali07b7466dba9", MAC:"", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil)}} Jan 29 11:32:43.476894 containerd[1484]: 2025-01-29 11:32:43.389 [INFO][2942] cni-plugin/k8s.go 387: Calico CNI using IPs: [192.168.68.193/32] ContainerID="f41182c9c76bd71a0faf3b483c90aa5e567cec7a374b95412f72f12d1ebab60d" Namespace="calico-system" Pod="csi-node-driver-7rbs5" WorkloadEndpoint="10.0.0.79-k8s-csi--node--driver--7rbs5-eth0" Jan 29 11:32:43.476894 containerd[1484]: 2025-01-29 11:32:43.389 [INFO][2942] cni-plugin/dataplane_linux.go 69: Setting the host side veth name to cali07b7466dba9 ContainerID="f41182c9c76bd71a0faf3b483c90aa5e567cec7a374b95412f72f12d1ebab60d" Namespace="calico-system" Pod="csi-node-driver-7rbs5" WorkloadEndpoint="10.0.0.79-k8s-csi--node--driver--7rbs5-eth0" Jan 29 11:32:43.476894 containerd[1484]: 2025-01-29 11:32:43.396 [INFO][2942] cni-plugin/dataplane_linux.go 508: Disabling IPv4 forwarding ContainerID="f41182c9c76bd71a0faf3b483c90aa5e567cec7a374b95412f72f12d1ebab60d" Namespace="calico-system" Pod="csi-node-driver-7rbs5" WorkloadEndpoint="10.0.0.79-k8s-csi--node--driver--7rbs5-eth0" Jan 29 11:32:43.476894 containerd[1484]: 2025-01-29 11:32:43.396 [INFO][2942] cni-plugin/k8s.go 414: Added Mac, interface name, and active container ID to endpoint ContainerID="f41182c9c76bd71a0faf3b483c90aa5e567cec7a374b95412f72f12d1ebab60d" Namespace="calico-system" Pod="csi-node-driver-7rbs5" WorkloadEndpoint="10.0.0.79-k8s-csi--node--driver--7rbs5-eth0" 
endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"10.0.0.79-k8s-csi--node--driver--7rbs5-eth0", GenerateName:"csi-node-driver-", Namespace:"calico-system", SelfLink:"", UID:"168d8980-ba5c-4483-9146-b7dc7884186d", ResourceVersion:"694", Generation:0, CreationTimestamp:time.Date(2025, time.January, 29, 11, 32, 17, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"app.kubernetes.io/name":"csi-node-driver", "controller-revision-hash":"65bf684474", "k8s-app":"csi-node-driver", "name":"csi-node-driver", "pod-template-generation":"1", "projectcalico.org/namespace":"calico-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"csi-node-driver"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"10.0.0.79", ContainerID:"f41182c9c76bd71a0faf3b483c90aa5e567cec7a374b95412f72f12d1ebab60d", Pod:"csi-node-driver-7rbs5", Endpoint:"eth0", ServiceAccountName:"csi-node-driver", IPNetworks:[]string{"192.168.68.193/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-system", "ksa.calico-system.csi-node-driver"}, InterfaceName:"cali07b7466dba9", MAC:"9a:83:78:98:aa:82", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil)}} Jan 29 11:32:43.476894 containerd[1484]: 2025-01-29 11:32:43.472 [INFO][2942] cni-plugin/k8s.go 500: Wrote updated endpoint to datastore ContainerID="f41182c9c76bd71a0faf3b483c90aa5e567cec7a374b95412f72f12d1ebab60d" Namespace="calico-system" Pod="csi-node-driver-7rbs5" WorkloadEndpoint="10.0.0.79-k8s-csi--node--driver--7rbs5-eth0" Jan 29 11:32:43.481082 containerd[1484]: 2025-01-29 11:32:43.104 [INFO][2928] cni-plugin/utils.go 100: File /var/lib/calico/mtu does not exist Jan 29 11:32:43.481082 containerd[1484]: 2025-01-29 11:32:43.117 [INFO][2928] cni-plugin/plugin.go 325: Calico CNI found existing endpoint: &{{WorkloadEndpoint projectcalico.org/v3} {10.0.0.79-k8s-nginx--deployment--85f456d6dd--bccrr-eth0 nginx-deployment-85f456d6dd- default 7c2ca4ff-6c88-4156-83da-a023debbfb5e 891 0 2025-01-29 11:32:35 +0000 UTC map[app:nginx pod-template-hash:85f456d6dd projectcalico.org/namespace:default projectcalico.org/orchestrator:k8s projectcalico.org/serviceaccount:default] map[] [] [] []} {k8s 10.0.0.79 nginx-deployment-85f456d6dd-bccrr eth0 default [] [] [kns.default ksa.default.default] caliabc7ae8453e [] []}} ContainerID="0c15ca841576a278f026c995bf8f911eecfad8003dccb5844fdef9b756b6143a" Namespace="default" Pod="nginx-deployment-85f456d6dd-bccrr" WorkloadEndpoint="10.0.0.79-k8s-nginx--deployment--85f456d6dd--bccrr-" Jan 29 11:32:43.481082 containerd[1484]: 2025-01-29 11:32:43.117 [INFO][2928] cni-plugin/k8s.go 77: Extracted identifiers for CmdAddK8s ContainerID="0c15ca841576a278f026c995bf8f911eecfad8003dccb5844fdef9b756b6143a" Namespace="default" Pod="nginx-deployment-85f456d6dd-bccrr" WorkloadEndpoint="10.0.0.79-k8s-nginx--deployment--85f456d6dd--bccrr-eth0" Jan 29 11:32:43.481082 containerd[1484]: 2025-01-29 11:32:43.162 [INFO][2960] ipam/ipam_plugin.go 225: Calico CNI IPAM request count IPv4=1 IPv6=0 ContainerID="0c15ca841576a278f026c995bf8f911eecfad8003dccb5844fdef9b756b6143a" HandleID="k8s-pod-network.0c15ca841576a278f026c995bf8f911eecfad8003dccb5844fdef9b756b6143a" 
Workload="10.0.0.79-k8s-nginx--deployment--85f456d6dd--bccrr-eth0" Jan 29 11:32:43.481082 containerd[1484]: 2025-01-29 11:32:43.168 [INFO][2960] ipam/ipam_plugin.go 265: Auto assigning IP ContainerID="0c15ca841576a278f026c995bf8f911eecfad8003dccb5844fdef9b756b6143a" HandleID="k8s-pod-network.0c15ca841576a278f026c995bf8f911eecfad8003dccb5844fdef9b756b6143a" Workload="10.0.0.79-k8s-nginx--deployment--85f456d6dd--bccrr-eth0" assignArgs=ipam.AutoAssignArgs{Num4:1, Num6:0, HandleID:(*string)(0xc0002f5c50), Attrs:map[string]string{"namespace":"default", "node":"10.0.0.79", "pod":"nginx-deployment-85f456d6dd-bccrr", "timestamp":"2025-01-29 11:32:43.16201563 +0000 UTC"}, Hostname:"10.0.0.79", IPv4Pools:[]net.IPNet{}, IPv6Pools:[]net.IPNet{}, MaxBlocksPerHost:0, HostReservedAttrIPv4s:(*ipam.HostReservedAttr)(nil), HostReservedAttrIPv6s:(*ipam.HostReservedAttr)(nil), IntendedUse:"Workload"} Jan 29 11:32:43.481082 containerd[1484]: 2025-01-29 11:32:43.168 [INFO][2960] ipam/ipam_plugin.go 353: About to acquire host-wide IPAM lock. Jan 29 11:32:43.481082 containerd[1484]: 2025-01-29 11:32:43.385 [INFO][2960] ipam/ipam_plugin.go 368: Acquired host-wide IPAM lock. Jan 29 11:32:43.481082 containerd[1484]: 2025-01-29 11:32:43.385 [INFO][2960] ipam/ipam.go 107: Auto-assign 1 ipv4, 0 ipv6 addrs for host '10.0.0.79' Jan 29 11:32:43.481082 containerd[1484]: 2025-01-29 11:32:43.387 [INFO][2960] ipam/ipam.go 660: Looking up existing affinities for host handle="k8s-pod-network.0c15ca841576a278f026c995bf8f911eecfad8003dccb5844fdef9b756b6143a" host="10.0.0.79" Jan 29 11:32:43.481082 containerd[1484]: 2025-01-29 11:32:43.390 [INFO][2960] ipam/ipam.go 372: Looking up existing affinities for host host="10.0.0.79" Jan 29 11:32:43.481082 containerd[1484]: 2025-01-29 11:32:43.393 [INFO][2960] ipam/ipam.go 489: Trying affinity for 192.168.68.192/26 host="10.0.0.79" Jan 29 11:32:43.481082 containerd[1484]: 2025-01-29 11:32:43.395 [INFO][2960] ipam/ipam.go 155: Attempting to load block cidr=192.168.68.192/26 host="10.0.0.79" Jan 29 11:32:43.481082 containerd[1484]: 2025-01-29 11:32:43.397 [INFO][2960] ipam/ipam.go 232: Affinity is confirmed and block has been loaded cidr=192.168.68.192/26 host="10.0.0.79" Jan 29 11:32:43.481082 containerd[1484]: 2025-01-29 11:32:43.397 [INFO][2960] ipam/ipam.go 1180: Attempting to assign 1 addresses from block block=192.168.68.192/26 handle="k8s-pod-network.0c15ca841576a278f026c995bf8f911eecfad8003dccb5844fdef9b756b6143a" host="10.0.0.79" Jan 29 11:32:43.481082 containerd[1484]: 2025-01-29 11:32:43.398 [INFO][2960] ipam/ipam.go 1685: Creating new handle: k8s-pod-network.0c15ca841576a278f026c995bf8f911eecfad8003dccb5844fdef9b756b6143a Jan 29 11:32:43.481082 containerd[1484]: 2025-01-29 11:32:43.434 [INFO][2960] ipam/ipam.go 1203: Writing block in order to claim IPs block=192.168.68.192/26 handle="k8s-pod-network.0c15ca841576a278f026c995bf8f911eecfad8003dccb5844fdef9b756b6143a" host="10.0.0.79" Jan 29 11:32:43.481082 containerd[1484]: 2025-01-29 11:32:43.462 [INFO][2960] ipam/ipam.go 1216: Successfully claimed IPs: [192.168.68.194/26] block=192.168.68.192/26 handle="k8s-pod-network.0c15ca841576a278f026c995bf8f911eecfad8003dccb5844fdef9b756b6143a" host="10.0.0.79" Jan 29 11:32:43.481082 containerd[1484]: 2025-01-29 11:32:43.462 [INFO][2960] ipam/ipam.go 847: Auto-assigned 1 out of 1 IPv4s: [192.168.68.194/26] handle="k8s-pod-network.0c15ca841576a278f026c995bf8f911eecfad8003dccb5844fdef9b756b6143a" host="10.0.0.79" Jan 29 11:32:43.481082 containerd[1484]: 2025-01-29 11:32:43.463 [INFO][2960] 
ipam/ipam_plugin.go 374: Released host-wide IPAM lock. Jan 29 11:32:43.481082 containerd[1484]: 2025-01-29 11:32:43.463 [INFO][2960] ipam/ipam_plugin.go 283: Calico CNI IPAM assigned addresses IPv4=[192.168.68.194/26] IPv6=[] ContainerID="0c15ca841576a278f026c995bf8f911eecfad8003dccb5844fdef9b756b6143a" HandleID="k8s-pod-network.0c15ca841576a278f026c995bf8f911eecfad8003dccb5844fdef9b756b6143a" Workload="10.0.0.79-k8s-nginx--deployment--85f456d6dd--bccrr-eth0" Jan 29 11:32:43.481687 containerd[1484]: 2025-01-29 11:32:43.465 [INFO][2928] cni-plugin/k8s.go 386: Populated endpoint ContainerID="0c15ca841576a278f026c995bf8f911eecfad8003dccb5844fdef9b756b6143a" Namespace="default" Pod="nginx-deployment-85f456d6dd-bccrr" WorkloadEndpoint="10.0.0.79-k8s-nginx--deployment--85f456d6dd--bccrr-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"10.0.0.79-k8s-nginx--deployment--85f456d6dd--bccrr-eth0", GenerateName:"nginx-deployment-85f456d6dd-", Namespace:"default", SelfLink:"", UID:"7c2ca4ff-6c88-4156-83da-a023debbfb5e", ResourceVersion:"891", Generation:0, CreationTimestamp:time.Date(2025, time.January, 29, 11, 32, 35, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"app":"nginx", "pod-template-hash":"85f456d6dd", "projectcalico.org/namespace":"default", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"default"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"10.0.0.79", ContainerID:"", Pod:"nginx-deployment-85f456d6dd-bccrr", Endpoint:"eth0", ServiceAccountName:"default", IPNetworks:[]string{"192.168.68.194/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.default", "ksa.default.default"}, InterfaceName:"caliabc7ae8453e", MAC:"", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil)}} Jan 29 11:32:43.481687 containerd[1484]: 2025-01-29 11:32:43.465 [INFO][2928] cni-plugin/k8s.go 387: Calico CNI using IPs: [192.168.68.194/32] ContainerID="0c15ca841576a278f026c995bf8f911eecfad8003dccb5844fdef9b756b6143a" Namespace="default" Pod="nginx-deployment-85f456d6dd-bccrr" WorkloadEndpoint="10.0.0.79-k8s-nginx--deployment--85f456d6dd--bccrr-eth0" Jan 29 11:32:43.481687 containerd[1484]: 2025-01-29 11:32:43.466 [INFO][2928] cni-plugin/dataplane_linux.go 69: Setting the host side veth name to caliabc7ae8453e ContainerID="0c15ca841576a278f026c995bf8f911eecfad8003dccb5844fdef9b756b6143a" Namespace="default" Pod="nginx-deployment-85f456d6dd-bccrr" WorkloadEndpoint="10.0.0.79-k8s-nginx--deployment--85f456d6dd--bccrr-eth0" Jan 29 11:32:43.481687 containerd[1484]: 2025-01-29 11:32:43.472 [INFO][2928] cni-plugin/dataplane_linux.go 508: Disabling IPv4 forwarding ContainerID="0c15ca841576a278f026c995bf8f911eecfad8003dccb5844fdef9b756b6143a" Namespace="default" Pod="nginx-deployment-85f456d6dd-bccrr" WorkloadEndpoint="10.0.0.79-k8s-nginx--deployment--85f456d6dd--bccrr-eth0" Jan 29 11:32:43.481687 containerd[1484]: 2025-01-29 11:32:43.472 [INFO][2928] cni-plugin/k8s.go 414: Added Mac, interface name, and active container ID to endpoint ContainerID="0c15ca841576a278f026c995bf8f911eecfad8003dccb5844fdef9b756b6143a" Namespace="default" Pod="nginx-deployment-85f456d6dd-bccrr" 
WorkloadEndpoint="10.0.0.79-k8s-nginx--deployment--85f456d6dd--bccrr-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"10.0.0.79-k8s-nginx--deployment--85f456d6dd--bccrr-eth0", GenerateName:"nginx-deployment-85f456d6dd-", Namespace:"default", SelfLink:"", UID:"7c2ca4ff-6c88-4156-83da-a023debbfb5e", ResourceVersion:"891", Generation:0, CreationTimestamp:time.Date(2025, time.January, 29, 11, 32, 35, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"app":"nginx", "pod-template-hash":"85f456d6dd", "projectcalico.org/namespace":"default", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"default"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"10.0.0.79", ContainerID:"0c15ca841576a278f026c995bf8f911eecfad8003dccb5844fdef9b756b6143a", Pod:"nginx-deployment-85f456d6dd-bccrr", Endpoint:"eth0", ServiceAccountName:"default", IPNetworks:[]string{"192.168.68.194/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.default", "ksa.default.default"}, InterfaceName:"caliabc7ae8453e", MAC:"ba:c5:ac:98:70:04", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil)}} Jan 29 11:32:43.481687 containerd[1484]: 2025-01-29 11:32:43.478 [INFO][2928] cni-plugin/k8s.go 500: Wrote updated endpoint to datastore ContainerID="0c15ca841576a278f026c995bf8f911eecfad8003dccb5844fdef9b756b6143a" Namespace="default" Pod="nginx-deployment-85f456d6dd-bccrr" WorkloadEndpoint="10.0.0.79-k8s-nginx--deployment--85f456d6dd--bccrr-eth0" Jan 29 11:32:43.501020 containerd[1484]: time="2025-01-29T11:32:43.500840874Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Jan 29 11:32:43.501020 containerd[1484]: time="2025-01-29T11:32:43.500899526Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Jan 29 11:32:43.501020 containerd[1484]: time="2025-01-29T11:32:43.500913844Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Jan 29 11:32:43.501891 containerd[1484]: time="2025-01-29T11:32:43.501836386Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Jan 29 11:32:43.503419 containerd[1484]: time="2025-01-29T11:32:43.502654467Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Jan 29 11:32:43.503419 containerd[1484]: time="2025-01-29T11:32:43.502713361Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Jan 29 11:32:43.503419 containerd[1484]: time="2025-01-29T11:32:43.502727228Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Jan 29 11:32:43.503419 containerd[1484]: time="2025-01-29T11:32:43.502845134Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." 
runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Jan 29 11:32:43.542880 systemd[1]: Started cri-containerd-0c15ca841576a278f026c995bf8f911eecfad8003dccb5844fdef9b756b6143a.scope - libcontainer container 0c15ca841576a278f026c995bf8f911eecfad8003dccb5844fdef9b756b6143a. Jan 29 11:32:43.547369 systemd[1]: Started cri-containerd-f41182c9c76bd71a0faf3b483c90aa5e567cec7a374b95412f72f12d1ebab60d.scope - libcontainer container f41182c9c76bd71a0faf3b483c90aa5e567cec7a374b95412f72f12d1ebab60d. Jan 29 11:32:43.558100 systemd-resolved[1328]: Failed to determine the local hostname and LLMNR/mDNS names, ignoring: No such device or address Jan 29 11:32:43.561281 systemd-resolved[1328]: Failed to determine the local hostname and LLMNR/mDNS names, ignoring: No such device or address Jan 29 11:32:43.575326 containerd[1484]: time="2025-01-29T11:32:43.574706852Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:csi-node-driver-7rbs5,Uid:168d8980-ba5c-4483-9146-b7dc7884186d,Namespace:calico-system,Attempt:7,} returns sandbox id \"f41182c9c76bd71a0faf3b483c90aa5e567cec7a374b95412f72f12d1ebab60d\"" Jan 29 11:32:43.578773 containerd[1484]: time="2025-01-29T11:32:43.578718398Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/csi:v3.29.1\"" Jan 29 11:32:43.588188 containerd[1484]: time="2025-01-29T11:32:43.588146638Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:nginx-deployment-85f456d6dd-bccrr,Uid:7c2ca4ff-6c88-4156-83da-a023debbfb5e,Namespace:default,Attempt:7,} returns sandbox id \"0c15ca841576a278f026c995bf8f911eecfad8003dccb5844fdef9b756b6143a\"" Jan 29 11:32:44.061628 kubelet[1799]: E0129 11:32:44.061579 1799 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jan 29 11:32:44.134693 systemd[1]: run-containerd-runc-k8s.io-f41182c9c76bd71a0faf3b483c90aa5e567cec7a374b95412f72f12d1ebab60d-runc.lCJ40V.mount: Deactivated successfully. Jan 29 11:32:44.145581 systemd[1]: run-containerd-runc-k8s.io-d1878da98b745ed1f0a8b271dd55e2f15eff796b759d673bc4757b5f886291f9-runc.5FZ21d.mount: Deactivated successfully. 
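The ipam/ipam.go entries from [2961] and [2960] above trace Calico's block-based IPAM for the two sandboxes: each CNI ADD acquires the host-wide IPAM lock, looks up the /26 block affine to this node (192.168.68.192/26), claims the next free address in it (.193 for csi-node-driver-7rbs5, .194 for nginx-deployment-85f456d6dd-bccrr), writes the block back to record the claim, and releases the lock. The following is a minimal, self-contained Go sketch of that claim loop under those assumptions; the types and helper names are invented for illustration and are not Calico's actual implementation.

    package main

    import (
        "fmt"
        "net/netip"
        "sync"
    )

    // block loosely mimics a Calico IPAM affinity block: a /26 owned by one
    // host plus a per-address allocation map. Illustrative only.
    type block struct {
        cidr  netip.Prefix
        inUse map[netip.Addr]bool
    }

    var ipamLock sync.Mutex // stands in for the "host-wide IPAM lock" in the log

    // claimNext returns the first free address in the block and marks it used.
    func claimNext(b *block) (netip.Addr, bool) {
        ipamLock.Lock()         // "About to acquire host-wide IPAM lock."
        defer ipamLock.Unlock() // "Released host-wide IPAM lock."
        for a := b.cidr.Addr(); b.cidr.Contains(a); a = a.Next() {
            if !b.inUse[a] {
                b.inUse[a] = true // "Writing block in order to claim IPs"
                return a, true
            }
        }
        return netip.Addr{}, false
    }

    func main() {
        b := &block{
            cidr: netip.MustParsePrefix("192.168.68.192/26"),
            // Pretend the block's first address is already reserved, so the
            // next two claims come out as .193 and .194, matching the log.
            inUse: map[netip.Addr]bool{netip.MustParseAddr("192.168.68.192"): true},
        }
        for _, pod := range []string{"csi-node-driver-7rbs5", "nginx-deployment-85f456d6dd-bccrr"} {
            if ip, ok := claimNext(b); ok {
                fmt.Printf("%s -> %s/26\n", pod, ip)
            }
        }
    }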
Jan 29 11:32:44.204796 kernel: bpftool[3231]: memfd_create() called without MFD_EXEC or MFD_NOEXEC_SEAL set Jan 29 11:32:44.351462 kubelet[1799]: E0129 11:32:44.351248 1799 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Jan 29 11:32:44.480911 systemd-networkd[1416]: vxlan.calico: Link UP Jan 29 11:32:44.480925 systemd-networkd[1416]: vxlan.calico: Gained carrier Jan 29 11:32:44.595912 systemd-networkd[1416]: caliabc7ae8453e: Gained IPv6LL Jan 29 11:32:45.108085 systemd-networkd[1416]: cali07b7466dba9: Gained IPv6LL Jan 29 11:32:45.261799 containerd[1484]: time="2025-01-29T11:32:45.261708298Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/csi:v3.29.1\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jan 29 11:32:45.265282 containerd[1484]: time="2025-01-29T11:32:45.265229181Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/csi:v3.29.1: active requests=0, bytes read=7902632" Jan 29 11:32:45.266545 containerd[1484]: time="2025-01-29T11:32:45.266510806Z" level=info msg="ImageCreate event name:\"sha256:bda8c42e04758c4f061339e213f50ccdc7502c4176fbf631aa12357e62b63540\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jan 29 11:32:45.268587 containerd[1484]: time="2025-01-29T11:32:45.268535353Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/csi@sha256:eaa7e01fb16b603c155a67b81f16992281db7f831684c7b2081d3434587a7ff3\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jan 29 11:32:45.269152 containerd[1484]: time="2025-01-29T11:32:45.269112579Z" level=info msg="Pulled image \"ghcr.io/flatcar/calico/csi:v3.29.1\" with image id \"sha256:bda8c42e04758c4f061339e213f50ccdc7502c4176fbf631aa12357e62b63540\", repo tag \"ghcr.io/flatcar/calico/csi:v3.29.1\", repo digest \"ghcr.io/flatcar/calico/csi@sha256:eaa7e01fb16b603c155a67b81f16992281db7f831684c7b2081d3434587a7ff3\", size \"9395716\" in 1.690232781s" Jan 29 11:32:45.269152 containerd[1484]: time="2025-01-29T11:32:45.269141455Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/csi:v3.29.1\" returns image reference \"sha256:bda8c42e04758c4f061339e213f50ccdc7502c4176fbf631aa12357e62b63540\"" Jan 29 11:32:45.270214 containerd[1484]: time="2025-01-29T11:32:45.270192488Z" level=info msg="PullImage \"ghcr.io/flatcar/nginx:latest\"" Jan 29 11:32:45.271241 containerd[1484]: time="2025-01-29T11:32:45.271186652Z" level=info msg="CreateContainer within sandbox \"f41182c9c76bd71a0faf3b483c90aa5e567cec7a374b95412f72f12d1ebab60d\" for container &ContainerMetadata{Name:calico-csi,Attempt:0,}" Jan 29 11:32:45.288024 containerd[1484]: time="2025-01-29T11:32:45.287969607Z" level=info msg="CreateContainer within sandbox \"f41182c9c76bd71a0faf3b483c90aa5e567cec7a374b95412f72f12d1ebab60d\" for &ContainerMetadata{Name:calico-csi,Attempt:0,} returns container id \"33248071b2419debdf78493fbbe9fca9756f6b1dddd067524802fea4601bffd4\"" Jan 29 11:32:45.288572 containerd[1484]: time="2025-01-29T11:32:45.288395292Z" level=info msg="StartContainer for \"33248071b2419debdf78493fbbe9fca9756f6b1dddd067524802fea4601bffd4\"" Jan 29 11:32:45.327910 systemd[1]: Started cri-containerd-33248071b2419debdf78493fbbe9fca9756f6b1dddd067524802fea4601bffd4.scope - libcontainer container 33248071b2419debdf78493fbbe9fca9756f6b1dddd067524802fea4601bffd4. 
Jan 29 11:32:45.352804 kubelet[1799]: E0129 11:32:45.352305 1799 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Jan 29 11:32:45.360675 containerd[1484]: time="2025-01-29T11:32:45.360566668Z" level=info msg="StartContainer for \"33248071b2419debdf78493fbbe9fca9756f6b1dddd067524802fea4601bffd4\" returns successfully" Jan 29 11:32:45.620918 systemd-networkd[1416]: vxlan.calico: Gained IPv6LL Jan 29 11:32:46.353420 kubelet[1799]: E0129 11:32:46.353358 1799 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Jan 29 11:32:47.353696 kubelet[1799]: E0129 11:32:47.353642 1799 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Jan 29 11:32:48.354184 kubelet[1799]: E0129 11:32:48.354101 1799 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Jan 29 11:32:49.255193 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount2419897686.mount: Deactivated successfully. Jan 29 11:32:49.354776 kubelet[1799]: E0129 11:32:49.354699 1799 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Jan 29 11:32:50.355474 kubelet[1799]: E0129 11:32:50.355405 1799 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Jan 29 11:32:50.656562 containerd[1484]: time="2025-01-29T11:32:50.656423137Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/nginx:latest\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jan 29 11:32:50.657335 containerd[1484]: time="2025-01-29T11:32:50.657273647Z" level=info msg="stop pulling image ghcr.io/flatcar/nginx:latest: active requests=0, bytes read=71015561" Jan 29 11:32:50.658544 containerd[1484]: time="2025-01-29T11:32:50.658506725Z" level=info msg="ImageCreate event name:\"sha256:0dcfd986e814f68db775fba6b61fbaec3761562dc2ae3043d38dbff123e1bb1e\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jan 29 11:32:50.660968 containerd[1484]: time="2025-01-29T11:32:50.660936472Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/nginx@sha256:2ffeb5a7ca68f2017f0bc48251750a6e40fcd3c341b94a22fc7812dcabbb84db\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jan 29 11:32:50.661862 containerd[1484]: time="2025-01-29T11:32:50.661829622Z" level=info msg="Pulled image \"ghcr.io/flatcar/nginx:latest\" with image id \"sha256:0dcfd986e814f68db775fba6b61fbaec3761562dc2ae3043d38dbff123e1bb1e\", repo tag \"ghcr.io/flatcar/nginx:latest\", repo digest \"ghcr.io/flatcar/nginx@sha256:2ffeb5a7ca68f2017f0bc48251750a6e40fcd3c341b94a22fc7812dcabbb84db\", size \"71015439\" in 5.391526964s" Jan 29 11:32:50.661862 containerd[1484]: time="2025-01-29T11:32:50.661861173Z" level=info msg="PullImage \"ghcr.io/flatcar/nginx:latest\" returns image reference \"sha256:0dcfd986e814f68db775fba6b61fbaec3761562dc2ae3043d38dbff123e1bb1e\"" Jan 29 11:32:50.662963 containerd[1484]: time="2025-01-29T11:32:50.662782948Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/node-driver-registrar:v3.29.1\"" Jan 29 11:32:50.663805 containerd[1484]: time="2025-01-29T11:32:50.663778123Z" level=info msg="CreateContainer within sandbox \"0c15ca841576a278f026c995bf8f911eecfad8003dccb5844fdef9b756b6143a\" for container &ContainerMetadata{Name:nginx,Attempt:0,}" Jan 29 11:32:50.679387 containerd[1484]: 
time="2025-01-29T11:32:50.679330120Z" level=info msg="CreateContainer within sandbox \"0c15ca841576a278f026c995bf8f911eecfad8003dccb5844fdef9b756b6143a\" for &ContainerMetadata{Name:nginx,Attempt:0,} returns container id \"7d9b52eac02d83dcfefb3c809985a82fe784781945b27f0fab6f3b9aad6b511a\"" Jan 29 11:32:50.679803 containerd[1484]: time="2025-01-29T11:32:50.679760740Z" level=info msg="StartContainer for \"7d9b52eac02d83dcfefb3c809985a82fe784781945b27f0fab6f3b9aad6b511a\"" Jan 29 11:32:50.754975 systemd[1]: Started cri-containerd-7d9b52eac02d83dcfefb3c809985a82fe784781945b27f0fab6f3b9aad6b511a.scope - libcontainer container 7d9b52eac02d83dcfefb3c809985a82fe784781945b27f0fab6f3b9aad6b511a. Jan 29 11:32:50.782398 containerd[1484]: time="2025-01-29T11:32:50.782349654Z" level=info msg="StartContainer for \"7d9b52eac02d83dcfefb3c809985a82fe784781945b27f0fab6f3b9aad6b511a\" returns successfully" Jan 29 11:32:51.126622 kubelet[1799]: I0129 11:32:51.126547 1799 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="default/nginx-deployment-85f456d6dd-bccrr" podStartSLOduration=9.053416907 podStartE2EDuration="16.126532177s" podCreationTimestamp="2025-01-29 11:32:35 +0000 UTC" firstStartedPulling="2025-01-29 11:32:43.589520458 +0000 UTC m=+26.661859679" lastFinishedPulling="2025-01-29 11:32:50.662635738 +0000 UTC m=+33.734974949" observedRunningTime="2025-01-29 11:32:51.126393102 +0000 UTC m=+34.198732323" watchObservedRunningTime="2025-01-29 11:32:51.126532177 +0000 UTC m=+34.198871398" Jan 29 11:32:51.355641 kubelet[1799]: E0129 11:32:51.355593 1799 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Jan 29 11:32:52.221428 containerd[1484]: time="2025-01-29T11:32:52.221353016Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/node-driver-registrar:v3.29.1\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jan 29 11:32:52.222773 containerd[1484]: time="2025-01-29T11:32:52.222623862Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/node-driver-registrar:v3.29.1: active requests=0, bytes read=10501081" Jan 29 11:32:52.224695 containerd[1484]: time="2025-01-29T11:32:52.224645715Z" level=info msg="ImageCreate event name:\"sha256:8b7d18f262d5cf6a6343578ad0db68a140c4c9989d9e02c58c27cb5d2c70320f\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jan 29 11:32:52.229489 containerd[1484]: time="2025-01-29T11:32:52.229430360Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/node-driver-registrar@sha256:a338da9488cbaa83c78457c3d7354d84149969c0480e88dd768e036632ff5b76\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jan 29 11:32:52.230223 containerd[1484]: time="2025-01-29T11:32:52.230178191Z" level=info msg="Pulled image \"ghcr.io/flatcar/calico/node-driver-registrar:v3.29.1\" with image id \"sha256:8b7d18f262d5cf6a6343578ad0db68a140c4c9989d9e02c58c27cb5d2c70320f\", repo tag \"ghcr.io/flatcar/calico/node-driver-registrar:v3.29.1\", repo digest \"ghcr.io/flatcar/calico/node-driver-registrar@sha256:a338da9488cbaa83c78457c3d7354d84149969c0480e88dd768e036632ff5b76\", size \"11994117\" in 1.567363041s" Jan 29 11:32:52.230223 containerd[1484]: time="2025-01-29T11:32:52.230217747Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/node-driver-registrar:v3.29.1\" returns image reference \"sha256:8b7d18f262d5cf6a6343578ad0db68a140c4c9989d9e02c58c27cb5d2c70320f\"" Jan 29 11:32:52.232771 containerd[1484]: time="2025-01-29T11:32:52.232724672Z" level=info 
msg="CreateContainer within sandbox \"f41182c9c76bd71a0faf3b483c90aa5e567cec7a374b95412f72f12d1ebab60d\" for container &ContainerMetadata{Name:csi-node-driver-registrar,Attempt:0,}" Jan 29 11:32:52.259472 containerd[1484]: time="2025-01-29T11:32:52.259396150Z" level=info msg="CreateContainer within sandbox \"f41182c9c76bd71a0faf3b483c90aa5e567cec7a374b95412f72f12d1ebab60d\" for &ContainerMetadata{Name:csi-node-driver-registrar,Attempt:0,} returns container id \"db6d86e71a527476a7567ef8be762fd1782f832bb69ee4b5f508afe4988e3ac1\"" Jan 29 11:32:52.260022 containerd[1484]: time="2025-01-29T11:32:52.259983166Z" level=info msg="StartContainer for \"db6d86e71a527476a7567ef8be762fd1782f832bb69ee4b5f508afe4988e3ac1\"" Jan 29 11:32:52.292043 systemd[1]: Started cri-containerd-db6d86e71a527476a7567ef8be762fd1782f832bb69ee4b5f508afe4988e3ac1.scope - libcontainer container db6d86e71a527476a7567ef8be762fd1782f832bb69ee4b5f508afe4988e3ac1. Jan 29 11:32:52.356324 kubelet[1799]: E0129 11:32:52.356249 1799 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Jan 29 11:32:52.374362 containerd[1484]: time="2025-01-29T11:32:52.374274584Z" level=info msg="StartContainer for \"db6d86e71a527476a7567ef8be762fd1782f832bb69ee4b5f508afe4988e3ac1\" returns successfully" Jan 29 11:32:52.982833 kubelet[1799]: I0129 11:32:52.982793 1799 csi_plugin.go:100] kubernetes.io/csi: Trying to validate a new CSI Driver with name: csi.tigera.io endpoint: /var/lib/kubelet/plugins/csi.tigera.io/csi.sock versions: 1.0.0 Jan 29 11:32:52.982833 kubelet[1799]: I0129 11:32:52.982827 1799 csi_plugin.go:113] kubernetes.io/csi: Register new plugin with name: csi.tigera.io at endpoint: /var/lib/kubelet/plugins/csi.tigera.io/csi.sock Jan 29 11:32:53.113433 kubelet[1799]: I0129 11:32:53.113361 1799 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="calico-system/csi-node-driver-7rbs5" podStartSLOduration=27.460137728 podStartE2EDuration="36.113343707s" podCreationTimestamp="2025-01-29 11:32:17 +0000 UTC" firstStartedPulling="2025-01-29 11:32:43.577955071 +0000 UTC m=+26.650294292" lastFinishedPulling="2025-01-29 11:32:52.23116105 +0000 UTC m=+35.303500271" observedRunningTime="2025-01-29 11:32:53.112863536 +0000 UTC m=+36.185202757" watchObservedRunningTime="2025-01-29 11:32:53.113343707 +0000 UTC m=+36.185682928" Jan 29 11:32:53.214589 kubelet[1799]: I0129 11:32:53.214530 1799 topology_manager.go:215] "Topology Admit Handler" podUID="a6d551f2-02e5-406a-9395-a858f9d284fe" podNamespace="default" podName="nfs-server-provisioner-0" Jan 29 11:32:53.220742 systemd[1]: Created slice kubepods-besteffort-poda6d551f2_02e5_406a_9395_a858f9d284fe.slice - libcontainer container kubepods-besteffort-poda6d551f2_02e5_406a_9395_a858f9d284fe.slice. 
Jan 29 11:32:53.356838 kubelet[1799]: E0129 11:32:53.356663 1799 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Jan 29 11:32:53.399085 kubelet[1799]: I0129 11:32:53.399031 1799 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-xqtrb\" (UniqueName: \"kubernetes.io/projected/a6d551f2-02e5-406a-9395-a858f9d284fe-kube-api-access-xqtrb\") pod \"nfs-server-provisioner-0\" (UID: \"a6d551f2-02e5-406a-9395-a858f9d284fe\") " pod="default/nfs-server-provisioner-0" Jan 29 11:32:53.399085 kubelet[1799]: I0129 11:32:53.399074 1799 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"data\" (UniqueName: \"kubernetes.io/empty-dir/a6d551f2-02e5-406a-9395-a858f9d284fe-data\") pod \"nfs-server-provisioner-0\" (UID: \"a6d551f2-02e5-406a-9395-a858f9d284fe\") " pod="default/nfs-server-provisioner-0" Jan 29 11:32:53.524608 containerd[1484]: time="2025-01-29T11:32:53.524540206Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:nfs-server-provisioner-0,Uid:a6d551f2-02e5-406a-9395-a858f9d284fe,Namespace:default,Attempt:0,}" Jan 29 11:32:53.644501 systemd-networkd[1416]: cali60e51b789ff: Link UP Jan 29 11:32:53.644707 systemd-networkd[1416]: cali60e51b789ff: Gained carrier Jan 29 11:32:53.665877 containerd[1484]: 2025-01-29 11:32:53.570 [INFO][3493] cni-plugin/plugin.go 325: Calico CNI found existing endpoint: &{{WorkloadEndpoint projectcalico.org/v3} {10.0.0.79-k8s-nfs--server--provisioner--0-eth0 nfs-server-provisioner- default a6d551f2-02e5-406a-9395-a858f9d284fe 1140 0 2025-01-29 11:32:53 +0000 UTC map[app:nfs-server-provisioner apps.kubernetes.io/pod-index:0 chart:nfs-server-provisioner-1.8.0 controller-revision-hash:nfs-server-provisioner-d5cbb7f57 heritage:Helm projectcalico.org/namespace:default projectcalico.org/orchestrator:k8s projectcalico.org/serviceaccount:nfs-server-provisioner release:nfs-server-provisioner statefulset.kubernetes.io/pod-name:nfs-server-provisioner-0] map[] [] [] []} {k8s 10.0.0.79 nfs-server-provisioner-0 eth0 nfs-server-provisioner [] [] [kns.default ksa.default.nfs-server-provisioner] cali60e51b789ff [{nfs TCP 2049 0 } {nfs-udp UDP 2049 0 } {nlockmgr TCP 32803 0 } {nlockmgr-udp UDP 32803 0 } {mountd TCP 20048 0 } {mountd-udp UDP 20048 0 } {rquotad TCP 875 0 } {rquotad-udp UDP 875 0 } {rpcbind TCP 111 0 } {rpcbind-udp UDP 111 0 } {statd TCP 662 0 } {statd-udp UDP 662 0 }] []}} ContainerID="7cbdf73a1dc5e280b18dfdef84c1c8357ea675f1f4fb75ffe2cb79cb36dc5ca0" Namespace="default" Pod="nfs-server-provisioner-0" WorkloadEndpoint="10.0.0.79-k8s-nfs--server--provisioner--0-" Jan 29 11:32:53.665877 containerd[1484]: 2025-01-29 11:32:53.570 [INFO][3493] cni-plugin/k8s.go 77: Extracted identifiers for CmdAddK8s ContainerID="7cbdf73a1dc5e280b18dfdef84c1c8357ea675f1f4fb75ffe2cb79cb36dc5ca0" Namespace="default" Pod="nfs-server-provisioner-0" WorkloadEndpoint="10.0.0.79-k8s-nfs--server--provisioner--0-eth0" Jan 29 11:32:53.665877 containerd[1484]: 2025-01-29 11:32:53.599 [INFO][3507] ipam/ipam_plugin.go 225: Calico CNI IPAM request count IPv4=1 IPv6=0 ContainerID="7cbdf73a1dc5e280b18dfdef84c1c8357ea675f1f4fb75ffe2cb79cb36dc5ca0" HandleID="k8s-pod-network.7cbdf73a1dc5e280b18dfdef84c1c8357ea675f1f4fb75ffe2cb79cb36dc5ca0" Workload="10.0.0.79-k8s-nfs--server--provisioner--0-eth0" Jan 29 11:32:53.665877 containerd[1484]: 2025-01-29 11:32:53.609 [INFO][3507] ipam/ipam_plugin.go 265: Auto assigning IP 
ContainerID="7cbdf73a1dc5e280b18dfdef84c1c8357ea675f1f4fb75ffe2cb79cb36dc5ca0" HandleID="k8s-pod-network.7cbdf73a1dc5e280b18dfdef84c1c8357ea675f1f4fb75ffe2cb79cb36dc5ca0" Workload="10.0.0.79-k8s-nfs--server--provisioner--0-eth0" assignArgs=ipam.AutoAssignArgs{Num4:1, Num6:0, HandleID:(*string)(0xc0003090e0), Attrs:map[string]string{"namespace":"default", "node":"10.0.0.79", "pod":"nfs-server-provisioner-0", "timestamp":"2025-01-29 11:32:53.599429328 +0000 UTC"}, Hostname:"10.0.0.79", IPv4Pools:[]net.IPNet{}, IPv6Pools:[]net.IPNet{}, MaxBlocksPerHost:0, HostReservedAttrIPv4s:(*ipam.HostReservedAttr)(nil), HostReservedAttrIPv6s:(*ipam.HostReservedAttr)(nil), IntendedUse:"Workload"} Jan 29 11:32:53.665877 containerd[1484]: 2025-01-29 11:32:53.609 [INFO][3507] ipam/ipam_plugin.go 353: About to acquire host-wide IPAM lock. Jan 29 11:32:53.665877 containerd[1484]: 2025-01-29 11:32:53.609 [INFO][3507] ipam/ipam_plugin.go 368: Acquired host-wide IPAM lock. Jan 29 11:32:53.665877 containerd[1484]: 2025-01-29 11:32:53.609 [INFO][3507] ipam/ipam.go 107: Auto-assign 1 ipv4, 0 ipv6 addrs for host '10.0.0.79' Jan 29 11:32:53.665877 containerd[1484]: 2025-01-29 11:32:53.611 [INFO][3507] ipam/ipam.go 660: Looking up existing affinities for host handle="k8s-pod-network.7cbdf73a1dc5e280b18dfdef84c1c8357ea675f1f4fb75ffe2cb79cb36dc5ca0" host="10.0.0.79" Jan 29 11:32:53.665877 containerd[1484]: 2025-01-29 11:32:53.616 [INFO][3507] ipam/ipam.go 372: Looking up existing affinities for host host="10.0.0.79" Jan 29 11:32:53.665877 containerd[1484]: 2025-01-29 11:32:53.622 [INFO][3507] ipam/ipam.go 489: Trying affinity for 192.168.68.192/26 host="10.0.0.79" Jan 29 11:32:53.665877 containerd[1484]: 2025-01-29 11:32:53.623 [INFO][3507] ipam/ipam.go 155: Attempting to load block cidr=192.168.68.192/26 host="10.0.0.79" Jan 29 11:32:53.665877 containerd[1484]: 2025-01-29 11:32:53.626 [INFO][3507] ipam/ipam.go 232: Affinity is confirmed and block has been loaded cidr=192.168.68.192/26 host="10.0.0.79" Jan 29 11:32:53.665877 containerd[1484]: 2025-01-29 11:32:53.626 [INFO][3507] ipam/ipam.go 1180: Attempting to assign 1 addresses from block block=192.168.68.192/26 handle="k8s-pod-network.7cbdf73a1dc5e280b18dfdef84c1c8357ea675f1f4fb75ffe2cb79cb36dc5ca0" host="10.0.0.79" Jan 29 11:32:53.665877 containerd[1484]: 2025-01-29 11:32:53.628 [INFO][3507] ipam/ipam.go 1685: Creating new handle: k8s-pod-network.7cbdf73a1dc5e280b18dfdef84c1c8357ea675f1f4fb75ffe2cb79cb36dc5ca0 Jan 29 11:32:53.665877 containerd[1484]: 2025-01-29 11:32:53.632 [INFO][3507] ipam/ipam.go 1203: Writing block in order to claim IPs block=192.168.68.192/26 handle="k8s-pod-network.7cbdf73a1dc5e280b18dfdef84c1c8357ea675f1f4fb75ffe2cb79cb36dc5ca0" host="10.0.0.79" Jan 29 11:32:53.665877 containerd[1484]: 2025-01-29 11:32:53.638 [INFO][3507] ipam/ipam.go 1216: Successfully claimed IPs: [192.168.68.195/26] block=192.168.68.192/26 handle="k8s-pod-network.7cbdf73a1dc5e280b18dfdef84c1c8357ea675f1f4fb75ffe2cb79cb36dc5ca0" host="10.0.0.79" Jan 29 11:32:53.665877 containerd[1484]: 2025-01-29 11:32:53.638 [INFO][3507] ipam/ipam.go 847: Auto-assigned 1 out of 1 IPv4s: [192.168.68.195/26] handle="k8s-pod-network.7cbdf73a1dc5e280b18dfdef84c1c8357ea675f1f4fb75ffe2cb79cb36dc5ca0" host="10.0.0.79" Jan 29 11:32:53.665877 containerd[1484]: 2025-01-29 11:32:53.638 [INFO][3507] ipam/ipam_plugin.go 374: Released host-wide IPAM lock. 
Jan 29 11:32:53.665877 containerd[1484]: 2025-01-29 11:32:53.638 [INFO][3507] ipam/ipam_plugin.go 283: Calico CNI IPAM assigned addresses IPv4=[192.168.68.195/26] IPv6=[] ContainerID="7cbdf73a1dc5e280b18dfdef84c1c8357ea675f1f4fb75ffe2cb79cb36dc5ca0" HandleID="k8s-pod-network.7cbdf73a1dc5e280b18dfdef84c1c8357ea675f1f4fb75ffe2cb79cb36dc5ca0" Workload="10.0.0.79-k8s-nfs--server--provisioner--0-eth0" Jan 29 11:32:53.666633 containerd[1484]: 2025-01-29 11:32:53.641 [INFO][3493] cni-plugin/k8s.go 386: Populated endpoint ContainerID="7cbdf73a1dc5e280b18dfdef84c1c8357ea675f1f4fb75ffe2cb79cb36dc5ca0" Namespace="default" Pod="nfs-server-provisioner-0" WorkloadEndpoint="10.0.0.79-k8s-nfs--server--provisioner--0-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"10.0.0.79-k8s-nfs--server--provisioner--0-eth0", GenerateName:"nfs-server-provisioner-", Namespace:"default", SelfLink:"", UID:"a6d551f2-02e5-406a-9395-a858f9d284fe", ResourceVersion:"1140", Generation:0, CreationTimestamp:time.Date(2025, time.January, 29, 11, 32, 53, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"app":"nfs-server-provisioner", "apps.kubernetes.io/pod-index":"0", "chart":"nfs-server-provisioner-1.8.0", "controller-revision-hash":"nfs-server-provisioner-d5cbb7f57", "heritage":"Helm", "projectcalico.org/namespace":"default", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"nfs-server-provisioner", "release":"nfs-server-provisioner", "statefulset.kubernetes.io/pod-name":"nfs-server-provisioner-0"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"10.0.0.79", ContainerID:"", Pod:"nfs-server-provisioner-0", Endpoint:"eth0", ServiceAccountName:"nfs-server-provisioner", IPNetworks:[]string{"192.168.68.195/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.default", "ksa.default.nfs-server-provisioner"}, InterfaceName:"cali60e51b789ff", MAC:"", Ports:[]v3.WorkloadEndpointPort{v3.WorkloadEndpointPort{Name:"nfs", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x801, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"nfs-udp", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"UDP"}, Port:0x801, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"nlockmgr", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x8023, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"nlockmgr-udp", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"UDP"}, Port:0x8023, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"mountd", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x4e50, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"mountd-udp", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"UDP"}, Port:0x4e50, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"rquotad", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x36b, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"rquotad-udp", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"UDP"}, Port:0x36b, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"rpcbind", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x6f, HostPort:0x0, HostIP:""}, 
v3.WorkloadEndpointPort{Name:"rpcbind-udp", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"UDP"}, Port:0x6f, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"statd", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x296, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"statd-udp", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"UDP"}, Port:0x296, HostPort:0x0, HostIP:""}}, AllowSpoofedSourcePrefixes:[]string(nil)}} Jan 29 11:32:53.666633 containerd[1484]: 2025-01-29 11:32:53.641 [INFO][3493] cni-plugin/k8s.go 387: Calico CNI using IPs: [192.168.68.195/32] ContainerID="7cbdf73a1dc5e280b18dfdef84c1c8357ea675f1f4fb75ffe2cb79cb36dc5ca0" Namespace="default" Pod="nfs-server-provisioner-0" WorkloadEndpoint="10.0.0.79-k8s-nfs--server--provisioner--0-eth0" Jan 29 11:32:53.666633 containerd[1484]: 2025-01-29 11:32:53.641 [INFO][3493] cni-plugin/dataplane_linux.go 69: Setting the host side veth name to cali60e51b789ff ContainerID="7cbdf73a1dc5e280b18dfdef84c1c8357ea675f1f4fb75ffe2cb79cb36dc5ca0" Namespace="default" Pod="nfs-server-provisioner-0" WorkloadEndpoint="10.0.0.79-k8s-nfs--server--provisioner--0-eth0" Jan 29 11:32:53.666633 containerd[1484]: 2025-01-29 11:32:53.644 [INFO][3493] cni-plugin/dataplane_linux.go 508: Disabling IPv4 forwarding ContainerID="7cbdf73a1dc5e280b18dfdef84c1c8357ea675f1f4fb75ffe2cb79cb36dc5ca0" Namespace="default" Pod="nfs-server-provisioner-0" WorkloadEndpoint="10.0.0.79-k8s-nfs--server--provisioner--0-eth0" Jan 29 11:32:53.668645 containerd[1484]: 2025-01-29 11:32:53.645 [INFO][3493] cni-plugin/k8s.go 414: Added Mac, interface name, and active container ID to endpoint ContainerID="7cbdf73a1dc5e280b18dfdef84c1c8357ea675f1f4fb75ffe2cb79cb36dc5ca0" Namespace="default" Pod="nfs-server-provisioner-0" WorkloadEndpoint="10.0.0.79-k8s-nfs--server--provisioner--0-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"10.0.0.79-k8s-nfs--server--provisioner--0-eth0", GenerateName:"nfs-server-provisioner-", Namespace:"default", SelfLink:"", UID:"a6d551f2-02e5-406a-9395-a858f9d284fe", ResourceVersion:"1140", Generation:0, CreationTimestamp:time.Date(2025, time.January, 29, 11, 32, 53, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"app":"nfs-server-provisioner", "apps.kubernetes.io/pod-index":"0", "chart":"nfs-server-provisioner-1.8.0", "controller-revision-hash":"nfs-server-provisioner-d5cbb7f57", "heritage":"Helm", "projectcalico.org/namespace":"default", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"nfs-server-provisioner", "release":"nfs-server-provisioner", "statefulset.kubernetes.io/pod-name":"nfs-server-provisioner-0"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"10.0.0.79", ContainerID:"7cbdf73a1dc5e280b18dfdef84c1c8357ea675f1f4fb75ffe2cb79cb36dc5ca0", Pod:"nfs-server-provisioner-0", Endpoint:"eth0", ServiceAccountName:"nfs-server-provisioner", IPNetworks:[]string{"192.168.68.195/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.default", "ksa.default.nfs-server-provisioner"}, InterfaceName:"cali60e51b789ff", MAC:"9e:8d:f0:72:9d:4d", Ports:[]v3.WorkloadEndpointPort{v3.WorkloadEndpointPort{Name:"nfs", Protocol:numorstring.Protocol{Type:1, 
NumVal:0x0, StrVal:"TCP"}, Port:0x801, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"nfs-udp", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"UDP"}, Port:0x801, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"nlockmgr", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x8023, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"nlockmgr-udp", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"UDP"}, Port:0x8023, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"mountd", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x4e50, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"mountd-udp", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"UDP"}, Port:0x4e50, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"rquotad", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x36b, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"rquotad-udp", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"UDP"}, Port:0x36b, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"rpcbind", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x6f, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"rpcbind-udp", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"UDP"}, Port:0x6f, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"statd", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x296, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"statd-udp", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"UDP"}, Port:0x296, HostPort:0x0, HostIP:""}}, AllowSpoofedSourcePrefixes:[]string(nil)}} Jan 29 11:32:53.668645 containerd[1484]: 2025-01-29 11:32:53.654 [INFO][3493] cni-plugin/k8s.go 500: Wrote updated endpoint to datastore ContainerID="7cbdf73a1dc5e280b18dfdef84c1c8357ea675f1f4fb75ffe2cb79cb36dc5ca0" Namespace="default" Pod="nfs-server-provisioner-0" WorkloadEndpoint="10.0.0.79-k8s-nfs--server--provisioner--0-eth0" Jan 29 11:32:54.018475 containerd[1484]: time="2025-01-29T11:32:54.018370652Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Jan 29 11:32:54.018475 containerd[1484]: time="2025-01-29T11:32:54.018435124Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Jan 29 11:32:54.019455 containerd[1484]: time="2025-01-29T11:32:54.018446956Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Jan 29 11:32:54.019551 containerd[1484]: time="2025-01-29T11:32:54.019407219Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Jan 29 11:32:54.052905 systemd[1]: Started cri-containerd-7cbdf73a1dc5e280b18dfdef84c1c8357ea675f1f4fb75ffe2cb79cb36dc5ca0.scope - libcontainer container 7cbdf73a1dc5e280b18dfdef84c1c8357ea675f1f4fb75ffe2cb79cb36dc5ca0. 
Jan 29 11:32:54.065425 systemd-resolved[1328]: Failed to determine the local hostname and LLMNR/mDNS names, ignoring: No such device or address Jan 29 11:32:54.091512 containerd[1484]: time="2025-01-29T11:32:54.091468352Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:nfs-server-provisioner-0,Uid:a6d551f2-02e5-406a-9395-a858f9d284fe,Namespace:default,Attempt:0,} returns sandbox id \"7cbdf73a1dc5e280b18dfdef84c1c8357ea675f1f4fb75ffe2cb79cb36dc5ca0\"" Jan 29 11:32:54.093322 containerd[1484]: time="2025-01-29T11:32:54.093263499Z" level=info msg="PullImage \"registry.k8s.io/sig-storage/nfs-provisioner:v4.0.8\"" Jan 29 11:32:54.140147 update_engine[1471]: I20250129 11:32:54.140056 1471 update_attempter.cc:509] Updating boot flags... Jan 29 11:32:54.164813 kernel: BTRFS warning: duplicate device /dev/vda3 devid 1 generation 38 scanned by (udev-worker) (3575) Jan 29 11:32:54.195853 kernel: BTRFS warning: duplicate device /dev/vda3 devid 1 generation 38 scanned by (udev-worker) (3518) Jan 29 11:32:54.357310 kubelet[1799]: E0129 11:32:54.357173 1799 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Jan 29 11:32:55.092888 systemd-networkd[1416]: cali60e51b789ff: Gained IPv6LL Jan 29 11:32:55.358074 kubelet[1799]: E0129 11:32:55.357849 1799 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Jan 29 11:32:56.359083 kubelet[1799]: E0129 11:32:56.359016 1799 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Jan 29 11:32:56.693252 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount2495759956.mount: Deactivated successfully. Jan 29 11:32:57.328293 kubelet[1799]: E0129 11:32:57.328244 1799 file.go:104] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Jan 29 11:32:57.361766 kubelet[1799]: E0129 11:32:57.359422 1799 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Jan 29 11:32:58.359797 kubelet[1799]: E0129 11:32:58.359676 1799 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Jan 29 11:32:58.406900 containerd[1484]: time="2025-01-29T11:32:58.406840673Z" level=info msg="ImageCreate event name:\"registry.k8s.io/sig-storage/nfs-provisioner:v4.0.8\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jan 29 11:32:58.407669 containerd[1484]: time="2025-01-29T11:32:58.407622132Z" level=info msg="stop pulling image registry.k8s.io/sig-storage/nfs-provisioner:v4.0.8: active requests=0, bytes read=91039406" Jan 29 11:32:58.408979 containerd[1484]: time="2025-01-29T11:32:58.408921061Z" level=info msg="ImageCreate event name:\"sha256:fd0b16f70b66b72bcb2f91d556fa33eba02729c44ffc5f2c16130e7f9fbed3c4\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jan 29 11:32:58.411574 containerd[1484]: time="2025-01-29T11:32:58.411532483Z" level=info msg="ImageCreate event name:\"registry.k8s.io/sig-storage/nfs-provisioner@sha256:c825f3d5e28bde099bd7a3daace28772d412c9157ad47fa752a9ad0baafc118d\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jan 29 11:32:58.412562 containerd[1484]: time="2025-01-29T11:32:58.412533227Z" level=info msg="Pulled image \"registry.k8s.io/sig-storage/nfs-provisioner:v4.0.8\" with image id \"sha256:fd0b16f70b66b72bcb2f91d556fa33eba02729c44ffc5f2c16130e7f9fbed3c4\", repo tag 
\"registry.k8s.io/sig-storage/nfs-provisioner:v4.0.8\", repo digest \"registry.k8s.io/sig-storage/nfs-provisioner@sha256:c825f3d5e28bde099bd7a3daace28772d412c9157ad47fa752a9ad0baafc118d\", size \"91036984\" in 4.319238128s" Jan 29 11:32:58.412617 containerd[1484]: time="2025-01-29T11:32:58.412564176Z" level=info msg="PullImage \"registry.k8s.io/sig-storage/nfs-provisioner:v4.0.8\" returns image reference \"sha256:fd0b16f70b66b72bcb2f91d556fa33eba02729c44ffc5f2c16130e7f9fbed3c4\"" Jan 29 11:32:58.414657 containerd[1484]: time="2025-01-29T11:32:58.414631779Z" level=info msg="CreateContainer within sandbox \"7cbdf73a1dc5e280b18dfdef84c1c8357ea675f1f4fb75ffe2cb79cb36dc5ca0\" for container &ContainerMetadata{Name:nfs-server-provisioner,Attempt:0,}" Jan 29 11:32:58.432090 containerd[1484]: time="2025-01-29T11:32:58.432046779Z" level=info msg="CreateContainer within sandbox \"7cbdf73a1dc5e280b18dfdef84c1c8357ea675f1f4fb75ffe2cb79cb36dc5ca0\" for &ContainerMetadata{Name:nfs-server-provisioner,Attempt:0,} returns container id \"9a47b7b43c06e27a2943c2631e168edccb6e08f7f8b267791309deb537548ec3\"" Jan 29 11:32:58.432560 containerd[1484]: time="2025-01-29T11:32:58.432482925Z" level=info msg="StartContainer for \"9a47b7b43c06e27a2943c2631e168edccb6e08f7f8b267791309deb537548ec3\"" Jan 29 11:32:58.462882 systemd[1]: Started cri-containerd-9a47b7b43c06e27a2943c2631e168edccb6e08f7f8b267791309deb537548ec3.scope - libcontainer container 9a47b7b43c06e27a2943c2631e168edccb6e08f7f8b267791309deb537548ec3. Jan 29 11:32:58.489613 containerd[1484]: time="2025-01-29T11:32:58.489573537Z" level=info msg="StartContainer for \"9a47b7b43c06e27a2943c2631e168edccb6e08f7f8b267791309deb537548ec3\" returns successfully" Jan 29 11:32:59.193967 kubelet[1799]: I0129 11:32:59.193873 1799 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="default/nfs-server-provisioner-0" podStartSLOduration=1.873578734 podStartE2EDuration="6.193852022s" podCreationTimestamp="2025-01-29 11:32:53 +0000 UTC" firstStartedPulling="2025-01-29 11:32:54.093056116 +0000 UTC m=+37.165395337" lastFinishedPulling="2025-01-29 11:32:58.413329404 +0000 UTC m=+41.485668625" observedRunningTime="2025-01-29 11:32:59.193180752 +0000 UTC m=+42.265519963" watchObservedRunningTime="2025-01-29 11:32:59.193852022 +0000 UTC m=+42.266191243" Jan 29 11:32:59.360671 kubelet[1799]: E0129 11:32:59.360563 1799 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Jan 29 11:33:00.361680 kubelet[1799]: E0129 11:33:00.361600 1799 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Jan 29 11:33:01.362075 kubelet[1799]: E0129 11:33:01.361984 1799 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Jan 29 11:33:02.362199 kubelet[1799]: E0129 11:33:02.362135 1799 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Jan 29 11:33:03.362904 kubelet[1799]: E0129 11:33:03.362843 1799 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Jan 29 11:33:04.363951 kubelet[1799]: E0129 11:33:04.363866 1799 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Jan 29 11:33:05.364384 kubelet[1799]: E0129 11:33:05.364300 1799 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" 
path="/etc/kubernetes/manifests" Jan 29 11:33:06.364838 kubelet[1799]: E0129 11:33:06.364783 1799 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Jan 29 11:33:07.365381 kubelet[1799]: E0129 11:33:07.365324 1799 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Jan 29 11:33:08.212023 kubelet[1799]: E0129 11:33:08.211986 1799 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jan 29 11:33:08.366117 kubelet[1799]: E0129 11:33:08.366027 1799 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Jan 29 11:33:08.619868 kubelet[1799]: I0129 11:33:08.619678 1799 topology_manager.go:215] "Topology Admit Handler" podUID="0795575e-37d5-4ed2-ae07-dc56cffb81b1" podNamespace="default" podName="test-pod-1" Jan 29 11:33:08.625773 systemd[1]: Created slice kubepods-besteffort-pod0795575e_37d5_4ed2_ae07_dc56cffb81b1.slice - libcontainer container kubepods-besteffort-pod0795575e_37d5_4ed2_ae07_dc56cffb81b1.slice. Jan 29 11:33:08.776937 kubelet[1799]: I0129 11:33:08.776875 1799 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-cxrzb\" (UniqueName: \"kubernetes.io/projected/0795575e-37d5-4ed2-ae07-dc56cffb81b1-kube-api-access-cxrzb\") pod \"test-pod-1\" (UID: \"0795575e-37d5-4ed2-ae07-dc56cffb81b1\") " pod="default/test-pod-1" Jan 29 11:33:08.776937 kubelet[1799]: I0129 11:33:08.776919 1799 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"pvc-9a0b3b4f-5ace-45d7-baa9-6a2ac3606757\" (UniqueName: \"kubernetes.io/nfs/0795575e-37d5-4ed2-ae07-dc56cffb81b1-pvc-9a0b3b4f-5ace-45d7-baa9-6a2ac3606757\") pod \"test-pod-1\" (UID: \"0795575e-37d5-4ed2-ae07-dc56cffb81b1\") " pod="default/test-pod-1" Jan 29 11:33:08.904881 kernel: FS-Cache: Loaded Jan 29 11:33:08.973207 kernel: RPC: Registered named UNIX socket transport module. Jan 29 11:33:08.973304 kernel: RPC: Registered udp transport module. Jan 29 11:33:08.973326 kernel: RPC: Registered tcp transport module. Jan 29 11:33:08.973345 kernel: RPC: Registered tcp-with-tls transport module. Jan 29 11:33:08.973909 kernel: RPC: Registered tcp NFSv4.1 backchannel transport module. 
Jan 29 11:33:09.263929 kernel: NFS: Registering the id_resolver key type
Jan 29 11:33:09.264096 kernel: Key type id_resolver registered
Jan 29 11:33:09.264130 kernel: Key type id_legacy registered
Jan 29 11:33:09.296985 nfsidmap[3742]: nss_getpwnam: name 'root@nfs-server-provisioner.default.svc.cluster.local' does not map into domain 'localdomain'
Jan 29 11:33:09.303321 nfsidmap[3745]: nss_name_to_gid: name 'root@nfs-server-provisioner.default.svc.cluster.local' does not map into domain 'localdomain'
Jan 29 11:33:09.367064 kubelet[1799]: E0129 11:33:09.366990 1799 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests"
Jan 29 11:33:09.528794 containerd[1484]: time="2025-01-29T11:33:09.528720181Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:test-pod-1,Uid:0795575e-37d5-4ed2-ae07-dc56cffb81b1,Namespace:default,Attempt:0,}"
Jan 29 11:33:09.733021 systemd-networkd[1416]: cali5ec59c6bf6e: Link UP
Jan 29 11:33:09.734724 systemd-networkd[1416]: cali5ec59c6bf6e: Gained carrier
Jan 29 11:33:09.745911 containerd[1484]: 2025-01-29 11:33:09.662 [INFO][3748] cni-plugin/plugin.go 325: Calico CNI found existing endpoint: &{{WorkloadEndpoint projectcalico.org/v3} {10.0.0.79-k8s-test--pod--1-eth0 default 0795575e-37d5-4ed2-ae07-dc56cffb81b1 1209 0 2025-01-29 11:32:53 +0000 UTC map[projectcalico.org/namespace:default projectcalico.org/orchestrator:k8s projectcalico.org/serviceaccount:default] map[] [] [] []} {k8s 10.0.0.79 test-pod-1 eth0 default [] [] [kns.default ksa.default.default] cali5ec59c6bf6e [] []}} ContainerID="de5e21583b29aa5ba19a684c6f626fb472bf06b48f7bf18666d92242ffa567e1" Namespace="default" Pod="test-pod-1" WorkloadEndpoint="10.0.0.79-k8s-test--pod--1-"
Jan 29 11:33:09.745911 containerd[1484]: 2025-01-29 11:33:09.662 [INFO][3748] cni-plugin/k8s.go 77: Extracted identifiers for CmdAddK8s ContainerID="de5e21583b29aa5ba19a684c6f626fb472bf06b48f7bf18666d92242ffa567e1" Namespace="default" Pod="test-pod-1" WorkloadEndpoint="10.0.0.79-k8s-test--pod--1-eth0"
Jan 29 11:33:09.745911 containerd[1484]: 2025-01-29 11:33:09.691 [INFO][3761] ipam/ipam_plugin.go 225: Calico CNI IPAM request count IPv4=1 IPv6=0 ContainerID="de5e21583b29aa5ba19a684c6f626fb472bf06b48f7bf18666d92242ffa567e1" HandleID="k8s-pod-network.de5e21583b29aa5ba19a684c6f626fb472bf06b48f7bf18666d92242ffa567e1" Workload="10.0.0.79-k8s-test--pod--1-eth0"
Jan 29 11:33:09.745911 containerd[1484]: 2025-01-29 11:33:09.701 [INFO][3761] ipam/ipam_plugin.go 265: Auto assigning IP ContainerID="de5e21583b29aa5ba19a684c6f626fb472bf06b48f7bf18666d92242ffa567e1" HandleID="k8s-pod-network.de5e21583b29aa5ba19a684c6f626fb472bf06b48f7bf18666d92242ffa567e1" Workload="10.0.0.79-k8s-test--pod--1-eth0" assignArgs=ipam.AutoAssignArgs{Num4:1, Num6:0, HandleID:(*string)(0xc000289f00), Attrs:map[string]string{"namespace":"default", "node":"10.0.0.79", "pod":"test-pod-1", "timestamp":"2025-01-29 11:33:09.691699301 +0000 UTC"}, Hostname:"10.0.0.79", IPv4Pools:[]net.IPNet{}, IPv6Pools:[]net.IPNet{}, MaxBlocksPerHost:0, HostReservedAttrIPv4s:(*ipam.HostReservedAttr)(nil), HostReservedAttrIPv6s:(*ipam.HostReservedAttr)(nil), IntendedUse:"Workload"}
Jan 29 11:33:09.745911 containerd[1484]: 2025-01-29 11:33:09.701 [INFO][3761] ipam/ipam_plugin.go 353: About to acquire host-wide IPAM lock.
Jan 29 11:33:09.745911 containerd[1484]: 2025-01-29 11:33:09.701 [INFO][3761] ipam/ipam_plugin.go 368: Acquired host-wide IPAM lock.
Jan 29 11:33:09.745911 containerd[1484]: 2025-01-29 11:33:09.701 [INFO][3761] ipam/ipam.go 107: Auto-assign 1 ipv4, 0 ipv6 addrs for host '10.0.0.79'
Jan 29 11:33:09.745911 containerd[1484]: 2025-01-29 11:33:09.704 [INFO][3761] ipam/ipam.go 660: Looking up existing affinities for host handle="k8s-pod-network.de5e21583b29aa5ba19a684c6f626fb472bf06b48f7bf18666d92242ffa567e1" host="10.0.0.79"
Jan 29 11:33:09.745911 containerd[1484]: 2025-01-29 11:33:09.709 [INFO][3761] ipam/ipam.go 372: Looking up existing affinities for host host="10.0.0.79"
Jan 29 11:33:09.745911 containerd[1484]: 2025-01-29 11:33:09.713 [INFO][3761] ipam/ipam.go 489: Trying affinity for 192.168.68.192/26 host="10.0.0.79"
Jan 29 11:33:09.745911 containerd[1484]: 2025-01-29 11:33:09.715 [INFO][3761] ipam/ipam.go 155: Attempting to load block cidr=192.168.68.192/26 host="10.0.0.79"
Jan 29 11:33:09.745911 containerd[1484]: 2025-01-29 11:33:09.718 [INFO][3761] ipam/ipam.go 232: Affinity is confirmed and block has been loaded cidr=192.168.68.192/26 host="10.0.0.79"
Jan 29 11:33:09.745911 containerd[1484]: 2025-01-29 11:33:09.718 [INFO][3761] ipam/ipam.go 1180: Attempting to assign 1 addresses from block block=192.168.68.192/26 handle="k8s-pod-network.de5e21583b29aa5ba19a684c6f626fb472bf06b48f7bf18666d92242ffa567e1" host="10.0.0.79"
Jan 29 11:33:09.745911 containerd[1484]: 2025-01-29 11:33:09.719 [INFO][3761] ipam/ipam.go 1685: Creating new handle: k8s-pod-network.de5e21583b29aa5ba19a684c6f626fb472bf06b48f7bf18666d92242ffa567e1
Jan 29 11:33:09.745911 containerd[1484]: 2025-01-29 11:33:09.722 [INFO][3761] ipam/ipam.go 1203: Writing block in order to claim IPs block=192.168.68.192/26 handle="k8s-pod-network.de5e21583b29aa5ba19a684c6f626fb472bf06b48f7bf18666d92242ffa567e1" host="10.0.0.79"
Jan 29 11:33:09.745911 containerd[1484]: 2025-01-29 11:33:09.727 [INFO][3761] ipam/ipam.go 1216: Successfully claimed IPs: [192.168.68.196/26] block=192.168.68.192/26 handle="k8s-pod-network.de5e21583b29aa5ba19a684c6f626fb472bf06b48f7bf18666d92242ffa567e1" host="10.0.0.79"
Jan 29 11:33:09.745911 containerd[1484]: 2025-01-29 11:33:09.727 [INFO][3761] ipam/ipam.go 847: Auto-assigned 1 out of 1 IPv4s: [192.168.68.196/26] handle="k8s-pod-network.de5e21583b29aa5ba19a684c6f626fb472bf06b48f7bf18666d92242ffa567e1" host="10.0.0.79"
Jan 29 11:33:09.745911 containerd[1484]: 2025-01-29 11:33:09.727 [INFO][3761] ipam/ipam_plugin.go 374: Released host-wide IPAM lock.
Jan 29 11:33:09.745911 containerd[1484]: 2025-01-29 11:33:09.727 [INFO][3761] ipam/ipam_plugin.go 283: Calico CNI IPAM assigned addresses IPv4=[192.168.68.196/26] IPv6=[] ContainerID="de5e21583b29aa5ba19a684c6f626fb472bf06b48f7bf18666d92242ffa567e1" HandleID="k8s-pod-network.de5e21583b29aa5ba19a684c6f626fb472bf06b48f7bf18666d92242ffa567e1" Workload="10.0.0.79-k8s-test--pod--1-eth0"
Jan 29 11:33:09.745911 containerd[1484]: 2025-01-29 11:33:09.730 [INFO][3748] cni-plugin/k8s.go 386: Populated endpoint ContainerID="de5e21583b29aa5ba19a684c6f626fb472bf06b48f7bf18666d92242ffa567e1" Namespace="default" Pod="test-pod-1" WorkloadEndpoint="10.0.0.79-k8s-test--pod--1-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"10.0.0.79-k8s-test--pod--1-eth0", GenerateName:"", Namespace:"default", SelfLink:"", UID:"0795575e-37d5-4ed2-ae07-dc56cffb81b1", ResourceVersion:"1209", Generation:0, CreationTimestamp:time.Date(2025, time.January, 29, 11, 32, 53, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"projectcalico.org/namespace":"default", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"default"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"10.0.0.79", ContainerID:"", Pod:"test-pod-1", Endpoint:"eth0", ServiceAccountName:"default", IPNetworks:[]string{"192.168.68.196/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.default", "ksa.default.default"}, InterfaceName:"cali5ec59c6bf6e", MAC:"", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil)}}
Jan 29 11:33:09.746925 containerd[1484]: 2025-01-29 11:33:09.731 [INFO][3748] cni-plugin/k8s.go 387: Calico CNI using IPs: [192.168.68.196/32] ContainerID="de5e21583b29aa5ba19a684c6f626fb472bf06b48f7bf18666d92242ffa567e1" Namespace="default" Pod="test-pod-1" WorkloadEndpoint="10.0.0.79-k8s-test--pod--1-eth0"
Jan 29 11:33:09.746925 containerd[1484]: 2025-01-29 11:33:09.731 [INFO][3748] cni-plugin/dataplane_linux.go 69: Setting the host side veth name to cali5ec59c6bf6e ContainerID="de5e21583b29aa5ba19a684c6f626fb472bf06b48f7bf18666d92242ffa567e1" Namespace="default" Pod="test-pod-1" WorkloadEndpoint="10.0.0.79-k8s-test--pod--1-eth0"
Jan 29 11:33:09.746925 containerd[1484]: 2025-01-29 11:33:09.733 [INFO][3748] cni-plugin/dataplane_linux.go 508: Disabling IPv4 forwarding ContainerID="de5e21583b29aa5ba19a684c6f626fb472bf06b48f7bf18666d92242ffa567e1" Namespace="default" Pod="test-pod-1" WorkloadEndpoint="10.0.0.79-k8s-test--pod--1-eth0"
Jan 29 11:33:09.746925 containerd[1484]: 2025-01-29 11:33:09.733 [INFO][3748] cni-plugin/k8s.go 414: Added Mac, interface name, and active container ID to endpoint ContainerID="de5e21583b29aa5ba19a684c6f626fb472bf06b48f7bf18666d92242ffa567e1" Namespace="default" Pod="test-pod-1" WorkloadEndpoint="10.0.0.79-k8s-test--pod--1-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"10.0.0.79-k8s-test--pod--1-eth0", GenerateName:"", Namespace:"default", SelfLink:"", UID:"0795575e-37d5-4ed2-ae07-dc56cffb81b1", ResourceVersion:"1209", Generation:0, CreationTimestamp:time.Date(2025, time.January, 29, 11, 32, 53, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"projectcalico.org/namespace":"default", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"default"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"10.0.0.79", ContainerID:"de5e21583b29aa5ba19a684c6f626fb472bf06b48f7bf18666d92242ffa567e1", Pod:"test-pod-1", Endpoint:"eth0", ServiceAccountName:"default", IPNetworks:[]string{"192.168.68.196/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.default", "ksa.default.default"}, InterfaceName:"cali5ec59c6bf6e", MAC:"92:dd:cf:1e:e9:6d", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil)}}
Jan 29 11:33:09.746925 containerd[1484]: 2025-01-29 11:33:09.740 [INFO][3748] cni-plugin/k8s.go 500: Wrote updated endpoint to datastore ContainerID="de5e21583b29aa5ba19a684c6f626fb472bf06b48f7bf18666d92242ffa567e1" Namespace="default" Pod="test-pod-1" WorkloadEndpoint="10.0.0.79-k8s-test--pod--1-eth0"
Jan 29 11:33:09.990993 containerd[1484]: time="2025-01-29T11:33:09.988961100Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1
Jan 29 11:33:09.990993 containerd[1484]: time="2025-01-29T11:33:09.989026854Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1
Jan 29 11:33:09.990993 containerd[1484]: time="2025-01-29T11:33:09.989038406Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
Jan 29 11:33:09.990993 containerd[1484]: time="2025-01-29T11:33:09.990301866Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
Jan 29 11:33:10.012915 systemd[1]: Started cri-containerd-de5e21583b29aa5ba19a684c6f626fb472bf06b48f7bf18666d92242ffa567e1.scope - libcontainer container de5e21583b29aa5ba19a684c6f626fb472bf06b48f7bf18666d92242ffa567e1.
Jan 29 11:33:10.025130 systemd-resolved[1328]: Failed to determine the local hostname and LLMNR/mDNS names, ignoring: No such device or address
Jan 29 11:33:10.050084 containerd[1484]: time="2025-01-29T11:33:10.050045609Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:test-pod-1,Uid:0795575e-37d5-4ed2-ae07-dc56cffb81b1,Namespace:default,Attempt:0,} returns sandbox id \"de5e21583b29aa5ba19a684c6f626fb472bf06b48f7bf18666d92242ffa567e1\""
Jan 29 11:33:10.052189 containerd[1484]: time="2025-01-29T11:33:10.051911914Z" level=info msg="PullImage \"ghcr.io/flatcar/nginx:latest\""
Jan 29 11:33:10.367717 kubelet[1799]: E0129 11:33:10.367598 1799 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests"
Jan 29 11:33:10.569297 containerd[1484]: time="2025-01-29T11:33:10.569235954Z" level=info msg="ImageUpdate event name:\"ghcr.io/flatcar/nginx:latest\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Jan 29 11:33:10.570244 containerd[1484]: time="2025-01-29T11:33:10.570201984Z" level=info msg="stop pulling image ghcr.io/flatcar/nginx:latest: active requests=0, bytes read=61"
Jan 29 11:33:10.572768 containerd[1484]: time="2025-01-29T11:33:10.572731858Z" level=info msg="Pulled image \"ghcr.io/flatcar/nginx:latest\" with image id \"sha256:0dcfd986e814f68db775fba6b61fbaec3761562dc2ae3043d38dbff123e1bb1e\", repo tag \"ghcr.io/flatcar/nginx:latest\", repo digest \"ghcr.io/flatcar/nginx@sha256:2ffeb5a7ca68f2017f0bc48251750a6e40fcd3c341b94a22fc7812dcabbb84db\", size \"71015439\" in 520.792623ms"
Jan 29 11:33:10.572821 containerd[1484]: time="2025-01-29T11:33:10.572768227Z" level=info msg="PullImage \"ghcr.io/flatcar/nginx:latest\" returns image reference \"sha256:0dcfd986e814f68db775fba6b61fbaec3761562dc2ae3043d38dbff123e1bb1e\""
Jan 29 11:33:10.574472 containerd[1484]: time="2025-01-29T11:33:10.574446227Z" level=info msg="CreateContainer within sandbox \"de5e21583b29aa5ba19a684c6f626fb472bf06b48f7bf18666d92242ffa567e1\" for container &ContainerMetadata{Name:test,Attempt:0,}"
Jan 29 11:33:10.595191 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount2037102.mount: Deactivated successfully.
Jan 29 11:33:10.595734 containerd[1484]: time="2025-01-29T11:33:10.595696639Z" level=info msg="CreateContainer within sandbox \"de5e21583b29aa5ba19a684c6f626fb472bf06b48f7bf18666d92242ffa567e1\" for &ContainerMetadata{Name:test,Attempt:0,} returns container id \"9c19d47e26fbbeebadacece1b8720d91e66e9553ad94174a221917ca5f406c1a\""
Jan 29 11:33:10.596257 containerd[1484]: time="2025-01-29T11:33:10.596228210Z" level=info msg="StartContainer for \"9c19d47e26fbbeebadacece1b8720d91e66e9553ad94174a221917ca5f406c1a\""
Jan 29 11:33:10.628891 systemd[1]: Started cri-containerd-9c19d47e26fbbeebadacece1b8720d91e66e9553ad94174a221917ca5f406c1a.scope - libcontainer container 9c19d47e26fbbeebadacece1b8720d91e66e9553ad94174a221917ca5f406c1a.
Jan 29 11:33:10.655424 containerd[1484]: time="2025-01-29T11:33:10.655377677Z" level=info msg="StartContainer for \"9c19d47e26fbbeebadacece1b8720d91e66e9553ad94174a221917ca5f406c1a\" returns successfully"
Jan 29 11:33:11.187397 kubelet[1799]: I0129 11:33:11.187311 1799 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="default/test-pod-1" podStartSLOduration=17.665608053 podStartE2EDuration="18.187287166s" podCreationTimestamp="2025-01-29 11:32:53 +0000 UTC" firstStartedPulling="2025-01-29 11:33:10.051619203 +0000 UTC m=+53.123958424" lastFinishedPulling="2025-01-29 11:33:10.573298326 +0000 UTC m=+53.645637537" observedRunningTime="2025-01-29 11:33:11.187181226 +0000 UTC m=+54.259520447" watchObservedRunningTime="2025-01-29 11:33:11.187287166 +0000 UTC m=+54.259626387"
Jan 29 11:33:11.220026 systemd-networkd[1416]: cali5ec59c6bf6e: Gained IPv6LL
Jan 29 11:33:11.367956 kubelet[1799]: E0129 11:33:11.367874 1799 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests"
Jan 29 11:33:12.368787 kubelet[1799]: E0129 11:33:12.368692 1799 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests"
Jan 29 11:33:13.369569 kubelet[1799]: E0129 11:33:13.369492 1799 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests"
Jan 29 11:33:14.370061 kubelet[1799]: E0129 11:33:14.369984 1799 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests"
Jan 29 11:33:15.371037 kubelet[1799]: E0129 11:33:15.370948 1799 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests"