Jan 29 16:25:44.882496 kernel: Linux version 6.6.74-flatcar (build@pony-truck.infra.kinvolk.io) (x86_64-cros-linux-gnu-gcc (Gentoo Hardened 14.2.1_p20241221 p7) 14.2.1 20241221, GNU ld (Gentoo 2.43 p3) 2.43.1) #1 SMP PREEMPT_DYNAMIC Wed Jan 29 14:51:22 -00 2025
Jan 29 16:25:44.882524 kernel: Command line: BOOT_IMAGE=/flatcar/vmlinuz-a mount.usr=/dev/mapper/usr verity.usr=PARTUUID=7130c94a-213a-4e5a-8e26-6cce9662f132 rootflags=rw mount.usrflags=ro consoleblank=0 root=LABEL=ROOT console=ttyS0,115200 flatcar.first_boot=detected verity.usrhash=baa4132e9c604885344fa8e79d67c80ef841a135b233c762ecfe0386901a895d
Jan 29 16:25:44.882538 kernel: BIOS-provided physical RAM map:
Jan 29 16:25:44.882547 kernel: BIOS-e820: [mem 0x0000000000000000-0x000000000009fbff] usable
Jan 29 16:25:44.882555 kernel: BIOS-e820: [mem 0x000000000009fc00-0x000000000009ffff] reserved
Jan 29 16:25:44.882564 kernel: BIOS-e820: [mem 0x00000000000f0000-0x00000000000fffff] reserved
Jan 29 16:25:44.882574 kernel: BIOS-e820: [mem 0x0000000000100000-0x000000009cfdbfff] usable
Jan 29 16:25:44.882582 kernel: BIOS-e820: [mem 0x000000009cfdc000-0x000000009cffffff] reserved
Jan 29 16:25:44.882591 kernel: BIOS-e820: [mem 0x00000000b0000000-0x00000000bfffffff] reserved
Jan 29 16:25:44.882718 kernel: BIOS-e820: [mem 0x00000000fed1c000-0x00000000fed1ffff] reserved
Jan 29 16:25:44.882728 kernel: BIOS-e820: [mem 0x00000000feffc000-0x00000000feffffff] reserved
Jan 29 16:25:44.882736 kernel: BIOS-e820: [mem 0x00000000fffc0000-0x00000000ffffffff] reserved
Jan 29 16:25:44.882744 kernel: BIOS-e820: [mem 0x000000fd00000000-0x000000ffffffffff] reserved
Jan 29 16:25:44.882753 kernel: NX (Execute Disable) protection: active
Jan 29 16:25:44.882763 kernel: APIC: Static calls initialized
Jan 29 16:25:44.882776 kernel: SMBIOS 2.8 present.
Jan 29 16:25:44.882786 kernel: DMI: QEMU Standard PC (Q35 + ICH9, 2009), BIOS 1.16.2-debian-1.16.2-1 04/01/2014
Jan 29 16:25:44.882794 kernel: Hypervisor detected: KVM
Jan 29 16:25:44.882803 kernel: kvm-clock: Using msrs 4b564d01 and 4b564d00
Jan 29 16:25:44.882811 kernel: kvm-clock: using sched offset of 2287466657 cycles
Jan 29 16:25:44.882820 kernel: clocksource: kvm-clock: mask: 0xffffffffffffffff max_cycles: 0x1cd42e4dffb, max_idle_ns: 881590591483 ns
Jan 29 16:25:44.882830 kernel: tsc: Detected 2794.748 MHz processor
Jan 29 16:25:44.882840 kernel: e820: update [mem 0x00000000-0x00000fff] usable ==> reserved
Jan 29 16:25:44.882849 kernel: e820: remove [mem 0x000a0000-0x000fffff] usable
Jan 29 16:25:44.882859 kernel: last_pfn = 0x9cfdc max_arch_pfn = 0x400000000
Jan 29 16:25:44.882872 kernel: MTRR map: 4 entries (3 fixed + 1 variable; max 19), built from 8 variable MTRRs
Jan 29 16:25:44.882881 kernel: x86/PAT: Configuration [0-7]: WB WC UC- UC WB WP UC- WT
Jan 29 16:25:44.882891 kernel: Using GB pages for direct mapping
Jan 29 16:25:44.882900 kernel: ACPI: Early table checksum verification disabled
Jan 29 16:25:44.882910 kernel: ACPI: RSDP 0x00000000000F59D0 000014 (v00 BOCHS )
Jan 29 16:25:44.882919 kernel: ACPI: RSDT 0x000000009CFE2408 000038 (v01 BOCHS BXPC 00000001 BXPC 00000001)
Jan 29 16:25:44.882929 kernel: ACPI: FACP 0x000000009CFE21E8 0000F4 (v03 BOCHS BXPC 00000001 BXPC 00000001)
Jan 29 16:25:44.882938 kernel: ACPI: DSDT 0x000000009CFE0040 0021A8 (v01 BOCHS BXPC 00000001 BXPC 00000001)
Jan 29 16:25:44.882947 kernel: ACPI: FACS 0x000000009CFE0000 000040
Jan 29 16:25:44.882959 kernel: ACPI: APIC 0x000000009CFE22DC 000090 (v01 BOCHS BXPC 00000001 BXPC 00000001)
Jan 29 16:25:44.882969 kernel: ACPI: HPET 0x000000009CFE236C 000038 (v01 BOCHS BXPC 00000001 BXPC 00000001)
Jan 29 16:25:44.882978 kernel: ACPI: MCFG 0x000000009CFE23A4 00003C (v01 BOCHS BXPC 00000001 BXPC 00000001)
Jan 29 16:25:44.882987 kernel: ACPI: WAET 0x000000009CFE23E0 000028 (v01 BOCHS BXPC 00000001 BXPC 00000001)
Jan 29 16:25:44.882997 kernel: ACPI: Reserving FACP table memory at [mem 0x9cfe21e8-0x9cfe22db]
Jan 29 16:25:44.883007 kernel: ACPI: Reserving DSDT table memory at [mem 0x9cfe0040-0x9cfe21e7]
Jan 29 16:25:44.883021 kernel: ACPI: Reserving FACS table memory at [mem 0x9cfe0000-0x9cfe003f]
Jan 29 16:25:44.883052 kernel: ACPI: Reserving APIC table memory at [mem 0x9cfe22dc-0x9cfe236b]
Jan 29 16:25:44.883076 kernel: ACPI: Reserving HPET table memory at [mem 0x9cfe236c-0x9cfe23a3]
Jan 29 16:25:44.883103 kernel: ACPI: Reserving MCFG table memory at [mem 0x9cfe23a4-0x9cfe23df]
Jan 29 16:25:44.883113 kernel: ACPI: Reserving WAET table memory at [mem 0x9cfe23e0-0x9cfe2407]
Jan 29 16:25:44.883123 kernel: No NUMA configuration found
Jan 29 16:25:44.883132 kernel: Faking a node at [mem 0x0000000000000000-0x000000009cfdbfff]
Jan 29 16:25:44.883142 kernel: NODE_DATA(0) allocated [mem 0x9cfd6000-0x9cfdbfff]
Jan 29 16:25:44.883155 kernel: Zone ranges:
Jan 29 16:25:44.883165 kernel: DMA [mem 0x0000000000001000-0x0000000000ffffff]
Jan 29 16:25:44.883174 kernel: DMA32 [mem 0x0000000001000000-0x000000009cfdbfff]
Jan 29 16:25:44.883188 kernel: Normal empty
Jan 29 16:25:44.883198 kernel: Movable zone start for each node
Jan 29 16:25:44.883208 kernel: Early memory node ranges
Jan 29 16:25:44.883217 kernel: node 0: [mem 0x0000000000001000-0x000000000009efff]
Jan 29 16:25:44.883227 kernel: node 0: [mem 0x0000000000100000-0x000000009cfdbfff]
Jan 29 16:25:44.883236 kernel: Initmem setup node 0 [mem 0x0000000000001000-0x000000009cfdbfff]
Jan 29 16:25:44.883259 kernel: On node 0, zone DMA: 1 pages in unavailable ranges
Jan 29 16:25:44.883269 kernel: On node 0, zone DMA: 97 pages in unavailable ranges
Jan 29 16:25:44.883279 kernel: On node 0, zone DMA32: 12324 pages in unavailable ranges
Jan 29 16:25:44.883288 kernel: ACPI: PM-Timer IO Port: 0x608
Jan 29 16:25:44.883298 kernel: ACPI: LAPIC_NMI (acpi_id[0xff] dfl dfl lint[0x1])
Jan 29 16:25:44.883308 kernel: IOAPIC[0]: apic_id 0, version 17, address 0xfec00000, GSI 0-23
Jan 29 16:25:44.883317 kernel: ACPI: INT_SRC_OVR (bus 0 bus_irq 0 global_irq 2 dfl dfl)
Jan 29 16:25:44.883327 kernel: ACPI: INT_SRC_OVR (bus 0 bus_irq 5 global_irq 5 high level)
Jan 29 16:25:44.883337 kernel: ACPI: INT_SRC_OVR (bus 0 bus_irq 9 global_irq 9 high level)
Jan 29 16:25:44.883347 kernel: ACPI: INT_SRC_OVR (bus 0 bus_irq 10 global_irq 10 high level)
Jan 29 16:25:44.883360 kernel: ACPI: INT_SRC_OVR (bus 0 bus_irq 11 global_irq 11 high level)
Jan 29 16:25:44.883369 kernel: ACPI: Using ACPI (MADT) for SMP configuration information
Jan 29 16:25:44.883378 kernel: ACPI: HPET id: 0x8086a201 base: 0xfed00000
Jan 29 16:25:44.883387 kernel: TSC deadline timer available
Jan 29 16:25:44.883397 kernel: smpboot: Allowing 4 CPUs, 0 hotplug CPUs
Jan 29 16:25:44.883406 kernel: kvm-guest: APIC: eoi() replaced with kvm_guest_apic_eoi_write()
Jan 29 16:25:44.883416 kernel: kvm-guest: KVM setup pv remote TLB flush
Jan 29 16:25:44.883426 kernel: kvm-guest: setup PV sched yield
Jan 29 16:25:44.883435 kernel: [mem 0xc0000000-0xfed1bfff] available for PCI devices
Jan 29 16:25:44.883448 kernel: Booting paravirtualized kernel on KVM
Jan 29 16:25:44.883458 kernel: clocksource: refined-jiffies: mask: 0xffffffff max_cycles: 0xffffffff, max_idle_ns: 1910969940391419 ns
Jan 29 16:25:44.883468 kernel: setup_percpu: NR_CPUS:512 nr_cpumask_bits:4 nr_cpu_ids:4 nr_node_ids:1
Jan 29 16:25:44.883478 kernel: percpu: Embedded 58 pages/cpu s197032 r8192 d32344 u524288
Jan 29 16:25:44.883487 kernel: pcpu-alloc: s197032 r8192 d32344 u524288 alloc=1*2097152
Jan 29 16:25:44.883497 kernel: pcpu-alloc: [0] 0 1 2 3
Jan 29 16:25:44.883506 kernel: kvm-guest: PV spinlocks enabled
Jan 29 16:25:44.883515 kernel: PV qspinlock hash table entries: 256 (order: 0, 4096 bytes, linear)
Jan 29 16:25:44.883527 kernel: Kernel command line: rootflags=rw mount.usrflags=ro BOOT_IMAGE=/flatcar/vmlinuz-a mount.usr=/dev/mapper/usr verity.usr=PARTUUID=7130c94a-213a-4e5a-8e26-6cce9662f132 rootflags=rw mount.usrflags=ro consoleblank=0 root=LABEL=ROOT console=ttyS0,115200 flatcar.first_boot=detected verity.usrhash=baa4132e9c604885344fa8e79d67c80ef841a135b233c762ecfe0386901a895d
Jan 29 16:25:44.883540 kernel: Unknown kernel command line parameters "BOOT_IMAGE=/flatcar/vmlinuz-a", will be passed to user space.
Jan 29 16:25:44.883550 kernel: random: crng init done
Jan 29 16:25:44.883559 kernel: Dentry cache hash table entries: 524288 (order: 10, 4194304 bytes, linear)
Jan 29 16:25:44.883569 kernel: Inode-cache hash table entries: 262144 (order: 9, 2097152 bytes, linear)
Jan 29 16:25:44.883579 kernel: Fallback order for Node 0: 0
Jan 29 16:25:44.883589 kernel: Built 1 zonelists, mobility grouping on. Total pages: 632732
Jan 29 16:25:44.883611 kernel: Policy zone: DMA32
Jan 29 16:25:44.883621 kernel: mem auto-init: stack:off, heap alloc:off, heap free:off
Jan 29 16:25:44.883635 kernel: Memory: 2432544K/2571752K available (14336K kernel code, 2301K rwdata, 22852K rodata, 43472K init, 1600K bss, 138948K reserved, 0K cma-reserved)
Jan 29 16:25:44.883645 kernel: SLUB: HWalign=64, Order=0-3, MinObjects=0, CPUs=4, Nodes=1
Jan 29 16:25:44.883654 kernel: ftrace: allocating 37893 entries in 149 pages
Jan 29 16:25:44.883664 kernel: ftrace: allocated 149 pages with 4 groups
Jan 29 16:25:44.883674 kernel: Dynamic Preempt: voluntary
Jan 29 16:25:44.883684 kernel: rcu: Preemptible hierarchical RCU implementation.
Jan 29 16:25:44.883698 kernel: rcu: RCU event tracing is enabled.
Jan 29 16:25:44.883708 kernel: rcu: RCU restricting CPUs from NR_CPUS=512 to nr_cpu_ids=4.
Jan 29 16:25:44.883718 kernel: Trampoline variant of Tasks RCU enabled.
Jan 29 16:25:44.883730 kernel: Rude variant of Tasks RCU enabled.
Jan 29 16:25:44.883740 kernel: Tracing variant of Tasks RCU enabled.
Jan 29 16:25:44.883749 kernel: rcu: RCU calculated value of scheduler-enlistment delay is 100 jiffies.
Jan 29 16:25:44.883759 kernel: rcu: Adjusting geometry for rcu_fanout_leaf=16, nr_cpu_ids=4
Jan 29 16:25:44.883769 kernel: NR_IRQS: 33024, nr_irqs: 456, preallocated irqs: 16
Jan 29 16:25:44.883779 kernel: rcu: srcu_init: Setting srcu_struct sizes based on contention.
Jan 29 16:25:44.883789 kernel: Console: colour VGA+ 80x25
Jan 29 16:25:44.883798 kernel: printk: console [ttyS0] enabled
Jan 29 16:25:44.883808 kernel: ACPI: Core revision 20230628
Jan 29 16:25:44.883818 kernel: clocksource: hpet: mask: 0xffffffff max_cycles: 0xffffffff, max_idle_ns: 19112604467 ns
Jan 29 16:25:44.883831 kernel: APIC: Switch to symmetric I/O mode setup
Jan 29 16:25:44.883840 kernel: x2apic enabled
Jan 29 16:25:44.883850 kernel: APIC: Switched APIC routing to: physical x2apic
Jan 29 16:25:44.883860 kernel: kvm-guest: APIC: send_IPI_mask() replaced with kvm_send_ipi_mask()
Jan 29 16:25:44.883870 kernel: kvm-guest: APIC: send_IPI_mask_allbutself() replaced with kvm_send_ipi_mask_allbutself()
Jan 29 16:25:44.883880 kernel: kvm-guest: setup PV IPIs
Jan 29 16:25:44.883903 kernel: ..TIMER: vector=0x30 apic1=0 pin1=2 apic2=-1 pin2=-1
Jan 29 16:25:44.883913 kernel: tsc: Marking TSC unstable due to TSCs unsynchronized
Jan 29 16:25:44.883924 kernel: Calibrating delay loop (skipped) preset value.. 5589.49 BogoMIPS (lpj=2794748)
Jan 29 16:25:44.883934 kernel: x86/cpu: User Mode Instruction Prevention (UMIP) activated
Jan 29 16:25:44.883944 kernel: Last level iTLB entries: 4KB 512, 2MB 255, 4MB 127
Jan 29 16:25:44.883966 kernel: Last level dTLB entries: 4KB 512, 2MB 255, 4MB 127, 1GB 0
Jan 29 16:25:44.883991 kernel: Spectre V1 : Mitigation: usercopy/swapgs barriers and __user pointer sanitization
Jan 29 16:25:44.884003 kernel: Spectre V2 : Mitigation: Retpolines
Jan 29 16:25:44.884014 kernel: Spectre V2 : Spectre v2 / SpectreRSB mitigation: Filling RSB on context switch
Jan 29 16:25:44.884024 kernel: Spectre V2 : Spectre v2 / SpectreRSB : Filling RSB on VMEXIT
Jan 29 16:25:44.884037 kernel: Spectre V2 : Enabling Speculation Barrier for firmware calls
Jan 29 16:25:44.884047 kernel: RETBleed: Mitigation: untrained return thunk
Jan 29 16:25:44.884061 kernel: Spectre V2 : mitigation: Enabling conditional Indirect Branch Prediction Barrier
Jan 29 16:25:44.884068 kernel: Speculative Store Bypass: Mitigation: Speculative Store Bypass disabled via prctl
Jan 29 16:25:44.884076 kernel: Speculative Return Stack Overflow: IBPB-extending microcode not applied!
Jan 29 16:25:44.884084 kernel: Speculative Return Stack Overflow: WARNING: See https://kernel.org/doc/html/latest/admin-guide/hw-vuln/srso.html for mitigation options.
Jan 29 16:25:44.884092 kernel: Speculative Return Stack Overflow: Vulnerable: Safe RET, no microcode
Jan 29 16:25:44.884100 kernel: x86/fpu: Supporting XSAVE feature 0x001: 'x87 floating point registers'
Jan 29 16:25:44.884110 kernel: x86/fpu: Supporting XSAVE feature 0x002: 'SSE registers'
Jan 29 16:25:44.884118 kernel: x86/fpu: Supporting XSAVE feature 0x004: 'AVX registers'
Jan 29 16:25:44.884125 kernel: x86/fpu: xstate_offset[2]: 576, xstate_sizes[2]: 256
Jan 29 16:25:44.884133 kernel: x86/fpu: Enabled xstate features 0x7, context size is 832 bytes, using 'compacted' format.
Jan 29 16:25:44.884141 kernel: Freeing SMP alternatives memory: 32K
Jan 29 16:25:44.884148 kernel: pid_max: default: 32768 minimum: 301
Jan 29 16:25:44.884159 kernel: LSM: initializing lsm=lockdown,capability,landlock,selinux,integrity
Jan 29 16:25:44.884169 kernel: landlock: Up and running.
Jan 29 16:25:44.884179 kernel: SELinux: Initializing.
Jan 29 16:25:44.884192 kernel: Mount-cache hash table entries: 8192 (order: 4, 65536 bytes, linear)
Jan 29 16:25:44.884203 kernel: Mountpoint-cache hash table entries: 8192 (order: 4, 65536 bytes, linear)
Jan 29 16:25:44.884214 kernel: smpboot: CPU0: AMD EPYC 7402P 24-Core Processor (family: 0x17, model: 0x31, stepping: 0x0)
Jan 29 16:25:44.884224 kernel: RCU Tasks: Setting shift to 2 and lim to 1 rcu_task_cb_adjust=1 rcu_task_cpu_ids=4.
Jan 29 16:25:44.884234 kernel: RCU Tasks Rude: Setting shift to 2 and lim to 1 rcu_task_cb_adjust=1 rcu_task_cpu_ids=4.
Jan 29 16:25:44.884254 kernel: RCU Tasks Trace: Setting shift to 2 and lim to 1 rcu_task_cb_adjust=1 rcu_task_cpu_ids=4.
Jan 29 16:25:44.884263 kernel: Performance Events: Fam17h+ core perfctr, AMD PMU driver.
Jan 29 16:25:44.884273 kernel: ... version:                0
Jan 29 16:25:44.884283 kernel: ... bit width:              48
Jan 29 16:25:44.884298 kernel: ... generic registers:      6
Jan 29 16:25:44.884308 kernel: ... value mask:             0000ffffffffffff
Jan 29 16:25:44.884318 kernel: ... max period:             00007fffffffffff
Jan 29 16:25:44.884328 kernel: ... fixed-purpose events:   0
Jan 29 16:25:44.884338 kernel: ... event mask:             000000000000003f
Jan 29 16:25:44.884349 kernel: signal: max sigframe size: 1776
Jan 29 16:25:44.884360 kernel: rcu: Hierarchical SRCU implementation.
Jan 29 16:25:44.884370 kernel: rcu: Max phase no-delay instances is 400.
Jan 29 16:25:44.884380 kernel: smp: Bringing up secondary CPUs ...
Jan 29 16:25:44.884391 kernel: smpboot: x86: Booting SMP configuration:
Jan 29 16:25:44.884402 kernel: .... node #0, CPUs: #1 #2 #3
Jan 29 16:25:44.884412 kernel: smp: Brought up 1 node, 4 CPUs
Jan 29 16:25:44.884422 kernel: smpboot: Max logical packages: 1
Jan 29 16:25:44.884432 kernel: smpboot: Total of 4 processors activated (22357.98 BogoMIPS)
Jan 29 16:25:44.884443 kernel: devtmpfs: initialized
Jan 29 16:25:44.884453 kernel: x86/mm: Memory block size: 128MB
Jan 29 16:25:44.884464 kernel: clocksource: jiffies: mask: 0xffffffff max_cycles: 0xffffffff, max_idle_ns: 1911260446275000 ns
Jan 29 16:25:44.884474 kernel: futex hash table entries: 1024 (order: 4, 65536 bytes, linear)
Jan 29 16:25:44.884488 kernel: pinctrl core: initialized pinctrl subsystem
Jan 29 16:25:44.884499 kernel: NET: Registered PF_NETLINK/PF_ROUTE protocol family
Jan 29 16:25:44.884509 kernel: audit: initializing netlink subsys (disabled)
Jan 29 16:25:44.884520 kernel: audit: type=2000 audit(1738167944.467:1): state=initialized audit_enabled=0 res=1
Jan 29 16:25:44.884530 kernel: thermal_sys: Registered thermal governor 'step_wise'
Jan 29 16:25:44.884541 kernel: thermal_sys: Registered thermal governor 'user_space'
Jan 29 16:25:44.884551 kernel: cpuidle: using governor menu
Jan 29 16:25:44.884561 kernel: acpiphp: ACPI Hot Plug PCI Controller Driver version: 0.5
Jan 29 16:25:44.884570 kernel: dca service started, version 1.12.1
Jan 29 16:25:44.884584 kernel: PCI: MMCONFIG for domain 0000 [bus 00-ff] at [mem 0xb0000000-0xbfffffff] (base 0xb0000000)
Jan 29 16:25:44.884595 kernel: PCI: MMCONFIG at [mem 0xb0000000-0xbfffffff] reserved as E820 entry
Jan 29 16:25:44.884633 kernel: PCI: Using configuration type 1 for base access
Jan 29 16:25:44.884644 kernel: kprobes: kprobe jump-optimization is enabled. All kprobes are optimized if possible.
Jan 29 16:25:44.884654 kernel: HugeTLB: registered 1.00 GiB page size, pre-allocated 0 pages
Jan 29 16:25:44.884664 kernel: HugeTLB: 16380 KiB vmemmap can be freed for a 1.00 GiB page
Jan 29 16:25:44.884674 kernel: HugeTLB: registered 2.00 MiB page size, pre-allocated 0 pages
Jan 29 16:25:44.884684 kernel: HugeTLB: 28 KiB vmemmap can be freed for a 2.00 MiB page
Jan 29 16:25:44.884694 kernel: ACPI: Added _OSI(Module Device)
Jan 29 16:25:44.884708 kernel: ACPI: Added _OSI(Processor Device)
Jan 29 16:25:44.884718 kernel: ACPI: Added _OSI(3.0 _SCP Extensions)
Jan 29 16:25:44.884729 kernel: ACPI: Added _OSI(Processor Aggregator Device)
Jan 29 16:25:44.884740 kernel: ACPI: 1 ACPI AML tables successfully acquired and loaded
Jan 29 16:25:44.884750 kernel: ACPI: _OSC evaluation for CPUs failed, trying _PDC
Jan 29 16:25:44.884760 kernel: ACPI: Interpreter enabled
Jan 29 16:25:44.884770 kernel: ACPI: PM: (supports S0 S3 S5)
Jan 29 16:25:44.884780 kernel: ACPI: Using IOAPIC for interrupt routing
Jan 29 16:25:44.884790 kernel: PCI: Using host bridge windows from ACPI; if necessary, use "pci=nocrs" and report a bug
Jan 29 16:25:44.884804 kernel: PCI: Using E820 reservations for host bridge windows
Jan 29 16:25:44.884814 kernel: ACPI: Enabled 2 GPEs in block 00 to 3F
Jan 29 16:25:44.884824 kernel: ACPI: PCI Root Bridge [PCI0] (domain 0000 [bus 00-ff])
Jan 29 16:25:44.885039 kernel: acpi PNP0A08:00: _OSC: OS supports [ExtendedConfig ASPM ClockPM Segments MSI HPX-Type3]
Jan 29 16:25:44.885208 kernel: acpi PNP0A08:00: _OSC: platform does not support [PCIeHotplug LTR]
Jan 29 16:25:44.885375 kernel: acpi PNP0A08:00: _OSC: OS now controls [PME AER PCIeCapability]
Jan 29 16:25:44.885390 kernel: PCI host bridge to bus 0000:00
Jan 29 16:25:44.885545 kernel: pci_bus 0000:00: root bus resource [io 0x0000-0x0cf7 window]
Jan 29 16:25:44.885721 kernel: pci_bus 0000:00: root bus resource [io 0x0d00-0xffff window]
Jan 29 16:25:44.885865 kernel: pci_bus 0000:00: root bus resource [mem 0x000a0000-0x000bffff window]
Jan 29 16:25:44.886006 kernel: pci_bus 0000:00: root bus resource [mem 0x9d000000-0xafffffff window]
Jan 29 16:25:44.886145 kernel: pci_bus 0000:00: root bus resource [mem 0xc0000000-0xfebfffff window]
Jan 29 16:25:44.886298 kernel: pci_bus 0000:00: root bus resource [mem 0x100000000-0x8ffffffff window]
Jan 29 16:25:44.886440 kernel: pci_bus 0000:00: root bus resource [bus 00-ff]
Jan 29 16:25:44.886645 kernel: pci 0000:00:00.0: [8086:29c0] type 00 class 0x060000
Jan 29 16:25:44.886815 kernel: pci 0000:00:01.0: [1234:1111] type 00 class 0x030000
Jan 29 16:25:44.886971 kernel: pci 0000:00:01.0: reg 0x10: [mem 0xfd000000-0xfdffffff pref]
Jan 29 16:25:44.887128 kernel: pci 0000:00:01.0: reg 0x18: [mem 0xfebd0000-0xfebd0fff]
Jan 29 16:25:44.887295 kernel: pci 0000:00:01.0: reg 0x30: [mem 0xfebc0000-0xfebcffff pref]
Jan 29 16:25:44.887452 kernel: pci 0000:00:01.0: Video device with shadowed ROM at [mem 0x000c0000-0x000dffff]
Jan 29 16:25:44.887656 kernel: pci 0000:00:02.0: [1af4:1005] type 00 class 0x00ff00
Jan 29 16:25:44.887827 kernel: pci 0000:00:02.0: reg 0x10: [io 0xc0c0-0xc0df]
Jan 29 16:25:44.887985 kernel: pci 0000:00:02.0: reg 0x14: [mem 0xfebd1000-0xfebd1fff]
Jan 29 16:25:44.888144 kernel: pci 0000:00:02.0: reg 0x20: [mem 0xfe000000-0xfe003fff 64bit pref]
Jan 29 16:25:44.888322 kernel: pci 0000:00:03.0: [1af4:1001] type 00 class 0x010000
Jan 29 16:25:44.888483 kernel: pci 0000:00:03.0: reg 0x10: [io 0xc000-0xc07f]
Jan 29 16:25:44.888658 kernel: pci 0000:00:03.0: reg 0x14: [mem 0xfebd2000-0xfebd2fff]
Jan 29 16:25:44.888817 kernel: pci 0000:00:03.0: reg 0x20: [mem 0xfe004000-0xfe007fff 64bit pref]
Jan 29 16:25:44.888992 kernel: pci 0000:00:04.0: [1af4:1000] type 00 class 0x020000
Jan 29 16:25:44.889154 kernel: pci 0000:00:04.0: reg 0x10: [io 0xc0e0-0xc0ff]
Jan 29 16:25:44.889323 kernel: pci 0000:00:04.0: reg 0x14: [mem 0xfebd3000-0xfebd3fff]
Jan 29 16:25:44.889484 kernel: pci 0000:00:04.0: reg 0x20: [mem 0xfe008000-0xfe00bfff 64bit pref]
Jan 29 16:25:44.889670 kernel: pci 0000:00:04.0: reg 0x30: [mem 0xfeb80000-0xfebbffff pref]
Jan 29 16:25:44.889838 kernel: pci 0000:00:1f.0: [8086:2918] type 00 class 0x060100
Jan 29 16:25:44.889995 kernel: pci 0000:00:1f.0: quirk: [io 0x0600-0x067f] claimed by ICH6 ACPI/GPIO/TCO
Jan 29 16:25:44.890158 kernel: pci 0000:00:1f.2: [8086:2922] type 00 class 0x010601
Jan 29 16:25:44.890325 kernel: pci 0000:00:1f.2: reg 0x20: [io 0xc100-0xc11f]
Jan 29 16:25:44.890480 kernel: pci 0000:00:1f.2: reg 0x24: [mem 0xfebd4000-0xfebd4fff]
Jan 29 16:25:44.890661 kernel: pci 0000:00:1f.3: [8086:2930] type 00 class 0x0c0500
Jan 29 16:25:44.890818 kernel: pci 0000:00:1f.3: reg 0x20: [io 0x0700-0x073f]
Jan 29 16:25:44.890833 kernel: ACPI: PCI: Interrupt link LNKA configured for IRQ 10
Jan 29 16:25:44.890848 kernel: ACPI: PCI: Interrupt link LNKB configured for IRQ 10
Jan 29 16:25:44.890859 kernel: ACPI: PCI: Interrupt link LNKC configured for IRQ 11
Jan 29 16:25:44.890869 kernel: ACPI: PCI: Interrupt link LNKD configured for IRQ 11
Jan 29 16:25:44.890879 kernel: ACPI: PCI: Interrupt link LNKE configured for IRQ 10
Jan 29 16:25:44.890889 kernel: ACPI: PCI: Interrupt link LNKF configured for IRQ 10
Jan 29 16:25:44.890900 kernel: ACPI: PCI: Interrupt link LNKG configured for IRQ 11
Jan 29 16:25:44.890910 kernel: ACPI: PCI: Interrupt link LNKH configured for IRQ 11
Jan 29 16:25:44.890920 kernel: ACPI: PCI: Interrupt link GSIA configured for IRQ 16
Jan 29 16:25:44.890930 kernel: ACPI: PCI: Interrupt link GSIB configured for IRQ 17
Jan 29 16:25:44.890944 kernel: ACPI: PCI: Interrupt link GSIC configured for IRQ 18
Jan 29 16:25:44.890954 kernel: ACPI: PCI: Interrupt link GSID configured for IRQ 19
Jan 29 16:25:44.890964 kernel: ACPI: PCI: Interrupt link GSIE configured for IRQ 20
Jan 29 16:25:44.890974 kernel: ACPI: PCI: Interrupt link GSIF configured for IRQ 21
Jan 29 16:25:44.890984 kernel: ACPI: PCI: Interrupt link GSIG configured for IRQ 22
Jan 29 16:25:44.890994 kernel: ACPI: PCI: Interrupt link GSIH configured for IRQ 23
Jan 29 16:25:44.891004 kernel: iommu: Default domain type: Translated
Jan 29 16:25:44.891015 kernel: iommu: DMA domain TLB invalidation policy: lazy mode
Jan 29 16:25:44.891025 kernel: PCI: Using ACPI for IRQ routing
Jan 29 16:25:44.891038 kernel: PCI: pci_cache_line_size set to 64 bytes
Jan 29 16:25:44.891047 kernel: e820: reserve RAM buffer [mem 0x0009fc00-0x0009ffff]
Jan 29 16:25:44.891054 kernel: e820: reserve RAM buffer [mem 0x9cfdc000-0x9fffffff]
Jan 29 16:25:44.891205 kernel: pci 0000:00:01.0: vgaarb: setting as boot VGA device
Jan 29 16:25:44.891371 kernel: pci 0000:00:01.0: vgaarb: bridge control possible
Jan 29 16:25:44.891523 kernel: pci 0000:00:01.0: vgaarb: VGA device added: decodes=io+mem,owns=io+mem,locks=none
Jan 29 16:25:44.891538 kernel: vgaarb: loaded
Jan 29 16:25:44.891549 kernel: hpet0: at MMIO 0xfed00000, IRQs 2, 8, 0
Jan 29 16:25:44.891563 kernel: hpet0: 3 comparators, 64-bit 100.000000 MHz counter
Jan 29 16:25:44.891573 kernel: clocksource: Switched to clocksource kvm-clock
Jan 29 16:25:44.891583 kernel: VFS: Disk quotas dquot_6.6.0
Jan 29 16:25:44.891593 kernel: VFS: Dquot-cache hash table entries: 512 (order 0, 4096 bytes)
Jan 29 16:25:44.891623 kernel: pnp: PnP ACPI init
Jan 29 16:25:44.891802 kernel: system 00:05: [mem 0xb0000000-0xbfffffff window] has been reserved
Jan 29 16:25:44.891818 kernel: pnp: PnP ACPI: found 6 devices
Jan 29 16:25:44.891829 kernel: clocksource: acpi_pm: mask: 0xffffff max_cycles: 0xffffff, max_idle_ns: 2085701024 ns
Jan 29 16:25:44.891840 kernel: NET: Registered PF_INET protocol family
Jan 29 16:25:44.891854 kernel: IP idents hash table entries: 65536 (order: 7, 524288 bytes, linear)
Jan 29 16:25:44.891865 kernel: tcp_listen_portaddr_hash hash table entries: 2048 (order: 3, 32768 bytes, linear)
Jan 29 16:25:44.891875 kernel: Table-perturb hash table entries: 65536 (order: 6, 262144 bytes, linear)
Jan 29 16:25:44.891885 kernel: TCP established hash table entries: 32768 (order: 6, 262144 bytes, linear)
Jan 29 16:25:44.891895 kernel: TCP bind hash table entries: 32768 (order: 8, 1048576 bytes, linear)
Jan 29 16:25:44.891906 kernel: TCP: Hash tables configured (established 32768 bind 32768)
Jan 29 16:25:44.891916 kernel: UDP hash table entries: 2048 (order: 4, 65536 bytes, linear)
Jan 29 16:25:44.891926 kernel: UDP-Lite hash table entries: 2048 (order: 4, 65536 bytes, linear)
Jan 29 16:25:44.891940 kernel: NET: Registered PF_UNIX/PF_LOCAL protocol family
Jan 29 16:25:44.891951 kernel: NET: Registered PF_XDP protocol family
Jan 29 16:25:44.892095 kernel: pci_bus 0000:00: resource 4 [io 0x0000-0x0cf7 window]
Jan 29 16:25:44.892248 kernel: pci_bus 0000:00: resource 5 [io 0x0d00-0xffff window]
Jan 29 16:25:44.892394 kernel: pci_bus 0000:00: resource 6 [mem 0x000a0000-0x000bffff window]
Jan 29 16:25:44.892534 kernel: pci_bus 0000:00: resource 7 [mem 0x9d000000-0xafffffff window]
Jan 29 16:25:44.892729 kernel: pci_bus 0000:00: resource 8 [mem 0xc0000000-0xfebfffff window]
Jan 29 16:25:44.892869 kernel: pci_bus 0000:00: resource 9 [mem 0x100000000-0x8ffffffff window]
Jan 29 16:25:44.892885 kernel: PCI: CLS 0 bytes, default 64
Jan 29 16:25:44.892900 kernel: Initialise system trusted keyrings
Jan 29 16:25:44.892910 kernel: workingset: timestamp_bits=39 max_order=20 bucket_order=0
Jan 29 16:25:44.892921 kernel: Key type asymmetric registered
Jan 29 16:25:44.892931 kernel: Asymmetric key parser 'x509' registered
Jan 29 16:25:44.892941 kernel: Block layer SCSI generic (bsg) driver version 0.4 loaded (major 251)
Jan 29 16:25:44.892951 kernel: io scheduler mq-deadline registered
Jan 29 16:25:44.892961 kernel: io scheduler kyber registered
Jan 29 16:25:44.892972 kernel: io scheduler bfq registered
Jan 29 16:25:44.892982 kernel: ioatdma: Intel(R) QuickData Technology Driver 5.00
Jan 29 16:25:44.892996 kernel: ACPI: \_SB_.GSIG: Enabled at IRQ 22
Jan 29 16:25:44.893007 kernel: ACPI: \_SB_.GSIH: Enabled at IRQ 23
Jan 29 16:25:44.893017 kernel: ACPI: \_SB_.GSIE: Enabled at IRQ 20
Jan 29 16:25:44.893027 kernel: Serial: 8250/16550 driver, 4 ports, IRQ sharing enabled
Jan 29 16:25:44.893038 kernel: 00:03: ttyS0 at I/O 0x3f8 (irq = 4, base_baud = 115200) is a 16550A
Jan 29 16:25:44.893048 kernel: i8042: PNP: PS/2 Controller [PNP0303:KBD,PNP0f13:MOU] at 0x60,0x64 irq 1,12
Jan 29 16:25:44.893058 kernel: serio: i8042 KBD port at 0x60,0x64 irq 1
Jan 29 16:25:44.893068 kernel: serio: i8042 AUX port at 0x60,0x64 irq 12
Jan 29 16:25:44.893079 kernel: input: AT Translated Set 2 keyboard as /devices/platform/i8042/serio0/input/input0
Jan 29 16:25:44.893237 kernel: rtc_cmos 00:04: RTC can wake from S4
Jan 29 16:25:44.893386 kernel: rtc_cmos 00:04: registered as rtc0
Jan 29 16:25:44.893526 kernel: rtc_cmos 00:04: setting system clock to 2025-01-29T16:25:44 UTC (1738167944)
Jan 29 16:25:44.893683 kernel: rtc_cmos 00:04: alarms up to one day, y3k, 242 bytes nvram, hpet irqs
Jan 29 16:25:44.893698 kernel: amd_pstate: the _CPC object is not present in SBIOS or ACPI disabled
Jan 29 16:25:44.893709 kernel: NET: Registered PF_INET6 protocol family
Jan 29 16:25:44.893718 kernel: Segment Routing with IPv6
Jan 29 16:25:44.893729 kernel: In-situ OAM (IOAM) with IPv6
Jan 29 16:25:44.893745 kernel: NET: Registered PF_PACKET protocol family
Jan 29 16:25:44.893755 kernel: Key type dns_resolver registered
Jan 29 16:25:44.893765 kernel: IPI shorthand broadcast: enabled
Jan 29 16:25:44.893775 kernel: sched_clock: Marking stable (579002199, 104770494)->(728336611, -44563918)
Jan 29 16:25:44.893785 kernel: registered taskstats version 1
Jan 29 16:25:44.893795 kernel: Loading compiled-in X.509 certificates
Jan 29 16:25:44.893805 kernel: Loaded X.509 cert 'Kinvolk GmbH: Module signing key for 6.6.74-flatcar: 68134fdf6dac3690da6e3bc9c22b042a5c364340'
Jan 29 16:25:44.893815 kernel: Key type .fscrypt registered
Jan 29 16:25:44.893826 kernel: Key type fscrypt-provisioning registered
Jan 29 16:25:44.893839 kernel: ima: No TPM chip found, activating TPM-bypass!
Jan 29 16:25:44.893849 kernel: ima: Allocated hash algorithm: sha1
Jan 29 16:25:44.893859 kernel: ima: No architecture policies found
Jan 29 16:25:44.893869 kernel: clk: Disabling unused clocks
Jan 29 16:25:44.893879 kernel: Freeing unused kernel image (initmem) memory: 43472K
Jan 29 16:25:44.893889 kernel: Write protecting the kernel read-only data: 38912k
Jan 29 16:25:44.893900 kernel: Freeing unused kernel image (rodata/data gap) memory: 1724K
Jan 29 16:25:44.893910 kernel: Run /init as init process
Jan 29 16:25:44.893923 kernel:   with arguments:
Jan 29 16:25:44.893933 kernel:     /init
Jan 29 16:25:44.893944 kernel:   with environment:
Jan 29 16:25:44.893953 kernel:     HOME=/
Jan 29 16:25:44.893963 kernel:     TERM=linux
Jan 29 16:25:44.893973 kernel:     BOOT_IMAGE=/flatcar/vmlinuz-a
Jan 29 16:25:44.893984 systemd[1]: Successfully made /usr/ read-only.
Jan 29 16:25:44.893998 systemd[1]: systemd 256.8 running in system mode (+PAM +AUDIT +SELINUX -APPARMOR +IMA +SMACK +SECCOMP +GCRYPT -GNUTLS +OPENSSL -ACL +BLKID +CURL +ELFUTILS -FIDO2 +IDN2 -IDN +IPTC +KMOD +LIBCRYPTSETUP +LIBCRYPTSETUP_PLUGINS +LIBFDISK +PCRE2 -PWQUALITY -P11KIT -QRENCODE +TPM2 +BZIP2 +LZ4 +XZ +ZLIB +ZSTD -BPF_FRAMEWORK -XKBCOMMON +UTMP -SYSVINIT +LIBARCHIVE)
Jan 29 16:25:44.894013 systemd[1]: Detected virtualization kvm.
Jan 29 16:25:44.894023 systemd[1]: Detected architecture x86-64.
Jan 29 16:25:44.894034 systemd[1]: Running in initrd.
Jan 29 16:25:44.894044 systemd[1]: No hostname configured, using default hostname.
Jan 29 16:25:44.894055 systemd[1]: Hostname set to .
Jan 29 16:25:44.894066 systemd[1]: Initializing machine ID from VM UUID.
Jan 29 16:25:44.894077 systemd[1]: Queued start job for default target initrd.target.
Jan 29 16:25:44.894088 systemd[1]: Started clevis-luks-askpass.path - Forward Password Requests to Clevis Directory Watch.
Jan 29 16:25:44.894102 systemd[1]: Started systemd-ask-password-console.path - Dispatch Password Requests to Console Directory Watch.
Jan 29 16:25:44.894132 systemd[1]: Expecting device dev-disk-by\x2dlabel-EFI\x2dSYSTEM.device - /dev/disk/by-label/EFI-SYSTEM...
Jan 29 16:25:44.894147 systemd[1]: Expecting device dev-disk-by\x2dlabel-OEM.device - /dev/disk/by-label/OEM...
Jan 29 16:25:44.894159 systemd[1]: Expecting device dev-disk-by\x2dlabel-ROOT.device - /dev/disk/by-label/ROOT...
Jan 29 16:25:44.894171 systemd[1]: Expecting device dev-disk-by\x2dpartlabel-USR\x2dA.device - /dev/disk/by-partlabel/USR-A...
Jan 29 16:25:44.894187 systemd[1]: Expecting device dev-disk-by\x2dpartuuid-7130c94a\x2d213a\x2d4e5a\x2d8e26\x2d6cce9662f132.device - /dev/disk/by-partuuid/7130c94a-213a-4e5a-8e26-6cce9662f132...
Jan 29 16:25:44.894198 systemd[1]: Expecting device dev-mapper-usr.device - /dev/mapper/usr...
Jan 29 16:25:44.894209 systemd[1]: Reached target cryptsetup-pre.target - Local Encrypted Volumes (Pre).
Jan 29 16:25:44.894220 systemd[1]: Reached target cryptsetup.target - Local Encrypted Volumes.
Jan 29 16:25:44.894231 systemd[1]: Reached target paths.target - Path Units.
Jan 29 16:25:44.894250 systemd[1]: Reached target slices.target - Slice Units.
Jan 29 16:25:44.894261 systemd[1]: Reached target swap.target - Swaps.
Jan 29 16:25:44.894272 systemd[1]: Reached target timers.target - Timer Units.
Jan 29 16:25:44.894286 systemd[1]: Listening on iscsid.socket - Open-iSCSI iscsid Socket.
Jan 29 16:25:44.894297 systemd[1]: Listening on iscsiuio.socket - Open-iSCSI iscsiuio Socket.
Jan 29 16:25:44.894308 systemd[1]: Listening on systemd-journald-dev-log.socket - Journal Socket (/dev/log).
Jan 29 16:25:44.894319 systemd[1]: Listening on systemd-journald.socket - Journal Sockets.
Jan 29 16:25:44.894331 systemd[1]: Listening on systemd-networkd.socket - Network Service Netlink Socket.
Jan 29 16:25:44.894343 systemd[1]: Listening on systemd-udevd-control.socket - udev Control Socket.
Jan 29 16:25:44.894354 systemd[1]: Listening on systemd-udevd-kernel.socket - udev Kernel Socket.
Jan 29 16:25:44.894365 systemd[1]: Reached target sockets.target - Socket Units.
Jan 29 16:25:44.894377 systemd[1]: Starting ignition-setup-pre.service - Ignition env setup...
Jan 29 16:25:44.894391 systemd[1]: Starting kmod-static-nodes.service - Create List of Static Device Nodes...
Jan 29 16:25:44.894402 systemd[1]: Finished network-cleanup.service - Network Cleanup.
Jan 29 16:25:44.894413 systemd[1]: Starting systemd-fsck-usr.service...
Jan 29 16:25:44.894424 systemd[1]: Starting systemd-journald.service - Journal Service...
Jan 29 16:25:44.894436 systemd[1]: Starting systemd-modules-load.service - Load Kernel Modules...
Jan 29 16:25:44.894447 systemd[1]: Starting systemd-vconsole-setup.service - Virtual Console Setup...
Jan 29 16:25:44.894458 systemd[1]: Finished ignition-setup-pre.service - Ignition env setup.
Jan 29 16:25:44.894469 systemd[1]: Finished kmod-static-nodes.service - Create List of Static Device Nodes.
Jan 29 16:25:44.894484 systemd[1]: Finished systemd-fsck-usr.service.
Jan 29 16:25:44.894495 systemd[1]: Starting systemd-tmpfiles-setup-dev-early.service - Create Static Device Nodes in /dev gracefully...
Jan 29 16:25:44.894537 systemd-journald[193]: Collecting audit messages is disabled.
Jan 29 16:25:44.894563 systemd-journald[193]: Journal started
Jan 29 16:25:44.894594 systemd-journald[193]: Runtime Journal (/run/log/journal/939e1aa388cf4cbfa169d61752fad666) is 6M, max 48.4M, 42.3M free.
Jan 29 16:25:44.881957 systemd-modules-load[195]: Inserted module 'overlay'
Jan 29 16:25:44.917267 systemd[1]: Started systemd-journald.service - Journal Service.
Jan 29 16:25:44.917290 kernel: bridge: filtering via arp/ip/ip6tables is no longer available by default. Update your scripts to load br_netfilter if you need this.
Jan 29 16:25:44.917305 kernel: Bridge firewalling registered
Jan 29 16:25:44.908271 systemd-modules-load[195]: Inserted module 'br_netfilter'
Jan 29 16:25:44.915812 systemd[1]: Finished systemd-modules-load.service - Load Kernel Modules.
Jan 29 16:25:44.918525 systemd[1]: Finished systemd-vconsole-setup.service - Virtual Console Setup.
Jan 29 16:25:44.931742 systemd[1]: Starting dracut-cmdline-ask.service - dracut ask for additional cmdline parameters...
Jan 29 16:25:44.934741 systemd[1]: Starting systemd-sysctl.service - Apply Kernel Variables...
Jan 29 16:25:44.935402 systemd[1]: Starting systemd-tmpfiles-setup.service - Create System Files and Directories...
Jan 29 16:25:44.943552 systemd[1]: Finished systemd-tmpfiles-setup-dev-early.service - Create Static Device Nodes in /dev gracefully.
Jan 29 16:25:44.946850 systemd[1]: Starting systemd-tmpfiles-setup-dev.service - Create Static Device Nodes in /dev...
Jan 29 16:25:44.950251 systemd[1]: Finished systemd-tmpfiles-setup.service - Create System Files and Directories.
Jan 29 16:25:44.953208 systemd[1]: Finished dracut-cmdline-ask.service - dracut ask for additional cmdline parameters.
Jan 29 16:25:44.955992 systemd[1]: Finished systemd-sysctl.service - Apply Kernel Variables.
Jan 29 16:25:44.958378 systemd[1]: Starting dracut-cmdline.service - dracut cmdline hook...
Jan 29 16:25:44.960684 systemd[1]: Starting systemd-resolved.service - Network Name Resolution...
Jan 29 16:25:44.961994 systemd[1]: Finished systemd-tmpfiles-setup-dev.service - Create Static Device Nodes in /dev.
Jan 29 16:25:44.981165 dracut-cmdline[229]: dracut-dracut-053
Jan 29 16:25:44.981165 dracut-cmdline[229]: Using kernel command line parameters: rd.driver.pre=btrfs rootflags=rw mount.usrflags=ro BOOT_IMAGE=/flatcar/vmlinuz-a mount.usr=/dev/mapper/usr verity.usr=PARTUUID=7130c94a-213a-4e5a-8e26-6cce9662f132 rootflags=rw mount.usrflags=ro consoleblank=0 root=LABEL=ROOT console=ttyS0,115200 flatcar.first_boot=detected verity.usrhash=baa4132e9c604885344fa8e79d67c80ef841a135b233c762ecfe0386901a895d
Jan 29 16:25:45.016827 systemd-resolved[230]: Positive Trust Anchors:
Jan 29 16:25:45.016843 systemd-resolved[230]: . IN DS 20326 8 2 e06d44b80b8f1d39a95c0b0d7c65d08458e880409bbc683457104237c7f8ec8d
Jan 29 16:25:45.016874 systemd-resolved[230]: Negative trust anchors: home.arpa 10.in-addr.arpa 16.172.in-addr.arpa 17.172.in-addr.arpa 18.172.in-addr.arpa 19.172.in-addr.arpa 20.172.in-addr.arpa 21.172.in-addr.arpa 22.172.in-addr.arpa 23.172.in-addr.arpa 24.172.in-addr.arpa 25.172.in-addr.arpa 26.172.in-addr.arpa 27.172.in-addr.arpa 28.172.in-addr.arpa 29.172.in-addr.arpa 30.172.in-addr.arpa 31.172.in-addr.arpa 170.0.0.192.in-addr.arpa 171.0.0.192.in-addr.arpa 168.192.in-addr.arpa d.f.ip6.arpa ipv4only.arpa resolver.arpa corp home internal intranet lan local private test
Jan 29 16:25:45.019320 systemd-resolved[230]: Defaulting to hostname 'linux'.
Jan 29 16:25:45.020361 systemd[1]: Started systemd-resolved.service - Network Name Resolution.
Jan 29 16:25:45.026495 systemd[1]: Reached target nss-lookup.target - Host and Network Name Lookups.
Jan 29 16:25:45.063629 kernel: SCSI subsystem initialized
Jan 29 16:25:45.072619 kernel: Loading iSCSI transport class v2.0-870.
Jan 29 16:25:45.084620 kernel: iscsi: registered transport (tcp)
Jan 29 16:25:45.109620 kernel: iscsi: registered transport (qla4xxx)
Jan 29 16:25:45.109640 kernel: QLogic iSCSI HBA Driver
Jan 29 16:25:45.152140 systemd[1]: Finished dracut-cmdline.service - dracut cmdline hook.
Jan 29 16:25:45.188807 systemd[1]: Starting dracut-pre-udev.service - dracut pre-udev hook...
Jan 29 16:25:45.215381 kernel: device-mapper: core: CONFIG_IMA_DISABLE_HTABLE is disabled. Duplicate IMA measurements will not be recorded in the IMA log.
Jan 29 16:25:45.215422 kernel: device-mapper: uevent: version 1.0.3
Jan 29 16:25:45.215441 kernel: device-mapper: ioctl: 4.48.0-ioctl (2023-03-01) initialised: dm-devel@redhat.com
Jan 29 16:25:45.256628 kernel: raid6: avx2x4 gen() 30191 MB/s
Jan 29 16:25:45.273628 kernel: raid6: avx2x2 gen() 29849 MB/s
Jan 29 16:25:45.290708 kernel: raid6: avx2x1 gen() 25606 MB/s
Jan 29 16:25:45.290730 kernel: raid6: using algorithm avx2x4 gen() 30191 MB/s
Jan 29 16:25:45.308717 kernel: raid6: .... xor() 8184 MB/s, rmw enabled
Jan 29 16:25:45.308743 kernel: raid6: using avx2x2 recovery algorithm
Jan 29 16:25:45.329623 kernel: xor: automatically using best checksumming function   avx
Jan 29 16:25:45.490636 kernel: Btrfs loaded, zoned=no, fsverity=no
Jan 29 16:25:45.503907 systemd[1]: Finished dracut-pre-udev.service - dracut pre-udev hook.
Jan 29 16:25:45.521767 systemd[1]: Starting systemd-udevd.service - Rule-based Manager for Device Events and Files...
Jan 29 16:25:45.539043 systemd-udevd[414]: Using default interface naming scheme 'v255'.
Jan 29 16:25:45.544551 systemd[1]: Started systemd-udevd.service - Rule-based Manager for Device Events and Files.
Jan 29 16:25:45.559763 systemd[1]: Starting dracut-pre-trigger.service - dracut pre-trigger hook...
Jan 29 16:25:45.572090 dracut-pre-trigger[421]: rd.md=0: removing MD RAID activation
Jan 29 16:25:45.600897 systemd[1]: Finished dracut-pre-trigger.service - dracut pre-trigger hook.
Jan 29 16:25:45.612735 systemd[1]: Starting systemd-udev-trigger.service - Coldplug All udev Devices...
Jan 29 16:25:45.678138 systemd[1]: Finished systemd-udev-trigger.service - Coldplug All udev Devices.
Jan 29 16:25:45.687765 systemd[1]: Starting dracut-initqueue.service - dracut initqueue hook...
Jan 29 16:25:45.715634 systemd[1]: Finished dracut-initqueue.service - dracut initqueue hook.
Jan 29 16:25:45.720913 systemd[1]: Reached target remote-fs-pre.target - Preparation for Remote File Systems.
Jan 29 16:25:45.730746 kernel: cryptd: max_cpu_qlen set to 1000
Jan 29 16:25:45.730766 kernel: virtio_blk virtio1: 4/0/0 default/read/poll queues
Jan 29 16:25:45.759637 kernel: virtio_blk virtio1: [vda] 19775488 512-byte logical blocks (10.1 GB/9.43 GiB)
Jan 29 16:25:45.759816 kernel: libata version 3.00 loaded.
Jan 29 16:25:45.759829 kernel: GPT:Primary header thinks Alt. header is not at the end of the disk.
Jan 29 16:25:45.759840 kernel: GPT:9289727 != 19775487
Jan 29 16:25:45.759850 kernel: GPT:Alternate GPT header not at the end of the disk.
Jan 29 16:25:45.759860 kernel: GPT:9289727 != 19775487
Jan 29 16:25:45.759870 kernel: GPT: Use GNU Parted to correct GPT errors.
Jan 29 16:25:45.759884 kernel: vda: vda1 vda2 vda3 vda4 vda6 vda7 vda9
Jan 29 16:25:45.759895 kernel: AVX2 version of gcm_enc/dec engaged.
Jan 29 16:25:45.759905 kernel: AES CTR mode by8 optimization enabled
Jan 29 16:25:45.726863 systemd[1]: Reached target remote-cryptsetup.target - Remote Encrypted Volumes.
Jan 29 16:25:45.766912 kernel: ahci 0000:00:1f.2: version 3.0
Jan 29 16:25:45.798075 kernel: ACPI: \_SB_.GSIA: Enabled at IRQ 16
Jan 29 16:25:45.798091 kernel: ahci 0000:00:1f.2: AHCI 0001.0000 32 slots 6 ports 1.5 Gbps 0x3f impl SATA mode
Jan 29 16:25:45.798258 kernel: ahci 0000:00:1f.2: flags: 64bit ncq only
Jan 29 16:25:45.798409 kernel: BTRFS: device fsid b756ea5d-2d08-456f-8231-a684aa2555c3 devid 1 transid 39 /dev/vda3 scanned by (udev-worker) (462)
Jan 29 16:25:45.798422 kernel: scsi host0: ahci
Jan 29 16:25:45.798578 kernel: BTRFS: device label OEM devid 1 transid 14 /dev/vda6 scanned by (udev-worker) (463)
Jan 29 16:25:45.798590 kernel: scsi host1: ahci
Jan 29 16:25:45.799085 kernel: scsi host2: ahci
Jan 29 16:25:45.799246 kernel: scsi host3: ahci
Jan 29 16:25:45.799393 kernel: scsi host4: ahci
Jan 29 16:25:45.799535 kernel: scsi host5: ahci
Jan 29 16:25:45.799692 kernel: ata1: SATA max UDMA/133 abar m4096@0xfebd4000 port 0xfebd4100 irq 34
Jan 29 16:25:45.799704 kernel: ata2: SATA max UDMA/133 abar m4096@0xfebd4000 port 0xfebd4180 irq 34
Jan 29 16:25:45.799714 kernel: ata3: SATA max UDMA/133 abar m4096@0xfebd4000 port 0xfebd4200 irq 34
Jan 29 16:25:45.799724 kernel: ata4: SATA max UDMA/133 abar m4096@0xfebd4000 port 0xfebd4280 irq 34
Jan 29 16:25:45.799735 kernel: ata5: SATA max UDMA/133 abar m4096@0xfebd4000 port 0xfebd4300 irq 34
Jan 29 16:25:45.799745 kernel: ata6: SATA max UDMA/133 abar m4096@0xfebd4000 port 0xfebd4380 irq 34
Jan 29 16:25:45.729307 systemd[1]: Reached target remote-fs.target - Remote File Systems.
Jan 29 16:25:45.743745 systemd[1]: Starting dracut-pre-mount.service - dracut pre-mount hook...
Jan 29 16:25:45.759497 systemd[1]: Finished dracut-pre-mount.service - dracut pre-mount hook.
Jan 29 16:25:45.764287 systemd[1]: dracut-cmdline-ask.service: Deactivated successfully.
Jan 29 16:25:45.764421 systemd[1]: Stopped dracut-cmdline-ask.service - dracut ask for additional cmdline parameters.
Jan 29 16:25:45.768294 systemd[1]: Stopping dracut-cmdline-ask.service - dracut ask for additional cmdline parameters...
Jan 29 16:25:45.769779 systemd[1]: systemd-vconsole-setup.service: Deactivated successfully.
Jan 29 16:25:45.769981 systemd[1]: Stopped systemd-vconsole-setup.service - Virtual Console Setup.
Jan 29 16:25:45.771762 systemd[1]: Stopping systemd-vconsole-setup.service - Virtual Console Setup...
Jan 29 16:25:45.782156 systemd[1]: Starting systemd-vconsole-setup.service - Virtual Console Setup...
Jan 29 16:25:45.808927 systemd[1]: Found device dev-disk-by\x2dlabel-EFI\x2dSYSTEM.device - /dev/disk/by-label/EFI-SYSTEM.
Jan 29 16:25:45.836228 systemd[1]: Finished systemd-vconsole-setup.service - Virtual Console Setup.
Jan 29 16:25:45.861907 systemd[1]: Found device dev-disk-by\x2dlabel-ROOT.device - /dev/disk/by-label/ROOT.
Jan 29 16:25:45.868907 systemd[1]: Found device dev-disk-by\x2dpartlabel-USR\x2dA.device - /dev/disk/by-partlabel/USR-A.
Jan 29 16:25:45.868979 systemd[1]: Found device dev-disk-by\x2dpartuuid-7130c94a\x2d213a\x2d4e5a\x2d8e26\x2d6cce9662f132.device - /dev/disk/by-partuuid/7130c94a-213a-4e5a-8e26-6cce9662f132.
Jan 29 16:25:45.880193 systemd[1]: Found device dev-disk-by\x2dlabel-OEM.device - /dev/disk/by-label/OEM.
Jan 29 16:25:45.895703 systemd[1]: Starting disk-uuid.service - Generate new UUID for disk GPT if necessary...
Jan 29 16:25:45.897554 systemd[1]: Starting dracut-cmdline-ask.service - dracut ask for additional cmdline parameters...
Jan 29 16:25:45.914241 systemd[1]: Finished dracut-cmdline-ask.service - dracut ask for additional cmdline parameters.
Jan 29 16:25:45.933318 disk-uuid[566]: Primary Header is updated.
Jan 29 16:25:45.933318 disk-uuid[566]: Secondary Entries is updated.
Jan 29 16:25:45.933318 disk-uuid[566]: Secondary Header is updated.
Jan 29 16:25:45.937635 kernel: vda: vda1 vda2 vda3 vda4 vda6 vda7 vda9
Jan 29 16:25:45.941627 kernel: vda: vda1 vda2 vda3 vda4 vda6 vda7 vda9
Jan 29 16:25:46.109696 kernel: ata5: SATA link down (SStatus 0 SControl 300)
Jan 29 16:25:46.109755 kernel: ata2: SATA link down (SStatus 0 SControl 300)
Jan 29 16:25:46.111305 kernel: ata3: SATA link up 1.5 Gbps (SStatus 113 SControl 300)
Jan 29 16:25:46.111331 kernel: ata4: SATA link down (SStatus 0 SControl 300)
Jan 29 16:25:46.111616 kernel: ata1: SATA link down (SStatus 0 SControl 300)
Jan 29 16:25:46.112624 kernel: ata6: SATA link down (SStatus 0 SControl 300)
Jan 29 16:25:46.113624 kernel: ata3.00: ATAPI: QEMU DVD-ROM, 2.5+, max UDMA/100
Jan 29 16:25:46.113638 kernel: ata3.00: applying bridge limits
Jan 29 16:25:46.114621 kernel: ata3.00: configured for UDMA/100
Jan 29 16:25:46.116626 kernel: scsi 2:0:0:0: CD-ROM            QEMU     QEMU DVD-ROM     2.5+ PQ: 0 ANSI: 5
Jan 29 16:25:46.154109 kernel: sr 2:0:0:0: [sr0] scsi3-mmc drive: 4x/4x cd/rw xa/form2 tray
Jan 29 16:25:46.172309 kernel: cdrom: Uniform CD-ROM driver Revision: 3.20
Jan 29 16:25:46.172327 kernel: sr 2:0:0:0: Attached scsi CD-ROM sr0
Jan 29 16:25:46.942694 kernel: vda: vda1 vda2 vda3 vda4 vda6 vda7 vda9
Jan 29 16:25:46.942752 disk-uuid[575]: The operation has completed successfully.
Jan 29 16:25:46.976678 systemd[1]: disk-uuid.service: Deactivated successfully.
Jan 29 16:25:46.976799 systemd[1]: Finished disk-uuid.service - Generate new UUID for disk GPT if necessary.
Jan 29 16:25:47.023820 systemd[1]: Starting verity-setup.service - Verity Setup for /dev/mapper/usr...
Jan 29 16:25:47.028795 sh[590]: Success
Jan 29 16:25:47.041629 kernel: device-mapper: verity: sha256 using implementation "sha256-ni"
Jan 29 16:25:47.078197 systemd[1]: Found device dev-mapper-usr.device - /dev/mapper/usr.
Jan 29 16:25:47.088450 systemd[1]: Mounting sysusr-usr.mount - /sysusr/usr...
Jan 29 16:25:47.091114 systemd[1]: Finished verity-setup.service - Verity Setup for /dev/mapper/usr.
Jan 29 16:25:47.106207 kernel: BTRFS info (device dm-0): first mount of filesystem b756ea5d-2d08-456f-8231-a684aa2555c3
Jan 29 16:25:47.106247 kernel: BTRFS info (device dm-0): using crc32c (crc32c-intel) checksum algorithm
Jan 29 16:25:47.106259 kernel: BTRFS warning (device dm-0): 'nologreplay' is deprecated, use 'rescue=nologreplay' instead
Jan 29 16:25:47.107231 kernel: BTRFS info (device dm-0): disabling log replay at mount time
Jan 29 16:25:47.107962 kernel: BTRFS info (device dm-0): using free space tree
Jan 29 16:25:47.112490 systemd[1]: Mounted sysusr-usr.mount - /sysusr/usr.
Jan 29 16:25:47.113172 systemd[1]: afterburn-network-kargs.service - Afterburn Initrd Setup Network Kernel Arguments was skipped because no trigger condition checks were met.
Jan 29 16:25:47.114015 systemd[1]: Starting ignition-setup.service - Ignition (setup)...
Jan 29 16:25:47.114991 systemd[1]: Starting parse-ip-for-networkd.service - Write systemd-networkd units from cmdline...
Jan 29 16:25:47.131450 kernel: BTRFS info (device vda6): first mount of filesystem 69adaa96-08ce-46f2-b4e9-2d5873de430e
Jan 29 16:25:47.131503 kernel: BTRFS info (device vda6): using crc32c (crc32c-intel) checksum algorithm
Jan 29 16:25:47.131521 kernel: BTRFS info (device vda6): using free space tree
Jan 29 16:25:47.134645 kernel: BTRFS info (device vda6): auto enabling async discard
Jan 29 16:25:47.143125 systemd[1]: mnt-oem.mount: Deactivated successfully.
Jan 29 16:25:47.144751 kernel: BTRFS info (device vda6): last unmount of filesystem 69adaa96-08ce-46f2-b4e9-2d5873de430e
Jan 29 16:25:47.153521 systemd[1]: Finished ignition-setup.service - Ignition (setup).
Jan 29 16:25:47.160769 systemd[1]: Starting ignition-fetch-offline.service - Ignition (fetch-offline)...
Jan 29 16:25:47.217067 ignition[699]: Ignition 2.20.0
Jan 29 16:25:47.217082 ignition[699]: Stage: fetch-offline
Jan 29 16:25:47.217129 ignition[699]: no configs at "/usr/lib/ignition/base.d"
Jan 29 16:25:47.217139 ignition[699]: no config dir at "/usr/lib/ignition/base.platform.d/qemu"
Jan 29 16:25:47.217258 ignition[699]: parsed url from cmdline: ""
Jan 29 16:25:47.217262 ignition[699]: no config URL provided
Jan 29 16:25:47.217268 ignition[699]: reading system config file "/usr/lib/ignition/user.ign"
Jan 29 16:25:47.217277 ignition[699]: no config at "/usr/lib/ignition/user.ign"
Jan 29 16:25:47.217306 ignition[699]: op(1): [started]  loading QEMU firmware config module
Jan 29 16:25:47.217311 ignition[699]: op(1): executing: "modprobe" "qemu_fw_cfg"
Jan 29 16:25:47.229657 systemd[1]: Finished parse-ip-for-networkd.service - Write systemd-networkd units from cmdline.
Jan 29 16:25:47.232254 ignition[699]: op(1): [finished] loading QEMU firmware config module
Jan 29 16:25:47.233668 ignition[699]: parsing config with SHA512: 97b129935c99dbae6d0051e61a00296d0321e03b7584f3e2baf2208c874dfd892c916b682925216beb32e7681f812723347661b37879d3e614883c35ea5a4299
Jan 29 16:25:47.234752 systemd[1]: Starting systemd-networkd.service - Network Configuration...
Jan 29 16:25:47.238719 unknown[699]: fetched base config from "system"
Jan 29 16:25:47.238731 unknown[699]: fetched user config from "qemu"
Jan 29 16:25:47.240563 ignition[699]: fetch-offline: fetch-offline passed
Jan 29 16:25:47.240671 ignition[699]: Ignition finished successfully
Jan 29 16:25:47.242976 systemd[1]: Finished ignition-fetch-offline.service - Ignition (fetch-offline).
Jan 29 16:25:47.266301 systemd-networkd[780]: lo: Link UP
Jan 29 16:25:47.266311 systemd-networkd[780]: lo: Gained carrier
Jan 29 16:25:47.268036 systemd-networkd[780]: Enumeration completed
Jan 29 16:25:47.268127 systemd[1]: Started systemd-networkd.service - Network Configuration.
Jan 29 16:25:47.268492 systemd-networkd[780]: eth0: found matching network '/usr/lib/systemd/network/zz-default.network', based on potentially unpredictable interface name.
Jan 29 16:25:47.268497 systemd-networkd[780]: eth0: Configuring with /usr/lib/systemd/network/zz-default.network.
Jan 29 16:25:47.269328 systemd-networkd[780]: eth0: Link UP
Jan 29 16:25:47.269332 systemd-networkd[780]: eth0: Gained carrier
Jan 29 16:25:47.269340 systemd-networkd[780]: eth0: found matching network '/usr/lib/systemd/network/zz-default.network', based on potentially unpredictable interface name.
Jan 29 16:25:47.270514 systemd[1]: Reached target network.target - Network.
Jan 29 16:25:47.272757 systemd[1]: ignition-fetch.service - Ignition (fetch) was skipped because of an unmet condition check (ConditionPathExists=!/run/ignition.json).
Jan 29 16:25:47.281804 systemd[1]: Starting ignition-kargs.service - Ignition (kargs)...
Jan 29 16:25:47.287669 systemd-networkd[780]: eth0: DHCPv4 address 10.0.0.149/16, gateway 10.0.0.1 acquired from 10.0.0.1
Jan 29 16:25:47.300295 ignition[785]: Ignition 2.20.0
Jan 29 16:25:47.300307 ignition[785]: Stage: kargs
Jan 29 16:25:47.300479 ignition[785]: no configs at "/usr/lib/ignition/base.d"
Jan 29 16:25:47.300490 ignition[785]: no config dir at "/usr/lib/ignition/base.platform.d/qemu"
Jan 29 16:25:47.301107 ignition[785]: kargs: kargs passed
Jan 29 16:25:47.304713 systemd[1]: Finished ignition-kargs.service - Ignition (kargs).
Jan 29 16:25:47.301148 ignition[785]: Ignition finished successfully
Jan 29 16:25:47.318956 systemd[1]: Starting ignition-disks.service - Ignition (disks)...
Jan 29 16:25:47.329217 ignition[795]: Ignition 2.20.0
Jan 29 16:25:47.329228 ignition[795]: Stage: disks
Jan 29 16:25:47.329391 ignition[795]: no configs at "/usr/lib/ignition/base.d"
Jan 29 16:25:47.329402 ignition[795]: no config dir at "/usr/lib/ignition/base.platform.d/qemu"
Jan 29 16:25:47.330025 ignition[795]: disks: disks passed
Jan 29 16:25:47.330067 ignition[795]: Ignition finished successfully
Jan 29 16:25:47.336283 systemd[1]: Finished ignition-disks.service - Ignition (disks).
Jan 29 16:25:47.338421 systemd[1]: Reached target initrd-root-device.target - Initrd Root Device.
Jan 29 16:25:47.338493 systemd[1]: Reached target local-fs-pre.target - Preparation for Local File Systems.
Jan 29 16:25:47.340686 systemd[1]: Reached target local-fs.target - Local File Systems.
Jan 29 16:25:47.343106 systemd[1]: Reached target sysinit.target - System Initialization.
Jan 29 16:25:47.345004 systemd[1]: Reached target basic.target - Basic System.
Jan 29 16:25:47.357733 systemd[1]: Starting systemd-fsck-root.service - File System Check on /dev/disk/by-label/ROOT...
Jan 29 16:25:47.371913 systemd-fsck[806]: ROOT: clean, 14/553520 files, 52654/553472 blocks
Jan 29 16:25:47.378659 systemd[1]: Finished systemd-fsck-root.service - File System Check on /dev/disk/by-label/ROOT.
Jan 29 16:25:48.114739 systemd[1]: Mounting sysroot.mount - /sysroot...
Jan 29 16:25:48.207620 kernel: EXT4-fs (vda9): mounted filesystem 93ea9bb6-d6ba-4a18-a828-f0002683a7b4 r/w with ordered data mode. Quota mode: none.
Jan 29 16:25:48.208128 systemd[1]: Mounted sysroot.mount - /sysroot.
Jan 29 16:25:48.208766 systemd[1]: Reached target initrd-root-fs.target - Initrd Root File System.
Jan 29 16:25:48.229739 systemd[1]: Mounting sysroot-oem.mount - /sysroot/oem...
Jan 29 16:25:48.231870 systemd[1]: Mounting sysroot-usr.mount - /sysroot/usr...
Jan 29 16:25:48.233266 systemd[1]: flatcar-metadata-hostname.service - Flatcar Metadata Hostname Agent was skipped because no trigger condition checks were met.
Jan 29 16:25:48.233309 systemd[1]: ignition-remount-sysroot.service - Remount /sysroot read-write for Ignition was skipped because of an unmet condition check (ConditionPathIsReadWrite=!/sysroot).
Jan 29 16:25:48.241942 kernel: BTRFS: device label OEM devid 1 transid 15 /dev/vda6 scanned by mount (814)
Jan 29 16:25:48.241966 kernel: BTRFS info (device vda6): first mount of filesystem 69adaa96-08ce-46f2-b4e9-2d5873de430e
Jan 29 16:25:48.233334 systemd[1]: Reached target ignition-diskful.target - Ignition Boot Disk Setup.
Jan 29 16:25:48.248945 kernel: BTRFS info (device vda6): using crc32c (crc32c-intel) checksum algorithm
Jan 29 16:25:48.248972 kernel: BTRFS info (device vda6): using free space tree
Jan 29 16:25:48.248987 kernel: BTRFS info (device vda6): auto enabling async discard
Jan 29 16:25:48.239550 systemd[1]: Mounted sysroot-usr.mount - /sysroot/usr.
Jan 29 16:25:48.247394 systemd[1]: Starting initrd-setup-root.service - Root filesystem setup...
Jan 29 16:25:48.250000 systemd[1]: Mounted sysroot-oem.mount - /sysroot/oem.
Jan 29 16:25:48.284432 initrd-setup-root[838]: cut: /sysroot/etc/passwd: No such file or directory
Jan 29 16:25:48.289466 initrd-setup-root[845]: cut: /sysroot/etc/group: No such file or directory
Jan 29 16:25:48.293016 initrd-setup-root[852]: cut: /sysroot/etc/shadow: No such file or directory
Jan 29 16:25:48.319490 initrd-setup-root[859]: cut: /sysroot/etc/gshadow: No such file or directory
Jan 29 16:25:48.412737 systemd[1]: Finished initrd-setup-root.service - Root filesystem setup.
Jan 29 16:25:48.420710 systemd[1]: Starting ignition-mount.service - Ignition (mount)...
Jan 29 16:25:48.422643 systemd[1]: Starting sysroot-boot.service - /sysroot/boot...
Jan 29 16:25:48.428629 kernel: BTRFS info (device vda6): last unmount of filesystem 69adaa96-08ce-46f2-b4e9-2d5873de430e
Jan 29 16:25:48.445904 systemd[1]: Finished sysroot-boot.service - /sysroot/boot.
Jan 29 16:25:48.536412 ignition[930]: INFO     : Ignition 2.20.0
Jan 29 16:25:48.536412 ignition[930]: INFO     : Stage: mount
Jan 29 16:25:48.538283 ignition[930]: INFO     : no configs at "/usr/lib/ignition/base.d"
Jan 29 16:25:48.538283 ignition[930]: INFO     : no config dir at "/usr/lib/ignition/base.platform.d/qemu"
Jan 29 16:25:48.540742 ignition[930]: INFO     : mount: mount passed
Jan 29 16:25:48.541532 ignition[930]: INFO     : Ignition finished successfully
Jan 29 16:25:48.544391 systemd[1]: Finished ignition-mount.service - Ignition (mount).
Jan 29 16:25:48.550682 systemd[1]: Starting ignition-files.service - Ignition (files)...
Jan 29 16:25:48.606749 systemd-networkd[780]: eth0: Gained IPv6LL
Jan 29 16:25:49.105533 systemd[1]: sysroot-oem.mount: Deactivated successfully.
Jan 29 16:25:49.119002 systemd[1]: Mounting sysroot-oem.mount - /sysroot/oem...
Jan 29 16:25:49.125636 kernel: BTRFS: device label OEM devid 1 transid 16 /dev/vda6 scanned by mount (940)
Jan 29 16:25:49.127743 kernel: BTRFS info (device vda6): first mount of filesystem 69adaa96-08ce-46f2-b4e9-2d5873de430e
Jan 29 16:25:49.127767 kernel: BTRFS info (device vda6): using crc32c (crc32c-intel) checksum algorithm
Jan 29 16:25:49.127781 kernel: BTRFS info (device vda6): using free space tree
Jan 29 16:25:49.131627 kernel: BTRFS info (device vda6): auto enabling async discard
Jan 29 16:25:49.133460 systemd[1]: Mounted sysroot-oem.mount - /sysroot/oem.
Jan 29 16:25:49.161402 ignition[957]: INFO     : Ignition 2.20.0
Jan 29 16:25:49.161402 ignition[957]: INFO     : Stage: files
Jan 29 16:25:49.163724 ignition[957]: INFO     : no configs at "/usr/lib/ignition/base.d"
Jan 29 16:25:49.163724 ignition[957]: INFO     : no config dir at "/usr/lib/ignition/base.platform.d/qemu"
Jan 29 16:25:49.163724 ignition[957]: DEBUG    : files: compiled without relabeling support, skipping
Jan 29 16:25:49.167579 ignition[957]: INFO     : files: ensureUsers: op(1): [started]  creating or modifying user "core"
Jan 29 16:25:49.167579 ignition[957]: DEBUG    : files: ensureUsers: op(1): executing: "usermod" "--root" "/sysroot" "core"
Jan 29 16:25:49.172245 ignition[957]: INFO     : files: ensureUsers: op(1): [finished] creating or modifying user "core"
Jan 29 16:25:49.173701 ignition[957]: INFO     : files: ensureUsers: op(2): [started]  adding ssh keys to user "core"
Jan 29 16:25:49.175450 unknown[957]: wrote ssh authorized keys file for user: core
Jan 29 16:25:49.176704 ignition[957]: INFO     : files: ensureUsers: op(2): [finished] adding ssh keys to user "core"
Jan 29 16:25:49.178421 ignition[957]: INFO     : files: createFilesystemsFiles: createFiles: op(3): [started]  writing file "/sysroot/home/core/install.sh"
Jan 29 16:25:49.178421 ignition[957]: INFO     : files: createFilesystemsFiles: createFiles: op(3): [finished] writing file "/sysroot/home/core/install.sh"
Jan 29 16:25:49.178421 ignition[957]: INFO     : files: createFilesystemsFiles: createFiles: op(4): [started]  writing file "/sysroot/etc/flatcar/update.conf"
Jan 29 16:25:49.178421 ignition[957]: INFO     : files: createFilesystemsFiles: createFiles: op(4): [finished] writing file "/sysroot/etc/flatcar/update.conf"
Jan 29 16:25:49.178421 ignition[957]: INFO     : files: createFilesystemsFiles: createFiles: op(5): [started]  writing link "/sysroot/etc/extensions/kubernetes.raw" -> "/opt/extensions/kubernetes/kubernetes-v1.31.0-x86-64.raw"
Jan 29 16:25:49.178421 ignition[957]: INFO     : files: createFilesystemsFiles: createFiles: op(5): [finished] writing link "/sysroot/etc/extensions/kubernetes.raw" -> "/opt/extensions/kubernetes/kubernetes-v1.31.0-x86-64.raw"
Jan 29 16:25:49.178421 ignition[957]: INFO     : files: createFilesystemsFiles: createFiles: op(6): [started]  writing file "/sysroot/opt/extensions/kubernetes/kubernetes-v1.31.0-x86-64.raw"
Jan 29 16:25:49.178421 ignition[957]: INFO     : files: createFilesystemsFiles: createFiles: op(6): GET https://github.com/flatcar/sysext-bakery/releases/download/latest/kubernetes-v1.31.0-x86-64.raw: attempt #1
Jan 29 16:25:49.609687 ignition[957]: INFO     : files: createFilesystemsFiles: createFiles: op(6): GET result: OK
Jan 29 16:25:50.189940 ignition[957]: INFO     : files: createFilesystemsFiles: createFiles: op(6): [finished] writing file "/sysroot/opt/extensions/kubernetes/kubernetes-v1.31.0-x86-64.raw"
Jan 29 16:25:50.189940 ignition[957]: INFO     : files: op(7): [started]  processing unit "coreos-metadata.service"
Jan 29 16:25:50.195347 ignition[957]: INFO     : files: op(7): op(8): [started]  writing unit "coreos-metadata.service" at "/sysroot/etc/systemd/system/coreos-metadata.service"
Jan 29 16:25:50.195347 ignition[957]: INFO     : files: op(7): op(8): [finished] writing unit "coreos-metadata.service" at "/sysroot/etc/systemd/system/coreos-metadata.service"
Jan 29 16:25:50.195347 ignition[957]: INFO     : files: op(7): [finished] processing unit "coreos-metadata.service"
Jan 29 16:25:50.195347 ignition[957]: INFO     : files: op(9): [started]  setting preset to disabled for "coreos-metadata.service"
Jan 29 16:25:50.211176 ignition[957]: INFO     : files: op(9): op(a): [started]  removing enablement symlink(s) for "coreos-metadata.service"
Jan 29 16:25:50.217747 ignition[957]: INFO     : files: op(9): op(a): [finished] removing enablement symlink(s) for "coreos-metadata.service"
Jan 29 16:25:50.219899 ignition[957]: INFO     : files: op(9): [finished] setting preset to disabled for "coreos-metadata.service"
Jan 29 16:25:50.219899 ignition[957]: INFO     : files: createResultFile: createFiles: op(b): [started]  writing file "/sysroot/etc/.ignition-result.json"
Jan 29 16:25:50.219899 ignition[957]: INFO     : files: createResultFile: createFiles: op(b): [finished] writing file "/sysroot/etc/.ignition-result.json"
Jan 29 16:25:50.219899 ignition[957]: INFO     : files: files passed
Jan 29 16:25:50.219899 ignition[957]: INFO     : Ignition finished successfully
Jan 29 16:25:50.231384 systemd[1]: Finished ignition-files.service - Ignition (files).
Jan 29 16:25:50.244743 systemd[1]: Starting ignition-quench.service - Ignition (record completion)...
Jan 29 16:25:50.246873 systemd[1]: Starting initrd-setup-root-after-ignition.service - Root filesystem completion...
Jan 29 16:25:50.249897 systemd[1]: ignition-quench.service: Deactivated successfully.
Jan 29 16:25:50.250039 systemd[1]: Finished ignition-quench.service - Ignition (record completion).
Jan 29 16:25:50.257960 initrd-setup-root-after-ignition[985]: grep: /sysroot/oem/oem-release: No such file or directory
Jan 29 16:25:50.261060 initrd-setup-root-after-ignition[987]: grep: /sysroot/etc/flatcar/enabled-sysext.conf: No such file or directory
Jan 29 16:25:50.261060 initrd-setup-root-after-ignition[987]: grep: /sysroot/usr/share/flatcar/enabled-sysext.conf: No such file or directory
Jan 29 16:25:50.267254 initrd-setup-root-after-ignition[991]: grep: /sysroot/etc/flatcar/enabled-sysext.conf: No such file or directory
Jan 29 16:25:50.264171 systemd[1]: Finished initrd-setup-root-after-ignition.service - Root filesystem completion.
Jan 29 16:25:50.267495 systemd[1]: Reached target ignition-complete.target - Ignition Complete.
Jan 29 16:25:50.280768 systemd[1]: Starting initrd-parse-etc.service - Mountpoints Configured in the Real Root...
Jan 29 16:25:50.306795 systemd[1]: initrd-parse-etc.service: Deactivated successfully.
Jan 29 16:25:50.306920 systemd[1]: Finished initrd-parse-etc.service - Mountpoints Configured in the Real Root.
Jan 29 16:25:50.309385 systemd[1]: Reached target initrd-fs.target - Initrd File Systems.
Jan 29 16:25:50.311405 systemd[1]: Reached target initrd.target - Initrd Default Target.
Jan 29 16:25:50.313486 systemd[1]: dracut-mount.service - dracut mount hook was skipped because no trigger condition checks were met.
Jan 29 16:25:50.314318 systemd[1]: Starting dracut-pre-pivot.service - dracut pre-pivot and cleanup hook...
Jan 29 16:25:50.332034 systemd[1]: Finished dracut-pre-pivot.service - dracut pre-pivot and cleanup hook.
Jan 29 16:25:50.334591 systemd[1]: Starting initrd-cleanup.service - Cleaning Up and Shutting Down Daemons...
Jan 29 16:25:50.348693 systemd[1]: Stopped target nss-lookup.target - Host and Network Name Lookups.
Jan 29 16:25:50.349980 systemd[1]: Stopped target remote-cryptsetup.target - Remote Encrypted Volumes.
Jan 29 16:25:50.352226 systemd[1]: Stopped target timers.target - Timer Units.
Jan 29 16:25:50.354292 systemd[1]: dracut-pre-pivot.service: Deactivated successfully.
Jan 29 16:25:50.354407 systemd[1]: Stopped dracut-pre-pivot.service - dracut pre-pivot and cleanup hook.
Jan 29 16:25:50.356823 systemd[1]: Stopped target initrd.target - Initrd Default Target.
Jan 29 16:25:50.358413 systemd[1]: Stopped target basic.target - Basic System.
Jan 29 16:25:50.360549 systemd[1]: Stopped target ignition-complete.target - Ignition Complete.
Jan 29 16:25:50.362653 systemd[1]: Stopped target ignition-diskful.target - Ignition Boot Disk Setup.
Jan 29 16:25:50.364750 systemd[1]: Stopped target initrd-root-device.target - Initrd Root Device.
Jan 29 16:25:50.366893 systemd[1]: Stopped target remote-fs.target - Remote File Systems.
Jan 29 16:25:50.369044 systemd[1]: Stopped target remote-fs-pre.target - Preparation for Remote File Systems.
Jan 29 16:25:50.371402 systemd[1]: Stopped target sysinit.target - System Initialization.
Jan 29 16:25:50.373444 systemd[1]: Stopped target local-fs.target - Local File Systems.
Jan 29 16:25:50.375743 systemd[1]: Stopped target swap.target - Swaps.
Jan 29 16:25:50.377590 systemd[1]: dracut-pre-mount.service: Deactivated successfully.
Jan 29 16:25:50.377722 systemd[1]: Stopped dracut-pre-mount.service - dracut pre-mount hook.
Jan 29 16:25:50.380144 systemd[1]: Stopped target cryptsetup.target - Local Encrypted Volumes.
Jan 29 16:25:50.381612 systemd[1]: Stopped target cryptsetup-pre.target - Local Encrypted Volumes (Pre).
Jan 29 16:25:50.383708 systemd[1]: clevis-luks-askpass.path: Deactivated successfully.
Jan 29 16:25:50.383820 systemd[1]: Stopped clevis-luks-askpass.path - Forward Password Requests to Clevis Directory Watch.
Jan 29 16:25:50.385966 systemd[1]: dracut-initqueue.service: Deactivated successfully.
Jan 29 16:25:50.386130 systemd[1]: Stopped dracut-initqueue.service - dracut initqueue hook.
Jan 29 16:25:50.388507 systemd[1]: ignition-fetch-offline.service: Deactivated successfully.
Jan 29 16:25:50.388678 systemd[1]: Stopped ignition-fetch-offline.service - Ignition (fetch-offline). Jan 29 16:25:50.390539 systemd[1]: Stopped target paths.target - Path Units. Jan 29 16:25:50.392274 systemd[1]: systemd-ask-password-console.path: Deactivated successfully. Jan 29 16:25:50.395685 systemd[1]: Stopped systemd-ask-password-console.path - Dispatch Password Requests to Console Directory Watch. Jan 29 16:25:50.397988 systemd[1]: Stopped target slices.target - Slice Units. Jan 29 16:25:50.400004 systemd[1]: Stopped target sockets.target - Socket Units. Jan 29 16:25:50.401976 systemd[1]: iscsid.socket: Deactivated successfully. Jan 29 16:25:50.402150 systemd[1]: Closed iscsid.socket - Open-iSCSI iscsid Socket. Jan 29 16:25:50.404286 systemd[1]: iscsiuio.socket: Deactivated successfully. Jan 29 16:25:50.404400 systemd[1]: Closed iscsiuio.socket - Open-iSCSI iscsiuio Socket. Jan 29 16:25:50.406999 systemd[1]: initrd-setup-root-after-ignition.service: Deactivated successfully. Jan 29 16:25:50.407178 systemd[1]: Stopped initrd-setup-root-after-ignition.service - Root filesystem completion. Jan 29 16:25:50.409155 systemd[1]: ignition-files.service: Deactivated successfully. Jan 29 16:25:50.409314 systemd[1]: Stopped ignition-files.service - Ignition (files). Jan 29 16:25:50.421755 systemd[1]: Stopping ignition-mount.service - Ignition (mount)... Jan 29 16:25:50.424424 systemd[1]: Stopping sysroot-boot.service - /sysroot/boot... Jan 29 16:25:50.425357 systemd[1]: systemd-udev-trigger.service: Deactivated successfully. Jan 29 16:25:50.425475 systemd[1]: Stopped systemd-udev-trigger.service - Coldplug All udev Devices. Jan 29 16:25:50.427615 systemd[1]: dracut-pre-trigger.service: Deactivated successfully. Jan 29 16:25:50.427718 systemd[1]: Stopped dracut-pre-trigger.service - dracut pre-trigger hook. Jan 29 16:25:50.436742 systemd[1]: initrd-cleanup.service: Deactivated successfully. 
Jan 29 16:25:50.436871 systemd[1]: Finished initrd-cleanup.service - Cleaning Up and Shutting Down Daemons.
Jan 29 16:25:50.448623 ignition[1011]: INFO : Ignition 2.20.0
Jan 29 16:25:50.448623 ignition[1011]: INFO : Stage: umount
Jan 29 16:25:50.450511 ignition[1011]: INFO : no configs at "/usr/lib/ignition/base.d"
Jan 29 16:25:50.450511 ignition[1011]: INFO : no config dir at "/usr/lib/ignition/base.platform.d/qemu"
Jan 29 16:25:50.450511 ignition[1011]: INFO : umount: umount passed
Jan 29 16:25:50.450511 ignition[1011]: INFO : Ignition finished successfully
Jan 29 16:25:50.450513 systemd[1]: sysroot-boot.mount: Deactivated successfully.
Jan 29 16:25:50.451200 systemd[1]: ignition-mount.service: Deactivated successfully.
Jan 29 16:25:50.451325 systemd[1]: Stopped ignition-mount.service - Ignition (mount).
Jan 29 16:25:50.453618 systemd[1]: Stopped target network.target - Network.
Jan 29 16:25:50.455337 systemd[1]: ignition-disks.service: Deactivated successfully.
Jan 29 16:25:50.455414 systemd[1]: Stopped ignition-disks.service - Ignition (disks).
Jan 29 16:25:50.458198 systemd[1]: ignition-kargs.service: Deactivated successfully.
Jan 29 16:25:50.458263 systemd[1]: Stopped ignition-kargs.service - Ignition (kargs).
Jan 29 16:25:50.460020 systemd[1]: ignition-setup.service: Deactivated successfully.
Jan 29 16:25:50.460081 systemd[1]: Stopped ignition-setup.service - Ignition (setup).
Jan 29 16:25:50.462055 systemd[1]: ignition-setup-pre.service: Deactivated successfully.
Jan 29 16:25:50.462118 systemd[1]: Stopped ignition-setup-pre.service - Ignition env setup.
Jan 29 16:25:50.464264 systemd[1]: Stopping systemd-networkd.service - Network Configuration...
Jan 29 16:25:50.466303 systemd[1]: Stopping systemd-resolved.service - Network Name Resolution...
Jan 29 16:25:50.468622 systemd[1]: systemd-resolved.service: Deactivated successfully.
Jan 29 16:25:50.468755 systemd[1]: Stopped systemd-resolved.service - Network Name Resolution.
Jan 29 16:25:50.473253 systemd[1]: run-credentials-systemd\x2dresolved.service.mount: Deactivated successfully.
Jan 29 16:25:50.474015 systemd[1]: systemd-tmpfiles-setup.service: Deactivated successfully.
Jan 29 16:25:50.474106 systemd[1]: Stopped systemd-tmpfiles-setup.service - Create System Files and Directories.
Jan 29 16:25:50.478225 systemd[1]: run-credentials-systemd\x2dtmpfiles\x2dsetup.service.mount: Deactivated successfully.
Jan 29 16:25:50.478532 systemd[1]: systemd-networkd.service: Deactivated successfully.
Jan 29 16:25:50.478681 systemd[1]: Stopped systemd-networkd.service - Network Configuration.
Jan 29 16:25:50.481514 systemd[1]: run-credentials-systemd\x2dnetworkd.service.mount: Deactivated successfully.
Jan 29 16:25:50.482319 systemd[1]: systemd-networkd.socket: Deactivated successfully.
Jan 29 16:25:50.482388 systemd[1]: Closed systemd-networkd.socket - Network Service Netlink Socket.
Jan 29 16:25:50.497731 systemd[1]: Stopping network-cleanup.service - Network Cleanup...
Jan 29 16:25:50.497841 systemd[1]: parse-ip-for-networkd.service: Deactivated successfully.
Jan 29 16:25:50.497905 systemd[1]: Stopped parse-ip-for-networkd.service - Write systemd-networkd units from cmdline.
Jan 29 16:25:50.498251 systemd[1]: systemd-sysctl.service: Deactivated successfully.
Jan 29 16:25:50.498296 systemd[1]: Stopped systemd-sysctl.service - Apply Kernel Variables.
Jan 29 16:25:50.520394 systemd[1]: systemd-modules-load.service: Deactivated successfully.
Jan 29 16:25:50.520458 systemd[1]: Stopped systemd-modules-load.service - Load Kernel Modules.
Jan 29 16:25:50.521467 systemd[1]: Stopping systemd-udevd.service - Rule-based Manager for Device Events and Files...
Jan 29 16:25:50.537866 systemd[1]: run-credentials-systemd\x2dsysctl.service.mount: Deactivated successfully.
Jan 29 16:25:50.551588 systemd[1]: systemd-udevd.service: Deactivated successfully.
Jan 29 16:25:50.552735 systemd[1]: Stopped systemd-udevd.service - Rule-based Manager for Device Events and Files.
Jan 29 16:25:50.555597 systemd[1]: network-cleanup.service: Deactivated successfully.
Jan 29 16:25:50.556682 systemd[1]: Stopped network-cleanup.service - Network Cleanup.
Jan 29 16:25:50.559305 systemd[1]: systemd-udevd-control.socket: Deactivated successfully.
Jan 29 16:25:50.560382 systemd[1]: Closed systemd-udevd-control.socket - udev Control Socket.
Jan 29 16:25:50.562800 systemd[1]: systemd-udevd-kernel.socket: Deactivated successfully.
Jan 29 16:25:50.562863 systemd[1]: Closed systemd-udevd-kernel.socket - udev Kernel Socket.
Jan 29 16:25:50.566469 systemd[1]: dracut-pre-udev.service: Deactivated successfully.
Jan 29 16:25:50.567617 systemd[1]: Stopped dracut-pre-udev.service - dracut pre-udev hook.
Jan 29 16:25:50.570223 systemd[1]: dracut-cmdline.service: Deactivated successfully.
Jan 29 16:25:50.571371 systemd[1]: Stopped dracut-cmdline.service - dracut cmdline hook.
Jan 29 16:25:50.573884 systemd[1]: dracut-cmdline-ask.service: Deactivated successfully.
Jan 29 16:25:50.575024 systemd[1]: Stopped dracut-cmdline-ask.service - dracut ask for additional cmdline parameters.
Jan 29 16:25:50.588781 systemd[1]: Starting initrd-udevadm-cleanup-db.service - Cleanup udev Database...
Jan 29 16:25:50.591375 systemd[1]: systemd-tmpfiles-setup-dev.service: Deactivated successfully.
Jan 29 16:25:50.591437 systemd[1]: Stopped systemd-tmpfiles-setup-dev.service - Create Static Device Nodes in /dev.
Jan 29 16:25:50.595567 systemd[1]: systemd-tmpfiles-setup-dev-early.service: Deactivated successfully.
Jan 29 16:25:50.596703 systemd[1]: Stopped systemd-tmpfiles-setup-dev-early.service - Create Static Device Nodes in /dev gracefully.
Jan 29 16:25:50.599444 systemd[1]: kmod-static-nodes.service: Deactivated successfully.
Jan 29 16:25:50.599497 systemd[1]: Stopped kmod-static-nodes.service - Create List of Static Device Nodes.
Jan 29 16:25:50.603952 systemd[1]: systemd-vconsole-setup.service: Deactivated successfully.
Jan 29 16:25:50.604023 systemd[1]: Stopped systemd-vconsole-setup.service - Virtual Console Setup.
Jan 29 16:25:50.608549 systemd[1]: initrd-udevadm-cleanup-db.service: Deactivated successfully.
Jan 29 16:25:50.608716 systemd[1]: Finished initrd-udevadm-cleanup-db.service - Cleanup udev Database.
Jan 29 16:25:50.613060 systemd[1]: sysroot-boot.service: Deactivated successfully.
Jan 29 16:25:50.614192 systemd[1]: Stopped sysroot-boot.service - /sysroot/boot.
Jan 29 16:25:50.617672 systemd[1]: Reached target initrd-switch-root.target - Switch Root.
Jan 29 16:25:50.620176 systemd[1]: initrd-setup-root.service: Deactivated successfully.
Jan 29 16:25:50.621274 systemd[1]: Stopped initrd-setup-root.service - Root filesystem setup.
Jan 29 16:25:50.634904 systemd[1]: Starting initrd-switch-root.service - Switch Root...
Jan 29 16:25:50.645318 systemd[1]: Switching root.
Jan 29 16:25:50.681036 systemd-journald[193]: Journal stopped
Jan 29 16:25:51.870777 systemd-journald[193]: Received SIGTERM from PID 1 (systemd).
Jan 29 16:25:51.870847 kernel: SELinux: policy capability network_peer_controls=1
Jan 29 16:25:51.870868 kernel: SELinux: policy capability open_perms=1
Jan 29 16:25:51.870883 kernel: SELinux: policy capability extended_socket_class=1
Jan 29 16:25:51.870896 kernel: SELinux: policy capability always_check_network=0
Jan 29 16:25:51.870910 kernel: SELinux: policy capability cgroup_seclabel=1
Jan 29 16:25:51.870929 kernel: SELinux: policy capability nnp_nosuid_transition=1
Jan 29 16:25:51.870947 kernel: SELinux: policy capability genfs_seclabel_symlinks=0
Jan 29 16:25:51.870963 kernel: SELinux: policy capability ioctl_skip_cloexec=0
Jan 29 16:25:51.870980 kernel: audit: type=1403 audit(1738167951.006:2): auid=4294967295 ses=4294967295 lsm=selinux res=1
Jan 29 16:25:51.870999 systemd[1]: Successfully loaded SELinux policy in 41.652ms.
Jan 29 16:25:51.871025 systemd[1]: Relabeled /dev/, /dev/shm/, /run/ in 16.556ms.
Jan 29 16:25:51.871039 systemd[1]: systemd 256.8 running in system mode (+PAM +AUDIT +SELINUX -APPARMOR +IMA +SMACK +SECCOMP +GCRYPT -GNUTLS +OPENSSL -ACL +BLKID +CURL +ELFUTILS -FIDO2 +IDN2 -IDN +IPTC +KMOD +LIBCRYPTSETUP +LIBCRYPTSETUP_PLUGINS +LIBFDISK +PCRE2 -PWQUALITY -P11KIT -QRENCODE +TPM2 +BZIP2 +LZ4 +XZ +ZLIB +ZSTD -BPF_FRAMEWORK -XKBCOMMON +UTMP -SYSVINIT +LIBARCHIVE)
Jan 29 16:25:51.871068 systemd[1]: Detected virtualization kvm.
Jan 29 16:25:51.871081 systemd[1]: Detected architecture x86-64.
Jan 29 16:25:51.871093 systemd[1]: Detected first boot.
Jan 29 16:25:51.871105 systemd[1]: Initializing machine ID from VM UUID.
Jan 29 16:25:51.871120 zram_generator::config[1058]: No configuration found.
Jan 29 16:25:51.871134 kernel: Guest personality initialized and is inactive
Jan 29 16:25:51.871146 kernel: VMCI host device registered (name=vmci, major=10, minor=125)
Jan 29 16:25:51.871157 kernel: Initialized host personality
Jan 29 16:25:51.871171 kernel: NET: Registered PF_VSOCK protocol family
Jan 29 16:25:51.871183 systemd[1]: Populated /etc with preset unit settings.
Jan 29 16:25:51.871196 systemd[1]: run-credentials-systemd\x2djournald.service.mount: Deactivated successfully.
Jan 29 16:25:51.871208 systemd[1]: initrd-switch-root.service: Deactivated successfully.
Jan 29 16:25:51.871221 systemd[1]: Stopped initrd-switch-root.service - Switch Root.
Jan 29 16:25:51.871233 systemd[1]: systemd-journald.service: Scheduled restart job, restart counter is at 1.
Jan 29 16:25:51.871245 systemd[1]: Created slice system-addon\x2dconfig.slice - Slice /system/addon-config.
Jan 29 16:25:51.871258 systemd[1]: Created slice system-addon\x2drun.slice - Slice /system/addon-run.
Jan 29 16:25:51.871271 systemd[1]: Created slice system-getty.slice - Slice /system/getty.
Jan 29 16:25:51.871286 systemd[1]: Created slice system-modprobe.slice - Slice /system/modprobe.
Jan 29 16:25:51.871298 systemd[1]: Created slice system-serial\x2dgetty.slice - Slice /system/serial-getty.
Jan 29 16:25:51.871310 systemd[1]: Created slice system-system\x2dcloudinit.slice - Slice /system/system-cloudinit.
Jan 29 16:25:51.871323 systemd[1]: Created slice system-systemd\x2dfsck.slice - Slice /system/systemd-fsck.
Jan 29 16:25:51.871335 systemd[1]: Created slice user.slice - User and Session Slice.
Jan 29 16:25:51.871347 systemd[1]: Started clevis-luks-askpass.path - Forward Password Requests to Clevis Directory Watch.
Jan 29 16:25:51.871360 systemd[1]: Started systemd-ask-password-console.path - Dispatch Password Requests to Console Directory Watch.
Jan 29 16:25:51.871372 systemd[1]: Started systemd-ask-password-wall.path - Forward Password Requests to Wall Directory Watch.
Jan 29 16:25:51.871386 systemd[1]: Set up automount boot.automount - Boot partition Automount Point.
Jan 29 16:25:51.871401 systemd[1]: Set up automount proc-sys-fs-binfmt_misc.automount - Arbitrary Executable File Formats File System Automount Point.
Jan 29 16:25:51.871413 systemd[1]: Expecting device dev-disk-by\x2dlabel-OEM.device - /dev/disk/by-label/OEM...
Jan 29 16:25:51.871426 systemd[1]: Expecting device dev-ttyS0.device - /dev/ttyS0...
Jan 29 16:25:51.871438 systemd[1]: Reached target cryptsetup-pre.target - Local Encrypted Volumes (Pre).
Jan 29 16:25:51.871450 systemd[1]: Stopped target initrd-switch-root.target - Switch Root.
Jan 29 16:25:51.871463 systemd[1]: Stopped target initrd-fs.target - Initrd File Systems.
Jan 29 16:25:51.871475 systemd[1]: Stopped target initrd-root-fs.target - Initrd Root File System.
Jan 29 16:25:51.871490 systemd[1]: Reached target integritysetup.target - Local Integrity Protected Volumes.
Jan 29 16:25:51.871502 systemd[1]: Reached target remote-cryptsetup.target - Remote Encrypted Volumes.
Jan 29 16:25:51.871514 systemd[1]: Reached target remote-fs.target - Remote File Systems.
Jan 29 16:25:51.871527 systemd[1]: Reached target slices.target - Slice Units.
Jan 29 16:25:51.871539 systemd[1]: Reached target swap.target - Swaps.
Jan 29 16:25:51.871551 systemd[1]: Reached target veritysetup.target - Local Verity Protected Volumes.
Jan 29 16:25:51.871563 systemd[1]: Listening on systemd-coredump.socket - Process Core Dump Socket.
Jan 29 16:25:51.871576 systemd[1]: Listening on systemd-creds.socket - Credential Encryption/Decryption.
Jan 29 16:25:51.871588 systemd[1]: Listening on systemd-networkd.socket - Network Service Netlink Socket.
Jan 29 16:25:51.871676 systemd[1]: Listening on systemd-udevd-control.socket - udev Control Socket.
Jan 29 16:25:51.871694 systemd[1]: Listening on systemd-udevd-kernel.socket - udev Kernel Socket.
Jan 29 16:25:51.871706 systemd[1]: Listening on systemd-userdbd.socket - User Database Manager Socket.
Jan 29 16:25:51.871719 systemd[1]: Mounting dev-hugepages.mount - Huge Pages File System...
Jan 29 16:25:51.871731 systemd[1]: Mounting dev-mqueue.mount - POSIX Message Queue File System...
Jan 29 16:25:51.871743 systemd[1]: Mounting media.mount - External Media Directory...
Jan 29 16:25:51.871756 systemd[1]: proc-xen.mount - /proc/xen was skipped because of an unmet condition check (ConditionVirtualization=xen).
Jan 29 16:25:51.871784 systemd[1]: Mounting sys-kernel-debug.mount - Kernel Debug File System...
Jan 29 16:25:51.871811 systemd[1]: Mounting sys-kernel-tracing.mount - Kernel Trace File System...
Jan 29 16:25:51.871831 systemd[1]: Mounting tmp.mount - Temporary Directory /tmp...
Jan 29 16:25:51.871847 systemd[1]: var-lib-machines.mount - Virtual Machine and Container Storage (Compatibility) was skipped because of an unmet condition check (ConditionPathExists=/var/lib/machines.raw).
Jan 29 16:25:51.871865 systemd[1]: Reached target machines.target - Containers.
Jan 29 16:25:51.871878 systemd[1]: Starting flatcar-tmpfiles.service - Create missing system files...
Jan 29 16:25:51.871890 systemd[1]: ignition-delete-config.service - Ignition (delete config) was skipped because no trigger condition checks were met.
Jan 29 16:25:51.871903 systemd[1]: Starting kmod-static-nodes.service - Create List of Static Device Nodes...
Jan 29 16:25:51.871916 systemd[1]: Starting modprobe@configfs.service - Load Kernel Module configfs...
Jan 29 16:25:51.871928 systemd[1]: Starting modprobe@dm_mod.service - Load Kernel Module dm_mod...
Jan 29 16:25:51.871940 systemd[1]: Starting modprobe@drm.service - Load Kernel Module drm...
Jan 29 16:25:51.871955 systemd[1]: Starting modprobe@efi_pstore.service - Load Kernel Module efi_pstore...
Jan 29 16:25:51.871967 systemd[1]: Starting modprobe@fuse.service - Load Kernel Module fuse...
Jan 29 16:25:51.871979 systemd[1]: Starting modprobe@loop.service - Load Kernel Module loop...
Jan 29 16:25:51.871993 systemd[1]: setup-nsswitch.service - Create /etc/nsswitch.conf was skipped because of an unmet condition check (ConditionPathExists=!/etc/nsswitch.conf).
Jan 29 16:25:51.872005 systemd[1]: systemd-fsck-root.service: Deactivated successfully.
Jan 29 16:25:51.872017 systemd[1]: Stopped systemd-fsck-root.service - File System Check on Root Device.
Jan 29 16:25:51.872029 systemd[1]: systemd-fsck-usr.service: Deactivated successfully.
Jan 29 16:25:51.872041 systemd[1]: Stopped systemd-fsck-usr.service.
Jan 29 16:25:51.872066 systemd[1]: systemd-hibernate-clear.service - Clear Stale Hibernate Storage Info was skipped because of an unmet condition check (ConditionPathExists=/sys/firmware/efi/efivars/HibernateLocation-8cf2644b-4b0b-428f-9387-6d876050dc67).
Jan 29 16:25:51.872079 systemd[1]: Starting systemd-journald.service - Journal Service...
Jan 29 16:25:51.872091 systemd[1]: Starting systemd-modules-load.service - Load Kernel Modules...
Jan 29 16:25:51.872104 kernel: loop: module loaded
Jan 29 16:25:51.872115 kernel: fuse: init (API version 7.39)
Jan 29 16:25:51.872127 systemd[1]: Starting systemd-network-generator.service - Generate network units from Kernel command line...
Jan 29 16:25:51.872141 systemd[1]: Starting systemd-remount-fs.service - Remount Root and Kernel File Systems...
Jan 29 16:25:51.872154 systemd[1]: Starting systemd-udev-load-credentials.service - Load udev Rules from Credentials...
Jan 29 16:25:51.872167 systemd[1]: Starting systemd-udev-trigger.service - Coldplug All udev Devices...
Jan 29 16:25:51.872181 systemd[1]: verity-setup.service: Deactivated successfully.
Jan 29 16:25:51.872194 systemd[1]: Stopped verity-setup.service.
Jan 29 16:25:51.872207 systemd[1]: xenserver-pv-version.service - Set fake PV driver version for XenServer was skipped because of an unmet condition check (ConditionVirtualization=xen).
Jan 29 16:25:51.872219 systemd[1]: Mounted dev-hugepages.mount - Huge Pages File System.
Jan 29 16:25:51.872234 systemd[1]: Mounted dev-mqueue.mount - POSIX Message Queue File System.
Jan 29 16:25:51.872246 systemd[1]: Mounted media.mount - External Media Directory.
Jan 29 16:25:51.872259 systemd[1]: Mounted sys-kernel-debug.mount - Kernel Debug File System.
Jan 29 16:25:51.872274 systemd[1]: Mounted sys-kernel-tracing.mount - Kernel Trace File System.
Jan 29 16:25:51.872287 kernel: ACPI: bus type drm_connector registered
Jan 29 16:25:51.872301 systemd[1]: Mounted tmp.mount - Temporary Directory /tmp.
Jan 29 16:25:51.872335 systemd-journald[1133]: Collecting audit messages is disabled.
Jan 29 16:25:51.872357 systemd[1]: Finished flatcar-tmpfiles.service - Create missing system files.
Jan 29 16:25:51.872372 systemd-journald[1133]: Journal started
Jan 29 16:25:51.872395 systemd-journald[1133]: Runtime Journal (/run/log/journal/939e1aa388cf4cbfa169d61752fad666) is 6M, max 48.4M, 42.3M free.
Jan 29 16:25:51.872434 systemd[1]: Finished kmod-static-nodes.service - Create List of Static Device Nodes.
Jan 29 16:25:51.599204 systemd[1]: Queued start job for default target multi-user.target.
Jan 29 16:25:51.612911 systemd[1]: Unnecessary job was removed for dev-vda6.device - /dev/vda6.
Jan 29 16:25:51.613418 systemd[1]: systemd-journald.service: Deactivated successfully.
Jan 29 16:25:51.876651 systemd[1]: Started systemd-journald.service - Journal Service.
Jan 29 16:25:51.877667 systemd[1]: modprobe@configfs.service: Deactivated successfully.
Jan 29 16:25:51.877908 systemd[1]: Finished modprobe@configfs.service - Load Kernel Module configfs.
Jan 29 16:25:51.879628 systemd[1]: modprobe@dm_mod.service: Deactivated successfully.
Jan 29 16:25:51.879870 systemd[1]: Finished modprobe@dm_mod.service - Load Kernel Module dm_mod.
Jan 29 16:25:51.881534 systemd[1]: modprobe@drm.service: Deactivated successfully.
Jan 29 16:25:51.881787 systemd[1]: Finished modprobe@drm.service - Load Kernel Module drm.
Jan 29 16:25:51.883268 systemd[1]: modprobe@efi_pstore.service: Deactivated successfully.
Jan 29 16:25:51.883509 systemd[1]: Finished modprobe@efi_pstore.service - Load Kernel Module efi_pstore.
Jan 29 16:25:51.885191 systemd[1]: modprobe@fuse.service: Deactivated successfully.
Jan 29 16:25:51.885434 systemd[1]: Finished modprobe@fuse.service - Load Kernel Module fuse.
Jan 29 16:25:51.886930 systemd[1]: modprobe@loop.service: Deactivated successfully.
Jan 29 16:25:51.887156 systemd[1]: Finished modprobe@loop.service - Load Kernel Module loop.
Jan 29 16:25:51.888855 systemd[1]: Finished systemd-modules-load.service - Load Kernel Modules.
Jan 29 16:25:51.890378 systemd[1]: Finished systemd-network-generator.service - Generate network units from Kernel command line.
Jan 29 16:25:51.892056 systemd[1]: Finished systemd-remount-fs.service - Remount Root and Kernel File Systems.
Jan 29 16:25:51.893726 systemd[1]: Finished systemd-udev-load-credentials.service - Load udev Rules from Credentials.
Jan 29 16:25:51.909933 systemd[1]: Reached target network-pre.target - Preparation for Network.
Jan 29 16:25:51.922709 systemd[1]: Mounting sys-fs-fuse-connections.mount - FUSE Control File System...
Jan 29 16:25:51.925021 systemd[1]: Mounting sys-kernel-config.mount - Kernel Configuration File System...
Jan 29 16:25:51.926439 systemd[1]: remount-root.service - Remount Root File System was skipped because of an unmet condition check (ConditionPathIsReadWrite=!/).
Jan 29 16:25:51.926535 systemd[1]: Reached target local-fs.target - Local File Systems.
Jan 29 16:25:51.928702 systemd[1]: Listening on systemd-sysext.socket - System Extension Image Management.
Jan 29 16:25:51.931055 systemd[1]: Starting dracut-shutdown.service - Restore /run/initramfs on shutdown...
Jan 29 16:25:51.933361 systemd[1]: Starting ldconfig.service - Rebuild Dynamic Linker Cache...
Jan 29 16:25:51.934621 systemd[1]: systemd-binfmt.service - Set Up Additional Binary Formats was skipped because no trigger condition checks were met.
Jan 29 16:25:51.937232 systemd[1]: Starting systemd-hwdb-update.service - Rebuild Hardware Database...
Jan 29 16:25:51.940151 systemd[1]: Starting systemd-journal-flush.service - Flush Journal to Persistent Storage...
Jan 29 16:25:51.942367 systemd[1]: systemd-pstore.service - Platform Persistent Storage Archival was skipped because of an unmet condition check (ConditionDirectoryNotEmpty=/sys/fs/pstore).
Jan 29 16:25:51.945390 systemd[1]: Starting systemd-random-seed.service - Load/Save OS Random Seed...
Jan 29 16:25:51.946742 systemd[1]: systemd-repart.service - Repartition Root Disk was skipped because no trigger condition checks were met.
Jan 29 16:25:51.948516 systemd[1]: Starting systemd-sysctl.service - Apply Kernel Variables...
Jan 29 16:25:51.953829 systemd[1]: Starting systemd-sysext.service - Merge System Extension Images into /usr/ and /opt/...
Jan 29 16:25:51.957812 systemd[1]: Starting systemd-tmpfiles-setup-dev-early.service - Create Static Device Nodes in /dev gracefully...
Jan 29 16:25:51.964478 systemd[1]: Mounted sys-fs-fuse-connections.mount - FUSE Control File System.
Jan 29 16:25:51.965946 systemd[1]: Mounted sys-kernel-config.mount - Kernel Configuration File System.
Jan 29 16:25:51.967713 systemd[1]: Finished dracut-shutdown.service - Restore /run/initramfs on shutdown.
Jan 29 16:25:52.018639 systemd-journald[1133]: Time spent on flushing to /var/log/journal/939e1aa388cf4cbfa169d61752fad666 is 26.928ms for 952 entries.
Jan 29 16:25:52.018639 systemd-journald[1133]: System Journal (/var/log/journal/939e1aa388cf4cbfa169d61752fad666) is 8M, max 195.6M, 187.6M free.
Jan 29 16:25:52.110412 systemd-journald[1133]: Received client request to flush runtime journal.
Jan 29 16:25:52.110483 kernel: loop0: detected capacity change from 0 to 205544
Jan 29 16:25:52.110520 kernel: squashfs: version 4.0 (2009/01/31) Phillip Lougher
Jan 29 16:25:52.035462 systemd[1]: Finished systemd-udev-trigger.service - Coldplug All udev Devices.
Jan 29 16:25:52.043979 systemd[1]: Starting systemd-udev-settle.service - Wait for udev To Complete Device Initialization...
Jan 29 16:25:52.045736 systemd[1]: Finished systemd-sysctl.service - Apply Kernel Variables.
Jan 29 16:25:52.069003 systemd[1]: Finished systemd-random-seed.service - Load/Save OS Random Seed.
Jan 29 16:25:52.072502 systemd[1]: Reached target first-boot-complete.target - First Boot Complete.
Jan 29 16:25:52.087934 systemd[1]: Starting systemd-machine-id-commit.service - Save Transient machine-id to Disk...
Jan 29 16:25:52.095353 udevadm[1186]: systemd-udev-settle.service is deprecated. Please fix lvm2-activation-early.service, lvm2-activation.service not to pull it in.
Jan 29 16:25:52.097991 systemd-tmpfiles[1179]: ACLs are not supported, ignoring.
Jan 29 16:25:52.098008 systemd-tmpfiles[1179]: ACLs are not supported, ignoring.
Jan 29 16:25:52.108647 systemd[1]: Finished systemd-tmpfiles-setup-dev-early.service - Create Static Device Nodes in /dev gracefully.
Jan 29 16:25:52.119922 systemd[1]: Starting systemd-sysusers.service - Create System Users...
Jan 29 16:25:52.122121 systemd[1]: Finished systemd-journal-flush.service - Flush Journal to Persistent Storage.
Jan 29 16:25:52.163863 systemd[1]: Finished systemd-machine-id-commit.service - Save Transient machine-id to Disk.
Jan 29 16:25:52.170070 kernel: loop1: detected capacity change from 0 to 138176
Jan 29 16:25:52.190383 systemd[1]: Finished systemd-sysusers.service - Create System Users.
Jan 29 16:25:52.200764 systemd[1]: Starting systemd-tmpfiles-setup-dev.service - Create Static Device Nodes in /dev...
Jan 29 16:25:52.209796 kernel: loop2: detected capacity change from 0 to 147912
Jan 29 16:25:52.219075 systemd-tmpfiles[1201]: ACLs are not supported, ignoring.
Jan 29 16:25:52.219095 systemd-tmpfiles[1201]: ACLs are not supported, ignoring.
Jan 29 16:25:52.245853 systemd[1]: Finished systemd-tmpfiles-setup-dev.service - Create Static Device Nodes in /dev.
Jan 29 16:25:52.343655 kernel: loop3: detected capacity change from 0 to 205544
Jan 29 16:25:52.361620 kernel: loop4: detected capacity change from 0 to 138176
Jan 29 16:25:52.380649 kernel: loop5: detected capacity change from 0 to 147912
Jan 29 16:25:52.398520 (sd-merge)[1206]: Using extensions 'containerd-flatcar', 'docker-flatcar', 'kubernetes'.
Jan 29 16:25:52.399217 (sd-merge)[1206]: Merged extensions into '/usr'.
Jan 29 16:25:52.404024 systemd[1]: Reload requested from client PID 1178 ('systemd-sysext') (unit systemd-sysext.service)...
Jan 29 16:25:52.404191 systemd[1]: Reloading...
Jan 29 16:25:52.524633 zram_generator::config[1234]: No configuration found.
Jan 29 16:25:52.697281 ldconfig[1173]: /sbin/ldconfig: /usr/lib/ld.so.conf is not an ELF file - it has the wrong magic bytes at the start.
Jan 29 16:25:52.758566 systemd[1]: /usr/lib/systemd/system/docker.socket:6: ListenStream= references a path below legacy directory /var/run/, updating /var/run/docker.sock → /run/docker.sock; please update the unit file accordingly.
Jan 29 16:25:52.824066 systemd[1]: etc-machine\x2did.mount: Deactivated successfully.
Jan 29 16:25:52.824668 systemd[1]: Reloading finished in 419 ms.
Jan 29 16:25:52.844874 systemd[1]: Finished ldconfig.service - Rebuild Dynamic Linker Cache.
Jan 29 16:25:52.846758 systemd[1]: Finished systemd-sysext.service - Merge System Extension Images into /usr/ and /opt/.
Jan 29 16:25:52.865134 systemd[1]: Starting ensure-sysext.service...
Jan 29 16:25:52.867406 systemd[1]: Starting systemd-tmpfiles-setup.service - Create System Files and Directories...
Jan 29 16:25:52.884161 systemd[1]: Reload requested from client PID 1271 ('systemctl') (unit ensure-sysext.service)...
Jan 29 16:25:52.884182 systemd[1]: Reloading...
Jan 29 16:25:52.934247 systemd-tmpfiles[1272]: /usr/lib/tmpfiles.d/provision.conf:20: Duplicate line for path "/root", ignoring.
Jan 29 16:25:52.934623 systemd-tmpfiles[1272]: /usr/lib/tmpfiles.d/systemd-flatcar.conf:6: Duplicate line for path "/var/log/journal", ignoring.
Jan 29 16:25:52.935891 systemd-tmpfiles[1272]: /usr/lib/tmpfiles.d/systemd.conf:29: Duplicate line for path "/var/lib/systemd", ignoring.
Jan 29 16:25:52.936280 systemd-tmpfiles[1272]: ACLs are not supported, ignoring.
Jan 29 16:25:52.936398 systemd-tmpfiles[1272]: ACLs are not supported, ignoring.
Jan 29 16:25:52.943943 systemd-tmpfiles[1272]: Detected autofs mount point /boot during canonicalization of boot.
Jan 29 16:25:52.944485 systemd-tmpfiles[1272]: Skipping /boot
Jan 29 16:25:52.992945 systemd-tmpfiles[1272]: Detected autofs mount point /boot during canonicalization of boot.
Jan 29 16:25:52.992965 systemd-tmpfiles[1272]: Skipping /boot
Jan 29 16:25:53.006636 zram_generator::config[1304]: No configuration found.
Jan 29 16:25:53.308035 systemd[1]: /usr/lib/systemd/system/docker.socket:6: ListenStream= references a path below legacy directory /var/run/, updating /var/run/docker.sock → /run/docker.sock; please update the unit file accordingly.
Jan 29 16:25:53.373289 systemd[1]: Reloading finished in 488 ms.
Jan 29 16:25:53.402396 systemd[1]: Finished systemd-tmpfiles-setup.service - Create System Files and Directories.
Jan 29 16:25:53.410937 systemd[1]: Starting audit-rules.service - Load Audit Rules...
Jan 29 16:25:53.413414 systemd[1]: Starting clean-ca-certificates.service - Clean up broken links in /etc/ssl/certs...
Jan 29 16:25:53.415913 systemd[1]: Starting systemd-journal-catalog-update.service - Rebuild Journal Catalog...
Jan 29 16:25:53.422064 systemd[1]: Starting systemd-resolved.service - Network Name Resolution...
Jan 29 16:25:53.426194 systemd[1]: Starting systemd-update-utmp.service - Record System Boot/Shutdown in UTMP...
Jan 29 16:25:53.430568 systemd[1]: proc-xen.mount - /proc/xen was skipped because of an unmet condition check (ConditionVirtualization=xen).
Jan 29 16:25:53.431730 systemd[1]: ignition-delete-config.service - Ignition (delete config) was skipped because no trigger condition checks were met.
Jan 29 16:25:53.434813 systemd[1]: Starting modprobe@dm_mod.service - Load Kernel Module dm_mod...
Jan 29 16:25:53.439679 systemd[1]: Starting modprobe@efi_pstore.service - Load Kernel Module efi_pstore...
Jan 29 16:25:53.444832 systemd[1]: Starting modprobe@loop.service - Load Kernel Module loop...
Jan 29 16:25:53.449918 systemd[1]: systemd-binfmt.service - Set Up Additional Binary Formats was skipped because no trigger condition checks were met.
Jan 29 16:25:53.450038 systemd[1]: systemd-hibernate-clear.service - Clear Stale Hibernate Storage Info was skipped because of an unmet condition check (ConditionPathExists=/sys/firmware/efi/efivars/HibernateLocation-8cf2644b-4b0b-428f-9387-6d876050dc67). Jan 29 16:25:53.454718 systemd[1]: Starting systemd-userdbd.service - User Database Manager... Jan 29 16:25:53.455989 systemd[1]: xenserver-pv-version.service - Set fake PV driver version for XenServer was skipped because of an unmet condition check (ConditionVirtualization=xen). Jan 29 16:25:53.457507 systemd[1]: modprobe@dm_mod.service: Deactivated successfully. Jan 29 16:25:53.458376 systemd[1]: Finished modprobe@dm_mod.service - Load Kernel Module dm_mod. Jan 29 16:25:53.460517 systemd[1]: Finished systemd-journal-catalog-update.service - Rebuild Journal Catalog. Jan 29 16:25:53.483719 systemd[1]: modprobe@efi_pstore.service: Deactivated successfully. Jan 29 16:25:53.484028 systemd[1]: Finished modprobe@efi_pstore.service - Load Kernel Module efi_pstore. Jan 29 16:25:53.486354 systemd[1]: modprobe@loop.service: Deactivated successfully. Jan 29 16:25:53.486808 systemd[1]: Finished modprobe@loop.service - Load Kernel Module loop. Jan 29 16:25:53.499302 augenrules[1371]: No rules Jan 29 16:25:53.501620 systemd[1]: audit-rules.service: Deactivated successfully. Jan 29 16:25:53.502222 systemd[1]: Finished audit-rules.service - Load Audit Rules. Jan 29 16:25:53.504366 systemd[1]: Finished systemd-update-utmp.service - Record System Boot/Shutdown in UTMP. Jan 29 16:25:53.508209 systemd[1]: proc-xen.mount - /proc/xen was skipped because of an unmet condition check (ConditionVirtualization=xen). Jan 29 16:25:53.509845 systemd[1]: ignition-delete-config.service - Ignition (delete config) was skipped because no trigger condition checks were met. Jan 29 16:25:53.516068 systemd[1]: Starting modprobe@dm_mod.service - Load Kernel Module dm_mod... 
Jan 29 16:25:53.541115 systemd[1]: Starting modprobe@efi_pstore.service - Load Kernel Module efi_pstore...
Jan 29 16:25:53.544459 systemd[1]: Starting modprobe@loop.service - Load Kernel Module loop...
Jan 29 16:25:53.547475 systemd[1]: systemd-binfmt.service - Set Up Additional Binary Formats was skipped because no trigger condition checks were met.
Jan 29 16:25:53.547778 systemd[1]: systemd-hibernate-clear.service - Clear Stale Hibernate Storage Info was skipped because of an unmet condition check (ConditionPathExists=/sys/firmware/efi/efivars/HibernateLocation-8cf2644b-4b0b-428f-9387-6d876050dc67).
Jan 29 16:25:53.547939 systemd[1]: xenserver-pv-version.service - Set fake PV driver version for XenServer was skipped because of an unmet condition check (ConditionVirtualization=xen).
Jan 29 16:25:53.548942 systemd[1]: Started systemd-userdbd.service - User Database Manager.
Jan 29 16:25:53.551237 systemd[1]: Finished systemd-hwdb-update.service - Rebuild Hardware Database.
Jan 29 16:25:53.553152 systemd[1]: Finished clean-ca-certificates.service - Clean up broken links in /etc/ssl/certs.
Jan 29 16:25:53.555358 systemd[1]: modprobe@dm_mod.service: Deactivated successfully.
Jan 29 16:25:53.555588 systemd[1]: Finished modprobe@dm_mod.service - Load Kernel Module dm_mod.
Jan 29 16:25:53.559304 systemd[1]: modprobe@efi_pstore.service: Deactivated successfully.
Jan 29 16:25:53.559526 systemd[1]: Finished modprobe@efi_pstore.service - Load Kernel Module efi_pstore.
Jan 29 16:25:53.561298 systemd[1]: modprobe@loop.service: Deactivated successfully.
Jan 29 16:25:53.561520 systemd[1]: Finished modprobe@loop.service - Load Kernel Module loop.
Jan 29 16:25:53.570110 systemd[1]: proc-xen.mount - /proc/xen was skipped because of an unmet condition check (ConditionVirtualization=xen).
Jan 29 16:25:53.582810 systemd[1]: Starting audit-rules.service - Load Audit Rules...
Jan 29 16:25:53.619530 systemd[1]: ignition-delete-config.service - Ignition (delete config) was skipped because no trigger condition checks were met.
Jan 29 16:25:53.621059 systemd[1]: Starting modprobe@dm_mod.service - Load Kernel Module dm_mod...
Jan 29 16:25:53.625743 systemd[1]: Starting modprobe@drm.service - Load Kernel Module drm...
Jan 29 16:25:53.629393 systemd[1]: Starting modprobe@efi_pstore.service - Load Kernel Module efi_pstore...
Jan 29 16:25:53.635745 systemd[1]: Starting modprobe@loop.service - Load Kernel Module loop...
Jan 29 16:25:53.637219 systemd[1]: systemd-binfmt.service - Set Up Additional Binary Formats was skipped because no trigger condition checks were met.
Jan 29 16:25:53.637344 systemd[1]: systemd-hibernate-clear.service - Clear Stale Hibernate Storage Info was skipped because of an unmet condition check (ConditionPathExists=/sys/firmware/efi/efivars/HibernateLocation-8cf2644b-4b0b-428f-9387-6d876050dc67).
Jan 29 16:25:53.638231 systemd-resolved[1342]: Positive Trust Anchors:
Jan 29 16:25:53.638241 systemd-resolved[1342]: . IN DS 20326 8 2 e06d44b80b8f1d39a95c0b0d7c65d08458e880409bbc683457104237c7f8ec8d
Jan 29 16:25:53.638272 systemd-resolved[1342]: Negative trust anchors: home.arpa 10.in-addr.arpa 16.172.in-addr.arpa 17.172.in-addr.arpa 18.172.in-addr.arpa 19.172.in-addr.arpa 20.172.in-addr.arpa 21.172.in-addr.arpa 22.172.in-addr.arpa 23.172.in-addr.arpa 24.172.in-addr.arpa 25.172.in-addr.arpa 26.172.in-addr.arpa 27.172.in-addr.arpa 28.172.in-addr.arpa 29.172.in-addr.arpa 30.172.in-addr.arpa 31.172.in-addr.arpa 170.0.0.192.in-addr.arpa 171.0.0.192.in-addr.arpa 168.192.in-addr.arpa d.f.ip6.arpa ipv4only.arpa resolver.arpa corp home internal intranet lan local private test
Jan 29 16:25:53.641861 systemd[1]: Starting systemd-udevd.service - Rule-based Manager for Device Events and Files...
Jan 29 16:25:53.642158 systemd-resolved[1342]: Defaulting to hostname 'linux'.
Jan 29 16:25:53.675399 systemd[1]: Starting systemd-update-done.service - Update is Completed...
Jan 29 16:25:53.676744 systemd[1]: update-ca-certificates.service - Update CA bundle at /etc/ssl/certs/ca-certificates.crt was skipped because of an unmet condition check (ConditionPathIsSymbolicLink=!/etc/ssl/certs/ca-certificates.crt).
Jan 29 16:25:53.676856 systemd[1]: xenserver-pv-version.service - Set fake PV driver version for XenServer was skipped because of an unmet condition check (ConditionVirtualization=xen).
Jan 29 16:25:53.678160 augenrules[1392]: /sbin/augenrules: No change
Jan 29 16:25:53.678562 systemd[1]: Started systemd-resolved.service - Network Name Resolution.
Jan 29 16:25:53.680911 systemd[1]: modprobe@dm_mod.service: Deactivated successfully.
Jan 29 16:25:53.681162 systemd[1]: Finished modprobe@dm_mod.service - Load Kernel Module dm_mod.
Jan 29 16:25:53.683013 systemd[1]: modprobe@drm.service: Deactivated successfully.
Jan 29 16:25:53.683259 systemd[1]: Finished modprobe@drm.service - Load Kernel Module drm.
Jan 29 16:25:53.685218 systemd[1]: modprobe@efi_pstore.service: Deactivated successfully.
Jan 29 16:25:53.685447 systemd[1]: Finished modprobe@efi_pstore.service - Load Kernel Module efi_pstore.
Jan 29 16:25:53.687458 systemd[1]: modprobe@loop.service: Deactivated successfully.
Jan 29 16:25:53.687840 augenrules[1415]: No rules
Jan 29 16:25:53.687689 systemd[1]: Finished modprobe@loop.service - Load Kernel Module loop.
Jan 29 16:25:53.689431 systemd[1]: audit-rules.service: Deactivated successfully.
Jan 29 16:25:53.689733 systemd[1]: Finished audit-rules.service - Load Audit Rules.
Jan 29 16:25:53.694620 systemd[1]: Finished ensure-sysext.service.
Jan 29 16:25:53.700226 systemd[1]: Reached target nss-lookup.target - Host and Network Name Lookups.
Jan 29 16:25:53.701710 systemd[1]: systemd-pstore.service - Platform Persistent Storage Archival was skipped because of an unmet condition check (ConditionDirectoryNotEmpty=/sys/fs/pstore).
Jan 29 16:25:53.701772 systemd[1]: systemd-repart.service - Repartition Root Disk was skipped because no trigger condition checks were met.
Jan 29 16:25:53.722812 systemd[1]: Starting systemd-timesyncd.service - Network Time Synchronization...
Jan 29 16:25:53.724633 systemd[1]: Finished systemd-update-done.service - Update is Completed.
Jan 29 16:25:53.726217 systemd-udevd[1403]: Using default interface naming scheme 'v255'.
Jan 29 16:25:53.747195 systemd[1]: Started systemd-udevd.service - Rule-based Manager for Device Events and Files.
Jan 29 16:25:53.794132 systemd[1]: Condition check resulted in dev-ttyS0.device - /dev/ttyS0 being skipped.
Jan 29 16:25:53.796654 kernel: BTRFS warning: duplicate device /dev/vda3 devid 1 generation 39 scanned by (udev-worker) (1433)
Jan 29 16:25:53.806771 systemd[1]: Starting systemd-networkd.service - Network Configuration...
Jan 29 16:25:53.880147 systemd[1]: Found device dev-disk-by\x2dlabel-OEM.device - /dev/disk/by-label/OEM.
Jan 29 16:25:53.896784 systemd[1]: Starting systemd-fsck@dev-disk-by\x2dlabel-OEM.service - File System Check on /dev/disk/by-label/OEM...
Jan 29 16:25:53.901617 kernel: i801_smbus 0000:00:1f.3: SMBus using PCI interrupt
Jan 29 16:25:53.915844 kernel: i2c i2c-0: 1/1 memory slots populated (from DMI)
Jan 29 16:25:53.926756 kernel: i2c i2c-0: Memory type 0x07 not supported yet, not instantiating SPD
Jan 29 16:25:53.926910 kernel: input: Power Button as /devices/LNXSYSTM:00/LNXPWRBN:00/input/input2
Jan 29 16:25:53.927393 systemd[1]: Started systemd-timesyncd.service - Network Time Synchronization.
Jan 29 16:25:53.927784 systemd[1]: Reached target time-set.target - System Time Set.
Jan 29 16:25:53.978632 kernel: ACPI: button: Power Button [PWRF]
Jan 29 16:25:53.991641 kernel: input: ImExPS/2 Generic Explorer Mouse as /devices/platform/i8042/serio1/input/input3
Jan 29 16:25:53.994021 systemd[1]: Finished systemd-fsck@dev-disk-by\x2dlabel-OEM.service - File System Check on /dev/disk/by-label/OEM.
Jan 29 16:25:54.017728 systemd[1]: Starting systemd-vconsole-setup.service - Virtual Console Setup...
Jan 29 16:25:54.048631 kernel: mousedev: PS/2 mouse device common for all mice
Jan 29 16:25:54.076126 systemd-networkd[1447]: lo: Link UP
Jan 29 16:25:54.076137 systemd-networkd[1447]: lo: Gained carrier
Jan 29 16:25:54.078343 systemd-networkd[1447]: Enumeration completed
Jan 29 16:25:54.078495 systemd[1]: Started systemd-networkd.service - Network Configuration.
Jan 29 16:25:54.080044 systemd[1]: Reached target network.target - Network.
Jan 29 16:25:54.081043 systemd-networkd[1447]: eth0: found matching network '/usr/lib/systemd/network/zz-default.network', based on potentially unpredictable interface name.
Jan 29 16:25:54.081048 systemd-networkd[1447]: eth0: Configuring with /usr/lib/systemd/network/zz-default.network.
Jan 29 16:25:54.082556 systemd-networkd[1447]: eth0: Link UP
Jan 29 16:25:54.082565 systemd-networkd[1447]: eth0: Gained carrier
Jan 29 16:25:54.082586 systemd-networkd[1447]: eth0: found matching network '/usr/lib/systemd/network/zz-default.network', based on potentially unpredictable interface name.
Jan 29 16:25:54.083789 systemd[1]: Starting systemd-networkd-persistent-storage.service - Enable Persistent Storage in systemd-networkd...
Jan 29 16:25:54.089756 systemd[1]: Starting systemd-networkd-wait-online.service - Wait for Network to be Configured...
Jan 29 16:25:54.108187 systemd-networkd[1447]: eth0: DHCPv4 address 10.0.0.149/16, gateway 10.0.0.1 acquired from 10.0.0.1
Jan 29 16:25:54.109482 systemd-timesyncd[1426]: Network configuration changed, trying to establish connection.
Jan 29 16:25:54.109923 kernel: kvm_amd: TSC scaling supported
Jan 29 16:25:54.109947 kernel: kvm_amd: Nested Virtualization enabled
Jan 29 16:25:54.109960 kernel: kvm_amd: Nested Paging enabled
Jan 29 16:25:54.109972 kernel: kvm_amd: LBR virtualization supported
Jan 29 16:25:54.687156 systemd-resolved[1342]: Clock change detected. Flushing caches.
Jan 29 16:25:54.687324 systemd-timesyncd[1426]: Contacted time server 10.0.0.1:123 (10.0.0.1).
Jan 29 16:25:54.687497 systemd-timesyncd[1426]: Initial clock synchronization to Wed 2025-01-29 16:25:54.687040 UTC.
Jan 29 16:25:54.688109 kernel: kvm_amd: Virtual VMLOAD VMSAVE supported
Jan 29 16:25:54.688133 kernel: kvm_amd: Virtual GIF supported
Jan 29 16:25:54.696938 systemd[1]: Finished systemd-networkd-persistent-storage.service - Enable Persistent Storage in systemd-networkd.
Jan 29 16:25:54.709668 kernel: EDAC MC: Ver: 3.0.0
Jan 29 16:25:54.742218 systemd[1]: Finished systemd-udev-settle.service - Wait for udev To Complete Device Initialization.
Jan 29 16:25:54.769043 systemd[1]: Finished systemd-vconsole-setup.service - Virtual Console Setup.
Jan 29 16:25:54.781022 systemd[1]: Starting lvm2-activation-early.service - Activation of LVM2 logical volumes...
Jan 29 16:25:54.790217 lvm[1476]: WARNING: Failed to connect to lvmetad. Falling back to device scanning.
Jan 29 16:25:54.820108 systemd[1]: Finished lvm2-activation-early.service - Activation of LVM2 logical volumes.
Jan 29 16:25:54.821714 systemd[1]: Reached target cryptsetup.target - Local Encrypted Volumes.
Jan 29 16:25:54.822862 systemd[1]: Reached target sysinit.target - System Initialization.
Jan 29 16:25:54.824058 systemd[1]: Started motdgen.path - Watch for update engine configuration changes.
Jan 29 16:25:54.825352 systemd[1]: Started user-cloudinit@var-lib-flatcar\x2dinstall-user_data.path - Watch for a cloud-config at /var/lib/flatcar-install/user_data.
Jan 29 16:25:54.826851 systemd[1]: Started logrotate.timer - Daily rotation of log files.
Jan 29 16:25:54.828092 systemd[1]: Started mdadm.timer - Weekly check for MD array's redundancy information..
Jan 29 16:25:54.829387 systemd[1]: Started systemd-tmpfiles-clean.timer - Daily Cleanup of Temporary Directories.
Jan 29 16:25:54.830678 systemd[1]: update-engine-stub.timer - Update Engine Stub Timer was skipped because of an unmet condition check (ConditionPathExists=/usr/.noupdate).
Jan 29 16:25:54.830721 systemd[1]: Reached target paths.target - Path Units.
Jan 29 16:25:54.831705 systemd[1]: Reached target timers.target - Timer Units.
Jan 29 16:25:54.833877 systemd[1]: Listening on dbus.socket - D-Bus System Message Bus Socket.
Jan 29 16:25:54.836793 systemd[1]: Starting docker.socket - Docker Socket for the API...
Jan 29 16:25:54.840361 systemd[1]: Listening on sshd-unix-local.socket - OpenSSH Server Socket (systemd-ssh-generator, AF_UNIX Local).
Jan 29 16:25:54.841853 systemd[1]: Listening on sshd-vsock.socket - OpenSSH Server Socket (systemd-ssh-generator, AF_VSOCK).
Jan 29 16:25:54.843144 systemd[1]: Reached target ssh-access.target - SSH Access Available.
Jan 29 16:25:54.849230 systemd[1]: Listening on sshd.socket - OpenSSH Server Socket.
Jan 29 16:25:54.850825 systemd[1]: Listening on systemd-hostnamed.socket - Hostname Service Socket.
Jan 29 16:25:54.853413 systemd[1]: Starting lvm2-activation.service - Activation of LVM2 logical volumes...
Jan 29 16:25:54.855138 systemd[1]: Listening on docker.socket - Docker Socket for the API.
Jan 29 16:25:54.856327 systemd[1]: Reached target sockets.target - Socket Units.
Jan 29 16:25:54.857317 systemd[1]: Reached target basic.target - Basic System.
Jan 29 16:25:54.858312 systemd[1]: addon-config@oem.service - Configure Addon /oem was skipped because no trigger condition checks were met.
Jan 29 16:25:54.858349 systemd[1]: addon-run@oem.service - Run Addon /oem was skipped because no trigger condition checks were met.
Jan 29 16:25:54.859429 systemd[1]: Starting containerd.service - containerd container runtime...
Jan 29 16:25:54.861730 systemd[1]: Starting dbus.service - D-Bus System Message Bus...
Jan 29 16:25:54.866621 systemd[1]: Starting enable-oem-cloudinit.service - Enable cloudinit...
Jan 29 16:25:54.869337 lvm[1480]: WARNING: Failed to connect to lvmetad. Falling back to device scanning.
Jan 29 16:25:54.870083 systemd[1]: Starting extend-filesystems.service - Extend Filesystems...
Jan 29 16:25:54.871527 systemd[1]: flatcar-setup-environment.service - Modifies /etc/environment for CoreOS was skipped because of an unmet condition check (ConditionPathExists=/oem/bin/flatcar-setup-environment).
Jan 29 16:25:54.874380 jq[1483]: false
Jan 29 16:25:54.876103 systemd[1]: Starting motdgen.service - Generate /run/flatcar/motd...
Jan 29 16:25:54.880929 systemd[1]: Starting ssh-key-proc-cmdline.service - Install an ssh key from /proc/cmdline...
Jan 29 16:25:54.888836 systemd[1]: Starting sshd-keygen.service - Generate sshd host keys...
Jan 29 16:25:54.894418 dbus-daemon[1482]: [system] SELinux support is enabled
Jan 29 16:25:54.894980 systemd[1]: Starting systemd-logind.service - User Login Management...
Jan 29 16:25:54.897118 systemd[1]: tcsd.service - TCG Core Services Daemon was skipped because of an unmet condition check (ConditionPathExists=/dev/tpm0).
Jan 29 16:25:54.897774 systemd[1]: cgroup compatibility translation between legacy and unified hierarchy settings activated. See cgroup-compat debug messages for details.
Jan 29 16:25:54.899262 extend-filesystems[1484]: Found loop3
Jan 29 16:25:54.900811 extend-filesystems[1484]: Found loop4
Jan 29 16:25:54.900811 extend-filesystems[1484]: Found loop5
Jan 29 16:25:54.900811 extend-filesystems[1484]: Found sr0
Jan 29 16:25:54.900811 extend-filesystems[1484]: Found vda
Jan 29 16:25:54.900811 extend-filesystems[1484]: Found vda1
Jan 29 16:25:54.900811 extend-filesystems[1484]: Found vda2
Jan 29 16:25:54.900811 extend-filesystems[1484]: Found vda3
Jan 29 16:25:54.900811 extend-filesystems[1484]: Found usr
Jan 29 16:25:54.900811 extend-filesystems[1484]: Found vda4
Jan 29 16:25:54.900811 extend-filesystems[1484]: Found vda6
Jan 29 16:25:54.900811 extend-filesystems[1484]: Found vda7
Jan 29 16:25:54.900811 extend-filesystems[1484]: Found vda9
Jan 29 16:25:54.900811 extend-filesystems[1484]: Checking size of /dev/vda9
Jan 29 16:25:54.900258 systemd[1]: Starting update-engine.service - Update Engine...
Jan 29 16:25:54.905758 systemd[1]: Starting update-ssh-keys-after-ignition.service - Run update-ssh-keys once after Ignition...
Jan 29 16:25:54.907148 systemd[1]: Started dbus.service - D-Bus System Message Bus.
Jan 29 16:25:54.912851 systemd[1]: Finished lvm2-activation.service - Activation of LVM2 logical volumes.
Jan 29 16:25:54.918597 systemd[1]: enable-oem-cloudinit.service: Skipped due to 'exec-condition'.
Jan 29 16:25:54.918870 systemd[1]: Condition check resulted in enable-oem-cloudinit.service - Enable cloudinit being skipped.
Jan 29 16:25:54.919208 systemd[1]: motdgen.service: Deactivated successfully.
Jan 29 16:25:54.919446 systemd[1]: Finished motdgen.service - Generate /run/flatcar/motd.
Jan 29 16:25:54.921134 systemd[1]: ssh-key-proc-cmdline.service: Deactivated successfully.
Jan 29 16:25:54.921371 systemd[1]: Finished ssh-key-proc-cmdline.service - Install an ssh key from /proc/cmdline.
Jan 29 16:25:54.928169 jq[1497]: true
Jan 29 16:25:54.932713 extend-filesystems[1484]: Resized partition /dev/vda9
Jan 29 16:25:54.941208 systemd[1]: system-cloudinit@usr-share-oem-cloud\x2dconfig.yml.service - Load cloud-config from /usr/share/oem/cloud-config.yml was skipped because of an unmet condition check (ConditionFileNotEmpty=/usr/share/oem/cloud-config.yml).
Jan 29 16:25:54.941240 systemd[1]: Reached target system-config.target - Load system-provided cloud configs.
Jan 29 16:25:54.943267 systemd[1]: user-cloudinit-proc-cmdline.service - Load cloud-config from url defined in /proc/cmdline was skipped because of an unmet condition check (ConditionKernelCommandLine=cloud-config-url).
Jan 29 16:25:54.943294 systemd[1]: Reached target user-config.target - Load user-provided cloud configs.
Jan 29 16:25:54.944226 update_engine[1495]: I20250129 16:25:54.944141 1495 main.cc:92] Flatcar Update Engine starting
Jan 29 16:25:54.948812 update_engine[1495]: I20250129 16:25:54.947944 1495 update_check_scheduler.cc:74] Next update check in 9m42s
Jan 29 16:25:54.948154 systemd[1]: Started update-engine.service - Update Engine.
Jan 29 16:25:54.948935 extend-filesystems[1514]: resize2fs 1.47.1 (20-May-2024)
Jan 29 16:25:54.948985 (ntainerd)[1511]: containerd.service: Referenced but unset environment variable evaluates to an empty string: TORCX_IMAGEDIR, TORCX_UNPACKDIR
Jan 29 16:25:54.955660 kernel: EXT4-fs (vda9): resizing filesystem from 553472 to 1864699 blocks
Jan 29 16:25:54.955701 jq[1510]: true
Jan 29 16:25:54.957816 systemd[1]: Started locksmithd.service - Cluster reboot manager.
Jan 29 16:25:54.958704 kernel: BTRFS warning: duplicate device /dev/vda3 devid 1 generation 39 scanned by (udev-worker) (1432)
Jan 29 16:25:55.043092 systemd-logind[1494]: Watching system buttons on /dev/input/event1 (Power Button)
Jan 29 16:25:55.043126 systemd-logind[1494]: Watching system buttons on /dev/input/event0 (AT Translated Set 2 keyboard)
Jan 29 16:25:55.047663 kernel: EXT4-fs (vda9): resized filesystem to 1864699
Jan 29 16:25:55.053032 systemd-logind[1494]: New seat seat0.
Jan 29 16:25:55.054806 systemd[1]: Started systemd-logind.service - User Login Management.
Jan 29 16:25:55.065706 locksmithd[1516]: locksmithd starting currentOperation="UPDATE_STATUS_IDLE" strategy="reboot"
Jan 29 16:25:55.073243 extend-filesystems[1514]: Filesystem at /dev/vda9 is mounted on /; on-line resizing required
Jan 29 16:25:55.073243 extend-filesystems[1514]: old_desc_blocks = 1, new_desc_blocks = 1
Jan 29 16:25:55.073243 extend-filesystems[1514]: The filesystem on /dev/vda9 is now 1864699 (4k) blocks long.
Jan 29 16:25:55.125275 extend-filesystems[1484]: Resized filesystem in /dev/vda9
Jan 29 16:25:55.126247 bash[1533]: Updated "/home/core/.ssh/authorized_keys"
Jan 29 16:25:55.126374 sshd_keygen[1502]: ssh-keygen: generating new host keys: RSA ECDSA ED25519
Jan 29 16:25:55.074488 systemd[1]: extend-filesystems.service: Deactivated successfully.
Jan 29 16:25:55.074952 systemd[1]: Finished extend-filesystems.service - Extend Filesystems.
Jan 29 16:25:55.128870 systemd[1]: Finished sshd-keygen.service - Generate sshd host keys.
Jan 29 16:25:55.130666 systemd[1]: Finished update-ssh-keys-after-ignition.service - Run update-ssh-keys once after Ignition.
Jan 29 16:25:55.155620 systemd[1]: Created slice system-sshd.slice - Slice /system/sshd.
Jan 29 16:25:55.215113 systemd[1]: Starting issuegen.service - Generate /run/issue...
Jan 29 16:25:55.217764 systemd[1]: Started sshd@0-10.0.0.149:22-10.0.0.1:56224.service - OpenSSH per-connection server daemon (10.0.0.1:56224).
Jan 29 16:25:55.220009 systemd[1]: sshkeys.service was skipped because no trigger condition checks were met.
Jan 29 16:25:55.225298 systemd[1]: issuegen.service: Deactivated successfully.
Jan 29 16:25:55.225627 systemd[1]: Finished issuegen.service - Generate /run/issue.
Jan 29 16:25:55.228943 systemd[1]: Starting systemd-user-sessions.service - Permit User Sessions...
Jan 29 16:25:55.244523 systemd[1]: Finished systemd-user-sessions.service - Permit User Sessions.
Jan 29 16:25:55.300164 systemd[1]: Started getty@tty1.service - Getty on tty1.
Jan 29 16:25:55.313524 systemd[1]: Started serial-getty@ttyS0.service - Serial Getty on ttyS0.
Jan 29 16:25:55.315373 systemd[1]: Reached target getty.target - Login Prompts.
Jan 29 16:25:55.334249 sshd[1551]: Accepted publickey for core from 10.0.0.1 port 56224 ssh2: RSA SHA256:cY969aNwVd9R5zop7YhxFiRwg6M+CFzjYSBWBeowLAQ
Jan 29 16:25:55.361869 sshd-session[1551]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Jan 29 16:25:55.373312 systemd-logind[1494]: New session 1 of user core.
Jan 29 16:25:55.374524 systemd[1]: Created slice user-500.slice - User Slice of UID 500.
Jan 29 16:25:55.390857 systemd[1]: Starting user-runtime-dir@500.service - User Runtime Directory /run/user/500...
Jan 29 16:25:55.424787 systemd[1]: Finished user-runtime-dir@500.service - User Runtime Directory /run/user/500.
Jan 29 16:25:55.435102 systemd[1]: Starting user@500.service - User Manager for UID 500...
Jan 29 16:25:55.441057 (systemd)[1567]: pam_unix(systemd-user:session): session opened for user core(uid=500) by (uid=0)
Jan 29 16:25:55.443425 systemd-logind[1494]: New session c1 of user core.
Jan 29 16:25:55.569941 containerd[1511]: time="2025-01-29T16:25:55.569782894Z" level=info msg="starting containerd" revision=9b2ad7760328148397346d10c7b2004271249db4 version=v1.7.23
Jan 29 16:25:55.645790 containerd[1511]: time="2025-01-29T16:25:55.645707209Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.aufs\"..." type=io.containerd.snapshotter.v1
Jan 29 16:25:55.647780 containerd[1511]: time="2025-01-29T16:25:55.647675932Z" level=info msg="skip loading plugin \"io.containerd.snapshotter.v1.aufs\"..." error="aufs is not supported (modprobe aufs failed: exit status 1 \"modprobe: FATAL: Module aufs not found in directory /lib/modules/6.6.74-flatcar\\n\"): skip plugin" type=io.containerd.snapshotter.v1
Jan 29 16:25:55.647780 containerd[1511]: time="2025-01-29T16:25:55.647763466Z" level=info msg="loading plugin \"io.containerd.event.v1.exchange\"..." type=io.containerd.event.v1
Jan 29 16:25:55.647780 containerd[1511]: time="2025-01-29T16:25:55.647784595Z" level=info msg="loading plugin \"io.containerd.internal.v1.opt\"..." type=io.containerd.internal.v1
Jan 29 16:25:55.648078 containerd[1511]: time="2025-01-29T16:25:55.648052868Z" level=info msg="loading plugin \"io.containerd.warning.v1.deprecations\"..." type=io.containerd.warning.v1
Jan 29 16:25:55.648124 containerd[1511]: time="2025-01-29T16:25:55.648079128Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.blockfile\"..." type=io.containerd.snapshotter.v1
Jan 29 16:25:55.648235 containerd[1511]: time="2025-01-29T16:25:55.648214912Z" level=info msg="skip loading plugin \"io.containerd.snapshotter.v1.blockfile\"..." error="no scratch file generator: skip plugin" type=io.containerd.snapshotter.v1
Jan 29 16:25:55.648258 containerd[1511]: time="2025-01-29T16:25:55.648236272Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.btrfs\"..." type=io.containerd.snapshotter.v1
Jan 29 16:25:55.648591 containerd[1511]: time="2025-01-29T16:25:55.648564388Z" level=info msg="skip loading plugin \"io.containerd.snapshotter.v1.btrfs\"..." error="path /var/lib/containerd/io.containerd.snapshotter.v1.btrfs (ext4) must be a btrfs filesystem to be used with the btrfs snapshotter: skip plugin" type=io.containerd.snapshotter.v1
Jan 29 16:25:55.648616 containerd[1511]: time="2025-01-29T16:25:55.648587271Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.devmapper\"..." type=io.containerd.snapshotter.v1
Jan 29 16:25:55.648616 containerd[1511]: time="2025-01-29T16:25:55.648605184Z" level=info msg="skip loading plugin \"io.containerd.snapshotter.v1.devmapper\"..." error="devmapper not configured: skip plugin" type=io.containerd.snapshotter.v1
Jan 29 16:25:55.648676 containerd[1511]: time="2025-01-29T16:25:55.648617938Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.native\"..." type=io.containerd.snapshotter.v1
Jan 29 16:25:55.648785 containerd[1511]: time="2025-01-29T16:25:55.648763311Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.overlayfs\"..." type=io.containerd.snapshotter.v1
Jan 29 16:25:55.649113 containerd[1511]: time="2025-01-29T16:25:55.649088781Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.zfs\"..." type=io.containerd.snapshotter.v1
Jan 29 16:25:55.649324 containerd[1511]: time="2025-01-29T16:25:55.649299476Z" level=info msg="skip loading plugin \"io.containerd.snapshotter.v1.zfs\"..." error="path /var/lib/containerd/io.containerd.snapshotter.v1.zfs must be a zfs filesystem to be used with the zfs snapshotter: skip plugin" type=io.containerd.snapshotter.v1
Jan 29 16:25:55.649350 containerd[1511]: time="2025-01-29T16:25:55.649320626Z" level=info msg="loading plugin \"io.containerd.content.v1.content\"..." type=io.containerd.content.v1
Jan 29 16:25:55.649477 containerd[1511]: time="2025-01-29T16:25:55.649453756Z" level=info msg="loading plugin \"io.containerd.metadata.v1.bolt\"..." type=io.containerd.metadata.v1
Jan 29 16:25:55.649618 containerd[1511]: time="2025-01-29T16:25:55.649598156Z" level=info msg="metadata content store policy set" policy=shared
Jan 29 16:25:55.655126 containerd[1511]: time="2025-01-29T16:25:55.655080057Z" level=info msg="loading plugin \"io.containerd.gc.v1.scheduler\"..." type=io.containerd.gc.v1
Jan 29 16:25:55.655169 containerd[1511]: time="2025-01-29T16:25:55.655141883Z" level=info msg="loading plugin \"io.containerd.differ.v1.walking\"..." type=io.containerd.differ.v1
Jan 29 16:25:55.655169 containerd[1511]: time="2025-01-29T16:25:55.655163012Z" level=info msg="loading plugin \"io.containerd.lease.v1.manager\"..." type=io.containerd.lease.v1
Jan 29 16:25:55.655227 containerd[1511]: time="2025-01-29T16:25:55.655183190Z" level=info msg="loading plugin \"io.containerd.streaming.v1.manager\"..." type=io.containerd.streaming.v1
Jan 29 16:25:55.655227 containerd[1511]: time="2025-01-29T16:25:55.655203649Z" level=info msg="loading plugin \"io.containerd.runtime.v1.linux\"..." type=io.containerd.runtime.v1
Jan 29 16:25:55.655391 containerd[1511]: time="2025-01-29T16:25:55.655362907Z" level=info msg="loading plugin \"io.containerd.monitor.v1.cgroups\"..." type=io.containerd.monitor.v1
Jan 29 16:25:55.655688 containerd[1511]: time="2025-01-29T16:25:55.655630680Z" level=info msg="loading plugin \"io.containerd.runtime.v2.task\"..." type=io.containerd.runtime.v2
Jan 29 16:25:55.655816 containerd[1511]: time="2025-01-29T16:25:55.655793284Z" level=info msg="loading plugin \"io.containerd.runtime.v2.shim\"..." type=io.containerd.runtime.v2
Jan 29 16:25:55.655841 containerd[1511]: time="2025-01-29T16:25:55.655816378Z" level=info msg="loading plugin \"io.containerd.sandbox.store.v1.local\"..." type=io.containerd.sandbox.store.v1
Jan 29 16:25:55.655841 containerd[1511]: time="2025-01-29T16:25:55.655832909Z" level=info msg="loading plugin \"io.containerd.sandbox.controller.v1.local\"..." type=io.containerd.sandbox.controller.v1
Jan 29 16:25:55.655880 containerd[1511]: time="2025-01-29T16:25:55.655847196Z" level=info msg="loading plugin \"io.containerd.service.v1.containers-service\"..." type=io.containerd.service.v1
Jan 29 16:25:55.655880 containerd[1511]: time="2025-01-29T16:25:55.655862324Z" level=info msg="loading plugin \"io.containerd.service.v1.content-service\"..." type=io.containerd.service.v1
Jan 29 16:25:55.655931 containerd[1511]: time="2025-01-29T16:25:55.655878023Z" level=info msg="loading plugin \"io.containerd.service.v1.diff-service\"..." type=io.containerd.service.v1
Jan 29 16:25:55.655931 containerd[1511]: time="2025-01-29T16:25:55.655894594Z" level=info msg="loading plugin \"io.containerd.service.v1.images-service\"..." type=io.containerd.service.v1
Jan 29 16:25:55.655931 containerd[1511]: time="2025-01-29T16:25:55.655910144Z" level=info msg="loading plugin \"io.containerd.service.v1.introspection-service\"..." type=io.containerd.service.v1
Jan 29 16:25:55.655931 containerd[1511]: time="2025-01-29T16:25:55.655924821Z" level=info msg="loading plugin \"io.containerd.service.v1.namespaces-service\"..." type=io.containerd.service.v1
Jan 29 16:25:55.656020 containerd[1511]: time="2025-01-29T16:25:55.655947093Z" level=info msg="loading plugin \"io.containerd.service.v1.snapshots-service\"..." type=io.containerd.service.v1
Jan 29 16:25:55.656020 containerd[1511]: time="2025-01-29T16:25:55.655961901Z" level=info msg="loading plugin \"io.containerd.service.v1.tasks-service\"..." type=io.containerd.service.v1
Jan 29 16:25:55.656020 containerd[1511]: time="2025-01-29T16:25:55.655993169Z" level=info msg="loading plugin \"io.containerd.grpc.v1.containers\"..." type=io.containerd.grpc.v1
Jan 29 16:25:55.656085 containerd[1511]: time="2025-01-29T16:25:55.656021993Z" level=info msg="loading plugin \"io.containerd.grpc.v1.content\"..." type=io.containerd.grpc.v1
Jan 29 16:25:55.656085 containerd[1511]: time="2025-01-29T16:25:55.656037292Z" level=info msg="loading plugin \"io.containerd.grpc.v1.diff\"..." type=io.containerd.grpc.v1
Jan 29 16:25:55.656085 containerd[1511]: time="2025-01-29T16:25:55.656055346Z" level=info msg="loading plugin \"io.containerd.grpc.v1.events\"..." type=io.containerd.grpc.v1
Jan 29 16:25:55.656085 containerd[1511]: time="2025-01-29T16:25:55.656069643Z" level=info msg="loading plugin \"io.containerd.grpc.v1.images\"..." type=io.containerd.grpc.v1
Jan 29 16:25:55.656181 containerd[1511]: time="2025-01-29T16:25:55.656085533Z" level=info msg="loading plugin \"io.containerd.grpc.v1.introspection\"..." type=io.containerd.grpc.v1
Jan 29 16:25:55.656181 containerd[1511]: time="2025-01-29T16:25:55.656099429Z" level=info msg="loading plugin \"io.containerd.grpc.v1.leases\"..." type=io.containerd.grpc.v1
Jan 29 16:25:55.656181 containerd[1511]: time="2025-01-29T16:25:55.656117873Z" level=info msg="loading plugin \"io.containerd.grpc.v1.namespaces\"..." type=io.containerd.grpc.v1
Jan 29 16:25:55.656181 containerd[1511]: time="2025-01-29T16:25:55.656135406Z" level=info msg="loading plugin \"io.containerd.grpc.v1.sandbox-controllers\"..." type=io.containerd.grpc.v1
Jan 29 16:25:55.656181 containerd[1511]: time="2025-01-29T16:25:55.656152468Z" level=info msg="loading plugin \"io.containerd.grpc.v1.sandboxes\"..." type=io.containerd.grpc.v1
Jan 29 16:25:55.656181 containerd[1511]: time="2025-01-29T16:25:55.656167767Z" level=info msg="loading plugin \"io.containerd.grpc.v1.snapshots\"..." type=io.containerd.grpc.v1
Jan 29 16:25:55.656372 containerd[1511]: time="2025-01-29T16:25:55.656182144Z" level=info msg="loading plugin \"io.containerd.grpc.v1.streaming\"..." type=io.containerd.grpc.v1
Jan 29 16:25:55.656372 containerd[1511]: time="2025-01-29T16:25:55.656196100Z" level=info msg="loading plugin \"io.containerd.grpc.v1.tasks\"..." type=io.containerd.grpc.v1
Jan 29 16:25:55.656372 containerd[1511]: time="2025-01-29T16:25:55.656217290Z" level=info msg="loading plugin \"io.containerd.transfer.v1.local\"..." type=io.containerd.transfer.v1
Jan 29 16:25:55.656372 containerd[1511]: time="2025-01-29T16:25:55.656243379Z" level=info msg="loading plugin \"io.containerd.grpc.v1.transfer\"..." type=io.containerd.grpc.v1
Jan 29 16:25:55.656372 containerd[1511]: time="2025-01-29T16:25:55.656258787Z" level=info msg="loading plugin \"io.containerd.grpc.v1.version\"..." type=io.containerd.grpc.v1
Jan 29 16:25:55.656372 containerd[1511]: time="2025-01-29T16:25:55.656271662Z" level=info msg="loading plugin \"io.containerd.internal.v1.restart\"..." type=io.containerd.internal.v1
Jan 29 16:25:55.656372 containerd[1511]: time="2025-01-29T16:25:55.656325483Z" level=info msg="loading plugin \"io.containerd.tracing.processor.v1.otlp\"..." type=io.containerd.tracing.processor.v1
Jan 29 16:25:55.656372 containerd[1511]: time="2025-01-29T16:25:55.656344618Z" level=info msg="skip loading plugin \"io.containerd.tracing.processor.v1.otlp\"..." error="skip plugin: tracing endpoint not configured" type=io.containerd.tracing.processor.v1
Jan 29 16:25:55.656372 containerd[1511]: time="2025-01-29T16:25:55.656357032Z" level=info msg="loading plugin \"io.containerd.internal.v1.tracing\"..." type=io.containerd.internal.v1
Jan 29 16:25:55.656372 containerd[1511]: time="2025-01-29T16:25:55.656370798Z" level=info msg="skip loading plugin \"io.containerd.internal.v1.tracing\"..." error="skip plugin: tracing endpoint not configured" type=io.containerd.internal.v1
Jan 29 16:25:55.656578 containerd[1511]: time="2025-01-29T16:25:55.656382139Z" level=info msg="loading plugin \"io.containerd.grpc.v1.healthcheck\"..." type=io.containerd.grpc.v1
Jan 29 16:25:55.656578 containerd[1511]: time="2025-01-29T16:25:55.656396866Z" level=info msg="loading plugin \"io.containerd.nri.v1.nri\"..." type=io.containerd.nri.v1
Jan 29 16:25:55.656578 containerd[1511]: time="2025-01-29T16:25:55.656419268Z" level=info msg="NRI interface is disabled by configuration."
Jan 29 16:25:55.656578 containerd[1511]: time="2025-01-29T16:25:55.656442482Z" level=info msg="loading plugin \"io.containerd.grpc.v1.cri\"..." type=io.containerd.grpc.v1
Jan 29 16:25:55.656886 containerd[1511]: time="2025-01-29T16:25:55.656823567Z" level=info msg="Start cri plugin with config {PluginConfig:{ContainerdConfig:{Snapshotter:overlayfs DefaultRuntimeName:runc DefaultRuntime:{Type: Path: Engine: PodAnnotations:[] ContainerAnnotations:[] Root: Options:map[] PrivilegedWithoutHostDevices:false PrivilegedWithoutHostDevicesAllDevicesAllowed:false BaseRuntimeSpec: NetworkPluginConfDir: NetworkPluginMaxConfNum:0 Snapshotter: SandboxMode:} UntrustedWorkloadRuntime:{Type: Path: Engine: PodAnnotations:[] ContainerAnnotations:[] Root: Options:map[] PrivilegedWithoutHostDevices:false PrivilegedWithoutHostDevicesAllDevicesAllowed:false BaseRuntimeSpec: NetworkPluginConfDir: NetworkPluginMaxConfNum:0 Snapshotter: SandboxMode:} Runtimes:map[runc:{Type:io.containerd.runc.v2 Path: Engine: PodAnnotations:[] ContainerAnnotations:[] Root: Options:map[SystemdCgroup:true] PrivilegedWithoutHostDevices:false PrivilegedWithoutHostDevicesAllDevicesAllowed:false BaseRuntimeSpec: NetworkPluginConfDir: NetworkPluginMaxConfNum:0 Snapshotter: SandboxMode:podsandbox}] NoPivot:false DisableSnapshotAnnotations:true DiscardUnpackedLayers:false IgnoreBlockIONotEnabledErrors:false IgnoreRdtNotEnabledErrors:false} CniConfig:{NetworkPluginBinDir:/opt/cni/bin NetworkPluginConfDir:/etc/cni/net.d NetworkPluginMaxConfNum:1 NetworkPluginSetupSerially:false NetworkPluginConfTemplate: IPPreference:} Registry:{ConfigPath: Mirrors:map[] Configs:map[] Auths:map[] Headers:map[]} ImageDecryption:{KeyModel:node} DisableTCPService:true StreamServerAddress:127.0.0.1 StreamServerPort:0 StreamIdleTimeout:4h0m0s EnableSelinux:true SelinuxCategoryRange:1024 SandboxImage:registry.k8s.io/pause:3.8 StatsCollectPeriod:10 SystemdCgroup:false EnableTLSStreaming:false X509KeyPairStreaming:{TLSCertFile: TLSKeyFile:} MaxContainerLogLineSize:16384 DisableCgroup:false DisableApparmor:false RestrictOOMScoreAdj:false MaxConcurrentDownloads:3 DisableProcMount:false UnsetSeccompProfile: TolerateMissingHugetlbController:true DisableHugetlbController:true DeviceOwnershipFromSecurityContext:false IgnoreImageDefinedVolumes:false NetNSMountsUnderStateDir:false EnableUnprivilegedPorts:false EnableUnprivilegedICMP:false EnableCDI:false CDISpecDirs:[/etc/cdi /var/run/cdi] ImagePullProgressTimeout:5m0s DrainExecSyncIOTimeout:0s ImagePullWithSyncFs:false IgnoreDeprecationWarnings:[]} ContainerdRootDir:/var/lib/containerd ContainerdEndpoint:/run/containerd/containerd.sock RootDir:/var/lib/containerd/io.containerd.grpc.v1.cri StateDir:/run/containerd/io.containerd.grpc.v1.cri}"
Jan 29 16:25:55.656886 containerd[1511]: time="2025-01-29T16:25:55.656886685Z" level=info msg="Connect containerd service"
Jan 29 16:25:55.657063 containerd[1511]: time="2025-01-29T16:25:55.656938853Z" level=info msg="using legacy CRI server"
Jan 29 16:25:55.657063 containerd[1511]: time="2025-01-29T16:25:55.656947489Z" level=info msg="using experimental NRI integration - disable nri plugin to prevent this"
Jan 29 16:25:55.657122 containerd[1511]: time="2025-01-29T16:25:55.657103842Z" level=info msg="Get image filesystem path \"/var/lib/containerd/io.containerd.snapshotter.v1.overlayfs\""
Jan 29 16:25:55.657905 containerd[1511]: time="2025-01-29T16:25:55.657874347Z" level=error msg="failed to load cni during init, please check CRI plugin status before setting up network for pods" error="cni config load failed: no network config found in /etc/cni/net.d: cni plugin not initialized:
failed to load cni config" Jan 29 16:25:55.658125 containerd[1511]: time="2025-01-29T16:25:55.658067029Z" level=info msg="Start subscribing containerd event" Jan 29 16:25:55.658358 containerd[1511]: time="2025-01-29T16:25:55.658262124Z" level=info msg="Start recovering state" Jan 29 16:25:55.658484 containerd[1511]: time="2025-01-29T16:25:55.658464995Z" level=info msg="Start event monitor" Jan 29 16:25:55.658555 containerd[1511]: time="2025-01-29T16:25:55.658539755Z" level=info msg="Start snapshots syncer" Jan 29 16:25:55.658582 containerd[1511]: time="2025-01-29T16:25:55.658556857Z" level=info msg="Start cni network conf syncer for default" Jan 29 16:25:55.658582 containerd[1511]: time="2025-01-29T16:25:55.658572426Z" level=info msg="Start streaming server" Jan 29 16:25:55.659322 containerd[1511]: time="2025-01-29T16:25:55.659064519Z" level=info msg=serving... address=/run/containerd/containerd.sock.ttrpc Jan 29 16:25:55.659322 containerd[1511]: time="2025-01-29T16:25:55.659151522Z" level=info msg=serving... address=/run/containerd/containerd.sock Jan 29 16:25:55.659322 containerd[1511]: time="2025-01-29T16:25:55.659278230Z" level=info msg="containerd successfully booted in 0.090758s" Jan 29 16:25:55.659379 systemd[1]: Started containerd.service - containerd container runtime. Jan 29 16:25:55.693459 systemd[1567]: Queued start job for default target default.target. Jan 29 16:25:55.706470 systemd[1567]: Created slice app.slice - User Application Slice. Jan 29 16:25:55.706511 systemd[1567]: Reached target paths.target - Paths. Jan 29 16:25:55.706566 systemd[1567]: Reached target timers.target - Timers. Jan 29 16:25:55.708608 systemd[1567]: Starting dbus.socket - D-Bus User Message Bus Socket... Jan 29 16:25:55.748805 systemd[1567]: Listening on dbus.socket - D-Bus User Message Bus Socket. Jan 29 16:25:55.748983 systemd[1567]: Reached target sockets.target - Sockets. Jan 29 16:25:55.749061 systemd[1567]: Reached target basic.target - Basic System. 
Jan 29 16:25:55.749131 systemd[1567]: Reached target default.target - Main User Target. Jan 29 16:25:55.749183 systemd[1567]: Startup finished in 277ms. Jan 29 16:25:55.749213 systemd[1]: Started user@500.service - User Manager for UID 500. Jan 29 16:25:55.768785 systemd[1]: Started session-1.scope - Session 1 of User core. Jan 29 16:25:55.843772 systemd[1]: Started sshd@1-10.0.0.149:22-10.0.0.1:56234.service - OpenSSH per-connection server daemon (10.0.0.1:56234). Jan 29 16:25:55.883956 sshd[1582]: Accepted publickey for core from 10.0.0.1 port 56234 ssh2: RSA SHA256:cY969aNwVd9R5zop7YhxFiRwg6M+CFzjYSBWBeowLAQ Jan 29 16:25:55.885747 sshd-session[1582]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Jan 29 16:25:55.890095 systemd-logind[1494]: New session 2 of user core. Jan 29 16:25:55.902783 systemd[1]: Started session-2.scope - Session 2 of User core. Jan 29 16:25:55.960862 sshd[1584]: Connection closed by 10.0.0.1 port 56234 Jan 29 16:25:55.961298 sshd-session[1582]: pam_unix(sshd:session): session closed for user core Jan 29 16:25:55.978213 systemd[1]: sshd@1-10.0.0.149:22-10.0.0.1:56234.service: Deactivated successfully. Jan 29 16:25:55.979978 systemd[1]: session-2.scope: Deactivated successfully. Jan 29 16:25:55.981376 systemd-logind[1494]: Session 2 logged out. Waiting for processes to exit. Jan 29 16:25:55.982716 systemd[1]: Started sshd@2-10.0.0.149:22-10.0.0.1:56248.service - OpenSSH per-connection server daemon (10.0.0.1:56248). Jan 29 16:25:55.985155 systemd-logind[1494]: Removed session 2. Jan 29 16:25:56.028072 sshd[1589]: Accepted publickey for core from 10.0.0.1 port 56248 ssh2: RSA SHA256:cY969aNwVd9R5zop7YhxFiRwg6M+CFzjYSBWBeowLAQ Jan 29 16:25:56.029799 sshd-session[1589]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Jan 29 16:25:56.034097 systemd-logind[1494]: New session 3 of user core. Jan 29 16:25:56.043806 systemd[1]: Started session-3.scope - Session 3 of User core. 
Jan 29 16:25:56.101970 sshd[1592]: Connection closed by 10.0.0.1 port 56248 Jan 29 16:25:56.102311 sshd-session[1589]: pam_unix(sshd:session): session closed for user core Jan 29 16:25:56.108265 systemd[1]: sshd@2-10.0.0.149:22-10.0.0.1:56248.service: Deactivated successfully. Jan 29 16:25:56.110486 systemd[1]: session-3.scope: Deactivated successfully. Jan 29 16:25:56.111201 systemd-logind[1494]: Session 3 logged out. Waiting for processes to exit. Jan 29 16:25:56.112459 systemd-logind[1494]: Removed session 3. Jan 29 16:25:56.222907 systemd-networkd[1447]: eth0: Gained IPv6LL Jan 29 16:25:56.226540 systemd[1]: Finished systemd-networkd-wait-online.service - Wait for Network to be Configured. Jan 29 16:25:56.228525 systemd[1]: Reached target network-online.target - Network is Online. Jan 29 16:25:56.239905 systemd[1]: Starting coreos-metadata.service - QEMU metadata agent... Jan 29 16:25:56.242776 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent... Jan 29 16:25:56.245037 systemd[1]: Starting nvidia.service - NVIDIA Configure Service... Jan 29 16:25:56.267857 systemd[1]: coreos-metadata.service: Deactivated successfully. Jan 29 16:25:56.268338 systemd[1]: Finished coreos-metadata.service - QEMU metadata agent. Jan 29 16:25:56.277487 systemd[1]: Finished nvidia.service - NVIDIA Configure Service. Jan 29 16:25:56.280958 systemd[1]: packet-phone-home.service - Report Success to Packet was skipped because no trigger condition checks were met. Jan 29 16:25:57.841004 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent. Jan 29 16:25:57.842584 systemd[1]: Reached target multi-user.target - Multi-User System. Jan 29 16:25:57.846746 systemd[1]: Startup finished in 717ms (kernel) + 6.307s (initrd) + 6.303s (userspace) = 13.328s. 
Jan 29 16:25:57.874109 (kubelet)[1619]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS, KUBELET_KUBEADM_ARGS Jan 29 16:25:58.553083 kubelet[1619]: E0129 16:25:58.552946 1619 run.go:72] "command failed" err="failed to load kubelet config file, path: /var/lib/kubelet/config.yaml, error: failed to load Kubelet config file /var/lib/kubelet/config.yaml, error failed to read kubelet config file \"/var/lib/kubelet/config.yaml\", error: open /var/lib/kubelet/config.yaml: no such file or directory" Jan 29 16:25:58.556637 systemd[1]: kubelet.service: Main process exited, code=exited, status=1/FAILURE Jan 29 16:25:58.556892 systemd[1]: kubelet.service: Failed with result 'exit-code'. Jan 29 16:25:58.557283 systemd[1]: kubelet.service: Consumed 2.078s CPU time, 239.5M memory peak. Jan 29 16:26:06.119149 systemd[1]: Started sshd@3-10.0.0.149:22-10.0.0.1:57568.service - OpenSSH per-connection server daemon (10.0.0.1:57568). Jan 29 16:26:06.157892 sshd[1632]: Accepted publickey for core from 10.0.0.1 port 57568 ssh2: RSA SHA256:cY969aNwVd9R5zop7YhxFiRwg6M+CFzjYSBWBeowLAQ Jan 29 16:26:06.159587 sshd-session[1632]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Jan 29 16:26:06.164289 systemd-logind[1494]: New session 4 of user core. Jan 29 16:26:06.177803 systemd[1]: Started session-4.scope - Session 4 of User core. Jan 29 16:26:06.230966 sshd[1634]: Connection closed by 10.0.0.1 port 57568 Jan 29 16:26:06.231324 sshd-session[1632]: pam_unix(sshd:session): session closed for user core Jan 29 16:26:06.245276 systemd[1]: sshd@3-10.0.0.149:22-10.0.0.1:57568.service: Deactivated successfully. Jan 29 16:26:06.246983 systemd[1]: session-4.scope: Deactivated successfully. Jan 29 16:26:06.248228 systemd-logind[1494]: Session 4 logged out. Waiting for processes to exit. 
Jan 29 16:26:06.249458 systemd[1]: Started sshd@4-10.0.0.149:22-10.0.0.1:57576.service - OpenSSH per-connection server daemon (10.0.0.1:57576). Jan 29 16:26:06.250176 systemd-logind[1494]: Removed session 4. Jan 29 16:26:06.288421 sshd[1639]: Accepted publickey for core from 10.0.0.1 port 57576 ssh2: RSA SHA256:cY969aNwVd9R5zop7YhxFiRwg6M+CFzjYSBWBeowLAQ Jan 29 16:26:06.289812 sshd-session[1639]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Jan 29 16:26:06.293666 systemd-logind[1494]: New session 5 of user core. Jan 29 16:26:06.309763 systemd[1]: Started session-5.scope - Session 5 of User core. Jan 29 16:26:06.357983 sshd[1642]: Connection closed by 10.0.0.1 port 57576 Jan 29 16:26:06.358354 sshd-session[1639]: pam_unix(sshd:session): session closed for user core Jan 29 16:26:06.366560 systemd[1]: sshd@4-10.0.0.149:22-10.0.0.1:57576.service: Deactivated successfully. Jan 29 16:26:06.368499 systemd[1]: session-5.scope: Deactivated successfully. Jan 29 16:26:06.369827 systemd-logind[1494]: Session 5 logged out. Waiting for processes to exit. Jan 29 16:26:06.371126 systemd[1]: Started sshd@5-10.0.0.149:22-10.0.0.1:57586.service - OpenSSH per-connection server daemon (10.0.0.1:57586). Jan 29 16:26:06.372035 systemd-logind[1494]: Removed session 5. Jan 29 16:26:06.426055 sshd[1647]: Accepted publickey for core from 10.0.0.1 port 57586 ssh2: RSA SHA256:cY969aNwVd9R5zop7YhxFiRwg6M+CFzjYSBWBeowLAQ Jan 29 16:26:06.427397 sshd-session[1647]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Jan 29 16:26:06.431452 systemd-logind[1494]: New session 6 of user core. Jan 29 16:26:06.441784 systemd[1]: Started session-6.scope - Session 6 of User core. 
Jan 29 16:26:06.494097 sshd[1650]: Connection closed by 10.0.0.1 port 57586 Jan 29 16:26:06.494401 sshd-session[1647]: pam_unix(sshd:session): session closed for user core Jan 29 16:26:06.515135 systemd[1]: sshd@5-10.0.0.149:22-10.0.0.1:57586.service: Deactivated successfully. Jan 29 16:26:06.516969 systemd[1]: session-6.scope: Deactivated successfully. Jan 29 16:26:06.518430 systemd-logind[1494]: Session 6 logged out. Waiting for processes to exit. Jan 29 16:26:06.527892 systemd[1]: Started sshd@6-10.0.0.149:22-10.0.0.1:57592.service - OpenSSH per-connection server daemon (10.0.0.1:57592). Jan 29 16:26:06.528868 systemd-logind[1494]: Removed session 6. Jan 29 16:26:06.561473 sshd[1655]: Accepted publickey for core from 10.0.0.1 port 57592 ssh2: RSA SHA256:cY969aNwVd9R5zop7YhxFiRwg6M+CFzjYSBWBeowLAQ Jan 29 16:26:06.562989 sshd-session[1655]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Jan 29 16:26:06.567124 systemd-logind[1494]: New session 7 of user core. Jan 29 16:26:06.582816 systemd[1]: Started session-7.scope - Session 7 of User core. Jan 29 16:26:06.640124 sudo[1659]: core : PWD=/home/core ; USER=root ; COMMAND=/usr/sbin/setenforce 1 Jan 29 16:26:06.640463 sudo[1659]: pam_unix(sudo:session): session opened for user root(uid=0) by core(uid=500) Jan 29 16:26:06.659616 sudo[1659]: pam_unix(sudo:session): session closed for user root Jan 29 16:26:06.661370 sshd[1658]: Connection closed by 10.0.0.1 port 57592 Jan 29 16:26:06.661808 sshd-session[1655]: pam_unix(sshd:session): session closed for user core Jan 29 16:26:06.677399 systemd[1]: sshd@6-10.0.0.149:22-10.0.0.1:57592.service: Deactivated successfully. Jan 29 16:26:06.679125 systemd[1]: session-7.scope: Deactivated successfully. Jan 29 16:26:06.680776 systemd-logind[1494]: Session 7 logged out. Waiting for processes to exit. Jan 29 16:26:06.682127 systemd[1]: Started sshd@7-10.0.0.149:22-10.0.0.1:57594.service - OpenSSH per-connection server daemon (10.0.0.1:57594). 
Jan 29 16:26:06.683045 systemd-logind[1494]: Removed session 7. Jan 29 16:26:06.720505 sshd[1664]: Accepted publickey for core from 10.0.0.1 port 57594 ssh2: RSA SHA256:cY969aNwVd9R5zop7YhxFiRwg6M+CFzjYSBWBeowLAQ Jan 29 16:26:06.722265 sshd-session[1664]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Jan 29 16:26:06.726383 systemd-logind[1494]: New session 8 of user core. Jan 29 16:26:06.739774 systemd[1]: Started session-8.scope - Session 8 of User core. Jan 29 16:26:06.794154 sudo[1669]: core : PWD=/home/core ; USER=root ; COMMAND=/usr/sbin/rm -rf /etc/audit/rules.d/80-selinux.rules /etc/audit/rules.d/99-default.rules Jan 29 16:26:06.794535 sudo[1669]: pam_unix(sudo:session): session opened for user root(uid=0) by core(uid=500) Jan 29 16:26:06.798555 sudo[1669]: pam_unix(sudo:session): session closed for user root Jan 29 16:26:06.805070 sudo[1668]: core : PWD=/home/core ; USER=root ; COMMAND=/usr/sbin/systemctl restart audit-rules Jan 29 16:26:06.805396 sudo[1668]: pam_unix(sudo:session): session opened for user root(uid=0) by core(uid=500) Jan 29 16:26:06.826951 systemd[1]: Starting audit-rules.service - Load Audit Rules... Jan 29 16:26:06.857413 augenrules[1691]: No rules Jan 29 16:26:06.858429 systemd[1]: audit-rules.service: Deactivated successfully. Jan 29 16:26:06.858761 systemd[1]: Finished audit-rules.service - Load Audit Rules. Jan 29 16:26:06.859964 sudo[1668]: pam_unix(sudo:session): session closed for user root Jan 29 16:26:06.861580 sshd[1667]: Connection closed by 10.0.0.1 port 57594 Jan 29 16:26:06.861965 sshd-session[1664]: pam_unix(sshd:session): session closed for user core Jan 29 16:26:06.875913 systemd[1]: sshd@7-10.0.0.149:22-10.0.0.1:57594.service: Deactivated successfully. Jan 29 16:26:06.877774 systemd[1]: session-8.scope: Deactivated successfully. Jan 29 16:26:06.878467 systemd-logind[1494]: Session 8 logged out. Waiting for processes to exit. 
Jan 29 16:26:06.887910 systemd[1]: Started sshd@8-10.0.0.149:22-10.0.0.1:57604.service - OpenSSH per-connection server daemon (10.0.0.1:57604). Jan 29 16:26:06.888840 systemd-logind[1494]: Removed session 8. Jan 29 16:26:06.923084 sshd[1699]: Accepted publickey for core from 10.0.0.1 port 57604 ssh2: RSA SHA256:cY969aNwVd9R5zop7YhxFiRwg6M+CFzjYSBWBeowLAQ Jan 29 16:26:06.924705 sshd-session[1699]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Jan 29 16:26:06.929106 systemd-logind[1494]: New session 9 of user core. Jan 29 16:26:06.938782 systemd[1]: Started session-9.scope - Session 9 of User core. Jan 29 16:26:06.990855 sudo[1703]: core : PWD=/home/core ; USER=root ; COMMAND=/home/core/install.sh Jan 29 16:26:06.991175 sudo[1703]: pam_unix(sudo:session): session opened for user root(uid=0) by core(uid=500) Jan 29 16:26:07.012942 systemd[1]: Starting coreos-metadata.service - QEMU metadata agent... Jan 29 16:26:07.032409 systemd[1]: coreos-metadata.service: Deactivated successfully. Jan 29 16:26:07.032730 systemd[1]: Finished coreos-metadata.service - QEMU metadata agent. Jan 29 16:26:07.481473 systemd[1]: Stopped kubelet.service - kubelet: The Kubernetes Node Agent. Jan 29 16:26:07.481632 systemd[1]: kubelet.service: Consumed 2.078s CPU time, 239.5M memory peak. Jan 29 16:26:07.492912 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent... Jan 29 16:26:07.521107 systemd[1]: Reload requested from client PID 1744 ('systemctl') (unit session-9.scope)... Jan 29 16:26:07.521124 systemd[1]: Reloading... Jan 29 16:26:07.621687 zram_generator::config[1791]: No configuration found. Jan 29 16:26:07.977012 systemd[1]: /usr/lib/systemd/system/docker.socket:6: ListenStream= references a path below legacy directory /var/run/, updating /var/run/docker.sock → /run/docker.sock; please update the unit file accordingly. Jan 29 16:26:08.079882 systemd[1]: Reloading finished in 558 ms. 
Jan 29 16:26:08.124188 systemd[1]: kubelet.service: Control process exited, code=killed, status=15/TERM Jan 29 16:26:08.124285 systemd[1]: kubelet.service: Failed with result 'signal'. Jan 29 16:26:08.124560 systemd[1]: Stopped kubelet.service - kubelet: The Kubernetes Node Agent. Jan 29 16:26:08.124600 systemd[1]: kubelet.service: Consumed 191ms CPU time, 83.5M memory peak. Jan 29 16:26:08.126963 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent... Jan 29 16:26:08.271376 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent. Jan 29 16:26:08.275449 (kubelet)[1836]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS Jan 29 16:26:08.319997 kubelet[1836]: Flag --container-runtime-endpoint has been deprecated, This parameter should be set via the config file specified by the Kubelet's --config flag. See https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/ for more information. Jan 29 16:26:08.319997 kubelet[1836]: Flag --pod-infra-container-image has been deprecated, will be removed in a future release. Image garbage collector will get sandbox image information from CRI. Jan 29 16:26:08.319997 kubelet[1836]: Flag --volume-plugin-dir has been deprecated, This parameter should be set via the config file specified by the Kubelet's --config flag. See https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/ for more information. 
Jan 29 16:26:08.320985 kubelet[1836]: I0129 16:26:08.320936 1836 server.go:206] "--pod-infra-container-image will not be pruned by the image garbage collector in kubelet and should also be set in the remote runtime" Jan 29 16:26:09.020505 kubelet[1836]: I0129 16:26:09.020457 1836 server.go:486] "Kubelet version" kubeletVersion="v1.31.0" Jan 29 16:26:09.020505 kubelet[1836]: I0129 16:26:09.020490 1836 server.go:488] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK="" Jan 29 16:26:09.020778 kubelet[1836]: I0129 16:26:09.020755 1836 server.go:929] "Client rotation is on, will bootstrap in background" Jan 29 16:26:09.043208 kubelet[1836]: I0129 16:26:09.043063 1836 dynamic_cafile_content.go:160] "Starting controller" name="client-ca-bundle::/etc/kubernetes/pki/ca.crt" Jan 29 16:26:09.049190 kubelet[1836]: E0129 16:26:09.049154 1836 log.go:32] "RuntimeConfig from runtime service failed" err="rpc error: code = Unimplemented desc = unknown method RuntimeConfig for service runtime.v1.RuntimeService" Jan 29 16:26:09.049190 kubelet[1836]: I0129 16:26:09.049180 1836 server.go:1403] "CRI implementation should be updated to support RuntimeConfig when KubeletCgroupDriverFromCRI feature gate has been enabled. Falling back to using cgroupDriver from kubelet config." Jan 29 16:26:09.055420 kubelet[1836]: I0129 16:26:09.055318 1836 server.go:744] "--cgroups-per-qos enabled, but --cgroup-root was not specified. 
defaulting to /" Jan 29 16:26:09.056698 kubelet[1836]: I0129 16:26:09.056680 1836 swap_util.go:113] "Swap is on" /proc/swaps contents="Filename\t\t\t\tType\t\tSize\t\tUsed\t\tPriority" Jan 29 16:26:09.057263 kubelet[1836]: I0129 16:26:09.056839 1836 container_manager_linux.go:264] "Container manager verified user specified cgroup-root exists" cgroupRoot=[] Jan 29 16:26:09.057263 kubelet[1836]: I0129 16:26:09.056862 1836 container_manager_linux.go:269] "Creating Container Manager object based on Node Config" nodeConfig={"NodeName":"10.0.0.149","RuntimeCgroupsName":"","SystemCgroupsName":"","KubeletCgroupsName":"","KubeletOOMScoreAdj":-999,"ContainerRuntime":"","CgroupsPerQOS":true,"CgroupRoot":"/","CgroupDriver":"systemd","KubeletRootDir":"/var/lib/kubelet","ProtectKernelDefaults":false,"KubeReservedCgroupName":"","SystemReservedCgroupName":"","ReservedSystemCPUs":{},"EnforceNodeAllocatable":{"pods":{}},"KubeReserved":null,"SystemReserved":null,"HardEvictionThresholds":[{"Signal":"nodefs.inodesFree","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.05},"GracePeriod":0,"MinReclaim":null},{"Signal":"imagefs.available","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.15},"GracePeriod":0,"MinReclaim":null},{"Signal":"imagefs.inodesFree","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.05},"GracePeriod":0,"MinReclaim":null},{"Signal":"memory.available","Operator":"LessThan","Value":{"Quantity":"100Mi","Percentage":0},"GracePeriod":0,"MinReclaim":null},{"Signal":"nodefs.available","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.1},"GracePeriod":0,"MinReclaim":null}],"QOSReserved":{},"CPUManagerPolicy":"none","CPUManagerPolicyOptions":null,"TopologyManagerScope":"container","CPUManagerReconcilePeriod":10000000000,"ExperimentalMemoryManagerPolicy":"None","ExperimentalMemoryManagerReservedMemory":null,"PodPidsLimit":-1,"EnforceCPULimits":true,"CPUCFSQuotaPeriod":100000000,"TopologyManagerPolicy":"none","TopologyManagerPolicy
Options":null,"CgroupVersion":2} Jan 29 16:26:09.057263 kubelet[1836]: I0129 16:26:09.057134 1836 topology_manager.go:138] "Creating topology manager with none policy" Jan 29 16:26:09.057263 kubelet[1836]: I0129 16:26:09.057144 1836 container_manager_linux.go:300] "Creating device plugin manager" Jan 29 16:26:09.057425 kubelet[1836]: I0129 16:26:09.057296 1836 state_mem.go:36] "Initialized new in-memory state store" Jan 29 16:26:09.058837 kubelet[1836]: I0129 16:26:09.058809 1836 kubelet.go:408] "Attempting to sync node with API server" Jan 29 16:26:09.058837 kubelet[1836]: I0129 16:26:09.058831 1836 kubelet.go:303] "Adding static pod path" path="/etc/kubernetes/manifests" Jan 29 16:26:09.058900 kubelet[1836]: I0129 16:26:09.058884 1836 kubelet.go:314] "Adding apiserver pod source" Jan 29 16:26:09.058925 kubelet[1836]: I0129 16:26:09.058916 1836 apiserver.go:42] "Waiting for node sync before watching apiserver pods" Jan 29 16:26:09.059034 kubelet[1836]: E0129 16:26:09.058944 1836 file.go:98] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Jan 29 16:26:09.059034 kubelet[1836]: E0129 16:26:09.059004 1836 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Jan 29 16:26:09.065128 kubelet[1836]: I0129 16:26:09.065107 1836 kuberuntime_manager.go:262] "Container runtime initialized" containerRuntime="containerd" version="v1.7.23" apiVersion="v1" Jan 29 16:26:09.066586 kubelet[1836]: I0129 16:26:09.066569 1836 kubelet.go:837] "Not starting ClusterTrustBundle informer because we are in static kubelet mode" Jan 29 16:26:09.067187 kubelet[1836]: W0129 16:26:09.067165 1836 probe.go:272] Flexvolume plugin directory at /opt/libexec/kubernetes/kubelet-plugins/volume/exec/ does not exist. Recreating. 
Jan 29 16:26:09.067873 kubelet[1836]: I0129 16:26:09.067857 1836 server.go:1269] "Started kubelet" Jan 29 16:26:09.069724 kubelet[1836]: I0129 16:26:09.067977 1836 ratelimit.go:55] "Setting rate limiting for endpoint" service="podresources" qps=100 burstTokens=10 Jan 29 16:26:09.069724 kubelet[1836]: I0129 16:26:09.068410 1836 server.go:236] "Starting to serve the podresources API" endpoint="unix:/var/lib/kubelet/pod-resources/kubelet.sock" Jan 29 16:26:09.069724 kubelet[1836]: I0129 16:26:09.068509 1836 server.go:163] "Starting to listen" address="0.0.0.0" port=10250 Jan 29 16:26:09.069724 kubelet[1836]: I0129 16:26:09.069354 1836 fs_resource_analyzer.go:67] "Starting FS ResourceAnalyzer" Jan 29 16:26:09.069724 kubelet[1836]: I0129 16:26:09.069500 1836 dynamic_serving_content.go:135] "Starting controller" name="kubelet-server-cert-files::/var/lib/kubelet/pki/kubelet.crt::/var/lib/kubelet/pki/kubelet.key" Jan 29 16:26:09.069724 kubelet[1836]: I0129 16:26:09.069563 1836 server.go:460] "Adding debug handlers to kubelet server" Jan 29 16:26:09.070631 kubelet[1836]: I0129 16:26:09.070599 1836 volume_manager.go:289] "Starting Kubelet Volume Manager" Jan 29 16:26:09.070792 kubelet[1836]: I0129 16:26:09.070763 1836 desired_state_of_world_populator.go:146] "Desired state populator starts to run" Jan 29 16:26:09.070883 kubelet[1836]: I0129 16:26:09.070847 1836 reconciler.go:26] "Reconciler: start to sync state" Jan 29 16:26:09.071143 kubelet[1836]: E0129 16:26:09.071121 1836 kubelet_node_status.go:453] "Error getting the current node from lister" err="node \"10.0.0.149\" not found" Jan 29 16:26:09.071786 kubelet[1836]: I0129 16:26:09.071759 1836 factory.go:221] Registration of the systemd container factory successfully Jan 29 16:26:09.072766 kubelet[1836]: I0129 16:26:09.071850 1836 factory.go:219] Registration of the crio container factory failed: Get "http://%2Fvar%2Frun%2Fcrio%2Fcrio.sock/info": dial unix /var/run/crio/crio.sock: connect: no such file or directory Jan 29 
16:26:09.073082 kubelet[1836]: E0129 16:26:09.073061 1836 kubelet.go:1478] "Image garbage collection failed once. Stats initialization may not have completed yet" err="invalid capacity 0 on image filesystem" Jan 29 16:26:09.073207 kubelet[1836]: I0129 16:26:09.073189 1836 factory.go:221] Registration of the containerd container factory successfully Jan 29 16:26:09.085682 kubelet[1836]: I0129 16:26:09.085518 1836 cpu_manager.go:214] "Starting CPU manager" policy="none" Jan 29 16:26:09.085682 kubelet[1836]: I0129 16:26:09.085531 1836 cpu_manager.go:215] "Reconciling" reconcilePeriod="10s" Jan 29 16:26:09.085682 kubelet[1836]: I0129 16:26:09.085549 1836 state_mem.go:36] "Initialized new in-memory state store" Jan 29 16:26:09.171776 kubelet[1836]: E0129 16:26:09.171730 1836 kubelet_node_status.go:453] "Error getting the current node from lister" err="node \"10.0.0.149\" not found" Jan 29 16:26:09.272269 kubelet[1836]: E0129 16:26:09.272157 1836 kubelet_node_status.go:453] "Error getting the current node from lister" err="node \"10.0.0.149\" not found" Jan 29 16:26:09.372844 kubelet[1836]: E0129 16:26:09.372803 1836 kubelet_node_status.go:453] "Error getting the current node from lister" err="node \"10.0.0.149\" not found" Jan 29 16:26:09.473601 kubelet[1836]: E0129 16:26:09.473500 1836 kubelet_node_status.go:453] "Error getting the current node from lister" err="node \"10.0.0.149\" not found" Jan 29 16:26:09.573823 kubelet[1836]: E0129 16:26:09.573786 1836 kubelet_node_status.go:453] "Error getting the current node from lister" err="node \"10.0.0.149\" not found" Jan 29 16:26:09.674174 kubelet[1836]: E0129 16:26:09.674148 1836 kubelet_node_status.go:453] "Error getting the current node from lister" err="node \"10.0.0.149\" not found" Jan 29 16:26:09.774542 kubelet[1836]: E0129 16:26:09.774519 1836 kubelet_node_status.go:453] "Error getting the current node from lister" err="node \"10.0.0.149\" not found" Jan 29 16:26:09.875033 kubelet[1836]: E0129 16:26:09.874881 1836 
kubelet_node_status.go:453] "Error getting the current node from lister" err="node \"10.0.0.149\" not found" Jan 29 16:26:09.975458 kubelet[1836]: E0129 16:26:09.975405 1836 kubelet_node_status.go:453] "Error getting the current node from lister" err="node \"10.0.0.149\" not found" Jan 29 16:26:10.060063 kubelet[1836]: E0129 16:26:10.059996 1836 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Jan 29 16:26:10.075687 kubelet[1836]: E0129 16:26:10.075652 1836 kubelet_node_status.go:453] "Error getting the current node from lister" err="node \"10.0.0.149\" not found" Jan 29 16:26:10.077785 kubelet[1836]: I0129 16:26:10.077743 1836 kubelet_network_linux.go:50] "Initialized iptables rules." protocol="IPv4" Jan 29 16:26:10.078991 kubelet[1836]: I0129 16:26:10.078968 1836 kubelet_network_linux.go:50] "Initialized iptables rules." protocol="IPv6" Jan 29 16:26:10.079050 kubelet[1836]: I0129 16:26:10.079033 1836 status_manager.go:217] "Starting to sync pod status with apiserver" Jan 29 16:26:10.079098 kubelet[1836]: I0129 16:26:10.079067 1836 kubelet.go:2321] "Starting kubelet main sync loop" Jan 29 16:26:10.079239 kubelet[1836]: E0129 16:26:10.079203 1836 kubelet.go:2345] "Skipping pod synchronization" err="[container runtime status check may not have completed yet, PLEG is not healthy: pleg has yet to be successful]" Jan 29 16:26:10.089099 kubelet[1836]: I0129 16:26:10.089055 1836 policy_none.go:49] "None policy: Start" Jan 29 16:26:10.089837 kubelet[1836]: I0129 16:26:10.089817 1836 memory_manager.go:170] "Starting memorymanager" policy="None" Jan 29 16:26:10.089879 kubelet[1836]: I0129 16:26:10.089854 1836 state_mem.go:35] "Initializing new in-memory state store" Jan 29 16:26:10.116333 systemd[1]: Created slice kubepods.slice - libcontainer container kubepods.slice. 
Jan 29 16:26:10.123163 kubelet[1836]: W0129 16:26:10.123130 1836 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.RuntimeClass: runtimeclasses.node.k8s.io is forbidden: User "system:anonymous" cannot list resource "runtimeclasses" in API group "node.k8s.io" at the cluster scope
Jan 29 16:26:10.123163 kubelet[1836]: W0129 16:26:10.123149 1836 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Service: services is forbidden: User "system:anonymous" cannot list resource "services" in API group "" at the cluster scope
Jan 29 16:26:10.123457 kubelet[1836]: E0129 16:26:10.123171 1836 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.RuntimeClass: failed to list *v1.RuntimeClass: runtimeclasses.node.k8s.io is forbidden: User \"system:anonymous\" cannot list resource \"runtimeclasses\" in API group \"node.k8s.io\" at the cluster scope" logger="UnhandledError"
Jan 29 16:26:10.123457 kubelet[1836]: E0129 16:26:10.123199 1836 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Service: failed to list *v1.Service: services is forbidden: User \"system:anonymous\" cannot list resource \"services\" in API group \"\" at the cluster scope" logger="UnhandledError"
Jan 29 16:26:10.123457 kubelet[1836]: W0129 16:26:10.123216 1836 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Node: nodes "10.0.0.149" is forbidden: User "system:anonymous" cannot list resource "nodes" in API group "" at the cluster scope
Jan 29 16:26:10.123457 kubelet[1836]: E0129 16:26:10.123227 1836 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Node: failed to list *v1.Node: nodes \"10.0.0.149\" is forbidden: User \"system:anonymous\" cannot list resource \"nodes\" in API group \"\" at the cluster scope" logger="UnhandledError"
Jan 29 16:26:10.123457 kubelet[1836]: E0129 16:26:10.123303 1836 controller.go:145] "Failed to ensure lease exists, will retry" err="leases.coordination.k8s.io \"10.0.0.149\" is forbidden: User \"system:anonymous\" cannot get resource \"leases\" in API group \"coordination.k8s.io\" in the namespace \"kube-node-lease\"" interval="200ms"
Jan 29 16:26:10.123457 kubelet[1836]: W0129 16:26:10.123337 1836 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.CSIDriver: csidrivers.storage.k8s.io is forbidden: User "system:anonymous" cannot list resource "csidrivers" in API group "storage.k8s.io" at the cluster scope
Jan 29 16:26:10.123655 kubelet[1836]: E0129 16:26:10.123359 1836 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.CSIDriver: failed to list *v1.CSIDriver: csidrivers.storage.k8s.io is forbidden: User \"system:anonymous\" cannot list resource \"csidrivers\" in API group \"storage.k8s.io\" at the cluster scope" logger="UnhandledError"
Jan 29 16:26:10.128518 systemd[1]: Created slice kubepods-burstable.slice - libcontainer container kubepods-burstable.slice.
Jan 29 16:26:10.132246 systemd[1]: Created slice kubepods-besteffort.slice - libcontainer container kubepods-besteffort.slice.
Jan 29 16:26:10.134757 kubelet[1836]: E0129 16:26:10.123126 1836 event.go:359] "Server rejected event (will not retry!)" err="events is forbidden: User \"system:anonymous\" cannot create resource \"events\" in API group \"\" in the namespace \"default\"" event="&Event{ObjectMeta:{10.0.0.149.181f368f04401b43 default 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Node,Namespace:,Name:10.0.0.149,UID:10.0.0.149,APIVersion:,ResourceVersion:,FieldPath:,},Reason:Starting,Message:Starting kubelet.,Source:EventSource{Component:kubelet,Host:10.0.0.149,},FirstTimestamp:2025-01-29 16:26:09.067834179 +0000 UTC m=+0.788541238,LastTimestamp:2025-01-29 16:26:09.067834179 +0000 UTC m=+0.788541238,Count:1,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:10.0.0.149,}"
Jan 29 16:26:10.137587 kubelet[1836]: E0129 16:26:10.137450 1836 event.go:359] "Server rejected event (will not retry!)" err="events is forbidden: User \"system:anonymous\" cannot create resource \"events\" in API group \"\" in the namespace \"default\"" event="&Event{ObjectMeta:{10.0.0.149.181f368f048fb8ec default 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Node,Namespace:,Name:10.0.0.149,UID:10.0.0.149,APIVersion:,ResourceVersion:,FieldPath:,},Reason:InvalidDiskCapacity,Message:invalid capacity 0 on image filesystem,Source:EventSource{Component:kubelet,Host:10.0.0.149,},FirstTimestamp:2025-01-29 16:26:09.073051884 +0000 UTC m=+0.793758943,LastTimestamp:2025-01-29 16:26:09.073051884 +0000 UTC m=+0.793758943,Count:1,Type:Warning,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:10.0.0.149,}"
Jan 29 16:26:10.141161 kubelet[1836]: E0129 16:26:10.141030 1836 event.go:359] "Server rejected event (will not retry!)" err="events is forbidden: User \"system:anonymous\" cannot create resource \"events\" in API group \"\" in the namespace \"default\"" event="&Event{ObjectMeta:{10.0.0.149.181f368f05471175 default 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Node,Namespace:,Name:10.0.0.149,UID:10.0.0.149,APIVersion:,ResourceVersion:,FieldPath:,},Reason:NodeHasSufficientMemory,Message:Node 10.0.0.149 status is now: NodeHasSufficientMemory,Source:EventSource{Component:kubelet,Host:10.0.0.149,},FirstTimestamp:2025-01-29 16:26:09.085067637 +0000 UTC m=+0.805774697,LastTimestamp:2025-01-29 16:26:09.085067637 +0000 UTC m=+0.805774697,Count:1,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:10.0.0.149,}"
Jan 29 16:26:10.144891 kubelet[1836]: E0129 16:26:10.144794 1836 event.go:359] "Server rejected event (will not retry!)" err="events is forbidden: User \"system:anonymous\" cannot create resource \"events\" in API group \"\" in the namespace \"default\"" event="&Event{ObjectMeta:{10.0.0.149.181f368f0547227c default 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Node,Namespace:,Name:10.0.0.149,UID:10.0.0.149,APIVersion:,ResourceVersion:,FieldPath:,},Reason:NodeHasNoDiskPressure,Message:Node 10.0.0.149 status is now: NodeHasNoDiskPressure,Source:EventSource{Component:kubelet,Host:10.0.0.149,},FirstTimestamp:2025-01-29 16:26:09.085071996 +0000 UTC m=+0.805779055,LastTimestamp:2025-01-29 16:26:09.085071996 +0000 UTC m=+0.805779055,Count:1,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:10.0.0.149,}"
Jan 29 16:26:10.146790 kubelet[1836]: I0129 16:26:10.146756 1836 manager.go:510] "Failed to read data from checkpoint" checkpoint="kubelet_internal_checkpoint" err="checkpoint is not found"
Jan 29 16:26:10.146996 kubelet[1836]: I0129 16:26:10.146976 1836 eviction_manager.go:189] "Eviction manager: starting control loop"
Jan 29 16:26:10.147057 kubelet[1836]: I0129 16:26:10.146993 1836 container_log_manager.go:189] "Initializing container log rotate workers" workers=1 monitorPeriod="10s"
Jan 29 16:26:10.147424 kubelet[1836]: I0129 16:26:10.147368 1836 plugin_manager.go:118] "Starting Kubelet Plugin Manager"
Jan 29 16:26:10.148662 kubelet[1836]: E0129 16:26:10.148599 1836 eviction_manager.go:285] "Eviction manager: failed to get summary stats" err="failed to get node info: node \"10.0.0.149\" not found"
Jan 29 16:26:10.149770 kubelet[1836]: E0129 16:26:10.149666 1836 event.go:359] "Server rejected event (will not retry!)" err="events is forbidden: User \"system:anonymous\" cannot create resource \"events\" in API group \"\" in the namespace \"default\"" event="&Event{ObjectMeta:{10.0.0.149.181f368f05472c76 default 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Node,Namespace:,Name:10.0.0.149,UID:10.0.0.149,APIVersion:,ResourceVersion:,FieldPath:,},Reason:NodeHasSufficientPID,Message:Node 10.0.0.149 status is now: NodeHasSufficientPID,Source:EventSource{Component:kubelet,Host:10.0.0.149,},FirstTimestamp:2025-01-29 16:26:09.08507455 +0000 UTC m=+0.805781610,LastTimestamp:2025-01-29 16:26:09.08507455 +0000 UTC m=+0.805781610,Count:1,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:10.0.0.149,}"
Jan 29 16:26:10.153561 kubelet[1836]: E0129 16:26:10.153496 1836 event.go:359] "Server rejected event (will not retry!)" err="events is forbidden: User \"system:anonymous\" cannot create resource \"events\" in API group \"\" in the namespace \"default\"" event="&Event{ObjectMeta:{10.0.0.149.181f368f44b209da default 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Node,Namespace:,Name:10.0.0.149,UID:10.0.0.149,APIVersion:,ResourceVersion:,FieldPath:,},Reason:NodeAllocatableEnforced,Message:Updated Node Allocatable limit across pods,Source:EventSource{Component:kubelet,Host:10.0.0.149,},FirstTimestamp:2025-01-29 16:26:10.14904265 +0000 UTC m=+1.869749740,LastTimestamp:2025-01-29 16:26:10.14904265 +0000 UTC m=+1.869749740,Count:1,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:10.0.0.149,}"
Jan 29 16:26:10.248272 kubelet[1836]: I0129 16:26:10.248235 1836 kubelet_node_status.go:72] "Attempting to register node" node="10.0.0.149"
Jan 29 16:26:10.252803 kubelet[1836]: E0129 16:26:10.252736 1836 kubelet_node_status.go:95] "Unable to register node with API server" err="nodes is forbidden: User \"system:anonymous\" cannot create resource \"nodes\" in API group \"\" at the cluster scope" node="10.0.0.149"
Jan 29 16:26:10.328341 kubelet[1836]: E0129 16:26:10.328299 1836 nodelease.go:49] "Failed to get node when trying to set owner ref to the node lease" err="nodes \"10.0.0.149\" not found" node="10.0.0.149"
Jan 29 16:26:10.453807 kubelet[1836]: I0129 16:26:10.453632 1836 kubelet_node_status.go:72] "Attempting to register node" node="10.0.0.149"
Jan 29 16:26:10.500710 kubelet[1836]: I0129 16:26:10.500628 1836 kubelet_node_status.go:75] "Successfully registered node" node="10.0.0.149"
Jan 29 16:26:10.500710 kubelet[1836]: E0129 16:26:10.500705 1836 kubelet_node_status.go:535] "Error updating node status, will retry" err="error getting node \"10.0.0.149\": node \"10.0.0.149\" not found"
Jan 29 16:26:10.528940 kubelet[1836]: E0129 16:26:10.528895 1836 kubelet_node_status.go:453] "Error getting the current node from lister" err="node \"10.0.0.149\" not found"
Jan 29 16:26:10.629969 kubelet[1836]: E0129 16:26:10.629904 1836 kubelet_node_status.go:453] "Error getting the current node from lister" err="node \"10.0.0.149\" not found"
Jan 29 16:26:10.730819 kubelet[1836]: E0129 16:26:10.730618 1836 kubelet_node_status.go:453] "Error getting the current node from lister" err="node \"10.0.0.149\" not found"
Jan 29 16:26:10.813096 sudo[1703]: pam_unix(sudo:session): session closed for user root
Jan 29 16:26:10.814555 sshd[1702]: Connection closed by 10.0.0.1 port 57604
Jan 29 16:26:10.814975 sshd-session[1699]: pam_unix(sshd:session): session closed for user core
Jan 29 16:26:10.818574 systemd[1]: sshd@8-10.0.0.149:22-10.0.0.1:57604.service: Deactivated successfully.
Jan 29 16:26:10.820665 systemd[1]: session-9.scope: Deactivated successfully.
Jan 29 16:26:10.820873 systemd[1]: session-9.scope: Consumed 489ms CPU time, 74.5M memory peak.
Jan 29 16:26:10.822130 systemd-logind[1494]: Session 9 logged out. Waiting for processes to exit.
Jan 29 16:26:10.823074 systemd-logind[1494]: Removed session 9.
Jan 29 16:26:10.831305 kubelet[1836]: E0129 16:26:10.831270 1836 kubelet_node_status.go:453] "Error getting the current node from lister" err="node \"10.0.0.149\" not found"
Jan 29 16:26:10.931526 kubelet[1836]: E0129 16:26:10.931440 1836 kubelet_node_status.go:453] "Error getting the current node from lister" err="node \"10.0.0.149\" not found"
Jan 29 16:26:11.024392 kubelet[1836]: I0129 16:26:11.024225 1836 transport.go:147] "Certificate rotation detected, shutting down client connections to start using new credentials"
Jan 29 16:26:11.024522 kubelet[1836]: W0129 16:26:11.024508 1836 reflector.go:484] k8s.io/client-go/informers/factory.go:160: watch of *v1.RuntimeClass ended with: very short watch: k8s.io/client-go/informers/factory.go:160: Unexpected watch close - watch lasted less than a second and no items received
Jan 29 16:26:11.032560 kubelet[1836]: E0129 16:26:11.032460 1836 kubelet_node_status.go:453] "Error getting the current node from lister" err="node \"10.0.0.149\" not found"
Jan 29 16:26:11.060338 kubelet[1836]: E0129 16:26:11.060256 1836 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests"
Jan 29 16:26:11.133427 kubelet[1836]: E0129 16:26:11.133325 1836 kubelet_node_status.go:453] "Error getting the current node from lister" err="node \"10.0.0.149\" not found"
Jan 29 16:26:11.234522 kubelet[1836]: E0129 16:26:11.234429 1836 kubelet_node_status.go:453] "Error getting the current node from lister" err="node \"10.0.0.149\" not found"
Jan 29 16:26:11.335810 kubelet[1836]: E0129 16:26:11.335622 1836 kubelet_node_status.go:453] "Error getting the current node from lister" err="node \"10.0.0.149\" not found"
Jan 29 16:26:11.438330 kubelet[1836]: I0129 16:26:11.437977 1836 kuberuntime_manager.go:1633] "Updating runtime config through cri with podcidr" CIDR="192.168.1.0/24"
Jan 29 16:26:11.440654 containerd[1511]: time="2025-01-29T16:26:11.440533877Z" level=info msg="No cni config template is specified, wait for other system components to drop the config."
Jan 29 16:26:11.447742 kubelet[1836]: I0129 16:26:11.444880 1836 kubelet_network.go:61] "Updating Pod CIDR" originalPodCIDR="" newPodCIDR="192.168.1.0/24"
Jan 29 16:26:12.060603 kubelet[1836]: I0129 16:26:12.060241 1836 apiserver.go:52] "Watching apiserver"
Jan 29 16:26:12.060603 kubelet[1836]: E0129 16:26:12.060335 1836 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests"
Jan 29 16:26:12.066695 kubelet[1836]: E0129 16:26:12.066188 1836 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" pod="calico-system/csi-node-driver-4pfpp" podUID="ebfca53b-4c8c-4d66-9ac2-1e0da5e0f553"
Jan 29 16:26:12.072153 kubelet[1836]: I0129 16:26:12.072040 1836 desired_state_of_world_populator.go:154] "Finished populating initial desired state of world"
Jan 29 16:26:12.080102 systemd[1]: Created slice kubepods-besteffort-pod0c0fdee8_3cb7_42b5_96d5_625570e3c10f.slice - libcontainer container kubepods-besteffort-pod0c0fdee8_3cb7_42b5_96d5_625570e3c10f.slice.
Jan 29 16:26:12.090296 kubelet[1836]: I0129 16:26:12.089953 1836 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/0c0fdee8-3cb7-42b5-96d5-625570e3c10f-lib-modules\") pod \"calico-node-9sc8p\" (UID: \"0c0fdee8-3cb7-42b5-96d5-625570e3c10f\") " pod="calico-system/calico-node-9sc8p"
Jan 29 16:26:12.090296 kubelet[1836]: I0129 16:26:12.090010 1836 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"flexvol-driver-host\" (UniqueName: \"kubernetes.io/host-path/0c0fdee8-3cb7-42b5-96d5-625570e3c10f-flexvol-driver-host\") pod \"calico-node-9sc8p\" (UID: \"0c0fdee8-3cb7-42b5-96d5-625570e3c10f\") " pod="calico-system/calico-node-9sc8p"
Jan 29 16:26:12.090296 kubelet[1836]: I0129 16:26:12.090035 1836 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"xtables-lock\" (UniqueName: \"kubernetes.io/host-path/09f72663-ad8a-4c18-8818-d26fd29043ee-xtables-lock\") pod \"kube-proxy-99wwp\" (UID: \"09f72663-ad8a-4c18-8818-d26fd29043ee\") " pod="kube-system/kube-proxy-99wwp"
Jan 29 16:26:12.090296 kubelet[1836]: I0129 16:26:12.090054 1836 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"var-run-calico\" (UniqueName: \"kubernetes.io/host-path/0c0fdee8-3cb7-42b5-96d5-625570e3c10f-var-run-calico\") pod \"calico-node-9sc8p\" (UID: \"0c0fdee8-3cb7-42b5-96d5-625570e3c10f\") " pod="calico-system/calico-node-9sc8p"
Jan 29 16:26:12.090296 kubelet[1836]: I0129 16:26:12.090074 1836 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-gtsl4\" (UniqueName: \"kubernetes.io/projected/0c0fdee8-3cb7-42b5-96d5-625570e3c10f-kube-api-access-gtsl4\") pod \"calico-node-9sc8p\" (UID: \"0c0fdee8-3cb7-42b5-96d5-625570e3c10f\") " pod="calico-system/calico-node-9sc8p"
Jan 29 16:26:12.090632 kubelet[1836]: I0129 16:26:12.090094 1836 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"policysync\" (UniqueName: \"kubernetes.io/host-path/0c0fdee8-3cb7-42b5-96d5-625570e3c10f-policysync\") pod \"calico-node-9sc8p\" (UID: \"0c0fdee8-3cb7-42b5-96d5-625570e3c10f\") " pod="calico-system/calico-node-9sc8p"
Jan 29 16:26:12.090632 kubelet[1836]: I0129 16:26:12.090113 1836 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"tigera-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/0c0fdee8-3cb7-42b5-96d5-625570e3c10f-tigera-ca-bundle\") pod \"calico-node-9sc8p\" (UID: \"0c0fdee8-3cb7-42b5-96d5-625570e3c10f\") " pod="calico-system/calico-node-9sc8p"
Jan 29 16:26:12.090632 kubelet[1836]: I0129 16:26:12.090133 1836 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"var-lib-calico\" (UniqueName: \"kubernetes.io/host-path/0c0fdee8-3cb7-42b5-96d5-625570e3c10f-var-lib-calico\") pod \"calico-node-9sc8p\" (UID: \"0c0fdee8-3cb7-42b5-96d5-625570e3c10f\") " pod="calico-system/calico-node-9sc8p"
Jan 29 16:26:12.090632 kubelet[1836]: I0129 16:26:12.090153 1836 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cni-bin-dir\" (UniqueName: \"kubernetes.io/host-path/0c0fdee8-3cb7-42b5-96d5-625570e3c10f-cni-bin-dir\") pod \"calico-node-9sc8p\" (UID: \"0c0fdee8-3cb7-42b5-96d5-625570e3c10f\") " pod="calico-system/calico-node-9sc8p"
Jan 29 16:26:12.090632 kubelet[1836]: I0129 16:26:12.090172 1836 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-g2k64\" (UniqueName: \"kubernetes.io/projected/ebfca53b-4c8c-4d66-9ac2-1e0da5e0f553-kube-api-access-g2k64\") pod \"csi-node-driver-4pfpp\" (UID: \"ebfca53b-4c8c-4d66-9ac2-1e0da5e0f553\") " pod="calico-system/csi-node-driver-4pfpp"
Jan 29 16:26:12.090827 kubelet[1836]: I0129 16:26:12.090191 1836 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/09f72663-ad8a-4c18-8818-d26fd29043ee-lib-modules\") pod \"kube-proxy-99wwp\" (UID: \"09f72663-ad8a-4c18-8818-d26fd29043ee\") " pod="kube-system/kube-proxy-99wwp"
Jan 29 16:26:12.090827 kubelet[1836]: I0129 16:26:12.090214 1836 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"xtables-lock\" (UniqueName: \"kubernetes.io/host-path/0c0fdee8-3cb7-42b5-96d5-625570e3c10f-xtables-lock\") pod \"calico-node-9sc8p\" (UID: \"0c0fdee8-3cb7-42b5-96d5-625570e3c10f\") " pod="calico-system/calico-node-9sc8p"
Jan 29 16:26:12.090827 kubelet[1836]: I0129 16:26:12.090239 1836 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"node-certs\" (UniqueName: \"kubernetes.io/secret/0c0fdee8-3cb7-42b5-96d5-625570e3c10f-node-certs\") pod \"calico-node-9sc8p\" (UID: \"0c0fdee8-3cb7-42b5-96d5-625570e3c10f\") " pod="calico-system/calico-node-9sc8p"
Jan 29 16:26:12.090827 kubelet[1836]: I0129 16:26:12.090260 1836 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cni-net-dir\" (UniqueName: \"kubernetes.io/host-path/0c0fdee8-3cb7-42b5-96d5-625570e3c10f-cni-net-dir\") pod \"calico-node-9sc8p\" (UID: \"0c0fdee8-3cb7-42b5-96d5-625570e3c10f\") " pod="calico-system/calico-node-9sc8p"
Jan 29 16:26:12.090827 kubelet[1836]: I0129 16:26:12.090279 1836 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cni-log-dir\" (UniqueName: \"kubernetes.io/host-path/0c0fdee8-3cb7-42b5-96d5-625570e3c10f-cni-log-dir\") pod \"calico-node-9sc8p\" (UID: \"0c0fdee8-3cb7-42b5-96d5-625570e3c10f\") " pod="calico-system/calico-node-9sc8p"
Jan 29 16:26:12.090983 kubelet[1836]: I0129 16:26:12.090299 1836 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"varrun\" (UniqueName: \"kubernetes.io/host-path/ebfca53b-4c8c-4d66-9ac2-1e0da5e0f553-varrun\") pod \"csi-node-driver-4pfpp\" (UID: \"ebfca53b-4c8c-4d66-9ac2-1e0da5e0f553\") " pod="calico-system/csi-node-driver-4pfpp"
Jan 29 16:26:12.090983 kubelet[1836]: I0129 16:26:12.090457 1836 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kubelet-dir\" (UniqueName: \"kubernetes.io/host-path/ebfca53b-4c8c-4d66-9ac2-1e0da5e0f553-kubelet-dir\") pod \"csi-node-driver-4pfpp\" (UID: \"ebfca53b-4c8c-4d66-9ac2-1e0da5e0f553\") " pod="calico-system/csi-node-driver-4pfpp"
Jan 29 16:26:12.090983 kubelet[1836]: I0129 16:26:12.090479 1836 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"socket-dir\" (UniqueName: \"kubernetes.io/host-path/ebfca53b-4c8c-4d66-9ac2-1e0da5e0f553-socket-dir\") pod \"csi-node-driver-4pfpp\" (UID: \"ebfca53b-4c8c-4d66-9ac2-1e0da5e0f553\") " pod="calico-system/csi-node-driver-4pfpp"
Jan 29 16:26:12.090983 kubelet[1836]: I0129 16:26:12.090501 1836 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"registration-dir\" (UniqueName: \"kubernetes.io/host-path/ebfca53b-4c8c-4d66-9ac2-1e0da5e0f553-registration-dir\") pod \"csi-node-driver-4pfpp\" (UID: \"ebfca53b-4c8c-4d66-9ac2-1e0da5e0f553\") " pod="calico-system/csi-node-driver-4pfpp"
Jan 29 16:26:12.090983 kubelet[1836]: I0129 16:26:12.090522 1836 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-proxy\" (UniqueName: \"kubernetes.io/configmap/09f72663-ad8a-4c18-8818-d26fd29043ee-kube-proxy\") pod \"kube-proxy-99wwp\" (UID: \"09f72663-ad8a-4c18-8818-d26fd29043ee\") " pod="kube-system/kube-proxy-99wwp"
Jan 29 16:26:12.091138 kubelet[1836]: I0129 16:26:12.090570 1836 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-xsgsc\" (UniqueName: \"kubernetes.io/projected/09f72663-ad8a-4c18-8818-d26fd29043ee-kube-api-access-xsgsc\") pod \"kube-proxy-99wwp\" (UID: \"09f72663-ad8a-4c18-8818-d26fd29043ee\") " pod="kube-system/kube-proxy-99wwp"
Jan 29 16:26:12.098378 systemd[1]: Created slice kubepods-besteffort-pod09f72663_ad8a_4c18_8818_d26fd29043ee.slice - libcontainer container kubepods-besteffort-pod09f72663_ad8a_4c18_8818_d26fd29043ee.slice.
Jan 29 16:26:12.194782 kubelet[1836]: E0129 16:26:12.194700 1836 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input
Jan 29 16:26:12.194782 kubelet[1836]: W0129 16:26:12.194736 1836 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: ""
Jan 29 16:26:12.194782 kubelet[1836]: E0129 16:26:12.194781 1836 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input"
Jan 29 16:26:12.196830 kubelet[1836]: E0129 16:26:12.196805 1836 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input
Jan 29 16:26:12.196998 kubelet[1836]: W0129 16:26:12.196915 1836 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: ""
Jan 29 16:26:12.196998 kubelet[1836]: E0129 16:26:12.196950 1836 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input"
Jan 29 16:26:12.208506 kubelet[1836]: E0129 16:26:12.208411 1836 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input
Jan 29 16:26:12.208506 kubelet[1836]: W0129 16:26:12.208432 1836 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: ""
Jan 29 16:26:12.208506 kubelet[1836]: E0129 16:26:12.208459 1836 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input"
Jan 29 16:26:12.221957 kubelet[1836]: E0129 16:26:12.221919 1836 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input
Jan 29 16:26:12.221957 kubelet[1836]: W0129 16:26:12.221946 1836 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: ""
Jan 29 16:26:12.222284 kubelet[1836]: E0129 16:26:12.221980 1836 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input"
Jan 29 16:26:12.222284 kubelet[1836]: E0129 16:26:12.222229 1836 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input
Jan 29 16:26:12.222284 kubelet[1836]: W0129 16:26:12.222240 1836 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: ""
Jan 29 16:26:12.222284 kubelet[1836]: E0129 16:26:12.222251 1836 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input"
Jan 29 16:26:12.395120 kubelet[1836]: E0129 16:26:12.395033 1836 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Jan 29 16:26:12.396142 containerd[1511]: time="2025-01-29T16:26:12.396068175Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-node-9sc8p,Uid:0c0fdee8-3cb7-42b5-96d5-625570e3c10f,Namespace:calico-system,Attempt:0,}"
Jan 29 16:26:12.403487 kubelet[1836]: E0129 16:26:12.403432 1836 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Jan 29 16:26:12.404292 containerd[1511]: time="2025-01-29T16:26:12.404213130Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-proxy-99wwp,Uid:09f72663-ad8a-4c18-8818-d26fd29043ee,Namespace:kube-system,Attempt:0,}"
Jan 29 16:26:13.061335 kubelet[1836]: E0129 16:26:13.061018 1836 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests"
Jan 29 16:26:13.359332 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount1043923786.mount: Deactivated successfully.
Jan 29 16:26:13.388829 containerd[1511]: time="2025-01-29T16:26:13.388681369Z" level=info msg="ImageCreate event name:\"registry.k8s.io/pause:3.8\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"} labels:{key:\"io.cri-containerd.pinned\" value:\"pinned\"}"
Jan 29 16:26:13.390978 containerd[1511]: time="2025-01-29T16:26:13.390898107Z" level=info msg="ImageUpdate event name:\"registry.k8s.io/pause:3.8\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"} labels:{key:\"io.cri-containerd.pinned\" value:\"pinned\"}"
Jan 29 16:26:13.394304 containerd[1511]: time="2025-01-29T16:26:13.394163029Z" level=info msg="stop pulling image registry.k8s.io/pause:3.8: active requests=0, bytes read=312056"
Jan 29 16:26:13.396510 containerd[1511]: time="2025-01-29T16:26:13.396420493Z" level=info msg="stop pulling image registry.k8s.io/pause:3.8: active requests=0, bytes read=0"
Jan 29 16:26:13.400322 containerd[1511]: time="2025-01-29T16:26:13.400208617Z" level=info msg="ImageCreate event name:\"sha256:4873874c08efc72e9729683a83ffbb7502ee729e9a5ac097723806ea7fa13517\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"} labels:{key:\"io.cri-containerd.pinned\" value:\"pinned\"}"
Jan 29 16:26:13.407854 containerd[1511]: time="2025-01-29T16:26:13.406664985Z" level=info msg="ImageCreate event name:\"registry.k8s.io/pause@sha256:9001185023633d17a2f98ff69b6ff2615b8ea02a825adffa40422f51dfdcde9d\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"} labels:{key:\"io.cri-containerd.pinned\" value:\"pinned\"}"
Jan 29 16:26:13.409496 containerd[1511]: time="2025-01-29T16:26:13.409433196Z" level=info msg="Pulled image \"registry.k8s.io/pause:3.8\" with image id \"sha256:4873874c08efc72e9729683a83ffbb7502ee729e9a5ac097723806ea7fa13517\", repo tag \"registry.k8s.io/pause:3.8\", repo digest \"registry.k8s.io/pause@sha256:9001185023633d17a2f98ff69b6ff2615b8ea02a825adffa40422f51dfdcde9d\", size \"311286\" in 1.005047884s"
Jan 29 16:26:13.410658 containerd[1511]: time="2025-01-29T16:26:13.410590126Z" level=info msg="Pulled image \"registry.k8s.io/pause:3.8\" with image id \"sha256:4873874c08efc72e9729683a83ffbb7502ee729e9a5ac097723806ea7fa13517\", repo tag \"registry.k8s.io/pause:3.8\", repo digest \"registry.k8s.io/pause@sha256:9001185023633d17a2f98ff69b6ff2615b8ea02a825adffa40422f51dfdcde9d\", size \"311286\" in 1.014350319s"
Jan 29 16:26:13.749970 containerd[1511]: time="2025-01-29T16:26:13.747407940Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1
Jan 29 16:26:13.749970 containerd[1511]: time="2025-01-29T16:26:13.749790939Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1
Jan 29 16:26:13.749970 containerd[1511]: time="2025-01-29T16:26:13.749807390Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
Jan 29 16:26:13.750312 containerd[1511]: time="2025-01-29T16:26:13.750220985Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
Jan 29 16:26:13.753682 containerd[1511]: time="2025-01-29T16:26:13.753570747Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1
Jan 29 16:26:13.753682 containerd[1511]: time="2025-01-29T16:26:13.753624779Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1
Jan 29 16:26:13.755998 containerd[1511]: time="2025-01-29T16:26:13.753660145Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
Jan 29 16:26:13.756078 containerd[1511]: time="2025-01-29T16:26:13.755877884Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
Jan 29 16:26:13.888806 systemd[1]: Started cri-containerd-2222abaf5a67530e1401aaf3eba7ee1e906c3d3426357c208f4776af86b27b3b.scope - libcontainer container 2222abaf5a67530e1401aaf3eba7ee1e906c3d3426357c208f4776af86b27b3b.
Jan 29 16:26:13.891982 systemd[1]: Started cri-containerd-7efdcbe398c005a6bc28bc3736a3150fc4b978e7093f17d398a67e3d8f7f67fe.scope - libcontainer container 7efdcbe398c005a6bc28bc3736a3150fc4b978e7093f17d398a67e3d8f7f67fe.
Jan 29 16:26:13.920356 containerd[1511]: time="2025-01-29T16:26:13.920305484Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-node-9sc8p,Uid:0c0fdee8-3cb7-42b5-96d5-625570e3c10f,Namespace:calico-system,Attempt:0,} returns sandbox id \"2222abaf5a67530e1401aaf3eba7ee1e906c3d3426357c208f4776af86b27b3b\""
Jan 29 16:26:13.921302 kubelet[1836]: E0129 16:26:13.921272 1836 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Jan 29 16:26:13.923204 containerd[1511]: time="2025-01-29T16:26:13.923161731Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/pod2daemon-flexvol:v3.29.1\""
Jan 29 16:26:13.929257 containerd[1511]: time="2025-01-29T16:26:13.929163005Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-proxy-99wwp,Uid:09f72663-ad8a-4c18-8818-d26fd29043ee,Namespace:kube-system,Attempt:0,} returns sandbox id \"7efdcbe398c005a6bc28bc3736a3150fc4b978e7093f17d398a67e3d8f7f67fe\""
Jan 29 16:26:13.929982 kubelet[1836]: E0129 16:26:13.929962 1836 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Jan 29 16:26:14.061804 kubelet[1836]: E0129 16:26:14.061666 1836 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests"
Jan 29 16:26:14.079699 kubelet[1836]: E0129 16:26:14.079633 1836 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" pod="calico-system/csi-node-driver-4pfpp" podUID="ebfca53b-4c8c-4d66-9ac2-1e0da5e0f553"
Jan 29 16:26:15.061897 kubelet[1836]: E0129 16:26:15.061833 1836 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests"
Jan 29 16:26:15.843320 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount1374018544.mount: Deactivated successfully.
Jan 29 16:26:16.062233 kubelet[1836]: E0129 16:26:16.062175 1836 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests"
Jan 29 16:26:16.079970 kubelet[1836]: E0129 16:26:16.079881 1836 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" pod="calico-system/csi-node-driver-4pfpp" podUID="ebfca53b-4c8c-4d66-9ac2-1e0da5e0f553"
Jan 29 16:26:16.357959 containerd[1511]: time="2025-01-29T16:26:16.357900034Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/pod2daemon-flexvol:v3.29.1\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Jan 29 16:26:16.358690 containerd[1511]: time="2025-01-29T16:26:16.358621427Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/pod2daemon-flexvol:v3.29.1: active requests=0, bytes read=6855343"
Jan 29 16:26:16.359704 containerd[1511]: time="2025-01-29T16:26:16.359672098Z" level=info msg="ImageCreate event name:\"sha256:2b7452b763ec8833ca0386ada5fd066e552a9b3b02b8538a5e34cc3d6d3840a6\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Jan 29 16:26:16.361888 containerd[1511]: time="2025-01-29T16:26:16.361853809Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/pod2daemon-flexvol@sha256:a63f8b4ff531912d12d143664eb263fdbc6cd7b3ff4aa777dfb6e318a090462c\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Jan 29 16:26:16.362628 containerd[1511]: time="2025-01-29T16:26:16.362594959Z" level=info msg="Pulled image \"ghcr.io/flatcar/calico/pod2daemon-flexvol:v3.29.1\" with image id \"sha256:2b7452b763ec8833ca0386ada5fd066e552a9b3b02b8538a5e34cc3d6d3840a6\", repo tag \"ghcr.io/flatcar/calico/pod2daemon-flexvol:v3.29.1\", repo digest \"ghcr.io/flatcar/calico/pod2daemon-flexvol@sha256:a63f8b4ff531912d12d143664eb263fdbc6cd7b3ff4aa777dfb6e318a090462c\", size \"6855165\" in 2.43939701s"
Jan 29 16:26:16.362675 containerd[1511]: time="2025-01-29T16:26:16.362628542Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/pod2daemon-flexvol:v3.29.1\" returns image reference \"sha256:2b7452b763ec8833ca0386ada5fd066e552a9b3b02b8538a5e34cc3d6d3840a6\""
Jan 29 16:26:16.363764 containerd[1511]: time="2025-01-29T16:26:16.363742160Z" level=info msg="PullImage \"registry.k8s.io/kube-proxy:v1.31.5\""
Jan 29 16:26:16.364860 containerd[1511]: time="2025-01-29T16:26:16.364836763Z" level=info msg="CreateContainer within sandbox \"2222abaf5a67530e1401aaf3eba7ee1e906c3d3426357c208f4776af86b27b3b\" for container &ContainerMetadata{Name:flexvol-driver,Attempt:0,}"
Jan 29 16:26:16.384001 containerd[1511]: time="2025-01-29T16:26:16.383946180Z" level=info msg="CreateContainer within sandbox \"2222abaf5a67530e1401aaf3eba7ee1e906c3d3426357c208f4776af86b27b3b\" for &ContainerMetadata{Name:flexvol-driver,Attempt:0,} returns container id \"6dda46a66ff4db69f1067b6faa0b94a241fe5dfd92d2fe325e458cedd955456b\""
Jan 29 16:26:16.384709 containerd[1511]: time="2025-01-29T16:26:16.384682220Z" level=info msg="StartContainer for \"6dda46a66ff4db69f1067b6faa0b94a241fe5dfd92d2fe325e458cedd955456b\""
Jan 29 16:26:16.428803 systemd[1]: Started
cri-containerd-6dda46a66ff4db69f1067b6faa0b94a241fe5dfd92d2fe325e458cedd955456b.scope - libcontainer container 6dda46a66ff4db69f1067b6faa0b94a241fe5dfd92d2fe325e458cedd955456b. Jan 29 16:26:16.504821 containerd[1511]: time="2025-01-29T16:26:16.504770298Z" level=info msg="StartContainer for \"6dda46a66ff4db69f1067b6faa0b94a241fe5dfd92d2fe325e458cedd955456b\" returns successfully" Jan 29 16:26:16.626501 systemd[1]: cri-containerd-6dda46a66ff4db69f1067b6faa0b94a241fe5dfd92d2fe325e458cedd955456b.scope: Deactivated successfully. Jan 29 16:26:16.734661 containerd[1511]: time="2025-01-29T16:26:16.734557011Z" level=info msg="shim disconnected" id=6dda46a66ff4db69f1067b6faa0b94a241fe5dfd92d2fe325e458cedd955456b namespace=k8s.io Jan 29 16:26:16.734875 containerd[1511]: time="2025-01-29T16:26:16.734639054Z" level=warning msg="cleaning up after shim disconnected" id=6dda46a66ff4db69f1067b6faa0b94a241fe5dfd92d2fe325e458cedd955456b namespace=k8s.io Jan 29 16:26:16.734875 containerd[1511]: time="2025-01-29T16:26:16.734687135Z" level=info msg="cleaning up dead shim" namespace=k8s.io Jan 29 16:26:16.823407 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-6dda46a66ff4db69f1067b6faa0b94a241fe5dfd92d2fe325e458cedd955456b-rootfs.mount: Deactivated successfully. Jan 29 16:26:17.062993 kubelet[1836]: E0129 16:26:17.062835 1836 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Jan 29 16:26:17.317065 kubelet[1836]: E0129 16:26:17.317041 1836 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jan 29 16:26:17.991701 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount4002857883.mount: Deactivated successfully. 
Jan 29 16:26:18.063573 kubelet[1836]: E0129 16:26:18.063548 1836 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Jan 29 16:26:18.080258 kubelet[1836]: E0129 16:26:18.080221 1836 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" pod="calico-system/csi-node-driver-4pfpp" podUID="ebfca53b-4c8c-4d66-9ac2-1e0da5e0f553" Jan 29 16:26:18.744978 containerd[1511]: time="2025-01-29T16:26:18.744906544Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-proxy:v1.31.5\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jan 29 16:26:18.745690 containerd[1511]: time="2025-01-29T16:26:18.745632816Z" level=info msg="stop pulling image registry.k8s.io/kube-proxy:v1.31.5: active requests=0, bytes read=30231128" Jan 29 16:26:18.746896 containerd[1511]: time="2025-01-29T16:26:18.746860859Z" level=info msg="ImageCreate event name:\"sha256:34018aef09a62f8b40bdd1d2e1bf6c48f359cab492d51059a09e20745ab02ce2\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jan 29 16:26:18.748998 containerd[1511]: time="2025-01-29T16:26:18.748958663Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-proxy@sha256:c00685cc45c1fb539c5bbd8d24d2577f96e9399efac1670f688f654b30f8c64c\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jan 29 16:26:18.749664 containerd[1511]: time="2025-01-29T16:26:18.749611487Z" level=info msg="Pulled image \"registry.k8s.io/kube-proxy:v1.31.5\" with image id \"sha256:34018aef09a62f8b40bdd1d2e1bf6c48f359cab492d51059a09e20745ab02ce2\", repo tag \"registry.k8s.io/kube-proxy:v1.31.5\", repo digest \"registry.k8s.io/kube-proxy@sha256:c00685cc45c1fb539c5bbd8d24d2577f96e9399efac1670f688f654b30f8c64c\", size \"30230147\" in 2.385841094s" Jan 29 16:26:18.749664 containerd[1511]: 
time="2025-01-29T16:26:18.749658124Z" level=info msg="PullImage \"registry.k8s.io/kube-proxy:v1.31.5\" returns image reference \"sha256:34018aef09a62f8b40bdd1d2e1bf6c48f359cab492d51059a09e20745ab02ce2\"" Jan 29 16:26:18.750945 containerd[1511]: time="2025-01-29T16:26:18.750918859Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/cni:v3.29.1\"" Jan 29 16:26:18.751824 containerd[1511]: time="2025-01-29T16:26:18.751789091Z" level=info msg="CreateContainer within sandbox \"7efdcbe398c005a6bc28bc3736a3150fc4b978e7093f17d398a67e3d8f7f67fe\" for container &ContainerMetadata{Name:kube-proxy,Attempt:0,}" Jan 29 16:26:18.770256 containerd[1511]: time="2025-01-29T16:26:18.770204166Z" level=info msg="CreateContainer within sandbox \"7efdcbe398c005a6bc28bc3736a3150fc4b978e7093f17d398a67e3d8f7f67fe\" for &ContainerMetadata{Name:kube-proxy,Attempt:0,} returns container id \"9d7e86f68e2be7368d3e4a94c22f9c8a764c550b3bd69f5ba646a84aaf13352e\"" Jan 29 16:26:18.770926 containerd[1511]: time="2025-01-29T16:26:18.770758836Z" level=info msg="StartContainer for \"9d7e86f68e2be7368d3e4a94c22f9c8a764c550b3bd69f5ba646a84aaf13352e\"" Jan 29 16:26:18.810811 systemd[1]: Started cri-containerd-9d7e86f68e2be7368d3e4a94c22f9c8a764c550b3bd69f5ba646a84aaf13352e.scope - libcontainer container 9d7e86f68e2be7368d3e4a94c22f9c8a764c550b3bd69f5ba646a84aaf13352e. 
Jan 29 16:26:18.911872 containerd[1511]: time="2025-01-29T16:26:18.911813343Z" level=info msg="StartContainer for \"9d7e86f68e2be7368d3e4a94c22f9c8a764c550b3bd69f5ba646a84aaf13352e\" returns successfully" Jan 29 16:26:19.064669 kubelet[1836]: E0129 16:26:19.064617 1836 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Jan 29 16:26:19.321133 kubelet[1836]: E0129 16:26:19.321022 1836 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jan 29 16:26:19.330201 kubelet[1836]: I0129 16:26:19.330121 1836 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/kube-proxy-99wwp" podStartSLOduration=4.510142609 podStartE2EDuration="9.330094159s" podCreationTimestamp="2025-01-29 16:26:10 +0000 UTC" firstStartedPulling="2025-01-29 16:26:13.930449137 +0000 UTC m=+5.651156196" lastFinishedPulling="2025-01-29 16:26:18.750400687 +0000 UTC m=+10.471107746" observedRunningTime="2025-01-29 16:26:19.329858868 +0000 UTC m=+11.050565927" watchObservedRunningTime="2025-01-29 16:26:19.330094159 +0000 UTC m=+11.050801218" Jan 29 16:26:20.065094 kubelet[1836]: E0129 16:26:20.065008 1836 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Jan 29 16:26:20.079944 kubelet[1836]: E0129 16:26:20.079892 1836 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" pod="calico-system/csi-node-driver-4pfpp" podUID="ebfca53b-4c8c-4d66-9ac2-1e0da5e0f553" Jan 29 16:26:20.325055 kubelet[1836]: E0129 16:26:20.324623 1836 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line 
is: 1.1.1.1 1.0.0.1 8.8.8.8" Jan 29 16:26:21.065299 kubelet[1836]: E0129 16:26:21.065265 1836 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Jan 29 16:26:22.065546 kubelet[1836]: E0129 16:26:22.065497 1836 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Jan 29 16:26:22.080341 kubelet[1836]: E0129 16:26:22.080250 1836 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" pod="calico-system/csi-node-driver-4pfpp" podUID="ebfca53b-4c8c-4d66-9ac2-1e0da5e0f553" Jan 29 16:26:23.065871 kubelet[1836]: E0129 16:26:23.065830 1836 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Jan 29 16:26:23.590783 containerd[1511]: time="2025-01-29T16:26:23.590738498Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/cni:v3.29.1\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jan 29 16:26:23.591559 containerd[1511]: time="2025-01-29T16:26:23.591511378Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/cni:v3.29.1: active requests=0, bytes read=96154154" Jan 29 16:26:23.593063 containerd[1511]: time="2025-01-29T16:26:23.593024365Z" level=info msg="ImageCreate event name:\"sha256:7dd6ea186aba0d7a1791a79d426fe854527ca95192b26bbd19e8baf8373f7d0e\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jan 29 16:26:23.595262 containerd[1511]: time="2025-01-29T16:26:23.595211366Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/cni@sha256:21e759d51c90dfb34fc1397dc180dd3a3fb564c2b0580d2f61ffe108f2a3c94b\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jan 29 16:26:23.595920 containerd[1511]: time="2025-01-29T16:26:23.595891552Z" level=info msg="Pulled 
image \"ghcr.io/flatcar/calico/cni:v3.29.1\" with image id \"sha256:7dd6ea186aba0d7a1791a79d426fe854527ca95192b26bbd19e8baf8373f7d0e\", repo tag \"ghcr.io/flatcar/calico/cni:v3.29.1\", repo digest \"ghcr.io/flatcar/calico/cni@sha256:21e759d51c90dfb34fc1397dc180dd3a3fb564c2b0580d2f61ffe108f2a3c94b\", size \"97647238\" in 4.844940834s" Jan 29 16:26:23.595954 containerd[1511]: time="2025-01-29T16:26:23.595919625Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/cni:v3.29.1\" returns image reference \"sha256:7dd6ea186aba0d7a1791a79d426fe854527ca95192b26bbd19e8baf8373f7d0e\"" Jan 29 16:26:23.597996 containerd[1511]: time="2025-01-29T16:26:23.597958729Z" level=info msg="CreateContainer within sandbox \"2222abaf5a67530e1401aaf3eba7ee1e906c3d3426357c208f4776af86b27b3b\" for container &ContainerMetadata{Name:install-cni,Attempt:0,}" Jan 29 16:26:23.615715 containerd[1511]: time="2025-01-29T16:26:23.615681766Z" level=info msg="CreateContainer within sandbox \"2222abaf5a67530e1401aaf3eba7ee1e906c3d3426357c208f4776af86b27b3b\" for &ContainerMetadata{Name:install-cni,Attempt:0,} returns container id \"37969a4d4f12ffd8810d6bebff8e17d7e18ad64da05563c95cf8ea89e542c3e5\"" Jan 29 16:26:23.616340 containerd[1511]: time="2025-01-29T16:26:23.616101282Z" level=info msg="StartContainer for \"37969a4d4f12ffd8810d6bebff8e17d7e18ad64da05563c95cf8ea89e542c3e5\"" Jan 29 16:26:23.656824 systemd[1]: Started cri-containerd-37969a4d4f12ffd8810d6bebff8e17d7e18ad64da05563c95cf8ea89e542c3e5.scope - libcontainer container 37969a4d4f12ffd8810d6bebff8e17d7e18ad64da05563c95cf8ea89e542c3e5. 
Jan 29 16:26:23.761139 containerd[1511]: time="2025-01-29T16:26:23.761084508Z" level=info msg="StartContainer for \"37969a4d4f12ffd8810d6bebff8e17d7e18ad64da05563c95cf8ea89e542c3e5\" returns successfully" Jan 29 16:26:24.066098 kubelet[1836]: E0129 16:26:24.066050 1836 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Jan 29 16:26:24.079622 kubelet[1836]: E0129 16:26:24.079580 1836 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" pod="calico-system/csi-node-driver-4pfpp" podUID="ebfca53b-4c8c-4d66-9ac2-1e0da5e0f553" Jan 29 16:26:24.356326 kubelet[1836]: E0129 16:26:24.355930 1836 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jan 29 16:26:25.092779 kubelet[1836]: E0129 16:26:25.092686 1836 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Jan 29 16:26:25.294466 systemd[1]: cri-containerd-37969a4d4f12ffd8810d6bebff8e17d7e18ad64da05563c95cf8ea89e542c3e5.scope: Deactivated successfully. Jan 29 16:26:25.294795 systemd[1]: cri-containerd-37969a4d4f12ffd8810d6bebff8e17d7e18ad64da05563c95cf8ea89e542c3e5.scope: Consumed 1.167s CPU time, 169.2M memory peak, 151M written to disk. Jan 29 16:26:25.296663 kubelet[1836]: I0129 16:26:25.296615 1836 kubelet_node_status.go:488] "Fast updating node status as it just became ready" Jan 29 16:26:25.315765 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-37969a4d4f12ffd8810d6bebff8e17d7e18ad64da05563c95cf8ea89e542c3e5-rootfs.mount: Deactivated successfully. 
Jan 29 16:26:25.357670 kubelet[1836]: E0129 16:26:25.357274 1836 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jan 29 16:26:25.874842 containerd[1511]: time="2025-01-29T16:26:25.874750977Z" level=info msg="shim disconnected" id=37969a4d4f12ffd8810d6bebff8e17d7e18ad64da05563c95cf8ea89e542c3e5 namespace=k8s.io Jan 29 16:26:25.874842 containerd[1511]: time="2025-01-29T16:26:25.874828773Z" level=warning msg="cleaning up after shim disconnected" id=37969a4d4f12ffd8810d6bebff8e17d7e18ad64da05563c95cf8ea89e542c3e5 namespace=k8s.io Jan 29 16:26:25.874842 containerd[1511]: time="2025-01-29T16:26:25.874842639Z" level=info msg="cleaning up dead shim" namespace=k8s.io Jan 29 16:26:26.084902 systemd[1]: Created slice kubepods-besteffort-podebfca53b_4c8c_4d66_9ac2_1e0da5e0f553.slice - libcontainer container kubepods-besteffort-podebfca53b_4c8c_4d66_9ac2_1e0da5e0f553.slice. Jan 29 16:26:26.086873 containerd[1511]: time="2025-01-29T16:26:26.086831433Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:csi-node-driver-4pfpp,Uid:ebfca53b-4c8c-4d66-9ac2-1e0da5e0f553,Namespace:calico-system,Attempt:0,}" Jan 29 16:26:26.093020 kubelet[1836]: E0129 16:26:26.092984 1836 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Jan 29 16:26:26.155804 containerd[1511]: time="2025-01-29T16:26:26.155668287Z" level=error msg="Failed to destroy network for sandbox \"6c04ed8c505d3c2699018369350b5a60291c70a0e570bba0d517856b46be2103\"" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jan 29 16:26:26.156439 containerd[1511]: time="2025-01-29T16:26:26.156096440Z" level=error msg="encountered an error cleaning up failed sandbox 
\"6c04ed8c505d3c2699018369350b5a60291c70a0e570bba0d517856b46be2103\", marking sandbox state as SANDBOX_UNKNOWN" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jan 29 16:26:26.156439 containerd[1511]: time="2025-01-29T16:26:26.156166081Z" level=error msg="RunPodSandbox for &PodSandboxMetadata{Name:csi-node-driver-4pfpp,Uid:ebfca53b-4c8c-4d66-9ac2-1e0da5e0f553,Namespace:calico-system,Attempt:0,} failed, error" error="failed to setup network for sandbox \"6c04ed8c505d3c2699018369350b5a60291c70a0e570bba0d517856b46be2103\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jan 29 16:26:26.156515 kubelet[1836]: E0129 16:26:26.156445 1836 log.go:32] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"6c04ed8c505d3c2699018369350b5a60291c70a0e570bba0d517856b46be2103\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jan 29 16:26:26.156552 kubelet[1836]: E0129 16:26:26.156523 1836 kuberuntime_sandbox.go:72] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"6c04ed8c505d3c2699018369350b5a60291c70a0e570bba0d517856b46be2103\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-system/csi-node-driver-4pfpp" Jan 29 16:26:26.156552 kubelet[1836]: E0129 16:26:26.156546 1836 kuberuntime_manager.go:1168] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox 
\"6c04ed8c505d3c2699018369350b5a60291c70a0e570bba0d517856b46be2103\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-system/csi-node-driver-4pfpp" Jan 29 16:26:26.156624 kubelet[1836]: E0129 16:26:26.156595 1836 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"csi-node-driver-4pfpp_calico-system(ebfca53b-4c8c-4d66-9ac2-1e0da5e0f553)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"csi-node-driver-4pfpp_calico-system(ebfca53b-4c8c-4d66-9ac2-1e0da5e0f553)\\\": rpc error: code = Unknown desc = failed to setup network for sandbox \\\"6c04ed8c505d3c2699018369350b5a60291c70a0e570bba0d517856b46be2103\\\": plugin type=\\\"calico\\\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="calico-system/csi-node-driver-4pfpp" podUID="ebfca53b-4c8c-4d66-9ac2-1e0da5e0f553" Jan 29 16:26:26.157982 systemd[1]: run-containerd-io.containerd.grpc.v1.cri-sandboxes-6c04ed8c505d3c2699018369350b5a60291c70a0e570bba0d517856b46be2103-shm.mount: Deactivated successfully. 
Jan 29 16:26:26.360066 kubelet[1836]: I0129 16:26:26.360031 1836 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="6c04ed8c505d3c2699018369350b5a60291c70a0e570bba0d517856b46be2103" Jan 29 16:26:26.360783 containerd[1511]: time="2025-01-29T16:26:26.360738337Z" level=info msg="StopPodSandbox for \"6c04ed8c505d3c2699018369350b5a60291c70a0e570bba0d517856b46be2103\"" Jan 29 16:26:26.361017 containerd[1511]: time="2025-01-29T16:26:26.360980120Z" level=info msg="Ensure that sandbox 6c04ed8c505d3c2699018369350b5a60291c70a0e570bba0d517856b46be2103 in task-service has been cleanup successfully" Jan 29 16:26:26.362933 systemd[1]: run-netns-cni\x2d43f8cb01\x2d93de\x2d3f7e\x2d6104\x2ddd2bc90940d7.mount: Deactivated successfully. Jan 29 16:26:26.363536 kubelet[1836]: E0129 16:26:26.363486 1836 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jan 29 16:26:26.363744 containerd[1511]: time="2025-01-29T16:26:26.363719377Z" level=info msg="TearDown network for sandbox \"6c04ed8c505d3c2699018369350b5a60291c70a0e570bba0d517856b46be2103\" successfully" Jan 29 16:26:26.363744 containerd[1511]: time="2025-01-29T16:26:26.363742200Z" level=info msg="StopPodSandbox for \"6c04ed8c505d3c2699018369350b5a60291c70a0e570bba0d517856b46be2103\" returns successfully" Jan 29 16:26:26.364358 containerd[1511]: time="2025-01-29T16:26:26.364203565Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:csi-node-driver-4pfpp,Uid:ebfca53b-4c8c-4d66-9ac2-1e0da5e0f553,Namespace:calico-system,Attempt:1,}" Jan 29 16:26:26.364358 containerd[1511]: time="2025-01-29T16:26:26.364287843Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/node:v3.29.1\"" Jan 29 16:26:26.428961 containerd[1511]: time="2025-01-29T16:26:26.428812280Z" level=error msg="Failed to destroy network for sandbox \"afa262d39c5c6f6043a47d50e1d4bd36db26b4baad18c34639f502235890860f\"" 
error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jan 29 16:26:26.429417 containerd[1511]: time="2025-01-29T16:26:26.429206118Z" level=error msg="encountered an error cleaning up failed sandbox \"afa262d39c5c6f6043a47d50e1d4bd36db26b4baad18c34639f502235890860f\", marking sandbox state as SANDBOX_UNKNOWN" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jan 29 16:26:26.429417 containerd[1511]: time="2025-01-29T16:26:26.429268655Z" level=error msg="RunPodSandbox for &PodSandboxMetadata{Name:csi-node-driver-4pfpp,Uid:ebfca53b-4c8c-4d66-9ac2-1e0da5e0f553,Namespace:calico-system,Attempt:1,} failed, error" error="failed to setup network for sandbox \"afa262d39c5c6f6043a47d50e1d4bd36db26b4baad18c34639f502235890860f\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jan 29 16:26:26.429625 kubelet[1836]: E0129 16:26:26.429490 1836 log.go:32] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"afa262d39c5c6f6043a47d50e1d4bd36db26b4baad18c34639f502235890860f\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jan 29 16:26:26.429625 kubelet[1836]: E0129 16:26:26.429550 1836 kuberuntime_sandbox.go:72] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"afa262d39c5c6f6043a47d50e1d4bd36db26b4baad18c34639f502235890860f\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the 
calico/node container is running and has mounted /var/lib/calico/" pod="calico-system/csi-node-driver-4pfpp" Jan 29 16:26:26.429625 kubelet[1836]: E0129 16:26:26.429573 1836 kuberuntime_manager.go:1168] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"afa262d39c5c6f6043a47d50e1d4bd36db26b4baad18c34639f502235890860f\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-system/csi-node-driver-4pfpp" Jan 29 16:26:26.429781 kubelet[1836]: E0129 16:26:26.429610 1836 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"csi-node-driver-4pfpp_calico-system(ebfca53b-4c8c-4d66-9ac2-1e0da5e0f553)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"csi-node-driver-4pfpp_calico-system(ebfca53b-4c8c-4d66-9ac2-1e0da5e0f553)\\\": rpc error: code = Unknown desc = failed to setup network for sandbox \\\"afa262d39c5c6f6043a47d50e1d4bd36db26b4baad18c34639f502235890860f\\\": plugin type=\\\"calico\\\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="calico-system/csi-node-driver-4pfpp" podUID="ebfca53b-4c8c-4d66-9ac2-1e0da5e0f553" Jan 29 16:26:26.431266 systemd[1]: run-containerd-io.containerd.grpc.v1.cri-sandboxes-afa262d39c5c6f6043a47d50e1d4bd36db26b4baad18c34639f502235890860f-shm.mount: Deactivated successfully. Jan 29 16:26:27.089028 systemd[1]: Created slice kubepods-besteffort-pod90a3e9fd_1da6_44cf_aa2e_ea467e1b0e35.slice - libcontainer container kubepods-besteffort-pod90a3e9fd_1da6_44cf_aa2e_ea467e1b0e35.slice. 
Jan 29 16:26:27.093259 kubelet[1836]: E0129 16:26:27.093235 1836 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Jan 29 16:26:27.202905 kubelet[1836]: I0129 16:26:27.202853 1836 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-rw8qr\" (UniqueName: \"kubernetes.io/projected/90a3e9fd-1da6-44cf-aa2e-ea467e1b0e35-kube-api-access-rw8qr\") pod \"nginx-deployment-8587fbcb89-w488j\" (UID: \"90a3e9fd-1da6-44cf-aa2e-ea467e1b0e35\") " pod="default/nginx-deployment-8587fbcb89-w488j" Jan 29 16:26:27.366285 kubelet[1836]: I0129 16:26:27.366169 1836 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="afa262d39c5c6f6043a47d50e1d4bd36db26b4baad18c34639f502235890860f" Jan 29 16:26:27.366821 containerd[1511]: time="2025-01-29T16:26:27.366766175Z" level=info msg="StopPodSandbox for \"afa262d39c5c6f6043a47d50e1d4bd36db26b4baad18c34639f502235890860f\"" Jan 29 16:26:27.367299 containerd[1511]: time="2025-01-29T16:26:27.367035944Z" level=info msg="Ensure that sandbox afa262d39c5c6f6043a47d50e1d4bd36db26b4baad18c34639f502235890860f in task-service has been cleanup successfully" Jan 29 16:26:27.367299 containerd[1511]: time="2025-01-29T16:26:27.367237693Z" level=info msg="TearDown network for sandbox \"afa262d39c5c6f6043a47d50e1d4bd36db26b4baad18c34639f502235890860f\" successfully" Jan 29 16:26:27.367299 containerd[1511]: time="2025-01-29T16:26:27.367252652Z" level=info msg="StopPodSandbox for \"afa262d39c5c6f6043a47d50e1d4bd36db26b4baad18c34639f502235890860f\" returns successfully" Jan 29 16:26:27.367528 containerd[1511]: time="2025-01-29T16:26:27.367506389Z" level=info msg="StopPodSandbox for \"6c04ed8c505d3c2699018369350b5a60291c70a0e570bba0d517856b46be2103\"" Jan 29 16:26:27.367596 containerd[1511]: time="2025-01-29T16:26:27.367582366Z" level=info msg="TearDown network for sandbox 
\"6c04ed8c505d3c2699018369350b5a60291c70a0e570bba0d517856b46be2103\" successfully" Jan 29 16:26:27.367596 containerd[1511]: time="2025-01-29T16:26:27.367593827Z" level=info msg="StopPodSandbox for \"6c04ed8c505d3c2699018369350b5a60291c70a0e570bba0d517856b46be2103\" returns successfully" Jan 29 16:26:27.367934 containerd[1511]: time="2025-01-29T16:26:27.367912982Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:csi-node-driver-4pfpp,Uid:ebfca53b-4c8c-4d66-9ac2-1e0da5e0f553,Namespace:calico-system,Attempt:2,}" Jan 29 16:26:27.368937 systemd[1]: run-netns-cni\x2d276fd054\x2d3bfa\x2d3f4a\x2de3d2\x2dd2a2822cb455.mount: Deactivated successfully. Jan 29 16:26:27.392709 containerd[1511]: time="2025-01-29T16:26:27.392660810Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:nginx-deployment-8587fbcb89-w488j,Uid:90a3e9fd-1da6-44cf-aa2e-ea467e1b0e35,Namespace:default,Attempt:0,}" Jan 29 16:26:27.446478 containerd[1511]: time="2025-01-29T16:26:27.446427365Z" level=error msg="Failed to destroy network for sandbox \"cffaf574b277ef4135603b2b501e688b2c83bfacc0fe535cc819f8712de32824\"" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jan 29 16:26:27.569192 containerd[1511]: time="2025-01-29T16:26:27.569112011Z" level=error msg="encountered an error cleaning up failed sandbox \"cffaf574b277ef4135603b2b501e688b2c83bfacc0fe535cc819f8712de32824\", marking sandbox state as SANDBOX_UNKNOWN" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jan 29 16:26:27.569358 containerd[1511]: time="2025-01-29T16:26:27.569228014Z" level=error msg="RunPodSandbox for &PodSandboxMetadata{Name:csi-node-driver-4pfpp,Uid:ebfca53b-4c8c-4d66-9ac2-1e0da5e0f553,Namespace:calico-system,Attempt:2,} failed, error" 
error="failed to setup network for sandbox \"cffaf574b277ef4135603b2b501e688b2c83bfacc0fe535cc819f8712de32824\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jan 29 16:26:27.569577 kubelet[1836]: E0129 16:26:27.569531 1836 log.go:32] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"cffaf574b277ef4135603b2b501e688b2c83bfacc0fe535cc819f8712de32824\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jan 29 16:26:27.569667 kubelet[1836]: E0129 16:26:27.569607 1836 kuberuntime_sandbox.go:72] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"cffaf574b277ef4135603b2b501e688b2c83bfacc0fe535cc819f8712de32824\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-system/csi-node-driver-4pfpp" Jan 29 16:26:27.569667 kubelet[1836]: E0129 16:26:27.569635 1836 kuberuntime_manager.go:1168] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"cffaf574b277ef4135603b2b501e688b2c83bfacc0fe535cc819f8712de32824\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-system/csi-node-driver-4pfpp" Jan 29 16:26:27.569818 kubelet[1836]: E0129 16:26:27.569778 1836 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"csi-node-driver-4pfpp_calico-system(ebfca53b-4c8c-4d66-9ac2-1e0da5e0f553)\" with CreatePodSandboxError: \"Failed to create 
sandbox for pod \\\"csi-node-driver-4pfpp_calico-system(ebfca53b-4c8c-4d66-9ac2-1e0da5e0f553)\\\": rpc error: code = Unknown desc = failed to setup network for sandbox \\\"cffaf574b277ef4135603b2b501e688b2c83bfacc0fe535cc819f8712de32824\\\": plugin type=\\\"calico\\\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="calico-system/csi-node-driver-4pfpp" podUID="ebfca53b-4c8c-4d66-9ac2-1e0da5e0f553" Jan 29 16:26:27.592392 containerd[1511]: time="2025-01-29T16:26:27.592323053Z" level=error msg="Failed to destroy network for sandbox \"8053056e315849889aa194b8a66cdbc30602f9f7d4e7b291ce8ae981ba0f2769\"" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jan 29 16:26:27.592980 containerd[1511]: time="2025-01-29T16:26:27.592920393Z" level=error msg="encountered an error cleaning up failed sandbox \"8053056e315849889aa194b8a66cdbc30602f9f7d4e7b291ce8ae981ba0f2769\", marking sandbox state as SANDBOX_UNKNOWN" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jan 29 16:26:27.593060 containerd[1511]: time="2025-01-29T16:26:27.593021677Z" level=error msg="RunPodSandbox for &PodSandboxMetadata{Name:nginx-deployment-8587fbcb89-w488j,Uid:90a3e9fd-1da6-44cf-aa2e-ea467e1b0e35,Namespace:default,Attempt:0,} failed, error" error="failed to setup network for sandbox \"8053056e315849889aa194b8a66cdbc30602f9f7d4e7b291ce8ae981ba0f2769\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jan 29 16:26:27.593338 kubelet[1836]: E0129 16:26:27.593300 1836 log.go:32] "RunPodSandbox from runtime 
service failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"8053056e315849889aa194b8a66cdbc30602f9f7d4e7b291ce8ae981ba0f2769\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jan 29 16:26:27.593401 kubelet[1836]: E0129 16:26:27.593371 1836 kuberuntime_sandbox.go:72] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"8053056e315849889aa194b8a66cdbc30602f9f7d4e7b291ce8ae981ba0f2769\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="default/nginx-deployment-8587fbcb89-w488j" Jan 29 16:26:27.593425 kubelet[1836]: E0129 16:26:27.593395 1836 kuberuntime_manager.go:1168] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"8053056e315849889aa194b8a66cdbc30602f9f7d4e7b291ce8ae981ba0f2769\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="default/nginx-deployment-8587fbcb89-w488j" Jan 29 16:26:27.593483 kubelet[1836]: E0129 16:26:27.593455 1836 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"nginx-deployment-8587fbcb89-w488j_default(90a3e9fd-1da6-44cf-aa2e-ea467e1b0e35)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"nginx-deployment-8587fbcb89-w488j_default(90a3e9fd-1da6-44cf-aa2e-ea467e1b0e35)\\\": rpc error: code = Unknown desc = failed to setup network for sandbox \\\"8053056e315849889aa194b8a66cdbc30602f9f7d4e7b291ce8ae981ba0f2769\\\": plugin type=\\\"calico\\\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node 
container is running and has mounted /var/lib/calico/\"" pod="default/nginx-deployment-8587fbcb89-w488j" podUID="90a3e9fd-1da6-44cf-aa2e-ea467e1b0e35" Jan 29 16:26:28.094414 kubelet[1836]: E0129 16:26:28.094358 1836 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Jan 29 16:26:28.370263 kubelet[1836]: I0129 16:26:28.369331 1836 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="cffaf574b277ef4135603b2b501e688b2c83bfacc0fe535cc819f8712de32824" Jan 29 16:26:28.369516 systemd[1]: run-containerd-io.containerd.grpc.v1.cri-sandboxes-8053056e315849889aa194b8a66cdbc30602f9f7d4e7b291ce8ae981ba0f2769-shm.mount: Deactivated successfully. Jan 29 16:26:28.370669 kubelet[1836]: I0129 16:26:28.370356 1836 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="8053056e315849889aa194b8a66cdbc30602f9f7d4e7b291ce8ae981ba0f2769" Jan 29 16:26:28.369697 systemd[1]: run-containerd-io.containerd.grpc.v1.cri-sandboxes-cffaf574b277ef4135603b2b501e688b2c83bfacc0fe535cc819f8712de32824-shm.mount: Deactivated successfully. 
Jan 29 16:26:28.370774 containerd[1511]: time="2025-01-29T16:26:28.370738557Z" level=info msg="StopPodSandbox for \"cffaf574b277ef4135603b2b501e688b2c83bfacc0fe535cc819f8712de32824\"" Jan 29 16:26:28.371097 containerd[1511]: time="2025-01-29T16:26:28.370938050Z" level=info msg="Ensure that sandbox cffaf574b277ef4135603b2b501e688b2c83bfacc0fe535cc819f8712de32824 in task-service has been cleanup successfully" Jan 29 16:26:28.371258 containerd[1511]: time="2025-01-29T16:26:28.371151250Z" level=info msg="StopPodSandbox for \"8053056e315849889aa194b8a66cdbc30602f9f7d4e7b291ce8ae981ba0f2769\"" Jan 29 16:26:28.371305 containerd[1511]: time="2025-01-29T16:26:28.371285438Z" level=info msg="Ensure that sandbox 8053056e315849889aa194b8a66cdbc30602f9f7d4e7b291ce8ae981ba0f2769 in task-service has been cleanup successfully" Jan 29 16:26:28.371718 containerd[1511]: time="2025-01-29T16:26:28.371585665Z" level=info msg="TearDown network for sandbox \"cffaf574b277ef4135603b2b501e688b2c83bfacc0fe535cc819f8712de32824\" successfully" Jan 29 16:26:28.371718 containerd[1511]: time="2025-01-29T16:26:28.371621403Z" level=info msg="StopPodSandbox for \"cffaf574b277ef4135603b2b501e688b2c83bfacc0fe535cc819f8712de32824\" returns successfully" Jan 29 16:26:28.371916 containerd[1511]: time="2025-01-29T16:26:28.371867997Z" level=info msg="TearDown network for sandbox \"8053056e315849889aa194b8a66cdbc30602f9f7d4e7b291ce8ae981ba0f2769\" successfully" Jan 29 16:26:28.371916 containerd[1511]: time="2025-01-29T16:26:28.371887525Z" level=info msg="StopPodSandbox for \"8053056e315849889aa194b8a66cdbc30602f9f7d4e7b291ce8ae981ba0f2769\" returns successfully" Jan 29 16:26:28.372308 containerd[1511]: time="2025-01-29T16:26:28.372265350Z" level=info msg="StopPodSandbox for \"afa262d39c5c6f6043a47d50e1d4bd36db26b4baad18c34639f502235890860f\"" Jan 29 16:26:28.372373 containerd[1511]: time="2025-01-29T16:26:28.372355643Z" level=info msg="TearDown network for sandbox 
\"afa262d39c5c6f6043a47d50e1d4bd36db26b4baad18c34639f502235890860f\" successfully" Jan 29 16:26:28.372373 containerd[1511]: time="2025-01-29T16:26:28.372368649Z" level=info msg="StopPodSandbox for \"afa262d39c5c6f6043a47d50e1d4bd36db26b4baad18c34639f502235890860f\" returns successfully" Jan 29 16:26:28.372540 containerd[1511]: time="2025-01-29T16:26:28.372479271Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:nginx-deployment-8587fbcb89-w488j,Uid:90a3e9fd-1da6-44cf-aa2e-ea467e1b0e35,Namespace:default,Attempt:1,}" Jan 29 16:26:28.373243 containerd[1511]: time="2025-01-29T16:26:28.373209784Z" level=info msg="StopPodSandbox for \"6c04ed8c505d3c2699018369350b5a60291c70a0e570bba0d517856b46be2103\"" Jan 29 16:26:28.373309 containerd[1511]: time="2025-01-29T16:26:28.373292273Z" level=info msg="TearDown network for sandbox \"6c04ed8c505d3c2699018369350b5a60291c70a0e570bba0d517856b46be2103\" successfully" Jan 29 16:26:28.373309 containerd[1511]: time="2025-01-29T16:26:28.373306240Z" level=info msg="StopPodSandbox for \"6c04ed8c505d3c2699018369350b5a60291c70a0e570bba0d517856b46be2103\" returns successfully" Jan 29 16:26:28.373793 containerd[1511]: time="2025-01-29T16:26:28.373721027Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:csi-node-driver-4pfpp,Uid:ebfca53b-4c8c-4d66-9ac2-1e0da5e0f553,Namespace:calico-system,Attempt:3,}" Jan 29 16:26:28.374504 systemd[1]: run-netns-cni\x2dfbb86dd3\x2dce88\x2d2cad\x2dc5ba\x2da436cc944f89.mount: Deactivated successfully. Jan 29 16:26:28.374731 systemd[1]: run-netns-cni\x2d21282f18\x2dbc87\x2dad8e\x2d455b\x2d28bfd92c3d74.mount: Deactivated successfully. 
Jan 29 16:26:28.533201 containerd[1511]: time="2025-01-29T16:26:28.532603449Z" level=error msg="Failed to destroy network for sandbox \"e8c193de622ce73e832a3fda7a9bfabf864dadaaa147a9039e467ccbf7a365c7\"" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jan 29 16:26:28.533864 containerd[1511]: time="2025-01-29T16:26:28.533709784Z" level=error msg="encountered an error cleaning up failed sandbox \"e8c193de622ce73e832a3fda7a9bfabf864dadaaa147a9039e467ccbf7a365c7\", marking sandbox state as SANDBOX_UNKNOWN" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jan 29 16:26:28.533864 containerd[1511]: time="2025-01-29T16:26:28.533813733Z" level=error msg="RunPodSandbox for &PodSandboxMetadata{Name:csi-node-driver-4pfpp,Uid:ebfca53b-4c8c-4d66-9ac2-1e0da5e0f553,Namespace:calico-system,Attempt:3,} failed, error" error="failed to setup network for sandbox \"e8c193de622ce73e832a3fda7a9bfabf864dadaaa147a9039e467ccbf7a365c7\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jan 29 16:26:28.534205 kubelet[1836]: E0129 16:26:28.534106 1836 log.go:32] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"e8c193de622ce73e832a3fda7a9bfabf864dadaaa147a9039e467ccbf7a365c7\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jan 29 16:26:28.534338 kubelet[1836]: E0129 16:26:28.534204 1836 kuberuntime_sandbox.go:72] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to setup 
network for sandbox \"e8c193de622ce73e832a3fda7a9bfabf864dadaaa147a9039e467ccbf7a365c7\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-system/csi-node-driver-4pfpp" Jan 29 16:26:28.534338 kubelet[1836]: E0129 16:26:28.534231 1836 kuberuntime_manager.go:1168] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"e8c193de622ce73e832a3fda7a9bfabf864dadaaa147a9039e467ccbf7a365c7\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-system/csi-node-driver-4pfpp" Jan 29 16:26:28.534338 kubelet[1836]: E0129 16:26:28.534285 1836 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"csi-node-driver-4pfpp_calico-system(ebfca53b-4c8c-4d66-9ac2-1e0da5e0f553)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"csi-node-driver-4pfpp_calico-system(ebfca53b-4c8c-4d66-9ac2-1e0da5e0f553)\\\": rpc error: code = Unknown desc = failed to setup network for sandbox \\\"e8c193de622ce73e832a3fda7a9bfabf864dadaaa147a9039e467ccbf7a365c7\\\": plugin type=\\\"calico\\\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="calico-system/csi-node-driver-4pfpp" podUID="ebfca53b-4c8c-4d66-9ac2-1e0da5e0f553" Jan 29 16:26:28.534520 systemd[1]: run-containerd-io.containerd.grpc.v1.cri-sandboxes-e8c193de622ce73e832a3fda7a9bfabf864dadaaa147a9039e467ccbf7a365c7-shm.mount: Deactivated successfully. 
Jan 29 16:26:28.542363 containerd[1511]: time="2025-01-29T16:26:28.542307682Z" level=error msg="Failed to destroy network for sandbox \"bf31a2cdcb0fbdb2a27c9594ace24b451214b41a29d73adba02deb722d8cdcc7\"" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jan 29 16:26:28.542747 containerd[1511]: time="2025-01-29T16:26:28.542714333Z" level=error msg="encountered an error cleaning up failed sandbox \"bf31a2cdcb0fbdb2a27c9594ace24b451214b41a29d73adba02deb722d8cdcc7\", marking sandbox state as SANDBOX_UNKNOWN" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jan 29 16:26:28.542794 containerd[1511]: time="2025-01-29T16:26:28.542767625Z" level=error msg="RunPodSandbox for &PodSandboxMetadata{Name:nginx-deployment-8587fbcb89-w488j,Uid:90a3e9fd-1da6-44cf-aa2e-ea467e1b0e35,Namespace:default,Attempt:1,} failed, error" error="failed to setup network for sandbox \"bf31a2cdcb0fbdb2a27c9594ace24b451214b41a29d73adba02deb722d8cdcc7\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jan 29 16:26:28.543016 kubelet[1836]: E0129 16:26:28.542986 1836 log.go:32] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"bf31a2cdcb0fbdb2a27c9594ace24b451214b41a29d73adba02deb722d8cdcc7\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jan 29 16:26:28.543083 kubelet[1836]: E0129 16:26:28.543034 1836 kuberuntime_sandbox.go:72] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to 
setup network for sandbox \"bf31a2cdcb0fbdb2a27c9594ace24b451214b41a29d73adba02deb722d8cdcc7\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="default/nginx-deployment-8587fbcb89-w488j" Jan 29 16:26:28.543083 kubelet[1836]: E0129 16:26:28.543057 1836 kuberuntime_manager.go:1168] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"bf31a2cdcb0fbdb2a27c9594ace24b451214b41a29d73adba02deb722d8cdcc7\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="default/nginx-deployment-8587fbcb89-w488j" Jan 29 16:26:28.543149 kubelet[1836]: E0129 16:26:28.543101 1836 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"nginx-deployment-8587fbcb89-w488j_default(90a3e9fd-1da6-44cf-aa2e-ea467e1b0e35)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"nginx-deployment-8587fbcb89-w488j_default(90a3e9fd-1da6-44cf-aa2e-ea467e1b0e35)\\\": rpc error: code = Unknown desc = failed to setup network for sandbox \\\"bf31a2cdcb0fbdb2a27c9594ace24b451214b41a29d73adba02deb722d8cdcc7\\\": plugin type=\\\"calico\\\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="default/nginx-deployment-8587fbcb89-w488j" podUID="90a3e9fd-1da6-44cf-aa2e-ea467e1b0e35" Jan 29 16:26:29.059786 kubelet[1836]: E0129 16:26:29.059740 1836 file.go:104] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Jan 29 16:26:29.095397 kubelet[1836]: E0129 16:26:29.095353 1836 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Jan 29 
16:26:29.370074 systemd[1]: run-containerd-io.containerd.grpc.v1.cri-sandboxes-bf31a2cdcb0fbdb2a27c9594ace24b451214b41a29d73adba02deb722d8cdcc7-shm.mount: Deactivated successfully. Jan 29 16:26:29.373841 kubelet[1836]: I0129 16:26:29.373816 1836 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="bf31a2cdcb0fbdb2a27c9594ace24b451214b41a29d73adba02deb722d8cdcc7" Jan 29 16:26:29.376444 containerd[1511]: time="2025-01-29T16:26:29.376396348Z" level=info msg="StopPodSandbox for \"bf31a2cdcb0fbdb2a27c9594ace24b451214b41a29d73adba02deb722d8cdcc7\"" Jan 29 16:26:29.376864 containerd[1511]: time="2025-01-29T16:26:29.376628183Z" level=info msg="Ensure that sandbox bf31a2cdcb0fbdb2a27c9594ace24b451214b41a29d73adba02deb722d8cdcc7 in task-service has been cleanup successfully" Jan 29 16:26:29.376890 containerd[1511]: time="2025-01-29T16:26:29.376862833Z" level=info msg="TearDown network for sandbox \"bf31a2cdcb0fbdb2a27c9594ace24b451214b41a29d73adba02deb722d8cdcc7\" successfully" Jan 29 16:26:29.376890 containerd[1511]: time="2025-01-29T16:26:29.376876479Z" level=info msg="StopPodSandbox for \"bf31a2cdcb0fbdb2a27c9594ace24b451214b41a29d73adba02deb722d8cdcc7\" returns successfully" Jan 29 16:26:29.378272 systemd[1]: run-netns-cni\x2d4249c92d\x2de746\x2db41c\x2d7bf9\x2d9a05784cbeca.mount: Deactivated successfully. 
Jan 29 16:26:29.379361 containerd[1511]: time="2025-01-29T16:26:29.378923716Z" level=info msg="StopPodSandbox for \"8053056e315849889aa194b8a66cdbc30602f9f7d4e7b291ce8ae981ba0f2769\"" Jan 29 16:26:29.379361 containerd[1511]: time="2025-01-29T16:26:29.379044167Z" level=info msg="TearDown network for sandbox \"8053056e315849889aa194b8a66cdbc30602f9f7d4e7b291ce8ae981ba0f2769\" successfully" Jan 29 16:26:29.379361 containerd[1511]: time="2025-01-29T16:26:29.379099994Z" level=info msg="StopPodSandbox for \"8053056e315849889aa194b8a66cdbc30602f9f7d4e7b291ce8ae981ba0f2769\" returns successfully" Jan 29 16:26:29.380054 containerd[1511]: time="2025-01-29T16:26:29.380030570Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:nginx-deployment-8587fbcb89-w488j,Uid:90a3e9fd-1da6-44cf-aa2e-ea467e1b0e35,Namespace:default,Attempt:2,}" Jan 29 16:26:29.380566 kubelet[1836]: I0129 16:26:29.380547 1836 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="e8c193de622ce73e832a3fda7a9bfabf864dadaaa147a9039e467ccbf7a365c7" Jan 29 16:26:29.381139 containerd[1511]: time="2025-01-29T16:26:29.381046398Z" level=info msg="StopPodSandbox for \"e8c193de622ce73e832a3fda7a9bfabf864dadaaa147a9039e467ccbf7a365c7\"" Jan 29 16:26:29.381243 containerd[1511]: time="2025-01-29T16:26:29.381220012Z" level=info msg="Ensure that sandbox e8c193de622ce73e832a3fda7a9bfabf864dadaaa147a9039e467ccbf7a365c7 in task-service has been cleanup successfully" Jan 29 16:26:29.381679 containerd[1511]: time="2025-01-29T16:26:29.381489488Z" level=info msg="TearDown network for sandbox \"e8c193de622ce73e832a3fda7a9bfabf864dadaaa147a9039e467ccbf7a365c7\" successfully" Jan 29 16:26:29.381679 containerd[1511]: time="2025-01-29T16:26:29.381504748Z" level=info msg="StopPodSandbox for \"e8c193de622ce73e832a3fda7a9bfabf864dadaaa147a9039e467ccbf7a365c7\" returns successfully" Jan 29 16:26:29.381768 containerd[1511]: time="2025-01-29T16:26:29.381745238Z" level=info msg="StopPodSandbox for 
\"cffaf574b277ef4135603b2b501e688b2c83bfacc0fe535cc819f8712de32824\"" Jan 29 16:26:29.381846 containerd[1511]: time="2025-01-29T16:26:29.381829531Z" level=info msg="TearDown network for sandbox \"cffaf574b277ef4135603b2b501e688b2c83bfacc0fe535cc819f8712de32824\" successfully" Jan 29 16:26:29.381846 containerd[1511]: time="2025-01-29T16:26:29.381843147Z" level=info msg="StopPodSandbox for \"cffaf574b277ef4135603b2b501e688b2c83bfacc0fe535cc819f8712de32824\" returns successfully" Jan 29 16:26:29.382105 containerd[1511]: time="2025-01-29T16:26:29.382087085Z" level=info msg="StopPodSandbox for \"afa262d39c5c6f6043a47d50e1d4bd36db26b4baad18c34639f502235890860f\"" Jan 29 16:26:29.382173 containerd[1511]: time="2025-01-29T16:26:29.382155646Z" level=info msg="TearDown network for sandbox \"afa262d39c5c6f6043a47d50e1d4bd36db26b4baad18c34639f502235890860f\" successfully" Jan 29 16:26:29.382309 containerd[1511]: time="2025-01-29T16:26:29.382220531Z" level=info msg="StopPodSandbox for \"afa262d39c5c6f6043a47d50e1d4bd36db26b4baad18c34639f502235890860f\" returns successfully" Jan 29 16:26:29.383091 systemd[1]: run-netns-cni\x2d361ba196\x2d5fb4\x2d10d6\x2d50aa\x2d92f3dda8aec7.mount: Deactivated successfully. 
Jan 29 16:26:29.383446 containerd[1511]: time="2025-01-29T16:26:29.383243313Z" level=info msg="StopPodSandbox for \"6c04ed8c505d3c2699018369350b5a60291c70a0e570bba0d517856b46be2103\"" Jan 29 16:26:29.383446 containerd[1511]: time="2025-01-29T16:26:29.383364635Z" level=info msg="TearDown network for sandbox \"6c04ed8c505d3c2699018369350b5a60291c70a0e570bba0d517856b46be2103\" successfully" Jan 29 16:26:29.383446 containerd[1511]: time="2025-01-29T16:26:29.383374133Z" level=info msg="StopPodSandbox for \"6c04ed8c505d3c2699018369350b5a60291c70a0e570bba0d517856b46be2103\" returns successfully" Jan 29 16:26:29.384113 containerd[1511]: time="2025-01-29T16:26:29.384093012Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:csi-node-driver-4pfpp,Uid:ebfca53b-4c8c-4d66-9ac2-1e0da5e0f553,Namespace:calico-system,Attempt:4,}" Jan 29 16:26:29.551259 containerd[1511]: time="2025-01-29T16:26:29.551181232Z" level=error msg="Failed to destroy network for sandbox \"02d75da93423d92f36ecd20e0389bb3c66c200651ce5cea4662a1f1999f6aebc\"" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jan 29 16:26:29.552043 containerd[1511]: time="2025-01-29T16:26:29.551584124Z" level=error msg="encountered an error cleaning up failed sandbox \"02d75da93423d92f36ecd20e0389bb3c66c200651ce5cea4662a1f1999f6aebc\", marking sandbox state as SANDBOX_UNKNOWN" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jan 29 16:26:29.552043 containerd[1511]: time="2025-01-29T16:26:29.551666061Z" level=error msg="RunPodSandbox for &PodSandboxMetadata{Name:nginx-deployment-8587fbcb89-w488j,Uid:90a3e9fd-1da6-44cf-aa2e-ea467e1b0e35,Namespace:default,Attempt:2,} failed, error" error="failed to setup network for sandbox 
\"02d75da93423d92f36ecd20e0389bb3c66c200651ce5cea4662a1f1999f6aebc\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jan 29 16:26:29.552151 kubelet[1836]: E0129 16:26:29.552029 1836 log.go:32] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"02d75da93423d92f36ecd20e0389bb3c66c200651ce5cea4662a1f1999f6aebc\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jan 29 16:26:29.552151 kubelet[1836]: E0129 16:26:29.552104 1836 kuberuntime_sandbox.go:72] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"02d75da93423d92f36ecd20e0389bb3c66c200651ce5cea4662a1f1999f6aebc\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="default/nginx-deployment-8587fbcb89-w488j" Jan 29 16:26:29.552151 kubelet[1836]: E0129 16:26:29.552130 1836 kuberuntime_manager.go:1168] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"02d75da93423d92f36ecd20e0389bb3c66c200651ce5cea4662a1f1999f6aebc\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="default/nginx-deployment-8587fbcb89-w488j" Jan 29 16:26:29.552243 kubelet[1836]: E0129 16:26:29.552176 1836 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"nginx-deployment-8587fbcb89-w488j_default(90a3e9fd-1da6-44cf-aa2e-ea467e1b0e35)\" with CreatePodSandboxError: \"Failed to create sandbox for pod 
\\\"nginx-deployment-8587fbcb89-w488j_default(90a3e9fd-1da6-44cf-aa2e-ea467e1b0e35)\\\": rpc error: code = Unknown desc = failed to setup network for sandbox \\\"02d75da93423d92f36ecd20e0389bb3c66c200651ce5cea4662a1f1999f6aebc\\\": plugin type=\\\"calico\\\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="default/nginx-deployment-8587fbcb89-w488j" podUID="90a3e9fd-1da6-44cf-aa2e-ea467e1b0e35" Jan 29 16:26:29.557890 containerd[1511]: time="2025-01-29T16:26:29.557696991Z" level=error msg="Failed to destroy network for sandbox \"823b60199170bed3f24aa396770a59c06672b7c1574810890698fa7b482ad0dd\"" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jan 29 16:26:29.558300 containerd[1511]: time="2025-01-29T16:26:29.558260211Z" level=error msg="encountered an error cleaning up failed sandbox \"823b60199170bed3f24aa396770a59c06672b7c1574810890698fa7b482ad0dd\", marking sandbox state as SANDBOX_UNKNOWN" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jan 29 16:26:29.558351 containerd[1511]: time="2025-01-29T16:26:29.558302993Z" level=error msg="RunPodSandbox for &PodSandboxMetadata{Name:csi-node-driver-4pfpp,Uid:ebfca53b-4c8c-4d66-9ac2-1e0da5e0f553,Namespace:calico-system,Attempt:4,} failed, error" error="failed to setup network for sandbox \"823b60199170bed3f24aa396770a59c06672b7c1574810890698fa7b482ad0dd\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jan 29 16:26:29.558465 kubelet[1836]: E0129 16:26:29.558438 1836 log.go:32] "RunPodSandbox from runtime service 
failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"823b60199170bed3f24aa396770a59c06672b7c1574810890698fa7b482ad0dd\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jan 29 16:26:29.558543 kubelet[1836]: E0129 16:26:29.558473 1836 kuberuntime_sandbox.go:72] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"823b60199170bed3f24aa396770a59c06672b7c1574810890698fa7b482ad0dd\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-system/csi-node-driver-4pfpp" Jan 29 16:26:29.558543 kubelet[1836]: E0129 16:26:29.558490 1836 kuberuntime_manager.go:1168] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"823b60199170bed3f24aa396770a59c06672b7c1574810890698fa7b482ad0dd\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-system/csi-node-driver-4pfpp" Jan 29 16:26:29.558543 kubelet[1836]: E0129 16:26:29.558519 1836 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"csi-node-driver-4pfpp_calico-system(ebfca53b-4c8c-4d66-9ac2-1e0da5e0f553)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"csi-node-driver-4pfpp_calico-system(ebfca53b-4c8c-4d66-9ac2-1e0da5e0f553)\\\": rpc error: code = Unknown desc = failed to setup network for sandbox \\\"823b60199170bed3f24aa396770a59c06672b7c1574810890698fa7b482ad0dd\\\": plugin type=\\\"calico\\\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted 
/var/lib/calico/\"" pod="calico-system/csi-node-driver-4pfpp" podUID="ebfca53b-4c8c-4d66-9ac2-1e0da5e0f553" Jan 29 16:26:30.096302 kubelet[1836]: E0129 16:26:30.096267 1836 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Jan 29 16:26:30.370306 systemd[1]: run-containerd-io.containerd.grpc.v1.cri-sandboxes-02d75da93423d92f36ecd20e0389bb3c66c200651ce5cea4662a1f1999f6aebc-shm.mount: Deactivated successfully. Jan 29 16:26:30.385558 kubelet[1836]: I0129 16:26:30.385507 1836 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="02d75da93423d92f36ecd20e0389bb3c66c200651ce5cea4662a1f1999f6aebc" Jan 29 16:26:30.386665 containerd[1511]: time="2025-01-29T16:26:30.386582722Z" level=info msg="StopPodSandbox for \"02d75da93423d92f36ecd20e0389bb3c66c200651ce5cea4662a1f1999f6aebc\"" Jan 29 16:26:30.388063 containerd[1511]: time="2025-01-29T16:26:30.386867988Z" level=info msg="Ensure that sandbox 02d75da93423d92f36ecd20e0389bb3c66c200651ce5cea4662a1f1999f6aebc in task-service has been cleanup successfully" Jan 29 16:26:30.388063 containerd[1511]: time="2025-01-29T16:26:30.387346275Z" level=info msg="TearDown network for sandbox \"02d75da93423d92f36ecd20e0389bb3c66c200651ce5cea4662a1f1999f6aebc\" successfully" Jan 29 16:26:30.388063 containerd[1511]: time="2025-01-29T16:26:30.387367645Z" level=info msg="StopPodSandbox for \"02d75da93423d92f36ecd20e0389bb3c66c200651ce5cea4662a1f1999f6aebc\" returns successfully" Jan 29 16:26:30.388596 containerd[1511]: time="2025-01-29T16:26:30.388560461Z" level=info msg="StopPodSandbox for \"bf31a2cdcb0fbdb2a27c9594ace24b451214b41a29d73adba02deb722d8cdcc7\"" Jan 29 16:26:30.388866 containerd[1511]: time="2025-01-29T16:26:30.388700148Z" level=info msg="TearDown network for sandbox \"bf31a2cdcb0fbdb2a27c9594ace24b451214b41a29d73adba02deb722d8cdcc7\" successfully" Jan 29 16:26:30.388866 containerd[1511]: time="2025-01-29T16:26:30.388722150Z" level=info 
msg="StopPodSandbox for \"bf31a2cdcb0fbdb2a27c9594ace24b451214b41a29d73adba02deb722d8cdcc7\" returns successfully" Jan 29 16:26:30.389657 containerd[1511]: time="2025-01-29T16:26:30.389373558Z" level=info msg="StopPodSandbox for \"8053056e315849889aa194b8a66cdbc30602f9f7d4e7b291ce8ae981ba0f2769\"" Jan 29 16:26:30.389657 containerd[1511]: time="2025-01-29T16:26:30.389495041Z" level=info msg="TearDown network for sandbox \"8053056e315849889aa194b8a66cdbc30602f9f7d4e7b291ce8ae981ba0f2769\" successfully" Jan 29 16:26:30.389657 containerd[1511]: time="2025-01-29T16:26:30.389507385Z" level=info msg="StopPodSandbox for \"8053056e315849889aa194b8a66cdbc30602f9f7d4e7b291ce8ae981ba0f2769\" returns successfully" Jan 29 16:26:30.389608 systemd[1]: run-netns-cni\x2dca6c9094\x2d25f6\x2d812e\x2dc02d\x2dec741c8a5223.mount: Deactivated successfully. Jan 29 16:26:30.390824 containerd[1511]: time="2025-01-29T16:26:30.390775214Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:nginx-deployment-8587fbcb89-w488j,Uid:90a3e9fd-1da6-44cf-aa2e-ea467e1b0e35,Namespace:default,Attempt:3,}" Jan 29 16:26:30.391357 kubelet[1836]: I0129 16:26:30.391302 1836 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="823b60199170bed3f24aa396770a59c06672b7c1574810890698fa7b482ad0dd" Jan 29 16:26:30.392010 containerd[1511]: time="2025-01-29T16:26:30.391821948Z" level=info msg="StopPodSandbox for \"823b60199170bed3f24aa396770a59c06672b7c1574810890698fa7b482ad0dd\"" Jan 29 16:26:30.392059 containerd[1511]: time="2025-01-29T16:26:30.392027061Z" level=info msg="Ensure that sandbox 823b60199170bed3f24aa396770a59c06672b7c1574810890698fa7b482ad0dd in task-service has been cleanup successfully" Jan 29 16:26:30.392540 containerd[1511]: time="2025-01-29T16:26:30.392444721Z" level=info msg="TearDown network for sandbox \"823b60199170bed3f24aa396770a59c06672b7c1574810890698fa7b482ad0dd\" successfully" Jan 29 16:26:30.392540 containerd[1511]: 
time="2025-01-29T16:26:30.392467054Z" level=info msg="StopPodSandbox for \"823b60199170bed3f24aa396770a59c06672b7c1574810890698fa7b482ad0dd\" returns successfully" Jan 29 16:26:30.394470 systemd[1]: run-netns-cni\x2dca375507\x2dc0b2\x2d6181\x2d6b4b\x2de9fae6834a61.mount: Deactivated successfully. Jan 29 16:26:30.394905 containerd[1511]: time="2025-01-29T16:26:30.394693620Z" level=info msg="StopPodSandbox for \"e8c193de622ce73e832a3fda7a9bfabf864dadaaa147a9039e467ccbf7a365c7\"" Jan 29 16:26:30.394905 containerd[1511]: time="2025-01-29T16:26:30.394807637Z" level=info msg="TearDown network for sandbox \"e8c193de622ce73e832a3fda7a9bfabf864dadaaa147a9039e467ccbf7a365c7\" successfully" Jan 29 16:26:30.394905 containerd[1511]: time="2025-01-29T16:26:30.394823608Z" level=info msg="StopPodSandbox for \"e8c193de622ce73e832a3fda7a9bfabf864dadaaa147a9039e467ccbf7a365c7\" returns successfully" Jan 29 16:26:30.395737 containerd[1511]: time="2025-01-29T16:26:30.395703864Z" level=info msg="StopPodSandbox for \"cffaf574b277ef4135603b2b501e688b2c83bfacc0fe535cc819f8712de32824\"" Jan 29 16:26:30.395862 containerd[1511]: time="2025-01-29T16:26:30.395802433Z" level=info msg="TearDown network for sandbox \"cffaf574b277ef4135603b2b501e688b2c83bfacc0fe535cc819f8712de32824\" successfully" Jan 29 16:26:30.395862 containerd[1511]: time="2025-01-29T16:26:30.395821981Z" level=info msg="StopPodSandbox for \"cffaf574b277ef4135603b2b501e688b2c83bfacc0fe535cc819f8712de32824\" returns successfully" Jan 29 16:26:30.396614 containerd[1511]: time="2025-01-29T16:26:30.396562800Z" level=info msg="StopPodSandbox for \"afa262d39c5c6f6043a47d50e1d4bd36db26b4baad18c34639f502235890860f\"" Jan 29 16:26:30.396831 containerd[1511]: time="2025-01-29T16:26:30.396774426Z" level=info msg="TearDown network for sandbox \"afa262d39c5c6f6043a47d50e1d4bd36db26b4baad18c34639f502235890860f\" successfully" Jan 29 16:26:30.396831 containerd[1511]: time="2025-01-29T16:26:30.396797629Z" level=info msg="StopPodSandbox for 
\"afa262d39c5c6f6043a47d50e1d4bd36db26b4baad18c34639f502235890860f\" returns successfully" Jan 29 16:26:30.397733 containerd[1511]: time="2025-01-29T16:26:30.397206653Z" level=info msg="StopPodSandbox for \"6c04ed8c505d3c2699018369350b5a60291c70a0e570bba0d517856b46be2103\"" Jan 29 16:26:30.397733 containerd[1511]: time="2025-01-29T16:26:30.397367321Z" level=info msg="TearDown network for sandbox \"6c04ed8c505d3c2699018369350b5a60291c70a0e570bba0d517856b46be2103\" successfully" Jan 29 16:26:30.397733 containerd[1511]: time="2025-01-29T16:26:30.397379695Z" level=info msg="StopPodSandbox for \"6c04ed8c505d3c2699018369350b5a60291c70a0e570bba0d517856b46be2103\" returns successfully" Jan 29 16:26:30.397915 containerd[1511]: time="2025-01-29T16:26:30.397880935Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:csi-node-driver-4pfpp,Uid:ebfca53b-4c8c-4d66-9ac2-1e0da5e0f553,Namespace:calico-system,Attempt:5,}" Jan 29 16:26:31.097152 kubelet[1836]: E0129 16:26:31.097101 1836 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Jan 29 16:26:31.101840 containerd[1511]: time="2025-01-29T16:26:31.101792484Z" level=error msg="Failed to destroy network for sandbox \"e163d3ac954194cd8144c2b133d7e41db5728e141349f8fa0295aabb409874b7\"" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jan 29 16:26:31.244212 containerd[1511]: time="2025-01-29T16:26:31.244149292Z" level=error msg="encountered an error cleaning up failed sandbox \"e163d3ac954194cd8144c2b133d7e41db5728e141349f8fa0295aabb409874b7\", marking sandbox state as SANDBOX_UNKNOWN" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jan 29 16:26:31.244665 containerd[1511]: 
time="2025-01-29T16:26:31.244610594Z" level=error msg="RunPodSandbox for &PodSandboxMetadata{Name:nginx-deployment-8587fbcb89-w488j,Uid:90a3e9fd-1da6-44cf-aa2e-ea467e1b0e35,Namespace:default,Attempt:3,} failed, error" error="failed to setup network for sandbox \"e163d3ac954194cd8144c2b133d7e41db5728e141349f8fa0295aabb409874b7\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jan 29 16:26:31.245187 kubelet[1836]: E0129 16:26:31.245141 1836 log.go:32] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"e163d3ac954194cd8144c2b133d7e41db5728e141349f8fa0295aabb409874b7\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jan 29 16:26:31.245277 kubelet[1836]: E0129 16:26:31.245231 1836 kuberuntime_sandbox.go:72] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"e163d3ac954194cd8144c2b133d7e41db5728e141349f8fa0295aabb409874b7\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="default/nginx-deployment-8587fbcb89-w488j" Jan 29 16:26:31.245277 kubelet[1836]: E0129 16:26:31.245256 1836 kuberuntime_manager.go:1168] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"e163d3ac954194cd8144c2b133d7e41db5728e141349f8fa0295aabb409874b7\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="default/nginx-deployment-8587fbcb89-w488j" Jan 29 16:26:31.245453 kubelet[1836]: E0129 
16:26:31.245348 1836 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"nginx-deployment-8587fbcb89-w488j_default(90a3e9fd-1da6-44cf-aa2e-ea467e1b0e35)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"nginx-deployment-8587fbcb89-w488j_default(90a3e9fd-1da6-44cf-aa2e-ea467e1b0e35)\\\": rpc error: code = Unknown desc = failed to setup network for sandbox \\\"e163d3ac954194cd8144c2b133d7e41db5728e141349f8fa0295aabb409874b7\\\": plugin type=\\\"calico\\\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="default/nginx-deployment-8587fbcb89-w488j" podUID="90a3e9fd-1da6-44cf-aa2e-ea467e1b0e35" Jan 29 16:26:31.276635 containerd[1511]: time="2025-01-29T16:26:31.276580515Z" level=error msg="Failed to destroy network for sandbox \"504b29301c6d060e8e0cce3b5c0c88c18609b640aaaf2d3136da0c37107c8cde\"" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jan 29 16:26:31.277032 containerd[1511]: time="2025-01-29T16:26:31.277005438Z" level=error msg="encountered an error cleaning up failed sandbox \"504b29301c6d060e8e0cce3b5c0c88c18609b640aaaf2d3136da0c37107c8cde\", marking sandbox state as SANDBOX_UNKNOWN" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jan 29 16:26:31.277093 containerd[1511]: time="2025-01-29T16:26:31.277073218Z" level=error msg="RunPodSandbox for &PodSandboxMetadata{Name:csi-node-driver-4pfpp,Uid:ebfca53b-4c8c-4d66-9ac2-1e0da5e0f553,Namespace:calico-system,Attempt:5,} failed, error" error="failed to setup network for sandbox \"504b29301c6d060e8e0cce3b5c0c88c18609b640aaaf2d3136da0c37107c8cde\": plugin type=\"calico\" failed 
(add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jan 29 16:26:31.277384 kubelet[1836]: E0129 16:26:31.277344 1836 log.go:32] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"504b29301c6d060e8e0cce3b5c0c88c18609b640aaaf2d3136da0c37107c8cde\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jan 29 16:26:31.277852 kubelet[1836]: E0129 16:26:31.277497 1836 kuberuntime_sandbox.go:72] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"504b29301c6d060e8e0cce3b5c0c88c18609b640aaaf2d3136da0c37107c8cde\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-system/csi-node-driver-4pfpp" Jan 29 16:26:31.277852 kubelet[1836]: E0129 16:26:31.277532 1836 kuberuntime_manager.go:1168] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"504b29301c6d060e8e0cce3b5c0c88c18609b640aaaf2d3136da0c37107c8cde\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-system/csi-node-driver-4pfpp" Jan 29 16:26:31.277852 kubelet[1836]: E0129 16:26:31.277577 1836 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"csi-node-driver-4pfpp_calico-system(ebfca53b-4c8c-4d66-9ac2-1e0da5e0f553)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"csi-node-driver-4pfpp_calico-system(ebfca53b-4c8c-4d66-9ac2-1e0da5e0f553)\\\": rpc error: code = Unknown desc = failed to setup 
network for sandbox \\\"504b29301c6d060e8e0cce3b5c0c88c18609b640aaaf2d3136da0c37107c8cde\\\": plugin type=\\\"calico\\\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="calico-system/csi-node-driver-4pfpp" podUID="ebfca53b-4c8c-4d66-9ac2-1e0da5e0f553" Jan 29 16:26:31.370443 systemd[1]: run-containerd-io.containerd.grpc.v1.cri-sandboxes-504b29301c6d060e8e0cce3b5c0c88c18609b640aaaf2d3136da0c37107c8cde-shm.mount: Deactivated successfully. Jan 29 16:26:31.371244 systemd[1]: run-containerd-io.containerd.grpc.v1.cri-sandboxes-e163d3ac954194cd8144c2b133d7e41db5728e141349f8fa0295aabb409874b7-shm.mount: Deactivated successfully. Jan 29 16:26:31.395868 kubelet[1836]: I0129 16:26:31.395833 1836 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="504b29301c6d060e8e0cce3b5c0c88c18609b640aaaf2d3136da0c37107c8cde" Jan 29 16:26:31.396326 containerd[1511]: time="2025-01-29T16:26:31.396292010Z" level=info msg="StopPodSandbox for \"504b29301c6d060e8e0cce3b5c0c88c18609b640aaaf2d3136da0c37107c8cde\"" Jan 29 16:26:31.396682 containerd[1511]: time="2025-01-29T16:26:31.396523764Z" level=info msg="Ensure that sandbox 504b29301c6d060e8e0cce3b5c0c88c18609b640aaaf2d3136da0c37107c8cde in task-service has been cleanup successfully" Jan 29 16:26:31.398004 containerd[1511]: time="2025-01-29T16:26:31.397805876Z" level=info msg="TearDown network for sandbox \"504b29301c6d060e8e0cce3b5c0c88c18609b640aaaf2d3136da0c37107c8cde\" successfully" Jan 29 16:26:31.398004 containerd[1511]: time="2025-01-29T16:26:31.397830644Z" level=info msg="StopPodSandbox for \"504b29301c6d060e8e0cce3b5c0c88c18609b640aaaf2d3136da0c37107c8cde\" returns successfully" Jan 29 16:26:31.398093 containerd[1511]: time="2025-01-29T16:26:31.398030246Z" level=info msg="StopPodSandbox for \"823b60199170bed3f24aa396770a59c06672b7c1574810890698fa7b482ad0dd\"" Jan 29 16:26:31.398181 
containerd[1511]: time="2025-01-29T16:26:31.398114126Z" level=info msg="TearDown network for sandbox \"823b60199170bed3f24aa396770a59c06672b7c1574810890698fa7b482ad0dd\" successfully" Jan 29 16:26:31.398181 containerd[1511]: time="2025-01-29T16:26:31.398136449Z" level=info msg="StopPodSandbox for \"823b60199170bed3f24aa396770a59c06672b7c1574810890698fa7b482ad0dd\" returns successfully" Jan 29 16:26:31.398768 containerd[1511]: time="2025-01-29T16:26:31.398634933Z" level=info msg="StopPodSandbox for \"e8c193de622ce73e832a3fda7a9bfabf864dadaaa147a9039e467ccbf7a365c7\"" Jan 29 16:26:31.398816 containerd[1511]: time="2025-01-29T16:26:31.398791382Z" level=info msg="TearDown network for sandbox \"e8c193de622ce73e832a3fda7a9bfabf864dadaaa147a9039e467ccbf7a365c7\" successfully" Jan 29 16:26:31.398816 containerd[1511]: time="2025-01-29T16:26:31.398806471Z" level=info msg="StopPodSandbox for \"e8c193de622ce73e832a3fda7a9bfabf864dadaaa147a9039e467ccbf7a365c7\" returns successfully" Jan 29 16:26:31.399216 kubelet[1836]: I0129 16:26:31.399193 1836 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="e163d3ac954194cd8144c2b133d7e41db5728e141349f8fa0295aabb409874b7" Jan 29 16:26:31.399462 containerd[1511]: time="2025-01-29T16:26:31.399311817Z" level=info msg="StopPodSandbox for \"cffaf574b277ef4135603b2b501e688b2c83bfacc0fe535cc819f8712de32824\"" Jan 29 16:26:31.399462 containerd[1511]: time="2025-01-29T16:26:31.399407150Z" level=info msg="TearDown network for sandbox \"cffaf574b277ef4135603b2b501e688b2c83bfacc0fe535cc819f8712de32824\" successfully" Jan 29 16:26:31.399462 containerd[1511]: time="2025-01-29T16:26:31.399455723Z" level=info msg="StopPodSandbox for \"cffaf574b277ef4135603b2b501e688b2c83bfacc0fe535cc819f8712de32824\" returns successfully" Jan 29 16:26:31.399758 systemd[1]: run-netns-cni\x2d1317654c\x2d52f0\x2d710d\x2d514f\x2ddff47276ca41.mount: Deactivated successfully. 
Jan 29 16:26:31.401445 containerd[1511]: time="2025-01-29T16:26:31.401416514Z" level=info msg="StopPodSandbox for \"afa262d39c5c6f6043a47d50e1d4bd36db26b4baad18c34639f502235890860f\"" Jan 29 16:26:31.401598 containerd[1511]: time="2025-01-29T16:26:31.401493772Z" level=info msg="TearDown network for sandbox \"afa262d39c5c6f6043a47d50e1d4bd36db26b4baad18c34639f502235890860f\" successfully" Jan 29 16:26:31.401598 containerd[1511]: time="2025-01-29T16:26:31.401504452Z" level=info msg="StopPodSandbox for \"afa262d39c5c6f6043a47d50e1d4bd36db26b4baad18c34639f502235890860f\" returns successfully" Jan 29 16:26:31.401598 containerd[1511]: time="2025-01-29T16:26:31.401542826Z" level=info msg="StopPodSandbox for \"e163d3ac954194cd8144c2b133d7e41db5728e141349f8fa0295aabb409874b7\"" Jan 29 16:26:31.401720 containerd[1511]: time="2025-01-29T16:26:31.401687793Z" level=info msg="Ensure that sandbox e163d3ac954194cd8144c2b133d7e41db5728e141349f8fa0295aabb409874b7 in task-service has been cleanup successfully" Jan 29 16:26:31.402179 containerd[1511]: time="2025-01-29T16:26:31.402005290Z" level=info msg="StopPodSandbox for \"6c04ed8c505d3c2699018369350b5a60291c70a0e570bba0d517856b46be2103\"" Jan 29 16:26:31.402179 containerd[1511]: time="2025-01-29T16:26:31.402095353Z" level=info msg="TearDown network for sandbox \"6c04ed8c505d3c2699018369350b5a60291c70a0e570bba0d517856b46be2103\" successfully" Jan 29 16:26:31.402179 containerd[1511]: time="2025-01-29T16:26:31.402107456Z" level=info msg="StopPodSandbox for \"6c04ed8c505d3c2699018369350b5a60291c70a0e570bba0d517856b46be2103\" returns successfully" Jan 29 16:26:31.402179 containerd[1511]: time="2025-01-29T16:26:31.402015420Z" level=info msg="TearDown network for sandbox \"e163d3ac954194cd8144c2b133d7e41db5728e141349f8fa0295aabb409874b7\" successfully" Jan 29 16:26:31.402179 containerd[1511]: time="2025-01-29T16:26:31.402147743Z" level=info msg="StopPodSandbox for \"e163d3ac954194cd8144c2b133d7e41db5728e141349f8fa0295aabb409874b7\" 
returns successfully" Jan 29 16:26:31.403124 containerd[1511]: time="2025-01-29T16:26:31.403090767Z" level=info msg="StopPodSandbox for \"02d75da93423d92f36ecd20e0389bb3c66c200651ce5cea4662a1f1999f6aebc\"" Jan 29 16:26:31.403215 containerd[1511]: time="2025-01-29T16:26:31.403188925Z" level=info msg="TearDown network for sandbox \"02d75da93423d92f36ecd20e0389bb3c66c200651ce5cea4662a1f1999f6aebc\" successfully" Jan 29 16:26:31.403215 containerd[1511]: time="2025-01-29T16:26:31.403211358Z" level=info msg="StopPodSandbox for \"02d75da93423d92f36ecd20e0389bb3c66c200651ce5cea4662a1f1999f6aebc\" returns successfully" Jan 29 16:26:31.403630 containerd[1511]: time="2025-01-29T16:26:31.403548142Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:csi-node-driver-4pfpp,Uid:ebfca53b-4c8c-4d66-9ac2-1e0da5e0f553,Namespace:calico-system,Attempt:6,}" Jan 29 16:26:31.404623 systemd[1]: run-netns-cni\x2d68cd6c84\x2d251d\x2d1110\x2d6f8c\x2de35672ba4cef.mount: Deactivated successfully. Jan 29 16:26:31.404764 containerd[1511]: time="2025-01-29T16:26:31.404738769Z" level=info msg="StopPodSandbox for \"bf31a2cdcb0fbdb2a27c9594ace24b451214b41a29d73adba02deb722d8cdcc7\"" Jan 29 16:26:31.405202 containerd[1511]: time="2025-01-29T16:26:31.404832870Z" level=info msg="TearDown network for sandbox \"bf31a2cdcb0fbdb2a27c9594ace24b451214b41a29d73adba02deb722d8cdcc7\" successfully" Jan 29 16:26:31.405202 containerd[1511]: time="2025-01-29T16:26:31.404854341Z" level=info msg="StopPodSandbox for \"bf31a2cdcb0fbdb2a27c9594ace24b451214b41a29d73adba02deb722d8cdcc7\" returns successfully" Jan 29 16:26:31.405202 containerd[1511]: time="2025-01-29T16:26:31.405139637Z" level=info msg="StopPodSandbox for \"8053056e315849889aa194b8a66cdbc30602f9f7d4e7b291ce8ae981ba0f2769\"" Jan 29 16:26:31.405297 containerd[1511]: time="2025-01-29T16:26:31.405240058Z" level=info msg="TearDown network for sandbox \"8053056e315849889aa194b8a66cdbc30602f9f7d4e7b291ce8ae981ba0f2769\" successfully" Jan 29 16:26:31.405297 
containerd[1511]: time="2025-01-29T16:26:31.405253284Z" level=info msg="StopPodSandbox for \"8053056e315849889aa194b8a66cdbc30602f9f7d4e7b291ce8ae981ba0f2769\" returns successfully" Jan 29 16:26:31.406931 containerd[1511]: time="2025-01-29T16:26:31.406838637Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:nginx-deployment-8587fbcb89-w488j,Uid:90a3e9fd-1da6-44cf-aa2e-ea467e1b0e35,Namespace:default,Attempt:4,}" Jan 29 16:26:32.025081 containerd[1511]: time="2025-01-29T16:26:32.025018687Z" level=error msg="Failed to destroy network for sandbox \"6623c8a7c832d651c3aa1ef79082fd27e44019070bb4c878e903891b9d2749cf\"" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jan 29 16:26:32.025965 containerd[1511]: time="2025-01-29T16:26:32.025925169Z" level=error msg="encountered an error cleaning up failed sandbox \"6623c8a7c832d651c3aa1ef79082fd27e44019070bb4c878e903891b9d2749cf\", marking sandbox state as SANDBOX_UNKNOWN" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jan 29 16:26:32.026110 containerd[1511]: time="2025-01-29T16:26:32.026010882Z" level=error msg="RunPodSandbox for &PodSandboxMetadata{Name:csi-node-driver-4pfpp,Uid:ebfca53b-4c8c-4d66-9ac2-1e0da5e0f553,Namespace:calico-system,Attempt:6,} failed, error" error="failed to setup network for sandbox \"6623c8a7c832d651c3aa1ef79082fd27e44019070bb4c878e903891b9d2749cf\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jan 29 16:26:32.026388 kubelet[1836]: E0129 16:26:32.026336 1836 log.go:32] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox 
\"6623c8a7c832d651c3aa1ef79082fd27e44019070bb4c878e903891b9d2749cf\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jan 29 16:26:32.026570 kubelet[1836]: E0129 16:26:32.026420 1836 kuberuntime_sandbox.go:72] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"6623c8a7c832d651c3aa1ef79082fd27e44019070bb4c878e903891b9d2749cf\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-system/csi-node-driver-4pfpp" Jan 29 16:26:32.026570 kubelet[1836]: E0129 16:26:32.026445 1836 kuberuntime_manager.go:1168] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"6623c8a7c832d651c3aa1ef79082fd27e44019070bb4c878e903891b9d2749cf\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-system/csi-node-driver-4pfpp" Jan 29 16:26:32.026570 kubelet[1836]: E0129 16:26:32.026489 1836 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"csi-node-driver-4pfpp_calico-system(ebfca53b-4c8c-4d66-9ac2-1e0da5e0f553)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"csi-node-driver-4pfpp_calico-system(ebfca53b-4c8c-4d66-9ac2-1e0da5e0f553)\\\": rpc error: code = Unknown desc = failed to setup network for sandbox \\\"6623c8a7c832d651c3aa1ef79082fd27e44019070bb4c878e903891b9d2749cf\\\": plugin type=\\\"calico\\\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="calico-system/csi-node-driver-4pfpp" 
podUID="ebfca53b-4c8c-4d66-9ac2-1e0da5e0f553" Jan 29 16:26:32.029432 containerd[1511]: time="2025-01-29T16:26:32.029379449Z" level=error msg="Failed to destroy network for sandbox \"400417133a608367f0f160429588b3a5fdd64d6f988ed6b92498c5c4cd59973c\"" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jan 29 16:26:32.029822 containerd[1511]: time="2025-01-29T16:26:32.029790855Z" level=error msg="encountered an error cleaning up failed sandbox \"400417133a608367f0f160429588b3a5fdd64d6f988ed6b92498c5c4cd59973c\", marking sandbox state as SANDBOX_UNKNOWN" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jan 29 16:26:32.030132 containerd[1511]: time="2025-01-29T16:26:32.029866901Z" level=error msg="RunPodSandbox for &PodSandboxMetadata{Name:nginx-deployment-8587fbcb89-w488j,Uid:90a3e9fd-1da6-44cf-aa2e-ea467e1b0e35,Namespace:default,Attempt:4,} failed, error" error="failed to setup network for sandbox \"400417133a608367f0f160429588b3a5fdd64d6f988ed6b92498c5c4cd59973c\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jan 29 16:26:32.030244 kubelet[1836]: E0129 16:26:32.030048 1836 log.go:32] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"400417133a608367f0f160429588b3a5fdd64d6f988ed6b92498c5c4cd59973c\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jan 29 16:26:32.030244 kubelet[1836]: E0129 16:26:32.030079 1836 kuberuntime_sandbox.go:72] "Failed to create sandbox for pod" 
err="rpc error: code = Unknown desc = failed to setup network for sandbox \"400417133a608367f0f160429588b3a5fdd64d6f988ed6b92498c5c4cd59973c\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="default/nginx-deployment-8587fbcb89-w488j" Jan 29 16:26:32.030244 kubelet[1836]: E0129 16:26:32.030094 1836 kuberuntime_manager.go:1168] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"400417133a608367f0f160429588b3a5fdd64d6f988ed6b92498c5c4cd59973c\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="default/nginx-deployment-8587fbcb89-w488j" Jan 29 16:26:32.030331 kubelet[1836]: E0129 16:26:32.030118 1836 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"nginx-deployment-8587fbcb89-w488j_default(90a3e9fd-1da6-44cf-aa2e-ea467e1b0e35)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"nginx-deployment-8587fbcb89-w488j_default(90a3e9fd-1da6-44cf-aa2e-ea467e1b0e35)\\\": rpc error: code = Unknown desc = failed to setup network for sandbox \\\"400417133a608367f0f160429588b3a5fdd64d6f988ed6b92498c5c4cd59973c\\\": plugin type=\\\"calico\\\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="default/nginx-deployment-8587fbcb89-w488j" podUID="90a3e9fd-1da6-44cf-aa2e-ea467e1b0e35" Jan 29 16:26:32.098124 kubelet[1836]: E0129 16:26:32.098090 1836 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Jan 29 16:26:32.126140 containerd[1511]: time="2025-01-29T16:26:32.126083582Z" level=info msg="ImageCreate event 
name:\"ghcr.io/flatcar/calico/node:v3.29.1\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jan 29 16:26:32.126917 containerd[1511]: time="2025-01-29T16:26:32.126873762Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/node:v3.29.1: active requests=0, bytes read=142742010" Jan 29 16:26:32.127931 containerd[1511]: time="2025-01-29T16:26:32.127899081Z" level=info msg="ImageCreate event name:\"sha256:feb26d4585d68e875d9bd9bd6c27ea9f2d5c9ed9ef70f8b8cb0ebb0559a1d664\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jan 29 16:26:32.129958 containerd[1511]: time="2025-01-29T16:26:32.129910915Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/node@sha256:99c3917516efe1f807a0cfdf2d14b628b7c5cc6bd8a9ee5a253154f31756bea1\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jan 29 16:26:32.130405 containerd[1511]: time="2025-01-29T16:26:32.130374561Z" level=info msg="Pulled image \"ghcr.io/flatcar/calico/node:v3.29.1\" with image id \"sha256:feb26d4585d68e875d9bd9bd6c27ea9f2d5c9ed9ef70f8b8cb0ebb0559a1d664\", repo tag \"ghcr.io/flatcar/calico/node:v3.29.1\", repo digest \"ghcr.io/flatcar/calico/node@sha256:99c3917516efe1f807a0cfdf2d14b628b7c5cc6bd8a9ee5a253154f31756bea1\", size \"142741872\" in 5.766058595s" Jan 29 16:26:32.130432 containerd[1511]: time="2025-01-29T16:26:32.130405120Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/node:v3.29.1\" returns image reference \"sha256:feb26d4585d68e875d9bd9bd6c27ea9f2d5c9ed9ef70f8b8cb0ebb0559a1d664\"" Jan 29 16:26:32.137675 containerd[1511]: time="2025-01-29T16:26:32.137636838Z" level=info msg="CreateContainer within sandbox \"2222abaf5a67530e1401aaf3eba7ee1e906c3d3426357c208f4776af86b27b3b\" for container &ContainerMetadata{Name:calico-node,Attempt:0,}" Jan 29 16:26:32.152286 containerd[1511]: time="2025-01-29T16:26:32.152246943Z" level=info msg="CreateContainer within sandbox \"2222abaf5a67530e1401aaf3eba7ee1e906c3d3426357c208f4776af86b27b3b\" for 
&ContainerMetadata{Name:calico-node,Attempt:0,} returns container id \"5259055c182394c690fbde1f11be7314a0f3d3978facd08e2431ea25ae939439\"" Jan 29 16:26:32.152735 containerd[1511]: time="2025-01-29T16:26:32.152689569Z" level=info msg="StartContainer for \"5259055c182394c690fbde1f11be7314a0f3d3978facd08e2431ea25ae939439\"" Jan 29 16:26:32.181872 systemd[1]: Started cri-containerd-5259055c182394c690fbde1f11be7314a0f3d3978facd08e2431ea25ae939439.scope - libcontainer container 5259055c182394c690fbde1f11be7314a0f3d3978facd08e2431ea25ae939439. Jan 29 16:26:32.215683 containerd[1511]: time="2025-01-29T16:26:32.215634818Z" level=info msg="StartContainer for \"5259055c182394c690fbde1f11be7314a0f3d3978facd08e2431ea25ae939439\" returns successfully" Jan 29 16:26:32.328449 kernel: wireguard: WireGuard 1.0.0 loaded. See www.wireguard.com for information. Jan 29 16:26:32.328572 kernel: wireguard: Copyright (C) 2015-2019 Jason A. Donenfeld <Jason@zx2c4.com>. All Rights Reserved. Jan 29 16:26:32.372653 systemd[1]: run-containerd-io.containerd.grpc.v1.cri-sandboxes-400417133a608367f0f160429588b3a5fdd64d6f988ed6b92498c5c4cd59973c-shm.mount: Deactivated successfully. Jan 29 16:26:32.372798 systemd[1]: run-containerd-io.containerd.grpc.v1.cri-sandboxes-6623c8a7c832d651c3aa1ef79082fd27e44019070bb4c878e903891b9d2749cf-shm.mount: Deactivated successfully. Jan 29 16:26:32.372895 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount903080936.mount: Deactivated successfully. 
Jan 29 16:26:32.402403 kubelet[1836]: E0129 16:26:32.402377 1836 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Jan 29 16:26:32.405562 kubelet[1836]: I0129 16:26:32.405527 1836 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="6623c8a7c832d651c3aa1ef79082fd27e44019070bb4c878e903891b9d2749cf"
Jan 29 16:26:32.405977 containerd[1511]: time="2025-01-29T16:26:32.405946087Z" level=info msg="StopPodSandbox for \"6623c8a7c832d651c3aa1ef79082fd27e44019070bb4c878e903891b9d2749cf\""
Jan 29 16:26:32.407889 containerd[1511]: time="2025-01-29T16:26:32.406134267Z" level=info msg="Ensure that sandbox 6623c8a7c832d651c3aa1ef79082fd27e44019070bb4c878e903891b9d2749cf in task-service has been cleanup successfully"
Jan 29 16:26:32.407889 containerd[1511]: time="2025-01-29T16:26:32.406537377Z" level=info msg="TearDown network for sandbox \"6623c8a7c832d651c3aa1ef79082fd27e44019070bb4c878e903891b9d2749cf\" successfully"
Jan 29 16:26:32.407889 containerd[1511]: time="2025-01-29T16:26:32.406551093Z" level=info msg="StopPodSandbox for \"6623c8a7c832d651c3aa1ef79082fd27e44019070bb4c878e903891b9d2749cf\" returns successfully"
Jan 29 16:26:32.407889 containerd[1511]: time="2025-01-29T16:26:32.407777477Z" level=info msg="StopPodSandbox for \"504b29301c6d060e8e0cce3b5c0c88c18609b640aaaf2d3136da0c37107c8cde\""
Jan 29 16:26:32.407889 containerd[1511]: time="2025-01-29T16:26:32.407866948Z" level=info msg="TearDown network for sandbox \"504b29301c6d060e8e0cce3b5c0c88c18609b640aaaf2d3136da0c37107c8cde\" successfully"
Jan 29 16:26:32.407889 containerd[1511]: time="2025-01-29T16:26:32.407881456Z" level=info msg="StopPodSandbox for \"504b29301c6d060e8e0cce3b5c0c88c18609b640aaaf2d3136da0c37107c8cde\" returns successfully"
Jan 29 16:26:32.408361 systemd[1]: run-netns-cni\x2dd7866b18\x2d6d24\x2d797b\x2d46f1\x2dc9dbf39072c8.mount: Deactivated successfully.
Jan 29 16:26:32.408622 containerd[1511]: time="2025-01-29T16:26:32.408328800Z" level=info msg="StopPodSandbox for \"823b60199170bed3f24aa396770a59c06672b7c1574810890698fa7b482ad0dd\""
Jan 29 16:26:32.408622 containerd[1511]: time="2025-01-29T16:26:32.408480931Z" level=info msg="TearDown network for sandbox \"823b60199170bed3f24aa396770a59c06672b7c1574810890698fa7b482ad0dd\" successfully"
Jan 29 16:26:32.408622 containerd[1511]: time="2025-01-29T16:26:32.408495889Z" level=info msg="StopPodSandbox for \"823b60199170bed3f24aa396770a59c06672b7c1574810890698fa7b482ad0dd\" returns successfully"
Jan 29 16:26:32.409030 containerd[1511]: time="2025-01-29T16:26:32.409005513Z" level=info msg="StopPodSandbox for \"e8c193de622ce73e832a3fda7a9bfabf864dadaaa147a9039e467ccbf7a365c7\""
Jan 29 16:26:32.409137 containerd[1511]: time="2025-01-29T16:26:32.409116785Z" level=info msg="TearDown network for sandbox \"e8c193de622ce73e832a3fda7a9bfabf864dadaaa147a9039e467ccbf7a365c7\" successfully"
Jan 29 16:26:32.409137 containerd[1511]: time="2025-01-29T16:26:32.409131514Z" level=info msg="StopPodSandbox for \"e8c193de622ce73e832a3fda7a9bfabf864dadaaa147a9039e467ccbf7a365c7\" returns successfully"
Jan 29 16:26:32.410190 containerd[1511]: time="2025-01-29T16:26:32.410157965Z" level=info msg="StopPodSandbox for \"cffaf574b277ef4135603b2b501e688b2c83bfacc0fe535cc819f8712de32824\""
Jan 29 16:26:32.410330 containerd[1511]: time="2025-01-29T16:26:32.410303633Z" level=info msg="TearDown network for sandbox \"cffaf574b277ef4135603b2b501e688b2c83bfacc0fe535cc819f8712de32824\" successfully"
Jan 29 16:26:32.410330 containerd[1511]: time="2025-01-29T16:26:32.410322099Z" level=info msg="StopPodSandbox for \"cffaf574b277ef4135603b2b501e688b2c83bfacc0fe535cc819f8712de32824\" returns successfully"
Jan 29 16:26:32.410572 containerd[1511]: time="2025-01-29T16:26:32.410544253Z" level=info msg="StopPodSandbox for \"afa262d39c5c6f6043a47d50e1d4bd36db26b4baad18c34639f502235890860f\""
Jan 29 16:26:32.410672 containerd[1511]: time="2025-01-29T16:26:32.410638623Z" level=info msg="TearDown network for sandbox \"afa262d39c5c6f6043a47d50e1d4bd36db26b4baad18c34639f502235890860f\" successfully"
Jan 29 16:26:32.410713 containerd[1511]: time="2025-01-29T16:26:32.410670534Z" level=info msg="StopPodSandbox for \"afa262d39c5c6f6043a47d50e1d4bd36db26b4baad18c34639f502235890860f\" returns successfully"
Jan 29 16:26:32.411270 containerd[1511]: time="2025-01-29T16:26:32.411241214Z" level=info msg="StopPodSandbox for \"6c04ed8c505d3c2699018369350b5a60291c70a0e570bba0d517856b46be2103\""
Jan 29 16:26:32.412973 containerd[1511]: time="2025-01-29T16:26:32.411342097Z" level=info msg="TearDown network for sandbox \"6c04ed8c505d3c2699018369350b5a60291c70a0e570bba0d517856b46be2103\" successfully"
Jan 29 16:26:32.412973 containerd[1511]: time="2025-01-29T16:26:32.411405278Z" level=info msg="StopPodSandbox for \"6c04ed8c505d3c2699018369350b5a60291c70a0e570bba0d517856b46be2103\" returns successfully"
Jan 29 16:26:32.413068 kubelet[1836]: I0129 16:26:32.412988 1836 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="400417133a608367f0f160429588b3a5fdd64d6f988ed6b92498c5c4cd59973c"
Jan 29 16:26:32.413713 containerd[1511]: time="2025-01-29T16:26:32.413686637Z" level=info msg="StopPodSandbox for \"400417133a608367f0f160429588b3a5fdd64d6f988ed6b92498c5c4cd59973c\""
Jan 29 16:26:32.413846 containerd[1511]: time="2025-01-29T16:26:32.413753496Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:csi-node-driver-4pfpp,Uid:ebfca53b-4c8c-4d66-9ac2-1e0da5e0f553,Namespace:calico-system,Attempt:7,}"
Jan 29 16:26:32.413957 containerd[1511]: time="2025-01-29T16:26:32.413931816Z" level=info msg="Ensure that sandbox 400417133a608367f0f160429588b3a5fdd64d6f988ed6b92498c5c4cd59973c in task-service has been cleanup successfully"
Jan 29 16:26:32.414756 containerd[1511]: time="2025-01-29T16:26:32.414709222Z" level=info msg="TearDown network for sandbox \"400417133a608367f0f160429588b3a5fdd64d6f988ed6b92498c5c4cd59973c\" successfully"
Jan 29 16:26:32.414756 containerd[1511]: time="2025-01-29T16:26:32.414734469Z" level=info msg="StopPodSandbox for \"400417133a608367f0f160429588b3a5fdd64d6f988ed6b92498c5c4cd59973c\" returns successfully"
Jan 29 16:26:32.415123 containerd[1511]: time="2025-01-29T16:26:32.415100489Z" level=info msg="StopPodSandbox for \"e163d3ac954194cd8144c2b133d7e41db5728e141349f8fa0295aabb409874b7\""
Jan 29 16:26:32.415764 kubelet[1836]: I0129 16:26:32.415309 1836 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="calico-system/calico-node-9sc8p" podStartSLOduration=4.206592309 podStartE2EDuration="22.415295792s" podCreationTimestamp="2025-01-29 16:26:10 +0000 UTC" firstStartedPulling="2025-01-29 16:26:13.922418577 +0000 UTC m=+5.643125636" lastFinishedPulling="2025-01-29 16:26:32.131122059 +0000 UTC m=+23.851829119" observedRunningTime="2025-01-29 16:26:32.415128011 +0000 UTC m=+24.135835080" watchObservedRunningTime="2025-01-29 16:26:32.415295792 +0000 UTC m=+24.136002851"
Jan 29 16:26:32.416024 containerd[1511]: time="2025-01-29T16:26:32.415420931Z" level=info msg="TearDown network for sandbox \"e163d3ac954194cd8144c2b133d7e41db5728e141349f8fa0295aabb409874b7\" successfully"
Jan 29 16:26:32.416157 containerd[1511]: time="2025-01-29T16:26:32.416079940Z" level=info msg="StopPodSandbox for \"e163d3ac954194cd8144c2b133d7e41db5728e141349f8fa0295aabb409874b7\" returns successfully"
Jan 29 16:26:32.416802 containerd[1511]: time="2025-01-29T16:26:32.416717228Z" level=info msg="StopPodSandbox for \"02d75da93423d92f36ecd20e0389bb3c66c200651ce5cea4662a1f1999f6aebc\""
Jan 29 16:26:32.416879 systemd[1]: run-netns-cni\x2df6a200a0\x2d03b7\x2def32\x2dc4e6\x2d4d397d097bcd.mount: Deactivated successfully.
Jan 29 16:26:32.417319 containerd[1511]: time="2025-01-29T16:26:32.417099007Z" level=info msg="TearDown network for sandbox \"02d75da93423d92f36ecd20e0389bb3c66c200651ce5cea4662a1f1999f6aebc\" successfully"
Jan 29 16:26:32.417319 containerd[1511]: time="2025-01-29T16:26:32.417126450Z" level=info msg="StopPodSandbox for \"02d75da93423d92f36ecd20e0389bb3c66c200651ce5cea4662a1f1999f6aebc\" returns successfully"
Jan 29 16:26:32.417973 containerd[1511]: time="2025-01-29T16:26:32.417930426Z" level=info msg="StopPodSandbox for \"bf31a2cdcb0fbdb2a27c9594ace24b451214b41a29d73adba02deb722d8cdcc7\""
Jan 29 16:26:32.418133 containerd[1511]: time="2025-01-29T16:26:32.418048923Z" level=info msg="TearDown network for sandbox \"bf31a2cdcb0fbdb2a27c9594ace24b451214b41a29d73adba02deb722d8cdcc7\" successfully"
Jan 29 16:26:32.418133 containerd[1511]: time="2025-01-29T16:26:32.418071666Z" level=info msg="StopPodSandbox for \"bf31a2cdcb0fbdb2a27c9594ace24b451214b41a29d73adba02deb722d8cdcc7\" returns successfully"
Jan 29 16:26:32.418632 containerd[1511]: time="2025-01-29T16:26:32.418460629Z" level=info msg="StopPodSandbox for \"8053056e315849889aa194b8a66cdbc30602f9f7d4e7b291ce8ae981ba0f2769\""
Jan 29 16:26:32.418632 containerd[1511]: time="2025-01-29T16:26:32.418561151Z" level=info msg="TearDown network for sandbox \"8053056e315849889aa194b8a66cdbc30602f9f7d4e7b291ce8ae981ba0f2769\" successfully"
Jan 29 16:26:32.418632 containerd[1511]: time="2025-01-29T16:26:32.418574396Z" level=info msg="StopPodSandbox for \"8053056e315849889aa194b8a66cdbc30602f9f7d4e7b291ce8ae981ba0f2769\" returns successfully"
Jan 29 16:26:32.419142 containerd[1511]: time="2025-01-29T16:26:32.419108537Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:nginx-deployment-8587fbcb89-w488j,Uid:90a3e9fd-1da6-44cf-aa2e-ea467e1b0e35,Namespace:default,Attempt:5,}"
Jan 29 16:26:32.603562 systemd-networkd[1447]: cali67957f0f82e: Link UP
Jan 29 16:26:32.603791 systemd-networkd[1447]: cali67957f0f82e: Gained carrier
Jan 29 16:26:32.615175 containerd[1511]: 2025-01-29 16:26:32.452 [INFO][2807] cni-plugin/utils.go 100: File /var/lib/calico/mtu does not exist
Jan 29 16:26:32.615175 containerd[1511]: 2025-01-29 16:26:32.463 [INFO][2807] cni-plugin/plugin.go 325: Calico CNI found existing endpoint: &{{WorkloadEndpoint projectcalico.org/v3} {10.0.0.149-k8s-csi--node--driver--4pfpp-eth0 csi-node-driver- calico-system ebfca53b-4c8c-4d66-9ac2-1e0da5e0f553 848 0 2025-01-29 16:26:10 +0000 UTC map[app.kubernetes.io/name:csi-node-driver controller-revision-hash:56747c9949 k8s-app:csi-node-driver name:csi-node-driver pod-template-generation:1 projectcalico.org/namespace:calico-system projectcalico.org/orchestrator:k8s projectcalico.org/serviceaccount:csi-node-driver] map[] [] [] []} {k8s 10.0.0.149 csi-node-driver-4pfpp eth0 csi-node-driver [] [] [kns.calico-system ksa.calico-system.csi-node-driver] cali67957f0f82e [] []}} ContainerID="60caea6b78d53cc6839b4f2b3de3b91bd12b43732d596ac1bc430c8316290f9a" Namespace="calico-system" Pod="csi-node-driver-4pfpp" WorkloadEndpoint="10.0.0.149-k8s-csi--node--driver--4pfpp-"
Jan 29 16:26:32.615175 containerd[1511]: 2025-01-29 16:26:32.464 [INFO][2807] cni-plugin/k8s.go 77: Extracted identifiers for CmdAddK8s ContainerID="60caea6b78d53cc6839b4f2b3de3b91bd12b43732d596ac1bc430c8316290f9a" Namespace="calico-system" Pod="csi-node-driver-4pfpp" WorkloadEndpoint="10.0.0.149-k8s-csi--node--driver--4pfpp-eth0"
Jan 29 16:26:32.615175 containerd[1511]: 2025-01-29 16:26:32.516 [INFO][2834] ipam/ipam_plugin.go 225: Calico CNI IPAM request count IPv4=1 IPv6=0 ContainerID="60caea6b78d53cc6839b4f2b3de3b91bd12b43732d596ac1bc430c8316290f9a" HandleID="k8s-pod-network.60caea6b78d53cc6839b4f2b3de3b91bd12b43732d596ac1bc430c8316290f9a" Workload="10.0.0.149-k8s-csi--node--driver--4pfpp-eth0"
Jan 29 16:26:32.615175 containerd[1511]: 2025-01-29 16:26:32.542 [INFO][2834] ipam/ipam_plugin.go 265: Auto assigning IP ContainerID="60caea6b78d53cc6839b4f2b3de3b91bd12b43732d596ac1bc430c8316290f9a" HandleID="k8s-pod-network.60caea6b78d53cc6839b4f2b3de3b91bd12b43732d596ac1bc430c8316290f9a" Workload="10.0.0.149-k8s-csi--node--driver--4pfpp-eth0" assignArgs=ipam.AutoAssignArgs{Num4:1, Num6:0, HandleID:(*string)(0xc0002fa0d0), Attrs:map[string]string{"namespace":"calico-system", "node":"10.0.0.149", "pod":"csi-node-driver-4pfpp", "timestamp":"2025-01-29 16:26:32.51644588 +0000 UTC"}, Hostname:"10.0.0.149", IPv4Pools:[]net.IPNet{}, IPv6Pools:[]net.IPNet{}, MaxBlocksPerHost:0, HostReservedAttrIPv4s:(*ipam.HostReservedAttr)(nil), HostReservedAttrIPv6s:(*ipam.HostReservedAttr)(nil), IntendedUse:"Workload"}
Jan 29 16:26:32.615175 containerd[1511]: 2025-01-29 16:26:32.542 [INFO][2834] ipam/ipam_plugin.go 353: About to acquire host-wide IPAM lock.
Jan 29 16:26:32.615175 containerd[1511]: 2025-01-29 16:26:32.542 [INFO][2834] ipam/ipam_plugin.go 368: Acquired host-wide IPAM lock.
Jan 29 16:26:32.615175 containerd[1511]: 2025-01-29 16:26:32.542 [INFO][2834] ipam/ipam.go 107: Auto-assign 1 ipv4, 0 ipv6 addrs for host '10.0.0.149'
Jan 29 16:26:32.615175 containerd[1511]: 2025-01-29 16:26:32.570 [INFO][2834] ipam/ipam.go 660: Looking up existing affinities for host handle="k8s-pod-network.60caea6b78d53cc6839b4f2b3de3b91bd12b43732d596ac1bc430c8316290f9a" host="10.0.0.149"
Jan 29 16:26:32.615175 containerd[1511]: 2025-01-29 16:26:32.573 [INFO][2834] ipam/ipam.go 372: Looking up existing affinities for host host="10.0.0.149"
Jan 29 16:26:32.615175 containerd[1511]: 2025-01-29 16:26:32.579 [INFO][2834] ipam/ipam.go 489: Trying affinity for 192.168.60.0/26 host="10.0.0.149"
Jan 29 16:26:32.615175 containerd[1511]: 2025-01-29 16:26:32.580 [INFO][2834] ipam/ipam.go 155: Attempting to load block cidr=192.168.60.0/26 host="10.0.0.149"
Jan 29 16:26:32.615175 containerd[1511]: 2025-01-29 16:26:32.582 [INFO][2834] ipam/ipam.go 232: Affinity is confirmed and block has been loaded cidr=192.168.60.0/26 host="10.0.0.149"
Jan 29 16:26:32.615175 containerd[1511]: 2025-01-29 16:26:32.582 [INFO][2834] ipam/ipam.go 1180: Attempting to assign 1 addresses from block block=192.168.60.0/26 handle="k8s-pod-network.60caea6b78d53cc6839b4f2b3de3b91bd12b43732d596ac1bc430c8316290f9a" host="10.0.0.149"
Jan 29 16:26:32.615175 containerd[1511]: 2025-01-29 16:26:32.583 [INFO][2834] ipam/ipam.go 1685: Creating new handle: k8s-pod-network.60caea6b78d53cc6839b4f2b3de3b91bd12b43732d596ac1bc430c8316290f9a
Jan 29 16:26:32.615175 containerd[1511]: 2025-01-29 16:26:32.588 [INFO][2834] ipam/ipam.go 1203: Writing block in order to claim IPs block=192.168.60.0/26 handle="k8s-pod-network.60caea6b78d53cc6839b4f2b3de3b91bd12b43732d596ac1bc430c8316290f9a" host="10.0.0.149"
Jan 29 16:26:32.615175 containerd[1511]: 2025-01-29 16:26:32.592 [INFO][2834] ipam/ipam.go 1216: Successfully claimed IPs: [192.168.60.1/26] block=192.168.60.0/26 handle="k8s-pod-network.60caea6b78d53cc6839b4f2b3de3b91bd12b43732d596ac1bc430c8316290f9a" host="10.0.0.149"
Jan 29 16:26:32.615175 containerd[1511]: 2025-01-29 16:26:32.592 [INFO][2834] ipam/ipam.go 847: Auto-assigned 1 out of 1 IPv4s: [192.168.60.1/26] handle="k8s-pod-network.60caea6b78d53cc6839b4f2b3de3b91bd12b43732d596ac1bc430c8316290f9a" host="10.0.0.149"
Jan 29 16:26:32.615175 containerd[1511]: 2025-01-29 16:26:32.592 [INFO][2834] ipam/ipam_plugin.go 374: Released host-wide IPAM lock.
Jan 29 16:26:32.615175 containerd[1511]: 2025-01-29 16:26:32.592 [INFO][2834] ipam/ipam_plugin.go 283: Calico CNI IPAM assigned addresses IPv4=[192.168.60.1/26] IPv6=[] ContainerID="60caea6b78d53cc6839b4f2b3de3b91bd12b43732d596ac1bc430c8316290f9a" HandleID="k8s-pod-network.60caea6b78d53cc6839b4f2b3de3b91bd12b43732d596ac1bc430c8316290f9a" Workload="10.0.0.149-k8s-csi--node--driver--4pfpp-eth0"
Jan 29 16:26:32.615858 containerd[1511]: 2025-01-29 16:26:32.597 [INFO][2807] cni-plugin/k8s.go 386: Populated endpoint ContainerID="60caea6b78d53cc6839b4f2b3de3b91bd12b43732d596ac1bc430c8316290f9a" Namespace="calico-system" Pod="csi-node-driver-4pfpp" WorkloadEndpoint="10.0.0.149-k8s-csi--node--driver--4pfpp-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"10.0.0.149-k8s-csi--node--driver--4pfpp-eth0", GenerateName:"csi-node-driver-", Namespace:"calico-system", SelfLink:"", UID:"ebfca53b-4c8c-4d66-9ac2-1e0da5e0f553", ResourceVersion:"848", Generation:0, CreationTimestamp:time.Date(2025, time.January, 29, 16, 26, 10, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"app.kubernetes.io/name":"csi-node-driver", "controller-revision-hash":"56747c9949", "k8s-app":"csi-node-driver", "name":"csi-node-driver", "pod-template-generation":"1", "projectcalico.org/namespace":"calico-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"csi-node-driver"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"10.0.0.149", ContainerID:"", Pod:"csi-node-driver-4pfpp", Endpoint:"eth0", ServiceAccountName:"csi-node-driver", IPNetworks:[]string{"192.168.60.1/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-system", "ksa.calico-system.csi-node-driver"}, InterfaceName:"cali67957f0f82e", MAC:"", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil)}}
Jan 29 16:26:32.615858 containerd[1511]: 2025-01-29 16:26:32.597 [INFO][2807] cni-plugin/k8s.go 387: Calico CNI using IPs: [192.168.60.1/32] ContainerID="60caea6b78d53cc6839b4f2b3de3b91bd12b43732d596ac1bc430c8316290f9a" Namespace="calico-system" Pod="csi-node-driver-4pfpp" WorkloadEndpoint="10.0.0.149-k8s-csi--node--driver--4pfpp-eth0"
Jan 29 16:26:32.615858 containerd[1511]: 2025-01-29 16:26:32.597 [INFO][2807] cni-plugin/dataplane_linux.go 69: Setting the host side veth name to cali67957f0f82e ContainerID="60caea6b78d53cc6839b4f2b3de3b91bd12b43732d596ac1bc430c8316290f9a" Namespace="calico-system" Pod="csi-node-driver-4pfpp" WorkloadEndpoint="10.0.0.149-k8s-csi--node--driver--4pfpp-eth0"
Jan 29 16:26:32.615858 containerd[1511]: 2025-01-29 16:26:32.603 [INFO][2807] cni-plugin/dataplane_linux.go 508: Disabling IPv4 forwarding ContainerID="60caea6b78d53cc6839b4f2b3de3b91bd12b43732d596ac1bc430c8316290f9a" Namespace="calico-system" Pod="csi-node-driver-4pfpp" WorkloadEndpoint="10.0.0.149-k8s-csi--node--driver--4pfpp-eth0"
Jan 29 16:26:32.615858 containerd[1511]: 2025-01-29 16:26:32.603 [INFO][2807] cni-plugin/k8s.go 414: Added Mac, interface name, and active container ID to endpoint ContainerID="60caea6b78d53cc6839b4f2b3de3b91bd12b43732d596ac1bc430c8316290f9a" Namespace="calico-system" Pod="csi-node-driver-4pfpp" WorkloadEndpoint="10.0.0.149-k8s-csi--node--driver--4pfpp-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"10.0.0.149-k8s-csi--node--driver--4pfpp-eth0", GenerateName:"csi-node-driver-", Namespace:"calico-system", SelfLink:"", UID:"ebfca53b-4c8c-4d66-9ac2-1e0da5e0f553", ResourceVersion:"848", Generation:0, CreationTimestamp:time.Date(2025, time.January, 29, 16, 26, 10, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"app.kubernetes.io/name":"csi-node-driver", "controller-revision-hash":"56747c9949", "k8s-app":"csi-node-driver", "name":"csi-node-driver", "pod-template-generation":"1", "projectcalico.org/namespace":"calico-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"csi-node-driver"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"10.0.0.149", ContainerID:"60caea6b78d53cc6839b4f2b3de3b91bd12b43732d596ac1bc430c8316290f9a", Pod:"csi-node-driver-4pfpp", Endpoint:"eth0", ServiceAccountName:"csi-node-driver", IPNetworks:[]string{"192.168.60.1/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-system", "ksa.calico-system.csi-node-driver"}, InterfaceName:"cali67957f0f82e", MAC:"aa:86:13:91:91:f4", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil)}}
Jan 29 16:26:32.615858 containerd[1511]: 2025-01-29 16:26:32.612 [INFO][2807] cni-plugin/k8s.go 500: Wrote updated endpoint to datastore ContainerID="60caea6b78d53cc6839b4f2b3de3b91bd12b43732d596ac1bc430c8316290f9a" Namespace="calico-system" Pod="csi-node-driver-4pfpp" WorkloadEndpoint="10.0.0.149-k8s-csi--node--driver--4pfpp-eth0"
Jan 29 16:26:32.634652 containerd[1511]: time="2025-01-29T16:26:32.634530756Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1
Jan 29 16:26:32.634652 containerd[1511]: time="2025-01-29T16:26:32.634597653Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1
Jan 29 16:26:32.634652 containerd[1511]: time="2025-01-29T16:26:32.634611801Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
Jan 29 16:26:32.634880 containerd[1511]: time="2025-01-29T16:26:32.634726370Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
Jan 29 16:26:32.654892 systemd[1]: Started cri-containerd-60caea6b78d53cc6839b4f2b3de3b91bd12b43732d596ac1bc430c8316290f9a.scope - libcontainer container 60caea6b78d53cc6839b4f2b3de3b91bd12b43732d596ac1bc430c8316290f9a.
Jan 29 16:26:32.668974 systemd-resolved[1342]: Failed to determine the local hostname and LLMNR/mDNS names, ignoring: No such device or address
Jan 29 16:26:32.683745 containerd[1511]: time="2025-01-29T16:26:32.683697689Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:csi-node-driver-4pfpp,Uid:ebfca53b-4c8c-4d66-9ac2-1e0da5e0f553,Namespace:calico-system,Attempt:7,} returns sandbox id \"60caea6b78d53cc6839b4f2b3de3b91bd12b43732d596ac1bc430c8316290f9a\""
Jan 29 16:26:32.686188 containerd[1511]: time="2025-01-29T16:26:32.686127442Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/csi:v3.29.1\""
Jan 29 16:26:32.697447 systemd-networkd[1447]: calibf36790c06a: Link UP
Jan 29 16:26:32.698519 systemd-networkd[1447]: calibf36790c06a: Gained carrier
Jan 29 16:26:32.706514 containerd[1511]: 2025-01-29 16:26:32.479 [INFO][2821] cni-plugin/utils.go 100: File /var/lib/calico/mtu does not exist
Jan 29 16:26:32.706514 containerd[1511]: 2025-01-29 16:26:32.489 [INFO][2821] cni-plugin/plugin.go 325: Calico CNI found existing endpoint: &{{WorkloadEndpoint projectcalico.org/v3} {10.0.0.149-k8s-nginx--deployment--8587fbcb89--w488j-eth0 nginx-deployment-8587fbcb89- default 90a3e9fd-1da6-44cf-aa2e-ea467e1b0e35 1024 0 2025-01-29 16:26:27 +0000 UTC map[app:nginx pod-template-hash:8587fbcb89 projectcalico.org/namespace:default projectcalico.org/orchestrator:k8s projectcalico.org/serviceaccount:default] map[] [] [] []} {k8s 10.0.0.149 nginx-deployment-8587fbcb89-w488j eth0 default [] [] [kns.default ksa.default.default] calibf36790c06a [] []}} ContainerID="cdab2c58eb3a1daba590f054199af22e924350b915da922fa8cf7b59ca8e2443" Namespace="default" Pod="nginx-deployment-8587fbcb89-w488j" WorkloadEndpoint="10.0.0.149-k8s-nginx--deployment--8587fbcb89--w488j-"
Jan 29 16:26:32.706514 containerd[1511]: 2025-01-29 16:26:32.489 [INFO][2821] cni-plugin/k8s.go 77: Extracted identifiers for CmdAddK8s ContainerID="cdab2c58eb3a1daba590f054199af22e924350b915da922fa8cf7b59ca8e2443" Namespace="default" Pod="nginx-deployment-8587fbcb89-w488j" WorkloadEndpoint="10.0.0.149-k8s-nginx--deployment--8587fbcb89--w488j-eth0"
Jan 29 16:26:32.706514 containerd[1511]: 2025-01-29 16:26:32.529 [INFO][2841] ipam/ipam_plugin.go 225: Calico CNI IPAM request count IPv4=1 IPv6=0 ContainerID="cdab2c58eb3a1daba590f054199af22e924350b915da922fa8cf7b59ca8e2443" HandleID="k8s-pod-network.cdab2c58eb3a1daba590f054199af22e924350b915da922fa8cf7b59ca8e2443" Workload="10.0.0.149-k8s-nginx--deployment--8587fbcb89--w488j-eth0"
Jan 29 16:26:32.706514 containerd[1511]: 2025-01-29 16:26:32.542 [INFO][2841] ipam/ipam_plugin.go 265: Auto assigning IP ContainerID="cdab2c58eb3a1daba590f054199af22e924350b915da922fa8cf7b59ca8e2443" HandleID="k8s-pod-network.cdab2c58eb3a1daba590f054199af22e924350b915da922fa8cf7b59ca8e2443" Workload="10.0.0.149-k8s-nginx--deployment--8587fbcb89--w488j-eth0" assignArgs=ipam.AutoAssignArgs{Num4:1, Num6:0, HandleID:(*string)(0xc00012d6a0), Attrs:map[string]string{"namespace":"default", "node":"10.0.0.149", "pod":"nginx-deployment-8587fbcb89-w488j", "timestamp":"2025-01-29 16:26:32.529049272 +0000 UTC"}, Hostname:"10.0.0.149", IPv4Pools:[]net.IPNet{}, IPv6Pools:[]net.IPNet{}, MaxBlocksPerHost:0, HostReservedAttrIPv4s:(*ipam.HostReservedAttr)(nil), HostReservedAttrIPv6s:(*ipam.HostReservedAttr)(nil), IntendedUse:"Workload"}
Jan 29 16:26:32.706514 containerd[1511]: 2025-01-29 16:26:32.542 [INFO][2841] ipam/ipam_plugin.go 353: About to acquire host-wide IPAM lock.
Jan 29 16:26:32.706514 containerd[1511]: 2025-01-29 16:26:32.593 [INFO][2841] ipam/ipam_plugin.go 368: Acquired host-wide IPAM lock.
Jan 29 16:26:32.706514 containerd[1511]: 2025-01-29 16:26:32.593 [INFO][2841] ipam/ipam.go 107: Auto-assign 1 ipv4, 0 ipv6 addrs for host '10.0.0.149'
Jan 29 16:26:32.706514 containerd[1511]: 2025-01-29 16:26:32.671 [INFO][2841] ipam/ipam.go 660: Looking up existing affinities for host handle="k8s-pod-network.cdab2c58eb3a1daba590f054199af22e924350b915da922fa8cf7b59ca8e2443" host="10.0.0.149"
Jan 29 16:26:32.706514 containerd[1511]: 2025-01-29 16:26:32.676 [INFO][2841] ipam/ipam.go 372: Looking up existing affinities for host host="10.0.0.149"
Jan 29 16:26:32.706514 containerd[1511]: 2025-01-29 16:26:32.680 [INFO][2841] ipam/ipam.go 489: Trying affinity for 192.168.60.0/26 host="10.0.0.149"
Jan 29 16:26:32.706514 containerd[1511]: 2025-01-29 16:26:32.681 [INFO][2841] ipam/ipam.go 155: Attempting to load block cidr=192.168.60.0/26 host="10.0.0.149"
Jan 29 16:26:32.706514 containerd[1511]: 2025-01-29 16:26:32.683 [INFO][2841] ipam/ipam.go 232: Affinity is confirmed and block has been loaded cidr=192.168.60.0/26 host="10.0.0.149"
Jan 29 16:26:32.706514 containerd[1511]: 2025-01-29 16:26:32.683 [INFO][2841] ipam/ipam.go 1180: Attempting to assign 1 addresses from block block=192.168.60.0/26 handle="k8s-pod-network.cdab2c58eb3a1daba590f054199af22e924350b915da922fa8cf7b59ca8e2443" host="10.0.0.149"
Jan 29 16:26:32.706514 containerd[1511]: 2025-01-29 16:26:32.684 [INFO][2841] ipam/ipam.go 1685: Creating new handle: k8s-pod-network.cdab2c58eb3a1daba590f054199af22e924350b915da922fa8cf7b59ca8e2443
Jan 29 16:26:32.706514 containerd[1511]: 2025-01-29 16:26:32.688 [INFO][2841] ipam/ipam.go 1203: Writing block in order to claim IPs block=192.168.60.0/26 handle="k8s-pod-network.cdab2c58eb3a1daba590f054199af22e924350b915da922fa8cf7b59ca8e2443" host="10.0.0.149"
Jan 29 16:26:32.706514 containerd[1511]: 2025-01-29 16:26:32.692 [INFO][2841] ipam/ipam.go 1216: Successfully claimed IPs: [192.168.60.2/26] block=192.168.60.0/26 handle="k8s-pod-network.cdab2c58eb3a1daba590f054199af22e924350b915da922fa8cf7b59ca8e2443" host="10.0.0.149"
Jan 29 16:26:32.706514 containerd[1511]: 2025-01-29 16:26:32.692 [INFO][2841] ipam/ipam.go 847: Auto-assigned 1 out of 1 IPv4s: [192.168.60.2/26] handle="k8s-pod-network.cdab2c58eb3a1daba590f054199af22e924350b915da922fa8cf7b59ca8e2443" host="10.0.0.149"
Jan 29 16:26:32.706514 containerd[1511]: 2025-01-29 16:26:32.692 [INFO][2841] ipam/ipam_plugin.go 374: Released host-wide IPAM lock.
Jan 29 16:26:32.706514 containerd[1511]: 2025-01-29 16:26:32.692 [INFO][2841] ipam/ipam_plugin.go 283: Calico CNI IPAM assigned addresses IPv4=[192.168.60.2/26] IPv6=[] ContainerID="cdab2c58eb3a1daba590f054199af22e924350b915da922fa8cf7b59ca8e2443" HandleID="k8s-pod-network.cdab2c58eb3a1daba590f054199af22e924350b915da922fa8cf7b59ca8e2443" Workload="10.0.0.149-k8s-nginx--deployment--8587fbcb89--w488j-eth0"
Jan 29 16:26:32.707277 containerd[1511]: 2025-01-29 16:26:32.695 [INFO][2821] cni-plugin/k8s.go 386: Populated endpoint ContainerID="cdab2c58eb3a1daba590f054199af22e924350b915da922fa8cf7b59ca8e2443" Namespace="default" Pod="nginx-deployment-8587fbcb89-w488j" WorkloadEndpoint="10.0.0.149-k8s-nginx--deployment--8587fbcb89--w488j-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"10.0.0.149-k8s-nginx--deployment--8587fbcb89--w488j-eth0", GenerateName:"nginx-deployment-8587fbcb89-", Namespace:"default", SelfLink:"", UID:"90a3e9fd-1da6-44cf-aa2e-ea467e1b0e35", ResourceVersion:"1024", Generation:0, CreationTimestamp:time.Date(2025, time.January, 29, 16, 26, 27, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"app":"nginx", "pod-template-hash":"8587fbcb89", "projectcalico.org/namespace":"default", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"default"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"10.0.0.149", ContainerID:"", Pod:"nginx-deployment-8587fbcb89-w488j", Endpoint:"eth0", ServiceAccountName:"default", IPNetworks:[]string{"192.168.60.2/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.default", "ksa.default.default"}, InterfaceName:"calibf36790c06a", MAC:"", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil)}}
Jan 29 16:26:32.707277 containerd[1511]: 2025-01-29 16:26:32.695 [INFO][2821] cni-plugin/k8s.go 387: Calico CNI using IPs: [192.168.60.2/32] ContainerID="cdab2c58eb3a1daba590f054199af22e924350b915da922fa8cf7b59ca8e2443" Namespace="default" Pod="nginx-deployment-8587fbcb89-w488j" WorkloadEndpoint="10.0.0.149-k8s-nginx--deployment--8587fbcb89--w488j-eth0"
Jan 29 16:26:32.707277 containerd[1511]: 2025-01-29 16:26:32.695 [INFO][2821] cni-plugin/dataplane_linux.go 69: Setting the host side veth name to calibf36790c06a ContainerID="cdab2c58eb3a1daba590f054199af22e924350b915da922fa8cf7b59ca8e2443" Namespace="default" Pod="nginx-deployment-8587fbcb89-w488j" WorkloadEndpoint="10.0.0.149-k8s-nginx--deployment--8587fbcb89--w488j-eth0"
Jan 29 16:26:32.707277 containerd[1511]: 2025-01-29 16:26:32.697 [INFO][2821] cni-plugin/dataplane_linux.go 508: Disabling IPv4 forwarding ContainerID="cdab2c58eb3a1daba590f054199af22e924350b915da922fa8cf7b59ca8e2443" Namespace="default" Pod="nginx-deployment-8587fbcb89-w488j" WorkloadEndpoint="10.0.0.149-k8s-nginx--deployment--8587fbcb89--w488j-eth0"
Jan 29 16:26:32.707277 containerd[1511]: 2025-01-29 16:26:32.698 [INFO][2821] cni-plugin/k8s.go 414: Added Mac, interface name, and active container ID to endpoint ContainerID="cdab2c58eb3a1daba590f054199af22e924350b915da922fa8cf7b59ca8e2443" Namespace="default" Pod="nginx-deployment-8587fbcb89-w488j" WorkloadEndpoint="10.0.0.149-k8s-nginx--deployment--8587fbcb89--w488j-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"10.0.0.149-k8s-nginx--deployment--8587fbcb89--w488j-eth0", GenerateName:"nginx-deployment-8587fbcb89-", Namespace:"default", SelfLink:"", UID:"90a3e9fd-1da6-44cf-aa2e-ea467e1b0e35", ResourceVersion:"1024", Generation:0, CreationTimestamp:time.Date(2025, time.January, 29, 16, 26, 27, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"app":"nginx", "pod-template-hash":"8587fbcb89", "projectcalico.org/namespace":"default", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"default"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"10.0.0.149", ContainerID:"cdab2c58eb3a1daba590f054199af22e924350b915da922fa8cf7b59ca8e2443", Pod:"nginx-deployment-8587fbcb89-w488j", Endpoint:"eth0", ServiceAccountName:"default", IPNetworks:[]string{"192.168.60.2/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.default", "ksa.default.default"}, InterfaceName:"calibf36790c06a", MAC:"da:17:ae:c9:26:40", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil)}}
Jan 29 16:26:32.707277 containerd[1511]: 2025-01-29 16:26:32.703 [INFO][2821] cni-plugin/k8s.go 500: Wrote updated endpoint to datastore ContainerID="cdab2c58eb3a1daba590f054199af22e924350b915da922fa8cf7b59ca8e2443" Namespace="default" Pod="nginx-deployment-8587fbcb89-w488j" WorkloadEndpoint="10.0.0.149-k8s-nginx--deployment--8587fbcb89--w488j-eth0"
Jan 29 16:26:32.730462 containerd[1511]: time="2025-01-29T16:26:32.730355809Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1
Jan 29 16:26:32.730462 containerd[1511]: time="2025-01-29T16:26:32.730425933Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1
Jan 29 16:26:32.730462 containerd[1511]: time="2025-01-29T16:26:32.730440631Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
Jan 29 16:26:32.730695 containerd[1511]: time="2025-01-29T16:26:32.730530714Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
Jan 29 16:26:32.753796 systemd[1]: Started cri-containerd-cdab2c58eb3a1daba590f054199af22e924350b915da922fa8cf7b59ca8e2443.scope - libcontainer container cdab2c58eb3a1daba590f054199af22e924350b915da922fa8cf7b59ca8e2443.
Jan 29 16:26:32.766231 systemd-resolved[1342]: Failed to determine the local hostname and LLMNR/mDNS names, ignoring: No such device or address
Jan 29 16:26:32.789385 containerd[1511]: time="2025-01-29T16:26:32.789342897Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:nginx-deployment-8587fbcb89-w488j,Uid:90a3e9fd-1da6-44cf-aa2e-ea467e1b0e35,Namespace:default,Attempt:5,} returns sandbox id \"cdab2c58eb3a1daba590f054199af22e924350b915da922fa8cf7b59ca8e2443\""
Jan 29 16:26:33.098762 kubelet[1836]: E0129 16:26:33.098695 1836 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests"
Jan 29 16:26:33.780679 kernel: bpftool[3076]: memfd_create() called without MFD_EXEC or MFD_NOEXEC_SEAL set
Jan 29 16:26:33.918839 systemd-networkd[1447]: cali67957f0f82e: Gained IPv6LL
Jan 29 16:26:34.038424 systemd-networkd[1447]: vxlan.calico: Link UP
Jan 29 16:26:34.038437 systemd-networkd[1447]: vxlan.calico: Gained carrier
Jan 29 16:26:34.099631 kubelet[1836]: E0129 16:26:34.099602 1836 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests"
Jan 29 16:26:34.174794 systemd-networkd[1447]: calibf36790c06a: Gained IPv6LL
Jan 29 16:26:34.347571 containerd[1511]: time="2025-01-29T16:26:34.346376533Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/csi:v3.29.1\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Jan 29 16:26:34.348123 containerd[1511]: time="2025-01-29T16:26:34.348091573Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/csi:v3.29.1: active requests=0, bytes read=7902632"
Jan 29 16:26:34.349384 containerd[1511]: time="2025-01-29T16:26:34.349350413Z" level=info msg="ImageCreate event name:\"sha256:bda8c42e04758c4f061339e213f50ccdc7502c4176fbf631aa12357e62b63540\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Jan 29 16:26:34.351976 containerd[1511]: time="2025-01-29T16:26:34.351936733Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/csi@sha256:eaa7e01fb16b603c155a67b81f16992281db7f831684c7b2081d3434587a7ff3\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Jan 29 16:26:34.352741 containerd[1511]: time="2025-01-29T16:26:34.352714296Z" level=info msg="Pulled image \"ghcr.io/flatcar/calico/csi:v3.29.1\" with image id \"sha256:bda8c42e04758c4f061339e213f50ccdc7502c4176fbf631aa12357e62b63540\", repo tag \"ghcr.io/flatcar/calico/csi:v3.29.1\", repo digest \"ghcr.io/flatcar/calico/csi@sha256:eaa7e01fb16b603c155a67b81f16992281db7f831684c7b2081d3434587a7ff3\", size \"9395716\" in 1.666523021s"
Jan 29 16:26:34.352779 containerd[1511]: time="2025-01-29T16:26:34.352747720Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/csi:v3.29.1\" returns image reference \"sha256:bda8c42e04758c4f061339e213f50ccdc7502c4176fbf631aa12357e62b63540\""
Jan 29 16:26:34.354673 containerd[1511]: time="2025-01-29T16:26:34.354613858Z" level=info msg="PullImage \"ghcr.io/flatcar/nginx:latest\""
Jan 29 16:26:34.355677 containerd[1511]: time="2025-01-29T16:26:34.355622200Z" level=info msg="CreateContainer within sandbox \"60caea6b78d53cc6839b4f2b3de3b91bd12b43732d596ac1bc430c8316290f9a\" for container &ContainerMetadata{Name:calico-csi,Attempt:0,}"
Jan 29 16:26:34.372850 containerd[1511]: time="2025-01-29T16:26:34.372808554Z" level=info msg="CreateContainer within sandbox \"60caea6b78d53cc6839b4f2b3de3b91bd12b43732d596ac1bc430c8316290f9a\" for &ContainerMetadata{Name:calico-csi,Attempt:0,} returns container id \"359fb70d222460cbde1a423a6670b1bea91d823f58e1a429f3a822994da59a1e\""
Jan 29 16:26:34.373490 containerd[1511]: time="2025-01-29T16:26:34.373340148Z" level=info msg="StartContainer for \"359fb70d222460cbde1a423a6670b1bea91d823f58e1a429f3a822994da59a1e\""
Jan 29 16:26:34.409199 systemd[1]: run-containerd-runc-k8s.io-359fb70d222460cbde1a423a6670b1bea91d823f58e1a429f3a822994da59a1e-runc.YuflnL.mount: Deactivated successfully.
Jan 29 16:26:34.423831 systemd[1]: Started cri-containerd-359fb70d222460cbde1a423a6670b1bea91d823f58e1a429f3a822994da59a1e.scope - libcontainer container 359fb70d222460cbde1a423a6670b1bea91d823f58e1a429f3a822994da59a1e. Jan 29 16:26:34.525279 containerd[1511]: time="2025-01-29T16:26:34.525205808Z" level=info msg="StartContainer for \"359fb70d222460cbde1a423a6670b1bea91d823f58e1a429f3a822994da59a1e\" returns successfully" Jan 29 16:26:35.100390 kubelet[1836]: E0129 16:26:35.100326 1836 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Jan 29 16:26:35.546269 kubelet[1836]: I0129 16:26:35.546128 1836 prober_manager.go:312] "Failed to trigger a manual run" probe="Readiness" Jan 29 16:26:35.546621 kubelet[1836]: E0129 16:26:35.546603 1836 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jan 29 16:26:35.902805 systemd-networkd[1447]: vxlan.calico: Gained IPv6LL Jan 29 16:26:36.101044 kubelet[1836]: E0129 16:26:36.100981 1836 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Jan 29 16:26:37.101734 kubelet[1836]: E0129 16:26:37.101680 1836 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Jan 29 16:26:37.738975 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount1608630769.mount: Deactivated successfully. 
Jan 29 16:26:38.101897 kubelet[1836]: E0129 16:26:38.101846 1836 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Jan 29 16:26:39.102404 kubelet[1836]: E0129 16:26:39.102351 1836 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Jan 29 16:26:39.734516 containerd[1511]: time="2025-01-29T16:26:39.734450281Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/nginx:latest\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jan 29 16:26:39.737824 containerd[1511]: time="2025-01-29T16:26:39.737758499Z" level=info msg="stop pulling image ghcr.io/flatcar/nginx:latest: active requests=0, bytes read=71015561" Jan 29 16:26:39.739175 containerd[1511]: time="2025-01-29T16:26:39.739139199Z" level=info msg="ImageCreate event name:\"sha256:0dcfd986e814f68db775fba6b61fbaec3761562dc2ae3043d38dbff123e1bb1e\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jan 29 16:26:39.741952 containerd[1511]: time="2025-01-29T16:26:39.741924165Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/nginx@sha256:2ffeb5a7ca68f2017f0bc48251750a6e40fcd3c341b94a22fc7812dcabbb84db\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jan 29 16:26:39.742754 containerd[1511]: time="2025-01-29T16:26:39.742723412Z" level=info msg="Pulled image \"ghcr.io/flatcar/nginx:latest\" with image id \"sha256:0dcfd986e814f68db775fba6b61fbaec3761562dc2ae3043d38dbff123e1bb1e\", repo tag \"ghcr.io/flatcar/nginx:latest\", repo digest \"ghcr.io/flatcar/nginx@sha256:2ffeb5a7ca68f2017f0bc48251750a6e40fcd3c341b94a22fc7812dcabbb84db\", size \"71015439\" in 5.388043169s" Jan 29 16:26:39.742814 containerd[1511]: time="2025-01-29T16:26:39.742754281Z" level=info msg="PullImage \"ghcr.io/flatcar/nginx:latest\" returns image reference \"sha256:0dcfd986e814f68db775fba6b61fbaec3761562dc2ae3043d38dbff123e1bb1e\"" Jan 29 16:26:39.744017 containerd[1511]: 
time="2025-01-29T16:26:39.743836324Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/node-driver-registrar:v3.29.1\"" Jan 29 16:26:39.744859 containerd[1511]: time="2025-01-29T16:26:39.744771669Z" level=info msg="CreateContainer within sandbox \"cdab2c58eb3a1daba590f054199af22e924350b915da922fa8cf7b59ca8e2443\" for container &ContainerMetadata{Name:nginx,Attempt:0,}" Jan 29 16:26:39.758261 containerd[1511]: time="2025-01-29T16:26:39.758209903Z" level=info msg="CreateContainer within sandbox \"cdab2c58eb3a1daba590f054199af22e924350b915da922fa8cf7b59ca8e2443\" for &ContainerMetadata{Name:nginx,Attempt:0,} returns container id \"e72669d0810c72b93ed7dc50c7be8427b3b7de28f395889a8d0c8e9253a0eb13\"" Jan 29 16:26:39.758719 containerd[1511]: time="2025-01-29T16:26:39.758692269Z" level=info msg="StartContainer for \"e72669d0810c72b93ed7dc50c7be8427b3b7de28f395889a8d0c8e9253a0eb13\"" Jan 29 16:26:39.838829 systemd[1]: Started cri-containerd-e72669d0810c72b93ed7dc50c7be8427b3b7de28f395889a8d0c8e9253a0eb13.scope - libcontainer container e72669d0810c72b93ed7dc50c7be8427b3b7de28f395889a8d0c8e9253a0eb13. 
Jan 29 16:26:40.023774 containerd[1511]: time="2025-01-29T16:26:40.023618058Z" level=info msg="StartContainer for \"e72669d0810c72b93ed7dc50c7be8427b3b7de28f395889a8d0c8e9253a0eb13\" returns successfully" Jan 29 16:26:40.102886 kubelet[1836]: E0129 16:26:40.102839 1836 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Jan 29 16:26:40.450431 kubelet[1836]: I0129 16:26:40.450362 1836 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="default/nginx-deployment-8587fbcb89-w488j" podStartSLOduration=6.497273522 podStartE2EDuration="13.450347039s" podCreationTimestamp="2025-01-29 16:26:27 +0000 UTC" firstStartedPulling="2025-01-29 16:26:32.790587695 +0000 UTC m=+24.511294754" lastFinishedPulling="2025-01-29 16:26:39.743661212 +0000 UTC m=+31.464368271" observedRunningTime="2025-01-29 16:26:40.450322803 +0000 UTC m=+32.171029862" watchObservedRunningTime="2025-01-29 16:26:40.450347039 +0000 UTC m=+32.171054098" Jan 29 16:26:40.519999 update_engine[1495]: I20250129 16:26:40.519917 1495 update_attempter.cc:509] Updating boot flags... 
Jan 29 16:26:40.548670 kernel: BTRFS warning: duplicate device /dev/vda3 devid 1 generation 39 scanned by (udev-worker) (3354) Jan 29 16:26:40.601711 kernel: BTRFS warning: duplicate device /dev/vda3 devid 1 generation 39 scanned by (udev-worker) (3356) Jan 29 16:26:40.653926 kernel: BTRFS warning: duplicate device /dev/vda3 devid 1 generation 39 scanned by (udev-worker) (3356) Jan 29 16:26:41.103885 kubelet[1836]: E0129 16:26:41.103814 1836 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Jan 29 16:26:41.747466 containerd[1511]: time="2025-01-29T16:26:41.747393840Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/node-driver-registrar:v3.29.1\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jan 29 16:26:41.748789 containerd[1511]: time="2025-01-29T16:26:41.748714643Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/node-driver-registrar:v3.29.1: active requests=0, bytes read=10501081" Jan 29 16:26:41.749945 containerd[1511]: time="2025-01-29T16:26:41.749904088Z" level=info msg="ImageCreate event name:\"sha256:8b7d18f262d5cf6a6343578ad0db68a140c4c9989d9e02c58c27cb5d2c70320f\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jan 29 16:26:41.752020 containerd[1511]: time="2025-01-29T16:26:41.751983408Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/node-driver-registrar@sha256:a338da9488cbaa83c78457c3d7354d84149969c0480e88dd768e036632ff5b76\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jan 29 16:26:41.752654 containerd[1511]: time="2025-01-29T16:26:41.752611619Z" level=info msg="Pulled image \"ghcr.io/flatcar/calico/node-driver-registrar:v3.29.1\" with image id \"sha256:8b7d18f262d5cf6a6343578ad0db68a140c4c9989d9e02c58c27cb5d2c70320f\", repo tag \"ghcr.io/flatcar/calico/node-driver-registrar:v3.29.1\", repo digest 
\"ghcr.io/flatcar/calico/node-driver-registrar@sha256:a338da9488cbaa83c78457c3d7354d84149969c0480e88dd768e036632ff5b76\", size \"11994117\" in 2.008747401s" Jan 29 16:26:41.752710 containerd[1511]: time="2025-01-29T16:26:41.752659230Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/node-driver-registrar:v3.29.1\" returns image reference \"sha256:8b7d18f262d5cf6a6343578ad0db68a140c4c9989d9e02c58c27cb5d2c70320f\"" Jan 29 16:26:41.754444 containerd[1511]: time="2025-01-29T16:26:41.754403575Z" level=info msg="CreateContainer within sandbox \"60caea6b78d53cc6839b4f2b3de3b91bd12b43732d596ac1bc430c8316290f9a\" for container &ContainerMetadata{Name:csi-node-driver-registrar,Attempt:0,}" Jan 29 16:26:41.771175 containerd[1511]: time="2025-01-29T16:26:41.771123441Z" level=info msg="CreateContainer within sandbox \"60caea6b78d53cc6839b4f2b3de3b91bd12b43732d596ac1bc430c8316290f9a\" for &ContainerMetadata{Name:csi-node-driver-registrar,Attempt:0,} returns container id \"530678c81d97957ada81926670c5db6c0821118481037e6e25e1312b8440378a\"" Jan 29 16:26:41.771678 containerd[1511]: time="2025-01-29T16:26:41.771639359Z" level=info msg="StartContainer for \"530678c81d97957ada81926670c5db6c0821118481037e6e25e1312b8440378a\"" Jan 29 16:26:41.803776 systemd[1]: Started cri-containerd-530678c81d97957ada81926670c5db6c0821118481037e6e25e1312b8440378a.scope - libcontainer container 530678c81d97957ada81926670c5db6c0821118481037e6e25e1312b8440378a. 
Jan 29 16:26:41.976327 containerd[1511]: time="2025-01-29T16:26:41.976259381Z" level=info msg="StartContainer for \"530678c81d97957ada81926670c5db6c0821118481037e6e25e1312b8440378a\" returns successfully" Jan 29 16:26:42.104768 kubelet[1836]: E0129 16:26:42.104720 1836 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Jan 29 16:26:42.164965 kubelet[1836]: I0129 16:26:42.164932 1836 csi_plugin.go:100] kubernetes.io/csi: Trying to validate a new CSI Driver with name: csi.tigera.io endpoint: /var/lib/kubelet/plugins/csi.tigera.io/csi.sock versions: 1.0.0 Jan 29 16:26:42.164965 kubelet[1836]: I0129 16:26:42.164969 1836 csi_plugin.go:113] kubernetes.io/csi: Register new plugin with name: csi.tigera.io at endpoint: /var/lib/kubelet/plugins/csi.tigera.io/csi.sock Jan 29 16:26:42.464416 kubelet[1836]: I0129 16:26:42.464261 1836 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="calico-system/csi-node-driver-4pfpp" podStartSLOduration=23.396512896 podStartE2EDuration="32.464242201s" podCreationTimestamp="2025-01-29 16:26:10 +0000 UTC" firstStartedPulling="2025-01-29 16:26:32.685614592 +0000 UTC m=+24.406321651" lastFinishedPulling="2025-01-29 16:26:41.753343897 +0000 UTC m=+33.474050956" observedRunningTime="2025-01-29 16:26:42.464201534 +0000 UTC m=+34.184908583" watchObservedRunningTime="2025-01-29 16:26:42.464242201 +0000 UTC m=+34.184949260" Jan 29 16:26:43.105803 kubelet[1836]: E0129 16:26:43.105735 1836 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Jan 29 16:26:44.106230 kubelet[1836]: E0129 16:26:44.106154 1836 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Jan 29 16:26:45.106868 kubelet[1836]: E0129 16:26:45.106806 1836 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Jan 29 
16:26:45.161388 systemd[1]: Created slice kubepods-besteffort-pod4dc4610b_a400_4664_afea_cee66ef119af.slice - libcontainer container kubepods-besteffort-pod4dc4610b_a400_4664_afea_cee66ef119af.slice. Jan 29 16:26:45.278748 kubelet[1836]: I0129 16:26:45.278698 1836 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"data\" (UniqueName: \"kubernetes.io/empty-dir/4dc4610b-a400-4664-afea-cee66ef119af-data\") pod \"nfs-server-provisioner-0\" (UID: \"4dc4610b-a400-4664-afea-cee66ef119af\") " pod="default/nfs-server-provisioner-0" Jan 29 16:26:45.278748 kubelet[1836]: I0129 16:26:45.278743 1836 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-mp2zh\" (UniqueName: \"kubernetes.io/projected/4dc4610b-a400-4664-afea-cee66ef119af-kube-api-access-mp2zh\") pod \"nfs-server-provisioner-0\" (UID: \"4dc4610b-a400-4664-afea-cee66ef119af\") " pod="default/nfs-server-provisioner-0" Jan 29 16:26:45.464340 containerd[1511]: time="2025-01-29T16:26:45.464220594Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:nfs-server-provisioner-0,Uid:4dc4610b-a400-4664-afea-cee66ef119af,Namespace:default,Attempt:0,}" Jan 29 16:26:45.567954 systemd-networkd[1447]: cali60e51b789ff: Link UP Jan 29 16:26:45.568136 systemd-networkd[1447]: cali60e51b789ff: Gained carrier Jan 29 16:26:45.577409 containerd[1511]: 2025-01-29 16:26:45.510 [INFO][3420] cni-plugin/plugin.go 325: Calico CNI found existing endpoint: &{{WorkloadEndpoint projectcalico.org/v3} {10.0.0.149-k8s-nfs--server--provisioner--0-eth0 nfs-server-provisioner- default 4dc4610b-a400-4664-afea-cee66ef119af 1140 0 2025-01-29 16:26:45 +0000 UTC map[app:nfs-server-provisioner apps.kubernetes.io/pod-index:0 chart:nfs-server-provisioner-1.8.0 controller-revision-hash:nfs-server-provisioner-d5cbb7f57 heritage:Helm projectcalico.org/namespace:default projectcalico.org/orchestrator:k8s 
projectcalico.org/serviceaccount:nfs-server-provisioner release:nfs-server-provisioner statefulset.kubernetes.io/pod-name:nfs-server-provisioner-0] map[] [] [] []} {k8s 10.0.0.149 nfs-server-provisioner-0 eth0 nfs-server-provisioner [] [] [kns.default ksa.default.nfs-server-provisioner] cali60e51b789ff [{nfs TCP 2049 0 } {nfs-udp UDP 2049 0 } {nlockmgr TCP 32803 0 } {nlockmgr-udp UDP 32803 0 } {mountd TCP 20048 0 } {mountd-udp UDP 20048 0 } {rquotad TCP 875 0 } {rquotad-udp UDP 875 0 } {rpcbind TCP 111 0 } {rpcbind-udp UDP 111 0 } {statd TCP 662 0 } {statd-udp UDP 662 0 }] []}} ContainerID="89125b2f79d949b0673ef31131791f2a7817512d12d573300a12d5085aad9cdd" Namespace="default" Pod="nfs-server-provisioner-0" WorkloadEndpoint="10.0.0.149-k8s-nfs--server--provisioner--0-" Jan 29 16:26:45.577409 containerd[1511]: 2025-01-29 16:26:45.510 [INFO][3420] cni-plugin/k8s.go 77: Extracted identifiers for CmdAddK8s ContainerID="89125b2f79d949b0673ef31131791f2a7817512d12d573300a12d5085aad9cdd" Namespace="default" Pod="nfs-server-provisioner-0" WorkloadEndpoint="10.0.0.149-k8s-nfs--server--provisioner--0-eth0" Jan 29 16:26:45.577409 containerd[1511]: 2025-01-29 16:26:45.536 [INFO][3433] ipam/ipam_plugin.go 225: Calico CNI IPAM request count IPv4=1 IPv6=0 ContainerID="89125b2f79d949b0673ef31131791f2a7817512d12d573300a12d5085aad9cdd" HandleID="k8s-pod-network.89125b2f79d949b0673ef31131791f2a7817512d12d573300a12d5085aad9cdd" Workload="10.0.0.149-k8s-nfs--server--provisioner--0-eth0" Jan 29 16:26:45.577409 containerd[1511]: 2025-01-29 16:26:45.542 [INFO][3433] ipam/ipam_plugin.go 265: Auto assigning IP ContainerID="89125b2f79d949b0673ef31131791f2a7817512d12d573300a12d5085aad9cdd" HandleID="k8s-pod-network.89125b2f79d949b0673ef31131791f2a7817512d12d573300a12d5085aad9cdd" Workload="10.0.0.149-k8s-nfs--server--provisioner--0-eth0" assignArgs=ipam.AutoAssignArgs{Num4:1, Num6:0, HandleID:(*string)(0xc000308f70), Attrs:map[string]string{"namespace":"default", "node":"10.0.0.149", 
"pod":"nfs-server-provisioner-0", "timestamp":"2025-01-29 16:26:45.535990365 +0000 UTC"}, Hostname:"10.0.0.149", IPv4Pools:[]net.IPNet{}, IPv6Pools:[]net.IPNet{}, MaxBlocksPerHost:0, HostReservedAttrIPv4s:(*ipam.HostReservedAttr)(nil), HostReservedAttrIPv6s:(*ipam.HostReservedAttr)(nil), IntendedUse:"Workload"} Jan 29 16:26:45.577409 containerd[1511]: 2025-01-29 16:26:45.543 [INFO][3433] ipam/ipam_plugin.go 353: About to acquire host-wide IPAM lock. Jan 29 16:26:45.577409 containerd[1511]: 2025-01-29 16:26:45.543 [INFO][3433] ipam/ipam_plugin.go 368: Acquired host-wide IPAM lock. Jan 29 16:26:45.577409 containerd[1511]: 2025-01-29 16:26:45.543 [INFO][3433] ipam/ipam.go 107: Auto-assign 1 ipv4, 0 ipv6 addrs for host '10.0.0.149' Jan 29 16:26:45.577409 containerd[1511]: 2025-01-29 16:26:45.544 [INFO][3433] ipam/ipam.go 660: Looking up existing affinities for host handle="k8s-pod-network.89125b2f79d949b0673ef31131791f2a7817512d12d573300a12d5085aad9cdd" host="10.0.0.149" Jan 29 16:26:45.577409 containerd[1511]: 2025-01-29 16:26:45.547 [INFO][3433] ipam/ipam.go 372: Looking up existing affinities for host host="10.0.0.149" Jan 29 16:26:45.577409 containerd[1511]: 2025-01-29 16:26:45.551 [INFO][3433] ipam/ipam.go 489: Trying affinity for 192.168.60.0/26 host="10.0.0.149" Jan 29 16:26:45.577409 containerd[1511]: 2025-01-29 16:26:45.552 [INFO][3433] ipam/ipam.go 155: Attempting to load block cidr=192.168.60.0/26 host="10.0.0.149" Jan 29 16:26:45.577409 containerd[1511]: 2025-01-29 16:26:45.554 [INFO][3433] ipam/ipam.go 232: Affinity is confirmed and block has been loaded cidr=192.168.60.0/26 host="10.0.0.149" Jan 29 16:26:45.577409 containerd[1511]: 2025-01-29 16:26:45.554 [INFO][3433] ipam/ipam.go 1180: Attempting to assign 1 addresses from block block=192.168.60.0/26 handle="k8s-pod-network.89125b2f79d949b0673ef31131791f2a7817512d12d573300a12d5085aad9cdd" host="10.0.0.149" Jan 29 16:26:45.577409 containerd[1511]: 2025-01-29 16:26:45.555 [INFO][3433] ipam/ipam.go 1685: 
Creating new handle: k8s-pod-network.89125b2f79d949b0673ef31131791f2a7817512d12d573300a12d5085aad9cdd Jan 29 16:26:45.577409 containerd[1511]: 2025-01-29 16:26:45.559 [INFO][3433] ipam/ipam.go 1203: Writing block in order to claim IPs block=192.168.60.0/26 handle="k8s-pod-network.89125b2f79d949b0673ef31131791f2a7817512d12d573300a12d5085aad9cdd" host="10.0.0.149" Jan 29 16:26:45.577409 containerd[1511]: 2025-01-29 16:26:45.563 [INFO][3433] ipam/ipam.go 1216: Successfully claimed IPs: [192.168.60.3/26] block=192.168.60.0/26 handle="k8s-pod-network.89125b2f79d949b0673ef31131791f2a7817512d12d573300a12d5085aad9cdd" host="10.0.0.149" Jan 29 16:26:45.577409 containerd[1511]: 2025-01-29 16:26:45.563 [INFO][3433] ipam/ipam.go 847: Auto-assigned 1 out of 1 IPv4s: [192.168.60.3/26] handle="k8s-pod-network.89125b2f79d949b0673ef31131791f2a7817512d12d573300a12d5085aad9cdd" host="10.0.0.149" Jan 29 16:26:45.577409 containerd[1511]: 2025-01-29 16:26:45.563 [INFO][3433] ipam/ipam_plugin.go 374: Released host-wide IPAM lock. 
Jan 29 16:26:45.577409 containerd[1511]: 2025-01-29 16:26:45.563 [INFO][3433] ipam/ipam_plugin.go 283: Calico CNI IPAM assigned addresses IPv4=[192.168.60.3/26] IPv6=[] ContainerID="89125b2f79d949b0673ef31131791f2a7817512d12d573300a12d5085aad9cdd" HandleID="k8s-pod-network.89125b2f79d949b0673ef31131791f2a7817512d12d573300a12d5085aad9cdd" Workload="10.0.0.149-k8s-nfs--server--provisioner--0-eth0" Jan 29 16:26:45.578250 containerd[1511]: 2025-01-29 16:26:45.565 [INFO][3420] cni-plugin/k8s.go 386: Populated endpoint ContainerID="89125b2f79d949b0673ef31131791f2a7817512d12d573300a12d5085aad9cdd" Namespace="default" Pod="nfs-server-provisioner-0" WorkloadEndpoint="10.0.0.149-k8s-nfs--server--provisioner--0-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"10.0.0.149-k8s-nfs--server--provisioner--0-eth0", GenerateName:"nfs-server-provisioner-", Namespace:"default", SelfLink:"", UID:"4dc4610b-a400-4664-afea-cee66ef119af", ResourceVersion:"1140", Generation:0, CreationTimestamp:time.Date(2025, time.January, 29, 16, 26, 45, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"app":"nfs-server-provisioner", "apps.kubernetes.io/pod-index":"0", "chart":"nfs-server-provisioner-1.8.0", "controller-revision-hash":"nfs-server-provisioner-d5cbb7f57", "heritage":"Helm", "projectcalico.org/namespace":"default", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"nfs-server-provisioner", "release":"nfs-server-provisioner", "statefulset.kubernetes.io/pod-name":"nfs-server-provisioner-0"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"10.0.0.149", ContainerID:"", Pod:"nfs-server-provisioner-0", Endpoint:"eth0", 
ServiceAccountName:"nfs-server-provisioner", IPNetworks:[]string{"192.168.60.3/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.default", "ksa.default.nfs-server-provisioner"}, InterfaceName:"cali60e51b789ff", MAC:"", Ports:[]v3.WorkloadEndpointPort{v3.WorkloadEndpointPort{Name:"nfs", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x801, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"nfs-udp", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"UDP"}, Port:0x801, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"nlockmgr", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x8023, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"nlockmgr-udp", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"UDP"}, Port:0x8023, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"mountd", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x4e50, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"mountd-udp", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"UDP"}, Port:0x4e50, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"rquotad", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x36b, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"rquotad-udp", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"UDP"}, Port:0x36b, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"rpcbind", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x6f, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"rpcbind-udp", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"UDP"}, Port:0x6f, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"statd", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x296, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"statd-udp", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"UDP"}, Port:0x296, 
HostPort:0x0, HostIP:""}}, AllowSpoofedSourcePrefixes:[]string(nil)}} Jan 29 16:26:45.578250 containerd[1511]: 2025-01-29 16:26:45.566 [INFO][3420] cni-plugin/k8s.go 387: Calico CNI using IPs: [192.168.60.3/32] ContainerID="89125b2f79d949b0673ef31131791f2a7817512d12d573300a12d5085aad9cdd" Namespace="default" Pod="nfs-server-provisioner-0" WorkloadEndpoint="10.0.0.149-k8s-nfs--server--provisioner--0-eth0" Jan 29 16:26:45.578250 containerd[1511]: 2025-01-29 16:26:45.566 [INFO][3420] cni-plugin/dataplane_linux.go 69: Setting the host side veth name to cali60e51b789ff ContainerID="89125b2f79d949b0673ef31131791f2a7817512d12d573300a12d5085aad9cdd" Namespace="default" Pod="nfs-server-provisioner-0" WorkloadEndpoint="10.0.0.149-k8s-nfs--server--provisioner--0-eth0" Jan 29 16:26:45.578250 containerd[1511]: 2025-01-29 16:26:45.568 [INFO][3420] cni-plugin/dataplane_linux.go 508: Disabling IPv4 forwarding ContainerID="89125b2f79d949b0673ef31131791f2a7817512d12d573300a12d5085aad9cdd" Namespace="default" Pod="nfs-server-provisioner-0" WorkloadEndpoint="10.0.0.149-k8s-nfs--server--provisioner--0-eth0" Jan 29 16:26:45.578470 containerd[1511]: 2025-01-29 16:26:45.568 [INFO][3420] cni-plugin/k8s.go 414: Added Mac, interface name, and active container ID to endpoint ContainerID="89125b2f79d949b0673ef31131791f2a7817512d12d573300a12d5085aad9cdd" Namespace="default" Pod="nfs-server-provisioner-0" WorkloadEndpoint="10.0.0.149-k8s-nfs--server--provisioner--0-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"10.0.0.149-k8s-nfs--server--provisioner--0-eth0", GenerateName:"nfs-server-provisioner-", Namespace:"default", SelfLink:"", UID:"4dc4610b-a400-4664-afea-cee66ef119af", ResourceVersion:"1140", Generation:0, CreationTimestamp:time.Date(2025, time.January, 29, 16, 26, 45, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), 
Labels:map[string]string{"app":"nfs-server-provisioner", "apps.kubernetes.io/pod-index":"0", "chart":"nfs-server-provisioner-1.8.0", "controller-revision-hash":"nfs-server-provisioner-d5cbb7f57", "heritage":"Helm", "projectcalico.org/namespace":"default", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"nfs-server-provisioner", "release":"nfs-server-provisioner", "statefulset.kubernetes.io/pod-name":"nfs-server-provisioner-0"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"10.0.0.149", ContainerID:"89125b2f79d949b0673ef31131791f2a7817512d12d573300a12d5085aad9cdd", Pod:"nfs-server-provisioner-0", Endpoint:"eth0", ServiceAccountName:"nfs-server-provisioner", IPNetworks:[]string{"192.168.60.3/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.default", "ksa.default.nfs-server-provisioner"}, InterfaceName:"cali60e51b789ff", MAC:"b6:6f:e7:1a:32:1d", Ports:[]v3.WorkloadEndpointPort{v3.WorkloadEndpointPort{Name:"nfs", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x801, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"nfs-udp", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"UDP"}, Port:0x801, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"nlockmgr", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x8023, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"nlockmgr-udp", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"UDP"}, Port:0x8023, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"mountd", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x4e50, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"mountd-udp", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"UDP"}, Port:0x4e50, HostPort:0x0, HostIP:""}, 
v3.WorkloadEndpointPort{Name:"rquotad", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x36b, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"rquotad-udp", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"UDP"}, Port:0x36b, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"rpcbind", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x6f, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"rpcbind-udp", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"UDP"}, Port:0x6f, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"statd", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x296, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"statd-udp", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"UDP"}, Port:0x296, HostPort:0x0, HostIP:""}}, AllowSpoofedSourcePrefixes:[]string(nil)}} Jan 29 16:26:45.578470 containerd[1511]: 2025-01-29 16:26:45.574 [INFO][3420] cni-plugin/k8s.go 500: Wrote updated endpoint to datastore ContainerID="89125b2f79d949b0673ef31131791f2a7817512d12d573300a12d5085aad9cdd" Namespace="default" Pod="nfs-server-provisioner-0" WorkloadEndpoint="10.0.0.149-k8s-nfs--server--provisioner--0-eth0" Jan 29 16:26:45.601758 containerd[1511]: time="2025-01-29T16:26:45.601590090Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Jan 29 16:26:45.601758 containerd[1511]: time="2025-01-29T16:26:45.601638631Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Jan 29 16:26:45.601758 containerd[1511]: time="2025-01-29T16:26:45.601683557Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." 
runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Jan 29 16:26:45.601937 containerd[1511]: time="2025-01-29T16:26:45.601750062Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Jan 29 16:26:45.627841 systemd[1]: Started cri-containerd-89125b2f79d949b0673ef31131791f2a7817512d12d573300a12d5085aad9cdd.scope - libcontainer container 89125b2f79d949b0673ef31131791f2a7817512d12d573300a12d5085aad9cdd. Jan 29 16:26:45.639606 systemd-resolved[1342]: Failed to determine the local hostname and LLMNR/mDNS names, ignoring: No such device or address Jan 29 16:26:45.665476 containerd[1511]: time="2025-01-29T16:26:45.665435458Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:nfs-server-provisioner-0,Uid:4dc4610b-a400-4664-afea-cee66ef119af,Namespace:default,Attempt:0,} returns sandbox id \"89125b2f79d949b0673ef31131791f2a7817512d12d573300a12d5085aad9cdd\"" Jan 29 16:26:45.666760 containerd[1511]: time="2025-01-29T16:26:45.666722761Z" level=info msg="PullImage \"registry.k8s.io/sig-storage/nfs-provisioner:v4.0.8\"" Jan 29 16:26:46.107921 kubelet[1836]: E0129 16:26:46.107878 1836 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Jan 29 16:26:46.974803 systemd-networkd[1447]: cali60e51b789ff: Gained IPv6LL Jan 29 16:26:47.108257 kubelet[1836]: E0129 16:26:47.108218 1836 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Jan 29 16:26:47.626530 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount1770979720.mount: Deactivated successfully. 
Jan 29 16:26:48.108989 kubelet[1836]: E0129 16:26:48.108951 1836 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Jan 29 16:26:49.059063 kubelet[1836]: E0129 16:26:49.059007 1836 file.go:104] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Jan 29 16:26:49.109255 kubelet[1836]: E0129 16:26:49.109131 1836 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Jan 29 16:26:50.109722 kubelet[1836]: E0129 16:26:50.109630 1836 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Jan 29 16:26:51.109825 kubelet[1836]: E0129 16:26:51.109772 1836 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Jan 29 16:26:51.300894 containerd[1511]: time="2025-01-29T16:26:51.300807897Z" level=info msg="ImageCreate event name:\"registry.k8s.io/sig-storage/nfs-provisioner:v4.0.8\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jan 29 16:26:51.313858 containerd[1511]: time="2025-01-29T16:26:51.313679429Z" level=info msg="stop pulling image registry.k8s.io/sig-storage/nfs-provisioner:v4.0.8: active requests=0, bytes read=91039406" Jan 29 16:26:51.318625 containerd[1511]: time="2025-01-29T16:26:51.318457500Z" level=info msg="ImageCreate event name:\"sha256:fd0b16f70b66b72bcb2f91d556fa33eba02729c44ffc5f2c16130e7f9fbed3c4\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jan 29 16:26:51.329190 containerd[1511]: time="2025-01-29T16:26:51.329062778Z" level=info msg="ImageCreate event name:\"registry.k8s.io/sig-storage/nfs-provisioner@sha256:c825f3d5e28bde099bd7a3daace28772d412c9157ad47fa752a9ad0baafc118d\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jan 29 16:26:51.330393 containerd[1511]: time="2025-01-29T16:26:51.330329156Z" level=info msg="Pulled image 
\"registry.k8s.io/sig-storage/nfs-provisioner:v4.0.8\" with image id \"sha256:fd0b16f70b66b72bcb2f91d556fa33eba02729c44ffc5f2c16130e7f9fbed3c4\", repo tag \"registry.k8s.io/sig-storage/nfs-provisioner:v4.0.8\", repo digest \"registry.k8s.io/sig-storage/nfs-provisioner@sha256:c825f3d5e28bde099bd7a3daace28772d412c9157ad47fa752a9ad0baafc118d\", size \"91036984\" in 5.663567601s" Jan 29 16:26:51.330393 containerd[1511]: time="2025-01-29T16:26:51.330374402Z" level=info msg="PullImage \"registry.k8s.io/sig-storage/nfs-provisioner:v4.0.8\" returns image reference \"sha256:fd0b16f70b66b72bcb2f91d556fa33eba02729c44ffc5f2c16130e7f9fbed3c4\"" Jan 29 16:26:51.337042 containerd[1511]: time="2025-01-29T16:26:51.336983274Z" level=info msg="CreateContainer within sandbox \"89125b2f79d949b0673ef31131791f2a7817512d12d573300a12d5085aad9cdd\" for container &ContainerMetadata{Name:nfs-server-provisioner,Attempt:0,}" Jan 29 16:26:51.372933 containerd[1511]: time="2025-01-29T16:26:51.372734562Z" level=info msg="CreateContainer within sandbox \"89125b2f79d949b0673ef31131791f2a7817512d12d573300a12d5085aad9cdd\" for &ContainerMetadata{Name:nfs-server-provisioner,Attempt:0,} returns container id \"ccb7ab8f1bdaa183b2b1c84396541ec1214b97c991e18638d44ec13eeeb3548f\"" Jan 29 16:26:51.373717 containerd[1511]: time="2025-01-29T16:26:51.373639277Z" level=info msg="StartContainer for \"ccb7ab8f1bdaa183b2b1c84396541ec1214b97c991e18638d44ec13eeeb3548f\"" Jan 29 16:26:51.410866 systemd[1]: Started cri-containerd-ccb7ab8f1bdaa183b2b1c84396541ec1214b97c991e18638d44ec13eeeb3548f.scope - libcontainer container ccb7ab8f1bdaa183b2b1c84396541ec1214b97c991e18638d44ec13eeeb3548f. 
Jan 29 16:26:51.437965 containerd[1511]: time="2025-01-29T16:26:51.437907520Z" level=info msg="StartContainer for \"ccb7ab8f1bdaa183b2b1c84396541ec1214b97c991e18638d44ec13eeeb3548f\" returns successfully"
Jan 29 16:26:51.486771 kubelet[1836]: I0129 16:26:51.484438 1836 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="default/nfs-server-provisioner-0" podStartSLOduration=0.819115793 podStartE2EDuration="6.484412125s" podCreationTimestamp="2025-01-29 16:26:45 +0000 UTC" firstStartedPulling="2025-01-29 16:26:45.666488469 +0000 UTC m=+37.387195528" lastFinishedPulling="2025-01-29 16:26:51.331784801 +0000 UTC m=+43.052491860" observedRunningTime="2025-01-29 16:26:51.48387386 +0000 UTC m=+43.204580919" watchObservedRunningTime="2025-01-29 16:26:51.484412125 +0000 UTC m=+43.205119194"
Jan 29 16:26:52.110329 kubelet[1836]: E0129 16:26:52.110276 1836 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests"
Jan 29 16:26:53.111435 kubelet[1836]: E0129 16:26:53.111377 1836 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests"
Jan 29 16:26:54.112495 kubelet[1836]: E0129 16:26:54.112425 1836 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests"
Jan 29 16:26:55.113469 kubelet[1836]: E0129 16:26:55.113402 1836 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests"
Jan 29 16:26:56.114094 kubelet[1836]: E0129 16:26:56.114045 1836 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests"
Jan 29 16:26:57.114494 kubelet[1836]: E0129 16:26:57.114429 1836 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests"
Jan 29 16:26:58.115448 kubelet[1836]: E0129 16:26:58.115390 1836 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests"
Jan 29 16:26:59.116363 kubelet[1836]: E0129 16:26:59.116287 1836 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests"
Jan 29 16:27:00.116625 kubelet[1836]: E0129 16:27:00.116576 1836 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests"
Jan 29 16:27:01.117581 kubelet[1836]: E0129 16:27:01.117507 1836 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests"
Jan 29 16:27:01.156346 systemd[1]: Created slice kubepods-besteffort-pod39adef65_c543_4891_9ea0_6ee978a6000a.slice - libcontainer container kubepods-besteffort-pod39adef65_c543_4891_9ea0_6ee978a6000a.slice.
Jan 29 16:27:01.266700 kubelet[1836]: I0129 16:27:01.266614 1836 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-dss74\" (UniqueName: \"kubernetes.io/projected/39adef65-c543-4891-9ea0-6ee978a6000a-kube-api-access-dss74\") pod \"test-pod-1\" (UID: \"39adef65-c543-4891-9ea0-6ee978a6000a\") " pod="default/test-pod-1"
Jan 29 16:27:01.266700 kubelet[1836]: I0129 16:27:01.266696 1836 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"pvc-8edd83cc-8a9d-4be2-9614-9d020369a26b\" (UniqueName: \"kubernetes.io/nfs/39adef65-c543-4891-9ea0-6ee978a6000a-pvc-8edd83cc-8a9d-4be2-9614-9d020369a26b\") pod \"test-pod-1\" (UID: \"39adef65-c543-4891-9ea0-6ee978a6000a\") " pod="default/test-pod-1"
Jan 29 16:27:01.397692 kernel: FS-Cache: Loaded
Jan 29 16:27:01.467248 kernel: RPC: Registered named UNIX socket transport module.
Jan 29 16:27:01.467368 kernel: RPC: Registered udp transport module.
Jan 29 16:27:01.467391 kernel: RPC: Registered tcp transport module.
Jan 29 16:27:01.467410 kernel: RPC: Registered tcp-with-tls transport module.
Jan 29 16:27:01.467930 kernel: RPC: Registered tcp NFSv4.1 backchannel transport module.
Jan 29 16:27:01.686007 kernel: NFS: Registering the id_resolver key type
Jan 29 16:27:01.686139 kernel: Key type id_resolver registered
Jan 29 16:27:01.686162 kernel: Key type id_legacy registered
Jan 29 16:27:01.716221 nfsidmap[3635]: nss_getpwnam: name 'root@nfs-server-provisioner.default.svc.cluster.local' does not map into domain 'localdomain'
Jan 29 16:27:01.721044 nfsidmap[3638]: nss_name_to_gid: name 'root@nfs-server-provisioner.default.svc.cluster.local' does not map into domain 'localdomain'
Jan 29 16:27:01.759188 containerd[1511]: time="2025-01-29T16:27:01.759140143Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:test-pod-1,Uid:39adef65-c543-4891-9ea0-6ee978a6000a,Namespace:default,Attempt:0,}"
Jan 29 16:27:02.118006 kubelet[1836]: E0129 16:27:02.117964 1836 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests"
Jan 29 16:27:02.356258 systemd-networkd[1447]: cali5ec59c6bf6e: Link UP
Jan 29 16:27:02.356529 systemd-networkd[1447]: cali5ec59c6bf6e: Gained carrier
Jan 29 16:27:02.367147 containerd[1511]: 2025-01-29 16:27:02.279 [INFO][3641] cni-plugin/plugin.go 325: Calico CNI found existing endpoint: &{{WorkloadEndpoint projectcalico.org/v3} {10.0.0.149-k8s-test--pod--1-eth0 default 39adef65-c543-4891-9ea0-6ee978a6000a 1218 0 2025-01-29 16:26:45 +0000 UTC map[projectcalico.org/namespace:default projectcalico.org/orchestrator:k8s projectcalico.org/serviceaccount:default] map[] [] [] []} {k8s 10.0.0.149 test-pod-1 eth0 default [] [] [kns.default ksa.default.default] cali5ec59c6bf6e [] []}} ContainerID="abdd2c188498c097a8531bf09934fe50408ad5d061807e3c6b5a0930c7d42033" Namespace="default" Pod="test-pod-1" WorkloadEndpoint="10.0.0.149-k8s-test--pod--1-"
Jan 29 16:27:02.367147 containerd[1511]: 2025-01-29 16:27:02.279 [INFO][3641] cni-plugin/k8s.go 77: Extracted identifiers for CmdAddK8s ContainerID="abdd2c188498c097a8531bf09934fe50408ad5d061807e3c6b5a0930c7d42033" Namespace="default" Pod="test-pod-1" WorkloadEndpoint="10.0.0.149-k8s-test--pod--1-eth0"
Jan 29 16:27:02.367147 containerd[1511]: 2025-01-29 16:27:02.309 [INFO][3655] ipam/ipam_plugin.go 225: Calico CNI IPAM request count IPv4=1 IPv6=0 ContainerID="abdd2c188498c097a8531bf09934fe50408ad5d061807e3c6b5a0930c7d42033" HandleID="k8s-pod-network.abdd2c188498c097a8531bf09934fe50408ad5d061807e3c6b5a0930c7d42033" Workload="10.0.0.149-k8s-test--pod--1-eth0"
Jan 29 16:27:02.367147 containerd[1511]: 2025-01-29 16:27:02.320 [INFO][3655] ipam/ipam_plugin.go 265: Auto assigning IP ContainerID="abdd2c188498c097a8531bf09934fe50408ad5d061807e3c6b5a0930c7d42033" HandleID="k8s-pod-network.abdd2c188498c097a8531bf09934fe50408ad5d061807e3c6b5a0930c7d42033" Workload="10.0.0.149-k8s-test--pod--1-eth0" assignArgs=ipam.AutoAssignArgs{Num4:1, Num6:0, HandleID:(*string)(0xc0002dfc30), Attrs:map[string]string{"namespace":"default", "node":"10.0.0.149", "pod":"test-pod-1", "timestamp":"2025-01-29 16:27:02.309464185 +0000 UTC"}, Hostname:"10.0.0.149", IPv4Pools:[]net.IPNet{}, IPv6Pools:[]net.IPNet{}, MaxBlocksPerHost:0, HostReservedAttrIPv4s:(*ipam.HostReservedAttr)(nil), HostReservedAttrIPv6s:(*ipam.HostReservedAttr)(nil), IntendedUse:"Workload"}
Jan 29 16:27:02.367147 containerd[1511]: 2025-01-29 16:27:02.320 [INFO][3655] ipam/ipam_plugin.go 353: About to acquire host-wide IPAM lock.
Jan 29 16:27:02.367147 containerd[1511]: 2025-01-29 16:27:02.320 [INFO][3655] ipam/ipam_plugin.go 368: Acquired host-wide IPAM lock.
Jan 29 16:27:02.367147 containerd[1511]: 2025-01-29 16:27:02.320 [INFO][3655] ipam/ipam.go 107: Auto-assign 1 ipv4, 0 ipv6 addrs for host '10.0.0.149'
Jan 29 16:27:02.367147 containerd[1511]: 2025-01-29 16:27:02.323 [INFO][3655] ipam/ipam.go 660: Looking up existing affinities for host handle="k8s-pod-network.abdd2c188498c097a8531bf09934fe50408ad5d061807e3c6b5a0930c7d42033" host="10.0.0.149"
Jan 29 16:27:02.367147 containerd[1511]: 2025-01-29 16:27:02.327 [INFO][3655] ipam/ipam.go 372: Looking up existing affinities for host host="10.0.0.149"
Jan 29 16:27:02.367147 containerd[1511]: 2025-01-29 16:27:02.332 [INFO][3655] ipam/ipam.go 489: Trying affinity for 192.168.60.0/26 host="10.0.0.149"
Jan 29 16:27:02.367147 containerd[1511]: 2025-01-29 16:27:02.334 [INFO][3655] ipam/ipam.go 155: Attempting to load block cidr=192.168.60.0/26 host="10.0.0.149"
Jan 29 16:27:02.367147 containerd[1511]: 2025-01-29 16:27:02.338 [INFO][3655] ipam/ipam.go 232: Affinity is confirmed and block has been loaded cidr=192.168.60.0/26 host="10.0.0.149"
Jan 29 16:27:02.367147 containerd[1511]: 2025-01-29 16:27:02.338 [INFO][3655] ipam/ipam.go 1180: Attempting to assign 1 addresses from block block=192.168.60.0/26 handle="k8s-pod-network.abdd2c188498c097a8531bf09934fe50408ad5d061807e3c6b5a0930c7d42033" host="10.0.0.149"
Jan 29 16:27:02.367147 containerd[1511]: 2025-01-29 16:27:02.340 [INFO][3655] ipam/ipam.go 1685: Creating new handle: k8s-pod-network.abdd2c188498c097a8531bf09934fe50408ad5d061807e3c6b5a0930c7d42033
Jan 29 16:27:02.367147 containerd[1511]: 2025-01-29 16:27:02.345 [INFO][3655] ipam/ipam.go 1203: Writing block in order to claim IPs block=192.168.60.0/26 handle="k8s-pod-network.abdd2c188498c097a8531bf09934fe50408ad5d061807e3c6b5a0930c7d42033" host="10.0.0.149"
Jan 29 16:27:02.367147 containerd[1511]: 2025-01-29 16:27:02.351 [INFO][3655] ipam/ipam.go 1216: Successfully claimed IPs: [192.168.60.4/26] block=192.168.60.0/26 handle="k8s-pod-network.abdd2c188498c097a8531bf09934fe50408ad5d061807e3c6b5a0930c7d42033" host="10.0.0.149"
Jan 29 16:27:02.367147 containerd[1511]: 2025-01-29 16:27:02.351 [INFO][3655] ipam/ipam.go 847: Auto-assigned 1 out of 1 IPv4s: [192.168.60.4/26] handle="k8s-pod-network.abdd2c188498c097a8531bf09934fe50408ad5d061807e3c6b5a0930c7d42033" host="10.0.0.149"
Jan 29 16:27:02.367147 containerd[1511]: 2025-01-29 16:27:02.351 [INFO][3655] ipam/ipam_plugin.go 374: Released host-wide IPAM lock.
Jan 29 16:27:02.367147 containerd[1511]: 2025-01-29 16:27:02.351 [INFO][3655] ipam/ipam_plugin.go 283: Calico CNI IPAM assigned addresses IPv4=[192.168.60.4/26] IPv6=[] ContainerID="abdd2c188498c097a8531bf09934fe50408ad5d061807e3c6b5a0930c7d42033" HandleID="k8s-pod-network.abdd2c188498c097a8531bf09934fe50408ad5d061807e3c6b5a0930c7d42033" Workload="10.0.0.149-k8s-test--pod--1-eth0"
Jan 29 16:27:02.367147 containerd[1511]: 2025-01-29 16:27:02.353 [INFO][3641] cni-plugin/k8s.go 386: Populated endpoint ContainerID="abdd2c188498c097a8531bf09934fe50408ad5d061807e3c6b5a0930c7d42033" Namespace="default" Pod="test-pod-1" WorkloadEndpoint="10.0.0.149-k8s-test--pod--1-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"10.0.0.149-k8s-test--pod--1-eth0", GenerateName:"", Namespace:"default", SelfLink:"", UID:"39adef65-c543-4891-9ea0-6ee978a6000a", ResourceVersion:"1218", Generation:0, CreationTimestamp:time.Date(2025, time.January, 29, 16, 26, 45, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"projectcalico.org/namespace":"default", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"default"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"10.0.0.149", ContainerID:"", Pod:"test-pod-1", Endpoint:"eth0", ServiceAccountName:"default", IPNetworks:[]string{"192.168.60.4/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.default", "ksa.default.default"}, InterfaceName:"cali5ec59c6bf6e", MAC:"", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil)}}
Jan 29 16:27:02.367844 containerd[1511]: 2025-01-29 16:27:02.354 [INFO][3641] cni-plugin/k8s.go 387: Calico CNI using IPs: [192.168.60.4/32] ContainerID="abdd2c188498c097a8531bf09934fe50408ad5d061807e3c6b5a0930c7d42033" Namespace="default" Pod="test-pod-1" WorkloadEndpoint="10.0.0.149-k8s-test--pod--1-eth0"
Jan 29 16:27:02.367844 containerd[1511]: 2025-01-29 16:27:02.354 [INFO][3641] cni-plugin/dataplane_linux.go 69: Setting the host side veth name to cali5ec59c6bf6e ContainerID="abdd2c188498c097a8531bf09934fe50408ad5d061807e3c6b5a0930c7d42033" Namespace="default" Pod="test-pod-1" WorkloadEndpoint="10.0.0.149-k8s-test--pod--1-eth0"
Jan 29 16:27:02.367844 containerd[1511]: 2025-01-29 16:27:02.356 [INFO][3641] cni-plugin/dataplane_linux.go 508: Disabling IPv4 forwarding ContainerID="abdd2c188498c097a8531bf09934fe50408ad5d061807e3c6b5a0930c7d42033" Namespace="default" Pod="test-pod-1" WorkloadEndpoint="10.0.0.149-k8s-test--pod--1-eth0"
Jan 29 16:27:02.367844 containerd[1511]: 2025-01-29 16:27:02.356 [INFO][3641] cni-plugin/k8s.go 414: Added Mac, interface name, and active container ID to endpoint ContainerID="abdd2c188498c097a8531bf09934fe50408ad5d061807e3c6b5a0930c7d42033" Namespace="default" Pod="test-pod-1" WorkloadEndpoint="10.0.0.149-k8s-test--pod--1-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"10.0.0.149-k8s-test--pod--1-eth0", GenerateName:"", Namespace:"default", SelfLink:"", UID:"39adef65-c543-4891-9ea0-6ee978a6000a", ResourceVersion:"1218", Generation:0, CreationTimestamp:time.Date(2025, time.January, 29, 16, 26, 45, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"projectcalico.org/namespace":"default", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"default"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"10.0.0.149", ContainerID:"abdd2c188498c097a8531bf09934fe50408ad5d061807e3c6b5a0930c7d42033", Pod:"test-pod-1", Endpoint:"eth0", ServiceAccountName:"default", IPNetworks:[]string{"192.168.60.4/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.default", "ksa.default.default"}, InterfaceName:"cali5ec59c6bf6e", MAC:"0a:17:9e:88:a0:5a", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil)}}
Jan 29 16:27:02.367844 containerd[1511]: 2025-01-29 16:27:02.364 [INFO][3641] cni-plugin/k8s.go 500: Wrote updated endpoint to datastore ContainerID="abdd2c188498c097a8531bf09934fe50408ad5d061807e3c6b5a0930c7d42033" Namespace="default" Pod="test-pod-1" WorkloadEndpoint="10.0.0.149-k8s-test--pod--1-eth0"
Jan 29 16:27:02.390748 containerd[1511]: time="2025-01-29T16:27:02.390566314Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1
Jan 29 16:27:02.390899 containerd[1511]: time="2025-01-29T16:27:02.390633461Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1
Jan 29 16:27:02.390899 containerd[1511]: time="2025-01-29T16:27:02.390726405Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
Jan 29 16:27:02.390899 containerd[1511]: time="2025-01-29T16:27:02.390814492Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
Jan 29 16:27:02.418825 systemd[1]: Started cri-containerd-abdd2c188498c097a8531bf09934fe50408ad5d061807e3c6b5a0930c7d42033.scope - libcontainer container abdd2c188498c097a8531bf09934fe50408ad5d061807e3c6b5a0930c7d42033.
Jan 29 16:27:02.431376 systemd-resolved[1342]: Failed to determine the local hostname and LLMNR/mDNS names, ignoring: No such device or address
Jan 29 16:27:02.456019 containerd[1511]: time="2025-01-29T16:27:02.455940898Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:test-pod-1,Uid:39adef65-c543-4891-9ea0-6ee978a6000a,Namespace:default,Attempt:0,} returns sandbox id \"abdd2c188498c097a8531bf09934fe50408ad5d061807e3c6b5a0930c7d42033\""
Jan 29 16:27:02.457587 containerd[1511]: time="2025-01-29T16:27:02.457553061Z" level=info msg="PullImage \"ghcr.io/flatcar/nginx:latest\""
Jan 29 16:27:02.890221 containerd[1511]: time="2025-01-29T16:27:02.890171981Z" level=info msg="ImageUpdate event name:\"ghcr.io/flatcar/nginx:latest\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Jan 29 16:27:02.890913 containerd[1511]: time="2025-01-29T16:27:02.890854645Z" level=info msg="stop pulling image ghcr.io/flatcar/nginx:latest: active requests=0, bytes read=61"
Jan 29 16:27:02.893608 containerd[1511]: time="2025-01-29T16:27:02.893581482Z" level=info msg="Pulled image \"ghcr.io/flatcar/nginx:latest\" with image id \"sha256:0dcfd986e814f68db775fba6b61fbaec3761562dc2ae3043d38dbff123e1bb1e\", repo tag \"ghcr.io/flatcar/nginx:latest\", repo digest \"ghcr.io/flatcar/nginx@sha256:2ffeb5a7ca68f2017f0bc48251750a6e40fcd3c341b94a22fc7812dcabbb84db\", size \"71015439\" in 436.001191ms"
Jan 29 16:27:02.893685 containerd[1511]: time="2025-01-29T16:27:02.893607060Z" level=info msg="PullImage \"ghcr.io/flatcar/nginx:latest\" returns image reference \"sha256:0dcfd986e814f68db775fba6b61fbaec3761562dc2ae3043d38dbff123e1bb1e\""
Jan 29 16:27:02.895225 containerd[1511]: time="2025-01-29T16:27:02.895201007Z" level=info msg="CreateContainer within sandbox \"abdd2c188498c097a8531bf09934fe50408ad5d061807e3c6b5a0930c7d42033\" for container &ContainerMetadata{Name:test,Attempt:0,}"
Jan 29 16:27:02.911768 containerd[1511]: time="2025-01-29T16:27:02.911733687Z" level=info msg="CreateContainer within sandbox \"abdd2c188498c097a8531bf09934fe50408ad5d061807e3c6b5a0930c7d42033\" for &ContainerMetadata{Name:test,Attempt:0,} returns container id \"0bce1141e872007bd6d86d4a3e133a1ad601c85a919baa34ad330c813c771495\""
Jan 29 16:27:02.912190 containerd[1511]: time="2025-01-29T16:27:02.912161351Z" level=info msg="StartContainer for \"0bce1141e872007bd6d86d4a3e133a1ad601c85a919baa34ad330c813c771495\""
Jan 29 16:27:02.942768 systemd[1]: Started cri-containerd-0bce1141e872007bd6d86d4a3e133a1ad601c85a919baa34ad330c813c771495.scope - libcontainer container 0bce1141e872007bd6d86d4a3e133a1ad601c85a919baa34ad330c813c771495.
Jan 29 16:27:02.968599 containerd[1511]: time="2025-01-29T16:27:02.968550869Z" level=info msg="StartContainer for \"0bce1141e872007bd6d86d4a3e133a1ad601c85a919baa34ad330c813c771495\" returns successfully"
Jan 29 16:27:03.118614 kubelet[1836]: E0129 16:27:03.118545 1836 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests"
Jan 29 16:27:03.505150 kubelet[1836]: I0129 16:27:03.505069 1836 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="default/test-pod-1" podStartSLOduration=18.068204554 podStartE2EDuration="18.505046275s" podCreationTimestamp="2025-01-29 16:26:45 +0000 UTC" firstStartedPulling="2025-01-29 16:27:02.457307217 +0000 UTC m=+54.178014267" lastFinishedPulling="2025-01-29 16:27:02.894148929 +0000 UTC m=+54.614855988" observedRunningTime="2025-01-29 16:27:03.504394851 +0000 UTC m=+55.225101910" watchObservedRunningTime="2025-01-29 16:27:03.505046275 +0000 UTC m=+55.225753335"
Jan 29 16:27:03.678819 systemd-networkd[1447]: cali5ec59c6bf6e: Gained IPv6LL
Jan 29 16:27:04.119281 kubelet[1836]: E0129 16:27:04.119222 1836 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests"
Jan 29 16:27:05.120156 kubelet[1836]: E0129 16:27:05.120105 1836 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests"
Jan 29 16:27:05.604453 kubelet[1836]: E0129 16:27:05.604427 1836 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Jan 29 16:27:06.120818 kubelet[1836]: E0129 16:27:06.120733 1836 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests"
Jan 29 16:27:07.121096 kubelet[1836]: E0129 16:27:07.121039 1836 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests"
Jan 29 16:27:08.121882 kubelet[1836]: E0129 16:27:08.121828 1836 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests"