Jan 29 14:42:59.927540 kernel: Linux version 6.6.74-flatcar (build@pony-truck.infra.kinvolk.io) (x86_64-cros-linux-gnu-gcc (Gentoo Hardened 13.3.1_p20240614 p17) 13.3.1 20240614, GNU ld (Gentoo 2.42 p3) 2.42.0) #1 SMP PREEMPT_DYNAMIC Wed Jan 29 10:09:32 -00 2025
Jan 29 14:42:59.927570 kernel: Command line: BOOT_IMAGE=/flatcar/vmlinuz-a mount.usr=/dev/mapper/usr verity.usr=PARTUUID=7130c94a-213a-4e5a-8e26-6cce9662f132 rootflags=rw mount.usrflags=ro consoleblank=0 root=LABEL=ROOT console=ttyS0,115200n8 console=tty0 flatcar.first_boot=detected flatcar.oem.id=openstack flatcar.autologin verity.usrhash=befc9792b021bef43c896e00e1d5172b6224dbafc9b6c92b267e5e544378e681
Jan 29 14:42:59.927579 kernel: BIOS-provided physical RAM map:
Jan 29 14:42:59.927589 kernel: BIOS-e820: [mem 0x0000000000000000-0x000000000009fbff] usable
Jan 29 14:42:59.927595 kernel: BIOS-e820: [mem 0x000000000009fc00-0x000000000009ffff] reserved
Jan 29 14:42:59.927601 kernel: BIOS-e820: [mem 0x00000000000f0000-0x00000000000fffff] reserved
Jan 29 14:42:59.927609 kernel: BIOS-e820: [mem 0x0000000000100000-0x000000007ffdbfff] usable
Jan 29 14:42:59.927616 kernel: BIOS-e820: [mem 0x000000007ffdc000-0x000000007fffffff] reserved
Jan 29 14:42:59.927623 kernel: BIOS-e820: [mem 0x00000000b0000000-0x00000000bfffffff] reserved
Jan 29 14:42:59.927629 kernel: BIOS-e820: [mem 0x00000000fed1c000-0x00000000fed1ffff] reserved
Jan 29 14:42:59.927636 kernel: BIOS-e820: [mem 0x00000000feffc000-0x00000000feffffff] reserved
Jan 29 14:42:59.927642 kernel: BIOS-e820: [mem 0x00000000fffc0000-0x00000000ffffffff] reserved
Jan 29 14:42:59.927651 kernel: NX (Execute Disable) protection: active
Jan 29 14:42:59.927658 kernel: APIC: Static calls initialized
Jan 29 14:42:59.927666 kernel: SMBIOS 2.8 present.
Jan 29 14:42:59.927675 kernel: DMI: Red Hat KVM/RHEL-AV, BIOS 1.13.0-2.module_el8.5.0+2608+72063365 04/01/2014
Jan 29 14:42:59.927682 kernel: Hypervisor detected: KVM
Jan 29 14:42:59.927692 kernel: kvm-clock: Using msrs 4b564d01 and 4b564d00
Jan 29 14:42:59.927699 kernel: kvm-clock: using sched offset of 3815172823 cycles
Jan 29 14:42:59.927708 kernel: clocksource: kvm-clock: mask: 0xffffffffffffffff max_cycles: 0x1cd42e4dffb, max_idle_ns: 881590591483 ns
Jan 29 14:42:59.927716 kernel: tsc: Detected 2294.608 MHz processor
Jan 29 14:42:59.927724 kernel: e820: update [mem 0x00000000-0x00000fff] usable ==> reserved
Jan 29 14:42:59.927732 kernel: e820: remove [mem 0x000a0000-0x000fffff] usable
Jan 29 14:42:59.927739 kernel: last_pfn = 0x7ffdc max_arch_pfn = 0x400000000
Jan 29 14:42:59.927747 kernel: MTRR map: 4 entries (3 fixed + 1 variable; max 19), built from 8 variable MTRRs
Jan 29 14:42:59.927755 kernel: x86/PAT: Configuration [0-7]: WB WC UC- UC WB WP UC- WT
Jan 29 14:42:59.927764 kernel: Using GB pages for direct mapping
Jan 29 14:42:59.927772 kernel: ACPI: Early table checksum verification disabled
Jan 29 14:42:59.927779 kernel: ACPI: RSDP 0x00000000000F5AA0 000014 (v00 BOCHS )
Jan 29 14:42:59.927787 kernel: ACPI: RSDT 0x000000007FFE47A5 000038 (v01 BOCHS BXPC 00000001 BXPC 00000001)
Jan 29 14:42:59.927795 kernel: ACPI: FACP 0x000000007FFE438D 0000F4 (v03 BOCHS BXPC 00000001 BXPC 00000001)
Jan 29 14:42:59.927803 kernel: ACPI: DSDT 0x000000007FFDFD80 00460D (v01 BOCHS BXPC 00000001 BXPC 00000001)
Jan 29 14:42:59.927810 kernel: ACPI: FACS 0x000000007FFDFD40 000040
Jan 29 14:42:59.927818 kernel: ACPI: APIC 0x000000007FFE4481 0000F0 (v01 BOCHS BXPC 00000001 BXPC 00000001)
Jan 29 14:42:59.927825 kernel: ACPI: SRAT 0x000000007FFE4571 0001D0 (v01 BOCHS BXPC 00000001 BXPC 00000001)
Jan 29 14:42:59.927835 kernel: ACPI: MCFG 0x000000007FFE4741 00003C (v01 BOCHS BXPC 00000001 BXPC 00000001)
Jan 29 14:42:59.927843 kernel: ACPI: WAET 0x000000007FFE477D 000028 (v01 BOCHS BXPC 00000001 BXPC 00000001)
Jan 29 14:42:59.927850 kernel: ACPI: Reserving FACP table memory at [mem 0x7ffe438d-0x7ffe4480]
Jan 29 14:42:59.927858 kernel: ACPI: Reserving DSDT table memory at [mem 0x7ffdfd80-0x7ffe438c]
Jan 29 14:42:59.927866 kernel: ACPI: Reserving FACS table memory at [mem 0x7ffdfd40-0x7ffdfd7f]
Jan 29 14:42:59.927877 kernel: ACPI: Reserving APIC table memory at [mem 0x7ffe4481-0x7ffe4570]
Jan 29 14:42:59.927885 kernel: ACPI: Reserving SRAT table memory at [mem 0x7ffe4571-0x7ffe4740]
Jan 29 14:42:59.927895 kernel: ACPI: Reserving MCFG table memory at [mem 0x7ffe4741-0x7ffe477c]
Jan 29 14:42:59.927903 kernel: ACPI: Reserving WAET table memory at [mem 0x7ffe477d-0x7ffe47a4]
Jan 29 14:42:59.927911 kernel: SRAT: PXM 0 -> APIC 0x00 -> Node 0
Jan 29 14:42:59.927920 kernel: SRAT: PXM 0 -> APIC 0x01 -> Node 0
Jan 29 14:42:59.927928 kernel: SRAT: PXM 0 -> APIC 0x02 -> Node 0
Jan 29 14:42:59.927936 kernel: SRAT: PXM 0 -> APIC 0x03 -> Node 0
Jan 29 14:42:59.927944 kernel: SRAT: PXM 0 -> APIC 0x04 -> Node 0
Jan 29 14:42:59.927951 kernel: SRAT: PXM 0 -> APIC 0x05 -> Node 0
Jan 29 14:42:59.927961 kernel: SRAT: PXM 0 -> APIC 0x06 -> Node 0
Jan 29 14:42:59.927969 kernel: SRAT: PXM 0 -> APIC 0x07 -> Node 0
Jan 29 14:42:59.927977 kernel: SRAT: PXM 0 -> APIC 0x08 -> Node 0
Jan 29 14:42:59.927985 kernel: SRAT: PXM 0 -> APIC 0x09 -> Node 0
Jan 29 14:42:59.927993 kernel: SRAT: PXM 0 -> APIC 0x0a -> Node 0
Jan 29 14:42:59.928001 kernel: SRAT: PXM 0 -> APIC 0x0b -> Node 0
Jan 29 14:42:59.928015 kernel: SRAT: PXM 0 -> APIC 0x0c -> Node 0
Jan 29 14:42:59.928023 kernel: SRAT: PXM 0 -> APIC 0x0d -> Node 0
Jan 29 14:42:59.928031 kernel: SRAT: PXM 0 -> APIC 0x0e -> Node 0
Jan 29 14:42:59.928041 kernel: SRAT: PXM 0 -> APIC 0x0f -> Node 0
Jan 29 14:42:59.928049 kernel: ACPI: SRAT: Node 0 PXM 0 [mem 0x00000000-0x0009ffff]
Jan 29 14:42:59.928057 kernel: ACPI: SRAT: Node 0 PXM 0 [mem 0x00100000-0x7fffffff]
Jan 29 14:42:59.928065 kernel: ACPI: SRAT: Node 0 PXM 0 [mem 0x100000000-0x20800fffff] hotplug
Jan 29 14:42:59.928074 kernel: NUMA: Node 0 [mem 0x00000000-0x0009ffff] + [mem 0x00100000-0x7ffdbfff] -> [mem 0x00000000-0x7ffdbfff]
Jan 29 14:42:59.928082 kernel: NODE_DATA(0) allocated [mem 0x7ffd6000-0x7ffdbfff]
Jan 29 14:42:59.928090 kernel: Zone ranges:
Jan 29 14:42:59.928099 kernel: DMA [mem 0x0000000000001000-0x0000000000ffffff]
Jan 29 14:42:59.928107 kernel: DMA32 [mem 0x0000000001000000-0x000000007ffdbfff]
Jan 29 14:42:59.928117 kernel: Normal empty
Jan 29 14:42:59.928125 kernel: Movable zone start for each node
Jan 29 14:42:59.928133 kernel: Early memory node ranges
Jan 29 14:42:59.928141 kernel: node 0: [mem 0x0000000000001000-0x000000000009efff]
Jan 29 14:42:59.928150 kernel: node 0: [mem 0x0000000000100000-0x000000007ffdbfff]
Jan 29 14:42:59.928158 kernel: Initmem setup node 0 [mem 0x0000000000001000-0x000000007ffdbfff]
Jan 29 14:42:59.928166 kernel: On node 0, zone DMA: 1 pages in unavailable ranges
Jan 29 14:42:59.928174 kernel: On node 0, zone DMA: 97 pages in unavailable ranges
Jan 29 14:42:59.928182 kernel: On node 0, zone DMA32: 36 pages in unavailable ranges
Jan 29 14:42:59.928190 kernel: ACPI: PM-Timer IO Port: 0x608
Jan 29 14:42:59.928201 kernel: ACPI: LAPIC_NMI (acpi_id[0xff] dfl dfl lint[0x1])
Jan 29 14:42:59.928209 kernel: IOAPIC[0]: apic_id 0, version 17, address 0xfec00000, GSI 0-23
Jan 29 14:42:59.928217 kernel: ACPI: INT_SRC_OVR (bus 0 bus_irq 0 global_irq 2 dfl dfl)
Jan 29 14:42:59.928234 kernel: ACPI: INT_SRC_OVR (bus 0 bus_irq 5 global_irq 5 high level)
Jan 29 14:42:59.928243 kernel: ACPI: INT_SRC_OVR (bus 0 bus_irq 9 global_irq 9 high level)
Jan 29 14:42:59.928251 kernel: ACPI: INT_SRC_OVR (bus 0 bus_irq 10 global_irq 10 high level)
Jan 29 14:42:59.928259 kernel: ACPI: INT_SRC_OVR (bus 0 bus_irq 11 global_irq 11 high level)
Jan 29 14:42:59.928267 kernel: ACPI: Using ACPI (MADT) for SMP configuration information
Jan 29 14:42:59.928275 kernel: TSC deadline timer available
Jan 29 14:42:59.928286 kernel: smpboot: Allowing 16 CPUs, 14 hotplug CPUs
Jan 29 14:42:59.928295 kernel: kvm-guest: APIC: eoi() replaced with kvm_guest_apic_eoi_write()
Jan 29 14:42:59.928303 kernel: [mem 0xc0000000-0xfed1bfff] available for PCI devices
Jan 29 14:42:59.928311 kernel: Booting paravirtualized kernel on KVM
Jan 29 14:42:59.928319 kernel: clocksource: refined-jiffies: mask: 0xffffffff max_cycles: 0xffffffff, max_idle_ns: 1910969940391419 ns
Jan 29 14:42:59.928327 kernel: setup_percpu: NR_CPUS:512 nr_cpumask_bits:16 nr_cpu_ids:16 nr_node_ids:1
Jan 29 14:42:59.928336 kernel: percpu: Embedded 58 pages/cpu s197032 r8192 d32344 u262144
Jan 29 14:42:59.928344 kernel: pcpu-alloc: s197032 r8192 d32344 u262144 alloc=1*2097152
Jan 29 14:42:59.928352 kernel: pcpu-alloc: [0] 00 01 02 03 04 05 06 07 [0] 08 09 10 11 12 13 14 15
Jan 29 14:42:59.928362 kernel: kvm-guest: PV spinlocks enabled
Jan 29 14:42:59.928370 kernel: PV qspinlock hash table entries: 256 (order: 0, 4096 bytes, linear)
Jan 29 14:42:59.928379 kernel: Kernel command line: rootflags=rw mount.usrflags=ro BOOT_IMAGE=/flatcar/vmlinuz-a mount.usr=/dev/mapper/usr verity.usr=PARTUUID=7130c94a-213a-4e5a-8e26-6cce9662f132 rootflags=rw mount.usrflags=ro consoleblank=0 root=LABEL=ROOT console=ttyS0,115200n8 console=tty0 flatcar.first_boot=detected flatcar.oem.id=openstack flatcar.autologin verity.usrhash=befc9792b021bef43c896e00e1d5172b6224dbafc9b6c92b267e5e544378e681
Jan 29 14:42:59.928388 kernel: Unknown kernel command line parameters "BOOT_IMAGE=/flatcar/vmlinuz-a", will be passed to user space.
Jan 29 14:42:59.928396 kernel: random: crng init done
Jan 29 14:42:59.928404 kernel: Dentry cache hash table entries: 262144 (order: 9, 2097152 bytes, linear)
Jan 29 14:42:59.928412 kernel: Inode-cache hash table entries: 131072 (order: 8, 1048576 bytes, linear)
Jan 29 14:42:59.928420 kernel: Fallback order for Node 0: 0
Jan 29 14:42:59.928430 kernel: Built 1 zonelists, mobility grouping on. Total pages: 515804
Jan 29 14:42:59.928438 kernel: Policy zone: DMA32
Jan 29 14:42:59.928447 kernel: mem auto-init: stack:off, heap alloc:off, heap free:off
Jan 29 14:42:59.928455 kernel: software IO TLB: area num 16.
Jan 29 14:42:59.928463 kernel: Memory: 1901536K/2096616K available (12288K kernel code, 2301K rwdata, 22728K rodata, 42844K init, 2348K bss, 194820K reserved, 0K cma-reserved)
Jan 29 14:42:59.928472 kernel: SLUB: HWalign=64, Order=0-3, MinObjects=0, CPUs=16, Nodes=1
Jan 29 14:42:59.928480 kernel: ftrace: allocating 37921 entries in 149 pages
Jan 29 14:42:59.928488 kernel: ftrace: allocated 149 pages with 4 groups
Jan 29 14:42:59.928496 kernel: Dynamic Preempt: voluntary
Jan 29 14:42:59.928522 kernel: rcu: Preemptible hierarchical RCU implementation.
Jan 29 14:42:59.928532 kernel: rcu: RCU event tracing is enabled.
Jan 29 14:42:59.928541 kernel: rcu: RCU restricting CPUs from NR_CPUS=512 to nr_cpu_ids=16.
Jan 29 14:42:59.928550 kernel: Trampoline variant of Tasks RCU enabled.
Jan 29 14:42:59.928560 kernel: Rude variant of Tasks RCU enabled.
Jan 29 14:42:59.928578 kernel: Tracing variant of Tasks RCU enabled.
Jan 29 14:42:59.928588 kernel: rcu: RCU calculated value of scheduler-enlistment delay is 100 jiffies.
Jan 29 14:42:59.928597 kernel: rcu: Adjusting geometry for rcu_fanout_leaf=16, nr_cpu_ids=16
Jan 29 14:42:59.928607 kernel: NR_IRQS: 33024, nr_irqs: 552, preallocated irqs: 16
Jan 29 14:42:59.928616 kernel: rcu: srcu_init: Setting srcu_struct sizes based on contention.
Jan 29 14:42:59.928626 kernel: Console: colour VGA+ 80x25
Jan 29 14:42:59.928635 kernel: printk: console [tty0] enabled
Jan 29 14:42:59.928647 kernel: printk: console [ttyS0] enabled
Jan 29 14:42:59.928657 kernel: ACPI: Core revision 20230628
Jan 29 14:42:59.928666 kernel: APIC: Switch to symmetric I/O mode setup
Jan 29 14:42:59.928676 kernel: x2apic enabled
Jan 29 14:42:59.928686 kernel: APIC: Switched APIC routing to: physical x2apic
Jan 29 14:42:59.928698 kernel: clocksource: tsc-early: mask: 0xffffffffffffffff max_cycles: 0x21134f58f0d, max_idle_ns: 440795217993 ns
Jan 29 14:42:59.928708 kernel: Calibrating delay loop (skipped) preset value.. 4589.21 BogoMIPS (lpj=2294608)
Jan 29 14:42:59.928717 kernel: x86/cpu: User Mode Instruction Prevention (UMIP) activated
Jan 29 14:42:59.928727 kernel: Last level iTLB entries: 4KB 0, 2MB 0, 4MB 0
Jan 29 14:42:59.928737 kernel: Last level dTLB entries: 4KB 0, 2MB 0, 4MB 0, 1GB 0
Jan 29 14:42:59.928746 kernel: Spectre V1 : Mitigation: usercopy/swapgs barriers and __user pointer sanitization
Jan 29 14:42:59.928756 kernel: Spectre V2 : Spectre BHI mitigation: SW BHB clearing on vm exit
Jan 29 14:42:59.928765 kernel: Spectre V2 : Spectre BHI mitigation: SW BHB clearing on syscall
Jan 29 14:42:59.928775 kernel: Spectre V2 : Mitigation: Enhanced / Automatic IBRS
Jan 29 14:42:59.928784 kernel: Spectre V2 : Spectre v2 / SpectreRSB mitigation: Filling RSB on context switch
Jan 29 14:42:59.928796 kernel: Spectre V2 : Spectre v2 / PBRSB-eIBRS: Retire a single CALL on VMEXIT
Jan 29 14:42:59.928805 kernel: RETBleed: Mitigation: Enhanced IBRS
Jan 29 14:42:59.928815 kernel: Spectre V2 : mitigation: Enabling conditional Indirect Branch Prediction Barrier
Jan 29 14:42:59.928824 kernel: Speculative Store Bypass: Mitigation: Speculative Store Bypass disabled via prctl
Jan 29 14:42:59.928833 kernel: TAA: Mitigation: Clear CPU buffers
Jan 29 14:42:59.928843 kernel: MMIO Stale Data: Vulnerable: Clear CPU buffers attempted, no microcode
Jan 29 14:42:59.928852 kernel: GDS: Unknown: Dependent on hypervisor status
Jan 29 14:42:59.928862 kernel: x86/fpu: Supporting XSAVE feature 0x001: 'x87 floating point registers'
Jan 29 14:42:59.928871 kernel: x86/fpu: Supporting XSAVE feature 0x002: 'SSE registers'
Jan 29 14:42:59.928881 kernel: x86/fpu: Supporting XSAVE feature 0x004: 'AVX registers'
Jan 29 14:42:59.928892 kernel: x86/fpu: Supporting XSAVE feature 0x020: 'AVX-512 opmask'
Jan 29 14:42:59.928902 kernel: x86/fpu: Supporting XSAVE feature 0x040: 'AVX-512 Hi256'
Jan 29 14:42:59.928911 kernel: x86/fpu: Supporting XSAVE feature 0x080: 'AVX-512 ZMM_Hi256'
Jan 29 14:42:59.928921 kernel: x86/fpu: Supporting XSAVE feature 0x200: 'Protection Keys User registers'
Jan 29 14:42:59.928930 kernel: x86/fpu: xstate_offset[2]: 576, xstate_sizes[2]: 256
Jan 29 14:42:59.928940 kernel: x86/fpu: xstate_offset[5]: 832, xstate_sizes[5]: 64
Jan 29 14:42:59.928949 kernel: x86/fpu: xstate_offset[6]: 896, xstate_sizes[6]: 512
Jan 29 14:42:59.928959 kernel: x86/fpu: xstate_offset[7]: 1408, xstate_sizes[7]: 1024
Jan 29 14:42:59.928968 kernel: x86/fpu: xstate_offset[9]: 2432, xstate_sizes[9]: 8
Jan 29 14:42:59.928977 kernel: x86/fpu: Enabled xstate features 0x2e7, context size is 2440 bytes, using 'compacted' format.
Jan 29 14:42:59.928987 kernel: Freeing SMP alternatives memory: 32K
Jan 29 14:42:59.928996 kernel: pid_max: default: 32768 minimum: 301
Jan 29 14:42:59.929015 kernel: LSM: initializing lsm=lockdown,capability,landlock,selinux,integrity
Jan 29 14:42:59.929025 kernel: landlock: Up and running.
Jan 29 14:42:59.929035 kernel: SELinux: Initializing.
Jan 29 14:42:59.929044 kernel: Mount-cache hash table entries: 4096 (order: 3, 32768 bytes, linear)
Jan 29 14:42:59.929054 kernel: Mountpoint-cache hash table entries: 4096 (order: 3, 32768 bytes, linear)
Jan 29 14:42:59.929063 kernel: smpboot: CPU0: Intel Xeon Processor (Cascadelake) (family: 0x6, model: 0x55, stepping: 0x6)
Jan 29 14:42:59.929073 kernel: RCU Tasks: Setting shift to 4 and lim to 1 rcu_task_cb_adjust=1 rcu_task_cpu_ids=16.
Jan 29 14:42:59.929083 kernel: RCU Tasks Rude: Setting shift to 4 and lim to 1 rcu_task_cb_adjust=1 rcu_task_cpu_ids=16.
Jan 29 14:42:59.929092 kernel: RCU Tasks Trace: Setting shift to 4 and lim to 1 rcu_task_cb_adjust=1 rcu_task_cpu_ids=16.
Jan 29 14:42:59.929102 kernel: Performance Events: unsupported p6 CPU model 85 no PMU driver, software events only.
Jan 29 14:42:59.929114 kernel: signal: max sigframe size: 3632
Jan 29 14:42:59.929124 kernel: rcu: Hierarchical SRCU implementation.
Jan 29 14:42:59.929133 kernel: rcu: Max phase no-delay instances is 400.
Jan 29 14:42:59.929143 kernel: NMI watchdog: Perf NMI watchdog permanently disabled
Jan 29 14:42:59.929153 kernel: smp: Bringing up secondary CPUs ...
Jan 29 14:42:59.929162 kernel: smpboot: x86: Booting SMP configuration:
Jan 29 14:42:59.929172 kernel: .... node #0, CPUs: #1
Jan 29 14:42:59.929181 kernel: smpboot: CPU 1 Converting physical 0 to logical die 1
Jan 29 14:42:59.929191 kernel: smp: Brought up 1 node, 2 CPUs
Jan 29 14:42:59.929202 kernel: smpboot: Max logical packages: 16
Jan 29 14:42:59.929212 kernel: smpboot: Total of 2 processors activated (9178.43 BogoMIPS)
Jan 29 14:42:59.929222 kernel: devtmpfs: initialized
Jan 29 14:42:59.930292 kernel: x86/mm: Memory block size: 128MB
Jan 29 14:42:59.930304 kernel: clocksource: jiffies: mask: 0xffffffff max_cycles: 0xffffffff, max_idle_ns: 1911260446275000 ns
Jan 29 14:42:59.930314 kernel: futex hash table entries: 4096 (order: 6, 262144 bytes, linear)
Jan 29 14:42:59.930324 kernel: pinctrl core: initialized pinctrl subsystem
Jan 29 14:42:59.930333 kernel: NET: Registered PF_NETLINK/PF_ROUTE protocol family
Jan 29 14:42:59.930343 kernel: audit: initializing netlink subsys (disabled)
Jan 29 14:42:59.930358 kernel: audit: type=2000 audit(1738161778.813:1): state=initialized audit_enabled=0 res=1
Jan 29 14:42:59.930367 kernel: thermal_sys: Registered thermal governor 'step_wise'
Jan 29 14:42:59.930377 kernel: thermal_sys: Registered thermal governor 'user_space'
Jan 29 14:42:59.930387 kernel: cpuidle: using governor menu
Jan 29 14:42:59.930397 kernel: acpiphp: ACPI Hot Plug PCI Controller Driver version: 0.5
Jan 29 14:42:59.930406 kernel: dca service started, version 1.12.1
Jan 29 14:42:59.930416 kernel: PCI: MMCONFIG for domain 0000 [bus 00-ff] at [mem 0xb0000000-0xbfffffff] (base 0xb0000000)
Jan 29 14:42:59.930426 kernel: PCI: MMCONFIG at [mem 0xb0000000-0xbfffffff] reserved as E820 entry
Jan 29 14:42:59.930436 kernel: PCI: Using configuration type 1 for base access
Jan 29 14:42:59.930448 kernel: kprobes: kprobe jump-optimization is enabled. All kprobes are optimized if possible.
Jan 29 14:42:59.930458 kernel: HugeTLB: registered 1.00 GiB page size, pre-allocated 0 pages
Jan 29 14:42:59.930467 kernel: HugeTLB: 16380 KiB vmemmap can be freed for a 1.00 GiB page
Jan 29 14:42:59.930477 kernel: HugeTLB: registered 2.00 MiB page size, pre-allocated 0 pages
Jan 29 14:42:59.930486 kernel: HugeTLB: 28 KiB vmemmap can be freed for a 2.00 MiB page
Jan 29 14:42:59.930496 kernel: ACPI: Added _OSI(Module Device)
Jan 29 14:42:59.930506 kernel: ACPI: Added _OSI(Processor Device)
Jan 29 14:42:59.930516 kernel: ACPI: Added _OSI(3.0 _SCP Extensions)
Jan 29 14:42:59.930525 kernel: ACPI: Added _OSI(Processor Aggregator Device)
Jan 29 14:42:59.930537 kernel: ACPI: 1 ACPI AML tables successfully acquired and loaded
Jan 29 14:42:59.930547 kernel: ACPI: _OSC evaluation for CPUs failed, trying _PDC
Jan 29 14:42:59.930556 kernel: ACPI: Interpreter enabled
Jan 29 14:42:59.930566 kernel: ACPI: PM: (supports S0 S5)
Jan 29 14:42:59.930575 kernel: ACPI: Using IOAPIC for interrupt routing
Jan 29 14:42:59.930585 kernel: PCI: Using host bridge windows from ACPI; if necessary, use "pci=nocrs" and report a bug
Jan 29 14:42:59.930595 kernel: PCI: Using E820 reservations for host bridge windows
Jan 29 14:42:59.930604 kernel: ACPI: Enabled 2 GPEs in block 00 to 3F
Jan 29 14:42:59.930614 kernel: ACPI: PCI Root Bridge [PCI0] (domain 0000 [bus 00-ff])
Jan 29 14:42:59.930779 kernel: acpi PNP0A08:00: _OSC: OS supports [ExtendedConfig ASPM ClockPM Segments MSI HPX-Type3]
Jan 29 14:42:59.930880 kernel: acpi PNP0A08:00: _OSC: platform does not support [LTR]
Jan 29 14:42:59.930970 kernel: acpi PNP0A08:00: _OSC: OS now controls [PCIeHotplug PME AER PCIeCapability]
Jan 29 14:42:59.930983 kernel: PCI host bridge to bus 0000:00
Jan 29 14:42:59.931099 kernel: pci_bus 0000:00: root bus resource [io 0x0000-0x0cf7 window]
Jan 29 14:42:59.931182 kernel: pci_bus 0000:00: root bus resource [io 0x0d00-0xffff window]
Jan 29 14:42:59.931398 kernel: pci_bus 0000:00: root bus resource [mem 0x000a0000-0x000bffff window]
Jan 29 14:42:59.931485 kernel: pci_bus 0000:00: root bus resource [mem 0x80000000-0xafffffff window]
Jan 29 14:42:59.931567 kernel: pci_bus 0000:00: root bus resource [mem 0xc0000000-0xfebfffff window]
Jan 29 14:42:59.931647 kernel: pci_bus 0000:00: root bus resource [mem 0x20c0000000-0x28bfffffff window]
Jan 29 14:42:59.931728 kernel: pci_bus 0000:00: root bus resource [bus 00-ff]
Jan 29 14:42:59.931839 kernel: pci 0000:00:00.0: [8086:29c0] type 00 class 0x060000
Jan 29 14:42:59.931951 kernel: pci 0000:00:01.0: [1013:00b8] type 00 class 0x030000
Jan 29 14:42:59.932057 kernel: pci 0000:00:01.0: reg 0x10: [mem 0xfa000000-0xfbffffff pref]
Jan 29 14:42:59.932148 kernel: pci 0000:00:01.0: reg 0x14: [mem 0xfea50000-0xfea50fff]
Jan 29 14:42:59.933301 kernel: pci 0000:00:01.0: reg 0x30: [mem 0xfea40000-0xfea4ffff pref]
Jan 29 14:42:59.933420 kernel: pci 0000:00:01.0: Video device with shadowed ROM at [mem 0x000c0000-0x000dffff]
Jan 29 14:42:59.933525 kernel: pci 0000:00:02.0: [1b36:000c] type 01 class 0x060400
Jan 29 14:42:59.933621 kernel: pci 0000:00:02.0: reg 0x10: [mem 0xfea51000-0xfea51fff]
Jan 29 14:42:59.933721 kernel: pci 0000:00:02.1: [1b36:000c] type 01 class 0x060400
Jan 29 14:42:59.933818 kernel: pci 0000:00:02.1: reg 0x10: [mem 0xfea52000-0xfea52fff]
Jan 29 14:42:59.933922 kernel: pci 0000:00:02.2: [1b36:000c] type 01 class 0x060400
Jan 29 14:42:59.934025 kernel: pci 0000:00:02.2: reg 0x10: [mem 0xfea53000-0xfea53fff]
Jan 29 14:42:59.934123 kernel: pci 0000:00:02.3: [1b36:000c] type 01 class 0x060400
Jan 29 14:42:59.934214 kernel: pci 0000:00:02.3: reg 0x10: [mem 0xfea54000-0xfea54fff]
Jan 29 14:42:59.934326 kernel: pci 0000:00:02.4: [1b36:000c] type 01 class 0x060400
Jan 29 14:42:59.934420 kernel: pci 0000:00:02.4: reg 0x10: [mem 0xfea55000-0xfea55fff]
Jan 29 14:42:59.934515 kernel: pci 0000:00:02.5: [1b36:000c] type 01 class 0x060400
Jan 29 14:42:59.934606 kernel: pci 0000:00:02.5: reg 0x10: [mem 0xfea56000-0xfea56fff]
Jan 29 14:42:59.934709 kernel: pci 0000:00:02.6: [1b36:000c] type 01 class 0x060400
Jan 29 14:42:59.934799 kernel: pci 0000:00:02.6: reg 0x10: [mem 0xfea57000-0xfea57fff]
Jan 29 14:42:59.934900 kernel: pci 0000:00:02.7: [1b36:000c] type 01 class 0x060400
Jan 29 14:42:59.934993 kernel: pci 0000:00:02.7: reg 0x10: [mem 0xfea58000-0xfea58fff]
Jan 29 14:42:59.935105 kernel: pci 0000:00:03.0: [1af4:1000] type 00 class 0x020000
Jan 29 14:42:59.935197 kernel: pci 0000:00:03.0: reg 0x10: [io 0xc0c0-0xc0df]
Jan 29 14:42:59.937396 kernel: pci 0000:00:03.0: reg 0x14: [mem 0xfea59000-0xfea59fff]
Jan 29 14:42:59.937510 kernel: pci 0000:00:03.0: reg 0x20: [mem 0xfd000000-0xfd003fff 64bit pref]
Jan 29 14:42:59.937596 kernel: pci 0000:00:03.0: reg 0x30: [mem 0xfea00000-0xfea3ffff pref]
Jan 29 14:42:59.937694 kernel: pci 0000:00:04.0: [1af4:1001] type 00 class 0x010000
Jan 29 14:42:59.937786 kernel: pci 0000:00:04.0: reg 0x10: [io 0xc000-0xc07f]
Jan 29 14:42:59.937869 kernel: pci 0000:00:04.0: reg 0x14: [mem 0xfea5a000-0xfea5afff]
Jan 29 14:42:59.937952 kernel: pci 0000:00:04.0: reg 0x20: [mem 0xfd004000-0xfd007fff 64bit pref]
Jan 29 14:42:59.938058 kernel: pci 0000:00:1f.0: [8086:2918] type 00 class 0x060100
Jan 29 14:42:59.938142 kernel: pci 0000:00:1f.0: quirk: [io 0x0600-0x067f] claimed by ICH6 ACPI/GPIO/TCO
Jan 29 14:42:59.939259 kernel: pci 0000:00:1f.2: [8086:2922] type 00 class 0x010601
Jan 29 14:42:59.939374 kernel: pci 0000:00:1f.2: reg 0x20: [io 0xc0e0-0xc0ff]
Jan 29 14:42:59.939467 kernel: pci 0000:00:1f.2: reg 0x24: [mem 0xfea5b000-0xfea5bfff]
Jan 29 14:42:59.939571 kernel: pci 0000:00:1f.3: [8086:2930] type 00 class 0x0c0500
Jan 29 14:42:59.939661 kernel: pci 0000:00:1f.3: reg 0x20: [io 0x0700-0x073f]
Jan 29 14:42:59.939768 kernel: pci 0000:01:00.0: [1b36:000e] type 01 class 0x060400
Jan 29 14:42:59.939863 kernel: pci 0000:01:00.0: reg 0x10: [mem 0xfda00000-0xfda000ff 64bit]
Jan 29 14:42:59.939961 kernel: pci 0000:00:02.0: PCI bridge to [bus 01-02]
Jan 29 14:42:59.940058 kernel: pci 0000:00:02.0: bridge window [mem 0xfd800000-0xfdbfffff]
Jan 29 14:42:59.940148 kernel: pci 0000:00:02.0: bridge window [mem 0xfce00000-0xfcffffff 64bit pref]
Jan 29 14:42:59.941305 kernel: pci_bus 0000:02: extended config space not accessible
Jan 29 14:42:59.941421 kernel: pci 0000:02:01.0: [8086:25ab] type 00 class 0x088000
Jan 29 14:42:59.941521 kernel: pci 0000:02:01.0: reg 0x10: [mem 0xfd800000-0xfd80000f]
Jan 29 14:42:59.941617 kernel: pci 0000:01:00.0: PCI bridge to [bus 02]
Jan 29 14:42:59.941717 kernel: pci 0000:01:00.0: bridge window [mem 0xfd800000-0xfd9fffff]
Jan 29 14:42:59.941822 kernel: pci 0000:03:00.0: [1b36:000d] type 00 class 0x0c0330
Jan 29 14:42:59.941917 kernel: pci 0000:03:00.0: reg 0x10: [mem 0xfe800000-0xfe803fff 64bit]
Jan 29 14:42:59.942017 kernel: pci 0000:00:02.1: PCI bridge to [bus 03]
Jan 29 14:42:59.942107 kernel: pci 0000:00:02.1: bridge window [mem 0xfe800000-0xfe9fffff]
Jan 29 14:42:59.942198 kernel: pci 0000:00:02.1: bridge window [mem 0xfcc00000-0xfcdfffff 64bit pref]
Jan 29 14:42:59.943322 kernel: pci 0000:04:00.0: [1af4:1044] type 00 class 0x00ff00
Jan 29 14:42:59.943427 kernel: pci 0000:04:00.0: reg 0x20: [mem 0xfca00000-0xfca03fff 64bit pref]
Jan 29 14:42:59.943521 kernel: pci 0000:00:02.2: PCI bridge to [bus 04]
Jan 29 14:42:59.943612 kernel: pci 0000:00:02.2: bridge window [mem 0xfe600000-0xfe7fffff]
Jan 29 14:42:59.943702 kernel: pci 0000:00:02.2: bridge window [mem 0xfca00000-0xfcbfffff 64bit pref]
Jan 29 14:42:59.943794 kernel: pci 0000:00:02.3: PCI bridge to [bus 05]
Jan 29 14:42:59.943884 kernel: pci 0000:00:02.3: bridge window [mem 0xfe400000-0xfe5fffff]
Jan 29 14:42:59.943975 kernel: pci 0000:00:02.3: bridge window [mem 0xfc800000-0xfc9fffff 64bit pref]
Jan 29 14:42:59.944075 kernel: pci 0000:00:02.4: PCI bridge to [bus 06]
Jan 29 14:42:59.944170 kernel: pci 0000:00:02.4: bridge window [mem 0xfe200000-0xfe3fffff]
Jan 29 14:42:59.945293 kernel: pci 0000:00:02.4: bridge window [mem 0xfc600000-0xfc7fffff 64bit pref]
Jan 29 14:42:59.945389 kernel: pci 0000:00:02.5: PCI bridge to [bus 07]
Jan 29 14:42:59.945479 kernel: pci 0000:00:02.5: bridge window [mem 0xfe000000-0xfe1fffff]
Jan 29 14:42:59.945569 kernel: pci 0000:00:02.5: bridge window [mem 0xfc400000-0xfc5fffff 64bit pref]
Jan 29 14:42:59.946322 kernel: pci 0000:00:02.6: PCI bridge to [bus 08]
Jan 29 14:42:59.946465 kernel: pci 0000:00:02.6: bridge window [mem 0xfde00000-0xfdffffff]
Jan 29 14:42:59.946555 kernel: pci 0000:00:02.6: bridge window [mem 0xfc200000-0xfc3fffff 64bit pref]
Jan 29 14:42:59.946654 kernel: pci 0000:00:02.7: PCI bridge to [bus 09]
Jan 29 14:42:59.946745 kernel: pci 0000:00:02.7: bridge window [mem 0xfdc00000-0xfddfffff]
Jan 29 14:42:59.946835 kernel: pci 0000:00:02.7: bridge window [mem 0xfc000000-0xfc1fffff 64bit pref]
Jan 29 14:42:59.946847 kernel: ACPI: PCI: Interrupt link LNKA configured for IRQ 10
Jan 29 14:42:59.946858 kernel: ACPI: PCI: Interrupt link LNKB configured for IRQ 10
Jan 29 14:42:59.946868 kernel: ACPI: PCI: Interrupt link LNKC configured for IRQ 11
Jan 29 14:42:59.946878 kernel: ACPI: PCI: Interrupt link LNKD configured for IRQ 11
Jan 29 14:42:59.946888 kernel: ACPI: PCI: Interrupt link LNKE configured for IRQ 10
Jan 29 14:42:59.946897 kernel: ACPI: PCI: Interrupt link LNKF configured for IRQ 10
Jan 29 14:42:59.946911 kernel: ACPI: PCI: Interrupt link LNKG configured for IRQ 11
Jan 29 14:42:59.946920 kernel: ACPI: PCI: Interrupt link LNKH configured for IRQ 11
Jan 29 14:42:59.946930 kernel: ACPI: PCI: Interrupt link GSIA configured for IRQ 16
Jan 29 14:42:59.946940 kernel: ACPI: PCI: Interrupt link GSIB configured for IRQ 17
Jan 29 14:42:59.946949 kernel: ACPI: PCI: Interrupt link GSIC configured for IRQ 18
Jan 29 14:42:59.946959 kernel: ACPI: PCI: Interrupt link GSID configured for IRQ 19
Jan 29 14:42:59.946969 kernel: ACPI: PCI: Interrupt link GSIE configured for IRQ 20
Jan 29 14:42:59.946978 kernel: ACPI: PCI: Interrupt link GSIF configured for IRQ 21
Jan 29 14:42:59.946988 kernel: ACPI: PCI: Interrupt link GSIG configured for IRQ 22
Jan 29 14:42:59.947000 kernel: ACPI: PCI: Interrupt link GSIH configured for IRQ 23
Jan 29 14:42:59.947017 kernel: iommu: Default domain type: Translated
Jan 29 14:42:59.947027 kernel: iommu: DMA domain TLB invalidation policy: lazy mode
Jan 29 14:42:59.947037 kernel: PCI: Using ACPI for IRQ routing
Jan 29 14:42:59.947046 kernel: PCI: pci_cache_line_size set to 64 bytes
Jan 29 14:42:59.947056 kernel: e820: reserve RAM buffer [mem 0x0009fc00-0x0009ffff]
Jan 29 14:42:59.947065 kernel: e820: reserve RAM buffer [mem 0x7ffdc000-0x7fffffff]
Jan 29 14:42:59.947155 kernel: pci 0000:00:01.0: vgaarb: setting as boot VGA device
Jan 29 14:42:59.949293 kernel: pci 0000:00:01.0: vgaarb: bridge control possible
Jan 29 14:42:59.949397 kernel: pci 0000:00:01.0: vgaarb: VGA device added: decodes=io+mem,owns=io+mem,locks=none
Jan 29 14:42:59.949410 kernel: vgaarb: loaded
Jan 29 14:42:59.949420 kernel: clocksource: Switched to clocksource kvm-clock
Jan 29 14:42:59.949438 kernel: VFS: Disk quotas dquot_6.6.0
Jan 29 14:42:59.949456 kernel: VFS: Dquot-cache hash table entries: 512 (order 0, 4096 bytes)
Jan 29 14:42:59.949466 kernel: pnp: PnP ACPI init
Jan 29 14:42:59.949631 kernel: system 00:04: [mem 0xb0000000-0xbfffffff window] has been reserved
Jan 29 14:42:59.949653 kernel: pnp: PnP ACPI: found 5 devices
Jan 29 14:42:59.949663 kernel: clocksource: acpi_pm: mask: 0xffffff max_cycles: 0xffffff, max_idle_ns: 2085701024 ns
Jan 29 14:42:59.949673 kernel: NET: Registered PF_INET protocol family
Jan 29 14:42:59.949683 kernel: IP idents hash table entries: 32768 (order: 6, 262144 bytes, linear)
Jan 29 14:42:59.949693 kernel: tcp_listen_portaddr_hash hash table entries: 1024 (order: 2, 16384 bytes, linear)
Jan 29 14:42:59.949703 kernel: Table-perturb hash table entries: 65536 (order: 6, 262144 bytes, linear)
Jan 29 14:42:59.949713 kernel: TCP established hash table entries: 16384 (order: 5, 131072 bytes, linear)
Jan 29 14:42:59.949723 kernel: TCP bind hash table entries: 16384 (order: 7, 524288 bytes, linear)
Jan 29 14:42:59.949733 kernel: TCP: Hash tables configured (established 16384 bind 16384)
Jan 29 14:42:59.949745 kernel: UDP hash table entries: 1024 (order: 3, 32768 bytes, linear)
Jan 29 14:42:59.949755 kernel: UDP-Lite hash table entries: 1024 (order: 3, 32768 bytes, linear)
Jan 29 14:42:59.949765 kernel: NET: Registered PF_UNIX/PF_LOCAL protocol family
Jan 29 14:42:59.949775 kernel: NET: Registered PF_XDP protocol family
Jan 29 14:42:59.949872 kernel: pci 0000:00:02.0: bridge window [io 0x1000-0x0fff] to [bus 01-02] add_size 1000
Jan 29 14:42:59.949966 kernel: pci 0000:00:02.1: bridge window [io 0x1000-0x0fff] to [bus 03] add_size 1000
Jan 29 14:42:59.950068 kernel: pci 0000:00:02.2: bridge window [io 0x1000-0x0fff] to [bus 04] add_size 1000
Jan 29 14:42:59.950173 kernel: pci 0000:00:02.3: bridge window [io 0x1000-0x0fff] to [bus 05] add_size 1000
Jan 29 14:42:59.950285 kernel: pci 0000:00:02.4: bridge window [io 0x1000-0x0fff] to [bus 06] add_size 1000
Jan 29 14:42:59.950384 kernel: pci 0000:00:02.5: bridge window [io 0x1000-0x0fff] to [bus 07] add_size 1000
Jan 29 14:42:59.950503 kernel: pci 0000:00:02.6: bridge window [io 0x1000-0x0fff] to [bus 08] add_size 1000
Jan 29 14:42:59.950618 kernel: pci 0000:00:02.7: bridge window [io 0x1000-0x0fff] to [bus 09] add_size 1000
Jan 29 14:42:59.950712 kernel: pci 0000:00:02.0: BAR 13: assigned [io 0x1000-0x1fff]
Jan 29 14:42:59.950807 kernel: pci 0000:00:02.1: BAR 13: assigned [io 0x2000-0x2fff]
Jan 29 14:42:59.950898 kernel: pci 0000:00:02.2: BAR 13: assigned [io 0x3000-0x3fff]
Jan 29 14:42:59.950989 kernel: pci 0000:00:02.3: BAR 13: assigned [io 0x4000-0x4fff]
Jan 29 14:42:59.951091 kernel: pci 0000:00:02.4: BAR 13: assigned [io 0x5000-0x5fff]
Jan 29 14:42:59.951181 kernel: pci 0000:00:02.5: BAR 13: assigned [io 0x6000-0x6fff]
Jan 29 14:42:59.952315 kernel: pci 0000:00:02.6: BAR 13: assigned [io 0x7000-0x7fff]
Jan 29 14:42:59.952413 kernel: pci 0000:00:02.7: BAR 13: assigned [io 0x8000-0x8fff]
Jan 29 14:42:59.952509 kernel: pci 0000:01:00.0: PCI bridge to [bus 02]
Jan 29 14:42:59.952603 kernel: pci 0000:01:00.0: bridge window [mem 0xfd800000-0xfd9fffff]
Jan 29 14:42:59.952702 kernel: pci 0000:00:02.0: PCI bridge to [bus 01-02]
Jan 29 14:42:59.952793 kernel: pci 0000:00:02.0: bridge window [io 0x1000-0x1fff]
Jan 29 14:42:59.952883 kernel: pci 0000:00:02.0: bridge window [mem 0xfd800000-0xfdbfffff]
Jan 29 14:42:59.952978 kernel: pci 0000:00:02.0: bridge window [mem 0xfce00000-0xfcffffff 64bit pref]
Jan 29 14:42:59.953076 kernel: pci 0000:00:02.1: PCI bridge to [bus 03]
Jan 29 14:42:59.953170 kernel: pci 0000:00:02.1: bridge window [io 0x2000-0x2fff]
Jan 29 14:42:59.954287 kernel: pci 0000:00:02.1: bridge window [mem 0xfe800000-0xfe9fffff]
Jan 29 14:42:59.954373 kernel: pci 0000:00:02.1: bridge window [mem 0xfcc00000-0xfcdfffff 64bit pref]
Jan 29 14:42:59.954457 kernel: pci 0000:00:02.2: PCI bridge to [bus 04]
Jan 29 14:42:59.954539 kernel: pci 0000:00:02.2: bridge window [io 0x3000-0x3fff]
Jan 29 14:42:59.954620 kernel: pci 0000:00:02.2: bridge window [mem 0xfe600000-0xfe7fffff]
Jan 29 14:42:59.954703 kernel: pci 0000:00:02.2: bridge window [mem 0xfca00000-0xfcbfffff 64bit pref]
Jan 29 14:42:59.954786 kernel: pci 0000:00:02.3: PCI bridge to [bus 05]
Jan 29 14:42:59.954872 kernel: pci 0000:00:02.3: bridge window [io 0x4000-0x4fff]
Jan 29 14:42:59.954958 kernel: pci 0000:00:02.3: bridge window [mem 0xfe400000-0xfe5fffff]
Jan 29 14:42:59.955046 kernel: pci 0000:00:02.3: bridge window [mem 0xfc800000-0xfc9fffff 64bit pref]
Jan 29 14:42:59.955129 kernel: pci 0000:00:02.4: PCI bridge to [bus 06]
Jan 29 14:42:59.955211 kernel: pci 0000:00:02.4: bridge window [io 0x5000-0x5fff]
Jan 29 14:42:59.956346 kernel: pci 0000:00:02.4: bridge window [mem 0xfe200000-0xfe3fffff]
Jan 29 14:42:59.956439 kernel: pci 0000:00:02.4: bridge window [mem 0xfc600000-0xfc7fffff 64bit pref]
Jan 29 14:42:59.956530 kernel: pci 0000:00:02.5: PCI bridge to [bus 07]
Jan 29 14:42:59.956620 kernel: pci 0000:00:02.5: bridge window [io 0x6000-0x6fff]
Jan 29 14:42:59.956711 kernel: pci 0000:00:02.5: bridge window [mem 0xfe000000-0xfe1fffff]
Jan 29 14:42:59.956808 kernel: pci 0000:00:02.5: bridge window [mem 0xfc400000-0xfc5fffff 64bit pref]
Jan 29 14:42:59.956900 kernel: pci 0000:00:02.6: PCI bridge to [bus 08]
Jan 29 14:42:59.956991 kernel: pci 0000:00:02.6: bridge window [io 0x7000-0x7fff]
Jan 29 14:42:59.957090 kernel: pci 0000:00:02.6: bridge window [mem 0xfde00000-0xfdffffff]
Jan 29 14:42:59.957179 kernel: pci 0000:00:02.6: bridge window [mem 0xfc200000-0xfc3fffff 64bit pref]
Jan 29 14:42:59.958307 kernel: pci 0000:00:02.7: PCI bridge to [bus 09]
Jan 29 14:42:59.958402 kernel: pci 0000:00:02.7: bridge window [io 0x8000-0x8fff]
Jan 29 14:42:59.958493 kernel: pci 0000:00:02.7: bridge window [mem 0xfdc00000-0xfddfffff]
Jan 29 14:42:59.958583 kernel: pci 0000:00:02.7: bridge window [mem 0xfc000000-0xfc1fffff 64bit pref]
Jan 29 14:42:59.958672 kernel: pci_bus 0000:00: resource 4 [io 0x0000-0x0cf7 window]
Jan 29 14:42:59.958753 kernel: pci_bus 0000:00: resource 5 [io 0x0d00-0xffff window]
Jan 29 14:42:59.958834 kernel: pci_bus 0000:00: resource 6 [mem 0x000a0000-0x000bffff window]
Jan 29 14:42:59.958915 kernel: pci_bus 0000:00: resource 7 [mem 0x80000000-0xafffffff window]
Jan 29 14:42:59.959015 kernel: pci_bus 0000:00: resource 8 [mem 0xc0000000-0xfebfffff window]
Jan 29 14:42:59.959108 kernel: pci_bus 0000:00: resource 9 [mem 0x20c0000000-0x28bfffffff window]
Jan 29 14:42:59.959202 kernel: pci_bus 0000:01: resource 0 [io 0x1000-0x1fff]
Jan 29 14:42:59.960308 kernel: pci_bus 0000:01: resource 1 [mem 0xfd800000-0xfdbfffff]
Jan 29 14:42:59.960395 kernel: pci_bus 0000:01: resource 2 [mem 0xfce00000-0xfcffffff 64bit pref]
Jan 29 14:42:59.960488 kernel: pci_bus 0000:02: resource 1 [mem 0xfd800000-0xfd9fffff]
Jan 29 14:42:59.960586 kernel: pci_bus 0000:03: resource 0 [io 0x2000-0x2fff]
Jan 29 14:42:59.960677 kernel: pci_bus 0000:03: resource 1 [mem 0xfe800000-0xfe9fffff]
Jan 29 14:42:59.960762 kernel: pci_bus 0000:03: resource 2 [mem 0xfcc00000-0xfcdfffff 64bit pref]
Jan 29 14:42:59.960852 kernel: pci_bus 0000:04: resource 0 [io 0x3000-0x3fff]
Jan 29 14:42:59.960936 kernel: pci_bus 0000:04: resource 1 [mem 0xfe600000-0xfe7fffff]
Jan 29 14:42:59.961029 kernel: pci_bus 0000:04: resource 2 [mem 0xfca00000-0xfcbfffff 64bit pref]
Jan 29 14:42:59.961118 kernel: pci_bus 0000:05: resource 0 [io 0x4000-0x4fff]
Jan 29 14:42:59.961204 kernel: pci_bus 0000:05: resource 1 [mem 0xfe400000-0xfe5fffff]
Jan 29 14:42:59.962328 kernel: pci_bus 0000:05: resource 2 [mem 0xfc800000-0xfc9fffff 64bit pref]
Jan 29 14:42:59.962426 kernel: pci_bus 0000:06: resource 0 [io 0x5000-0x5fff]
Jan 29 14:42:59.962523 kernel: pci_bus 0000:06: resource 1 [mem 0xfe200000-0xfe3fffff]
Jan 29 14:42:59.962598 kernel: pci_bus 0000:06: resource 2 [mem 0xfc600000-0xfc7fffff 64bit pref]
Jan 29 14:42:59.962681 kernel: pci_bus 0000:07: resource 0 [io 0x6000-0x6fff]
Jan 29 14:42:59.962757 kernel: pci_bus 0000:07: resource 1 [mem 0xfe000000-0xfe1fffff]
Jan 29 14:42:59.962833 kernel: pci_bus 0000:07: resource 2 [mem 0xfc400000-0xfc5fffff 64bit pref]
Jan 29 14:42:59.962925 kernel: pci_bus 0000:08: resource 0 [io 0x7000-0x7fff]
Jan 29 14:42:59.963002 kernel: pci_bus 0000:08: resource 1 [mem 0xfde00000-0xfdffffff]
Jan 29 14:42:59.963085 kernel: pci_bus 0000:08: resource 2 [mem 0xfc200000-0xfc3fffff 64bit pref]
Jan 29 14:42:59.963169 kernel: pci_bus 0000:09: resource 0 [io 0x8000-0x8fff]
Jan 29 14:42:59.964276 kernel: pci_bus 0000:09: resource 1 [mem 0xfdc00000-0xfddfffff]
Jan 29 14:42:59.964356 kernel: pci_bus 0000:09: resource 2 [mem 0xfc000000-0xfc1fffff 64bit pref]
Jan 29 14:42:59.964374 kernel: ACPI: \_SB_.GSIG: Enabled at IRQ 22
Jan 29 14:42:59.964384 kernel: PCI: CLS 0 bytes, default 64
Jan 29 14:42:59.964394 kernel: PCI-DMA: Using software bounce buffering for IO (SWIOTLB)
Jan
29 14:42:59.964404 kernel: software IO TLB: mapped [mem 0x0000000079800000-0x000000007d800000] (64MB) Jan 29 14:42:59.964413 kernel: RAPL PMU: API unit is 2^-32 Joules, 0 fixed counters, 10737418240 ms ovfl timer Jan 29 14:42:59.964423 kernel: clocksource: tsc: mask: 0xffffffffffffffff max_cycles: 0x21134f58f0d, max_idle_ns: 440795217993 ns Jan 29 14:42:59.964432 kernel: Initialise system trusted keyrings Jan 29 14:42:59.964442 kernel: workingset: timestamp_bits=39 max_order=19 bucket_order=0 Jan 29 14:42:59.964451 kernel: Key type asymmetric registered Jan 29 14:42:59.964463 kernel: Asymmetric key parser 'x509' registered Jan 29 14:42:59.964472 kernel: Block layer SCSI generic (bsg) driver version 0.4 loaded (major 251) Jan 29 14:42:59.964482 kernel: io scheduler mq-deadline registered Jan 29 14:42:59.964491 kernel: io scheduler kyber registered Jan 29 14:42:59.964501 kernel: io scheduler bfq registered Jan 29 14:42:59.964586 kernel: pcieport 0000:00:02.0: PME: Signaling with IRQ 24 Jan 29 14:42:59.964672 kernel: pcieport 0000:00:02.0: AER: enabled with IRQ 24 Jan 29 14:42:59.964755 kernel: pcieport 0000:00:02.0: pciehp: Slot #0 AttnBtn+ PwrCtrl+ MRL- AttnInd+ PwrInd+ HotPlug+ Surprise+ Interlock+ NoCompl- IbPresDis- LLActRep+ Jan 29 14:42:59.964844 kernel: pcieport 0000:00:02.1: PME: Signaling with IRQ 25 Jan 29 14:42:59.964927 kernel: pcieport 0000:00:02.1: AER: enabled with IRQ 25 Jan 29 14:42:59.965018 kernel: pcieport 0000:00:02.1: pciehp: Slot #0 AttnBtn+ PwrCtrl+ MRL- AttnInd+ PwrInd+ HotPlug+ Surprise+ Interlock+ NoCompl- IbPresDis- LLActRep+ Jan 29 14:42:59.965103 kernel: pcieport 0000:00:02.2: PME: Signaling with IRQ 26 Jan 29 14:42:59.965185 kernel: pcieport 0000:00:02.2: AER: enabled with IRQ 26 Jan 29 14:42:59.966299 kernel: pcieport 0000:00:02.2: pciehp: Slot #0 AttnBtn+ PwrCtrl+ MRL- AttnInd+ PwrInd+ HotPlug+ Surprise+ Interlock+ NoCompl- IbPresDis- LLActRep+ Jan 29 14:42:59.966391 kernel: pcieport 0000:00:02.3: PME: Signaling with IRQ 27 Jan 29 
14:42:59.966474 kernel: pcieport 0000:00:02.3: AER: enabled with IRQ 27 Jan 29 14:42:59.966579 kernel: pcieport 0000:00:02.3: pciehp: Slot #0 AttnBtn+ PwrCtrl+ MRL- AttnInd+ PwrInd+ HotPlug+ Surprise+ Interlock+ NoCompl- IbPresDis- LLActRep+ Jan 29 14:42:59.966664 kernel: pcieport 0000:00:02.4: PME: Signaling with IRQ 28 Jan 29 14:42:59.966746 kernel: pcieport 0000:00:02.4: AER: enabled with IRQ 28 Jan 29 14:42:59.966829 kernel: pcieport 0000:00:02.4: pciehp: Slot #0 AttnBtn+ PwrCtrl+ MRL- AttnInd+ PwrInd+ HotPlug+ Surprise+ Interlock+ NoCompl- IbPresDis- LLActRep+ Jan 29 14:42:59.966917 kernel: pcieport 0000:00:02.5: PME: Signaling with IRQ 29 Jan 29 14:42:59.966999 kernel: pcieport 0000:00:02.5: AER: enabled with IRQ 29 Jan 29 14:42:59.967098 kernel: pcieport 0000:00:02.5: pciehp: Slot #0 AttnBtn+ PwrCtrl+ MRL- AttnInd+ PwrInd+ HotPlug+ Surprise+ Interlock+ NoCompl- IbPresDis- LLActRep+ Jan 29 14:42:59.967183 kernel: pcieport 0000:00:02.6: PME: Signaling with IRQ 30 Jan 29 14:42:59.969295 kernel: pcieport 0000:00:02.6: AER: enabled with IRQ 30 Jan 29 14:42:59.969398 kernel: pcieport 0000:00:02.6: pciehp: Slot #0 AttnBtn+ PwrCtrl+ MRL- AttnInd+ PwrInd+ HotPlug+ Surprise+ Interlock+ NoCompl- IbPresDis- LLActRep+ Jan 29 14:42:59.969497 kernel: pcieport 0000:00:02.7: PME: Signaling with IRQ 31 Jan 29 14:42:59.969590 kernel: pcieport 0000:00:02.7: AER: enabled with IRQ 31 Jan 29 14:42:59.969683 kernel: pcieport 0000:00:02.7: pciehp: Slot #0 AttnBtn+ PwrCtrl+ MRL- AttnInd+ PwrInd+ HotPlug+ Surprise+ Interlock+ NoCompl- IbPresDis- LLActRep+ Jan 29 14:42:59.969698 kernel: ioatdma: Intel(R) QuickData Technology Driver 5.00 Jan 29 14:42:59.969709 kernel: ACPI: \_SB_.GSIH: Enabled at IRQ 23 Jan 29 14:42:59.969720 kernel: ACPI: \_SB_.GSIE: Enabled at IRQ 20 Jan 29 14:42:59.969730 kernel: Serial: 8250/16550 driver, 4 ports, IRQ sharing enabled Jan 29 14:42:59.969744 kernel: 00:00: ttyS0 at I/O 0x3f8 (irq = 4, base_baud = 115200) is a 16550A Jan 29 14:42:59.969755 kernel: 
i8042: PNP: PS/2 Controller [PNP0303:KBD,PNP0f13:MOU] at 0x60,0x64 irq 1,12 Jan 29 14:42:59.969765 kernel: serio: i8042 KBD port at 0x60,0x64 irq 1 Jan 29 14:42:59.969776 kernel: serio: i8042 AUX port at 0x60,0x64 irq 12 Jan 29 14:42:59.969878 kernel: rtc_cmos 00:03: RTC can wake from S4 Jan 29 14:42:59.969892 kernel: input: AT Translated Set 2 keyboard as /devices/platform/i8042/serio0/input/input0 Jan 29 14:42:59.969975 kernel: rtc_cmos 00:03: registered as rtc0 Jan 29 14:42:59.970073 kernel: rtc_cmos 00:03: setting system clock to 2025-01-29T14:42:59 UTC (1738161779) Jan 29 14:42:59.970161 kernel: rtc_cmos 00:03: alarms up to one day, y3k, 242 bytes nvram Jan 29 14:42:59.970175 kernel: intel_pstate: CPU model not supported Jan 29 14:42:59.970185 kernel: NET: Registered PF_INET6 protocol family Jan 29 14:42:59.970196 kernel: Segment Routing with IPv6 Jan 29 14:42:59.970207 kernel: In-situ OAM (IOAM) with IPv6 Jan 29 14:42:59.970217 kernel: NET: Registered PF_PACKET protocol family Jan 29 14:42:59.970241 kernel: Key type dns_resolver registered Jan 29 14:42:59.970251 kernel: IPI shorthand broadcast: enabled Jan 29 14:42:59.970262 kernel: sched_clock: Marking stable (952002632, 121306554)->(1169552692, -96243506) Jan 29 14:42:59.970276 kernel: registered taskstats version 1 Jan 29 14:42:59.970287 kernel: Loading compiled-in X.509 certificates Jan 29 14:42:59.970297 kernel: Loaded X.509 cert 'Kinvolk GmbH: Module signing key for 6.6.74-flatcar: 1efdcbe72fc44d29e4e6411cf9a3e64046be4375' Jan 29 14:42:59.970307 kernel: Key type .fscrypt registered Jan 29 14:42:59.970317 kernel: Key type fscrypt-provisioning registered Jan 29 14:42:59.970328 kernel: ima: No TPM chip found, activating TPM-bypass! 
Jan 29 14:42:59.970339 kernel: ima: Allocated hash algorithm: sha1
Jan 29 14:42:59.970349 kernel: ima: No architecture policies found
Jan 29 14:42:59.970360 kernel: clk: Disabling unused clocks
Jan 29 14:42:59.970373 kernel: Freeing unused kernel image (initmem) memory: 42844K
Jan 29 14:42:59.970384 kernel: Write protecting the kernel read-only data: 36864k
Jan 29 14:42:59.970394 kernel: Freeing unused kernel image (rodata/data gap) memory: 1848K
Jan 29 14:42:59.970405 kernel: Run /init as init process
Jan 29 14:42:59.970415 kernel: with arguments:
Jan 29 14:42:59.970426 kernel: /init
Jan 29 14:42:59.970436 kernel: with environment:
Jan 29 14:42:59.970446 kernel: HOME=/
Jan 29 14:42:59.970456 kernel: TERM=linux
Jan 29 14:42:59.970469 kernel: BOOT_IMAGE=/flatcar/vmlinuz-a
Jan 29 14:42:59.970482 systemd[1]: systemd 255 running in system mode (+PAM +AUDIT +SELINUX -APPARMOR +IMA +SMACK +SECCOMP +GCRYPT -GNUTLS +OPENSSL -ACL +BLKID +CURL +ELFUTILS -FIDO2 +IDN2 -IDN +IPTC +KMOD +LIBCRYPTSETUP +LIBFDISK +PCRE2 -PWQUALITY -P11KIT -QRENCODE +TPM2 +BZIP2 +LZ4 +XZ +ZLIB +ZSTD -BPF_FRAMEWORK -XKBCOMMON +UTMP -SYSVINIT default-hierarchy=unified)
Jan 29 14:42:59.970496 systemd[1]: Detected virtualization kvm.
Jan 29 14:42:59.970507 systemd[1]: Detected architecture x86-64.
Jan 29 14:42:59.970517 systemd[1]: Running in initrd.
Jan 29 14:42:59.970527 systemd[1]: No hostname configured, using default hostname.
Jan 29 14:42:59.970538 systemd[1]: Hostname set to .
Jan 29 14:42:59.970552 systemd[1]: Initializing machine ID from VM UUID.
Jan 29 14:42:59.970562 systemd[1]: Queued start job for default target initrd.target.
Jan 29 14:42:59.970573 systemd[1]: Started clevis-luks-askpass.path - Forward Password Requests to Clevis Directory Watch.
Jan 29 14:42:59.970584 systemd[1]: Started systemd-ask-password-console.path - Dispatch Password Requests to Console Directory Watch.
Jan 29 14:42:59.970595 systemd[1]: Expecting device dev-disk-by\x2dlabel-EFI\x2dSYSTEM.device - /dev/disk/by-label/EFI-SYSTEM...
Jan 29 14:42:59.970606 systemd[1]: Expecting device dev-disk-by\x2dlabel-OEM.device - /dev/disk/by-label/OEM...
Jan 29 14:42:59.970617 systemd[1]: Expecting device dev-disk-by\x2dlabel-ROOT.device - /dev/disk/by-label/ROOT...
Jan 29 14:42:59.970628 systemd[1]: Expecting device dev-disk-by\x2dpartlabel-USR\x2dA.device - /dev/disk/by-partlabel/USR-A...
Jan 29 14:42:59.970643 systemd[1]: Expecting device dev-disk-by\x2dpartuuid-7130c94a\x2d213a\x2d4e5a\x2d8e26\x2d6cce9662f132.device - /dev/disk/by-partuuid/7130c94a-213a-4e5a-8e26-6cce9662f132...
Jan 29 14:42:59.970654 systemd[1]: Expecting device dev-mapper-usr.device - /dev/mapper/usr...
Jan 29 14:42:59.970665 systemd[1]: Reached target cryptsetup-pre.target - Local Encrypted Volumes (Pre).
Jan 29 14:42:59.970676 systemd[1]: Reached target cryptsetup.target - Local Encrypted Volumes.
Jan 29 14:42:59.970686 systemd[1]: Reached target paths.target - Path Units.
Jan 29 14:42:59.970697 systemd[1]: Reached target slices.target - Slice Units.
Jan 29 14:42:59.970708 systemd[1]: Reached target swap.target - Swaps.
Jan 29 14:42:59.970721 systemd[1]: Reached target timers.target - Timer Units.
Jan 29 14:42:59.970732 systemd[1]: Listening on iscsid.socket - Open-iSCSI iscsid Socket.
Jan 29 14:42:59.970743 systemd[1]: Listening on iscsiuio.socket - Open-iSCSI iscsiuio Socket.
Jan 29 14:42:59.970754 systemd[1]: Listening on systemd-journald-dev-log.socket - Journal Socket (/dev/log).
Jan 29 14:42:59.970765 systemd[1]: Listening on systemd-journald.socket - Journal Socket.
Jan 29 14:42:59.970776 systemd[1]: Listening on systemd-networkd.socket - Network Service Netlink Socket.
Jan 29 14:42:59.970786 systemd[1]: Listening on systemd-udevd-control.socket - udev Control Socket.
Jan 29 14:42:59.970797 systemd[1]: Listening on systemd-udevd-kernel.socket - udev Kernel Socket.
Jan 29 14:42:59.970808 systemd[1]: Reached target sockets.target - Socket Units.
Jan 29 14:42:59.970821 systemd[1]: Starting ignition-setup-pre.service - Ignition env setup...
Jan 29 14:42:59.970832 systemd[1]: Starting kmod-static-nodes.service - Create List of Static Device Nodes...
Jan 29 14:42:59.970843 systemd[1]: Finished network-cleanup.service - Network Cleanup.
Jan 29 14:42:59.970854 systemd[1]: Starting systemd-fsck-usr.service...
Jan 29 14:42:59.970864 systemd[1]: Starting systemd-journald.service - Journal Service...
Jan 29 14:42:59.970875 systemd[1]: Starting systemd-modules-load.service - Load Kernel Modules...
Jan 29 14:42:59.970912 systemd-journald[200]: Collecting audit messages is disabled.
Jan 29 14:42:59.970942 systemd[1]: Starting systemd-vconsole-setup.service - Virtual Console Setup...
Jan 29 14:42:59.970953 systemd[1]: Finished ignition-setup-pre.service - Ignition env setup.
Jan 29 14:42:59.970964 systemd[1]: Finished kmod-static-nodes.service - Create List of Static Device Nodes.
Jan 29 14:42:59.970975 systemd[1]: Finished systemd-fsck-usr.service.
Jan 29 14:42:59.970989 systemd[1]: Starting systemd-tmpfiles-setup-dev-early.service - Create Static Device Nodes in /dev gracefully...
Jan 29 14:42:59.971001 systemd-journald[200]: Journal started
Jan 29 14:42:59.971031 systemd-journald[200]: Runtime Journal (/run/log/journal/8d17f75fdbca4c1f98e1be139d17a820) is 4.7M, max 38.0M, 33.2M free.
Jan 29 14:42:59.948266 systemd-modules-load[201]: Inserted module 'overlay'
Jan 29 14:43:00.001685 kernel: bridge: filtering via arp/ip/ip6tables is no longer available by default. Update your scripts to load br_netfilter if you need this.
Jan 29 14:43:00.001708 kernel: Bridge firewalling registered
Jan 29 14:43:00.001722 systemd[1]: Started systemd-journald.service - Journal Service.
Jan 29 14:42:59.974891 systemd-modules-load[201]: Inserted module 'br_netfilter'
Jan 29 14:43:00.007603 systemd[1]: Finished systemd-modules-load.service - Load Kernel Modules.
Jan 29 14:43:00.008273 systemd[1]: Finished systemd-vconsole-setup.service - Virtual Console Setup.
Jan 29 14:43:00.014446 systemd[1]: Starting dracut-cmdline-ask.service - dracut ask for additional cmdline parameters...
Jan 29 14:43:00.017063 systemd[1]: Starting systemd-sysctl.service - Apply Kernel Variables...
Jan 29 14:43:00.020852 systemd[1]: Starting systemd-tmpfiles-setup.service - Create System Files and Directories...
Jan 29 14:43:00.021616 systemd[1]: Finished systemd-tmpfiles-setup-dev-early.service - Create Static Device Nodes in /dev gracefully.
Jan 29 14:43:00.023600 systemd[1]: Starting systemd-tmpfiles-setup-dev.service - Create Static Device Nodes in /dev...
Jan 29 14:43:00.036749 systemd[1]: Finished systemd-sysctl.service - Apply Kernel Variables.
Jan 29 14:43:00.039386 systemd[1]: Finished systemd-tmpfiles-setup-dev.service - Create Static Device Nodes in /dev.
Jan 29 14:43:00.040764 systemd[1]: Finished systemd-tmpfiles-setup.service - Create System Files and Directories.
Jan 29 14:43:00.049413 systemd[1]: Starting systemd-resolved.service - Network Name Resolution...
Jan 29 14:43:00.052282 systemd[1]: Finished dracut-cmdline-ask.service - dracut ask for additional cmdline parameters.
Jan 29 14:43:00.054365 systemd[1]: Starting dracut-cmdline.service - dracut cmdline hook...
Jan 29 14:43:00.072964 dracut-cmdline[236]: dracut-dracut-053
Jan 29 14:43:00.075582 dracut-cmdline[236]: Using kernel command line parameters: rd.driver.pre=btrfs rootflags=rw mount.usrflags=ro BOOT_IMAGE=/flatcar/vmlinuz-a mount.usr=/dev/mapper/usr verity.usr=PARTUUID=7130c94a-213a-4e5a-8e26-6cce9662f132 rootflags=rw mount.usrflags=ro consoleblank=0 root=LABEL=ROOT console=ttyS0,115200n8 console=tty0 flatcar.first_boot=detected flatcar.oem.id=openstack flatcar.autologin verity.usrhash=befc9792b021bef43c896e00e1d5172b6224dbafc9b6c92b267e5e544378e681
Jan 29 14:43:00.078890 systemd-resolved[230]: Positive Trust Anchors:
Jan 29 14:43:00.078907 systemd-resolved[230]: . IN DS 20326 8 2 e06d44b80b8f1d39a95c0b0d7c65d08458e880409bbc683457104237c7f8ec8d
Jan 29 14:43:00.078947 systemd-resolved[230]: Negative trust anchors: home.arpa 10.in-addr.arpa 16.172.in-addr.arpa 17.172.in-addr.arpa 18.172.in-addr.arpa 19.172.in-addr.arpa 20.172.in-addr.arpa 21.172.in-addr.arpa 22.172.in-addr.arpa 23.172.in-addr.arpa 24.172.in-addr.arpa 25.172.in-addr.arpa 26.172.in-addr.arpa 27.172.in-addr.arpa 28.172.in-addr.arpa 29.172.in-addr.arpa 30.172.in-addr.arpa 31.172.in-addr.arpa 170.0.0.192.in-addr.arpa 171.0.0.192.in-addr.arpa 168.192.in-addr.arpa d.f.ip6.arpa ipv4only.arpa resolver.arpa corp home internal intranet lan local private test
Jan 29 14:43:00.085336 systemd-resolved[230]: Defaulting to hostname 'linux'.
Jan 29 14:43:00.086925 systemd[1]: Started systemd-resolved.service - Network Name Resolution.
Jan 29 14:43:00.087838 systemd[1]: Reached target nss-lookup.target - Host and Network Name Lookups.
Jan 29 14:43:00.174260 kernel: SCSI subsystem initialized
Jan 29 14:43:00.184248 kernel: Loading iSCSI transport class v2.0-870.
Jan 29 14:43:00.196256 kernel: iscsi: registered transport (tcp)
Jan 29 14:43:00.218345 kernel: iscsi: registered transport (qla4xxx)
Jan 29 14:43:00.218385 kernel: QLogic iSCSI HBA Driver
Jan 29 14:43:00.271508 systemd[1]: Finished dracut-cmdline.service - dracut cmdline hook.
Jan 29 14:43:00.276368 systemd[1]: Starting dracut-pre-udev.service - dracut pre-udev hook...
Jan 29 14:43:00.304252 kernel: device-mapper: core: CONFIG_IMA_DISABLE_HTABLE is disabled. Duplicate IMA measurements will not be recorded in the IMA log.
Jan 29 14:43:00.304314 kernel: device-mapper: uevent: version 1.0.3
Jan 29 14:43:00.306253 kernel: device-mapper: ioctl: 4.48.0-ioctl (2023-03-01) initialised: dm-devel@redhat.com
Jan 29 14:43:00.359296 kernel: raid6: avx512x4 gen() 17610 MB/s
Jan 29 14:43:00.376298 kernel: raid6: avx512x2 gen() 17564 MB/s
Jan 29 14:43:00.393298 kernel: raid6: avx512x1 gen() 17548 MB/s
Jan 29 14:43:00.410301 kernel: raid6: avx2x4 gen() 17585 MB/s
Jan 29 14:43:00.427270 kernel: raid6: avx2x2 gen() 17550 MB/s
Jan 29 14:43:00.444333 kernel: raid6: avx2x1 gen() 13394 MB/s
Jan 29 14:43:00.444477 kernel: raid6: using algorithm avx512x4 gen() 17610 MB/s
Jan 29 14:43:00.462411 kernel: raid6: .... xor() 7569 MB/s, rmw enabled
Jan 29 14:43:00.462562 kernel: raid6: using avx512x2 recovery algorithm
Jan 29 14:43:00.485296 kernel: xor: automatically using best checksumming function avx
Jan 29 14:43:00.673300 kernel: Btrfs loaded, zoned=no, fsverity=no
Jan 29 14:43:00.691882 systemd[1]: Finished dracut-pre-udev.service - dracut pre-udev hook.
Jan 29 14:43:00.698405 systemd[1]: Starting systemd-udevd.service - Rule-based Manager for Device Events and Files...
Jan 29 14:43:00.715707 systemd-udevd[418]: Using default interface naming scheme 'v255'.
Jan 29 14:43:00.720857 systemd[1]: Started systemd-udevd.service - Rule-based Manager for Device Events and Files.
Jan 29 14:43:00.733426 systemd[1]: Starting dracut-pre-trigger.service - dracut pre-trigger hook...
Jan 29 14:43:00.784018 dracut-pre-trigger[429]: rd.md=0: removing MD RAID activation
Jan 29 14:43:00.829421 systemd[1]: Finished dracut-pre-trigger.service - dracut pre-trigger hook.
Jan 29 14:43:00.835418 systemd[1]: Starting systemd-udev-trigger.service - Coldplug All udev Devices...
Jan 29 14:43:00.902798 systemd[1]: Finished systemd-udev-trigger.service - Coldplug All udev Devices.
Jan 29 14:43:00.911631 systemd[1]: Starting dracut-initqueue.service - dracut initqueue hook...
Jan 29 14:43:00.941019 systemd[1]: Finished dracut-initqueue.service - dracut initqueue hook.
Jan 29 14:43:00.942070 systemd[1]: Reached target remote-fs-pre.target - Preparation for Remote File Systems.
Jan 29 14:43:00.942934 systemd[1]: Reached target remote-cryptsetup.target - Remote Encrypted Volumes.
Jan 29 14:43:00.943650 systemd[1]: Reached target remote-fs.target - Remote File Systems.
Jan 29 14:43:00.950397 systemd[1]: Starting dracut-pre-mount.service - dracut pre-mount hook...
Jan 29 14:43:00.961505 systemd[1]: Finished dracut-pre-mount.service - dracut pre-mount hook.
Jan 29 14:43:00.992267 kernel: virtio_blk virtio1: 2/0/0 default/read/poll queues
Jan 29 14:43:01.039325 kernel: cryptd: max_cpu_qlen set to 1000
Jan 29 14:43:01.039351 kernel: virtio_blk virtio1: [vda] 125829120 512-byte logical blocks (64.4 GB/60.0 GiB)
Jan 29 14:43:01.039489 kernel: GPT:Primary header thinks Alt. header is not at the end of the disk.
Jan 29 14:43:01.039504 kernel: GPT:17805311 != 125829119
Jan 29 14:43:01.039517 kernel: GPT:Alternate GPT header not at the end of the disk.
Jan 29 14:43:01.039529 kernel: GPT:17805311 != 125829119
Jan 29 14:43:01.039541 kernel: GPT: Use GNU Parted to correct GPT errors.
Jan 29 14:43:01.039554 kernel: vda: vda1 vda2 vda3 vda4 vda6 vda7 vda9
Jan 29 14:43:01.008804 systemd[1]: dracut-cmdline-ask.service: Deactivated successfully.
Jan 29 14:43:01.113455 kernel: AVX2 version of gcm_enc/dec engaged.
Jan 29 14:43:01.113489 kernel: AES CTR mode by8 optimization enabled
Jan 29 14:43:01.113503 kernel: ACPI: bus type USB registered
Jan 29 14:43:01.113516 kernel: usbcore: registered new interface driver usbfs
Jan 29 14:43:01.113529 kernel: usbcore: registered new interface driver hub
Jan 29 14:43:01.113542 kernel: usbcore: registered new device driver usb
Jan 29 14:43:01.113555 kernel: libata version 3.00 loaded.
Jan 29 14:43:01.113567 kernel: ahci 0000:00:1f.2: version 3.0
Jan 29 14:43:01.141140 kernel: ACPI: \_SB_.GSIA: Enabled at IRQ 16
Jan 29 14:43:01.141170 kernel: ahci 0000:00:1f.2: AHCI 0001.0000 32 slots 6 ports 1.5 Gbps 0x3f impl SATA mode
Jan 29 14:43:01.141335 kernel: ahci 0000:00:1f.2: flags: 64bit ncq only
Jan 29 14:43:01.141451 kernel: BTRFS: device fsid 64bb5b5a-85cc-41cc-a02b-2cfaa3e93b0a devid 1 transid 38 /dev/vda3 scanned by (udev-worker) (480)
Jan 29 14:43:01.141466 kernel: scsi host0: ahci
Jan 29 14:43:01.141590 kernel: BTRFS: device label OEM devid 1 transid 12 /dev/vda6 scanned by (udev-worker) (476)
Jan 29 14:43:01.141604 kernel: scsi host1: ahci
Jan 29 14:43:01.141710 kernel: scsi host2: ahci
Jan 29 14:43:01.141818 kernel: scsi host3: ahci
Jan 29 14:43:01.141931 kernel: scsi host4: ahci
Jan 29 14:43:01.142052 kernel: scsi host5: ahci
Jan 29 14:43:01.142166 kernel: xhci_hcd 0000:03:00.0: xHCI Host Controller
Jan 29 14:43:01.149125 kernel: xhci_hcd 0000:03:00.0: new USB bus registered, assigned bus number 1
Jan 29 14:43:01.149283 kernel: xhci_hcd 0000:03:00.0: hcc params 0x00087001 hci version 0x100 quirks 0x0000000000000010
Jan 29 14:43:01.149402 kernel: ata1: SATA max UDMA/133 abar m4096@0xfea5b000 port 0xfea5b100 irq 38
Jan 29 14:43:01.149424 kernel: ata2: SATA max UDMA/133 abar m4096@0xfea5b000 port 0xfea5b180 irq 38
Jan 29 14:43:01.149437 kernel: ata3: SATA max UDMA/133 abar m4096@0xfea5b000 port 0xfea5b200 irq 38
Jan 29 14:43:01.149450 kernel: ata4: SATA max UDMA/133 abar m4096@0xfea5b000 port 0xfea5b280 irq 38
Jan 29 14:43:01.149463 kernel: ata5: SATA max UDMA/133 abar m4096@0xfea5b000 port 0xfea5b300 irq 38
Jan 29 14:43:01.149475 kernel: ata6: SATA max UDMA/133 abar m4096@0xfea5b000 port 0xfea5b380 irq 38
Jan 29 14:43:01.149488 kernel: xhci_hcd 0000:03:00.0: xHCI Host Controller
Jan 29 14:43:01.149603 kernel: xhci_hcd 0000:03:00.0: new USB bus registered, assigned bus number 2
Jan 29 14:43:01.149718 kernel: xhci_hcd 0000:03:00.0: Host supports USB 3.0 SuperSpeed
Jan 29 14:43:01.149834 kernel: hub 1-0:1.0: USB hub found
Jan 29 14:43:01.149973 kernel: hub 1-0:1.0: 4 ports detected
Jan 29 14:43:01.150085 kernel: usb usb2: We don't know the algorithms for LPM for this host, disabling LPM.
Jan 29 14:43:01.150208 kernel: hub 2-0:1.0: USB hub found
Jan 29 14:43:01.151380 kernel: hub 2-0:1.0: 4 ports detected
Jan 29 14:43:01.008948 systemd[1]: Stopped dracut-cmdline-ask.service - dracut ask for additional cmdline parameters.
Jan 29 14:43:01.014836 systemd[1]: Stopping dracut-cmdline-ask.service - dracut ask for additional cmdline parameters...
Jan 29 14:43:01.015209 systemd[1]: systemd-vconsole-setup.service: Deactivated successfully.
Jan 29 14:43:01.015357 systemd[1]: Stopped systemd-vconsole-setup.service - Virtual Console Setup.
Jan 29 14:43:01.015733 systemd[1]: Stopping systemd-vconsole-setup.service - Virtual Console Setup...
Jan 29 14:43:01.022451 systemd[1]: Starting systemd-vconsole-setup.service - Virtual Console Setup...
Jan 29 14:43:01.101969 systemd[1]: Found device dev-disk-by\x2dlabel-ROOT.device - /dev/disk/by-label/ROOT.
Jan 29 14:43:01.114369 systemd[1]: Finished systemd-vconsole-setup.service - Virtual Console Setup.
Jan 29 14:43:01.124748 systemd[1]: Found device dev-disk-by\x2dlabel-EFI\x2dSYSTEM.device - /dev/disk/by-label/EFI-SYSTEM.
Jan 29 14:43:01.139437 systemd[1]: Starting dracut-cmdline-ask.service - dracut ask for additional cmdline parameters...
Jan 29 14:43:01.153167 systemd[1]: Found device dev-disk-by\x2dpartuuid-7130c94a\x2d213a\x2d4e5a\x2d8e26\x2d6cce9662f132.device - /dev/disk/by-partuuid/7130c94a-213a-4e5a-8e26-6cce9662f132.
Jan 29 14:43:01.154851 systemd[1]: Found device dev-disk-by\x2dpartlabel-USR\x2dA.device - /dev/disk/by-partlabel/USR-A.
Jan 29 14:43:01.161614 systemd[1]: Found device dev-disk-by\x2dlabel-OEM.device - /dev/disk/by-label/OEM.
Jan 29 14:43:01.162425 systemd[1]: Finished dracut-cmdline-ask.service - dracut ask for additional cmdline parameters.
Jan 29 14:43:01.170365 systemd[1]: Starting disk-uuid.service - Generate new UUID for disk GPT if necessary...
Jan 29 14:43:01.176575 disk-uuid[576]: Primary Header is updated.
Jan 29 14:43:01.176575 disk-uuid[576]: Secondary Entries is updated.
Jan 29 14:43:01.176575 disk-uuid[576]: Secondary Header is updated.
Jan 29 14:43:01.180251 kernel: vda: vda1 vda2 vda3 vda4 vda6 vda7 vda9
Jan 29 14:43:01.184242 kernel: vda: vda1 vda2 vda3 vda4 vda6 vda7 vda9
Jan 29 14:43:01.383425 kernel: usb 1-1: new high-speed USB device number 2 using xhci_hcd
Jan 29 14:43:01.453279 kernel: ata6: SATA link down (SStatus 0 SControl 300)
Jan 29 14:43:01.453416 kernel: ata1: SATA link down (SStatus 0 SControl 300)
Jan 29 14:43:01.456569 kernel: ata3: SATA link down (SStatus 0 SControl 300)
Jan 29 14:43:01.462971 kernel: ata5: SATA link down (SStatus 0 SControl 300)
Jan 29 14:43:01.463069 kernel: ata2: SATA link down (SStatus 0 SControl 300)
Jan 29 14:43:01.463267 kernel: ata4: SATA link down (SStatus 0 SControl 300)
Jan 29 14:43:01.532432 kernel: hid: raw HID events driver (C) Jiri Kosina
Jan 29 14:43:01.537265 kernel: usbcore: registered new interface driver usbhid
Jan 29 14:43:01.537346 kernel: usbhid: USB HID core driver
Jan 29 14:43:01.548013 kernel: input: QEMU QEMU USB Tablet as /devices/pci0000:00/0000:00:02.1/0000:03:00.0/usb1/1-1/1-1:1.0/0003:0627:0001.0001/input/input3
Jan 29 14:43:01.548082 kernel: hid-generic 0003:0627:0001.0001: input,hidraw0: USB HID v0.01 Mouse [QEMU QEMU USB Tablet] on usb-0000:03:00.0-1/input0
Jan 29 14:43:02.191418 kernel: vda: vda1 vda2 vda3 vda4 vda6 vda7 vda9
Jan 29 14:43:02.193325 disk-uuid[577]: The operation has completed successfully.
Jan 29 14:43:02.229864 systemd[1]: disk-uuid.service: Deactivated successfully.
Jan 29 14:43:02.229991 systemd[1]: Finished disk-uuid.service - Generate new UUID for disk GPT if necessary.
Jan 29 14:43:02.258577 systemd[1]: Starting verity-setup.service - Verity Setup for /dev/mapper/usr...
Jan 29 14:43:02.262345 sh[589]: Success
Jan 29 14:43:02.278258 kernel: device-mapper: verity: sha256 using implementation "sha256-avx2"
Jan 29 14:43:02.330699 systemd[1]: Found device dev-mapper-usr.device - /dev/mapper/usr.
Jan 29 14:43:02.332151 systemd[1]: Finished verity-setup.service - Verity Setup for /dev/mapper/usr.
Jan 29 14:43:02.336306 systemd[1]: Mounting sysusr-usr.mount - /sysusr/usr...
Jan 29 14:43:02.360797 kernel: BTRFS info (device dm-0): first mount of filesystem 64bb5b5a-85cc-41cc-a02b-2cfaa3e93b0a
Jan 29 14:43:02.360923 kernel: BTRFS info (device dm-0): using crc32c (crc32c-intel) checksum algorithm
Jan 29 14:43:02.360963 kernel: BTRFS warning (device dm-0): 'nologreplay' is deprecated, use 'rescue=nologreplay' instead
Jan 29 14:43:02.362096 kernel: BTRFS info (device dm-0): disabling log replay at mount time
Jan 29 14:43:02.363295 kernel: BTRFS info (device dm-0): using free space tree
Jan 29 14:43:02.370411 systemd[1]: Mounted sysusr-usr.mount - /sysusr/usr.
Jan 29 14:43:02.371540 systemd[1]: afterburn-network-kargs.service - Afterburn Initrd Setup Network Kernel Arguments was skipped because no trigger condition checks were met.
Jan 29 14:43:02.377486 systemd[1]: Starting ignition-setup.service - Ignition (setup)...
Jan 29 14:43:02.381448 systemd[1]: Starting parse-ip-for-networkd.service - Write systemd-networkd units from cmdline...
Jan 29 14:43:02.392511 kernel: BTRFS info (device vda6): first mount of filesystem aa75aabd-8755-4402-b4b6-23093345fe03
Jan 29 14:43:02.392818 kernel: BTRFS info (device vda6): using crc32c (crc32c-intel) checksum algorithm
Jan 29 14:43:02.392844 kernel: BTRFS info (device vda6): using free space tree
Jan 29 14:43:02.397045 kernel: BTRFS info (device vda6): auto enabling async discard
Jan 29 14:43:02.406214 systemd[1]: mnt-oem.mount: Deactivated successfully.
Jan 29 14:43:02.407336 kernel: BTRFS info (device vda6): last unmount of filesystem aa75aabd-8755-4402-b4b6-23093345fe03 Jan 29 14:43:02.418680 systemd[1]: Finished ignition-setup.service - Ignition (setup). Jan 29 14:43:02.423460 systemd[1]: Starting ignition-fetch-offline.service - Ignition (fetch-offline)... Jan 29 14:43:02.529110 ignition[670]: Ignition 2.19.0 Jan 29 14:43:02.530117 ignition[670]: Stage: fetch-offline Jan 29 14:43:02.530172 ignition[670]: no configs at "/usr/lib/ignition/base.d" Jan 29 14:43:02.530184 ignition[670]: no config dir at "/usr/lib/ignition/base.platform.d/openstack" Jan 29 14:43:02.530785 ignition[670]: parsed url from cmdline: "" Jan 29 14:43:02.530789 ignition[670]: no config URL provided Jan 29 14:43:02.530795 ignition[670]: reading system config file "/usr/lib/ignition/user.ign" Jan 29 14:43:02.534066 systemd[1]: Finished ignition-fetch-offline.service - Ignition (fetch-offline). Jan 29 14:43:02.530805 ignition[670]: no config at "/usr/lib/ignition/user.ign" Jan 29 14:43:02.530810 ignition[670]: failed to fetch config: resource requires networking Jan 29 14:43:02.531021 ignition[670]: Ignition finished successfully Jan 29 14:43:02.547446 systemd[1]: Finished parse-ip-for-networkd.service - Write systemd-networkd units from cmdline. Jan 29 14:43:02.552401 systemd[1]: Starting systemd-networkd.service - Network Configuration... Jan 29 14:43:02.585119 systemd-networkd[778]: lo: Link UP Jan 29 14:43:02.585133 systemd-networkd[778]: lo: Gained carrier Jan 29 14:43:02.586451 systemd-networkd[778]: Enumeration completed Jan 29 14:43:02.586809 systemd-networkd[778]: eth0: found matching network '/usr/lib/systemd/network/zz-default.network', based on potentially unpredictable interface name. Jan 29 14:43:02.586813 systemd-networkd[778]: eth0: Configuring with /usr/lib/systemd/network/zz-default.network. 
Jan 29 14:43:02.587767 systemd-networkd[778]: eth0: Link UP
Jan 29 14:43:02.587772 systemd-networkd[778]: eth0: Gained carrier
Jan 29 14:43:02.587780 systemd-networkd[778]: eth0: found matching network '/usr/lib/systemd/network/zz-default.network', based on potentially unpredictable interface name.
Jan 29 14:43:02.587917 systemd[1]: Started systemd-networkd.service - Network Configuration.
Jan 29 14:43:02.589568 systemd[1]: Reached target network.target - Network.
Jan 29 14:43:02.599688 systemd[1]: Starting ignition-fetch.service - Ignition (fetch)...
Jan 29 14:43:02.606360 systemd-networkd[778]: eth0: DHCPv4 address 10.244.90.186/30, gateway 10.244.90.185 acquired from 10.244.90.185
Jan 29 14:43:02.630217 ignition[780]: Ignition 2.19.0
Jan 29 14:43:02.632003 ignition[780]: Stage: fetch
Jan 29 14:43:02.632647 ignition[780]: no configs at "/usr/lib/ignition/base.d"
Jan 29 14:43:02.636407 ignition[780]: no config dir at "/usr/lib/ignition/base.platform.d/openstack"
Jan 29 14:43:02.636528 ignition[780]: parsed url from cmdline: ""
Jan 29 14:43:02.636532 ignition[780]: no config URL provided
Jan 29 14:43:02.636537 ignition[780]: reading system config file "/usr/lib/ignition/user.ign"
Jan 29 14:43:02.636546 ignition[780]: no config at "/usr/lib/ignition/user.ign"
Jan 29 14:43:02.636688 ignition[780]: config drive ("/dev/disk/by-label/config-2") not found. Waiting...
Jan 29 14:43:02.636722 ignition[780]: config drive ("/dev/disk/by-label/CONFIG-2") not found. Waiting...
Jan 29 14:43:02.636838 ignition[780]: GET http://169.254.169.254/openstack/latest/user_data: attempt #1
Jan 29 14:43:02.655190 ignition[780]: GET result: OK
Jan 29 14:43:02.655598 ignition[780]: parsing config with SHA512: 3b1d7409077906b3f1b2b0cfea53bb9729390a1bf52bf183a17213ecadce3bcae33dfbbf8c1b69328e327916827d654614084c6c958f70386f56f3a79b5645c9
Jan 29 14:43:02.660121 unknown[780]: fetched base config from "system"
Jan 29 14:43:02.660134 unknown[780]: fetched base config from "system"
Jan 29 14:43:02.660474 ignition[780]: fetch: fetch complete
Jan 29 14:43:02.660141 unknown[780]: fetched user config from "openstack"
Jan 29 14:43:02.660479 ignition[780]: fetch: fetch passed
Jan 29 14:43:02.662359 systemd[1]: Finished ignition-fetch.service - Ignition (fetch).
Jan 29 14:43:02.660527 ignition[780]: Ignition finished successfully
Jan 29 14:43:02.670601 systemd[1]: Starting ignition-kargs.service - Ignition (kargs)...
Jan 29 14:43:02.688098 ignition[787]: Ignition 2.19.0
Jan 29 14:43:02.688110 ignition[787]: Stage: kargs
Jan 29 14:43:02.688334 ignition[787]: no configs at "/usr/lib/ignition/base.d"
Jan 29 14:43:02.690882 systemd[1]: Finished ignition-kargs.service - Ignition (kargs).
Jan 29 14:43:02.688346 ignition[787]: no config dir at "/usr/lib/ignition/base.platform.d/openstack"
Jan 29 14:43:02.689177 ignition[787]: kargs: kargs passed
Jan 29 14:43:02.689246 ignition[787]: Ignition finished successfully
Jan 29 14:43:02.697421 systemd[1]: Starting ignition-disks.service - Ignition (disks)...
Jan 29 14:43:02.717622 ignition[793]: Ignition 2.19.0
Jan 29 14:43:02.717650 ignition[793]: Stage: disks
Jan 29 14:43:02.718113 ignition[793]: no configs at "/usr/lib/ignition/base.d"
Jan 29 14:43:02.718139 ignition[793]: no config dir at "/usr/lib/ignition/base.platform.d/openstack"
Jan 29 14:43:02.720343 ignition[793]: disks: disks passed
Jan 29 14:43:02.720470 ignition[793]: Ignition finished successfully
Jan 29 14:43:02.721651 systemd[1]: Finished ignition-disks.service - Ignition (disks).
Jan 29 14:43:02.722708 systemd[1]: Reached target initrd-root-device.target - Initrd Root Device.
Jan 29 14:43:02.723831 systemd[1]: Reached target local-fs-pre.target - Preparation for Local File Systems.
Jan 29 14:43:02.724967 systemd[1]: Reached target local-fs.target - Local File Systems.
Jan 29 14:43:02.726127 systemd[1]: Reached target sysinit.target - System Initialization.
Jan 29 14:43:02.727120 systemd[1]: Reached target basic.target - Basic System.
Jan 29 14:43:02.733371 systemd[1]: Starting systemd-fsck-root.service - File System Check on /dev/disk/by-label/ROOT...
Jan 29 14:43:02.754852 systemd-fsck[801]: ROOT: clean, 14/1628000 files, 120691/1617920 blocks
Jan 29 14:43:02.759215 systemd[1]: Finished systemd-fsck-root.service - File System Check on /dev/disk/by-label/ROOT.
Jan 29 14:43:02.764827 systemd[1]: Mounting sysroot.mount - /sysroot...
Jan 29 14:43:02.865284 kernel: EXT4-fs (vda9): mounted filesystem 9f41abed-fd12-4e57-bcd4-5c0ef7f8a1bf r/w with ordered data mode. Quota mode: none.
Jan 29 14:43:02.867659 systemd[1]: Mounted sysroot.mount - /sysroot.
Jan 29 14:43:02.871139 systemd[1]: Reached target initrd-root-fs.target - Initrd Root File System.
Jan 29 14:43:02.886412 systemd[1]: Mounting sysroot-oem.mount - /sysroot/oem...
Jan 29 14:43:02.889540 systemd[1]: Mounting sysroot-usr.mount - /sysroot/usr...
Jan 29 14:43:02.890361 systemd[1]: flatcar-metadata-hostname.service - Flatcar Metadata Hostname Agent was skipped because no trigger condition checks were met.
Jan 29 14:43:02.893617 systemd[1]: Starting flatcar-openstack-hostname.service - Flatcar OpenStack Metadata Hostname Agent...
Jan 29 14:43:02.896487 systemd[1]: ignition-remount-sysroot.service - Remount /sysroot read-write for Ignition was skipped because of an unmet condition check (ConditionPathIsReadWrite=!/sysroot).
Jan 29 14:43:02.896521 systemd[1]: Reached target ignition-diskful.target - Ignition Boot Disk Setup.
Jan 29 14:43:02.900329 systemd[1]: Mounted sysroot-usr.mount - /sysroot/usr.
Jan 29 14:43:02.907398 kernel: BTRFS: device label OEM devid 1 transid 13 /dev/vda6 scanned by mount (809)
Jan 29 14:43:02.907428 kernel: BTRFS info (device vda6): first mount of filesystem aa75aabd-8755-4402-b4b6-23093345fe03
Jan 29 14:43:02.907443 kernel: BTRFS info (device vda6): using crc32c (crc32c-intel) checksum algorithm
Jan 29 14:43:02.907457 kernel: BTRFS info (device vda6): using free space tree
Jan 29 14:43:02.907891 systemd[1]: Starting initrd-setup-root.service - Root filesystem setup...
Jan 29 14:43:02.911291 kernel: BTRFS info (device vda6): auto enabling async discard
Jan 29 14:43:02.913889 systemd[1]: Mounted sysroot-oem.mount - /sysroot/oem.
Jan 29 14:43:02.968489 initrd-setup-root[837]: cut: /sysroot/etc/passwd: No such file or directory
Jan 29 14:43:02.974892 initrd-setup-root[844]: cut: /sysroot/etc/group: No such file or directory
Jan 29 14:43:02.983193 initrd-setup-root[851]: cut: /sysroot/etc/shadow: No such file or directory
Jan 29 14:43:02.991222 initrd-setup-root[858]: cut: /sysroot/etc/gshadow: No such file or directory
Jan 29 14:43:03.129482 systemd[1]: Finished initrd-setup-root.service - Root filesystem setup.
Jan 29 14:43:03.137465 systemd[1]: Starting ignition-mount.service - Ignition (mount)...
Jan 29 14:43:03.143549 systemd[1]: Starting sysroot-boot.service - /sysroot/boot...
Jan 29 14:43:03.154294 kernel: BTRFS info (device vda6): last unmount of filesystem aa75aabd-8755-4402-b4b6-23093345fe03
Jan 29 14:43:03.183930 systemd[1]: Finished sysroot-boot.service - /sysroot/boot.
Jan 29 14:43:03.196852 ignition[926]: INFO : Ignition 2.19.0
Jan 29 14:43:03.196852 ignition[926]: INFO : Stage: mount
Jan 29 14:43:03.197962 ignition[926]: INFO : no configs at "/usr/lib/ignition/base.d"
Jan 29 14:43:03.197962 ignition[926]: INFO : no config dir at "/usr/lib/ignition/base.platform.d/openstack"
Jan 29 14:43:03.199664 ignition[926]: INFO : mount: mount passed
Jan 29 14:43:03.200080 ignition[926]: INFO : Ignition finished successfully
Jan 29 14:43:03.201934 systemd[1]: Finished ignition-mount.service - Ignition (mount).
Jan 29 14:43:03.360791 systemd[1]: sysroot-oem.mount: Deactivated successfully.
Jan 29 14:43:04.227344 systemd-networkd[778]: eth0: Gained IPv6LL
Jan 29 14:43:04.787811 systemd-networkd[778]: eth0: Ignoring DHCPv6 address 2a02:1348:17d:16ae:24:19ff:fef4:5aba/128 (valid for 59min 59s, preferred for 59min 59s) which conflicts with 2a02:1348:17d:16ae:24:19ff:fef4:5aba/64 assigned by NDisc.
Jan 29 14:43:04.788409 systemd-networkd[778]: eth0: Hint: use IPv6Token= setting to change the address generated by NDisc or set UseAutonomousPrefix=no.
Jan 29 14:43:10.039395 coreos-metadata[811]: Jan 29 14:43:10.039 WARN failed to locate config-drive, using the metadata service API instead
Jan 29 14:43:10.063329 coreos-metadata[811]: Jan 29 14:43:10.063 INFO Fetching http://169.254.169.254/latest/meta-data/hostname: Attempt #1
Jan 29 14:43:10.076158 coreos-metadata[811]: Jan 29 14:43:10.076 INFO Fetch successful
Jan 29 14:43:10.077719 coreos-metadata[811]: Jan 29 14:43:10.076 INFO wrote hostname srv-4mohb.gb1.brightbox.com to /sysroot/etc/hostname
Jan 29 14:43:10.081479 systemd[1]: flatcar-openstack-hostname.service: Deactivated successfully.
Jan 29 14:43:10.081758 systemd[1]: Finished flatcar-openstack-hostname.service - Flatcar OpenStack Metadata Hostname Agent.
Jan 29 14:43:10.093329 systemd[1]: Starting ignition-files.service - Ignition (files)...
Jan 29 14:43:10.140587 systemd[1]: Mounting sysroot-oem.mount - /sysroot/oem...
Jan 29 14:43:10.151259 kernel: BTRFS: device label OEM devid 1 transid 14 /dev/vda6 scanned by mount (942)
Jan 29 14:43:10.151340 kernel: BTRFS info (device vda6): first mount of filesystem aa75aabd-8755-4402-b4b6-23093345fe03
Jan 29 14:43:10.151373 kernel: BTRFS info (device vda6): using crc32c (crc32c-intel) checksum algorithm
Jan 29 14:43:10.151583 kernel: BTRFS info (device vda6): using free space tree
Jan 29 14:43:10.157253 kernel: BTRFS info (device vda6): auto enabling async discard
Jan 29 14:43:10.160454 systemd[1]: Mounted sysroot-oem.mount - /sysroot/oem.
Jan 29 14:43:10.188362 ignition[959]: INFO : Ignition 2.19.0
Jan 29 14:43:10.188362 ignition[959]: INFO : Stage: files
Jan 29 14:43:10.189449 ignition[959]: INFO : no configs at "/usr/lib/ignition/base.d"
Jan 29 14:43:10.189449 ignition[959]: INFO : no config dir at "/usr/lib/ignition/base.platform.d/openstack"
Jan 29 14:43:10.189449 ignition[959]: DEBUG : files: compiled without relabeling support, skipping
Jan 29 14:43:10.190956 ignition[959]: INFO : files: ensureUsers: op(1): [started] creating or modifying user "core"
Jan 29 14:43:10.190956 ignition[959]: DEBUG : files: ensureUsers: op(1): executing: "usermod" "--root" "/sysroot" "core"
Jan 29 14:43:10.193516 ignition[959]: INFO : files: ensureUsers: op(1): [finished] creating or modifying user "core"
Jan 29 14:43:10.194119 ignition[959]: INFO : files: ensureUsers: op(2): [started] adding ssh keys to user "core"
Jan 29 14:43:10.194119 ignition[959]: INFO : files: ensureUsers: op(2): [finished] adding ssh keys to user "core"
Jan 29 14:43:10.193978 unknown[959]: wrote ssh authorized keys file for user: core
Jan 29 14:43:10.199355 ignition[959]: INFO : files: createFilesystemsFiles: createFiles: op(3): [started] writing file "/sysroot/etc/flatcar-cgroupv1"
Jan 29 14:43:10.199355 ignition[959]: INFO : files: createFilesystemsFiles: createFiles: op(3): [finished] writing file "/sysroot/etc/flatcar-cgroupv1"
Jan 29 14:43:10.199355 ignition[959]: INFO : files: createFilesystemsFiles: createFiles: op(4): [started] writing file "/sysroot/home/core/install.sh"
Jan 29 14:43:10.199355 ignition[959]: INFO : files: createFilesystemsFiles: createFiles: op(4): [finished] writing file "/sysroot/home/core/install.sh"
Jan 29 14:43:10.199355 ignition[959]: INFO : files: createFilesystemsFiles: createFiles: op(5): [started] writing file "/sysroot/etc/flatcar/update.conf"
Jan 29 14:43:10.199355 ignition[959]: INFO : files: createFilesystemsFiles: createFiles: op(5): [finished] writing file "/sysroot/etc/flatcar/update.conf"
Jan 29 14:43:10.199355 ignition[959]: INFO : files: createFilesystemsFiles: createFiles: op(6): [started] writing link "/sysroot/etc/extensions/kubernetes.raw" -> "/opt/extensions/kubernetes/kubernetes-v1.30.1-x86-64.raw"
Jan 29 14:43:10.199355 ignition[959]: INFO : files: createFilesystemsFiles: createFiles: op(6): [finished] writing link "/sysroot/etc/extensions/kubernetes.raw" -> "/opt/extensions/kubernetes/kubernetes-v1.30.1-x86-64.raw"
Jan 29 14:43:10.199355 ignition[959]: INFO : files: createFilesystemsFiles: createFiles: op(7): [started] writing file "/sysroot/opt/extensions/kubernetes/kubernetes-v1.30.1-x86-64.raw"
Jan 29 14:43:10.199355 ignition[959]: INFO : files: createFilesystemsFiles: createFiles: op(7): GET https://github.com/flatcar/sysext-bakery/releases/download/latest/kubernetes-v1.30.1-x86-64.raw: attempt #1
Jan 29 14:43:10.765488 ignition[959]: INFO : files: createFilesystemsFiles: createFiles: op(7): GET result: OK
Jan 29 14:43:12.323893 ignition[959]: INFO : files: createFilesystemsFiles: createFiles: op(7): [finished] writing file "/sysroot/opt/extensions/kubernetes/kubernetes-v1.30.1-x86-64.raw"
Jan 29 14:43:12.323893 ignition[959]: INFO : files: op(8): [started] processing unit "containerd.service"
Jan 29 14:43:12.327595 ignition[959]: INFO : files: op(8): op(9): [started] writing systemd drop-in "10-use-cgroupfs.conf" at "/sysroot/etc/systemd/system/containerd.service.d/10-use-cgroupfs.conf"
Jan 29 14:43:12.327595 ignition[959]: INFO : files: op(8): op(9): [finished] writing systemd drop-in "10-use-cgroupfs.conf" at "/sysroot/etc/systemd/system/containerd.service.d/10-use-cgroupfs.conf"
Jan 29 14:43:12.327595 ignition[959]: INFO : files: op(8): [finished] processing unit "containerd.service"
Jan 29 14:43:12.327595 ignition[959]: INFO : files: createResultFile: createFiles: op(a): [started] writing file "/sysroot/etc/.ignition-result.json"
Jan 29 14:43:12.327595 ignition[959]: INFO : files: createResultFile: createFiles: op(a): [finished] writing file "/sysroot/etc/.ignition-result.json"
Jan 29 14:43:12.327595 ignition[959]: INFO : files: files passed
Jan 29 14:43:12.327595 ignition[959]: INFO : Ignition finished successfully
Jan 29 14:43:12.328256 systemd[1]: Finished ignition-files.service - Ignition (files).
Jan 29 14:43:12.342433 systemd[1]: Starting ignition-quench.service - Ignition (record completion)...
Jan 29 14:43:12.344530 systemd[1]: Starting initrd-setup-root-after-ignition.service - Root filesystem completion...
Jan 29 14:43:12.347694 systemd[1]: ignition-quench.service: Deactivated successfully.
Jan 29 14:43:12.348257 systemd[1]: Finished ignition-quench.service - Ignition (record completion).
Jan 29 14:43:12.357221 initrd-setup-root-after-ignition[988]: grep: /sysroot/etc/flatcar/enabled-sysext.conf: No such file or directory
Jan 29 14:43:12.358296 initrd-setup-root-after-ignition[988]: grep: /sysroot/usr/share/flatcar/enabled-sysext.conf: No such file or directory
Jan 29 14:43:12.359342 initrd-setup-root-after-ignition[992]: grep: /sysroot/etc/flatcar/enabled-sysext.conf: No such file or directory
Jan 29 14:43:12.361087 systemd[1]: Finished initrd-setup-root-after-ignition.service - Root filesystem completion.
Jan 29 14:43:12.361675 systemd[1]: Reached target ignition-complete.target - Ignition Complete.
Jan 29 14:43:12.365336 systemd[1]: Starting initrd-parse-etc.service - Mountpoints Configured in the Real Root...
Jan 29 14:43:12.391386 systemd[1]: initrd-parse-etc.service: Deactivated successfully.
Jan 29 14:43:12.391511 systemd[1]: Finished initrd-parse-etc.service - Mountpoints Configured in the Real Root.
Jan 29 14:43:12.392532 systemd[1]: Reached target initrd-fs.target - Initrd File Systems.
Jan 29 14:43:12.393194 systemd[1]: Reached target initrd.target - Initrd Default Target.
Jan 29 14:43:12.393962 systemd[1]: dracut-mount.service - dracut mount hook was skipped because no trigger condition checks were met.
Jan 29 14:43:12.405344 systemd[1]: Starting dracut-pre-pivot.service - dracut pre-pivot and cleanup hook...
Jan 29 14:43:12.418280 systemd[1]: Finished dracut-pre-pivot.service - dracut pre-pivot and cleanup hook.
Jan 29 14:43:12.422377 systemd[1]: Starting initrd-cleanup.service - Cleaning Up and Shutting Down Daemons...
Jan 29 14:43:12.433526 systemd[1]: Stopped target nss-lookup.target - Host and Network Name Lookups.
Jan 29 14:43:12.434505 systemd[1]: Stopped target remote-cryptsetup.target - Remote Encrypted Volumes.
Jan 29 14:43:12.434972 systemd[1]: Stopped target timers.target - Timer Units.
Jan 29 14:43:12.435398 systemd[1]: dracut-pre-pivot.service: Deactivated successfully.
Jan 29 14:43:12.435499 systemd[1]: Stopped dracut-pre-pivot.service - dracut pre-pivot and cleanup hook.
Jan 29 14:43:12.436511 systemd[1]: Stopped target initrd.target - Initrd Default Target.
Jan 29 14:43:12.437033 systemd[1]: Stopped target basic.target - Basic System.
Jan 29 14:43:12.437790 systemd[1]: Stopped target ignition-complete.target - Ignition Complete.
Jan 29 14:43:12.438638 systemd[1]: Stopped target ignition-diskful.target - Ignition Boot Disk Setup.
Jan 29 14:43:12.439399 systemd[1]: Stopped target initrd-root-device.target - Initrd Root Device.
Jan 29 14:43:12.440128 systemd[1]: Stopped target remote-fs.target - Remote File Systems.
Jan 29 14:43:12.440971 systemd[1]: Stopped target remote-fs-pre.target - Preparation for Remote File Systems.
Jan 29 14:43:12.441828 systemd[1]: Stopped target sysinit.target - System Initialization.
Jan 29 14:43:12.442628 systemd[1]: Stopped target local-fs.target - Local File Systems.
Jan 29 14:43:12.443413 systemd[1]: Stopped target swap.target - Swaps.
Jan 29 14:43:12.444138 systemd[1]: dracut-pre-mount.service: Deactivated successfully.
Jan 29 14:43:12.444252 systemd[1]: Stopped dracut-pre-mount.service - dracut pre-mount hook.
Jan 29 14:43:12.445236 systemd[1]: Stopped target cryptsetup.target - Local Encrypted Volumes.
Jan 29 14:43:12.445710 systemd[1]: Stopped target cryptsetup-pre.target - Local Encrypted Volumes (Pre).
Jan 29 14:43:12.446338 systemd[1]: clevis-luks-askpass.path: Deactivated successfully.
Jan 29 14:43:12.446424 systemd[1]: Stopped clevis-luks-askpass.path - Forward Password Requests to Clevis Directory Watch.
Jan 29 14:43:12.447052 systemd[1]: dracut-initqueue.service: Deactivated successfully.
Jan 29 14:43:12.447145 systemd[1]: Stopped dracut-initqueue.service - dracut initqueue hook.
Jan 29 14:43:12.448134 systemd[1]: initrd-setup-root-after-ignition.service: Deactivated successfully.
Jan 29 14:43:12.448257 systemd[1]: Stopped initrd-setup-root-after-ignition.service - Root filesystem completion.
Jan 29 14:43:12.449285 systemd[1]: ignition-files.service: Deactivated successfully.
Jan 29 14:43:12.449380 systemd[1]: Stopped ignition-files.service - Ignition (files).
Jan 29 14:43:12.460721 systemd[1]: Stopping ignition-mount.service - Ignition (mount)...
Jan 29 14:43:12.461142 systemd[1]: kmod-static-nodes.service: Deactivated successfully.
Jan 29 14:43:12.461336 systemd[1]: Stopped kmod-static-nodes.service - Create List of Static Device Nodes.
Jan 29 14:43:12.464425 systemd[1]: Stopping sysroot-boot.service - /sysroot/boot...
Jan 29 14:43:12.464791 systemd[1]: systemd-udev-trigger.service: Deactivated successfully.
Jan 29 14:43:12.464903 systemd[1]: Stopped systemd-udev-trigger.service - Coldplug All udev Devices.
Jan 29 14:43:12.465375 systemd[1]: dracut-pre-trigger.service: Deactivated successfully.
Jan 29 14:43:12.465462 systemd[1]: Stopped dracut-pre-trigger.service - dracut pre-trigger hook.
Jan 29 14:43:12.470379 systemd[1]: initrd-cleanup.service: Deactivated successfully.
Jan 29 14:43:12.470473 systemd[1]: Finished initrd-cleanup.service - Cleaning Up and Shutting Down Daemons.
Jan 29 14:43:12.481512 ignition[1012]: INFO : Ignition 2.19.0
Jan 29 14:43:12.482993 ignition[1012]: INFO : Stage: umount
Jan 29 14:43:12.482993 ignition[1012]: INFO : no configs at "/usr/lib/ignition/base.d"
Jan 29 14:43:12.482993 ignition[1012]: INFO : no config dir at "/usr/lib/ignition/base.platform.d/openstack"
Jan 29 14:43:12.482993 ignition[1012]: INFO : umount: umount passed
Jan 29 14:43:12.482993 ignition[1012]: INFO : Ignition finished successfully
Jan 29 14:43:12.487660 systemd[1]: sysroot-boot.mount: Deactivated successfully.
Jan 29 14:43:12.488201 systemd[1]: ignition-mount.service: Deactivated successfully.
Jan 29 14:43:12.488431 systemd[1]: Stopped ignition-mount.service - Ignition (mount).
Jan 29 14:43:12.489618 systemd[1]: ignition-disks.service: Deactivated successfully.
Jan 29 14:43:12.489716 systemd[1]: Stopped ignition-disks.service - Ignition (disks).
Jan 29 14:43:12.490614 systemd[1]: ignition-kargs.service: Deactivated successfully.
Jan 29 14:43:12.490655 systemd[1]: Stopped ignition-kargs.service - Ignition (kargs).
Jan 29 14:43:12.491531 systemd[1]: ignition-fetch.service: Deactivated successfully.
Jan 29 14:43:12.491570 systemd[1]: Stopped ignition-fetch.service - Ignition (fetch).
Jan 29 14:43:12.492343 systemd[1]: Stopped target network.target - Network.
Jan 29 14:43:12.492967 systemd[1]: ignition-fetch-offline.service: Deactivated successfully.
Jan 29 14:43:12.493011 systemd[1]: Stopped ignition-fetch-offline.service - Ignition (fetch-offline).
Jan 29 14:43:12.493864 systemd[1]: Stopped target paths.target - Path Units.
Jan 29 14:43:12.494539 systemd[1]: systemd-ask-password-console.path: Deactivated successfully.
Jan 29 14:43:12.494583 systemd[1]: Stopped systemd-ask-password-console.path - Dispatch Password Requests to Console Directory Watch.
Jan 29 14:43:12.495293 systemd[1]: Stopped target slices.target - Slice Units.
Jan 29 14:43:12.495602 systemd[1]: Stopped target sockets.target - Socket Units.
Jan 29 14:43:12.495983 systemd[1]: iscsid.socket: Deactivated successfully.
Jan 29 14:43:12.496020 systemd[1]: Closed iscsid.socket - Open-iSCSI iscsid Socket.
Jan 29 14:43:12.496717 systemd[1]: iscsiuio.socket: Deactivated successfully.
Jan 29 14:43:12.496764 systemd[1]: Closed iscsiuio.socket - Open-iSCSI iscsiuio Socket.
Jan 29 14:43:12.497435 systemd[1]: ignition-setup.service: Deactivated successfully.
Jan 29 14:43:12.497480 systemd[1]: Stopped ignition-setup.service - Ignition (setup).
Jan 29 14:43:12.498168 systemd[1]: ignition-setup-pre.service: Deactivated successfully.
Jan 29 14:43:12.498205 systemd[1]: Stopped ignition-setup-pre.service - Ignition env setup.
Jan 29 14:43:12.498753 systemd[1]: Stopping systemd-networkd.service - Network Configuration...
Jan 29 14:43:12.499776 systemd[1]: Stopping systemd-resolved.service - Network Name Resolution...
Jan 29 14:43:12.503825 systemd-networkd[778]: eth0: DHCPv6 lease lost
Jan 29 14:43:12.507306 systemd[1]: systemd-networkd.service: Deactivated successfully.
Jan 29 14:43:12.507408 systemd[1]: Stopped systemd-networkd.service - Network Configuration.
Jan 29 14:43:12.509037 systemd[1]: systemd-resolved.service: Deactivated successfully.
Jan 29 14:43:12.509144 systemd[1]: Stopped systemd-resolved.service - Network Name Resolution.
Jan 29 14:43:12.512882 systemd[1]: systemd-networkd.socket: Deactivated successfully.
Jan 29 14:43:12.512944 systemd[1]: Closed systemd-networkd.socket - Network Service Netlink Socket.
Jan 29 14:43:12.519336 systemd[1]: Stopping network-cleanup.service - Network Cleanup...
Jan 29 14:43:12.519760 systemd[1]: parse-ip-for-networkd.service: Deactivated successfully.
Jan 29 14:43:12.519830 systemd[1]: Stopped parse-ip-for-networkd.service - Write systemd-networkd units from cmdline.
Jan 29 14:43:12.520635 systemd[1]: systemd-sysctl.service: Deactivated successfully.
Jan 29 14:43:12.520677 systemd[1]: Stopped systemd-sysctl.service - Apply Kernel Variables.
Jan 29 14:43:12.521470 systemd[1]: systemd-modules-load.service: Deactivated successfully.
Jan 29 14:43:12.521511 systemd[1]: Stopped systemd-modules-load.service - Load Kernel Modules.
Jan 29 14:43:12.524610 systemd[1]: systemd-tmpfiles-setup.service: Deactivated successfully.
Jan 29 14:43:12.524655 systemd[1]: Stopped systemd-tmpfiles-setup.service - Create System Files and Directories.
Jan 29 14:43:12.525913 systemd[1]: Stopping systemd-udevd.service - Rule-based Manager for Device Events and Files...
Jan 29 14:43:12.533898 systemd[1]: systemd-udevd.service: Deactivated successfully.
Jan 29 14:43:12.534736 systemd[1]: Stopped systemd-udevd.service - Rule-based Manager for Device Events and Files.
Jan 29 14:43:12.537586 systemd[1]: systemd-udevd-control.socket: Deactivated successfully.
Jan 29 14:43:12.537633 systemd[1]: Closed systemd-udevd-control.socket - udev Control Socket.
Jan 29 14:43:12.538084 systemd[1]: systemd-udevd-kernel.socket: Deactivated successfully.
Jan 29 14:43:12.538113 systemd[1]: Closed systemd-udevd-kernel.socket - udev Kernel Socket.
Jan 29 14:43:12.538668 systemd[1]: dracut-pre-udev.service: Deactivated successfully.
Jan 29 14:43:12.538709 systemd[1]: Stopped dracut-pre-udev.service - dracut pre-udev hook.
Jan 29 14:43:12.539880 systemd[1]: dracut-cmdline.service: Deactivated successfully.
Jan 29 14:43:12.539920 systemd[1]: Stopped dracut-cmdline.service - dracut cmdline hook.
Jan 29 14:43:12.540660 systemd[1]: dracut-cmdline-ask.service: Deactivated successfully.
Jan 29 14:43:12.540696 systemd[1]: Stopped dracut-cmdline-ask.service - dracut ask for additional cmdline parameters.
Jan 29 14:43:12.548372 systemd[1]: Starting initrd-udevadm-cleanup-db.service - Cleanup udev Database...
Jan 29 14:43:12.549329 systemd[1]: systemd-tmpfiles-setup-dev.service: Deactivated successfully.
Jan 29 14:43:12.549377 systemd[1]: Stopped systemd-tmpfiles-setup-dev.service - Create Static Device Nodes in /dev.
Jan 29 14:43:12.549825 systemd[1]: systemd-vconsole-setup.service: Deactivated successfully.
Jan 29 14:43:12.549863 systemd[1]: Stopped systemd-vconsole-setup.service - Virtual Console Setup.
Jan 29 14:43:12.550550 systemd[1]: sysroot-boot.service: Deactivated successfully.
Jan 29 14:43:12.552273 systemd[1]: Stopped sysroot-boot.service - /sysroot/boot.
Jan 29 14:43:12.553347 systemd[1]: network-cleanup.service: Deactivated successfully.
Jan 29 14:43:12.553420 systemd[1]: Stopped network-cleanup.service - Network Cleanup.
Jan 29 14:43:12.553969 systemd[1]: initrd-udevadm-cleanup-db.service: Deactivated successfully.
Jan 29 14:43:12.554041 systemd[1]: Finished initrd-udevadm-cleanup-db.service - Cleanup udev Database.
Jan 29 14:43:12.555751 systemd[1]: Reached target initrd-switch-root.target - Switch Root.
Jan 29 14:43:12.556785 systemd[1]: initrd-setup-root.service: Deactivated successfully.
Jan 29 14:43:12.556849 systemd[1]: Stopped initrd-setup-root.service - Root filesystem setup.
Jan 29 14:43:12.560348 systemd[1]: Starting initrd-switch-root.service - Switch Root...
Jan 29 14:43:12.569917 systemd[1]: Switching root.
Jan 29 14:43:12.603816 systemd-journald[200]: Journal stopped
Jan 29 14:43:13.597852 systemd-journald[200]: Received SIGTERM from PID 1 (systemd).
Jan 29 14:43:13.597936 kernel: SELinux: policy capability network_peer_controls=1
Jan 29 14:43:13.597954 kernel: SELinux: policy capability open_perms=1
Jan 29 14:43:13.597971 kernel: SELinux: policy capability extended_socket_class=1
Jan 29 14:43:13.597986 kernel: SELinux: policy capability always_check_network=0
Jan 29 14:43:13.598002 kernel: SELinux: policy capability cgroup_seclabel=1
Jan 29 14:43:13.598019 kernel: SELinux: policy capability nnp_nosuid_transition=1
Jan 29 14:43:13.598030 kernel: SELinux: policy capability genfs_seclabel_symlinks=0
Jan 29 14:43:13.598046 kernel: SELinux: policy capability ioctl_skip_cloexec=0
Jan 29 14:43:13.598058 systemd[1]: Successfully loaded SELinux policy in 41.869ms.
Jan 29 14:43:13.598083 kernel: audit: type=1403 audit(1738161792.792:2): auid=4294967295 ses=4294967295 lsm=selinux res=1
Jan 29 14:43:13.598095 systemd[1]: Relabeled /dev, /dev/shm, /run, /sys/fs/cgroup in 10.010ms.
Jan 29 14:43:13.598109 systemd[1]: systemd 255 running in system mode (+PAM +AUDIT +SELINUX -APPARMOR +IMA +SMACK +SECCOMP +GCRYPT -GNUTLS +OPENSSL -ACL +BLKID +CURL +ELFUTILS -FIDO2 +IDN2 -IDN +IPTC +KMOD +LIBCRYPTSETUP +LIBFDISK +PCRE2 -PWQUALITY -P11KIT -QRENCODE +TPM2 +BZIP2 +LZ4 +XZ +ZLIB +ZSTD -BPF_FRAMEWORK -XKBCOMMON +UTMP -SYSVINIT default-hierarchy=unified)
Jan 29 14:43:13.598121 systemd[1]: Detected virtualization kvm.
Jan 29 14:43:13.598134 systemd[1]: Detected architecture x86-64.
Jan 29 14:43:13.598145 systemd[1]: Detected first boot.
Jan 29 14:43:13.598157 systemd[1]: Hostname set to .
Jan 29 14:43:13.598171 systemd[1]: Initializing machine ID from VM UUID.
Jan 29 14:43:13.598183 zram_generator::config[1071]: No configuration found.
Jan 29 14:43:13.598203 systemd[1]: Populated /etc with preset unit settings.
Jan 29 14:43:13.598216 systemd[1]: Queued start job for default target multi-user.target.
Jan 29 14:43:13.602878 systemd[1]: Unnecessary job was removed for dev-vda6.device - /dev/vda6.
Jan 29 14:43:13.602904 systemd[1]: Created slice system-addon\x2dconfig.slice - Slice /system/addon-config.
Jan 29 14:43:13.602919 systemd[1]: Created slice system-addon\x2drun.slice - Slice /system/addon-run.
Jan 29 14:43:13.602932 systemd[1]: Created slice system-getty.slice - Slice /system/getty.
Jan 29 14:43:13.602946 systemd[1]: Created slice system-modprobe.slice - Slice /system/modprobe.
Jan 29 14:43:13.602960 systemd[1]: Created slice system-serial\x2dgetty.slice - Slice /system/serial-getty.
Jan 29 14:43:13.602980 systemd[1]: Created slice system-system\x2dcloudinit.slice - Slice /system/system-cloudinit.
Jan 29 14:43:13.602994 systemd[1]: Created slice system-systemd\x2dfsck.slice - Slice /system/systemd-fsck.
Jan 29 14:43:13.603007 systemd[1]: Created slice user.slice - User and Session Slice.
Jan 29 14:43:13.603021 systemd[1]: Started clevis-luks-askpass.path - Forward Password Requests to Clevis Directory Watch.
Jan 29 14:43:13.603035 systemd[1]: Started systemd-ask-password-console.path - Dispatch Password Requests to Console Directory Watch.
Jan 29 14:43:13.603048 systemd[1]: Started systemd-ask-password-wall.path - Forward Password Requests to Wall Directory Watch.
Jan 29 14:43:13.603074 systemd[1]: Set up automount boot.automount - Boot partition Automount Point.
Jan 29 14:43:13.603089 systemd[1]: Set up automount proc-sys-fs-binfmt_misc.automount - Arbitrary Executable File Formats File System Automount Point.
Jan 29 14:43:13.603105 systemd[1]: Expecting device dev-disk-by\x2dlabel-OEM.device - /dev/disk/by-label/OEM...
Jan 29 14:43:13.603119 systemd[1]: Expecting device dev-ttyS0.device - /dev/ttyS0...
Jan 29 14:43:13.603133 systemd[1]: Reached target cryptsetup-pre.target - Local Encrypted Volumes (Pre).
Jan 29 14:43:13.603146 systemd[1]: Reached target integritysetup.target - Local Integrity Protected Volumes.
Jan 29 14:43:13.603161 systemd[1]: Reached target remote-cryptsetup.target - Remote Encrypted Volumes.
Jan 29 14:43:13.603175 systemd[1]: Reached target remote-fs.target - Remote File Systems.
Jan 29 14:43:13.603189 systemd[1]: Reached target slices.target - Slice Units.
Jan 29 14:43:13.603206 systemd[1]: Reached target swap.target - Swaps.
Jan 29 14:43:13.603236 systemd[1]: Reached target veritysetup.target - Local Verity Protected Volumes.
Jan 29 14:43:13.603251 systemd[1]: Listening on systemd-coredump.socket - Process Core Dump Socket.
Jan 29 14:43:13.603265 systemd[1]: Listening on systemd-journald-dev-log.socket - Journal Socket (/dev/log).
Jan 29 14:43:13.603279 systemd[1]: Listening on systemd-journald.socket - Journal Socket.
Jan 29 14:43:13.603293 systemd[1]: Listening on systemd-networkd.socket - Network Service Netlink Socket.
Jan 29 14:43:13.603306 systemd[1]: Listening on systemd-udevd-control.socket - udev Control Socket.
Jan 29 14:43:13.603319 systemd[1]: Listening on systemd-udevd-kernel.socket - udev Kernel Socket.
Jan 29 14:43:13.603336 systemd[1]: Listening on systemd-userdbd.socket - User Database Manager Socket.
Jan 29 14:43:13.603357 systemd[1]: Mounting dev-hugepages.mount - Huge Pages File System...
Jan 29 14:43:13.603373 systemd[1]: Mounting dev-mqueue.mount - POSIX Message Queue File System...
Jan 29 14:43:13.603387 systemd[1]: Mounting media.mount - External Media Directory...
Jan 29 14:43:13.603400 systemd[1]: proc-xen.mount - /proc/xen was skipped because of an unmet condition check (ConditionVirtualization=xen).
Jan 29 14:43:13.603414 systemd[1]: Mounting sys-kernel-debug.mount - Kernel Debug File System...
Jan 29 14:43:13.603428 systemd[1]: Mounting sys-kernel-tracing.mount - Kernel Trace File System...
Jan 29 14:43:13.603444 systemd[1]: Mounting tmp.mount - Temporary Directory /tmp...
Jan 29 14:43:13.603458 systemd[1]: Starting flatcar-tmpfiles.service - Create missing system files...
Jan 29 14:43:13.603473 systemd[1]: ignition-delete-config.service - Ignition (delete config) was skipped because no trigger condition checks were met.
Jan 29 14:43:13.603486 systemd[1]: Starting kmod-static-nodes.service - Create List of Static Device Nodes...
Jan 29 14:43:13.603500 systemd[1]: Starting modprobe@configfs.service - Load Kernel Module configfs...
Jan 29 14:43:13.603515 systemd[1]: Starting modprobe@dm_mod.service - Load Kernel Module dm_mod...
Jan 29 14:43:13.603528 systemd[1]: Starting modprobe@drm.service - Load Kernel Module drm...
Jan 29 14:43:13.603542 systemd[1]: Starting modprobe@efi_pstore.service - Load Kernel Module efi_pstore...
Jan 29 14:43:13.603556 systemd[1]: Starting modprobe@fuse.service - Load Kernel Module fuse...
Jan 29 14:43:13.603575 systemd[1]: Starting modprobe@loop.service - Load Kernel Module loop...
Jan 29 14:43:13.603589 systemd[1]: setup-nsswitch.service - Create /etc/nsswitch.conf was skipped because of an unmet condition check (ConditionPathExists=!/etc/nsswitch.conf).
Jan 29 14:43:13.603604 systemd[1]: systemd-journald.service: unit configures an IP firewall, but the local system does not support BPF/cgroup firewalling.
Jan 29 14:43:13.603618 systemd[1]: systemd-journald.service: (This warning is only shown for the first unit using IP firewalling.)
Jan 29 14:43:13.603631 systemd[1]: Starting systemd-journald.service - Journal Service...
Jan 29 14:43:13.603644 kernel: fuse: init (API version 7.39)
Jan 29 14:43:13.603658 systemd[1]: Starting systemd-modules-load.service - Load Kernel Modules...
Jan 29 14:43:13.603671 kernel: loop: module loaded
Jan 29 14:43:13.603688 systemd[1]: Starting systemd-network-generator.service - Generate network units from Kernel command line...
Jan 29 14:43:13.603711 systemd[1]: Starting systemd-remount-fs.service - Remount Root and Kernel File Systems...
Jan 29 14:43:13.603724 systemd[1]: Starting systemd-udev-trigger.service - Coldplug All udev Devices...
Jan 29 14:43:13.603738 systemd[1]: xenserver-pv-version.service - Set fake PV driver version for XenServer was skipped because of an unmet condition check (ConditionVirtualization=xen).
Jan 29 14:43:13.603753 systemd[1]: Mounted dev-hugepages.mount - Huge Pages File System.
Jan 29 14:43:13.603767 systemd[1]: Mounted dev-mqueue.mount - POSIX Message Queue File System.
Jan 29 14:43:13.603781 systemd[1]: Mounted media.mount - External Media Directory.
Jan 29 14:43:13.603794 systemd[1]: Mounted sys-kernel-debug.mount - Kernel Debug File System.
Jan 29 14:43:13.603832 systemd-journald[1175]: Collecting audit messages is disabled.
Jan 29 14:43:13.603872 systemd[1]: Mounted sys-kernel-tracing.mount - Kernel Trace File System.
Jan 29 14:43:13.603887 systemd[1]: Mounted tmp.mount - Temporary Directory /tmp.
Jan 29 14:43:13.603900 systemd[1]: Finished kmod-static-nodes.service - Create List of Static Device Nodes.
Jan 29 14:43:13.603914 systemd[1]: modprobe@configfs.service: Deactivated successfully.
Jan 29 14:43:13.603927 systemd[1]: Finished modprobe@configfs.service - Load Kernel Module configfs.
Jan 29 14:43:13.603942 systemd-journald[1175]: Journal started
Jan 29 14:43:13.603973 systemd-journald[1175]: Runtime Journal (/run/log/journal/8d17f75fdbca4c1f98e1be139d17a820) is 4.7M, max 38.0M, 33.2M free.
Jan 29 14:43:13.612609 systemd[1]: Started systemd-journald.service - Journal Service.
Jan 29 14:43:13.608135 systemd[1]: modprobe@dm_mod.service: Deactivated successfully.
Jan 29 14:43:13.608314 systemd[1]: Finished modprobe@dm_mod.service - Load Kernel Module dm_mod.
Jan 29 14:43:13.608989 systemd[1]: modprobe@efi_pstore.service: Deactivated successfully.
Jan 29 14:43:13.609143 systemd[1]: Finished modprobe@efi_pstore.service - Load Kernel Module efi_pstore.
Jan 29 14:43:13.609830 systemd[1]: modprobe@fuse.service: Deactivated successfully.
Jan 29 14:43:13.609976 systemd[1]: Finished modprobe@fuse.service - Load Kernel Module fuse.
Jan 29 14:43:13.610622 systemd[1]: modprobe@loop.service: Deactivated successfully.
Jan 29 14:43:13.610784 systemd[1]: Finished modprobe@loop.service - Load Kernel Module loop.
Jan 29 14:43:13.611490 systemd[1]: Finished systemd-modules-load.service - Load Kernel Modules.
Jan 29 14:43:13.612186 systemd[1]: Finished systemd-network-generator.service - Generate network units from Kernel command line.
Jan 29 14:43:13.613557 systemd[1]: Finished systemd-remount-fs.service - Remount Root and Kernel File Systems.
Jan 29 14:43:13.628739 systemd[1]: Reached target network-pre.target - Preparation for Network.
Jan 29 14:43:13.637304 kernel: ACPI: bus type drm_connector registered
Jan 29 14:43:13.638081 systemd[1]: Mounting sys-fs-fuse-connections.mount - FUSE Control File System...
Jan 29 14:43:13.647382 systemd[1]: Mounting sys-kernel-config.mount - Kernel Configuration File System...
Jan 29 14:43:13.647985 systemd[1]: remount-root.service - Remount Root File System was skipped because of an unmet condition check (ConditionPathIsReadWrite=!/).
Jan 29 14:43:13.664627 systemd[1]: Starting systemd-hwdb-update.service - Rebuild Hardware Database...
Jan 29 14:43:13.675425 systemd[1]: Starting systemd-journal-flush.service - Flush Journal to Persistent Storage...
Jan 29 14:43:13.676180 systemd[1]: systemd-pstore.service - Platform Persistent Storage Archival was skipped because of an unmet condition check (ConditionDirectoryNotEmpty=/sys/fs/pstore).
Jan 29 14:43:13.682310 systemd[1]: Starting systemd-random-seed.service - Load/Save OS Random Seed...
Jan 29 14:43:13.682875 systemd[1]: systemd-repart.service - Repartition Root Disk was skipped because no trigger condition checks were met.
Jan 29 14:43:13.691367 systemd[1]: Starting systemd-sysctl.service - Apply Kernel Variables...
Jan 29 14:43:13.700478 systemd-journald[1175]: Time spent on flushing to /var/log/journal/8d17f75fdbca4c1f98e1be139d17a820 is 47.694ms for 1116 entries.
Jan 29 14:43:13.700478 systemd-journald[1175]: System Journal (/var/log/journal/8d17f75fdbca4c1f98e1be139d17a820) is 8.0M, max 584.8M, 576.8M free.
Jan 29 14:43:13.769764 systemd-journald[1175]: Received client request to flush runtime journal.
Jan 29 14:43:13.699982 systemd[1]: Starting systemd-tmpfiles-setup-dev-early.service - Create Static Device Nodes in /dev gracefully...
Jan 29 14:43:13.712900 systemd[1]: Finished flatcar-tmpfiles.service - Create missing system files.
Jan 29 14:43:13.716609 systemd[1]: modprobe@drm.service: Deactivated successfully.
Jan 29 14:43:13.716850 systemd[1]: Finished modprobe@drm.service - Load Kernel Module drm.
Jan 29 14:43:13.722949 systemd[1]: Mounted sys-fs-fuse-connections.mount - FUSE Control File System.
Jan 29 14:43:13.723583 systemd[1]: Mounted sys-kernel-config.mount - Kernel Configuration File System.
Jan 29 14:43:13.729222 systemd[1]: Finished systemd-random-seed.service - Load/Save OS Random Seed.
Jan 29 14:43:13.734609 systemd[1]: Reached target first-boot-complete.target - First Boot Complete.
Jan 29 14:43:13.774696 systemd[1]: Finished systemd-journal-flush.service - Flush Journal to Persistent Storage.
Jan 29 14:43:13.787400 systemd[1]: Finished systemd-sysctl.service - Apply Kernel Variables.
Jan 29 14:43:13.794546 systemd-tmpfiles[1224]: ACLs are not supported, ignoring.
Jan 29 14:43:13.794597 systemd-tmpfiles[1224]: ACLs are not supported, ignoring.
Jan 29 14:43:13.800944 systemd[1]: Finished systemd-udev-trigger.service - Coldplug All udev Devices.
Jan 29 14:43:13.813423 systemd[1]: Starting systemd-udev-settle.service - Wait for udev To Complete Device Initialization...
Jan 29 14:43:13.814218 systemd[1]: Finished systemd-tmpfiles-setup-dev-early.service - Create Static Device Nodes in /dev gracefully.
Jan 29 14:43:13.819370 systemd[1]: Starting systemd-sysusers.service - Create System Users...
Jan 29 14:43:13.835208 udevadm[1243]: systemd-udev-settle.service is deprecated. Please fix lvm2-activation-early.service, lvm2-activation.service not to pull it in.
Jan 29 14:43:13.858612 systemd[1]: Finished systemd-sysusers.service - Create System Users.
Jan 29 14:43:13.864415 systemd[1]: Starting systemd-tmpfiles-setup-dev.service - Create Static Device Nodes in /dev...
Jan 29 14:43:13.881492 systemd-tmpfiles[1250]: ACLs are not supported, ignoring.
Jan 29 14:43:13.881832 systemd-tmpfiles[1250]: ACLs are not supported, ignoring.
Jan 29 14:43:13.886617 systemd[1]: Finished systemd-tmpfiles-setup-dev.service - Create Static Device Nodes in /dev.
Jan 29 14:43:14.367929 systemd[1]: Finished systemd-hwdb-update.service - Rebuild Hardware Database.
Jan 29 14:43:14.376443 systemd[1]: Starting systemd-udevd.service - Rule-based Manager for Device Events and Files...
Jan 29 14:43:14.402936 systemd-udevd[1256]: Using default interface naming scheme 'v255'.
Jan 29 14:43:14.423141 systemd[1]: Started systemd-udevd.service - Rule-based Manager for Device Events and Files.
Jan 29 14:43:14.437363 systemd[1]: Starting systemd-networkd.service - Network Configuration...
Jan 29 14:43:14.464943 systemd[1]: Starting systemd-userdbd.service - User Database Manager...
Jan 29 14:43:14.491258 kernel: BTRFS warning: duplicate device /dev/vda3 devid 1 generation 38 scanned by (udev-worker) (1270)
Jan 29 14:43:14.513781 systemd[1]: Found device dev-ttyS0.device - /dev/ttyS0.
Jan 29 14:43:14.569697 systemd[1]: Started systemd-userdbd.service - User Database Manager.
Jan 29 14:43:14.613303 kernel: input: Power Button as /devices/LNXSYSTM:00/LNXPWRBN:00/input/input4
Jan 29 14:43:14.641469 systemd[1]: Found device dev-disk-by\x2dlabel-OEM.device - /dev/disk/by-label/OEM.
Jan 29 14:43:14.646341 kernel: ACPI: button: Power Button [PWRF]
Jan 29 14:43:14.668861 systemd-networkd[1266]: lo: Link UP
Jan 29 14:43:14.669920 systemd-networkd[1266]: lo: Gained carrier
Jan 29 14:43:14.671830 systemd-networkd[1266]: Enumeration completed
Jan 29 14:43:14.672409 systemd[1]: Started systemd-networkd.service - Network Configuration.
Jan 29 14:43:14.673287 systemd-networkd[1266]: eth0: found matching network '/usr/lib/systemd/network/zz-default.network', based on potentially unpredictable interface name.
Jan 29 14:43:14.674259 systemd-networkd[1266]: eth0: Configuring with /usr/lib/systemd/network/zz-default.network.
Jan 29 14:43:14.675525 systemd-networkd[1266]: eth0: Link UP
Jan 29 14:43:14.675616 systemd-networkd[1266]: eth0: Gained carrier
Jan 29 14:43:14.675671 systemd-networkd[1266]: eth0: found matching network '/usr/lib/systemd/network/zz-default.network', based on potentially unpredictable interface name.
Jan 29 14:43:14.678382 systemd[1]: Starting systemd-networkd-wait-online.service - Wait for Network to be Configured...
Jan 29 14:43:14.685417 kernel: mousedev: PS/2 mouse device common for all mice
Jan 29 14:43:14.688329 kernel: input: ImExPS/2 Generic Explorer Mouse as /devices/platform/i8042/serio1/input/input5
Jan 29 14:43:14.691469 systemd-networkd[1266]: eth0: DHCPv4 address 10.244.90.186/30, gateway 10.244.90.185 acquired from 10.244.90.185
Jan 29 14:43:14.696290 kernel: i801_smbus 0000:00:1f.3: SMBus using PCI interrupt
Jan 29 14:43:14.700442 kernel: i2c i2c-0: 1/1 memory slots populated (from DMI)
Jan 29 14:43:14.700616 kernel: i2c i2c-0: Memory type 0x07 not supported yet, not instantiating SPD
Jan 29 14:43:14.746490 systemd[1]: Starting systemd-vconsole-setup.service - Virtual Console Setup...
Jan 29 14:43:14.866177 systemd[1]: Finished systemd-vconsole-setup.service - Virtual Console Setup.
Jan 29 14:43:14.927330 systemd[1]: Finished systemd-udev-settle.service - Wait for udev To Complete Device Initialization.
Jan 29 14:43:14.938589 systemd[1]: Starting lvm2-activation-early.service - Activation of LVM2 logical volumes...
Jan 29 14:43:14.965290 lvm[1296]: WARNING: Failed to connect to lvmetad. Falling back to device scanning.
Jan 29 14:43:14.995002 systemd[1]: Finished lvm2-activation-early.service - Activation of LVM2 logical volumes.
Jan 29 14:43:14.997450 systemd[1]: Reached target cryptsetup.target - Local Encrypted Volumes.
Jan 29 14:43:15.008463 systemd[1]: Starting lvm2-activation.service - Activation of LVM2 logical volumes...
Jan 29 14:43:15.013992 lvm[1299]: WARNING: Failed to connect to lvmetad. Falling back to device scanning.
Jan 29 14:43:15.042185 systemd[1]: Finished lvm2-activation.service - Activation of LVM2 logical volumes.
Jan 29 14:43:15.044960 systemd[1]: Reached target local-fs-pre.target - Preparation for Local File Systems.
Jan 29 14:43:15.047418 systemd[1]: var-lib-machines.mount - Virtual Machine and Container Storage (Compatibility) was skipped because of an unmet condition check (ConditionPathExists=/var/lib/machines.raw).
Jan 29 14:43:15.047669 systemd[1]: Reached target local-fs.target - Local File Systems.
Jan 29 14:43:15.048813 systemd[1]: Reached target machines.target - Containers.
Jan 29 14:43:15.051398 systemd[1]: Listening on systemd-sysext.socket - System Extension Image Management (Varlink).
Jan 29 14:43:15.059398 systemd[1]: Starting dracut-shutdown.service - Restore /run/initramfs on shutdown...
Jan 29 14:43:15.061360 systemd[1]: Starting ldconfig.service - Rebuild Dynamic Linker Cache...
Jan 29 14:43:15.062013 systemd[1]: systemd-binfmt.service - Set Up Additional Binary Formats was skipped because no trigger condition checks were met.
Jan 29 14:43:15.065388 systemd[1]: Starting systemd-fsck@dev-disk-by\x2dlabel-OEM.service - File System Check on /dev/disk/by-label/OEM...
Jan 29 14:43:15.074679 systemd[1]: Starting systemd-machine-id-commit.service - Commit a transient machine-id on disk...
Jan 29 14:43:15.085657 systemd[1]: Starting systemd-sysext.service - Merge System Extension Images into /usr/ and /opt/...
Jan 29 14:43:15.097294 systemd[1]: Finished dracut-shutdown.service - Restore /run/initramfs on shutdown.
Jan 29 14:43:15.110449 systemd[1]: Finished systemd-fsck@dev-disk-by\x2dlabel-OEM.service - File System Check on /dev/disk/by-label/OEM.
Jan 29 14:43:15.126319 kernel: loop0: detected capacity change from 0 to 8
Jan 29 14:43:15.138094 systemd[1]: etc-machine\x2did.mount: Deactivated successfully.
Jan 29 14:43:15.146335 kernel: squashfs: version 4.0 (2009/01/31) Phillip Lougher
Jan 29 14:43:15.140901 systemd[1]: Finished systemd-machine-id-commit.service - Commit a transient machine-id on disk.
Jan 29 14:43:15.174355 kernel: loop1: detected capacity change from 0 to 142488
Jan 29 14:43:15.227050 kernel: loop2: detected capacity change from 0 to 140768
Jan 29 14:43:15.272503 kernel: loop3: detected capacity change from 0 to 210664
Jan 29 14:43:15.310464 kernel: loop4: detected capacity change from 0 to 8
Jan 29 14:43:15.312313 kernel: loop5: detected capacity change from 0 to 142488
Jan 29 14:43:15.333580 kernel: loop6: detected capacity change from 0 to 140768
Jan 29 14:43:15.344261 kernel: loop7: detected capacity change from 0 to 210664
Jan 29 14:43:15.355476 (sd-merge)[1321]: Using extensions 'containerd-flatcar', 'docker-flatcar', 'kubernetes', 'oem-openstack'.
Jan 29 14:43:15.356051 (sd-merge)[1321]: Merged extensions into '/usr'.
Jan 29 14:43:15.361857 systemd[1]: Reloading requested from client PID 1307 ('systemd-sysext') (unit systemd-sysext.service)...
Jan 29 14:43:15.361885 systemd[1]: Reloading...
Jan 29 14:43:15.445285 zram_generator::config[1348]: No configuration found.
Jan 29 14:43:15.582910 ldconfig[1303]: /sbin/ldconfig: /lib/ld.so.conf is not an ELF file - it has the wrong magic bytes at the start.
Jan 29 14:43:15.613665 systemd[1]: /usr/lib/systemd/system/docker.socket:6: ListenStream= references a path below legacy directory /var/run/, updating /var/run/docker.sock → /run/docker.sock; please update the unit file accordingly.
Jan 29 14:43:15.670937 systemd[1]: Reloading finished in 308 ms.
Jan 29 14:43:15.688581 systemd[1]: Finished ldconfig.service - Rebuild Dynamic Linker Cache.
Jan 29 14:43:15.692586 systemd[1]: Finished systemd-sysext.service - Merge System Extension Images into /usr/ and /opt/.
Jan 29 14:43:15.702616 systemd[1]: Starting ensure-sysext.service...
Jan 29 14:43:15.706379 systemd[1]: Starting systemd-tmpfiles-setup.service - Create System Files and Directories...
Jan 29 14:43:15.716303 systemd[1]: Reloading requested from client PID 1412 ('systemctl') (unit ensure-sysext.service)...
Jan 29 14:43:15.716331 systemd[1]: Reloading...
Jan 29 14:43:15.738872 systemd-tmpfiles[1413]: /usr/lib/tmpfiles.d/provision.conf:20: Duplicate line for path "/root", ignoring.
Jan 29 14:43:15.739211 systemd-tmpfiles[1413]: /usr/lib/tmpfiles.d/systemd-flatcar.conf:6: Duplicate line for path "/var/log/journal", ignoring.
Jan 29 14:43:15.740058 systemd-tmpfiles[1413]: /usr/lib/tmpfiles.d/systemd.conf:29: Duplicate line for path "/var/lib/systemd", ignoring.
Jan 29 14:43:15.740961 systemd-tmpfiles[1413]: ACLs are not supported, ignoring.
Jan 29 14:43:15.741077 systemd-tmpfiles[1413]: ACLs are not supported, ignoring.
Jan 29 14:43:15.743811 systemd-tmpfiles[1413]: Detected autofs mount point /boot during canonicalization of boot.
Jan 29 14:43:15.743906 systemd-tmpfiles[1413]: Skipping /boot
Jan 29 14:43:15.754565 systemd-tmpfiles[1413]: Detected autofs mount point /boot during canonicalization of boot.
Jan 29 14:43:15.754650 systemd-tmpfiles[1413]: Skipping /boot
Jan 29 14:43:15.797285 zram_generator::config[1445]: No configuration found.
Jan 29 14:43:15.938581 systemd-networkd[1266]: eth0: Gained IPv6LL
Jan 29 14:43:15.957049 systemd[1]: /usr/lib/systemd/system/docker.socket:6: ListenStream= references a path below legacy directory /var/run/, updating /var/run/docker.sock → /run/docker.sock; please update the unit file accordingly.
Jan 29 14:43:16.018117 systemd[1]: Reloading finished in 301 ms.
Jan 29 14:43:16.032101 systemd[1]: Finished systemd-networkd-wait-online.service - Wait for Network to be Configured.
Jan 29 14:43:16.033175 systemd[1]: Finished systemd-tmpfiles-setup.service - Create System Files and Directories.
Jan 29 14:43:16.046382 systemd[1]: Starting audit-rules.service - Load Security Auditing Rules...
Jan 29 14:43:16.056378 systemd[1]: Starting clean-ca-certificates.service - Clean up broken links in /etc/ssl/certs...
Jan 29 14:43:16.061405 systemd[1]: Starting systemd-journal-catalog-update.service - Rebuild Journal Catalog...
Jan 29 14:43:16.072409 systemd[1]: Starting systemd-resolved.service - Network Name Resolution...
Jan 29 14:43:16.078053 systemd[1]: Starting systemd-update-utmp.service - Record System Boot/Shutdown in UTMP...
Jan 29 14:43:16.100659 systemd[1]: proc-xen.mount - /proc/xen was skipped because of an unmet condition check (ConditionVirtualization=xen).
Jan 29 14:43:16.101148 systemd[1]: ignition-delete-config.service - Ignition (delete config) was skipped because no trigger condition checks were met.
Jan 29 14:43:16.105852 systemd[1]: Starting modprobe@dm_mod.service - Load Kernel Module dm_mod...
Jan 29 14:43:16.123649 systemd[1]: Starting modprobe@efi_pstore.service - Load Kernel Module efi_pstore...
Jan 29 14:43:16.138546 systemd[1]: Starting modprobe@loop.service - Load Kernel Module loop...
Jan 29 14:43:16.139275 systemd[1]: systemd-binfmt.service - Set Up Additional Binary Formats was skipped because no trigger condition checks were met.
Jan 29 14:43:16.139417 systemd[1]: xenserver-pv-version.service - Set fake PV driver version for XenServer was skipped because of an unmet condition check (ConditionVirtualization=xen).
Jan 29 14:43:16.149947 systemd[1]: Finished systemd-journal-catalog-update.service - Rebuild Journal Catalog.
Jan 29 14:43:16.158513 systemd[1]: proc-xen.mount - /proc/xen was skipped because of an unmet condition check (ConditionVirtualization=xen).
Jan 29 14:43:16.158798 systemd[1]: ignition-delete-config.service - Ignition (delete config) was skipped because no trigger condition checks were met.
Jan 29 14:43:16.159037 systemd[1]: systemd-binfmt.service - Set Up Additional Binary Formats was skipped because no trigger condition checks were met.
Jan 29 14:43:16.167595 systemd[1]: Starting systemd-update-done.service - Update is Completed...
Jan 29 14:43:16.172865 systemd[1]: xenserver-pv-version.service - Set fake PV driver version for XenServer was skipped because of an unmet condition check (ConditionVirtualization=xen).
Jan 29 14:43:16.173861 systemd[1]: Finished systemd-update-utmp.service - Record System Boot/Shutdown in UTMP.
Jan 29 14:43:16.182168 systemd[1]: modprobe@dm_mod.service: Deactivated successfully.
Jan 29 14:43:16.184420 systemd[1]: Finished modprobe@dm_mod.service - Load Kernel Module dm_mod.
Jan 29 14:43:16.187627 systemd[1]: Finished clean-ca-certificates.service - Clean up broken links in /etc/ssl/certs.
Jan 29 14:43:16.188552 systemd[1]: modprobe@efi_pstore.service: Deactivated successfully.
Jan 29 14:43:16.188708 systemd[1]: Finished modprobe@efi_pstore.service - Load Kernel Module efi_pstore.
Jan 29 14:43:16.191144 systemd[1]: modprobe@loop.service: Deactivated successfully.
Jan 29 14:43:16.192578 systemd[1]: Finished modprobe@loop.service - Load Kernel Module loop.
Jan 29 14:43:16.205729 augenrules[1541]: No rules
Jan 29 14:43:16.210484 systemd[1]: Finished audit-rules.service - Load Security Auditing Rules.
Jan 29 14:43:16.213941 systemd[1]: proc-xen.mount - /proc/xen was skipped because of an unmet condition check (ConditionVirtualization=xen).
Jan 29 14:43:16.214309 systemd[1]: ignition-delete-config.service - Ignition (delete config) was skipped because no trigger condition checks were met.
Jan 29 14:43:16.226626 systemd[1]: Starting modprobe@dm_mod.service - Load Kernel Module dm_mod...
Jan 29 14:43:16.233742 systemd[1]: Starting modprobe@drm.service - Load Kernel Module drm...
Jan 29 14:43:16.241591 systemd[1]: Starting modprobe@efi_pstore.service - Load Kernel Module efi_pstore...
Jan 29 14:43:16.242984 systemd-resolved[1512]: Positive Trust Anchors:
Jan 29 14:43:16.244379 systemd-resolved[1512]: . IN DS 20326 8 2 e06d44b80b8f1d39a95c0b0d7c65d08458e880409bbc683457104237c7f8ec8d
Jan 29 14:43:16.244424 systemd-resolved[1512]: Negative trust anchors: home.arpa 10.in-addr.arpa 16.172.in-addr.arpa 17.172.in-addr.arpa 18.172.in-addr.arpa 19.172.in-addr.arpa 20.172.in-addr.arpa 21.172.in-addr.arpa 22.172.in-addr.arpa 23.172.in-addr.arpa 24.172.in-addr.arpa 25.172.in-addr.arpa 26.172.in-addr.arpa 27.172.in-addr.arpa 28.172.in-addr.arpa 29.172.in-addr.arpa 30.172.in-addr.arpa 31.172.in-addr.arpa 170.0.0.192.in-addr.arpa 171.0.0.192.in-addr.arpa 168.192.in-addr.arpa d.f.ip6.arpa ipv4only.arpa resolver.arpa corp home internal intranet lan local private test
Jan 29 14:43:16.251628 systemd[1]: Starting modprobe@loop.service - Load Kernel Module loop...
Jan 29 14:43:16.252392 systemd[1]: systemd-binfmt.service - Set Up Additional Binary Formats was skipped because no trigger condition checks were met.
Jan 29 14:43:16.252650 systemd[1]: update-ca-certificates.service - Update CA bundle at /etc/ssl/certs/ca-certificates.crt was skipped because of an unmet condition check (ConditionPathIsSymbolicLink=!/etc/ssl/certs/ca-certificates.crt).
Jan 29 14:43:16.252847 systemd[1]: xenserver-pv-version.service - Set fake PV driver version for XenServer was skipped because of an unmet condition check (ConditionVirtualization=xen).
Jan 29 14:43:16.254693 systemd[1]: Finished systemd-update-done.service - Update is Completed.
Jan 29 14:43:16.254706 systemd-resolved[1512]: Using system hostname 'srv-4mohb.gb1.brightbox.com'.
Jan 29 14:43:16.257913 systemd[1]: Started systemd-resolved.service - Network Name Resolution.
Jan 29 14:43:16.258879 systemd[1]: modprobe@dm_mod.service: Deactivated successfully.
Jan 29 14:43:16.259073 systemd[1]: Finished modprobe@dm_mod.service - Load Kernel Module dm_mod.
Jan 29 14:43:16.265022 systemd[1]: modprobe@drm.service: Deactivated successfully.
Jan 29 14:43:16.265688 systemd[1]: Finished modprobe@drm.service - Load Kernel Module drm.
Jan 29 14:43:16.268184 systemd[1]: modprobe@efi_pstore.service: Deactivated successfully.
Jan 29 14:43:16.268653 systemd[1]: Finished modprobe@efi_pstore.service - Load Kernel Module efi_pstore.
Jan 29 14:43:16.270984 systemd[1]: modprobe@loop.service: Deactivated successfully.
Jan 29 14:43:16.271544 systemd[1]: Finished modprobe@loop.service - Load Kernel Module loop.
Jan 29 14:43:16.277172 systemd[1]: Finished ensure-sysext.service.
Jan 29 14:43:16.283656 systemd[1]: Reached target network.target - Network.
Jan 29 14:43:16.284069 systemd[1]: Reached target network-online.target - Network is Online.
Jan 29 14:43:16.284493 systemd[1]: Reached target nss-lookup.target - Host and Network Name Lookups.
Jan 29 14:43:16.284926 systemd[1]: systemd-pstore.service - Platform Persistent Storage Archival was skipped because of an unmet condition check (ConditionDirectoryNotEmpty=/sys/fs/pstore).
Jan 29 14:43:16.284995 systemd[1]: systemd-repart.service - Repartition Root Disk was skipped because no trigger condition checks were met.
Jan 29 14:43:16.290756 systemd[1]: Starting systemd-timesyncd.service - Network Time Synchronization...
Jan 29 14:43:16.345967 systemd[1]: Started systemd-timesyncd.service - Network Time Synchronization.
Jan 29 14:43:16.347530 systemd[1]: Reached target sysinit.target - System Initialization.
Jan 29 14:43:16.348039 systemd[1]: Started motdgen.path - Watch for update engine configuration changes.
Jan 29 14:43:16.348514 systemd[1]: Started user-cloudinit@var-lib-flatcar\x2dinstall-user_data.path - Watch for a cloud-config at /var/lib/flatcar-install/user_data.
Jan 29 14:43:16.348955 systemd[1]: Started systemd-tmpfiles-clean.timer - Daily Cleanup of Temporary Directories.
Jan 29 14:43:16.349460 systemd[1]: update-engine-stub.timer - Update Engine Stub Timer was skipped because of an unmet condition check (ConditionPathExists=/usr/.noupdate).
Jan 29 14:43:16.349494 systemd[1]: Reached target paths.target - Path Units.
Jan 29 14:43:16.349854 systemd[1]: Reached target time-set.target - System Time Set.
Jan 29 14:43:16.350403 systemd[1]: Started logrotate.timer - Daily rotation of log files.
Jan 29 14:43:16.350920 systemd[1]: Started mdadm.timer - Weekly check for MD array's redundancy information..
Jan 29 14:43:16.351372 systemd[1]: Reached target timers.target - Timer Units.
Jan 29 14:43:16.352470 systemd[1]: Listening on dbus.socket - D-Bus System Message Bus Socket.
Jan 29 14:43:16.354772 systemd[1]: Starting docker.socket - Docker Socket for the API...
Jan 29 14:43:16.357187 systemd[1]: Listening on sshd.socket - OpenSSH Server Socket.
Jan 29 14:43:16.361006 systemd[1]: Listening on docker.socket - Docker Socket for the API.
Jan 29 14:43:16.362262 systemd[1]: Reached target sockets.target - Socket Units.
Jan 29 14:43:16.363464 systemd[1]: Reached target basic.target - Basic System.
Jan 29 14:43:16.364913 systemd[1]: System is tainted: cgroupsv1
Jan 29 14:43:16.365004 systemd[1]: addon-config@oem.service - Configure Addon /oem was skipped because no trigger condition checks were met.
Jan 29 14:43:16.365059 systemd[1]: addon-run@oem.service - Run Addon /oem was skipped because no trigger condition checks were met.
Jan 29 14:43:16.367825 systemd[1]: Starting containerd.service - containerd container runtime...
Jan 29 14:43:16.374415 systemd[1]: Starting coreos-metadata.service - Flatcar Metadata Agent...
Jan 29 14:43:16.376428 systemd[1]: Starting dbus.service - D-Bus System Message Bus...
Jan 29 14:43:16.386323 systemd[1]: Starting enable-oem-cloudinit.service - Enable cloudinit...
Jan 29 14:43:16.393816 jq[1578]: false
Jan 29 14:43:16.400054 systemd[1]: Starting extend-filesystems.service - Extend Filesystems...
Jan 29 14:43:16.401131 systemd[1]: flatcar-setup-environment.service - Modifies /etc/environment for CoreOS was skipped because of an unmet condition check (ConditionPathExists=/oem/bin/flatcar-setup-environment).
Jan 29 14:43:16.403576 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent...
Jan 29 14:43:16.406370 systemd[1]: Starting motdgen.service - Generate /run/flatcar/motd...
Jan 29 14:43:16.416364 systemd[1]: Starting nvidia.service - NVIDIA Configure Service...
Jan 29 14:43:16.424502 dbus-daemon[1576]: [system] SELinux support is enabled
Jan 29 14:43:16.425015 systemd[1]: Starting ssh-key-proc-cmdline.service - Install an ssh key from /proc/cmdline...
Jan 29 14:43:16.427612 dbus-daemon[1576]: [system] Activating systemd to hand-off: service name='org.freedesktop.hostname1' unit='dbus-org.freedesktop.hostname1.service' requested by ':1.2' (uid=244 pid=1266 comm="/usr/lib/systemd/systemd-networkd" label="system_u:system_r:kernel_t:s0")
Jan 29 14:43:16.433381 systemd[1]: Starting sshd-keygen.service - Generate sshd host keys...
Jan 29 14:43:16.440369 systemd[1]: Starting systemd-logind.service - User Login Management...
Jan 29 14:43:16.442739 systemd[1]: tcsd.service - TCG Core Services Daemon was skipped because of an unmet condition check (ConditionPathExists=/dev/tpm0).
Jan 29 14:43:16.451663 extend-filesystems[1581]: Found loop4
Jan 29 14:43:16.451663 extend-filesystems[1581]: Found loop5
Jan 29 14:43:16.451663 extend-filesystems[1581]: Found loop6
Jan 29 14:43:16.451663 extend-filesystems[1581]: Found loop7
Jan 29 14:43:16.451663 extend-filesystems[1581]: Found vda
Jan 29 14:43:16.451663 extend-filesystems[1581]: Found vda1
Jan 29 14:43:16.451663 extend-filesystems[1581]: Found vda2
Jan 29 14:43:16.451663 extend-filesystems[1581]: Found vda3
Jan 29 14:43:16.451663 extend-filesystems[1581]: Found usr
Jan 29 14:43:16.451663 extend-filesystems[1581]: Found vda4
Jan 29 14:43:16.451663 extend-filesystems[1581]: Found vda6
Jan 29 14:43:16.451663 extend-filesystems[1581]: Found vda7
Jan 29 14:43:16.451663 extend-filesystems[1581]: Found vda9
Jan 29 14:43:16.451663 extend-filesystems[1581]: Checking size of /dev/vda9
Jan 29 14:43:16.458856 systemd[1]: Starting update-engine.service - Update Engine...
Jan 29 14:43:16.464349 systemd[1]: Starting update-ssh-keys-after-ignition.service - Run update-ssh-keys once after Ignition...
Jan 29 14:43:16.467634 systemd[1]: Started dbus.service - D-Bus System Message Bus.
Jan 29 14:43:16.484048 systemd[1]: enable-oem-cloudinit.service: Skipped due to 'exec-condition'.
Jan 29 14:43:16.502801 jq[1603]: true
Jan 29 14:43:16.484304 systemd[1]: Condition check resulted in enable-oem-cloudinit.service - Enable cloudinit being skipped.
Jan 29 14:43:16.489571 systemd[1]: motdgen.service: Deactivated successfully.
Jan 29 14:43:16.489811 systemd[1]: Finished motdgen.service - Generate /run/flatcar/motd.
Jan 29 14:43:16.500787 systemd[1]: ssh-key-proc-cmdline.service: Deactivated successfully.
Jan 29 14:43:16.507878 systemd[1]: Finished ssh-key-proc-cmdline.service - Install an ssh key from /proc/cmdline.
Jan 29 14:43:16.524781 systemd[1]: Finished nvidia.service - NVIDIA Configure Service.
Jan 29 14:43:16.537879 update_engine[1598]: I20250129 14:43:16.533478 1598 main.cc:92] Flatcar Update Engine starting
Jan 29 14:43:16.540597 jq[1613]: true
Jan 29 14:43:16.541919 dbus-daemon[1576]: [system] Successfully activated service 'org.freedesktop.systemd1'
Jan 29 14:43:16.543730 extend-filesystems[1581]: Resized partition /dev/vda9
Jan 29 14:43:16.545964 (ntainerd)[1624]: containerd.service: Referenced but unset environment variable evaluates to an empty string: TORCX_IMAGEDIR, TORCX_UNPACKDIR
Jan 29 14:43:16.547612 extend-filesystems[1630]: resize2fs 1.47.1 (20-May-2024)
Jan 29 14:43:16.553271 kernel: EXT4-fs (vda9): resizing filesystem from 1617920 to 15121403 blocks
Jan 29 14:43:16.561381 update_engine[1598]: I20250129 14:43:16.555449 1598 update_check_scheduler.cc:74] Next update check in 2m22s
Jan 29 14:43:16.557100 systemd[1]: Started update-engine.service - Update Engine.
Jan 29 14:43:16.557689 systemd[1]: system-cloudinit@usr-share-oem-cloud\x2dconfig.yml.service - Load cloud-config from /usr/share/oem/cloud-config.yml was skipped because of an unmet condition check (ConditionFileNotEmpty=/usr/share/oem/cloud-config.yml).
Jan 29 14:43:16.557716 systemd[1]: Reached target system-config.target - Load system-provided cloud configs.
Jan 29 14:43:16.577468 systemd[1]: Starting systemd-hostnamed.service - Hostname Service...
Jan 29 14:43:16.577947 systemd[1]: user-cloudinit-proc-cmdline.service - Load cloud-config from url defined in /proc/cmdline was skipped because of an unmet condition check (ConditionKernelCommandLine=cloud-config-url).
Jan 29 14:43:16.577973 systemd[1]: Reached target user-config.target - Load user-provided cloud configs.
Jan 29 14:43:16.581088 systemd[1]: cgroup compatibility translation between legacy and unified hierarchy settings activated. See cgroup-compat debug messages for details.
Jan 29 14:43:16.592632 systemd[1]: Started locksmithd.service - Cluster reboot manager.
Jan 29 14:43:16.666744 kernel: BTRFS warning: duplicate device /dev/vda3 devid 1 generation 38 scanned by (udev-worker) (1270)
Jan 29 14:43:16.713329 bash[1648]: Updated "/home/core/.ssh/authorized_keys"
Jan 29 14:43:16.711514 systemd[1]: Finished update-ssh-keys-after-ignition.service - Run update-ssh-keys once after Ignition.
Jan 29 14:43:16.729527 systemd[1]: Starting sshkeys.service...
Jan 29 14:43:16.734513 systemd-logind[1593]: Watching system buttons on /dev/input/event2 (Power Button)
Jan 29 14:43:16.734530 systemd-logind[1593]: Watching system buttons on /dev/input/event0 (AT Translated Set 2 keyboard)
Jan 29 14:43:16.739074 systemd-logind[1593]: New seat seat0.
Jan 29 14:43:16.740070 systemd[1]: Started systemd-logind.service - User Login Management.
Jan 29 14:43:16.775170 systemd[1]: Created slice system-coreos\x2dmetadata\x2dsshkeys.slice - Slice /system/coreos-metadata-sshkeys.
Jan 29 14:43:16.781656 systemd[1]: Starting coreos-metadata-sshkeys@core.service - Flatcar Metadata Agent (SSH Keys)...
Jan 29 14:43:16.788455 systemd-networkd[1266]: eth0: Ignoring DHCPv6 address 2a02:1348:17d:16ae:24:19ff:fef4:5aba/128 (valid for 59min 59s, preferred for 59min 59s) which conflicts with 2a02:1348:17d:16ae:24:19ff:fef4:5aba/64 assigned by NDisc.
Jan 29 14:43:16.788553 systemd-networkd[1266]: eth0: Hint: use IPv6Token= setting to change the address generated by NDisc or set UseAutonomousPrefix=no.
Jan 29 14:43:16.802243 kernel: EXT4-fs (vda9): resized filesystem to 15121403 Jan 29 14:43:16.830697 extend-filesystems[1630]: Filesystem at /dev/vda9 is mounted on /; on-line resizing required Jan 29 14:43:16.830697 extend-filesystems[1630]: old_desc_blocks = 1, new_desc_blocks = 8 Jan 29 14:43:16.830697 extend-filesystems[1630]: The filesystem on /dev/vda9 is now 15121403 (4k) blocks long. Jan 29 14:43:16.828830 systemd[1]: extend-filesystems.service: Deactivated successfully. Jan 29 14:43:16.832802 extend-filesystems[1581]: Resized filesystem in /dev/vda9 Jan 29 14:43:16.829099 systemd[1]: Finished extend-filesystems.service - Extend Filesystems. Jan 29 14:43:16.869445 systemd-timesyncd[1570]: Contacted time server 185.15.104.21:123 (0.flatcar.pool.ntp.org). Jan 29 14:43:16.869802 systemd-timesyncd[1570]: Initial clock synchronization to Wed 2025-01-29 14:43:17.254349 UTC. Jan 29 14:43:16.872795 sshd_keygen[1610]: ssh-keygen: generating new host keys: RSA ECDSA ED25519 Jan 29 14:43:16.919796 locksmithd[1637]: locksmithd starting currentOperation="UPDATE_STATUS_IDLE" strategy="reboot" Jan 29 14:43:16.923883 dbus-daemon[1576]: [system] Successfully activated service 'org.freedesktop.hostname1' Jan 29 14:43:16.924076 systemd[1]: Started systemd-hostnamed.service - Hostname Service. Jan 29 14:43:16.926202 dbus-daemon[1576]: [system] Activating via systemd: service name='org.freedesktop.PolicyKit1' unit='polkit.service' requested by ':1.8' (uid=0 pid=1633 comm="/usr/lib/systemd/systemd-hostnamed" label="system_u:system_r:kernel_t:s0") Jan 29 14:43:16.934530 systemd[1]: Starting polkit.service - Authorization Manager... Jan 29 14:43:16.942602 containerd[1624]: time="2025-01-29T14:43:16.942512486Z" level=info msg="starting containerd" revision=174e0d1785eeda18dc2beba45e1d5a188771636b version=v1.7.21 Jan 29 14:43:16.950456 systemd[1]: Finished sshd-keygen.service - Generate sshd host keys. 
Jan 29 14:43:16.961587 systemd[1]: Starting issuegen.service - Generate /run/issue... Jan 29 14:43:16.966979 polkitd[1675]: Started polkitd version 121 Jan 29 14:43:16.981890 polkitd[1675]: Loading rules from directory /etc/polkit-1/rules.d Jan 29 14:43:16.981942 polkitd[1675]: Loading rules from directory /usr/share/polkit-1/rules.d Jan 29 14:43:16.983563 systemd[1]: issuegen.service: Deactivated successfully. Jan 29 14:43:16.983817 systemd[1]: Finished issuegen.service - Generate /run/issue. Jan 29 14:43:16.991965 containerd[1624]: time="2025-01-29T14:43:16.991738681Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.aufs\"..." type=io.containerd.snapshotter.v1 Jan 29 14:43:16.993647 containerd[1624]: time="2025-01-29T14:43:16.993615946Z" level=info msg="skip loading plugin \"io.containerd.snapshotter.v1.aufs\"..." error="aufs is not supported (modprobe aufs failed: exit status 1 \"modprobe: FATAL: Module aufs not found in directory /lib/modules/6.6.74-flatcar\\n\"): skip plugin" type=io.containerd.snapshotter.v1 Jan 29 14:43:16.993658 systemd[1]: Starting systemd-user-sessions.service - Permit User Sessions... Jan 29 14:43:16.993817 containerd[1624]: time="2025-01-29T14:43:16.993802479Z" level=info msg="loading plugin \"io.containerd.event.v1.exchange\"..." type=io.containerd.event.v1 Jan 29 14:43:16.994954 containerd[1624]: time="2025-01-29T14:43:16.993867185Z" level=info msg="loading plugin \"io.containerd.internal.v1.opt\"..." type=io.containerd.internal.v1 Jan 29 14:43:16.994954 containerd[1624]: time="2025-01-29T14:43:16.994028018Z" level=info msg="loading plugin \"io.containerd.warning.v1.deprecations\"..." type=io.containerd.warning.v1 Jan 29 14:43:16.994954 containerd[1624]: time="2025-01-29T14:43:16.994044641Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.blockfile\"..." 
type=io.containerd.snapshotter.v1 Jan 29 14:43:16.994954 containerd[1624]: time="2025-01-29T14:43:16.994100394Z" level=info msg="skip loading plugin \"io.containerd.snapshotter.v1.blockfile\"..." error="no scratch file generator: skip plugin" type=io.containerd.snapshotter.v1 Jan 29 14:43:16.994954 containerd[1624]: time="2025-01-29T14:43:16.994115286Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.btrfs\"..." type=io.containerd.snapshotter.v1 Jan 29 14:43:16.994954 containerd[1624]: time="2025-01-29T14:43:16.994347140Z" level=info msg="skip loading plugin \"io.containerd.snapshotter.v1.btrfs\"..." error="path /var/lib/containerd/io.containerd.snapshotter.v1.btrfs (ext4) must be a btrfs filesystem to be used with the btrfs snapshotter: skip plugin" type=io.containerd.snapshotter.v1 Jan 29 14:43:16.994954 containerd[1624]: time="2025-01-29T14:43:16.994364045Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.devmapper\"..." type=io.containerd.snapshotter.v1 Jan 29 14:43:16.994954 containerd[1624]: time="2025-01-29T14:43:16.994389010Z" level=info msg="skip loading plugin \"io.containerd.snapshotter.v1.devmapper\"..." error="devmapper not configured: skip plugin" type=io.containerd.snapshotter.v1 Jan 29 14:43:16.994954 containerd[1624]: time="2025-01-29T14:43:16.994399853Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.native\"..." type=io.containerd.snapshotter.v1 Jan 29 14:43:16.994954 containerd[1624]: time="2025-01-29T14:43:16.994475583Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.overlayfs\"..." type=io.containerd.snapshotter.v1 Jan 29 14:43:16.994954 containerd[1624]: time="2025-01-29T14:43:16.994668171Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.zfs\"..." type=io.containerd.snapshotter.v1 Jan 29 14:43:16.995256 containerd[1624]: time="2025-01-29T14:43:16.994798874Z" level=info msg="skip loading plugin \"io.containerd.snapshotter.v1.zfs\"..." 
error="path /var/lib/containerd/io.containerd.snapshotter.v1.zfs must be a zfs filesystem to be used with the zfs snapshotter: skip plugin" type=io.containerd.snapshotter.v1 Jan 29 14:43:16.995256 containerd[1624]: time="2025-01-29T14:43:16.994812537Z" level=info msg="loading plugin \"io.containerd.content.v1.content\"..." type=io.containerd.content.v1 Jan 29 14:43:16.995256 containerd[1624]: time="2025-01-29T14:43:16.994877239Z" level=info msg="loading plugin \"io.containerd.metadata.v1.bolt\"..." type=io.containerd.metadata.v1 Jan 29 14:43:16.995256 containerd[1624]: time="2025-01-29T14:43:16.994916228Z" level=info msg="metadata content store policy set" policy=shared Jan 29 14:43:16.997489 polkitd[1675]: Finished loading, compiling and executing 2 rules Jan 29 14:43:16.998415 dbus-daemon[1576]: [system] Successfully activated service 'org.freedesktop.PolicyKit1' Jan 29 14:43:16.998965 systemd[1]: Started polkit.service - Authorization Manager. Jan 29 14:43:17.000374 polkitd[1675]: Acquired the name org.freedesktop.PolicyKit1 on the system bus Jan 29 14:43:17.007293 containerd[1624]: time="2025-01-29T14:43:17.005849059Z" level=info msg="loading plugin \"io.containerd.gc.v1.scheduler\"..." type=io.containerd.gc.v1 Jan 29 14:43:17.007293 containerd[1624]: time="2025-01-29T14:43:17.005917008Z" level=info msg="loading plugin \"io.containerd.differ.v1.walking\"..." type=io.containerd.differ.v1 Jan 29 14:43:17.007293 containerd[1624]: time="2025-01-29T14:43:17.005935117Z" level=info msg="loading plugin \"io.containerd.lease.v1.manager\"..." type=io.containerd.lease.v1 Jan 29 14:43:17.007293 containerd[1624]: time="2025-01-29T14:43:17.005955220Z" level=info msg="loading plugin \"io.containerd.streaming.v1.manager\"..." type=io.containerd.streaming.v1 Jan 29 14:43:17.007293 containerd[1624]: time="2025-01-29T14:43:17.005970646Z" level=info msg="loading plugin \"io.containerd.runtime.v1.linux\"..." 
type=io.containerd.runtime.v1 Jan 29 14:43:17.007293 containerd[1624]: time="2025-01-29T14:43:17.006118405Z" level=info msg="loading plugin \"io.containerd.monitor.v1.cgroups\"..." type=io.containerd.monitor.v1 Jan 29 14:43:17.007293 containerd[1624]: time="2025-01-29T14:43:17.006497459Z" level=info msg="loading plugin \"io.containerd.runtime.v2.task\"..." type=io.containerd.runtime.v2 Jan 29 14:43:17.007293 containerd[1624]: time="2025-01-29T14:43:17.006633365Z" level=info msg="loading plugin \"io.containerd.runtime.v2.shim\"..." type=io.containerd.runtime.v2 Jan 29 14:43:17.007293 containerd[1624]: time="2025-01-29T14:43:17.006650271Z" level=info msg="loading plugin \"io.containerd.sandbox.store.v1.local\"..." type=io.containerd.sandbox.store.v1 Jan 29 14:43:17.007293 containerd[1624]: time="2025-01-29T14:43:17.006665401Z" level=info msg="loading plugin \"io.containerd.sandbox.controller.v1.local\"..." type=io.containerd.sandbox.controller.v1 Jan 29 14:43:17.007293 containerd[1624]: time="2025-01-29T14:43:17.006692866Z" level=info msg="loading plugin \"io.containerd.service.v1.containers-service\"..." type=io.containerd.service.v1 Jan 29 14:43:17.007293 containerd[1624]: time="2025-01-29T14:43:17.006710492Z" level=info msg="loading plugin \"io.containerd.service.v1.content-service\"..." type=io.containerd.service.v1 Jan 29 14:43:17.007293 containerd[1624]: time="2025-01-29T14:43:17.006735903Z" level=info msg="loading plugin \"io.containerd.service.v1.diff-service\"..." type=io.containerd.service.v1 Jan 29 14:43:17.007293 containerd[1624]: time="2025-01-29T14:43:17.006758127Z" level=info msg="loading plugin \"io.containerd.service.v1.images-service\"..." type=io.containerd.service.v1 Jan 29 14:43:17.007683 containerd[1624]: time="2025-01-29T14:43:17.006775320Z" level=info msg="loading plugin \"io.containerd.service.v1.introspection-service\"..." 
type=io.containerd.service.v1 Jan 29 14:43:17.007683 containerd[1624]: time="2025-01-29T14:43:17.006789068Z" level=info msg="loading plugin \"io.containerd.service.v1.namespaces-service\"..." type=io.containerd.service.v1 Jan 29 14:43:17.007683 containerd[1624]: time="2025-01-29T14:43:17.006802259Z" level=info msg="loading plugin \"io.containerd.service.v1.snapshots-service\"..." type=io.containerd.service.v1 Jan 29 14:43:17.007683 containerd[1624]: time="2025-01-29T14:43:17.006815293Z" level=info msg="loading plugin \"io.containerd.service.v1.tasks-service\"..." type=io.containerd.service.v1 Jan 29 14:43:17.007683 containerd[1624]: time="2025-01-29T14:43:17.006842476Z" level=info msg="loading plugin \"io.containerd.grpc.v1.containers\"..." type=io.containerd.grpc.v1 Jan 29 14:43:17.007683 containerd[1624]: time="2025-01-29T14:43:17.006857843Z" level=info msg="loading plugin \"io.containerd.grpc.v1.content\"..." type=io.containerd.grpc.v1 Jan 29 14:43:17.007683 containerd[1624]: time="2025-01-29T14:43:17.006891670Z" level=info msg="loading plugin \"io.containerd.grpc.v1.diff\"..." type=io.containerd.grpc.v1 Jan 29 14:43:17.007683 containerd[1624]: time="2025-01-29T14:43:17.006911616Z" level=info msg="loading plugin \"io.containerd.grpc.v1.events\"..." type=io.containerd.grpc.v1 Jan 29 14:43:17.007683 containerd[1624]: time="2025-01-29T14:43:17.006932696Z" level=info msg="loading plugin \"io.containerd.grpc.v1.images\"..." type=io.containerd.grpc.v1 Jan 29 14:43:17.007683 containerd[1624]: time="2025-01-29T14:43:17.006950008Z" level=info msg="loading plugin \"io.containerd.grpc.v1.introspection\"..." type=io.containerd.grpc.v1 Jan 29 14:43:17.007683 containerd[1624]: time="2025-01-29T14:43:17.006965855Z" level=info msg="loading plugin \"io.containerd.grpc.v1.leases\"..." type=io.containerd.grpc.v1 Jan 29 14:43:17.007683 containerd[1624]: time="2025-01-29T14:43:17.006980266Z" level=info msg="loading plugin \"io.containerd.grpc.v1.namespaces\"..." 
type=io.containerd.grpc.v1 Jan 29 14:43:17.007683 containerd[1624]: time="2025-01-29T14:43:17.006994148Z" level=info msg="loading plugin \"io.containerd.grpc.v1.sandbox-controllers\"..." type=io.containerd.grpc.v1 Jan 29 14:43:17.007683 containerd[1624]: time="2025-01-29T14:43:17.007009747Z" level=info msg="loading plugin \"io.containerd.grpc.v1.sandboxes\"..." type=io.containerd.grpc.v1 Jan 29 14:43:17.008024 containerd[1624]: time="2025-01-29T14:43:17.007024731Z" level=info msg="loading plugin \"io.containerd.grpc.v1.snapshots\"..." type=io.containerd.grpc.v1 Jan 29 14:43:17.008024 containerd[1624]: time="2025-01-29T14:43:17.007037440Z" level=info msg="loading plugin \"io.containerd.grpc.v1.streaming\"..." type=io.containerd.grpc.v1 Jan 29 14:43:17.008024 containerd[1624]: time="2025-01-29T14:43:17.007053407Z" level=info msg="loading plugin \"io.containerd.grpc.v1.tasks\"..." type=io.containerd.grpc.v1 Jan 29 14:43:17.008024 containerd[1624]: time="2025-01-29T14:43:17.007072851Z" level=info msg="loading plugin \"io.containerd.transfer.v1.local\"..." type=io.containerd.transfer.v1 Jan 29 14:43:17.008024 containerd[1624]: time="2025-01-29T14:43:17.007097010Z" level=info msg="loading plugin \"io.containerd.grpc.v1.transfer\"..." type=io.containerd.grpc.v1 Jan 29 14:43:17.008024 containerd[1624]: time="2025-01-29T14:43:17.007109958Z" level=info msg="loading plugin \"io.containerd.grpc.v1.version\"..." type=io.containerd.grpc.v1 Jan 29 14:43:17.008024 containerd[1624]: time="2025-01-29T14:43:17.007137999Z" level=info msg="loading plugin \"io.containerd.internal.v1.restart\"..." type=io.containerd.internal.v1 Jan 29 14:43:17.008024 containerd[1624]: time="2025-01-29T14:43:17.007196758Z" level=info msg="loading plugin \"io.containerd.tracing.processor.v1.otlp\"..." type=io.containerd.tracing.processor.v1 Jan 29 14:43:17.008024 containerd[1624]: time="2025-01-29T14:43:17.007221750Z" level=info msg="skip loading plugin \"io.containerd.tracing.processor.v1.otlp\"..." 
error="skip plugin: tracing endpoint not configured" type=io.containerd.tracing.processor.v1 Jan 29 14:43:17.008024 containerd[1624]: time="2025-01-29T14:43:17.007233304Z" level=info msg="loading plugin \"io.containerd.internal.v1.tracing\"..." type=io.containerd.internal.v1 Jan 29 14:43:17.008024 containerd[1624]: time="2025-01-29T14:43:17.007246004Z" level=info msg="skip loading plugin \"io.containerd.internal.v1.tracing\"..." error="skip plugin: tracing endpoint not configured" type=io.containerd.internal.v1 Jan 29 14:43:17.008024 containerd[1624]: time="2025-01-29T14:43:17.007256214Z" level=info msg="loading plugin \"io.containerd.grpc.v1.healthcheck\"..." type=io.containerd.grpc.v1 Jan 29 14:43:17.011163 containerd[1624]: time="2025-01-29T14:43:17.007269060Z" level=info msg="loading plugin \"io.containerd.nri.v1.nri\"..." type=io.containerd.nri.v1 Jan 29 14:43:17.011244 containerd[1624]: time="2025-01-29T14:43:17.011168953Z" level=info msg="NRI interface is disabled by configuration." Jan 29 14:43:17.011244 containerd[1624]: time="2025-01-29T14:43:17.011192019Z" level=info msg="loading plugin \"io.containerd.grpc.v1.cri\"..." 
type=io.containerd.grpc.v1 Jan 29 14:43:17.011644 containerd[1624]: time="2025-01-29T14:43:17.011581408Z" level=info msg="Start cri plugin with config {PluginConfig:{ContainerdConfig:{Snapshotter:overlayfs DefaultRuntimeName:runc DefaultRuntime:{Type: Path: Engine: PodAnnotations:[] ContainerAnnotations:[] Root: Options:map[] PrivilegedWithoutHostDevices:false PrivilegedWithoutHostDevicesAllDevicesAllowed:false BaseRuntimeSpec: NetworkPluginConfDir: NetworkPluginMaxConfNum:0 Snapshotter: SandboxMode:} UntrustedWorkloadRuntime:{Type: Path: Engine: PodAnnotations:[] ContainerAnnotations:[] Root: Options:map[] PrivilegedWithoutHostDevices:false PrivilegedWithoutHostDevicesAllDevicesAllowed:false BaseRuntimeSpec: NetworkPluginConfDir: NetworkPluginMaxConfNum:0 Snapshotter: SandboxMode:} Runtimes:map[runc:{Type:io.containerd.runc.v2 Path: Engine: PodAnnotations:[] ContainerAnnotations:[] Root: Options:map[SystemdCgroup:false] PrivilegedWithoutHostDevices:false PrivilegedWithoutHostDevicesAllDevicesAllowed:false BaseRuntimeSpec: NetworkPluginConfDir: NetworkPluginMaxConfNum:0 Snapshotter: SandboxMode:podsandbox}] NoPivot:false DisableSnapshotAnnotations:true DiscardUnpackedLayers:false IgnoreBlockIONotEnabledErrors:false IgnoreRdtNotEnabledErrors:false} CniConfig:{NetworkPluginBinDir:/opt/cni/bin NetworkPluginConfDir:/etc/cni/net.d NetworkPluginMaxConfNum:1 NetworkPluginSetupSerially:false NetworkPluginConfTemplate: IPPreference:} Registry:{ConfigPath: Mirrors:map[] Configs:map[] Auths:map[] Headers:map[]} ImageDecryption:{KeyModel:node} DisableTCPService:true StreamServerAddress:127.0.0.1 StreamServerPort:0 StreamIdleTimeout:4h0m0s EnableSelinux:false SelinuxCategoryRange:1024 SandboxImage:registry.k8s.io/pause:3.8 StatsCollectPeriod:10 SystemdCgroup:false EnableTLSStreaming:false X509KeyPairStreaming:{TLSCertFile: TLSKeyFile:} MaxContainerLogLineSize:16384 DisableCgroup:false DisableApparmor:false RestrictOOMScoreAdj:false MaxConcurrentDownloads:3 
DisableProcMount:false UnsetSeccompProfile: TolerateMissingHugetlbController:true DisableHugetlbController:true DeviceOwnershipFromSecurityContext:false IgnoreImageDefinedVolumes:false NetNSMountsUnderStateDir:false EnableUnprivilegedPorts:false EnableUnprivilegedICMP:false EnableCDI:false CDISpecDirs:[/etc/cdi /var/run/cdi] ImagePullProgressTimeout:5m0s DrainExecSyncIOTimeout:0s ImagePullWithSyncFs:false IgnoreDeprecationWarnings:[]} ContainerdRootDir:/var/lib/containerd ContainerdEndpoint:/run/containerd/containerd.sock RootDir:/var/lib/containerd/io.containerd.grpc.v1.cri StateDir:/run/containerd/io.containerd.grpc.v1.cri}" Jan 29 14:43:17.011644 containerd[1624]: time="2025-01-29T14:43:17.011649938Z" level=info msg="Connect containerd service" Jan 29 14:43:17.011832 containerd[1624]: time="2025-01-29T14:43:17.011697232Z" level=info msg="using legacy CRI server" Jan 29 14:43:17.011832 containerd[1624]: time="2025-01-29T14:43:17.011706237Z" level=info msg="using experimental NRI integration - disable nri plugin to prevent this" Jan 29 14:43:17.011891 containerd[1624]: time="2025-01-29T14:43:17.011832347Z" level=info msg="Get image filesystem path \"/var/lib/containerd/io.containerd.snapshotter.v1.overlayfs\"" Jan 29 14:43:17.012642 containerd[1624]: time="2025-01-29T14:43:17.012612310Z" level=error msg="failed to load cni during init, please check CRI plugin status before setting up network for pods" error="cni config load failed: no network config found in /etc/cni/net.d: cni plugin not initialized: failed to load cni config" Jan 29 14:43:17.013132 systemd[1]: Finished systemd-user-sessions.service - Permit User Sessions. 
Jan 29 14:43:17.017285 containerd[1624]: time="2025-01-29T14:43:17.013357919Z" level=info msg="Start subscribing containerd event" Jan 29 14:43:17.017285 containerd[1624]: time="2025-01-29T14:43:17.015049194Z" level=info msg="Start recovering state" Jan 29 14:43:17.017285 containerd[1624]: time="2025-01-29T14:43:17.015152989Z" level=info msg="Start event monitor" Jan 29 14:43:17.017285 containerd[1624]: time="2025-01-29T14:43:17.015183377Z" level=info msg="Start snapshots syncer" Jan 29 14:43:17.017285 containerd[1624]: time="2025-01-29T14:43:17.015193790Z" level=info msg="Start cni network conf syncer for default" Jan 29 14:43:17.017285 containerd[1624]: time="2025-01-29T14:43:17.015202473Z" level=info msg="Start streaming server" Jan 29 14:43:17.019217 containerd[1624]: time="2025-01-29T14:43:17.019124878Z" level=info msg=serving... address=/run/containerd/containerd.sock.ttrpc Jan 29 14:43:17.019217 containerd[1624]: time="2025-01-29T14:43:17.019177675Z" level=info msg=serving... address=/run/containerd/containerd.sock Jan 29 14:43:17.019333 containerd[1624]: time="2025-01-29T14:43:17.019230572Z" level=info msg="containerd successfully booted in 0.078020s" Jan 29 14:43:17.024775 systemd[1]: Started getty@tty1.service - Getty on tty1. Jan 29 14:43:17.033730 systemd[1]: Started serial-getty@ttyS0.service - Serial Getty on ttyS0. Jan 29 14:43:17.035479 systemd[1]: Reached target getty.target - Login Prompts. Jan 29 14:43:17.036667 systemd[1]: Started containerd.service - containerd container runtime. Jan 29 14:43:17.039506 systemd-hostnamed[1633]: Hostname set to (static) Jan 29 14:43:17.651536 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent. 
Jan 29 14:43:17.656611 (kubelet)[1709]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS, KUBELET_KUBEADM_ARGS Jan 29 14:43:18.300159 kubelet[1709]: E0129 14:43:18.300050 1709 run.go:74] "command failed" err="failed to load kubelet config file, path: /var/lib/kubelet/config.yaml, error: failed to load Kubelet config file /var/lib/kubelet/config.yaml, error failed to read kubelet config file \"/var/lib/kubelet/config.yaml\", error: open /var/lib/kubelet/config.yaml: no such file or directory" Jan 29 14:43:18.303362 systemd[1]: kubelet.service: Main process exited, code=exited, status=1/FAILURE Jan 29 14:43:18.303678 systemd[1]: kubelet.service: Failed with result 'exit-code'. Jan 29 14:43:22.108856 login[1696]: pam_unix(login:session): session opened for user core(uid=500) by LOGIN(uid=0) Jan 29 14:43:22.109653 login[1697]: pam_unix(login:session): session opened for user core(uid=500) by LOGIN(uid=0) Jan 29 14:43:22.126349 systemd-logind[1593]: New session 1 of user core. Jan 29 14:43:22.128110 systemd[1]: Created slice user-500.slice - User Slice of UID 500. Jan 29 14:43:22.134103 systemd[1]: Starting user-runtime-dir@500.service - User Runtime Directory /run/user/500... Jan 29 14:43:22.139515 systemd-logind[1593]: New session 2 of user core. Jan 29 14:43:22.156963 systemd[1]: Finished user-runtime-dir@500.service - User Runtime Directory /run/user/500. Jan 29 14:43:22.165543 systemd[1]: Starting user@500.service - User Manager for UID 500... Jan 29 14:43:22.170564 (systemd)[1729]: pam_unix(systemd-user:session): session opened for user core(uid=500) by (uid=0) Jan 29 14:43:22.284369 systemd[1729]: Queued start job for default target default.target. Jan 29 14:43:22.284713 systemd[1729]: Created slice app.slice - User Application Slice. Jan 29 14:43:22.284730 systemd[1729]: Reached target paths.target - Paths. Jan 29 14:43:22.284741 systemd[1729]: Reached target timers.target - Timers. 
Jan 29 14:43:22.290581 systemd[1729]: Starting dbus.socket - D-Bus User Message Bus Socket... Jan 29 14:43:22.302211 systemd[1729]: Listening on dbus.socket - D-Bus User Message Bus Socket. Jan 29 14:43:22.303724 systemd[1729]: Reached target sockets.target - Sockets. Jan 29 14:43:22.303850 systemd[1729]: Reached target basic.target - Basic System. Jan 29 14:43:22.304050 systemd[1729]: Reached target default.target - Main User Target. Jan 29 14:43:22.304219 systemd[1729]: Startup finished in 127ms. Jan 29 14:43:22.305183 systemd[1]: Started user@500.service - User Manager for UID 500. Jan 29 14:43:22.324773 systemd[1]: Started session-1.scope - Session 1 of User core. Jan 29 14:43:22.326106 systemd[1]: Started session-2.scope - Session 2 of User core. Jan 29 14:43:23.483611 coreos-metadata[1575]: Jan 29 14:43:23.483 WARN failed to locate config-drive, using the metadata service API instead Jan 29 14:43:23.511072 coreos-metadata[1575]: Jan 29 14:43:23.510 INFO Fetching http://169.254.169.254/openstack/2012-08-10/meta_data.json: Attempt #1 Jan 29 14:43:23.518429 coreos-metadata[1575]: Jan 29 14:43:23.518 INFO Fetch failed with 404: resource not found Jan 29 14:43:23.518429 coreos-metadata[1575]: Jan 29 14:43:23.518 INFO Fetching http://169.254.169.254/latest/meta-data/hostname: Attempt #1 Jan 29 14:43:23.518923 coreos-metadata[1575]: Jan 29 14:43:23.518 INFO Fetch successful Jan 29 14:43:23.518957 coreos-metadata[1575]: Jan 29 14:43:23.518 INFO Fetching http://169.254.169.254/latest/meta-data/instance-id: Attempt #1 Jan 29 14:43:23.531249 coreos-metadata[1575]: Jan 29 14:43:23.531 INFO Fetch successful Jan 29 14:43:23.531249 coreos-metadata[1575]: Jan 29 14:43:23.531 INFO Fetching http://169.254.169.254/latest/meta-data/instance-type: Attempt #1 Jan 29 14:43:23.551756 coreos-metadata[1575]: Jan 29 14:43:23.551 INFO Fetch successful Jan 29 14:43:23.551756 coreos-metadata[1575]: Jan 29 14:43:23.551 INFO Fetching http://169.254.169.254/latest/meta-data/local-ipv4: 
Attempt #1 Jan 29 14:43:23.568273 coreos-metadata[1575]: Jan 29 14:43:23.568 INFO Fetch successful Jan 29 14:43:23.568273 coreos-metadata[1575]: Jan 29 14:43:23.568 INFO Fetching http://169.254.169.254/latest/meta-data/public-ipv4: Attempt #1 Jan 29 14:43:23.588277 coreos-metadata[1575]: Jan 29 14:43:23.588 INFO Fetch successful Jan 29 14:43:23.617741 systemd[1]: Finished coreos-metadata.service - Flatcar Metadata Agent. Jan 29 14:43:23.619539 systemd[1]: packet-phone-home.service - Report Success to Packet was skipped because no trigger condition checks were met. Jan 29 14:43:23.941876 coreos-metadata[1652]: Jan 29 14:43:23.941 WARN failed to locate config-drive, using the metadata service API instead Jan 29 14:43:23.966010 coreos-metadata[1652]: Jan 29 14:43:23.965 INFO Fetching http://169.254.169.254/latest/meta-data/public-keys: Attempt #1 Jan 29 14:43:23.987103 coreos-metadata[1652]: Jan 29 14:43:23.986 INFO Fetch successful Jan 29 14:43:23.987473 coreos-metadata[1652]: Jan 29 14:43:23.987 INFO Fetching http://169.254.169.254/latest/meta-data/public-keys/0/openssh-key: Attempt #1 Jan 29 14:43:24.015948 coreos-metadata[1652]: Jan 29 14:43:24.015 INFO Fetch successful Jan 29 14:43:24.018223 unknown[1652]: wrote ssh authorized keys file for user: core Jan 29 14:43:24.040329 update-ssh-keys[1776]: Updated "/home/core/.ssh/authorized_keys" Jan 29 14:43:24.043469 systemd[1]: Finished coreos-metadata-sshkeys@core.service - Flatcar Metadata Agent (SSH Keys). Jan 29 14:43:24.047868 systemd[1]: Finished sshkeys.service. Jan 29 14:43:24.052511 systemd[1]: Reached target multi-user.target - Multi-User System. Jan 29 14:43:24.052929 systemd[1]: Startup finished in 14.183s (kernel) + 11.302s (userspace) = 25.485s. Jan 29 14:43:26.763077 systemd[1]: Created slice system-sshd.slice - Slice /system/sshd. Jan 29 14:43:26.769755 systemd[1]: Started sshd@0-10.244.90.186:22-139.178.68.195:37604.service - OpenSSH per-connection server daemon (139.178.68.195:37604). 
Jan 29 14:43:27.678695 sshd[1782]: Accepted publickey for core from 139.178.68.195 port 37604 ssh2: RSA SHA256:0vZJraS5L9jVCttGjAqyyzs9a0MPbdpNAxJdtCuEsy8 Jan 29 14:43:27.682678 sshd[1782]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Jan 29 14:43:27.692015 systemd-logind[1593]: New session 3 of user core. Jan 29 14:43:27.702538 systemd[1]: Started session-3.scope - Session 3 of User core. Jan 29 14:43:28.444074 systemd[1]: kubelet.service: Scheduled restart job, restart counter is at 1. Jan 29 14:43:28.452630 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent... Jan 29 14:43:28.467482 systemd[1]: Started sshd@1-10.244.90.186:22-139.178.68.195:37608.service - OpenSSH per-connection server daemon (139.178.68.195:37608). Jan 29 14:43:28.623421 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent. Jan 29 14:43:28.628063 (kubelet)[1801]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS, KUBELET_KUBEADM_ARGS Jan 29 14:43:28.684515 kubelet[1801]: E0129 14:43:28.684432 1801 run.go:74] "command failed" err="failed to load kubelet config file, path: /var/lib/kubelet/config.yaml, error: failed to load Kubelet config file /var/lib/kubelet/config.yaml, error failed to read kubelet config file \"/var/lib/kubelet/config.yaml\", error: open /var/lib/kubelet/config.yaml: no such file or directory" Jan 29 14:43:28.690439 systemd[1]: kubelet.service: Main process exited, code=exited, status=1/FAILURE Jan 29 14:43:28.690623 systemd[1]: kubelet.service: Failed with result 'exit-code'. Jan 29 14:43:29.365608 sshd[1788]: Accepted publickey for core from 139.178.68.195 port 37608 ssh2: RSA SHA256:0vZJraS5L9jVCttGjAqyyzs9a0MPbdpNAxJdtCuEsy8 Jan 29 14:43:29.369276 sshd[1788]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Jan 29 14:43:29.380041 systemd-logind[1593]: New session 4 of user core. 
Jan 29 14:43:29.387567 systemd[1]: Started session-4.scope - Session 4 of User core. Jan 29 14:43:29.996777 sshd[1788]: pam_unix(sshd:session): session closed for user core Jan 29 14:43:30.005213 systemd[1]: sshd@1-10.244.90.186:22-139.178.68.195:37608.service: Deactivated successfully. Jan 29 14:43:30.005321 systemd-logind[1593]: Session 4 logged out. Waiting for processes to exit. Jan 29 14:43:30.012023 systemd[1]: session-4.scope: Deactivated successfully. Jan 29 14:43:30.013356 systemd-logind[1593]: Removed session 4. Jan 29 14:43:30.152812 systemd[1]: Started sshd@2-10.244.90.186:22-139.178.68.195:37620.service - OpenSSH per-connection server daemon (139.178.68.195:37620). Jan 29 14:43:31.053668 sshd[1816]: Accepted publickey for core from 139.178.68.195 port 37620 ssh2: RSA SHA256:0vZJraS5L9jVCttGjAqyyzs9a0MPbdpNAxJdtCuEsy8 Jan 29 14:43:31.057603 sshd[1816]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Jan 29 14:43:31.068285 systemd-logind[1593]: New session 5 of user core. Jan 29 14:43:31.081743 systemd[1]: Started session-5.scope - Session 5 of User core. Jan 29 14:43:31.676918 sshd[1816]: pam_unix(sshd:session): session closed for user core Jan 29 14:43:31.684038 systemd[1]: sshd@2-10.244.90.186:22-139.178.68.195:37620.service: Deactivated successfully. Jan 29 14:43:31.692567 systemd[1]: session-5.scope: Deactivated successfully. Jan 29 14:43:31.694566 systemd-logind[1593]: Session 5 logged out. Waiting for processes to exit. Jan 29 14:43:31.696031 systemd-logind[1593]: Removed session 5. Jan 29 14:43:31.834825 systemd[1]: Started sshd@3-10.244.90.186:22-139.178.68.195:37624.service - OpenSSH per-connection server daemon (139.178.68.195:37624). 
Jan 29 14:43:32.726298 sshd[1824]: Accepted publickey for core from 139.178.68.195 port 37624 ssh2: RSA SHA256:0vZJraS5L9jVCttGjAqyyzs9a0MPbdpNAxJdtCuEsy8 Jan 29 14:43:32.730281 sshd[1824]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Jan 29 14:43:32.740270 systemd-logind[1593]: New session 6 of user core. Jan 29 14:43:32.750598 systemd[1]: Started session-6.scope - Session 6 of User core. Jan 29 14:43:33.356795 sshd[1824]: pam_unix(sshd:session): session closed for user core Jan 29 14:43:33.364845 systemd[1]: sshd@3-10.244.90.186:22-139.178.68.195:37624.service: Deactivated successfully. Jan 29 14:43:33.369958 systemd-logind[1593]: Session 6 logged out. Waiting for processes to exit. Jan 29 14:43:33.370520 systemd[1]: session-6.scope: Deactivated successfully. Jan 29 14:43:33.372810 systemd-logind[1593]: Removed session 6. Jan 29 14:43:33.510799 systemd[1]: Started sshd@4-10.244.90.186:22-139.178.68.195:37632.service - OpenSSH per-connection server daemon (139.178.68.195:37632). Jan 29 14:43:34.414473 sshd[1832]: Accepted publickey for core from 139.178.68.195 port 37632 ssh2: RSA SHA256:0vZJraS5L9jVCttGjAqyyzs9a0MPbdpNAxJdtCuEsy8 Jan 29 14:43:34.418131 sshd[1832]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Jan 29 14:43:34.427639 systemd-logind[1593]: New session 7 of user core. Jan 29 14:43:34.433831 systemd[1]: Started session-7.scope - Session 7 of User core. Jan 29 14:43:34.909398 sudo[1836]: core : PWD=/home/core ; USER=root ; COMMAND=/usr/sbin/setenforce 1 Jan 29 14:43:34.909732 sudo[1836]: pam_unix(sudo:session): session opened for user root(uid=0) by core(uid=500) Jan 29 14:43:34.927004 sudo[1836]: pam_unix(sudo:session): session closed for user root Jan 29 14:43:35.073035 sshd[1832]: pam_unix(sshd:session): session closed for user core Jan 29 14:43:35.080902 systemd-logind[1593]: Session 7 logged out. Waiting for processes to exit. 
Jan 29 14:43:35.082737 systemd[1]: sshd@4-10.244.90.186:22-139.178.68.195:37632.service: Deactivated successfully. Jan 29 14:43:35.088163 systemd[1]: session-7.scope: Deactivated successfully. Jan 29 14:43:35.090182 systemd-logind[1593]: Removed session 7. Jan 29 14:43:35.226538 systemd[1]: Started sshd@5-10.244.90.186:22-139.178.68.195:39268.service - OpenSSH per-connection server daemon (139.178.68.195:39268). Jan 29 14:43:36.140744 sshd[1841]: Accepted publickey for core from 139.178.68.195 port 39268 ssh2: RSA SHA256:0vZJraS5L9jVCttGjAqyyzs9a0MPbdpNAxJdtCuEsy8 Jan 29 14:43:36.144453 sshd[1841]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Jan 29 14:43:36.154300 systemd-logind[1593]: New session 8 of user core. Jan 29 14:43:36.161588 systemd[1]: Started session-8.scope - Session 8 of User core. Jan 29 14:43:36.625267 sudo[1846]: core : PWD=/home/core ; USER=root ; COMMAND=/usr/bin/rm -rf /etc/audit/rules.d/80-selinux.rules /etc/audit/rules.d/99-default.rules Jan 29 14:43:36.625874 sudo[1846]: pam_unix(sudo:session): session opened for user root(uid=0) by core(uid=500) Jan 29 14:43:36.631807 sudo[1846]: pam_unix(sudo:session): session closed for user root Jan 29 14:43:36.640722 sudo[1845]: core : PWD=/home/core ; USER=root ; COMMAND=/usr/bin/systemctl restart audit-rules Jan 29 14:43:36.641068 sudo[1845]: pam_unix(sudo:session): session opened for user root(uid=0) by core(uid=500) Jan 29 14:43:36.661593 systemd[1]: Stopping audit-rules.service - Load Security Auditing Rules... Jan 29 14:43:36.664091 auditctl[1849]: No rules Jan 29 14:43:36.664791 systemd[1]: audit-rules.service: Deactivated successfully. Jan 29 14:43:36.665117 systemd[1]: Stopped audit-rules.service - Load Security Auditing Rules. Jan 29 14:43:36.669770 systemd[1]: Starting audit-rules.service - Load Security Auditing Rules... 
Jan 29 14:43:36.717872 augenrules[1868]: No rules
Jan 29 14:43:36.719097 systemd[1]: Finished audit-rules.service - Load Security Auditing Rules.
Jan 29 14:43:36.720544 sudo[1845]: pam_unix(sudo:session): session closed for user root
Jan 29 14:43:36.864854 sshd[1841]: pam_unix(sshd:session): session closed for user core
Jan 29 14:43:36.871871 systemd-logind[1593]: Session 8 logged out. Waiting for processes to exit.
Jan 29 14:43:36.872847 systemd[1]: sshd@5-10.244.90.186:22-139.178.68.195:39268.service: Deactivated successfully.
Jan 29 14:43:36.878119 systemd[1]: session-8.scope: Deactivated successfully.
Jan 29 14:43:36.879863 systemd-logind[1593]: Removed session 8.
Jan 29 14:43:37.020848 systemd[1]: Started sshd@6-10.244.90.186:22-139.178.68.195:39280.service - OpenSSH per-connection server daemon (139.178.68.195:39280).
Jan 29 14:43:37.939291 sshd[1877]: Accepted publickey for core from 139.178.68.195 port 39280 ssh2: RSA SHA256:0vZJraS5L9jVCttGjAqyyzs9a0MPbdpNAxJdtCuEsy8
Jan 29 14:43:37.942175 sshd[1877]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Jan 29 14:43:37.949565 systemd-logind[1593]: New session 9 of user core.
Jan 29 14:43:37.956642 systemd[1]: Started session-9.scope - Session 9 of User core.
Jan 29 14:43:38.426788 sudo[1881]: core : PWD=/home/core ; USER=root ; COMMAND=/home/core/install.sh
Jan 29 14:43:38.427150 sudo[1881]: pam_unix(sudo:session): session opened for user root(uid=0) by core(uid=500)
Jan 29 14:43:38.779348 systemd[1]: kubelet.service: Scheduled restart job, restart counter is at 2.
Jan 29 14:43:38.790596 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent...
Jan 29 14:43:38.983877 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent.
Jan 29 14:43:38.993393 (kubelet)[1912]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS, KUBELET_KUBEADM_ARGS
Jan 29 14:43:39.064824 kubelet[1912]: E0129 14:43:39.064672 1912 run.go:74] "command failed" err="failed to load kubelet config file, path: /var/lib/kubelet/config.yaml, error: failed to load Kubelet config file /var/lib/kubelet/config.yaml, error failed to read kubelet config file \"/var/lib/kubelet/config.yaml\", error: open /var/lib/kubelet/config.yaml: no such file or directory"
Jan 29 14:43:39.067443 systemd[1]: kubelet.service: Main process exited, code=exited, status=1/FAILURE
Jan 29 14:43:39.067666 systemd[1]: kubelet.service: Failed with result 'exit-code'.
Jan 29 14:43:39.244438 systemd[1]: Stopped kubelet.service - kubelet: The Kubernetes Node Agent.
Jan 29 14:43:39.254470 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent...
Jan 29 14:43:39.280368 systemd[1]: Reloading requested from client PID 1941 ('systemctl') (unit session-9.scope)...
Jan 29 14:43:39.280507 systemd[1]: Reloading...
Jan 29 14:43:39.398260 zram_generator::config[1981]: No configuration found.
Jan 29 14:43:39.544712 systemd[1]: /usr/lib/systemd/system/docker.socket:6: ListenStream= references a path below legacy directory /var/run/, updating /var/run/docker.sock → /run/docker.sock; please update the unit file accordingly.
Jan 29 14:43:39.613436 systemd[1]: Reloading finished in 332 ms.
Jan 29 14:43:39.672385 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent.
Jan 29 14:43:39.677063 systemd[1]: Stopping kubelet.service - kubelet: The Kubernetes Node Agent...
Jan 29 14:43:39.679919 systemd[1]: kubelet.service: Deactivated successfully.
Jan 29 14:43:39.680200 systemd[1]: Stopped kubelet.service - kubelet: The Kubernetes Node Agent.
Jan 29 14:43:39.687678 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent...
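The kubelet failure logged above is a fail-fast check: at startup it opens the file passed via --config and exits with status 1 if the read fails. A minimal sketch of that check, assuming only what the log shows (the helper name is illustrative, not kubelet code; the default path is the one named in the error):

```shell
#!/bin/sh
# Sketch: reproduce the fail-fast config check behind the kubelet error above.
# check_kubelet_config is a hypothetical helper; the default path is the one
# the log names. Prints the same class of error and returns nonzero, as the
# kubelet does before systemd records status=1/FAILURE.
check_kubelet_config() {
    config="${1:-/var/lib/kubelet/config.yaml}"
    if [ ! -f "$config" ]; then
        echo "failed to load Kubelet config file $config: no such file or directory" >&2
        return 1
    fi
    echo "kubelet config found: $config"
}
```

In the transcript the file appears once /home/core/install.sh (run via sudo in session 9) has provisioned the node, which is why the later kubelet start at 14:43:39.817901 proceeds past this point.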
Jan 29 14:43:39.817901 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent.
Jan 29 14:43:39.826619 (kubelet)[2062]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS
Jan 29 14:43:39.876470 kubelet[2062]: Flag --container-runtime-endpoint has been deprecated, This parameter should be set via the config file specified by the Kubelet's --config flag. See https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/ for more information.
Jan 29 14:43:39.876470 kubelet[2062]: Flag --pod-infra-container-image has been deprecated, will be removed in a future release. Image garbage collector will get sandbox image information from CRI.
Jan 29 14:43:39.876470 kubelet[2062]: Flag --volume-plugin-dir has been deprecated, This parameter should be set via the config file specified by the Kubelet's --config flag. See https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/ for more information.
Jan 29 14:43:39.877089 kubelet[2062]: I0129 14:43:39.876529 2062 server.go:205] "--pod-infra-container-image will not be pruned by the image garbage collector in kubelet and should also be set in the remote runtime"
Jan 29 14:43:40.357868 kubelet[2062]: I0129 14:43:40.357804 2062 server.go:484] "Kubelet version" kubeletVersion="v1.30.1"
Jan 29 14:43:40.357868 kubelet[2062]: I0129 14:43:40.357849 2062 server.go:486] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK=""
Jan 29 14:43:40.358190 kubelet[2062]: I0129 14:43:40.358169 2062 server.go:927] "Client rotation is on, will bootstrap in background"
Jan 29 14:43:40.375463 kubelet[2062]: I0129 14:43:40.374372 2062 dynamic_cafile_content.go:157] "Starting controller" name="client-ca-bundle::/etc/kubernetes/pki/ca.crt"
Jan 29 14:43:40.396853 kubelet[2062]: I0129 14:43:40.396821 2062 server.go:742] "--cgroups-per-qos enabled, but --cgroup-root was not specified. defaulting to /"
Jan 29 14:43:40.400018 kubelet[2062]: I0129 14:43:40.399972 2062 container_manager_linux.go:265] "Container manager verified user specified cgroup-root exists" cgroupRoot=[]
Jan 29 14:43:40.400416 kubelet[2062]: I0129 14:43:40.400122 2062 container_manager_linux.go:270] "Creating Container Manager object based on Node Config" nodeConfig={"NodeName":"10.244.90.186","RuntimeCgroupsName":"","SystemCgroupsName":"","KubeletCgroupsName":"","KubeletOOMScoreAdj":-999,"ContainerRuntime":"","CgroupsPerQOS":true,"CgroupRoot":"/","CgroupDriver":"cgroupfs","KubeletRootDir":"/var/lib/kubelet","ProtectKernelDefaults":false,"KubeReservedCgroupName":"","SystemReservedCgroupName":"","ReservedSystemCPUs":{},"EnforceNodeAllocatable":{"pods":{}},"KubeReserved":null,"SystemReserved":null,"HardEvictionThresholds":[{"Signal":"imagefs.available","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.15},"GracePeriod":0,"MinReclaim":null},{"Signal":"imagefs.inodesFree","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.05},"GracePeriod":0,"MinReclaim":null},{"Signal":"memory.available","Operator":"LessThan","Value":{"Quantity":"100Mi","Percentage":0},"GracePeriod":0,"MinReclaim":null},{"Signal":"nodefs.available","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.1},"GracePeriod":0,"MinReclaim":null},{"Signal":"nodefs.inodesFree","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.05},"GracePeriod":0,"MinReclaim":null}],"QOSReserved":{},"CPUManagerPolicy":"none","CPUManagerPolicyOptions":null,"TopologyManagerScope":"container","CPUManagerReconcilePeriod":10000000000,"ExperimentalMemoryManagerPolicy":"None","ExperimentalMemoryManagerReservedMemory":null,"PodPidsLimit":-1,"EnforceCPULimits":true,"CPUCFSQuotaPeriod":100000000,"TopologyManagerPolicy":"none","TopologyManagerPolicyOptions":null}
Jan 29 14:43:40.402031 kubelet[2062]: I0129 14:43:40.401714 2062 topology_manager.go:138] "Creating topology manager with none policy"
Jan 29 14:43:40.402031 kubelet[2062]: I0129 14:43:40.401736 2062 container_manager_linux.go:301] "Creating device plugin manager"
Jan 29 14:43:40.402031 kubelet[2062]: I0129 14:43:40.401906 2062 state_mem.go:36] "Initialized new in-memory state store"
Jan 29 14:43:40.402753 kubelet[2062]: I0129 14:43:40.402738 2062 kubelet.go:400] "Attempting to sync node with API server"
Jan 29 14:43:40.402820 kubelet[2062]: I0129 14:43:40.402812 2062 kubelet.go:301] "Adding static pod path" path="/etc/kubernetes/manifests"
Jan 29 14:43:40.402899 kubelet[2062]: I0129 14:43:40.402891 2062 kubelet.go:312] "Adding apiserver pod source"
Jan 29 14:43:40.402981 kubelet[2062]: I0129 14:43:40.402974 2062 apiserver.go:42] "Waiting for node sync before watching apiserver pods"
Jan 29 14:43:40.404523 kubelet[2062]: E0129 14:43:40.404017 2062 file.go:98] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests"
Jan 29 14:43:40.404523 kubelet[2062]: E0129 14:43:40.404078 2062 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests"
Jan 29 14:43:40.407937 kubelet[2062]: I0129 14:43:40.407847 2062 kuberuntime_manager.go:261] "Container runtime initialized" containerRuntime="containerd" version="v1.7.21" apiVersion="v1"
Jan 29 14:43:40.409352 kubelet[2062]: I0129 14:43:40.409326 2062 kubelet.go:815] "Not starting ClusterTrustBundle informer because we are in static kubelet mode"
Jan 29 14:43:40.409460 kubelet[2062]: W0129 14:43:40.409447 2062 probe.go:272] Flexvolume plugin directory at /opt/libexec/kubernetes/kubelet-plugins/volume/exec/ does not exist. Recreating.
Jan 29 14:43:40.410176 kubelet[2062]: I0129 14:43:40.410160 2062 server.go:1264] "Started kubelet"
Jan 29 14:43:40.412148 kubelet[2062]: I0129 14:43:40.411629 2062 server.go:163] "Starting to listen" address="0.0.0.0" port=10250
Jan 29 14:43:40.413272 kubelet[2062]: I0129 14:43:40.412718 2062 server.go:455] "Adding debug handlers to kubelet server"
Jan 29 14:43:40.416903 kubelet[2062]: I0129 14:43:40.415591 2062 ratelimit.go:55] "Setting rate limiting for endpoint" service="podresources" qps=100 burstTokens=10
Jan 29 14:43:40.416903 kubelet[2062]: I0129 14:43:40.415810 2062 server.go:227] "Starting to serve the podresources API" endpoint="unix:/var/lib/kubelet/pod-resources/kubelet.sock"
Jan 29 14:43:40.416903 kubelet[2062]: I0129 14:43:40.415845 2062 fs_resource_analyzer.go:67] "Starting FS ResourceAnalyzer"
Jan 29 14:43:40.421493 kubelet[2062]: W0129 14:43:40.421471 2062 reflector.go:547] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Service: services is forbidden: User "system:anonymous" cannot list resource "services" in API group "" at the cluster scope
Jan 29 14:43:40.421618 kubelet[2062]: E0129 14:43:40.421608 2062 reflector.go:150] k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Service: failed to list *v1.Service: services is forbidden: User "system:anonymous" cannot list resource "services" in API group "" at the cluster scope
Jan 29 14:43:40.421813 kubelet[2062]: W0129 14:43:40.421800 2062 reflector.go:547] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Node: nodes "10.244.90.186" is forbidden: User "system:anonymous" cannot list resource "nodes" in API group "" at the cluster scope
Jan 29 14:43:40.421880 kubelet[2062]: E0129 14:43:40.421874 2062 reflector.go:150] k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Node: failed to list *v1.Node: nodes "10.244.90.186" is forbidden: User "system:anonymous" cannot list resource "nodes" in API group "" at the cluster scope
Jan 29 14:43:40.422400 kubelet[2062]: I0129 14:43:40.422378 2062 volume_manager.go:291] "Starting Kubelet Volume Manager"
Jan 29 14:43:40.425353 kubelet[2062]: I0129 14:43:40.425338 2062 desired_state_of_world_populator.go:149] "Desired state populator starts to run"
Jan 29 14:43:40.425725 kubelet[2062]: I0129 14:43:40.425711 2062 reconciler.go:26] "Reconciler: start to sync state"
Jan 29 14:43:40.431083 kubelet[2062]: I0129 14:43:40.431057 2062 factory.go:221] Registration of the systemd container factory successfully
Jan 29 14:43:40.431181 kubelet[2062]: I0129 14:43:40.431162 2062 factory.go:219] Registration of the crio container factory failed: Get "http://%2Fvar%2Frun%2Fcrio%2Fcrio.sock/info": dial unix /var/run/crio/crio.sock: connect: no such file or directory
Jan 29 14:43:40.431382 kubelet[2062]: W0129 14:43:40.431364 2062 reflector.go:547] k8s.io/client-go/informers/factory.go:160: failed to list *v1.CSIDriver: csidrivers.storage.k8s.io is forbidden: User "system:anonymous" cannot list resource "csidrivers" in API group "storage.k8s.io" at the cluster scope
Jan 29 14:43:40.431455 kubelet[2062]: E0129 14:43:40.431447 2062 reflector.go:150] k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.CSIDriver: failed to list *v1.CSIDriver: csidrivers.storage.k8s.io is forbidden: User "system:anonymous" cannot list resource "csidrivers" in API group "storage.k8s.io" at the cluster scope
Jan 29 14:43:40.431737 kubelet[2062]: E0129 14:43:40.431714 2062 controller.go:145] "Failed to ensure lease exists, will retry" err="leases.coordination.k8s.io \"10.244.90.186\" is forbidden: User \"system:anonymous\" cannot get resource \"leases\" in API group \"coordination.k8s.io\" in the namespace \"kube-node-lease\"" interval="200ms"
Jan 29 14:43:40.431973 kubelet[2062]: E0129 14:43:40.431860 2062 event.go:359] "Server rejected event (will not retry!)" err="events is forbidden: User \"system:anonymous\" cannot create resource \"events\" in API group \"\" in the namespace \"default\"" event="&Event{ObjectMeta:{10.244.90.186.181f30f76bb0f7c9 default 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Node,Namespace:,Name:10.244.90.186,UID:10.244.90.186,APIVersion:,ResourceVersion:,FieldPath:,},Reason:Starting,Message:Starting kubelet.,Source:EventSource{Component:kubelet,Host:10.244.90.186,},FirstTimestamp:2025-01-29 14:43:40.410116041 +0000 UTC m=+0.579412851,LastTimestamp:2025-01-29 14:43:40.410116041 +0000 UTC m=+0.579412851,Count:1,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:10.244.90.186,}"
Jan 29 14:43:40.433131 kubelet[2062]: E0129 14:43:40.433116 2062 kubelet.go:1467] "Image garbage collection failed once. Stats initialization may not have completed yet" err="invalid capacity 0 on image filesystem"
Jan 29 14:43:40.434528 kubelet[2062]: I0129 14:43:40.434511 2062 factory.go:221] Registration of the containerd container factory successfully
Jan 29 14:43:40.434925 kubelet[2062]: E0129 14:43:40.434844 2062 event.go:359] "Server rejected event (will not retry!)" err="events is forbidden: User \"system:anonymous\" cannot create resource \"events\" in API group \"\" in the namespace \"default\"" event="&Event{ObjectMeta:{10.244.90.186.181f30f76d0fb993 default 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Node,Namespace:,Name:10.244.90.186,UID:10.244.90.186,APIVersion:,ResourceVersion:,FieldPath:,},Reason:InvalidDiskCapacity,Message:invalid capacity 0 on image filesystem,Source:EventSource{Component:kubelet,Host:10.244.90.186,},FirstTimestamp:2025-01-29 14:43:40.433103251 +0000 UTC m=+0.602400082,LastTimestamp:2025-01-29 14:43:40.433103251 +0000 UTC m=+0.602400082,Count:1,Type:Warning,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:10.244.90.186,}"
Jan 29 14:43:40.459384 kubelet[2062]: I0129 14:43:40.459357 2062 cpu_manager.go:214] "Starting CPU manager" policy="none"
Jan 29 14:43:40.459804 kubelet[2062]: I0129 14:43:40.459790 2062 cpu_manager.go:215] "Reconciling" reconcilePeriod="10s"
Jan 29 14:43:40.459873 kubelet[2062]: I0129 14:43:40.459866 2062 state_mem.go:36] "Initialized new in-memory state store"
Jan 29 14:43:40.465752 kubelet[2062]: I0129 14:43:40.465733 2062 policy_none.go:49] "None policy: Start"
Jan 29 14:43:40.468749 kubelet[2062]: I0129 14:43:40.468669 2062 memory_manager.go:170] "Starting memorymanager" policy="None"
Jan 29 14:43:40.468749 kubelet[2062]: I0129 14:43:40.468770 2062 state_mem.go:35] "Initializing new in-memory state store"
Jan 29 14:43:40.483738 kubelet[2062]: I0129 14:43:40.482054 2062 manager.go:479] "Failed to read data from checkpoint" checkpoint="kubelet_internal_checkpoint" err="checkpoint is not found"
Jan 29 14:43:40.483738 kubelet[2062]: I0129 14:43:40.482255 2062 container_log_manager.go:186] "Initializing container log rotate workers" workers=1 monitorPeriod="10s"
Jan 29 14:43:40.483738 kubelet[2062]: I0129 14:43:40.482395 2062 plugin_manager.go:118] "Starting Kubelet Plugin Manager"
Jan 29 14:43:40.486261 kubelet[2062]: E0129 14:43:40.486204 2062 eviction_manager.go:282] "Eviction manager: failed to get summary stats" err="failed to get node info: node \"10.244.90.186\" not found"
Jan 29 14:43:40.513122 kubelet[2062]: I0129 14:43:40.513020 2062 kubelet_network_linux.go:50] "Initialized iptables rules." protocol="IPv4"
Jan 29 14:43:40.514677 kubelet[2062]: I0129 14:43:40.514627 2062 kubelet_network_linux.go:50] "Initialized iptables rules." protocol="IPv6"
Jan 29 14:43:40.515668 kubelet[2062]: I0129 14:43:40.515293 2062 status_manager.go:217] "Starting to sync pod status with apiserver"
Jan 29 14:43:40.515668 kubelet[2062]: I0129 14:43:40.515330 2062 kubelet.go:2337] "Starting kubelet main sync loop"
Jan 29 14:43:40.515668 kubelet[2062]: E0129 14:43:40.515393 2062 kubelet.go:2361] "Skipping pod synchronization" err="PLEG is not healthy: pleg has yet to be successful"
Jan 29 14:43:40.523762 kubelet[2062]: I0129 14:43:40.523745 2062 kubelet_node_status.go:73] "Attempting to register node" node="10.244.90.186"
Jan 29 14:43:40.530029 kubelet[2062]: I0129 14:43:40.529992 2062 kubelet_node_status.go:76] "Successfully registered node" node="10.244.90.186"
Jan 29 14:43:40.539204 kubelet[2062]: E0129 14:43:40.539166 2062 kubelet_node_status.go:462] "Error getting the current node from lister" err="node \"10.244.90.186\" not found"
Jan 29 14:43:40.639979 kubelet[2062]: E0129 14:43:40.639737 2062 kubelet_node_status.go:462] "Error getting the current node from lister" err="node \"10.244.90.186\" not found"
Jan 29 14:43:40.740660 kubelet[2062]: E0129 14:43:40.740564 2062 kubelet_node_status.go:462] "Error getting the current node from lister" err="node \"10.244.90.186\" not found"
Jan 29 14:43:40.841501 kubelet[2062]: E0129 14:43:40.841393 2062 kubelet_node_status.go:462] "Error getting the current node from lister" err="node \"10.244.90.186\" not found"
Jan 29 14:43:40.856671 sudo[1881]: pam_unix(sudo:session): session closed for user root
Jan 29 14:43:40.942340 kubelet[2062]: E0129 14:43:40.942057 2062 kubelet_node_status.go:462] "Error getting the current node from lister" err="node \"10.244.90.186\" not found"
Jan 29 14:43:41.002405 sshd[1877]: pam_unix(sshd:session): session closed for user core
Jan 29 14:43:41.012007 systemd[1]: sshd@6-10.244.90.186:22-139.178.68.195:39280.service: Deactivated successfully.
Jan 29 14:43:41.019759 systemd[1]: session-9.scope: Deactivated successfully.
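The run of "Error getting the current node from lister" entries above is a retry loop: the kubelet registered the node but keeps polling its informer cache until the Node object propagates. A hedged sketch of an equivalent wait, assuming kubectl access to the same API server (the helper name, node name, and timeout are illustrative):

```shell
# Sketch: poll until a Node object is visible, as the kubelet's retry loop
# above effectively does. wait_for_node is a hypothetical helper, not a
# kubelet or kubectl command.
wait_for_node() {
    node="$1"
    tries="${2:-30}"
    i=0
    while [ "$i" -lt "$tries" ]; do
        if kubectl get node "$node" >/dev/null 2>&1; then
            return 0
        fi
        i=$((i + 1))
        sleep 1
    done
    return 1
}
```

In this log the loop resolves on its own once the watch cache catches up (the errors stop after 14:43:41.950724), so no operator action was needed.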
Jan 29 14:43:41.020759 systemd-logind[1593]: Session 9 logged out. Waiting for processes to exit.
Jan 29 14:43:41.021986 systemd-logind[1593]: Removed session 9.
Jan 29 14:43:41.042746 kubelet[2062]: E0129 14:43:41.042702 2062 kubelet_node_status.go:462] "Error getting the current node from lister" err="node \"10.244.90.186\" not found"
Jan 29 14:43:41.143963 kubelet[2062]: E0129 14:43:41.143871 2062 kubelet_node_status.go:462] "Error getting the current node from lister" err="node \"10.244.90.186\" not found"
Jan 29 14:43:41.245074 kubelet[2062]: E0129 14:43:41.244817 2062 kubelet_node_status.go:462] "Error getting the current node from lister" err="node \"10.244.90.186\" not found"
Jan 29 14:43:41.346181 kubelet[2062]: E0129 14:43:41.346067 2062 kubelet_node_status.go:462] "Error getting the current node from lister" err="node \"10.244.90.186\" not found"
Jan 29 14:43:41.360369 kubelet[2062]: I0129 14:43:41.360289 2062 transport.go:147] "Certificate rotation detected, shutting down client connections to start using new credentials"
Jan 29 14:43:41.360720 kubelet[2062]: W0129 14:43:41.360604 2062 reflector.go:470] k8s.io/client-go/informers/factory.go:160: watch of *v1.RuntimeClass ended with: very short watch: k8s.io/client-go/informers/factory.go:160: Unexpected watch close - watch lasted less than a second and no items received
Jan 29 14:43:41.405125 kubelet[2062]: E0129 14:43:41.404944 2062 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests"
Jan 29 14:43:41.447171 kubelet[2062]: E0129 14:43:41.447098 2062 kubelet_node_status.go:462] "Error getting the current node from lister" err="node \"10.244.90.186\" not found"
Jan 29 14:43:41.547444 kubelet[2062]: E0129 14:43:41.547342 2062 kubelet_node_status.go:462] "Error getting the current node from lister" err="node \"10.244.90.186\" not found"
Jan 29 14:43:41.648457 kubelet[2062]: E0129 14:43:41.648347 2062 kubelet_node_status.go:462] "Error getting the current node from lister" err="node \"10.244.90.186\" not found"
Jan 29 14:43:41.749623 kubelet[2062]: E0129 14:43:41.749539 2062 kubelet_node_status.go:462] "Error getting the current node from lister" err="node \"10.244.90.186\" not found"
Jan 29 14:43:41.850022 kubelet[2062]: E0129 14:43:41.849743 2062 kubelet_node_status.go:462] "Error getting the current node from lister" err="node \"10.244.90.186\" not found"
Jan 29 14:43:41.950724 kubelet[2062]: E0129 14:43:41.950617 2062 kubelet_node_status.go:462] "Error getting the current node from lister" err="node \"10.244.90.186\" not found"
Jan 29 14:43:42.053514 kubelet[2062]: I0129 14:43:42.052884 2062 kuberuntime_manager.go:1523] "Updating runtime config through cri with podcidr" CIDR="192.168.1.0/24"
Jan 29 14:43:42.053859 containerd[1624]: time="2025-01-29T14:43:42.053732371Z" level=info msg="No cni config template is specified, wait for other system components to drop the config."
Jan 29 14:43:42.055145 kubelet[2062]: I0129 14:43:42.054259 2062 kubelet_network.go:61] "Updating Pod CIDR" originalPodCIDR="" newPodCIDR="192.168.1.0/24"
Jan 29 14:43:42.406004 kubelet[2062]: I0129 14:43:42.405797 2062 apiserver.go:52] "Watching apiserver"
Jan 29 14:43:42.406004 kubelet[2062]: E0129 14:43:42.405886 2062 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests"
Jan 29 14:43:42.415800 kubelet[2062]: I0129 14:43:42.415702 2062 topology_manager.go:215] "Topology Admit Handler" podUID="4e2a9136-617e-4a37-a28b-d0f2600b3a83" podNamespace="calico-system" podName="calico-node-n9fpc"
Jan 29 14:43:42.416010 kubelet[2062]: I0129 14:43:42.415971 2062 topology_manager.go:215] "Topology Admit Handler" podUID="ce838e40-b5e1-4fd4-ba08-f12503c5fb8a" podNamespace="calico-system" podName="csi-node-driver-fxq86"
Jan 29 14:43:42.416477 kubelet[2062]: I0129 14:43:42.416150 2062 topology_manager.go:215] "Topology Admit Handler" podUID="02741465-7294-474c-824b-e326a83d6df1" podNamespace="kube-system" podName="kube-proxy-9cd4l"
Jan 29 14:43:42.418245 kubelet[2062]: E0129 14:43:42.416555 2062 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" pod="calico-system/csi-node-driver-fxq86" podUID="ce838e40-b5e1-4fd4-ba08-f12503c5fb8a"
Jan 29 14:43:42.426204 kubelet[2062]: I0129 14:43:42.426177 2062 desired_state_of_world_populator.go:157] "Finished populating initial desired state of world"
Jan 29 14:43:42.436482 kubelet[2062]: I0129 14:43:42.436444 2062 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-7rsqb\" (UniqueName: \"kubernetes.io/projected/4e2a9136-617e-4a37-a28b-d0f2600b3a83-kube-api-access-7rsqb\") pod \"calico-node-n9fpc\" (UID: \"4e2a9136-617e-4a37-a28b-d0f2600b3a83\") " pod="calico-system/calico-node-n9fpc"
Jan 29 14:43:42.436482 kubelet[2062]: I0129 14:43:42.436482 2062 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-lwbsm\" (UniqueName: \"kubernetes.io/projected/ce838e40-b5e1-4fd4-ba08-f12503c5fb8a-kube-api-access-lwbsm\") pod \"csi-node-driver-fxq86\" (UID: \"ce838e40-b5e1-4fd4-ba08-f12503c5fb8a\") " pod="calico-system/csi-node-driver-fxq86"
Jan 29 14:43:42.436621 kubelet[2062]: I0129 14:43:42.436500 2062 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-proxy\" (UniqueName: \"kubernetes.io/configmap/02741465-7294-474c-824b-e326a83d6df1-kube-proxy\") pod \"kube-proxy-9cd4l\" (UID: \"02741465-7294-474c-824b-e326a83d6df1\") " pod="kube-system/kube-proxy-9cd4l"
Jan 29 14:43:42.436621 kubelet[2062]: I0129 14:43:42.436524 2062 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"xtables-lock\" (UniqueName: \"kubernetes.io/host-path/02741465-7294-474c-824b-e326a83d6df1-xtables-lock\") pod \"kube-proxy-9cd4l\" (UID: \"02741465-7294-474c-824b-e326a83d6df1\") " pod="kube-system/kube-proxy-9cd4l"
Jan 29 14:43:42.436621 kubelet[2062]: I0129 14:43:42.436539 2062 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"var-lib-calico\" (UniqueName: \"kubernetes.io/host-path/4e2a9136-617e-4a37-a28b-d0f2600b3a83-var-lib-calico\") pod \"calico-node-n9fpc\" (UID: \"4e2a9136-617e-4a37-a28b-d0f2600b3a83\") " pod="calico-system/calico-node-n9fpc"
Jan 29 14:43:42.436621 kubelet[2062]: I0129 14:43:42.436553 2062 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"node-certs\" (UniqueName: \"kubernetes.io/secret/4e2a9136-617e-4a37-a28b-d0f2600b3a83-node-certs\") pod \"calico-node-n9fpc\" (UID: \"4e2a9136-617e-4a37-a28b-d0f2600b3a83\") " pod="calico-system/calico-node-n9fpc"
Jan 29 14:43:42.436621 kubelet[2062]: I0129 14:43:42.436569 2062 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kubelet-dir\" (UniqueName: \"kubernetes.io/host-path/ce838e40-b5e1-4fd4-ba08-f12503c5fb8a-kubelet-dir\") pod \"csi-node-driver-fxq86\" (UID: \"ce838e40-b5e1-4fd4-ba08-f12503c5fb8a\") " pod="calico-system/csi-node-driver-fxq86"
Jan 29 14:43:42.436743 kubelet[2062]: I0129 14:43:42.436582 2062 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-fqmdt\" (UniqueName: \"kubernetes.io/projected/02741465-7294-474c-824b-e326a83d6df1-kube-api-access-fqmdt\") pod \"kube-proxy-9cd4l\" (UID: \"02741465-7294-474c-824b-e326a83d6df1\") " pod="kube-system/kube-proxy-9cd4l"
Jan 29 14:43:42.436743 kubelet[2062]: I0129 14:43:42.436597 2062 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/4e2a9136-617e-4a37-a28b-d0f2600b3a83-lib-modules\") pod \"calico-node-n9fpc\" (UID: \"4e2a9136-617e-4a37-a28b-d0f2600b3a83\") " pod="calico-system/calico-node-n9fpc"
Jan 29 14:43:42.436743 kubelet[2062]: I0129 14:43:42.436611 2062 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"policysync\" (UniqueName: \"kubernetes.io/host-path/4e2a9136-617e-4a37-a28b-d0f2600b3a83-policysync\") pod \"calico-node-n9fpc\" (UID: \"4e2a9136-617e-4a37-a28b-d0f2600b3a83\") " pod="calico-system/calico-node-n9fpc"
Jan 29 14:43:42.436743 kubelet[2062]: I0129 14:43:42.436625 2062 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"tigera-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/4e2a9136-617e-4a37-a28b-d0f2600b3a83-tigera-ca-bundle\") pod \"calico-node-n9fpc\" (UID: \"4e2a9136-617e-4a37-a28b-d0f2600b3a83\") " pod="calico-system/calico-node-n9fpc"
Jan 29 14:43:42.436743 kubelet[2062]: I0129 14:43:42.436640 2062 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"var-run-calico\" (UniqueName: \"kubernetes.io/host-path/4e2a9136-617e-4a37-a28b-d0f2600b3a83-var-run-calico\") pod \"calico-node-n9fpc\" (UID: \"4e2a9136-617e-4a37-a28b-d0f2600b3a83\") " pod="calico-system/calico-node-n9fpc"
Jan 29 14:43:42.436862 kubelet[2062]: I0129 14:43:42.436654 2062 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cni-bin-dir\" (UniqueName: \"kubernetes.io/host-path/4e2a9136-617e-4a37-a28b-d0f2600b3a83-cni-bin-dir\") pod \"calico-node-n9fpc\" (UID: \"4e2a9136-617e-4a37-a28b-d0f2600b3a83\") " pod="calico-system/calico-node-n9fpc"
Jan 29 14:43:42.436862 kubelet[2062]: I0129 14:43:42.436684 2062 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cni-net-dir\" (UniqueName: \"kubernetes.io/host-path/4e2a9136-617e-4a37-a28b-d0f2600b3a83-cni-net-dir\") pod \"calico-node-n9fpc\" (UID: \"4e2a9136-617e-4a37-a28b-d0f2600b3a83\") " pod="calico-system/calico-node-n9fpc"
Jan 29 14:43:42.436862 kubelet[2062]: I0129 14:43:42.436704 2062 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"registration-dir\" (UniqueName: \"kubernetes.io/host-path/ce838e40-b5e1-4fd4-ba08-f12503c5fb8a-registration-dir\") pod \"csi-node-driver-fxq86\" (UID: \"ce838e40-b5e1-4fd4-ba08-f12503c5fb8a\") " pod="calico-system/csi-node-driver-fxq86"
Jan 29 14:43:42.436862 kubelet[2062]: I0129 14:43:42.436719 2062 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"xtables-lock\" (UniqueName: \"kubernetes.io/host-path/4e2a9136-617e-4a37-a28b-d0f2600b3a83-xtables-lock\") pod \"calico-node-n9fpc\" (UID: \"4e2a9136-617e-4a37-a28b-d0f2600b3a83\") " pod="calico-system/calico-node-n9fpc"
Jan 29 14:43:42.436862 kubelet[2062]: I0129 14:43:42.436737 2062 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"flexvol-driver-host\" (UniqueName: \"kubernetes.io/host-path/4e2a9136-617e-4a37-a28b-d0f2600b3a83-flexvol-driver-host\") pod \"calico-node-n9fpc\" (UID: \"4e2a9136-617e-4a37-a28b-d0f2600b3a83\") " pod="calico-system/calico-node-n9fpc"
Jan 29 14:43:42.436980 kubelet[2062]: I0129 14:43:42.436754 2062 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"varrun\" (UniqueName: \"kubernetes.io/host-path/ce838e40-b5e1-4fd4-ba08-f12503c5fb8a-varrun\") pod \"csi-node-driver-fxq86\" (UID: \"ce838e40-b5e1-4fd4-ba08-f12503c5fb8a\") " pod="calico-system/csi-node-driver-fxq86"
Jan 29 14:43:42.436980 kubelet[2062]: I0129 14:43:42.436769 2062 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"socket-dir\" (UniqueName: \"kubernetes.io/host-path/ce838e40-b5e1-4fd4-ba08-f12503c5fb8a-socket-dir\") pod \"csi-node-driver-fxq86\" (UID: \"ce838e40-b5e1-4fd4-ba08-f12503c5fb8a\") " pod="calico-system/csi-node-driver-fxq86"
Jan 29 14:43:42.436980 kubelet[2062]: I0129 14:43:42.436783 2062 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/02741465-7294-474c-824b-e326a83d6df1-lib-modules\") pod \"kube-proxy-9cd4l\" (UID: \"02741465-7294-474c-824b-e326a83d6df1\") " pod="kube-system/kube-proxy-9cd4l"
Jan 29 14:43:42.436980 kubelet[2062]: I0129 14:43:42.436797 2062 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cni-log-dir\" (UniqueName: \"kubernetes.io/host-path/4e2a9136-617e-4a37-a28b-d0f2600b3a83-cni-log-dir\") pod \"calico-node-n9fpc\" (UID: \"4e2a9136-617e-4a37-a28b-d0f2600b3a83\") " pod="calico-system/calico-node-n9fpc"
Jan 29 14:43:42.545592 kubelet[2062]: E0129 14:43:42.545299 2062 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input
Jan 29 14:43:42.545592 kubelet[2062]: W0129 14:43:42.545343 2062 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: ""
Jan 29 14:43:42.545592 kubelet[2062]: E0129 14:43:42.545411 2062 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input"
Jan 29 14:43:42.546497 kubelet[2062]: E0129 14:43:42.546422 2062 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input
Jan 29 14:43:42.546497 kubelet[2062]: W0129 14:43:42.546452 2062 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: ""
Jan 29 14:43:42.547311 kubelet[2062]: E0129 14:43:42.547099 2062 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input"
Jan 29 14:43:42.547311 kubelet[2062]: E0129 14:43:42.547257 2062 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input
Jan 29 14:43:42.547311 kubelet[2062]: W0129 14:43:42.547279 2062 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: ""
Jan 29 14:43:42.547859 kubelet[2062]: E0129 14:43:42.547699 2062 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input"
Jan 29 14:43:42.550969 kubelet[2062]: E0129 14:43:42.550953 2062 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input
Jan 29 14:43:42.551057 kubelet[2062]: W0129 14:43:42.551045 2062 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: ""
Jan 29 14:43:42.552547 kubelet[2062]: E0129 14:43:42.552532 2062 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input
Jan 29 14:43:42.552645 kubelet[2062]: W0129 14:43:42.552633 2062 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: ""
Jan 29 14:43:42.552927 kubelet[2062]: E0129 14:43:42.552916 2062 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input
Jan 29 14:43:42.552991 kubelet[2062]: W0129 14:43:42.552982 2062 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: ""
Jan 29 14:43:42.553051 kubelet[2062]: E0129 14:43:42.553041 2062 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Jan 29 14:43:42.554251 kubelet[2062]: E0129 14:43:42.553309 2062 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 29 14:43:42.554364 kubelet[2062]: W0129 14:43:42.554339 2062 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 29 14:43:42.554454 kubelet[2062]: E0129 14:43:42.554441 2062 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jan 29 14:43:42.554764 kubelet[2062]: E0129 14:43:42.554752 2062 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 29 14:43:42.554870 kubelet[2062]: W0129 14:43:42.554859 2062 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 29 14:43:42.554930 kubelet[2062]: E0129 14:43:42.554921 2062 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jan 29 14:43:42.555256 kubelet[2062]: E0129 14:43:42.555003 2062 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jan 29 14:43:42.555256 kubelet[2062]: E0129 14:43:42.555034 2062 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Jan 29 14:43:42.555489 kubelet[2062]: E0129 14:43:42.555478 2062 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 29 14:43:42.555548 kubelet[2062]: W0129 14:43:42.555539 2062 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 29 14:43:42.555619 kubelet[2062]: E0129 14:43:42.555610 2062 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jan 29 14:43:42.558335 kubelet[2062]: E0129 14:43:42.558305 2062 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 29 14:43:42.558422 kubelet[2062]: W0129 14:43:42.558411 2062 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 29 14:43:42.558483 kubelet[2062]: E0129 14:43:42.558474 2062 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Jan 29 14:43:42.569418 kubelet[2062]: E0129 14:43:42.569340 2062 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 29 14:43:42.569418 kubelet[2062]: W0129 14:43:42.569362 2062 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 29 14:43:42.569418 kubelet[2062]: E0129 14:43:42.569382 2062 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jan 29 14:43:42.570654 kubelet[2062]: E0129 14:43:42.570210 2062 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 29 14:43:42.571192 kubelet[2062]: W0129 14:43:42.571057 2062 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 29 14:43:42.571192 kubelet[2062]: E0129 14:43:42.571088 2062 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Jan 29 14:43:42.572418 kubelet[2062]: E0129 14:43:42.572324 2062 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 29 14:43:42.572418 kubelet[2062]: W0129 14:43:42.572338 2062 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 29 14:43:42.572418 kubelet[2062]: E0129 14:43:42.572351 2062 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jan 29 14:43:42.723063 containerd[1624]: time="2025-01-29T14:43:42.722826467Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-proxy-9cd4l,Uid:02741465-7294-474c-824b-e326a83d6df1,Namespace:kube-system,Attempt:0,}" Jan 29 14:43:42.724652 containerd[1624]: time="2025-01-29T14:43:42.723415179Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-node-n9fpc,Uid:4e2a9136-617e-4a37-a28b-d0f2600b3a83,Namespace:calico-system,Attempt:0,}" Jan 29 14:43:43.406629 kubelet[2062]: E0129 14:43:43.406546 2062 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Jan 29 14:43:43.499639 containerd[1624]: time="2025-01-29T14:43:43.499540089Z" level=info msg="ImageCreate event name:\"registry.k8s.io/pause:3.8\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"} labels:{key:\"io.cri-containerd.pinned\" value:\"pinned\"}" Jan 29 14:43:43.500382 containerd[1624]: time="2025-01-29T14:43:43.500359354Z" level=info msg="ImageUpdate event name:\"registry.k8s.io/pause:3.8\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"} labels:{key:\"io.cri-containerd.pinned\" value:\"pinned\"}" Jan 29 14:43:43.500999 containerd[1624]: time="2025-01-29T14:43:43.500978873Z" level=info msg="stop 
pulling image registry.k8s.io/pause:3.8: active requests=0, bytes read=0" Jan 29 14:43:43.501561 containerd[1624]: time="2025-01-29T14:43:43.501439622Z" level=info msg="ImageCreate event name:\"sha256:4873874c08efc72e9729683a83ffbb7502ee729e9a5ac097723806ea7fa13517\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"} labels:{key:\"io.cri-containerd.pinned\" value:\"pinned\"}" Jan 29 14:43:43.501561 containerd[1624]: time="2025-01-29T14:43:43.501523656Z" level=info msg="stop pulling image registry.k8s.io/pause:3.8: active requests=0, bytes read=312064" Jan 29 14:43:43.503114 containerd[1624]: time="2025-01-29T14:43:43.503072215Z" level=info msg="ImageCreate event name:\"registry.k8s.io/pause@sha256:9001185023633d17a2f98ff69b6ff2615b8ea02a825adffa40422f51dfdcde9d\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"} labels:{key:\"io.cri-containerd.pinned\" value:\"pinned\"}" Jan 29 14:43:43.505465 containerd[1624]: time="2025-01-29T14:43:43.505128069Z" level=info msg="Pulled image \"registry.k8s.io/pause:3.8\" with image id \"sha256:4873874c08efc72e9729683a83ffbb7502ee729e9a5ac097723806ea7fa13517\", repo tag \"registry.k8s.io/pause:3.8\", repo digest \"registry.k8s.io/pause@sha256:9001185023633d17a2f98ff69b6ff2615b8ea02a825adffa40422f51dfdcde9d\", size \"311286\" in 782.008064ms" Jan 29 14:43:43.506189 containerd[1624]: time="2025-01-29T14:43:43.505978683Z" level=info msg="Pulled image \"registry.k8s.io/pause:3.8\" with image id \"sha256:4873874c08efc72e9729683a83ffbb7502ee729e9a5ac097723806ea7fa13517\", repo tag \"registry.k8s.io/pause:3.8\", repo digest \"registry.k8s.io/pause@sha256:9001185023633d17a2f98ff69b6ff2615b8ea02a825adffa40422f51dfdcde9d\", size \"311286\" in 782.201008ms" Jan 29 14:43:43.555111 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount2863751237.mount: Deactivated successfully. 
Jan 29 14:43:43.683132 containerd[1624]: time="2025-01-29T14:43:43.682119499Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Jan 29 14:43:43.683132 containerd[1624]: time="2025-01-29T14:43:43.682258894Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Jan 29 14:43:43.683132 containerd[1624]: time="2025-01-29T14:43:43.682289585Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Jan 29 14:43:43.683357 containerd[1624]: time="2025-01-29T14:43:43.683172841Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Jan 29 14:43:43.684847 containerd[1624]: time="2025-01-29T14:43:43.684740115Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Jan 29 14:43:43.684847 containerd[1624]: time="2025-01-29T14:43:43.684807352Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Jan 29 14:43:43.685455 containerd[1624]: time="2025-01-29T14:43:43.684825014Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Jan 29 14:43:43.685455 containerd[1624]: time="2025-01-29T14:43:43.685167049Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." 
runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Jan 29 14:43:43.796703 containerd[1624]: time="2025-01-29T14:43:43.795692038Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-proxy-9cd4l,Uid:02741465-7294-474c-824b-e326a83d6df1,Namespace:kube-system,Attempt:0,} returns sandbox id \"7d5b1d1af8e857310febd126b7caabe275abc57d511db7865daedbf527a52d93\"" Jan 29 14:43:43.796703 containerd[1624]: time="2025-01-29T14:43:43.796141123Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-node-n9fpc,Uid:4e2a9136-617e-4a37-a28b-d0f2600b3a83,Namespace:calico-system,Attempt:0,} returns sandbox id \"86a90f39958357b8b035574fc2f6cfa95cdfa4dd11fdbaaa0d32bebd1182e3d0\"" Jan 29 14:43:43.799105 containerd[1624]: time="2025-01-29T14:43:43.799016944Z" level=info msg="PullImage \"registry.k8s.io/kube-proxy:v1.30.9\"" Jan 29 14:43:44.407451 kubelet[2062]: E0129 14:43:44.407342 2062 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Jan 29 14:43:44.517032 kubelet[2062]: E0129 14:43:44.516952 2062 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" pod="calico-system/csi-node-driver-fxq86" podUID="ce838e40-b5e1-4fd4-ba08-f12503c5fb8a" Jan 29 14:43:45.022822 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount3395384232.mount: Deactivated successfully. 
Jan 29 14:43:45.408162 kubelet[2062]: E0129 14:43:45.408098 2062 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Jan 29 14:43:45.463848 containerd[1624]: time="2025-01-29T14:43:45.463759786Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-proxy:v1.30.9\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jan 29 14:43:45.465407 containerd[1624]: time="2025-01-29T14:43:45.465331516Z" level=info msg="stop pulling image registry.k8s.io/kube-proxy:v1.30.9: active requests=0, bytes read=29058345" Jan 29 14:43:45.466536 containerd[1624]: time="2025-01-29T14:43:45.466469641Z" level=info msg="ImageCreate event name:\"sha256:4c369683c359609256b8907f424fc2355f1e7e3eeb7295b1fd8ffc5304f4cede\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jan 29 14:43:45.469451 containerd[1624]: time="2025-01-29T14:43:45.469409992Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-proxy@sha256:d78dc40d97ff862fd8ddb47f80a5ba3feec17bc73e58a60e963885e33faa0083\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jan 29 14:43:45.471908 containerd[1624]: time="2025-01-29T14:43:45.471546410Z" level=info msg="Pulled image \"registry.k8s.io/kube-proxy:v1.30.9\" with image id \"sha256:4c369683c359609256b8907f424fc2355f1e7e3eeb7295b1fd8ffc5304f4cede\", repo tag \"registry.k8s.io/kube-proxy:v1.30.9\", repo digest \"registry.k8s.io/kube-proxy@sha256:d78dc40d97ff862fd8ddb47f80a5ba3feec17bc73e58a60e963885e33faa0083\", size \"29057356\" in 1.672486752s" Jan 29 14:43:45.471908 containerd[1624]: time="2025-01-29T14:43:45.471627688Z" level=info msg="PullImage \"registry.k8s.io/kube-proxy:v1.30.9\" returns image reference \"sha256:4c369683c359609256b8907f424fc2355f1e7e3eeb7295b1fd8ffc5304f4cede\"" Jan 29 14:43:45.473839 containerd[1624]: time="2025-01-29T14:43:45.473613990Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/pod2daemon-flexvol:v3.29.1\"" Jan 29 14:43:45.475162 
containerd[1624]: time="2025-01-29T14:43:45.475131721Z" level=info msg="CreateContainer within sandbox \"7d5b1d1af8e857310febd126b7caabe275abc57d511db7865daedbf527a52d93\" for container &ContainerMetadata{Name:kube-proxy,Attempt:0,}" Jan 29 14:43:45.489485 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount1835733280.mount: Deactivated successfully. Jan 29 14:43:45.489601 containerd[1624]: time="2025-01-29T14:43:45.489535019Z" level=info msg="CreateContainer within sandbox \"7d5b1d1af8e857310febd126b7caabe275abc57d511db7865daedbf527a52d93\" for &ContainerMetadata{Name:kube-proxy,Attempt:0,} returns container id \"a252eb125312ab9fc023eb6b5f5dd5d1377f105acd34c0a48d57f680ea0c9eeb\"" Jan 29 14:43:45.505156 containerd[1624]: time="2025-01-29T14:43:45.505043807Z" level=info msg="StartContainer for \"a252eb125312ab9fc023eb6b5f5dd5d1377f105acd34c0a48d57f680ea0c9eeb\"" Jan 29 14:43:45.575902 containerd[1624]: time="2025-01-29T14:43:45.575792524Z" level=info msg="StartContainer for \"a252eb125312ab9fc023eb6b5f5dd5d1377f105acd34c0a48d57f680ea0c9eeb\" returns successfully" Jan 29 14:43:46.408464 kubelet[2062]: E0129 14:43:46.408353 2062 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Jan 29 14:43:46.517150 kubelet[2062]: E0129 14:43:46.516018 2062 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" pod="calico-system/csi-node-driver-fxq86" podUID="ce838e40-b5e1-4fd4-ba08-f12503c5fb8a" Jan 29 14:43:46.584994 kubelet[2062]: I0129 14:43:46.584912 2062 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/kube-proxy-9cd4l" podStartSLOduration=4.9104184029999995 podStartE2EDuration="6.584877611s" podCreationTimestamp="2025-01-29 14:43:40 +0000 UTC" firstStartedPulling="2025-01-29 14:43:43.79852586 
+0000 UTC m=+3.967822674" lastFinishedPulling="2025-01-29 14:43:45.472985068 +0000 UTC m=+5.642281882" observedRunningTime="2025-01-29 14:43:46.584856753 +0000 UTC m=+6.754153682" watchObservedRunningTime="2025-01-29 14:43:46.584877611 +0000 UTC m=+6.754174470" Jan 29 14:43:46.660037 kubelet[2062]: E0129 14:43:46.659868 2062 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 29 14:43:46.660037 kubelet[2062]: W0129 14:43:46.659941 2062 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 29 14:43:46.660037 kubelet[2062]: E0129 14:43:46.660000 2062 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jan 29 14:43:46.661164 kubelet[2062]: E0129 14:43:46.661046 2062 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 29 14:43:46.661164 kubelet[2062]: W0129 14:43:46.661069 2062 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 29 14:43:46.661164 kubelet[2062]: E0129 14:43:46.661088 2062 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Jan 29 14:43:46.661571 kubelet[2062]: E0129 14:43:46.661427 2062 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 29 14:43:46.661571 kubelet[2062]: W0129 14:43:46.661437 2062 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 29 14:43:46.661571 kubelet[2062]: E0129 14:43:46.661449 2062 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jan 29 14:43:46.661834 kubelet[2062]: E0129 14:43:46.661730 2062 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 29 14:43:46.661834 kubelet[2062]: W0129 14:43:46.661738 2062 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 29 14:43:46.661834 kubelet[2062]: E0129 14:43:46.661753 2062 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Jan 29 14:43:46.662056 kubelet[2062]: E0129 14:43:46.661955 2062 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 29 14:43:46.662056 kubelet[2062]: W0129 14:43:46.661962 2062 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 29 14:43:46.662056 kubelet[2062]: E0129 14:43:46.661971 2062 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jan 29 14:43:46.662359 kubelet[2062]: E0129 14:43:46.662123 2062 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 29 14:43:46.662359 kubelet[2062]: W0129 14:43:46.662141 2062 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 29 14:43:46.662359 kubelet[2062]: E0129 14:43:46.662149 2062 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Jan 29 14:43:46.662359 kubelet[2062]: E0129 14:43:46.662324 2062 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 29 14:43:46.662359 kubelet[2062]: W0129 14:43:46.662331 2062 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 29 14:43:46.662359 kubelet[2062]: E0129 14:43:46.662339 2062 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jan 29 14:43:46.662809 kubelet[2062]: E0129 14:43:46.662498 2062 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 29 14:43:46.662809 kubelet[2062]: W0129 14:43:46.662505 2062 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 29 14:43:46.662809 kubelet[2062]: E0129 14:43:46.662513 2062 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Jan 29 14:43:46.662809 kubelet[2062]: E0129 14:43:46.662669 2062 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 29 14:43:46.662809 kubelet[2062]: W0129 14:43:46.662675 2062 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 29 14:43:46.662809 kubelet[2062]: E0129 14:43:46.662683 2062 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jan 29 14:43:46.665108 kubelet[2062]: E0129 14:43:46.663048 2062 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 29 14:43:46.665108 kubelet[2062]: W0129 14:43:46.663058 2062 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 29 14:43:46.665108 kubelet[2062]: E0129 14:43:46.663067 2062 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Jan 29 14:43:46.665108 kubelet[2062]: E0129 14:43:46.663372 2062 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 29 14:43:46.665108 kubelet[2062]: W0129 14:43:46.663380 2062 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 29 14:43:46.665108 kubelet[2062]: E0129 14:43:46.663388 2062 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jan 29 14:43:46.665108 kubelet[2062]: E0129 14:43:46.663862 2062 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 29 14:43:46.665108 kubelet[2062]: W0129 14:43:46.663870 2062 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 29 14:43:46.665108 kubelet[2062]: E0129 14:43:46.663880 2062 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Jan 29 14:43:46.665108 kubelet[2062]: E0129 14:43:46.664254 2062 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 29 14:43:46.666021 kubelet[2062]: W0129 14:43:46.664292 2062 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 29 14:43:46.666021 kubelet[2062]: E0129 14:43:46.664342 2062 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jan 29 14:43:46.666021 kubelet[2062]: E0129 14:43:46.664564 2062 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 29 14:43:46.666021 kubelet[2062]: W0129 14:43:46.664573 2062 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 29 14:43:46.666021 kubelet[2062]: E0129 14:43:46.664581 2062 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Jan 29 14:43:46.666021 kubelet[2062]: E0129 14:43:46.664729 2062 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 29 14:43:46.666021 kubelet[2062]: W0129 14:43:46.664737 2062 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 29 14:43:46.666021 kubelet[2062]: E0129 14:43:46.664745 2062 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jan 29 14:43:46.666021 kubelet[2062]: E0129 14:43:46.665067 2062 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 29 14:43:46.666021 kubelet[2062]: W0129 14:43:46.665076 2062 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 29 14:43:46.666377 kubelet[2062]: E0129 14:43:46.665087 2062 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input"
Jan 29 14:43:46.666377 kubelet[2062]: E0129 14:43:46.665330 2062 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input
Jan 29 14:43:46.666377 kubelet[2062]: W0129 14:43:46.665339 2062 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: ""
Jan 29 14:43:46.666377 kubelet[2062]: E0129 14:43:46.665348 2062 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input"
Jan 29 14:43:46.666377 kubelet[2062]: E0129 14:43:46.665507 2062 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input
Jan 29 14:43:46.666377 kubelet[2062]: W0129 14:43:46.665514 2062 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: ""
Jan 29 14:43:46.666377 kubelet[2062]: E0129 14:43:46.665521 2062 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input"
Jan 29 14:43:46.666377 kubelet[2062]: E0129 14:43:46.665663 2062 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input
Jan 29 14:43:46.666377 kubelet[2062]: W0129 14:43:46.665669 2062 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: ""
Jan 29 14:43:46.666377 kubelet[2062]: E0129 14:43:46.665678 2062 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input"
Jan 29 14:43:46.666697 kubelet[2062]: E0129 14:43:46.665872 2062 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input
Jan 29 14:43:46.666697 kubelet[2062]: W0129 14:43:46.665880 2062 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: ""
Jan 29 14:43:46.666697 kubelet[2062]: E0129 14:43:46.665889 2062 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input"
Jan 29 14:43:46.668237 kubelet[2062]: E0129 14:43:46.668202 2062 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input
Jan 29 14:43:46.668237 kubelet[2062]: W0129 14:43:46.668219 2062 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: ""
Jan 29 14:43:46.668421 kubelet[2062]: E0129 14:43:46.668253 2062 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input"
Jan 29 14:43:46.668475 kubelet[2062]: E0129 14:43:46.668456 2062 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input
Jan 29 14:43:46.668475 kubelet[2062]: W0129 14:43:46.668468 2062 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: ""
Jan 29 14:43:46.668582 kubelet[2062]: E0129 14:43:46.668478 2062 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input"
Jan 29 14:43:46.668668 kubelet[2062]: E0129 14:43:46.668657 2062 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input
Jan 29 14:43:46.668716 kubelet[2062]: W0129 14:43:46.668670 2062 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: ""
Jan 29 14:43:46.668716 kubelet[2062]: E0129 14:43:46.668691 2062 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input"
Jan 29 14:43:46.668873 kubelet[2062]: E0129 14:43:46.668863 2062 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input
Jan 29 14:43:46.668912 kubelet[2062]: W0129 14:43:46.668873 2062 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: ""
Jan 29 14:43:46.668912 kubelet[2062]: E0129 14:43:46.668885 2062 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input"
Jan 29 14:43:46.669343 kubelet[2062]: E0129 14:43:46.669328 2062 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input
Jan 29 14:43:46.669343 kubelet[2062]: W0129 14:43:46.669341 2062 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: ""
Jan 29 14:43:46.669425 kubelet[2062]: E0129 14:43:46.669358 2062 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input"
Jan 29 14:43:46.669575 kubelet[2062]: E0129 14:43:46.669563 2062 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input
Jan 29 14:43:46.669575 kubelet[2062]: W0129 14:43:46.669574 2062 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: ""
Jan 29 14:43:46.669646 kubelet[2062]: E0129 14:43:46.669617 2062 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input"
Jan 29 14:43:46.669924 kubelet[2062]: E0129 14:43:46.669909 2062 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input
Jan 29 14:43:46.669924 kubelet[2062]: W0129 14:43:46.669923 2062 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: ""
Jan 29 14:43:46.670015 kubelet[2062]: E0129 14:43:46.669938 2062 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input"
Jan 29 14:43:46.670110 kubelet[2062]: E0129 14:43:46.670098 2062 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input
Jan 29 14:43:46.670110 kubelet[2062]: W0129 14:43:46.670110 2062 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: ""
Jan 29 14:43:46.670215 kubelet[2062]: E0129 14:43:46.670118 2062 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input"
Jan 29 14:43:46.670437 kubelet[2062]: E0129 14:43:46.670383 2062 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input
Jan 29 14:43:46.670437 kubelet[2062]: W0129 14:43:46.670394 2062 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: ""
Jan 29 14:43:46.670437 kubelet[2062]: E0129 14:43:46.670416 2062 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input"
Jan 29 14:43:46.670702 kubelet[2062]: E0129 14:43:46.670634 2062 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input
Jan 29 14:43:46.670702 kubelet[2062]: W0129 14:43:46.670667 2062 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: ""
Jan 29 14:43:46.670702 kubelet[2062]: E0129 14:43:46.670681 2062 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input"
Jan 29 14:43:46.671042 kubelet[2062]: E0129 14:43:46.671029 2062 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input
Jan 29 14:43:46.671042 kubelet[2062]: W0129 14:43:46.671039 2062 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: ""
Jan 29 14:43:46.671209 kubelet[2062]: E0129 14:43:46.671113 2062 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input"
Jan 29 14:43:46.671311 kubelet[2062]: E0129 14:43:46.671263 2062 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input
Jan 29 14:43:46.671311 kubelet[2062]: W0129 14:43:46.671270 2062 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: ""
Jan 29 14:43:46.671311 kubelet[2062]: E0129 14:43:46.671279 2062 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input"
Jan 29 14:43:46.724518 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount1796551976.mount: Deactivated successfully.
Jan 29 14:43:46.809891 containerd[1624]: time="2025-01-29T14:43:46.809115121Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/pod2daemon-flexvol:v3.29.1\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Jan 29 14:43:46.809891 containerd[1624]: time="2025-01-29T14:43:46.809834480Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/pod2daemon-flexvol:v3.29.1: active requests=0, bytes read=6855343"
Jan 29 14:43:46.810754 containerd[1624]: time="2025-01-29T14:43:46.810479321Z" level=info msg="ImageCreate event name:\"sha256:2b7452b763ec8833ca0386ada5fd066e552a9b3b02b8538a5e34cc3d6d3840a6\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Jan 29 14:43:46.812521 containerd[1624]: time="2025-01-29T14:43:46.811900883Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/pod2daemon-flexvol@sha256:a63f8b4ff531912d12d143664eb263fdbc6cd7b3ff4aa777dfb6e318a090462c\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Jan 29 14:43:46.813038 containerd[1624]: time="2025-01-29T14:43:46.813013564Z" level=info msg="Pulled image \"ghcr.io/flatcar/calico/pod2daemon-flexvol:v3.29.1\" with image id \"sha256:2b7452b763ec8833ca0386ada5fd066e552a9b3b02b8538a5e34cc3d6d3840a6\", repo tag \"ghcr.io/flatcar/calico/pod2daemon-flexvol:v3.29.1\", repo digest \"ghcr.io/flatcar/calico/pod2daemon-flexvol@sha256:a63f8b4ff531912d12d143664eb263fdbc6cd7b3ff4aa777dfb6e318a090462c\", size \"6855165\" in 1.339366571s"
Jan 29 14:43:46.813086 containerd[1624]: time="2025-01-29T14:43:46.813045486Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/pod2daemon-flexvol:v3.29.1\" returns image reference \"sha256:2b7452b763ec8833ca0386ada5fd066e552a9b3b02b8538a5e34cc3d6d3840a6\""
Jan 29 14:43:46.814761 containerd[1624]: time="2025-01-29T14:43:46.814733801Z" level=info msg="CreateContainer within sandbox \"86a90f39958357b8b035574fc2f6cfa95cdfa4dd11fdbaaa0d32bebd1182e3d0\" for container &ContainerMetadata{Name:flexvol-driver,Attempt:0,}"
Jan 29 14:43:46.822749 containerd[1624]: time="2025-01-29T14:43:46.822693836Z" level=info msg="CreateContainer within sandbox \"86a90f39958357b8b035574fc2f6cfa95cdfa4dd11fdbaaa0d32bebd1182e3d0\" for &ContainerMetadata{Name:flexvol-driver,Attempt:0,} returns container id \"b3c63e211d29d70d870b3bf1c12df1141816e17f509eed13b8330ef1c5c37fef\""
Jan 29 14:43:46.823040 containerd[1624]: time="2025-01-29T14:43:46.823022329Z" level=info msg="StartContainer for \"b3c63e211d29d70d870b3bf1c12df1141816e17f509eed13b8330ef1c5c37fef\""
Jan 29 14:43:46.885280 containerd[1624]: time="2025-01-29T14:43:46.884209998Z" level=info msg="StartContainer for \"b3c63e211d29d70d870b3bf1c12df1141816e17f509eed13b8330ef1c5c37fef\" returns successfully"
Jan 29 14:43:47.013371 containerd[1624]: time="2025-01-29T14:43:47.013144479Z" level=info msg="shim disconnected" id=b3c63e211d29d70d870b3bf1c12df1141816e17f509eed13b8330ef1c5c37fef namespace=k8s.io
Jan 29 14:43:47.013371 containerd[1624]: time="2025-01-29T14:43:47.013263287Z" level=warning msg="cleaning up after shim disconnected" id=b3c63e211d29d70d870b3bf1c12df1141816e17f509eed13b8330ef1c5c37fef namespace=k8s.io
Jan 29 14:43:47.013371 containerd[1624]: time="2025-01-29T14:43:47.013278712Z" level=info msg="cleaning up dead shim" namespace=k8s.io
Jan 29 14:43:47.083156 systemd[1]: systemd-hostnamed.service: Deactivated successfully.
Jan 29 14:43:47.408965 kubelet[2062]: E0129 14:43:47.408823 2062 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests"
Jan 29 14:43:47.580257 containerd[1624]: time="2025-01-29T14:43:47.580131796Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/cni:v3.29.1\""
Jan 29 14:43:47.699881 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-b3c63e211d29d70d870b3bf1c12df1141816e17f509eed13b8330ef1c5c37fef-rootfs.mount: Deactivated successfully.
Jan 29 14:43:48.409916 kubelet[2062]: E0129 14:43:48.409829 2062 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests"
Jan 29 14:43:48.517414 kubelet[2062]: E0129 14:43:48.516594 2062 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" pod="calico-system/csi-node-driver-fxq86" podUID="ce838e40-b5e1-4fd4-ba08-f12503c5fb8a"
Jan 29 14:43:49.410494 kubelet[2062]: E0129 14:43:49.410424 2062 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests"
Jan 29 14:43:50.411359 kubelet[2062]: E0129 14:43:50.411278 2062 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests"
Jan 29 14:43:50.516677 kubelet[2062]: E0129 14:43:50.515922 2062 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" pod="calico-system/csi-node-driver-fxq86" podUID="ce838e40-b5e1-4fd4-ba08-f12503c5fb8a"
Jan 29 14:43:50.906058 systemd[1]: Started sshd@7-10.244.90.186:22-187.170.73.190:54342.service - OpenSSH per-connection server daemon (187.170.73.190:54342).
Jan 29 14:43:51.412133 kubelet[2062]: E0129 14:43:51.412087 2062 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests"
Jan 29 14:43:51.855355 containerd[1624]: time="2025-01-29T14:43:51.855305196Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/cni:v3.29.1\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Jan 29 14:43:51.856344 containerd[1624]: time="2025-01-29T14:43:51.856142910Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/cni:v3.29.1: active requests=0, bytes read=96154154"
Jan 29 14:43:51.856988 containerd[1624]: time="2025-01-29T14:43:51.856700757Z" level=info msg="ImageCreate event name:\"sha256:7dd6ea186aba0d7a1791a79d426fe854527ca95192b26bbd19e8baf8373f7d0e\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Jan 29 14:43:51.858794 containerd[1624]: time="2025-01-29T14:43:51.858741201Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/cni@sha256:21e759d51c90dfb34fc1397dc180dd3a3fb564c2b0580d2f61ffe108f2a3c94b\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Jan 29 14:43:51.860672 containerd[1624]: time="2025-01-29T14:43:51.860528932Z" level=info msg="Pulled image \"ghcr.io/flatcar/calico/cni:v3.29.1\" with image id \"sha256:7dd6ea186aba0d7a1791a79d426fe854527ca95192b26bbd19e8baf8373f7d0e\", repo tag \"ghcr.io/flatcar/calico/cni:v3.29.1\", repo digest \"ghcr.io/flatcar/calico/cni@sha256:21e759d51c90dfb34fc1397dc180dd3a3fb564c2b0580d2f61ffe108f2a3c94b\", size \"97647238\" in 4.280332433s"
Jan 29 14:43:51.860672 containerd[1624]: time="2025-01-29T14:43:51.860564766Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/cni:v3.29.1\" returns image reference \"sha256:7dd6ea186aba0d7a1791a79d426fe854527ca95192b26bbd19e8baf8373f7d0e\""
Jan 29 14:43:51.866671 containerd[1624]: time="2025-01-29T14:43:51.866510710Z" level=info msg="CreateContainer within sandbox \"86a90f39958357b8b035574fc2f6cfa95cdfa4dd11fdbaaa0d32bebd1182e3d0\" for container &ContainerMetadata{Name:install-cni,Attempt:0,}"
Jan 29 14:43:51.886299 containerd[1624]: time="2025-01-29T14:43:51.886267511Z" level=info msg="CreateContainer within sandbox \"86a90f39958357b8b035574fc2f6cfa95cdfa4dd11fdbaaa0d32bebd1182e3d0\" for &ContainerMetadata{Name:install-cni,Attempt:0,} returns container id \"5ac487a947883440922e9193c2a747fd7f1ef3c738afa422456eb02f4579bbc2\""
Jan 29 14:43:51.887241 containerd[1624]: time="2025-01-29T14:43:51.887153558Z" level=info msg="StartContainer for \"5ac487a947883440922e9193c2a747fd7f1ef3c738afa422456eb02f4579bbc2\""
Jan 29 14:43:51.942889 sshd[2484]: Invalid user temp from 187.170.73.190 port 54342
Jan 29 14:43:51.948872 containerd[1624]: time="2025-01-29T14:43:51.948671041Z" level=info msg="StartContainer for \"5ac487a947883440922e9193c2a747fd7f1ef3c738afa422456eb02f4579bbc2\" returns successfully"
Jan 29 14:43:52.132661 sshd[2484]: Received disconnect from 187.170.73.190 port 54342:11: Bye Bye [preauth]
Jan 29 14:43:52.132661 sshd[2484]: Disconnected from invalid user temp 187.170.73.190 port 54342 [preauth]
Jan 29 14:43:52.135204 systemd[1]: sshd@7-10.244.90.186:22-187.170.73.190:54342.service: Deactivated successfully.
Jan 29 14:43:52.412641 kubelet[2062]: E0129 14:43:52.412491 2062 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests"
Jan 29 14:43:52.502330 containerd[1624]: time="2025-01-29T14:43:52.502170252Z" level=error msg="failed to reload cni configuration after receiving fs change event(WRITE \"/etc/cni/net.d/calico-kubeconfig\")" error="cni config load failed: no network config found in /etc/cni/net.d: cni plugin not initialized: failed to load cni config"
Jan 29 14:43:52.512006 kubelet[2062]: I0129 14:43:52.511431 2062 kubelet_node_status.go:497] "Fast updating node status as it just became ready"
Jan 29 14:43:52.520395 containerd[1624]: time="2025-01-29T14:43:52.520349480Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:csi-node-driver-fxq86,Uid:ce838e40-b5e1-4fd4-ba08-f12503c5fb8a,Namespace:calico-system,Attempt:0,}"
Jan 29 14:43:52.578082 containerd[1624]: time="2025-01-29T14:43:52.577984105Z" level=info msg="shim disconnected" id=5ac487a947883440922e9193c2a747fd7f1ef3c738afa422456eb02f4579bbc2 namespace=k8s.io
Jan 29 14:43:52.578082 containerd[1624]: time="2025-01-29T14:43:52.578068510Z" level=warning msg="cleaning up after shim disconnected" id=5ac487a947883440922e9193c2a747fd7f1ef3c738afa422456eb02f4579bbc2 namespace=k8s.io
Jan 29 14:43:52.578082 containerd[1624]: time="2025-01-29T14:43:52.578080522Z" level=info msg="cleaning up dead shim" namespace=k8s.io
Jan 29 14:43:52.594766 containerd[1624]: time="2025-01-29T14:43:52.594663124Z" level=warning msg="cleanup warnings time=\"2025-01-29T14:43:52Z\" level=warning msg=\"failed to remove runc container\" error=\"runc did not terminate successfully: exit status 255: \" runtime=io.containerd.runc.v2\n" namespace=k8s.io
Jan 29 14:43:52.626550 containerd[1624]: time="2025-01-29T14:43:52.626353643Z" level=error msg="Failed to destroy network for sandbox \"a243e11d0ad5230161ae28f3bc2de05c26ace31ffe6ff0f4d829858f7cf46e75\"" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/"
Jan 29 14:43:52.627366 containerd[1624]: time="2025-01-29T14:43:52.627103205Z" level=error msg="encountered an error cleaning up failed sandbox \"a243e11d0ad5230161ae28f3bc2de05c26ace31ffe6ff0f4d829858f7cf46e75\", marking sandbox state as SANDBOX_UNKNOWN" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/"
Jan 29 14:43:52.627366 containerd[1624]: time="2025-01-29T14:43:52.627183528Z" level=error msg="RunPodSandbox for &PodSandboxMetadata{Name:csi-node-driver-fxq86,Uid:ce838e40-b5e1-4fd4-ba08-f12503c5fb8a,Namespace:calico-system,Attempt:0,} failed, error" error="failed to setup network for sandbox \"a243e11d0ad5230161ae28f3bc2de05c26ace31ffe6ff0f4d829858f7cf46e75\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/"
Jan 29 14:43:52.628064 kubelet[2062]: E0129 14:43:52.627686 2062 remote_runtime.go:193] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"a243e11d0ad5230161ae28f3bc2de05c26ace31ffe6ff0f4d829858f7cf46e75\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/"
Jan 29 14:43:52.628064 kubelet[2062]: E0129 14:43:52.627789 2062 kuberuntime_sandbox.go:72] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"a243e11d0ad5230161ae28f3bc2de05c26ace31ffe6ff0f4d829858f7cf46e75\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-system/csi-node-driver-fxq86"
Jan 29 14:43:52.628064 kubelet[2062]: E0129 14:43:52.627825 2062 kuberuntime_manager.go:1166] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"a243e11d0ad5230161ae28f3bc2de05c26ace31ffe6ff0f4d829858f7cf46e75\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-system/csi-node-driver-fxq86"
Jan 29 14:43:52.628328 kubelet[2062]: E0129 14:43:52.627879 2062 pod_workers.go:1298] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"csi-node-driver-fxq86_calico-system(ce838e40-b5e1-4fd4-ba08-f12503c5fb8a)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"csi-node-driver-fxq86_calico-system(ce838e40-b5e1-4fd4-ba08-f12503c5fb8a)\\\": rpc error: code = Unknown desc = failed to setup network for sandbox \\\"a243e11d0ad5230161ae28f3bc2de05c26ace31ffe6ff0f4d829858f7cf46e75\\\": plugin type=\\\"calico\\\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="calico-system/csi-node-driver-fxq86" podUID="ce838e40-b5e1-4fd4-ba08-f12503c5fb8a"
Jan 29 14:43:52.877844 systemd[1]: run-containerd-io.containerd.grpc.v1.cri-sandboxes-a243e11d0ad5230161ae28f3bc2de05c26ace31ffe6ff0f4d829858f7cf46e75-shm.mount: Deactivated successfully.
Jan 29 14:43:52.878209 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-5ac487a947883440922e9193c2a747fd7f1ef3c738afa422456eb02f4579bbc2-rootfs.mount: Deactivated successfully.
Jan 29 14:43:53.413512 kubelet[2062]: E0129 14:43:53.413382 2062 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests"
Jan 29 14:43:53.601153 containerd[1624]: time="2025-01-29T14:43:53.601082290Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/node:v3.29.1\""
Jan 29 14:43:53.602309 kubelet[2062]: I0129 14:43:53.602131 2062 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="a243e11d0ad5230161ae28f3bc2de05c26ace31ffe6ff0f4d829858f7cf46e75"
Jan 29 14:43:53.604277 containerd[1624]: time="2025-01-29T14:43:53.604039564Z" level=info msg="StopPodSandbox for \"a243e11d0ad5230161ae28f3bc2de05c26ace31ffe6ff0f4d829858f7cf46e75\""
Jan 29 14:43:53.604561 containerd[1624]: time="2025-01-29T14:43:53.604402651Z" level=info msg="Ensure that sandbox a243e11d0ad5230161ae28f3bc2de05c26ace31ffe6ff0f4d829858f7cf46e75 in task-service has been cleanup successfully"
Jan 29 14:43:53.645879 containerd[1624]: time="2025-01-29T14:43:53.645820941Z" level=error msg="StopPodSandbox for \"a243e11d0ad5230161ae28f3bc2de05c26ace31ffe6ff0f4d829858f7cf46e75\" failed" error="failed to destroy network for sandbox \"a243e11d0ad5230161ae28f3bc2de05c26ace31ffe6ff0f4d829858f7cf46e75\": plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/"
Jan 29 14:43:53.646174 kubelet[2062]: E0129 14:43:53.646114 2062 remote_runtime.go:222] "StopPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to destroy network for sandbox \"a243e11d0ad5230161ae28f3bc2de05c26ace31ffe6ff0f4d829858f7cf46e75\": plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" podSandboxID="a243e11d0ad5230161ae28f3bc2de05c26ace31ffe6ff0f4d829858f7cf46e75"
Jan 29 14:43:53.646269 kubelet[2062]: E0129 14:43:53.646188 2062 kuberuntime_manager.go:1375] "Failed to stop sandbox" podSandboxID={"Type":"containerd","ID":"a243e11d0ad5230161ae28f3bc2de05c26ace31ffe6ff0f4d829858f7cf46e75"}
Jan 29 14:43:53.646331 kubelet[2062]: E0129 14:43:53.646289 2062 kuberuntime_manager.go:1075] "killPodWithSyncResult failed" err="failed to \"KillPodSandbox\" for \"ce838e40-b5e1-4fd4-ba08-f12503c5fb8a\" with KillPodSandboxError: \"rpc error: code = Unknown desc = failed to destroy network for sandbox \\\"a243e11d0ad5230161ae28f3bc2de05c26ace31ffe6ff0f4d829858f7cf46e75\\\": plugin type=\\\"calico\\\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\""
Jan 29 14:43:53.646331 kubelet[2062]: E0129 14:43:53.646317 2062 pod_workers.go:1298] "Error syncing pod, skipping" err="failed to \"KillPodSandbox\" for \"ce838e40-b5e1-4fd4-ba08-f12503c5fb8a\" with KillPodSandboxError: \"rpc error: code = Unknown desc = failed to destroy network for sandbox \\\"a243e11d0ad5230161ae28f3bc2de05c26ace31ffe6ff0f4d829858f7cf46e75\\\": plugin type=\\\"calico\\\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="calico-system/csi-node-driver-fxq86" podUID="ce838e40-b5e1-4fd4-ba08-f12503c5fb8a"
Jan 29 14:43:54.414463 kubelet[2062]: E0129 14:43:54.414310 2062 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests"
Jan 29 14:43:55.416267 kubelet[2062]: E0129 14:43:55.415244 2062 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests"
Jan 29 14:43:56.415752 kubelet[2062]: E0129 14:43:56.415671 2062 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests"
Jan 29 14:43:57.416799 kubelet[2062]: E0129 14:43:57.416675 2062 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests"
Jan 29 14:43:58.417426 kubelet[2062]: E0129 14:43:58.417213 2062 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests"
Jan 29 14:43:58.826971 kubelet[2062]: I0129 14:43:58.826928 2062 topology_manager.go:215] "Topology Admit Handler" podUID="499af898-5fd3-427d-ac5a-a50de5d9cc4e" podNamespace="default" podName="nginx-deployment-85f456d6dd-k9ghk"
Jan 29 14:43:58.949960 kubelet[2062]: I0129 14:43:58.949916 2062 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-tp2dt\" (UniqueName: \"kubernetes.io/projected/499af898-5fd3-427d-ac5a-a50de5d9cc4e-kube-api-access-tp2dt\") pod \"nginx-deployment-85f456d6dd-k9ghk\" (UID: \"499af898-5fd3-427d-ac5a-a50de5d9cc4e\") " pod="default/nginx-deployment-85f456d6dd-k9ghk"
Jan 29 14:43:59.132507 containerd[1624]: time="2025-01-29T14:43:59.132290669Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:nginx-deployment-85f456d6dd-k9ghk,Uid:499af898-5fd3-427d-ac5a-a50de5d9cc4e,Namespace:default,Attempt:0,}"
Jan 29 14:43:59.219793 containerd[1624]: time="2025-01-29T14:43:59.219735276Z" level=error msg="Failed to destroy network for sandbox \"4ff0d307650b1efbc8a9551094595c21646f49bb914e716542d50075adcbb6f1\"" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/"
Jan 29 14:43:59.220886 containerd[1624]: time="2025-01-29T14:43:59.220838314Z" level=error msg="encountered an error cleaning up failed sandbox \"4ff0d307650b1efbc8a9551094595c21646f49bb914e716542d50075adcbb6f1\", marking sandbox state as SANDBOX_UNKNOWN" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/"
Jan 29 14:43:59.220966 containerd[1624]: time="2025-01-29T14:43:59.220905118Z" level=error msg="RunPodSandbox for &PodSandboxMetadata{Name:nginx-deployment-85f456d6dd-k9ghk,Uid:499af898-5fd3-427d-ac5a-a50de5d9cc4e,Namespace:default,Attempt:0,} failed, error" error="failed to setup network for sandbox \"4ff0d307650b1efbc8a9551094595c21646f49bb914e716542d50075adcbb6f1\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/"
Jan 29 14:43:59.222696 kubelet[2062]: E0129 14:43:59.222655 2062 remote_runtime.go:193] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"4ff0d307650b1efbc8a9551094595c21646f49bb914e716542d50075adcbb6f1\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/"
Jan 29 14:43:59.222864 kubelet[2062]: E0129 14:43:59.222848 2062 kuberuntime_sandbox.go:72] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"4ff0d307650b1efbc8a9551094595c21646f49bb914e716542d50075adcbb6f1\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="default/nginx-deployment-85f456d6dd-k9ghk"
Jan 29 14:43:59.222937 kubelet[2062]: E0129 14:43:59.222926 2062 kuberuntime_manager.go:1166] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"4ff0d307650b1efbc8a9551094595c21646f49bb914e716542d50075adcbb6f1\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="default/nginx-deployment-85f456d6dd-k9ghk"
Jan 29 14:43:59.222990 systemd[1]: run-containerd-io.containerd.grpc.v1.cri-sandboxes-4ff0d307650b1efbc8a9551094595c21646f49bb914e716542d50075adcbb6f1-shm.mount: Deactivated successfully.
Jan 29 14:43:59.223651 kubelet[2062]: E0129 14:43:59.223353 2062 pod_workers.go:1298] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"nginx-deployment-85f456d6dd-k9ghk_default(499af898-5fd3-427d-ac5a-a50de5d9cc4e)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"nginx-deployment-85f456d6dd-k9ghk_default(499af898-5fd3-427d-ac5a-a50de5d9cc4e)\\\": rpc error: code = Unknown desc = failed to setup network for sandbox \\\"4ff0d307650b1efbc8a9551094595c21646f49bb914e716542d50075adcbb6f1\\\": plugin type=\\\"calico\\\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="default/nginx-deployment-85f456d6dd-k9ghk" podUID="499af898-5fd3-427d-ac5a-a50de5d9cc4e"
Jan 29 14:43:59.418209 kubelet[2062]: E0129 14:43:59.418062 2062 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests"
Jan 29 14:43:59.616369 kubelet[2062]: I0129 14:43:59.616106 2062 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="4ff0d307650b1efbc8a9551094595c21646f49bb914e716542d50075adcbb6f1"
Jan 29 14:43:59.617650 containerd[1624]: time="2025-01-29T14:43:59.617566479Z" level=info msg="StopPodSandbox for \"4ff0d307650b1efbc8a9551094595c21646f49bb914e716542d50075adcbb6f1\""
Jan 29 14:43:59.617800 containerd[1624]: time="2025-01-29T14:43:59.617760934Z" level=info msg="Ensure that sandbox 4ff0d307650b1efbc8a9551094595c21646f49bb914e716542d50075adcbb6f1 in task-service has been cleanup successfully"
Jan 29 14:43:59.651334 containerd[1624]: time="2025-01-29T14:43:59.651266665Z" level=error msg="StopPodSandbox for \"4ff0d307650b1efbc8a9551094595c21646f49bb914e716542d50075adcbb6f1\" failed" error="failed to destroy network for sandbox \"4ff0d307650b1efbc8a9551094595c21646f49bb914e716542d50075adcbb6f1\": plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/"
Jan 29 14:43:59.652021 kubelet[2062]: E0129 14:43:59.651988 2062 remote_runtime.go:222] "StopPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to destroy network for sandbox \"4ff0d307650b1efbc8a9551094595c21646f49bb914e716542d50075adcbb6f1\": plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" podSandboxID="4ff0d307650b1efbc8a9551094595c21646f49bb914e716542d50075adcbb6f1"
Jan 29 14:43:59.652343 kubelet[2062]: E0129 14:43:59.652161 2062 kuberuntime_manager.go:1375] "Failed to stop sandbox" podSandboxID={"Type":"containerd","ID":"4ff0d307650b1efbc8a9551094595c21646f49bb914e716542d50075adcbb6f1"}
Jan 29 14:43:59.652343 kubelet[2062]: E0129 14:43:59.652204 2062 kuberuntime_manager.go:1075] "killPodWithSyncResult failed" err="failed to \"KillPodSandbox\" for \"499af898-5fd3-427d-ac5a-a50de5d9cc4e\" with KillPodSandboxError: \"rpc error: code = Unknown desc = failed to destroy network for sandbox \\\"4ff0d307650b1efbc8a9551094595c21646f49bb914e716542d50075adcbb6f1\\\": plugin type=\\\"calico\\\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\""
Jan 29 14:43:59.652343 kubelet[2062]: E0129 14:43:59.652229 2062 pod_workers.go:1298] "Error syncing pod, skipping" err="failed to \"KillPodSandbox\" for \"499af898-5fd3-427d-ac5a-a50de5d9cc4e\" with KillPodSandboxError: \"rpc error: code = Unknown desc = failed to destroy network for sandbox \\\"4ff0d307650b1efbc8a9551094595c21646f49bb914e716542d50075adcbb6f1\\\": plugin type=\\\"calico\\\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="default/nginx-deployment-85f456d6dd-k9ghk" podUID="499af898-5fd3-427d-ac5a-a50de5d9cc4e"
Jan 29 14:43:59.687138 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount2571444273.mount: Deactivated successfully.
Jan 29 14:43:59.719866 containerd[1624]: time="2025-01-29T14:43:59.719827015Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/node:v3.29.1\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Jan 29 14:43:59.720608 containerd[1624]: time="2025-01-29T14:43:59.720564701Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/node:v3.29.1: active requests=0, bytes read=142742010"
Jan 29 14:43:59.720818 containerd[1624]: time="2025-01-29T14:43:59.720796846Z" level=info msg="ImageCreate event name:\"sha256:feb26d4585d68e875d9bd9bd6c27ea9f2d5c9ed9ef70f8b8cb0ebb0559a1d664\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Jan 29 14:43:59.722521 containerd[1624]: time="2025-01-29T14:43:59.722471096Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/node@sha256:99c3917516efe1f807a0cfdf2d14b628b7c5cc6bd8a9ee5a253154f31756bea1\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Jan 29 14:43:59.723240 containerd[1624]: time="2025-01-29T14:43:59.723205300Z" level=info msg="Pulled image \"ghcr.io/flatcar/calico/node:v3.29.1\" with image id \"sha256:feb26d4585d68e875d9bd9bd6c27ea9f2d5c9ed9ef70f8b8cb0ebb0559a1d664\", repo tag \"ghcr.io/flatcar/calico/node:v3.29.1\", repo digest \"ghcr.io/flatcar/calico/node@sha256:99c3917516efe1f807a0cfdf2d14b628b7c5cc6bd8a9ee5a253154f31756bea1\", size \"142741872\" in 6.122060423s"
Jan 29 14:43:59.723299 containerd[1624]: time="2025-01-29T14:43:59.723253717Z" level=info msg="PullImage
\"ghcr.io/flatcar/calico/node:v3.29.1\" returns image reference \"sha256:feb26d4585d68e875d9bd9bd6c27ea9f2d5c9ed9ef70f8b8cb0ebb0559a1d664\"" Jan 29 14:43:59.733001 containerd[1624]: time="2025-01-29T14:43:59.732924945Z" level=info msg="CreateContainer within sandbox \"86a90f39958357b8b035574fc2f6cfa95cdfa4dd11fdbaaa0d32bebd1182e3d0\" for container &ContainerMetadata{Name:calico-node,Attempt:0,}" Jan 29 14:43:59.755828 containerd[1624]: time="2025-01-29T14:43:59.755630751Z" level=info msg="CreateContainer within sandbox \"86a90f39958357b8b035574fc2f6cfa95cdfa4dd11fdbaaa0d32bebd1182e3d0\" for &ContainerMetadata{Name:calico-node,Attempt:0,} returns container id \"b81a7f895f06bd18859996094641ddf9fb54fc212cef3927fa2053d70e16d850\"" Jan 29 14:43:59.757809 containerd[1624]: time="2025-01-29T14:43:59.756458054Z" level=info msg="StartContainer for \"b81a7f895f06bd18859996094641ddf9fb54fc212cef3927fa2053d70e16d850\"" Jan 29 14:43:59.864302 containerd[1624]: time="2025-01-29T14:43:59.863716221Z" level=info msg="StartContainer for \"b81a7f895f06bd18859996094641ddf9fb54fc212cef3927fa2053d70e16d850\" returns successfully" Jan 29 14:43:59.957981 kernel: wireguard: WireGuard 1.0.0 loaded. See www.wireguard.com for information. Jan 29 14:43:59.959023 kernel: wireguard: Copyright (C) 2015-2019 Jason A. Donenfeld . All Rights Reserved. 
Jan 29 14:44:00.403786 kubelet[2062]: E0129 14:44:00.403622 2062 file.go:104] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Jan 29 14:44:00.418563 kubelet[2062]: E0129 14:44:00.418430 2062 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Jan 29 14:44:00.654302 kubelet[2062]: I0129 14:44:00.653782 2062 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="calico-system/calico-node-n9fpc" podStartSLOduration=4.727842473 podStartE2EDuration="20.653746756s" podCreationTimestamp="2025-01-29 14:43:40 +0000 UTC" firstStartedPulling="2025-01-29 14:43:43.798552823 +0000 UTC m=+3.967849634" lastFinishedPulling="2025-01-29 14:43:59.724457106 +0000 UTC m=+19.893753917" observedRunningTime="2025-01-29 14:44:00.652731491 +0000 UTC m=+20.822028379" watchObservedRunningTime="2025-01-29 14:44:00.653746756 +0000 UTC m=+20.823043685" Jan 29 14:44:01.419064 kubelet[2062]: E0129 14:44:01.418574 2062 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Jan 29 14:44:01.581459 update_engine[1598]: I20250129 14:44:01.581355 1598 update_attempter.cc:509] Updating boot flags... 
Jan 29 14:44:01.634355 kubelet[2062]: I0129 14:44:01.633577 2062 prober_manager.go:312] "Failed to trigger a manual run" probe="Readiness" Jan 29 14:44:01.659850 kernel: BTRFS warning: duplicate device /dev/vda3 devid 1 generation 38 scanned by (udev-worker) (2843) Jan 29 14:44:01.713330 kernel: bpftool[2859]: memfd_create() called without MFD_EXEC or MFD_NOEXEC_SEAL set Jan 29 14:44:01.741350 kernel: BTRFS warning: duplicate device /dev/vda3 devid 1 generation 38 scanned by (udev-worker) (2847) Jan 29 14:44:01.839258 kernel: BTRFS warning: duplicate device /dev/vda3 devid 1 generation 38 scanned by (udev-worker) (2847) Jan 29 14:44:02.000939 systemd-networkd[1266]: vxlan.calico: Link UP Jan 29 14:44:02.000949 systemd-networkd[1266]: vxlan.calico: Gained carrier Jan 29 14:44:02.419857 kubelet[2062]: E0129 14:44:02.419724 2062 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Jan 29 14:44:03.106742 systemd-networkd[1266]: vxlan.calico: Gained IPv6LL Jan 29 14:44:03.421633 kubelet[2062]: E0129 14:44:03.420957 2062 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Jan 29 14:44:03.935215 kubelet[2062]: I0129 14:44:03.934856 2062 prober_manager.go:312] "Failed to trigger a manual run" probe="Readiness" Jan 29 14:44:04.422510 kubelet[2062]: E0129 14:44:04.422356 2062 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Jan 29 14:44:05.423080 kubelet[2062]: E0129 14:44:05.422975 2062 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Jan 29 14:44:05.518731 containerd[1624]: time="2025-01-29T14:44:05.518584303Z" level=info msg="StopPodSandbox for \"a243e11d0ad5230161ae28f3bc2de05c26ace31ffe6ff0f4d829858f7cf46e75\"" Jan 29 14:44:05.666111 containerd[1624]: 2025-01-29 14:44:05.592 [INFO][2996] cni-plugin/k8s.go 608: 
Cleaning up netns ContainerID="a243e11d0ad5230161ae28f3bc2de05c26ace31ffe6ff0f4d829858f7cf46e75" Jan 29 14:44:05.666111 containerd[1624]: 2025-01-29 14:44:05.592 [INFO][2996] cni-plugin/dataplane_linux.go 559: Deleting workload's device in netns. ContainerID="a243e11d0ad5230161ae28f3bc2de05c26ace31ffe6ff0f4d829858f7cf46e75" iface="eth0" netns="/var/run/netns/cni-11aee90d-cb10-9721-cac8-ee21d8f01978" Jan 29 14:44:05.666111 containerd[1624]: 2025-01-29 14:44:05.593 [INFO][2996] cni-plugin/dataplane_linux.go 570: Entered netns, deleting veth. ContainerID="a243e11d0ad5230161ae28f3bc2de05c26ace31ffe6ff0f4d829858f7cf46e75" iface="eth0" netns="/var/run/netns/cni-11aee90d-cb10-9721-cac8-ee21d8f01978" Jan 29 14:44:05.666111 containerd[1624]: 2025-01-29 14:44:05.595 [INFO][2996] cni-plugin/dataplane_linux.go 597: Workload's veth was already gone. Nothing to do. ContainerID="a243e11d0ad5230161ae28f3bc2de05c26ace31ffe6ff0f4d829858f7cf46e75" iface="eth0" netns="/var/run/netns/cni-11aee90d-cb10-9721-cac8-ee21d8f01978" Jan 29 14:44:05.666111 containerd[1624]: 2025-01-29 14:44:05.595 [INFO][2996] cni-plugin/k8s.go 615: Releasing IP address(es) ContainerID="a243e11d0ad5230161ae28f3bc2de05c26ace31ffe6ff0f4d829858f7cf46e75" Jan 29 14:44:05.666111 containerd[1624]: 2025-01-29 14:44:05.595 [INFO][2996] cni-plugin/utils.go 188: Calico CNI releasing IP address ContainerID="a243e11d0ad5230161ae28f3bc2de05c26ace31ffe6ff0f4d829858f7cf46e75" Jan 29 14:44:05.666111 containerd[1624]: 2025-01-29 14:44:05.646 [INFO][3002] ipam/ipam_plugin.go 412: Releasing address using handleID ContainerID="a243e11d0ad5230161ae28f3bc2de05c26ace31ffe6ff0f4d829858f7cf46e75" HandleID="k8s-pod-network.a243e11d0ad5230161ae28f3bc2de05c26ace31ffe6ff0f4d829858f7cf46e75" Workload="10.244.90.186-k8s-csi--node--driver--fxq86-eth0" Jan 29 14:44:05.666111 containerd[1624]: 2025-01-29 14:44:05.646 [INFO][3002] ipam/ipam_plugin.go 353: About to acquire host-wide IPAM lock. 
Jan 29 14:44:05.666111 containerd[1624]: 2025-01-29 14:44:05.646 [INFO][3002] ipam/ipam_plugin.go 368: Acquired host-wide IPAM lock. Jan 29 14:44:05.666111 containerd[1624]: 2025-01-29 14:44:05.659 [WARNING][3002] ipam/ipam_plugin.go 429: Asked to release address but it doesn't exist. Ignoring ContainerID="a243e11d0ad5230161ae28f3bc2de05c26ace31ffe6ff0f4d829858f7cf46e75" HandleID="k8s-pod-network.a243e11d0ad5230161ae28f3bc2de05c26ace31ffe6ff0f4d829858f7cf46e75" Workload="10.244.90.186-k8s-csi--node--driver--fxq86-eth0" Jan 29 14:44:05.666111 containerd[1624]: 2025-01-29 14:44:05.659 [INFO][3002] ipam/ipam_plugin.go 440: Releasing address using workloadID ContainerID="a243e11d0ad5230161ae28f3bc2de05c26ace31ffe6ff0f4d829858f7cf46e75" HandleID="k8s-pod-network.a243e11d0ad5230161ae28f3bc2de05c26ace31ffe6ff0f4d829858f7cf46e75" Workload="10.244.90.186-k8s-csi--node--driver--fxq86-eth0" Jan 29 14:44:05.666111 containerd[1624]: 2025-01-29 14:44:05.661 [INFO][3002] ipam/ipam_plugin.go 374: Released host-wide IPAM lock. Jan 29 14:44:05.666111 containerd[1624]: 2025-01-29 14:44:05.664 [INFO][2996] cni-plugin/k8s.go 621: Teardown processing complete. 
ContainerID="a243e11d0ad5230161ae28f3bc2de05c26ace31ffe6ff0f4d829858f7cf46e75" Jan 29 14:44:05.666812 containerd[1624]: time="2025-01-29T14:44:05.666614083Z" level=info msg="TearDown network for sandbox \"a243e11d0ad5230161ae28f3bc2de05c26ace31ffe6ff0f4d829858f7cf46e75\" successfully" Jan 29 14:44:05.666812 containerd[1624]: time="2025-01-29T14:44:05.666676847Z" level=info msg="StopPodSandbox for \"a243e11d0ad5230161ae28f3bc2de05c26ace31ffe6ff0f4d829858f7cf46e75\" returns successfully" Jan 29 14:44:05.668584 containerd[1624]: time="2025-01-29T14:44:05.668508811Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:csi-node-driver-fxq86,Uid:ce838e40-b5e1-4fd4-ba08-f12503c5fb8a,Namespace:calico-system,Attempt:1,}" Jan 29 14:44:05.671686 systemd[1]: run-netns-cni\x2d11aee90d\x2dcb10\x2d9721\x2dcac8\x2dee21d8f01978.mount: Deactivated successfully. Jan 29 14:44:05.852005 systemd-networkd[1266]: cali66ddbedf53d: Link UP Jan 29 14:44:05.852988 systemd-networkd[1266]: cali66ddbedf53d: Gained carrier Jan 29 14:44:05.871158 containerd[1624]: 2025-01-29 14:44:05.724 [INFO][3009] cni-plugin/plugin.go 325: Calico CNI found existing endpoint: &{{WorkloadEndpoint projectcalico.org/v3} {10.244.90.186-k8s-csi--node--driver--fxq86-eth0 csi-node-driver- calico-system ce838e40-b5e1-4fd4-ba08-f12503c5fb8a 1053 0 2025-01-29 14:43:40 +0000 UTC map[app.kubernetes.io/name:csi-node-driver controller-revision-hash:65bf684474 k8s-app:csi-node-driver name:csi-node-driver pod-template-generation:1 projectcalico.org/namespace:calico-system projectcalico.org/orchestrator:k8s projectcalico.org/serviceaccount:csi-node-driver] map[] [] [] []} {k8s 10.244.90.186 csi-node-driver-fxq86 eth0 csi-node-driver [] [] [kns.calico-system ksa.calico-system.csi-node-driver] cali66ddbedf53d [] []}} ContainerID="cf83fcf3e006d90e291b6ef9afcc397cb22ff45f52d4fc240fa35edc603be01a" Namespace="calico-system" Pod="csi-node-driver-fxq86" WorkloadEndpoint="10.244.90.186-k8s-csi--node--driver--fxq86-" Jan 29 
14:44:05.871158 containerd[1624]: 2025-01-29 14:44:05.725 [INFO][3009] cni-plugin/k8s.go 77: Extracted identifiers for CmdAddK8s ContainerID="cf83fcf3e006d90e291b6ef9afcc397cb22ff45f52d4fc240fa35edc603be01a" Namespace="calico-system" Pod="csi-node-driver-fxq86" WorkloadEndpoint="10.244.90.186-k8s-csi--node--driver--fxq86-eth0" Jan 29 14:44:05.871158 containerd[1624]: 2025-01-29 14:44:05.778 [INFO][3020] ipam/ipam_plugin.go 225: Calico CNI IPAM request count IPv4=1 IPv6=0 ContainerID="cf83fcf3e006d90e291b6ef9afcc397cb22ff45f52d4fc240fa35edc603be01a" HandleID="k8s-pod-network.cf83fcf3e006d90e291b6ef9afcc397cb22ff45f52d4fc240fa35edc603be01a" Workload="10.244.90.186-k8s-csi--node--driver--fxq86-eth0" Jan 29 14:44:05.871158 containerd[1624]: 2025-01-29 14:44:05.797 [INFO][3020] ipam/ipam_plugin.go 265: Auto assigning IP ContainerID="cf83fcf3e006d90e291b6ef9afcc397cb22ff45f52d4fc240fa35edc603be01a" HandleID="k8s-pod-network.cf83fcf3e006d90e291b6ef9afcc397cb22ff45f52d4fc240fa35edc603be01a" Workload="10.244.90.186-k8s-csi--node--driver--fxq86-eth0" assignArgs=ipam.AutoAssignArgs{Num4:1, Num6:0, HandleID:(*string)(0xc00031b820), Attrs:map[string]string{"namespace":"calico-system", "node":"10.244.90.186", "pod":"csi-node-driver-fxq86", "timestamp":"2025-01-29 14:44:05.778172383 +0000 UTC"}, Hostname:"10.244.90.186", IPv4Pools:[]net.IPNet{}, IPv6Pools:[]net.IPNet{}, MaxBlocksPerHost:0, HostReservedAttrIPv4s:(*ipam.HostReservedAttr)(nil), HostReservedAttrIPv6s:(*ipam.HostReservedAttr)(nil), IntendedUse:"Workload"} Jan 29 14:44:05.871158 containerd[1624]: 2025-01-29 14:44:05.797 [INFO][3020] ipam/ipam_plugin.go 353: About to acquire host-wide IPAM lock. Jan 29 14:44:05.871158 containerd[1624]: 2025-01-29 14:44:05.797 [INFO][3020] ipam/ipam_plugin.go 368: Acquired host-wide IPAM lock. 
Jan 29 14:44:05.871158 containerd[1624]: 2025-01-29 14:44:05.797 [INFO][3020] ipam/ipam.go 107: Auto-assign 1 ipv4, 0 ipv6 addrs for host '10.244.90.186' Jan 29 14:44:05.871158 containerd[1624]: 2025-01-29 14:44:05.800 [INFO][3020] ipam/ipam.go 660: Looking up existing affinities for host handle="k8s-pod-network.cf83fcf3e006d90e291b6ef9afcc397cb22ff45f52d4fc240fa35edc603be01a" host="10.244.90.186" Jan 29 14:44:05.871158 containerd[1624]: 2025-01-29 14:44:05.807 [INFO][3020] ipam/ipam.go 372: Looking up existing affinities for host host="10.244.90.186" Jan 29 14:44:05.871158 containerd[1624]: 2025-01-29 14:44:05.814 [INFO][3020] ipam/ipam.go 489: Trying affinity for 192.168.62.0/26 host="10.244.90.186" Jan 29 14:44:05.871158 containerd[1624]: 2025-01-29 14:44:05.816 [INFO][3020] ipam/ipam.go 155: Attempting to load block cidr=192.168.62.0/26 host="10.244.90.186" Jan 29 14:44:05.871158 containerd[1624]: 2025-01-29 14:44:05.820 [INFO][3020] ipam/ipam.go 232: Affinity is confirmed and block has been loaded cidr=192.168.62.0/26 host="10.244.90.186" Jan 29 14:44:05.871158 containerd[1624]: 2025-01-29 14:44:05.820 [INFO][3020] ipam/ipam.go 1180: Attempting to assign 1 addresses from block block=192.168.62.0/26 handle="k8s-pod-network.cf83fcf3e006d90e291b6ef9afcc397cb22ff45f52d4fc240fa35edc603be01a" host="10.244.90.186" Jan 29 14:44:05.871158 containerd[1624]: 2025-01-29 14:44:05.822 [INFO][3020] ipam/ipam.go 1685: Creating new handle: k8s-pod-network.cf83fcf3e006d90e291b6ef9afcc397cb22ff45f52d4fc240fa35edc603be01a Jan 29 14:44:05.871158 containerd[1624]: 2025-01-29 14:44:05.833 [INFO][3020] ipam/ipam.go 1203: Writing block in order to claim IPs block=192.168.62.0/26 handle="k8s-pod-network.cf83fcf3e006d90e291b6ef9afcc397cb22ff45f52d4fc240fa35edc603be01a" host="10.244.90.186" Jan 29 14:44:05.871158 containerd[1624]: 2025-01-29 14:44:05.841 [INFO][3020] ipam/ipam.go 1216: Successfully claimed IPs: [192.168.62.1/26] block=192.168.62.0/26 
handle="k8s-pod-network.cf83fcf3e006d90e291b6ef9afcc397cb22ff45f52d4fc240fa35edc603be01a" host="10.244.90.186" Jan 29 14:44:05.871158 containerd[1624]: 2025-01-29 14:44:05.841 [INFO][3020] ipam/ipam.go 847: Auto-assigned 1 out of 1 IPv4s: [192.168.62.1/26] handle="k8s-pod-network.cf83fcf3e006d90e291b6ef9afcc397cb22ff45f52d4fc240fa35edc603be01a" host="10.244.90.186" Jan 29 14:44:05.871158 containerd[1624]: 2025-01-29 14:44:05.841 [INFO][3020] ipam/ipam_plugin.go 374: Released host-wide IPAM lock. Jan 29 14:44:05.871158 containerd[1624]: 2025-01-29 14:44:05.841 [INFO][3020] ipam/ipam_plugin.go 283: Calico CNI IPAM assigned addresses IPv4=[192.168.62.1/26] IPv6=[] ContainerID="cf83fcf3e006d90e291b6ef9afcc397cb22ff45f52d4fc240fa35edc603be01a" HandleID="k8s-pod-network.cf83fcf3e006d90e291b6ef9afcc397cb22ff45f52d4fc240fa35edc603be01a" Workload="10.244.90.186-k8s-csi--node--driver--fxq86-eth0" Jan 29 14:44:05.872108 containerd[1624]: 2025-01-29 14:44:05.845 [INFO][3009] cni-plugin/k8s.go 386: Populated endpoint ContainerID="cf83fcf3e006d90e291b6ef9afcc397cb22ff45f52d4fc240fa35edc603be01a" Namespace="calico-system" Pod="csi-node-driver-fxq86" WorkloadEndpoint="10.244.90.186-k8s-csi--node--driver--fxq86-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"10.244.90.186-k8s-csi--node--driver--fxq86-eth0", GenerateName:"csi-node-driver-", Namespace:"calico-system", SelfLink:"", UID:"ce838e40-b5e1-4fd4-ba08-f12503c5fb8a", ResourceVersion:"1053", Generation:0, CreationTimestamp:time.Date(2025, time.January, 29, 14, 43, 40, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"app.kubernetes.io/name":"csi-node-driver", "controller-revision-hash":"65bf684474", "k8s-app":"csi-node-driver", "name":"csi-node-driver", "pod-template-generation":"1", "projectcalico.org/namespace":"calico-system", "projectcalico.org/orchestrator":"k8s", 
"projectcalico.org/serviceaccount":"csi-node-driver"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"10.244.90.186", ContainerID:"", Pod:"csi-node-driver-fxq86", Endpoint:"eth0", ServiceAccountName:"csi-node-driver", IPNetworks:[]string{"192.168.62.1/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-system", "ksa.calico-system.csi-node-driver"}, InterfaceName:"cali66ddbedf53d", MAC:"", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil)}} Jan 29 14:44:05.872108 containerd[1624]: 2025-01-29 14:44:05.846 [INFO][3009] cni-plugin/k8s.go 387: Calico CNI using IPs: [192.168.62.1/32] ContainerID="cf83fcf3e006d90e291b6ef9afcc397cb22ff45f52d4fc240fa35edc603be01a" Namespace="calico-system" Pod="csi-node-driver-fxq86" WorkloadEndpoint="10.244.90.186-k8s-csi--node--driver--fxq86-eth0" Jan 29 14:44:05.872108 containerd[1624]: 2025-01-29 14:44:05.846 [INFO][3009] cni-plugin/dataplane_linux.go 69: Setting the host side veth name to cali66ddbedf53d ContainerID="cf83fcf3e006d90e291b6ef9afcc397cb22ff45f52d4fc240fa35edc603be01a" Namespace="calico-system" Pod="csi-node-driver-fxq86" WorkloadEndpoint="10.244.90.186-k8s-csi--node--driver--fxq86-eth0" Jan 29 14:44:05.872108 containerd[1624]: 2025-01-29 14:44:05.854 [INFO][3009] cni-plugin/dataplane_linux.go 508: Disabling IPv4 forwarding ContainerID="cf83fcf3e006d90e291b6ef9afcc397cb22ff45f52d4fc240fa35edc603be01a" Namespace="calico-system" Pod="csi-node-driver-fxq86" WorkloadEndpoint="10.244.90.186-k8s-csi--node--driver--fxq86-eth0" Jan 29 14:44:05.872108 containerd[1624]: 2025-01-29 14:44:05.854 [INFO][3009] cni-plugin/k8s.go 414: Added Mac, interface name, and active container ID to endpoint ContainerID="cf83fcf3e006d90e291b6ef9afcc397cb22ff45f52d4fc240fa35edc603be01a" Namespace="calico-system" 
Pod="csi-node-driver-fxq86" WorkloadEndpoint="10.244.90.186-k8s-csi--node--driver--fxq86-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"10.244.90.186-k8s-csi--node--driver--fxq86-eth0", GenerateName:"csi-node-driver-", Namespace:"calico-system", SelfLink:"", UID:"ce838e40-b5e1-4fd4-ba08-f12503c5fb8a", ResourceVersion:"1053", Generation:0, CreationTimestamp:time.Date(2025, time.January, 29, 14, 43, 40, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"app.kubernetes.io/name":"csi-node-driver", "controller-revision-hash":"65bf684474", "k8s-app":"csi-node-driver", "name":"csi-node-driver", "pod-template-generation":"1", "projectcalico.org/namespace":"calico-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"csi-node-driver"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"10.244.90.186", ContainerID:"cf83fcf3e006d90e291b6ef9afcc397cb22ff45f52d4fc240fa35edc603be01a", Pod:"csi-node-driver-fxq86", Endpoint:"eth0", ServiceAccountName:"csi-node-driver", IPNetworks:[]string{"192.168.62.1/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-system", "ksa.calico-system.csi-node-driver"}, InterfaceName:"cali66ddbedf53d", MAC:"02:2e:09:df:eb:b0", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil)}} Jan 29 14:44:05.872108 containerd[1624]: 2025-01-29 14:44:05.869 [INFO][3009] cni-plugin/k8s.go 500: Wrote updated endpoint to datastore ContainerID="cf83fcf3e006d90e291b6ef9afcc397cb22ff45f52d4fc240fa35edc603be01a" Namespace="calico-system" Pod="csi-node-driver-fxq86" WorkloadEndpoint="10.244.90.186-k8s-csi--node--driver--fxq86-eth0" Jan 29 14:44:05.894131 
containerd[1624]: time="2025-01-29T14:44:05.894032807Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Jan 29 14:44:05.894820 containerd[1624]: time="2025-01-29T14:44:05.894629976Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Jan 29 14:44:05.894820 containerd[1624]: time="2025-01-29T14:44:05.894669557Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Jan 29 14:44:05.894999 containerd[1624]: time="2025-01-29T14:44:05.894976656Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Jan 29 14:44:05.934532 containerd[1624]: time="2025-01-29T14:44:05.934502622Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:csi-node-driver-fxq86,Uid:ce838e40-b5e1-4fd4-ba08-f12503c5fb8a,Namespace:calico-system,Attempt:1,} returns sandbox id \"cf83fcf3e006d90e291b6ef9afcc397cb22ff45f52d4fc240fa35edc603be01a\"" Jan 29 14:44:05.936457 containerd[1624]: time="2025-01-29T14:44:05.936427216Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/csi:v3.29.1\"" Jan 29 14:44:06.423474 kubelet[2062]: E0129 14:44:06.423381 2062 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Jan 29 14:44:07.256259 containerd[1624]: time="2025-01-29T14:44:07.256073742Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/csi:v3.29.1\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jan 29 14:44:07.257144 containerd[1624]: time="2025-01-29T14:44:07.256800204Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/csi:v3.29.1: active requests=0, bytes read=7902632" Jan 29 14:44:07.257564 containerd[1624]: time="2025-01-29T14:44:07.257530341Z" level=info msg="ImageCreate event 
name:\"sha256:bda8c42e04758c4f061339e213f50ccdc7502c4176fbf631aa12357e62b63540\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jan 29 14:44:07.259705 containerd[1624]: time="2025-01-29T14:44:07.259677129Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/csi@sha256:eaa7e01fb16b603c155a67b81f16992281db7f831684c7b2081d3434587a7ff3\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jan 29 14:44:07.260658 containerd[1624]: time="2025-01-29T14:44:07.260523609Z" level=info msg="Pulled image \"ghcr.io/flatcar/calico/csi:v3.29.1\" with image id \"sha256:bda8c42e04758c4f061339e213f50ccdc7502c4176fbf631aa12357e62b63540\", repo tag \"ghcr.io/flatcar/calico/csi:v3.29.1\", repo digest \"ghcr.io/flatcar/calico/csi@sha256:eaa7e01fb16b603c155a67b81f16992281db7f831684c7b2081d3434587a7ff3\", size \"9395716\" in 1.324058918s" Jan 29 14:44:07.260658 containerd[1624]: time="2025-01-29T14:44:07.260561978Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/csi:v3.29.1\" returns image reference \"sha256:bda8c42e04758c4f061339e213f50ccdc7502c4176fbf631aa12357e62b63540\"" Jan 29 14:44:07.263736 containerd[1624]: time="2025-01-29T14:44:07.263559157Z" level=info msg="CreateContainer within sandbox \"cf83fcf3e006d90e291b6ef9afcc397cb22ff45f52d4fc240fa35edc603be01a\" for container &ContainerMetadata{Name:calico-csi,Attempt:0,}" Jan 29 14:44:07.282730 containerd[1624]: time="2025-01-29T14:44:07.282682046Z" level=info msg="CreateContainer within sandbox \"cf83fcf3e006d90e291b6ef9afcc397cb22ff45f52d4fc240fa35edc603be01a\" for &ContainerMetadata{Name:calico-csi,Attempt:0,} returns container id \"8b87cf9c5b01feeca22c4d67f8fa5545aa62cf9bd93c8bc25905f674137b7813\"" Jan 29 14:44:07.283659 containerd[1624]: time="2025-01-29T14:44:07.283631008Z" level=info msg="StartContainer for \"8b87cf9c5b01feeca22c4d67f8fa5545aa62cf9bd93c8bc25905f674137b7813\"" Jan 29 14:44:07.357371 containerd[1624]: time="2025-01-29T14:44:07.357326594Z" level=info msg="StartContainer for 
\"8b87cf9c5b01feeca22c4d67f8fa5545aa62cf9bd93c8bc25905f674137b7813\" returns successfully" Jan 29 14:44:07.360529 containerd[1624]: time="2025-01-29T14:44:07.360496680Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/node-driver-registrar:v3.29.1\"" Jan 29 14:44:07.424526 kubelet[2062]: E0129 14:44:07.424401 2062 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Jan 29 14:44:07.522922 systemd-networkd[1266]: cali66ddbedf53d: Gained IPv6LL Jan 29 14:44:08.424953 kubelet[2062]: E0129 14:44:08.424867 2062 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Jan 29 14:44:08.843771 containerd[1624]: time="2025-01-29T14:44:08.843676314Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/node-driver-registrar:v3.29.1\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jan 29 14:44:08.845502 containerd[1624]: time="2025-01-29T14:44:08.845329986Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/node-driver-registrar:v3.29.1: active requests=0, bytes read=10501081" Jan 29 14:44:08.846445 containerd[1624]: time="2025-01-29T14:44:08.846337256Z" level=info msg="ImageCreate event name:\"sha256:8b7d18f262d5cf6a6343578ad0db68a140c4c9989d9e02c58c27cb5d2c70320f\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jan 29 14:44:08.850833 containerd[1624]: time="2025-01-29T14:44:08.850703707Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/node-driver-registrar@sha256:a338da9488cbaa83c78457c3d7354d84149969c0480e88dd768e036632ff5b76\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jan 29 14:44:08.855992 containerd[1624]: time="2025-01-29T14:44:08.855320592Z" level=info msg="Pulled image \"ghcr.io/flatcar/calico/node-driver-registrar:v3.29.1\" with image id \"sha256:8b7d18f262d5cf6a6343578ad0db68a140c4c9989d9e02c58c27cb5d2c70320f\", repo tag 
\"ghcr.io/flatcar/calico/node-driver-registrar:v3.29.1\", repo digest \"ghcr.io/flatcar/calico/node-driver-registrar@sha256:a338da9488cbaa83c78457c3d7354d84149969c0480e88dd768e036632ff5b76\", size \"11994117\" in 1.494786007s" Jan 29 14:44:08.855992 containerd[1624]: time="2025-01-29T14:44:08.855374885Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/node-driver-registrar:v3.29.1\" returns image reference \"sha256:8b7d18f262d5cf6a6343578ad0db68a140c4c9989d9e02c58c27cb5d2c70320f\"" Jan 29 14:44:08.858559 containerd[1624]: time="2025-01-29T14:44:08.858514892Z" level=info msg="CreateContainer within sandbox \"cf83fcf3e006d90e291b6ef9afcc397cb22ff45f52d4fc240fa35edc603be01a\" for container &ContainerMetadata{Name:csi-node-driver-registrar,Attempt:0,}" Jan 29 14:44:08.882469 containerd[1624]: time="2025-01-29T14:44:08.882440043Z" level=info msg="CreateContainer within sandbox \"cf83fcf3e006d90e291b6ef9afcc397cb22ff45f52d4fc240fa35edc603be01a\" for &ContainerMetadata{Name:csi-node-driver-registrar,Attempt:0,} returns container id \"972ecceeb3d1e9d632a9144dec458bd1265fb454beb21c67e1332910e8f59889\"" Jan 29 14:44:08.883189 containerd[1624]: time="2025-01-29T14:44:08.883171368Z" level=info msg="StartContainer for \"972ecceeb3d1e9d632a9144dec458bd1265fb454beb21c67e1332910e8f59889\"" Jan 29 14:44:08.948838 containerd[1624]: time="2025-01-29T14:44:08.948746596Z" level=info msg="StartContainer for \"972ecceeb3d1e9d632a9144dec458bd1265fb454beb21c67e1332910e8f59889\" returns successfully" Jan 29 14:44:09.425219 kubelet[2062]: E0129 14:44:09.425122 2062 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Jan 29 14:44:09.503162 kubelet[2062]: I0129 14:44:09.503082 2062 csi_plugin.go:100] kubernetes.io/csi: Trying to validate a new CSI Driver with name: csi.tigera.io endpoint: /var/lib/kubelet/plugins/csi.tigera.io/csi.sock versions: 1.0.0 Jan 29 14:44:09.503162 kubelet[2062]: I0129 14:44:09.503174 2062 
csi_plugin.go:113] kubernetes.io/csi: Register new plugin with name: csi.tigera.io at endpoint: /var/lib/kubelet/plugins/csi.tigera.io/csi.sock Jan 29 14:44:09.695289 kubelet[2062]: I0129 14:44:09.694859 2062 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="calico-system/csi-node-driver-fxq86" podStartSLOduration=26.77413914 podStartE2EDuration="29.694797603s" podCreationTimestamp="2025-01-29 14:43:40 +0000 UTC" firstStartedPulling="2025-01-29 14:44:05.935963613 +0000 UTC m=+26.105260425" lastFinishedPulling="2025-01-29 14:44:08.85662206 +0000 UTC m=+29.025918888" observedRunningTime="2025-01-29 14:44:09.693876962 +0000 UTC m=+29.863173813" watchObservedRunningTime="2025-01-29 14:44:09.694797603 +0000 UTC m=+29.864094496" Jan 29 14:44:10.426276 kubelet[2062]: E0129 14:44:10.426105 2062 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Jan 29 14:44:11.426965 kubelet[2062]: E0129 14:44:11.426863 2062 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Jan 29 14:44:12.427904 kubelet[2062]: E0129 14:44:12.427793 2062 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Jan 29 14:44:13.428829 kubelet[2062]: E0129 14:44:13.428526 2062 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Jan 29 14:44:14.430151 kubelet[2062]: E0129 14:44:14.429294 2062 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Jan 29 14:44:14.521566 containerd[1624]: time="2025-01-29T14:44:14.520794207Z" level=info msg="StopPodSandbox for \"4ff0d307650b1efbc8a9551094595c21646f49bb914e716542d50075adcbb6f1\"" Jan 29 14:44:14.658982 containerd[1624]: 2025-01-29 14:44:14.592 [INFO][3187] cni-plugin/k8s.go 608: Cleaning up netns 
ContainerID="4ff0d307650b1efbc8a9551094595c21646f49bb914e716542d50075adcbb6f1" Jan 29 14:44:14.658982 containerd[1624]: 2025-01-29 14:44:14.593 [INFO][3187] cni-plugin/dataplane_linux.go 559: Deleting workload's device in netns. ContainerID="4ff0d307650b1efbc8a9551094595c21646f49bb914e716542d50075adcbb6f1" iface="eth0" netns="/var/run/netns/cni-d645a2b7-42a3-b5fe-0977-e951bc6be572" Jan 29 14:44:14.658982 containerd[1624]: 2025-01-29 14:44:14.593 [INFO][3187] cni-plugin/dataplane_linux.go 570: Entered netns, deleting veth. ContainerID="4ff0d307650b1efbc8a9551094595c21646f49bb914e716542d50075adcbb6f1" iface="eth0" netns="/var/run/netns/cni-d645a2b7-42a3-b5fe-0977-e951bc6be572" Jan 29 14:44:14.658982 containerd[1624]: 2025-01-29 14:44:14.594 [INFO][3187] cni-plugin/dataplane_linux.go 597: Workload's veth was already gone. Nothing to do. ContainerID="4ff0d307650b1efbc8a9551094595c21646f49bb914e716542d50075adcbb6f1" iface="eth0" netns="/var/run/netns/cni-d645a2b7-42a3-b5fe-0977-e951bc6be572" Jan 29 14:44:14.658982 containerd[1624]: 2025-01-29 14:44:14.594 [INFO][3187] cni-plugin/k8s.go 615: Releasing IP address(es) ContainerID="4ff0d307650b1efbc8a9551094595c21646f49bb914e716542d50075adcbb6f1" Jan 29 14:44:14.658982 containerd[1624]: 2025-01-29 14:44:14.594 [INFO][3187] cni-plugin/utils.go 188: Calico CNI releasing IP address ContainerID="4ff0d307650b1efbc8a9551094595c21646f49bb914e716542d50075adcbb6f1" Jan 29 14:44:14.658982 containerd[1624]: 2025-01-29 14:44:14.637 [INFO][3193] ipam/ipam_plugin.go 412: Releasing address using handleID ContainerID="4ff0d307650b1efbc8a9551094595c21646f49bb914e716542d50075adcbb6f1" HandleID="k8s-pod-network.4ff0d307650b1efbc8a9551094595c21646f49bb914e716542d50075adcbb6f1" Workload="10.244.90.186-k8s-nginx--deployment--85f456d6dd--k9ghk-eth0" Jan 29 14:44:14.658982 containerd[1624]: 2025-01-29 14:44:14.637 [INFO][3193] ipam/ipam_plugin.go 353: About to acquire host-wide IPAM lock. 
Jan 29 14:44:14.658982 containerd[1624]: 2025-01-29 14:44:14.637 [INFO][3193] ipam/ipam_plugin.go 368: Acquired host-wide IPAM lock. Jan 29 14:44:14.658982 containerd[1624]: 2025-01-29 14:44:14.649 [WARNING][3193] ipam/ipam_plugin.go 429: Asked to release address but it doesn't exist. Ignoring ContainerID="4ff0d307650b1efbc8a9551094595c21646f49bb914e716542d50075adcbb6f1" HandleID="k8s-pod-network.4ff0d307650b1efbc8a9551094595c21646f49bb914e716542d50075adcbb6f1" Workload="10.244.90.186-k8s-nginx--deployment--85f456d6dd--k9ghk-eth0" Jan 29 14:44:14.658982 containerd[1624]: 2025-01-29 14:44:14.649 [INFO][3193] ipam/ipam_plugin.go 440: Releasing address using workloadID ContainerID="4ff0d307650b1efbc8a9551094595c21646f49bb914e716542d50075adcbb6f1" HandleID="k8s-pod-network.4ff0d307650b1efbc8a9551094595c21646f49bb914e716542d50075adcbb6f1" Workload="10.244.90.186-k8s-nginx--deployment--85f456d6dd--k9ghk-eth0" Jan 29 14:44:14.658982 containerd[1624]: 2025-01-29 14:44:14.652 [INFO][3193] ipam/ipam_plugin.go 374: Released host-wide IPAM lock. Jan 29 14:44:14.658982 containerd[1624]: 2025-01-29 14:44:14.655 [INFO][3187] cni-plugin/k8s.go 621: Teardown processing complete. 
ContainerID="4ff0d307650b1efbc8a9551094595c21646f49bb914e716542d50075adcbb6f1" Jan 29 14:44:14.665485 containerd[1624]: time="2025-01-29T14:44:14.661829130Z" level=info msg="TearDown network for sandbox \"4ff0d307650b1efbc8a9551094595c21646f49bb914e716542d50075adcbb6f1\" successfully" Jan 29 14:44:14.665485 containerd[1624]: time="2025-01-29T14:44:14.661900389Z" level=info msg="StopPodSandbox for \"4ff0d307650b1efbc8a9551094595c21646f49bb914e716542d50075adcbb6f1\" returns successfully" Jan 29 14:44:14.665485 containerd[1624]: time="2025-01-29T14:44:14.663276870Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:nginx-deployment-85f456d6dd-k9ghk,Uid:499af898-5fd3-427d-ac5a-a50de5d9cc4e,Namespace:default,Attempt:1,}" Jan 29 14:44:14.666049 systemd[1]: run-netns-cni\x2dd645a2b7\x2d42a3\x2db5fe\x2d0977\x2de951bc6be572.mount: Deactivated successfully. Jan 29 14:44:14.840657 systemd-networkd[1266]: cali36884bf53a4: Link UP Jan 29 14:44:14.843729 systemd-networkd[1266]: cali36884bf53a4: Gained carrier Jan 29 14:44:14.869827 containerd[1624]: 2025-01-29 14:44:14.724 [INFO][3203] cni-plugin/plugin.go 325: Calico CNI found existing endpoint: &{{WorkloadEndpoint projectcalico.org/v3} {10.244.90.186-k8s-nginx--deployment--85f456d6dd--k9ghk-eth0 nginx-deployment-85f456d6dd- default 499af898-5fd3-427d-ac5a-a50de5d9cc4e 1088 0 2025-01-29 14:43:58 +0000 UTC map[app:nginx pod-template-hash:85f456d6dd projectcalico.org/namespace:default projectcalico.org/orchestrator:k8s projectcalico.org/serviceaccount:default] map[] [] [] []} {k8s 10.244.90.186 nginx-deployment-85f456d6dd-k9ghk eth0 default [] [] [kns.default ksa.default.default] cali36884bf53a4 [] []}} ContainerID="4973e3849685513233ee3affab04aeea843e1cfc955bb277c5cd0693dfb6df76" Namespace="default" Pod="nginx-deployment-85f456d6dd-k9ghk" WorkloadEndpoint="10.244.90.186-k8s-nginx--deployment--85f456d6dd--k9ghk-" Jan 29 14:44:14.869827 containerd[1624]: 2025-01-29 14:44:14.724 [INFO][3203] cni-plugin/k8s.go 77: Extracted 
identifiers for CmdAddK8s ContainerID="4973e3849685513233ee3affab04aeea843e1cfc955bb277c5cd0693dfb6df76" Namespace="default" Pod="nginx-deployment-85f456d6dd-k9ghk" WorkloadEndpoint="10.244.90.186-k8s-nginx--deployment--85f456d6dd--k9ghk-eth0" Jan 29 14:44:14.869827 containerd[1624]: 2025-01-29 14:44:14.759 [INFO][3211] ipam/ipam_plugin.go 225: Calico CNI IPAM request count IPv4=1 IPv6=0 ContainerID="4973e3849685513233ee3affab04aeea843e1cfc955bb277c5cd0693dfb6df76" HandleID="k8s-pod-network.4973e3849685513233ee3affab04aeea843e1cfc955bb277c5cd0693dfb6df76" Workload="10.244.90.186-k8s-nginx--deployment--85f456d6dd--k9ghk-eth0" Jan 29 14:44:14.869827 containerd[1624]: 2025-01-29 14:44:14.775 [INFO][3211] ipam/ipam_plugin.go 265: Auto assigning IP ContainerID="4973e3849685513233ee3affab04aeea843e1cfc955bb277c5cd0693dfb6df76" HandleID="k8s-pod-network.4973e3849685513233ee3affab04aeea843e1cfc955bb277c5cd0693dfb6df76" Workload="10.244.90.186-k8s-nginx--deployment--85f456d6dd--k9ghk-eth0" assignArgs=ipam.AutoAssignArgs{Num4:1, Num6:0, HandleID:(*string)(0xc000290ae0), Attrs:map[string]string{"namespace":"default", "node":"10.244.90.186", "pod":"nginx-deployment-85f456d6dd-k9ghk", "timestamp":"2025-01-29 14:44:14.759814279 +0000 UTC"}, Hostname:"10.244.90.186", IPv4Pools:[]net.IPNet{}, IPv6Pools:[]net.IPNet{}, MaxBlocksPerHost:0, HostReservedAttrIPv4s:(*ipam.HostReservedAttr)(nil), HostReservedAttrIPv6s:(*ipam.HostReservedAttr)(nil), IntendedUse:"Workload"} Jan 29 14:44:14.869827 containerd[1624]: 2025-01-29 14:44:14.775 [INFO][3211] ipam/ipam_plugin.go 353: About to acquire host-wide IPAM lock. Jan 29 14:44:14.869827 containerd[1624]: 2025-01-29 14:44:14.775 [INFO][3211] ipam/ipam_plugin.go 368: Acquired host-wide IPAM lock. 
Jan 29 14:44:14.869827 containerd[1624]: 2025-01-29 14:44:14.775 [INFO][3211] ipam/ipam.go 107: Auto-assign 1 ipv4, 0 ipv6 addrs for host '10.244.90.186' Jan 29 14:44:14.869827 containerd[1624]: 2025-01-29 14:44:14.778 [INFO][3211] ipam/ipam.go 660: Looking up existing affinities for host handle="k8s-pod-network.4973e3849685513233ee3affab04aeea843e1cfc955bb277c5cd0693dfb6df76" host="10.244.90.186" Jan 29 14:44:14.869827 containerd[1624]: 2025-01-29 14:44:14.789 [INFO][3211] ipam/ipam.go 372: Looking up existing affinities for host host="10.244.90.186" Jan 29 14:44:14.869827 containerd[1624]: 2025-01-29 14:44:14.799 [INFO][3211] ipam/ipam.go 489: Trying affinity for 192.168.62.0/26 host="10.244.90.186" Jan 29 14:44:14.869827 containerd[1624]: 2025-01-29 14:44:14.802 [INFO][3211] ipam/ipam.go 155: Attempting to load block cidr=192.168.62.0/26 host="10.244.90.186" Jan 29 14:44:14.869827 containerd[1624]: 2025-01-29 14:44:14.806 [INFO][3211] ipam/ipam.go 232: Affinity is confirmed and block has been loaded cidr=192.168.62.0/26 host="10.244.90.186" Jan 29 14:44:14.869827 containerd[1624]: 2025-01-29 14:44:14.807 [INFO][3211] ipam/ipam.go 1180: Attempting to assign 1 addresses from block block=192.168.62.0/26 handle="k8s-pod-network.4973e3849685513233ee3affab04aeea843e1cfc955bb277c5cd0693dfb6df76" host="10.244.90.186" Jan 29 14:44:14.869827 containerd[1624]: 2025-01-29 14:44:14.809 [INFO][3211] ipam/ipam.go 1685: Creating new handle: k8s-pod-network.4973e3849685513233ee3affab04aeea843e1cfc955bb277c5cd0693dfb6df76 Jan 29 14:44:14.869827 containerd[1624]: 2025-01-29 14:44:14.817 [INFO][3211] ipam/ipam.go 1203: Writing block in order to claim IPs block=192.168.62.0/26 handle="k8s-pod-network.4973e3849685513233ee3affab04aeea843e1cfc955bb277c5cd0693dfb6df76" host="10.244.90.186" Jan 29 14:44:14.869827 containerd[1624]: 2025-01-29 14:44:14.826 [INFO][3211] ipam/ipam.go 1216: Successfully claimed IPs: [192.168.62.2/26] block=192.168.62.0/26 
handle="k8s-pod-network.4973e3849685513233ee3affab04aeea843e1cfc955bb277c5cd0693dfb6df76" host="10.244.90.186" Jan 29 14:44:14.869827 containerd[1624]: 2025-01-29 14:44:14.826 [INFO][3211] ipam/ipam.go 847: Auto-assigned 1 out of 1 IPv4s: [192.168.62.2/26] handle="k8s-pod-network.4973e3849685513233ee3affab04aeea843e1cfc955bb277c5cd0693dfb6df76" host="10.244.90.186" Jan 29 14:44:14.869827 containerd[1624]: 2025-01-29 14:44:14.826 [INFO][3211] ipam/ipam_plugin.go 374: Released host-wide IPAM lock. Jan 29 14:44:14.869827 containerd[1624]: 2025-01-29 14:44:14.826 [INFO][3211] ipam/ipam_plugin.go 283: Calico CNI IPAM assigned addresses IPv4=[192.168.62.2/26] IPv6=[] ContainerID="4973e3849685513233ee3affab04aeea843e1cfc955bb277c5cd0693dfb6df76" HandleID="k8s-pod-network.4973e3849685513233ee3affab04aeea843e1cfc955bb277c5cd0693dfb6df76" Workload="10.244.90.186-k8s-nginx--deployment--85f456d6dd--k9ghk-eth0" Jan 29 14:44:14.871670 containerd[1624]: 2025-01-29 14:44:14.830 [INFO][3203] cni-plugin/k8s.go 386: Populated endpoint ContainerID="4973e3849685513233ee3affab04aeea843e1cfc955bb277c5cd0693dfb6df76" Namespace="default" Pod="nginx-deployment-85f456d6dd-k9ghk" WorkloadEndpoint="10.244.90.186-k8s-nginx--deployment--85f456d6dd--k9ghk-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"10.244.90.186-k8s-nginx--deployment--85f456d6dd--k9ghk-eth0", GenerateName:"nginx-deployment-85f456d6dd-", Namespace:"default", SelfLink:"", UID:"499af898-5fd3-427d-ac5a-a50de5d9cc4e", ResourceVersion:"1088", Generation:0, CreationTimestamp:time.Date(2025, time.January, 29, 14, 43, 58, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"app":"nginx", "pod-template-hash":"85f456d6dd", "projectcalico.org/namespace":"default", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"default"}, Annotations:map[string]string(nil), 
OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"10.244.90.186", ContainerID:"", Pod:"nginx-deployment-85f456d6dd-k9ghk", Endpoint:"eth0", ServiceAccountName:"default", IPNetworks:[]string{"192.168.62.2/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.default", "ksa.default.default"}, InterfaceName:"cali36884bf53a4", MAC:"", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil)}} Jan 29 14:44:14.871670 containerd[1624]: 2025-01-29 14:44:14.831 [INFO][3203] cni-plugin/k8s.go 387: Calico CNI using IPs: [192.168.62.2/32] ContainerID="4973e3849685513233ee3affab04aeea843e1cfc955bb277c5cd0693dfb6df76" Namespace="default" Pod="nginx-deployment-85f456d6dd-k9ghk" WorkloadEndpoint="10.244.90.186-k8s-nginx--deployment--85f456d6dd--k9ghk-eth0" Jan 29 14:44:14.871670 containerd[1624]: 2025-01-29 14:44:14.831 [INFO][3203] cni-plugin/dataplane_linux.go 69: Setting the host side veth name to cali36884bf53a4 ContainerID="4973e3849685513233ee3affab04aeea843e1cfc955bb277c5cd0693dfb6df76" Namespace="default" Pod="nginx-deployment-85f456d6dd-k9ghk" WorkloadEndpoint="10.244.90.186-k8s-nginx--deployment--85f456d6dd--k9ghk-eth0" Jan 29 14:44:14.871670 containerd[1624]: 2025-01-29 14:44:14.845 [INFO][3203] cni-plugin/dataplane_linux.go 508: Disabling IPv4 forwarding ContainerID="4973e3849685513233ee3affab04aeea843e1cfc955bb277c5cd0693dfb6df76" Namespace="default" Pod="nginx-deployment-85f456d6dd-k9ghk" WorkloadEndpoint="10.244.90.186-k8s-nginx--deployment--85f456d6dd--k9ghk-eth0" Jan 29 14:44:14.871670 containerd[1624]: 2025-01-29 14:44:14.848 [INFO][3203] cni-plugin/k8s.go 414: Added Mac, interface name, and active container ID to endpoint ContainerID="4973e3849685513233ee3affab04aeea843e1cfc955bb277c5cd0693dfb6df76" Namespace="default" Pod="nginx-deployment-85f456d6dd-k9ghk" 
WorkloadEndpoint="10.244.90.186-k8s-nginx--deployment--85f456d6dd--k9ghk-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"10.244.90.186-k8s-nginx--deployment--85f456d6dd--k9ghk-eth0", GenerateName:"nginx-deployment-85f456d6dd-", Namespace:"default", SelfLink:"", UID:"499af898-5fd3-427d-ac5a-a50de5d9cc4e", ResourceVersion:"1088", Generation:0, CreationTimestamp:time.Date(2025, time.January, 29, 14, 43, 58, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"app":"nginx", "pod-template-hash":"85f456d6dd", "projectcalico.org/namespace":"default", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"default"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"10.244.90.186", ContainerID:"4973e3849685513233ee3affab04aeea843e1cfc955bb277c5cd0693dfb6df76", Pod:"nginx-deployment-85f456d6dd-k9ghk", Endpoint:"eth0", ServiceAccountName:"default", IPNetworks:[]string{"192.168.62.2/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.default", "ksa.default.default"}, InterfaceName:"cali36884bf53a4", MAC:"c2:68:a9:e9:62:8b", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil)}} Jan 29 14:44:14.871670 containerd[1624]: 2025-01-29 14:44:14.865 [INFO][3203] cni-plugin/k8s.go 500: Wrote updated endpoint to datastore ContainerID="4973e3849685513233ee3affab04aeea843e1cfc955bb277c5cd0693dfb6df76" Namespace="default" Pod="nginx-deployment-85f456d6dd-k9ghk" WorkloadEndpoint="10.244.90.186-k8s-nginx--deployment--85f456d6dd--k9ghk-eth0" Jan 29 14:44:14.896317 containerd[1624]: time="2025-01-29T14:44:14.896110819Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." 
runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Jan 29 14:44:14.896317 containerd[1624]: time="2025-01-29T14:44:14.896180191Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Jan 29 14:44:14.896317 containerd[1624]: time="2025-01-29T14:44:14.896219109Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Jan 29 14:44:14.896692 containerd[1624]: time="2025-01-29T14:44:14.896390919Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Jan 29 14:44:14.966037 containerd[1624]: time="2025-01-29T14:44:14.965998230Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:nginx-deployment-85f456d6dd-k9ghk,Uid:499af898-5fd3-427d-ac5a-a50de5d9cc4e,Namespace:default,Attempt:1,} returns sandbox id \"4973e3849685513233ee3affab04aeea843e1cfc955bb277c5cd0693dfb6df76\"" Jan 29 14:44:14.970039 containerd[1624]: time="2025-01-29T14:44:14.969851033Z" level=info msg="PullImage \"ghcr.io/flatcar/nginx:latest\"" Jan 29 14:44:15.430599 kubelet[2062]: E0129 14:44:15.430500 2062 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Jan 29 14:44:16.431605 kubelet[2062]: E0129 14:44:16.431531 2062 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Jan 29 14:44:16.674680 systemd-networkd[1266]: cali36884bf53a4: Gained IPv6LL Jan 29 14:44:17.431821 kubelet[2062]: E0129 14:44:17.431723 2062 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Jan 29 14:44:18.345957 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount2055722133.mount: Deactivated successfully. 
Jan 29 14:44:18.432859 kubelet[2062]: E0129 14:44:18.432471 2062 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Jan 29 14:44:19.433017 kubelet[2062]: E0129 14:44:19.432944 2062 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Jan 29 14:44:19.505901 containerd[1624]: time="2025-01-29T14:44:19.505821298Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/nginx:latest\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jan 29 14:44:19.507688 containerd[1624]: time="2025-01-29T14:44:19.507127400Z" level=info msg="stop pulling image ghcr.io/flatcar/nginx:latest: active requests=0, bytes read=71015561" Jan 29 14:44:19.509269 containerd[1624]: time="2025-01-29T14:44:19.508245686Z" level=info msg="ImageCreate event name:\"sha256:0dcfd986e814f68db775fba6b61fbaec3761562dc2ae3043d38dbff123e1bb1e\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jan 29 14:44:19.512864 containerd[1624]: time="2025-01-29T14:44:19.512823918Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/nginx@sha256:2ffeb5a7ca68f2017f0bc48251750a6e40fcd3c341b94a22fc7812dcabbb84db\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jan 29 14:44:19.515152 containerd[1624]: time="2025-01-29T14:44:19.515086489Z" level=info msg="Pulled image \"ghcr.io/flatcar/nginx:latest\" with image id \"sha256:0dcfd986e814f68db775fba6b61fbaec3761562dc2ae3043d38dbff123e1bb1e\", repo tag \"ghcr.io/flatcar/nginx:latest\", repo digest \"ghcr.io/flatcar/nginx@sha256:2ffeb5a7ca68f2017f0bc48251750a6e40fcd3c341b94a22fc7812dcabbb84db\", size \"71015439\" in 4.545197543s" Jan 29 14:44:19.515217 containerd[1624]: time="2025-01-29T14:44:19.515166535Z" level=info msg="PullImage \"ghcr.io/flatcar/nginx:latest\" returns image reference \"sha256:0dcfd986e814f68db775fba6b61fbaec3761562dc2ae3043d38dbff123e1bb1e\"" Jan 29 14:44:19.527822 containerd[1624]: 
time="2025-01-29T14:44:19.527794974Z" level=info msg="CreateContainer within sandbox \"4973e3849685513233ee3affab04aeea843e1cfc955bb277c5cd0693dfb6df76\" for container &ContainerMetadata{Name:nginx,Attempt:0,}" Jan 29 14:44:19.547426 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount175364710.mount: Deactivated successfully. Jan 29 14:44:19.547849 containerd[1624]: time="2025-01-29T14:44:19.547814744Z" level=info msg="CreateContainer within sandbox \"4973e3849685513233ee3affab04aeea843e1cfc955bb277c5cd0693dfb6df76\" for &ContainerMetadata{Name:nginx,Attempt:0,} returns container id \"c7e696100509488d508e202ac4d4ea5b489ecb03cf3349c39c05673dc5b6dd86\"" Jan 29 14:44:19.549316 containerd[1624]: time="2025-01-29T14:44:19.548579026Z" level=info msg="StartContainer for \"c7e696100509488d508e202ac4d4ea5b489ecb03cf3349c39c05673dc5b6dd86\"" Jan 29 14:44:19.600727 containerd[1624]: time="2025-01-29T14:44:19.600490579Z" level=info msg="StartContainer for \"c7e696100509488d508e202ac4d4ea5b489ecb03cf3349c39c05673dc5b6dd86\" returns successfully" Jan 29 14:44:19.738747 kubelet[2062]: I0129 14:44:19.738429 2062 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="default/nginx-deployment-85f456d6dd-k9ghk" podStartSLOduration=17.191120485 podStartE2EDuration="21.738396068s" podCreationTimestamp="2025-01-29 14:43:58 +0000 UTC" firstStartedPulling="2025-01-29 14:44:14.96944368 +0000 UTC m=+35.138740488" lastFinishedPulling="2025-01-29 14:44:19.516719255 +0000 UTC m=+39.686016071" observedRunningTime="2025-01-29 14:44:19.737622763 +0000 UTC m=+39.906919682" watchObservedRunningTime="2025-01-29 14:44:19.738396068 +0000 UTC m=+39.907693057" Jan 29 14:44:20.404004 kubelet[2062]: E0129 14:44:20.403903 2062 file.go:104] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Jan 29 14:44:20.433550 kubelet[2062]: E0129 14:44:20.433461 2062 file_linux.go:61] "Unable to read config path" err="path does not exist, 
ignoring" path="/etc/kubernetes/manifests" Jan 29 14:44:21.434752 kubelet[2062]: E0129 14:44:21.434635 2062 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Jan 29 14:44:22.435168 kubelet[2062]: E0129 14:44:22.435074 2062 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Jan 29 14:44:23.435816 kubelet[2062]: E0129 14:44:23.435736 2062 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Jan 29 14:44:24.436892 kubelet[2062]: E0129 14:44:24.436796 2062 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Jan 29 14:44:25.437900 kubelet[2062]: E0129 14:44:25.437768 2062 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Jan 29 14:44:26.438291 kubelet[2062]: E0129 14:44:26.438171 2062 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Jan 29 14:44:27.438687 kubelet[2062]: E0129 14:44:27.438601 2062 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Jan 29 14:44:27.667375 kubelet[2062]: I0129 14:44:27.667293 2062 topology_manager.go:215] "Topology Admit Handler" podUID="6591fd99-6f5a-4217-a215-a1021a6d47e2" podNamespace="default" podName="nfs-server-provisioner-0" Jan 29 14:44:27.839196 kubelet[2062]: I0129 14:44:27.838921 2062 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-l6jth\" (UniqueName: \"kubernetes.io/projected/6591fd99-6f5a-4217-a215-a1021a6d47e2-kube-api-access-l6jth\") pod \"nfs-server-provisioner-0\" (UID: \"6591fd99-6f5a-4217-a215-a1021a6d47e2\") " pod="default/nfs-server-provisioner-0" Jan 29 14:44:27.839196 kubelet[2062]: I0129 14:44:27.839060 
2062 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"data\" (UniqueName: \"kubernetes.io/empty-dir/6591fd99-6f5a-4217-a215-a1021a6d47e2-data\") pod \"nfs-server-provisioner-0\" (UID: \"6591fd99-6f5a-4217-a215-a1021a6d47e2\") " pod="default/nfs-server-provisioner-0" Jan 29 14:44:27.973636 containerd[1624]: time="2025-01-29T14:44:27.973594765Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:nfs-server-provisioner-0,Uid:6591fd99-6f5a-4217-a215-a1021a6d47e2,Namespace:default,Attempt:0,}" Jan 29 14:44:28.201858 systemd-networkd[1266]: cali60e51b789ff: Link UP Jan 29 14:44:28.204211 systemd-networkd[1266]: cali60e51b789ff: Gained carrier Jan 29 14:44:28.225812 containerd[1624]: 2025-01-29 14:44:28.095 [INFO][3375] cni-plugin/plugin.go 325: Calico CNI found existing endpoint: &{{WorkloadEndpoint projectcalico.org/v3} {10.244.90.186-k8s-nfs--server--provisioner--0-eth0 nfs-server-provisioner- default 6591fd99-6f5a-4217-a215-a1021a6d47e2 1142 0 2025-01-29 14:44:27 +0000 UTC map[app:nfs-server-provisioner apps.kubernetes.io/pod-index:0 chart:nfs-server-provisioner-1.8.0 controller-revision-hash:nfs-server-provisioner-d5cbb7f57 heritage:Helm projectcalico.org/namespace:default projectcalico.org/orchestrator:k8s projectcalico.org/serviceaccount:nfs-server-provisioner release:nfs-server-provisioner statefulset.kubernetes.io/pod-name:nfs-server-provisioner-0] map[] [] [] []} {k8s 10.244.90.186 nfs-server-provisioner-0 eth0 nfs-server-provisioner [] [] [kns.default ksa.default.nfs-server-provisioner] cali60e51b789ff [{nfs TCP 2049 0 } {nfs-udp UDP 2049 0 } {nlockmgr TCP 32803 0 } {nlockmgr-udp UDP 32803 0 } {mountd TCP 20048 0 } {mountd-udp UDP 20048 0 } {rquotad TCP 875 0 } {rquotad-udp UDP 875 0 } {rpcbind TCP 111 0 } {rpcbind-udp UDP 111 0 } {statd TCP 662 0 } {statd-udp UDP 662 0 }] []}} ContainerID="c56ff584beaef0df8dc640c4d1a715e567cb9c56068b78511e63827f8c2d58a4" Namespace="default" Pod="nfs-server-provisioner-0" 
WorkloadEndpoint="10.244.90.186-k8s-nfs--server--provisioner--0-" Jan 29 14:44:28.225812 containerd[1624]: 2025-01-29 14:44:28.095 [INFO][3375] cni-plugin/k8s.go 77: Extracted identifiers for CmdAddK8s ContainerID="c56ff584beaef0df8dc640c4d1a715e567cb9c56068b78511e63827f8c2d58a4" Namespace="default" Pod="nfs-server-provisioner-0" WorkloadEndpoint="10.244.90.186-k8s-nfs--server--provisioner--0-eth0" Jan 29 14:44:28.225812 containerd[1624]: 2025-01-29 14:44:28.134 [INFO][3387] ipam/ipam_plugin.go 225: Calico CNI IPAM request count IPv4=1 IPv6=0 ContainerID="c56ff584beaef0df8dc640c4d1a715e567cb9c56068b78511e63827f8c2d58a4" HandleID="k8s-pod-network.c56ff584beaef0df8dc640c4d1a715e567cb9c56068b78511e63827f8c2d58a4" Workload="10.244.90.186-k8s-nfs--server--provisioner--0-eth0" Jan 29 14:44:28.225812 containerd[1624]: 2025-01-29 14:44:28.149 [INFO][3387] ipam/ipam_plugin.go 265: Auto assigning IP ContainerID="c56ff584beaef0df8dc640c4d1a715e567cb9c56068b78511e63827f8c2d58a4" HandleID="k8s-pod-network.c56ff584beaef0df8dc640c4d1a715e567cb9c56068b78511e63827f8c2d58a4" Workload="10.244.90.186-k8s-nfs--server--provisioner--0-eth0" assignArgs=ipam.AutoAssignArgs{Num4:1, Num6:0, HandleID:(*string)(0xc0002914c0), Attrs:map[string]string{"namespace":"default", "node":"10.244.90.186", "pod":"nfs-server-provisioner-0", "timestamp":"2025-01-29 14:44:28.134877575 +0000 UTC"}, Hostname:"10.244.90.186", IPv4Pools:[]net.IPNet{}, IPv6Pools:[]net.IPNet{}, MaxBlocksPerHost:0, HostReservedAttrIPv4s:(*ipam.HostReservedAttr)(nil), HostReservedAttrIPv6s:(*ipam.HostReservedAttr)(nil), IntendedUse:"Workload"} Jan 29 14:44:28.225812 containerd[1624]: 2025-01-29 14:44:28.149 [INFO][3387] ipam/ipam_plugin.go 353: About to acquire host-wide IPAM lock. Jan 29 14:44:28.225812 containerd[1624]: 2025-01-29 14:44:28.149 [INFO][3387] ipam/ipam_plugin.go 368: Acquired host-wide IPAM lock. 
Jan 29 14:44:28.225812 containerd[1624]: 2025-01-29 14:44:28.149 [INFO][3387] ipam/ipam.go 107: Auto-assign 1 ipv4, 0 ipv6 addrs for host '10.244.90.186' Jan 29 14:44:28.225812 containerd[1624]: 2025-01-29 14:44:28.152 [INFO][3387] ipam/ipam.go 660: Looking up existing affinities for host handle="k8s-pod-network.c56ff584beaef0df8dc640c4d1a715e567cb9c56068b78511e63827f8c2d58a4" host="10.244.90.186" Jan 29 14:44:28.225812 containerd[1624]: 2025-01-29 14:44:28.160 [INFO][3387] ipam/ipam.go 372: Looking up existing affinities for host host="10.244.90.186" Jan 29 14:44:28.225812 containerd[1624]: 2025-01-29 14:44:28.169 [INFO][3387] ipam/ipam.go 489: Trying affinity for 192.168.62.0/26 host="10.244.90.186" Jan 29 14:44:28.225812 containerd[1624]: 2025-01-29 14:44:28.171 [INFO][3387] ipam/ipam.go 155: Attempting to load block cidr=192.168.62.0/26 host="10.244.90.186" Jan 29 14:44:28.225812 containerd[1624]: 2025-01-29 14:44:28.175 [INFO][3387] ipam/ipam.go 232: Affinity is confirmed and block has been loaded cidr=192.168.62.0/26 host="10.244.90.186" Jan 29 14:44:28.225812 containerd[1624]: 2025-01-29 14:44:28.175 [INFO][3387] ipam/ipam.go 1180: Attempting to assign 1 addresses from block block=192.168.62.0/26 handle="k8s-pod-network.c56ff584beaef0df8dc640c4d1a715e567cb9c56068b78511e63827f8c2d58a4" host="10.244.90.186" Jan 29 14:44:28.225812 containerd[1624]: 2025-01-29 14:44:28.177 [INFO][3387] ipam/ipam.go 1685: Creating new handle: k8s-pod-network.c56ff584beaef0df8dc640c4d1a715e567cb9c56068b78511e63827f8c2d58a4 Jan 29 14:44:28.225812 containerd[1624]: 2025-01-29 14:44:28.185 [INFO][3387] ipam/ipam.go 1203: Writing block in order to claim IPs block=192.168.62.0/26 handle="k8s-pod-network.c56ff584beaef0df8dc640c4d1a715e567cb9c56068b78511e63827f8c2d58a4" host="10.244.90.186" Jan 29 14:44:28.225812 containerd[1624]: 2025-01-29 14:44:28.192 [INFO][3387] ipam/ipam.go 1216: Successfully claimed IPs: [192.168.62.3/26] block=192.168.62.0/26 
handle="k8s-pod-network.c56ff584beaef0df8dc640c4d1a715e567cb9c56068b78511e63827f8c2d58a4" host="10.244.90.186" Jan 29 14:44:28.225812 containerd[1624]: 2025-01-29 14:44:28.192 [INFO][3387] ipam/ipam.go 847: Auto-assigned 1 out of 1 IPv4s: [192.168.62.3/26] handle="k8s-pod-network.c56ff584beaef0df8dc640c4d1a715e567cb9c56068b78511e63827f8c2d58a4" host="10.244.90.186" Jan 29 14:44:28.225812 containerd[1624]: 2025-01-29 14:44:28.192 [INFO][3387] ipam/ipam_plugin.go 374: Released host-wide IPAM lock. Jan 29 14:44:28.225812 containerd[1624]: 2025-01-29 14:44:28.192 [INFO][3387] ipam/ipam_plugin.go 283: Calico CNI IPAM assigned addresses IPv4=[192.168.62.3/26] IPv6=[] ContainerID="c56ff584beaef0df8dc640c4d1a715e567cb9c56068b78511e63827f8c2d58a4" HandleID="k8s-pod-network.c56ff584beaef0df8dc640c4d1a715e567cb9c56068b78511e63827f8c2d58a4" Workload="10.244.90.186-k8s-nfs--server--provisioner--0-eth0" Jan 29 14:44:28.226695 containerd[1624]: 2025-01-29 14:44:28.195 [INFO][3375] cni-plugin/k8s.go 386: Populated endpoint ContainerID="c56ff584beaef0df8dc640c4d1a715e567cb9c56068b78511e63827f8c2d58a4" Namespace="default" Pod="nfs-server-provisioner-0" WorkloadEndpoint="10.244.90.186-k8s-nfs--server--provisioner--0-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"10.244.90.186-k8s-nfs--server--provisioner--0-eth0", GenerateName:"nfs-server-provisioner-", Namespace:"default", SelfLink:"", UID:"6591fd99-6f5a-4217-a215-a1021a6d47e2", ResourceVersion:"1142", Generation:0, CreationTimestamp:time.Date(2025, time.January, 29, 14, 44, 27, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"app":"nfs-server-provisioner", "apps.kubernetes.io/pod-index":"0", "chart":"nfs-server-provisioner-1.8.0", "controller-revision-hash":"nfs-server-provisioner-d5cbb7f57", "heritage":"Helm", "projectcalico.org/namespace":"default", 
"projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"nfs-server-provisioner", "release":"nfs-server-provisioner", "statefulset.kubernetes.io/pod-name":"nfs-server-provisioner-0"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"10.244.90.186", ContainerID:"", Pod:"nfs-server-provisioner-0", Endpoint:"eth0", ServiceAccountName:"nfs-server-provisioner", IPNetworks:[]string{"192.168.62.3/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.default", "ksa.default.nfs-server-provisioner"}, InterfaceName:"cali60e51b789ff", MAC:"", Ports:[]v3.WorkloadEndpointPort{v3.WorkloadEndpointPort{Name:"nfs", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x801, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"nfs-udp", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"UDP"}, Port:0x801, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"nlockmgr", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x8023, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"nlockmgr-udp", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"UDP"}, Port:0x8023, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"mountd", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x4e50, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"mountd-udp", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"UDP"}, Port:0x4e50, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"rquotad", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x36b, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"rquotad-udp", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"UDP"}, Port:0x36b, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"rpcbind", 
Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x6f, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"rpcbind-udp", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"UDP"}, Port:0x6f, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"statd", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x296, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"statd-udp", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"UDP"}, Port:0x296, HostPort:0x0, HostIP:""}}, AllowSpoofedSourcePrefixes:[]string(nil)}} Jan 29 14:44:28.226695 containerd[1624]: 2025-01-29 14:44:28.195 [INFO][3375] cni-plugin/k8s.go 387: Calico CNI using IPs: [192.168.62.3/32] ContainerID="c56ff584beaef0df8dc640c4d1a715e567cb9c56068b78511e63827f8c2d58a4" Namespace="default" Pod="nfs-server-provisioner-0" WorkloadEndpoint="10.244.90.186-k8s-nfs--server--provisioner--0-eth0" Jan 29 14:44:28.226695 containerd[1624]: 2025-01-29 14:44:28.195 [INFO][3375] cni-plugin/dataplane_linux.go 69: Setting the host side veth name to cali60e51b789ff ContainerID="c56ff584beaef0df8dc640c4d1a715e567cb9c56068b78511e63827f8c2d58a4" Namespace="default" Pod="nfs-server-provisioner-0" WorkloadEndpoint="10.244.90.186-k8s-nfs--server--provisioner--0-eth0" Jan 29 14:44:28.226695 containerd[1624]: 2025-01-29 14:44:28.204 [INFO][3375] cni-plugin/dataplane_linux.go 508: Disabling IPv4 forwarding ContainerID="c56ff584beaef0df8dc640c4d1a715e567cb9c56068b78511e63827f8c2d58a4" Namespace="default" Pod="nfs-server-provisioner-0" WorkloadEndpoint="10.244.90.186-k8s-nfs--server--provisioner--0-eth0" Jan 29 14:44:28.226925 containerd[1624]: 2025-01-29 14:44:28.206 [INFO][3375] cni-plugin/k8s.go 414: Added Mac, interface name, and active container ID to endpoint ContainerID="c56ff584beaef0df8dc640c4d1a715e567cb9c56068b78511e63827f8c2d58a4" Namespace="default" Pod="nfs-server-provisioner-0" WorkloadEndpoint="10.244.90.186-k8s-nfs--server--provisioner--0-eth0" 
endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"10.244.90.186-k8s-nfs--server--provisioner--0-eth0", GenerateName:"nfs-server-provisioner-", Namespace:"default", SelfLink:"", UID:"6591fd99-6f5a-4217-a215-a1021a6d47e2", ResourceVersion:"1142", Generation:0, CreationTimestamp:time.Date(2025, time.January, 29, 14, 44, 27, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"app":"nfs-server-provisioner", "apps.kubernetes.io/pod-index":"0", "chart":"nfs-server-provisioner-1.8.0", "controller-revision-hash":"nfs-server-provisioner-d5cbb7f57", "heritage":"Helm", "projectcalico.org/namespace":"default", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"nfs-server-provisioner", "release":"nfs-server-provisioner", "statefulset.kubernetes.io/pod-name":"nfs-server-provisioner-0"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"10.244.90.186", ContainerID:"c56ff584beaef0df8dc640c4d1a715e567cb9c56068b78511e63827f8c2d58a4", Pod:"nfs-server-provisioner-0", Endpoint:"eth0", ServiceAccountName:"nfs-server-provisioner", IPNetworks:[]string{"192.168.62.3/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.default", "ksa.default.nfs-server-provisioner"}, InterfaceName:"cali60e51b789ff", MAC:"c2:c4:63:26:04:86", Ports:[]v3.WorkloadEndpointPort{v3.WorkloadEndpointPort{Name:"nfs", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x801, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"nfs-udp", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"UDP"}, Port:0x801, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"nlockmgr", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, 
StrVal:"TCP"}, Port:0x8023, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"nlockmgr-udp", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"UDP"}, Port:0x8023, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"mountd", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x4e50, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"mountd-udp", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"UDP"}, Port:0x4e50, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"rquotad", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x36b, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"rquotad-udp", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"UDP"}, Port:0x36b, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"rpcbind", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x6f, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"rpcbind-udp", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"UDP"}, Port:0x6f, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"statd", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x296, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"statd-udp", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"UDP"}, Port:0x296, HostPort:0x0, HostIP:""}}, AllowSpoofedSourcePrefixes:[]string(nil)}} Jan 29 14:44:28.226925 containerd[1624]: 2025-01-29 14:44:28.224 [INFO][3375] cni-plugin/k8s.go 500: Wrote updated endpoint to datastore ContainerID="c56ff584beaef0df8dc640c4d1a715e567cb9c56068b78511e63827f8c2d58a4" Namespace="default" Pod="nfs-server-provisioner-0" WorkloadEndpoint="10.244.90.186-k8s-nfs--server--provisioner--0-eth0" Jan 29 14:44:28.259615 containerd[1624]: time="2025-01-29T14:44:28.259512620Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." 
runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Jan 29 14:44:28.260160 containerd[1624]: time="2025-01-29T14:44:28.259589843Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Jan 29 14:44:28.260160 containerd[1624]: time="2025-01-29T14:44:28.259619377Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Jan 29 14:44:28.260832 containerd[1624]: time="2025-01-29T14:44:28.260179575Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Jan 29 14:44:28.320769 containerd[1624]: time="2025-01-29T14:44:28.320735720Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:nfs-server-provisioner-0,Uid:6591fd99-6f5a-4217-a215-a1021a6d47e2,Namespace:default,Attempt:0,} returns sandbox id \"c56ff584beaef0df8dc640c4d1a715e567cb9c56068b78511e63827f8c2d58a4\"" Jan 29 14:44:28.322943 containerd[1624]: time="2025-01-29T14:44:28.322921687Z" level=info msg="PullImage \"registry.k8s.io/sig-storage/nfs-provisioner:v4.0.8\"" Jan 29 14:44:28.440674 kubelet[2062]: E0129 14:44:28.439532 2062 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Jan 29 14:44:29.441532 kubelet[2062]: E0129 14:44:29.441445 2062 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Jan 29 14:44:29.538892 systemd-networkd[1266]: cali60e51b789ff: Gained IPv6LL Jan 29 14:44:30.442678 kubelet[2062]: E0129 14:44:30.442618 2062 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Jan 29 14:44:30.828098 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount678700668.mount: Deactivated successfully. 
Jan 29 14:44:31.443517 kubelet[2062]: E0129 14:44:31.443456 2062 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Jan 29 14:44:32.444006 kubelet[2062]: E0129 14:44:32.443966 2062 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Jan 29 14:44:32.627947 containerd[1624]: time="2025-01-29T14:44:32.627887257Z" level=info msg="ImageCreate event name:\"registry.k8s.io/sig-storage/nfs-provisioner:v4.0.8\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jan 29 14:44:32.629317 containerd[1624]: time="2025-01-29T14:44:32.629268574Z" level=info msg="stop pulling image registry.k8s.io/sig-storage/nfs-provisioner:v4.0.8: active requests=0, bytes read=91039414" Jan 29 14:44:32.629579 containerd[1624]: time="2025-01-29T14:44:32.629551944Z" level=info msg="ImageCreate event name:\"sha256:fd0b16f70b66b72bcb2f91d556fa33eba02729c44ffc5f2c16130e7f9fbed3c4\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jan 29 14:44:32.632919 containerd[1624]: time="2025-01-29T14:44:32.632885576Z" level=info msg="ImageCreate event name:\"registry.k8s.io/sig-storage/nfs-provisioner@sha256:c825f3d5e28bde099bd7a3daace28772d412c9157ad47fa752a9ad0baafc118d\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jan 29 14:44:32.634805 containerd[1624]: time="2025-01-29T14:44:32.634761118Z" level=info msg="Pulled image \"registry.k8s.io/sig-storage/nfs-provisioner:v4.0.8\" with image id \"sha256:fd0b16f70b66b72bcb2f91d556fa33eba02729c44ffc5f2c16130e7f9fbed3c4\", repo tag \"registry.k8s.io/sig-storage/nfs-provisioner:v4.0.8\", repo digest \"registry.k8s.io/sig-storage/nfs-provisioner@sha256:c825f3d5e28bde099bd7a3daace28772d412c9157ad47fa752a9ad0baafc118d\", size \"91036984\" in 4.311696901s" Jan 29 14:44:32.634874 containerd[1624]: time="2025-01-29T14:44:32.634807629Z" level=info msg="PullImage \"registry.k8s.io/sig-storage/nfs-provisioner:v4.0.8\" 
returns image reference \"sha256:fd0b16f70b66b72bcb2f91d556fa33eba02729c44ffc5f2c16130e7f9fbed3c4\"" Jan 29 14:44:32.637110 containerd[1624]: time="2025-01-29T14:44:32.637073735Z" level=info msg="CreateContainer within sandbox \"c56ff584beaef0df8dc640c4d1a715e567cb9c56068b78511e63827f8c2d58a4\" for container &ContainerMetadata{Name:nfs-server-provisioner,Attempt:0,}" Jan 29 14:44:32.647307 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount3388604497.mount: Deactivated successfully. Jan 29 14:44:32.650028 containerd[1624]: time="2025-01-29T14:44:32.649940825Z" level=info msg="CreateContainer within sandbox \"c56ff584beaef0df8dc640c4d1a715e567cb9c56068b78511e63827f8c2d58a4\" for &ContainerMetadata{Name:nfs-server-provisioner,Attempt:0,} returns container id \"afbb151165b2a64c2c095549d99b1eaa7133c016971d980c861057f5935bbdc9\"" Jan 29 14:44:32.651275 containerd[1624]: time="2025-01-29T14:44:32.650523485Z" level=info msg="StartContainer for \"afbb151165b2a64c2c095549d99b1eaa7133c016971d980c861057f5935bbdc9\"" Jan 29 14:44:32.720077 containerd[1624]: time="2025-01-29T14:44:32.719964384Z" level=info msg="StartContainer for \"afbb151165b2a64c2c095549d99b1eaa7133c016971d980c861057f5935bbdc9\" returns successfully" Jan 29 14:44:32.777683 kubelet[2062]: I0129 14:44:32.777610 2062 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="default/nfs-server-provisioner-0" podStartSLOduration=1.464565682 podStartE2EDuration="5.777580902s" podCreationTimestamp="2025-01-29 14:44:27 +0000 UTC" firstStartedPulling="2025-01-29 14:44:28.322464591 +0000 UTC m=+48.491761406" lastFinishedPulling="2025-01-29 14:44:32.635479815 +0000 UTC m=+52.804776626" observedRunningTime="2025-01-29 14:44:32.775550534 +0000 UTC m=+52.944847369" watchObservedRunningTime="2025-01-29 14:44:32.777580902 +0000 UTC m=+52.946877736" Jan 29 14:44:33.445487 kubelet[2062]: E0129 14:44:33.445386 2062 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" 
path="/etc/kubernetes/manifests" Jan 29 14:44:34.446400 kubelet[2062]: E0129 14:44:34.446283 2062 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Jan 29 14:44:35.447472 kubelet[2062]: E0129 14:44:35.447385 2062 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Jan 29 14:44:36.448631 kubelet[2062]: E0129 14:44:36.448541 2062 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Jan 29 14:44:37.449372 kubelet[2062]: E0129 14:44:37.449214 2062 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Jan 29 14:44:38.450219 kubelet[2062]: E0129 14:44:38.450062 2062 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Jan 29 14:44:39.451054 kubelet[2062]: E0129 14:44:39.450941 2062 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Jan 29 14:44:40.403489 kubelet[2062]: E0129 14:44:40.403343 2062 file.go:104] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Jan 29 14:44:40.435394 containerd[1624]: time="2025-01-29T14:44:40.435180103Z" level=info msg="StopPodSandbox for \"a243e11d0ad5230161ae28f3bc2de05c26ace31ffe6ff0f4d829858f7cf46e75\"" Jan 29 14:44:40.451848 kubelet[2062]: E0129 14:44:40.451812 2062 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Jan 29 14:44:40.569345 containerd[1624]: 2025-01-29 14:44:40.500 [WARNING][3584] cni-plugin/k8s.go 572: CNI_CONTAINERID does not match WorkloadEndpoint ContainerID, don't delete WEP. 
ContainerID="a243e11d0ad5230161ae28f3bc2de05c26ace31ffe6ff0f4d829858f7cf46e75" WorkloadEndpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"10.244.90.186-k8s-csi--node--driver--fxq86-eth0", GenerateName:"csi-node-driver-", Namespace:"calico-system", SelfLink:"", UID:"ce838e40-b5e1-4fd4-ba08-f12503c5fb8a", ResourceVersion:"1075", Generation:0, CreationTimestamp:time.Date(2025, time.January, 29, 14, 43, 40, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"app.kubernetes.io/name":"csi-node-driver", "controller-revision-hash":"65bf684474", "k8s-app":"csi-node-driver", "name":"csi-node-driver", "pod-template-generation":"1", "projectcalico.org/namespace":"calico-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"csi-node-driver"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"10.244.90.186", ContainerID:"cf83fcf3e006d90e291b6ef9afcc397cb22ff45f52d4fc240fa35edc603be01a", Pod:"csi-node-driver-fxq86", Endpoint:"eth0", ServiceAccountName:"csi-node-driver", IPNetworks:[]string{"192.168.62.1/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-system", "ksa.calico-system.csi-node-driver"}, InterfaceName:"cali66ddbedf53d", MAC:"", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil)}} Jan 29 14:44:40.569345 containerd[1624]: 2025-01-29 14:44:40.501 [INFO][3584] cni-plugin/k8s.go 608: Cleaning up netns ContainerID="a243e11d0ad5230161ae28f3bc2de05c26ace31ffe6ff0f4d829858f7cf46e75" Jan 29 14:44:40.569345 containerd[1624]: 2025-01-29 14:44:40.501 [INFO][3584] cni-plugin/dataplane_linux.go 555: CleanUpNamespace called with no netns name, ignoring. 
ContainerID="a243e11d0ad5230161ae28f3bc2de05c26ace31ffe6ff0f4d829858f7cf46e75" iface="eth0" netns="" Jan 29 14:44:40.569345 containerd[1624]: 2025-01-29 14:44:40.501 [INFO][3584] cni-plugin/k8s.go 615: Releasing IP address(es) ContainerID="a243e11d0ad5230161ae28f3bc2de05c26ace31ffe6ff0f4d829858f7cf46e75" Jan 29 14:44:40.569345 containerd[1624]: 2025-01-29 14:44:40.501 [INFO][3584] cni-plugin/utils.go 188: Calico CNI releasing IP address ContainerID="a243e11d0ad5230161ae28f3bc2de05c26ace31ffe6ff0f4d829858f7cf46e75" Jan 29 14:44:40.569345 containerd[1624]: 2025-01-29 14:44:40.549 [INFO][3590] ipam/ipam_plugin.go 412: Releasing address using handleID ContainerID="a243e11d0ad5230161ae28f3bc2de05c26ace31ffe6ff0f4d829858f7cf46e75" HandleID="k8s-pod-network.a243e11d0ad5230161ae28f3bc2de05c26ace31ffe6ff0f4d829858f7cf46e75" Workload="10.244.90.186-k8s-csi--node--driver--fxq86-eth0" Jan 29 14:44:40.569345 containerd[1624]: 2025-01-29 14:44:40.549 [INFO][3590] ipam/ipam_plugin.go 353: About to acquire host-wide IPAM lock. Jan 29 14:44:40.569345 containerd[1624]: 2025-01-29 14:44:40.549 [INFO][3590] ipam/ipam_plugin.go 368: Acquired host-wide IPAM lock. Jan 29 14:44:40.569345 containerd[1624]: 2025-01-29 14:44:40.560 [WARNING][3590] ipam/ipam_plugin.go 429: Asked to release address but it doesn't exist. 
Ignoring ContainerID="a243e11d0ad5230161ae28f3bc2de05c26ace31ffe6ff0f4d829858f7cf46e75" HandleID="k8s-pod-network.a243e11d0ad5230161ae28f3bc2de05c26ace31ffe6ff0f4d829858f7cf46e75" Workload="10.244.90.186-k8s-csi--node--driver--fxq86-eth0" Jan 29 14:44:40.569345 containerd[1624]: 2025-01-29 14:44:40.561 [INFO][3590] ipam/ipam_plugin.go 440: Releasing address using workloadID ContainerID="a243e11d0ad5230161ae28f3bc2de05c26ace31ffe6ff0f4d829858f7cf46e75" HandleID="k8s-pod-network.a243e11d0ad5230161ae28f3bc2de05c26ace31ffe6ff0f4d829858f7cf46e75" Workload="10.244.90.186-k8s-csi--node--driver--fxq86-eth0" Jan 29 14:44:40.569345 containerd[1624]: 2025-01-29 14:44:40.564 [INFO][3590] ipam/ipam_plugin.go 374: Released host-wide IPAM lock. Jan 29 14:44:40.569345 containerd[1624]: 2025-01-29 14:44:40.566 [INFO][3584] cni-plugin/k8s.go 621: Teardown processing complete. ContainerID="a243e11d0ad5230161ae28f3bc2de05c26ace31ffe6ff0f4d829858f7cf46e75" Jan 29 14:44:40.571362 containerd[1624]: time="2025-01-29T14:44:40.569379568Z" level=info msg="TearDown network for sandbox \"a243e11d0ad5230161ae28f3bc2de05c26ace31ffe6ff0f4d829858f7cf46e75\" successfully" Jan 29 14:44:40.571362 containerd[1624]: time="2025-01-29T14:44:40.569449394Z" level=info msg="StopPodSandbox for \"a243e11d0ad5230161ae28f3bc2de05c26ace31ffe6ff0f4d829858f7cf46e75\" returns successfully" Jan 29 14:44:40.577240 containerd[1624]: time="2025-01-29T14:44:40.577164244Z" level=info msg="RemovePodSandbox for \"a243e11d0ad5230161ae28f3bc2de05c26ace31ffe6ff0f4d829858f7cf46e75\"" Jan 29 14:44:40.577240 containerd[1624]: time="2025-01-29T14:44:40.577249986Z" level=info msg="Forcibly stopping sandbox \"a243e11d0ad5230161ae28f3bc2de05c26ace31ffe6ff0f4d829858f7cf46e75\"" Jan 29 14:44:40.687058 containerd[1624]: 2025-01-29 14:44:40.627 [WARNING][3611] cni-plugin/k8s.go 572: CNI_CONTAINERID does not match WorkloadEndpoint ContainerID, don't delete WEP. 
ContainerID="a243e11d0ad5230161ae28f3bc2de05c26ace31ffe6ff0f4d829858f7cf46e75" WorkloadEndpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"10.244.90.186-k8s-csi--node--driver--fxq86-eth0", GenerateName:"csi-node-driver-", Namespace:"calico-system", SelfLink:"", UID:"ce838e40-b5e1-4fd4-ba08-f12503c5fb8a", ResourceVersion:"1075", Generation:0, CreationTimestamp:time.Date(2025, time.January, 29, 14, 43, 40, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"app.kubernetes.io/name":"csi-node-driver", "controller-revision-hash":"65bf684474", "k8s-app":"csi-node-driver", "name":"csi-node-driver", "pod-template-generation":"1", "projectcalico.org/namespace":"calico-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"csi-node-driver"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"10.244.90.186", ContainerID:"cf83fcf3e006d90e291b6ef9afcc397cb22ff45f52d4fc240fa35edc603be01a", Pod:"csi-node-driver-fxq86", Endpoint:"eth0", ServiceAccountName:"csi-node-driver", IPNetworks:[]string{"192.168.62.1/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-system", "ksa.calico-system.csi-node-driver"}, InterfaceName:"cali66ddbedf53d", MAC:"", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil)}} Jan 29 14:44:40.687058 containerd[1624]: 2025-01-29 14:44:40.627 [INFO][3611] cni-plugin/k8s.go 608: Cleaning up netns ContainerID="a243e11d0ad5230161ae28f3bc2de05c26ace31ffe6ff0f4d829858f7cf46e75" Jan 29 14:44:40.687058 containerd[1624]: 2025-01-29 14:44:40.627 [INFO][3611] cni-plugin/dataplane_linux.go 555: CleanUpNamespace called with no netns name, ignoring. 
ContainerID="a243e11d0ad5230161ae28f3bc2de05c26ace31ffe6ff0f4d829858f7cf46e75" iface="eth0" netns="" Jan 29 14:44:40.687058 containerd[1624]: 2025-01-29 14:44:40.627 [INFO][3611] cni-plugin/k8s.go 615: Releasing IP address(es) ContainerID="a243e11d0ad5230161ae28f3bc2de05c26ace31ffe6ff0f4d829858f7cf46e75" Jan 29 14:44:40.687058 containerd[1624]: 2025-01-29 14:44:40.627 [INFO][3611] cni-plugin/utils.go 188: Calico CNI releasing IP address ContainerID="a243e11d0ad5230161ae28f3bc2de05c26ace31ffe6ff0f4d829858f7cf46e75" Jan 29 14:44:40.687058 containerd[1624]: 2025-01-29 14:44:40.659 [INFO][3617] ipam/ipam_plugin.go 412: Releasing address using handleID ContainerID="a243e11d0ad5230161ae28f3bc2de05c26ace31ffe6ff0f4d829858f7cf46e75" HandleID="k8s-pod-network.a243e11d0ad5230161ae28f3bc2de05c26ace31ffe6ff0f4d829858f7cf46e75" Workload="10.244.90.186-k8s-csi--node--driver--fxq86-eth0" Jan 29 14:44:40.687058 containerd[1624]: 2025-01-29 14:44:40.659 [INFO][3617] ipam/ipam_plugin.go 353: About to acquire host-wide IPAM lock. Jan 29 14:44:40.687058 containerd[1624]: 2025-01-29 14:44:40.659 [INFO][3617] ipam/ipam_plugin.go 368: Acquired host-wide IPAM lock. Jan 29 14:44:40.687058 containerd[1624]: 2025-01-29 14:44:40.677 [WARNING][3617] ipam/ipam_plugin.go 429: Asked to release address but it doesn't exist. 
Ignoring ContainerID="a243e11d0ad5230161ae28f3bc2de05c26ace31ffe6ff0f4d829858f7cf46e75" HandleID="k8s-pod-network.a243e11d0ad5230161ae28f3bc2de05c26ace31ffe6ff0f4d829858f7cf46e75" Workload="10.244.90.186-k8s-csi--node--driver--fxq86-eth0" Jan 29 14:44:40.687058 containerd[1624]: 2025-01-29 14:44:40.677 [INFO][3617] ipam/ipam_plugin.go 440: Releasing address using workloadID ContainerID="a243e11d0ad5230161ae28f3bc2de05c26ace31ffe6ff0f4d829858f7cf46e75" HandleID="k8s-pod-network.a243e11d0ad5230161ae28f3bc2de05c26ace31ffe6ff0f4d829858f7cf46e75" Workload="10.244.90.186-k8s-csi--node--driver--fxq86-eth0" Jan 29 14:44:40.687058 containerd[1624]: 2025-01-29 14:44:40.681 [INFO][3617] ipam/ipam_plugin.go 374: Released host-wide IPAM lock. Jan 29 14:44:40.687058 containerd[1624]: 2025-01-29 14:44:40.683 [INFO][3611] cni-plugin/k8s.go 621: Teardown processing complete. ContainerID="a243e11d0ad5230161ae28f3bc2de05c26ace31ffe6ff0f4d829858f7cf46e75" Jan 29 14:44:40.687058 containerd[1624]: time="2025-01-29T14:44:40.687026090Z" level=info msg="TearDown network for sandbox \"a243e11d0ad5230161ae28f3bc2de05c26ace31ffe6ff0f4d829858f7cf46e75\" successfully" Jan 29 14:44:40.704115 containerd[1624]: time="2025-01-29T14:44:40.704046735Z" level=warning msg="Failed to get podSandbox status for container event for sandboxID \"a243e11d0ad5230161ae28f3bc2de05c26ace31ffe6ff0f4d829858f7cf46e75\": an error occurred when try to find sandbox: not found. Sending the event with nil podSandboxStatus." 
Jan 29 14:44:40.704417 containerd[1624]: time="2025-01-29T14:44:40.704145538Z" level=info msg="RemovePodSandbox \"a243e11d0ad5230161ae28f3bc2de05c26ace31ffe6ff0f4d829858f7cf46e75\" returns successfully" Jan 29 14:44:40.705303 containerd[1624]: time="2025-01-29T14:44:40.704996709Z" level=info msg="StopPodSandbox for \"4ff0d307650b1efbc8a9551094595c21646f49bb914e716542d50075adcbb6f1\"" Jan 29 14:44:40.814846 containerd[1624]: 2025-01-29 14:44:40.765 [WARNING][3635] cni-plugin/k8s.go 572: CNI_CONTAINERID does not match WorkloadEndpoint ContainerID, don't delete WEP. ContainerID="4ff0d307650b1efbc8a9551094595c21646f49bb914e716542d50075adcbb6f1" WorkloadEndpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"10.244.90.186-k8s-nginx--deployment--85f456d6dd--k9ghk-eth0", GenerateName:"nginx-deployment-85f456d6dd-", Namespace:"default", SelfLink:"", UID:"499af898-5fd3-427d-ac5a-a50de5d9cc4e", ResourceVersion:"1106", Generation:0, CreationTimestamp:time.Date(2025, time.January, 29, 14, 43, 58, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"app":"nginx", "pod-template-hash":"85f456d6dd", "projectcalico.org/namespace":"default", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"default"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"10.244.90.186", ContainerID:"4973e3849685513233ee3affab04aeea843e1cfc955bb277c5cd0693dfb6df76", Pod:"nginx-deployment-85f456d6dd-k9ghk", Endpoint:"eth0", ServiceAccountName:"default", IPNetworks:[]string{"192.168.62.2/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.default", "ksa.default.default"}, InterfaceName:"cali36884bf53a4", MAC:"", Ports:[]v3.WorkloadEndpointPort(nil), 
AllowSpoofedSourcePrefixes:[]string(nil)}} Jan 29 14:44:40.814846 containerd[1624]: 2025-01-29 14:44:40.766 [INFO][3635] cni-plugin/k8s.go 608: Cleaning up netns ContainerID="4ff0d307650b1efbc8a9551094595c21646f49bb914e716542d50075adcbb6f1" Jan 29 14:44:40.814846 containerd[1624]: 2025-01-29 14:44:40.766 [INFO][3635] cni-plugin/dataplane_linux.go 555: CleanUpNamespace called with no netns name, ignoring. ContainerID="4ff0d307650b1efbc8a9551094595c21646f49bb914e716542d50075adcbb6f1" iface="eth0" netns="" Jan 29 14:44:40.814846 containerd[1624]: 2025-01-29 14:44:40.766 [INFO][3635] cni-plugin/k8s.go 615: Releasing IP address(es) ContainerID="4ff0d307650b1efbc8a9551094595c21646f49bb914e716542d50075adcbb6f1" Jan 29 14:44:40.814846 containerd[1624]: 2025-01-29 14:44:40.766 [INFO][3635] cni-plugin/utils.go 188: Calico CNI releasing IP address ContainerID="4ff0d307650b1efbc8a9551094595c21646f49bb914e716542d50075adcbb6f1" Jan 29 14:44:40.814846 containerd[1624]: 2025-01-29 14:44:40.793 [INFO][3641] ipam/ipam_plugin.go 412: Releasing address using handleID ContainerID="4ff0d307650b1efbc8a9551094595c21646f49bb914e716542d50075adcbb6f1" HandleID="k8s-pod-network.4ff0d307650b1efbc8a9551094595c21646f49bb914e716542d50075adcbb6f1" Workload="10.244.90.186-k8s-nginx--deployment--85f456d6dd--k9ghk-eth0" Jan 29 14:44:40.814846 containerd[1624]: 2025-01-29 14:44:40.793 [INFO][3641] ipam/ipam_plugin.go 353: About to acquire host-wide IPAM lock. Jan 29 14:44:40.814846 containerd[1624]: 2025-01-29 14:44:40.793 [INFO][3641] ipam/ipam_plugin.go 368: Acquired host-wide IPAM lock. Jan 29 14:44:40.814846 containerd[1624]: 2025-01-29 14:44:40.803 [WARNING][3641] ipam/ipam_plugin.go 429: Asked to release address but it doesn't exist. 
Ignoring ContainerID="4ff0d307650b1efbc8a9551094595c21646f49bb914e716542d50075adcbb6f1" HandleID="k8s-pod-network.4ff0d307650b1efbc8a9551094595c21646f49bb914e716542d50075adcbb6f1" Workload="10.244.90.186-k8s-nginx--deployment--85f456d6dd--k9ghk-eth0" Jan 29 14:44:40.814846 containerd[1624]: 2025-01-29 14:44:40.803 [INFO][3641] ipam/ipam_plugin.go 440: Releasing address using workloadID ContainerID="4ff0d307650b1efbc8a9551094595c21646f49bb914e716542d50075adcbb6f1" HandleID="k8s-pod-network.4ff0d307650b1efbc8a9551094595c21646f49bb914e716542d50075adcbb6f1" Workload="10.244.90.186-k8s-nginx--deployment--85f456d6dd--k9ghk-eth0" Jan 29 14:44:40.814846 containerd[1624]: 2025-01-29 14:44:40.809 [INFO][3641] ipam/ipam_plugin.go 374: Released host-wide IPAM lock. Jan 29 14:44:40.814846 containerd[1624]: 2025-01-29 14:44:40.811 [INFO][3635] cni-plugin/k8s.go 621: Teardown processing complete. ContainerID="4ff0d307650b1efbc8a9551094595c21646f49bb914e716542d50075adcbb6f1" Jan 29 14:44:40.816184 containerd[1624]: time="2025-01-29T14:44:40.814902474Z" level=info msg="TearDown network for sandbox \"4ff0d307650b1efbc8a9551094595c21646f49bb914e716542d50075adcbb6f1\" successfully" Jan 29 14:44:40.816184 containerd[1624]: time="2025-01-29T14:44:40.814946646Z" level=info msg="StopPodSandbox for \"4ff0d307650b1efbc8a9551094595c21646f49bb914e716542d50075adcbb6f1\" returns successfully" Jan 29 14:44:40.816184 containerd[1624]: time="2025-01-29T14:44:40.815366025Z" level=info msg="RemovePodSandbox for \"4ff0d307650b1efbc8a9551094595c21646f49bb914e716542d50075adcbb6f1\"" Jan 29 14:44:40.816184 containerd[1624]: time="2025-01-29T14:44:40.815396576Z" level=info msg="Forcibly stopping sandbox \"4ff0d307650b1efbc8a9551094595c21646f49bb914e716542d50075adcbb6f1\"" Jan 29 14:44:40.898270 containerd[1624]: 2025-01-29 14:44:40.859 [WARNING][3659] cni-plugin/k8s.go 572: CNI_CONTAINERID does not match WorkloadEndpoint ContainerID, don't delete WEP. 
ContainerID="4ff0d307650b1efbc8a9551094595c21646f49bb914e716542d50075adcbb6f1" WorkloadEndpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"10.244.90.186-k8s-nginx--deployment--85f456d6dd--k9ghk-eth0", GenerateName:"nginx-deployment-85f456d6dd-", Namespace:"default", SelfLink:"", UID:"499af898-5fd3-427d-ac5a-a50de5d9cc4e", ResourceVersion:"1106", Generation:0, CreationTimestamp:time.Date(2025, time.January, 29, 14, 43, 58, 0, time.Local), DeletionTimestamp:<nil>, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"app":"nginx", "pod-template-hash":"85f456d6dd", "projectcalico.org/namespace":"default", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"default"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"10.244.90.186", ContainerID:"4973e3849685513233ee3affab04aeea843e1cfc955bb277c5cd0693dfb6df76", Pod:"nginx-deployment-85f456d6dd-k9ghk", Endpoint:"eth0", ServiceAccountName:"default", IPNetworks:[]string{"192.168.62.2/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.default", "ksa.default.default"}, InterfaceName:"cali36884bf53a4", MAC:"", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil)}}
Jan 29 14:44:40.898270 containerd[1624]: 2025-01-29 14:44:40.859 [INFO][3659] cni-plugin/k8s.go 608: Cleaning up netns ContainerID="4ff0d307650b1efbc8a9551094595c21646f49bb914e716542d50075adcbb6f1"
Jan 29 14:44:40.898270 containerd[1624]: 2025-01-29 14:44:40.859 [INFO][3659] cni-plugin/dataplane_linux.go 555: CleanUpNamespace called with no netns name, ignoring. ContainerID="4ff0d307650b1efbc8a9551094595c21646f49bb914e716542d50075adcbb6f1" iface="eth0" netns=""
Jan 29 14:44:40.898270 containerd[1624]: 2025-01-29 14:44:40.859 [INFO][3659] cni-plugin/k8s.go 615: Releasing IP address(es) ContainerID="4ff0d307650b1efbc8a9551094595c21646f49bb914e716542d50075adcbb6f1"
Jan 29 14:44:40.898270 containerd[1624]: 2025-01-29 14:44:40.859 [INFO][3659] cni-plugin/utils.go 188: Calico CNI releasing IP address ContainerID="4ff0d307650b1efbc8a9551094595c21646f49bb914e716542d50075adcbb6f1"
Jan 29 14:44:40.898270 containerd[1624]: 2025-01-29 14:44:40.882 [INFO][3665] ipam/ipam_plugin.go 412: Releasing address using handleID ContainerID="4ff0d307650b1efbc8a9551094595c21646f49bb914e716542d50075adcbb6f1" HandleID="k8s-pod-network.4ff0d307650b1efbc8a9551094595c21646f49bb914e716542d50075adcbb6f1" Workload="10.244.90.186-k8s-nginx--deployment--85f456d6dd--k9ghk-eth0"
Jan 29 14:44:40.898270 containerd[1624]: 2025-01-29 14:44:40.882 [INFO][3665] ipam/ipam_plugin.go 353: About to acquire host-wide IPAM lock.
Jan 29 14:44:40.898270 containerd[1624]: 2025-01-29 14:44:40.882 [INFO][3665] ipam/ipam_plugin.go 368: Acquired host-wide IPAM lock.
Jan 29 14:44:40.898270 containerd[1624]: 2025-01-29 14:44:40.893 [WARNING][3665] ipam/ipam_plugin.go 429: Asked to release address but it doesn't exist. Ignoring ContainerID="4ff0d307650b1efbc8a9551094595c21646f49bb914e716542d50075adcbb6f1" HandleID="k8s-pod-network.4ff0d307650b1efbc8a9551094595c21646f49bb914e716542d50075adcbb6f1" Workload="10.244.90.186-k8s-nginx--deployment--85f456d6dd--k9ghk-eth0"
Jan 29 14:44:40.898270 containerd[1624]: 2025-01-29 14:44:40.893 [INFO][3665] ipam/ipam_plugin.go 440: Releasing address using workloadID ContainerID="4ff0d307650b1efbc8a9551094595c21646f49bb914e716542d50075adcbb6f1" HandleID="k8s-pod-network.4ff0d307650b1efbc8a9551094595c21646f49bb914e716542d50075adcbb6f1" Workload="10.244.90.186-k8s-nginx--deployment--85f456d6dd--k9ghk-eth0"
Jan 29 14:44:40.898270 containerd[1624]: 2025-01-29 14:44:40.895 [INFO][3665] ipam/ipam_plugin.go 374: Released host-wide IPAM lock.
Jan 29 14:44:40.898270 containerd[1624]: 2025-01-29 14:44:40.896 [INFO][3659] cni-plugin/k8s.go 621: Teardown processing complete. ContainerID="4ff0d307650b1efbc8a9551094595c21646f49bb914e716542d50075adcbb6f1"
Jan 29 14:44:40.898270 containerd[1624]: time="2025-01-29T14:44:40.898210303Z" level=info msg="TearDown network for sandbox \"4ff0d307650b1efbc8a9551094595c21646f49bb914e716542d50075adcbb6f1\" successfully"
Jan 29 14:44:40.900357 containerd[1624]: time="2025-01-29T14:44:40.900314173Z" level=warning msg="Failed to get podSandbox status for container event for sandboxID \"4ff0d307650b1efbc8a9551094595c21646f49bb914e716542d50075adcbb6f1\": an error occurred when try to find sandbox: not found. Sending the event with nil podSandboxStatus."
Jan 29 14:44:40.900466 containerd[1624]: time="2025-01-29T14:44:40.900369318Z" level=info msg="RemovePodSandbox \"4ff0d307650b1efbc8a9551094595c21646f49bb914e716542d50075adcbb6f1\" returns successfully"
Jan 29 14:44:41.453053 kubelet[2062]: E0129 14:44:41.452951 2062 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests"
Jan 29 14:44:42.337722 kubelet[2062]: I0129 14:44:42.337556 2062 topology_manager.go:215] "Topology Admit Handler" podUID="92455141-4eb9-4e27-84c8-8b09b0992a4d" podNamespace="default" podName="test-pod-1"
Jan 29 14:44:42.453478 kubelet[2062]: E0129 14:44:42.453389 2062 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests"
Jan 29 14:44:42.535937 kubelet[2062]: I0129 14:44:42.535720 2062 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"pvc-a0fc1721-3954-4a2e-b532-8f44da4a7933\" (UniqueName: \"kubernetes.io/nfs/92455141-4eb9-4e27-84c8-8b09b0992a4d-pvc-a0fc1721-3954-4a2e-b532-8f44da4a7933\") pod \"test-pod-1\" (UID: \"92455141-4eb9-4e27-84c8-8b09b0992a4d\") " pod="default/test-pod-1"
Jan 29 14:44:42.535937 kubelet[2062]: I0129 14:44:42.535812 2062 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-rm86c\" (UniqueName: \"kubernetes.io/projected/92455141-4eb9-4e27-84c8-8b09b0992a4d-kube-api-access-rm86c\") pod \"test-pod-1\" (UID: \"92455141-4eb9-4e27-84c8-8b09b0992a4d\") " pod="default/test-pod-1"
Jan 29 14:44:42.693274 kernel: FS-Cache: Loaded
Jan 29 14:44:42.772334 kernel: RPC: Registered named UNIX socket transport module.
Jan 29 14:44:42.772538 kernel: RPC: Registered udp transport module.
Jan 29 14:44:42.773465 kernel: RPC: Registered tcp transport module.
Jan 29 14:44:42.773559 kernel: RPC: Registered tcp-with-tls transport module.
Jan 29 14:44:42.774489 kernel: RPC: Registered tcp NFSv4.1 backchannel transport module.
Jan 29 14:44:43.101568 kernel: NFS: Registering the id_resolver key type
Jan 29 14:44:43.102280 kernel: Key type id_resolver registered
Jan 29 14:44:43.103322 kernel: Key type id_legacy registered
Jan 29 14:44:43.158544 nfsidmap[3692]: nss_getpwnam: name 'root@nfs-server-provisioner.default.svc.cluster.local' does not map into domain 'gb1.brightbox.com'
Jan 29 14:44:43.173005 nfsidmap[3696]: nss_name_to_gid: name 'root@nfs-server-provisioner.default.svc.cluster.local' does not map into domain 'gb1.brightbox.com'
Jan 29 14:44:43.247075 containerd[1624]: time="2025-01-29T14:44:43.246966957Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:test-pod-1,Uid:92455141-4eb9-4e27-84c8-8b09b0992a4d,Namespace:default,Attempt:0,}"
Jan 29 14:44:43.433504 systemd-networkd[1266]: cali5ec59c6bf6e: Link UP
Jan 29 14:44:43.435347 systemd-networkd[1266]: cali5ec59c6bf6e: Gained carrier
Jan 29 14:44:43.449558 containerd[1624]: 2025-01-29 14:44:43.318 [INFO][3699] cni-plugin/plugin.go 325: Calico CNI found existing endpoint: &{{WorkloadEndpoint projectcalico.org/v3} {10.244.90.186-k8s-test--pod--1-eth0 default 92455141-4eb9-4e27-84c8-8b09b0992a4d 1210 0 2025-01-29 14:44:29 +0000 UTC map[projectcalico.org/namespace:default projectcalico.org/orchestrator:k8s projectcalico.org/serviceaccount:default] map[] [] [] []} {k8s 10.244.90.186 test-pod-1 eth0 default [] [] [kns.default ksa.default.default] cali5ec59c6bf6e [] []}} ContainerID="ea70280eea40c29a49a524cb22eee7df2131e5dda4a9cd2c8d564f51f19903b2" Namespace="default" Pod="test-pod-1" WorkloadEndpoint="10.244.90.186-k8s-test--pod--1-"
Jan 29 14:44:43.449558 containerd[1624]: 2025-01-29 14:44:43.319 [INFO][3699] cni-plugin/k8s.go 77: Extracted identifiers for CmdAddK8s ContainerID="ea70280eea40c29a49a524cb22eee7df2131e5dda4a9cd2c8d564f51f19903b2" Namespace="default" Pod="test-pod-1" WorkloadEndpoint="10.244.90.186-k8s-test--pod--1-eth0"
Jan 29 14:44:43.449558 containerd[1624]: 2025-01-29 14:44:43.366 [INFO][3710] ipam/ipam_plugin.go 225: Calico CNI IPAM request count IPv4=1 IPv6=0 ContainerID="ea70280eea40c29a49a524cb22eee7df2131e5dda4a9cd2c8d564f51f19903b2" HandleID="k8s-pod-network.ea70280eea40c29a49a524cb22eee7df2131e5dda4a9cd2c8d564f51f19903b2" Workload="10.244.90.186-k8s-test--pod--1-eth0"
Jan 29 14:44:43.449558 containerd[1624]: 2025-01-29 14:44:43.380 [INFO][3710] ipam/ipam_plugin.go 265: Auto assigning IP ContainerID="ea70280eea40c29a49a524cb22eee7df2131e5dda4a9cd2c8d564f51f19903b2" HandleID="k8s-pod-network.ea70280eea40c29a49a524cb22eee7df2131e5dda4a9cd2c8d564f51f19903b2" Workload="10.244.90.186-k8s-test--pod--1-eth0" assignArgs=ipam.AutoAssignArgs{Num4:1, Num6:0, HandleID:(*string)(0xc000292c40), Attrs:map[string]string{"namespace":"default", "node":"10.244.90.186", "pod":"test-pod-1", "timestamp":"2025-01-29 14:44:43.36637961 +0000 UTC"}, Hostname:"10.244.90.186", IPv4Pools:[]net.IPNet{}, IPv6Pools:[]net.IPNet{}, MaxBlocksPerHost:0, HostReservedAttrIPv4s:(*ipam.HostReservedAttr)(nil), HostReservedAttrIPv6s:(*ipam.HostReservedAttr)(nil), IntendedUse:"Workload"}
Jan 29 14:44:43.449558 containerd[1624]: 2025-01-29 14:44:43.380 [INFO][3710] ipam/ipam_plugin.go 353: About to acquire host-wide IPAM lock.
Jan 29 14:44:43.449558 containerd[1624]: 2025-01-29 14:44:43.380 [INFO][3710] ipam/ipam_plugin.go 368: Acquired host-wide IPAM lock.
Jan 29 14:44:43.449558 containerd[1624]: 2025-01-29 14:44:43.380 [INFO][3710] ipam/ipam.go 107: Auto-assign 1 ipv4, 0 ipv6 addrs for host '10.244.90.186'
Jan 29 14:44:43.449558 containerd[1624]: 2025-01-29 14:44:43.383 [INFO][3710] ipam/ipam.go 660: Looking up existing affinities for host handle="k8s-pod-network.ea70280eea40c29a49a524cb22eee7df2131e5dda4a9cd2c8d564f51f19903b2" host="10.244.90.186"
Jan 29 14:44:43.449558 containerd[1624]: 2025-01-29 14:44:43.389 [INFO][3710] ipam/ipam.go 372: Looking up existing affinities for host host="10.244.90.186"
Jan 29 14:44:43.449558 containerd[1624]: 2025-01-29 14:44:43.395 [INFO][3710] ipam/ipam.go 489: Trying affinity for 192.168.62.0/26 host="10.244.90.186"
Jan 29 14:44:43.449558 containerd[1624]: 2025-01-29 14:44:43.398 [INFO][3710] ipam/ipam.go 155: Attempting to load block cidr=192.168.62.0/26 host="10.244.90.186"
Jan 29 14:44:43.449558 containerd[1624]: 2025-01-29 14:44:43.403 [INFO][3710] ipam/ipam.go 232: Affinity is confirmed and block has been loaded cidr=192.168.62.0/26 host="10.244.90.186"
Jan 29 14:44:43.449558 containerd[1624]: 2025-01-29 14:44:43.403 [INFO][3710] ipam/ipam.go 1180: Attempting to assign 1 addresses from block block=192.168.62.0/26 handle="k8s-pod-network.ea70280eea40c29a49a524cb22eee7df2131e5dda4a9cd2c8d564f51f19903b2" host="10.244.90.186"
Jan 29 14:44:43.449558 containerd[1624]: 2025-01-29 14:44:43.406 [INFO][3710] ipam/ipam.go 1685: Creating new handle: k8s-pod-network.ea70280eea40c29a49a524cb22eee7df2131e5dda4a9cd2c8d564f51f19903b2
Jan 29 14:44:43.449558 containerd[1624]: 2025-01-29 14:44:43.411 [INFO][3710] ipam/ipam.go 1203: Writing block in order to claim IPs block=192.168.62.0/26 handle="k8s-pod-network.ea70280eea40c29a49a524cb22eee7df2131e5dda4a9cd2c8d564f51f19903b2" host="10.244.90.186"
Jan 29 14:44:43.449558 containerd[1624]: 2025-01-29 14:44:43.423 [INFO][3710] ipam/ipam.go 1216: Successfully claimed IPs: [192.168.62.4/26] block=192.168.62.0/26 handle="k8s-pod-network.ea70280eea40c29a49a524cb22eee7df2131e5dda4a9cd2c8d564f51f19903b2" host="10.244.90.186"
Jan 29 14:44:43.449558 containerd[1624]: 2025-01-29 14:44:43.424 [INFO][3710] ipam/ipam.go 847: Auto-assigned 1 out of 1 IPv4s: [192.168.62.4/26] handle="k8s-pod-network.ea70280eea40c29a49a524cb22eee7df2131e5dda4a9cd2c8d564f51f19903b2" host="10.244.90.186"
Jan 29 14:44:43.449558 containerd[1624]: 2025-01-29 14:44:43.424 [INFO][3710] ipam/ipam_plugin.go 374: Released host-wide IPAM lock.
Jan 29 14:44:43.449558 containerd[1624]: 2025-01-29 14:44:43.424 [INFO][3710] ipam/ipam_plugin.go 283: Calico CNI IPAM assigned addresses IPv4=[192.168.62.4/26] IPv6=[] ContainerID="ea70280eea40c29a49a524cb22eee7df2131e5dda4a9cd2c8d564f51f19903b2" HandleID="k8s-pod-network.ea70280eea40c29a49a524cb22eee7df2131e5dda4a9cd2c8d564f51f19903b2" Workload="10.244.90.186-k8s-test--pod--1-eth0"
Jan 29 14:44:43.449558 containerd[1624]: 2025-01-29 14:44:43.427 [INFO][3699] cni-plugin/k8s.go 386: Populated endpoint ContainerID="ea70280eea40c29a49a524cb22eee7df2131e5dda4a9cd2c8d564f51f19903b2" Namespace="default" Pod="test-pod-1" WorkloadEndpoint="10.244.90.186-k8s-test--pod--1-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"10.244.90.186-k8s-test--pod--1-eth0", GenerateName:"", Namespace:"default", SelfLink:"", UID:"92455141-4eb9-4e27-84c8-8b09b0992a4d", ResourceVersion:"1210", Generation:0, CreationTimestamp:time.Date(2025, time.January, 29, 14, 44, 29, 0, time.Local), DeletionTimestamp:<nil>, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"projectcalico.org/namespace":"default", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"default"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"10.244.90.186", ContainerID:"", Pod:"test-pod-1", Endpoint:"eth0", ServiceAccountName:"default", IPNetworks:[]string{"192.168.62.4/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.default", "ksa.default.default"}, InterfaceName:"cali5ec59c6bf6e", MAC:"", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil)}}
Jan 29 14:44:43.455877 containerd[1624]: 2025-01-29 14:44:43.427 [INFO][3699] cni-plugin/k8s.go 387: Calico CNI using IPs: [192.168.62.4/32] ContainerID="ea70280eea40c29a49a524cb22eee7df2131e5dda4a9cd2c8d564f51f19903b2" Namespace="default" Pod="test-pod-1" WorkloadEndpoint="10.244.90.186-k8s-test--pod--1-eth0"
Jan 29 14:44:43.455877 containerd[1624]: 2025-01-29 14:44:43.427 [INFO][3699] cni-plugin/dataplane_linux.go 69: Setting the host side veth name to cali5ec59c6bf6e ContainerID="ea70280eea40c29a49a524cb22eee7df2131e5dda4a9cd2c8d564f51f19903b2" Namespace="default" Pod="test-pod-1" WorkloadEndpoint="10.244.90.186-k8s-test--pod--1-eth0"
Jan 29 14:44:43.455877 containerd[1624]: 2025-01-29 14:44:43.436 [INFO][3699] cni-plugin/dataplane_linux.go 508: Disabling IPv4 forwarding ContainerID="ea70280eea40c29a49a524cb22eee7df2131e5dda4a9cd2c8d564f51f19903b2" Namespace="default" Pod="test-pod-1" WorkloadEndpoint="10.244.90.186-k8s-test--pod--1-eth0"
Jan 29 14:44:43.455877 containerd[1624]: 2025-01-29 14:44:43.437 [INFO][3699] cni-plugin/k8s.go 414: Added Mac, interface name, and active container ID to endpoint ContainerID="ea70280eea40c29a49a524cb22eee7df2131e5dda4a9cd2c8d564f51f19903b2" Namespace="default" Pod="test-pod-1" WorkloadEndpoint="10.244.90.186-k8s-test--pod--1-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"10.244.90.186-k8s-test--pod--1-eth0", GenerateName:"", Namespace:"default", SelfLink:"", UID:"92455141-4eb9-4e27-84c8-8b09b0992a4d", ResourceVersion:"1210", Generation:0, CreationTimestamp:time.Date(2025, time.January, 29, 14, 44, 29, 0, time.Local), DeletionTimestamp:<nil>, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"projectcalico.org/namespace":"default", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"default"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"10.244.90.186", ContainerID:"ea70280eea40c29a49a524cb22eee7df2131e5dda4a9cd2c8d564f51f19903b2", Pod:"test-pod-1", Endpoint:"eth0", ServiceAccountName:"default", IPNetworks:[]string{"192.168.62.4/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.default", "ksa.default.default"}, InterfaceName:"cali5ec59c6bf6e", MAC:"e2:e6:11:97:c4:49", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil)}}
Jan 29 14:44:43.455877 containerd[1624]: 2025-01-29 14:44:43.447 [INFO][3699] cni-plugin/k8s.go 500: Wrote updated endpoint to datastore ContainerID="ea70280eea40c29a49a524cb22eee7df2131e5dda4a9cd2c8d564f51f19903b2" Namespace="default" Pod="test-pod-1" WorkloadEndpoint="10.244.90.186-k8s-test--pod--1-eth0"
Jan 29 14:44:43.456481 kubelet[2062]: E0129 14:44:43.454172 2062 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests"
Jan 29 14:44:43.486655 containerd[1624]: time="2025-01-29T14:44:43.486217030Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1
Jan 29 14:44:43.486655 containerd[1624]: time="2025-01-29T14:44:43.486611241Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1
Jan 29 14:44:43.488039 containerd[1624]: time="2025-01-29T14:44:43.486629921Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
Jan 29 14:44:43.488039 containerd[1624]: time="2025-01-29T14:44:43.486722140Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
Jan 29 14:44:43.550766 containerd[1624]: time="2025-01-29T14:44:43.550723342Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:test-pod-1,Uid:92455141-4eb9-4e27-84c8-8b09b0992a4d,Namespace:default,Attempt:0,} returns sandbox id \"ea70280eea40c29a49a524cb22eee7df2131e5dda4a9cd2c8d564f51f19903b2\""
Jan 29 14:44:43.554365 containerd[1624]: time="2025-01-29T14:44:43.553806502Z" level=info msg="PullImage \"ghcr.io/flatcar/nginx:latest\""
Jan 29 14:44:43.923613 containerd[1624]: time="2025-01-29T14:44:43.923503189Z" level=info msg="ImageUpdate event name:\"ghcr.io/flatcar/nginx:latest\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Jan 29 14:44:43.924744 containerd[1624]: time="2025-01-29T14:44:43.924619483Z" level=info msg="stop pulling image ghcr.io/flatcar/nginx:latest: active requests=0, bytes read=61"
Jan 29 14:44:43.931846 containerd[1624]: time="2025-01-29T14:44:43.931665961Z" level=info msg="Pulled image \"ghcr.io/flatcar/nginx:latest\" with image id \"sha256:0dcfd986e814f68db775fba6b61fbaec3761562dc2ae3043d38dbff123e1bb1e\", repo tag \"ghcr.io/flatcar/nginx:latest\", repo digest \"ghcr.io/flatcar/nginx@sha256:2ffeb5a7ca68f2017f0bc48251750a6e40fcd3c341b94a22fc7812dcabbb84db\", size \"71015439\" in 377.804876ms"
Jan 29 14:44:43.931846 containerd[1624]: time="2025-01-29T14:44:43.931712378Z" level=info msg="PullImage \"ghcr.io/flatcar/nginx:latest\" returns image reference \"sha256:0dcfd986e814f68db775fba6b61fbaec3761562dc2ae3043d38dbff123e1bb1e\""
Jan 29 14:44:43.935033 containerd[1624]: time="2025-01-29T14:44:43.934903192Z" level=info msg="CreateContainer within sandbox \"ea70280eea40c29a49a524cb22eee7df2131e5dda4a9cd2c8d564f51f19903b2\" for container &ContainerMetadata{Name:test,Attempt:0,}"
Jan 29 14:44:43.968242 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount3289504913.mount: Deactivated successfully.
Jan 29 14:44:43.971980 containerd[1624]: time="2025-01-29T14:44:43.971832039Z" level=info msg="CreateContainer within sandbox \"ea70280eea40c29a49a524cb22eee7df2131e5dda4a9cd2c8d564f51f19903b2\" for &ContainerMetadata{Name:test,Attempt:0,} returns container id \"7ca844ac6d68abe31e828cb1c47f2437eb22bb5513804593a71777cbb93cdd04\""
Jan 29 14:44:43.972924 containerd[1624]: time="2025-01-29T14:44:43.972888106Z" level=info msg="StartContainer for \"7ca844ac6d68abe31e828cb1c47f2437eb22bb5513804593a71777cbb93cdd04\""
Jan 29 14:44:44.075205 containerd[1624]: time="2025-01-29T14:44:44.074619794Z" level=info msg="StartContainer for \"7ca844ac6d68abe31e828cb1c47f2437eb22bb5513804593a71777cbb93cdd04\" returns successfully"
Jan 29 14:44:44.455509 kubelet[2062]: E0129 14:44:44.455399 2062 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests"
Jan 29 14:44:45.090644 systemd-networkd[1266]: cali5ec59c6bf6e: Gained IPv6LL
Jan 29 14:44:45.456526 kubelet[2062]: E0129 14:44:45.456312 2062 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests"
Jan 29 14:44:46.457482 kubelet[2062]: E0129 14:44:46.457382 2062 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests"
Jan 29 14:44:47.458267 kubelet[2062]: E0129 14:44:47.458152 2062 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests"
Jan 29 14:44:48.459032 kubelet[2062]: E0129 14:44:48.458918 2062 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests"
Jan 29 14:44:49.460273 kubelet[2062]: E0129 14:44:49.460120 2062 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests"
Jan 29 14:44:50.460826 kubelet[2062]: E0129 14:44:50.460731 2062 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests"