Jan 30 19:15:20.059402 kernel: Linux version 6.6.74-flatcar (build@pony-truck.infra.kinvolk.io) (x86_64-cros-linux-gnu-gcc (Gentoo Hardened 13.3.1_p20240614 p17) 13.3.1 20240614, GNU ld (Gentoo 2.42 p3) 2.42.0) #1 SMP PREEMPT_DYNAMIC Wed Jan 29 10:09:32 -00 2025 Jan 30 19:15:20.059452 kernel: Command line: BOOT_IMAGE=/flatcar/vmlinuz-a mount.usr=/dev/mapper/usr verity.usr=PARTUUID=7130c94a-213a-4e5a-8e26-6cce9662f132 rootflags=rw mount.usrflags=ro consoleblank=0 root=LABEL=ROOT console=ttyS0,115200n8 console=tty0 flatcar.first_boot=detected flatcar.oem.id=openstack flatcar.autologin verity.usrhash=befc9792b021bef43c896e00e1d5172b6224dbafc9b6c92b267e5e544378e681 Jan 30 19:15:20.059467 kernel: BIOS-provided physical RAM map: Jan 30 19:15:20.059485 kernel: BIOS-e820: [mem 0x0000000000000000-0x000000000009fbff] usable Jan 30 19:15:20.059495 kernel: BIOS-e820: [mem 0x000000000009fc00-0x000000000009ffff] reserved Jan 30 19:15:20.059505 kernel: BIOS-e820: [mem 0x00000000000f0000-0x00000000000fffff] reserved Jan 30 19:15:20.059517 kernel: BIOS-e820: [mem 0x0000000000100000-0x000000007ffdbfff] usable Jan 30 19:15:20.059528 kernel: BIOS-e820: [mem 0x000000007ffdc000-0x000000007fffffff] reserved Jan 30 19:15:20.059538 kernel: BIOS-e820: [mem 0x00000000b0000000-0x00000000bfffffff] reserved Jan 30 19:15:20.059549 kernel: BIOS-e820: [mem 0x00000000fed1c000-0x00000000fed1ffff] reserved Jan 30 19:15:20.059560 kernel: BIOS-e820: [mem 0x00000000feffc000-0x00000000feffffff] reserved Jan 30 19:15:20.059570 kernel: BIOS-e820: [mem 0x00000000fffc0000-0x00000000ffffffff] reserved Jan 30 19:15:20.059586 kernel: NX (Execute Disable) protection: active Jan 30 19:15:20.059597 kernel: APIC: Static calls initialized Jan 30 19:15:20.059609 kernel: SMBIOS 2.8 present. Jan 30 19:15:20.059621 kernel: DMI: Red Hat KVM/RHEL-AV, BIOS 1.13.0-2.module_el8.5.0+2608+72063365 04/01/2014 Jan 30 19:15:20.059633 kernel: Hypervisor detected: KVM Jan 30 19:15:20.059649 kernel: kvm-clock: Using msrs 4b564d01 and 4b564d00 Jan 30 19:15:20.059661 kernel: kvm-clock: using sched offset of 4679002164 cycles Jan 30 19:15:20.059673 kernel: clocksource: kvm-clock: mask: 0xffffffffffffffff max_cycles: 0x1cd42e4dffb, max_idle_ns: 881590591483 ns Jan 30 19:15:20.059685 kernel: tsc: Detected 2499.998 MHz processor Jan 30 19:15:20.059697 kernel: e820: update [mem 0x00000000-0x00000fff] usable ==> reserved Jan 30 19:15:20.059709 kernel: e820: remove [mem 0x000a0000-0x000fffff] usable Jan 30 19:15:20.059773 kernel: last_pfn = 0x7ffdc max_arch_pfn = 0x400000000 Jan 30 19:15:20.059787 kernel: MTRR map: 4 entries (3 fixed + 1 variable; max 19), built from 8 variable MTRRs Jan 30 19:15:20.059798 kernel: x86/PAT: Configuration [0-7]: WB WC UC- UC WB WP UC- WT Jan 30 19:15:20.059817 kernel: Using GB pages for direct mapping Jan 30 19:15:20.059829 kernel: ACPI: Early table checksum verification disabled Jan 30 19:15:20.059840 kernel: ACPI: RSDP 0x00000000000F5AA0 000014 (v00 BOCHS ) Jan 30 19:15:20.059852 kernel: ACPI: RSDT 0x000000007FFE47A5 000038 (v01 BOCHS BXPC 00000001 BXPC 00000001) Jan 30 19:15:20.059864 kernel: ACPI: FACP 0x000000007FFE438D 0000F4 (v03 BOCHS BXPC 00000001 BXPC 00000001) Jan 30 19:15:20.059875 kernel: ACPI: DSDT 0x000000007FFDFD80 00460D (v01 BOCHS BXPC 00000001 BXPC 00000001) Jan 30 19:15:20.059887 kernel: ACPI: FACS 0x000000007FFDFD40 000040 Jan 30 19:15:20.059898 kernel: ACPI: APIC 0x000000007FFE4481 0000F0 (v01 BOCHS BXPC 00000001 BXPC 00000001) Jan 30 19:15:20.059922 kernel: ACPI: SRAT 0x000000007FFE4571 0001D0 (v01 BOCHS BXPC 
00000001 BXPC 00000001) Jan 30 19:15:20.059941 kernel: ACPI: MCFG 0x000000007FFE4741 00003C (v01 BOCHS BXPC 00000001 BXPC 00000001) Jan 30 19:15:20.059953 kernel: ACPI: WAET 0x000000007FFE477D 000028 (v01 BOCHS BXPC 00000001 BXPC 00000001) Jan 30 19:15:20.059964 kernel: ACPI: Reserving FACP table memory at [mem 0x7ffe438d-0x7ffe4480] Jan 30 19:15:20.059976 kernel: ACPI: Reserving DSDT table memory at [mem 0x7ffdfd80-0x7ffe438c] Jan 30 19:15:20.059988 kernel: ACPI: Reserving FACS table memory at [mem 0x7ffdfd40-0x7ffdfd7f] Jan 30 19:15:20.060006 kernel: ACPI: Reserving APIC table memory at [mem 0x7ffe4481-0x7ffe4570] Jan 30 19:15:20.060018 kernel: ACPI: Reserving SRAT table memory at [mem 0x7ffe4571-0x7ffe4740] Jan 30 19:15:20.060036 kernel: ACPI: Reserving MCFG table memory at [mem 0x7ffe4741-0x7ffe477c] Jan 30 19:15:20.060048 kernel: ACPI: Reserving WAET table memory at [mem 0x7ffe477d-0x7ffe47a4] Jan 30 19:15:20.060060 kernel: SRAT: PXM 0 -> APIC 0x00 -> Node 0 Jan 30 19:15:20.060072 kernel: SRAT: PXM 0 -> APIC 0x01 -> Node 0 Jan 30 19:15:20.060084 kernel: SRAT: PXM 0 -> APIC 0x02 -> Node 0 Jan 30 19:15:20.060096 kernel: SRAT: PXM 0 -> APIC 0x03 -> Node 0 Jan 30 19:15:20.060108 kernel: SRAT: PXM 0 -> APIC 0x04 -> Node 0 Jan 30 19:15:20.060119 kernel: SRAT: PXM 0 -> APIC 0x05 -> Node 0 Jan 30 19:15:20.060151 kernel: SRAT: PXM 0 -> APIC 0x06 -> Node 0 Jan 30 19:15:20.060164 kernel: SRAT: PXM 0 -> APIC 0x07 -> Node 0 Jan 30 19:15:20.060183 kernel: SRAT: PXM 0 -> APIC 0x08 -> Node 0 Jan 30 19:15:20.060196 kernel: SRAT: PXM 0 -> APIC 0x09 -> Node 0 Jan 30 19:15:20.060208 kernel: SRAT: PXM 0 -> APIC 0x0a -> Node 0 Jan 30 19:15:20.060220 kernel: SRAT: PXM 0 -> APIC 0x0b -> Node 0 Jan 30 19:15:20.060232 kernel: SRAT: PXM 0 -> APIC 0x0c -> Node 0 Jan 30 19:15:20.060244 kernel: SRAT: PXM 0 -> APIC 0x0d -> Node 0 Jan 30 19:15:20.060255 kernel: SRAT: PXM 0 -> APIC 0x0e -> Node 0 Jan 30 19:15:20.060303 kernel: SRAT: PXM 0 -> APIC 0x0f -> Node 0 Jan 30 19:15:20.060317 kernel: ACPI: SRAT: Node 0 PXM 0 [mem 0x00000000-0x0009ffff] Jan 30 19:15:20.060329 kernel: ACPI: SRAT: Node 0 PXM 0 [mem 0x00100000-0x7fffffff] Jan 30 19:15:20.060341 kernel: ACPI: SRAT: Node 0 PXM 0 [mem 0x100000000-0x20800fffff] hotplug Jan 30 19:15:20.060354 kernel: NUMA: Node 0 [mem 0x00000000-0x0009ffff] + [mem 0x00100000-0x7ffdbfff] -> [mem 0x00000000-0x7ffdbfff] Jan 30 19:15:20.060366 kernel: NODE_DATA(0) allocated [mem 0x7ffd6000-0x7ffdbfff] Jan 30 19:15:20.060379 kernel: Zone ranges: Jan 30 19:15:20.060391 kernel: DMA [mem 0x0000000000001000-0x0000000000ffffff] Jan 30 19:15:20.060403 kernel: DMA32 [mem 0x0000000001000000-0x000000007ffdbfff] Jan 30 19:15:20.060420 kernel: Normal empty Jan 30 19:15:20.064452 kernel: Movable zone start for each node Jan 30 19:15:20.064492 kernel: Early memory node ranges Jan 30 19:15:20.064519 kernel: node 0: [mem 0x0000000000001000-0x000000000009efff] Jan 30 19:15:20.064532 kernel: node 0: [mem 0x0000000000100000-0x000000007ffdbfff] Jan 30 19:15:20.064544 kernel: Initmem setup node 0 [mem 0x0000000000001000-0x000000007ffdbfff] Jan 30 19:15:20.064556 kernel: On node 0, zone DMA: 1 pages in unavailable ranges Jan 30 19:15:20.064568 kernel: On node 0, zone DMA: 97 pages in unavailable ranges Jan 30 19:15:20.064579 kernel: On node 0, zone DMA32: 36 pages in unavailable ranges Jan 30 19:15:20.064591 kernel: ACPI: PM-Timer IO Port: 0x608 Jan 30 19:15:20.064611 kernel: ACPI: LAPIC_NMI (acpi_id[0xff] dfl dfl lint[0x1]) Jan 30 19:15:20.064623 kernel: IOAPIC[0]: apic_id 0, version 17, address 0xfec00000, 
GSI 0-23 Jan 30 19:15:20.064635 kernel: ACPI: INT_SRC_OVR (bus 0 bus_irq 0 global_irq 2 dfl dfl) Jan 30 19:15:20.064647 kernel: ACPI: INT_SRC_OVR (bus 0 bus_irq 5 global_irq 5 high level) Jan 30 19:15:20.064658 kernel: ACPI: INT_SRC_OVR (bus 0 bus_irq 9 global_irq 9 high level) Jan 30 19:15:20.064670 kernel: ACPI: INT_SRC_OVR (bus 0 bus_irq 10 global_irq 10 high level) Jan 30 19:15:20.064682 kernel: ACPI: INT_SRC_OVR (bus 0 bus_irq 11 global_irq 11 high level) Jan 30 19:15:20.064694 kernel: ACPI: Using ACPI (MADT) for SMP configuration information Jan 30 19:15:20.064718 kernel: TSC deadline timer available Jan 30 19:15:20.064736 kernel: smpboot: Allowing 16 CPUs, 14 hotplug CPUs Jan 30 19:15:20.064749 kernel: kvm-guest: APIC: eoi() replaced with kvm_guest_apic_eoi_write() Jan 30 19:15:20.064761 kernel: [mem 0xc0000000-0xfed1bfff] available for PCI devices Jan 30 19:15:20.064773 kernel: Booting paravirtualized kernel on KVM Jan 30 19:15:20.064785 kernel: clocksource: refined-jiffies: mask: 0xffffffff max_cycles: 0xffffffff, max_idle_ns: 1910969940391419 ns Jan 30 19:15:20.064798 kernel: setup_percpu: NR_CPUS:512 nr_cpumask_bits:16 nr_cpu_ids:16 nr_node_ids:1 Jan 30 19:15:20.064810 kernel: percpu: Embedded 58 pages/cpu s197032 r8192 d32344 u262144 Jan 30 19:15:20.064822 kernel: pcpu-alloc: s197032 r8192 d32344 u262144 alloc=1*2097152 Jan 30 19:15:20.064834 kernel: pcpu-alloc: [0] 00 01 02 03 04 05 06 07 [0] 08 09 10 11 12 13 14 15 Jan 30 19:15:20.064852 kernel: kvm-guest: PV spinlocks enabled Jan 30 19:15:20.064864 kernel: PV qspinlock hash table entries: 256 (order: 0, 4096 bytes, linear) Jan 30 19:15:20.064878 kernel: Kernel command line: rootflags=rw mount.usrflags=ro BOOT_IMAGE=/flatcar/vmlinuz-a mount.usr=/dev/mapper/usr verity.usr=PARTUUID=7130c94a-213a-4e5a-8e26-6cce9662f132 rootflags=rw mount.usrflags=ro consoleblank=0 root=LABEL=ROOT console=ttyS0,115200n8 console=tty0 flatcar.first_boot=detected flatcar.oem.id=openstack flatcar.autologin verity.usrhash=befc9792b021bef43c896e00e1d5172b6224dbafc9b6c92b267e5e544378e681 Jan 30 19:15:20.064891 kernel: Unknown kernel command line parameters "BOOT_IMAGE=/flatcar/vmlinuz-a", will be passed to user space. Jan 30 19:15:20.064904 kernel: random: crng init done Jan 30 19:15:20.064931 kernel: Dentry cache hash table entries: 262144 (order: 9, 2097152 bytes, linear) Jan 30 19:15:20.064944 kernel: Inode-cache hash table entries: 131072 (order: 8, 1048576 bytes, linear) Jan 30 19:15:20.064956 kernel: Fallback order for Node 0: 0 Jan 30 19:15:20.064974 kernel: Built 1 zonelists, mobility grouping on. Total pages: 515804 Jan 30 19:15:20.064987 kernel: Policy zone: DMA32 Jan 30 19:15:20.064999 kernel: mem auto-init: stack:off, heap alloc:off, heap free:off Jan 30 19:15:20.065012 kernel: software IO TLB: area num 16. Jan 30 19:15:20.065024 kernel: Memory: 1901532K/2096616K available (12288K kernel code, 2301K rwdata, 22728K rodata, 42844K init, 2348K bss, 194824K reserved, 0K cma-reserved) Jan 30 19:15:20.065037 kernel: SLUB: HWalign=64, Order=0-3, MinObjects=0, CPUs=16, Nodes=1 Jan 30 19:15:20.065049 kernel: Kernel/User page tables isolation: enabled Jan 30 19:15:20.065061 kernel: ftrace: allocating 37921 entries in 149 pages Jan 30 19:15:20.065073 kernel: ftrace: allocated 149 pages with 4 groups Jan 30 19:15:20.065090 kernel: Dynamic Preempt: voluntary Jan 30 19:15:20.065103 kernel: rcu: Preemptible hierarchical RCU implementation. Jan 30 19:15:20.065116 kernel: rcu: RCU event tracing is enabled. 
Jan 30 19:15:20.065128 kernel: rcu: RCU restricting CPUs from NR_CPUS=512 to nr_cpu_ids=16. Jan 30 19:15:20.065141 kernel: Trampoline variant of Tasks RCU enabled. Jan 30 19:15:20.065166 kernel: Rude variant of Tasks RCU enabled. Jan 30 19:15:20.065184 kernel: Tracing variant of Tasks RCU enabled. Jan 30 19:15:20.065197 kernel: rcu: RCU calculated value of scheduler-enlistment delay is 100 jiffies. Jan 30 19:15:20.065209 kernel: rcu: Adjusting geometry for rcu_fanout_leaf=16, nr_cpu_ids=16 Jan 30 19:15:20.065222 kernel: NR_IRQS: 33024, nr_irqs: 552, preallocated irqs: 16 Jan 30 19:15:20.065235 kernel: rcu: srcu_init: Setting srcu_struct sizes based on contention. Jan 30 19:15:20.065247 kernel: Console: colour VGA+ 80x25 Jan 30 19:15:20.065265 kernel: printk: console [tty0] enabled Jan 30 19:15:20.065278 kernel: printk: console [ttyS0] enabled Jan 30 19:15:20.065291 kernel: ACPI: Core revision 20230628 Jan 30 19:15:20.065304 kernel: APIC: Switch to symmetric I/O mode setup Jan 30 19:15:20.065316 kernel: x2apic enabled Jan 30 19:15:20.065334 kernel: APIC: Switched APIC routing to: physical x2apic Jan 30 19:15:20.065348 kernel: clocksource: tsc-early: mask: 0xffffffffffffffff max_cycles: 0x240937b9988, max_idle_ns: 440795218083 ns Jan 30 19:15:20.065361 kernel: Calibrating delay loop (skipped) preset value.. 4999.99 BogoMIPS (lpj=2499998) Jan 30 19:15:20.065373 kernel: x86/cpu: User Mode Instruction Prevention (UMIP) activated Jan 30 19:15:20.065386 kernel: Last level iTLB entries: 4KB 0, 2MB 0, 4MB 0 Jan 30 19:15:20.065399 kernel: Last level dTLB entries: 4KB 0, 2MB 0, 4MB 0, 1GB 0 Jan 30 19:15:20.065412 kernel: Spectre V1 : Mitigation: usercopy/swapgs barriers and __user pointer sanitization Jan 30 19:15:20.065424 kernel: Spectre V2 : Mitigation: Retpolines Jan 30 19:15:20.066512 kernel: Spectre V2 : Spectre v2 / SpectreRSB mitigation: Filling RSB on context switch Jan 30 19:15:20.066530 kernel: Spectre V2 : Spectre v2 / SpectreRSB : Filling RSB on VMEXIT Jan 30 19:15:20.066551 kernel: Spectre V2 : Enabling Restricted Speculation for firmware calls Jan 30 19:15:20.066564 kernel: Spectre V2 : mitigation: Enabling conditional Indirect Branch Prediction Barrier Jan 30 19:15:20.066576 kernel: Speculative Store Bypass: Mitigation: Speculative Store Bypass disabled via prctl Jan 30 19:15:20.066589 kernel: MDS: Mitigation: Clear CPU buffers Jan 30 19:15:20.066602 kernel: MMIO Stale Data: Unknown: No mitigations Jan 30 19:15:20.066614 kernel: SRBDS: Unknown: Dependent on hypervisor status Jan 30 19:15:20.066627 kernel: x86/fpu: Supporting XSAVE feature 0x001: 'x87 floating point registers' Jan 30 19:15:20.066642 kernel: x86/fpu: Supporting XSAVE feature 0x002: 'SSE registers' Jan 30 19:15:20.066654 kernel: x86/fpu: Supporting XSAVE feature 0x004: 'AVX registers' Jan 30 19:15:20.066667 kernel: x86/fpu: xstate_offset[2]: 576, xstate_sizes[2]: 256 Jan 30 19:15:20.066685 kernel: x86/fpu: Enabled xstate features 0x7, context size is 832 bytes, using 'standard' format. Jan 30 19:15:20.066698 kernel: Freeing SMP alternatives memory: 32K Jan 30 19:15:20.066711 kernel: pid_max: default: 32768 minimum: 301 Jan 30 19:15:20.066723 kernel: LSM: initializing lsm=lockdown,capability,landlock,selinux,integrity Jan 30 19:15:20.066736 kernel: landlock: Up and running. Jan 30 19:15:20.066749 kernel: SELinux: Initializing. 
Jan 30 19:15:20.066761 kernel: Mount-cache hash table entries: 4096 (order: 3, 32768 bytes, linear) Jan 30 19:15:20.066774 kernel: Mountpoint-cache hash table entries: 4096 (order: 3, 32768 bytes, linear) Jan 30 19:15:20.066787 kernel: smpboot: CPU0: Intel Xeon E3-12xx v2 (Ivy Bridge, IBRS) (family: 0x6, model: 0x3a, stepping: 0x9) Jan 30 19:15:20.066800 kernel: RCU Tasks: Setting shift to 4 and lim to 1 rcu_task_cb_adjust=1 rcu_task_cpu_ids=16. Jan 30 19:15:20.066813 kernel: RCU Tasks Rude: Setting shift to 4 and lim to 1 rcu_task_cb_adjust=1 rcu_task_cpu_ids=16. Jan 30 19:15:20.066831 kernel: RCU Tasks Trace: Setting shift to 4 and lim to 1 rcu_task_cb_adjust=1 rcu_task_cpu_ids=16. Jan 30 19:15:20.066844 kernel: Performance Events: unsupported p6 CPU model 58 no PMU driver, software events only. Jan 30 19:15:20.066857 kernel: signal: max sigframe size: 1776 Jan 30 19:15:20.066870 kernel: rcu: Hierarchical SRCU implementation. Jan 30 19:15:20.066884 kernel: rcu: Max phase no-delay instances is 400. Jan 30 19:15:20.066897 kernel: NMI watchdog: Perf NMI watchdog permanently disabled Jan 30 19:15:20.066924 kernel: smp: Bringing up secondary CPUs ... Jan 30 19:15:20.066938 kernel: smpboot: x86: Booting SMP configuration: Jan 30 19:15:20.066951 kernel: .... node #0, CPUs: #1 Jan 30 19:15:20.066969 kernel: smpboot: CPU 1 Converting physical 0 to logical die 1 Jan 30 19:15:20.066983 kernel: smp: Brought up 1 node, 2 CPUs Jan 30 19:15:20.066995 kernel: smpboot: Max logical packages: 16 Jan 30 19:15:20.067008 kernel: smpboot: Total of 2 processors activated (9999.99 BogoMIPS) Jan 30 19:15:20.067021 kernel: devtmpfs: initialized Jan 30 19:15:20.067034 kernel: x86/mm: Memory block size: 128MB Jan 30 19:15:20.067047 kernel: clocksource: jiffies: mask: 0xffffffff max_cycles: 0xffffffff, max_idle_ns: 1911260446275000 ns Jan 30 19:15:20.067059 kernel: futex hash table entries: 4096 (order: 6, 262144 bytes, linear) Jan 30 19:15:20.067072 kernel: pinctrl core: initialized pinctrl subsystem Jan 30 19:15:20.067090 kernel: NET: Registered PF_NETLINK/PF_ROUTE protocol family Jan 30 19:15:20.067103 kernel: audit: initializing netlink subsys (disabled) Jan 30 19:15:20.067116 kernel: audit: type=2000 audit(1738264518.923:1): state=initialized audit_enabled=0 res=1 Jan 30 19:15:20.067129 kernel: thermal_sys: Registered thermal governor 'step_wise' Jan 30 19:15:20.067142 kernel: thermal_sys: Registered thermal governor 'user_space' Jan 30 19:15:20.067155 kernel: cpuidle: using governor menu Jan 30 19:15:20.067168 kernel: acpiphp: ACPI Hot Plug PCI Controller Driver version: 0.5 Jan 30 19:15:20.067180 kernel: dca service started, version 1.12.1 Jan 30 19:15:20.067193 kernel: PCI: MMCONFIG for domain 0000 [bus 00-ff] at [mem 0xb0000000-0xbfffffff] (base 0xb0000000) Jan 30 19:15:20.067224 kernel: PCI: MMCONFIG at [mem 0xb0000000-0xbfffffff] reserved as E820 entry Jan 30 19:15:20.067237 kernel: PCI: Using configuration type 1 for base access Jan 30 19:15:20.067256 kernel: kprobes: kprobe jump-optimization is enabled. All kprobes are optimized if possible. 
Jan 30 19:15:20.067268 kernel: HugeTLB: registered 1.00 GiB page size, pre-allocated 0 pages Jan 30 19:15:20.067293 kernel: HugeTLB: 16380 KiB vmemmap can be freed for a 1.00 GiB page Jan 30 19:15:20.067305 kernel: HugeTLB: registered 2.00 MiB page size, pre-allocated 0 pages Jan 30 19:15:20.067319 kernel: HugeTLB: 28 KiB vmemmap can be freed for a 2.00 MiB page Jan 30 19:15:20.067331 kernel: ACPI: Added _OSI(Module Device) Jan 30 19:15:20.067342 kernel: ACPI: Added _OSI(Processor Device) Jan 30 19:15:20.067359 kernel: ACPI: Added _OSI(3.0 _SCP Extensions) Jan 30 19:15:20.067372 kernel: ACPI: Added _OSI(Processor Aggregator Device) Jan 30 19:15:20.067384 kernel: ACPI: 1 ACPI AML tables successfully acquired and loaded Jan 30 19:15:20.067396 kernel: ACPI: _OSC evaluation for CPUs failed, trying _PDC Jan 30 19:15:20.067407 kernel: ACPI: Interpreter enabled Jan 30 19:15:20.067419 kernel: ACPI: PM: (supports S0 S5) Jan 30 19:15:20.067431 kernel: ACPI: Using IOAPIC for interrupt routing Jan 30 19:15:20.067443 kernel: PCI: Using host bridge windows from ACPI; if necessary, use "pci=nocrs" and report a bug Jan 30 19:15:20.068510 kernel: PCI: Using E820 reservations for host bridge windows Jan 30 19:15:20.068541 kernel: ACPI: Enabled 2 GPEs in block 00 to 3F Jan 30 19:15:20.068555 kernel: ACPI: PCI Root Bridge [PCI0] (domain 0000 [bus 00-ff]) Jan 30 19:15:20.068808 kernel: acpi PNP0A08:00: _OSC: OS supports [ExtendedConfig ASPM ClockPM Segments MSI HPX-Type3] Jan 30 19:15:20.069011 kernel: acpi PNP0A08:00: _OSC: platform does not support [LTR] Jan 30 19:15:20.069180 kernel: acpi PNP0A08:00: _OSC: OS now controls [PCIeHotplug PME AER PCIeCapability] Jan 30 19:15:20.069200 kernel: PCI host bridge to bus 0000:00 Jan 30 19:15:20.069399 kernel: pci_bus 0000:00: root bus resource [io 0x0000-0x0cf7 window] Jan 30 19:15:20.071370 kernel: pci_bus 0000:00: root bus resource [io 0x0d00-0xffff window] Jan 30 19:15:20.071574 kernel: pci_bus 0000:00: root bus resource [mem 0x000a0000-0x000bffff window] Jan 30 19:15:20.071738 kernel: pci_bus 0000:00: root bus resource [mem 0x80000000-0xafffffff window] Jan 30 19:15:20.071888 kernel: pci_bus 0000:00: root bus resource [mem 0xc0000000-0xfebfffff window] Jan 30 19:15:20.072055 kernel: pci_bus 0000:00: root bus resource [mem 0x20c0000000-0x28bfffffff window] Jan 30 19:15:20.072206 kernel: pci_bus 0000:00: root bus resource [bus 00-ff] Jan 30 19:15:20.072427 kernel: pci 0000:00:00.0: [8086:29c0] type 00 class 0x060000 Jan 30 19:15:20.074694 kernel: pci 0000:00:01.0: [1013:00b8] type 00 class 0x030000 Jan 30 19:15:20.074886 kernel: pci 0000:00:01.0: reg 0x10: [mem 0xfa000000-0xfbffffff pref] Jan 30 19:15:20.075090 kernel: pci 0000:00:01.0: reg 0x14: [mem 0xfea50000-0xfea50fff] Jan 30 19:15:20.075260 kernel: pci 0000:00:01.0: reg 0x30: [mem 0xfea40000-0xfea4ffff pref] Jan 30 19:15:20.075463 kernel: pci 0000:00:01.0: Video device with shadowed ROM at [mem 0x000c0000-0x000dffff] Jan 30 19:15:20.075653 kernel: pci 0000:00:02.0: [1b36:000c] type 01 class 0x060400 Jan 30 19:15:20.075830 kernel: pci 0000:00:02.0: reg 0x10: [mem 0xfea51000-0xfea51fff] Jan 30 19:15:20.076037 kernel: pci 0000:00:02.1: [1b36:000c] type 01 class 0x060400 Jan 30 19:15:20.076209 kernel: pci 0000:00:02.1: reg 0x10: [mem 0xfea52000-0xfea52fff] Jan 30 19:15:20.076384 kernel: pci 0000:00:02.2: [1b36:000c] type 01 class 0x060400 Jan 30 19:15:20.078613 kernel: pci 0000:00:02.2: reg 0x10: [mem 0xfea53000-0xfea53fff] Jan 30 19:15:20.078841 kernel: pci 0000:00:02.3: [1b36:000c] type 01 class 0x060400 Jan 30 
19:15:20.079055 kernel: pci 0000:00:02.3: reg 0x10: [mem 0xfea54000-0xfea54fff] Jan 30 19:15:20.079250 kernel: pci 0000:00:02.4: [1b36:000c] type 01 class 0x060400 Jan 30 19:15:20.079511 kernel: pci 0000:00:02.4: reg 0x10: [mem 0xfea55000-0xfea55fff] Jan 30 19:15:20.079683 kernel: pci 0000:00:02.5: [1b36:000c] type 01 class 0x060400 Jan 30 19:15:20.079878 kernel: pci 0000:00:02.5: reg 0x10: [mem 0xfea56000-0xfea56fff] Jan 30 19:15:20.080082 kernel: pci 0000:00:02.6: [1b36:000c] type 01 class 0x060400 Jan 30 19:15:20.080258 kernel: pci 0000:00:02.6: reg 0x10: [mem 0xfea57000-0xfea57fff] Jan 30 19:15:20.082490 kernel: pci 0000:00:02.7: [1b36:000c] type 01 class 0x060400 Jan 30 19:15:20.082682 kernel: pci 0000:00:02.7: reg 0x10: [mem 0xfea58000-0xfea58fff] Jan 30 19:15:20.082867 kernel: pci 0000:00:03.0: [1af4:1000] type 00 class 0x020000 Jan 30 19:15:20.083069 kernel: pci 0000:00:03.0: reg 0x10: [io 0xc0c0-0xc0df] Jan 30 19:15:20.083273 kernel: pci 0000:00:03.0: reg 0x14: [mem 0xfea59000-0xfea59fff] Jan 30 19:15:20.083443 kernel: pci 0000:00:03.0: reg 0x20: [mem 0xfd000000-0xfd003fff 64bit pref] Jan 30 19:15:20.083682 kernel: pci 0000:00:03.0: reg 0x30: [mem 0xfea00000-0xfea3ffff pref] Jan 30 19:15:20.083902 kernel: pci 0000:00:04.0: [1af4:1001] type 00 class 0x010000 Jan 30 19:15:20.084085 kernel: pci 0000:00:04.0: reg 0x10: [io 0xc000-0xc07f] Jan 30 19:15:20.084273 kernel: pci 0000:00:04.0: reg 0x14: [mem 0xfea5a000-0xfea5afff] Jan 30 19:15:20.084440 kernel: pci 0000:00:04.0: reg 0x20: [mem 0xfd004000-0xfd007fff 64bit pref] Jan 30 19:15:20.086007 kernel: pci 0000:00:1f.0: [8086:2918] type 00 class 0x060100 Jan 30 19:15:20.086184 kernel: pci 0000:00:1f.0: quirk: [io 0x0600-0x067f] claimed by ICH6 ACPI/GPIO/TCO Jan 30 19:15:20.086384 kernel: pci 0000:00:1f.2: [8086:2922] type 00 class 0x010601 Jan 30 19:15:20.086572 kernel: pci 0000:00:1f.2: reg 0x20: [io 0xc0e0-0xc0ff] Jan 30 19:15:20.086750 kernel: pci 0000:00:1f.2: reg 0x24: [mem 0xfea5b000-0xfea5bfff] Jan 30 19:15:20.086963 kernel: pci 0000:00:1f.3: [8086:2930] type 00 class 0x0c0500 Jan 30 19:15:20.087131 kernel: pci 0000:00:1f.3: reg 0x20: [io 0x0700-0x073f] Jan 30 19:15:20.087312 kernel: pci 0000:01:00.0: [1b36:000e] type 01 class 0x060400 Jan 30 19:15:20.089808 kernel: pci 0000:01:00.0: reg 0x10: [mem 0xfda00000-0xfda000ff 64bit] Jan 30 19:15:20.090011 kernel: pci 0000:00:02.0: PCI bridge to [bus 01-02] Jan 30 19:15:20.090197 kernel: pci 0000:00:02.0: bridge window [mem 0xfd800000-0xfdbfffff] Jan 30 19:15:20.090387 kernel: pci 0000:00:02.0: bridge window [mem 0xfce00000-0xfcffffff 64bit pref] Jan 30 19:15:20.090594 kernel: pci_bus 0000:02: extended config space not accessible Jan 30 19:15:20.090820 kernel: pci 0000:02:01.0: [8086:25ab] type 00 class 0x088000 Jan 30 19:15:20.091025 kernel: pci 0000:02:01.0: reg 0x10: [mem 0xfd800000-0xfd80000f] Jan 30 19:15:20.091216 kernel: pci 0000:01:00.0: PCI bridge to [bus 02] Jan 30 19:15:20.091399 kernel: pci 0000:01:00.0: bridge window [mem 0xfd800000-0xfd9fffff] Jan 30 19:15:20.097563 kernel: pci 0000:03:00.0: [1b36:000d] type 00 class 0x0c0330 Jan 30 19:15:20.097746 kernel: pci 0000:03:00.0: reg 0x10: [mem 0xfe800000-0xfe803fff 64bit] Jan 30 19:15:20.097964 kernel: pci 0000:00:02.1: PCI bridge to [bus 03] Jan 30 19:15:20.098134 kernel: pci 0000:00:02.1: bridge window [mem 0xfe800000-0xfe9fffff] Jan 30 19:15:20.098314 kernel: pci 0000:00:02.1: bridge window [mem 0xfcc00000-0xfcdfffff 64bit pref] Jan 30 19:15:20.098547 kernel: pci 0000:04:00.0: [1af4:1044] type 00 class 0x00ff00 Jan 30 
19:15:20.098724 kernel: pci 0000:04:00.0: reg 0x20: [mem 0xfca00000-0xfca03fff 64bit pref] Jan 30 19:15:20.098932 kernel: pci 0000:00:02.2: PCI bridge to [bus 04] Jan 30 19:15:20.099100 kernel: pci 0000:00:02.2: bridge window [mem 0xfe600000-0xfe7fffff] Jan 30 19:15:20.099271 kernel: pci 0000:00:02.2: bridge window [mem 0xfca00000-0xfcbfffff 64bit pref] Jan 30 19:15:20.099445 kernel: pci 0000:00:02.3: PCI bridge to [bus 05] Jan 30 19:15:20.099669 kernel: pci 0000:00:02.3: bridge window [mem 0xfe400000-0xfe5fffff] Jan 30 19:15:20.099861 kernel: pci 0000:00:02.3: bridge window [mem 0xfc800000-0xfc9fffff 64bit pref] Jan 30 19:15:20.100061 kernel: pci 0000:00:02.4: PCI bridge to [bus 06] Jan 30 19:15:20.100241 kernel: pci 0000:00:02.4: bridge window [mem 0xfe200000-0xfe3fffff] Jan 30 19:15:20.100412 kernel: pci 0000:00:02.4: bridge window [mem 0xfc600000-0xfc7fffff 64bit pref] Jan 30 19:15:20.100623 kernel: pci 0000:00:02.5: PCI bridge to [bus 07] Jan 30 19:15:20.100797 kernel: pci 0000:00:02.5: bridge window [mem 0xfe000000-0xfe1fffff] Jan 30 19:15:20.100984 kernel: pci 0000:00:02.5: bridge window [mem 0xfc400000-0xfc5fffff 64bit pref] Jan 30 19:15:20.101156 kernel: pci 0000:00:02.6: PCI bridge to [bus 08] Jan 30 19:15:20.101341 kernel: pci 0000:00:02.6: bridge window [mem 0xfde00000-0xfdffffff] Jan 30 19:15:20.102660 kernel: pci 0000:00:02.6: bridge window [mem 0xfc200000-0xfc3fffff 64bit pref] Jan 30 19:15:20.102846 kernel: pci 0000:00:02.7: PCI bridge to [bus 09] Jan 30 19:15:20.103028 kernel: pci 0000:00:02.7: bridge window [mem 0xfdc00000-0xfddfffff] Jan 30 19:15:20.103205 kernel: pci 0000:00:02.7: bridge window [mem 0xfc000000-0xfc1fffff 64bit pref] Jan 30 19:15:20.103224 kernel: ACPI: PCI: Interrupt link LNKA configured for IRQ 10 Jan 30 19:15:20.103237 kernel: ACPI: PCI: Interrupt link LNKB configured for IRQ 10 Jan 30 19:15:20.103250 kernel: ACPI: PCI: Interrupt link LNKC configured for IRQ 11 Jan 30 19:15:20.103269 kernel: ACPI: PCI: Interrupt link LNKD configured for IRQ 11 Jan 30 19:15:20.103282 kernel: ACPI: PCI: Interrupt link LNKE configured for IRQ 10 Jan 30 19:15:20.103295 kernel: ACPI: PCI: Interrupt link LNKF configured for IRQ 10 Jan 30 19:15:20.103307 kernel: ACPI: PCI: Interrupt link LNKG configured for IRQ 11 Jan 30 19:15:20.103319 kernel: ACPI: PCI: Interrupt link LNKH configured for IRQ 11 Jan 30 19:15:20.103331 kernel: ACPI: PCI: Interrupt link GSIA configured for IRQ 16 Jan 30 19:15:20.103343 kernel: ACPI: PCI: Interrupt link GSIB configured for IRQ 17 Jan 30 19:15:20.103355 kernel: ACPI: PCI: Interrupt link GSIC configured for IRQ 18 Jan 30 19:15:20.103367 kernel: ACPI: PCI: Interrupt link GSID configured for IRQ 19 Jan 30 19:15:20.103385 kernel: ACPI: PCI: Interrupt link GSIE configured for IRQ 20 Jan 30 19:15:20.103397 kernel: ACPI: PCI: Interrupt link GSIF configured for IRQ 21 Jan 30 19:15:20.103409 kernel: ACPI: PCI: Interrupt link GSIG configured for IRQ 22 Jan 30 19:15:20.103421 kernel: ACPI: PCI: Interrupt link GSIH configured for IRQ 23 Jan 30 19:15:20.103433 kernel: iommu: Default domain type: Translated Jan 30 19:15:20.106715 kernel: iommu: DMA domain TLB invalidation policy: lazy mode Jan 30 19:15:20.106734 kernel: PCI: Using ACPI for IRQ routing Jan 30 19:15:20.106746 kernel: PCI: pci_cache_line_size set to 64 bytes Jan 30 19:15:20.106759 kernel: e820: reserve RAM buffer [mem 0x0009fc00-0x0009ffff] Jan 30 19:15:20.106792 kernel: e820: reserve RAM buffer [mem 0x7ffdc000-0x7fffffff] Jan 30 19:15:20.106988 kernel: pci 0000:00:01.0: vgaarb: setting as boot 
VGA device Jan 30 19:15:20.107160 kernel: pci 0000:00:01.0: vgaarb: bridge control possible Jan 30 19:15:20.107338 kernel: pci 0000:00:01.0: vgaarb: VGA device added: decodes=io+mem,owns=io+mem,locks=none Jan 30 19:15:20.107369 kernel: vgaarb: loaded Jan 30 19:15:20.107383 kernel: clocksource: Switched to clocksource kvm-clock Jan 30 19:15:20.107397 kernel: VFS: Disk quotas dquot_6.6.0 Jan 30 19:15:20.107410 kernel: VFS: Dquot-cache hash table entries: 512 (order 0, 4096 bytes) Jan 30 19:15:20.107423 kernel: pnp: PnP ACPI init Jan 30 19:15:20.107645 kernel: system 00:04: [mem 0xb0000000-0xbfffffff window] has been reserved Jan 30 19:15:20.107675 kernel: pnp: PnP ACPI: found 5 devices Jan 30 19:15:20.107689 kernel: clocksource: acpi_pm: mask: 0xffffff max_cycles: 0xffffff, max_idle_ns: 2085701024 ns Jan 30 19:15:20.107724 kernel: NET: Registered PF_INET protocol family Jan 30 19:15:20.107736 kernel: IP idents hash table entries: 32768 (order: 6, 262144 bytes, linear) Jan 30 19:15:20.107749 kernel: tcp_listen_portaddr_hash hash table entries: 1024 (order: 2, 16384 bytes, linear) Jan 30 19:15:20.107761 kernel: Table-perturb hash table entries: 65536 (order: 6, 262144 bytes, linear) Jan 30 19:15:20.107774 kernel: TCP established hash table entries: 16384 (order: 5, 131072 bytes, linear) Jan 30 19:15:20.107807 kernel: TCP bind hash table entries: 16384 (order: 7, 524288 bytes, linear) Jan 30 19:15:20.107820 kernel: TCP: Hash tables configured (established 16384 bind 16384) Jan 30 19:15:20.107833 kernel: UDP hash table entries: 1024 (order: 3, 32768 bytes, linear) Jan 30 19:15:20.107846 kernel: UDP-Lite hash table entries: 1024 (order: 3, 32768 bytes, linear) Jan 30 19:15:20.107859 kernel: NET: Registered PF_UNIX/PF_LOCAL protocol family Jan 30 19:15:20.107873 kernel: NET: Registered PF_XDP protocol family Jan 30 19:15:20.108052 kernel: pci 0000:00:02.0: bridge window [io 0x1000-0x0fff] to [bus 01-02] add_size 1000 Jan 30 19:15:20.108230 kernel: pci 0000:00:02.1: bridge window [io 0x1000-0x0fff] to [bus 03] add_size 1000 Jan 30 19:15:20.108453 kernel: pci 0000:00:02.2: bridge window [io 0x1000-0x0fff] to [bus 04] add_size 1000 Jan 30 19:15:20.108641 kernel: pci 0000:00:02.3: bridge window [io 0x1000-0x0fff] to [bus 05] add_size 1000 Jan 30 19:15:20.108808 kernel: pci 0000:00:02.4: bridge window [io 0x1000-0x0fff] to [bus 06] add_size 1000 Jan 30 19:15:20.108997 kernel: pci 0000:00:02.5: bridge window [io 0x1000-0x0fff] to [bus 07] add_size 1000 Jan 30 19:15:20.109164 kernel: pci 0000:00:02.6: bridge window [io 0x1000-0x0fff] to [bus 08] add_size 1000 Jan 30 19:15:20.109341 kernel: pci 0000:00:02.7: bridge window [io 0x1000-0x0fff] to [bus 09] add_size 1000 Jan 30 19:15:20.116635 kernel: pci 0000:00:02.0: BAR 13: assigned [io 0x1000-0x1fff] Jan 30 19:15:20.116812 kernel: pci 0000:00:02.1: BAR 13: assigned [io 0x2000-0x2fff] Jan 30 19:15:20.117027 kernel: pci 0000:00:02.2: BAR 13: assigned [io 0x3000-0x3fff] Jan 30 19:15:20.117195 kernel: pci 0000:00:02.3: BAR 13: assigned [io 0x4000-0x4fff] Jan 30 19:15:20.117372 kernel: pci 0000:00:02.4: BAR 13: assigned [io 0x5000-0x5fff] Jan 30 19:15:20.117571 kernel: pci 0000:00:02.5: BAR 13: assigned [io 0x6000-0x6fff] Jan 30 19:15:20.117755 kernel: pci 0000:00:02.6: BAR 13: assigned [io 0x7000-0x7fff] Jan 30 19:15:20.117944 kernel: pci 0000:00:02.7: BAR 13: assigned [io 0x8000-0x8fff] Jan 30 19:15:20.118146 kernel: pci 0000:01:00.0: PCI bridge to [bus 02] Jan 30 19:15:20.118325 kernel: pci 0000:01:00.0: bridge window [mem 0xfd800000-0xfd9fffff] Jan 30 
19:15:20.118546 kernel: pci 0000:00:02.0: PCI bridge to [bus 01-02] Jan 30 19:15:20.118716 kernel: pci 0000:00:02.0: bridge window [io 0x1000-0x1fff] Jan 30 19:15:20.118884 kernel: pci 0000:00:02.0: bridge window [mem 0xfd800000-0xfdbfffff] Jan 30 19:15:20.119065 kernel: pci 0000:00:02.0: bridge window [mem 0xfce00000-0xfcffffff 64bit pref] Jan 30 19:15:20.119234 kernel: pci 0000:00:02.1: PCI bridge to [bus 03] Jan 30 19:15:20.119416 kernel: pci 0000:00:02.1: bridge window [io 0x2000-0x2fff] Jan 30 19:15:20.119596 kernel: pci 0000:00:02.1: bridge window [mem 0xfe800000-0xfe9fffff] Jan 30 19:15:20.119754 kernel: pci 0000:00:02.1: bridge window [mem 0xfcc00000-0xfcdfffff 64bit pref] Jan 30 19:15:20.119940 kernel: pci 0000:00:02.2: PCI bridge to [bus 04] Jan 30 19:15:20.120111 kernel: pci 0000:00:02.2: bridge window [io 0x3000-0x3fff] Jan 30 19:15:20.120288 kernel: pci 0000:00:02.2: bridge window [mem 0xfe600000-0xfe7fffff] Jan 30 19:15:20.120496 kernel: pci 0000:00:02.2: bridge window [mem 0xfca00000-0xfcbfffff 64bit pref] Jan 30 19:15:20.120684 kernel: pci 0000:00:02.3: PCI bridge to [bus 05] Jan 30 19:15:20.120854 kernel: pci 0000:00:02.3: bridge window [io 0x4000-0x4fff] Jan 30 19:15:20.121040 kernel: pci 0000:00:02.3: bridge window [mem 0xfe400000-0xfe5fffff] Jan 30 19:15:20.121211 kernel: pci 0000:00:02.3: bridge window [mem 0xfc800000-0xfc9fffff 64bit pref] Jan 30 19:15:20.121381 kernel: pci 0000:00:02.4: PCI bridge to [bus 06] Jan 30 19:15:20.121606 kernel: pci 0000:00:02.4: bridge window [io 0x5000-0x5fff] Jan 30 19:15:20.121788 kernel: pci 0000:00:02.4: bridge window [mem 0xfe200000-0xfe3fffff] Jan 30 19:15:20.121970 kernel: pci 0000:00:02.4: bridge window [mem 0xfc600000-0xfc7fffff 64bit pref] Jan 30 19:15:20.122136 kernel: pci 0000:00:02.5: PCI bridge to [bus 07] Jan 30 19:15:20.122311 kernel: pci 0000:00:02.5: bridge window [io 0x6000-0x6fff] Jan 30 19:15:20.122505 kernel: pci 0000:00:02.5: bridge window [mem 0xfe000000-0xfe1fffff] Jan 30 19:15:20.122672 kernel: pci 0000:00:02.5: bridge window [mem 0xfc400000-0xfc5fffff 64bit pref] Jan 30 19:15:20.122845 kernel: pci 0000:00:02.6: PCI bridge to [bus 08] Jan 30 19:15:20.123046 kernel: pci 0000:00:02.6: bridge window [io 0x7000-0x7fff] Jan 30 19:15:20.123224 kernel: pci 0000:00:02.6: bridge window [mem 0xfde00000-0xfdffffff] Jan 30 19:15:20.123393 kernel: pci 0000:00:02.6: bridge window [mem 0xfc200000-0xfc3fffff 64bit pref] Jan 30 19:15:20.124607 kernel: pci 0000:00:02.7: PCI bridge to [bus 09] Jan 30 19:15:20.124770 kernel: pci 0000:00:02.7: bridge window [io 0x8000-0x8fff] Jan 30 19:15:20.124985 kernel: pci 0000:00:02.7: bridge window [mem 0xfdc00000-0xfddfffff] Jan 30 19:15:20.125152 kernel: pci 0000:00:02.7: bridge window [mem 0xfc000000-0xfc1fffff 64bit pref] Jan 30 19:15:20.125312 kernel: pci_bus 0000:00: resource 4 [io 0x0000-0x0cf7 window] Jan 30 19:15:20.126208 kernel: pci_bus 0000:00: resource 5 [io 0x0d00-0xffff window] Jan 30 19:15:20.126370 kernel: pci_bus 0000:00: resource 6 [mem 0x000a0000-0x000bffff window] Jan 30 19:15:20.126573 kernel: pci_bus 0000:00: resource 7 [mem 0x80000000-0xafffffff window] Jan 30 19:15:20.126737 kernel: pci_bus 0000:00: resource 8 [mem 0xc0000000-0xfebfffff window] Jan 30 19:15:20.126897 kernel: pci_bus 0000:00: resource 9 [mem 0x20c0000000-0x28bfffffff window] Jan 30 19:15:20.127100 kernel: pci_bus 0000:01: resource 0 [io 0x1000-0x1fff] Jan 30 19:15:20.127270 kernel: pci_bus 0000:01: resource 1 [mem 0xfd800000-0xfdbfffff] Jan 30 19:15:20.127469 kernel: pci_bus 0000:01: resource 2 [mem 
0xfce00000-0xfcffffff 64bit pref] Jan 30 19:15:20.127641 kernel: pci_bus 0000:02: resource 1 [mem 0xfd800000-0xfd9fffff] Jan 30 19:15:20.127840 kernel: pci_bus 0000:03: resource 0 [io 0x2000-0x2fff] Jan 30 19:15:20.128025 kernel: pci_bus 0000:03: resource 1 [mem 0xfe800000-0xfe9fffff] Jan 30 19:15:20.128183 kernel: pci_bus 0000:03: resource 2 [mem 0xfcc00000-0xfcdfffff 64bit pref] Jan 30 19:15:20.128382 kernel: pci_bus 0000:04: resource 0 [io 0x3000-0x3fff] Jan 30 19:15:20.130632 kernel: pci_bus 0000:04: resource 1 [mem 0xfe600000-0xfe7fffff] Jan 30 19:15:20.130809 kernel: pci_bus 0000:04: resource 2 [mem 0xfca00000-0xfcbfffff 64bit pref] Jan 30 19:15:20.131025 kernel: pci_bus 0000:05: resource 0 [io 0x4000-0x4fff] Jan 30 19:15:20.131186 kernel: pci_bus 0000:05: resource 1 [mem 0xfe400000-0xfe5fffff] Jan 30 19:15:20.131356 kernel: pci_bus 0000:05: resource 2 [mem 0xfc800000-0xfc9fffff 64bit pref] Jan 30 19:15:20.131564 kernel: pci_bus 0000:06: resource 0 [io 0x5000-0x5fff] Jan 30 19:15:20.131725 kernel: pci_bus 0000:06: resource 1 [mem 0xfe200000-0xfe3fffff] Jan 30 19:15:20.131881 kernel: pci_bus 0000:06: resource 2 [mem 0xfc600000-0xfc7fffff 64bit pref] Jan 30 19:15:20.132074 kernel: pci_bus 0000:07: resource 0 [io 0x6000-0x6fff] Jan 30 19:15:20.132256 kernel: pci_bus 0000:07: resource 1 [mem 0xfe000000-0xfe1fffff] Jan 30 19:15:20.132421 kernel: pci_bus 0000:07: resource 2 [mem 0xfc400000-0xfc5fffff 64bit pref] Jan 30 19:15:20.134033 kernel: pci_bus 0000:08: resource 0 [io 0x7000-0x7fff] Jan 30 19:15:20.134205 kernel: pci_bus 0000:08: resource 1 [mem 0xfde00000-0xfdffffff] Jan 30 19:15:20.134415 kernel: pci_bus 0000:08: resource 2 [mem 0xfc200000-0xfc3fffff 64bit pref] Jan 30 19:15:20.134617 kernel: pci_bus 0000:09: resource 0 [io 0x8000-0x8fff] Jan 30 19:15:20.134816 kernel: pci_bus 0000:09: resource 1 [mem 0xfdc00000-0xfddfffff] Jan 30 19:15:20.135013 kernel: pci_bus 0000:09: resource 2 [mem 0xfc000000-0xfc1fffff 64bit pref] Jan 30 19:15:20.135035 kernel: ACPI: \_SB_.GSIG: Enabled at IRQ 22 Jan 30 19:15:20.135050 kernel: PCI: CLS 0 bytes, default 64 Jan 30 19:15:20.135064 kernel: PCI-DMA: Using software bounce buffering for IO (SWIOTLB) Jan 30 19:15:20.135078 kernel: software IO TLB: mapped [mem 0x0000000079800000-0x000000007d800000] (64MB) Jan 30 19:15:20.135092 kernel: RAPL PMU: API unit is 2^-32 Joules, 0 fixed counters, 10737418240 ms ovfl timer Jan 30 19:15:20.135106 kernel: clocksource: tsc: mask: 0xffffffffffffffff max_cycles: 0x240937b9988, max_idle_ns: 440795218083 ns Jan 30 19:15:20.135120 kernel: Initialise system trusted keyrings Jan 30 19:15:20.135141 kernel: workingset: timestamp_bits=39 max_order=19 bucket_order=0 Jan 30 19:15:20.135155 kernel: Key type asymmetric registered Jan 30 19:15:20.135168 kernel: Asymmetric key parser 'x509' registered Jan 30 19:15:20.135182 kernel: Block layer SCSI generic (bsg) driver version 0.4 loaded (major 251) Jan 30 19:15:20.135195 kernel: io scheduler mq-deadline registered Jan 30 19:15:20.135209 kernel: io scheduler kyber registered Jan 30 19:15:20.135222 kernel: io scheduler bfq registered Jan 30 19:15:20.135395 kernel: pcieport 0000:00:02.0: PME: Signaling with IRQ 24 Jan 30 19:15:20.135616 kernel: pcieport 0000:00:02.0: AER: enabled with IRQ 24 Jan 30 19:15:20.135794 kernel: pcieport 0000:00:02.0: pciehp: Slot #0 AttnBtn+ PwrCtrl+ MRL- AttnInd+ PwrInd+ HotPlug+ Surprise+ Interlock+ NoCompl- IbPresDis- LLActRep+ Jan 30 19:15:20.135982 kernel: pcieport 0000:00:02.1: PME: Signaling with IRQ 25 Jan 30 19:15:20.136162 kernel: pcieport 
0000:00:02.1: AER: enabled with IRQ 25 Jan 30 19:15:20.136396 kernel: pcieport 0000:00:02.1: pciehp: Slot #0 AttnBtn+ PwrCtrl+ MRL- AttnInd+ PwrInd+ HotPlug+ Surprise+ Interlock+ NoCompl- IbPresDis- LLActRep+ Jan 30 19:15:20.136614 kernel: pcieport 0000:00:02.2: PME: Signaling with IRQ 26 Jan 30 19:15:20.136791 kernel: pcieport 0000:00:02.2: AER: enabled with IRQ 26 Jan 30 19:15:20.136983 kernel: pcieport 0000:00:02.2: pciehp: Slot #0 AttnBtn+ PwrCtrl+ MRL- AttnInd+ PwrInd+ HotPlug+ Surprise+ Interlock+ NoCompl- IbPresDis- LLActRep+ Jan 30 19:15:20.137151 kernel: pcieport 0000:00:02.3: PME: Signaling with IRQ 27 Jan 30 19:15:20.137318 kernel: pcieport 0000:00:02.3: AER: enabled with IRQ 27 Jan 30 19:15:20.137566 kernel: pcieport 0000:00:02.3: pciehp: Slot #0 AttnBtn+ PwrCtrl+ MRL- AttnInd+ PwrInd+ HotPlug+ Surprise+ Interlock+ NoCompl- IbPresDis- LLActRep+ Jan 30 19:15:20.137738 kernel: pcieport 0000:00:02.4: PME: Signaling with IRQ 28 Jan 30 19:15:20.137904 kernel: pcieport 0000:00:02.4: AER: enabled with IRQ 28 Jan 30 19:15:20.138093 kernel: pcieport 0000:00:02.4: pciehp: Slot #0 AttnBtn+ PwrCtrl+ MRL- AttnInd+ PwrInd+ HotPlug+ Surprise+ Interlock+ NoCompl- IbPresDis- LLActRep+ Jan 30 19:15:20.138261 kernel: pcieport 0000:00:02.5: PME: Signaling with IRQ 29 Jan 30 19:15:20.138426 kernel: pcieport 0000:00:02.5: AER: enabled with IRQ 29 Jan 30 19:15:20.138648 kernel: pcieport 0000:00:02.5: pciehp: Slot #0 AttnBtn+ PwrCtrl+ MRL- AttnInd+ PwrInd+ HotPlug+ Surprise+ Interlock+ NoCompl- IbPresDis- LLActRep+ Jan 30 19:15:20.138826 kernel: pcieport 0000:00:02.6: PME: Signaling with IRQ 30 Jan 30 19:15:20.139010 kernel: pcieport 0000:00:02.6: AER: enabled with IRQ 30 Jan 30 19:15:20.139200 kernel: pcieport 0000:00:02.6: pciehp: Slot #0 AttnBtn+ PwrCtrl+ MRL- AttnInd+ PwrInd+ HotPlug+ Surprise+ Interlock+ NoCompl- IbPresDis- LLActRep+ Jan 30 19:15:20.139367 kernel: pcieport 0000:00:02.7: PME: Signaling with IRQ 31 Jan 30 19:15:20.139586 kernel: pcieport 0000:00:02.7: AER: enabled with IRQ 31 Jan 30 19:15:20.139762 kernel: pcieport 0000:00:02.7: pciehp: Slot #0 AttnBtn+ PwrCtrl+ MRL- AttnInd+ PwrInd+ HotPlug+ Surprise+ Interlock+ NoCompl- IbPresDis- LLActRep+ Jan 30 19:15:20.139784 kernel: ioatdma: Intel(R) QuickData Technology Driver 5.00 Jan 30 19:15:20.139799 kernel: ACPI: \_SB_.GSIH: Enabled at IRQ 23 Jan 30 19:15:20.139820 kernel: ACPI: \_SB_.GSIE: Enabled at IRQ 20 Jan 30 19:15:20.139834 kernel: Serial: 8250/16550 driver, 4 ports, IRQ sharing enabled Jan 30 19:15:20.139848 kernel: 00:00: ttyS0 at I/O 0x3f8 (irq = 4, base_baud = 115200) is a 16550A Jan 30 19:15:20.139862 kernel: i8042: PNP: PS/2 Controller [PNP0303:KBD,PNP0f13:MOU] at 0x60,0x64 irq 1,12 Jan 30 19:15:20.139875 kernel: serio: i8042 KBD port at 0x60,0x64 irq 1 Jan 30 19:15:20.139889 kernel: serio: i8042 AUX port at 0x60,0x64 irq 12 Jan 30 19:15:20.140069 kernel: rtc_cmos 00:03: RTC can wake from S4 Jan 30 19:15:20.140091 kernel: input: AT Translated Set 2 keyboard as /devices/platform/i8042/serio0/input/input0 Jan 30 19:15:20.140261 kernel: rtc_cmos 00:03: registered as rtc0 Jan 30 19:15:20.140423 kernel: rtc_cmos 00:03: setting system clock to 2025-01-30T19:15:19 UTC (1738264519) Jan 30 19:15:20.140656 kernel: rtc_cmos 00:03: alarms up to one day, y3k, 242 bytes nvram Jan 30 19:15:20.140678 kernel: intel_pstate: CPU model not supported Jan 30 19:15:20.140692 kernel: NET: Registered PF_INET6 protocol family Jan 30 19:15:20.140713 kernel: Segment Routing with IPv6 Jan 30 19:15:20.140727 kernel: In-situ OAM (IOAM) with IPv6 Jan 30 
19:15:20.140741 kernel: NET: Registered PF_PACKET protocol family Jan 30 19:15:20.140754 kernel: Key type dns_resolver registered Jan 30 19:15:20.140773 kernel: IPI shorthand broadcast: enabled Jan 30 19:15:20.140787 kernel: sched_clock: Marking stable (1204004057, 236470160)->(1683104290, -242630073) Jan 30 19:15:20.140801 kernel: registered taskstats version 1 Jan 30 19:15:20.140815 kernel: Loading compiled-in X.509 certificates Jan 30 19:15:20.140837 kernel: Loaded X.509 cert 'Kinvolk GmbH: Module signing key for 6.6.74-flatcar: 1efdcbe72fc44d29e4e6411cf9a3e64046be4375' Jan 30 19:15:20.140851 kernel: Key type .fscrypt registered Jan 30 19:15:20.140864 kernel: Key type fscrypt-provisioning registered Jan 30 19:15:20.140878 kernel: ima: No TPM chip found, activating TPM-bypass! Jan 30 19:15:20.140892 kernel: ima: Allocated hash algorithm: sha1 Jan 30 19:15:20.140933 kernel: ima: No architecture policies found Jan 30 19:15:20.140947 kernel: clk: Disabling unused clocks Jan 30 19:15:20.140961 kernel: Freeing unused kernel image (initmem) memory: 42844K Jan 30 19:15:20.140975 kernel: Write protecting the kernel read-only data: 36864k Jan 30 19:15:20.140989 kernel: Freeing unused kernel image (rodata/data gap) memory: 1848K Jan 30 19:15:20.141003 kernel: Run /init as init process Jan 30 19:15:20.141016 kernel: with arguments: Jan 30 19:15:20.141029 kernel: /init Jan 30 19:15:20.141043 kernel: with environment: Jan 30 19:15:20.141062 kernel: HOME=/ Jan 30 19:15:20.141075 kernel: TERM=linux Jan 30 19:15:20.141088 kernel: BOOT_IMAGE=/flatcar/vmlinuz-a Jan 30 19:15:20.141105 systemd[1]: systemd 255 running in system mode (+PAM +AUDIT +SELINUX -APPARMOR +IMA +SMACK +SECCOMP +GCRYPT -GNUTLS +OPENSSL -ACL +BLKID +CURL +ELFUTILS -FIDO2 +IDN2 -IDN +IPTC +KMOD +LIBCRYPTSETUP +LIBFDISK +PCRE2 -PWQUALITY -P11KIT -QRENCODE +TPM2 +BZIP2 +LZ4 +XZ +ZLIB +ZSTD -BPF_FRAMEWORK -XKBCOMMON +UTMP -SYSVINIT default-hierarchy=unified) Jan 30 19:15:20.141122 systemd[1]: Detected virtualization kvm. Jan 30 19:15:20.141136 systemd[1]: Detected architecture x86-64. Jan 30 19:15:20.141150 systemd[1]: Running in initrd. Jan 30 19:15:20.141164 systemd[1]: No hostname configured, using default hostname. Jan 30 19:15:20.141183 systemd[1]: Hostname set to . Jan 30 19:15:20.141198 systemd[1]: Initializing machine ID from VM UUID. Jan 30 19:15:20.141212 systemd[1]: Queued start job for default target initrd.target. Jan 30 19:15:20.141227 systemd[1]: Started clevis-luks-askpass.path - Forward Password Requests to Clevis Directory Watch. Jan 30 19:15:20.141242 systemd[1]: Started systemd-ask-password-console.path - Dispatch Password Requests to Console Directory Watch. Jan 30 19:15:20.141257 systemd[1]: Expecting device dev-disk-by\x2dlabel-EFI\x2dSYSTEM.device - /dev/disk/by-label/EFI-SYSTEM... Jan 30 19:15:20.141271 systemd[1]: Expecting device dev-disk-by\x2dlabel-OEM.device - /dev/disk/by-label/OEM... Jan 30 19:15:20.141304 systemd[1]: Expecting device dev-disk-by\x2dlabel-ROOT.device - /dev/disk/by-label/ROOT... Jan 30 19:15:20.141318 systemd[1]: Expecting device dev-disk-by\x2dpartlabel-USR\x2dA.device - /dev/disk/by-partlabel/USR-A... Jan 30 19:15:20.141334 systemd[1]: Expecting device dev-disk-by\x2dpartuuid-7130c94a\x2d213a\x2d4e5a\x2d8e26\x2d6cce9662f132.device - /dev/disk/by-partuuid/7130c94a-213a-4e5a-8e26-6cce9662f132... Jan 30 19:15:20.141348 systemd[1]: Expecting device dev-mapper-usr.device - /dev/mapper/usr... Jan 30 19:15:20.141362 systemd[1]: Reached target cryptsetup-pre.target - Local Encrypted Volumes (Pre). 
Jan 30 19:15:20.141388 systemd[1]: Reached target cryptsetup.target - Local Encrypted Volumes. Jan 30 19:15:20.141402 systemd[1]: Reached target paths.target - Path Units. Jan 30 19:15:20.141420 systemd[1]: Reached target slices.target - Slice Units. Jan 30 19:15:20.141434 systemd[1]: Reached target swap.target - Swaps. Jan 30 19:15:20.141448 systemd[1]: Reached target timers.target - Timer Units. Jan 30 19:15:20.141474 systemd[1]: Listening on iscsid.socket - Open-iSCSI iscsid Socket. Jan 30 19:15:20.141488 systemd[1]: Listening on iscsiuio.socket - Open-iSCSI iscsiuio Socket. Jan 30 19:15:20.141508 systemd[1]: Listening on systemd-journald-dev-log.socket - Journal Socket (/dev/log). Jan 30 19:15:20.141522 systemd[1]: Listening on systemd-journald.socket - Journal Socket. Jan 30 19:15:20.141536 systemd[1]: Listening on systemd-networkd.socket - Network Service Netlink Socket. Jan 30 19:15:20.141549 systemd[1]: Listening on systemd-udevd-control.socket - udev Control Socket. Jan 30 19:15:20.141577 systemd[1]: Listening on systemd-udevd-kernel.socket - udev Kernel Socket. Jan 30 19:15:20.141591 systemd[1]: Reached target sockets.target - Socket Units. Jan 30 19:15:20.141605 systemd[1]: Starting ignition-setup-pre.service - Ignition env setup... Jan 30 19:15:20.141619 systemd[1]: Starting kmod-static-nodes.service - Create List of Static Device Nodes... Jan 30 19:15:20.141632 systemd[1]: Finished network-cleanup.service - Network Cleanup. Jan 30 19:15:20.141645 systemd[1]: Starting systemd-fsck-usr.service... Jan 30 19:15:20.141659 systemd[1]: Starting systemd-journald.service - Journal Service... Jan 30 19:15:20.141673 systemd[1]: Starting systemd-modules-load.service - Load Kernel Modules... Jan 30 19:15:20.141699 systemd[1]: Starting systemd-vconsole-setup.service - Virtual Console Setup... Jan 30 19:15:20.141719 systemd[1]: Finished ignition-setup-pre.service - Ignition env setup. Jan 30 19:15:20.141733 systemd[1]: Finished kmod-static-nodes.service - Create List of Static Device Nodes. Jan 30 19:15:20.141764 systemd[1]: Finished systemd-fsck-usr.service. Jan 30 19:15:20.141822 systemd-journald[201]: Collecting audit messages is disabled. Jan 30 19:15:20.141860 systemd[1]: Starting systemd-tmpfiles-setup-dev-early.service - Create Static Device Nodes in /dev gracefully... Jan 30 19:15:20.141881 systemd[1]: Finished systemd-tmpfiles-setup-dev-early.service - Create Static Device Nodes in /dev gracefully. Jan 30 19:15:20.141900 systemd[1]: Starting systemd-tmpfiles-setup-dev.service - Create Static Device Nodes in /dev... Jan 30 19:15:20.141926 systemd[1]: Finished systemd-tmpfiles-setup-dev.service - Create Static Device Nodes in /dev. Jan 30 19:15:20.141947 kernel: bridge: filtering via arp/ip/ip6tables is no longer available by default. Update your scripts to load br_netfilter if you need this. Jan 30 19:15:20.141961 kernel: Bridge firewalling registered Jan 30 19:15:20.141977 systemd-journald[201]: Journal started Jan 30 19:15:20.142004 systemd-journald[201]: Runtime Journal (/run/log/journal/9a1931caf5704a03a94f91f95f700cf5) is 4.7M, max 38.0M, 33.2M free. Jan 30 19:15:20.062158 systemd-modules-load[202]: Inserted module 'overlay' Jan 30 19:15:20.167589 systemd[1]: Started systemd-journald.service - Journal Service. Jan 30 19:15:20.133138 systemd-modules-load[202]: Inserted module 'br_netfilter' Jan 30 19:15:20.168742 systemd[1]: Finished systemd-modules-load.service - Load Kernel Modules. 
Jan 30 19:15:20.170074 systemd[1]: Finished systemd-vconsole-setup.service - Virtual Console Setup. Jan 30 19:15:20.177610 systemd[1]: Starting dracut-cmdline-ask.service - dracut ask for additional cmdline parameters... Jan 30 19:15:20.179622 systemd[1]: Starting systemd-sysctl.service - Apply Kernel Variables... Jan 30 19:15:20.190655 systemd[1]: Starting systemd-tmpfiles-setup.service - Create System Files and Directories... Jan 30 19:15:20.207038 systemd[1]: Finished systemd-sysctl.service - Apply Kernel Variables. Jan 30 19:15:20.209026 systemd[1]: Finished dracut-cmdline-ask.service - dracut ask for additional cmdline parameters. Jan 30 19:15:20.211669 systemd[1]: Finished systemd-tmpfiles-setup.service - Create System Files and Directories. Jan 30 19:15:20.217630 systemd[1]: Starting dracut-cmdline.service - dracut cmdline hook... Jan 30 19:15:20.220621 systemd[1]: Starting systemd-resolved.service - Network Name Resolution... Jan 30 19:15:20.244614 dracut-cmdline[234]: dracut-dracut-053 Jan 30 19:15:20.249816 dracut-cmdline[234]: Using kernel command line parameters: rd.driver.pre=btrfs rootflags=rw mount.usrflags=ro BOOT_IMAGE=/flatcar/vmlinuz-a mount.usr=/dev/mapper/usr verity.usr=PARTUUID=7130c94a-213a-4e5a-8e26-6cce9662f132 rootflags=rw mount.usrflags=ro consoleblank=0 root=LABEL=ROOT console=ttyS0,115200n8 console=tty0 flatcar.first_boot=detected flatcar.oem.id=openstack flatcar.autologin verity.usrhash=befc9792b021bef43c896e00e1d5172b6224dbafc9b6c92b267e5e544378e681 Jan 30 19:15:20.267263 systemd-resolved[236]: Positive Trust Anchors: Jan 30 19:15:20.267285 systemd-resolved[236]: . IN DS 20326 8 2 e06d44b80b8f1d39a95c0b0d7c65d08458e880409bbc683457104237c7f8ec8d Jan 30 19:15:20.267329 systemd-resolved[236]: Negative trust anchors: home.arpa 10.in-addr.arpa 16.172.in-addr.arpa 17.172.in-addr.arpa 18.172.in-addr.arpa 19.172.in-addr.arpa 20.172.in-addr.arpa 21.172.in-addr.arpa 22.172.in-addr.arpa 23.172.in-addr.arpa 24.172.in-addr.arpa 25.172.in-addr.arpa 26.172.in-addr.arpa 27.172.in-addr.arpa 28.172.in-addr.arpa 29.172.in-addr.arpa 30.172.in-addr.arpa 31.172.in-addr.arpa 170.0.0.192.in-addr.arpa 171.0.0.192.in-addr.arpa 168.192.in-addr.arpa d.f.ip6.arpa ipv4only.arpa resolver.arpa corp home internal intranet lan local private test Jan 30 19:15:20.271753 systemd-resolved[236]: Defaulting to hostname 'linux'. Jan 30 19:15:20.273597 systemd[1]: Started systemd-resolved.service - Network Name Resolution. Jan 30 19:15:20.276758 systemd[1]: Reached target nss-lookup.target - Host and Network Name Lookups. Jan 30 19:15:20.357591 kernel: SCSI subsystem initialized Jan 30 19:15:20.370470 kernel: Loading iSCSI transport class v2.0-870. Jan 30 19:15:20.384480 kernel: iscsi: registered transport (tcp) Jan 30 19:15:20.411758 kernel: iscsi: registered transport (qla4xxx) Jan 30 19:15:20.411845 kernel: QLogic iSCSI HBA Driver Jan 30 19:15:20.467260 systemd[1]: Finished dracut-cmdline.service - dracut cmdline hook. Jan 30 19:15:20.476699 systemd[1]: Starting dracut-pre-udev.service - dracut pre-udev hook... Jan 30 19:15:20.511679 kernel: device-mapper: core: CONFIG_IMA_DISABLE_HTABLE is disabled. Duplicate IMA measurements will not be recorded in the IMA log. 
Jan 30 19:15:20.511767 kernel: device-mapper: uevent: version 1.0.3 Jan 30 19:15:20.514990 kernel: device-mapper: ioctl: 4.48.0-ioctl (2023-03-01) initialised: dm-devel@redhat.com Jan 30 19:15:20.564477 kernel: raid6: sse2x4 gen() 12558 MB/s Jan 30 19:15:20.582494 kernel: raid6: sse2x2 gen() 8909 MB/s Jan 30 19:15:20.601187 kernel: raid6: sse2x1 gen() 9452 MB/s Jan 30 19:15:20.601239 kernel: raid6: using algorithm sse2x4 gen() 12558 MB/s Jan 30 19:15:20.620222 kernel: raid6: .... xor() 7516 MB/s, rmw enabled Jan 30 19:15:20.620281 kernel: raid6: using ssse3x2 recovery algorithm Jan 30 19:15:20.647497 kernel: xor: automatically using best checksumming function avx Jan 30 19:15:20.852477 kernel: Btrfs loaded, zoned=no, fsverity=no Jan 30 19:15:20.867773 systemd[1]: Finished dracut-pre-udev.service - dracut pre-udev hook. Jan 30 19:15:20.874658 systemd[1]: Starting systemd-udevd.service - Rule-based Manager for Device Events and Files... Jan 30 19:15:20.901258 systemd-udevd[419]: Using default interface naming scheme 'v255'. Jan 30 19:15:20.908519 systemd[1]: Started systemd-udevd.service - Rule-based Manager for Device Events and Files. Jan 30 19:15:20.918271 systemd[1]: Starting dracut-pre-trigger.service - dracut pre-trigger hook... Jan 30 19:15:20.940702 dracut-pre-trigger[425]: rd.md=0: removing MD RAID activation Jan 30 19:15:20.981799 systemd[1]: Finished dracut-pre-trigger.service - dracut pre-trigger hook. Jan 30 19:15:20.988639 systemd[1]: Starting systemd-udev-trigger.service - Coldplug All udev Devices... Jan 30 19:15:21.099539 systemd[1]: Finished systemd-udev-trigger.service - Coldplug All udev Devices. Jan 30 19:15:21.107135 systemd[1]: Starting dracut-initqueue.service - dracut initqueue hook... Jan 30 19:15:21.135507 systemd[1]: Finished dracut-initqueue.service - dracut initqueue hook. Jan 30 19:15:21.137350 systemd[1]: Reached target remote-fs-pre.target - Preparation for Remote File Systems. Jan 30 19:15:21.138890 systemd[1]: Reached target remote-cryptsetup.target - Remote Encrypted Volumes. Jan 30 19:15:21.141948 systemd[1]: Reached target remote-fs.target - Remote File Systems. Jan 30 19:15:21.148639 systemd[1]: Starting dracut-pre-mount.service - dracut pre-mount hook... Jan 30 19:15:21.177254 systemd[1]: Finished dracut-pre-mount.service - dracut pre-mount hook. Jan 30 19:15:21.233999 kernel: virtio_blk virtio1: 2/0/0 default/read/poll queues Jan 30 19:15:21.333393 kernel: cryptd: max_cpu_qlen set to 1000 Jan 30 19:15:21.333422 kernel: virtio_blk virtio1: [vda] 125829120 512-byte logical blocks (64.4 GB/60.0 GiB) Jan 30 19:15:21.334710 kernel: AVX version of gcm_enc/dec engaged. Jan 30 19:15:21.334734 kernel: AES CTR mode by8 optimization enabled Jan 30 19:15:21.334752 kernel: ACPI: bus type USB registered Jan 30 19:15:21.334770 kernel: usbcore: registered new interface driver usbfs Jan 30 19:15:21.334787 kernel: usbcore: registered new interface driver hub Jan 30 19:15:21.334805 kernel: usbcore: registered new device driver usb Jan 30 19:15:21.334828 kernel: GPT:Primary header thinks Alt. header is not at the end of the disk. Jan 30 19:15:21.334846 kernel: GPT:17805311 != 125829119 Jan 30 19:15:21.334863 kernel: GPT:Alternate GPT header not at the end of the disk. Jan 30 19:15:21.334907 kernel: GPT:17805311 != 125829119 Jan 30 19:15:21.334927 kernel: GPT: Use GNU Parted to correct GPT errors. Jan 30 19:15:21.334945 kernel: vda: vda1 vda2 vda3 vda4 vda6 vda7 vda9 Jan 30 19:15:21.259139 systemd[1]: dracut-cmdline-ask.service: Deactivated successfully. 
Jan 30 19:15:21.454659 kernel: xhci_hcd 0000:03:00.0: xHCI Host Controller Jan 30 19:15:21.454975 kernel: xhci_hcd 0000:03:00.0: new USB bus registered, assigned bus number 1 Jan 30 19:15:21.455197 kernel: xhci_hcd 0000:03:00.0: hcc params 0x00087001 hci version 0x100 quirks 0x0000000000000010 Jan 30 19:15:21.455425 kernel: xhci_hcd 0000:03:00.0: xHCI Host Controller Jan 30 19:15:21.457688 kernel: xhci_hcd 0000:03:00.0: new USB bus registered, assigned bus number 2 Jan 30 19:15:21.457926 kernel: xhci_hcd 0000:03:00.0: Host supports USB 3.0 SuperSpeed Jan 30 19:15:21.458140 kernel: libata version 3.00 loaded. Jan 30 19:15:21.458165 kernel: hub 1-0:1.0: USB hub found Jan 30 19:15:21.458449 kernel: hub 1-0:1.0: 4 ports detected Jan 30 19:15:21.458705 kernel: usb usb2: We don't know the algorithms for LPM for this host, disabling LPM. Jan 30 19:15:21.459020 kernel: hub 2-0:1.0: USB hub found Jan 30 19:15:21.459271 kernel: hub 2-0:1.0: 4 ports detected Jan 30 19:15:21.460751 kernel: ahci 0000:00:1f.2: version 3.0 Jan 30 19:15:21.473263 kernel: ACPI: \_SB_.GSIA: Enabled at IRQ 16 Jan 30 19:15:21.473289 kernel: ahci 0000:00:1f.2: AHCI 0001.0000 32 slots 6 ports 1.5 Gbps 0x3f impl SATA mode Jan 30 19:15:21.473537 kernel: ahci 0000:00:1f.2: flags: 64bit ncq only Jan 30 19:15:21.473782 kernel: BTRFS: device fsid 64bb5b5a-85cc-41cc-a02b-2cfaa3e93b0a devid 1 transid 38 /dev/vda3 scanned by (udev-worker) (477) Jan 30 19:15:21.473813 kernel: BTRFS: device label OEM devid 1 transid 12 /dev/vda6 scanned by (udev-worker) (469) Jan 30 19:15:21.473831 kernel: scsi host0: ahci Jan 30 19:15:21.474071 kernel: scsi host1: ahci Jan 30 19:15:21.474282 kernel: scsi host2: ahci Jan 30 19:15:21.474497 kernel: scsi host3: ahci Jan 30 19:15:21.474697 kernel: scsi host4: ahci Jan 30 19:15:21.474955 kernel: scsi host5: ahci Jan 30 19:15:21.475150 kernel: ata1: SATA max UDMA/133 abar m4096@0xfea5b000 port 0xfea5b100 irq 41 Jan 30 19:15:21.475174 kernel: ata2: SATA max UDMA/133 abar m4096@0xfea5b000 port 0xfea5b180 irq 41 Jan 30 19:15:21.475192 kernel: ata3: SATA max UDMA/133 abar m4096@0xfea5b000 port 0xfea5b200 irq 41 Jan 30 19:15:21.475210 kernel: ata4: SATA max UDMA/133 abar m4096@0xfea5b000 port 0xfea5b280 irq 41 Jan 30 19:15:21.475228 kernel: ata5: SATA max UDMA/133 abar m4096@0xfea5b000 port 0xfea5b300 irq 41 Jan 30 19:15:21.475246 kernel: ata6: SATA max UDMA/133 abar m4096@0xfea5b000 port 0xfea5b380 irq 41 Jan 30 19:15:21.259323 systemd[1]: Stopped dracut-cmdline-ask.service - dracut ask for additional cmdline parameters. Jan 30 19:15:21.261492 systemd[1]: Stopping dracut-cmdline-ask.service - dracut ask for additional cmdline parameters... Jan 30 19:15:21.262290 systemd[1]: systemd-vconsole-setup.service: Deactivated successfully. Jan 30 19:15:21.262608 systemd[1]: Stopped systemd-vconsole-setup.service - Virtual Console Setup. Jan 30 19:15:21.270479 systemd[1]: Stopping systemd-vconsole-setup.service - Virtual Console Setup... Jan 30 19:15:21.282784 systemd[1]: Starting systemd-vconsole-setup.service - Virtual Console Setup... Jan 30 19:15:21.432623 systemd[1]: Found device dev-disk-by\x2dlabel-EFI\x2dSYSTEM.device - /dev/disk/by-label/EFI-SYSTEM. Jan 30 19:15:21.453933 systemd[1]: Finished systemd-vconsole-setup.service - Virtual Console Setup. Jan 30 19:15:21.466095 systemd[1]: Found device dev-disk-by\x2dlabel-ROOT.device - /dev/disk/by-label/ROOT. Jan 30 19:15:21.495634 systemd[1]: Found device dev-disk-by\x2dlabel-OEM.device - /dev/disk/by-label/OEM. 
Jan 30 19:15:21.508023 systemd[1]: Found device dev-disk-by\x2dpartlabel-USR\x2dA.device - /dev/disk/by-partlabel/USR-A. Jan 30 19:15:21.508906 systemd[1]: Found device dev-disk-by\x2dpartuuid-7130c94a\x2d213a\x2d4e5a\x2d8e26\x2d6cce9662f132.device - /dev/disk/by-partuuid/7130c94a-213a-4e5a-8e26-6cce9662f132. Jan 30 19:15:21.520670 systemd[1]: Starting disk-uuid.service - Generate new UUID for disk GPT if necessary... Jan 30 19:15:21.524946 systemd[1]: Starting dracut-cmdline-ask.service - dracut ask for additional cmdline parameters... Jan 30 19:15:21.532272 disk-uuid[561]: Primary Header is updated. Jan 30 19:15:21.532272 disk-uuid[561]: Secondary Entries is updated. Jan 30 19:15:21.532272 disk-uuid[561]: Secondary Header is updated. Jan 30 19:15:21.539233 kernel: vda: vda1 vda2 vda3 vda4 vda6 vda7 vda9 Jan 30 19:15:21.545096 kernel: vda: vda1 vda2 vda3 vda4 vda6 vda7 vda9 Jan 30 19:15:21.560027 systemd[1]: Finished dracut-cmdline-ask.service - dracut ask for additional cmdline parameters. Jan 30 19:15:21.596663 kernel: usb 1-1: new high-speed USB device number 2 using xhci_hcd Jan 30 19:15:21.746469 kernel: hid: raw HID events driver (C) Jiri Kosina Jan 30 19:15:21.782463 kernel: ata6: SATA link down (SStatus 0 SControl 300) Jan 30 19:15:21.784493 kernel: ata4: SATA link down (SStatus 0 SControl 300) Jan 30 19:15:21.784539 kernel: ata3: SATA link down (SStatus 0 SControl 300) Jan 30 19:15:21.787717 kernel: ata1: SATA link down (SStatus 0 SControl 300) Jan 30 19:15:21.788477 kernel: ata2: SATA link down (SStatus 0 SControl 300) Jan 30 19:15:21.791156 kernel: ata5: SATA link down (SStatus 0 SControl 300) Jan 30 19:15:21.801817 kernel: usbcore: registered new interface driver usbhid Jan 30 19:15:21.801885 kernel: usbhid: USB HID core driver Jan 30 19:15:21.810935 kernel: input: QEMU QEMU USB Tablet as /devices/pci0000:00/0000:00:02.1/0000:03:00.0/usb1/1-1/1-1:1.0/0003:0627:0001.0001/input/input2 Jan 30 19:15:21.810981 kernel: hid-generic 0003:0627:0001.0001: input,hidraw0: USB HID v0.01 Mouse [QEMU QEMU USB Tablet] on usb-0000:03:00.0-1/input0 Jan 30 19:15:22.553480 kernel: vda: vda1 vda2 vda3 vda4 vda6 vda7 vda9 Jan 30 19:15:22.553835 disk-uuid[563]: The operation has completed successfully. Jan 30 19:15:22.607623 systemd[1]: disk-uuid.service: Deactivated successfully. Jan 30 19:15:22.607799 systemd[1]: Finished disk-uuid.service - Generate new UUID for disk GPT if necessary. Jan 30 19:15:22.628735 systemd[1]: Starting verity-setup.service - Verity Setup for /dev/mapper/usr... Jan 30 19:15:22.632630 sh[585]: Success Jan 30 19:15:22.648493 kernel: device-mapper: verity: sha256 using implementation "sha256-avx" Jan 30 19:15:22.709365 systemd[1]: Found device dev-mapper-usr.device - /dev/mapper/usr. Jan 30 19:15:22.731641 systemd[1]: Mounting sysusr-usr.mount - /sysusr/usr... Jan 30 19:15:22.732836 systemd[1]: Finished verity-setup.service - Verity Setup for /dev/mapper/usr. Jan 30 19:15:22.764504 kernel: BTRFS info (device dm-0): first mount of filesystem 64bb5b5a-85cc-41cc-a02b-2cfaa3e93b0a Jan 30 19:15:22.764573 kernel: BTRFS info (device dm-0): using crc32c (crc32c-intel) checksum algorithm Jan 30 19:15:22.767659 kernel: BTRFS warning (device dm-0): 'nologreplay' is deprecated, use 'rescue=nologreplay' instead Jan 30 19:15:22.769925 kernel: BTRFS info (device dm-0): disabling log replay at mount time Jan 30 19:15:22.771617 kernel: BTRFS info (device dm-0): using free space tree Jan 30 19:15:22.783248 systemd[1]: Mounted sysusr-usr.mount - /sysusr/usr. 
Jan 30 19:15:22.784888 systemd[1]: afterburn-network-kargs.service - Afterburn Initrd Setup Network Kernel Arguments was skipped because no trigger condition checks were met. Jan 30 19:15:22.789663 systemd[1]: Starting ignition-setup.service - Ignition (setup)... Jan 30 19:15:22.791949 systemd[1]: Starting parse-ip-for-networkd.service - Write systemd-networkd units from cmdline... Jan 30 19:15:22.816107 kernel: BTRFS info (device vda6): first mount of filesystem aa75aabd-8755-4402-b4b6-23093345fe03 Jan 30 19:15:22.816208 kernel: BTRFS info (device vda6): using crc32c (crc32c-intel) checksum algorithm Jan 30 19:15:22.816788 kernel: BTRFS info (device vda6): using free space tree Jan 30 19:15:22.823461 kernel: BTRFS info (device vda6): auto enabling async discard Jan 30 19:15:22.837862 systemd[1]: mnt-oem.mount: Deactivated successfully. Jan 30 19:15:22.840582 kernel: BTRFS info (device vda6): last unmount of filesystem aa75aabd-8755-4402-b4b6-23093345fe03 Jan 30 19:15:22.847990 systemd[1]: Finished ignition-setup.service - Ignition (setup). Jan 30 19:15:22.855641 systemd[1]: Starting ignition-fetch-offline.service - Ignition (fetch-offline)... Jan 30 19:15:22.958806 systemd[1]: Finished parse-ip-for-networkd.service - Write systemd-networkd units from cmdline. Jan 30 19:15:22.975034 systemd[1]: Starting systemd-networkd.service - Network Configuration... Jan 30 19:15:23.005289 ignition[688]: Ignition 2.19.0 Jan 30 19:15:23.005310 ignition[688]: Stage: fetch-offline Jan 30 19:15:23.008172 systemd[1]: Finished ignition-fetch-offline.service - Ignition (fetch-offline). Jan 30 19:15:23.005383 ignition[688]: no configs at "/usr/lib/ignition/base.d" Jan 30 19:15:23.005402 ignition[688]: no config dir at "/usr/lib/ignition/base.platform.d/openstack" Jan 30 19:15:23.005792 ignition[688]: parsed url from cmdline: "" Jan 30 19:15:23.005799 ignition[688]: no config URL provided Jan 30 19:15:23.005809 ignition[688]: reading system config file "/usr/lib/ignition/user.ign" Jan 30 19:15:23.005830 ignition[688]: no config at "/usr/lib/ignition/user.ign" Jan 30 19:15:23.005854 ignition[688]: failed to fetch config: resource requires networking Jan 30 19:15:23.006084 ignition[688]: Ignition finished successfully Jan 30 19:15:23.017012 systemd-networkd[768]: lo: Link UP Jan 30 19:15:23.017018 systemd-networkd[768]: lo: Gained carrier Jan 30 19:15:23.019551 systemd-networkd[768]: Enumeration completed Jan 30 19:15:23.019674 systemd[1]: Started systemd-networkd.service - Network Configuration. Jan 30 19:15:23.020689 systemd-networkd[768]: eth0: found matching network '/usr/lib/systemd/network/zz-default.network', based on potentially unpredictable interface name. Jan 30 19:15:23.020694 systemd-networkd[768]: eth0: Configuring with /usr/lib/systemd/network/zz-default.network. Jan 30 19:15:23.022921 systemd-networkd[768]: eth0: Link UP Jan 30 19:15:23.022926 systemd-networkd[768]: eth0: Gained carrier Jan 30 19:15:23.022937 systemd-networkd[768]: eth0: found matching network '/usr/lib/systemd/network/zz-default.network', based on potentially unpredictable interface name. Jan 30 19:15:23.023504 systemd[1]: Reached target network.target - Network. Jan 30 19:15:23.029616 systemd[1]: Starting ignition-fetch.service - Ignition (fetch)... 
Jan 30 19:15:23.050426 ignition[775]: Ignition 2.19.0 Jan 30 19:15:23.050471 ignition[775]: Stage: fetch Jan 30 19:15:23.050699 ignition[775]: no configs at "/usr/lib/ignition/base.d" Jan 30 19:15:23.052542 systemd-networkd[768]: eth0: DHCPv4 address 10.230.38.22/30, gateway 10.230.38.21 acquired from 10.230.38.21 Jan 30 19:15:23.050718 ignition[775]: no config dir at "/usr/lib/ignition/base.platform.d/openstack" Jan 30 19:15:23.050850 ignition[775]: parsed url from cmdline: "" Jan 30 19:15:23.050857 ignition[775]: no config URL provided Jan 30 19:15:23.050866 ignition[775]: reading system config file "/usr/lib/ignition/user.ign" Jan 30 19:15:23.050882 ignition[775]: no config at "/usr/lib/ignition/user.ign" Jan 30 19:15:23.051057 ignition[775]: GET http://169.254.169.254/openstack/latest/user_data: attempt #1 Jan 30 19:15:23.051259 ignition[775]: config drive ("/dev/disk/by-label/config-2") not found. Waiting... Jan 30 19:15:23.051289 ignition[775]: config drive ("/dev/disk/by-label/CONFIG-2") not found. Waiting... Jan 30 19:15:23.051414 ignition[775]: GET error: Get "http://169.254.169.254/openstack/latest/user_data": dial tcp 169.254.169.254:80: connect: network is unreachable Jan 30 19:15:23.251607 ignition[775]: GET http://169.254.169.254/openstack/latest/user_data: attempt #2 Jan 30 19:15:23.264889 ignition[775]: GET result: OK Jan 30 19:15:23.264994 ignition[775]: parsing config with SHA512: 3cc61ee7b10b19f6e83dd47577d2bb0536ac06a882a65748569d9819b2d9f73679cdbe361b82aaa7454a003e77760dff4c85edc858611e4f1161c076923f24c0 Jan 30 19:15:23.270033 unknown[775]: fetched base config from "system" Jan 30 19:15:23.270846 unknown[775]: fetched base config from "system" Jan 30 19:15:23.271253 ignition[775]: fetch: fetch complete Jan 30 19:15:23.270858 unknown[775]: fetched user config from "openstack" Jan 30 19:15:23.271365 ignition[775]: fetch: fetch passed Jan 30 19:15:23.274353 systemd[1]: Finished ignition-fetch.service - Ignition (fetch). Jan 30 19:15:23.272501 ignition[775]: Ignition finished successfully Jan 30 19:15:23.293724 systemd[1]: Starting ignition-kargs.service - Ignition (kargs)... Jan 30 19:15:23.315378 ignition[782]: Ignition 2.19.0 Jan 30 19:15:23.315399 ignition[782]: Stage: kargs Jan 30 19:15:23.315693 ignition[782]: no configs at "/usr/lib/ignition/base.d" Jan 30 19:15:23.318180 systemd[1]: Finished ignition-kargs.service - Ignition (kargs). Jan 30 19:15:23.315722 ignition[782]: no config dir at "/usr/lib/ignition/base.platform.d/openstack" Jan 30 19:15:23.316583 ignition[782]: kargs: kargs passed Jan 30 19:15:23.316671 ignition[782]: Ignition finished successfully Jan 30 19:15:23.325637 systemd[1]: Starting ignition-disks.service - Ignition (disks)... Jan 30 19:15:23.344001 ignition[788]: Ignition 2.19.0 Jan 30 19:15:23.344019 ignition[788]: Stage: disks Jan 30 19:15:23.344246 ignition[788]: no configs at "/usr/lib/ignition/base.d" Jan 30 19:15:23.344266 ignition[788]: no config dir at "/usr/lib/ignition/base.platform.d/openstack" Jan 30 19:15:23.346604 systemd[1]: Finished ignition-disks.service - Ignition (disks). Jan 30 19:15:23.345135 ignition[788]: disks: disks passed Jan 30 19:15:23.348778 systemd[1]: Reached target initrd-root-device.target - Initrd Root Device. Jan 30 19:15:23.345201 ignition[788]: Ignition finished successfully Jan 30 19:15:23.349878 systemd[1]: Reached target local-fs-pre.target - Preparation for Local File Systems. Jan 30 19:15:23.351275 systemd[1]: Reached target local-fs.target - Local File Systems. 
Jan 30 19:15:23.352831 systemd[1]: Reached target sysinit.target - System Initialization. Jan 30 19:15:23.354210 systemd[1]: Reached target basic.target - Basic System. Jan 30 19:15:23.367664 systemd[1]: Starting systemd-fsck-root.service - File System Check on /dev/disk/by-label/ROOT... Jan 30 19:15:23.386335 systemd-fsck[796]: ROOT: clean, 14/1628000 files, 120691/1617920 blocks Jan 30 19:15:23.390041 systemd[1]: Finished systemd-fsck-root.service - File System Check on /dev/disk/by-label/ROOT. Jan 30 19:15:23.399650 systemd[1]: Mounting sysroot.mount - /sysroot... Jan 30 19:15:23.526465 kernel: EXT4-fs (vda9): mounted filesystem 9f41abed-fd12-4e57-bcd4-5c0ef7f8a1bf r/w with ordered data mode. Quota mode: none. Jan 30 19:15:23.527808 systemd[1]: Mounted sysroot.mount - /sysroot. Jan 30 19:15:23.529285 systemd[1]: Reached target initrd-root-fs.target - Initrd Root File System. Jan 30 19:15:23.535540 systemd[1]: Mounting sysroot-oem.mount - /sysroot/oem... Jan 30 19:15:23.546234 systemd[1]: Mounting sysroot-usr.mount - /sysroot/usr... Jan 30 19:15:23.548349 systemd[1]: flatcar-metadata-hostname.service - Flatcar Metadata Hostname Agent was skipped because no trigger condition checks were met. Jan 30 19:15:23.551408 systemd[1]: Starting flatcar-openstack-hostname.service - Flatcar OpenStack Metadata Hostname Agent... Jan 30 19:15:23.552294 systemd[1]: ignition-remount-sysroot.service - Remount /sysroot read-write for Ignition was skipped because of an unmet condition check (ConditionPathIsReadWrite=!/sysroot). Jan 30 19:15:23.552384 systemd[1]: Reached target ignition-diskful.target - Ignition Boot Disk Setup. Jan 30 19:15:23.556280 systemd[1]: Mounted sysroot-usr.mount - /sysroot/usr. Jan 30 19:15:23.563502 kernel: BTRFS: device label OEM devid 1 transid 13 /dev/vda6 scanned by mount (804) Jan 30 19:15:23.568316 kernel: BTRFS info (device vda6): first mount of filesystem aa75aabd-8755-4402-b4b6-23093345fe03 Jan 30 19:15:23.569226 systemd[1]: Starting initrd-setup-root.service - Root filesystem setup... Jan 30 19:15:23.573626 kernel: BTRFS info (device vda6): using crc32c (crc32c-intel) checksum algorithm Jan 30 19:15:23.573654 kernel: BTRFS info (device vda6): using free space tree Jan 30 19:15:23.582453 kernel: BTRFS info (device vda6): auto enabling async discard Jan 30 19:15:23.587014 systemd[1]: Mounted sysroot-oem.mount - /sysroot/oem. Jan 30 19:15:23.647348 initrd-setup-root[831]: cut: /sysroot/etc/passwd: No such file or directory Jan 30 19:15:23.655386 initrd-setup-root[839]: cut: /sysroot/etc/group: No such file or directory Jan 30 19:15:23.662552 initrd-setup-root[846]: cut: /sysroot/etc/shadow: No such file or directory Jan 30 19:15:23.672163 initrd-setup-root[853]: cut: /sysroot/etc/gshadow: No such file or directory Jan 30 19:15:23.782289 systemd[1]: Finished initrd-setup-root.service - Root filesystem setup. Jan 30 19:15:23.790572 systemd[1]: Starting ignition-mount.service - Ignition (mount)... Jan 30 19:15:23.792621 systemd[1]: Starting sysroot-boot.service - /sysroot/boot... Jan 30 19:15:23.806253 systemd[1]: sysroot-oem.mount: Deactivated successfully. Jan 30 19:15:23.809760 kernel: BTRFS info (device vda6): last unmount of filesystem aa75aabd-8755-4402-b4b6-23093345fe03 Jan 30 19:15:23.844320 systemd[1]: Finished sysroot-boot.service - /sysroot/boot. 
Jan 30 19:15:23.852570 ignition[921]: INFO : Ignition 2.19.0 Jan 30 19:15:23.852570 ignition[921]: INFO : Stage: mount Jan 30 19:15:23.854380 ignition[921]: INFO : no configs at "/usr/lib/ignition/base.d" Jan 30 19:15:23.854380 ignition[921]: INFO : no config dir at "/usr/lib/ignition/base.platform.d/openstack" Jan 30 19:15:23.854380 ignition[921]: INFO : mount: mount passed Jan 30 19:15:23.854380 ignition[921]: INFO : Ignition finished successfully Jan 30 19:15:23.855159 systemd[1]: Finished ignition-mount.service - Ignition (mount). Jan 30 19:15:24.312096 systemd-networkd[768]: eth0: Gained IPv6LL Jan 30 19:15:25.818962 systemd-networkd[768]: eth0: Ignoring DHCPv6 address 2a02:1348:179:8985:24:19ff:fee6:2616/128 (valid for 59min 59s, preferred for 59min 59s) which conflicts with 2a02:1348:179:8985:24:19ff:fee6:2616/64 assigned by NDisc. Jan 30 19:15:25.818980 systemd-networkd[768]: eth0: Hint: use IPv6Token= setting to change the address generated by NDisc or set UseAutonomousPrefix=no. Jan 30 19:15:30.732910 coreos-metadata[806]: Jan 30 19:15:30.732 WARN failed to locate config-drive, using the metadata service API instead Jan 30 19:15:30.756314 coreos-metadata[806]: Jan 30 19:15:30.756 INFO Fetching http://169.254.169.254/latest/meta-data/hostname: Attempt #1 Jan 30 19:15:30.770707 coreos-metadata[806]: Jan 30 19:15:30.770 INFO Fetch successful Jan 30 19:15:30.771657 coreos-metadata[806]: Jan 30 19:15:30.770 INFO wrote hostname srv-73j9m.gb1.brightbox.com to /sysroot/etc/hostname Jan 30 19:15:30.773863 systemd[1]: flatcar-openstack-hostname.service: Deactivated successfully. Jan 30 19:15:30.774065 systemd[1]: Finished flatcar-openstack-hostname.service - Flatcar OpenStack Metadata Hostname Agent. Jan 30 19:15:30.783595 systemd[1]: Starting ignition-files.service - Ignition (files)... Jan 30 19:15:30.803006 systemd[1]: Mounting sysroot-oem.mount - /sysroot/oem... Jan 30 19:15:30.814461 kernel: BTRFS: device label OEM devid 1 transid 14 /dev/vda6 scanned by mount (937) Jan 30 19:15:30.814554 kernel: BTRFS info (device vda6): first mount of filesystem aa75aabd-8755-4402-b4b6-23093345fe03 Jan 30 19:15:30.817594 kernel: BTRFS info (device vda6): using crc32c (crc32c-intel) checksum algorithm Jan 30 19:15:30.819469 kernel: BTRFS info (device vda6): using free space tree Jan 30 19:15:30.824461 kernel: BTRFS info (device vda6): auto enabling async discard Jan 30 19:15:30.827626 systemd[1]: Mounted sysroot-oem.mount - /sysroot/oem. 
Jan 30 19:15:30.860076 ignition[955]: INFO : Ignition 2.19.0 Jan 30 19:15:30.860076 ignition[955]: INFO : Stage: files Jan 30 19:15:30.862086 ignition[955]: INFO : no configs at "/usr/lib/ignition/base.d" Jan 30 19:15:30.862086 ignition[955]: INFO : no config dir at "/usr/lib/ignition/base.platform.d/openstack" Jan 30 19:15:30.862086 ignition[955]: DEBUG : files: compiled without relabeling support, skipping Jan 30 19:15:30.868361 ignition[955]: INFO : files: ensureUsers: op(1): [started] creating or modifying user "core" Jan 30 19:15:30.868361 ignition[955]: DEBUG : files: ensureUsers: op(1): executing: "usermod" "--root" "/sysroot" "core" Jan 30 19:15:30.872535 ignition[955]: INFO : files: ensureUsers: op(1): [finished] creating or modifying user "core" Jan 30 19:15:30.873571 ignition[955]: INFO : files: ensureUsers: op(2): [started] adding ssh keys to user "core" Jan 30 19:15:30.873571 ignition[955]: INFO : files: ensureUsers: op(2): [finished] adding ssh keys to user "core" Jan 30 19:15:30.873183 unknown[955]: wrote ssh authorized keys file for user: core Jan 30 19:15:30.876674 ignition[955]: INFO : files: createFilesystemsFiles: createFiles: op(3): [started] writing file "/sysroot/home/core/install.sh" Jan 30 19:15:30.876674 ignition[955]: INFO : files: createFilesystemsFiles: createFiles: op(3): [finished] writing file "/sysroot/home/core/install.sh" Jan 30 19:15:30.876674 ignition[955]: INFO : files: createFilesystemsFiles: createFiles: op(4): [started] writing file "/sysroot/etc/flatcar/update.conf" Jan 30 19:15:30.876674 ignition[955]: INFO : files: createFilesystemsFiles: createFiles: op(4): [finished] writing file "/sysroot/etc/flatcar/update.conf" Jan 30 19:15:30.876674 ignition[955]: INFO : files: createFilesystemsFiles: createFiles: op(5): [started] writing link "/sysroot/etc/extensions/kubernetes.raw" -> "/opt/extensions/kubernetes/kubernetes-v1.32.0-x86-64.raw" Jan 30 19:15:30.876674 ignition[955]: INFO : files: createFilesystemsFiles: createFiles: op(5): [finished] writing link "/sysroot/etc/extensions/kubernetes.raw" -> "/opt/extensions/kubernetes/kubernetes-v1.32.0-x86-64.raw" Jan 30 19:15:30.876674 ignition[955]: INFO : files: createFilesystemsFiles: createFiles: op(6): [started] writing file "/sysroot/opt/extensions/kubernetes/kubernetes-v1.32.0-x86-64.raw" Jan 30 19:15:30.876674 ignition[955]: INFO : files: createFilesystemsFiles: createFiles: op(6): GET https://github.com/flatcar/sysext-bakery/releases/download/latest/kubernetes-v1.32.0-x86-64.raw: attempt #1 Jan 30 19:15:31.492605 ignition[955]: INFO : files: createFilesystemsFiles: createFiles: op(6): GET result: OK Jan 30 19:15:33.240493 ignition[955]: INFO : files: createFilesystemsFiles: createFiles: op(6): [finished] writing file "/sysroot/opt/extensions/kubernetes/kubernetes-v1.32.0-x86-64.raw" Jan 30 19:15:33.243427 ignition[955]: INFO : files: createResultFile: createFiles: op(7): [started] writing file "/sysroot/etc/.ignition-result.json" Jan 30 19:15:33.243427 ignition[955]: INFO : files: createResultFile: createFiles: op(7): [finished] writing file "/sysroot/etc/.ignition-result.json" Jan 30 19:15:33.243427 ignition[955]: INFO : files: files passed Jan 30 19:15:33.243427 ignition[955]: INFO : Ignition finished successfully Jan 30 19:15:33.244506 systemd[1]: Finished ignition-files.service - Ignition (files). Jan 30 19:15:33.259808 systemd[1]: Starting ignition-quench.service - Ignition (record completion)... 
Jan 30 19:15:33.264660 systemd[1]: Starting initrd-setup-root-after-ignition.service - Root filesystem completion... Jan 30 19:15:33.268049 systemd[1]: ignition-quench.service: Deactivated successfully. Jan 30 19:15:33.269035 systemd[1]: Finished ignition-quench.service - Ignition (record completion). Jan 30 19:15:33.285351 initrd-setup-root-after-ignition[983]: grep: /sysroot/etc/flatcar/enabled-sysext.conf: No such file or directory Jan 30 19:15:33.285351 initrd-setup-root-after-ignition[983]: grep: /sysroot/usr/share/flatcar/enabled-sysext.conf: No such file or directory Jan 30 19:15:33.288257 initrd-setup-root-after-ignition[987]: grep: /sysroot/etc/flatcar/enabled-sysext.conf: No such file or directory Jan 30 19:15:33.289307 systemd[1]: Finished initrd-setup-root-after-ignition.service - Root filesystem completion. Jan 30 19:15:33.291013 systemd[1]: Reached target ignition-complete.target - Ignition Complete. Jan 30 19:15:33.305712 systemd[1]: Starting initrd-parse-etc.service - Mountpoints Configured in the Real Root... Jan 30 19:15:33.343956 systemd[1]: initrd-parse-etc.service: Deactivated successfully. Jan 30 19:15:33.344184 systemd[1]: Finished initrd-parse-etc.service - Mountpoints Configured in the Real Root. Jan 30 19:15:33.346330 systemd[1]: Reached target initrd-fs.target - Initrd File Systems. Jan 30 19:15:33.347642 systemd[1]: Reached target initrd.target - Initrd Default Target. Jan 30 19:15:33.349235 systemd[1]: dracut-mount.service - dracut mount hook was skipped because no trigger condition checks were met. Jan 30 19:15:33.354683 systemd[1]: Starting dracut-pre-pivot.service - dracut pre-pivot and cleanup hook... Jan 30 19:15:33.374428 systemd[1]: Finished dracut-pre-pivot.service - dracut pre-pivot and cleanup hook. Jan 30 19:15:33.382643 systemd[1]: Starting initrd-cleanup.service - Cleaning Up and Shutting Down Daemons... Jan 30 19:15:33.403026 systemd[1]: Stopped target nss-lookup.target - Host and Network Name Lookups. Jan 30 19:15:33.404951 systemd[1]: Stopped target remote-cryptsetup.target - Remote Encrypted Volumes. Jan 30 19:15:33.405876 systemd[1]: Stopped target timers.target - Timer Units. Jan 30 19:15:33.407538 systemd[1]: dracut-pre-pivot.service: Deactivated successfully. Jan 30 19:15:33.407767 systemd[1]: Stopped dracut-pre-pivot.service - dracut pre-pivot and cleanup hook. Jan 30 19:15:33.409491 systemd[1]: Stopped target initrd.target - Initrd Default Target. Jan 30 19:15:33.410495 systemd[1]: Stopped target basic.target - Basic System. Jan 30 19:15:33.412026 systemd[1]: Stopped target ignition-complete.target - Ignition Complete. Jan 30 19:15:33.413563 systemd[1]: Stopped target ignition-diskful.target - Ignition Boot Disk Setup. Jan 30 19:15:33.414993 systemd[1]: Stopped target initrd-root-device.target - Initrd Root Device. Jan 30 19:15:33.416531 systemd[1]: Stopped target remote-fs.target - Remote File Systems. Jan 30 19:15:33.418157 systemd[1]: Stopped target remote-fs-pre.target - Preparation for Remote File Systems. Jan 30 19:15:33.419822 systemd[1]: Stopped target sysinit.target - System Initialization. Jan 30 19:15:33.421288 systemd[1]: Stopped target local-fs.target - Local File Systems. Jan 30 19:15:33.422856 systemd[1]: Stopped target swap.target - Swaps. Jan 30 19:15:33.424214 systemd[1]: dracut-pre-mount.service: Deactivated successfully. Jan 30 19:15:33.424368 systemd[1]: Stopped dracut-pre-mount.service - dracut pre-mount hook. Jan 30 19:15:33.426368 systemd[1]: Stopped target cryptsetup.target - Local Encrypted Volumes. 
Jan 30 19:15:33.427344 systemd[1]: Stopped target cryptsetup-pre.target - Local Encrypted Volumes (Pre). Jan 30 19:15:33.428786 systemd[1]: clevis-luks-askpass.path: Deactivated successfully. Jan 30 19:15:33.430486 systemd[1]: Stopped clevis-luks-askpass.path - Forward Password Requests to Clevis Directory Watch. Jan 30 19:15:33.432299 systemd[1]: dracut-initqueue.service: Deactivated successfully. Jan 30 19:15:33.432525 systemd[1]: Stopped dracut-initqueue.service - dracut initqueue hook. Jan 30 19:15:33.434389 systemd[1]: initrd-setup-root-after-ignition.service: Deactivated successfully. Jan 30 19:15:33.434605 systemd[1]: Stopped initrd-setup-root-after-ignition.service - Root filesystem completion. Jan 30 19:15:33.436230 systemd[1]: ignition-files.service: Deactivated successfully. Jan 30 19:15:33.436395 systemd[1]: Stopped ignition-files.service - Ignition (files). Jan 30 19:15:33.454184 systemd[1]: Stopping ignition-mount.service - Ignition (mount)... Jan 30 19:15:33.454959 systemd[1]: kmod-static-nodes.service: Deactivated successfully. Jan 30 19:15:33.455208 systemd[1]: Stopped kmod-static-nodes.service - Create List of Static Device Nodes. Jan 30 19:15:33.458736 systemd[1]: Stopping sysroot-boot.service - /sysroot/boot... Jan 30 19:15:33.460853 systemd[1]: systemd-udev-trigger.service: Deactivated successfully. Jan 30 19:15:33.461096 systemd[1]: Stopped systemd-udev-trigger.service - Coldplug All udev Devices. Jan 30 19:15:33.464683 systemd[1]: dracut-pre-trigger.service: Deactivated successfully. Jan 30 19:15:33.464915 systemd[1]: Stopped dracut-pre-trigger.service - dracut pre-trigger hook. Jan 30 19:15:33.475162 systemd[1]: initrd-cleanup.service: Deactivated successfully. Jan 30 19:15:33.475300 systemd[1]: Finished initrd-cleanup.service - Cleaning Up and Shutting Down Daemons. Jan 30 19:15:33.488471 ignition[1007]: INFO : Ignition 2.19.0 Jan 30 19:15:33.488471 ignition[1007]: INFO : Stage: umount Jan 30 19:15:33.488471 ignition[1007]: INFO : no configs at "/usr/lib/ignition/base.d" Jan 30 19:15:33.488471 ignition[1007]: INFO : no config dir at "/usr/lib/ignition/base.platform.d/openstack" Jan 30 19:15:33.492911 ignition[1007]: INFO : umount: umount passed Jan 30 19:15:33.492911 ignition[1007]: INFO : Ignition finished successfully Jan 30 19:15:33.491928 systemd[1]: ignition-mount.service: Deactivated successfully. Jan 30 19:15:33.492103 systemd[1]: Stopped ignition-mount.service - Ignition (mount). Jan 30 19:15:33.494174 systemd[1]: ignition-disks.service: Deactivated successfully. Jan 30 19:15:33.494340 systemd[1]: Stopped ignition-disks.service - Ignition (disks). Jan 30 19:15:33.495990 systemd[1]: ignition-kargs.service: Deactivated successfully. Jan 30 19:15:33.496064 systemd[1]: Stopped ignition-kargs.service - Ignition (kargs). Jan 30 19:15:33.497624 systemd[1]: ignition-fetch.service: Deactivated successfully. Jan 30 19:15:33.497697 systemd[1]: Stopped ignition-fetch.service - Ignition (fetch). Jan 30 19:15:33.499108 systemd[1]: Stopped target network.target - Network. Jan 30 19:15:33.500469 systemd[1]: ignition-fetch-offline.service: Deactivated successfully. Jan 30 19:15:33.500561 systemd[1]: Stopped ignition-fetch-offline.service - Ignition (fetch-offline). Jan 30 19:15:33.501986 systemd[1]: Stopped target paths.target - Path Units. Jan 30 19:15:33.503332 systemd[1]: systemd-ask-password-console.path: Deactivated successfully. Jan 30 19:15:33.507512 systemd[1]: Stopped systemd-ask-password-console.path - Dispatch Password Requests to Console Directory Watch. 
Jan 30 19:15:33.514811 systemd[1]: Stopped target slices.target - Slice Units. Jan 30 19:15:33.516368 systemd[1]: Stopped target sockets.target - Socket Units. Jan 30 19:15:33.517111 systemd[1]: iscsid.socket: Deactivated successfully. Jan 30 19:15:33.517177 systemd[1]: Closed iscsid.socket - Open-iSCSI iscsid Socket. Jan 30 19:15:33.518498 systemd[1]: iscsiuio.socket: Deactivated successfully. Jan 30 19:15:33.518574 systemd[1]: Closed iscsiuio.socket - Open-iSCSI iscsiuio Socket. Jan 30 19:15:33.520038 systemd[1]: ignition-setup.service: Deactivated successfully. Jan 30 19:15:33.520115 systemd[1]: Stopped ignition-setup.service - Ignition (setup). Jan 30 19:15:33.521871 systemd[1]: ignition-setup-pre.service: Deactivated successfully. Jan 30 19:15:33.521936 systemd[1]: Stopped ignition-setup-pre.service - Ignition env setup. Jan 30 19:15:33.523494 systemd[1]: Stopping systemd-networkd.service - Network Configuration... Jan 30 19:15:33.525301 systemd[1]: Stopping systemd-resolved.service - Network Name Resolution... Jan 30 19:15:33.527618 systemd[1]: sysroot-boot.mount: Deactivated successfully. Jan 30 19:15:33.527682 systemd-networkd[768]: eth0: DHCPv6 lease lost Jan 30 19:15:33.530055 systemd[1]: systemd-networkd.service: Deactivated successfully. Jan 30 19:15:33.530225 systemd[1]: Stopped systemd-networkd.service - Network Configuration. Jan 30 19:15:33.533768 systemd[1]: systemd-networkd.socket: Deactivated successfully. Jan 30 19:15:33.533891 systemd[1]: Closed systemd-networkd.socket - Network Service Netlink Socket. Jan 30 19:15:33.540610 systemd[1]: Stopping network-cleanup.service - Network Cleanup... Jan 30 19:15:33.541832 systemd[1]: parse-ip-for-networkd.service: Deactivated successfully. Jan 30 19:15:33.541921 systemd[1]: Stopped parse-ip-for-networkd.service - Write systemd-networkd units from cmdline. Jan 30 19:15:33.545311 systemd[1]: Stopping systemd-udevd.service - Rule-based Manager for Device Events and Files... Jan 30 19:15:33.548832 systemd[1]: systemd-resolved.service: Deactivated successfully. Jan 30 19:15:33.549019 systemd[1]: Stopped systemd-resolved.service - Network Name Resolution. Jan 30 19:15:33.553199 systemd[1]: systemd-sysctl.service: Deactivated successfully. Jan 30 19:15:33.553341 systemd[1]: Stopped systemd-sysctl.service - Apply Kernel Variables. Jan 30 19:15:33.555711 systemd[1]: systemd-modules-load.service: Deactivated successfully. Jan 30 19:15:33.555778 systemd[1]: Stopped systemd-modules-load.service - Load Kernel Modules. Jan 30 19:15:33.557511 systemd[1]: systemd-tmpfiles-setup.service: Deactivated successfully. Jan 30 19:15:33.557644 systemd[1]: Stopped systemd-tmpfiles-setup.service - Create System Files and Directories. Jan 30 19:15:33.560080 systemd[1]: systemd-udevd.service: Deactivated successfully. Jan 30 19:15:33.560313 systemd[1]: Stopped systemd-udevd.service - Rule-based Manager for Device Events and Files. Jan 30 19:15:33.569692 systemd[1]: systemd-udevd-control.socket: Deactivated successfully. Jan 30 19:15:33.569805 systemd[1]: Closed systemd-udevd-control.socket - udev Control Socket. Jan 30 19:15:33.571419 systemd[1]: systemd-udevd-kernel.socket: Deactivated successfully. Jan 30 19:15:33.571523 systemd[1]: Closed systemd-udevd-kernel.socket - udev Kernel Socket. Jan 30 19:15:33.573541 systemd[1]: dracut-pre-udev.service: Deactivated successfully. Jan 30 19:15:33.573624 systemd[1]: Stopped dracut-pre-udev.service - dracut pre-udev hook. Jan 30 19:15:33.576598 systemd[1]: dracut-cmdline.service: Deactivated successfully. 
Jan 30 19:15:33.576673 systemd[1]: Stopped dracut-cmdline.service - dracut cmdline hook. Jan 30 19:15:33.578184 systemd[1]: dracut-cmdline-ask.service: Deactivated successfully. Jan 30 19:15:33.578269 systemd[1]: Stopped dracut-cmdline-ask.service - dracut ask for additional cmdline parameters. Jan 30 19:15:33.586718 systemd[1]: Starting initrd-udevadm-cleanup-db.service - Cleanup udev Database... Jan 30 19:15:33.587694 systemd[1]: systemd-tmpfiles-setup-dev.service: Deactivated successfully. Jan 30 19:15:33.587773 systemd[1]: Stopped systemd-tmpfiles-setup-dev.service - Create Static Device Nodes in /dev. Jan 30 19:15:33.590938 systemd[1]: systemd-vconsole-setup.service: Deactivated successfully. Jan 30 19:15:33.591008 systemd[1]: Stopped systemd-vconsole-setup.service - Virtual Console Setup. Jan 30 19:15:33.593069 systemd[1]: network-cleanup.service: Deactivated successfully. Jan 30 19:15:33.593239 systemd[1]: Stopped network-cleanup.service - Network Cleanup. Jan 30 19:15:33.601463 systemd[1]: initrd-udevadm-cleanup-db.service: Deactivated successfully. Jan 30 19:15:33.601640 systemd[1]: Finished initrd-udevadm-cleanup-db.service - Cleanup udev Database. Jan 30 19:15:33.625033 systemd[1]: sysroot-boot.service: Deactivated successfully. Jan 30 19:15:33.625234 systemd[1]: Stopped sysroot-boot.service - /sysroot/boot. Jan 30 19:15:33.627505 systemd[1]: Reached target initrd-switch-root.target - Switch Root. Jan 30 19:15:33.629192 systemd[1]: initrd-setup-root.service: Deactivated successfully. Jan 30 19:15:33.629302 systemd[1]: Stopped initrd-setup-root.service - Root filesystem setup. Jan 30 19:15:33.635679 systemd[1]: Starting initrd-switch-root.service - Switch Root... Jan 30 19:15:33.654538 systemd[1]: Switching root. Jan 30 19:15:33.699590 systemd-journald[201]: Received SIGTERM from PID 1 (systemd). Jan 30 19:15:33.699780 systemd-journald[201]: Journal stopped Jan 30 19:15:35.195246 kernel: SELinux: policy capability network_peer_controls=1 Jan 30 19:15:35.195371 kernel: SELinux: policy capability open_perms=1 Jan 30 19:15:35.195402 kernel: SELinux: policy capability extended_socket_class=1 Jan 30 19:15:35.195475 kernel: SELinux: policy capability always_check_network=0 Jan 30 19:15:35.195515 kernel: SELinux: policy capability cgroup_seclabel=1 Jan 30 19:15:35.195560 kernel: SELinux: policy capability nnp_nosuid_transition=1 Jan 30 19:15:35.195582 kernel: SELinux: policy capability genfs_seclabel_symlinks=0 Jan 30 19:15:35.195601 kernel: SELinux: policy capability ioctl_skip_cloexec=0 Jan 30 19:15:35.195639 kernel: audit: type=1403 audit(1738264533.967:2): auid=4294967295 ses=4294967295 lsm=selinux res=1 Jan 30 19:15:35.195686 systemd[1]: Successfully loaded SELinux policy in 49.220ms. Jan 30 19:15:35.195735 systemd[1]: Relabeled /dev, /dev/shm, /run, /sys/fs/cgroup in 21.461ms. Jan 30 19:15:35.195765 systemd[1]: systemd 255 running in system mode (+PAM +AUDIT +SELINUX -APPARMOR +IMA +SMACK +SECCOMP +GCRYPT -GNUTLS +OPENSSL -ACL +BLKID +CURL +ELFUTILS -FIDO2 +IDN2 -IDN +IPTC +KMOD +LIBCRYPTSETUP +LIBFDISK +PCRE2 -PWQUALITY -P11KIT -QRENCODE +TPM2 +BZIP2 +LZ4 +XZ +ZLIB +ZSTD -BPF_FRAMEWORK -XKBCOMMON +UTMP -SYSVINIT default-hierarchy=unified) Jan 30 19:15:35.195793 systemd[1]: Detected virtualization kvm. Jan 30 19:15:35.195819 systemd[1]: Detected architecture x86-64. Jan 30 19:15:35.195852 systemd[1]: Detected first boot. Jan 30 19:15:35.195877 systemd[1]: Hostname set to . Jan 30 19:15:35.195921 systemd[1]: Initializing machine ID from VM UUID. 
Jan 30 19:15:35.195941 zram_generator::config[1049]: No configuration found. Jan 30 19:15:35.195981 systemd[1]: Populated /etc with preset unit settings. Jan 30 19:15:35.196001 systemd[1]: initrd-switch-root.service: Deactivated successfully. Jan 30 19:15:35.196025 systemd[1]: Stopped initrd-switch-root.service - Switch Root. Jan 30 19:15:35.196056 systemd[1]: systemd-journald.service: Scheduled restart job, restart counter is at 1. Jan 30 19:15:35.196084 systemd[1]: Created slice system-addon\x2dconfig.slice - Slice /system/addon-config. Jan 30 19:15:35.196105 systemd[1]: Created slice system-addon\x2drun.slice - Slice /system/addon-run. Jan 30 19:15:35.196136 systemd[1]: Created slice system-getty.slice - Slice /system/getty. Jan 30 19:15:35.196168 systemd[1]: Created slice system-modprobe.slice - Slice /system/modprobe. Jan 30 19:15:35.196192 systemd[1]: Created slice system-serial\x2dgetty.slice - Slice /system/serial-getty. Jan 30 19:15:35.196212 systemd[1]: Created slice system-system\x2dcloudinit.slice - Slice /system/system-cloudinit. Jan 30 19:15:35.196235 systemd[1]: Created slice system-systemd\x2dfsck.slice - Slice /system/systemd-fsck. Jan 30 19:15:35.196261 systemd[1]: Created slice user.slice - User and Session Slice. Jan 30 19:15:35.196291 systemd[1]: Started clevis-luks-askpass.path - Forward Password Requests to Clevis Directory Watch. Jan 30 19:15:35.196311 systemd[1]: Started systemd-ask-password-console.path - Dispatch Password Requests to Console Directory Watch. Jan 30 19:15:35.196329 systemd[1]: Started systemd-ask-password-wall.path - Forward Password Requests to Wall Directory Watch. Jan 30 19:15:35.196367 systemd[1]: Set up automount boot.automount - Boot partition Automount Point. Jan 30 19:15:35.196387 systemd[1]: Set up automount proc-sys-fs-binfmt_misc.automount - Arbitrary Executable File Formats File System Automount Point. Jan 30 19:15:35.196412 systemd[1]: Expecting device dev-disk-by\x2dlabel-OEM.device - /dev/disk/by-label/OEM... Jan 30 19:15:35.196463 systemd[1]: Expecting device dev-ttyS0.device - /dev/ttyS0... Jan 30 19:15:35.196486 systemd[1]: Reached target cryptsetup-pre.target - Local Encrypted Volumes (Pre). Jan 30 19:15:35.196517 systemd[1]: Stopped target initrd-switch-root.target - Switch Root. Jan 30 19:15:35.196565 systemd[1]: Stopped target initrd-fs.target - Initrd File Systems. Jan 30 19:15:35.196595 systemd[1]: Stopped target initrd-root-fs.target - Initrd Root File System. Jan 30 19:15:35.196615 systemd[1]: Reached target integritysetup.target - Local Integrity Protected Volumes. Jan 30 19:15:35.196635 systemd[1]: Reached target remote-cryptsetup.target - Remote Encrypted Volumes. Jan 30 19:15:35.196661 systemd[1]: Reached target remote-fs.target - Remote File Systems. Jan 30 19:15:35.196682 systemd[1]: Reached target slices.target - Slice Units. Jan 30 19:15:35.196719 systemd[1]: Reached target swap.target - Swaps. Jan 30 19:15:35.196740 systemd[1]: Reached target veritysetup.target - Local Verity Protected Volumes. Jan 30 19:15:35.196761 systemd[1]: Listening on systemd-coredump.socket - Process Core Dump Socket. Jan 30 19:15:35.196780 systemd[1]: Listening on systemd-networkd.socket - Network Service Netlink Socket. Jan 30 19:15:35.196816 systemd[1]: Listening on systemd-udevd-control.socket - udev Control Socket. Jan 30 19:15:35.196844 systemd[1]: Listening on systemd-udevd-kernel.socket - udev Kernel Socket. Jan 30 19:15:35.196865 systemd[1]: Listening on systemd-userdbd.socket - User Database Manager Socket. 
Jan 30 19:15:35.196899 systemd[1]: Mounting dev-hugepages.mount - Huge Pages File System... Jan 30 19:15:35.196928 systemd[1]: Mounting dev-mqueue.mount - POSIX Message Queue File System... Jan 30 19:15:35.196972 systemd[1]: Mounting media.mount - External Media Directory... Jan 30 19:15:35.196993 systemd[1]: proc-xen.mount - /proc/xen was skipped because of an unmet condition check (ConditionVirtualization=xen). Jan 30 19:15:35.197018 systemd[1]: Mounting sys-kernel-debug.mount - Kernel Debug File System... Jan 30 19:15:35.197039 systemd[1]: Mounting sys-kernel-tracing.mount - Kernel Trace File System... Jan 30 19:15:35.197064 systemd[1]: Mounting tmp.mount - Temporary Directory /tmp... Jan 30 19:15:35.197092 systemd[1]: var-lib-machines.mount - Virtual Machine and Container Storage (Compatibility) was skipped because of an unmet condition check (ConditionPathExists=/var/lib/machines.raw). Jan 30 19:15:35.197117 systemd[1]: Reached target machines.target - Containers. Jan 30 19:15:35.197138 systemd[1]: Starting flatcar-tmpfiles.service - Create missing system files... Jan 30 19:15:35.197180 systemd[1]: ignition-delete-config.service - Ignition (delete config) was skipped because no trigger condition checks were met. Jan 30 19:15:35.197201 systemd[1]: Starting kmod-static-nodes.service - Create List of Static Device Nodes... Jan 30 19:15:35.197220 systemd[1]: Starting modprobe@configfs.service - Load Kernel Module configfs... Jan 30 19:15:35.197258 systemd[1]: Starting modprobe@dm_mod.service - Load Kernel Module dm_mod... Jan 30 19:15:35.197297 systemd[1]: Starting modprobe@drm.service - Load Kernel Module drm... Jan 30 19:15:35.197333 systemd[1]: Starting modprobe@efi_pstore.service - Load Kernel Module efi_pstore... Jan 30 19:15:35.197368 systemd[1]: Starting modprobe@fuse.service - Load Kernel Module fuse... Jan 30 19:15:35.197390 systemd[1]: Starting modprobe@loop.service - Load Kernel Module loop... Jan 30 19:15:35.197414 systemd[1]: setup-nsswitch.service - Create /etc/nsswitch.conf was skipped because of an unmet condition check (ConditionPathExists=!/etc/nsswitch.conf). Jan 30 19:15:35.200446 systemd[1]: systemd-fsck-root.service: Deactivated successfully. Jan 30 19:15:35.200496 systemd[1]: Stopped systemd-fsck-root.service - File System Check on Root Device. Jan 30 19:15:35.200560 systemd[1]: systemd-fsck-usr.service: Deactivated successfully. Jan 30 19:15:35.200602 systemd[1]: Stopped systemd-fsck-usr.service. Jan 30 19:15:35.200636 systemd[1]: Starting systemd-journald.service - Journal Service... Jan 30 19:15:35.200688 systemd[1]: Starting systemd-modules-load.service - Load Kernel Modules... Jan 30 19:15:35.200746 systemd[1]: Starting systemd-network-generator.service - Generate network units from Kernel command line... Jan 30 19:15:35.200769 systemd[1]: Starting systemd-remount-fs.service - Remount Root and Kernel File Systems... Jan 30 19:15:35.200801 systemd[1]: Starting systemd-udev-trigger.service - Coldplug All udev Devices... Jan 30 19:15:35.200831 systemd[1]: verity-setup.service: Deactivated successfully. Jan 30 19:15:35.200858 systemd[1]: Stopped verity-setup.service. Jan 30 19:15:35.200891 systemd[1]: xenserver-pv-version.service - Set fake PV driver version for XenServer was skipped because of an unmet condition check (ConditionVirtualization=xen). Jan 30 19:15:35.200920 systemd[1]: Mounted dev-hugepages.mount - Huge Pages File System. Jan 30 19:15:35.200968 systemd[1]: Mounted dev-mqueue.mount - POSIX Message Queue File System. 
Jan 30 19:15:35.200988 systemd[1]: Mounted media.mount - External Media Directory. Jan 30 19:15:35.201025 systemd[1]: Mounted sys-kernel-debug.mount - Kernel Debug File System. Jan 30 19:15:35.201045 systemd[1]: Mounted sys-kernel-tracing.mount - Kernel Trace File System. Jan 30 19:15:35.201063 systemd[1]: Mounted tmp.mount - Temporary Directory /tmp. Jan 30 19:15:35.201082 systemd[1]: Finished flatcar-tmpfiles.service - Create missing system files. Jan 30 19:15:35.201126 systemd[1]: Finished kmod-static-nodes.service - Create List of Static Device Nodes. Jan 30 19:15:35.201146 systemd[1]: modprobe@configfs.service: Deactivated successfully. Jan 30 19:15:35.201171 systemd[1]: Finished modprobe@configfs.service - Load Kernel Module configfs. Jan 30 19:15:35.201219 systemd-journald[1145]: Collecting audit messages is disabled. Jan 30 19:15:35.201278 systemd[1]: modprobe@dm_mod.service: Deactivated successfully. Jan 30 19:15:35.201300 systemd[1]: Finished modprobe@dm_mod.service - Load Kernel Module dm_mod. Jan 30 19:15:35.201342 kernel: fuse: init (API version 7.39) Jan 30 19:15:35.201375 systemd[1]: modprobe@efi_pstore.service: Deactivated successfully. Jan 30 19:15:35.201394 kernel: loop: module loaded Jan 30 19:15:35.201417 systemd[1]: Finished modprobe@efi_pstore.service - Load Kernel Module efi_pstore. Jan 30 19:15:35.201450 systemd-journald[1145]: Journal started Jan 30 19:15:35.206109 systemd-journald[1145]: Runtime Journal (/run/log/journal/9a1931caf5704a03a94f91f95f700cf5) is 4.7M, max 38.0M, 33.2M free. Jan 30 19:15:35.206181 systemd[1]: modprobe@fuse.service: Deactivated successfully. Jan 30 19:15:35.206227 kernel: ACPI: bus type drm_connector registered Jan 30 19:15:35.206269 systemd[1]: Finished modprobe@fuse.service - Load Kernel Module fuse. Jan 30 19:15:34.777898 systemd[1]: Queued start job for default target multi-user.target. Jan 30 19:15:34.801611 systemd[1]: Unnecessary job was removed for dev-vda6.device - /dev/vda6. Jan 30 19:15:34.802328 systemd[1]: systemd-journald.service: Deactivated successfully. Jan 30 19:15:35.211240 systemd[1]: Started systemd-journald.service - Journal Service. Jan 30 19:15:35.212323 systemd[1]: modprobe@drm.service: Deactivated successfully. Jan 30 19:15:35.213590 systemd[1]: Finished modprobe@drm.service - Load Kernel Module drm. Jan 30 19:15:35.215142 systemd[1]: modprobe@loop.service: Deactivated successfully. Jan 30 19:15:35.215355 systemd[1]: Finished modprobe@loop.service - Load Kernel Module loop. Jan 30 19:15:35.216959 systemd[1]: Finished systemd-modules-load.service - Load Kernel Modules. Jan 30 19:15:35.219349 systemd[1]: Finished systemd-network-generator.service - Generate network units from Kernel command line. Jan 30 19:15:35.220636 systemd[1]: Finished systemd-remount-fs.service - Remount Root and Kernel File Systems. Jan 30 19:15:35.237901 systemd[1]: Reached target network-pre.target - Preparation for Network. Jan 30 19:15:35.247942 systemd[1]: Mounting sys-fs-fuse-connections.mount - FUSE Control File System... Jan 30 19:15:35.256419 systemd[1]: Mounting sys-kernel-config.mount - Kernel Configuration File System... Jan 30 19:15:35.257414 systemd[1]: remount-root.service - Remount Root File System was skipped because of an unmet condition check (ConditionPathIsReadWrite=!/). Jan 30 19:15:35.257478 systemd[1]: Reached target local-fs.target - Local File Systems. Jan 30 19:15:35.260275 systemd[1]: Listening on systemd-sysext.socket - System Extension Image Management (Varlink). 
Jan 30 19:15:35.265584 systemd[1]: Starting dracut-shutdown.service - Restore /run/initramfs on shutdown... Jan 30 19:15:35.277373 systemd[1]: Starting ldconfig.service - Rebuild Dynamic Linker Cache... Jan 30 19:15:35.278405 systemd[1]: systemd-binfmt.service - Set Up Additional Binary Formats was skipped because no trigger condition checks were met. Jan 30 19:15:35.284290 systemd[1]: Starting systemd-hwdb-update.service - Rebuild Hardware Database... Jan 30 19:15:35.298699 systemd[1]: Starting systemd-journal-flush.service - Flush Journal to Persistent Storage... Jan 30 19:15:35.300601 systemd[1]: systemd-pstore.service - Platform Persistent Storage Archival was skipped because of an unmet condition check (ConditionDirectoryNotEmpty=/sys/fs/pstore). Jan 30 19:15:35.302644 systemd[1]: Starting systemd-random-seed.service - Load/Save OS Random Seed... Jan 30 19:15:35.303825 systemd[1]: systemd-repart.service - Repartition Root Disk was skipped because no trigger condition checks were met. Jan 30 19:15:35.306687 systemd[1]: Starting systemd-sysctl.service - Apply Kernel Variables... Jan 30 19:15:35.313643 systemd[1]: Starting systemd-sysext.service - Merge System Extension Images into /usr/ and /opt/... Jan 30 19:15:35.322654 systemd[1]: Starting systemd-sysusers.service - Create System Users... Jan 30 19:15:35.326729 systemd[1]: Mounted sys-fs-fuse-connections.mount - FUSE Control File System. Jan 30 19:15:35.327972 systemd[1]: Mounted sys-kernel-config.mount - Kernel Configuration File System. Jan 30 19:15:35.330259 systemd[1]: Finished dracut-shutdown.service - Restore /run/initramfs on shutdown. Jan 30 19:15:35.338605 systemd-journald[1145]: Time spent on flushing to /var/log/journal/9a1931caf5704a03a94f91f95f700cf5 is 163.448ms for 1124 entries. Jan 30 19:15:35.338605 systemd-journald[1145]: System Journal (/var/log/journal/9a1931caf5704a03a94f91f95f700cf5) is 8.0M, max 584.8M, 576.8M free. Jan 30 19:15:35.541735 systemd-journald[1145]: Received client request to flush runtime journal. Jan 30 19:15:35.542597 kernel: loop0: detected capacity change from 0 to 140768 Jan 30 19:15:35.542657 kernel: squashfs: version 4.0 (2009/01/31) Phillip Lougher Jan 30 19:15:35.542694 kernel: loop1: detected capacity change from 0 to 218376 Jan 30 19:15:35.388191 systemd[1]: Finished systemd-random-seed.service - Load/Save OS Random Seed. Jan 30 19:15:35.389212 systemd[1]: Reached target first-boot-complete.target - First Boot Complete. Jan 30 19:15:35.396677 systemd[1]: Starting systemd-machine-id-commit.service - Commit a transient machine-id on disk... Jan 30 19:15:35.439939 systemd[1]: Finished systemd-sysctl.service - Apply Kernel Variables. Jan 30 19:15:35.477691 systemd[1]: etc-machine\x2did.mount: Deactivated successfully. Jan 30 19:15:35.479920 systemd[1]: Finished systemd-machine-id-commit.service - Commit a transient machine-id on disk. Jan 30 19:15:35.503934 systemd[1]: Finished systemd-udev-trigger.service - Coldplug All udev Devices. Jan 30 19:15:35.506236 systemd[1]: Finished systemd-sysusers.service - Create System Users. Jan 30 19:15:35.518376 systemd[1]: Starting systemd-tmpfiles-setup-dev.service - Create Static Device Nodes in /dev... Jan 30 19:15:35.526033 systemd[1]: Starting systemd-udev-settle.service - Wait for udev To Complete Device Initialization... Jan 30 19:15:35.549187 systemd[1]: Finished systemd-journal-flush.service - Flush Journal to Persistent Storage. Jan 30 19:15:35.578804 udevadm[1195]: systemd-udev-settle.service is deprecated. 
Please fix lvm2-activation-early.service, lvm2-activation.service not to pull it in. Jan 30 19:15:35.590815 kernel: loop2: detected capacity change from 0 to 142488 Jan 30 19:15:35.611179 systemd-tmpfiles[1194]: ACLs are not supported, ignoring. Jan 30 19:15:35.611208 systemd-tmpfiles[1194]: ACLs are not supported, ignoring. Jan 30 19:15:35.628929 systemd[1]: Finished systemd-tmpfiles-setup-dev.service - Create Static Device Nodes in /dev. Jan 30 19:15:35.670539 kernel: loop3: detected capacity change from 0 to 8 Jan 30 19:15:35.696998 kernel: loop4: detected capacity change from 0 to 140768 Jan 30 19:15:35.726466 kernel: loop5: detected capacity change from 0 to 218376 Jan 30 19:15:35.754275 kernel: loop6: detected capacity change from 0 to 142488 Jan 30 19:15:35.778630 kernel: loop7: detected capacity change from 0 to 8 Jan 30 19:15:35.785555 (sd-merge)[1208]: Using extensions 'containerd-flatcar', 'docker-flatcar', 'kubernetes', 'oem-openstack'. Jan 30 19:15:35.787605 (sd-merge)[1208]: Merged extensions into '/usr'. Jan 30 19:15:35.797474 systemd[1]: Reloading requested from client PID 1182 ('systemd-sysext') (unit systemd-sysext.service)... Jan 30 19:15:35.798038 systemd[1]: Reloading... Jan 30 19:15:35.947837 zram_generator::config[1233]: No configuration found. Jan 30 19:15:36.077402 ldconfig[1177]: /sbin/ldconfig: /lib/ld.so.conf is not an ELF file - it has the wrong magic bytes at the start. Jan 30 19:15:36.196143 systemd[1]: /usr/lib/systemd/system/docker.socket:6: ListenStream= references a path below legacy directory /var/run/, updating /var/run/docker.sock → /run/docker.sock; please update the unit file accordingly. Jan 30 19:15:36.267164 systemd[1]: Reloading finished in 468 ms. Jan 30 19:15:36.298510 systemd[1]: Finished ldconfig.service - Rebuild Dynamic Linker Cache. Jan 30 19:15:36.304742 systemd[1]: Finished systemd-sysext.service - Merge System Extension Images into /usr/ and /opt/. Jan 30 19:15:36.318718 systemd[1]: Starting ensure-sysext.service... Jan 30 19:15:36.333554 systemd[1]: Starting systemd-tmpfiles-setup.service - Create System Files and Directories... Jan 30 19:15:36.355643 systemd[1]: Reloading requested from client PID 1290 ('systemctl') (unit ensure-sysext.service)... Jan 30 19:15:36.355679 systemd[1]: Reloading... Jan 30 19:15:36.371049 systemd-tmpfiles[1291]: /usr/lib/tmpfiles.d/provision.conf:20: Duplicate line for path "/root", ignoring. Jan 30 19:15:36.373802 systemd-tmpfiles[1291]: /usr/lib/tmpfiles.d/systemd-flatcar.conf:6: Duplicate line for path "/var/log/journal", ignoring. Jan 30 19:15:36.378087 systemd-tmpfiles[1291]: /usr/lib/tmpfiles.d/systemd.conf:29: Duplicate line for path "/var/lib/systemd", ignoring. Jan 30 19:15:36.378730 systemd-tmpfiles[1291]: ACLs are not supported, ignoring. Jan 30 19:15:36.379142 systemd-tmpfiles[1291]: ACLs are not supported, ignoring. Jan 30 19:15:36.385163 systemd-tmpfiles[1291]: Detected autofs mount point /boot during canonicalization of boot. Jan 30 19:15:36.385305 systemd-tmpfiles[1291]: Skipping /boot Jan 30 19:15:36.410824 systemd-tmpfiles[1291]: Detected autofs mount point /boot during canonicalization of boot. Jan 30 19:15:36.410845 systemd-tmpfiles[1291]: Skipping /boot Jan 30 19:15:36.471520 zram_generator::config[1324]: No configuration found. Jan 30 19:15:36.634615 systemd[1]: /usr/lib/systemd/system/docker.socket:6: ListenStream= references a path below legacy directory /var/run/, updating /var/run/docker.sock → /run/docker.sock; please update the unit file accordingly. 
Jan 30 19:15:36.707243 systemd[1]: Reloading finished in 350 ms. Jan 30 19:15:36.730515 systemd[1]: Finished systemd-hwdb-update.service - Rebuild Hardware Database. Jan 30 19:15:36.735998 systemd[1]: Finished systemd-tmpfiles-setup.service - Create System Files and Directories. Jan 30 19:15:36.755082 systemd[1]: Starting audit-rules.service - Load Security Auditing Rules... Jan 30 19:15:36.759681 systemd[1]: Starting clean-ca-certificates.service - Clean up broken links in /etc/ssl/certs... Jan 30 19:15:36.764694 systemd[1]: Starting systemd-journal-catalog-update.service - Rebuild Journal Catalog... Jan 30 19:15:36.774395 systemd[1]: Starting systemd-resolved.service - Network Name Resolution... Jan 30 19:15:36.778210 systemd[1]: Starting systemd-udevd.service - Rule-based Manager for Device Events and Files... Jan 30 19:15:36.783371 systemd[1]: Starting systemd-update-utmp.service - Record System Boot/Shutdown in UTMP... Jan 30 19:15:36.793682 systemd[1]: proc-xen.mount - /proc/xen was skipped because of an unmet condition check (ConditionVirtualization=xen). Jan 30 19:15:36.793953 systemd[1]: ignition-delete-config.service - Ignition (delete config) was skipped because no trigger condition checks were met. Jan 30 19:15:36.799745 systemd[1]: Starting modprobe@dm_mod.service - Load Kernel Module dm_mod... Jan 30 19:15:36.810540 systemd[1]: Starting modprobe@efi_pstore.service - Load Kernel Module efi_pstore... Jan 30 19:15:36.815320 systemd[1]: Starting modprobe@loop.service - Load Kernel Module loop... Jan 30 19:15:36.816562 systemd[1]: systemd-binfmt.service - Set Up Additional Binary Formats was skipped because no trigger condition checks were met. Jan 30 19:15:36.816721 systemd[1]: xenserver-pv-version.service - Set fake PV driver version for XenServer was skipped because of an unmet condition check (ConditionVirtualization=xen). Jan 30 19:15:36.825029 systemd[1]: Starting systemd-userdbd.service - User Database Manager... Jan 30 19:15:36.829122 systemd[1]: proc-xen.mount - /proc/xen was skipped because of an unmet condition check (ConditionVirtualization=xen). Jan 30 19:15:36.829805 systemd[1]: ignition-delete-config.service - Ignition (delete config) was skipped because no trigger condition checks were met. Jan 30 19:15:36.830032 systemd[1]: systemd-binfmt.service - Set Up Additional Binary Formats was skipped because no trigger condition checks were met. Jan 30 19:15:36.830168 systemd[1]: xenserver-pv-version.service - Set fake PV driver version for XenServer was skipped because of an unmet condition check (ConditionVirtualization=xen). Jan 30 19:15:36.837960 systemd[1]: proc-xen.mount - /proc/xen was skipped because of an unmet condition check (ConditionVirtualization=xen). Jan 30 19:15:36.838276 systemd[1]: ignition-delete-config.service - Ignition (delete config) was skipped because no trigger condition checks were met. Jan 30 19:15:36.848759 systemd[1]: Starting modprobe@drm.service - Load Kernel Module drm... Jan 30 19:15:36.849694 systemd[1]: systemd-binfmt.service - Set Up Additional Binary Formats was skipped because no trigger condition checks were met. Jan 30 19:15:36.849885 systemd[1]: xenserver-pv-version.service - Set fake PV driver version for XenServer was skipped because of an unmet condition check (ConditionVirtualization=xen). Jan 30 19:15:36.860945 systemd[1]: Finished ensure-sysext.service. Jan 30 19:15:36.874120 systemd-udevd[1387]: Using default interface naming scheme 'v255'. 
Jan 30 19:15:36.878795 systemd[1]: Starting systemd-timesyncd.service - Network Time Synchronization... Jan 30 19:15:36.881377 systemd[1]: modprobe@dm_mod.service: Deactivated successfully. Jan 30 19:15:36.882723 systemd[1]: Finished modprobe@dm_mod.service - Load Kernel Module dm_mod. Jan 30 19:15:36.885701 systemd[1]: modprobe@efi_pstore.service: Deactivated successfully. Jan 30 19:15:36.885934 systemd[1]: Finished modprobe@efi_pstore.service - Load Kernel Module efi_pstore. Jan 30 19:15:36.888099 systemd[1]: modprobe@drm.service: Deactivated successfully. Jan 30 19:15:36.889423 systemd[1]: Finished modprobe@drm.service - Load Kernel Module drm. Jan 30 19:15:36.896629 systemd[1]: systemd-pstore.service - Platform Persistent Storage Archival was skipped because of an unmet condition check (ConditionDirectoryNotEmpty=/sys/fs/pstore). Jan 30 19:15:36.910131 systemd[1]: Finished systemd-update-utmp.service - Record System Boot/Shutdown in UTMP. Jan 30 19:15:36.919455 systemd[1]: modprobe@loop.service: Deactivated successfully. Jan 30 19:15:36.920339 systemd[1]: Finished modprobe@loop.service - Load Kernel Module loop. Jan 30 19:15:36.922381 systemd[1]: systemd-repart.service - Repartition Root Disk was skipped because no trigger condition checks were met. Jan 30 19:15:36.928917 systemd[1]: Finished systemd-journal-catalog-update.service - Rebuild Journal Catalog. Jan 30 19:15:36.937082 augenrules[1411]: No rules Jan 30 19:15:36.940236 systemd[1]: Starting systemd-update-done.service - Update is Completed... Jan 30 19:15:36.942547 systemd[1]: Finished audit-rules.service - Load Security Auditing Rules. Jan 30 19:15:36.953826 systemd[1]: Started systemd-udevd.service - Rule-based Manager for Device Events and Files. Jan 30 19:15:36.965624 systemd[1]: Starting systemd-networkd.service - Network Configuration... Jan 30 19:15:36.969561 systemd[1]: Finished clean-ca-certificates.service - Clean up broken links in /etc/ssl/certs. Jan 30 19:15:36.973062 systemd[1]: update-ca-certificates.service - Update CA bundle at /etc/ssl/certs/ca-certificates.crt was skipped because of an unmet condition check (ConditionPathIsSymbolicLink=!/etc/ssl/certs/ca-certificates.crt). Jan 30 19:15:36.998903 systemd[1]: Finished systemd-update-done.service - Update is Completed. Jan 30 19:15:37.006599 systemd[1]: Started systemd-userdbd.service - User Database Manager. Jan 30 19:15:37.145615 systemd[1]: Started systemd-timesyncd.service - Network Time Synchronization. Jan 30 19:15:37.146715 systemd[1]: Reached target time-set.target - System Time Set. Jan 30 19:15:37.182154 systemd-networkd[1422]: lo: Link UP Jan 30 19:15:37.183501 systemd-networkd[1422]: lo: Gained carrier Jan 30 19:15:37.185387 systemd-resolved[1384]: Positive Trust Anchors: Jan 30 19:15:37.185423 systemd-resolved[1384]: . 
IN DS 20326 8 2 e06d44b80b8f1d39a95c0b0d7c65d08458e880409bbc683457104237c7f8ec8d Jan 30 19:15:37.185508 systemd-resolved[1384]: Negative trust anchors: home.arpa 10.in-addr.arpa 16.172.in-addr.arpa 17.172.in-addr.arpa 18.172.in-addr.arpa 19.172.in-addr.arpa 20.172.in-addr.arpa 21.172.in-addr.arpa 22.172.in-addr.arpa 23.172.in-addr.arpa 24.172.in-addr.arpa 25.172.in-addr.arpa 26.172.in-addr.arpa 27.172.in-addr.arpa 28.172.in-addr.arpa 29.172.in-addr.arpa 30.172.in-addr.arpa 31.172.in-addr.arpa 170.0.0.192.in-addr.arpa 171.0.0.192.in-addr.arpa 168.192.in-addr.arpa d.f.ip6.arpa ipv4only.arpa resolver.arpa corp home internal intranet lan local private test Jan 30 19:15:37.186267 systemd-networkd[1422]: Enumeration completed Jan 30 19:15:37.186595 systemd[1]: Started systemd-networkd.service - Network Configuration. Jan 30 19:15:37.198549 systemd[1]: Starting systemd-networkd-wait-online.service - Wait for Network to be Configured... Jan 30 19:15:37.206626 systemd-resolved[1384]: Using system hostname 'srv-73j9m.gb1.brightbox.com'. Jan 30 19:15:37.209863 systemd[1]: Started systemd-resolved.service - Network Name Resolution. Jan 30 19:15:37.211463 systemd[1]: Reached target network.target - Network. Jan 30 19:15:37.212643 systemd[1]: Reached target nss-lookup.target - Host and Network Name Lookups. Jan 30 19:15:37.214231 systemd[1]: Condition check resulted in dev-ttyS0.device - /dev/ttyS0 being skipped. Jan 30 19:15:37.236484 kernel: BTRFS warning: duplicate device /dev/vda3 devid 1 generation 38 scanned by (udev-worker) (1428) Jan 30 19:15:37.289560 systemd-networkd[1422]: eth0: found matching network '/usr/lib/systemd/network/zz-default.network', based on potentially unpredictable interface name. Jan 30 19:15:37.289758 systemd-networkd[1422]: eth0: Configuring with /usr/lib/systemd/network/zz-default.network. Jan 30 19:15:37.292461 systemd-networkd[1422]: eth0: Link UP Jan 30 19:15:37.292498 systemd-networkd[1422]: eth0: Gained carrier Jan 30 19:15:37.292538 systemd-networkd[1422]: eth0: found matching network '/usr/lib/systemd/network/zz-default.network', based on potentially unpredictable interface name. Jan 30 19:15:37.304564 systemd-networkd[1422]: eth0: DHCPv4 address 10.230.38.22/30, gateway 10.230.38.21 acquired from 10.230.38.21 Jan 30 19:15:37.305845 systemd-timesyncd[1399]: Network configuration changed, trying to establish connection. Jan 30 19:15:37.325525 systemd[1]: Found device dev-disk-by\x2dlabel-OEM.device - /dev/disk/by-label/OEM. Jan 30 19:15:37.334929 systemd[1]: Starting systemd-fsck@dev-disk-by\x2dlabel-OEM.service - File System Check on /dev/disk/by-label/OEM... Jan 30 19:15:37.350511 kernel: input: Power Button as /devices/LNXSYSTM:00/LNXPWRBN:00/input/input3 Jan 30 19:15:37.362746 systemd[1]: Finished systemd-fsck@dev-disk-by\x2dlabel-OEM.service - File System Check on /dev/disk/by-label/OEM. Jan 30 19:15:37.365488 kernel: ACPI: button: Power Button [PWRF] Jan 30 19:15:37.391492 kernel: mousedev: PS/2 mouse device common for all mice Jan 30 19:15:37.413589 kernel: i801_smbus 0000:00:1f.3: SMBus using PCI interrupt Jan 30 19:15:37.420403 kernel: i2c i2c-0: 1/1 memory slots populated (from DMI) Jan 30 19:15:37.422316 kernel: i2c i2c-0: Memory type 0x07 not supported yet, not instantiating SPD Jan 30 19:15:37.430477 kernel: input: ImExPS/2 Generic Explorer Mouse as /devices/platform/i8042/serio1/input/input4 Jan 30 19:15:37.484818 systemd[1]: Starting systemd-vconsole-setup.service - Virtual Console Setup... 
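The DHCPv4 lease above gives eth0 the address 10.230.38.22/30 with gateway 10.230.38.21, so the gateway and the instance are the only two usable hosts in that subnet. A minimal sketch of the arithmetic with Python's standard ipaddress module, using the values from the lease:

import ipaddress

# Address and gateway as logged by systemd-networkd above.
iface = ipaddress.ip_interface("10.230.38.22/30")
gateway = ipaddress.ip_address("10.230.38.21")

network = iface.network
print("network:   ", network)                    # 10.230.38.20/30
print("usable:    ", list(network.hosts()))      # [10.230.38.21, 10.230.38.22]
print("broadcast: ", network.broadcast_address)  # 10.230.38.23
print("gateway ok:", gateway in network)         # True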
Jan 30 19:15:37.685948 systemd[1]: Finished systemd-udev-settle.service - Wait for udev To Complete Device Initialization. Jan 30 19:15:37.744177 systemd[1]: Finished systemd-vconsole-setup.service - Virtual Console Setup. Jan 30 19:15:37.750767 systemd[1]: Starting lvm2-activation-early.service - Activation of LVM2 logical volumes... Jan 30 19:15:37.771521 lvm[1465]: WARNING: Failed to connect to lvmetad. Falling back to device scanning. Jan 30 19:15:37.803931 systemd[1]: Finished lvm2-activation-early.service - Activation of LVM2 logical volumes. Jan 30 19:15:37.805764 systemd[1]: Reached target cryptsetup.target - Local Encrypted Volumes. Jan 30 19:15:37.806661 systemd[1]: Reached target sysinit.target - System Initialization. Jan 30 19:15:37.807684 systemd[1]: Started motdgen.path - Watch for update engine configuration changes. Jan 30 19:15:37.808568 systemd[1]: Started user-cloudinit@var-lib-flatcar\x2dinstall-user_data.path - Watch for a cloud-config at /var/lib/flatcar-install/user_data. Jan 30 19:15:37.809668 systemd[1]: Started logrotate.timer - Daily rotation of log files. Jan 30 19:15:37.810593 systemd[1]: Started mdadm.timer - Weekly check for MD array's redundancy information.. Jan 30 19:15:37.811416 systemd[1]: Started systemd-tmpfiles-clean.timer - Daily Cleanup of Temporary Directories. Jan 30 19:15:37.812201 systemd[1]: update-engine-stub.timer - Update Engine Stub Timer was skipped because of an unmet condition check (ConditionPathExists=/usr/.noupdate). Jan 30 19:15:37.812258 systemd[1]: Reached target paths.target - Path Units. Jan 30 19:15:37.812926 systemd[1]: Reached target timers.target - Timer Units. Jan 30 19:15:37.814593 systemd[1]: Listening on dbus.socket - D-Bus System Message Bus Socket. Jan 30 19:15:37.817237 systemd[1]: Starting docker.socket - Docker Socket for the API... Jan 30 19:15:37.823738 systemd[1]: Listening on sshd.socket - OpenSSH Server Socket. Jan 30 19:15:37.826364 systemd[1]: Starting lvm2-activation.service - Activation of LVM2 logical volumes... Jan 30 19:15:37.827946 systemd[1]: Listening on docker.socket - Docker Socket for the API. Jan 30 19:15:37.828824 systemd[1]: Reached target sockets.target - Socket Units. Jan 30 19:15:37.829594 systemd[1]: Reached target basic.target - Basic System. Jan 30 19:15:37.830322 systemd[1]: addon-config@oem.service - Configure Addon /oem was skipped because no trigger condition checks were met. Jan 30 19:15:37.830366 systemd[1]: addon-run@oem.service - Run Addon /oem was skipped because no trigger condition checks were met. Jan 30 19:15:37.836598 systemd[1]: Starting containerd.service - containerd container runtime... Jan 30 19:15:37.843667 systemd[1]: Starting coreos-metadata.service - Flatcar Metadata Agent... Jan 30 19:15:37.844858 lvm[1469]: WARNING: Failed to connect to lvmetad. Falling back to device scanning. Jan 30 19:15:37.847683 systemd[1]: Starting dbus.service - D-Bus System Message Bus... Jan 30 19:15:37.852603 systemd[1]: Starting enable-oem-cloudinit.service - Enable cloudinit... Jan 30 19:15:37.856729 systemd[1]: Starting extend-filesystems.service - Extend Filesystems... Jan 30 19:15:37.858558 systemd[1]: flatcar-setup-environment.service - Modifies /etc/environment for CoreOS was skipped because of an unmet condition check (ConditionPathExists=/oem/bin/flatcar-setup-environment). Jan 30 19:15:37.865641 systemd[1]: Starting motdgen.service - Generate /run/flatcar/motd... Jan 30 19:15:37.869045 systemd[1]: Starting ssh-key-proc-cmdline.service - Install an ssh key from /proc/cmdline... 
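dbus.socket, sshd.socket and docker.socket are already listening here even though the matching services have not started: systemd binds the sockets itself and hands them to the service when the first client connects. A rough sketch of the receiving side of that hand-off in Python, following the sd_listen_fds(3) convention (LISTEN_PID/LISTEN_FDS environment variables, descriptors starting at 3); the helper name is ours:

import os
import socket

SD_LISTEN_FDS_START = 3  # first file descriptor systemd passes, per sd_listen_fds(3)

def inherited_sockets():
    """Return sockets handed over by systemd socket activation (empty if started directly)."""
    if os.environ.get("LISTEN_PID") != str(os.getpid()):
        return []
    count = int(os.environ.get("LISTEN_FDS", "0"))
    return [socket.socket(fileno=SD_LISTEN_FDS_START + i) for i in range(count)]

for sock in inherited_sockets():
    conn, _peer = sock.accept()  # the first client is already queued on the socket
    conn.sendall(b"hello from a socket-activated service\n")
    conn.close()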
Jan 30 19:15:37.885856 systemd[1]: Starting sshd-keygen.service - Generate sshd host keys... Jan 30 19:15:37.900662 systemd[1]: Starting systemd-logind.service - User Login Management... Jan 30 19:15:37.905920 dbus-daemon[1472]: [system] SELinux support is enabled Jan 30 19:15:37.903411 systemd[1]: tcsd.service - TCG Core Services Daemon was skipped because of an unmet condition check (ConditionPathExists=/dev/tpm0). Jan 30 19:15:37.908987 dbus-daemon[1472]: [system] Activating systemd to hand-off: service name='org.freedesktop.hostname1' unit='dbus-org.freedesktop.hostname1.service' requested by ':1.0' (uid=244 pid=1422 comm="/usr/lib/systemd/systemd-networkd" label="system_u:system_r:kernel_t:s0") Jan 30 19:15:37.905224 systemd[1]: cgroup compatibility translation between legacy and unified hierarchy settings activated. See cgroup-compat debug messages for details. Jan 30 19:15:37.913545 systemd[1]: Starting update-engine.service - Update Engine... Jan 30 19:15:37.922561 systemd[1]: Starting update-ssh-keys-after-ignition.service - Run update-ssh-keys once after Ignition... Jan 30 19:15:37.924251 systemd[1]: Started dbus.service - D-Bus System Message Bus. Jan 30 19:15:37.940306 systemd[1]: Finished lvm2-activation.service - Activation of LVM2 logical volumes. Jan 30 19:15:37.949384 jq[1482]: true Jan 30 19:15:37.942080 systemd[1]: ssh-key-proc-cmdline.service: Deactivated successfully. Jan 30 19:15:37.943567 systemd[1]: Finished ssh-key-proc-cmdline.service - Install an ssh key from /proc/cmdline. Jan 30 19:15:37.956221 dbus-daemon[1472]: [system] Successfully activated service 'org.freedesktop.systemd1' Jan 30 19:15:37.962379 systemd[1]: system-cloudinit@usr-share-oem-cloud\x2dconfig.yml.service - Load cloud-config from /usr/share/oem/cloud-config.yml was skipped because of an unmet condition check (ConditionFileNotEmpty=/usr/share/oem/cloud-config.yml). Jan 30 19:15:37.978958 jq[1473]: false Jan 30 19:15:37.963528 systemd[1]: Reached target system-config.target - Load system-provided cloud configs. Jan 30 19:15:37.974672 systemd[1]: Starting systemd-hostnamed.service - Hostname Service... Jan 30 19:15:37.975502 systemd[1]: user-cloudinit-proc-cmdline.service - Load cloud-config from url defined in /proc/cmdline was skipped because of an unmet condition check (ConditionKernelCommandLine=cloud-config-url). Jan 30 19:15:37.975534 systemd[1]: Reached target user-config.target - Load user-provided cloud configs. Jan 30 19:15:37.987513 jq[1487]: true Jan 30 19:15:37.992091 systemd[1]: enable-oem-cloudinit.service: Skipped due to 'exec-condition'. Jan 30 19:15:37.993573 systemd[1]: Condition check resulted in enable-oem-cloudinit.service - Enable cloudinit being skipped. 
Jan 30 19:15:38.009616 extend-filesystems[1474]: Found loop4 Jan 30 19:15:38.014818 extend-filesystems[1474]: Found loop5 Jan 30 19:15:38.014818 extend-filesystems[1474]: Found loop6 Jan 30 19:15:38.014818 extend-filesystems[1474]: Found loop7 Jan 30 19:15:38.014818 extend-filesystems[1474]: Found vda Jan 30 19:15:38.014818 extend-filesystems[1474]: Found vda1 Jan 30 19:15:38.014818 extend-filesystems[1474]: Found vda2 Jan 30 19:15:38.014818 extend-filesystems[1474]: Found vda3 Jan 30 19:15:38.014818 extend-filesystems[1474]: Found usr Jan 30 19:15:38.014818 extend-filesystems[1474]: Found vda4 Jan 30 19:15:38.014818 extend-filesystems[1474]: Found vda6 Jan 30 19:15:38.014818 extend-filesystems[1474]: Found vda7 Jan 30 19:15:38.014818 extend-filesystems[1474]: Found vda9 Jan 30 19:15:38.014818 extend-filesystems[1474]: Checking size of /dev/vda9 Jan 30 19:15:38.067201 update_engine[1481]: I20250130 19:15:38.017377 1481 main.cc:92] Flatcar Update Engine starting Jan 30 19:15:38.067201 update_engine[1481]: I20250130 19:15:38.024284 1481 update_check_scheduler.cc:74] Next update check in 8m39s Jan 30 19:15:38.020987 systemd[1]: Started update-engine.service - Update Engine. Jan 30 19:15:38.028745 systemd[1]: Started locksmithd.service - Cluster reboot manager. Jan 30 19:15:38.042241 (ntainerd)[1497]: containerd.service: Referenced but unset environment variable evaluates to an empty string: TORCX_IMAGEDIR, TORCX_UNPACKDIR Jan 30 19:15:38.086717 extend-filesystems[1474]: Resized partition /dev/vda9 Jan 30 19:15:38.087070 systemd[1]: motdgen.service: Deactivated successfully. Jan 30 19:15:38.087783 systemd[1]: Finished motdgen.service - Generate /run/flatcar/motd. Jan 30 19:15:38.094472 extend-filesystems[1521]: resize2fs 1.47.1 (20-May-2024) Jan 30 19:15:38.102493 kernel: EXT4-fs (vda9): resizing filesystem from 1617920 to 15121403 blocks Jan 30 19:15:38.145392 systemd-logind[1480]: Watching system buttons on /dev/input/event2 (Power Button) Jan 30 19:15:38.145464 systemd-logind[1480]: Watching system buttons on /dev/input/event0 (AT Translated Set 2 keyboard) Jan 30 19:15:38.150797 systemd-logind[1480]: New seat seat0. Jan 30 19:15:38.171051 systemd[1]: Started systemd-logind.service - User Login Management. Jan 30 19:15:38.211769 bash[1525]: Updated "/home/core/.ssh/authorized_keys" Jan 30 19:15:38.219026 systemd[1]: Finished update-ssh-keys-after-ignition.service - Run update-ssh-keys once after Ignition. Jan 30 19:15:38.233039 systemd[1]: Starting sshkeys.service... Jan 30 19:15:38.251487 kernel: BTRFS warning: duplicate device /dev/vda3 devid 1 generation 38 scanned by (udev-worker) (1430) Jan 30 19:15:38.314116 systemd[1]: Created slice system-coreos\x2dmetadata\x2dsshkeys.slice - Slice /system/coreos-metadata-sshkeys. Jan 30 19:15:38.322872 systemd[1]: Starting coreos-metadata-sshkeys@core.service - Flatcar Metadata Agent (SSH Keys)... Jan 30 19:15:38.356518 kernel: EXT4-fs (vda9): resized filesystem to 15121403 Jan 30 19:15:38.357695 dbus-daemon[1472]: [system] Successfully activated service 'org.freedesktop.hostname1' Jan 30 19:15:38.359605 systemd[1]: Started systemd-hostnamed.service - Hostname Service. Jan 30 19:15:38.369902 dbus-daemon[1472]: [system] Activating via systemd: service name='org.freedesktop.PolicyKit1' unit='polkit.service' requested by ':1.6' (uid=0 pid=1491 comm="/usr/lib/systemd/systemd-hostnamed" label="system_u:system_r:kernel_t:s0") Jan 30 19:15:38.378854 systemd[1]: Starting polkit.service - Authorization Manager... 
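extend-filesystems grows the root partition and resize2fs then enlarges the ext4 filesystem on /dev/vda9 online, from 1617920 to 15121403 blocks of 4 KiB (the completion message follows a little further down). A small sketch of the size arithmetic in Python, with the block counts taken from the messages above:

BLOCK_SIZE = 4096        # ext4 block size, shown as "(4k)" in the resize messages
OLD_BLOCKS = 1_617_920   # filesystem size before the resize
NEW_BLOCKS = 15_121_403  # filesystem size after the resize

def gib(blocks: int) -> float:
    return blocks * BLOCK_SIZE / 2**30

print(f"before: {gib(OLD_BLOCKS):6.2f} GiB")               # ~6.17 GiB
print(f"after:  {gib(NEW_BLOCKS):6.2f} GiB")               # ~57.68 GiB
print(f"growth: {gib(NEW_BLOCKS - OLD_BLOCKS):6.2f} GiB")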
Jan 30 19:15:38.395673 extend-filesystems[1521]: Filesystem at /dev/vda9 is mounted on /; on-line resizing required Jan 30 19:15:38.395673 extend-filesystems[1521]: old_desc_blocks = 1, new_desc_blocks = 8 Jan 30 19:15:38.395673 extend-filesystems[1521]: The filesystem on /dev/vda9 is now 15121403 (4k) blocks long. Jan 30 19:15:38.442545 extend-filesystems[1474]: Resized filesystem in /dev/vda9 Jan 30 19:15:38.396943 systemd[1]: extend-filesystems.service: Deactivated successfully. Jan 30 19:15:38.397197 systemd[1]: Finished extend-filesystems.service - Extend Filesystems. Jan 30 19:15:38.444988 polkitd[1536]: Started polkitd version 121 Jan 30 19:15:38.452003 locksmithd[1503]: locksmithd starting currentOperation="UPDATE_STATUS_IDLE" strategy="reboot" Jan 30 19:15:38.454976 polkitd[1536]: Loading rules from directory /etc/polkit-1/rules.d Jan 30 19:15:38.455077 polkitd[1536]: Loading rules from directory /usr/share/polkit-1/rules.d Jan 30 19:15:38.455799 polkitd[1536]: Finished loading, compiling and executing 2 rules Jan 30 19:15:38.460976 dbus-daemon[1472]: [system] Successfully activated service 'org.freedesktop.PolicyKit1' Jan 30 19:15:38.461247 systemd[1]: Started polkit.service - Authorization Manager. Jan 30 19:15:38.464103 polkitd[1536]: Acquired the name org.freedesktop.PolicyKit1 on the system bus Jan 30 19:15:38.524470 systemd-hostnamed[1491]: Hostname set to (static) Jan 30 19:15:38.579481 containerd[1497]: time="2025-01-30T19:15:38.578558530Z" level=info msg="starting containerd" revision=174e0d1785eeda18dc2beba45e1d5a188771636b version=v1.7.21 Jan 30 19:15:38.587278 sshd_keygen[1510]: ssh-keygen: generating new host keys: RSA ECDSA ED25519 Jan 30 19:15:38.619679 systemd[1]: Finished sshd-keygen.service - Generate sshd host keys. Jan 30 19:15:38.623731 containerd[1497]: time="2025-01-30T19:15:38.623625572Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.aufs\"..." type=io.containerd.snapshotter.v1 Jan 30 19:15:38.626369 containerd[1497]: time="2025-01-30T19:15:38.626316611Z" level=info msg="skip loading plugin \"io.containerd.snapshotter.v1.aufs\"..." error="aufs is not supported (modprobe aufs failed: exit status 1 \"modprobe: FATAL: Module aufs not found in directory /lib/modules/6.6.74-flatcar\\n\"): skip plugin" type=io.containerd.snapshotter.v1 Jan 30 19:15:38.626369 containerd[1497]: time="2025-01-30T19:15:38.626366909Z" level=info msg="loading plugin \"io.containerd.event.v1.exchange\"..." type=io.containerd.event.v1 Jan 30 19:15:38.626554 containerd[1497]: time="2025-01-30T19:15:38.626393136Z" level=info msg="loading plugin \"io.containerd.internal.v1.opt\"..." type=io.containerd.internal.v1 Jan 30 19:15:38.626754 containerd[1497]: time="2025-01-30T19:15:38.626722092Z" level=info msg="loading plugin \"io.containerd.warning.v1.deprecations\"..." type=io.containerd.warning.v1 Jan 30 19:15:38.626883 containerd[1497]: time="2025-01-30T19:15:38.626777975Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.blockfile\"..." type=io.containerd.snapshotter.v1 Jan 30 19:15:38.626950 containerd[1497]: time="2025-01-30T19:15:38.626904382Z" level=info msg="skip loading plugin \"io.containerd.snapshotter.v1.blockfile\"..." error="no scratch file generator: skip plugin" type=io.containerd.snapshotter.v1 Jan 30 19:15:38.626950 containerd[1497]: time="2025-01-30T19:15:38.626928003Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.btrfs\"..." 
type=io.containerd.snapshotter.v1 Jan 30 19:15:38.627203 containerd[1497]: time="2025-01-30T19:15:38.627166874Z" level=info msg="skip loading plugin \"io.containerd.snapshotter.v1.btrfs\"..." error="path /var/lib/containerd/io.containerd.snapshotter.v1.btrfs (ext4) must be a btrfs filesystem to be used with the btrfs snapshotter: skip plugin" type=io.containerd.snapshotter.v1 Jan 30 19:15:38.627203 containerd[1497]: time="2025-01-30T19:15:38.627198517Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.devmapper\"..." type=io.containerd.snapshotter.v1 Jan 30 19:15:38.627306 containerd[1497]: time="2025-01-30T19:15:38.627220267Z" level=info msg="skip loading plugin \"io.containerd.snapshotter.v1.devmapper\"..." error="devmapper not configured: skip plugin" type=io.containerd.snapshotter.v1 Jan 30 19:15:38.627306 containerd[1497]: time="2025-01-30T19:15:38.627265585Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.native\"..." type=io.containerd.snapshotter.v1 Jan 30 19:15:38.627502 containerd[1497]: time="2025-01-30T19:15:38.627423984Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.overlayfs\"..." type=io.containerd.snapshotter.v1 Jan 30 19:15:38.627900 containerd[1497]: time="2025-01-30T19:15:38.627867128Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.zfs\"..." type=io.containerd.snapshotter.v1 Jan 30 19:15:38.628046 containerd[1497]: time="2025-01-30T19:15:38.628012726Z" level=info msg="skip loading plugin \"io.containerd.snapshotter.v1.zfs\"..." error="path /var/lib/containerd/io.containerd.snapshotter.v1.zfs must be a zfs filesystem to be used with the zfs snapshotter: skip plugin" type=io.containerd.snapshotter.v1 Jan 30 19:15:38.628046 containerd[1497]: time="2025-01-30T19:15:38.628043309Z" level=info msg="loading plugin \"io.containerd.content.v1.content\"..." type=io.containerd.content.v1 Jan 30 19:15:38.628215 containerd[1497]: time="2025-01-30T19:15:38.628186875Z" level=info msg="loading plugin \"io.containerd.metadata.v1.bolt\"..." type=io.containerd.metadata.v1 Jan 30 19:15:38.628310 containerd[1497]: time="2025-01-30T19:15:38.628283653Z" level=info msg="metadata content store policy set" policy=shared Jan 30 19:15:38.631183 systemd[1]: Starting issuegen.service - Generate /run/issue... Jan 30 19:15:38.639674 containerd[1497]: time="2025-01-30T19:15:38.639626560Z" level=info msg="loading plugin \"io.containerd.gc.v1.scheduler\"..." type=io.containerd.gc.v1 Jan 30 19:15:38.639767 containerd[1497]: time="2025-01-30T19:15:38.639724190Z" level=info msg="loading plugin \"io.containerd.differ.v1.walking\"..." type=io.containerd.differ.v1 Jan 30 19:15:38.639767 containerd[1497]: time="2025-01-30T19:15:38.639752171Z" level=info msg="loading plugin \"io.containerd.lease.v1.manager\"..." type=io.containerd.lease.v1 Jan 30 19:15:38.639891 containerd[1497]: time="2025-01-30T19:15:38.639775715Z" level=info msg="loading plugin \"io.containerd.streaming.v1.manager\"..." type=io.containerd.streaming.v1 Jan 30 19:15:38.639891 containerd[1497]: time="2025-01-30T19:15:38.639818004Z" level=info msg="loading plugin \"io.containerd.runtime.v1.linux\"..." type=io.containerd.runtime.v1 Jan 30 19:15:38.640083 containerd[1497]: time="2025-01-30T19:15:38.640046475Z" level=info msg="loading plugin \"io.containerd.monitor.v1.cgroups\"..." type=io.containerd.monitor.v1 Jan 30 19:15:38.640378 containerd[1497]: time="2025-01-30T19:15:38.640352524Z" level=info msg="loading plugin \"io.containerd.runtime.v2.task\"..." 
type=io.containerd.runtime.v2 Jan 30 19:15:38.640609 containerd[1497]: time="2025-01-30T19:15:38.640582377Z" level=info msg="loading plugin \"io.containerd.runtime.v2.shim\"..." type=io.containerd.runtime.v2 Jan 30 19:15:38.640734 containerd[1497]: time="2025-01-30T19:15:38.640615412Z" level=info msg="loading plugin \"io.containerd.sandbox.store.v1.local\"..." type=io.containerd.sandbox.store.v1 Jan 30 19:15:38.640734 containerd[1497]: time="2025-01-30T19:15:38.640650856Z" level=info msg="loading plugin \"io.containerd.sandbox.controller.v1.local\"..." type=io.containerd.sandbox.controller.v1 Jan 30 19:15:38.640734 containerd[1497]: time="2025-01-30T19:15:38.640675790Z" level=info msg="loading plugin \"io.containerd.service.v1.containers-service\"..." type=io.containerd.service.v1 Jan 30 19:15:38.640734 containerd[1497]: time="2025-01-30T19:15:38.640706982Z" level=info msg="loading plugin \"io.containerd.service.v1.content-service\"..." type=io.containerd.service.v1 Jan 30 19:15:38.640734 containerd[1497]: time="2025-01-30T19:15:38.640728199Z" level=info msg="loading plugin \"io.containerd.service.v1.diff-service\"..." type=io.containerd.service.v1 Jan 30 19:15:38.640903 containerd[1497]: time="2025-01-30T19:15:38.640748100Z" level=info msg="loading plugin \"io.containerd.service.v1.images-service\"..." type=io.containerd.service.v1 Jan 30 19:15:38.640903 containerd[1497]: time="2025-01-30T19:15:38.640769070Z" level=info msg="loading plugin \"io.containerd.service.v1.introspection-service\"..." type=io.containerd.service.v1 Jan 30 19:15:38.640903 containerd[1497]: time="2025-01-30T19:15:38.640788265Z" level=info msg="loading plugin \"io.containerd.service.v1.namespaces-service\"..." type=io.containerd.service.v1 Jan 30 19:15:38.640903 containerd[1497]: time="2025-01-30T19:15:38.640806721Z" level=info msg="loading plugin \"io.containerd.service.v1.snapshots-service\"..." type=io.containerd.service.v1 Jan 30 19:15:38.640903 containerd[1497]: time="2025-01-30T19:15:38.640825805Z" level=info msg="loading plugin \"io.containerd.service.v1.tasks-service\"..." type=io.containerd.service.v1 Jan 30 19:15:38.640903 containerd[1497]: time="2025-01-30T19:15:38.640863011Z" level=info msg="loading plugin \"io.containerd.grpc.v1.containers\"..." type=io.containerd.grpc.v1 Jan 30 19:15:38.640903 containerd[1497]: time="2025-01-30T19:15:38.640886046Z" level=info msg="loading plugin \"io.containerd.grpc.v1.content\"..." type=io.containerd.grpc.v1 Jan 30 19:15:38.641182 containerd[1497]: time="2025-01-30T19:15:38.640904622Z" level=info msg="loading plugin \"io.containerd.grpc.v1.diff\"..." type=io.containerd.grpc.v1 Jan 30 19:15:38.641182 containerd[1497]: time="2025-01-30T19:15:38.640924672Z" level=info msg="loading plugin \"io.containerd.grpc.v1.events\"..." type=io.containerd.grpc.v1 Jan 30 19:15:38.641182 containerd[1497]: time="2025-01-30T19:15:38.640967309Z" level=info msg="loading plugin \"io.containerd.grpc.v1.images\"..." type=io.containerd.grpc.v1 Jan 30 19:15:38.641182 containerd[1497]: time="2025-01-30T19:15:38.640988743Z" level=info msg="loading plugin \"io.containerd.grpc.v1.introspection\"..." type=io.containerd.grpc.v1 Jan 30 19:15:38.641182 containerd[1497]: time="2025-01-30T19:15:38.641007149Z" level=info msg="loading plugin \"io.containerd.grpc.v1.leases\"..." type=io.containerd.grpc.v1 Jan 30 19:15:38.641182 containerd[1497]: time="2025-01-30T19:15:38.641047754Z" level=info msg="loading plugin \"io.containerd.grpc.v1.namespaces\"..." 
type=io.containerd.grpc.v1 Jan 30 19:15:38.641182 containerd[1497]: time="2025-01-30T19:15:38.641081931Z" level=info msg="loading plugin \"io.containerd.grpc.v1.sandbox-controllers\"..." type=io.containerd.grpc.v1 Jan 30 19:15:38.641182 containerd[1497]: time="2025-01-30T19:15:38.641105849Z" level=info msg="loading plugin \"io.containerd.grpc.v1.sandboxes\"..." type=io.containerd.grpc.v1 Jan 30 19:15:38.641182 containerd[1497]: time="2025-01-30T19:15:38.641124449Z" level=info msg="loading plugin \"io.containerd.grpc.v1.snapshots\"..." type=io.containerd.grpc.v1 Jan 30 19:15:38.641182 containerd[1497]: time="2025-01-30T19:15:38.641157223Z" level=info msg="loading plugin \"io.containerd.grpc.v1.streaming\"..." type=io.containerd.grpc.v1 Jan 30 19:15:38.641182 containerd[1497]: time="2025-01-30T19:15:38.641177408Z" level=info msg="loading plugin \"io.containerd.grpc.v1.tasks\"..." type=io.containerd.grpc.v1 Jan 30 19:15:38.642595 containerd[1497]: time="2025-01-30T19:15:38.641210822Z" level=info msg="loading plugin \"io.containerd.transfer.v1.local\"..." type=io.containerd.transfer.v1 Jan 30 19:15:38.642595 containerd[1497]: time="2025-01-30T19:15:38.641240972Z" level=info msg="loading plugin \"io.containerd.grpc.v1.transfer\"..." type=io.containerd.grpc.v1 Jan 30 19:15:38.642595 containerd[1497]: time="2025-01-30T19:15:38.641261950Z" level=info msg="loading plugin \"io.containerd.grpc.v1.version\"..." type=io.containerd.grpc.v1 Jan 30 19:15:38.642595 containerd[1497]: time="2025-01-30T19:15:38.641288485Z" level=info msg="loading plugin \"io.containerd.internal.v1.restart\"..." type=io.containerd.internal.v1 Jan 30 19:15:38.642595 containerd[1497]: time="2025-01-30T19:15:38.641355690Z" level=info msg="loading plugin \"io.containerd.tracing.processor.v1.otlp\"..." type=io.containerd.tracing.processor.v1 Jan 30 19:15:38.642595 containerd[1497]: time="2025-01-30T19:15:38.641393198Z" level=info msg="skip loading plugin \"io.containerd.tracing.processor.v1.otlp\"..." error="skip plugin: tracing endpoint not configured" type=io.containerd.tracing.processor.v1 Jan 30 19:15:38.642595 containerd[1497]: time="2025-01-30T19:15:38.641412002Z" level=info msg="loading plugin \"io.containerd.internal.v1.tracing\"..." type=io.containerd.internal.v1 Jan 30 19:15:38.642595 containerd[1497]: time="2025-01-30T19:15:38.641485750Z" level=info msg="skip loading plugin \"io.containerd.internal.v1.tracing\"..." error="skip plugin: tracing endpoint not configured" type=io.containerd.internal.v1 Jan 30 19:15:38.642595 containerd[1497]: time="2025-01-30T19:15:38.641504712Z" level=info msg="loading plugin \"io.containerd.grpc.v1.healthcheck\"..." type=io.containerd.grpc.v1 Jan 30 19:15:38.642595 containerd[1497]: time="2025-01-30T19:15:38.641524291Z" level=info msg="loading plugin \"io.containerd.nri.v1.nri\"..." type=io.containerd.nri.v1 Jan 30 19:15:38.642595 containerd[1497]: time="2025-01-30T19:15:38.641548487Z" level=info msg="NRI interface is disabled by configuration." Jan 30 19:15:38.642595 containerd[1497]: time="2025-01-30T19:15:38.641566308Z" level=info msg="loading plugin \"io.containerd.grpc.v1.cri\"..." 
type=io.containerd.grpc.v1 Jan 30 19:15:38.643645 containerd[1497]: time="2025-01-30T19:15:38.641975067Z" level=info msg="Start cri plugin with config {PluginConfig:{ContainerdConfig:{Snapshotter:overlayfs DefaultRuntimeName:runc DefaultRuntime:{Type: Path: Engine: PodAnnotations:[] ContainerAnnotations:[] Root: Options:map[] PrivilegedWithoutHostDevices:false PrivilegedWithoutHostDevicesAllDevicesAllowed:false BaseRuntimeSpec: NetworkPluginConfDir: NetworkPluginMaxConfNum:0 Snapshotter: SandboxMode:} UntrustedWorkloadRuntime:{Type: Path: Engine: PodAnnotations:[] ContainerAnnotations:[] Root: Options:map[] PrivilegedWithoutHostDevices:false PrivilegedWithoutHostDevicesAllDevicesAllowed:false BaseRuntimeSpec: NetworkPluginConfDir: NetworkPluginMaxConfNum:0 Snapshotter: SandboxMode:} Runtimes:map[runc:{Type:io.containerd.runc.v2 Path: Engine: PodAnnotations:[] ContainerAnnotations:[] Root: Options:map[SystemdCgroup:true] PrivilegedWithoutHostDevices:false PrivilegedWithoutHostDevicesAllDevicesAllowed:false BaseRuntimeSpec: NetworkPluginConfDir: NetworkPluginMaxConfNum:0 Snapshotter: SandboxMode:podsandbox}] NoPivot:false DisableSnapshotAnnotations:true DiscardUnpackedLayers:false IgnoreBlockIONotEnabledErrors:false IgnoreRdtNotEnabledErrors:false} CniConfig:{NetworkPluginBinDir:/opt/cni/bin NetworkPluginConfDir:/etc/cni/net.d NetworkPluginMaxConfNum:1 NetworkPluginSetupSerially:false NetworkPluginConfTemplate: IPPreference:} Registry:{ConfigPath: Mirrors:map[] Configs:map[] Auths:map[] Headers:map[]} ImageDecryption:{KeyModel:node} DisableTCPService:true StreamServerAddress:127.0.0.1 StreamServerPort:0 StreamIdleTimeout:4h0m0s EnableSelinux:true SelinuxCategoryRange:1024 SandboxImage:registry.k8s.io/pause:3.8 StatsCollectPeriod:10 SystemdCgroup:false EnableTLSStreaming:false X509KeyPairStreaming:{TLSCertFile: TLSKeyFile:} MaxContainerLogLineSize:16384 DisableCgroup:false DisableApparmor:false RestrictOOMScoreAdj:false MaxConcurrentDownloads:3 DisableProcMount:false UnsetSeccompProfile: TolerateMissingHugetlbController:true DisableHugetlbController:true DeviceOwnershipFromSecurityContext:false IgnoreImageDefinedVolumes:false NetNSMountsUnderStateDir:false EnableUnprivilegedPorts:false EnableUnprivilegedICMP:false EnableCDI:false CDISpecDirs:[/etc/cdi /var/run/cdi] ImagePullProgressTimeout:5m0s DrainExecSyncIOTimeout:0s ImagePullWithSyncFs:false IgnoreDeprecationWarnings:[]} ContainerdRootDir:/var/lib/containerd ContainerdEndpoint:/run/containerd/containerd.sock RootDir:/var/lib/containerd/io.containerd.grpc.v1.cri StateDir:/run/containerd/io.containerd.grpc.v1.cri}" Jan 30 19:15:38.643645 containerd[1497]: time="2025-01-30T19:15:38.642053835Z" level=info msg="Connect containerd service" Jan 30 19:15:38.643645 containerd[1497]: time="2025-01-30T19:15:38.642095386Z" level=info msg="using legacy CRI server" Jan 30 19:15:38.643645 containerd[1497]: time="2025-01-30T19:15:38.642113050Z" level=info msg="using experimental NRI integration - disable nri plugin to prevent this" Jan 30 19:15:38.643645 containerd[1497]: time="2025-01-30T19:15:38.642245595Z" level=info msg="Get image filesystem path \"/var/lib/containerd/io.containerd.snapshotter.v1.overlayfs\"" Jan 30 19:15:38.646469 containerd[1497]: time="2025-01-30T19:15:38.644556409Z" level=error msg="failed to load cni during init, please check CRI plugin status before setting up network for pods" error="cni config load failed: no network config found in /etc/cni/net.d: cni plugin not initialized: failed to load cni config" Jan 30 19:15:38.646469 
containerd[1497]: time="2025-01-30T19:15:38.645129393Z" level=info msg=serving... address=/run/containerd/containerd.sock.ttrpc Jan 30 19:15:38.646469 containerd[1497]: time="2025-01-30T19:15:38.645207695Z" level=info msg=serving... address=/run/containerd/containerd.sock Jan 30 19:15:38.646469 containerd[1497]: time="2025-01-30T19:15:38.645273085Z" level=info msg="Start subscribing containerd event" Jan 30 19:15:38.646469 containerd[1497]: time="2025-01-30T19:15:38.645334728Z" level=info msg="Start recovering state" Jan 30 19:15:38.646469 containerd[1497]: time="2025-01-30T19:15:38.645468642Z" level=info msg="Start event monitor" Jan 30 19:15:38.646469 containerd[1497]: time="2025-01-30T19:15:38.645547646Z" level=info msg="Start snapshots syncer" Jan 30 19:15:38.646469 containerd[1497]: time="2025-01-30T19:15:38.645574594Z" level=info msg="Start cni network conf syncer for default" Jan 30 19:15:38.646469 containerd[1497]: time="2025-01-30T19:15:38.645592173Z" level=info msg="Start streaming server" Jan 30 19:15:38.646469 containerd[1497]: time="2025-01-30T19:15:38.645687973Z" level=info msg="containerd successfully booted in 0.069142s" Jan 30 19:15:38.645779 systemd[1]: Started containerd.service - containerd container runtime. Jan 30 19:15:38.652336 systemd[1]: issuegen.service: Deactivated successfully. Jan 30 19:15:38.652658 systemd[1]: Finished issuegen.service - Generate /run/issue. Jan 30 19:15:38.665637 systemd[1]: Starting systemd-user-sessions.service - Permit User Sessions... Jan 30 19:15:38.679031 systemd[1]: Finished systemd-user-sessions.service - Permit User Sessions. Jan 30 19:15:38.695098 systemd[1]: Started getty@tty1.service - Getty on tty1. Jan 30 19:15:38.698049 systemd[1]: Started serial-getty@ttyS0.service - Serial Getty on ttyS0. Jan 30 19:15:38.699133 systemd[1]: Reached target getty.target - Login Prompts. Jan 30 19:15:39.159726 systemd-networkd[1422]: eth0: Gained IPv6LL Jan 30 19:15:39.161306 systemd-timesyncd[1399]: Network configuration changed, trying to establish connection. Jan 30 19:15:39.163528 systemd[1]: Finished systemd-networkd-wait-online.service - Wait for Network to be Configured. Jan 30 19:15:39.166119 systemd[1]: Reached target network-online.target - Network is Online. Jan 30 19:15:39.178893 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent... Jan 30 19:15:39.183133 systemd[1]: Starting nvidia.service - NVIDIA Configure Service... Jan 30 19:15:39.215235 systemd[1]: Finished nvidia.service - NVIDIA Configure Service. Jan 30 19:15:40.215647 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent. Jan 30 19:15:40.222371 (kubelet)[1590]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS, KUBELET_KUBEADM_ARGS Jan 30 19:15:40.391537 systemd[1]: Created slice system-sshd.slice - Slice /system/sshd. Jan 30 19:15:40.400892 systemd[1]: Started sshd@0-10.230.38.22:22-139.178.89.65:60668.service - OpenSSH per-connection server daemon (139.178.89.65:60668). Jan 30 19:15:40.664632 systemd-timesyncd[1399]: Network configuration changed, trying to establish connection. Jan 30 19:15:40.667657 systemd-networkd[1422]: eth0: Ignoring DHCPv6 address 2a02:1348:179:8985:24:19ff:fee6:2616/128 (valid for 59min 59s, preferred for 59min 59s) which conflicts with 2a02:1348:179:8985:24:19ff:fee6:2616/64 assigned by NDisc. Jan 30 19:15:40.667669 systemd-networkd[1422]: eth0: Hint: use IPv6Token= setting to change the address generated by NDisc or set UseAutonomousPrefix=no. 
Jan 30 19:15:40.882825 kubelet[1590]: E0130 19:15:40.882718 1590 run.go:72] "command failed" err="failed to load kubelet config file, path: /var/lib/kubelet/config.yaml, error: failed to load Kubelet config file /var/lib/kubelet/config.yaml, error failed to read kubelet config file \"/var/lib/kubelet/config.yaml\", error: open /var/lib/kubelet/config.yaml: no such file or directory" Jan 30 19:15:40.885877 systemd[1]: kubelet.service: Main process exited, code=exited, status=1/FAILURE Jan 30 19:15:40.886319 systemd[1]: kubelet.service: Failed with result 'exit-code'. Jan 30 19:15:40.887094 systemd[1]: kubelet.service: Consumed 1.105s CPU time. Jan 30 19:15:41.296285 sshd[1596]: Accepted publickey for core from 139.178.89.65 port 60668 ssh2: RSA SHA256:u8+itYrLEk8gleuOQPYU4Ynz962uCQsxC4IoVAtgGFc Jan 30 19:15:41.299988 sshd[1596]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Jan 30 19:15:41.313735 systemd[1]: Created slice user-500.slice - User Slice of UID 500. Jan 30 19:15:41.320847 systemd[1]: Starting user-runtime-dir@500.service - User Runtime Directory /run/user/500... Jan 30 19:15:41.325766 systemd-logind[1480]: New session 1 of user core. Jan 30 19:15:41.346393 systemd[1]: Finished user-runtime-dir@500.service - User Runtime Directory /run/user/500. Jan 30 19:15:41.354971 systemd[1]: Starting user@500.service - User Manager for UID 500... Jan 30 19:15:41.378041 (systemd)[1604]: pam_unix(systemd-user:session): session opened for user core(uid=500) by (uid=0) Jan 30 19:15:41.524778 systemd[1604]: Queued start job for default target default.target. Jan 30 19:15:41.534394 systemd[1604]: Created slice app.slice - User Application Slice. Jan 30 19:15:41.534575 systemd[1604]: Reached target paths.target - Paths. Jan 30 19:15:41.534606 systemd[1604]: Reached target timers.target - Timers. Jan 30 19:15:41.536731 systemd[1604]: Starting dbus.socket - D-Bus User Message Bus Socket... Jan 30 19:15:41.552054 systemd[1604]: Listening on dbus.socket - D-Bus User Message Bus Socket. Jan 30 19:15:41.552242 systemd[1604]: Reached target sockets.target - Sockets. Jan 30 19:15:41.552267 systemd[1604]: Reached target basic.target - Basic System. Jan 30 19:15:41.552356 systemd[1604]: Reached target default.target - Main User Target. Jan 30 19:15:41.552456 systemd[1604]: Startup finished in 164ms. Jan 30 19:15:41.552616 systemd[1]: Started user@500.service - User Manager for UID 500. Jan 30 19:15:41.563832 systemd[1]: Started session-1.scope - Session 1 of User core. Jan 30 19:15:41.848585 systemd-timesyncd[1399]: Network configuration changed, trying to establish connection. Jan 30 19:15:42.201889 systemd[1]: Started sshd@1-10.230.38.22:22-139.178.89.65:60676.service - OpenSSH per-connection server daemon (139.178.89.65:60676). Jan 30 19:15:43.084508 sshd[1616]: Accepted publickey for core from 139.178.89.65 port 60676 ssh2: RSA SHA256:u8+itYrLEk8gleuOQPYU4Ynz962uCQsxC4IoVAtgGFc Jan 30 19:15:43.086565 sshd[1616]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Jan 30 19:15:43.093775 systemd-logind[1480]: New session 2 of user core. Jan 30 19:15:43.104848 systemd[1]: Started session-2.scope - Session 2 of User core. Jan 30 19:15:43.709489 sshd[1616]: pam_unix(sshd:session): session closed for user core Jan 30 19:15:43.731502 systemd[1]: sshd@1-10.230.38.22:22-139.178.89.65:60676.service: Deactivated successfully. 
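The "RSA SHA256:u8+itYr..." string sshd logs for each accepted key is the unpadded base64 of a SHA-256 digest over the raw public key blob, the same value ssh-keygen -lf prints. A sketch of how such a fingerprint is derived from a one-line OpenSSH public key, in Python; the authorized_keys path is the one updated earlier in this log:

import base64
import hashlib

def ssh_fingerprint(pubkey_line: str) -> str:
    """OpenSSH-style SHA256 fingerprint of a "ssh-rsa AAAA... comment" line."""
    blob = base64.b64decode(pubkey_line.split()[1])                   # the key blob is the second field
    digest = hashlib.sha256(blob).digest()
    return "SHA256:" + base64.b64encode(digest).decode().rstrip("=")  # OpenSSH drops base64 padding

with open("/home/core/.ssh/authorized_keys") as keys:
    for line in keys:
        if line.strip() and not line.startswith("#"):
            print(ssh_fingerprint(line))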
Jan 30 19:15:43.734882 login[1571]: pam_unix(login:session): session opened for user core(uid=500) by LOGIN(uid=0) Jan 30 19:15:43.736210 systemd[1]: session-2.scope: Deactivated successfully. Jan 30 19:15:43.738808 systemd-logind[1480]: Session 2 logged out. Waiting for processes to exit. Jan 30 19:15:43.741192 systemd-logind[1480]: Removed session 2. Jan 30 19:15:43.746498 systemd-logind[1480]: New session 3 of user core. Jan 30 19:15:43.751742 systemd[1]: Started session-3.scope - Session 3 of User core. Jan 30 19:15:43.766790 login[1570]: pam_unix(login:session): session opened for user core(uid=500) by LOGIN(uid=0) Jan 30 19:15:43.773509 systemd-logind[1480]: New session 4 of user core. Jan 30 19:15:43.782007 systemd[1]: Started session-4.scope - Session 4 of User core. Jan 30 19:15:43.865985 systemd[1]: Started sshd@2-10.230.38.22:22-139.178.89.65:60678.service - OpenSSH per-connection server daemon (139.178.89.65:60678). Jan 30 19:15:44.743340 sshd[1647]: Accepted publickey for core from 139.178.89.65 port 60678 ssh2: RSA SHA256:u8+itYrLEk8gleuOQPYU4Ynz962uCQsxC4IoVAtgGFc Jan 30 19:15:44.745668 sshd[1647]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Jan 30 19:15:44.754218 systemd-logind[1480]: New session 5 of user core. Jan 30 19:15:44.760752 systemd[1]: Started session-5.scope - Session 5 of User core. Jan 30 19:15:44.981848 coreos-metadata[1471]: Jan 30 19:15:44.981 WARN failed to locate config-drive, using the metadata service API instead Jan 30 19:15:45.008331 coreos-metadata[1471]: Jan 30 19:15:45.008 INFO Fetching http://169.254.169.254/openstack/2012-08-10/meta_data.json: Attempt #1 Jan 30 19:15:45.014422 coreos-metadata[1471]: Jan 30 19:15:45.014 INFO Fetch failed with 404: resource not found Jan 30 19:15:45.014422 coreos-metadata[1471]: Jan 30 19:15:45.014 INFO Fetching http://169.254.169.254/latest/meta-data/hostname: Attempt #1 Jan 30 19:15:45.015138 coreos-metadata[1471]: Jan 30 19:15:45.015 INFO Fetch successful Jan 30 19:15:45.015287 coreos-metadata[1471]: Jan 30 19:15:45.015 INFO Fetching http://169.254.169.254/latest/meta-data/instance-id: Attempt #1 Jan 30 19:15:45.027340 coreos-metadata[1471]: Jan 30 19:15:45.027 INFO Fetch successful Jan 30 19:15:45.027557 coreos-metadata[1471]: Jan 30 19:15:45.027 INFO Fetching http://169.254.169.254/latest/meta-data/instance-type: Attempt #1 Jan 30 19:15:45.042011 coreos-metadata[1471]: Jan 30 19:15:45.041 INFO Fetch successful Jan 30 19:15:45.042011 coreos-metadata[1471]: Jan 30 19:15:45.041 INFO Fetching http://169.254.169.254/latest/meta-data/local-ipv4: Attempt #1 Jan 30 19:15:45.055812 coreos-metadata[1471]: Jan 30 19:15:45.055 INFO Fetch successful Jan 30 19:15:45.055812 coreos-metadata[1471]: Jan 30 19:15:45.055 INFO Fetching http://169.254.169.254/latest/meta-data/public-ipv4: Attempt #1 Jan 30 19:15:45.073373 coreos-metadata[1471]: Jan 30 19:15:45.073 INFO Fetch successful Jan 30 19:15:45.116224 systemd[1]: Finished coreos-metadata.service - Flatcar Metadata Agent. Jan 30 19:15:45.117161 systemd[1]: packet-phone-home.service - Report Success to Packet was skipped because no trigger condition checks were met. Jan 30 19:15:45.398620 sshd[1647]: pam_unix(sshd:session): session closed for user core Jan 30 19:15:45.403146 systemd-logind[1480]: Session 5 logged out. Waiting for processes to exit. Jan 30 19:15:45.405080 systemd[1]: sshd@2-10.230.38.22:22-139.178.89.65:60678.service: Deactivated successfully. Jan 30 19:15:45.407687 systemd[1]: session-5.scope: Deactivated successfully. 
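coreos-metadata fails to find a config drive, tries the OpenStack-specific JSON document (404), and then succeeds against the EC2-compatible /latest/meta-data/ paths on the link-local address. A rough sketch of that fetch-with-fallback pattern using only the Python standard library; the URLs are the ones logged above, while the retry count and timeout are arbitrary choices for the sketch:

import urllib.error
import urllib.request

METADATA_HOST = "http://169.254.169.254"
PATHS = [
    "/latest/meta-data/hostname",
    "/latest/meta-data/instance-id",
    "/latest/meta-data/instance-type",
    "/latest/meta-data/local-ipv4",
    "/latest/meta-data/public-ipv4",
]

def fetch(path, attempts=3, timeout=5):
    for _ in range(attempts):
        try:
            with urllib.request.urlopen(METADATA_HOST + path, timeout=timeout) as resp:
                return resp.read().decode().strip()
        except urllib.error.HTTPError as err:
            if err.code == 404:  # resource genuinely absent, like the OpenStack JSON above
                return None
        except urllib.error.URLError:
            pass                 # transient failure, retry
    return None

for path in PATHS:
    print(path, "=>", fetch(path))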
Jan 30 19:15:45.409471 systemd-logind[1480]: Removed session 5. Jan 30 19:15:45.501747 coreos-metadata[1529]: Jan 30 19:15:45.501 WARN failed to locate config-drive, using the metadata service API instead Jan 30 19:15:45.524719 coreos-metadata[1529]: Jan 30 19:15:45.524 INFO Fetching http://169.254.169.254/latest/meta-data/public-keys: Attempt #1 Jan 30 19:15:45.548161 coreos-metadata[1529]: Jan 30 19:15:45.548 INFO Fetch successful Jan 30 19:15:45.548313 coreos-metadata[1529]: Jan 30 19:15:45.548 INFO Fetching http://169.254.169.254/latest/meta-data/public-keys/0/openssh-key: Attempt #1 Jan 30 19:15:45.578762 coreos-metadata[1529]: Jan 30 19:15:45.578 INFO Fetch successful Jan 30 19:15:45.580879 unknown[1529]: wrote ssh authorized keys file for user: core Jan 30 19:15:45.603764 update-ssh-keys[1662]: Updated "/home/core/.ssh/authorized_keys" Jan 30 19:15:45.604416 systemd[1]: Finished coreos-metadata-sshkeys@core.service - Flatcar Metadata Agent (SSH Keys). Jan 30 19:15:45.606887 systemd[1]: Finished sshkeys.service. Jan 30 19:15:45.610002 systemd[1]: Reached target multi-user.target - Multi-User System. Jan 30 19:15:45.610209 systemd[1]: Startup finished in 1.385s (kernel) + 14.210s (initrd) + 11.691s (userspace) = 27.287s. Jan 30 19:15:51.136718 systemd[1]: kubelet.service: Scheduled restart job, restart counter is at 1. Jan 30 19:15:51.147935 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent... Jan 30 19:15:51.316954 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent. Jan 30 19:15:51.329176 (kubelet)[1674]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS, KUBELET_KUBEADM_ARGS Jan 30 19:15:51.427617 kubelet[1674]: E0130 19:15:51.427323 1674 run.go:72] "command failed" err="failed to load kubelet config file, path: /var/lib/kubelet/config.yaml, error: failed to load Kubelet config file /var/lib/kubelet/config.yaml, error failed to read kubelet config file \"/var/lib/kubelet/config.yaml\", error: open /var/lib/kubelet/config.yaml: no such file or directory" Jan 30 19:15:51.431765 systemd[1]: kubelet.service: Main process exited, code=exited, status=1/FAILURE Jan 30 19:15:51.431984 systemd[1]: kubelet.service: Failed with result 'exit-code'. Jan 30 19:15:55.520764 systemd[1]: Started sshd@3-10.230.38.22:22-139.178.89.65:36302.service - OpenSSH per-connection server daemon (139.178.89.65:36302). Jan 30 19:15:56.408374 sshd[1683]: Accepted publickey for core from 139.178.89.65 port 36302 ssh2: RSA SHA256:u8+itYrLEk8gleuOQPYU4Ynz962uCQsxC4IoVAtgGFc Jan 30 19:15:56.410774 sshd[1683]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Jan 30 19:15:56.419210 systemd-logind[1480]: New session 6 of user core. Jan 30 19:15:56.428661 systemd[1]: Started session-6.scope - Session 6 of User core. Jan 30 19:15:57.030894 sshd[1683]: pam_unix(sshd:session): session closed for user core Jan 30 19:15:57.034792 systemd-logind[1480]: Session 6 logged out. Waiting for processes to exit. Jan 30 19:15:57.036091 systemd[1]: sshd@3-10.230.38.22:22-139.178.89.65:36302.service: Deactivated successfully. Jan 30 19:15:57.038100 systemd[1]: session-6.scope: Deactivated successfully. Jan 30 19:15:57.040220 systemd-logind[1480]: Removed session 6. Jan 30 19:15:57.190853 systemd[1]: Started sshd@4-10.230.38.22:22-139.178.89.65:36306.service - OpenSSH per-connection server daemon (139.178.89.65:36306). 
Jan 30 19:15:58.071469 sshd[1690]: Accepted publickey for core from 139.178.89.65 port 36306 ssh2: RSA SHA256:u8+itYrLEk8gleuOQPYU4Ynz962uCQsxC4IoVAtgGFc Jan 30 19:15:58.073675 sshd[1690]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Jan 30 19:15:58.081030 systemd-logind[1480]: New session 7 of user core. Jan 30 19:15:58.090745 systemd[1]: Started session-7.scope - Session 7 of User core. Jan 30 19:15:58.685386 sshd[1690]: pam_unix(sshd:session): session closed for user core Jan 30 19:15:58.690269 systemd[1]: sshd@4-10.230.38.22:22-139.178.89.65:36306.service: Deactivated successfully. Jan 30 19:15:58.692562 systemd[1]: session-7.scope: Deactivated successfully. Jan 30 19:15:58.693461 systemd-logind[1480]: Session 7 logged out. Waiting for processes to exit. Jan 30 19:15:58.695028 systemd-logind[1480]: Removed session 7. Jan 30 19:15:58.847739 systemd[1]: Started sshd@5-10.230.38.22:22-139.178.89.65:36316.service - OpenSSH per-connection server daemon (139.178.89.65:36316). Jan 30 19:15:59.728013 sshd[1697]: Accepted publickey for core from 139.178.89.65 port 36316 ssh2: RSA SHA256:u8+itYrLEk8gleuOQPYU4Ynz962uCQsxC4IoVAtgGFc Jan 30 19:15:59.730657 sshd[1697]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Jan 30 19:15:59.738543 systemd-logind[1480]: New session 8 of user core. Jan 30 19:15:59.748996 systemd[1]: Started session-8.scope - Session 8 of User core. Jan 30 19:16:00.347766 sshd[1697]: pam_unix(sshd:session): session closed for user core Jan 30 19:16:00.351719 systemd[1]: sshd@5-10.230.38.22:22-139.178.89.65:36316.service: Deactivated successfully. Jan 30 19:16:00.354014 systemd[1]: session-8.scope: Deactivated successfully. Jan 30 19:16:00.355111 systemd-logind[1480]: Session 8 logged out. Waiting for processes to exit. Jan 30 19:16:00.357480 systemd-logind[1480]: Removed session 8. Jan 30 19:16:00.501424 systemd[1]: Started sshd@6-10.230.38.22:22-139.178.89.65:53836.service - OpenSSH per-connection server daemon (139.178.89.65:53836). Jan 30 19:16:01.403957 sshd[1704]: Accepted publickey for core from 139.178.89.65 port 53836 ssh2: RSA SHA256:u8+itYrLEk8gleuOQPYU4Ynz962uCQsxC4IoVAtgGFc Jan 30 19:16:01.406511 sshd[1704]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Jan 30 19:16:01.414178 systemd-logind[1480]: New session 9 of user core. Jan 30 19:16:01.425648 systemd[1]: Started session-9.scope - Session 9 of User core. Jan 30 19:16:01.433206 systemd[1]: kubelet.service: Scheduled restart job, restart counter is at 2. Jan 30 19:16:01.448778 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent... Jan 30 19:16:01.598701 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent. Jan 30 19:16:01.598932 (kubelet)[1715]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS, KUBELET_KUBEADM_ARGS Jan 30 19:16:01.692503 kubelet[1715]: E0130 19:16:01.690123 1715 run.go:72] "command failed" err="failed to load kubelet config file, path: /var/lib/kubelet/config.yaml, error: failed to load Kubelet config file /var/lib/kubelet/config.yaml, error failed to read kubelet config file \"/var/lib/kubelet/config.yaml\", error: open /var/lib/kubelet/config.yaml: no such file or directory" Jan 30 19:16:01.692680 systemd[1]: kubelet.service: Main process exited, code=exited, status=1/FAILURE Jan 30 19:16:01.692947 systemd[1]: kubelet.service: Failed with result 'exit-code'. 
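Each scheduled restart of kubelet.service fails the same way: the unit points the kubelet at /var/lib/kubelet/config.yaml, that file does not exist yet, so the process exits with status 1 and systemd schedules the next retry. A small pre-flight sketch of that check in Python; the reading that the file only appears once the node is bootstrapped (for example by kubeadm) is our interpretation, not something the log states:

import pathlib
import sys

CONFIG = pathlib.Path("/var/lib/kubelet/config.yaml")  # path from the kubelet error above

if not CONFIG.is_file():
    print(f"{CONFIG} missing: kubelet will exit 1 and systemd will retry after its restart delay",
          file=sys.stderr)
    sys.exit(1)

print(f"{CONFIG} present ({CONFIG.stat().st_size} bytes); kubelet can load its configuration")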
Jan 30 19:16:01.892609 sudo[1722]: core : PWD=/home/core ; USER=root ; COMMAND=/usr/sbin/setenforce 1 Jan 30 19:16:01.893080 sudo[1722]: pam_unix(sudo:session): session opened for user root(uid=0) by core(uid=500) Jan 30 19:16:01.909819 sudo[1722]: pam_unix(sudo:session): session closed for user root Jan 30 19:16:02.053793 sshd[1704]: pam_unix(sshd:session): session closed for user core Jan 30 19:16:02.059961 systemd[1]: sshd@6-10.230.38.22:22-139.178.89.65:53836.service: Deactivated successfully. Jan 30 19:16:02.063281 systemd[1]: session-9.scope: Deactivated successfully. Jan 30 19:16:02.065513 systemd-logind[1480]: Session 9 logged out. Waiting for processes to exit. Jan 30 19:16:02.067342 systemd-logind[1480]: Removed session 9. Jan 30 19:16:02.210789 systemd[1]: Started sshd@7-10.230.38.22:22-139.178.89.65:53850.service - OpenSSH per-connection server daemon (139.178.89.65:53850). Jan 30 19:16:03.102709 sshd[1727]: Accepted publickey for core from 139.178.89.65 port 53850 ssh2: RSA SHA256:u8+itYrLEk8gleuOQPYU4Ynz962uCQsxC4IoVAtgGFc Jan 30 19:16:03.105106 sshd[1727]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Jan 30 19:16:03.112826 systemd-logind[1480]: New session 10 of user core. Jan 30 19:16:03.120700 systemd[1]: Started session-10.scope - Session 10 of User core. Jan 30 19:16:03.581546 sudo[1731]: core : PWD=/home/core ; USER=root ; COMMAND=/usr/bin/rm -rf /etc/audit/rules.d/80-selinux.rules /etc/audit/rules.d/99-default.rules Jan 30 19:16:03.582055 sudo[1731]: pam_unix(sudo:session): session opened for user root(uid=0) by core(uid=500) Jan 30 19:16:03.588654 sudo[1731]: pam_unix(sudo:session): session closed for user root Jan 30 19:16:03.598490 sudo[1730]: core : PWD=/home/core ; USER=root ; COMMAND=/usr/bin/systemctl restart audit-rules Jan 30 19:16:03.599003 sudo[1730]: pam_unix(sudo:session): session opened for user root(uid=0) by core(uid=500) Jan 30 19:16:03.617203 systemd[1]: Stopping audit-rules.service - Load Security Auditing Rules... Jan 30 19:16:03.631653 auditctl[1734]: No rules Jan 30 19:16:03.632338 systemd[1]: audit-rules.service: Deactivated successfully. Jan 30 19:16:03.632765 systemd[1]: Stopped audit-rules.service - Load Security Auditing Rules. Jan 30 19:16:03.640918 systemd[1]: Starting audit-rules.service - Load Security Auditing Rules... Jan 30 19:16:03.684948 augenrules[1752]: No rules Jan 30 19:16:03.687009 systemd[1]: Finished audit-rules.service - Load Security Auditing Rules. Jan 30 19:16:03.688397 sudo[1730]: pam_unix(sudo:session): session closed for user root Jan 30 19:16:03.833602 sshd[1727]: pam_unix(sshd:session): session closed for user core Jan 30 19:16:03.839087 systemd-logind[1480]: Session 10 logged out. Waiting for processes to exit. Jan 30 19:16:03.839750 systemd[1]: sshd@7-10.230.38.22:22-139.178.89.65:53850.service: Deactivated successfully. Jan 30 19:16:03.842383 systemd[1]: session-10.scope: Deactivated successfully. Jan 30 19:16:03.845024 systemd-logind[1480]: Removed session 10. Jan 30 19:16:03.996916 systemd[1]: Started sshd@8-10.230.38.22:22-139.178.89.65:53864.service - OpenSSH per-connection server daemon (139.178.89.65:53864). Jan 30 19:16:04.871620 sshd[1760]: Accepted publickey for core from 139.178.89.65 port 53864 ssh2: RSA SHA256:u8+itYrLEk8gleuOQPYU4Ynz962uCQsxC4IoVAtgGFc Jan 30 19:16:04.873603 sshd[1760]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Jan 30 19:16:04.881665 systemd-logind[1480]: New session 11 of user core. 
Jan 30 19:16:04.889719 systemd[1]: Started session-11.scope - Session 11 of User core. Jan 30 19:16:05.344366 sudo[1763]: core : PWD=/home/core ; USER=root ; COMMAND=/home/core/install.sh Jan 30 19:16:05.345520 sudo[1763]: pam_unix(sudo:session): session opened for user root(uid=0) by core(uid=500) Jan 30 19:16:06.073237 systemd[1]: Stopped kubelet.service - kubelet: The Kubernetes Node Agent. Jan 30 19:16:06.080848 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent... Jan 30 19:16:06.124165 systemd[1]: Reloading requested from client PID 1796 ('systemctl') (unit session-11.scope)... Jan 30 19:16:06.124398 systemd[1]: Reloading... Jan 30 19:16:06.277502 zram_generator::config[1844]: No configuration found. Jan 30 19:16:06.448585 systemd[1]: /usr/lib/systemd/system/docker.socket:6: ListenStream= references a path below legacy directory /var/run/, updating /var/run/docker.sock → /run/docker.sock; please update the unit file accordingly. Jan 30 19:16:06.561439 systemd[1]: Reloading finished in 436 ms. Jan 30 19:16:06.631368 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent. Jan 30 19:16:06.642032 systemd[1]: Stopping kubelet.service - kubelet: The Kubernetes Node Agent... Jan 30 19:16:06.642579 systemd[1]: kubelet.service: Deactivated successfully. Jan 30 19:16:06.642890 systemd[1]: Stopped kubelet.service - kubelet: The Kubernetes Node Agent. Jan 30 19:16:06.646672 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent... Jan 30 19:16:06.850632 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent. Jan 30 19:16:06.865593 (kubelet)[1905]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS Jan 30 19:16:06.944856 kubelet[1905]: Flag --container-runtime-endpoint has been deprecated, This parameter should be set via the config file specified by the Kubelet's --config flag. See https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/ for more information. Jan 30 19:16:06.944856 kubelet[1905]: Flag --pod-infra-container-image has been deprecated, will be removed in 1.35. Image garbage collector will get sandbox image information from CRI. Jan 30 19:16:06.944856 kubelet[1905]: Flag --volume-plugin-dir has been deprecated, This parameter should be set via the config file specified by the Kubelet's --config flag. See https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/ for more information. 
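The deprecation warnings above say the flag values belong in the file passed to the kubelet's --config flag. A hedged sketch of writing such a KubeletConfiguration in Python: containerRuntimeEndpoint and volumePluginDir are the config-file counterparts of two of the deprecated flags, and the concrete values reuse the containerd socket and Flexvolume directory that appear elsewhere in this log; treat the file as illustrative rather than a drop-in for this host:

import pathlib

# Minimal KubeletConfiguration; kind/apiVersion are the standard kubelet config group/version.
# The endpoint and plugin directory values come from other messages in this log.
CONFIG_YAML = """\
apiVersion: kubelet.config.k8s.io/v1beta1
kind: KubeletConfiguration
containerRuntimeEndpoint: unix:///run/containerd/containerd.sock
volumePluginDir: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/
"""

path = pathlib.Path("/var/lib/kubelet/config.yaml")
path.parent.mkdir(parents=True, exist_ok=True)
path.write_text(CONFIG_YAML)
print(f"wrote {path} ({len(CONFIG_YAML)} bytes)")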
Jan 30 19:16:06.945598 kubelet[1905]: I0130 19:16:06.945009 1905 server.go:215] "--pod-infra-container-image will not be pruned by the image garbage collector in kubelet and should also be set in the remote runtime" Jan 30 19:16:08.063561 kubelet[1905]: I0130 19:16:08.063428 1905 server.go:520] "Kubelet version" kubeletVersion="v1.32.0" Jan 30 19:16:08.063561 kubelet[1905]: I0130 19:16:08.063535 1905 server.go:522] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK="" Jan 30 19:16:08.064221 kubelet[1905]: I0130 19:16:08.063964 1905 server.go:954] "Client rotation is on, will bootstrap in background" Jan 30 19:16:08.097849 kubelet[1905]: I0130 19:16:08.097516 1905 dynamic_cafile_content.go:161] "Starting controller" name="client-ca-bundle::/etc/kubernetes/pki/ca.crt" Jan 30 19:16:08.113691 kubelet[1905]: E0130 19:16:08.113624 1905 log.go:32] "RuntimeConfig from runtime service failed" err="rpc error: code = Unimplemented desc = unknown method RuntimeConfig for service runtime.v1.RuntimeService" Jan 30 19:16:08.113787 kubelet[1905]: I0130 19:16:08.113699 1905 server.go:1421] "CRI implementation should be updated to support RuntimeConfig when KubeletCgroupDriverFromCRI feature gate has been enabled. Falling back to using cgroupDriver from kubelet config." Jan 30 19:16:08.119574 kubelet[1905]: I0130 19:16:08.119518 1905 server.go:772] "--cgroups-per-qos enabled, but --cgroup-root was not specified. defaulting to /" Jan 30 19:16:08.130165 kubelet[1905]: I0130 19:16:08.129537 1905 container_manager_linux.go:268] "Container manager verified user specified cgroup-root exists" cgroupRoot=[] Jan 30 19:16:08.130165 kubelet[1905]: I0130 19:16:08.129612 1905 container_manager_linux.go:273] "Creating Container Manager object based on Node Config" nodeConfig={"NodeName":"10.230.38.22","RuntimeCgroupsName":"","SystemCgroupsName":"","KubeletCgroupsName":"","KubeletOOMScoreAdj":-999,"ContainerRuntime":"","CgroupsPerQOS":true,"CgroupRoot":"/","CgroupDriver":"systemd","KubeletRootDir":"/var/lib/kubelet","ProtectKernelDefaults":false,"KubeReservedCgroupName":"","SystemReservedCgroupName":"","ReservedSystemCPUs":{},"EnforceNodeAllocatable":{"pods":{}},"KubeReserved":null,"SystemReserved":null,"HardEvictionThresholds":[{"Signal":"memory.available","Operator":"LessThan","Value":{"Quantity":"100Mi","Percentage":0},"GracePeriod":0,"MinReclaim":null},{"Signal":"nodefs.available","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.1},"GracePeriod":0,"MinReclaim":null},{"Signal":"nodefs.inodesFree","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.05},"GracePeriod":0,"MinReclaim":null},{"Signal":"imagefs.available","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.15},"GracePeriod":0,"MinReclaim":null},{"Signal":"imagefs.inodesFree","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.05},"GracePeriod":0,"MinReclaim":null}],"QOSReserved":{},"CPUManagerPolicy":"none","CPUManagerPolicyOptions":null,"TopologyManagerScope":"container","CPUManagerReconcilePeriod":10000000000,"ExperimentalMemoryManagerPolicy":"None","ExperimentalMemoryManagerReservedMemory":null,"PodPidsLimit":-1,"EnforceCPULimits":true,"CPUCFSQuotaPeriod":100000000,"TopologyManagerPolicy":"none","TopologyManagerPolicyOptions":null,"CgroupVersion":2} Jan 30 19:16:08.130165 kubelet[1905]: I0130 19:16:08.129921 1905 topology_manager.go:138] "Creating topology manager with none policy" Jan 30 19:16:08.130165 kubelet[1905]: I0130 19:16:08.129938 1905 container_manager_linux.go:304] "Creating device plugin manager" Jan 
30 19:16:08.130641 kubelet[1905]: I0130 19:16:08.130220 1905 state_mem.go:36] "Initialized new in-memory state store" Jan 30 19:16:08.134940 kubelet[1905]: I0130 19:16:08.134506 1905 kubelet.go:446] "Attempting to sync node with API server" Jan 30 19:16:08.134940 kubelet[1905]: I0130 19:16:08.134537 1905 kubelet.go:341] "Adding static pod path" path="/etc/kubernetes/manifests" Jan 30 19:16:08.134940 kubelet[1905]: I0130 19:16:08.134573 1905 kubelet.go:352] "Adding apiserver pod source" Jan 30 19:16:08.134940 kubelet[1905]: I0130 19:16:08.134607 1905 apiserver.go:42] "Waiting for node sync before watching apiserver pods" Jan 30 19:16:08.136972 kubelet[1905]: E0130 19:16:08.136916 1905 file.go:98] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Jan 30 19:16:08.137061 kubelet[1905]: E0130 19:16:08.137025 1905 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Jan 30 19:16:08.138873 kubelet[1905]: I0130 19:16:08.138830 1905 kuberuntime_manager.go:269] "Container runtime initialized" containerRuntime="containerd" version="v1.7.21" apiVersion="v1" Jan 30 19:16:08.139743 kubelet[1905]: I0130 19:16:08.139710 1905 kubelet.go:890] "Not starting ClusterTrustBundle informer because we are in static kubelet mode" Jan 30 19:16:08.140584 kubelet[1905]: W0130 19:16:08.140542 1905 probe.go:272] Flexvolume plugin directory at /opt/libexec/kubernetes/kubelet-plugins/volume/exec/ does not exist. Recreating. Jan 30 19:16:08.143232 kubelet[1905]: I0130 19:16:08.143125 1905 watchdog_linux.go:99] "Systemd watchdog is not enabled" Jan 30 19:16:08.143232 kubelet[1905]: I0130 19:16:08.143192 1905 server.go:1287] "Started kubelet" Jan 30 19:16:08.144552 kubelet[1905]: I0130 19:16:08.143487 1905 server.go:169] "Starting to listen" address="0.0.0.0" port=10250 Jan 30 19:16:08.145765 kubelet[1905]: I0130 19:16:08.145240 1905 server.go:490] "Adding debug handlers to kubelet server" Jan 30 19:16:08.147485 kubelet[1905]: I0130 19:16:08.146969 1905 ratelimit.go:55] "Setting rate limiting for endpoint" service="podresources" qps=100 burstTokens=10 Jan 30 19:16:08.147621 kubelet[1905]: I0130 19:16:08.147600 1905 server.go:243] "Starting to serve the podresources API" endpoint="unix:/var/lib/kubelet/pod-resources/kubelet.sock" Jan 30 19:16:08.149356 kubelet[1905]: I0130 19:16:08.149299 1905 fs_resource_analyzer.go:67] "Starting FS ResourceAnalyzer" Jan 30 19:16:08.154872 kubelet[1905]: E0130 19:16:08.153382 1905 event.go:359] "Server rejected event (will not retry!)" err="events is forbidden: User \"system:anonymous\" cannot create resource \"events\" in API group \"\" in the namespace \"default\"" event="&Event{ObjectMeta:{10.230.38.22.181f8e6a3dd20a5e default 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Node,Namespace:,Name:10.230.38.22,UID:10.230.38.22,APIVersion:,ResourceVersion:,FieldPath:,},Reason:Starting,Message:Starting kubelet.,Source:EventSource{Component:kubelet,Host:10.230.38.22,},FirstTimestamp:2025-01-30 19:16:08.143153758 +0000 UTC m=+1.268699718,LastTimestamp:2025-01-30 19:16:08.143153758 +0000 UTC m=+1.268699718,Count:1,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:10.230.38.22,}" Jan 30 19:16:08.157330 kubelet[1905]: W0130 19:16:08.157093 1905 reflector.go:569] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Node: nodes "10.230.38.22" is forbidden: User 
"system:anonymous" cannot list resource "nodes" in API group "" at the cluster scope Jan 30 19:16:08.157330 kubelet[1905]: E0130 19:16:08.157159 1905 reflector.go:166] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Node: failed to list *v1.Node: nodes \"10.230.38.22\" is forbidden: User \"system:anonymous\" cannot list resource \"nodes\" in API group \"\" at the cluster scope" logger="UnhandledError" Jan 30 19:16:08.157330 kubelet[1905]: W0130 19:16:08.157315 1905 reflector.go:569] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Service: services is forbidden: User "system:anonymous" cannot list resource "services" in API group "" at the cluster scope Jan 30 19:16:08.158012 kubelet[1905]: E0130 19:16:08.157342 1905 reflector.go:166] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Service: failed to list *v1.Service: services is forbidden: User \"system:anonymous\" cannot list resource \"services\" in API group \"\" at the cluster scope" logger="UnhandledError" Jan 30 19:16:08.158012 kubelet[1905]: I0130 19:16:08.157846 1905 volume_manager.go:297] "Starting Kubelet Volume Manager" Jan 30 19:16:08.158391 kubelet[1905]: I0130 19:16:08.158352 1905 dynamic_serving_content.go:135] "Starting controller" name="kubelet-server-cert-files::/var/lib/kubelet/pki/kubelet.crt::/var/lib/kubelet/pki/kubelet.key" Jan 30 19:16:08.159618 kubelet[1905]: E0130 19:16:08.159561 1905 kubelet_node_status.go:467] "Error getting the current node from lister" err="node \"10.230.38.22\" not found" Jan 30 19:16:08.161506 kubelet[1905]: I0130 19:16:08.159942 1905 desired_state_of_world_populator.go:149] "Desired state populator starts to run" Jan 30 19:16:08.161506 kubelet[1905]: I0130 19:16:08.160033 1905 reconciler.go:26] "Reconciler: start to sync state" Jan 30 19:16:08.166478 kubelet[1905]: I0130 19:16:08.165737 1905 factory.go:221] Registration of the systemd container factory successfully Jan 30 19:16:08.166478 kubelet[1905]: I0130 19:16:08.165848 1905 factory.go:219] Registration of the crio container factory failed: Get "http://%2Fvar%2Frun%2Fcrio%2Fcrio.sock/info": dial unix /var/run/crio/crio.sock: connect: no such file or directory Jan 30 19:16:08.167777 kubelet[1905]: E0130 19:16:08.167745 1905 kubelet.go:1561] "Image garbage collection failed once. 
Stats initialization may not have completed yet" err="invalid capacity 0 on image filesystem" Jan 30 19:16:08.169328 kubelet[1905]: I0130 19:16:08.169284 1905 factory.go:221] Registration of the containerd container factory successfully Jan 30 19:16:08.200970 kubelet[1905]: E0130 19:16:08.200397 1905 nodelease.go:49] "Failed to get node when trying to set owner ref to the node lease" err="nodes \"10.230.38.22\" not found" node="10.230.38.22" Jan 30 19:16:08.208507 kubelet[1905]: I0130 19:16:08.208462 1905 cpu_manager.go:221] "Starting CPU manager" policy="none" Jan 30 19:16:08.208507 kubelet[1905]: I0130 19:16:08.208498 1905 cpu_manager.go:222] "Reconciling" reconcilePeriod="10s" Jan 30 19:16:08.208647 kubelet[1905]: I0130 19:16:08.208528 1905 state_mem.go:36] "Initialized new in-memory state store" Jan 30 19:16:08.211173 kubelet[1905]: I0130 19:16:08.211143 1905 policy_none.go:49] "None policy: Start" Jan 30 19:16:08.211250 kubelet[1905]: I0130 19:16:08.211195 1905 memory_manager.go:186] "Starting memorymanager" policy="None" Jan 30 19:16:08.211250 kubelet[1905]: I0130 19:16:08.211224 1905 state_mem.go:35] "Initializing new in-memory state store" Jan 30 19:16:08.225197 systemd[1]: Created slice kubepods.slice - libcontainer container kubepods.slice. Jan 30 19:16:08.242190 systemd[1]: Created slice kubepods-burstable.slice - libcontainer container kubepods-burstable.slice. Jan 30 19:16:08.248591 systemd[1]: Created slice kubepods-besteffort.slice - libcontainer container kubepods-besteffort.slice. Jan 30 19:16:08.257241 kubelet[1905]: I0130 19:16:08.255934 1905 manager.go:519] "Failed to read data from checkpoint" checkpoint="kubelet_internal_checkpoint" err="checkpoint is not found" Jan 30 19:16:08.257241 kubelet[1905]: I0130 19:16:08.256221 1905 eviction_manager.go:189] "Eviction manager: starting control loop" Jan 30 19:16:08.257241 kubelet[1905]: I0130 19:16:08.256247 1905 container_log_manager.go:189] "Initializing container log rotate workers" workers=1 monitorPeriod="10s" Jan 30 19:16:08.257241 kubelet[1905]: I0130 19:16:08.257015 1905 plugin_manager.go:118] "Starting Kubelet Plugin Manager" Jan 30 19:16:08.261278 kubelet[1905]: E0130 19:16:08.261035 1905 eviction_manager.go:267] "eviction manager: failed to check if we have separate container filesystem. Ignoring." err="no imagefs label for configured runtime" Jan 30 19:16:08.261575 kubelet[1905]: E0130 19:16:08.261544 1905 eviction_manager.go:292] "Eviction manager: failed to get summary stats" err="failed to get node info: node \"10.230.38.22\" not found" Jan 30 19:16:08.265209 kubelet[1905]: I0130 19:16:08.265138 1905 kubelet_network_linux.go:50] "Initialized iptables rules." protocol="IPv4" Jan 30 19:16:08.267183 kubelet[1905]: I0130 19:16:08.267057 1905 kubelet_network_linux.go:50] "Initialized iptables rules." protocol="IPv6" Jan 30 19:16:08.267183 kubelet[1905]: I0130 19:16:08.267131 1905 status_manager.go:227] "Starting to sync pod status with apiserver" Jan 30 19:16:08.267183 kubelet[1905]: I0130 19:16:08.267169 1905 watchdog_linux.go:127] "Systemd watchdog is not enabled or the interval is invalid, so health checking will not be started." 
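An aside on the "Registration of the crio container factory failed" entry above: the stats collector probes /var/run/crio/crio.sock with an HTTP GET to /info over a unix socket, and no CRI-O daemon exists on this node (containerd is the runtime, and its factory registers successfully right after). A minimal Go sketch, not the kubelet's or cAdvisor's actual code, that reproduces the same "dial unix ... connect: no such file or directory" error against a missing socket:

```go
package main

import (
	"context"
	"fmt"
	"net"
	"net/http"
)

func main() {
	// Socket path taken from the log; on this node it does not exist.
	sock := "/var/run/crio/crio.sock"
	client := &http.Client{
		Transport: &http.Transport{
			// Ignore the host in the URL and dial the unix socket instead.
			DialContext: func(ctx context.Context, _, _ string) (net.Conn, error) {
				return (&net.Dialer{}).DialContext(ctx, "unix", sock)
			},
		},
	}
	_, err := client.Get("http://crio/info")
	fmt.Println(err)
	// On a node without CRI-O this prints something like:
	// Get "http://crio/info": dial unix /var/run/crio/crio.sock: connect: no such file or directory
}
```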
Jan 30 19:16:08.267354 kubelet[1905]: I0130 19:16:08.267202 1905 kubelet.go:2388] "Starting kubelet main sync loop" Jan 30 19:16:08.267422 kubelet[1905]: E0130 19:16:08.267360 1905 kubelet.go:2412] "Skipping pod synchronization" err="PLEG is not healthy: pleg has yet to be successful" Jan 30 19:16:08.358827 kubelet[1905]: I0130 19:16:08.358152 1905 kubelet_node_status.go:76] "Attempting to register node" node="10.230.38.22" Jan 30 19:16:08.367457 kubelet[1905]: I0130 19:16:08.366840 1905 kubelet_node_status.go:79] "Successfully registered node" node="10.230.38.22" Jan 30 19:16:08.367457 kubelet[1905]: E0130 19:16:08.367040 1905 kubelet_node_status.go:549] "Error updating node status, will retry" err="error getting node \"10.230.38.22\": node \"10.230.38.22\" not found" Jan 30 19:16:08.382290 kubelet[1905]: E0130 19:16:08.381897 1905 kubelet_node_status.go:467] "Error getting the current node from lister" err="node \"10.230.38.22\" not found" Jan 30 19:16:08.410073 sudo[1763]: pam_unix(sudo:session): session closed for user root Jan 30 19:16:08.482300 kubelet[1905]: E0130 19:16:08.482223 1905 kubelet_node_status.go:467] "Error getting the current node from lister" err="node \"10.230.38.22\" not found" Jan 30 19:16:08.553906 sshd[1760]: pam_unix(sshd:session): session closed for user core Jan 30 19:16:08.559232 systemd-logind[1480]: Session 11 logged out. Waiting for processes to exit. Jan 30 19:16:08.561432 systemd[1]: sshd@8-10.230.38.22:22-139.178.89.65:53864.service: Deactivated successfully. Jan 30 19:16:08.565154 systemd[1]: session-11.scope: Deactivated successfully. Jan 30 19:16:08.567881 systemd-logind[1480]: Removed session 11. Jan 30 19:16:08.583267 kubelet[1905]: E0130 19:16:08.583153 1905 kubelet_node_status.go:467] "Error getting the current node from lister" err="node \"10.230.38.22\" not found" Jan 30 19:16:08.684492 kubelet[1905]: E0130 19:16:08.684216 1905 kubelet_node_status.go:467] "Error getting the current node from lister" err="node \"10.230.38.22\" not found" Jan 30 19:16:08.785138 kubelet[1905]: E0130 19:16:08.785040 1905 kubelet_node_status.go:467] "Error getting the current node from lister" err="node \"10.230.38.22\" not found" Jan 30 19:16:08.885916 kubelet[1905]: E0130 19:16:08.885877 1905 kubelet_node_status.go:467] "Error getting the current node from lister" err="node \"10.230.38.22\" not found" Jan 30 19:16:08.986891 kubelet[1905]: E0130 19:16:08.986634 1905 kubelet_node_status.go:467] "Error getting the current node from lister" err="node \"10.230.38.22\" not found" Jan 30 19:16:09.066847 kubelet[1905]: I0130 19:16:09.066366 1905 transport.go:147] "Certificate rotation detected, shutting down client connections to start using new credentials" Jan 30 19:16:09.066847 kubelet[1905]: W0130 19:16:09.066749 1905 reflector.go:492] k8s.io/client-go/informers/factory.go:160: watch of *v1.RuntimeClass ended with: very short watch: k8s.io/client-go/informers/factory.go:160: Unexpected watch close - watch lasted less than a second and no items received Jan 30 19:16:09.066847 kubelet[1905]: W0130 19:16:09.066801 1905 reflector.go:492] k8s.io/client-go/informers/factory.go:160: watch of *v1.CSIDriver ended with: very short watch: k8s.io/client-go/informers/factory.go:160: Unexpected watch close - watch lasted less than a second and no items received Jan 30 19:16:09.087971 kubelet[1905]: E0130 19:16:09.087861 1905 kubelet_node_status.go:467] "Error getting the current node from lister" err="node \"10.230.38.22\" not found" Jan 30 19:16:09.137590 kubelet[1905]: E0130 
19:16:09.137523 1905 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Jan 30 19:16:09.188319 kubelet[1905]: E0130 19:16:09.188273 1905 kubelet_node_status.go:467] "Error getting the current node from lister" err="node \"10.230.38.22\" not found" Jan 30 19:16:09.289391 kubelet[1905]: E0130 19:16:09.289199 1905 kubelet_node_status.go:467] "Error getting the current node from lister" err="node \"10.230.38.22\" not found" Jan 30 19:16:09.389898 kubelet[1905]: E0130 19:16:09.389820 1905 kubelet_node_status.go:467] "Error getting the current node from lister" err="node \"10.230.38.22\" not found" Jan 30 19:16:09.489987 kubelet[1905]: E0130 19:16:09.489947 1905 kubelet_node_status.go:467] "Error getting the current node from lister" err="node \"10.230.38.22\" not found" Jan 30 19:16:09.590662 kubelet[1905]: E0130 19:16:09.590576 1905 kubelet_node_status.go:467] "Error getting the current node from lister" err="node \"10.230.38.22\" not found" Jan 30 19:16:09.692774 kubelet[1905]: I0130 19:16:09.692625 1905 kuberuntime_manager.go:1702] "Updating runtime config through cri with podcidr" CIDR="192.168.1.0/24" Jan 30 19:16:09.693531 containerd[1497]: time="2025-01-30T19:16:09.693206104Z" level=info msg="No cni config template is specified, wait for other system components to drop the config." Jan 30 19:16:09.694100 kubelet[1905]: I0130 19:16:09.693547 1905 kubelet_network.go:61] "Updating Pod CIDR" originalPodCIDR="" newPodCIDR="192.168.1.0/24" Jan 30 19:16:10.137861 kubelet[1905]: E0130 19:16:10.137800 1905 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Jan 30 19:16:10.138857 kubelet[1905]: I0130 19:16:10.138532 1905 apiserver.go:52] "Watching apiserver" Jan 30 19:16:10.145832 kubelet[1905]: E0130 19:16:10.145498 1905 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" pod="calico-system/csi-node-driver-n5m8t" podUID="e47b55f6-c6c0-4e43-a8ed-5ed724f4ad91" Jan 30 19:16:10.155144 systemd[1]: Created slice kubepods-besteffort-pode920cc34_75b8_4e4c_8418_91e5be3161da.slice - libcontainer container kubepods-besteffort-pode920cc34_75b8_4e4c_8418_91e5be3161da.slice. 
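At this point the node has been handed the pod CIDR 192.168.1.0/24 (the kuberuntime_manager and kubelet_network entries above), and the "cni plugin not initialized" errors that follow should persist only until calico-node writes its CNI config. A purely illustrative Go sketch of what that /24 gives the node in terms of pod addresses:

```go
package main

import (
	"fmt"
	"net"
)

func main() {
	// Pod CIDR assigned to this node, as logged by the kubelet.
	_, ipnet, err := net.ParseCIDR("192.168.1.0/24")
	if err != nil {
		panic(err)
	}
	ones, bits := ipnet.Mask.Size()
	total := 1 << (bits - ones) // 256 addresses in a /24
	fmt.Printf("range %s, %d addresses (roughly %d usable pod IPs)\n", ipnet, total, total-2)
}
```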
Jan 30 19:16:10.161387 kubelet[1905]: I0130 19:16:10.160452 1905 desired_state_of_world_populator.go:157] "Finished populating initial desired state of world" Jan 30 19:16:10.174116 kubelet[1905]: I0130 19:16:10.171350 1905 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/e920cc34-75b8-4e4c-8418-91e5be3161da-lib-modules\") pod \"calico-node-jvtg9\" (UID: \"e920cc34-75b8-4e4c-8418-91e5be3161da\") " pod="calico-system/calico-node-jvtg9" Jan 30 19:16:10.174116 kubelet[1905]: I0130 19:16:10.171407 1905 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"node-certs\" (UniqueName: \"kubernetes.io/secret/e920cc34-75b8-4e4c-8418-91e5be3161da-node-certs\") pod \"calico-node-jvtg9\" (UID: \"e920cc34-75b8-4e4c-8418-91e5be3161da\") " pod="calico-system/calico-node-jvtg9" Jan 30 19:16:10.174116 kubelet[1905]: I0130 19:16:10.171464 1905 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"var-run-calico\" (UniqueName: \"kubernetes.io/host-path/e920cc34-75b8-4e4c-8418-91e5be3161da-var-run-calico\") pod \"calico-node-jvtg9\" (UID: \"e920cc34-75b8-4e4c-8418-91e5be3161da\") " pod="calico-system/calico-node-jvtg9" Jan 30 19:16:10.174116 kubelet[1905]: I0130 19:16:10.171495 1905 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cni-net-dir\" (UniqueName: \"kubernetes.io/host-path/e920cc34-75b8-4e4c-8418-91e5be3161da-cni-net-dir\") pod \"calico-node-jvtg9\" (UID: \"e920cc34-75b8-4e4c-8418-91e5be3161da\") " pod="calico-system/calico-node-jvtg9" Jan 30 19:16:10.174116 kubelet[1905]: I0130 19:16:10.171523 1905 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"registration-dir\" (UniqueName: \"kubernetes.io/host-path/e47b55f6-c6c0-4e43-a8ed-5ed724f4ad91-registration-dir\") pod \"csi-node-driver-n5m8t\" (UID: \"e47b55f6-c6c0-4e43-a8ed-5ed724f4ad91\") " pod="calico-system/csi-node-driver-n5m8t" Jan 30 19:16:10.174415 kubelet[1905]: I0130 19:16:10.171553 1905 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/c1366b74-db2a-4ec6-a0be-2a6884018a8b-lib-modules\") pod \"kube-proxy-s2kpr\" (UID: \"c1366b74-db2a-4ec6-a0be-2a6884018a8b\") " pod="kube-system/kube-proxy-s2kpr" Jan 30 19:16:10.174415 kubelet[1905]: I0130 19:16:10.171577 1905 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"xtables-lock\" (UniqueName: \"kubernetes.io/host-path/e920cc34-75b8-4e4c-8418-91e5be3161da-xtables-lock\") pod \"calico-node-jvtg9\" (UID: \"e920cc34-75b8-4e4c-8418-91e5be3161da\") " pod="calico-system/calico-node-jvtg9" Jan 30 19:16:10.174415 kubelet[1905]: I0130 19:16:10.171621 1905 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"policysync\" (UniqueName: \"kubernetes.io/host-path/e920cc34-75b8-4e4c-8418-91e5be3161da-policysync\") pod \"calico-node-jvtg9\" (UID: \"e920cc34-75b8-4e4c-8418-91e5be3161da\") " pod="calico-system/calico-node-jvtg9" Jan 30 19:16:10.174415 kubelet[1905]: I0130 19:16:10.171674 1905 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"var-lib-calico\" (UniqueName: \"kubernetes.io/host-path/e920cc34-75b8-4e4c-8418-91e5be3161da-var-lib-calico\") pod \"calico-node-jvtg9\" (UID: 
\"e920cc34-75b8-4e4c-8418-91e5be3161da\") " pod="calico-system/calico-node-jvtg9" Jan 30 19:16:10.174415 kubelet[1905]: I0130 19:16:10.171707 1905 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"varrun\" (UniqueName: \"kubernetes.io/host-path/e47b55f6-c6c0-4e43-a8ed-5ed724f4ad91-varrun\") pod \"csi-node-driver-n5m8t\" (UID: \"e47b55f6-c6c0-4e43-a8ed-5ed724f4ad91\") " pod="calico-system/csi-node-driver-n5m8t" Jan 30 19:16:10.174704 kubelet[1905]: I0130 19:16:10.171742 1905 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kubelet-dir\" (UniqueName: \"kubernetes.io/host-path/e47b55f6-c6c0-4e43-a8ed-5ed724f4ad91-kubelet-dir\") pod \"csi-node-driver-n5m8t\" (UID: \"e47b55f6-c6c0-4e43-a8ed-5ed724f4ad91\") " pod="calico-system/csi-node-driver-n5m8t" Jan 30 19:16:10.174704 kubelet[1905]: I0130 19:16:10.171771 1905 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-vhfb6\" (UniqueName: \"kubernetes.io/projected/e47b55f6-c6c0-4e43-a8ed-5ed724f4ad91-kube-api-access-vhfb6\") pod \"csi-node-driver-n5m8t\" (UID: \"e47b55f6-c6c0-4e43-a8ed-5ed724f4ad91\") " pod="calico-system/csi-node-driver-n5m8t" Jan 30 19:16:10.174704 kubelet[1905]: I0130 19:16:10.171805 1905 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"flexvol-driver-host\" (UniqueName: \"kubernetes.io/host-path/e920cc34-75b8-4e4c-8418-91e5be3161da-flexvol-driver-host\") pod \"calico-node-jvtg9\" (UID: \"e920cc34-75b8-4e4c-8418-91e5be3161da\") " pod="calico-system/calico-node-jvtg9" Jan 30 19:16:10.174704 kubelet[1905]: I0130 19:16:10.171847 1905 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-proxy\" (UniqueName: \"kubernetes.io/configmap/c1366b74-db2a-4ec6-a0be-2a6884018a8b-kube-proxy\") pod \"kube-proxy-s2kpr\" (UID: \"c1366b74-db2a-4ec6-a0be-2a6884018a8b\") " pod="kube-system/kube-proxy-s2kpr" Jan 30 19:16:10.174704 kubelet[1905]: I0130 19:16:10.171885 1905 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"xtables-lock\" (UniqueName: \"kubernetes.io/host-path/c1366b74-db2a-4ec6-a0be-2a6884018a8b-xtables-lock\") pod \"kube-proxy-s2kpr\" (UID: \"c1366b74-db2a-4ec6-a0be-2a6884018a8b\") " pod="kube-system/kube-proxy-s2kpr" Jan 30 19:16:10.174931 kubelet[1905]: I0130 19:16:10.171925 1905 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-kfqrm\" (UniqueName: \"kubernetes.io/projected/c1366b74-db2a-4ec6-a0be-2a6884018a8b-kube-api-access-kfqrm\") pod \"kube-proxy-s2kpr\" (UID: \"c1366b74-db2a-4ec6-a0be-2a6884018a8b\") " pod="kube-system/kube-proxy-s2kpr" Jan 30 19:16:10.174931 kubelet[1905]: I0130 19:16:10.171956 1905 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"tigera-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/e920cc34-75b8-4e4c-8418-91e5be3161da-tigera-ca-bundle\") pod \"calico-node-jvtg9\" (UID: \"e920cc34-75b8-4e4c-8418-91e5be3161da\") " pod="calico-system/calico-node-jvtg9" Jan 30 19:16:10.174931 kubelet[1905]: I0130 19:16:10.172004 1905 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cni-bin-dir\" (UniqueName: \"kubernetes.io/host-path/e920cc34-75b8-4e4c-8418-91e5be3161da-cni-bin-dir\") pod \"calico-node-jvtg9\" (UID: \"e920cc34-75b8-4e4c-8418-91e5be3161da\") " 
pod="calico-system/calico-node-jvtg9" Jan 30 19:16:10.174931 kubelet[1905]: I0130 19:16:10.172038 1905 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cni-log-dir\" (UniqueName: \"kubernetes.io/host-path/e920cc34-75b8-4e4c-8418-91e5be3161da-cni-log-dir\") pod \"calico-node-jvtg9\" (UID: \"e920cc34-75b8-4e4c-8418-91e5be3161da\") " pod="calico-system/calico-node-jvtg9" Jan 30 19:16:10.174931 kubelet[1905]: I0130 19:16:10.172072 1905 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-cdz65\" (UniqueName: \"kubernetes.io/projected/e920cc34-75b8-4e4c-8418-91e5be3161da-kube-api-access-cdz65\") pod \"calico-node-jvtg9\" (UID: \"e920cc34-75b8-4e4c-8418-91e5be3161da\") " pod="calico-system/calico-node-jvtg9" Jan 30 19:16:10.175163 kubelet[1905]: I0130 19:16:10.172105 1905 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"socket-dir\" (UniqueName: \"kubernetes.io/host-path/e47b55f6-c6c0-4e43-a8ed-5ed724f4ad91-socket-dir\") pod \"csi-node-driver-n5m8t\" (UID: \"e47b55f6-c6c0-4e43-a8ed-5ed724f4ad91\") " pod="calico-system/csi-node-driver-n5m8t" Jan 30 19:16:10.175930 systemd[1]: Created slice kubepods-besteffort-podc1366b74_db2a_4ec6_a0be_2a6884018a8b.slice - libcontainer container kubepods-besteffort-podc1366b74_db2a_4ec6_a0be_2a6884018a8b.slice. Jan 30 19:16:10.278148 kubelet[1905]: E0130 19:16:10.278103 1905 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 30 19:16:10.278148 kubelet[1905]: W0130 19:16:10.278134 1905 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 30 19:16:10.278373 kubelet[1905]: E0130 19:16:10.278197 1905 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jan 30 19:16:10.282340 kubelet[1905]: E0130 19:16:10.282313 1905 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 30 19:16:10.282499 kubelet[1905]: W0130 19:16:10.282464 1905 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 30 19:16:10.282682 kubelet[1905]: E0130 19:16:10.282642 1905 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jan 30 19:16:10.317736 kubelet[1905]: E0130 19:16:10.317581 1905 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 30 19:16:10.317736 kubelet[1905]: W0130 19:16:10.317605 1905 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 30 19:16:10.317736 kubelet[1905]: E0130 19:16:10.317649 1905 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Jan 30 19:16:10.319919 kubelet[1905]: E0130 19:16:10.319619 1905 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 30 19:16:10.319919 kubelet[1905]: W0130 19:16:10.319666 1905 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 30 19:16:10.319919 kubelet[1905]: E0130 19:16:10.319689 1905 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jan 30 19:16:10.322227 kubelet[1905]: E0130 19:16:10.322196 1905 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 30 19:16:10.322321 kubelet[1905]: W0130 19:16:10.322282 1905 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 30 19:16:10.322321 kubelet[1905]: E0130 19:16:10.322313 1905 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jan 30 19:16:10.473405 containerd[1497]: time="2025-01-30T19:16:10.473139865Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-node-jvtg9,Uid:e920cc34-75b8-4e4c-8418-91e5be3161da,Namespace:calico-system,Attempt:0,}" Jan 30 19:16:10.481555 containerd[1497]: time="2025-01-30T19:16:10.481112484Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-proxy-s2kpr,Uid:c1366b74-db2a-4ec6-a0be-2a6884018a8b,Namespace:kube-system,Attempt:0,}" Jan 30 19:16:10.685915 systemd[1]: systemd-hostnamed.service: Deactivated successfully. 
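The FlexVolume errors above (and repeated later in this log) all come from the same condition: the plugin prober looks for the nodeagent~uds driver binary under /opt/libexec/kubernetes/kubelet-plugins/volume/exec/, the binary is not installed, so the driver call yields empty output, and unmarshalling an empty string fails with "unexpected end of JSON input". A small Go sketch, not kubelet code, showing both halves of that failure:

```go
package main

import (
	"encoding/json"
	"fmt"
	"os/exec"
)

func main() {
	// 1. The driver binary is missing, so running it fails and produces no output.
	out, err := exec.Command(
		"/opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds", "init",
	).Output()
	fmt.Printf("exec error: %v, output: %q\n", err, out)

	// 2. Unmarshalling the resulting empty output reproduces the logged JSON error.
	var status map[string]interface{}
	if err := json.Unmarshal(out, &status); err != nil {
		fmt.Println("unmarshal error:", err) // unexpected end of JSON input
	}
}
```

As the log shows, the kubelet keeps running regardless; the messages simply recur each time the plugin directory is re-probed.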
Jan 30 19:16:11.138155 kubelet[1905]: E0130 19:16:11.138016 1905 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Jan 30 19:16:11.250014 containerd[1497]: time="2025-01-30T19:16:11.249917769Z" level=info msg="ImageCreate event name:\"registry.k8s.io/pause:3.8\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"} labels:{key:\"io.cri-containerd.pinned\" value:\"pinned\"}" Jan 30 19:16:11.252230 containerd[1497]: time="2025-01-30T19:16:11.252185366Z" level=info msg="stop pulling image registry.k8s.io/pause:3.8: active requests=0, bytes read=0" Jan 30 19:16:11.253014 containerd[1497]: time="2025-01-30T19:16:11.252960913Z" level=info msg="ImageUpdate event name:\"registry.k8s.io/pause:3.8\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"} labels:{key:\"io.cri-containerd.pinned\" value:\"pinned\"}" Jan 30 19:16:11.253970 containerd[1497]: time="2025-01-30T19:16:11.253940490Z" level=info msg="stop pulling image registry.k8s.io/pause:3.8: active requests=0, bytes read=312064" Jan 30 19:16:11.255077 containerd[1497]: time="2025-01-30T19:16:11.255008166Z" level=info msg="ImageCreate event name:\"sha256:4873874c08efc72e9729683a83ffbb7502ee729e9a5ac097723806ea7fa13517\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"} labels:{key:\"io.cri-containerd.pinned\" value:\"pinned\"}" Jan 30 19:16:11.259638 containerd[1497]: time="2025-01-30T19:16:11.259591668Z" level=info msg="ImageCreate event name:\"registry.k8s.io/pause@sha256:9001185023633d17a2f98ff69b6ff2615b8ea02a825adffa40422f51dfdcde9d\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"} labels:{key:\"io.cri-containerd.pinned\" value:\"pinned\"}" Jan 30 19:16:11.261242 containerd[1497]: time="2025-01-30T19:16:11.260940104Z" level=info msg="Pulled image \"registry.k8s.io/pause:3.8\" with image id \"sha256:4873874c08efc72e9729683a83ffbb7502ee729e9a5ac097723806ea7fa13517\", repo tag \"registry.k8s.io/pause:3.8\", repo digest \"registry.k8s.io/pause@sha256:9001185023633d17a2f98ff69b6ff2615b8ea02a825adffa40422f51dfdcde9d\", size \"311286\" in 779.724354ms" Jan 30 19:16:11.263265 containerd[1497]: time="2025-01-30T19:16:11.263095786Z" level=info msg="Pulled image \"registry.k8s.io/pause:3.8\" with image id \"sha256:4873874c08efc72e9729683a83ffbb7502ee729e9a5ac097723806ea7fa13517\", repo tag \"registry.k8s.io/pause:3.8\", repo digest \"registry.k8s.io/pause@sha256:9001185023633d17a2f98ff69b6ff2615b8ea02a825adffa40422f51dfdcde9d\", size \"311286\" in 789.243078ms" Jan 30 19:16:11.282213 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount2563079910.mount: Deactivated successfully. Jan 30 19:16:11.441700 containerd[1497]: time="2025-01-30T19:16:11.441065433Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Jan 30 19:16:11.441700 containerd[1497]: time="2025-01-30T19:16:11.441322128Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Jan 30 19:16:11.444115 containerd[1497]: time="2025-01-30T19:16:11.444050089Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Jan 30 19:16:11.444806 containerd[1497]: time="2025-01-30T19:16:11.444564883Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." 
runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Jan 30 19:16:11.447642 containerd[1497]: time="2025-01-30T19:16:11.445078222Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Jan 30 19:16:11.447642 containerd[1497]: time="2025-01-30T19:16:11.446299740Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Jan 30 19:16:11.447642 containerd[1497]: time="2025-01-30T19:16:11.446361315Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Jan 30 19:16:11.455160 containerd[1497]: time="2025-01-30T19:16:11.453395744Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Jan 30 19:16:11.553722 systemd[1]: Started cri-containerd-01959a7ec741b30561a701be1d18a18696230228c08bb373cccd2b647264816c.scope - libcontainer container 01959a7ec741b30561a701be1d18a18696230228c08bb373cccd2b647264816c. Jan 30 19:16:11.562688 systemd[1]: Started cri-containerd-a02e06fd599113815222b861417df1039b2047a555d6130c52dc1ac2c2baa618.scope - libcontainer container a02e06fd599113815222b861417df1039b2047a555d6130c52dc1ac2c2baa618. Jan 30 19:16:11.605764 containerd[1497]: time="2025-01-30T19:16:11.605318089Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-proxy-s2kpr,Uid:c1366b74-db2a-4ec6-a0be-2a6884018a8b,Namespace:kube-system,Attempt:0,} returns sandbox id \"01959a7ec741b30561a701be1d18a18696230228c08bb373cccd2b647264816c\"" Jan 30 19:16:11.612352 containerd[1497]: time="2025-01-30T19:16:11.612315058Z" level=info msg="PullImage \"registry.k8s.io/kube-proxy:v1.32.1\"" Jan 30 19:16:11.619412 containerd[1497]: time="2025-01-30T19:16:11.619286541Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-node-jvtg9,Uid:e920cc34-75b8-4e4c-8418-91e5be3161da,Namespace:calico-system,Attempt:0,} returns sandbox id \"a02e06fd599113815222b861417df1039b2047a555d6130c52dc1ac2c2baa618\"" Jan 30 19:16:12.139245 kubelet[1905]: E0130 19:16:12.139125 1905 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Jan 30 19:16:12.269035 kubelet[1905]: E0130 19:16:12.268462 1905 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" pod="calico-system/csi-node-driver-n5m8t" podUID="e47b55f6-c6c0-4e43-a8ed-5ed724f4ad91" Jan 30 19:16:12.282847 systemd[1]: run-containerd-runc-k8s.io-a02e06fd599113815222b861417df1039b2047a555d6130c52dc1ac2c2baa618-runc.TMh21n.mount: Deactivated successfully. Jan 30 19:16:13.405646 systemd-resolved[1384]: Clock change detected. Flushing caches. Jan 30 19:16:13.406040 systemd-timesyncd[1399]: Contacted time server [2a05:b400:c::123:60]:123 (2.flatcar.pool.ntp.org). Jan 30 19:16:13.406126 systemd-timesyncd[1399]: Initial clock synchronization to Thu 2025-01-30 19:16:13.405559 UTC. Jan 30 19:16:14.123272 kubelet[1905]: E0130 19:16:14.123187 1905 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Jan 30 19:16:14.350439 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount2260782454.mount: Deactivated successfully. 
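The "Unable to read config path ... /etc/kubernetes/manifests" entries recur roughly once per second because the kubelet was started with a static pod path that has not been created on this worker node; the file source re-checks the directory and ignores it while it is missing. A minimal sketch, assuming nothing beyond what the log shows, of that kind of existence check (creating the directory would silence the message on nodes that never run static pods):

```go
package main

import (
	"fmt"
	"os"
)

func main() {
	// Static pod manifest directory the kubelet was configured with (from the log).
	const path = "/etc/kubernetes/manifests"
	if info, err := os.Stat(path); err != nil {
		if os.IsNotExist(err) {
			fmt.Println("path does not exist, ignoring") // matches the kubelet's message
		} else {
			fmt.Println("stat failed:", err)
		}
	} else {
		fmt.Println("static pod dir present, is dir:", info.IsDir())
	}
}
```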
Jan 30 19:16:15.124998 kubelet[1905]: E0130 19:16:15.124867 1905 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Jan 30 19:16:15.130069 containerd[1497]: time="2025-01-30T19:16:15.130017403Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-proxy:v1.32.1\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jan 30 19:16:15.131220 containerd[1497]: time="2025-01-30T19:16:15.131142535Z" level=info msg="stop pulling image registry.k8s.io/kube-proxy:v1.32.1: active requests=0, bytes read=30909474" Jan 30 19:16:15.132271 containerd[1497]: time="2025-01-30T19:16:15.132191075Z" level=info msg="ImageCreate event name:\"sha256:e29f9c7391fd92d96bc72026fc755b0f9589536e36ecd7102161f1ded087897a\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jan 30 19:16:15.136662 containerd[1497]: time="2025-01-30T19:16:15.136010639Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-proxy@sha256:0244651801747edf2368222f93a7d17cba6e668a890db72532d6b67a7e06dca5\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jan 30 19:16:15.137540 containerd[1497]: time="2025-01-30T19:16:15.137323965Z" level=info msg="Pulled image \"registry.k8s.io/kube-proxy:v1.32.1\" with image id \"sha256:e29f9c7391fd92d96bc72026fc755b0f9589536e36ecd7102161f1ded087897a\", repo tag \"registry.k8s.io/kube-proxy:v1.32.1\", repo digest \"registry.k8s.io/kube-proxy@sha256:0244651801747edf2368222f93a7d17cba6e668a890db72532d6b67a7e06dca5\", size \"30908485\" in 2.541151058s" Jan 30 19:16:15.137540 containerd[1497]: time="2025-01-30T19:16:15.137377014Z" level=info msg="PullImage \"registry.k8s.io/kube-proxy:v1.32.1\" returns image reference \"sha256:e29f9c7391fd92d96bc72026fc755b0f9589536e36ecd7102161f1ded087897a\"" Jan 30 19:16:15.139551 containerd[1497]: time="2025-01-30T19:16:15.139516606Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/pod2daemon-flexvol:v3.29.1\"" Jan 30 19:16:15.143014 containerd[1497]: time="2025-01-30T19:16:15.142976538Z" level=info msg="CreateContainer within sandbox \"01959a7ec741b30561a701be1d18a18696230228c08bb373cccd2b647264816c\" for container &ContainerMetadata{Name:kube-proxy,Attempt:0,}" Jan 30 19:16:15.162009 containerd[1497]: time="2025-01-30T19:16:15.161817732Z" level=info msg="CreateContainer within sandbox \"01959a7ec741b30561a701be1d18a18696230228c08bb373cccd2b647264816c\" for &ContainerMetadata{Name:kube-proxy,Attempt:0,} returns container id \"3d7c0f2652ade73485bdc4f14faca2769ad3f659dcbc28cd38e1aa69c7ed26be\"" Jan 30 19:16:15.163310 containerd[1497]: time="2025-01-30T19:16:15.163210367Z" level=info msg="StartContainer for \"3d7c0f2652ade73485bdc4f14faca2769ad3f659dcbc28cd38e1aa69c7ed26be\"" Jan 30 19:16:15.220573 systemd[1]: Started cri-containerd-3d7c0f2652ade73485bdc4f14faca2769ad3f659dcbc28cd38e1aa69c7ed26be.scope - libcontainer container 3d7c0f2652ade73485bdc4f14faca2769ad3f659dcbc28cd38e1aa69c7ed26be. 
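One detail worth noting in the pull above: containerd reports the kube-proxy image as pulled "in 2.541151058s", while the wall-clock gap between the PullImage entry (19:16:11.612) and the Pulled entry (19:16:15.137) is about 3.5 s. The missing ~1 s is consistent with the clock synchronization systemd-timesyncd applied at 19:16:13, since the reported duration is presumably measured monotonically while the log timestamps follow the (stepped) wall clock. A quick Go check of that arithmetic, using only the timestamps logged above:

```go
package main

import (
	"fmt"
	"time"
)

func main() {
	// Timestamps copied from the containerd entries above.
	start, _ := time.Parse(time.RFC3339Nano, "2025-01-30T19:16:11.612315058Z") // PullImage kube-proxy:v1.32.1
	done, _ := time.Parse(time.RFC3339Nano, "2025-01-30T19:16:15.137323965Z")  // Pulled ... in 2.541151058s

	wall := done.Sub(start)
	reported := 2541151058 * time.Nanosecond

	fmt.Println("wall-clock gap:", wall)         // ~3.525s
	fmt.Println("reported pull time:", reported) // 2.541151058s
	fmt.Println("difference:", wall-reported)    // ~0.98s, matching the clock step at 19:16:13
}
```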
Jan 30 19:16:15.257283 kubelet[1905]: E0130 19:16:15.255160 1905 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" pod="calico-system/csi-node-driver-n5m8t" podUID="e47b55f6-c6c0-4e43-a8ed-5ed724f4ad91" Jan 30 19:16:15.270200 containerd[1497]: time="2025-01-30T19:16:15.270128109Z" level=info msg="StartContainer for \"3d7c0f2652ade73485bdc4f14faca2769ad3f659dcbc28cd38e1aa69c7ed26be\" returns successfully" Jan 30 19:16:15.293007 kubelet[1905]: I0130 19:16:15.292913 1905 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/kube-proxy-s2kpr" podStartSLOduration=3.7490071929999997 podStartE2EDuration="6.292850766s" podCreationTimestamp="2025-01-30 19:16:09 +0000 UTC" firstStartedPulling="2025-01-30 19:16:11.611310741 +0000 UTC m=+4.736856704" lastFinishedPulling="2025-01-30 19:16:15.138959253 +0000 UTC m=+7.280700277" observedRunningTime="2025-01-30 19:16:15.2926776 +0000 UTC m=+7.434418644" watchObservedRunningTime="2025-01-30 19:16:15.292850766 +0000 UTC m=+7.434591799" Jan 30 19:16:15.369204 kubelet[1905]: E0130 19:16:15.369102 1905 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 30 19:16:15.369204 kubelet[1905]: W0130 19:16:15.369154 1905 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 30 19:16:15.369204 kubelet[1905]: E0130 19:16:15.369205 1905 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jan 30 19:16:15.369570 kubelet[1905]: E0130 19:16:15.369537 1905 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 30 19:16:15.369570 kubelet[1905]: W0130 19:16:15.369559 1905 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 30 19:16:15.369682 kubelet[1905]: E0130 19:16:15.369576 1905 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jan 30 19:16:15.369889 kubelet[1905]: E0130 19:16:15.369860 1905 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 30 19:16:15.369889 kubelet[1905]: W0130 19:16:15.369883 1905 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 30 19:16:15.370010 kubelet[1905]: E0130 19:16:15.369910 1905 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Jan 30 19:16:15.370656 kubelet[1905]: E0130 19:16:15.370368 1905 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 30 19:16:15.370656 kubelet[1905]: W0130 19:16:15.370389 1905 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 30 19:16:15.370656 kubelet[1905]: E0130 19:16:15.370406 1905 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jan 30 19:16:15.370859 kubelet[1905]: E0130 19:16:15.370738 1905 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 30 19:16:15.370859 kubelet[1905]: W0130 19:16:15.370752 1905 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 30 19:16:15.370859 kubelet[1905]: E0130 19:16:15.370779 1905 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jan 30 19:16:15.371312 kubelet[1905]: E0130 19:16:15.371064 1905 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 30 19:16:15.371312 kubelet[1905]: W0130 19:16:15.371088 1905 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 30 19:16:15.371312 kubelet[1905]: E0130 19:16:15.371102 1905 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jan 30 19:16:15.371312 kubelet[1905]: E0130 19:16:15.371387 1905 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 30 19:16:15.371312 kubelet[1905]: W0130 19:16:15.371402 1905 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 30 19:16:15.371312 kubelet[1905]: E0130 19:16:15.371416 1905 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jan 30 19:16:15.372178 kubelet[1905]: E0130 19:16:15.371671 1905 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 30 19:16:15.372178 kubelet[1905]: W0130 19:16:15.371687 1905 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 30 19:16:15.372178 kubelet[1905]: E0130 19:16:15.371702 1905 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Jan 30 19:16:15.372178 kubelet[1905]: E0130 19:16:15.371974 1905 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 30 19:16:15.372178 kubelet[1905]: W0130 19:16:15.371989 1905 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 30 19:16:15.372178 kubelet[1905]: E0130 19:16:15.372003 1905 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jan 30 19:16:15.373317 kubelet[1905]: E0130 19:16:15.372274 1905 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 30 19:16:15.373317 kubelet[1905]: W0130 19:16:15.372288 1905 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 30 19:16:15.373317 kubelet[1905]: E0130 19:16:15.372305 1905 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jan 30 19:16:15.373317 kubelet[1905]: E0130 19:16:15.372572 1905 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 30 19:16:15.373317 kubelet[1905]: W0130 19:16:15.372586 1905 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 30 19:16:15.373317 kubelet[1905]: E0130 19:16:15.372600 1905 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jan 30 19:16:15.373317 kubelet[1905]: E0130 19:16:15.372864 1905 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 30 19:16:15.373317 kubelet[1905]: W0130 19:16:15.372878 1905 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 30 19:16:15.373317 kubelet[1905]: E0130 19:16:15.372893 1905 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jan 30 19:16:15.373317 kubelet[1905]: E0130 19:16:15.373152 1905 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 30 19:16:15.373744 kubelet[1905]: W0130 19:16:15.373166 1905 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 30 19:16:15.373744 kubelet[1905]: E0130 19:16:15.373181 1905 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Jan 30 19:16:15.374799 kubelet[1905]: E0130 19:16:15.373975 1905 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 30 19:16:15.374799 kubelet[1905]: W0130 19:16:15.373997 1905 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 30 19:16:15.374799 kubelet[1905]: E0130 19:16:15.374014 1905 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jan 30 19:16:15.374799 kubelet[1905]: E0130 19:16:15.374394 1905 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 30 19:16:15.374799 kubelet[1905]: W0130 19:16:15.374408 1905 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 30 19:16:15.374799 kubelet[1905]: E0130 19:16:15.374423 1905 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jan 30 19:16:15.374799 kubelet[1905]: E0130 19:16:15.374678 1905 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 30 19:16:15.374799 kubelet[1905]: W0130 19:16:15.374691 1905 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 30 19:16:15.374799 kubelet[1905]: E0130 19:16:15.374720 1905 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jan 30 19:16:15.375303 kubelet[1905]: E0130 19:16:15.374990 1905 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 30 19:16:15.375303 kubelet[1905]: W0130 19:16:15.375004 1905 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 30 19:16:15.375303 kubelet[1905]: E0130 19:16:15.375019 1905 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jan 30 19:16:15.378091 kubelet[1905]: E0130 19:16:15.376924 1905 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 30 19:16:15.378091 kubelet[1905]: W0130 19:16:15.376948 1905 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 30 19:16:15.378091 kubelet[1905]: E0130 19:16:15.376966 1905 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Jan 30 19:16:15.378091 kubelet[1905]: E0130 19:16:15.377253 1905 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 30 19:16:15.378091 kubelet[1905]: W0130 19:16:15.377269 1905 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 30 19:16:15.378091 kubelet[1905]: E0130 19:16:15.377283 1905 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jan 30 19:16:15.378091 kubelet[1905]: E0130 19:16:15.377554 1905 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 30 19:16:15.378091 kubelet[1905]: W0130 19:16:15.377568 1905 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 30 19:16:15.378091 kubelet[1905]: E0130 19:16:15.377581 1905 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jan 30 19:16:15.387141 kubelet[1905]: E0130 19:16:15.387107 1905 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 30 19:16:15.387141 kubelet[1905]: W0130 19:16:15.387132 1905 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 30 19:16:15.387318 kubelet[1905]: E0130 19:16:15.387161 1905 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jan 30 19:16:15.387722 kubelet[1905]: E0130 19:16:15.387575 1905 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 30 19:16:15.387722 kubelet[1905]: W0130 19:16:15.387598 1905 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 30 19:16:15.387722 kubelet[1905]: E0130 19:16:15.387633 1905 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jan 30 19:16:15.388281 kubelet[1905]: E0130 19:16:15.387961 1905 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 30 19:16:15.388281 kubelet[1905]: W0130 19:16:15.387988 1905 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 30 19:16:15.388281 kubelet[1905]: E0130 19:16:15.388014 1905 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Jan 30 19:16:15.388457 kubelet[1905]: E0130 19:16:15.388357 1905 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 30 19:16:15.388457 kubelet[1905]: W0130 19:16:15.388372 1905 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 30 19:16:15.388457 kubelet[1905]: E0130 19:16:15.388404 1905 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jan 30 19:16:15.388989 kubelet[1905]: E0130 19:16:15.388711 1905 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 30 19:16:15.388989 kubelet[1905]: W0130 19:16:15.388732 1905 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 30 19:16:15.388989 kubelet[1905]: E0130 19:16:15.388765 1905 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jan 30 19:16:15.389153 kubelet[1905]: E0130 19:16:15.389108 1905 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 30 19:16:15.389153 kubelet[1905]: W0130 19:16:15.389122 1905 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 30 19:16:15.389262 kubelet[1905]: E0130 19:16:15.389154 1905 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jan 30 19:16:15.390189 kubelet[1905]: E0130 19:16:15.389738 1905 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 30 19:16:15.390189 kubelet[1905]: W0130 19:16:15.389760 1905 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 30 19:16:15.390189 kubelet[1905]: E0130 19:16:15.389852 1905 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jan 30 19:16:15.390189 kubelet[1905]: E0130 19:16:15.390097 1905 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 30 19:16:15.390189 kubelet[1905]: W0130 19:16:15.390111 1905 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 30 19:16:15.390189 kubelet[1905]: E0130 19:16:15.390126 1905 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Jan 30 19:16:15.390508 kubelet[1905]: E0130 19:16:15.390413 1905 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 30 19:16:15.390508 kubelet[1905]: W0130 19:16:15.390428 1905 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 30 19:16:15.390508 kubelet[1905]: E0130 19:16:15.390443 1905 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jan 30 19:16:15.391531 kubelet[1905]: E0130 19:16:15.390707 1905 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 30 19:16:15.391531 kubelet[1905]: W0130 19:16:15.390728 1905 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 30 19:16:15.391531 kubelet[1905]: E0130 19:16:15.390750 1905 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jan 30 19:16:15.391531 kubelet[1905]: E0130 19:16:15.391042 1905 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 30 19:16:15.391531 kubelet[1905]: W0130 19:16:15.391056 1905 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 30 19:16:15.391531 kubelet[1905]: E0130 19:16:15.391070 1905 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jan 30 19:16:15.391853 kubelet[1905]: E0130 19:16:15.391617 1905 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 30 19:16:15.391853 kubelet[1905]: W0130 19:16:15.391631 1905 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 30 19:16:15.391853 kubelet[1905]: E0130 19:16:15.391647 1905 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Jan 30 19:16:16.126141 kubelet[1905]: E0130 19:16:16.126057 1905 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Jan 30 19:16:16.285225 kubelet[1905]: E0130 19:16:16.285190 1905 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 30 19:16:16.285225 kubelet[1905]: W0130 19:16:16.285219 1905 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 30 19:16:16.285588 kubelet[1905]: E0130 19:16:16.285284 1905 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jan 30 19:16:16.285719 kubelet[1905]: E0130 19:16:16.285636 1905 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 30 19:16:16.285719 kubelet[1905]: W0130 19:16:16.285650 1905 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 30 19:16:16.285719 kubelet[1905]: E0130 19:16:16.285664 1905 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jan 30 19:16:16.285980 kubelet[1905]: E0130 19:16:16.285922 1905 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 30 19:16:16.285980 kubelet[1905]: W0130 19:16:16.285936 1905 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 30 19:16:16.285980 kubelet[1905]: E0130 19:16:16.285958 1905 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jan 30 19:16:16.286275 kubelet[1905]: E0130 19:16:16.286234 1905 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 30 19:16:16.286275 kubelet[1905]: W0130 19:16:16.286266 1905 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 30 19:16:16.286433 kubelet[1905]: E0130 19:16:16.286281 1905 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Jan 30 19:16:16.286584 kubelet[1905]: E0130 19:16:16.286561 1905 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 30 19:16:16.286584 kubelet[1905]: W0130 19:16:16.286581 1905 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 30 19:16:16.286723 kubelet[1905]: E0130 19:16:16.286596 1905 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jan 30 19:16:16.286859 kubelet[1905]: E0130 19:16:16.286839 1905 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 30 19:16:16.286859 kubelet[1905]: W0130 19:16:16.286853 1905 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 30 19:16:16.286982 kubelet[1905]: E0130 19:16:16.286868 1905 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jan 30 19:16:16.287126 kubelet[1905]: E0130 19:16:16.287102 1905 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 30 19:16:16.287126 kubelet[1905]: W0130 19:16:16.287116 1905 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 30 19:16:16.287292 kubelet[1905]: E0130 19:16:16.287130 1905 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jan 30 19:16:16.287439 kubelet[1905]: E0130 19:16:16.287418 1905 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 30 19:16:16.287439 kubelet[1905]: W0130 19:16:16.287438 1905 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 30 19:16:16.287571 kubelet[1905]: E0130 19:16:16.287453 1905 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jan 30 19:16:16.287736 kubelet[1905]: E0130 19:16:16.287718 1905 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 30 19:16:16.287799 kubelet[1905]: W0130 19:16:16.287736 1905 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 30 19:16:16.287799 kubelet[1905]: E0130 19:16:16.287772 1905 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Jan 30 19:16:16.288046 kubelet[1905]: E0130 19:16:16.288028 1905 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 30 19:16:16.288046 kubelet[1905]: W0130 19:16:16.288046 1905 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 30 19:16:16.288190 kubelet[1905]: E0130 19:16:16.288061 1905 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jan 30 19:16:16.288375 kubelet[1905]: E0130 19:16:16.288352 1905 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 30 19:16:16.288375 kubelet[1905]: W0130 19:16:16.288365 1905 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 30 19:16:16.288551 kubelet[1905]: E0130 19:16:16.288392 1905 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jan 30 19:16:16.288686 kubelet[1905]: E0130 19:16:16.288666 1905 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 30 19:16:16.288765 kubelet[1905]: W0130 19:16:16.288687 1905 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 30 19:16:16.288765 kubelet[1905]: E0130 19:16:16.288711 1905 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jan 30 19:16:16.289020 kubelet[1905]: E0130 19:16:16.288983 1905 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 30 19:16:16.289020 kubelet[1905]: W0130 19:16:16.289013 1905 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 30 19:16:16.289149 kubelet[1905]: E0130 19:16:16.289028 1905 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jan 30 19:16:16.289351 kubelet[1905]: E0130 19:16:16.289322 1905 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 30 19:16:16.289351 kubelet[1905]: W0130 19:16:16.289341 1905 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 30 19:16:16.289489 kubelet[1905]: E0130 19:16:16.289365 1905 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Jan 30 19:16:16.289618 kubelet[1905]: E0130 19:16:16.289598 1905 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 30 19:16:16.289618 kubelet[1905]: W0130 19:16:16.289617 1905 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 30 19:16:16.289759 kubelet[1905]: E0130 19:16:16.289631 1905 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jan 30 19:16:16.289901 kubelet[1905]: E0130 19:16:16.289883 1905 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 30 19:16:16.289901 kubelet[1905]: W0130 19:16:16.289901 1905 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 30 19:16:16.290004 kubelet[1905]: E0130 19:16:16.289917 1905 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jan 30 19:16:16.290188 kubelet[1905]: E0130 19:16:16.290170 1905 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 30 19:16:16.290188 kubelet[1905]: W0130 19:16:16.290188 1905 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 30 19:16:16.290346 kubelet[1905]: E0130 19:16:16.290203 1905 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jan 30 19:16:16.290511 kubelet[1905]: E0130 19:16:16.290491 1905 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 30 19:16:16.290511 kubelet[1905]: W0130 19:16:16.290511 1905 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 30 19:16:16.290614 kubelet[1905]: E0130 19:16:16.290525 1905 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jan 30 19:16:16.290813 kubelet[1905]: E0130 19:16:16.290795 1905 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 30 19:16:16.290813 kubelet[1905]: W0130 19:16:16.290814 1905 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 30 19:16:16.290933 kubelet[1905]: E0130 19:16:16.290828 1905 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Jan 30 19:16:16.291104 kubelet[1905]: E0130 19:16:16.291085 1905 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 30 19:16:16.291104 kubelet[1905]: W0130 19:16:16.291104 1905 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 30 19:16:16.291194 kubelet[1905]: E0130 19:16:16.291119 1905 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jan 30 19:16:16.294662 kubelet[1905]: E0130 19:16:16.294638 1905 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 30 19:16:16.294963 kubelet[1905]: W0130 19:16:16.294761 1905 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 30 19:16:16.294963 kubelet[1905]: E0130 19:16:16.294786 1905 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jan 30 19:16:16.295206 kubelet[1905]: E0130 19:16:16.295186 1905 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 30 19:16:16.295551 kubelet[1905]: W0130 19:16:16.295355 1905 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 30 19:16:16.295551 kubelet[1905]: E0130 19:16:16.295396 1905 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jan 30 19:16:16.295756 kubelet[1905]: E0130 19:16:16.295735 1905 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 30 19:16:16.295868 kubelet[1905]: W0130 19:16:16.295836 1905 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 30 19:16:16.295995 kubelet[1905]: E0130 19:16:16.295976 1905 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jan 30 19:16:16.296411 kubelet[1905]: E0130 19:16:16.296375 1905 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 30 19:16:16.296411 kubelet[1905]: W0130 19:16:16.296402 1905 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 30 19:16:16.296535 kubelet[1905]: E0130 19:16:16.296429 1905 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Jan 30 19:16:16.296699 kubelet[1905]: E0130 19:16:16.296677 1905 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 30 19:16:16.296699 kubelet[1905]: W0130 19:16:16.296698 1905 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 30 19:16:16.296800 kubelet[1905]: E0130 19:16:16.296733 1905 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jan 30 19:16:16.297038 kubelet[1905]: E0130 19:16:16.297019 1905 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 30 19:16:16.297102 kubelet[1905]: W0130 19:16:16.297040 1905 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 30 19:16:16.297102 kubelet[1905]: E0130 19:16:16.297085 1905 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jan 30 19:16:16.297492 kubelet[1905]: E0130 19:16:16.297472 1905 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 30 19:16:16.297492 kubelet[1905]: W0130 19:16:16.297491 1905 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 30 19:16:16.297616 kubelet[1905]: E0130 19:16:16.297594 1905 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jan 30 19:16:16.298096 kubelet[1905]: E0130 19:16:16.298065 1905 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 30 19:16:16.298096 kubelet[1905]: W0130 19:16:16.298089 1905 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 30 19:16:16.298215 kubelet[1905]: E0130 19:16:16.298113 1905 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jan 30 19:16:16.298478 kubelet[1905]: E0130 19:16:16.298449 1905 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 30 19:16:16.298478 kubelet[1905]: W0130 19:16:16.298470 1905 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 30 19:16:16.298603 kubelet[1905]: E0130 19:16:16.298494 1905 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Jan 30 19:16:16.298828 kubelet[1905]: E0130 19:16:16.298807 1905 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 30 19:16:16.298828 kubelet[1905]: W0130 19:16:16.298828 1905 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 30 19:16:16.298931 kubelet[1905]: E0130 19:16:16.298851 1905 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jan 30 19:16:16.299469 kubelet[1905]: E0130 19:16:16.299280 1905 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 30 19:16:16.299469 kubelet[1905]: W0130 19:16:16.299311 1905 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 30 19:16:16.299469 kubelet[1905]: E0130 19:16:16.299339 1905 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jan 30 19:16:16.299831 kubelet[1905]: E0130 19:16:16.299764 1905 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 30 19:16:16.299831 kubelet[1905]: W0130 19:16:16.299784 1905 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 30 19:16:16.299831 kubelet[1905]: E0130 19:16:16.299799 1905 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jan 30 19:16:16.795872 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount3532234863.mount: Deactivated successfully. 
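The repeated driver-call failures above come from kubelet probing the FlexVolume plugin directory: it execs /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds with the single argument "init" and tries to decode the driver's stdout as JSON; the binary is not installed yet, so the output is empty and the JSON decode fails with "unexpected end of JSON input". The following is a minimal Go sketch of that probe, assuming the documented FlexVolume convention that a driver answers "init" with a JSON status object such as {"status":"Success","capabilities":{"attach":false}}; it is an illustration, not kubelet's actual driver-call code.

package main

import (
	"encoding/json"
	"fmt"
	"os/exec"
)

// driverStatus mirrors the JSON a FlexVolume driver is expected to print for "init".
type driverStatus struct {
	Status       string          `json:"status"`
	Message      string          `json:"message,omitempty"`
	Capabilities map[string]bool `json:"capabilities,omitempty"`
}

func probeDriver(path string) (*driverStatus, error) {
	out, err := exec.Command(path, "init").CombinedOutput()
	if err != nil {
		// A missing driver binary fails here; kubelet logs it as a failed driver call.
		return nil, fmt.Errorf("driver call failed: %w, output: %q", err, out)
	}
	var st driverStatus
	if err := json.Unmarshal(out, &st); err != nil {
		// Empty output yields exactly "unexpected end of JSON input".
		return nil, fmt.Errorf("failed to unmarshal output: %w", err)
	}
	return &st, nil
}

func main() {
	const uds = "/opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds"
	if st, err := probeDriver(uds); err != nil {
		fmt.Println("probe error:", err)
	} else {
		fmt.Println("driver initialized:", st.Status)
	}
}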
Jan 30 19:16:16.936421 containerd[1497]: time="2025-01-30T19:16:16.936361755Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/pod2daemon-flexvol:v3.29.1\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jan 30 19:16:16.938003 containerd[1497]: time="2025-01-30T19:16:16.937952623Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/pod2daemon-flexvol:v3.29.1: active requests=0, bytes read=6855343" Jan 30 19:16:16.938374 containerd[1497]: time="2025-01-30T19:16:16.938299865Z" level=info msg="ImageCreate event name:\"sha256:2b7452b763ec8833ca0386ada5fd066e552a9b3b02b8538a5e34cc3d6d3840a6\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jan 30 19:16:16.941054 containerd[1497]: time="2025-01-30T19:16:16.941014048Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/pod2daemon-flexvol@sha256:a63f8b4ff531912d12d143664eb263fdbc6cd7b3ff4aa777dfb6e318a090462c\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jan 30 19:16:16.942512 containerd[1497]: time="2025-01-30T19:16:16.942358835Z" level=info msg="Pulled image \"ghcr.io/flatcar/calico/pod2daemon-flexvol:v3.29.1\" with image id \"sha256:2b7452b763ec8833ca0386ada5fd066e552a9b3b02b8538a5e34cc3d6d3840a6\", repo tag \"ghcr.io/flatcar/calico/pod2daemon-flexvol:v3.29.1\", repo digest \"ghcr.io/flatcar/calico/pod2daemon-flexvol@sha256:a63f8b4ff531912d12d143664eb263fdbc6cd7b3ff4aa777dfb6e318a090462c\", size \"6855165\" in 1.802799548s" Jan 30 19:16:16.942512 containerd[1497]: time="2025-01-30T19:16:16.942403827Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/pod2daemon-flexvol:v3.29.1\" returns image reference \"sha256:2b7452b763ec8833ca0386ada5fd066e552a9b3b02b8538a5e34cc3d6d3840a6\"" Jan 30 19:16:16.945007 containerd[1497]: time="2025-01-30T19:16:16.944964714Z" level=info msg="CreateContainer within sandbox \"a02e06fd599113815222b861417df1039b2047a555d6130c52dc1ac2c2baa618\" for container &ContainerMetadata{Name:flexvol-driver,Attempt:0,}" Jan 30 19:16:16.981778 containerd[1497]: time="2025-01-30T19:16:16.981712213Z" level=info msg="CreateContainer within sandbox \"a02e06fd599113815222b861417df1039b2047a555d6130c52dc1ac2c2baa618\" for &ContainerMetadata{Name:flexvol-driver,Attempt:0,} returns container id \"8ea7d2ec431b3dca713d8f2b7175a42d16c1566156c3ba691f318b5101e8d4d2\"" Jan 30 19:16:16.982859 containerd[1497]: time="2025-01-30T19:16:16.982486831Z" level=info msg="StartContainer for \"8ea7d2ec431b3dca713d8f2b7175a42d16c1566156c3ba691f318b5101e8d4d2\"" Jan 30 19:16:17.023450 systemd[1]: Started cri-containerd-8ea7d2ec431b3dca713d8f2b7175a42d16c1566156c3ba691f318b5101e8d4d2.scope - libcontainer container 8ea7d2ec431b3dca713d8f2b7175a42d16c1566156c3ba691f318b5101e8d4d2. Jan 30 19:16:17.061190 containerd[1497]: time="2025-01-30T19:16:17.061038034Z" level=info msg="StartContainer for \"8ea7d2ec431b3dca713d8f2b7175a42d16c1566156c3ba691f318b5101e8d4d2\" returns successfully" Jan 30 19:16:17.079747 systemd[1]: cri-containerd-8ea7d2ec431b3dca713d8f2b7175a42d16c1566156c3ba691f318b5101e8d4d2.scope: Deactivated successfully. 
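The flexvol-driver container started above (image ghcr.io/flatcar/calico/pod2daemon-flexvol) runs briefly and then its scope is deactivated; in Calico its role is to drop the FlexVolume "uds" driver into the host plugin directory that kubelet was probing. A small hedged check of that outcome, assuming the driver lands at the nodeagent~uds path seen in the earlier errors:

package main

import (
	"fmt"
	"os"
)

func main() {
	// Path taken from the kubelet driver-call errors earlier in this log.
	const uds = "/opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds"

	info, err := os.Stat(uds)
	if err != nil {
		fmt.Println("driver not installed yet:", err)
		return
	}
	if info.Mode()&0o111 == 0 {
		fmt.Println("driver present but not executable:", info.Mode())
		return
	}
	fmt.Printf("driver installed: %s (%d bytes, mode %v)\n", uds, info.Size(), info.Mode())
}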
Jan 30 19:16:17.126798 kubelet[1905]: E0130 19:16:17.126634 1905 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Jan 30 19:16:17.252839 kubelet[1905]: E0130 19:16:17.252310 1905 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" pod="calico-system/csi-node-driver-n5m8t" podUID="e47b55f6-c6c0-4e43-a8ed-5ed724f4ad91" Jan 30 19:16:17.326499 containerd[1497]: time="2025-01-30T19:16:17.325924189Z" level=info msg="shim disconnected" id=8ea7d2ec431b3dca713d8f2b7175a42d16c1566156c3ba691f318b5101e8d4d2 namespace=k8s.io Jan 30 19:16:17.326499 containerd[1497]: time="2025-01-30T19:16:17.326042038Z" level=warning msg="cleaning up after shim disconnected" id=8ea7d2ec431b3dca713d8f2b7175a42d16c1566156c3ba691f318b5101e8d4d2 namespace=k8s.io Jan 30 19:16:17.326499 containerd[1497]: time="2025-01-30T19:16:17.326072575Z" level=info msg="cleaning up dead shim" namespace=k8s.io Jan 30 19:16:17.684091 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-8ea7d2ec431b3dca713d8f2b7175a42d16c1566156c3ba691f318b5101e8d4d2-rootfs.mount: Deactivated successfully. Jan 30 19:16:18.127094 kubelet[1905]: E0130 19:16:18.126994 1905 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Jan 30 19:16:18.291642 containerd[1497]: time="2025-01-30T19:16:18.291573501Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/cni:v3.29.1\"" Jan 30 19:16:19.127654 kubelet[1905]: E0130 19:16:19.127584 1905 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Jan 30 19:16:19.253796 kubelet[1905]: E0130 19:16:19.253726 1905 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" pod="calico-system/csi-node-driver-n5m8t" podUID="e47b55f6-c6c0-4e43-a8ed-5ed724f4ad91" Jan 30 19:16:20.128345 kubelet[1905]: E0130 19:16:20.128269 1905 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Jan 30 19:16:21.128937 kubelet[1905]: E0130 19:16:21.128863 1905 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Jan 30 19:16:21.253103 kubelet[1905]: E0130 19:16:21.252593 1905 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" pod="calico-system/csi-node-driver-n5m8t" podUID="e47b55f6-c6c0-4e43-a8ed-5ed724f4ad91" Jan 30 19:16:22.129298 kubelet[1905]: E0130 19:16:22.129235 1905 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Jan 30 19:16:23.130986 kubelet[1905]: E0130 19:16:23.130890 1905 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Jan 30 19:16:23.252277 kubelet[1905]: E0130 19:16:23.251921 1905 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin 
returns error: cni plugin not initialized" pod="calico-system/csi-node-driver-n5m8t" podUID="e47b55f6-c6c0-4e43-a8ed-5ed724f4ad91" Jan 30 19:16:24.032645 containerd[1497]: time="2025-01-30T19:16:24.032550423Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/cni:v3.29.1\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jan 30 19:16:24.034280 containerd[1497]: time="2025-01-30T19:16:24.034141174Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/cni:v3.29.1: active requests=0, bytes read=96154154" Jan 30 19:16:24.035377 containerd[1497]: time="2025-01-30T19:16:24.035306153Z" level=info msg="ImageCreate event name:\"sha256:7dd6ea186aba0d7a1791a79d426fe854527ca95192b26bbd19e8baf8373f7d0e\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jan 30 19:16:24.038427 containerd[1497]: time="2025-01-30T19:16:24.038391749Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/cni@sha256:21e759d51c90dfb34fc1397dc180dd3a3fb564c2b0580d2f61ffe108f2a3c94b\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jan 30 19:16:24.039716 containerd[1497]: time="2025-01-30T19:16:24.039499836Z" level=info msg="Pulled image \"ghcr.io/flatcar/calico/cni:v3.29.1\" with image id \"sha256:7dd6ea186aba0d7a1791a79d426fe854527ca95192b26bbd19e8baf8373f7d0e\", repo tag \"ghcr.io/flatcar/calico/cni:v3.29.1\", repo digest \"ghcr.io/flatcar/calico/cni@sha256:21e759d51c90dfb34fc1397dc180dd3a3fb564c2b0580d2f61ffe108f2a3c94b\", size \"97647238\" in 5.747865014s" Jan 30 19:16:24.039716 containerd[1497]: time="2025-01-30T19:16:24.039566541Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/cni:v3.29.1\" returns image reference \"sha256:7dd6ea186aba0d7a1791a79d426fe854527ca95192b26bbd19e8baf8373f7d0e\"" Jan 30 19:16:24.042858 containerd[1497]: time="2025-01-30T19:16:24.042801295Z" level=info msg="CreateContainer within sandbox \"a02e06fd599113815222b861417df1039b2047a555d6130c52dc1ac2c2baa618\" for container &ContainerMetadata{Name:install-cni,Attempt:0,}" Jan 30 19:16:24.063418 containerd[1497]: time="2025-01-30T19:16:24.063327026Z" level=info msg="CreateContainer within sandbox \"a02e06fd599113815222b861417df1039b2047a555d6130c52dc1ac2c2baa618\" for &ContainerMetadata{Name:install-cni,Attempt:0,} returns container id \"d460de1b71c9bfaefcc46794a8e55eaa95d8a6b8f1886f9d2677083eb798781b\"" Jan 30 19:16:24.064380 containerd[1497]: time="2025-01-30T19:16:24.064223057Z" level=info msg="StartContainer for \"d460de1b71c9bfaefcc46794a8e55eaa95d8a6b8f1886f9d2677083eb798781b\"" Jan 30 19:16:24.109515 systemd[1]: Started cri-containerd-d460de1b71c9bfaefcc46794a8e55eaa95d8a6b8f1886f9d2677083eb798781b.scope - libcontainer container d460de1b71c9bfaefcc46794a8e55eaa95d8a6b8f1886f9d2677083eb798781b. Jan 30 19:16:24.131094 kubelet[1905]: E0130 19:16:24.131034 1905 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Jan 30 19:16:24.151870 containerd[1497]: time="2025-01-30T19:16:24.151769635Z" level=info msg="StartContainer for \"d460de1b71c9bfaefcc46794a8e55eaa95d8a6b8f1886f9d2677083eb798781b\" returns successfully" Jan 30 19:16:24.306689 update_engine[1481]: I20250130 19:16:24.306504 1481 update_attempter.cc:509] Updating boot flags... 
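The install-cni container started above is what eventually writes the Calico network configuration into /etc/cni/net.d; until a loadable config appears there, the runtime keeps reporting "cni plugin not initialized", as in the pod_workers errors above and the reload error that follows. A minimal sketch of the kind of scan the CNI layer performs over that directory, assuming a standard .conflist layout (illustration only, not containerd's code):

package main

import (
	"encoding/json"
	"fmt"
	"os"
	"path/filepath"
)

// confList models the top-level fields of a CNI .conflist file such as the
// one Calico's install-cni container writes into /etc/cni/net.d.
type confList struct {
	CNIVersion string `json:"cniVersion"`
	Name       string `json:"name"`
	Plugins    []struct {
		Type string `json:"type"`
	} `json:"plugins"`
}

func main() {
	files, _ := filepath.Glob("/etc/cni/net.d/*.conflist")
	if len(files) == 0 {
		// This is the state behind "no network config found in /etc/cni/net.d".
		fmt.Println("cni plugin not initialized: no network config found")
		return
	}
	for _, f := range files {
		raw, err := os.ReadFile(f)
		if err != nil {
			fmt.Println("read error:", err)
			continue
		}
		var c confList
		if err := json.Unmarshal(raw, &c); err != nil {
			fmt.Println("bad config:", f, err)
			continue
		}
		fmt.Printf("loaded %s: name=%s cniVersion=%s plugins=%d\n", f, c.Name, c.CNIVersion, len(c.Plugins))
	}
}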
Jan 30 19:16:24.446734 kernel: BTRFS warning: duplicate device /dev/vda3 devid 1 generation 38 scanned by (udev-worker) (2406) Jan 30 19:16:25.028321 containerd[1497]: time="2025-01-30T19:16:25.028215948Z" level=error msg="failed to reload cni configuration after receiving fs change event(WRITE \"/etc/cni/net.d/calico-kubeconfig\")" error="cni config load failed: no network config found in /etc/cni/net.d: cni plugin not initialized: failed to load cni config" Jan 30 19:16:25.032495 systemd[1]: cri-containerd-d460de1b71c9bfaefcc46794a8e55eaa95d8a6b8f1886f9d2677083eb798781b.scope: Deactivated successfully. Jan 30 19:16:25.041404 kubelet[1905]: I0130 19:16:25.040965 1905 kubelet_node_status.go:502] "Fast updating node status as it just became ready" Jan 30 19:16:25.065985 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-d460de1b71c9bfaefcc46794a8e55eaa95d8a6b8f1886f9d2677083eb798781b-rootfs.mount: Deactivated successfully. Jan 30 19:16:25.131742 kubelet[1905]: E0130 19:16:25.131675 1905 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Jan 30 19:16:25.261312 systemd[1]: Created slice kubepods-besteffort-pode47b55f6_c6c0_4e43_a8ed_5ed724f4ad91.slice - libcontainer container kubepods-besteffort-pode47b55f6_c6c0_4e43_a8ed_5ed724f4ad91.slice. Jan 30 19:16:25.265296 containerd[1497]: time="2025-01-30T19:16:25.265226522Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:csi-node-driver-n5m8t,Uid:e47b55f6-c6c0-4e43-a8ed-5ed724f4ad91,Namespace:calico-system,Attempt:0,}" Jan 30 19:16:25.392963 containerd[1497]: time="2025-01-30T19:16:25.392812174Z" level=info msg="shim disconnected" id=d460de1b71c9bfaefcc46794a8e55eaa95d8a6b8f1886f9d2677083eb798781b namespace=k8s.io Jan 30 19:16:25.392963 containerd[1497]: time="2025-01-30T19:16:25.392927676Z" level=warning msg="cleaning up after shim disconnected" id=d460de1b71c9bfaefcc46794a8e55eaa95d8a6b8f1886f9d2677083eb798781b namespace=k8s.io Jan 30 19:16:25.392963 containerd[1497]: time="2025-01-30T19:16:25.392947895Z" level=info msg="cleaning up dead shim" namespace=k8s.io Jan 30 19:16:25.490773 containerd[1497]: time="2025-01-30T19:16:25.490511776Z" level=error msg="Failed to destroy network for sandbox \"2787c743dda7741f9ff13cca66805cbd8eb96e68d15dcc4fadb5deac18d2ccd1\"" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jan 30 19:16:25.494319 containerd[1497]: time="2025-01-30T19:16:25.491619588Z" level=error msg="encountered an error cleaning up failed sandbox \"2787c743dda7741f9ff13cca66805cbd8eb96e68d15dcc4fadb5deac18d2ccd1\", marking sandbox state as SANDBOX_UNKNOWN" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jan 30 19:16:25.494319 containerd[1497]: time="2025-01-30T19:16:25.493314293Z" level=error msg="RunPodSandbox for &PodSandboxMetadata{Name:csi-node-driver-n5m8t,Uid:e47b55f6-c6c0-4e43-a8ed-5ed724f4ad91,Namespace:calico-system,Attempt:0,} failed, error" error="failed to setup network for sandbox \"2787c743dda7741f9ff13cca66805cbd8eb96e68d15dcc4fadb5deac18d2ccd1\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jan 30 19:16:25.492990 systemd[1]: 
run-containerd-io.containerd.grpc.v1.cri-sandboxes-2787c743dda7741f9ff13cca66805cbd8eb96e68d15dcc4fadb5deac18d2ccd1-shm.mount: Deactivated successfully. Jan 30 19:16:25.494626 kubelet[1905]: E0130 19:16:25.494480 1905 log.go:32] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"2787c743dda7741f9ff13cca66805cbd8eb96e68d15dcc4fadb5deac18d2ccd1\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jan 30 19:16:25.494706 kubelet[1905]: E0130 19:16:25.494629 1905 kuberuntime_sandbox.go:72] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"2787c743dda7741f9ff13cca66805cbd8eb96e68d15dcc4fadb5deac18d2ccd1\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-system/csi-node-driver-n5m8t" Jan 30 19:16:25.494706 kubelet[1905]: E0130 19:16:25.494692 1905 kuberuntime_manager.go:1237] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"2787c743dda7741f9ff13cca66805cbd8eb96e68d15dcc4fadb5deac18d2ccd1\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-system/csi-node-driver-n5m8t" Jan 30 19:16:25.494867 kubelet[1905]: E0130 19:16:25.494764 1905 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"csi-node-driver-n5m8t_calico-system(e47b55f6-c6c0-4e43-a8ed-5ed724f4ad91)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"csi-node-driver-n5m8t_calico-system(e47b55f6-c6c0-4e43-a8ed-5ed724f4ad91)\\\": rpc error: code = Unknown desc = failed to setup network for sandbox \\\"2787c743dda7741f9ff13cca66805cbd8eb96e68d15dcc4fadb5deac18d2ccd1\\\": plugin type=\\\"calico\\\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="calico-system/csi-node-driver-n5m8t" podUID="e47b55f6-c6c0-4e43-a8ed-5ed724f4ad91" Jan 30 19:16:26.132873 kubelet[1905]: E0130 19:16:26.132784 1905 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Jan 30 19:16:26.316367 containerd[1497]: time="2025-01-30T19:16:26.315524772Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/node:v3.29.1\"" Jan 30 19:16:26.316959 kubelet[1905]: I0130 19:16:26.315841 1905 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="2787c743dda7741f9ff13cca66805cbd8eb96e68d15dcc4fadb5deac18d2ccd1" Jan 30 19:16:26.317029 containerd[1497]: time="2025-01-30T19:16:26.316727206Z" level=info msg="StopPodSandbox for \"2787c743dda7741f9ff13cca66805cbd8eb96e68d15dcc4fadb5deac18d2ccd1\"" Jan 30 19:16:26.317029 containerd[1497]: time="2025-01-30T19:16:26.316987037Z" level=info msg="Ensure that sandbox 2787c743dda7741f9ff13cca66805cbd8eb96e68d15dcc4fadb5deac18d2ccd1 in task-service has been cleanup successfully" Jan 30 19:16:26.355326 containerd[1497]: time="2025-01-30T19:16:26.355016597Z" level=error msg="StopPodSandbox for \"2787c743dda7741f9ff13cca66805cbd8eb96e68d15dcc4fadb5deac18d2ccd1\" failed" error="failed to destroy network for 
sandbox \"2787c743dda7741f9ff13cca66805cbd8eb96e68d15dcc4fadb5deac18d2ccd1\": plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jan 30 19:16:26.355509 kubelet[1905]: E0130 19:16:26.355402 1905 log.go:32] "StopPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to destroy network for sandbox \"2787c743dda7741f9ff13cca66805cbd8eb96e68d15dcc4fadb5deac18d2ccd1\": plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" podSandboxID="2787c743dda7741f9ff13cca66805cbd8eb96e68d15dcc4fadb5deac18d2ccd1" Jan 30 19:16:26.355592 kubelet[1905]: E0130 19:16:26.355509 1905 kuberuntime_manager.go:1546] "Failed to stop sandbox" podSandboxID={"Type":"containerd","ID":"2787c743dda7741f9ff13cca66805cbd8eb96e68d15dcc4fadb5deac18d2ccd1"} Jan 30 19:16:26.355676 kubelet[1905]: E0130 19:16:26.355604 1905 kuberuntime_manager.go:1146] "killPodWithSyncResult failed" err="failed to \"KillPodSandbox\" for \"e47b55f6-c6c0-4e43-a8ed-5ed724f4ad91\" with KillPodSandboxError: \"rpc error: code = Unknown desc = failed to destroy network for sandbox \\\"2787c743dda7741f9ff13cca66805cbd8eb96e68d15dcc4fadb5deac18d2ccd1\\\": plugin type=\\\"calico\\\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" Jan 30 19:16:26.355676 kubelet[1905]: E0130 19:16:26.355637 1905 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"KillPodSandbox\" for \"e47b55f6-c6c0-4e43-a8ed-5ed724f4ad91\" with KillPodSandboxError: \"rpc error: code = Unknown desc = failed to destroy network for sandbox \\\"2787c743dda7741f9ff13cca66805cbd8eb96e68d15dcc4fadb5deac18d2ccd1\\\": plugin type=\\\"calico\\\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="calico-system/csi-node-driver-n5m8t" podUID="e47b55f6-c6c0-4e43-a8ed-5ed724f4ad91" Jan 30 19:16:27.133891 kubelet[1905]: E0130 19:16:27.133804 1905 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Jan 30 19:16:27.361984 systemd[1]: Created slice kubepods-besteffort-pod57660fc5_fce7_487a_8617_deb8ce9ecbd3.slice - libcontainer container kubepods-besteffort-pod57660fc5_fce7_487a_8617_deb8ce9ecbd3.slice. 
Jan 30 19:16:27.365316 kubelet[1905]: I0130 19:16:27.365201 1905 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-n9l9w\" (UniqueName: \"kubernetes.io/projected/57660fc5-fce7-487a-8617-deb8ce9ecbd3-kube-api-access-n9l9w\") pod \"nginx-deployment-7fcdb87857-dvkbv\" (UID: \"57660fc5-fce7-487a-8617-deb8ce9ecbd3\") " pod="default/nginx-deployment-7fcdb87857-dvkbv" Jan 30 19:16:27.672796 containerd[1497]: time="2025-01-30T19:16:27.672632729Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:nginx-deployment-7fcdb87857-dvkbv,Uid:57660fc5-fce7-487a-8617-deb8ce9ecbd3,Namespace:default,Attempt:0,}" Jan 30 19:16:27.834021 containerd[1497]: time="2025-01-30T19:16:27.833925643Z" level=error msg="Failed to destroy network for sandbox \"47f5fa847ec4341f907e7e4a779edef3f6b81ce2251a82ac44adee65eb681622\"" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jan 30 19:16:27.836937 containerd[1497]: time="2025-01-30T19:16:27.836551635Z" level=error msg="encountered an error cleaning up failed sandbox \"47f5fa847ec4341f907e7e4a779edef3f6b81ce2251a82ac44adee65eb681622\", marking sandbox state as SANDBOX_UNKNOWN" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jan 30 19:16:27.836937 containerd[1497]: time="2025-01-30T19:16:27.836614647Z" level=error msg="RunPodSandbox for &PodSandboxMetadata{Name:nginx-deployment-7fcdb87857-dvkbv,Uid:57660fc5-fce7-487a-8617-deb8ce9ecbd3,Namespace:default,Attempt:0,} failed, error" error="failed to setup network for sandbox \"47f5fa847ec4341f907e7e4a779edef3f6b81ce2251a82ac44adee65eb681622\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jan 30 19:16:27.836194 systemd[1]: run-containerd-io.containerd.grpc.v1.cri-sandboxes-47f5fa847ec4341f907e7e4a779edef3f6b81ce2251a82ac44adee65eb681622-shm.mount: Deactivated successfully. 
Jan 30 19:16:27.837294 kubelet[1905]: E0130 19:16:27.837045 1905 log.go:32] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"47f5fa847ec4341f907e7e4a779edef3f6b81ce2251a82ac44adee65eb681622\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jan 30 19:16:27.837294 kubelet[1905]: E0130 19:16:27.837171 1905 kuberuntime_sandbox.go:72] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"47f5fa847ec4341f907e7e4a779edef3f6b81ce2251a82ac44adee65eb681622\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="default/nginx-deployment-7fcdb87857-dvkbv" Jan 30 19:16:27.837294 kubelet[1905]: E0130 19:16:27.837208 1905 kuberuntime_manager.go:1237] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"47f5fa847ec4341f907e7e4a779edef3f6b81ce2251a82ac44adee65eb681622\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="default/nginx-deployment-7fcdb87857-dvkbv" Jan 30 19:16:27.837529 kubelet[1905]: E0130 19:16:27.837363 1905 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"nginx-deployment-7fcdb87857-dvkbv_default(57660fc5-fce7-487a-8617-deb8ce9ecbd3)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"nginx-deployment-7fcdb87857-dvkbv_default(57660fc5-fce7-487a-8617-deb8ce9ecbd3)\\\": rpc error: code = Unknown desc = failed to setup network for sandbox \\\"47f5fa847ec4341f907e7e4a779edef3f6b81ce2251a82ac44adee65eb681622\\\": plugin type=\\\"calico\\\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="default/nginx-deployment-7fcdb87857-dvkbv" podUID="57660fc5-fce7-487a-8617-deb8ce9ecbd3" Jan 30 19:16:28.134232 kubelet[1905]: E0130 19:16:28.134148 1905 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Jan 30 19:16:28.327193 kubelet[1905]: I0130 19:16:28.326401 1905 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="47f5fa847ec4341f907e7e4a779edef3f6b81ce2251a82ac44adee65eb681622" Jan 30 19:16:28.328276 containerd[1497]: time="2025-01-30T19:16:28.327558882Z" level=info msg="StopPodSandbox for \"47f5fa847ec4341f907e7e4a779edef3f6b81ce2251a82ac44adee65eb681622\"" Jan 30 19:16:28.328276 containerd[1497]: time="2025-01-30T19:16:28.327805468Z" level=info msg="Ensure that sandbox 47f5fa847ec4341f907e7e4a779edef3f6b81ce2251a82ac44adee65eb681622 in task-service has been cleanup successfully" Jan 30 19:16:28.404451 containerd[1497]: time="2025-01-30T19:16:28.404289581Z" level=error msg="StopPodSandbox for \"47f5fa847ec4341f907e7e4a779edef3f6b81ce2251a82ac44adee65eb681622\" failed" error="failed to destroy network for sandbox \"47f5fa847ec4341f907e7e4a779edef3f6b81ce2251a82ac44adee65eb681622\": plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jan 30 
19:16:28.405343 kubelet[1905]: E0130 19:16:28.405211 1905 log.go:32] "StopPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to destroy network for sandbox \"47f5fa847ec4341f907e7e4a779edef3f6b81ce2251a82ac44adee65eb681622\": plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" podSandboxID="47f5fa847ec4341f907e7e4a779edef3f6b81ce2251a82ac44adee65eb681622" Jan 30 19:16:28.405462 kubelet[1905]: E0130 19:16:28.405370 1905 kuberuntime_manager.go:1546] "Failed to stop sandbox" podSandboxID={"Type":"containerd","ID":"47f5fa847ec4341f907e7e4a779edef3f6b81ce2251a82ac44adee65eb681622"} Jan 30 19:16:28.405462 kubelet[1905]: E0130 19:16:28.405433 1905 kuberuntime_manager.go:1146] "killPodWithSyncResult failed" err="failed to \"KillPodSandbox\" for \"57660fc5-fce7-487a-8617-deb8ce9ecbd3\" with KillPodSandboxError: \"rpc error: code = Unknown desc = failed to destroy network for sandbox \\\"47f5fa847ec4341f907e7e4a779edef3f6b81ce2251a82ac44adee65eb681622\\\": plugin type=\\\"calico\\\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" Jan 30 19:16:28.405623 kubelet[1905]: E0130 19:16:28.405471 1905 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"KillPodSandbox\" for \"57660fc5-fce7-487a-8617-deb8ce9ecbd3\" with KillPodSandboxError: \"rpc error: code = Unknown desc = failed to destroy network for sandbox \\\"47f5fa847ec4341f907e7e4a779edef3f6b81ce2251a82ac44adee65eb681622\\\": plugin type=\\\"calico\\\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="default/nginx-deployment-7fcdb87857-dvkbv" podUID="57660fc5-fce7-487a-8617-deb8ce9ecbd3" Jan 30 19:16:29.120098 kubelet[1905]: E0130 19:16:29.119268 1905 file.go:104] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Jan 30 19:16:29.136524 kubelet[1905]: E0130 19:16:29.136379 1905 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Jan 30 19:16:30.137427 kubelet[1905]: E0130 19:16:30.137196 1905 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Jan 30 19:16:31.138218 kubelet[1905]: E0130 19:16:31.138165 1905 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Jan 30 19:16:32.140276 kubelet[1905]: E0130 19:16:32.140188 1905 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Jan 30 19:16:33.140398 kubelet[1905]: E0130 19:16:33.140344 1905 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Jan 30 19:16:34.087001 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount1792687020.mount: Deactivated successfully. 
Jan 30 19:16:34.140660 kubelet[1905]: E0130 19:16:34.140530 1905 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Jan 30 19:16:34.319911 containerd[1497]: time="2025-01-30T19:16:34.319718852Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/node:v3.29.1\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jan 30 19:16:34.321139 containerd[1497]: time="2025-01-30T19:16:34.320988991Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/node:v3.29.1: active requests=0, bytes read=142742010" Jan 30 19:16:34.322185 containerd[1497]: time="2025-01-30T19:16:34.322082685Z" level=info msg="ImageCreate event name:\"sha256:feb26d4585d68e875d9bd9bd6c27ea9f2d5c9ed9ef70f8b8cb0ebb0559a1d664\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jan 30 19:16:34.324899 containerd[1497]: time="2025-01-30T19:16:34.324822483Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/node@sha256:99c3917516efe1f807a0cfdf2d14b628b7c5cc6bd8a9ee5a253154f31756bea1\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jan 30 19:16:34.326672 containerd[1497]: time="2025-01-30T19:16:34.325968130Z" level=info msg="Pulled image \"ghcr.io/flatcar/calico/node:v3.29.1\" with image id \"sha256:feb26d4585d68e875d9bd9bd6c27ea9f2d5c9ed9ef70f8b8cb0ebb0559a1d664\", repo tag \"ghcr.io/flatcar/calico/node:v3.29.1\", repo digest \"ghcr.io/flatcar/calico/node@sha256:99c3917516efe1f807a0cfdf2d14b628b7c5cc6bd8a9ee5a253154f31756bea1\", size \"142741872\" in 8.010186923s" Jan 30 19:16:34.326672 containerd[1497]: time="2025-01-30T19:16:34.326029258Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/node:v3.29.1\" returns image reference \"sha256:feb26d4585d68e875d9bd9bd6c27ea9f2d5c9ed9ef70f8b8cb0ebb0559a1d664\"" Jan 30 19:16:34.359335 containerd[1497]: time="2025-01-30T19:16:34.359099586Z" level=info msg="CreateContainer within sandbox \"a02e06fd599113815222b861417df1039b2047a555d6130c52dc1ac2c2baa618\" for container &ContainerMetadata{Name:calico-node,Attempt:0,}" Jan 30 19:16:34.388907 containerd[1497]: time="2025-01-30T19:16:34.388724159Z" level=info msg="CreateContainer within sandbox \"a02e06fd599113815222b861417df1039b2047a555d6130c52dc1ac2c2baa618\" for &ContainerMetadata{Name:calico-node,Attempt:0,} returns container id \"277399867a74b419957289c1e23fb54ef18e54886b7d93c40246038548111877\"" Jan 30 19:16:34.390079 containerd[1497]: time="2025-01-30T19:16:34.389676729Z" level=info msg="StartContainer for \"277399867a74b419957289c1e23fb54ef18e54886b7d93c40246038548111877\"" Jan 30 19:16:34.478480 systemd[1]: Started cri-containerd-277399867a74b419957289c1e23fb54ef18e54886b7d93c40246038548111877.scope - libcontainer container 277399867a74b419957289c1e23fb54ef18e54886b7d93c40246038548111877. Jan 30 19:16:34.528796 containerd[1497]: time="2025-01-30T19:16:34.528661355Z" level=info msg="StartContainer for \"277399867a74b419957289c1e23fb54ef18e54886b7d93c40246038548111877\" returns successfully" Jan 30 19:16:34.630674 kernel: wireguard: WireGuard 1.0.0 loaded. See www.wireguard.com for information. Jan 30 19:16:34.630932 kernel: wireguard: Copyright (C) 2015-2019 Jason A. Donenfeld . All Rights Reserved. 
Jan 30 19:16:35.141094 kubelet[1905]: E0130 19:16:35.141011 1905 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Jan 30 19:16:35.374475 kubelet[1905]: I0130 19:16:35.373557 1905 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="calico-system/calico-node-jvtg9" podStartSLOduration=4.651220255 podStartE2EDuration="26.373535361s" podCreationTimestamp="2025-01-30 19:16:09 +0000 UTC" firstStartedPulling="2025-01-30 19:16:11.62097745 +0000 UTC m=+4.746523412" lastFinishedPulling="2025-01-30 19:16:34.327097499 +0000 UTC m=+26.468838518" observedRunningTime="2025-01-30 19:16:35.373156267 +0000 UTC m=+27.514897315" watchObservedRunningTime="2025-01-30 19:16:35.373535361 +0000 UTC m=+27.515276394" Jan 30 19:16:36.141558 kubelet[1905]: E0130 19:16:36.141476 1905 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Jan 30 19:16:36.296283 kernel: bpftool[2743]: memfd_create() called without MFD_EXEC or MFD_NOEXEC_SEAL set Jan 30 19:16:36.385311 systemd[1]: run-containerd-runc-k8s.io-277399867a74b419957289c1e23fb54ef18e54886b7d93c40246038548111877-runc.62U4S1.mount: Deactivated successfully. Jan 30 19:16:36.642055 systemd-networkd[1422]: vxlan.calico: Link UP Jan 30 19:16:36.642070 systemd-networkd[1422]: vxlan.calico: Gained carrier Jan 30 19:16:37.141944 kubelet[1905]: E0130 19:16:37.141871 1905 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Jan 30 19:16:38.142641 kubelet[1905]: E0130 19:16:38.142538 1905 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Jan 30 19:16:38.252971 containerd[1497]: time="2025-01-30T19:16:38.252763069Z" level=info msg="StopPodSandbox for \"2787c743dda7741f9ff13cca66805cbd8eb96e68d15dcc4fadb5deac18d2ccd1\"" Jan 30 19:16:38.256586 systemd-networkd[1422]: vxlan.calico: Gained IPv6LL Jan 30 19:16:38.465407 containerd[1497]: 2025-01-30 19:16:38.347 [INFO][2861] cni-plugin/k8s.go 608: Cleaning up netns ContainerID="2787c743dda7741f9ff13cca66805cbd8eb96e68d15dcc4fadb5deac18d2ccd1" Jan 30 19:16:38.465407 containerd[1497]: 2025-01-30 19:16:38.348 [INFO][2861] cni-plugin/dataplane_linux.go 559: Deleting workload's device in netns. ContainerID="2787c743dda7741f9ff13cca66805cbd8eb96e68d15dcc4fadb5deac18d2ccd1" iface="eth0" netns="/var/run/netns/cni-f3ef90e5-83aa-63ab-1920-198bcda1723e" Jan 30 19:16:38.465407 containerd[1497]: 2025-01-30 19:16:38.349 [INFO][2861] cni-plugin/dataplane_linux.go 570: Entered netns, deleting veth. ContainerID="2787c743dda7741f9ff13cca66805cbd8eb96e68d15dcc4fadb5deac18d2ccd1" iface="eth0" netns="/var/run/netns/cni-f3ef90e5-83aa-63ab-1920-198bcda1723e" Jan 30 19:16:38.465407 containerd[1497]: 2025-01-30 19:16:38.351 [INFO][2861] cni-plugin/dataplane_linux.go 597: Workload's veth was already gone. Nothing to do. 
ContainerID="2787c743dda7741f9ff13cca66805cbd8eb96e68d15dcc4fadb5deac18d2ccd1" iface="eth0" netns="/var/run/netns/cni-f3ef90e5-83aa-63ab-1920-198bcda1723e" Jan 30 19:16:38.465407 containerd[1497]: 2025-01-30 19:16:38.351 [INFO][2861] cni-plugin/k8s.go 615: Releasing IP address(es) ContainerID="2787c743dda7741f9ff13cca66805cbd8eb96e68d15dcc4fadb5deac18d2ccd1" Jan 30 19:16:38.465407 containerd[1497]: 2025-01-30 19:16:38.351 [INFO][2861] cni-plugin/utils.go 188: Calico CNI releasing IP address ContainerID="2787c743dda7741f9ff13cca66805cbd8eb96e68d15dcc4fadb5deac18d2ccd1" Jan 30 19:16:38.465407 containerd[1497]: 2025-01-30 19:16:38.445 [INFO][2867] ipam/ipam_plugin.go 412: Releasing address using handleID ContainerID="2787c743dda7741f9ff13cca66805cbd8eb96e68d15dcc4fadb5deac18d2ccd1" HandleID="k8s-pod-network.2787c743dda7741f9ff13cca66805cbd8eb96e68d15dcc4fadb5deac18d2ccd1" Workload="10.230.38.22-k8s-csi--node--driver--n5m8t-eth0" Jan 30 19:16:38.465407 containerd[1497]: 2025-01-30 19:16:38.445 [INFO][2867] ipam/ipam_plugin.go 353: About to acquire host-wide IPAM lock. Jan 30 19:16:38.465407 containerd[1497]: 2025-01-30 19:16:38.445 [INFO][2867] ipam/ipam_plugin.go 368: Acquired host-wide IPAM lock. Jan 30 19:16:38.465407 containerd[1497]: 2025-01-30 19:16:38.458 [WARNING][2867] ipam/ipam_plugin.go 429: Asked to release address but it doesn't exist. Ignoring ContainerID="2787c743dda7741f9ff13cca66805cbd8eb96e68d15dcc4fadb5deac18d2ccd1" HandleID="k8s-pod-network.2787c743dda7741f9ff13cca66805cbd8eb96e68d15dcc4fadb5deac18d2ccd1" Workload="10.230.38.22-k8s-csi--node--driver--n5m8t-eth0" Jan 30 19:16:38.465407 containerd[1497]: 2025-01-30 19:16:38.458 [INFO][2867] ipam/ipam_plugin.go 440: Releasing address using workloadID ContainerID="2787c743dda7741f9ff13cca66805cbd8eb96e68d15dcc4fadb5deac18d2ccd1" HandleID="k8s-pod-network.2787c743dda7741f9ff13cca66805cbd8eb96e68d15dcc4fadb5deac18d2ccd1" Workload="10.230.38.22-k8s-csi--node--driver--n5m8t-eth0" Jan 30 19:16:38.465407 containerd[1497]: 2025-01-30 19:16:38.461 [INFO][2867] ipam/ipam_plugin.go 374: Released host-wide IPAM lock. Jan 30 19:16:38.465407 containerd[1497]: 2025-01-30 19:16:38.463 [INFO][2861] cni-plugin/k8s.go 621: Teardown processing complete. ContainerID="2787c743dda7741f9ff13cca66805cbd8eb96e68d15dcc4fadb5deac18d2ccd1" Jan 30 19:16:38.465407 containerd[1497]: time="2025-01-30T19:16:38.465303053Z" level=info msg="TearDown network for sandbox \"2787c743dda7741f9ff13cca66805cbd8eb96e68d15dcc4fadb5deac18d2ccd1\" successfully" Jan 30 19:16:38.465407 containerd[1497]: time="2025-01-30T19:16:38.465347832Z" level=info msg="StopPodSandbox for \"2787c743dda7741f9ff13cca66805cbd8eb96e68d15dcc4fadb5deac18d2ccd1\" returns successfully" Jan 30 19:16:38.467975 systemd[1]: run-netns-cni\x2df3ef90e5\x2d83aa\x2d63ab\x2d1920\x2d198bcda1723e.mount: Deactivated successfully. 
Jan 30 19:16:38.470104 containerd[1497]: time="2025-01-30T19:16:38.468954971Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:csi-node-driver-n5m8t,Uid:e47b55f6-c6c0-4e43-a8ed-5ed724f4ad91,Namespace:calico-system,Attempt:1,}" Jan 30 19:16:38.761102 systemd-networkd[1422]: calidd0da33f110: Link UP Jan 30 19:16:38.762393 systemd-networkd[1422]: calidd0da33f110: Gained carrier Jan 30 19:16:38.787295 containerd[1497]: 2025-01-30 19:16:38.538 [INFO][2874] cni-plugin/plugin.go 325: Calico CNI found existing endpoint: &{{WorkloadEndpoint projectcalico.org/v3} {10.230.38.22-k8s-csi--node--driver--n5m8t-eth0 csi-node-driver- calico-system e47b55f6-c6c0-4e43-a8ed-5ed724f4ad91 1122 0 2025-01-30 19:16:09 +0000 UTC map[app.kubernetes.io/name:csi-node-driver controller-revision-hash:84cddb44f k8s-app:csi-node-driver name:csi-node-driver pod-template-generation:1 projectcalico.org/namespace:calico-system projectcalico.org/orchestrator:k8s projectcalico.org/serviceaccount:csi-node-driver] map[] [] [] []} {k8s 10.230.38.22 csi-node-driver-n5m8t eth0 csi-node-driver [] [] [kns.calico-system ksa.calico-system.csi-node-driver] calidd0da33f110 [] []}} ContainerID="16b998780cfc31b9af93526411d1ed888204afb3f02aede211e2649680c62c7b" Namespace="calico-system" Pod="csi-node-driver-n5m8t" WorkloadEndpoint="10.230.38.22-k8s-csi--node--driver--n5m8t-" Jan 30 19:16:38.787295 containerd[1497]: 2025-01-30 19:16:38.538 [INFO][2874] cni-plugin/k8s.go 77: Extracted identifiers for CmdAddK8s ContainerID="16b998780cfc31b9af93526411d1ed888204afb3f02aede211e2649680c62c7b" Namespace="calico-system" Pod="csi-node-driver-n5m8t" WorkloadEndpoint="10.230.38.22-k8s-csi--node--driver--n5m8t-eth0" Jan 30 19:16:38.787295 containerd[1497]: 2025-01-30 19:16:38.595 [INFO][2886] ipam/ipam_plugin.go 225: Calico CNI IPAM request count IPv4=1 IPv6=0 ContainerID="16b998780cfc31b9af93526411d1ed888204afb3f02aede211e2649680c62c7b" HandleID="k8s-pod-network.16b998780cfc31b9af93526411d1ed888204afb3f02aede211e2649680c62c7b" Workload="10.230.38.22-k8s-csi--node--driver--n5m8t-eth0" Jan 30 19:16:38.787295 containerd[1497]: 2025-01-30 19:16:38.629 [INFO][2886] ipam/ipam_plugin.go 265: Auto assigning IP ContainerID="16b998780cfc31b9af93526411d1ed888204afb3f02aede211e2649680c62c7b" HandleID="k8s-pod-network.16b998780cfc31b9af93526411d1ed888204afb3f02aede211e2649680c62c7b" Workload="10.230.38.22-k8s-csi--node--driver--n5m8t-eth0" assignArgs=ipam.AutoAssignArgs{Num4:1, Num6:0, HandleID:(*string)(0xc00030fc60), Attrs:map[string]string{"namespace":"calico-system", "node":"10.230.38.22", "pod":"csi-node-driver-n5m8t", "timestamp":"2025-01-30 19:16:38.595266784 +0000 UTC"}, Hostname:"10.230.38.22", IPv4Pools:[]net.IPNet{}, IPv6Pools:[]net.IPNet{}, MaxBlocksPerHost:0, HostReservedAttrIPv4s:(*ipam.HostReservedAttr)(nil), HostReservedAttrIPv6s:(*ipam.HostReservedAttr)(nil), IntendedUse:"Workload"} Jan 30 19:16:38.787295 containerd[1497]: 2025-01-30 19:16:38.629 [INFO][2886] ipam/ipam_plugin.go 353: About to acquire host-wide IPAM lock. Jan 30 19:16:38.787295 containerd[1497]: 2025-01-30 19:16:38.629 [INFO][2886] ipam/ipam_plugin.go 368: Acquired host-wide IPAM lock. 
Jan 30 19:16:38.787295 containerd[1497]: 2025-01-30 19:16:38.629 [INFO][2886] ipam/ipam.go 107: Auto-assign 1 ipv4, 0 ipv6 addrs for host '10.230.38.22' Jan 30 19:16:38.787295 containerd[1497]: 2025-01-30 19:16:38.632 [INFO][2886] ipam/ipam.go 660: Looking up existing affinities for host handle="k8s-pod-network.16b998780cfc31b9af93526411d1ed888204afb3f02aede211e2649680c62c7b" host="10.230.38.22" Jan 30 19:16:38.787295 containerd[1497]: 2025-01-30 19:16:38.653 [INFO][2886] ipam/ipam.go 372: Looking up existing affinities for host host="10.230.38.22" Jan 30 19:16:38.787295 containerd[1497]: 2025-01-30 19:16:38.662 [INFO][2886] ipam/ipam.go 489: Trying affinity for 192.168.9.0/26 host="10.230.38.22" Jan 30 19:16:38.787295 containerd[1497]: 2025-01-30 19:16:38.665 [INFO][2886] ipam/ipam.go 155: Attempting to load block cidr=192.168.9.0/26 host="10.230.38.22" Jan 30 19:16:38.787295 containerd[1497]: 2025-01-30 19:16:38.668 [INFO][2886] ipam/ipam.go 232: Affinity is confirmed and block has been loaded cidr=192.168.9.0/26 host="10.230.38.22" Jan 30 19:16:38.787295 containerd[1497]: 2025-01-30 19:16:38.668 [INFO][2886] ipam/ipam.go 1180: Attempting to assign 1 addresses from block block=192.168.9.0/26 handle="k8s-pod-network.16b998780cfc31b9af93526411d1ed888204afb3f02aede211e2649680c62c7b" host="10.230.38.22" Jan 30 19:16:38.787295 containerd[1497]: 2025-01-30 19:16:38.671 [INFO][2886] ipam/ipam.go 1685: Creating new handle: k8s-pod-network.16b998780cfc31b9af93526411d1ed888204afb3f02aede211e2649680c62c7b Jan 30 19:16:38.787295 containerd[1497]: 2025-01-30 19:16:38.679 [INFO][2886] ipam/ipam.go 1203: Writing block in order to claim IPs block=192.168.9.0/26 handle="k8s-pod-network.16b998780cfc31b9af93526411d1ed888204afb3f02aede211e2649680c62c7b" host="10.230.38.22" Jan 30 19:16:38.787295 containerd[1497]: 2025-01-30 19:16:38.687 [INFO][2886] ipam/ipam.go 1216: Successfully claimed IPs: [192.168.9.1/26] block=192.168.9.0/26 handle="k8s-pod-network.16b998780cfc31b9af93526411d1ed888204afb3f02aede211e2649680c62c7b" host="10.230.38.22" Jan 30 19:16:38.787295 containerd[1497]: 2025-01-30 19:16:38.687 [INFO][2886] ipam/ipam.go 847: Auto-assigned 1 out of 1 IPv4s: [192.168.9.1/26] handle="k8s-pod-network.16b998780cfc31b9af93526411d1ed888204afb3f02aede211e2649680c62c7b" host="10.230.38.22" Jan 30 19:16:38.787295 containerd[1497]: 2025-01-30 19:16:38.687 [INFO][2886] ipam/ipam_plugin.go 374: Released host-wide IPAM lock. 
Jan 30 19:16:38.787295 containerd[1497]: 2025-01-30 19:16:38.688 [INFO][2886] ipam/ipam_plugin.go 283: Calico CNI IPAM assigned addresses IPv4=[192.168.9.1/26] IPv6=[] ContainerID="16b998780cfc31b9af93526411d1ed888204afb3f02aede211e2649680c62c7b" HandleID="k8s-pod-network.16b998780cfc31b9af93526411d1ed888204afb3f02aede211e2649680c62c7b" Workload="10.230.38.22-k8s-csi--node--driver--n5m8t-eth0" Jan 30 19:16:38.788473 containerd[1497]: 2025-01-30 19:16:38.691 [INFO][2874] cni-plugin/k8s.go 386: Populated endpoint ContainerID="16b998780cfc31b9af93526411d1ed888204afb3f02aede211e2649680c62c7b" Namespace="calico-system" Pod="csi-node-driver-n5m8t" WorkloadEndpoint="10.230.38.22-k8s-csi--node--driver--n5m8t-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"10.230.38.22-k8s-csi--node--driver--n5m8t-eth0", GenerateName:"csi-node-driver-", Namespace:"calico-system", SelfLink:"", UID:"e47b55f6-c6c0-4e43-a8ed-5ed724f4ad91", ResourceVersion:"1122", Generation:0, CreationTimestamp:time.Date(2025, time.January, 30, 19, 16, 9, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"app.kubernetes.io/name":"csi-node-driver", "controller-revision-hash":"84cddb44f", "k8s-app":"csi-node-driver", "name":"csi-node-driver", "pod-template-generation":"1", "projectcalico.org/namespace":"calico-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"csi-node-driver"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"10.230.38.22", ContainerID:"", Pod:"csi-node-driver-n5m8t", Endpoint:"eth0", ServiceAccountName:"csi-node-driver", IPNetworks:[]string{"192.168.9.1/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-system", "ksa.calico-system.csi-node-driver"}, InterfaceName:"calidd0da33f110", MAC:"", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil)}} Jan 30 19:16:38.788473 containerd[1497]: 2025-01-30 19:16:38.691 [INFO][2874] cni-plugin/k8s.go 387: Calico CNI using IPs: [192.168.9.1/32] ContainerID="16b998780cfc31b9af93526411d1ed888204afb3f02aede211e2649680c62c7b" Namespace="calico-system" Pod="csi-node-driver-n5m8t" WorkloadEndpoint="10.230.38.22-k8s-csi--node--driver--n5m8t-eth0" Jan 30 19:16:38.788473 containerd[1497]: 2025-01-30 19:16:38.691 [INFO][2874] cni-plugin/dataplane_linux.go 69: Setting the host side veth name to calidd0da33f110 ContainerID="16b998780cfc31b9af93526411d1ed888204afb3f02aede211e2649680c62c7b" Namespace="calico-system" Pod="csi-node-driver-n5m8t" WorkloadEndpoint="10.230.38.22-k8s-csi--node--driver--n5m8t-eth0" Jan 30 19:16:38.788473 containerd[1497]: 2025-01-30 19:16:38.762 [INFO][2874] cni-plugin/dataplane_linux.go 508: Disabling IPv4 forwarding ContainerID="16b998780cfc31b9af93526411d1ed888204afb3f02aede211e2649680c62c7b" Namespace="calico-system" Pod="csi-node-driver-n5m8t" WorkloadEndpoint="10.230.38.22-k8s-csi--node--driver--n5m8t-eth0" Jan 30 19:16:38.788473 containerd[1497]: 2025-01-30 19:16:38.764 [INFO][2874] cni-plugin/k8s.go 414: Added Mac, interface name, and active container ID to endpoint ContainerID="16b998780cfc31b9af93526411d1ed888204afb3f02aede211e2649680c62c7b" Namespace="calico-system" Pod="csi-node-driver-n5m8t" 
WorkloadEndpoint="10.230.38.22-k8s-csi--node--driver--n5m8t-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"10.230.38.22-k8s-csi--node--driver--n5m8t-eth0", GenerateName:"csi-node-driver-", Namespace:"calico-system", SelfLink:"", UID:"e47b55f6-c6c0-4e43-a8ed-5ed724f4ad91", ResourceVersion:"1122", Generation:0, CreationTimestamp:time.Date(2025, time.January, 30, 19, 16, 9, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"app.kubernetes.io/name":"csi-node-driver", "controller-revision-hash":"84cddb44f", "k8s-app":"csi-node-driver", "name":"csi-node-driver", "pod-template-generation":"1", "projectcalico.org/namespace":"calico-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"csi-node-driver"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"10.230.38.22", ContainerID:"16b998780cfc31b9af93526411d1ed888204afb3f02aede211e2649680c62c7b", Pod:"csi-node-driver-n5m8t", Endpoint:"eth0", ServiceAccountName:"csi-node-driver", IPNetworks:[]string{"192.168.9.1/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-system", "ksa.calico-system.csi-node-driver"}, InterfaceName:"calidd0da33f110", MAC:"ca:47:6e:8a:ef:a0", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil)}} Jan 30 19:16:38.788473 containerd[1497]: 2025-01-30 19:16:38.778 [INFO][2874] cni-plugin/k8s.go 500: Wrote updated endpoint to datastore ContainerID="16b998780cfc31b9af93526411d1ed888204afb3f02aede211e2649680c62c7b" Namespace="calico-system" Pod="csi-node-driver-n5m8t" WorkloadEndpoint="10.230.38.22-k8s-csi--node--driver--n5m8t-eth0" Jan 30 19:16:38.821427 containerd[1497]: time="2025-01-30T19:16:38.821136815Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Jan 30 19:16:38.822288 containerd[1497]: time="2025-01-30T19:16:38.822006875Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Jan 30 19:16:38.822288 containerd[1497]: time="2025-01-30T19:16:38.822034495Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Jan 30 19:16:38.822288 containerd[1497]: time="2025-01-30T19:16:38.822206246Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Jan 30 19:16:38.855617 systemd[1]: Started cri-containerd-16b998780cfc31b9af93526411d1ed888204afb3f02aede211e2649680c62c7b.scope - libcontainer container 16b998780cfc31b9af93526411d1ed888204afb3f02aede211e2649680c62c7b. 
Jan 30 19:16:38.894513 containerd[1497]: time="2025-01-30T19:16:38.894386930Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:csi-node-driver-n5m8t,Uid:e47b55f6-c6c0-4e43-a8ed-5ed724f4ad91,Namespace:calico-system,Attempt:1,} returns sandbox id \"16b998780cfc31b9af93526411d1ed888204afb3f02aede211e2649680c62c7b\"" Jan 30 19:16:38.897283 containerd[1497]: time="2025-01-30T19:16:38.897170099Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/csi:v3.29.1\"" Jan 30 19:16:39.143177 kubelet[1905]: E0130 19:16:39.143109 1905 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Jan 30 19:16:39.919665 systemd-networkd[1422]: calidd0da33f110: Gained IPv6LL Jan 30 19:16:40.144066 kubelet[1905]: E0130 19:16:40.144001 1905 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Jan 30 19:16:40.301308 containerd[1497]: time="2025-01-30T19:16:40.301209818Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/csi:v3.29.1\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jan 30 19:16:40.302810 containerd[1497]: time="2025-01-30T19:16:40.302748317Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/csi:v3.29.1: active requests=0, bytes read=7902632" Jan 30 19:16:40.303807 containerd[1497]: time="2025-01-30T19:16:40.303741152Z" level=info msg="ImageCreate event name:\"sha256:bda8c42e04758c4f061339e213f50ccdc7502c4176fbf631aa12357e62b63540\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jan 30 19:16:40.306571 containerd[1497]: time="2025-01-30T19:16:40.306517762Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/csi@sha256:eaa7e01fb16b603c155a67b81f16992281db7f831684c7b2081d3434587a7ff3\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jan 30 19:16:40.308769 containerd[1497]: time="2025-01-30T19:16:40.308717849Z" level=info msg="Pulled image \"ghcr.io/flatcar/calico/csi:v3.29.1\" with image id \"sha256:bda8c42e04758c4f061339e213f50ccdc7502c4176fbf631aa12357e62b63540\", repo tag \"ghcr.io/flatcar/calico/csi:v3.29.1\", repo digest \"ghcr.io/flatcar/calico/csi@sha256:eaa7e01fb16b603c155a67b81f16992281db7f831684c7b2081d3434587a7ff3\", size \"9395716\" in 1.411268742s" Jan 30 19:16:40.308769 containerd[1497]: time="2025-01-30T19:16:40.308759881Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/csi:v3.29.1\" returns image reference \"sha256:bda8c42e04758c4f061339e213f50ccdc7502c4176fbf631aa12357e62b63540\"" Jan 30 19:16:40.311473 containerd[1497]: time="2025-01-30T19:16:40.311414120Z" level=info msg="CreateContainer within sandbox \"16b998780cfc31b9af93526411d1ed888204afb3f02aede211e2649680c62c7b\" for container &ContainerMetadata{Name:calico-csi,Attempt:0,}" Jan 30 19:16:40.330425 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount2453975579.mount: Deactivated successfully. 
Jan 30 19:16:40.342091 containerd[1497]: time="2025-01-30T19:16:40.342038453Z" level=info msg="CreateContainer within sandbox \"16b998780cfc31b9af93526411d1ed888204afb3f02aede211e2649680c62c7b\" for &ContainerMetadata{Name:calico-csi,Attempt:0,} returns container id \"a5d4be2e1c3531157fed4096c4cdce7bb456100084b5b45ea3f88a10629c7992\"" Jan 30 19:16:40.343183 containerd[1497]: time="2025-01-30T19:16:40.343103209Z" level=info msg="StartContainer for \"a5d4be2e1c3531157fed4096c4cdce7bb456100084b5b45ea3f88a10629c7992\"" Jan 30 19:16:40.389495 systemd[1]: Started cri-containerd-a5d4be2e1c3531157fed4096c4cdce7bb456100084b5b45ea3f88a10629c7992.scope - libcontainer container a5d4be2e1c3531157fed4096c4cdce7bb456100084b5b45ea3f88a10629c7992. Jan 30 19:16:40.480513 containerd[1497]: time="2025-01-30T19:16:40.480412996Z" level=info msg="StartContainer for \"a5d4be2e1c3531157fed4096c4cdce7bb456100084b5b45ea3f88a10629c7992\" returns successfully" Jan 30 19:16:40.483399 containerd[1497]: time="2025-01-30T19:16:40.482804831Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/node-driver-registrar:v3.29.1\"" Jan 30 19:16:41.145189 kubelet[1905]: E0130 19:16:41.145096 1905 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Jan 30 19:16:42.026667 containerd[1497]: time="2025-01-30T19:16:42.026601333Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/node-driver-registrar:v3.29.1\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jan 30 19:16:42.029289 containerd[1497]: time="2025-01-30T19:16:42.029188045Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/node-driver-registrar:v3.29.1: active requests=0, bytes read=10501081" Jan 30 19:16:42.032145 containerd[1497]: time="2025-01-30T19:16:42.030662913Z" level=info msg="ImageCreate event name:\"sha256:8b7d18f262d5cf6a6343578ad0db68a140c4c9989d9e02c58c27cb5d2c70320f\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jan 30 19:16:42.033997 containerd[1497]: time="2025-01-30T19:16:42.033963307Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/node-driver-registrar@sha256:a338da9488cbaa83c78457c3d7354d84149969c0480e88dd768e036632ff5b76\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jan 30 19:16:42.036397 containerd[1497]: time="2025-01-30T19:16:42.036348392Z" level=info msg="Pulled image \"ghcr.io/flatcar/calico/node-driver-registrar:v3.29.1\" with image id \"sha256:8b7d18f262d5cf6a6343578ad0db68a140c4c9989d9e02c58c27cb5d2c70320f\", repo tag \"ghcr.io/flatcar/calico/node-driver-registrar:v3.29.1\", repo digest \"ghcr.io/flatcar/calico/node-driver-registrar@sha256:a338da9488cbaa83c78457c3d7354d84149969c0480e88dd768e036632ff5b76\", size \"11994117\" in 1.553485922s" Jan 30 19:16:42.036552 containerd[1497]: time="2025-01-30T19:16:42.036520558Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/node-driver-registrar:v3.29.1\" returns image reference \"sha256:8b7d18f262d5cf6a6343578ad0db68a140c4c9989d9e02c58c27cb5d2c70320f\"" Jan 30 19:16:42.039433 containerd[1497]: time="2025-01-30T19:16:42.039399458Z" level=info msg="CreateContainer within sandbox \"16b998780cfc31b9af93526411d1ed888204afb3f02aede211e2649680c62c7b\" for container &ContainerMetadata{Name:csi-node-driver-registrar,Attempt:0,}" Jan 30 19:16:42.059739 containerd[1497]: time="2025-01-30T19:16:42.059694651Z" level=info msg="CreateContainer within sandbox \"16b998780cfc31b9af93526411d1ed888204afb3f02aede211e2649680c62c7b\" for 
&ContainerMetadata{Name:csi-node-driver-registrar,Attempt:0,} returns container id \"5feeb72dbc4811b2df6dcc88db617d787c0898a0b289cbe72fcef968809abf00\"" Jan 30 19:16:42.061139 containerd[1497]: time="2025-01-30T19:16:42.061110209Z" level=info msg="StartContainer for \"5feeb72dbc4811b2df6dcc88db617d787c0898a0b289cbe72fcef968809abf00\"" Jan 30 19:16:42.108505 systemd[1]: Started cri-containerd-5feeb72dbc4811b2df6dcc88db617d787c0898a0b289cbe72fcef968809abf00.scope - libcontainer container 5feeb72dbc4811b2df6dcc88db617d787c0898a0b289cbe72fcef968809abf00. Jan 30 19:16:42.146061 kubelet[1905]: E0130 19:16:42.145972 1905 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Jan 30 19:16:42.148225 containerd[1497]: time="2025-01-30T19:16:42.147963116Z" level=info msg="StartContainer for \"5feeb72dbc4811b2df6dcc88db617d787c0898a0b289cbe72fcef968809abf00\" returns successfully" Jan 30 19:16:42.253274 containerd[1497]: time="2025-01-30T19:16:42.252795710Z" level=info msg="StopPodSandbox for \"47f5fa847ec4341f907e7e4a779edef3f6b81ce2251a82ac44adee65eb681622\"" Jan 30 19:16:42.262232 kubelet[1905]: I0130 19:16:42.262191 1905 csi_plugin.go:100] kubernetes.io/csi: Trying to validate a new CSI Driver with name: csi.tigera.io endpoint: /var/lib/kubelet/plugins/csi.tigera.io/csi.sock versions: 1.0.0 Jan 30 19:16:42.262959 kubelet[1905]: I0130 19:16:42.262292 1905 csi_plugin.go:113] kubernetes.io/csi: Register new plugin with name: csi.tigera.io at endpoint: /var/lib/kubelet/plugins/csi.tigera.io/csi.sock Jan 30 19:16:42.367377 containerd[1497]: 2025-01-30 19:16:42.321 [INFO][3044] cni-plugin/k8s.go 608: Cleaning up netns ContainerID="47f5fa847ec4341f907e7e4a779edef3f6b81ce2251a82ac44adee65eb681622" Jan 30 19:16:42.367377 containerd[1497]: 2025-01-30 19:16:42.321 [INFO][3044] cni-plugin/dataplane_linux.go 559: Deleting workload's device in netns. ContainerID="47f5fa847ec4341f907e7e4a779edef3f6b81ce2251a82ac44adee65eb681622" iface="eth0" netns="/var/run/netns/cni-0e60b74c-c589-22d8-d358-543a62539de1" Jan 30 19:16:42.367377 containerd[1497]: 2025-01-30 19:16:42.321 [INFO][3044] cni-plugin/dataplane_linux.go 570: Entered netns, deleting veth. ContainerID="47f5fa847ec4341f907e7e4a779edef3f6b81ce2251a82ac44adee65eb681622" iface="eth0" netns="/var/run/netns/cni-0e60b74c-c589-22d8-d358-543a62539de1" Jan 30 19:16:42.367377 containerd[1497]: 2025-01-30 19:16:42.322 [INFO][3044] cni-plugin/dataplane_linux.go 597: Workload's veth was already gone. Nothing to do. 
ContainerID="47f5fa847ec4341f907e7e4a779edef3f6b81ce2251a82ac44adee65eb681622" iface="eth0" netns="/var/run/netns/cni-0e60b74c-c589-22d8-d358-543a62539de1" Jan 30 19:16:42.367377 containerd[1497]: 2025-01-30 19:16:42.322 [INFO][3044] cni-plugin/k8s.go 615: Releasing IP address(es) ContainerID="47f5fa847ec4341f907e7e4a779edef3f6b81ce2251a82ac44adee65eb681622" Jan 30 19:16:42.367377 containerd[1497]: 2025-01-30 19:16:42.322 [INFO][3044] cni-plugin/utils.go 188: Calico CNI releasing IP address ContainerID="47f5fa847ec4341f907e7e4a779edef3f6b81ce2251a82ac44adee65eb681622" Jan 30 19:16:42.367377 containerd[1497]: 2025-01-30 19:16:42.352 [INFO][3051] ipam/ipam_plugin.go 412: Releasing address using handleID ContainerID="47f5fa847ec4341f907e7e4a779edef3f6b81ce2251a82ac44adee65eb681622" HandleID="k8s-pod-network.47f5fa847ec4341f907e7e4a779edef3f6b81ce2251a82ac44adee65eb681622" Workload="10.230.38.22-k8s-nginx--deployment--7fcdb87857--dvkbv-eth0" Jan 30 19:16:42.367377 containerd[1497]: 2025-01-30 19:16:42.352 [INFO][3051] ipam/ipam_plugin.go 353: About to acquire host-wide IPAM lock. Jan 30 19:16:42.367377 containerd[1497]: 2025-01-30 19:16:42.352 [INFO][3051] ipam/ipam_plugin.go 368: Acquired host-wide IPAM lock. Jan 30 19:16:42.367377 containerd[1497]: 2025-01-30 19:16:42.362 [WARNING][3051] ipam/ipam_plugin.go 429: Asked to release address but it doesn't exist. Ignoring ContainerID="47f5fa847ec4341f907e7e4a779edef3f6b81ce2251a82ac44adee65eb681622" HandleID="k8s-pod-network.47f5fa847ec4341f907e7e4a779edef3f6b81ce2251a82ac44adee65eb681622" Workload="10.230.38.22-k8s-nginx--deployment--7fcdb87857--dvkbv-eth0" Jan 30 19:16:42.367377 containerd[1497]: 2025-01-30 19:16:42.362 [INFO][3051] ipam/ipam_plugin.go 440: Releasing address using workloadID ContainerID="47f5fa847ec4341f907e7e4a779edef3f6b81ce2251a82ac44adee65eb681622" HandleID="k8s-pod-network.47f5fa847ec4341f907e7e4a779edef3f6b81ce2251a82ac44adee65eb681622" Workload="10.230.38.22-k8s-nginx--deployment--7fcdb87857--dvkbv-eth0" Jan 30 19:16:42.367377 containerd[1497]: 2025-01-30 19:16:42.364 [INFO][3051] ipam/ipam_plugin.go 374: Released host-wide IPAM lock. Jan 30 19:16:42.367377 containerd[1497]: 2025-01-30 19:16:42.365 [INFO][3044] cni-plugin/k8s.go 621: Teardown processing complete. ContainerID="47f5fa847ec4341f907e7e4a779edef3f6b81ce2251a82ac44adee65eb681622" Jan 30 19:16:42.370486 containerd[1497]: time="2025-01-30T19:16:42.367675682Z" level=info msg="TearDown network for sandbox \"47f5fa847ec4341f907e7e4a779edef3f6b81ce2251a82ac44adee65eb681622\" successfully" Jan 30 19:16:42.370486 containerd[1497]: time="2025-01-30T19:16:42.367711908Z" level=info msg="StopPodSandbox for \"47f5fa847ec4341f907e7e4a779edef3f6b81ce2251a82ac44adee65eb681622\" returns successfully" Jan 30 19:16:42.370486 containerd[1497]: time="2025-01-30T19:16:42.368894449Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:nginx-deployment-7fcdb87857-dvkbv,Uid:57660fc5-fce7-487a-8617-deb8ce9ecbd3,Namespace:default,Attempt:1,}" Jan 30 19:16:42.372977 systemd[1]: run-netns-cni\x2d0e60b74c\x2dc589\x2d22d8\x2dd358\x2d543a62539de1.mount: Deactivated successfully. 
Jan 30 19:16:42.395866 kubelet[1905]: I0130 19:16:42.395798 1905 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="calico-system/csi-node-driver-n5m8t" podStartSLOduration=30.254377088 podStartE2EDuration="33.395778406s" podCreationTimestamp="2025-01-30 19:16:09 +0000 UTC" firstStartedPulling="2025-01-30 19:16:38.896234337 +0000 UTC m=+31.037975363" lastFinishedPulling="2025-01-30 19:16:42.037635656 +0000 UTC m=+34.179376681" observedRunningTime="2025-01-30 19:16:42.395577554 +0000 UTC m=+34.537318603" watchObservedRunningTime="2025-01-30 19:16:42.395778406 +0000 UTC m=+34.537519450" Jan 30 19:16:42.528498 systemd-networkd[1422]: cali7b0b9a94470: Link UP Jan 30 19:16:42.530046 systemd-networkd[1422]: cali7b0b9a94470: Gained carrier Jan 30 19:16:42.542365 containerd[1497]: 2025-01-30 19:16:42.431 [INFO][3058] cni-plugin/plugin.go 325: Calico CNI found existing endpoint: &{{WorkloadEndpoint projectcalico.org/v3} {10.230.38.22-k8s-nginx--deployment--7fcdb87857--dvkbv-eth0 nginx-deployment-7fcdb87857- default 57660fc5-fce7-487a-8617-deb8ce9ecbd3 1147 0 2025-01-30 19:16:27 +0000 UTC map[app:nginx pod-template-hash:7fcdb87857 projectcalico.org/namespace:default projectcalico.org/orchestrator:k8s projectcalico.org/serviceaccount:default] map[] [] [] []} {k8s 10.230.38.22 nginx-deployment-7fcdb87857-dvkbv eth0 default [] [] [kns.default ksa.default.default] cali7b0b9a94470 [] []}} ContainerID="dd2127b18d1dda06e83f05c45cff2425a2af8b448a828ec1a09df48769ef2faf" Namespace="default" Pod="nginx-deployment-7fcdb87857-dvkbv" WorkloadEndpoint="10.230.38.22-k8s-nginx--deployment--7fcdb87857--dvkbv-" Jan 30 19:16:42.542365 containerd[1497]: 2025-01-30 19:16:42.431 [INFO][3058] cni-plugin/k8s.go 77: Extracted identifiers for CmdAddK8s ContainerID="dd2127b18d1dda06e83f05c45cff2425a2af8b448a828ec1a09df48769ef2faf" Namespace="default" Pod="nginx-deployment-7fcdb87857-dvkbv" WorkloadEndpoint="10.230.38.22-k8s-nginx--deployment--7fcdb87857--dvkbv-eth0" Jan 30 19:16:42.542365 containerd[1497]: 2025-01-30 19:16:42.469 [INFO][3068] ipam/ipam_plugin.go 225: Calico CNI IPAM request count IPv4=1 IPv6=0 ContainerID="dd2127b18d1dda06e83f05c45cff2425a2af8b448a828ec1a09df48769ef2faf" HandleID="k8s-pod-network.dd2127b18d1dda06e83f05c45cff2425a2af8b448a828ec1a09df48769ef2faf" Workload="10.230.38.22-k8s-nginx--deployment--7fcdb87857--dvkbv-eth0" Jan 30 19:16:42.542365 containerd[1497]: 2025-01-30 19:16:42.484 [INFO][3068] ipam/ipam_plugin.go 265: Auto assigning IP ContainerID="dd2127b18d1dda06e83f05c45cff2425a2af8b448a828ec1a09df48769ef2faf" HandleID="k8s-pod-network.dd2127b18d1dda06e83f05c45cff2425a2af8b448a828ec1a09df48769ef2faf" Workload="10.230.38.22-k8s-nginx--deployment--7fcdb87857--dvkbv-eth0" assignArgs=ipam.AutoAssignArgs{Num4:1, Num6:0, HandleID:(*string)(0xc000334490), Attrs:map[string]string{"namespace":"default", "node":"10.230.38.22", "pod":"nginx-deployment-7fcdb87857-dvkbv", "timestamp":"2025-01-30 19:16:42.469851881 +0000 UTC"}, Hostname:"10.230.38.22", IPv4Pools:[]net.IPNet{}, IPv6Pools:[]net.IPNet{}, MaxBlocksPerHost:0, HostReservedAttrIPv4s:(*ipam.HostReservedAttr)(nil), HostReservedAttrIPv6s:(*ipam.HostReservedAttr)(nil), IntendedUse:"Workload"} Jan 30 19:16:42.542365 containerd[1497]: 2025-01-30 19:16:42.484 [INFO][3068] ipam/ipam_plugin.go 353: About to acquire host-wide IPAM lock. Jan 30 19:16:42.542365 containerd[1497]: 2025-01-30 19:16:42.484 [INFO][3068] ipam/ipam_plugin.go 368: Acquired host-wide IPAM lock. 
Jan 30 19:16:42.542365 containerd[1497]: 2025-01-30 19:16:42.484 [INFO][3068] ipam/ipam.go 107: Auto-assign 1 ipv4, 0 ipv6 addrs for host '10.230.38.22' Jan 30 19:16:42.542365 containerd[1497]: 2025-01-30 19:16:42.487 [INFO][3068] ipam/ipam.go 660: Looking up existing affinities for host handle="k8s-pod-network.dd2127b18d1dda06e83f05c45cff2425a2af8b448a828ec1a09df48769ef2faf" host="10.230.38.22" Jan 30 19:16:42.542365 containerd[1497]: 2025-01-30 19:16:42.493 [INFO][3068] ipam/ipam.go 372: Looking up existing affinities for host host="10.230.38.22" Jan 30 19:16:42.542365 containerd[1497]: 2025-01-30 19:16:42.500 [INFO][3068] ipam/ipam.go 489: Trying affinity for 192.168.9.0/26 host="10.230.38.22" Jan 30 19:16:42.542365 containerd[1497]: 2025-01-30 19:16:42.503 [INFO][3068] ipam/ipam.go 155: Attempting to load block cidr=192.168.9.0/26 host="10.230.38.22" Jan 30 19:16:42.542365 containerd[1497]: 2025-01-30 19:16:42.507 [INFO][3068] ipam/ipam.go 232: Affinity is confirmed and block has been loaded cidr=192.168.9.0/26 host="10.230.38.22" Jan 30 19:16:42.542365 containerd[1497]: 2025-01-30 19:16:42.507 [INFO][3068] ipam/ipam.go 1180: Attempting to assign 1 addresses from block block=192.168.9.0/26 handle="k8s-pod-network.dd2127b18d1dda06e83f05c45cff2425a2af8b448a828ec1a09df48769ef2faf" host="10.230.38.22" Jan 30 19:16:42.542365 containerd[1497]: 2025-01-30 19:16:42.510 [INFO][3068] ipam/ipam.go 1685: Creating new handle: k8s-pod-network.dd2127b18d1dda06e83f05c45cff2425a2af8b448a828ec1a09df48769ef2faf Jan 30 19:16:42.542365 containerd[1497]: 2025-01-30 19:16:42.515 [INFO][3068] ipam/ipam.go 1203: Writing block in order to claim IPs block=192.168.9.0/26 handle="k8s-pod-network.dd2127b18d1dda06e83f05c45cff2425a2af8b448a828ec1a09df48769ef2faf" host="10.230.38.22" Jan 30 19:16:42.542365 containerd[1497]: 2025-01-30 19:16:42.522 [INFO][3068] ipam/ipam.go 1216: Successfully claimed IPs: [192.168.9.2/26] block=192.168.9.0/26 handle="k8s-pod-network.dd2127b18d1dda06e83f05c45cff2425a2af8b448a828ec1a09df48769ef2faf" host="10.230.38.22" Jan 30 19:16:42.542365 containerd[1497]: 2025-01-30 19:16:42.522 [INFO][3068] ipam/ipam.go 847: Auto-assigned 1 out of 1 IPv4s: [192.168.9.2/26] handle="k8s-pod-network.dd2127b18d1dda06e83f05c45cff2425a2af8b448a828ec1a09df48769ef2faf" host="10.230.38.22" Jan 30 19:16:42.542365 containerd[1497]: 2025-01-30 19:16:42.522 [INFO][3068] ipam/ipam_plugin.go 374: Released host-wide IPAM lock. 
Jan 30 19:16:42.542365 containerd[1497]: 2025-01-30 19:16:42.522 [INFO][3068] ipam/ipam_plugin.go 283: Calico CNI IPAM assigned addresses IPv4=[192.168.9.2/26] IPv6=[] ContainerID="dd2127b18d1dda06e83f05c45cff2425a2af8b448a828ec1a09df48769ef2faf" HandleID="k8s-pod-network.dd2127b18d1dda06e83f05c45cff2425a2af8b448a828ec1a09df48769ef2faf" Workload="10.230.38.22-k8s-nginx--deployment--7fcdb87857--dvkbv-eth0" Jan 30 19:16:42.544999 containerd[1497]: 2025-01-30 19:16:42.524 [INFO][3058] cni-plugin/k8s.go 386: Populated endpoint ContainerID="dd2127b18d1dda06e83f05c45cff2425a2af8b448a828ec1a09df48769ef2faf" Namespace="default" Pod="nginx-deployment-7fcdb87857-dvkbv" WorkloadEndpoint="10.230.38.22-k8s-nginx--deployment--7fcdb87857--dvkbv-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"10.230.38.22-k8s-nginx--deployment--7fcdb87857--dvkbv-eth0", GenerateName:"nginx-deployment-7fcdb87857-", Namespace:"default", SelfLink:"", UID:"57660fc5-fce7-487a-8617-deb8ce9ecbd3", ResourceVersion:"1147", Generation:0, CreationTimestamp:time.Date(2025, time.January, 30, 19, 16, 27, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"app":"nginx", "pod-template-hash":"7fcdb87857", "projectcalico.org/namespace":"default", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"default"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"10.230.38.22", ContainerID:"", Pod:"nginx-deployment-7fcdb87857-dvkbv", Endpoint:"eth0", ServiceAccountName:"default", IPNetworks:[]string{"192.168.9.2/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.default", "ksa.default.default"}, InterfaceName:"cali7b0b9a94470", MAC:"", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil)}} Jan 30 19:16:42.544999 containerd[1497]: 2025-01-30 19:16:42.524 [INFO][3058] cni-plugin/k8s.go 387: Calico CNI using IPs: [192.168.9.2/32] ContainerID="dd2127b18d1dda06e83f05c45cff2425a2af8b448a828ec1a09df48769ef2faf" Namespace="default" Pod="nginx-deployment-7fcdb87857-dvkbv" WorkloadEndpoint="10.230.38.22-k8s-nginx--deployment--7fcdb87857--dvkbv-eth0" Jan 30 19:16:42.544999 containerd[1497]: 2025-01-30 19:16:42.525 [INFO][3058] cni-plugin/dataplane_linux.go 69: Setting the host side veth name to cali7b0b9a94470 ContainerID="dd2127b18d1dda06e83f05c45cff2425a2af8b448a828ec1a09df48769ef2faf" Namespace="default" Pod="nginx-deployment-7fcdb87857-dvkbv" WorkloadEndpoint="10.230.38.22-k8s-nginx--deployment--7fcdb87857--dvkbv-eth0" Jan 30 19:16:42.544999 containerd[1497]: 2025-01-30 19:16:42.530 [INFO][3058] cni-plugin/dataplane_linux.go 508: Disabling IPv4 forwarding ContainerID="dd2127b18d1dda06e83f05c45cff2425a2af8b448a828ec1a09df48769ef2faf" Namespace="default" Pod="nginx-deployment-7fcdb87857-dvkbv" WorkloadEndpoint="10.230.38.22-k8s-nginx--deployment--7fcdb87857--dvkbv-eth0" Jan 30 19:16:42.544999 containerd[1497]: 2025-01-30 19:16:42.530 [INFO][3058] cni-plugin/k8s.go 414: Added Mac, interface name, and active container ID to endpoint ContainerID="dd2127b18d1dda06e83f05c45cff2425a2af8b448a828ec1a09df48769ef2faf" Namespace="default" Pod="nginx-deployment-7fcdb87857-dvkbv" WorkloadEndpoint="10.230.38.22-k8s-nginx--deployment--7fcdb87857--dvkbv-eth0" 
endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"10.230.38.22-k8s-nginx--deployment--7fcdb87857--dvkbv-eth0", GenerateName:"nginx-deployment-7fcdb87857-", Namespace:"default", SelfLink:"", UID:"57660fc5-fce7-487a-8617-deb8ce9ecbd3", ResourceVersion:"1147", Generation:0, CreationTimestamp:time.Date(2025, time.January, 30, 19, 16, 27, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"app":"nginx", "pod-template-hash":"7fcdb87857", "projectcalico.org/namespace":"default", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"default"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"10.230.38.22", ContainerID:"dd2127b18d1dda06e83f05c45cff2425a2af8b448a828ec1a09df48769ef2faf", Pod:"nginx-deployment-7fcdb87857-dvkbv", Endpoint:"eth0", ServiceAccountName:"default", IPNetworks:[]string{"192.168.9.2/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.default", "ksa.default.default"}, InterfaceName:"cali7b0b9a94470", MAC:"da:20:c3:95:09:d5", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil)}} Jan 30 19:16:42.544999 containerd[1497]: 2025-01-30 19:16:42.540 [INFO][3058] cni-plugin/k8s.go 500: Wrote updated endpoint to datastore ContainerID="dd2127b18d1dda06e83f05c45cff2425a2af8b448a828ec1a09df48769ef2faf" Namespace="default" Pod="nginx-deployment-7fcdb87857-dvkbv" WorkloadEndpoint="10.230.38.22-k8s-nginx--deployment--7fcdb87857--dvkbv-eth0" Jan 30 19:16:42.580230 containerd[1497]: time="2025-01-30T19:16:42.579861194Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Jan 30 19:16:42.580230 containerd[1497]: time="2025-01-30T19:16:42.579923021Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Jan 30 19:16:42.580230 containerd[1497]: time="2025-01-30T19:16:42.579939477Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Jan 30 19:16:42.580230 containerd[1497]: time="2025-01-30T19:16:42.580086715Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Jan 30 19:16:42.606547 systemd[1]: Started cri-containerd-dd2127b18d1dda06e83f05c45cff2425a2af8b448a828ec1a09df48769ef2faf.scope - libcontainer container dd2127b18d1dda06e83f05c45cff2425a2af8b448a828ec1a09df48769ef2faf. 
Jan 30 19:16:42.662380 containerd[1497]: time="2025-01-30T19:16:42.662332912Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:nginx-deployment-7fcdb87857-dvkbv,Uid:57660fc5-fce7-487a-8617-deb8ce9ecbd3,Namespace:default,Attempt:1,} returns sandbox id \"dd2127b18d1dda06e83f05c45cff2425a2af8b448a828ec1a09df48769ef2faf\"" Jan 30 19:16:42.664320 containerd[1497]: time="2025-01-30T19:16:42.664132051Z" level=info msg="PullImage \"ghcr.io/flatcar/nginx:latest\"" Jan 30 19:16:43.146633 kubelet[1905]: E0130 19:16:43.146544 1905 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Jan 30 19:16:44.150259 kubelet[1905]: E0130 19:16:44.149405 1905 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Jan 30 19:16:44.463462 systemd-networkd[1422]: cali7b0b9a94470: Gained IPv6LL Jan 30 19:16:45.150707 kubelet[1905]: E0130 19:16:45.150656 1905 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Jan 30 19:16:46.151655 kubelet[1905]: E0130 19:16:46.151601 1905 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Jan 30 19:16:46.324760 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount2630122777.mount: Deactivated successfully. Jan 30 19:16:47.153383 kubelet[1905]: E0130 19:16:47.153254 1905 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Jan 30 19:16:48.051534 containerd[1497]: time="2025-01-30T19:16:48.051469199Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/nginx:latest\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jan 30 19:16:48.053086 containerd[1497]: time="2025-01-30T19:16:48.052856504Z" level=info msg="stop pulling image ghcr.io/flatcar/nginx:latest: active requests=0, bytes read=71015561" Jan 30 19:16:48.053962 containerd[1497]: time="2025-01-30T19:16:48.053893635Z" level=info msg="ImageCreate event name:\"sha256:0dcfd986e814f68db775fba6b61fbaec3761562dc2ae3043d38dbff123e1bb1e\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jan 30 19:16:48.057868 containerd[1497]: time="2025-01-30T19:16:48.057830649Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/nginx@sha256:2ffeb5a7ca68f2017f0bc48251750a6e40fcd3c341b94a22fc7812dcabbb84db\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jan 30 19:16:48.059339 containerd[1497]: time="2025-01-30T19:16:48.059296361Z" level=info msg="Pulled image \"ghcr.io/flatcar/nginx:latest\" with image id \"sha256:0dcfd986e814f68db775fba6b61fbaec3761562dc2ae3043d38dbff123e1bb1e\", repo tag \"ghcr.io/flatcar/nginx:latest\", repo digest \"ghcr.io/flatcar/nginx@sha256:2ffeb5a7ca68f2017f0bc48251750a6e40fcd3c341b94a22fc7812dcabbb84db\", size \"71015439\" in 5.395121813s" Jan 30 19:16:48.059418 containerd[1497]: time="2025-01-30T19:16:48.059349739Z" level=info msg="PullImage \"ghcr.io/flatcar/nginx:latest\" returns image reference \"sha256:0dcfd986e814f68db775fba6b61fbaec3761562dc2ae3043d38dbff123e1bb1e\"" Jan 30 19:16:48.073199 containerd[1497]: time="2025-01-30T19:16:48.073157306Z" level=info msg="CreateContainer within sandbox \"dd2127b18d1dda06e83f05c45cff2425a2af8b448a828ec1a09df48769ef2faf\" for container &ContainerMetadata{Name:nginx,Attempt:0,}" Jan 30 19:16:48.087351 containerd[1497]: time="2025-01-30T19:16:48.087212211Z" level=info msg="CreateContainer within sandbox 
\"dd2127b18d1dda06e83f05c45cff2425a2af8b448a828ec1a09df48769ef2faf\" for &ContainerMetadata{Name:nginx,Attempt:0,} returns container id \"696c95e5ef8e59e9f5fe71c0e0ed091ef965aba7fb7be4463b07f29fef317090\"" Jan 30 19:16:48.088279 containerd[1497]: time="2025-01-30T19:16:48.087935549Z" level=info msg="StartContainer for \"696c95e5ef8e59e9f5fe71c0e0ed091ef965aba7fb7be4463b07f29fef317090\"" Jan 30 19:16:48.132532 systemd[1]: Started cri-containerd-696c95e5ef8e59e9f5fe71c0e0ed091ef965aba7fb7be4463b07f29fef317090.scope - libcontainer container 696c95e5ef8e59e9f5fe71c0e0ed091ef965aba7fb7be4463b07f29fef317090. Jan 30 19:16:48.153859 kubelet[1905]: E0130 19:16:48.153785 1905 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Jan 30 19:16:48.168286 containerd[1497]: time="2025-01-30T19:16:48.168070586Z" level=info msg="StartContainer for \"696c95e5ef8e59e9f5fe71c0e0ed091ef965aba7fb7be4463b07f29fef317090\" returns successfully" Jan 30 19:16:49.119046 kubelet[1905]: E0130 19:16:49.118965 1905 file.go:104] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Jan 30 19:16:49.154771 kubelet[1905]: E0130 19:16:49.154678 1905 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Jan 30 19:16:50.154956 kubelet[1905]: E0130 19:16:50.154879 1905 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Jan 30 19:16:51.156127 kubelet[1905]: E0130 19:16:51.156058 1905 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Jan 30 19:16:52.156743 kubelet[1905]: E0130 19:16:52.156646 1905 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Jan 30 19:16:53.157900 kubelet[1905]: E0130 19:16:53.157826 1905 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Jan 30 19:16:54.158367 kubelet[1905]: E0130 19:16:54.158265 1905 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Jan 30 19:16:54.786845 kubelet[1905]: I0130 19:16:54.786745 1905 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="default/nginx-deployment-7fcdb87857-dvkbv" podStartSLOduration=22.389634493 podStartE2EDuration="27.786720879s" podCreationTimestamp="2025-01-30 19:16:27 +0000 UTC" firstStartedPulling="2025-01-30 19:16:42.6637229 +0000 UTC m=+34.805463924" lastFinishedPulling="2025-01-30 19:16:48.060809285 +0000 UTC m=+40.202550310" observedRunningTime="2025-01-30 19:16:48.420685906 +0000 UTC m=+40.562426951" watchObservedRunningTime="2025-01-30 19:16:54.786720879 +0000 UTC m=+46.928461912" Jan 30 19:16:54.794895 systemd[1]: Created slice kubepods-besteffort-poda62c2975_30cd_4ae3_a7d7_2eeb9447df8a.slice - libcontainer container kubepods-besteffort-poda62c2975_30cd_4ae3_a7d7_2eeb9447df8a.slice. 
Jan 30 19:16:54.858021 kubelet[1905]: I0130 19:16:54.857926 1905 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"data\" (UniqueName: \"kubernetes.io/empty-dir/a62c2975-30cd-4ae3-a7d7-2eeb9447df8a-data\") pod \"nfs-server-provisioner-0\" (UID: \"a62c2975-30cd-4ae3-a7d7-2eeb9447df8a\") " pod="default/nfs-server-provisioner-0" Jan 30 19:16:54.858210 kubelet[1905]: I0130 19:16:54.858045 1905 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-hvbbr\" (UniqueName: \"kubernetes.io/projected/a62c2975-30cd-4ae3-a7d7-2eeb9447df8a-kube-api-access-hvbbr\") pod \"nfs-server-provisioner-0\" (UID: \"a62c2975-30cd-4ae3-a7d7-2eeb9447df8a\") " pod="default/nfs-server-provisioner-0" Jan 30 19:16:55.100599 containerd[1497]: time="2025-01-30T19:16:55.100467791Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:nfs-server-provisioner-0,Uid:a62c2975-30cd-4ae3-a7d7-2eeb9447df8a,Namespace:default,Attempt:0,}" Jan 30 19:16:55.158863 kubelet[1905]: E0130 19:16:55.158768 1905 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Jan 30 19:16:55.286852 systemd-networkd[1422]: cali60e51b789ff: Link UP Jan 30 19:16:55.288413 systemd-networkd[1422]: cali60e51b789ff: Gained carrier Jan 30 19:16:55.304288 containerd[1497]: 2025-01-30 19:16:55.172 [INFO][3235] cni-plugin/plugin.go 325: Calico CNI found existing endpoint: &{{WorkloadEndpoint projectcalico.org/v3} {10.230.38.22-k8s-nfs--server--provisioner--0-eth0 nfs-server-provisioner- default a62c2975-30cd-4ae3-a7d7-2eeb9447df8a 1200 0 2025-01-30 19:16:54 +0000 UTC map[app:nfs-server-provisioner apps.kubernetes.io/pod-index:0 chart:nfs-server-provisioner-1.8.0 controller-revision-hash:nfs-server-provisioner-d5cbb7f57 heritage:Helm projectcalico.org/namespace:default projectcalico.org/orchestrator:k8s projectcalico.org/serviceaccount:nfs-server-provisioner release:nfs-server-provisioner statefulset.kubernetes.io/pod-name:nfs-server-provisioner-0] map[] [] [] []} {k8s 10.230.38.22 nfs-server-provisioner-0 eth0 nfs-server-provisioner [] [] [kns.default ksa.default.nfs-server-provisioner] cali60e51b789ff [{nfs TCP 2049 0 } {nfs-udp UDP 2049 0 } {nlockmgr TCP 32803 0 } {nlockmgr-udp UDP 32803 0 } {mountd TCP 20048 0 } {mountd-udp UDP 20048 0 } {rquotad TCP 875 0 } {rquotad-udp UDP 875 0 } {rpcbind TCP 111 0 } {rpcbind-udp UDP 111 0 } {statd TCP 662 0 } {statd-udp UDP 662 0 }] []}} ContainerID="c19875a2eee52ec1fb27784c0f1bfda7fa4cad6aa282831903174686bdabce24" Namespace="default" Pod="nfs-server-provisioner-0" WorkloadEndpoint="10.230.38.22-k8s-nfs--server--provisioner--0-" Jan 30 19:16:55.304288 containerd[1497]: 2025-01-30 19:16:55.172 [INFO][3235] cni-plugin/k8s.go 77: Extracted identifiers for CmdAddK8s ContainerID="c19875a2eee52ec1fb27784c0f1bfda7fa4cad6aa282831903174686bdabce24" Namespace="default" Pod="nfs-server-provisioner-0" WorkloadEndpoint="10.230.38.22-k8s-nfs--server--provisioner--0-eth0" Jan 30 19:16:55.304288 containerd[1497]: 2025-01-30 19:16:55.219 [INFO][3246] ipam/ipam_plugin.go 225: Calico CNI IPAM request count IPv4=1 IPv6=0 ContainerID="c19875a2eee52ec1fb27784c0f1bfda7fa4cad6aa282831903174686bdabce24" HandleID="k8s-pod-network.c19875a2eee52ec1fb27784c0f1bfda7fa4cad6aa282831903174686bdabce24" Workload="10.230.38.22-k8s-nfs--server--provisioner--0-eth0" Jan 30 19:16:55.304288 containerd[1497]: 2025-01-30 19:16:55.232 [INFO][3246] ipam/ipam_plugin.go 265: Auto assigning IP 
ContainerID="c19875a2eee52ec1fb27784c0f1bfda7fa4cad6aa282831903174686bdabce24" HandleID="k8s-pod-network.c19875a2eee52ec1fb27784c0f1bfda7fa4cad6aa282831903174686bdabce24" Workload="10.230.38.22-k8s-nfs--server--provisioner--0-eth0" assignArgs=ipam.AutoAssignArgs{Num4:1, Num6:0, HandleID:(*string)(0xc000332570), Attrs:map[string]string{"namespace":"default", "node":"10.230.38.22", "pod":"nfs-server-provisioner-0", "timestamp":"2025-01-30 19:16:55.21912051 +0000 UTC"}, Hostname:"10.230.38.22", IPv4Pools:[]net.IPNet{}, IPv6Pools:[]net.IPNet{}, MaxBlocksPerHost:0, HostReservedAttrIPv4s:(*ipam.HostReservedAttr)(nil), HostReservedAttrIPv6s:(*ipam.HostReservedAttr)(nil), IntendedUse:"Workload"} Jan 30 19:16:55.304288 containerd[1497]: 2025-01-30 19:16:55.232 [INFO][3246] ipam/ipam_plugin.go 353: About to acquire host-wide IPAM lock. Jan 30 19:16:55.304288 containerd[1497]: 2025-01-30 19:16:55.233 [INFO][3246] ipam/ipam_plugin.go 368: Acquired host-wide IPAM lock. Jan 30 19:16:55.304288 containerd[1497]: 2025-01-30 19:16:55.233 [INFO][3246] ipam/ipam.go 107: Auto-assign 1 ipv4, 0 ipv6 addrs for host '10.230.38.22' Jan 30 19:16:55.304288 containerd[1497]: 2025-01-30 19:16:55.236 [INFO][3246] ipam/ipam.go 660: Looking up existing affinities for host handle="k8s-pod-network.c19875a2eee52ec1fb27784c0f1bfda7fa4cad6aa282831903174686bdabce24" host="10.230.38.22" Jan 30 19:16:55.304288 containerd[1497]: 2025-01-30 19:16:55.250 [INFO][3246] ipam/ipam.go 372: Looking up existing affinities for host host="10.230.38.22" Jan 30 19:16:55.304288 containerd[1497]: 2025-01-30 19:16:55.258 [INFO][3246] ipam/ipam.go 489: Trying affinity for 192.168.9.0/26 host="10.230.38.22" Jan 30 19:16:55.304288 containerd[1497]: 2025-01-30 19:16:55.260 [INFO][3246] ipam/ipam.go 155: Attempting to load block cidr=192.168.9.0/26 host="10.230.38.22" Jan 30 19:16:55.304288 containerd[1497]: 2025-01-30 19:16:55.263 [INFO][3246] ipam/ipam.go 232: Affinity is confirmed and block has been loaded cidr=192.168.9.0/26 host="10.230.38.22" Jan 30 19:16:55.304288 containerd[1497]: 2025-01-30 19:16:55.264 [INFO][3246] ipam/ipam.go 1180: Attempting to assign 1 addresses from block block=192.168.9.0/26 handle="k8s-pod-network.c19875a2eee52ec1fb27784c0f1bfda7fa4cad6aa282831903174686bdabce24" host="10.230.38.22" Jan 30 19:16:55.304288 containerd[1497]: 2025-01-30 19:16:55.266 [INFO][3246] ipam/ipam.go 1685: Creating new handle: k8s-pod-network.c19875a2eee52ec1fb27784c0f1bfda7fa4cad6aa282831903174686bdabce24 Jan 30 19:16:55.304288 containerd[1497]: 2025-01-30 19:16:55.272 [INFO][3246] ipam/ipam.go 1203: Writing block in order to claim IPs block=192.168.9.0/26 handle="k8s-pod-network.c19875a2eee52ec1fb27784c0f1bfda7fa4cad6aa282831903174686bdabce24" host="10.230.38.22" Jan 30 19:16:55.304288 containerd[1497]: 2025-01-30 19:16:55.280 [INFO][3246] ipam/ipam.go 1216: Successfully claimed IPs: [192.168.9.3/26] block=192.168.9.0/26 handle="k8s-pod-network.c19875a2eee52ec1fb27784c0f1bfda7fa4cad6aa282831903174686bdabce24" host="10.230.38.22" Jan 30 19:16:55.304288 containerd[1497]: 2025-01-30 19:16:55.280 [INFO][3246] ipam/ipam.go 847: Auto-assigned 1 out of 1 IPv4s: [192.168.9.3/26] handle="k8s-pod-network.c19875a2eee52ec1fb27784c0f1bfda7fa4cad6aa282831903174686bdabce24" host="10.230.38.22" Jan 30 19:16:55.304288 containerd[1497]: 2025-01-30 19:16:55.280 [INFO][3246] ipam/ipam_plugin.go 374: Released host-wide IPAM lock. 
Jan 30 19:16:55.304288 containerd[1497]: 2025-01-30 19:16:55.280 [INFO][3246] ipam/ipam_plugin.go 283: Calico CNI IPAM assigned addresses IPv4=[192.168.9.3/26] IPv6=[] ContainerID="c19875a2eee52ec1fb27784c0f1bfda7fa4cad6aa282831903174686bdabce24" HandleID="k8s-pod-network.c19875a2eee52ec1fb27784c0f1bfda7fa4cad6aa282831903174686bdabce24" Workload="10.230.38.22-k8s-nfs--server--provisioner--0-eth0" Jan 30 19:16:55.305633 containerd[1497]: 2025-01-30 19:16:55.282 [INFO][3235] cni-plugin/k8s.go 386: Populated endpoint ContainerID="c19875a2eee52ec1fb27784c0f1bfda7fa4cad6aa282831903174686bdabce24" Namespace="default" Pod="nfs-server-provisioner-0" WorkloadEndpoint="10.230.38.22-k8s-nfs--server--provisioner--0-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"10.230.38.22-k8s-nfs--server--provisioner--0-eth0", GenerateName:"nfs-server-provisioner-", Namespace:"default", SelfLink:"", UID:"a62c2975-30cd-4ae3-a7d7-2eeb9447df8a", ResourceVersion:"1200", Generation:0, CreationTimestamp:time.Date(2025, time.January, 30, 19, 16, 54, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"app":"nfs-server-provisioner", "apps.kubernetes.io/pod-index":"0", "chart":"nfs-server-provisioner-1.8.0", "controller-revision-hash":"nfs-server-provisioner-d5cbb7f57", "heritage":"Helm", "projectcalico.org/namespace":"default", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"nfs-server-provisioner", "release":"nfs-server-provisioner", "statefulset.kubernetes.io/pod-name":"nfs-server-provisioner-0"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"10.230.38.22", ContainerID:"", Pod:"nfs-server-provisioner-0", Endpoint:"eth0", ServiceAccountName:"nfs-server-provisioner", IPNetworks:[]string{"192.168.9.3/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.default", "ksa.default.nfs-server-provisioner"}, InterfaceName:"cali60e51b789ff", MAC:"", Ports:[]v3.WorkloadEndpointPort{v3.WorkloadEndpointPort{Name:"nfs", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x801, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"nfs-udp", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"UDP"}, Port:0x801, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"nlockmgr", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x8023, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"nlockmgr-udp", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"UDP"}, Port:0x8023, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"mountd", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x4e50, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"mountd-udp", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"UDP"}, Port:0x4e50, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"rquotad", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x36b, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"rquotad-udp", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"UDP"}, Port:0x36b, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"rpcbind", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x6f, HostPort:0x0, HostIP:""}, 
v3.WorkloadEndpointPort{Name:"rpcbind-udp", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"UDP"}, Port:0x6f, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"statd", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x296, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"statd-udp", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"UDP"}, Port:0x296, HostPort:0x0, HostIP:""}}, AllowSpoofedSourcePrefixes:[]string(nil)}} Jan 30 19:16:55.305633 containerd[1497]: 2025-01-30 19:16:55.283 [INFO][3235] cni-plugin/k8s.go 387: Calico CNI using IPs: [192.168.9.3/32] ContainerID="c19875a2eee52ec1fb27784c0f1bfda7fa4cad6aa282831903174686bdabce24" Namespace="default" Pod="nfs-server-provisioner-0" WorkloadEndpoint="10.230.38.22-k8s-nfs--server--provisioner--0-eth0" Jan 30 19:16:55.305633 containerd[1497]: 2025-01-30 19:16:55.283 [INFO][3235] cni-plugin/dataplane_linux.go 69: Setting the host side veth name to cali60e51b789ff ContainerID="c19875a2eee52ec1fb27784c0f1bfda7fa4cad6aa282831903174686bdabce24" Namespace="default" Pod="nfs-server-provisioner-0" WorkloadEndpoint="10.230.38.22-k8s-nfs--server--provisioner--0-eth0" Jan 30 19:16:55.305633 containerd[1497]: 2025-01-30 19:16:55.289 [INFO][3235] cni-plugin/dataplane_linux.go 508: Disabling IPv4 forwarding ContainerID="c19875a2eee52ec1fb27784c0f1bfda7fa4cad6aa282831903174686bdabce24" Namespace="default" Pod="nfs-server-provisioner-0" WorkloadEndpoint="10.230.38.22-k8s-nfs--server--provisioner--0-eth0" Jan 30 19:16:55.306204 containerd[1497]: 2025-01-30 19:16:55.289 [INFO][3235] cni-plugin/k8s.go 414: Added Mac, interface name, and active container ID to endpoint ContainerID="c19875a2eee52ec1fb27784c0f1bfda7fa4cad6aa282831903174686bdabce24" Namespace="default" Pod="nfs-server-provisioner-0" WorkloadEndpoint="10.230.38.22-k8s-nfs--server--provisioner--0-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"10.230.38.22-k8s-nfs--server--provisioner--0-eth0", GenerateName:"nfs-server-provisioner-", Namespace:"default", SelfLink:"", UID:"a62c2975-30cd-4ae3-a7d7-2eeb9447df8a", ResourceVersion:"1200", Generation:0, CreationTimestamp:time.Date(2025, time.January, 30, 19, 16, 54, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"app":"nfs-server-provisioner", "apps.kubernetes.io/pod-index":"0", "chart":"nfs-server-provisioner-1.8.0", "controller-revision-hash":"nfs-server-provisioner-d5cbb7f57", "heritage":"Helm", "projectcalico.org/namespace":"default", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"nfs-server-provisioner", "release":"nfs-server-provisioner", "statefulset.kubernetes.io/pod-name":"nfs-server-provisioner-0"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"10.230.38.22", ContainerID:"c19875a2eee52ec1fb27784c0f1bfda7fa4cad6aa282831903174686bdabce24", Pod:"nfs-server-provisioner-0", Endpoint:"eth0", ServiceAccountName:"nfs-server-provisioner", IPNetworks:[]string{"192.168.9.3/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.default", "ksa.default.nfs-server-provisioner"}, InterfaceName:"cali60e51b789ff", MAC:"66:f9:e2:be:8a:1d", Ports:[]v3.WorkloadEndpointPort{v3.WorkloadEndpointPort{Name:"nfs", 
Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x801, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"nfs-udp", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"UDP"}, Port:0x801, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"nlockmgr", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x8023, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"nlockmgr-udp", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"UDP"}, Port:0x8023, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"mountd", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x4e50, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"mountd-udp", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"UDP"}, Port:0x4e50, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"rquotad", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x36b, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"rquotad-udp", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"UDP"}, Port:0x36b, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"rpcbind", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x6f, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"rpcbind-udp", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"UDP"}, Port:0x6f, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"statd", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x296, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"statd-udp", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"UDP"}, Port:0x296, HostPort:0x0, HostIP:""}}, AllowSpoofedSourcePrefixes:[]string(nil)}} Jan 30 19:16:55.306204 containerd[1497]: 2025-01-30 19:16:55.302 [INFO][3235] cni-plugin/k8s.go 500: Wrote updated endpoint to datastore ContainerID="c19875a2eee52ec1fb27784c0f1bfda7fa4cad6aa282831903174686bdabce24" Namespace="default" Pod="nfs-server-provisioner-0" WorkloadEndpoint="10.230.38.22-k8s-nfs--server--provisioner--0-eth0" Jan 30 19:16:55.337179 containerd[1497]: time="2025-01-30T19:16:55.337040274Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Jan 30 19:16:55.337179 containerd[1497]: time="2025-01-30T19:16:55.337121597Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Jan 30 19:16:55.337732 containerd[1497]: time="2025-01-30T19:16:55.337470961Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Jan 30 19:16:55.337732 containerd[1497]: time="2025-01-30T19:16:55.337607807Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Jan 30 19:16:55.374471 systemd[1]: Started cri-containerd-c19875a2eee52ec1fb27784c0f1bfda7fa4cad6aa282831903174686bdabce24.scope - libcontainer container c19875a2eee52ec1fb27784c0f1bfda7fa4cad6aa282831903174686bdabce24. 
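The WorkloadEndpoint dump above prints the nfs-server-provisioner ports as Go hexadecimal literals (Port:0x801, 0x8023, 0x4e50, 0x36b, 0x6f, 0x296). Decoding them yields the usual NFS service ports the chart exposes over both TCP and UDP. A minimal Go sketch of that conversion, with the values copied straight from the log (the program itself is illustrative only):

package main

import "fmt"

func main() {
	// Port values exactly as printed in the WorkloadEndpoint above, in hex.
	ports := []struct {
		name string
		port uint16
	}{
		{"nfs", 0x801},       // 2049
		{"nlockmgr", 0x8023}, // 32803
		{"mountd", 0x4e50},   // 20048
		{"rquotad", 0x36b},   // 875
		{"rpcbind", 0x6f},    // 111
		{"statd", 0x296},     // 662
	}
	for _, p := range ports {
		fmt.Printf("%-8s -> %d\n", p.name, p.port)
	}
}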
Jan 30 19:16:55.433711 containerd[1497]: time="2025-01-30T19:16:55.433650783Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:nfs-server-provisioner-0,Uid:a62c2975-30cd-4ae3-a7d7-2eeb9447df8a,Namespace:default,Attempt:0,} returns sandbox id \"c19875a2eee52ec1fb27784c0f1bfda7fa4cad6aa282831903174686bdabce24\"" Jan 30 19:16:55.435905 containerd[1497]: time="2025-01-30T19:16:55.435850583Z" level=info msg="PullImage \"registry.k8s.io/sig-storage/nfs-provisioner:v4.0.8\"" Jan 30 19:16:56.159827 kubelet[1905]: E0130 19:16:56.159354 1905 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Jan 30 19:16:56.752462 systemd-networkd[1422]: cali60e51b789ff: Gained IPv6LL Jan 30 19:16:57.159754 kubelet[1905]: E0130 19:16:57.159670 1905 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Jan 30 19:16:58.160842 kubelet[1905]: E0130 19:16:58.160744 1905 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Jan 30 19:16:59.056352 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount136397025.mount: Deactivated successfully. Jan 30 19:16:59.161072 kubelet[1905]: E0130 19:16:59.160989 1905 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Jan 30 19:17:00.161522 kubelet[1905]: E0130 19:17:00.161425 1905 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Jan 30 19:17:01.162653 kubelet[1905]: E0130 19:17:01.162479 1905 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Jan 30 19:17:02.138211 containerd[1497]: time="2025-01-30T19:17:02.138100663Z" level=info msg="ImageCreate event name:\"registry.k8s.io/sig-storage/nfs-provisioner:v4.0.8\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jan 30 19:17:02.141858 containerd[1497]: time="2025-01-30T19:17:02.141782263Z" level=info msg="stop pulling image registry.k8s.io/sig-storage/nfs-provisioner:v4.0.8: active requests=0, bytes read=91039414" Jan 30 19:17:02.146272 containerd[1497]: time="2025-01-30T19:17:02.146175759Z" level=info msg="ImageCreate event name:\"sha256:fd0b16f70b66b72bcb2f91d556fa33eba02729c44ffc5f2c16130e7f9fbed3c4\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jan 30 19:17:02.150040 containerd[1497]: time="2025-01-30T19:17:02.149997575Z" level=info msg="ImageCreate event name:\"registry.k8s.io/sig-storage/nfs-provisioner@sha256:c825f3d5e28bde099bd7a3daace28772d412c9157ad47fa752a9ad0baafc118d\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jan 30 19:17:02.151900 containerd[1497]: time="2025-01-30T19:17:02.151661353Z" level=info msg="Pulled image \"registry.k8s.io/sig-storage/nfs-provisioner:v4.0.8\" with image id \"sha256:fd0b16f70b66b72bcb2f91d556fa33eba02729c44ffc5f2c16130e7f9fbed3c4\", repo tag \"registry.k8s.io/sig-storage/nfs-provisioner:v4.0.8\", repo digest \"registry.k8s.io/sig-storage/nfs-provisioner@sha256:c825f3d5e28bde099bd7a3daace28772d412c9157ad47fa752a9ad0baafc118d\", size \"91036984\" in 6.715746387s" Jan 30 19:17:02.151900 containerd[1497]: time="2025-01-30T19:17:02.151723722Z" level=info msg="PullImage \"registry.k8s.io/sig-storage/nfs-provisioner:v4.0.8\" returns image reference \"sha256:fd0b16f70b66b72bcb2f91d556fa33eba02729c44ffc5f2c16130e7f9fbed3c4\"" Jan 30 19:17:02.160796 containerd[1497]: 
time="2025-01-30T19:17:02.160723997Z" level=info msg="CreateContainer within sandbox \"c19875a2eee52ec1fb27784c0f1bfda7fa4cad6aa282831903174686bdabce24\" for container &ContainerMetadata{Name:nfs-server-provisioner,Attempt:0,}" Jan 30 19:17:02.162736 kubelet[1905]: E0130 19:17:02.162651 1905 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Jan 30 19:17:02.183523 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount1117691156.mount: Deactivated successfully. Jan 30 19:17:02.186210 containerd[1497]: time="2025-01-30T19:17:02.186099793Z" level=info msg="CreateContainer within sandbox \"c19875a2eee52ec1fb27784c0f1bfda7fa4cad6aa282831903174686bdabce24\" for &ContainerMetadata{Name:nfs-server-provisioner,Attempt:0,} returns container id \"1dbdd66b0568c668b703a8cd353abd82fb519a35df641b1f1dbe06625b5b691e\"" Jan 30 19:17:02.188186 containerd[1497]: time="2025-01-30T19:17:02.187794188Z" level=info msg="StartContainer for \"1dbdd66b0568c668b703a8cd353abd82fb519a35df641b1f1dbe06625b5b691e\"" Jan 30 19:17:02.236539 systemd[1]: Started cri-containerd-1dbdd66b0568c668b703a8cd353abd82fb519a35df641b1f1dbe06625b5b691e.scope - libcontainer container 1dbdd66b0568c668b703a8cd353abd82fb519a35df641b1f1dbe06625b5b691e. Jan 30 19:17:02.271747 containerd[1497]: time="2025-01-30T19:17:02.271667158Z" level=info msg="StartContainer for \"1dbdd66b0568c668b703a8cd353abd82fb519a35df641b1f1dbe06625b5b691e\" returns successfully" Jan 30 19:17:02.479287 kubelet[1905]: I0130 19:17:02.479013 1905 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="default/nfs-server-provisioner-0" podStartSLOduration=1.761035039 podStartE2EDuration="8.478972805s" podCreationTimestamp="2025-01-30 19:16:54 +0000 UTC" firstStartedPulling="2025-01-30 19:16:55.435542588 +0000 UTC m=+47.577283608" lastFinishedPulling="2025-01-30 19:17:02.153480348 +0000 UTC m=+54.295221374" observedRunningTime="2025-01-30 19:17:02.477752507 +0000 UTC m=+54.619493551" watchObservedRunningTime="2025-01-30 19:17:02.478972805 +0000 UTC m=+54.620713830" Jan 30 19:17:03.163266 kubelet[1905]: E0130 19:17:03.163105 1905 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Jan 30 19:17:04.163460 kubelet[1905]: E0130 19:17:04.163357 1905 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Jan 30 19:17:05.164327 kubelet[1905]: E0130 19:17:05.164203 1905 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Jan 30 19:17:06.164512 kubelet[1905]: E0130 19:17:06.164418 1905 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Jan 30 19:17:06.380999 systemd[1]: run-containerd-runc-k8s.io-277399867a74b419957289c1e23fb54ef18e54886b7d93c40246038548111877-runc.vb4C7C.mount: Deactivated successfully. 
Jan 30 19:17:07.164703 kubelet[1905]: E0130 19:17:07.164623 1905 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Jan 30 19:17:08.165602 kubelet[1905]: E0130 19:17:08.165485 1905 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Jan 30 19:17:09.118968 kubelet[1905]: E0130 19:17:09.118871 1905 file.go:104] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Jan 30 19:17:09.152958 containerd[1497]: time="2025-01-30T19:17:09.152858239Z" level=info msg="StopPodSandbox for \"47f5fa847ec4341f907e7e4a779edef3f6b81ce2251a82ac44adee65eb681622\"" Jan 30 19:17:09.166661 kubelet[1905]: E0130 19:17:09.166598 1905 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Jan 30 19:17:09.290927 containerd[1497]: 2025-01-30 19:17:09.226 [WARNING][3449] cni-plugin/k8s.go 572: CNI_CONTAINERID does not match WorkloadEndpoint ContainerID, don't delete WEP. ContainerID="47f5fa847ec4341f907e7e4a779edef3f6b81ce2251a82ac44adee65eb681622" WorkloadEndpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"10.230.38.22-k8s-nginx--deployment--7fcdb87857--dvkbv-eth0", GenerateName:"nginx-deployment-7fcdb87857-", Namespace:"default", SelfLink:"", UID:"57660fc5-fce7-487a-8617-deb8ce9ecbd3", ResourceVersion:"1168", Generation:0, CreationTimestamp:time.Date(2025, time.January, 30, 19, 16, 27, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"app":"nginx", "pod-template-hash":"7fcdb87857", "projectcalico.org/namespace":"default", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"default"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"10.230.38.22", ContainerID:"dd2127b18d1dda06e83f05c45cff2425a2af8b448a828ec1a09df48769ef2faf", Pod:"nginx-deployment-7fcdb87857-dvkbv", Endpoint:"eth0", ServiceAccountName:"default", IPNetworks:[]string{"192.168.9.2/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.default", "ksa.default.default"}, InterfaceName:"cali7b0b9a94470", MAC:"", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil)}} Jan 30 19:17:09.290927 containerd[1497]: 2025-01-30 19:17:09.226 [INFO][3449] cni-plugin/k8s.go 608: Cleaning up netns ContainerID="47f5fa847ec4341f907e7e4a779edef3f6b81ce2251a82ac44adee65eb681622" Jan 30 19:17:09.290927 containerd[1497]: 2025-01-30 19:17:09.226 [INFO][3449] cni-plugin/dataplane_linux.go 555: CleanUpNamespace called with no netns name, ignoring. 
ContainerID="47f5fa847ec4341f907e7e4a779edef3f6b81ce2251a82ac44adee65eb681622" iface="eth0" netns="" Jan 30 19:17:09.290927 containerd[1497]: 2025-01-30 19:17:09.226 [INFO][3449] cni-plugin/k8s.go 615: Releasing IP address(es) ContainerID="47f5fa847ec4341f907e7e4a779edef3f6b81ce2251a82ac44adee65eb681622" Jan 30 19:17:09.290927 containerd[1497]: 2025-01-30 19:17:09.226 [INFO][3449] cni-plugin/utils.go 188: Calico CNI releasing IP address ContainerID="47f5fa847ec4341f907e7e4a779edef3f6b81ce2251a82ac44adee65eb681622" Jan 30 19:17:09.290927 containerd[1497]: 2025-01-30 19:17:09.270 [INFO][3455] ipam/ipam_plugin.go 412: Releasing address using handleID ContainerID="47f5fa847ec4341f907e7e4a779edef3f6b81ce2251a82ac44adee65eb681622" HandleID="k8s-pod-network.47f5fa847ec4341f907e7e4a779edef3f6b81ce2251a82ac44adee65eb681622" Workload="10.230.38.22-k8s-nginx--deployment--7fcdb87857--dvkbv-eth0" Jan 30 19:17:09.290927 containerd[1497]: 2025-01-30 19:17:09.271 [INFO][3455] ipam/ipam_plugin.go 353: About to acquire host-wide IPAM lock. Jan 30 19:17:09.290927 containerd[1497]: 2025-01-30 19:17:09.271 [INFO][3455] ipam/ipam_plugin.go 368: Acquired host-wide IPAM lock. Jan 30 19:17:09.290927 containerd[1497]: 2025-01-30 19:17:09.283 [WARNING][3455] ipam/ipam_plugin.go 429: Asked to release address but it doesn't exist. Ignoring ContainerID="47f5fa847ec4341f907e7e4a779edef3f6b81ce2251a82ac44adee65eb681622" HandleID="k8s-pod-network.47f5fa847ec4341f907e7e4a779edef3f6b81ce2251a82ac44adee65eb681622" Workload="10.230.38.22-k8s-nginx--deployment--7fcdb87857--dvkbv-eth0" Jan 30 19:17:09.290927 containerd[1497]: 2025-01-30 19:17:09.283 [INFO][3455] ipam/ipam_plugin.go 440: Releasing address using workloadID ContainerID="47f5fa847ec4341f907e7e4a779edef3f6b81ce2251a82ac44adee65eb681622" HandleID="k8s-pod-network.47f5fa847ec4341f907e7e4a779edef3f6b81ce2251a82ac44adee65eb681622" Workload="10.230.38.22-k8s-nginx--deployment--7fcdb87857--dvkbv-eth0" Jan 30 19:17:09.290927 containerd[1497]: 2025-01-30 19:17:09.287 [INFO][3455] ipam/ipam_plugin.go 374: Released host-wide IPAM lock. Jan 30 19:17:09.290927 containerd[1497]: 2025-01-30 19:17:09.289 [INFO][3449] cni-plugin/k8s.go 621: Teardown processing complete. ContainerID="47f5fa847ec4341f907e7e4a779edef3f6b81ce2251a82ac44adee65eb681622" Jan 30 19:17:09.292605 containerd[1497]: time="2025-01-30T19:17:09.290948808Z" level=info msg="TearDown network for sandbox \"47f5fa847ec4341f907e7e4a779edef3f6b81ce2251a82ac44adee65eb681622\" successfully" Jan 30 19:17:09.292605 containerd[1497]: time="2025-01-30T19:17:09.290980289Z" level=info msg="StopPodSandbox for \"47f5fa847ec4341f907e7e4a779edef3f6b81ce2251a82ac44adee65eb681622\" returns successfully" Jan 30 19:17:09.295528 containerd[1497]: time="2025-01-30T19:17:09.295481928Z" level=info msg="RemovePodSandbox for \"47f5fa847ec4341f907e7e4a779edef3f6b81ce2251a82ac44adee65eb681622\"" Jan 30 19:17:09.295631 containerd[1497]: time="2025-01-30T19:17:09.295538937Z" level=info msg="Forcibly stopping sandbox \"47f5fa847ec4341f907e7e4a779edef3f6b81ce2251a82ac44adee65eb681622\"" Jan 30 19:17:09.405091 containerd[1497]: 2025-01-30 19:17:09.351 [WARNING][3475] cni-plugin/k8s.go 572: CNI_CONTAINERID does not match WorkloadEndpoint ContainerID, don't delete WEP. 
ContainerID="47f5fa847ec4341f907e7e4a779edef3f6b81ce2251a82ac44adee65eb681622" WorkloadEndpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"10.230.38.22-k8s-nginx--deployment--7fcdb87857--dvkbv-eth0", GenerateName:"nginx-deployment-7fcdb87857-", Namespace:"default", SelfLink:"", UID:"57660fc5-fce7-487a-8617-deb8ce9ecbd3", ResourceVersion:"1168", Generation:0, CreationTimestamp:time.Date(2025, time.January, 30, 19, 16, 27, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"app":"nginx", "pod-template-hash":"7fcdb87857", "projectcalico.org/namespace":"default", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"default"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"10.230.38.22", ContainerID:"dd2127b18d1dda06e83f05c45cff2425a2af8b448a828ec1a09df48769ef2faf", Pod:"nginx-deployment-7fcdb87857-dvkbv", Endpoint:"eth0", ServiceAccountName:"default", IPNetworks:[]string{"192.168.9.2/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.default", "ksa.default.default"}, InterfaceName:"cali7b0b9a94470", MAC:"", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil)}} Jan 30 19:17:09.405091 containerd[1497]: 2025-01-30 19:17:09.352 [INFO][3475] cni-plugin/k8s.go 608: Cleaning up netns ContainerID="47f5fa847ec4341f907e7e4a779edef3f6b81ce2251a82ac44adee65eb681622" Jan 30 19:17:09.405091 containerd[1497]: 2025-01-30 19:17:09.352 [INFO][3475] cni-plugin/dataplane_linux.go 555: CleanUpNamespace called with no netns name, ignoring. ContainerID="47f5fa847ec4341f907e7e4a779edef3f6b81ce2251a82ac44adee65eb681622" iface="eth0" netns="" Jan 30 19:17:09.405091 containerd[1497]: 2025-01-30 19:17:09.352 [INFO][3475] cni-plugin/k8s.go 615: Releasing IP address(es) ContainerID="47f5fa847ec4341f907e7e4a779edef3f6b81ce2251a82ac44adee65eb681622" Jan 30 19:17:09.405091 containerd[1497]: 2025-01-30 19:17:09.352 [INFO][3475] cni-plugin/utils.go 188: Calico CNI releasing IP address ContainerID="47f5fa847ec4341f907e7e4a779edef3f6b81ce2251a82ac44adee65eb681622" Jan 30 19:17:09.405091 containerd[1497]: 2025-01-30 19:17:09.389 [INFO][3482] ipam/ipam_plugin.go 412: Releasing address using handleID ContainerID="47f5fa847ec4341f907e7e4a779edef3f6b81ce2251a82ac44adee65eb681622" HandleID="k8s-pod-network.47f5fa847ec4341f907e7e4a779edef3f6b81ce2251a82ac44adee65eb681622" Workload="10.230.38.22-k8s-nginx--deployment--7fcdb87857--dvkbv-eth0" Jan 30 19:17:09.405091 containerd[1497]: 2025-01-30 19:17:09.390 [INFO][3482] ipam/ipam_plugin.go 353: About to acquire host-wide IPAM lock. Jan 30 19:17:09.405091 containerd[1497]: 2025-01-30 19:17:09.390 [INFO][3482] ipam/ipam_plugin.go 368: Acquired host-wide IPAM lock. Jan 30 19:17:09.405091 containerd[1497]: 2025-01-30 19:17:09.399 [WARNING][3482] ipam/ipam_plugin.go 429: Asked to release address but it doesn't exist. 
Ignoring ContainerID="47f5fa847ec4341f907e7e4a779edef3f6b81ce2251a82ac44adee65eb681622" HandleID="k8s-pod-network.47f5fa847ec4341f907e7e4a779edef3f6b81ce2251a82ac44adee65eb681622" Workload="10.230.38.22-k8s-nginx--deployment--7fcdb87857--dvkbv-eth0" Jan 30 19:17:09.405091 containerd[1497]: 2025-01-30 19:17:09.399 [INFO][3482] ipam/ipam_plugin.go 440: Releasing address using workloadID ContainerID="47f5fa847ec4341f907e7e4a779edef3f6b81ce2251a82ac44adee65eb681622" HandleID="k8s-pod-network.47f5fa847ec4341f907e7e4a779edef3f6b81ce2251a82ac44adee65eb681622" Workload="10.230.38.22-k8s-nginx--deployment--7fcdb87857--dvkbv-eth0" Jan 30 19:17:09.405091 containerd[1497]: 2025-01-30 19:17:09.401 [INFO][3482] ipam/ipam_plugin.go 374: Released host-wide IPAM lock. Jan 30 19:17:09.405091 containerd[1497]: 2025-01-30 19:17:09.402 [INFO][3475] cni-plugin/k8s.go 621: Teardown processing complete. ContainerID="47f5fa847ec4341f907e7e4a779edef3f6b81ce2251a82ac44adee65eb681622" Jan 30 19:17:09.405091 containerd[1497]: time="2025-01-30T19:17:09.404631371Z" level=info msg="TearDown network for sandbox \"47f5fa847ec4341f907e7e4a779edef3f6b81ce2251a82ac44adee65eb681622\" successfully" Jan 30 19:17:09.420762 containerd[1497]: time="2025-01-30T19:17:09.420682234Z" level=warning msg="Failed to get podSandbox status for container event for sandboxID \"47f5fa847ec4341f907e7e4a779edef3f6b81ce2251a82ac44adee65eb681622\": an error occurred when try to find sandbox: not found. Sending the event with nil podSandboxStatus." Jan 30 19:17:09.420884 containerd[1497]: time="2025-01-30T19:17:09.420798557Z" level=info msg="RemovePodSandbox \"47f5fa847ec4341f907e7e4a779edef3f6b81ce2251a82ac44adee65eb681622\" returns successfully" Jan 30 19:17:09.421561 containerd[1497]: time="2025-01-30T19:17:09.421515752Z" level=info msg="StopPodSandbox for \"2787c743dda7741f9ff13cca66805cbd8eb96e68d15dcc4fadb5deac18d2ccd1\"" Jan 30 19:17:09.518567 containerd[1497]: 2025-01-30 19:17:09.473 [WARNING][3500] cni-plugin/k8s.go 572: CNI_CONTAINERID does not match WorkloadEndpoint ContainerID, don't delete WEP. 
ContainerID="2787c743dda7741f9ff13cca66805cbd8eb96e68d15dcc4fadb5deac18d2ccd1" WorkloadEndpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"10.230.38.22-k8s-csi--node--driver--n5m8t-eth0", GenerateName:"csi-node-driver-", Namespace:"calico-system", SelfLink:"", UID:"e47b55f6-c6c0-4e43-a8ed-5ed724f4ad91", ResourceVersion:"1148", Generation:0, CreationTimestamp:time.Date(2025, time.January, 30, 19, 16, 9, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"app.kubernetes.io/name":"csi-node-driver", "controller-revision-hash":"84cddb44f", "k8s-app":"csi-node-driver", "name":"csi-node-driver", "pod-template-generation":"1", "projectcalico.org/namespace":"calico-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"csi-node-driver"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"10.230.38.22", ContainerID:"16b998780cfc31b9af93526411d1ed888204afb3f02aede211e2649680c62c7b", Pod:"csi-node-driver-n5m8t", Endpoint:"eth0", ServiceAccountName:"csi-node-driver", IPNetworks:[]string{"192.168.9.1/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-system", "ksa.calico-system.csi-node-driver"}, InterfaceName:"calidd0da33f110", MAC:"", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil)}} Jan 30 19:17:09.518567 containerd[1497]: 2025-01-30 19:17:09.474 [INFO][3500] cni-plugin/k8s.go 608: Cleaning up netns ContainerID="2787c743dda7741f9ff13cca66805cbd8eb96e68d15dcc4fadb5deac18d2ccd1" Jan 30 19:17:09.518567 containerd[1497]: 2025-01-30 19:17:09.474 [INFO][3500] cni-plugin/dataplane_linux.go 555: CleanUpNamespace called with no netns name, ignoring. ContainerID="2787c743dda7741f9ff13cca66805cbd8eb96e68d15dcc4fadb5deac18d2ccd1" iface="eth0" netns="" Jan 30 19:17:09.518567 containerd[1497]: 2025-01-30 19:17:09.474 [INFO][3500] cni-plugin/k8s.go 615: Releasing IP address(es) ContainerID="2787c743dda7741f9ff13cca66805cbd8eb96e68d15dcc4fadb5deac18d2ccd1" Jan 30 19:17:09.518567 containerd[1497]: 2025-01-30 19:17:09.474 [INFO][3500] cni-plugin/utils.go 188: Calico CNI releasing IP address ContainerID="2787c743dda7741f9ff13cca66805cbd8eb96e68d15dcc4fadb5deac18d2ccd1" Jan 30 19:17:09.518567 containerd[1497]: 2025-01-30 19:17:09.503 [INFO][3506] ipam/ipam_plugin.go 412: Releasing address using handleID ContainerID="2787c743dda7741f9ff13cca66805cbd8eb96e68d15dcc4fadb5deac18d2ccd1" HandleID="k8s-pod-network.2787c743dda7741f9ff13cca66805cbd8eb96e68d15dcc4fadb5deac18d2ccd1" Workload="10.230.38.22-k8s-csi--node--driver--n5m8t-eth0" Jan 30 19:17:09.518567 containerd[1497]: 2025-01-30 19:17:09.503 [INFO][3506] ipam/ipam_plugin.go 353: About to acquire host-wide IPAM lock. Jan 30 19:17:09.518567 containerd[1497]: 2025-01-30 19:17:09.503 [INFO][3506] ipam/ipam_plugin.go 368: Acquired host-wide IPAM lock. Jan 30 19:17:09.518567 containerd[1497]: 2025-01-30 19:17:09.513 [WARNING][3506] ipam/ipam_plugin.go 429: Asked to release address but it doesn't exist. 
Ignoring ContainerID="2787c743dda7741f9ff13cca66805cbd8eb96e68d15dcc4fadb5deac18d2ccd1" HandleID="k8s-pod-network.2787c743dda7741f9ff13cca66805cbd8eb96e68d15dcc4fadb5deac18d2ccd1" Workload="10.230.38.22-k8s-csi--node--driver--n5m8t-eth0" Jan 30 19:17:09.518567 containerd[1497]: 2025-01-30 19:17:09.513 [INFO][3506] ipam/ipam_plugin.go 440: Releasing address using workloadID ContainerID="2787c743dda7741f9ff13cca66805cbd8eb96e68d15dcc4fadb5deac18d2ccd1" HandleID="k8s-pod-network.2787c743dda7741f9ff13cca66805cbd8eb96e68d15dcc4fadb5deac18d2ccd1" Workload="10.230.38.22-k8s-csi--node--driver--n5m8t-eth0" Jan 30 19:17:09.518567 containerd[1497]: 2025-01-30 19:17:09.515 [INFO][3506] ipam/ipam_plugin.go 374: Released host-wide IPAM lock. Jan 30 19:17:09.518567 containerd[1497]: 2025-01-30 19:17:09.517 [INFO][3500] cni-plugin/k8s.go 621: Teardown processing complete. ContainerID="2787c743dda7741f9ff13cca66805cbd8eb96e68d15dcc4fadb5deac18d2ccd1" Jan 30 19:17:09.520455 containerd[1497]: time="2025-01-30T19:17:09.518623277Z" level=info msg="TearDown network for sandbox \"2787c743dda7741f9ff13cca66805cbd8eb96e68d15dcc4fadb5deac18d2ccd1\" successfully" Jan 30 19:17:09.520455 containerd[1497]: time="2025-01-30T19:17:09.518672244Z" level=info msg="StopPodSandbox for \"2787c743dda7741f9ff13cca66805cbd8eb96e68d15dcc4fadb5deac18d2ccd1\" returns successfully" Jan 30 19:17:09.520455 containerd[1497]: time="2025-01-30T19:17:09.519423734Z" level=info msg="RemovePodSandbox for \"2787c743dda7741f9ff13cca66805cbd8eb96e68d15dcc4fadb5deac18d2ccd1\"" Jan 30 19:17:09.520455 containerd[1497]: time="2025-01-30T19:17:09.519470973Z" level=info msg="Forcibly stopping sandbox \"2787c743dda7741f9ff13cca66805cbd8eb96e68d15dcc4fadb5deac18d2ccd1\"" Jan 30 19:17:09.616044 containerd[1497]: 2025-01-30 19:17:09.571 [WARNING][3524] cni-plugin/k8s.go 572: CNI_CONTAINERID does not match WorkloadEndpoint ContainerID, don't delete WEP. 
ContainerID="2787c743dda7741f9ff13cca66805cbd8eb96e68d15dcc4fadb5deac18d2ccd1" WorkloadEndpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"10.230.38.22-k8s-csi--node--driver--n5m8t-eth0", GenerateName:"csi-node-driver-", Namespace:"calico-system", SelfLink:"", UID:"e47b55f6-c6c0-4e43-a8ed-5ed724f4ad91", ResourceVersion:"1148", Generation:0, CreationTimestamp:time.Date(2025, time.January, 30, 19, 16, 9, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"app.kubernetes.io/name":"csi-node-driver", "controller-revision-hash":"84cddb44f", "k8s-app":"csi-node-driver", "name":"csi-node-driver", "pod-template-generation":"1", "projectcalico.org/namespace":"calico-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"csi-node-driver"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"10.230.38.22", ContainerID:"16b998780cfc31b9af93526411d1ed888204afb3f02aede211e2649680c62c7b", Pod:"csi-node-driver-n5m8t", Endpoint:"eth0", ServiceAccountName:"csi-node-driver", IPNetworks:[]string{"192.168.9.1/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-system", "ksa.calico-system.csi-node-driver"}, InterfaceName:"calidd0da33f110", MAC:"", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil)}} Jan 30 19:17:09.616044 containerd[1497]: 2025-01-30 19:17:09.571 [INFO][3524] cni-plugin/k8s.go 608: Cleaning up netns ContainerID="2787c743dda7741f9ff13cca66805cbd8eb96e68d15dcc4fadb5deac18d2ccd1" Jan 30 19:17:09.616044 containerd[1497]: 2025-01-30 19:17:09.571 [INFO][3524] cni-plugin/dataplane_linux.go 555: CleanUpNamespace called with no netns name, ignoring. ContainerID="2787c743dda7741f9ff13cca66805cbd8eb96e68d15dcc4fadb5deac18d2ccd1" iface="eth0" netns="" Jan 30 19:17:09.616044 containerd[1497]: 2025-01-30 19:17:09.571 [INFO][3524] cni-plugin/k8s.go 615: Releasing IP address(es) ContainerID="2787c743dda7741f9ff13cca66805cbd8eb96e68d15dcc4fadb5deac18d2ccd1" Jan 30 19:17:09.616044 containerd[1497]: 2025-01-30 19:17:09.571 [INFO][3524] cni-plugin/utils.go 188: Calico CNI releasing IP address ContainerID="2787c743dda7741f9ff13cca66805cbd8eb96e68d15dcc4fadb5deac18d2ccd1" Jan 30 19:17:09.616044 containerd[1497]: 2025-01-30 19:17:09.599 [INFO][3531] ipam/ipam_plugin.go 412: Releasing address using handleID ContainerID="2787c743dda7741f9ff13cca66805cbd8eb96e68d15dcc4fadb5deac18d2ccd1" HandleID="k8s-pod-network.2787c743dda7741f9ff13cca66805cbd8eb96e68d15dcc4fadb5deac18d2ccd1" Workload="10.230.38.22-k8s-csi--node--driver--n5m8t-eth0" Jan 30 19:17:09.616044 containerd[1497]: 2025-01-30 19:17:09.599 [INFO][3531] ipam/ipam_plugin.go 353: About to acquire host-wide IPAM lock. Jan 30 19:17:09.616044 containerd[1497]: 2025-01-30 19:17:09.599 [INFO][3531] ipam/ipam_plugin.go 368: Acquired host-wide IPAM lock. Jan 30 19:17:09.616044 containerd[1497]: 2025-01-30 19:17:09.609 [WARNING][3531] ipam/ipam_plugin.go 429: Asked to release address but it doesn't exist. 
Ignoring ContainerID="2787c743dda7741f9ff13cca66805cbd8eb96e68d15dcc4fadb5deac18d2ccd1" HandleID="k8s-pod-network.2787c743dda7741f9ff13cca66805cbd8eb96e68d15dcc4fadb5deac18d2ccd1" Workload="10.230.38.22-k8s-csi--node--driver--n5m8t-eth0" Jan 30 19:17:09.616044 containerd[1497]: 2025-01-30 19:17:09.609 [INFO][3531] ipam/ipam_plugin.go 440: Releasing address using workloadID ContainerID="2787c743dda7741f9ff13cca66805cbd8eb96e68d15dcc4fadb5deac18d2ccd1" HandleID="k8s-pod-network.2787c743dda7741f9ff13cca66805cbd8eb96e68d15dcc4fadb5deac18d2ccd1" Workload="10.230.38.22-k8s-csi--node--driver--n5m8t-eth0" Jan 30 19:17:09.616044 containerd[1497]: 2025-01-30 19:17:09.612 [INFO][3531] ipam/ipam_plugin.go 374: Released host-wide IPAM lock. Jan 30 19:17:09.616044 containerd[1497]: 2025-01-30 19:17:09.614 [INFO][3524] cni-plugin/k8s.go 621: Teardown processing complete. ContainerID="2787c743dda7741f9ff13cca66805cbd8eb96e68d15dcc4fadb5deac18d2ccd1" Jan 30 19:17:09.616044 containerd[1497]: time="2025-01-30T19:17:09.615915638Z" level=info msg="TearDown network for sandbox \"2787c743dda7741f9ff13cca66805cbd8eb96e68d15dcc4fadb5deac18d2ccd1\" successfully" Jan 30 19:17:09.632617 containerd[1497]: time="2025-01-30T19:17:09.632538773Z" level=warning msg="Failed to get podSandbox status for container event for sandboxID \"2787c743dda7741f9ff13cca66805cbd8eb96e68d15dcc4fadb5deac18d2ccd1\": an error occurred when try to find sandbox: not found. Sending the event with nil podSandboxStatus." Jan 30 19:17:09.633588 containerd[1497]: time="2025-01-30T19:17:09.633054691Z" level=info msg="RemovePodSandbox \"2787c743dda7741f9ff13cca66805cbd8eb96e68d15dcc4fadb5deac18d2ccd1\" returns successfully" Jan 30 19:17:10.167647 kubelet[1905]: E0130 19:17:10.167577 1905 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Jan 30 19:17:11.168309 kubelet[1905]: E0130 19:17:11.168205 1905 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Jan 30 19:17:11.675617 systemd[1]: Created slice kubepods-besteffort-pod514caf29_c009_47cc_b0d0_5ec38b28947c.slice - libcontainer container kubepods-besteffort-pod514caf29_c009_47cc_b0d0_5ec38b28947c.slice. Jan 30 19:17:11.781300 kubelet[1905]: I0130 19:17:11.780855 1905 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"pvc-e01c4bd7-7eac-4eda-a4bc-6caf40977d1b\" (UniqueName: \"kubernetes.io/nfs/514caf29-c009-47cc-b0d0-5ec38b28947c-pvc-e01c4bd7-7eac-4eda-a4bc-6caf40977d1b\") pod \"test-pod-1\" (UID: \"514caf29-c009-47cc-b0d0-5ec38b28947c\") " pod="default/test-pod-1" Jan 30 19:17:11.781300 kubelet[1905]: I0130 19:17:11.780969 1905 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-f7vlv\" (UniqueName: \"kubernetes.io/projected/514caf29-c009-47cc-b0d0-5ec38b28947c-kube-api-access-f7vlv\") pod \"test-pod-1\" (UID: \"514caf29-c009-47cc-b0d0-5ec38b28947c\") " pod="default/test-pod-1" Jan 30 19:17:11.931374 kernel: FS-Cache: Loaded Jan 30 19:17:12.026721 kernel: RPC: Registered named UNIX socket transport module. Jan 30 19:17:12.026926 kernel: RPC: Registered udp transport module. Jan 30 19:17:12.026980 kernel: RPC: Registered tcp transport module. Jan 30 19:17:12.027545 kernel: RPC: Registered tcp-with-tls transport module. Jan 30 19:17:12.028645 kernel: RPC: Registered tcp NFSv4.1 backchannel transport module. 
Jan 30 19:17:12.169301 kubelet[1905]: E0130 19:17:12.169188 1905 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Jan 30 19:17:12.390918 kernel: NFS: Registering the id_resolver key type Jan 30 19:17:12.391166 kernel: Key type id_resolver registered Jan 30 19:17:12.391222 kernel: Key type id_legacy registered Jan 30 19:17:12.442913 nfsidmap[3552]: nss_getpwnam: name 'root@nfs-server-provisioner.default.svc.cluster.local' does not map into domain 'gb1.brightbox.com' Jan 30 19:17:12.451144 nfsidmap[3555]: nss_name_to_gid: name 'root@nfs-server-provisioner.default.svc.cluster.local' does not map into domain 'gb1.brightbox.com' Jan 30 19:17:12.581288 containerd[1497]: time="2025-01-30T19:17:12.580651582Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:test-pod-1,Uid:514caf29-c009-47cc-b0d0-5ec38b28947c,Namespace:default,Attempt:0,}" Jan 30 19:17:12.764066 systemd-networkd[1422]: cali5ec59c6bf6e: Link UP Jan 30 19:17:12.764588 systemd-networkd[1422]: cali5ec59c6bf6e: Gained carrier Jan 30 19:17:12.780561 containerd[1497]: 2025-01-30 19:17:12.662 [INFO][3558] cni-plugin/plugin.go 325: Calico CNI found existing endpoint: &{{WorkloadEndpoint projectcalico.org/v3} {10.230.38.22-k8s-test--pod--1-eth0 default 514caf29-c009-47cc-b0d0-5ec38b28947c 1272 0 2025-01-30 19:16:56 +0000 UTC map[projectcalico.org/namespace:default projectcalico.org/orchestrator:k8s projectcalico.org/serviceaccount:default] map[] [] [] []} {k8s 10.230.38.22 test-pod-1 eth0 default [] [] [kns.default ksa.default.default] cali5ec59c6bf6e [] []}} ContainerID="9844b4eb9ca7219a3e042371a66ef10e179b82bc080412cff487197b4c6e383b" Namespace="default" Pod="test-pod-1" WorkloadEndpoint="10.230.38.22-k8s-test--pod--1-" Jan 30 19:17:12.780561 containerd[1497]: 2025-01-30 19:17:12.662 [INFO][3558] cni-plugin/k8s.go 77: Extracted identifiers for CmdAddK8s ContainerID="9844b4eb9ca7219a3e042371a66ef10e179b82bc080412cff487197b4c6e383b" Namespace="default" Pod="test-pod-1" WorkloadEndpoint="10.230.38.22-k8s-test--pod--1-eth0" Jan 30 19:17:12.780561 containerd[1497]: 2025-01-30 19:17:12.703 [INFO][3569] ipam/ipam_plugin.go 225: Calico CNI IPAM request count IPv4=1 IPv6=0 ContainerID="9844b4eb9ca7219a3e042371a66ef10e179b82bc080412cff487197b4c6e383b" HandleID="k8s-pod-network.9844b4eb9ca7219a3e042371a66ef10e179b82bc080412cff487197b4c6e383b" Workload="10.230.38.22-k8s-test--pod--1-eth0" Jan 30 19:17:12.780561 containerd[1497]: 2025-01-30 19:17:12.717 [INFO][3569] ipam/ipam_plugin.go 265: Auto assigning IP ContainerID="9844b4eb9ca7219a3e042371a66ef10e179b82bc080412cff487197b4c6e383b" HandleID="k8s-pod-network.9844b4eb9ca7219a3e042371a66ef10e179b82bc080412cff487197b4c6e383b" Workload="10.230.38.22-k8s-test--pod--1-eth0" assignArgs=ipam.AutoAssignArgs{Num4:1, Num6:0, HandleID:(*string)(0xc0001fed80), Attrs:map[string]string{"namespace":"default", "node":"10.230.38.22", "pod":"test-pod-1", "timestamp":"2025-01-30 19:17:12.703438324 +0000 UTC"}, Hostname:"10.230.38.22", IPv4Pools:[]net.IPNet{}, IPv6Pools:[]net.IPNet{}, MaxBlocksPerHost:0, HostReservedAttrIPv4s:(*ipam.HostReservedAttr)(nil), HostReservedAttrIPv6s:(*ipam.HostReservedAttr)(nil), IntendedUse:"Workload"} Jan 30 19:17:12.780561 containerd[1497]: 2025-01-30 19:17:12.717 [INFO][3569] ipam/ipam_plugin.go 353: About to acquire host-wide IPAM lock. Jan 30 19:17:12.780561 containerd[1497]: 2025-01-30 19:17:12.717 [INFO][3569] ipam/ipam_plugin.go 368: Acquired host-wide IPAM lock. 
Jan 30 19:17:12.780561 containerd[1497]: 2025-01-30 19:17:12.717 [INFO][3569] ipam/ipam.go 107: Auto-assign 1 ipv4, 0 ipv6 addrs for host '10.230.38.22' Jan 30 19:17:12.780561 containerd[1497]: 2025-01-30 19:17:12.720 [INFO][3569] ipam/ipam.go 660: Looking up existing affinities for host handle="k8s-pod-network.9844b4eb9ca7219a3e042371a66ef10e179b82bc080412cff487197b4c6e383b" host="10.230.38.22" Jan 30 19:17:12.780561 containerd[1497]: 2025-01-30 19:17:12.726 [INFO][3569] ipam/ipam.go 372: Looking up existing affinities for host host="10.230.38.22" Jan 30 19:17:12.780561 containerd[1497]: 2025-01-30 19:17:12.733 [INFO][3569] ipam/ipam.go 489: Trying affinity for 192.168.9.0/26 host="10.230.38.22" Jan 30 19:17:12.780561 containerd[1497]: 2025-01-30 19:17:12.736 [INFO][3569] ipam/ipam.go 155: Attempting to load block cidr=192.168.9.0/26 host="10.230.38.22" Jan 30 19:17:12.780561 containerd[1497]: 2025-01-30 19:17:12.739 [INFO][3569] ipam/ipam.go 232: Affinity is confirmed and block has been loaded cidr=192.168.9.0/26 host="10.230.38.22" Jan 30 19:17:12.780561 containerd[1497]: 2025-01-30 19:17:12.739 [INFO][3569] ipam/ipam.go 1180: Attempting to assign 1 addresses from block block=192.168.9.0/26 handle="k8s-pod-network.9844b4eb9ca7219a3e042371a66ef10e179b82bc080412cff487197b4c6e383b" host="10.230.38.22" Jan 30 19:17:12.780561 containerd[1497]: 2025-01-30 19:17:12.742 [INFO][3569] ipam/ipam.go 1685: Creating new handle: k8s-pod-network.9844b4eb9ca7219a3e042371a66ef10e179b82bc080412cff487197b4c6e383b Jan 30 19:17:12.780561 containerd[1497]: 2025-01-30 19:17:12.748 [INFO][3569] ipam/ipam.go 1203: Writing block in order to claim IPs block=192.168.9.0/26 handle="k8s-pod-network.9844b4eb9ca7219a3e042371a66ef10e179b82bc080412cff487197b4c6e383b" host="10.230.38.22" Jan 30 19:17:12.780561 containerd[1497]: 2025-01-30 19:17:12.755 [INFO][3569] ipam/ipam.go 1216: Successfully claimed IPs: [192.168.9.4/26] block=192.168.9.0/26 handle="k8s-pod-network.9844b4eb9ca7219a3e042371a66ef10e179b82bc080412cff487197b4c6e383b" host="10.230.38.22" Jan 30 19:17:12.780561 containerd[1497]: 2025-01-30 19:17:12.755 [INFO][3569] ipam/ipam.go 847: Auto-assigned 1 out of 1 IPv4s: [192.168.9.4/26] handle="k8s-pod-network.9844b4eb9ca7219a3e042371a66ef10e179b82bc080412cff487197b4c6e383b" host="10.230.38.22" Jan 30 19:17:12.780561 containerd[1497]: 2025-01-30 19:17:12.755 [INFO][3569] ipam/ipam_plugin.go 374: Released host-wide IPAM lock. 
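The IPAM trace above shows the assignment coming from the host-affine block 192.168.9.0/26 on node 10.230.38.22. That /26 spans 192.168.9.0 through 192.168.9.63, and the four addresses handed out in this log (.1 for csi-node-driver-n5m8t, .2 for the nginx deployment pod, .3 for nfs-server-provisioner-0, and now .4 for test-pod-1) all fall inside it. A quick check in Go:

package main

import (
	"fmt"
	"net/netip"
)

func main() {
	block := netip.MustParsePrefix("192.168.9.0/26") // the node's affine IPAM block
	assigned := []string{
		"192.168.9.1", // csi-node-driver-n5m8t
		"192.168.9.2", // nginx-deployment-7fcdb87857-dvkbv
		"192.168.9.3", // nfs-server-provisioner-0
		"192.168.9.4", // test-pod-1
	}
	for _, a := range assigned {
		fmt.Println(a, "in", block, "=", block.Contains(netip.MustParseAddr(a)))
	}
	// A /26 gives this node 64 addresses before another block must be claimed.
	fmt.Println("block size:", 1<<(32-block.Bits()))
}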
Jan 30 19:17:12.780561 containerd[1497]: 2025-01-30 19:17:12.755 [INFO][3569] ipam/ipam_plugin.go 283: Calico CNI IPAM assigned addresses IPv4=[192.168.9.4/26] IPv6=[] ContainerID="9844b4eb9ca7219a3e042371a66ef10e179b82bc080412cff487197b4c6e383b" HandleID="k8s-pod-network.9844b4eb9ca7219a3e042371a66ef10e179b82bc080412cff487197b4c6e383b" Workload="10.230.38.22-k8s-test--pod--1-eth0" Jan 30 19:17:12.780561 containerd[1497]: 2025-01-30 19:17:12.757 [INFO][3558] cni-plugin/k8s.go 386: Populated endpoint ContainerID="9844b4eb9ca7219a3e042371a66ef10e179b82bc080412cff487197b4c6e383b" Namespace="default" Pod="test-pod-1" WorkloadEndpoint="10.230.38.22-k8s-test--pod--1-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"10.230.38.22-k8s-test--pod--1-eth0", GenerateName:"", Namespace:"default", SelfLink:"", UID:"514caf29-c009-47cc-b0d0-5ec38b28947c", ResourceVersion:"1272", Generation:0, CreationTimestamp:time.Date(2025, time.January, 30, 19, 16, 56, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"projectcalico.org/namespace":"default", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"default"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"10.230.38.22", ContainerID:"", Pod:"test-pod-1", Endpoint:"eth0", ServiceAccountName:"default", IPNetworks:[]string{"192.168.9.4/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.default", "ksa.default.default"}, InterfaceName:"cali5ec59c6bf6e", MAC:"", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil)}} Jan 30 19:17:12.787408 containerd[1497]: 2025-01-30 19:17:12.757 [INFO][3558] cni-plugin/k8s.go 387: Calico CNI using IPs: [192.168.9.4/32] ContainerID="9844b4eb9ca7219a3e042371a66ef10e179b82bc080412cff487197b4c6e383b" Namespace="default" Pod="test-pod-1" WorkloadEndpoint="10.230.38.22-k8s-test--pod--1-eth0" Jan 30 19:17:12.787408 containerd[1497]: 2025-01-30 19:17:12.758 [INFO][3558] cni-plugin/dataplane_linux.go 69: Setting the host side veth name to cali5ec59c6bf6e ContainerID="9844b4eb9ca7219a3e042371a66ef10e179b82bc080412cff487197b4c6e383b" Namespace="default" Pod="test-pod-1" WorkloadEndpoint="10.230.38.22-k8s-test--pod--1-eth0" Jan 30 19:17:12.787408 containerd[1497]: 2025-01-30 19:17:12.761 [INFO][3558] cni-plugin/dataplane_linux.go 508: Disabling IPv4 forwarding ContainerID="9844b4eb9ca7219a3e042371a66ef10e179b82bc080412cff487197b4c6e383b" Namespace="default" Pod="test-pod-1" WorkloadEndpoint="10.230.38.22-k8s-test--pod--1-eth0" Jan 30 19:17:12.787408 containerd[1497]: 2025-01-30 19:17:12.762 [INFO][3558] cni-plugin/k8s.go 414: Added Mac, interface name, and active container ID to endpoint ContainerID="9844b4eb9ca7219a3e042371a66ef10e179b82bc080412cff487197b4c6e383b" Namespace="default" Pod="test-pod-1" WorkloadEndpoint="10.230.38.22-k8s-test--pod--1-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"10.230.38.22-k8s-test--pod--1-eth0", GenerateName:"", Namespace:"default", SelfLink:"", UID:"514caf29-c009-47cc-b0d0-5ec38b28947c", ResourceVersion:"1272", Generation:0, CreationTimestamp:time.Date(2025, time.January, 30, 19, 16, 56, 0, time.Local), 
DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"projectcalico.org/namespace":"default", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"default"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"10.230.38.22", ContainerID:"9844b4eb9ca7219a3e042371a66ef10e179b82bc080412cff487197b4c6e383b", Pod:"test-pod-1", Endpoint:"eth0", ServiceAccountName:"default", IPNetworks:[]string{"192.168.9.4/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.default", "ksa.default.default"}, InterfaceName:"cali5ec59c6bf6e", MAC:"92:fd:db:e0:4d:73", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil)}} Jan 30 19:17:12.787408 containerd[1497]: 2025-01-30 19:17:12.774 [INFO][3558] cni-plugin/k8s.go 500: Wrote updated endpoint to datastore ContainerID="9844b4eb9ca7219a3e042371a66ef10e179b82bc080412cff487197b4c6e383b" Namespace="default" Pod="test-pod-1" WorkloadEndpoint="10.230.38.22-k8s-test--pod--1-eth0" Jan 30 19:17:12.818064 containerd[1497]: time="2025-01-30T19:17:12.817222597Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Jan 30 19:17:12.818369 containerd[1497]: time="2025-01-30T19:17:12.818014855Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Jan 30 19:17:12.818369 containerd[1497]: time="2025-01-30T19:17:12.818057613Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Jan 30 19:17:12.819916 containerd[1497]: time="2025-01-30T19:17:12.818200945Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Jan 30 19:17:12.848475 systemd[1]: Started cri-containerd-9844b4eb9ca7219a3e042371a66ef10e179b82bc080412cff487197b4c6e383b.scope - libcontainer container 9844b4eb9ca7219a3e042371a66ef10e179b82bc080412cff487197b4c6e383b. 
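In the endpoint dump above Calico records the pod's interface as cali5ec59c6bf6e with MAC 92:fd:db:e0:4d:73. CNI-generated MACs like this one are locally administered unicast addresses (the 0x02 bit of the first octet is set and the multicast bit is clear), which can be verified directly:

package main

import (
	"fmt"
	"net"
)

func main() {
	mac, err := net.ParseMAC("92:fd:db:e0:4d:73") // MAC from the WorkloadEndpoint above
	if err != nil {
		panic(err)
	}
	local := mac[0]&0x02 != 0   // locally administered bit set
	unicast := mac[0]&0x01 == 0 // multicast bit clear
	fmt.Printf("%s locally-administered=%v unicast=%v\n", mac, local, unicast)
}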
Jan 30 19:17:12.912463 containerd[1497]: time="2025-01-30T19:17:12.912397408Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:test-pod-1,Uid:514caf29-c009-47cc-b0d0-5ec38b28947c,Namespace:default,Attempt:0,} returns sandbox id \"9844b4eb9ca7219a3e042371a66ef10e179b82bc080412cff487197b4c6e383b\"" Jan 30 19:17:12.914624 containerd[1497]: time="2025-01-30T19:17:12.914462030Z" level=info msg="PullImage \"ghcr.io/flatcar/nginx:latest\"" Jan 30 19:17:13.169914 kubelet[1905]: E0130 19:17:13.169831 1905 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Jan 30 19:17:13.256262 containerd[1497]: time="2025-01-30T19:17:13.256102028Z" level=info msg="ImageUpdate event name:\"ghcr.io/flatcar/nginx:latest\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jan 30 19:17:13.257232 containerd[1497]: time="2025-01-30T19:17:13.257174373Z" level=info msg="stop pulling image ghcr.io/flatcar/nginx:latest: active requests=0, bytes read=61" Jan 30 19:17:13.261778 containerd[1497]: time="2025-01-30T19:17:13.261633508Z" level=info msg="Pulled image \"ghcr.io/flatcar/nginx:latest\" with image id \"sha256:0dcfd986e814f68db775fba6b61fbaec3761562dc2ae3043d38dbff123e1bb1e\", repo tag \"ghcr.io/flatcar/nginx:latest\", repo digest \"ghcr.io/flatcar/nginx@sha256:2ffeb5a7ca68f2017f0bc48251750a6e40fcd3c341b94a22fc7812dcabbb84db\", size \"71015439\" in 347.089036ms" Jan 30 19:17:13.261778 containerd[1497]: time="2025-01-30T19:17:13.261691054Z" level=info msg="PullImage \"ghcr.io/flatcar/nginx:latest\" returns image reference \"sha256:0dcfd986e814f68db775fba6b61fbaec3761562dc2ae3043d38dbff123e1bb1e\"" Jan 30 19:17:13.264410 containerd[1497]: time="2025-01-30T19:17:13.264190992Z" level=info msg="CreateContainer within sandbox \"9844b4eb9ca7219a3e042371a66ef10e179b82bc080412cff487197b4c6e383b\" for container &ContainerMetadata{Name:test,Attempt:0,}" Jan 30 19:17:13.286130 containerd[1497]: time="2025-01-30T19:17:13.286006200Z" level=info msg="CreateContainer within sandbox \"9844b4eb9ca7219a3e042371a66ef10e179b82bc080412cff487197b4c6e383b\" for &ContainerMetadata{Name:test,Attempt:0,} returns container id \"efe02257f3cd12d5f9c5a5762b257f98046ea2498a59cf4cd61490955ad985f9\"" Jan 30 19:17:13.287336 containerd[1497]: time="2025-01-30T19:17:13.287303844Z" level=info msg="StartContainer for \"efe02257f3cd12d5f9c5a5762b257f98046ea2498a59cf4cd61490955ad985f9\"" Jan 30 19:17:13.328442 systemd[1]: Started cri-containerd-efe02257f3cd12d5f9c5a5762b257f98046ea2498a59cf4cd61490955ad985f9.scope - libcontainer container efe02257f3cd12d5f9c5a5762b257f98046ea2498a59cf4cd61490955ad985f9. 
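The nginx pull above completes in about 347 ms with only 61 bytes read and an ImageUpdate (rather than ImageCreate) event, which suggests the layers were already in containerd's content store and only the manifest was re-resolved. The nfs-provisioner pull earlier, by contrast, read the full 91,039,414 bytes in roughly 6.72 s. A rough effective transfer rate from those logged numbers:

package main

import "fmt"

func main() {
	const bytesRead = 91039414      // "bytes read" logged for the nfs-provisioner pull
	const pullSeconds = 6.715746387 // "in 6.715746387s" from the Pulled message
	rate := float64(bytesRead) / pullSeconds / (1 << 20)
	fmt.Printf("effective pull rate: %.1f MiB/s\n", rate) // ~12.9 MiB/s
}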
Jan 30 19:17:13.368193 containerd[1497]: time="2025-01-30T19:17:13.368120623Z" level=info msg="StartContainer for \"efe02257f3cd12d5f9c5a5762b257f98046ea2498a59cf4cd61490955ad985f9\" returns successfully" Jan 30 19:17:14.170503 kubelet[1905]: E0130 19:17:14.170427 1905 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Jan 30 19:17:14.671529 systemd-networkd[1422]: cali5ec59c6bf6e: Gained IPv6LL Jan 30 19:17:15.170857 kubelet[1905]: E0130 19:17:15.170792 1905 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Jan 30 19:17:16.171832 kubelet[1905]: E0130 19:17:16.171738 1905 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Jan 30 19:17:17.172908 kubelet[1905]: E0130 19:17:17.172818 1905 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Jan 30 19:17:18.173663 kubelet[1905]: E0130 19:17:18.173584 1905 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Jan 30 19:17:19.174665 kubelet[1905]: E0130 19:17:19.174587 1905 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Jan 30 19:17:20.175760 kubelet[1905]: E0130 19:17:20.175692 1905 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Jan 30 19:17:21.176486 kubelet[1905]: E0130 19:17:21.176427 1905 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Jan 30 19:17:22.176746 kubelet[1905]: E0130 19:17:22.176671 1905 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Jan 30 19:17:23.177373 kubelet[1905]: E0130 19:17:23.177293 1905 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests"
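The recurring kubelet error that closes this excerpt comes from the static-pod file source: the kubelet on this node is configured with a staticPodPath of /etc/kubernetes/manifests, the directory does not exist, and file_linux.go reports the miss on every sync loop while otherwise ignoring it. Creating the directory, or clearing staticPodPath when static pods are not wanted, silences the message. A sketch of the same existence check, under that assumption (this is not the kubelet's code):

package main

import (
	"fmt"
	"os"
)

func main() {
	const staticPodPath = "/etc/kubernetes/manifests" // path from the kubelet errors above
	if _, err := os.Stat(staticPodPath); os.IsNotExist(err) {
		// The condition the kubelet keeps reporting; harmless unless static
		// pods are actually expected on this node.
		fmt.Println("path does not exist, ignoring:", staticPodPath)
		return
	}
	fmt.Println("static pod manifests directory present:", staticPodPath)
}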