Jan 17 12:22:49.079256 kernel: Linux version 6.6.71-flatcar (build@pony-truck.infra.kinvolk.io) (x86_64-cros-linux-gnu-gcc (Gentoo Hardened 13.3.1_p20240614 p17) 13.3.1 20240614, GNU ld (Gentoo 2.42 p3) 2.42.0) #1 SMP PREEMPT_DYNAMIC Fri Jan 17 10:39:07 -00 2025
Jan 17 12:22:49.079299 kernel: Command line: BOOT_IMAGE=/flatcar/vmlinuz-a mount.usr=/dev/mapper/usr verity.usr=PARTUUID=7130c94a-213a-4e5a-8e26-6cce9662f132 rootflags=rw mount.usrflags=ro consoleblank=0 root=LABEL=ROOT console=ttyS0,115200n8 console=tty0 flatcar.first_boot=detected flatcar.oem.id=digitalocean verity.usrhash=bf1e0d81a0170850ab02d370c1a7c7a3f5983c980b3730f748240a3bda2dbb2e
Jan 17 12:22:49.079326 kernel: BIOS-provided physical RAM map:
Jan 17 12:22:49.079344 kernel: BIOS-e820: [mem 0x0000000000000000-0x000000000009fbff] usable
Jan 17 12:22:49.079356 kernel: BIOS-e820: [mem 0x000000000009fc00-0x000000000009ffff] reserved
Jan 17 12:22:49.079365 kernel: BIOS-e820: [mem 0x00000000000f0000-0x00000000000fffff] reserved
Jan 17 12:22:49.079396 kernel: BIOS-e820: [mem 0x0000000000100000-0x000000007ffdafff] usable
Jan 17 12:22:49.079409 kernel: BIOS-e820: [mem 0x000000007ffdb000-0x000000007fffffff] reserved
Jan 17 12:22:49.079422 kernel: BIOS-e820: [mem 0x00000000feffc000-0x00000000feffffff] reserved
Jan 17 12:22:49.079441 kernel: BIOS-e820: [mem 0x00000000fffc0000-0x00000000ffffffff] reserved
Jan 17 12:22:49.079452 kernel: NX (Execute Disable) protection: active
Jan 17 12:22:49.079464 kernel: APIC: Static calls initialized
Jan 17 12:22:49.079483 kernel: SMBIOS 2.8 present.
Jan 17 12:22:49.079494 kernel: DMI: DigitalOcean Droplet/Droplet, BIOS 20171212 12/12/2017
Jan 17 12:22:49.079508 kernel: Hypervisor detected: KVM
Jan 17 12:22:49.079526 kernel: kvm-clock: Using msrs 4b564d01 and 4b564d00
Jan 17 12:22:49.079546 kernel: kvm-clock: using sched offset of 4002548246 cycles
Jan 17 12:22:49.079583 kernel: clocksource: kvm-clock: mask: 0xffffffffffffffff max_cycles: 0x1cd42e4dffb, max_idle_ns: 881590591483 ns
Jan 17 12:22:49.079608 kernel: tsc: Detected 2294.606 MHz processor
Jan 17 12:22:49.079631 kernel: e820: update [mem 0x00000000-0x00000fff] usable ==> reserved
Jan 17 12:22:49.079645 kernel: e820: remove [mem 0x000a0000-0x000fffff] usable
Jan 17 12:22:49.079657 kernel: last_pfn = 0x7ffdb max_arch_pfn = 0x400000000
Jan 17 12:22:49.079671 kernel: MTRR map: 4 entries (3 fixed + 1 variable; max 19), built from 8 variable MTRRs
Jan 17 12:22:49.079684 kernel: x86/PAT: Configuration [0-7]: WB WC UC- UC WB WP UC- WT
Jan 17 12:22:49.079703 kernel: ACPI: Early table checksum verification disabled
Jan 17 12:22:49.079716 kernel: ACPI: RSDP 0x00000000000F5950 000014 (v00 BOCHS )
Jan 17 12:22:49.079729 kernel: ACPI: RSDT 0x000000007FFE1986 000038 (v01 BOCHS BXPC 00000001 BXPC 00000001)
Jan 17 12:22:49.079745 kernel: ACPI: FACP 0x000000007FFE176A 000074 (v01 BOCHS BXPC 00000001 BXPC 00000001)
Jan 17 12:22:49.079761 kernel: ACPI: DSDT 0x000000007FFE0040 00172A (v01 BOCHS BXPC 00000001 BXPC 00000001)
Jan 17 12:22:49.079776 kernel: ACPI: FACS 0x000000007FFE0000 000040
Jan 17 12:22:49.079789 kernel: ACPI: APIC 0x000000007FFE17DE 000080 (v01 BOCHS BXPC 00000001 BXPC 00000001)
Jan 17 12:22:49.079805 kernel: ACPI: HPET 0x000000007FFE185E 000038 (v01 BOCHS BXPC 00000001 BXPC 00000001)
Jan 17 12:22:49.079821 kernel: ACPI: SRAT 0x000000007FFE1896 0000C8 (v01 BOCHS BXPC 00000001 BXPC 00000001)
Jan 17 12:22:49.079839 kernel: ACPI: WAET 0x000000007FFE195E 000028 (v01 BOCHS BXPC 00000001 BXPC 00000001)
Jan 17 12:22:49.079851 kernel: ACPI: Reserving FACP table memory at [mem 0x7ffe176a-0x7ffe17dd]
Jan 17 12:22:49.079864 kernel: ACPI: Reserving DSDT table memory at [mem 0x7ffe0040-0x7ffe1769]
Jan 17 12:22:49.079877 kernel: ACPI: Reserving FACS table memory at [mem 0x7ffe0000-0x7ffe003f]
Jan 17 12:22:49.079891 kernel: ACPI: Reserving APIC table memory at [mem 0x7ffe17de-0x7ffe185d]
Jan 17 12:22:49.079903 kernel: ACPI: Reserving HPET table memory at [mem 0x7ffe185e-0x7ffe1895]
Jan 17 12:22:49.079916 kernel: ACPI: Reserving SRAT table memory at [mem 0x7ffe1896-0x7ffe195d]
Jan 17 12:22:49.079940 kernel: ACPI: Reserving WAET table memory at [mem 0x7ffe195e-0x7ffe1985]
Jan 17 12:22:49.079955 kernel: SRAT: PXM 0 -> APIC 0x00 -> Node 0
Jan 17 12:22:49.079970 kernel: SRAT: PXM 0 -> APIC 0x01 -> Node 0
Jan 17 12:22:49.079987 kernel: ACPI: SRAT: Node 0 PXM 0 [mem 0x00000000-0x0009ffff]
Jan 17 12:22:49.080000 kernel: ACPI: SRAT: Node 0 PXM 0 [mem 0x00100000-0x7fffffff]
Jan 17 12:22:49.080023 kernel: NUMA: Node 0 [mem 0x00000000-0x0009ffff] + [mem 0x00100000-0x7ffdafff] -> [mem 0x00000000-0x7ffdafff]
Jan 17 12:22:49.080037 kernel: NODE_DATA(0) allocated [mem 0x7ffd5000-0x7ffdafff]
Jan 17 12:22:49.080055 kernel: Zone ranges:
Jan 17 12:22:49.080069 kernel: DMA [mem 0x0000000000001000-0x0000000000ffffff]
Jan 17 12:22:49.080084 kernel: DMA32 [mem 0x0000000001000000-0x000000007ffdafff]
Jan 17 12:22:49.080098 kernel: Normal empty
Jan 17 12:22:49.080113 kernel: Movable zone start for each node
Jan 17 12:22:49.080127 kernel: Early memory node ranges
Jan 17 12:22:49.080141 kernel: node 0: [mem 0x0000000000001000-0x000000000009efff]
Jan 17 12:22:49.080155 kernel: node 0: [mem 0x0000000000100000-0x000000007ffdafff]
Jan 17 12:22:49.080169 kernel: Initmem setup node 0 [mem 0x0000000000001000-0x000000007ffdafff]
Jan 17 12:22:49.080188 kernel: On node 0, zone DMA: 1 pages in unavailable ranges
Jan 17 12:22:49.080203 kernel: On node 0, zone DMA: 97 pages in unavailable ranges
Jan 17 12:22:49.080224 kernel: On node 0, zone DMA32: 37 pages in unavailable ranges
Jan 17 12:22:49.080240 kernel: ACPI: PM-Timer IO Port: 0x608
Jan 17 12:22:49.080253 kernel: ACPI: LAPIC_NMI (acpi_id[0xff] dfl dfl lint[0x1])
Jan 17 12:22:49.080268 kernel: IOAPIC[0]: apic_id 0, version 17, address 0xfec00000, GSI 0-23
Jan 17 12:22:49.080282 kernel: ACPI: INT_SRC_OVR (bus 0 bus_irq 0 global_irq 2 dfl dfl)
Jan 17 12:22:49.080298 kernel: ACPI: INT_SRC_OVR (bus 0 bus_irq 5 global_irq 5 high level)
Jan 17 12:22:49.080314 kernel: ACPI: INT_SRC_OVR (bus 0 bus_irq 9 global_irq 9 high level)
Jan 17 12:22:49.080334 kernel: ACPI: INT_SRC_OVR (bus 0 bus_irq 10 global_irq 10 high level)
Jan 17 12:22:49.080349 kernel: ACPI: INT_SRC_OVR (bus 0 bus_irq 11 global_irq 11 high level)
Jan 17 12:22:49.080363 kernel: ACPI: Using ACPI (MADT) for SMP configuration information
Jan 17 12:22:49.080421 kernel: ACPI: HPET id: 0x8086a201 base: 0xfed00000
Jan 17 12:22:49.080437 kernel: TSC deadline timer available
Jan 17 12:22:49.080450 kernel: smpboot: Allowing 2 CPUs, 0 hotplug CPUs
Jan 17 12:22:49.080463 kernel: kvm-guest: APIC: eoi() replaced with kvm_guest_apic_eoi_write()
Jan 17 12:22:49.080476 kernel: [mem 0x80000000-0xfeffbfff] available for PCI devices
Jan 17 12:22:49.080495 kernel: Booting paravirtualized kernel on KVM
Jan 17 12:22:49.080511 kernel: clocksource: refined-jiffies: mask: 0xffffffff max_cycles: 0xffffffff, max_idle_ns: 1910969940391419 ns
Jan 17 12:22:49.080533 kernel: setup_percpu: NR_CPUS:512 nr_cpumask_bits:2 nr_cpu_ids:2 nr_node_ids:1
Jan 17 12:22:49.080547 kernel: percpu: Embedded 58 pages/cpu s197032 r8192 d32344 u1048576
Jan 17 12:22:49.080561 kernel: pcpu-alloc: s197032 r8192 d32344 u1048576 alloc=1*2097152
Jan 17 12:22:49.080575 kernel: pcpu-alloc: [0] 0 1
Jan 17 12:22:49.080587 kernel: kvm-guest: PV spinlocks disabled, no host support
Jan 17 12:22:49.080604 kernel: Kernel command line: rootflags=rw mount.usrflags=ro BOOT_IMAGE=/flatcar/vmlinuz-a mount.usr=/dev/mapper/usr verity.usr=PARTUUID=7130c94a-213a-4e5a-8e26-6cce9662f132 rootflags=rw mount.usrflags=ro consoleblank=0 root=LABEL=ROOT console=ttyS0,115200n8 console=tty0 flatcar.first_boot=detected flatcar.oem.id=digitalocean verity.usrhash=bf1e0d81a0170850ab02d370c1a7c7a3f5983c980b3730f748240a3bda2dbb2e
Jan 17 12:22:49.080619 kernel: Unknown kernel command line parameters "BOOT_IMAGE=/flatcar/vmlinuz-a", will be passed to user space.
Jan 17 12:22:49.080649 kernel: random: crng init done
Jan 17 12:22:49.080664 kernel: Dentry cache hash table entries: 262144 (order: 9, 2097152 bytes, linear)
Jan 17 12:22:49.080678 kernel: Inode-cache hash table entries: 131072 (order: 8, 1048576 bytes, linear)
Jan 17 12:22:49.080691 kernel: Fallback order for Node 0: 0
Jan 17 12:22:49.080705 kernel: Built 1 zonelists, mobility grouping on. Total pages: 515803
Jan 17 12:22:49.080718 kernel: Policy zone: DMA32
Jan 17 12:22:49.080733 kernel: mem auto-init: stack:off, heap alloc:off, heap free:off
Jan 17 12:22:49.080746 kernel: Memory: 1971204K/2096612K available (12288K kernel code, 2299K rwdata, 22728K rodata, 42848K init, 2344K bss, 125148K reserved, 0K cma-reserved)
Jan 17 12:22:49.080759 kernel: SLUB: HWalign=64, Order=0-3, MinObjects=0, CPUs=2, Nodes=1
Jan 17 12:22:49.080781 kernel: Kernel/User page tables isolation: enabled
Jan 17 12:22:49.080796 kernel: ftrace: allocating 37918 entries in 149 pages
Jan 17 12:22:49.080819 kernel: ftrace: allocated 149 pages with 4 groups
Jan 17 12:22:49.080835 kernel: Dynamic Preempt: voluntary
Jan 17 12:22:49.080850 kernel: rcu: Preemptible hierarchical RCU implementation.
Jan 17 12:22:49.080865 kernel: rcu: RCU event tracing is enabled.
Jan 17 12:22:49.080879 kernel: rcu: RCU restricting CPUs from NR_CPUS=512 to nr_cpu_ids=2.
Jan 17 12:22:49.080894 kernel: Trampoline variant of Tasks RCU enabled.
Jan 17 12:22:49.080908 kernel: Rude variant of Tasks RCU enabled.
Jan 17 12:22:49.080928 kernel: Tracing variant of Tasks RCU enabled.
Jan 17 12:22:49.080944 kernel: rcu: RCU calculated value of scheduler-enlistment delay is 100 jiffies.
Jan 17 12:22:49.080958 kernel: rcu: Adjusting geometry for rcu_fanout_leaf=16, nr_cpu_ids=2
Jan 17 12:22:49.080973 kernel: NR_IRQS: 33024, nr_irqs: 440, preallocated irqs: 16
Jan 17 12:22:49.080987 kernel: rcu: srcu_init: Setting srcu_struct sizes based on contention.
Jan 17 12:22:49.081008 kernel: Console: colour VGA+ 80x25
Jan 17 12:22:49.081021 kernel: printk: console [tty0] enabled
Jan 17 12:22:49.081037 kernel: printk: console [ttyS0] enabled
Jan 17 12:22:49.081053 kernel: ACPI: Core revision 20230628
Jan 17 12:22:49.081068 kernel: clocksource: hpet: mask: 0xffffffff max_cycles: 0xffffffff, max_idle_ns: 19112604467 ns
Jan 17 12:22:49.081089 kernel: APIC: Switch to symmetric I/O mode setup
Jan 17 12:22:49.081102 kernel: x2apic enabled
Jan 17 12:22:49.081115 kernel: APIC: Switched APIC routing to: physical x2apic
Jan 17 12:22:49.081129 kernel: ..TIMER: vector=0x30 apic1=0 pin1=2 apic2=-1 pin2=-1
Jan 17 12:22:49.081143 kernel: clocksource: tsc-early: mask: 0xffffffffffffffff max_cycles: 0x21134dbeb26, max_idle_ns: 440795298546 ns
Jan 17 12:22:49.081159 kernel: Calibrating delay loop (skipped) preset value.. 4589.21 BogoMIPS (lpj=2294606)
Jan 17 12:22:49.081171 kernel: Last level iTLB entries: 4KB 0, 2MB 0, 4MB 0
Jan 17 12:22:49.081185 kernel: Last level dTLB entries: 4KB 0, 2MB 0, 4MB 0, 1GB 0
Jan 17 12:22:49.081219 kernel: Spectre V1 : Mitigation: usercopy/swapgs barriers and __user pointer sanitization
Jan 17 12:22:49.081236 kernel: Spectre V2 : Mitigation: Retpolines
Jan 17 12:22:49.081253 kernel: Spectre V2 : Spectre v2 / SpectreRSB mitigation: Filling RSB on context switch
Jan 17 12:22:49.081271 kernel: Spectre V2 : Spectre v2 / SpectreRSB : Filling RSB on VMEXIT
Jan 17 12:22:49.081286 kernel: Spectre V2 : Enabling Restricted Speculation for firmware calls
Jan 17 12:22:49.081302 kernel: Spectre V2 : mitigation: Enabling conditional Indirect Branch Prediction Barrier
Jan 17 12:22:49.081319 kernel: Speculative Store Bypass: Mitigation: Speculative Store Bypass disabled via prctl
Jan 17 12:22:49.081337 kernel: MDS: Mitigation: Clear CPU buffers
Jan 17 12:22:49.081354 kernel: MMIO Stale Data: Vulnerable: Clear CPU buffers attempted, no microcode
Jan 17 12:22:49.081555 kernel: x86/fpu: Supporting XSAVE feature 0x001: 'x87 floating point registers'
Jan 17 12:22:49.081585 kernel: x86/fpu: Supporting XSAVE feature 0x002: 'SSE registers'
Jan 17 12:22:49.081611 kernel: x86/fpu: Supporting XSAVE feature 0x004: 'AVX registers'
Jan 17 12:22:49.081627 kernel: x86/fpu: xstate_offset[2]: 576, xstate_sizes[2]: 256
Jan 17 12:22:49.081641 kernel: x86/fpu: Enabled xstate features 0x7, context size is 832 bytes, using 'standard' format.
Jan 17 12:22:49.081657 kernel: Freeing SMP alternatives memory: 32K
Jan 17 12:22:49.081670 kernel: pid_max: default: 32768 minimum: 301
Jan 17 12:22:49.081685 kernel: LSM: initializing lsm=lockdown,capability,landlock,selinux,integrity
Jan 17 12:22:49.081707 kernel: landlock: Up and running.
Jan 17 12:22:49.081721 kernel: SELinux: Initializing.
Jan 17 12:22:49.081734 kernel: Mount-cache hash table entries: 4096 (order: 3, 32768 bytes, linear)
Jan 17 12:22:49.081747 kernel: Mountpoint-cache hash table entries: 4096 (order: 3, 32768 bytes, linear)
Jan 17 12:22:49.081761 kernel: smpboot: CPU0: Intel DO-Regular (family: 0x6, model: 0x4f, stepping: 0x1)
Jan 17 12:22:49.081775 kernel: RCU Tasks: Setting shift to 1 and lim to 1 rcu_task_cb_adjust=1 rcu_task_cpu_ids=2.
Jan 17 12:22:49.081790 kernel: RCU Tasks Rude: Setting shift to 1 and lim to 1 rcu_task_cb_adjust=1 rcu_task_cpu_ids=2.
Jan 17 12:22:49.081805 kernel: RCU Tasks Trace: Setting shift to 1 and lim to 1 rcu_task_cb_adjust=1 rcu_task_cpu_ids=2.
Jan 17 12:22:49.081818 kernel: Performance Events: unsupported p6 CPU model 79 no PMU driver, software events only.
Jan 17 12:22:49.081837 kernel: signal: max sigframe size: 1776
Jan 17 12:22:49.081852 kernel: rcu: Hierarchical SRCU implementation.
Jan 17 12:22:49.081868 kernel: rcu: Max phase no-delay instances is 400.
Jan 17 12:22:49.081882 kernel: NMI watchdog: Perf NMI watchdog permanently disabled
Jan 17 12:22:49.081895 kernel: smp: Bringing up secondary CPUs ...
Jan 17 12:22:49.081908 kernel: smpboot: x86: Booting SMP configuration:
Jan 17 12:22:49.081921 kernel: .... node #0, CPUs: #1
Jan 17 12:22:49.081934 kernel: smp: Brought up 1 node, 2 CPUs
Jan 17 12:22:49.081955 kernel: smpboot: Max logical packages: 1
Jan 17 12:22:49.081974 kernel: smpboot: Total of 2 processors activated (9178.42 BogoMIPS)
Jan 17 12:22:49.081987 kernel: devtmpfs: initialized
Jan 17 12:22:49.082001 kernel: x86/mm: Memory block size: 128MB
Jan 17 12:22:49.082017 kernel: clocksource: jiffies: mask: 0xffffffff max_cycles: 0xffffffff, max_idle_ns: 1911260446275000 ns
Jan 17 12:22:49.082031 kernel: futex hash table entries: 512 (order: 3, 32768 bytes, linear)
Jan 17 12:22:49.082045 kernel: pinctrl core: initialized pinctrl subsystem
Jan 17 12:22:49.082058 kernel: NET: Registered PF_NETLINK/PF_ROUTE protocol family
Jan 17 12:22:49.082072 kernel: audit: initializing netlink subsys (disabled)
Jan 17 12:22:49.082087 kernel: thermal_sys: Registered thermal governor 'step_wise'
Jan 17 12:22:49.082107 kernel: thermal_sys: Registered thermal governor 'user_space'
Jan 17 12:22:49.082122 kernel: audit: type=2000 audit(1737116567.232:1): state=initialized audit_enabled=0 res=1
Jan 17 12:22:49.082136 kernel: cpuidle: using governor menu
Jan 17 12:22:49.082153 kernel: acpiphp: ACPI Hot Plug PCI Controller Driver version: 0.5
Jan 17 12:22:49.082168 kernel: dca service started, version 1.12.1
Jan 17 12:22:49.082182 kernel: PCI: Using configuration type 1 for base access
Jan 17 12:22:49.082196 kernel: kprobes: kprobe jump-optimization is enabled. All kprobes are optimized if possible.
Jan 17 12:22:49.082233 kernel: HugeTLB: registered 2.00 MiB page size, pre-allocated 0 pages
Jan 17 12:22:49.082258 kernel: HugeTLB: 28 KiB vmemmap can be freed for a 2.00 MiB page
Jan 17 12:22:49.082280 kernel: ACPI: Added _OSI(Module Device)
Jan 17 12:22:49.082326 kernel: ACPI: Added _OSI(Processor Device)
Jan 17 12:22:49.082342 kernel: ACPI: Added _OSI(3.0 _SCP Extensions)
Jan 17 12:22:49.082399 kernel: ACPI: Added _OSI(Processor Aggregator Device)
Jan 17 12:22:49.082413 kernel: ACPI: 1 ACPI AML tables successfully acquired and loaded
Jan 17 12:22:49.082426 kernel: ACPI: _OSC evaluation for CPUs failed, trying _PDC
Jan 17 12:22:49.082440 kernel: ACPI: Interpreter enabled
Jan 17 12:22:49.082454 kernel: ACPI: PM: (supports S0 S5)
Jan 17 12:22:49.082469 kernel: ACPI: Using IOAPIC for interrupt routing
Jan 17 12:22:49.082491 kernel: PCI: Using host bridge windows from ACPI; if necessary, use "pci=nocrs" and report a bug
Jan 17 12:22:49.082506 kernel: PCI: Using E820 reservations for host bridge windows
Jan 17 12:22:49.082521 kernel: ACPI: Enabled 2 GPEs in block 00 to 0F
Jan 17 12:22:49.082538 kernel: ACPI: PCI Root Bridge [PCI0] (domain 0000 [bus 00-ff])
Jan 17 12:22:49.082876 kernel: acpi PNP0A03:00: _OSC: OS supports [ASPM ClockPM Segments MSI HPX-Type3]
Jan 17 12:22:49.083072 kernel: acpi PNP0A03:00: _OSC: not requesting OS control; OS requires [ExtendedConfig ASPM ClockPM MSI]
Jan 17 12:22:49.083238 kernel: acpi PNP0A03:00: fail to add MMCONFIG information, can't access extended configuration space under this bridge
Jan 17 12:22:49.083269 kernel: acpiphp: Slot [3] registered
Jan 17 12:22:49.083286 kernel: acpiphp: Slot [4] registered
Jan 17 12:22:49.083301 kernel: acpiphp: Slot [5] registered
Jan 17 12:22:49.083319 kernel: acpiphp: Slot [6] registered
Jan 17 12:22:49.083335 kernel: acpiphp: Slot [7] registered
Jan 17 12:22:49.083350 kernel: acpiphp: Slot [8] registered
Jan 17 12:22:49.083365 kernel: acpiphp: Slot [9] registered
Jan 17 12:22:49.083408 kernel: acpiphp: Slot [10] registered
Jan 17 12:22:49.083422 kernel: acpiphp: Slot [11] registered
Jan 17 12:22:49.083451 kernel: acpiphp: Slot [12] registered
Jan 17 12:22:49.083467 kernel: acpiphp: Slot [13] registered
Jan 17 12:22:49.083483 kernel: acpiphp: Slot [14] registered
Jan 17 12:22:49.083498 kernel: acpiphp: Slot [15] registered
Jan 17 12:22:49.083514 kernel: acpiphp: Slot [16] registered
Jan 17 12:22:49.083529 kernel: acpiphp: Slot [17] registered
Jan 17 12:22:49.083544 kernel: acpiphp: Slot [18] registered
Jan 17 12:22:49.083568 kernel: acpiphp: Slot [19] registered
Jan 17 12:22:49.083585 kernel: acpiphp: Slot [20] registered
Jan 17 12:22:49.083600 kernel: acpiphp: Slot [21] registered
Jan 17 12:22:49.083622 kernel: acpiphp: Slot [22] registered
Jan 17 12:22:49.083637 kernel: acpiphp: Slot [23] registered
Jan 17 12:22:49.083653 kernel: acpiphp: Slot [24] registered
Jan 17 12:22:49.083668 kernel: acpiphp: Slot [25] registered
Jan 17 12:22:49.083682 kernel: acpiphp: Slot [26] registered
Jan 17 12:22:49.083697 kernel: acpiphp: Slot [27] registered
Jan 17 12:22:49.083712 kernel: acpiphp: Slot [28] registered
Jan 17 12:22:49.083726 kernel: acpiphp: Slot [29] registered
Jan 17 12:22:49.083740 kernel: acpiphp: Slot [30] registered
Jan 17 12:22:49.083759 kernel: acpiphp: Slot [31] registered
Jan 17 12:22:49.083775 kernel: PCI host bridge to bus 0000:00
Jan 17 12:22:49.084058 kernel: pci_bus 0000:00: root bus resource [io 0x0000-0x0cf7 window]
Jan 17 12:22:49.084215 kernel: pci_bus 0000:00: root bus resource [io 0x0d00-0xffff window]
Jan 17 12:22:49.084356 kernel: pci_bus 0000:00: root bus resource [mem 0x000a0000-0x000bffff window]
Jan 17 12:22:49.084535 kernel: pci_bus 0000:00: root bus resource [mem 0x80000000-0xfebfffff window]
Jan 17 12:22:49.084757 kernel: pci_bus 0000:00: root bus resource [mem 0x100000000-0x17fffffff window]
Jan 17 12:22:49.084916 kernel: pci_bus 0000:00: root bus resource [bus 00-ff]
Jan 17 12:22:49.085169 kernel: pci 0000:00:00.0: [8086:1237] type 00 class 0x060000
Jan 17 12:22:49.085406 kernel: pci 0000:00:01.0: [8086:7000] type 00 class 0x060100
Jan 17 12:22:49.085663 kernel: pci 0000:00:01.1: [8086:7010] type 00 class 0x010180
Jan 17 12:22:49.085835 kernel: pci 0000:00:01.1: reg 0x20: [io 0xc1e0-0xc1ef]
Jan 17 12:22:49.086035 kernel: pci 0000:00:01.1: legacy IDE quirk: reg 0x10: [io 0x01f0-0x01f7]
Jan 17 12:22:49.086355 kernel: pci 0000:00:01.1: legacy IDE quirk: reg 0x14: [io 0x03f6]
Jan 17 12:22:49.087686 kernel: pci 0000:00:01.1: legacy IDE quirk: reg 0x18: [io 0x0170-0x0177]
Jan 17 12:22:49.087880 kernel: pci 0000:00:01.1: legacy IDE quirk: reg 0x1c: [io 0x0376]
Jan 17 12:22:49.088084 kernel: pci 0000:00:01.2: [8086:7020] type 00 class 0x0c0300
Jan 17 12:22:49.088277 kernel: pci 0000:00:01.2: reg 0x20: [io 0xc180-0xc19f]
Jan 17 12:22:49.089814 kernel: pci 0000:00:01.3: [8086:7113] type 00 class 0x068000
Jan 17 12:22:49.090019 kernel: pci 0000:00:01.3: quirk: [io 0x0600-0x063f] claimed by PIIX4 ACPI
Jan 17 12:22:49.090277 kernel: pci 0000:00:01.3: quirk: [io 0x0700-0x070f] claimed by PIIX4 SMB
Jan 17 12:22:49.090495 kernel: pci 0000:00:02.0: [1af4:1050] type 00 class 0x030000
Jan 17 12:22:49.090670 kernel: pci 0000:00:02.0: reg 0x10: [mem 0xfe000000-0xfe7fffff pref]
Jan 17 12:22:49.090831 kernel: pci 0000:00:02.0: reg 0x18: [mem 0xfe800000-0xfe803fff 64bit pref]
Jan 17 12:22:49.091013 kernel: pci 0000:00:02.0: reg 0x20: [mem 0xfebf0000-0xfebf0fff]
Jan 17 12:22:49.093711 kernel: pci 0000:00:02.0: reg 0x30: [mem 0xfebe0000-0xfebeffff pref]
Jan 17 12:22:49.093915 kernel: pci 0000:00:02.0: Video device with shadowed ROM at [mem 0x000c0000-0x000dffff]
Jan 17 12:22:49.094171 kernel: pci 0000:00:03.0: [1af4:1000] type 00 class 0x020000
Jan 17 12:22:49.094367 kernel: pci 0000:00:03.0: reg 0x10: [io 0xc1a0-0xc1bf]
Jan 17 12:22:49.095812 kernel: pci 0000:00:03.0: reg 0x14: [mem 0xfebf1000-0xfebf1fff]
Jan 17 12:22:49.096007 kernel: pci 0000:00:03.0: reg 0x20: [mem 0xfe804000-0xfe807fff 64bit pref]
Jan 17 12:22:49.096241 kernel: pci 0000:00:04.0: [1af4:1000] type 00 class 0x020000
Jan 17 12:22:49.096445 kernel: pci 0000:00:04.0: reg 0x10: [io 0xc1c0-0xc1df]
Jan 17 12:22:49.096666 kernel: pci 0000:00:04.0: reg 0x14: [mem 0xfebf2000-0xfebf2fff]
Jan 17 12:22:49.096835 kernel: pci 0000:00:04.0: reg 0x20: [mem 0xfe808000-0xfe80bfff 64bit pref]
Jan 17 12:22:49.097045 kernel: pci 0000:00:05.0: [1af4:1004] type 00 class 0x010000
Jan 17 12:22:49.097231 kernel: pci 0000:00:05.0: reg 0x10: [io 0xc100-0xc13f]
Jan 17 12:22:49.100634 kernel: pci 0000:00:05.0: reg 0x14: [mem 0xfebf3000-0xfebf3fff]
Jan 17 12:22:49.100900 kernel: pci 0000:00:05.0: reg 0x20: [mem 0xfe80c000-0xfe80ffff 64bit pref]
Jan 17 12:22:49.101122 kernel: pci 0000:00:06.0: [1af4:1001] type 00 class 0x010000
Jan 17 12:22:49.101317 kernel: pci 0000:00:06.0: reg 0x10: [io 0xc000-0xc07f]
Jan 17 12:22:49.101548 kernel: pci 0000:00:06.0: reg 0x14: [mem 0xfebf4000-0xfebf4fff]
Jan 17 12:22:49.101737 kernel: pci 0000:00:06.0: reg 0x20: [mem 0xfe810000-0xfe813fff 64bit pref]
Jan 17 12:22:49.101935 kernel: pci 0000:00:07.0: [1af4:1001] type 00 class 0x010000
Jan 17 12:22:49.102119 kernel: pci 0000:00:07.0: reg 0x10: [io 0xc080-0xc0ff]
Jan 17 12:22:49.102334 kernel: pci 0000:00:07.0: reg 0x14: [mem 0xfebf5000-0xfebf5fff]
Jan 17 12:22:49.104621 kernel: pci 0000:00:07.0: reg 0x20: [mem 0xfe814000-0xfe817fff 64bit pref]
Jan 17 12:22:49.104845 kernel: pci 0000:00:08.0: [1af4:1002] type 00 class 0x00ff00
Jan 17 12:22:49.105036 kernel: pci 0000:00:08.0: reg 0x10: [io 0xc140-0xc17f]
Jan 17 12:22:49.105216 kernel: pci 0000:00:08.0: reg 0x20: [mem 0xfe818000-0xfe81bfff 64bit pref]
Jan 17 12:22:49.105241 kernel: ACPI: PCI: Interrupt link LNKA configured for IRQ 10
Jan 17 12:22:49.105256 kernel: ACPI: PCI: Interrupt link LNKB configured for IRQ 10
Jan 17 12:22:49.105270 kernel: ACPI: PCI: Interrupt link LNKC configured for IRQ 11
Jan 17 12:22:49.105285 kernel: ACPI: PCI: Interrupt link LNKD configured for IRQ 11
Jan 17 12:22:49.105311 kernel: ACPI: PCI: Interrupt link LNKS configured for IRQ 9
Jan 17 12:22:49.105324 kernel: iommu: Default domain type: Translated
Jan 17 12:22:49.105338 kernel: iommu: DMA domain TLB invalidation policy: lazy mode
Jan 17 12:22:49.105353 kernel: PCI: Using ACPI for IRQ routing
Jan 17 12:22:49.105367 kernel: PCI: pci_cache_line_size set to 64 bytes
Jan 17 12:22:49.105521 kernel: e820: reserve RAM buffer [mem 0x0009fc00-0x0009ffff]
Jan 17 12:22:49.105546 kernel: e820: reserve RAM buffer [mem 0x7ffdb000-0x7fffffff]
Jan 17 12:22:49.105777 kernel: pci 0000:00:02.0: vgaarb: setting as boot VGA device
Jan 17 12:22:49.105949 kernel: pci 0000:00:02.0: vgaarb: bridge control possible
Jan 17 12:22:49.106125 kernel: pci 0000:00:02.0: vgaarb: VGA device added: decodes=io+mem,owns=io+mem,locks=none
Jan 17 12:22:49.106151 kernel: vgaarb: loaded
Jan 17 12:22:49.106168 kernel: hpet0: at MMIO 0xfed00000, IRQs 2, 8, 0
Jan 17 12:22:49.106183 kernel: hpet0: 3 comparators, 64-bit 100.000000 MHz counter
Jan 17 12:22:49.106198 kernel: clocksource: Switched to clocksource kvm-clock
Jan 17 12:22:49.106280 kernel: VFS: Disk quotas dquot_6.6.0
Jan 17 12:22:49.106296 kernel: VFS: Dquot-cache hash table entries: 512 (order 0, 4096 bytes)
Jan 17 12:22:49.106311 kernel: pnp: PnP ACPI init
Jan 17 12:22:49.106326 kernel: pnp: PnP ACPI: found 4 devices
Jan 17 12:22:49.106350 kernel: clocksource: acpi_pm: mask: 0xffffff max_cycles: 0xffffff, max_idle_ns: 2085701024 ns
Jan 17 12:22:49.106366 kernel: NET: Registered PF_INET protocol family
Jan 17 12:22:49.106939 kernel: IP idents hash table entries: 32768 (order: 6, 262144 bytes, linear)
Jan 17 12:22:49.106955 kernel: tcp_listen_portaddr_hash hash table entries: 1024 (order: 2, 16384 bytes, linear)
Jan 17 12:22:49.106970 kernel: Table-perturb hash table entries: 65536 (order: 6, 262144 bytes, linear)
Jan 17 12:22:49.106986 kernel: TCP established hash table entries: 16384 (order: 5, 131072 bytes, linear)
Jan 17 12:22:49.107001 kernel: TCP bind hash table entries: 16384 (order: 7, 524288 bytes, linear)
Jan 17 12:22:49.107016 kernel: TCP: Hash tables configured (established 16384 bind 16384)
Jan 17 12:22:49.107032 kernel: UDP hash table entries: 1024 (order: 3, 32768 bytes, linear)
Jan 17 12:22:49.107056 kernel: UDP-Lite hash table entries: 1024 (order: 3, 32768 bytes, linear)
Jan 17 12:22:49.107071 kernel: NET: Registered PF_UNIX/PF_LOCAL protocol family
Jan 17 12:22:49.107089 kernel: NET: Registered PF_XDP protocol family
Jan 17 12:22:49.107344 kernel: pci_bus 0000:00: resource 4 [io 0x0000-0x0cf7 window]
Jan 17 12:22:49.108617 kernel: pci_bus 0000:00: resource 5 [io 0x0d00-0xffff window]
Jan 17 12:22:49.108794 kernel: pci_bus 0000:00: resource 6 [mem 0x000a0000-0x000bffff window]
Jan 17 12:22:49.108952 kernel: pci_bus 0000:00: resource 7 [mem 0x80000000-0xfebfffff window]
Jan 17 12:22:49.109164 kernel: pci_bus 0000:00: resource 8 [mem 0x100000000-0x17fffffff window]
Jan 17 12:22:49.110485 kernel: pci 0000:00:01.0: PIIX3: Enabling Passive Release
Jan 17 12:22:49.110713 kernel: pci 0000:00:00.0: Limiting direct PCI/PCI transfers
Jan 17 12:22:49.110748 kernel: ACPI: \_SB_.LNKD: Enabled at IRQ 11
Jan 17 12:22:49.110941 kernel: pci 0000:00:01.2: quirk_usb_early_handoff+0x0/0x7a0 took 43354 usecs
Jan 17 12:22:49.110968 kernel: PCI: CLS 0 bytes, default 64
Jan 17 12:22:49.110987 kernel: RAPL PMU: API unit is 2^-32 Joules, 0 fixed counters, 10737418240 ms ovfl timer
Jan 17 12:22:49.111006 kernel: clocksource: tsc: mask: 0xffffffffffffffff max_cycles: 0x21134dbeb26, max_idle_ns: 440795298546 ns
Jan 17 12:22:49.111023 kernel: Initialise system trusted keyrings
Jan 17 12:22:49.111050 kernel: workingset: timestamp_bits=39 max_order=19 bucket_order=0
Jan 17 12:22:49.111066 kernel: Key type asymmetric registered
Jan 17 12:22:49.111081 kernel: Asymmetric key parser 'x509' registered
Jan 17 12:22:49.111096 kernel: Block layer SCSI generic (bsg) driver version 0.4 loaded (major 251)
Jan 17 12:22:49.111111 kernel: io scheduler mq-deadline registered
Jan 17 12:22:49.111127 kernel: io scheduler kyber registered
Jan 17 12:22:49.111144 kernel: io scheduler bfq registered
Jan 17 12:22:49.111162 kernel: ioatdma: Intel(R) QuickData Technology Driver 5.00
Jan 17 12:22:49.111180 kernel: ACPI: \_SB_.LNKB: Enabled at IRQ 10
Jan 17 12:22:49.111204 kernel: ACPI: \_SB_.LNKC: Enabled at IRQ 11
Jan 17 12:22:49.111220 kernel: ACPI: \_SB_.LNKA: Enabled at IRQ 10
Jan 17 12:22:49.111235 kernel: Serial: 8250/16550 driver, 4 ports, IRQ sharing enabled
Jan 17 12:22:49.111249 kernel: 00:00: ttyS0 at I/O 0x3f8 (irq = 4, base_baud = 115200) is a 16550A
Jan 17 12:22:49.111263 kernel: i8042: PNP: PS/2 Controller [PNP0303:KBD,PNP0f13:MOU] at 0x60,0x64 irq 1,12
Jan 17 12:22:49.111278 kernel: serio: i8042 KBD port at 0x60,0x64 irq 1
Jan 17 12:22:49.111293 kernel: serio: i8042 AUX port at 0x60,0x64 irq 12
Jan 17 12:22:49.112634 kernel: rtc_cmos 00:03: RTC can wake from S4
Jan 17 12:22:49.112673 kernel: input: AT Translated Set 2 keyboard as /devices/platform/i8042/serio0/input/input0
Jan 17 12:22:49.112853 kernel: rtc_cmos 00:03: registered as rtc0
Jan 17 12:22:49.113010 kernel: rtc_cmos 00:03: setting system clock to 2025-01-17T12:22:48 UTC (1737116568)
Jan 17 12:22:49.113174 kernel: rtc_cmos 00:03: alarms up to one day, y3k, 242 bytes nvram
Jan 17 12:22:49.113196 kernel: intel_pstate: CPU model not supported
Jan 17 12:22:49.113212 kernel: NET: Registered PF_INET6 protocol family
Jan 17 12:22:49.113226 kernel: Segment Routing with IPv6
Jan 17 12:22:49.113244 kernel: In-situ OAM (IOAM) with IPv6
Jan 17 12:22:49.113258 kernel: NET: Registered PF_PACKET protocol family
Jan 17 12:22:49.113281 kernel: Key type dns_resolver registered
Jan 17 12:22:49.113297 kernel: IPI shorthand broadcast: enabled
Jan 17 12:22:49.113312 kernel: sched_clock: Marking stable (1304006762, 182113702)->(1527332428, -41211964)
Jan 17 12:22:49.113327 kernel: registered taskstats version 1
Jan 17 12:22:49.113341 kernel: Loading compiled-in X.509 certificates
Jan 17 12:22:49.113356 kernel: Loaded X.509 cert 'Kinvolk GmbH: Module signing key for 6.6.71-flatcar: 6baa290b0089ed5c4c5f7248306af816ac8c7f80'
Jan 17 12:22:49.113371 kernel: Key type .fscrypt registered
Jan 17 12:22:49.114445 kernel: Key type fscrypt-provisioning registered
Jan 17 12:22:49.114478 kernel: ima: No TPM chip found, activating TPM-bypass!
Jan 17 12:22:49.114505 kernel: ima: Allocated hash algorithm: sha1
Jan 17 12:22:49.114520 kernel: ima: No architecture policies found
Jan 17 12:22:49.114537 kernel: clk: Disabling unused clocks
Jan 17 12:22:49.114552 kernel: Freeing unused kernel image (initmem) memory: 42848K
Jan 17 12:22:49.114567 kernel: Write protecting the kernel read-only data: 36864k
Jan 17 12:22:49.114611 kernel: Freeing unused kernel image (rodata/data gap) memory: 1848K
Jan 17 12:22:49.114631 kernel: Run /init as init process
Jan 17 12:22:49.114649 kernel: with arguments:
Jan 17 12:22:49.114668 kernel: /init
Jan 17 12:22:49.114691 kernel: with environment:
Jan 17 12:22:49.114708 kernel: HOME=/
Jan 17 12:22:49.114726 kernel: TERM=linux
Jan 17 12:22:49.114743 kernel: BOOT_IMAGE=/flatcar/vmlinuz-a
Jan 17 12:22:49.114764 systemd[1]: systemd 255 running in system mode (+PAM +AUDIT +SELINUX -APPARMOR +IMA +SMACK +SECCOMP +GCRYPT -GNUTLS +OPENSSL -ACL +BLKID +CURL +ELFUTILS -FIDO2 +IDN2 -IDN +IPTC +KMOD +LIBCRYPTSETUP +LIBFDISK +PCRE2 -PWQUALITY -P11KIT -QRENCODE +TPM2 +BZIP2 +LZ4 +XZ +ZLIB +ZSTD -BPF_FRAMEWORK -XKBCOMMON +UTMP -SYSVINIT default-hierarchy=unified)
Jan 17 12:22:49.114786 systemd[1]: Detected virtualization kvm.
Jan 17 12:22:49.114804 systemd[1]: Detected architecture x86-64.
Jan 17 12:22:49.114821 systemd[1]: Running in initrd.
Jan 17 12:22:49.114842 systemd[1]: No hostname configured, using default hostname.
Jan 17 12:22:49.114858 systemd[1]: Hostname set to .
Jan 17 12:22:49.114876 systemd[1]: Initializing machine ID from VM UUID.
Jan 17 12:22:49.114894 systemd[1]: Queued start job for default target initrd.target.
Jan 17 12:22:49.114914 systemd[1]: Started clevis-luks-askpass.path - Forward Password Requests to Clevis Directory Watch.
Jan 17 12:22:49.114938 systemd[1]: Started systemd-ask-password-console.path - Dispatch Password Requests to Console Directory Watch.
Jan 17 12:22:49.114956 systemd[1]: Expecting device dev-disk-by\x2dlabel-EFI\x2dSYSTEM.device - /dev/disk/by-label/EFI-SYSTEM...
Jan 17 12:22:49.114972 systemd[1]: Expecting device dev-disk-by\x2dlabel-OEM.device - /dev/disk/by-label/OEM...
Jan 17 12:22:49.114993 systemd[1]: Expecting device dev-disk-by\x2dlabel-ROOT.device - /dev/disk/by-label/ROOT...
Jan 17 12:22:49.115017 systemd[1]: Expecting device dev-disk-by\x2dpartlabel-USR\x2dA.device - /dev/disk/by-partlabel/USR-A...
Jan 17 12:22:49.115034 systemd[1]: Expecting device dev-disk-by\x2dpartuuid-7130c94a\x2d213a\x2d4e5a\x2d8e26\x2d6cce9662f132.device - /dev/disk/by-partuuid/7130c94a-213a-4e5a-8e26-6cce9662f132...
Jan 17 12:22:49.115051 systemd[1]: Expecting device dev-mapper-usr.device - /dev/mapper/usr...
Jan 17 12:22:49.115069 systemd[1]: Reached target cryptsetup-pre.target - Local Encrypted Volumes (Pre).
Jan 17 12:22:49.115085 systemd[1]: Reached target cryptsetup.target - Local Encrypted Volumes.
Jan 17 12:22:49.115105 systemd[1]: Reached target paths.target - Path Units.
Jan 17 12:22:49.115119 systemd[1]: Reached target slices.target - Slice Units.
Jan 17 12:22:49.115135 systemd[1]: Reached target swap.target - Swaps.
Jan 17 12:22:49.115155 systemd[1]: Reached target timers.target - Timer Units.
Jan 17 12:22:49.115171 systemd[1]: Listening on iscsid.socket - Open-iSCSI iscsid Socket.
Jan 17 12:22:49.115186 systemd[1]: Listening on iscsiuio.socket - Open-iSCSI iscsiuio Socket.
Jan 17 12:22:49.115207 systemd[1]: Listening on systemd-journald-dev-log.socket - Journal Socket (/dev/log).
Jan 17 12:22:49.115225 systemd[1]: Listening on systemd-journald.socket - Journal Socket. Jan 17 12:22:49.115241 systemd[1]: Listening on systemd-networkd.socket - Network Service Netlink Socket. Jan 17 12:22:49.115258 systemd[1]: Listening on systemd-udevd-control.socket - udev Control Socket. Jan 17 12:22:49.115274 systemd[1]: Listening on systemd-udevd-kernel.socket - udev Kernel Socket. Jan 17 12:22:49.115314 systemd[1]: Reached target sockets.target - Socket Units. Jan 17 12:22:49.115332 systemd[1]: Starting ignition-setup-pre.service - Ignition env setup... Jan 17 12:22:49.115347 systemd[1]: Starting kmod-static-nodes.service - Create List of Static Device Nodes... Jan 17 12:22:49.117425 systemd[1]: Finished network-cleanup.service - Network Cleanup. Jan 17 12:22:49.117476 systemd[1]: Starting systemd-fsck-usr.service... Jan 17 12:22:49.117493 systemd[1]: Starting systemd-journald.service - Journal Service... Jan 17 12:22:49.117508 systemd[1]: Starting systemd-modules-load.service - Load Kernel Modules... Jan 17 12:22:49.117523 systemd[1]: Starting systemd-vconsole-setup.service - Virtual Console Setup... Jan 17 12:22:49.117602 systemd-journald[183]: Collecting audit messages is disabled. Jan 17 12:22:49.117652 systemd[1]: Finished ignition-setup-pre.service - Ignition env setup. Jan 17 12:22:49.117668 systemd[1]: Finished kmod-static-nodes.service - Create List of Static Device Nodes. Jan 17 12:22:49.117683 systemd[1]: Finished systemd-fsck-usr.service. Jan 17 12:22:49.117699 systemd[1]: Starting systemd-tmpfiles-setup-dev-early.service - Create Static Device Nodes in /dev gracefully... Jan 17 12:22:49.117721 systemd-journald[183]: Journal started Jan 17 12:22:49.117752 systemd-journald[183]: Runtime Journal (/run/log/journal/ca4b2c5d16dd48ec9e5d696ffacc59e7) is 4.9M, max 39.3M, 34.4M free. 
Jan 17 12:22:49.087619 systemd-modules-load[184]: Inserted module 'overlay' Jan 17 12:22:49.139458 kernel: bridge: filtering via arp/ip/ip6tables is no longer available by default. Update your scripts to load br_netfilter if you need this. Jan 17 12:22:49.142430 kernel: Bridge firewalling registered Jan 17 12:22:49.142025 systemd-modules-load[184]: Inserted module 'br_netfilter' Jan 17 12:22:49.187506 systemd[1]: Started systemd-journald.service - Journal Service. Jan 17 12:22:49.187889 systemd[1]: Finished systemd-modules-load.service - Load Kernel Modules. Jan 17 12:22:49.193750 systemd[1]: Finished systemd-vconsole-setup.service - Virtual Console Setup. Jan 17 12:22:49.208863 systemd[1]: Starting dracut-cmdline-ask.service - dracut ask for additional cmdline parameters... Jan 17 12:22:49.215880 systemd[1]: Starting systemd-sysctl.service - Apply Kernel Variables... Jan 17 12:22:49.219743 systemd[1]: Starting systemd-tmpfiles-setup.service - Create System Files and Directories... Jan 17 12:22:49.224301 systemd[1]: Finished systemd-tmpfiles-setup-dev-early.service - Create Static Device Nodes in /dev gracefully. Jan 17 12:22:49.243918 systemd[1]: Starting systemd-tmpfiles-setup-dev.service - Create Static Device Nodes in /dev... Jan 17 12:22:49.250486 systemd[1]: Finished systemd-sysctl.service - Apply Kernel Variables. Jan 17 12:22:49.269948 systemd[1]: Finished dracut-cmdline-ask.service - dracut ask for additional cmdline parameters. Jan 17 12:22:49.272599 systemd[1]: Finished systemd-tmpfiles-setup.service - Create System Files and Directories. Jan 17 12:22:49.285802 systemd[1]: Starting dracut-cmdline.service - dracut cmdline hook... Jan 17 12:22:49.290687 systemd[1]: Starting systemd-resolved.service - Network Name Resolution... Jan 17 12:22:49.291892 systemd[1]: Finished systemd-tmpfiles-setup-dev.service - Create Static Device Nodes in /dev. 
Jan 17 12:22:49.319759 dracut-cmdline[217]: dracut-dracut-053 Jan 17 12:22:49.331919 dracut-cmdline[217]: Using kernel command line parameters: rd.driver.pre=btrfs rootflags=rw mount.usrflags=ro BOOT_IMAGE=/flatcar/vmlinuz-a mount.usr=/dev/mapper/usr verity.usr=PARTUUID=7130c94a-213a-4e5a-8e26-6cce9662f132 rootflags=rw mount.usrflags=ro consoleblank=0 root=LABEL=ROOT console=ttyS0,115200n8 console=tty0 flatcar.first_boot=detected flatcar.oem.id=digitalocean verity.usrhash=bf1e0d81a0170850ab02d370c1a7c7a3f5983c980b3730f748240a3bda2dbb2e Jan 17 12:22:49.358653 systemd-resolved[218]: Positive Trust Anchors: Jan 17 12:22:49.358678 systemd-resolved[218]: . IN DS 20326 8 2 e06d44b80b8f1d39a95c0b0d7c65d08458e880409bbc683457104237c7f8ec8d Jan 17 12:22:49.358732 systemd-resolved[218]: Negative trust anchors: home.arpa 10.in-addr.arpa 16.172.in-addr.arpa 17.172.in-addr.arpa 18.172.in-addr.arpa 19.172.in-addr.arpa 20.172.in-addr.arpa 21.172.in-addr.arpa 22.172.in-addr.arpa 23.172.in-addr.arpa 24.172.in-addr.arpa 25.172.in-addr.arpa 26.172.in-addr.arpa 27.172.in-addr.arpa 28.172.in-addr.arpa 29.172.in-addr.arpa 30.172.in-addr.arpa 31.172.in-addr.arpa 170.0.0.192.in-addr.arpa 171.0.0.192.in-addr.arpa 168.192.in-addr.arpa d.f.ip6.arpa ipv4only.arpa resolver.arpa corp home internal intranet lan local private test Jan 17 12:22:49.363650 systemd-resolved[218]: Defaulting to hostname 'linux'. Jan 17 12:22:49.365799 systemd[1]: Started systemd-resolved.service - Network Name Resolution. Jan 17 12:22:49.369767 systemd[1]: Reached target nss-lookup.target - Host and Network Name Lookups. Jan 17 12:22:49.495420 kernel: SCSI subsystem initialized Jan 17 12:22:49.509452 kernel: Loading iSCSI transport class v2.0-870. 
Jan 17 12:22:49.526543 kernel: iscsi: registered transport (tcp) Jan 17 12:22:49.559818 kernel: iscsi: registered transport (qla4xxx) Jan 17 12:22:49.559942 kernel: QLogic iSCSI HBA Driver Jan 17 12:22:49.637598 systemd[1]: Finished dracut-cmdline.service - dracut cmdline hook. Jan 17 12:22:49.645766 systemd[1]: Starting dracut-pre-udev.service - dracut pre-udev hook... Jan 17 12:22:49.696071 kernel: device-mapper: core: CONFIG_IMA_DISABLE_HTABLE is disabled. Duplicate IMA measurements will not be recorded in the IMA log. Jan 17 12:22:49.696159 kernel: device-mapper: uevent: version 1.0.3 Jan 17 12:22:49.699413 kernel: device-mapper: ioctl: 4.48.0-ioctl (2023-03-01) initialised: dm-devel@redhat.com Jan 17 12:22:49.752465 kernel: raid6: avx2x4 gen() 16083 MB/s Jan 17 12:22:49.770460 kernel: raid6: avx2x2 gen() 15099 MB/s Jan 17 12:22:49.789068 kernel: raid6: avx2x1 gen() 11504 MB/s Jan 17 12:22:49.789205 kernel: raid6: using algorithm avx2x4 gen() 16083 MB/s Jan 17 12:22:49.808287 kernel: raid6: .... xor() 5538 MB/s, rmw enabled Jan 17 12:22:49.808507 kernel: raid6: using avx2x2 recovery algorithm Jan 17 12:22:49.841430 kernel: xor: automatically using best checksumming function avx Jan 17 12:22:50.062458 kernel: Btrfs loaded, zoned=no, fsverity=no Jan 17 12:22:50.089279 systemd[1]: Finished dracut-pre-udev.service - dracut pre-udev hook. Jan 17 12:22:50.099973 systemd[1]: Starting systemd-udevd.service - Rule-based Manager for Device Events and Files... Jan 17 12:22:50.133071 systemd-udevd[402]: Using default interface naming scheme 'v255'. Jan 17 12:22:50.143568 systemd[1]: Started systemd-udevd.service - Rule-based Manager for Device Events and Files. Jan 17 12:22:50.157687 systemd[1]: Starting dracut-pre-trigger.service - dracut pre-trigger hook... Jan 17 12:22:50.192892 dracut-pre-trigger[408]: rd.md=0: removing MD RAID activation Jan 17 12:22:50.259743 systemd[1]: Finished dracut-pre-trigger.service - dracut pre-trigger hook. 
Jan 17 12:22:50.267844 systemd[1]: Starting systemd-udev-trigger.service - Coldplug All udev Devices... Jan 17 12:22:50.379850 systemd[1]: Finished systemd-udev-trigger.service - Coldplug All udev Devices. Jan 17 12:22:50.388875 systemd[1]: Starting dracut-initqueue.service - dracut initqueue hook... Jan 17 12:22:50.432423 systemd[1]: Finished dracut-initqueue.service - dracut initqueue hook. Jan 17 12:22:50.442512 systemd[1]: Reached target remote-fs-pre.target - Preparation for Remote File Systems. Jan 17 12:22:50.444173 systemd[1]: Reached target remote-cryptsetup.target - Remote Encrypted Volumes. Jan 17 12:22:50.446520 systemd[1]: Reached target remote-fs.target - Remote File Systems. Jan 17 12:22:50.461602 systemd[1]: Starting dracut-pre-mount.service - dracut pre-mount hook... Jan 17 12:22:50.500193 systemd[1]: Finished dracut-pre-mount.service - dracut pre-mount hook. Jan 17 12:22:50.560968 kernel: scsi host0: Virtio SCSI HBA Jan 17 12:22:50.571987 kernel: cryptd: max_cpu_qlen set to 1000 Jan 17 12:22:50.572099 kernel: virtio_blk virtio4: 1/0/0 default/read/poll queues Jan 17 12:22:50.670593 kernel: virtio_blk virtio4: [vda] 125829120 512-byte logical blocks (64.4 GB/60.0 GiB) Jan 17 12:22:50.670830 kernel: libata version 3.00 loaded. Jan 17 12:22:50.670869 kernel: GPT:Primary header thinks Alt. header is not at the end of the disk. Jan 17 12:22:50.670901 kernel: GPT:9289727 != 125829119 Jan 17 12:22:50.670927 kernel: GPT:Alternate GPT header not at the end of the disk. Jan 17 12:22:50.670968 kernel: GPT:9289727 != 125829119 Jan 17 12:22:50.670994 kernel: GPT: Use GNU Parted to correct GPT errors. Jan 17 12:22:50.671020 kernel: vda: vda1 vda2 vda3 vda4 vda6 vda7 vda9 Jan 17 12:22:50.671047 kernel: ata_piix 0000:00:01.1: version 2.13 Jan 17 12:22:50.692573 kernel: AVX2 version of gcm_enc/dec engaged. 
Jan 17 12:22:50.692614 kernel: AES CTR mode by8 optimization enabled Jan 17 12:22:50.692642 kernel: scsi host1: ata_piix Jan 17 12:22:50.692972 kernel: scsi host2: ata_piix Jan 17 12:22:50.693283 kernel: ACPI: bus type USB registered Jan 17 12:22:50.693313 kernel: ata1: PATA max MWDMA2 cmd 0x1f0 ctl 0x3f6 bmdma 0xc1e0 irq 14 Jan 17 12:22:50.693339 kernel: ata2: PATA max MWDMA2 cmd 0x170 ctl 0x376 bmdma 0xc1e8 irq 15 Jan 17 12:22:50.693367 kernel: virtio_blk virtio5: 1/0/0 default/read/poll queues Jan 17 12:22:50.695605 kernel: usbcore: registered new interface driver usbfs Jan 17 12:22:50.695638 kernel: usbcore: registered new interface driver hub Jan 17 12:22:50.695657 kernel: usbcore: registered new device driver usb Jan 17 12:22:50.695674 kernel: virtio_blk virtio5: [vdb] 920 512-byte logical blocks (471 kB/460 KiB) Jan 17 12:22:50.664251 systemd[1]: dracut-cmdline-ask.service: Deactivated successfully. Jan 17 12:22:50.664567 systemd[1]: Stopped dracut-cmdline-ask.service - dracut ask for additional cmdline parameters. Jan 17 12:22:50.665758 systemd[1]: Stopping dracut-cmdline-ask.service - dracut ask for additional cmdline parameters... Jan 17 12:22:50.666543 systemd[1]: systemd-vconsole-setup.service: Deactivated successfully. Jan 17 12:22:50.666877 systemd[1]: Stopped systemd-vconsole-setup.service - Virtual Console Setup. Jan 17 12:22:50.667759 systemd[1]: Stopping systemd-vconsole-setup.service - Virtual Console Setup... Jan 17 12:22:50.676106 systemd[1]: Starting systemd-vconsole-setup.service - Virtual Console Setup... Jan 17 12:22:50.777776 systemd[1]: Finished systemd-vconsole-setup.service - Virtual Console Setup. Jan 17 12:22:50.782800 systemd[1]: Starting dracut-cmdline-ask.service - dracut ask for additional cmdline parameters... Jan 17 12:22:50.852541 systemd[1]: Finished dracut-cmdline-ask.service - dracut ask for additional cmdline parameters. 
Jan 17 12:22:50.910420 kernel: BTRFS: device label OEM devid 1 transid 12 /dev/vda6 scanned by (udev-worker) (457) Jan 17 12:22:50.920443 kernel: BTRFS: device fsid e459b8ee-f1f7-4c3d-a087-3f1955f52c85 devid 1 transid 36 /dev/vda3 scanned by (udev-worker) (446) Jan 17 12:22:50.941583 systemd[1]: Found device dev-disk-by\x2dlabel-ROOT.device - /dev/disk/by-label/ROOT. Jan 17 12:22:50.959346 kernel: uhci_hcd 0000:00:01.2: UHCI Host Controller Jan 17 12:22:50.967386 kernel: uhci_hcd 0000:00:01.2: new USB bus registered, assigned bus number 1 Jan 17 12:22:50.967686 kernel: uhci_hcd 0000:00:01.2: detected 2 ports Jan 17 12:22:50.967903 kernel: uhci_hcd 0000:00:01.2: irq 11, io port 0x0000c180 Jan 17 12:22:50.968141 kernel: hub 1-0:1.0: USB hub found Jan 17 12:22:50.968771 kernel: hub 1-0:1.0: 2 ports detected Jan 17 12:22:50.958746 systemd[1]: Found device dev-disk-by\x2dlabel-EFI\x2dSYSTEM.device - /dev/disk/by-label/EFI-SYSTEM. Jan 17 12:22:50.974469 systemd[1]: Found device dev-disk-by\x2dlabel-OEM.device - /dev/disk/by-label/OEM. Jan 17 12:22:50.983773 systemd[1]: Found device dev-disk-by\x2dpartuuid-7130c94a\x2d213a\x2d4e5a\x2d8e26\x2d6cce9662f132.device - /dev/disk/by-partuuid/7130c94a-213a-4e5a-8e26-6cce9662f132. Jan 17 12:22:50.984883 systemd[1]: Found device dev-disk-by\x2dpartlabel-USR\x2dA.device - /dev/disk/by-partlabel/USR-A. Jan 17 12:22:50.996819 systemd[1]: Starting disk-uuid.service - Generate new UUID for disk GPT if necessary... Jan 17 12:22:51.012245 disk-uuid[549]: Primary Header is updated. Jan 17 12:22:51.012245 disk-uuid[549]: Secondary Entries is updated. Jan 17 12:22:51.012245 disk-uuid[549]: Secondary Header is updated. 
Jan 17 12:22:51.022473 kernel: vda: vda1 vda2 vda3 vda4 vda6 vda7 vda9 Jan 17 12:22:51.032693 kernel: vda: vda1 vda2 vda3 vda4 vda6 vda7 vda9 Jan 17 12:22:51.049454 kernel: vda: vda1 vda2 vda3 vda4 vda6 vda7 vda9 Jan 17 12:22:52.050447 kernel: vda: vda1 vda2 vda3 vda4 vda6 vda7 vda9 Jan 17 12:22:52.051465 disk-uuid[550]: The operation has completed successfully. Jan 17 12:22:52.147514 systemd[1]: disk-uuid.service: Deactivated successfully. Jan 17 12:22:52.147754 systemd[1]: Finished disk-uuid.service - Generate new UUID for disk GPT if necessary. Jan 17 12:22:52.162799 systemd[1]: Starting verity-setup.service - Verity Setup for /dev/mapper/usr... Jan 17 12:22:52.183007 sh[563]: Success Jan 17 12:22:52.207482 kernel: device-mapper: verity: sha256 using implementation "sha256-avx2" Jan 17 12:22:52.347776 systemd[1]: Found device dev-mapper-usr.device - /dev/mapper/usr. Jan 17 12:22:52.362641 systemd[1]: Mounting sysusr-usr.mount - /sysusr/usr... Jan 17 12:22:52.369302 systemd[1]: Finished verity-setup.service - Verity Setup for /dev/mapper/usr. Jan 17 12:22:52.414479 kernel: BTRFS info (device dm-0): first mount of filesystem e459b8ee-f1f7-4c3d-a087-3f1955f52c85 Jan 17 12:22:52.414612 kernel: BTRFS info (device dm-0): using crc32c (crc32c-intel) checksum algorithm Jan 17 12:22:52.414647 kernel: BTRFS warning (device dm-0): 'nologreplay' is deprecated, use 'rescue=nologreplay' instead Jan 17 12:22:52.418474 kernel: BTRFS info (device dm-0): disabling log replay at mount time Jan 17 12:22:52.418606 kernel: BTRFS info (device dm-0): using free space tree Jan 17 12:22:52.433782 systemd[1]: Mounted sysusr-usr.mount - /sysusr/usr. Jan 17 12:22:52.435848 systemd[1]: afterburn-network-kargs.service - Afterburn Initrd Setup Network Kernel Arguments was skipped because no trigger condition checks were met. Jan 17 12:22:52.442850 systemd[1]: Starting ignition-setup.service - Ignition (setup)... 
Jan 17 12:22:52.455689 systemd[1]: Starting parse-ip-for-networkd.service - Write systemd-networkd units from cmdline... Jan 17 12:22:52.484472 kernel: BTRFS info (device vda6): first mount of filesystem a70a40d6-5ab2-4665-81b1-b8e9f58c5ff8 Jan 17 12:22:52.484623 kernel: BTRFS info (device vda6): using crc32c (crc32c-intel) checksum algorithm Jan 17 12:22:52.488204 kernel: BTRFS info (device vda6): using free space tree Jan 17 12:22:52.494428 kernel: BTRFS info (device vda6): auto enabling async discard Jan 17 12:22:52.515705 systemd[1]: mnt-oem.mount: Deactivated successfully. Jan 17 12:22:52.519122 kernel: BTRFS info (device vda6): last unmount of filesystem a70a40d6-5ab2-4665-81b1-b8e9f58c5ff8 Jan 17 12:22:52.530267 systemd[1]: Finished ignition-setup.service - Ignition (setup). Jan 17 12:22:52.539969 systemd[1]: Starting ignition-fetch-offline.service - Ignition (fetch-offline)... Jan 17 12:22:52.692423 systemd[1]: Finished parse-ip-for-networkd.service - Write systemd-networkd units from cmdline. Jan 17 12:22:52.705941 systemd[1]: Starting systemd-networkd.service - Network Configuration... Jan 17 12:22:52.769033 ignition[667]: Ignition 2.19.0 Jan 17 12:22:52.770184 ignition[667]: Stage: fetch-offline Jan 17 12:22:52.770287 ignition[667]: no configs at "/usr/lib/ignition/base.d" Jan 17 12:22:52.772358 systemd-networkd[747]: lo: Link UP Jan 17 12:22:52.770305 ignition[667]: no config dir at "/usr/lib/ignition/base.platform.d/digitalocean" Jan 17 12:22:52.772366 systemd-networkd[747]: lo: Gained carrier Jan 17 12:22:52.770569 ignition[667]: parsed url from cmdline: "" Jan 17 12:22:52.776200 systemd[1]: Finished ignition-fetch-offline.service - Ignition (fetch-offline). 
Jan 17 12:22:52.770577 ignition[667]: no config URL provided Jan 17 12:22:52.778577 systemd-networkd[747]: Enumeration completed Jan 17 12:22:52.770588 ignition[667]: reading system config file "/usr/lib/ignition/user.ign" Jan 17 12:22:52.779424 systemd-networkd[747]: eth0: found matching network '/usr/lib/systemd/network/yy-digitalocean.network', based on potentially unpredictable interface name. Jan 17 12:22:52.770605 ignition[667]: no config at "/usr/lib/ignition/user.ign" Jan 17 12:22:52.779431 systemd-networkd[747]: eth0: Configuring with /usr/lib/systemd/network/yy-digitalocean.network. Jan 17 12:22:52.770617 ignition[667]: failed to fetch config: resource requires networking Jan 17 12:22:52.780964 systemd-networkd[747]: eth1: found matching network '/usr/lib/systemd/network/zz-default.network', based on potentially unpredictable interface name. Jan 17 12:22:52.772117 ignition[667]: Ignition finished successfully Jan 17 12:22:52.780970 systemd-networkd[747]: eth1: Configuring with /usr/lib/systemd/network/zz-default.network. Jan 17 12:22:52.783065 systemd-networkd[747]: eth0: Link UP Jan 17 12:22:52.783072 systemd-networkd[747]: eth0: Gained carrier Jan 17 12:22:52.783090 systemd-networkd[747]: eth0: found matching network '/usr/lib/systemd/network/yy-digitalocean.network', based on potentially unpredictable interface name. Jan 17 12:22:52.783638 systemd[1]: Started systemd-networkd.service - Network Configuration. Jan 17 12:22:52.784836 systemd[1]: Reached target network.target - Network. Jan 17 12:22:52.792327 systemd-networkd[747]: eth1: Link UP Jan 17 12:22:52.792334 systemd-networkd[747]: eth1: Gained carrier Jan 17 12:22:52.792356 systemd-networkd[747]: eth1: found matching network '/usr/lib/systemd/network/zz-default.network', based on potentially unpredictable interface name. Jan 17 12:22:52.795218 systemd[1]: Starting ignition-fetch.service - Ignition (fetch)... 
Jan 17 12:22:52.811591 systemd-networkd[747]: eth0: DHCPv4 address 146.190.50.84/19, gateway 146.190.32.1 acquired from 169.254.169.253 Jan 17 12:22:52.814781 systemd-networkd[747]: eth1: DHCPv4 address 10.124.0.20/20 acquired from 169.254.169.253 Jan 17 12:22:52.840560 ignition[754]: Ignition 2.19.0 Jan 17 12:22:52.840581 ignition[754]: Stage: fetch Jan 17 12:22:52.840944 ignition[754]: no configs at "/usr/lib/ignition/base.d" Jan 17 12:22:52.840969 ignition[754]: no config dir at "/usr/lib/ignition/base.platform.d/digitalocean" Jan 17 12:22:52.841139 ignition[754]: parsed url from cmdline: "" Jan 17 12:22:52.841146 ignition[754]: no config URL provided Jan 17 12:22:52.841155 ignition[754]: reading system config file "/usr/lib/ignition/user.ign" Jan 17 12:22:52.841169 ignition[754]: no config at "/usr/lib/ignition/user.ign" Jan 17 12:22:52.841203 ignition[754]: GET http://169.254.169.254/metadata/v1/user-data: attempt #1 Jan 17 12:22:52.874035 ignition[754]: GET result: OK Jan 17 12:22:52.874257 ignition[754]: parsing config with SHA512: 6054f815618d7135e9c351261a6a5a5c9bce9ab2a5047027046bd11350862c12e4e6cb56833a66769b85fbd50a77f18e82d3354991194b90049711a84ac478a4 Jan 17 12:22:52.881863 unknown[754]: fetched base config from "system" Jan 17 12:22:52.882698 unknown[754]: fetched base config from "system" Jan 17 12:22:52.883203 ignition[754]: fetch: fetch complete Jan 17 12:22:52.882723 unknown[754]: fetched user config from "digitalocean" Jan 17 12:22:52.883211 ignition[754]: fetch: fetch passed Jan 17 12:22:52.885806 systemd[1]: Finished ignition-fetch.service - Ignition (fetch). Jan 17 12:22:52.883320 ignition[754]: Ignition finished successfully Jan 17 12:22:52.894863 systemd[1]: Starting ignition-kargs.service - Ignition (kargs)... 
Jan 17 12:22:52.942017 ignition[761]: Ignition 2.19.0 Jan 17 12:22:52.942038 ignition[761]: Stage: kargs Jan 17 12:22:52.942521 ignition[761]: no configs at "/usr/lib/ignition/base.d" Jan 17 12:22:52.942543 ignition[761]: no config dir at "/usr/lib/ignition/base.platform.d/digitalocean" Jan 17 12:22:52.943873 ignition[761]: kargs: kargs passed Jan 17 12:22:52.946201 systemd[1]: Finished ignition-kargs.service - Ignition (kargs). Jan 17 12:22:52.943987 ignition[761]: Ignition finished successfully Jan 17 12:22:52.951804 systemd[1]: Starting ignition-disks.service - Ignition (disks)... Jan 17 12:22:53.003948 ignition[768]: Ignition 2.19.0 Jan 17 12:22:53.003965 ignition[768]: Stage: disks Jan 17 12:22:53.004462 ignition[768]: no configs at "/usr/lib/ignition/base.d" Jan 17 12:22:53.008568 systemd[1]: Finished ignition-disks.service - Ignition (disks). Jan 17 12:22:53.004486 ignition[768]: no config dir at "/usr/lib/ignition/base.platform.d/digitalocean" Jan 17 12:22:53.012284 systemd[1]: Reached target initrd-root-device.target - Initrd Root Device. Jan 17 12:22:53.005801 ignition[768]: disks: disks passed Jan 17 12:22:53.019256 systemd[1]: Reached target local-fs-pre.target - Preparation for Local File Systems. Jan 17 12:22:53.005891 ignition[768]: Ignition finished successfully Jan 17 12:22:53.020770 systemd[1]: Reached target local-fs.target - Local File Systems. Jan 17 12:22:53.022149 systemd[1]: Reached target sysinit.target - System Initialization. Jan 17 12:22:53.023355 systemd[1]: Reached target basic.target - Basic System. Jan 17 12:22:53.031781 systemd[1]: Starting systemd-fsck-root.service - File System Check on /dev/disk/by-label/ROOT... Jan 17 12:22:53.077031 systemd-fsck[777]: ROOT: clean, 14/553520 files, 52654/553472 blocks Jan 17 12:22:53.086232 systemd[1]: Finished systemd-fsck-root.service - File System Check on /dev/disk/by-label/ROOT. Jan 17 12:22:53.093678 systemd[1]: Mounting sysroot.mount - /sysroot... 
Jan 17 12:22:53.246411 kernel: EXT4-fs (vda9): mounted filesystem 0ba4fe0e-76d7-406f-b570-4642d86198f6 r/w with ordered data mode. Quota mode: none. Jan 17 12:22:53.248779 systemd[1]: Mounted sysroot.mount - /sysroot. Jan 17 12:22:53.251707 systemd[1]: Reached target initrd-root-fs.target - Initrd Root File System. Jan 17 12:22:53.262640 systemd[1]: Mounting sysroot-oem.mount - /sysroot/oem... Jan 17 12:22:53.271707 systemd[1]: Mounting sysroot-usr.mount - /sysroot/usr... Jan 17 12:22:53.275837 systemd[1]: Starting flatcar-digitalocean-network.service - Flatcar DigitalOcean Network Agent... Jan 17 12:22:53.288747 systemd[1]: Starting flatcar-metadata-hostname.service - Flatcar Metadata Hostname Agent... Jan 17 12:22:53.290105 systemd[1]: ignition-remount-sysroot.service - Remount /sysroot read-write for Ignition was skipped because of an unmet condition check (ConditionPathIsReadWrite=!/sysroot). Jan 17 12:22:53.290176 systemd[1]: Reached target ignition-diskful.target - Ignition Boot Disk Setup. Jan 17 12:22:53.303512 kernel: BTRFS: device label OEM devid 1 transid 13 /dev/vda6 scanned by mount (785) Jan 17 12:22:53.310422 kernel: BTRFS info (device vda6): first mount of filesystem a70a40d6-5ab2-4665-81b1-b8e9f58c5ff8 Jan 17 12:22:53.312020 systemd[1]: Mounted sysroot-usr.mount - /sysroot/usr. Jan 17 12:22:53.320571 kernel: BTRFS info (device vda6): using crc32c (crc32c-intel) checksum algorithm Jan 17 12:22:53.320633 kernel: BTRFS info (device vda6): using free space tree Jan 17 12:22:53.326460 kernel: BTRFS info (device vda6): auto enabling async discard Jan 17 12:22:53.330972 systemd[1]: Starting initrd-setup-root.service - Root filesystem setup... Jan 17 12:22:53.344297 systemd[1]: Mounted sysroot-oem.mount - /sysroot/oem. 
Jan 17 12:22:53.464263 coreos-metadata[788]: Jan 17 12:22:53.463 INFO Fetching http://169.254.169.254/metadata/v1.json: Attempt #1 Jan 17 12:22:53.474897 coreos-metadata[787]: Jan 17 12:22:53.474 INFO Fetching http://169.254.169.254/metadata/v1.json: Attempt #1 Jan 17 12:22:53.479040 initrd-setup-root[816]: cut: /sysroot/etc/passwd: No such file or directory Jan 17 12:22:53.483278 coreos-metadata[788]: Jan 17 12:22:53.482 INFO Fetch successful Jan 17 12:22:53.489213 coreos-metadata[787]: Jan 17 12:22:53.488 INFO Fetch successful Jan 17 12:22:53.490930 coreos-metadata[788]: Jan 17 12:22:53.490 INFO wrote hostname ci-4081.3.0-1-97f5d36106 to /sysroot/etc/hostname Jan 17 12:22:53.494590 systemd[1]: Finished flatcar-metadata-hostname.service - Flatcar Metadata Hostname Agent. Jan 17 12:22:53.498302 systemd[1]: flatcar-digitalocean-network.service: Deactivated successfully. Jan 17 12:22:53.498609 systemd[1]: Finished flatcar-digitalocean-network.service - Flatcar DigitalOcean Network Agent. Jan 17 12:22:53.503915 initrd-setup-root[823]: cut: /sysroot/etc/group: No such file or directory Jan 17 12:22:53.515235 initrd-setup-root[832]: cut: /sysroot/etc/shadow: No such file or directory Jan 17 12:22:53.523992 initrd-setup-root[839]: cut: /sysroot/etc/gshadow: No such file or directory Jan 17 12:22:53.705582 systemd[1]: Finished initrd-setup-root.service - Root filesystem setup. Jan 17 12:22:53.712665 systemd[1]: Starting ignition-mount.service - Ignition (mount)... Jan 17 12:22:53.716803 systemd[1]: Starting sysroot-boot.service - /sysroot/boot... Jan 17 12:22:53.748776 systemd[1]: sysroot-oem.mount: Deactivated successfully. Jan 17 12:22:53.754108 kernel: BTRFS info (device vda6): last unmount of filesystem a70a40d6-5ab2-4665-81b1-b8e9f58c5ff8 Jan 17 12:22:53.789777 systemd[1]: Finished sysroot-boot.service - /sysroot/boot. 
Jan 17 12:22:53.807653 ignition[906]: INFO : Ignition 2.19.0 Jan 17 12:22:53.807653 ignition[906]: INFO : Stage: mount Jan 17 12:22:53.809632 ignition[906]: INFO : no configs at "/usr/lib/ignition/base.d" Jan 17 12:22:53.809632 ignition[906]: INFO : no config dir at "/usr/lib/ignition/base.platform.d/digitalocean" Jan 17 12:22:53.812946 ignition[906]: INFO : mount: mount passed Jan 17 12:22:53.812946 ignition[906]: INFO : Ignition finished successfully Jan 17 12:22:53.813034 systemd[1]: Finished ignition-mount.service - Ignition (mount). Jan 17 12:22:53.822693 systemd[1]: Starting ignition-files.service - Ignition (files)... Jan 17 12:22:53.854832 systemd[1]: Mounting sysroot-oem.mount - /sysroot/oem... Jan 17 12:22:53.877455 kernel: BTRFS: device label OEM devid 1 transid 14 /dev/vda6 scanned by mount (918) Jan 17 12:22:53.882505 kernel: BTRFS info (device vda6): first mount of filesystem a70a40d6-5ab2-4665-81b1-b8e9f58c5ff8 Jan 17 12:22:53.882657 kernel: BTRFS info (device vda6): using crc32c (crc32c-intel) checksum algorithm Jan 17 12:22:53.882687 kernel: BTRFS info (device vda6): using free space tree Jan 17 12:22:53.892444 kernel: BTRFS info (device vda6): auto enabling async discard Jan 17 12:22:53.896223 systemd[1]: Mounted sysroot-oem.mount - /sysroot/oem. 
Jan 17 12:22:53.938217 ignition[935]: INFO : Ignition 2.19.0 Jan 17 12:22:53.939571 ignition[935]: INFO : Stage: files Jan 17 12:22:53.940247 ignition[935]: INFO : no configs at "/usr/lib/ignition/base.d" Jan 17 12:22:53.940247 ignition[935]: INFO : no config dir at "/usr/lib/ignition/base.platform.d/digitalocean" Jan 17 12:22:53.942504 ignition[935]: DEBUG : files: compiled without relabeling support, skipping Jan 17 12:22:53.943552 ignition[935]: INFO : files: ensureUsers: op(1): [started] creating or modifying user "core" Jan 17 12:22:53.943552 ignition[935]: DEBUG : files: ensureUsers: op(1): executing: "usermod" "--root" "/sysroot" "core" Jan 17 12:22:53.948532 ignition[935]: INFO : files: ensureUsers: op(1): [finished] creating or modifying user "core" Jan 17 12:22:53.949862 ignition[935]: INFO : files: ensureUsers: op(2): [started] adding ssh keys to user "core" Jan 17 12:22:53.949862 ignition[935]: INFO : files: ensureUsers: op(2): [finished] adding ssh keys to user "core" Jan 17 12:22:53.949282 unknown[935]: wrote ssh authorized keys file for user: core Jan 17 12:22:53.953342 ignition[935]: INFO : files: createFilesystemsFiles: createFiles: op(3): [started] writing file "/sysroot/etc/flatcar-cgroupv1" Jan 17 12:22:53.953342 ignition[935]: INFO : files: createFilesystemsFiles: createFiles: op(3): [finished] writing file "/sysroot/etc/flatcar-cgroupv1" Jan 17 12:22:53.953342 ignition[935]: INFO : files: createFilesystemsFiles: createFiles: op(4): [started] writing file "/sysroot/home/core/install.sh" Jan 17 12:22:53.953342 ignition[935]: INFO : files: createFilesystemsFiles: createFiles: op(4): [finished] writing file "/sysroot/home/core/install.sh" Jan 17 12:22:53.953342 ignition[935]: INFO : files: createFilesystemsFiles: createFiles: op(5): [started] writing file "/sysroot/etc/flatcar/update.conf" Jan 17 12:22:53.961056 ignition[935]: INFO : files: createFilesystemsFiles: createFiles: op(5): [finished] writing file "/sysroot/etc/flatcar/update.conf" Jan 
17 12:22:53.961056 ignition[935]: INFO : files: createFilesystemsFiles: createFiles: op(6): [started] writing link "/sysroot/etc/extensions/kubernetes.raw" -> "/opt/extensions/kubernetes/kubernetes-v1.29.2-x86-64.raw" Jan 17 12:22:53.961056 ignition[935]: INFO : files: createFilesystemsFiles: createFiles: op(6): [finished] writing link "/sysroot/etc/extensions/kubernetes.raw" -> "/opt/extensions/kubernetes/kubernetes-v1.29.2-x86-64.raw" Jan 17 12:22:53.961056 ignition[935]: INFO : files: createFilesystemsFiles: createFiles: op(7): [started] writing file "/sysroot/opt/extensions/kubernetes/kubernetes-v1.29.2-x86-64.raw" Jan 17 12:22:53.961056 ignition[935]: INFO : files: createFilesystemsFiles: createFiles: op(7): GET https://github.com/flatcar/sysext-bakery/releases/download/latest/kubernetes-v1.29.2-x86-64.raw: attempt #1 Jan 17 12:22:54.017974 systemd-networkd[747]: eth0: Gained IPv6LL Jan 17 12:22:54.402130 systemd-networkd[747]: eth1: Gained IPv6LL Jan 17 12:22:54.495828 ignition[935]: INFO : files: createFilesystemsFiles: createFiles: op(7): GET result: OK Jan 17 12:22:55.099493 ignition[935]: INFO : files: createFilesystemsFiles: createFiles: op(7): [finished] writing file "/sysroot/opt/extensions/kubernetes/kubernetes-v1.29.2-x86-64.raw" Jan 17 12:22:55.099493 ignition[935]: INFO : files: op(8): [started] processing unit "containerd.service" Jan 17 12:22:55.102818 ignition[935]: INFO : files: op(8): op(9): [started] writing systemd drop-in "10-use-cgroupfs.conf" at "/sysroot/etc/systemd/system/containerd.service.d/10-use-cgroupfs.conf" Jan 17 12:22:55.102818 ignition[935]: INFO : files: op(8): op(9): [finished] writing systemd drop-in "10-use-cgroupfs.conf" at "/sysroot/etc/systemd/system/containerd.service.d/10-use-cgroupfs.conf" Jan 17 12:22:55.102818 ignition[935]: INFO : files: op(8): [finished] processing unit "containerd.service" Jan 17 12:22:55.102818 ignition[935]: INFO : files: createResultFile: createFiles: op(a): [started] writing file 
"/sysroot/etc/.ignition-result.json" Jan 17 12:22:55.102818 ignition[935]: INFO : files: createResultFile: createFiles: op(a): [finished] writing file "/sysroot/etc/.ignition-result.json" Jan 17 12:22:55.102818 ignition[935]: INFO : files: files passed Jan 17 12:22:55.102818 ignition[935]: INFO : Ignition finished successfully Jan 17 12:22:55.104437 systemd[1]: Finished ignition-files.service - Ignition (files). Jan 17 12:22:55.125960 systemd[1]: Starting ignition-quench.service - Ignition (record completion)... Jan 17 12:22:55.138703 systemd[1]: Starting initrd-setup-root-after-ignition.service - Root filesystem completion... Jan 17 12:22:55.143438 systemd[1]: ignition-quench.service: Deactivated successfully. Jan 17 12:22:55.143591 systemd[1]: Finished ignition-quench.service - Ignition (record completion). Jan 17 12:22:55.170630 initrd-setup-root-after-ignition[963]: grep: /sysroot/etc/flatcar/enabled-sysext.conf: No such file or directory Jan 17 12:22:55.173892 initrd-setup-root-after-ignition[963]: grep: /sysroot/usr/share/flatcar/enabled-sysext.conf: No such file or directory Jan 17 12:22:55.175561 initrd-setup-root-after-ignition[967]: grep: /sysroot/etc/flatcar/enabled-sysext.conf: No such file or directory Jan 17 12:22:55.176535 systemd[1]: Finished initrd-setup-root-after-ignition.service - Root filesystem completion. Jan 17 12:22:55.179018 systemd[1]: Reached target ignition-complete.target - Ignition Complete. Jan 17 12:22:55.186273 systemd[1]: Starting initrd-parse-etc.service - Mountpoints Configured in the Real Root... Jan 17 12:22:55.246151 systemd[1]: initrd-parse-etc.service: Deactivated successfully. Jan 17 12:22:55.246346 systemd[1]: Finished initrd-parse-etc.service - Mountpoints Configured in the Real Root. Jan 17 12:22:55.248591 systemd[1]: Reached target initrd-fs.target - Initrd File Systems. Jan 17 12:22:55.249648 systemd[1]: Reached target initrd.target - Initrd Default Target. 
Jan 17 12:22:55.251306 systemd[1]: dracut-mount.service - dracut mount hook was skipped because no trigger condition checks were met.
Jan 17 12:22:55.267844 systemd[1]: Starting dracut-pre-pivot.service - dracut pre-pivot and cleanup hook...
Jan 17 12:22:55.290624 systemd[1]: Finished dracut-pre-pivot.service - dracut pre-pivot and cleanup hook.
Jan 17 12:22:55.298829 systemd[1]: Starting initrd-cleanup.service - Cleaning Up and Shutting Down Daemons...
Jan 17 12:22:55.331553 systemd[1]: Stopped target nss-lookup.target - Host and Network Name Lookups.
Jan 17 12:22:55.332782 systemd[1]: Stopped target remote-cryptsetup.target - Remote Encrypted Volumes.
Jan 17 12:22:55.334667 systemd[1]: Stopped target timers.target - Timer Units.
Jan 17 12:22:55.336108 systemd[1]: dracut-pre-pivot.service: Deactivated successfully.
Jan 17 12:22:55.336510 systemd[1]: Stopped dracut-pre-pivot.service - dracut pre-pivot and cleanup hook.
Jan 17 12:22:55.338255 systemd[1]: Stopped target initrd.target - Initrd Default Target.
Jan 17 12:22:55.339822 systemd[1]: Stopped target basic.target - Basic System.
Jan 17 12:22:55.341032 systemd[1]: Stopped target ignition-complete.target - Ignition Complete.
Jan 17 12:22:55.343073 systemd[1]: Stopped target ignition-diskful.target - Ignition Boot Disk Setup.
Jan 17 12:22:55.344475 systemd[1]: Stopped target initrd-root-device.target - Initrd Root Device.
Jan 17 12:22:55.346052 systemd[1]: Stopped target remote-fs.target - Remote File Systems.
Jan 17 12:22:55.347464 systemd[1]: Stopped target remote-fs-pre.target - Preparation for Remote File Systems.
Jan 17 12:22:55.348870 systemd[1]: Stopped target sysinit.target - System Initialization.
Jan 17 12:22:55.350285 systemd[1]: Stopped target local-fs.target - Local File Systems.
Jan 17 12:22:55.351561 systemd[1]: Stopped target swap.target - Swaps.
Jan 17 12:22:55.352723 systemd[1]: dracut-pre-mount.service: Deactivated successfully.
Jan 17 12:22:55.353080 systemd[1]: Stopped dracut-pre-mount.service - dracut pre-mount hook.
Jan 17 12:22:55.355472 systemd[1]: Stopped target cryptsetup.target - Local Encrypted Volumes.
Jan 17 12:22:55.356465 systemd[1]: Stopped target cryptsetup-pre.target - Local Encrypted Volumes (Pre).
Jan 17 12:22:55.357750 systemd[1]: clevis-luks-askpass.path: Deactivated successfully.
Jan 17 12:22:55.358052 systemd[1]: Stopped clevis-luks-askpass.path - Forward Password Requests to Clevis Directory Watch.
Jan 17 12:22:55.359282 systemd[1]: dracut-initqueue.service: Deactivated successfully.
Jan 17 12:22:55.359633 systemd[1]: Stopped dracut-initqueue.service - dracut initqueue hook.
Jan 17 12:22:55.361358 systemd[1]: initrd-setup-root-after-ignition.service: Deactivated successfully.
Jan 17 12:22:55.361728 systemd[1]: Stopped initrd-setup-root-after-ignition.service - Root filesystem completion.
Jan 17 12:22:55.363455 systemd[1]: ignition-files.service: Deactivated successfully.
Jan 17 12:22:55.363785 systemd[1]: Stopped ignition-files.service - Ignition (files).
Jan 17 12:22:55.364796 systemd[1]: flatcar-metadata-hostname.service: Deactivated successfully.
Jan 17 12:22:55.365137 systemd[1]: Stopped flatcar-metadata-hostname.service - Flatcar Metadata Hostname Agent.
Jan 17 12:22:55.379948 systemd[1]: Stopping ignition-mount.service - Ignition (mount)...
Jan 17 12:22:55.384824 systemd[1]: Stopping sysroot-boot.service - /sysroot/boot...
Jan 17 12:22:55.385656 systemd[1]: systemd-udev-trigger.service: Deactivated successfully.
Jan 17 12:22:55.385964 systemd[1]: Stopped systemd-udev-trigger.service - Coldplug All udev Devices.
Jan 17 12:22:55.390243 systemd[1]: dracut-pre-trigger.service: Deactivated successfully.
Jan 17 12:22:55.390616 systemd[1]: Stopped dracut-pre-trigger.service - dracut pre-trigger hook.
Jan 17 12:22:55.408717 systemd[1]: initrd-cleanup.service: Deactivated successfully.
Jan 17 12:22:55.408897 systemd[1]: Finished initrd-cleanup.service - Cleaning Up and Shutting Down Daemons.
Jan 17 12:22:55.420915 ignition[987]: INFO : Ignition 2.19.0
Jan 17 12:22:55.423923 ignition[987]: INFO : Stage: umount
Jan 17 12:22:55.423923 ignition[987]: INFO : no configs at "/usr/lib/ignition/base.d"
Jan 17 12:22:55.423923 ignition[987]: INFO : no config dir at "/usr/lib/ignition/base.platform.d/digitalocean"
Jan 17 12:22:55.423923 ignition[987]: INFO : umount: umount passed
Jan 17 12:22:55.423923 ignition[987]: INFO : Ignition finished successfully
Jan 17 12:22:55.429158 systemd[1]: ignition-mount.service: Deactivated successfully.
Jan 17 12:22:55.429395 systemd[1]: Stopped ignition-mount.service - Ignition (mount).
Jan 17 12:22:55.432035 systemd[1]: ignition-disks.service: Deactivated successfully.
Jan 17 12:22:55.432140 systemd[1]: Stopped ignition-disks.service - Ignition (disks).
Jan 17 12:22:55.433209 systemd[1]: ignition-kargs.service: Deactivated successfully.
Jan 17 12:22:55.433313 systemd[1]: Stopped ignition-kargs.service - Ignition (kargs).
Jan 17 12:22:55.436719 systemd[1]: ignition-fetch.service: Deactivated successfully.
Jan 17 12:22:55.436865 systemd[1]: Stopped ignition-fetch.service - Ignition (fetch).
Jan 17 12:22:55.438050 systemd[1]: Stopped target network.target - Network.
Jan 17 12:22:55.441680 systemd[1]: ignition-fetch-offline.service: Deactivated successfully.
Jan 17 12:22:55.441808 systemd[1]: Stopped ignition-fetch-offline.service - Ignition (fetch-offline).
Jan 17 12:22:55.442996 systemd[1]: Stopped target paths.target - Path Units.
Jan 17 12:22:55.443616 systemd[1]: systemd-ask-password-console.path: Deactivated successfully.
Jan 17 12:22:55.450522 systemd[1]: Stopped systemd-ask-password-console.path - Dispatch Password Requests to Console Directory Watch.
Jan 17 12:22:55.452443 systemd[1]: Stopped target slices.target - Slice Units.
Jan 17 12:22:55.458326 systemd[1]: Stopped target sockets.target - Socket Units.
Jan 17 12:22:55.459945 systemd[1]: iscsid.socket: Deactivated successfully.
Jan 17 12:22:55.460036 systemd[1]: Closed iscsid.socket - Open-iSCSI iscsid Socket.
Jan 17 12:22:55.461229 systemd[1]: iscsiuio.socket: Deactivated successfully.
Jan 17 12:22:55.461313 systemd[1]: Closed iscsiuio.socket - Open-iSCSI iscsiuio Socket.
Jan 17 12:22:55.462741 systemd[1]: ignition-setup.service: Deactivated successfully.
Jan 17 12:22:55.462851 systemd[1]: Stopped ignition-setup.service - Ignition (setup).
Jan 17 12:22:55.464047 systemd[1]: ignition-setup-pre.service: Deactivated successfully.
Jan 17 12:22:55.464156 systemd[1]: Stopped ignition-setup-pre.service - Ignition env setup.
Jan 17 12:22:55.465741 systemd[1]: Stopping systemd-networkd.service - Network Configuration...
Jan 17 12:22:55.467958 systemd[1]: Stopping systemd-resolved.service - Network Name Resolution...
Jan 17 12:22:55.469508 systemd-networkd[747]: eth0: DHCPv6 lease lost
Jan 17 12:22:55.471162 systemd[1]: sysroot-boot.mount: Deactivated successfully.
Jan 17 12:22:55.473187 systemd[1]: sysroot-boot.service: Deactivated successfully.
Jan 17 12:22:55.473362 systemd[1]: Stopped sysroot-boot.service - /sysroot/boot.
Jan 17 12:22:55.475525 systemd-networkd[747]: eth1: DHCPv6 lease lost
Jan 17 12:22:55.478324 systemd[1]: systemd-networkd.service: Deactivated successfully.
Jan 17 12:22:55.479581 systemd[1]: Stopped systemd-networkd.service - Network Configuration.
Jan 17 12:22:55.483309 systemd[1]: systemd-resolved.service: Deactivated successfully.
Jan 17 12:22:55.483595 systemd[1]: Stopped systemd-resolved.service - Network Name Resolution.
Jan 17 12:22:55.489453 systemd[1]: systemd-networkd.socket: Deactivated successfully.
Jan 17 12:22:55.489534 systemd[1]: Closed systemd-networkd.socket - Network Service Netlink Socket.
Jan 17 12:22:55.491010 systemd[1]: initrd-setup-root.service: Deactivated successfully.
Jan 17 12:22:55.491114 systemd[1]: Stopped initrd-setup-root.service - Root filesystem setup.
Jan 17 12:22:55.498650 systemd[1]: Stopping network-cleanup.service - Network Cleanup...
Jan 17 12:22:55.499522 systemd[1]: parse-ip-for-networkd.service: Deactivated successfully.
Jan 17 12:22:55.499693 systemd[1]: Stopped parse-ip-for-networkd.service - Write systemd-networkd units from cmdline.
Jan 17 12:22:55.503838 systemd[1]: systemd-sysctl.service: Deactivated successfully.
Jan 17 12:22:55.504799 systemd[1]: Stopped systemd-sysctl.service - Apply Kernel Variables.
Jan 17 12:22:55.506735 systemd[1]: systemd-modules-load.service: Deactivated successfully.
Jan 17 12:22:55.506836 systemd[1]: Stopped systemd-modules-load.service - Load Kernel Modules.
Jan 17 12:22:55.508989 systemd[1]: systemd-tmpfiles-setup.service: Deactivated successfully.
Jan 17 12:22:55.509133 systemd[1]: Stopped systemd-tmpfiles-setup.service - Create System Files and Directories.
Jan 17 12:22:55.510919 systemd[1]: Stopping systemd-udevd.service - Rule-based Manager for Device Events and Files...
Jan 17 12:22:55.524908 systemd[1]: systemd-udevd.service: Deactivated successfully.
Jan 17 12:22:55.525166 systemd[1]: Stopped systemd-udevd.service - Rule-based Manager for Device Events and Files.
Jan 17 12:22:55.532071 systemd[1]: systemd-udevd-control.socket: Deactivated successfully.
Jan 17 12:22:55.532161 systemd[1]: Closed systemd-udevd-control.socket - udev Control Socket.
Jan 17 12:22:55.533044 systemd[1]: systemd-udevd-kernel.socket: Deactivated successfully.
Jan 17 12:22:55.533125 systemd[1]: Closed systemd-udevd-kernel.socket - udev Kernel Socket.
Jan 17 12:22:55.535033 systemd[1]: dracut-pre-udev.service: Deactivated successfully.
Jan 17 12:22:55.535140 systemd[1]: Stopped dracut-pre-udev.service - dracut pre-udev hook.
Jan 17 12:22:55.536685 systemd[1]: dracut-cmdline.service: Deactivated successfully.
Jan 17 12:22:55.536805 systemd[1]: Stopped dracut-cmdline.service - dracut cmdline hook.
Jan 17 12:22:55.538186 systemd[1]: dracut-cmdline-ask.service: Deactivated successfully.
Jan 17 12:22:55.538287 systemd[1]: Stopped dracut-cmdline-ask.service - dracut ask for additional cmdline parameters.
Jan 17 12:22:55.546793 systemd[1]: Starting initrd-udevadm-cleanup-db.service - Cleanup udev Database...
Jan 17 12:22:55.548446 systemd[1]: systemd-tmpfiles-setup-dev.service: Deactivated successfully.
Jan 17 12:22:55.548623 systemd[1]: Stopped systemd-tmpfiles-setup-dev.service - Create Static Device Nodes in /dev.
Jan 17 12:22:55.549592 systemd[1]: systemd-tmpfiles-setup-dev-early.service: Deactivated successfully.
Jan 17 12:22:55.549775 systemd[1]: Stopped systemd-tmpfiles-setup-dev-early.service - Create Static Device Nodes in /dev gracefully.
Jan 17 12:22:55.553010 systemd[1]: kmod-static-nodes.service: Deactivated successfully.
Jan 17 12:22:55.553145 systemd[1]: Stopped kmod-static-nodes.service - Create List of Static Device Nodes.
Jan 17 12:22:55.555764 systemd[1]: systemd-vconsole-setup.service: Deactivated successfully.
Jan 17 12:22:55.555854 systemd[1]: Stopped systemd-vconsole-setup.service - Virtual Console Setup.
Jan 17 12:22:55.568655 systemd[1]: network-cleanup.service: Deactivated successfully.
Jan 17 12:22:55.568858 systemd[1]: Stopped network-cleanup.service - Network Cleanup.
Jan 17 12:22:55.580245 systemd[1]: initrd-udevadm-cleanup-db.service: Deactivated successfully.
Jan 17 12:22:55.580533 systemd[1]: Finished initrd-udevadm-cleanup-db.service - Cleanup udev Database.
Jan 17 12:22:55.584152 systemd[1]: Reached target initrd-switch-root.target - Switch Root.
Jan 17 12:22:55.589745 systemd[1]: Starting initrd-switch-root.service - Switch Root...
Jan 17 12:22:55.617911 systemd[1]: Switching root.
Jan 17 12:22:55.713169 systemd-journald[183]: Received SIGTERM from PID 1 (systemd).
Jan 17 12:22:55.713317 systemd-journald[183]: Journal stopped
Jan 17 12:22:57.619129 kernel: SELinux: policy capability network_peer_controls=1
Jan 17 12:22:57.619252 kernel: SELinux: policy capability open_perms=1
Jan 17 12:22:57.619279 kernel: SELinux: policy capability extended_socket_class=1
Jan 17 12:22:57.619303 kernel: SELinux: policy capability always_check_network=0
Jan 17 12:22:57.619328 kernel: SELinux: policy capability cgroup_seclabel=1
Jan 17 12:22:57.619358 kernel: SELinux: policy capability nnp_nosuid_transition=1
Jan 17 12:22:57.619404 kernel: SELinux: policy capability genfs_seclabel_symlinks=0
Jan 17 12:22:57.619430 kernel: SELinux: policy capability ioctl_skip_cloexec=0
Jan 17 12:22:57.619466 kernel: audit: type=1403 audit(1737116576.006:2): auid=4294967295 ses=4294967295 lsm=selinux res=1
Jan 17 12:22:57.619522 systemd[1]: Successfully loaded SELinux policy in 56.734ms.
Jan 17 12:22:57.619558 systemd[1]: Relabeled /dev, /dev/shm, /run, /sys/fs/cgroup in 20.978ms.
Jan 17 12:22:57.619587 systemd[1]: systemd 255 running in system mode (+PAM +AUDIT +SELINUX -APPARMOR +IMA +SMACK +SECCOMP +GCRYPT -GNUTLS +OPENSSL -ACL +BLKID +CURL +ELFUTILS -FIDO2 +IDN2 -IDN +IPTC +KMOD +LIBCRYPTSETUP +LIBFDISK +PCRE2 -PWQUALITY -P11KIT -QRENCODE +TPM2 +BZIP2 +LZ4 +XZ +ZLIB +ZSTD -BPF_FRAMEWORK -XKBCOMMON +UTMP -SYSVINIT default-hierarchy=unified)
Jan 17 12:22:57.619615 systemd[1]: Detected virtualization kvm.
Jan 17 12:22:57.619641 systemd[1]: Detected architecture x86-64.
Jan 17 12:22:57.619674 systemd[1]: Detected first boot.
Jan 17 12:22:57.619702 systemd[1]: Hostname set to .
Jan 17 12:22:57.619728 systemd[1]: Initializing machine ID from VM UUID.
Jan 17 12:22:57.619755 zram_generator::config[1050]: No configuration found.
Jan 17 12:22:57.619787 systemd[1]: Populated /etc with preset unit settings.
Jan 17 12:22:57.619816 systemd[1]: Queued start job for default target multi-user.target.
Jan 17 12:22:57.619843 systemd[1]: Unnecessary job was removed for dev-vda6.device - /dev/vda6.
Jan 17 12:22:57.619876 systemd[1]: Created slice system-addon\x2dconfig.slice - Slice /system/addon-config.
Jan 17 12:22:57.619904 systemd[1]: Created slice system-addon\x2drun.slice - Slice /system/addon-run.
Jan 17 12:22:57.619933 systemd[1]: Created slice system-getty.slice - Slice /system/getty.
Jan 17 12:22:57.619960 systemd[1]: Created slice system-modprobe.slice - Slice /system/modprobe.
Jan 17 12:22:57.619988 systemd[1]: Created slice system-serial\x2dgetty.slice - Slice /system/serial-getty.
Jan 17 12:22:57.620015 systemd[1]: Created slice system-system\x2dcloudinit.slice - Slice /system/system-cloudinit.
Jan 17 12:22:57.620042 systemd[1]: Created slice system-systemd\x2dfsck.slice - Slice /system/systemd-fsck.
Jan 17 12:22:57.620071 systemd[1]: Created slice user.slice - User and Session Slice.
Jan 17 12:22:57.620098 systemd[1]: Started clevis-luks-askpass.path - Forward Password Requests to Clevis Directory Watch.
Jan 17 12:22:57.620131 systemd[1]: Started systemd-ask-password-console.path - Dispatch Password Requests to Console Directory Watch.
Jan 17 12:22:57.620151 systemd[1]: Started systemd-ask-password-wall.path - Forward Password Requests to Wall Directory Watch.
Jan 17 12:22:57.620169 systemd[1]: Set up automount boot.automount - Boot partition Automount Point.
Jan 17 12:22:57.620187 systemd[1]: Set up automount proc-sys-fs-binfmt_misc.automount - Arbitrary Executable File Formats File System Automount Point.
Jan 17 12:22:57.620219 systemd[1]: Expecting device dev-disk-by\x2dlabel-OEM.device - /dev/disk/by-label/OEM...
Jan 17 12:22:57.620247 systemd[1]: Expecting device dev-ttyS0.device - /dev/ttyS0...
Jan 17 12:22:57.620274 systemd[1]: Reached target cryptsetup-pre.target - Local Encrypted Volumes (Pre).
Jan 17 12:22:57.620303 systemd[1]: Reached target integritysetup.target - Local Integrity Protected Volumes.
Jan 17 12:22:57.620330 systemd[1]: Reached target remote-cryptsetup.target - Remote Encrypted Volumes.
Jan 17 12:22:57.620363 systemd[1]: Reached target remote-fs.target - Remote File Systems.
Jan 17 12:22:57.620429 systemd[1]: Reached target slices.target - Slice Units.
Jan 17 12:22:57.620460 systemd[1]: Reached target swap.target - Swaps.
Jan 17 12:22:57.620489 systemd[1]: Reached target veritysetup.target - Local Verity Protected Volumes.
Jan 17 12:22:57.620517 systemd[1]: Listening on systemd-coredump.socket - Process Core Dump Socket.
Jan 17 12:22:57.620548 systemd[1]: Listening on systemd-journald-dev-log.socket - Journal Socket (/dev/log).
Jan 17 12:22:57.620575 systemd[1]: Listening on systemd-journald.socket - Journal Socket.
Jan 17 12:22:57.620609 systemd[1]: Listening on systemd-networkd.socket - Network Service Netlink Socket.
Jan 17 12:22:57.620637 systemd[1]: Listening on systemd-udevd-control.socket - udev Control Socket.
Jan 17 12:22:57.620665 systemd[1]: Listening on systemd-udevd-kernel.socket - udev Kernel Socket.
Jan 17 12:22:57.620693 systemd[1]: Listening on systemd-userdbd.socket - User Database Manager Socket.
Jan 17 12:22:57.620722 systemd[1]: Mounting dev-hugepages.mount - Huge Pages File System...
Jan 17 12:22:57.620749 systemd[1]: Mounting dev-mqueue.mount - POSIX Message Queue File System...
Jan 17 12:22:57.620777 systemd[1]: Mounting media.mount - External Media Directory...
Jan 17 12:22:57.620807 systemd[1]: proc-xen.mount - /proc/xen was skipped because of an unmet condition check (ConditionVirtualization=xen).
Jan 17 12:22:57.620841 systemd[1]: Mounting sys-kernel-debug.mount - Kernel Debug File System...
Jan 17 12:22:57.620901 systemd[1]: Mounting sys-kernel-tracing.mount - Kernel Trace File System...
Jan 17 12:22:57.620930 systemd[1]: Mounting tmp.mount - Temporary Directory /tmp...
Jan 17 12:22:57.620958 systemd[1]: Starting flatcar-tmpfiles.service - Create missing system files...
Jan 17 12:22:57.620988 systemd[1]: ignition-delete-config.service - Ignition (delete config) was skipped because no trigger condition checks were met.
Jan 17 12:22:57.621016 systemd[1]: Starting kmod-static-nodes.service - Create List of Static Device Nodes...
Jan 17 12:22:57.621044 systemd[1]: Starting modprobe@configfs.service - Load Kernel Module configfs...
Jan 17 12:22:57.621072 systemd[1]: Starting modprobe@dm_mod.service - Load Kernel Module dm_mod...
Jan 17 12:22:57.621100 systemd[1]: Starting modprobe@drm.service - Load Kernel Module drm...
Jan 17 12:22:57.621134 systemd[1]: Starting modprobe@efi_pstore.service - Load Kernel Module efi_pstore...
Jan 17 12:22:57.621162 systemd[1]: Starting modprobe@fuse.service - Load Kernel Module fuse...
Jan 17 12:22:57.621194 systemd[1]: Starting modprobe@loop.service - Load Kernel Module loop...
Jan 17 12:22:57.621230 systemd[1]: setup-nsswitch.service - Create /etc/nsswitch.conf was skipped because of an unmet condition check (ConditionPathExists=!/etc/nsswitch.conf).
Jan 17 12:22:57.621263 systemd[1]: systemd-journald.service: unit configures an IP firewall, but the local system does not support BPF/cgroup firewalling.
Jan 17 12:22:57.621296 systemd[1]: systemd-journald.service: (This warning is only shown for the first unit using IP firewalling.)
Jan 17 12:22:57.621328 systemd[1]: Starting systemd-journald.service - Journal Service...
Jan 17 12:22:57.621360 systemd[1]: Starting systemd-modules-load.service - Load Kernel Modules...
Jan 17 12:22:57.621511 kernel: loop: module loaded
Jan 17 12:22:57.621551 systemd[1]: Starting systemd-network-generator.service - Generate network units from Kernel command line...
Jan 17 12:22:57.621579 systemd[1]: Starting systemd-remount-fs.service - Remount Root and Kernel File Systems...
Jan 17 12:22:57.621606 kernel: fuse: init (API version 7.39)
Jan 17 12:22:57.621632 systemd[1]: Starting systemd-udev-trigger.service - Coldplug All udev Devices...
Jan 17 12:22:57.621661 systemd[1]: xenserver-pv-version.service - Set fake PV driver version for XenServer was skipped because of an unmet condition check (ConditionVirtualization=xen).
Jan 17 12:22:57.621689 systemd[1]: Mounted dev-hugepages.mount - Huge Pages File System.
Jan 17 12:22:57.621719 systemd[1]: Mounted dev-mqueue.mount - POSIX Message Queue File System.
Jan 17 12:22:57.621748 kernel: ACPI: bus type drm_connector registered
Jan 17 12:22:57.621780 systemd[1]: Mounted media.mount - External Media Directory.
Jan 17 12:22:57.621871 systemd-journald[1138]: Collecting audit messages is disabled.
Jan 17 12:22:57.621959 systemd[1]: Mounted sys-kernel-debug.mount - Kernel Debug File System.
Jan 17 12:22:57.621990 systemd[1]: Mounted sys-kernel-tracing.mount - Kernel Trace File System.
Jan 17 12:22:57.622018 systemd-journald[1138]: Journal started
Jan 17 12:22:57.622081 systemd-journald[1138]: Runtime Journal (/run/log/journal/ca4b2c5d16dd48ec9e5d696ffacc59e7) is 4.9M, max 39.3M, 34.4M free.
Jan 17 12:22:57.624434 systemd[1]: Started systemd-journald.service - Journal Service.
Jan 17 12:22:57.632169 systemd[1]: Mounted tmp.mount - Temporary Directory /tmp.
Jan 17 12:22:57.635518 systemd[1]: Finished kmod-static-nodes.service - Create List of Static Device Nodes.
Jan 17 12:22:57.638318 systemd[1]: modprobe@configfs.service: Deactivated successfully.
Jan 17 12:22:57.638658 systemd[1]: Finished modprobe@configfs.service - Load Kernel Module configfs.
Jan 17 12:22:57.641253 systemd[1]: modprobe@dm_mod.service: Deactivated successfully.
Jan 17 12:22:57.641914 systemd[1]: Finished modprobe@dm_mod.service - Load Kernel Module dm_mod.
Jan 17 12:22:57.644328 systemd[1]: modprobe@drm.service: Deactivated successfully.
Jan 17 12:22:57.645124 systemd[1]: Finished modprobe@drm.service - Load Kernel Module drm.
Jan 17 12:22:57.647811 systemd[1]: modprobe@efi_pstore.service: Deactivated successfully.
Jan 17 12:22:57.649570 systemd[1]: Finished modprobe@efi_pstore.service - Load Kernel Module efi_pstore.
Jan 17 12:22:57.651045 systemd[1]: modprobe@fuse.service: Deactivated successfully.
Jan 17 12:22:57.651342 systemd[1]: Finished modprobe@fuse.service - Load Kernel Module fuse.
Jan 17 12:22:57.652984 systemd[1]: modprobe@loop.service: Deactivated successfully.
Jan 17 12:22:57.655773 systemd[1]: Finished modprobe@loop.service - Load Kernel Module loop.
Jan 17 12:22:57.659831 systemd[1]: Finished systemd-modules-load.service - Load Kernel Modules.
Jan 17 12:22:57.665811 systemd[1]: Finished systemd-network-generator.service - Generate network units from Kernel command line.
Jan 17 12:22:57.671916 systemd[1]: Finished systemd-remount-fs.service - Remount Root and Kernel File Systems.
Jan 17 12:22:57.690183 systemd[1]: Finished flatcar-tmpfiles.service - Create missing system files.
Jan 17 12:22:57.707945 systemd[1]: Reached target network-pre.target - Preparation for Network.
Jan 17 12:22:57.715614 systemd[1]: Mounting sys-fs-fuse-connections.mount - FUSE Control File System...
Jan 17 12:22:57.726609 systemd[1]: Mounting sys-kernel-config.mount - Kernel Configuration File System...
Jan 17 12:22:57.727396 systemd[1]: remount-root.service - Remount Root File System was skipped because of an unmet condition check (ConditionPathIsReadWrite=!/).
Jan 17 12:22:57.738319 systemd[1]: Starting systemd-hwdb-update.service - Rebuild Hardware Database...
Jan 17 12:22:57.764866 systemd[1]: Starting systemd-journal-flush.service - Flush Journal to Persistent Storage...
Jan 17 12:22:57.772422 systemd[1]: systemd-pstore.service - Platform Persistent Storage Archival was skipped because of an unmet condition check (ConditionDirectoryNotEmpty=/sys/fs/pstore).
Jan 17 12:22:57.782729 systemd[1]: Starting systemd-random-seed.service - Load/Save OS Random Seed...
Jan 17 12:22:57.785027 systemd[1]: systemd-repart.service - Repartition Root Disk was skipped because no trigger condition checks were met.
Jan 17 12:22:57.798078 systemd-journald[1138]: Time spent on flushing to /var/log/journal/ca4b2c5d16dd48ec9e5d696ffacc59e7 is 91.042ms for 957 entries.
Jan 17 12:22:57.798078 systemd-journald[1138]: System Journal (/var/log/journal/ca4b2c5d16dd48ec9e5d696ffacc59e7) is 8.0M, max 195.6M, 187.6M free.
Jan 17 12:22:57.953089 systemd-journald[1138]: Received client request to flush runtime journal.
Jan 17 12:22:57.800691 systemd[1]: Starting systemd-sysctl.service - Apply Kernel Variables...
Jan 17 12:22:57.818873 systemd[1]: Starting systemd-tmpfiles-setup-dev-early.service - Create Static Device Nodes in /dev gracefully...
Jan 17 12:22:57.826772 systemd[1]: Mounted sys-fs-fuse-connections.mount - FUSE Control File System.
Jan 17 12:22:57.831864 systemd[1]: Mounted sys-kernel-config.mount - Kernel Configuration File System.
Jan 17 12:22:57.863208 systemd[1]: Finished systemd-udev-trigger.service - Coldplug All udev Devices.
Jan 17 12:22:57.889700 systemd[1]: Starting systemd-udev-settle.service - Wait for udev To Complete Device Initialization...
Jan 17 12:22:57.914199 systemd[1]: Finished systemd-random-seed.service - Load/Save OS Random Seed.
Jan 17 12:22:57.916861 systemd[1]: Reached target first-boot-complete.target - First Boot Complete.
Jan 17 12:22:57.950193 systemd[1]: Finished systemd-sysctl.service - Apply Kernel Variables.
Jan 17 12:22:57.965568 systemd[1]: Finished systemd-journal-flush.service - Flush Journal to Persistent Storage.
Jan 17 12:22:57.986236 udevadm[1196]: systemd-udev-settle.service is deprecated. Please fix lvm2-activation.service, lvm2-activation-early.service not to pull it in.
Jan 17 12:22:57.989569 systemd-tmpfiles[1190]: ACLs are not supported, ignoring.
Jan 17 12:22:57.989615 systemd-tmpfiles[1190]: ACLs are not supported, ignoring.
Jan 17 12:22:58.002280 systemd[1]: Finished systemd-tmpfiles-setup-dev-early.service - Create Static Device Nodes in /dev gracefully.
Jan 17 12:22:58.021927 systemd[1]: Starting systemd-sysusers.service - Create System Users...
Jan 17 12:22:58.079841 systemd[1]: Finished systemd-sysusers.service - Create System Users.
Jan 17 12:22:58.098928 systemd[1]: Starting systemd-tmpfiles-setup-dev.service - Create Static Device Nodes in /dev...
Jan 17 12:22:58.139349 systemd-tmpfiles[1212]: ACLs are not supported, ignoring.
Jan 17 12:22:58.140054 systemd-tmpfiles[1212]: ACLs are not supported, ignoring.
Jan 17 12:22:58.153213 systemd[1]: Finished systemd-tmpfiles-setup-dev.service - Create Static Device Nodes in /dev.
Jan 17 12:22:59.221181 systemd[1]: Finished systemd-hwdb-update.service - Rebuild Hardware Database.
Jan 17 12:22:59.231833 systemd[1]: Starting systemd-udevd.service - Rule-based Manager for Device Events and Files...
Jan 17 12:22:59.290748 systemd-udevd[1218]: Using default interface naming scheme 'v255'.
Jan 17 12:22:59.335234 systemd[1]: Started systemd-udevd.service - Rule-based Manager for Device Events and Files.
Jan 17 12:22:59.346767 systemd[1]: Starting systemd-networkd.service - Network Configuration...
Jan 17 12:22:59.401755 systemd[1]: Starting systemd-userdbd.service - User Database Manager...
Jan 17 12:22:59.491508 systemd[1]: Found device dev-ttyS0.device - /dev/ttyS0.
Jan 17 12:22:59.522734 systemd[1]: Started systemd-userdbd.service - User Database Manager.
Jan 17 12:22:59.562288 kernel: BTRFS warning: duplicate device /dev/vda3 devid 1 generation 36 scanned by (udev-worker) (1231)
Jan 17 12:22:59.585352 systemd[1]: proc-xen.mount - /proc/xen was skipped because of an unmet condition check (ConditionVirtualization=xen).
Jan 17 12:22:59.585712 systemd[1]: ignition-delete-config.service - Ignition (delete config) was skipped because no trigger condition checks were met.
Jan 17 12:22:59.591780 systemd[1]: Starting modprobe@dm_mod.service - Load Kernel Module dm_mod...
Jan 17 12:22:59.607731 systemd[1]: Starting modprobe@efi_pstore.service - Load Kernel Module efi_pstore...
Jan 17 12:22:59.620693 systemd[1]: Starting modprobe@loop.service - Load Kernel Module loop...
Jan 17 12:22:59.621360 systemd[1]: remount-root.service - Remount Root File System was skipped because of an unmet condition check (ConditionPathIsReadWrite=!/).
Jan 17 12:22:59.621455 systemd[1]: setup-nsswitch.service - Create /etc/nsswitch.conf was skipped because of an unmet condition check (ConditionPathExists=!/etc/nsswitch.conf).
Jan 17 12:22:59.621563 systemd[1]: xenserver-pv-version.service - Set fake PV driver version for XenServer was skipped because of an unmet condition check (ConditionVirtualization=xen).
Jan 17 12:22:59.634994 systemd[1]: modprobe@dm_mod.service: Deactivated successfully.
Jan 17 12:22:59.635554 systemd[1]: Finished modprobe@dm_mod.service - Load Kernel Module dm_mod.
Jan 17 12:22:59.651921 systemd[1]: modprobe@efi_pstore.service: Deactivated successfully.
Jan 17 12:22:59.652361 systemd[1]: Finished modprobe@efi_pstore.service - Load Kernel Module efi_pstore.
Jan 17 12:22:59.662389 systemd[1]: modprobe@loop.service: Deactivated successfully.
Jan 17 12:22:59.668747 systemd[1]: Finished modprobe@loop.service - Load Kernel Module loop.
Jan 17 12:22:59.672230 systemd[1]: systemd-pstore.service - Platform Persistent Storage Archival was skipped because of an unmet condition check (ConditionDirectoryNotEmpty=/sys/fs/pstore).
Jan 17 12:22:59.672297 systemd[1]: systemd-repart.service - Repartition Root Disk was skipped because no trigger condition checks were met.
Jan 17 12:22:59.737775 systemd-networkd[1221]: lo: Link UP
Jan 17 12:22:59.738391 systemd-networkd[1221]: lo: Gained carrier
Jan 17 12:22:59.743625 systemd-networkd[1221]: Enumeration completed
Jan 17 12:22:59.744244 systemd[1]: Started systemd-networkd.service - Network Configuration.
Jan 17 12:22:59.744993 systemd-networkd[1221]: eth0: Configuring with /run/systemd/network/10-5e:87:b8:11:bd:97.network.
Jan 17 12:22:59.746363 systemd-networkd[1221]: eth1: Configuring with /run/systemd/network/10-5e:c5:5d:26:27:08.network.
Jan 17 12:22:59.747485 systemd-networkd[1221]: eth0: Link UP
Jan 17 12:22:59.747684 systemd-networkd[1221]: eth0: Gained carrier
Jan 17 12:22:59.751943 systemd-networkd[1221]: eth1: Link UP
Jan 17 12:22:59.754291 systemd[1]: Starting systemd-networkd-wait-online.service - Wait for Network to be Configured...
Jan 17 12:22:59.754746 systemd-networkd[1221]: eth1: Gained carrier
Jan 17 12:22:59.856335 systemd[1]: Found device dev-disk-by\x2dlabel-OEM.device - /dev/disk/by-label/OEM.
Jan 17 12:22:59.860476 kernel: input: Power Button as /devices/LNXSYSTM:00/LNXPWRBN:00/input/input2
Jan 17 12:22:59.867415 kernel: ACPI: button: Power Button [PWRF]
Jan 17 12:22:59.872499 kernel: piix4_smbus 0000:00:01.3: SMBus Host Controller at 0x700, revision 0
Jan 17 12:22:59.944419 kernel: input: ImExPS/2 Generic Explorer Mouse as /devices/platform/i8042/serio1/input/input3
Jan 17 12:22:59.999508 kernel: mousedev: PS/2 mouse device common for all mice
Jan 17 12:23:00.012432 kernel: [drm] pci: virtio-vga detected at 0000:00:02.0
Jan 17 12:23:00.023421 kernel: virtio-pci 0000:00:02.0: vgaarb: deactivate vga console
Jan 17 12:23:00.032015 kernel: Console: switching to colour dummy device 80x25
Jan 17 12:23:00.035430 kernel: [drm] features: -virgl +edid -resource_blob -host_visible
Jan 17 12:23:00.035585 kernel: [drm] features: -context_init
Jan 17 12:23:00.038991 systemd[1]: Starting systemd-vconsole-setup.service - Virtual Console Setup...
Jan 17 12:23:00.041413 kernel: [drm] number of scanouts: 1
Jan 17 12:23:00.044401 kernel: [drm] number of cap sets: 0
Jan 17 12:23:00.049420 kernel: [drm] Initialized virtio_gpu 0.1.0 0 for 0000:00:02.0 on minor 0
Jan 17 12:23:00.066495 kernel: fbcon: virtio_gpudrmfb (fb0) is primary device
Jan 17 12:23:00.066644 kernel: Console: switching to colour frame buffer device 128x48
Jan 17 12:23:00.086527 kernel: virtio-pci 0000:00:02.0: [drm] fb0: virtio_gpudrmfb frame buffer device
Jan 17 12:23:00.100889 systemd[1]: systemd-vconsole-setup.service: Deactivated successfully.
Jan 17 12:23:00.102573 systemd[1]: Stopped systemd-vconsole-setup.service - Virtual Console Setup.
Jan 17 12:23:00.120991 systemd[1]: Starting systemd-vconsole-setup.service - Virtual Console Setup...
Jan 17 12:23:00.145366 systemd[1]: systemd-vconsole-setup.service: Deactivated successfully.
Jan 17 12:23:00.148128 systemd[1]: Stopped systemd-vconsole-setup.service - Virtual Console Setup.
Jan 17 12:23:00.170334 systemd[1]: Starting systemd-vconsole-setup.service - Virtual Console Setup...
Jan 17 12:23:00.299427 kernel: EDAC MC: Ver: 3.0.0
Jan 17 12:23:00.328077 systemd[1]: Finished systemd-vconsole-setup.service - Virtual Console Setup.
Jan 17 12:23:00.341600 systemd[1]: Finished systemd-udev-settle.service - Wait for udev To Complete Device Initialization.
Jan 17 12:23:00.354031 systemd[1]: Starting lvm2-activation-early.service - Activation of LVM2 logical volumes...
Jan 17 12:23:00.376418 lvm[1283]: WARNING: Failed to connect to lvmetad. Falling back to device scanning.
Jan 17 12:23:00.422879 systemd[1]: Finished lvm2-activation-early.service - Activation of LVM2 logical volumes.
Jan 17 12:23:00.424175 systemd[1]: Reached target cryptsetup.target - Local Encrypted Volumes.
Jan 17 12:23:00.431806 systemd[1]: Starting lvm2-activation.service - Activation of LVM2 logical volumes...
Jan 17 12:23:00.449047 lvm[1286]: WARNING: Failed to connect to lvmetad. Falling back to device scanning.
Jan 17 12:23:00.487859 systemd[1]: Finished lvm2-activation.service - Activation of LVM2 logical volumes.
Jan 17 12:23:00.488991 systemd[1]: Reached target local-fs-pre.target - Preparation for Local File Systems.
Jan 17 12:23:00.500674 systemd[1]: Mounting media-configdrive.mount - /media/configdrive...
Jan 17 12:23:00.501323 systemd[1]: var-lib-machines.mount - Virtual Machine and Container Storage (Compatibility) was skipped because of an unmet condition check (ConditionPathExists=/var/lib/machines.raw).
Jan 17 12:23:00.501400 systemd[1]: Reached target machines.target - Containers.
Jan 17 12:23:00.505763 systemd[1]: Starting systemd-fsck@dev-disk-by\x2dlabel-OEM.service - File System Check on /dev/disk/by-label/OEM...
Jan 17 12:23:00.538410 kernel: ISO 9660 Extensions: RRIP_1991A
Jan 17 12:23:00.540691 systemd[1]: Mounted media-configdrive.mount - /media/configdrive.
Jan 17 12:23:00.544548 systemd[1]: Reached target local-fs.target - Local File Systems.
Jan 17 12:23:00.551187 systemd[1]: Listening on systemd-sysext.socket - System Extension Image Management (Varlink).
Jan 17 12:23:00.560921 systemd[1]: Starting dracut-shutdown.service - Restore /run/initramfs on shutdown...
Jan 17 12:23:00.570756 systemd[1]: Starting ldconfig.service - Rebuild Dynamic Linker Cache...
Jan 17 12:23:00.574584 systemd[1]: systemd-binfmt.service - Set Up Additional Binary Formats was skipped because no trigger condition checks were met.
Jan 17 12:23:00.584926 systemd[1]: Starting systemd-machine-id-commit.service - Commit a transient machine-id on disk...
Jan 17 12:23:00.593768 systemd[1]: Starting systemd-sysext.service - Merge System Extension Images into /usr/ and /opt/...
Jan 17 12:23:00.606451 systemd[1]: Finished systemd-fsck@dev-disk-by\x2dlabel-OEM.service - File System Check on /dev/disk/by-label/OEM.
Jan 17 12:23:00.614396 systemd[1]: Finished dracut-shutdown.service - Restore /run/initramfs on shutdown.
Jan 17 12:23:00.655922 kernel: loop0: detected capacity change from 0 to 8
Jan 17 12:23:00.684432 kernel: squashfs: version 4.0 (2009/01/31) Phillip Lougher
Jan 17 12:23:00.688410 systemd[1]: etc-machine\x2did.mount: Deactivated successfully.
Jan 17 12:23:00.692007 systemd[1]: Finished systemd-machine-id-commit.service - Commit a transient machine-id on disk.
Jan 17 12:23:00.724459 kernel: loop1: detected capacity change from 0 to 142488
Jan 17 12:23:00.798449 kernel: loop2: detected capacity change from 0 to 140768
Jan 17 12:23:00.857681 kernel: loop3: detected capacity change from 0 to 211296
Jan 17 12:23:00.921440 kernel: loop4: detected capacity change from 0 to 8
Jan 17 12:23:00.925863 kernel: loop5: detected capacity change from 0 to 142488
Jan 17 12:23:00.977940 kernel: loop6: detected capacity change from 0 to 140768
Jan 17 12:23:00.993886 systemd-networkd[1221]: eth1: Gained IPv6LL
Jan 17 12:23:01.005247 systemd[1]: Finished systemd-networkd-wait-online.service - Wait for Network to be Configured.
Jan 17 12:23:01.037467 kernel: loop7: detected capacity change from 0 to 211296
Jan 17 12:23:01.057956 systemd-networkd[1221]: eth0: Gained IPv6LL
Jan 17 12:23:01.072536 (sd-merge)[1312]: Using extensions 'containerd-flatcar', 'docker-flatcar', 'kubernetes', 'oem-digitalocean'.
Jan 17 12:23:01.074495 (sd-merge)[1312]: Merged extensions into '/usr'.
Jan 17 12:23:01.086190 systemd[1]: Reloading requested from client PID 1300 ('systemd-sysext') (unit systemd-sysext.service)...
Jan 17 12:23:01.086218 systemd[1]: Reloading...
Jan 17 12:23:01.230924 zram_generator::config[1341]: No configuration found.
Jan 17 12:23:01.572979 systemd[1]: /usr/lib/systemd/system/docker.socket:6: ListenStream= references a path below legacy directory /var/run/, updating /var/run/docker.sock → /run/docker.sock; please update the unit file accordingly.
Jan 17 12:23:01.677551 ldconfig[1297]: /sbin/ldconfig: /lib/ld.so.conf is not an ELF file - it has the wrong magic bytes at the start.
Jan 17 12:23:01.715670 systemd[1]: Reloading finished in 628 ms.
Jan 17 12:23:01.744657 systemd[1]: Finished ldconfig.service - Rebuild Dynamic Linker Cache.
Jan 17 12:23:01.749215 systemd[1]: Finished systemd-sysext.service - Merge System Extension Images into /usr/ and /opt/.
Jan 17 12:23:01.764852 systemd[1]: Starting ensure-sysext.service...
Jan 17 12:23:01.778917 systemd[1]: Starting systemd-tmpfiles-setup.service - Create System Files and Directories...
Jan 17 12:23:01.796680 systemd[1]: Reloading requested from client PID 1391 ('systemctl') (unit ensure-sysext.service)...
Jan 17 12:23:01.796727 systemd[1]: Reloading...
Jan 17 12:23:01.849305 systemd-tmpfiles[1392]: /usr/lib/tmpfiles.d/provision.conf:20: Duplicate line for path "/root", ignoring.
Jan 17 12:23:01.850134 systemd-tmpfiles[1392]: /usr/lib/tmpfiles.d/systemd-flatcar.conf:6: Duplicate line for path "/var/log/journal", ignoring.
Jan 17 12:23:01.853247 systemd-tmpfiles[1392]: /usr/lib/tmpfiles.d/systemd.conf:29: Duplicate line for path "/var/lib/systemd", ignoring.
Jan 17 12:23:01.854220 systemd-tmpfiles[1392]: ACLs are not supported, ignoring.
Jan 17 12:23:01.854402 systemd-tmpfiles[1392]: ACLs are not supported, ignoring.
Jan 17 12:23:01.864326 systemd-tmpfiles[1392]: Detected autofs mount point /boot during canonicalization of boot.
Jan 17 12:23:01.864598 systemd-tmpfiles[1392]: Skipping /boot
Jan 17 12:23:01.886160 systemd-tmpfiles[1392]: Detected autofs mount point /boot during canonicalization of boot.
Jan 17 12:23:01.886408 systemd-tmpfiles[1392]: Skipping /boot
Jan 17 12:23:01.988428 zram_generator::config[1424]: No configuration found.
Jan 17 12:23:02.240197 systemd[1]: /usr/lib/systemd/system/docker.socket:6: ListenStream= references a path below legacy directory /var/run/, updating /var/run/docker.sock → /run/docker.sock; please update the unit file accordingly.
Jan 17 12:23:02.369913 systemd[1]: Reloading finished in 572 ms.
Jan 17 12:23:02.403612 systemd[1]: Finished systemd-tmpfiles-setup.service - Create System Files and Directories.
Jan 17 12:23:02.446207 systemd[1]: Starting audit-rules.service - Load Security Auditing Rules...
Jan 17 12:23:02.456298 systemd[1]: Starting clean-ca-certificates.service - Clean up broken links in /etc/ssl/certs...
Jan 17 12:23:02.473899 systemd[1]: Starting systemd-journal-catalog-update.service - Rebuild Journal Catalog...
Jan 17 12:23:02.490652 systemd[1]: Starting systemd-resolved.service - Network Name Resolution...
Jan 17 12:23:02.511830 systemd[1]: Starting systemd-update-utmp.service - Record System Boot/Shutdown in UTMP...
Jan 17 12:23:02.532876 systemd[1]: proc-xen.mount - /proc/xen was skipped because of an unmet condition check (ConditionVirtualization=xen).
Jan 17 12:23:02.533894 systemd[1]: ignition-delete-config.service - Ignition (delete config) was skipped because no trigger condition checks were met.
Jan 17 12:23:02.545947 systemd[1]: Starting modprobe@dm_mod.service - Load Kernel Module dm_mod...
Jan 17 12:23:02.567045 systemd[1]: Starting modprobe@efi_pstore.service - Load Kernel Module efi_pstore...
Jan 17 12:23:02.594958 systemd[1]: Starting modprobe@loop.service - Load Kernel Module loop...
Jan 17 12:23:02.598296 systemd[1]: systemd-binfmt.service - Set Up Additional Binary Formats was skipped because no trigger condition checks were met.
Jan 17 12:23:02.602904 systemd[1]: xenserver-pv-version.service - Set fake PV driver version for XenServer was skipped because of an unmet condition check (ConditionVirtualization=xen).
Jan 17 12:23:02.628405 systemd[1]: Finished systemd-journal-catalog-update.service - Rebuild Journal Catalog.
Jan 17 12:23:02.641199 systemd[1]: proc-xen.mount - /proc/xen was skipped because of an unmet condition check (ConditionVirtualization=xen).
Jan 17 12:23:02.644110 systemd[1]: ignition-delete-config.service - Ignition (delete config) was skipped because no trigger condition checks were met.
Jan 17 12:23:02.644456 systemd[1]: systemd-binfmt.service - Set Up Additional Binary Formats was skipped because no trigger condition checks were met.
Jan 17 12:23:02.657001 systemd[1]: Starting systemd-update-done.service - Update is Completed...
Jan 17 12:23:02.659229 systemd[1]: xenserver-pv-version.service - Set fake PV driver version for XenServer was skipped because of an unmet condition check (ConditionVirtualization=xen).
Jan 17 12:23:02.667246 systemd[1]: Finished systemd-update-utmp.service - Record System Boot/Shutdown in UTMP.
Jan 17 12:23:02.696762 systemd[1]: modprobe@dm_mod.service: Deactivated successfully.
Jan 17 12:23:02.697071 systemd[1]: Finished modprobe@dm_mod.service - Load Kernel Module dm_mod.
Jan 17 12:23:02.703169 systemd[1]: modprobe@efi_pstore.service: Deactivated successfully.
Jan 17 12:23:02.704187 augenrules[1499]: No rules
Jan 17 12:23:02.704901 systemd[1]: Finished modprobe@efi_pstore.service - Load Kernel Module efi_pstore.
Jan 17 12:23:02.708286 systemd[1]: modprobe@loop.service: Deactivated successfully.
Jan 17 12:23:02.710684 systemd[1]: Finished modprobe@loop.service - Load Kernel Module loop.
Jan 17 12:23:02.721042 systemd[1]: Finished audit-rules.service - Load Security Auditing Rules.
Jan 17 12:23:02.737443 systemd[1]: Finished systemd-update-done.service - Update is Completed.
Jan 17 12:23:02.768769 systemd[1]: Finished clean-ca-certificates.service - Clean up broken links in /etc/ssl/certs.
Jan 17 12:23:02.783068 systemd[1]: proc-xen.mount - /proc/xen was skipped because of an unmet condition check (ConditionVirtualization=xen).
Jan 17 12:23:02.784170 systemd[1]: ignition-delete-config.service - Ignition (delete config) was skipped because no trigger condition checks were met.
Jan 17 12:23:02.797746 systemd[1]: Starting modprobe@dm_mod.service - Load Kernel Module dm_mod...
Jan 17 12:23:02.820730 systemd[1]: Starting modprobe@drm.service - Load Kernel Module drm...
Jan 17 12:23:02.835774 systemd[1]: Starting modprobe@efi_pstore.service - Load Kernel Module efi_pstore...
Jan 17 12:23:02.845123 systemd-resolved[1481]: Positive Trust Anchors:
Jan 17 12:23:02.845156 systemd-resolved[1481]: . IN DS 20326 8 2 e06d44b80b8f1d39a95c0b0d7c65d08458e880409bbc683457104237c7f8ec8d
Jan 17 12:23:02.845332 systemd-resolved[1481]: Negative trust anchors: home.arpa 10.in-addr.arpa 16.172.in-addr.arpa 17.172.in-addr.arpa 18.172.in-addr.arpa 19.172.in-addr.arpa 20.172.in-addr.arpa 21.172.in-addr.arpa 22.172.in-addr.arpa 23.172.in-addr.arpa 24.172.in-addr.arpa 25.172.in-addr.arpa 26.172.in-addr.arpa 27.172.in-addr.arpa 28.172.in-addr.arpa 29.172.in-addr.arpa 30.172.in-addr.arpa 31.172.in-addr.arpa 170.0.0.192.in-addr.arpa 171.0.0.192.in-addr.arpa 168.192.in-addr.arpa d.f.ip6.arpa ipv4only.arpa resolver.arpa corp home internal intranet lan local private test
Jan 17 12:23:02.855209 systemd[1]: Starting modprobe@loop.service - Load Kernel Module loop...
Jan 17 12:23:02.859306 systemd-resolved[1481]: Using system hostname 'ci-4081.3.0-1-97f5d36106'.
Jan 17 12:23:02.865565 systemd[1]: systemd-binfmt.service - Set Up Additional Binary Formats was skipped because no trigger condition checks were met.
Jan 17 12:23:02.866060 systemd[1]: update-ca-certificates.service - Update CA bundle at /etc/ssl/certs/ca-certificates.crt was skipped because of an unmet condition check (ConditionPathIsSymbolicLink=!/etc/ssl/certs/ca-certificates.crt).
Jan 17 12:23:02.866256 systemd[1]: xenserver-pv-version.service - Set fake PV driver version for XenServer was skipped because of an unmet condition check (ConditionVirtualization=xen).
Jan 17 12:23:02.867292 systemd[1]: Started systemd-resolved.service - Network Name Resolution.
Jan 17 12:23:02.882316 systemd[1]: Finished ensure-sysext.service.
Jan 17 12:23:02.888276 systemd[1]: modprobe@dm_mod.service: Deactivated successfully.
Jan 17 12:23:02.888647 systemd[1]: Finished modprobe@dm_mod.service - Load Kernel Module dm_mod.
Jan 17 12:23:02.892153 systemd[1]: modprobe@drm.service: Deactivated successfully.
Jan 17 12:23:02.892795 systemd[1]: Finished modprobe@drm.service - Load Kernel Module drm.
Jan 17 12:23:02.897352 systemd[1]: modprobe@efi_pstore.service: Deactivated successfully.
Jan 17 12:23:02.897891 systemd[1]: Finished modprobe@efi_pstore.service - Load Kernel Module efi_pstore.
Jan 17 12:23:02.903522 systemd[1]: modprobe@loop.service: Deactivated successfully.
Jan 17 12:23:02.903891 systemd[1]: Finished modprobe@loop.service - Load Kernel Module loop.
Jan 17 12:23:02.919723 systemd[1]: Reached target network.target - Network.
Jan 17 12:23:02.925760 systemd[1]: Reached target network-online.target - Network is Online.
Jan 17 12:23:02.926931 systemd[1]: Reached target nss-lookup.target - Host and Network Name Lookups.
Jan 17 12:23:02.927876 systemd[1]: systemd-pstore.service - Platform Persistent Storage Archival was skipped because of an unmet condition check (ConditionDirectoryNotEmpty=/sys/fs/pstore).
Jan 17 12:23:02.928018 systemd[1]: systemd-repart.service - Repartition Root Disk was skipped because no trigger condition checks were met.
Jan 17 12:23:02.938798 systemd[1]: Starting systemd-timesyncd.service - Network Time Synchronization...
Jan 17 12:23:03.052941 systemd[1]: Started systemd-timesyncd.service - Network Time Synchronization.
Jan 17 12:23:03.055038 systemd[1]: Reached target sysinit.target - System Initialization.
Jan 17 12:23:03.056264 systemd[1]: Started motdgen.path - Watch for update engine configuration changes.
Jan 17 12:23:03.057256 systemd[1]: Started user-cloudinit@var-lib-flatcar\x2dinstall-user_data.path - Watch for a cloud-config at /var/lib/flatcar-install/user_data.
Jan 17 12:23:03.058549 systemd[1]: Started systemd-tmpfiles-clean.timer - Daily Cleanup of Temporary Directories.
Jan 17 12:23:03.059497 systemd[1]: update-engine-stub.timer - Update Engine Stub Timer was skipped because of an unmet condition check (ConditionPathExists=/usr/.noupdate).
Jan 17 12:23:03.059663 systemd[1]: Reached target paths.target - Path Units.
Jan 17 12:23:03.060771 systemd[1]: Reached target time-set.target - System Time Set.
Jan 17 12:23:03.062481 systemd[1]: Started logrotate.timer - Daily rotation of log files.
Jan 17 12:23:03.063936 systemd[1]: Started mdadm.timer - Weekly check for MD array's redundancy information..
Jan 17 12:23:03.065579 systemd[1]: Reached target timers.target - Timer Units.
Jan 17 12:23:03.073660 systemd[1]: Listening on dbus.socket - D-Bus System Message Bus Socket.
Jan 17 12:23:03.080930 systemd[1]: Starting docker.socket - Docker Socket for the API...
Jan 17 12:23:03.084763 systemd-timesyncd[1534]: Contacted time server 5.78.62.36:123 (0.flatcar.pool.ntp.org).
Jan 17 12:23:03.084903 systemd-timesyncd[1534]: Initial clock synchronization to Fri 2025-01-17 12:23:03.188540 UTC.
Jan 17 12:23:03.088547 systemd[1]: Listening on sshd.socket - OpenSSH Server Socket.
Jan 17 12:23:03.094811 systemd[1]: Listening on docker.socket - Docker Socket for the API.
Jan 17 12:23:03.096023 systemd[1]: Reached target sockets.target - Socket Units.
Jan 17 12:23:03.096997 systemd[1]: Reached target basic.target - Basic System.
Jan 17 12:23:03.101593 systemd[1]: System is tainted: cgroupsv1
Jan 17 12:23:03.102039 systemd[1]: addon-config@oem.service - Configure Addon /oem was skipped because no trigger condition checks were met.
Jan 17 12:23:03.102135 systemd[1]: addon-run@oem.service - Run Addon /oem was skipped because no trigger condition checks were met.
Jan 17 12:23:03.109750 systemd[1]: Starting containerd.service - containerd container runtime...
Jan 17 12:23:03.126518 systemd[1]: Starting coreos-metadata.service - Flatcar Metadata Agent...
Jan 17 12:23:03.143985 systemd[1]: Starting dbus.service - D-Bus System Message Bus...
Jan 17 12:23:03.154663 systemd[1]: Starting enable-oem-cloudinit.service - Enable cloudinit...
Jan 17 12:23:03.173989 systemd[1]: Starting extend-filesystems.service - Extend Filesystems...
Jan 17 12:23:03.174986 systemd[1]: flatcar-setup-environment.service - Modifies /etc/environment for CoreOS was skipped because of an unmet condition check (ConditionPathExists=/oem/bin/flatcar-setup-environment).
Jan 17 12:23:03.182825 coreos-metadata[1539]: Jan 17 12:23:03.182 INFO Fetching http://169.254.169.254/metadata/v1.json: Attempt #1
Jan 17 12:23:03.195720 jq[1544]: false
Jan 17 12:23:03.200231 coreos-metadata[1539]: Jan 17 12:23:03.198 INFO Fetch successful
Jan 17 12:23:03.201568 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent...
Jan 17 12:23:03.225459 systemd[1]: Starting motdgen.service - Generate /run/flatcar/motd...
Jan 17 12:23:03.241637 systemd[1]: Starting nvidia.service - NVIDIA Configure Service...
Jan 17 12:23:03.256804 systemd[1]: Starting ssh-key-proc-cmdline.service - Install an ssh key from /proc/cmdline...
Jan 17 12:23:03.268554 dbus-daemon[1542]: [system] SELinux support is enabled
Jan 17 12:23:03.273888 systemd[1]: Starting sshd-keygen.service - Generate sshd host keys...
Jan 17 12:23:03.279927 extend-filesystems[1545]: Found loop4
Jan 17 12:23:03.291335 extend-filesystems[1545]: Found loop5
Jan 17 12:23:03.291335 extend-filesystems[1545]: Found loop6
Jan 17 12:23:03.291335 extend-filesystems[1545]: Found loop7
Jan 17 12:23:03.291335 extend-filesystems[1545]: Found vda
Jan 17 12:23:03.291335 extend-filesystems[1545]: Found vda1
Jan 17 12:23:03.291335 extend-filesystems[1545]: Found vda2
Jan 17 12:23:03.291335 extend-filesystems[1545]: Found vda3
Jan 17 12:23:03.291335 extend-filesystems[1545]: Found usr
Jan 17 12:23:03.291335 extend-filesystems[1545]: Found vda4
Jan 17 12:23:03.291335 extend-filesystems[1545]: Found vda6
Jan 17 12:23:03.291335 extend-filesystems[1545]: Found vda7
Jan 17 12:23:03.291335 extend-filesystems[1545]: Found vda9
Jan 17 12:23:03.291335 extend-filesystems[1545]: Checking size of /dev/vda9
Jan 17 12:23:03.305823 systemd[1]: Starting systemd-logind.service - User Login Management...
Jan 17 12:23:03.316114 systemd[1]: tcsd.service - TCG Core Services Daemon was skipped because of an unmet condition check (ConditionPathExists=/dev/tpm0).
Jan 17 12:23:03.336107 systemd[1]: Starting update-engine.service - Update Engine...
Jan 17 12:23:03.351743 systemd[1]: Starting update-ssh-keys-after-ignition.service - Run update-ssh-keys once after Ignition...
Jan 17 12:23:03.366322 systemd[1]: Started dbus.service - D-Bus System Message Bus.
Jan 17 12:23:03.374344 extend-filesystems[1545]: Resized partition /dev/vda9
Jan 17 12:23:03.401501 extend-filesystems[1573]: resize2fs 1.47.1 (20-May-2024)
Jan 17 12:23:03.411833 kernel: EXT4-fs (vda9): resizing filesystem from 553472 to 15121403 blocks
Jan 17 12:23:03.423678 systemd[1]: enable-oem-cloudinit.service: Skipped due to 'exec-condition'.
Jan 17 12:23:03.425404 systemd[1]: Condition check resulted in enable-oem-cloudinit.service - Enable cloudinit being skipped.
Jan 17 12:23:03.440074 systemd[1]: motdgen.service: Deactivated successfully.
Jan 17 12:23:03.443320 systemd[1]: Finished motdgen.service - Generate /run/flatcar/motd.
Jan 17 12:23:03.452616 systemd[1]: ssh-key-proc-cmdline.service: Deactivated successfully.
Jan 17 12:23:03.453070 systemd[1]: Finished ssh-key-proc-cmdline.service - Install an ssh key from /proc/cmdline.
Jan 17 12:23:03.492471 jq[1568]: true
Jan 17 12:23:03.518637 systemd[1]: Finished nvidia.service - NVIDIA Configure Service.
Jan 17 12:23:03.522462 (ntainerd)[1581]: containerd.service: Referenced but unset environment variable evaluates to an empty string: TORCX_IMAGEDIR, TORCX_UNPACKDIR
Jan 17 12:23:03.564618 systemd[1]: system-cloudinit@usr-share-oem-cloud\x2dconfig.yml.service - Load cloud-config from /usr/share/oem/cloud-config.yml was skipped because of an unmet condition check (ConditionFileNotEmpty=/usr/share/oem/cloud-config.yml).
Jan 17 12:23:03.564695 systemd[1]: Reached target system-config.target - Load system-provided cloud configs.
Jan 17 12:23:03.571857 systemd[1]: user-cloudinit-proc-cmdline.service - Load cloud-config from url defined in /proc/cmdline was skipped because of an unmet condition check (ConditionKernelCommandLine=cloud-config-url).
Jan 17 12:23:03.572012 systemd[1]: user-configdrive.service - Load cloud-config from /media/configdrive was skipped because of an unmet condition check (ConditionKernelCommandLine=!flatcar.oem.id=digitalocean).
Jan 17 12:23:03.572050 systemd[1]: Reached target user-config.target - Load user-provided cloud configs.
Jan 17 12:23:03.579921 systemd[1]: Finished coreos-metadata.service - Flatcar Metadata Agent.
Jan 17 12:23:03.586455 update_engine[1562]: I20250117 12:23:03.580605 1562 main.cc:92] Flatcar Update Engine starting
Jan 17 12:23:03.586455 update_engine[1562]: I20250117 12:23:03.585776 1562 update_check_scheduler.cc:74] Next update check in 7m35s
Jan 17 12:23:03.586330 systemd[1]: Started update-engine.service - Update Engine.
Jan 17 12:23:03.595152 systemd[1]: packet-phone-home.service - Report Success to Packet was skipped because no trigger condition checks were met.
Jan 17 12:23:03.596184 systemd[1]: cgroup compatibility translation between legacy and unified hierarchy settings activated. See cgroup-compat debug messages for details.
Jan 17 12:23:03.603793 systemd[1]: Started locksmithd.service - Cluster reboot manager.
Jan 17 12:23:03.615880 jq[1590]: true
Jan 17 12:23:03.673486 kernel: BTRFS warning: duplicate device /dev/vda3 devid 1 generation 36 scanned by (udev-worker) (1607)
Jan 17 12:23:03.869779 bash[1626]: Updated "/home/core/.ssh/authorized_keys"
Jan 17 12:23:03.875659 systemd[1]: Finished update-ssh-keys-after-ignition.service - Run update-ssh-keys once after Ignition.
Jan 17 12:23:03.895270 kernel: EXT4-fs (vda9): resized filesystem to 15121403
Jan 17 12:23:03.895596 locksmithd[1601]: locksmithd starting currentOperation="UPDATE_STATUS_IDLE" strategy="reboot"
Jan 17 12:23:03.905884 systemd[1]: Starting sshkeys.service...
Jan 17 12:23:03.956561 systemd[1]: Created slice system-coreos\x2dmetadata\x2dsshkeys.slice - Slice /system/coreos-metadata-sshkeys.
Jan 17 12:23:03.976523 systemd[1]: Starting coreos-metadata-sshkeys@core.service - Flatcar Metadata Agent (SSH Keys)...
Jan 17 12:23:03.990575 extend-filesystems[1573]: Filesystem at /dev/vda9 is mounted on /; on-line resizing required
Jan 17 12:23:03.990575 extend-filesystems[1573]: old_desc_blocks = 1, new_desc_blocks = 8
Jan 17 12:23:03.990575 extend-filesystems[1573]: The filesystem on /dev/vda9 is now 15121403 (4k) blocks long.
Jan 17 12:23:03.980134 systemd-logind[1557]: New seat seat0.
Jan 17 12:23:04.041220 extend-filesystems[1545]: Resized filesystem in /dev/vda9
Jan 17 12:23:04.041220 extend-filesystems[1545]: Found vdb
Jan 17 12:23:03.984626 systemd-logind[1557]: Watching system buttons on /dev/input/event1 (Power Button)
Jan 17 12:23:03.984715 systemd-logind[1557]: Watching system buttons on /dev/input/event0 (AT Translated Set 2 keyboard)
Jan 17 12:23:03.989131 systemd[1]: Started systemd-logind.service - User Login Management.
Jan 17 12:23:04.024698 systemd[1]: extend-filesystems.service: Deactivated successfully.
Jan 17 12:23:04.025117 systemd[1]: Finished extend-filesystems.service - Extend Filesystems.
Jan 17 12:23:04.233193 coreos-metadata[1636]: Jan 17 12:23:04.232 INFO Fetching http://169.254.169.254/metadata/v1.json: Attempt #1
Jan 17 12:23:04.251000 coreos-metadata[1636]: Jan 17 12:23:04.250 INFO Fetch successful
Jan 17 12:23:04.282069 unknown[1636]: wrote ssh authorized keys file for user: core
Jan 17 12:23:04.361249 update-ssh-keys[1649]: Updated "/home/core/.ssh/authorized_keys"
Jan 17 12:23:04.356769 systemd[1]: Finished coreos-metadata-sshkeys@core.service - Flatcar Metadata Agent (SSH Keys).
Jan 17 12:23:04.366239 systemd[1]: Finished sshkeys.service.
Jan 17 12:23:04.485036 containerd[1581]: time="2025-01-17T12:23:04.481792835Z" level=info msg="starting containerd" revision=174e0d1785eeda18dc2beba45e1d5a188771636b version=v1.7.21
Jan 17 12:23:04.559330 containerd[1581]: time="2025-01-17T12:23:04.559214621Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.aufs\"..." type=io.containerd.snapshotter.v1
Jan 17 12:23:04.567994 containerd[1581]: time="2025-01-17T12:23:04.567909721Z" level=info msg="skip loading plugin \"io.containerd.snapshotter.v1.aufs\"..." error="aufs is not supported (modprobe aufs failed: exit status 1 \"modprobe: FATAL: Module aufs not found in directory /lib/modules/6.6.71-flatcar\\n\"): skip plugin" type=io.containerd.snapshotter.v1
Jan 17 12:23:04.567994 containerd[1581]: time="2025-01-17T12:23:04.567973294Z" level=info msg="loading plugin \"io.containerd.event.v1.exchange\"..." type=io.containerd.event.v1
Jan 17 12:23:04.567994 containerd[1581]: time="2025-01-17T12:23:04.568009370Z" level=info msg="loading plugin \"io.containerd.internal.v1.opt\"..." type=io.containerd.internal.v1
Jan 17 12:23:04.568322 containerd[1581]: time="2025-01-17T12:23:04.568264180Z" level=info msg="loading plugin \"io.containerd.warning.v1.deprecations\"..." type=io.containerd.warning.v1
Jan 17 12:23:04.568322 containerd[1581]: time="2025-01-17T12:23:04.568290094Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.blockfile\"..." type=io.containerd.snapshotter.v1
Jan 17 12:23:04.568456 containerd[1581]: time="2025-01-17T12:23:04.568367829Z" level=info msg="skip loading plugin \"io.containerd.snapshotter.v1.blockfile\"..." error="no scratch file generator: skip plugin" type=io.containerd.snapshotter.v1
Jan 17 12:23:04.568456 containerd[1581]: time="2025-01-17T12:23:04.568418850Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.btrfs\"..." type=io.containerd.snapshotter.v1
Jan 17 12:23:04.570422 containerd[1581]: time="2025-01-17T12:23:04.568895911Z" level=info msg="skip loading plugin \"io.containerd.snapshotter.v1.btrfs\"..." error="path /var/lib/containerd/io.containerd.snapshotter.v1.btrfs (ext4) must be a btrfs filesystem to be used with the btrfs snapshotter: skip plugin" type=io.containerd.snapshotter.v1
Jan 17 12:23:04.570422 containerd[1581]: time="2025-01-17T12:23:04.568952855Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.devmapper\"..." type=io.containerd.snapshotter.v1
Jan 17 12:23:04.570422 containerd[1581]: time="2025-01-17T12:23:04.568983302Z" level=info msg="skip loading plugin \"io.containerd.snapshotter.v1.devmapper\"..." error="devmapper not configured: skip plugin" type=io.containerd.snapshotter.v1
Jan 17 12:23:04.570422 containerd[1581]: time="2025-01-17T12:23:04.568999550Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.native\"..." type=io.containerd.snapshotter.v1
Jan 17 12:23:04.570422 containerd[1581]: time="2025-01-17T12:23:04.569176099Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.overlayfs\"..." type=io.containerd.snapshotter.v1
Jan 17 12:23:04.570422 containerd[1581]: time="2025-01-17T12:23:04.569730786Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.zfs\"..." type=io.containerd.snapshotter.v1
Jan 17 12:23:04.571656 containerd[1581]: time="2025-01-17T12:23:04.571596653Z" level=info msg="skip loading plugin \"io.containerd.snapshotter.v1.zfs\"..." error="path /var/lib/containerd/io.containerd.snapshotter.v1.zfs must be a zfs filesystem to be used with the zfs snapshotter: skip plugin" type=io.containerd.snapshotter.v1
Jan 17 12:23:04.571656 containerd[1581]: time="2025-01-17T12:23:04.571645777Z" level=info msg="loading plugin \"io.containerd.content.v1.content\"..." type=io.containerd.content.v1
Jan 17 12:23:04.571843 containerd[1581]: time="2025-01-17T12:23:04.571822938Z" level=info msg="loading plugin \"io.containerd.metadata.v1.bolt\"..." type=io.containerd.metadata.v1
Jan 17 12:23:04.571960 containerd[1581]: time="2025-01-17T12:23:04.571938192Z" level=info msg="metadata content store policy set" policy=shared
Jan 17 12:23:04.603482 containerd[1581]: time="2025-01-17T12:23:04.603365210Z" level=info msg="loading plugin \"io.containerd.gc.v1.scheduler\"..." type=io.containerd.gc.v1
Jan 17 12:23:04.603482 containerd[1581]: time="2025-01-17T12:23:04.603503984Z" level=info msg="loading plugin \"io.containerd.differ.v1.walking\"..." type=io.containerd.differ.v1
Jan 17 12:23:04.603789 containerd[1581]: time="2025-01-17T12:23:04.603529305Z" level=info msg="loading plugin \"io.containerd.lease.v1.manager\"..." type=io.containerd.lease.v1
Jan 17 12:23:04.603789 containerd[1581]: time="2025-01-17T12:23:04.603554186Z" level=info msg="loading plugin \"io.containerd.streaming.v1.manager\"..." type=io.containerd.streaming.v1
Jan 17 12:23:04.605432 containerd[1581]: time="2025-01-17T12:23:04.604923358Z" level=info msg="loading plugin \"io.containerd.runtime.v1.linux\"..." type=io.containerd.runtime.v1
Jan 17 12:23:04.610770 containerd[1581]: time="2025-01-17T12:23:04.610651537Z" level=info msg="loading plugin \"io.containerd.monitor.v1.cgroups\"..." type=io.containerd.monitor.v1
Jan 17 12:23:04.614441 containerd[1581]: time="2025-01-17T12:23:04.613369598Z" level=info msg="loading plugin \"io.containerd.runtime.v2.task\"..." type=io.containerd.runtime.v2
Jan 17 12:23:04.619367 containerd[1581]: time="2025-01-17T12:23:04.619289360Z" level=info msg="loading plugin \"io.containerd.runtime.v2.shim\"..." type=io.containerd.runtime.v2
Jan 17 12:23:04.623431 containerd[1581]: time="2025-01-17T12:23:04.621567579Z" level=info msg="loading plugin \"io.containerd.sandbox.store.v1.local\"..." type=io.containerd.sandbox.store.v1
Jan 17 12:23:04.623431 containerd[1581]: time="2025-01-17T12:23:04.621666940Z" level=info msg="loading plugin \"io.containerd.sandbox.controller.v1.local\"..." type=io.containerd.sandbox.controller.v1
Jan 17 12:23:04.623431 containerd[1581]: time="2025-01-17T12:23:04.621695584Z" level=info msg="loading plugin \"io.containerd.service.v1.containers-service\"..." type=io.containerd.service.v1
Jan 17 12:23:04.623431 containerd[1581]: time="2025-01-17T12:23:04.621726521Z" level=info msg="loading plugin \"io.containerd.service.v1.content-service\"..." type=io.containerd.service.v1
Jan 17 12:23:04.623431 containerd[1581]: time="2025-01-17T12:23:04.621760423Z" level=info msg="loading plugin \"io.containerd.service.v1.diff-service\"..." type=io.containerd.service.v1
Jan 17 12:23:04.623431 containerd[1581]: time="2025-01-17T12:23:04.621794408Z" level=info msg="loading plugin \"io.containerd.service.v1.images-service\"..." type=io.containerd.service.v1
Jan 17 12:23:04.623431 containerd[1581]: time="2025-01-17T12:23:04.621839145Z" level=info msg="loading plugin \"io.containerd.service.v1.introspection-service\"..." type=io.containerd.service.v1
Jan 17 12:23:04.623431 containerd[1581]: time="2025-01-17T12:23:04.621862519Z" level=info msg="loading plugin \"io.containerd.service.v1.namespaces-service\"..." type=io.containerd.service.v1
Jan 17 12:23:04.623431 containerd[1581]: time="2025-01-17T12:23:04.621882451Z" level=info msg="loading plugin \"io.containerd.service.v1.snapshots-service\"..." type=io.containerd.service.v1
Jan 17 12:23:04.623431 containerd[1581]: time="2025-01-17T12:23:04.621905303Z" level=info msg="loading plugin \"io.containerd.service.v1.tasks-service\"..." type=io.containerd.service.v1
Jan 17 12:23:04.623431 containerd[1581]: time="2025-01-17T12:23:04.621941226Z" level=info msg="loading plugin \"io.containerd.grpc.v1.containers\"..." type=io.containerd.grpc.v1
Jan 17 12:23:04.623431 containerd[1581]: time="2025-01-17T12:23:04.621964348Z" level=info msg="loading plugin \"io.containerd.grpc.v1.content\"..." type=io.containerd.grpc.v1
Jan 17 12:23:04.623431 containerd[1581]: time="2025-01-17T12:23:04.621983813Z" level=info msg="loading plugin \"io.containerd.grpc.v1.diff\"..." type=io.containerd.grpc.v1
Jan 17 12:23:04.623431 containerd[1581]: time="2025-01-17T12:23:04.622006209Z" level=info msg="loading plugin \"io.containerd.grpc.v1.events\"..." type=io.containerd.grpc.v1
Jan 17 12:23:04.624482 containerd[1581]: time="2025-01-17T12:23:04.622045707Z" level=info msg="loading plugin \"io.containerd.grpc.v1.images\"..." type=io.containerd.grpc.v1
Jan 17 12:23:04.624482 containerd[1581]: time="2025-01-17T12:23:04.622071959Z" level=info msg="loading plugin \"io.containerd.grpc.v1.introspection\"..." type=io.containerd.grpc.v1
Jan 17 12:23:04.624482 containerd[1581]: time="2025-01-17T12:23:04.622091129Z" level=info msg="loading plugin \"io.containerd.grpc.v1.leases\"..." type=io.containerd.grpc.v1
Jan 17 12:23:04.624482 containerd[1581]: time="2025-01-17T12:23:04.622111766Z" level=info msg="loading plugin \"io.containerd.grpc.v1.namespaces\"..." type=io.containerd.grpc.v1
Jan 17 12:23:04.624482 containerd[1581]: time="2025-01-17T12:23:04.622134050Z" level=info msg="loading plugin \"io.containerd.grpc.v1.sandbox-controllers\"..." type=io.containerd.grpc.v1
Jan 17 12:23:04.624482 containerd[1581]: time="2025-01-17T12:23:04.622163018Z" level=info msg="loading plugin \"io.containerd.grpc.v1.sandboxes\"..." type=io.containerd.grpc.v1
Jan 17 12:23:04.624482 containerd[1581]: time="2025-01-17T12:23:04.622187463Z" level=info msg="loading plugin \"io.containerd.grpc.v1.snapshots\"..." type=io.containerd.grpc.v1
Jan 17 12:23:04.624482 containerd[1581]: time="2025-01-17T12:23:04.622212173Z" level=info msg="loading plugin \"io.containerd.grpc.v1.streaming\"..." type=io.containerd.grpc.v1
Jan 17 12:23:04.624482 containerd[1581]: time="2025-01-17T12:23:04.622233615Z" level=info msg="loading plugin \"io.containerd.grpc.v1.tasks\"..." type=io.containerd.grpc.v1
Jan 17 12:23:04.624482 containerd[1581]: time="2025-01-17T12:23:04.622268339Z" level=info msg="loading plugin \"io.containerd.transfer.v1.local\"..." type=io.containerd.transfer.v1
Jan 17 12:23:04.624482 containerd[1581]: time="2025-01-17T12:23:04.622322246Z" level=info msg="loading plugin \"io.containerd.grpc.v1.transfer\"..." type=io.containerd.grpc.v1
Jan 17 12:23:04.624482 containerd[1581]: time="2025-01-17T12:23:04.622345173Z" level=info msg="loading plugin \"io.containerd.grpc.v1.version\"..." type=io.containerd.grpc.v1
Jan 17 12:23:04.624482 containerd[1581]: time="2025-01-17T12:23:04.622363780Z" level=info msg="loading plugin \"io.containerd.internal.v1.restart\"..." type=io.containerd.internal.v1
Jan 17 12:23:04.624482 containerd[1581]: time="2025-01-17T12:23:04.622480787Z" level=info msg="loading plugin \"io.containerd.tracing.processor.v1.otlp\"..." type=io.containerd.tracing.processor.v1
Jan 17 12:23:04.625093 containerd[1581]: time="2025-01-17T12:23:04.622512068Z" level=info msg="skip loading plugin \"io.containerd.tracing.processor.v1.otlp\"..." error="skip plugin: tracing endpoint not configured" type=io.containerd.tracing.processor.v1
Jan 17 12:23:04.625093 containerd[1581]: time="2025-01-17T12:23:04.622532531Z" level=info msg="loading plugin \"io.containerd.internal.v1.tracing\"..." type=io.containerd.internal.v1
Jan 17 12:23:04.625093 containerd[1581]: time="2025-01-17T12:23:04.622552718Z" level=info msg="skip loading plugin \"io.containerd.internal.v1.tracing\"..." error="skip plugin: tracing endpoint not configured" type=io.containerd.internal.v1
Jan 17 12:23:04.625093 containerd[1581]: time="2025-01-17T12:23:04.622568807Z" level=info msg="loading plugin \"io.containerd.grpc.v1.healthcheck\"..." type=io.containerd.grpc.v1
Jan 17 12:23:04.625093 containerd[1581]: time="2025-01-17T12:23:04.622598816Z" level=info msg="loading plugin \"io.containerd.nri.v1.nri\"..." type=io.containerd.nri.v1
Jan 17 12:23:04.625093 containerd[1581]: time="2025-01-17T12:23:04.622704728Z" level=info msg="NRI interface is disabled by configuration."
Jan 17 12:23:04.625093 containerd[1581]: time="2025-01-17T12:23:04.622741506Z" level=info msg="loading plugin \"io.containerd.grpc.v1.cri\"..." type=io.containerd.grpc.v1
Jan 17 12:23:04.627523 containerd[1581]: time="2025-01-17T12:23:04.623271486Z" level=info msg="Start cri plugin with config {PluginConfig:{ContainerdConfig:{Snapshotter:overlayfs DefaultRuntimeName:runc DefaultRuntime:{Type: Path: Engine: PodAnnotations:[] ContainerAnnotations:[] Root: Options:map[] PrivilegedWithoutHostDevices:false PrivilegedWithoutHostDevicesAllDevicesAllowed:false BaseRuntimeSpec: NetworkPluginConfDir: NetworkPluginMaxConfNum:0 Snapshotter: SandboxMode:} UntrustedWorkloadRuntime:{Type: Path: Engine: PodAnnotations:[] ContainerAnnotations:[] Root: Options:map[] PrivilegedWithoutHostDevices:false PrivilegedWithoutHostDevicesAllDevicesAllowed:false BaseRuntimeSpec: NetworkPluginConfDir: NetworkPluginMaxConfNum:0 Snapshotter: SandboxMode:} Runtimes:map[runc:{Type:io.containerd.runc.v2 Path: Engine: PodAnnotations:[] ContainerAnnotations:[] Root: Options:map[SystemdCgroup:false] PrivilegedWithoutHostDevices:false PrivilegedWithoutHostDevicesAllDevicesAllowed:false BaseRuntimeSpec: NetworkPluginConfDir: NetworkPluginMaxConfNum:0 Snapshotter: SandboxMode:podsandbox}] NoPivot:false DisableSnapshotAnnotations:true DiscardUnpackedLayers:false IgnoreBlockIONotEnabledErrors:false IgnoreRdtNotEnabledErrors:false} CniConfig:{NetworkPluginBinDir:/opt/cni/bin NetworkPluginConfDir:/etc/cni/net.d NetworkPluginMaxConfNum:1 NetworkPluginSetupSerially:false NetworkPluginConfTemplate: IPPreference:} Registry:{ConfigPath: Mirrors:map[] Configs:map[] Auths:map[]
Headers:map[]} ImageDecryption:{KeyModel:node} DisableTCPService:true StreamServerAddress:127.0.0.1 StreamServerPort:0 StreamIdleTimeout:4h0m0s EnableSelinux:false SelinuxCategoryRange:1024 SandboxImage:registry.k8s.io/pause:3.8 StatsCollectPeriod:10 SystemdCgroup:false EnableTLSStreaming:false X509KeyPairStreaming:{TLSCertFile: TLSKeyFile:} MaxContainerLogLineSize:16384 DisableCgroup:false DisableApparmor:false RestrictOOMScoreAdj:false MaxConcurrentDownloads:3 DisableProcMount:false UnsetSeccompProfile: TolerateMissingHugetlbController:true DisableHugetlbController:true DeviceOwnershipFromSecurityContext:false IgnoreImageDefinedVolumes:false NetNSMountsUnderStateDir:false EnableUnprivilegedPorts:false EnableUnprivilegedICMP:false EnableCDI:false CDISpecDirs:[/etc/cdi /var/run/cdi] ImagePullProgressTimeout:5m0s DrainExecSyncIOTimeout:0s ImagePullWithSyncFs:false IgnoreDeprecationWarnings:[]} ContainerdRootDir:/var/lib/containerd ContainerdEndpoint:/run/containerd/containerd.sock RootDir:/var/lib/containerd/io.containerd.grpc.v1.cri StateDir:/run/containerd/io.containerd.grpc.v1.cri}" Jan 17 12:23:04.632441 containerd[1581]: time="2025-01-17T12:23:04.628055463Z" level=info msg="Connect containerd service" Jan 17 12:23:04.632441 containerd[1581]: time="2025-01-17T12:23:04.628197438Z" level=info msg="using legacy CRI server" Jan 17 12:23:04.632441 containerd[1581]: time="2025-01-17T12:23:04.628217180Z" level=info msg="using experimental NRI integration - disable nri plugin to prevent this" Jan 17 12:23:04.632441 containerd[1581]: time="2025-01-17T12:23:04.628490903Z" level=info msg="Get image filesystem path \"/var/lib/containerd/io.containerd.snapshotter.v1.overlayfs\"" Jan 17 12:23:04.632441 containerd[1581]: time="2025-01-17T12:23:04.629611344Z" level=error msg="failed to load cni during init, please check CRI plugin status before setting up network for pods" error="cni config load failed: no network config found in /etc/cni/net.d: cni plugin not initialized: 
failed to load cni config" Jan 17 12:23:04.632441 containerd[1581]: time="2025-01-17T12:23:04.629818091Z" level=info msg="Start subscribing containerd event" Jan 17 12:23:04.632441 containerd[1581]: time="2025-01-17T12:23:04.629926857Z" level=info msg="Start recovering state" Jan 17 12:23:04.632441 containerd[1581]: time="2025-01-17T12:23:04.630049460Z" level=info msg="Start event monitor" Jan 17 12:23:04.632441 containerd[1581]: time="2025-01-17T12:23:04.630078597Z" level=info msg="Start snapshots syncer" Jan 17 12:23:04.632441 containerd[1581]: time="2025-01-17T12:23:04.630096946Z" level=info msg="Start cni network conf syncer for default" Jan 17 12:23:04.632441 containerd[1581]: time="2025-01-17T12:23:04.631570920Z" level=info msg="Start streaming server" Jan 17 12:23:04.633943 containerd[1581]: time="2025-01-17T12:23:04.630379230Z" level=info msg=serving... address=/run/containerd/containerd.sock.ttrpc Jan 17 12:23:04.634130 containerd[1581]: time="2025-01-17T12:23:04.634101492Z" level=info msg=serving... address=/run/containerd/containerd.sock Jan 17 12:23:04.636433 containerd[1581]: time="2025-01-17T12:23:04.636315778Z" level=info msg="containerd successfully booted in 0.156992s" Jan 17 12:23:04.636687 systemd[1]: Started containerd.service - containerd container runtime. Jan 17 12:23:04.694612 sshd_keygen[1591]: ssh-keygen: generating new host keys: RSA ECDSA ED25519 Jan 17 12:23:04.763163 systemd[1]: Finished sshd-keygen.service - Generate sshd host keys. Jan 17 12:23:04.781918 systemd[1]: Starting issuegen.service - Generate /run/issue... Jan 17 12:23:04.808173 systemd[1]: issuegen.service: Deactivated successfully. Jan 17 12:23:04.808775 systemd[1]: Finished issuegen.service - Generate /run/issue. Jan 17 12:23:04.826109 systemd[1]: Starting systemd-user-sessions.service - Permit User Sessions... Jan 17 12:23:04.857468 systemd[1]: Finished systemd-user-sessions.service - Permit User Sessions. 
Jan 17 12:23:04.874092 systemd[1]: Started getty@tty1.service - Getty on tty1. Jan 17 12:23:04.883802 systemd[1]: Started serial-getty@ttyS0.service - Serial Getty on ttyS0. Jan 17 12:23:04.885245 systemd[1]: Reached target getty.target - Login Prompts. Jan 17 12:23:05.669782 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent. Jan 17 12:23:05.677534 systemd[1]: Reached target multi-user.target - Multi-User System. Jan 17 12:23:05.682553 (kubelet)[1690]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS, KUBELET_KUBEADM_ARGS Jan 17 12:23:05.686095 systemd[1]: Startup finished in 8.725s (kernel) + 9.735s (userspace) = 18.460s. Jan 17 12:23:06.828501 kubelet[1690]: E0117 12:23:06.828336 1690 run.go:74] "command failed" err="failed to load kubelet config file, path: /var/lib/kubelet/config.yaml, error: failed to load Kubelet config file /var/lib/kubelet/config.yaml, error failed to read kubelet config file \"/var/lib/kubelet/config.yaml\", error: open /var/lib/kubelet/config.yaml: no such file or directory" Jan 17 12:23:06.832208 systemd[1]: kubelet.service: Main process exited, code=exited, status=1/FAILURE Jan 17 12:23:06.832565 systemd[1]: kubelet.service: Failed with result 'exit-code'. Jan 17 12:23:09.966210 systemd[1]: Created slice system-sshd.slice - Slice /system/sshd. Jan 17 12:23:09.975012 systemd[1]: Started sshd@0-146.190.50.84:22-139.178.68.195:47886.service - OpenSSH per-connection server daemon (139.178.68.195:47886). Jan 17 12:23:10.103432 sshd[1703]: Accepted publickey for core from 139.178.68.195 port 47886 ssh2: RSA SHA256:r8mW/Iv+p7nZqo0WbSWD5Er765ayjb8xE8XAH1LjSMM Jan 17 12:23:10.106807 sshd[1703]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Jan 17 12:23:10.133528 systemd-logind[1557]: New session 1 of user core. Jan 17 12:23:10.135420 systemd[1]: Created slice user-500.slice - User Slice of UID 500. 
Jan 17 12:23:10.147522 systemd[1]: Starting user-runtime-dir@500.service - User Runtime Directory /run/user/500... Jan 17 12:23:10.177788 systemd[1]: Finished user-runtime-dir@500.service - User Runtime Directory /run/user/500. Jan 17 12:23:10.190122 systemd[1]: Starting user@500.service - User Manager for UID 500... Jan 17 12:23:10.213784 (systemd)[1709]: pam_unix(systemd-user:session): session opened for user core(uid=500) by (uid=0) Jan 17 12:23:10.438342 systemd[1709]: Queued start job for default target default.target. Jan 17 12:23:10.439070 systemd[1709]: Created slice app.slice - User Application Slice. Jan 17 12:23:10.439111 systemd[1709]: Reached target paths.target - Paths. Jan 17 12:23:10.439132 systemd[1709]: Reached target timers.target - Timers. Jan 17 12:23:10.446749 systemd[1709]: Starting dbus.socket - D-Bus User Message Bus Socket... Jan 17 12:23:10.465586 systemd[1709]: Listening on dbus.socket - D-Bus User Message Bus Socket. Jan 17 12:23:10.465929 systemd[1709]: Reached target sockets.target - Sockets. Jan 17 12:23:10.466124 systemd[1709]: Reached target basic.target - Basic System. Jan 17 12:23:10.466655 systemd[1]: Started user@500.service - User Manager for UID 500. Jan 17 12:23:10.468496 systemd[1709]: Reached target default.target - Main User Target. Jan 17 12:23:10.468662 systemd[1709]: Startup finished in 242ms. Jan 17 12:23:10.476037 systemd[1]: Started session-1.scope - Session 1 of User core. Jan 17 12:23:10.552203 systemd[1]: Started sshd@1-146.190.50.84:22-139.178.68.195:47888.service - OpenSSH per-connection server daemon (139.178.68.195:47888). Jan 17 12:23:10.610066 sshd[1721]: Accepted publickey for core from 139.178.68.195 port 47888 ssh2: RSA SHA256:r8mW/Iv+p7nZqo0WbSWD5Er765ayjb8xE8XAH1LjSMM Jan 17 12:23:10.613740 sshd[1721]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Jan 17 12:23:10.623742 systemd-logind[1557]: New session 2 of user core. 
Jan 17 12:23:10.632182 systemd[1]: Started session-2.scope - Session 2 of User core. Jan 17 12:23:10.705424 sshd[1721]: pam_unix(sshd:session): session closed for user core Jan 17 12:23:10.718152 systemd[1]: Started sshd@2-146.190.50.84:22-139.178.68.195:47892.service - OpenSSH per-connection server daemon (139.178.68.195:47892). Jan 17 12:23:10.719060 systemd[1]: sshd@1-146.190.50.84:22-139.178.68.195:47888.service: Deactivated successfully. Jan 17 12:23:10.728531 systemd[1]: session-2.scope: Deactivated successfully. Jan 17 12:23:10.729461 systemd-logind[1557]: Session 2 logged out. Waiting for processes to exit. Jan 17 12:23:10.733246 systemd-logind[1557]: Removed session 2. Jan 17 12:23:10.777447 sshd[1726]: Accepted publickey for core from 139.178.68.195 port 47892 ssh2: RSA SHA256:r8mW/Iv+p7nZqo0WbSWD5Er765ayjb8xE8XAH1LjSMM Jan 17 12:23:10.780597 sshd[1726]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Jan 17 12:23:10.789493 systemd-logind[1557]: New session 3 of user core. Jan 17 12:23:10.797118 systemd[1]: Started session-3.scope - Session 3 of User core. Jan 17 12:23:10.860451 sshd[1726]: pam_unix(sshd:session): session closed for user core Jan 17 12:23:10.870996 systemd[1]: Started sshd@3-146.190.50.84:22-139.178.68.195:47900.service - OpenSSH per-connection server daemon (139.178.68.195:47900). Jan 17 12:23:10.871731 systemd[1]: sshd@2-146.190.50.84:22-139.178.68.195:47892.service: Deactivated successfully. Jan 17 12:23:10.878774 systemd[1]: session-3.scope: Deactivated successfully. Jan 17 12:23:10.881636 systemd-logind[1557]: Session 3 logged out. Waiting for processes to exit. Jan 17 12:23:10.885918 systemd-logind[1557]: Removed session 3. 
Jan 17 12:23:10.932913 sshd[1734]: Accepted publickey for core from 139.178.68.195 port 47900 ssh2: RSA SHA256:r8mW/Iv+p7nZqo0WbSWD5Er765ayjb8xE8XAH1LjSMM Jan 17 12:23:10.935790 sshd[1734]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Jan 17 12:23:10.948581 systemd-logind[1557]: New session 4 of user core. Jan 17 12:23:10.958239 systemd[1]: Started session-4.scope - Session 4 of User core. Jan 17 12:23:11.033706 sshd[1734]: pam_unix(sshd:session): session closed for user core Jan 17 12:23:11.042592 systemd[1]: Started sshd@4-146.190.50.84:22-139.178.68.195:47904.service - OpenSSH per-connection server daemon (139.178.68.195:47904). Jan 17 12:23:11.043468 systemd[1]: sshd@3-146.190.50.84:22-139.178.68.195:47900.service: Deactivated successfully. Jan 17 12:23:11.052738 systemd-logind[1557]: Session 4 logged out. Waiting for processes to exit. Jan 17 12:23:11.053329 systemd[1]: session-4.scope: Deactivated successfully. Jan 17 12:23:11.057672 systemd-logind[1557]: Removed session 4. Jan 17 12:23:11.101707 sshd[1742]: Accepted publickey for core from 139.178.68.195 port 47904 ssh2: RSA SHA256:r8mW/Iv+p7nZqo0WbSWD5Er765ayjb8xE8XAH1LjSMM Jan 17 12:23:11.104852 sshd[1742]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Jan 17 12:23:11.114815 systemd-logind[1557]: New session 5 of user core. Jan 17 12:23:11.125364 systemd[1]: Started session-5.scope - Session 5 of User core. 
Jan 17 12:23:11.217513 sudo[1749]: core : PWD=/home/core ; USER=root ; COMMAND=/usr/sbin/setenforce 1 Jan 17 12:23:11.218114 sudo[1749]: pam_unix(sudo:session): session opened for user root(uid=0) by core(uid=500) Jan 17 12:23:11.239046 sudo[1749]: pam_unix(sudo:session): session closed for user root Jan 17 12:23:11.245957 sshd[1742]: pam_unix(sshd:session): session closed for user core Jan 17 12:23:11.258175 systemd[1]: Started sshd@5-146.190.50.84:22-139.178.68.195:47908.service - OpenSSH per-connection server daemon (139.178.68.195:47908). Jan 17 12:23:11.259074 systemd[1]: sshd@4-146.190.50.84:22-139.178.68.195:47904.service: Deactivated successfully. Jan 17 12:23:11.264862 systemd[1]: session-5.scope: Deactivated successfully. Jan 17 12:23:11.269241 systemd-logind[1557]: Session 5 logged out. Waiting for processes to exit. Jan 17 12:23:11.274067 systemd-logind[1557]: Removed session 5. Jan 17 12:23:11.324485 sshd[1751]: Accepted publickey for core from 139.178.68.195 port 47908 ssh2: RSA SHA256:r8mW/Iv+p7nZqo0WbSWD5Er765ayjb8xE8XAH1LjSMM Jan 17 12:23:11.327573 sshd[1751]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Jan 17 12:23:11.338583 systemd-logind[1557]: New session 6 of user core. Jan 17 12:23:11.342146 systemd[1]: Started session-6.scope - Session 6 of User core. 
Jan 17 12:23:11.415094 sudo[1759]: core : PWD=/home/core ; USER=root ; COMMAND=/usr/bin/rm -rf /etc/audit/rules.d/80-selinux.rules /etc/audit/rules.d/99-default.rules Jan 17 12:23:11.415873 sudo[1759]: pam_unix(sudo:session): session opened for user root(uid=0) by core(uid=500) Jan 17 12:23:11.422760 sudo[1759]: pam_unix(sudo:session): session closed for user root Jan 17 12:23:11.432710 sudo[1758]: core : PWD=/home/core ; USER=root ; COMMAND=/usr/bin/systemctl restart audit-rules Jan 17 12:23:11.433077 sudo[1758]: pam_unix(sudo:session): session opened for user root(uid=0) by core(uid=500) Jan 17 12:23:11.455348 systemd[1]: Stopping audit-rules.service - Load Security Auditing Rules... Jan 17 12:23:11.471550 auditctl[1762]: No rules Jan 17 12:23:11.472619 systemd[1]: audit-rules.service: Deactivated successfully. Jan 17 12:23:11.473074 systemd[1]: Stopped audit-rules.service - Load Security Auditing Rules. Jan 17 12:23:11.488288 systemd[1]: Starting audit-rules.service - Load Security Auditing Rules... Jan 17 12:23:11.548979 augenrules[1781]: No rules Jan 17 12:23:11.551297 systemd[1]: Finished audit-rules.service - Load Security Auditing Rules. Jan 17 12:23:11.554506 sudo[1758]: pam_unix(sudo:session): session closed for user root Jan 17 12:23:11.562748 sshd[1751]: pam_unix(sshd:session): session closed for user core Jan 17 12:23:11.571041 systemd[1]: Started sshd@6-146.190.50.84:22-139.178.68.195:47912.service - OpenSSH per-connection server daemon (139.178.68.195:47912). Jan 17 12:23:11.572236 systemd[1]: sshd@5-146.190.50.84:22-139.178.68.195:47908.service: Deactivated successfully. Jan 17 12:23:11.579153 systemd-logind[1557]: Session 6 logged out. Waiting for processes to exit. Jan 17 12:23:11.580552 systemd[1]: session-6.scope: Deactivated successfully. Jan 17 12:23:11.587189 systemd-logind[1557]: Removed session 6. 
Jan 17 12:23:11.634621 sshd[1787]: Accepted publickey for core from 139.178.68.195 port 47912 ssh2: RSA SHA256:r8mW/Iv+p7nZqo0WbSWD5Er765ayjb8xE8XAH1LjSMM Jan 17 12:23:11.637320 sshd[1787]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Jan 17 12:23:11.647312 systemd-logind[1557]: New session 7 of user core. Jan 17 12:23:11.653121 systemd[1]: Started session-7.scope - Session 7 of User core. Jan 17 12:23:11.723203 sudo[1794]: core : PWD=/home/core ; USER=root ; COMMAND=/home/core/install.sh Jan 17 12:23:11.724013 sudo[1794]: pam_unix(sudo:session): session opened for user root(uid=0) by core(uid=500) Jan 17 12:23:13.019803 systemd[1]: Stopped kubelet.service - kubelet: The Kubernetes Node Agent. Jan 17 12:23:13.036003 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent... Jan 17 12:23:13.086558 systemd[1]: Reloading requested from client PID 1831 ('systemctl') (unit session-7.scope)... Jan 17 12:23:13.086586 systemd[1]: Reloading... Jan 17 12:23:13.301422 zram_generator::config[1873]: No configuration found. Jan 17 12:23:13.555403 systemd[1]: /usr/lib/systemd/system/docker.socket:6: ListenStream= references a path below legacy directory /var/run/, updating /var/run/docker.sock → /run/docker.sock; please update the unit file accordingly. Jan 17 12:23:13.720809 systemd[1]: Reloading finished in 633 ms. Jan 17 12:23:13.786523 systemd[1]: kubelet.service: Control process exited, code=killed, status=15/TERM Jan 17 12:23:13.786861 systemd[1]: kubelet.service: Failed with result 'signal'. Jan 17 12:23:13.787568 systemd[1]: Stopped kubelet.service - kubelet: The Kubernetes Node Agent. Jan 17 12:23:13.795070 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent... Jan 17 12:23:14.002689 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent. 
Jan 17 12:23:14.026542 (kubelet)[1932]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS Jan 17 12:23:14.124093 kubelet[1932]: Flag --container-runtime-endpoint has been deprecated, This parameter should be set via the config file specified by the Kubelet's --config flag. See https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/ for more information. Jan 17 12:23:14.124093 kubelet[1932]: Flag --pod-infra-container-image has been deprecated, will be removed in a future release. Image garbage collector will get sandbox image information from CRI. Jan 17 12:23:14.124093 kubelet[1932]: Flag --volume-plugin-dir has been deprecated, This parameter should be set via the config file specified by the Kubelet's --config flag. See https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/ for more information. Jan 17 12:23:14.124927 kubelet[1932]: I0117 12:23:14.124200 1932 server.go:204] "--pod-infra-container-image will not be pruned by the image garbage collector in kubelet and should also be set in the remote runtime" Jan 17 12:23:14.458265 kubelet[1932]: I0117 12:23:14.458169 1932 server.go:487] "Kubelet version" kubeletVersion="v1.29.2" Jan 17 12:23:14.458265 kubelet[1932]: I0117 12:23:14.458240 1932 server.go:489] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK="" Jan 17 12:23:14.458769 kubelet[1932]: I0117 12:23:14.458715 1932 server.go:919] "Client rotation is on, will bootstrap in background" Jan 17 12:23:14.489355 kubelet[1932]: I0117 12:23:14.489274 1932 dynamic_cafile_content.go:157] "Starting controller" name="client-ca-bundle::/etc/kubernetes/pki/ca.crt" Jan 17 12:23:14.512939 kubelet[1932]: I0117 12:23:14.512851 1932 server.go:745] "--cgroups-per-qos enabled, but --cgroup-root was not specified. 
defaulting to /" Jan 17 12:23:14.513746 kubelet[1932]: I0117 12:23:14.513702 1932 container_manager_linux.go:265] "Container manager verified user specified cgroup-root exists" cgroupRoot=[] Jan 17 12:23:14.514036 kubelet[1932]: I0117 12:23:14.513995 1932 container_manager_linux.go:270] "Creating Container Manager object based on Node Config" nodeConfig={"RuntimeCgroupsName":"","SystemCgroupsName":"","KubeletCgroupsName":"","KubeletOOMScoreAdj":-999,"ContainerRuntime":"","CgroupsPerQOS":true,"CgroupRoot":"/","CgroupDriver":"cgroupfs","KubeletRootDir":"/var/lib/kubelet","ProtectKernelDefaults":false,"KubeReservedCgroupName":"","SystemReservedCgroupName":"","ReservedSystemCPUs":{},"EnforceNodeAllocatable":{"pods":{}},"KubeReserved":null,"SystemReserved":null,"HardEvictionThresholds":[{"Signal":"memory.available","Operator":"LessThan","Value":{"Quantity":"100Mi","Percentage":0},"GracePeriod":0,"MinReclaim":null},{"Signal":"nodefs.available","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.1},"GracePeriod":0,"MinReclaim":null},{"Signal":"nodefs.inodesFree","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.05},"GracePeriod":0,"MinReclaim":null},{"Signal":"imagefs.available","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.15},"GracePeriod":0,"MinReclaim":null}],"QOSReserved":{},"CPUManagerPolicy":"none","CPUManagerPolicyOptions":null,"TopologyManagerScope":"container","CPUManagerReconcilePeriod":10000000000,"ExperimentalMemoryManagerPolicy":"None","ExperimentalMemoryManagerReservedMemory":null,"PodPidsLimit":-1,"EnforceCPULimits":true,"CPUCFSQuotaPeriod":100000000,"TopologyManagerPolicy":"none","TopologyManagerPolicyOptions":null} Jan 17 12:23:14.514525 kubelet[1932]: I0117 12:23:14.514051 1932 topology_manager.go:138] "Creating topology manager with none policy" Jan 17 12:23:14.514525 kubelet[1932]: I0117 12:23:14.514072 1932 container_manager_linux.go:301] "Creating device plugin manager" Jan 17 12:23:14.514525 kubelet[1932]: 
I0117 12:23:14.514261 1932 state_mem.go:36] "Initialized new in-memory state store" Jan 17 12:23:14.514525 kubelet[1932]: I0117 12:23:14.514419 1932 kubelet.go:396] "Attempting to sync node with API server" Jan 17 12:23:14.514525 kubelet[1932]: I0117 12:23:14.514439 1932 kubelet.go:301] "Adding static pod path" path="/etc/kubernetes/manifests" Jan 17 12:23:14.514974 kubelet[1932]: I0117 12:23:14.514949 1932 kubelet.go:312] "Adding apiserver pod source" Jan 17 12:23:14.515105 kubelet[1932]: E0117 12:23:14.515081 1932 file.go:98] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Jan 17 12:23:14.515161 kubelet[1932]: E0117 12:23:14.515141 1932 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Jan 17 12:23:14.515244 kubelet[1932]: I0117 12:23:14.515207 1932 apiserver.go:42] "Waiting for node sync before watching apiserver pods" Jan 17 12:23:14.519281 kubelet[1932]: I0117 12:23:14.518569 1932 kuberuntime_manager.go:258] "Container runtime initialized" containerRuntime="containerd" version="v1.7.21" apiVersion="v1" Jan 17 12:23:14.524825 kubelet[1932]: I0117 12:23:14.524733 1932 kubelet.go:809] "Not starting ClusterTrustBundle informer because we are in static kubelet mode" Jan 17 12:23:14.528987 kubelet[1932]: W0117 12:23:14.528912 1932 probe.go:268] Flexvolume plugin directory at /opt/libexec/kubernetes/kubelet-plugins/volume/exec/ does not exist. Recreating. 
Jan 17 12:23:14.530211 kubelet[1932]: I0117 12:23:14.530174 1932 server.go:1256] "Started kubelet" Jan 17 12:23:14.534418 kubelet[1932]: I0117 12:23:14.531890 1932 server.go:162] "Starting to listen" address="0.0.0.0" port=10250 Jan 17 12:23:14.534418 kubelet[1932]: I0117 12:23:14.533358 1932 server.go:461] "Adding debug handlers to kubelet server" Jan 17 12:23:14.537544 kubelet[1932]: I0117 12:23:14.537492 1932 fs_resource_analyzer.go:67] "Starting FS ResourceAnalyzer" Jan 17 12:23:14.538612 kubelet[1932]: I0117 12:23:14.538527 1932 ratelimit.go:55] "Setting rate limiting for endpoint" service="podresources" qps=100 burstTokens=10 Jan 17 12:23:14.538972 kubelet[1932]: I0117 12:23:14.538947 1932 server.go:233] "Starting to serve the podresources API" endpoint="unix:/var/lib/kubelet/pod-resources/kubelet.sock" Jan 17 12:23:14.541208 kubelet[1932]: W0117 12:23:14.540726 1932 reflector.go:539] vendor/k8s.io/client-go/informers/factory.go:159: failed to list *v1.Service: services is forbidden: User "system:anonymous" cannot list resource "services" in API group "" at the cluster scope Jan 17 12:23:14.541208 kubelet[1932]: E0117 12:23:14.540771 1932 reflector.go:147] vendor/k8s.io/client-go/informers/factory.go:159: Failed to watch *v1.Service: failed to list *v1.Service: services is forbidden: User "system:anonymous" cannot list resource "services" in API group "" at the cluster scope Jan 17 12:23:14.541208 kubelet[1932]: W0117 12:23:14.540904 1932 reflector.go:539] vendor/k8s.io/client-go/informers/factory.go:159: failed to list *v1.Node: nodes "146.190.50.84" is forbidden: User "system:anonymous" cannot list resource "nodes" in API group "" at the cluster scope Jan 17 12:23:14.541208 kubelet[1932]: E0117 12:23:14.540925 1932 reflector.go:147] vendor/k8s.io/client-go/informers/factory.go:159: Failed to watch *v1.Node: failed to list *v1.Node: nodes "146.190.50.84" is forbidden: User "system:anonymous" cannot list resource "nodes" in API group "" at the cluster scope 
Jan 17 12:23:14.547562 kubelet[1932]: E0117 12:23:14.546980 1932 kubelet_node_status.go:462] "Error getting the current node from lister" err="node \"146.190.50.84\" not found" Jan 17 12:23:14.547562 kubelet[1932]: I0117 12:23:14.547035 1932 volume_manager.go:291] "Starting Kubelet Volume Manager" Jan 17 12:23:14.547562 kubelet[1932]: I0117 12:23:14.547135 1932 desired_state_of_world_populator.go:151] "Desired state populator starts to run" Jan 17 12:23:14.547562 kubelet[1932]: I0117 12:23:14.547219 1932 reconciler_new.go:29] "Reconciler: start to sync state" Jan 17 12:23:14.551433 kubelet[1932]: E0117 12:23:14.550807 1932 event.go:346] "Server rejected event (will not retry!)" err="events is forbidden: User \"system:anonymous\" cannot create resource \"events\" in API group \"\" in the namespace \"default\"" event="&Event{ObjectMeta:{146.190.50.84.181b7a56ce67bf39 default 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Node,Namespace:,Name:146.190.50.84,UID:146.190.50.84,APIVersion:,ResourceVersion:,FieldPath:,},Reason:Starting,Message:Starting kubelet.,Source:EventSource{Component:kubelet,Host:146.190.50.84,},FirstTimestamp:2025-01-17 12:23:14.530139961 +0000 UTC m=+0.494001455,LastTimestamp:2025-01-17 12:23:14.530139961 +0000 UTC m=+0.494001455,Count:1,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:146.190.50.84,}" Jan 17 12:23:14.556039 kubelet[1932]: I0117 12:23:14.555335 1932 factory.go:221] Registration of the systemd container factory successfully Jan 17 12:23:14.556039 kubelet[1932]: I0117 12:23:14.555585 1932 factory.go:219] Registration of the crio container factory failed: Get "http://%2Fvar%2Frun%2Fcrio%2Fcrio.sock/info": dial unix /var/run/crio/crio.sock: connect: no such file or directory Jan 17 12:23:14.558410 kubelet[1932]: E0117 12:23:14.557564 1932 kubelet.go:1462] "Image garbage collection failed once. 
Stats initialization may not have completed yet" err="invalid capacity 0 on image filesystem" Jan 17 12:23:14.562577 kubelet[1932]: I0117 12:23:14.562535 1932 factory.go:221] Registration of the containerd container factory successfully Jan 17 12:23:14.578125 kubelet[1932]: E0117 12:23:14.578012 1932 controller.go:145] "Failed to ensure lease exists, will retry" err="leases.coordination.k8s.io \"146.190.50.84\" is forbidden: User \"system:anonymous\" cannot get resource \"leases\" in API group \"coordination.k8s.io\" in the namespace \"kube-node-lease\"" interval="200ms" Jan 17 12:23:14.578340 kubelet[1932]: E0117 12:23:14.578216 1932 event.go:346] "Server rejected event (will not retry!)" err="events is forbidden: User \"system:anonymous\" cannot create resource \"events\" in API group \"\" in the namespace \"default\"" event="&Event{ObjectMeta:{146.190.50.84.181b7a56d009c748 default 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Node,Namespace:,Name:146.190.50.84,UID:146.190.50.84,APIVersion:,ResourceVersion:,FieldPath:,},Reason:InvalidDiskCapacity,Message:invalid capacity 0 on image filesystem,Source:EventSource{Component:kubelet,Host:146.190.50.84,},FirstTimestamp:2025-01-17 12:23:14.557536072 +0000 UTC m=+0.521397568,LastTimestamp:2025-01-17 12:23:14.557536072 +0000 UTC m=+0.521397568,Count:1,Type:Warning,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:146.190.50.84,}" Jan 17 12:23:14.578500 kubelet[1932]: W0117 12:23:14.578348 1932 reflector.go:539] vendor/k8s.io/client-go/informers/factory.go:159: failed to list *v1.CSIDriver: csidrivers.storage.k8s.io is forbidden: User "system:anonymous" cannot list resource "csidrivers" in API group "storage.k8s.io" at the cluster scope Jan 17 12:23:14.582676 kubelet[1932]: E0117 12:23:14.582600 1932 reflector.go:147] vendor/k8s.io/client-go/informers/factory.go:159: Failed to watch *v1.CSIDriver: failed to list 
*v1.CSIDriver: csidrivers.storage.k8s.io is forbidden: User "system:anonymous" cannot list resource "csidrivers" in API group "storage.k8s.io" at the cluster scope Jan 17 12:23:14.622297 kubelet[1932]: I0117 12:23:14.622252 1932 cpu_manager.go:214] "Starting CPU manager" policy="none" Jan 17 12:23:14.622573 kubelet[1932]: I0117 12:23:14.622551 1932 cpu_manager.go:215] "Reconciling" reconcilePeriod="10s" Jan 17 12:23:14.623108 kubelet[1932]: I0117 12:23:14.622699 1932 state_mem.go:36] "Initialized new in-memory state store" Jan 17 12:23:14.624015 kubelet[1932]: E0117 12:23:14.623983 1932 event.go:346] "Server rejected event (will not retry!)" err="events is forbidden: User \"system:anonymous\" cannot create resource \"events\" in API group \"\" in the namespace \"default\"" event="&Event{ObjectMeta:{146.190.50.84.181b7a56d3c5b320 default 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Node,Namespace:,Name:146.190.50.84,UID:146.190.50.84,APIVersion:,ResourceVersion:,FieldPath:,},Reason:NodeHasSufficientMemory,Message:Node 146.190.50.84 status is now: NodeHasSufficientMemory,Source:EventSource{Component:kubelet,Host:146.190.50.84,},FirstTimestamp:2025-01-17 12:23:14.620183328 +0000 UTC m=+0.584044825,LastTimestamp:2025-01-17 12:23:14.620183328 +0000 UTC m=+0.584044825,Count:1,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:146.190.50.84,}" Jan 17 12:23:14.629784 kubelet[1932]: I0117 12:23:14.629727 1932 policy_none.go:49] "None policy: Start" Jan 17 12:23:14.631837 kubelet[1932]: I0117 12:23:14.631074 1932 memory_manager.go:170] "Starting memorymanager" policy="None" Jan 17 12:23:14.631837 kubelet[1932]: I0117 12:23:14.631123 1932 state_mem.go:35] "Initializing new in-memory state store" Jan 17 12:23:14.650494 kubelet[1932]: I0117 12:23:14.650362 1932 kubelet_node_status.go:73] "Attempting to register node" node="146.190.50.84" Jan 17 
12:23:14.657298 kubelet[1932]: I0117 12:23:14.657193 1932 manager.go:479] "Failed to read data from checkpoint" checkpoint="kubelet_internal_checkpoint" err="checkpoint is not found" Jan 17 12:23:14.659446 kubelet[1932]: I0117 12:23:14.659403 1932 plugin_manager.go:118] "Starting Kubelet Plugin Manager" Jan 17 12:23:14.663858 kubelet[1932]: I0117 12:23:14.663316 1932 kubelet_node_status.go:76] "Successfully registered node" node="146.190.50.84" Jan 17 12:23:14.671693 kubelet[1932]: E0117 12:23:14.670311 1932 eviction_manager.go:282] "Eviction manager: failed to get summary stats" err="failed to get node info: node \"146.190.50.84\" not found" Jan 17 12:23:14.697642 kubelet[1932]: E0117 12:23:14.697592 1932 kubelet_node_status.go:462] "Error getting the current node from lister" err="node \"146.190.50.84\" not found" Jan 17 12:23:14.698288 kubelet[1932]: I0117 12:23:14.698254 1932 kubelet_network_linux.go:50] "Initialized iptables rules." protocol="IPv4" Jan 17 12:23:14.700599 kubelet[1932]: I0117 12:23:14.700552 1932 kubelet_network_linux.go:50] "Initialized iptables rules." 
protocol="IPv6" Jan 17 12:23:14.701286 kubelet[1932]: I0117 12:23:14.700923 1932 status_manager.go:217] "Starting to sync pod status with apiserver" Jan 17 12:23:14.701286 kubelet[1932]: I0117 12:23:14.700984 1932 kubelet.go:2329] "Starting kubelet main sync loop" Jan 17 12:23:14.701286 kubelet[1932]: E0117 12:23:14.701176 1932 kubelet.go:2353] "Skipping pod synchronization" err="PLEG is not healthy: pleg has yet to be successful" Jan 17 12:23:14.798081 kubelet[1932]: E0117 12:23:14.797847 1932 kubelet_node_status.go:462] "Error getting the current node from lister" err="node \"146.190.50.84\" not found" Jan 17 12:23:14.898800 kubelet[1932]: E0117 12:23:14.898657 1932 kubelet_node_status.go:462] "Error getting the current node from lister" err="node \"146.190.50.84\" not found" Jan 17 12:23:14.999234 kubelet[1932]: E0117 12:23:14.999146 1932 kubelet_node_status.go:462] "Error getting the current node from lister" err="node \"146.190.50.84\" not found" Jan 17 12:23:15.100409 kubelet[1932]: E0117 12:23:15.100143 1932 kubelet_node_status.go:462] "Error getting the current node from lister" err="node \"146.190.50.84\" not found" Jan 17 12:23:15.200470 kubelet[1932]: E0117 12:23:15.200355 1932 kubelet_node_status.go:462] "Error getting the current node from lister" err="node \"146.190.50.84\" not found" Jan 17 12:23:15.301641 kubelet[1932]: E0117 12:23:15.301544 1932 kubelet_node_status.go:462] "Error getting the current node from lister" err="node \"146.190.50.84\" not found" Jan 17 12:23:15.402131 kubelet[1932]: E0117 12:23:15.402034 1932 kubelet_node_status.go:462] "Error getting the current node from lister" err="node \"146.190.50.84\" not found" Jan 17 12:23:15.462859 kubelet[1932]: I0117 12:23:15.462491 1932 transport.go:147] "Certificate rotation detected, shutting down client connections to start using new credentials" Jan 17 12:23:15.462859 kubelet[1932]: W0117 12:23:15.462763 1932 reflector.go:462] vendor/k8s.io/client-go/informers/factory.go:159: watch of 
*v1.RuntimeClass ended with: very short watch: vendor/k8s.io/client-go/informers/factory.go:159: Unexpected watch close - watch lasted less than a second and no items received Jan 17 12:23:15.462859 kubelet[1932]: W0117 12:23:15.462817 1932 reflector.go:462] vendor/k8s.io/client-go/informers/factory.go:159: watch of *v1.Service ended with: very short watch: vendor/k8s.io/client-go/informers/factory.go:159: Unexpected watch close - watch lasted less than a second and no items received Jan 17 12:23:15.503192 kubelet[1932]: E0117 12:23:15.503107 1932 kubelet_node_status.go:462] "Error getting the current node from lister" err="node \"146.190.50.84\" not found" Jan 17 12:23:15.515406 kubelet[1932]: E0117 12:23:15.515311 1932 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Jan 17 12:23:15.536951 sudo[1794]: pam_unix(sudo:session): session closed for user root Jan 17 12:23:15.541279 sshd[1787]: pam_unix(sshd:session): session closed for user core Jan 17 12:23:15.545733 systemd[1]: sshd@6-146.190.50.84:22-139.178.68.195:47912.service: Deactivated successfully. Jan 17 12:23:15.553554 systemd[1]: session-7.scope: Deactivated successfully. Jan 17 12:23:15.557019 systemd-logind[1557]: Session 7 logged out. Waiting for processes to exit. Jan 17 12:23:15.559125 systemd-logind[1557]: Removed session 7. 
Jan 17 12:23:15.603455 kubelet[1932]: E0117 12:23:15.603343 1932 kubelet_node_status.go:462] "Error getting the current node from lister" err="node \"146.190.50.84\" not found" Jan 17 12:23:15.704223 kubelet[1932]: E0117 12:23:15.703982 1932 kubelet_node_status.go:462] "Error getting the current node from lister" err="node \"146.190.50.84\" not found" Jan 17 12:23:15.804945 kubelet[1932]: E0117 12:23:15.804849 1932 kubelet_node_status.go:462] "Error getting the current node from lister" err="node \"146.190.50.84\" not found" Jan 17 12:23:15.905713 kubelet[1932]: E0117 12:23:15.905634 1932 kubelet_node_status.go:462] "Error getting the current node from lister" err="node \"146.190.50.84\" not found" Jan 17 12:23:16.006575 kubelet[1932]: E0117 12:23:16.006358 1932 kubelet_node_status.go:462] "Error getting the current node from lister" err="node \"146.190.50.84\" not found" Jan 17 12:23:16.108427 kubelet[1932]: I0117 12:23:16.108178 1932 kuberuntime_manager.go:1529] "Updating runtime config through cri with podcidr" CIDR="192.168.1.0/24" Jan 17 12:23:16.109321 containerd[1581]: time="2025-01-17T12:23:16.109170566Z" level=info msg="No cni config template is specified, wait for other system components to drop the config." 
Jan 17 12:23:16.111214 kubelet[1932]: I0117 12:23:16.110344 1932 kubelet_network.go:61] "Updating Pod CIDR" originalPodCIDR="" newPodCIDR="192.168.1.0/24" Jan 17 12:23:16.515741 kubelet[1932]: E0117 12:23:16.515638 1932 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Jan 17 12:23:16.516874 kubelet[1932]: I0117 12:23:16.516831 1932 apiserver.go:52] "Watching apiserver" Jan 17 12:23:16.524260 kubelet[1932]: I0117 12:23:16.524182 1932 topology_manager.go:215] "Topology Admit Handler" podUID="8f5ac4d3-f162-466f-b827-8117baf6aa14" podNamespace="calico-system" podName="calico-node-km4vs" Jan 17 12:23:16.524944 kubelet[1932]: I0117 12:23:16.524661 1932 topology_manager.go:215] "Topology Admit Handler" podUID="34cd60d5-9542-4f9c-a341-f3b5c6223a93" podNamespace="calico-system" podName="csi-node-driver-qzxmk" Jan 17 12:23:16.524944 kubelet[1932]: I0117 12:23:16.524744 1932 topology_manager.go:215] "Topology Admit Handler" podUID="b034bd4d-f8b4-4f27-bcef-af19ea6a2351" podNamespace="kube-system" podName="kube-proxy-pfrjq" Jan 17 12:23:16.525599 kubelet[1932]: E0117 12:23:16.525321 1932 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" pod="calico-system/csi-node-driver-qzxmk" podUID="34cd60d5-9542-4f9c-a341-f3b5c6223a93" Jan 17 12:23:16.547848 kubelet[1932]: I0117 12:23:16.547704 1932 desired_state_of_world_populator.go:159] "Finished populating initial desired state of world" Jan 17 12:23:16.560776 kubelet[1932]: I0117 12:23:16.559405 1932 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"var-run-calico\" (UniqueName: \"kubernetes.io/host-path/8f5ac4d3-f162-466f-b827-8117baf6aa14-var-run-calico\") pod \"calico-node-km4vs\" (UID: \"8f5ac4d3-f162-466f-b827-8117baf6aa14\") " 
pod="calico-system/calico-node-km4vs" Jan 17 12:23:16.560776 kubelet[1932]: I0117 12:23:16.559491 1932 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cni-bin-dir\" (UniqueName: \"kubernetes.io/host-path/8f5ac4d3-f162-466f-b827-8117baf6aa14-cni-bin-dir\") pod \"calico-node-km4vs\" (UID: \"8f5ac4d3-f162-466f-b827-8117baf6aa14\") " pod="calico-system/calico-node-km4vs" Jan 17 12:23:16.560776 kubelet[1932]: I0117 12:23:16.559523 1932 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"socket-dir\" (UniqueName: \"kubernetes.io/host-path/34cd60d5-9542-4f9c-a341-f3b5c6223a93-socket-dir\") pod \"csi-node-driver-qzxmk\" (UID: \"34cd60d5-9542-4f9c-a341-f3b5c6223a93\") " pod="calico-system/csi-node-driver-qzxmk" Jan 17 12:23:16.560776 kubelet[1932]: I0117 12:23:16.559558 1932 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-proxy\" (UniqueName: \"kubernetes.io/configmap/b034bd4d-f8b4-4f27-bcef-af19ea6a2351-kube-proxy\") pod \"kube-proxy-pfrjq\" (UID: \"b034bd4d-f8b4-4f27-bcef-af19ea6a2351\") " pod="kube-system/kube-proxy-pfrjq" Jan 17 12:23:16.560776 kubelet[1932]: I0117 12:23:16.559592 1932 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"var-lib-calico\" (UniqueName: \"kubernetes.io/host-path/8f5ac4d3-f162-466f-b827-8117baf6aa14-var-lib-calico\") pod \"calico-node-km4vs\" (UID: \"8f5ac4d3-f162-466f-b827-8117baf6aa14\") " pod="calico-system/calico-node-km4vs" Jan 17 12:23:16.561269 kubelet[1932]: I0117 12:23:16.559623 1932 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cni-log-dir\" (UniqueName: \"kubernetes.io/host-path/8f5ac4d3-f162-466f-b827-8117baf6aa14-cni-log-dir\") pod \"calico-node-km4vs\" (UID: \"8f5ac4d3-f162-466f-b827-8117baf6aa14\") " pod="calico-system/calico-node-km4vs" Jan 17 12:23:16.561269 kubelet[1932]: I0117 
12:23:16.559661 1932 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"flexvol-driver-host\" (UniqueName: \"kubernetes.io/host-path/8f5ac4d3-f162-466f-b827-8117baf6aa14-flexvol-driver-host\") pod \"calico-node-km4vs\" (UID: \"8f5ac4d3-f162-466f-b827-8117baf6aa14\") " pod="calico-system/calico-node-km4vs" Jan 17 12:23:16.561269 kubelet[1932]: I0117 12:23:16.559691 1932 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"varrun\" (UniqueName: \"kubernetes.io/host-path/34cd60d5-9542-4f9c-a341-f3b5c6223a93-varrun\") pod \"csi-node-driver-qzxmk\" (UID: \"34cd60d5-9542-4f9c-a341-f3b5c6223a93\") " pod="calico-system/csi-node-driver-qzxmk" Jan 17 12:23:16.561269 kubelet[1932]: I0117 12:23:16.559724 1932 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kubelet-dir\" (UniqueName: \"kubernetes.io/host-path/34cd60d5-9542-4f9c-a341-f3b5c6223a93-kubelet-dir\") pod \"csi-node-driver-qzxmk\" (UID: \"34cd60d5-9542-4f9c-a341-f3b5c6223a93\") " pod="calico-system/csi-node-driver-qzxmk" Jan 17 12:23:16.561269 kubelet[1932]: I0117 12:23:16.559787 1932 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-9qd9j\" (UniqueName: \"kubernetes.io/projected/34cd60d5-9542-4f9c-a341-f3b5c6223a93-kube-api-access-9qd9j\") pod \"csi-node-driver-qzxmk\" (UID: \"34cd60d5-9542-4f9c-a341-f3b5c6223a93\") " pod="calico-system/csi-node-driver-qzxmk" Jan 17 12:23:16.561642 kubelet[1932]: I0117 12:23:16.559820 1932 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/b034bd4d-f8b4-4f27-bcef-af19ea6a2351-lib-modules\") pod \"kube-proxy-pfrjq\" (UID: \"b034bd4d-f8b4-4f27-bcef-af19ea6a2351\") " pod="kube-system/kube-proxy-pfrjq" Jan 17 12:23:16.561642 kubelet[1932]: I0117 12:23:16.559853 1932 reconciler_common.go:258] 
"operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-56vz8\" (UniqueName: \"kubernetes.io/projected/b034bd4d-f8b4-4f27-bcef-af19ea6a2351-kube-api-access-56vz8\") pod \"kube-proxy-pfrjq\" (UID: \"b034bd4d-f8b4-4f27-bcef-af19ea6a2351\") " pod="kube-system/kube-proxy-pfrjq" Jan 17 12:23:16.561642 kubelet[1932]: I0117 12:23:16.559992 1932 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"policysync\" (UniqueName: \"kubernetes.io/host-path/8f5ac4d3-f162-466f-b827-8117baf6aa14-policysync\") pod \"calico-node-km4vs\" (UID: \"8f5ac4d3-f162-466f-b827-8117baf6aa14\") " pod="calico-system/calico-node-km4vs" Jan 17 12:23:16.561642 kubelet[1932]: I0117 12:23:16.560055 1932 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"tigera-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/8f5ac4d3-f162-466f-b827-8117baf6aa14-tigera-ca-bundle\") pod \"calico-node-km4vs\" (UID: \"8f5ac4d3-f162-466f-b827-8117baf6aa14\") " pod="calico-system/calico-node-km4vs" Jan 17 12:23:16.561642 kubelet[1932]: I0117 12:23:16.560107 1932 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"registration-dir\" (UniqueName: \"kubernetes.io/host-path/34cd60d5-9542-4f9c-a341-f3b5c6223a93-registration-dir\") pod \"csi-node-driver-qzxmk\" (UID: \"34cd60d5-9542-4f9c-a341-f3b5c6223a93\") " pod="calico-system/csi-node-driver-qzxmk" Jan 17 12:23:16.561896 kubelet[1932]: I0117 12:23:16.560152 1932 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"xtables-lock\" (UniqueName: \"kubernetes.io/host-path/b034bd4d-f8b4-4f27-bcef-af19ea6a2351-xtables-lock\") pod \"kube-proxy-pfrjq\" (UID: \"b034bd4d-f8b4-4f27-bcef-af19ea6a2351\") " pod="kube-system/kube-proxy-pfrjq" Jan 17 12:23:16.561896 kubelet[1932]: I0117 12:23:16.560188 1932 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume 
started for volume \"xtables-lock\" (UniqueName: \"kubernetes.io/host-path/8f5ac4d3-f162-466f-b827-8117baf6aa14-xtables-lock\") pod \"calico-node-km4vs\" (UID: \"8f5ac4d3-f162-466f-b827-8117baf6aa14\") " pod="calico-system/calico-node-km4vs" Jan 17 12:23:16.561896 kubelet[1932]: I0117 12:23:16.560251 1932 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"node-certs\" (UniqueName: \"kubernetes.io/secret/8f5ac4d3-f162-466f-b827-8117baf6aa14-node-certs\") pod \"calico-node-km4vs\" (UID: \"8f5ac4d3-f162-466f-b827-8117baf6aa14\") " pod="calico-system/calico-node-km4vs" Jan 17 12:23:16.561896 kubelet[1932]: I0117 12:23:16.560289 1932 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cni-net-dir\" (UniqueName: \"kubernetes.io/host-path/8f5ac4d3-f162-466f-b827-8117baf6aa14-cni-net-dir\") pod \"calico-node-km4vs\" (UID: \"8f5ac4d3-f162-466f-b827-8117baf6aa14\") " pod="calico-system/calico-node-km4vs" Jan 17 12:23:16.561896 kubelet[1932]: I0117 12:23:16.560326 1932 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-cpfhg\" (UniqueName: \"kubernetes.io/projected/8f5ac4d3-f162-466f-b827-8117baf6aa14-kube-api-access-cpfhg\") pod \"calico-node-km4vs\" (UID: \"8f5ac4d3-f162-466f-b827-8117baf6aa14\") " pod="calico-system/calico-node-km4vs" Jan 17 12:23:16.562163 kubelet[1932]: I0117 12:23:16.560364 1932 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/8f5ac4d3-f162-466f-b827-8117baf6aa14-lib-modules\") pod \"calico-node-km4vs\" (UID: \"8f5ac4d3-f162-466f-b827-8117baf6aa14\") " pod="calico-system/calico-node-km4vs" Jan 17 12:23:16.671711 kubelet[1932]: E0117 12:23:16.671532 1932 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 17 12:23:16.671711 kubelet[1932]: 
W0117 12:23:16.671576 1932 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 17 12:23:16.671711 kubelet[1932]: E0117 12:23:16.671614 1932 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jan 17 12:23:16.698698 kubelet[1932]: E0117 12:23:16.696491 1932 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 17 12:23:16.698698 kubelet[1932]: W0117 12:23:16.696531 1932 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 17 12:23:16.698698 kubelet[1932]: E0117 12:23:16.696574 1932 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jan 17 12:23:16.703731 kubelet[1932]: E0117 12:23:16.703643 1932 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 17 12:23:16.703961 kubelet[1932]: W0117 12:23:16.703717 1932 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 17 12:23:16.703961 kubelet[1932]: E0117 12:23:16.703903 1932 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Jan 17 12:23:16.708290 kubelet[1932]: E0117 12:23:16.706029 1932 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 17 12:23:16.708290 kubelet[1932]: W0117 12:23:16.706061 1932 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 17 12:23:16.708290 kubelet[1932]: E0117 12:23:16.706096 1932 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jan 17 12:23:16.832867 kubelet[1932]: E0117 12:23:16.832442 1932 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 67.207.67.3 67.207.67.2 67.207.67.3" Jan 17 12:23:16.833130 kubelet[1932]: E0117 12:23:16.833066 1932 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 67.207.67.3 67.207.67.2 67.207.67.3" Jan 17 12:23:16.833905 containerd[1581]: time="2025-01-17T12:23:16.833833800Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-proxy-pfrjq,Uid:b034bd4d-f8b4-4f27-bcef-af19ea6a2351,Namespace:kube-system,Attempt:0,}" Jan 17 12:23:16.834869 containerd[1581]: time="2025-01-17T12:23:16.834749367Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-node-km4vs,Uid:8f5ac4d3-f162-466f-b827-8117baf6aa14,Namespace:calico-system,Attempt:0,}" Jan 17 12:23:17.516973 kubelet[1932]: E0117 12:23:17.516820 1932 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Jan 17 12:23:17.589824 containerd[1581]: time="2025-01-17T12:23:17.588858412Z" level=info msg="ImageCreate event 
name:\"registry.k8s.io/pause:3.8\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"} labels:{key:\"io.cri-containerd.pinned\" value:\"pinned\"}" Jan 17 12:23:17.597486 containerd[1581]: time="2025-01-17T12:23:17.597390528Z" level=info msg="ImageUpdate event name:\"registry.k8s.io/pause:3.8\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"} labels:{key:\"io.cri-containerd.pinned\" value:\"pinned\"}" Jan 17 12:23:17.601440 containerd[1581]: time="2025-01-17T12:23:17.601299521Z" level=info msg="stop pulling image registry.k8s.io/pause:3.8: active requests=0, bytes read=0" Jan 17 12:23:17.604399 containerd[1581]: time="2025-01-17T12:23:17.604295306Z" level=info msg="stop pulling image registry.k8s.io/pause:3.8: active requests=0, bytes read=312056" Jan 17 12:23:17.606905 containerd[1581]: time="2025-01-17T12:23:17.606808778Z" level=info msg="ImageCreate event name:\"sha256:4873874c08efc72e9729683a83ffbb7502ee729e9a5ac097723806ea7fa13517\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"} labels:{key:\"io.cri-containerd.pinned\" value:\"pinned\"}" Jan 17 12:23:17.614422 containerd[1581]: time="2025-01-17T12:23:17.613700588Z" level=info msg="ImageCreate event name:\"registry.k8s.io/pause@sha256:9001185023633d17a2f98ff69b6ff2615b8ea02a825adffa40422f51dfdcde9d\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"} labels:{key:\"io.cri-containerd.pinned\" value:\"pinned\"}" Jan 17 12:23:17.615558 containerd[1581]: time="2025-01-17T12:23:17.615056616Z" level=info msg="Pulled image \"registry.k8s.io/pause:3.8\" with image id \"sha256:4873874c08efc72e9729683a83ffbb7502ee729e9a5ac097723806ea7fa13517\", repo tag \"registry.k8s.io/pause:3.8\", repo digest \"registry.k8s.io/pause@sha256:9001185023633d17a2f98ff69b6ff2615b8ea02a825adffa40422f51dfdcde9d\", size \"311286\" in 780.22666ms" Jan 17 12:23:17.620854 containerd[1581]: time="2025-01-17T12:23:17.620777296Z" level=info msg="Pulled image \"registry.k8s.io/pause:3.8\" with image id 
\"sha256:4873874c08efc72e9729683a83ffbb7502ee729e9a5ac097723806ea7fa13517\", repo tag \"registry.k8s.io/pause:3.8\", repo digest \"registry.k8s.io/pause@sha256:9001185023633d17a2f98ff69b6ff2615b8ea02a825adffa40422f51dfdcde9d\", size \"311286\" in 786.724503ms" Jan 17 12:23:17.677545 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount2564700604.mount: Deactivated successfully. Jan 17 12:23:17.702725 kubelet[1932]: E0117 12:23:17.702291 1932 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" pod="calico-system/csi-node-driver-qzxmk" podUID="34cd60d5-9542-4f9c-a341-f3b5c6223a93" Jan 17 12:23:17.872126 containerd[1581]: time="2025-01-17T12:23:17.871821912Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Jan 17 12:23:17.872126 containerd[1581]: time="2025-01-17T12:23:17.871922241Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Jan 17 12:23:17.872126 containerd[1581]: time="2025-01-17T12:23:17.871964576Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Jan 17 12:23:17.875138 containerd[1581]: time="2025-01-17T12:23:17.874931017Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Jan 17 12:23:17.875138 containerd[1581]: time="2025-01-17T12:23:17.875072327Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Jan 17 12:23:17.875138 containerd[1581]: time="2025-01-17T12:23:17.875094066Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." 
runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Jan 17 12:23:17.877057 containerd[1581]: time="2025-01-17T12:23:17.876293608Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Jan 17 12:23:17.879140 containerd[1581]: time="2025-01-17T12:23:17.879030711Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Jan 17 12:23:18.065991 containerd[1581]: time="2025-01-17T12:23:18.065932007Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-proxy-pfrjq,Uid:b034bd4d-f8b4-4f27-bcef-af19ea6a2351,Namespace:kube-system,Attempt:0,} returns sandbox id \"7efb8723f40ac471e27cac97bd1ff2c63bd0adac776627792b69e97012a2d788\"" Jan 17 12:23:18.068341 kubelet[1932]: E0117 12:23:18.068302 1932 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 67.207.67.3 67.207.67.2 67.207.67.3" Jan 17 12:23:18.070856 containerd[1581]: time="2025-01-17T12:23:18.070804741Z" level=info msg="PullImage \"registry.k8s.io/kube-proxy:v1.29.13\"" Jan 17 12:23:18.087733 containerd[1581]: time="2025-01-17T12:23:18.087201878Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-node-km4vs,Uid:8f5ac4d3-f162-466f-b827-8117baf6aa14,Namespace:calico-system,Attempt:0,} returns sandbox id \"d7b37ad083c7ff8e0043e13f25a3ec152da8e4738a0cc1bb8aff2bb0b2a46bfb\"" Jan 17 12:23:18.090530 kubelet[1932]: E0117 12:23:18.089500 1932 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 67.207.67.3 67.207.67.2 67.207.67.3" Jan 17 12:23:18.517892 kubelet[1932]: E0117 12:23:18.517820 1932 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Jan 17 12:23:19.431777 systemd[1]: 
var-lib-containerd-tmpmounts-containerd\x2dmount2206364794.mount: Deactivated successfully. Jan 17 12:23:19.518400 kubelet[1932]: E0117 12:23:19.518298 1932 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Jan 17 12:23:19.703070 kubelet[1932]: E0117 12:23:19.702360 1932 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" pod="calico-system/csi-node-driver-qzxmk" podUID="34cd60d5-9542-4f9c-a341-f3b5c6223a93" Jan 17 12:23:20.302008 containerd[1581]: time="2025-01-17T12:23:20.301933859Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-proxy:v1.29.13\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jan 17 12:23:20.305003 containerd[1581]: time="2025-01-17T12:23:20.304900731Z" level=info msg="stop pulling image registry.k8s.io/kube-proxy:v1.29.13: active requests=0, bytes read=28620941" Jan 17 12:23:20.307760 containerd[1581]: time="2025-01-17T12:23:20.307688719Z" level=info msg="ImageCreate event name:\"sha256:f20cf1600da6cce7b7d3fdd3b5ff91243983ea8be3907cccaee1a956770a2f15\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jan 17 12:23:20.316330 containerd[1581]: time="2025-01-17T12:23:20.316215729Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-proxy@sha256:dd45846de733434501e436638a7a240f2d379bf0a6bb0404a7684e0cf52c4011\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jan 17 12:23:20.319671 containerd[1581]: time="2025-01-17T12:23:20.318644744Z" level=info msg="Pulled image \"registry.k8s.io/kube-proxy:v1.29.13\" with image id \"sha256:f20cf1600da6cce7b7d3fdd3b5ff91243983ea8be3907cccaee1a956770a2f15\", repo tag \"registry.k8s.io/kube-proxy:v1.29.13\", repo digest 
\"registry.k8s.io/kube-proxy@sha256:dd45846de733434501e436638a7a240f2d379bf0a6bb0404a7684e0cf52c4011\", size \"28619960\" in 2.247777636s" Jan 17 12:23:20.319671 containerd[1581]: time="2025-01-17T12:23:20.318715077Z" level=info msg="PullImage \"registry.k8s.io/kube-proxy:v1.29.13\" returns image reference \"sha256:f20cf1600da6cce7b7d3fdd3b5ff91243983ea8be3907cccaee1a956770a2f15\"" Jan 17 12:23:20.320423 containerd[1581]: time="2025-01-17T12:23:20.320362681Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/pod2daemon-flexvol:v3.29.1\"" Jan 17 12:23:20.322269 containerd[1581]: time="2025-01-17T12:23:20.321994321Z" level=info msg="CreateContainer within sandbox \"7efb8723f40ac471e27cac97bd1ff2c63bd0adac776627792b69e97012a2d788\" for container &ContainerMetadata{Name:kube-proxy,Attempt:0,}" Jan 17 12:23:20.385528 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount1515361506.mount: Deactivated successfully. Jan 17 12:23:20.399053 containerd[1581]: time="2025-01-17T12:23:20.398985103Z" level=info msg="CreateContainer within sandbox \"7efb8723f40ac471e27cac97bd1ff2c63bd0adac776627792b69e97012a2d788\" for &ContainerMetadata{Name:kube-proxy,Attempt:0,} returns container id \"3143aec212a1ccb92cc92b5833f2728e1235ba5b16978a19e9c2a116a1a6adf7\"" Jan 17 12:23:20.400644 containerd[1581]: time="2025-01-17T12:23:20.400551118Z" level=info msg="StartContainer for \"3143aec212a1ccb92cc92b5833f2728e1235ba5b16978a19e9c2a116a1a6adf7\"" Jan 17 12:23:20.507982 containerd[1581]: time="2025-01-17T12:23:20.507926006Z" level=info msg="StartContainer for \"3143aec212a1ccb92cc92b5833f2728e1235ba5b16978a19e9c2a116a1a6adf7\" returns successfully" Jan 17 12:23:20.518679 kubelet[1932]: E0117 12:23:20.518521 1932 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Jan 17 12:23:20.762906 kubelet[1932]: E0117 12:23:20.762849 1932 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers 
have been omitted, the applied nameserver line is: 67.207.67.3 67.207.67.2 67.207.67.3" Jan 17 12:23:20.775437 kubelet[1932]: E0117 12:23:20.775229 1932 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 17 12:23:20.775827 kubelet[1932]: W0117 12:23:20.775277 1932 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 17 12:23:20.776402 kubelet[1932]: E0117 12:23:20.776044 1932 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jan 17 12:23:20.777831 kubelet[1932]: E0117 12:23:20.777713 1932 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 17 12:23:20.777831 kubelet[1932]: W0117 12:23:20.777731 1932 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 17 12:23:20.777831 kubelet[1932]: E0117 12:23:20.777769 1932 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input"
[last FlexVolume driver-call message sequence (driver-call.go:262 unmarshal failure, driver-call.go:149 executable not found in $PATH, plugins.go:730 plugin probe error) repeated continuously from Jan 17 12:23:20.778 through Jan 17 12:23:20.907]
Jan 17 12:23:20.785219 kubelet[1932]: I0117 12:23:20.784846 1932 pod_startup_latency_tracker.go:102] "Observed pod startup duration" pod="kube-system/kube-proxy-pfrjq" podStartSLOduration=4.534853007 podStartE2EDuration="6.783803609s" podCreationTimestamp="2025-01-17 12:23:14 +0000 UTC" firstStartedPulling="2025-01-17 12:23:18.070258013 +0000 UTC m=+4.034119486" lastFinishedPulling="2025-01-17 12:23:20.319208613 +0000 UTC m=+6.283070088" observedRunningTime="2025-01-17 12:23:20.782625202 +0000 UTC m=+6.746486698" watchObservedRunningTime="2025-01-17 12:23:20.783803609 +0000 UTC m=+6.747665135"
Jan 17 12:23:21.518801 kubelet[1932]: E0117 12:23:21.518739 1932 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests"
Jan 17 12:23:21.702508 kubelet[1932]: E0117 12:23:21.701987 1932 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" pod="calico-system/csi-node-driver-qzxmk" podUID="34cd60d5-9542-4f9c-a341-f3b5c6223a93"
Jan 17 12:23:21.768278 kubelet[1932]: E0117 12:23:21.768229 1932 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 67.207.67.3 67.207.67.2 67.207.67.3"
Jan 17 12:23:21.812150 kubelet[1932]: E0117 12:23:21.811630 1932 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 17 12:23:21.812150 kubelet[1932]: W0117 12:23:21.811665 1932 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 17 12:23:21.812150 kubelet[1932]: E0117 12:23:21.811697 1932 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input"
[last FlexVolume driver-call message sequence (driver-call.go:262 unmarshal failure, driver-call.go:149 executable not found in $PATH, plugins.go:730 plugin probe error) repeated continuously from Jan 17 12:23:21.812 through Jan 17 12:23:21.916]
Jan 17 12:23:21.917060 kubelet[1932]: E0117 12:23:21.916863 1932 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 17 12:23:21.917060 kubelet[1932]: W0117 12:23:21.916886 1932 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 17 12:23:21.917060 kubelet[1932]: E0117 12:23:21.916924 1932 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Jan 17 12:23:21.917881 kubelet[1932]: E0117 12:23:21.917554 1932 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 17 12:23:21.917881 kubelet[1932]: W0117 12:23:21.917574 1932 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 17 12:23:21.917881 kubelet[1932]: E0117 12:23:21.917623 1932 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jan 17 12:23:21.918722 kubelet[1932]: E0117 12:23:21.918489 1932 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 17 12:23:21.918722 kubelet[1932]: W0117 12:23:21.918514 1932 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 17 12:23:21.918722 kubelet[1932]: E0117 12:23:21.918557 1932 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Jan 17 12:23:21.919107 kubelet[1932]: E0117 12:23:21.919046 1932 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 17 12:23:21.919107 kubelet[1932]: W0117 12:23:21.919063 1932 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 17 12:23:21.919952 kubelet[1932]: E0117 12:23:21.919492 1932 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jan 17 12:23:21.919952 kubelet[1932]: E0117 12:23:21.919734 1932 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 17 12:23:21.919952 kubelet[1932]: W0117 12:23:21.919751 1932 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 17 12:23:21.919952 kubelet[1932]: E0117 12:23:21.919794 1932 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Jan 17 12:23:21.921134 kubelet[1932]: E0117 12:23:21.920737 1932 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 17 12:23:21.921134 kubelet[1932]: W0117 12:23:21.920762 1932 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 17 12:23:21.921134 kubelet[1932]: E0117 12:23:21.920793 1932 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jan 17 12:23:21.921347 kubelet[1932]: E0117 12:23:21.921175 1932 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 17 12:23:21.921347 kubelet[1932]: W0117 12:23:21.921190 1932 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 17 12:23:21.921347 kubelet[1932]: E0117 12:23:21.921225 1932 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Jan 17 12:23:21.921539 kubelet[1932]: E0117 12:23:21.921493 1932 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 17 12:23:21.921539 kubelet[1932]: W0117 12:23:21.921503 1932 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 17 12:23:21.921539 kubelet[1932]: E0117 12:23:21.921530 1932 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jan 17 12:23:21.921985 kubelet[1932]: E0117 12:23:21.921781 1932 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 17 12:23:21.921985 kubelet[1932]: W0117 12:23:21.921801 1932 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 17 12:23:21.921985 kubelet[1932]: E0117 12:23:21.921820 1932 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Jan 17 12:23:21.922682 kubelet[1932]: E0117 12:23:21.922651 1932 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 17 12:23:21.922682 kubelet[1932]: W0117 12:23:21.922671 1932 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 17 12:23:21.922981 kubelet[1932]: E0117 12:23:21.922694 1932 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jan 17 12:23:21.970282 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount235872162.mount: Deactivated successfully. Jan 17 12:23:22.171406 containerd[1581]: time="2025-01-17T12:23:22.170396977Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/pod2daemon-flexvol:v3.29.1\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jan 17 12:23:22.173851 containerd[1581]: time="2025-01-17T12:23:22.173509017Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/pod2daemon-flexvol:v3.29.1: active requests=0, bytes read=6855343" Jan 17 12:23:22.179409 containerd[1581]: time="2025-01-17T12:23:22.177640623Z" level=info msg="ImageCreate event name:\"sha256:2b7452b763ec8833ca0386ada5fd066e552a9b3b02b8538a5e34cc3d6d3840a6\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jan 17 12:23:22.185059 containerd[1581]: time="2025-01-17T12:23:22.184962620Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/pod2daemon-flexvol@sha256:a63f8b4ff531912d12d143664eb263fdbc6cd7b3ff4aa777dfb6e318a090462c\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jan 17 12:23:22.186354 containerd[1581]: time="2025-01-17T12:23:22.185924848Z" level=info msg="Pulled image \"ghcr.io/flatcar/calico/pod2daemon-flexvol:v3.29.1\" with 
image id \"sha256:2b7452b763ec8833ca0386ada5fd066e552a9b3b02b8538a5e34cc3d6d3840a6\", repo tag \"ghcr.io/flatcar/calico/pod2daemon-flexvol:v3.29.1\", repo digest \"ghcr.io/flatcar/calico/pod2daemon-flexvol@sha256:a63f8b4ff531912d12d143664eb263fdbc6cd7b3ff4aa777dfb6e318a090462c\", size \"6855165\" in 1.865493708s" Jan 17 12:23:22.186354 containerd[1581]: time="2025-01-17T12:23:22.185991252Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/pod2daemon-flexvol:v3.29.1\" returns image reference \"sha256:2b7452b763ec8833ca0386ada5fd066e552a9b3b02b8538a5e34cc3d6d3840a6\"" Jan 17 12:23:22.189644 containerd[1581]: time="2025-01-17T12:23:22.189446661Z" level=info msg="CreateContainer within sandbox \"d7b37ad083c7ff8e0043e13f25a3ec152da8e4738a0cc1bb8aff2bb0b2a46bfb\" for container &ContainerMetadata{Name:flexvol-driver,Attempt:0,}" Jan 17 12:23:22.222798 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount1604337472.mount: Deactivated successfully. Jan 17 12:23:22.235297 containerd[1581]: time="2025-01-17T12:23:22.235206827Z" level=info msg="CreateContainer within sandbox \"d7b37ad083c7ff8e0043e13f25a3ec152da8e4738a0cc1bb8aff2bb0b2a46bfb\" for &ContainerMetadata{Name:flexvol-driver,Attempt:0,} returns container id \"5d544fbec6ee501d81eae661be5d57ddb330ead2172fc0b30eb407584006ef44\"" Jan 17 12:23:22.238427 containerd[1581]: time="2025-01-17T12:23:22.236634554Z" level=info msg="StartContainer for \"5d544fbec6ee501d81eae661be5d57ddb330ead2172fc0b30eb407584006ef44\"" Jan 17 12:23:22.372887 containerd[1581]: time="2025-01-17T12:23:22.372816990Z" level=info msg="StartContainer for \"5d544fbec6ee501d81eae661be5d57ddb330ead2172fc0b30eb407584006ef44\" returns successfully" Jan 17 12:23:22.525841 kubelet[1932]: E0117 12:23:22.519396 1932 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Jan 17 12:23:22.547400 containerd[1581]: time="2025-01-17T12:23:22.546943819Z" level=info msg="shim disconnected" 
id=5d544fbec6ee501d81eae661be5d57ddb330ead2172fc0b30eb407584006ef44 namespace=k8s.io Jan 17 12:23:22.547400 containerd[1581]: time="2025-01-17T12:23:22.547126368Z" level=warning msg="cleaning up after shim disconnected" id=5d544fbec6ee501d81eae661be5d57ddb330ead2172fc0b30eb407584006ef44 namespace=k8s.io Jan 17 12:23:22.547400 containerd[1581]: time="2025-01-17T12:23:22.547164235Z" level=info msg="cleaning up dead shim" namespace=k8s.io Jan 17 12:23:22.569439 containerd[1581]: time="2025-01-17T12:23:22.568217455Z" level=warning msg="cleanup warnings time=\"2025-01-17T12:23:22Z\" level=warning msg=\"failed to remove runc container\" error=\"runc did not terminate successfully: exit status 255: \" runtime=io.containerd.runc.v2\n" namespace=k8s.io Jan 17 12:23:22.772503 kubelet[1932]: E0117 12:23:22.772344 1932 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 67.207.67.3 67.207.67.2 67.207.67.3" Jan 17 12:23:22.774461 containerd[1581]: time="2025-01-17T12:23:22.774051980Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/cni:v3.29.1\"" Jan 17 12:23:22.908096 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-5d544fbec6ee501d81eae661be5d57ddb330ead2172fc0b30eb407584006ef44-rootfs.mount: Deactivated successfully. Jan 17 12:23:23.009840 systemd-resolved[1481]: Using degraded feature set UDP instead of UDP+EDNS0 for DNS server 67.207.67.2. 
Jan 17 12:23:23.520738 kubelet[1932]: E0117 12:23:23.520652 1932 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Jan 17 12:23:23.702009 kubelet[1932]: E0117 12:23:23.701898 1932 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" pod="calico-system/csi-node-driver-qzxmk" podUID="34cd60d5-9542-4f9c-a341-f3b5c6223a93" Jan 17 12:23:24.521336 kubelet[1932]: E0117 12:23:24.521231 1932 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Jan 17 12:23:25.523284 kubelet[1932]: E0117 12:23:25.523099 1932 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Jan 17 12:23:25.701939 kubelet[1932]: E0117 12:23:25.701724 1932 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" pod="calico-system/csi-node-driver-qzxmk" podUID="34cd60d5-9542-4f9c-a341-f3b5c6223a93" Jan 17 12:23:26.524178 kubelet[1932]: E0117 12:23:26.524091 1932 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Jan 17 12:23:27.518161 containerd[1581]: time="2025-01-17T12:23:27.518088878Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/cni:v3.29.1\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jan 17 12:23:27.524425 kubelet[1932]: E0117 12:23:27.524312 1932 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Jan 17 12:23:27.528993 containerd[1581]: time="2025-01-17T12:23:27.528886839Z" level=info msg="stop pulling image 
ghcr.io/flatcar/calico/cni:v3.29.1: active requests=0, bytes read=96154154" Jan 17 12:23:27.547480 containerd[1581]: time="2025-01-17T12:23:27.547342676Z" level=info msg="ImageCreate event name:\"sha256:7dd6ea186aba0d7a1791a79d426fe854527ca95192b26bbd19e8baf8373f7d0e\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jan 17 12:23:27.572625 containerd[1581]: time="2025-01-17T12:23:27.572508193Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/cni@sha256:21e759d51c90dfb34fc1397dc180dd3a3fb564c2b0580d2f61ffe108f2a3c94b\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jan 17 12:23:27.574841 containerd[1581]: time="2025-01-17T12:23:27.573939465Z" level=info msg="Pulled image \"ghcr.io/flatcar/calico/cni:v3.29.1\" with image id \"sha256:7dd6ea186aba0d7a1791a79d426fe854527ca95192b26bbd19e8baf8373f7d0e\", repo tag \"ghcr.io/flatcar/calico/cni:v3.29.1\", repo digest \"ghcr.io/flatcar/calico/cni@sha256:21e759d51c90dfb34fc1397dc180dd3a3fb564c2b0580d2f61ffe108f2a3c94b\", size \"97647238\" in 4.799822043s" Jan 17 12:23:27.574841 containerd[1581]: time="2025-01-17T12:23:27.574004674Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/cni:v3.29.1\" returns image reference \"sha256:7dd6ea186aba0d7a1791a79d426fe854527ca95192b26bbd19e8baf8373f7d0e\"" Jan 17 12:23:27.577609 containerd[1581]: time="2025-01-17T12:23:27.577270340Z" level=info msg="CreateContainer within sandbox \"d7b37ad083c7ff8e0043e13f25a3ec152da8e4738a0cc1bb8aff2bb0b2a46bfb\" for container &ContainerMetadata{Name:install-cni,Attempt:0,}" Jan 17 12:23:27.696809 containerd[1581]: time="2025-01-17T12:23:27.696730827Z" level=info msg="CreateContainer within sandbox \"d7b37ad083c7ff8e0043e13f25a3ec152da8e4738a0cc1bb8aff2bb0b2a46bfb\" for &ContainerMetadata{Name:install-cni,Attempt:0,} returns container id \"4ab40bc4ea9ac63921816054ab9047b03a2a1da2bdd9a01740258ea1c626b718\"" Jan 17 12:23:27.698568 containerd[1581]: time="2025-01-17T12:23:27.698227332Z" level=info msg="StartContainer 
for \"4ab40bc4ea9ac63921816054ab9047b03a2a1da2bdd9a01740258ea1c626b718\"" Jan 17 12:23:27.702309 kubelet[1932]: E0117 12:23:27.702254 1932 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" pod="calico-system/csi-node-driver-qzxmk" podUID="34cd60d5-9542-4f9c-a341-f3b5c6223a93" Jan 17 12:23:27.765453 systemd[1]: run-containerd-runc-k8s.io-4ab40bc4ea9ac63921816054ab9047b03a2a1da2bdd9a01740258ea1c626b718-runc.Ba2bnz.mount: Deactivated successfully. Jan 17 12:23:27.826566 containerd[1581]: time="2025-01-17T12:23:27.824668435Z" level=info msg="StartContainer for \"4ab40bc4ea9ac63921816054ab9047b03a2a1da2bdd9a01740258ea1c626b718\" returns successfully" Jan 17 12:23:28.525336 kubelet[1932]: E0117 12:23:28.525272 1932 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Jan 17 12:23:28.808585 kubelet[1932]: E0117 12:23:28.807172 1932 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 67.207.67.3 67.207.67.2 67.207.67.3" Jan 17 12:23:28.816305 kubelet[1932]: I0117 12:23:28.816235 1932 kubelet_node_status.go:497] "Fast updating node status as it just became ready" Jan 17 12:23:28.846530 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-4ab40bc4ea9ac63921816054ab9047b03a2a1da2bdd9a01740258ea1c626b718-rootfs.mount: Deactivated successfully. 
Jan 17 12:23:28.890850 containerd[1581]: time="2025-01-17T12:23:28.890594315Z" level=info msg="shim disconnected" id=4ab40bc4ea9ac63921816054ab9047b03a2a1da2bdd9a01740258ea1c626b718 namespace=k8s.io Jan 17 12:23:28.891524 containerd[1581]: time="2025-01-17T12:23:28.890844109Z" level=warning msg="cleaning up after shim disconnected" id=4ab40bc4ea9ac63921816054ab9047b03a2a1da2bdd9a01740258ea1c626b718 namespace=k8s.io Jan 17 12:23:28.891524 containerd[1581]: time="2025-01-17T12:23:28.891053283Z" level=info msg="cleaning up dead shim" namespace=k8s.io Jan 17 12:23:29.525628 kubelet[1932]: E0117 12:23:29.525531 1932 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Jan 17 12:23:29.707140 containerd[1581]: time="2025-01-17T12:23:29.705576390Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:csi-node-driver-qzxmk,Uid:34cd60d5-9542-4f9c-a341-f3b5c6223a93,Namespace:calico-system,Attempt:0,}" Jan 17 12:23:29.813077 kubelet[1932]: E0117 12:23:29.812927 1932 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 67.207.67.3 67.207.67.2 67.207.67.3" Jan 17 12:23:29.817776 containerd[1581]: time="2025-01-17T12:23:29.817720723Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/node:v3.29.1\"" Jan 17 12:23:29.824813 systemd-resolved[1481]: Using degraded feature set TCP instead of UDP for DNS server 67.207.67.2. 
Jan 17 12:23:29.828496 containerd[1581]: time="2025-01-17T12:23:29.828402321Z" level=error msg="Failed to destroy network for sandbox \"0250f9c017126e270dfb5bd213f3fb19a925545ec5e13961c4e6c5bcba04f13e\"" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jan 17 12:23:29.830843 containerd[1581]: time="2025-01-17T12:23:29.830778500Z" level=error msg="encountered an error cleaning up failed sandbox \"0250f9c017126e270dfb5bd213f3fb19a925545ec5e13961c4e6c5bcba04f13e\", marking sandbox state as SANDBOX_UNKNOWN" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jan 17 12:23:29.831013 containerd[1581]: time="2025-01-17T12:23:29.830882760Z" level=error msg="RunPodSandbox for &PodSandboxMetadata{Name:csi-node-driver-qzxmk,Uid:34cd60d5-9542-4f9c-a341-f3b5c6223a93,Namespace:calico-system,Attempt:0,} failed, error" error="failed to setup network for sandbox \"0250f9c017126e270dfb5bd213f3fb19a925545ec5e13961c4e6c5bcba04f13e\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jan 17 12:23:29.832653 kubelet[1932]: E0117 12:23:29.832609 1932 remote_runtime.go:193] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"0250f9c017126e270dfb5bd213f3fb19a925545ec5e13961c4e6c5bcba04f13e\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jan 17 12:23:29.832801 kubelet[1932]: E0117 12:23:29.832698 1932 kuberuntime_sandbox.go:72] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = 
failed to setup network for sandbox \"0250f9c017126e270dfb5bd213f3fb19a925545ec5e13961c4e6c5bcba04f13e\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-system/csi-node-driver-qzxmk" Jan 17 12:23:29.832801 kubelet[1932]: E0117 12:23:29.832729 1932 kuberuntime_manager.go:1172] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"0250f9c017126e270dfb5bd213f3fb19a925545ec5e13961c4e6c5bcba04f13e\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-system/csi-node-driver-qzxmk" Jan 17 12:23:29.833022 kubelet[1932]: E0117 12:23:29.832814 1932 pod_workers.go:1298] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"csi-node-driver-qzxmk_calico-system(34cd60d5-9542-4f9c-a341-f3b5c6223a93)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"csi-node-driver-qzxmk_calico-system(34cd60d5-9542-4f9c-a341-f3b5c6223a93)\\\": rpc error: code = Unknown desc = failed to setup network for sandbox \\\"0250f9c017126e270dfb5bd213f3fb19a925545ec5e13961c4e6c5bcba04f13e\\\": plugin type=\\\"calico\\\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="calico-system/csi-node-driver-qzxmk" podUID="34cd60d5-9542-4f9c-a341-f3b5c6223a93" Jan 17 12:23:29.833963 systemd[1]: run-containerd-io.containerd.grpc.v1.cri-sandboxes-0250f9c017126e270dfb5bd213f3fb19a925545ec5e13961c4e6c5bcba04f13e-shm.mount: Deactivated successfully. 
Jan 17 12:23:30.526184 kubelet[1932]: E0117 12:23:30.526082 1932 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Jan 17 12:23:30.816555 kubelet[1932]: I0117 12:23:30.815354 1932 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="0250f9c017126e270dfb5bd213f3fb19a925545ec5e13961c4e6c5bcba04f13e" Jan 17 12:23:30.817482 containerd[1581]: time="2025-01-17T12:23:30.817347164Z" level=info msg="StopPodSandbox for \"0250f9c017126e270dfb5bd213f3fb19a925545ec5e13961c4e6c5bcba04f13e\"" Jan 17 12:23:30.818051 containerd[1581]: time="2025-01-17T12:23:30.817731544Z" level=info msg="Ensure that sandbox 0250f9c017126e270dfb5bd213f3fb19a925545ec5e13961c4e6c5bcba04f13e in task-service has been cleanup successfully" Jan 17 12:23:30.862723 containerd[1581]: time="2025-01-17T12:23:30.862629115Z" level=error msg="StopPodSandbox for \"0250f9c017126e270dfb5bd213f3fb19a925545ec5e13961c4e6c5bcba04f13e\" failed" error="failed to destroy network for sandbox \"0250f9c017126e270dfb5bd213f3fb19a925545ec5e13961c4e6c5bcba04f13e\": plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jan 17 12:23:30.863461 kubelet[1932]: E0117 12:23:30.863204 1932 remote_runtime.go:222] "StopPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to destroy network for sandbox \"0250f9c017126e270dfb5bd213f3fb19a925545ec5e13961c4e6c5bcba04f13e\": plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" podSandboxID="0250f9c017126e270dfb5bd213f3fb19a925545ec5e13961c4e6c5bcba04f13e" Jan 17 12:23:30.863461 kubelet[1932]: E0117 12:23:30.863306 1932 kuberuntime_manager.go:1381] "Failed to stop sandbox" 
podSandboxID={"Type":"containerd","ID":"0250f9c017126e270dfb5bd213f3fb19a925545ec5e13961c4e6c5bcba04f13e"} Jan 17 12:23:30.863461 kubelet[1932]: E0117 12:23:30.863386 1932 kuberuntime_manager.go:1081] "killPodWithSyncResult failed" err="failed to \"KillPodSandbox\" for \"34cd60d5-9542-4f9c-a341-f3b5c6223a93\" with KillPodSandboxError: \"rpc error: code = Unknown desc = failed to destroy network for sandbox \\\"0250f9c017126e270dfb5bd213f3fb19a925545ec5e13961c4e6c5bcba04f13e\\\": plugin type=\\\"calico\\\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" Jan 17 12:23:30.863461 kubelet[1932]: E0117 12:23:30.863432 1932 pod_workers.go:1298] "Error syncing pod, skipping" err="failed to \"KillPodSandbox\" for \"34cd60d5-9542-4f9c-a341-f3b5c6223a93\" with KillPodSandboxError: \"rpc error: code = Unknown desc = failed to destroy network for sandbox \\\"0250f9c017126e270dfb5bd213f3fb19a925545ec5e13961c4e6c5bcba04f13e\\\": plugin type=\\\"calico\\\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="calico-system/csi-node-driver-qzxmk" podUID="34cd60d5-9542-4f9c-a341-f3b5c6223a93" Jan 17 12:23:31.528707 kubelet[1932]: E0117 12:23:31.526679 1932 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Jan 17 12:23:32.527063 kubelet[1932]: E0117 12:23:32.526872 1932 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Jan 17 12:23:33.142432 kubelet[1932]: I0117 12:23:33.141433 1932 topology_manager.go:215] "Topology Admit Handler" podUID="10c50357-38da-4ca5-9a41-fc973e78f347" podNamespace="default" podName="nginx-deployment-6d5f899847-jx6fq" Jan 17 12:23:33.300646 kubelet[1932]: I0117 12:23:33.300572 1932 reconciler_common.go:258] 
"operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-trz5k\" (UniqueName: \"kubernetes.io/projected/10c50357-38da-4ca5-9a41-fc973e78f347-kube-api-access-trz5k\") pod \"nginx-deployment-6d5f899847-jx6fq\" (UID: \"10c50357-38da-4ca5-9a41-fc973e78f347\") " pod="default/nginx-deployment-6d5f899847-jx6fq" Jan 17 12:23:33.448514 containerd[1581]: time="2025-01-17T12:23:33.448155683Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:nginx-deployment-6d5f899847-jx6fq,Uid:10c50357-38da-4ca5-9a41-fc973e78f347,Namespace:default,Attempt:0,}" Jan 17 12:23:33.527968 kubelet[1932]: E0117 12:23:33.527879 1932 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Jan 17 12:23:33.664005 containerd[1581]: time="2025-01-17T12:23:33.663744092Z" level=error msg="Failed to destroy network for sandbox \"4187182d907aba21b008b15174667544e9a4ec688cf61f9451555023461033e8\"" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jan 17 12:23:33.666440 containerd[1581]: time="2025-01-17T12:23:33.664772290Z" level=error msg="encountered an error cleaning up failed sandbox \"4187182d907aba21b008b15174667544e9a4ec688cf61f9451555023461033e8\", marking sandbox state as SANDBOX_UNKNOWN" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jan 17 12:23:33.666440 containerd[1581]: time="2025-01-17T12:23:33.664867915Z" level=error msg="RunPodSandbox for &PodSandboxMetadata{Name:nginx-deployment-6d5f899847-jx6fq,Uid:10c50357-38da-4ca5-9a41-fc973e78f347,Namespace:default,Attempt:0,} failed, error" error="failed to setup network for sandbox \"4187182d907aba21b008b15174667544e9a4ec688cf61f9451555023461033e8\": plugin type=\"calico\" failed 
(add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jan 17 12:23:33.668186 kubelet[1932]: E0117 12:23:33.666789 1932 remote_runtime.go:193] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"4187182d907aba21b008b15174667544e9a4ec688cf61f9451555023461033e8\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jan 17 12:23:33.668186 kubelet[1932]: E0117 12:23:33.666864 1932 kuberuntime_sandbox.go:72] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"4187182d907aba21b008b15174667544e9a4ec688cf61f9451555023461033e8\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="default/nginx-deployment-6d5f899847-jx6fq" Jan 17 12:23:33.668186 kubelet[1932]: E0117 12:23:33.666898 1932 kuberuntime_manager.go:1172] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"4187182d907aba21b008b15174667544e9a4ec688cf61f9451555023461033e8\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="default/nginx-deployment-6d5f899847-jx6fq" Jan 17 12:23:33.669629 kubelet[1932]: E0117 12:23:33.666987 1932 pod_workers.go:1298] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"nginx-deployment-6d5f899847-jx6fq_default(10c50357-38da-4ca5-9a41-fc973e78f347)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"nginx-deployment-6d5f899847-jx6fq_default(10c50357-38da-4ca5-9a41-fc973e78f347)\\\": rpc error: 
code = Unknown desc = failed to setup network for sandbox \\\"4187182d907aba21b008b15174667544e9a4ec688cf61f9451555023461033e8\\\": plugin type=\\\"calico\\\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="default/nginx-deployment-6d5f899847-jx6fq" podUID="10c50357-38da-4ca5-9a41-fc973e78f347" Jan 17 12:23:33.669217 systemd[1]: run-containerd-io.containerd.grpc.v1.cri-sandboxes-4187182d907aba21b008b15174667544e9a4ec688cf61f9451555023461033e8-shm.mount: Deactivated successfully. Jan 17 12:23:33.825110 kubelet[1932]: I0117 12:23:33.824930 1932 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="4187182d907aba21b008b15174667544e9a4ec688cf61f9451555023461033e8" Jan 17 12:23:33.828237 containerd[1581]: time="2025-01-17T12:23:33.827359585Z" level=info msg="StopPodSandbox for \"4187182d907aba21b008b15174667544e9a4ec688cf61f9451555023461033e8\"" Jan 17 12:23:33.828237 containerd[1581]: time="2025-01-17T12:23:33.827760080Z" level=info msg="Ensure that sandbox 4187182d907aba21b008b15174667544e9a4ec688cf61f9451555023461033e8 in task-service has been cleanup successfully" Jan 17 12:23:33.925617 containerd[1581]: time="2025-01-17T12:23:33.925476988Z" level=error msg="StopPodSandbox for \"4187182d907aba21b008b15174667544e9a4ec688cf61f9451555023461033e8\" failed" error="failed to destroy network for sandbox \"4187182d907aba21b008b15174667544e9a4ec688cf61f9451555023461033e8\": plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jan 17 12:23:33.926087 kubelet[1932]: E0117 12:23:33.926031 1932 remote_runtime.go:222] "StopPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to destroy network for sandbox \"4187182d907aba21b008b15174667544e9a4ec688cf61f9451555023461033e8\": plugin 
type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" podSandboxID="4187182d907aba21b008b15174667544e9a4ec688cf61f9451555023461033e8" Jan 17 12:23:33.926320 kubelet[1932]: E0117 12:23:33.926097 1932 kuberuntime_manager.go:1381] "Failed to stop sandbox" podSandboxID={"Type":"containerd","ID":"4187182d907aba21b008b15174667544e9a4ec688cf61f9451555023461033e8"} Jan 17 12:23:33.926320 kubelet[1932]: E0117 12:23:33.926173 1932 kuberuntime_manager.go:1081] "killPodWithSyncResult failed" err="failed to \"KillPodSandbox\" for \"10c50357-38da-4ca5-9a41-fc973e78f347\" with KillPodSandboxError: \"rpc error: code = Unknown desc = failed to destroy network for sandbox \\\"4187182d907aba21b008b15174667544e9a4ec688cf61f9451555023461033e8\\\": plugin type=\\\"calico\\\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" Jan 17 12:23:33.926320 kubelet[1932]: E0117 12:23:33.926223 1932 pod_workers.go:1298] "Error syncing pod, skipping" err="failed to \"KillPodSandbox\" for \"10c50357-38da-4ca5-9a41-fc973e78f347\" with KillPodSandboxError: \"rpc error: code = Unknown desc = failed to destroy network for sandbox \\\"4187182d907aba21b008b15174667544e9a4ec688cf61f9451555023461033e8\\\": plugin type=\\\"calico\\\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="default/nginx-deployment-6d5f899847-jx6fq" podUID="10c50357-38da-4ca5-9a41-fc973e78f347" Jan 17 12:23:34.515455 kubelet[1932]: E0117 12:23:34.515354 1932 file.go:104] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Jan 17 12:23:34.528987 kubelet[1932]: E0117 12:23:34.528897 1932 file_linux.go:61] "Unable to read config path" err="path does not 
exist, ignoring" path="/etc/kubernetes/manifests" Jan 17 12:23:35.530142 kubelet[1932]: E0117 12:23:35.530073 1932 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Jan 17 12:23:36.530888 kubelet[1932]: E0117 12:23:36.530811 1932 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Jan 17 12:23:37.053669 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount788416897.mount: Deactivated successfully. Jan 17 12:23:37.126084 containerd[1581]: time="2025-01-17T12:23:37.124782439Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/node:v3.29.1\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jan 17 12:23:37.127758 containerd[1581]: time="2025-01-17T12:23:37.127656549Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/node:v3.29.1: active requests=0, bytes read=142742010" Jan 17 12:23:37.131628 containerd[1581]: time="2025-01-17T12:23:37.131520008Z" level=info msg="ImageCreate event name:\"sha256:feb26d4585d68e875d9bd9bd6c27ea9f2d5c9ed9ef70f8b8cb0ebb0559a1d664\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jan 17 12:23:37.137727 containerd[1581]: time="2025-01-17T12:23:37.137659142Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/node@sha256:99c3917516efe1f807a0cfdf2d14b628b7c5cc6bd8a9ee5a253154f31756bea1\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jan 17 12:23:37.139318 containerd[1581]: time="2025-01-17T12:23:37.139091223Z" level=info msg="Pulled image \"ghcr.io/flatcar/calico/node:v3.29.1\" with image id \"sha256:feb26d4585d68e875d9bd9bd6c27ea9f2d5c9ed9ef70f8b8cb0ebb0559a1d664\", repo tag \"ghcr.io/flatcar/calico/node:v3.29.1\", repo digest \"ghcr.io/flatcar/calico/node@sha256:99c3917516efe1f807a0cfdf2d14b628b7c5cc6bd8a9ee5a253154f31756bea1\", size \"142741872\" in 7.319820586s" Jan 17 12:23:37.139318 containerd[1581]: 
time="2025-01-17T12:23:37.139175963Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/node:v3.29.1\" returns image reference \"sha256:feb26d4585d68e875d9bd9bd6c27ea9f2d5c9ed9ef70f8b8cb0ebb0559a1d664\"" Jan 17 12:23:37.186704 containerd[1581]: time="2025-01-17T12:23:37.186633128Z" level=info msg="CreateContainer within sandbox \"d7b37ad083c7ff8e0043e13f25a3ec152da8e4738a0cc1bb8aff2bb0b2a46bfb\" for container &ContainerMetadata{Name:calico-node,Attempt:0,}" Jan 17 12:23:37.228542 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount413637607.mount: Deactivated successfully. Jan 17 12:23:37.236151 containerd[1581]: time="2025-01-17T12:23:37.236040317Z" level=info msg="CreateContainer within sandbox \"d7b37ad083c7ff8e0043e13f25a3ec152da8e4738a0cc1bb8aff2bb0b2a46bfb\" for &ContainerMetadata{Name:calico-node,Attempt:0,} returns container id \"13a45b84abfa8a7b5bd435c873e1d520af3f122ae2c550c95c7433feac12d74e\"" Jan 17 12:23:37.237708 containerd[1581]: time="2025-01-17T12:23:37.237340028Z" level=info msg="StartContainer for \"13a45b84abfa8a7b5bd435c873e1d520af3f122ae2c550c95c7433feac12d74e\"" Jan 17 12:23:37.401957 containerd[1581]: time="2025-01-17T12:23:37.401887943Z" level=info msg="StartContainer for \"13a45b84abfa8a7b5bd435c873e1d520af3f122ae2c550c95c7433feac12d74e\" returns successfully" Jan 17 12:23:37.530752 kernel: wireguard: WireGuard 1.0.0 loaded. See www.wireguard.com for information. Jan 17 12:23:37.530979 kernel: wireguard: Copyright (C) 2015-2019 Jason A. Donenfeld . All Rights Reserved. 
Jan 17 12:23:37.532209 kubelet[1932]: E0117 12:23:37.531818 1932 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Jan 17 12:23:37.846619 kubelet[1932]: E0117 12:23:37.846229 1932 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 67.207.67.3 67.207.67.2 67.207.67.3" Jan 17 12:23:38.532182 kubelet[1932]: E0117 12:23:38.532090 1932 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Jan 17 12:23:38.844898 kubelet[1932]: E0117 12:23:38.844749 1932 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 67.207.67.3 67.207.67.2 67.207.67.3" Jan 17 12:23:39.428435 kernel: bpftool[2774]: memfd_create() called without MFD_EXEC or MFD_NOEXEC_SEAL set Jan 17 12:23:39.533165 kubelet[1932]: E0117 12:23:39.533079 1932 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Jan 17 12:23:39.803982 systemd-networkd[1221]: vxlan.calico: Link UP Jan 17 12:23:39.803993 systemd-networkd[1221]: vxlan.calico: Gained carrier Jan 17 12:23:40.534068 kubelet[1932]: E0117 12:23:40.533981 1932 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Jan 17 12:23:41.534850 kubelet[1932]: E0117 12:23:41.534776 1932 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Jan 17 12:23:41.569704 systemd-networkd[1221]: vxlan.calico: Gained IPv6LL Jan 17 12:23:42.535812 kubelet[1932]: E0117 12:23:42.535665 1932 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Jan 17 12:23:43.536037 kubelet[1932]: E0117 12:23:43.535961 1932 file_linux.go:61] 
"Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Jan 17 12:23:44.537057 kubelet[1932]: E0117 12:23:44.536979 1932 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Jan 17 12:23:44.703652 containerd[1581]: time="2025-01-17T12:23:44.703598468Z" level=info msg="StopPodSandbox for \"0250f9c017126e270dfb5bd213f3fb19a925545ec5e13961c4e6c5bcba04f13e\"" Jan 17 12:23:44.866720 kubelet[1932]: I0117 12:23:44.865823 1932 pod_startup_latency_tracker.go:102] "Observed pod startup duration" pod="calico-system/calico-node-km4vs" podStartSLOduration=11.816460489 podStartE2EDuration="30.86575689s" podCreationTimestamp="2025-01-17 12:23:14 +0000 UTC" firstStartedPulling="2025-01-17 12:23:18.090485937 +0000 UTC m=+4.054347426" lastFinishedPulling="2025-01-17 12:23:37.139782332 +0000 UTC m=+23.103643827" observedRunningTime="2025-01-17 12:23:37.871814789 +0000 UTC m=+23.835676292" watchObservedRunningTime="2025-01-17 12:23:44.86575689 +0000 UTC m=+30.829618385" Jan 17 12:23:44.964972 containerd[1581]: 2025-01-17 12:23:44.865 [INFO][2859] cni-plugin/k8s.go 608: Cleaning up netns ContainerID="0250f9c017126e270dfb5bd213f3fb19a925545ec5e13961c4e6c5bcba04f13e" Jan 17 12:23:44.964972 containerd[1581]: 2025-01-17 12:23:44.866 [INFO][2859] cni-plugin/dataplane_linux.go 559: Deleting workload's device in netns. ContainerID="0250f9c017126e270dfb5bd213f3fb19a925545ec5e13961c4e6c5bcba04f13e" iface="eth0" netns="/var/run/netns/cni-eae1773e-5a1d-eef3-ef69-9f554559080d" Jan 17 12:23:44.964972 containerd[1581]: 2025-01-17 12:23:44.866 [INFO][2859] cni-plugin/dataplane_linux.go 570: Entered netns, deleting veth. 
ContainerID="0250f9c017126e270dfb5bd213f3fb19a925545ec5e13961c4e6c5bcba04f13e" iface="eth0" netns="/var/run/netns/cni-eae1773e-5a1d-eef3-ef69-9f554559080d" Jan 17 12:23:44.964972 containerd[1581]: 2025-01-17 12:23:44.868 [INFO][2859] cni-plugin/dataplane_linux.go 597: Workload's veth was already gone. Nothing to do. ContainerID="0250f9c017126e270dfb5bd213f3fb19a925545ec5e13961c4e6c5bcba04f13e" iface="eth0" netns="/var/run/netns/cni-eae1773e-5a1d-eef3-ef69-9f554559080d" Jan 17 12:23:44.964972 containerd[1581]: 2025-01-17 12:23:44.868 [INFO][2859] cni-plugin/k8s.go 615: Releasing IP address(es) ContainerID="0250f9c017126e270dfb5bd213f3fb19a925545ec5e13961c4e6c5bcba04f13e" Jan 17 12:23:44.964972 containerd[1581]: 2025-01-17 12:23:44.868 [INFO][2859] cni-plugin/utils.go 188: Calico CNI releasing IP address ContainerID="0250f9c017126e270dfb5bd213f3fb19a925545ec5e13961c4e6c5bcba04f13e" Jan 17 12:23:44.964972 containerd[1581]: 2025-01-17 12:23:44.942 [INFO][2865] ipam/ipam_plugin.go 412: Releasing address using handleID ContainerID="0250f9c017126e270dfb5bd213f3fb19a925545ec5e13961c4e6c5bcba04f13e" HandleID="k8s-pod-network.0250f9c017126e270dfb5bd213f3fb19a925545ec5e13961c4e6c5bcba04f13e" Workload="146.190.50.84-k8s-csi--node--driver--qzxmk-eth0" Jan 17 12:23:44.964972 containerd[1581]: 2025-01-17 12:23:44.943 [INFO][2865] ipam/ipam_plugin.go 353: About to acquire host-wide IPAM lock. Jan 17 12:23:44.964972 containerd[1581]: 2025-01-17 12:23:44.943 [INFO][2865] ipam/ipam_plugin.go 368: Acquired host-wide IPAM lock. Jan 17 12:23:44.964972 containerd[1581]: 2025-01-17 12:23:44.956 [WARNING][2865] ipam/ipam_plugin.go 429: Asked to release address but it doesn't exist. 
Ignoring ContainerID="0250f9c017126e270dfb5bd213f3fb19a925545ec5e13961c4e6c5bcba04f13e" HandleID="k8s-pod-network.0250f9c017126e270dfb5bd213f3fb19a925545ec5e13961c4e6c5bcba04f13e" Workload="146.190.50.84-k8s-csi--node--driver--qzxmk-eth0" Jan 17 12:23:44.964972 containerd[1581]: 2025-01-17 12:23:44.956 [INFO][2865] ipam/ipam_plugin.go 440: Releasing address using workloadID ContainerID="0250f9c017126e270dfb5bd213f3fb19a925545ec5e13961c4e6c5bcba04f13e" HandleID="k8s-pod-network.0250f9c017126e270dfb5bd213f3fb19a925545ec5e13961c4e6c5bcba04f13e" Workload="146.190.50.84-k8s-csi--node--driver--qzxmk-eth0" Jan 17 12:23:44.964972 containerd[1581]: 2025-01-17 12:23:44.959 [INFO][2865] ipam/ipam_plugin.go 374: Released host-wide IPAM lock. Jan 17 12:23:44.964972 containerd[1581]: 2025-01-17 12:23:44.962 [INFO][2859] cni-plugin/k8s.go 621: Teardown processing complete. ContainerID="0250f9c017126e270dfb5bd213f3fb19a925545ec5e13961c4e6c5bcba04f13e" Jan 17 12:23:44.968747 containerd[1581]: time="2025-01-17T12:23:44.968435420Z" level=info msg="TearDown network for sandbox \"0250f9c017126e270dfb5bd213f3fb19a925545ec5e13961c4e6c5bcba04f13e\" successfully" Jan 17 12:23:44.968747 containerd[1581]: time="2025-01-17T12:23:44.968504892Z" level=info msg="StopPodSandbox for \"0250f9c017126e270dfb5bd213f3fb19a925545ec5e13961c4e6c5bcba04f13e\" returns successfully" Jan 17 12:23:44.969436 containerd[1581]: time="2025-01-17T12:23:44.969325871Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:csi-node-driver-qzxmk,Uid:34cd60d5-9542-4f9c-a341-f3b5c6223a93,Namespace:calico-system,Attempt:1,}" Jan 17 12:23:44.969692 systemd[1]: run-netns-cni\x2deae1773e\x2d5a1d\x2deef3\x2def69\x2d9f554559080d.mount: Deactivated successfully. 
Jan 17 12:23:45.220260 systemd-networkd[1221]: calif51086bc2db: Link UP Jan 17 12:23:45.222896 systemd-networkd[1221]: calif51086bc2db: Gained carrier Jan 17 12:23:45.255156 containerd[1581]: 2025-01-17 12:23:45.077 [INFO][2872] cni-plugin/plugin.go 325: Calico CNI found existing endpoint: &{{WorkloadEndpoint projectcalico.org/v3} {146.190.50.84-k8s-csi--node--driver--qzxmk-eth0 csi-node-driver- calico-system 34cd60d5-9542-4f9c-a341-f3b5c6223a93 1142 0 2025-01-17 12:23:14 +0000 UTC map[app.kubernetes.io/name:csi-node-driver controller-revision-hash:55b695c467 k8s-app:csi-node-driver name:csi-node-driver pod-template-generation:1 projectcalico.org/namespace:calico-system projectcalico.org/orchestrator:k8s projectcalico.org/serviceaccount:csi-node-driver] map[] [] [] []} {k8s 146.190.50.84 csi-node-driver-qzxmk eth0 csi-node-driver [] [] [kns.calico-system ksa.calico-system.csi-node-driver] calif51086bc2db [] []}} ContainerID="e34cf9516f0a79cc99572c22457fdbb16feaf658f488f3a71749b876519ac1b4" Namespace="calico-system" Pod="csi-node-driver-qzxmk" WorkloadEndpoint="146.190.50.84-k8s-csi--node--driver--qzxmk-" Jan 17 12:23:45.255156 containerd[1581]: 2025-01-17 12:23:45.077 [INFO][2872] cni-plugin/k8s.go 77: Extracted identifiers for CmdAddK8s ContainerID="e34cf9516f0a79cc99572c22457fdbb16feaf658f488f3a71749b876519ac1b4" Namespace="calico-system" Pod="csi-node-driver-qzxmk" WorkloadEndpoint="146.190.50.84-k8s-csi--node--driver--qzxmk-eth0" Jan 17 12:23:45.255156 containerd[1581]: 2025-01-17 12:23:45.137 [INFO][2884] ipam/ipam_plugin.go 225: Calico CNI IPAM request count IPv4=1 IPv6=0 ContainerID="e34cf9516f0a79cc99572c22457fdbb16feaf658f488f3a71749b876519ac1b4" HandleID="k8s-pod-network.e34cf9516f0a79cc99572c22457fdbb16feaf658f488f3a71749b876519ac1b4" Workload="146.190.50.84-k8s-csi--node--driver--qzxmk-eth0" Jan 17 12:23:45.255156 containerd[1581]: 2025-01-17 12:23:45.156 [INFO][2884] ipam/ipam_plugin.go 265: Auto assigning IP 
ContainerID="e34cf9516f0a79cc99572c22457fdbb16feaf658f488f3a71749b876519ac1b4" HandleID="k8s-pod-network.e34cf9516f0a79cc99572c22457fdbb16feaf658f488f3a71749b876519ac1b4" Workload="146.190.50.84-k8s-csi--node--driver--qzxmk-eth0" assignArgs=ipam.AutoAssignArgs{Num4:1, Num6:0, HandleID:(*string)(0xc000383a80), Attrs:map[string]string{"namespace":"calico-system", "node":"146.190.50.84", "pod":"csi-node-driver-qzxmk", "timestamp":"2025-01-17 12:23:45.137950112 +0000 UTC"}, Hostname:"146.190.50.84", IPv4Pools:[]net.IPNet{}, IPv6Pools:[]net.IPNet{}, MaxBlocksPerHost:0, HostReservedAttrIPv4s:(*ipam.HostReservedAttr)(nil), HostReservedAttrIPv6s:(*ipam.HostReservedAttr)(nil), IntendedUse:"Workload"} Jan 17 12:23:45.255156 containerd[1581]: 2025-01-17 12:23:45.156 [INFO][2884] ipam/ipam_plugin.go 353: About to acquire host-wide IPAM lock. Jan 17 12:23:45.255156 containerd[1581]: 2025-01-17 12:23:45.157 [INFO][2884] ipam/ipam_plugin.go 368: Acquired host-wide IPAM lock. Jan 17 12:23:45.255156 containerd[1581]: 2025-01-17 12:23:45.157 [INFO][2884] ipam/ipam.go 107: Auto-assign 1 ipv4, 0 ipv6 addrs for host '146.190.50.84' Jan 17 12:23:45.255156 containerd[1581]: 2025-01-17 12:23:45.161 [INFO][2884] ipam/ipam.go 660: Looking up existing affinities for host handle="k8s-pod-network.e34cf9516f0a79cc99572c22457fdbb16feaf658f488f3a71749b876519ac1b4" host="146.190.50.84" Jan 17 12:23:45.255156 containerd[1581]: 2025-01-17 12:23:45.170 [INFO][2884] ipam/ipam.go 372: Looking up existing affinities for host host="146.190.50.84" Jan 17 12:23:45.255156 containerd[1581]: 2025-01-17 12:23:45.178 [INFO][2884] ipam/ipam.go 489: Trying affinity for 192.168.35.128/26 host="146.190.50.84" Jan 17 12:23:45.255156 containerd[1581]: 2025-01-17 12:23:45.182 [INFO][2884] ipam/ipam.go 155: Attempting to load block cidr=192.168.35.128/26 host="146.190.50.84" Jan 17 12:23:45.255156 containerd[1581]: 2025-01-17 12:23:45.190 [INFO][2884] ipam/ipam.go 232: Affinity is confirmed and block has been loaded 
cidr=192.168.35.128/26 host="146.190.50.84" Jan 17 12:23:45.255156 containerd[1581]: 2025-01-17 12:23:45.190 [INFO][2884] ipam/ipam.go 1180: Attempting to assign 1 addresses from block block=192.168.35.128/26 handle="k8s-pod-network.e34cf9516f0a79cc99572c22457fdbb16feaf658f488f3a71749b876519ac1b4" host="146.190.50.84" Jan 17 12:23:45.255156 containerd[1581]: 2025-01-17 12:23:45.193 [INFO][2884] ipam/ipam.go 1685: Creating new handle: k8s-pod-network.e34cf9516f0a79cc99572c22457fdbb16feaf658f488f3a71749b876519ac1b4 Jan 17 12:23:45.255156 containerd[1581]: 2025-01-17 12:23:45.200 [INFO][2884] ipam/ipam.go 1203: Writing block in order to claim IPs block=192.168.35.128/26 handle="k8s-pod-network.e34cf9516f0a79cc99572c22457fdbb16feaf658f488f3a71749b876519ac1b4" host="146.190.50.84" Jan 17 12:23:45.255156 containerd[1581]: 2025-01-17 12:23:45.209 [INFO][2884] ipam/ipam.go 1216: Successfully claimed IPs: [192.168.35.129/26] block=192.168.35.128/26 handle="k8s-pod-network.e34cf9516f0a79cc99572c22457fdbb16feaf658f488f3a71749b876519ac1b4" host="146.190.50.84" Jan 17 12:23:45.255156 containerd[1581]: 2025-01-17 12:23:45.209 [INFO][2884] ipam/ipam.go 847: Auto-assigned 1 out of 1 IPv4s: [192.168.35.129/26] handle="k8s-pod-network.e34cf9516f0a79cc99572c22457fdbb16feaf658f488f3a71749b876519ac1b4" host="146.190.50.84" Jan 17 12:23:45.255156 containerd[1581]: 2025-01-17 12:23:45.209 [INFO][2884] ipam/ipam_plugin.go 374: Released host-wide IPAM lock. 
Jan 17 12:23:45.255156 containerd[1581]: 2025-01-17 12:23:45.209 [INFO][2884] ipam/ipam_plugin.go 283: Calico CNI IPAM assigned addresses IPv4=[192.168.35.129/26] IPv6=[] ContainerID="e34cf9516f0a79cc99572c22457fdbb16feaf658f488f3a71749b876519ac1b4" HandleID="k8s-pod-network.e34cf9516f0a79cc99572c22457fdbb16feaf658f488f3a71749b876519ac1b4" Workload="146.190.50.84-k8s-csi--node--driver--qzxmk-eth0" Jan 17 12:23:45.258434 containerd[1581]: 2025-01-17 12:23:45.212 [INFO][2872] cni-plugin/k8s.go 386: Populated endpoint ContainerID="e34cf9516f0a79cc99572c22457fdbb16feaf658f488f3a71749b876519ac1b4" Namespace="calico-system" Pod="csi-node-driver-qzxmk" WorkloadEndpoint="146.190.50.84-k8s-csi--node--driver--qzxmk-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"146.190.50.84-k8s-csi--node--driver--qzxmk-eth0", GenerateName:"csi-node-driver-", Namespace:"calico-system", SelfLink:"", UID:"34cd60d5-9542-4f9c-a341-f3b5c6223a93", ResourceVersion:"1142", Generation:0, CreationTimestamp:time.Date(2025, time.January, 17, 12, 23, 14, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"app.kubernetes.io/name":"csi-node-driver", "controller-revision-hash":"55b695c467", "k8s-app":"csi-node-driver", "name":"csi-node-driver", "pod-template-generation":"1", "projectcalico.org/namespace":"calico-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"csi-node-driver"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"146.190.50.84", ContainerID:"", Pod:"csi-node-driver-qzxmk", Endpoint:"eth0", ServiceAccountName:"csi-node-driver", IPNetworks:[]string{"192.168.35.129/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", 
Profiles:[]string{"kns.calico-system", "ksa.calico-system.csi-node-driver"}, InterfaceName:"calif51086bc2db", MAC:"", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil)}} Jan 17 12:23:45.258434 containerd[1581]: 2025-01-17 12:23:45.212 [INFO][2872] cni-plugin/k8s.go 387: Calico CNI using IPs: [192.168.35.129/32] ContainerID="e34cf9516f0a79cc99572c22457fdbb16feaf658f488f3a71749b876519ac1b4" Namespace="calico-system" Pod="csi-node-driver-qzxmk" WorkloadEndpoint="146.190.50.84-k8s-csi--node--driver--qzxmk-eth0" Jan 17 12:23:45.258434 containerd[1581]: 2025-01-17 12:23:45.212 [INFO][2872] cni-plugin/dataplane_linux.go 69: Setting the host side veth name to calif51086bc2db ContainerID="e34cf9516f0a79cc99572c22457fdbb16feaf658f488f3a71749b876519ac1b4" Namespace="calico-system" Pod="csi-node-driver-qzxmk" WorkloadEndpoint="146.190.50.84-k8s-csi--node--driver--qzxmk-eth0" Jan 17 12:23:45.258434 containerd[1581]: 2025-01-17 12:23:45.224 [INFO][2872] cni-plugin/dataplane_linux.go 508: Disabling IPv4 forwarding ContainerID="e34cf9516f0a79cc99572c22457fdbb16feaf658f488f3a71749b876519ac1b4" Namespace="calico-system" Pod="csi-node-driver-qzxmk" WorkloadEndpoint="146.190.50.84-k8s-csi--node--driver--qzxmk-eth0" Jan 17 12:23:45.258434 containerd[1581]: 2025-01-17 12:23:45.225 [INFO][2872] cni-plugin/k8s.go 414: Added Mac, interface name, and active container ID to endpoint ContainerID="e34cf9516f0a79cc99572c22457fdbb16feaf658f488f3a71749b876519ac1b4" Namespace="calico-system" Pod="csi-node-driver-qzxmk" WorkloadEndpoint="146.190.50.84-k8s-csi--node--driver--qzxmk-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"146.190.50.84-k8s-csi--node--driver--qzxmk-eth0", GenerateName:"csi-node-driver-", Namespace:"calico-system", SelfLink:"", UID:"34cd60d5-9542-4f9c-a341-f3b5c6223a93", ResourceVersion:"1142", Generation:0, CreationTimestamp:time.Date(2025, 
time.January, 17, 12, 23, 14, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"app.kubernetes.io/name":"csi-node-driver", "controller-revision-hash":"55b695c467", "k8s-app":"csi-node-driver", "name":"csi-node-driver", "pod-template-generation":"1", "projectcalico.org/namespace":"calico-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"csi-node-driver"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"146.190.50.84", ContainerID:"e34cf9516f0a79cc99572c22457fdbb16feaf658f488f3a71749b876519ac1b4", Pod:"csi-node-driver-qzxmk", Endpoint:"eth0", ServiceAccountName:"csi-node-driver", IPNetworks:[]string{"192.168.35.129/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-system", "ksa.calico-system.csi-node-driver"}, InterfaceName:"calif51086bc2db", MAC:"36:cc:28:73:fe:25", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil)}} Jan 17 12:23:45.258434 containerd[1581]: 2025-01-17 12:23:45.243 [INFO][2872] cni-plugin/k8s.go 500: Wrote updated endpoint to datastore ContainerID="e34cf9516f0a79cc99572c22457fdbb16feaf658f488f3a71749b876519ac1b4" Namespace="calico-system" Pod="csi-node-driver-qzxmk" WorkloadEndpoint="146.190.50.84-k8s-csi--node--driver--qzxmk-eth0" Jan 17 12:23:45.311163 containerd[1581]: time="2025-01-17T12:23:45.310218590Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Jan 17 12:23:45.311163 containerd[1581]: time="2025-01-17T12:23:45.310339546Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." 
runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Jan 17 12:23:45.311163 containerd[1581]: time="2025-01-17T12:23:45.310363542Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Jan 17 12:23:45.311163 containerd[1581]: time="2025-01-17T12:23:45.310617377Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Jan 17 12:23:45.400687 containerd[1581]: time="2025-01-17T12:23:45.400624282Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:csi-node-driver-qzxmk,Uid:34cd60d5-9542-4f9c-a341-f3b5c6223a93,Namespace:calico-system,Attempt:1,} returns sandbox id \"e34cf9516f0a79cc99572c22457fdbb16feaf658f488f3a71749b876519ac1b4\"" Jan 17 12:23:45.403003 containerd[1581]: time="2025-01-17T12:23:45.402909108Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/csi:v3.29.1\"" Jan 17 12:23:45.538275 kubelet[1932]: E0117 12:23:45.538029 1932 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Jan 17 12:23:45.973433 systemd[1]: run-containerd-runc-k8s.io-e34cf9516f0a79cc99572c22457fdbb16feaf658f488f3a71749b876519ac1b4-runc.hNOlAP.mount: Deactivated successfully. 
Jan 17 12:23:46.369927 systemd-networkd[1221]: calif51086bc2db: Gained IPv6LL Jan 17 12:23:46.539195 kubelet[1932]: E0117 12:23:46.539119 1932 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Jan 17 12:23:47.010165 containerd[1581]: time="2025-01-17T12:23:47.010096439Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/csi:v3.29.1\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jan 17 12:23:47.012908 containerd[1581]: time="2025-01-17T12:23:47.012808939Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/csi:v3.29.1: active requests=0, bytes read=7902632" Jan 17 12:23:47.016878 containerd[1581]: time="2025-01-17T12:23:47.016812172Z" level=info msg="ImageCreate event name:\"sha256:bda8c42e04758c4f061339e213f50ccdc7502c4176fbf631aa12357e62b63540\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jan 17 12:23:47.028039 containerd[1581]: time="2025-01-17T12:23:47.027798388Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/csi@sha256:eaa7e01fb16b603c155a67b81f16992281db7f831684c7b2081d3434587a7ff3\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jan 17 12:23:47.032905 containerd[1581]: time="2025-01-17T12:23:47.030335302Z" level=info msg="Pulled image \"ghcr.io/flatcar/calico/csi:v3.29.1\" with image id \"sha256:bda8c42e04758c4f061339e213f50ccdc7502c4176fbf631aa12357e62b63540\", repo tag \"ghcr.io/flatcar/calico/csi:v3.29.1\", repo digest \"ghcr.io/flatcar/calico/csi@sha256:eaa7e01fb16b603c155a67b81f16992281db7f831684c7b2081d3434587a7ff3\", size \"9395716\" in 1.627366953s" Jan 17 12:23:47.032905 containerd[1581]: time="2025-01-17T12:23:47.030440605Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/csi:v3.29.1\" returns image reference \"sha256:bda8c42e04758c4f061339e213f50ccdc7502c4176fbf631aa12357e62b63540\"" Jan 17 12:23:47.034232 containerd[1581]: time="2025-01-17T12:23:47.034185708Z" level=info msg="CreateContainer within 
sandbox \"e34cf9516f0a79cc99572c22457fdbb16feaf658f488f3a71749b876519ac1b4\" for container &ContainerMetadata{Name:calico-csi,Attempt:0,}" Jan 17 12:23:47.081719 containerd[1581]: time="2025-01-17T12:23:47.081587556Z" level=info msg="CreateContainer within sandbox \"e34cf9516f0a79cc99572c22457fdbb16feaf658f488f3a71749b876519ac1b4\" for &ContainerMetadata{Name:calico-csi,Attempt:0,} returns container id \"55fb03789c7afa1b83b377e4ac9308b4c5b4ef398e8f77edcb4e33025ad6e897\"" Jan 17 12:23:47.083533 containerd[1581]: time="2025-01-17T12:23:47.082938244Z" level=info msg="StartContainer for \"55fb03789c7afa1b83b377e4ac9308b4c5b4ef398e8f77edcb4e33025ad6e897\"" Jan 17 12:23:47.192097 containerd[1581]: time="2025-01-17T12:23:47.192023751Z" level=info msg="StartContainer for \"55fb03789c7afa1b83b377e4ac9308b4c5b4ef398e8f77edcb4e33025ad6e897\" returns successfully" Jan 17 12:23:47.195488 containerd[1581]: time="2025-01-17T12:23:47.195437761Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/node-driver-registrar:v3.29.1\"" Jan 17 12:23:47.539796 kubelet[1932]: E0117 12:23:47.539713 1932 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Jan 17 12:23:47.886275 kubelet[1932]: E0117 12:23:47.886233 1932 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 67.207.67.3 67.207.67.2 67.207.67.3" Jan 17 12:23:48.067791 systemd[1]: run-containerd-runc-k8s.io-13a45b84abfa8a7b5bd435c873e1d520af3f122ae2c550c95c7433feac12d74e-runc.II35eN.mount: Deactivated successfully. 
Jan 17 12:23:48.540246 kubelet[1932]: E0117 12:23:48.540156 1932 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Jan 17 12:23:48.706417 containerd[1581]: time="2025-01-17T12:23:48.706323288Z" level=info msg="StopPodSandbox for \"4187182d907aba21b008b15174667544e9a4ec688cf61f9451555023461033e8\"" Jan 17 12:23:48.903835 containerd[1581]: time="2025-01-17T12:23:48.903759980Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/node-driver-registrar:v3.29.1\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jan 17 12:23:48.909632 containerd[1581]: time="2025-01-17T12:23:48.909546196Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/node-driver-registrar:v3.29.1: active requests=0, bytes read=10501081" Jan 17 12:23:48.915843 containerd[1581]: time="2025-01-17T12:23:48.915775695Z" level=info msg="ImageCreate event name:\"sha256:8b7d18f262d5cf6a6343578ad0db68a140c4c9989d9e02c58c27cb5d2c70320f\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jan 17 12:23:48.922171 containerd[1581]: time="2025-01-17T12:23:48.921913269Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/node-driver-registrar@sha256:a338da9488cbaa83c78457c3d7354d84149969c0480e88dd768e036632ff5b76\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jan 17 12:23:48.923320 containerd[1581]: time="2025-01-17T12:23:48.923261179Z" level=info msg="Pulled image \"ghcr.io/flatcar/calico/node-driver-registrar:v3.29.1\" with image id \"sha256:8b7d18f262d5cf6a6343578ad0db68a140c4c9989d9e02c58c27cb5d2c70320f\", repo tag \"ghcr.io/flatcar/calico/node-driver-registrar:v3.29.1\", repo digest \"ghcr.io/flatcar/calico/node-driver-registrar@sha256:a338da9488cbaa83c78457c3d7354d84149969c0480e88dd768e036632ff5b76\", size \"11994117\" in 1.727722307s" Jan 17 12:23:48.923320 containerd[1581]: time="2025-01-17T12:23:48.923318866Z" level=info msg="PullImage 
\"ghcr.io/flatcar/calico/node-driver-registrar:v3.29.1\" returns image reference \"sha256:8b7d18f262d5cf6a6343578ad0db68a140c4c9989d9e02c58c27cb5d2c70320f\"" Jan 17 12:23:48.929408 containerd[1581]: time="2025-01-17T12:23:48.929018852Z" level=info msg="CreateContainer within sandbox \"e34cf9516f0a79cc99572c22457fdbb16feaf658f488f3a71749b876519ac1b4\" for container &ContainerMetadata{Name:csi-node-driver-registrar,Attempt:0,}" Jan 17 12:23:48.931812 containerd[1581]: 2025-01-17 12:23:48.837 [INFO][3030] cni-plugin/k8s.go 608: Cleaning up netns ContainerID="4187182d907aba21b008b15174667544e9a4ec688cf61f9451555023461033e8" Jan 17 12:23:48.931812 containerd[1581]: 2025-01-17 12:23:48.843 [INFO][3030] cni-plugin/dataplane_linux.go 559: Deleting workload's device in netns. ContainerID="4187182d907aba21b008b15174667544e9a4ec688cf61f9451555023461033e8" iface="eth0" netns="/var/run/netns/cni-3cfad7e9-222a-8a5c-b5e8-5f995026fcf4" Jan 17 12:23:48.931812 containerd[1581]: 2025-01-17 12:23:48.843 [INFO][3030] cni-plugin/dataplane_linux.go 570: Entered netns, deleting veth. ContainerID="4187182d907aba21b008b15174667544e9a4ec688cf61f9451555023461033e8" iface="eth0" netns="/var/run/netns/cni-3cfad7e9-222a-8a5c-b5e8-5f995026fcf4" Jan 17 12:23:48.931812 containerd[1581]: 2025-01-17 12:23:48.847 [INFO][3030] cni-plugin/dataplane_linux.go 597: Workload's veth was already gone. Nothing to do. 
ContainerID="4187182d907aba21b008b15174667544e9a4ec688cf61f9451555023461033e8" iface="eth0" netns="/var/run/netns/cni-3cfad7e9-222a-8a5c-b5e8-5f995026fcf4" Jan 17 12:23:48.931812 containerd[1581]: 2025-01-17 12:23:48.848 [INFO][3030] cni-plugin/k8s.go 615: Releasing IP address(es) ContainerID="4187182d907aba21b008b15174667544e9a4ec688cf61f9451555023461033e8" Jan 17 12:23:48.931812 containerd[1581]: 2025-01-17 12:23:48.848 [INFO][3030] cni-plugin/utils.go 188: Calico CNI releasing IP address ContainerID="4187182d907aba21b008b15174667544e9a4ec688cf61f9451555023461033e8" Jan 17 12:23:48.931812 containerd[1581]: 2025-01-17 12:23:48.903 [INFO][3036] ipam/ipam_plugin.go 412: Releasing address using handleID ContainerID="4187182d907aba21b008b15174667544e9a4ec688cf61f9451555023461033e8" HandleID="k8s-pod-network.4187182d907aba21b008b15174667544e9a4ec688cf61f9451555023461033e8" Workload="146.190.50.84-k8s-nginx--deployment--6d5f899847--jx6fq-eth0" Jan 17 12:23:48.931812 containerd[1581]: 2025-01-17 12:23:48.905 [INFO][3036] ipam/ipam_plugin.go 353: About to acquire host-wide IPAM lock. Jan 17 12:23:48.931812 containerd[1581]: 2025-01-17 12:23:48.905 [INFO][3036] ipam/ipam_plugin.go 368: Acquired host-wide IPAM lock. Jan 17 12:23:48.931812 containerd[1581]: 2025-01-17 12:23:48.918 [WARNING][3036] ipam/ipam_plugin.go 429: Asked to release address but it doesn't exist. 
Ignoring ContainerID="4187182d907aba21b008b15174667544e9a4ec688cf61f9451555023461033e8" HandleID="k8s-pod-network.4187182d907aba21b008b15174667544e9a4ec688cf61f9451555023461033e8" Workload="146.190.50.84-k8s-nginx--deployment--6d5f899847--jx6fq-eth0" Jan 17 12:23:48.931812 containerd[1581]: 2025-01-17 12:23:48.918 [INFO][3036] ipam/ipam_plugin.go 440: Releasing address using workloadID ContainerID="4187182d907aba21b008b15174667544e9a4ec688cf61f9451555023461033e8" HandleID="k8s-pod-network.4187182d907aba21b008b15174667544e9a4ec688cf61f9451555023461033e8" Workload="146.190.50.84-k8s-nginx--deployment--6d5f899847--jx6fq-eth0" Jan 17 12:23:48.931812 containerd[1581]: 2025-01-17 12:23:48.923 [INFO][3036] ipam/ipam_plugin.go 374: Released host-wide IPAM lock. Jan 17 12:23:48.931812 containerd[1581]: 2025-01-17 12:23:48.928 [INFO][3030] cni-plugin/k8s.go 621: Teardown processing complete. ContainerID="4187182d907aba21b008b15174667544e9a4ec688cf61f9451555023461033e8" Jan 17 12:23:48.939024 systemd[1]: run-netns-cni\x2d3cfad7e9\x2d222a\x2d8a5c\x2db5e8\x2d5f995026fcf4.mount: Deactivated successfully. 
Jan 17 12:23:48.940535 containerd[1581]: time="2025-01-17T12:23:48.940100375Z" level=info msg="TearDown network for sandbox \"4187182d907aba21b008b15174667544e9a4ec688cf61f9451555023461033e8\" successfully" Jan 17 12:23:48.940535 containerd[1581]: time="2025-01-17T12:23:48.940171147Z" level=info msg="StopPodSandbox for \"4187182d907aba21b008b15174667544e9a4ec688cf61f9451555023461033e8\" returns successfully" Jan 17 12:23:48.943055 containerd[1581]: time="2025-01-17T12:23:48.942961975Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:nginx-deployment-6d5f899847-jx6fq,Uid:10c50357-38da-4ca5-9a41-fc973e78f347,Namespace:default,Attempt:1,}" Jan 17 12:23:48.990711 containerd[1581]: time="2025-01-17T12:23:48.990642837Z" level=info msg="CreateContainer within sandbox \"e34cf9516f0a79cc99572c22457fdbb16feaf658f488f3a71749b876519ac1b4\" for &ContainerMetadata{Name:csi-node-driver-registrar,Attempt:0,} returns container id \"d8aa7f86a95d9963074c2414bcee432518d8b914ed66b5c01ab14398f702c7a1\"" Jan 17 12:23:48.993329 containerd[1581]: time="2025-01-17T12:23:48.993180356Z" level=info msg="StartContainer for \"d8aa7f86a95d9963074c2414bcee432518d8b914ed66b5c01ab14398f702c7a1\"" Jan 17 12:23:49.158795 containerd[1581]: time="2025-01-17T12:23:49.158616899Z" level=info msg="StartContainer for \"d8aa7f86a95d9963074c2414bcee432518d8b914ed66b5c01ab14398f702c7a1\" returns successfully" Jan 17 12:23:49.257518 update_engine[1562]: I20250117 12:23:49.257407 1562 update_attempter.cc:509] Updating boot flags... 
Jan 17 12:23:49.308293 systemd-networkd[1221]: cali723a9c84411: Link UP Jan 17 12:23:49.310498 systemd-networkd[1221]: cali723a9c84411: Gained carrier Jan 17 12:23:49.336620 kernel: BTRFS warning: duplicate device /dev/vda3 devid 1 generation 36 scanned by (udev-worker) (3106) Jan 17 12:23:49.350119 containerd[1581]: 2025-01-17 12:23:49.126 [INFO][3052] cni-plugin/plugin.go 325: Calico CNI found existing endpoint: &{{WorkloadEndpoint projectcalico.org/v3} {146.190.50.84-k8s-nginx--deployment--6d5f899847--jx6fq-eth0 nginx-deployment-6d5f899847- default 10c50357-38da-4ca5-9a41-fc973e78f347 1168 0 2025-01-17 12:23:33 +0000 UTC map[app:nginx pod-template-hash:6d5f899847 projectcalico.org/namespace:default projectcalico.org/orchestrator:k8s projectcalico.org/serviceaccount:default] map[] [] [] []} {k8s 146.190.50.84 nginx-deployment-6d5f899847-jx6fq eth0 default [] [] [kns.default ksa.default.default] cali723a9c84411 [] []}} ContainerID="434171b5e90fa5baeabef7db195f19ef8cddb28bd68f1f349b207778da1654c9" Namespace="default" Pod="nginx-deployment-6d5f899847-jx6fq" WorkloadEndpoint="146.190.50.84-k8s-nginx--deployment--6d5f899847--jx6fq-" Jan 17 12:23:49.350119 containerd[1581]: 2025-01-17 12:23:49.127 [INFO][3052] cni-plugin/k8s.go 77: Extracted identifiers for CmdAddK8s ContainerID="434171b5e90fa5baeabef7db195f19ef8cddb28bd68f1f349b207778da1654c9" Namespace="default" Pod="nginx-deployment-6d5f899847-jx6fq" WorkloadEndpoint="146.190.50.84-k8s-nginx--deployment--6d5f899847--jx6fq-eth0" Jan 17 12:23:49.350119 containerd[1581]: 2025-01-17 12:23:49.193 [INFO][3089] ipam/ipam_plugin.go 225: Calico CNI IPAM request count IPv4=1 IPv6=0 ContainerID="434171b5e90fa5baeabef7db195f19ef8cddb28bd68f1f349b207778da1654c9" HandleID="k8s-pod-network.434171b5e90fa5baeabef7db195f19ef8cddb28bd68f1f349b207778da1654c9" Workload="146.190.50.84-k8s-nginx--deployment--6d5f899847--jx6fq-eth0" Jan 17 12:23:49.350119 containerd[1581]: 2025-01-17 12:23:49.217 [INFO][3089] ipam/ipam_plugin.go 265: Auto 
assigning IP ContainerID="434171b5e90fa5baeabef7db195f19ef8cddb28bd68f1f349b207778da1654c9" HandleID="k8s-pod-network.434171b5e90fa5baeabef7db195f19ef8cddb28bd68f1f349b207778da1654c9" Workload="146.190.50.84-k8s-nginx--deployment--6d5f899847--jx6fq-eth0" assignArgs=ipam.AutoAssignArgs{Num4:1, Num6:0, HandleID:(*string)(0xc000319690), Attrs:map[string]string{"namespace":"default", "node":"146.190.50.84", "pod":"nginx-deployment-6d5f899847-jx6fq", "timestamp":"2025-01-17 12:23:49.193458903 +0000 UTC"}, Hostname:"146.190.50.84", IPv4Pools:[]net.IPNet{}, IPv6Pools:[]net.IPNet{}, MaxBlocksPerHost:0, HostReservedAttrIPv4s:(*ipam.HostReservedAttr)(nil), HostReservedAttrIPv6s:(*ipam.HostReservedAttr)(nil), IntendedUse:"Workload"} Jan 17 12:23:49.350119 containerd[1581]: 2025-01-17 12:23:49.217 [INFO][3089] ipam/ipam_plugin.go 353: About to acquire host-wide IPAM lock. Jan 17 12:23:49.350119 containerd[1581]: 2025-01-17 12:23:49.217 [INFO][3089] ipam/ipam_plugin.go 368: Acquired host-wide IPAM lock. 
Jan 17 12:23:49.350119 containerd[1581]: 2025-01-17 12:23:49.217 [INFO][3089] ipam/ipam.go 107: Auto-assign 1 ipv4, 0 ipv6 addrs for host '146.190.50.84' Jan 17 12:23:49.350119 containerd[1581]: 2025-01-17 12:23:49.226 [INFO][3089] ipam/ipam.go 660: Looking up existing affinities for host handle="k8s-pod-network.434171b5e90fa5baeabef7db195f19ef8cddb28bd68f1f349b207778da1654c9" host="146.190.50.84" Jan 17 12:23:49.350119 containerd[1581]: 2025-01-17 12:23:49.239 [INFO][3089] ipam/ipam.go 372: Looking up existing affinities for host host="146.190.50.84" Jan 17 12:23:49.350119 containerd[1581]: 2025-01-17 12:23:49.250 [INFO][3089] ipam/ipam.go 489: Trying affinity for 192.168.35.128/26 host="146.190.50.84" Jan 17 12:23:49.350119 containerd[1581]: 2025-01-17 12:23:49.255 [INFO][3089] ipam/ipam.go 155: Attempting to load block cidr=192.168.35.128/26 host="146.190.50.84" Jan 17 12:23:49.350119 containerd[1581]: 2025-01-17 12:23:49.262 [INFO][3089] ipam/ipam.go 232: Affinity is confirmed and block has been loaded cidr=192.168.35.128/26 host="146.190.50.84" Jan 17 12:23:49.350119 containerd[1581]: 2025-01-17 12:23:49.262 [INFO][3089] ipam/ipam.go 1180: Attempting to assign 1 addresses from block block=192.168.35.128/26 handle="k8s-pod-network.434171b5e90fa5baeabef7db195f19ef8cddb28bd68f1f349b207778da1654c9" host="146.190.50.84" Jan 17 12:23:49.350119 containerd[1581]: 2025-01-17 12:23:49.268 [INFO][3089] ipam/ipam.go 1685: Creating new handle: k8s-pod-network.434171b5e90fa5baeabef7db195f19ef8cddb28bd68f1f349b207778da1654c9 Jan 17 12:23:49.350119 containerd[1581]: 2025-01-17 12:23:49.278 [INFO][3089] ipam/ipam.go 1203: Writing block in order to claim IPs block=192.168.35.128/26 handle="k8s-pod-network.434171b5e90fa5baeabef7db195f19ef8cddb28bd68f1f349b207778da1654c9" host="146.190.50.84" Jan 17 12:23:49.350119 containerd[1581]: 2025-01-17 12:23:49.292 [INFO][3089] ipam/ipam.go 1216: Successfully claimed IPs: [192.168.35.130/26] block=192.168.35.128/26 
handle="k8s-pod-network.434171b5e90fa5baeabef7db195f19ef8cddb28bd68f1f349b207778da1654c9" host="146.190.50.84" Jan 17 12:23:49.350119 containerd[1581]: 2025-01-17 12:23:49.292 [INFO][3089] ipam/ipam.go 847: Auto-assigned 1 out of 1 IPv4s: [192.168.35.130/26] handle="k8s-pod-network.434171b5e90fa5baeabef7db195f19ef8cddb28bd68f1f349b207778da1654c9" host="146.190.50.84" Jan 17 12:23:49.350119 containerd[1581]: 2025-01-17 12:23:49.292 [INFO][3089] ipam/ipam_plugin.go 374: Released host-wide IPAM lock. Jan 17 12:23:49.350119 containerd[1581]: 2025-01-17 12:23:49.292 [INFO][3089] ipam/ipam_plugin.go 283: Calico CNI IPAM assigned addresses IPv4=[192.168.35.130/26] IPv6=[] ContainerID="434171b5e90fa5baeabef7db195f19ef8cddb28bd68f1f349b207778da1654c9" HandleID="k8s-pod-network.434171b5e90fa5baeabef7db195f19ef8cddb28bd68f1f349b207778da1654c9" Workload="146.190.50.84-k8s-nginx--deployment--6d5f899847--jx6fq-eth0" Jan 17 12:23:49.351241 containerd[1581]: 2025-01-17 12:23:49.297 [INFO][3052] cni-plugin/k8s.go 386: Populated endpoint ContainerID="434171b5e90fa5baeabef7db195f19ef8cddb28bd68f1f349b207778da1654c9" Namespace="default" Pod="nginx-deployment-6d5f899847-jx6fq" WorkloadEndpoint="146.190.50.84-k8s-nginx--deployment--6d5f899847--jx6fq-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"146.190.50.84-k8s-nginx--deployment--6d5f899847--jx6fq-eth0", GenerateName:"nginx-deployment-6d5f899847-", Namespace:"default", SelfLink:"", UID:"10c50357-38da-4ca5-9a41-fc973e78f347", ResourceVersion:"1168", Generation:0, CreationTimestamp:time.Date(2025, time.January, 17, 12, 23, 33, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"app":"nginx", "pod-template-hash":"6d5f899847", "projectcalico.org/namespace":"default", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"default"}, Annotations:map[string]string(nil), 
OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"146.190.50.84", ContainerID:"", Pod:"nginx-deployment-6d5f899847-jx6fq", Endpoint:"eth0", ServiceAccountName:"default", IPNetworks:[]string{"192.168.35.130/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.default", "ksa.default.default"}, InterfaceName:"cali723a9c84411", MAC:"", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil)}} Jan 17 12:23:49.351241 containerd[1581]: 2025-01-17 12:23:49.298 [INFO][3052] cni-plugin/k8s.go 387: Calico CNI using IPs: [192.168.35.130/32] ContainerID="434171b5e90fa5baeabef7db195f19ef8cddb28bd68f1f349b207778da1654c9" Namespace="default" Pod="nginx-deployment-6d5f899847-jx6fq" WorkloadEndpoint="146.190.50.84-k8s-nginx--deployment--6d5f899847--jx6fq-eth0" Jan 17 12:23:49.351241 containerd[1581]: 2025-01-17 12:23:49.298 [INFO][3052] cni-plugin/dataplane_linux.go 69: Setting the host side veth name to cali723a9c84411 ContainerID="434171b5e90fa5baeabef7db195f19ef8cddb28bd68f1f349b207778da1654c9" Namespace="default" Pod="nginx-deployment-6d5f899847-jx6fq" WorkloadEndpoint="146.190.50.84-k8s-nginx--deployment--6d5f899847--jx6fq-eth0" Jan 17 12:23:49.351241 containerd[1581]: 2025-01-17 12:23:49.311 [INFO][3052] cni-plugin/dataplane_linux.go 508: Disabling IPv4 forwarding ContainerID="434171b5e90fa5baeabef7db195f19ef8cddb28bd68f1f349b207778da1654c9" Namespace="default" Pod="nginx-deployment-6d5f899847-jx6fq" WorkloadEndpoint="146.190.50.84-k8s-nginx--deployment--6d5f899847--jx6fq-eth0" Jan 17 12:23:49.351241 containerd[1581]: 2025-01-17 12:23:49.313 [INFO][3052] cni-plugin/k8s.go 414: Added Mac, interface name, and active container ID to endpoint ContainerID="434171b5e90fa5baeabef7db195f19ef8cddb28bd68f1f349b207778da1654c9" Namespace="default" Pod="nginx-deployment-6d5f899847-jx6fq" 
WorkloadEndpoint="146.190.50.84-k8s-nginx--deployment--6d5f899847--jx6fq-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"146.190.50.84-k8s-nginx--deployment--6d5f899847--jx6fq-eth0", GenerateName:"nginx-deployment-6d5f899847-", Namespace:"default", SelfLink:"", UID:"10c50357-38da-4ca5-9a41-fc973e78f347", ResourceVersion:"1168", Generation:0, CreationTimestamp:time.Date(2025, time.January, 17, 12, 23, 33, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"app":"nginx", "pod-template-hash":"6d5f899847", "projectcalico.org/namespace":"default", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"default"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"146.190.50.84", ContainerID:"434171b5e90fa5baeabef7db195f19ef8cddb28bd68f1f349b207778da1654c9", Pod:"nginx-deployment-6d5f899847-jx6fq", Endpoint:"eth0", ServiceAccountName:"default", IPNetworks:[]string{"192.168.35.130/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.default", "ksa.default.default"}, InterfaceName:"cali723a9c84411", MAC:"86:7f:f7:4b:32:96", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil)}} Jan 17 12:23:49.351241 containerd[1581]: 2025-01-17 12:23:49.335 [INFO][3052] cni-plugin/k8s.go 500: Wrote updated endpoint to datastore ContainerID="434171b5e90fa5baeabef7db195f19ef8cddb28bd68f1f349b207778da1654c9" Namespace="default" Pod="nginx-deployment-6d5f899847-jx6fq" WorkloadEndpoint="146.190.50.84-k8s-nginx--deployment--6d5f899847--jx6fq-eth0" Jan 17 12:23:49.480901 containerd[1581]: time="2025-01-17T12:23:49.476792243Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." 
runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Jan 17 12:23:49.480901 containerd[1581]: time="2025-01-17T12:23:49.476915558Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Jan 17 12:23:49.480901 containerd[1581]: time="2025-01-17T12:23:49.476955249Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Jan 17 12:23:49.480901 containerd[1581]: time="2025-01-17T12:23:49.477157756Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Jan 17 12:23:49.519521 kernel: BTRFS warning: duplicate device /dev/vda3 devid 1 generation 36 scanned by (udev-worker) (3105) Jan 17 12:23:49.540481 kubelet[1932]: E0117 12:23:49.540352 1932 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Jan 17 12:23:49.638483 containerd[1581]: time="2025-01-17T12:23:49.638420905Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:nginx-deployment-6d5f899847-jx6fq,Uid:10c50357-38da-4ca5-9a41-fc973e78f347,Namespace:default,Attempt:1,} returns sandbox id \"434171b5e90fa5baeabef7db195f19ef8cddb28bd68f1f349b207778da1654c9\"" Jan 17 12:23:49.646671 containerd[1581]: time="2025-01-17T12:23:49.646460592Z" level=info msg="PullImage \"ghcr.io/flatcar/nginx:latest\"" Jan 17 12:23:49.679033 kubelet[1932]: I0117 12:23:49.678975 1932 csi_plugin.go:99] kubernetes.io/csi: Trying to validate a new CSI Driver with name: csi.tigera.io endpoint: /var/lib/kubelet/plugins/csi.tigera.io/csi.sock versions: 1.0.0 Jan 17 12:23:49.682625 kubelet[1932]: I0117 12:23:49.682513 1932 csi_plugin.go:112] kubernetes.io/csi: Register new plugin with name: csi.tigera.io at endpoint: /var/lib/kubelet/plugins/csi.tigera.io/csi.sock Jan 17 12:23:49.925114 kubelet[1932]: I0117 12:23:49.924730 1932 
pod_startup_latency_tracker.go:102] "Observed pod startup duration" pod="calico-system/csi-node-driver-qzxmk" podStartSLOduration=32.402462194 podStartE2EDuration="35.92466056s" podCreationTimestamp="2025-01-17 12:23:14 +0000 UTC" firstStartedPulling="2025-01-17 12:23:45.402553219 +0000 UTC m=+31.366414704" lastFinishedPulling="2025-01-17 12:23:48.924751595 +0000 UTC m=+34.888613070" observedRunningTime="2025-01-17 12:23:49.922254072 +0000 UTC m=+35.886115571" watchObservedRunningTime="2025-01-17 12:23:49.92466056 +0000 UTC m=+35.888522062" Jan 17 12:23:50.541425 kubelet[1932]: E0117 12:23:50.541298 1932 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Jan 17 12:23:50.913710 systemd-networkd[1221]: cali723a9c84411: Gained IPv6LL Jan 17 12:23:51.541936 kubelet[1932]: E0117 12:23:51.541477 1932 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Jan 17 12:23:52.542763 kubelet[1932]: E0117 12:23:52.542555 1932 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Jan 17 12:23:53.266259 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount2233097342.mount: Deactivated successfully. 
Jan 17 12:23:53.543788 kubelet[1932]: E0117 12:23:53.543201 1932 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Jan 17 12:23:54.515504 kubelet[1932]: E0117 12:23:54.515448 1932 file.go:104] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Jan 17 12:23:54.543852 kubelet[1932]: E0117 12:23:54.543641 1932 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Jan 17 12:23:55.126559 containerd[1581]: time="2025-01-17T12:23:55.126464523Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/nginx:latest\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jan 17 12:23:55.129994 containerd[1581]: time="2025-01-17T12:23:55.129457948Z" level=info msg="stop pulling image ghcr.io/flatcar/nginx:latest: active requests=0, bytes read=71036018" Jan 17 12:23:55.133165 containerd[1581]: time="2025-01-17T12:23:55.133042007Z" level=info msg="ImageCreate event name:\"sha256:29ef6eaebfc53650f3a4609edbf9d35e866f56b2c5e01d32d93439031b300f0b\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jan 17 12:23:55.142526 containerd[1581]: time="2025-01-17T12:23:55.141261771Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/nginx@sha256:eca1d1ff18c7af45f86b7e0b572090f563a676ddca3da2ecff678390366335ad\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jan 17 12:23:55.143035 containerd[1581]: time="2025-01-17T12:23:55.142804844Z" level=info msg="Pulled image \"ghcr.io/flatcar/nginx:latest\" with image id \"sha256:29ef6eaebfc53650f3a4609edbf9d35e866f56b2c5e01d32d93439031b300f0b\", repo tag \"ghcr.io/flatcar/nginx:latest\", repo digest \"ghcr.io/flatcar/nginx@sha256:eca1d1ff18c7af45f86b7e0b572090f563a676ddca3da2ecff678390366335ad\", size \"71035896\" in 5.496183107s" Jan 17 12:23:55.143035 containerd[1581]: time="2025-01-17T12:23:55.142874461Z" level=info msg="PullImage 
\"ghcr.io/flatcar/nginx:latest\" returns image reference \"sha256:29ef6eaebfc53650f3a4609edbf9d35e866f56b2c5e01d32d93439031b300f0b\"" Jan 17 12:23:55.146291 containerd[1581]: time="2025-01-17T12:23:55.146061780Z" level=info msg="CreateContainer within sandbox \"434171b5e90fa5baeabef7db195f19ef8cddb28bd68f1f349b207778da1654c9\" for container &ContainerMetadata{Name:nginx,Attempt:0,}" Jan 17 12:23:55.180427 containerd[1581]: time="2025-01-17T12:23:55.180339093Z" level=info msg="CreateContainer within sandbox \"434171b5e90fa5baeabef7db195f19ef8cddb28bd68f1f349b207778da1654c9\" for &ContainerMetadata{Name:nginx,Attempt:0,} returns container id \"9d17a80f58378ce1a76d94bfaee761585d002fd411963496fae5996801c9d604\"" Jan 17 12:23:55.182529 containerd[1581]: time="2025-01-17T12:23:55.181229129Z" level=info msg="StartContainer for \"9d17a80f58378ce1a76d94bfaee761585d002fd411963496fae5996801c9d604\"" Jan 17 12:23:55.339641 containerd[1581]: time="2025-01-17T12:23:55.339448467Z" level=info msg="StartContainer for \"9d17a80f58378ce1a76d94bfaee761585d002fd411963496fae5996801c9d604\" returns successfully" Jan 17 12:23:55.544955 kubelet[1932]: E0117 12:23:55.544711 1932 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Jan 17 12:23:56.545482 kubelet[1932]: E0117 12:23:56.545412 1932 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Jan 17 12:23:57.546753 kubelet[1932]: E0117 12:23:57.546677 1932 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Jan 17 12:23:58.547319 kubelet[1932]: E0117 12:23:58.547226 1932 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Jan 17 12:23:59.547710 kubelet[1932]: E0117 12:23:59.547634 1932 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" 
path="/etc/kubernetes/manifests" Jan 17 12:24:00.548037 kubelet[1932]: E0117 12:24:00.547939 1932 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Jan 17 12:24:01.548954 kubelet[1932]: E0117 12:24:01.548873 1932 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Jan 17 12:24:02.549882 kubelet[1932]: E0117 12:24:02.549797 1932 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Jan 17 12:24:03.550777 kubelet[1932]: E0117 12:24:03.550699 1932 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Jan 17 12:24:04.551137 kubelet[1932]: E0117 12:24:04.551074 1932 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Jan 17 12:24:05.018154 kubelet[1932]: I0117 12:24:05.017994 1932 pod_startup_latency_tracker.go:102] "Observed pod startup duration" pod="default/nginx-deployment-6d5f899847-jx6fq" podStartSLOduration=26.519554547 podStartE2EDuration="32.017903802s" podCreationTimestamp="2025-01-17 12:23:33 +0000 UTC" firstStartedPulling="2025-01-17 12:23:49.6451023 +0000 UTC m=+35.608963815" lastFinishedPulling="2025-01-17 12:23:55.143451583 +0000 UTC m=+41.107313070" observedRunningTime="2025-01-17 12:23:55.935678653 +0000 UTC m=+41.899540147" watchObservedRunningTime="2025-01-17 12:24:05.017903802 +0000 UTC m=+50.981765298" Jan 17 12:24:05.018580 kubelet[1932]: I0117 12:24:05.018443 1932 topology_manager.go:215] "Topology Admit Handler" podUID="61551699-ab10-4ebd-ae74-7c6d57667877" podNamespace="default" podName="nfs-server-provisioner-0" Jan 17 12:24:05.155650 kubelet[1932]: I0117 12:24:05.155584 1932 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"data\" (UniqueName: 
\"kubernetes.io/empty-dir/61551699-ab10-4ebd-ae74-7c6d57667877-data\") pod \"nfs-server-provisioner-0\" (UID: \"61551699-ab10-4ebd-ae74-7c6d57667877\") " pod="default/nfs-server-provisioner-0" Jan 17 12:24:05.155861 kubelet[1932]: I0117 12:24:05.155671 1932 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-7vpmv\" (UniqueName: \"kubernetes.io/projected/61551699-ab10-4ebd-ae74-7c6d57667877-kube-api-access-7vpmv\") pod \"nfs-server-provisioner-0\" (UID: \"61551699-ab10-4ebd-ae74-7c6d57667877\") " pod="default/nfs-server-provisioner-0" Jan 17 12:24:05.324634 containerd[1581]: time="2025-01-17T12:24:05.323819433Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:nfs-server-provisioner-0,Uid:61551699-ab10-4ebd-ae74-7c6d57667877,Namespace:default,Attempt:0,}" Jan 17 12:24:05.551903 kubelet[1932]: E0117 12:24:05.551825 1932 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Jan 17 12:24:05.608511 systemd-networkd[1221]: cali60e51b789ff: Link UP Jan 17 12:24:05.610817 systemd-networkd[1221]: cali60e51b789ff: Gained carrier Jan 17 12:24:05.632368 containerd[1581]: 2025-01-17 12:24:05.441 [INFO][3278] cni-plugin/plugin.go 325: Calico CNI found existing endpoint: &{{WorkloadEndpoint projectcalico.org/v3} {146.190.50.84-k8s-nfs--server--provisioner--0-eth0 nfs-server-provisioner- default 61551699-ab10-4ebd-ae74-7c6d57667877 1240 0 2025-01-17 12:24:04 +0000 UTC map[app:nfs-server-provisioner apps.kubernetes.io/pod-index:0 chart:nfs-server-provisioner-1.8.0 controller-revision-hash:nfs-server-provisioner-d5cbb7f57 heritage:Helm projectcalico.org/namespace:default projectcalico.org/orchestrator:k8s projectcalico.org/serviceaccount:nfs-server-provisioner release:nfs-server-provisioner statefulset.kubernetes.io/pod-name:nfs-server-provisioner-0] map[] [] [] []} {k8s 146.190.50.84 nfs-server-provisioner-0 eth0 nfs-server-provisioner [] [] 
[kns.default ksa.default.nfs-server-provisioner] cali60e51b789ff [{nfs TCP 2049 0 } {nfs-udp UDP 2049 0 } {nlockmgr TCP 32803 0 } {nlockmgr-udp UDP 32803 0 } {mountd TCP 20048 0 } {mountd-udp UDP 20048 0 } {rquotad TCP 875 0 } {rquotad-udp UDP 875 0 } {rpcbind TCP 111 0 } {rpcbind-udp UDP 111 0 } {statd TCP 662 0 } {statd-udp UDP 662 0 }] []}} ContainerID="f6751ab87d9aaab75fb28a97c86ec989a5755a1cbbe9250983ef3a729506c08c" Namespace="default" Pod="nfs-server-provisioner-0" WorkloadEndpoint="146.190.50.84-k8s-nfs--server--provisioner--0-" Jan 17 12:24:05.632368 containerd[1581]: 2025-01-17 12:24:05.442 [INFO][3278] cni-plugin/k8s.go 77: Extracted identifiers for CmdAddK8s ContainerID="f6751ab87d9aaab75fb28a97c86ec989a5755a1cbbe9250983ef3a729506c08c" Namespace="default" Pod="nfs-server-provisioner-0" WorkloadEndpoint="146.190.50.84-k8s-nfs--server--provisioner--0-eth0" Jan 17 12:24:05.632368 containerd[1581]: 2025-01-17 12:24:05.509 [INFO][3288] ipam/ipam_plugin.go 225: Calico CNI IPAM request count IPv4=1 IPv6=0 ContainerID="f6751ab87d9aaab75fb28a97c86ec989a5755a1cbbe9250983ef3a729506c08c" HandleID="k8s-pod-network.f6751ab87d9aaab75fb28a97c86ec989a5755a1cbbe9250983ef3a729506c08c" Workload="146.190.50.84-k8s-nfs--server--provisioner--0-eth0" Jan 17 12:24:05.632368 containerd[1581]: 2025-01-17 12:24:05.529 [INFO][3288] ipam/ipam_plugin.go 265: Auto assigning IP ContainerID="f6751ab87d9aaab75fb28a97c86ec989a5755a1cbbe9250983ef3a729506c08c" HandleID="k8s-pod-network.f6751ab87d9aaab75fb28a97c86ec989a5755a1cbbe9250983ef3a729506c08c" Workload="146.190.50.84-k8s-nfs--server--provisioner--0-eth0" assignArgs=ipam.AutoAssignArgs{Num4:1, Num6:0, HandleID:(*string)(0xc000290820), Attrs:map[string]string{"namespace":"default", "node":"146.190.50.84", "pod":"nfs-server-provisioner-0", "timestamp":"2025-01-17 12:24:05.509886814 +0000 UTC"}, Hostname:"146.190.50.84", IPv4Pools:[]net.IPNet{}, IPv6Pools:[]net.IPNet{}, MaxBlocksPerHost:0, 
HostReservedAttrIPv4s:(*ipam.HostReservedAttr)(nil), HostReservedAttrIPv6s:(*ipam.HostReservedAttr)(nil), IntendedUse:"Workload"} Jan 17 12:24:05.632368 containerd[1581]: 2025-01-17 12:24:05.529 [INFO][3288] ipam/ipam_plugin.go 353: About to acquire host-wide IPAM lock. Jan 17 12:24:05.632368 containerd[1581]: 2025-01-17 12:24:05.529 [INFO][3288] ipam/ipam_plugin.go 368: Acquired host-wide IPAM lock. Jan 17 12:24:05.632368 containerd[1581]: 2025-01-17 12:24:05.530 [INFO][3288] ipam/ipam.go 107: Auto-assign 1 ipv4, 0 ipv6 addrs for host '146.190.50.84' Jan 17 12:24:05.632368 containerd[1581]: 2025-01-17 12:24:05.536 [INFO][3288] ipam/ipam.go 660: Looking up existing affinities for host handle="k8s-pod-network.f6751ab87d9aaab75fb28a97c86ec989a5755a1cbbe9250983ef3a729506c08c" host="146.190.50.84" Jan 17 12:24:05.632368 containerd[1581]: 2025-01-17 12:24:05.553 [INFO][3288] ipam/ipam.go 372: Looking up existing affinities for host host="146.190.50.84" Jan 17 12:24:05.632368 containerd[1581]: 2025-01-17 12:24:05.566 [INFO][3288] ipam/ipam.go 489: Trying affinity for 192.168.35.128/26 host="146.190.50.84" Jan 17 12:24:05.632368 containerd[1581]: 2025-01-17 12:24:05.574 [INFO][3288] ipam/ipam.go 155: Attempting to load block cidr=192.168.35.128/26 host="146.190.50.84" Jan 17 12:24:05.632368 containerd[1581]: 2025-01-17 12:24:05.578 [INFO][3288] ipam/ipam.go 232: Affinity is confirmed and block has been loaded cidr=192.168.35.128/26 host="146.190.50.84" Jan 17 12:24:05.632368 containerd[1581]: 2025-01-17 12:24:05.578 [INFO][3288] ipam/ipam.go 1180: Attempting to assign 1 addresses from block block=192.168.35.128/26 handle="k8s-pod-network.f6751ab87d9aaab75fb28a97c86ec989a5755a1cbbe9250983ef3a729506c08c" host="146.190.50.84" Jan 17 12:24:05.632368 containerd[1581]: 2025-01-17 12:24:05.582 [INFO][3288] ipam/ipam.go 1685: Creating new handle: k8s-pod-network.f6751ab87d9aaab75fb28a97c86ec989a5755a1cbbe9250983ef3a729506c08c Jan 17 12:24:05.632368 containerd[1581]: 2025-01-17 
12:24:05.590 [INFO][3288] ipam/ipam.go 1203: Writing block in order to claim IPs block=192.168.35.128/26 handle="k8s-pod-network.f6751ab87d9aaab75fb28a97c86ec989a5755a1cbbe9250983ef3a729506c08c" host="146.190.50.84" Jan 17 12:24:05.632368 containerd[1581]: 2025-01-17 12:24:05.600 [INFO][3288] ipam/ipam.go 1216: Successfully claimed IPs: [192.168.35.131/26] block=192.168.35.128/26 handle="k8s-pod-network.f6751ab87d9aaab75fb28a97c86ec989a5755a1cbbe9250983ef3a729506c08c" host="146.190.50.84" Jan 17 12:24:05.632368 containerd[1581]: 2025-01-17 12:24:05.600 [INFO][3288] ipam/ipam.go 847: Auto-assigned 1 out of 1 IPv4s: [192.168.35.131/26] handle="k8s-pod-network.f6751ab87d9aaab75fb28a97c86ec989a5755a1cbbe9250983ef3a729506c08c" host="146.190.50.84" Jan 17 12:24:05.632368 containerd[1581]: 2025-01-17 12:24:05.600 [INFO][3288] ipam/ipam_plugin.go 374: Released host-wide IPAM lock. Jan 17 12:24:05.632368 containerd[1581]: 2025-01-17 12:24:05.600 [INFO][3288] ipam/ipam_plugin.go 283: Calico CNI IPAM assigned addresses IPv4=[192.168.35.131/26] IPv6=[] ContainerID="f6751ab87d9aaab75fb28a97c86ec989a5755a1cbbe9250983ef3a729506c08c" HandleID="k8s-pod-network.f6751ab87d9aaab75fb28a97c86ec989a5755a1cbbe9250983ef3a729506c08c" Workload="146.190.50.84-k8s-nfs--server--provisioner--0-eth0" Jan 17 12:24:05.635201 containerd[1581]: 2025-01-17 12:24:05.603 [INFO][3278] cni-plugin/k8s.go 386: Populated endpoint ContainerID="f6751ab87d9aaab75fb28a97c86ec989a5755a1cbbe9250983ef3a729506c08c" Namespace="default" Pod="nfs-server-provisioner-0" WorkloadEndpoint="146.190.50.84-k8s-nfs--server--provisioner--0-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"146.190.50.84-k8s-nfs--server--provisioner--0-eth0", GenerateName:"nfs-server-provisioner-", Namespace:"default", SelfLink:"", UID:"61551699-ab10-4ebd-ae74-7c6d57667877", ResourceVersion:"1240", Generation:0, CreationTimestamp:time.Date(2025, 
time.January, 17, 12, 24, 4, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"app":"nfs-server-provisioner", "apps.kubernetes.io/pod-index":"0", "chart":"nfs-server-provisioner-1.8.0", "controller-revision-hash":"nfs-server-provisioner-d5cbb7f57", "heritage":"Helm", "projectcalico.org/namespace":"default", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"nfs-server-provisioner", "release":"nfs-server-provisioner", "statefulset.kubernetes.io/pod-name":"nfs-server-provisioner-0"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"146.190.50.84", ContainerID:"", Pod:"nfs-server-provisioner-0", Endpoint:"eth0", ServiceAccountName:"nfs-server-provisioner", IPNetworks:[]string{"192.168.35.131/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.default", "ksa.default.nfs-server-provisioner"}, InterfaceName:"cali60e51b789ff", MAC:"", Ports:[]v3.WorkloadEndpointPort{v3.WorkloadEndpointPort{Name:"nfs", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x801, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"nfs-udp", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"UDP"}, Port:0x801, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"nlockmgr", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x8023, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"nlockmgr-udp", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"UDP"}, Port:0x8023, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"mountd", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x4e50, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"mountd-udp", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"UDP"}, Port:0x4e50, 
HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"rquotad", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x36b, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"rquotad-udp", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"UDP"}, Port:0x36b, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"rpcbind", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x6f, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"rpcbind-udp", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"UDP"}, Port:0x6f, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"statd", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x296, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"statd-udp", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"UDP"}, Port:0x296, HostPort:0x0, HostIP:""}}, AllowSpoofedSourcePrefixes:[]string(nil)}} Jan 17 12:24:05.635201 containerd[1581]: 2025-01-17 12:24:05.603 [INFO][3278] cni-plugin/k8s.go 387: Calico CNI using IPs: [192.168.35.131/32] ContainerID="f6751ab87d9aaab75fb28a97c86ec989a5755a1cbbe9250983ef3a729506c08c" Namespace="default" Pod="nfs-server-provisioner-0" WorkloadEndpoint="146.190.50.84-k8s-nfs--server--provisioner--0-eth0" Jan 17 12:24:05.635201 containerd[1581]: 2025-01-17 12:24:05.604 [INFO][3278] cni-plugin/dataplane_linux.go 69: Setting the host side veth name to cali60e51b789ff ContainerID="f6751ab87d9aaab75fb28a97c86ec989a5755a1cbbe9250983ef3a729506c08c" Namespace="default" Pod="nfs-server-provisioner-0" WorkloadEndpoint="146.190.50.84-k8s-nfs--server--provisioner--0-eth0" Jan 17 12:24:05.635201 containerd[1581]: 2025-01-17 12:24:05.610 [INFO][3278] cni-plugin/dataplane_linux.go 508: Disabling IPv4 forwarding ContainerID="f6751ab87d9aaab75fb28a97c86ec989a5755a1cbbe9250983ef3a729506c08c" Namespace="default" Pod="nfs-server-provisioner-0" WorkloadEndpoint="146.190.50.84-k8s-nfs--server--provisioner--0-eth0" Jan 17 
12:24:05.635868 containerd[1581]: 2025-01-17 12:24:05.612 [INFO][3278] cni-plugin/k8s.go 414: Added Mac, interface name, and active container ID to endpoint ContainerID="f6751ab87d9aaab75fb28a97c86ec989a5755a1cbbe9250983ef3a729506c08c" Namespace="default" Pod="nfs-server-provisioner-0" WorkloadEndpoint="146.190.50.84-k8s-nfs--server--provisioner--0-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"146.190.50.84-k8s-nfs--server--provisioner--0-eth0", GenerateName:"nfs-server-provisioner-", Namespace:"default", SelfLink:"", UID:"61551699-ab10-4ebd-ae74-7c6d57667877", ResourceVersion:"1240", Generation:0, CreationTimestamp:time.Date(2025, time.January, 17, 12, 24, 4, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"app":"nfs-server-provisioner", "apps.kubernetes.io/pod-index":"0", "chart":"nfs-server-provisioner-1.8.0", "controller-revision-hash":"nfs-server-provisioner-d5cbb7f57", "heritage":"Helm", "projectcalico.org/namespace":"default", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"nfs-server-provisioner", "release":"nfs-server-provisioner", "statefulset.kubernetes.io/pod-name":"nfs-server-provisioner-0"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"146.190.50.84", ContainerID:"f6751ab87d9aaab75fb28a97c86ec989a5755a1cbbe9250983ef3a729506c08c", Pod:"nfs-server-provisioner-0", Endpoint:"eth0", ServiceAccountName:"nfs-server-provisioner", IPNetworks:[]string{"192.168.35.131/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.default", "ksa.default.nfs-server-provisioner"}, InterfaceName:"cali60e51b789ff", MAC:"22:7a:64:f9:02:48", 
Ports:[]v3.WorkloadEndpointPort{v3.WorkloadEndpointPort{Name:"nfs", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x801, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"nfs-udp", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"UDP"}, Port:0x801, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"nlockmgr", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x8023, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"nlockmgr-udp", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"UDP"}, Port:0x8023, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"mountd", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x4e50, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"mountd-udp", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"UDP"}, Port:0x4e50, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"rquotad", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x36b, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"rquotad-udp", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"UDP"}, Port:0x36b, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"rpcbind", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x6f, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"rpcbind-udp", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"UDP"}, Port:0x6f, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"statd", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x296, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"statd-udp", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"UDP"}, Port:0x296, HostPort:0x0, HostIP:""}}, AllowSpoofedSourcePrefixes:[]string(nil)}} Jan 17 12:24:05.635868 containerd[1581]: 2025-01-17 12:24:05.630 [INFO][3278] cni-plugin/k8s.go 500: Wrote updated endpoint to datastore 
ContainerID="f6751ab87d9aaab75fb28a97c86ec989a5755a1cbbe9250983ef3a729506c08c" Namespace="default" Pod="nfs-server-provisioner-0" WorkloadEndpoint="146.190.50.84-k8s-nfs--server--provisioner--0-eth0" Jan 17 12:24:05.675458 containerd[1581]: time="2025-01-17T12:24:05.674839590Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Jan 17 12:24:05.675458 containerd[1581]: time="2025-01-17T12:24:05.674939695Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Jan 17 12:24:05.675458 containerd[1581]: time="2025-01-17T12:24:05.674965305Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Jan 17 12:24:05.676184 containerd[1581]: time="2025-01-17T12:24:05.676037478Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Jan 17 12:24:05.784659 containerd[1581]: time="2025-01-17T12:24:05.784498853Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:nfs-server-provisioner-0,Uid:61551699-ab10-4ebd-ae74-7c6d57667877,Namespace:default,Attempt:0,} returns sandbox id \"f6751ab87d9aaab75fb28a97c86ec989a5755a1cbbe9250983ef3a729506c08c\"" Jan 17 12:24:05.787528 containerd[1581]: time="2025-01-17T12:24:05.787466689Z" level=info msg="PullImage \"registry.k8s.io/sig-storage/nfs-provisioner:v4.0.8\"" Jan 17 12:24:06.552666 kubelet[1932]: E0117 12:24:06.552601 1932 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Jan 17 12:24:07.233701 systemd-networkd[1221]: cali60e51b789ff: Gained IPv6LL Jan 17 12:24:07.554450 kubelet[1932]: E0117 12:24:07.553430 1932 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Jan 17 12:24:08.432540 systemd[1]: 
var-lib-containerd-tmpmounts-containerd\x2dmount916764605.mount: Deactivated successfully. Jan 17 12:24:08.554966 kubelet[1932]: E0117 12:24:08.554461 1932 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Jan 17 12:24:09.555001 kubelet[1932]: E0117 12:24:09.554940 1932 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Jan 17 12:24:10.555565 kubelet[1932]: E0117 12:24:10.555502 1932 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Jan 17 12:24:11.556099 kubelet[1932]: E0117 12:24:11.556020 1932 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Jan 17 12:24:11.984429 containerd[1581]: time="2025-01-17T12:24:11.983410088Z" level=info msg="ImageCreate event name:\"registry.k8s.io/sig-storage/nfs-provisioner:v4.0.8\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jan 17 12:24:11.991429 containerd[1581]: time="2025-01-17T12:24:11.989849690Z" level=info msg="stop pulling image registry.k8s.io/sig-storage/nfs-provisioner:v4.0.8: active requests=0, bytes read=91039406" Jan 17 12:24:11.997458 containerd[1581]: time="2025-01-17T12:24:11.995997996Z" level=info msg="ImageCreate event name:\"sha256:fd0b16f70b66b72bcb2f91d556fa33eba02729c44ffc5f2c16130e7f9fbed3c4\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jan 17 12:24:12.033001 containerd[1581]: time="2025-01-17T12:24:12.032908865Z" level=info msg="ImageCreate event name:\"registry.k8s.io/sig-storage/nfs-provisioner@sha256:c825f3d5e28bde099bd7a3daace28772d412c9157ad47fa752a9ad0baafc118d\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jan 17 12:24:12.038783 containerd[1581]: time="2025-01-17T12:24:12.038674198Z" level=info msg="Pulled image \"registry.k8s.io/sig-storage/nfs-provisioner:v4.0.8\" with image id 
\"sha256:fd0b16f70b66b72bcb2f91d556fa33eba02729c44ffc5f2c16130e7f9fbed3c4\", repo tag \"registry.k8s.io/sig-storage/nfs-provisioner:v4.0.8\", repo digest \"registry.k8s.io/sig-storage/nfs-provisioner@sha256:c825f3d5e28bde099bd7a3daace28772d412c9157ad47fa752a9ad0baafc118d\", size \"91036984\" in 6.251139348s" Jan 17 12:24:12.038783 containerd[1581]: time="2025-01-17T12:24:12.038746359Z" level=info msg="PullImage \"registry.k8s.io/sig-storage/nfs-provisioner:v4.0.8\" returns image reference \"sha256:fd0b16f70b66b72bcb2f91d556fa33eba02729c44ffc5f2c16130e7f9fbed3c4\"" Jan 17 12:24:12.044872 containerd[1581]: time="2025-01-17T12:24:12.044773427Z" level=info msg="CreateContainer within sandbox \"f6751ab87d9aaab75fb28a97c86ec989a5755a1cbbe9250983ef3a729506c08c\" for container &ContainerMetadata{Name:nfs-server-provisioner,Attempt:0,}" Jan 17 12:24:12.190699 containerd[1581]: time="2025-01-17T12:24:12.190618208Z" level=info msg="CreateContainer within sandbox \"f6751ab87d9aaab75fb28a97c86ec989a5755a1cbbe9250983ef3a729506c08c\" for &ContainerMetadata{Name:nfs-server-provisioner,Attempt:0,} returns container id \"9cb8da7815567cf3c19cbc466928b0985b566d4f7ac1bbd48a34a3679468d4f7\"" Jan 17 12:24:12.192431 containerd[1581]: time="2025-01-17T12:24:12.191792679Z" level=info msg="StartContainer for \"9cb8da7815567cf3c19cbc466928b0985b566d4f7ac1bbd48a34a3679468d4f7\"" Jan 17 12:24:12.378592 containerd[1581]: time="2025-01-17T12:24:12.378365340Z" level=info msg="StartContainer for \"9cb8da7815567cf3c19cbc466928b0985b566d4f7ac1bbd48a34a3679468d4f7\" returns successfully" Jan 17 12:24:12.561448 kubelet[1932]: E0117 12:24:12.561328 1932 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Jan 17 12:24:13.038704 kubelet[1932]: I0117 12:24:13.038593 1932 pod_startup_latency_tracker.go:102] "Observed pod startup duration" pod="default/nfs-server-provisioner-0" podStartSLOduration=2.786298828 podStartE2EDuration="9.038479997s" 
podCreationTimestamp="2025-01-17 12:24:04 +0000 UTC" firstStartedPulling="2025-01-17 12:24:05.786943075 +0000 UTC m=+51.750804550" lastFinishedPulling="2025-01-17 12:24:12.039124243 +0000 UTC m=+58.002985719" observedRunningTime="2025-01-17 12:24:13.037689083 +0000 UTC m=+59.001550600" watchObservedRunningTime="2025-01-17 12:24:13.038479997 +0000 UTC m=+59.002341493" Jan 17 12:24:13.562298 kubelet[1932]: E0117 12:24:13.562204 1932 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Jan 17 12:24:14.516882 kubelet[1932]: E0117 12:24:14.516762 1932 file.go:104] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Jan 17 12:24:14.556102 containerd[1581]: time="2025-01-17T12:24:14.556045837Z" level=info msg="StopPodSandbox for \"4187182d907aba21b008b15174667544e9a4ec688cf61f9451555023461033e8\"" Jan 17 12:24:14.562946 kubelet[1932]: E0117 12:24:14.562865 1932 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Jan 17 12:24:14.712338 containerd[1581]: 2025-01-17 12:24:14.628 [WARNING][3461] cni-plugin/k8s.go 572: CNI_CONTAINERID does not match WorkloadEndpoint ContainerID, don't delete WEP. 
ContainerID="4187182d907aba21b008b15174667544e9a4ec688cf61f9451555023461033e8" WorkloadEndpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"146.190.50.84-k8s-nginx--deployment--6d5f899847--jx6fq-eth0", GenerateName:"nginx-deployment-6d5f899847-", Namespace:"default", SelfLink:"", UID:"10c50357-38da-4ca5-9a41-fc973e78f347", ResourceVersion:"1204", Generation:0, CreationTimestamp:time.Date(2025, time.January, 17, 12, 23, 33, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"app":"nginx", "pod-template-hash":"6d5f899847", "projectcalico.org/namespace":"default", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"default"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"146.190.50.84", ContainerID:"434171b5e90fa5baeabef7db195f19ef8cddb28bd68f1f349b207778da1654c9", Pod:"nginx-deployment-6d5f899847-jx6fq", Endpoint:"eth0", ServiceAccountName:"default", IPNetworks:[]string{"192.168.35.130/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.default", "ksa.default.default"}, InterfaceName:"cali723a9c84411", MAC:"", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil)}} Jan 17 12:24:14.712338 containerd[1581]: 2025-01-17 12:24:14.629 [INFO][3461] cni-plugin/k8s.go 608: Cleaning up netns ContainerID="4187182d907aba21b008b15174667544e9a4ec688cf61f9451555023461033e8" Jan 17 12:24:14.712338 containerd[1581]: 2025-01-17 12:24:14.629 [INFO][3461] cni-plugin/dataplane_linux.go 555: CleanUpNamespace called with no netns name, ignoring. 
ContainerID="4187182d907aba21b008b15174667544e9a4ec688cf61f9451555023461033e8" iface="eth0" netns="" Jan 17 12:24:14.712338 containerd[1581]: 2025-01-17 12:24:14.629 [INFO][3461] cni-plugin/k8s.go 615: Releasing IP address(es) ContainerID="4187182d907aba21b008b15174667544e9a4ec688cf61f9451555023461033e8" Jan 17 12:24:14.712338 containerd[1581]: 2025-01-17 12:24:14.629 [INFO][3461] cni-plugin/utils.go 188: Calico CNI releasing IP address ContainerID="4187182d907aba21b008b15174667544e9a4ec688cf61f9451555023461033e8" Jan 17 12:24:14.712338 containerd[1581]: 2025-01-17 12:24:14.682 [INFO][3467] ipam/ipam_plugin.go 412: Releasing address using handleID ContainerID="4187182d907aba21b008b15174667544e9a4ec688cf61f9451555023461033e8" HandleID="k8s-pod-network.4187182d907aba21b008b15174667544e9a4ec688cf61f9451555023461033e8" Workload="146.190.50.84-k8s-nginx--deployment--6d5f899847--jx6fq-eth0" Jan 17 12:24:14.712338 containerd[1581]: 2025-01-17 12:24:14.682 [INFO][3467] ipam/ipam_plugin.go 353: About to acquire host-wide IPAM lock. Jan 17 12:24:14.712338 containerd[1581]: 2025-01-17 12:24:14.682 [INFO][3467] ipam/ipam_plugin.go 368: Acquired host-wide IPAM lock. Jan 17 12:24:14.712338 containerd[1581]: 2025-01-17 12:24:14.700 [WARNING][3467] ipam/ipam_plugin.go 429: Asked to release address but it doesn't exist. 
Ignoring ContainerID="4187182d907aba21b008b15174667544e9a4ec688cf61f9451555023461033e8" HandleID="k8s-pod-network.4187182d907aba21b008b15174667544e9a4ec688cf61f9451555023461033e8" Workload="146.190.50.84-k8s-nginx--deployment--6d5f899847--jx6fq-eth0" Jan 17 12:24:14.712338 containerd[1581]: 2025-01-17 12:24:14.700 [INFO][3467] ipam/ipam_plugin.go 440: Releasing address using workloadID ContainerID="4187182d907aba21b008b15174667544e9a4ec688cf61f9451555023461033e8" HandleID="k8s-pod-network.4187182d907aba21b008b15174667544e9a4ec688cf61f9451555023461033e8" Workload="146.190.50.84-k8s-nginx--deployment--6d5f899847--jx6fq-eth0" Jan 17 12:24:14.712338 containerd[1581]: 2025-01-17 12:24:14.705 [INFO][3467] ipam/ipam_plugin.go 374: Released host-wide IPAM lock. Jan 17 12:24:14.712338 containerd[1581]: 2025-01-17 12:24:14.707 [INFO][3461] cni-plugin/k8s.go 621: Teardown processing complete. ContainerID="4187182d907aba21b008b15174667544e9a4ec688cf61f9451555023461033e8" Jan 17 12:24:14.712338 containerd[1581]: time="2025-01-17T12:24:14.710727531Z" level=info msg="TearDown network for sandbox \"4187182d907aba21b008b15174667544e9a4ec688cf61f9451555023461033e8\" successfully" Jan 17 12:24:14.712338 containerd[1581]: time="2025-01-17T12:24:14.710769937Z" level=info msg="StopPodSandbox for \"4187182d907aba21b008b15174667544e9a4ec688cf61f9451555023461033e8\" returns successfully" Jan 17 12:24:14.770581 containerd[1581]: time="2025-01-17T12:24:14.770080866Z" level=info msg="RemovePodSandbox for \"4187182d907aba21b008b15174667544e9a4ec688cf61f9451555023461033e8\"" Jan 17 12:24:14.770581 containerd[1581]: time="2025-01-17T12:24:14.770169701Z" level=info msg="Forcibly stopping sandbox \"4187182d907aba21b008b15174667544e9a4ec688cf61f9451555023461033e8\"" Jan 17 12:24:14.945695 containerd[1581]: 2025-01-17 12:24:14.871 [WARNING][3487] cni-plugin/k8s.go 572: CNI_CONTAINERID does not match WorkloadEndpoint ContainerID, don't delete WEP. 
ContainerID="4187182d907aba21b008b15174667544e9a4ec688cf61f9451555023461033e8" WorkloadEndpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"146.190.50.84-k8s-nginx--deployment--6d5f899847--jx6fq-eth0", GenerateName:"nginx-deployment-6d5f899847-", Namespace:"default", SelfLink:"", UID:"10c50357-38da-4ca5-9a41-fc973e78f347", ResourceVersion:"1204", Generation:0, CreationTimestamp:time.Date(2025, time.January, 17, 12, 23, 33, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"app":"nginx", "pod-template-hash":"6d5f899847", "projectcalico.org/namespace":"default", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"default"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"146.190.50.84", ContainerID:"434171b5e90fa5baeabef7db195f19ef8cddb28bd68f1f349b207778da1654c9", Pod:"nginx-deployment-6d5f899847-jx6fq", Endpoint:"eth0", ServiceAccountName:"default", IPNetworks:[]string{"192.168.35.130/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.default", "ksa.default.default"}, InterfaceName:"cali723a9c84411", MAC:"", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil)}} Jan 17 12:24:14.945695 containerd[1581]: 2025-01-17 12:24:14.872 [INFO][3487] cni-plugin/k8s.go 608: Cleaning up netns ContainerID="4187182d907aba21b008b15174667544e9a4ec688cf61f9451555023461033e8" Jan 17 12:24:14.945695 containerd[1581]: 2025-01-17 12:24:14.872 [INFO][3487] cni-plugin/dataplane_linux.go 555: CleanUpNamespace called with no netns name, ignoring. 
ContainerID="4187182d907aba21b008b15174667544e9a4ec688cf61f9451555023461033e8" iface="eth0" netns="" Jan 17 12:24:14.945695 containerd[1581]: 2025-01-17 12:24:14.872 [INFO][3487] cni-plugin/k8s.go 615: Releasing IP address(es) ContainerID="4187182d907aba21b008b15174667544e9a4ec688cf61f9451555023461033e8" Jan 17 12:24:14.945695 containerd[1581]: 2025-01-17 12:24:14.872 [INFO][3487] cni-plugin/utils.go 188: Calico CNI releasing IP address ContainerID="4187182d907aba21b008b15174667544e9a4ec688cf61f9451555023461033e8" Jan 17 12:24:14.945695 containerd[1581]: 2025-01-17 12:24:14.913 [INFO][3494] ipam/ipam_plugin.go 412: Releasing address using handleID ContainerID="4187182d907aba21b008b15174667544e9a4ec688cf61f9451555023461033e8" HandleID="k8s-pod-network.4187182d907aba21b008b15174667544e9a4ec688cf61f9451555023461033e8" Workload="146.190.50.84-k8s-nginx--deployment--6d5f899847--jx6fq-eth0" Jan 17 12:24:14.945695 containerd[1581]: 2025-01-17 12:24:14.914 [INFO][3494] ipam/ipam_plugin.go 353: About to acquire host-wide IPAM lock. Jan 17 12:24:14.945695 containerd[1581]: 2025-01-17 12:24:14.914 [INFO][3494] ipam/ipam_plugin.go 368: Acquired host-wide IPAM lock. Jan 17 12:24:14.945695 containerd[1581]: 2025-01-17 12:24:14.936 [WARNING][3494] ipam/ipam_plugin.go 429: Asked to release address but it doesn't exist. 
Ignoring ContainerID="4187182d907aba21b008b15174667544e9a4ec688cf61f9451555023461033e8" HandleID="k8s-pod-network.4187182d907aba21b008b15174667544e9a4ec688cf61f9451555023461033e8" Workload="146.190.50.84-k8s-nginx--deployment--6d5f899847--jx6fq-eth0" Jan 17 12:24:14.945695 containerd[1581]: 2025-01-17 12:24:14.936 [INFO][3494] ipam/ipam_plugin.go 440: Releasing address using workloadID ContainerID="4187182d907aba21b008b15174667544e9a4ec688cf61f9451555023461033e8" HandleID="k8s-pod-network.4187182d907aba21b008b15174667544e9a4ec688cf61f9451555023461033e8" Workload="146.190.50.84-k8s-nginx--deployment--6d5f899847--jx6fq-eth0" Jan 17 12:24:14.945695 containerd[1581]: 2025-01-17 12:24:14.939 [INFO][3494] ipam/ipam_plugin.go 374: Released host-wide IPAM lock. Jan 17 12:24:14.945695 containerd[1581]: 2025-01-17 12:24:14.941 [INFO][3487] cni-plugin/k8s.go 621: Teardown processing complete. ContainerID="4187182d907aba21b008b15174667544e9a4ec688cf61f9451555023461033e8" Jan 17 12:24:14.945695 containerd[1581]: time="2025-01-17T12:24:14.943880609Z" level=info msg="TearDown network for sandbox \"4187182d907aba21b008b15174667544e9a4ec688cf61f9451555023461033e8\" successfully" Jan 17 12:24:14.975535 containerd[1581]: time="2025-01-17T12:24:14.975451957Z" level=warning msg="Failed to get podSandbox status for container event for sandboxID \"4187182d907aba21b008b15174667544e9a4ec688cf61f9451555023461033e8\": an error occurred when try to find sandbox: not found. Sending the event with nil podSandboxStatus." 
Jan 17 12:24:14.975940 containerd[1581]: time="2025-01-17T12:24:14.975888068Z" level=info msg="RemovePodSandbox \"4187182d907aba21b008b15174667544e9a4ec688cf61f9451555023461033e8\" returns successfully" Jan 17 12:24:14.978061 containerd[1581]: time="2025-01-17T12:24:14.977658259Z" level=info msg="StopPodSandbox for \"0250f9c017126e270dfb5bd213f3fb19a925545ec5e13961c4e6c5bcba04f13e\"" Jan 17 12:24:15.148104 containerd[1581]: 2025-01-17 12:24:15.076 [WARNING][3512] cni-plugin/k8s.go 572: CNI_CONTAINERID does not match WorkloadEndpoint ContainerID, don't delete WEP. ContainerID="0250f9c017126e270dfb5bd213f3fb19a925545ec5e13961c4e6c5bcba04f13e" WorkloadEndpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"146.190.50.84-k8s-csi--node--driver--qzxmk-eth0", GenerateName:"csi-node-driver-", Namespace:"calico-system", SelfLink:"", UID:"34cd60d5-9542-4f9c-a341-f3b5c6223a93", ResourceVersion:"1181", Generation:0, CreationTimestamp:time.Date(2025, time.January, 17, 12, 23, 14, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"app.kubernetes.io/name":"csi-node-driver", "controller-revision-hash":"55b695c467", "k8s-app":"csi-node-driver", "name":"csi-node-driver", "pod-template-generation":"1", "projectcalico.org/namespace":"calico-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"csi-node-driver"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"146.190.50.84", ContainerID:"e34cf9516f0a79cc99572c22457fdbb16feaf658f488f3a71749b876519ac1b4", Pod:"csi-node-driver-qzxmk", Endpoint:"eth0", ServiceAccountName:"csi-node-driver", IPNetworks:[]string{"192.168.35.129/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", 
Profiles:[]string{"kns.calico-system", "ksa.calico-system.csi-node-driver"}, InterfaceName:"calif51086bc2db", MAC:"", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil)}} Jan 17 12:24:15.148104 containerd[1581]: 2025-01-17 12:24:15.077 [INFO][3512] cni-plugin/k8s.go 608: Cleaning up netns ContainerID="0250f9c017126e270dfb5bd213f3fb19a925545ec5e13961c4e6c5bcba04f13e" Jan 17 12:24:15.148104 containerd[1581]: 2025-01-17 12:24:15.077 [INFO][3512] cni-plugin/dataplane_linux.go 555: CleanUpNamespace called with no netns name, ignoring. ContainerID="0250f9c017126e270dfb5bd213f3fb19a925545ec5e13961c4e6c5bcba04f13e" iface="eth0" netns="" Jan 17 12:24:15.148104 containerd[1581]: 2025-01-17 12:24:15.077 [INFO][3512] cni-plugin/k8s.go 615: Releasing IP address(es) ContainerID="0250f9c017126e270dfb5bd213f3fb19a925545ec5e13961c4e6c5bcba04f13e" Jan 17 12:24:15.148104 containerd[1581]: 2025-01-17 12:24:15.077 [INFO][3512] cni-plugin/utils.go 188: Calico CNI releasing IP address ContainerID="0250f9c017126e270dfb5bd213f3fb19a925545ec5e13961c4e6c5bcba04f13e" Jan 17 12:24:15.148104 containerd[1581]: 2025-01-17 12:24:15.129 [INFO][3518] ipam/ipam_plugin.go 412: Releasing address using handleID ContainerID="0250f9c017126e270dfb5bd213f3fb19a925545ec5e13961c4e6c5bcba04f13e" HandleID="k8s-pod-network.0250f9c017126e270dfb5bd213f3fb19a925545ec5e13961c4e6c5bcba04f13e" Workload="146.190.50.84-k8s-csi--node--driver--qzxmk-eth0" Jan 17 12:24:15.148104 containerd[1581]: 2025-01-17 12:24:15.130 [INFO][3518] ipam/ipam_plugin.go 353: About to acquire host-wide IPAM lock. Jan 17 12:24:15.148104 containerd[1581]: 2025-01-17 12:24:15.130 [INFO][3518] ipam/ipam_plugin.go 368: Acquired host-wide IPAM lock. Jan 17 12:24:15.148104 containerd[1581]: 2025-01-17 12:24:15.139 [WARNING][3518] ipam/ipam_plugin.go 429: Asked to release address but it doesn't exist. 
Ignoring ContainerID="0250f9c017126e270dfb5bd213f3fb19a925545ec5e13961c4e6c5bcba04f13e" HandleID="k8s-pod-network.0250f9c017126e270dfb5bd213f3fb19a925545ec5e13961c4e6c5bcba04f13e" Workload="146.190.50.84-k8s-csi--node--driver--qzxmk-eth0"
Jan 17 12:24:15.148104 containerd[1581]: 2025-01-17 12:24:15.140 [INFO][3518] ipam/ipam_plugin.go 440: Releasing address using workloadID ContainerID="0250f9c017126e270dfb5bd213f3fb19a925545ec5e13961c4e6c5bcba04f13e" HandleID="k8s-pod-network.0250f9c017126e270dfb5bd213f3fb19a925545ec5e13961c4e6c5bcba04f13e" Workload="146.190.50.84-k8s-csi--node--driver--qzxmk-eth0"
Jan 17 12:24:15.148104 containerd[1581]: 2025-01-17 12:24:15.143 [INFO][3518] ipam/ipam_plugin.go 374: Released host-wide IPAM lock.
Jan 17 12:24:15.148104 containerd[1581]: 2025-01-17 12:24:15.146 [INFO][3512] cni-plugin/k8s.go 621: Teardown processing complete. ContainerID="0250f9c017126e270dfb5bd213f3fb19a925545ec5e13961c4e6c5bcba04f13e"
Jan 17 12:24:15.149732 containerd[1581]: time="2025-01-17T12:24:15.149314092Z" level=info msg="TearDown network for sandbox \"0250f9c017126e270dfb5bd213f3fb19a925545ec5e13961c4e6c5bcba04f13e\" successfully"
Jan 17 12:24:15.149732 containerd[1581]: time="2025-01-17T12:24:15.149367960Z" level=info msg="StopPodSandbox for \"0250f9c017126e270dfb5bd213f3fb19a925545ec5e13961c4e6c5bcba04f13e\" returns successfully"
Jan 17 12:24:15.150576 containerd[1581]: time="2025-01-17T12:24:15.150479233Z" level=info msg="RemovePodSandbox for \"0250f9c017126e270dfb5bd213f3fb19a925545ec5e13961c4e6c5bcba04f13e\""
Jan 17 12:24:15.150576 containerd[1581]: time="2025-01-17T12:24:15.150529164Z" level=info msg="Forcibly stopping sandbox \"0250f9c017126e270dfb5bd213f3fb19a925545ec5e13961c4e6c5bcba04f13e\""
Jan 17 12:24:15.322588 containerd[1581]: 2025-01-17 12:24:15.257 [WARNING][3536] cni-plugin/k8s.go 572: CNI_CONTAINERID does not match WorkloadEndpoint ContainerID, don't delete WEP. ContainerID="0250f9c017126e270dfb5bd213f3fb19a925545ec5e13961c4e6c5bcba04f13e" WorkloadEndpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"146.190.50.84-k8s-csi--node--driver--qzxmk-eth0", GenerateName:"csi-node-driver-", Namespace:"calico-system", SelfLink:"", UID:"34cd60d5-9542-4f9c-a341-f3b5c6223a93", ResourceVersion:"1181", Generation:0, CreationTimestamp:time.Date(2025, time.January, 17, 12, 23, 14, 0, time.Local), DeletionTimestamp:<nil>, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"app.kubernetes.io/name":"csi-node-driver", "controller-revision-hash":"55b695c467", "k8s-app":"csi-node-driver", "name":"csi-node-driver", "pod-template-generation":"1", "projectcalico.org/namespace":"calico-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"csi-node-driver"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"146.190.50.84", ContainerID:"e34cf9516f0a79cc99572c22457fdbb16feaf658f488f3a71749b876519ac1b4", Pod:"csi-node-driver-qzxmk", Endpoint:"eth0", ServiceAccountName:"csi-node-driver", IPNetworks:[]string{"192.168.35.129/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-system", "ksa.calico-system.csi-node-driver"}, InterfaceName:"calif51086bc2db", MAC:"", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil)}}
Jan 17 12:24:15.322588 containerd[1581]: 2025-01-17 12:24:15.257 [INFO][3536] cni-plugin/k8s.go 608: Cleaning up netns ContainerID="0250f9c017126e270dfb5bd213f3fb19a925545ec5e13961c4e6c5bcba04f13e"
Jan 17 12:24:15.322588 containerd[1581]: 2025-01-17 12:24:15.257 [INFO][3536] cni-plugin/dataplane_linux.go 555: CleanUpNamespace called with no netns name, ignoring. ContainerID="0250f9c017126e270dfb5bd213f3fb19a925545ec5e13961c4e6c5bcba04f13e" iface="eth0" netns=""
Jan 17 12:24:15.322588 containerd[1581]: 2025-01-17 12:24:15.257 [INFO][3536] cni-plugin/k8s.go 615: Releasing IP address(es) ContainerID="0250f9c017126e270dfb5bd213f3fb19a925545ec5e13961c4e6c5bcba04f13e"
Jan 17 12:24:15.322588 containerd[1581]: 2025-01-17 12:24:15.258 [INFO][3536] cni-plugin/utils.go 188: Calico CNI releasing IP address ContainerID="0250f9c017126e270dfb5bd213f3fb19a925545ec5e13961c4e6c5bcba04f13e"
Jan 17 12:24:15.322588 containerd[1581]: 2025-01-17 12:24:15.297 [INFO][3542] ipam/ipam_plugin.go 412: Releasing address using handleID ContainerID="0250f9c017126e270dfb5bd213f3fb19a925545ec5e13961c4e6c5bcba04f13e" HandleID="k8s-pod-network.0250f9c017126e270dfb5bd213f3fb19a925545ec5e13961c4e6c5bcba04f13e" Workload="146.190.50.84-k8s-csi--node--driver--qzxmk-eth0"
Jan 17 12:24:15.322588 containerd[1581]: 2025-01-17 12:24:15.298 [INFO][3542] ipam/ipam_plugin.go 353: About to acquire host-wide IPAM lock.
Jan 17 12:24:15.322588 containerd[1581]: 2025-01-17 12:24:15.298 [INFO][3542] ipam/ipam_plugin.go 368: Acquired host-wide IPAM lock.
Jan 17 12:24:15.322588 containerd[1581]: 2025-01-17 12:24:15.313 [WARNING][3542] ipam/ipam_plugin.go 429: Asked to release address but it doesn't exist. Ignoring ContainerID="0250f9c017126e270dfb5bd213f3fb19a925545ec5e13961c4e6c5bcba04f13e" HandleID="k8s-pod-network.0250f9c017126e270dfb5bd213f3fb19a925545ec5e13961c4e6c5bcba04f13e" Workload="146.190.50.84-k8s-csi--node--driver--qzxmk-eth0"
Jan 17 12:24:15.322588 containerd[1581]: 2025-01-17 12:24:15.313 [INFO][3542] ipam/ipam_plugin.go 440: Releasing address using workloadID ContainerID="0250f9c017126e270dfb5bd213f3fb19a925545ec5e13961c4e6c5bcba04f13e" HandleID="k8s-pod-network.0250f9c017126e270dfb5bd213f3fb19a925545ec5e13961c4e6c5bcba04f13e" Workload="146.190.50.84-k8s-csi--node--driver--qzxmk-eth0"
Jan 17 12:24:15.322588 containerd[1581]: 2025-01-17 12:24:15.318 [INFO][3542] ipam/ipam_plugin.go 374: Released host-wide IPAM lock.
Jan 17 12:24:15.322588 containerd[1581]: 2025-01-17 12:24:15.320 [INFO][3536] cni-plugin/k8s.go 621: Teardown processing complete. ContainerID="0250f9c017126e270dfb5bd213f3fb19a925545ec5e13961c4e6c5bcba04f13e"
Jan 17 12:24:15.324022 containerd[1581]: time="2025-01-17T12:24:15.322657672Z" level=info msg="TearDown network for sandbox \"0250f9c017126e270dfb5bd213f3fb19a925545ec5e13961c4e6c5bcba04f13e\" successfully"
Jan 17 12:24:15.329535 containerd[1581]: time="2025-01-17T12:24:15.329322469Z" level=warning msg="Failed to get podSandbox status for container event for sandboxID \"0250f9c017126e270dfb5bd213f3fb19a925545ec5e13961c4e6c5bcba04f13e\": an error occurred when try to find sandbox: not found. Sending the event with nil podSandboxStatus."
Jan 17 12:24:15.329535 containerd[1581]: time="2025-01-17T12:24:15.329506855Z" level=info msg="RemovePodSandbox \"0250f9c017126e270dfb5bd213f3fb19a925545ec5e13961c4e6c5bcba04f13e\" returns successfully"
Jan 17 12:24:15.563937 kubelet[1932]: E0117 12:24:15.563356 1932 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests"
Jan 17 12:24:16.564468 kubelet[1932]: E0117 12:24:16.564329 1932 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests"
Jan 17 12:24:17.565733 kubelet[1932]: E0117 12:24:17.565654 1932 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests"
Jan 17 12:24:18.566556 kubelet[1932]: E0117 12:24:18.566465 1932 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests"
Jan 17 12:24:19.567747 kubelet[1932]: E0117 12:24:19.567631 1932 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests"
Jan 17 12:24:20.567991 kubelet[1932]: E0117 12:24:20.567915 1932 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests"
Jan 17 12:24:21.568286 kubelet[1932]: E0117 12:24:21.568186 1932 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests"
Jan 17 12:24:22.088426 kubelet[1932]: I0117 12:24:22.087152 1932 topology_manager.go:215] "Topology Admit Handler" podUID="83a12b53-a59a-41f1-a6db-d5dd71d533d0" podNamespace="default" podName="test-pod-1"
Jan 17 12:24:22.206814 kubelet[1932]: I0117 12:24:22.206746 1932 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-hzctz\" (UniqueName: \"kubernetes.io/projected/83a12b53-a59a-41f1-a6db-d5dd71d533d0-kube-api-access-hzctz\") pod \"test-pod-1\" (UID: \"83a12b53-a59a-41f1-a6db-d5dd71d533d0\") " pod="default/test-pod-1"
Jan 17 12:24:22.207998 kubelet[1932]: I0117 12:24:22.207949 1932 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"pvc-de2b777d-1ebf-4d51-93ce-34acea0d9da2\" (UniqueName: \"kubernetes.io/nfs/83a12b53-a59a-41f1-a6db-d5dd71d533d0-pvc-de2b777d-1ebf-4d51-93ce-34acea0d9da2\") pod \"test-pod-1\" (UID: \"83a12b53-a59a-41f1-a6db-d5dd71d533d0\") " pod="default/test-pod-1"
Jan 17 12:24:22.393497 kernel: FS-Cache: Loaded
Jan 17 12:24:22.493050 kernel: RPC: Registered named UNIX socket transport module.
Jan 17 12:24:22.493220 kernel: RPC: Registered udp transport module.
Jan 17 12:24:22.493258 kernel: RPC: Registered tcp transport module.
Jan 17 12:24:22.494270 kernel: RPC: Registered tcp-with-tls transport module.
Jan 17 12:24:22.495918 kernel: RPC: Registered tcp NFSv4.1 backchannel transport module.
Jan 17 12:24:22.569494 kubelet[1932]: E0117 12:24:22.569393 1932 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests"
Jan 17 12:24:22.914533 kernel: NFS: Registering the id_resolver key type
Jan 17 12:24:22.917623 kernel: Key type id_resolver registered
Jan 17 12:24:22.920946 kernel: Key type id_legacy registered
Jan 17 12:24:22.975301 nfsidmap[3598]: nss_getpwnam: name 'root@nfs-server-provisioner.default.svc.cluster.local' does not map into domain '3.0-1-97f5d36106'
Jan 17 12:24:22.982815 nfsidmap[3599]: nss_name_to_gid: name 'root@nfs-server-provisioner.default.svc.cluster.local' does not map into domain '3.0-1-97f5d36106'
Jan 17 12:24:23.294716 containerd[1581]: time="2025-01-17T12:24:23.294496877Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:test-pod-1,Uid:83a12b53-a59a-41f1-a6db-d5dd71d533d0,Namespace:default,Attempt:0,}"
Jan 17 12:24:23.570072 kubelet[1932]: E0117 12:24:23.569864 1932 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests"
Jan 17 12:24:23.608960 systemd-networkd[1221]: cali5ec59c6bf6e: Link UP
Jan 17 12:24:23.609687 systemd-networkd[1221]: cali5ec59c6bf6e: Gained carrier
Jan 17 12:24:23.624664 containerd[1581]: 2025-01-17 12:24:23.453 [INFO][3600] cni-plugin/plugin.go 325: Calico CNI found existing endpoint: &{{WorkloadEndpoint projectcalico.org/v3} {146.190.50.84-k8s-test--pod--1-eth0 default 83a12b53-a59a-41f1-a6db-d5dd71d533d0 1305 0 2025-01-17 12:24:05 +0000 UTC map[projectcalico.org/namespace:default projectcalico.org/orchestrator:k8s projectcalico.org/serviceaccount:default] map[] [] [] []} {k8s 146.190.50.84 test-pod-1 eth0 default [] [] [kns.default ksa.default.default] cali5ec59c6bf6e [] []}} ContainerID="acb2a2198a9c0eb6d99c1a55bd17188bacd13d04fc31dab2a3b2514baaa0489e" Namespace="default" Pod="test-pod-1" WorkloadEndpoint="146.190.50.84-k8s-test--pod--1-"
Jan 17 12:24:23.624664 containerd[1581]: 2025-01-17 12:24:23.453 [INFO][3600] cni-plugin/k8s.go 77: Extracted identifiers for CmdAddK8s ContainerID="acb2a2198a9c0eb6d99c1a55bd17188bacd13d04fc31dab2a3b2514baaa0489e" Namespace="default" Pod="test-pod-1" WorkloadEndpoint="146.190.50.84-k8s-test--pod--1-eth0"
Jan 17 12:24:23.624664 containerd[1581]: 2025-01-17 12:24:23.511 [INFO][3611] ipam/ipam_plugin.go 225: Calico CNI IPAM request count IPv4=1 IPv6=0 ContainerID="acb2a2198a9c0eb6d99c1a55bd17188bacd13d04fc31dab2a3b2514baaa0489e" HandleID="k8s-pod-network.acb2a2198a9c0eb6d99c1a55bd17188bacd13d04fc31dab2a3b2514baaa0489e" Workload="146.190.50.84-k8s-test--pod--1-eth0"
Jan 17 12:24:23.624664 containerd[1581]: 2025-01-17 12:24:23.532 [INFO][3611] ipam/ipam_plugin.go 265: Auto assigning IP ContainerID="acb2a2198a9c0eb6d99c1a55bd17188bacd13d04fc31dab2a3b2514baaa0489e" HandleID="k8s-pod-network.acb2a2198a9c0eb6d99c1a55bd17188bacd13d04fc31dab2a3b2514baaa0489e" Workload="146.190.50.84-k8s-test--pod--1-eth0" assignArgs=ipam.AutoAssignArgs{Num4:1, Num6:0, HandleID:(*string)(0xc000336db0), Attrs:map[string]string{"namespace":"default", "node":"146.190.50.84", "pod":"test-pod-1", "timestamp":"2025-01-17 12:24:23.510873388 +0000 UTC"}, Hostname:"146.190.50.84", IPv4Pools:[]net.IPNet{}, IPv6Pools:[]net.IPNet{}, MaxBlocksPerHost:0, HostReservedAttrIPv4s:(*ipam.HostReservedAttr)(nil), HostReservedAttrIPv6s:(*ipam.HostReservedAttr)(nil), IntendedUse:"Workload"}
Jan 17 12:24:23.624664 containerd[1581]: 2025-01-17 12:24:23.532 [INFO][3611] ipam/ipam_plugin.go 353: About to acquire host-wide IPAM lock.
Jan 17 12:24:23.624664 containerd[1581]: 2025-01-17 12:24:23.533 [INFO][3611] ipam/ipam_plugin.go 368: Acquired host-wide IPAM lock.
Jan 17 12:24:23.624664 containerd[1581]: 2025-01-17 12:24:23.533 [INFO][3611] ipam/ipam.go 107: Auto-assign 1 ipv4, 0 ipv6 addrs for host '146.190.50.84'
Jan 17 12:24:23.624664 containerd[1581]: 2025-01-17 12:24:23.544 [INFO][3611] ipam/ipam.go 660: Looking up existing affinities for host handle="k8s-pod-network.acb2a2198a9c0eb6d99c1a55bd17188bacd13d04fc31dab2a3b2514baaa0489e" host="146.190.50.84"
Jan 17 12:24:23.624664 containerd[1581]: 2025-01-17 12:24:23.556 [INFO][3611] ipam/ipam.go 372: Looking up existing affinities for host host="146.190.50.84"
Jan 17 12:24:23.624664 containerd[1581]: 2025-01-17 12:24:23.565 [INFO][3611] ipam/ipam.go 489: Trying affinity for 192.168.35.128/26 host="146.190.50.84"
Jan 17 12:24:23.624664 containerd[1581]: 2025-01-17 12:24:23.570 [INFO][3611] ipam/ipam.go 155: Attempting to load block cidr=192.168.35.128/26 host="146.190.50.84"
Jan 17 12:24:23.624664 containerd[1581]: 2025-01-17 12:24:23.575 [INFO][3611] ipam/ipam.go 232: Affinity is confirmed and block has been loaded cidr=192.168.35.128/26 host="146.190.50.84"
Jan 17 12:24:23.624664 containerd[1581]: 2025-01-17 12:24:23.575 [INFO][3611] ipam/ipam.go 1180: Attempting to assign 1 addresses from block block=192.168.35.128/26 handle="k8s-pod-network.acb2a2198a9c0eb6d99c1a55bd17188bacd13d04fc31dab2a3b2514baaa0489e" host="146.190.50.84"
Jan 17 12:24:23.624664 containerd[1581]: 2025-01-17 12:24:23.579 [INFO][3611] ipam/ipam.go 1685: Creating new handle: k8s-pod-network.acb2a2198a9c0eb6d99c1a55bd17188bacd13d04fc31dab2a3b2514baaa0489e
Jan 17 12:24:23.624664 containerd[1581]: 2025-01-17 12:24:23.585 [INFO][3611] ipam/ipam.go 1203: Writing block in order to claim IPs block=192.168.35.128/26 handle="k8s-pod-network.acb2a2198a9c0eb6d99c1a55bd17188bacd13d04fc31dab2a3b2514baaa0489e" host="146.190.50.84"
Jan 17 12:24:23.624664 containerd[1581]: 2025-01-17 12:24:23.596 [INFO][3611] ipam/ipam.go 1216: Successfully claimed IPs: [192.168.35.132/26] block=192.168.35.128/26 handle="k8s-pod-network.acb2a2198a9c0eb6d99c1a55bd17188bacd13d04fc31dab2a3b2514baaa0489e" host="146.190.50.84"
Jan 17 12:24:23.624664 containerd[1581]: 2025-01-17 12:24:23.597 [INFO][3611] ipam/ipam.go 847: Auto-assigned 1 out of 1 IPv4s: [192.168.35.132/26] handle="k8s-pod-network.acb2a2198a9c0eb6d99c1a55bd17188bacd13d04fc31dab2a3b2514baaa0489e" host="146.190.50.84"
Jan 17 12:24:23.624664 containerd[1581]: 2025-01-17 12:24:23.597 [INFO][3611] ipam/ipam_plugin.go 374: Released host-wide IPAM lock.
Jan 17 12:24:23.624664 containerd[1581]: 2025-01-17 12:24:23.597 [INFO][3611] ipam/ipam_plugin.go 283: Calico CNI IPAM assigned addresses IPv4=[192.168.35.132/26] IPv6=[] ContainerID="acb2a2198a9c0eb6d99c1a55bd17188bacd13d04fc31dab2a3b2514baaa0489e" HandleID="k8s-pod-network.acb2a2198a9c0eb6d99c1a55bd17188bacd13d04fc31dab2a3b2514baaa0489e" Workload="146.190.50.84-k8s-test--pod--1-eth0"
Jan 17 12:24:23.624664 containerd[1581]: 2025-01-17 12:24:23.600 [INFO][3600] cni-plugin/k8s.go 386: Populated endpoint ContainerID="acb2a2198a9c0eb6d99c1a55bd17188bacd13d04fc31dab2a3b2514baaa0489e" Namespace="default" Pod="test-pod-1" WorkloadEndpoint="146.190.50.84-k8s-test--pod--1-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"146.190.50.84-k8s-test--pod--1-eth0", GenerateName:"", Namespace:"default", SelfLink:"", UID:"83a12b53-a59a-41f1-a6db-d5dd71d533d0", ResourceVersion:"1305", Generation:0, CreationTimestamp:time.Date(2025, time.January, 17, 12, 24, 5, 0, time.Local), DeletionTimestamp:<nil>, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"projectcalico.org/namespace":"default", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"default"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"146.190.50.84", ContainerID:"", Pod:"test-pod-1", Endpoint:"eth0", ServiceAccountName:"default", IPNetworks:[]string{"192.168.35.132/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.default", "ksa.default.default"}, InterfaceName:"cali5ec59c6bf6e", MAC:"", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil)}}
Jan 17 12:24:23.634061 containerd[1581]: 2025-01-17 12:24:23.600 [INFO][3600] cni-plugin/k8s.go 387: Calico CNI using IPs: [192.168.35.132/32] ContainerID="acb2a2198a9c0eb6d99c1a55bd17188bacd13d04fc31dab2a3b2514baaa0489e" Namespace="default" Pod="test-pod-1" WorkloadEndpoint="146.190.50.84-k8s-test--pod--1-eth0"
Jan 17 12:24:23.634061 containerd[1581]: 2025-01-17 12:24:23.601 [INFO][3600] cni-plugin/dataplane_linux.go 69: Setting the host side veth name to cali5ec59c6bf6e ContainerID="acb2a2198a9c0eb6d99c1a55bd17188bacd13d04fc31dab2a3b2514baaa0489e" Namespace="default" Pod="test-pod-1" WorkloadEndpoint="146.190.50.84-k8s-test--pod--1-eth0"
Jan 17 12:24:23.634061 containerd[1581]: 2025-01-17 12:24:23.604 [INFO][3600] cni-plugin/dataplane_linux.go 508: Disabling IPv4 forwarding ContainerID="acb2a2198a9c0eb6d99c1a55bd17188bacd13d04fc31dab2a3b2514baaa0489e" Namespace="default" Pod="test-pod-1" WorkloadEndpoint="146.190.50.84-k8s-test--pod--1-eth0"
Jan 17 12:24:23.634061 containerd[1581]: 2025-01-17 12:24:23.604 [INFO][3600] cni-plugin/k8s.go 414: Added Mac, interface name, and active container ID to endpoint ContainerID="acb2a2198a9c0eb6d99c1a55bd17188bacd13d04fc31dab2a3b2514baaa0489e" Namespace="default" Pod="test-pod-1" WorkloadEndpoint="146.190.50.84-k8s-test--pod--1-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"146.190.50.84-k8s-test--pod--1-eth0", GenerateName:"", Namespace:"default", SelfLink:"", UID:"83a12b53-a59a-41f1-a6db-d5dd71d533d0", ResourceVersion:"1305", Generation:0, CreationTimestamp:time.Date(2025, time.January, 17, 12, 24, 5, 0, time.Local), DeletionTimestamp:<nil>, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"projectcalico.org/namespace":"default", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"default"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"146.190.50.84", ContainerID:"acb2a2198a9c0eb6d99c1a55bd17188bacd13d04fc31dab2a3b2514baaa0489e", Pod:"test-pod-1", Endpoint:"eth0", ServiceAccountName:"default", IPNetworks:[]string{"192.168.35.132/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.default", "ksa.default.default"}, InterfaceName:"cali5ec59c6bf6e", MAC:"76:41:9c:dd:9c:6b", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil)}}
Jan 17 12:24:23.634061 containerd[1581]: 2025-01-17 12:24:23.617 [INFO][3600] cni-plugin/k8s.go 500: Wrote updated endpoint to datastore ContainerID="acb2a2198a9c0eb6d99c1a55bd17188bacd13d04fc31dab2a3b2514baaa0489e" Namespace="default" Pod="test-pod-1" WorkloadEndpoint="146.190.50.84-k8s-test--pod--1-eth0"
Jan 17 12:24:23.694024 containerd[1581]: time="2025-01-17T12:24:23.692871702Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1
Jan 17 12:24:23.694024 containerd[1581]: time="2025-01-17T12:24:23.693016943Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1
Jan 17 12:24:23.694024 containerd[1581]: time="2025-01-17T12:24:23.693045644Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
Jan 17 12:24:23.694024 containerd[1581]: time="2025-01-17T12:24:23.693815409Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
Jan 17 12:24:23.745756 systemd[1]: run-containerd-runc-k8s.io-acb2a2198a9c0eb6d99c1a55bd17188bacd13d04fc31dab2a3b2514baaa0489e-runc.AH526x.mount: Deactivated successfully.
Jan 17 12:24:23.817801 containerd[1581]: time="2025-01-17T12:24:23.817735967Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:test-pod-1,Uid:83a12b53-a59a-41f1-a6db-d5dd71d533d0,Namespace:default,Attempt:0,} returns sandbox id \"acb2a2198a9c0eb6d99c1a55bd17188bacd13d04fc31dab2a3b2514baaa0489e\""
Jan 17 12:24:23.822363 containerd[1581]: time="2025-01-17T12:24:23.820260606Z" level=info msg="PullImage \"ghcr.io/flatcar/nginx:latest\""
Jan 17 12:24:24.446650 containerd[1581]: time="2025-01-17T12:24:24.446480834Z" level=info msg="ImageUpdate event name:\"ghcr.io/flatcar/nginx:latest\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Jan 17 12:24:24.449840 containerd[1581]: time="2025-01-17T12:24:24.449721622Z" level=info msg="stop pulling image ghcr.io/flatcar/nginx:latest: active requests=0, bytes read=61"
Jan 17 12:24:24.456397 containerd[1581]: time="2025-01-17T12:24:24.456292906Z" level=info msg="Pulled image \"ghcr.io/flatcar/nginx:latest\" with image id \"sha256:29ef6eaebfc53650f3a4609edbf9d35e866f56b2c5e01d32d93439031b300f0b\", repo tag \"ghcr.io/flatcar/nginx:latest\", repo digest \"ghcr.io/flatcar/nginx@sha256:eca1d1ff18c7af45f86b7e0b572090f563a676ddca3da2ecff678390366335ad\", size \"71035896\" in 635.965333ms"
Jan 17 12:24:24.456397 containerd[1581]: time="2025-01-17T12:24:24.456366987Z" level=info msg="PullImage \"ghcr.io/flatcar/nginx:latest\" returns image reference \"sha256:29ef6eaebfc53650f3a4609edbf9d35e866f56b2c5e01d32d93439031b300f0b\""
Jan 17 12:24:24.460704 containerd[1581]: time="2025-01-17T12:24:24.460634206Z" level=info msg="CreateContainer within sandbox \"acb2a2198a9c0eb6d99c1a55bd17188bacd13d04fc31dab2a3b2514baaa0489e\" for container &ContainerMetadata{Name:test,Attempt:0,}"
Jan 17 12:24:24.509282 containerd[1581]: time="2025-01-17T12:24:24.508352170Z" level=info msg="CreateContainer within sandbox \"acb2a2198a9c0eb6d99c1a55bd17188bacd13d04fc31dab2a3b2514baaa0489e\" for &ContainerMetadata{Name:test,Attempt:0,} returns container id \"2343a430d1f6eddae7f102b4d81a1bd6856acb5427c106af528ed7b7ee99b119\""
Jan 17 12:24:24.509829 containerd[1581]: time="2025-01-17T12:24:24.509581604Z" level=info msg="StartContainer for \"2343a430d1f6eddae7f102b4d81a1bd6856acb5427c106af528ed7b7ee99b119\""
Jan 17 12:24:24.572445 kubelet[1932]: E0117 12:24:24.570514 1932 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests"
Jan 17 12:24:24.592755 systemd[1]: run-containerd-runc-k8s.io-2343a430d1f6eddae7f102b4d81a1bd6856acb5427c106af528ed7b7ee99b119-runc.v1W3AE.mount: Deactivated successfully.
Jan 17 12:24:24.635691 containerd[1581]: time="2025-01-17T12:24:24.635526501Z" level=info msg="StartContainer for \"2343a430d1f6eddae7f102b4d81a1bd6856acb5427c106af528ed7b7ee99b119\" returns successfully"
Jan 17 12:24:25.537872 systemd-networkd[1221]: cali5ec59c6bf6e: Gained IPv6LL
Jan 17 12:24:25.571112 kubelet[1932]: E0117 12:24:25.571042 1932 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests"
Jan 17 12:24:26.571957 kubelet[1932]: E0117 12:24:26.571806 1932 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests"
Jan 17 12:24:27.572748 kubelet[1932]: E0117 12:24:27.572651 1932 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests"
Jan 17 12:24:28.573756 kubelet[1932]: E0117 12:24:28.573637 1932 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests"
Jan 17 12:24:29.574346 kubelet[1932]: E0117 12:24:29.574264 1932 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests"
Jan 17 12:24:30.574851 kubelet[1932]: E0117 12:24:30.574756 1932 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests"