Jan 17 12:19:05.167777 kernel: Linux version 6.6.71-flatcar (build@pony-truck.infra.kinvolk.io) (x86_64-cros-linux-gnu-gcc (Gentoo Hardened 13.3.1_p20240614 p17) 13.3.1 20240614, GNU ld (Gentoo 2.42 p3) 2.42.0) #1 SMP PREEMPT_DYNAMIC Fri Jan 17 10:39:07 -00 2025
Jan 17 12:19:05.167820 kernel: Command line: BOOT_IMAGE=/flatcar/vmlinuz-a mount.usr=/dev/mapper/usr verity.usr=PARTUUID=7130c94a-213a-4e5a-8e26-6cce9662f132 rootflags=rw mount.usrflags=ro consoleblank=0 root=LABEL=ROOT console=ttyS0,115200n8 console=tty0 flatcar.first_boot=detected flatcar.oem.id=digitalocean verity.usrhash=bf1e0d81a0170850ab02d370c1a7c7a3f5983c980b3730f748240a3bda2dbb2e
Jan 17 12:19:05.167841 kernel: BIOS-provided physical RAM map:
Jan 17 12:19:05.167853 kernel: BIOS-e820: [mem 0x0000000000000000-0x000000000009fbff] usable
Jan 17 12:19:05.167864 kernel: BIOS-e820: [mem 0x000000000009fc00-0x000000000009ffff] reserved
Jan 17 12:19:05.167876 kernel: BIOS-e820: [mem 0x00000000000f0000-0x00000000000fffff] reserved
Jan 17 12:19:05.167890 kernel: BIOS-e820: [mem 0x0000000000100000-0x000000007ffdafff] usable
Jan 17 12:19:05.167903 kernel: BIOS-e820: [mem 0x000000007ffdb000-0x000000007fffffff] reserved
Jan 17 12:19:05.167915 kernel: BIOS-e820: [mem 0x00000000feffc000-0x00000000feffffff] reserved
Jan 17 12:19:05.167930 kernel: BIOS-e820: [mem 0x00000000fffc0000-0x00000000ffffffff] reserved
Jan 17 12:19:05.167943 kernel: NX (Execute Disable) protection: active
Jan 17 12:19:05.167955 kernel: APIC: Static calls initialized
Jan 17 12:19:05.167974 kernel: SMBIOS 2.8 present.
Jan 17 12:19:05.167988 kernel: DMI: DigitalOcean Droplet/Droplet, BIOS 20171212 12/12/2017
Jan 17 12:19:05.168003 kernel: Hypervisor detected: KVM
Jan 17 12:19:05.168020 kernel: kvm-clock: Using msrs 4b564d01 and 4b564d00
Jan 17 12:19:05.168039 kernel: kvm-clock: using sched offset of 3545244451 cycles
Jan 17 12:19:05.168054 kernel: clocksource: kvm-clock: mask: 0xffffffffffffffff max_cycles: 0x1cd42e4dffb, max_idle_ns: 881590591483 ns
Jan 17 12:19:05.168068 kernel: tsc: Detected 2000.000 MHz processor
Jan 17 12:19:05.168081 kernel: e820: update [mem 0x00000000-0x00000fff] usable ==> reserved
Jan 17 12:19:05.168096 kernel: e820: remove [mem 0x000a0000-0x000fffff] usable
Jan 17 12:19:05.168110 kernel: last_pfn = 0x7ffdb max_arch_pfn = 0x400000000
Jan 17 12:19:05.168123 kernel: MTRR map: 4 entries (3 fixed + 1 variable; max 19), built from 8 variable MTRRs
Jan 17 12:19:05.168137 kernel: x86/PAT: Configuration [0-7]: WB WC UC- UC WB WP UC- WT
Jan 17 12:19:05.168154 kernel: ACPI: Early table checksum verification disabled
Jan 17 12:19:05.168168 kernel: ACPI: RSDP 0x00000000000F5950 000014 (v00 BOCHS )
Jan 17 12:19:05.168182 kernel: ACPI: RSDT 0x000000007FFE1986 000038 (v01 BOCHS BXPC 00000001 BXPC 00000001)
Jan 17 12:19:05.168196 kernel: ACPI: FACP 0x000000007FFE176A 000074 (v01 BOCHS BXPC 00000001 BXPC 00000001)
Jan 17 12:19:05.168209 kernel: ACPI: DSDT 0x000000007FFE0040 00172A (v01 BOCHS BXPC 00000001 BXPC 00000001)
Jan 17 12:19:05.168223 kernel: ACPI: FACS 0x000000007FFE0000 000040
Jan 17 12:19:05.168236 kernel: ACPI: APIC 0x000000007FFE17DE 000080 (v01 BOCHS BXPC 00000001 BXPC 00000001)
Jan 17 12:19:05.168250 kernel: ACPI: HPET 0x000000007FFE185E 000038 (v01 BOCHS BXPC 00000001 BXPC 00000001)
Jan 17 12:19:05.168263 kernel: ACPI: SRAT 0x000000007FFE1896 0000C8 (v01 BOCHS BXPC 00000001 BXPC 00000001)
Jan 17 12:19:05.168281 kernel: ACPI: WAET 0x000000007FFE195E 000028 (v01 BOCHS BXPC 00000001 BXPC 00000001)
Jan 17 12:19:05.168294 kernel: ACPI: Reserving FACP table memory at [mem 0x7ffe176a-0x7ffe17dd]
Jan 17 12:19:05.168307 kernel: ACPI: Reserving DSDT table memory at [mem 0x7ffe0040-0x7ffe1769]
Jan 17 12:19:05.168320 kernel: ACPI: Reserving FACS table memory at [mem 0x7ffe0000-0x7ffe003f]
Jan 17 12:19:05.168333 kernel: ACPI: Reserving APIC table memory at [mem 0x7ffe17de-0x7ffe185d]
Jan 17 12:19:05.168345 kernel: ACPI: Reserving HPET table memory at [mem 0x7ffe185e-0x7ffe1895]
Jan 17 12:19:05.168358 kernel: ACPI: Reserving SRAT table memory at [mem 0x7ffe1896-0x7ffe195d]
Jan 17 12:19:05.168381 kernel: ACPI: Reserving WAET table memory at [mem 0x7ffe195e-0x7ffe1985]
Jan 17 12:19:05.168396 kernel: SRAT: PXM 0 -> APIC 0x00 -> Node 0
Jan 17 12:19:05.168411 kernel: SRAT: PXM 0 -> APIC 0x01 -> Node 0
Jan 17 12:19:05.168425 kernel: ACPI: SRAT: Node 0 PXM 0 [mem 0x00000000-0x0009ffff]
Jan 17 12:19:05.168440 kernel: ACPI: SRAT: Node 0 PXM 0 [mem 0x00100000-0x7fffffff]
Jan 17 12:19:05.168462 kernel: NUMA: Node 0 [mem 0x00000000-0x0009ffff] + [mem 0x00100000-0x7ffdafff] -> [mem 0x00000000-0x7ffdafff]
Jan 17 12:19:05.168477 kernel: NODE_DATA(0) allocated [mem 0x7ffd5000-0x7ffdafff]
Jan 17 12:19:05.168495 kernel: Zone ranges:
Jan 17 12:19:05.168527 kernel: DMA [mem 0x0000000000001000-0x0000000000ffffff]
Jan 17 12:19:05.168539 kernel: DMA32 [mem 0x0000000001000000-0x000000007ffdafff]
Jan 17 12:19:05.168553 kernel: Normal empty
Jan 17 12:19:05.168568 kernel: Movable zone start for each node
Jan 17 12:19:05.168582 kernel: Early memory node ranges
Jan 17 12:19:05.168597 kernel: node 0: [mem 0x0000000000001000-0x000000000009efff]
Jan 17 12:19:05.168612 kernel: node 0: [mem 0x0000000000100000-0x000000007ffdafff]
Jan 17 12:19:05.168626 kernel: Initmem setup node 0 [mem 0x0000000000001000-0x000000007ffdafff]
Jan 17 12:19:05.168645 kernel: On node 0, zone DMA: 1 pages in unavailable ranges
Jan 17 12:19:05.168660 kernel: On node 0, zone DMA: 97 pages in unavailable ranges
Jan 17 12:19:05.168681 kernel: On node 0, zone DMA32: 37 pages in unavailable ranges
Jan 17 12:19:05.168695 kernel: ACPI: PM-Timer IO Port: 0x608
Jan 17 12:19:05.168710 kernel: ACPI: LAPIC_NMI (acpi_id[0xff] dfl dfl lint[0x1])
Jan 17 12:19:05.168724 kernel: IOAPIC[0]: apic_id 0, version 17, address 0xfec00000, GSI 0-23
Jan 17 12:19:05.168739 kernel: ACPI: INT_SRC_OVR (bus 0 bus_irq 0 global_irq 2 dfl dfl)
Jan 17 12:19:05.168753 kernel: ACPI: INT_SRC_OVR (bus 0 bus_irq 5 global_irq 5 high level)
Jan 17 12:19:05.168768 kernel: ACPI: INT_SRC_OVR (bus 0 bus_irq 9 global_irq 9 high level)
Jan 17 12:19:05.168789 kernel: ACPI: INT_SRC_OVR (bus 0 bus_irq 10 global_irq 10 high level)
Jan 17 12:19:05.168816 kernel: ACPI: INT_SRC_OVR (bus 0 bus_irq 11 global_irq 11 high level)
Jan 17 12:19:05.168831 kernel: ACPI: Using ACPI (MADT) for SMP configuration information
Jan 17 12:19:05.168846 kernel: ACPI: HPET id: 0x8086a201 base: 0xfed00000
Jan 17 12:19:05.168860 kernel: TSC deadline timer available
Jan 17 12:19:05.168875 kernel: smpboot: Allowing 2 CPUs, 0 hotplug CPUs
Jan 17 12:19:05.168890 kernel: kvm-guest: APIC: eoi() replaced with kvm_guest_apic_eoi_write()
Jan 17 12:19:05.168904 kernel: [mem 0x80000000-0xfeffbfff] available for PCI devices
Jan 17 12:19:05.168927 kernel: Booting paravirtualized kernel on KVM
Jan 17 12:19:05.168941 kernel: clocksource: refined-jiffies: mask: 0xffffffff max_cycles: 0xffffffff, max_idle_ns: 1910969940391419 ns
Jan 17 12:19:05.168962 kernel: setup_percpu: NR_CPUS:512 nr_cpumask_bits:2 nr_cpu_ids:2 nr_node_ids:1
Jan 17 12:19:05.168977 kernel: percpu: Embedded 58 pages/cpu s197032 r8192 d32344 u1048576
Jan 17 12:19:05.168992 kernel: pcpu-alloc: s197032 r8192 d32344 u1048576 alloc=1*2097152
Jan 17 12:19:05.169006 kernel: pcpu-alloc: [0] 0 1
Jan 17 12:19:05.169021 kernel: kvm-guest: PV spinlocks disabled, no host support
Jan 17 12:19:05.169039 kernel: Kernel command line: rootflags=rw mount.usrflags=ro BOOT_IMAGE=/flatcar/vmlinuz-a mount.usr=/dev/mapper/usr verity.usr=PARTUUID=7130c94a-213a-4e5a-8e26-6cce9662f132 rootflags=rw mount.usrflags=ro consoleblank=0 root=LABEL=ROOT console=ttyS0,115200n8 console=tty0 flatcar.first_boot=detected flatcar.oem.id=digitalocean verity.usrhash=bf1e0d81a0170850ab02d370c1a7c7a3f5983c980b3730f748240a3bda2dbb2e
Jan 17 12:19:05.169054 kernel: Unknown kernel command line parameters "BOOT_IMAGE=/flatcar/vmlinuz-a", will be passed to user space.
Jan 17 12:19:05.169068 kernel: random: crng init done
Jan 17 12:19:05.169087 kernel: Dentry cache hash table entries: 262144 (order: 9, 2097152 bytes, linear)
Jan 17 12:19:05.169193 kernel: Inode-cache hash table entries: 131072 (order: 8, 1048576 bytes, linear)
Jan 17 12:19:05.169208 kernel: Fallback order for Node 0: 0
Jan 17 12:19:05.169222 kernel: Built 1 zonelists, mobility grouping on. Total pages: 515803
Jan 17 12:19:05.169237 kernel: Policy zone: DMA32
Jan 17 12:19:05.169252 kernel: mem auto-init: stack:off, heap alloc:off, heap free:off
Jan 17 12:19:05.169267 kernel: Memory: 1971204K/2096612K available (12288K kernel code, 2299K rwdata, 22728K rodata, 42848K init, 2344K bss, 125148K reserved, 0K cma-reserved)
Jan 17 12:19:05.169282 kernel: SLUB: HWalign=64, Order=0-3, MinObjects=0, CPUs=2, Nodes=1
Jan 17 12:19:05.169299 kernel: Kernel/User page tables isolation: enabled
Jan 17 12:19:05.169311 kernel: ftrace: allocating 37918 entries in 149 pages
Jan 17 12:19:05.169326 kernel: ftrace: allocated 149 pages with 4 groups
Jan 17 12:19:05.169340 kernel: Dynamic Preempt: voluntary
Jan 17 12:19:05.169354 kernel: rcu: Preemptible hierarchical RCU implementation.
Jan 17 12:19:05.169377 kernel: rcu: RCU event tracing is enabled.
Jan 17 12:19:05.169391 kernel: rcu: RCU restricting CPUs from NR_CPUS=512 to nr_cpu_ids=2.
Jan 17 12:19:05.169406 kernel: Trampoline variant of Tasks RCU enabled.
Jan 17 12:19:05.169420 kernel: Rude variant of Tasks RCU enabled.
Jan 17 12:19:05.169435 kernel: Tracing variant of Tasks RCU enabled.
Jan 17 12:19:05.169455 kernel: rcu: RCU calculated value of scheduler-enlistment delay is 100 jiffies.
Jan 17 12:19:05.169469 kernel: rcu: Adjusting geometry for rcu_fanout_leaf=16, nr_cpu_ids=2
Jan 17 12:19:05.169484 kernel: NR_IRQS: 33024, nr_irqs: 440, preallocated irqs: 16
Jan 17 12:19:05.169498 kernel: rcu: srcu_init: Setting srcu_struct sizes based on contention.
Jan 17 12:19:05.169558 kernel: Console: colour VGA+ 80x25
Jan 17 12:19:05.169573 kernel: printk: console [tty0] enabled
Jan 17 12:19:05.169587 kernel: printk: console [ttyS0] enabled
Jan 17 12:19:05.169602 kernel: ACPI: Core revision 20230628
Jan 17 12:19:05.169617 kernel: clocksource: hpet: mask: 0xffffffff max_cycles: 0xffffffff, max_idle_ns: 19112604467 ns
Jan 17 12:19:05.169636 kernel: APIC: Switch to symmetric I/O mode setup
Jan 17 12:19:05.169650 kernel: x2apic enabled
Jan 17 12:19:05.169665 kernel: APIC: Switched APIC routing to: physical x2apic
Jan 17 12:19:05.169679 kernel: ..TIMER: vector=0x30 apic1=0 pin1=2 apic2=-1 pin2=-1
Jan 17 12:19:05.169694 kernel: clocksource: tsc-early: mask: 0xffffffffffffffff max_cycles: 0x39a85c9bff6, max_idle_ns: 881590591483 ns
Jan 17 12:19:05.169708 kernel: Calibrating delay loop (skipped) preset value.. 4000.00 BogoMIPS (lpj=2000000)
Jan 17 12:19:05.169723 kernel: Last level iTLB entries: 4KB 0, 2MB 0, 4MB 0
Jan 17 12:19:05.169738 kernel: Last level dTLB entries: 4KB 0, 2MB 0, 4MB 0, 1GB 0
Jan 17 12:19:05.169769 kernel: Spectre V1 : Mitigation: usercopy/swapgs barriers and __user pointer sanitization
Jan 17 12:19:05.169784 kernel: Spectre V2 : Mitigation: Retpolines
Jan 17 12:19:05.169799 kernel: Spectre V2 : Spectre v2 / SpectreRSB mitigation: Filling RSB on context switch
Jan 17 12:19:05.169817 kernel: Spectre V2 : Spectre v2 / SpectreRSB : Filling RSB on VMEXIT
Jan 17 12:19:05.169833 kernel: Spectre V2 : Enabling Restricted Speculation for firmware calls
Jan 17 12:19:05.169848 kernel: Spectre V2 : mitigation: Enabling conditional Indirect Branch Prediction Barrier
Jan 17 12:19:05.169863 kernel: Speculative Store Bypass: Mitigation: Speculative Store Bypass disabled via prctl
Jan 17 12:19:05.169879 kernel: MDS: Mitigation: Clear CPU buffers
Jan 17 12:19:05.169895 kernel: MMIO Stale Data: Vulnerable: Clear CPU buffers attempted, no microcode
Jan 17 12:19:05.169920 kernel: x86/fpu: Supporting XSAVE feature 0x001: 'x87 floating point registers'
Jan 17 12:19:05.169935 kernel: x86/fpu: Supporting XSAVE feature 0x002: 'SSE registers'
Jan 17 12:19:05.169951 kernel: x86/fpu: Supporting XSAVE feature 0x004: 'AVX registers'
Jan 17 12:19:05.169967 kernel: x86/fpu: xstate_offset[2]: 576, xstate_sizes[2]: 256
Jan 17 12:19:05.169983 kernel: x86/fpu: Enabled xstate features 0x7, context size is 832 bytes, using 'standard' format.
Jan 17 12:19:05.169998 kernel: Freeing SMP alternatives memory: 32K
Jan 17 12:19:05.170014 kernel: pid_max: default: 32768 minimum: 301
Jan 17 12:19:05.170029 kernel: LSM: initializing lsm=lockdown,capability,landlock,selinux,integrity
Jan 17 12:19:05.170048 kernel: landlock: Up and running.
Jan 17 12:19:05.170063 kernel: SELinux: Initializing.
Jan 17 12:19:05.170078 kernel: Mount-cache hash table entries: 4096 (order: 3, 32768 bytes, linear)
Jan 17 12:19:05.170094 kernel: Mountpoint-cache hash table entries: 4096 (order: 3, 32768 bytes, linear)
Jan 17 12:19:05.170109 kernel: smpboot: CPU0: Intel DO-Regular (family: 0x6, model: 0x4f, stepping: 0x1)
Jan 17 12:19:05.170125 kernel: RCU Tasks: Setting shift to 1 and lim to 1 rcu_task_cb_adjust=1 rcu_task_cpu_ids=2.
Jan 17 12:19:05.170141 kernel: RCU Tasks Rude: Setting shift to 1 and lim to 1 rcu_task_cb_adjust=1 rcu_task_cpu_ids=2.
Jan 17 12:19:05.170157 kernel: RCU Tasks Trace: Setting shift to 1 and lim to 1 rcu_task_cb_adjust=1 rcu_task_cpu_ids=2.
Jan 17 12:19:05.170176 kernel: Performance Events: unsupported p6 CPU model 79 no PMU driver, software events only.
Jan 17 12:19:05.170191 kernel: signal: max sigframe size: 1776
Jan 17 12:19:05.170207 kernel: rcu: Hierarchical SRCU implementation.
Jan 17 12:19:05.170223 kernel: rcu: Max phase no-delay instances is 400.
Jan 17 12:19:05.170238 kernel: NMI watchdog: Perf NMI watchdog permanently disabled
Jan 17 12:19:05.170254 kernel: smp: Bringing up secondary CPUs ...
Jan 17 12:19:05.170269 kernel: smpboot: x86: Booting SMP configuration:
Jan 17 12:19:05.170284 kernel: .... node #0, CPUs: #1
Jan 17 12:19:05.170300 kernel: smp: Brought up 1 node, 2 CPUs
Jan 17 12:19:05.170321 kernel: smpboot: Max logical packages: 1
Jan 17 12:19:05.170340 kernel: smpboot: Total of 2 processors activated (8000.00 BogoMIPS)
Jan 17 12:19:05.170355 kernel: devtmpfs: initialized
Jan 17 12:19:05.170371 kernel: x86/mm: Memory block size: 128MB
Jan 17 12:19:05.170386 kernel: clocksource: jiffies: mask: 0xffffffff max_cycles: 0xffffffff, max_idle_ns: 1911260446275000 ns
Jan 17 12:19:05.170402 kernel: futex hash table entries: 512 (order: 3, 32768 bytes, linear)
Jan 17 12:19:05.170417 kernel: pinctrl core: initialized pinctrl subsystem
Jan 17 12:19:05.170433 kernel: NET: Registered PF_NETLINK/PF_ROUTE protocol family
Jan 17 12:19:05.170448 kernel: audit: initializing netlink subsys (disabled)
Jan 17 12:19:05.170464 kernel: audit: type=2000 audit(1737116344.058:1): state=initialized audit_enabled=0 res=1
Jan 17 12:19:05.170482 kernel: thermal_sys: Registered thermal governor 'step_wise'
Jan 17 12:19:05.170498 kernel: thermal_sys: Registered thermal governor 'user_space'
Jan 17 12:19:05.171831 kernel: cpuidle: using governor menu
Jan 17 12:19:05.171864 kernel: acpiphp: ACPI Hot Plug PCI Controller Driver version: 0.5
Jan 17 12:19:05.171881 kernel: dca service started, version 1.12.1
Jan 17 12:19:05.171897 kernel: PCI: Using configuration type 1 for base access
Jan 17 12:19:05.171913 kernel: kprobes: kprobe jump-optimization is enabled. All kprobes are optimized if possible.
Jan 17 12:19:05.171928 kernel: HugeTLB: registered 2.00 MiB page size, pre-allocated 0 pages
Jan 17 12:19:05.171943 kernel: HugeTLB: 28 KiB vmemmap can be freed for a 2.00 MiB page
Jan 17 12:19:05.171968 kernel: ACPI: Added _OSI(Module Device)
Jan 17 12:19:05.171984 kernel: ACPI: Added _OSI(Processor Device)
Jan 17 12:19:05.172000 kernel: ACPI: Added _OSI(3.0 _SCP Extensions)
Jan 17 12:19:05.172016 kernel: ACPI: Added _OSI(Processor Aggregator Device)
Jan 17 12:19:05.172031 kernel: ACPI: 1 ACPI AML tables successfully acquired and loaded
Jan 17 12:19:05.172047 kernel: ACPI: _OSC evaluation for CPUs failed, trying _PDC
Jan 17 12:19:05.172061 kernel: ACPI: Interpreter enabled
Jan 17 12:19:05.172075 kernel: ACPI: PM: (supports S0 S5)
Jan 17 12:19:05.172090 kernel: ACPI: Using IOAPIC for interrupt routing
Jan 17 12:19:05.172109 kernel: PCI: Using host bridge windows from ACPI; if necessary, use "pci=nocrs" and report a bug
Jan 17 12:19:05.172124 kernel: PCI: Using E820 reservations for host bridge windows
Jan 17 12:19:05.172140 kernel: ACPI: Enabled 2 GPEs in block 00 to 0F
Jan 17 12:19:05.172154 kernel: ACPI: PCI Root Bridge [PCI0] (domain 0000 [bus 00-ff])
Jan 17 12:19:05.172475 kernel: acpi PNP0A03:00: _OSC: OS supports [ASPM ClockPM Segments MSI HPX-Type3]
Jan 17 12:19:05.172686 kernel: acpi PNP0A03:00: _OSC: not requesting OS control; OS requires [ExtendedConfig ASPM ClockPM MSI]
Jan 17 12:19:05.172836 kernel: acpi PNP0A03:00: fail to add MMCONFIG information, can't access extended configuration space under this bridge
Jan 17 12:19:05.172862 kernel: acpiphp: Slot [3] registered
Jan 17 12:19:05.172878 kernel: acpiphp: Slot [4] registered
Jan 17 12:19:05.172894 kernel: acpiphp: Slot [5] registered
Jan 17 12:19:05.172908 kernel: acpiphp: Slot [6] registered
Jan 17 12:19:05.172924 kernel: acpiphp: Slot [7] registered
Jan 17 12:19:05.172939 kernel: acpiphp: Slot [8] registered
Jan 17 12:19:05.172954 kernel: acpiphp: Slot [9] registered
Jan 17 12:19:05.172969 kernel: acpiphp: Slot [10] registered
Jan 17 12:19:05.172985 kernel: acpiphp: Slot [11] registered
Jan 17 12:19:05.173005 kernel: acpiphp: Slot [12] registered
Jan 17 12:19:05.173020 kernel: acpiphp: Slot [13] registered
Jan 17 12:19:05.173033 kernel: acpiphp: Slot [14] registered
Jan 17 12:19:05.173047 kernel: acpiphp: Slot [15] registered
Jan 17 12:19:05.173061 kernel: acpiphp: Slot [16] registered
Jan 17 12:19:05.173075 kernel: acpiphp: Slot [17] registered
Jan 17 12:19:05.173088 kernel: acpiphp: Slot [18] registered
Jan 17 12:19:05.173099 kernel: acpiphp: Slot [19] registered
Jan 17 12:19:05.173112 kernel: acpiphp: Slot [20] registered
Jan 17 12:19:05.173124 kernel: acpiphp: Slot [21] registered
Jan 17 12:19:05.173142 kernel: acpiphp: Slot [22] registered
Jan 17 12:19:05.173155 kernel: acpiphp: Slot [23] registered
Jan 17 12:19:05.173168 kernel: acpiphp: Slot [24] registered
Jan 17 12:19:05.173183 kernel: acpiphp: Slot [25] registered
Jan 17 12:19:05.173198 kernel: acpiphp: Slot [26] registered
Jan 17 12:19:05.173213 kernel: acpiphp: Slot [27] registered
Jan 17 12:19:05.173229 kernel: acpiphp: Slot [28] registered
Jan 17 12:19:05.173243 kernel: acpiphp: Slot [29] registered
Jan 17 12:19:05.173255 kernel: acpiphp: Slot [30] registered
Jan 17 12:19:05.173273 kernel: acpiphp: Slot [31] registered
Jan 17 12:19:05.173288 kernel: PCI host bridge to bus 0000:00
Jan 17 12:19:05.173502 kernel: pci_bus 0000:00: root bus resource [io 0x0000-0x0cf7 window]
Jan 17 12:19:05.173679 kernel: pci_bus 0000:00: root bus resource [io 0x0d00-0xffff window]
Jan 17 12:19:05.173814 kernel: pci_bus 0000:00: root bus resource [mem 0x000a0000-0x000bffff window]
Jan 17 12:19:05.173943 kernel: pci_bus 0000:00: root bus resource [mem 0x80000000-0xfebfffff window]
Jan 17 12:19:05.174073 kernel: pci_bus 0000:00: root bus resource [mem 0x100000000-0x17fffffff window]
Jan 17 12:19:05.174202 kernel: pci_bus 0000:00: root bus resource [bus 00-ff]
Jan 17 12:19:05.174403 kernel: pci 0000:00:00.0: [8086:1237] type 00 class 0x060000
Jan 17 12:19:05.174610 kernel: pci 0000:00:01.0: [8086:7000] type 00 class 0x060100
Jan 17 12:19:05.174833 kernel: pci 0000:00:01.1: [8086:7010] type 00 class 0x010180
Jan 17 12:19:05.174984 kernel: pci 0000:00:01.1: reg 0x20: [io 0xc1e0-0xc1ef]
Jan 17 12:19:05.175135 kernel: pci 0000:00:01.1: legacy IDE quirk: reg 0x10: [io 0x01f0-0x01f7]
Jan 17 12:19:05.175287 kernel: pci 0000:00:01.1: legacy IDE quirk: reg 0x14: [io 0x03f6]
Jan 17 12:19:05.175460 kernel: pci 0000:00:01.1: legacy IDE quirk: reg 0x18: [io 0x0170-0x0177]
Jan 17 12:19:05.176440 kernel: pci 0000:00:01.1: legacy IDE quirk: reg 0x1c: [io 0x0376]
Jan 17 12:19:05.176749 kernel: pci 0000:00:01.2: [8086:7020] type 00 class 0x0c0300
Jan 17 12:19:05.176913 kernel: pci 0000:00:01.2: reg 0x20: [io 0xc180-0xc19f]
Jan 17 12:19:05.177143 kernel: pci 0000:00:01.3: [8086:7113] type 00 class 0x068000
Jan 17 12:19:05.177300 kernel: pci 0000:00:01.3: quirk: [io 0x0600-0x063f] claimed by PIIX4 ACPI
Jan 17 12:19:05.177465 kernel: pci 0000:00:01.3: quirk: [io 0x0700-0x070f] claimed by PIIX4 SMB
Jan 17 12:19:05.177663 kernel: pci 0000:00:02.0: [1af4:1050] type 00 class 0x030000
Jan 17 12:19:05.177821 kernel: pci 0000:00:02.0: reg 0x10: [mem 0xfe000000-0xfe7fffff pref]
Jan 17 12:19:05.177973 kernel: pci 0000:00:02.0: reg 0x18: [mem 0xfe800000-0xfe803fff 64bit pref]
Jan 17 12:19:05.178122 kernel: pci 0000:00:02.0: reg 0x20: [mem 0xfebf0000-0xfebf0fff]
Jan 17 12:19:05.178275 kernel: pci 0000:00:02.0: reg 0x30: [mem 0xfebe0000-0xfebeffff pref]
Jan 17 12:19:05.178420 kernel: pci 0000:00:02.0: Video device with shadowed ROM at [mem 0x000c0000-0x000dffff]
Jan 17 12:19:05.181770 kernel: pci 0000:00:03.0: [1af4:1000] type 00 class 0x020000
Jan 17 12:19:05.182006 kernel: pci 0000:00:03.0: reg 0x10: [io 0xc1a0-0xc1bf]
Jan 17 12:19:05.182190 kernel: pci 0000:00:03.0: reg 0x14: [mem 0xfebf1000-0xfebf1fff]
Jan 17 12:19:05.182340 kernel: pci 0000:00:03.0: reg 0x20: [mem 0xfe804000-0xfe807fff 64bit pref]
Jan 17 12:19:05.182557 kernel: pci 0000:00:04.0: [1af4:1000] type 00 class 0x020000
Jan 17 12:19:05.182739 kernel: pci 0000:00:04.0: reg 0x10: [io 0xc1c0-0xc1df]
Jan 17 12:19:05.182924 kernel: pci 0000:00:04.0: reg 0x14: [mem 0xfebf2000-0xfebf2fff]
Jan 17 12:19:05.183085 kernel: pci 0000:00:04.0: reg 0x20: [mem 0xfe808000-0xfe80bfff 64bit pref]
Jan 17 12:19:05.183292 kernel: pci 0000:00:05.0: [1af4:1004] type 00 class 0x010000
Jan 17 12:19:05.183458 kernel: pci 0000:00:05.0: reg 0x10: [io 0xc100-0xc13f]
Jan 17 12:19:05.184766 kernel: pci 0000:00:05.0: reg 0x14: [mem 0xfebf3000-0xfebf3fff]
Jan 17 12:19:05.184940 kernel: pci 0000:00:05.0: reg 0x20: [mem 0xfe80c000-0xfe80ffff 64bit pref]
Jan 17 12:19:05.185119 kernel: pci 0000:00:06.0: [1af4:1001] type 00 class 0x010000
Jan 17 12:19:05.185274 kernel: pci 0000:00:06.0: reg 0x10: [io 0xc000-0xc07f]
Jan 17 12:19:05.185409 kernel: pci 0000:00:06.0: reg 0x14: [mem 0xfebf4000-0xfebf4fff]
Jan 17 12:19:05.186667 kernel: pci 0000:00:06.0: reg 0x20: [mem 0xfe810000-0xfe813fff 64bit pref]
Jan 17 12:19:05.186946 kernel: pci 0000:00:07.0: [1af4:1001] type 00 class 0x010000
Jan 17 12:19:05.187105 kernel: pci 0000:00:07.0: reg 0x10: [io 0xc080-0xc0ff]
Jan 17 12:19:05.187256 kernel: pci 0000:00:07.0: reg 0x14: [mem 0xfebf5000-0xfebf5fff]
Jan 17 12:19:05.187398 kernel: pci 0000:00:07.0: reg 0x20: [mem 0xfe814000-0xfe817fff 64bit pref]
Jan 17 12:19:05.189692 kernel: pci 0000:00:08.0: [1af4:1002] type 00 class 0x00ff00
Jan 17 12:19:05.189914 kernel: pci 0000:00:08.0: reg 0x10: [io 0xc140-0xc17f]
Jan 17 12:19:05.190084 kernel: pci 0000:00:08.0: reg 0x20: [mem 0xfe818000-0xfe81bfff 64bit pref]
Jan 17 12:19:05.190106 kernel: ACPI: PCI: Interrupt link LNKA configured for IRQ 10
Jan 17 12:19:05.190121 kernel: ACPI: PCI: Interrupt link LNKB configured for IRQ 10
Jan 17 12:19:05.190135 kernel: ACPI: PCI: Interrupt link LNKC configured for IRQ 11
Jan 17 12:19:05.190153 kernel: ACPI: PCI: Interrupt link LNKD configured for IRQ 11
Jan 17 12:19:05.190192 kernel: ACPI: PCI: Interrupt link LNKS configured for IRQ 9
Jan 17 12:19:05.190219 kernel: iommu: Default domain type: Translated
Jan 17 12:19:05.190247 kernel: iommu: DMA domain TLB invalidation policy: lazy mode
Jan 17 12:19:05.190274 kernel: PCI: Using ACPI for IRQ routing
Jan 17 12:19:05.190294 kernel: PCI: pci_cache_line_size set to 64 bytes
Jan 17 12:19:05.190308 kernel: e820: reserve RAM buffer [mem 0x0009fc00-0x0009ffff]
Jan 17 12:19:05.190323 kernel: e820: reserve RAM buffer [mem 0x7ffdb000-0x7fffffff]
Jan 17 12:19:05.190743 kernel: pci 0000:00:02.0: vgaarb: setting as boot VGA device
Jan 17 12:19:05.190934 kernel: pci 0000:00:02.0: vgaarb: bridge control possible
Jan 17 12:19:05.191107 kernel: pci 0000:00:02.0: vgaarb: VGA device added: decodes=io+mem,owns=io+mem,locks=none
Jan 17 12:19:05.191138 kernel: vgaarb: loaded
Jan 17 12:19:05.191153 kernel: hpet0: at MMIO 0xfed00000, IRQs 2, 8, 0
Jan 17 12:19:05.191166 kernel: hpet0: 3 comparators, 64-bit 100.000000 MHz counter
Jan 17 12:19:05.191180 kernel: clocksource: Switched to clocksource kvm-clock
Jan 17 12:19:05.191193 kernel: VFS: Disk quotas dquot_6.6.0
Jan 17 12:19:05.191208 kernel: VFS: Dquot-cache hash table entries: 512 (order 0, 4096 bytes)
Jan 17 12:19:05.191223 kernel: pnp: PnP ACPI init
Jan 17 12:19:05.191237 kernel: pnp: PnP ACPI: found 4 devices
Jan 17 12:19:05.191252 kernel: clocksource: acpi_pm: mask: 0xffffff max_cycles: 0xffffff, max_idle_ns: 2085701024 ns
Jan 17 12:19:05.191273 kernel: NET: Registered PF_INET protocol family
Jan 17 12:19:05.191287 kernel: IP idents hash table entries: 32768 (order: 6, 262144 bytes, linear)
Jan 17 12:19:05.191302 kernel: tcp_listen_portaddr_hash hash table entries: 1024 (order: 2, 16384 bytes, linear)
Jan 17 12:19:05.191316 kernel: Table-perturb hash table entries: 65536 (order: 6, 262144 bytes, linear)
Jan 17 12:19:05.191331 kernel: TCP established hash table entries: 16384 (order: 5, 131072 bytes, linear)
Jan 17 12:19:05.191345 kernel: TCP bind hash table entries: 16384 (order: 7, 524288 bytes, linear)
Jan 17 12:19:05.191358 kernel: TCP: Hash tables configured (established 16384 bind 16384)
Jan 17 12:19:05.191372 kernel: UDP hash table entries: 1024 (order: 3, 32768 bytes, linear)
Jan 17 12:19:05.191386 kernel: UDP-Lite hash table entries: 1024 (order: 3, 32768 bytes, linear)
Jan 17 12:19:05.191405 kernel: NET: Registered PF_UNIX/PF_LOCAL protocol family
Jan 17 12:19:05.191420 kernel: NET: Registered PF_XDP protocol family
Jan 17 12:19:05.191676 kernel: pci_bus 0000:00: resource 4 [io 0x0000-0x0cf7 window]
Jan 17 12:19:05.191828 kernel: pci_bus 0000:00: resource 5 [io 0x0d00-0xffff window]
Jan 17 12:19:05.191968 kernel: pci_bus 0000:00: resource 6 [mem 0x000a0000-0x000bffff window]
Jan 17 12:19:05.192110 kernel: pci_bus 0000:00: resource 7 [mem 0x80000000-0xfebfffff window]
Jan 17 12:19:05.192253 kernel: pci_bus 0000:00: resource 8 [mem 0x100000000-0x17fffffff window]
Jan 17 12:19:05.192424 kernel: pci 0000:00:01.0: PIIX3: Enabling Passive Release
Jan 17 12:19:05.192667 kernel: pci 0000:00:00.0: Limiting direct PCI/PCI transfers
Jan 17 12:19:05.192691 kernel: ACPI: \_SB_.LNKD: Enabled at IRQ 11
Jan 17 12:19:05.192849 kernel: pci 0000:00:01.2: quirk_usb_early_handoff+0x0/0x7a0 took 40460 usecs
Jan 17 12:19:05.192869 kernel: PCI: CLS 0 bytes, default 64
Jan 17 12:19:05.192885 kernel: RAPL PMU: API unit is 2^-32 Joules, 0 fixed counters, 10737418240 ms ovfl timer
Jan 17 12:19:05.192900 kernel: clocksource: tsc: mask: 0xffffffffffffffff max_cycles: 0x39a85c9bff6, max_idle_ns: 881590591483 ns
Jan 17 12:19:05.192915 kernel: Initialise system trusted keyrings
Jan 17 12:19:05.192930 kernel: workingset: timestamp_bits=39 max_order=19 bucket_order=0
Jan 17 12:19:05.192952 kernel: Key type asymmetric registered
Jan 17 12:19:05.192967 kernel: Asymmetric key parser 'x509' registered
Jan 17 12:19:05.192981 kernel: Block layer SCSI generic (bsg) driver version 0.4 loaded (major 251)
Jan 17 12:19:05.192995 kernel: io scheduler mq-deadline registered
Jan 17 12:19:05.193008 kernel: io scheduler kyber registered
Jan 17 12:19:05.193024 kernel: io scheduler bfq registered
Jan 17 12:19:05.193038 kernel: ioatdma: Intel(R) QuickData Technology Driver 5.00
Jan 17 12:19:05.193052 kernel: ACPI: \_SB_.LNKB: Enabled at IRQ 10
Jan 17 12:19:05.193066 kernel: ACPI: \_SB_.LNKC: Enabled at IRQ 11
Jan 17 12:19:05.193085 kernel: ACPI: \_SB_.LNKA: Enabled at IRQ 10
Jan 17 12:19:05.193098 kernel: Serial: 8250/16550 driver, 4 ports, IRQ sharing enabled
Jan 17 12:19:05.193112 kernel: 00:00: ttyS0 at I/O 0x3f8 (irq = 4, base_baud = 115200) is a 16550A
Jan 17 12:19:05.193127 kernel: i8042: PNP: PS/2 Controller [PNP0303:KBD,PNP0f13:MOU] at 0x60,0x64 irq 1,12
Jan 17 12:19:05.193142 kernel: serio: i8042 KBD port at 0x60,0x64 irq 1
Jan 17 12:19:05.193156 kernel: serio: i8042 AUX port at 0x60,0x64 irq 12
Jan 17 12:19:05.193391 kernel: rtc_cmos 00:03: RTC can wake from S4
Jan 17 12:19:05.193419 kernel: input: AT Translated Set 2 keyboard as /devices/platform/i8042/serio0/input/input0
Jan 17 12:19:05.193629 kernel: rtc_cmos 00:03: registered as rtc0
Jan 17 12:19:05.194839 kernel: rtc_cmos 00:03: setting system clock to 2025-01-17T12:19:04 UTC (1737116344)
Jan 17 12:19:05.195014 kernel: rtc_cmos 00:03: alarms up to one day, y3k, 242 bytes nvram
Jan 17 12:19:05.195036 kernel: intel_pstate: CPU model not supported
Jan 17 12:19:05.195054 kernel: NET: Registered PF_INET6 protocol family
Jan 17 12:19:05.195070 kernel: Segment Routing with IPv6
Jan 17 12:19:05.195085 kernel: In-situ OAM (IOAM) with IPv6
Jan 17 12:19:05.195100 kernel: NET: Registered PF_PACKET protocol family
Jan 17 12:19:05.195116 kernel: Key type dns_resolver registered
Jan 17 12:19:05.195145 kernel: IPI shorthand broadcast: enabled
Jan 17 12:19:05.195157 kernel: sched_clock: Marking stable (1372011193, 169116609)->(1593817253, -52689451)
Jan 17 12:19:05.195170 kernel: registered taskstats version 1
Jan 17 12:19:05.195182 kernel: Loading compiled-in X.509 certificates
Jan 17 12:19:05.195196 kernel: Loaded X.509 cert 'Kinvolk GmbH: Module signing key for 6.6.71-flatcar: 6baa290b0089ed5c4c5f7248306af816ac8c7f80'
Jan 17 12:19:05.195208 kernel: Key type .fscrypt registered
Jan 17 12:19:05.195220 kernel: Key type fscrypt-provisioning registered
Jan 17 12:19:05.195235 kernel: ima: No TPM chip found, activating TPM-bypass!
Jan 17 12:19:05.195252 kernel: ima: Allocated hash algorithm: sha1
Jan 17 12:19:05.195270 kernel: ima: No architecture policies found
Jan 17 12:19:05.195284 kernel: clk: Disabling unused clocks
Jan 17 12:19:05.195298 kernel: Freeing unused kernel image (initmem) memory: 42848K
Jan 17 12:19:05.195313 kernel: Write protecting the kernel read-only data: 36864k
Jan 17 12:19:05.195383 kernel: Freeing unused kernel image (rodata/data gap) memory: 1848K
Jan 17 12:19:05.195403 kernel: Run /init as init process
Jan 17 12:19:05.195419 kernel: with arguments:
Jan 17 12:19:05.195436 kernel: /init
Jan 17 12:19:05.195452 kernel: with environment:
Jan 17 12:19:05.195478 kernel: HOME=/
Jan 17 12:19:05.196143 kernel: TERM=linux
Jan 17 12:19:05.196171 kernel: BOOT_IMAGE=/flatcar/vmlinuz-a
Jan 17 12:19:05.196194 systemd[1]: systemd 255 running in system mode (+PAM +AUDIT +SELINUX -APPARMOR +IMA +SMACK +SECCOMP +GCRYPT -GNUTLS +OPENSSL -ACL +BLKID +CURL +ELFUTILS -FIDO2 +IDN2 -IDN +IPTC +KMOD +LIBCRYPTSETUP +LIBFDISK +PCRE2 -PWQUALITY -P11KIT -QRENCODE +TPM2 +BZIP2 +LZ4 +XZ +ZLIB +ZSTD -BPF_FRAMEWORK -XKBCOMMON +UTMP -SYSVINIT default-hierarchy=unified)
Jan 17 12:19:05.196214 systemd[1]: Detected virtualization kvm.
Jan 17 12:19:05.196233 systemd[1]: Detected architecture x86-64.
Jan 17 12:19:05.196250 systemd[1]: Running in initrd.
Jan 17 12:19:05.196268 systemd[1]: No hostname configured, using default hostname.
Jan 17 12:19:05.196292 systemd[1]: Hostname set to .
Jan 17 12:19:05.196309 systemd[1]: Initializing machine ID from VM UUID.
Jan 17 12:19:05.196327 systemd[1]: Queued start job for default target initrd.target.
Jan 17 12:19:05.196345 systemd[1]: Started clevis-luks-askpass.path - Forward Password Requests to Clevis Directory Watch.
Jan 17 12:19:05.196362 systemd[1]: Started systemd-ask-password-console.path - Dispatch Password Requests to Console Directory Watch.
Jan 17 12:19:05.196382 systemd[1]: Expecting device dev-disk-by\x2dlabel-EFI\x2dSYSTEM.device - /dev/disk/by-label/EFI-SYSTEM...
Jan 17 12:19:05.196400 systemd[1]: Expecting device dev-disk-by\x2dlabel-OEM.device - /dev/disk/by-label/OEM...
Jan 17 12:19:05.196421 systemd[1]: Expecting device dev-disk-by\x2dlabel-ROOT.device - /dev/disk/by-label/ROOT...
Jan 17 12:19:05.196439 systemd[1]: Expecting device dev-disk-by\x2dpartlabel-USR\x2dA.device - /dev/disk/by-partlabel/USR-A...
Jan 17 12:19:05.196460 systemd[1]: Expecting device dev-disk-by\x2dpartuuid-7130c94a\x2d213a\x2d4e5a\x2d8e26\x2d6cce9662f132.device - /dev/disk/by-partuuid/7130c94a-213a-4e5a-8e26-6cce9662f132...
Jan 17 12:19:05.196477 systemd[1]: Expecting device dev-mapper-usr.device - /dev/mapper/usr...
Jan 17 12:19:05.196495 systemd[1]: Reached target cryptsetup-pre.target - Local Encrypted Volumes (Pre).
Jan 17 12:19:05.196548 systemd[1]: Reached target cryptsetup.target - Local Encrypted Volumes.
Jan 17 12:19:05.196564 systemd[1]: Reached target paths.target - Path Units.
Jan 17 12:19:05.196583 systemd[1]: Reached target slices.target - Slice Units.
Jan 17 12:19:05.196600 systemd[1]: Reached target swap.target - Swaps.
Jan 17 12:19:05.196620 systemd[1]: Reached target timers.target - Timer Units.
Jan 17 12:19:05.196637 systemd[1]: Listening on iscsid.socket - Open-iSCSI iscsid Socket.
Jan 17 12:19:05.196655 systemd[1]: Listening on iscsiuio.socket - Open-iSCSI iscsiuio Socket.
Jan 17 12:19:05.196672 systemd[1]: Listening on systemd-journald-dev-log.socket - Journal Socket (/dev/log).
Jan 17 12:19:05.196693 systemd[1]: Listening on systemd-journald.socket - Journal Socket.
Jan 17 12:19:05.196711 systemd[1]: Listening on systemd-networkd.socket - Network Service Netlink Socket.
Jan 17 12:19:05.196728 systemd[1]: Listening on systemd-udevd-control.socket - udev Control Socket.
Jan 17 12:19:05.196746 systemd[1]: Listening on systemd-udevd-kernel.socket - udev Kernel Socket.
Jan 17 12:19:05.196763 systemd[1]: Reached target sockets.target - Socket Units.
Jan 17 12:19:05.196781 systemd[1]: Starting ignition-setup-pre.service - Ignition env setup...
Jan 17 12:19:05.196799 systemd[1]: Starting kmod-static-nodes.service - Create List of Static Device Nodes...
Jan 17 12:19:05.196817 systemd[1]: Finished network-cleanup.service - Network Cleanup.
Jan 17 12:19:05.196838 systemd[1]: Starting systemd-fsck-usr.service...
Jan 17 12:19:05.196855 systemd[1]: Starting systemd-journald.service - Journal Service...
Jan 17 12:19:05.196872 systemd[1]: Starting systemd-modules-load.service - Load Kernel Modules...
Jan 17 12:19:05.196890 systemd[1]: Starting systemd-vconsole-setup.service - Virtual Console Setup...
Jan 17 12:19:05.196962 systemd-journald[184]: Collecting audit messages is disabled.
Jan 17 12:19:05.197008 systemd[1]: Finished ignition-setup-pre.service - Ignition env setup.
Jan 17 12:19:05.197025 systemd[1]: Finished kmod-static-nodes.service - Create List of Static Device Nodes.
Jan 17 12:19:05.197043 systemd[1]: Finished systemd-fsck-usr.service.
Jan 17 12:19:05.197062 systemd[1]: Starting systemd-tmpfiles-setup-dev-early.service - Create Static Device Nodes in /dev gracefully...
Jan 17 12:19:05.197085 systemd-journald[184]: Journal started
Jan 17 12:19:05.197124 systemd-journald[184]: Runtime Journal (/run/log/journal/8981e9b2377f4671ae86cf2fce3c6ac9) is 4.9M, max 39.3M, 34.4M free.
Jan 17 12:19:05.193823 systemd-modules-load[185]: Inserted module 'overlay'
Jan 17 12:19:05.209574 systemd[1]: Started systemd-journald.service - Journal Service.
Jan 17 12:19:05.239573 kernel: bridge: filtering via arp/ip/ip6tables is no longer available by default. Update your scripts to load br_netfilter if you need this.
Jan 17 12:19:05.242241 systemd-modules-load[185]: Inserted module 'br_netfilter'
Jan 17 12:19:05.269125 kernel: Bridge firewalling registered
Jan 17 12:19:05.267701 systemd[1]: Finished systemd-modules-load.service - Load Kernel Modules.
Jan 17 12:19:05.277214 systemd[1]: Finished systemd-vconsole-setup.service - Virtual Console Setup.
Jan 17 12:19:05.286993 systemd[1]: Starting dracut-cmdline-ask.service - dracut ask for additional cmdline parameters...
Jan 17 12:19:05.295984 systemd[1]: Starting systemd-sysctl.service - Apply Kernel Variables...
Jan 17 12:19:05.306075 systemd[1]: Starting systemd-tmpfiles-setup.service - Create System Files and Directories...
Jan 17 12:19:05.315718 systemd[1]: Finished systemd-tmpfiles-setup-dev-early.service - Create Static Device Nodes in /dev gracefully.
Jan 17 12:19:05.327505 systemd[1]: Finished systemd-sysctl.service - Apply Kernel Variables.
Jan 17 12:19:05.330434 systemd[1]: Finished dracut-cmdline-ask.service - dracut ask for additional cmdline parameters.
Jan 17 12:19:05.344377 systemd[1]: Starting dracut-cmdline.service - dracut cmdline hook...
Jan 17 12:19:05.347799 systemd[1]: Starting systemd-tmpfiles-setup-dev.service - Create Static Device Nodes in /dev...
Jan 17 12:19:05.350631 systemd[1]: Finished systemd-tmpfiles-setup.service - Create System Files and Directories.
Jan 17 12:19:05.370838 systemd[1]: Starting systemd-resolved.service - Network Name Resolution...
Jan 17 12:19:05.374032 dracut-cmdline[214]: dracut-dracut-053
Jan 17 12:19:05.379853 systemd[1]: Finished systemd-tmpfiles-setup-dev.service - Create Static Device Nodes in /dev.
Jan 17 12:19:05.382653 dracut-cmdline[214]: Using kernel command line parameters: rd.driver.pre=btrfs rootflags=rw mount.usrflags=ro BOOT_IMAGE=/flatcar/vmlinuz-a mount.usr=/dev/mapper/usr verity.usr=PARTUUID=7130c94a-213a-4e5a-8e26-6cce9662f132 rootflags=rw mount.usrflags=ro consoleblank=0 root=LABEL=ROOT console=ttyS0,115200n8 console=tty0 flatcar.first_boot=detected flatcar.oem.id=digitalocean verity.usrhash=bf1e0d81a0170850ab02d370c1a7c7a3f5983c980b3730f748240a3bda2dbb2e
Jan 17 12:19:05.427027 systemd-resolved[224]: Positive Trust Anchors:
Jan 17 12:19:05.427957 systemd-resolved[224]: . IN DS 20326 8 2 e06d44b80b8f1d39a95c0b0d7c65d08458e880409bbc683457104237c7f8ec8d
Jan 17 12:19:05.428007 systemd-resolved[224]: Negative trust anchors: home.arpa 10.in-addr.arpa 16.172.in-addr.arpa 17.172.in-addr.arpa 18.172.in-addr.arpa 19.172.in-addr.arpa 20.172.in-addr.arpa 21.172.in-addr.arpa 22.172.in-addr.arpa 23.172.in-addr.arpa 24.172.in-addr.arpa 25.172.in-addr.arpa 26.172.in-addr.arpa 27.172.in-addr.arpa 28.172.in-addr.arpa 29.172.in-addr.arpa 30.172.in-addr.arpa 31.172.in-addr.arpa 170.0.0.192.in-addr.arpa 171.0.0.192.in-addr.arpa 168.192.in-addr.arpa d.f.ip6.arpa ipv4only.arpa resolver.arpa corp home internal intranet lan local private test
Jan 17 12:19:05.435871 systemd-resolved[224]: Defaulting to hostname 'linux'.
Jan 17 12:19:05.438085 systemd[1]: Started systemd-resolved.service - Network Name Resolution.
Jan 17 12:19:05.438987 systemd[1]: Reached target nss-lookup.target - Host and Network Name Lookups.
Jan 17 12:19:05.507621 kernel: SCSI subsystem initialized
Jan 17 12:19:05.523570 kernel: Loading iSCSI transport class v2.0-870.
Jan 17 12:19:05.541563 kernel: iscsi: registered transport (tcp)
Jan 17 12:19:05.574017 kernel: iscsi: registered transport (qla4xxx)
Jan 17 12:19:05.574137 kernel: QLogic iSCSI HBA Driver
Jan 17 12:19:05.675694 systemd[1]: Finished dracut-cmdline.service - dracut cmdline hook.
Jan 17 12:19:05.689903 systemd[1]: Starting dracut-pre-udev.service - dracut pre-udev hook...
Jan 17 12:19:05.755208 kernel: device-mapper: core: CONFIG_IMA_DISABLE_HTABLE is disabled. Duplicate IMA measurements will not be recorded in the IMA log.
Jan 17 12:19:05.755347 kernel: device-mapper: uevent: version 1.0.3
Jan 17 12:19:05.755502 kernel: device-mapper: ioctl: 4.48.0-ioctl (2023-03-01) initialised: dm-devel@redhat.com
Jan 17 12:19:05.824627 kernel: raid6: avx2x4 gen() 7688 MB/s
Jan 17 12:19:05.842621 kernel: raid6: avx2x2 gen() 18988 MB/s
Jan 17 12:19:05.862401 kernel: raid6: avx2x1 gen() 14554 MB/s
Jan 17 12:19:05.862536 kernel: raid6: using algorithm avx2x2 gen() 18988 MB/s
Jan 17 12:19:05.892376 kernel: raid6: .... xor() 8690 MB/s, rmw enabled
Jan 17 12:19:05.892552 kernel: raid6: using avx2x2 recovery algorithm
Jan 17 12:19:05.938817 kernel: xor: automatically using best checksumming function avx
Jan 17 12:19:06.200676 kernel: Btrfs loaded, zoned=no, fsverity=no
Jan 17 12:19:06.234192 systemd[1]: Finished dracut-pre-udev.service - dracut pre-udev hook.
Jan 17 12:19:06.242984 systemd[1]: Starting systemd-udevd.service - Rule-based Manager for Device Events and Files...
Jan 17 12:19:06.272870 systemd-udevd[403]: Using default interface naming scheme 'v255'.
Jan 17 12:19:06.279677 systemd[1]: Started systemd-udevd.service - Rule-based Manager for Device Events and Files.
Jan 17 12:19:06.289392 systemd[1]: Starting dracut-pre-trigger.service - dracut pre-trigger hook...
Jan 17 12:19:06.333845 dracut-pre-trigger[409]: rd.md=0: removing MD RAID activation
Jan 17 12:19:06.384622 systemd[1]: Finished dracut-pre-trigger.service - dracut pre-trigger hook.
Jan 17 12:19:06.389882 systemd[1]: Starting systemd-udev-trigger.service - Coldplug All udev Devices...
Jan 17 12:19:06.477074 systemd[1]: Finished systemd-udev-trigger.service - Coldplug All udev Devices.
Jan 17 12:19:06.487881 systemd[1]: Starting dracut-initqueue.service - dracut initqueue hook...
Jan 17 12:19:06.516994 systemd[1]: Finished dracut-initqueue.service - dracut initqueue hook.
Jan 17 12:19:06.522600 systemd[1]: Reached target remote-fs-pre.target - Preparation for Remote File Systems.
Jan 17 12:19:06.525876 systemd[1]: Reached target remote-cryptsetup.target - Remote Encrypted Volumes.
Jan 17 12:19:06.528071 systemd[1]: Reached target remote-fs.target - Remote File Systems.
Jan 17 12:19:06.539644 systemd[1]: Starting dracut-pre-mount.service - dracut pre-mount hook...
Jan 17 12:19:06.580769 systemd[1]: Finished dracut-pre-mount.service - dracut pre-mount hook.
Jan 17 12:19:06.605604 kernel: virtio_blk virtio4: 1/0/0 default/read/poll queues
Jan 17 12:19:06.693395 kernel: cryptd: max_cpu_qlen set to 1000
Jan 17 12:19:06.693434 kernel: virtio_blk virtio4: [vda] 125829120 512-byte logical blocks (64.4 GB/60.0 GiB)
Jan 17 12:19:06.693731 kernel: scsi host0: Virtio SCSI HBA
Jan 17 12:19:06.693963 kernel: GPT:Primary header thinks Alt. header is not at the end of the disk.
Jan 17 12:19:06.693988 kernel: GPT:9289727 != 125829119
Jan 17 12:19:06.694012 kernel: GPT:Alternate GPT header not at the end of the disk.
Jan 17 12:19:06.694033 kernel: GPT:9289727 != 125829119
Jan 17 12:19:06.694052 kernel: GPT: Use GNU Parted to correct GPT errors.
Jan 17 12:19:06.694082 kernel: vda: vda1 vda2 vda3 vda4 vda6 vda7 vda9
Jan 17 12:19:06.694106 kernel: virtio_blk virtio5: 1/0/0 default/read/poll queues
Jan 17 12:19:06.720666 kernel: virtio_blk virtio5: [vdb] 920 512-byte logical blocks (471 kB/460 KiB)
Jan 17 12:19:06.720989 kernel: AVX2 version of gcm_enc/dec engaged.
Jan 17 12:19:06.721019 kernel: AES CTR mode by8 optimization enabled
Jan 17 12:19:06.686152 systemd[1]: dracut-cmdline-ask.service: Deactivated successfully.
Jan 17 12:19:06.686413 systemd[1]: Stopped dracut-cmdline-ask.service - dracut ask for additional cmdline parameters.
Jan 17 12:19:06.688331 systemd[1]: Stopping dracut-cmdline-ask.service - dracut ask for additional cmdline parameters...
Jan 17 12:19:06.689309 systemd[1]: systemd-vconsole-setup.service: Deactivated successfully.
Jan 17 12:19:06.690889 systemd[1]: Stopped systemd-vconsole-setup.service - Virtual Console Setup.
Jan 17 12:19:06.694851 systemd[1]: Stopping systemd-vconsole-setup.service - Virtual Console Setup...
Jan 17 12:19:06.708156 systemd[1]: Starting systemd-vconsole-setup.service - Virtual Console Setup...
Jan 17 12:19:06.738566 kernel: libata version 3.00 loaded.
Jan 17 12:19:06.743972 kernel: ata_piix 0000:00:01.1: version 2.13
Jan 17 12:19:06.801207 kernel: scsi host1: ata_piix
Jan 17 12:19:06.801489 kernel: scsi host2: ata_piix
Jan 17 12:19:06.801764 kernel: ata1: PATA max MWDMA2 cmd 0x1f0 ctl 0x3f6 bmdma 0xc1e0 irq 14
Jan 17 12:19:06.801789 kernel: ata2: PATA max MWDMA2 cmd 0x170 ctl 0x376 bmdma 0xc1e8 irq 15
Jan 17 12:19:06.830089 kernel: ACPI: bus type USB registered
Jan 17 12:19:06.838692 systemd[1]: Found device dev-disk-by\x2dlabel-ROOT.device - /dev/disk/by-label/ROOT.
Jan 17 12:19:06.912343 kernel: usbcore: registered new interface driver usbfs
Jan 17 12:19:06.912392 kernel: usbcore: registered new interface driver hub
Jan 17 12:19:06.912410 kernel: usbcore: registered new device driver usb
Jan 17 12:19:06.912427 kernel: BTRFS: device fsid e459b8ee-f1f7-4c3d-a087-3f1955f52c85 devid 1 transid 36 /dev/vda3 scanned by (udev-worker) (452)
Jan 17 12:19:06.912443 kernel: BTRFS: device label OEM devid 1 transid 12 /dev/vda6 scanned by (udev-worker) (457)
Jan 17 12:19:06.913491 systemd[1]: Finished systemd-vconsole-setup.service - Virtual Console Setup.
Jan 17 12:19:06.929813 systemd[1]: Found device dev-disk-by\x2dpartlabel-USR\x2dA.device - /dev/disk/by-partlabel/USR-A.
Jan 17 12:19:06.930894 systemd[1]: Found device dev-disk-by\x2dpartuuid-7130c94a\x2d213a\x2d4e5a\x2d8e26\x2d6cce9662f132.device - /dev/disk/by-partuuid/7130c94a-213a-4e5a-8e26-6cce9662f132.
Jan 17 12:19:06.938004 systemd[1]: Found device dev-disk-by\x2dlabel-EFI\x2dSYSTEM.device - /dev/disk/by-label/EFI-SYSTEM.
Jan 17 12:19:06.948544 systemd[1]: Found device dev-disk-by\x2dlabel-OEM.device - /dev/disk/by-label/OEM.
Jan 17 12:19:06.954950 systemd[1]: Starting disk-uuid.service - Generate new UUID for disk GPT if necessary...
Jan 17 12:19:06.967913 systemd[1]: Starting dracut-cmdline-ask.service - dracut ask for additional cmdline parameters...
Jan 17 12:19:06.999433 disk-uuid[540]: Primary Header is updated.
Jan 17 12:19:06.999433 disk-uuid[540]: Secondary Entries is updated.
Jan 17 12:19:06.999433 disk-uuid[540]: Secondary Header is updated.
Jan 17 12:19:07.017570 kernel: vda: vda1 vda2 vda3 vda4 vda6 vda7 vda9
Jan 17 12:19:07.019906 systemd[1]: Finished dracut-cmdline-ask.service - dracut ask for additional cmdline parameters.
Jan 17 12:19:07.026793 kernel: uhci_hcd 0000:00:01.2: UHCI Host Controller
Jan 17 12:19:07.036120 kernel: uhci_hcd 0000:00:01.2: new USB bus registered, assigned bus number 1
Jan 17 12:19:07.036578 kernel: uhci_hcd 0000:00:01.2: detected 2 ports
Jan 17 12:19:07.036775 kernel: uhci_hcd 0000:00:01.2: irq 11, io port 0x0000c180
Jan 17 12:19:07.036929 kernel: vda: vda1 vda2 vda3 vda4 vda6 vda7 vda9
Jan 17 12:19:07.036948 kernel: hub 1-0:1.0: USB hub found
Jan 17 12:19:07.037178 kernel: hub 1-0:1.0: 2 ports detected
Jan 17 12:19:08.035656 kernel: vda: vda1 vda2 vda3 vda4 vda6 vda7 vda9
Jan 17 12:19:08.035761 disk-uuid[543]: The operation has completed successfully.
Jan 17 12:19:08.087377 systemd[1]: disk-uuid.service: Deactivated successfully.
Jan 17 12:19:08.087607 systemd[1]: Finished disk-uuid.service - Generate new UUID for disk GPT if necessary.
Jan 17 12:19:08.101908 systemd[1]: Starting verity-setup.service - Verity Setup for /dev/mapper/usr...
Jan 17 12:19:08.116641 sh[560]: Success
Jan 17 12:19:08.139590 kernel: device-mapper: verity: sha256 using implementation "sha256-avx2"
Jan 17 12:19:08.211740 systemd[1]: Found device dev-mapper-usr.device - /dev/mapper/usr.
Jan 17 12:19:08.225743 systemd[1]: Mounting sysusr-usr.mount - /sysusr/usr...
Jan 17 12:19:08.229350 systemd[1]: Finished verity-setup.service - Verity Setup for /dev/mapper/usr.
Jan 17 12:19:08.265613 kernel: BTRFS info (device dm-0): first mount of filesystem e459b8ee-f1f7-4c3d-a087-3f1955f52c85
Jan 17 12:19:08.265705 kernel: BTRFS info (device dm-0): using crc32c (crc32c-intel) checksum algorithm
Jan 17 12:19:08.268698 kernel: BTRFS warning (device dm-0): 'nologreplay' is deprecated, use 'rescue=nologreplay' instead
Jan 17 12:19:08.268802 kernel: BTRFS info (device dm-0): disabling log replay at mount time
Jan 17 12:19:08.270218 kernel: BTRFS info (device dm-0): using free space tree
Jan 17 12:19:08.282156 systemd[1]: Mounted sysusr-usr.mount - /sysusr/usr.
Jan 17 12:19:08.283941 systemd[1]: afterburn-network-kargs.service - Afterburn Initrd Setup Network Kernel Arguments was skipped because no trigger condition checks were met.
Jan 17 12:19:08.289904 systemd[1]: Starting ignition-setup.service - Ignition (setup)...
Jan 17 12:19:08.291643 systemd[1]: Starting parse-ip-for-networkd.service - Write systemd-networkd units from cmdline...
Jan 17 12:19:08.315208 kernel: BTRFS info (device vda6): first mount of filesystem a70a40d6-5ab2-4665-81b1-b8e9f58c5ff8
Jan 17 12:19:08.315304 kernel: BTRFS info (device vda6): using crc32c (crc32c-intel) checksum algorithm
Jan 17 12:19:08.315318 kernel: BTRFS info (device vda6): using free space tree
Jan 17 12:19:08.321592 kernel: BTRFS info (device vda6): auto enabling async discard
Jan 17 12:19:08.336120 systemd[1]: mnt-oem.mount: Deactivated successfully.
Jan 17 12:19:08.338561 kernel: BTRFS info (device vda6): last unmount of filesystem a70a40d6-5ab2-4665-81b1-b8e9f58c5ff8
Jan 17 12:19:08.345864 systemd[1]: Finished ignition-setup.service - Ignition (setup).
Jan 17 12:19:08.354845 systemd[1]: Starting ignition-fetch-offline.service - Ignition (fetch-offline)...
Jan 17 12:19:08.447419 systemd[1]: Finished parse-ip-for-networkd.service - Write systemd-networkd units from cmdline.
Jan 17 12:19:08.461858 systemd[1]: Starting systemd-networkd.service - Network Configuration...
Jan 17 12:19:08.500150 systemd-networkd[744]: lo: Link UP
Jan 17 12:19:08.500163 systemd-networkd[744]: lo: Gained carrier
Jan 17 12:19:08.504747 systemd-networkd[744]: Enumeration completed
Jan 17 12:19:08.505481 systemd-networkd[744]: eth0: found matching network '/usr/lib/systemd/network/yy-digitalocean.network', based on potentially unpredictable interface name.
Jan 17 12:19:08.505487 systemd-networkd[744]: eth0: Configuring with /usr/lib/systemd/network/yy-digitalocean.network.
Jan 17 12:19:08.505565 systemd[1]: Started systemd-networkd.service - Network Configuration.
Jan 17 12:19:08.506269 systemd[1]: Reached target network.target - Network.
Jan 17 12:19:08.507646 systemd-networkd[744]: eth1: found matching network '/usr/lib/systemd/network/zz-default.network', based on potentially unpredictable interface name.
Jan 17 12:19:08.507652 systemd-networkd[744]: eth1: Configuring with /usr/lib/systemd/network/zz-default.network.
Jan 17 12:19:08.508670 systemd-networkd[744]: eth0: Link UP
Jan 17 12:19:08.508676 systemd-networkd[744]: eth0: Gained carrier
Jan 17 12:19:08.508698 systemd-networkd[744]: eth0: found matching network '/usr/lib/systemd/network/yy-digitalocean.network', based on potentially unpredictable interface name.
Jan 17 12:19:08.517350 systemd-networkd[744]: eth1: Link UP
Jan 17 12:19:08.517363 systemd-networkd[744]: eth1: Gained carrier
Jan 17 12:19:08.517382 systemd-networkd[744]: eth1: found matching network '/usr/lib/systemd/network/zz-default.network', based on potentially unpredictable interface name.
Jan 17 12:19:08.536634 systemd-networkd[744]: eth1: DHCPv4 address 10.124.0.25/20 acquired from 169.254.169.253
Jan 17 12:19:08.540673 systemd-networkd[744]: eth0: DHCPv4 address 143.244.184.73/20, gateway 143.244.176.1 acquired from 169.254.169.253
Jan 17 12:19:08.540758 ignition[657]: Ignition 2.19.0
Jan 17 12:19:08.540773 ignition[657]: Stage: fetch-offline
Jan 17 12:19:08.540825 ignition[657]: no configs at "/usr/lib/ignition/base.d"
Jan 17 12:19:08.540840 ignition[657]: no config dir at "/usr/lib/ignition/base.platform.d/digitalocean"
Jan 17 12:19:08.541008 ignition[657]: parsed url from cmdline: ""
Jan 17 12:19:08.541014 ignition[657]: no config URL provided
Jan 17 12:19:08.541023 ignition[657]: reading system config file "/usr/lib/ignition/user.ign"
Jan 17 12:19:08.546860 systemd[1]: Finished ignition-fetch-offline.service - Ignition (fetch-offline).
Jan 17 12:19:08.541036 ignition[657]: no config at "/usr/lib/ignition/user.ign"
Jan 17 12:19:08.541049 ignition[657]: failed to fetch config: resource requires networking
Jan 17 12:19:08.541371 ignition[657]: Ignition finished successfully
Jan 17 12:19:08.554924 systemd[1]: Starting ignition-fetch.service - Ignition (fetch)...
Jan 17 12:19:08.579922 ignition[752]: Ignition 2.19.0
Jan 17 12:19:08.579945 ignition[752]: Stage: fetch
Jan 17 12:19:08.580208 ignition[752]: no configs at "/usr/lib/ignition/base.d"
Jan 17 12:19:08.580223 ignition[752]: no config dir at "/usr/lib/ignition/base.platform.d/digitalocean"
Jan 17 12:19:08.580353 ignition[752]: parsed url from cmdline: ""
Jan 17 12:19:08.580357 ignition[752]: no config URL provided
Jan 17 12:19:08.580363 ignition[752]: reading system config file "/usr/lib/ignition/user.ign"
Jan 17 12:19:08.580373 ignition[752]: no config at "/usr/lib/ignition/user.ign"
Jan 17 12:19:08.580399 ignition[752]: GET http://169.254.169.254/metadata/v1/user-data: attempt #1
Jan 17 12:19:08.599214 ignition[752]: GET result: OK
Jan 17 12:19:08.599661 ignition[752]: parsing config with SHA512: a3a14abd12be4aec8b84e1d56fa3fc422f0289df000cdc4b34e2b64fdb7c6515257c13dd7629f2222ef6bf94297426b6ea7f22d56544720f532e9d15f1b409ba
Jan 17 12:19:08.604414 unknown[752]: fetched base config from "system"
Jan 17 12:19:08.604755 ignition[752]: fetch: fetch complete
Jan 17 12:19:08.604431 unknown[752]: fetched base config from "system"
Jan 17 12:19:08.604762 ignition[752]: fetch: fetch passed
Jan 17 12:19:08.604439 unknown[752]: fetched user config from "digitalocean"
Jan 17 12:19:08.604822 ignition[752]: Ignition finished successfully
Jan 17 12:19:08.607325 systemd[1]: Finished ignition-fetch.service - Ignition (fetch).
Jan 17 12:19:08.621934 systemd[1]: Starting ignition-kargs.service - Ignition (kargs)...
Jan 17 12:19:08.646504 ignition[758]: Ignition 2.19.0
Jan 17 12:19:08.646535 ignition[758]: Stage: kargs
Jan 17 12:19:08.646889 ignition[758]: no configs at "/usr/lib/ignition/base.d"
Jan 17 12:19:08.648943 systemd[1]: Finished ignition-kargs.service - Ignition (kargs).
Jan 17 12:19:08.646902 ignition[758]: no config dir at "/usr/lib/ignition/base.platform.d/digitalocean"
Jan 17 12:19:08.647680 ignition[758]: kargs: kargs passed
Jan 17 12:19:08.647735 ignition[758]: Ignition finished successfully
Jan 17 12:19:08.659022 systemd[1]: Starting ignition-disks.service - Ignition (disks)...
Jan 17 12:19:08.685898 ignition[765]: Ignition 2.19.0
Jan 17 12:19:08.685914 ignition[765]: Stage: disks
Jan 17 12:19:08.686205 ignition[765]: no configs at "/usr/lib/ignition/base.d"
Jan 17 12:19:08.689051 systemd[1]: Finished ignition-disks.service - Ignition (disks).
Jan 17 12:19:08.686221 ignition[765]: no config dir at "/usr/lib/ignition/base.platform.d/digitalocean"
Jan 17 12:19:08.687376 ignition[765]: disks: disks passed
Jan 17 12:19:08.691027 systemd[1]: Reached target initrd-root-device.target - Initrd Root Device.
Jan 17 12:19:08.687459 ignition[765]: Ignition finished successfully
Jan 17 12:19:08.696592 systemd[1]: Reached target local-fs-pre.target - Preparation for Local File Systems.
Jan 17 12:19:08.697691 systemd[1]: Reached target local-fs.target - Local File Systems.
Jan 17 12:19:08.699152 systemd[1]: Reached target sysinit.target - System Initialization.
Jan 17 12:19:08.700416 systemd[1]: Reached target basic.target - Basic System.
Jan 17 12:19:08.710875 systemd[1]: Starting systemd-fsck-root.service - File System Check on /dev/disk/by-label/ROOT...
Jan 17 12:19:08.733068 systemd-fsck[773]: ROOT: clean, 14/553520 files, 52654/553472 blocks
Jan 17 12:19:08.737796 systemd[1]: Finished systemd-fsck-root.service - File System Check on /dev/disk/by-label/ROOT.
Jan 17 12:19:08.745736 systemd[1]: Mounting sysroot.mount - /sysroot...
Jan 17 12:19:08.889556 kernel: EXT4-fs (vda9): mounted filesystem 0ba4fe0e-76d7-406f-b570-4642d86198f6 r/w with ordered data mode. Quota mode: none.
Jan 17 12:19:08.890378 systemd[1]: Mounted sysroot.mount - /sysroot.
Jan 17 12:19:08.892789 systemd[1]: Reached target initrd-root-fs.target - Initrd Root File System.
Jan 17 12:19:08.902896 systemd[1]: Mounting sysroot-oem.mount - /sysroot/oem...
Jan 17 12:19:08.906831 systemd[1]: Mounting sysroot-usr.mount - /sysroot/usr...
Jan 17 12:19:08.915833 systemd[1]: Starting flatcar-digitalocean-network.service - Flatcar DigitalOcean Network Agent...
Jan 17 12:19:08.924196 kernel: BTRFS: device label OEM devid 1 transid 13 /dev/vda6 scanned by mount (781)
Jan 17 12:19:08.923604 systemd[1]: Starting flatcar-metadata-hostname.service - Flatcar Metadata Hostname Agent...
Jan 17 12:19:08.926027 systemd[1]: ignition-remount-sysroot.service - Remount /sysroot read-write for Ignition was skipped because of an unmet condition check (ConditionPathIsReadWrite=!/sysroot).
Jan 17 12:19:08.937092 kernel: BTRFS info (device vda6): first mount of filesystem a70a40d6-5ab2-4665-81b1-b8e9f58c5ff8
Jan 17 12:19:08.937140 kernel: BTRFS info (device vda6): using crc32c (crc32c-intel) checksum algorithm
Jan 17 12:19:08.937155 kernel: BTRFS info (device vda6): using free space tree
Jan 17 12:19:08.926083 systemd[1]: Reached target ignition-diskful.target - Ignition Boot Disk Setup.
Jan 17 12:19:08.947127 systemd[1]: Mounted sysroot-usr.mount - /sysroot/usr.
Jan 17 12:19:08.956627 kernel: BTRFS info (device vda6): auto enabling async discard
Jan 17 12:19:08.961797 systemd[1]: Starting initrd-setup-root.service - Root filesystem setup...
Jan 17 12:19:08.973768 systemd[1]: Mounted sysroot-oem.mount - /sysroot/oem.
Jan 17 12:19:09.035466 coreos-metadata[784]: Jan 17 12:19:09.033 INFO Fetching http://169.254.169.254/metadata/v1.json: Attempt #1
Jan 17 12:19:09.049564 coreos-metadata[784]: Jan 17 12:19:09.048 INFO Fetch successful
Jan 17 12:19:09.060969 coreos-metadata[784]: Jan 17 12:19:09.060 INFO wrote hostname ci-4081.3.0-f-3a3da9a24b to /sysroot/etc/hostname
Jan 17 12:19:09.062906 initrd-setup-root[812]: cut: /sysroot/etc/passwd: No such file or directory
Jan 17 12:19:09.064376 systemd[1]: Finished flatcar-metadata-hostname.service - Flatcar Metadata Hostname Agent.
Jan 17 12:19:09.068203 coreos-metadata[783]: Jan 17 12:19:09.063 INFO Fetching http://169.254.169.254/metadata/v1.json: Attempt #1
Jan 17 12:19:09.077878 coreos-metadata[783]: Jan 17 12:19:09.077 INFO Fetch successful
Jan 17 12:19:09.082187 initrd-setup-root[820]: cut: /sysroot/etc/group: No such file or directory
Jan 17 12:19:09.090175 systemd[1]: flatcar-digitalocean-network.service: Deactivated successfully.
Jan 17 12:19:09.091183 systemd[1]: Finished flatcar-digitalocean-network.service - Flatcar DigitalOcean Network Agent.
Jan 17 12:19:09.096955 initrd-setup-root[828]: cut: /sysroot/etc/shadow: No such file or directory
Jan 17 12:19:09.103146 initrd-setup-root[835]: cut: /sysroot/etc/gshadow: No such file or directory
Jan 17 12:19:09.234304 systemd[1]: Finished initrd-setup-root.service - Root filesystem setup.
Jan 17 12:19:09.237780 systemd[1]: Starting ignition-mount.service - Ignition (mount)...
Jan 17 12:19:09.239765 systemd[1]: Starting sysroot-boot.service - /sysroot/boot...
Jan 17 12:19:09.257554 kernel: BTRFS info (device vda6): last unmount of filesystem a70a40d6-5ab2-4665-81b1-b8e9f58c5ff8
Jan 17 12:19:09.262962 systemd[1]: sysroot-oem.mount: Deactivated successfully.
Jan 17 12:19:09.282160 systemd[1]: Finished sysroot-boot.service - /sysroot/boot.
Jan 17 12:19:09.305912 ignition[903]: INFO : Ignition 2.19.0
Jan 17 12:19:09.305912 ignition[903]: INFO : Stage: mount
Jan 17 12:19:09.307469 ignition[903]: INFO : no configs at "/usr/lib/ignition/base.d"
Jan 17 12:19:09.307469 ignition[903]: INFO : no config dir at "/usr/lib/ignition/base.platform.d/digitalocean"
Jan 17 12:19:09.309074 ignition[903]: INFO : mount: mount passed
Jan 17 12:19:09.309074 ignition[903]: INFO : Ignition finished successfully
Jan 17 12:19:09.309357 systemd[1]: Finished ignition-mount.service - Ignition (mount).
Jan 17 12:19:09.316775 systemd[1]: Starting ignition-files.service - Ignition (files)...
Jan 17 12:19:09.352948 systemd[1]: Mounting sysroot-oem.mount - /sysroot/oem...
Jan 17 12:19:09.362995 kernel: BTRFS: device label OEM devid 1 transid 14 /dev/vda6 scanned by mount (914)
Jan 17 12:19:09.363078 kernel: BTRFS info (device vda6): first mount of filesystem a70a40d6-5ab2-4665-81b1-b8e9f58c5ff8
Jan 17 12:19:09.364618 kernel: BTRFS info (device vda6): using crc32c (crc32c-intel) checksum algorithm
Jan 17 12:19:09.365904 kernel: BTRFS info (device vda6): using free space tree
Jan 17 12:19:09.371045 kernel: BTRFS info (device vda6): auto enabling async discard
Jan 17 12:19:09.373382 systemd[1]: Mounted sysroot-oem.mount - /sysroot/oem.
Jan 17 12:19:09.407785 ignition[931]: INFO : Ignition 2.19.0
Jan 17 12:19:09.407785 ignition[931]: INFO : Stage: files
Jan 17 12:19:09.409297 ignition[931]: INFO : no configs at "/usr/lib/ignition/base.d"
Jan 17 12:19:09.409297 ignition[931]: INFO : no config dir at "/usr/lib/ignition/base.platform.d/digitalocean"
Jan 17 12:19:09.409297 ignition[931]: DEBUG : files: compiled without relabeling support, skipping
Jan 17 12:19:09.411984 ignition[931]: INFO : files: ensureUsers: op(1): [started] creating or modifying user "core"
Jan 17 12:19:09.411984 ignition[931]: DEBUG : files: ensureUsers: op(1): executing: "usermod" "--root" "/sysroot" "core"
Jan 17 12:19:09.414926 ignition[931]: INFO : files: ensureUsers: op(1): [finished] creating or modifying user "core"
Jan 17 12:19:09.416020 ignition[931]: INFO : files: ensureUsers: op(2): [started] adding ssh keys to user "core"
Jan 17 12:19:09.417149 unknown[931]: wrote ssh authorized keys file for user: core
Jan 17 12:19:09.418132 ignition[931]: INFO : files: ensureUsers: op(2): [finished] adding ssh keys to user "core"
Jan 17 12:19:09.419628 ignition[931]: INFO : files: createFilesystemsFiles: createFiles: op(3): [started] writing file "/sysroot/home/core/install.sh"
Jan 17 12:19:09.420718 ignition[931]: INFO : files: createFilesystemsFiles: createFiles: op(3): [finished] writing file "/sysroot/home/core/install.sh"
Jan 17 12:19:09.420718 ignition[931]: INFO : files: createFilesystemsFiles: createFiles: op(4): [started] writing file "/sysroot/etc/flatcar/update.conf"
Jan 17 12:19:09.420718 ignition[931]: INFO : files: createFilesystemsFiles: createFiles: op(4): [finished] writing file "/sysroot/etc/flatcar/update.conf"
Jan 17 12:19:09.420718 ignition[931]: INFO : files: createFilesystemsFiles: createFiles: op(5): [started] writing link "/sysroot/etc/extensions/kubernetes.raw" -> "/opt/extensions/kubernetes/kubernetes-v1.31.0-x86-64.raw"
Jan 17 12:19:09.420718 ignition[931]: INFO : files: createFilesystemsFiles: createFiles: op(5): [finished] writing link "/sysroot/etc/extensions/kubernetes.raw" -> "/opt/extensions/kubernetes/kubernetes-v1.31.0-x86-64.raw"
Jan 17 12:19:09.420718 ignition[931]: INFO : files: createFilesystemsFiles: createFiles: op(6): [started] writing file "/sysroot/opt/extensions/kubernetes/kubernetes-v1.31.0-x86-64.raw"
Jan 17 12:19:09.427410 ignition[931]: INFO : files: createFilesystemsFiles: createFiles: op(6): GET https://github.com/flatcar/sysext-bakery/releases/download/latest/kubernetes-v1.31.0-x86-64.raw: attempt #1
Jan 17 12:19:09.769970 ignition[931]: INFO : files: createFilesystemsFiles: createFiles: op(6): GET result: OK
Jan 17 12:19:10.102160 ignition[931]: INFO : files: createFilesystemsFiles: createFiles: op(6): [finished] writing file "/sysroot/opt/extensions/kubernetes/kubernetes-v1.31.0-x86-64.raw"
Jan 17 12:19:10.104034 ignition[931]: INFO : files: createResultFile: createFiles: op(7): [started] writing file "/sysroot/etc/.ignition-result.json"
Jan 17 12:19:10.104034 ignition[931]: INFO : files: createResultFile: createFiles: op(7): [finished] writing file "/sysroot/etc/.ignition-result.json"
Jan 17 12:19:10.104034 ignition[931]: INFO : files: files passed
Jan 17 12:19:10.104034 ignition[931]: INFO : Ignition finished successfully
Jan 17 12:19:10.106160 systemd[1]: Finished ignition-files.service - Ignition (files).
Jan 17 12:19:10.112901 systemd[1]: Starting ignition-quench.service - Ignition (record completion)...
Jan 17 12:19:10.117114 systemd[1]: Starting initrd-setup-root-after-ignition.service - Root filesystem completion...
Jan 17 12:19:10.124762 systemd[1]: ignition-quench.service: Deactivated successfully.
Jan 17 12:19:10.124890 systemd[1]: Finished ignition-quench.service - Ignition (record completion).
Jan 17 12:19:10.146237 initrd-setup-root-after-ignition[959]: grep: /sysroot/etc/flatcar/enabled-sysext.conf: No such file or directory
Jan 17 12:19:10.146237 initrd-setup-root-after-ignition[959]: grep: /sysroot/usr/share/flatcar/enabled-sysext.conf: No such file or directory
Jan 17 12:19:10.149857 initrd-setup-root-after-ignition[963]: grep: /sysroot/etc/flatcar/enabled-sysext.conf: No such file or directory
Jan 17 12:19:10.151854 systemd[1]: Finished initrd-setup-root-after-ignition.service - Root filesystem completion.
Jan 17 12:19:10.152959 systemd[1]: Reached target ignition-complete.target - Ignition Complete.
Jan 17 12:19:10.164955 systemd[1]: Starting initrd-parse-etc.service - Mountpoints Configured in the Real Root...
Jan 17 12:19:10.197938 systemd[1]: initrd-parse-etc.service: Deactivated successfully.
Jan 17 12:19:10.198094 systemd[1]: Finished initrd-parse-etc.service - Mountpoints Configured in the Real Root.
Jan 17 12:19:10.200358 systemd[1]: Reached target initrd-fs.target - Initrd File Systems.
Jan 17 12:19:10.201319 systemd[1]: Reached target initrd.target - Initrd Default Target.
Jan 17 12:19:10.202896 systemd[1]: dracut-mount.service - dracut mount hook was skipped because no trigger condition checks were met.
Jan 17 12:19:10.208909 systemd[1]: Starting dracut-pre-pivot.service - dracut pre-pivot and cleanup hook...
Jan 17 12:19:10.219043 systemd-networkd[744]: eth1: Gained IPv6LL
Jan 17 12:19:10.233356 systemd[1]: Finished dracut-pre-pivot.service - dracut pre-pivot and cleanup hook.
Jan 17 12:19:10.242049 systemd[1]: Starting initrd-cleanup.service - Cleaning Up and Shutting Down Daemons...
Jan 17 12:19:10.262260 systemd[1]: Stopped target nss-lookup.target - Host and Network Name Lookups.
Jan 17 12:19:10.263443 systemd[1]: Stopped target remote-cryptsetup.target - Remote Encrypted Volumes.
Jan 17 12:19:10.265302 systemd[1]: Stopped target timers.target - Timer Units.
Jan 17 12:19:10.266950 systemd[1]: dracut-pre-pivot.service: Deactivated successfully.
Jan 17 12:19:10.267121 systemd[1]: Stopped dracut-pre-pivot.service - dracut pre-pivot and cleanup hook.
Jan 17 12:19:10.268932 systemd[1]: Stopped target initrd.target - Initrd Default Target.
Jan 17 12:19:10.269980 systemd[1]: Stopped target basic.target - Basic System.
Jan 17 12:19:10.271686 systemd[1]: Stopped target ignition-complete.target - Ignition Complete.
Jan 17 12:19:10.273126 systemd[1]: Stopped target ignition-diskful.target - Ignition Boot Disk Setup.
Jan 17 12:19:10.274347 systemd[1]: Stopped target initrd-root-device.target - Initrd Root Device.
Jan 17 12:19:10.275896 systemd[1]: Stopped target remote-fs.target - Remote File Systems.
Jan 17 12:19:10.277173 systemd[1]: Stopped target remote-fs-pre.target - Preparation for Remote File Systems.
Jan 17 12:19:10.278894 systemd[1]: Stopped target sysinit.target - System Initialization.
Jan 17 12:19:10.280214 systemd[1]: Stopped target local-fs.target - Local File Systems.
Jan 17 12:19:10.281718 systemd[1]: Stopped target swap.target - Swaps.
Jan 17 12:19:10.282793 systemd[1]: dracut-pre-mount.service: Deactivated successfully.
Jan 17 12:19:10.283070 systemd[1]: Stopped dracut-pre-mount.service - dracut pre-mount hook.
Jan 17 12:19:10.284697 systemd[1]: Stopped target cryptsetup.target - Local Encrypted Volumes.
Jan 17 12:19:10.285557 systemd[1]: Stopped target cryptsetup-pre.target - Local Encrypted Volumes (Pre).
Jan 17 12:19:10.286901 systemd[1]: clevis-luks-askpass.path: Deactivated successfully.
Jan 17 12:19:10.287309 systemd[1]: Stopped clevis-luks-askpass.path - Forward Password Requests to Clevis Directory Watch.
Jan 17 12:19:10.288462 systemd[1]: dracut-initqueue.service: Deactivated successfully.
Jan 17 12:19:10.288707 systemd[1]: Stopped dracut-initqueue.service - dracut initqueue hook.
Jan 17 12:19:10.290226 systemd[1]: initrd-setup-root-after-ignition.service: Deactivated successfully.
Jan 17 12:19:10.290447 systemd[1]: Stopped initrd-setup-root-after-ignition.service - Root filesystem completion.
Jan 17 12:19:10.292091 systemd[1]: ignition-files.service: Deactivated successfully.
Jan 17 12:19:10.292294 systemd[1]: Stopped ignition-files.service - Ignition (files).
Jan 17 12:19:10.293738 systemd[1]: flatcar-metadata-hostname.service: Deactivated successfully.
Jan 17 12:19:10.293956 systemd[1]: Stopped flatcar-metadata-hostname.service - Flatcar Metadata Hostname Agent.
Jan 17 12:19:10.302014 systemd[1]: Stopping ignition-mount.service - Ignition (mount)...
Jan 17 12:19:10.305024 systemd[1]: Stopping sysroot-boot.service - /sysroot/boot...
Jan 17 12:19:10.306703 systemd[1]: systemd-udev-trigger.service: Deactivated successfully.
Jan 17 12:19:10.306923 systemd[1]: Stopped systemd-udev-trigger.service - Coldplug All udev Devices.
Jan 17 12:19:10.309967 systemd[1]: dracut-pre-trigger.service: Deactivated successfully.
Jan 17 12:19:10.312280 systemd[1]: Stopped dracut-pre-trigger.service - dracut pre-trigger hook.
Jan 17 12:19:10.318383 systemd[1]: initrd-cleanup.service: Deactivated successfully.
Jan 17 12:19:10.319569 systemd[1]: Finished initrd-cleanup.service - Cleaning Up and Shutting Down Daemons.
Jan 17 12:19:10.339715 ignition[983]: INFO : Ignition 2.19.0
Jan 17 12:19:10.339715 ignition[983]: INFO : Stage: umount
Jan 17 12:19:10.345251 ignition[983]: INFO : no configs at "/usr/lib/ignition/base.d"
Jan 17 12:19:10.345251 ignition[983]: INFO : no config dir at "/usr/lib/ignition/base.platform.d/digitalocean"
Jan 17 12:19:10.345251 ignition[983]: INFO : umount: umount passed
Jan 17 12:19:10.345251 ignition[983]: INFO : Ignition finished successfully
Jan 17 12:19:10.342958 systemd[1]: ignition-mount.service: Deactivated successfully.
Jan 17 12:19:10.343114 systemd[1]: Stopped ignition-mount.service - Ignition (mount).
Jan 17 12:19:10.373503 systemd[1]: sysroot-boot.mount: Deactivated successfully.
Jan 17 12:19:10.374992 systemd[1]: ignition-disks.service: Deactivated successfully.
Jan 17 12:19:10.375202 systemd[1]: Stopped ignition-disks.service - Ignition (disks).
Jan 17 12:19:10.376295 systemd[1]: ignition-kargs.service: Deactivated successfully.
Jan 17 12:19:10.376383 systemd[1]: Stopped ignition-kargs.service - Ignition (kargs).
Jan 17 12:19:10.377982 systemd[1]: ignition-fetch.service: Deactivated successfully.
Jan 17 12:19:10.378057 systemd[1]: Stopped ignition-fetch.service - Ignition (fetch).
Jan 17 12:19:10.379397 systemd[1]: Stopped target network.target - Network.
Jan 17 12:19:10.380583 systemd[1]: ignition-fetch-offline.service: Deactivated successfully.
Jan 17 12:19:10.380672 systemd[1]: Stopped ignition-fetch-offline.service - Ignition (fetch-offline).
Jan 17 12:19:10.381928 systemd[1]: Stopped target paths.target - Path Units.
Jan 17 12:19:10.383187 systemd[1]: systemd-ask-password-console.path: Deactivated successfully.
Jan 17 12:19:10.401707 systemd[1]: Stopped systemd-ask-password-console.path - Dispatch Password Requests to Console Directory Watch.
Jan 17 12:19:10.403892 systemd[1]: Stopped target slices.target - Slice Units.
Jan 17 12:19:10.405426 systemd[1]: Stopped target sockets.target - Socket Units.
Jan 17 12:19:10.406260 systemd[1]: iscsid.socket: Deactivated successfully.
Jan 17 12:19:10.406359 systemd[1]: Closed iscsid.socket - Open-iSCSI iscsid Socket.
Jan 17 12:19:10.409786 systemd[1]: iscsiuio.socket: Deactivated successfully.
Jan 17 12:19:10.409879 systemd[1]: Closed iscsiuio.socket - Open-iSCSI iscsiuio Socket.
Jan 17 12:19:10.411039 systemd[1]: ignition-setup.service: Deactivated successfully.
Jan 17 12:19:10.411151 systemd[1]: Stopped ignition-setup.service - Ignition (setup).
Jan 17 12:19:10.411892 systemd-networkd[744]: eth0: Gained IPv6LL
Jan 17 12:19:10.414220 systemd[1]: ignition-setup-pre.service: Deactivated successfully.
Jan 17 12:19:10.414343 systemd[1]: Stopped ignition-setup-pre.service - Ignition env setup.
Jan 17 12:19:10.415657 systemd[1]: Stopping systemd-networkd.service - Network Configuration...
Jan 17 12:19:10.417163 systemd[1]: Stopping systemd-resolved.service - Network Name Resolution...
Jan 17 12:19:10.419106 systemd[1]: sysroot-boot.service: Deactivated successfully.
Jan 17 12:19:10.419281 systemd[1]: Stopped sysroot-boot.service - /sysroot/boot.
Jan 17 12:19:10.420883 systemd[1]: initrd-setup-root.service: Deactivated successfully.
Jan 17 12:19:10.421019 systemd[1]: Stopped initrd-setup-root.service - Root filesystem setup.
Jan 17 12:19:10.421222 systemd-networkd[744]: eth0: DHCPv6 lease lost
Jan 17 12:19:10.424763 systemd-networkd[744]: eth1: DHCPv6 lease lost
Jan 17 12:19:10.427948 systemd[1]: systemd-networkd.service: Deactivated successfully.
Jan 17 12:19:10.428097 systemd[1]: Stopped systemd-networkd.service - Network Configuration.
Jan 17 12:19:10.429962 systemd[1]: systemd-networkd.socket: Deactivated successfully.
Jan 17 12:19:10.430028 systemd[1]: Closed systemd-networkd.socket - Network Service Netlink Socket.
Jan 17 12:19:10.444838 systemd[1]: Stopping network-cleanup.service - Network Cleanup...
Jan 17 12:19:10.445680 systemd[1]: parse-ip-for-networkd.service: Deactivated successfully.
Jan 17 12:19:10.445802 systemd[1]: Stopped parse-ip-for-networkd.service - Write systemd-networkd units from cmdline.
Jan 17 12:19:10.450051 systemd[1]: Stopping systemd-udevd.service - Rule-based Manager for Device Events and Files...
Jan 17 12:19:10.452894 systemd[1]: systemd-resolved.service: Deactivated successfully.
Jan 17 12:19:10.453097 systemd[1]: Stopped systemd-resolved.service - Network Name Resolution.
Jan 17 12:19:10.469405 systemd[1]: systemd-udevd.service: Deactivated successfully.
Jan 17 12:19:10.470864 systemd[1]: Stopped systemd-udevd.service - Rule-based Manager for Device Events and Files.
Jan 17 12:19:10.476106 systemd[1]: systemd-udevd-control.socket: Deactivated successfully.
Jan 17 12:19:10.476277 systemd[1]: Closed systemd-udevd-control.socket - udev Control Socket.
Jan 17 12:19:10.477785 systemd[1]: systemd-udevd-kernel.socket: Deactivated successfully.
Jan 17 12:19:10.477857 systemd[1]: Closed systemd-udevd-kernel.socket - udev Kernel Socket.
Jan 17 12:19:10.479233 systemd[1]: dracut-pre-udev.service: Deactivated successfully.
Jan 17 12:19:10.479341 systemd[1]: Stopped dracut-pre-udev.service - dracut pre-udev hook.
Jan 17 12:19:10.481374 systemd[1]: dracut-cmdline.service: Deactivated successfully.
Jan 17 12:19:10.481478 systemd[1]: Stopped dracut-cmdline.service - dracut cmdline hook.
Jan 17 12:19:10.482659 systemd[1]: dracut-cmdline-ask.service: Deactivated successfully.
Jan 17 12:19:10.482758 systemd[1]: Stopped dracut-cmdline-ask.service - dracut ask for additional cmdline parameters.
Jan 17 12:19:10.486880 systemd[1]: Starting initrd-udevadm-cleanup-db.service - Cleanup udev Database...
Jan 17 12:19:10.488153 systemd[1]: systemd-sysctl.service: Deactivated successfully.
Jan 17 12:19:10.488261 systemd[1]: Stopped systemd-sysctl.service - Apply Kernel Variables.
Jan 17 12:19:10.493167 systemd[1]: systemd-modules-load.service: Deactivated successfully.
Jan 17 12:19:10.493320 systemd[1]: Stopped systemd-modules-load.service - Load Kernel Modules.
Jan 17 12:19:10.496086 systemd[1]: systemd-tmpfiles-setup-dev.service: Deactivated successfully.
Jan 17 12:19:10.496202 systemd[1]: Stopped systemd-tmpfiles-setup-dev.service - Create Static Device Nodes in /dev.
Jan 17 12:19:10.497612 systemd[1]: systemd-tmpfiles-setup-dev-early.service: Deactivated successfully.
Jan 17 12:19:10.497707 systemd[1]: Stopped systemd-tmpfiles-setup-dev-early.service - Create Static Device Nodes in /dev gracefully.
Jan 17 12:19:10.498633 systemd[1]: kmod-static-nodes.service: Deactivated successfully.
Jan 17 12:19:10.498726 systemd[1]: Stopped kmod-static-nodes.service - Create List of Static Device Nodes.
Jan 17 12:19:10.500259 systemd[1]: systemd-tmpfiles-setup.service: Deactivated successfully.
Jan 17 12:19:10.500342 systemd[1]: Stopped systemd-tmpfiles-setup.service - Create System Files and Directories.
Jan 17 12:19:10.501716 systemd[1]: systemd-vconsole-setup.service: Deactivated successfully.
Jan 17 12:19:10.501801 systemd[1]: Stopped systemd-vconsole-setup.service - Virtual Console Setup.
Jan 17 12:19:10.507995 systemd[1]: network-cleanup.service: Deactivated successfully.
Jan 17 12:19:10.508185 systemd[1]: Stopped network-cleanup.service - Network Cleanup.
Jan 17 12:19:10.521003 systemd[1]: initrd-udevadm-cleanup-db.service: Deactivated successfully.
Jan 17 12:19:10.521226 systemd[1]: Finished initrd-udevadm-cleanup-db.service - Cleanup udev Database.
Jan 17 12:19:10.523617 systemd[1]: Reached target initrd-switch-root.target - Switch Root.
Jan 17 12:19:10.531068 systemd[1]: Starting initrd-switch-root.service - Switch Root...
Jan 17 12:19:10.556821 systemd[1]: Switching root.
Jan 17 12:19:10.628369 systemd-journald[184]: Journal stopped
Jan 17 12:19:12.049680 systemd-journald[184]: Received SIGTERM from PID 1 (systemd).
Jan 17 12:19:12.049785 kernel: SELinux: policy capability network_peer_controls=1
Jan 17 12:19:12.049803 kernel: SELinux: policy capability open_perms=1
Jan 17 12:19:12.049821 kernel: SELinux: policy capability extended_socket_class=1
Jan 17 12:19:12.049833 kernel: SELinux: policy capability always_check_network=0
Jan 17 12:19:12.049845 kernel: SELinux: policy capability cgroup_seclabel=1
Jan 17 12:19:12.049856 kernel: SELinux: policy capability nnp_nosuid_transition=1
Jan 17 12:19:12.049867 kernel: SELinux: policy capability genfs_seclabel_symlinks=0
Jan 17 12:19:12.049886 kernel: SELinux: policy capability ioctl_skip_cloexec=0
Jan 17 12:19:12.049898 kernel: audit: type=1403 audit(1737116350.798:2): auid=4294967295 ses=4294967295 lsm=selinux res=1
Jan 17 12:19:12.049917 systemd[1]: Successfully loaded SELinux policy in 61.313ms.
Jan 17 12:19:12.049945 systemd[1]: Relabeled /dev, /dev/shm, /run, /sys/fs/cgroup in 13.684ms.
Jan 17 12:19:12.049965 systemd[1]: systemd 255 running in system mode (+PAM +AUDIT +SELINUX -APPARMOR +IMA +SMACK +SECCOMP +GCRYPT -GNUTLS +OPENSSL -ACL +BLKID +CURL +ELFUTILS -FIDO2 +IDN2 -IDN +IPTC +KMOD +LIBCRYPTSETUP +LIBFDISK +PCRE2 -PWQUALITY -P11KIT -QRENCODE +TPM2 +BZIP2 +LZ4 +XZ +ZLIB +ZSTD -BPF_FRAMEWORK -XKBCOMMON +UTMP -SYSVINIT default-hierarchy=unified)
Jan 17 12:19:12.049981 systemd[1]: Detected virtualization kvm.
Jan 17 12:19:12.049993 systemd[1]: Detected architecture x86-64.
Jan 17 12:19:12.050006 systemd[1]: Detected first boot.
Jan 17 12:19:12.050021 systemd[1]: Hostname set to .
Jan 17 12:19:12.050034 systemd[1]: Initializing machine ID from VM UUID.
Jan 17 12:19:12.050050 zram_generator::config[1029]: No configuration found.
Jan 17 12:19:12.050073 systemd[1]: Populated /etc with preset unit settings.
Jan 17 12:19:12.050093 systemd[1]: initrd-switch-root.service: Deactivated successfully.
Jan 17 12:19:12.050105 systemd[1]: Stopped initrd-switch-root.service - Switch Root.
Jan 17 12:19:12.050118 systemd[1]: systemd-journald.service: Scheduled restart job, restart counter is at 1.
Jan 17 12:19:12.050132 systemd[1]: Created slice system-addon\x2dconfig.slice - Slice /system/addon-config.
Jan 17 12:19:12.050144 systemd[1]: Created slice system-addon\x2drun.slice - Slice /system/addon-run.
Jan 17 12:19:12.050161 systemd[1]: Created slice system-getty.slice - Slice /system/getty.
Jan 17 12:19:12.050174 systemd[1]: Created slice system-modprobe.slice - Slice /system/modprobe.
Jan 17 12:19:12.050186 systemd[1]: Created slice system-serial\x2dgetty.slice - Slice /system/serial-getty.
Jan 17 12:19:12.050199 systemd[1]: Created slice system-system\x2dcloudinit.slice - Slice /system/system-cloudinit.
Jan 17 12:19:12.050212 systemd[1]: Created slice system-systemd\x2dfsck.slice - Slice /system/systemd-fsck.
Jan 17 12:19:12.050223 systemd[1]: Created slice user.slice - User and Session Slice.
Jan 17 12:19:12.050236 systemd[1]: Started clevis-luks-askpass.path - Forward Password Requests to Clevis Directory Watch.
Jan 17 12:19:12.050249 systemd[1]: Started systemd-ask-password-console.path - Dispatch Password Requests to Console Directory Watch.
Jan 17 12:19:12.050261 systemd[1]: Started systemd-ask-password-wall.path - Forward Password Requests to Wall Directory Watch.
Jan 17 12:19:12.050276 systemd[1]: Set up automount boot.automount - Boot partition Automount Point.
Jan 17 12:19:12.050289 systemd[1]: Set up automount proc-sys-fs-binfmt_misc.automount - Arbitrary Executable File Formats File System Automount Point.
Jan 17 12:19:12.050301 systemd[1]: Expecting device dev-disk-by\x2dlabel-OEM.device - /dev/disk/by-label/OEM...
Jan 17 12:19:12.050313 systemd[1]: Expecting device dev-ttyS0.device - /dev/ttyS0...
Jan 17 12:19:12.050325 systemd[1]: Reached target cryptsetup-pre.target - Local Encrypted Volumes (Pre).
Jan 17 12:19:12.050337 systemd[1]: Stopped target initrd-switch-root.target - Switch Root.
Jan 17 12:19:12.050350 systemd[1]: Stopped target initrd-fs.target - Initrd File Systems.
Jan 17 12:19:12.050366 systemd[1]: Stopped target initrd-root-fs.target - Initrd Root File System.
Jan 17 12:19:12.050379 systemd[1]: Reached target integritysetup.target - Local Integrity Protected Volumes.
Jan 17 12:19:12.050391 systemd[1]: Reached target remote-cryptsetup.target - Remote Encrypted Volumes.
Jan 17 12:19:12.050403 systemd[1]: Reached target remote-fs.target - Remote File Systems.
Jan 17 12:19:12.050415 systemd[1]: Reached target slices.target - Slice Units.
Jan 17 12:19:12.050427 systemd[1]: Reached target swap.target - Swaps.
Jan 17 12:19:12.050438 systemd[1]: Reached target veritysetup.target - Local Verity Protected Volumes.
Jan 17 12:19:12.050450 systemd[1]: Listening on systemd-coredump.socket - Process Core Dump Socket.
Jan 17 12:19:12.050465 systemd[1]: Listening on systemd-networkd.socket - Network Service Netlink Socket.
Jan 17 12:19:12.050478 systemd[1]: Listening on systemd-udevd-control.socket - udev Control Socket.
Jan 17 12:19:12.050491 systemd[1]: Listening on systemd-udevd-kernel.socket - udev Kernel Socket.
Jan 17 12:19:12.050506 systemd[1]: Listening on systemd-userdbd.socket - User Database Manager Socket.
Jan 17 12:19:12.050869 systemd[1]: Mounting dev-hugepages.mount - Huge Pages File System...
Jan 17 12:19:12.050891 systemd[1]: Mounting dev-mqueue.mount - POSIX Message Queue File System...
Jan 17 12:19:12.050904 systemd[1]: Mounting media.mount - External Media Directory...
Jan 17 12:19:12.050916 systemd[1]: proc-xen.mount - /proc/xen was skipped because of an unmet condition check (ConditionVirtualization=xen).
Jan 17 12:19:12.050928 systemd[1]: Mounting sys-kernel-debug.mount - Kernel Debug File System...
Jan 17 12:19:12.050947 systemd[1]: Mounting sys-kernel-tracing.mount - Kernel Trace File System...
Jan 17 12:19:12.050964 systemd[1]: Mounting tmp.mount - Temporary Directory /tmp...
Jan 17 12:19:12.050986 systemd[1]: var-lib-machines.mount - Virtual Machine and Container Storage (Compatibility) was skipped because of an unmet condition check (ConditionPathExists=/var/lib/machines.raw).
Jan 17 12:19:12.051003 systemd[1]: Reached target machines.target - Containers.
Jan 17 12:19:12.051016 systemd[1]: Starting flatcar-tmpfiles.service - Create missing system files...
Jan 17 12:19:12.051028 systemd[1]: ignition-delete-config.service - Ignition (delete config) was skipped because no trigger condition checks were met.
Jan 17 12:19:12.051041 systemd[1]: Starting kmod-static-nodes.service - Create List of Static Device Nodes...
Jan 17 12:19:12.051053 systemd[1]: Starting modprobe@configfs.service - Load Kernel Module configfs...
Jan 17 12:19:12.051069 systemd[1]: Starting modprobe@dm_mod.service - Load Kernel Module dm_mod...
Jan 17 12:19:12.051081 systemd[1]: Starting modprobe@drm.service - Load Kernel Module drm...
Jan 17 12:19:12.051094 systemd[1]: Starting modprobe@efi_pstore.service - Load Kernel Module efi_pstore...
Jan 17 12:19:12.051106 systemd[1]: Starting modprobe@fuse.service - Load Kernel Module fuse...
Jan 17 12:19:12.051118 systemd[1]: Starting modprobe@loop.service - Load Kernel Module loop...
Jan 17 12:19:12.051132 systemd[1]: setup-nsswitch.service - Create /etc/nsswitch.conf was skipped because of an unmet condition check (ConditionPathExists=!/etc/nsswitch.conf).
Jan 17 12:19:12.051144 systemd[1]: systemd-fsck-root.service: Deactivated successfully.
Jan 17 12:19:12.051156 systemd[1]: Stopped systemd-fsck-root.service - File System Check on Root Device.
Jan 17 12:19:12.051172 systemd[1]: systemd-fsck-usr.service: Deactivated successfully.
Jan 17 12:19:12.051184 systemd[1]: Stopped systemd-fsck-usr.service.
Jan 17 12:19:12.051197 systemd[1]: Starting systemd-journald.service - Journal Service...
Jan 17 12:19:12.051209 systemd[1]: Starting systemd-modules-load.service - Load Kernel Modules...
Jan 17 12:19:12.051221 systemd[1]: Starting systemd-network-generator.service - Generate network units from Kernel command line...
Jan 17 12:19:12.051234 systemd[1]: Starting systemd-remount-fs.service - Remount Root and Kernel File Systems...
Jan 17 12:19:12.051250 kernel: loop: module loaded
Jan 17 12:19:12.051271 systemd[1]: Starting systemd-udev-trigger.service - Coldplug All udev Devices...
Jan 17 12:19:12.051291 systemd[1]: verity-setup.service: Deactivated successfully.
Jan 17 12:19:12.051314 systemd[1]: Stopped verity-setup.service.
Jan 17 12:19:12.051331 systemd[1]: xenserver-pv-version.service - Set fake PV driver version for XenServer was skipped because of an unmet condition check (ConditionVirtualization=xen).
Jan 17 12:19:12.051343 systemd[1]: Mounted dev-hugepages.mount - Huge Pages File System.
Jan 17 12:19:12.051354 systemd[1]: Mounted dev-mqueue.mount - POSIX Message Queue File System.
Jan 17 12:19:12.051367 systemd[1]: Mounted media.mount - External Media Directory.
Jan 17 12:19:12.051388 systemd[1]: Mounted sys-kernel-debug.mount - Kernel Debug File System.
Jan 17 12:19:12.051412 systemd[1]: Mounted sys-kernel-tracing.mount - Kernel Trace File System.
Jan 17 12:19:12.051425 systemd[1]: Mounted tmp.mount - Temporary Directory /tmp.
Jan 17 12:19:12.051437 systemd[1]: Finished kmod-static-nodes.service - Create List of Static Device Nodes.
Jan 17 12:19:12.051453 systemd[1]: modprobe@configfs.service: Deactivated successfully.
Jan 17 12:19:12.051465 systemd[1]: Finished modprobe@configfs.service - Load Kernel Module configfs.
Jan 17 12:19:12.051480 systemd[1]: modprobe@dm_mod.service: Deactivated successfully.
Jan 17 12:19:12.051493 systemd[1]: Finished modprobe@dm_mod.service - Load Kernel Module dm_mod.
Jan 17 12:19:12.051505 systemd[1]: modprobe@efi_pstore.service: Deactivated successfully.
Jan 17 12:19:12.051592 systemd[1]: Finished modprobe@efi_pstore.service - Load Kernel Module efi_pstore.
Jan 17 12:19:12.051606 systemd[1]: modprobe@loop.service: Deactivated successfully.
Jan 17 12:19:12.051619 systemd[1]: Finished modprobe@loop.service - Load Kernel Module loop.
Jan 17 12:19:12.051631 systemd[1]: Finished systemd-modules-load.service - Load Kernel Modules.
Jan 17 12:19:12.051644 kernel: ACPI: bus type drm_connector registered
Jan 17 12:19:12.051660 systemd[1]: Finished systemd-network-generator.service - Generate network units from Kernel command line.
Jan 17 12:19:12.051672 kernel: fuse: init (API version 7.39)
Jan 17 12:19:12.051683 systemd[1]: Finished systemd-remount-fs.service - Remount Root and Kernel File Systems.
Jan 17 12:19:12.051696 systemd[1]: modprobe@drm.service: Deactivated successfully.
Jan 17 12:19:12.051708 systemd[1]: Finished modprobe@drm.service - Load Kernel Module drm.
Jan 17 12:19:12.051721 systemd[1]: modprobe@fuse.service: Deactivated successfully.
Jan 17 12:19:12.051733 systemd[1]: Finished modprobe@fuse.service - Load Kernel Module fuse.
Jan 17 12:19:12.051745 systemd[1]: Reached target network-pre.target - Preparation for Network.
Jan 17 12:19:12.051809 systemd-journald[1099]: Collecting audit messages is disabled.
Jan 17 12:19:12.051841 systemd-journald[1099]: Journal started
Jan 17 12:19:12.051868 systemd-journald[1099]: Runtime Journal (/run/log/journal/8981e9b2377f4671ae86cf2fce3c6ac9) is 4.9M, max 39.3M, 34.4M free.
Jan 17 12:19:12.054663 systemd[1]: Mounting sys-fs-fuse-connections.mount - FUSE Control File System...
Jan 17 12:19:11.584245 systemd[1]: Queued start job for default target multi-user.target.
Jan 17 12:19:11.607323 systemd[1]: Unnecessary job was removed for dev-vda6.device - /dev/vda6.
Jan 17 12:19:11.607976 systemd[1]: systemd-journald.service: Deactivated successfully.
Jan 17 12:19:12.064775 systemd[1]: Mounting sys-kernel-config.mount - Kernel Configuration File System...
Jan 17 12:19:12.072616 systemd[1]: remount-root.service - Remount Root File System was skipped because of an unmet condition check (ConditionPathIsReadWrite=!/).
Jan 17 12:19:12.079341 systemd[1]: Reached target local-fs.target - Local File Systems.
Jan 17 12:19:12.086657 systemd[1]: Listening on systemd-sysext.socket - System Extension Image Management (Varlink).
Jan 17 12:19:12.101616 systemd[1]: Starting dracut-shutdown.service - Restore /run/initramfs on shutdown...
Jan 17 12:19:12.112557 systemd[1]: Starting ldconfig.service - Rebuild Dynamic Linker Cache...
Jan 17 12:19:12.117587 systemd[1]: systemd-binfmt.service - Set Up Additional Binary Formats was skipped because no trigger condition checks were met.
Jan 17 12:19:12.132488 systemd[1]: Starting systemd-hwdb-update.service - Rebuild Hardware Database...
Jan 17 12:19:12.136552 systemd[1]: systemd-pstore.service - Platform Persistent Storage Archival was skipped because of an unmet condition check (ConditionDirectoryNotEmpty=/sys/fs/pstore).
Jan 17 12:19:12.147558 systemd[1]: Starting systemd-random-seed.service - Load/Save OS Random Seed...
Jan 17 12:19:12.154727 systemd[1]: systemd-repart.service - Repartition Root Disk was skipped because no trigger condition checks were met.
Jan 17 12:19:12.168506 systemd[1]: Starting systemd-sysctl.service - Apply Kernel Variables...
Jan 17 12:19:12.188660 systemd[1]: Starting systemd-sysext.service - Merge System Extension Images into /usr/ and /opt/...
Jan 17 12:19:12.217659 systemd[1]: Starting systemd-tmpfiles-setup-dev-early.service - Create Static Device Nodes in /dev gracefully...
Jan 17 12:19:12.228205 systemd[1]: Started systemd-journald.service - Journal Service.
Jan 17 12:19:12.232601 systemd[1]: Finished flatcar-tmpfiles.service - Create missing system files.
Jan 17 12:19:12.233938 systemd[1]: Mounted sys-fs-fuse-connections.mount - FUSE Control File System.
Jan 17 12:19:12.235974 systemd[1]: Mounted sys-kernel-config.mount - Kernel Configuration File System.
Jan 17 12:19:12.238187 systemd[1]: Finished dracut-shutdown.service - Restore /run/initramfs on shutdown.
Jan 17 12:19:12.240044 systemd[1]: Finished systemd-random-seed.service - Load/Save OS Random Seed.
Jan 17 12:19:12.277414 kernel: loop0: detected capacity change from 0 to 142488
Jan 17 12:19:12.295940 systemd[1]: Reached target first-boot-complete.target - First Boot Complete.
Jan 17 12:19:12.311991 systemd[1]: Starting systemd-journal-flush.service - Flush Journal to Persistent Storage...
Jan 17 12:19:12.328829 systemd[1]: Starting systemd-machine-id-commit.service - Commit a transient machine-id on disk...
Jan 17 12:19:12.367563 kernel: squashfs: version 4.0 (2009/01/31) Phillip Lougher
Jan 17 12:19:12.379659 systemd[1]: Finished systemd-udev-trigger.service - Coldplug All udev Devices.
Jan 17 12:19:12.384569 systemd[1]: Starting systemd-udev-settle.service - Wait for udev To Complete Device Initialization...
Jan 17 12:19:12.398181 systemd[1]: etc-machine\x2did.mount: Deactivated successfully.
Jan 17 12:19:12.405997 systemd[1]: Finished systemd-machine-id-commit.service - Commit a transient machine-id on disk.
Jan 17 12:19:12.422731 kernel: loop1: detected capacity change from 0 to 140768
Jan 17 12:19:12.441873 systemd-tmpfiles[1129]: ACLs are not supported, ignoring.
Jan 17 12:19:12.441908 systemd-tmpfiles[1129]: ACLs are not supported, ignoring.
Jan 17 12:19:12.449159 systemd-journald[1099]: Time spent on flushing to /var/log/journal/8981e9b2377f4671ae86cf2fce3c6ac9 is 85.269ms for 984 entries.
Jan 17 12:19:12.449159 systemd-journald[1099]: System Journal (/var/log/journal/8981e9b2377f4671ae86cf2fce3c6ac9) is 8.0M, max 195.6M, 187.6M free.
Jan 17 12:19:12.575938 systemd-journald[1099]: Received client request to flush runtime journal.
Jan 17 12:19:12.576021 kernel: loop2: detected capacity change from 0 to 205544
Jan 17 12:19:12.465855 systemd[1]: Finished systemd-tmpfiles-setup-dev-early.service - Create Static Device Nodes in /dev gracefully.
Jan 17 12:19:12.481921 systemd[1]: Starting systemd-sysusers.service - Create System Users...
Jan 17 12:19:12.488078 systemd[1]: Finished systemd-sysctl.service - Apply Kernel Variables.
Jan 17 12:19:12.510928 udevadm[1159]: systemd-udev-settle.service is deprecated. Please fix lvm2-activation.service, lvm2-activation-early.service not to pull it in.
Jan 17 12:19:12.586266 systemd[1]: Finished systemd-journal-flush.service - Flush Journal to Persistent Storage.
Jan 17 12:19:12.604670 systemd[1]: Finished systemd-sysusers.service - Create System Users.
Jan 17 12:19:12.614684 kernel: loop3: detected capacity change from 0 to 8
Jan 17 12:19:12.617065 systemd[1]: Starting systemd-tmpfiles-setup-dev.service - Create Static Device Nodes in /dev...
Jan 17 12:19:12.663586 kernel: loop4: detected capacity change from 0 to 142488
Jan 17 12:19:12.684970 kernel: loop5: detected capacity change from 0 to 140768
Jan 17 12:19:12.690242 systemd-tmpfiles[1171]: ACLs are not supported, ignoring.
Jan 17 12:19:12.690272 systemd-tmpfiles[1171]: ACLs are not supported, ignoring.
Jan 17 12:19:12.708192 kernel: loop6: detected capacity change from 0 to 205544
Jan 17 12:19:12.713646 systemd[1]: Finished systemd-tmpfiles-setup-dev.service - Create Static Device Nodes in /dev.
Jan 17 12:19:12.728653 kernel: loop7: detected capacity change from 0 to 8
Jan 17 12:19:12.729062 (sd-merge)[1174]: Using extensions 'containerd-flatcar', 'docker-flatcar', 'kubernetes', 'oem-digitalocean'.
Jan 17 12:19:12.729922 (sd-merge)[1174]: Merged extensions into '/usr'.
Jan 17 12:19:12.735679 systemd[1]: Reloading requested from client PID 1128 ('systemd-sysext') (unit systemd-sysext.service)...
Jan 17 12:19:12.735702 systemd[1]: Reloading...
Jan 17 12:19:12.889154 zram_generator::config[1201]: No configuration found.
Jan 17 12:19:13.082625 ldconfig[1124]: /sbin/ldconfig: /lib/ld.so.conf is not an ELF file - it has the wrong magic bytes at the start.
Jan 17 12:19:13.160699 systemd[1]: /usr/lib/systemd/system/docker.socket:6: ListenStream= references a path below legacy directory /var/run/, updating /var/run/docker.sock → /run/docker.sock; please update the unit file accordingly.
Jan 17 12:19:13.221250 systemd[1]: Reloading finished in 484 ms.
Jan 17 12:19:13.265211 systemd[1]: Finished ldconfig.service - Rebuild Dynamic Linker Cache.
Jan 17 12:19:13.267147 systemd[1]: Finished systemd-sysext.service - Merge System Extension Images into /usr/ and /opt/.
Jan 17 12:19:13.281026 systemd[1]: Starting ensure-sysext.service...
Jan 17 12:19:13.292614 systemd[1]: Starting systemd-tmpfiles-setup.service - Create System Files and Directories...
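The `(sd-merge)` entry above names the system extension images that `systemd-sysext` overlaid onto `/usr`. When auditing boots like this one it can be handy to pull that list out of the message programmatically; the sketch below simply extracts the single-quoted names, a format taken from the entry shown in this log rather than from a documented interface, and `merged_extensions` is a hypothetical helper name.

```python
import re

def merged_extensions(message: str) -> list[str]:
    """Return the single-quoted extension names from an sd-merge message."""
    return re.findall(r"'([^']+)'", message)

names = merged_extensions(
    "Using extensions 'containerd-flatcar', 'docker-flatcar', "
    "'kubernetes', 'oem-digitalocean'."
)
```

On a live Flatcar host the authoritative view of merged extensions comes from `systemd-sysext status`; parsing the boot log is only useful after the fact.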
Jan 17 12:19:13.318396 systemd[1]: Reloading requested from client PID 1244 ('systemctl') (unit ensure-sysext.service)...
Jan 17 12:19:13.318416 systemd[1]: Reloading...
Jan 17 12:19:13.364225 systemd-tmpfiles[1245]: /usr/lib/tmpfiles.d/provision.conf:20: Duplicate line for path "/root", ignoring.
Jan 17 12:19:13.366576 systemd-tmpfiles[1245]: /usr/lib/tmpfiles.d/systemd-flatcar.conf:6: Duplicate line for path "/var/log/journal", ignoring.
Jan 17 12:19:13.367956 systemd-tmpfiles[1245]: /usr/lib/tmpfiles.d/systemd.conf:29: Duplicate line for path "/var/lib/systemd", ignoring.
Jan 17 12:19:13.368389 systemd-tmpfiles[1245]: ACLs are not supported, ignoring.
Jan 17 12:19:13.368488 systemd-tmpfiles[1245]: ACLs are not supported, ignoring.
Jan 17 12:19:13.379978 systemd-tmpfiles[1245]: Detected autofs mount point /boot during canonicalization of boot.
Jan 17 12:19:13.380147 systemd-tmpfiles[1245]: Skipping /boot
Jan 17 12:19:13.418855 systemd-tmpfiles[1245]: Detected autofs mount point /boot during canonicalization of boot.
Jan 17 12:19:13.420592 systemd-tmpfiles[1245]: Skipping /boot
Jan 17 12:19:13.495556 zram_generator::config[1268]: No configuration found.
Jan 17 12:19:13.693956 systemd[1]: /usr/lib/systemd/system/docker.socket:6: ListenStream= references a path below legacy directory /var/run/, updating /var/run/docker.sock → /run/docker.sock; please update the unit file accordingly.
Jan 17 12:19:13.747172 systemd[1]: Reloading finished in 428 ms.
Jan 17 12:19:13.767964 systemd[1]: Finished systemd-hwdb-update.service - Rebuild Hardware Database.
Jan 17 12:19:13.775291 systemd[1]: Finished systemd-tmpfiles-setup.service - Create System Files and Directories.
Jan 17 12:19:13.792784 systemd[1]: Starting audit-rules.service - Load Security Auditing Rules...
Jan 17 12:19:13.797815 systemd[1]: Starting clean-ca-certificates.service - Clean up broken links in /etc/ssl/certs...
Jan 17 12:19:13.800490 systemd[1]: Starting systemd-journal-catalog-update.service - Rebuild Journal Catalog...
Jan 17 12:19:13.813584 systemd[1]: Starting systemd-resolved.service - Network Name Resolution...
Jan 17 12:19:13.818699 systemd[1]: Starting systemd-udevd.service - Rule-based Manager for Device Events and Files...
Jan 17 12:19:13.823276 systemd[1]: Starting systemd-update-utmp.service - Record System Boot/Shutdown in UTMP...
Jan 17 12:19:13.835033 systemd[1]: proc-xen.mount - /proc/xen was skipped because of an unmet condition check (ConditionVirtualization=xen).
Jan 17 12:19:13.835310 systemd[1]: ignition-delete-config.service - Ignition (delete config) was skipped because no trigger condition checks were met.
Jan 17 12:19:13.840971 systemd[1]: Starting modprobe@dm_mod.service - Load Kernel Module dm_mod...
Jan 17 12:19:13.845614 systemd[1]: Starting modprobe@efi_pstore.service - Load Kernel Module efi_pstore...
Jan 17 12:19:13.853017 systemd[1]: Starting modprobe@loop.service - Load Kernel Module loop...
Jan 17 12:19:13.855808 systemd[1]: systemd-binfmt.service - Set Up Additional Binary Formats was skipped because no trigger condition checks were met.
Jan 17 12:19:13.856050 systemd[1]: xenserver-pv-version.service - Set fake PV driver version for XenServer was skipped because of an unmet condition check (ConditionVirtualization=xen).
Jan 17 12:19:13.862694 systemd[1]: proc-xen.mount - /proc/xen was skipped because of an unmet condition check (ConditionVirtualization=xen).
Jan 17 12:19:13.863035 systemd[1]: ignition-delete-config.service - Ignition (delete config) was skipped because no trigger condition checks were met.
Jan 17 12:19:13.863366 systemd[1]: systemd-binfmt.service - Set Up Additional Binary Formats was skipped because no trigger condition checks were met.
Jan 17 12:19:13.874083 systemd[1]: Starting systemd-userdbd.service - User Database Manager...
Jan 17 12:19:13.874884 systemd[1]: xenserver-pv-version.service - Set fake PV driver version for XenServer was skipped because of an unmet condition check (ConditionVirtualization=xen).
Jan 17 12:19:13.878817 systemd[1]: Finished systemd-journal-catalog-update.service - Rebuild Journal Catalog.
Jan 17 12:19:13.893796 systemd[1]: proc-xen.mount - /proc/xen was skipped because of an unmet condition check (ConditionVirtualization=xen).
Jan 17 12:19:13.894245 systemd[1]: ignition-delete-config.service - Ignition (delete config) was skipped because no trigger condition checks were met.
Jan 17 12:19:13.905216 systemd[1]: Starting modprobe@drm.service - Load Kernel Module drm...
Jan 17 12:19:13.906186 systemd[1]: systemd-binfmt.service - Set Up Additional Binary Formats was skipped because no trigger condition checks were met.
Jan 17 12:19:13.911090 systemd[1]: Starting systemd-update-done.service - Update is Completed...
Jan 17 12:19:13.912661 systemd[1]: xenserver-pv-version.service - Set fake PV driver version for XenServer was skipped because of an unmet condition check (ConditionVirtualization=xen).
Jan 17 12:19:13.919370 systemd[1]: Finished ensure-sysext.service.
Jan 17 12:19:13.937458 systemd[1]: Starting systemd-timesyncd.service - Network Time Synchronization...
Jan 17 12:19:13.940202 systemd[1]: Finished systemd-update-utmp.service - Record System Boot/Shutdown in UTMP.
Jan 17 12:19:13.957106 systemd-udevd[1325]: Using default interface naming scheme 'v255'.
Jan 17 12:19:13.972088 systemd[1]: modprobe@drm.service: Deactivated successfully.
Jan 17 12:19:13.972372 systemd[1]: Finished modprobe@drm.service - Load Kernel Module drm.
Jan 17 12:19:13.987373 systemd[1]: modprobe@dm_mod.service: Deactivated successfully.
Jan 17 12:19:13.989487 systemd[1]: Finished modprobe@dm_mod.service - Load Kernel Module dm_mod.
Jan 17 12:19:13.992194 systemd[1]: modprobe@loop.service: Deactivated successfully.
Jan 17 12:19:13.992931 systemd[1]: Finished modprobe@loop.service - Load Kernel Module loop.
Jan 17 12:19:13.995900 systemd[1]: modprobe@efi_pstore.service: Deactivated successfully.
Jan 17 12:19:13.996047 systemd[1]: Finished modprobe@efi_pstore.service - Load Kernel Module efi_pstore.
Jan 17 12:19:13.997539 systemd[1]: Finished systemd-update-done.service - Update is Completed.
Jan 17 12:19:14.005333 systemd[1]: systemd-pstore.service - Platform Persistent Storage Archival was skipped because of an unmet condition check (ConditionDirectoryNotEmpty=/sys/fs/pstore).
Jan 17 12:19:14.005667 systemd[1]: systemd-repart.service - Repartition Root Disk was skipped because no trigger condition checks were met.
Jan 17 12:19:14.015764 systemd[1]: Started systemd-udevd.service - Rule-based Manager for Device Events and Files.
Jan 17 12:19:14.026808 systemd[1]: Starting systemd-networkd.service - Network Configuration...
Jan 17 12:19:14.030443 systemd[1]: Finished clean-ca-certificates.service - Clean up broken links in /etc/ssl/certs.
Jan 17 12:19:14.031616 systemd[1]: update-ca-certificates.service - Update CA bundle at /etc/ssl/certs/ca-certificates.crt was skipped because of an unmet condition check (ConditionPathIsSymbolicLink=!/etc/ssl/certs/ca-certificates.crt).
Jan 17 12:19:14.055576 augenrules[1370]: No rules
Jan 17 12:19:14.059115 systemd[1]: Finished audit-rules.service - Load Security Auditing Rules.
Jan 17 12:19:14.069036 systemd[1]: Started systemd-userdbd.service - User Database Manager.
Jan 17 12:19:14.204686 systemd[1]: Mounting media-configdrive.mount - /media/configdrive...
Jan 17 12:19:14.205337 systemd[1]: proc-xen.mount - /proc/xen was skipped because of an unmet condition check (ConditionVirtualization=xen).
Jan 17 12:19:14.205540 systemd[1]: ignition-delete-config.service - Ignition (delete config) was skipped because no trigger condition checks were met.
Jan 17 12:19:14.212857 systemd[1]: Starting modprobe@dm_mod.service - Load Kernel Module dm_mod...
Jan 17 12:19:14.221891 systemd[1]: Starting modprobe@efi_pstore.service - Load Kernel Module efi_pstore...
Jan 17 12:19:14.228769 systemd[1]: Starting modprobe@loop.service - Load Kernel Module loop...
Jan 17 12:19:14.229631 systemd[1]: systemd-binfmt.service - Set Up Additional Binary Formats was skipped because no trigger condition checks were met.
Jan 17 12:19:14.229683 systemd[1]: update-ca-certificates.service - Update CA bundle at /etc/ssl/certs/ca-certificates.crt was skipped because of an unmet condition check (ConditionPathIsSymbolicLink=!/etc/ssl/certs/ca-certificates.crt).
Jan 17 12:19:14.229700 systemd[1]: xenserver-pv-version.service - Set fake PV driver version for XenServer was skipped because of an unmet condition check (ConditionVirtualization=xen).
Jan 17 12:19:14.261441 systemd[1]: modprobe@loop.service: Deactivated successfully.
Jan 17 12:19:14.263259 systemd[1]: Finished modprobe@loop.service - Load Kernel Module loop.
Jan 17 12:19:14.265116 systemd[1]: Condition check resulted in dev-ttyS0.device - /dev/ttyS0 being skipped.
Jan 17 12:19:14.304448 kernel: ISO 9660 Extensions: RRIP_1991A
Jan 17 12:19:14.304667 kernel: input: Power Button as /devices/LNXSYSTM:00/LNXPWRBN:00/input/input2
Jan 17 12:19:14.308140 systemd[1]: Mounted media-configdrive.mount - /media/configdrive.
Jan 17 12:19:14.312128 systemd[1]: modprobe@efi_pstore.service: Deactivated successfully.
Jan 17 12:19:14.313643 systemd[1]: Finished modprobe@efi_pstore.service - Load Kernel Module efi_pstore.
Jan 17 12:19:14.318915 systemd[1]: systemd-pstore.service - Platform Persistent Storage Archival was skipped because of an unmet condition check (ConditionDirectoryNotEmpty=/sys/fs/pstore).
Jan 17 12:19:14.324560 kernel: ACPI: button: Power Button [PWRF]
Jan 17 12:19:14.327325 systemd-resolved[1321]: Positive Trust Anchors:
Jan 17 12:19:14.328074 systemd-resolved[1321]: . IN DS 20326 8 2 e06d44b80b8f1d39a95c0b0d7c65d08458e880409bbc683457104237c7f8ec8d
Jan 17 12:19:14.328112 systemd-resolved[1321]: Negative trust anchors: home.arpa 10.in-addr.arpa 16.172.in-addr.arpa 17.172.in-addr.arpa 18.172.in-addr.arpa 19.172.in-addr.arpa 20.172.in-addr.arpa 21.172.in-addr.arpa 22.172.in-addr.arpa 23.172.in-addr.arpa 24.172.in-addr.arpa 25.172.in-addr.arpa 26.172.in-addr.arpa 27.172.in-addr.arpa 28.172.in-addr.arpa 29.172.in-addr.arpa 30.172.in-addr.arpa 31.172.in-addr.arpa 170.0.0.192.in-addr.arpa 171.0.0.192.in-addr.arpa 168.192.in-addr.arpa d.f.ip6.arpa ipv4only.arpa resolver.arpa corp home internal intranet lan local private test
Jan 17 12:19:14.334366 systemd-resolved[1321]: Using system hostname 'ci-4081.3.0-f-3a3da9a24b'.
Jan 17 12:19:14.337608 systemd[1]: Started systemd-resolved.service - Network Name Resolution.
Jan 17 12:19:14.338330 systemd[1]: Reached target nss-lookup.target - Host and Network Name Lookups.
Jan 17 12:19:14.343716 systemd-networkd[1355]: lo: Link UP
Jan 17 12:19:14.343730 systemd-networkd[1355]: lo: Gained carrier
Jan 17 12:19:14.345552 systemd[1]: modprobe@dm_mod.service: Deactivated successfully.
Jan 17 12:19:14.345810 systemd[1]: Finished modprobe@dm_mod.service - Load Kernel Module dm_mod.
Jan 17 12:19:14.346842 systemd[1]: systemd-repart.service - Repartition Root Disk was skipped because no trigger condition checks were met.
Jan 17 12:19:14.352770 systemd-networkd[1355]: Enumeration completed
Jan 17 12:19:14.353195 systemd-networkd[1355]: eth0: Configuring with /run/systemd/network/10-26:89:72:4c:50:2b.network.
Jan 17 12:19:14.353661 systemd[1]: Started systemd-networkd.service - Network Configuration.
Jan 17 12:19:14.354955 systemd[1]: Reached target network.target - Network.
Jan 17 12:19:14.363383 kernel: piix4_smbus 0000:00:01.3: SMBus Host Controller at 0x700, revision 0
Jan 17 12:19:14.360627 systemd-networkd[1355]: eth1: Configuring with /run/systemd/network/10-16:6e:a2:c9:eb:a2.network.
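The networkd lines above show eth0 and eth1 being matched against generated files named 10-&lt;MAC&gt;.network under /run/systemd/network. Only the file names appear in the log; the contents below are a generic, hypothetical sketch of what such a MAC-matched .network file looks like, not the droplet's actual configuration:

```ini
# /run/systemd/network/10-26:89:72:4c:50:2b.network
# Sketch only: the file name comes from the log above; the stanzas are a
# generic example of matching an interface by MAC address.
[Match]
MACAddress=26:89:72:4c:50:2b

[Network]
DHCP=ipv4
```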
Jan 17 12:19:14.363704 systemd-networkd[1355]: eth0: Link UP
Jan 17 12:19:14.363711 systemd-networkd[1355]: eth0: Gained carrier
Jan 17 12:19:14.366857 systemd[1]: Starting systemd-networkd-wait-online.service - Wait for Network to be Configured...
Jan 17 12:19:14.368942 systemd-networkd[1355]: eth1: Link UP
Jan 17 12:19:14.368953 systemd-networkd[1355]: eth1: Gained carrier
Jan 17 12:19:14.378359 systemd[1]: Started systemd-timesyncd.service - Network Time Synchronization.
Jan 17 12:19:14.379252 systemd[1]: Reached target time-set.target - System Time Set.
Jan 17 12:19:14.422611 kernel: BTRFS warning: duplicate device /dev/vda3 devid 1 generation 36 scanned by (udev-worker) (1363)
Jan 17 12:19:14.453556 kernel: input: ImExPS/2 Generic Explorer Mouse as /devices/platform/i8042/serio1/input/input3
Jan 17 12:19:15.460993 systemd-resolved[1321]: Clock change detected. Flushing caches.
Jan 17 12:19:15.461089 systemd-timesyncd[1338]: Contacted time server 24.229.44.105:123 (0.flatcar.pool.ntp.org).
Jan 17 12:19:15.461174 systemd-timesyncd[1338]: Initial clock synchronization to Fri 2025-01-17 12:19:15.460902 UTC.
Jan 17 12:19:15.506462 kernel: [drm] pci: virtio-vga detected at 0000:00:02.0
Jan 17 12:19:15.506557 kernel: virtio-pci 0000:00:02.0: vgaarb: deactivate vga console
Jan 17 12:19:15.518525 kernel: Console: switching to colour dummy device 80x25
Jan 17 12:19:15.518606 kernel: [drm] features: -virgl +edid -resource_blob -host_visible
Jan 17 12:19:15.518622 kernel: [drm] features: -context_init
Jan 17 12:19:15.518569 systemd[1]: Starting systemd-vconsole-setup.service - Virtual Console Setup...
Jan 17 12:19:15.522080 kernel: [drm] number of scanouts: 1
Jan 17 12:19:15.522224 kernel: [drm] number of cap sets: 0
Jan 17 12:19:15.524815 kernel: [drm] Initialized virtio_gpu 0.1.0 0 for 0000:00:02.0 on minor 0
Jan 17 12:19:15.527888 kernel: fbcon: virtio_gpudrmfb (fb0) is primary device
Jan 17 12:19:15.528014 kernel: Console: switching to colour frame buffer device 128x48
Jan 17 12:19:15.526357 systemd[1]: Found device dev-disk-by\x2dlabel-OEM.device - /dev/disk/by-label/OEM.
Jan 17 12:19:15.533008 kernel: virtio-pci 0000:00:02.0: [drm] fb0: virtio_gpudrmfb frame buffer device
Jan 17 12:19:15.545025 systemd[1]: Starting systemd-fsck@dev-disk-by\x2dlabel-OEM.service - File System Check on /dev/disk/by-label/OEM...
Jan 17 12:19:15.561607 kernel: mousedev: PS/2 mouse device common for all mice
Jan 17 12:19:15.583010 systemd[1]: systemd-vconsole-setup.service: Deactivated successfully.
Jan 17 12:19:15.583218 systemd[1]: Stopped systemd-vconsole-setup.service - Virtual Console Setup.
Jan 17 12:19:15.598138 systemd[1]: Starting systemd-vconsole-setup.service - Virtual Console Setup...
Jan 17 12:19:15.602275 systemd[1]: Finished systemd-fsck@dev-disk-by\x2dlabel-OEM.service - File System Check on /dev/disk/by-label/OEM.
Jan 17 12:19:15.707861 kernel: EDAC MC: Ver: 3.0.0
Jan 17 12:19:15.736346 systemd[1]: Finished systemd-udev-settle.service - Wait for udev To Complete Device Initialization.
Jan 17 12:19:15.746288 systemd[1]: Starting lvm2-activation-early.service - Activation of LVM2 logical volumes...
Jan 17 12:19:15.747704 systemd[1]: Finished systemd-vconsole-setup.service - Virtual Console Setup.
Jan 17 12:19:15.765865 lvm[1424]: WARNING: Failed to connect to lvmetad. Falling back to device scanning.
Jan 17 12:19:15.802197 systemd[1]: Finished lvm2-activation-early.service - Activation of LVM2 logical volumes.
Jan 17 12:19:15.804404 systemd[1]: Reached target cryptsetup.target - Local Encrypted Volumes.
Jan 17 12:19:15.804632 systemd[1]: Reached target sysinit.target - System Initialization.
Jan 17 12:19:15.805103 systemd[1]: Started motdgen.path - Watch for update engine configuration changes.
Jan 17 12:19:15.810855 systemd[1]: Started user-cloudinit@var-lib-flatcar\x2dinstall-user_data.path - Watch for a cloud-config at /var/lib/flatcar-install/user_data.
Jan 17 12:19:15.813066 systemd[1]: Started logrotate.timer - Daily rotation of log files.
Jan 17 12:19:15.814535 systemd[1]: Started mdadm.timer - Weekly check for MD array's redundancy information..
Jan 17 12:19:15.814646 systemd[1]: Started systemd-tmpfiles-clean.timer - Daily Cleanup of Temporary Directories.
Jan 17 12:19:15.814712 systemd[1]: update-engine-stub.timer - Update Engine Stub Timer was skipped because of an unmet condition check (ConditionPathExists=/usr/.noupdate).
Jan 17 12:19:15.814747 systemd[1]: Reached target paths.target - Path Units.
Jan 17 12:19:15.814931 systemd[1]: Reached target timers.target - Timer Units.
Jan 17 12:19:15.816907 systemd[1]: Listening on dbus.socket - D-Bus System Message Bus Socket.
Jan 17 12:19:15.820503 systemd[1]: Starting docker.socket - Docker Socket for the API...
Jan 17 12:19:15.831609 systemd[1]: Listening on sshd.socket - OpenSSH Server Socket.
Jan 17 12:19:15.835130 systemd[1]: Starting lvm2-activation.service - Activation of LVM2 logical volumes...
Jan 17 12:19:15.836318 systemd[1]: Listening on docker.socket - Docker Socket for the API.
Jan 17 12:19:15.837503 systemd[1]: Reached target sockets.target - Socket Units.
Jan 17 12:19:15.840070 systemd[1]: Reached target basic.target - Basic System.
Jan 17 12:19:15.842288 systemd[1]: addon-config@oem.service - Configure Addon /oem was skipped because no trigger condition checks were met.
Jan 17 12:19:15.842324 systemd[1]: addon-run@oem.service - Run Addon /oem was skipped because no trigger condition checks were met.
Jan 17 12:19:15.855037 systemd[1]: Starting containerd.service - containerd container runtime...
Jan 17 12:19:15.860063 lvm[1429]: WARNING: Failed to connect to lvmetad. Falling back to device scanning.
Jan 17 12:19:15.877137 systemd[1]: Starting coreos-metadata.service - Flatcar Metadata Agent...
Jan 17 12:19:15.884582 systemd[1]: Starting dbus.service - D-Bus System Message Bus...
Jan 17 12:19:15.897009 systemd[1]: Starting enable-oem-cloudinit.service - Enable cloudinit...
Jan 17 12:19:15.904151 systemd[1]: Starting extend-filesystems.service - Extend Filesystems...
Jan 17 12:19:15.905530 systemd[1]: flatcar-setup-environment.service - Modifies /etc/environment for CoreOS was skipped because of an unmet condition check (ConditionPathExists=/oem/bin/flatcar-setup-environment).
Jan 17 12:19:15.909078 systemd[1]: Starting motdgen.service - Generate /run/flatcar/motd...
Jan 17 12:19:15.917139 systemd[1]: Starting ssh-key-proc-cmdline.service - Install an ssh key from /proc/cmdline...
Jan 17 12:19:15.931110 systemd[1]: Starting sshd-keygen.service - Generate sshd host keys...
Jan 17 12:19:15.950171 systemd[1]: Starting systemd-logind.service - User Login Management...
Jan 17 12:19:15.952612 dbus-daemon[1432]: [system] SELinux support is enabled
Jan 17 12:19:15.953712 systemd[1]: tcsd.service - TCG Core Services Daemon was skipped because of an unmet condition check (ConditionPathExists=/dev/tpm0).
Jan 17 12:19:15.955661 systemd[1]: cgroup compatibility translation between legacy and unified hierarchy settings activated. See cgroup-compat debug messages for details.
Jan 17 12:19:15.967094 systemd[1]: Starting update-engine.service - Update Engine...
Jan 17 12:19:15.974144 systemd[1]: Starting update-ssh-keys-after-ignition.service - Run update-ssh-keys once after Ignition...
Jan 17 12:19:15.975678 systemd[1]: Started dbus.service - D-Bus System Message Bus.
Jan 17 12:19:15.983634 coreos-metadata[1431]: Jan 17 12:19:15.983 INFO Fetching http://169.254.169.254/metadata/v1.json: Attempt #1
Jan 17 12:19:15.988086 jq[1434]: false
Jan 17 12:19:15.989209 systemd[1]: Finished lvm2-activation.service - Activation of LVM2 logical volumes.
Jan 17 12:19:15.995525 systemd[1]: enable-oem-cloudinit.service: Skipped due to 'exec-condition'.
Jan 17 12:19:15.997032 coreos-metadata[1431]: Jan 17 12:19:15.995 INFO Fetch successful
Jan 17 12:19:15.995821 systemd[1]: Condition check resulted in enable-oem-cloudinit.service - Enable cloudinit being skipped.
Jan 17 12:19:16.005379 systemd[1]: system-cloudinit@usr-share-oem-cloud\x2dconfig.yml.service - Load cloud-config from /usr/share/oem/cloud-config.yml was skipped because of an unmet condition check (ConditionFileNotEmpty=/usr/share/oem/cloud-config.yml).
Jan 17 12:19:16.005443 systemd[1]: Reached target system-config.target - Load system-provided cloud configs.
Jan 17 12:19:16.009367 systemd[1]: user-cloudinit-proc-cmdline.service - Load cloud-config from url defined in /proc/cmdline was skipped because of an unmet condition check (ConditionKernelCommandLine=cloud-config-url).
Jan 17 12:19:16.009489 systemd[1]: user-configdrive.service - Load cloud-config from /media/configdrive was skipped because of an unmet condition check (ConditionKernelCommandLine=!flatcar.oem.id=digitalocean).
Jan 17 12:19:16.009520 systemd[1]: Reached target user-config.target - Load user-provided cloud configs.
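The coreos-metadata lines above show the Flatcar metadata agent fetching the droplet's metadata document from the link-local endpoint. A minimal sketch of that fetch, for illustration: only the URL comes from the log; the function name, retry count, and timeout are made up here.

```python
# Hypothetical sketch of the fetch performed by coreos-metadata above.
# The URL is taken verbatim from the log; everything else is an assumption.
import json
import urllib.request

METADATA_URL = "http://169.254.169.254/metadata/v1.json"  # from the log line

def fetch_droplet_metadata(attempts: int = 3, timeout: float = 5.0) -> dict:
    """Fetch and decode the metadata JSON, retrying a few times
    in case the network is not yet fully configured."""
    last_err = None
    for _ in range(attempts):
        try:
            with urllib.request.urlopen(METADATA_URL, timeout=timeout) as resp:
                return json.load(resp)
        except OSError as err:  # connection refused, timeout, etc.
            last_err = err
    raise RuntimeError(f"metadata fetch failed: {last_err}")
```

This only works from inside a droplet, since 169.254.169.254 is a link-local address served by the hypervisor side.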
Jan 17 12:19:16.038876 jq[1442]: true
Jan 17 12:19:16.046824 extend-filesystems[1436]: Found loop4
Jan 17 12:19:16.046824 extend-filesystems[1436]: Found loop5
Jan 17 12:19:16.046824 extend-filesystems[1436]: Found loop6
Jan 17 12:19:16.046824 extend-filesystems[1436]: Found loop7
Jan 17 12:19:16.046824 extend-filesystems[1436]: Found vda
Jan 17 12:19:16.046824 extend-filesystems[1436]: Found vda1
Jan 17 12:19:16.046824 extend-filesystems[1436]: Found vda2
Jan 17 12:19:16.046824 extend-filesystems[1436]: Found vda3
Jan 17 12:19:16.046824 extend-filesystems[1436]: Found usr
Jan 17 12:19:16.046824 extend-filesystems[1436]: Found vda4
Jan 17 12:19:16.046824 extend-filesystems[1436]: Found vda6
Jan 17 12:19:16.046824 extend-filesystems[1436]: Found vda7
Jan 17 12:19:16.046824 extend-filesystems[1436]: Found vda9
Jan 17 12:19:16.046824 extend-filesystems[1436]: Checking size of /dev/vda9
Jan 17 12:19:16.048214 systemd[1]: ssh-key-proc-cmdline.service: Deactivated successfully.
Jan 17 12:19:16.095655 update_engine[1441]: I20250117 12:19:16.088552 1441 main.cc:92] Flatcar Update Engine starting
Jan 17 12:19:16.048542 systemd[1]: Finished ssh-key-proc-cmdline.service - Install an ssh key from /proc/cmdline.
Jan 17 12:19:16.084883 (ntainerd)[1451]: containerd.service: Referenced but unset environment variable evaluates to an empty string: TORCX_IMAGEDIR, TORCX_UNPACKDIR
Jan 17 12:19:16.107200 systemd[1]: Started update-engine.service - Update Engine.
Jan 17 12:19:16.108758 extend-filesystems[1436]: Resized partition /dev/vda9
Jan 17 12:19:16.116282 update_engine[1441]: I20250117 12:19:16.111272 1441 update_check_scheduler.cc:74] Next update check in 8m28s
Jan 17 12:19:16.126229 systemd[1]: Started locksmithd.service - Cluster reboot manager.
Jan 17 12:19:16.138577 extend-filesystems[1469]: resize2fs 1.47.1 (20-May-2024)
Jan 17 12:19:16.147564 jq[1452]: true
Jan 17 12:19:16.168326 kernel: EXT4-fs (vda9): resizing filesystem from 553472 to 15121403 blocks
Jan 17 12:19:16.180756 systemd[1]: motdgen.service: Deactivated successfully.
Jan 17 12:19:16.181129 systemd[1]: Finished motdgen.service - Generate /run/flatcar/motd.
Jan 17 12:19:16.219608 systemd[1]: Finished coreos-metadata.service - Flatcar Metadata Agent.
Jan 17 12:19:16.248837 kernel: BTRFS warning: duplicate device /dev/vda3 devid 1 generation 36 scanned by (udev-worker) (1356)
Jan 17 12:19:16.258965 systemd[1]: packet-phone-home.service - Report Success to Packet was skipped because no trigger condition checks were met.
Jan 17 12:19:16.380663 systemd-logind[1440]: New seat seat0.
Jan 17 12:19:16.403502 systemd-logind[1440]: Watching system buttons on /dev/input/event1 (Power Button)
Jan 17 12:19:16.403543 systemd-logind[1440]: Watching system buttons on /dev/input/event0 (AT Translated Set 2 keyboard)
Jan 17 12:19:16.409481 systemd[1]: Started systemd-logind.service - User Login Management.
Jan 17 12:19:16.428607 kernel: EXT4-fs (vda9): resized filesystem to 15121403
Jan 17 12:19:16.467916 extend-filesystems[1469]: Filesystem at /dev/vda9 is mounted on /; on-line resizing required
Jan 17 12:19:16.467916 extend-filesystems[1469]: old_desc_blocks = 1, new_desc_blocks = 8
Jan 17 12:19:16.467916 extend-filesystems[1469]: The filesystem on /dev/vda9 is now 15121403 (4k) blocks long.
Jan 17 12:19:16.484378 extend-filesystems[1436]: Resized filesystem in /dev/vda9
Jan 17 12:19:16.484378 extend-filesystems[1436]: Found vdb
Jan 17 12:19:16.476654 systemd[1]: extend-filesystems.service: Deactivated successfully.
Jan 17 12:19:16.477008 systemd[1]: Finished extend-filesystems.service - Extend Filesystems.
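A quick sanity check of the online resize logged above: the kernel reports the root filesystem on /dev/vda9 growing from 553472 to 15121403 blocks, and the extend-filesystems output confirms 4k blocks. Converting block counts to GiB shows a roughly 2 GiB seed image being grown to fill an approximately 58 GiB disk:

```python
# Convert the ext4 block counts from the log above into GiB.
# 4096-byte blocks per the "(4k) blocks" note in the extend-filesystems output.
BLOCK_SIZE = 4096

def blocks_to_gib(blocks: int, block_size: int = BLOCK_SIZE) -> float:
    """Size in GiB of a filesystem with the given block count."""
    return blocks * block_size / 2**30

print(f"before resize: {blocks_to_gib(553472):.2f} GiB")    # ~2.11 GiB
print(f"after resize:  {blocks_to_gib(15121403):.2f} GiB")  # ~57.68 GiB
```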
Jan 17 12:19:16.491624 bash[1496]: Updated "/home/core/.ssh/authorized_keys"
Jan 17 12:19:16.492301 systemd[1]: Finished update-ssh-keys-after-ignition.service - Run update-ssh-keys once after Ignition.
Jan 17 12:19:16.509405 systemd[1]: Starting sshkeys.service...
Jan 17 12:19:16.555797 systemd[1]: Created slice system-coreos\x2dmetadata\x2dsshkeys.slice - Slice /system/coreos-metadata-sshkeys.
Jan 17 12:19:16.572098 systemd[1]: Starting coreos-metadata-sshkeys@core.service - Flatcar Metadata Agent (SSH Keys)...
Jan 17 12:19:16.607478 locksmithd[1470]: locksmithd starting currentOperation="UPDATE_STATUS_IDLE" strategy="reboot"
Jan 17 12:19:16.650852 coreos-metadata[1503]: Jan 17 12:19:16.648 INFO Fetching http://169.254.169.254/metadata/v1.json: Attempt #1
Jan 17 12:19:16.651983 containerd[1451]: time="2025-01-17T12:19:16.651837062Z" level=info msg="starting containerd" revision=174e0d1785eeda18dc2beba45e1d5a188771636b version=v1.7.21
Jan 17 12:19:16.659346 coreos-metadata[1503]: Jan 17 12:19:16.659 INFO Fetch successful
Jan 17 12:19:16.673306 unknown[1503]: wrote ssh authorized keys file for user: core
Jan 17 12:19:16.694853 containerd[1451]: time="2025-01-17T12:19:16.694709519Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.aufs\"..." type=io.containerd.snapshotter.v1
Jan 17 12:19:16.697048 containerd[1451]: time="2025-01-17T12:19:16.696992349Z" level=info msg="skip loading plugin \"io.containerd.snapshotter.v1.aufs\"..." error="aufs is not supported (modprobe aufs failed: exit status 1 \"modprobe: FATAL: Module aufs not found in directory /lib/modules/6.6.71-flatcar\\n\"): skip plugin" type=io.containerd.snapshotter.v1
Jan 17 12:19:16.697210 containerd[1451]: time="2025-01-17T12:19:16.697195609Z" level=info msg="loading plugin \"io.containerd.event.v1.exchange\"..." type=io.containerd.event.v1
Jan 17 12:19:16.697262 containerd[1451]: time="2025-01-17T12:19:16.697252420Z" level=info msg="loading plugin \"io.containerd.internal.v1.opt\"..." type=io.containerd.internal.v1
Jan 17 12:19:16.697500 containerd[1451]: time="2025-01-17T12:19:16.697482243Z" level=info msg="loading plugin \"io.containerd.warning.v1.deprecations\"..." type=io.containerd.warning.v1
Jan 17 12:19:16.697594 containerd[1451]: time="2025-01-17T12:19:16.697580874Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.blockfile\"..." type=io.containerd.snapshotter.v1
Jan 17 12:19:16.697696 containerd[1451]: time="2025-01-17T12:19:16.697681079Z" level=info msg="skip loading plugin \"io.containerd.snapshotter.v1.blockfile\"..." error="no scratch file generator: skip plugin" type=io.containerd.snapshotter.v1
Jan 17 12:19:16.697742 containerd[1451]: time="2025-01-17T12:19:16.697732781Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.btrfs\"..." type=io.containerd.snapshotter.v1
Jan 17 12:19:16.698001 containerd[1451]: time="2025-01-17T12:19:16.697980519Z" level=info msg="skip loading plugin \"io.containerd.snapshotter.v1.btrfs\"..." error="path /var/lib/containerd/io.containerd.snapshotter.v1.btrfs (ext4) must be a btrfs filesystem to be used with the btrfs snapshotter: skip plugin" type=io.containerd.snapshotter.v1
Jan 17 12:19:16.698683 containerd[1451]: time="2025-01-17T12:19:16.698052121Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.devmapper\"..." type=io.containerd.snapshotter.v1
Jan 17 12:19:16.698683 containerd[1451]: time="2025-01-17T12:19:16.698069131Z" level=info msg="skip loading plugin \"io.containerd.snapshotter.v1.devmapper\"..." error="devmapper not configured: skip plugin" type=io.containerd.snapshotter.v1
Jan 17 12:19:16.698683 containerd[1451]: time="2025-01-17T12:19:16.698078495Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.native\"..." type=io.containerd.snapshotter.v1
Jan 17 12:19:16.698683 containerd[1451]: time="2025-01-17T12:19:16.698152184Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.overlayfs\"..." type=io.containerd.snapshotter.v1
Jan 17 12:19:16.698683 containerd[1451]: time="2025-01-17T12:19:16.698399673Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.zfs\"..." type=io.containerd.snapshotter.v1
Jan 17 12:19:16.698683 containerd[1451]: time="2025-01-17T12:19:16.698521055Z" level=info msg="skip loading plugin \"io.containerd.snapshotter.v1.zfs\"..." error="path /var/lib/containerd/io.containerd.snapshotter.v1.zfs must be a zfs filesystem to be used with the zfs snapshotter: skip plugin" type=io.containerd.snapshotter.v1
Jan 17 12:19:16.698683 containerd[1451]: time="2025-01-17T12:19:16.698534107Z" level=info msg="loading plugin \"io.containerd.content.v1.content\"..." type=io.containerd.content.v1
Jan 17 12:19:16.698683 containerd[1451]: time="2025-01-17T12:19:16.698602826Z" level=info msg="loading plugin \"io.containerd.metadata.v1.bolt\"..." type=io.containerd.metadata.v1
Jan 17 12:19:16.698683 containerd[1451]: time="2025-01-17T12:19:16.698646035Z" level=info msg="metadata content store policy set" policy=shared
Jan 17 12:19:16.705475 containerd[1451]: time="2025-01-17T12:19:16.705389141Z" level=info msg="loading plugin \"io.containerd.gc.v1.scheduler\"..." type=io.containerd.gc.v1
Jan 17 12:19:16.705475 containerd[1451]: time="2025-01-17T12:19:16.705493776Z" level=info msg="loading plugin \"io.containerd.differ.v1.walking\"..." type=io.containerd.differ.v1
Jan 17 12:19:16.705704 containerd[1451]: time="2025-01-17T12:19:16.705522714Z" level=info msg="loading plugin \"io.containerd.lease.v1.manager\"..." type=io.containerd.lease.v1
Jan 17 12:19:16.705704 containerd[1451]: time="2025-01-17T12:19:16.705547732Z" level=info msg="loading plugin \"io.containerd.streaming.v1.manager\"..." type=io.containerd.streaming.v1
Jan 17 12:19:16.705704 containerd[1451]: time="2025-01-17T12:19:16.705570728Z" level=info msg="loading plugin \"io.containerd.runtime.v1.linux\"..." type=io.containerd.runtime.v1
Jan 17 12:19:16.708480 containerd[1451]: time="2025-01-17T12:19:16.708402804Z" level=info msg="loading plugin \"io.containerd.monitor.v1.cgroups\"..." type=io.containerd.monitor.v1
Jan 17 12:19:16.709141 containerd[1451]: time="2025-01-17T12:19:16.709098882Z" level=info msg="loading plugin \"io.containerd.runtime.v2.task\"..." type=io.containerd.runtime.v2
Jan 17 12:19:16.710165 containerd[1451]: time="2025-01-17T12:19:16.710102593Z" level=info msg="loading plugin \"io.containerd.runtime.v2.shim\"..." type=io.containerd.runtime.v2
Jan 17 12:19:16.710237 containerd[1451]: time="2025-01-17T12:19:16.710173407Z" level=info msg="loading plugin \"io.containerd.sandbox.store.v1.local\"..." type=io.containerd.sandbox.store.v1
Jan 17 12:19:16.711048 containerd[1451]: time="2025-01-17T12:19:16.710198603Z" level=info msg="loading plugin \"io.containerd.sandbox.controller.v1.local\"..." type=io.containerd.sandbox.controller.v1
Jan 17 12:19:16.711134 containerd[1451]: time="2025-01-17T12:19:16.711074568Z" level=info msg="loading plugin \"io.containerd.service.v1.containers-service\"..." type=io.containerd.service.v1
Jan 17 12:19:16.711178 containerd[1451]: time="2025-01-17T12:19:16.711150698Z" level=info msg="loading plugin \"io.containerd.service.v1.content-service\"..." type=io.containerd.service.v1
Jan 17 12:19:16.711215 containerd[1451]: time="2025-01-17T12:19:16.711198954Z" level=info msg="loading plugin \"io.containerd.service.v1.diff-service\"..." type=io.containerd.service.v1
Jan 17 12:19:16.711239 containerd[1451]: time="2025-01-17T12:19:16.711229769Z" level=info msg="loading plugin \"io.containerd.service.v1.images-service\"..." type=io.containerd.service.v1
Jan 17 12:19:16.711291 containerd[1451]: time="2025-01-17T12:19:16.711273993Z" level=info msg="loading plugin \"io.containerd.service.v1.introspection-service\"..." type=io.containerd.service.v1
Jan 17 12:19:16.711314 containerd[1451]: time="2025-01-17T12:19:16.711300949Z" level=info msg="loading plugin \"io.containerd.service.v1.namespaces-service\"..." type=io.containerd.service.v1
Jan 17 12:19:16.711344 containerd[1451]: time="2025-01-17T12:19:16.711319964Z" level=info msg="loading plugin \"io.containerd.service.v1.snapshots-service\"..." type=io.containerd.service.v1
Jan 17 12:19:16.711397 containerd[1451]: time="2025-01-17T12:19:16.711378029Z" level=info msg="loading plugin \"io.containerd.service.v1.tasks-service\"..." type=io.containerd.service.v1
Jan 17 12:19:16.711488 containerd[1451]: time="2025-01-17T12:19:16.711465579Z" level=info msg="loading plugin \"io.containerd.grpc.v1.containers\"..." type=io.containerd.grpc.v1
Jan 17 12:19:16.711524 containerd[1451]: time="2025-01-17T12:19:16.711500482Z" level=info msg="loading plugin \"io.containerd.grpc.v1.content\"..." type=io.containerd.grpc.v1
Jan 17 12:19:16.711581 containerd[1451]: time="2025-01-17T12:19:16.711559180Z" level=info msg="loading plugin \"io.containerd.grpc.v1.diff\"..." type=io.containerd.grpc.v1
Jan 17 12:19:16.711629 containerd[1451]: time="2025-01-17T12:19:16.711592952Z" level=info msg="loading plugin \"io.containerd.grpc.v1.events\"..." type=io.containerd.grpc.v1
Jan 17 12:19:16.711670 containerd[1451]: time="2025-01-17T12:19:16.711648691Z" level=info msg="loading plugin \"io.containerd.grpc.v1.images\"..." type=io.containerd.grpc.v1
Jan 17 12:19:16.711699 containerd[1451]: time="2025-01-17T12:19:16.711673137Z" level=info msg="loading plugin \"io.containerd.grpc.v1.introspection\"..." type=io.containerd.grpc.v1
Jan 17 12:19:16.711720 containerd[1451]: time="2025-01-17T12:19:16.711709628Z" level=info msg="loading plugin \"io.containerd.grpc.v1.leases\"..." type=io.containerd.grpc.v1
Jan 17 12:19:16.711752 containerd[1451]: time="2025-01-17T12:19:16.711732462Z" level=info msg="loading plugin \"io.containerd.grpc.v1.namespaces\"..." type=io.containerd.grpc.v1
Jan 17 12:19:16.711819 containerd[1451]: time="2025-01-17T12:19:16.711800095Z" level=info msg="loading plugin \"io.containerd.grpc.v1.sandbox-controllers\"..." type=io.containerd.grpc.v1
Jan 17 12:19:16.711852 containerd[1451]: time="2025-01-17T12:19:16.711832208Z" level=info msg="loading plugin \"io.containerd.grpc.v1.sandboxes\"..." type=io.containerd.grpc.v1
Jan 17 12:19:16.712028 containerd[1451]: time="2025-01-17T12:19:16.712001683Z" level=info msg="loading plugin \"io.containerd.grpc.v1.snapshots\"..." type=io.containerd.grpc.v1
Jan 17 12:19:16.712069 containerd[1451]: time="2025-01-17T12:19:16.712038715Z" level=info msg="loading plugin \"io.containerd.grpc.v1.streaming\"..." type=io.containerd.grpc.v1
Jan 17 12:19:16.712108 containerd[1451]: time="2025-01-17T12:19:16.712088674Z" level=info msg="loading plugin \"io.containerd.grpc.v1.tasks\"..." type=io.containerd.grpc.v1
Jan 17 12:19:16.712182 containerd[1451]: time="2025-01-17T12:19:16.712139495Z" level=info msg="loading plugin \"io.containerd.transfer.v1.local\"..." type=io.containerd.transfer.v1
Jan 17 12:19:16.712230 containerd[1451]: time="2025-01-17T12:19:16.712213609Z" level=info msg="loading plugin \"io.containerd.grpc.v1.transfer\"..." type=io.containerd.grpc.v1
Jan 17 12:19:16.712272 containerd[1451]: time="2025-01-17T12:19:16.712257603Z" level=info msg="loading plugin \"io.containerd.grpc.v1.version\"..." type=io.containerd.grpc.v1
Jan 17 12:19:16.712293 containerd[1451]: time="2025-01-17T12:19:16.712280894Z" level=info msg="loading plugin \"io.containerd.internal.v1.restart\"..." type=io.containerd.internal.v1
Jan 17 12:19:16.712464 containerd[1451]: time="2025-01-17T12:19:16.712428583Z" level=info msg="loading plugin \"io.containerd.tracing.processor.v1.otlp\"..." type=io.containerd.tracing.processor.v1
Jan 17 12:19:16.712504 containerd[1451]: time="2025-01-17T12:19:16.712471844Z" level=info msg="skip loading plugin \"io.containerd.tracing.processor.v1.otlp\"..." error="skip plugin: tracing endpoint not configured" type=io.containerd.tracing.processor.v1
Jan 17 12:19:16.712528 containerd[1451]: time="2025-01-17T12:19:16.712508493Z" level=info msg="loading plugin \"io.containerd.internal.v1.tracing\"..." type=io.containerd.internal.v1
Jan 17 12:19:16.712548 containerd[1451]: time="2025-01-17T12:19:16.712526860Z" level=info msg="skip loading plugin \"io.containerd.internal.v1.tracing\"..." error="skip plugin: tracing endpoint not configured" type=io.containerd.internal.v1
Jan 17 12:19:16.712691 containerd[1451]: time="2025-01-17T12:19:16.712541414Z" level=info msg="loading plugin \"io.containerd.grpc.v1.healthcheck\"..." type=io.containerd.grpc.v1
Jan 17 12:19:16.712721 containerd[1451]: time="2025-01-17T12:19:16.712700906Z" level=info msg="loading plugin \"io.containerd.nri.v1.nri\"..." type=io.containerd.nri.v1
Jan 17 12:19:16.712751 containerd[1451]: time="2025-01-17T12:19:16.712723856Z" level=info msg="NRI interface is disabled by configuration."
Jan 17 12:19:16.712823 containerd[1451]: time="2025-01-17T12:19:16.712797712Z" level=info msg="loading plugin \"io.containerd.grpc.v1.cri\"..." type=io.containerd.grpc.v1
Jan 17 12:19:16.713574 containerd[1451]: time="2025-01-17T12:19:16.713454597Z" level=info msg="Start cri plugin with config {PluginConfig:{ContainerdConfig:{Snapshotter:overlayfs DefaultRuntimeName:runc DefaultRuntime:{Type: Path: Engine: PodAnnotations:[] ContainerAnnotations:[] Root: Options:map[] PrivilegedWithoutHostDevices:false PrivilegedWithoutHostDevicesAllDevicesAllowed:false BaseRuntimeSpec: NetworkPluginConfDir: NetworkPluginMaxConfNum:0 Snapshotter: SandboxMode:} UntrustedWorkloadRuntime:{Type: Path: Engine: PodAnnotations:[] ContainerAnnotations:[] Root: Options:map[] PrivilegedWithoutHostDevices:false PrivilegedWithoutHostDevicesAllDevicesAllowed:false BaseRuntimeSpec: NetworkPluginConfDir: NetworkPluginMaxConfNum:0 Snapshotter: SandboxMode:} Runtimes:map[runc:{Type:io.containerd.runc.v2 Path: Engine: PodAnnotations:[] ContainerAnnotations:[] Root: Options:map[SystemdCgroup:true] PrivilegedWithoutHostDevices:false PrivilegedWithoutHostDevicesAllDevicesAllowed:false BaseRuntimeSpec: NetworkPluginConfDir: NetworkPluginMaxConfNum:0 Snapshotter: SandboxMode:podsandbox}] NoPivot:false DisableSnapshotAnnotations:true DiscardUnpackedLayers:false IgnoreBlockIONotEnabledErrors:false IgnoreRdtNotEnabledErrors:false} CniConfig:{NetworkPluginBinDir:/opt/cni/bin NetworkPluginConfDir:/etc/cni/net.d NetworkPluginMaxConfNum:1 NetworkPluginSetupSerially:false NetworkPluginConfTemplate: IPPreference:} Registry:{ConfigPath: Mirrors:map[] Configs:map[] Auths:map[] Headers:map[]} ImageDecryption:{KeyModel:node} DisableTCPService:true StreamServerAddress:127.0.0.1 StreamServerPort:0 StreamIdleTimeout:4h0m0s EnableSelinux:true SelinuxCategoryRange:1024 SandboxImage:registry.k8s.io/pause:3.8 StatsCollectPeriod:10 SystemdCgroup:false EnableTLSStreaming:false X509KeyPairStreaming:{TLSCertFile: TLSKeyFile:} MaxContainerLogLineSize:16384 DisableCgroup:false DisableApparmor:false RestrictOOMScoreAdj:false MaxConcurrentDownloads:3 DisableProcMount:false
UnsetSeccompProfile: TolerateMissingHugetlbController:true DisableHugetlbController:true DeviceOwnershipFromSecurityContext:false IgnoreImageDefinedVolumes:false NetNSMountsUnderStateDir:false EnableUnprivilegedPorts:false EnableUnprivilegedICMP:false EnableCDI:false CDISpecDirs:[/etc/cdi /var/run/cdi] ImagePullProgressTimeout:5m0s DrainExecSyncIOTimeout:0s ImagePullWithSyncFs:false IgnoreDeprecationWarnings:[]} ContainerdRootDir:/var/lib/containerd ContainerdEndpoint:/run/containerd/containerd.sock RootDir:/var/lib/containerd/io.containerd.grpc.v1.cri StateDir:/run/containerd/io.containerd.grpc.v1.cri}" Jan 17 12:19:16.713861 containerd[1451]: time="2025-01-17T12:19:16.713836202Z" level=info msg="Connect containerd service" Jan 17 12:19:16.714005 containerd[1451]: time="2025-01-17T12:19:16.713976001Z" level=info msg="using legacy CRI server" Jan 17 12:19:16.714040 containerd[1451]: time="2025-01-17T12:19:16.714015563Z" level=info msg="using experimental NRI integration - disable nri plugin to prevent this" Jan 17 12:19:16.714780 containerd[1451]: time="2025-01-17T12:19:16.714378512Z" level=info msg="Get image filesystem path \"/var/lib/containerd/io.containerd.snapshotter.v1.overlayfs\"" Jan 17 12:19:16.718788 containerd[1451]: time="2025-01-17T12:19:16.717243549Z" level=error msg="failed to load cni during init, please check CRI plugin status before setting up network for pods" error="cni config load failed: no network config found in /etc/cni/net.d: cni plugin not initialized: failed to load cni config" Jan 17 12:19:16.718788 containerd[1451]: time="2025-01-17T12:19:16.717908803Z" level=info msg=serving... address=/run/containerd/containerd.sock.ttrpc Jan 17 12:19:16.718788 containerd[1451]: time="2025-01-17T12:19:16.718001396Z" level=info msg=serving... 
address=/run/containerd/containerd.sock Jan 17 12:19:16.718788 containerd[1451]: time="2025-01-17T12:19:16.718044494Z" level=info msg="Start subscribing containerd event" Jan 17 12:19:16.718788 containerd[1451]: time="2025-01-17T12:19:16.718110399Z" level=info msg="Start recovering state" Jan 17 12:19:16.718788 containerd[1451]: time="2025-01-17T12:19:16.718254722Z" level=info msg="Start event monitor" Jan 17 12:19:16.718788 containerd[1451]: time="2025-01-17T12:19:16.718314776Z" level=info msg="Start snapshots syncer" Jan 17 12:19:16.718788 containerd[1451]: time="2025-01-17T12:19:16.718331635Z" level=info msg="Start cni network conf syncer for default" Jan 17 12:19:16.718788 containerd[1451]: time="2025-01-17T12:19:16.718345239Z" level=info msg="Start streaming server" Jan 17 12:19:16.718639 systemd[1]: Started containerd.service - containerd container runtime. Jan 17 12:19:16.723807 containerd[1451]: time="2025-01-17T12:19:16.721977391Z" level=info msg="containerd successfully booted in 0.072028s" Jan 17 12:19:16.727285 update-ssh-keys[1509]: Updated "/home/core/.ssh/authorized_keys" Jan 17 12:19:16.729256 systemd[1]: Finished coreos-metadata-sshkeys@core.service - Flatcar Metadata Agent (SSH Keys). Jan 17 12:19:16.733666 systemd[1]: Finished sshkeys.service. Jan 17 12:19:16.758037 systemd-networkd[1355]: eth1: Gained IPv6LL Jan 17 12:19:16.764142 systemd[1]: Finished systemd-networkd-wait-online.service - Wait for Network to be Configured. Jan 17 12:19:16.769443 systemd[1]: Reached target network-online.target - Network is Online. Jan 17 12:19:16.785468 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent... Jan 17 12:19:16.792385 sshd_keygen[1471]: ssh-keygen: generating new host keys: RSA ECDSA ED25519 Jan 17 12:19:16.796597 systemd[1]: Starting nvidia.service - NVIDIA Configure Service... Jan 17 12:19:16.841153 systemd[1]: Finished nvidia.service - NVIDIA Configure Service. 
Jan 17 12:19:16.857645 systemd[1]: Finished sshd-keygen.service - Generate sshd host keys. Jan 17 12:19:16.874234 systemd[1]: Starting issuegen.service - Generate /run/issue... Jan 17 12:19:16.884322 systemd[1]: issuegen.service: Deactivated successfully. Jan 17 12:19:16.884570 systemd[1]: Finished issuegen.service - Generate /run/issue. Jan 17 12:19:16.896203 systemd[1]: Starting systemd-user-sessions.service - Permit User Sessions... Jan 17 12:19:16.913986 systemd[1]: Finished systemd-user-sessions.service - Permit User Sessions. Jan 17 12:19:16.926580 systemd[1]: Started getty@tty1.service - Getty on tty1. Jan 17 12:19:16.937223 systemd[1]: Started serial-getty@ttyS0.service - Serial Getty on ttyS0. Jan 17 12:19:16.941040 systemd[1]: Reached target getty.target - Login Prompts. Jan 17 12:19:17.014533 systemd-networkd[1355]: eth0: Gained IPv6LL Jan 17 12:19:17.935187 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent. Jan 17 12:19:17.937549 systemd[1]: Reached target multi-user.target - Multi-User System. Jan 17 12:19:17.943612 systemd[1]: Startup finished in 1.595s (kernel) + 5.974s (initrd) + 6.232s (userspace) = 13.801s. Jan 17 12:19:17.953241 (kubelet)[1546]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS, KUBELET_KUBEADM_ARGS Jan 17 12:19:18.666630 kubelet[1546]: E0117 12:19:18.666551 1546 run.go:72] "command failed" err="failed to load kubelet config file, path: /var/lib/kubelet/config.yaml, error: failed to load Kubelet config file /var/lib/kubelet/config.yaml, error failed to read kubelet config file \"/var/lib/kubelet/config.yaml\", error: open /var/lib/kubelet/config.yaml: no such file or directory" Jan 17 12:19:18.669102 systemd[1]: kubelet.service: Main process exited, code=exited, status=1/FAILURE Jan 17 12:19:18.669275 systemd[1]: kubelet.service: Failed with result 'exit-code'. Jan 17 12:19:18.669870 systemd[1]: kubelet.service: Consumed 1.447s CPU time. 
Jan 17 12:19:25.418082 systemd[1]: Created slice system-sshd.slice - Slice /system/sshd. Jan 17 12:19:25.428261 systemd[1]: Started sshd@0-143.244.184.73:22-139.178.68.195:46968.service - OpenSSH per-connection server daemon (139.178.68.195:46968). Jan 17 12:19:25.498673 sshd[1559]: Accepted publickey for core from 139.178.68.195 port 46968 ssh2: RSA SHA256:r8mW/Iv+p7nZqo0WbSWD5Er765ayjb8xE8XAH1LjSMM Jan 17 12:19:25.501210 sshd[1559]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Jan 17 12:19:25.513395 systemd[1]: Created slice user-500.slice - User Slice of UID 500. Jan 17 12:19:25.530670 systemd[1]: Starting user-runtime-dir@500.service - User Runtime Directory /run/user/500... Jan 17 12:19:25.536415 systemd-logind[1440]: New session 1 of user core. Jan 17 12:19:25.547740 systemd[1]: Finished user-runtime-dir@500.service - User Runtime Directory /run/user/500. Jan 17 12:19:25.554309 systemd[1]: Starting user@500.service - User Manager for UID 500... Jan 17 12:19:25.574161 (systemd)[1563]: pam_unix(systemd-user:session): session opened for user core(uid=500) by (uid=0) Jan 17 12:19:25.716244 systemd[1563]: Queued start job for default target default.target. Jan 17 12:19:25.727840 systemd[1563]: Created slice app.slice - User Application Slice. Jan 17 12:19:25.727895 systemd[1563]: Reached target paths.target - Paths. Jan 17 12:19:25.727912 systemd[1563]: Reached target timers.target - Timers. Jan 17 12:19:25.729866 systemd[1563]: Starting dbus.socket - D-Bus User Message Bus Socket... Jan 17 12:19:25.745505 systemd[1563]: Listening on dbus.socket - D-Bus User Message Bus Socket. Jan 17 12:19:25.745668 systemd[1563]: Reached target sockets.target - Sockets. Jan 17 12:19:25.745686 systemd[1563]: Reached target basic.target - Basic System. Jan 17 12:19:25.745740 systemd[1563]: Reached target default.target - Main User Target. Jan 17 12:19:25.745797 systemd[1563]: Startup finished in 162ms. 
Jan 17 12:19:25.745934 systemd[1]: Started user@500.service - User Manager for UID 500. Jan 17 12:19:25.754145 systemd[1]: Started session-1.scope - Session 1 of User core. Jan 17 12:19:25.826283 systemd[1]: Started sshd@1-143.244.184.73:22-139.178.68.195:46982.service - OpenSSH per-connection server daemon (139.178.68.195:46982). Jan 17 12:19:25.871296 sshd[1574]: Accepted publickey for core from 139.178.68.195 port 46982 ssh2: RSA SHA256:r8mW/Iv+p7nZqo0WbSWD5Er765ayjb8xE8XAH1LjSMM Jan 17 12:19:25.873625 sshd[1574]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Jan 17 12:19:25.881044 systemd-logind[1440]: New session 2 of user core. Jan 17 12:19:25.887141 systemd[1]: Started session-2.scope - Session 2 of User core. Jan 17 12:19:25.953561 sshd[1574]: pam_unix(sshd:session): session closed for user core Jan 17 12:19:25.970507 systemd[1]: sshd@1-143.244.184.73:22-139.178.68.195:46982.service: Deactivated successfully. Jan 17 12:19:25.972562 systemd[1]: session-2.scope: Deactivated successfully. Jan 17 12:19:25.975043 systemd-logind[1440]: Session 2 logged out. Waiting for processes to exit. Jan 17 12:19:25.979180 systemd[1]: Started sshd@2-143.244.184.73:22-139.178.68.195:46996.service - OpenSSH per-connection server daemon (139.178.68.195:46996). Jan 17 12:19:25.980989 systemd-logind[1440]: Removed session 2. Jan 17 12:19:26.027279 sshd[1581]: Accepted publickey for core from 139.178.68.195 port 46996 ssh2: RSA SHA256:r8mW/Iv+p7nZqo0WbSWD5Er765ayjb8xE8XAH1LjSMM Jan 17 12:19:26.029096 sshd[1581]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Jan 17 12:19:26.034856 systemd-logind[1440]: New session 3 of user core. Jan 17 12:19:26.043428 systemd[1]: Started session-3.scope - Session 3 of User core. Jan 17 12:19:26.104611 sshd[1581]: pam_unix(sshd:session): session closed for user core Jan 17 12:19:26.119896 systemd[1]: sshd@2-143.244.184.73:22-139.178.68.195:46996.service: Deactivated successfully. 
Jan 17 12:19:26.122240 systemd[1]: session-3.scope: Deactivated successfully. Jan 17 12:19:26.124249 systemd-logind[1440]: Session 3 logged out. Waiting for processes to exit. Jan 17 12:19:26.130907 systemd[1]: Started sshd@3-143.244.184.73:22-139.178.68.195:47002.service - OpenSSH per-connection server daemon (139.178.68.195:47002). Jan 17 12:19:26.132595 systemd-logind[1440]: Removed session 3. Jan 17 12:19:26.184173 sshd[1588]: Accepted publickey for core from 139.178.68.195 port 47002 ssh2: RSA SHA256:r8mW/Iv+p7nZqo0WbSWD5Er765ayjb8xE8XAH1LjSMM Jan 17 12:19:26.186222 sshd[1588]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Jan 17 12:19:26.193273 systemd-logind[1440]: New session 4 of user core. Jan 17 12:19:26.205108 systemd[1]: Started session-4.scope - Session 4 of User core. Jan 17 12:19:26.269415 sshd[1588]: pam_unix(sshd:session): session closed for user core Jan 17 12:19:26.281058 systemd[1]: sshd@3-143.244.184.73:22-139.178.68.195:47002.service: Deactivated successfully. Jan 17 12:19:26.283155 systemd[1]: session-4.scope: Deactivated successfully. Jan 17 12:19:26.285243 systemd-logind[1440]: Session 4 logged out. Waiting for processes to exit. Jan 17 12:19:26.290284 systemd[1]: Started sshd@4-143.244.184.73:22-139.178.68.195:47016.service - OpenSSH per-connection server daemon (139.178.68.195:47016). Jan 17 12:19:26.292341 systemd-logind[1440]: Removed session 4. Jan 17 12:19:26.347709 sshd[1595]: Accepted publickey for core from 139.178.68.195 port 47016 ssh2: RSA SHA256:r8mW/Iv+p7nZqo0WbSWD5Er765ayjb8xE8XAH1LjSMM Jan 17 12:19:26.349893 sshd[1595]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Jan 17 12:19:26.359993 systemd-logind[1440]: New session 5 of user core. Jan 17 12:19:26.366176 systemd[1]: Started session-5.scope - Session 5 of User core. 
Jan 17 12:19:26.440216 sudo[1598]: core : PWD=/home/core ; USER=root ; COMMAND=/usr/sbin/setenforce 1 Jan 17 12:19:26.440572 sudo[1598]: pam_unix(sudo:session): session opened for user root(uid=0) by core(uid=500) Jan 17 12:19:26.458100 sudo[1598]: pam_unix(sudo:session): session closed for user root Jan 17 12:19:26.462799 sshd[1595]: pam_unix(sshd:session): session closed for user core Jan 17 12:19:26.476584 systemd[1]: sshd@4-143.244.184.73:22-139.178.68.195:47016.service: Deactivated successfully. Jan 17 12:19:26.478634 systemd[1]: session-5.scope: Deactivated successfully. Jan 17 12:19:26.480968 systemd-logind[1440]: Session 5 logged out. Waiting for processes to exit. Jan 17 12:19:26.487294 systemd[1]: Started sshd@5-143.244.184.73:22-139.178.68.195:47024.service - OpenSSH per-connection server daemon (139.178.68.195:47024). Jan 17 12:19:26.488951 systemd-logind[1440]: Removed session 5. Jan 17 12:19:26.531997 sshd[1603]: Accepted publickey for core from 139.178.68.195 port 47024 ssh2: RSA SHA256:r8mW/Iv+p7nZqo0WbSWD5Er765ayjb8xE8XAH1LjSMM Jan 17 12:19:26.535477 sshd[1603]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Jan 17 12:19:26.542369 systemd-logind[1440]: New session 6 of user core. Jan 17 12:19:26.552150 systemd[1]: Started session-6.scope - Session 6 of User core. 
Jan 17 12:19:26.616511 sudo[1607]: core : PWD=/home/core ; USER=root ; COMMAND=/usr/bin/rm -rf /etc/audit/rules.d/80-selinux.rules /etc/audit/rules.d/99-default.rules Jan 17 12:19:26.617070 sudo[1607]: pam_unix(sudo:session): session opened for user root(uid=0) by core(uid=500) Jan 17 12:19:26.623509 sudo[1607]: pam_unix(sudo:session): session closed for user root Jan 17 12:19:26.633495 sudo[1606]: core : PWD=/home/core ; USER=root ; COMMAND=/usr/bin/systemctl restart audit-rules Jan 17 12:19:26.634023 sudo[1606]: pam_unix(sudo:session): session opened for user root(uid=0) by core(uid=500) Jan 17 12:19:26.656266 systemd[1]: Stopping audit-rules.service - Load Security Auditing Rules... Jan 17 12:19:26.658416 auditctl[1610]: No rules Jan 17 12:19:26.659730 systemd[1]: audit-rules.service: Deactivated successfully. Jan 17 12:19:26.660019 systemd[1]: Stopped audit-rules.service - Load Security Auditing Rules. Jan 17 12:19:26.663686 systemd[1]: Starting audit-rules.service - Load Security Auditing Rules... Jan 17 12:19:26.701749 augenrules[1628]: No rules Jan 17 12:19:26.704222 systemd[1]: Finished audit-rules.service - Load Security Auditing Rules. Jan 17 12:19:26.706160 sudo[1606]: pam_unix(sudo:session): session closed for user root Jan 17 12:19:26.711112 sshd[1603]: pam_unix(sshd:session): session closed for user core Jan 17 12:19:26.723500 systemd[1]: sshd@5-143.244.184.73:22-139.178.68.195:47024.service: Deactivated successfully. Jan 17 12:19:26.726025 systemd[1]: session-6.scope: Deactivated successfully. Jan 17 12:19:26.728385 systemd-logind[1440]: Session 6 logged out. Waiting for processes to exit. Jan 17 12:19:26.739991 systemd[1]: Started sshd@6-143.244.184.73:22-139.178.68.195:47026.service - OpenSSH per-connection server daemon (139.178.68.195:47026). Jan 17 12:19:26.741152 systemd-logind[1440]: Removed session 6. 
Jan 17 12:19:26.778978 sshd[1636]: Accepted publickey for core from 139.178.68.195 port 47026 ssh2: RSA SHA256:r8mW/Iv+p7nZqo0WbSWD5Er765ayjb8xE8XAH1LjSMM Jan 17 12:19:26.781140 sshd[1636]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Jan 17 12:19:26.788090 systemd-logind[1440]: New session 7 of user core. Jan 17 12:19:26.795122 systemd[1]: Started session-7.scope - Session 7 of User core. Jan 17 12:19:26.855359 sudo[1639]: core : PWD=/home/core ; USER=root ; COMMAND=/home/core/install.sh Jan 17 12:19:26.855689 sudo[1639]: pam_unix(sudo:session): session opened for user root(uid=0) by core(uid=500) Jan 17 12:19:27.694561 systemd[1]: Stopped kubelet.service - kubelet: The Kubernetes Node Agent. Jan 17 12:19:27.695048 systemd[1]: kubelet.service: Consumed 1.447s CPU time. Jan 17 12:19:27.703292 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent... Jan 17 12:19:27.753923 systemd[1]: Reloading requested from client PID 1672 ('systemctl') (unit session-7.scope)... Jan 17 12:19:27.753951 systemd[1]: Reloading... Jan 17 12:19:27.881809 zram_generator::config[1708]: No configuration found. Jan 17 12:19:28.060881 systemd[1]: /usr/lib/systemd/system/docker.socket:6: ListenStream= references a path below legacy directory /var/run/, updating /var/run/docker.sock → /run/docker.sock; please update the unit file accordingly. Jan 17 12:19:28.158971 systemd[1]: Reloading finished in 404 ms. Jan 17 12:19:28.228057 systemd[1]: kubelet.service: Control process exited, code=killed, status=15/TERM Jan 17 12:19:28.228172 systemd[1]: kubelet.service: Failed with result 'signal'. Jan 17 12:19:28.228590 systemd[1]: Stopped kubelet.service - kubelet: The Kubernetes Node Agent. Jan 17 12:19:28.235355 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent... Jan 17 12:19:28.382369 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent. 
Jan 17 12:19:28.395370 (kubelet)[1766]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS Jan 17 12:19:28.470473 kubelet[1766]: Flag --container-runtime-endpoint has been deprecated, This parameter should be set via the config file specified by the Kubelet's --config flag. See https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/ for more information. Jan 17 12:19:28.471022 kubelet[1766]: Flag --pod-infra-container-image has been deprecated, will be removed in a future release. Image garbage collector will get sandbox image information from CRI. Jan 17 12:19:28.471097 kubelet[1766]: Flag --volume-plugin-dir has been deprecated, This parameter should be set via the config file specified by the Kubelet's --config flag. See https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/ for more information. Jan 17 12:19:28.472518 kubelet[1766]: I0117 12:19:28.472438 1766 server.go:206] "--pod-infra-container-image will not be pruned by the image garbage collector in kubelet and should also be set in the remote runtime" Jan 17 12:19:29.191965 kubelet[1766]: I0117 12:19:29.191873 1766 server.go:486] "Kubelet version" kubeletVersion="v1.31.0" Jan 17 12:19:29.191965 kubelet[1766]: I0117 12:19:29.191925 1766 server.go:488] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK="" Jan 17 12:19:29.192358 kubelet[1766]: I0117 12:19:29.192268 1766 server.go:929] "Client rotation is on, will bootstrap in background" Jan 17 12:19:29.229733 kubelet[1766]: I0117 12:19:29.229645 1766 dynamic_cafile_content.go:160] "Starting controller" name="client-ca-bundle::/etc/kubernetes/pki/ca.crt" Jan 17 12:19:29.244366 kubelet[1766]: E0117 12:19:29.244293 1766 log.go:32] "RuntimeConfig from runtime service failed" err="rpc error: code = Unimplemented desc = unknown method RuntimeConfig for service runtime.v1.RuntimeService" Jan 17 12:19:29.244366 kubelet[1766]: I0117 12:19:29.244350 1766 server.go:1403] "CRI 
implementation should be updated to support RuntimeConfig when KubeletCgroupDriverFromCRI feature gate has been enabled. Falling back to using cgroupDriver from kubelet config." Jan 17 12:19:29.252433 kubelet[1766]: I0117 12:19:29.252183 1766 server.go:744] "--cgroups-per-qos enabled, but --cgroup-root was not specified. defaulting to /" Jan 17 12:19:29.254390 kubelet[1766]: I0117 12:19:29.254235 1766 swap_util.go:113] "Swap is on" /proc/swaps contents="Filename\t\t\t\tType\t\tSize\t\tUsed\t\tPriority" Jan 17 12:19:29.254701 kubelet[1766]: I0117 12:19:29.254615 1766 container_manager_linux.go:264] "Container manager verified user specified cgroup-root exists" cgroupRoot=[] Jan 17 12:19:29.254936 kubelet[1766]: I0117 12:19:29.254682 1766 container_manager_linux.go:269] "Creating Container Manager object based on Node Config" nodeConfig={"NodeName":"143.244.184.73","RuntimeCgroupsName":"","SystemCgroupsName":"","KubeletCgroupsName":"","KubeletOOMScoreAdj":-999,"ContainerRuntime":"","CgroupsPerQOS":true,"CgroupRoot":"/","CgroupDriver":"systemd","KubeletRootDir":"/var/lib/kubelet","ProtectKernelDefaults":false,"KubeReservedCgroupName":"","SystemReservedCgroupName":"","ReservedSystemCPUs":{},"EnforceNodeAllocatable":{"pods":{}},"KubeReserved":null,"SystemReserved":null,"HardEvictionThresholds":[{"Signal":"memory.available","Operator":"LessThan","Value":{"Quantity":"100Mi","Percentage":0},"GracePeriod":0,"MinReclaim":null},{"Signal":"nodefs.available","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.1},"GracePeriod":0,"MinReclaim":null},{"Signal":"nodefs.inodesFree","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.05},"GracePeriod":0,"MinReclaim":null},{"Signal":"imagefs.available","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.15},"GracePeriod":0,"MinReclaim":null},{"Signal":"imagefs.inodesFree","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.05},"GracePeriod":0,"MinReclaim":null}],"QOSReserved":{},"CPUManagerPolicy"
:"none","CPUManagerPolicyOptions":null,"TopologyManagerScope":"container","CPUManagerReconcilePeriod":10000000000,"ExperimentalMemoryManagerPolicy":"None","ExperimentalMemoryManagerReservedMemory":null,"PodPidsLimit":-1,"EnforceCPULimits":true,"CPUCFSQuotaPeriod":100000000,"TopologyManagerPolicy":"none","TopologyManagerPolicyOptions":null,"CgroupVersion":2} Jan 17 12:19:29.255101 kubelet[1766]: I0117 12:19:29.254944 1766 topology_manager.go:138] "Creating topology manager with none policy" Jan 17 12:19:29.255101 kubelet[1766]: I0117 12:19:29.254961 1766 container_manager_linux.go:300] "Creating device plugin manager" Jan 17 12:19:29.255185 kubelet[1766]: I0117 12:19:29.255131 1766 state_mem.go:36] "Initialized new in-memory state store" Jan 17 12:19:29.257465 kubelet[1766]: I0117 12:19:29.257106 1766 kubelet.go:408] "Attempting to sync node with API server" Jan 17 12:19:29.257465 kubelet[1766]: I0117 12:19:29.257149 1766 kubelet.go:303] "Adding static pod path" path="/etc/kubernetes/manifests" Jan 17 12:19:29.257465 kubelet[1766]: I0117 12:19:29.257187 1766 kubelet.go:314] "Adding apiserver pod source" Jan 17 12:19:29.257465 kubelet[1766]: I0117 12:19:29.257204 1766 apiserver.go:42] "Waiting for node sync before watching apiserver pods" Jan 17 12:19:29.259845 kubelet[1766]: E0117 12:19:29.259729 1766 file.go:98] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Jan 17 12:19:29.259845 kubelet[1766]: E0117 12:19:29.259853 1766 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Jan 17 12:19:29.262566 kubelet[1766]: I0117 12:19:29.262472 1766 kuberuntime_manager.go:262] "Container runtime initialized" containerRuntime="containerd" version="v1.7.21" apiVersion="v1" Jan 17 12:19:29.264661 kubelet[1766]: I0117 12:19:29.264600 1766 kubelet.go:837] "Not starting ClusterTrustBundle informer because we are in static kubelet mode" Jan 17 12:19:29.265426 
kubelet[1766]: W0117 12:19:29.265351 1766 probe.go:272] Flexvolume plugin directory at /opt/libexec/kubernetes/kubelet-plugins/volume/exec/ does not exist. Recreating. Jan 17 12:19:29.266258 kubelet[1766]: I0117 12:19:29.266188 1766 server.go:1269] "Started kubelet" Jan 17 12:19:29.267968 kubelet[1766]: I0117 12:19:29.267628 1766 server.go:163] "Starting to listen" address="0.0.0.0" port=10250 Jan 17 12:19:29.270179 kubelet[1766]: I0117 12:19:29.269481 1766 server.go:460] "Adding debug handlers to kubelet server" Jan 17 12:19:29.273296 kubelet[1766]: I0117 12:19:29.273225 1766 ratelimit.go:55] "Setting rate limiting for endpoint" service="podresources" qps=100 burstTokens=10 Jan 17 12:19:29.273720 kubelet[1766]: I0117 12:19:29.273701 1766 server.go:236] "Starting to serve the podresources API" endpoint="unix:/var/lib/kubelet/pod-resources/kubelet.sock" Jan 17 12:19:29.274469 kubelet[1766]: I0117 12:19:29.274438 1766 fs_resource_analyzer.go:67] "Starting FS ResourceAnalyzer" Jan 17 12:19:29.278845 kubelet[1766]: E0117 12:19:29.278807 1766 kubelet.go:1478] "Image garbage collection failed once. 
Stats initialization may not have completed yet" err="invalid capacity 0 on image filesystem" Jan 17 12:19:29.279384 kubelet[1766]: I0117 12:19:29.279344 1766 dynamic_serving_content.go:135] "Starting controller" name="kubelet-server-cert-files::/var/lib/kubelet/pki/kubelet.crt::/var/lib/kubelet/pki/kubelet.key" Jan 17 12:19:29.284288 kubelet[1766]: E0117 12:19:29.283483 1766 kubelet_node_status.go:453] "Error getting the current node from lister" err="node \"143.244.184.73\" not found" Jan 17 12:19:29.284288 kubelet[1766]: I0117 12:19:29.283571 1766 volume_manager.go:289] "Starting Kubelet Volume Manager" Jan 17 12:19:29.284580 kubelet[1766]: I0117 12:19:29.284552 1766 desired_state_of_world_populator.go:146] "Desired state populator starts to run" Jan 17 12:19:29.284816 kubelet[1766]: I0117 12:19:29.284756 1766 reconciler.go:26] "Reconciler: start to sync state" Jan 17 12:19:29.288421 kubelet[1766]: I0117 12:19:29.286697 1766 factory.go:221] Registration of the systemd container factory successfully Jan 17 12:19:29.288421 kubelet[1766]: I0117 12:19:29.286901 1766 factory.go:219] Registration of the crio container factory failed: Get "http://%2Fvar%2Frun%2Fcrio%2Fcrio.sock/info": dial unix /var/run/crio/crio.sock: connect: no such file or directory Jan 17 12:19:29.292300 kubelet[1766]: I0117 12:19:29.291381 1766 factory.go:221] Registration of the containerd container factory successfully Jan 17 12:19:29.317617 kubelet[1766]: E0117 12:19:29.302990 1766 event.go:359] "Server rejected event (will not retry!)" err="events is forbidden: User \"system:anonymous\" cannot create resource \"events\" in API group \"\" in the namespace \"default\"" event="&Event{ObjectMeta:{143.244.184.73.181b7a225ba00e71 default 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Node,Namespace:,Name:143.244.184.73,UID:143.244.184.73,APIVersion:,ResourceVersion:,FieldPath:,},Reason:Starting,Message:Starting 
kubelet.,Source:EventSource{Component:kubelet,Host:143.244.184.73,},FirstTimestamp:2025-01-17 12:19:29.266151025 +0000 UTC m=+0.860819675,LastTimestamp:2025-01-17 12:19:29.266151025 +0000 UTC m=+0.860819675,Count:1,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:143.244.184.73,}" Jan 17 12:19:29.317617 kubelet[1766]: W0117 12:19:29.311407 1766 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Service: services is forbidden: User "system:anonymous" cannot list resource "services" in API group "" at the cluster scope Jan 17 12:19:29.317617 kubelet[1766]: E0117 12:19:29.311497 1766 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Service: failed to list *v1.Service: services is forbidden: User \"system:anonymous\" cannot list resource \"services\" in API group \"\" at the cluster scope" logger="UnhandledError" Jan 17 12:19:29.317617 kubelet[1766]: W0117 12:19:29.311794 1766 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Node: nodes "143.244.184.73" is forbidden: User "system:anonymous" cannot list resource "nodes" in API group "" at the cluster scope Jan 17 12:19:29.317617 kubelet[1766]: E0117 12:19:29.311882 1766 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Node: failed to list *v1.Node: nodes \"143.244.184.73\" is forbidden: User \"system:anonymous\" cannot list resource \"nodes\" in API group \"\" at the cluster scope" logger="UnhandledError" Jan 17 12:19:29.321812 kubelet[1766]: W0117 12:19:29.320881 1766 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.CSIDriver: csidrivers.storage.k8s.io is forbidden: User "system:anonymous" cannot list resource "csidrivers" in API group "storage.k8s.io" at the cluster scope Jan 17 12:19:29.321812 kubelet[1766]: E0117 12:19:29.320941 1766 
reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.CSIDriver: failed to list *v1.CSIDriver: csidrivers.storage.k8s.io is forbidden: User \"system:anonymous\" cannot list resource \"csidrivers\" in API group \"storage.k8s.io\" at the cluster scope" logger="UnhandledError" Jan 17 12:19:29.321812 kubelet[1766]: E0117 12:19:29.321094 1766 controller.go:145] "Failed to ensure lease exists, will retry" err="leases.coordination.k8s.io \"143.244.184.73\" is forbidden: User \"system:anonymous\" cannot get resource \"leases\" in API group \"coordination.k8s.io\" in the namespace \"kube-node-lease\"" interval="200ms" Jan 17 12:19:29.344046 kubelet[1766]: I0117 12:19:29.344009 1766 cpu_manager.go:214] "Starting CPU manager" policy="none" Jan 17 12:19:29.344263 kubelet[1766]: I0117 12:19:29.344250 1766 cpu_manager.go:215] "Reconciling" reconcilePeriod="10s" Jan 17 12:19:29.344323 kubelet[1766]: I0117 12:19:29.344315 1766 state_mem.go:36] "Initialized new in-memory state store" Jan 17 12:19:29.349406 kubelet[1766]: I0117 12:19:29.349362 1766 policy_none.go:49] "None policy: Start" Jan 17 12:19:29.352395 kubelet[1766]: I0117 12:19:29.352316 1766 memory_manager.go:170] "Starting memorymanager" policy="None" Jan 17 12:19:29.353941 kubelet[1766]: E0117 12:19:29.353308 1766 event.go:359] "Server rejected event (will not retry!)" err="events is forbidden: User \"system:anonymous\" cannot create resource \"events\" in API group \"\" in the namespace \"default\"" event="&Event{ObjectMeta:{143.244.184.73.181b7a225c6059d1 default 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Node,Namespace:,Name:143.244.184.73,UID:143.244.184.73,APIVersion:,ResourceVersion:,FieldPath:,},Reason:InvalidDiskCapacity,Message:invalid capacity 0 on image filesystem,Source:EventSource{Component:kubelet,Host:143.244.184.73,},FirstTimestamp:2025-01-17 12:19:29.278753233 +0000 UTC m=+0.873421863,LastTimestamp:2025-01-17 
12:19:29.278753233 +0000 UTC m=+0.873421863,Count:1,Type:Warning,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:143.244.184.73,}" Jan 17 12:19:29.354950 kubelet[1766]: I0117 12:19:29.354872 1766 state_mem.go:35] "Initializing new in-memory state store" Jan 17 12:19:29.368415 systemd[1]: Created slice kubepods.slice - libcontainer container kubepods.slice. Jan 17 12:19:29.384848 systemd[1]: Created slice kubepods-burstable.slice - libcontainer container kubepods-burstable.slice. Jan 17 12:19:29.386759 kubelet[1766]: E0117 12:19:29.385528 1766 kubelet_node_status.go:453] "Error getting the current node from lister" err="node \"143.244.184.73\" not found" Jan 17 12:19:29.393319 systemd[1]: Created slice kubepods-besteffort.slice - libcontainer container kubepods-besteffort.slice. Jan 17 12:19:29.397625 kubelet[1766]: E0117 12:19:29.397498 1766 event.go:359] "Server rejected event (will not retry!)" err="events is forbidden: User \"system:anonymous\" cannot create resource \"events\" in API group \"\" in the namespace \"default\"" event="&Event{ObjectMeta:{143.244.184.73.181b7a226011a4c2 default 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Node,Namespace:,Name:143.244.184.73,UID:143.244.184.73,APIVersion:,ResourceVersion:,FieldPath:,},Reason:NodeHasSufficientMemory,Message:Node 143.244.184.73 status is now: NodeHasSufficientMemory,Source:EventSource{Component:kubelet,Host:143.244.184.73,},FirstTimestamp:2025-01-17 12:19:29.340703938 +0000 UTC m=+0.935372543,LastTimestamp:2025-01-17 12:19:29.340703938 +0000 UTC m=+0.935372543,Count:1,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:143.244.184.73,}" Jan 17 12:19:29.401324 kubelet[1766]: I0117 12:19:29.401278 1766 kubelet_network_linux.go:50] "Initialized iptables rules." 
protocol="IPv4" Jan 17 12:19:29.403841 kubelet[1766]: I0117 12:19:29.403076 1766 manager.go:510] "Failed to read data from checkpoint" checkpoint="kubelet_internal_checkpoint" err="checkpoint is not found" Jan 17 12:19:29.403841 kubelet[1766]: I0117 12:19:29.403271 1766 eviction_manager.go:189] "Eviction manager: starting control loop" Jan 17 12:19:29.403841 kubelet[1766]: I0117 12:19:29.403282 1766 container_log_manager.go:189] "Initializing container log rotate workers" workers=1 monitorPeriod="10s" Jan 17 12:19:29.404086 kubelet[1766]: I0117 12:19:29.404072 1766 plugin_manager.go:118] "Starting Kubelet Plugin Manager" Jan 17 12:19:29.408827 kubelet[1766]: I0117 12:19:29.407551 1766 kubelet_network_linux.go:50] "Initialized iptables rules." protocol="IPv6" Jan 17 12:19:29.408827 kubelet[1766]: I0117 12:19:29.407601 1766 status_manager.go:217] "Starting to sync pod status with apiserver" Jan 17 12:19:29.408827 kubelet[1766]: I0117 12:19:29.407627 1766 kubelet.go:2321] "Starting kubelet main sync loop" Jan 17 12:19:29.408827 kubelet[1766]: E0117 12:19:29.407760 1766 kubelet.go:2345] "Skipping pod synchronization" err="PLEG is not healthy: pleg has yet to be successful" Jan 17 12:19:29.412109 kubelet[1766]: E0117 12:19:29.412060 1766 eviction_manager.go:285] "Eviction manager: failed to get summary stats" err="failed to get node info: node \"143.244.184.73\" not found" Jan 17 12:19:29.423647 kubelet[1766]: E0117 12:19:29.423497 1766 event.go:359] "Server rejected event (will not retry!)" err="events is forbidden: User \"system:anonymous\" cannot create resource \"events\" in API group \"\" in the namespace \"default\"" event="&Event{ObjectMeta:{143.244.184.73.181b7a226011c9bf default 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Node,Namespace:,Name:143.244.184.73,UID:143.244.184.73,APIVersion:,ResourceVersion:,FieldPath:,},Reason:NodeHasNoDiskPressure,Message:Node 143.244.184.73 status is now: 
NodeHasNoDiskPressure,Source:EventSource{Component:kubelet,Host:143.244.184.73,},FirstTimestamp:2025-01-17 12:19:29.340713407 +0000 UTC m=+0.935382028,LastTimestamp:2025-01-17 12:19:29.340713407 +0000 UTC m=+0.935382028,Count:1,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:143.244.184.73,}" Jan 17 12:19:29.424314 kubelet[1766]: W0117 12:19:29.424214 1766 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.RuntimeClass: runtimeclasses.node.k8s.io is forbidden: User "system:anonymous" cannot list resource "runtimeclasses" in API group "node.k8s.io" at the cluster scope Jan 17 12:19:29.424314 kubelet[1766]: E0117 12:19:29.424268 1766 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.RuntimeClass: failed to list *v1.RuntimeClass: runtimeclasses.node.k8s.io is forbidden: User \"system:anonymous\" cannot list resource \"runtimeclasses\" in API group \"node.k8s.io\" at the cluster scope" logger="UnhandledError" Jan 17 12:19:29.428760 kubelet[1766]: E0117 12:19:29.428550 1766 event.go:359] "Server rejected event (will not retry!)" err="events is forbidden: User \"system:anonymous\" cannot create resource \"events\" in API group \"\" in the namespace \"default\"" event="&Event{ObjectMeta:{143.244.184.73.181b7a226011d977 default 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Node,Namespace:,Name:143.244.184.73,UID:143.244.184.73,APIVersion:,ResourceVersion:,FieldPath:,},Reason:NodeHasSufficientPID,Message:Node 143.244.184.73 status is now: NodeHasSufficientPID,Source:EventSource{Component:kubelet,Host:143.244.184.73,},FirstTimestamp:2025-01-17 12:19:29.340717431 +0000 UTC m=+0.935386036,LastTimestamp:2025-01-17 12:19:29.340717431 +0000 UTC m=+0.935386036,Count:1,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 
UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:143.244.184.73,}" Jan 17 12:19:29.439597 kubelet[1766]: E0117 12:19:29.439454 1766 event.go:359] "Server rejected event (will not retry!)" err="events is forbidden: User \"system:anonymous\" cannot create resource \"events\" in API group \"\" in the namespace \"default\"" event="&Event{ObjectMeta:{143.244.184.73.181b7a22641407f4 default 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Node,Namespace:,Name:143.244.184.73,UID:143.244.184.73,APIVersion:,ResourceVersion:,FieldPath:,},Reason:NodeAllocatableEnforced,Message:Updated Node Allocatable limit across pods,Source:EventSource{Component:kubelet,Host:143.244.184.73,},FirstTimestamp:2025-01-17 12:19:29.407969268 +0000 UTC m=+1.002637874,LastTimestamp:2025-01-17 12:19:29.407969268 +0000 UTC m=+1.002637874,Count:1,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:143.244.184.73,}" Jan 17 12:19:29.505167 kubelet[1766]: I0117 12:19:29.504615 1766 kubelet_node_status.go:72] "Attempting to register node" node="143.244.184.73" Jan 17 12:19:29.507441 kubelet[1766]: E0117 12:19:29.507284 1766 kubelet_node_status.go:95] "Unable to register node with API server" err="nodes is forbidden: User \"system:anonymous\" cannot create resource \"nodes\" in API group \"\" at the cluster scope" node="143.244.184.73" Jan 17 12:19:29.534681 kubelet[1766]: E0117 12:19:29.534582 1766 nodelease.go:49] "Failed to get node when trying to set owner ref to the node lease" err="nodes \"143.244.184.73\" not found" node="143.244.184.73" Jan 17 12:19:29.709953 kubelet[1766]: I0117 12:19:29.709059 1766 kubelet_node_status.go:72] "Attempting to register node" node="143.244.184.73" Jan 17 12:19:29.745721 kubelet[1766]: I0117 12:19:29.745644 1766 kubelet_node_status.go:75] "Successfully registered node" node="143.244.184.73" Jan 17 12:19:29.745721 
kubelet[1766]: E0117 12:19:29.745705 1766 kubelet_node_status.go:535] "Error updating node status, will retry" err="error getting node \"143.244.184.73\": node \"143.244.184.73\" not found" Jan 17 12:19:29.783623 kubelet[1766]: E0117 12:19:29.783384 1766 kubelet_node_status.go:453] "Error getting the current node from lister" err="node \"143.244.184.73\" not found" Jan 17 12:19:29.883960 kubelet[1766]: E0117 12:19:29.883880 1766 kubelet_node_status.go:453] "Error getting the current node from lister" err="node \"143.244.184.73\" not found" Jan 17 12:19:29.984590 kubelet[1766]: E0117 12:19:29.984507 1766 kubelet_node_status.go:453] "Error getting the current node from lister" err="node \"143.244.184.73\" not found" Jan 17 12:19:30.085498 kubelet[1766]: E0117 12:19:30.085316 1766 kubelet_node_status.go:453] "Error getting the current node from lister" err="node \"143.244.184.73\" not found" Jan 17 12:19:30.186245 kubelet[1766]: E0117 12:19:30.186167 1766 kubelet_node_status.go:453] "Error getting the current node from lister" err="node \"143.244.184.73\" not found" Jan 17 12:19:30.195899 kubelet[1766]: I0117 12:19:30.195805 1766 transport.go:147] "Certificate rotation detected, shutting down client connections to start using new credentials" Jan 17 12:19:30.260822 kubelet[1766]: E0117 12:19:30.260699 1766 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Jan 17 12:19:30.286831 kubelet[1766]: E0117 12:19:30.286737 1766 kubelet_node_status.go:453] "Error getting the current node from lister" err="node \"143.244.184.73\" not found" Jan 17 12:19:30.388175 kubelet[1766]: E0117 12:19:30.387899 1766 kubelet_node_status.go:453] "Error getting the current node from lister" err="node \"143.244.184.73\" not found" Jan 17 12:19:30.439997 sudo[1639]: pam_unix(sudo:session): session closed for user root Jan 17 12:19:30.444644 sshd[1636]: pam_unix(sshd:session): session closed for user core Jan 17 12:19:30.450355 
systemd[1]: sshd@6-143.244.184.73:22-139.178.68.195:47026.service: Deactivated successfully. Jan 17 12:19:30.453727 systemd[1]: session-7.scope: Deactivated successfully. Jan 17 12:19:30.455981 systemd-logind[1440]: Session 7 logged out. Waiting for processes to exit. Jan 17 12:19:30.458138 systemd-logind[1440]: Removed session 7. Jan 17 12:19:30.488861 kubelet[1766]: E0117 12:19:30.488755 1766 kubelet_node_status.go:453] "Error getting the current node from lister" err="node \"143.244.184.73\" not found" Jan 17 12:19:30.589941 kubelet[1766]: E0117 12:19:30.589862 1766 kubelet_node_status.go:453] "Error getting the current node from lister" err="node \"143.244.184.73\" not found" Jan 17 12:19:30.690997 kubelet[1766]: E0117 12:19:30.690791 1766 kubelet_node_status.go:453] "Error getting the current node from lister" err="node \"143.244.184.73\" not found" Jan 17 12:19:30.791882 kubelet[1766]: E0117 12:19:30.791796 1766 kubelet_node_status.go:453] "Error getting the current node from lister" err="node \"143.244.184.73\" not found" Jan 17 12:19:30.893228 kubelet[1766]: I0117 12:19:30.893039 1766 kuberuntime_manager.go:1633] "Updating runtime config through cri with podcidr" CIDR="192.168.1.0/24" Jan 17 12:19:30.893693 containerd[1451]: time="2025-01-17T12:19:30.893539032Z" level=info msg="No cni config template is specified, wait for other system components to drop the config." 
Jan 17 12:19:30.894199 kubelet[1766]: I0117 12:19:30.894156 1766 kubelet_network.go:61] "Updating Pod CIDR" originalPodCIDR="" newPodCIDR="192.168.1.0/24" Jan 17 12:19:31.262347 kubelet[1766]: I0117 12:19:31.261867 1766 apiserver.go:52] "Watching apiserver" Jan 17 12:19:31.262347 kubelet[1766]: E0117 12:19:31.262283 1766 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Jan 17 12:19:31.270608 kubelet[1766]: E0117 12:19:31.269651 1766 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" pod="calico-system/csi-node-driver-s7bwc" podUID="d94ec394-9eb7-4930-b60e-267badfa15a7" Jan 17 12:19:31.279546 systemd[1]: Created slice kubepods-besteffort-podd564e16b_1f67_4184_aa43_129a3edbd123.slice - libcontainer container kubepods-besteffort-podd564e16b_1f67_4184_aa43_129a3edbd123.slice. Jan 17 12:19:31.286023 kubelet[1766]: I0117 12:19:31.285951 1766 desired_state_of_world_populator.go:154] "Finished populating initial desired state of world" Jan 17 12:19:31.310255 systemd[1]: Created slice kubepods-besteffort-pod8c83ed62_01c6_4892_80b1_9740c1eeacaf.slice - libcontainer container kubepods-besteffort-pod8c83ed62_01c6_4892_80b1_9740c1eeacaf.slice. 
Jan 17 12:19:31.313370 kubelet[1766]: I0117 12:19:31.313297 1766 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-lcndx\" (UniqueName: \"kubernetes.io/projected/d564e16b-1f67-4184-aa43-129a3edbd123-kube-api-access-lcndx\") pod \"kube-proxy-rfjfq\" (UID: \"d564e16b-1f67-4184-aa43-129a3edbd123\") " pod="kube-system/kube-proxy-rfjfq" Jan 17 12:19:31.313370 kubelet[1766]: I0117 12:19:31.313361 1766 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"typha-certs\" (UniqueName: \"kubernetes.io/secret/cdebc0ff-29de-46fd-9d22-55ec1d869990-typha-certs\") pod \"calico-typha-68d6dc4656-r9bhj\" (UID: \"cdebc0ff-29de-46fd-9d22-55ec1d869990\") " pod="calico-system/calico-typha-68d6dc4656-r9bhj" Jan 17 12:19:31.313580 kubelet[1766]: I0117 12:19:31.313394 1766 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-nhpxk\" (UniqueName: \"kubernetes.io/projected/cdebc0ff-29de-46fd-9d22-55ec1d869990-kube-api-access-nhpxk\") pod \"calico-typha-68d6dc4656-r9bhj\" (UID: \"cdebc0ff-29de-46fd-9d22-55ec1d869990\") " pod="calico-system/calico-typha-68d6dc4656-r9bhj" Jan 17 12:19:31.313580 kubelet[1766]: I0117 12:19:31.313474 1766 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"policysync\" (UniqueName: \"kubernetes.io/host-path/8c83ed62-01c6-4892-80b1-9740c1eeacaf-policysync\") pod \"calico-node-f8q2m\" (UID: \"8c83ed62-01c6-4892-80b1-9740c1eeacaf\") " pod="calico-system/calico-node-f8q2m" Jan 17 12:19:31.313580 kubelet[1766]: I0117 12:19:31.313507 1766 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"varrun\" (UniqueName: \"kubernetes.io/host-path/d94ec394-9eb7-4930-b60e-267badfa15a7-varrun\") pod \"csi-node-driver-s7bwc\" (UID: \"d94ec394-9eb7-4930-b60e-267badfa15a7\") " pod="calico-system/csi-node-driver-s7bwc" Jan 17 
12:19:31.313580 kubelet[1766]: I0117 12:19:31.313532 1766 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"socket-dir\" (UniqueName: \"kubernetes.io/host-path/d94ec394-9eb7-4930-b60e-267badfa15a7-socket-dir\") pod \"csi-node-driver-s7bwc\" (UID: \"d94ec394-9eb7-4930-b60e-267badfa15a7\") " pod="calico-system/csi-node-driver-s7bwc" Jan 17 12:19:31.313580 kubelet[1766]: I0117 12:19:31.313557 1766 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/d564e16b-1f67-4184-aa43-129a3edbd123-lib-modules\") pod \"kube-proxy-rfjfq\" (UID: \"d564e16b-1f67-4184-aa43-129a3edbd123\") " pod="kube-system/kube-proxy-rfjfq" Jan 17 12:19:31.313759 kubelet[1766]: I0117 12:19:31.313606 1766 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"tigera-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/cdebc0ff-29de-46fd-9d22-55ec1d869990-tigera-ca-bundle\") pod \"calico-typha-68d6dc4656-r9bhj\" (UID: \"cdebc0ff-29de-46fd-9d22-55ec1d869990\") " pod="calico-system/calico-typha-68d6dc4656-r9bhj" Jan 17 12:19:31.313759 kubelet[1766]: I0117 12:19:31.313636 1766 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"var-lib-calico\" (UniqueName: \"kubernetes.io/host-path/8c83ed62-01c6-4892-80b1-9740c1eeacaf-var-lib-calico\") pod \"calico-node-f8q2m\" (UID: \"8c83ed62-01c6-4892-80b1-9740c1eeacaf\") " pod="calico-system/calico-node-f8q2m" Jan 17 12:19:31.313759 kubelet[1766]: I0117 12:19:31.313659 1766 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cni-bin-dir\" (UniqueName: \"kubernetes.io/host-path/8c83ed62-01c6-4892-80b1-9740c1eeacaf-cni-bin-dir\") pod \"calico-node-f8q2m\" (UID: \"8c83ed62-01c6-4892-80b1-9740c1eeacaf\") " pod="calico-system/calico-node-f8q2m" Jan 17 12:19:31.313759 kubelet[1766]: I0117 12:19:31.313687 
1766 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-jqqwn\" (UniqueName: \"kubernetes.io/projected/8c83ed62-01c6-4892-80b1-9740c1eeacaf-kube-api-access-jqqwn\") pod \"calico-node-f8q2m\" (UID: \"8c83ed62-01c6-4892-80b1-9740c1eeacaf\") " pod="calico-system/calico-node-f8q2m" Jan 17 12:19:31.313759 kubelet[1766]: I0117 12:19:31.313721 1766 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"var-run-calico\" (UniqueName: \"kubernetes.io/host-path/8c83ed62-01c6-4892-80b1-9740c1eeacaf-var-run-calico\") pod \"calico-node-f8q2m\" (UID: \"8c83ed62-01c6-4892-80b1-9740c1eeacaf\") " pod="calico-system/calico-node-f8q2m" Jan 17 12:19:31.313893 kubelet[1766]: I0117 12:19:31.313746 1766 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cni-net-dir\" (UniqueName: \"kubernetes.io/host-path/8c83ed62-01c6-4892-80b1-9740c1eeacaf-cni-net-dir\") pod \"calico-node-f8q2m\" (UID: \"8c83ed62-01c6-4892-80b1-9740c1eeacaf\") " pod="calico-system/calico-node-f8q2m" Jan 17 12:19:31.313893 kubelet[1766]: I0117 12:19:31.313790 1766 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cni-log-dir\" (UniqueName: \"kubernetes.io/host-path/8c83ed62-01c6-4892-80b1-9740c1eeacaf-cni-log-dir\") pod \"calico-node-f8q2m\" (UID: \"8c83ed62-01c6-4892-80b1-9740c1eeacaf\") " pod="calico-system/calico-node-f8q2m" Jan 17 12:19:31.313893 kubelet[1766]: I0117 12:19:31.313837 1766 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"flexvol-driver-host\" (UniqueName: \"kubernetes.io/host-path/8c83ed62-01c6-4892-80b1-9740c1eeacaf-flexvol-driver-host\") pod \"calico-node-f8q2m\" (UID: \"8c83ed62-01c6-4892-80b1-9740c1eeacaf\") " pod="calico-system/calico-node-f8q2m" Jan 17 12:19:31.313893 kubelet[1766]: I0117 12:19:31.313863 1766 reconciler_common.go:245] 
"operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-j78tw\" (UniqueName: \"kubernetes.io/projected/d94ec394-9eb7-4930-b60e-267badfa15a7-kube-api-access-j78tw\") pod \"csi-node-driver-s7bwc\" (UID: \"d94ec394-9eb7-4930-b60e-267badfa15a7\") " pod="calico-system/csi-node-driver-s7bwc" Jan 17 12:19:31.314005 kubelet[1766]: I0117 12:19:31.313890 1766 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"xtables-lock\" (UniqueName: \"kubernetes.io/host-path/d564e16b-1f67-4184-aa43-129a3edbd123-xtables-lock\") pod \"kube-proxy-rfjfq\" (UID: \"d564e16b-1f67-4184-aa43-129a3edbd123\") " pod="kube-system/kube-proxy-rfjfq" Jan 17 12:19:31.314005 kubelet[1766]: I0117 12:19:31.313917 1766 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-proxy\" (UniqueName: \"kubernetes.io/configmap/d564e16b-1f67-4184-aa43-129a3edbd123-kube-proxy\") pod \"kube-proxy-rfjfq\" (UID: \"d564e16b-1f67-4184-aa43-129a3edbd123\") " pod="kube-system/kube-proxy-rfjfq" Jan 17 12:19:31.314005 kubelet[1766]: I0117 12:19:31.313942 1766 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/8c83ed62-01c6-4892-80b1-9740c1eeacaf-lib-modules\") pod \"calico-node-f8q2m\" (UID: \"8c83ed62-01c6-4892-80b1-9740c1eeacaf\") " pod="calico-system/calico-node-f8q2m" Jan 17 12:19:31.314005 kubelet[1766]: I0117 12:19:31.313964 1766 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"xtables-lock\" (UniqueName: \"kubernetes.io/host-path/8c83ed62-01c6-4892-80b1-9740c1eeacaf-xtables-lock\") pod \"calico-node-f8q2m\" (UID: \"8c83ed62-01c6-4892-80b1-9740c1eeacaf\") " pod="calico-system/calico-node-f8q2m" Jan 17 12:19:31.314005 kubelet[1766]: I0117 12:19:31.313987 1766 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume 
\"tigera-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/8c83ed62-01c6-4892-80b1-9740c1eeacaf-tigera-ca-bundle\") pod \"calico-node-f8q2m\" (UID: \"8c83ed62-01c6-4892-80b1-9740c1eeacaf\") " pod="calico-system/calico-node-f8q2m" Jan 17 12:19:31.314180 kubelet[1766]: I0117 12:19:31.314008 1766 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"node-certs\" (UniqueName: \"kubernetes.io/secret/8c83ed62-01c6-4892-80b1-9740c1eeacaf-node-certs\") pod \"calico-node-f8q2m\" (UID: \"8c83ed62-01c6-4892-80b1-9740c1eeacaf\") " pod="calico-system/calico-node-f8q2m" Jan 17 12:19:31.314180 kubelet[1766]: I0117 12:19:31.314032 1766 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kubelet-dir\" (UniqueName: \"kubernetes.io/host-path/d94ec394-9eb7-4930-b60e-267badfa15a7-kubelet-dir\") pod \"csi-node-driver-s7bwc\" (UID: \"d94ec394-9eb7-4930-b60e-267badfa15a7\") " pod="calico-system/csi-node-driver-s7bwc" Jan 17 12:19:31.314180 kubelet[1766]: I0117 12:19:31.314055 1766 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"registration-dir\" (UniqueName: \"kubernetes.io/host-path/d94ec394-9eb7-4930-b60e-267badfa15a7-registration-dir\") pod \"csi-node-driver-s7bwc\" (UID: \"d94ec394-9eb7-4930-b60e-267badfa15a7\") " pod="calico-system/csi-node-driver-s7bwc" Jan 17 12:19:31.320392 systemd[1]: Created slice kubepods-besteffort-podcdebc0ff_29de_46fd_9d22_55ec1d869990.slice - libcontainer container kubepods-besteffort-podcdebc0ff_29de_46fd_9d22_55ec1d869990.slice. 
Jan 17 12:19:31.448017 kubelet[1766]: E0117 12:19:31.447847 1766 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 17 12:19:31.448017 kubelet[1766]: W0117 12:19:31.447886 1766 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 17 12:19:31.448017 kubelet[1766]: E0117 12:19:31.447917 1766 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jan 17 12:19:31.448904 kubelet[1766]: E0117 12:19:31.448867 1766 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 17 12:19:31.448904 kubelet[1766]: W0117 12:19:31.448894 1766 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 17 12:19:31.449065 kubelet[1766]: E0117 12:19:31.448920 1766 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jan 17 12:19:31.476895 kubelet[1766]: E0117 12:19:31.476742 1766 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 17 12:19:31.476895 kubelet[1766]: W0117 12:19:31.476882 1766 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 17 12:19:31.476895 kubelet[1766]: E0117 12:19:31.476909 1766 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Jan 17 12:19:31.488976 kubelet[1766]: E0117 12:19:31.488933 1766 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 17 12:19:31.488976 kubelet[1766]: W0117 12:19:31.488963 1766 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 17 12:19:31.489140 kubelet[1766]: E0117 12:19:31.489029 1766 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jan 17 12:19:31.494378 kubelet[1766]: E0117 12:19:31.494180 1766 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 17 12:19:31.494378 kubelet[1766]: W0117 12:19:31.494209 1766 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 17 12:19:31.494378 kubelet[1766]: E0117 12:19:31.494245 1766 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Jan 17 12:19:31.498348 kubelet[1766]: E0117 12:19:31.498056 1766 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 17 12:19:31.498348 kubelet[1766]: W0117 12:19:31.498080 1766 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 17 12:19:31.498348 kubelet[1766]: E0117 12:19:31.498105 1766 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jan 17 12:19:31.606573 kubelet[1766]: E0117 12:19:31.606247 1766 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 67.207.67.2 67.207.67.3 67.207.67.2" Jan 17 12:19:31.609705 containerd[1451]: time="2025-01-17T12:19:31.609277687Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-proxy-rfjfq,Uid:d564e16b-1f67-4184-aa43-129a3edbd123,Namespace:kube-system,Attempt:0,}" Jan 17 12:19:31.619179 kubelet[1766]: E0117 12:19:31.617664 1766 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 67.207.67.2 67.207.67.3 67.207.67.2" Jan 17 12:19:31.619362 containerd[1451]: time="2025-01-17T12:19:31.618496782Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-node-f8q2m,Uid:8c83ed62-01c6-4892-80b1-9740c1eeacaf,Namespace:calico-system,Attempt:0,}" Jan 17 12:19:31.625492 kubelet[1766]: E0117 12:19:31.625441 1766 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 67.207.67.2 67.207.67.3 67.207.67.2" Jan 17 12:19:31.626359 containerd[1451]: 
time="2025-01-17T12:19:31.626319621Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-typha-68d6dc4656-r9bhj,Uid:cdebc0ff-29de-46fd-9d22-55ec1d869990,Namespace:calico-system,Attempt:0,}" Jan 17 12:19:32.193984 containerd[1451]: time="2025-01-17T12:19:32.193329053Z" level=info msg="ImageCreate event name:\"registry.k8s.io/pause:3.8\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"} labels:{key:\"io.cri-containerd.pinned\" value:\"pinned\"}" Jan 17 12:19:32.195625 containerd[1451]: time="2025-01-17T12:19:32.195545700Z" level=info msg="stop pulling image registry.k8s.io/pause:3.8: active requests=0, bytes read=312056" Jan 17 12:19:32.199819 containerd[1451]: time="2025-01-17T12:19:32.198254284Z" level=info msg="ImageCreate event name:\"sha256:4873874c08efc72e9729683a83ffbb7502ee729e9a5ac097723806ea7fa13517\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"} labels:{key:\"io.cri-containerd.pinned\" value:\"pinned\"}" Jan 17 12:19:32.199819 containerd[1451]: time="2025-01-17T12:19:32.199330741Z" level=info msg="stop pulling image registry.k8s.io/pause:3.8: active requests=0, bytes read=0" Jan 17 12:19:32.200433 containerd[1451]: time="2025-01-17T12:19:32.200367447Z" level=info msg="ImageUpdate event name:\"registry.k8s.io/pause:3.8\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"} labels:{key:\"io.cri-containerd.pinned\" value:\"pinned\"}" Jan 17 12:19:32.202984 containerd[1451]: time="2025-01-17T12:19:32.202899601Z" level=info msg="ImageUpdate event name:\"registry.k8s.io/pause:3.8\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"} labels:{key:\"io.cri-containerd.pinned\" value:\"pinned\"}" Jan 17 12:19:32.203860 containerd[1451]: time="2025-01-17T12:19:32.203798223Z" level=info msg="stop pulling image registry.k8s.io/pause:3.8: active requests=0, bytes read=0" Jan 17 12:19:32.206931 containerd[1451]: time="2025-01-17T12:19:32.206861174Z" level=info msg="ImageCreate event 
name:\"registry.k8s.io/pause@sha256:9001185023633d17a2f98ff69b6ff2615b8ea02a825adffa40422f51dfdcde9d\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"} labels:{key:\"io.cri-containerd.pinned\" value:\"pinned\"}" Jan 17 12:19:32.209504 containerd[1451]: time="2025-01-17T12:19:32.209433882Z" level=info msg="Pulled image \"registry.k8s.io/pause:3.8\" with image id \"sha256:4873874c08efc72e9729683a83ffbb7502ee729e9a5ac097723806ea7fa13517\", repo tag \"registry.k8s.io/pause:3.8\", repo digest \"registry.k8s.io/pause@sha256:9001185023633d17a2f98ff69b6ff2615b8ea02a825adffa40422f51dfdcde9d\", size \"311286\" in 582.707646ms" Jan 17 12:19:32.211493 containerd[1451]: time="2025-01-17T12:19:32.210525535Z" level=info msg="Pulled image \"registry.k8s.io/pause:3.8\" with image id \"sha256:4873874c08efc72e9729683a83ffbb7502ee729e9a5ac097723806ea7fa13517\", repo tag \"registry.k8s.io/pause:3.8\", repo digest \"registry.k8s.io/pause@sha256:9001185023633d17a2f98ff69b6ff2615b8ea02a825adffa40422f51dfdcde9d\", size \"311286\" in 591.789825ms" Jan 17 12:19:32.213714 containerd[1451]: time="2025-01-17T12:19:32.213649180Z" level=info msg="Pulled image \"registry.k8s.io/pause:3.8\" with image id \"sha256:4873874c08efc72e9729683a83ffbb7502ee729e9a5ac097723806ea7fa13517\", repo tag \"registry.k8s.io/pause:3.8\", repo digest \"registry.k8s.io/pause@sha256:9001185023633d17a2f98ff69b6ff2615b8ea02a825adffa40422f51dfdcde9d\", size \"311286\" in 604.243811ms" Jan 17 12:19:32.262783 kubelet[1766]: E0117 12:19:32.262700 1766 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Jan 17 12:19:32.413310 containerd[1451]: time="2025-01-17T12:19:32.412690134Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." 
runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Jan 17 12:19:32.413310 containerd[1451]: time="2025-01-17T12:19:32.412835926Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Jan 17 12:19:32.413310 containerd[1451]: time="2025-01-17T12:19:32.412855880Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Jan 17 12:19:32.417892 containerd[1451]: time="2025-01-17T12:19:32.415857046Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Jan 17 12:19:32.452492 containerd[1451]: time="2025-01-17T12:19:32.451834825Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Jan 17 12:19:32.452492 containerd[1451]: time="2025-01-17T12:19:32.451940460Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Jan 17 12:19:32.452492 containerd[1451]: time="2025-01-17T12:19:32.451960852Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Jan 17 12:19:32.452492 containerd[1451]: time="2025-01-17T12:19:32.452088063Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Jan 17 12:19:32.458330 containerd[1451]: time="2025-01-17T12:19:32.457570036Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Jan 17 12:19:32.458330 containerd[1451]: time="2025-01-17T12:19:32.457663250Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." 
runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Jan 17 12:19:32.458330 containerd[1451]: time="2025-01-17T12:19:32.457701287Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Jan 17 12:19:32.462295 containerd[1451]: time="2025-01-17T12:19:32.458127588Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Jan 17 12:19:32.566258 systemd[1]: run-containerd-runc-k8s.io-c483e7cb6dabd2f04bb5f3483e905807f58c49a7f9c8d3e779f5a6783efd72ba-runc.X6jgLC.mount: Deactivated successfully. Jan 17 12:19:32.578075 systemd[1]: Started cri-containerd-c483e7cb6dabd2f04bb5f3483e905807f58c49a7f9c8d3e779f5a6783efd72ba.scope - libcontainer container c483e7cb6dabd2f04bb5f3483e905807f58c49a7f9c8d3e779f5a6783efd72ba. Jan 17 12:19:32.598071 systemd[1]: Started cri-containerd-ac77b38b6a73002033fc291e940ff9f524741a4aa09f16f9dea7d0ffe8afb426.scope - libcontainer container ac77b38b6a73002033fc291e940ff9f524741a4aa09f16f9dea7d0ffe8afb426. Jan 17 12:19:32.610097 systemd[1]: Started cri-containerd-4b4aa05fea1b656e041fbde59c14b4d087f1cc4cacf8fd0911402dfdfe9f29a8.scope - libcontainer container 4b4aa05fea1b656e041fbde59c14b4d087f1cc4cacf8fd0911402dfdfe9f29a8. 
Jan 17 12:19:32.675901 containerd[1451]: time="2025-01-17T12:19:32.675842606Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-node-f8q2m,Uid:8c83ed62-01c6-4892-80b1-9740c1eeacaf,Namespace:calico-system,Attempt:0,} returns sandbox id \"c483e7cb6dabd2f04bb5f3483e905807f58c49a7f9c8d3e779f5a6783efd72ba\""
Jan 17 12:19:32.680245 kubelet[1766]: E0117 12:19:32.679919 1766 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 67.207.67.2 67.207.67.3 67.207.67.2"
Jan 17 12:19:32.685485 containerd[1451]: time="2025-01-17T12:19:32.684118096Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/pod2daemon-flexvol:v3.29.1\""
Jan 17 12:19:32.694223 containerd[1451]: time="2025-01-17T12:19:32.694136348Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-proxy-rfjfq,Uid:d564e16b-1f67-4184-aa43-129a3edbd123,Namespace:kube-system,Attempt:0,} returns sandbox id \"ac77b38b6a73002033fc291e940ff9f524741a4aa09f16f9dea7d0ffe8afb426\""
Jan 17 12:19:32.696873 kubelet[1766]: E0117 12:19:32.696712 1766 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 67.207.67.2 67.207.67.3 67.207.67.2"
Jan 17 12:19:32.721032 containerd[1451]: time="2025-01-17T12:19:32.720564249Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-typha-68d6dc4656-r9bhj,Uid:cdebc0ff-29de-46fd-9d22-55ec1d869990,Namespace:calico-system,Attempt:0,} returns sandbox id \"4b4aa05fea1b656e041fbde59c14b4d087f1cc4cacf8fd0911402dfdfe9f29a8\""
Jan 17 12:19:32.722812 kubelet[1766]: E0117 12:19:32.722730 1766 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 67.207.67.2 67.207.67.3 67.207.67.2"
Jan 17 12:19:33.263585 kubelet[1766]: E0117 12:19:33.263477 1766 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests"
Jan 17 12:19:33.410635 kubelet[1766]: E0117 12:19:33.410569 1766 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" pod="calico-system/csi-node-driver-s7bwc" podUID="d94ec394-9eb7-4930-b60e-267badfa15a7"
Jan 17 12:19:34.259006 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount1987903601.mount: Deactivated successfully.
Jan 17 12:19:34.264029 kubelet[1766]: E0117 12:19:34.263918 1766 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests"
Jan 17 12:19:34.486805 containerd[1451]: time="2025-01-17T12:19:34.486690004Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/pod2daemon-flexvol:v3.29.1\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Jan 17 12:19:34.487995 containerd[1451]: time="2025-01-17T12:19:34.487910121Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/pod2daemon-flexvol:v3.29.1: active requests=0, bytes read=6855343"
Jan 17 12:19:34.488972 containerd[1451]: time="2025-01-17T12:19:34.488889487Z" level=info msg="ImageCreate event name:\"sha256:2b7452b763ec8833ca0386ada5fd066e552a9b3b02b8538a5e34cc3d6d3840a6\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Jan 17 12:19:34.492017 containerd[1451]: time="2025-01-17T12:19:34.491946802Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/pod2daemon-flexvol@sha256:a63f8b4ff531912d12d143664eb263fdbc6cd7b3ff4aa777dfb6e318a090462c\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Jan 17 12:19:34.494198 containerd[1451]: time="2025-01-17T12:19:34.493365234Z" level=info msg="Pulled image \"ghcr.io/flatcar/calico/pod2daemon-flexvol:v3.29.1\" with image id \"sha256:2b7452b763ec8833ca0386ada5fd066e552a9b3b02b8538a5e34cc3d6d3840a6\", repo tag \"ghcr.io/flatcar/calico/pod2daemon-flexvol:v3.29.1\", repo digest \"ghcr.io/flatcar/calico/pod2daemon-flexvol@sha256:a63f8b4ff531912d12d143664eb263fdbc6cd7b3ff4aa777dfb6e318a090462c\", size \"6855165\" in 1.809192532s"
Jan 17 12:19:34.494198 containerd[1451]: time="2025-01-17T12:19:34.493432076Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/pod2daemon-flexvol:v3.29.1\" returns image reference \"sha256:2b7452b763ec8833ca0386ada5fd066e552a9b3b02b8538a5e34cc3d6d3840a6\""
Jan 17 12:19:34.496737 containerd[1451]: time="2025-01-17T12:19:34.496476660Z" level=info msg="PullImage \"registry.k8s.io/kube-proxy:v1.31.5\""
Jan 17 12:19:34.498790 containerd[1451]: time="2025-01-17T12:19:34.498729171Z" level=info msg="CreateContainer within sandbox \"c483e7cb6dabd2f04bb5f3483e905807f58c49a7f9c8d3e779f5a6783efd72ba\" for container &ContainerMetadata{Name:flexvol-driver,Attempt:0,}"
Jan 17 12:19:34.521007 containerd[1451]: time="2025-01-17T12:19:34.520828029Z" level=info msg="CreateContainer within sandbox \"c483e7cb6dabd2f04bb5f3483e905807f58c49a7f9c8d3e779f5a6783efd72ba\" for &ContainerMetadata{Name:flexvol-driver,Attempt:0,} returns container id \"1116508b97ff08909de154d61e8473256028e9410315e0adb5ac8690ebfbe926\""
Jan 17 12:19:34.522810 containerd[1451]: time="2025-01-17T12:19:34.522379703Z" level=info msg="StartContainer for \"1116508b97ff08909de154d61e8473256028e9410315e0adb5ac8690ebfbe926\""
Jan 17 12:19:34.586241 systemd[1]: Started cri-containerd-1116508b97ff08909de154d61e8473256028e9410315e0adb5ac8690ebfbe926.scope - libcontainer container 1116508b97ff08909de154d61e8473256028e9410315e0adb5ac8690ebfbe926.
Jan 17 12:19:34.630390 containerd[1451]: time="2025-01-17T12:19:34.630173779Z" level=info msg="StartContainer for \"1116508b97ff08909de154d61e8473256028e9410315e0adb5ac8690ebfbe926\" returns successfully"
Jan 17 12:19:34.651420 systemd[1]: cri-containerd-1116508b97ff08909de154d61e8473256028e9410315e0adb5ac8690ebfbe926.scope: Deactivated successfully.
Jan 17 12:19:34.690842 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-1116508b97ff08909de154d61e8473256028e9410315e0adb5ac8690ebfbe926-rootfs.mount: Deactivated successfully.
Jan 17 12:19:34.723666 containerd[1451]: time="2025-01-17T12:19:34.723512835Z" level=info msg="shim disconnected" id=1116508b97ff08909de154d61e8473256028e9410315e0adb5ac8690ebfbe926 namespace=k8s.io
Jan 17 12:19:34.723666 containerd[1451]: time="2025-01-17T12:19:34.723601090Z" level=warning msg="cleaning up after shim disconnected" id=1116508b97ff08909de154d61e8473256028e9410315e0adb5ac8690ebfbe926 namespace=k8s.io
Jan 17 12:19:34.723666 containerd[1451]: time="2025-01-17T12:19:34.723610410Z" level=info msg="cleaning up dead shim" namespace=k8s.io
Jan 17 12:19:35.264157 kubelet[1766]: E0117 12:19:35.264113 1766 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests"
Jan 17 12:19:35.409128 kubelet[1766]: E0117 12:19:35.408526 1766 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" pod="calico-system/csi-node-driver-s7bwc" podUID="d94ec394-9eb7-4930-b60e-267badfa15a7"
Jan 17 12:19:35.467080 kubelet[1766]: E0117 12:19:35.466552 1766 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 67.207.67.2 67.207.67.3 67.207.67.2"
Jan 17 12:19:35.750857 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount3180671226.mount: Deactivated successfully.
Jan 17 12:19:36.265415 kubelet[1766]: E0117 12:19:36.265329 1766 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests"
Jan 17 12:19:36.506466 containerd[1451]: time="2025-01-17T12:19:36.506290715Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-proxy:v1.31.5\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Jan 17 12:19:36.507952 containerd[1451]: time="2025-01-17T12:19:36.507884062Z" level=info msg="stop pulling image registry.k8s.io/kube-proxy:v1.31.5: active requests=0, bytes read=30231128"
Jan 17 12:19:36.508787 containerd[1451]: time="2025-01-17T12:19:36.508715839Z" level=info msg="ImageCreate event name:\"sha256:34018aef09a62f8b40bdd1d2e1bf6c48f359cab492d51059a09e20745ab02ce2\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Jan 17 12:19:36.511151 containerd[1451]: time="2025-01-17T12:19:36.511075203Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-proxy@sha256:c00685cc45c1fb539c5bbd8d24d2577f96e9399efac1670f688f654b30f8c64c\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Jan 17 12:19:36.512254 containerd[1451]: time="2025-01-17T12:19:36.511854827Z" level=info msg="Pulled image \"registry.k8s.io/kube-proxy:v1.31.5\" with image id \"sha256:34018aef09a62f8b40bdd1d2e1bf6c48f359cab492d51059a09e20745ab02ce2\", repo tag \"registry.k8s.io/kube-proxy:v1.31.5\", repo digest \"registry.k8s.io/kube-proxy@sha256:c00685cc45c1fb539c5bbd8d24d2577f96e9399efac1670f688f654b30f8c64c\", size \"30230147\" in 2.015327669s"
Jan 17 12:19:36.512254 containerd[1451]: time="2025-01-17T12:19:36.511896263Z" level=info msg="PullImage \"registry.k8s.io/kube-proxy:v1.31.5\" returns image reference \"sha256:34018aef09a62f8b40bdd1d2e1bf6c48f359cab492d51059a09e20745ab02ce2\""
Jan 17 12:19:36.514866 containerd[1451]: time="2025-01-17T12:19:36.514252780Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/typha:v3.29.1\""
Jan 17 12:19:36.515655 containerd[1451]: time="2025-01-17T12:19:36.515308757Z" level=info msg="CreateContainer within sandbox \"ac77b38b6a73002033fc291e940ff9f524741a4aa09f16f9dea7d0ffe8afb426\" for container &ContainerMetadata{Name:kube-proxy,Attempt:0,}"
Jan 17 12:19:36.541601 containerd[1451]: time="2025-01-17T12:19:36.541545313Z" level=info msg="CreateContainer within sandbox \"ac77b38b6a73002033fc291e940ff9f524741a4aa09f16f9dea7d0ffe8afb426\" for &ContainerMetadata{Name:kube-proxy,Attempt:0,} returns container id \"9153cdecb1852e4ba2145e5195ff18dd0846b57ea910332880abf776ed92ca96\""
Jan 17 12:19:36.542639 containerd[1451]: time="2025-01-17T12:19:36.542499221Z" level=info msg="StartContainer for \"9153cdecb1852e4ba2145e5195ff18dd0846b57ea910332880abf776ed92ca96\""
Jan 17 12:19:36.583057 systemd[1]: Started cri-containerd-9153cdecb1852e4ba2145e5195ff18dd0846b57ea910332880abf776ed92ca96.scope - libcontainer container 9153cdecb1852e4ba2145e5195ff18dd0846b57ea910332880abf776ed92ca96.
Jan 17 12:19:36.631752 containerd[1451]: time="2025-01-17T12:19:36.631674402Z" level=info msg="StartContainer for \"9153cdecb1852e4ba2145e5195ff18dd0846b57ea910332880abf776ed92ca96\" returns successfully"
Jan 17 12:19:37.266197 kubelet[1766]: E0117 12:19:37.266104 1766 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests"
Jan 17 12:19:37.409162 kubelet[1766]: E0117 12:19:37.408721 1766 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" pod="calico-system/csi-node-driver-s7bwc" podUID="d94ec394-9eb7-4930-b60e-267badfa15a7"
Jan 17 12:19:37.472112 kubelet[1766]: E0117 12:19:37.472072 1766 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 67.207.67.2 67.207.67.3 67.207.67.2"
Jan 17 12:19:37.750179 systemd-resolved[1321]: Using degraded feature set UDP instead of UDP+EDNS0 for DNS server 67.207.67.3.
Jan 17 12:19:38.267407 kubelet[1766]: E0117 12:19:38.267116 1766 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests"
Jan 17 12:19:38.476358 kubelet[1766]: E0117 12:19:38.475985 1766 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 67.207.67.2 67.207.67.3 67.207.67.2"
Jan 17 12:19:38.901091 containerd[1451]: time="2025-01-17T12:19:38.899976437Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/typha:v3.29.1\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Jan 17 12:19:38.901091 containerd[1451]: time="2025-01-17T12:19:38.901019934Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/typha:v3.29.1: active requests=0, bytes read=29850141"
Jan 17 12:19:38.901881 containerd[1451]: time="2025-01-17T12:19:38.901833741Z" level=info msg="ImageCreate event name:\"sha256:4cb3738506f5a9c530033d1e24fd6b9ec618518a2ec8b012ded33572be06ab44\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Jan 17 12:19:38.907445 containerd[1451]: time="2025-01-17T12:19:38.907351582Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/typha@sha256:768a194e1115c73bcbf35edb7afd18a63e16e08d940c79993565b6a3cca2da7c\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Jan 17 12:19:38.909636 containerd[1451]: time="2025-01-17T12:19:38.909535220Z" level=info msg="Pulled image \"ghcr.io/flatcar/calico/typha:v3.29.1\" with image id \"sha256:4cb3738506f5a9c530033d1e24fd6b9ec618518a2ec8b012ded33572be06ab44\", repo tag \"ghcr.io/flatcar/calico/typha:v3.29.1\", repo digest \"ghcr.io/flatcar/calico/typha@sha256:768a194e1115c73bcbf35edb7afd18a63e16e08d940c79993565b6a3cca2da7c\", size \"31343217\" in 2.395229759s"
Jan 17 12:19:38.909888 containerd[1451]: time="2025-01-17T12:19:38.909859930Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/typha:v3.29.1\" returns image reference \"sha256:4cb3738506f5a9c530033d1e24fd6b9ec618518a2ec8b012ded33572be06ab44\""
Jan 17 12:19:38.914524 containerd[1451]: time="2025-01-17T12:19:38.914476570Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/cni:v3.29.1\""
Jan 17 12:19:38.932293 containerd[1451]: time="2025-01-17T12:19:38.932231173Z" level=info msg="CreateContainer within sandbox \"4b4aa05fea1b656e041fbde59c14b4d087f1cc4cacf8fd0911402dfdfe9f29a8\" for container &ContainerMetadata{Name:calico-typha,Attempt:0,}"
Jan 17 12:19:38.952166 containerd[1451]: time="2025-01-17T12:19:38.952067595Z" level=info msg="CreateContainer within sandbox \"4b4aa05fea1b656e041fbde59c14b4d087f1cc4cacf8fd0911402dfdfe9f29a8\" for &ContainerMetadata{Name:calico-typha,Attempt:0,} returns container id \"ffff4d07443c39b4873eb6bd5fcc961b423c875b3379b4cc0cc2b7f2362fc5ed\""
Jan 17 12:19:38.955140 containerd[1451]: time="2025-01-17T12:19:38.953651767Z" level=info msg="StartContainer for \"ffff4d07443c39b4873eb6bd5fcc961b423c875b3379b4cc0cc2b7f2362fc5ed\""
Jan 17 12:19:39.013225 systemd[1]: Started cri-containerd-ffff4d07443c39b4873eb6bd5fcc961b423c875b3379b4cc0cc2b7f2362fc5ed.scope - libcontainer container ffff4d07443c39b4873eb6bd5fcc961b423c875b3379b4cc0cc2b7f2362fc5ed.
Jan 17 12:19:39.085292 containerd[1451]: time="2025-01-17T12:19:39.085080734Z" level=info msg="StartContainer for \"ffff4d07443c39b4873eb6bd5fcc961b423c875b3379b4cc0cc2b7f2362fc5ed\" returns successfully"
Jan 17 12:19:39.268527 kubelet[1766]: E0117 12:19:39.268203 1766 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests"
Jan 17 12:19:39.409544 kubelet[1766]: E0117 12:19:39.408825 1766 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" pod="calico-system/csi-node-driver-s7bwc" podUID="d94ec394-9eb7-4930-b60e-267badfa15a7"
Jan 17 12:19:39.480055 kubelet[1766]: E0117 12:19:39.479992 1766 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 67.207.67.2 67.207.67.3 67.207.67.2"
Jan 17 12:19:39.528976 kubelet[1766]: I0117 12:19:39.528238 1766 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/kube-proxy-rfjfq" podStartSLOduration=6.714589793 podStartE2EDuration="10.528209462s" podCreationTimestamp="2025-01-17 12:19:29 +0000 UTC" firstStartedPulling="2025-01-17 12:19:32.699706207 +0000 UTC m=+4.294374834" lastFinishedPulling="2025-01-17 12:19:36.513325887 +0000 UTC m=+8.107994503" observedRunningTime="2025-01-17 12:19:37.516719783 +0000 UTC m=+9.111388421" watchObservedRunningTime="2025-01-17 12:19:39.528209462 +0000 UTC m=+11.122878107"
Jan 17 12:19:40.268972 kubelet[1766]: E0117 12:19:40.268906 1766 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests"
Jan 17 12:19:40.481790 kubelet[1766]: I0117 12:19:40.481725 1766 prober_manager.go:312] "Failed to trigger a manual run" probe="Readiness"
Jan 17 12:19:40.483453 kubelet[1766]: E0117 12:19:40.482893 1766 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 67.207.67.2 67.207.67.3 67.207.67.2"
Jan 17 12:19:40.822098 systemd-resolved[1321]: Using degraded feature set UDP instead of UDP+EDNS0 for DNS server 67.207.67.2.
Jan 17 12:19:41.269887 kubelet[1766]: E0117 12:19:41.269480 1766 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests"
Jan 17 12:19:41.409321 kubelet[1766]: E0117 12:19:41.408955 1766 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" pod="calico-system/csi-node-driver-s7bwc" podUID="d94ec394-9eb7-4930-b60e-267badfa15a7"
Jan 17 12:19:42.270529 kubelet[1766]: E0117 12:19:42.270380 1766 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests"
Jan 17 12:19:43.203395 containerd[1451]: time="2025-01-17T12:19:43.203282079Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/cni:v3.29.1\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Jan 17 12:19:43.204869 containerd[1451]: time="2025-01-17T12:19:43.204790416Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/cni:v3.29.1: active requests=0, bytes read=96154154"
Jan 17 12:19:43.206050 containerd[1451]: time="2025-01-17T12:19:43.205970131Z" level=info msg="ImageCreate event name:\"sha256:7dd6ea186aba0d7a1791a79d426fe854527ca95192b26bbd19e8baf8373f7d0e\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Jan 17 12:19:43.208742 containerd[1451]: time="2025-01-17T12:19:43.208650363Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/cni@sha256:21e759d51c90dfb34fc1397dc180dd3a3fb564c2b0580d2f61ffe108f2a3c94b\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Jan 17 12:19:43.210433 containerd[1451]: time="2025-01-17T12:19:43.210365733Z" level=info msg="Pulled image \"ghcr.io/flatcar/calico/cni:v3.29.1\" with image id \"sha256:7dd6ea186aba0d7a1791a79d426fe854527ca95192b26bbd19e8baf8373f7d0e\", repo tag \"ghcr.io/flatcar/calico/cni:v3.29.1\", repo digest \"ghcr.io/flatcar/calico/cni@sha256:21e759d51c90dfb34fc1397dc180dd3a3fb564c2b0580d2f61ffe108f2a3c94b\", size \"97647238\" in 4.295596741s"
Jan 17 12:19:43.210433 containerd[1451]: time="2025-01-17T12:19:43.210420839Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/cni:v3.29.1\" returns image reference \"sha256:7dd6ea186aba0d7a1791a79d426fe854527ca95192b26bbd19e8baf8373f7d0e\""
Jan 17 12:19:43.213259 containerd[1451]: time="2025-01-17T12:19:43.213204957Z" level=info msg="CreateContainer within sandbox \"c483e7cb6dabd2f04bb5f3483e905807f58c49a7f9c8d3e779f5a6783efd72ba\" for container &ContainerMetadata{Name:install-cni,Attempt:0,}"
Jan 17 12:19:43.236285 containerd[1451]: time="2025-01-17T12:19:43.236146148Z" level=info msg="CreateContainer within sandbox \"c483e7cb6dabd2f04bb5f3483e905807f58c49a7f9c8d3e779f5a6783efd72ba\" for &ContainerMetadata{Name:install-cni,Attempt:0,} returns container id \"c3c6a0d9122ebfff141812a95957c21f6b1dfcea69a336a93c6a658f338866a4\""
Jan 17 12:19:43.236958 containerd[1451]: time="2025-01-17T12:19:43.236818143Z" level=info msg="StartContainer for \"c3c6a0d9122ebfff141812a95957c21f6b1dfcea69a336a93c6a658f338866a4\""
Jan 17 12:19:43.271538 kubelet[1766]: E0117 12:19:43.271451 1766 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests"
Jan 17 12:19:43.293141 systemd[1]: Started cri-containerd-c3c6a0d9122ebfff141812a95957c21f6b1dfcea69a336a93c6a658f338866a4.scope - libcontainer container c3c6a0d9122ebfff141812a95957c21f6b1dfcea69a336a93c6a658f338866a4.
Jan 17 12:19:43.344412 containerd[1451]: time="2025-01-17T12:19:43.344341611Z" level=info msg="StartContainer for \"c3c6a0d9122ebfff141812a95957c21f6b1dfcea69a336a93c6a658f338866a4\" returns successfully"
Jan 17 12:19:43.409213 kubelet[1766]: E0117 12:19:43.409129 1766 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" pod="calico-system/csi-node-driver-s7bwc" podUID="d94ec394-9eb7-4930-b60e-267badfa15a7"
Jan 17 12:19:43.499175 kubelet[1766]: E0117 12:19:43.497233 1766 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 67.207.67.2 67.207.67.3 67.207.67.2"
Jan 17 12:19:43.533074 kubelet[1766]: I0117 12:19:43.532438 1766 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="calico-system/calico-typha-68d6dc4656-r9bhj" podStartSLOduration=12.344024683 podStartE2EDuration="18.532419255s" podCreationTimestamp="2025-01-17 12:19:25 +0000 UTC" firstStartedPulling="2025-01-17 12:19:32.723511203 +0000 UTC m=+4.318179808" lastFinishedPulling="2025-01-17 12:19:38.911905776 +0000 UTC m=+10.506574380" observedRunningTime="2025-01-17 12:19:39.533336655 +0000 UTC m=+11.128005295" watchObservedRunningTime="2025-01-17 12:19:43.532419255 +0000 UTC m=+15.127087885"
Jan 17 12:19:44.267807 containerd[1451]: time="2025-01-17T12:19:44.267715375Z" level=error msg="failed to reload cni configuration after receiving fs change event(WRITE \"/etc/cni/net.d/calico-kubeconfig\")" error="cni config load failed: no network config found in /etc/cni/net.d: cni plugin not initialized: failed to load cni config"
Jan 17 12:19:44.270274 systemd[1]: cri-containerd-c3c6a0d9122ebfff141812a95957c21f6b1dfcea69a336a93c6a658f338866a4.scope: Deactivated successfully.
Jan 17 12:19:44.272523 kubelet[1766]: E0117 12:19:44.272307 1766 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests"
Jan 17 12:19:44.308288 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-c3c6a0d9122ebfff141812a95957c21f6b1dfcea69a336a93c6a658f338866a4-rootfs.mount: Deactivated successfully.
Jan 17 12:19:44.369939 kubelet[1766]: I0117 12:19:44.369647 1766 kubelet_node_status.go:488] "Fast updating node status as it just became ready"
Jan 17 12:19:44.416756 containerd[1451]: time="2025-01-17T12:19:44.416640151Z" level=info msg="shim disconnected" id=c3c6a0d9122ebfff141812a95957c21f6b1dfcea69a336a93c6a658f338866a4 namespace=k8s.io
Jan 17 12:19:44.416756 containerd[1451]: time="2025-01-17T12:19:44.416729912Z" level=warning msg="cleaning up after shim disconnected" id=c3c6a0d9122ebfff141812a95957c21f6b1dfcea69a336a93c6a658f338866a4 namespace=k8s.io
Jan 17 12:19:44.416756 containerd[1451]: time="2025-01-17T12:19:44.416746639Z" level=info msg="cleaning up dead shim" namespace=k8s.io
Jan 17 12:19:44.502061 kubelet[1766]: E0117 12:19:44.502002 1766 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 67.207.67.2 67.207.67.3 67.207.67.2"
Jan 17 12:19:44.503571 containerd[1451]: time="2025-01-17T12:19:44.503107711Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/node:v3.29.1\""
Jan 17 12:19:44.505450 systemd-resolved[1321]: Using degraded feature set TCP instead of UDP for DNS server 67.207.67.2.
Jan 17 12:19:45.272614 kubelet[1766]: E0117 12:19:45.272515 1766 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests"
Jan 17 12:19:45.419691 systemd[1]: Created slice kubepods-besteffort-podd94ec394_9eb7_4930_b60e_267badfa15a7.slice - libcontainer container kubepods-besteffort-podd94ec394_9eb7_4930_b60e_267badfa15a7.slice.
Jan 17 12:19:45.424826 containerd[1451]: time="2025-01-17T12:19:45.424216461Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:csi-node-driver-s7bwc,Uid:d94ec394-9eb7-4930-b60e-267badfa15a7,Namespace:calico-system,Attempt:0,}"
Jan 17 12:19:45.522067 containerd[1451]: time="2025-01-17T12:19:45.521987112Z" level=error msg="Failed to destroy network for sandbox \"80767f3c6ffa8453fc21b353d588ce9dd6bb05135810695a509e85aa6fab76c7\"" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/"
Jan 17 12:19:45.524288 containerd[1451]: time="2025-01-17T12:19:45.524159580Z" level=error msg="encountered an error cleaning up failed sandbox \"80767f3c6ffa8453fc21b353d588ce9dd6bb05135810695a509e85aa6fab76c7\", marking sandbox state as SANDBOX_UNKNOWN" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/"
Jan 17 12:19:45.524288 containerd[1451]: time="2025-01-17T12:19:45.524240985Z" level=error msg="RunPodSandbox for &PodSandboxMetadata{Name:csi-node-driver-s7bwc,Uid:d94ec394-9eb7-4930-b60e-267badfa15a7,Namespace:calico-system,Attempt:0,} failed, error" error="failed to setup network for sandbox \"80767f3c6ffa8453fc21b353d588ce9dd6bb05135810695a509e85aa6fab76c7\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/"
Jan 17 12:19:45.526010 kubelet[1766]: E0117 12:19:45.524520 1766 log.go:32] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"80767f3c6ffa8453fc21b353d588ce9dd6bb05135810695a509e85aa6fab76c7\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/"
Jan 17 12:19:45.526010 kubelet[1766]: E0117 12:19:45.524594 1766 kuberuntime_sandbox.go:72] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"80767f3c6ffa8453fc21b353d588ce9dd6bb05135810695a509e85aa6fab76c7\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-system/csi-node-driver-s7bwc"
Jan 17 12:19:45.526010 kubelet[1766]: E0117 12:19:45.524618 1766 kuberuntime_manager.go:1168] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"80767f3c6ffa8453fc21b353d588ce9dd6bb05135810695a509e85aa6fab76c7\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-system/csi-node-driver-s7bwc"
Jan 17 12:19:45.525573 systemd[1]: run-containerd-io.containerd.grpc.v1.cri-sandboxes-80767f3c6ffa8453fc21b353d588ce9dd6bb05135810695a509e85aa6fab76c7-shm.mount: Deactivated successfully.
Jan 17 12:19:45.526327 kubelet[1766]: E0117 12:19:45.524667 1766 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"csi-node-driver-s7bwc_calico-system(d94ec394-9eb7-4930-b60e-267badfa15a7)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"csi-node-driver-s7bwc_calico-system(d94ec394-9eb7-4930-b60e-267badfa15a7)\\\": rpc error: code = Unknown desc = failed to setup network for sandbox \\\"80767f3c6ffa8453fc21b353d588ce9dd6bb05135810695a509e85aa6fab76c7\\\": plugin type=\\\"calico\\\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="calico-system/csi-node-driver-s7bwc" podUID="d94ec394-9eb7-4930-b60e-267badfa15a7"
Jan 17 12:19:46.273488 kubelet[1766]: E0117 12:19:46.273399 1766 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests"
Jan 17 12:19:46.506575 kubelet[1766]: I0117 12:19:46.506508 1766 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="80767f3c6ffa8453fc21b353d588ce9dd6bb05135810695a509e85aa6fab76c7"
Jan 17 12:19:46.507913 containerd[1451]: time="2025-01-17T12:19:46.507412129Z" level=info msg="StopPodSandbox for \"80767f3c6ffa8453fc21b353d588ce9dd6bb05135810695a509e85aa6fab76c7\""
Jan 17 12:19:46.507913 containerd[1451]: time="2025-01-17T12:19:46.507627211Z" level=info msg="Ensure that sandbox 80767f3c6ffa8453fc21b353d588ce9dd6bb05135810695a509e85aa6fab76c7 in task-service has been cleanup successfully"
Jan 17 12:19:46.565323 containerd[1451]: time="2025-01-17T12:19:46.564748491Z" level=error msg="StopPodSandbox for \"80767f3c6ffa8453fc21b353d588ce9dd6bb05135810695a509e85aa6fab76c7\" failed" error="failed to destroy network for sandbox \"80767f3c6ffa8453fc21b353d588ce9dd6bb05135810695a509e85aa6fab76c7\": plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/"
Jan 17 12:19:46.565458 kubelet[1766]: E0117 12:19:46.565033 1766 log.go:32] "StopPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to destroy network for sandbox \"80767f3c6ffa8453fc21b353d588ce9dd6bb05135810695a509e85aa6fab76c7\": plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" podSandboxID="80767f3c6ffa8453fc21b353d588ce9dd6bb05135810695a509e85aa6fab76c7"
Jan 17 12:19:46.565458 kubelet[1766]: E0117 12:19:46.565092 1766 kuberuntime_manager.go:1477] "Failed to stop sandbox" podSandboxID={"Type":"containerd","ID":"80767f3c6ffa8453fc21b353d588ce9dd6bb05135810695a509e85aa6fab76c7"}
Jan 17 12:19:46.565458 kubelet[1766]: E0117 12:19:46.565164 1766 kuberuntime_manager.go:1077] "killPodWithSyncResult failed" err="failed to \"KillPodSandbox\" for \"d94ec394-9eb7-4930-b60e-267badfa15a7\" with KillPodSandboxError: \"rpc error: code = Unknown desc = failed to destroy network for sandbox \\\"80767f3c6ffa8453fc21b353d588ce9dd6bb05135810695a509e85aa6fab76c7\\\": plugin type=\\\"calico\\\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\""
Jan 17 12:19:46.565458 kubelet[1766]: E0117 12:19:46.565188 1766 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"KillPodSandbox\" for \"d94ec394-9eb7-4930-b60e-267badfa15a7\" with KillPodSandboxError: \"rpc error: code = Unknown desc = failed to destroy network for sandbox \\\"80767f3c6ffa8453fc21b353d588ce9dd6bb05135810695a509e85aa6fab76c7\\\": plugin type=\\\"calico\\\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="calico-system/csi-node-driver-s7bwc" podUID="d94ec394-9eb7-4930-b60e-267badfa15a7"
Jan 17 12:19:47.273632 kubelet[1766]: E0117 12:19:47.273574 1766 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests"
Jan 17 12:19:48.274152 kubelet[1766]: E0117 12:19:48.274017 1766 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests"
Jan 17 12:19:49.258135 kubelet[1766]: E0117 12:19:49.258078 1766 file.go:104] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests"
Jan 17 12:19:49.274888 kubelet[1766]: E0117 12:19:49.274819 1766 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests"
Jan 17 12:19:49.894563 systemd[1]: Created slice kubepods-besteffort-podfa40673b_0360_409f_9d77_7f3e3e6b869d.slice - libcontainer container kubepods-besteffort-podfa40673b_0360_409f_9d77_7f3e3e6b869d.slice.
Jan 17 12:19:49.980778 kubelet[1766]: I0117 12:19:49.980696 1766 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-kbljh\" (UniqueName: \"kubernetes.io/projected/fa40673b-0360-409f-9d77-7f3e3e6b869d-kube-api-access-kbljh\") pod \"nginx-deployment-8587fbcb89-d5dcn\" (UID: \"fa40673b-0360-409f-9d77-7f3e3e6b869d\") " pod="default/nginx-deployment-8587fbcb89-d5dcn" Jan 17 12:19:50.000310 kubelet[1766]: I0117 12:19:50.000264 1766 prober_manager.go:312] "Failed to trigger a manual run" probe="Readiness" Jan 17 12:19:50.002870 kubelet[1766]: E0117 12:19:50.002742 1766 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 67.207.67.2 67.207.67.3 67.207.67.2" Jan 17 12:19:50.204612 containerd[1451]: time="2025-01-17T12:19:50.203180307Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:nginx-deployment-8587fbcb89-d5dcn,Uid:fa40673b-0360-409f-9d77-7f3e3e6b869d,Namespace:default,Attempt:0,}" Jan 17 12:19:50.275845 kubelet[1766]: E0117 12:19:50.275156 1766 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Jan 17 12:19:50.386885 containerd[1451]: time="2025-01-17T12:19:50.384421997Z" level=error msg="Failed to destroy network for sandbox \"94fa8b2c94d326883c80470d61d7cc7351dd805727938a68fd4b8e3926d32622\"" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jan 17 12:19:50.387431 containerd[1451]: time="2025-01-17T12:19:50.387219632Z" level=error msg="encountered an error cleaning up failed sandbox \"94fa8b2c94d326883c80470d61d7cc7351dd805727938a68fd4b8e3926d32622\", marking sandbox state as SANDBOX_UNKNOWN" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check 
that the calico/node container is running and has mounted /var/lib/calico/" Jan 17 12:19:50.387431 containerd[1451]: time="2025-01-17T12:19:50.387333140Z" level=error msg="RunPodSandbox for &PodSandboxMetadata{Name:nginx-deployment-8587fbcb89-d5dcn,Uid:fa40673b-0360-409f-9d77-7f3e3e6b869d,Namespace:default,Attempt:0,} failed, error" error="failed to setup network for sandbox \"94fa8b2c94d326883c80470d61d7cc7351dd805727938a68fd4b8e3926d32622\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jan 17 12:19:50.388155 kubelet[1766]: E0117 12:19:50.387712 1766 log.go:32] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"94fa8b2c94d326883c80470d61d7cc7351dd805727938a68fd4b8e3926d32622\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jan 17 12:19:50.389060 kubelet[1766]: E0117 12:19:50.389005 1766 kuberuntime_sandbox.go:72] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"94fa8b2c94d326883c80470d61d7cc7351dd805727938a68fd4b8e3926d32622\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="default/nginx-deployment-8587fbcb89-d5dcn" Jan 17 12:19:50.389143 kubelet[1766]: E0117 12:19:50.389061 1766 kuberuntime_manager.go:1168] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"94fa8b2c94d326883c80470d61d7cc7351dd805727938a68fd4b8e3926d32622\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted 
/var/lib/calico/" pod="default/nginx-deployment-8587fbcb89-d5dcn" Jan 17 12:19:50.389143 kubelet[1766]: E0117 12:19:50.389117 1766 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"nginx-deployment-8587fbcb89-d5dcn_default(fa40673b-0360-409f-9d77-7f3e3e6b869d)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"nginx-deployment-8587fbcb89-d5dcn_default(fa40673b-0360-409f-9d77-7f3e3e6b869d)\\\": rpc error: code = Unknown desc = failed to setup network for sandbox \\\"94fa8b2c94d326883c80470d61d7cc7351dd805727938a68fd4b8e3926d32622\\\": plugin type=\\\"calico\\\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="default/nginx-deployment-8587fbcb89-d5dcn" podUID="fa40673b-0360-409f-9d77-7f3e3e6b869d" Jan 17 12:19:50.390426 systemd[1]: run-containerd-io.containerd.grpc.v1.cri-sandboxes-94fa8b2c94d326883c80470d61d7cc7351dd805727938a68fd4b8e3926d32622-shm.mount: Deactivated successfully. 
Jan 17 12:19:50.523633 kubelet[1766]: E0117 12:19:50.522340 1766 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 67.207.67.2 67.207.67.3 67.207.67.2" Jan 17 12:19:50.523749 containerd[1451]: time="2025-01-17T12:19:50.522899619Z" level=info msg="StopPodSandbox for \"94fa8b2c94d326883c80470d61d7cc7351dd805727938a68fd4b8e3926d32622\"" Jan 17 12:19:50.523749 containerd[1451]: time="2025-01-17T12:19:50.523120199Z" level=info msg="Ensure that sandbox 94fa8b2c94d326883c80470d61d7cc7351dd805727938a68fd4b8e3926d32622 in task-service has been cleanup successfully" Jan 17 12:19:50.524437 kubelet[1766]: I0117 12:19:50.522430 1766 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="94fa8b2c94d326883c80470d61d7cc7351dd805727938a68fd4b8e3926d32622" Jan 17 12:19:50.572102 containerd[1451]: time="2025-01-17T12:19:50.572034801Z" level=error msg="StopPodSandbox for \"94fa8b2c94d326883c80470d61d7cc7351dd805727938a68fd4b8e3926d32622\" failed" error="failed to destroy network for sandbox \"94fa8b2c94d326883c80470d61d7cc7351dd805727938a68fd4b8e3926d32622\": plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jan 17 12:19:50.573224 kubelet[1766]: E0117 12:19:50.573056 1766 log.go:32] "StopPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to destroy network for sandbox \"94fa8b2c94d326883c80470d61d7cc7351dd805727938a68fd4b8e3926d32622\": plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" podSandboxID="94fa8b2c94d326883c80470d61d7cc7351dd805727938a68fd4b8e3926d32622" Jan 17 12:19:50.573224 kubelet[1766]: E0117 12:19:50.573119 1766 kuberuntime_manager.go:1477] "Failed to stop 
sandbox" podSandboxID={"Type":"containerd","ID":"94fa8b2c94d326883c80470d61d7cc7351dd805727938a68fd4b8e3926d32622"} Jan 17 12:19:50.573224 kubelet[1766]: E0117 12:19:50.573166 1766 kuberuntime_manager.go:1077] "killPodWithSyncResult failed" err="failed to \"KillPodSandbox\" for \"fa40673b-0360-409f-9d77-7f3e3e6b869d\" with KillPodSandboxError: \"rpc error: code = Unknown desc = failed to destroy network for sandbox \\\"94fa8b2c94d326883c80470d61d7cc7351dd805727938a68fd4b8e3926d32622\\\": plugin type=\\\"calico\\\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" Jan 17 12:19:50.573224 kubelet[1766]: E0117 12:19:50.573191 1766 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"KillPodSandbox\" for \"fa40673b-0360-409f-9d77-7f3e3e6b869d\" with KillPodSandboxError: \"rpc error: code = Unknown desc = failed to destroy network for sandbox \\\"94fa8b2c94d326883c80470d61d7cc7351dd805727938a68fd4b8e3926d32622\\\": plugin type=\\\"calico\\\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="default/nginx-deployment-8587fbcb89-d5dcn" podUID="fa40673b-0360-409f-9d77-7f3e3e6b869d" Jan 17 12:19:51.275714 kubelet[1766]: E0117 12:19:51.275637 1766 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Jan 17 12:19:51.323574 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount3282833236.mount: Deactivated successfully. 
Jan 17 12:19:51.452699 containerd[1451]: time="2025-01-17T12:19:51.451863482Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/node:v3.29.1\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jan 17 12:19:51.462611 containerd[1451]: time="2025-01-17T12:19:51.462487346Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/node:v3.29.1: active requests=0, bytes read=142742010" Jan 17 12:19:51.473355 containerd[1451]: time="2025-01-17T12:19:51.473045994Z" level=info msg="ImageCreate event name:\"sha256:feb26d4585d68e875d9bd9bd6c27ea9f2d5c9ed9ef70f8b8cb0ebb0559a1d664\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jan 17 12:19:51.478093 containerd[1451]: time="2025-01-17T12:19:51.477978389Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/node@sha256:99c3917516efe1f807a0cfdf2d14b628b7c5cc6bd8a9ee5a253154f31756bea1\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jan 17 12:19:51.480153 containerd[1451]: time="2025-01-17T12:19:51.479103462Z" level=info msg="Pulled image \"ghcr.io/flatcar/calico/node:v3.29.1\" with image id \"sha256:feb26d4585d68e875d9bd9bd6c27ea9f2d5c9ed9ef70f8b8cb0ebb0559a1d664\", repo tag \"ghcr.io/flatcar/calico/node:v3.29.1\", repo digest \"ghcr.io/flatcar/calico/node@sha256:99c3917516efe1f807a0cfdf2d14b628b7c5cc6bd8a9ee5a253154f31756bea1\", size \"142741872\" in 6.975883028s" Jan 17 12:19:51.480153 containerd[1451]: time="2025-01-17T12:19:51.479171453Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/node:v3.29.1\" returns image reference \"sha256:feb26d4585d68e875d9bd9bd6c27ea9f2d5c9ed9ef70f8b8cb0ebb0559a1d664\"" Jan 17 12:19:51.506722 containerd[1451]: time="2025-01-17T12:19:51.506662736Z" level=info msg="CreateContainer within sandbox \"c483e7cb6dabd2f04bb5f3483e905807f58c49a7f9c8d3e779f5a6783efd72ba\" for container &ContainerMetadata{Name:calico-node,Attempt:0,}" Jan 17 12:19:51.541177 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount2756709700.mount: 
Deactivated successfully. Jan 17 12:19:51.544283 containerd[1451]: time="2025-01-17T12:19:51.543001038Z" level=info msg="CreateContainer within sandbox \"c483e7cb6dabd2f04bb5f3483e905807f58c49a7f9c8d3e779f5a6783efd72ba\" for &ContainerMetadata{Name:calico-node,Attempt:0,} returns container id \"30f60043aa817f1959d62845d643c85ed5655d76a747d7e50eb0e4bab5387a98\"" Jan 17 12:19:51.544283 containerd[1451]: time="2025-01-17T12:19:51.544096555Z" level=info msg="StartContainer for \"30f60043aa817f1959d62845d643c85ed5655d76a747d7e50eb0e4bab5387a98\"" Jan 17 12:19:51.651081 systemd[1]: Started cri-containerd-30f60043aa817f1959d62845d643c85ed5655d76a747d7e50eb0e4bab5387a98.scope - libcontainer container 30f60043aa817f1959d62845d643c85ed5655d76a747d7e50eb0e4bab5387a98. Jan 17 12:19:51.705345 containerd[1451]: time="2025-01-17T12:19:51.705024396Z" level=info msg="StartContainer for \"30f60043aa817f1959d62845d643c85ed5655d76a747d7e50eb0e4bab5387a98\" returns successfully" Jan 17 12:19:51.830496 kernel: wireguard: WireGuard 1.0.0 loaded. See www.wireguard.com for information. Jan 17 12:19:51.831031 kernel: wireguard: Copyright (C) 2015-2019 Jason A. Donenfeld . All Rights Reserved. Jan 17 12:19:52.276807 kubelet[1766]: E0117 12:19:52.276713 1766 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Jan 17 12:19:52.532120 kubelet[1766]: E0117 12:19:52.531696 1766 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 67.207.67.2 67.207.67.3 67.207.67.2" Jan 17 12:19:52.562788 systemd[1]: run-containerd-runc-k8s.io-30f60043aa817f1959d62845d643c85ed5655d76a747d7e50eb0e4bab5387a98-runc.t7lckK.mount: Deactivated successfully. 
Jan 17 12:19:53.277827 kubelet[1766]: E0117 12:19:53.277740 1766 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Jan 17 12:19:53.536954 kubelet[1766]: E0117 12:19:53.536236 1766 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 67.207.67.2 67.207.67.3 67.207.67.2" Jan 17 12:19:54.121816 kernel: bpftool[2637]: memfd_create() called without MFD_EXEC or MFD_NOEXEC_SEAL set Jan 17 12:19:54.278687 kubelet[1766]: E0117 12:19:54.278593 1766 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Jan 17 12:19:54.484570 systemd-networkd[1355]: vxlan.calico: Link UP Jan 17 12:19:54.484581 systemd-networkd[1355]: vxlan.calico: Gained carrier Jan 17 12:19:54.550557 kubelet[1766]: E0117 12:19:54.540101 1766 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 67.207.67.2 67.207.67.3 67.207.67.2" Jan 17 12:19:55.279920 kubelet[1766]: E0117 12:19:55.279851 1766 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Jan 17 12:19:55.670147 systemd-networkd[1355]: vxlan.calico: Gained IPv6LL Jan 17 12:19:56.280228 kubelet[1766]: E0117 12:19:56.280143 1766 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Jan 17 12:19:57.280807 kubelet[1766]: E0117 12:19:57.280696 1766 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Jan 17 12:19:58.281651 kubelet[1766]: E0117 12:19:58.281545 1766 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Jan 17 12:19:58.409849 containerd[1451]: time="2025-01-17T12:19:58.409525135Z" 
level=info msg="StopPodSandbox for \"80767f3c6ffa8453fc21b353d588ce9dd6bb05135810695a509e85aa6fab76c7\"" Jan 17 12:19:58.497244 kubelet[1766]: I0117 12:19:58.497122 1766 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="calico-system/calico-node-f8q2m" podStartSLOduration=10.699360235 podStartE2EDuration="29.497097897s" podCreationTimestamp="2025-01-17 12:19:29 +0000 UTC" firstStartedPulling="2025-01-17 12:19:32.683265997 +0000 UTC m=+4.277934629" lastFinishedPulling="2025-01-17 12:19:51.481003661 +0000 UTC m=+23.075672291" observedRunningTime="2025-01-17 12:19:52.569021428 +0000 UTC m=+24.163690057" watchObservedRunningTime="2025-01-17 12:19:58.497097897 +0000 UTC m=+30.091766518" Jan 17 12:19:58.588251 containerd[1451]: 2025-01-17 12:19:58.496 [INFO][2745] cni-plugin/k8s.go 608: Cleaning up netns ContainerID="80767f3c6ffa8453fc21b353d588ce9dd6bb05135810695a509e85aa6fab76c7" Jan 17 12:19:58.588251 containerd[1451]: 2025-01-17 12:19:58.497 [INFO][2745] cni-plugin/dataplane_linux.go 559: Deleting workload's device in netns. ContainerID="80767f3c6ffa8453fc21b353d588ce9dd6bb05135810695a509e85aa6fab76c7" iface="eth0" netns="/var/run/netns/cni-e5f7408c-1ff4-25dc-94df-db2abaf5ce11" Jan 17 12:19:58.588251 containerd[1451]: 2025-01-17 12:19:58.498 [INFO][2745] cni-plugin/dataplane_linux.go 570: Entered netns, deleting veth. ContainerID="80767f3c6ffa8453fc21b353d588ce9dd6bb05135810695a509e85aa6fab76c7" iface="eth0" netns="/var/run/netns/cni-e5f7408c-1ff4-25dc-94df-db2abaf5ce11" Jan 17 12:19:58.588251 containerd[1451]: 2025-01-17 12:19:58.499 [INFO][2745] cni-plugin/dataplane_linux.go 597: Workload's veth was already gone. Nothing to do. 
ContainerID="80767f3c6ffa8453fc21b353d588ce9dd6bb05135810695a509e85aa6fab76c7" iface="eth0" netns="/var/run/netns/cni-e5f7408c-1ff4-25dc-94df-db2abaf5ce11" Jan 17 12:19:58.588251 containerd[1451]: 2025-01-17 12:19:58.499 [INFO][2745] cni-plugin/k8s.go 615: Releasing IP address(es) ContainerID="80767f3c6ffa8453fc21b353d588ce9dd6bb05135810695a509e85aa6fab76c7" Jan 17 12:19:58.588251 containerd[1451]: 2025-01-17 12:19:58.499 [INFO][2745] cni-plugin/utils.go 188: Calico CNI releasing IP address ContainerID="80767f3c6ffa8453fc21b353d588ce9dd6bb05135810695a509e85aa6fab76c7" Jan 17 12:19:58.588251 containerd[1451]: 2025-01-17 12:19:58.565 [INFO][2751] ipam/ipam_plugin.go 412: Releasing address using handleID ContainerID="80767f3c6ffa8453fc21b353d588ce9dd6bb05135810695a509e85aa6fab76c7" HandleID="k8s-pod-network.80767f3c6ffa8453fc21b353d588ce9dd6bb05135810695a509e85aa6fab76c7" Workload="143.244.184.73-k8s-csi--node--driver--s7bwc-eth0" Jan 17 12:19:58.588251 containerd[1451]: 2025-01-17 12:19:58.565 [INFO][2751] ipam/ipam_plugin.go 353: About to acquire host-wide IPAM lock. Jan 17 12:19:58.588251 containerd[1451]: 2025-01-17 12:19:58.565 [INFO][2751] ipam/ipam_plugin.go 368: Acquired host-wide IPAM lock. Jan 17 12:19:58.588251 containerd[1451]: 2025-01-17 12:19:58.581 [WARNING][2751] ipam/ipam_plugin.go 429: Asked to release address but it doesn't exist. 
Ignoring ContainerID="80767f3c6ffa8453fc21b353d588ce9dd6bb05135810695a509e85aa6fab76c7" HandleID="k8s-pod-network.80767f3c6ffa8453fc21b353d588ce9dd6bb05135810695a509e85aa6fab76c7" Workload="143.244.184.73-k8s-csi--node--driver--s7bwc-eth0" Jan 17 12:19:58.588251 containerd[1451]: 2025-01-17 12:19:58.581 [INFO][2751] ipam/ipam_plugin.go 440: Releasing address using workloadID ContainerID="80767f3c6ffa8453fc21b353d588ce9dd6bb05135810695a509e85aa6fab76c7" HandleID="k8s-pod-network.80767f3c6ffa8453fc21b353d588ce9dd6bb05135810695a509e85aa6fab76c7" Workload="143.244.184.73-k8s-csi--node--driver--s7bwc-eth0" Jan 17 12:19:58.588251 containerd[1451]: 2025-01-17 12:19:58.583 [INFO][2751] ipam/ipam_plugin.go 374: Released host-wide IPAM lock. Jan 17 12:19:58.588251 containerd[1451]: 2025-01-17 12:19:58.586 [INFO][2745] cni-plugin/k8s.go 621: Teardown processing complete. ContainerID="80767f3c6ffa8453fc21b353d588ce9dd6bb05135810695a509e85aa6fab76c7" Jan 17 12:19:58.589989 containerd[1451]: time="2025-01-17T12:19:58.589940682Z" level=info msg="TearDown network for sandbox \"80767f3c6ffa8453fc21b353d588ce9dd6bb05135810695a509e85aa6fab76c7\" successfully" Jan 17 12:19:58.589989 containerd[1451]: time="2025-01-17T12:19:58.589983404Z" level=info msg="StopPodSandbox for \"80767f3c6ffa8453fc21b353d588ce9dd6bb05135810695a509e85aa6fab76c7\" returns successfully" Jan 17 12:19:58.591969 containerd[1451]: time="2025-01-17T12:19:58.591900425Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:csi-node-driver-s7bwc,Uid:d94ec394-9eb7-4930-b60e-267badfa15a7,Namespace:calico-system,Attempt:1,}" Jan 17 12:19:58.592699 systemd[1]: run-netns-cni\x2de5f7408c\x2d1ff4\x2d25dc\x2d94df\x2ddb2abaf5ce11.mount: Deactivated successfully. 
Jan 17 12:19:58.787056 systemd-networkd[1355]: cali18a540e13e2: Link UP Jan 17 12:19:58.788054 systemd-networkd[1355]: cali18a540e13e2: Gained carrier Jan 17 12:19:58.815438 containerd[1451]: 2025-01-17 12:19:58.656 [INFO][2759] cni-plugin/plugin.go 325: Calico CNI found existing endpoint: &{{WorkloadEndpoint projectcalico.org/v3} {143.244.184.73-k8s-csi--node--driver--s7bwc-eth0 csi-node-driver- calico-system d94ec394-9eb7-4930-b60e-267badfa15a7 1324 0 2025-01-17 12:19:29 +0000 UTC map[app.kubernetes.io/name:csi-node-driver controller-revision-hash:56747c9949 k8s-app:csi-node-driver name:csi-node-driver pod-template-generation:1 projectcalico.org/namespace:calico-system projectcalico.org/orchestrator:k8s projectcalico.org/serviceaccount:csi-node-driver] map[] [] [] []} {k8s 143.244.184.73 csi-node-driver-s7bwc eth0 csi-node-driver [] [] [kns.calico-system ksa.calico-system.csi-node-driver] cali18a540e13e2 [] []}} ContainerID="c524cb1f93747dd163b7185bceb1a7cf7f7de03aeaff730128a1dfaf0a89248f" Namespace="calico-system" Pod="csi-node-driver-s7bwc" WorkloadEndpoint="143.244.184.73-k8s-csi--node--driver--s7bwc-" Jan 17 12:19:58.815438 containerd[1451]: 2025-01-17 12:19:58.656 [INFO][2759] cni-plugin/k8s.go 77: Extracted identifiers for CmdAddK8s ContainerID="c524cb1f93747dd163b7185bceb1a7cf7f7de03aeaff730128a1dfaf0a89248f" Namespace="calico-system" Pod="csi-node-driver-s7bwc" WorkloadEndpoint="143.244.184.73-k8s-csi--node--driver--s7bwc-eth0" Jan 17 12:19:58.815438 containerd[1451]: 2025-01-17 12:19:58.696 [INFO][2769] ipam/ipam_plugin.go 225: Calico CNI IPAM request count IPv4=1 IPv6=0 ContainerID="c524cb1f93747dd163b7185bceb1a7cf7f7de03aeaff730128a1dfaf0a89248f" HandleID="k8s-pod-network.c524cb1f93747dd163b7185bceb1a7cf7f7de03aeaff730128a1dfaf0a89248f" Workload="143.244.184.73-k8s-csi--node--driver--s7bwc-eth0" Jan 17 12:19:58.815438 containerd[1451]: 2025-01-17 12:19:58.717 [INFO][2769] ipam/ipam_plugin.go 265: Auto assigning IP 
ContainerID="c524cb1f93747dd163b7185bceb1a7cf7f7de03aeaff730128a1dfaf0a89248f" HandleID="k8s-pod-network.c524cb1f93747dd163b7185bceb1a7cf7f7de03aeaff730128a1dfaf0a89248f" Workload="143.244.184.73-k8s-csi--node--driver--s7bwc-eth0" assignArgs=ipam.AutoAssignArgs{Num4:1, Num6:0, HandleID:(*string)(0xc0004d2550), Attrs:map[string]string{"namespace":"calico-system", "node":"143.244.184.73", "pod":"csi-node-driver-s7bwc", "timestamp":"2025-01-17 12:19:58.695987338 +0000 UTC"}, Hostname:"143.244.184.73", IPv4Pools:[]net.IPNet{}, IPv6Pools:[]net.IPNet{}, MaxBlocksPerHost:0, HostReservedAttrIPv4s:(*ipam.HostReservedAttr)(nil), HostReservedAttrIPv6s:(*ipam.HostReservedAttr)(nil), IntendedUse:"Workload"} Jan 17 12:19:58.815438 containerd[1451]: 2025-01-17 12:19:58.717 [INFO][2769] ipam/ipam_plugin.go 353: About to acquire host-wide IPAM lock. Jan 17 12:19:58.815438 containerd[1451]: 2025-01-17 12:19:58.717 [INFO][2769] ipam/ipam_plugin.go 368: Acquired host-wide IPAM lock. Jan 17 12:19:58.815438 containerd[1451]: 2025-01-17 12:19:58.717 [INFO][2769] ipam/ipam.go 107: Auto-assign 1 ipv4, 0 ipv6 addrs for host '143.244.184.73' Jan 17 12:19:58.815438 containerd[1451]: 2025-01-17 12:19:58.721 [INFO][2769] ipam/ipam.go 660: Looking up existing affinities for host handle="k8s-pod-network.c524cb1f93747dd163b7185bceb1a7cf7f7de03aeaff730128a1dfaf0a89248f" host="143.244.184.73" Jan 17 12:19:58.815438 containerd[1451]: 2025-01-17 12:19:58.730 [INFO][2769] ipam/ipam.go 372: Looking up existing affinities for host host="143.244.184.73" Jan 17 12:19:58.815438 containerd[1451]: 2025-01-17 12:19:58.741 [INFO][2769] ipam/ipam.go 489: Trying affinity for 192.168.87.128/26 host="143.244.184.73" Jan 17 12:19:58.815438 containerd[1451]: 2025-01-17 12:19:58.745 [INFO][2769] ipam/ipam.go 155: Attempting to load block cidr=192.168.87.128/26 host="143.244.184.73" Jan 17 12:19:58.815438 containerd[1451]: 2025-01-17 12:19:58.750 [INFO][2769] ipam/ipam.go 232: Affinity is confirmed and block has been 
loaded cidr=192.168.87.128/26 host="143.244.184.73" Jan 17 12:19:58.815438 containerd[1451]: 2025-01-17 12:19:58.750 [INFO][2769] ipam/ipam.go 1180: Attempting to assign 1 addresses from block block=192.168.87.128/26 handle="k8s-pod-network.c524cb1f93747dd163b7185bceb1a7cf7f7de03aeaff730128a1dfaf0a89248f" host="143.244.184.73" Jan 17 12:19:58.815438 containerd[1451]: 2025-01-17 12:19:58.754 [INFO][2769] ipam/ipam.go 1685: Creating new handle: k8s-pod-network.c524cb1f93747dd163b7185bceb1a7cf7f7de03aeaff730128a1dfaf0a89248f Jan 17 12:19:58.815438 containerd[1451]: 2025-01-17 12:19:58.761 [INFO][2769] ipam/ipam.go 1203: Writing block in order to claim IPs block=192.168.87.128/26 handle="k8s-pod-network.c524cb1f93747dd163b7185bceb1a7cf7f7de03aeaff730128a1dfaf0a89248f" host="143.244.184.73" Jan 17 12:19:58.815438 containerd[1451]: 2025-01-17 12:19:58.774 [INFO][2769] ipam/ipam.go 1216: Successfully claimed IPs: [192.168.87.129/26] block=192.168.87.128/26 handle="k8s-pod-network.c524cb1f93747dd163b7185bceb1a7cf7f7de03aeaff730128a1dfaf0a89248f" host="143.244.184.73" Jan 17 12:19:58.815438 containerd[1451]: 2025-01-17 12:19:58.775 [INFO][2769] ipam/ipam.go 847: Auto-assigned 1 out of 1 IPv4s: [192.168.87.129/26] handle="k8s-pod-network.c524cb1f93747dd163b7185bceb1a7cf7f7de03aeaff730128a1dfaf0a89248f" host="143.244.184.73" Jan 17 12:19:58.815438 containerd[1451]: 2025-01-17 12:19:58.775 [INFO][2769] ipam/ipam_plugin.go 374: Released host-wide IPAM lock. 
Jan 17 12:19:58.815438 containerd[1451]: 2025-01-17 12:19:58.775 [INFO][2769] ipam/ipam_plugin.go 283: Calico CNI IPAM assigned addresses IPv4=[192.168.87.129/26] IPv6=[] ContainerID="c524cb1f93747dd163b7185bceb1a7cf7f7de03aeaff730128a1dfaf0a89248f" HandleID="k8s-pod-network.c524cb1f93747dd163b7185bceb1a7cf7f7de03aeaff730128a1dfaf0a89248f" Workload="143.244.184.73-k8s-csi--node--driver--s7bwc-eth0" Jan 17 12:19:58.816745 containerd[1451]: 2025-01-17 12:19:58.778 [INFO][2759] cni-plugin/k8s.go 386: Populated endpoint ContainerID="c524cb1f93747dd163b7185bceb1a7cf7f7de03aeaff730128a1dfaf0a89248f" Namespace="calico-system" Pod="csi-node-driver-s7bwc" WorkloadEndpoint="143.244.184.73-k8s-csi--node--driver--s7bwc-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"143.244.184.73-k8s-csi--node--driver--s7bwc-eth0", GenerateName:"csi-node-driver-", Namespace:"calico-system", SelfLink:"", UID:"d94ec394-9eb7-4930-b60e-267badfa15a7", ResourceVersion:"1324", Generation:0, CreationTimestamp:time.Date(2025, time.January, 17, 12, 19, 29, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"app.kubernetes.io/name":"csi-node-driver", "controller-revision-hash":"56747c9949", "k8s-app":"csi-node-driver", "name":"csi-node-driver", "pod-template-generation":"1", "projectcalico.org/namespace":"calico-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"csi-node-driver"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"143.244.184.73", ContainerID:"", Pod:"csi-node-driver-s7bwc", Endpoint:"eth0", ServiceAccountName:"csi-node-driver", IPNetworks:[]string{"192.168.87.129/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", 
Profiles:[]string{"kns.calico-system", "ksa.calico-system.csi-node-driver"}, InterfaceName:"cali18a540e13e2", MAC:"", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil)}} Jan 17 12:19:58.816745 containerd[1451]: 2025-01-17 12:19:58.778 [INFO][2759] cni-plugin/k8s.go 387: Calico CNI using IPs: [192.168.87.129/32] ContainerID="c524cb1f93747dd163b7185bceb1a7cf7f7de03aeaff730128a1dfaf0a89248f" Namespace="calico-system" Pod="csi-node-driver-s7bwc" WorkloadEndpoint="143.244.184.73-k8s-csi--node--driver--s7bwc-eth0" Jan 17 12:19:58.816745 containerd[1451]: 2025-01-17 12:19:58.778 [INFO][2759] cni-plugin/dataplane_linux.go 69: Setting the host side veth name to cali18a540e13e2 ContainerID="c524cb1f93747dd163b7185bceb1a7cf7f7de03aeaff730128a1dfaf0a89248f" Namespace="calico-system" Pod="csi-node-driver-s7bwc" WorkloadEndpoint="143.244.184.73-k8s-csi--node--driver--s7bwc-eth0" Jan 17 12:19:58.816745 containerd[1451]: 2025-01-17 12:19:58.788 [INFO][2759] cni-plugin/dataplane_linux.go 508: Disabling IPv4 forwarding ContainerID="c524cb1f93747dd163b7185bceb1a7cf7f7de03aeaff730128a1dfaf0a89248f" Namespace="calico-system" Pod="csi-node-driver-s7bwc" WorkloadEndpoint="143.244.184.73-k8s-csi--node--driver--s7bwc-eth0" Jan 17 12:19:58.816745 containerd[1451]: 2025-01-17 12:19:58.789 [INFO][2759] cni-plugin/k8s.go 414: Added Mac, interface name, and active container ID to endpoint ContainerID="c524cb1f93747dd163b7185bceb1a7cf7f7de03aeaff730128a1dfaf0a89248f" Namespace="calico-system" Pod="csi-node-driver-s7bwc" WorkloadEndpoint="143.244.184.73-k8s-csi--node--driver--s7bwc-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"143.244.184.73-k8s-csi--node--driver--s7bwc-eth0", GenerateName:"csi-node-driver-", Namespace:"calico-system", SelfLink:"", UID:"d94ec394-9eb7-4930-b60e-267badfa15a7", ResourceVersion:"1324", Generation:0, CreationTimestamp:time.Date(2025, 
time.January, 17, 12, 19, 29, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"app.kubernetes.io/name":"csi-node-driver", "controller-revision-hash":"56747c9949", "k8s-app":"csi-node-driver", "name":"csi-node-driver", "pod-template-generation":"1", "projectcalico.org/namespace":"calico-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"csi-node-driver"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"143.244.184.73", ContainerID:"c524cb1f93747dd163b7185bceb1a7cf7f7de03aeaff730128a1dfaf0a89248f", Pod:"csi-node-driver-s7bwc", Endpoint:"eth0", ServiceAccountName:"csi-node-driver", IPNetworks:[]string{"192.168.87.129/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-system", "ksa.calico-system.csi-node-driver"}, InterfaceName:"cali18a540e13e2", MAC:"0a:6f:fa:16:64:b9", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil)}} Jan 17 12:19:58.816745 containerd[1451]: 2025-01-17 12:19:58.807 [INFO][2759] cni-plugin/k8s.go 500: Wrote updated endpoint to datastore ContainerID="c524cb1f93747dd163b7185bceb1a7cf7f7de03aeaff730128a1dfaf0a89248f" Namespace="calico-system" Pod="csi-node-driver-s7bwc" WorkloadEndpoint="143.244.184.73-k8s-csi--node--driver--s7bwc-eth0" Jan 17 12:19:58.853998 containerd[1451]: time="2025-01-17T12:19:58.853527948Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Jan 17 12:19:58.853998 containerd[1451]: time="2025-01-17T12:19:58.853650966Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." 
runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Jan 17 12:19:58.853998 containerd[1451]: time="2025-01-17T12:19:58.853683837Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Jan 17 12:19:58.855182 containerd[1451]: time="2025-01-17T12:19:58.853852174Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Jan 17 12:19:58.905281 systemd[1]: Started cri-containerd-c524cb1f93747dd163b7185bceb1a7cf7f7de03aeaff730128a1dfaf0a89248f.scope - libcontainer container c524cb1f93747dd163b7185bceb1a7cf7f7de03aeaff730128a1dfaf0a89248f. Jan 17 12:19:58.943191 containerd[1451]: time="2025-01-17T12:19:58.943088657Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:csi-node-driver-s7bwc,Uid:d94ec394-9eb7-4930-b60e-267badfa15a7,Namespace:calico-system,Attempt:1,} returns sandbox id \"c524cb1f93747dd163b7185bceb1a7cf7f7de03aeaff730128a1dfaf0a89248f\"" Jan 17 12:19:58.947213 containerd[1451]: time="2025-01-17T12:19:58.946812099Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/csi:v3.29.1\"" Jan 17 12:19:59.282699 kubelet[1766]: E0117 12:19:59.282608 1766 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Jan 17 12:19:59.591746 systemd[1]: run-containerd-runc-k8s.io-c524cb1f93747dd163b7185bceb1a7cf7f7de03aeaff730128a1dfaf0a89248f-runc.wH0Tz4.mount: Deactivated successfully. 
Jan 17 12:20:00.150314 systemd-networkd[1355]: cali18a540e13e2: Gained IPv6LL Jan 17 12:20:00.283837 kubelet[1766]: E0117 12:20:00.283725 1766 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Jan 17 12:20:00.696648 containerd[1451]: time="2025-01-17T12:20:00.696577279Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/csi:v3.29.1\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jan 17 12:20:00.698541 containerd[1451]: time="2025-01-17T12:20:00.698126229Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/csi:v3.29.1: active requests=0, bytes read=7902632" Jan 17 12:20:00.698541 containerd[1451]: time="2025-01-17T12:20:00.698323651Z" level=info msg="ImageCreate event name:\"sha256:bda8c42e04758c4f061339e213f50ccdc7502c4176fbf631aa12357e62b63540\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jan 17 12:20:00.704048 containerd[1451]: time="2025-01-17T12:20:00.703986402Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/csi@sha256:eaa7e01fb16b603c155a67b81f16992281db7f831684c7b2081d3434587a7ff3\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jan 17 12:20:00.705836 containerd[1451]: time="2025-01-17T12:20:00.705604211Z" level=info msg="Pulled image \"ghcr.io/flatcar/calico/csi:v3.29.1\" with image id \"sha256:bda8c42e04758c4f061339e213f50ccdc7502c4176fbf631aa12357e62b63540\", repo tag \"ghcr.io/flatcar/calico/csi:v3.29.1\", repo digest \"ghcr.io/flatcar/calico/csi@sha256:eaa7e01fb16b603c155a67b81f16992281db7f831684c7b2081d3434587a7ff3\", size \"9395716\" in 1.758730892s" Jan 17 12:20:00.705836 containerd[1451]: time="2025-01-17T12:20:00.705676633Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/csi:v3.29.1\" returns image reference \"sha256:bda8c42e04758c4f061339e213f50ccdc7502c4176fbf631aa12357e62b63540\"" Jan 17 12:20:00.709880 containerd[1451]: time="2025-01-17T12:20:00.709801280Z" level=info msg="CreateContainer within 
sandbox \"c524cb1f93747dd163b7185bceb1a7cf7f7de03aeaff730128a1dfaf0a89248f\" for container &ContainerMetadata{Name:calico-csi,Attempt:0,}" Jan 17 12:20:00.730227 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount3194915959.mount: Deactivated successfully. Jan 17 12:20:00.737907 containerd[1451]: time="2025-01-17T12:20:00.737110092Z" level=info msg="CreateContainer within sandbox \"c524cb1f93747dd163b7185bceb1a7cf7f7de03aeaff730128a1dfaf0a89248f\" for &ContainerMetadata{Name:calico-csi,Attempt:0,} returns container id \"dc01aee7d3d39184073d99efd4307ce289a590c417b15c5a00ec0e4e9b743c76\"" Jan 17 12:20:00.738218 containerd[1451]: time="2025-01-17T12:20:00.738120333Z" level=info msg="StartContainer for \"dc01aee7d3d39184073d99efd4307ce289a590c417b15c5a00ec0e4e9b743c76\"" Jan 17 12:20:00.784077 systemd[1]: Started cri-containerd-dc01aee7d3d39184073d99efd4307ce289a590c417b15c5a00ec0e4e9b743c76.scope - libcontainer container dc01aee7d3d39184073d99efd4307ce289a590c417b15c5a00ec0e4e9b743c76. Jan 17 12:20:00.832093 containerd[1451]: time="2025-01-17T12:20:00.832007404Z" level=info msg="StartContainer for \"dc01aee7d3d39184073d99efd4307ce289a590c417b15c5a00ec0e4e9b743c76\" returns successfully" Jan 17 12:20:00.834499 containerd[1451]: time="2025-01-17T12:20:00.834205033Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/node-driver-registrar:v3.29.1\"" Jan 17 12:20:01.142896 update_engine[1441]: I20250117 12:20:01.142331 1441 update_attempter.cc:509] Updating boot flags... 
Jan 17 12:20:01.173979 kernel: BTRFS warning: duplicate device /dev/vda3 devid 1 generation 36 scanned by (udev-worker) (2883) Jan 17 12:20:01.285043 kubelet[1766]: E0117 12:20:01.284927 1766 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Jan 17 12:20:02.285336 kubelet[1766]: E0117 12:20:02.285233 1766 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Jan 17 12:20:02.410027 containerd[1451]: time="2025-01-17T12:20:02.409820660Z" level=info msg="StopPodSandbox for \"94fa8b2c94d326883c80470d61d7cc7351dd805727938a68fd4b8e3926d32622\"" Jan 17 12:20:02.631378 containerd[1451]: 2025-01-17 12:20:02.548 [INFO][2902] cni-plugin/k8s.go 608: Cleaning up netns ContainerID="94fa8b2c94d326883c80470d61d7cc7351dd805727938a68fd4b8e3926d32622" Jan 17 12:20:02.631378 containerd[1451]: 2025-01-17 12:20:02.548 [INFO][2902] cni-plugin/dataplane_linux.go 559: Deleting workload's device in netns. ContainerID="94fa8b2c94d326883c80470d61d7cc7351dd805727938a68fd4b8e3926d32622" iface="eth0" netns="/var/run/netns/cni-59c41bb3-bada-496a-155f-b9f649b3a8b9" Jan 17 12:20:02.631378 containerd[1451]: 2025-01-17 12:20:02.549 [INFO][2902] cni-plugin/dataplane_linux.go 570: Entered netns, deleting veth. ContainerID="94fa8b2c94d326883c80470d61d7cc7351dd805727938a68fd4b8e3926d32622" iface="eth0" netns="/var/run/netns/cni-59c41bb3-bada-496a-155f-b9f649b3a8b9" Jan 17 12:20:02.631378 containerd[1451]: 2025-01-17 12:20:02.549 [INFO][2902] cni-plugin/dataplane_linux.go 597: Workload's veth was already gone. Nothing to do. 
ContainerID="94fa8b2c94d326883c80470d61d7cc7351dd805727938a68fd4b8e3926d32622" iface="eth0" netns="/var/run/netns/cni-59c41bb3-bada-496a-155f-b9f649b3a8b9" Jan 17 12:20:02.631378 containerd[1451]: 2025-01-17 12:20:02.549 [INFO][2902] cni-plugin/k8s.go 615: Releasing IP address(es) ContainerID="94fa8b2c94d326883c80470d61d7cc7351dd805727938a68fd4b8e3926d32622" Jan 17 12:20:02.631378 containerd[1451]: 2025-01-17 12:20:02.549 [INFO][2902] cni-plugin/utils.go 188: Calico CNI releasing IP address ContainerID="94fa8b2c94d326883c80470d61d7cc7351dd805727938a68fd4b8e3926d32622" Jan 17 12:20:02.631378 containerd[1451]: 2025-01-17 12:20:02.582 [INFO][2909] ipam/ipam_plugin.go 412: Releasing address using handleID ContainerID="94fa8b2c94d326883c80470d61d7cc7351dd805727938a68fd4b8e3926d32622" HandleID="k8s-pod-network.94fa8b2c94d326883c80470d61d7cc7351dd805727938a68fd4b8e3926d32622" Workload="143.244.184.73-k8s-nginx--deployment--8587fbcb89--d5dcn-eth0" Jan 17 12:20:02.631378 containerd[1451]: 2025-01-17 12:20:02.583 [INFO][2909] ipam/ipam_plugin.go 353: About to acquire host-wide IPAM lock. Jan 17 12:20:02.631378 containerd[1451]: 2025-01-17 12:20:02.583 [INFO][2909] ipam/ipam_plugin.go 368: Acquired host-wide IPAM lock. Jan 17 12:20:02.631378 containerd[1451]: 2025-01-17 12:20:02.607 [WARNING][2909] ipam/ipam_plugin.go 429: Asked to release address but it doesn't exist. 
Ignoring ContainerID="94fa8b2c94d326883c80470d61d7cc7351dd805727938a68fd4b8e3926d32622" HandleID="k8s-pod-network.94fa8b2c94d326883c80470d61d7cc7351dd805727938a68fd4b8e3926d32622" Workload="143.244.184.73-k8s-nginx--deployment--8587fbcb89--d5dcn-eth0" Jan 17 12:20:02.631378 containerd[1451]: 2025-01-17 12:20:02.607 [INFO][2909] ipam/ipam_plugin.go 440: Releasing address using workloadID ContainerID="94fa8b2c94d326883c80470d61d7cc7351dd805727938a68fd4b8e3926d32622" HandleID="k8s-pod-network.94fa8b2c94d326883c80470d61d7cc7351dd805727938a68fd4b8e3926d32622" Workload="143.244.184.73-k8s-nginx--deployment--8587fbcb89--d5dcn-eth0" Jan 17 12:20:02.631378 containerd[1451]: 2025-01-17 12:20:02.626 [INFO][2909] ipam/ipam_plugin.go 374: Released host-wide IPAM lock. Jan 17 12:20:02.631378 containerd[1451]: 2025-01-17 12:20:02.628 [INFO][2902] cni-plugin/k8s.go 621: Teardown processing complete. ContainerID="94fa8b2c94d326883c80470d61d7cc7351dd805727938a68fd4b8e3926d32622" Jan 17 12:20:02.634891 containerd[1451]: time="2025-01-17T12:20:02.634621959Z" level=info msg="TearDown network for sandbox \"94fa8b2c94d326883c80470d61d7cc7351dd805727938a68fd4b8e3926d32622\" successfully" Jan 17 12:20:02.634891 containerd[1451]: time="2025-01-17T12:20:02.634687305Z" level=info msg="StopPodSandbox for \"94fa8b2c94d326883c80470d61d7cc7351dd805727938a68fd4b8e3926d32622\" returns successfully" Jan 17 12:20:02.638141 containerd[1451]: time="2025-01-17T12:20:02.637460830Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:nginx-deployment-8587fbcb89-d5dcn,Uid:fa40673b-0360-409f-9d77-7f3e3e6b869d,Namespace:default,Attempt:1,}" Jan 17 12:20:02.638018 systemd[1]: run-netns-cni\x2d59c41bb3\x2dbada\x2d496a\x2d155f\x2db9f649b3a8b9.mount: Deactivated successfully. 
Jan 17 12:20:03.073849 systemd-networkd[1355]: cali38efd43bc1a: Link UP Jan 17 12:20:03.075464 systemd-networkd[1355]: cali38efd43bc1a: Gained carrier Jan 17 12:20:03.107963 containerd[1451]: 2025-01-17 12:20:02.750 [INFO][2916] cni-plugin/plugin.go 325: Calico CNI found existing endpoint: &{{WorkloadEndpoint projectcalico.org/v3} {143.244.184.73-k8s-nginx--deployment--8587fbcb89--d5dcn-eth0 nginx-deployment-8587fbcb89- default fa40673b-0360-409f-9d77-7f3e3e6b869d 1346 0 2025-01-17 12:19:49 +0000 UTC map[app:nginx pod-template-hash:8587fbcb89 projectcalico.org/namespace:default projectcalico.org/orchestrator:k8s projectcalico.org/serviceaccount:default] map[] [] [] []} {k8s 143.244.184.73 nginx-deployment-8587fbcb89-d5dcn eth0 default [] [] [kns.default ksa.default.default] cali38efd43bc1a [] []}} ContainerID="05e07427156434f9fd64c41b88da064aefbb6250c5ee2f0b15097b8dd21383cf" Namespace="default" Pod="nginx-deployment-8587fbcb89-d5dcn" WorkloadEndpoint="143.244.184.73-k8s-nginx--deployment--8587fbcb89--d5dcn-" Jan 17 12:20:03.107963 containerd[1451]: 2025-01-17 12:20:02.751 [INFO][2916] cni-plugin/k8s.go 77: Extracted identifiers for CmdAddK8s ContainerID="05e07427156434f9fd64c41b88da064aefbb6250c5ee2f0b15097b8dd21383cf" Namespace="default" Pod="nginx-deployment-8587fbcb89-d5dcn" WorkloadEndpoint="143.244.184.73-k8s-nginx--deployment--8587fbcb89--d5dcn-eth0" Jan 17 12:20:03.107963 containerd[1451]: 2025-01-17 12:20:02.828 [INFO][2926] ipam/ipam_plugin.go 225: Calico CNI IPAM request count IPv4=1 IPv6=0 ContainerID="05e07427156434f9fd64c41b88da064aefbb6250c5ee2f0b15097b8dd21383cf" HandleID="k8s-pod-network.05e07427156434f9fd64c41b88da064aefbb6250c5ee2f0b15097b8dd21383cf" Workload="143.244.184.73-k8s-nginx--deployment--8587fbcb89--d5dcn-eth0" Jan 17 12:20:03.107963 containerd[1451]: 2025-01-17 12:20:02.880 [INFO][2926] ipam/ipam_plugin.go 265: Auto assigning IP ContainerID="05e07427156434f9fd64c41b88da064aefbb6250c5ee2f0b15097b8dd21383cf" 
HandleID="k8s-pod-network.05e07427156434f9fd64c41b88da064aefbb6250c5ee2f0b15097b8dd21383cf" Workload="143.244.184.73-k8s-nginx--deployment--8587fbcb89--d5dcn-eth0" assignArgs=ipam.AutoAssignArgs{Num4:1, Num6:0, HandleID:(*string)(0xc0003037c0), Attrs:map[string]string{"namespace":"default", "node":"143.244.184.73", "pod":"nginx-deployment-8587fbcb89-d5dcn", "timestamp":"2025-01-17 12:20:02.828511036 +0000 UTC"}, Hostname:"143.244.184.73", IPv4Pools:[]net.IPNet{}, IPv6Pools:[]net.IPNet{}, MaxBlocksPerHost:0, HostReservedAttrIPv4s:(*ipam.HostReservedAttr)(nil), HostReservedAttrIPv6s:(*ipam.HostReservedAttr)(nil), IntendedUse:"Workload"} Jan 17 12:20:03.107963 containerd[1451]: 2025-01-17 12:20:02.880 [INFO][2926] ipam/ipam_plugin.go 353: About to acquire host-wide IPAM lock. Jan 17 12:20:03.107963 containerd[1451]: 2025-01-17 12:20:02.880 [INFO][2926] ipam/ipam_plugin.go 368: Acquired host-wide IPAM lock. Jan 17 12:20:03.107963 containerd[1451]: 2025-01-17 12:20:02.880 [INFO][2926] ipam/ipam.go 107: Auto-assign 1 ipv4, 0 ipv6 addrs for host '143.244.184.73' Jan 17 12:20:03.107963 containerd[1451]: 2025-01-17 12:20:02.892 [INFO][2926] ipam/ipam.go 660: Looking up existing affinities for host handle="k8s-pod-network.05e07427156434f9fd64c41b88da064aefbb6250c5ee2f0b15097b8dd21383cf" host="143.244.184.73" Jan 17 12:20:03.107963 containerd[1451]: 2025-01-17 12:20:02.914 [INFO][2926] ipam/ipam.go 372: Looking up existing affinities for host host="143.244.184.73" Jan 17 12:20:03.107963 containerd[1451]: 2025-01-17 12:20:02.951 [INFO][2926] ipam/ipam.go 489: Trying affinity for 192.168.87.128/26 host="143.244.184.73" Jan 17 12:20:03.107963 containerd[1451]: 2025-01-17 12:20:02.966 [INFO][2926] ipam/ipam.go 155: Attempting to load block cidr=192.168.87.128/26 host="143.244.184.73" Jan 17 12:20:03.107963 containerd[1451]: 2025-01-17 12:20:02.974 [INFO][2926] ipam/ipam.go 232: Affinity is confirmed and block has been loaded cidr=192.168.87.128/26 host="143.244.184.73" Jan 17 
12:20:03.107963 containerd[1451]: 2025-01-17 12:20:02.975 [INFO][2926] ipam/ipam.go 1180: Attempting to assign 1 addresses from block block=192.168.87.128/26 handle="k8s-pod-network.05e07427156434f9fd64c41b88da064aefbb6250c5ee2f0b15097b8dd21383cf" host="143.244.184.73" Jan 17 12:20:03.107963 containerd[1451]: 2025-01-17 12:20:02.993 [INFO][2926] ipam/ipam.go 1685: Creating new handle: k8s-pod-network.05e07427156434f9fd64c41b88da064aefbb6250c5ee2f0b15097b8dd21383cf Jan 17 12:20:03.107963 containerd[1451]: 2025-01-17 12:20:03.017 [INFO][2926] ipam/ipam.go 1203: Writing block in order to claim IPs block=192.168.87.128/26 handle="k8s-pod-network.05e07427156434f9fd64c41b88da064aefbb6250c5ee2f0b15097b8dd21383cf" host="143.244.184.73" Jan 17 12:20:03.107963 containerd[1451]: 2025-01-17 12:20:03.060 [INFO][2926] ipam/ipam.go 1216: Successfully claimed IPs: [192.168.87.130/26] block=192.168.87.128/26 handle="k8s-pod-network.05e07427156434f9fd64c41b88da064aefbb6250c5ee2f0b15097b8dd21383cf" host="143.244.184.73" Jan 17 12:20:03.107963 containerd[1451]: 2025-01-17 12:20:03.060 [INFO][2926] ipam/ipam.go 847: Auto-assigned 1 out of 1 IPv4s: [192.168.87.130/26] handle="k8s-pod-network.05e07427156434f9fd64c41b88da064aefbb6250c5ee2f0b15097b8dd21383cf" host="143.244.184.73" Jan 17 12:20:03.107963 containerd[1451]: 2025-01-17 12:20:03.060 [INFO][2926] ipam/ipam_plugin.go 374: Released host-wide IPAM lock. 
Jan 17 12:20:03.107963 containerd[1451]: 2025-01-17 12:20:03.061 [INFO][2926] ipam/ipam_plugin.go 283: Calico CNI IPAM assigned addresses IPv4=[192.168.87.130/26] IPv6=[] ContainerID="05e07427156434f9fd64c41b88da064aefbb6250c5ee2f0b15097b8dd21383cf" HandleID="k8s-pod-network.05e07427156434f9fd64c41b88da064aefbb6250c5ee2f0b15097b8dd21383cf" Workload="143.244.184.73-k8s-nginx--deployment--8587fbcb89--d5dcn-eth0" Jan 17 12:20:03.109082 containerd[1451]: 2025-01-17 12:20:03.065 [INFO][2916] cni-plugin/k8s.go 386: Populated endpoint ContainerID="05e07427156434f9fd64c41b88da064aefbb6250c5ee2f0b15097b8dd21383cf" Namespace="default" Pod="nginx-deployment-8587fbcb89-d5dcn" WorkloadEndpoint="143.244.184.73-k8s-nginx--deployment--8587fbcb89--d5dcn-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"143.244.184.73-k8s-nginx--deployment--8587fbcb89--d5dcn-eth0", GenerateName:"nginx-deployment-8587fbcb89-", Namespace:"default", SelfLink:"", UID:"fa40673b-0360-409f-9d77-7f3e3e6b869d", ResourceVersion:"1346", Generation:0, CreationTimestamp:time.Date(2025, time.January, 17, 12, 19, 49, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"app":"nginx", "pod-template-hash":"8587fbcb89", "projectcalico.org/namespace":"default", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"default"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"143.244.184.73", ContainerID:"", Pod:"nginx-deployment-8587fbcb89-d5dcn", Endpoint:"eth0", ServiceAccountName:"default", IPNetworks:[]string{"192.168.87.130/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.default", "ksa.default.default"}, InterfaceName:"cali38efd43bc1a", MAC:"", 
Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil)}} Jan 17 12:20:03.109082 containerd[1451]: 2025-01-17 12:20:03.066 [INFO][2916] cni-plugin/k8s.go 387: Calico CNI using IPs: [192.168.87.130/32] ContainerID="05e07427156434f9fd64c41b88da064aefbb6250c5ee2f0b15097b8dd21383cf" Namespace="default" Pod="nginx-deployment-8587fbcb89-d5dcn" WorkloadEndpoint="143.244.184.73-k8s-nginx--deployment--8587fbcb89--d5dcn-eth0" Jan 17 12:20:03.109082 containerd[1451]: 2025-01-17 12:20:03.066 [INFO][2916] cni-plugin/dataplane_linux.go 69: Setting the host side veth name to cali38efd43bc1a ContainerID="05e07427156434f9fd64c41b88da064aefbb6250c5ee2f0b15097b8dd21383cf" Namespace="default" Pod="nginx-deployment-8587fbcb89-d5dcn" WorkloadEndpoint="143.244.184.73-k8s-nginx--deployment--8587fbcb89--d5dcn-eth0" Jan 17 12:20:03.109082 containerd[1451]: 2025-01-17 12:20:03.070 [INFO][2916] cni-plugin/dataplane_linux.go 508: Disabling IPv4 forwarding ContainerID="05e07427156434f9fd64c41b88da064aefbb6250c5ee2f0b15097b8dd21383cf" Namespace="default" Pod="nginx-deployment-8587fbcb89-d5dcn" WorkloadEndpoint="143.244.184.73-k8s-nginx--deployment--8587fbcb89--d5dcn-eth0" Jan 17 12:20:03.109082 containerd[1451]: 2025-01-17 12:20:03.071 [INFO][2916] cni-plugin/k8s.go 414: Added Mac, interface name, and active container ID to endpoint ContainerID="05e07427156434f9fd64c41b88da064aefbb6250c5ee2f0b15097b8dd21383cf" Namespace="default" Pod="nginx-deployment-8587fbcb89-d5dcn" WorkloadEndpoint="143.244.184.73-k8s-nginx--deployment--8587fbcb89--d5dcn-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"143.244.184.73-k8s-nginx--deployment--8587fbcb89--d5dcn-eth0", GenerateName:"nginx-deployment-8587fbcb89-", Namespace:"default", SelfLink:"", UID:"fa40673b-0360-409f-9d77-7f3e3e6b869d", ResourceVersion:"1346", Generation:0, CreationTimestamp:time.Date(2025, time.January, 17, 12, 19, 49, 0, 
time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"app":"nginx", "pod-template-hash":"8587fbcb89", "projectcalico.org/namespace":"default", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"default"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"143.244.184.73", ContainerID:"05e07427156434f9fd64c41b88da064aefbb6250c5ee2f0b15097b8dd21383cf", Pod:"nginx-deployment-8587fbcb89-d5dcn", Endpoint:"eth0", ServiceAccountName:"default", IPNetworks:[]string{"192.168.87.130/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.default", "ksa.default.default"}, InterfaceName:"cali38efd43bc1a", MAC:"02:88:48:5e:ad:56", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil)}} Jan 17 12:20:03.109082 containerd[1451]: 2025-01-17 12:20:03.103 [INFO][2916] cni-plugin/k8s.go 500: Wrote updated endpoint to datastore ContainerID="05e07427156434f9fd64c41b88da064aefbb6250c5ee2f0b15097b8dd21383cf" Namespace="default" Pod="nginx-deployment-8587fbcb89-d5dcn" WorkloadEndpoint="143.244.184.73-k8s-nginx--deployment--8587fbcb89--d5dcn-eth0" Jan 17 12:20:03.176922 kubelet[1766]: E0117 12:20:03.176308 1766 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 67.207.67.2 67.207.67.3 67.207.67.2" Jan 17 12:20:03.189518 containerd[1451]: time="2025-01-17T12:20:03.188384474Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Jan 17 12:20:03.189518 containerd[1451]: time="2025-01-17T12:20:03.189367863Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." 
runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Jan 17 12:20:03.189518 containerd[1451]: time="2025-01-17T12:20:03.189386256Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Jan 17 12:20:03.190913 containerd[1451]: time="2025-01-17T12:20:03.190074261Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Jan 17 12:20:03.228049 systemd[1]: Started cri-containerd-05e07427156434f9fd64c41b88da064aefbb6250c5ee2f0b15097b8dd21383cf.scope - libcontainer container 05e07427156434f9fd64c41b88da064aefbb6250c5ee2f0b15097b8dd21383cf. Jan 17 12:20:03.286586 kubelet[1766]: E0117 12:20:03.286524 1766 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Jan 17 12:20:03.304459 containerd[1451]: time="2025-01-17T12:20:03.304308221Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:nginx-deployment-8587fbcb89-d5dcn,Uid:fa40673b-0360-409f-9d77-7f3e3e6b869d,Namespace:default,Attempt:1,} returns sandbox id \"05e07427156434f9fd64c41b88da064aefbb6250c5ee2f0b15097b8dd21383cf\"" Jan 17 12:20:03.367404 containerd[1451]: time="2025-01-17T12:20:03.367206500Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/node-driver-registrar:v3.29.1\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jan 17 12:20:03.370326 containerd[1451]: time="2025-01-17T12:20:03.370261465Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/node-driver-registrar:v3.29.1: active requests=0, bytes read=10501081" Jan 17 12:20:03.371384 containerd[1451]: time="2025-01-17T12:20:03.371339190Z" level=info msg="ImageCreate event name:\"sha256:8b7d18f262d5cf6a6343578ad0db68a140c4c9989d9e02c58c27cb5d2c70320f\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jan 17 12:20:03.374272 containerd[1451]: time="2025-01-17T12:20:03.374066716Z" level=info 
msg="ImageCreate event name:\"ghcr.io/flatcar/calico/node-driver-registrar@sha256:a338da9488cbaa83c78457c3d7354d84149969c0480e88dd768e036632ff5b76\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jan 17 12:20:03.376012 containerd[1451]: time="2025-01-17T12:20:03.375943332Z" level=info msg="Pulled image \"ghcr.io/flatcar/calico/node-driver-registrar:v3.29.1\" with image id \"sha256:8b7d18f262d5cf6a6343578ad0db68a140c4c9989d9e02c58c27cb5d2c70320f\", repo tag \"ghcr.io/flatcar/calico/node-driver-registrar:v3.29.1\", repo digest \"ghcr.io/flatcar/calico/node-driver-registrar@sha256:a338da9488cbaa83c78457c3d7354d84149969c0480e88dd768e036632ff5b76\", size \"11994117\" in 2.541690066s" Jan 17 12:20:03.376012 containerd[1451]: time="2025-01-17T12:20:03.376019594Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/node-driver-registrar:v3.29.1\" returns image reference \"sha256:8b7d18f262d5cf6a6343578ad0db68a140c4c9989d9e02c58c27cb5d2c70320f\"" Jan 17 12:20:03.378000 containerd[1451]: time="2025-01-17T12:20:03.377948042Z" level=info msg="PullImage \"ghcr.io/flatcar/nginx:latest\"" Jan 17 12:20:03.380495 containerd[1451]: time="2025-01-17T12:20:03.380361292Z" level=info msg="CreateContainer within sandbox \"c524cb1f93747dd163b7185bceb1a7cf7f7de03aeaff730128a1dfaf0a89248f\" for container &ContainerMetadata{Name:csi-node-driver-registrar,Attempt:0,}" Jan 17 12:20:03.400941 containerd[1451]: time="2025-01-17T12:20:03.400869241Z" level=info msg="CreateContainer within sandbox \"c524cb1f93747dd163b7185bceb1a7cf7f7de03aeaff730128a1dfaf0a89248f\" for &ContainerMetadata{Name:csi-node-driver-registrar,Attempt:0,} returns container id \"77a9a07efd8900308ebbd4ee13f478f8ed0ea4d964ad01f493b88b88ad8f83c5\"" Jan 17 12:20:03.401688 containerd[1451]: time="2025-01-17T12:20:03.401625434Z" level=info msg="StartContainer for \"77a9a07efd8900308ebbd4ee13f478f8ed0ea4d964ad01f493b88b88ad8f83c5\"" Jan 17 12:20:03.439609 systemd[1]: Started 
cri-containerd-77a9a07efd8900308ebbd4ee13f478f8ed0ea4d964ad01f493b88b88ad8f83c5.scope - libcontainer container 77a9a07efd8900308ebbd4ee13f478f8ed0ea4d964ad01f493b88b88ad8f83c5. Jan 17 12:20:03.489301 containerd[1451]: time="2025-01-17T12:20:03.489234466Z" level=info msg="StartContainer for \"77a9a07efd8900308ebbd4ee13f478f8ed0ea4d964ad01f493b88b88ad8f83c5\" returns successfully" Jan 17 12:20:04.287451 kubelet[1766]: E0117 12:20:04.287298 1766 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Jan 17 12:20:04.374167 systemd-networkd[1355]: cali38efd43bc1a: Gained IPv6LL Jan 17 12:20:04.425834 kubelet[1766]: I0117 12:20:04.425605 1766 csi_plugin.go:100] kubernetes.io/csi: Trying to validate a new CSI Driver with name: csi.tigera.io endpoint: /var/lib/kubelet/plugins/csi.tigera.io/csi.sock versions: 1.0.0 Jan 17 12:20:04.425834 kubelet[1766]: I0117 12:20:04.425754 1766 csi_plugin.go:113] kubernetes.io/csi: Register new plugin with name: csi.tigera.io at endpoint: /var/lib/kubelet/plugins/csi.tigera.io/csi.sock Jan 17 12:20:05.288324 kubelet[1766]: E0117 12:20:05.288226 1766 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Jan 17 12:20:06.291136 kubelet[1766]: E0117 12:20:06.291028 1766 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Jan 17 12:20:07.291289 kubelet[1766]: E0117 12:20:07.291185 1766 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Jan 17 12:20:08.082530 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount3753165998.mount: Deactivated successfully. 
Jan 17 12:20:08.292164 kubelet[1766]: E0117 12:20:08.291607 1766 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Jan 17 12:20:09.258721 kubelet[1766]: E0117 12:20:09.258308 1766 file.go:104] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Jan 17 12:20:09.291909 kubelet[1766]: E0117 12:20:09.291852 1766 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Jan 17 12:20:09.696832 containerd[1451]: time="2025-01-17T12:20:09.696673770Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/nginx:latest\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jan 17 12:20:09.698568 containerd[1451]: time="2025-01-17T12:20:09.698487007Z" level=info msg="stop pulling image ghcr.io/flatcar/nginx:latest: active requests=0, bytes read=71036018" Jan 17 12:20:09.700267 containerd[1451]: time="2025-01-17T12:20:09.699556808Z" level=info msg="ImageCreate event name:\"sha256:29ef6eaebfc53650f3a4609edbf9d35e866f56b2c5e01d32d93439031b300f0b\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jan 17 12:20:09.703799 containerd[1451]: time="2025-01-17T12:20:09.703702733Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/nginx@sha256:eca1d1ff18c7af45f86b7e0b572090f563a676ddca3da2ecff678390366335ad\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jan 17 12:20:09.705182 containerd[1451]: time="2025-01-17T12:20:09.705123558Z" level=info msg="Pulled image \"ghcr.io/flatcar/nginx:latest\" with image id \"sha256:29ef6eaebfc53650f3a4609edbf9d35e866f56b2c5e01d32d93439031b300f0b\", repo tag \"ghcr.io/flatcar/nginx:latest\", repo digest \"ghcr.io/flatcar/nginx@sha256:eca1d1ff18c7af45f86b7e0b572090f563a676ddca3da2ecff678390366335ad\", size \"71035896\" in 6.327120483s" Jan 17 12:20:09.705182 containerd[1451]: time="2025-01-17T12:20:09.705182390Z" level=info msg="PullImage 
\"ghcr.io/flatcar/nginx:latest\" returns image reference \"sha256:29ef6eaebfc53650f3a4609edbf9d35e866f56b2c5e01d32d93439031b300f0b\"" Jan 17 12:20:09.724309 containerd[1451]: time="2025-01-17T12:20:09.724246745Z" level=info msg="CreateContainer within sandbox \"05e07427156434f9fd64c41b88da064aefbb6250c5ee2f0b15097b8dd21383cf\" for container &ContainerMetadata{Name:nginx,Attempt:0,}" Jan 17 12:20:09.751318 containerd[1451]: time="2025-01-17T12:20:09.751124521Z" level=info msg="CreateContainer within sandbox \"05e07427156434f9fd64c41b88da064aefbb6250c5ee2f0b15097b8dd21383cf\" for &ContainerMetadata{Name:nginx,Attempt:0,} returns container id \"b0efeba407ceb1e178e5966540a2fca6219e9b4257edf29cd75c6dfe43c220c1\"" Jan 17 12:20:09.752813 containerd[1451]: time="2025-01-17T12:20:09.752267972Z" level=info msg="StartContainer for \"b0efeba407ceb1e178e5966540a2fca6219e9b4257edf29cd75c6dfe43c220c1\"" Jan 17 12:20:09.804726 systemd[1]: Started cri-containerd-b0efeba407ceb1e178e5966540a2fca6219e9b4257edf29cd75c6dfe43c220c1.scope - libcontainer container b0efeba407ceb1e178e5966540a2fca6219e9b4257edf29cd75c6dfe43c220c1. 
Jan 17 12:20:09.842608 containerd[1451]: time="2025-01-17T12:20:09.842555301Z" level=info msg="StartContainer for \"b0efeba407ceb1e178e5966540a2fca6219e9b4257edf29cd75c6dfe43c220c1\" returns successfully" Jan 17 12:20:10.292925 kubelet[1766]: E0117 12:20:10.292823 1766 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Jan 17 12:20:10.631388 kubelet[1766]: I0117 12:20:10.631285 1766 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="default/nginx-deployment-8587fbcb89-d5dcn" podStartSLOduration=15.231887775 podStartE2EDuration="21.631262033s" podCreationTimestamp="2025-01-17 12:19:49 +0000 UTC" firstStartedPulling="2025-01-17 12:20:03.307967855 +0000 UTC m=+34.902636487" lastFinishedPulling="2025-01-17 12:20:09.707342122 +0000 UTC m=+41.302010745" observedRunningTime="2025-01-17 12:20:10.630542579 +0000 UTC m=+42.225211204" watchObservedRunningTime="2025-01-17 12:20:10.631262033 +0000 UTC m=+42.225930641" Jan 17 12:20:10.632075 kubelet[1766]: I0117 12:20:10.631908 1766 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="calico-system/csi-node-driver-s7bwc" podStartSLOduration=37.200597918 podStartE2EDuration="41.631896041s" podCreationTimestamp="2025-01-17 12:19:29 +0000 UTC" firstStartedPulling="2025-01-17 12:19:58.945851029 +0000 UTC m=+30.540519635" lastFinishedPulling="2025-01-17 12:20:03.377149129 +0000 UTC m=+34.971817758" observedRunningTime="2025-01-17 12:20:03.641642221 +0000 UTC m=+35.236310853" watchObservedRunningTime="2025-01-17 12:20:10.631896041 +0000 UTC m=+42.226564678" Jan 17 12:20:11.293483 kubelet[1766]: E0117 12:20:11.293409 1766 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Jan 17 12:20:12.294700 kubelet[1766]: E0117 12:20:12.294621 1766 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Jan 17 
12:20:13.295848 kubelet[1766]: E0117 12:20:13.295722 1766 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Jan 17 12:20:14.296020 kubelet[1766]: E0117 12:20:14.295931 1766 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Jan 17 12:20:15.296808 kubelet[1766]: E0117 12:20:15.296698 1766 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Jan 17 12:20:16.297051 kubelet[1766]: E0117 12:20:16.296931 1766 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Jan 17 12:20:17.297701 kubelet[1766]: E0117 12:20:17.297621 1766 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Jan 17 12:20:18.254022 systemd[1]: Created slice kubepods-besteffort-pod011f3f6b_77b6_4ee9_9ac5_414aa3c87a61.slice - libcontainer container kubepods-besteffort-pod011f3f6b_77b6_4ee9_9ac5_414aa3c87a61.slice. 
Jan 17 12:20:18.260597 kubelet[1766]: I0117 12:20:18.260513 1766 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-k7tf7\" (UniqueName: \"kubernetes.io/projected/011f3f6b-77b6-4ee9-9ac5-414aa3c87a61-kube-api-access-k7tf7\") pod \"nfs-server-provisioner-0\" (UID: \"011f3f6b-77b6-4ee9-9ac5-414aa3c87a61\") " pod="default/nfs-server-provisioner-0" Jan 17 12:20:18.260791 kubelet[1766]: I0117 12:20:18.260678 1766 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"data\" (UniqueName: \"kubernetes.io/empty-dir/011f3f6b-77b6-4ee9-9ac5-414aa3c87a61-data\") pod \"nfs-server-provisioner-0\" (UID: \"011f3f6b-77b6-4ee9-9ac5-414aa3c87a61\") " pod="default/nfs-server-provisioner-0" Jan 17 12:20:18.298377 kubelet[1766]: E0117 12:20:18.298291 1766 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Jan 17 12:20:18.562572 containerd[1451]: time="2025-01-17T12:20:18.562019513Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:nfs-server-provisioner-0,Uid:011f3f6b-77b6-4ee9-9ac5-414aa3c87a61,Namespace:default,Attempt:0,}" Jan 17 12:20:18.889472 systemd-networkd[1355]: cali60e51b789ff: Link UP Jan 17 12:20:18.889888 systemd-networkd[1355]: cali60e51b789ff: Gained carrier Jan 17 12:20:18.920827 containerd[1451]: 2025-01-17 12:20:18.658 [INFO][3164] cni-plugin/plugin.go 325: Calico CNI found existing endpoint: &{{WorkloadEndpoint projectcalico.org/v3} {143.244.184.73-k8s-nfs--server--provisioner--0-eth0 nfs-server-provisioner- default 011f3f6b-77b6-4ee9-9ac5-414aa3c87a61 1428 0 2025-01-17 12:20:18 +0000 UTC map[app:nfs-server-provisioner apps.kubernetes.io/pod-index:0 chart:nfs-server-provisioner-1.8.0 controller-revision-hash:nfs-server-provisioner-d5cbb7f57 heritage:Helm projectcalico.org/namespace:default projectcalico.org/orchestrator:k8s projectcalico.org/serviceaccount:nfs-server-provisioner 
release:nfs-server-provisioner statefulset.kubernetes.io/pod-name:nfs-server-provisioner-0] map[] [] [] []} {k8s 143.244.184.73 nfs-server-provisioner-0 eth0 nfs-server-provisioner [] [] [kns.default ksa.default.nfs-server-provisioner] cali60e51b789ff [{nfs TCP 2049 0 } {nfs-udp UDP 2049 0 } {nlockmgr TCP 32803 0 } {nlockmgr-udp UDP 32803 0 } {mountd TCP 20048 0 } {mountd-udp UDP 20048 0 } {rquotad TCP 875 0 } {rquotad-udp UDP 875 0 } {rpcbind TCP 111 0 } {rpcbind-udp UDP 111 0 } {statd TCP 662 0 } {statd-udp UDP 662 0 }] []}} ContainerID="61a874917ff25166bc3fb27e6d63ee9e5225603bd12ca49d838e55ede95bc635" Namespace="default" Pod="nfs-server-provisioner-0" WorkloadEndpoint="143.244.184.73-k8s-nfs--server--provisioner--0-" Jan 17 12:20:18.920827 containerd[1451]: 2025-01-17 12:20:18.658 [INFO][3164] cni-plugin/k8s.go 77: Extracted identifiers for CmdAddK8s ContainerID="61a874917ff25166bc3fb27e6d63ee9e5225603bd12ca49d838e55ede95bc635" Namespace="default" Pod="nfs-server-provisioner-0" WorkloadEndpoint="143.244.184.73-k8s-nfs--server--provisioner--0-eth0" Jan 17 12:20:18.920827 containerd[1451]: 2025-01-17 12:20:18.714 [INFO][3170] ipam/ipam_plugin.go 225: Calico CNI IPAM request count IPv4=1 IPv6=0 ContainerID="61a874917ff25166bc3fb27e6d63ee9e5225603bd12ca49d838e55ede95bc635" HandleID="k8s-pod-network.61a874917ff25166bc3fb27e6d63ee9e5225603bd12ca49d838e55ede95bc635" Workload="143.244.184.73-k8s-nfs--server--provisioner--0-eth0" Jan 17 12:20:18.920827 containerd[1451]: 2025-01-17 12:20:18.742 [INFO][3170] ipam/ipam_plugin.go 265: Auto assigning IP ContainerID="61a874917ff25166bc3fb27e6d63ee9e5225603bd12ca49d838e55ede95bc635" HandleID="k8s-pod-network.61a874917ff25166bc3fb27e6d63ee9e5225603bd12ca49d838e55ede95bc635" Workload="143.244.184.73-k8s-nfs--server--provisioner--0-eth0" assignArgs=ipam.AutoAssignArgs{Num4:1, Num6:0, HandleID:(*string)(0xc000290b70), Attrs:map[string]string{"namespace":"default", "node":"143.244.184.73", "pod":"nfs-server-provisioner-0", 
"timestamp":"2025-01-17 12:20:18.714718563 +0000 UTC"}, Hostname:"143.244.184.73", IPv4Pools:[]net.IPNet{}, IPv6Pools:[]net.IPNet{}, MaxBlocksPerHost:0, HostReservedAttrIPv4s:(*ipam.HostReservedAttr)(nil), HostReservedAttrIPv6s:(*ipam.HostReservedAttr)(nil), IntendedUse:"Workload"} Jan 17 12:20:18.920827 containerd[1451]: 2025-01-17 12:20:18.742 [INFO][3170] ipam/ipam_plugin.go 353: About to acquire host-wide IPAM lock. Jan 17 12:20:18.920827 containerd[1451]: 2025-01-17 12:20:18.742 [INFO][3170] ipam/ipam_plugin.go 368: Acquired host-wide IPAM lock. Jan 17 12:20:18.920827 containerd[1451]: 2025-01-17 12:20:18.742 [INFO][3170] ipam/ipam.go 107: Auto-assign 1 ipv4, 0 ipv6 addrs for host '143.244.184.73' Jan 17 12:20:18.920827 containerd[1451]: 2025-01-17 12:20:18.754 [INFO][3170] ipam/ipam.go 660: Looking up existing affinities for host handle="k8s-pod-network.61a874917ff25166bc3fb27e6d63ee9e5225603bd12ca49d838e55ede95bc635" host="143.244.184.73" Jan 17 12:20:18.920827 containerd[1451]: 2025-01-17 12:20:18.773 [INFO][3170] ipam/ipam.go 372: Looking up existing affinities for host host="143.244.184.73" Jan 17 12:20:18.920827 containerd[1451]: 2025-01-17 12:20:18.806 [INFO][3170] ipam/ipam.go 489: Trying affinity for 192.168.87.128/26 host="143.244.184.73" Jan 17 12:20:18.920827 containerd[1451]: 2025-01-17 12:20:18.815 [INFO][3170] ipam/ipam.go 155: Attempting to load block cidr=192.168.87.128/26 host="143.244.184.73" Jan 17 12:20:18.920827 containerd[1451]: 2025-01-17 12:20:18.822 [INFO][3170] ipam/ipam.go 232: Affinity is confirmed and block has been loaded cidr=192.168.87.128/26 host="143.244.184.73" Jan 17 12:20:18.920827 containerd[1451]: 2025-01-17 12:20:18.822 [INFO][3170] ipam/ipam.go 1180: Attempting to assign 1 addresses from block block=192.168.87.128/26 handle="k8s-pod-network.61a874917ff25166bc3fb27e6d63ee9e5225603bd12ca49d838e55ede95bc635" host="143.244.184.73" Jan 17 12:20:18.920827 containerd[1451]: 2025-01-17 12:20:18.834 [INFO][3170] ipam/ipam.go 
1685: Creating new handle: k8s-pod-network.61a874917ff25166bc3fb27e6d63ee9e5225603bd12ca49d838e55ede95bc635 Jan 17 12:20:18.920827 containerd[1451]: 2025-01-17 12:20:18.849 [INFO][3170] ipam/ipam.go 1203: Writing block in order to claim IPs block=192.168.87.128/26 handle="k8s-pod-network.61a874917ff25166bc3fb27e6d63ee9e5225603bd12ca49d838e55ede95bc635" host="143.244.184.73" Jan 17 12:20:18.920827 containerd[1451]: 2025-01-17 12:20:18.881 [INFO][3170] ipam/ipam.go 1216: Successfully claimed IPs: [192.168.87.131/26] block=192.168.87.128/26 handle="k8s-pod-network.61a874917ff25166bc3fb27e6d63ee9e5225603bd12ca49d838e55ede95bc635" host="143.244.184.73" Jan 17 12:20:18.920827 containerd[1451]: 2025-01-17 12:20:18.881 [INFO][3170] ipam/ipam.go 847: Auto-assigned 1 out of 1 IPv4s: [192.168.87.131/26] handle="k8s-pod-network.61a874917ff25166bc3fb27e6d63ee9e5225603bd12ca49d838e55ede95bc635" host="143.244.184.73" Jan 17 12:20:18.920827 containerd[1451]: 2025-01-17 12:20:18.881 [INFO][3170] ipam/ipam_plugin.go 374: Released host-wide IPAM lock. 
Jan 17 12:20:18.920827 containerd[1451]: 2025-01-17 12:20:18.881 [INFO][3170] ipam/ipam_plugin.go 283: Calico CNI IPAM assigned addresses IPv4=[192.168.87.131/26] IPv6=[] ContainerID="61a874917ff25166bc3fb27e6d63ee9e5225603bd12ca49d838e55ede95bc635" HandleID="k8s-pod-network.61a874917ff25166bc3fb27e6d63ee9e5225603bd12ca49d838e55ede95bc635" Workload="143.244.184.73-k8s-nfs--server--provisioner--0-eth0" Jan 17 12:20:18.921593 containerd[1451]: 2025-01-17 12:20:18.884 [INFO][3164] cni-plugin/k8s.go 386: Populated endpoint ContainerID="61a874917ff25166bc3fb27e6d63ee9e5225603bd12ca49d838e55ede95bc635" Namespace="default" Pod="nfs-server-provisioner-0" WorkloadEndpoint="143.244.184.73-k8s-nfs--server--provisioner--0-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"143.244.184.73-k8s-nfs--server--provisioner--0-eth0", GenerateName:"nfs-server-provisioner-", Namespace:"default", SelfLink:"", UID:"011f3f6b-77b6-4ee9-9ac5-414aa3c87a61", ResourceVersion:"1428", Generation:0, CreationTimestamp:time.Date(2025, time.January, 17, 12, 20, 18, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"app":"nfs-server-provisioner", "apps.kubernetes.io/pod-index":"0", "chart":"nfs-server-provisioner-1.8.0", "controller-revision-hash":"nfs-server-provisioner-d5cbb7f57", "heritage":"Helm", "projectcalico.org/namespace":"default", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"nfs-server-provisioner", "release":"nfs-server-provisioner", "statefulset.kubernetes.io/pod-name":"nfs-server-provisioner-0"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"143.244.184.73", ContainerID:"", Pod:"nfs-server-provisioner-0", Endpoint:"eth0", 
ServiceAccountName:"nfs-server-provisioner", IPNetworks:[]string{"192.168.87.131/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.default", "ksa.default.nfs-server-provisioner"}, InterfaceName:"cali60e51b789ff", MAC:"", Ports:[]v3.WorkloadEndpointPort{v3.WorkloadEndpointPort{Name:"nfs", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x801, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"nfs-udp", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"UDP"}, Port:0x801, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"nlockmgr", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x8023, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"nlockmgr-udp", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"UDP"}, Port:0x8023, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"mountd", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x4e50, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"mountd-udp", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"UDP"}, Port:0x4e50, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"rquotad", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x36b, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"rquotad-udp", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"UDP"}, Port:0x36b, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"rpcbind", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x6f, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"rpcbind-udp", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"UDP"}, Port:0x6f, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"statd", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x296, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"statd-udp", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"UDP"}, Port:0x296, 
HostPort:0x0, HostIP:""}}, AllowSpoofedSourcePrefixes:[]string(nil)}} Jan 17 12:20:18.921593 containerd[1451]: 2025-01-17 12:20:18.884 [INFO][3164] cni-plugin/k8s.go 387: Calico CNI using IPs: [192.168.87.131/32] ContainerID="61a874917ff25166bc3fb27e6d63ee9e5225603bd12ca49d838e55ede95bc635" Namespace="default" Pod="nfs-server-provisioner-0" WorkloadEndpoint="143.244.184.73-k8s-nfs--server--provisioner--0-eth0" Jan 17 12:20:18.921593 containerd[1451]: 2025-01-17 12:20:18.884 [INFO][3164] cni-plugin/dataplane_linux.go 69: Setting the host side veth name to cali60e51b789ff ContainerID="61a874917ff25166bc3fb27e6d63ee9e5225603bd12ca49d838e55ede95bc635" Namespace="default" Pod="nfs-server-provisioner-0" WorkloadEndpoint="143.244.184.73-k8s-nfs--server--provisioner--0-eth0" Jan 17 12:20:18.921593 containerd[1451]: 2025-01-17 12:20:18.890 [INFO][3164] cni-plugin/dataplane_linux.go 508: Disabling IPv4 forwarding ContainerID="61a874917ff25166bc3fb27e6d63ee9e5225603bd12ca49d838e55ede95bc635" Namespace="default" Pod="nfs-server-provisioner-0" WorkloadEndpoint="143.244.184.73-k8s-nfs--server--provisioner--0-eth0" Jan 17 12:20:18.921803 containerd[1451]: 2025-01-17 12:20:18.891 [INFO][3164] cni-plugin/k8s.go 414: Added Mac, interface name, and active container ID to endpoint ContainerID="61a874917ff25166bc3fb27e6d63ee9e5225603bd12ca49d838e55ede95bc635" Namespace="default" Pod="nfs-server-provisioner-0" WorkloadEndpoint="143.244.184.73-k8s-nfs--server--provisioner--0-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"143.244.184.73-k8s-nfs--server--provisioner--0-eth0", GenerateName:"nfs-server-provisioner-", Namespace:"default", SelfLink:"", UID:"011f3f6b-77b6-4ee9-9ac5-414aa3c87a61", ResourceVersion:"1428", Generation:0, CreationTimestamp:time.Date(2025, time.January, 17, 12, 20, 18, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), 
Labels:map[string]string{"app":"nfs-server-provisioner", "apps.kubernetes.io/pod-index":"0", "chart":"nfs-server-provisioner-1.8.0", "controller-revision-hash":"nfs-server-provisioner-d5cbb7f57", "heritage":"Helm", "projectcalico.org/namespace":"default", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"nfs-server-provisioner", "release":"nfs-server-provisioner", "statefulset.kubernetes.io/pod-name":"nfs-server-provisioner-0"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"143.244.184.73", ContainerID:"61a874917ff25166bc3fb27e6d63ee9e5225603bd12ca49d838e55ede95bc635", Pod:"nfs-server-provisioner-0", Endpoint:"eth0", ServiceAccountName:"nfs-server-provisioner", IPNetworks:[]string{"192.168.87.131/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.default", "ksa.default.nfs-server-provisioner"}, InterfaceName:"cali60e51b789ff", MAC:"16:94:c0:88:d8:37", Ports:[]v3.WorkloadEndpointPort{v3.WorkloadEndpointPort{Name:"nfs", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x801, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"nfs-udp", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"UDP"}, Port:0x801, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"nlockmgr", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x8023, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"nlockmgr-udp", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"UDP"}, Port:0x8023, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"mountd", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x4e50, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"mountd-udp", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"UDP"}, Port:0x4e50, HostPort:0x0, HostIP:""}, 
v3.WorkloadEndpointPort{Name:"rquotad", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x36b, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"rquotad-udp", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"UDP"}, Port:0x36b, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"rpcbind", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x6f, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"rpcbind-udp", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"UDP"}, Port:0x6f, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"statd", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x296, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"statd-udp", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"UDP"}, Port:0x296, HostPort:0x0, HostIP:""}}, AllowSpoofedSourcePrefixes:[]string(nil)}} Jan 17 12:20:18.921803 containerd[1451]: 2025-01-17 12:20:18.918 [INFO][3164] cni-plugin/k8s.go 500: Wrote updated endpoint to datastore ContainerID="61a874917ff25166bc3fb27e6d63ee9e5225603bd12ca49d838e55ede95bc635" Namespace="default" Pod="nfs-server-provisioner-0" WorkloadEndpoint="143.244.184.73-k8s-nfs--server--provisioner--0-eth0" Jan 17 12:20:18.957189 containerd[1451]: time="2025-01-17T12:20:18.957035002Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Jan 17 12:20:18.957853 containerd[1451]: time="2025-01-17T12:20:18.957462699Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Jan 17 12:20:18.957853 containerd[1451]: time="2025-01-17T12:20:18.957492859Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." 
runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Jan 17 12:20:18.958533 containerd[1451]: time="2025-01-17T12:20:18.958374093Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Jan 17 12:20:19.002112 systemd[1]: Started cri-containerd-61a874917ff25166bc3fb27e6d63ee9e5225603bd12ca49d838e55ede95bc635.scope - libcontainer container 61a874917ff25166bc3fb27e6d63ee9e5225603bd12ca49d838e55ede95bc635. Jan 17 12:20:19.064487 containerd[1451]: time="2025-01-17T12:20:19.064413711Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:nfs-server-provisioner-0,Uid:011f3f6b-77b6-4ee9-9ac5-414aa3c87a61,Namespace:default,Attempt:0,} returns sandbox id \"61a874917ff25166bc3fb27e6d63ee9e5225603bd12ca49d838e55ede95bc635\"" Jan 17 12:20:19.069298 containerd[1451]: time="2025-01-17T12:20:19.069109853Z" level=info msg="PullImage \"registry.k8s.io/sig-storage/nfs-provisioner:v4.0.8\"" Jan 17 12:20:19.300097 kubelet[1766]: E0117 12:20:19.299168 1766 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Jan 17 12:20:19.386214 systemd[1]: run-containerd-runc-k8s.io-61a874917ff25166bc3fb27e6d63ee9e5225603bd12ca49d838e55ede95bc635-runc.wT4BR2.mount: Deactivated successfully. Jan 17 12:20:19.991137 systemd-networkd[1355]: cali60e51b789ff: Gained IPv6LL Jan 17 12:20:20.299637 kubelet[1766]: E0117 12:20:20.299380 1766 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Jan 17 12:20:21.300184 kubelet[1766]: E0117 12:20:21.300101 1766 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Jan 17 12:20:21.622790 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount1611976855.mount: Deactivated successfully. 
Jan 17 12:20:22.300784 kubelet[1766]: E0117 12:20:22.300671 1766 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Jan 17 12:20:23.302203 kubelet[1766]: E0117 12:20:23.302151 1766 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Jan 17 12:20:24.189833 containerd[1451]: time="2025-01-17T12:20:24.189109861Z" level=info msg="ImageCreate event name:\"registry.k8s.io/sig-storage/nfs-provisioner:v4.0.8\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jan 17 12:20:24.191629 containerd[1451]: time="2025-01-17T12:20:24.191190956Z" level=info msg="stop pulling image registry.k8s.io/sig-storage/nfs-provisioner:v4.0.8: active requests=0, bytes read=91039406" Jan 17 12:20:24.192802 containerd[1451]: time="2025-01-17T12:20:24.192693982Z" level=info msg="ImageCreate event name:\"sha256:fd0b16f70b66b72bcb2f91d556fa33eba02729c44ffc5f2c16130e7f9fbed3c4\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jan 17 12:20:24.196056 containerd[1451]: time="2025-01-17T12:20:24.195987685Z" level=info msg="ImageCreate event name:\"registry.k8s.io/sig-storage/nfs-provisioner@sha256:c825f3d5e28bde099bd7a3daace28772d412c9157ad47fa752a9ad0baafc118d\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jan 17 12:20:24.197481 containerd[1451]: time="2025-01-17T12:20:24.197310049Z" level=info msg="Pulled image \"registry.k8s.io/sig-storage/nfs-provisioner:v4.0.8\" with image id \"sha256:fd0b16f70b66b72bcb2f91d556fa33eba02729c44ffc5f2c16130e7f9fbed3c4\", repo tag \"registry.k8s.io/sig-storage/nfs-provisioner:v4.0.8\", repo digest \"registry.k8s.io/sig-storage/nfs-provisioner@sha256:c825f3d5e28bde099bd7a3daace28772d412c9157ad47fa752a9ad0baafc118d\", size \"91036984\" in 5.128142788s" Jan 17 12:20:24.197481 containerd[1451]: time="2025-01-17T12:20:24.197360919Z" level=info msg="PullImage \"registry.k8s.io/sig-storage/nfs-provisioner:v4.0.8\" 
returns image reference \"sha256:fd0b16f70b66b72bcb2f91d556fa33eba02729c44ffc5f2c16130e7f9fbed3c4\"" Jan 17 12:20:24.302457 kubelet[1766]: E0117 12:20:24.302373 1766 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Jan 17 12:20:24.302810 containerd[1451]: time="2025-01-17T12:20:24.302382115Z" level=info msg="CreateContainer within sandbox \"61a874917ff25166bc3fb27e6d63ee9e5225603bd12ca49d838e55ede95bc635\" for container &ContainerMetadata{Name:nfs-server-provisioner,Attempt:0,}" Jan 17 12:20:24.319760 containerd[1451]: time="2025-01-17T12:20:24.319700767Z" level=info msg="CreateContainer within sandbox \"61a874917ff25166bc3fb27e6d63ee9e5225603bd12ca49d838e55ede95bc635\" for &ContainerMetadata{Name:nfs-server-provisioner,Attempt:0,} returns container id \"a4e408a4802586ae60e3213db4f1bf6e5e929b7b633d8557cb94c830318e76db\"" Jan 17 12:20:24.327249 containerd[1451]: time="2025-01-17T12:20:24.327176709Z" level=info msg="StartContainer for \"a4e408a4802586ae60e3213db4f1bf6e5e929b7b633d8557cb94c830318e76db\"" Jan 17 12:20:24.389175 systemd[1]: Started cri-containerd-a4e408a4802586ae60e3213db4f1bf6e5e929b7b633d8557cb94c830318e76db.scope - libcontainer container a4e408a4802586ae60e3213db4f1bf6e5e929b7b633d8557cb94c830318e76db. 
Jan 17 12:20:24.437088 containerd[1451]: time="2025-01-17T12:20:24.436930946Z" level=info msg="StartContainer for \"a4e408a4802586ae60e3213db4f1bf6e5e929b7b633d8557cb94c830318e76db\" returns successfully" Jan 17 12:20:24.745622 kubelet[1766]: I0117 12:20:24.745492 1766 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="default/nfs-server-provisioner-0" podStartSLOduration=1.590069936 podStartE2EDuration="6.745451866s" podCreationTimestamp="2025-01-17 12:20:18 +0000 UTC" firstStartedPulling="2025-01-17 12:20:19.068348356 +0000 UTC m=+50.663016992" lastFinishedPulling="2025-01-17 12:20:24.223730309 +0000 UTC m=+55.818398922" observedRunningTime="2025-01-17 12:20:24.738209776 +0000 UTC m=+56.332878425" watchObservedRunningTime="2025-01-17 12:20:24.745451866 +0000 UTC m=+56.340120552" Jan 17 12:20:25.303399 kubelet[1766]: E0117 12:20:25.303306 1766 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Jan 17 12:20:26.303799 kubelet[1766]: E0117 12:20:26.303729 1766 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Jan 17 12:20:27.304011 kubelet[1766]: E0117 12:20:27.303910 1766 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Jan 17 12:20:28.304653 kubelet[1766]: E0117 12:20:28.304573 1766 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Jan 17 12:20:29.260064 kubelet[1766]: E0117 12:20:29.259964 1766 file.go:104] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Jan 17 12:20:29.288708 containerd[1451]: time="2025-01-17T12:20:29.288646536Z" level=info msg="StopPodSandbox for \"80767f3c6ffa8453fc21b353d588ce9dd6bb05135810695a509e85aa6fab76c7\"" Jan 17 12:20:29.305435 kubelet[1766]: E0117 12:20:29.305366 1766 file_linux.go:61] "Unable to read 
config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Jan 17 12:20:29.413224 containerd[1451]: 2025-01-17 12:20:29.356 [WARNING][3345] cni-plugin/k8s.go 572: CNI_CONTAINERID does not match WorkloadEndpoint ContainerID, don't delete WEP. ContainerID="80767f3c6ffa8453fc21b353d588ce9dd6bb05135810695a509e85aa6fab76c7" WorkloadEndpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"143.244.184.73-k8s-csi--node--driver--s7bwc-eth0", GenerateName:"csi-node-driver-", Namespace:"calico-system", SelfLink:"", UID:"d94ec394-9eb7-4930-b60e-267badfa15a7", ResourceVersion:"1359", Generation:0, CreationTimestamp:time.Date(2025, time.January, 17, 12, 19, 29, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"app.kubernetes.io/name":"csi-node-driver", "controller-revision-hash":"56747c9949", "k8s-app":"csi-node-driver", "name":"csi-node-driver", "pod-template-generation":"1", "projectcalico.org/namespace":"calico-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"csi-node-driver"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"143.244.184.73", ContainerID:"c524cb1f93747dd163b7185bceb1a7cf7f7de03aeaff730128a1dfaf0a89248f", Pod:"csi-node-driver-s7bwc", Endpoint:"eth0", ServiceAccountName:"csi-node-driver", IPNetworks:[]string{"192.168.87.129/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-system", "ksa.calico-system.csi-node-driver"}, InterfaceName:"cali18a540e13e2", MAC:"", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil)}} Jan 17 12:20:29.413224 containerd[1451]: 2025-01-17 12:20:29.357 [INFO][3345] cni-plugin/k8s.go 608: Cleaning up netns 
ContainerID="80767f3c6ffa8453fc21b353d588ce9dd6bb05135810695a509e85aa6fab76c7" Jan 17 12:20:29.413224 containerd[1451]: 2025-01-17 12:20:29.357 [INFO][3345] cni-plugin/dataplane_linux.go 555: CleanUpNamespace called with no netns name, ignoring. ContainerID="80767f3c6ffa8453fc21b353d588ce9dd6bb05135810695a509e85aa6fab76c7" iface="eth0" netns="" Jan 17 12:20:29.413224 containerd[1451]: 2025-01-17 12:20:29.357 [INFO][3345] cni-plugin/k8s.go 615: Releasing IP address(es) ContainerID="80767f3c6ffa8453fc21b353d588ce9dd6bb05135810695a509e85aa6fab76c7" Jan 17 12:20:29.413224 containerd[1451]: 2025-01-17 12:20:29.357 [INFO][3345] cni-plugin/utils.go 188: Calico CNI releasing IP address ContainerID="80767f3c6ffa8453fc21b353d588ce9dd6bb05135810695a509e85aa6fab76c7" Jan 17 12:20:29.413224 containerd[1451]: 2025-01-17 12:20:29.390 [INFO][3350] ipam/ipam_plugin.go 412: Releasing address using handleID ContainerID="80767f3c6ffa8453fc21b353d588ce9dd6bb05135810695a509e85aa6fab76c7" HandleID="k8s-pod-network.80767f3c6ffa8453fc21b353d588ce9dd6bb05135810695a509e85aa6fab76c7" Workload="143.244.184.73-k8s-csi--node--driver--s7bwc-eth0" Jan 17 12:20:29.413224 containerd[1451]: 2025-01-17 12:20:29.391 [INFO][3350] ipam/ipam_plugin.go 353: About to acquire host-wide IPAM lock. Jan 17 12:20:29.413224 containerd[1451]: 2025-01-17 12:20:29.391 [INFO][3350] ipam/ipam_plugin.go 368: Acquired host-wide IPAM lock. Jan 17 12:20:29.413224 containerd[1451]: 2025-01-17 12:20:29.402 [WARNING][3350] ipam/ipam_plugin.go 429: Asked to release address but it doesn't exist. 
Ignoring ContainerID="80767f3c6ffa8453fc21b353d588ce9dd6bb05135810695a509e85aa6fab76c7" HandleID="k8s-pod-network.80767f3c6ffa8453fc21b353d588ce9dd6bb05135810695a509e85aa6fab76c7" Workload="143.244.184.73-k8s-csi--node--driver--s7bwc-eth0" Jan 17 12:20:29.413224 containerd[1451]: 2025-01-17 12:20:29.402 [INFO][3350] ipam/ipam_plugin.go 440: Releasing address using workloadID ContainerID="80767f3c6ffa8453fc21b353d588ce9dd6bb05135810695a509e85aa6fab76c7" HandleID="k8s-pod-network.80767f3c6ffa8453fc21b353d588ce9dd6bb05135810695a509e85aa6fab76c7" Workload="143.244.184.73-k8s-csi--node--driver--s7bwc-eth0" Jan 17 12:20:29.413224 containerd[1451]: 2025-01-17 12:20:29.406 [INFO][3350] ipam/ipam_plugin.go 374: Released host-wide IPAM lock. Jan 17 12:20:29.413224 containerd[1451]: 2025-01-17 12:20:29.410 [INFO][3345] cni-plugin/k8s.go 621: Teardown processing complete. ContainerID="80767f3c6ffa8453fc21b353d588ce9dd6bb05135810695a509e85aa6fab76c7" Jan 17 12:20:29.413224 containerd[1451]: time="2025-01-17T12:20:29.413013168Z" level=info msg="TearDown network for sandbox \"80767f3c6ffa8453fc21b353d588ce9dd6bb05135810695a509e85aa6fab76c7\" successfully" Jan 17 12:20:29.413224 containerd[1451]: time="2025-01-17T12:20:29.413051380Z" level=info msg="StopPodSandbox for \"80767f3c6ffa8453fc21b353d588ce9dd6bb05135810695a509e85aa6fab76c7\" returns successfully" Jan 17 12:20:29.424626 containerd[1451]: time="2025-01-17T12:20:29.421452021Z" level=info msg="RemovePodSandbox for \"80767f3c6ffa8453fc21b353d588ce9dd6bb05135810695a509e85aa6fab76c7\"" Jan 17 12:20:29.424626 containerd[1451]: time="2025-01-17T12:20:29.421519400Z" level=info msg="Forcibly stopping sandbox \"80767f3c6ffa8453fc21b353d588ce9dd6bb05135810695a509e85aa6fab76c7\"" Jan 17 12:20:29.555843 containerd[1451]: 2025-01-17 12:20:29.496 [WARNING][3370] cni-plugin/k8s.go 572: CNI_CONTAINERID does not match WorkloadEndpoint ContainerID, don't delete WEP. 
ContainerID="80767f3c6ffa8453fc21b353d588ce9dd6bb05135810695a509e85aa6fab76c7" WorkloadEndpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"143.244.184.73-k8s-csi--node--driver--s7bwc-eth0", GenerateName:"csi-node-driver-", Namespace:"calico-system", SelfLink:"", UID:"d94ec394-9eb7-4930-b60e-267badfa15a7", ResourceVersion:"1359", Generation:0, CreationTimestamp:time.Date(2025, time.January, 17, 12, 19, 29, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"app.kubernetes.io/name":"csi-node-driver", "controller-revision-hash":"56747c9949", "k8s-app":"csi-node-driver", "name":"csi-node-driver", "pod-template-generation":"1", "projectcalico.org/namespace":"calico-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"csi-node-driver"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"143.244.184.73", ContainerID:"c524cb1f93747dd163b7185bceb1a7cf7f7de03aeaff730128a1dfaf0a89248f", Pod:"csi-node-driver-s7bwc", Endpoint:"eth0", ServiceAccountName:"csi-node-driver", IPNetworks:[]string{"192.168.87.129/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-system", "ksa.calico-system.csi-node-driver"}, InterfaceName:"cali18a540e13e2", MAC:"", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil)}} Jan 17 12:20:29.555843 containerd[1451]: 2025-01-17 12:20:29.496 [INFO][3370] cni-plugin/k8s.go 608: Cleaning up netns ContainerID="80767f3c6ffa8453fc21b353d588ce9dd6bb05135810695a509e85aa6fab76c7" Jan 17 12:20:29.555843 containerd[1451]: 2025-01-17 12:20:29.496 [INFO][3370] cni-plugin/dataplane_linux.go 555: CleanUpNamespace called with no netns name, ignoring. 
ContainerID="80767f3c6ffa8453fc21b353d588ce9dd6bb05135810695a509e85aa6fab76c7" iface="eth0" netns="" Jan 17 12:20:29.555843 containerd[1451]: 2025-01-17 12:20:29.496 [INFO][3370] cni-plugin/k8s.go 615: Releasing IP address(es) ContainerID="80767f3c6ffa8453fc21b353d588ce9dd6bb05135810695a509e85aa6fab76c7" Jan 17 12:20:29.555843 containerd[1451]: 2025-01-17 12:20:29.496 [INFO][3370] cni-plugin/utils.go 188: Calico CNI releasing IP address ContainerID="80767f3c6ffa8453fc21b353d588ce9dd6bb05135810695a509e85aa6fab76c7" Jan 17 12:20:29.555843 containerd[1451]: 2025-01-17 12:20:29.531 [INFO][3376] ipam/ipam_plugin.go 412: Releasing address using handleID ContainerID="80767f3c6ffa8453fc21b353d588ce9dd6bb05135810695a509e85aa6fab76c7" HandleID="k8s-pod-network.80767f3c6ffa8453fc21b353d588ce9dd6bb05135810695a509e85aa6fab76c7" Workload="143.244.184.73-k8s-csi--node--driver--s7bwc-eth0" Jan 17 12:20:29.555843 containerd[1451]: 2025-01-17 12:20:29.531 [INFO][3376] ipam/ipam_plugin.go 353: About to acquire host-wide IPAM lock. Jan 17 12:20:29.555843 containerd[1451]: 2025-01-17 12:20:29.531 [INFO][3376] ipam/ipam_plugin.go 368: Acquired host-wide IPAM lock. Jan 17 12:20:29.555843 containerd[1451]: 2025-01-17 12:20:29.544 [WARNING][3376] ipam/ipam_plugin.go 429: Asked to release address but it doesn't exist. 
Ignoring ContainerID="80767f3c6ffa8453fc21b353d588ce9dd6bb05135810695a509e85aa6fab76c7" HandleID="k8s-pod-network.80767f3c6ffa8453fc21b353d588ce9dd6bb05135810695a509e85aa6fab76c7" Workload="143.244.184.73-k8s-csi--node--driver--s7bwc-eth0" Jan 17 12:20:29.555843 containerd[1451]: 2025-01-17 12:20:29.545 [INFO][3376] ipam/ipam_plugin.go 440: Releasing address using workloadID ContainerID="80767f3c6ffa8453fc21b353d588ce9dd6bb05135810695a509e85aa6fab76c7" HandleID="k8s-pod-network.80767f3c6ffa8453fc21b353d588ce9dd6bb05135810695a509e85aa6fab76c7" Workload="143.244.184.73-k8s-csi--node--driver--s7bwc-eth0" Jan 17 12:20:29.555843 containerd[1451]: 2025-01-17 12:20:29.550 [INFO][3376] ipam/ipam_plugin.go 374: Released host-wide IPAM lock. Jan 17 12:20:29.555843 containerd[1451]: 2025-01-17 12:20:29.552 [INFO][3370] cni-plugin/k8s.go 621: Teardown processing complete. ContainerID="80767f3c6ffa8453fc21b353d588ce9dd6bb05135810695a509e85aa6fab76c7" Jan 17 12:20:29.555843 containerd[1451]: time="2025-01-17T12:20:29.554838488Z" level=info msg="TearDown network for sandbox \"80767f3c6ffa8453fc21b353d588ce9dd6bb05135810695a509e85aa6fab76c7\" successfully" Jan 17 12:20:29.600656 containerd[1451]: time="2025-01-17T12:20:29.598903889Z" level=warning msg="Failed to get podSandbox status for container event for sandboxID \"80767f3c6ffa8453fc21b353d588ce9dd6bb05135810695a509e85aa6fab76c7\": an error occurred when try to find sandbox: not found. Sending the event with nil podSandboxStatus." 
Jan 17 12:20:29.600656 containerd[1451]: time="2025-01-17T12:20:29.599060088Z" level=info msg="RemovePodSandbox \"80767f3c6ffa8453fc21b353d588ce9dd6bb05135810695a509e85aa6fab76c7\" returns successfully" Jan 17 12:20:29.602260 containerd[1451]: time="2025-01-17T12:20:29.602183043Z" level=info msg="StopPodSandbox for \"94fa8b2c94d326883c80470d61d7cc7351dd805727938a68fd4b8e3926d32622\"" Jan 17 12:20:29.747950 containerd[1451]: 2025-01-17 12:20:29.675 [WARNING][3397] cni-plugin/k8s.go 572: CNI_CONTAINERID does not match WorkloadEndpoint ContainerID, don't delete WEP. ContainerID="94fa8b2c94d326883c80470d61d7cc7351dd805727938a68fd4b8e3926d32622" WorkloadEndpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"143.244.184.73-k8s-nginx--deployment--8587fbcb89--d5dcn-eth0", GenerateName:"nginx-deployment-8587fbcb89-", Namespace:"default", SelfLink:"", UID:"fa40673b-0360-409f-9d77-7f3e3e6b869d", ResourceVersion:"1382", Generation:0, CreationTimestamp:time.Date(2025, time.January, 17, 12, 19, 49, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"app":"nginx", "pod-template-hash":"8587fbcb89", "projectcalico.org/namespace":"default", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"default"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"143.244.184.73", ContainerID:"05e07427156434f9fd64c41b88da064aefbb6250c5ee2f0b15097b8dd21383cf", Pod:"nginx-deployment-8587fbcb89-d5dcn", Endpoint:"eth0", ServiceAccountName:"default", IPNetworks:[]string{"192.168.87.130/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.default", "ksa.default.default"}, InterfaceName:"cali38efd43bc1a", MAC:"", 
Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil)}} Jan 17 12:20:29.747950 containerd[1451]: 2025-01-17 12:20:29.676 [INFO][3397] cni-plugin/k8s.go 608: Cleaning up netns ContainerID="94fa8b2c94d326883c80470d61d7cc7351dd805727938a68fd4b8e3926d32622" Jan 17 12:20:29.747950 containerd[1451]: 2025-01-17 12:20:29.676 [INFO][3397] cni-plugin/dataplane_linux.go 555: CleanUpNamespace called with no netns name, ignoring. ContainerID="94fa8b2c94d326883c80470d61d7cc7351dd805727938a68fd4b8e3926d32622" iface="eth0" netns="" Jan 17 12:20:29.747950 containerd[1451]: 2025-01-17 12:20:29.676 [INFO][3397] cni-plugin/k8s.go 615: Releasing IP address(es) ContainerID="94fa8b2c94d326883c80470d61d7cc7351dd805727938a68fd4b8e3926d32622" Jan 17 12:20:29.747950 containerd[1451]: 2025-01-17 12:20:29.676 [INFO][3397] cni-plugin/utils.go 188: Calico CNI releasing IP address ContainerID="94fa8b2c94d326883c80470d61d7cc7351dd805727938a68fd4b8e3926d32622" Jan 17 12:20:29.747950 containerd[1451]: 2025-01-17 12:20:29.726 [INFO][3403] ipam/ipam_plugin.go 412: Releasing address using handleID ContainerID="94fa8b2c94d326883c80470d61d7cc7351dd805727938a68fd4b8e3926d32622" HandleID="k8s-pod-network.94fa8b2c94d326883c80470d61d7cc7351dd805727938a68fd4b8e3926d32622" Workload="143.244.184.73-k8s-nginx--deployment--8587fbcb89--d5dcn-eth0" Jan 17 12:20:29.747950 containerd[1451]: 2025-01-17 12:20:29.727 [INFO][3403] ipam/ipam_plugin.go 353: About to acquire host-wide IPAM lock. Jan 17 12:20:29.747950 containerd[1451]: 2025-01-17 12:20:29.727 [INFO][3403] ipam/ipam_plugin.go 368: Acquired host-wide IPAM lock. Jan 17 12:20:29.747950 containerd[1451]: 2025-01-17 12:20:29.738 [WARNING][3403] ipam/ipam_plugin.go 429: Asked to release address but it doesn't exist. 
Ignoring ContainerID="94fa8b2c94d326883c80470d61d7cc7351dd805727938a68fd4b8e3926d32622" HandleID="k8s-pod-network.94fa8b2c94d326883c80470d61d7cc7351dd805727938a68fd4b8e3926d32622" Workload="143.244.184.73-k8s-nginx--deployment--8587fbcb89--d5dcn-eth0" Jan 17 12:20:29.747950 containerd[1451]: 2025-01-17 12:20:29.738 [INFO][3403] ipam/ipam_plugin.go 440: Releasing address using workloadID ContainerID="94fa8b2c94d326883c80470d61d7cc7351dd805727938a68fd4b8e3926d32622" HandleID="k8s-pod-network.94fa8b2c94d326883c80470d61d7cc7351dd805727938a68fd4b8e3926d32622" Workload="143.244.184.73-k8s-nginx--deployment--8587fbcb89--d5dcn-eth0" Jan 17 12:20:29.747950 containerd[1451]: 2025-01-17 12:20:29.741 [INFO][3403] ipam/ipam_plugin.go 374: Released host-wide IPAM lock. Jan 17 12:20:29.747950 containerd[1451]: 2025-01-17 12:20:29.743 [INFO][3397] cni-plugin/k8s.go 621: Teardown processing complete. ContainerID="94fa8b2c94d326883c80470d61d7cc7351dd805727938a68fd4b8e3926d32622" Jan 17 12:20:29.749288 containerd[1451]: time="2025-01-17T12:20:29.748667633Z" level=info msg="TearDown network for sandbox \"94fa8b2c94d326883c80470d61d7cc7351dd805727938a68fd4b8e3926d32622\" successfully" Jan 17 12:20:29.749288 containerd[1451]: time="2025-01-17T12:20:29.748726259Z" level=info msg="StopPodSandbox for \"94fa8b2c94d326883c80470d61d7cc7351dd805727938a68fd4b8e3926d32622\" returns successfully" Jan 17 12:20:29.750196 containerd[1451]: time="2025-01-17T12:20:29.750144611Z" level=info msg="RemovePodSandbox for \"94fa8b2c94d326883c80470d61d7cc7351dd805727938a68fd4b8e3926d32622\"" Jan 17 12:20:29.755617 containerd[1451]: time="2025-01-17T12:20:29.755523065Z" level=info msg="Forcibly stopping sandbox \"94fa8b2c94d326883c80470d61d7cc7351dd805727938a68fd4b8e3926d32622\"" Jan 17 12:20:29.866578 containerd[1451]: 2025-01-17 12:20:29.812 [WARNING][3422] cni-plugin/k8s.go 572: CNI_CONTAINERID does not match WorkloadEndpoint ContainerID, don't delete WEP. 
ContainerID="94fa8b2c94d326883c80470d61d7cc7351dd805727938a68fd4b8e3926d32622" WorkloadEndpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"143.244.184.73-k8s-nginx--deployment--8587fbcb89--d5dcn-eth0", GenerateName:"nginx-deployment-8587fbcb89-", Namespace:"default", SelfLink:"", UID:"fa40673b-0360-409f-9d77-7f3e3e6b869d", ResourceVersion:"1382", Generation:0, CreationTimestamp:time.Date(2025, time.January, 17, 12, 19, 49, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"app":"nginx", "pod-template-hash":"8587fbcb89", "projectcalico.org/namespace":"default", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"default"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"143.244.184.73", ContainerID:"05e07427156434f9fd64c41b88da064aefbb6250c5ee2f0b15097b8dd21383cf", Pod:"nginx-deployment-8587fbcb89-d5dcn", Endpoint:"eth0", ServiceAccountName:"default", IPNetworks:[]string{"192.168.87.130/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.default", "ksa.default.default"}, InterfaceName:"cali38efd43bc1a", MAC:"", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil)}} Jan 17 12:20:29.866578 containerd[1451]: 2025-01-17 12:20:29.812 [INFO][3422] cni-plugin/k8s.go 608: Cleaning up netns ContainerID="94fa8b2c94d326883c80470d61d7cc7351dd805727938a68fd4b8e3926d32622" Jan 17 12:20:29.866578 containerd[1451]: 2025-01-17 12:20:29.812 [INFO][3422] cni-plugin/dataplane_linux.go 555: CleanUpNamespace called with no netns name, ignoring. 
ContainerID="94fa8b2c94d326883c80470d61d7cc7351dd805727938a68fd4b8e3926d32622" iface="eth0" netns="" Jan 17 12:20:29.866578 containerd[1451]: 2025-01-17 12:20:29.812 [INFO][3422] cni-plugin/k8s.go 615: Releasing IP address(es) ContainerID="94fa8b2c94d326883c80470d61d7cc7351dd805727938a68fd4b8e3926d32622" Jan 17 12:20:29.866578 containerd[1451]: 2025-01-17 12:20:29.812 [INFO][3422] cni-plugin/utils.go 188: Calico CNI releasing IP address ContainerID="94fa8b2c94d326883c80470d61d7cc7351dd805727938a68fd4b8e3926d32622" Jan 17 12:20:29.866578 containerd[1451]: 2025-01-17 12:20:29.844 [INFO][3428] ipam/ipam_plugin.go 412: Releasing address using handleID ContainerID="94fa8b2c94d326883c80470d61d7cc7351dd805727938a68fd4b8e3926d32622" HandleID="k8s-pod-network.94fa8b2c94d326883c80470d61d7cc7351dd805727938a68fd4b8e3926d32622" Workload="143.244.184.73-k8s-nginx--deployment--8587fbcb89--d5dcn-eth0" Jan 17 12:20:29.866578 containerd[1451]: 2025-01-17 12:20:29.844 [INFO][3428] ipam/ipam_plugin.go 353: About to acquire host-wide IPAM lock. Jan 17 12:20:29.866578 containerd[1451]: 2025-01-17 12:20:29.844 [INFO][3428] ipam/ipam_plugin.go 368: Acquired host-wide IPAM lock. Jan 17 12:20:29.866578 containerd[1451]: 2025-01-17 12:20:29.859 [WARNING][3428] ipam/ipam_plugin.go 429: Asked to release address but it doesn't exist. 
Ignoring ContainerID="94fa8b2c94d326883c80470d61d7cc7351dd805727938a68fd4b8e3926d32622" HandleID="k8s-pod-network.94fa8b2c94d326883c80470d61d7cc7351dd805727938a68fd4b8e3926d32622" Workload="143.244.184.73-k8s-nginx--deployment--8587fbcb89--d5dcn-eth0" Jan 17 12:20:29.866578 containerd[1451]: 2025-01-17 12:20:29.859 [INFO][3428] ipam/ipam_plugin.go 440: Releasing address using workloadID ContainerID="94fa8b2c94d326883c80470d61d7cc7351dd805727938a68fd4b8e3926d32622" HandleID="k8s-pod-network.94fa8b2c94d326883c80470d61d7cc7351dd805727938a68fd4b8e3926d32622" Workload="143.244.184.73-k8s-nginx--deployment--8587fbcb89--d5dcn-eth0" Jan 17 12:20:29.866578 containerd[1451]: 2025-01-17 12:20:29.863 [INFO][3428] ipam/ipam_plugin.go 374: Released host-wide IPAM lock. Jan 17 12:20:29.866578 containerd[1451]: 2025-01-17 12:20:29.864 [INFO][3422] cni-plugin/k8s.go 621: Teardown processing complete. ContainerID="94fa8b2c94d326883c80470d61d7cc7351dd805727938a68fd4b8e3926d32622" Jan 17 12:20:29.866578 containerd[1451]: time="2025-01-17T12:20:29.866262980Z" level=info msg="TearDown network for sandbox \"94fa8b2c94d326883c80470d61d7cc7351dd805727938a68fd4b8e3926d32622\" successfully" Jan 17 12:20:29.882987 containerd[1451]: time="2025-01-17T12:20:29.882869205Z" level=warning msg="Failed to get podSandbox status for container event for sandboxID \"94fa8b2c94d326883c80470d61d7cc7351dd805727938a68fd4b8e3926d32622\": an error occurred when try to find sandbox: not found. Sending the event with nil podSandboxStatus." 
Jan 17 12:20:29.882987 containerd[1451]: time="2025-01-17T12:20:29.882977116Z" level=info msg="RemovePodSandbox \"94fa8b2c94d326883c80470d61d7cc7351dd805727938a68fd4b8e3926d32622\" returns successfully" Jan 17 12:20:30.306746 kubelet[1766]: E0117 12:20:30.306640 1766 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Jan 17 12:20:31.307873 kubelet[1766]: E0117 12:20:31.307804 1766 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Jan 17 12:20:32.308753 kubelet[1766]: E0117 12:20:32.308684 1766 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Jan 17 12:20:33.038966 systemd[1]: run-containerd-runc-k8s.io-30f60043aa817f1959d62845d643c85ed5655d76a747d7e50eb0e4bab5387a98-runc.NmDCeN.mount: Deactivated successfully. Jan 17 12:20:33.310076 kubelet[1766]: E0117 12:20:33.309880 1766 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Jan 17 12:20:34.310937 kubelet[1766]: E0117 12:20:34.310740 1766 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Jan 17 12:20:34.595530 systemd[1]: Created slice kubepods-besteffort-pod0bcbe788_970e_4258_9572_e5944215385a.slice - libcontainer container kubepods-besteffort-pod0bcbe788_970e_4258_9572_e5944215385a.slice. 
Jan 17 12:20:34.695018 kubelet[1766]: I0117 12:20:34.694536 1766 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-7jkhk\" (UniqueName: \"kubernetes.io/projected/0bcbe788-970e-4258-9572-e5944215385a-kube-api-access-7jkhk\") pod \"test-pod-1\" (UID: \"0bcbe788-970e-4258-9572-e5944215385a\") " pod="default/test-pod-1" Jan 17 12:20:34.695018 kubelet[1766]: I0117 12:20:34.694657 1766 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"pvc-3d034373-8580-4080-a782-6bf78e0f2bcb\" (UniqueName: \"kubernetes.io/nfs/0bcbe788-970e-4258-9572-e5944215385a-pvc-3d034373-8580-4080-a782-6bf78e0f2bcb\") pod \"test-pod-1\" (UID: \"0bcbe788-970e-4258-9572-e5944215385a\") " pod="default/test-pod-1" Jan 17 12:20:34.884229 kernel: FS-Cache: Loaded Jan 17 12:20:34.984186 kernel: RPC: Registered named UNIX socket transport module. Jan 17 12:20:34.984371 kernel: RPC: Registered udp transport module. Jan 17 12:20:34.984410 kernel: RPC: Registered tcp transport module. Jan 17 12:20:34.984437 kernel: RPC: Registered tcp-with-tls transport module. Jan 17 12:20:34.984464 kernel: RPC: Registered tcp NFSv4.1 backchannel transport module. 
Jan 17 12:20:35.311840 kubelet[1766]: E0117 12:20:35.311338 1766 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Jan 17 12:20:35.349249 kernel: NFS: Registering the id_resolver key type Jan 17 12:20:35.349411 kernel: Key type id_resolver registered Jan 17 12:20:35.349434 kernel: Key type id_legacy registered Jan 17 12:20:35.402008 nfsidmap[3478]: nss_getpwnam: name 'root@nfs-server-provisioner.default.svc.cluster.local' does not map into domain '3.0-f-3a3da9a24b' Jan 17 12:20:35.408273 nfsidmap[3479]: nss_name_to_gid: name 'root@nfs-server-provisioner.default.svc.cluster.local' does not map into domain '3.0-f-3a3da9a24b' Jan 17 12:20:35.508995 containerd[1451]: time="2025-01-17T12:20:35.508885696Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:test-pod-1,Uid:0bcbe788-970e-4258-9572-e5944215385a,Namespace:default,Attempt:0,}" Jan 17 12:20:35.716743 systemd-networkd[1355]: cali5ec59c6bf6e: Link UP Jan 17 12:20:35.718383 systemd-networkd[1355]: cali5ec59c6bf6e: Gained carrier Jan 17 12:20:35.743941 containerd[1451]: 2025-01-17 12:20:35.592 [INFO][3481] cni-plugin/plugin.go 325: Calico CNI found existing endpoint: &{{WorkloadEndpoint projectcalico.org/v3} {143.244.184.73-k8s-test--pod--1-eth0 default 0bcbe788-970e-4258-9572-e5944215385a 1490 0 2025-01-17 12:20:19 +0000 UTC map[projectcalico.org/namespace:default projectcalico.org/orchestrator:k8s projectcalico.org/serviceaccount:default] map[] [] [] []} {k8s 143.244.184.73 test-pod-1 eth0 default [] [] [kns.default ksa.default.default] cali5ec59c6bf6e [] []}} ContainerID="28973402a69c9f9a648333120ff8dbaa5ab73d341c45117a4791ec77928a250c" Namespace="default" Pod="test-pod-1" WorkloadEndpoint="143.244.184.73-k8s-test--pod--1-" Jan 17 12:20:35.743941 containerd[1451]: 2025-01-17 12:20:35.593 [INFO][3481] cni-plugin/k8s.go 77: Extracted identifiers for CmdAddK8s ContainerID="28973402a69c9f9a648333120ff8dbaa5ab73d341c45117a4791ec77928a250c" 
Namespace="default" Pod="test-pod-1" WorkloadEndpoint="143.244.184.73-k8s-test--pod--1-eth0" Jan 17 12:20:35.743941 containerd[1451]: 2025-01-17 12:20:35.636 [INFO][3492] ipam/ipam_plugin.go 225: Calico CNI IPAM request count IPv4=1 IPv6=0 ContainerID="28973402a69c9f9a648333120ff8dbaa5ab73d341c45117a4791ec77928a250c" HandleID="k8s-pod-network.28973402a69c9f9a648333120ff8dbaa5ab73d341c45117a4791ec77928a250c" Workload="143.244.184.73-k8s-test--pod--1-eth0" Jan 17 12:20:35.743941 containerd[1451]: 2025-01-17 12:20:35.657 [INFO][3492] ipam/ipam_plugin.go 265: Auto assigning IP ContainerID="28973402a69c9f9a648333120ff8dbaa5ab73d341c45117a4791ec77928a250c" HandleID="k8s-pod-network.28973402a69c9f9a648333120ff8dbaa5ab73d341c45117a4791ec77928a250c" Workload="143.244.184.73-k8s-test--pod--1-eth0" assignArgs=ipam.AutoAssignArgs{Num4:1, Num6:0, HandleID:(*string)(0xc0003bded0), Attrs:map[string]string{"namespace":"default", "node":"143.244.184.73", "pod":"test-pod-1", "timestamp":"2025-01-17 12:20:35.636014444 +0000 UTC"}, Hostname:"143.244.184.73", IPv4Pools:[]net.IPNet{}, IPv6Pools:[]net.IPNet{}, MaxBlocksPerHost:0, HostReservedAttrIPv4s:(*ipam.HostReservedAttr)(nil), HostReservedAttrIPv6s:(*ipam.HostReservedAttr)(nil), IntendedUse:"Workload"} Jan 17 12:20:35.743941 containerd[1451]: 2025-01-17 12:20:35.657 [INFO][3492] ipam/ipam_plugin.go 353: About to acquire host-wide IPAM lock. Jan 17 12:20:35.743941 containerd[1451]: 2025-01-17 12:20:35.657 [INFO][3492] ipam/ipam_plugin.go 368: Acquired host-wide IPAM lock. 
Jan 17 12:20:35.743941 containerd[1451]: 2025-01-17 12:20:35.657 [INFO][3492] ipam/ipam.go 107: Auto-assign 1 ipv4, 0 ipv6 addrs for host '143.244.184.73' Jan 17 12:20:35.743941 containerd[1451]: 2025-01-17 12:20:35.662 [INFO][3492] ipam/ipam.go 660: Looking up existing affinities for host handle="k8s-pod-network.28973402a69c9f9a648333120ff8dbaa5ab73d341c45117a4791ec77928a250c" host="143.244.184.73" Jan 17 12:20:35.743941 containerd[1451]: 2025-01-17 12:20:35.670 [INFO][3492] ipam/ipam.go 372: Looking up existing affinities for host host="143.244.184.73" Jan 17 12:20:35.743941 containerd[1451]: 2025-01-17 12:20:35.678 [INFO][3492] ipam/ipam.go 489: Trying affinity for 192.168.87.128/26 host="143.244.184.73" Jan 17 12:20:35.743941 containerd[1451]: 2025-01-17 12:20:35.682 [INFO][3492] ipam/ipam.go 155: Attempting to load block cidr=192.168.87.128/26 host="143.244.184.73" Jan 17 12:20:35.743941 containerd[1451]: 2025-01-17 12:20:35.687 [INFO][3492] ipam/ipam.go 232: Affinity is confirmed and block has been loaded cidr=192.168.87.128/26 host="143.244.184.73" Jan 17 12:20:35.743941 containerd[1451]: 2025-01-17 12:20:35.687 [INFO][3492] ipam/ipam.go 1180: Attempting to assign 1 addresses from block block=192.168.87.128/26 handle="k8s-pod-network.28973402a69c9f9a648333120ff8dbaa5ab73d341c45117a4791ec77928a250c" host="143.244.184.73" Jan 17 12:20:35.743941 containerd[1451]: 2025-01-17 12:20:35.691 [INFO][3492] ipam/ipam.go 1685: Creating new handle: k8s-pod-network.28973402a69c9f9a648333120ff8dbaa5ab73d341c45117a4791ec77928a250c Jan 17 12:20:35.743941 containerd[1451]: 2025-01-17 12:20:35.700 [INFO][3492] ipam/ipam.go 1203: Writing block in order to claim IPs block=192.168.87.128/26 handle="k8s-pod-network.28973402a69c9f9a648333120ff8dbaa5ab73d341c45117a4791ec77928a250c" host="143.244.184.73" Jan 17 12:20:35.743941 containerd[1451]: 2025-01-17 12:20:35.708 [INFO][3492] ipam/ipam.go 1216: Successfully claimed IPs: [192.168.87.132/26] block=192.168.87.128/26 
handle="k8s-pod-network.28973402a69c9f9a648333120ff8dbaa5ab73d341c45117a4791ec77928a250c" host="143.244.184.73" Jan 17 12:20:35.743941 containerd[1451]: 2025-01-17 12:20:35.708 [INFO][3492] ipam/ipam.go 847: Auto-assigned 1 out of 1 IPv4s: [192.168.87.132/26] handle="k8s-pod-network.28973402a69c9f9a648333120ff8dbaa5ab73d341c45117a4791ec77928a250c" host="143.244.184.73" Jan 17 12:20:35.743941 containerd[1451]: 2025-01-17 12:20:35.709 [INFO][3492] ipam/ipam_plugin.go 374: Released host-wide IPAM lock. Jan 17 12:20:35.743941 containerd[1451]: 2025-01-17 12:20:35.709 [INFO][3492] ipam/ipam_plugin.go 283: Calico CNI IPAM assigned addresses IPv4=[192.168.87.132/26] IPv6=[] ContainerID="28973402a69c9f9a648333120ff8dbaa5ab73d341c45117a4791ec77928a250c" HandleID="k8s-pod-network.28973402a69c9f9a648333120ff8dbaa5ab73d341c45117a4791ec77928a250c" Workload="143.244.184.73-k8s-test--pod--1-eth0" Jan 17 12:20:35.743941 containerd[1451]: 2025-01-17 12:20:35.711 [INFO][3481] cni-plugin/k8s.go 386: Populated endpoint ContainerID="28973402a69c9f9a648333120ff8dbaa5ab73d341c45117a4791ec77928a250c" Namespace="default" Pod="test-pod-1" WorkloadEndpoint="143.244.184.73-k8s-test--pod--1-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"143.244.184.73-k8s-test--pod--1-eth0", GenerateName:"", Namespace:"default", SelfLink:"", UID:"0bcbe788-970e-4258-9572-e5944215385a", ResourceVersion:"1490", Generation:0, CreationTimestamp:time.Date(2025, time.January, 17, 12, 20, 19, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"projectcalico.org/namespace":"default", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"default"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", 
Workload:"", Node:"143.244.184.73", ContainerID:"", Pod:"test-pod-1", Endpoint:"eth0", ServiceAccountName:"default", IPNetworks:[]string{"192.168.87.132/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.default", "ksa.default.default"}, InterfaceName:"cali5ec59c6bf6e", MAC:"", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil)}} Jan 17 12:20:35.745175 containerd[1451]: 2025-01-17 12:20:35.712 [INFO][3481] cni-plugin/k8s.go 387: Calico CNI using IPs: [192.168.87.132/32] ContainerID="28973402a69c9f9a648333120ff8dbaa5ab73d341c45117a4791ec77928a250c" Namespace="default" Pod="test-pod-1" WorkloadEndpoint="143.244.184.73-k8s-test--pod--1-eth0" Jan 17 12:20:35.745175 containerd[1451]: 2025-01-17 12:20:35.712 [INFO][3481] cni-plugin/dataplane_linux.go 69: Setting the host side veth name to cali5ec59c6bf6e ContainerID="28973402a69c9f9a648333120ff8dbaa5ab73d341c45117a4791ec77928a250c" Namespace="default" Pod="test-pod-1" WorkloadEndpoint="143.244.184.73-k8s-test--pod--1-eth0" Jan 17 12:20:35.745175 containerd[1451]: 2025-01-17 12:20:35.717 [INFO][3481] cni-plugin/dataplane_linux.go 508: Disabling IPv4 forwarding ContainerID="28973402a69c9f9a648333120ff8dbaa5ab73d341c45117a4791ec77928a250c" Namespace="default" Pod="test-pod-1" WorkloadEndpoint="143.244.184.73-k8s-test--pod--1-eth0" Jan 17 12:20:35.745175 containerd[1451]: 2025-01-17 12:20:35.718 [INFO][3481] cni-plugin/k8s.go 414: Added Mac, interface name, and active container ID to endpoint ContainerID="28973402a69c9f9a648333120ff8dbaa5ab73d341c45117a4791ec77928a250c" Namespace="default" Pod="test-pod-1" WorkloadEndpoint="143.244.184.73-k8s-test--pod--1-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"143.244.184.73-k8s-test--pod--1-eth0", GenerateName:"", Namespace:"default", SelfLink:"", UID:"0bcbe788-970e-4258-9572-e5944215385a", ResourceVersion:"1490", 
Generation:0, CreationTimestamp:time.Date(2025, time.January, 17, 12, 20, 19, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"projectcalico.org/namespace":"default", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"default"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"143.244.184.73", ContainerID:"28973402a69c9f9a648333120ff8dbaa5ab73d341c45117a4791ec77928a250c", Pod:"test-pod-1", Endpoint:"eth0", ServiceAccountName:"default", IPNetworks:[]string{"192.168.87.132/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.default", "ksa.default.default"}, InterfaceName:"cali5ec59c6bf6e", MAC:"4a:4e:dc:08:fe:af", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil)}} Jan 17 12:20:35.745175 containerd[1451]: 2025-01-17 12:20:35.740 [INFO][3481] cni-plugin/k8s.go 500: Wrote updated endpoint to datastore ContainerID="28973402a69c9f9a648333120ff8dbaa5ab73d341c45117a4791ec77928a250c" Namespace="default" Pod="test-pod-1" WorkloadEndpoint="143.244.184.73-k8s-test--pod--1-eth0" Jan 17 12:20:35.780430 containerd[1451]: time="2025-01-17T12:20:35.779952458Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Jan 17 12:20:35.781055 containerd[1451]: time="2025-01-17T12:20:35.780861729Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Jan 17 12:20:35.781055 containerd[1451]: time="2025-01-17T12:20:35.780905791Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." 
runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Jan 17 12:20:35.781504 containerd[1451]: time="2025-01-17T12:20:35.781087745Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Jan 17 12:20:35.807212 systemd[1]: Started cri-containerd-28973402a69c9f9a648333120ff8dbaa5ab73d341c45117a4791ec77928a250c.scope - libcontainer container 28973402a69c9f9a648333120ff8dbaa5ab73d341c45117a4791ec77928a250c. Jan 17 12:20:35.873449 containerd[1451]: time="2025-01-17T12:20:35.873278551Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:test-pod-1,Uid:0bcbe788-970e-4258-9572-e5944215385a,Namespace:default,Attempt:0,} returns sandbox id \"28973402a69c9f9a648333120ff8dbaa5ab73d341c45117a4791ec77928a250c\"" Jan 17 12:20:35.877458 containerd[1451]: time="2025-01-17T12:20:35.877392081Z" level=info msg="PullImage \"ghcr.io/flatcar/nginx:latest\"" Jan 17 12:20:36.256559 containerd[1451]: time="2025-01-17T12:20:36.255537388Z" level=info msg="ImageUpdate event name:\"ghcr.io/flatcar/nginx:latest\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jan 17 12:20:36.256559 containerd[1451]: time="2025-01-17T12:20:36.256488101Z" level=info msg="stop pulling image ghcr.io/flatcar/nginx:latest: active requests=0, bytes read=61" Jan 17 12:20:36.265095 containerd[1451]: time="2025-01-17T12:20:36.265001000Z" level=info msg="Pulled image \"ghcr.io/flatcar/nginx:latest\" with image id \"sha256:29ef6eaebfc53650f3a4609edbf9d35e866f56b2c5e01d32d93439031b300f0b\", repo tag \"ghcr.io/flatcar/nginx:latest\", repo digest \"ghcr.io/flatcar/nginx@sha256:eca1d1ff18c7af45f86b7e0b572090f563a676ddca3da2ecff678390366335ad\", size \"71035896\" in 387.551816ms" Jan 17 12:20:36.265095 containerd[1451]: time="2025-01-17T12:20:36.265083031Z" level=info msg="PullImage \"ghcr.io/flatcar/nginx:latest\" returns image reference \"sha256:29ef6eaebfc53650f3a4609edbf9d35e866f56b2c5e01d32d93439031b300f0b\"" Jan 17 
12:20:36.268213 containerd[1451]: time="2025-01-17T12:20:36.267931892Z" level=info msg="CreateContainer within sandbox \"28973402a69c9f9a648333120ff8dbaa5ab73d341c45117a4791ec77928a250c\" for container &ContainerMetadata{Name:test,Attempt:0,}" Jan 17 12:20:36.302693 containerd[1451]: time="2025-01-17T12:20:36.302616934Z" level=info msg="CreateContainer within sandbox \"28973402a69c9f9a648333120ff8dbaa5ab73d341c45117a4791ec77928a250c\" for &ContainerMetadata{Name:test,Attempt:0,} returns container id \"d4d712b5e92fefabc891f2b2926cdacaf1760d6773f4e7e6801237560d76a815\"" Jan 17 12:20:36.303953 containerd[1451]: time="2025-01-17T12:20:36.303477673Z" level=info msg="StartContainer for \"d4d712b5e92fefabc891f2b2926cdacaf1760d6773f4e7e6801237560d76a815\"" Jan 17 12:20:36.312682 kubelet[1766]: E0117 12:20:36.312343 1766 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Jan 17 12:20:36.348642 systemd[1]: run-containerd-runc-k8s.io-d4d712b5e92fefabc891f2b2926cdacaf1760d6773f4e7e6801237560d76a815-runc.K47Xh4.mount: Deactivated successfully. Jan 17 12:20:36.357074 systemd[1]: Started cri-containerd-d4d712b5e92fefabc891f2b2926cdacaf1760d6773f4e7e6801237560d76a815.scope - libcontainer container d4d712b5e92fefabc891f2b2926cdacaf1760d6773f4e7e6801237560d76a815. 
Jan 17 12:20:36.417530 containerd[1451]: time="2025-01-17T12:20:36.417432111Z" level=info msg="StartContainer for \"d4d712b5e92fefabc891f2b2926cdacaf1760d6773f4e7e6801237560d76a815\" returns successfully" Jan 17 12:20:37.312819 kubelet[1766]: E0117 12:20:37.312699 1766 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Jan 17 12:20:37.654076 systemd-networkd[1355]: cali5ec59c6bf6e: Gained IPv6LL Jan 17 12:20:38.313308 kubelet[1766]: E0117 12:20:38.313219 1766 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Jan 17 12:20:39.314180 kubelet[1766]: E0117 12:20:39.314087 1766 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Jan 17 12:20:40.315269 kubelet[1766]: E0117 12:20:40.315182 1766 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Jan 17 12:20:41.316357 kubelet[1766]: E0117 12:20:41.316268 1766 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Jan 17 12:20:42.317114 kubelet[1766]: E0117 12:20:42.317022 1766 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Jan 17 12:20:43.318088 kubelet[1766]: E0117 12:20:43.317902 1766 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Jan 17 12:20:43.595202 systemd[1]: Started sshd@7-143.244.184.73:22-8.211.142.214:57436.service - OpenSSH per-connection server daemon (8.211.142.214:57436). Jan 17 12:20:43.609188 sshd[3610]: Connection closed by 8.211.142.214 port 57436 Jan 17 12:20:43.610227 systemd[1]: sshd@7-143.244.184.73:22-8.211.142.214:57436.service: Deactivated successfully. 
Jan 17 12:20:43.708390 systemd[1]: Started sshd@8-143.244.184.73:22-8.211.142.214:57444.service - OpenSSH per-connection server daemon (8.211.142.214:57444). Jan 17 12:20:44.166725 sshd[3614]: Invalid user hadoop from 8.211.142.214 port 57444 Jan 17 12:20:44.276301 sshd[3614]: Connection closed by invalid user hadoop 8.211.142.214 port 57444 [preauth] Jan 17 12:20:44.280898 systemd[1]: sshd@8-143.244.184.73:22-8.211.142.214:57444.service: Deactivated successfully. Jan 17 12:20:44.319157 kubelet[1766]: E0117 12:20:44.319020 1766 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Jan 17 12:20:44.427392 systemd[1]: Started sshd@9-143.244.184.73:22-8.211.142.214:57454.service - OpenSSH per-connection server daemon (8.211.142.214:57454).