Dec 13 09:11:56.075815 kernel: Linux version 6.6.65-flatcar (build@pony-truck.infra.kinvolk.io) (x86_64-cros-linux-gnu-gcc (Gentoo Hardened 13.3.1_p20240614 p17) 13.3.1 20240614, GNU ld (Gentoo 2.42 p3) 2.42.0) #1 SMP PREEMPT_DYNAMIC Thu Dec 12 23:15:00 -00 2024
Dec 13 09:11:56.075868 kernel: Command line: BOOT_IMAGE=/flatcar/vmlinuz-a mount.usr=/dev/mapper/usr verity.usr=PARTUUID=7130c94a-213a-4e5a-8e26-6cce9662f132 rootflags=rw mount.usrflags=ro consoleblank=0 root=LABEL=ROOT console=ttyS0,115200n8 console=tty0 flatcar.first_boot=detected flatcar.oem.id=digitalocean verity.usrhash=2fdbba50b59d8c8a9877a81151806ddc16f473fe99b9ba0d8825997d654583ff
Dec 13 09:11:56.075891 kernel: BIOS-provided physical RAM map:
Dec 13 09:11:56.075904 kernel: BIOS-e820: [mem 0x0000000000000000-0x000000000009fbff] usable
Dec 13 09:11:56.075917 kernel: BIOS-e820: [mem 0x000000000009fc00-0x000000000009ffff] reserved
Dec 13 09:11:56.075929 kernel: BIOS-e820: [mem 0x00000000000f0000-0x00000000000fffff] reserved
Dec 13 09:11:56.075944 kernel: BIOS-e820: [mem 0x0000000000100000-0x000000007ffdafff] usable
Dec 13 09:11:56.075960 kernel: BIOS-e820: [mem 0x000000007ffdb000-0x000000007fffffff] reserved
Dec 13 09:11:56.075973 kernel: BIOS-e820: [mem 0x00000000feffc000-0x00000000feffffff] reserved
Dec 13 09:11:56.075991 kernel: BIOS-e820: [mem 0x00000000fffc0000-0x00000000ffffffff] reserved
Dec 13 09:11:56.076013 kernel: NX (Execute Disable) protection: active
Dec 13 09:11:56.076027 kernel: APIC: Static calls initialized
Dec 13 09:11:56.076040 kernel: SMBIOS 2.8 present.
Dec 13 09:11:56.076054 kernel: DMI: DigitalOcean Droplet/Droplet, BIOS 20171212 12/12/2017
Dec 13 09:11:56.076071 kernel: Hypervisor detected: KVM
Dec 13 09:11:56.076091 kernel: kvm-clock: Using msrs 4b564d01 and 4b564d00
Dec 13 09:11:56.076111 kernel: kvm-clock: using sched offset of 3412403663 cycles
Dec 13 09:11:56.076126 kernel: clocksource: kvm-clock: mask: 0xffffffffffffffff max_cycles: 0x1cd42e4dffb, max_idle_ns: 881590591483 ns
Dec 13 09:11:56.076140 kernel: tsc: Detected 2494.136 MHz processor
Dec 13 09:11:56.076155 kernel: e820: update [mem 0x00000000-0x00000fff] usable ==> reserved
Dec 13 09:11:56.076170 kernel: e820: remove [mem 0x000a0000-0x000fffff] usable
Dec 13 09:11:56.076185 kernel: last_pfn = 0x7ffdb max_arch_pfn = 0x400000000
Dec 13 09:11:56.076201 kernel: MTRR map: 4 entries (3 fixed + 1 variable; max 19), built from 8 variable MTRRs
Dec 13 09:11:56.076215 kernel: x86/PAT: Configuration [0-7]: WB WC UC- UC WB WP UC- WT
Dec 13 09:11:56.076236 kernel: ACPI: Early table checksum verification disabled
Dec 13 09:11:56.076251 kernel: ACPI: RSDP 0x00000000000F5950 000014 (v00 BOCHS )
Dec 13 09:11:56.076281 kernel: ACPI: RSDT 0x000000007FFE1986 000038 (v01 BOCHS BXPC 00000001 BXPC 00000001)
Dec 13 09:11:56.076298 kernel: ACPI: FACP 0x000000007FFE176A 000074 (v01 BOCHS BXPC 00000001 BXPC 00000001)
Dec 13 09:11:56.076313 kernel: ACPI: DSDT 0x000000007FFE0040 00172A (v01 BOCHS BXPC 00000001 BXPC 00000001)
Dec 13 09:11:56.076328 kernel: ACPI: FACS 0x000000007FFE0000 000040
Dec 13 09:11:56.076343 kernel: ACPI: APIC 0x000000007FFE17DE 000080 (v01 BOCHS BXPC 00000001 BXPC 00000001)
Dec 13 09:11:56.076358 kernel: ACPI: HPET 0x000000007FFE185E 000038 (v01 BOCHS BXPC 00000001 BXPC 00000001)
Dec 13 09:11:56.076373 kernel: ACPI: SRAT 0x000000007FFE1896 0000C8 (v01 BOCHS BXPC 00000001 BXPC 00000001)
Dec 13 09:11:56.076395 kernel: ACPI: WAET 0x000000007FFE195E 000028 (v01 BOCHS BXPC 00000001 BXPC 00000001)
Dec 13 09:11:56.076410 kernel: ACPI: Reserving FACP table memory at [mem 0x7ffe176a-0x7ffe17dd]
Dec 13 09:11:56.076425 kernel: ACPI: Reserving DSDT table memory at [mem 0x7ffe0040-0x7ffe1769]
Dec 13 09:11:56.076440 kernel: ACPI: Reserving FACS table memory at [mem 0x7ffe0000-0x7ffe003f]
Dec 13 09:11:56.076455 kernel: ACPI: Reserving APIC table memory at [mem 0x7ffe17de-0x7ffe185d]
Dec 13 09:11:56.076470 kernel: ACPI: Reserving HPET table memory at [mem 0x7ffe185e-0x7ffe1895]
Dec 13 09:11:56.076485 kernel: ACPI: Reserving SRAT table memory at [mem 0x7ffe1896-0x7ffe195d]
Dec 13 09:11:56.076517 kernel: ACPI: Reserving WAET table memory at [mem 0x7ffe195e-0x7ffe1985]
Dec 13 09:11:56.076533 kernel: SRAT: PXM 0 -> APIC 0x00 -> Node 0
Dec 13 09:11:56.076549 kernel: SRAT: PXM 0 -> APIC 0x01 -> Node 0
Dec 13 09:11:56.076566 kernel: ACPI: SRAT: Node 0 PXM 0 [mem 0x00000000-0x0009ffff]
Dec 13 09:11:56.076582 kernel: ACPI: SRAT: Node 0 PXM 0 [mem 0x00100000-0x7fffffff]
Dec 13 09:11:56.076598 kernel: NUMA: Node 0 [mem 0x00000000-0x0009ffff] + [mem 0x00100000-0x7ffdafff] -> [mem 0x00000000-0x7ffdafff]
Dec 13 09:11:56.076614 kernel: NODE_DATA(0) allocated [mem 0x7ffd5000-0x7ffdafff]
Dec 13 09:11:56.076636 kernel: Zone ranges:
Dec 13 09:11:56.076652 kernel:   DMA      [mem 0x0000000000001000-0x0000000000ffffff]
Dec 13 09:11:56.076668 kernel:   DMA32    [mem 0x0000000001000000-0x000000007ffdafff]
Dec 13 09:11:56.076685 kernel:   Normal   empty
Dec 13 09:11:56.076701 kernel: Movable zone start for each node
Dec 13 09:11:56.076717 kernel: Early memory node ranges
Dec 13 09:11:56.076734 kernel:   node   0: [mem 0x0000000000001000-0x000000000009efff]
Dec 13 09:11:56.079935 kernel:   node   0: [mem 0x0000000000100000-0x000000007ffdafff]
Dec 13 09:11:56.079956 kernel: Initmem setup node 0 [mem 0x0000000000001000-0x000000007ffdafff]
Dec 13 09:11:56.079983 kernel: On node 0, zone DMA: 1 pages in unavailable ranges
Dec 13 09:11:56.080002 kernel: On node 0, zone DMA: 97 pages in unavailable ranges
Dec 13 09:11:56.080027 kernel: On node 0, zone DMA32: 37 pages in unavailable ranges
Dec 13 09:11:56.080040 kernel: ACPI: PM-Timer IO Port: 0x608
Dec 13 09:11:56.080078 kernel: ACPI: LAPIC_NMI (acpi_id[0xff] dfl dfl lint[0x1])
Dec 13 09:11:56.080092 kernel: IOAPIC[0]: apic_id 0, version 17, address 0xfec00000, GSI 0-23
Dec 13 09:11:56.080105 kernel: ACPI: INT_SRC_OVR (bus 0 bus_irq 0 global_irq 2 dfl dfl)
Dec 13 09:11:56.080118 kernel: ACPI: INT_SRC_OVR (bus 0 bus_irq 5 global_irq 5 high level)
Dec 13 09:11:56.080130 kernel: ACPI: INT_SRC_OVR (bus 0 bus_irq 9 global_irq 9 high level)
Dec 13 09:11:56.080149 kernel: ACPI: INT_SRC_OVR (bus 0 bus_irq 10 global_irq 10 high level)
Dec 13 09:11:56.080162 kernel: ACPI: INT_SRC_OVR (bus 0 bus_irq 11 global_irq 11 high level)
Dec 13 09:11:56.080175 kernel: ACPI: Using ACPI (MADT) for SMP configuration information
Dec 13 09:11:56.080188 kernel: ACPI: HPET id: 0x8086a201 base: 0xfed00000
Dec 13 09:11:56.080201 kernel: TSC deadline timer available
Dec 13 09:11:56.080213 kernel: smpboot: Allowing 2 CPUs, 0 hotplug CPUs
Dec 13 09:11:56.080226 kernel: kvm-guest: APIC: eoi() replaced with kvm_guest_apic_eoi_write()
Dec 13 09:11:56.080240 kernel: [mem 0x80000000-0xfeffbfff] available for PCI devices
Dec 13 09:11:56.080258 kernel: Booting paravirtualized kernel on KVM
Dec 13 09:11:56.080335 kernel: clocksource: refined-jiffies: mask: 0xffffffff max_cycles: 0xffffffff, max_idle_ns: 1910969940391419 ns
Dec 13 09:11:56.080361 kernel: setup_percpu: NR_CPUS:512 nr_cpumask_bits:2 nr_cpu_ids:2 nr_node_ids:1
Dec 13 09:11:56.080377 kernel: percpu: Embedded 58 pages/cpu s197032 r8192 d32344 u1048576
Dec 13 09:11:56.080394 kernel: pcpu-alloc: s197032 r8192 d32344 u1048576 alloc=1*2097152
Dec 13 09:11:56.080410 kernel: pcpu-alloc: [0] 0 1
Dec 13 09:11:56.080425 kernel: kvm-guest: PV spinlocks disabled, no host support
Dec 13 09:11:56.080444 kernel: Kernel command line: rootflags=rw mount.usrflags=ro BOOT_IMAGE=/flatcar/vmlinuz-a mount.usr=/dev/mapper/usr verity.usr=PARTUUID=7130c94a-213a-4e5a-8e26-6cce9662f132 rootflags=rw mount.usrflags=ro consoleblank=0 root=LABEL=ROOT console=ttyS0,115200n8 console=tty0 flatcar.first_boot=detected flatcar.oem.id=digitalocean verity.usrhash=2fdbba50b59d8c8a9877a81151806ddc16f473fe99b9ba0d8825997d654583ff
Dec 13 09:11:56.080459 kernel: Unknown kernel command line parameters "BOOT_IMAGE=/flatcar/vmlinuz-a", will be passed to user space.
Dec 13 09:11:56.080474 kernel: random: crng init done
Dec 13 09:11:56.080494 kernel: Dentry cache hash table entries: 262144 (order: 9, 2097152 bytes, linear)
Dec 13 09:11:56.080508 kernel: Inode-cache hash table entries: 131072 (order: 8, 1048576 bytes, linear)
Dec 13 09:11:56.080522 kernel: Fallback order for Node 0: 0
Dec 13 09:11:56.080537 kernel: Built 1 zonelists, mobility grouping on. Total pages: 515803
Dec 13 09:11:56.080550 kernel: Policy zone: DMA32
Dec 13 09:11:56.080563 kernel: mem auto-init: stack:off, heap alloc:off, heap free:off
Dec 13 09:11:56.080591 kernel: Memory: 1971204K/2096612K available (12288K kernel code, 2299K rwdata, 22724K rodata, 42844K init, 2348K bss, 125148K reserved, 0K cma-reserved)
Dec 13 09:11:56.080606 kernel: SLUB: HWalign=64, Order=0-3, MinObjects=0, CPUs=2, Nodes=1
Dec 13 09:11:56.080633 kernel: Kernel/User page tables isolation: enabled
Dec 13 09:11:56.080647 kernel: ftrace: allocating 37902 entries in 149 pages
Dec 13 09:11:56.080661 kernel: ftrace: allocated 149 pages with 4 groups
Dec 13 09:11:56.080675 kernel: Dynamic Preempt: voluntary
Dec 13 09:11:56.080690 kernel: rcu: Preemptible hierarchical RCU implementation.
Dec 13 09:11:56.080705 kernel: rcu: RCU event tracing is enabled.
Dec 13 09:11:56.080720 kernel: rcu: RCU restricting CPUs from NR_CPUS=512 to nr_cpu_ids=2.
Dec 13 09:11:56.080758 kernel: Trampoline variant of Tasks RCU enabled.
Dec 13 09:11:56.080774 kernel: Rude variant of Tasks RCU enabled.
Dec 13 09:11:56.080805 kernel: Tracing variant of Tasks RCU enabled.
Dec 13 09:11:56.080826 kernel: rcu: RCU calculated value of scheduler-enlistment delay is 100 jiffies.
Dec 13 09:11:56.080841 kernel: rcu: Adjusting geometry for rcu_fanout_leaf=16, nr_cpu_ids=2
Dec 13 09:11:56.080856 kernel: NR_IRQS: 33024, nr_irqs: 440, preallocated irqs: 16
Dec 13 09:11:56.080876 kernel: rcu: srcu_init: Setting srcu_struct sizes based on contention.
Dec 13 09:11:56.080891 kernel: Console: colour VGA+ 80x25
Dec 13 09:11:56.080904 kernel: printk: console [tty0] enabled
Dec 13 09:11:56.080918 kernel: printk: console [ttyS0] enabled
Dec 13 09:11:56.080931 kernel: ACPI: Core revision 20230628
Dec 13 09:11:56.080946 kernel: clocksource: hpet: mask: 0xffffffff max_cycles: 0xffffffff, max_idle_ns: 19112604467 ns
Dec 13 09:11:56.080966 kernel: APIC: Switch to symmetric I/O mode setup
Dec 13 09:11:56.080979 kernel: x2apic enabled
Dec 13 09:11:56.080993 kernel: APIC: Switched APIC routing to: physical x2apic
Dec 13 09:11:56.081006 kernel: ..TIMER: vector=0x30 apic1=0 pin1=2 apic2=-1 pin2=-1
Dec 13 09:11:56.081019 kernel: clocksource: tsc-early: mask: 0xffffffffffffffff max_cycles: 0x23f39654230, max_idle_ns: 440795207432 ns
Dec 13 09:11:56.081033 kernel: Calibrating delay loop (skipped) preset value.. 4988.27 BogoMIPS (lpj=2494136)
Dec 13 09:11:56.081047 kernel: Last level iTLB entries: 4KB 0, 2MB 0, 4MB 0
Dec 13 09:11:56.081061 kernel: Last level dTLB entries: 4KB 0, 2MB 0, 4MB 0, 1GB 0
Dec 13 09:11:56.081094 kernel: Spectre V1 : Mitigation: usercopy/swapgs barriers and __user pointer sanitization
Dec 13 09:11:56.081109 kernel: Spectre V2 : Mitigation: Retpolines
Dec 13 09:11:56.081123 kernel: Spectre V2 : Spectre v2 / SpectreRSB mitigation: Filling RSB on context switch
Dec 13 09:11:56.081143 kernel: Spectre V2 : Spectre v2 / SpectreRSB : Filling RSB on VMEXIT
Dec 13 09:11:56.081158 kernel: Spectre V2 : Enabling Restricted Speculation for firmware calls
Dec 13 09:11:56.081174 kernel: Spectre V2 : mitigation: Enabling conditional Indirect Branch Prediction Barrier
Dec 13 09:11:56.081190 kernel: Speculative Store Bypass: Mitigation: Speculative Store Bypass disabled via prctl
Dec 13 09:11:56.081206 kernel: MDS: Mitigation: Clear CPU buffers
Dec 13 09:11:56.081223 kernel: MMIO Stale Data: Vulnerable: Clear CPU buffers attempted, no microcode
Dec 13 09:11:56.081254 kernel: x86/fpu: Supporting XSAVE feature 0x001: 'x87 floating point registers'
Dec 13 09:11:56.081271 kernel: x86/fpu: Supporting XSAVE feature 0x002: 'SSE registers'
Dec 13 09:11:56.081287 kernel: x86/fpu: Supporting XSAVE feature 0x004: 'AVX registers'
Dec 13 09:11:56.081304 kernel: x86/fpu: xstate_offset[2]: 576, xstate_sizes[2]: 256
Dec 13 09:11:56.081320 kernel: x86/fpu: Enabled xstate features 0x7, context size is 832 bytes, using 'standard' format.
Dec 13 09:11:56.081334 kernel: Freeing SMP alternatives memory: 32K
Dec 13 09:11:56.081349 kernel: pid_max: default: 32768 minimum: 301
Dec 13 09:11:56.081365 kernel: LSM: initializing lsm=lockdown,capability,landlock,selinux,integrity
Dec 13 09:11:56.081387 kernel: landlock: Up and running.
Dec 13 09:11:56.081404 kernel: SELinux: Initializing.
Dec 13 09:11:56.081419 kernel: Mount-cache hash table entries: 4096 (order: 3, 32768 bytes, linear)
Dec 13 09:11:56.081436 kernel: Mountpoint-cache hash table entries: 4096 (order: 3, 32768 bytes, linear)
Dec 13 09:11:56.081452 kernel: smpboot: CPU0: Intel DO-Regular (family: 0x6, model: 0x4f, stepping: 0x1)
Dec 13 09:11:56.081469 kernel: RCU Tasks: Setting shift to 1 and lim to 1 rcu_task_cb_adjust=1 rcu_task_cpu_ids=2.
Dec 13 09:11:56.081486 kernel: RCU Tasks Rude: Setting shift to 1 and lim to 1 rcu_task_cb_adjust=1 rcu_task_cpu_ids=2.
Dec 13 09:11:56.081503 kernel: RCU Tasks Trace: Setting shift to 1 and lim to 1 rcu_task_cb_adjust=1 rcu_task_cpu_ids=2.
Dec 13 09:11:56.081520 kernel: Performance Events: unsupported p6 CPU model 79 no PMU driver, software events only.
Dec 13 09:11:56.081544 kernel: signal: max sigframe size: 1776
Dec 13 09:11:56.081561 kernel: rcu: Hierarchical SRCU implementation.
Dec 13 09:11:56.081579 kernel: rcu: Max phase no-delay instances is 400.
Dec 13 09:11:56.081595 kernel: NMI watchdog: Perf NMI watchdog permanently disabled
Dec 13 09:11:56.081611 kernel: smp: Bringing up secondary CPUs ...
Dec 13 09:11:56.081628 kernel: smpboot: x86: Booting SMP configuration:
Dec 13 09:11:56.081651 kernel: .... node #0, CPUs: #1
Dec 13 09:11:56.081667 kernel: smp: Brought up 1 node, 2 CPUs
Dec 13 09:11:56.081684 kernel: smpboot: Max logical packages: 1
Dec 13 09:11:56.081707 kernel: smpboot: Total of 2 processors activated (9976.54 BogoMIPS)
Dec 13 09:11:56.081722 kernel: devtmpfs: initialized
Dec 13 09:11:56.083370 kernel: x86/mm: Memory block size: 128MB
Dec 13 09:11:56.083428 kernel: clocksource: jiffies: mask: 0xffffffff max_cycles: 0xffffffff, max_idle_ns: 1911260446275000 ns
Dec 13 09:11:56.083447 kernel: futex hash table entries: 512 (order: 3, 32768 bytes, linear)
Dec 13 09:11:56.083464 kernel: pinctrl core: initialized pinctrl subsystem
Dec 13 09:11:56.083481 kernel: NET: Registered PF_NETLINK/PF_ROUTE protocol family
Dec 13 09:11:56.083497 kernel: audit: initializing netlink subsys (disabled)
Dec 13 09:11:56.083512 kernel: audit: type=2000 audit(1734081115.140:1): state=initialized audit_enabled=0 res=1
Dec 13 09:11:56.083548 kernel: thermal_sys: Registered thermal governor 'step_wise'
Dec 13 09:11:56.083562 kernel: thermal_sys: Registered thermal governor 'user_space'
Dec 13 09:11:56.083577 kernel: cpuidle: using governor menu
Dec 13 09:11:56.083591 kernel: acpiphp: ACPI Hot Plug PCI Controller Driver version: 0.5
Dec 13 09:11:56.083604 kernel: dca service started, version 1.12.1
Dec 13 09:11:56.083618 kernel: PCI: Using configuration type 1 for base access
Dec 13 09:11:56.083635 kernel: kprobes: kprobe jump-optimization is enabled. All kprobes are optimized if possible.
Dec 13 09:11:56.083649 kernel: HugeTLB: registered 2.00 MiB page size, pre-allocated 0 pages
Dec 13 09:11:56.083663 kernel: HugeTLB: 28 KiB vmemmap can be freed for a 2.00 MiB page
Dec 13 09:11:56.083685 kernel: ACPI: Added _OSI(Module Device)
Dec 13 09:11:56.083699 kernel: ACPI: Added _OSI(Processor Device)
Dec 13 09:11:56.083714 kernel: ACPI: Added _OSI(3.0 _SCP Extensions)
Dec 13 09:11:56.083729 kernel: ACPI: Added _OSI(Processor Aggregator Device)
Dec 13 09:11:56.083760 kernel: ACPI: 1 ACPI AML tables successfully acquired and loaded
Dec 13 09:11:56.083776 kernel: ACPI: _OSC evaluation for CPUs failed, trying _PDC
Dec 13 09:11:56.083823 kernel: ACPI: Interpreter enabled
Dec 13 09:11:56.083837 kernel: ACPI: PM: (supports S0 S5)
Dec 13 09:11:56.083852 kernel: ACPI: Using IOAPIC for interrupt routing
Dec 13 09:11:56.083873 kernel: PCI: Using host bridge windows from ACPI; if necessary, use "pci=nocrs" and report a bug
Dec 13 09:11:56.083889 kernel: PCI: Using E820 reservations for host bridge windows
Dec 13 09:11:56.083904 kernel: ACPI: Enabled 2 GPEs in block 00 to 0F
Dec 13 09:11:56.083918 kernel: ACPI: PCI Root Bridge [PCI0] (domain 0000 [bus 00-ff])
Dec 13 09:11:56.084346 kernel: acpi PNP0A03:00: _OSC: OS supports [ASPM ClockPM Segments MSI HPX-Type3]
Dec 13 09:11:56.084596 kernel: acpi PNP0A03:00: _OSC: not requesting OS control; OS requires [ExtendedConfig ASPM ClockPM MSI]
Dec 13 09:11:56.084793 kernel: acpi PNP0A03:00: fail to add MMCONFIG information, can't access extended configuration space under this bridge
Dec 13 09:11:56.084830 kernel: acpiphp: Slot [3] registered
Dec 13 09:11:56.084846 kernel: acpiphp: Slot [4] registered
Dec 13 09:11:56.084860 kernel: acpiphp: Slot [5] registered
Dec 13 09:11:56.084873 kernel: acpiphp: Slot [6] registered
Dec 13 09:11:56.084886 kernel: acpiphp: Slot [7] registered
Dec 13 09:11:56.084900 kernel: acpiphp: Slot [8] registered
Dec 13 09:11:56.084914 kernel: acpiphp: Slot [9] registered
Dec 13 09:11:56.084928 kernel: acpiphp: Slot [10] registered
Dec 13 09:11:56.084942 kernel: acpiphp: Slot [11] registered
Dec 13 09:11:56.084963 kernel: acpiphp: Slot [12] registered
Dec 13 09:11:56.084977 kernel: acpiphp: Slot [13] registered
Dec 13 09:11:56.084991 kernel: acpiphp: Slot [14] registered
Dec 13 09:11:56.085004 kernel: acpiphp: Slot [15] registered
Dec 13 09:11:56.085019 kernel: acpiphp: Slot [16] registered
Dec 13 09:11:56.085034 kernel: acpiphp: Slot [17] registered
Dec 13 09:11:56.085047 kernel: acpiphp: Slot [18] registered
Dec 13 09:11:56.085062 kernel: acpiphp: Slot [19] registered
Dec 13 09:11:56.085077 kernel: acpiphp: Slot [20] registered
Dec 13 09:11:56.085093 kernel: acpiphp: Slot [21] registered
Dec 13 09:11:56.085113 kernel: acpiphp: Slot [22] registered
Dec 13 09:11:56.085127 kernel: acpiphp: Slot [23] registered
Dec 13 09:11:56.085141 kernel: acpiphp: Slot [24] registered
Dec 13 09:11:56.085156 kernel: acpiphp: Slot [25] registered
Dec 13 09:11:56.085172 kernel: acpiphp: Slot [26] registered
Dec 13 09:11:56.085187 kernel: acpiphp: Slot [27] registered
Dec 13 09:11:56.085201 kernel: acpiphp: Slot [28] registered
Dec 13 09:11:56.085214 kernel: acpiphp: Slot [29] registered
Dec 13 09:11:56.085229 kernel: acpiphp: Slot [30] registered
Dec 13 09:11:56.085249 kernel: acpiphp: Slot [31] registered
Dec 13 09:11:56.085263 kernel: PCI host bridge to bus 0000:00
Dec 13 09:11:56.085527 kernel: pci_bus 0000:00: root bus resource [io 0x0000-0x0cf7 window]
Dec 13 09:11:56.085684 kernel: pci_bus 0000:00: root bus resource [io 0x0d00-0xffff window]
Dec 13 09:11:56.088071 kernel: pci_bus 0000:00: root bus resource [mem 0x000a0000-0x000bffff window]
Dec 13 09:11:56.088346 kernel: pci_bus 0000:00: root bus resource [mem 0x80000000-0xfebfffff window]
Dec 13 09:11:56.088507 kernel: pci_bus 0000:00: root bus resource [mem 0x100000000-0x17fffffff window]
Dec 13 09:11:56.088649 kernel: pci_bus 0000:00: root bus resource [bus 00-ff]
Dec 13 09:11:56.090049 kernel: pci 0000:00:00.0: [8086:1237] type 00 class 0x060000
Dec 13 09:11:56.090288 kernel: pci 0000:00:01.0: [8086:7000] type 00 class 0x060100
Dec 13 09:11:56.090531 kernel: pci 0000:00:01.1: [8086:7010] type 00 class 0x010180
Dec 13 09:11:56.090698 kernel: pci 0000:00:01.1: reg 0x20: [io 0xc1e0-0xc1ef]
Dec 13 09:11:56.092178 kernel: pci 0000:00:01.1: legacy IDE quirk: reg 0x10: [io 0x01f0-0x01f7]
Dec 13 09:11:56.092451 kernel: pci 0000:00:01.1: legacy IDE quirk: reg 0x14: [io 0x03f6]
Dec 13 09:11:56.092705 kernel: pci 0000:00:01.1: legacy IDE quirk: reg 0x18: [io 0x0170-0x0177]
Dec 13 09:11:56.092908 kernel: pci 0000:00:01.1: legacy IDE quirk: reg 0x1c: [io 0x0376]
Dec 13 09:11:56.093044 kernel: pci 0000:00:01.2: [8086:7020] type 00 class 0x0c0300
Dec 13 09:11:56.093215 kernel: pci 0000:00:01.2: reg 0x20: [io 0xc180-0xc19f]
Dec 13 09:11:56.093415 kernel: pci 0000:00:01.3: [8086:7113] type 00 class 0x068000
Dec 13 09:11:56.093560 kernel: pci 0000:00:01.3: quirk: [io 0x0600-0x063f] claimed by PIIX4 ACPI
Dec 13 09:11:56.093687 kernel: pci 0000:00:01.3: quirk: [io 0x0700-0x070f] claimed by PIIX4 SMB
Dec 13 09:11:56.093947 kernel: pci 0000:00:02.0: [1af4:1050] type 00 class 0x030000
Dec 13 09:11:56.094094 kernel: pci 0000:00:02.0: reg 0x10: [mem 0xfe000000-0xfe7fffff pref]
Dec 13 09:11:56.094208 kernel: pci 0000:00:02.0: reg 0x18: [mem 0xfe800000-0xfe803fff 64bit pref]
Dec 13 09:11:56.094314 kernel: pci 0000:00:02.0: reg 0x20: [mem 0xfebf0000-0xfebf0fff]
Dec 13 09:11:56.094479 kernel: pci 0000:00:02.0: reg 0x30: [mem 0xfebe0000-0xfebeffff pref]
Dec 13 09:11:56.094609 kernel: pci 0000:00:02.0: Video device with shadowed ROM at [mem 0x000c0000-0x000dffff]
Dec 13 09:11:56.095533 kernel: pci 0000:00:03.0: [1af4:1000] type 00 class 0x020000
Dec 13 09:11:56.095878 kernel: pci 0000:00:03.0: reg 0x10: [io 0xc1a0-0xc1bf]
Dec 13 09:11:56.096051 kernel: pci 0000:00:03.0: reg 0x14: [mem 0xfebf1000-0xfebf1fff]
Dec 13 09:11:56.096206 kernel: pci 0000:00:03.0: reg 0x20: [mem 0xfe804000-0xfe807fff 64bit pref]
Dec 13 09:11:56.096393 kernel: pci 0000:00:04.0: [1af4:1000] type 00 class 0x020000
Dec 13 09:11:56.096555 kernel: pci 0000:00:04.0: reg 0x10: [io 0xc1c0-0xc1df]
Dec 13 09:11:56.096712 kernel: pci 0000:00:04.0: reg 0x14: [mem 0xfebf2000-0xfebf2fff]
Dec 13 09:11:56.096924 kernel: pci 0000:00:04.0: reg 0x20: [mem 0xfe808000-0xfe80bfff 64bit pref]
Dec 13 09:11:56.097093 kernel: pci 0000:00:05.0: [1af4:1004] type 00 class 0x010000
Dec 13 09:11:56.097249 kernel: pci 0000:00:05.0: reg 0x10: [io 0xc100-0xc13f]
Dec 13 09:11:56.097423 kernel: pci 0000:00:05.0: reg 0x14: [mem 0xfebf3000-0xfebf3fff]
Dec 13 09:11:56.097587 kernel: pci 0000:00:05.0: reg 0x20: [mem 0xfe80c000-0xfe80ffff 64bit pref]
Dec 13 09:11:56.097866 kernel: pci 0000:00:06.0: [1af4:1001] type 00 class 0x010000
Dec 13 09:11:56.098043 kernel: pci 0000:00:06.0: reg 0x10: [io 0xc000-0xc07f]
Dec 13 09:11:56.098218 kernel: pci 0000:00:06.0: reg 0x14: [mem 0xfebf4000-0xfebf4fff]
Dec 13 09:11:56.098399 kernel: pci 0000:00:06.0: reg 0x20: [mem 0xfe810000-0xfe813fff 64bit pref]
Dec 13 09:11:56.098583 kernel: pci 0000:00:07.0: [1af4:1001] type 00 class 0x010000
Dec 13 09:11:56.098769 kernel: pci 0000:00:07.0: reg 0x10: [io 0xc080-0xc0ff]
Dec 13 09:11:56.098938 kernel: pci 0000:00:07.0: reg 0x14: [mem 0xfebf5000-0xfebf5fff]
Dec 13 09:11:56.099103 kernel: pci 0000:00:07.0: reg 0x20: [mem 0xfe814000-0xfe817fff 64bit pref]
Dec 13 09:11:56.099300 kernel: pci 0000:00:08.0: [1af4:1002] type 00 class 0x00ff00
Dec 13 09:11:56.099489 kernel: pci 0000:00:08.0: reg 0x10: [io 0xc140-0xc17f]
Dec 13 09:11:56.099646 kernel: pci 0000:00:08.0: reg 0x20: [mem 0xfe818000-0xfe81bfff 64bit pref]
Dec 13 09:11:56.099668 kernel: ACPI: PCI: Interrupt link LNKA configured for IRQ 10
Dec 13 09:11:56.099683 kernel: ACPI: PCI: Interrupt link LNKB configured for IRQ 10
Dec 13 09:11:56.099700 kernel: ACPI: PCI: Interrupt link LNKC configured for IRQ 11
Dec 13 09:11:56.099715 kernel: ACPI: PCI: Interrupt link LNKD configured for IRQ 11
Dec 13 09:11:56.099731 kernel: ACPI: PCI: Interrupt link LNKS configured for IRQ 9
Dec 13 09:11:56.099779 kernel: iommu: Default domain type: Translated
Dec 13 09:11:56.099795 kernel: iommu: DMA domain TLB invalidation policy: lazy mode
Dec 13 09:11:56.099812 kernel: PCI: Using ACPI for IRQ routing
Dec 13 09:11:56.099830 kernel: PCI: pci_cache_line_size set to 64 bytes
Dec 13 09:11:56.099847 kernel: e820: reserve RAM buffer [mem 0x0009fc00-0x0009ffff]
Dec 13 09:11:56.099864 kernel: e820: reserve RAM buffer [mem 0x7ffdb000-0x7fffffff]
Dec 13 09:11:56.100051 kernel: pci 0000:00:02.0: vgaarb: setting as boot VGA device
Dec 13 09:11:56.100197 kernel: pci 0000:00:02.0: vgaarb: bridge control possible
Dec 13 09:11:56.100341 kernel: pci 0000:00:02.0: vgaarb: VGA device added: decodes=io+mem,owns=io+mem,locks=none
Dec 13 09:11:56.100356 kernel: vgaarb: loaded
Dec 13 09:11:56.100367 kernel: hpet0: at MMIO 0xfed00000, IRQs 2, 8, 0
Dec 13 09:11:56.100377 kernel: hpet0: 3 comparators, 64-bit 100.000000 MHz counter
Dec 13 09:11:56.100387 kernel: clocksource: Switched to clocksource kvm-clock
Dec 13 09:11:56.100397 kernel: VFS: Disk quotas dquot_6.6.0
Dec 13 09:11:56.100408 kernel: VFS: Dquot-cache hash table entries: 512 (order 0, 4096 bytes)
Dec 13 09:11:56.100418 kernel: pnp: PnP ACPI init
Dec 13 09:11:56.100428 kernel: pnp: PnP ACPI: found 4 devices
Dec 13 09:11:56.100444 kernel: clocksource: acpi_pm: mask: 0xffffff max_cycles: 0xffffff, max_idle_ns: 2085701024 ns
Dec 13 09:11:56.100454 kernel: NET: Registered PF_INET protocol family
Dec 13 09:11:56.100464 kernel: IP idents hash table entries: 32768 (order: 6, 262144 bytes, linear)
Dec 13 09:11:56.100480 kernel: tcp_listen_portaddr_hash hash table entries: 1024 (order: 2, 16384 bytes, linear)
Dec 13 09:11:56.100496 kernel: Table-perturb hash table entries: 65536 (order: 6, 262144 bytes, linear)
Dec 13 09:11:56.100513 kernel: TCP established hash table entries: 16384 (order: 5, 131072 bytes, linear)
Dec 13 09:11:56.100524 kernel: TCP bind hash table entries: 16384 (order: 7, 524288 bytes, linear)
Dec 13 09:11:56.100541 kernel: TCP: Hash tables configured (established 16384 bind 16384)
Dec 13 09:11:56.100557 kernel: UDP hash table entries: 1024 (order: 3, 32768 bytes, linear)
Dec 13 09:11:56.100581 kernel: UDP-Lite hash table entries: 1024 (order: 3, 32768 bytes, linear)
Dec 13 09:11:56.100595 kernel: NET: Registered PF_UNIX/PF_LOCAL protocol family
Dec 13 09:11:56.100605 kernel: NET: Registered PF_XDP protocol family
Dec 13 09:11:56.100732 kernel: pci_bus 0000:00: resource 4 [io 0x0000-0x0cf7 window]
Dec 13 09:11:56.100872 kernel: pci_bus 0000:00: resource 5 [io 0x0d00-0xffff window]
Dec 13 09:11:56.100967 kernel: pci_bus 0000:00: resource 6 [mem 0x000a0000-0x000bffff window]
Dec 13 09:11:56.101071 kernel: pci_bus 0000:00: resource 7 [mem 0x80000000-0xfebfffff window]
Dec 13 09:11:56.101188 kernel: pci_bus 0000:00: resource 8 [mem 0x100000000-0x17fffffff window]
Dec 13 09:11:56.101396 kernel: pci 0000:00:01.0: PIIX3: Enabling Passive Release
Dec 13 09:11:56.101583 kernel: pci 0000:00:00.0: Limiting direct PCI/PCI transfers
Dec 13 09:11:56.101602 kernel: ACPI: \_SB_.LNKD: Enabled at IRQ 11
Dec 13 09:11:56.102296 kernel: pci 0000:00:01.2: quirk_usb_early_handoff+0x0/0x7a0 took 39639 usecs
Dec 13 09:11:56.102325 kernel: PCI: CLS 0 bytes, default 64
Dec 13 09:11:56.102340 kernel: RAPL PMU: API unit is 2^-32 Joules, 0 fixed counters, 10737418240 ms ovfl timer
Dec 13 09:11:56.102355 kernel: clocksource: tsc: mask: 0xffffffffffffffff max_cycles: 0x23f39654230, max_idle_ns: 440795207432 ns
Dec 13 09:11:56.102370 kernel: Initialise system trusted keyrings
Dec 13 09:11:56.102397 kernel: workingset: timestamp_bits=39 max_order=19 bucket_order=0
Dec 13 09:11:56.102411 kernel: Key type asymmetric registered
Dec 13 09:11:56.102425 kernel: Asymmetric key parser 'x509' registered
Dec 13 09:11:56.102439 kernel: Block layer SCSI generic (bsg) driver version 0.4 loaded (major 251)
Dec 13 09:11:56.102452 kernel: io scheduler mq-deadline registered
Dec 13 09:11:56.102465 kernel: io scheduler kyber registered
Dec 13 09:11:56.102479 kernel: io scheduler bfq registered
Dec 13 09:11:56.102492 kernel: ioatdma: Intel(R) QuickData Technology Driver 5.00
Dec 13 09:11:56.102506 kernel: ACPI: \_SB_.LNKB: Enabled at IRQ 10
Dec 13 09:11:56.102520 kernel: ACPI: \_SB_.LNKC: Enabled at IRQ 11
Dec 13 09:11:56.102540 kernel: ACPI: \_SB_.LNKA: Enabled at IRQ 10
Dec 13 09:11:56.102553 kernel: Serial: 8250/16550 driver, 4 ports, IRQ sharing enabled
Dec 13 09:11:56.102566 kernel: 00:00: ttyS0 at I/O 0x3f8 (irq = 4, base_baud = 115200) is a 16550A
Dec 13 09:11:56.102580 kernel: i8042: PNP: PS/2 Controller [PNP0303:KBD,PNP0f13:MOU] at 0x60,0x64 irq 1,12
Dec 13 09:11:56.102595 kernel: serio: i8042 KBD port at 0x60,0x64 irq 1
Dec 13 09:11:56.102608 kernel: serio: i8042 AUX port at 0x60,0x64 irq 12
Dec 13 09:11:56.102982 kernel: rtc_cmos 00:03: RTC can wake from S4
Dec 13 09:11:56.103012 kernel: input: AT Translated Set 2 keyboard as /devices/platform/i8042/serio0/input/input1
Dec 13 09:11:56.103168 kernel: rtc_cmos 00:03: registered as rtc0
Dec 13 09:11:56.103312 kernel: rtc_cmos 00:03: setting system clock to 2024-12-13T09:11:55 UTC (1734081115)
Dec 13 09:11:56.103454 kernel: rtc_cmos 00:03: alarms up to one day, y3k, 242 bytes nvram
Dec 13 09:11:56.103474 kernel: intel_pstate: CPU model not supported
Dec 13 09:11:56.103492 kernel: NET: Registered PF_INET6 protocol family
Dec 13 09:11:56.103509 kernel: Segment Routing with IPv6
Dec 13 09:11:56.103528 kernel: In-situ OAM (IOAM) with IPv6
Dec 13 09:11:56.103546 kernel: NET: Registered PF_PACKET protocol family
Dec 13 09:11:56.103569 kernel: Key type dns_resolver registered
Dec 13 09:11:56.103587 kernel: IPI shorthand broadcast: enabled
Dec 13 09:11:56.103604 kernel: sched_clock: Marking stable (1178005433, 105981216)->(1307343483, -23356834)
Dec 13 09:11:56.103622 kernel: registered taskstats version 1
Dec 13 09:11:56.103639 kernel: Loading compiled-in X.509 certificates
Dec 13 09:11:56.103656 kernel: Loaded X.509 cert 'Kinvolk GmbH: Module signing key for 6.6.65-flatcar: c82d546f528d79a5758dcebbc47fb6daf92836a0'
Dec 13 09:11:56.103674 kernel: Key type .fscrypt registered
Dec 13 09:11:56.103690 kernel: Key type fscrypt-provisioning registered
Dec 13 09:11:56.103707 kernel: ima: No TPM chip found, activating TPM-bypass!
Dec 13 09:11:56.103741 kernel: ima: Allocated hash algorithm: sha1
Dec 13 09:11:56.103777 kernel: ima: No architecture policies found
Dec 13 09:11:56.103809 kernel: clk: Disabling unused clocks
Dec 13 09:11:56.103843 kernel: Freeing unused kernel image (initmem) memory: 42844K
Dec 13 09:11:56.103861 kernel: Write protecting the kernel read-only data: 36864k
Dec 13 09:11:56.103912 kernel: Freeing unused kernel image (rodata/data gap) memory: 1852K
Dec 13 09:11:56.103935 kernel: Run /init as init process
Dec 13 09:11:56.103954 kernel:   with arguments:
Dec 13 09:11:56.103973 kernel:     /init
Dec 13 09:11:56.103996 kernel:   with environment:
Dec 13 09:11:56.104015 kernel:     HOME=/
Dec 13 09:11:56.104032 kernel:     TERM=linux
Dec 13 09:11:56.104050 kernel:     BOOT_IMAGE=/flatcar/vmlinuz-a
Dec 13 09:11:56.104075 systemd[1]: systemd 255 running in system mode (+PAM +AUDIT +SELINUX -APPARMOR +IMA +SMACK +SECCOMP +GCRYPT -GNUTLS +OPENSSL -ACL +BLKID +CURL +ELFUTILS -FIDO2 +IDN2 -IDN +IPTC +KMOD +LIBCRYPTSETUP +LIBFDISK +PCRE2 -PWQUALITY -P11KIT -QRENCODE +TPM2 +BZIP2 +LZ4 +XZ +ZLIB +ZSTD -BPF_FRAMEWORK -XKBCOMMON +UTMP -SYSVINIT default-hierarchy=unified)
Dec 13 09:11:56.104102 systemd[1]: Detected virtualization kvm.
Dec 13 09:11:56.104121 systemd[1]: Detected architecture x86-64.
Dec 13 09:11:56.104140 systemd[1]: Running in initrd.
Dec 13 09:11:56.104165 systemd[1]: No hostname configured, using default hostname.
Dec 13 09:11:56.104184 systemd[1]: Hostname set to .
Dec 13 09:11:56.104204 systemd[1]: Initializing machine ID from VM UUID.
Dec 13 09:11:56.104223 systemd[1]: Queued start job for default target initrd.target.
Dec 13 09:11:56.104244 systemd[1]: Started clevis-luks-askpass.path - Forward Password Requests to Clevis Directory Watch.
Dec 13 09:11:56.104279 systemd[1]: Started systemd-ask-password-console.path - Dispatch Password Requests to Console Directory Watch.
Dec 13 09:11:56.104297 systemd[1]: Expecting device dev-disk-by\x2dlabel-EFI\x2dSYSTEM.device - /dev/disk/by-label/EFI-SYSTEM...
Dec 13 09:11:56.104313 systemd[1]: Expecting device dev-disk-by\x2dlabel-OEM.device - /dev/disk/by-label/OEM...
Dec 13 09:11:56.104335 systemd[1]: Expecting device dev-disk-by\x2dlabel-ROOT.device - /dev/disk/by-label/ROOT...
Dec 13 09:11:56.104355 systemd[1]: Expecting device dev-disk-by\x2dpartlabel-USR\x2dA.device - /dev/disk/by-partlabel/USR-A...
Dec 13 09:11:56.104377 systemd[1]: Expecting device dev-disk-by\x2dpartuuid-7130c94a\x2d213a\x2d4e5a\x2d8e26\x2d6cce9662f132.device - /dev/disk/by-partuuid/7130c94a-213a-4e5a-8e26-6cce9662f132...
Dec 13 09:11:56.104396 systemd[1]: Expecting device dev-mapper-usr.device - /dev/mapper/usr...
Dec 13 09:11:56.104416 systemd[1]: Reached target cryptsetup-pre.target - Local Encrypted Volumes (Pre).
Dec 13 09:11:56.104436 systemd[1]: Reached target cryptsetup.target - Local Encrypted Volumes.
Dec 13 09:11:56.104460 systemd[1]: Reached target paths.target - Path Units.
Dec 13 09:11:56.104479 systemd[1]: Reached target slices.target - Slice Units.
Dec 13 09:11:56.104501 systemd[1]: Reached target swap.target - Swaps.
Dec 13 09:11:56.104526 systemd[1]: Reached target timers.target - Timer Units.
Dec 13 09:11:56.104546 systemd[1]: Listening on iscsid.socket - Open-iSCSI iscsid Socket.
Dec 13 09:11:56.104566 systemd[1]: Listening on iscsiuio.socket - Open-iSCSI iscsiuio Socket.
Dec 13 09:11:56.104591 systemd[1]: Listening on systemd-journald-dev-log.socket - Journal Socket (/dev/log).
Dec 13 09:11:56.104610 systemd[1]: Listening on systemd-journald.socket - Journal Socket. Dec 13 09:11:56.104630 systemd[1]: Listening on systemd-networkd.socket - Network Service Netlink Socket. Dec 13 09:11:56.104650 systemd[1]: Listening on systemd-udevd-control.socket - udev Control Socket. Dec 13 09:11:56.104670 systemd[1]: Listening on systemd-udevd-kernel.socket - udev Kernel Socket. Dec 13 09:11:56.104689 systemd[1]: Reached target sockets.target - Socket Units. Dec 13 09:11:56.104708 systemd[1]: Starting ignition-setup-pre.service - Ignition env setup... Dec 13 09:11:56.104723 systemd[1]: Starting kmod-static-nodes.service - Create List of Static Device Nodes... Dec 13 09:11:56.104766 systemd[1]: Finished network-cleanup.service - Network Cleanup. Dec 13 09:11:56.104786 systemd[1]: Starting systemd-fsck-usr.service... Dec 13 09:11:56.104805 systemd[1]: Starting systemd-journald.service - Journal Service... Dec 13 09:11:56.104825 systemd[1]: Starting systemd-modules-load.service - Load Kernel Modules... Dec 13 09:11:56.104845 systemd[1]: Starting systemd-vconsole-setup.service - Virtual Console Setup... Dec 13 09:11:56.104865 systemd[1]: Finished ignition-setup-pre.service - Ignition env setup. Dec 13 09:11:56.104884 systemd[1]: Finished kmod-static-nodes.service - Create List of Static Device Nodes. Dec 13 09:11:56.104903 systemd[1]: Finished systemd-fsck-usr.service. Dec 13 09:11:56.104930 systemd[1]: Starting systemd-tmpfiles-setup-dev-early.service - Create Static Device Nodes in /dev gracefully... Dec 13 09:11:56.105031 systemd-journald[183]: Collecting audit messages is disabled. Dec 13 09:11:56.105083 systemd[1]: Finished systemd-tmpfiles-setup-dev-early.service - Create Static Device Nodes in /dev gracefully. Dec 13 09:11:56.105098 systemd[1]: Starting systemd-tmpfiles-setup-dev.service - Create Static Device Nodes in /dev... 
Dec 13 09:11:56.105119 systemd-journald[183]: Journal started Dec 13 09:11:56.105159 systemd-journald[183]: Runtime Journal (/run/log/journal/a4b84da1807f44f1ae8e2624608fef3b) is 4.9M, max 39.3M, 34.4M free. Dec 13 09:11:56.085826 systemd-modules-load[184]: Inserted module 'overlay' Dec 13 09:11:56.158396 systemd[1]: Started systemd-journald.service - Journal Service. Dec 13 09:11:56.158433 kernel: bridge: filtering via arp/ip/ip6tables is no longer available by default. Update your scripts to load br_netfilter if you need this. Dec 13 09:11:56.158450 kernel: Bridge firewalling registered Dec 13 09:11:56.148408 systemd-modules-load[184]: Inserted module 'br_netfilter' Dec 13 09:11:56.158133 systemd[1]: Finished systemd-modules-load.service - Load Kernel Modules. Dec 13 09:11:56.159296 systemd[1]: Finished systemd-vconsole-setup.service - Virtual Console Setup. Dec 13 09:11:56.160201 systemd[1]: Finished systemd-tmpfiles-setup-dev.service - Create Static Device Nodes in /dev. Dec 13 09:11:56.170166 systemd[1]: Starting dracut-cmdline-ask.service - dracut ask for additional cmdline parameters... Dec 13 09:11:56.178244 systemd[1]: Starting systemd-sysctl.service - Apply Kernel Variables... Dec 13 09:11:56.187111 systemd[1]: Starting systemd-tmpfiles-setup.service - Create System Files and Directories... Dec 13 09:11:56.204699 systemd[1]: Finished systemd-sysctl.service - Apply Kernel Variables. Dec 13 09:11:56.213880 systemd[1]: Finished dracut-cmdline-ask.service - dracut ask for additional cmdline parameters. Dec 13 09:11:56.224178 systemd[1]: Starting dracut-cmdline.service - dracut cmdline hook... Dec 13 09:11:56.226625 systemd[1]: Finished systemd-tmpfiles-setup.service - Create System Files and Directories. Dec 13 09:11:56.236239 systemd[1]: Starting systemd-resolved.service - Network Name Resolution... 
Dec 13 09:11:56.240026 dracut-cmdline[217]: dracut-dracut-053 Dec 13 09:11:56.244659 dracut-cmdline[217]: Using kernel command line parameters: rd.driver.pre=btrfs rootflags=rw mount.usrflags=ro BOOT_IMAGE=/flatcar/vmlinuz-a mount.usr=/dev/mapper/usr verity.usr=PARTUUID=7130c94a-213a-4e5a-8e26-6cce9662f132 rootflags=rw mount.usrflags=ro consoleblank=0 root=LABEL=ROOT console=ttyS0,115200n8 console=tty0 flatcar.first_boot=detected flatcar.oem.id=digitalocean verity.usrhash=2fdbba50b59d8c8a9877a81151806ddc16f473fe99b9ba0d8825997d654583ff Dec 13 09:11:56.288691 systemd-resolved[222]: Positive Trust Anchors: Dec 13 09:11:56.289467 systemd-resolved[222]: . IN DS 20326 8 2 e06d44b80b8f1d39a95c0b0d7c65d08458e880409bbc683457104237c7f8ec8d Dec 13 09:11:56.289522 systemd-resolved[222]: Negative trust anchors: home.arpa 10.in-addr.arpa 16.172.in-addr.arpa 17.172.in-addr.arpa 18.172.in-addr.arpa 19.172.in-addr.arpa 20.172.in-addr.arpa 21.172.in-addr.arpa 22.172.in-addr.arpa 23.172.in-addr.arpa 24.172.in-addr.arpa 25.172.in-addr.arpa 26.172.in-addr.arpa 27.172.in-addr.arpa 28.172.in-addr.arpa 29.172.in-addr.arpa 30.172.in-addr.arpa 31.172.in-addr.arpa 170.0.0.192.in-addr.arpa 171.0.0.192.in-addr.arpa 168.192.in-addr.arpa d.f.ip6.arpa ipv4only.arpa resolver.arpa corp home internal intranet lan local private test Dec 13 09:11:56.294631 systemd-resolved[222]: Defaulting to hostname 'linux'. Dec 13 09:11:56.296455 systemd[1]: Started systemd-resolved.service - Network Name Resolution. Dec 13 09:11:56.297040 systemd[1]: Reached target nss-lookup.target - Host and Network Name Lookups. Dec 13 09:11:56.373810 kernel: SCSI subsystem initialized Dec 13 09:11:56.386843 kernel: Loading iSCSI transport class v2.0-870. 
Dec 13 09:11:56.403793 kernel: iscsi: registered transport (tcp) Dec 13 09:11:56.428778 kernel: iscsi: registered transport (qla4xxx) Dec 13 09:11:56.428856 kernel: QLogic iSCSI HBA Driver Dec 13 09:11:56.492447 systemd[1]: Finished dracut-cmdline.service - dracut cmdline hook. Dec 13 09:11:56.500047 systemd[1]: Starting dracut-pre-udev.service - dracut pre-udev hook... Dec 13 09:11:56.546991 kernel: device-mapper: core: CONFIG_IMA_DISABLE_HTABLE is disabled. Duplicate IMA measurements will not be recorded in the IMA log. Dec 13 09:11:56.547145 kernel: device-mapper: uevent: version 1.0.3 Dec 13 09:11:56.548389 kernel: device-mapper: ioctl: 4.48.0-ioctl (2023-03-01) initialised: dm-devel@redhat.com Dec 13 09:11:56.599839 kernel: raid6: avx2x4 gen() 13979 MB/s Dec 13 09:11:56.616814 kernel: raid6: avx2x2 gen() 13880 MB/s Dec 13 09:11:56.633842 kernel: raid6: avx2x1 gen() 10620 MB/s Dec 13 09:11:56.633942 kernel: raid6: using algorithm avx2x4 gen() 13979 MB/s Dec 13 09:11:56.651927 kernel: raid6: .... xor() 6498 MB/s, rmw enabled Dec 13 09:11:56.652021 kernel: raid6: using avx2x2 recovery algorithm Dec 13 09:11:56.676822 kernel: xor: automatically using best checksumming function avx Dec 13 09:11:56.870803 kernel: Btrfs loaded, zoned=no, fsverity=no Dec 13 09:11:56.890122 systemd[1]: Finished dracut-pre-udev.service - dracut pre-udev hook. Dec 13 09:11:56.897109 systemd[1]: Starting systemd-udevd.service - Rule-based Manager for Device Events and Files... Dec 13 09:11:56.927626 systemd-udevd[402]: Using default interface naming scheme 'v255'. Dec 13 09:11:56.934297 systemd[1]: Started systemd-udevd.service - Rule-based Manager for Device Events and Files. Dec 13 09:11:56.941000 systemd[1]: Starting dracut-pre-trigger.service - dracut pre-trigger hook... Dec 13 09:11:56.975694 dracut-pre-trigger[407]: rd.md=0: removing MD RAID activation Dec 13 09:11:57.025136 systemd[1]: Finished dracut-pre-trigger.service - dracut pre-trigger hook. 
Dec 13 09:11:57.032230 systemd[1]: Starting systemd-udev-trigger.service - Coldplug All udev Devices... Dec 13 09:11:57.130667 systemd[1]: Finished systemd-udev-trigger.service - Coldplug All udev Devices. Dec 13 09:11:57.143725 systemd[1]: Starting dracut-initqueue.service - dracut initqueue hook... Dec 13 09:11:57.188072 systemd[1]: Finished dracut-initqueue.service - dracut initqueue hook. Dec 13 09:11:57.190358 systemd[1]: Reached target remote-fs-pre.target - Preparation for Remote File Systems. Dec 13 09:11:57.191544 systemd[1]: Reached target remote-cryptsetup.target - Remote Encrypted Volumes. Dec 13 09:11:57.192112 systemd[1]: Reached target remote-fs.target - Remote File Systems. Dec 13 09:11:57.199262 systemd[1]: Starting dracut-pre-mount.service - dracut pre-mount hook... Dec 13 09:11:57.236356 systemd[1]: Finished dracut-pre-mount.service - dracut pre-mount hook. Dec 13 09:11:57.285295 kernel: scsi host0: Virtio SCSI HBA Dec 13 09:11:57.285448 kernel: virtio_blk virtio4: 1/0/0 default/read/poll queues Dec 13 09:11:57.369210 kernel: cryptd: max_cpu_qlen set to 1000 Dec 13 09:11:57.369250 kernel: virtio_blk virtio4: [vda] 125829120 512-byte logical blocks (64.4 GB/60.0 GiB) Dec 13 09:11:57.369503 kernel: libata version 3.00 loaded. Dec 13 09:11:57.369547 kernel: ata_piix 0000:00:01.1: version 2.13 Dec 13 09:11:57.369829 kernel: scsi host1: ata_piix Dec 13 09:11:57.370058 kernel: scsi host2: ata_piix Dec 13 09:11:57.370328 kernel: ata1: PATA max MWDMA2 cmd 0x1f0 ctl 0x3f6 bmdma 0xc1e0 irq 14 Dec 13 09:11:57.370352 kernel: ata2: PATA max MWDMA2 cmd 0x170 ctl 0x376 bmdma 0xc1e8 irq 15 Dec 13 09:11:57.370371 kernel: AVX2 version of gcm_enc/dec engaged. Dec 13 09:11:57.370389 kernel: AES CTR mode by8 optimization enabled Dec 13 09:11:57.370424 kernel: GPT:Primary header thinks Alt. header is not at the end of the disk. Dec 13 09:11:57.370444 kernel: GPT:9289727 != 125829119 Dec 13 09:11:57.370462 kernel: GPT:Alternate GPT header not at the end of the disk. 
Dec 13 09:11:57.370482 kernel: GPT:9289727 != 125829119 Dec 13 09:11:57.370500 kernel: GPT: Use GNU Parted to correct GPT errors. Dec 13 09:11:57.370519 kernel: vda: vda1 vda2 vda3 vda4 vda6 vda7 vda9 Dec 13 09:11:57.343657 systemd[1]: dracut-cmdline-ask.service: Deactivated successfully. Dec 13 09:11:57.374231 kernel: virtio_blk virtio5: 1/0/0 default/read/poll queues Dec 13 09:11:57.382997 kernel: virtio_blk virtio5: [vdb] 920 512-byte logical blocks (471 kB/460 KiB) Dec 13 09:11:57.343884 systemd[1]: Stopped dracut-cmdline-ask.service - dracut ask for additional cmdline parameters. Dec 13 09:11:57.347380 systemd[1]: Stopping dracut-cmdline-ask.service - dracut ask for additional cmdline parameters... Dec 13 09:11:57.350067 systemd[1]: systemd-vconsole-setup.service: Deactivated successfully. Dec 13 09:11:57.395255 kernel: ACPI: bus type USB registered Dec 13 09:11:57.395290 kernel: usbcore: registered new interface driver usbfs Dec 13 09:11:57.395306 kernel: usbcore: registered new interface driver hub Dec 13 09:11:57.395319 kernel: usbcore: registered new device driver usb Dec 13 09:11:57.350320 systemd[1]: Stopped systemd-vconsole-setup.service - Virtual Console Setup. Dec 13 09:11:57.350831 systemd[1]: Stopping systemd-vconsole-setup.service - Virtual Console Setup... Dec 13 09:11:57.358429 systemd[1]: Starting systemd-vconsole-setup.service - Virtual Console Setup... Dec 13 09:11:57.443983 systemd[1]: Finished systemd-vconsole-setup.service - Virtual Console Setup. Dec 13 09:11:57.448328 systemd[1]: Starting dracut-cmdline-ask.service - dracut ask for additional cmdline parameters... Dec 13 09:11:57.478886 systemd[1]: Finished dracut-cmdline-ask.service - dracut ask for additional cmdline parameters. 
Dec 13 09:11:57.567774 kernel: BTRFS: device label OEM devid 1 transid 12 /dev/vda6 scanned by (udev-worker) (453) Dec 13 09:11:57.567848 kernel: uhci_hcd 0000:00:01.2: UHCI Host Controller Dec 13 09:11:57.580113 kernel: uhci_hcd 0000:00:01.2: new USB bus registered, assigned bus number 1 Dec 13 09:11:57.580351 kernel: uhci_hcd 0000:00:01.2: detected 2 ports Dec 13 09:11:57.580503 kernel: uhci_hcd 0000:00:01.2: irq 11, io port 0x0000c180 Dec 13 09:11:57.580626 kernel: BTRFS: device fsid c3b72f8a-27ca-4d37-9d0e-1ec3c4bdc3be devid 1 transid 41 /dev/vda3 scanned by (udev-worker) (458) Dec 13 09:11:57.580640 kernel: hub 1-0:1.0: USB hub found Dec 13 09:11:57.580858 kernel: hub 1-0:1.0: 2 ports detected Dec 13 09:11:57.579492 systemd[1]: Found device dev-disk-by\x2dlabel-EFI\x2dSYSTEM.device - /dev/disk/by-label/EFI-SYSTEM. Dec 13 09:11:57.601156 systemd[1]: Found device dev-disk-by\x2dlabel-ROOT.device - /dev/disk/by-label/ROOT. Dec 13 09:11:57.608940 systemd[1]: Found device dev-disk-by\x2dlabel-OEM.device - /dev/disk/by-label/OEM. Dec 13 09:11:57.616573 systemd[1]: Found device dev-disk-by\x2dpartuuid-7130c94a\x2d213a\x2d4e5a\x2d8e26\x2d6cce9662f132.device - /dev/disk/by-partuuid/7130c94a-213a-4e5a-8e26-6cce9662f132. Dec 13 09:11:57.617111 systemd[1]: Found device dev-disk-by\x2dpartlabel-USR\x2dA.device - /dev/disk/by-partlabel/USR-A. Dec 13 09:11:57.623110 systemd[1]: Starting disk-uuid.service - Generate new UUID for disk GPT if necessary... Dec 13 09:11:57.639164 disk-uuid[547]: Primary Header is updated. Dec 13 09:11:57.639164 disk-uuid[547]: Secondary Entries is updated. Dec 13 09:11:57.639164 disk-uuid[547]: Secondary Header is updated. Dec 13 09:11:57.648774 kernel: vda: vda1 vda2 vda3 vda4 vda6 vda7 vda9 Dec 13 09:11:57.656815 kernel: vda: vda1 vda2 vda3 vda4 vda6 vda7 vda9 Dec 13 09:11:58.668793 kernel: vda: vda1 vda2 vda3 vda4 vda6 vda7 vda9 Dec 13 09:11:58.670638 disk-uuid[548]: The operation has completed successfully. 
Dec 13 09:11:58.725142 systemd[1]: disk-uuid.service: Deactivated successfully. Dec 13 09:11:58.725337 systemd[1]: Finished disk-uuid.service - Generate new UUID for disk GPT if necessary. Dec 13 09:11:58.740131 systemd[1]: Starting verity-setup.service - Verity Setup for /dev/mapper/usr... Dec 13 09:11:58.752954 sh[562]: Success Dec 13 09:11:58.770773 kernel: device-mapper: verity: sha256 using implementation "sha256-avx2" Dec 13 09:11:58.848455 systemd[1]: Found device dev-mapper-usr.device - /dev/mapper/usr. Dec 13 09:11:58.855940 systemd[1]: Mounting sysusr-usr.mount - /sysusr/usr... Dec 13 09:11:58.863158 systemd[1]: Finished verity-setup.service - Verity Setup for /dev/mapper/usr. Dec 13 09:11:58.892561 kernel: BTRFS info (device dm-0): first mount of filesystem c3b72f8a-27ca-4d37-9d0e-1ec3c4bdc3be Dec 13 09:11:58.892656 kernel: BTRFS info (device dm-0): using crc32c (crc32c-intel) checksum algorithm Dec 13 09:11:58.892671 kernel: BTRFS warning (device dm-0): 'nologreplay' is deprecated, use 'rescue=nologreplay' instead Dec 13 09:11:58.895215 kernel: BTRFS info (device dm-0): disabling log replay at mount time Dec 13 09:11:58.895301 kernel: BTRFS info (device dm-0): using free space tree Dec 13 09:11:58.904501 systemd[1]: Mounted sysusr-usr.mount - /sysusr/usr. Dec 13 09:11:58.906158 systemd[1]: afterburn-network-kargs.service - Afterburn Initrd Setup Network Kernel Arguments was skipped because no trigger condition checks were met. Dec 13 09:11:58.920133 systemd[1]: Starting ignition-setup.service - Ignition (setup)... Dec 13 09:11:58.924070 systemd[1]: Starting parse-ip-for-networkd.service - Write systemd-networkd units from cmdline... 
Dec 13 09:11:58.945130 kernel: BTRFS info (device vda6): first mount of filesystem db063747-cac8-4176-8963-c216c1b11dcb Dec 13 09:11:58.945259 kernel: BTRFS info (device vda6): using crc32c (crc32c-intel) checksum algorithm Dec 13 09:11:58.945283 kernel: BTRFS info (device vda6): using free space tree Dec 13 09:11:58.949795 kernel: BTRFS info (device vda6): auto enabling async discard Dec 13 09:11:58.970771 systemd[1]: mnt-oem.mount: Deactivated successfully. Dec 13 09:11:58.972932 kernel: BTRFS info (device vda6): last unmount of filesystem db063747-cac8-4176-8963-c216c1b11dcb Dec 13 09:11:58.980549 systemd[1]: Finished ignition-setup.service - Ignition (setup). Dec 13 09:11:58.989500 systemd[1]: Starting ignition-fetch-offline.service - Ignition (fetch-offline)... Dec 13 09:11:59.161177 systemd[1]: Finished parse-ip-for-networkd.service - Write systemd-networkd units from cmdline. Dec 13 09:11:59.172173 systemd[1]: Starting systemd-networkd.service - Network Configuration... Dec 13 09:11:59.176583 ignition[661]: Ignition 2.19.0 Dec 13 09:11:59.176607 ignition[661]: Stage: fetch-offline Dec 13 09:11:59.176697 ignition[661]: no configs at "/usr/lib/ignition/base.d" Dec 13 09:11:59.179665 systemd[1]: Finished ignition-fetch-offline.service - Ignition (fetch-offline). 
Dec 13 09:11:59.176728 ignition[661]: no config dir at "/usr/lib/ignition/base.platform.d/digitalocean" Dec 13 09:11:59.177289 ignition[661]: parsed url from cmdline: "" Dec 13 09:11:59.177298 ignition[661]: no config URL provided Dec 13 09:11:59.177309 ignition[661]: reading system config file "/usr/lib/ignition/user.ign" Dec 13 09:11:59.177327 ignition[661]: no config at "/usr/lib/ignition/user.ign" Dec 13 09:11:59.177337 ignition[661]: failed to fetch config: resource requires networking Dec 13 09:11:59.177887 ignition[661]: Ignition finished successfully Dec 13 09:11:59.230342 systemd-networkd[751]: lo: Link UP Dec 13 09:11:59.230359 systemd-networkd[751]: lo: Gained carrier Dec 13 09:11:59.234131 systemd-networkd[751]: Enumeration completed Dec 13 09:11:59.234860 systemd-networkd[751]: eth0: found matching network '/usr/lib/systemd/network/yy-digitalocean.network', based on potentially unpredictable interface name. Dec 13 09:11:59.234866 systemd-networkd[751]: eth0: Configuring with /usr/lib/systemd/network/yy-digitalocean.network. Dec 13 09:11:59.235399 systemd[1]: Started systemd-networkd.service - Network Configuration. Dec 13 09:11:59.236507 systemd-networkd[751]: eth1: found matching network '/usr/lib/systemd/network/zz-default.network', based on potentially unpredictable interface name. Dec 13 09:11:59.236513 systemd-networkd[751]: eth1: Configuring with /usr/lib/systemd/network/zz-default.network. Dec 13 09:11:59.237819 systemd[1]: Reached target network.target - Network. Dec 13 09:11:59.238428 systemd-networkd[751]: eth0: Link UP Dec 13 09:11:59.238435 systemd-networkd[751]: eth0: Gained carrier Dec 13 09:11:59.238454 systemd-networkd[751]: eth0: found matching network '/usr/lib/systemd/network/yy-digitalocean.network', based on potentially unpredictable interface name. 
Dec 13 09:11:59.244177 systemd-networkd[751]: eth1: Link UP Dec 13 09:11:59.244182 systemd-networkd[751]: eth1: Gained carrier Dec 13 09:11:59.244199 systemd-networkd[751]: eth1: found matching network '/usr/lib/systemd/network/zz-default.network', based on potentially unpredictable interface name. Dec 13 09:11:59.249058 systemd[1]: Starting ignition-fetch.service - Ignition (fetch)... Dec 13 09:11:59.258904 systemd-networkd[751]: eth0: DHCPv4 address 147.182.199.141/20, gateway 147.182.192.1 acquired from 169.254.169.253 Dec 13 09:11:59.261909 systemd-networkd[751]: eth1: DHCPv4 address 10.124.0.14/20, gateway 10.124.0.1 acquired from 169.254.169.253 Dec 13 09:11:59.278734 ignition[755]: Ignition 2.19.0 Dec 13 09:11:59.278840 ignition[755]: Stage: fetch Dec 13 09:11:59.279122 ignition[755]: no configs at "/usr/lib/ignition/base.d" Dec 13 09:11:59.279135 ignition[755]: no config dir at "/usr/lib/ignition/base.platform.d/digitalocean" Dec 13 09:11:59.279283 ignition[755]: parsed url from cmdline: "" Dec 13 09:11:59.279289 ignition[755]: no config URL provided Dec 13 09:11:59.279298 ignition[755]: reading system config file "/usr/lib/ignition/user.ign" Dec 13 09:11:59.279317 ignition[755]: no config at "/usr/lib/ignition/user.ign" Dec 13 09:11:59.279353 ignition[755]: GET http://169.254.169.254/metadata/v1/user-data: attempt #1 Dec 13 09:11:59.295247 ignition[755]: GET result: OK Dec 13 09:11:59.295359 ignition[755]: parsing config with SHA512: 5885c9a2245d2850b3a7e78ce2dd5c6a707347b13ca42f4738f464494a4fe904167384a3b7c9e193c82869ee155710433c2d87fe9dd60b254ab415ba855d6f46 Dec 13 09:11:59.299281 unknown[755]: fetched base config from "system" Dec 13 09:11:59.299300 unknown[755]: fetched base config from "system" Dec 13 09:11:59.299603 ignition[755]: fetch: fetch complete Dec 13 09:11:59.299311 unknown[755]: fetched user config from "digitalocean" Dec 13 09:11:59.299609 ignition[755]: fetch: fetch passed Dec 13 09:11:59.302491 systemd[1]: Finished ignition-fetch.service - Ignition (fetch). 
Dec 13 09:11:59.299670 ignition[755]: Ignition finished successfully Dec 13 09:11:59.310135 systemd[1]: Starting ignition-kargs.service - Ignition (kargs)... Dec 13 09:11:59.339254 ignition[762]: Ignition 2.19.0 Dec 13 09:11:59.339271 ignition[762]: Stage: kargs Dec 13 09:11:59.339563 ignition[762]: no configs at "/usr/lib/ignition/base.d" Dec 13 09:11:59.339580 ignition[762]: no config dir at "/usr/lib/ignition/base.platform.d/digitalocean" Dec 13 09:11:59.342471 systemd[1]: Finished ignition-kargs.service - Ignition (kargs). Dec 13 09:11:59.340773 ignition[762]: kargs: kargs passed Dec 13 09:11:59.340843 ignition[762]: Ignition finished successfully Dec 13 09:11:59.350136 systemd[1]: Starting ignition-disks.service - Ignition (disks)... Dec 13 09:11:59.386494 ignition[768]: Ignition 2.19.0 Dec 13 09:11:59.386509 ignition[768]: Stage: disks Dec 13 09:11:59.386840 ignition[768]: no configs at "/usr/lib/ignition/base.d" Dec 13 09:11:59.386853 ignition[768]: no config dir at "/usr/lib/ignition/base.platform.d/digitalocean" Dec 13 09:11:59.387859 ignition[768]: disks: disks passed Dec 13 09:11:59.389273 systemd[1]: Finished ignition-disks.service - Ignition (disks). Dec 13 09:11:59.387923 ignition[768]: Ignition finished successfully Dec 13 09:11:59.394733 systemd[1]: Reached target initrd-root-device.target - Initrd Root Device. Dec 13 09:11:59.395898 systemd[1]: Reached target local-fs-pre.target - Preparation for Local File Systems. Dec 13 09:11:59.397035 systemd[1]: Reached target local-fs.target - Local File Systems. Dec 13 09:11:59.398097 systemd[1]: Reached target sysinit.target - System Initialization. Dec 13 09:11:59.399005 systemd[1]: Reached target basic.target - Basic System. Dec 13 09:11:59.406072 systemd[1]: Starting systemd-fsck-root.service - File System Check on /dev/disk/by-label/ROOT... 
Dec 13 09:11:59.435046 systemd-fsck[777]: ROOT: clean, 14/553520 files, 52654/553472 blocks Dec 13 09:11:59.438506 systemd[1]: Finished systemd-fsck-root.service - File System Check on /dev/disk/by-label/ROOT. Dec 13 09:11:59.445982 systemd[1]: Mounting sysroot.mount - /sysroot... Dec 13 09:11:59.589766 kernel: EXT4-fs (vda9): mounted filesystem 390119fa-ab9c-4f50-b046-3b5c76c46193 r/w with ordered data mode. Quota mode: none. Dec 13 09:11:59.590875 systemd[1]: Mounted sysroot.mount - /sysroot. Dec 13 09:11:59.593056 systemd[1]: Reached target initrd-root-fs.target - Initrd Root File System. Dec 13 09:11:59.605999 systemd[1]: Mounting sysroot-oem.mount - /sysroot/oem... Dec 13 09:11:59.609367 systemd[1]: Mounting sysroot-usr.mount - /sysroot/usr... Dec 13 09:11:59.613020 systemd[1]: Starting flatcar-digitalocean-network.service - Flatcar DigitalOcean Network Agent... Dec 13 09:11:59.621804 kernel: BTRFS: device label OEM devid 1 transid 13 /dev/vda6 scanned by mount (785) Dec 13 09:11:59.623043 systemd[1]: Starting flatcar-metadata-hostname.service - Flatcar Metadata Hostname Agent... Dec 13 09:11:59.626632 systemd[1]: ignition-remount-sysroot.service - Remount /sysroot read-write for Ignition was skipped because of an unmet condition check (ConditionPathIsReadWrite=!/sysroot). Dec 13 09:11:59.633111 kernel: BTRFS info (device vda6): first mount of filesystem db063747-cac8-4176-8963-c216c1b11dcb Dec 13 09:11:59.633170 kernel: BTRFS info (device vda6): using crc32c (crc32c-intel) checksum algorithm Dec 13 09:11:59.633193 kernel: BTRFS info (device vda6): using free space tree Dec 13 09:11:59.633214 kernel: BTRFS info (device vda6): auto enabling async discard Dec 13 09:11:59.628283 systemd[1]: Reached target ignition-diskful.target - Ignition Boot Disk Setup. Dec 13 09:11:59.637893 systemd[1]: Mounted sysroot-usr.mount - /sysroot/usr. Dec 13 09:11:59.647474 systemd[1]: Starting initrd-setup-root.service - Root filesystem setup... 
Dec 13 09:11:59.664207 systemd[1]: Mounted sysroot-oem.mount - /sysroot/oem. Dec 13 09:11:59.736354 coreos-metadata[788]: Dec 13 09:11:59.735 INFO Fetching http://169.254.169.254/metadata/v1.json: Attempt #1 Dec 13 09:11:59.746783 initrd-setup-root[816]: cut: /sysroot/etc/passwd: No such file or directory Dec 13 09:11:59.748868 coreos-metadata[787]: Dec 13 09:11:59.748 INFO Fetching http://169.254.169.254/metadata/v1.json: Attempt #1 Dec 13 09:11:59.751237 coreos-metadata[788]: Dec 13 09:11:59.749 INFO Fetch successful Dec 13 09:11:59.757026 coreos-metadata[788]: Dec 13 09:11:59.756 INFO wrote hostname ci-4081.2.1-7-9f5b9bd84f to /sysroot/etc/hostname Dec 13 09:11:59.759179 systemd[1]: Finished flatcar-metadata-hostname.service - Flatcar Metadata Hostname Agent. Dec 13 09:11:59.763949 coreos-metadata[787]: Dec 13 09:11:59.763 INFO Fetch successful Dec 13 09:11:59.768339 initrd-setup-root[824]: cut: /sysroot/etc/group: No such file or directory Dec 13 09:11:59.772065 systemd[1]: flatcar-digitalocean-network.service: Deactivated successfully. Dec 13 09:11:59.774004 systemd[1]: Finished flatcar-digitalocean-network.service - Flatcar DigitalOcean Network Agent. Dec 13 09:11:59.780191 initrd-setup-root[832]: cut: /sysroot/etc/shadow: No such file or directory Dec 13 09:11:59.789220 initrd-setup-root[839]: cut: /sysroot/etc/gshadow: No such file or directory Dec 13 09:11:59.950234 systemd[1]: Finished initrd-setup-root.service - Root filesystem setup. Dec 13 09:11:59.959015 systemd[1]: Starting ignition-mount.service - Ignition (mount)... Dec 13 09:11:59.963066 systemd[1]: Starting sysroot-boot.service - /sysroot/boot... Dec 13 09:11:59.984769 kernel: BTRFS info (device vda6): last unmount of filesystem db063747-cac8-4176-8963-c216c1b11dcb Dec 13 09:11:59.984506 systemd[1]: sysroot-oem.mount: Deactivated successfully. Dec 13 09:12:00.019565 systemd[1]: Finished sysroot-boot.service - /sysroot/boot. 
Dec 13 09:12:00.025837 ignition[907]: INFO : Ignition 2.19.0 Dec 13 09:12:00.025837 ignition[907]: INFO : Stage: mount Dec 13 09:12:00.026975 ignition[907]: INFO : no configs at "/usr/lib/ignition/base.d" Dec 13 09:12:00.026975 ignition[907]: INFO : no config dir at "/usr/lib/ignition/base.platform.d/digitalocean" Dec 13 09:12:00.030772 ignition[907]: INFO : mount: mount passed Dec 13 09:12:00.030772 ignition[907]: INFO : Ignition finished successfully Dec 13 09:12:00.032014 systemd[1]: Finished ignition-mount.service - Ignition (mount). Dec 13 09:12:00.038000 systemd[1]: Starting ignition-files.service - Ignition (files)... Dec 13 09:12:00.073158 systemd[1]: Mounting sysroot-oem.mount - /sysroot/oem... Dec 13 09:12:00.087853 kernel: BTRFS: device label OEM devid 1 transid 14 /dev/vda6 scanned by mount (918) Dec 13 09:12:00.091808 kernel: BTRFS info (device vda6): first mount of filesystem db063747-cac8-4176-8963-c216c1b11dcb Dec 13 09:12:00.091934 kernel: BTRFS info (device vda6): using crc32c (crc32c-intel) checksum algorithm Dec 13 09:12:00.091952 kernel: BTRFS info (device vda6): using free space tree Dec 13 09:12:00.096823 kernel: BTRFS info (device vda6): auto enabling async discard Dec 13 09:12:00.099482 systemd[1]: Mounted sysroot-oem.mount - /sysroot/oem. 
Dec 13 09:12:00.141848 ignition[935]: INFO : Ignition 2.19.0 Dec 13 09:12:00.141848 ignition[935]: INFO : Stage: files Dec 13 09:12:00.141848 ignition[935]: INFO : no configs at "/usr/lib/ignition/base.d" Dec 13 09:12:00.141848 ignition[935]: INFO : no config dir at "/usr/lib/ignition/base.platform.d/digitalocean" Dec 13 09:12:00.145223 ignition[935]: DEBUG : files: compiled without relabeling support, skipping Dec 13 09:12:00.145223 ignition[935]: INFO : files: ensureUsers: op(1): [started] creating or modifying user "core" Dec 13 09:12:00.145223 ignition[935]: DEBUG : files: ensureUsers: op(1): executing: "usermod" "--root" "/sysroot" "core" Dec 13 09:12:00.149094 ignition[935]: INFO : files: ensureUsers: op(1): [finished] creating or modifying user "core" Dec 13 09:12:00.149887 ignition[935]: INFO : files: ensureUsers: op(2): [started] adding ssh keys to user "core" Dec 13 09:12:00.150791 ignition[935]: INFO : files: ensureUsers: op(2): [finished] adding ssh keys to user "core" Dec 13 09:12:00.150015 unknown[935]: wrote ssh authorized keys file for user: core Dec 13 09:12:00.153778 ignition[935]: INFO : files: createFilesystemsFiles: createFiles: op(3): [started] writing file "/sysroot/home/core/install.sh" Dec 13 09:12:00.155125 ignition[935]: INFO : files: createFilesystemsFiles: createFiles: op(3): [finished] writing file "/sysroot/home/core/install.sh" Dec 13 09:12:00.155125 ignition[935]: INFO : files: createFilesystemsFiles: createFiles: op(4): [started] writing file "/sysroot/etc/flatcar/update.conf" Dec 13 09:12:00.155125 ignition[935]: INFO : files: createFilesystemsFiles: createFiles: op(4): [finished] writing file "/sysroot/etc/flatcar/update.conf" Dec 13 09:12:00.155125 ignition[935]: INFO : files: createFilesystemsFiles: createFiles: op(5): [started] writing link "/sysroot/etc/extensions/kubernetes.raw" -> "/opt/extensions/kubernetes/kubernetes-v1.31.0-x86-64.raw" Dec 13 09:12:00.155125 ignition[935]: INFO : files: createFilesystemsFiles: createFiles: op(5): [finished] writing link "/sysroot/etc/extensions/kubernetes.raw" -> "/opt/extensions/kubernetes/kubernetes-v1.31.0-x86-64.raw" 
Dec 13 09:12:00.155125 ignition[935]: INFO : files: createFilesystemsFiles: createFiles: op(6): [started] writing file "/sysroot/opt/extensions/kubernetes/kubernetes-v1.31.0-x86-64.raw" Dec 13 09:12:00.155125 ignition[935]: INFO : files: createFilesystemsFiles: createFiles: op(6): GET https://github.com/flatcar/sysext-bakery/releases/download/latest/kubernetes-v1.31.0-x86-64.raw: attempt #1 Dec 13 09:12:00.372985 systemd-networkd[751]: eth0: Gained IPv6LL Dec 13 09:12:00.621465 ignition[935]: INFO : files: createFilesystemsFiles: createFiles: op(6): GET result: OK Dec 13 09:12:00.957352 ignition[935]: INFO : files: createFilesystemsFiles: createFiles: op(6): [finished] writing file "/sysroot/opt/extensions/kubernetes/kubernetes-v1.31.0-x86-64.raw" Dec 13 09:12:00.958499 ignition[935]: INFO : files: createResultFile: createFiles: op(7): [started] writing file "/sysroot/etc/.ignition-result.json" Dec 13 09:12:00.958499 ignition[935]: INFO : files: createResultFile: createFiles: op(7): [finished] writing file "/sysroot/etc/.ignition-result.json" Dec 13 09:12:00.958499 ignition[935]: INFO : files: files passed Dec 13 09:12:00.958499 ignition[935]: INFO : Ignition finished successfully Dec 13 09:12:00.960112 systemd[1]: Finished ignition-files.service - Ignition (files). Dec 13 09:12:00.966168 systemd[1]: Starting ignition-quench.service - Ignition (record completion)... Dec 13 09:12:00.970145 systemd[1]: Starting initrd-setup-root-after-ignition.service - Root filesystem completion... Dec 13 09:12:00.993940 systemd[1]: ignition-quench.service: Deactivated successfully. Dec 13 09:12:00.994129 systemd[1]: Finished ignition-quench.service - Ignition (record completion). 
Dec 13 09:12:01.005037 initrd-setup-root-after-ignition[964]: grep: /sysroot/etc/flatcar/enabled-sysext.conf: No such file or directory Dec 13 09:12:01.005037 initrd-setup-root-after-ignition[964]: grep: /sysroot/usr/share/flatcar/enabled-sysext.conf: No such file or directory Dec 13 09:12:01.007868 initrd-setup-root-after-ignition[968]: grep: /sysroot/etc/flatcar/enabled-sysext.conf: No such file or directory Dec 13 09:12:01.010957 systemd[1]: Finished initrd-setup-root-after-ignition.service - Root filesystem completion. Dec 13 09:12:01.012574 systemd[1]: Reached target ignition-complete.target - Ignition Complete. Dec 13 09:12:01.018089 systemd[1]: Starting initrd-parse-etc.service - Mountpoints Configured in the Real Root... Dec 13 09:12:01.072887 systemd[1]: initrd-parse-etc.service: Deactivated successfully. Dec 13 09:12:01.073083 systemd[1]: Finished initrd-parse-etc.service - Mountpoints Configured in the Real Root. Dec 13 09:12:01.074516 systemd[1]: Reached target initrd-fs.target - Initrd File Systems. Dec 13 09:12:01.075202 systemd[1]: Reached target initrd.target - Initrd Default Target. Dec 13 09:12:01.076174 systemd[1]: dracut-mount.service - dracut mount hook was skipped because no trigger condition checks were met. Dec 13 09:12:01.083113 systemd[1]: Starting dracut-pre-pivot.service - dracut pre-pivot and cleanup hook... Dec 13 09:12:01.107120 systemd[1]: Finished dracut-pre-pivot.service - dracut pre-pivot and cleanup hook. Dec 13 09:12:01.114049 systemd[1]: Starting initrd-cleanup.service - Cleaning Up and Shutting Down Daemons... Dec 13 09:12:01.131750 systemd[1]: Stopped target nss-lookup.target - Host and Network Name Lookups. Dec 13 09:12:01.133259 systemd[1]: Stopped target remote-cryptsetup.target - Remote Encrypted Volumes. Dec 13 09:12:01.134905 systemd[1]: Stopped target timers.target - Timer Units. Dec 13 09:12:01.136074 systemd[1]: dracut-pre-pivot.service: Deactivated successfully. 
Dec 13 09:12:01.136297 systemd[1]: Stopped dracut-pre-pivot.service - dracut pre-pivot and cleanup hook.
Dec 13 09:12:01.137911 systemd[1]: Stopped target initrd.target - Initrd Default Target.
Dec 13 09:12:01.138664 systemd[1]: Stopped target basic.target - Basic System.
Dec 13 09:12:01.139673 systemd[1]: Stopped target ignition-complete.target - Ignition Complete.
Dec 13 09:12:01.140619 systemd[1]: Stopped target ignition-diskful.target - Ignition Boot Disk Setup.
Dec 13 09:12:01.141214 systemd-networkd[751]: eth1: Gained IPv6LL
Dec 13 09:12:01.143128 systemd[1]: Stopped target initrd-root-device.target - Initrd Root Device.
Dec 13 09:12:01.144061 systemd[1]: Stopped target remote-fs.target - Remote File Systems.
Dec 13 09:12:01.145050 systemd[1]: Stopped target remote-fs-pre.target - Preparation for Remote File Systems.
Dec 13 09:12:01.145895 systemd[1]: Stopped target sysinit.target - System Initialization.
Dec 13 09:12:01.146779 systemd[1]: Stopped target local-fs.target - Local File Systems.
Dec 13 09:12:01.147960 systemd[1]: Stopped target swap.target - Swaps.
Dec 13 09:12:01.148887 systemd[1]: dracut-pre-mount.service: Deactivated successfully.
Dec 13 09:12:01.149107 systemd[1]: Stopped dracut-pre-mount.service - dracut pre-mount hook.
Dec 13 09:12:01.150323 systemd[1]: Stopped target cryptsetup.target - Local Encrypted Volumes.
Dec 13 09:12:01.151646 systemd[1]: Stopped target cryptsetup-pre.target - Local Encrypted Volumes (Pre).
Dec 13 09:12:01.152594 systemd[1]: clevis-luks-askpass.path: Deactivated successfully.
Dec 13 09:12:01.153700 systemd[1]: Stopped clevis-luks-askpass.path - Forward Password Requests to Clevis Directory Watch.
Dec 13 09:12:01.155171 systemd[1]: dracut-initqueue.service: Deactivated successfully.
Dec 13 09:12:01.155439 systemd[1]: Stopped dracut-initqueue.service - dracut initqueue hook.
Dec 13 09:12:01.157305 systemd[1]: initrd-setup-root-after-ignition.service: Deactivated successfully.
Dec 13 09:12:01.157522 systemd[1]: Stopped initrd-setup-root-after-ignition.service - Root filesystem completion.
Dec 13 09:12:01.158983 systemd[1]: ignition-files.service: Deactivated successfully.
Dec 13 09:12:01.159214 systemd[1]: Stopped ignition-files.service - Ignition (files).
Dec 13 09:12:01.159997 systemd[1]: flatcar-metadata-hostname.service: Deactivated successfully.
Dec 13 09:12:01.160175 systemd[1]: Stopped flatcar-metadata-hostname.service - Flatcar Metadata Hostname Agent.
Dec 13 09:12:01.173262 systemd[1]: Stopping ignition-mount.service - Ignition (mount)...
Dec 13 09:12:01.178324 systemd[1]: Stopping sysroot-boot.service - /sysroot/boot...
Dec 13 09:12:01.179575 systemd[1]: systemd-udev-trigger.service: Deactivated successfully.
Dec 13 09:12:01.182158 systemd[1]: Stopped systemd-udev-trigger.service - Coldplug All udev Devices.
Dec 13 09:12:01.185156 systemd[1]: dracut-pre-trigger.service: Deactivated successfully.
Dec 13 09:12:01.186220 systemd[1]: Stopped dracut-pre-trigger.service - dracut pre-trigger hook.
Dec 13 09:12:01.197414 systemd[1]: initrd-cleanup.service: Deactivated successfully.
Dec 13 09:12:01.197786 systemd[1]: Finished initrd-cleanup.service - Cleaning Up and Shutting Down Daemons.
Dec 13 09:12:01.205798 ignition[988]: INFO : Ignition 2.19.0
Dec 13 09:12:01.205798 ignition[988]: INFO : Stage: umount
Dec 13 09:12:01.205798 ignition[988]: INFO : no configs at "/usr/lib/ignition/base.d"
Dec 13 09:12:01.205798 ignition[988]: INFO : no config dir at "/usr/lib/ignition/base.platform.d/digitalocean"
Dec 13 09:12:01.213695 ignition[988]: INFO : umount: umount passed
Dec 13 09:12:01.213695 ignition[988]: INFO : Ignition finished successfully
Dec 13 09:12:01.217963 systemd[1]: ignition-mount.service: Deactivated successfully.
Dec 13 09:12:01.218969 systemd[1]: Stopped ignition-mount.service - Ignition (mount).
Dec 13 09:12:01.220438 systemd[1]: ignition-disks.service: Deactivated successfully.
Dec 13 09:12:01.220518 systemd[1]: Stopped ignition-disks.service - Ignition (disks).
Dec 13 09:12:01.222082 systemd[1]: ignition-kargs.service: Deactivated successfully.
Dec 13 09:12:01.222192 systemd[1]: Stopped ignition-kargs.service - Ignition (kargs).
Dec 13 09:12:01.225047 systemd[1]: ignition-fetch.service: Deactivated successfully.
Dec 13 09:12:01.225130 systemd[1]: Stopped ignition-fetch.service - Ignition (fetch).
Dec 13 09:12:01.226652 systemd[1]: Stopped target network.target - Network.
Dec 13 09:12:01.227121 systemd[1]: ignition-fetch-offline.service: Deactivated successfully.
Dec 13 09:12:01.227209 systemd[1]: Stopped ignition-fetch-offline.service - Ignition (fetch-offline).
Dec 13 09:12:01.227726 systemd[1]: Stopped target paths.target - Path Units.
Dec 13 09:12:01.230248 systemd[1]: systemd-ask-password-console.path: Deactivated successfully.
Dec 13 09:12:01.234876 systemd[1]: Stopped systemd-ask-password-console.path - Dispatch Password Requests to Console Directory Watch.
Dec 13 09:12:01.235503 systemd[1]: Stopped target slices.target - Slice Units.
Dec 13 09:12:01.236027 systemd[1]: Stopped target sockets.target - Socket Units.
Dec 13 09:12:01.236640 systemd[1]: iscsid.socket: Deactivated successfully.
Dec 13 09:12:01.236728 systemd[1]: Closed iscsid.socket - Open-iSCSI iscsid Socket.
Dec 13 09:12:01.239467 systemd[1]: iscsiuio.socket: Deactivated successfully.
Dec 13 09:12:01.239541 systemd[1]: Closed iscsiuio.socket - Open-iSCSI iscsiuio Socket.
Dec 13 09:12:01.240915 systemd[1]: ignition-setup.service: Deactivated successfully.
Dec 13 09:12:01.241012 systemd[1]: Stopped ignition-setup.service - Ignition (setup).
Dec 13 09:12:01.242199 systemd[1]: ignition-setup-pre.service: Deactivated successfully.
Dec 13 09:12:01.242280 systemd[1]: Stopped ignition-setup-pre.service - Ignition env setup.
Dec 13 09:12:01.243416 systemd[1]: Stopping systemd-networkd.service - Network Configuration...
Dec 13 09:12:01.245012 systemd[1]: Stopping systemd-resolved.service - Network Name Resolution...
Dec 13 09:12:01.250159 systemd[1]: sysroot-boot.mount: Deactivated successfully.
Dec 13 09:12:01.251298 systemd[1]: sysroot-boot.service: Deactivated successfully.
Dec 13 09:12:01.251632 systemd[1]: Stopped sysroot-boot.service - /sysroot/boot.
Dec 13 09:12:01.251728 systemd-networkd[751]: eth0: DHCPv6 lease lost
Dec 13 09:12:01.253338 systemd[1]: systemd-resolved.service: Deactivated successfully.
Dec 13 09:12:01.253541 systemd[1]: Stopped systemd-resolved.service - Network Name Resolution.
Dec 13 09:12:01.254109 systemd-networkd[751]: eth1: DHCPv6 lease lost
Dec 13 09:12:01.258684 systemd[1]: systemd-networkd.service: Deactivated successfully.
Dec 13 09:12:01.259203 systemd[1]: Stopped systemd-networkd.service - Network Configuration.
Dec 13 09:12:01.263806 systemd[1]: systemd-networkd.socket: Deactivated successfully.
Dec 13 09:12:01.263898 systemd[1]: Closed systemd-networkd.socket - Network Service Netlink Socket.
Dec 13 09:12:01.264954 systemd[1]: initrd-setup-root.service: Deactivated successfully.
Dec 13 09:12:01.265064 systemd[1]: Stopped initrd-setup-root.service - Root filesystem setup.
Dec 13 09:12:01.275019 systemd[1]: Stopping network-cleanup.service - Network Cleanup...
Dec 13 09:12:01.275504 systemd[1]: parse-ip-for-networkd.service: Deactivated successfully.
Dec 13 09:12:01.275614 systemd[1]: Stopped parse-ip-for-networkd.service - Write systemd-networkd units from cmdline.
Dec 13 09:12:01.276482 systemd[1]: systemd-sysctl.service: Deactivated successfully.
Dec 13 09:12:01.276554 systemd[1]: Stopped systemd-sysctl.service - Apply Kernel Variables.
Dec 13 09:12:01.277591 systemd[1]: systemd-modules-load.service: Deactivated successfully.
Dec 13 09:12:01.277673 systemd[1]: Stopped systemd-modules-load.service - Load Kernel Modules.
Dec 13 09:12:01.281084 systemd[1]: systemd-tmpfiles-setup.service: Deactivated successfully.
Dec 13 09:12:01.281199 systemd[1]: Stopped systemd-tmpfiles-setup.service - Create System Files and Directories.
Dec 13 09:12:01.282025 systemd[1]: Stopping systemd-udevd.service - Rule-based Manager for Device Events and Files...
Dec 13 09:12:01.298298 systemd[1]: systemd-udevd.service: Deactivated successfully.
Dec 13 09:12:01.298537 systemd[1]: Stopped systemd-udevd.service - Rule-based Manager for Device Events and Files.
Dec 13 09:12:01.301613 systemd[1]: network-cleanup.service: Deactivated successfully.
Dec 13 09:12:01.301912 systemd[1]: Stopped network-cleanup.service - Network Cleanup.
Dec 13 09:12:01.303357 systemd[1]: systemd-udevd-control.socket: Deactivated successfully.
Dec 13 09:12:01.303464 systemd[1]: Closed systemd-udevd-control.socket - udev Control Socket.
Dec 13 09:12:01.304564 systemd[1]: systemd-udevd-kernel.socket: Deactivated successfully.
Dec 13 09:12:01.304615 systemd[1]: Closed systemd-udevd-kernel.socket - udev Kernel Socket.
Dec 13 09:12:01.305368 systemd[1]: dracut-pre-udev.service: Deactivated successfully.
Dec 13 09:12:01.305424 systemd[1]: Stopped dracut-pre-udev.service - dracut pre-udev hook.
Dec 13 09:12:01.306846 systemd[1]: dracut-cmdline.service: Deactivated successfully.
Dec 13 09:12:01.306933 systemd[1]: Stopped dracut-cmdline.service - dracut cmdline hook.
Dec 13 09:12:01.308304 systemd[1]: dracut-cmdline-ask.service: Deactivated successfully.
Dec 13 09:12:01.308390 systemd[1]: Stopped dracut-cmdline-ask.service - dracut ask for additional cmdline parameters.
Dec 13 09:12:01.323177 systemd[1]: Starting initrd-udevadm-cleanup-db.service - Cleanup udev Database...
Dec 13 09:12:01.324749 systemd[1]: systemd-tmpfiles-setup-dev.service: Deactivated successfully.
Dec 13 09:12:01.324896 systemd[1]: Stopped systemd-tmpfiles-setup-dev.service - Create Static Device Nodes in /dev.
Dec 13 09:12:01.326774 systemd[1]: systemd-tmpfiles-setup-dev-early.service: Deactivated successfully.
Dec 13 09:12:01.326900 systemd[1]: Stopped systemd-tmpfiles-setup-dev-early.service - Create Static Device Nodes in /dev gracefully.
Dec 13 09:12:01.328653 systemd[1]: kmod-static-nodes.service: Deactivated successfully.
Dec 13 09:12:01.328811 systemd[1]: Stopped kmod-static-nodes.service - Create List of Static Device Nodes.
Dec 13 09:12:01.330402 systemd[1]: systemd-vconsole-setup.service: Deactivated successfully.
Dec 13 09:12:01.330506 systemd[1]: Stopped systemd-vconsole-setup.service - Virtual Console Setup.
Dec 13 09:12:01.335136 systemd[1]: initrd-udevadm-cleanup-db.service: Deactivated successfully.
Dec 13 09:12:01.335430 systemd[1]: Finished initrd-udevadm-cleanup-db.service - Cleanup udev Database.
Dec 13 09:12:01.338046 systemd[1]: Reached target initrd-switch-root.target - Switch Root.
Dec 13 09:12:01.344181 systemd[1]: Starting initrd-switch-root.service - Switch Root...
Dec 13 09:12:01.373132 systemd[1]: Switching root.
Dec 13 09:12:01.405880 systemd-journald[183]: Journal stopped
Dec 13 09:12:02.856609 systemd-journald[183]: Received SIGTERM from PID 1 (systemd).
Dec 13 09:12:02.858843 kernel: SELinux: policy capability network_peer_controls=1
Dec 13 09:12:02.858915 kernel: SELinux: policy capability open_perms=1
Dec 13 09:12:02.858949 kernel: SELinux: policy capability extended_socket_class=1
Dec 13 09:12:02.858969 kernel: SELinux: policy capability always_check_network=0
Dec 13 09:12:02.858986 kernel: SELinux: policy capability cgroup_seclabel=1
Dec 13 09:12:02.858999 kernel: SELinux: policy capability nnp_nosuid_transition=1
Dec 13 09:12:02.859011 kernel: SELinux: policy capability genfs_seclabel_symlinks=0
Dec 13 09:12:02.859029 kernel: SELinux: policy capability ioctl_skip_cloexec=0
Dec 13 09:12:02.859054 kernel: audit: type=1403 audit(1734081121.585:2): auid=4294967295 ses=4294967295 lsm=selinux res=1
Dec 13 09:12:02.859072 systemd[1]: Successfully loaded SELinux policy in 43.482ms.
Dec 13 09:12:02.859093 systemd[1]: Relabeled /dev, /dev/shm, /run, /sys/fs/cgroup in 21.177ms.
Dec 13 09:12:02.859114 systemd[1]: systemd 255 running in system mode (+PAM +AUDIT +SELINUX -APPARMOR +IMA +SMACK +SECCOMP +GCRYPT -GNUTLS +OPENSSL -ACL +BLKID +CURL +ELFUTILS -FIDO2 +IDN2 -IDN +IPTC +KMOD +LIBCRYPTSETUP +LIBFDISK +PCRE2 -PWQUALITY -P11KIT -QRENCODE +TPM2 +BZIP2 +LZ4 +XZ +ZLIB +ZSTD -BPF_FRAMEWORK -XKBCOMMON +UTMP -SYSVINIT default-hierarchy=unified)
Dec 13 09:12:02.859136 systemd[1]: Detected virtualization kvm.
Dec 13 09:12:02.859151 systemd[1]: Detected architecture x86-64.
Dec 13 09:12:02.859167 systemd[1]: Detected first boot.
Dec 13 09:12:02.859192 systemd[1]: Hostname set to .
Dec 13 09:12:02.859210 systemd[1]: Initializing machine ID from VM UUID.
Dec 13 09:12:02.859230 zram_generator::config[1030]: No configuration found.
Dec 13 09:12:02.859255 systemd[1]: Populated /etc with preset unit settings.
Dec 13 09:12:02.859274 systemd[1]: initrd-switch-root.service: Deactivated successfully.
Dec 13 09:12:02.859288 systemd[1]: Stopped initrd-switch-root.service - Switch Root.
Dec 13 09:12:02.859301 systemd[1]: systemd-journald.service: Scheduled restart job, restart counter is at 1.
Dec 13 09:12:02.859316 systemd[1]: Created slice system-addon\x2dconfig.slice - Slice /system/addon-config.
Dec 13 09:12:02.859333 systemd[1]: Created slice system-addon\x2drun.slice - Slice /system/addon-run.
Dec 13 09:12:02.859347 systemd[1]: Created slice system-getty.slice - Slice /system/getty.
Dec 13 09:12:02.859359 systemd[1]: Created slice system-modprobe.slice - Slice /system/modprobe.
Dec 13 09:12:02.859372 systemd[1]: Created slice system-serial\x2dgetty.slice - Slice /system/serial-getty.
Dec 13 09:12:02.859386 systemd[1]: Created slice system-system\x2dcloudinit.slice - Slice /system/system-cloudinit.
Dec 13 09:12:02.859399 systemd[1]: Created slice system-systemd\x2dfsck.slice - Slice /system/systemd-fsck.
Dec 13 09:12:02.859424 systemd[1]: Created slice user.slice - User and Session Slice.
Dec 13 09:12:02.859443 systemd[1]: Started clevis-luks-askpass.path - Forward Password Requests to Clevis Directory Watch.
Dec 13 09:12:02.859460 systemd[1]: Started systemd-ask-password-console.path - Dispatch Password Requests to Console Directory Watch.
Dec 13 09:12:02.859486 systemd[1]: Started systemd-ask-password-wall.path - Forward Password Requests to Wall Directory Watch.
Dec 13 09:12:02.859500 systemd[1]: Set up automount boot.automount - Boot partition Automount Point.
Dec 13 09:12:02.859513 systemd[1]: Set up automount proc-sys-fs-binfmt_misc.automount - Arbitrary Executable File Formats File System Automount Point.
Dec 13 09:12:02.859533 systemd[1]: Expecting device dev-disk-by\x2dlabel-OEM.device - /dev/disk/by-label/OEM...
Dec 13 09:12:02.859557 systemd[1]: Expecting device dev-ttyS0.device - /dev/ttyS0...
Dec 13 09:12:02.859577 systemd[1]: Reached target cryptsetup-pre.target - Local Encrypted Volumes (Pre).
Dec 13 09:12:02.859591 systemd[1]: Stopped target initrd-switch-root.target - Switch Root.
Dec 13 09:12:02.859605 systemd[1]: Stopped target initrd-fs.target - Initrd File Systems.
Dec 13 09:12:02.859621 systemd[1]: Stopped target initrd-root-fs.target - Initrd Root File System.
Dec 13 09:12:02.859641 systemd[1]: Reached target integritysetup.target - Local Integrity Protected Volumes.
Dec 13 09:12:02.859661 systemd[1]: Reached target remote-cryptsetup.target - Remote Encrypted Volumes.
Dec 13 09:12:02.859682 systemd[1]: Reached target remote-fs.target - Remote File Systems.
Dec 13 09:12:02.859701 systemd[1]: Reached target slices.target - Slice Units.
Dec 13 09:12:02.859722 systemd[1]: Reached target swap.target - Swaps.
Dec 13 09:12:02.860914 systemd[1]: Reached target veritysetup.target - Local Verity Protected Volumes.
Dec 13 09:12:02.860996 systemd[1]: Listening on systemd-coredump.socket - Process Core Dump Socket.
Dec 13 09:12:02.861020 systemd[1]: Listening on systemd-networkd.socket - Network Service Netlink Socket.
Dec 13 09:12:02.861040 systemd[1]: Listening on systemd-udevd-control.socket - udev Control Socket.
Dec 13 09:12:02.861059 systemd[1]: Listening on systemd-udevd-kernel.socket - udev Kernel Socket.
Dec 13 09:12:02.861076 systemd[1]: Listening on systemd-userdbd.socket - User Database Manager Socket.
Dec 13 09:12:02.861095 systemd[1]: Mounting dev-hugepages.mount - Huge Pages File System...
Dec 13 09:12:02.861115 systemd[1]: Mounting dev-mqueue.mount - POSIX Message Queue File System...
Dec 13 09:12:02.861133 systemd[1]: Mounting media.mount - External Media Directory...
Dec 13 09:12:02.861153 systemd[1]: proc-xen.mount - /proc/xen was skipped because of an unmet condition check (ConditionVirtualization=xen).
Dec 13 09:12:02.861200 systemd[1]: Mounting sys-kernel-debug.mount - Kernel Debug File System...
Dec 13 09:12:02.861222 systemd[1]: Mounting sys-kernel-tracing.mount - Kernel Trace File System...
Dec 13 09:12:02.861242 systemd[1]: Mounting tmp.mount - Temporary Directory /tmp...
Dec 13 09:12:02.861264 systemd[1]: var-lib-machines.mount - Virtual Machine and Container Storage (Compatibility) was skipped because of an unmet condition check (ConditionPathExists=/var/lib/machines.raw).
Dec 13 09:12:02.861282 systemd[1]: Reached target machines.target - Containers.
Dec 13 09:12:02.861301 systemd[1]: Starting flatcar-tmpfiles.service - Create missing system files...
Dec 13 09:12:02.861319 systemd[1]: ignition-delete-config.service - Ignition (delete config) was skipped because no trigger condition checks were met.
Dec 13 09:12:02.861338 systemd[1]: Starting kmod-static-nodes.service - Create List of Static Device Nodes...
Dec 13 09:12:02.861358 systemd[1]: Starting modprobe@configfs.service - Load Kernel Module configfs...
Dec 13 09:12:02.861383 systemd[1]: Starting modprobe@dm_mod.service - Load Kernel Module dm_mod...
Dec 13 09:12:02.861403 systemd[1]: Starting modprobe@drm.service - Load Kernel Module drm...
Dec 13 09:12:02.861422 systemd[1]: Starting modprobe@efi_pstore.service - Load Kernel Module efi_pstore...
Dec 13 09:12:02.861441 systemd[1]: Starting modprobe@fuse.service - Load Kernel Module fuse...
Dec 13 09:12:02.861462 systemd[1]: Starting modprobe@loop.service - Load Kernel Module loop...
Dec 13 09:12:02.861482 systemd[1]: setup-nsswitch.service - Create /etc/nsswitch.conf was skipped because of an unmet condition check (ConditionPathExists=!/etc/nsswitch.conf).
Dec 13 09:12:02.861501 systemd[1]: systemd-fsck-root.service: Deactivated successfully.
Dec 13 09:12:02.861522 systemd[1]: Stopped systemd-fsck-root.service - File System Check on Root Device.
Dec 13 09:12:02.861550 systemd[1]: systemd-fsck-usr.service: Deactivated successfully.
Dec 13 09:12:02.861570 systemd[1]: Stopped systemd-fsck-usr.service.
Dec 13 09:12:02.861592 systemd[1]: Starting systemd-journald.service - Journal Service...
Dec 13 09:12:02.861609 systemd[1]: Starting systemd-modules-load.service - Load Kernel Modules...
Dec 13 09:12:02.861628 systemd[1]: Starting systemd-network-generator.service - Generate network units from Kernel command line...
Dec 13 09:12:02.861645 systemd[1]: Starting systemd-remount-fs.service - Remount Root and Kernel File Systems...
Dec 13 09:12:02.861664 kernel: fuse: init (API version 7.39)
Dec 13 09:12:02.861686 systemd[1]: Starting systemd-udev-trigger.service - Coldplug All udev Devices...
Dec 13 09:12:02.861708 systemd[1]: verity-setup.service: Deactivated successfully.
Dec 13 09:12:02.861726 systemd[1]: Stopped verity-setup.service.
Dec 13 09:12:02.869038 systemd[1]: xenserver-pv-version.service - Set fake PV driver version for XenServer was skipped because of an unmet condition check (ConditionVirtualization=xen).
Dec 13 09:12:02.869123 systemd[1]: Mounted dev-hugepages.mount - Huge Pages File System.
Dec 13 09:12:02.869149 systemd[1]: Mounted dev-mqueue.mount - POSIX Message Queue File System.
Dec 13 09:12:02.869171 systemd[1]: Mounted media.mount - External Media Directory.
Dec 13 09:12:02.869226 systemd[1]: Mounted sys-kernel-debug.mount - Kernel Debug File System.
Dec 13 09:12:02.869247 systemd[1]: Mounted sys-kernel-tracing.mount - Kernel Trace File System.
Dec 13 09:12:02.869265 systemd[1]: Mounted tmp.mount - Temporary Directory /tmp.
Dec 13 09:12:02.869285 systemd[1]: Finished kmod-static-nodes.service - Create List of Static Device Nodes.
Dec 13 09:12:02.869314 systemd[1]: modprobe@configfs.service: Deactivated successfully.
Dec 13 09:12:02.869339 systemd[1]: Finished modprobe@configfs.service - Load Kernel Module configfs.
Dec 13 09:12:02.869360 systemd[1]: modprobe@dm_mod.service: Deactivated successfully.
Dec 13 09:12:02.869379 systemd[1]: Finished modprobe@dm_mod.service - Load Kernel Module dm_mod.
Dec 13 09:12:02.869403 systemd[1]: modprobe@efi_pstore.service: Deactivated successfully.
Dec 13 09:12:02.869425 systemd[1]: Finished modprobe@efi_pstore.service - Load Kernel Module efi_pstore.
Dec 13 09:12:02.869446 systemd[1]: modprobe@fuse.service: Deactivated successfully.
Dec 13 09:12:02.869466 systemd[1]: Finished modprobe@fuse.service - Load Kernel Module fuse.
Dec 13 09:12:02.869487 systemd[1]: Finished systemd-modules-load.service - Load Kernel Modules.
Dec 13 09:12:02.869507 kernel: loop: module loaded
Dec 13 09:12:02.869538 systemd[1]: Finished systemd-network-generator.service - Generate network units from Kernel command line.
Dec 13 09:12:02.869562 systemd[1]: Finished flatcar-tmpfiles.service - Create missing system files.
Dec 13 09:12:02.869582 systemd[1]: modprobe@loop.service: Deactivated successfully.
Dec 13 09:12:02.869603 systemd[1]: Finished modprobe@loop.service - Load Kernel Module loop.
Dec 13 09:12:02.869638 systemd[1]: Finished systemd-remount-fs.service - Remount Root and Kernel File Systems.
Dec 13 09:12:02.869659 systemd[1]: Reached target network-pre.target - Preparation for Network.
Dec 13 09:12:02.869679 systemd[1]: Mounting sys-fs-fuse-connections.mount - FUSE Control File System...
Dec 13 09:12:02.869700 systemd[1]: Mounting sys-kernel-config.mount - Kernel Configuration File System...
Dec 13 09:12:02.869855 systemd-journald[1106]: Collecting audit messages is disabled.
Dec 13 09:12:02.869924 systemd[1]: remount-root.service - Remount Root File System was skipped because of an unmet condition check (ConditionPathIsReadWrite=!/).
Dec 13 09:12:02.869949 systemd[1]: Reached target local-fs.target - Local File Systems.
Dec 13 09:12:02.869972 systemd[1]: Listening on systemd-sysext.socket - System Extension Image Management (Varlink).
Dec 13 09:12:02.869995 systemd-journald[1106]: Journal started
Dec 13 09:12:02.870034 systemd-journald[1106]: Runtime Journal (/run/log/journal/a4b84da1807f44f1ae8e2624608fef3b) is 4.9M, max 39.3M, 34.4M free.
Dec 13 09:12:02.396365 systemd[1]: Queued start job for default target multi-user.target.
Dec 13 09:12:02.421449 systemd[1]: Unnecessary job was removed for dev-vda6.device - /dev/vda6.
Dec 13 09:12:02.422272 systemd[1]: systemd-journald.service: Deactivated successfully.
Dec 13 09:12:02.902784 systemd[1]: Starting dracut-shutdown.service - Restore /run/initramfs on shutdown...
Dec 13 09:12:02.910879 systemd[1]: Starting ldconfig.service - Rebuild Dynamic Linker Cache...
Dec 13 09:12:02.910977 systemd[1]: systemd-binfmt.service - Set Up Additional Binary Formats was skipped because no trigger condition checks were met.
Dec 13 09:12:02.917795 kernel: ACPI: bus type drm_connector registered
Dec 13 09:12:02.926895 systemd[1]: Starting systemd-hwdb-update.service - Rebuild Hardware Database...
Dec 13 09:12:02.927023 systemd[1]: systemd-pstore.service - Platform Persistent Storage Archival was skipped because of an unmet condition check (ConditionDirectoryNotEmpty=/sys/fs/pstore).
Dec 13 09:12:02.940833 systemd[1]: Starting systemd-random-seed.service - Load/Save OS Random Seed...
Dec 13 09:12:02.944165 systemd[1]: systemd-repart.service - Repartition Root Disk was skipped because no trigger condition checks were met.
Dec 13 09:12:02.951771 systemd[1]: Starting systemd-sysctl.service - Apply Kernel Variables...
Dec 13 09:12:02.984773 systemd[1]: Starting systemd-sysext.service - Merge System Extension Images into /usr/ and /opt/...
Dec 13 09:12:03.010681 systemd[1]: Starting systemd-tmpfiles-setup-dev-early.service - Create Static Device Nodes in /dev gracefully...
Dec 13 09:12:03.010803 systemd[1]: Started systemd-journald.service - Journal Service.
Dec 13 09:12:03.012010 systemd[1]: modprobe@drm.service: Deactivated successfully.
Dec 13 09:12:03.012247 systemd[1]: Finished modprobe@drm.service - Load Kernel Module drm.
Dec 13 09:12:03.013104 systemd[1]: Mounted sys-fs-fuse-connections.mount - FUSE Control File System.
Dec 13 09:12:03.013802 systemd[1]: Mounted sys-kernel-config.mount - Kernel Configuration File System.
Dec 13 09:12:03.014559 systemd[1]: Finished dracut-shutdown.service - Restore /run/initramfs on shutdown.
Dec 13 09:12:03.034331 systemd[1]: Finished systemd-random-seed.service - Load/Save OS Random Seed.
Dec 13 09:12:03.066094 systemd[1]: Reached target first-boot-complete.target - First Boot Complete.
Dec 13 09:12:03.083733 systemd[1]: Starting systemd-journal-flush.service - Flush Journal to Persistent Storage...
Dec 13 09:12:03.095315 systemd[1]: Starting systemd-machine-id-commit.service - Commit a transient machine-id on disk...
Dec 13 09:12:03.096385 systemd[1]: Finished systemd-udev-trigger.service - Coldplug All udev Devices.
Dec 13 09:12:03.101437 systemd[1]: Finished systemd-sysctl.service - Apply Kernel Variables.
Dec 13 09:12:03.113024 kernel: loop0: detected capacity change from 0 to 205544
Dec 13 09:12:03.117990 systemd[1]: Starting systemd-udev-settle.service - Wait for udev To Complete Device Initialization...
Dec 13 09:12:03.153061 kernel: squashfs: version 4.0 (2009/01/31) Phillip Lougher
Dec 13 09:12:03.163914 systemd-journald[1106]: Time spent on flushing to /var/log/journal/a4b84da1807f44f1ae8e2624608fef3b is 105.161ms for 980 entries.
Dec 13 09:12:03.163914 systemd-journald[1106]: System Journal (/var/log/journal/a4b84da1807f44f1ae8e2624608fef3b) is 8.0M, max 195.6M, 187.6M free.
Dec 13 09:12:03.281192 systemd-journald[1106]: Received client request to flush runtime journal.
Dec 13 09:12:03.281400 kernel: loop1: detected capacity change from 0 to 140768
Dec 13 09:12:03.208425 systemd[1]: etc-machine\x2did.mount: Deactivated successfully.
Dec 13 09:12:03.214329 systemd[1]: Finished systemd-machine-id-commit.service - Commit a transient machine-id on disk.
Dec 13 09:12:03.225292 systemd-tmpfiles[1133]: ACLs are not supported, ignoring.
Dec 13 09:12:03.225318 systemd-tmpfiles[1133]: ACLs are not supported, ignoring.
Dec 13 09:12:03.225980 udevadm[1162]: systemd-udev-settle.service is deprecated. Please fix lvm2-activation-early.service, lvm2-activation.service not to pull it in.
Dec 13 09:12:03.266278 systemd[1]: Finished systemd-tmpfiles-setup-dev-early.service - Create Static Device Nodes in /dev gracefully.
Dec 13 09:12:03.285435 systemd[1]: Starting systemd-sysusers.service - Create System Users...
Dec 13 09:12:03.288053 kernel: loop2: detected capacity change from 0 to 142488
Dec 13 09:12:03.288400 systemd[1]: Finished systemd-journal-flush.service - Flush Journal to Persistent Storage.
Dec 13 09:12:03.356792 kernel: loop3: detected capacity change from 0 to 8
Dec 13 09:12:03.391362 kernel: loop4: detected capacity change from 0 to 205544
Dec 13 09:12:03.401647 systemd[1]: Finished systemd-sysusers.service - Create System Users.
Dec 13 09:12:03.413104 systemd[1]: Starting systemd-tmpfiles-setup-dev.service - Create Static Device Nodes in /dev...
Dec 13 09:12:03.436808 kernel: loop5: detected capacity change from 0 to 140768
Dec 13 09:12:03.470070 kernel: loop6: detected capacity change from 0 to 142488
Dec 13 09:12:03.504946 kernel: loop7: detected capacity change from 0 to 8
Dec 13 09:12:03.510596 (sd-merge)[1174]: Using extensions 'containerd-flatcar', 'docker-flatcar', 'kubernetes', 'oem-digitalocean'.
Dec 13 09:12:03.511616 (sd-merge)[1174]: Merged extensions into '/usr'.
Dec 13 09:12:03.539114 systemd[1]: Reloading requested from client PID 1132 ('systemd-sysext') (unit systemd-sysext.service)...
Dec 13 09:12:03.539152 systemd[1]: Reloading...
Dec 13 09:12:03.567258 systemd-tmpfiles[1176]: ACLs are not supported, ignoring.
Dec 13 09:12:03.567290 systemd-tmpfiles[1176]: ACLs are not supported, ignoring.
Dec 13 09:12:03.689792 zram_generator::config[1204]: No configuration found.
Dec 13 09:12:03.894880 ldconfig[1128]: /sbin/ldconfig: /lib/ld.so.conf is not an ELF file - it has the wrong magic bytes at the start.
Dec 13 09:12:04.015191 systemd[1]: /usr/lib/systemd/system/docker.socket:6: ListenStream= references a path below legacy directory /var/run/, updating /var/run/docker.sock → /run/docker.sock; please update the unit file accordingly.
Dec 13 09:12:04.097699 systemd[1]: Reloading finished in 557 ms.
Dec 13 09:12:04.148438 systemd[1]: Finished ldconfig.service - Rebuild Dynamic Linker Cache.
Dec 13 09:12:04.151108 systemd[1]: Finished systemd-tmpfiles-setup-dev.service - Create Static Device Nodes in /dev.
Dec 13 09:12:04.152805 systemd[1]: Finished systemd-sysext.service - Merge System Extension Images into /usr/ and /opt/.
Dec 13 09:12:04.172317 systemd[1]: Starting ensure-sysext.service...
Dec 13 09:12:04.182844 systemd[1]: Starting systemd-tmpfiles-setup.service - Create System Files and Directories...
Dec 13 09:12:04.205844 systemd[1]: Reloading requested from client PID 1248 ('systemctl') (unit ensure-sysext.service)...
Dec 13 09:12:04.205884 systemd[1]: Reloading...
Dec 13 09:12:04.258513 systemd-tmpfiles[1249]: /usr/lib/tmpfiles.d/provision.conf:20: Duplicate line for path "/root", ignoring.
Dec 13 09:12:04.258945 systemd-tmpfiles[1249]: /usr/lib/tmpfiles.d/systemd-flatcar.conf:6: Duplicate line for path "/var/log/journal", ignoring.
Dec 13 09:12:04.262012 systemd-tmpfiles[1249]: /usr/lib/tmpfiles.d/systemd.conf:29: Duplicate line for path "/var/lib/systemd", ignoring.
Dec 13 09:12:04.262383 systemd-tmpfiles[1249]: ACLs are not supported, ignoring.
Dec 13 09:12:04.262474 systemd-tmpfiles[1249]: ACLs are not supported, ignoring.
Dec 13 09:12:04.279571 systemd-tmpfiles[1249]: Detected autofs mount point /boot during canonicalization of boot.
Dec 13 09:12:04.279593 systemd-tmpfiles[1249]: Skipping /boot
Dec 13 09:12:04.307370 systemd-tmpfiles[1249]: Detected autofs mount point /boot during canonicalization of boot.
Dec 13 09:12:04.307394 systemd-tmpfiles[1249]: Skipping /boot
Dec 13 09:12:04.394776 zram_generator::config[1278]: No configuration found.
Dec 13 09:12:04.584677 systemd[1]: /usr/lib/systemd/system/docker.socket:6: ListenStream= references a path below legacy directory /var/run/, updating /var/run/docker.sock → /run/docker.sock; please update the unit file accordingly.
Dec 13 09:12:04.670820 systemd[1]: Reloading finished in 464 ms.
Dec 13 09:12:04.695258 systemd[1]: Finished systemd-tmpfiles-setup.service - Create System Files and Directories.
Dec 13 09:12:04.719257 systemd[1]: Starting audit-rules.service - Load Security Auditing Rules...
Dec 13 09:12:04.726204 systemd[1]: Starting clean-ca-certificates.service - Clean up broken links in /etc/ssl/certs...
Dec 13 09:12:04.732429 systemd[1]: Starting systemd-journal-catalog-update.service - Rebuild Journal Catalog...
Dec 13 09:12:04.746051 systemd[1]: Starting systemd-resolved.service - Network Name Resolution...
Dec 13 09:12:04.755682 systemd[1]: Starting systemd-update-utmp.service - Record System Boot/Shutdown in UTMP...
Dec 13 09:12:04.767342 systemd[1]: proc-xen.mount - /proc/xen was skipped because of an unmet condition check (ConditionVirtualization=xen).
Dec 13 09:12:04.767566 systemd[1]: ignition-delete-config.service - Ignition (delete config) was skipped because no trigger condition checks were met.
Dec 13 09:12:04.778147 systemd[1]: Starting modprobe@dm_mod.service - Load Kernel Module dm_mod...
Dec 13 09:12:04.786103 systemd[1]: Starting modprobe@efi_pstore.service - Load Kernel Module efi_pstore...
Dec 13 09:12:04.799860 systemd[1]: Starting modprobe@loop.service - Load Kernel Module loop...
Dec 13 09:12:04.801980 systemd[1]: systemd-binfmt.service - Set Up Additional Binary Formats was skipped because no trigger condition checks were met.
Dec 13 09:12:04.802162 systemd[1]: xenserver-pv-version.service - Set fake PV driver version for XenServer was skipped because of an unmet condition check (ConditionVirtualization=xen).
Dec 13 09:12:04.803194 systemd[1]: Finished systemd-hwdb-update.service - Rebuild Hardware Database.
Dec 13 09:12:04.819389 systemd[1]: Starting systemd-udevd.service - Rule-based Manager for Device Events and Files...
Dec 13 09:12:04.833849 systemd[1]: Starting systemd-userdbd.service - User Database Manager...
Dec 13 09:12:04.844044 systemd[1]: proc-xen.mount - /proc/xen was skipped because of an unmet condition check (ConditionVirtualization=xen).
Dec 13 09:12:04.844606 systemd[1]: ignition-delete-config.service - Ignition (delete config) was skipped because no trigger condition checks were met.
Dec 13 09:12:04.845094 systemd[1]: systemd-binfmt.service - Set Up Additional Binary Formats was skipped because no trigger condition checks were met.
Dec 13 09:12:04.845465 systemd[1]: xenserver-pv-version.service - Set fake PV driver version for XenServer was skipped because of an unmet condition check (ConditionVirtualization=xen).
Dec 13 09:12:04.849863 systemd[1]: Finished systemd-update-utmp.service - Record System Boot/Shutdown in UTMP.
Dec 13 09:12:04.851496 systemd[1]: modprobe@dm_mod.service: Deactivated successfully.
Dec 13 09:12:04.853042 systemd[1]: Finished modprobe@dm_mod.service - Load Kernel Module dm_mod.
Dec 13 09:12:04.857113 systemd[1]: modprobe@efi_pstore.service: Deactivated successfully.
Dec 13 09:12:04.857394 systemd[1]: Finished modprobe@efi_pstore.service - Load Kernel Module efi_pstore.
Dec 13 09:12:04.877610 systemd[1]: proc-xen.mount - /proc/xen was skipped because of an unmet condition check (ConditionVirtualization=xen).
Dec 13 09:12:04.878921 systemd[1]: ignition-delete-config.service - Ignition (delete config) was skipped because no trigger condition checks were met.
Dec 13 09:12:04.887539 systemd[1]: Starting modprobe@dm_mod.service - Load Kernel Module dm_mod...
Dec 13 09:12:04.901024 systemd[1]: Starting modprobe@drm.service - Load Kernel Module drm...
Dec 13 09:12:04.907555 systemd[1]: Starting modprobe@efi_pstore.service - Load Kernel Module efi_pstore...
Dec 13 09:12:04.909424 systemd[1]: systemd-binfmt.service - Set Up Additional Binary Formats was skipped because no trigger condition checks were met.
Dec 13 09:12:04.909730 systemd[1]: xenserver-pv-version.service - Set fake PV driver version for XenServer was skipped because of an unmet condition check (ConditionVirtualization=xen).
Dec 13 09:12:04.913808 systemd[1]: Finished clean-ca-certificates.service - Clean up broken links in /etc/ssl/certs.
Dec 13 09:12:04.920294 systemd[1]: modprobe@loop.service: Deactivated successfully.
Dec 13 09:12:04.920583 systemd[1]: Finished modprobe@loop.service - Load Kernel Module loop.
Dec 13 09:12:04.922719 systemd[1]: modprobe@dm_mod.service: Deactivated successfully.
Dec 13 09:12:04.924251 systemd[1]: Finished modprobe@dm_mod.service - Load Kernel Module dm_mod.
Dec 13 09:12:04.925529 systemd[1]: modprobe@drm.service: Deactivated successfully.
Dec 13 09:12:04.925703 systemd[1]: Finished modprobe@drm.service - Load Kernel Module drm.
Dec 13 09:12:04.928625 systemd[1]: modprobe@efi_pstore.service: Deactivated successfully.
Dec 13 09:12:04.929870 systemd[1]: Finished modprobe@efi_pstore.service - Load Kernel Module efi_pstore.
Dec 13 09:12:04.941351 systemd[1]: Finished ensure-sysext.service.
Dec 13 09:12:04.949094 systemd[1]: systemd-pstore.service - Platform Persistent Storage Archival was skipped because of an unmet condition check (ConditionDirectoryNotEmpty=/sys/fs/pstore).
Dec 13 09:12:04.949228 systemd[1]: systemd-repart.service - Repartition Root Disk was skipped because no trigger condition checks were met.
Dec 13 09:12:04.957164 systemd[1]: Starting systemd-timesyncd.service - Network Time Synchronization...
Dec 13 09:12:04.957719 systemd[1]: update-ca-certificates.service - Update CA bundle at /etc/ssl/certs/ca-certificates.crt was skipped because of an unmet condition check (ConditionPathIsSymbolicLink=!/etc/ssl/certs/ca-certificates.crt).
Dec 13 09:12:04.969855 systemd[1]: Finished systemd-journal-catalog-update.service - Rebuild Journal Catalog.
Dec 13 09:12:04.978102 systemd[1]: Starting systemd-update-done.service - Update is Completed...
Dec 13 09:12:04.999364 systemd-udevd[1330]: Using default interface naming scheme 'v255'.
Dec 13 09:12:05.023503 systemd[1]: Finished systemd-update-done.service - Update is Completed.
Dec 13 09:12:05.032612 augenrules[1361]: No rules
Dec 13 09:12:05.035714 systemd[1]: Finished audit-rules.service - Load Security Auditing Rules.
Dec 13 09:12:05.039211 systemd[1]: Started systemd-userdbd.service - User Database Manager.
Dec 13 09:12:05.049982 systemd[1]: Started systemd-udevd.service - Rule-based Manager for Device Events and Files.
Dec 13 09:12:05.058692 systemd[1]: Starting systemd-networkd.service - Network Configuration...
Dec 13 09:12:05.177337 systemd-resolved[1323]: Positive Trust Anchors:
Dec 13 09:12:05.177830 systemd-resolved[1323]: . IN DS 20326 8 2 e06d44b80b8f1d39a95c0b0d7c65d08458e880409bbc683457104237c7f8ec8d
Dec 13 09:12:05.177917 systemd-resolved[1323]: Negative trust anchors: home.arpa 10.in-addr.arpa 16.172.in-addr.arpa 17.172.in-addr.arpa 18.172.in-addr.arpa 19.172.in-addr.arpa 20.172.in-addr.arpa 21.172.in-addr.arpa 22.172.in-addr.arpa 23.172.in-addr.arpa 24.172.in-addr.arpa 25.172.in-addr.arpa 26.172.in-addr.arpa 27.172.in-addr.arpa 28.172.in-addr.arpa 29.172.in-addr.arpa 30.172.in-addr.arpa 31.172.in-addr.arpa 170.0.0.192.in-addr.arpa 171.0.0.192.in-addr.arpa 168.192.in-addr.arpa d.f.ip6.arpa ipv4only.arpa resolver.arpa corp home internal intranet lan local private test
Dec 13 09:12:05.183414 systemd-resolved[1323]: Using system hostname 'ci-4081.2.1-7-9f5b9bd84f'.
Dec 13 09:12:05.200769 systemd[1]: Started systemd-resolved.service - Network Name Resolution.
Dec 13 09:12:05.201598 systemd[1]: Reached target nss-lookup.target - Host and Network Name Lookups.
Dec 13 09:12:05.224304 systemd-networkd[1372]: lo: Link UP
Dec 13 09:12:05.224315 systemd-networkd[1372]: lo: Gained carrier
Dec 13 09:12:05.224412 systemd[1]: Started systemd-timesyncd.service - Network Time Synchronization.
Dec 13 09:12:05.225984 systemd[1]: Reached target time-set.target - System Time Set.
Dec 13 09:12:05.227324 systemd-networkd[1372]: Enumeration completed
Dec 13 09:12:05.227539 systemd[1]: Started systemd-networkd.service - Network Configuration.
Dec 13 09:12:05.227853 systemd-timesyncd[1351]: No network connectivity, watching for changes.
Dec 13 09:12:05.228330 systemd[1]: Reached target network.target - Network.
Dec 13 09:12:05.237078 systemd[1]: Starting systemd-networkd-wait-online.service - Wait for Network to be Configured...
Dec 13 09:12:05.294818 kernel: BTRFS info: devid 1 device path /dev/mapper/usr changed to /dev/dm-0 scanned by (udev-worker) (1387)
Dec 13 09:12:05.298505 systemd[1]: Mounting media-configdrive.mount - /media/configdrive...
Dec 13 09:12:05.299151 systemd[1]: proc-xen.mount - /proc/xen was skipped because of an unmet condition check (ConditionVirtualization=xen).
Dec 13 09:12:05.299316 systemd[1]: ignition-delete-config.service - Ignition (delete config) was skipped because no trigger condition checks were met.
Dec 13 09:12:05.318776 kernel: BTRFS warning: duplicate device /dev/vda3 devid 1 generation 41 scanned by (udev-worker) (1378)
Dec 13 09:12:05.308838 systemd[1]: Starting modprobe@dm_mod.service - Load Kernel Module dm_mod...
Dec 13 09:12:05.322084 systemd[1]: Starting modprobe@efi_pstore.service - Load Kernel Module efi_pstore...
Dec 13 09:12:05.327239 systemd[1]: Starting modprobe@loop.service - Load Kernel Module loop...
Dec 13 09:12:05.328944 systemd[1]: systemd-binfmt.service - Set Up Additional Binary Formats was skipped because no trigger condition checks were met.
Dec 13 09:12:05.328997 systemd[1]: update-ca-certificates.service - Update CA bundle at /etc/ssl/certs/ca-certificates.crt was skipped because of an unmet condition check (ConditionPathIsSymbolicLink=!/etc/ssl/certs/ca-certificates.crt).
Dec 13 09:12:05.329016 systemd[1]: xenserver-pv-version.service - Set fake PV driver version for XenServer was skipped because of an unmet condition check (ConditionVirtualization=xen).
Dec 13 09:12:05.329420 systemd[1]: Condition check resulted in dev-ttyS0.device - /dev/ttyS0 being skipped.
Dec 13 09:12:05.347694 systemd-networkd[1372]: eth0: Configuring with /run/systemd/network/10-76:e2:29:ff:78:bb.network.
Dec 13 09:12:05.358380 systemd-networkd[1372]: eth0: Link UP
Dec 13 09:12:05.359942 systemd-networkd[1372]: eth0: Gained carrier
Dec 13 09:12:05.360415 systemd[1]: modprobe@dm_mod.service: Deactivated successfully.
Dec 13 09:12:05.361183 systemd[1]: Finished modprobe@dm_mod.service - Load Kernel Module dm_mod.
Dec 13 09:12:05.366163 kernel: ISO 9660 Extensions: RRIP_1991A
Dec 13 09:12:05.372282 systemd-timesyncd[1351]: Network configuration changed, trying to establish connection.
Dec 13 09:12:05.373621 systemd[1]: Mounted media-configdrive.mount - /media/configdrive.
Dec 13 09:12:05.378379 kernel: BTRFS info: devid 1 device path /dev/dm-0 changed to /dev/mapper/usr scanned by (udev-worker) (1387)
Dec 13 09:12:05.384539 systemd[1]: modprobe@efi_pstore.service: Deactivated successfully.
Dec 13 09:12:05.385111 systemd[1]: Finished modprobe@efi_pstore.service - Load Kernel Module efi_pstore.
Dec 13 09:12:05.387866 systemd[1]: systemd-pstore.service - Platform Persistent Storage Archival was skipped because of an unmet condition check (ConditionDirectoryNotEmpty=/sys/fs/pstore).
Dec 13 09:12:05.394197 systemd[1]: modprobe@loop.service: Deactivated successfully.
Dec 13 09:12:05.394481 systemd[1]: Finished modprobe@loop.service - Load Kernel Module loop.
Dec 13 09:12:05.395671 systemd[1]: systemd-repart.service - Repartition Root Disk was skipped because no trigger condition checks were met.
Dec 13 09:12:05.479388 systemd[1]: Found device dev-disk-by\x2dlabel-OEM.device - /dev/disk/by-label/OEM.
Dec 13 09:12:05.482032 systemd-networkd[1372]: eth1: Configuring with /run/systemd/network/10-82:46:0d:5e:bc:bc.network.
Dec 13 09:12:05.488733 kernel: input: Power Button as /devices/LNXSYSTM:00/LNXPWRBN:00/input/input3
Dec 13 09:12:05.485388 systemd-networkd[1372]: eth1: Link UP
Dec 13 09:12:05.485396 systemd-networkd[1372]: eth1: Gained carrier
Dec 13 09:12:05.488139 systemd[1]: Starting systemd-fsck@dev-disk-by\x2dlabel-OEM.service - File System Check on /dev/disk/by-label/OEM...
Dec 13 09:12:05.502803 kernel: piix4_smbus 0000:00:01.3: SMBus Host Controller at 0x700, revision 0
Dec 13 09:12:05.515878 kernel: input: ImExPS/2 Generic Explorer Mouse as /devices/platform/i8042/serio1/input/input4
Dec 13 09:12:05.515994 kernel: ACPI: button: Power Button [PWRF]
Dec 13 09:12:05.540948 systemd[1]: Finished systemd-fsck@dev-disk-by\x2dlabel-OEM.service - File System Check on /dev/disk/by-label/OEM.
Dec 13 09:12:05.623866 kernel: [drm] pci: virtio-vga detected at 0000:00:02.0
Dec 13 09:12:05.623979 kernel: virtio-pci 0000:00:02.0: vgaarb: deactivate vga console
Dec 13 09:12:05.627326 systemd[1]: Starting systemd-vconsole-setup.service - Virtual Console Setup...
Dec 13 09:12:05.628829 kernel: Console: switching to colour dummy device 80x25
Dec 13 09:12:05.630150 kernel: [drm] features: -virgl +edid -resource_blob -host_visible
Dec 13 09:12:05.630594 kernel: [drm] features: -context_init
Dec 13 09:12:05.631029 kernel: [drm] number of scanouts: 1
Dec 13 09:12:05.631084 kernel: [drm] number of cap sets: 0
Dec 13 09:12:05.633787 kernel: [drm] Initialized virtio_gpu 0.1.0 0 for 0000:00:02.0 on minor 0
Dec 13 09:12:05.637282 kernel: fbcon: virtio_gpudrmfb (fb0) is primary device
Dec 13 09:12:05.637401 kernel: Console: switching to colour frame buffer device 128x48
Dec 13 09:12:06.457659 systemd-resolved[1323]: Clock change detected. Flushing caches.
Dec 13 09:12:06.458096 systemd-timesyncd[1351]: Contacted time server 67.217.246.204:123 (1.flatcar.pool.ntp.org).
Dec 13 09:12:06.458351 systemd-timesyncd[1351]: Initial clock synchronization to Fri 2024-12-13 09:12:06.457374 UTC.
Dec 13 09:12:06.459106 kernel: virtio-pci 0000:00:02.0: [drm] fb0: virtio_gpudrmfb frame buffer device
Dec 13 09:12:06.463493 kernel: mousedev: PS/2 mouse device common for all mice
Dec 13 09:12:06.494292 systemd[1]: systemd-vconsole-setup.service: Deactivated successfully.
Dec 13 09:12:06.494932 systemd[1]: Stopped systemd-vconsole-setup.service - Virtual Console Setup.
Dec 13 09:12:06.506441 systemd[1]: Starting systemd-vconsole-setup.service - Virtual Console Setup...
Dec 13 09:12:06.521346 systemd[1]: systemd-vconsole-setup.service: Deactivated successfully.
Dec 13 09:12:06.522435 systemd[1]: Stopped systemd-vconsole-setup.service - Virtual Console Setup.
Dec 13 09:12:06.540300 systemd[1]: Starting systemd-vconsole-setup.service - Virtual Console Setup...
Dec 13 09:12:06.682387 kernel: EDAC MC: Ver: 3.0.0
Dec 13 09:12:06.707879 systemd[1]: Finished systemd-udev-settle.service - Wait for udev To Complete Device Initialization.
Dec 13 09:12:06.709626 systemd[1]: Finished systemd-vconsole-setup.service - Virtual Console Setup.
Dec 13 09:12:06.731378 systemd[1]: Starting lvm2-activation-early.service - Activation of LVM2 logical volumes...
Dec 13 09:12:06.748325 lvm[1432]: WARNING: Failed to connect to lvmetad. Falling back to device scanning.
Dec 13 09:12:06.786828 systemd[1]: Finished lvm2-activation-early.service - Activation of LVM2 logical volumes.
Dec 13 09:12:06.788348 systemd[1]: Reached target cryptsetup.target - Local Encrypted Volumes.
Dec 13 09:12:06.788640 systemd[1]: Reached target sysinit.target - System Initialization.
Dec 13 09:12:06.788881 systemd[1]: Started motdgen.path - Watch for update engine configuration changes.
Dec 13 09:12:06.789081 systemd[1]: Started user-cloudinit@var-lib-flatcar\x2dinstall-user_data.path - Watch for a cloud-config at /var/lib/flatcar-install/user_data.
Dec 13 09:12:06.789470 systemd[1]: Started logrotate.timer - Daily rotation of log files.
Dec 13 09:12:06.789686 systemd[1]: Started mdadm.timer - Weekly check for MD array's redundancy information..
Dec 13 09:12:06.789770 systemd[1]: Started systemd-tmpfiles-clean.timer - Daily Cleanup of Temporary Directories.
Dec 13 09:12:06.789839 systemd[1]: update-engine-stub.timer - Update Engine Stub Timer was skipped because of an unmet condition check (ConditionPathExists=/usr/.noupdate).
Dec 13 09:12:06.789889 systemd[1]: Reached target paths.target - Path Units.
Dec 13 09:12:06.789944 systemd[1]: Reached target timers.target - Timer Units.
Dec 13 09:12:06.792326 systemd[1]: Listening on dbus.socket - D-Bus System Message Bus Socket.
Dec 13 09:12:06.794855 systemd[1]: Starting docker.socket - Docker Socket for the API...
Dec 13 09:12:06.804042 systemd[1]: Listening on sshd.socket - OpenSSH Server Socket.
Dec 13 09:12:06.808437 systemd[1]: Starting lvm2-activation.service - Activation of LVM2 logical volumes...
Dec 13 09:12:06.813108 systemd[1]: Listening on docker.socket - Docker Socket for the API.
Dec 13 09:12:06.814300 systemd[1]: Reached target sockets.target - Socket Units.
Dec 13 09:12:06.814783 systemd[1]: Reached target basic.target - Basic System.
Dec 13 09:12:06.818311 systemd[1]: addon-config@oem.service - Configure Addon /oem was skipped because no trigger condition checks were met.
Dec 13 09:12:06.818364 systemd[1]: addon-run@oem.service - Run Addon /oem was skipped because no trigger condition checks were met.
Dec 13 09:12:06.831280 systemd[1]: Starting containerd.service - containerd container runtime...
Dec 13 09:12:06.839278 lvm[1437]: WARNING: Failed to connect to lvmetad. Falling back to device scanning.
Dec 13 09:12:06.840311 systemd[1]: Starting coreos-metadata.service - Flatcar Metadata Agent...
Dec 13 09:12:06.851472 systemd[1]: Starting dbus.service - D-Bus System Message Bus...
Dec 13 09:12:06.859307 systemd[1]: Starting enable-oem-cloudinit.service - Enable cloudinit...
Dec 13 09:12:06.868336 systemd[1]: Starting extend-filesystems.service - Extend Filesystems...
Dec 13 09:12:06.870716 systemd[1]: flatcar-setup-environment.service - Modifies /etc/environment for CoreOS was skipped because of an unmet condition check (ConditionPathExists=/oem/bin/flatcar-setup-environment).
Dec 13 09:12:06.874643 systemd[1]: Starting motdgen.service - Generate /run/flatcar/motd...
Dec 13 09:12:06.884321 systemd[1]: Starting ssh-key-proc-cmdline.service - Install an ssh key from /proc/cmdline...
Dec 13 09:12:06.892358 systemd[1]: Starting sshd-keygen.service - Generate sshd host keys...
Dec 13 09:12:06.901294 systemd[1]: Starting systemd-logind.service - User Login Management...
Dec 13 09:12:06.902370 systemd[1]: tcsd.service - TCG Core Services Daemon was skipped because of an unmet condition check (ConditionPathExists=/dev/tpm0).
Dec 13 09:12:06.904071 systemd[1]: cgroup compatibility translation between legacy and unified hierarchy settings activated. See cgroup-compat debug messages for details.
Dec 13 09:12:06.913271 systemd[1]: Starting update-engine.service - Update Engine...
Dec 13 09:12:06.919272 systemd[1]: Starting update-ssh-keys-after-ignition.service - Run update-ssh-keys once after Ignition...
Dec 13 09:12:06.943364 systemd[1]: Finished lvm2-activation.service - Activation of LVM2 logical volumes.
Dec 13 09:12:06.946637 systemd[1]: ssh-key-proc-cmdline.service: Deactivated successfully.
Dec 13 09:12:06.948116 systemd[1]: Finished ssh-key-proc-cmdline.service - Install an ssh key from /proc/cmdline.
Dec 13 09:12:06.967641 jq[1441]: false
Dec 13 09:12:06.985664 extend-filesystems[1442]: Found loop4
Dec 13 09:12:06.985664 extend-filesystems[1442]: Found loop5
Dec 13 09:12:06.985664 extend-filesystems[1442]: Found loop6
Dec 13 09:12:06.985664 extend-filesystems[1442]: Found loop7
Dec 13 09:12:06.985664 extend-filesystems[1442]: Found vda
Dec 13 09:12:06.985664 extend-filesystems[1442]: Found vda1
Dec 13 09:12:06.985664 extend-filesystems[1442]: Found vda2
Dec 13 09:12:06.985664 extend-filesystems[1442]: Found vda3
Dec 13 09:12:06.985664 extend-filesystems[1442]: Found usr
Dec 13 09:12:06.985664 extend-filesystems[1442]: Found vda4
Dec 13 09:12:06.985664 extend-filesystems[1442]: Found vda6
Dec 13 09:12:06.985664 extend-filesystems[1442]: Found vda7
Dec 13 09:12:06.985664 extend-filesystems[1442]: Found vda9
Dec 13 09:12:06.985664 extend-filesystems[1442]: Checking size of /dev/vda9
Dec 13 09:12:07.101788 kernel: EXT4-fs (vda9): resizing filesystem from 553472 to 15121403 blocks
Dec 13 09:12:07.102004 extend-filesystems[1442]: Resized partition /dev/vda9
Dec 13 09:12:07.102965 coreos-metadata[1439]: Dec 13 09:12:07.009 INFO Fetching http://169.254.169.254/metadata/v1.json: Attempt #1
Dec 13 09:12:07.102965 coreos-metadata[1439]: Dec 13 09:12:07.043 INFO Fetch successful
Dec 13 09:12:07.043911 dbus-daemon[1440]: [system] SELinux support is enabled
Dec 13 09:12:06.985705 systemd[1]: enable-oem-cloudinit.service: Skipped due to 'exec-condition'.
Dec 13 09:12:07.135212 extend-filesystems[1471]: resize2fs 1.47.1 (20-May-2024)
Dec 13 09:12:06.985980 systemd[1]: Condition check resulted in enable-oem-cloudinit.service - Enable cloudinit being skipped.
Dec 13 09:12:07.044555 systemd[1]: Started dbus.service - D-Bus System Message Bus.
Dec 13 09:12:07.149041 update_engine[1449]: I20241213 09:12:07.075927 1449 main.cc:92] Flatcar Update Engine starting
Dec 13 09:12:07.149041 update_engine[1449]: I20241213 09:12:07.130146 1449 update_check_scheduler.cc:74] Next update check in 10m45s
Dec 13 09:12:07.056608 systemd[1]: system-cloudinit@usr-share-oem-cloud\x2dconfig.yml.service - Load cloud-config from /usr/share/oem/cloud-config.yml was skipped because of an unmet condition check (ConditionFileNotEmpty=/usr/share/oem/cloud-config.yml).
Dec 13 09:12:07.174227 jq[1450]: true
Dec 13 09:12:07.056650 systemd[1]: Reached target system-config.target - Load system-provided cloud configs.
Dec 13 09:12:07.070671 (ntainerd)[1466]: containerd.service: Referenced but unset environment variable evaluates to an empty string: TORCX_IMAGEDIR, TORCX_UNPACKDIR
Dec 13 09:12:07.075351 systemd[1]: user-cloudinit-proc-cmdline.service - Load cloud-config from url defined in /proc/cmdline was skipped because of an unmet condition check (ConditionKernelCommandLine=cloud-config-url).
Dec 13 09:12:07.178918 jq[1469]: true
Dec 13 09:12:07.075483 systemd[1]: user-configdrive.service - Load cloud-config from /media/configdrive was skipped because of an unmet condition check (ConditionKernelCommandLine=!flatcar.oem.id=digitalocean).
Dec 13 09:12:07.075525 systemd[1]: Reached target user-config.target - Load user-provided cloud configs.
Dec 13 09:12:07.080531 systemd[1]: motdgen.service: Deactivated successfully.
Dec 13 09:12:07.080874 systemd[1]: Finished motdgen.service - Generate /run/flatcar/motd.
Dec 13 09:12:07.125568 systemd[1]: Started update-engine.service - Update Engine.
Dec 13 09:12:07.134404 systemd[1]: Started locksmithd.service - Cluster reboot manager.
Dec 13 09:12:07.214743 kernel: BTRFS warning: duplicate device /dev/vda3 devid 1 generation 41 scanned by (udev-worker) (1386)
Dec 13 09:12:07.315698 systemd[1]: Finished coreos-metadata.service - Flatcar Metadata Agent.
Dec 13 09:12:07.319711 systemd[1]: packet-phone-home.service - Report Success to Packet was skipped because no trigger condition checks were met.
Dec 13 09:12:07.371421 systemd-logind[1448]: New seat seat0.
Dec 13 09:12:07.378600 systemd-logind[1448]: Watching system buttons on /dev/input/event1 (Power Button)
Dec 13 09:12:07.378653 systemd-logind[1448]: Watching system buttons on /dev/input/event0 (AT Translated Set 2 keyboard)
Dec 13 09:12:07.379073 systemd[1]: Started systemd-logind.service - User Login Management.
Dec 13 09:12:07.402380 kernel: EXT4-fs (vda9): resized filesystem to 15121403
Dec 13 09:12:07.418799 extend-filesystems[1471]: Filesystem at /dev/vda9 is mounted on /; on-line resizing required
Dec 13 09:12:07.418799 extend-filesystems[1471]: old_desc_blocks = 1, new_desc_blocks = 8
Dec 13 09:12:07.418799 extend-filesystems[1471]: The filesystem on /dev/vda9 is now 15121403 (4k) blocks long.
Dec 13 09:12:07.435169 extend-filesystems[1442]: Resized filesystem in /dev/vda9
Dec 13 09:12:07.435169 extend-filesystems[1442]: Found vdb
Dec 13 09:12:07.420426 systemd[1]: extend-filesystems.service: Deactivated successfully.
Dec 13 09:12:07.422223 systemd[1]: Finished extend-filesystems.service - Extend Filesystems.
Dec 13 09:12:07.443316 bash[1501]: Updated "/home/core/.ssh/authorized_keys"
Dec 13 09:12:07.461408 systemd[1]: Finished update-ssh-keys-after-ignition.service - Run update-ssh-keys once after Ignition.
Dec 13 09:12:07.480378 systemd[1]: Starting sshkeys.service...
Dec 13 09:12:07.527800 locksmithd[1476]: locksmithd starting currentOperation="UPDATE_STATUS_IDLE" strategy="reboot"
Dec 13 09:12:07.528460 systemd[1]: Created slice system-coreos\x2dmetadata\x2dsshkeys.slice - Slice /system/coreos-metadata-sshkeys.
Dec 13 09:12:07.542301 systemd[1]: Starting coreos-metadata-sshkeys@core.service - Flatcar Metadata Agent (SSH Keys)...
Dec 13 09:12:07.621305 coreos-metadata[1512]: Dec 13 09:12:07.619 INFO Fetching http://169.254.169.254/metadata/v1.json: Attempt #1
Dec 13 09:12:07.633174 coreos-metadata[1512]: Dec 13 09:12:07.632 INFO Fetch successful
Dec 13 09:12:07.643128 unknown[1512]: wrote ssh authorized keys file for user: core
Dec 13 09:12:07.674959 update-ssh-keys[1516]: Updated "/home/core/.ssh/authorized_keys"
Dec 13 09:12:07.677937 systemd[1]: Finished coreos-metadata-sshkeys@core.service - Flatcar Metadata Agent (SSH Keys).
Dec 13 09:12:07.686331 systemd[1]: Finished sshkeys.service.
Dec 13 09:12:07.713229 systemd-networkd[1372]: eth0: Gained IPv6LL
Dec 13 09:12:07.714608 containerd[1466]: time="2024-12-13T09:12:07.713387433Z" level=info msg="starting containerd" revision=174e0d1785eeda18dc2beba45e1d5a188771636b version=v1.7.21
Dec 13 09:12:07.722105 systemd[1]: Finished systemd-networkd-wait-online.service - Wait for Network to be Configured.
Dec 13 09:12:07.723572 systemd[1]: Reached target network-online.target - Network is Online.
Dec 13 09:12:07.735436 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent...
Dec 13 09:12:07.749378 systemd[1]: Starting nvidia.service - NVIDIA Configure Service...
Dec 13 09:12:07.779847 containerd[1466]: time="2024-12-13T09:12:07.779738018Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.aufs\"..." type=io.containerd.snapshotter.v1
Dec 13 09:12:07.783791 containerd[1466]: time="2024-12-13T09:12:07.783717008Z" level=info msg="skip loading plugin \"io.containerd.snapshotter.v1.aufs\"..." error="aufs is not supported (modprobe aufs failed: exit status 1 \"modprobe: FATAL: Module aufs not found in directory /lib/modules/6.6.65-flatcar\\n\"): skip plugin" type=io.containerd.snapshotter.v1
Dec 13 09:12:07.784084 containerd[1466]: time="2024-12-13T09:12:07.784046772Z" level=info msg="loading plugin \"io.containerd.event.v1.exchange\"..." type=io.containerd.event.v1
Dec 13 09:12:07.784241 containerd[1466]: time="2024-12-13T09:12:07.784202586Z" level=info msg="loading plugin \"io.containerd.internal.v1.opt\"..." type=io.containerd.internal.v1
Dec 13 09:12:07.784633 containerd[1466]: time="2024-12-13T09:12:07.784595166Z" level=info msg="loading plugin \"io.containerd.warning.v1.deprecations\"..." type=io.containerd.warning.v1
Dec 13 09:12:07.784866 containerd[1466]: time="2024-12-13T09:12:07.784769639Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.blockfile\"..." type=io.containerd.snapshotter.v1
Dec 13 09:12:07.785535 containerd[1466]: time="2024-12-13T09:12:07.785484016Z" level=info msg="skip loading plugin \"io.containerd.snapshotter.v1.blockfile\"..." error="no scratch file generator: skip plugin" type=io.containerd.snapshotter.v1
Dec 13 09:12:07.785697 containerd[1466]: time="2024-12-13T09:12:07.785669184Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.btrfs\"..." type=io.containerd.snapshotter.v1
Dec 13 09:12:07.788217 containerd[1466]: time="2024-12-13T09:12:07.787677325Z" level=info msg="skip loading plugin \"io.containerd.snapshotter.v1.btrfs\"..." error="path /var/lib/containerd/io.containerd.snapshotter.v1.btrfs (ext4) must be a btrfs filesystem to be used with the btrfs snapshotter: skip plugin" type=io.containerd.snapshotter.v1
Dec 13 09:12:07.788217 containerd[1466]: time="2024-12-13T09:12:07.787733081Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.devmapper\"..." type=io.containerd.snapshotter.v1
Dec 13 09:12:07.788217 containerd[1466]: time="2024-12-13T09:12:07.787759567Z" level=info msg="skip loading plugin \"io.containerd.snapshotter.v1.devmapper\"..." error="devmapper not configured: skip plugin" type=io.containerd.snapshotter.v1
Dec 13 09:12:07.788217 containerd[1466]: time="2024-12-13T09:12:07.787775978Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.native\"..." type=io.containerd.snapshotter.v1
Dec 13 09:12:07.788217 containerd[1466]: time="2024-12-13T09:12:07.787981459Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.overlayfs\"..." type=io.containerd.snapshotter.v1
Dec 13 09:12:07.789020 containerd[1466]: time="2024-12-13T09:12:07.788969454Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.zfs\"..." type=io.containerd.snapshotter.v1
Dec 13 09:12:07.789732 containerd[1466]: time="2024-12-13T09:12:07.789691817Z" level=info msg="skip loading plugin \"io.containerd.snapshotter.v1.zfs\"..." error="path /var/lib/containerd/io.containerd.snapshotter.v1.zfs must be a zfs filesystem to be used with the zfs snapshotter: skip plugin" type=io.containerd.snapshotter.v1
Dec 13 09:12:07.789885 containerd[1466]: time="2024-12-13T09:12:07.789858244Z" level=info msg="loading plugin \"io.containerd.content.v1.content\"..." type=io.containerd.content.v1
Dec 13 09:12:07.790426 containerd[1466]: time="2024-12-13T09:12:07.790380858Z" level=info msg="loading plugin \"io.containerd.metadata.v1.bolt\"..." type=io.containerd.metadata.v1
Dec 13 09:12:07.790764 containerd[1466]: time="2024-12-13T09:12:07.790636976Z" level=info msg="metadata content store policy set" policy=shared
Dec 13 09:12:07.795736 containerd[1466]: time="2024-12-13T09:12:07.795595759Z" level=info msg="loading plugin \"io.containerd.gc.v1.scheduler\"..." type=io.containerd.gc.v1
Dec 13 09:12:07.796854 containerd[1466]: time="2024-12-13T09:12:07.795915569Z" level=info msg="loading plugin \"io.containerd.differ.v1.walking\"..." type=io.containerd.differ.v1
Dec 13 09:12:07.796854 containerd[1466]: time="2024-12-13T09:12:07.796628150Z" level=info msg="loading plugin \"io.containerd.lease.v1.manager\"..." type=io.containerd.lease.v1
Dec 13 09:12:07.796854 containerd[1466]: time="2024-12-13T09:12:07.796698497Z" level=info msg="loading plugin \"io.containerd.streaming.v1.manager\"..." type=io.containerd.streaming.v1
Dec 13 09:12:07.796854 containerd[1466]: time="2024-12-13T09:12:07.796721308Z" level=info msg="loading plugin \"io.containerd.runtime.v1.linux\"..." type=io.containerd.runtime.v1
Dec 13 09:12:07.801047 containerd[1466]: time="2024-12-13T09:12:07.798949959Z" level=info msg="loading plugin \"io.containerd.monitor.v1.cgroups\"..." type=io.containerd.monitor.v1
Dec 13 09:12:07.801047 containerd[1466]: time="2024-12-13T09:12:07.799410865Z" level=info msg="loading plugin \"io.containerd.runtime.v2.task\"..." type=io.containerd.runtime.v2
Dec 13 09:12:07.801047 containerd[1466]: time="2024-12-13T09:12:07.799664314Z" level=info msg="loading plugin \"io.containerd.runtime.v2.shim\"..." type=io.containerd.runtime.v2
Dec 13 09:12:07.801047 containerd[1466]: time="2024-12-13T09:12:07.799685288Z" level=info msg="loading plugin \"io.containerd.sandbox.store.v1.local\"..." type=io.containerd.sandbox.store.v1
Dec 13 09:12:07.801047 containerd[1466]: time="2024-12-13T09:12:07.799699825Z" level=info msg="loading plugin \"io.containerd.sandbox.controller.v1.local\"..." type=io.containerd.sandbox.controller.v1
Dec 13 09:12:07.801047 containerd[1466]: time="2024-12-13T09:12:07.799716091Z" level=info msg="loading plugin \"io.containerd.service.v1.containers-service\"..." type=io.containerd.service.v1
Dec 13 09:12:07.801047 containerd[1466]: time="2024-12-13T09:12:07.799731607Z" level=info msg="loading plugin \"io.containerd.service.v1.content-service\"..." type=io.containerd.service.v1
Dec 13 09:12:07.801047 containerd[1466]: time="2024-12-13T09:12:07.799744969Z" level=info msg="loading plugin \"io.containerd.service.v1.diff-service\"..." type=io.containerd.service.v1
Dec 13 09:12:07.801047 containerd[1466]: time="2024-12-13T09:12:07.799759979Z" level=info msg="loading plugin \"io.containerd.service.v1.images-service\"..." type=io.containerd.service.v1
Dec 13 09:12:07.801047 containerd[1466]: time="2024-12-13T09:12:07.799777672Z" level=info msg="loading plugin \"io.containerd.service.v1.introspection-service\"..." type=io.containerd.service.v1
Dec 13 09:12:07.801047 containerd[1466]: time="2024-12-13T09:12:07.799791185Z" level=info msg="loading plugin \"io.containerd.service.v1.namespaces-service\"..." type=io.containerd.service.v1
Dec 13 09:12:07.801047 containerd[1466]: time="2024-12-13T09:12:07.799804091Z" level=info msg="loading plugin \"io.containerd.service.v1.snapshots-service\"..." type=io.containerd.service.v1
Dec 13 09:12:07.801047 containerd[1466]: time="2024-12-13T09:12:07.799816134Z" level=info msg="loading plugin \"io.containerd.service.v1.tasks-service\"..." type=io.containerd.service.v1
Dec 13 09:12:07.801047 containerd[1466]: time="2024-12-13T09:12:07.799837472Z" level=info msg="loading plugin \"io.containerd.grpc.v1.containers\"..." type=io.containerd.grpc.v1
Dec 13 09:12:07.801673 containerd[1466]: time="2024-12-13T09:12:07.799855100Z" level=info msg="loading plugin \"io.containerd.grpc.v1.content\"..." type=io.containerd.grpc.v1
Dec 13 09:12:07.801673 containerd[1466]: time="2024-12-13T09:12:07.799868190Z" level=info msg="loading plugin \"io.containerd.grpc.v1.diff\"..." type=io.containerd.grpc.v1
Dec 13 09:12:07.801673 containerd[1466]: time="2024-12-13T09:12:07.799881943Z" level=info msg="loading plugin \"io.containerd.grpc.v1.events\"..." type=io.containerd.grpc.v1
Dec 13 09:12:07.801673 containerd[1466]: time="2024-12-13T09:12:07.799894930Z" level=info msg="loading plugin \"io.containerd.grpc.v1.images\"..." type=io.containerd.grpc.v1
Dec 13 09:12:07.801673 containerd[1466]: time="2024-12-13T09:12:07.799911256Z" level=info msg="loading plugin \"io.containerd.grpc.v1.introspection\"..." type=io.containerd.grpc.v1
Dec 13 09:12:07.801673 containerd[1466]: time="2024-12-13T09:12:07.799923500Z" level=info msg="loading plugin \"io.containerd.grpc.v1.leases\"..." type=io.containerd.grpc.v1
Dec 13 09:12:07.801673 containerd[1466]: time="2024-12-13T09:12:07.799938184Z" level=info msg="loading plugin \"io.containerd.grpc.v1.namespaces\"..." type=io.containerd.grpc.v1
Dec 13 09:12:07.801673 containerd[1466]: time="2024-12-13T09:12:07.799950947Z" level=info msg="loading plugin \"io.containerd.grpc.v1.sandbox-controllers\"..." type=io.containerd.grpc.v1
Dec 13 09:12:07.801673 containerd[1466]: time="2024-12-13T09:12:07.799971423Z" level=info msg="loading plugin \"io.containerd.grpc.v1.sandboxes\"..." type=io.containerd.grpc.v1
Dec 13 09:12:07.801673 containerd[1466]: time="2024-12-13T09:12:07.799990401Z" level=info msg="loading plugin \"io.containerd.grpc.v1.snapshots\"..." type=io.containerd.grpc.v1
Dec 13 09:12:07.801673 containerd[1466]: time="2024-12-13T09:12:07.800063194Z" level=info msg="loading plugin \"io.containerd.grpc.v1.streaming\"..." type=io.containerd.grpc.v1
Dec 13 09:12:07.801673 containerd[1466]: time="2024-12-13T09:12:07.800088334Z" level=info msg="loading plugin \"io.containerd.grpc.v1.tasks\"..." type=io.containerd.grpc.v1
Dec 13 09:12:07.801673 containerd[1466]: time="2024-12-13T09:12:07.800112567Z" level=info msg="loading plugin \"io.containerd.transfer.v1.local\"..." type=io.containerd.transfer.v1
Dec 13 09:12:07.801673 containerd[1466]: time="2024-12-13T09:12:07.800149223Z" level=info msg="loading plugin \"io.containerd.grpc.v1.transfer\"..." type=io.containerd.grpc.v1
Dec 13 09:12:07.801673 containerd[1466]: time="2024-12-13T09:12:07.800172445Z" level=info msg="loading plugin \"io.containerd.grpc.v1.version\"..." type=io.containerd.grpc.v1
Dec 13 09:12:07.802554 containerd[1466]: time="2024-12-13T09:12:07.800191512Z" level=info msg="loading plugin \"io.containerd.internal.v1.restart\"..." type=io.containerd.internal.v1
Dec 13 09:12:07.802554 containerd[1466]: time="2024-12-13T09:12:07.800295831Z" level=info msg="loading plugin \"io.containerd.tracing.processor.v1.otlp\"..." type=io.containerd.tracing.processor.v1
Dec 13 09:12:07.802554 containerd[1466]: time="2024-12-13T09:12:07.800323718Z" level=info msg="skip loading plugin \"io.containerd.tracing.processor.v1.otlp\"..." error="skip plugin: tracing endpoint not configured" type=io.containerd.tracing.processor.v1
Dec 13 09:12:07.802554 containerd[1466]: time="2024-12-13T09:12:07.800345020Z" level=info msg="loading plugin \"io.containerd.internal.v1.tracing\"..." type=io.containerd.internal.v1
Dec 13 09:12:07.802554 containerd[1466]: time="2024-12-13T09:12:07.800365691Z" level=info msg="skip loading plugin \"io.containerd.internal.v1.tracing\"..." error="skip plugin: tracing endpoint not configured" type=io.containerd.internal.v1
Dec 13 09:12:07.802554 containerd[1466]: time="2024-12-13T09:12:07.800382455Z" level=info msg="loading plugin \"io.containerd.grpc.v1.healthcheck\"..." type=io.containerd.grpc.v1
Dec 13 09:12:07.802554 containerd[1466]: time="2024-12-13T09:12:07.800403627Z" level=info msg="loading plugin \"io.containerd.nri.v1.nri\"..." type=io.containerd.nri.v1
Dec 13 09:12:07.802554 containerd[1466]: time="2024-12-13T09:12:07.800424649Z" level=info msg="NRI interface is disabled by configuration."
Dec 13 09:12:07.802554 containerd[1466]: time="2024-12-13T09:12:07.800439724Z" level=info msg="loading plugin \"io.containerd.grpc.v1.cri\"..." type=io.containerd.grpc.v1
Dec 13 09:12:07.802881 containerd[1466]: time="2024-12-13T09:12:07.800881135Z" level=info msg="Start cri plugin with config {PluginConfig:{ContainerdConfig:{Snapshotter:overlayfs DefaultRuntimeName:runc DefaultRuntime:{Type: Path: Engine: PodAnnotations:[] ContainerAnnotations:[] Root: Options:map[] PrivilegedWithoutHostDevices:false PrivilegedWithoutHostDevicesAllDevicesAllowed:false BaseRuntimeSpec: NetworkPluginConfDir: NetworkPluginMaxConfNum:0 Snapshotter: SandboxMode:} UntrustedWorkloadRuntime:{Type: Path: Engine: PodAnnotations:[] ContainerAnnotations:[] Root: Options:map[] PrivilegedWithoutHostDevices:false PrivilegedWithoutHostDevicesAllDevicesAllowed:false BaseRuntimeSpec: NetworkPluginConfDir: NetworkPluginMaxConfNum:0 Snapshotter: SandboxMode:} Runtimes:map[runc:{Type:io.containerd.runc.v2 Path: Engine: PodAnnotations:[] ContainerAnnotations:[] Root: Options:map[SystemdCgroup:true] PrivilegedWithoutHostDevices:false PrivilegedWithoutHostDevicesAllDevicesAllowed:false BaseRuntimeSpec: NetworkPluginConfDir: NetworkPluginMaxConfNum:0 Snapshotter: SandboxMode:podsandbox}] NoPivot:false DisableSnapshotAnnotations:true DiscardUnpackedLayers:false IgnoreBlockIONotEnabledErrors:false IgnoreRdtNotEnabledErrors:false} CniConfig:{NetworkPluginBinDir:/opt/cni/bin NetworkPluginConfDir:/etc/cni/net.d NetworkPluginMaxConfNum:1 NetworkPluginSetupSerially:false NetworkPluginConfTemplate: IPPreference:} Registry:{ConfigPath: Mirrors:map[] Configs:map[] Auths:map[] Headers:map[]} ImageDecryption:{KeyModel:node} DisableTCPService:true StreamServerAddress:127.0.0.1 StreamServerPort:0 StreamIdleTimeout:4h0m0s EnableSelinux:true SelinuxCategoryRange:1024 SandboxImage:registry.k8s.io/pause:3.8 StatsCollectPeriod:10 SystemdCgroup:false EnableTLSStreaming:false X509KeyPairStreaming:{TLSCertFile: TLSKeyFile:} MaxContainerLogLineSize:16384 DisableCgroup:false DisableApparmor:false RestrictOOMScoreAdj:false MaxConcurrentDownloads:3 DisableProcMount:false 
UnsetSeccompProfile: TolerateMissingHugetlbController:true DisableHugetlbController:true DeviceOwnershipFromSecurityContext:false IgnoreImageDefinedVolumes:false NetNSMountsUnderStateDir:false EnableUnprivilegedPorts:false EnableUnprivilegedICMP:false EnableCDI:false CDISpecDirs:[/etc/cdi /var/run/cdi] ImagePullProgressTimeout:5m0s DrainExecSyncIOTimeout:0s ImagePullWithSyncFs:false IgnoreDeprecationWarnings:[]} ContainerdRootDir:/var/lib/containerd ContainerdEndpoint:/run/containerd/containerd.sock RootDir:/var/lib/containerd/io.containerd.grpc.v1.cri StateDir:/run/containerd/io.containerd.grpc.v1.cri}" Dec 13 09:12:07.802881 containerd[1466]: time="2024-12-13T09:12:07.800983070Z" level=info msg="Connect containerd service" Dec 13 09:12:07.803314 containerd[1466]: time="2024-12-13T09:12:07.803266351Z" level=info msg="using legacy CRI server" Dec 13 09:12:07.804042 containerd[1466]: time="2024-12-13T09:12:07.803396614Z" level=info msg="using experimental NRI integration - disable nri plugin to prevent this" Dec 13 09:12:07.804042 containerd[1466]: time="2024-12-13T09:12:07.803566571Z" level=info msg="Get image filesystem path \"/var/lib/containerd/io.containerd.snapshotter.v1.overlayfs\"" Dec 13 09:12:07.805967 containerd[1466]: time="2024-12-13T09:12:07.805907188Z" level=error msg="failed to load cni during init, please check CRI plugin status before setting up network for pods" error="cni config load failed: no network config found in /etc/cni/net.d: cni plugin not initialized: failed to load cni config" Dec 13 09:12:07.806339 containerd[1466]: time="2024-12-13T09:12:07.806224897Z" level=info msg="Start subscribing containerd event" Dec 13 09:12:07.806457 containerd[1466]: time="2024-12-13T09:12:07.806414366Z" level=info msg=serving... address=/run/containerd/containerd.sock.ttrpc Dec 13 09:12:07.806534 containerd[1466]: time="2024-12-13T09:12:07.806499699Z" level=info msg=serving... 
address=/run/containerd/containerd.sock Dec 13 09:12:07.806626 containerd[1466]: time="2024-12-13T09:12:07.806604640Z" level=info msg="Start recovering state" Dec 13 09:12:07.806825 containerd[1466]: time="2024-12-13T09:12:07.806806588Z" level=info msg="Start event monitor" Dec 13 09:12:07.806909 containerd[1466]: time="2024-12-13T09:12:07.806896326Z" level=info msg="Start snapshots syncer" Dec 13 09:12:07.806988 containerd[1466]: time="2024-12-13T09:12:07.806964808Z" level=info msg="Start cni network conf syncer for default" Dec 13 09:12:07.807104 containerd[1466]: time="2024-12-13T09:12:07.807088059Z" level=info msg="Start streaming server" Dec 13 09:12:07.812047 containerd[1466]: time="2024-12-13T09:12:07.810505804Z" level=info msg="containerd successfully booted in 0.098760s" Dec 13 09:12:07.810675 systemd[1]: Started containerd.service - containerd container runtime. Dec 13 09:12:07.833453 systemd[1]: Finished nvidia.service - NVIDIA Configure Service. Dec 13 09:12:07.847076 sshd_keygen[1472]: ssh-keygen: generating new host keys: RSA ECDSA ED25519 Dec 13 09:12:07.886643 systemd[1]: Finished sshd-keygen.service - Generate sshd host keys. Dec 13 09:12:07.899521 systemd[1]: Starting issuegen.service - Generate /run/issue... Dec 13 09:12:07.914125 systemd[1]: issuegen.service: Deactivated successfully. Dec 13 09:12:07.914502 systemd[1]: Finished issuegen.service - Generate /run/issue. Dec 13 09:12:07.927571 systemd[1]: Starting systemd-user-sessions.service - Permit User Sessions... Dec 13 09:12:07.950809 systemd[1]: Finished systemd-user-sessions.service - Permit User Sessions. Dec 13 09:12:07.961779 systemd[1]: Started getty@tty1.service - Getty on tty1. Dec 13 09:12:07.977214 systemd[1]: Started serial-getty@ttyS0.service - Serial Getty on ttyS0. Dec 13 09:12:07.978106 systemd[1]: Reached target getty.target - Login Prompts. 
Dec 13 09:12:08.097545 systemd-networkd[1372]: eth1: Gained IPv6LL
Dec 13 09:12:08.779698 systemd[1]: Created slice system-sshd.slice - Slice /system/sshd.
Dec 13 09:12:08.789463 systemd[1]: Started sshd@0-147.182.199.141:22-147.75.109.163:58896.service - OpenSSH per-connection server daemon (147.75.109.163:58896).
Dec 13 09:12:08.896028 sshd[1551]: Accepted publickey for core from 147.75.109.163 port 58896 ssh2: RSA SHA256:MBgsXj4XmUdA9aYiHDxHU6W7jfMbGlrjxNqF1h45cRo
Dec 13 09:12:08.901211 sshd[1551]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Dec 13 09:12:08.914665 systemd[1]: Created slice user-500.slice - User Slice of UID 500.
Dec 13 09:12:08.924880 systemd[1]: Starting user-runtime-dir@500.service - User Runtime Directory /run/user/500...
Dec 13 09:12:08.935640 systemd-logind[1448]: New session 1 of user core.
Dec 13 09:12:08.957892 systemd[1]: Finished user-runtime-dir@500.service - User Runtime Directory /run/user/500.
Dec 13 09:12:08.973634 systemd[1]: Starting user@500.service - User Manager for UID 500...
Dec 13 09:12:08.977853 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent.
Dec 13 09:12:08.986702 systemd[1]: Reached target multi-user.target - Multi-User System.
Dec 13 09:12:08.999522 (kubelet)[1560]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS, KUBELET_KUBEADM_ARGS
Dec 13 09:12:09.003545 (systemd)[1559]: pam_unix(systemd-user:session): session opened for user core(uid=500) by (uid=0)
Dec 13 09:12:09.153767 systemd[1559]: Queued start job for default target default.target.
Dec 13 09:12:09.162395 systemd[1559]: Created slice app.slice - User Application Slice.
Dec 13 09:12:09.162779 systemd[1559]: Reached target paths.target - Paths.
Dec 13 09:12:09.162886 systemd[1559]: Reached target timers.target - Timers.
Dec 13 09:12:09.167249 systemd[1559]: Starting dbus.socket - D-Bus User Message Bus Socket...
Dec 13 09:12:09.188936 systemd[1559]: Listening on dbus.socket - D-Bus User Message Bus Socket.
Dec 13 09:12:09.189108 systemd[1559]: Reached target sockets.target - Sockets.
Dec 13 09:12:09.189133 systemd[1559]: Reached target basic.target - Basic System.
Dec 13 09:12:09.189204 systemd[1559]: Reached target default.target - Main User Target.
Dec 13 09:12:09.189242 systemd[1559]: Startup finished in 174ms.
Dec 13 09:12:09.189453 systemd[1]: Started user@500.service - User Manager for UID 500.
Dec 13 09:12:09.206358 systemd[1]: Started session-1.scope - Session 1 of User core.
Dec 13 09:12:09.208795 systemd[1]: Startup finished in 1.353s (kernel) + 5.837s (initrd) + 6.853s (userspace) = 14.043s.
Dec 13 09:12:09.291479 systemd[1]: Started sshd@1-147.182.199.141:22-147.75.109.163:58910.service - OpenSSH per-connection server daemon (147.75.109.163:58910).
Dec 13 09:12:09.381070 sshd[1580]: Accepted publickey for core from 147.75.109.163 port 58910 ssh2: RSA SHA256:MBgsXj4XmUdA9aYiHDxHU6W7jfMbGlrjxNqF1h45cRo
Dec 13 09:12:09.382615 sshd[1580]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Dec 13 09:12:09.392625 systemd-logind[1448]: New session 2 of user core.
Dec 13 09:12:09.398378 systemd[1]: Started session-2.scope - Session 2 of User core.
Dec 13 09:12:09.468713 sshd[1580]: pam_unix(sshd:session): session closed for user core
Dec 13 09:12:09.477379 systemd[1]: sshd@1-147.182.199.141:22-147.75.109.163:58910.service: Deactivated successfully.
Dec 13 09:12:09.481413 systemd[1]: session-2.scope: Deactivated successfully.
Dec 13 09:12:09.485335 systemd-logind[1448]: Session 2 logged out. Waiting for processes to exit.
Dec 13 09:12:09.493563 systemd[1]: Started sshd@2-147.182.199.141:22-147.75.109.163:58926.service - OpenSSH per-connection server daemon (147.75.109.163:58926).
Dec 13 09:12:09.497455 systemd-logind[1448]: Removed session 2.
Dec 13 09:12:09.548299 sshd[1587]: Accepted publickey for core from 147.75.109.163 port 58926 ssh2: RSA SHA256:MBgsXj4XmUdA9aYiHDxHU6W7jfMbGlrjxNqF1h45cRo
Dec 13 09:12:09.550332 sshd[1587]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Dec 13 09:12:09.564145 systemd-logind[1448]: New session 3 of user core.
Dec 13 09:12:09.569409 systemd[1]: Started session-3.scope - Session 3 of User core.
Dec 13 09:12:09.634598 sshd[1587]: pam_unix(sshd:session): session closed for user core
Dec 13 09:12:09.649407 systemd[1]: sshd@2-147.182.199.141:22-147.75.109.163:58926.service: Deactivated successfully.
Dec 13 09:12:09.653636 systemd[1]: session-3.scope: Deactivated successfully.
Dec 13 09:12:09.655183 systemd-logind[1448]: Session 3 logged out. Waiting for processes to exit.
Dec 13 09:12:09.668322 systemd[1]: Started sshd@3-147.182.199.141:22-147.75.109.163:58942.service - OpenSSH per-connection server daemon (147.75.109.163:58942).
Dec 13 09:12:09.671759 systemd-logind[1448]: Removed session 3.
Dec 13 09:12:09.740168 sshd[1595]: Accepted publickey for core from 147.75.109.163 port 58942 ssh2: RSA SHA256:MBgsXj4XmUdA9aYiHDxHU6W7jfMbGlrjxNqF1h45cRo
Dec 13 09:12:09.743920 sshd[1595]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Dec 13 09:12:09.755038 systemd-logind[1448]: New session 4 of user core.
Dec 13 09:12:09.763406 systemd[1]: Started session-4.scope - Session 4 of User core.
Dec 13 09:12:09.829103 kubelet[1560]: E1213 09:12:09.828980 1560 run.go:72] "command failed" err="failed to load kubelet config file, path: /var/lib/kubelet/config.yaml, error: failed to load Kubelet config file /var/lib/kubelet/config.yaml, error failed to read kubelet config file \"/var/lib/kubelet/config.yaml\", error: open /var/lib/kubelet/config.yaml: no such file or directory"
Dec 13 09:12:09.833901 sshd[1595]: pam_unix(sshd:session): session closed for user core
Dec 13 09:12:09.835545 systemd[1]: kubelet.service: Main process exited, code=exited, status=1/FAILURE
Dec 13 09:12:09.835714 systemd[1]: kubelet.service: Failed with result 'exit-code'.
Dec 13 09:12:09.837318 systemd[1]: kubelet.service: Consumed 1.322s CPU time.
Dec 13 09:12:09.857214 systemd[1]: sshd@3-147.182.199.141:22-147.75.109.163:58942.service: Deactivated successfully.
Dec 13 09:12:09.860957 systemd[1]: session-4.scope: Deactivated successfully.
Dec 13 09:12:09.864254 systemd-logind[1448]: Session 4 logged out. Waiting for processes to exit.
Dec 13 09:12:09.879584 systemd[1]: Started sshd@4-147.182.199.141:22-147.75.109.163:58956.service - OpenSSH per-connection server daemon (147.75.109.163:58956).
Dec 13 09:12:09.881617 systemd-logind[1448]: Removed session 4.
Dec 13 09:12:09.939298 sshd[1604]: Accepted publickey for core from 147.75.109.163 port 58956 ssh2: RSA SHA256:MBgsXj4XmUdA9aYiHDxHU6W7jfMbGlrjxNqF1h45cRo
Dec 13 09:12:09.942981 sshd[1604]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Dec 13 09:12:09.951279 systemd-logind[1448]: New session 5 of user core.
Dec 13 09:12:09.964559 systemd[1]: Started session-5.scope - Session 5 of User core.
Dec 13 09:12:10.044388 sudo[1607]: core : PWD=/home/core ; USER=root ; COMMAND=/usr/sbin/setenforce 1
Dec 13 09:12:10.044897 sudo[1607]: pam_unix(sudo:session): session opened for user root(uid=0) by core(uid=500)
Dec 13 09:12:10.069534 sudo[1607]: pam_unix(sudo:session): session closed for user root
Dec 13 09:12:10.075356 sshd[1604]: pam_unix(sshd:session): session closed for user core
Dec 13 09:12:10.091603 systemd[1]: sshd@4-147.182.199.141:22-147.75.109.163:58956.service: Deactivated successfully.
Dec 13 09:12:10.094866 systemd[1]: session-5.scope: Deactivated successfully.
Dec 13 09:12:10.098312 systemd-logind[1448]: Session 5 logged out. Waiting for processes to exit.
Dec 13 09:12:10.111664 systemd[1]: Started sshd@5-147.182.199.141:22-147.75.109.163:58972.service - OpenSSH per-connection server daemon (147.75.109.163:58972).
Dec 13 09:12:10.114076 systemd-logind[1448]: Removed session 5.
Dec 13 09:12:10.167400 sshd[1612]: Accepted publickey for core from 147.75.109.163 port 58972 ssh2: RSA SHA256:MBgsXj4XmUdA9aYiHDxHU6W7jfMbGlrjxNqF1h45cRo
Dec 13 09:12:10.169990 sshd[1612]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Dec 13 09:12:10.177285 systemd-logind[1448]: New session 6 of user core.
Dec 13 09:12:10.187369 systemd[1]: Started session-6.scope - Session 6 of User core.
Dec 13 09:12:10.255655 sudo[1616]: core : PWD=/home/core ; USER=root ; COMMAND=/usr/bin/rm -rf /etc/audit/rules.d/80-selinux.rules /etc/audit/rules.d/99-default.rules
Dec 13 09:12:10.256372 sudo[1616]: pam_unix(sudo:session): session opened for user root(uid=0) by core(uid=500)
Dec 13 09:12:10.261592 sudo[1616]: pam_unix(sudo:session): session closed for user root
Dec 13 09:12:10.269682 sudo[1615]: core : PWD=/home/core ; USER=root ; COMMAND=/usr/bin/systemctl restart audit-rules
Dec 13 09:12:10.270126 sudo[1615]: pam_unix(sudo:session): session opened for user root(uid=0) by core(uid=500)
Dec 13 09:12:10.288522 systemd[1]: Stopping audit-rules.service - Load Security Auditing Rules...
Dec 13 09:12:10.291571 auditctl[1619]: No rules
Dec 13 09:12:10.292020 systemd[1]: audit-rules.service: Deactivated successfully.
Dec 13 09:12:10.292218 systemd[1]: Stopped audit-rules.service - Load Security Auditing Rules.
Dec 13 09:12:10.295346 systemd[1]: Starting audit-rules.service - Load Security Auditing Rules...
Dec 13 09:12:10.352775 augenrules[1637]: No rules
Dec 13 09:12:10.354994 systemd[1]: Finished audit-rules.service - Load Security Auditing Rules.
Dec 13 09:12:10.358344 sudo[1615]: pam_unix(sudo:session): session closed for user root
Dec 13 09:12:10.362613 sshd[1612]: pam_unix(sshd:session): session closed for user core
Dec 13 09:12:10.372399 systemd[1]: sshd@5-147.182.199.141:22-147.75.109.163:58972.service: Deactivated successfully.
Dec 13 09:12:10.375239 systemd[1]: session-6.scope: Deactivated successfully.
Dec 13 09:12:10.376629 systemd-logind[1448]: Session 6 logged out. Waiting for processes to exit.
Dec 13 09:12:10.383619 systemd[1]: Started sshd@6-147.182.199.141:22-147.75.109.163:58974.service - OpenSSH per-connection server daemon (147.75.109.163:58974).
Dec 13 09:12:10.386207 systemd-logind[1448]: Removed session 6.
Dec 13 09:12:10.447742 sshd[1645]: Accepted publickey for core from 147.75.109.163 port 58974 ssh2: RSA SHA256:MBgsXj4XmUdA9aYiHDxHU6W7jfMbGlrjxNqF1h45cRo
Dec 13 09:12:10.450423 sshd[1645]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Dec 13 09:12:10.458270 systemd-logind[1448]: New session 7 of user core.
Dec 13 09:12:10.465388 systemd[1]: Started session-7.scope - Session 7 of User core.
Dec 13 09:12:10.536854 sudo[1648]: core : PWD=/home/core ; USER=root ; COMMAND=/home/core/install.sh
Dec 13 09:12:10.537507 sudo[1648]: pam_unix(sudo:session): session opened for user root(uid=0) by core(uid=500)
Dec 13 09:12:11.587794 systemd[1]: Stopped kubelet.service - kubelet: The Kubernetes Node Agent.
Dec 13 09:12:11.588261 systemd[1]: kubelet.service: Consumed 1.322s CPU time.
Dec 13 09:12:11.594523 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent...
Dec 13 09:12:11.635412 systemd[1]: Reloading requested from client PID 1681 ('systemctl') (unit session-7.scope)...
Dec 13 09:12:11.635430 systemd[1]: Reloading...
Dec 13 09:12:11.791067 zram_generator::config[1719]: No configuration found.
Dec 13 09:12:11.985812 systemd[1]: /usr/lib/systemd/system/docker.socket:6: ListenStream= references a path below legacy directory /var/run/, updating /var/run/docker.sock → /run/docker.sock; please update the unit file accordingly.
Dec 13 09:12:12.118937 systemd[1]: Reloading finished in 482 ms.
Dec 13 09:12:12.190133 systemd[1]: kubelet.service: Control process exited, code=killed, status=15/TERM
Dec 13 09:12:12.190289 systemd[1]: kubelet.service: Failed with result 'signal'.
Dec 13 09:12:12.190699 systemd[1]: Stopped kubelet.service - kubelet: The Kubernetes Node Agent.
Dec 13 09:12:12.199609 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent...
Dec 13 09:12:12.364413 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent.
Dec 13 09:12:12.371492 (kubelet)[1773]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS
Dec 13 09:12:12.441699 kubelet[1773]: Flag --container-runtime-endpoint has been deprecated, This parameter should be set via the config file specified by the Kubelet's --config flag. See https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/ for more information.
Dec 13 09:12:12.441699 kubelet[1773]: Flag --pod-infra-container-image has been deprecated, will be removed in a future release. Image garbage collector will get sandbox image information from CRI.
Dec 13 09:12:12.441699 kubelet[1773]: Flag --volume-plugin-dir has been deprecated, This parameter should be set via the config file specified by the Kubelet's --config flag. See https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/ for more information.
Dec 13 09:12:12.442240 kubelet[1773]: I1213 09:12:12.441793 1773 server.go:206] "--pod-infra-container-image will not be pruned by the image garbage collector in kubelet and should also be set in the remote runtime"
Dec 13 09:12:13.138935 kubelet[1773]: I1213 09:12:13.138822 1773 server.go:486] "Kubelet version" kubeletVersion="v1.31.0"
Dec 13 09:12:13.138935 kubelet[1773]: I1213 09:12:13.138886 1773 server.go:488] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK=""
Dec 13 09:12:13.139338 kubelet[1773]: I1213 09:12:13.139300 1773 server.go:929] "Client rotation is on, will bootstrap in background"
Dec 13 09:12:13.176034 kubelet[1773]: I1213 09:12:13.174769 1773 dynamic_cafile_content.go:160] "Starting controller" name="client-ca-bundle::/etc/kubernetes/pki/ca.crt"
Dec 13 09:12:13.188303 kubelet[1773]: E1213 09:12:13.188251 1773 log.go:32] "RuntimeConfig from runtime service failed" err="rpc error: code = Unimplemented desc = unknown method RuntimeConfig for service runtime.v1.RuntimeService"
Dec 13 09:12:13.188661 kubelet[1773]: I1213 09:12:13.188639 1773 server.go:1403] "CRI implementation should be updated to support RuntimeConfig when KubeletCgroupDriverFromCRI feature gate has been enabled. Falling back to using cgroupDriver from kubelet config."
Dec 13 09:12:13.195278 kubelet[1773]: I1213 09:12:13.195236 1773 server.go:744] "--cgroups-per-qos enabled, but --cgroup-root was not specified. defaulting to /"
Dec 13 09:12:13.197492 kubelet[1773]: I1213 09:12:13.197441 1773 swap_util.go:113] "Swap is on" /proc/swaps contents="Filename\t\t\t\tType\t\tSize\t\tUsed\t\tPriority"
Dec 13 09:12:13.197986 kubelet[1773]: I1213 09:12:13.197930 1773 container_manager_linux.go:264] "Container manager verified user specified cgroup-root exists" cgroupRoot=[]
Dec 13 09:12:13.198372 kubelet[1773]: I1213 09:12:13.198116 1773 container_manager_linux.go:269] "Creating Container Manager object based on Node Config" nodeConfig={"NodeName":"147.182.199.141","RuntimeCgroupsName":"","SystemCgroupsName":"","KubeletCgroupsName":"","KubeletOOMScoreAdj":-999,"ContainerRuntime":"","CgroupsPerQOS":true,"CgroupRoot":"/","CgroupDriver":"systemd","KubeletRootDir":"/var/lib/kubelet","ProtectKernelDefaults":false,"KubeReservedCgroupName":"","SystemReservedCgroupName":"","ReservedSystemCPUs":{},"EnforceNodeAllocatable":{"pods":{}},"KubeReserved":null,"SystemReserved":null,"HardEvictionThresholds":[{"Signal":"memory.available","Operator":"LessThan","Value":{"Quantity":"100Mi","Percentage":0},"GracePeriod":0,"MinReclaim":null},{"Signal":"nodefs.available","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.1},"GracePeriod":0,"MinReclaim":null},{"Signal":"nodefs.inodesFree","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.05},"GracePeriod":0,"MinReclaim":null},{"Signal":"imagefs.available","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.15},"GracePeriod":0,"MinReclaim":null},{"Signal":"imagefs.inodesFree","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.05},"GracePeriod":0,"MinReclaim":null}],"QOSReserved":{},"CPUManagerPolicy":"none","CPUManagerPolicyOptions":null,"TopologyManagerScope":"container","CPUManagerReconcilePeriod":10000000000,"ExperimentalMemoryManagerPolicy":"None","ExperimentalMemoryManagerReservedMemory":null,"PodPidsLimit":-1,"EnforceCPULimits":true,"CPUCFSQuotaPeriod":100000000,"TopologyManagerPolicy":"none","TopologyManagerPolicyOptions":null,"CgroupVersion":2}
Dec 13 09:12:13.198646 kubelet[1773]: I1213 09:12:13.198629 1773 topology_manager.go:138] "Creating topology manager with none policy"
Dec 13 09:12:13.198750 kubelet[1773]: I1213 09:12:13.198711 1773 container_manager_linux.go:300] "Creating device plugin manager"
Dec 13 09:12:13.199044 kubelet[1773]: I1213 09:12:13.199027 1773 state_mem.go:36] "Initialized new in-memory state store"
Dec 13 09:12:13.201880 kubelet[1773]: I1213 09:12:13.201834 1773 kubelet.go:408] "Attempting to sync node with API server"
Dec 13 09:12:13.202146 kubelet[1773]: I1213 09:12:13.202129 1773 kubelet.go:303] "Adding static pod path" path="/etc/kubernetes/manifests"
Dec 13 09:12:13.202305 kubelet[1773]: I1213 09:12:13.202292 1773 kubelet.go:314] "Adding apiserver pod source"
Dec 13 09:12:13.202451 kubelet[1773]: I1213 09:12:13.202437 1773 apiserver.go:42] "Waiting for node sync before watching apiserver pods"
Dec 13 09:12:13.204184 kubelet[1773]: E1213 09:12:13.203749 1773 file.go:98] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests"
Dec 13 09:12:13.204184 kubelet[1773]: E1213 09:12:13.203832 1773 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests"
Dec 13 09:12:13.208819 kubelet[1773]: I1213 09:12:13.208767 1773 kuberuntime_manager.go:262] "Container runtime initialized" containerRuntime="containerd" version="v1.7.21" apiVersion="v1"
Dec 13 09:12:13.210835 kubelet[1773]: I1213 09:12:13.210772 1773 kubelet.go:837] "Not starting ClusterTrustBundle informer because we are in static kubelet mode"
Dec 13 09:12:13.211600 kubelet[1773]: W1213 09:12:13.211547 1773 probe.go:272] Flexvolume plugin directory at /opt/libexec/kubernetes/kubelet-plugins/volume/exec/ does not exist. Recreating.
Dec 13 09:12:13.212728 kubelet[1773]: I1213 09:12:13.212519 1773 server.go:1269] "Started kubelet"
Dec 13 09:12:13.215029 kubelet[1773]: I1213 09:12:13.213768 1773 server.go:163] "Starting to listen" address="0.0.0.0" port=10250
Dec 13 09:12:13.215029 kubelet[1773]: I1213 09:12:13.214614 1773 fs_resource_analyzer.go:67] "Starting FS ResourceAnalyzer"
Dec 13 09:12:13.215029 kubelet[1773]: I1213 09:12:13.214908 1773 server.go:460] "Adding debug handlers to kubelet server"
Dec 13 09:12:13.216088 kubelet[1773]: I1213 09:12:13.216019 1773 ratelimit.go:55] "Setting rate limiting for endpoint" service="podresources" qps=100 burstTokens=10
Dec 13 09:12:13.216493 kubelet[1773]: I1213 09:12:13.216473 1773 server.go:236] "Starting to serve the podresources API" endpoint="unix:/var/lib/kubelet/pod-resources/kubelet.sock"
Dec 13 09:12:13.226193 kubelet[1773]: I1213 09:12:13.226153 1773 volume_manager.go:289] "Starting Kubelet Volume Manager"
Dec 13 09:12:13.226809 kubelet[1773]: E1213 09:12:13.226769 1773 kubelet_node_status.go:453] "Error getting the current node from lister" err="node \"147.182.199.141\" not found"
Dec 13 09:12:13.227295 kubelet[1773]: I1213 09:12:13.227276 1773 desired_state_of_world_populator.go:146] "Desired state populator starts to run"
Dec 13 09:12:13.227546 kubelet[1773]: I1213 09:12:13.227523 1773 reconciler.go:26] "Reconciler: start to sync state"
Dec 13 09:12:13.227752 kubelet[1773]: I1213 09:12:13.227720 1773 dynamic_serving_content.go:135] "Starting controller" name="kubelet-server-cert-files::/var/lib/kubelet/pki/kubelet.crt::/var/lib/kubelet/pki/kubelet.key"
Dec 13 09:12:13.230536 kubelet[1773]: W1213 09:12:13.230487 1773 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Service: services is forbidden: User "system:anonymous" cannot list resource "services" in API group "" at the cluster scope
Dec 13 09:12:13.231305 kubelet[1773]: E1213 09:12:13.231275 1773 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Service: failed to list *v1.Service: services is forbidden: User \"system:anonymous\" cannot list resource \"services\" in API group \"\" at the cluster scope" logger="UnhandledError"
Dec 13 09:12:13.231601 kubelet[1773]: W1213 09:12:13.231550 1773 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Node: nodes "147.182.199.141" is forbidden: User "system:anonymous" cannot list resource "nodes" in API group "" at the cluster scope
Dec 13 09:12:13.231654 kubelet[1773]: E1213 09:12:13.231599 1773 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Node: failed to list *v1.Node: nodes \"147.182.199.141\" is forbidden: User \"system:anonymous\" cannot list resource \"nodes\" in API group \"\" at the cluster scope" logger="UnhandledError"
Dec 13 09:12:13.239482 kubelet[1773]: W1213 09:12:13.239395 1773 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.CSIDriver: csidrivers.storage.k8s.io is forbidden: User "system:anonymous" cannot list resource "csidrivers" in API group "storage.k8s.io" at the cluster scope
Dec 13 09:12:13.239482 kubelet[1773]: E1213 09:12:13.239453 1773 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.CSIDriver: failed to list *v1.CSIDriver: csidrivers.storage.k8s.io is forbidden: User \"system:anonymous\" cannot list resource \"csidrivers\" in API group \"storage.k8s.io\" at the cluster scope" logger="UnhandledError"
Dec 13 09:12:13.239726 kubelet[1773]: E1213 09:12:13.239527 1773 controller.go:145] "Failed to ensure lease exists, will retry" err="leases.coordination.k8s.io \"147.182.199.141\" is forbidden: User \"system:anonymous\" cannot get resource \"leases\" in API group \"coordination.k8s.io\" in the namespace \"kube-node-lease\"" interval="200ms"
Dec 13 09:12:13.241622 kubelet[1773]: I1213 09:12:13.241386 1773 factory.go:221] Registration of the containerd container factory successfully
Dec 13 09:12:13.241622 kubelet[1773]: I1213 09:12:13.241423 1773 factory.go:221] Registration of the systemd container factory successfully
Dec 13 09:12:13.242097 kubelet[1773]: I1213 09:12:13.241872 1773 factory.go:219] Registration of the crio container factory failed: Get "http://%2Fvar%2Frun%2Fcrio%2Fcrio.sock/info": dial unix /var/run/crio/crio.sock: connect: no such file or directory
Dec 13 09:12:13.252390 kubelet[1773]: E1213 09:12:13.239715 1773 event.go:359] "Server rejected event (will not retry!)" err="events is forbidden: User \"system:anonymous\" cannot create resource \"events\" in API group \"\" in the namespace \"default\"" event="&Event{ObjectMeta:{147.182.199.141.1810b19a64d52aad default 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Node,Namespace:,Name:147.182.199.141,UID:147.182.199.141,APIVersion:,ResourceVersion:,FieldPath:,},Reason:Starting,Message:Starting kubelet.,Source:EventSource{Component:kubelet,Host:147.182.199.141,},FirstTimestamp:2024-12-13 09:12:13.212412589 +0000 UTC m=+0.835748586,LastTimestamp:2024-12-13 09:12:13.212412589 +0000 UTC m=+0.835748586,Count:1,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:147.182.199.141,}"
Dec 13 09:12:13.269143 kubelet[1773]: E1213 09:12:13.269108 1773 kubelet.go:1478] "Image garbage collection failed once. Stats initialization may not have completed yet" err="invalid capacity 0 on image filesystem"
Dec 13 09:12:13.277733 kubelet[1773]: E1213 09:12:13.277376 1773 event.go:359] "Server rejected event (will not retry!)" err="events is forbidden: User \"system:anonymous\" cannot create resource \"events\" in API group \"\" in the namespace \"default\"" event="&Event{ObjectMeta:{147.182.199.141.1810b19a6835e40f default 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Node,Namespace:,Name:147.182.199.141,UID:147.182.199.141,APIVersion:,ResourceVersion:,FieldPath:,},Reason:InvalidDiskCapacity,Message:invalid capacity 0 on image filesystem,Source:EventSource{Component:kubelet,Host:147.182.199.141,},FirstTimestamp:2024-12-13 09:12:13.269083151 +0000 UTC m=+0.892419155,LastTimestamp:2024-12-13 09:12:13.269083151 +0000 UTC m=+0.892419155,Count:1,Type:Warning,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:147.182.199.141,}"
Dec 13 09:12:13.288046 kubelet[1773]: I1213 09:12:13.287753 1773 cpu_manager.go:214] "Starting CPU manager" policy="none"
Dec 13 09:12:13.288046 kubelet[1773]: I1213 09:12:13.287790 1773 cpu_manager.go:215] "Reconciling" reconcilePeriod="10s"
Dec 13 09:12:13.288046 kubelet[1773]: I1213 09:12:13.287824 1773 state_mem.go:36] "Initialized new in-memory state store"
Dec 13 09:12:13.290872 kubelet[1773]: I1213 09:12:13.290796 1773 policy_none.go:49] "None policy: Start"
Dec 13 09:12:13.293061 kubelet[1773]: I1213 09:12:13.292972 1773 memory_manager.go:170] "Starting memorymanager" policy="None"
Dec 13 09:12:13.293061 kubelet[1773]: I1213 09:12:13.293076 1773 state_mem.go:35] "Initializing new in-memory state store"
Dec 13 09:12:13.313385 systemd[1]: Created slice kubepods.slice - libcontainer container kubepods.slice.
Dec 13 09:12:13.327929 kubelet[1773]: E1213 09:12:13.327125 1773 kubelet_node_status.go:453] "Error getting the current node from lister" err="node \"147.182.199.141\" not found"
Dec 13 09:12:13.330223 systemd[1]: Created slice kubepods-burstable.slice - libcontainer container kubepods-burstable.slice.
Dec 13 09:12:13.339560 systemd[1]: Created slice kubepods-besteffort.slice - libcontainer container kubepods-besteffort.slice.
Dec 13 09:12:13.348766 kubelet[1773]: I1213 09:12:13.348076 1773 manager.go:510] "Failed to read data from checkpoint" checkpoint="kubelet_internal_checkpoint" err="checkpoint is not found"
Dec 13 09:12:13.349590 kubelet[1773]: I1213 09:12:13.348962 1773 eviction_manager.go:189] "Eviction manager: starting control loop"
Dec 13 09:12:13.349590 kubelet[1773]: I1213 09:12:13.349252 1773 kubelet_network_linux.go:50] "Initialized iptables rules." protocol="IPv4"
Dec 13 09:12:13.349590 kubelet[1773]: I1213 09:12:13.349295 1773 container_log_manager.go:189] "Initializing container log rotate workers" workers=1 monitorPeriod="10s"
Dec 13 09:12:13.350470 kubelet[1773]: I1213 09:12:13.350367 1773 plugin_manager.go:118] "Starting Kubelet Plugin Manager"
Dec 13 09:12:13.356765 kubelet[1773]: I1213 09:12:13.356123 1773 kubelet_network_linux.go:50] "Initialized iptables rules." protocol="IPv6"
Dec 13 09:12:13.356765 kubelet[1773]: I1213 09:12:13.356182 1773 status_manager.go:217] "Starting to sync pod status with apiserver"
Dec 13 09:12:13.356765 kubelet[1773]: I1213 09:12:13.356230 1773 kubelet.go:2321] "Starting kubelet main sync loop"
Dec 13 09:12:13.356765 kubelet[1773]: E1213 09:12:13.356384 1773 kubelet.go:2345] "Skipping pod synchronization" err="PLEG is not healthy: pleg has yet to be successful"
Dec 13 09:12:13.358291 kubelet[1773]: E1213 09:12:13.358252 1773 eviction_manager.go:285] "Eviction manager: failed to get summary stats" err="failed to get node info: node \"147.182.199.141\" not found"
Dec 13 09:12:13.447079 kubelet[1773]: E1213 09:12:13.446873 1773 nodelease.go:49] "Failed to get node when trying to set owner ref to the node lease" err="nodes \"147.182.199.141\" not found" node="147.182.199.141"
Dec 13 09:12:13.451615 kubelet[1773]: I1213 09:12:13.451542 1773 kubelet_node_status.go:72] "Attempting to register node" node="147.182.199.141"
Dec 13 09:12:13.465988 kubelet[1773]: I1213 09:12:13.465946 1773 kubelet_node_status.go:75] "Successfully registered node" node="147.182.199.141"
Dec 13 09:12:13.466385 kubelet[1773]: E1213 09:12:13.466208 1773 kubelet_node_status.go:535] "Error updating node status, will retry" err="error getting node \"147.182.199.141\": node \"147.182.199.141\" not found"
Dec 13 09:12:13.502590 kubelet[1773]: E1213 09:12:13.502531 1773 kubelet_node_status.go:453] "Error getting the current node from lister" err="node \"147.182.199.141\" not found"
Dec 13 09:12:13.603725 kubelet[1773]: E1213 09:12:13.603661 1773 kubelet_node_status.go:453] "Error getting the current node from lister" err="node \"147.182.199.141\" not found"
Dec 13 09:12:13.704280 kubelet[1773]: E1213 09:12:13.703792 1773 kubelet_node_status.go:453] "Error getting the current node from lister" err="node \"147.182.199.141\" not found"
Dec 13 09:12:13.804046 kubelet[1773]: E1213 09:12:13.803959 1773 kubelet_node_status.go:453] "Error getting the current node from lister" err="node \"147.182.199.141\" not found"
Dec 13 09:12:13.904859 kubelet[1773]: E1213 09:12:13.904781 1773 kubelet_node_status.go:453] "Error getting the current node from lister" err="node \"147.182.199.141\" not found"
Dec 13 09:12:13.913188 sudo[1648]: pam_unix(sudo:session): session closed for user root
Dec 13 09:12:13.918317 sshd[1645]: pam_unix(sshd:session): session closed for user core
Dec 13 09:12:13.923530 systemd[1]: sshd@6-147.182.199.141:22-147.75.109.163:58974.service: Deactivated successfully.
Dec 13 09:12:13.925973 systemd[1]: session-7.scope: Deactivated successfully.
Dec 13 09:12:13.926845 systemd-logind[1448]: Session 7 logged out. Waiting for processes to exit.
Dec 13 09:12:13.928142 systemd-logind[1448]: Removed session 7.
Dec 13 09:12:14.006016 kubelet[1773]: E1213 09:12:14.005855 1773 kubelet_node_status.go:453] "Error getting the current node from lister" err="node \"147.182.199.141\" not found"
Dec 13 09:12:14.107285 kubelet[1773]: E1213 09:12:14.107176 1773 kubelet_node_status.go:453] "Error getting the current node from lister" err="node \"147.182.199.141\" not found"
Dec 13 09:12:14.142166 kubelet[1773]: I1213 09:12:14.141896 1773 transport.go:147] "Certificate rotation detected, shutting down client connections to start using new credentials"
Dec 13 09:12:14.142166 kubelet[1773]: W1213 09:12:14.142150 1773 reflector.go:484] k8s.io/client-go/informers/factory.go:160: watch of *v1.RuntimeClass ended with: very short watch: k8s.io/client-go/informers/factory.go:160: Unexpected watch close - watch lasted less than a second and no items received
Dec 13 09:12:14.204753 kubelet[1773]: E1213 09:12:14.204687 1773 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests"
Dec 13 09:12:14.208300 kubelet[1773]: E1213 09:12:14.208248 1773 kubelet_node_status.go:453] "Error getting the current node from lister" err="node \"147.182.199.141\" not found"
Dec 13 09:12:14.311199 kubelet[1773]: I1213 09:12:14.310351 1773 kuberuntime_manager.go:1633] "Updating runtime config through cri with podcidr" CIDR="192.168.1.0/24"
Dec 13 09:12:14.311374 containerd[1466]: time="2024-12-13T09:12:14.310761638Z" level=info msg="No cni config template is specified, wait for other system components to drop the config."
Dec 13 09:12:14.312291 kubelet[1773]: I1213 09:12:14.311897 1773 kubelet_network.go:61] "Updating Pod CIDR" originalPodCIDR="" newPodCIDR="192.168.1.0/24"
Dec 13 09:12:15.205143 kubelet[1773]: E1213 09:12:15.205078 1773 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests"
Dec 13 09:12:15.206243 kubelet[1773]: I1213 09:12:15.205855 1773 apiserver.go:52] "Watching apiserver"
Dec 13 09:12:15.212824 kubelet[1773]: E1213 09:12:15.211217 1773 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" pod="calico-system/csi-node-driver-znlc8" podUID="2ce4350c-2296-46ed-9e25-9ab657a354cd"
Dec 13 09:12:15.227590 systemd[1]: Created slice kubepods-besteffort-podc72cdc3a_0d98_46c8_9780_f793aeb39572.slice - libcontainer container kubepods-besteffort-podc72cdc3a_0d98_46c8_9780_f793aeb39572.slice.
Dec 13 09:12:15.228727 kubelet[1773]: I1213 09:12:15.228520 1773 desired_state_of_world_populator.go:154] "Finished populating initial desired state of world"
Dec 13 09:12:15.244080 kubelet[1773]: I1213 09:12:15.243876 1773 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-m8n7l\" (UniqueName: \"kubernetes.io/projected/2ce4350c-2296-46ed-9e25-9ab657a354cd-kube-api-access-m8n7l\") pod \"csi-node-driver-znlc8\" (UID: \"2ce4350c-2296-46ed-9e25-9ab657a354cd\") " pod="calico-system/csi-node-driver-znlc8"
Dec 13 09:12:15.246430 kubelet[1773]: I1213 09:12:15.246366 1773 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-proxy\" (UniqueName: \"kubernetes.io/configmap/c72cdc3a-0d98-46c8-9780-f793aeb39572-kube-proxy\") pod \"kube-proxy-4x2k9\" (UID: \"c72cdc3a-0d98-46c8-9780-f793aeb39572\") " pod="kube-system/kube-proxy-4x2k9"
Dec 13 09:12:15.246833 kubelet[1773]: I1213 09:12:15.246808 1773 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/c72cdc3a-0d98-46c8-9780-f793aeb39572-lib-modules\") pod \"kube-proxy-4x2k9\" (UID: \"c72cdc3a-0d98-46c8-9780-f793aeb39572\") " pod="kube-system/kube-proxy-4x2k9"
Dec 13 09:12:15.246961 kubelet[1773]: I1213 09:12:15.246944 1773 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/b957368e-b1b8-4528-959e-09c93f61a195-lib-modules\") pod \"calico-node-2t84k\" (UID: \"b957368e-b1b8-4528-959e-09c93f61a195\") " pod="calico-system/calico-node-2t84k"
Dec 13 09:12:15.247119 kubelet[1773]: I1213 09:12:15.247098 1773 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"tigera-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/b957368e-b1b8-4528-959e-09c93f61a195-tigera-ca-bundle\") pod \"calico-node-2t84k\" (UID: \"b957368e-b1b8-4528-959e-09c93f61a195\") " pod="calico-system/calico-node-2t84k"
Dec 13 09:12:15.248045 kubelet[1773]: I1213 09:12:15.247481 1773 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cni-net-dir\" (UniqueName: \"kubernetes.io/host-path/b957368e-b1b8-4528-959e-09c93f61a195-cni-net-dir\") pod \"calico-node-2t84k\" (UID: \"b957368e-b1b8-4528-959e-09c93f61a195\") " pod="calico-system/calico-node-2t84k"
Dec 13 09:12:15.249520 kubelet[1773]: I1213 09:12:15.248368 1773 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-6kxxt\" (UniqueName: \"kubernetes.io/projected/b957368e-b1b8-4528-959e-09c93f61a195-kube-api-access-6kxxt\") pod \"calico-node-2t84k\" (UID: \"b957368e-b1b8-4528-959e-09c93f61a195\") " pod="calico-system/calico-node-2t84k"
Dec 13 09:12:15.249292 systemd[1]: Created slice kubepods-besteffort-podb957368e_b1b8_4528_959e_09c93f61a195.slice - libcontainer container kubepods-besteffort-podb957368e_b1b8_4528_959e_09c93f61a195.slice.
Dec 13 09:12:15.251092 kubelet[1773]: I1213 09:12:15.250090 1773 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kubelet-dir\" (UniqueName: \"kubernetes.io/host-path/2ce4350c-2296-46ed-9e25-9ab657a354cd-kubelet-dir\") pod \"csi-node-driver-znlc8\" (UID: \"2ce4350c-2296-46ed-9e25-9ab657a354cd\") " pod="calico-system/csi-node-driver-znlc8"
Dec 13 09:12:15.251092 kubelet[1773]: I1213 09:12:15.250156 1773 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"varrun\" (UniqueName: \"kubernetes.io/host-path/2ce4350c-2296-46ed-9e25-9ab657a354cd-varrun\") pod \"csi-node-driver-znlc8\" (UID: \"2ce4350c-2296-46ed-9e25-9ab657a354cd\") " pod="calico-system/csi-node-driver-znlc8"
Dec 13 09:12:15.251092 kubelet[1773]: I1213 09:12:15.250195 1773 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"socket-dir\" (UniqueName: \"kubernetes.io/host-path/2ce4350c-2296-46ed-9e25-9ab657a354cd-socket-dir\") pod \"csi-node-driver-znlc8\" (UID: \"2ce4350c-2296-46ed-9e25-9ab657a354cd\") " pod="calico-system/csi-node-driver-znlc8"
Dec 13 09:12:15.251092 kubelet[1773]: I1213 09:12:15.250258 1773 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"xtables-lock\" (UniqueName: \"kubernetes.io/host-path/c72cdc3a-0d98-46c8-9780-f793aeb39572-xtables-lock\") pod \"kube-proxy-4x2k9\" (UID: \"c72cdc3a-0d98-46c8-9780-f793aeb39572\") " pod="kube-system/kube-proxy-4x2k9"
Dec 13 09:12:15.251092 kubelet[1773]: I1213 09:12:15.250291 1773 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"xtables-lock\" (UniqueName: \"kubernetes.io/host-path/b957368e-b1b8-4528-959e-09c93f61a195-xtables-lock\") pod \"calico-node-2t84k\" (UID: \"b957368e-b1b8-4528-959e-09c93f61a195\") " pod="calico-system/calico-node-2t84k"
Dec 13 09:12:15.251419 kubelet[1773]: I1213 09:12:15.250326 1773 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"policysync\" (UniqueName: \"kubernetes.io/host-path/b957368e-b1b8-4528-959e-09c93f61a195-policysync\") pod \"calico-node-2t84k\" (UID: \"b957368e-b1b8-4528-959e-09c93f61a195\") " pod="calico-system/calico-node-2t84k"
Dec 13 09:12:15.251419 kubelet[1773]: I1213 09:12:15.250356 1773 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"var-run-calico\" (UniqueName: \"kubernetes.io/host-path/b957368e-b1b8-4528-959e-09c93f61a195-var-run-calico\") pod \"calico-node-2t84k\" (UID: \"b957368e-b1b8-4528-959e-09c93f61a195\") " pod="calico-system/calico-node-2t84k"
Dec 13 09:12:15.251419 kubelet[1773]: I1213 09:12:15.250398 1773 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cni-bin-dir\" (UniqueName: \"kubernetes.io/host-path/b957368e-b1b8-4528-959e-09c93f61a195-cni-bin-dir\") pod \"calico-node-2t84k\" (UID: \"b957368e-b1b8-4528-959e-09c93f61a195\") " pod="calico-system/calico-node-2t84k"
Dec 13 09:12:15.251419 kubelet[1773]: I1213 09:12:15.250432 1773 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"flexvol-driver-host\" (UniqueName: \"kubernetes.io/host-path/b957368e-b1b8-4528-959e-09c93f61a195-flexvol-driver-host\") pod \"calico-node-2t84k\" (UID: \"b957368e-b1b8-4528-959e-09c93f61a195\") " pod="calico-system/calico-node-2t84k"
Dec 13 09:12:15.251419 kubelet[1773]: I1213 09:12:15.250459 1773 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"node-certs\" (UniqueName: \"kubernetes.io/secret/b957368e-b1b8-4528-959e-09c93f61a195-node-certs\") pod \"calico-node-2t84k\" (UID: \"b957368e-b1b8-4528-959e-09c93f61a195\") " pod="calico-system/calico-node-2t84k"
Dec 13 09:12:15.251610 kubelet[1773]: I1213 09:12:15.250491 1773 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"var-lib-calico\" (UniqueName: \"kubernetes.io/host-path/b957368e-b1b8-4528-959e-09c93f61a195-var-lib-calico\") pod \"calico-node-2t84k\" (UID: \"b957368e-b1b8-4528-959e-09c93f61a195\") " pod="calico-system/calico-node-2t84k"
Dec 13 09:12:15.251610 kubelet[1773]: I1213 09:12:15.250523 1773 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cni-log-dir\" (UniqueName: \"kubernetes.io/host-path/b957368e-b1b8-4528-959e-09c93f61a195-cni-log-dir\") pod \"calico-node-2t84k\" (UID: \"b957368e-b1b8-4528-959e-09c93f61a195\") " pod="calico-system/calico-node-2t84k"
Dec 13 09:12:15.251610 kubelet[1773]: I1213 09:12:15.250552 1773 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"registration-dir\" (UniqueName: \"kubernetes.io/host-path/2ce4350c-2296-46ed-9e25-9ab657a354cd-registration-dir\") pod \"csi-node-driver-znlc8\" (UID: \"2ce4350c-2296-46ed-9e25-9ab657a354cd\") " pod="calico-system/csi-node-driver-znlc8"
Dec 13 09:12:15.251610 kubelet[1773]: I1213 09:12:15.250576 1773 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-6kml4\" (UniqueName: \"kubernetes.io/projected/c72cdc3a-0d98-46c8-9780-f793aeb39572-kube-api-access-6kml4\") pod \"kube-proxy-4x2k9\" (UID: \"c72cdc3a-0d98-46c8-9780-f793aeb39572\") " pod="kube-system/kube-proxy-4x2k9"
Dec 13 09:12:15.358110 kubelet[1773]: E1213 09:12:15.358034 1773 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input
Dec 13 09:12:15.358110 kubelet[1773]: W1213 09:12:15.358095 1773 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: ""
Dec 13 09:12:15.358352 kubelet[1773]: E1213 09:12:15.358131 1773 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input"
Dec 13 09:12:15.358581 kubelet[1773]: E1213 09:12:15.358556 1773 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input
Dec 13 09:12:15.358581 kubelet[1773]: W1213 09:12:15.358576 1773 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: ""
Dec 13 09:12:15.358684 kubelet[1773]: E1213 09:12:15.358598 1773 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input"
Dec 13 09:12:15.365535 kubelet[1773]: E1213 09:12:15.365489 1773 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input
Dec 13 09:12:15.365535 kubelet[1773]: W1213 09:12:15.365520 1773 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: ""
Dec 13 09:12:15.365535 kubelet[1773]: E1213 09:12:15.365546 1773 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input"
Dec 13 09:12:15.396992 kubelet[1773]: E1213 09:12:15.396938 1773 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input
Dec 13 09:12:15.396992 kubelet[1773]: W1213 09:12:15.396977 1773 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: ""
Dec 13 09:12:15.396992 kubelet[1773]: E1213 09:12:15.397042 1773 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input"
Dec 13 09:12:15.410242 kubelet[1773]: E1213 09:12:15.409182 1773 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input
Dec 13 09:12:15.410242 kubelet[1773]: W1213 09:12:15.409239 1773 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: ""
Dec 13 09:12:15.410242 kubelet[1773]: E1213 09:12:15.409547 1773 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input"
Dec 13 09:12:15.411179 kubelet[1773]: E1213 09:12:15.411147 1773 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input
Dec 13 09:12:15.411377 kubelet[1773]: W1213 09:12:15.411302 1773 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: ""
Dec 13 09:12:15.411377 kubelet[1773]: E1213 09:12:15.411333 1773 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input"
Dec 13 09:12:15.544643 kubelet[1773]: E1213 09:12:15.543751 1773 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 67.207.67.3 67.207.67.2 67.207.67.3"
Dec 13 09:12:15.546268 containerd[1466]: time="2024-12-13T09:12:15.546048054Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-proxy-4x2k9,Uid:c72cdc3a-0d98-46c8-9780-f793aeb39572,Namespace:kube-system,Attempt:0,}"
Dec 13 09:12:15.559167 kubelet[1773]: E1213 09:12:15.558893 1773 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 67.207.67.3 67.207.67.2 67.207.67.3"
Dec 13 09:12:15.560134 containerd[1466]: time="2024-12-13T09:12:15.559987321Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-node-2t84k,Uid:b957368e-b1b8-4528-959e-09c93f61a195,Namespace:calico-system,Attempt:0,}"
Dec 13 09:12:16.110702 containerd[1466]: time="2024-12-13T09:12:16.110038161Z" level=info msg="ImageCreate event name:\"registry.k8s.io/pause:3.8\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"} labels:{key:\"io.cri-containerd.pinned\" value:\"pinned\"}"
Dec 13 09:12:16.111607 containerd[1466]: time="2024-12-13T09:12:16.111457107Z" level=info msg="ImageUpdate event name:\"registry.k8s.io/pause:3.8\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"} labels:{key:\"io.cri-containerd.pinned\" value:\"pinned\"}"
Dec 13 09:12:16.112926 containerd[1466]: time="2024-12-13T09:12:16.112876843Z" level=info msg="stop pulling image registry.k8s.io/pause:3.8: active requests=0, bytes read=0"
Dec 13 09:12:16.113534 containerd[1466]: time="2024-12-13T09:12:16.113474154Z" level=info msg="stop pulling image registry.k8s.io/pause:3.8: active requests=0, bytes read=312056"
Dec 13 09:12:16.114143 containerd[1466]: time="2024-12-13T09:12:16.114069555Z" level=info msg="ImageCreate event name:\"sha256:4873874c08efc72e9729683a83ffbb7502ee729e9a5ac097723806ea7fa13517\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"} labels:{key:\"io.cri-containerd.pinned\" value:\"pinned\"}"
Dec 13 09:12:16.118364 containerd[1466]: time="2024-12-13T09:12:16.118279013Z" level=info msg="ImageCreate event name:\"registry.k8s.io/pause@sha256:9001185023633d17a2f98ff69b6ff2615b8ea02a825adffa40422f51dfdcde9d\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"} labels:{key:\"io.cri-containerd.pinned\" value:\"pinned\"}"
Dec 13 09:12:16.119719 containerd[1466]: time="2024-12-13T09:12:16.119660546Z" level=info msg="Pulled image \"registry.k8s.io/pause:3.8\" with image id \"sha256:4873874c08efc72e9729683a83ffbb7502ee729e9a5ac097723806ea7fa13517\", repo tag \"registry.k8s.io/pause:3.8\", repo digest \"registry.k8s.io/pause@sha256:9001185023633d17a2f98ff69b6ff2615b8ea02a825adffa40422f51dfdcde9d\", size \"311286\" in 572.706356ms"
Dec 13 09:12:16.123029 containerd[1466]: time="2024-12-13T09:12:16.122950054Z" level=info msg="Pulled image \"registry.k8s.io/pause:3.8\" with image id \"sha256:4873874c08efc72e9729683a83ffbb7502ee729e9a5ac097723806ea7fa13517\", repo tag \"registry.k8s.io/pause:3.8\", repo digest \"registry.k8s.io/pause@sha256:9001185023633d17a2f98ff69b6ff2615b8ea02a825adffa40422f51dfdcde9d\", size \"311286\" in 562.74714ms"
Dec 13 09:12:16.205409 kubelet[1773]: E1213 09:12:16.205285 1773 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests"
Dec 13 09:12:16.277347 containerd[1466]: time="2024-12-13T09:12:16.277119844Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1
Dec 13 09:12:16.280187 containerd[1466]: time="2024-12-13T09:12:16.278785235Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1
Dec 13 09:12:16.280187 containerd[1466]: time="2024-12-13T09:12:16.278834895Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
Dec 13 09:12:16.280187 containerd[1466]: time="2024-12-13T09:12:16.278967990Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
Dec 13 09:12:16.280555 containerd[1466]: time="2024-12-13T09:12:16.280185446Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1
Dec 13 09:12:16.280555 containerd[1466]: time="2024-12-13T09:12:16.280282353Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1
Dec 13 09:12:16.280555 containerd[1466]: time="2024-12-13T09:12:16.280300549Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
Dec 13 09:12:16.280555 containerd[1466]: time="2024-12-13T09:12:16.280450542Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
Dec 13 09:12:16.371401 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount3974564908.mount: Deactivated successfully.
Dec 13 09:12:16.427313 systemd[1]: Started cri-containerd-6e7d021d1f35c79b34eee4e6d18deafd7a14d944301b53f1c8f0af1cdb4cd531.scope - libcontainer container 6e7d021d1f35c79b34eee4e6d18deafd7a14d944301b53f1c8f0af1cdb4cd531.
Dec 13 09:12:16.431238 systemd[1]: Started cri-containerd-7bc111cc740d9a7be8146d8ddc2bb7df6df0d157f99f4b23ba94cc28f597ae74.scope - libcontainer container 7bc111cc740d9a7be8146d8ddc2bb7df6df0d157f99f4b23ba94cc28f597ae74.
Dec 13 09:12:16.484867 containerd[1466]: time="2024-12-13T09:12:16.484585543Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-proxy-4x2k9,Uid:c72cdc3a-0d98-46c8-9780-f793aeb39572,Namespace:kube-system,Attempt:0,} returns sandbox id \"6e7d021d1f35c79b34eee4e6d18deafd7a14d944301b53f1c8f0af1cdb4cd531\""
Dec 13 09:12:16.487359 kubelet[1773]: E1213 09:12:16.486475 1773 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 67.207.67.3 67.207.67.2 67.207.67.3"
Dec 13 09:12:16.488810 containerd[1466]: time="2024-12-13T09:12:16.488752628Z" level=info msg="PullImage \"registry.k8s.io/kube-proxy:v1.31.4\""
Dec 13 09:12:16.496329 containerd[1466]: time="2024-12-13T09:12:16.496091200Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-node-2t84k,Uid:b957368e-b1b8-4528-959e-09c93f61a195,Namespace:calico-system,Attempt:0,} returns sandbox id \"7bc111cc740d9a7be8146d8ddc2bb7df6df0d157f99f4b23ba94cc28f597ae74\""
Dec 13 09:12:16.498133 kubelet[1773]: E1213 09:12:16.498093 1773 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 67.207.67.3 67.207.67.2 67.207.67.3"
Dec 13 09:12:17.205796 kubelet[1773]: E1213 09:12:17.205713 1773 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests"
Dec 13 09:12:17.357372 kubelet[1773]: E1213 09:12:17.357313 1773 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" pod="calico-system/csi-node-driver-znlc8" podUID="2ce4350c-2296-46ed-9e25-9ab657a354cd"
Dec 13 09:12:17.805623 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount1156086323.mount: Deactivated successfully.
Dec 13 09:12:18.206102 kubelet[1773]: E1213 09:12:18.205964 1773 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests"
Dec 13 09:12:18.485485 containerd[1466]: time="2024-12-13T09:12:18.484943510Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-proxy:v1.31.4\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Dec 13 09:12:18.486937 containerd[1466]: time="2024-12-13T09:12:18.486571585Z" level=info msg="stop pulling image registry.k8s.io/kube-proxy:v1.31.4: active requests=0, bytes read=30230243"
Dec 13 09:12:18.487688 containerd[1466]: time="2024-12-13T09:12:18.487633240Z" level=info msg="ImageCreate event name:\"sha256:ebf80573666f86f115452db568feb34f6f771c3bdc7bfed14b9577f992cfa300\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Dec 13 09:12:18.491140 containerd[1466]: time="2024-12-13T09:12:18.491076042Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-proxy@sha256:1739b3febca392035bf6edfe31efdfa55226be7b57389b2001ae357f7dcb99cf\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Dec 13 09:12:18.492247 containerd[1466]: time="2024-12-13T09:12:18.492189777Z" level=info msg="Pulled image \"registry.k8s.io/kube-proxy:v1.31.4\" with image id \"sha256:ebf80573666f86f115452db568feb34f6f771c3bdc7bfed14b9577f992cfa300\", repo tag \"registry.k8s.io/kube-proxy:v1.31.4\", repo digest \"registry.k8s.io/kube-proxy@sha256:1739b3febca392035bf6edfe31efdfa55226be7b57389b2001ae357f7dcb99cf\", size \"30229262\" in 2.003374516s"
Dec 13 09:12:18.492247 containerd[1466]: time="2024-12-13T09:12:18.492248001Z" level=info msg="PullImage \"registry.k8s.io/kube-proxy:v1.31.4\" returns image reference \"sha256:ebf80573666f86f115452db568feb34f6f771c3bdc7bfed14b9577f992cfa300\""
Dec 13 09:12:18.494671 containerd[1466]: time="2024-12-13T09:12:18.494409994Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/pod2daemon-flexvol:v3.29.1\""
Dec 13 09:12:18.498233 containerd[1466]: time="2024-12-13T09:12:18.498180330Z" level=info msg="CreateContainer within sandbox \"6e7d021d1f35c79b34eee4e6d18deafd7a14d944301b53f1c8f0af1cdb4cd531\" for container &ContainerMetadata{Name:kube-proxy,Attempt:0,}"
Dec 13 09:12:18.518610 containerd[1466]: time="2024-12-13T09:12:18.518527517Z" level=info msg="CreateContainer within sandbox \"6e7d021d1f35c79b34eee4e6d18deafd7a14d944301b53f1c8f0af1cdb4cd531\" for &ContainerMetadata{Name:kube-proxy,Attempt:0,} returns container id \"34c8b373048a2535bfcda6c30362950736908132894cbef53143852d950c1dc5\""
Dec 13 09:12:18.519955 containerd[1466]: time="2024-12-13T09:12:18.519841991Z" level=info msg="StartContainer for \"34c8b373048a2535bfcda6c30362950736908132894cbef53143852d950c1dc5\""
Dec 13 09:12:18.572085 systemd[1]: run-containerd-runc-k8s.io-34c8b373048a2535bfcda6c30362950736908132894cbef53143852d950c1dc5-runc.KMldEP.mount: Deactivated successfully.
Dec 13 09:12:18.581316 systemd[1]: Started cri-containerd-34c8b373048a2535bfcda6c30362950736908132894cbef53143852d950c1dc5.scope - libcontainer container 34c8b373048a2535bfcda6c30362950736908132894cbef53143852d950c1dc5.
Dec 13 09:12:18.631898 containerd[1466]: time="2024-12-13T09:12:18.631780825Z" level=info msg="StartContainer for \"34c8b373048a2535bfcda6c30362950736908132894cbef53143852d950c1dc5\" returns successfully"
Dec 13 09:12:19.208035 kubelet[1773]: E1213 09:12:19.207947 1773 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests"
Dec 13 09:12:19.357631 kubelet[1773]: E1213 09:12:19.357198 1773 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" pod="calico-system/csi-node-driver-znlc8" podUID="2ce4350c-2296-46ed-9e25-9ab657a354cd"
Dec 13 09:12:19.382893 kubelet[1773]: E1213 09:12:19.382757 1773 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 67.207.67.3 67.207.67.2 67.207.67.3"
Dec 13 09:12:19.413990 kubelet[1773]: I1213 09:12:19.413894 1773 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/kube-proxy-4x2k9" podStartSLOduration=4.408062386 podStartE2EDuration="6.413873235s" podCreationTimestamp="2024-12-13 09:12:13 +0000 UTC" firstStartedPulling="2024-12-13 09:12:16.488122489 +0000 UTC m=+4.111458479" lastFinishedPulling="2024-12-13 09:12:18.493933345 +0000 UTC m=+6.117269328" observedRunningTime="2024-12-13 09:12:19.413688775 +0000 UTC m=+7.037024780" watchObservedRunningTime="2024-12-13 09:12:19.413873235 +0000 UTC m=+7.037209236"
Dec 13 09:12:19.459638 kubelet[1773]: E1213 09:12:19.459507 1773 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input
Dec 13 09:12:19.460782 kubelet[1773]: W1213 09:12:19.460219 1773 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: ""
Dec 13 09:12:19.460782 kubelet[1773]: E1213 09:12:19.460308 1773 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input"
Dec 13 09:12:19.461118 kubelet[1773]: E1213 09:12:19.461083 1773 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input
Dec 13 09:12:19.461118 kubelet[1773]: W1213 09:12:19.461111 1773 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: ""
Dec 13 09:12:19.461302 kubelet[1773]: E1213 09:12:19.461142 1773 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input"
Dec 13 09:12:19.461549 kubelet[1773]: E1213 09:12:19.461474 1773 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input
Dec 13 09:12:19.461549 kubelet[1773]: W1213 09:12:19.461495 1773 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: ""
Dec 13 09:12:19.461549 kubelet[1773]: E1213 09:12:19.461528 1773 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input"
Dec 13 09:12:19.461978 kubelet[1773]: E1213 09:12:19.461837 1773 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input
Dec 13 09:12:19.461978 kubelet[1773]: W1213 09:12:19.461851 1773 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: ""
Dec 13 09:12:19.461978 kubelet[1773]: E1213 09:12:19.461868 1773 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input"
Dec 13 09:12:19.462228 kubelet[1773]: E1213 09:12:19.462218 1773 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input
Dec 13 09:12:19.462285 kubelet[1773]: W1213 09:12:19.462232 1773 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: ""
Dec 13 09:12:19.462285 kubelet[1773]: E1213 09:12:19.462263 1773 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Dec 13 09:12:19.462719 kubelet[1773]: E1213 09:12:19.462699 1773 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Dec 13 09:12:19.462719 kubelet[1773]: W1213 09:12:19.462715 1773 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Dec 13 09:12:19.462890 kubelet[1773]: E1213 09:12:19.462732 1773 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Dec 13 09:12:19.463118 kubelet[1773]: E1213 09:12:19.463097 1773 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Dec 13 09:12:19.463118 kubelet[1773]: W1213 09:12:19.463116 1773 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Dec 13 09:12:19.463289 kubelet[1773]: E1213 09:12:19.463132 1773 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Dec 13 09:12:19.463483 kubelet[1773]: E1213 09:12:19.463464 1773 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Dec 13 09:12:19.463483 kubelet[1773]: W1213 09:12:19.463482 1773 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Dec 13 09:12:19.463601 kubelet[1773]: E1213 09:12:19.463499 1773 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Dec 13 09:12:19.463771 kubelet[1773]: E1213 09:12:19.463757 1773 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Dec 13 09:12:19.463771 kubelet[1773]: W1213 09:12:19.463768 1773 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Dec 13 09:12:19.463982 kubelet[1773]: E1213 09:12:19.463779 1773 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Dec 13 09:12:19.463982 kubelet[1773]: E1213 09:12:19.463977 1773 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Dec 13 09:12:19.464133 kubelet[1773]: W1213 09:12:19.463988 1773 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Dec 13 09:12:19.464133 kubelet[1773]: E1213 09:12:19.464012 1773 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Dec 13 09:12:19.464242 kubelet[1773]: E1213 09:12:19.464190 1773 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Dec 13 09:12:19.464242 kubelet[1773]: W1213 09:12:19.464197 1773 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Dec 13 09:12:19.464242 kubelet[1773]: E1213 09:12:19.464206 1773 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Dec 13 09:12:19.464487 kubelet[1773]: E1213 09:12:19.464368 1773 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Dec 13 09:12:19.464487 kubelet[1773]: W1213 09:12:19.464375 1773 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Dec 13 09:12:19.464487 kubelet[1773]: E1213 09:12:19.464383 1773 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Dec 13 09:12:19.464632 kubelet[1773]: E1213 09:12:19.464578 1773 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Dec 13 09:12:19.464632 kubelet[1773]: W1213 09:12:19.464586 1773 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Dec 13 09:12:19.464632 kubelet[1773]: E1213 09:12:19.464594 1773 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Dec 13 09:12:19.464776 kubelet[1773]: E1213 09:12:19.464749 1773 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Dec 13 09:12:19.464776 kubelet[1773]: W1213 09:12:19.464755 1773 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Dec 13 09:12:19.464776 kubelet[1773]: E1213 09:12:19.464765 1773 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Dec 13 09:12:19.464932 kubelet[1773]: E1213 09:12:19.464918 1773 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Dec 13 09:12:19.464932 kubelet[1773]: W1213 09:12:19.464925 1773 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Dec 13 09:12:19.464932 kubelet[1773]: E1213 09:12:19.464932 1773 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Dec 13 09:12:19.465208 kubelet[1773]: E1213 09:12:19.465187 1773 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Dec 13 09:12:19.465208 kubelet[1773]: W1213 09:12:19.465206 1773 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Dec 13 09:12:19.465309 kubelet[1773]: E1213 09:12:19.465220 1773 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Dec 13 09:12:19.465445 kubelet[1773]: E1213 09:12:19.465430 1773 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Dec 13 09:12:19.465445 kubelet[1773]: W1213 09:12:19.465441 1773 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Dec 13 09:12:19.465559 kubelet[1773]: E1213 09:12:19.465450 1773 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Dec 13 09:12:19.465639 kubelet[1773]: E1213 09:12:19.465602 1773 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Dec 13 09:12:19.465639 kubelet[1773]: W1213 09:12:19.465609 1773 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Dec 13 09:12:19.465639 kubelet[1773]: E1213 09:12:19.465616 1773 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Dec 13 09:12:19.465795 kubelet[1773]: E1213 09:12:19.465753 1773 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Dec 13 09:12:19.465795 kubelet[1773]: W1213 09:12:19.465759 1773 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Dec 13 09:12:19.465795 kubelet[1773]: E1213 09:12:19.465768 1773 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Dec 13 09:12:19.465960 kubelet[1773]: E1213 09:12:19.465906 1773 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Dec 13 09:12:19.465960 kubelet[1773]: W1213 09:12:19.465912 1773 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Dec 13 09:12:19.465960 kubelet[1773]: E1213 09:12:19.465921 1773 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Dec 13 09:12:19.482907 kubelet[1773]: E1213 09:12:19.482244 1773 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Dec 13 09:12:19.482907 kubelet[1773]: W1213 09:12:19.482271 1773 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Dec 13 09:12:19.482907 kubelet[1773]: E1213 09:12:19.482295 1773 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Dec 13 09:12:19.484680 kubelet[1773]: E1213 09:12:19.484285 1773 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Dec 13 09:12:19.484680 kubelet[1773]: W1213 09:12:19.484317 1773 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Dec 13 09:12:19.484680 kubelet[1773]: E1213 09:12:19.484356 1773 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Dec 13 09:12:19.485681 kubelet[1773]: E1213 09:12:19.485416 1773 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Dec 13 09:12:19.485681 kubelet[1773]: W1213 09:12:19.485439 1773 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Dec 13 09:12:19.485681 kubelet[1773]: E1213 09:12:19.485465 1773 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Dec 13 09:12:19.486828 kubelet[1773]: E1213 09:12:19.486429 1773 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Dec 13 09:12:19.486828 kubelet[1773]: W1213 09:12:19.486447 1773 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Dec 13 09:12:19.486828 kubelet[1773]: E1213 09:12:19.486499 1773 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Dec 13 09:12:19.487609 kubelet[1773]: E1213 09:12:19.487255 1773 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Dec 13 09:12:19.487609 kubelet[1773]: W1213 09:12:19.487278 1773 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Dec 13 09:12:19.487609 kubelet[1773]: E1213 09:12:19.487318 1773 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Dec 13 09:12:19.488326 kubelet[1773]: E1213 09:12:19.488202 1773 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Dec 13 09:12:19.488326 kubelet[1773]: W1213 09:12:19.488222 1773 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Dec 13 09:12:19.488682 kubelet[1773]: E1213 09:12:19.488592 1773 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Dec 13 09:12:19.488783 kubelet[1773]: E1213 09:12:19.488764 1773 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Dec 13 09:12:19.489049 kubelet[1773]: W1213 09:12:19.488782 1773 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Dec 13 09:12:19.489049 kubelet[1773]: E1213 09:12:19.488807 1773 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Dec 13 09:12:19.489521 kubelet[1773]: E1213 09:12:19.489313 1773 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Dec 13 09:12:19.489521 kubelet[1773]: W1213 09:12:19.489333 1773 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Dec 13 09:12:19.489521 kubelet[1773]: E1213 09:12:19.489356 1773 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Dec 13 09:12:19.489887 kubelet[1773]: E1213 09:12:19.489870 1773 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Dec 13 09:12:19.490268 kubelet[1773]: W1213 09:12:19.489981 1773 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Dec 13 09:12:19.490268 kubelet[1773]: E1213 09:12:19.490036 1773 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Dec 13 09:12:19.490468 kubelet[1773]: E1213 09:12:19.490441 1773 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Dec 13 09:12:19.490468 kubelet[1773]: W1213 09:12:19.490462 1773 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Dec 13 09:12:19.490540 kubelet[1773]: E1213 09:12:19.490489 1773 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Dec 13 09:12:19.490834 kubelet[1773]: E1213 09:12:19.490766 1773 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Dec 13 09:12:19.490834 kubelet[1773]: W1213 09:12:19.490779 1773 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Dec 13 09:12:19.490834 kubelet[1773]: E1213 09:12:19.490794 1773 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Dec 13 09:12:19.491749 kubelet[1773]: E1213 09:12:19.491712 1773 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Dec 13 09:12:19.491935 kubelet[1773]: W1213 09:12:19.491902 1773 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Dec 13 09:12:19.491935 kubelet[1773]: E1213 09:12:19.491933 1773 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Dec 13 09:12:19.774056 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount4233251110.mount: Deactivated successfully. Dec 13 09:12:20.006096 containerd[1466]: time="2024-12-13T09:12:20.005527351Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/pod2daemon-flexvol:v3.29.1\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Dec 13 09:12:20.007515 containerd[1466]: time="2024-12-13T09:12:20.007451006Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/pod2daemon-flexvol:v3.29.1: active requests=0, bytes read=6855343" Dec 13 09:12:20.008517 containerd[1466]: time="2024-12-13T09:12:20.008459691Z" level=info msg="ImageCreate event name:\"sha256:2b7452b763ec8833ca0386ada5fd066e552a9b3b02b8538a5e34cc3d6d3840a6\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Dec 13 09:12:20.010903 containerd[1466]: time="2024-12-13T09:12:20.010820976Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/pod2daemon-flexvol@sha256:a63f8b4ff531912d12d143664eb263fdbc6cd7b3ff4aa777dfb6e318a090462c\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Dec 13 09:12:20.012152 containerd[1466]: time="2024-12-13T09:12:20.011943617Z" level=info msg="Pulled image \"ghcr.io/flatcar/calico/pod2daemon-flexvol:v3.29.1\" with 
image id \"sha256:2b7452b763ec8833ca0386ada5fd066e552a9b3b02b8538a5e34cc3d6d3840a6\", repo tag \"ghcr.io/flatcar/calico/pod2daemon-flexvol:v3.29.1\", repo digest \"ghcr.io/flatcar/calico/pod2daemon-flexvol@sha256:a63f8b4ff531912d12d143664eb263fdbc6cd7b3ff4aa777dfb6e318a090462c\", size \"6855165\" in 1.517336233s" Dec 13 09:12:20.012152 containerd[1466]: time="2024-12-13T09:12:20.011997208Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/pod2daemon-flexvol:v3.29.1\" returns image reference \"sha256:2b7452b763ec8833ca0386ada5fd066e552a9b3b02b8538a5e34cc3d6d3840a6\"" Dec 13 09:12:20.016112 containerd[1466]: time="2024-12-13T09:12:20.015835444Z" level=info msg="CreateContainer within sandbox \"7bc111cc740d9a7be8146d8ddc2bb7df6df0d157f99f4b23ba94cc28f597ae74\" for container &ContainerMetadata{Name:flexvol-driver,Attempt:0,}" Dec 13 09:12:20.046516 containerd[1466]: time="2024-12-13T09:12:20.045817179Z" level=info msg="CreateContainer within sandbox \"7bc111cc740d9a7be8146d8ddc2bb7df6df0d157f99f4b23ba94cc28f597ae74\" for &ContainerMetadata{Name:flexvol-driver,Attempt:0,} returns container id \"2f5f116e404b735d8ed3d07e311fa4a32560b5e08ac53168da017a493977c7b9\"" Dec 13 09:12:20.048089 containerd[1466]: time="2024-12-13T09:12:20.047367183Z" level=info msg="StartContainer for \"2f5f116e404b735d8ed3d07e311fa4a32560b5e08ac53168da017a493977c7b9\"" Dec 13 09:12:20.092703 systemd[1]: Started cri-containerd-2f5f116e404b735d8ed3d07e311fa4a32560b5e08ac53168da017a493977c7b9.scope - libcontainer container 2f5f116e404b735d8ed3d07e311fa4a32560b5e08ac53168da017a493977c7b9. Dec 13 09:12:20.140551 containerd[1466]: time="2024-12-13T09:12:20.140359254Z" level=info msg="StartContainer for \"2f5f116e404b735d8ed3d07e311fa4a32560b5e08ac53168da017a493977c7b9\" returns successfully" Dec 13 09:12:20.162956 systemd[1]: cri-containerd-2f5f116e404b735d8ed3d07e311fa4a32560b5e08ac53168da017a493977c7b9.scope: Deactivated successfully. 
Dec 13 09:12:20.208767 kubelet[1773]: E1213 09:12:20.208723 1773 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Dec 13 09:12:20.234425 containerd[1466]: time="2024-12-13T09:12:20.234049950Z" level=info msg="shim disconnected" id=2f5f116e404b735d8ed3d07e311fa4a32560b5e08ac53168da017a493977c7b9 namespace=k8s.io Dec 13 09:12:20.234425 containerd[1466]: time="2024-12-13T09:12:20.234145870Z" level=warning msg="cleaning up after shim disconnected" id=2f5f116e404b735d8ed3d07e311fa4a32560b5e08ac53168da017a493977c7b9 namespace=k8s.io Dec 13 09:12:20.234425 containerd[1466]: time="2024-12-13T09:12:20.234161312Z" level=info msg="cleaning up dead shim" namespace=k8s.io Dec 13 09:12:20.387376 kubelet[1773]: E1213 09:12:20.386320 1773 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 67.207.67.3 67.207.67.2 67.207.67.3" Dec 13 09:12:20.387376 kubelet[1773]: E1213 09:12:20.387046 1773 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 67.207.67.3 67.207.67.2 67.207.67.3" Dec 13 09:12:20.388189 containerd[1466]: time="2024-12-13T09:12:20.388146231Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/cni:v3.29.1\"" Dec 13 09:12:20.712037 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-2f5f116e404b735d8ed3d07e311fa4a32560b5e08ac53168da017a493977c7b9-rootfs.mount: Deactivated successfully. 
Dec 13 09:12:21.209470 kubelet[1773]: E1213 09:12:21.209399 1773 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Dec 13 09:12:21.357256 kubelet[1773]: E1213 09:12:21.356763 1773 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" pod="calico-system/csi-node-driver-znlc8" podUID="2ce4350c-2296-46ed-9e25-9ab657a354cd" Dec 13 09:12:21.729451 systemd-resolved[1323]: Using degraded feature set UDP instead of UDP+EDNS0 for DNS server 67.207.67.3. Dec 13 09:12:22.210413 kubelet[1773]: E1213 09:12:22.210286 1773 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Dec 13 09:12:23.210611 kubelet[1773]: E1213 09:12:23.210460 1773 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Dec 13 09:12:23.359156 kubelet[1773]: E1213 09:12:23.357952 1773 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" pod="calico-system/csi-node-driver-znlc8" podUID="2ce4350c-2296-46ed-9e25-9ab657a354cd" Dec 13 09:12:24.210773 kubelet[1773]: E1213 09:12:24.210706 1773 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Dec 13 09:12:24.388148 containerd[1466]: time="2024-12-13T09:12:24.387540984Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/cni:v3.29.1\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Dec 13 09:12:24.389338 containerd[1466]: time="2024-12-13T09:12:24.389134534Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/cni:v3.29.1: active 
requests=0, bytes read=96154154" Dec 13 09:12:24.390100 containerd[1466]: time="2024-12-13T09:12:24.390022920Z" level=info msg="ImageCreate event name:\"sha256:7dd6ea186aba0d7a1791a79d426fe854527ca95192b26bbd19e8baf8373f7d0e\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Dec 13 09:12:24.392118 containerd[1466]: time="2024-12-13T09:12:24.392061541Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/cni@sha256:21e759d51c90dfb34fc1397dc180dd3a3fb564c2b0580d2f61ffe108f2a3c94b\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Dec 13 09:12:24.393662 containerd[1466]: time="2024-12-13T09:12:24.393530008Z" level=info msg="Pulled image \"ghcr.io/flatcar/calico/cni:v3.29.1\" with image id \"sha256:7dd6ea186aba0d7a1791a79d426fe854527ca95192b26bbd19e8baf8373f7d0e\", repo tag \"ghcr.io/flatcar/calico/cni:v3.29.1\", repo digest \"ghcr.io/flatcar/calico/cni@sha256:21e759d51c90dfb34fc1397dc180dd3a3fb564c2b0580d2f61ffe108f2a3c94b\", size \"97647238\" in 4.00533655s" Dec 13 09:12:24.393943 containerd[1466]: time="2024-12-13T09:12:24.393599660Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/cni:v3.29.1\" returns image reference \"sha256:7dd6ea186aba0d7a1791a79d426fe854527ca95192b26bbd19e8baf8373f7d0e\"" Dec 13 09:12:24.397596 containerd[1466]: time="2024-12-13T09:12:24.397541240Z" level=info msg="CreateContainer within sandbox \"7bc111cc740d9a7be8146d8ddc2bb7df6df0d157f99f4b23ba94cc28f597ae74\" for container &ContainerMetadata{Name:install-cni,Attempt:0,}" Dec 13 09:12:24.416042 containerd[1466]: time="2024-12-13T09:12:24.415343816Z" level=info msg="CreateContainer within sandbox \"7bc111cc740d9a7be8146d8ddc2bb7df6df0d157f99f4b23ba94cc28f597ae74\" for &ContainerMetadata{Name:install-cni,Attempt:0,} returns container id \"b78864904f8a758a686ca967ec26a2ab8186a86c9d740cd1099f12212f22ec70\"" Dec 13 09:12:24.418328 containerd[1466]: time="2024-12-13T09:12:24.417298513Z" level=info msg="StartContainer for 
\"b78864904f8a758a686ca967ec26a2ab8186a86c9d740cd1099f12212f22ec70\"" Dec 13 09:12:24.418807 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount3454952561.mount: Deactivated successfully. Dec 13 09:12:24.481406 systemd[1]: Started cri-containerd-b78864904f8a758a686ca967ec26a2ab8186a86c9d740cd1099f12212f22ec70.scope - libcontainer container b78864904f8a758a686ca967ec26a2ab8186a86c9d740cd1099f12212f22ec70. Dec 13 09:12:24.541973 containerd[1466]: time="2024-12-13T09:12:24.540902411Z" level=info msg="StartContainer for \"b78864904f8a758a686ca967ec26a2ab8186a86c9d740cd1099f12212f22ec70\" returns successfully" Dec 13 09:12:25.211464 kubelet[1773]: E1213 09:12:25.211386 1773 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Dec 13 09:12:25.276625 systemd[1]: cri-containerd-b78864904f8a758a686ca967ec26a2ab8186a86c9d740cd1099f12212f22ec70.scope: Deactivated successfully. Dec 13 09:12:25.278598 kubelet[1773]: I1213 09:12:25.278502 1773 kubelet_node_status.go:488] "Fast updating node status as it just became ready" Dec 13 09:12:25.309646 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-b78864904f8a758a686ca967ec26a2ab8186a86c9d740cd1099f12212f22ec70-rootfs.mount: Deactivated successfully. 
Dec 13 09:12:25.351044 containerd[1466]: time="2024-12-13T09:12:25.350721822Z" level=info msg="shim disconnected" id=b78864904f8a758a686ca967ec26a2ab8186a86c9d740cd1099f12212f22ec70 namespace=k8s.io
Dec 13 09:12:25.351044 containerd[1466]: time="2024-12-13T09:12:25.350828028Z" level=warning msg="cleaning up after shim disconnected" id=b78864904f8a758a686ca967ec26a2ab8186a86c9d740cd1099f12212f22ec70 namespace=k8s.io
Dec 13 09:12:25.351044 containerd[1466]: time="2024-12-13T09:12:25.350843411Z" level=info msg="cleaning up dead shim" namespace=k8s.io
Dec 13 09:12:25.369489 systemd[1]: Created slice kubepods-besteffort-pod2ce4350c_2296_46ed_9e25_9ab657a354cd.slice - libcontainer container kubepods-besteffort-pod2ce4350c_2296_46ed_9e25_9ab657a354cd.slice.
Dec 13 09:12:25.373840 containerd[1466]: time="2024-12-13T09:12:25.373780926Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:csi-node-driver-znlc8,Uid:2ce4350c-2296-46ed-9e25-9ab657a354cd,Namespace:calico-system,Attempt:0,}"
Dec 13 09:12:25.400702 kubelet[1773]: E1213 09:12:25.400658 1773 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 67.207.67.3 67.207.67.2 67.207.67.3"
Dec 13 09:12:25.411432 containerd[1466]: time="2024-12-13T09:12:25.411384267Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/node:v3.29.1\""
Dec 13 09:12:25.487251 containerd[1466]: time="2024-12-13T09:12:25.487033101Z" level=error msg="Failed to destroy network for sandbox \"bed5d259b98103f9068f9e0015f8444979f27990b3666bfcc605f7d73e5d9426\"" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/"
Dec 13 09:12:25.488703 containerd[1466]: time="2024-12-13T09:12:25.488503223Z" level=error msg="encountered an error cleaning up failed sandbox \"bed5d259b98103f9068f9e0015f8444979f27990b3666bfcc605f7d73e5d9426\", marking sandbox state as SANDBOX_UNKNOWN" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/"
Dec 13 09:12:25.488703 containerd[1466]: time="2024-12-13T09:12:25.488590603Z" level=error msg="RunPodSandbox for &PodSandboxMetadata{Name:csi-node-driver-znlc8,Uid:2ce4350c-2296-46ed-9e25-9ab657a354cd,Namespace:calico-system,Attempt:0,} failed, error" error="failed to setup network for sandbox \"bed5d259b98103f9068f9e0015f8444979f27990b3666bfcc605f7d73e5d9426\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/"
Dec 13 09:12:25.489027 kubelet[1773]: E1213 09:12:25.488891 1773 log.go:32] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"bed5d259b98103f9068f9e0015f8444979f27990b3666bfcc605f7d73e5d9426\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/"
Dec 13 09:12:25.489027 kubelet[1773]: E1213 09:12:25.488974 1773 kuberuntime_sandbox.go:72] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"bed5d259b98103f9068f9e0015f8444979f27990b3666bfcc605f7d73e5d9426\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-system/csi-node-driver-znlc8"
Dec 13 09:12:25.489027 kubelet[1773]: E1213 09:12:25.489018 1773 kuberuntime_manager.go:1168] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"bed5d259b98103f9068f9e0015f8444979f27990b3666bfcc605f7d73e5d9426\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-system/csi-node-driver-znlc8"
Dec 13 09:12:25.489203 kubelet[1773]: E1213 09:12:25.489077 1773 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"csi-node-driver-znlc8_calico-system(2ce4350c-2296-46ed-9e25-9ab657a354cd)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"csi-node-driver-znlc8_calico-system(2ce4350c-2296-46ed-9e25-9ab657a354cd)\\\": rpc error: code = Unknown desc = failed to setup network for sandbox \\\"bed5d259b98103f9068f9e0015f8444979f27990b3666bfcc605f7d73e5d9426\\\": plugin type=\\\"calico\\\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="calico-system/csi-node-driver-znlc8" podUID="2ce4350c-2296-46ed-9e25-9ab657a354cd"
Dec 13 09:12:25.491403 systemd[1]: run-containerd-io.containerd.grpc.v1.cri-sandboxes-bed5d259b98103f9068f9e0015f8444979f27990b3666bfcc605f7d73e5d9426-shm.mount: Deactivated successfully.
Dec 13 09:12:26.211629 kubelet[1773]: E1213 09:12:26.211534 1773 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests"
Dec 13 09:12:26.404681 kubelet[1773]: I1213 09:12:26.404628 1773 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="bed5d259b98103f9068f9e0015f8444979f27990b3666bfcc605f7d73e5d9426"
Dec 13 09:12:26.405678 containerd[1466]: time="2024-12-13T09:12:26.405596629Z" level=info msg="StopPodSandbox for \"bed5d259b98103f9068f9e0015f8444979f27990b3666bfcc605f7d73e5d9426\""
Dec 13 09:12:26.405976 containerd[1466]: time="2024-12-13T09:12:26.405840317Z" level=info msg="Ensure that sandbox bed5d259b98103f9068f9e0015f8444979f27990b3666bfcc605f7d73e5d9426 in task-service has been cleanup successfully"
Dec 13 09:12:26.450284 containerd[1466]: time="2024-12-13T09:12:26.450133253Z" level=error msg="StopPodSandbox for \"bed5d259b98103f9068f9e0015f8444979f27990b3666bfcc605f7d73e5d9426\" failed" error="failed to destroy network for sandbox \"bed5d259b98103f9068f9e0015f8444979f27990b3666bfcc605f7d73e5d9426\": plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/"
Dec 13 09:12:26.450913 kubelet[1773]: E1213 09:12:26.450616 1773 log.go:32] "StopPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to destroy network for sandbox \"bed5d259b98103f9068f9e0015f8444979f27990b3666bfcc605f7d73e5d9426\": plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" podSandboxID="bed5d259b98103f9068f9e0015f8444979f27990b3666bfcc605f7d73e5d9426"
Dec 13 09:12:26.450913 kubelet[1773]: E1213 09:12:26.450705 1773 kuberuntime_manager.go:1477] "Failed to stop sandbox" podSandboxID={"Type":"containerd","ID":"bed5d259b98103f9068f9e0015f8444979f27990b3666bfcc605f7d73e5d9426"}
Dec 13 09:12:26.450913 kubelet[1773]: E1213 09:12:26.450813 1773 kuberuntime_manager.go:1077] "killPodWithSyncResult failed" err="failed to \"KillPodSandbox\" for \"2ce4350c-2296-46ed-9e25-9ab657a354cd\" with KillPodSandboxError: \"rpc error: code = Unknown desc = failed to destroy network for sandbox \\\"bed5d259b98103f9068f9e0015f8444979f27990b3666bfcc605f7d73e5d9426\\\": plugin type=\\\"calico\\\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\""
Dec 13 09:12:26.450913 kubelet[1773]: E1213 09:12:26.450847 1773 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"KillPodSandbox\" for \"2ce4350c-2296-46ed-9e25-9ab657a354cd\" with KillPodSandboxError: \"rpc error: code = Unknown desc = failed to destroy network for sandbox \\\"bed5d259b98103f9068f9e0015f8444979f27990b3666bfcc605f7d73e5d9426\\\": plugin type=\\\"calico\\\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="calico-system/csi-node-driver-znlc8" podUID="2ce4350c-2296-46ed-9e25-9ab657a354cd"
Dec 13 09:12:27.212632 kubelet[1773]: E1213 09:12:27.212549 1773 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests"
Dec 13 09:12:27.877801 systemd[1]: Created slice kubepods-besteffort-pod6397fde0_ff3d_48f0_9684_9b01322db26e.slice - libcontainer container kubepods-besteffort-pod6397fde0_ff3d_48f0_9684_9b01322db26e.slice.
Dec 13 09:12:27.943725 kubelet[1773]: I1213 09:12:27.943515 1773 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-t8f5c\" (UniqueName: \"kubernetes.io/projected/6397fde0-ff3d-48f0-9684-9b01322db26e-kube-api-access-t8f5c\") pod \"nginx-deployment-8587fbcb89-ghx4z\" (UID: \"6397fde0-ff3d-48f0-9684-9b01322db26e\") " pod="default/nginx-deployment-8587fbcb89-ghx4z"
Dec 13 09:12:28.184846 containerd[1466]: time="2024-12-13T09:12:28.184643031Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:nginx-deployment-8587fbcb89-ghx4z,Uid:6397fde0-ff3d-48f0-9684-9b01322db26e,Namespace:default,Attempt:0,}"
Dec 13 09:12:28.213610 kubelet[1773]: E1213 09:12:28.213183 1773 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests"
Dec 13 09:12:28.348725 containerd[1466]: time="2024-12-13T09:12:28.348496626Z" level=error msg="Failed to destroy network for sandbox \"8f4d337793bf7d4534bac769e108fd5f8e6417a4fab3022eee81c9ed54465160\"" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/"
Dec 13 09:12:28.353217 containerd[1466]: time="2024-12-13T09:12:28.351885511Z" level=error msg="encountered an error cleaning up failed sandbox \"8f4d337793bf7d4534bac769e108fd5f8e6417a4fab3022eee81c9ed54465160\", marking sandbox state as SANDBOX_UNKNOWN" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/"
Dec 13 09:12:28.353547 containerd[1466]: time="2024-12-13T09:12:28.353490626Z" level=error msg="RunPodSandbox for &PodSandboxMetadata{Name:nginx-deployment-8587fbcb89-ghx4z,Uid:6397fde0-ff3d-48f0-9684-9b01322db26e,Namespace:default,Attempt:0,} failed, error" error="failed to setup network for sandbox \"8f4d337793bf7d4534bac769e108fd5f8e6417a4fab3022eee81c9ed54465160\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/"
Dec 13 09:12:28.353700 systemd[1]: run-containerd-io.containerd.grpc.v1.cri-sandboxes-8f4d337793bf7d4534bac769e108fd5f8e6417a4fab3022eee81c9ed54465160-shm.mount: Deactivated successfully.
Dec 13 09:12:28.354286 kubelet[1773]: E1213 09:12:28.354233 1773 log.go:32] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"8f4d337793bf7d4534bac769e108fd5f8e6417a4fab3022eee81c9ed54465160\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/"
Dec 13 09:12:28.354945 kubelet[1773]: E1213 09:12:28.354473 1773 kuberuntime_sandbox.go:72] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"8f4d337793bf7d4534bac769e108fd5f8e6417a4fab3022eee81c9ed54465160\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="default/nginx-deployment-8587fbcb89-ghx4z"
Dec 13 09:12:28.354945 kubelet[1773]: E1213 09:12:28.354524 1773 kuberuntime_manager.go:1168] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"8f4d337793bf7d4534bac769e108fd5f8e6417a4fab3022eee81c9ed54465160\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="default/nginx-deployment-8587fbcb89-ghx4z"
Dec 13 09:12:28.354945 kubelet[1773]: E1213 09:12:28.354595 1773 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"nginx-deployment-8587fbcb89-ghx4z_default(6397fde0-ff3d-48f0-9684-9b01322db26e)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"nginx-deployment-8587fbcb89-ghx4z_default(6397fde0-ff3d-48f0-9684-9b01322db26e)\\\": rpc error: code = Unknown desc = failed to setup network for sandbox \\\"8f4d337793bf7d4534bac769e108fd5f8e6417a4fab3022eee81c9ed54465160\\\": plugin type=\\\"calico\\\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="default/nginx-deployment-8587fbcb89-ghx4z" podUID="6397fde0-ff3d-48f0-9684-9b01322db26e"
Dec 13 09:12:28.411852 kubelet[1773]: I1213 09:12:28.411749 1773 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="8f4d337793bf7d4534bac769e108fd5f8e6417a4fab3022eee81c9ed54465160"
Dec 13 09:12:28.412597 containerd[1466]: time="2024-12-13T09:12:28.412564599Z" level=info msg="StopPodSandbox for \"8f4d337793bf7d4534bac769e108fd5f8e6417a4fab3022eee81c9ed54465160\""
Dec 13 09:12:28.412958 containerd[1466]: time="2024-12-13T09:12:28.412928159Z" level=info msg="Ensure that sandbox 8f4d337793bf7d4534bac769e108fd5f8e6417a4fab3022eee81c9ed54465160 in task-service has been cleanup successfully"
Dec 13 09:12:28.483100 containerd[1466]: time="2024-12-13T09:12:28.482905575Z" level=error msg="StopPodSandbox for \"8f4d337793bf7d4534bac769e108fd5f8e6417a4fab3022eee81c9ed54465160\" failed" error="failed to destroy network for sandbox \"8f4d337793bf7d4534bac769e108fd5f8e6417a4fab3022eee81c9ed54465160\": plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/"
Dec 13 09:12:28.483330 kubelet[1773]: E1213 09:12:28.483282 1773 log.go:32] "StopPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to destroy network for sandbox \"8f4d337793bf7d4534bac769e108fd5f8e6417a4fab3022eee81c9ed54465160\": plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" podSandboxID="8f4d337793bf7d4534bac769e108fd5f8e6417a4fab3022eee81c9ed54465160"
Dec 13 09:12:28.483394 kubelet[1773]: E1213 09:12:28.483356 1773 kuberuntime_manager.go:1477] "Failed to stop sandbox" podSandboxID={"Type":"containerd","ID":"8f4d337793bf7d4534bac769e108fd5f8e6417a4fab3022eee81c9ed54465160"}
Dec 13 09:12:28.483471 kubelet[1773]: E1213 09:12:28.483416 1773 kuberuntime_manager.go:1077] "killPodWithSyncResult failed" err="failed to \"KillPodSandbox\" for \"6397fde0-ff3d-48f0-9684-9b01322db26e\" with KillPodSandboxError: \"rpc error: code = Unknown desc = failed to destroy network for sandbox \\\"8f4d337793bf7d4534bac769e108fd5f8e6417a4fab3022eee81c9ed54465160\\\": plugin type=\\\"calico\\\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\""
Dec 13 09:12:28.483575 kubelet[1773]: E1213 09:12:28.483494 1773 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"KillPodSandbox\" for \"6397fde0-ff3d-48f0-9684-9b01322db26e\" with KillPodSandboxError: \"rpc error: code = Unknown desc = failed to destroy network for sandbox \\\"8f4d337793bf7d4534bac769e108fd5f8e6417a4fab3022eee81c9ed54465160\\\": plugin type=\\\"calico\\\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="default/nginx-deployment-8587fbcb89-ghx4z" podUID="6397fde0-ff3d-48f0-9684-9b01322db26e"
Dec 13 09:12:28.513483 systemd-resolved[1323]: Using degraded feature set UDP instead of UDP+EDNS0 for DNS server 67.207.67.2.
Dec 13 09:12:29.214311 kubelet[1773]: E1213 09:12:29.214117 1773 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests"
Dec 13 09:12:30.215302 kubelet[1773]: E1213 09:12:30.215243 1773 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests"
Dec 13 09:12:31.216282 kubelet[1773]: E1213 09:12:31.216231 1773 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests"
Dec 13 09:12:32.217950 kubelet[1773]: E1213 09:12:32.217852 1773 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests"
Dec 13 09:12:32.461445 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount2981107602.mount: Deactivated successfully.
Dec 13 09:12:32.518177 containerd[1466]: time="2024-12-13T09:12:32.517977004Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/node:v3.29.1\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Dec 13 09:12:32.519966 containerd[1466]: time="2024-12-13T09:12:32.519053386Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/node:v3.29.1: active requests=0, bytes read=142742010"
Dec 13 09:12:32.521034 containerd[1466]: time="2024-12-13T09:12:32.520911267Z" level=info msg="ImageCreate event name:\"sha256:feb26d4585d68e875d9bd9bd6c27ea9f2d5c9ed9ef70f8b8cb0ebb0559a1d664\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Dec 13 09:12:32.523128 containerd[1466]: time="2024-12-13T09:12:32.523027964Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/node@sha256:99c3917516efe1f807a0cfdf2d14b628b7c5cc6bd8a9ee5a253154f31756bea1\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Dec 13 09:12:32.524148 containerd[1466]: time="2024-12-13T09:12:32.523939224Z" level=info msg="Pulled image \"ghcr.io/flatcar/calico/node:v3.29.1\" with image id \"sha256:feb26d4585d68e875d9bd9bd6c27ea9f2d5c9ed9ef70f8b8cb0ebb0559a1d664\", repo tag \"ghcr.io/flatcar/calico/node:v3.29.1\", repo digest \"ghcr.io/flatcar/calico/node@sha256:99c3917516efe1f807a0cfdf2d14b628b7c5cc6bd8a9ee5a253154f31756bea1\", size \"142741872\" in 7.111857146s"
Dec 13 09:12:32.524148 containerd[1466]: time="2024-12-13T09:12:32.523990728Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/node:v3.29.1\" returns image reference \"sha256:feb26d4585d68e875d9bd9bd6c27ea9f2d5c9ed9ef70f8b8cb0ebb0559a1d664\""
Dec 13 09:12:32.559688 containerd[1466]: time="2024-12-13T09:12:32.559623675Z" level=info msg="CreateContainer within sandbox \"7bc111cc740d9a7be8146d8ddc2bb7df6df0d157f99f4b23ba94cc28f597ae74\" for container &ContainerMetadata{Name:calico-node,Attempt:0,}"
Dec 13 09:12:32.589344 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount3172254963.mount: Deactivated successfully.
Dec 13 09:12:32.599744 containerd[1466]: time="2024-12-13T09:12:32.599554308Z" level=info msg="CreateContainer within sandbox \"7bc111cc740d9a7be8146d8ddc2bb7df6df0d157f99f4b23ba94cc28f597ae74\" for &ContainerMetadata{Name:calico-node,Attempt:0,} returns container id \"b949344a28f93acb07738ca6503ac87d3f3ea0f2e5d00944925e76ba901510f8\""
Dec 13 09:12:32.601941 containerd[1466]: time="2024-12-13T09:12:32.601777791Z" level=info msg="StartContainer for \"b949344a28f93acb07738ca6503ac87d3f3ea0f2e5d00944925e76ba901510f8\""
Dec 13 09:12:32.677233 systemd[1]: Started sshd@7-147.182.199.141:22-218.92.0.158:19154.service - OpenSSH per-connection server daemon (218.92.0.158:19154).
Dec 13 09:12:32.697831 systemd[1]: Started cri-containerd-b949344a28f93acb07738ca6503ac87d3f3ea0f2e5d00944925e76ba901510f8.scope - libcontainer container b949344a28f93acb07738ca6503ac87d3f3ea0f2e5d00944925e76ba901510f8.
Dec 13 09:12:32.778672 containerd[1466]: time="2024-12-13T09:12:32.778399061Z" level=info msg="StartContainer for \"b949344a28f93acb07738ca6503ac87d3f3ea0f2e5d00944925e76ba901510f8\" returns successfully"
Dec 13 09:12:32.918657 kernel: wireguard: WireGuard 1.0.0 loaded. See www.wireguard.com for information.
Dec 13 09:12:32.919187 kernel: wireguard: Copyright (C) 2015-2019 Jason A. Donenfeld . All Rights Reserved.
Dec 13 09:12:33.202872 kubelet[1773]: E1213 09:12:33.202826 1773 file.go:104] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests"
Dec 13 09:12:33.218752 kubelet[1773]: E1213 09:12:33.218673 1773 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests"
Dec 13 09:12:33.432807 kubelet[1773]: E1213 09:12:33.432756 1773 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 67.207.67.3 67.207.67.2 67.207.67.3"
Dec 13 09:12:33.488451 kubelet[1773]: I1213 09:12:33.488209 1773 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="calico-system/calico-node-2t84k" podStartSLOduration=4.461187839 podStartE2EDuration="20.48817792s" podCreationTimestamp="2024-12-13 09:12:13 +0000 UTC" firstStartedPulling="2024-12-13 09:12:16.499071718 +0000 UTC m=+4.122407699" lastFinishedPulling="2024-12-13 09:12:32.526061788 +0000 UTC m=+20.149397780" observedRunningTime="2024-12-13 09:12:33.486065357 +0000 UTC m=+21.109401362" watchObservedRunningTime="2024-12-13 09:12:33.48817792 +0000 UTC m=+21.111513921"
Dec 13 09:12:33.872071 sshd[2447]: pam_unix(sshd:auth): authentication failure; logname= uid=0 euid=0 tty=ssh ruser= rhost=218.92.0.158 user=root
Dec 13 09:12:34.220128 kubelet[1773]: E1213 09:12:34.219929 1773 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests"
Dec 13 09:12:34.438726 kubelet[1773]: I1213 09:12:34.436192 1773 prober_manager.go:312] "Failed to trigger a manual run" probe="Readiness"
Dec 13 09:12:34.438726 kubelet[1773]: E1213 09:12:34.436935 1773 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 67.207.67.3 67.207.67.2 67.207.67.3"
Dec 13 09:12:34.909085 kernel: bpftool[2551]: memfd_create() called without MFD_EXEC or MFD_NOEXEC_SEAL set
Dec 13 09:12:35.220946 kubelet[1773]: E1213 09:12:35.220594 1773 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests"
Dec 13 09:12:35.276926 systemd-networkd[1372]: vxlan.calico: Link UP
Dec 13 09:12:35.276939 systemd-networkd[1372]: vxlan.calico: Gained carrier
Dec 13 09:12:36.221256 kubelet[1773]: E1213 09:12:36.221185 1773 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests"
Dec 13 09:12:36.257504 sshd[2389]: PAM: Permission denied for root from 218.92.0.158
Dec 13 09:12:36.513476 systemd-networkd[1372]: vxlan.calico: Gained IPv6LL
Dec 13 09:12:37.222279 kubelet[1773]: E1213 09:12:37.222191 1773 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests"
Dec 13 09:12:37.433743 kubelet[1773]: I1213 09:12:37.433162 1773 prober_manager.go:312] "Failed to trigger a manual run" probe="Readiness"
Dec 13 09:12:37.433743 kubelet[1773]: E1213 09:12:37.433749 1773 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 67.207.67.3 67.207.67.2 67.207.67.3"
Dec 13 09:12:37.570394 systemd[1]: run-containerd-runc-k8s.io-b949344a28f93acb07738ca6503ac87d3f3ea0f2e5d00944925e76ba901510f8-runc.iByvhL.mount: Deactivated successfully.
Dec 13 09:12:37.635809 kubelet[1773]: E1213 09:12:37.635756 1773 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 67.207.67.3 67.207.67.2 67.207.67.3"
Dec 13 09:12:38.223456 kubelet[1773]: E1213 09:12:38.223350 1773 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests"
Dec 13 09:12:39.223908 kubelet[1773]: E1213 09:12:39.223835 1773 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests"
Dec 13 09:12:40.224594 kubelet[1773]: E1213 09:12:40.224519 1773 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests"
Dec 13 09:12:40.357948 containerd[1466]: time="2024-12-13T09:12:40.357874661Z" level=info msg="StopPodSandbox for \"bed5d259b98103f9068f9e0015f8444979f27990b3666bfcc605f7d73e5d9426\""
Dec 13 09:12:40.359409 containerd[1466]: time="2024-12-13T09:12:40.357911877Z" level=info msg="StopPodSandbox for \"8f4d337793bf7d4534bac769e108fd5f8e6417a4fab3022eee81c9ed54465160\""
Dec 13 09:12:40.561776 containerd[1466]: 2024-12-13 09:12:40.485 [INFO][2709] cni-plugin/k8s.go 608: Cleaning up netns ContainerID="bed5d259b98103f9068f9e0015f8444979f27990b3666bfcc605f7d73e5d9426"
Dec 13 09:12:40.561776 containerd[1466]: 2024-12-13 09:12:40.486 [INFO][2709] cni-plugin/dataplane_linux.go 559: Deleting workload's device in netns. ContainerID="bed5d259b98103f9068f9e0015f8444979f27990b3666bfcc605f7d73e5d9426" iface="eth0" netns="/var/run/netns/cni-7187020b-a8cb-2685-bd97-d8a905e15d85"
Dec 13 09:12:40.561776 containerd[1466]: 2024-12-13 09:12:40.486 [INFO][2709] cni-plugin/dataplane_linux.go 570: Entered netns, deleting veth. ContainerID="bed5d259b98103f9068f9e0015f8444979f27990b3666bfcc605f7d73e5d9426" iface="eth0" netns="/var/run/netns/cni-7187020b-a8cb-2685-bd97-d8a905e15d85"
Dec 13 09:12:40.561776 containerd[1466]: 2024-12-13 09:12:40.487 [INFO][2709] cni-plugin/dataplane_linux.go 597: Workload's veth was already gone. Nothing to do. ContainerID="bed5d259b98103f9068f9e0015f8444979f27990b3666bfcc605f7d73e5d9426" iface="eth0" netns="/var/run/netns/cni-7187020b-a8cb-2685-bd97-d8a905e15d85"
Dec 13 09:12:40.561776 containerd[1466]: 2024-12-13 09:12:40.487 [INFO][2709] cni-plugin/k8s.go 615: Releasing IP address(es) ContainerID="bed5d259b98103f9068f9e0015f8444979f27990b3666bfcc605f7d73e5d9426"
Dec 13 09:12:40.561776 containerd[1466]: 2024-12-13 09:12:40.487 [INFO][2709] cni-plugin/utils.go 188: Calico CNI releasing IP address ContainerID="bed5d259b98103f9068f9e0015f8444979f27990b3666bfcc605f7d73e5d9426"
Dec 13 09:12:40.561776 containerd[1466]: 2024-12-13 09:12:40.526 [INFO][2722] ipam/ipam_plugin.go 412: Releasing address using handleID ContainerID="bed5d259b98103f9068f9e0015f8444979f27990b3666bfcc605f7d73e5d9426" HandleID="k8s-pod-network.bed5d259b98103f9068f9e0015f8444979f27990b3666bfcc605f7d73e5d9426" Workload="147.182.199.141-k8s-csi--node--driver--znlc8-eth0"
Dec 13 09:12:40.561776 containerd[1466]: 2024-12-13 09:12:40.526 [INFO][2722] ipam/ipam_plugin.go 353: About to acquire host-wide IPAM lock.
Dec 13 09:12:40.561776 containerd[1466]: 2024-12-13 09:12:40.526 [INFO][2722] ipam/ipam_plugin.go 368: Acquired host-wide IPAM lock.
Dec 13 09:12:40.561776 containerd[1466]: 2024-12-13 09:12:40.546 [WARNING][2722] ipam/ipam_plugin.go 429: Asked to release address but it doesn't exist. Ignoring ContainerID="bed5d259b98103f9068f9e0015f8444979f27990b3666bfcc605f7d73e5d9426" HandleID="k8s-pod-network.bed5d259b98103f9068f9e0015f8444979f27990b3666bfcc605f7d73e5d9426" Workload="147.182.199.141-k8s-csi--node--driver--znlc8-eth0"
Dec 13 09:12:40.561776 containerd[1466]: 2024-12-13 09:12:40.546 [INFO][2722] ipam/ipam_plugin.go 440: Releasing address using workloadID ContainerID="bed5d259b98103f9068f9e0015f8444979f27990b3666bfcc605f7d73e5d9426" HandleID="k8s-pod-network.bed5d259b98103f9068f9e0015f8444979f27990b3666bfcc605f7d73e5d9426" Workload="147.182.199.141-k8s-csi--node--driver--znlc8-eth0"
Dec 13 09:12:40.561776 containerd[1466]: 2024-12-13 09:12:40.553 [INFO][2722] ipam/ipam_plugin.go 374: Released host-wide IPAM lock.
Dec 13 09:12:40.561776 containerd[1466]: 2024-12-13 09:12:40.559 [INFO][2709] cni-plugin/k8s.go 621: Teardown processing complete. ContainerID="bed5d259b98103f9068f9e0015f8444979f27990b3666bfcc605f7d73e5d9426"
Dec 13 09:12:40.567326 systemd[1]: run-netns-cni\x2d7187020b\x2da8cb\x2d2685\x2dbd97\x2dd8a905e15d85.mount: Deactivated successfully.
Dec 13 09:12:40.570163 containerd[1466]: time="2024-12-13T09:12:40.569899411Z" level=info msg="TearDown network for sandbox \"bed5d259b98103f9068f9e0015f8444979f27990b3666bfcc605f7d73e5d9426\" successfully"
Dec 13 09:12:40.570163 containerd[1466]: time="2024-12-13T09:12:40.569975455Z" level=info msg="StopPodSandbox for \"bed5d259b98103f9068f9e0015f8444979f27990b3666bfcc605f7d73e5d9426\" returns successfully"
Dec 13 09:12:40.571510 containerd[1466]: time="2024-12-13T09:12:40.571433828Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:csi-node-driver-znlc8,Uid:2ce4350c-2296-46ed-9e25-9ab657a354cd,Namespace:calico-system,Attempt:1,}"
Dec 13 09:12:40.582698 containerd[1466]: 2024-12-13 09:12:40.477 [INFO][2710] cni-plugin/k8s.go 608: Cleaning up netns ContainerID="8f4d337793bf7d4534bac769e108fd5f8e6417a4fab3022eee81c9ed54465160"
Dec 13 09:12:40.582698 containerd[1466]: 2024-12-13 09:12:40.477 [INFO][2710] cni-plugin/dataplane_linux.go 559: Deleting workload's device in netns. ContainerID="8f4d337793bf7d4534bac769e108fd5f8e6417a4fab3022eee81c9ed54465160" iface="eth0" netns="/var/run/netns/cni-861f1298-1fbe-a33d-18c9-f5045fd5216b"
Dec 13 09:12:40.582698 containerd[1466]: 2024-12-13 09:12:40.477 [INFO][2710] cni-plugin/dataplane_linux.go 570: Entered netns, deleting veth. ContainerID="8f4d337793bf7d4534bac769e108fd5f8e6417a4fab3022eee81c9ed54465160" iface="eth0" netns="/var/run/netns/cni-861f1298-1fbe-a33d-18c9-f5045fd5216b"
Dec 13 09:12:40.582698 containerd[1466]: 2024-12-13 09:12:40.480 [INFO][2710] cni-plugin/dataplane_linux.go 597: Workload's veth was already gone. Nothing to do. ContainerID="8f4d337793bf7d4534bac769e108fd5f8e6417a4fab3022eee81c9ed54465160" iface="eth0" netns="/var/run/netns/cni-861f1298-1fbe-a33d-18c9-f5045fd5216b"
Dec 13 09:12:40.582698 containerd[1466]: 2024-12-13 09:12:40.480 [INFO][2710] cni-plugin/k8s.go 615: Releasing IP address(es) ContainerID="8f4d337793bf7d4534bac769e108fd5f8e6417a4fab3022eee81c9ed54465160"
Dec 13 09:12:40.582698 containerd[1466]: 2024-12-13 09:12:40.480 [INFO][2710] cni-plugin/utils.go 188: Calico CNI releasing IP address ContainerID="8f4d337793bf7d4534bac769e108fd5f8e6417a4fab3022eee81c9ed54465160"
Dec 13 09:12:40.582698 containerd[1466]: 2024-12-13 09:12:40.541 [INFO][2721] ipam/ipam_plugin.go 412: Releasing address using handleID ContainerID="8f4d337793bf7d4534bac769e108fd5f8e6417a4fab3022eee81c9ed54465160" HandleID="k8s-pod-network.8f4d337793bf7d4534bac769e108fd5f8e6417a4fab3022eee81c9ed54465160" Workload="147.182.199.141-k8s-nginx--deployment--8587fbcb89--ghx4z-eth0"
Dec 13 09:12:40.582698 containerd[1466]: 2024-12-13 09:12:40.542 [INFO][2721] ipam/ipam_plugin.go 353: About to acquire host-wide IPAM lock.
Dec 13 09:12:40.582698 containerd[1466]: 2024-12-13 09:12:40.553 [INFO][2721] ipam/ipam_plugin.go 368: Acquired host-wide IPAM lock.
Dec 13 09:12:40.582698 containerd[1466]: 2024-12-13 09:12:40.573 [WARNING][2721] ipam/ipam_plugin.go 429: Asked to release address but it doesn't exist. Ignoring ContainerID="8f4d337793bf7d4534bac769e108fd5f8e6417a4fab3022eee81c9ed54465160" HandleID="k8s-pod-network.8f4d337793bf7d4534bac769e108fd5f8e6417a4fab3022eee81c9ed54465160" Workload="147.182.199.141-k8s-nginx--deployment--8587fbcb89--ghx4z-eth0"
Dec 13 09:12:40.582698 containerd[1466]: 2024-12-13 09:12:40.573 [INFO][2721] ipam/ipam_plugin.go 440: Releasing address using workloadID ContainerID="8f4d337793bf7d4534bac769e108fd5f8e6417a4fab3022eee81c9ed54465160" HandleID="k8s-pod-network.8f4d337793bf7d4534bac769e108fd5f8e6417a4fab3022eee81c9ed54465160" Workload="147.182.199.141-k8s-nginx--deployment--8587fbcb89--ghx4z-eth0"
Dec 13 09:12:40.582698 containerd[1466]: 2024-12-13 09:12:40.577 [INFO][2721] ipam/ipam_plugin.go 374: Released host-wide IPAM lock.
Dec 13 09:12:40.582698 containerd[1466]: 2024-12-13 09:12:40.580 [INFO][2710] cni-plugin/k8s.go 621: Teardown processing complete. ContainerID="8f4d337793bf7d4534bac769e108fd5f8e6417a4fab3022eee81c9ed54465160"
Dec 13 09:12:40.586946 containerd[1466]: time="2024-12-13T09:12:40.583240142Z" level=info msg="TearDown network for sandbox \"8f4d337793bf7d4534bac769e108fd5f8e6417a4fab3022eee81c9ed54465160\" successfully"
Dec 13 09:12:40.586946 containerd[1466]: time="2024-12-13T09:12:40.583284838Z" level=info msg="StopPodSandbox for \"8f4d337793bf7d4534bac769e108fd5f8e6417a4fab3022eee81c9ed54465160\" returns successfully"
Dec 13 09:12:40.586946 containerd[1466]: time="2024-12-13T09:12:40.585356037Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:nginx-deployment-8587fbcb89-ghx4z,Uid:6397fde0-ff3d-48f0-9684-9b01322db26e,Namespace:default,Attempt:1,}"
Dec 13 09:12:40.587928 systemd[1]: run-netns-cni\x2d861f1298\x2d1fbe\x2da33d\x2d18c9\x2df5045fd5216b.mount: Deactivated successfully.
Dec 13 09:12:40.912635 systemd-networkd[1372]: calibadd869fab6: Link UP
Dec 13 09:12:40.914761 systemd-networkd[1372]: calibadd869fab6: Gained carrier
Dec 13 09:12:40.940593 containerd[1466]: 2024-12-13 09:12:40.701 [INFO][2735] cni-plugin/plugin.go 325: Calico CNI found existing endpoint: &{{WorkloadEndpoint projectcalico.org/v3} {147.182.199.141-k8s-csi--node--driver--znlc8-eth0 csi-node-driver- calico-system 2ce4350c-2296-46ed-9e25-9ab657a354cd 1082 0 2024-12-13 09:12:13 +0000 UTC map[app.kubernetes.io/name:csi-node-driver controller-revision-hash:56747c9949 k8s-app:csi-node-driver name:csi-node-driver pod-template-generation:1 projectcalico.org/namespace:calico-system projectcalico.org/orchestrator:k8s projectcalico.org/serviceaccount:csi-node-driver] map[] [] [] []} {k8s 147.182.199.141 csi-node-driver-znlc8 eth0 csi-node-driver [] [] [kns.calico-system ksa.calico-system.csi-node-driver] calibadd869fab6 [] []}} ContainerID="5a242dae19c5d6ba67f5eb2d0f90e926f9417525879b6ff55d1f83ba09c4e2ee" Namespace="calico-system" Pod="csi-node-driver-znlc8" WorkloadEndpoint="147.182.199.141-k8s-csi--node--driver--znlc8-"
Dec 13 09:12:40.940593 containerd[1466]: 2024-12-13 09:12:40.702 [INFO][2735] cni-plugin/k8s.go 77: Extracted identifiers for CmdAddK8s ContainerID="5a242dae19c5d6ba67f5eb2d0f90e926f9417525879b6ff55d1f83ba09c4e2ee" Namespace="calico-system" Pod="csi-node-driver-znlc8" WorkloadEndpoint="147.182.199.141-k8s-csi--node--driver--znlc8-eth0"
Dec 13 09:12:40.940593 containerd[1466]: 2024-12-13 09:12:40.797 [INFO][2761] ipam/ipam_plugin.go 225: Calico CNI IPAM request count IPv4=1 IPv6=0 ContainerID="5a242dae19c5d6ba67f5eb2d0f90e926f9417525879b6ff55d1f83ba09c4e2ee" HandleID="k8s-pod-network.5a242dae19c5d6ba67f5eb2d0f90e926f9417525879b6ff55d1f83ba09c4e2ee" Workload="147.182.199.141-k8s-csi--node--driver--znlc8-eth0"
Dec 13 09:12:40.940593 containerd[1466]: 2024-12-13 09:12:40.837 [INFO][2761] ipam/ipam_plugin.go 265: Auto assigning IP ContainerID="5a242dae19c5d6ba67f5eb2d0f90e926f9417525879b6ff55d1f83ba09c4e2ee" HandleID="k8s-pod-network.5a242dae19c5d6ba67f5eb2d0f90e926f9417525879b6ff55d1f83ba09c4e2ee" Workload="147.182.199.141-k8s-csi--node--driver--znlc8-eth0" assignArgs=ipam.AutoAssignArgs{Num4:1, Num6:0, HandleID:(*string)(0xc000051cf0), Attrs:map[string]string{"namespace":"calico-system", "node":"147.182.199.141", "pod":"csi-node-driver-znlc8", "timestamp":"2024-12-13 09:12:40.797806002 +0000 UTC"}, Hostname:"147.182.199.141", IPv4Pools:[]net.IPNet{}, IPv6Pools:[]net.IPNet{}, MaxBlocksPerHost:0, HostReservedAttrIPv4s:(*ipam.HostReservedAttr)(nil), HostReservedAttrIPv6s:(*ipam.HostReservedAttr)(nil), IntendedUse:"Workload"}
Dec 13 09:12:40.940593 containerd[1466]: 2024-12-13 09:12:40.837 [INFO][2761] ipam/ipam_plugin.go 353: About to acquire host-wide IPAM lock.
Dec 13 09:12:40.940593 containerd[1466]: 2024-12-13 09:12:40.837 [INFO][2761] ipam/ipam_plugin.go 368: Acquired host-wide IPAM lock.
Dec 13 09:12:40.940593 containerd[1466]: 2024-12-13 09:12:40.837 [INFO][2761] ipam/ipam.go 107: Auto-assign 1 ipv4, 0 ipv6 addrs for host '147.182.199.141'
Dec 13 09:12:40.940593 containerd[1466]: 2024-12-13 09:12:40.848 [INFO][2761] ipam/ipam.go 660: Looking up existing affinities for host handle="k8s-pod-network.5a242dae19c5d6ba67f5eb2d0f90e926f9417525879b6ff55d1f83ba09c4e2ee" host="147.182.199.141"
Dec 13 09:12:40.940593 containerd[1466]: 2024-12-13 09:12:40.859 [INFO][2761] ipam/ipam.go 372: Looking up existing affinities for host host="147.182.199.141"
Dec 13 09:12:40.940593 containerd[1466]: 2024-12-13 09:12:40.870 [INFO][2761] ipam/ipam.go 489: Trying affinity for 192.168.107.192/26 host="147.182.199.141"
Dec 13 09:12:40.940593 containerd[1466]: 2024-12-13 09:12:40.875 [INFO][2761] ipam/ipam.go 155: Attempting to load block cidr=192.168.107.192/26 host="147.182.199.141"
Dec 13 09:12:40.940593 containerd[1466]: 2024-12-13 09:12:40.880 [INFO][2761] ipam/ipam.go 232: Affinity is confirmed and block
has been loaded cidr=192.168.107.192/26 host="147.182.199.141" Dec 13 09:12:40.940593 containerd[1466]: 2024-12-13 09:12:40.880 [INFO][2761] ipam/ipam.go 1180: Attempting to assign 1 addresses from block block=192.168.107.192/26 handle="k8s-pod-network.5a242dae19c5d6ba67f5eb2d0f90e926f9417525879b6ff55d1f83ba09c4e2ee" host="147.182.199.141" Dec 13 09:12:40.940593 containerd[1466]: 2024-12-13 09:12:40.885 [INFO][2761] ipam/ipam.go 1685: Creating new handle: k8s-pod-network.5a242dae19c5d6ba67f5eb2d0f90e926f9417525879b6ff55d1f83ba09c4e2ee Dec 13 09:12:40.940593 containerd[1466]: 2024-12-13 09:12:40.894 [INFO][2761] ipam/ipam.go 1203: Writing block in order to claim IPs block=192.168.107.192/26 handle="k8s-pod-network.5a242dae19c5d6ba67f5eb2d0f90e926f9417525879b6ff55d1f83ba09c4e2ee" host="147.182.199.141" Dec 13 09:12:40.940593 containerd[1466]: 2024-12-13 09:12:40.903 [INFO][2761] ipam/ipam.go 1216: Successfully claimed IPs: [192.168.107.193/26] block=192.168.107.192/26 handle="k8s-pod-network.5a242dae19c5d6ba67f5eb2d0f90e926f9417525879b6ff55d1f83ba09c4e2ee" host="147.182.199.141" Dec 13 09:12:40.940593 containerd[1466]: 2024-12-13 09:12:40.903 [INFO][2761] ipam/ipam.go 847: Auto-assigned 1 out of 1 IPv4s: [192.168.107.193/26] handle="k8s-pod-network.5a242dae19c5d6ba67f5eb2d0f90e926f9417525879b6ff55d1f83ba09c4e2ee" host="147.182.199.141" Dec 13 09:12:40.940593 containerd[1466]: 2024-12-13 09:12:40.903 [INFO][2761] ipam/ipam_plugin.go 374: Released host-wide IPAM lock. 
Dec 13 09:12:40.940593 containerd[1466]: 2024-12-13 09:12:40.903 [INFO][2761] ipam/ipam_plugin.go 283: Calico CNI IPAM assigned addresses IPv4=[192.168.107.193/26] IPv6=[] ContainerID="5a242dae19c5d6ba67f5eb2d0f90e926f9417525879b6ff55d1f83ba09c4e2ee" HandleID="k8s-pod-network.5a242dae19c5d6ba67f5eb2d0f90e926f9417525879b6ff55d1f83ba09c4e2ee" Workload="147.182.199.141-k8s-csi--node--driver--znlc8-eth0" Dec 13 09:12:40.943778 containerd[1466]: 2024-12-13 09:12:40.906 [INFO][2735] cni-plugin/k8s.go 386: Populated endpoint ContainerID="5a242dae19c5d6ba67f5eb2d0f90e926f9417525879b6ff55d1f83ba09c4e2ee" Namespace="calico-system" Pod="csi-node-driver-znlc8" WorkloadEndpoint="147.182.199.141-k8s-csi--node--driver--znlc8-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"147.182.199.141-k8s-csi--node--driver--znlc8-eth0", GenerateName:"csi-node-driver-", Namespace:"calico-system", SelfLink:"", UID:"2ce4350c-2296-46ed-9e25-9ab657a354cd", ResourceVersion:"1082", Generation:0, CreationTimestamp:time.Date(2024, time.December, 13, 9, 12, 13, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"app.kubernetes.io/name":"csi-node-driver", "controller-revision-hash":"56747c9949", "k8s-app":"csi-node-driver", "name":"csi-node-driver", "pod-template-generation":"1", "projectcalico.org/namespace":"calico-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"csi-node-driver"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"147.182.199.141", ContainerID:"", Pod:"csi-node-driver-znlc8", Endpoint:"eth0", ServiceAccountName:"csi-node-driver", IPNetworks:[]string{"192.168.107.193/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", 
Profiles:[]string{"kns.calico-system", "ksa.calico-system.csi-node-driver"}, InterfaceName:"calibadd869fab6", MAC:"", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil)}} Dec 13 09:12:40.943778 containerd[1466]: 2024-12-13 09:12:40.907 [INFO][2735] cni-plugin/k8s.go 387: Calico CNI using IPs: [192.168.107.193/32] ContainerID="5a242dae19c5d6ba67f5eb2d0f90e926f9417525879b6ff55d1f83ba09c4e2ee" Namespace="calico-system" Pod="csi-node-driver-znlc8" WorkloadEndpoint="147.182.199.141-k8s-csi--node--driver--znlc8-eth0" Dec 13 09:12:40.943778 containerd[1466]: 2024-12-13 09:12:40.907 [INFO][2735] cni-plugin/dataplane_linux.go 69: Setting the host side veth name to calibadd869fab6 ContainerID="5a242dae19c5d6ba67f5eb2d0f90e926f9417525879b6ff55d1f83ba09c4e2ee" Namespace="calico-system" Pod="csi-node-driver-znlc8" WorkloadEndpoint="147.182.199.141-k8s-csi--node--driver--znlc8-eth0" Dec 13 09:12:40.943778 containerd[1466]: 2024-12-13 09:12:40.915 [INFO][2735] cni-plugin/dataplane_linux.go 508: Disabling IPv4 forwarding ContainerID="5a242dae19c5d6ba67f5eb2d0f90e926f9417525879b6ff55d1f83ba09c4e2ee" Namespace="calico-system" Pod="csi-node-driver-znlc8" WorkloadEndpoint="147.182.199.141-k8s-csi--node--driver--znlc8-eth0" Dec 13 09:12:40.943778 containerd[1466]: 2024-12-13 09:12:40.916 [INFO][2735] cni-plugin/k8s.go 414: Added Mac, interface name, and active container ID to endpoint ContainerID="5a242dae19c5d6ba67f5eb2d0f90e926f9417525879b6ff55d1f83ba09c4e2ee" Namespace="calico-system" Pod="csi-node-driver-znlc8" WorkloadEndpoint="147.182.199.141-k8s-csi--node--driver--znlc8-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"147.182.199.141-k8s-csi--node--driver--znlc8-eth0", GenerateName:"csi-node-driver-", Namespace:"calico-system", SelfLink:"", UID:"2ce4350c-2296-46ed-9e25-9ab657a354cd", ResourceVersion:"1082", Generation:0, 
CreationTimestamp:time.Date(2024, time.December, 13, 9, 12, 13, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"app.kubernetes.io/name":"csi-node-driver", "controller-revision-hash":"56747c9949", "k8s-app":"csi-node-driver", "name":"csi-node-driver", "pod-template-generation":"1", "projectcalico.org/namespace":"calico-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"csi-node-driver"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"147.182.199.141", ContainerID:"5a242dae19c5d6ba67f5eb2d0f90e926f9417525879b6ff55d1f83ba09c4e2ee", Pod:"csi-node-driver-znlc8", Endpoint:"eth0", ServiceAccountName:"csi-node-driver", IPNetworks:[]string{"192.168.107.193/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-system", "ksa.calico-system.csi-node-driver"}, InterfaceName:"calibadd869fab6", MAC:"12:5a:3c:54:86:62", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil)}} Dec 13 09:12:40.943778 containerd[1466]: 2024-12-13 09:12:40.935 [INFO][2735] cni-plugin/k8s.go 500: Wrote updated endpoint to datastore ContainerID="5a242dae19c5d6ba67f5eb2d0f90e926f9417525879b6ff55d1f83ba09c4e2ee" Namespace="calico-system" Pod="csi-node-driver-znlc8" WorkloadEndpoint="147.182.199.141-k8s-csi--node--driver--znlc8-eth0" Dec 13 09:12:40.994701 containerd[1466]: time="2024-12-13T09:12:40.994520025Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Dec 13 09:12:40.994701 containerd[1466]: time="2024-12-13T09:12:40.994608591Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." 
runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Dec 13 09:12:40.994701 containerd[1466]: time="2024-12-13T09:12:40.994628633Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Dec 13 09:12:40.995174 containerd[1466]: time="2024-12-13T09:12:40.994776810Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Dec 13 09:12:41.033691 systemd[1]: Started cri-containerd-5a242dae19c5d6ba67f5eb2d0f90e926f9417525879b6ff55d1f83ba09c4e2ee.scope - libcontainer container 5a242dae19c5d6ba67f5eb2d0f90e926f9417525879b6ff55d1f83ba09c4e2ee. Dec 13 09:12:41.041788 systemd-networkd[1372]: cali6bd65192d62: Link UP Dec 13 09:12:41.044568 systemd-networkd[1372]: cali6bd65192d62: Gained carrier Dec 13 09:12:41.086422 containerd[1466]: 2024-12-13 09:12:40.705 [INFO][2744] cni-plugin/plugin.go 325: Calico CNI found existing endpoint: &{{WorkloadEndpoint projectcalico.org/v3} {147.182.199.141-k8s-nginx--deployment--8587fbcb89--ghx4z-eth0 nginx-deployment-8587fbcb89- default 6397fde0-ff3d-48f0-9684-9b01322db26e 1081 0 2024-12-13 09:12:27 +0000 UTC map[app:nginx pod-template-hash:8587fbcb89 projectcalico.org/namespace:default projectcalico.org/orchestrator:k8s projectcalico.org/serviceaccount:default] map[] [] [] []} {k8s 147.182.199.141 nginx-deployment-8587fbcb89-ghx4z eth0 default [] [] [kns.default ksa.default.default] cali6bd65192d62 [] []}} ContainerID="b77f22898390184c634d85928f715ebcacb085a5175214b70d57e3a3315860d6" Namespace="default" Pod="nginx-deployment-8587fbcb89-ghx4z" WorkloadEndpoint="147.182.199.141-k8s-nginx--deployment--8587fbcb89--ghx4z-" Dec 13 09:12:41.086422 containerd[1466]: 2024-12-13 09:12:40.706 [INFO][2744] cni-plugin/k8s.go 77: Extracted identifiers for CmdAddK8s ContainerID="b77f22898390184c634d85928f715ebcacb085a5175214b70d57e3a3315860d6" Namespace="default" Pod="nginx-deployment-8587fbcb89-ghx4z" 
WorkloadEndpoint="147.182.199.141-k8s-nginx--deployment--8587fbcb89--ghx4z-eth0" Dec 13 09:12:41.086422 containerd[1466]: 2024-12-13 09:12:40.801 [INFO][2766] ipam/ipam_plugin.go 225: Calico CNI IPAM request count IPv4=1 IPv6=0 ContainerID="b77f22898390184c634d85928f715ebcacb085a5175214b70d57e3a3315860d6" HandleID="k8s-pod-network.b77f22898390184c634d85928f715ebcacb085a5175214b70d57e3a3315860d6" Workload="147.182.199.141-k8s-nginx--deployment--8587fbcb89--ghx4z-eth0" Dec 13 09:12:41.086422 containerd[1466]: 2024-12-13 09:12:40.837 [INFO][2766] ipam/ipam_plugin.go 265: Auto assigning IP ContainerID="b77f22898390184c634d85928f715ebcacb085a5175214b70d57e3a3315860d6" HandleID="k8s-pod-network.b77f22898390184c634d85928f715ebcacb085a5175214b70d57e3a3315860d6" Workload="147.182.199.141-k8s-nginx--deployment--8587fbcb89--ghx4z-eth0" assignArgs=ipam.AutoAssignArgs{Num4:1, Num6:0, HandleID:(*string)(0xc000318c70), Attrs:map[string]string{"namespace":"default", "node":"147.182.199.141", "pod":"nginx-deployment-8587fbcb89-ghx4z", "timestamp":"2024-12-13 09:12:40.801218239 +0000 UTC"}, Hostname:"147.182.199.141", IPv4Pools:[]net.IPNet{}, IPv6Pools:[]net.IPNet{}, MaxBlocksPerHost:0, HostReservedAttrIPv4s:(*ipam.HostReservedAttr)(nil), HostReservedAttrIPv6s:(*ipam.HostReservedAttr)(nil), IntendedUse:"Workload"} Dec 13 09:12:41.086422 containerd[1466]: 2024-12-13 09:12:40.837 [INFO][2766] ipam/ipam_plugin.go 353: About to acquire host-wide IPAM lock. Dec 13 09:12:41.086422 containerd[1466]: 2024-12-13 09:12:40.905 [INFO][2766] ipam/ipam_plugin.go 368: Acquired host-wide IPAM lock. 
Dec 13 09:12:41.086422 containerd[1466]: 2024-12-13 09:12:40.905 [INFO][2766] ipam/ipam.go 107: Auto-assign 1 ipv4, 0 ipv6 addrs for host '147.182.199.141' Dec 13 09:12:41.086422 containerd[1466]: 2024-12-13 09:12:40.951 [INFO][2766] ipam/ipam.go 660: Looking up existing affinities for host handle="k8s-pod-network.b77f22898390184c634d85928f715ebcacb085a5175214b70d57e3a3315860d6" host="147.182.199.141" Dec 13 09:12:41.086422 containerd[1466]: 2024-12-13 09:12:40.967 [INFO][2766] ipam/ipam.go 372: Looking up existing affinities for host host="147.182.199.141" Dec 13 09:12:41.086422 containerd[1466]: 2024-12-13 09:12:40.984 [INFO][2766] ipam/ipam.go 489: Trying affinity for 192.168.107.192/26 host="147.182.199.141" Dec 13 09:12:41.086422 containerd[1466]: 2024-12-13 09:12:40.989 [INFO][2766] ipam/ipam.go 155: Attempting to load block cidr=192.168.107.192/26 host="147.182.199.141" Dec 13 09:12:41.086422 containerd[1466]: 2024-12-13 09:12:40.997 [INFO][2766] ipam/ipam.go 232: Affinity is confirmed and block has been loaded cidr=192.168.107.192/26 host="147.182.199.141" Dec 13 09:12:41.086422 containerd[1466]: 2024-12-13 09:12:40.997 [INFO][2766] ipam/ipam.go 1180: Attempting to assign 1 addresses from block block=192.168.107.192/26 handle="k8s-pod-network.b77f22898390184c634d85928f715ebcacb085a5175214b70d57e3a3315860d6" host="147.182.199.141" Dec 13 09:12:41.086422 containerd[1466]: 2024-12-13 09:12:41.004 [INFO][2766] ipam/ipam.go 1685: Creating new handle: k8s-pod-network.b77f22898390184c634d85928f715ebcacb085a5175214b70d57e3a3315860d6 Dec 13 09:12:41.086422 containerd[1466]: 2024-12-13 09:12:41.012 [INFO][2766] ipam/ipam.go 1203: Writing block in order to claim IPs block=192.168.107.192/26 handle="k8s-pod-network.b77f22898390184c634d85928f715ebcacb085a5175214b70d57e3a3315860d6" host="147.182.199.141" Dec 13 09:12:41.086422 containerd[1466]: 2024-12-13 09:12:41.028 [INFO][2766] ipam/ipam.go 1216: Successfully claimed IPs: [192.168.107.194/26] block=192.168.107.192/26 
handle="k8s-pod-network.b77f22898390184c634d85928f715ebcacb085a5175214b70d57e3a3315860d6" host="147.182.199.141" Dec 13 09:12:41.086422 containerd[1466]: 2024-12-13 09:12:41.029 [INFO][2766] ipam/ipam.go 847: Auto-assigned 1 out of 1 IPv4s: [192.168.107.194/26] handle="k8s-pod-network.b77f22898390184c634d85928f715ebcacb085a5175214b70d57e3a3315860d6" host="147.182.199.141" Dec 13 09:12:41.086422 containerd[1466]: 2024-12-13 09:12:41.030 [INFO][2766] ipam/ipam_plugin.go 374: Released host-wide IPAM lock. Dec 13 09:12:41.086422 containerd[1466]: 2024-12-13 09:12:41.030 [INFO][2766] ipam/ipam_plugin.go 283: Calico CNI IPAM assigned addresses IPv4=[192.168.107.194/26] IPv6=[] ContainerID="b77f22898390184c634d85928f715ebcacb085a5175214b70d57e3a3315860d6" HandleID="k8s-pod-network.b77f22898390184c634d85928f715ebcacb085a5175214b70d57e3a3315860d6" Workload="147.182.199.141-k8s-nginx--deployment--8587fbcb89--ghx4z-eth0" Dec 13 09:12:41.090686 containerd[1466]: 2024-12-13 09:12:41.035 [INFO][2744] cni-plugin/k8s.go 386: Populated endpoint ContainerID="b77f22898390184c634d85928f715ebcacb085a5175214b70d57e3a3315860d6" Namespace="default" Pod="nginx-deployment-8587fbcb89-ghx4z" WorkloadEndpoint="147.182.199.141-k8s-nginx--deployment--8587fbcb89--ghx4z-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"147.182.199.141-k8s-nginx--deployment--8587fbcb89--ghx4z-eth0", GenerateName:"nginx-deployment-8587fbcb89-", Namespace:"default", SelfLink:"", UID:"6397fde0-ff3d-48f0-9684-9b01322db26e", ResourceVersion:"1081", Generation:0, CreationTimestamp:time.Date(2024, time.December, 13, 9, 12, 27, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"app":"nginx", "pod-template-hash":"8587fbcb89", "projectcalico.org/namespace":"default", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"default"}, 
Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"147.182.199.141", ContainerID:"", Pod:"nginx-deployment-8587fbcb89-ghx4z", Endpoint:"eth0", ServiceAccountName:"default", IPNetworks:[]string{"192.168.107.194/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.default", "ksa.default.default"}, InterfaceName:"cali6bd65192d62", MAC:"", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil)}} Dec 13 09:12:41.090686 containerd[1466]: 2024-12-13 09:12:41.035 [INFO][2744] cni-plugin/k8s.go 387: Calico CNI using IPs: [192.168.107.194/32] ContainerID="b77f22898390184c634d85928f715ebcacb085a5175214b70d57e3a3315860d6" Namespace="default" Pod="nginx-deployment-8587fbcb89-ghx4z" WorkloadEndpoint="147.182.199.141-k8s-nginx--deployment--8587fbcb89--ghx4z-eth0" Dec 13 09:12:41.090686 containerd[1466]: 2024-12-13 09:12:41.035 [INFO][2744] cni-plugin/dataplane_linux.go 69: Setting the host side veth name to cali6bd65192d62 ContainerID="b77f22898390184c634d85928f715ebcacb085a5175214b70d57e3a3315860d6" Namespace="default" Pod="nginx-deployment-8587fbcb89-ghx4z" WorkloadEndpoint="147.182.199.141-k8s-nginx--deployment--8587fbcb89--ghx4z-eth0" Dec 13 09:12:41.090686 containerd[1466]: 2024-12-13 09:12:41.045 [INFO][2744] cni-plugin/dataplane_linux.go 508: Disabling IPv4 forwarding ContainerID="b77f22898390184c634d85928f715ebcacb085a5175214b70d57e3a3315860d6" Namespace="default" Pod="nginx-deployment-8587fbcb89-ghx4z" WorkloadEndpoint="147.182.199.141-k8s-nginx--deployment--8587fbcb89--ghx4z-eth0" Dec 13 09:12:41.090686 containerd[1466]: 2024-12-13 09:12:41.048 [INFO][2744] cni-plugin/k8s.go 414: Added Mac, interface name, and active container ID to endpoint ContainerID="b77f22898390184c634d85928f715ebcacb085a5175214b70d57e3a3315860d6" Namespace="default" 
Pod="nginx-deployment-8587fbcb89-ghx4z" WorkloadEndpoint="147.182.199.141-k8s-nginx--deployment--8587fbcb89--ghx4z-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"147.182.199.141-k8s-nginx--deployment--8587fbcb89--ghx4z-eth0", GenerateName:"nginx-deployment-8587fbcb89-", Namespace:"default", SelfLink:"", UID:"6397fde0-ff3d-48f0-9684-9b01322db26e", ResourceVersion:"1081", Generation:0, CreationTimestamp:time.Date(2024, time.December, 13, 9, 12, 27, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"app":"nginx", "pod-template-hash":"8587fbcb89", "projectcalico.org/namespace":"default", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"default"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"147.182.199.141", ContainerID:"b77f22898390184c634d85928f715ebcacb085a5175214b70d57e3a3315860d6", Pod:"nginx-deployment-8587fbcb89-ghx4z", Endpoint:"eth0", ServiceAccountName:"default", IPNetworks:[]string{"192.168.107.194/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.default", "ksa.default.default"}, InterfaceName:"cali6bd65192d62", MAC:"3a:b5:35:fc:b1:03", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil)}} Dec 13 09:12:41.090686 containerd[1466]: 2024-12-13 09:12:41.076 [INFO][2744] cni-plugin/k8s.go 500: Wrote updated endpoint to datastore ContainerID="b77f22898390184c634d85928f715ebcacb085a5175214b70d57e3a3315860d6" Namespace="default" Pod="nginx-deployment-8587fbcb89-ghx4z" WorkloadEndpoint="147.182.199.141-k8s-nginx--deployment--8587fbcb89--ghx4z-eth0" Dec 13 09:12:41.104316 containerd[1466]: time="2024-12-13T09:12:41.104149932Z" level=info msg="RunPodSandbox 
for &PodSandboxMetadata{Name:csi-node-driver-znlc8,Uid:2ce4350c-2296-46ed-9e25-9ab657a354cd,Namespace:calico-system,Attempt:1,} returns sandbox id \"5a242dae19c5d6ba67f5eb2d0f90e926f9417525879b6ff55d1f83ba09c4e2ee\"" Dec 13 09:12:41.113691 containerd[1466]: time="2024-12-13T09:12:41.112205093Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/csi:v3.29.1\"" Dec 13 09:12:41.118343 systemd-resolved[1323]: Using degraded feature set TCP instead of UDP for DNS server 67.207.67.2. Dec 13 09:12:41.152835 containerd[1466]: time="2024-12-13T09:12:41.152300580Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Dec 13 09:12:41.152835 containerd[1466]: time="2024-12-13T09:12:41.152512154Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Dec 13 09:12:41.152835 containerd[1466]: time="2024-12-13T09:12:41.152563499Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Dec 13 09:12:41.152835 containerd[1466]: time="2024-12-13T09:12:41.152725764Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Dec 13 09:12:41.184666 systemd[1]: Started cri-containerd-b77f22898390184c634d85928f715ebcacb085a5175214b70d57e3a3315860d6.scope - libcontainer container b77f22898390184c634d85928f715ebcacb085a5175214b70d57e3a3315860d6. 
Dec 13 09:12:41.225581 kubelet[1773]: E1213 09:12:41.225503 1773 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Dec 13 09:12:41.246102 containerd[1466]: time="2024-12-13T09:12:41.245933928Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:nginx-deployment-8587fbcb89-ghx4z,Uid:6397fde0-ff3d-48f0-9684-9b01322db26e,Namespace:default,Attempt:1,} returns sandbox id \"b77f22898390184c634d85928f715ebcacb085a5175214b70d57e3a3315860d6\"" Dec 13 09:12:42.209228 systemd-networkd[1372]: calibadd869fab6: Gained IPv6LL Dec 13 09:12:42.226499 kubelet[1773]: E1213 09:12:42.226423 1773 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Dec 13 09:12:42.719856 containerd[1466]: time="2024-12-13T09:12:42.719770103Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/csi:v3.29.1\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Dec 13 09:12:42.723420 containerd[1466]: time="2024-12-13T09:12:42.723154074Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/csi:v3.29.1: active requests=0, bytes read=7902632" Dec 13 09:12:42.725384 containerd[1466]: time="2024-12-13T09:12:42.725311237Z" level=info msg="ImageCreate event name:\"sha256:bda8c42e04758c4f061339e213f50ccdc7502c4176fbf631aa12357e62b63540\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Dec 13 09:12:42.730989 containerd[1466]: time="2024-12-13T09:12:42.729754459Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/csi@sha256:eaa7e01fb16b603c155a67b81f16992281db7f831684c7b2081d3434587a7ff3\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Dec 13 09:12:42.730989 containerd[1466]: time="2024-12-13T09:12:42.730712942Z" level=info msg="Pulled image \"ghcr.io/flatcar/calico/csi:v3.29.1\" with image id \"sha256:bda8c42e04758c4f061339e213f50ccdc7502c4176fbf631aa12357e62b63540\", repo tag \"ghcr.io/flatcar/calico/csi:v3.29.1\", 
repo digest \"ghcr.io/flatcar/calico/csi@sha256:eaa7e01fb16b603c155a67b81f16992281db7f831684c7b2081d3434587a7ff3\", size \"9395716\" in 1.618426608s" Dec 13 09:12:42.730989 containerd[1466]: time="2024-12-13T09:12:42.730771998Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/csi:v3.29.1\" returns image reference \"sha256:bda8c42e04758c4f061339e213f50ccdc7502c4176fbf631aa12357e62b63540\"" Dec 13 09:12:42.737700 containerd[1466]: time="2024-12-13T09:12:42.737210948Z" level=info msg="PullImage \"ghcr.io/flatcar/nginx:latest\"" Dec 13 09:12:42.739512 containerd[1466]: time="2024-12-13T09:12:42.739286443Z" level=info msg="CreateContainer within sandbox \"5a242dae19c5d6ba67f5eb2d0f90e926f9417525879b6ff55d1f83ba09c4e2ee\" for container &ContainerMetadata{Name:calico-csi,Attempt:0,}" Dec 13 09:12:42.776093 containerd[1466]: time="2024-12-13T09:12:42.775781924Z" level=info msg="CreateContainer within sandbox \"5a242dae19c5d6ba67f5eb2d0f90e926f9417525879b6ff55d1f83ba09c4e2ee\" for &ContainerMetadata{Name:calico-csi,Attempt:0,} returns container id \"de1b1bff496ca857616bc454134cbbc65673228c278a46cf7dc3aa01e8ac241e\"" Dec 13 09:12:42.781846 containerd[1466]: time="2024-12-13T09:12:42.779312291Z" level=info msg="StartContainer for \"de1b1bff496ca857616bc454134cbbc65673228c278a46cf7dc3aa01e8ac241e\"" Dec 13 09:12:42.834527 systemd[1]: Started cri-containerd-de1b1bff496ca857616bc454134cbbc65673228c278a46cf7dc3aa01e8ac241e.scope - libcontainer container de1b1bff496ca857616bc454134cbbc65673228c278a46cf7dc3aa01e8ac241e. 
Dec 13 09:12:42.849369 systemd-networkd[1372]: cali6bd65192d62: Gained IPv6LL Dec 13 09:12:42.885543 containerd[1466]: time="2024-12-13T09:12:42.885445917Z" level=info msg="StartContainer for \"de1b1bff496ca857616bc454134cbbc65673228c278a46cf7dc3aa01e8ac241e\" returns successfully" Dec 13 09:12:43.227287 kubelet[1773]: E1213 09:12:43.227192 1773 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Dec 13 09:12:44.228177 kubelet[1773]: E1213 09:12:44.228105 1773 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Dec 13 09:12:45.228918 kubelet[1773]: E1213 09:12:45.228847 1773 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Dec 13 09:12:45.992560 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount72336560.mount: Deactivated successfully. Dec 13 09:12:46.230359 kubelet[1773]: E1213 09:12:46.229684 1773 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Dec 13 09:12:47.230969 kubelet[1773]: E1213 09:12:47.230800 1773 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Dec 13 09:12:47.743977 containerd[1466]: time="2024-12-13T09:12:47.743896424Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/nginx:latest\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Dec 13 09:12:47.745338 containerd[1466]: time="2024-12-13T09:12:47.745260527Z" level=info msg="stop pulling image ghcr.io/flatcar/nginx:latest: active requests=0, bytes read=71036027" Dec 13 09:12:47.748126 containerd[1466]: time="2024-12-13T09:12:47.746367654Z" level=info msg="ImageCreate event name:\"sha256:fa0a8cea5e76ad962111c39c85bb312edaf5b89eccd8f404eeea66c9759641e3\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Dec 13 09:12:47.750972 
containerd[1466]: time="2024-12-13T09:12:47.750894855Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/nginx@sha256:e04edf30a4ea4c5a4107110797c72d3ee8a654415f00acd4019be17218afd9a1\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Dec 13 09:12:47.752545 containerd[1466]: time="2024-12-13T09:12:47.752413161Z" level=info msg="Pulled image \"ghcr.io/flatcar/nginx:latest\" with image id \"sha256:fa0a8cea5e76ad962111c39c85bb312edaf5b89eccd8f404eeea66c9759641e3\", repo tag \"ghcr.io/flatcar/nginx:latest\", repo digest \"ghcr.io/flatcar/nginx@sha256:e04edf30a4ea4c5a4107110797c72d3ee8a654415f00acd4019be17218afd9a1\", size \"71035905\" in 5.015116531s" Dec 13 09:12:47.752545 containerd[1466]: time="2024-12-13T09:12:47.752535410Z" level=info msg="PullImage \"ghcr.io/flatcar/nginx:latest\" returns image reference \"sha256:fa0a8cea5e76ad962111c39c85bb312edaf5b89eccd8f404eeea66c9759641e3\"" Dec 13 09:12:47.766265 containerd[1466]: time="2024-12-13T09:12:47.766205820Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/node-driver-registrar:v3.29.1\"" Dec 13 09:12:47.767366 containerd[1466]: time="2024-12-13T09:12:47.767259763Z" level=info msg="CreateContainer within sandbox \"b77f22898390184c634d85928f715ebcacb085a5175214b70d57e3a3315860d6\" for container &ContainerMetadata{Name:nginx,Attempt:0,}" Dec 13 09:12:47.791321 containerd[1466]: time="2024-12-13T09:12:47.791111165Z" level=info msg="CreateContainer within sandbox \"b77f22898390184c634d85928f715ebcacb085a5175214b70d57e3a3315860d6\" for &ContainerMetadata{Name:nginx,Attempt:0,} returns container id \"c43f9688bf35022cdc985948bbf57db2576388bf87e04a9dba30843405b7177b\"" Dec 13 09:12:47.793091 containerd[1466]: time="2024-12-13T09:12:47.792958078Z" level=info msg="StartContainer for \"c43f9688bf35022cdc985948bbf57db2576388bf87e04a9dba30843405b7177b\"" Dec 13 09:12:47.874858 systemd[1]: run-containerd-runc-k8s.io-c43f9688bf35022cdc985948bbf57db2576388bf87e04a9dba30843405b7177b-runc.qNXk4Q.mount: 
Deactivated successfully. Dec 13 09:12:47.887449 systemd[1]: Started cri-containerd-c43f9688bf35022cdc985948bbf57db2576388bf87e04a9dba30843405b7177b.scope - libcontainer container c43f9688bf35022cdc985948bbf57db2576388bf87e04a9dba30843405b7177b. Dec 13 09:12:47.933764 containerd[1466]: time="2024-12-13T09:12:47.931231301Z" level=info msg="StartContainer for \"c43f9688bf35022cdc985948bbf57db2576388bf87e04a9dba30843405b7177b\" returns successfully" Dec 13 09:12:48.231567 kubelet[1773]: E1213 09:12:48.231476 1773 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Dec 13 09:12:49.231839 kubelet[1773]: E1213 09:12:49.231788 1773 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Dec 13 09:12:49.449060 containerd[1466]: time="2024-12-13T09:12:49.448952415Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/node-driver-registrar:v3.29.1\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Dec 13 09:12:49.450726 containerd[1466]: time="2024-12-13T09:12:49.450428305Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/node-driver-registrar:v3.29.1: active requests=0, bytes read=10501081" Dec 13 09:12:49.451615 containerd[1466]: time="2024-12-13T09:12:49.451511489Z" level=info msg="ImageCreate event name:\"sha256:8b7d18f262d5cf6a6343578ad0db68a140c4c9989d9e02c58c27cb5d2c70320f\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Dec 13 09:12:49.458822 containerd[1466]: time="2024-12-13T09:12:49.455990561Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/node-driver-registrar@sha256:a338da9488cbaa83c78457c3d7354d84149969c0480e88dd768e036632ff5b76\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Dec 13 09:12:49.458822 containerd[1466]: time="2024-12-13T09:12:49.457174266Z" level=info msg="Pulled image \"ghcr.io/flatcar/calico/node-driver-registrar:v3.29.1\" with image id 
\"sha256:8b7d18f262d5cf6a6343578ad0db68a140c4c9989d9e02c58c27cb5d2c70320f\", repo tag \"ghcr.io/flatcar/calico/node-driver-registrar:v3.29.1\", repo digest \"ghcr.io/flatcar/calico/node-driver-registrar@sha256:a338da9488cbaa83c78457c3d7354d84149969c0480e88dd768e036632ff5b76\", size \"11994117\" in 1.69065786s" Dec 13 09:12:49.458822 containerd[1466]: time="2024-12-13T09:12:49.457220228Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/node-driver-registrar:v3.29.1\" returns image reference \"sha256:8b7d18f262d5cf6a6343578ad0db68a140c4c9989d9e02c58c27cb5d2c70320f\"" Dec 13 09:12:49.463449 containerd[1466]: time="2024-12-13T09:12:49.463293998Z" level=info msg="CreateContainer within sandbox \"5a242dae19c5d6ba67f5eb2d0f90e926f9417525879b6ff55d1f83ba09c4e2ee\" for container &ContainerMetadata{Name:csi-node-driver-registrar,Attempt:0,}" Dec 13 09:12:49.485155 containerd[1466]: time="2024-12-13T09:12:49.484818224Z" level=info msg="CreateContainer within sandbox \"5a242dae19c5d6ba67f5eb2d0f90e926f9417525879b6ff55d1f83ba09c4e2ee\" for &ContainerMetadata{Name:csi-node-driver-registrar,Attempt:0,} returns container id \"7076fe0677de199799059d763273e1957757041fdea33bf4555c9603a29ebf92\"" Dec 13 09:12:49.486165 containerd[1466]: time="2024-12-13T09:12:49.486047957Z" level=info msg="StartContainer for \"7076fe0677de199799059d763273e1957757041fdea33bf4555c9603a29ebf92\"" Dec 13 09:12:49.555471 systemd[1]: Started cri-containerd-7076fe0677de199799059d763273e1957757041fdea33bf4555c9603a29ebf92.scope - libcontainer container 7076fe0677de199799059d763273e1957757041fdea33bf4555c9603a29ebf92. 
Dec 13 09:12:49.608370 containerd[1466]: time="2024-12-13T09:12:49.608297102Z" level=info msg="StartContainer for \"7076fe0677de199799059d763273e1957757041fdea33bf4555c9603a29ebf92\" returns successfully" Dec 13 09:12:50.232307 kubelet[1773]: E1213 09:12:50.232233 1773 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Dec 13 09:12:50.374052 kubelet[1773]: I1213 09:12:50.373959 1773 csi_plugin.go:100] kubernetes.io/csi: Trying to validate a new CSI Driver with name: csi.tigera.io endpoint: /var/lib/kubelet/plugins/csi.tigera.io/csi.sock versions: 1.0.0 Dec 13 09:12:50.374052 kubelet[1773]: I1213 09:12:50.374031 1773 csi_plugin.go:113] kubernetes.io/csi: Register new plugin with name: csi.tigera.io at endpoint: /var/lib/kubelet/plugins/csi.tigera.io/csi.sock Dec 13 09:12:50.604207 kubelet[1773]: I1213 09:12:50.603856 1773 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="calico-system/csi-node-driver-znlc8" podStartSLOduration=29.254669888 podStartE2EDuration="37.603826251s" podCreationTimestamp="2024-12-13 09:12:13 +0000 UTC" firstStartedPulling="2024-12-13 09:12:41.110835396 +0000 UTC m=+28.734171400" lastFinishedPulling="2024-12-13 09:12:49.459991782 +0000 UTC m=+37.083327763" observedRunningTime="2024-12-13 09:12:50.603789418 +0000 UTC m=+38.227125423" watchObservedRunningTime="2024-12-13 09:12:50.603826251 +0000 UTC m=+38.227162260" Dec 13 09:12:50.604207 kubelet[1773]: I1213 09:12:50.604115 1773 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="default/nginx-deployment-8587fbcb89-ghx4z" podStartSLOduration=17.090958601 podStartE2EDuration="23.604101979s" podCreationTimestamp="2024-12-13 09:12:27 +0000 UTC" firstStartedPulling="2024-12-13 09:12:41.24788754 +0000 UTC m=+28.871223536" lastFinishedPulling="2024-12-13 09:12:47.761030914 +0000 UTC m=+35.384366914" observedRunningTime="2024-12-13 09:12:48.547995833 +0000 UTC m=+36.171331837" 
watchObservedRunningTime="2024-12-13 09:12:50.604101979 +0000 UTC m=+38.227437984" Dec 13 09:12:51.233128 kubelet[1773]: E1213 09:12:51.233040 1773 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Dec 13 09:12:52.169096 update_engine[1449]: I20241213 09:12:52.168453 1449 update_attempter.cc:509] Updating boot flags... Dec 13 09:12:52.215160 kernel: BTRFS warning: duplicate device /dev/vda3 devid 1 generation 41 scanned by (udev-worker) (3070) Dec 13 09:12:52.235436 kubelet[1773]: E1213 09:12:52.235131 1773 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Dec 13 09:12:52.301391 kernel: BTRFS warning: duplicate device /dev/vda3 devid 1 generation 41 scanned by (udev-worker) (3074) Dec 13 09:12:52.390063 kernel: BTRFS warning: duplicate device /dev/vda3 devid 1 generation 41 scanned by (udev-worker) (3074) Dec 13 09:12:53.203290 kubelet[1773]: E1213 09:12:53.203205 1773 file.go:104] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Dec 13 09:12:53.235651 kubelet[1773]: E1213 09:12:53.235562 1773 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Dec 13 09:12:54.235901 kubelet[1773]: E1213 09:12:54.235839 1773 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Dec 13 09:12:55.236662 kubelet[1773]: E1213 09:12:55.236572 1773 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Dec 13 09:12:56.237703 kubelet[1773]: E1213 09:12:56.237618 1773 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Dec 13 09:12:57.238137 kubelet[1773]: E1213 09:12:57.238066 1773 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" 
path="/etc/kubernetes/manifests" Dec 13 09:12:57.831078 systemd[1]: Created slice kubepods-besteffort-poddf5bc1bb_74a9_4616_b607_12b595f3df00.slice - libcontainer container kubepods-besteffort-poddf5bc1bb_74a9_4616_b607_12b595f3df00.slice. Dec 13 09:12:57.971526 kubelet[1773]: I1213 09:12:57.971335 1773 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-qcc76\" (UniqueName: \"kubernetes.io/projected/df5bc1bb-74a9-4616-b607-12b595f3df00-kube-api-access-qcc76\") pod \"nfs-server-provisioner-0\" (UID: \"df5bc1bb-74a9-4616-b607-12b595f3df00\") " pod="default/nfs-server-provisioner-0" Dec 13 09:12:57.971526 kubelet[1773]: I1213 09:12:57.971408 1773 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"data\" (UniqueName: \"kubernetes.io/empty-dir/df5bc1bb-74a9-4616-b607-12b595f3df00-data\") pod \"nfs-server-provisioner-0\" (UID: \"df5bc1bb-74a9-4616-b607-12b595f3df00\") " pod="default/nfs-server-provisioner-0" Dec 13 09:12:58.136894 containerd[1466]: time="2024-12-13T09:12:58.136230185Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:nfs-server-provisioner-0,Uid:df5bc1bb-74a9-4616-b607-12b595f3df00,Namespace:default,Attempt:0,}" Dec 13 09:12:58.239128 kubelet[1773]: E1213 09:12:58.239060 1773 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Dec 13 09:12:58.441740 systemd-networkd[1372]: cali60e51b789ff: Link UP Dec 13 09:12:58.443192 systemd-networkd[1372]: cali60e51b789ff: Gained carrier Dec 13 09:12:58.468511 containerd[1466]: 2024-12-13 09:12:58.241 [INFO][3091] cni-plugin/plugin.go 325: Calico CNI found existing endpoint: &{{WorkloadEndpoint projectcalico.org/v3} {147.182.199.141-k8s-nfs--server--provisioner--0-eth0 nfs-server-provisioner- default df5bc1bb-74a9-4616-b607-12b595f3df00 1173 0 2024-12-13 09:12:57 +0000 UTC map[app:nfs-server-provisioner apps.kubernetes.io/pod-index:0 
chart:nfs-server-provisioner-1.8.0 controller-revision-hash:nfs-server-provisioner-d5cbb7f57 heritage:Helm projectcalico.org/namespace:default projectcalico.org/orchestrator:k8s projectcalico.org/serviceaccount:nfs-server-provisioner release:nfs-server-provisioner statefulset.kubernetes.io/pod-name:nfs-server-provisioner-0] map[] [] [] []} {k8s 147.182.199.141 nfs-server-provisioner-0 eth0 nfs-server-provisioner [] [] [kns.default ksa.default.nfs-server-provisioner] cali60e51b789ff [{nfs TCP 2049 0 } {nfs-udp UDP 2049 0 } {nlockmgr TCP 32803 0 } {nlockmgr-udp UDP 32803 0 } {mountd TCP 20048 0 } {mountd-udp UDP 20048 0 } {rquotad TCP 875 0 } {rquotad-udp UDP 875 0 } {rpcbind TCP 111 0 } {rpcbind-udp UDP 111 0 } {statd TCP 662 0 } {statd-udp UDP 662 0 }] []}} ContainerID="6d4e50bc8091002b9b3b07c4ed4e441f058c4fed2bd279649c68c957f31abb26" Namespace="default" Pod="nfs-server-provisioner-0" WorkloadEndpoint="147.182.199.141-k8s-nfs--server--provisioner--0-" Dec 13 09:12:58.468511 containerd[1466]: 2024-12-13 09:12:58.242 [INFO][3091] cni-plugin/k8s.go 77: Extracted identifiers for CmdAddK8s ContainerID="6d4e50bc8091002b9b3b07c4ed4e441f058c4fed2bd279649c68c957f31abb26" Namespace="default" Pod="nfs-server-provisioner-0" WorkloadEndpoint="147.182.199.141-k8s-nfs--server--provisioner--0-eth0" Dec 13 09:12:58.468511 containerd[1466]: 2024-12-13 09:12:58.301 [INFO][3101] ipam/ipam_plugin.go 225: Calico CNI IPAM request count IPv4=1 IPv6=0 ContainerID="6d4e50bc8091002b9b3b07c4ed4e441f058c4fed2bd279649c68c957f31abb26" HandleID="k8s-pod-network.6d4e50bc8091002b9b3b07c4ed4e441f058c4fed2bd279649c68c957f31abb26" Workload="147.182.199.141-k8s-nfs--server--provisioner--0-eth0" Dec 13 09:12:58.468511 containerd[1466]: 2024-12-13 09:12:58.335 [INFO][3101] ipam/ipam_plugin.go 265: Auto assigning IP ContainerID="6d4e50bc8091002b9b3b07c4ed4e441f058c4fed2bd279649c68c957f31abb26" HandleID="k8s-pod-network.6d4e50bc8091002b9b3b07c4ed4e441f058c4fed2bd279649c68c957f31abb26" 
Workload="147.182.199.141-k8s-nfs--server--provisioner--0-eth0" assignArgs=ipam.AutoAssignArgs{Num4:1, Num6:0, HandleID:(*string)(0xc0004d3150), Attrs:map[string]string{"namespace":"default", "node":"147.182.199.141", "pod":"nfs-server-provisioner-0", "timestamp":"2024-12-13 09:12:58.301582575 +0000 UTC"}, Hostname:"147.182.199.141", IPv4Pools:[]net.IPNet{}, IPv6Pools:[]net.IPNet{}, MaxBlocksPerHost:0, HostReservedAttrIPv4s:(*ipam.HostReservedAttr)(nil), HostReservedAttrIPv6s:(*ipam.HostReservedAttr)(nil), IntendedUse:"Workload"} Dec 13 09:12:58.468511 containerd[1466]: 2024-12-13 09:12:58.335 [INFO][3101] ipam/ipam_plugin.go 353: About to acquire host-wide IPAM lock. Dec 13 09:12:58.468511 containerd[1466]: 2024-12-13 09:12:58.336 [INFO][3101] ipam/ipam_plugin.go 368: Acquired host-wide IPAM lock. Dec 13 09:12:58.468511 containerd[1466]: 2024-12-13 09:12:58.336 [INFO][3101] ipam/ipam.go 107: Auto-assign 1 ipv4, 0 ipv6 addrs for host '147.182.199.141' Dec 13 09:12:58.468511 containerd[1466]: 2024-12-13 09:12:58.355 [INFO][3101] ipam/ipam.go 660: Looking up existing affinities for host handle="k8s-pod-network.6d4e50bc8091002b9b3b07c4ed4e441f058c4fed2bd279649c68c957f31abb26" host="147.182.199.141" Dec 13 09:12:58.468511 containerd[1466]: 2024-12-13 09:12:58.371 [INFO][3101] ipam/ipam.go 372: Looking up existing affinities for host host="147.182.199.141" Dec 13 09:12:58.468511 containerd[1466]: 2024-12-13 09:12:58.388 [INFO][3101] ipam/ipam.go 489: Trying affinity for 192.168.107.192/26 host="147.182.199.141" Dec 13 09:12:58.468511 containerd[1466]: 2024-12-13 09:12:58.402 [INFO][3101] ipam/ipam.go 155: Attempting to load block cidr=192.168.107.192/26 host="147.182.199.141" Dec 13 09:12:58.468511 containerd[1466]: 2024-12-13 09:12:58.407 [INFO][3101] ipam/ipam.go 232: Affinity is confirmed and block has been loaded cidr=192.168.107.192/26 host="147.182.199.141" Dec 13 09:12:58.468511 containerd[1466]: 2024-12-13 09:12:58.407 [INFO][3101] ipam/ipam.go 1180: Attempting 
to assign 1 addresses from block block=192.168.107.192/26 handle="k8s-pod-network.6d4e50bc8091002b9b3b07c4ed4e441f058c4fed2bd279649c68c957f31abb26" host="147.182.199.141" Dec 13 09:12:58.468511 containerd[1466]: 2024-12-13 09:12:58.411 [INFO][3101] ipam/ipam.go 1685: Creating new handle: k8s-pod-network.6d4e50bc8091002b9b3b07c4ed4e441f058c4fed2bd279649c68c957f31abb26 Dec 13 09:12:58.468511 containerd[1466]: 2024-12-13 09:12:58.422 [INFO][3101] ipam/ipam.go 1203: Writing block in order to claim IPs block=192.168.107.192/26 handle="k8s-pod-network.6d4e50bc8091002b9b3b07c4ed4e441f058c4fed2bd279649c68c957f31abb26" host="147.182.199.141" Dec 13 09:12:58.468511 containerd[1466]: 2024-12-13 09:12:58.432 [INFO][3101] ipam/ipam.go 1216: Successfully claimed IPs: [192.168.107.195/26] block=192.168.107.192/26 handle="k8s-pod-network.6d4e50bc8091002b9b3b07c4ed4e441f058c4fed2bd279649c68c957f31abb26" host="147.182.199.141" Dec 13 09:12:58.468511 containerd[1466]: 2024-12-13 09:12:58.433 [INFO][3101] ipam/ipam.go 847: Auto-assigned 1 out of 1 IPv4s: [192.168.107.195/26] handle="k8s-pod-network.6d4e50bc8091002b9b3b07c4ed4e441f058c4fed2bd279649c68c957f31abb26" host="147.182.199.141" Dec 13 09:12:58.468511 containerd[1466]: 2024-12-13 09:12:58.433 [INFO][3101] ipam/ipam_plugin.go 374: Released host-wide IPAM lock. 
Dec 13 09:12:58.468511 containerd[1466]: 2024-12-13 09:12:58.433 [INFO][3101] ipam/ipam_plugin.go 283: Calico CNI IPAM assigned addresses IPv4=[192.168.107.195/26] IPv6=[] ContainerID="6d4e50bc8091002b9b3b07c4ed4e441f058c4fed2bd279649c68c957f31abb26" HandleID="k8s-pod-network.6d4e50bc8091002b9b3b07c4ed4e441f058c4fed2bd279649c68c957f31abb26" Workload="147.182.199.141-k8s-nfs--server--provisioner--0-eth0" Dec 13 09:12:58.469591 containerd[1466]: 2024-12-13 09:12:58.434 [INFO][3091] cni-plugin/k8s.go 386: Populated endpoint ContainerID="6d4e50bc8091002b9b3b07c4ed4e441f058c4fed2bd279649c68c957f31abb26" Namespace="default" Pod="nfs-server-provisioner-0" WorkloadEndpoint="147.182.199.141-k8s-nfs--server--provisioner--0-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"147.182.199.141-k8s-nfs--server--provisioner--0-eth0", GenerateName:"nfs-server-provisioner-", Namespace:"default", SelfLink:"", UID:"df5bc1bb-74a9-4616-b607-12b595f3df00", ResourceVersion:"1173", Generation:0, CreationTimestamp:time.Date(2024, time.December, 13, 9, 12, 57, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"app":"nfs-server-provisioner", "apps.kubernetes.io/pod-index":"0", "chart":"nfs-server-provisioner-1.8.0", "controller-revision-hash":"nfs-server-provisioner-d5cbb7f57", "heritage":"Helm", "projectcalico.org/namespace":"default", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"nfs-server-provisioner", "release":"nfs-server-provisioner", "statefulset.kubernetes.io/pod-name":"nfs-server-provisioner-0"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"147.182.199.141", ContainerID:"", Pod:"nfs-server-provisioner-0", Endpoint:"eth0", 
ServiceAccountName:"nfs-server-provisioner", IPNetworks:[]string{"192.168.107.195/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.default", "ksa.default.nfs-server-provisioner"}, InterfaceName:"cali60e51b789ff", MAC:"", Ports:[]v3.WorkloadEndpointPort{v3.WorkloadEndpointPort{Name:"nfs", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x801, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"nfs-udp", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"UDP"}, Port:0x801, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"nlockmgr", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x8023, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"nlockmgr-udp", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"UDP"}, Port:0x8023, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"mountd", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x4e50, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"mountd-udp", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"UDP"}, Port:0x4e50, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"rquotad", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x36b, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"rquotad-udp", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"UDP"}, Port:0x36b, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"rpcbind", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x6f, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"rpcbind-udp", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"UDP"}, Port:0x6f, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"statd", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x296, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"statd-udp", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"UDP"}, Port:0x296, 
HostPort:0x0, HostIP:""}}, AllowSpoofedSourcePrefixes:[]string(nil)}} Dec 13 09:12:58.469591 containerd[1466]: 2024-12-13 09:12:58.435 [INFO][3091] cni-plugin/k8s.go 387: Calico CNI using IPs: [192.168.107.195/32] ContainerID="6d4e50bc8091002b9b3b07c4ed4e441f058c4fed2bd279649c68c957f31abb26" Namespace="default" Pod="nfs-server-provisioner-0" WorkloadEndpoint="147.182.199.141-k8s-nfs--server--provisioner--0-eth0" Dec 13 09:12:58.469591 containerd[1466]: 2024-12-13 09:12:58.435 [INFO][3091] cni-plugin/dataplane_linux.go 69: Setting the host side veth name to cali60e51b789ff ContainerID="6d4e50bc8091002b9b3b07c4ed4e441f058c4fed2bd279649c68c957f31abb26" Namespace="default" Pod="nfs-server-provisioner-0" WorkloadEndpoint="147.182.199.141-k8s-nfs--server--provisioner--0-eth0" Dec 13 09:12:58.469591 containerd[1466]: 2024-12-13 09:12:58.444 [INFO][3091] cni-plugin/dataplane_linux.go 508: Disabling IPv4 forwarding ContainerID="6d4e50bc8091002b9b3b07c4ed4e441f058c4fed2bd279649c68c957f31abb26" Namespace="default" Pod="nfs-server-provisioner-0" WorkloadEndpoint="147.182.199.141-k8s-nfs--server--provisioner--0-eth0" Dec 13 09:12:58.470056 containerd[1466]: 2024-12-13 09:12:58.444 [INFO][3091] cni-plugin/k8s.go 414: Added Mac, interface name, and active container ID to endpoint ContainerID="6d4e50bc8091002b9b3b07c4ed4e441f058c4fed2bd279649c68c957f31abb26" Namespace="default" Pod="nfs-server-provisioner-0" WorkloadEndpoint="147.182.199.141-k8s-nfs--server--provisioner--0-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"147.182.199.141-k8s-nfs--server--provisioner--0-eth0", GenerateName:"nfs-server-provisioner-", Namespace:"default", SelfLink:"", UID:"df5bc1bb-74a9-4616-b607-12b595f3df00", ResourceVersion:"1173", Generation:0, CreationTimestamp:time.Date(2024, time.December, 13, 9, 12, 57, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), 
Labels:map[string]string{"app":"nfs-server-provisioner", "apps.kubernetes.io/pod-index":"0", "chart":"nfs-server-provisioner-1.8.0", "controller-revision-hash":"nfs-server-provisioner-d5cbb7f57", "heritage":"Helm", "projectcalico.org/namespace":"default", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"nfs-server-provisioner", "release":"nfs-server-provisioner", "statefulset.kubernetes.io/pod-name":"nfs-server-provisioner-0"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"147.182.199.141", ContainerID:"6d4e50bc8091002b9b3b07c4ed4e441f058c4fed2bd279649c68c957f31abb26", Pod:"nfs-server-provisioner-0", Endpoint:"eth0", ServiceAccountName:"nfs-server-provisioner", IPNetworks:[]string{"192.168.107.195/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.default", "ksa.default.nfs-server-provisioner"}, InterfaceName:"cali60e51b789ff", MAC:"f6:78:71:42:f7:7b", Ports:[]v3.WorkloadEndpointPort{v3.WorkloadEndpointPort{Name:"nfs", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x801, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"nfs-udp", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"UDP"}, Port:0x801, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"nlockmgr", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x8023, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"nlockmgr-udp", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"UDP"}, Port:0x8023, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"mountd", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x4e50, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"mountd-udp", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"UDP"}, Port:0x4e50, HostPort:0x0, HostIP:""}, 
v3.WorkloadEndpointPort{Name:"rquotad", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x36b, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"rquotad-udp", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"UDP"}, Port:0x36b, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"rpcbind", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x6f, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"rpcbind-udp", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"UDP"}, Port:0x6f, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"statd", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x296, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"statd-udp", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"UDP"}, Port:0x296, HostPort:0x0, HostIP:""}}, AllowSpoofedSourcePrefixes:[]string(nil)}} Dec 13 09:12:58.470056 containerd[1466]: 2024-12-13 09:12:58.465 [INFO][3091] cni-plugin/k8s.go 500: Wrote updated endpoint to datastore ContainerID="6d4e50bc8091002b9b3b07c4ed4e441f058c4fed2bd279649c68c957f31abb26" Namespace="default" Pod="nfs-server-provisioner-0" WorkloadEndpoint="147.182.199.141-k8s-nfs--server--provisioner--0-eth0" Dec 13 09:12:58.515637 containerd[1466]: time="2024-12-13T09:12:58.514690390Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Dec 13 09:12:58.515637 containerd[1466]: time="2024-12-13T09:12:58.514793832Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Dec 13 09:12:58.515637 containerd[1466]: time="2024-12-13T09:12:58.514818240Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." 
runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Dec 13 09:12:58.517106 containerd[1466]: time="2024-12-13T09:12:58.516895099Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Dec 13 09:12:58.562323 systemd[1]: Started cri-containerd-6d4e50bc8091002b9b3b07c4ed4e441f058c4fed2bd279649c68c957f31abb26.scope - libcontainer container 6d4e50bc8091002b9b3b07c4ed4e441f058c4fed2bd279649c68c957f31abb26. Dec 13 09:12:58.633525 containerd[1466]: time="2024-12-13T09:12:58.633402658Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:nfs-server-provisioner-0,Uid:df5bc1bb-74a9-4616-b607-12b595f3df00,Namespace:default,Attempt:0,} returns sandbox id \"6d4e50bc8091002b9b3b07c4ed4e441f058c4fed2bd279649c68c957f31abb26\"" Dec 13 09:12:58.636430 containerd[1466]: time="2024-12-13T09:12:58.636376100Z" level=info msg="PullImage \"registry.k8s.io/sig-storage/nfs-provisioner:v4.0.8\"" Dec 13 09:12:59.240912 kubelet[1773]: E1213 09:12:59.240801 1773 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Dec 13 09:12:59.568688 systemd[1]: Started sshd@8-147.182.199.141:22-92.255.85.188:42390.service - OpenSSH per-connection server daemon (92.255.85.188:42390). Dec 13 09:13:00.252270 kubelet[1773]: E1213 09:13:00.252211 1773 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Dec 13 09:13:00.321288 systemd-networkd[1372]: cali60e51b789ff: Gained IPv6LL Dec 13 09:13:00.885456 sshd[3164]: Connection closed by authenticating user root 92.255.85.188 port 42390 [preauth] Dec 13 09:13:00.890838 systemd[1]: sshd@8-147.182.199.141:22-92.255.85.188:42390.service: Deactivated successfully. 
Dec 13 09:13:01.266224 kubelet[1773]: E1213 09:13:01.265975 1773 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Dec 13 09:13:01.493334 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount315199725.mount: Deactivated successfully. Dec 13 09:13:02.266668 kubelet[1773]: E1213 09:13:02.266596 1773 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Dec 13 09:13:03.267060 kubelet[1773]: E1213 09:13:03.266918 1773 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Dec 13 09:13:04.267949 kubelet[1773]: E1213 09:13:04.267843 1773 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Dec 13 09:13:04.665061 containerd[1466]: time="2024-12-13T09:13:04.664308088Z" level=info msg="ImageCreate event name:\"registry.k8s.io/sig-storage/nfs-provisioner:v4.0.8\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Dec 13 09:13:04.666773 containerd[1466]: time="2024-12-13T09:13:04.666689248Z" level=info msg="stop pulling image registry.k8s.io/sig-storage/nfs-provisioner:v4.0.8: active requests=0, bytes read=91039406" Dec 13 09:13:04.667154 containerd[1466]: time="2024-12-13T09:13:04.667082460Z" level=info msg="ImageCreate event name:\"sha256:fd0b16f70b66b72bcb2f91d556fa33eba02729c44ffc5f2c16130e7f9fbed3c4\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Dec 13 09:13:04.671146 containerd[1466]: time="2024-12-13T09:13:04.671077255Z" level=info msg="ImageCreate event name:\"registry.k8s.io/sig-storage/nfs-provisioner@sha256:c825f3d5e28bde099bd7a3daace28772d412c9157ad47fa752a9ad0baafc118d\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Dec 13 09:13:04.673056 containerd[1466]: time="2024-12-13T09:13:04.672837308Z" level=info msg="Pulled image 
\"registry.k8s.io/sig-storage/nfs-provisioner:v4.0.8\" with image id \"sha256:fd0b16f70b66b72bcb2f91d556fa33eba02729c44ffc5f2c16130e7f9fbed3c4\", repo tag \"registry.k8s.io/sig-storage/nfs-provisioner:v4.0.8\", repo digest \"registry.k8s.io/sig-storage/nfs-provisioner@sha256:c825f3d5e28bde099bd7a3daace28772d412c9157ad47fa752a9ad0baafc118d\", size \"91036984\" in 6.036385094s" Dec 13 09:13:04.673056 containerd[1466]: time="2024-12-13T09:13:04.672907812Z" level=info msg="PullImage \"registry.k8s.io/sig-storage/nfs-provisioner:v4.0.8\" returns image reference \"sha256:fd0b16f70b66b72bcb2f91d556fa33eba02729c44ffc5f2c16130e7f9fbed3c4\"" Dec 13 09:13:04.687388 containerd[1466]: time="2024-12-13T09:13:04.686942048Z" level=info msg="CreateContainer within sandbox \"6d4e50bc8091002b9b3b07c4ed4e441f058c4fed2bd279649c68c957f31abb26\" for container &ContainerMetadata{Name:nfs-server-provisioner,Attempt:0,}" Dec 13 09:13:04.770149 containerd[1466]: time="2024-12-13T09:13:04.769972155Z" level=info msg="CreateContainer within sandbox \"6d4e50bc8091002b9b3b07c4ed4e441f058c4fed2bd279649c68c957f31abb26\" for &ContainerMetadata{Name:nfs-server-provisioner,Attempt:0,} returns container id \"6243fb9cbe4a0733b7cc3937356eaa934ec9932e6c95526fb0546ac5786e9110\"" Dec 13 09:13:04.772319 containerd[1466]: time="2024-12-13T09:13:04.772267619Z" level=info msg="StartContainer for \"6243fb9cbe4a0733b7cc3937356eaa934ec9932e6c95526fb0546ac5786e9110\"" Dec 13 09:13:04.772744 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount3128571001.mount: Deactivated successfully. Dec 13 09:13:04.823328 systemd[1]: Started cri-containerd-6243fb9cbe4a0733b7cc3937356eaa934ec9932e6c95526fb0546ac5786e9110.scope - libcontainer container 6243fb9cbe4a0733b7cc3937356eaa934ec9932e6c95526fb0546ac5786e9110. 
Dec 13 09:13:04.871680 containerd[1466]: time="2024-12-13T09:13:04.871616183Z" level=info msg="StartContainer for \"6243fb9cbe4a0733b7cc3937356eaa934ec9932e6c95526fb0546ac5786e9110\" returns successfully" Dec 13 09:13:05.268296 kubelet[1773]: E1213 09:13:05.268221 1773 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Dec 13 09:13:05.693093 kubelet[1773]: I1213 09:13:05.692877 1773 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="default/nfs-server-provisioner-0" podStartSLOduration=2.646522684 podStartE2EDuration="8.68516431s" podCreationTimestamp="2024-12-13 09:12:57 +0000 UTC" firstStartedPulling="2024-12-13 09:12:58.635588779 +0000 UTC m=+46.258924777" lastFinishedPulling="2024-12-13 09:13:04.674230408 +0000 UTC m=+52.297566403" observedRunningTime="2024-12-13 09:13:05.683462495 +0000 UTC m=+53.306798502" watchObservedRunningTime="2024-12-13 09:13:05.68516431 +0000 UTC m=+53.308500316" Dec 13 09:13:06.269173 kubelet[1773]: E1213 09:13:06.269076 1773 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Dec 13 09:13:07.270294 kubelet[1773]: E1213 09:13:07.270197 1773 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Dec 13 09:13:07.498943 systemd[1]: run-containerd-runc-k8s.io-b949344a28f93acb07738ca6503ac87d3f3ea0f2e5d00944925e76ba901510f8-runc.fGtfhB.mount: Deactivated successfully. 
Dec 13 09:13:08.271316 kubelet[1773]: E1213 09:13:08.271234 1773 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Dec 13 09:13:09.272132 kubelet[1773]: E1213 09:13:09.272042 1773 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Dec 13 09:13:10.273246 kubelet[1773]: E1213 09:13:10.273134 1773 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Dec 13 09:13:11.274516 kubelet[1773]: E1213 09:13:11.274432 1773 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Dec 13 09:13:12.275558 kubelet[1773]: E1213 09:13:12.275469 1773 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Dec 13 09:13:13.202910 kubelet[1773]: E1213 09:13:13.202821 1773 file.go:104] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Dec 13 09:13:13.254915 containerd[1466]: time="2024-12-13T09:13:13.254856860Z" level=info msg="StopPodSandbox for \"bed5d259b98103f9068f9e0015f8444979f27990b3666bfcc605f7d73e5d9426\"" Dec 13 09:13:13.276709 kubelet[1773]: E1213 09:13:13.276619 1773 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Dec 13 09:13:13.438439 containerd[1466]: 2024-12-13 09:13:13.342 [WARNING][3307] cni-plugin/k8s.go 572: CNI_CONTAINERID does not match WorkloadEndpoint ContainerID, don't delete WEP. 
ContainerID="bed5d259b98103f9068f9e0015f8444979f27990b3666bfcc605f7d73e5d9426" WorkloadEndpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"147.182.199.141-k8s-csi--node--driver--znlc8-eth0", GenerateName:"csi-node-driver-", Namespace:"calico-system", SelfLink:"", UID:"2ce4350c-2296-46ed-9e25-9ab657a354cd", ResourceVersion:"1133", Generation:0, CreationTimestamp:time.Date(2024, time.December, 13, 9, 12, 13, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"app.kubernetes.io/name":"csi-node-driver", "controller-revision-hash":"56747c9949", "k8s-app":"csi-node-driver", "name":"csi-node-driver", "pod-template-generation":"1", "projectcalico.org/namespace":"calico-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"csi-node-driver"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"147.182.199.141", ContainerID:"5a242dae19c5d6ba67f5eb2d0f90e926f9417525879b6ff55d1f83ba09c4e2ee", Pod:"csi-node-driver-znlc8", Endpoint:"eth0", ServiceAccountName:"csi-node-driver", IPNetworks:[]string{"192.168.107.193/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-system", "ksa.calico-system.csi-node-driver"}, InterfaceName:"calibadd869fab6", MAC:"", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil)}} Dec 13 09:13:13.438439 containerd[1466]: 2024-12-13 09:13:13.343 [INFO][3307] cni-plugin/k8s.go 608: Cleaning up netns ContainerID="bed5d259b98103f9068f9e0015f8444979f27990b3666bfcc605f7d73e5d9426" Dec 13 09:13:13.438439 containerd[1466]: 2024-12-13 09:13:13.343 [INFO][3307] cni-plugin/dataplane_linux.go 555: CleanUpNamespace called with no netns name, ignoring. 
ContainerID="bed5d259b98103f9068f9e0015f8444979f27990b3666bfcc605f7d73e5d9426" iface="eth0" netns="" Dec 13 09:13:13.438439 containerd[1466]: 2024-12-13 09:13:13.343 [INFO][3307] cni-plugin/k8s.go 615: Releasing IP address(es) ContainerID="bed5d259b98103f9068f9e0015f8444979f27990b3666bfcc605f7d73e5d9426" Dec 13 09:13:13.438439 containerd[1466]: 2024-12-13 09:13:13.343 [INFO][3307] cni-plugin/utils.go 188: Calico CNI releasing IP address ContainerID="bed5d259b98103f9068f9e0015f8444979f27990b3666bfcc605f7d73e5d9426" Dec 13 09:13:13.438439 containerd[1466]: 2024-12-13 09:13:13.391 [INFO][3314] ipam/ipam_plugin.go 412: Releasing address using handleID ContainerID="bed5d259b98103f9068f9e0015f8444979f27990b3666bfcc605f7d73e5d9426" HandleID="k8s-pod-network.bed5d259b98103f9068f9e0015f8444979f27990b3666bfcc605f7d73e5d9426" Workload="147.182.199.141-k8s-csi--node--driver--znlc8-eth0" Dec 13 09:13:13.438439 containerd[1466]: 2024-12-13 09:13:13.392 [INFO][3314] ipam/ipam_plugin.go 353: About to acquire host-wide IPAM lock. Dec 13 09:13:13.438439 containerd[1466]: 2024-12-13 09:13:13.392 [INFO][3314] ipam/ipam_plugin.go 368: Acquired host-wide IPAM lock. Dec 13 09:13:13.438439 containerd[1466]: 2024-12-13 09:13:13.408 [WARNING][3314] ipam/ipam_plugin.go 429: Asked to release address but it doesn't exist. 
Ignoring ContainerID="bed5d259b98103f9068f9e0015f8444979f27990b3666bfcc605f7d73e5d9426" HandleID="k8s-pod-network.bed5d259b98103f9068f9e0015f8444979f27990b3666bfcc605f7d73e5d9426" Workload="147.182.199.141-k8s-csi--node--driver--znlc8-eth0" Dec 13 09:13:13.438439 containerd[1466]: 2024-12-13 09:13:13.408 [INFO][3314] ipam/ipam_plugin.go 440: Releasing address using workloadID ContainerID="bed5d259b98103f9068f9e0015f8444979f27990b3666bfcc605f7d73e5d9426" HandleID="k8s-pod-network.bed5d259b98103f9068f9e0015f8444979f27990b3666bfcc605f7d73e5d9426" Workload="147.182.199.141-k8s-csi--node--driver--znlc8-eth0" Dec 13 09:13:13.438439 containerd[1466]: 2024-12-13 09:13:13.431 [INFO][3314] ipam/ipam_plugin.go 374: Released host-wide IPAM lock. Dec 13 09:13:13.438439 containerd[1466]: 2024-12-13 09:13:13.435 [INFO][3307] cni-plugin/k8s.go 621: Teardown processing complete. ContainerID="bed5d259b98103f9068f9e0015f8444979f27990b3666bfcc605f7d73e5d9426" Dec 13 09:13:13.438439 containerd[1466]: time="2024-12-13T09:13:13.438259558Z" level=info msg="TearDown network for sandbox \"bed5d259b98103f9068f9e0015f8444979f27990b3666bfcc605f7d73e5d9426\" successfully" Dec 13 09:13:13.438439 containerd[1466]: time="2024-12-13T09:13:13.438294954Z" level=info msg="StopPodSandbox for \"bed5d259b98103f9068f9e0015f8444979f27990b3666bfcc605f7d73e5d9426\" returns successfully" Dec 13 09:13:13.511471 containerd[1466]: time="2024-12-13T09:13:13.510902066Z" level=info msg="RemovePodSandbox for \"bed5d259b98103f9068f9e0015f8444979f27990b3666bfcc605f7d73e5d9426\"" Dec 13 09:13:13.511471 containerd[1466]: time="2024-12-13T09:13:13.510963029Z" level=info msg="Forcibly stopping sandbox \"bed5d259b98103f9068f9e0015f8444979f27990b3666bfcc605f7d73e5d9426\"" Dec 13 09:13:13.680572 containerd[1466]: 2024-12-13 09:13:13.615 [WARNING][3334] cni-plugin/k8s.go 572: CNI_CONTAINERID does not match WorkloadEndpoint ContainerID, don't delete WEP. 
ContainerID="bed5d259b98103f9068f9e0015f8444979f27990b3666bfcc605f7d73e5d9426" WorkloadEndpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"147.182.199.141-k8s-csi--node--driver--znlc8-eth0", GenerateName:"csi-node-driver-", Namespace:"calico-system", SelfLink:"", UID:"2ce4350c-2296-46ed-9e25-9ab657a354cd", ResourceVersion:"1133", Generation:0, CreationTimestamp:time.Date(2024, time.December, 13, 9, 12, 13, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"app.kubernetes.io/name":"csi-node-driver", "controller-revision-hash":"56747c9949", "k8s-app":"csi-node-driver", "name":"csi-node-driver", "pod-template-generation":"1", "projectcalico.org/namespace":"calico-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"csi-node-driver"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"147.182.199.141", ContainerID:"5a242dae19c5d6ba67f5eb2d0f90e926f9417525879b6ff55d1f83ba09c4e2ee", Pod:"csi-node-driver-znlc8", Endpoint:"eth0", ServiceAccountName:"csi-node-driver", IPNetworks:[]string{"192.168.107.193/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-system", "ksa.calico-system.csi-node-driver"}, InterfaceName:"calibadd869fab6", MAC:"", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil)}} Dec 13 09:13:13.680572 containerd[1466]: 2024-12-13 09:13:13.616 [INFO][3334] cni-plugin/k8s.go 608: Cleaning up netns ContainerID="bed5d259b98103f9068f9e0015f8444979f27990b3666bfcc605f7d73e5d9426" Dec 13 09:13:13.680572 containerd[1466]: 2024-12-13 09:13:13.616 [INFO][3334] cni-plugin/dataplane_linux.go 555: CleanUpNamespace called with no netns name, ignoring. 
ContainerID="bed5d259b98103f9068f9e0015f8444979f27990b3666bfcc605f7d73e5d9426" iface="eth0" netns="" Dec 13 09:13:13.680572 containerd[1466]: 2024-12-13 09:13:13.616 [INFO][3334] cni-plugin/k8s.go 615: Releasing IP address(es) ContainerID="bed5d259b98103f9068f9e0015f8444979f27990b3666bfcc605f7d73e5d9426" Dec 13 09:13:13.680572 containerd[1466]: 2024-12-13 09:13:13.616 [INFO][3334] cni-plugin/utils.go 188: Calico CNI releasing IP address ContainerID="bed5d259b98103f9068f9e0015f8444979f27990b3666bfcc605f7d73e5d9426" Dec 13 09:13:13.680572 containerd[1466]: 2024-12-13 09:13:13.649 [INFO][3341] ipam/ipam_plugin.go 412: Releasing address using handleID ContainerID="bed5d259b98103f9068f9e0015f8444979f27990b3666bfcc605f7d73e5d9426" HandleID="k8s-pod-network.bed5d259b98103f9068f9e0015f8444979f27990b3666bfcc605f7d73e5d9426" Workload="147.182.199.141-k8s-csi--node--driver--znlc8-eth0" Dec 13 09:13:13.680572 containerd[1466]: 2024-12-13 09:13:13.649 [INFO][3341] ipam/ipam_plugin.go 353: About to acquire host-wide IPAM lock. Dec 13 09:13:13.680572 containerd[1466]: 2024-12-13 09:13:13.649 [INFO][3341] ipam/ipam_plugin.go 368: Acquired host-wide IPAM lock. Dec 13 09:13:13.680572 containerd[1466]: 2024-12-13 09:13:13.662 [WARNING][3341] ipam/ipam_plugin.go 429: Asked to release address but it doesn't exist. 
Ignoring ContainerID="bed5d259b98103f9068f9e0015f8444979f27990b3666bfcc605f7d73e5d9426" HandleID="k8s-pod-network.bed5d259b98103f9068f9e0015f8444979f27990b3666bfcc605f7d73e5d9426" Workload="147.182.199.141-k8s-csi--node--driver--znlc8-eth0" Dec 13 09:13:13.680572 containerd[1466]: 2024-12-13 09:13:13.662 [INFO][3341] ipam/ipam_plugin.go 440: Releasing address using workloadID ContainerID="bed5d259b98103f9068f9e0015f8444979f27990b3666bfcc605f7d73e5d9426" HandleID="k8s-pod-network.bed5d259b98103f9068f9e0015f8444979f27990b3666bfcc605f7d73e5d9426" Workload="147.182.199.141-k8s-csi--node--driver--znlc8-eth0" Dec 13 09:13:13.680572 containerd[1466]: 2024-12-13 09:13:13.676 [INFO][3341] ipam/ipam_plugin.go 374: Released host-wide IPAM lock. Dec 13 09:13:13.680572 containerd[1466]: 2024-12-13 09:13:13.678 [INFO][3334] cni-plugin/k8s.go 621: Teardown processing complete. ContainerID="bed5d259b98103f9068f9e0015f8444979f27990b3666bfcc605f7d73e5d9426" Dec 13 09:13:13.683074 containerd[1466]: time="2024-12-13T09:13:13.681562417Z" level=info msg="TearDown network for sandbox \"bed5d259b98103f9068f9e0015f8444979f27990b3666bfcc605f7d73e5d9426\" successfully" Dec 13 09:13:13.718545 containerd[1466]: time="2024-12-13T09:13:13.718443699Z" level=warning msg="Failed to get podSandbox status for container event for sandboxID \"bed5d259b98103f9068f9e0015f8444979f27990b3666bfcc605f7d73e5d9426\": an error occurred when try to find sandbox: not found. Sending the event with nil podSandboxStatus." 
Dec 13 09:13:13.718545 containerd[1466]: time="2024-12-13T09:13:13.718550973Z" level=info msg="RemovePodSandbox \"bed5d259b98103f9068f9e0015f8444979f27990b3666bfcc605f7d73e5d9426\" returns successfully" Dec 13 09:13:13.719958 containerd[1466]: time="2024-12-13T09:13:13.719483995Z" level=info msg="StopPodSandbox for \"8f4d337793bf7d4534bac769e108fd5f8e6417a4fab3022eee81c9ed54465160\"" Dec 13 09:13:13.875414 containerd[1466]: 2024-12-13 09:13:13.802 [WARNING][3359] cni-plugin/k8s.go 572: CNI_CONTAINERID does not match WorkloadEndpoint ContainerID, don't delete WEP. ContainerID="8f4d337793bf7d4534bac769e108fd5f8e6417a4fab3022eee81c9ed54465160" WorkloadEndpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"147.182.199.141-k8s-nginx--deployment--8587fbcb89--ghx4z-eth0", GenerateName:"nginx-deployment-8587fbcb89-", Namespace:"default", SelfLink:"", UID:"6397fde0-ff3d-48f0-9684-9b01322db26e", ResourceVersion:"1118", Generation:0, CreationTimestamp:time.Date(2024, time.December, 13, 9, 12, 27, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"app":"nginx", "pod-template-hash":"8587fbcb89", "projectcalico.org/namespace":"default", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"default"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"147.182.199.141", ContainerID:"b77f22898390184c634d85928f715ebcacb085a5175214b70d57e3a3315860d6", Pod:"nginx-deployment-8587fbcb89-ghx4z", Endpoint:"eth0", ServiceAccountName:"default", IPNetworks:[]string{"192.168.107.194/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.default", "ksa.default.default"}, InterfaceName:"cali6bd65192d62", MAC:"", 
Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil)}} Dec 13 09:13:13.875414 containerd[1466]: 2024-12-13 09:13:13.803 [INFO][3359] cni-plugin/k8s.go 608: Cleaning up netns ContainerID="8f4d337793bf7d4534bac769e108fd5f8e6417a4fab3022eee81c9ed54465160" Dec 13 09:13:13.875414 containerd[1466]: 2024-12-13 09:13:13.803 [INFO][3359] cni-plugin/dataplane_linux.go 555: CleanUpNamespace called with no netns name, ignoring. ContainerID="8f4d337793bf7d4534bac769e108fd5f8e6417a4fab3022eee81c9ed54465160" iface="eth0" netns="" Dec 13 09:13:13.875414 containerd[1466]: 2024-12-13 09:13:13.803 [INFO][3359] cni-plugin/k8s.go 615: Releasing IP address(es) ContainerID="8f4d337793bf7d4534bac769e108fd5f8e6417a4fab3022eee81c9ed54465160" Dec 13 09:13:13.875414 containerd[1466]: 2024-12-13 09:13:13.803 [INFO][3359] cni-plugin/utils.go 188: Calico CNI releasing IP address ContainerID="8f4d337793bf7d4534bac769e108fd5f8e6417a4fab3022eee81c9ed54465160" Dec 13 09:13:13.875414 containerd[1466]: 2024-12-13 09:13:13.842 [INFO][3365] ipam/ipam_plugin.go 412: Releasing address using handleID ContainerID="8f4d337793bf7d4534bac769e108fd5f8e6417a4fab3022eee81c9ed54465160" HandleID="k8s-pod-network.8f4d337793bf7d4534bac769e108fd5f8e6417a4fab3022eee81c9ed54465160" Workload="147.182.199.141-k8s-nginx--deployment--8587fbcb89--ghx4z-eth0" Dec 13 09:13:13.875414 containerd[1466]: 2024-12-13 09:13:13.843 [INFO][3365] ipam/ipam_plugin.go 353: About to acquire host-wide IPAM lock. Dec 13 09:13:13.875414 containerd[1466]: 2024-12-13 09:13:13.843 [INFO][3365] ipam/ipam_plugin.go 368: Acquired host-wide IPAM lock. Dec 13 09:13:13.875414 containerd[1466]: 2024-12-13 09:13:13.864 [WARNING][3365] ipam/ipam_plugin.go 429: Asked to release address but it doesn't exist. 
Ignoring ContainerID="8f4d337793bf7d4534bac769e108fd5f8e6417a4fab3022eee81c9ed54465160" HandleID="k8s-pod-network.8f4d337793bf7d4534bac769e108fd5f8e6417a4fab3022eee81c9ed54465160" Workload="147.182.199.141-k8s-nginx--deployment--8587fbcb89--ghx4z-eth0" Dec 13 09:13:13.875414 containerd[1466]: 2024-12-13 09:13:13.864 [INFO][3365] ipam/ipam_plugin.go 440: Releasing address using workloadID ContainerID="8f4d337793bf7d4534bac769e108fd5f8e6417a4fab3022eee81c9ed54465160" HandleID="k8s-pod-network.8f4d337793bf7d4534bac769e108fd5f8e6417a4fab3022eee81c9ed54465160" Workload="147.182.199.141-k8s-nginx--deployment--8587fbcb89--ghx4z-eth0" Dec 13 09:13:13.875414 containerd[1466]: 2024-12-13 09:13:13.869 [INFO][3365] ipam/ipam_plugin.go 374: Released host-wide IPAM lock. Dec 13 09:13:13.875414 containerd[1466]: 2024-12-13 09:13:13.870 [INFO][3359] cni-plugin/k8s.go 621: Teardown processing complete. ContainerID="8f4d337793bf7d4534bac769e108fd5f8e6417a4fab3022eee81c9ed54465160" Dec 13 09:13:13.876410 containerd[1466]: time="2024-12-13T09:13:13.875493518Z" level=info msg="TearDown network for sandbox \"8f4d337793bf7d4534bac769e108fd5f8e6417a4fab3022eee81c9ed54465160\" successfully" Dec 13 09:13:13.876410 containerd[1466]: time="2024-12-13T09:13:13.875534261Z" level=info msg="StopPodSandbox for \"8f4d337793bf7d4534bac769e108fd5f8e6417a4fab3022eee81c9ed54465160\" returns successfully" Dec 13 09:13:13.877892 containerd[1466]: time="2024-12-13T09:13:13.877311092Z" level=info msg="RemovePodSandbox for \"8f4d337793bf7d4534bac769e108fd5f8e6417a4fab3022eee81c9ed54465160\"" Dec 13 09:13:13.877892 containerd[1466]: time="2024-12-13T09:13:13.877367766Z" level=info msg="Forcibly stopping sandbox \"8f4d337793bf7d4534bac769e108fd5f8e6417a4fab3022eee81c9ed54465160\"" Dec 13 09:13:14.026316 containerd[1466]: 2024-12-13 09:13:13.949 [WARNING][3383] cni-plugin/k8s.go 572: CNI_CONTAINERID does not match WorkloadEndpoint ContainerID, don't delete WEP. 
ContainerID="8f4d337793bf7d4534bac769e108fd5f8e6417a4fab3022eee81c9ed54465160" WorkloadEndpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"147.182.199.141-k8s-nginx--deployment--8587fbcb89--ghx4z-eth0", GenerateName:"nginx-deployment-8587fbcb89-", Namespace:"default", SelfLink:"", UID:"6397fde0-ff3d-48f0-9684-9b01322db26e", ResourceVersion:"1118", Generation:0, CreationTimestamp:time.Date(2024, time.December, 13, 9, 12, 27, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"app":"nginx", "pod-template-hash":"8587fbcb89", "projectcalico.org/namespace":"default", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"default"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"147.182.199.141", ContainerID:"b77f22898390184c634d85928f715ebcacb085a5175214b70d57e3a3315860d6", Pod:"nginx-deployment-8587fbcb89-ghx4z", Endpoint:"eth0", ServiceAccountName:"default", IPNetworks:[]string{"192.168.107.194/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.default", "ksa.default.default"}, InterfaceName:"cali6bd65192d62", MAC:"", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil)}} Dec 13 09:13:14.026316 containerd[1466]: 2024-12-13 09:13:13.949 [INFO][3383] cni-plugin/k8s.go 608: Cleaning up netns ContainerID="8f4d337793bf7d4534bac769e108fd5f8e6417a4fab3022eee81c9ed54465160" Dec 13 09:13:14.026316 containerd[1466]: 2024-12-13 09:13:13.949 [INFO][3383] cni-plugin/dataplane_linux.go 555: CleanUpNamespace called with no netns name, ignoring. 
ContainerID="8f4d337793bf7d4534bac769e108fd5f8e6417a4fab3022eee81c9ed54465160" iface="eth0" netns="" Dec 13 09:13:14.026316 containerd[1466]: 2024-12-13 09:13:13.949 [INFO][3383] cni-plugin/k8s.go 615: Releasing IP address(es) ContainerID="8f4d337793bf7d4534bac769e108fd5f8e6417a4fab3022eee81c9ed54465160" Dec 13 09:13:14.026316 containerd[1466]: 2024-12-13 09:13:13.949 [INFO][3383] cni-plugin/utils.go 188: Calico CNI releasing IP address ContainerID="8f4d337793bf7d4534bac769e108fd5f8e6417a4fab3022eee81c9ed54465160" Dec 13 09:13:14.026316 containerd[1466]: 2024-12-13 09:13:13.984 [INFO][3389] ipam/ipam_plugin.go 412: Releasing address using handleID ContainerID="8f4d337793bf7d4534bac769e108fd5f8e6417a4fab3022eee81c9ed54465160" HandleID="k8s-pod-network.8f4d337793bf7d4534bac769e108fd5f8e6417a4fab3022eee81c9ed54465160" Workload="147.182.199.141-k8s-nginx--deployment--8587fbcb89--ghx4z-eth0" Dec 13 09:13:14.026316 containerd[1466]: 2024-12-13 09:13:13.984 [INFO][3389] ipam/ipam_plugin.go 353: About to acquire host-wide IPAM lock. Dec 13 09:13:14.026316 containerd[1466]: 2024-12-13 09:13:13.985 [INFO][3389] ipam/ipam_plugin.go 368: Acquired host-wide IPAM lock. Dec 13 09:13:14.026316 containerd[1466]: 2024-12-13 09:13:14.008 [WARNING][3389] ipam/ipam_plugin.go 429: Asked to release address but it doesn't exist. 
Ignoring ContainerID="8f4d337793bf7d4534bac769e108fd5f8e6417a4fab3022eee81c9ed54465160" HandleID="k8s-pod-network.8f4d337793bf7d4534bac769e108fd5f8e6417a4fab3022eee81c9ed54465160" Workload="147.182.199.141-k8s-nginx--deployment--8587fbcb89--ghx4z-eth0" Dec 13 09:13:14.026316 containerd[1466]: 2024-12-13 09:13:14.008 [INFO][3389] ipam/ipam_plugin.go 440: Releasing address using workloadID ContainerID="8f4d337793bf7d4534bac769e108fd5f8e6417a4fab3022eee81c9ed54465160" HandleID="k8s-pod-network.8f4d337793bf7d4534bac769e108fd5f8e6417a4fab3022eee81c9ed54465160" Workload="147.182.199.141-k8s-nginx--deployment--8587fbcb89--ghx4z-eth0" Dec 13 09:13:14.026316 containerd[1466]: 2024-12-13 09:13:14.022 [INFO][3389] ipam/ipam_plugin.go 374: Released host-wide IPAM lock. Dec 13 09:13:14.026316 containerd[1466]: 2024-12-13 09:13:14.024 [INFO][3383] cni-plugin/k8s.go 621: Teardown processing complete. ContainerID="8f4d337793bf7d4534bac769e108fd5f8e6417a4fab3022eee81c9ed54465160" Dec 13 09:13:14.027713 containerd[1466]: time="2024-12-13T09:13:14.026380482Z" level=info msg="TearDown network for sandbox \"8f4d337793bf7d4534bac769e108fd5f8e6417a4fab3022eee81c9ed54465160\" successfully" Dec 13 09:13:14.029759 containerd[1466]: time="2024-12-13T09:13:14.029670993Z" level=warning msg="Failed to get podSandbox status for container event for sandboxID \"8f4d337793bf7d4534bac769e108fd5f8e6417a4fab3022eee81c9ed54465160\": an error occurred when try to find sandbox: not found. Sending the event with nil podSandboxStatus." 
Dec 13 09:13:14.029759 containerd[1466]: time="2024-12-13T09:13:14.029757580Z" level=info msg="RemovePodSandbox \"8f4d337793bf7d4534bac769e108fd5f8e6417a4fab3022eee81c9ed54465160\" returns successfully" Dec 13 09:13:14.277506 kubelet[1773]: E1213 09:13:14.277253 1773 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Dec 13 09:13:14.961670 systemd[1]: Created slice kubepods-besteffort-podb542d284_8543_496f_ad7d_d10cfdbdd7de.slice - libcontainer container kubepods-besteffort-podb542d284_8543_496f_ad7d_d10cfdbdd7de.slice. Dec 13 09:13:15.019130 kubelet[1773]: I1213 09:13:15.018588 1773 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"pvc-35ae5f82-3d6e-4a30-af8e-4090b32b3551\" (UniqueName: \"kubernetes.io/nfs/b542d284-8543-496f-ad7d-d10cfdbdd7de-pvc-35ae5f82-3d6e-4a30-af8e-4090b32b3551\") pod \"test-pod-1\" (UID: \"b542d284-8543-496f-ad7d-d10cfdbdd7de\") " pod="default/test-pod-1" Dec 13 09:13:15.019130 kubelet[1773]: I1213 09:13:15.018704 1773 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-5zb27\" (UniqueName: \"kubernetes.io/projected/b542d284-8543-496f-ad7d-d10cfdbdd7de-kube-api-access-5zb27\") pod \"test-pod-1\" (UID: \"b542d284-8543-496f-ad7d-d10cfdbdd7de\") " pod="default/test-pod-1" Dec 13 09:13:15.185575 kernel: FS-Cache: Loaded Dec 13 09:13:15.278792 kubelet[1773]: E1213 09:13:15.278432 1773 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Dec 13 09:13:15.283434 kernel: RPC: Registered named UNIX socket transport module. Dec 13 09:13:15.283611 kernel: RPC: Registered udp transport module. Dec 13 09:13:15.283645 kernel: RPC: Registered tcp transport module. Dec 13 09:13:15.284302 kernel: RPC: Registered tcp-with-tls transport module. 
Dec 13 09:13:15.285196 kernel: RPC: Registered tcp NFSv4.1 backchannel transport module. Dec 13 09:13:15.727404 kernel: NFS: Registering the id_resolver key type Dec 13 09:13:15.729198 kernel: Key type id_resolver registered Dec 13 09:13:15.732764 kernel: Key type id_legacy registered Dec 13 09:13:15.798398 nfsidmap[3419]: nss_getpwnam: name 'root@nfs-server-provisioner.default.svc.cluster.local' does not map into domain '2.1-7-9f5b9bd84f' Dec 13 09:13:15.804316 nfsidmap[3420]: nss_name_to_gid: name 'root@nfs-server-provisioner.default.svc.cluster.local' does not map into domain '2.1-7-9f5b9bd84f' Dec 13 09:13:15.881937 containerd[1466]: time="2024-12-13T09:13:15.881816824Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:test-pod-1,Uid:b542d284-8543-496f-ad7d-d10cfdbdd7de,Namespace:default,Attempt:0,}" Dec 13 09:13:16.140894 systemd-networkd[1372]: cali5ec59c6bf6e: Link UP Dec 13 09:13:16.148880 systemd-networkd[1372]: cali5ec59c6bf6e: Gained carrier Dec 13 09:13:16.167119 containerd[1466]: 2024-12-13 09:13:15.964 [INFO][3422] cni-plugin/plugin.go 325: Calico CNI found existing endpoint: &{{WorkloadEndpoint projectcalico.org/v3} {147.182.199.141-k8s-test--pod--1-eth0 default b542d284-8543-496f-ad7d-d10cfdbdd7de 1258 0 2024-12-13 09:12:59 +0000 UTC map[projectcalico.org/namespace:default projectcalico.org/orchestrator:k8s projectcalico.org/serviceaccount:default] map[] [] [] []} {k8s 147.182.199.141 test-pod-1 eth0 default [] [] [kns.default ksa.default.default] cali5ec59c6bf6e [] []}} ContainerID="df0030699c52045bbf56ab473b5ee405e83161b643413c44e28381fd5778a851" Namespace="default" Pod="test-pod-1" WorkloadEndpoint="147.182.199.141-k8s-test--pod--1-" Dec 13 09:13:16.167119 containerd[1466]: 2024-12-13 09:13:15.964 [INFO][3422] cni-plugin/k8s.go 77: Extracted identifiers for CmdAddK8s ContainerID="df0030699c52045bbf56ab473b5ee405e83161b643413c44e28381fd5778a851" Namespace="default" Pod="test-pod-1" WorkloadEndpoint="147.182.199.141-k8s-test--pod--1-eth0" 
Dec 13 09:13:16.167119 containerd[1466]: 2024-12-13 09:13:16.015 [INFO][3432] ipam/ipam_plugin.go 225: Calico CNI IPAM request count IPv4=1 IPv6=0 ContainerID="df0030699c52045bbf56ab473b5ee405e83161b643413c44e28381fd5778a851" HandleID="k8s-pod-network.df0030699c52045bbf56ab473b5ee405e83161b643413c44e28381fd5778a851" Workload="147.182.199.141-k8s-test--pod--1-eth0" Dec 13 09:13:16.167119 containerd[1466]: 2024-12-13 09:13:16.046 [INFO][3432] ipam/ipam_plugin.go 265: Auto assigning IP ContainerID="df0030699c52045bbf56ab473b5ee405e83161b643413c44e28381fd5778a851" HandleID="k8s-pod-network.df0030699c52045bbf56ab473b5ee405e83161b643413c44e28381fd5778a851" Workload="147.182.199.141-k8s-test--pod--1-eth0" assignArgs=ipam.AutoAssignArgs{Num4:1, Num6:0, HandleID:(*string)(0xc000293030), Attrs:map[string]string{"namespace":"default", "node":"147.182.199.141", "pod":"test-pod-1", "timestamp":"2024-12-13 09:13:16.015947401 +0000 UTC"}, Hostname:"147.182.199.141", IPv4Pools:[]net.IPNet{}, IPv6Pools:[]net.IPNet{}, MaxBlocksPerHost:0, HostReservedAttrIPv4s:(*ipam.HostReservedAttr)(nil), HostReservedAttrIPv6s:(*ipam.HostReservedAttr)(nil), IntendedUse:"Workload"} Dec 13 09:13:16.167119 containerd[1466]: 2024-12-13 09:13:16.046 [INFO][3432] ipam/ipam_plugin.go 353: About to acquire host-wide IPAM lock. Dec 13 09:13:16.167119 containerd[1466]: 2024-12-13 09:13:16.046 [INFO][3432] ipam/ipam_plugin.go 368: Acquired host-wide IPAM lock. 
Dec 13 09:13:16.167119 containerd[1466]: 2024-12-13 09:13:16.046 [INFO][3432] ipam/ipam.go 107: Auto-assign 1 ipv4, 0 ipv6 addrs for host '147.182.199.141' Dec 13 09:13:16.167119 containerd[1466]: 2024-12-13 09:13:16.062 [INFO][3432] ipam/ipam.go 660: Looking up existing affinities for host handle="k8s-pod-network.df0030699c52045bbf56ab473b5ee405e83161b643413c44e28381fd5778a851" host="147.182.199.141" Dec 13 09:13:16.167119 containerd[1466]: 2024-12-13 09:13:16.071 [INFO][3432] ipam/ipam.go 372: Looking up existing affinities for host host="147.182.199.141" Dec 13 09:13:16.167119 containerd[1466]: 2024-12-13 09:13:16.086 [INFO][3432] ipam/ipam.go 489: Trying affinity for 192.168.107.192/26 host="147.182.199.141" Dec 13 09:13:16.167119 containerd[1466]: 2024-12-13 09:13:16.090 [INFO][3432] ipam/ipam.go 155: Attempting to load block cidr=192.168.107.192/26 host="147.182.199.141" Dec 13 09:13:16.167119 containerd[1466]: 2024-12-13 09:13:16.096 [INFO][3432] ipam/ipam.go 232: Affinity is confirmed and block has been loaded cidr=192.168.107.192/26 host="147.182.199.141" Dec 13 09:13:16.167119 containerd[1466]: 2024-12-13 09:13:16.097 [INFO][3432] ipam/ipam.go 1180: Attempting to assign 1 addresses from block block=192.168.107.192/26 handle="k8s-pod-network.df0030699c52045bbf56ab473b5ee405e83161b643413c44e28381fd5778a851" host="147.182.199.141" Dec 13 09:13:16.167119 containerd[1466]: 2024-12-13 09:13:16.103 [INFO][3432] ipam/ipam.go 1685: Creating new handle: k8s-pod-network.df0030699c52045bbf56ab473b5ee405e83161b643413c44e28381fd5778a851 Dec 13 09:13:16.167119 containerd[1466]: 2024-12-13 09:13:16.113 [INFO][3432] ipam/ipam.go 1203: Writing block in order to claim IPs block=192.168.107.192/26 handle="k8s-pod-network.df0030699c52045bbf56ab473b5ee405e83161b643413c44e28381fd5778a851" host="147.182.199.141" Dec 13 09:13:16.167119 containerd[1466]: 2024-12-13 09:13:16.130 [INFO][3432] ipam/ipam.go 1216: Successfully claimed IPs: [192.168.107.196/26] block=192.168.107.192/26 
handle="k8s-pod-network.df0030699c52045bbf56ab473b5ee405e83161b643413c44e28381fd5778a851" host="147.182.199.141" Dec 13 09:13:16.167119 containerd[1466]: 2024-12-13 09:13:16.131 [INFO][3432] ipam/ipam.go 847: Auto-assigned 1 out of 1 IPv4s: [192.168.107.196/26] handle="k8s-pod-network.df0030699c52045bbf56ab473b5ee405e83161b643413c44e28381fd5778a851" host="147.182.199.141" Dec 13 09:13:16.167119 containerd[1466]: 2024-12-13 09:13:16.131 [INFO][3432] ipam/ipam_plugin.go 374: Released host-wide IPAM lock. Dec 13 09:13:16.167119 containerd[1466]: 2024-12-13 09:13:16.131 [INFO][3432] ipam/ipam_plugin.go 283: Calico CNI IPAM assigned addresses IPv4=[192.168.107.196/26] IPv6=[] ContainerID="df0030699c52045bbf56ab473b5ee405e83161b643413c44e28381fd5778a851" HandleID="k8s-pod-network.df0030699c52045bbf56ab473b5ee405e83161b643413c44e28381fd5778a851" Workload="147.182.199.141-k8s-test--pod--1-eth0" Dec 13 09:13:16.167119 containerd[1466]: 2024-12-13 09:13:16.133 [INFO][3422] cni-plugin/k8s.go 386: Populated endpoint ContainerID="df0030699c52045bbf56ab473b5ee405e83161b643413c44e28381fd5778a851" Namespace="default" Pod="test-pod-1" WorkloadEndpoint="147.182.199.141-k8s-test--pod--1-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"147.182.199.141-k8s-test--pod--1-eth0", GenerateName:"", Namespace:"default", SelfLink:"", UID:"b542d284-8543-496f-ad7d-d10cfdbdd7de", ResourceVersion:"1258", Generation:0, CreationTimestamp:time.Date(2024, time.December, 13, 9, 12, 59, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"projectcalico.org/namespace":"default", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"default"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, 
Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"147.182.199.141", ContainerID:"", Pod:"test-pod-1", Endpoint:"eth0", ServiceAccountName:"default", IPNetworks:[]string{"192.168.107.196/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.default", "ksa.default.default"}, InterfaceName:"cali5ec59c6bf6e", MAC:"", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil)}} Dec 13 09:13:16.171320 containerd[1466]: 2024-12-13 09:13:16.134 [INFO][3422] cni-plugin/k8s.go 387: Calico CNI using IPs: [192.168.107.196/32] ContainerID="df0030699c52045bbf56ab473b5ee405e83161b643413c44e28381fd5778a851" Namespace="default" Pod="test-pod-1" WorkloadEndpoint="147.182.199.141-k8s-test--pod--1-eth0" Dec 13 09:13:16.171320 containerd[1466]: 2024-12-13 09:13:16.134 [INFO][3422] cni-plugin/dataplane_linux.go 69: Setting the host side veth name to cali5ec59c6bf6e ContainerID="df0030699c52045bbf56ab473b5ee405e83161b643413c44e28381fd5778a851" Namespace="default" Pod="test-pod-1" WorkloadEndpoint="147.182.199.141-k8s-test--pod--1-eth0" Dec 13 09:13:16.171320 containerd[1466]: 2024-12-13 09:13:16.146 [INFO][3422] cni-plugin/dataplane_linux.go 508: Disabling IPv4 forwarding ContainerID="df0030699c52045bbf56ab473b5ee405e83161b643413c44e28381fd5778a851" Namespace="default" Pod="test-pod-1" WorkloadEndpoint="147.182.199.141-k8s-test--pod--1-eth0" Dec 13 09:13:16.171320 containerd[1466]: 2024-12-13 09:13:16.149 [INFO][3422] cni-plugin/k8s.go 414: Added Mac, interface name, and active container ID to endpoint ContainerID="df0030699c52045bbf56ab473b5ee405e83161b643413c44e28381fd5778a851" Namespace="default" Pod="test-pod-1" WorkloadEndpoint="147.182.199.141-k8s-test--pod--1-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"147.182.199.141-k8s-test--pod--1-eth0", GenerateName:"", Namespace:"default", SelfLink:"", 
UID:"b542d284-8543-496f-ad7d-d10cfdbdd7de", ResourceVersion:"1258", Generation:0, CreationTimestamp:time.Date(2024, time.December, 13, 9, 12, 59, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"projectcalico.org/namespace":"default", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"default"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"147.182.199.141", ContainerID:"df0030699c52045bbf56ab473b5ee405e83161b643413c44e28381fd5778a851", Pod:"test-pod-1", Endpoint:"eth0", ServiceAccountName:"default", IPNetworks:[]string{"192.168.107.196/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.default", "ksa.default.default"}, InterfaceName:"cali5ec59c6bf6e", MAC:"9a:70:ec:5a:9b:0a", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil)}} Dec 13 09:13:16.171320 containerd[1466]: 2024-12-13 09:13:16.164 [INFO][3422] cni-plugin/k8s.go 500: Wrote updated endpoint to datastore ContainerID="df0030699c52045bbf56ab473b5ee405e83161b643413c44e28381fd5778a851" Namespace="default" Pod="test-pod-1" WorkloadEndpoint="147.182.199.141-k8s-test--pod--1-eth0" Dec 13 09:13:16.205462 containerd[1466]: time="2024-12-13T09:13:16.205311183Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Dec 13 09:13:16.207272 containerd[1466]: time="2024-12-13T09:13:16.206110233Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Dec 13 09:13:16.207272 containerd[1466]: time="2024-12-13T09:13:16.206244306Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." 
runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
Dec 13 09:13:16.208253 containerd[1466]: time="2024-12-13T09:13:16.207205915Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
Dec 13 09:13:16.258437 systemd[1]: Started cri-containerd-df0030699c52045bbf56ab473b5ee405e83161b643413c44e28381fd5778a851.scope - libcontainer container df0030699c52045bbf56ab473b5ee405e83161b643413c44e28381fd5778a851.
Dec 13 09:13:16.279063 kubelet[1773]: E1213 09:13:16.278943 1773 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests"
Dec 13 09:13:16.333736 containerd[1466]: time="2024-12-13T09:13:16.333668528Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:test-pod-1,Uid:b542d284-8543-496f-ad7d-d10cfdbdd7de,Namespace:default,Attempt:0,} returns sandbox id \"df0030699c52045bbf56ab473b5ee405e83161b643413c44e28381fd5778a851\""
Dec 13 09:13:16.337458 containerd[1466]: time="2024-12-13T09:13:16.337392319Z" level=info msg="PullImage \"ghcr.io/flatcar/nginx:latest\""
Dec 13 09:13:16.817614 containerd[1466]: time="2024-12-13T09:13:16.816564934Z" level=info msg="ImageUpdate event name:\"ghcr.io/flatcar/nginx:latest\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Dec 13 09:13:16.817614 containerd[1466]: time="2024-12-13T09:13:16.817112940Z" level=info msg="stop pulling image ghcr.io/flatcar/nginx:latest: active requests=0, bytes read=61"
Dec 13 09:13:16.833883 containerd[1466]: time="2024-12-13T09:13:16.833806930Z" level=info msg="Pulled image \"ghcr.io/flatcar/nginx:latest\" with image id \"sha256:fa0a8cea5e76ad962111c39c85bb312edaf5b89eccd8f404eeea66c9759641e3\", repo tag \"ghcr.io/flatcar/nginx:latest\", repo digest \"ghcr.io/flatcar/nginx@sha256:e04edf30a4ea4c5a4107110797c72d3ee8a654415f00acd4019be17218afd9a1\", size \"71035905\" in 496.34075ms"
Dec 13 09:13:16.834210 containerd[1466]: time="2024-12-13T09:13:16.834176162Z" level=info msg="PullImage \"ghcr.io/flatcar/nginx:latest\" returns image reference \"sha256:fa0a8cea5e76ad962111c39c85bb312edaf5b89eccd8f404eeea66c9759641e3\""
Dec 13 09:13:16.842087 containerd[1466]: time="2024-12-13T09:13:16.841984028Z" level=info msg="CreateContainer within sandbox \"df0030699c52045bbf56ab473b5ee405e83161b643413c44e28381fd5778a851\" for container &ContainerMetadata{Name:test,Attempt:0,}"
Dec 13 09:13:16.867123 containerd[1466]: time="2024-12-13T09:13:16.867050761Z" level=info msg="CreateContainer within sandbox \"df0030699c52045bbf56ab473b5ee405e83161b643413c44e28381fd5778a851\" for &ContainerMetadata{Name:test,Attempt:0,} returns container id \"b845ea347eefb33f4053a6813cbe05b46d401767bf64403232485ac9d501f998\""
Dec 13 09:13:16.868074 containerd[1466]: time="2024-12-13T09:13:16.867991970Z" level=info msg="StartContainer for \"b845ea347eefb33f4053a6813cbe05b46d401767bf64403232485ac9d501f998\""
Dec 13 09:13:16.908367 systemd[1]: Started cri-containerd-b845ea347eefb33f4053a6813cbe05b46d401767bf64403232485ac9d501f998.scope - libcontainer container b845ea347eefb33f4053a6813cbe05b46d401767bf64403232485ac9d501f998.
Dec 13 09:13:16.952204 containerd[1466]: time="2024-12-13T09:13:16.952137793Z" level=info msg="StartContainer for \"b845ea347eefb33f4053a6813cbe05b46d401767bf64403232485ac9d501f998\" returns successfully"
Dec 13 09:13:17.279813 kubelet[1773]: E1213 09:13:17.279728 1773 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests"
Dec 13 09:13:17.698838 kubelet[1773]: I1213 09:13:17.698673 1773 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="default/test-pod-1" podStartSLOduration=18.199458197 podStartE2EDuration="18.698645581s" podCreationTimestamp="2024-12-13 09:12:59 +0000 UTC" firstStartedPulling="2024-12-13 09:13:16.336266296 +0000 UTC m=+63.959602297" lastFinishedPulling="2024-12-13 09:13:16.835453686 +0000 UTC m=+64.458789681" observedRunningTime="2024-12-13 09:13:17.698446269 +0000 UTC m=+65.321782273" watchObservedRunningTime="2024-12-13 09:13:17.698645581 +0000 UTC m=+65.321981585"
Dec 13 09:13:17.729439 systemd-networkd[1372]: cali5ec59c6bf6e: Gained IPv6LL
Dec 13 09:13:18.281068 kubelet[1773]: E1213 09:13:18.280953 1773 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests"
Dec 13 09:13:19.283314 kubelet[1773]: E1213 09:13:19.283235 1773 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests"
Dec 13 09:13:20.284443 kubelet[1773]: E1213 09:13:20.284366 1773 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests"
Dec 13 09:13:21.284769 kubelet[1773]: E1213 09:13:21.284631 1773 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests"
Dec 13 09:13:22.285732 kubelet[1773]: E1213 09:13:22.285645 1773 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests"
Dec 13 09:13:23.286218 kubelet[1773]: E1213 09:13:23.286133 1773 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests"
Dec 13 09:13:24.286865 kubelet[1773]: E1213 09:13:24.286771 1773 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests"