Jan 16 09:01:12.102147 kernel: Linux version 6.6.71-flatcar (build@pony-truck.infra.kinvolk.io) (x86_64-cros-linux-gnu-gcc (Gentoo Hardened 13.3.1_p20240614 p1) 13.3.1 20240614, GNU ld (Gentoo 2.42 p6) 2.42.0) #1 SMP PREEMPT_DYNAMIC Mon Jan 13 19:01:45 -00 2025 Jan 16 09:01:12.102187 kernel: Command line: BOOT_IMAGE=/flatcar/vmlinuz-a mount.usr=/dev/mapper/usr verity.usr=PARTUUID=7130c94a-213a-4e5a-8e26-6cce9662f132 rootflags=rw mount.usrflags=ro consoleblank=0 root=LABEL=ROOT console=ttyS0,115200n8 console=tty0 flatcar.first_boot=detected flatcar.oem.id=digitalocean verity.usrhash=1175b5bd4028ce8485b23b7d346f787308cbfa43cca7b1fefd4254406dce7d07 Jan 16 09:01:12.102204 kernel: BIOS-provided physical RAM map: Jan 16 09:01:12.102218 kernel: BIOS-e820: [mem 0x0000000000000000-0x000000000009fbff] usable Jan 16 09:01:12.102225 kernel: BIOS-e820: [mem 0x000000000009fc00-0x000000000009ffff] reserved Jan 16 09:01:12.102231 kernel: BIOS-e820: [mem 0x00000000000f0000-0x00000000000fffff] reserved Jan 16 09:01:12.102240 kernel: BIOS-e820: [mem 0x0000000000100000-0x000000007ffdafff] usable Jan 16 09:01:12.102247 kernel: BIOS-e820: [mem 0x000000007ffdb000-0x000000007fffffff] reserved Jan 16 09:01:12.102254 kernel: BIOS-e820: [mem 0x00000000feffc000-0x00000000feffffff] reserved Jan 16 09:01:12.102264 kernel: BIOS-e820: [mem 0x00000000fffc0000-0x00000000ffffffff] reserved Jan 16 09:01:12.102272 kernel: NX (Execute Disable) protection: active Jan 16 09:01:12.102279 kernel: APIC: Static calls initialized Jan 16 09:01:12.102295 kernel: SMBIOS 2.8 present. Jan 16 09:01:12.102303 kernel: DMI: DigitalOcean Droplet/Droplet, BIOS 20171212 12/12/2017 Jan 16 09:01:12.102312 kernel: Hypervisor detected: KVM Jan 16 09:01:12.102323 kernel: kvm-clock: Using msrs 4b564d01 and 4b564d00 Jan 16 09:01:12.102334 kernel: kvm-clock: using sched offset of 3952594037 cycles Jan 16 09:01:12.102343 kernel: clocksource: kvm-clock: mask: 0xffffffffffffffff max_cycles: 0x1cd42e4dffb, max_idle_ns: 881590591483 ns Jan 16 09:01:12.102352 kernel: tsc: Detected 2494.140 MHz processor Jan 16 09:01:12.102365 kernel: e820: update [mem 0x00000000-0x00000fff] usable ==> reserved Jan 16 09:01:12.102378 kernel: e820: remove [mem 0x000a0000-0x000fffff] usable Jan 16 09:01:12.102390 kernel: last_pfn = 0x7ffdb max_arch_pfn = 0x400000000 Jan 16 09:01:12.102402 kernel: MTRR map: 4 entries (3 fixed + 1 variable; max 19), built from 8 variable MTRRs Jan 16 09:01:12.102413 kernel: x86/PAT: Configuration [0-7]: WB WC UC- UC WB WP UC- WT Jan 16 09:01:12.102425 kernel: ACPI: Early table checksum verification disabled Jan 16 09:01:12.102433 kernel: ACPI: RSDP 0x00000000000F5950 000014 (v00 BOCHS ) Jan 16 09:01:12.102441 kernel: ACPI: RSDT 0x000000007FFE1986 000038 (v01 BOCHS BXPC 00000001 BXPC 00000001) Jan 16 09:01:12.102450 kernel: ACPI: FACP 0x000000007FFE176A 000074 (v01 BOCHS BXPC 00000001 BXPC 00000001) Jan 16 09:01:12.102458 kernel: ACPI: DSDT 0x000000007FFE0040 00172A (v01 BOCHS BXPC 00000001 BXPC 00000001) Jan 16 09:01:12.102466 kernel: ACPI: FACS 0x000000007FFE0000 000040 Jan 16 09:01:12.102474 kernel: ACPI: APIC 0x000000007FFE17DE 000080 (v01 BOCHS BXPC 00000001 BXPC 00000001) Jan 16 09:01:12.102482 kernel: ACPI: HPET 0x000000007FFE185E 000038 (v01 BOCHS BXPC 00000001 BXPC 00000001) Jan 16 09:01:12.102490 kernel: ACPI: SRAT 0x000000007FFE1896 0000C8 (v01 BOCHS BXPC 00000001 BXPC 00000001) Jan 16 09:01:12.102502 kernel: ACPI: WAET 0x000000007FFE195E 000028 (v01 BOCHS BXPC 00000001 BXPC 00000001) Jan 16 09:01:12.102510 kernel: ACPI: Reserving FACP 
table memory at [mem 0x7ffe176a-0x7ffe17dd] Jan 16 09:01:12.102517 kernel: ACPI: Reserving DSDT table memory at [mem 0x7ffe0040-0x7ffe1769] Jan 16 09:01:12.102525 kernel: ACPI: Reserving FACS table memory at [mem 0x7ffe0000-0x7ffe003f] Jan 16 09:01:12.102533 kernel: ACPI: Reserving APIC table memory at [mem 0x7ffe17de-0x7ffe185d] Jan 16 09:01:12.102541 kernel: ACPI: Reserving HPET table memory at [mem 0x7ffe185e-0x7ffe1895] Jan 16 09:01:12.102549 kernel: ACPI: Reserving SRAT table memory at [mem 0x7ffe1896-0x7ffe195d] Jan 16 09:01:12.102565 kernel: ACPI: Reserving WAET table memory at [mem 0x7ffe195e-0x7ffe1985] Jan 16 09:01:12.102574 kernel: SRAT: PXM 0 -> APIC 0x00 -> Node 0 Jan 16 09:01:12.102582 kernel: SRAT: PXM 0 -> APIC 0x01 -> Node 0 Jan 16 09:01:12.102591 kernel: ACPI: SRAT: Node 0 PXM 0 [mem 0x00000000-0x0009ffff] Jan 16 09:01:12.102599 kernel: ACPI: SRAT: Node 0 PXM 0 [mem 0x00100000-0x7fffffff] Jan 16 09:01:12.102610 kernel: NUMA: Node 0 [mem 0x00000000-0x0009ffff] + [mem 0x00100000-0x7ffdafff] -> [mem 0x00000000-0x7ffdafff] Jan 16 09:01:12.102619 kernel: NODE_DATA(0) allocated [mem 0x7ffd5000-0x7ffdafff] Jan 16 09:01:12.102631 kernel: Zone ranges: Jan 16 09:01:12.102640 kernel: DMA [mem 0x0000000000001000-0x0000000000ffffff] Jan 16 09:01:12.102648 kernel: DMA32 [mem 0x0000000001000000-0x000000007ffdafff] Jan 16 09:01:12.102657 kernel: Normal empty Jan 16 09:01:12.102665 kernel: Movable zone start for each node Jan 16 09:01:12.102674 kernel: Early memory node ranges Jan 16 09:01:12.102682 kernel: node 0: [mem 0x0000000000001000-0x000000000009efff] Jan 16 09:01:12.102691 kernel: node 0: [mem 0x0000000000100000-0x000000007ffdafff] Jan 16 09:01:12.102699 kernel: Initmem setup node 0 [mem 0x0000000000001000-0x000000007ffdafff] Jan 16 09:01:12.102711 kernel: On node 0, zone DMA: 1 pages in unavailable ranges Jan 16 09:01:12.102720 kernel: On node 0, zone DMA: 97 pages in unavailable ranges Jan 16 09:01:12.102730 kernel: On node 0, zone DMA32: 37 pages in unavailable ranges Jan 16 09:01:12.102739 kernel: ACPI: PM-Timer IO Port: 0x608 Jan 16 09:01:12.102747 kernel: ACPI: LAPIC_NMI (acpi_id[0xff] dfl dfl lint[0x1]) Jan 16 09:01:12.102756 kernel: IOAPIC[0]: apic_id 0, version 17, address 0xfec00000, GSI 0-23 Jan 16 09:01:12.102776 kernel: ACPI: INT_SRC_OVR (bus 0 bus_irq 0 global_irq 2 dfl dfl) Jan 16 09:01:12.102785 kernel: ACPI: INT_SRC_OVR (bus 0 bus_irq 5 global_irq 5 high level) Jan 16 09:01:12.102794 kernel: ACPI: INT_SRC_OVR (bus 0 bus_irq 9 global_irq 9 high level) Jan 16 09:01:12.102806 kernel: ACPI: INT_SRC_OVR (bus 0 bus_irq 10 global_irq 10 high level) Jan 16 09:01:12.102819 kernel: ACPI: INT_SRC_OVR (bus 0 bus_irq 11 global_irq 11 high level) Jan 16 09:01:12.102831 kernel: ACPI: Using ACPI (MADT) for SMP configuration information Jan 16 09:01:12.102845 kernel: ACPI: HPET id: 0x8086a201 base: 0xfed00000 Jan 16 09:01:12.102857 kernel: TSC deadline timer available Jan 16 09:01:12.102869 kernel: smpboot: Allowing 2 CPUs, 0 hotplug CPUs Jan 16 09:01:12.102888 kernel: kvm-guest: APIC: eoi() replaced with kvm_guest_apic_eoi_write() Jan 16 09:01:12.102900 kernel: [mem 0x80000000-0xfeffbfff] available for PCI devices Jan 16 09:01:12.102915 kernel: Booting paravirtualized kernel on KVM Jan 16 09:01:12.102927 kernel: clocksource: refined-jiffies: mask: 0xffffffff max_cycles: 0xffffffff, max_idle_ns: 1910969940391419 ns Jan 16 09:01:12.102945 kernel: setup_percpu: NR_CPUS:512 nr_cpumask_bits:2 nr_cpu_ids:2 nr_node_ids:1 Jan 16 09:01:12.102976 kernel: percpu: Embedded 58 pages/cpu 
s197032 r8192 d32344 u1048576 Jan 16 09:01:12.102989 kernel: pcpu-alloc: s197032 r8192 d32344 u1048576 alloc=1*2097152 Jan 16 09:01:12.103002 kernel: pcpu-alloc: [0] 0 1 Jan 16 09:01:12.103017 kernel: kvm-guest: PV spinlocks disabled, no host support Jan 16 09:01:12.103033 kernel: Kernel command line: rootflags=rw mount.usrflags=ro BOOT_IMAGE=/flatcar/vmlinuz-a mount.usr=/dev/mapper/usr verity.usr=PARTUUID=7130c94a-213a-4e5a-8e26-6cce9662f132 rootflags=rw mount.usrflags=ro consoleblank=0 root=LABEL=ROOT console=ttyS0,115200n8 console=tty0 flatcar.first_boot=detected flatcar.oem.id=digitalocean verity.usrhash=1175b5bd4028ce8485b23b7d346f787308cbfa43cca7b1fefd4254406dce7d07 Jan 16 09:01:12.103048 kernel: Unknown kernel command line parameters "BOOT_IMAGE=/flatcar/vmlinuz-a", will be passed to user space. Jan 16 09:01:12.103068 kernel: random: crng init done Jan 16 09:01:12.103082 kernel: Dentry cache hash table entries: 262144 (order: 9, 2097152 bytes, linear) Jan 16 09:01:12.103095 kernel: Inode-cache hash table entries: 131072 (order: 8, 1048576 bytes, linear) Jan 16 09:01:12.103111 kernel: Fallback order for Node 0: 0 Jan 16 09:01:12.103128 kernel: Built 1 zonelists, mobility grouping on. Total pages: 515803 Jan 16 09:01:12.103144 kernel: Policy zone: DMA32 Jan 16 09:01:12.103161 kernel: mem auto-init: stack:off, heap alloc:off, heap free:off Jan 16 09:01:12.103178 kernel: Memory: 1971200K/2096612K available (12288K kernel code, 2299K rwdata, 22736K rodata, 42976K init, 2216K bss, 125152K reserved, 0K cma-reserved) Jan 16 09:01:12.103195 kernel: SLUB: HWalign=64, Order=0-3, MinObjects=0, CPUs=2, Nodes=1 Jan 16 09:01:12.103233 kernel: Kernel/User page tables isolation: enabled Jan 16 09:01:12.103249 kernel: ftrace: allocating 37920 entries in 149 pages Jan 16 09:01:12.103265 kernel: ftrace: allocated 149 pages with 4 groups Jan 16 09:01:12.103282 kernel: Dynamic Preempt: voluntary Jan 16 09:01:12.103298 kernel: rcu: Preemptible hierarchical RCU implementation. Jan 16 09:01:12.103317 kernel: rcu: RCU event tracing is enabled. Jan 16 09:01:12.103334 kernel: rcu: RCU restricting CPUs from NR_CPUS=512 to nr_cpu_ids=2. Jan 16 09:01:12.103350 kernel: Trampoline variant of Tasks RCU enabled. Jan 16 09:01:12.103365 kernel: Rude variant of Tasks RCU enabled. Jan 16 09:01:12.103377 kernel: Tracing variant of Tasks RCU enabled. Jan 16 09:01:12.103395 kernel: rcu: RCU calculated value of scheduler-enlistment delay is 100 jiffies. Jan 16 09:01:12.103408 kernel: rcu: Adjusting geometry for rcu_fanout_leaf=16, nr_cpu_ids=2 Jan 16 09:01:12.103420 kernel: NR_IRQS: 33024, nr_irqs: 440, preallocated irqs: 16 Jan 16 09:01:12.103429 kernel: rcu: srcu_init: Setting srcu_struct sizes based on contention. 
Jan 16 09:01:12.103442 kernel: Console: colour VGA+ 80x25 Jan 16 09:01:12.103451 kernel: printk: console [tty0] enabled Jan 16 09:01:12.103459 kernel: printk: console [ttyS0] enabled Jan 16 09:01:12.103468 kernel: ACPI: Core revision 20230628 Jan 16 09:01:12.103477 kernel: clocksource: hpet: mask: 0xffffffff max_cycles: 0xffffffff, max_idle_ns: 19112604467 ns Jan 16 09:01:12.103490 kernel: APIC: Switch to symmetric I/O mode setup Jan 16 09:01:12.103498 kernel: x2apic enabled Jan 16 09:01:12.103508 kernel: APIC: Switched APIC routing to: physical x2apic Jan 16 09:01:12.103517 kernel: ..TIMER: vector=0x30 apic1=0 pin1=2 apic2=-1 pin2=-1 Jan 16 09:01:12.103525 kernel: clocksource: tsc-early: mask: 0xffffffffffffffff max_cycles: 0x23f39a1d859, max_idle_ns: 440795326830 ns Jan 16 09:01:12.103534 kernel: Calibrating delay loop (skipped) preset value.. 4988.28 BogoMIPS (lpj=2494140) Jan 16 09:01:12.103543 kernel: Last level iTLB entries: 4KB 0, 2MB 0, 4MB 0 Jan 16 09:01:12.103552 kernel: Last level dTLB entries: 4KB 0, 2MB 0, 4MB 0, 1GB 0 Jan 16 09:01:12.103573 kernel: Spectre V1 : Mitigation: usercopy/swapgs barriers and __user pointer sanitization Jan 16 09:01:12.103583 kernel: Spectre V2 : Mitigation: Retpolines Jan 16 09:01:12.103592 kernel: Spectre V2 : Spectre v2 / SpectreRSB mitigation: Filling RSB on context switch Jan 16 09:01:12.103604 kernel: Spectre V2 : Spectre v2 / SpectreRSB : Filling RSB on VMEXIT Jan 16 09:01:12.103614 kernel: Spectre V2 : Enabling Restricted Speculation for firmware calls Jan 16 09:01:12.103623 kernel: Spectre V2 : mitigation: Enabling conditional Indirect Branch Prediction Barrier Jan 16 09:01:12.103632 kernel: Speculative Store Bypass: Mitigation: Speculative Store Bypass disabled via prctl Jan 16 09:01:12.103641 kernel: MDS: Mitigation: Clear CPU buffers Jan 16 09:01:12.103651 kernel: MMIO Stale Data: Vulnerable: Clear CPU buffers attempted, no microcode Jan 16 09:01:12.103665 kernel: x86/fpu: Supporting XSAVE feature 0x001: 'x87 floating point registers' Jan 16 09:01:12.103675 kernel: x86/fpu: Supporting XSAVE feature 0x002: 'SSE registers' Jan 16 09:01:12.103684 kernel: x86/fpu: Supporting XSAVE feature 0x004: 'AVX registers' Jan 16 09:01:12.103694 kernel: x86/fpu: xstate_offset[2]: 576, xstate_sizes[2]: 256 Jan 16 09:01:12.103703 kernel: x86/fpu: Enabled xstate features 0x7, context size is 832 bytes, using 'standard' format. Jan 16 09:01:12.103713 kernel: Freeing SMP alternatives memory: 32K Jan 16 09:01:12.103722 kernel: pid_max: default: 32768 minimum: 301 Jan 16 09:01:12.103732 kernel: LSM: initializing lsm=lockdown,capability,landlock,selinux,integrity Jan 16 09:01:12.103744 kernel: landlock: Up and running. Jan 16 09:01:12.103753 kernel: SELinux: Initializing. Jan 16 09:01:12.103763 kernel: Mount-cache hash table entries: 4096 (order: 3, 32768 bytes, linear) Jan 16 09:01:12.103772 kernel: Mountpoint-cache hash table entries: 4096 (order: 3, 32768 bytes, linear) Jan 16 09:01:12.103781 kernel: smpboot: CPU0: Intel DO-Regular (family: 0x6, model: 0x4f, stepping: 0x1) Jan 16 09:01:12.103791 kernel: RCU Tasks: Setting shift to 1 and lim to 1 rcu_task_cb_adjust=1 rcu_task_cpu_ids=2. Jan 16 09:01:12.103800 kernel: RCU Tasks Rude: Setting shift to 1 and lim to 1 rcu_task_cb_adjust=1 rcu_task_cpu_ids=2. Jan 16 09:01:12.103809 kernel: RCU Tasks Trace: Setting shift to 1 and lim to 1 rcu_task_cb_adjust=1 rcu_task_cpu_ids=2. Jan 16 09:01:12.103818 kernel: Performance Events: unsupported p6 CPU model 79 no PMU driver, software events only. 
Jan 16 09:01:12.103830 kernel: signal: max sigframe size: 1776 Jan 16 09:01:12.103840 kernel: rcu: Hierarchical SRCU implementation. Jan 16 09:01:12.103849 kernel: rcu: Max phase no-delay instances is 400. Jan 16 09:01:12.103858 kernel: NMI watchdog: Perf NMI watchdog permanently disabled Jan 16 09:01:12.103868 kernel: smp: Bringing up secondary CPUs ... Jan 16 09:01:12.103877 kernel: smpboot: x86: Booting SMP configuration: Jan 16 09:01:12.103887 kernel: .... node #0, CPUs: #1 Jan 16 09:01:12.103896 kernel: smp: Brought up 1 node, 2 CPUs Jan 16 09:01:12.103907 kernel: smpboot: Max logical packages: 1 Jan 16 09:01:12.103920 kernel: smpboot: Total of 2 processors activated (9976.56 BogoMIPS) Jan 16 09:01:12.103930 kernel: devtmpfs: initialized Jan 16 09:01:12.103939 kernel: x86/mm: Memory block size: 128MB Jan 16 09:01:12.105987 kernel: clocksource: jiffies: mask: 0xffffffff max_cycles: 0xffffffff, max_idle_ns: 1911260446275000 ns Jan 16 09:01:12.106070 kernel: futex hash table entries: 512 (order: 3, 32768 bytes, linear) Jan 16 09:01:12.106086 kernel: pinctrl core: initialized pinctrl subsystem Jan 16 09:01:12.106099 kernel: NET: Registered PF_NETLINK/PF_ROUTE protocol family Jan 16 09:01:12.106113 kernel: audit: initializing netlink subsys (disabled) Jan 16 09:01:12.106127 kernel: audit: type=2000 audit(1737018070.447:1): state=initialized audit_enabled=0 res=1 Jan 16 09:01:12.106157 kernel: thermal_sys: Registered thermal governor 'step_wise' Jan 16 09:01:12.106167 kernel: thermal_sys: Registered thermal governor 'user_space' Jan 16 09:01:12.106177 kernel: cpuidle: using governor menu Jan 16 09:01:12.106187 kernel: acpiphp: ACPI Hot Plug PCI Controller Driver version: 0.5 Jan 16 09:01:12.106196 kernel: dca service started, version 1.12.1 Jan 16 09:01:12.106205 kernel: PCI: Using configuration type 1 for base access Jan 16 09:01:12.106215 kernel: kprobes: kprobe jump-optimization is enabled. All kprobes are optimized if possible. 
Jan 16 09:01:12.106224 kernel: HugeTLB: registered 2.00 MiB page size, pre-allocated 0 pages Jan 16 09:01:12.106234 kernel: HugeTLB: 28 KiB vmemmap can be freed for a 2.00 MiB page Jan 16 09:01:12.106246 kernel: ACPI: Added _OSI(Module Device) Jan 16 09:01:12.106255 kernel: ACPI: Added _OSI(Processor Device) Jan 16 09:01:12.106265 kernel: ACPI: Added _OSI(3.0 _SCP Extensions) Jan 16 09:01:12.106274 kernel: ACPI: Added _OSI(Processor Aggregator Device) Jan 16 09:01:12.106283 kernel: ACPI: 1 ACPI AML tables successfully acquired and loaded Jan 16 09:01:12.106292 kernel: ACPI: _OSC evaluation for CPUs failed, trying _PDC Jan 16 09:01:12.106301 kernel: ACPI: Interpreter enabled Jan 16 09:01:12.106310 kernel: ACPI: PM: (supports S0 S5) Jan 16 09:01:12.106319 kernel: ACPI: Using IOAPIC for interrupt routing Jan 16 09:01:12.106331 kernel: PCI: Using host bridge windows from ACPI; if necessary, use "pci=nocrs" and report a bug Jan 16 09:01:12.106340 kernel: PCI: Using E820 reservations for host bridge windows Jan 16 09:01:12.106352 kernel: ACPI: Enabled 2 GPEs in block 00 to 0F Jan 16 09:01:12.106366 kernel: ACPI: PCI Root Bridge [PCI0] (domain 0000 [bus 00-ff]) Jan 16 09:01:12.106667 kernel: acpi PNP0A03:00: _OSC: OS supports [ASPM ClockPM Segments MSI HPX-Type3] Jan 16 09:01:12.106784 kernel: acpi PNP0A03:00: _OSC: not requesting OS control; OS requires [ExtendedConfig ASPM ClockPM MSI] Jan 16 09:01:12.106882 kernel: acpi PNP0A03:00: fail to add MMCONFIG information, can't access extended configuration space under this bridge Jan 16 09:01:12.106899 kernel: acpiphp: Slot [3] registered Jan 16 09:01:12.106909 kernel: acpiphp: Slot [4] registered Jan 16 09:01:12.106919 kernel: acpiphp: Slot [5] registered Jan 16 09:01:12.106928 kernel: acpiphp: Slot [6] registered Jan 16 09:01:12.106937 kernel: acpiphp: Slot [7] registered Jan 16 09:01:12.106946 kernel: acpiphp: Slot [8] registered Jan 16 09:01:12.106970 kernel: acpiphp: Slot [9] registered Jan 16 09:01:12.106980 kernel: acpiphp: Slot [10] registered Jan 16 09:01:12.106989 kernel: acpiphp: Slot [11] registered Jan 16 09:01:12.107002 kernel: acpiphp: Slot [12] registered Jan 16 09:01:12.107011 kernel: acpiphp: Slot [13] registered Jan 16 09:01:12.107021 kernel: acpiphp: Slot [14] registered Jan 16 09:01:12.107030 kernel: acpiphp: Slot [15] registered Jan 16 09:01:12.107039 kernel: acpiphp: Slot [16] registered Jan 16 09:01:12.107060 kernel: acpiphp: Slot [17] registered Jan 16 09:01:12.107069 kernel: acpiphp: Slot [18] registered Jan 16 09:01:12.107078 kernel: acpiphp: Slot [19] registered Jan 16 09:01:12.107088 kernel: acpiphp: Slot [20] registered Jan 16 09:01:12.107097 kernel: acpiphp: Slot [21] registered Jan 16 09:01:12.107110 kernel: acpiphp: Slot [22] registered Jan 16 09:01:12.107119 kernel: acpiphp: Slot [23] registered Jan 16 09:01:12.107128 kernel: acpiphp: Slot [24] registered Jan 16 09:01:12.107137 kernel: acpiphp: Slot [25] registered Jan 16 09:01:12.107147 kernel: acpiphp: Slot [26] registered Jan 16 09:01:12.107156 kernel: acpiphp: Slot [27] registered Jan 16 09:01:12.107166 kernel: acpiphp: Slot [28] registered Jan 16 09:01:12.107175 kernel: acpiphp: Slot [29] registered Jan 16 09:01:12.107184 kernel: acpiphp: Slot [30] registered Jan 16 09:01:12.107196 kernel: acpiphp: Slot [31] registered Jan 16 09:01:12.107206 kernel: PCI host bridge to bus 0000:00 Jan 16 09:01:12.107324 kernel: pci_bus 0000:00: root bus resource [io 0x0000-0x0cf7 window] Jan 16 09:01:12.107454 kernel: pci_bus 0000:00: root bus resource [io 0x0d00-0xffff window] 
Jan 16 09:01:12.107556 kernel: pci_bus 0000:00: root bus resource [mem 0x000a0000-0x000bffff window] Jan 16 09:01:12.107651 kernel: pci_bus 0000:00: root bus resource [mem 0x80000000-0xfebfffff window] Jan 16 09:01:12.107777 kernel: pci_bus 0000:00: root bus resource [mem 0x100000000-0x17fffffff window] Jan 16 09:01:12.107922 kernel: pci_bus 0000:00: root bus resource [bus 00-ff] Jan 16 09:01:12.109411 kernel: pci 0000:00:00.0: [8086:1237] type 00 class 0x060000 Jan 16 09:01:12.109670 kernel: pci 0000:00:01.0: [8086:7000] type 00 class 0x060100 Jan 16 09:01:12.109897 kernel: pci 0000:00:01.1: [8086:7010] type 00 class 0x010180 Jan 16 09:01:12.111156 kernel: pci 0000:00:01.1: reg 0x20: [io 0xc1e0-0xc1ef] Jan 16 09:01:12.111348 kernel: pci 0000:00:01.1: legacy IDE quirk: reg 0x10: [io 0x01f0-0x01f7] Jan 16 09:01:12.111515 kernel: pci 0000:00:01.1: legacy IDE quirk: reg 0x14: [io 0x03f6] Jan 16 09:01:12.111672 kernel: pci 0000:00:01.1: legacy IDE quirk: reg 0x18: [io 0x0170-0x0177] Jan 16 09:01:12.111777 kernel: pci 0000:00:01.1: legacy IDE quirk: reg 0x1c: [io 0x0376] Jan 16 09:01:12.111897 kernel: pci 0000:00:01.2: [8086:7020] type 00 class 0x0c0300 Jan 16 09:01:12.113117 kernel: pci 0000:00:01.2: reg 0x20: [io 0xc180-0xc19f] Jan 16 09:01:12.113306 kernel: pci 0000:00:01.3: [8086:7113] type 00 class 0x068000 Jan 16 09:01:12.113442 kernel: pci 0000:00:01.3: quirk: [io 0x0600-0x063f] claimed by PIIX4 ACPI Jan 16 09:01:12.113607 kernel: pci 0000:00:01.3: quirk: [io 0x0700-0x070f] claimed by PIIX4 SMB Jan 16 09:01:12.113798 kernel: pci 0000:00:02.0: [1af4:1050] type 00 class 0x030000 Jan 16 09:01:12.114012 kernel: pci 0000:00:02.0: reg 0x10: [mem 0xfe000000-0xfe7fffff pref] Jan 16 09:01:12.115154 kernel: pci 0000:00:02.0: reg 0x18: [mem 0xfe800000-0xfe803fff 64bit pref] Jan 16 09:01:12.115276 kernel: pci 0000:00:02.0: reg 0x20: [mem 0xfebf0000-0xfebf0fff] Jan 16 09:01:12.115404 kernel: pci 0000:00:02.0: reg 0x30: [mem 0xfebe0000-0xfebeffff pref] Jan 16 09:01:12.115538 kernel: pci 0000:00:02.0: Video device with shadowed ROM at [mem 0x000c0000-0x000dffff] Jan 16 09:01:12.115681 kernel: pci 0000:00:03.0: [1af4:1000] type 00 class 0x020000 Jan 16 09:01:12.115811 kernel: pci 0000:00:03.0: reg 0x10: [io 0xc1a0-0xc1bf] Jan 16 09:01:12.115918 kernel: pci 0000:00:03.0: reg 0x14: [mem 0xfebf1000-0xfebf1fff] Jan 16 09:01:12.117161 kernel: pci 0000:00:03.0: reg 0x20: [mem 0xfe804000-0xfe807fff 64bit pref] Jan 16 09:01:12.117332 kernel: pci 0000:00:04.0: [1af4:1000] type 00 class 0x020000 Jan 16 09:01:12.117488 kernel: pci 0000:00:04.0: reg 0x10: [io 0xc1c0-0xc1df] Jan 16 09:01:12.117614 kernel: pci 0000:00:04.0: reg 0x14: [mem 0xfebf2000-0xfebf2fff] Jan 16 09:01:12.117738 kernel: pci 0000:00:04.0: reg 0x20: [mem 0xfe808000-0xfe80bfff 64bit pref] Jan 16 09:01:12.117938 kernel: pci 0000:00:05.0: [1af4:1004] type 00 class 0x010000 Jan 16 09:01:12.119152 kernel: pci 0000:00:05.0: reg 0x10: [io 0xc100-0xc13f] Jan 16 09:01:12.119264 kernel: pci 0000:00:05.0: reg 0x14: [mem 0xfebf3000-0xfebf3fff] Jan 16 09:01:12.119369 kernel: pci 0000:00:05.0: reg 0x20: [mem 0xfe80c000-0xfe80ffff 64bit pref] Jan 16 09:01:12.119561 kernel: pci 0000:00:06.0: [1af4:1001] type 00 class 0x010000 Jan 16 09:01:12.119705 kernel: pci 0000:00:06.0: reg 0x10: [io 0xc000-0xc07f] Jan 16 09:01:12.119890 kernel: pci 0000:00:06.0: reg 0x14: [mem 0xfebf4000-0xfebf4fff] Jan 16 09:01:12.121135 kernel: pci 0000:00:06.0: reg 0x20: [mem 0xfe810000-0xfe813fff 64bit pref] Jan 16 09:01:12.121312 kernel: pci 0000:00:07.0: [1af4:1001] type 00 class 
0x010000 Jan 16 09:01:12.121424 kernel: pci 0000:00:07.0: reg 0x10: [io 0xc080-0xc0ff] Jan 16 09:01:12.121526 kernel: pci 0000:00:07.0: reg 0x14: [mem 0xfebf5000-0xfebf5fff] Jan 16 09:01:12.121646 kernel: pci 0000:00:07.0: reg 0x20: [mem 0xfe814000-0xfe817fff 64bit pref] Jan 16 09:01:12.121768 kernel: pci 0000:00:08.0: [1af4:1002] type 00 class 0x00ff00 Jan 16 09:01:12.123091 kernel: pci 0000:00:08.0: reg 0x10: [io 0xc140-0xc17f] Jan 16 09:01:12.123212 kernel: pci 0000:00:08.0: reg 0x20: [mem 0xfe818000-0xfe81bfff 64bit pref] Jan 16 09:01:12.123225 kernel: ACPI: PCI: Interrupt link LNKA configured for IRQ 10 Jan 16 09:01:12.123235 kernel: ACPI: PCI: Interrupt link LNKB configured for IRQ 10 Jan 16 09:01:12.123245 kernel: ACPI: PCI: Interrupt link LNKC configured for IRQ 11 Jan 16 09:01:12.123254 kernel: ACPI: PCI: Interrupt link LNKD configured for IRQ 11 Jan 16 09:01:12.123274 kernel: ACPI: PCI: Interrupt link LNKS configured for IRQ 9 Jan 16 09:01:12.123283 kernel: iommu: Default domain type: Translated Jan 16 09:01:12.123293 kernel: iommu: DMA domain TLB invalidation policy: lazy mode Jan 16 09:01:12.123302 kernel: PCI: Using ACPI for IRQ routing Jan 16 09:01:12.123312 kernel: PCI: pci_cache_line_size set to 64 bytes Jan 16 09:01:12.123321 kernel: e820: reserve RAM buffer [mem 0x0009fc00-0x0009ffff] Jan 16 09:01:12.123331 kernel: e820: reserve RAM buffer [mem 0x7ffdb000-0x7fffffff] Jan 16 09:01:12.123433 kernel: pci 0000:00:02.0: vgaarb: setting as boot VGA device Jan 16 09:01:12.123531 kernel: pci 0000:00:02.0: vgaarb: bridge control possible Jan 16 09:01:12.123631 kernel: pci 0000:00:02.0: vgaarb: VGA device added: decodes=io+mem,owns=io+mem,locks=none Jan 16 09:01:12.123644 kernel: vgaarb: loaded Jan 16 09:01:12.123653 kernel: hpet0: at MMIO 0xfed00000, IRQs 2, 8, 0 Jan 16 09:01:12.123663 kernel: hpet0: 3 comparators, 64-bit 100.000000 MHz counter Jan 16 09:01:12.123672 kernel: clocksource: Switched to clocksource kvm-clock Jan 16 09:01:12.123681 kernel: VFS: Disk quotas dquot_6.6.0 Jan 16 09:01:12.123691 kernel: VFS: Dquot-cache hash table entries: 512 (order 0, 4096 bytes) Jan 16 09:01:12.123700 kernel: pnp: PnP ACPI init Jan 16 09:01:12.123709 kernel: pnp: PnP ACPI: found 4 devices Jan 16 09:01:12.123722 kernel: clocksource: acpi_pm: mask: 0xffffff max_cycles: 0xffffff, max_idle_ns: 2085701024 ns Jan 16 09:01:12.123732 kernel: NET: Registered PF_INET protocol family Jan 16 09:01:12.123741 kernel: IP idents hash table entries: 32768 (order: 6, 262144 bytes, linear) Jan 16 09:01:12.123750 kernel: tcp_listen_portaddr_hash hash table entries: 1024 (order: 2, 16384 bytes, linear) Jan 16 09:01:12.123760 kernel: Table-perturb hash table entries: 65536 (order: 6, 262144 bytes, linear) Jan 16 09:01:12.123769 kernel: TCP established hash table entries: 16384 (order: 5, 131072 bytes, linear) Jan 16 09:01:12.123779 kernel: TCP bind hash table entries: 16384 (order: 7, 524288 bytes, linear) Jan 16 09:01:12.123788 kernel: TCP: Hash tables configured (established 16384 bind 16384) Jan 16 09:01:12.123797 kernel: UDP hash table entries: 1024 (order: 3, 32768 bytes, linear) Jan 16 09:01:12.123809 kernel: UDP-Lite hash table entries: 1024 (order: 3, 32768 bytes, linear) Jan 16 09:01:12.123819 kernel: NET: Registered PF_UNIX/PF_LOCAL protocol family Jan 16 09:01:12.123828 kernel: NET: Registered PF_XDP protocol family Jan 16 09:01:12.123924 kernel: pci_bus 0000:00: resource 4 [io 0x0000-0x0cf7 window] Jan 16 09:01:12.125094 kernel: pci_bus 0000:00: resource 5 [io 0x0d00-0xffff window] Jan 16 
09:01:12.125201 kernel: pci_bus 0000:00: resource 6 [mem 0x000a0000-0x000bffff window] Jan 16 09:01:12.125329 kernel: pci_bus 0000:00: resource 7 [mem 0x80000000-0xfebfffff window] Jan 16 09:01:12.125467 kernel: pci_bus 0000:00: resource 8 [mem 0x100000000-0x17fffffff window] Jan 16 09:01:12.125639 kernel: pci 0000:00:01.0: PIIX3: Enabling Passive Release Jan 16 09:01:12.125795 kernel: pci 0000:00:00.0: Limiting direct PCI/PCI transfers Jan 16 09:01:12.125812 kernel: ACPI: \_SB_.LNKD: Enabled at IRQ 11 Jan 16 09:01:12.125937 kernel: pci 0000:00:01.2: quirk_usb_early_handoff+0x0/0x7a0 took 42364 usecs Jan 16 09:01:12.125967 kernel: PCI: CLS 0 bytes, default 64 Jan 16 09:01:12.127025 kernel: RAPL PMU: API unit is 2^-32 Joules, 0 fixed counters, 10737418240 ms ovfl timer Jan 16 09:01:12.127058 kernel: clocksource: tsc: mask: 0xffffffffffffffff max_cycles: 0x23f39a1d859, max_idle_ns: 440795326830 ns Jan 16 09:01:12.127071 kernel: Initialise system trusted keyrings Jan 16 09:01:12.127098 kernel: workingset: timestamp_bits=39 max_order=19 bucket_order=0 Jan 16 09:01:12.127113 kernel: Key type asymmetric registered Jan 16 09:01:12.127128 kernel: Asymmetric key parser 'x509' registered Jan 16 09:01:12.127143 kernel: Block layer SCSI generic (bsg) driver version 0.4 loaded (major 251) Jan 16 09:01:12.127158 kernel: io scheduler mq-deadline registered Jan 16 09:01:12.127173 kernel: io scheduler kyber registered Jan 16 09:01:12.127185 kernel: io scheduler bfq registered Jan 16 09:01:12.127198 kernel: ioatdma: Intel(R) QuickData Technology Driver 5.00 Jan 16 09:01:12.127215 kernel: ACPI: \_SB_.LNKB: Enabled at IRQ 10 Jan 16 09:01:12.127225 kernel: ACPI: \_SB_.LNKC: Enabled at IRQ 11 Jan 16 09:01:12.127245 kernel: ACPI: \_SB_.LNKA: Enabled at IRQ 10 Jan 16 09:01:12.127259 kernel: Serial: 8250/16550 driver, 4 ports, IRQ sharing enabled Jan 16 09:01:12.127272 kernel: 00:00: ttyS0 at I/O 0x3f8 (irq = 4, base_baud = 115200) is a 16550A Jan 16 09:01:12.127285 kernel: i8042: PNP: PS/2 Controller [PNP0303:KBD,PNP0f13:MOU] at 0x60,0x64 irq 1,12 Jan 16 09:01:12.127295 kernel: serio: i8042 KBD port at 0x60,0x64 irq 1 Jan 16 09:01:12.127310 kernel: serio: i8042 AUX port at 0x60,0x64 irq 12 Jan 16 09:01:12.127520 kernel: rtc_cmos 00:03: RTC can wake from S4 Jan 16 09:01:12.127540 kernel: input: AT Translated Set 2 keyboard as /devices/platform/i8042/serio0/input/input0 Jan 16 09:01:12.127666 kernel: rtc_cmos 00:03: registered as rtc0 Jan 16 09:01:12.127802 kernel: rtc_cmos 00:03: setting system clock to 2025-01-16T09:01:11 UTC (1737018071) Jan 16 09:01:12.127943 kernel: rtc_cmos 00:03: alarms up to one day, y3k, 242 bytes nvram Jan 16 09:01:12.129052 kernel: intel_pstate: CPU model not supported Jan 16 09:01:12.129069 kernel: NET: Registered PF_INET6 protocol family Jan 16 09:01:12.129079 kernel: Segment Routing with IPv6 Jan 16 09:01:12.129089 kernel: In-situ OAM (IOAM) with IPv6 Jan 16 09:01:12.129098 kernel: NET: Registered PF_PACKET protocol family Jan 16 09:01:12.129121 kernel: Key type dns_resolver registered Jan 16 09:01:12.129136 kernel: IPI shorthand broadcast: enabled Jan 16 09:01:12.129150 kernel: sched_clock: Marking stable (1182006693, 133194027)->(1352610040, -37409320) Jan 16 09:01:12.129164 kernel: registered taskstats version 1 Jan 16 09:01:12.129179 kernel: Loading compiled-in X.509 certificates Jan 16 09:01:12.129193 kernel: Loaded X.509 cert 'Kinvolk GmbH: Module signing key for 6.6.71-flatcar: 98739e9049f62881f4df7ffd1e39335f7f55b344' Jan 16 09:01:12.129209 kernel: Key type .fscrypt registered 
Jan 16 09:01:12.129223 kernel: Key type fscrypt-provisioning registered Jan 16 09:01:12.129237 kernel: ima: No TPM chip found, activating TPM-bypass! Jan 16 09:01:12.129252 kernel: ima: Allocated hash algorithm: sha1 Jan 16 09:01:12.129262 kernel: ima: No architecture policies found Jan 16 09:01:12.129271 kernel: clk: Disabling unused clocks Jan 16 09:01:12.129281 kernel: Freeing unused kernel image (initmem) memory: 42976K Jan 16 09:01:12.129291 kernel: Write protecting the kernel read-only data: 36864k Jan 16 09:01:12.129321 kernel: Freeing unused kernel image (rodata/data gap) memory: 1840K Jan 16 09:01:12.129334 kernel: Run /init as init process Jan 16 09:01:12.129344 kernel: with arguments: Jan 16 09:01:12.129355 kernel: /init Jan 16 09:01:12.129370 kernel: with environment: Jan 16 09:01:12.129380 kernel: HOME=/ Jan 16 09:01:12.129390 kernel: TERM=linux Jan 16 09:01:12.129400 kernel: BOOT_IMAGE=/flatcar/vmlinuz-a Jan 16 09:01:12.129415 systemd[1]: systemd 255 running in system mode (+PAM +AUDIT +SELINUX -APPARMOR +IMA +SMACK +SECCOMP +GCRYPT -GNUTLS +OPENSSL -ACL +BLKID +CURL +ELFUTILS -FIDO2 +IDN2 -IDN +IPTC +KMOD +LIBCRYPTSETUP +LIBFDISK +PCRE2 -PWQUALITY -P11KIT -QRENCODE +TPM2 +BZIP2 +LZ4 +XZ +ZLIB +ZSTD -BPF_FRAMEWORK -XKBCOMMON +UTMP -SYSVINIT default-hierarchy=unified) Jan 16 09:01:12.129428 systemd[1]: Detected virtualization kvm. Jan 16 09:01:12.129439 systemd[1]: Detected architecture x86-64. Jan 16 09:01:12.129449 systemd[1]: Running in initrd. Jan 16 09:01:12.129467 systemd[1]: No hostname configured, using default hostname. Jan 16 09:01:12.129481 systemd[1]: Hostname set to . Jan 16 09:01:12.129497 systemd[1]: Initializing machine ID from VM UUID. Jan 16 09:01:12.129512 systemd[1]: Queued start job for default target initrd.target. Jan 16 09:01:12.129528 systemd[1]: Started clevis-luks-askpass.path - Forward Password Requests to Clevis Directory Watch. Jan 16 09:01:12.129542 systemd[1]: Started systemd-ask-password-console.path - Dispatch Password Requests to Console Directory Watch. Jan 16 09:01:12.129560 systemd[1]: Expecting device dev-disk-by\x2dlabel-EFI\x2dSYSTEM.device - /dev/disk/by-label/EFI-SYSTEM... Jan 16 09:01:12.129576 systemd[1]: Expecting device dev-disk-by\x2dlabel-OEM.device - /dev/disk/by-label/OEM... Jan 16 09:01:12.129595 systemd[1]: Expecting device dev-disk-by\x2dlabel-ROOT.device - /dev/disk/by-label/ROOT... Jan 16 09:01:12.129611 systemd[1]: Expecting device dev-disk-by\x2dpartlabel-USR\x2dA.device - /dev/disk/by-partlabel/USR-A... Jan 16 09:01:12.129629 systemd[1]: Expecting device dev-disk-by\x2dpartuuid-7130c94a\x2d213a\x2d4e5a\x2d8e26\x2d6cce9662f132.device - /dev/disk/by-partuuid/7130c94a-213a-4e5a-8e26-6cce9662f132... Jan 16 09:01:12.129639 systemd[1]: Expecting device dev-mapper-usr.device - /dev/mapper/usr... Jan 16 09:01:12.129650 systemd[1]: Reached target cryptsetup-pre.target - Local Encrypted Volumes (Pre). Jan 16 09:01:12.129660 systemd[1]: Reached target cryptsetup.target - Local Encrypted Volumes. Jan 16 09:01:12.129674 systemd[1]: Reached target paths.target - Path Units. Jan 16 09:01:12.129684 systemd[1]: Reached target slices.target - Slice Units. Jan 16 09:01:12.129695 systemd[1]: Reached target swap.target - Swaps. Jan 16 09:01:12.129708 systemd[1]: Reached target timers.target - Timer Units. Jan 16 09:01:12.129723 systemd[1]: Listening on iscsid.socket - Open-iSCSI iscsid Socket. Jan 16 09:01:12.129738 systemd[1]: Listening on iscsiuio.socket - Open-iSCSI iscsiuio Socket. 
Jan 16 09:01:12.129757 systemd[1]: Listening on systemd-journald-dev-log.socket - Journal Socket (/dev/log). Jan 16 09:01:12.129773 systemd[1]: Listening on systemd-journald.socket - Journal Socket. Jan 16 09:01:12.129787 systemd[1]: Listening on systemd-networkd.socket - Network Service Netlink Socket. Jan 16 09:01:12.129802 systemd[1]: Listening on systemd-udevd-control.socket - udev Control Socket. Jan 16 09:01:12.129816 systemd[1]: Listening on systemd-udevd-kernel.socket - udev Kernel Socket. Jan 16 09:01:12.129830 systemd[1]: Reached target sockets.target - Socket Units. Jan 16 09:01:12.129845 systemd[1]: Starting ignition-setup-pre.service - Ignition env setup... Jan 16 09:01:12.129881 systemd[1]: Starting kmod-static-nodes.service - Create List of Static Device Nodes... Jan 16 09:01:12.129901 systemd[1]: Finished network-cleanup.service - Network Cleanup. Jan 16 09:01:12.129917 systemd[1]: Starting systemd-fsck-usr.service... Jan 16 09:01:12.129932 systemd[1]: Starting systemd-journald.service - Journal Service... Jan 16 09:01:12.129946 systemd[1]: Starting systemd-modules-load.service - Load Kernel Modules... Jan 16 09:01:12.131041 systemd[1]: Starting systemd-vconsole-setup.service - Virtual Console Setup... Jan 16 09:01:12.131059 systemd[1]: Finished ignition-setup-pre.service - Ignition env setup. Jan 16 09:01:12.131070 systemd[1]: Finished kmod-static-nodes.service - Create List of Static Device Nodes. Jan 16 09:01:12.131081 systemd[1]: Finished systemd-fsck-usr.service. Jan 16 09:01:12.131158 systemd-journald[183]: Collecting audit messages is disabled. Jan 16 09:01:12.131191 systemd[1]: Starting systemd-tmpfiles-setup-dev-early.service - Create Static Device Nodes in /dev gracefully... Jan 16 09:01:12.131208 systemd-journald[183]: Journal started Jan 16 09:01:12.131249 systemd-journald[183]: Runtime Journal (/run/log/journal/8e366d5a9aaa4ea19b8c563cc417a967) is 4.9M, max 39.3M, 34.4M free. Jan 16 09:01:12.109635 systemd-modules-load[184]: Inserted module 'overlay' Jan 16 09:01:12.159861 systemd[1]: Started systemd-journald.service - Journal Service. Jan 16 09:01:12.161977 systemd[1]: Finished systemd-vconsole-setup.service - Virtual Console Setup. Jan 16 09:01:12.168053 kernel: bridge: filtering via arp/ip/ip6tables is no longer available by default. Update your scripts to load br_netfilter if you need this. Jan 16 09:01:12.176085 systemd-modules-load[184]: Inserted module 'br_netfilter' Jan 16 09:01:12.177025 kernel: Bridge firewalling registered Jan 16 09:01:12.181276 systemd[1]: Starting dracut-cmdline-ask.service - dracut ask for additional cmdline parameters... Jan 16 09:01:12.191015 systemd[1]: Starting systemd-tmpfiles-setup.service - Create System Files and Directories... Jan 16 09:01:12.192770 systemd[1]: Finished systemd-modules-load.service - Load Kernel Modules. Jan 16 09:01:12.195076 systemd[1]: Finished systemd-tmpfiles-setup-dev-early.service - Create Static Device Nodes in /dev gracefully. Jan 16 09:01:12.209260 systemd[1]: Starting systemd-sysctl.service - Apply Kernel Variables... Jan 16 09:01:12.213169 systemd[1]: Starting systemd-tmpfiles-setup-dev.service - Create Static Device Nodes in /dev... Jan 16 09:01:12.215039 systemd[1]: Finished systemd-tmpfiles-setup.service - Create System Files and Directories. Jan 16 09:01:12.232935 systemd[1]: Finished dracut-cmdline-ask.service - dracut ask for additional cmdline parameters. Jan 16 09:01:12.234550 systemd[1]: Finished systemd-sysctl.service - Apply Kernel Variables. 
Jan 16 09:01:12.241517 systemd[1]: Starting dracut-cmdline.service - dracut cmdline hook... Jan 16 09:01:12.249397 systemd[1]: Starting systemd-resolved.service - Network Name Resolution... Jan 16 09:01:12.251991 systemd[1]: Finished systemd-tmpfiles-setup-dev.service - Create Static Device Nodes in /dev. Jan 16 09:01:12.276841 dracut-cmdline[216]: dracut-dracut-053 Jan 16 09:01:12.283994 dracut-cmdline[216]: Using kernel command line parameters: rd.driver.pre=btrfs rootflags=rw mount.usrflags=ro BOOT_IMAGE=/flatcar/vmlinuz-a mount.usr=/dev/mapper/usr verity.usr=PARTUUID=7130c94a-213a-4e5a-8e26-6cce9662f132 rootflags=rw mount.usrflags=ro consoleblank=0 root=LABEL=ROOT console=ttyS0,115200n8 console=tty0 flatcar.first_boot=detected flatcar.oem.id=digitalocean verity.usrhash=1175b5bd4028ce8485b23b7d346f787308cbfa43cca7b1fefd4254406dce7d07 Jan 16 09:01:12.322584 systemd-resolved[218]: Positive Trust Anchors: Jan 16 09:01:12.322603 systemd-resolved[218]: . IN DS 20326 8 2 e06d44b80b8f1d39a95c0b0d7c65d08458e880409bbc683457104237c7f8ec8d Jan 16 09:01:12.322662 systemd-resolved[218]: Negative trust anchors: home.arpa 10.in-addr.arpa 16.172.in-addr.arpa 17.172.in-addr.arpa 18.172.in-addr.arpa 19.172.in-addr.arpa 20.172.in-addr.arpa 21.172.in-addr.arpa 22.172.in-addr.arpa 23.172.in-addr.arpa 24.172.in-addr.arpa 25.172.in-addr.arpa 26.172.in-addr.arpa 27.172.in-addr.arpa 28.172.in-addr.arpa 29.172.in-addr.arpa 30.172.in-addr.arpa 31.172.in-addr.arpa 170.0.0.192.in-addr.arpa 171.0.0.192.in-addr.arpa 168.192.in-addr.arpa d.f.ip6.arpa ipv4only.arpa resolver.arpa corp home internal intranet lan local private test Jan 16 09:01:12.327733 systemd-resolved[218]: Defaulting to hostname 'linux'. Jan 16 09:01:12.331491 systemd[1]: Started systemd-resolved.service - Network Name Resolution. Jan 16 09:01:12.332940 systemd[1]: Reached target nss-lookup.target - Host and Network Name Lookups. Jan 16 09:01:12.462038 kernel: SCSI subsystem initialized Jan 16 09:01:12.478005 kernel: Loading iSCSI transport class v2.0-870. Jan 16 09:01:12.509639 kernel: iscsi: registered transport (tcp) Jan 16 09:01:12.542015 kernel: iscsi: registered transport (qla4xxx) Jan 16 09:01:12.542122 kernel: QLogic iSCSI HBA Driver Jan 16 09:01:12.658865 systemd[1]: Finished dracut-cmdline.service - dracut cmdline hook. Jan 16 09:01:12.677426 systemd[1]: Starting dracut-pre-udev.service - dracut pre-udev hook... Jan 16 09:01:12.718050 kernel: device-mapper: core: CONFIG_IMA_DISABLE_HTABLE is disabled. Duplicate IMA measurements will not be recorded in the IMA log. Jan 16 09:01:12.718164 kernel: device-mapper: uevent: version 1.0.3 Jan 16 09:01:12.718188 kernel: device-mapper: ioctl: 4.48.0-ioctl (2023-03-01) initialised: dm-devel@redhat.com Jan 16 09:01:12.782661 kernel: raid6: avx2x4 gen() 16291 MB/s Jan 16 09:01:12.799150 kernel: raid6: avx2x2 gen() 14061 MB/s Jan 16 09:01:12.816037 kernel: raid6: avx2x1 gen() 13875 MB/s Jan 16 09:01:12.816153 kernel: raid6: using algorithm avx2x4 gen() 16291 MB/s Jan 16 09:01:12.834155 kernel: raid6: .... xor() 5279 MB/s, rmw enabled Jan 16 09:01:12.834261 kernel: raid6: using avx2x2 recovery algorithm Jan 16 09:01:12.874031 kernel: xor: automatically using best checksumming function avx Jan 16 09:01:13.126237 kernel: Btrfs loaded, zoned=no, fsverity=no Jan 16 09:01:13.151335 systemd[1]: Finished dracut-pre-udev.service - dracut pre-udev hook. Jan 16 09:01:13.165229 systemd[1]: Starting systemd-udevd.service - Rule-based Manager for Device Events and Files... 
Jan 16 09:01:13.203263 systemd-udevd[402]: Using default interface naming scheme 'v255'. Jan 16 09:01:13.213549 systemd[1]: Started systemd-udevd.service - Rule-based Manager for Device Events and Files. Jan 16 09:01:13.226045 systemd[1]: Starting dracut-pre-trigger.service - dracut pre-trigger hook... Jan 16 09:01:13.267420 dracut-pre-trigger[408]: rd.md=0: removing MD RAID activation Jan 16 09:01:13.321038 systemd[1]: Finished dracut-pre-trigger.service - dracut pre-trigger hook. Jan 16 09:01:13.328428 systemd[1]: Starting systemd-udev-trigger.service - Coldplug All udev Devices... Jan 16 09:01:13.435537 systemd[1]: Finished systemd-udev-trigger.service - Coldplug All udev Devices. Jan 16 09:01:13.448045 systemd[1]: Starting dracut-initqueue.service - dracut initqueue hook... Jan 16 09:01:13.489682 systemd[1]: Finished dracut-initqueue.service - dracut initqueue hook. Jan 16 09:01:13.494911 systemd[1]: Reached target remote-fs-pre.target - Preparation for Remote File Systems. Jan 16 09:01:13.496729 systemd[1]: Reached target remote-cryptsetup.target - Remote Encrypted Volumes. Jan 16 09:01:13.497813 systemd[1]: Reached target remote-fs.target - Remote File Systems. Jan 16 09:01:13.506338 systemd[1]: Starting dracut-pre-mount.service - dracut pre-mount hook... Jan 16 09:01:13.535447 systemd[1]: Finished dracut-pre-mount.service - dracut pre-mount hook. Jan 16 09:01:13.572985 kernel: cryptd: max_cpu_qlen set to 1000 Jan 16 09:01:13.583980 kernel: virtio_blk virtio4: 1/0/0 default/read/poll queues Jan 16 09:01:13.668299 kernel: virtio_blk virtio4: [vda] 125829120 512-byte logical blocks (64.4 GB/60.0 GiB) Jan 16 09:01:13.668469 kernel: scsi host0: Virtio SCSI HBA Jan 16 09:01:13.668647 kernel: AVX2 version of gcm_enc/dec engaged. Jan 16 09:01:13.668667 kernel: AES CTR mode by8 optimization enabled Jan 16 09:01:13.668686 kernel: GPT:Primary header thinks Alt. header is not at the end of the disk. Jan 16 09:01:13.668706 kernel: GPT:9289727 != 125829119 Jan 16 09:01:13.668739 kernel: GPT:Alternate GPT header not at the end of the disk. Jan 16 09:01:13.668752 kernel: GPT:9289727 != 125829119 Jan 16 09:01:13.668764 kernel: libata version 3.00 loaded. Jan 16 09:01:13.668776 kernel: GPT: Use GNU Parted to correct GPT errors. Jan 16 09:01:13.668788 kernel: vda: vda1 vda2 vda3 vda4 vda6 vda7 vda9 Jan 16 09:01:13.668800 kernel: ata_piix 0000:00:01.1: version 2.13 Jan 16 09:01:13.678274 kernel: scsi host1: ata_piix Jan 16 09:01:13.678548 kernel: scsi host2: ata_piix Jan 16 09:01:13.678777 kernel: ata1: PATA max MWDMA2 cmd 0x1f0 ctl 0x3f6 bmdma 0xc1e0 irq 14 Jan 16 09:01:13.678804 kernel: ata2: PATA max MWDMA2 cmd 0x170 ctl 0x376 bmdma 0xc1e8 irq 15 Jan 16 09:01:13.678823 kernel: virtio_blk virtio5: 1/0/0 default/read/poll queues Jan 16 09:01:13.681997 kernel: virtio_blk virtio5: [vdb] 920 512-byte logical blocks (471 kB/460 KiB) Jan 16 09:01:13.653148 systemd[1]: dracut-cmdline-ask.service: Deactivated successfully. Jan 16 09:01:13.654076 systemd[1]: Stopped dracut-cmdline-ask.service - dracut ask for additional cmdline parameters. Jan 16 09:01:13.657046 systemd[1]: Stopping dracut-cmdline-ask.service - dracut ask for additional cmdline parameters... Jan 16 09:01:13.657618 systemd[1]: systemd-vconsole-setup.service: Deactivated successfully. Jan 16 09:01:13.657982 systemd[1]: Stopped systemd-vconsole-setup.service - Virtual Console Setup. Jan 16 09:01:13.658589 systemd[1]: Stopping systemd-vconsole-setup.service - Virtual Console Setup... 
Jan 16 09:01:13.669344 systemd[1]: Starting systemd-vconsole-setup.service - Virtual Console Setup... Jan 16 09:01:13.710025 kernel: ACPI: bus type USB registered Jan 16 09:01:13.713508 kernel: usbcore: registered new interface driver usbfs Jan 16 09:01:13.713606 kernel: usbcore: registered new interface driver hub Jan 16 09:01:13.716072 kernel: usbcore: registered new device driver usb Jan 16 09:01:13.764132 systemd[1]: Finished systemd-vconsole-setup.service - Virtual Console Setup. Jan 16 09:01:13.775316 systemd[1]: Starting dracut-cmdline-ask.service - dracut ask for additional cmdline parameters... Jan 16 09:01:13.806941 systemd[1]: Finished dracut-cmdline-ask.service - dracut ask for additional cmdline parameters. Jan 16 09:01:13.871992 kernel: BTRFS: device label OEM devid 1 transid 14 /dev/vda6 scanned by (udev-worker) (456) Jan 16 09:01:13.883044 kernel: BTRFS: device fsid 5e7921ba-229a-48a0-bc77-9b30aaa34aeb devid 1 transid 36 /dev/vda3 scanned by (udev-worker) (448) Jan 16 09:01:13.886334 systemd[1]: Found device dev-disk-by\x2dlabel-EFI\x2dSYSTEM.device - /dev/disk/by-label/EFI-SYSTEM. Jan 16 09:01:13.895770 systemd[1]: Found device dev-disk-by\x2dlabel-ROOT.device - /dev/disk/by-label/ROOT. Jan 16 09:01:13.900000 kernel: uhci_hcd 0000:00:01.2: UHCI Host Controller Jan 16 09:01:13.912198 kernel: uhci_hcd 0000:00:01.2: new USB bus registered, assigned bus number 1 Jan 16 09:01:13.912490 kernel: uhci_hcd 0000:00:01.2: detected 2 ports Jan 16 09:01:13.912701 kernel: uhci_hcd 0000:00:01.2: irq 11, io port 0x0000c180 Jan 16 09:01:13.912909 kernel: hub 1-0:1.0: USB hub found Jan 16 09:01:13.913225 kernel: hub 1-0:1.0: 2 ports detected Jan 16 09:01:13.911854 systemd[1]: Found device dev-disk-by\x2dlabel-OEM.device - /dev/disk/by-label/OEM. Jan 16 09:01:13.919866 systemd[1]: Found device dev-disk-by\x2dpartlabel-USR\x2dA.device - /dev/disk/by-partlabel/USR-A. Jan 16 09:01:13.920546 systemd[1]: Found device dev-disk-by\x2dpartuuid-7130c94a\x2d213a\x2d4e5a\x2d8e26\x2d6cce9662f132.device - /dev/disk/by-partuuid/7130c94a-213a-4e5a-8e26-6cce9662f132. Jan 16 09:01:13.934348 systemd[1]: Starting disk-uuid.service - Generate new UUID for disk GPT if necessary... Jan 16 09:01:13.942793 disk-uuid[548]: Primary Header is updated. Jan 16 09:01:13.942793 disk-uuid[548]: Secondary Entries is updated. Jan 16 09:01:13.942793 disk-uuid[548]: Secondary Header is updated. Jan 16 09:01:13.965019 kernel: vda: vda1 vda2 vda3 vda4 vda6 vda7 vda9 Jan 16 09:01:14.978077 kernel: vda: vda1 vda2 vda3 vda4 vda6 vda7 vda9 Jan 16 09:01:14.979511 disk-uuid[549]: The operation has completed successfully. Jan 16 09:01:15.032951 systemd[1]: disk-uuid.service: Deactivated successfully. Jan 16 09:01:15.033170 systemd[1]: Finished disk-uuid.service - Generate new UUID for disk GPT if necessary. Jan 16 09:01:15.055284 systemd[1]: Starting verity-setup.service - Verity Setup for /dev/mapper/usr... Jan 16 09:01:15.060251 sh[560]: Success Jan 16 09:01:15.078620 kernel: device-mapper: verity: sha256 using implementation "sha256-avx2" Jan 16 09:01:15.167425 systemd[1]: Found device dev-mapper-usr.device - /dev/mapper/usr. Jan 16 09:01:15.170312 systemd[1]: Mounting sysusr-usr.mount - /sysusr/usr... Jan 16 09:01:15.171722 systemd[1]: Finished verity-setup.service - Verity Setup for /dev/mapper/usr. 
Jan 16 09:01:15.217787 kernel: BTRFS info (device dm-0): first mount of filesystem 5e7921ba-229a-48a0-bc77-9b30aaa34aeb Jan 16 09:01:15.217942 kernel: BTRFS info (device dm-0): using crc32c (crc32c-intel) checksum algorithm Jan 16 09:01:15.217982 kernel: BTRFS warning (device dm-0): 'nologreplay' is deprecated, use 'rescue=nologreplay' instead Jan 16 09:01:15.223301 kernel: BTRFS info (device dm-0): disabling log replay at mount time Jan 16 09:01:15.223413 kernel: BTRFS info (device dm-0): using free space tree Jan 16 09:01:15.237655 systemd[1]: Mounted sysusr-usr.mount - /sysusr/usr. Jan 16 09:01:15.239381 systemd[1]: afterburn-network-kargs.service - Afterburn Initrd Setup Network Kernel Arguments was skipped because no trigger condition checks were met. Jan 16 09:01:15.255333 systemd[1]: Starting ignition-setup.service - Ignition (setup)... Jan 16 09:01:15.259085 systemd[1]: Starting parse-ip-for-networkd.service - Write systemd-networkd units from cmdline... Jan 16 09:01:15.277828 kernel: BTRFS info (device vda6): first mount of filesystem 1066b41d-395d-4ccb-b5ae-be36ea0fc11e Jan 16 09:01:15.277984 kernel: BTRFS info (device vda6): using crc32c (crc32c-intel) checksum algorithm Jan 16 09:01:15.278004 kernel: BTRFS info (device vda6): using free space tree Jan 16 09:01:15.281985 kernel: BTRFS info (device vda6): auto enabling async discard Jan 16 09:01:15.299585 kernel: BTRFS info (device vda6): last unmount of filesystem 1066b41d-395d-4ccb-b5ae-be36ea0fc11e Jan 16 09:01:15.298992 systemd[1]: mnt-oem.mount: Deactivated successfully. Jan 16 09:01:15.315894 systemd[1]: Finished ignition-setup.service - Ignition (setup). Jan 16 09:01:15.326440 systemd[1]: Starting ignition-fetch-offline.service - Ignition (fetch-offline)... Jan 16 09:01:15.503237 systemd[1]: Finished parse-ip-for-networkd.service - Write systemd-networkd units from cmdline. Jan 16 09:01:15.514333 systemd[1]: Starting systemd-networkd.service - Network Configuration... Jan 16 09:01:15.515941 ignition[663]: Ignition 2.20.0 Jan 16 09:01:15.515979 ignition[663]: Stage: fetch-offline Jan 16 09:01:15.519821 systemd[1]: Finished ignition-fetch-offline.service - Ignition (fetch-offline). Jan 16 09:01:15.516082 ignition[663]: no configs at "/usr/lib/ignition/base.d" Jan 16 09:01:15.516100 ignition[663]: no config dir at "/usr/lib/ignition/base.platform.d/digitalocean" Jan 16 09:01:15.516309 ignition[663]: parsed url from cmdline: "" Jan 16 09:01:15.516317 ignition[663]: no config URL provided Jan 16 09:01:15.516328 ignition[663]: reading system config file "/usr/lib/ignition/user.ign" Jan 16 09:01:15.516344 ignition[663]: no config at "/usr/lib/ignition/user.ign" Jan 16 09:01:15.516355 ignition[663]: failed to fetch config: resource requires networking Jan 16 09:01:15.516812 ignition[663]: Ignition finished successfully Jan 16 09:01:15.561836 systemd-networkd[749]: lo: Link UP Jan 16 09:01:15.561868 systemd-networkd[749]: lo: Gained carrier Jan 16 09:01:15.564765 systemd-networkd[749]: Enumeration completed Jan 16 09:01:15.565455 systemd[1]: Started systemd-networkd.service - Network Configuration. Jan 16 09:01:15.565574 systemd-networkd[749]: eth0: found matching network '/usr/lib/systemd/network/yy-digitalocean.network', based on potentially unpredictable interface name. Jan 16 09:01:15.565582 systemd-networkd[749]: eth0: Configuring with /usr/lib/systemd/network/yy-digitalocean.network. Jan 16 09:01:15.566296 systemd[1]: Reached target network.target - Network. 
Jan 16 09:01:15.567492 systemd-networkd[749]: eth1: found matching network '/usr/lib/systemd/network/zz-default.network', based on potentially unpredictable interface name. Jan 16 09:01:15.567499 systemd-networkd[749]: eth1: Configuring with /usr/lib/systemd/network/zz-default.network. Jan 16 09:01:15.569026 systemd-networkd[749]: eth0: Link UP Jan 16 09:01:15.569034 systemd-networkd[749]: eth0: Gained carrier Jan 16 09:01:15.569049 systemd-networkd[749]: eth0: found matching network '/usr/lib/systemd/network/yy-digitalocean.network', based on potentially unpredictable interface name. Jan 16 09:01:15.572543 systemd-networkd[749]: eth1: Link UP Jan 16 09:01:15.572548 systemd-networkd[749]: eth1: Gained carrier Jan 16 09:01:15.572564 systemd-networkd[749]: eth1: found matching network '/usr/lib/systemd/network/zz-default.network', based on potentially unpredictable interface name. Jan 16 09:01:15.574693 systemd[1]: Starting ignition-fetch.service - Ignition (fetch)... Jan 16 09:01:15.588164 systemd-networkd[749]: eth1: DHCPv4 address 10.124.0.8/20 acquired from 169.254.169.253 Jan 16 09:01:15.593118 systemd-networkd[749]: eth0: DHCPv4 address 143.110.229.235/20, gateway 143.110.224.1 acquired from 169.254.169.253 Jan 16 09:01:15.622470 ignition[753]: Ignition 2.20.0 Jan 16 09:01:15.623603 ignition[753]: Stage: fetch Jan 16 09:01:15.624128 ignition[753]: no configs at "/usr/lib/ignition/base.d" Jan 16 09:01:15.624152 ignition[753]: no config dir at "/usr/lib/ignition/base.platform.d/digitalocean" Jan 16 09:01:15.624371 ignition[753]: parsed url from cmdline: "" Jan 16 09:01:15.624386 ignition[753]: no config URL provided Jan 16 09:01:15.624396 ignition[753]: reading system config file "/usr/lib/ignition/user.ign" Jan 16 09:01:15.624412 ignition[753]: no config at "/usr/lib/ignition/user.ign" Jan 16 09:01:15.624457 ignition[753]: GET http://169.254.169.254/metadata/v1/user-data: attempt #1 Jan 16 09:01:15.647392 ignition[753]: GET result: OK Jan 16 09:01:15.647521 ignition[753]: parsing config with SHA512: 4b22062c3acb135deface795c83239566eb2efbec895c1df8a6efe0491b53e29959676ca0b5cf419481e5a6b6af46ee8b5149a39607595bcd4222620ff5fa290 Jan 16 09:01:15.655274 unknown[753]: fetched base config from "system" Jan 16 09:01:15.655291 unknown[753]: fetched base config from "system" Jan 16 09:01:15.656016 ignition[753]: fetch: fetch complete Jan 16 09:01:15.655303 unknown[753]: fetched user config from "digitalocean" Jan 16 09:01:15.656027 ignition[753]: fetch: fetch passed Jan 16 09:01:15.659502 systemd[1]: Finished ignition-fetch.service - Ignition (fetch). Jan 16 09:01:15.656121 ignition[753]: Ignition finished successfully Jan 16 09:01:15.673463 systemd[1]: Starting ignition-kargs.service - Ignition (kargs)... Jan 16 09:01:15.715868 ignition[761]: Ignition 2.20.0 Jan 16 09:01:15.715885 ignition[761]: Stage: kargs Jan 16 09:01:15.716268 ignition[761]: no configs at "/usr/lib/ignition/base.d" Jan 16 09:01:15.716289 ignition[761]: no config dir at "/usr/lib/ignition/base.platform.d/digitalocean" Jan 16 09:01:15.719444 systemd[1]: Finished ignition-kargs.service - Ignition (kargs). Jan 16 09:01:15.717583 ignition[761]: kargs: kargs passed Jan 16 09:01:15.717680 ignition[761]: Ignition finished successfully Jan 16 09:01:15.733437 systemd[1]: Starting ignition-disks.service - Ignition (disks)... 
Jan 16 09:01:15.763532 ignition[768]: Ignition 2.20.0 Jan 16 09:01:15.763561 ignition[768]: Stage: disks Jan 16 09:01:15.763891 ignition[768]: no configs at "/usr/lib/ignition/base.d" Jan 16 09:01:15.763905 ignition[768]: no config dir at "/usr/lib/ignition/base.platform.d/digitalocean" Jan 16 09:01:15.766562 ignition[768]: disks: disks passed Jan 16 09:01:15.766688 ignition[768]: Ignition finished successfully Jan 16 09:01:15.768197 systemd[1]: Finished ignition-disks.service - Ignition (disks). Jan 16 09:01:15.772816 systemd[1]: Reached target initrd-root-device.target - Initrd Root Device. Jan 16 09:01:15.773806 systemd[1]: Reached target local-fs-pre.target - Preparation for Local File Systems. Jan 16 09:01:15.775309 systemd[1]: Reached target local-fs.target - Local File Systems. Jan 16 09:01:15.777230 systemd[1]: Reached target sysinit.target - System Initialization. Jan 16 09:01:15.777787 systemd[1]: Reached target basic.target - Basic System. Jan 16 09:01:15.806439 systemd[1]: Starting systemd-fsck-root.service - File System Check on /dev/disk/by-label/ROOT... Jan 16 09:01:15.853191 systemd-fsck[777]: ROOT: clean, 14/553520 files, 52654/553472 blocks Jan 16 09:01:15.859837 systemd[1]: Finished systemd-fsck-root.service - File System Check on /dev/disk/by-label/ROOT. Jan 16 09:01:15.870241 systemd[1]: Mounting sysroot.mount - /sysroot... Jan 16 09:01:16.015201 kernel: EXT4-fs (vda9): mounted filesystem 84bcd1b2-5573-4e91-8fd5-f97782397085 r/w with ordered data mode. Quota mode: none. Jan 16 09:01:16.016076 systemd[1]: Mounted sysroot.mount - /sysroot. Jan 16 09:01:16.017738 systemd[1]: Reached target initrd-root-fs.target - Initrd Root File System. Jan 16 09:01:16.029354 systemd[1]: Mounting sysroot-oem.mount - /sysroot/oem... Jan 16 09:01:16.038347 systemd[1]: Mounting sysroot-usr.mount - /sysroot/usr... Jan 16 09:01:16.042394 systemd[1]: Starting flatcar-afterburn-network.service - Flatcar Afterburn network service... Jan 16 09:01:16.053515 systemd[1]: Starting flatcar-metadata-hostname.service - Flatcar Metadata Hostname Agent... Jan 16 09:01:16.056802 systemd[1]: ignition-remount-sysroot.service - Remount /sysroot read-write for Ignition was skipped because of an unmet condition check (ConditionPathIsReadWrite=!/sysroot). Jan 16 09:01:16.056885 systemd[1]: Reached target ignition-diskful.target - Ignition Boot Disk Setup. Jan 16 09:01:16.067033 kernel: BTRFS: device label OEM devid 1 transid 15 /dev/vda6 scanned by mount (786) Jan 16 09:01:16.072447 kernel: BTRFS info (device vda6): first mount of filesystem 1066b41d-395d-4ccb-b5ae-be36ea0fc11e Jan 16 09:01:16.072561 kernel: BTRFS info (device vda6): using crc32c (crc32c-intel) checksum algorithm Jan 16 09:01:16.072603 kernel: BTRFS info (device vda6): using free space tree Jan 16 09:01:16.082243 systemd[1]: Mounted sysroot-usr.mount - /sysroot/usr. Jan 16 09:01:16.088049 kernel: BTRFS info (device vda6): auto enabling async discard Jan 16 09:01:16.092795 systemd[1]: Starting initrd-setup-root.service - Root filesystem setup... Jan 16 09:01:16.103904 systemd[1]: Mounted sysroot-oem.mount - /sysroot/oem. 
Jan 16 09:01:16.210290 coreos-metadata[789]: Jan 16 09:01:16.209 INFO Fetching http://169.254.169.254/metadata/v1.json: Attempt #1 Jan 16 09:01:16.212658 coreos-metadata[788]: Jan 16 09:01:16.212 INFO Fetching http://169.254.169.254/metadata/v1.json: Attempt #1 Jan 16 09:01:16.220846 initrd-setup-root[817]: cut: /sysroot/etc/passwd: No such file or directory Jan 16 09:01:16.228622 coreos-metadata[789]: Jan 16 09:01:16.228 INFO Fetch successful Jan 16 09:01:16.235151 coreos-metadata[788]: Jan 16 09:01:16.235 INFO Fetch successful Jan 16 09:01:16.238890 initrd-setup-root[824]: cut: /sysroot/etc/group: No such file or directory Jan 16 09:01:16.243365 coreos-metadata[789]: Jan 16 09:01:16.242 INFO wrote hostname ci-4152.2.0-e-4ce9573906 to /sysroot/etc/hostname Jan 16 09:01:16.243879 systemd[1]: flatcar-afterburn-network.service: Deactivated successfully. Jan 16 09:01:16.244113 systemd[1]: Finished flatcar-afterburn-network.service - Flatcar Afterburn network service. Jan 16 09:01:16.249469 systemd[1]: Finished flatcar-metadata-hostname.service - Flatcar Metadata Hostname Agent. Jan 16 09:01:16.258174 initrd-setup-root[832]: cut: /sysroot/etc/shadow: No such file or directory Jan 16 09:01:16.265423 initrd-setup-root[840]: cut: /sysroot/etc/gshadow: No such file or directory Jan 16 09:01:16.443352 systemd[1]: Finished initrd-setup-root.service - Root filesystem setup. Jan 16 09:01:16.448237 systemd[1]: Starting ignition-mount.service - Ignition (mount)... Jan 16 09:01:16.452299 systemd[1]: Starting sysroot-boot.service - /sysroot/boot... Jan 16 09:01:16.475001 kernel: BTRFS info (device vda6): last unmount of filesystem 1066b41d-395d-4ccb-b5ae-be36ea0fc11e Jan 16 09:01:16.475261 systemd[1]: sysroot-oem.mount: Deactivated successfully. Jan 16 09:01:16.510886 systemd[1]: Finished sysroot-boot.service - /sysroot/boot. Jan 16 09:01:16.531109 ignition[908]: INFO : Ignition 2.20.0 Jan 16 09:01:16.531109 ignition[908]: INFO : Stage: mount Jan 16 09:01:16.531109 ignition[908]: INFO : no configs at "/usr/lib/ignition/base.d" Jan 16 09:01:16.531109 ignition[908]: INFO : no config dir at "/usr/lib/ignition/base.platform.d/digitalocean" Jan 16 09:01:16.535901 ignition[908]: INFO : mount: mount passed Jan 16 09:01:16.535901 ignition[908]: INFO : Ignition finished successfully Jan 16 09:01:16.537764 systemd[1]: Finished ignition-mount.service - Ignition (mount). Jan 16 09:01:16.545231 systemd[1]: Starting ignition-files.service - Ignition (files)... Jan 16 09:01:16.575991 systemd[1]: Mounting sysroot-oem.mount - /sysroot/oem... Jan 16 09:01:16.602405 kernel: BTRFS: device label OEM devid 1 transid 16 /dev/vda6 scanned by mount (920) Jan 16 09:01:16.605593 kernel: BTRFS info (device vda6): first mount of filesystem 1066b41d-395d-4ccb-b5ae-be36ea0fc11e Jan 16 09:01:16.605724 kernel: BTRFS info (device vda6): using crc32c (crc32c-intel) checksum algorithm Jan 16 09:01:16.605750 kernel: BTRFS info (device vda6): using free space tree Jan 16 09:01:16.614066 kernel: BTRFS info (device vda6): auto enabling async discard Jan 16 09:01:16.616875 systemd[1]: Mounted sysroot-oem.mount - /sysroot/oem. 
Jan 16 09:01:16.665713 ignition[937]: INFO : Ignition 2.20.0 Jan 16 09:01:16.669236 ignition[937]: INFO : Stage: files Jan 16 09:01:16.669236 ignition[937]: INFO : no configs at "/usr/lib/ignition/base.d" Jan 16 09:01:16.669236 ignition[937]: INFO : no config dir at "/usr/lib/ignition/base.platform.d/digitalocean" Jan 16 09:01:16.669236 ignition[937]: DEBUG : files: compiled without relabeling support, skipping Jan 16 09:01:16.672171 ignition[937]: INFO : files: ensureUsers: op(1): [started] creating or modifying user "core" Jan 16 09:01:16.672171 ignition[937]: DEBUG : files: ensureUsers: op(1): executing: "usermod" "--root" "/sysroot" "core" Jan 16 09:01:16.675559 ignition[937]: INFO : files: ensureUsers: op(1): [finished] creating or modifying user "core" Jan 16 09:01:16.676662 ignition[937]: INFO : files: ensureUsers: op(2): [started] adding ssh keys to user "core" Jan 16 09:01:16.677698 ignition[937]: INFO : files: ensureUsers: op(2): [finished] adding ssh keys to user "core" Jan 16 09:01:16.677432 unknown[937]: wrote ssh authorized keys file for user: core Jan 16 09:01:16.681338 ignition[937]: INFO : files: createFilesystemsFiles: createFiles: op(3): [started] writing file "/sysroot/etc/flatcar-cgroupv1" Jan 16 09:01:16.682051 ignition[937]: INFO : files: createFilesystemsFiles: createFiles: op(3): [finished] writing file "/sysroot/etc/flatcar-cgroupv1" Jan 16 09:01:16.682051 ignition[937]: INFO : files: createFilesystemsFiles: createFiles: op(4): [started] writing file "/sysroot/home/core/install.sh" Jan 16 09:01:16.684154 ignition[937]: INFO : files: createFilesystemsFiles: createFiles: op(4): [finished] writing file "/sysroot/home/core/install.sh" Jan 16 09:01:16.684154 ignition[937]: INFO : files: createFilesystemsFiles: createFiles: op(5): [started] writing file "/sysroot/etc/flatcar/update.conf" Jan 16 09:01:16.684154 ignition[937]: INFO : files: createFilesystemsFiles: createFiles: op(5): [finished] writing file "/sysroot/etc/flatcar/update.conf" Jan 16 09:01:16.684154 ignition[937]: INFO : files: createFilesystemsFiles: createFiles: op(6): [started] writing link "/sysroot/etc/extensions/kubernetes.raw" -> "/opt/extensions/kubernetes/kubernetes-v1.29.2-x86-64.raw" Jan 16 09:01:16.684154 ignition[937]: INFO : files: createFilesystemsFiles: createFiles: op(6): [finished] writing link "/sysroot/etc/extensions/kubernetes.raw" -> "/opt/extensions/kubernetes/kubernetes-v1.29.2-x86-64.raw" Jan 16 09:01:16.684154 ignition[937]: INFO : files: createFilesystemsFiles: createFiles: op(7): [started] writing file "/sysroot/opt/extensions/kubernetes/kubernetes-v1.29.2-x86-64.raw" Jan 16 09:01:16.684154 ignition[937]: INFO : files: createFilesystemsFiles: createFiles: op(7): GET https://github.com/flatcar/sysext-bakery/releases/download/latest/kubernetes-v1.29.2-x86-64.raw: attempt #1 Jan 16 09:01:16.821144 systemd-networkd[749]: eth0: Gained IPv6LL Jan 16 09:01:16.884426 systemd-networkd[749]: eth1: Gained IPv6LL Jan 16 09:01:17.188665 ignition[937]: INFO : files: createFilesystemsFiles: createFiles: op(7): GET result: OK Jan 16 09:01:17.543263 ignition[937]: INFO : files: createFilesystemsFiles: createFiles: op(7): [finished] writing file "/sysroot/opt/extensions/kubernetes/kubernetes-v1.29.2-x86-64.raw" Jan 16 09:01:17.543263 ignition[937]: INFO : files: op(8): [started] processing unit "containerd.service" Jan 16 09:01:17.545218 ignition[937]: INFO : files: op(8): op(9): [started] writing systemd drop-in "10-use-cgroupfs.conf" at 
"/sysroot/etc/systemd/system/containerd.service.d/10-use-cgroupfs.conf" Jan 16 09:01:17.545218 ignition[937]: INFO : files: op(8): op(9): [finished] writing systemd drop-in "10-use-cgroupfs.conf" at "/sysroot/etc/systemd/system/containerd.service.d/10-use-cgroupfs.conf" Jan 16 09:01:17.545218 ignition[937]: INFO : files: op(8): [finished] processing unit "containerd.service" Jan 16 09:01:17.553316 ignition[937]: INFO : files: createResultFile: createFiles: op(a): [started] writing file "/sysroot/etc/.ignition-result.json" Jan 16 09:01:17.553316 ignition[937]: INFO : files: createResultFile: createFiles: op(a): [finished] writing file "/sysroot/etc/.ignition-result.json" Jan 16 09:01:17.553316 ignition[937]: INFO : files: files passed Jan 16 09:01:17.553316 ignition[937]: INFO : Ignition finished successfully Jan 16 09:01:17.547706 systemd[1]: Finished ignition-files.service - Ignition (files). Jan 16 09:01:17.564103 systemd[1]: Starting ignition-quench.service - Ignition (record completion)... Jan 16 09:01:17.568389 systemd[1]: Starting initrd-setup-root-after-ignition.service - Root filesystem completion... Jan 16 09:01:17.582876 systemd[1]: ignition-quench.service: Deactivated successfully. Jan 16 09:01:17.583153 systemd[1]: Finished ignition-quench.service - Ignition (record completion). Jan 16 09:01:17.605250 initrd-setup-root-after-ignition[966]: grep: /sysroot/etc/flatcar/enabled-sysext.conf: No such file or directory Jan 16 09:01:17.605250 initrd-setup-root-after-ignition[966]: grep: /sysroot/usr/share/flatcar/enabled-sysext.conf: No such file or directory Jan 16 09:01:17.609487 initrd-setup-root-after-ignition[970]: grep: /sysroot/etc/flatcar/enabled-sysext.conf: No such file or directory Jan 16 09:01:17.613246 systemd[1]: Finished initrd-setup-root-after-ignition.service - Root filesystem completion. Jan 16 09:01:17.615251 systemd[1]: Reached target ignition-complete.target - Ignition Complete. Jan 16 09:01:17.627424 systemd[1]: Starting initrd-parse-etc.service - Mountpoints Configured in the Real Root... Jan 16 09:01:17.681301 systemd[1]: initrd-parse-etc.service: Deactivated successfully. Jan 16 09:01:17.681545 systemd[1]: Finished initrd-parse-etc.service - Mountpoints Configured in the Real Root. Jan 16 09:01:17.683444 systemd[1]: Reached target initrd-fs.target - Initrd File Systems. Jan 16 09:01:17.684152 systemd[1]: Reached target initrd.target - Initrd Default Target. Jan 16 09:01:17.685399 systemd[1]: dracut-mount.service - dracut mount hook was skipped because no trigger condition checks were met. Jan 16 09:01:17.691489 systemd[1]: Starting dracut-pre-pivot.service - dracut pre-pivot and cleanup hook... Jan 16 09:01:17.732851 systemd[1]: Finished dracut-pre-pivot.service - dracut pre-pivot and cleanup hook. Jan 16 09:01:17.742416 systemd[1]: Starting initrd-cleanup.service - Cleaning Up and Shutting Down Daemons... Jan 16 09:01:17.759640 systemd[1]: Stopped target nss-lookup.target - Host and Network Name Lookups. Jan 16 09:01:17.760334 systemd[1]: Stopped target remote-cryptsetup.target - Remote Encrypted Volumes. Jan 16 09:01:17.761369 systemd[1]: Stopped target timers.target - Timer Units. Jan 16 09:01:17.762329 systemd[1]: dracut-pre-pivot.service: Deactivated successfully. Jan 16 09:01:17.762525 systemd[1]: Stopped dracut-pre-pivot.service - dracut pre-pivot and cleanup hook. Jan 16 09:01:17.763772 systemd[1]: Stopped target initrd.target - Initrd Default Target. Jan 16 09:01:17.764966 systemd[1]: Stopped target basic.target - Basic System. 
Jan 16 09:01:17.765796 systemd[1]: Stopped target ignition-complete.target - Ignition Complete. Jan 16 09:01:17.766692 systemd[1]: Stopped target ignition-diskful.target - Ignition Boot Disk Setup. Jan 16 09:01:17.767612 systemd[1]: Stopped target initrd-root-device.target - Initrd Root Device. Jan 16 09:01:17.768628 systemd[1]: Stopped target remote-fs.target - Remote File Systems. Jan 16 09:01:17.769613 systemd[1]: Stopped target remote-fs-pre.target - Preparation for Remote File Systems. Jan 16 09:01:17.770528 systemd[1]: Stopped target sysinit.target - System Initialization. Jan 16 09:01:17.771500 systemd[1]: Stopped target local-fs.target - Local File Systems. Jan 16 09:01:17.772352 systemd[1]: Stopped target swap.target - Swaps. Jan 16 09:01:17.773131 systemd[1]: dracut-pre-mount.service: Deactivated successfully. Jan 16 09:01:17.773299 systemd[1]: Stopped dracut-pre-mount.service - dracut pre-mount hook. Jan 16 09:01:17.774323 systemd[1]: Stopped target cryptsetup.target - Local Encrypted Volumes. Jan 16 09:01:17.774946 systemd[1]: Stopped target cryptsetup-pre.target - Local Encrypted Volumes (Pre). Jan 16 09:01:17.775833 systemd[1]: clevis-luks-askpass.path: Deactivated successfully. Jan 16 09:01:17.776019 systemd[1]: Stopped clevis-luks-askpass.path - Forward Password Requests to Clevis Directory Watch. Jan 16 09:01:17.776877 systemd[1]: dracut-initqueue.service: Deactivated successfully. Jan 16 09:01:17.777103 systemd[1]: Stopped dracut-initqueue.service - dracut initqueue hook. Jan 16 09:01:17.778417 systemd[1]: initrd-setup-root-after-ignition.service: Deactivated successfully. Jan 16 09:01:17.778720 systemd[1]: Stopped initrd-setup-root-after-ignition.service - Root filesystem completion. Jan 16 09:01:17.779802 systemd[1]: ignition-files.service: Deactivated successfully. Jan 16 09:01:17.780011 systemd[1]: Stopped ignition-files.service - Ignition (files). Jan 16 09:01:17.781058 systemd[1]: flatcar-metadata-hostname.service: Deactivated successfully. Jan 16 09:01:17.781328 systemd[1]: Stopped flatcar-metadata-hostname.service - Flatcar Metadata Hostname Agent. Jan 16 09:01:17.796603 systemd[1]: Stopping ignition-mount.service - Ignition (mount)... Jan 16 09:01:17.802520 systemd[1]: Stopping sysroot-boot.service - /sysroot/boot... Jan 16 09:01:17.804427 systemd[1]: systemd-udev-trigger.service: Deactivated successfully. Jan 16 09:01:17.804926 systemd[1]: Stopped systemd-udev-trigger.service - Coldplug All udev Devices. Jan 16 09:01:17.808135 systemd[1]: dracut-pre-trigger.service: Deactivated successfully. Jan 16 09:01:17.808505 systemd[1]: Stopped dracut-pre-trigger.service - dracut pre-trigger hook. Jan 16 09:01:17.817933 systemd[1]: initrd-cleanup.service: Deactivated successfully. Jan 16 09:01:17.818163 systemd[1]: Finished initrd-cleanup.service - Cleaning Up and Shutting Down Daemons. Jan 16 09:01:17.828110 ignition[990]: INFO : Ignition 2.20.0 Jan 16 09:01:17.828110 ignition[990]: INFO : Stage: umount Jan 16 09:01:17.857426 ignition[990]: INFO : no configs at "/usr/lib/ignition/base.d" Jan 16 09:01:17.857426 ignition[990]: INFO : no config dir at "/usr/lib/ignition/base.platform.d/digitalocean" Jan 16 09:01:17.857426 ignition[990]: INFO : umount: umount passed Jan 16 09:01:17.857426 ignition[990]: INFO : Ignition finished successfully Jan 16 09:01:17.855381 systemd[1]: ignition-mount.service: Deactivated successfully. Jan 16 09:01:17.855553 systemd[1]: Stopped ignition-mount.service - Ignition (mount). 
Jan 16 09:01:17.862941 systemd[1]: ignition-disks.service: Deactivated successfully. Jan 16 09:01:17.863078 systemd[1]: Stopped ignition-disks.service - Ignition (disks). Jan 16 09:01:17.865656 systemd[1]: ignition-kargs.service: Deactivated successfully. Jan 16 09:01:17.865785 systemd[1]: Stopped ignition-kargs.service - Ignition (kargs). Jan 16 09:01:17.866548 systemd[1]: ignition-fetch.service: Deactivated successfully. Jan 16 09:01:17.866639 systemd[1]: Stopped ignition-fetch.service - Ignition (fetch). Jan 16 09:01:17.868247 systemd[1]: Stopped target network.target - Network. Jan 16 09:01:17.870920 systemd[1]: ignition-fetch-offline.service: Deactivated successfully. Jan 16 09:01:17.872276 systemd[1]: Stopped ignition-fetch-offline.service - Ignition (fetch-offline). Jan 16 09:01:17.873826 systemd[1]: Stopped target paths.target - Path Units. Jan 16 09:01:17.875653 systemd[1]: systemd-ask-password-console.path: Deactivated successfully. Jan 16 09:01:17.879234 systemd[1]: Stopped systemd-ask-password-console.path - Dispatch Password Requests to Console Directory Watch. Jan 16 09:01:17.880226 systemd[1]: Stopped target slices.target - Slice Units. Jan 16 09:01:17.880819 systemd[1]: Stopped target sockets.target - Socket Units. Jan 16 09:01:17.882697 systemd[1]: iscsid.socket: Deactivated successfully. Jan 16 09:01:17.882790 systemd[1]: Closed iscsid.socket - Open-iSCSI iscsid Socket. Jan 16 09:01:17.888464 systemd[1]: iscsiuio.socket: Deactivated successfully. Jan 16 09:01:17.888554 systemd[1]: Closed iscsiuio.socket - Open-iSCSI iscsiuio Socket. Jan 16 09:01:17.891255 systemd[1]: ignition-setup.service: Deactivated successfully. Jan 16 09:01:17.891363 systemd[1]: Stopped ignition-setup.service - Ignition (setup). Jan 16 09:01:17.892512 systemd[1]: ignition-setup-pre.service: Deactivated successfully. Jan 16 09:01:17.892597 systemd[1]: Stopped ignition-setup-pre.service - Ignition env setup. Jan 16 09:01:17.894500 systemd[1]: Stopping systemd-networkd.service - Network Configuration... Jan 16 09:01:17.895889 systemd[1]: Stopping systemd-resolved.service - Network Name Resolution... Jan 16 09:01:17.898052 systemd-networkd[749]: eth0: DHCPv6 lease lost Jan 16 09:01:17.899477 systemd[1]: sysroot-boot.mount: Deactivated successfully. Jan 16 09:01:17.902190 systemd[1]: sysroot-boot.service: Deactivated successfully. Jan 16 09:01:17.902936 systemd-networkd[749]: eth1: DHCPv6 lease lost Jan 16 09:01:17.903295 systemd[1]: Stopped sysroot-boot.service - /sysroot/boot. Jan 16 09:01:17.906771 systemd[1]: systemd-resolved.service: Deactivated successfully. Jan 16 09:01:17.907236 systemd[1]: Stopped systemd-resolved.service - Network Name Resolution. Jan 16 09:01:17.916654 systemd[1]: systemd-networkd.service: Deactivated successfully. Jan 16 09:01:17.917021 systemd[1]: Stopped systemd-networkd.service - Network Configuration. Jan 16 09:01:17.922606 systemd[1]: systemd-networkd.socket: Deactivated successfully. Jan 16 09:01:17.922706 systemd[1]: Closed systemd-networkd.socket - Network Service Netlink Socket. Jan 16 09:01:17.923947 systemd[1]: initrd-setup-root.service: Deactivated successfully. Jan 16 09:01:17.924113 systemd[1]: Stopped initrd-setup-root.service - Root filesystem setup. Jan 16 09:01:17.932373 systemd[1]: Stopping network-cleanup.service - Network Cleanup... Jan 16 09:01:17.933109 systemd[1]: parse-ip-for-networkd.service: Deactivated successfully. Jan 16 09:01:17.933288 systemd[1]: Stopped parse-ip-for-networkd.service - Write systemd-networkd units from cmdline. 
Jan 16 09:01:17.934585 systemd[1]: systemd-sysctl.service: Deactivated successfully. Jan 16 09:01:17.934703 systemd[1]: Stopped systemd-sysctl.service - Apply Kernel Variables. Jan 16 09:01:17.936270 systemd[1]: systemd-modules-load.service: Deactivated successfully. Jan 16 09:01:17.936378 systemd[1]: Stopped systemd-modules-load.service - Load Kernel Modules. Jan 16 09:01:17.939752 systemd[1]: systemd-tmpfiles-setup.service: Deactivated successfully. Jan 16 09:01:17.939878 systemd[1]: Stopped systemd-tmpfiles-setup.service - Create System Files and Directories. Jan 16 09:01:17.949450 systemd[1]: Stopping systemd-udevd.service - Rule-based Manager for Device Events and Files... Jan 16 09:01:17.972870 systemd[1]: network-cleanup.service: Deactivated successfully. Jan 16 09:01:17.973251 systemd[1]: Stopped network-cleanup.service - Network Cleanup. Jan 16 09:01:17.974935 systemd[1]: systemd-udevd.service: Deactivated successfully. Jan 16 09:01:17.975224 systemd[1]: Stopped systemd-udevd.service - Rule-based Manager for Device Events and Files. Jan 16 09:01:17.977375 systemd[1]: systemd-udevd-control.socket: Deactivated successfully. Jan 16 09:01:17.977550 systemd[1]: Closed systemd-udevd-control.socket - udev Control Socket. Jan 16 09:01:17.978395 systemd[1]: systemd-udevd-kernel.socket: Deactivated successfully. Jan 16 09:01:17.978459 systemd[1]: Closed systemd-udevd-kernel.socket - udev Kernel Socket. Jan 16 09:01:17.979985 systemd[1]: dracut-pre-udev.service: Deactivated successfully. Jan 16 09:01:17.980105 systemd[1]: Stopped dracut-pre-udev.service - dracut pre-udev hook. Jan 16 09:01:17.981764 systemd[1]: dracut-cmdline.service: Deactivated successfully. Jan 16 09:01:17.981885 systemd[1]: Stopped dracut-cmdline.service - dracut cmdline hook. Jan 16 09:01:17.983397 systemd[1]: dracut-cmdline-ask.service: Deactivated successfully. Jan 16 09:01:17.983509 systemd[1]: Stopped dracut-cmdline-ask.service - dracut ask for additional cmdline parameters. Jan 16 09:01:17.999033 systemd[1]: Starting initrd-udevadm-cleanup-db.service - Cleanup udev Database... Jan 16 09:01:17.999696 systemd[1]: systemd-tmpfiles-setup-dev.service: Deactivated successfully. Jan 16 09:01:17.999821 systemd[1]: Stopped systemd-tmpfiles-setup-dev.service - Create Static Device Nodes in /dev. Jan 16 09:01:18.000509 systemd[1]: systemd-tmpfiles-setup-dev-early.service: Deactivated successfully. Jan 16 09:01:18.000595 systemd[1]: Stopped systemd-tmpfiles-setup-dev-early.service - Create Static Device Nodes in /dev gracefully. Jan 16 09:01:18.003663 systemd[1]: kmod-static-nodes.service: Deactivated successfully. Jan 16 09:01:18.003767 systemd[1]: Stopped kmod-static-nodes.service - Create List of Static Device Nodes. Jan 16 09:01:18.005434 systemd[1]: systemd-vconsole-setup.service: Deactivated successfully. Jan 16 09:01:18.005532 systemd[1]: Stopped systemd-vconsole-setup.service - Virtual Console Setup. Jan 16 09:01:18.011627 systemd[1]: initrd-udevadm-cleanup-db.service: Deactivated successfully. Jan 16 09:01:18.011755 systemd[1]: Finished initrd-udevadm-cleanup-db.service - Cleanup udev Database. Jan 16 09:01:18.013166 systemd[1]: Reached target initrd-switch-root.target - Switch Root. Jan 16 09:01:18.021362 systemd[1]: Starting initrd-switch-root.service - Switch Root... Jan 16 09:01:18.037230 systemd[1]: Switching root. Jan 16 09:01:18.074655 systemd-journald[183]: Journal stopped Jan 16 09:01:19.826478 systemd-journald[183]: Received SIGTERM from PID 1 (systemd). 
Jan 16 09:01:19.826610 kernel: SELinux: policy capability network_peer_controls=1 Jan 16 09:01:19.826643 kernel: SELinux: policy capability open_perms=1 Jan 16 09:01:19.826682 kernel: SELinux: policy capability extended_socket_class=1 Jan 16 09:01:19.826704 kernel: SELinux: policy capability always_check_network=0 Jan 16 09:01:19.826733 kernel: SELinux: policy capability cgroup_seclabel=1 Jan 16 09:01:19.826756 kernel: SELinux: policy capability nnp_nosuid_transition=1 Jan 16 09:01:19.826777 kernel: SELinux: policy capability genfs_seclabel_symlinks=0 Jan 16 09:01:19.826801 kernel: SELinux: policy capability ioctl_skip_cloexec=0 Jan 16 09:01:19.826820 kernel: audit: type=1403 audit(1737018078.463:2): auid=4294967295 ses=4294967295 lsm=selinux res=1 Jan 16 09:01:19.826843 systemd[1]: Successfully loaded SELinux policy in 58.914ms. Jan 16 09:01:19.826880 systemd[1]: Relabeled /dev, /dev/shm, /run, /sys/fs/cgroup in 27.597ms. Jan 16 09:01:19.826912 systemd[1]: systemd 255 running in system mode (+PAM +AUDIT +SELINUX -APPARMOR +IMA +SMACK +SECCOMP +GCRYPT -GNUTLS +OPENSSL -ACL +BLKID +CURL +ELFUTILS -FIDO2 +IDN2 -IDN +IPTC +KMOD +LIBCRYPTSETUP +LIBFDISK +PCRE2 -PWQUALITY -P11KIT -QRENCODE +TPM2 +BZIP2 +LZ4 +XZ +ZLIB +ZSTD -BPF_FRAMEWORK -XKBCOMMON +UTMP -SYSVINIT default-hierarchy=unified) Jan 16 09:01:19.826938 systemd[1]: Detected virtualization kvm. Jan 16 09:01:19.826980 systemd[1]: Detected architecture x86-64. Jan 16 09:01:19.827004 systemd[1]: Detected first boot. Jan 16 09:01:19.827028 systemd[1]: Hostname set to . Jan 16 09:01:19.827050 systemd[1]: Initializing machine ID from VM UUID. Jan 16 09:01:19.827074 zram_generator::config[1056]: No configuration found. Jan 16 09:01:19.827106 systemd[1]: Populated /etc with preset unit settings. Jan 16 09:01:19.827129 systemd[1]: Queued start job for default target multi-user.target. Jan 16 09:01:19.827149 systemd[1]: Unnecessary job was removed for dev-vda6.device - /dev/vda6. Jan 16 09:01:19.827174 systemd[1]: Created slice system-addon\x2dconfig.slice - Slice /system/addon-config. Jan 16 09:01:19.827199 systemd[1]: Created slice system-addon\x2drun.slice - Slice /system/addon-run. Jan 16 09:01:19.827225 systemd[1]: Created slice system-getty.slice - Slice /system/getty. Jan 16 09:01:19.829096 systemd[1]: Created slice system-modprobe.slice - Slice /system/modprobe. Jan 16 09:01:19.829128 systemd[1]: Created slice system-serial\x2dgetty.slice - Slice /system/serial-getty. Jan 16 09:01:19.829149 systemd[1]: Created slice system-system\x2dcloudinit.slice - Slice /system/system-cloudinit. Jan 16 09:01:19.829181 systemd[1]: Created slice system-systemd\x2dfsck.slice - Slice /system/systemd-fsck. Jan 16 09:01:19.829208 systemd[1]: Created slice user.slice - User and Session Slice. Jan 16 09:01:19.829228 systemd[1]: Started clevis-luks-askpass.path - Forward Password Requests to Clevis Directory Watch. Jan 16 09:01:19.829249 systemd[1]: Started systemd-ask-password-console.path - Dispatch Password Requests to Console Directory Watch. Jan 16 09:01:19.829268 systemd[1]: Started systemd-ask-password-wall.path - Forward Password Requests to Wall Directory Watch. Jan 16 09:01:19.829288 systemd[1]: Set up automount boot.automount - Boot partition Automount Point. Jan 16 09:01:19.829310 systemd[1]: Set up automount proc-sys-fs-binfmt_misc.automount - Arbitrary Executable File Formats File System Automount Point. Jan 16 09:01:19.829330 systemd[1]: Expecting device dev-disk-by\x2dlabel-OEM.device - /dev/disk/by-label/OEM... 
Jan 16 09:01:19.829351 systemd[1]: Expecting device dev-ttyS0.device - /dev/ttyS0... Jan 16 09:01:19.829376 systemd[1]: Reached target cryptsetup-pre.target - Local Encrypted Volumes (Pre). Jan 16 09:01:19.829397 systemd[1]: Reached target integritysetup.target - Local Integrity Protected Volumes. Jan 16 09:01:19.829425 systemd[1]: Reached target remote-cryptsetup.target - Remote Encrypted Volumes. Jan 16 09:01:19.829447 systemd[1]: Reached target remote-fs.target - Remote File Systems. Jan 16 09:01:19.829469 systemd[1]: Reached target slices.target - Slice Units. Jan 16 09:01:19.829491 systemd[1]: Reached target swap.target - Swaps. Jan 16 09:01:19.829518 systemd[1]: Reached target veritysetup.target - Local Verity Protected Volumes. Jan 16 09:01:19.829540 systemd[1]: Listening on systemd-coredump.socket - Process Core Dump Socket. Jan 16 09:01:19.829577 systemd[1]: Listening on systemd-journald-dev-log.socket - Journal Socket (/dev/log). Jan 16 09:01:19.829597 systemd[1]: Listening on systemd-journald.socket - Journal Socket. Jan 16 09:01:19.829617 systemd[1]: Listening on systemd-networkd.socket - Network Service Netlink Socket. Jan 16 09:01:19.829638 systemd[1]: Listening on systemd-udevd-control.socket - udev Control Socket. Jan 16 09:01:19.829658 systemd[1]: Listening on systemd-udevd-kernel.socket - udev Kernel Socket. Jan 16 09:01:19.829678 systemd[1]: Listening on systemd-userdbd.socket - User Database Manager Socket. Jan 16 09:01:19.829701 systemd[1]: Mounting dev-hugepages.mount - Huge Pages File System... Jan 16 09:01:19.829721 systemd[1]: Mounting dev-mqueue.mount - POSIX Message Queue File System... Jan 16 09:01:19.829748 systemd[1]: Mounting media.mount - External Media Directory... Jan 16 09:01:19.829768 systemd[1]: proc-xen.mount - /proc/xen was skipped because of an unmet condition check (ConditionVirtualization=xen). Jan 16 09:01:19.829788 systemd[1]: Mounting sys-kernel-debug.mount - Kernel Debug File System... Jan 16 09:01:19.829808 systemd[1]: Mounting sys-kernel-tracing.mount - Kernel Trace File System... Jan 16 09:01:19.829828 systemd[1]: Mounting tmp.mount - Temporary Directory /tmp... Jan 16 09:01:19.829848 systemd[1]: Starting flatcar-tmpfiles.service - Create missing system files... Jan 16 09:01:19.829886 systemd[1]: ignition-delete-config.service - Ignition (delete config) was skipped because no trigger condition checks were met. Jan 16 09:01:19.829908 systemd[1]: Starting kmod-static-nodes.service - Create List of Static Device Nodes... Jan 16 09:01:19.829935 systemd[1]: Starting modprobe@configfs.service - Load Kernel Module configfs... Jan 16 09:01:19.830614 systemd[1]: Starting modprobe@dm_mod.service - Load Kernel Module dm_mod... Jan 16 09:01:19.830657 systemd[1]: Starting modprobe@drm.service - Load Kernel Module drm... Jan 16 09:01:19.830678 systemd[1]: Starting modprobe@efi_pstore.service - Load Kernel Module efi_pstore... Jan 16 09:01:19.830699 systemd[1]: Starting modprobe@fuse.service - Load Kernel Module fuse... Jan 16 09:01:19.830718 systemd[1]: Starting modprobe@loop.service - Load Kernel Module loop... Jan 16 09:01:19.830741 systemd[1]: setup-nsswitch.service - Create /etc/nsswitch.conf was skipped because of an unmet condition check (ConditionPathExists=!/etc/nsswitch.conf). Jan 16 09:01:19.830762 systemd[1]: systemd-journald.service: unit configures an IP firewall, but the local system does not support BPF/cgroup firewalling. 
Jan 16 09:01:19.830786 systemd[1]: systemd-journald.service: (This warning is only shown for the first unit using IP firewalling.) Jan 16 09:01:19.830814 systemd[1]: Starting systemd-journald.service - Journal Service... Jan 16 09:01:19.830833 systemd[1]: Starting systemd-modules-load.service - Load Kernel Modules... Jan 16 09:01:19.830853 systemd[1]: Starting systemd-network-generator.service - Generate network units from Kernel command line... Jan 16 09:01:19.830875 kernel: loop: module loaded Jan 16 09:01:19.830896 systemd[1]: Starting systemd-remount-fs.service - Remount Root and Kernel File Systems... Jan 16 09:01:19.830916 systemd[1]: Starting systemd-udev-trigger.service - Coldplug All udev Devices... Jan 16 09:01:19.830938 systemd[1]: xenserver-pv-version.service - Set fake PV driver version for XenServer was skipped because of an unmet condition check (ConditionVirtualization=xen). Jan 16 09:01:19.830985 systemd[1]: Mounted dev-hugepages.mount - Huge Pages File System. Jan 16 09:01:19.831013 systemd[1]: Mounted dev-mqueue.mount - POSIX Message Queue File System. Jan 16 09:01:19.831033 systemd[1]: Mounted media.mount - External Media Directory. Jan 16 09:01:19.831054 systemd[1]: Mounted sys-kernel-debug.mount - Kernel Debug File System. Jan 16 09:01:19.831074 systemd[1]: Mounted sys-kernel-tracing.mount - Kernel Trace File System. Jan 16 09:01:19.831094 systemd[1]: Mounted tmp.mount - Temporary Directory /tmp. Jan 16 09:01:19.831116 systemd[1]: Finished kmod-static-nodes.service - Create List of Static Device Nodes. Jan 16 09:01:19.831135 systemd[1]: modprobe@configfs.service: Deactivated successfully. Jan 16 09:01:19.831155 systemd[1]: Finished modprobe@configfs.service - Load Kernel Module configfs. Jan 16 09:01:19.831176 systemd[1]: modprobe@dm_mod.service: Deactivated successfully. Jan 16 09:01:19.831202 systemd[1]: Finished modprobe@dm_mod.service - Load Kernel Module dm_mod. Jan 16 09:01:19.831223 systemd[1]: modprobe@efi_pstore.service: Deactivated successfully. Jan 16 09:01:19.831244 systemd[1]: Finished modprobe@efi_pstore.service - Load Kernel Module efi_pstore. Jan 16 09:01:19.831264 systemd[1]: modprobe@loop.service: Deactivated successfully. Jan 16 09:01:19.831343 systemd-journald[1142]: Collecting audit messages is disabled. Jan 16 09:01:19.831388 systemd[1]: Finished modprobe@loop.service - Load Kernel Module loop. Jan 16 09:01:19.831411 systemd[1]: Finished systemd-network-generator.service - Generate network units from Kernel command line. Jan 16 09:01:19.831436 systemd[1]: Reached target network-pre.target - Preparation for Network. Jan 16 09:01:19.831463 systemd-journald[1142]: Journal started Jan 16 09:01:19.831503 systemd-journald[1142]: Runtime Journal (/run/log/journal/8e366d5a9aaa4ea19b8c563cc417a967) is 4.9M, max 39.3M, 34.4M free. Jan 16 09:01:19.849010 systemd[1]: Mounting sys-kernel-config.mount - Kernel Configuration File System... Jan 16 09:01:19.849140 systemd[1]: systemd-repart.service - Repartition Root Disk was skipped because no trigger condition checks were met. Jan 16 09:01:19.875018 systemd[1]: Starting systemd-tmpfiles-setup-dev-early.service - Create Static Device Nodes in /dev gracefully... Jan 16 09:01:19.884987 systemd[1]: Started systemd-journald.service - Journal Service. Jan 16 09:01:19.885102 kernel: fuse: init (API version 7.39) Jan 16 09:01:19.886255 systemd[1]: Finished systemd-modules-load.service - Load Kernel Modules. Jan 16 09:01:19.888583 systemd[1]: Finished systemd-remount-fs.service - Remount Root and Kernel File Systems. 
Jan 16 09:01:19.916209 systemd[1]: Mounted sys-kernel-config.mount - Kernel Configuration File System. Jan 16 09:01:19.924015 kernel: ACPI: bus type drm_connector registered Jan 16 09:01:19.922310 systemd[1]: Finished flatcar-tmpfiles.service - Create missing system files. Jan 16 09:01:19.929931 systemd[1]: modprobe@fuse.service: Deactivated successfully. Jan 16 09:01:19.930259 systemd[1]: Finished modprobe@fuse.service - Load Kernel Module fuse. Jan 16 09:01:19.931379 systemd[1]: modprobe@drm.service: Deactivated successfully. Jan 16 09:01:19.931589 systemd[1]: Finished modprobe@drm.service - Load Kernel Module drm. Jan 16 09:01:19.967851 systemd[1]: Mounting sys-fs-fuse-connections.mount - FUSE Control File System... Jan 16 09:01:19.969158 systemd[1]: remount-root.service - Remount Root File System was skipped because of an unmet condition check (ConditionPathIsReadWrite=!/). Jan 16 09:01:19.985456 systemd[1]: Starting systemd-hwdb-update.service - Rebuild Hardware Database... Jan 16 09:01:20.001337 systemd[1]: Starting systemd-journal-flush.service - Flush Journal to Persistent Storage... Jan 16 09:01:20.002512 systemd[1]: systemd-pstore.service - Platform Persistent Storage Archival was skipped because of an unmet condition check (ConditionDirectoryNotEmpty=/sys/fs/pstore). Jan 16 09:01:20.008333 systemd-tmpfiles[1164]: ACLs are not supported, ignoring. Jan 16 09:01:20.008364 systemd-tmpfiles[1164]: ACLs are not supported, ignoring. Jan 16 09:01:20.013318 systemd[1]: Starting systemd-random-seed.service - Load/Save OS Random Seed... Jan 16 09:01:20.036141 systemd-journald[1142]: Time spent on flushing to /var/log/journal/8e366d5a9aaa4ea19b8c563cc417a967 is 46.488ms for 959 entries. Jan 16 09:01:20.036141 systemd-journald[1142]: System Journal (/var/log/journal/8e366d5a9aaa4ea19b8c563cc417a967) is 8.0M, max 195.6M, 187.6M free. Jan 16 09:01:20.119279 systemd-journald[1142]: Received client request to flush runtime journal. Jan 16 09:01:20.031300 systemd[1]: Starting systemd-sysctl.service - Apply Kernel Variables... Jan 16 09:01:20.043659 systemd[1]: Finished systemd-tmpfiles-setup-dev-early.service - Create Static Device Nodes in /dev gracefully. Jan 16 09:01:20.044709 systemd[1]: Mounted sys-fs-fuse-connections.mount - FUSE Control File System. Jan 16 09:01:20.058235 systemd[1]: Starting systemd-sysusers.service - Create System Users... Jan 16 09:01:20.061612 systemd[1]: Finished systemd-random-seed.service - Load/Save OS Random Seed. Jan 16 09:01:20.065783 systemd[1]: Reached target first-boot-complete.target - First Boot Complete. Jan 16 09:01:20.127103 systemd[1]: Finished systemd-journal-flush.service - Flush Journal to Persistent Storage. Jan 16 09:01:20.159619 systemd[1]: Finished systemd-sysctl.service - Apply Kernel Variables. Jan 16 09:01:20.207746 systemd[1]: Finished systemd-sysusers.service - Create System Users. Jan 16 09:01:20.214626 systemd[1]: Finished systemd-udev-trigger.service - Coldplug All udev Devices. Jan 16 09:01:20.227397 systemd[1]: Starting systemd-tmpfiles-setup-dev.service - Create Static Device Nodes in /dev... Jan 16 09:01:20.238244 systemd[1]: Starting systemd-udev-settle.service - Wait for udev To Complete Device Initialization... Jan 16 09:01:20.288217 udevadm[1218]: systemd-udev-settle.service is deprecated. Please fix lvm2-activation-early.service, lvm2-activation.service not to pull it in. Jan 16 09:01:20.295207 systemd-tmpfiles[1217]: ACLs are not supported, ignoring. 
Jan 16 09:01:20.295255 systemd-tmpfiles[1217]: ACLs are not supported, ignoring. Jan 16 09:01:20.305205 systemd[1]: Finished systemd-tmpfiles-setup-dev.service - Create Static Device Nodes in /dev. Jan 16 09:01:21.238116 systemd[1]: Finished systemd-hwdb-update.service - Rebuild Hardware Database. Jan 16 09:01:21.247421 systemd[1]: Starting systemd-udevd.service - Rule-based Manager for Device Events and Files... Jan 16 09:01:21.317629 systemd-udevd[1224]: Using default interface naming scheme 'v255'. Jan 16 09:01:21.368245 systemd[1]: Started systemd-udevd.service - Rule-based Manager for Device Events and Files. Jan 16 09:01:21.375234 systemd[1]: Starting systemd-networkd.service - Network Configuration... Jan 16 09:01:21.412677 systemd[1]: Starting systemd-userdbd.service - User Database Manager... Jan 16 09:01:21.537490 systemd[1]: Started systemd-userdbd.service - User Database Manager. Jan 16 09:01:21.612858 systemd[1]: Found device dev-ttyS0.device - /dev/ttyS0. Jan 16 09:01:21.644728 systemd-networkd[1228]: lo: Link UP Jan 16 09:01:21.645917 systemd-networkd[1228]: lo: Gained carrier Jan 16 09:01:21.648246 systemd[1]: proc-xen.mount - /proc/xen was skipped because of an unmet condition check (ConditionVirtualization=xen). Jan 16 09:01:21.648634 systemd[1]: ignition-delete-config.service - Ignition (delete config) was skipped because no trigger condition checks were met. Jan 16 09:01:21.650052 systemd-networkd[1228]: Enumeration completed Jan 16 09:01:21.650678 systemd-networkd[1228]: eth0: Configuring with /run/systemd/network/10-76:04:e3:5f:b6:89.network. Jan 16 09:01:21.651658 systemd-networkd[1228]: eth1: Configuring with /run/systemd/network/10-0e:04:f7:95:05:3f.network. Jan 16 09:01:21.653266 systemd-networkd[1228]: eth0: Link UP Jan 16 09:01:21.653368 systemd-networkd[1228]: eth0: Gained carrier Jan 16 09:01:21.657505 systemd[1]: Starting modprobe@dm_mod.service - Load Kernel Module dm_mod... Jan 16 09:01:21.659195 systemd-networkd[1228]: eth1: Link UP Jan 16 09:01:21.659208 systemd-networkd[1228]: eth1: Gained carrier Jan 16 09:01:21.670322 systemd[1]: Starting modprobe@efi_pstore.service - Load Kernel Module efi_pstore... Jan 16 09:01:21.677359 kernel: BTRFS warning: duplicate device /dev/vda3 devid 1 generation 36 scanned by (udev-worker) (1241) Jan 16 09:01:21.689343 systemd[1]: Starting modprobe@loop.service - Load Kernel Module loop... Jan 16 09:01:21.694003 systemd[1]: remount-root.service - Remount Root File System was skipped because of an unmet condition check (ConditionPathIsReadWrite=!/). Jan 16 09:01:21.694111 systemd[1]: setup-nsswitch.service - Create /etc/nsswitch.conf was skipped because of an unmet condition check (ConditionPathExists=!/etc/nsswitch.conf). Jan 16 09:01:21.694190 systemd[1]: xenserver-pv-version.service - Set fake PV driver version for XenServer was skipped because of an unmet condition check (ConditionVirtualization=xen). Jan 16 09:01:21.694427 systemd[1]: Started systemd-networkd.service - Network Configuration. Jan 16 09:01:21.737761 systemd[1]: Starting systemd-networkd-wait-online.service - Wait for Network to be Configured... Jan 16 09:01:21.742671 systemd[1]: modprobe@efi_pstore.service: Deactivated successfully. Jan 16 09:01:21.753240 systemd[1]: Finished modprobe@efi_pstore.service - Load Kernel Module efi_pstore. Jan 16 09:01:21.754631 systemd[1]: modprobe@loop.service: Deactivated successfully. Jan 16 09:01:21.754943 systemd[1]: Finished modprobe@loop.service - Load Kernel Module loop. 
Jan 16 09:01:21.797007 systemd[1]: modprobe@dm_mod.service: Deactivated successfully. Jan 16 09:01:21.798165 systemd[1]: Finished modprobe@dm_mod.service - Load Kernel Module dm_mod. Jan 16 09:01:21.844423 systemd[1]: systemd-pstore.service - Platform Persistent Storage Archival was skipped because of an unmet condition check (ConditionDirectoryNotEmpty=/sys/fs/pstore). Jan 16 09:01:21.844615 systemd[1]: systemd-repart.service - Repartition Root Disk was skipped because no trigger condition checks were met. Jan 16 09:01:21.865012 kernel: input: Power Button as /devices/LNXSYSTM:00/LNXPWRBN:00/input/input3 Jan 16 09:01:21.887002 kernel: ACPI: button: Power Button [PWRF] Jan 16 09:01:21.887167 kernel: piix4_smbus 0000:00:01.3: SMBus Host Controller at 0x700, revision 0 Jan 16 09:01:21.892013 systemd[1]: Found device dev-disk-by\x2dlabel-OEM.device - /dev/disk/by-label/OEM. Jan 16 09:01:21.934457 kernel: input: ImExPS/2 Generic Explorer Mouse as /devices/platform/i8042/serio1/input/input4 Jan 16 09:01:21.995007 kernel: [drm] pci: virtio-vga detected at 0000:00:02.0 Jan 16 09:01:21.999995 kernel: virtio-pci 0000:00:02.0: vgaarb: deactivate vga console Jan 16 09:01:22.006004 kernel: Console: switching to colour dummy device 80x25 Jan 16 09:01:22.008166 kernel: [drm] features: -virgl +edid -resource_blob -host_visible Jan 16 09:01:22.008308 kernel: [drm] features: -context_init Jan 16 09:01:22.013998 kernel: [drm] number of scanouts: 1 Jan 16 09:01:22.014099 kernel: [drm] number of cap sets: 0 Jan 16 09:01:22.015990 kernel: mousedev: PS/2 mouse device common for all mice Jan 16 09:01:22.021002 kernel: [drm] Initialized virtio_gpu 0.1.0 0 for 0000:00:02.0 on minor 0 Jan 16 09:01:22.041988 kernel: fbcon: virtio_gpudrmfb (fb0) is primary device Jan 16 09:01:22.049014 kernel: Console: switching to colour frame buffer device 128x48 Jan 16 09:01:22.055163 systemd[1]: Starting systemd-vconsole-setup.service - Virtual Console Setup... Jan 16 09:01:22.061000 kernel: virtio-pci 0000:00:02.0: [drm] fb0: virtio_gpudrmfb frame buffer device Jan 16 09:01:22.076623 systemd[1]: systemd-vconsole-setup.service: Deactivated successfully. Jan 16 09:01:22.077074 systemd[1]: Stopped systemd-vconsole-setup.service - Virtual Console Setup. Jan 16 09:01:22.098531 systemd[1]: Starting systemd-vconsole-setup.service - Virtual Console Setup... Jan 16 09:01:22.136104 systemd[1]: systemd-vconsole-setup.service: Deactivated successfully. Jan 16 09:01:22.136535 systemd[1]: Stopped systemd-vconsole-setup.service - Virtual Console Setup. Jan 16 09:01:22.207533 systemd[1]: Starting systemd-vconsole-setup.service - Virtual Console Setup... Jan 16 09:01:22.301472 kernel: EDAC MC: Ver: 3.0.0 Jan 16 09:01:22.321702 systemd[1]: Finished systemd-vconsole-setup.service - Virtual Console Setup. Jan 16 09:01:22.328460 systemd[1]: Finished systemd-udev-settle.service - Wait for udev To Complete Device Initialization. Jan 16 09:01:22.340585 systemd[1]: Starting lvm2-activation-early.service - Activation of LVM2 logical volumes... Jan 16 09:01:22.376514 lvm[1289]: WARNING: Failed to connect to lvmetad. Falling back to device scanning. Jan 16 09:01:22.417687 systemd[1]: Finished lvm2-activation-early.service - Activation of LVM2 logical volumes. Jan 16 09:01:22.427015 systemd[1]: Reached target cryptsetup.target - Local Encrypted Volumes. Jan 16 09:01:22.441264 systemd[1]: Starting lvm2-activation.service - Activation of LVM2 logical volumes... Jan 16 09:01:22.458317 lvm[1292]: WARNING: Failed to connect to lvmetad. 
Falling back to device scanning. Jan 16 09:01:22.507443 systemd[1]: Finished lvm2-activation.service - Activation of LVM2 logical volumes. Jan 16 09:01:22.508793 systemd[1]: Reached target local-fs-pre.target - Preparation for Local File Systems. Jan 16 09:01:22.524276 systemd[1]: Mounting media-configdrive.mount - /media/configdrive... Jan 16 09:01:22.524793 systemd[1]: var-lib-machines.mount - Virtual Machine and Container Storage (Compatibility) was skipped because of an unmet condition check (ConditionPathExists=/var/lib/machines.raw). Jan 16 09:01:22.524856 systemd[1]: Reached target machines.target - Containers. Jan 16 09:01:22.531410 systemd[1]: Starting systemd-fsck@dev-disk-by\x2dlabel-OEM.service - File System Check on /dev/disk/by-label/OEM... Jan 16 09:01:22.563993 kernel: ISO 9660 Extensions: RRIP_1991A Jan 16 09:01:22.567945 systemd[1]: Mounted media-configdrive.mount - /media/configdrive. Jan 16 09:01:22.570587 systemd[1]: Finished systemd-fsck@dev-disk-by\x2dlabel-OEM.service - File System Check on /dev/disk/by-label/OEM. Jan 16 09:01:22.579550 systemd[1]: Reached target local-fs.target - Local File Systems. Jan 16 09:01:22.586631 systemd[1]: Listening on systemd-sysext.socket - System Extension Image Management (Varlink). Jan 16 09:01:22.638731 systemd[1]: Starting dracut-shutdown.service - Restore /run/initramfs on shutdown... Jan 16 09:01:22.657529 systemd[1]: Starting ldconfig.service - Rebuild Dynamic Linker Cache... Jan 16 09:01:22.676560 systemd[1]: systemd-binfmt.service - Set Up Additional Binary Formats was skipped because no trigger condition checks were met. Jan 16 09:01:22.689892 systemd[1]: Starting systemd-machine-id-commit.service - Commit a transient machine-id on disk... Jan 16 09:01:22.701191 systemd[1]: Starting systemd-sysext.service - Merge System Extension Images into /usr/ and /opt/... Jan 16 09:01:22.714280 systemd[1]: Finished dracut-shutdown.service - Restore /run/initramfs on shutdown. Jan 16 09:01:22.750352 kernel: loop0: detected capacity change from 0 to 211296 Jan 16 09:01:22.804496 systemd[1]: etc-machine\x2did.mount: Deactivated successfully. Jan 16 09:01:22.805934 systemd[1]: Finished systemd-machine-id-commit.service - Commit a transient machine-id on disk. Jan 16 09:01:22.838329 kernel: squashfs: version 4.0 (2009/01/31) Phillip Lougher Jan 16 09:01:22.883606 kernel: loop1: detected capacity change from 0 to 140992 Jan 16 09:01:22.900435 systemd-networkd[1228]: eth1: Gained IPv6LL Jan 16 09:01:22.909325 systemd[1]: Finished systemd-networkd-wait-online.service - Wait for Network to be Configured. Jan 16 09:01:22.970616 kernel: loop2: detected capacity change from 0 to 138184 Jan 16 09:01:23.032921 kernel: loop3: detected capacity change from 0 to 8 Jan 16 09:01:23.079400 kernel: loop4: detected capacity change from 0 to 211296 Jan 16 09:01:23.109305 kernel: loop5: detected capacity change from 0 to 140992 Jan 16 09:01:23.137071 kernel: loop6: detected capacity change from 0 to 138184 Jan 16 09:01:23.158444 systemd-networkd[1228]: eth0: Gained IPv6LL Jan 16 09:01:23.174141 kernel: loop7: detected capacity change from 0 to 8 Jan 16 09:01:23.174846 (sd-merge)[1319]: Using extensions 'containerd-flatcar', 'docker-flatcar', 'kubernetes', 'oem-digitalocean'. Jan 16 09:01:23.178067 (sd-merge)[1319]: Merged extensions into '/usr'. Jan 16 09:01:23.188313 systemd[1]: Reloading requested from client PID 1308 ('systemd-sysext') (unit systemd-sysext.service)... Jan 16 09:01:23.188342 systemd[1]: Reloading... 
Jan 16 09:01:23.355717 zram_generator::config[1346]: No configuration found. Jan 16 09:01:23.625437 systemd[1]: /usr/lib/systemd/system/docker.socket:6: ListenStream= references a path below legacy directory /var/run/, updating /var/run/docker.sock → /run/docker.sock; please update the unit file accordingly. Jan 16 09:01:23.696676 ldconfig[1305]: /sbin/ldconfig: /usr/lib/ld.so.conf is not an ELF file - it has the wrong magic bytes at the start. Jan 16 09:01:23.734833 systemd[1]: Reloading finished in 545 ms. Jan 16 09:01:23.760425 systemd[1]: Finished ldconfig.service - Rebuild Dynamic Linker Cache. Jan 16 09:01:23.763695 systemd[1]: Finished systemd-sysext.service - Merge System Extension Images into /usr/ and /opt/. Jan 16 09:01:23.776379 systemd[1]: Starting ensure-sysext.service... Jan 16 09:01:23.788357 systemd[1]: Starting systemd-tmpfiles-setup.service - Create System Files and Directories... Jan 16 09:01:23.809346 systemd[1]: Reloading requested from client PID 1397 ('systemctl') (unit ensure-sysext.service)... Jan 16 09:01:23.809398 systemd[1]: Reloading... Jan 16 09:01:23.856779 systemd-tmpfiles[1398]: /usr/lib/tmpfiles.d/provision.conf:20: Duplicate line for path "/root", ignoring. Jan 16 09:01:23.858665 systemd-tmpfiles[1398]: /usr/lib/tmpfiles.d/systemd-flatcar.conf:6: Duplicate line for path "/var/log/journal", ignoring. Jan 16 09:01:23.860775 systemd-tmpfiles[1398]: /usr/lib/tmpfiles.d/systemd.conf:29: Duplicate line for path "/var/lib/systemd", ignoring. Jan 16 09:01:23.861421 systemd-tmpfiles[1398]: ACLs are not supported, ignoring. Jan 16 09:01:23.861563 systemd-tmpfiles[1398]: ACLs are not supported, ignoring. Jan 16 09:01:23.870348 systemd-tmpfiles[1398]: Detected autofs mount point /boot during canonicalization of boot. Jan 16 09:01:23.870654 systemd-tmpfiles[1398]: Skipping /boot Jan 16 09:01:23.893521 systemd-tmpfiles[1398]: Detected autofs mount point /boot during canonicalization of boot. Jan 16 09:01:23.893840 systemd-tmpfiles[1398]: Skipping /boot Jan 16 09:01:23.958194 zram_generator::config[1425]: No configuration found. Jan 16 09:01:24.215263 systemd[1]: /usr/lib/systemd/system/docker.socket:6: ListenStream= references a path below legacy directory /var/run/, updating /var/run/docker.sock → /run/docker.sock; please update the unit file accordingly. Jan 16 09:01:24.338313 systemd[1]: Reloading finished in 527 ms. Jan 16 09:01:24.362539 systemd[1]: Finished systemd-tmpfiles-setup.service - Create System Files and Directories. Jan 16 09:01:24.385415 systemd[1]: Starting audit-rules.service - Load Audit Rules... Jan 16 09:01:24.400346 systemd[1]: Starting clean-ca-certificates.service - Clean up broken links in /etc/ssl/certs... Jan 16 09:01:24.415394 systemd[1]: Starting systemd-journal-catalog-update.service - Rebuild Journal Catalog... Jan 16 09:01:24.431442 systemd[1]: Starting systemd-resolved.service - Network Name Resolution... Jan 16 09:01:24.439111 systemd[1]: Starting systemd-update-utmp.service - Record System Boot/Shutdown in UTMP... Jan 16 09:01:24.459582 systemd[1]: proc-xen.mount - /proc/xen was skipped because of an unmet condition check (ConditionVirtualization=xen). Jan 16 09:01:24.459884 systemd[1]: ignition-delete-config.service - Ignition (delete config) was skipped because no trigger condition checks were met. Jan 16 09:01:24.476506 systemd[1]: Starting modprobe@dm_mod.service - Load Kernel Module dm_mod... Jan 16 09:01:24.494599 systemd[1]: Starting modprobe@efi_pstore.service - Load Kernel Module efi_pstore... 
Jan 16 09:01:24.517588 systemd[1]: Starting modprobe@loop.service - Load Kernel Module loop... Jan 16 09:01:24.519143 systemd[1]: systemd-binfmt.service - Set Up Additional Binary Formats was skipped because no trigger condition checks were met. Jan 16 09:01:24.519384 systemd[1]: xenserver-pv-version.service - Set fake PV driver version for XenServer was skipped because of an unmet condition check (ConditionVirtualization=xen). Jan 16 09:01:24.538985 systemd[1]: modprobe@dm_mod.service: Deactivated successfully. Jan 16 09:01:24.539710 systemd[1]: Finished modprobe@dm_mod.service - Load Kernel Module dm_mod. Jan 16 09:01:24.558569 systemd[1]: Finished clean-ca-certificates.service - Clean up broken links in /etc/ssl/certs. Jan 16 09:01:24.568782 systemd[1]: modprobe@loop.service: Deactivated successfully. Jan 16 09:01:24.569185 systemd[1]: Finished modprobe@loop.service - Load Kernel Module loop. Jan 16 09:01:24.577147 systemd[1]: modprobe@efi_pstore.service: Deactivated successfully. Jan 16 09:01:24.579318 systemd[1]: Finished modprobe@efi_pstore.service - Load Kernel Module efi_pstore. Jan 16 09:01:24.599763 systemd[1]: proc-xen.mount - /proc/xen was skipped because of an unmet condition check (ConditionVirtualization=xen). Jan 16 09:01:24.602154 systemd[1]: ignition-delete-config.service - Ignition (delete config) was skipped because no trigger condition checks were met. Jan 16 09:01:24.614779 systemd[1]: Starting modprobe@dm_mod.service - Load Kernel Module dm_mod... Jan 16 09:01:24.631457 systemd[1]: Starting modprobe@efi_pstore.service - Load Kernel Module efi_pstore... Jan 16 09:01:24.656674 systemd[1]: Starting modprobe@loop.service - Load Kernel Module loop... Jan 16 09:01:24.658932 systemd[1]: systemd-binfmt.service - Set Up Additional Binary Formats was skipped because no trigger condition checks were met. Jan 16 09:01:24.659297 systemd[1]: update-ca-certificates.service - Update CA bundle at /etc/ssl/certs/ca-certificates.crt was skipped because of an unmet condition check (ConditionPathIsSymbolicLink=!/etc/ssl/certs/ca-certificates.crt). Jan 16 09:01:24.659447 systemd[1]: xenserver-pv-version.service - Set fake PV driver version for XenServer was skipped because of an unmet condition check (ConditionVirtualization=xen). Jan 16 09:01:24.668347 systemd[1]: Finished systemd-journal-catalog-update.service - Rebuild Journal Catalog. Jan 16 09:01:24.675081 systemd[1]: Finished systemd-update-utmp.service - Record System Boot/Shutdown in UTMP. Jan 16 09:01:24.702277 systemd[1]: modprobe@dm_mod.service: Deactivated successfully. Jan 16 09:01:24.702598 systemd[1]: Finished modprobe@dm_mod.service - Load Kernel Module dm_mod. Jan 16 09:01:24.723709 systemd[1]: modprobe@efi_pstore.service: Deactivated successfully. Jan 16 09:01:24.727272 systemd[1]: Finished modprobe@efi_pstore.service - Load Kernel Module efi_pstore. Jan 16 09:01:24.731776 systemd[1]: modprobe@loop.service: Deactivated successfully. Jan 16 09:01:24.734343 systemd[1]: Finished modprobe@loop.service - Load Kernel Module loop. Jan 16 09:01:24.735133 augenrules[1521]: No rules Jan 16 09:01:24.744839 systemd[1]: audit-rules.service: Deactivated successfully. Jan 16 09:01:24.745547 systemd[1]: Finished audit-rules.service - Load Audit Rules. Jan 16 09:01:24.759693 systemd[1]: proc-xen.mount - /proc/xen was skipped because of an unmet condition check (ConditionVirtualization=xen). 
Jan 16 09:01:24.760745 systemd[1]: ignition-delete-config.service - Ignition (delete config) was skipped because no trigger condition checks were met. Jan 16 09:01:24.770560 systemd[1]: Starting modprobe@dm_mod.service - Load Kernel Module dm_mod... Jan 16 09:01:24.778595 systemd-resolved[1480]: Positive Trust Anchors: Jan 16 09:01:24.779231 systemd-resolved[1480]: . IN DS 20326 8 2 e06d44b80b8f1d39a95c0b0d7c65d08458e880409bbc683457104237c7f8ec8d Jan 16 09:01:24.779362 systemd-resolved[1480]: Negative trust anchors: home.arpa 10.in-addr.arpa 16.172.in-addr.arpa 17.172.in-addr.arpa 18.172.in-addr.arpa 19.172.in-addr.arpa 20.172.in-addr.arpa 21.172.in-addr.arpa 22.172.in-addr.arpa 23.172.in-addr.arpa 24.172.in-addr.arpa 25.172.in-addr.arpa 26.172.in-addr.arpa 27.172.in-addr.arpa 28.172.in-addr.arpa 29.172.in-addr.arpa 30.172.in-addr.arpa 31.172.in-addr.arpa 170.0.0.192.in-addr.arpa 171.0.0.192.in-addr.arpa 168.192.in-addr.arpa d.f.ip6.arpa ipv4only.arpa resolver.arpa corp home internal intranet lan local private test Jan 16 09:01:24.789411 systemd[1]: Starting modprobe@drm.service - Load Kernel Module drm... Jan 16 09:01:24.793521 systemd[1]: systemd-binfmt.service - Set Up Additional Binary Formats was skipped because no trigger condition checks were met. Jan 16 09:01:24.793639 systemd[1]: systemd-pstore.service - Platform Persistent Storage Archival was skipped because of an unmet condition check (ConditionDirectoryNotEmpty=/sys/fs/pstore). Jan 16 09:01:24.795336 systemd-resolved[1480]: Using system hostname 'ci-4152.2.0-e-4ce9573906'. Jan 16 09:01:24.807360 systemd[1]: Starting systemd-update-done.service - Update is Completed... Jan 16 09:01:24.808385 systemd[1]: update-ca-certificates.service - Update CA bundle at /etc/ssl/certs/ca-certificates.crt was skipped because of an unmet condition check (ConditionPathIsSymbolicLink=!/etc/ssl/certs/ca-certificates.crt). Jan 16 09:01:24.808458 systemd[1]: xenserver-pv-version.service - Set fake PV driver version for XenServer was skipped because of an unmet condition check (ConditionVirtualization=xen). Jan 16 09:01:24.816374 systemd[1]: Started systemd-resolved.service - Network Name Resolution. Jan 16 09:01:24.819486 systemd[1]: Finished ensure-sysext.service. Jan 16 09:01:24.837214 systemd[1]: modprobe@dm_mod.service: Deactivated successfully. Jan 16 09:01:24.837486 systemd[1]: Finished modprobe@dm_mod.service - Load Kernel Module dm_mod. Jan 16 09:01:24.838817 systemd[1]: modprobe@drm.service: Deactivated successfully. Jan 16 09:01:24.844480 systemd[1]: Finished modprobe@drm.service - Load Kernel Module drm. Jan 16 09:01:24.862934 systemd[1]: Reached target network.target - Network. Jan 16 09:01:24.867554 systemd[1]: Reached target network-online.target - Network is Online. Jan 16 09:01:24.872349 systemd[1]: Reached target nss-lookup.target - Host and Network Name Lookups. Jan 16 09:01:24.873444 systemd[1]: systemd-repart.service - Repartition Root Disk was skipped because no trigger condition checks were met. Jan 16 09:01:24.888312 systemd[1]: Starting systemd-timesyncd.service - Network Time Synchronization... Jan 16 09:01:24.894008 systemd[1]: Finished systemd-update-done.service - Update is Completed. Jan 16 09:01:24.991286 systemd[1]: Started systemd-timesyncd.service - Network Time Synchronization. Jan 16 09:01:24.992648 systemd[1]: Reached target sysinit.target - System Initialization. Jan 16 09:01:24.994625 systemd[1]: Started motdgen.path - Watch for update engine configuration changes. 
Jan 16 09:01:24.995502 systemd[1]: Started user-cloudinit@var-lib-flatcar\x2dinstall-user_data.path - Watch for a cloud-config at /var/lib/flatcar-install/user_data. Jan 16 09:01:25.000121 systemd[1]: Started systemd-tmpfiles-clean.timer - Daily Cleanup of Temporary Directories. Jan 16 09:01:25.000937 systemd[1]: update-engine-stub.timer - Update Engine Stub Timer was skipped because of an unmet condition check (ConditionPathExists=/usr/.noupdate). Jan 16 09:01:25.002136 systemd[1]: Reached target paths.target - Path Units. Jan 16 09:01:25.005831 systemd[1]: Reached target time-set.target - System Time Set. Jan 16 09:01:25.009779 systemd[1]: Started logrotate.timer - Daily rotation of log files. Jan 16 09:01:25.012208 systemd[1]: Started mdadm.timer - Weekly check for MD array's redundancy information.. Jan 16 09:01:25.014999 systemd[1]: Reached target timers.target - Timer Units. Jan 16 09:01:25.018736 systemd[1]: Listening on dbus.socket - D-Bus System Message Bus Socket. Jan 16 09:01:25.024668 systemd[1]: Starting docker.socket - Docker Socket for the API... Jan 16 09:01:25.033048 systemd[1]: Listening on sshd.socket - OpenSSH Server Socket. Jan 16 09:01:25.037692 systemd[1]: Listening on docker.socket - Docker Socket for the API. Jan 16 09:01:25.042193 systemd[1]: Reached target sockets.target - Socket Units. Jan 16 09:01:25.045333 systemd[1]: Reached target basic.target - Basic System. Jan 16 09:01:25.046860 systemd[1]: System is tainted: cgroupsv1 Jan 16 09:01:25.048077 systemd[1]: addon-config@oem.service - Configure Addon /oem was skipped because no trigger condition checks were met. Jan 16 09:01:25.048193 systemd[1]: addon-run@oem.service - Run Addon /oem was skipped because no trigger condition checks were met. Jan 16 09:01:25.060209 systemd[1]: Starting containerd.service - containerd container runtime... Jan 16 09:01:25.074423 systemd[1]: Starting coreos-metadata.service - Flatcar Metadata Agent... Jan 16 09:01:25.080324 systemd-timesyncd[1544]: Contacted time server 208.67.75.242:123 (0.flatcar.pool.ntp.org). Jan 16 09:01:25.080443 systemd-timesyncd[1544]: Initial clock synchronization to Thu 2025-01-16 09:01:25.234688 UTC. Jan 16 09:01:25.085670 systemd[1]: Starting dbus.service - D-Bus System Message Bus... Jan 16 09:01:25.102372 systemd[1]: Starting enable-oem-cloudinit.service - Enable cloudinit... Jan 16 09:01:25.120797 systemd[1]: Starting extend-filesystems.service - Extend Filesystems... Jan 16 09:01:25.121654 systemd[1]: flatcar-setup-environment.service - Modifies /etc/environment for CoreOS was skipped because of an unmet condition check (ConditionPathExists=/oem/bin/flatcar-setup-environment). Jan 16 09:01:25.128549 jq[1554]: false Jan 16 09:01:25.143933 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent... Jan 16 09:01:25.158471 systemd[1]: Starting motdgen.service - Generate /run/flatcar/motd... Jan 16 09:01:25.178327 systemd[1]: Starting nvidia.service - NVIDIA Configure Service... Jan 16 09:01:25.198393 dbus-daemon[1553]: [system] SELinux support is enabled Jan 16 09:01:25.210299 systemd[1]: Starting ssh-key-proc-cmdline.service - Install an ssh key from /proc/cmdline... Jan 16 09:01:25.233151 coreos-metadata[1551]: Jan 16 09:01:25.225 INFO Fetching http://169.254.169.254/metadata/v1.json: Attempt #1 Jan 16 09:01:25.231323 systemd[1]: Starting sshd-keygen.service - Generate sshd host keys... 
Jan 16 09:01:25.238571 extend-filesystems[1555]: Found loop4 Jan 16 09:01:25.238571 extend-filesystems[1555]: Found loop5 Jan 16 09:01:25.238571 extend-filesystems[1555]: Found loop6 Jan 16 09:01:25.238571 extend-filesystems[1555]: Found loop7 Jan 16 09:01:25.238571 extend-filesystems[1555]: Found vda Jan 16 09:01:25.238571 extend-filesystems[1555]: Found vda1 Jan 16 09:01:25.238571 extend-filesystems[1555]: Found vda2 Jan 16 09:01:25.238571 extend-filesystems[1555]: Found vda3 Jan 16 09:01:25.238571 extend-filesystems[1555]: Found usr Jan 16 09:01:25.238571 extend-filesystems[1555]: Found vda4 Jan 16 09:01:25.238571 extend-filesystems[1555]: Found vda6 Jan 16 09:01:25.238571 extend-filesystems[1555]: Found vda7 Jan 16 09:01:25.238571 extend-filesystems[1555]: Found vda9 Jan 16 09:01:25.238571 extend-filesystems[1555]: Checking size of /dev/vda9 Jan 16 09:01:25.401260 extend-filesystems[1555]: Resized partition /dev/vda9 Jan 16 09:01:25.252593 systemd[1]: Starting systemd-logind.service - User Login Management... Jan 16 09:01:25.404230 coreos-metadata[1551]: Jan 16 09:01:25.252 INFO Fetch successful Jan 16 09:01:25.404566 extend-filesystems[1585]: resize2fs 1.47.1 (20-May-2024) Jan 16 09:01:25.433656 kernel: EXT4-fs (vda9): resizing filesystem from 553472 to 15121403 blocks Jan 16 09:01:25.280164 systemd[1]: tcsd.service - TCG Core Services Daemon was skipped because of an unmet condition check (ConditionPathExists=/dev/tpm0). Jan 16 09:01:25.300321 systemd[1]: Starting update-engine.service - Update Engine... Jan 16 09:01:25.354557 systemd[1]: Starting update-ssh-keys-after-ignition.service - Run update-ssh-keys once after Ignition... Jan 16 09:01:25.456402 update_engine[1575]: I20250116 09:01:25.450153 1575 main.cc:92] Flatcar Update Engine starting Jan 16 09:01:25.456402 update_engine[1575]: I20250116 09:01:25.454576 1575 update_check_scheduler.cc:74] Next update check in 8m59s Jan 16 09:01:25.366734 systemd[1]: Started dbus.service - D-Bus System Message Bus. Jan 16 09:01:25.490944 jq[1584]: true Jan 16 09:01:25.425848 systemd[1]: enable-oem-cloudinit.service: Skipped due to 'exec-condition'. Jan 16 09:01:25.426544 systemd[1]: Condition check resulted in enable-oem-cloudinit.service - Enable cloudinit being skipped. Jan 16 09:01:25.449681 systemd[1]: motdgen.service: Deactivated successfully. Jan 16 09:01:25.450180 systemd[1]: Finished motdgen.service - Generate /run/flatcar/motd. Jan 16 09:01:25.479946 systemd[1]: Finished nvidia.service - NVIDIA Configure Service. Jan 16 09:01:25.492796 systemd[1]: ssh-key-proc-cmdline.service: Deactivated successfully. Jan 16 09:01:25.493328 systemd[1]: Finished ssh-key-proc-cmdline.service - Install an ssh key from /proc/cmdline. Jan 16 09:01:25.556611 systemd[1]: system-cloudinit@usr-share-oem-cloud\x2dconfig.yml.service - Load cloud-config from /usr/share/oem/cloud-config.yml was skipped because of an unmet condition check (ConditionFileNotEmpty=/usr/share/oem/cloud-config.yml). Jan 16 09:01:25.556708 systemd[1]: Reached target system-config.target - Load system-provided cloud configs. Jan 16 09:01:25.557760 systemd[1]: user-cloudinit-proc-cmdline.service - Load cloud-config from url defined in /proc/cmdline was skipped because of an unmet condition check (ConditionKernelCommandLine=cloud-config-url). Jan 16 09:01:25.557978 systemd[1]: user-configdrive.service - Load cloud-config from /media/configdrive was skipped because of an unmet condition check (ConditionKernelCommandLine=!flatcar.oem.id=digitalocean). 
Jan 16 09:01:25.560139 systemd[1]: Reached target user-config.target - Load user-provided cloud configs. Jan 16 09:01:25.568708 systemd[1]: Started update-engine.service - Update Engine. Jan 16 09:01:25.577435 systemd[1]: cgroup compatibility translation between legacy and unified hierarchy settings activated. See cgroup-compat debug messages for details. Jan 16 09:01:25.592295 systemd[1]: Started locksmithd.service - Cluster reboot manager. Jan 16 09:01:25.636911 (ntainerd)[1600]: containerd.service: Referenced but unset environment variable evaluates to an empty string: TORCX_IMAGEDIR, TORCX_UNPACKDIR Jan 16 09:01:25.666456 kernel: BTRFS warning: duplicate device /dev/vda3 devid 1 generation 36 scanned by (udev-worker) (1606) Jan 16 09:01:25.637933 systemd[1]: Finished coreos-metadata.service - Flatcar Metadata Agent. Jan 16 09:01:25.645258 systemd[1]: packet-phone-home.service - Report Success to Packet was skipped because no trigger condition checks were met. Jan 16 09:01:25.717998 jq[1598]: true Jan 16 09:01:25.779998 kernel: EXT4-fs (vda9): resized filesystem to 15121403 Jan 16 09:01:25.795715 systemd-logind[1569]: New seat seat0. Jan 16 09:01:25.828752 systemd-logind[1569]: Watching system buttons on /dev/input/event1 (Power Button) Jan 16 09:01:25.828798 systemd-logind[1569]: Watching system buttons on /dev/input/event0 (AT Translated Set 2 keyboard) Jan 16 09:01:25.836040 extend-filesystems[1585]: Filesystem at /dev/vda9 is mounted on /; on-line resizing required Jan 16 09:01:25.836040 extend-filesystems[1585]: old_desc_blocks = 1, new_desc_blocks = 8 Jan 16 09:01:25.836040 extend-filesystems[1585]: The filesystem on /dev/vda9 is now 15121403 (4k) blocks long. Jan 16 09:01:25.885469 extend-filesystems[1555]: Resized filesystem in /dev/vda9 Jan 16 09:01:25.885469 extend-filesystems[1555]: Found vdb Jan 16 09:01:25.837791 systemd[1]: Started systemd-logind.service - User Login Management. Jan 16 09:01:25.856385 systemd[1]: extend-filesystems.service: Deactivated successfully. Jan 16 09:01:25.856747 systemd[1]: Finished extend-filesystems.service - Extend Filesystems. Jan 16 09:01:25.983442 bash[1642]: Updated "/home/core/.ssh/authorized_keys" Jan 16 09:01:25.986787 systemd[1]: Finished update-ssh-keys-after-ignition.service - Run update-ssh-keys once after Ignition. Jan 16 09:01:26.015762 systemd[1]: Starting sshkeys.service... Jan 16 09:01:26.097117 systemd[1]: Created slice system-coreos\x2dmetadata\x2dsshkeys.slice - Slice /system/coreos-metadata-sshkeys. Jan 16 09:01:26.114560 systemd[1]: Starting coreos-metadata-sshkeys@core.service - Flatcar Metadata Agent (SSH Keys)... Jan 16 09:01:26.143299 locksmithd[1616]: locksmithd starting currentOperation="UPDATE_STATUS_IDLE" strategy="reboot" Jan 16 09:01:26.254125 coreos-metadata[1652]: Jan 16 09:01:26.251 INFO Fetching http://169.254.169.254/metadata/v1.json: Attempt #1 Jan 16 09:01:26.272999 coreos-metadata[1652]: Jan 16 09:01:26.269 INFO Fetch successful Jan 16 09:01:26.293522 unknown[1652]: wrote ssh authorized keys file for user: core Jan 16 09:01:26.312633 sshd_keygen[1599]: ssh-keygen: generating new host keys: RSA ECDSA ED25519 Jan 16 09:01:26.356346 update-ssh-keys[1662]: Updated "/home/core/.ssh/authorized_keys" Jan 16 09:01:26.346177 systemd[1]: Finished coreos-metadata-sshkeys@core.service - Flatcar Metadata Agent (SSH Keys). Jan 16 09:01:26.366516 systemd[1]: Finished sshkeys.service. Jan 16 09:01:26.461884 systemd[1]: Finished sshd-keygen.service - Generate sshd host keys. 
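For scale, the extend-filesystems output above is an online ext4 grow from 553472 to 15121403 blocks; with the 4 KiB block size reported in the kernel message, that is roughly 2.1 GiB expanding to about 57.7 GiB. A quick check of that arithmetic:

```python
BLOCK_SIZE = 4096  # "(4k) blocks" per the resize2fs / kernel messages above

def blocks_to_gib(blocks: int, block_size: int = BLOCK_SIZE) -> float:
    """Convert a filesystem block count to GiB."""
    return blocks * block_size / 2**30

# Block counts taken from the extend-filesystems output in the log.
print(f"before: {blocks_to_gib(553472):.2f} GiB")    # ~2.11 GiB
print(f"after:  {blocks_to_gib(15121403):.2f} GiB")  # ~57.68 GiB
```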
Jan 16 09:01:26.496834 systemd[1]: Starting issuegen.service - Generate /run/issue... Jan 16 09:01:26.542215 systemd[1]: issuegen.service: Deactivated successfully. Jan 16 09:01:26.542646 systemd[1]: Finished issuegen.service - Generate /run/issue. Jan 16 09:01:26.549880 containerd[1600]: time="2025-01-16T09:01:26.549756737Z" level=info msg="starting containerd" revision=9b2ad7760328148397346d10c7b2004271249db4 version=v1.7.23 Jan 16 09:01:26.562454 systemd[1]: Starting systemd-user-sessions.service - Permit User Sessions... Jan 16 09:01:26.618723 systemd[1]: Finished systemd-user-sessions.service - Permit User Sessions. Jan 16 09:01:26.641438 systemd[1]: Started getty@tty1.service - Getty on tty1. Jan 16 09:01:26.661629 systemd[1]: Started serial-getty@ttyS0.service - Serial Getty on ttyS0. Jan 16 09:01:26.663496 containerd[1600]: time="2025-01-16T09:01:26.663406696Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.aufs\"..." type=io.containerd.snapshotter.v1 Jan 16 09:01:26.669058 systemd[1]: Reached target getty.target - Login Prompts. Jan 16 09:01:26.678426 containerd[1600]: time="2025-01-16T09:01:26.678274073Z" level=info msg="skip loading plugin \"io.containerd.snapshotter.v1.aufs\"..." error="aufs is not supported (modprobe aufs failed: exit status 1 \"modprobe: FATAL: Module aufs not found in directory /lib/modules/6.6.71-flatcar\\n\"): skip plugin" type=io.containerd.snapshotter.v1 Jan 16 09:01:26.678426 containerd[1600]: time="2025-01-16T09:01:26.678392487Z" level=info msg="loading plugin \"io.containerd.event.v1.exchange\"..." type=io.containerd.event.v1 Jan 16 09:01:26.678426 containerd[1600]: time="2025-01-16T09:01:26.678434423Z" level=info msg="loading plugin \"io.containerd.internal.v1.opt\"..." type=io.containerd.internal.v1 Jan 16 09:01:26.679158 containerd[1600]: time="2025-01-16T09:01:26.678833372Z" level=info msg="loading plugin \"io.containerd.warning.v1.deprecations\"..." type=io.containerd.warning.v1 Jan 16 09:01:26.679158 containerd[1600]: time="2025-01-16T09:01:26.678901906Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.blockfile\"..." type=io.containerd.snapshotter.v1 Jan 16 09:01:26.679158 containerd[1600]: time="2025-01-16T09:01:26.679087050Z" level=info msg="skip loading plugin \"io.containerd.snapshotter.v1.blockfile\"..." error="no scratch file generator: skip plugin" type=io.containerd.snapshotter.v1 Jan 16 09:01:26.679158 containerd[1600]: time="2025-01-16T09:01:26.679115075Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.btrfs\"..." type=io.containerd.snapshotter.v1 Jan 16 09:01:26.680505 containerd[1600]: time="2025-01-16T09:01:26.679811416Z" level=info msg="skip loading plugin \"io.containerd.snapshotter.v1.btrfs\"..." error="path /var/lib/containerd/io.containerd.snapshotter.v1.btrfs (ext4) must be a btrfs filesystem to be used with the btrfs snapshotter: skip plugin" type=io.containerd.snapshotter.v1 Jan 16 09:01:26.680505 containerd[1600]: time="2025-01-16T09:01:26.679852622Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.devmapper\"..." type=io.containerd.snapshotter.v1 Jan 16 09:01:26.680505 containerd[1600]: time="2025-01-16T09:01:26.679894059Z" level=info msg="skip loading plugin \"io.containerd.snapshotter.v1.devmapper\"..." error="devmapper not configured: skip plugin" type=io.containerd.snapshotter.v1 Jan 16 09:01:26.680505 containerd[1600]: time="2025-01-16T09:01:26.679915999Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.native\"..." 
type=io.containerd.snapshotter.v1 Jan 16 09:01:26.680505 containerd[1600]: time="2025-01-16T09:01:26.680156181Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.overlayfs\"..." type=io.containerd.snapshotter.v1 Jan 16 09:01:26.680900 containerd[1600]: time="2025-01-16T09:01:26.680675777Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.zfs\"..." type=io.containerd.snapshotter.v1 Jan 16 09:01:26.683536 containerd[1600]: time="2025-01-16T09:01:26.681632136Z" level=info msg="skip loading plugin \"io.containerd.snapshotter.v1.zfs\"..." error="path /var/lib/containerd/io.containerd.snapshotter.v1.zfs must be a zfs filesystem to be used with the zfs snapshotter: skip plugin" type=io.containerd.snapshotter.v1 Jan 16 09:01:26.683536 containerd[1600]: time="2025-01-16T09:01:26.681688094Z" level=info msg="loading plugin \"io.containerd.content.v1.content\"..." type=io.containerd.content.v1 Jan 16 09:01:26.683536 containerd[1600]: time="2025-01-16T09:01:26.682062406Z" level=info msg="loading plugin \"io.containerd.metadata.v1.bolt\"..." type=io.containerd.metadata.v1 Jan 16 09:01:26.683536 containerd[1600]: time="2025-01-16T09:01:26.682175668Z" level=info msg="metadata content store policy set" policy=shared Jan 16 09:01:26.690458 containerd[1600]: time="2025-01-16T09:01:26.689790300Z" level=info msg="loading plugin \"io.containerd.gc.v1.scheduler\"..." type=io.containerd.gc.v1 Jan 16 09:01:26.690458 containerd[1600]: time="2025-01-16T09:01:26.689938827Z" level=info msg="loading plugin \"io.containerd.differ.v1.walking\"..." type=io.containerd.differ.v1 Jan 16 09:01:26.690458 containerd[1600]: time="2025-01-16T09:01:26.690152385Z" level=info msg="loading plugin \"io.containerd.lease.v1.manager\"..." type=io.containerd.lease.v1 Jan 16 09:01:26.690458 containerd[1600]: time="2025-01-16T09:01:26.690185745Z" level=info msg="loading plugin \"io.containerd.streaming.v1.manager\"..." type=io.containerd.streaming.v1 Jan 16 09:01:26.690458 containerd[1600]: time="2025-01-16T09:01:26.690227842Z" level=info msg="loading plugin \"io.containerd.runtime.v1.linux\"..." type=io.containerd.runtime.v1 Jan 16 09:01:26.690854 containerd[1600]: time="2025-01-16T09:01:26.690678004Z" level=info msg="loading plugin \"io.containerd.monitor.v1.cgroups\"..." type=io.containerd.monitor.v1 Jan 16 09:01:26.693948 containerd[1600]: time="2025-01-16T09:01:26.691652590Z" level=info msg="loading plugin \"io.containerd.runtime.v2.task\"..." type=io.containerd.runtime.v2 Jan 16 09:01:26.693948 containerd[1600]: time="2025-01-16T09:01:26.692013972Z" level=info msg="loading plugin \"io.containerd.runtime.v2.shim\"..." type=io.containerd.runtime.v2 Jan 16 09:01:26.693948 containerd[1600]: time="2025-01-16T09:01:26.692062047Z" level=info msg="loading plugin \"io.containerd.sandbox.store.v1.local\"..." type=io.containerd.sandbox.store.v1 Jan 16 09:01:26.693948 containerd[1600]: time="2025-01-16T09:01:26.692089409Z" level=info msg="loading plugin \"io.containerd.sandbox.controller.v1.local\"..." type=io.containerd.sandbox.controller.v1 Jan 16 09:01:26.693948 containerd[1600]: time="2025-01-16T09:01:26.692114009Z" level=info msg="loading plugin \"io.containerd.service.v1.containers-service\"..." type=io.containerd.service.v1 Jan 16 09:01:26.693948 containerd[1600]: time="2025-01-16T09:01:26.692148552Z" level=info msg="loading plugin \"io.containerd.service.v1.content-service\"..." 
type=io.containerd.service.v1 Jan 16 09:01:26.693948 containerd[1600]: time="2025-01-16T09:01:26.692171528Z" level=info msg="loading plugin \"io.containerd.service.v1.diff-service\"..." type=io.containerd.service.v1 Jan 16 09:01:26.693948 containerd[1600]: time="2025-01-16T09:01:26.692196028Z" level=info msg="loading plugin \"io.containerd.service.v1.images-service\"..." type=io.containerd.service.v1 Jan 16 09:01:26.693948 containerd[1600]: time="2025-01-16T09:01:26.692237031Z" level=info msg="loading plugin \"io.containerd.service.v1.introspection-service\"..." type=io.containerd.service.v1 Jan 16 09:01:26.693948 containerd[1600]: time="2025-01-16T09:01:26.692258652Z" level=info msg="loading plugin \"io.containerd.service.v1.namespaces-service\"..." type=io.containerd.service.v1 Jan 16 09:01:26.693948 containerd[1600]: time="2025-01-16T09:01:26.692279316Z" level=info msg="loading plugin \"io.containerd.service.v1.snapshots-service\"..." type=io.containerd.service.v1 Jan 16 09:01:26.693948 containerd[1600]: time="2025-01-16T09:01:26.692317387Z" level=info msg="loading plugin \"io.containerd.service.v1.tasks-service\"..." type=io.containerd.service.v1 Jan 16 09:01:26.693948 containerd[1600]: time="2025-01-16T09:01:26.692374423Z" level=info msg="loading plugin \"io.containerd.grpc.v1.containers\"..." type=io.containerd.grpc.v1 Jan 16 09:01:26.693948 containerd[1600]: time="2025-01-16T09:01:26.692399786Z" level=info msg="loading plugin \"io.containerd.grpc.v1.content\"..." type=io.containerd.grpc.v1 Jan 16 09:01:26.697318 containerd[1600]: time="2025-01-16T09:01:26.692432456Z" level=info msg="loading plugin \"io.containerd.grpc.v1.diff\"..." type=io.containerd.grpc.v1 Jan 16 09:01:26.697318 containerd[1600]: time="2025-01-16T09:01:26.692459222Z" level=info msg="loading plugin \"io.containerd.grpc.v1.events\"..." type=io.containerd.grpc.v1 Jan 16 09:01:26.697318 containerd[1600]: time="2025-01-16T09:01:26.692478737Z" level=info msg="loading plugin \"io.containerd.grpc.v1.images\"..." type=io.containerd.grpc.v1 Jan 16 09:01:26.697318 containerd[1600]: time="2025-01-16T09:01:26.692499917Z" level=info msg="loading plugin \"io.containerd.grpc.v1.introspection\"..." type=io.containerd.grpc.v1 Jan 16 09:01:26.697318 containerd[1600]: time="2025-01-16T09:01:26.692520474Z" level=info msg="loading plugin \"io.containerd.grpc.v1.leases\"..." type=io.containerd.grpc.v1 Jan 16 09:01:26.697318 containerd[1600]: time="2025-01-16T09:01:26.692544858Z" level=info msg="loading plugin \"io.containerd.grpc.v1.namespaces\"..." type=io.containerd.grpc.v1 Jan 16 09:01:26.697318 containerd[1600]: time="2025-01-16T09:01:26.692591294Z" level=info msg="loading plugin \"io.containerd.grpc.v1.sandbox-controllers\"..." type=io.containerd.grpc.v1 Jan 16 09:01:26.697318 containerd[1600]: time="2025-01-16T09:01:26.692624192Z" level=info msg="loading plugin \"io.containerd.grpc.v1.sandboxes\"..." type=io.containerd.grpc.v1 Jan 16 09:01:26.697318 containerd[1600]: time="2025-01-16T09:01:26.692792941Z" level=info msg="loading plugin \"io.containerd.grpc.v1.snapshots\"..." type=io.containerd.grpc.v1 Jan 16 09:01:26.697318 containerd[1600]: time="2025-01-16T09:01:26.692826324Z" level=info msg="loading plugin \"io.containerd.grpc.v1.streaming\"..." type=io.containerd.grpc.v1 Jan 16 09:01:26.697318 containerd[1600]: time="2025-01-16T09:01:26.692852385Z" level=info msg="loading plugin \"io.containerd.grpc.v1.tasks\"..." 
type=io.containerd.grpc.v1 Jan 16 09:01:26.697318 containerd[1600]: time="2025-01-16T09:01:26.692882202Z" level=info msg="loading plugin \"io.containerd.transfer.v1.local\"..." type=io.containerd.transfer.v1 Jan 16 09:01:26.697318 containerd[1600]: time="2025-01-16T09:01:26.692926064Z" level=info msg="loading plugin \"io.containerd.grpc.v1.transfer\"..." type=io.containerd.grpc.v1 Jan 16 09:01:26.697318 containerd[1600]: time="2025-01-16T09:01:26.692949892Z" level=info msg="loading plugin \"io.containerd.grpc.v1.version\"..." type=io.containerd.grpc.v1 Jan 16 09:01:26.697318 containerd[1600]: time="2025-01-16T09:01:26.693005039Z" level=info msg="loading plugin \"io.containerd.internal.v1.restart\"..." type=io.containerd.internal.v1 Jan 16 09:01:26.698043 containerd[1600]: time="2025-01-16T09:01:26.693104728Z" level=info msg="loading plugin \"io.containerd.tracing.processor.v1.otlp\"..." type=io.containerd.tracing.processor.v1 Jan 16 09:01:26.698043 containerd[1600]: time="2025-01-16T09:01:26.693137939Z" level=info msg="skip loading plugin \"io.containerd.tracing.processor.v1.otlp\"..." error="skip plugin: tracing endpoint not configured" type=io.containerd.tracing.processor.v1 Jan 16 09:01:26.698043 containerd[1600]: time="2025-01-16T09:01:26.693157731Z" level=info msg="loading plugin \"io.containerd.internal.v1.tracing\"..." type=io.containerd.internal.v1 Jan 16 09:01:26.698043 containerd[1600]: time="2025-01-16T09:01:26.693364570Z" level=info msg="skip loading plugin \"io.containerd.internal.v1.tracing\"..." error="skip plugin: tracing endpoint not configured" type=io.containerd.internal.v1 Jan 16 09:01:26.698043 containerd[1600]: time="2025-01-16T09:01:26.693402176Z" level=info msg="loading plugin \"io.containerd.grpc.v1.healthcheck\"..." type=io.containerd.grpc.v1 Jan 16 09:01:26.698043 containerd[1600]: time="2025-01-16T09:01:26.693429485Z" level=info msg="loading plugin \"io.containerd.nri.v1.nri\"..." type=io.containerd.nri.v1 Jan 16 09:01:26.698043 containerd[1600]: time="2025-01-16T09:01:26.693448914Z" level=info msg="NRI interface is disabled by configuration." Jan 16 09:01:26.698043 containerd[1600]: time="2025-01-16T09:01:26.693468403Z" level=info msg="loading plugin \"io.containerd.grpc.v1.cri\"..." type=io.containerd.grpc.v1 Jan 16 09:01:26.697801 systemd[1]: Started containerd.service - containerd container runtime. 
Jan 16 09:01:26.698614 containerd[1600]: time="2025-01-16T09:01:26.694178769Z" level=info msg="Start cri plugin with config {PluginConfig:{ContainerdConfig:{Snapshotter:overlayfs DefaultRuntimeName:runc DefaultRuntime:{Type: Path: Engine: PodAnnotations:[] ContainerAnnotations:[] Root: Options:map[] PrivilegedWithoutHostDevices:false PrivilegedWithoutHostDevicesAllDevicesAllowed:false BaseRuntimeSpec: NetworkPluginConfDir: NetworkPluginMaxConfNum:0 Snapshotter: SandboxMode:} UntrustedWorkloadRuntime:{Type: Path: Engine: PodAnnotations:[] ContainerAnnotations:[] Root: Options:map[] PrivilegedWithoutHostDevices:false PrivilegedWithoutHostDevicesAllDevicesAllowed:false BaseRuntimeSpec: NetworkPluginConfDir: NetworkPluginMaxConfNum:0 Snapshotter: SandboxMode:} Runtimes:map[runc:{Type:io.containerd.runc.v2 Path: Engine: PodAnnotations:[] ContainerAnnotations:[] Root: Options:map[SystemdCgroup:false] PrivilegedWithoutHostDevices:false PrivilegedWithoutHostDevicesAllDevicesAllowed:false BaseRuntimeSpec: NetworkPluginConfDir: NetworkPluginMaxConfNum:0 Snapshotter: SandboxMode:podsandbox}] NoPivot:false DisableSnapshotAnnotations:true DiscardUnpackedLayers:false IgnoreBlockIONotEnabledErrors:false IgnoreRdtNotEnabledErrors:false} CniConfig:{NetworkPluginBinDir:/opt/cni/bin NetworkPluginConfDir:/etc/cni/net.d NetworkPluginMaxConfNum:1 NetworkPluginSetupSerially:false NetworkPluginConfTemplate: IPPreference:} Registry:{ConfigPath: Mirrors:map[] Configs:map[] Auths:map[] Headers:map[]} ImageDecryption:{KeyModel:node} DisableTCPService:true StreamServerAddress:127.0.0.1 StreamServerPort:0 StreamIdleTimeout:4h0m0s EnableSelinux:false SelinuxCategoryRange:1024 SandboxImage:registry.k8s.io/pause:3.8 StatsCollectPeriod:10 SystemdCgroup:false EnableTLSStreaming:false X509KeyPairStreaming:{TLSCertFile: TLSKeyFile:} MaxContainerLogLineSize:16384 DisableCgroup:false DisableApparmor:false RestrictOOMScoreAdj:false MaxConcurrentDownloads:3 DisableProcMount:false UnsetSeccompProfile: TolerateMissingHugetlbController:true DisableHugetlbController:true DeviceOwnershipFromSecurityContext:false IgnoreImageDefinedVolumes:false NetNSMountsUnderStateDir:false EnableUnprivilegedPorts:false EnableUnprivilegedICMP:false EnableCDI:false CDISpecDirs:[/etc/cdi /var/run/cdi] ImagePullProgressTimeout:5m0s DrainExecSyncIOTimeout:0s ImagePullWithSyncFs:false IgnoreDeprecationWarnings:[]} ContainerdRootDir:/var/lib/containerd ContainerdEndpoint:/run/containerd/containerd.sock RootDir:/var/lib/containerd/io.containerd.grpc.v1.cri StateDir:/run/containerd/io.containerd.grpc.v1.cri}" Jan 16 09:01:26.698614 containerd[1600]: time="2025-01-16T09:01:26.694303875Z" level=info msg="Connect containerd service" Jan 16 09:01:26.698614 containerd[1600]: time="2025-01-16T09:01:26.694415821Z" level=info msg="using legacy CRI server" Jan 16 09:01:26.698614 containerd[1600]: time="2025-01-16T09:01:26.694433035Z" level=info msg="using experimental NRI integration - disable nri plugin to prevent this" Jan 16 09:01:26.698614 containerd[1600]: time="2025-01-16T09:01:26.694679338Z" level=info msg="Get image filesystem path \"/var/lib/containerd/io.containerd.snapshotter.v1.overlayfs\"" Jan 16 09:01:26.698614 containerd[1600]: time="2025-01-16T09:01:26.696062558Z" level=error msg="failed to load cni during init, please check CRI plugin status before setting up network for pods" error="cni config load failed: no network config found in /etc/cni/net.d: cni plugin not initialized: failed to load cni config" Jan 16 09:01:26.698614 containerd[1600]: 
time="2025-01-16T09:01:26.696172864Z" level=info msg="Start subscribing containerd event" Jan 16 09:01:26.698614 containerd[1600]: time="2025-01-16T09:01:26.696263988Z" level=info msg="Start recovering state" Jan 16 09:01:26.698614 containerd[1600]: time="2025-01-16T09:01:26.696368811Z" level=info msg="Start event monitor" Jan 16 09:01:26.698614 containerd[1600]: time="2025-01-16T09:01:26.696387762Z" level=info msg="Start snapshots syncer" Jan 16 09:01:26.698614 containerd[1600]: time="2025-01-16T09:01:26.696409341Z" level=info msg="Start cni network conf syncer for default" Jan 16 09:01:26.698614 containerd[1600]: time="2025-01-16T09:01:26.696424023Z" level=info msg="Start streaming server" Jan 16 09:01:26.698614 containerd[1600]: time="2025-01-16T09:01:26.697430524Z" level=info msg=serving... address=/run/containerd/containerd.sock.ttrpc Jan 16 09:01:26.698614 containerd[1600]: time="2025-01-16T09:01:26.697500791Z" level=info msg=serving... address=/run/containerd/containerd.sock Jan 16 09:01:26.698614 containerd[1600]: time="2025-01-16T09:01:26.698139695Z" level=info msg="containerd successfully booted in 0.149687s" Jan 16 09:01:27.595258 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent. Jan 16 09:01:27.600945 systemd[1]: Reached target multi-user.target - Multi-User System. Jan 16 09:01:27.605227 systemd[1]: Startup finished in 8.055s (kernel) + 9.200s (userspace) = 17.255s. Jan 16 09:01:27.613279 (kubelet)[1700]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS, KUBELET_KUBEADM_ARGS Jan 16 09:01:28.749300 kubelet[1700]: E0116 09:01:28.749128 1700 run.go:74] "command failed" err="failed to load kubelet config file, path: /var/lib/kubelet/config.yaml, error: failed to load Kubelet config file /var/lib/kubelet/config.yaml, error failed to read kubelet config file \"/var/lib/kubelet/config.yaml\", error: open /var/lib/kubelet/config.yaml: no such file or directory" Jan 16 09:01:28.753820 systemd[1]: kubelet.service: Main process exited, code=exited, status=1/FAILURE Jan 16 09:01:28.754766 systemd[1]: kubelet.service: Failed with result 'exit-code'. Jan 16 09:01:32.391616 systemd[1]: Created slice system-sshd.slice - Slice /system/sshd. Jan 16 09:01:32.398448 systemd[1]: Started sshd@0-143.110.229.235:22-147.75.109.163:54734.service - OpenSSH per-connection server daemon (147.75.109.163:54734). Jan 16 09:01:32.540636 sshd[1713]: Accepted publickey for core from 147.75.109.163 port 54734 ssh2: RSA SHA256:JFCq2iHRoEjWAf+XB9dAYBdNgVKarxLIY/Gd6UT86ms Jan 16 09:01:32.556111 sshd-session[1713]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Jan 16 09:01:32.576191 systemd[1]: Created slice user-500.slice - User Slice of UID 500. Jan 16 09:01:32.588424 systemd[1]: Starting user-runtime-dir@500.service - User Runtime Directory /run/user/500... Jan 16 09:01:32.598345 systemd-logind[1569]: New session 1 of user core. Jan 16 09:01:32.628325 systemd[1]: Finished user-runtime-dir@500.service - User Runtime Directory /run/user/500. Jan 16 09:01:32.640636 systemd[1]: Starting user@500.service - User Manager for UID 500... Jan 16 09:01:32.662295 (systemd)[1719]: pam_unix(systemd-user:session): session opened for user core(uid=500) by (uid=0) Jan 16 09:01:32.838940 systemd[1719]: Queued start job for default target default.target. Jan 16 09:01:32.839696 systemd[1719]: Created slice app.slice - User Application Slice. Jan 16 09:01:32.839742 systemd[1719]: Reached target paths.target - Paths. 
Jan 16 09:01:32.839763 systemd[1719]: Reached target timers.target - Timers. Jan 16 09:01:32.853538 systemd[1719]: Starting dbus.socket - D-Bus User Message Bus Socket... Jan 16 09:01:32.872715 systemd[1719]: Listening on dbus.socket - D-Bus User Message Bus Socket. Jan 16 09:01:32.872849 systemd[1719]: Reached target sockets.target - Sockets. Jan 16 09:01:32.872876 systemd[1719]: Reached target basic.target - Basic System. Jan 16 09:01:32.873171 systemd[1]: Started user@500.service - User Manager for UID 500. Jan 16 09:01:32.875083 systemd[1719]: Reached target default.target - Main User Target. Jan 16 09:01:32.875163 systemd[1719]: Startup finished in 201ms. Jan 16 09:01:32.878568 systemd[1]: Started session-1.scope - Session 1 of User core. Jan 16 09:01:32.972802 systemd[1]: Started sshd@1-143.110.229.235:22-147.75.109.163:54736.service - OpenSSH per-connection server daemon (147.75.109.163:54736). Jan 16 09:01:33.060463 sshd[1731]: Accepted publickey for core from 147.75.109.163 port 54736 ssh2: RSA SHA256:JFCq2iHRoEjWAf+XB9dAYBdNgVKarxLIY/Gd6UT86ms Jan 16 09:01:33.063335 sshd-session[1731]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Jan 16 09:01:33.081446 systemd-logind[1569]: New session 2 of user core. Jan 16 09:01:33.088762 systemd[1]: Started session-2.scope - Session 2 of User core. Jan 16 09:01:33.161767 sshd[1734]: Connection closed by 147.75.109.163 port 54736 Jan 16 09:01:33.162861 sshd-session[1731]: pam_unix(sshd:session): session closed for user core Jan 16 09:01:33.175238 systemd[1]: Started sshd@2-143.110.229.235:22-147.75.109.163:54746.service - OpenSSH per-connection server daemon (147.75.109.163:54746). Jan 16 09:01:33.177258 systemd[1]: sshd@1-143.110.229.235:22-147.75.109.163:54736.service: Deactivated successfully. Jan 16 09:01:33.184554 systemd[1]: session-2.scope: Deactivated successfully. Jan 16 09:01:33.188278 systemd-logind[1569]: Session 2 logged out. Waiting for processes to exit. Jan 16 09:01:33.191305 systemd-logind[1569]: Removed session 2. Jan 16 09:01:33.265495 sshd[1736]: Accepted publickey for core from 147.75.109.163 port 54746 ssh2: RSA SHA256:JFCq2iHRoEjWAf+XB9dAYBdNgVKarxLIY/Gd6UT86ms Jan 16 09:01:33.268806 sshd-session[1736]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Jan 16 09:01:33.280306 systemd-logind[1569]: New session 3 of user core. Jan 16 09:01:33.295686 systemd[1]: Started session-3.scope - Session 3 of User core. Jan 16 09:01:33.367664 sshd[1742]: Connection closed by 147.75.109.163 port 54746 Jan 16 09:01:33.370146 sshd-session[1736]: pam_unix(sshd:session): session closed for user core Jan 16 09:01:33.377699 systemd[1]: Started sshd@3-143.110.229.235:22-147.75.109.163:54756.service - OpenSSH per-connection server daemon (147.75.109.163:54756). Jan 16 09:01:33.381073 systemd[1]: sshd@2-143.110.229.235:22-147.75.109.163:54746.service: Deactivated successfully. Jan 16 09:01:33.381456 systemd-logind[1569]: Session 3 logged out. Waiting for processes to exit. Jan 16 09:01:33.394611 systemd[1]: session-3.scope: Deactivated successfully. Jan 16 09:01:33.402667 systemd-logind[1569]: Removed session 3. Jan 16 09:01:33.467141 sshd[1744]: Accepted publickey for core from 147.75.109.163 port 54756 ssh2: RSA SHA256:JFCq2iHRoEjWAf+XB9dAYBdNgVKarxLIY/Gd6UT86ms Jan 16 09:01:33.469464 sshd-session[1744]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Jan 16 09:01:33.478302 systemd-logind[1569]: New session 4 of user core. 
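Each "Accepted publickey" line above identifies the client key by its OpenSSH-style SHA256 fingerprint: the SHA-256 digest of the raw key blob, base64-encoded with the trailing "=" padding removed. A small sketch of that computation follows; the helper name and the commented usage path are illustrative, not taken from this host.

```python
import base64
import hashlib

def ssh_sha256_fingerprint(authorized_key_line: str) -> str:
    """Return the OpenSSH-style SHA256 fingerprint for one authorized_keys entry."""
    # An entry looks like: "<key-type> <base64-encoded key blob> [comment]".
    key_blob = base64.b64decode(authorized_key_line.split()[1])
    digest = hashlib.sha256(key_blob).digest()
    # OpenSSH prints the digest base64-encoded and strips the '=' padding.
    return "SHA256:" + base64.b64encode(digest).decode("ascii").rstrip("=")

# Example usage against the file sshd consults for the "core" user:
# print(ssh_sha256_fingerprint(open("/home/core/.ssh/authorized_keys").readline()))
```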
Jan 16 09:01:33.485636 systemd[1]: Started session-4.scope - Session 4 of User core. Jan 16 09:01:33.558316 sshd[1750]: Connection closed by 147.75.109.163 port 54756 Jan 16 09:01:33.559264 sshd-session[1744]: pam_unix(sshd:session): session closed for user core Jan 16 09:01:33.573952 systemd[1]: Started sshd@4-143.110.229.235:22-147.75.109.163:54758.service - OpenSSH per-connection server daemon (147.75.109.163:54758). Jan 16 09:01:33.576160 systemd[1]: sshd@3-143.110.229.235:22-147.75.109.163:54756.service: Deactivated successfully. Jan 16 09:01:33.582594 systemd[1]: session-4.scope: Deactivated successfully. Jan 16 09:01:33.586612 systemd-logind[1569]: Session 4 logged out. Waiting for processes to exit. Jan 16 09:01:33.589319 systemd-logind[1569]: Removed session 4. Jan 16 09:01:33.642023 sshd[1752]: Accepted publickey for core from 147.75.109.163 port 54758 ssh2: RSA SHA256:JFCq2iHRoEjWAf+XB9dAYBdNgVKarxLIY/Gd6UT86ms Jan 16 09:01:33.644480 sshd-session[1752]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Jan 16 09:01:33.653020 systemd-logind[1569]: New session 5 of user core. Jan 16 09:01:33.665761 systemd[1]: Started session-5.scope - Session 5 of User core. Jan 16 09:01:33.747851 sudo[1759]: core : PWD=/home/core ; USER=root ; COMMAND=/usr/sbin/setenforce 1 Jan 16 09:01:33.748721 sudo[1759]: pam_unix(sudo:session): session opened for user root(uid=0) by core(uid=500) Jan 16 09:01:33.765405 sudo[1759]: pam_unix(sudo:session): session closed for user root Jan 16 09:01:33.769883 sshd[1758]: Connection closed by 147.75.109.163 port 54758 Jan 16 09:01:33.772693 sshd-session[1752]: pam_unix(sshd:session): session closed for user core Jan 16 09:01:33.790487 systemd[1]: Started sshd@5-143.110.229.235:22-147.75.109.163:54764.service - OpenSSH per-connection server daemon (147.75.109.163:54764). Jan 16 09:01:33.791271 systemd[1]: sshd@4-143.110.229.235:22-147.75.109.163:54758.service: Deactivated successfully. Jan 16 09:01:33.802173 systemd[1]: session-5.scope: Deactivated successfully. Jan 16 09:01:33.806161 systemd-logind[1569]: Session 5 logged out. Waiting for processes to exit. Jan 16 09:01:33.808386 systemd-logind[1569]: Removed session 5. Jan 16 09:01:33.867898 sshd[1761]: Accepted publickey for core from 147.75.109.163 port 54764 ssh2: RSA SHA256:JFCq2iHRoEjWAf+XB9dAYBdNgVKarxLIY/Gd6UT86ms Jan 16 09:01:33.871746 sshd-session[1761]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Jan 16 09:01:33.890275 systemd-logind[1569]: New session 6 of user core. Jan 16 09:01:33.897977 systemd[1]: Started session-6.scope - Session 6 of User core. Jan 16 09:01:33.973596 sudo[1769]: core : PWD=/home/core ; USER=root ; COMMAND=/usr/sbin/rm -rf /etc/audit/rules.d/80-selinux.rules /etc/audit/rules.d/99-default.rules Jan 16 09:01:33.974568 sudo[1769]: pam_unix(sudo:session): session opened for user root(uid=0) by core(uid=500) Jan 16 09:01:33.985025 sudo[1769]: pam_unix(sudo:session): session closed for user root Jan 16 09:01:33.995622 sudo[1768]: core : PWD=/home/core ; USER=root ; COMMAND=/usr/sbin/systemctl restart audit-rules Jan 16 09:01:33.996036 sudo[1768]: pam_unix(sudo:session): session opened for user root(uid=0) by core(uid=500) Jan 16 09:01:34.019584 systemd[1]: Starting audit-rules.service - Load Audit Rules... Jan 16 09:01:34.084584 augenrules[1791]: No rules Jan 16 09:01:34.083699 systemd[1]: audit-rules.service: Deactivated successfully. Jan 16 09:01:34.084212 systemd[1]: Finished audit-rules.service - Load Audit Rules. 
Jan 16 09:01:34.086691 sudo[1768]: pam_unix(sudo:session): session closed for user root Jan 16 09:01:34.092378 sshd[1767]: Connection closed by 147.75.109.163 port 54764 Jan 16 09:01:34.094590 sshd-session[1761]: pam_unix(sshd:session): session closed for user core Jan 16 09:01:34.108558 systemd[1]: Started sshd@6-143.110.229.235:22-147.75.109.163:54772.service - OpenSSH per-connection server daemon (147.75.109.163:54772). Jan 16 09:01:34.110507 systemd[1]: sshd@5-143.110.229.235:22-147.75.109.163:54764.service: Deactivated successfully. Jan 16 09:01:34.121078 systemd[1]: session-6.scope: Deactivated successfully. Jan 16 09:01:34.124594 systemd-logind[1569]: Session 6 logged out. Waiting for processes to exit. Jan 16 09:01:34.128062 systemd-logind[1569]: Removed session 6. Jan 16 09:01:34.177077 sshd[1797]: Accepted publickey for core from 147.75.109.163 port 54772 ssh2: RSA SHA256:JFCq2iHRoEjWAf+XB9dAYBdNgVKarxLIY/Gd6UT86ms Jan 16 09:01:34.179397 sshd-session[1797]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Jan 16 09:01:34.187985 systemd-logind[1569]: New session 7 of user core. Jan 16 09:01:34.193727 systemd[1]: Started session-7.scope - Session 7 of User core. Jan 16 09:01:34.261075 sudo[1804]: core : PWD=/home/core ; USER=root ; COMMAND=/home/core/install.sh Jan 16 09:01:34.261664 sudo[1804]: pam_unix(sudo:session): session opened for user root(uid=0) by core(uid=500) Jan 16 09:01:35.777793 systemd[1]: Stopped kubelet.service - kubelet: The Kubernetes Node Agent. Jan 16 09:01:35.788544 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent... Jan 16 09:01:35.851261 systemd[1]: Reloading requested from client PID 1844 ('systemctl') (unit session-7.scope)... Jan 16 09:01:35.851493 systemd[1]: Reloading... Jan 16 09:01:36.168997 zram_generator::config[1885]: No configuration found. Jan 16 09:01:36.435907 systemd[1]: /usr/lib/systemd/system/docker.socket:6: ListenStream= references a path below legacy directory /var/run/, updating /var/run/docker.sock → /run/docker.sock; please update the unit file accordingly. Jan 16 09:01:36.560572 systemd[1]: Reloading finished in 708 ms. Jan 16 09:01:36.666600 systemd[1]: kubelet.service: Control process exited, code=killed, status=15/TERM Jan 16 09:01:36.666689 systemd[1]: kubelet.service: Failed with result 'signal'. Jan 16 09:01:36.667030 systemd[1]: Stopped kubelet.service - kubelet: The Kubernetes Node Agent. Jan 16 09:01:36.681575 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent... Jan 16 09:01:36.864393 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent. Jan 16 09:01:36.870298 (kubelet)[1949]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS Jan 16 09:01:36.963994 kubelet[1949]: Flag --container-runtime-endpoint has been deprecated, This parameter should be set via the config file specified by the Kubelet's --config flag. See https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/ for more information. Jan 16 09:01:36.963994 kubelet[1949]: Flag --pod-infra-container-image has been deprecated, will be removed in a future release. Image garbage collector will get sandbox image information from CRI. Jan 16 09:01:36.963994 kubelet[1949]: Flag --volume-plugin-dir has been deprecated, This parameter should be set via the config file specified by the Kubelet's --config flag. See https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/ for more information. 
Jan 16 09:01:36.963994 kubelet[1949]: I0116 09:01:36.962422 1949 server.go:204] "--pod-infra-container-image will not be pruned by the image garbage collector in kubelet and should also be set in the remote runtime" Jan 16 09:01:37.964819 kubelet[1949]: I0116 09:01:37.964729 1949 server.go:487] "Kubelet version" kubeletVersion="v1.29.2" Jan 16 09:01:37.964819 kubelet[1949]: I0116 09:01:37.964794 1949 server.go:489] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK="" Jan 16 09:01:37.965632 kubelet[1949]: I0116 09:01:37.965281 1949 server.go:919] "Client rotation is on, will bootstrap in background" Jan 16 09:01:37.995217 kubelet[1949]: I0116 09:01:37.994561 1949 dynamic_cafile_content.go:157] "Starting controller" name="client-ca-bundle::/etc/kubernetes/pki/ca.crt" Jan 16 09:01:38.026174 kubelet[1949]: I0116 09:01:38.026127 1949 server.go:745] "--cgroups-per-qos enabled, but --cgroup-root was not specified. defaulting to /" Jan 16 09:01:38.026975 kubelet[1949]: I0116 09:01:38.026796 1949 container_manager_linux.go:265] "Container manager verified user specified cgroup-root exists" cgroupRoot=[] Jan 16 09:01:38.027107 kubelet[1949]: I0116 09:01:38.027079 1949 container_manager_linux.go:270] "Creating Container Manager object based on Node Config" nodeConfig={"RuntimeCgroupsName":"","SystemCgroupsName":"","KubeletCgroupsName":"","KubeletOOMScoreAdj":-999,"ContainerRuntime":"","CgroupsPerQOS":true,"CgroupRoot":"/","CgroupDriver":"cgroupfs","KubeletRootDir":"/var/lib/kubelet","ProtectKernelDefaults":false,"KubeReservedCgroupName":"","SystemReservedCgroupName":"","ReservedSystemCPUs":{},"EnforceNodeAllocatable":{"pods":{}},"KubeReserved":null,"SystemReserved":null,"HardEvictionThresholds":[{"Signal":"memory.available","Operator":"LessThan","Value":{"Quantity":"100Mi","Percentage":0},"GracePeriod":0,"MinReclaim":null},{"Signal":"nodefs.available","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.1},"GracePeriod":0,"MinReclaim":null},{"Signal":"nodefs.inodesFree","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.05},"GracePeriod":0,"MinReclaim":null},{"Signal":"imagefs.available","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.15},"GracePeriod":0,"MinReclaim":null}],"QOSReserved":{},"CPUManagerPolicy":"none","CPUManagerPolicyOptions":null,"TopologyManagerScope":"container","CPUManagerReconcilePeriod":10000000000,"ExperimentalMemoryManagerPolicy":"None","ExperimentalMemoryManagerReservedMemory":null,"PodPidsLimit":-1,"EnforceCPULimits":true,"CPUCFSQuotaPeriod":100000000,"TopologyManagerPolicy":"none","TopologyManagerPolicyOptions":null} Jan 16 09:01:38.027345 kubelet[1949]: I0116 09:01:38.027120 1949 topology_manager.go:138] "Creating topology manager with none policy" Jan 16 09:01:38.027345 kubelet[1949]: I0116 09:01:38.027132 1949 container_manager_linux.go:301] "Creating device plugin manager" Jan 16 09:01:38.027345 kubelet[1949]: I0116 09:01:38.027331 1949 state_mem.go:36] "Initialized new in-memory state store" Jan 16 09:01:38.027547 kubelet[1949]: I0116 09:01:38.027485 1949 kubelet.go:396] "Attempting to sync node with API server" Jan 16 09:01:38.027547 kubelet[1949]: I0116 09:01:38.027509 1949 kubelet.go:301] "Adding static pod path" path="/etc/kubernetes/manifests" Jan 16 09:01:38.029191 kubelet[1949]: I0116 09:01:38.027558 1949 kubelet.go:312] "Adding apiserver pod source" Jan 16 09:01:38.029191 kubelet[1949]: I0116 09:01:38.027580 1949 apiserver.go:42] "Waiting for node sync before watching apiserver pods" Jan 16 09:01:38.029191 kubelet[1949]: 
E0116 09:01:38.028509 1949 file.go:98] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Jan 16 09:01:38.029191 kubelet[1949]: E0116 09:01:38.028640 1949 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Jan 16 09:01:38.030534 kubelet[1949]: I0116 09:01:38.030488 1949 kuberuntime_manager.go:258] "Container runtime initialized" containerRuntime="containerd" version="v1.7.23" apiVersion="v1" Jan 16 09:01:38.036503 kubelet[1949]: I0116 09:01:38.036447 1949 kubelet.go:809] "Not starting ClusterTrustBundle informer because we are in static kubelet mode" Jan 16 09:01:38.039227 kubelet[1949]: W0116 09:01:38.038378 1949 probe.go:268] Flexvolume plugin directory at /opt/libexec/kubernetes/kubelet-plugins/volume/exec/ does not exist. Recreating. Jan 16 09:01:38.039793 kubelet[1949]: I0116 09:01:38.039760 1949 server.go:1256] "Started kubelet" Jan 16 09:01:38.040590 kubelet[1949]: I0116 09:01:38.040555 1949 server.go:162] "Starting to listen" address="0.0.0.0" port=10250 Jan 16 09:01:38.042038 kubelet[1949]: I0116 09:01:38.042004 1949 server.go:461] "Adding debug handlers to kubelet server" Jan 16 09:01:38.046387 kubelet[1949]: I0116 09:01:38.046316 1949 fs_resource_analyzer.go:67] "Starting FS ResourceAnalyzer" Jan 16 09:01:38.048756 kubelet[1949]: I0116 09:01:38.048243 1949 ratelimit.go:55] "Setting rate limiting for endpoint" service="podresources" qps=100 burstTokens=10 Jan 16 09:01:38.048756 kubelet[1949]: I0116 09:01:38.048533 1949 server.go:233] "Starting to serve the podresources API" endpoint="unix:/var/lib/kubelet/pod-resources/kubelet.sock" Jan 16 09:01:38.061929 kubelet[1949]: W0116 09:01:38.061884 1949 reflector.go:539] vendor/k8s.io/client-go/informers/factory.go:159: failed to list *v1.Node: nodes "143.110.229.235" is forbidden: User "system:anonymous" cannot list resource "nodes" in API group "" at the cluster scope Jan 16 09:01:38.066878 kubelet[1949]: E0116 09:01:38.062152 1949 reflector.go:147] vendor/k8s.io/client-go/informers/factory.go:159: Failed to watch *v1.Node: failed to list *v1.Node: nodes "143.110.229.235" is forbidden: User "system:anonymous" cannot list resource "nodes" in API group "" at the cluster scope Jan 16 09:01:38.066878 kubelet[1949]: W0116 09:01:38.062265 1949 reflector.go:539] vendor/k8s.io/client-go/informers/factory.go:159: failed to list *v1.Service: services is forbidden: User "system:anonymous" cannot list resource "services" in API group "" at the cluster scope Jan 16 09:01:38.066878 kubelet[1949]: E0116 09:01:38.062281 1949 reflector.go:147] vendor/k8s.io/client-go/informers/factory.go:159: Failed to watch *v1.Service: failed to list *v1.Service: services is forbidden: User "system:anonymous" cannot list resource "services" in API group "" at the cluster scope Jan 16 09:01:38.066878 kubelet[1949]: I0116 09:01:38.066064 1949 volume_manager.go:291] "Starting Kubelet Volume Manager" Jan 16 09:01:38.068984 kubelet[1949]: I0116 09:01:38.068274 1949 desired_state_of_world_populator.go:151] "Desired state populator starts to run" Jan 16 09:01:38.068984 kubelet[1949]: I0116 09:01:38.068505 1949 reconciler_new.go:29] "Reconciler: start to sync state" Jan 16 09:01:38.091942 kubelet[1949]: I0116 09:01:38.091897 1949 factory.go:221] Registration of the systemd container factory successfully Jan 16 09:01:38.092342 kubelet[1949]: I0116 09:01:38.092300 1949 factory.go:219] Registration of the crio container factory failed: Get 
"http://%2Fvar%2Frun%2Fcrio%2Fcrio.sock/info": dial unix /var/run/crio/crio.sock: connect: no such file or directory Jan 16 09:01:38.094669 kubelet[1949]: E0116 09:01:38.094630 1949 nodelease.go:49] "Failed to get node when trying to set owner ref to the node lease" err="nodes \"143.110.229.235\" not found" node="143.110.229.235" Jan 16 09:01:38.095722 kubelet[1949]: E0116 09:01:38.095695 1949 kubelet.go:1462] "Image garbage collection failed once. Stats initialization may not have completed yet" err="invalid capacity 0 on image filesystem" Jan 16 09:01:38.097200 kubelet[1949]: I0116 09:01:38.097177 1949 factory.go:221] Registration of the containerd container factory successfully Jan 16 09:01:38.142980 kubelet[1949]: I0116 09:01:38.142882 1949 cpu_manager.go:214] "Starting CPU manager" policy="none" Jan 16 09:01:38.143981 kubelet[1949]: I0116 09:01:38.143614 1949 cpu_manager.go:215] "Reconciling" reconcilePeriod="10s" Jan 16 09:01:38.143981 kubelet[1949]: I0116 09:01:38.143698 1949 state_mem.go:36] "Initialized new in-memory state store" Jan 16 09:01:38.154831 kubelet[1949]: I0116 09:01:38.154521 1949 policy_none.go:49] "None policy: Start" Jan 16 09:01:38.159659 kubelet[1949]: I0116 09:01:38.159351 1949 memory_manager.go:170] "Starting memorymanager" policy="None" Jan 16 09:01:38.159659 kubelet[1949]: I0116 09:01:38.159404 1949 state_mem.go:35] "Initializing new in-memory state store" Jan 16 09:01:38.174041 kubelet[1949]: I0116 09:01:38.174002 1949 kubelet_node_status.go:73] "Attempting to register node" node="143.110.229.235" Jan 16 09:01:38.193562 kubelet[1949]: I0116 09:01:38.193453 1949 kubelet_node_status.go:76] "Successfully registered node" node="143.110.229.235" Jan 16 09:01:38.198153 kubelet[1949]: I0116 09:01:38.198028 1949 manager.go:479] "Failed to read data from checkpoint" checkpoint="kubelet_internal_checkpoint" err="checkpoint is not found" Jan 16 09:01:38.199690 kubelet[1949]: I0116 09:01:38.199509 1949 plugin_manager.go:118] "Starting Kubelet Plugin Manager" Jan 16 09:01:38.205401 kubelet[1949]: E0116 09:01:38.205136 1949 eviction_manager.go:282] "Eviction manager: failed to get summary stats" err="failed to get node info: node \"143.110.229.235\" not found" Jan 16 09:01:38.229703 kubelet[1949]: E0116 09:01:38.229012 1949 kubelet_node_status.go:462] "Error getting the current node from lister" err="node \"143.110.229.235\" not found" Jan 16 09:01:38.284725 kubelet[1949]: I0116 09:01:38.284484 1949 kubelet_network_linux.go:50] "Initialized iptables rules." protocol="IPv4" Jan 16 09:01:38.287684 kubelet[1949]: I0116 09:01:38.287521 1949 kubelet_network_linux.go:50] "Initialized iptables rules." 
protocol="IPv6" Jan 16 09:01:38.290006 kubelet[1949]: I0116 09:01:38.289012 1949 status_manager.go:217] "Starting to sync pod status with apiserver" Jan 16 09:01:38.290006 kubelet[1949]: I0116 09:01:38.289088 1949 kubelet.go:2329] "Starting kubelet main sync loop" Jan 16 09:01:38.290006 kubelet[1949]: E0116 09:01:38.289246 1949 kubelet.go:2353] "Skipping pod synchronization" err="PLEG is not healthy: pleg has yet to be successful" Jan 16 09:01:38.331691 kubelet[1949]: E0116 09:01:38.331503 1949 kubelet_node_status.go:462] "Error getting the current node from lister" err="node \"143.110.229.235\" not found" Jan 16 09:01:38.432685 kubelet[1949]: E0116 09:01:38.432566 1949 kubelet_node_status.go:462] "Error getting the current node from lister" err="node \"143.110.229.235\" not found" Jan 16 09:01:38.533094 kubelet[1949]: E0116 09:01:38.532797 1949 kubelet_node_status.go:462] "Error getting the current node from lister" err="node \"143.110.229.235\" not found" Jan 16 09:01:38.633980 kubelet[1949]: E0116 09:01:38.633866 1949 kubelet_node_status.go:462] "Error getting the current node from lister" err="node \"143.110.229.235\" not found" Jan 16 09:01:38.734997 kubelet[1949]: E0116 09:01:38.734879 1949 kubelet_node_status.go:462] "Error getting the current node from lister" err="node \"143.110.229.235\" not found" Jan 16 09:01:38.836067 kubelet[1949]: E0116 09:01:38.836006 1949 kubelet_node_status.go:462] "Error getting the current node from lister" err="node \"143.110.229.235\" not found" Jan 16 09:01:38.937593 kubelet[1949]: E0116 09:01:38.937501 1949 kubelet_node_status.go:462] "Error getting the current node from lister" err="node \"143.110.229.235\" not found" Jan 16 09:01:38.975554 kubelet[1949]: I0116 09:01:38.975485 1949 transport.go:147] "Certificate rotation detected, shutting down client connections to start using new credentials" Jan 16 09:01:38.976291 kubelet[1949]: W0116 09:01:38.975886 1949 reflector.go:462] vendor/k8s.io/client-go/informers/factory.go:159: watch of *v1.CSIDriver ended with: very short watch: vendor/k8s.io/client-go/informers/factory.go:159: Unexpected watch close - watch lasted less than a second and no items received Jan 16 09:01:38.976291 kubelet[1949]: W0116 09:01:38.975945 1949 reflector.go:462] vendor/k8s.io/client-go/informers/factory.go:159: watch of *v1.RuntimeClass ended with: very short watch: vendor/k8s.io/client-go/informers/factory.go:159: Unexpected watch close - watch lasted less than a second and no items received Jan 16 09:01:39.029630 kubelet[1949]: E0116 09:01:39.029528 1949 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Jan 16 09:01:39.038780 kubelet[1949]: E0116 09:01:39.038670 1949 kubelet_node_status.go:462] "Error getting the current node from lister" err="node \"143.110.229.235\" not found" Jan 16 09:01:39.139080 kubelet[1949]: E0116 09:01:39.138906 1949 kubelet_node_status.go:462] "Error getting the current node from lister" err="node \"143.110.229.235\" not found" Jan 16 09:01:39.239399 kubelet[1949]: E0116 09:01:39.239319 1949 kubelet_node_status.go:462] "Error getting the current node from lister" err="node \"143.110.229.235\" not found" Jan 16 09:01:39.339753 kubelet[1949]: E0116 09:01:39.339636 1949 kubelet_node_status.go:462] "Error getting the current node from lister" err="node \"143.110.229.235\" not found" Jan 16 09:01:39.433505 sudo[1804]: pam_unix(sudo:session): session closed for user root Jan 16 09:01:39.439098 sshd[1803]: Connection closed by 
147.75.109.163 port 54772 Jan 16 09:01:39.438974 sshd-session[1797]: pam_unix(sshd:session): session closed for user core Jan 16 09:01:39.445496 kubelet[1949]: I0116 09:01:39.444874 1949 kuberuntime_manager.go:1529] "Updating runtime config through cri with podcidr" CIDR="192.168.1.0/24" Jan 16 09:01:39.447982 containerd[1600]: time="2025-01-16T09:01:39.447775982Z" level=info msg="No cni config template is specified, wait for other system components to drop the config." Jan 16 09:01:39.449660 systemd[1]: sshd@6-143.110.229.235:22-147.75.109.163:54772.service: Deactivated successfully. Jan 16 09:01:39.452848 kubelet[1949]: I0116 09:01:39.452118 1949 kubelet_network.go:61] "Updating Pod CIDR" originalPodCIDR="" newPodCIDR="192.168.1.0/24" Jan 16 09:01:39.456875 systemd[1]: session-7.scope: Deactivated successfully. Jan 16 09:01:39.461282 systemd-logind[1569]: Session 7 logged out. Waiting for processes to exit. Jan 16 09:01:39.464149 systemd-logind[1569]: Removed session 7. Jan 16 09:01:40.031006 kubelet[1949]: E0116 09:01:40.030869 1949 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Jan 16 09:01:40.032264 kubelet[1949]: I0116 09:01:40.031815 1949 apiserver.go:52] "Watching apiserver" Jan 16 09:01:40.042010 kubelet[1949]: I0116 09:01:40.041932 1949 topology_manager.go:215] "Topology Admit Handler" podUID="5f7fd5e7-27fc-405d-8080-db1fd7edfbc2" podNamespace="kube-system" podName="cilium-vw4db" Jan 16 09:01:40.042234 kubelet[1949]: I0116 09:01:40.042200 1949 topology_manager.go:215] "Topology Admit Handler" podUID="e574f080-d98e-4ebb-9b42-886ede20f751" podNamespace="kube-system" podName="kube-proxy-vqv9d" Jan 16 09:01:40.070195 kubelet[1949]: I0116 09:01:40.069515 1949 desired_state_of_world_populator.go:159] "Finished populating initial desired state of world" Jan 16 09:01:40.079877 kubelet[1949]: I0116 09:01:40.079828 1949 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"etc-cni-netd\" (UniqueName: \"kubernetes.io/host-path/5f7fd5e7-27fc-405d-8080-db1fd7edfbc2-etc-cni-netd\") pod \"cilium-vw4db\" (UID: \"5f7fd5e7-27fc-405d-8080-db1fd7edfbc2\") " pod="kube-system/cilium-vw4db" Jan 16 09:01:40.079877 kubelet[1949]: I0116 09:01:40.079892 1949 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"xtables-lock\" (UniqueName: \"kubernetes.io/host-path/5f7fd5e7-27fc-405d-8080-db1fd7edfbc2-xtables-lock\") pod \"cilium-vw4db\" (UID: \"5f7fd5e7-27fc-405d-8080-db1fd7edfbc2\") " pod="kube-system/cilium-vw4db" Jan 16 09:01:40.080133 kubelet[1949]: I0116 09:01:40.079924 1949 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"clustermesh-secrets\" (UniqueName: \"kubernetes.io/secret/5f7fd5e7-27fc-405d-8080-db1fd7edfbc2-clustermesh-secrets\") pod \"cilium-vw4db\" (UID: \"5f7fd5e7-27fc-405d-8080-db1fd7edfbc2\") " pod="kube-system/cilium-vw4db" Jan 16 09:01:40.080133 kubelet[1949]: I0116 09:01:40.079945 1949 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-proc-sys-net\" (UniqueName: \"kubernetes.io/host-path/5f7fd5e7-27fc-405d-8080-db1fd7edfbc2-host-proc-sys-net\") pod \"cilium-vw4db\" (UID: \"5f7fd5e7-27fc-405d-8080-db1fd7edfbc2\") " pod="kube-system/cilium-vw4db" Jan 16 09:01:40.080133 kubelet[1949]: I0116 09:01:40.079982 1949 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-proc-sys-kernel\" 
(UniqueName: \"kubernetes.io/host-path/5f7fd5e7-27fc-405d-8080-db1fd7edfbc2-host-proc-sys-kernel\") pod \"cilium-vw4db\" (UID: \"5f7fd5e7-27fc-405d-8080-db1fd7edfbc2\") " pod="kube-system/cilium-vw4db" Jan 16 09:01:40.080133 kubelet[1949]: I0116 09:01:40.080001 1949 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cilium-run\" (UniqueName: \"kubernetes.io/host-path/5f7fd5e7-27fc-405d-8080-db1fd7edfbc2-cilium-run\") pod \"cilium-vw4db\" (UID: \"5f7fd5e7-27fc-405d-8080-db1fd7edfbc2\") " pod="kube-system/cilium-vw4db" Jan 16 09:01:40.080133 kubelet[1949]: I0116 09:01:40.080020 1949 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cni-path\" (UniqueName: \"kubernetes.io/host-path/5f7fd5e7-27fc-405d-8080-db1fd7edfbc2-cni-path\") pod \"cilium-vw4db\" (UID: \"5f7fd5e7-27fc-405d-8080-db1fd7edfbc2\") " pod="kube-system/cilium-vw4db" Jan 16 09:01:40.080133 kubelet[1949]: I0116 09:01:40.080039 1949 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cilium-config-path\" (UniqueName: \"kubernetes.io/configmap/5f7fd5e7-27fc-405d-8080-db1fd7edfbc2-cilium-config-path\") pod \"cilium-vw4db\" (UID: \"5f7fd5e7-27fc-405d-8080-db1fd7edfbc2\") " pod="kube-system/cilium-vw4db" Jan 16 09:01:40.080586 kubelet[1949]: I0116 09:01:40.080063 1949 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-lgbnm\" (UniqueName: \"kubernetes.io/projected/e574f080-d98e-4ebb-9b42-886ede20f751-kube-api-access-lgbnm\") pod \"kube-proxy-vqv9d\" (UID: \"e574f080-d98e-4ebb-9b42-886ede20f751\") " pod="kube-system/kube-proxy-vqv9d" Jan 16 09:01:40.080586 kubelet[1949]: I0116 09:01:40.080089 1949 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"bpf-maps\" (UniqueName: \"kubernetes.io/host-path/5f7fd5e7-27fc-405d-8080-db1fd7edfbc2-bpf-maps\") pod \"cilium-vw4db\" (UID: \"5f7fd5e7-27fc-405d-8080-db1fd7edfbc2\") " pod="kube-system/cilium-vw4db" Jan 16 09:01:40.080586 kubelet[1949]: I0116 09:01:40.080118 1949 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/5f7fd5e7-27fc-405d-8080-db1fd7edfbc2-lib-modules\") pod \"cilium-vw4db\" (UID: \"5f7fd5e7-27fc-405d-8080-db1fd7edfbc2\") " pod="kube-system/cilium-vw4db" Jan 16 09:01:40.080586 kubelet[1949]: I0116 09:01:40.080160 1949 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"xtables-lock\" (UniqueName: \"kubernetes.io/host-path/e574f080-d98e-4ebb-9b42-886ede20f751-xtables-lock\") pod \"kube-proxy-vqv9d\" (UID: \"e574f080-d98e-4ebb-9b42-886ede20f751\") " pod="kube-system/kube-proxy-vqv9d" Jan 16 09:01:40.080586 kubelet[1949]: I0116 09:01:40.080180 1949 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/e574f080-d98e-4ebb-9b42-886ede20f751-lib-modules\") pod \"kube-proxy-vqv9d\" (UID: \"e574f080-d98e-4ebb-9b42-886ede20f751\") " pod="kube-system/kube-proxy-vqv9d" Jan 16 09:01:40.080586 kubelet[1949]: I0116 09:01:40.080199 1949 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cilium-cgroup\" (UniqueName: \"kubernetes.io/host-path/5f7fd5e7-27fc-405d-8080-db1fd7edfbc2-cilium-cgroup\") pod \"cilium-vw4db\" (UID: \"5f7fd5e7-27fc-405d-8080-db1fd7edfbc2\") " 
pod="kube-system/cilium-vw4db" Jan 16 09:01:40.080862 kubelet[1949]: I0116 09:01:40.080231 1949 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"hubble-tls\" (UniqueName: \"kubernetes.io/projected/5f7fd5e7-27fc-405d-8080-db1fd7edfbc2-hubble-tls\") pod \"cilium-vw4db\" (UID: \"5f7fd5e7-27fc-405d-8080-db1fd7edfbc2\") " pod="kube-system/cilium-vw4db" Jan 16 09:01:40.080862 kubelet[1949]: I0116 09:01:40.080253 1949 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-2hsgs\" (UniqueName: \"kubernetes.io/projected/5f7fd5e7-27fc-405d-8080-db1fd7edfbc2-kube-api-access-2hsgs\") pod \"cilium-vw4db\" (UID: \"5f7fd5e7-27fc-405d-8080-db1fd7edfbc2\") " pod="kube-system/cilium-vw4db" Jan 16 09:01:40.080862 kubelet[1949]: I0116 09:01:40.080279 1949 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-proxy\" (UniqueName: \"kubernetes.io/configmap/e574f080-d98e-4ebb-9b42-886ede20f751-kube-proxy\") pod \"kube-proxy-vqv9d\" (UID: \"e574f080-d98e-4ebb-9b42-886ede20f751\") " pod="kube-system/kube-proxy-vqv9d" Jan 16 09:01:40.080862 kubelet[1949]: I0116 09:01:40.080299 1949 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"hostproc\" (UniqueName: \"kubernetes.io/host-path/5f7fd5e7-27fc-405d-8080-db1fd7edfbc2-hostproc\") pod \"cilium-vw4db\" (UID: \"5f7fd5e7-27fc-405d-8080-db1fd7edfbc2\") " pod="kube-system/cilium-vw4db" Jan 16 09:01:40.347837 kubelet[1949]: E0116 09:01:40.347751 1949 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 67.207.67.2 67.207.67.3 67.207.67.2" Jan 16 09:01:40.349224 kubelet[1949]: E0116 09:01:40.349179 1949 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 67.207.67.2 67.207.67.3 67.207.67.2" Jan 16 09:01:40.350897 containerd[1600]: time="2025-01-16T09:01:40.350328792Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-proxy-vqv9d,Uid:e574f080-d98e-4ebb-9b42-886ede20f751,Namespace:kube-system,Attempt:0,}" Jan 16 09:01:40.350897 containerd[1600]: time="2025-01-16T09:01:40.350540645Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:cilium-vw4db,Uid:5f7fd5e7-27fc-405d-8080-db1fd7edfbc2,Namespace:kube-system,Attempt:0,}" Jan 16 09:01:41.020989 containerd[1600]: time="2025-01-16T09:01:41.019242675Z" level=info msg="ImageCreate event name:\"registry.k8s.io/pause:3.8\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"} labels:{key:\"io.cri-containerd.pinned\" value:\"pinned\"}" Jan 16 09:01:41.022608 containerd[1600]: time="2025-01-16T09:01:41.022521738Z" level=info msg="stop pulling image registry.k8s.io/pause:3.8: active requests=0, bytes read=0" Jan 16 09:01:41.022843 containerd[1600]: time="2025-01-16T09:01:41.022703789Z" level=info msg="ImageUpdate event name:\"registry.k8s.io/pause:3.8\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"} labels:{key:\"io.cri-containerd.pinned\" value:\"pinned\"}" Jan 16 09:01:41.024449 containerd[1600]: time="2025-01-16T09:01:41.023374389Z" level=info msg="stop pulling image registry.k8s.io/pause:3.8: active requests=0, bytes read=312056" Jan 16 09:01:41.024449 containerd[1600]: time="2025-01-16T09:01:41.023728398Z" level=info msg="ImageCreate event 
name:\"sha256:4873874c08efc72e9729683a83ffbb7502ee729e9a5ac097723806ea7fa13517\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"} labels:{key:\"io.cri-containerd.pinned\" value:\"pinned\"}" Jan 16 09:01:41.032016 kubelet[1949]: E0116 09:01:41.031238 1949 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Jan 16 09:01:41.033102 containerd[1600]: time="2025-01-16T09:01:41.032076496Z" level=info msg="ImageCreate event name:\"registry.k8s.io/pause@sha256:9001185023633d17a2f98ff69b6ff2615b8ea02a825adffa40422f51dfdcde9d\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"} labels:{key:\"io.cri-containerd.pinned\" value:\"pinned\"}" Jan 16 09:01:41.037006 containerd[1600]: time="2025-01-16T09:01:41.036383861Z" level=info msg="Pulled image \"registry.k8s.io/pause:3.8\" with image id \"sha256:4873874c08efc72e9729683a83ffbb7502ee729e9a5ac097723806ea7fa13517\", repo tag \"registry.k8s.io/pause:3.8\", repo digest \"registry.k8s.io/pause@sha256:9001185023633d17a2f98ff69b6ff2615b8ea02a825adffa40422f51dfdcde9d\", size \"311286\" in 685.747127ms" Jan 16 09:01:41.039072 containerd[1600]: time="2025-01-16T09:01:41.039003874Z" level=info msg="Pulled image \"registry.k8s.io/pause:3.8\" with image id \"sha256:4873874c08efc72e9729683a83ffbb7502ee729e9a5ac097723806ea7fa13517\", repo tag \"registry.k8s.io/pause:3.8\", repo digest \"registry.k8s.io/pause@sha256:9001185023633d17a2f98ff69b6ff2615b8ea02a825adffa40422f51dfdcde9d\", size \"311286\" in 688.330224ms" Jan 16 09:01:41.197809 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount3955704843.mount: Deactivated successfully. Jan 16 09:01:41.306135 containerd[1600]: time="2025-01-16T09:01:41.305539912Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Jan 16 09:01:41.309288 containerd[1600]: time="2025-01-16T09:01:41.307561714Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Jan 16 09:01:41.311564 containerd[1600]: time="2025-01-16T09:01:41.310607138Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Jan 16 09:01:41.311564 containerd[1600]: time="2025-01-16T09:01:41.310698942Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Jan 16 09:01:41.311564 containerd[1600]: time="2025-01-16T09:01:41.310725124Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Jan 16 09:01:41.311564 containerd[1600]: time="2025-01-16T09:01:41.310871463Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Jan 16 09:01:41.313662 containerd[1600]: time="2025-01-16T09:01:41.313553221Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Jan 16 09:01:41.322453 containerd[1600]: time="2025-01-16T09:01:41.322256426Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." 
runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Jan 16 09:01:41.526790 containerd[1600]: time="2025-01-16T09:01:41.526723348Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-proxy-vqv9d,Uid:e574f080-d98e-4ebb-9b42-886ede20f751,Namespace:kube-system,Attempt:0,} returns sandbox id \"7f11e608ec481306b93669ff108d2fcebe768a3564e3b1cc2cfe7682b698a05c\"" Jan 16 09:01:41.529890 kubelet[1949]: E0116 09:01:41.529805 1949 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 67.207.67.2 67.207.67.3 67.207.67.2" Jan 16 09:01:41.535733 containerd[1600]: time="2025-01-16T09:01:41.534562274Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:cilium-vw4db,Uid:5f7fd5e7-27fc-405d-8080-db1fd7edfbc2,Namespace:kube-system,Attempt:0,} returns sandbox id \"8dbc76b1c0b3efab19927556f87f80902534999c19fa437311814b228a3dbe1b\"" Jan 16 09:01:41.536022 kubelet[1949]: E0116 09:01:41.535970 1949 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 67.207.67.2 67.207.67.3 67.207.67.2" Jan 16 09:01:41.542523 containerd[1600]: time="2025-01-16T09:01:41.542135710Z" level=info msg="PullImage \"registry.k8s.io/kube-proxy:v1.29.13\"" Jan 16 09:01:42.032310 kubelet[1949]: E0116 09:01:42.032238 1949 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Jan 16 09:01:43.033505 kubelet[1949]: E0116 09:01:43.033381 1949 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Jan 16 09:01:43.362228 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount1897656756.mount: Deactivated successfully. 
Jan 16 09:01:44.034025 kubelet[1949]: E0116 09:01:44.033947 1949 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Jan 16 09:01:44.328262 containerd[1600]: time="2025-01-16T09:01:44.327161536Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-proxy:v1.29.13\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jan 16 09:01:44.329066 containerd[1600]: time="2025-01-16T09:01:44.329025128Z" level=info msg="stop pulling image registry.k8s.io/kube-proxy:v1.29.13: active requests=0, bytes read=28620941" Jan 16 09:01:44.329771 containerd[1600]: time="2025-01-16T09:01:44.329735281Z" level=info msg="ImageCreate event name:\"sha256:f20cf1600da6cce7b7d3fdd3b5ff91243983ea8be3907cccaee1a956770a2f15\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jan 16 09:01:44.334005 containerd[1600]: time="2025-01-16T09:01:44.333895905Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-proxy@sha256:dd45846de733434501e436638a7a240f2d379bf0a6bb0404a7684e0cf52c4011\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jan 16 09:01:44.334459 containerd[1600]: time="2025-01-16T09:01:44.334400256Z" level=info msg="Pulled image \"registry.k8s.io/kube-proxy:v1.29.13\" with image id \"sha256:f20cf1600da6cce7b7d3fdd3b5ff91243983ea8be3907cccaee1a956770a2f15\", repo tag \"registry.k8s.io/kube-proxy:v1.29.13\", repo digest \"registry.k8s.io/kube-proxy@sha256:dd45846de733434501e436638a7a240f2d379bf0a6bb0404a7684e0cf52c4011\", size \"28619960\" in 2.792221488s" Jan 16 09:01:44.334568 containerd[1600]: time="2025-01-16T09:01:44.334483169Z" level=info msg="PullImage \"registry.k8s.io/kube-proxy:v1.29.13\" returns image reference \"sha256:f20cf1600da6cce7b7d3fdd3b5ff91243983ea8be3907cccaee1a956770a2f15\"" Jan 16 09:01:44.336984 containerd[1600]: time="2025-01-16T09:01:44.336920358Z" level=info msg="PullImage \"quay.io/cilium/cilium:v1.12.5@sha256:06ce2b0a0a472e73334a7504ee5c5d8b2e2d7b72ef728ad94e564740dd505be5\"" Jan 16 09:01:44.339038 containerd[1600]: time="2025-01-16T09:01:44.338942824Z" level=info msg="CreateContainer within sandbox \"7f11e608ec481306b93669ff108d2fcebe768a3564e3b1cc2cfe7682b698a05c\" for container &ContainerMetadata{Name:kube-proxy,Attempt:0,}" Jan 16 09:01:44.370761 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount1674155311.mount: Deactivated successfully. Jan 16 09:01:44.374507 containerd[1600]: time="2025-01-16T09:01:44.374349795Z" level=info msg="CreateContainer within sandbox \"7f11e608ec481306b93669ff108d2fcebe768a3564e3b1cc2cfe7682b698a05c\" for &ContainerMetadata{Name:kube-proxy,Attempt:0,} returns container id \"7aace2472de45a580feb5cd36c7731ba578af79a177ed313be34e336e61ab133\"" Jan 16 09:01:44.377270 containerd[1600]: time="2025-01-16T09:01:44.376748810Z" level=info msg="StartContainer for \"7aace2472de45a580feb5cd36c7731ba578af79a177ed313be34e336e61ab133\"" Jan 16 09:01:44.448685 systemd[1]: run-containerd-runc-k8s.io-7aace2472de45a580feb5cd36c7731ba578af79a177ed313be34e336e61ab133-runc.jWjT0t.mount: Deactivated successfully. 
Jan 16 09:01:44.527188 containerd[1600]: time="2025-01-16T09:01:44.527114388Z" level=info msg="StartContainer for \"7aace2472de45a580feb5cd36c7731ba578af79a177ed313be34e336e61ab133\" returns successfully" Jan 16 09:01:45.034481 kubelet[1949]: E0116 09:01:45.034406 1949 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Jan 16 09:01:45.344487 kubelet[1949]: E0116 09:01:45.342940 1949 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 67.207.67.2 67.207.67.3 67.207.67.2" Jan 16 09:01:46.036475 kubelet[1949]: E0116 09:01:46.036177 1949 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Jan 16 09:01:46.352073 kubelet[1949]: E0116 09:01:46.350651 1949 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 67.207.67.2 67.207.67.3 67.207.67.2" Jan 16 09:01:46.517344 systemd-resolved[1480]: Using degraded feature set UDP instead of UDP+EDNS0 for DNS server 67.207.67.3. Jan 16 09:01:47.036415 kubelet[1949]: E0116 09:01:47.036357 1949 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Jan 16 09:01:48.037108 kubelet[1949]: E0116 09:01:48.036823 1949 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Jan 16 09:01:49.040456 kubelet[1949]: E0116 09:01:49.040320 1949 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Jan 16 09:01:50.042596 kubelet[1949]: E0116 09:01:50.041368 1949 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Jan 16 09:01:51.041611 kubelet[1949]: E0116 09:01:51.041502 1949 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Jan 16 09:01:52.051989 kubelet[1949]: E0116 09:01:52.042611 1949 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Jan 16 09:01:52.199559 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount346444517.mount: Deactivated successfully. 
Jan 16 09:01:53.047212 kubelet[1949]: E0116 09:01:53.047112 1949 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Jan 16 09:01:54.048017 kubelet[1949]: E0116 09:01:54.047859 1949 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Jan 16 09:01:55.049164 kubelet[1949]: E0116 09:01:55.049099 1949 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Jan 16 09:01:56.052382 kubelet[1949]: E0116 09:01:56.052322 1949 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Jan 16 09:01:57.053896 kubelet[1949]: E0116 09:01:57.053840 1949 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Jan 16 09:01:57.726039 containerd[1600]: time="2025-01-16T09:01:57.725767810Z" level=info msg="ImageCreate event name:\"quay.io/cilium/cilium@sha256:06ce2b0a0a472e73334a7504ee5c5d8b2e2d7b72ef728ad94e564740dd505be5\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jan 16 09:01:57.729158 containerd[1600]: time="2025-01-16T09:01:57.729044901Z" level=info msg="stop pulling image quay.io/cilium/cilium@sha256:06ce2b0a0a472e73334a7504ee5c5d8b2e2d7b72ef728ad94e564740dd505be5: active requests=0, bytes read=166735379" Jan 16 09:01:57.730855 containerd[1600]: time="2025-01-16T09:01:57.730759708Z" level=info msg="ImageCreate event name:\"sha256:3e35b3e9f295e7748482d40ed499b0ff7961f1f128d479d8e6682b3245bba69b\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jan 16 09:01:57.734390 containerd[1600]: time="2025-01-16T09:01:57.733619580Z" level=info msg="Pulled image \"quay.io/cilium/cilium:v1.12.5@sha256:06ce2b0a0a472e73334a7504ee5c5d8b2e2d7b72ef728ad94e564740dd505be5\" with image id \"sha256:3e35b3e9f295e7748482d40ed499b0ff7961f1f128d479d8e6682b3245bba69b\", repo tag \"\", repo digest \"quay.io/cilium/cilium@sha256:06ce2b0a0a472e73334a7504ee5c5d8b2e2d7b72ef728ad94e564740dd505be5\", size \"166719855\" in 13.396444111s" Jan 16 09:01:57.734390 containerd[1600]: time="2025-01-16T09:01:57.733686066Z" level=info msg="PullImage \"quay.io/cilium/cilium:v1.12.5@sha256:06ce2b0a0a472e73334a7504ee5c5d8b2e2d7b72ef728ad94e564740dd505be5\" returns image reference \"sha256:3e35b3e9f295e7748482d40ed499b0ff7961f1f128d479d8e6682b3245bba69b\"" Jan 16 09:01:57.738318 containerd[1600]: time="2025-01-16T09:01:57.738095714Z" level=info msg="CreateContainer within sandbox \"8dbc76b1c0b3efab19927556f87f80902534999c19fa437311814b228a3dbe1b\" for container &ContainerMetadata{Name:mount-cgroup,Attempt:0,}" Jan 16 09:01:57.763517 containerd[1600]: time="2025-01-16T09:01:57.763426731Z" level=info msg="CreateContainer within sandbox \"8dbc76b1c0b3efab19927556f87f80902534999c19fa437311814b228a3dbe1b\" for &ContainerMetadata{Name:mount-cgroup,Attempt:0,} returns container id \"a84af7cfdf48473bbd96b4791ba0c9960999a2ab6515e1032ec4026f097a0ab4\"" Jan 16 09:01:57.764546 containerd[1600]: time="2025-01-16T09:01:57.764509428Z" level=info msg="StartContainer for \"a84af7cfdf48473bbd96b4791ba0c9960999a2ab6515e1032ec4026f097a0ab4\"" Jan 16 09:01:57.892020 containerd[1600]: time="2025-01-16T09:01:57.890449648Z" level=info msg="StartContainer for \"a84af7cfdf48473bbd96b4791ba0c9960999a2ab6515e1032ec4026f097a0ab4\" returns successfully" Jan 16 09:01:58.030438 containerd[1600]: time="2025-01-16T09:01:58.022873144Z" level=info msg="shim 
disconnected" id=a84af7cfdf48473bbd96b4791ba0c9960999a2ab6515e1032ec4026f097a0ab4 namespace=k8s.io Jan 16 09:01:58.030438 containerd[1600]: time="2025-01-16T09:01:58.023003374Z" level=warning msg="cleaning up after shim disconnected" id=a84af7cfdf48473bbd96b4791ba0c9960999a2ab6515e1032ec4026f097a0ab4 namespace=k8s.io Jan 16 09:01:58.030438 containerd[1600]: time="2025-01-16T09:01:58.023023648Z" level=info msg="cleaning up dead shim" namespace=k8s.io Jan 16 09:01:58.033359 kubelet[1949]: E0116 09:01:58.029046 1949 file.go:104] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Jan 16 09:01:58.055495 kubelet[1949]: E0116 09:01:58.055357 1949 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Jan 16 09:01:58.410071 kubelet[1949]: E0116 09:01:58.410017 1949 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 67.207.67.2 67.207.67.3 67.207.67.2" Jan 16 09:01:58.422339 containerd[1600]: time="2025-01-16T09:01:58.422246986Z" level=info msg="CreateContainer within sandbox \"8dbc76b1c0b3efab19927556f87f80902534999c19fa437311814b228a3dbe1b\" for container &ContainerMetadata{Name:apply-sysctl-overwrites,Attempt:0,}" Jan 16 09:01:58.444232 containerd[1600]: time="2025-01-16T09:01:58.444141482Z" level=info msg="CreateContainer within sandbox \"8dbc76b1c0b3efab19927556f87f80902534999c19fa437311814b228a3dbe1b\" for &ContainerMetadata{Name:apply-sysctl-overwrites,Attempt:0,} returns container id \"8a0dda2cac5c314c6b3184c5928db44d77e1102e3b6600b6d25bac6acade549b\"" Jan 16 09:01:58.446138 containerd[1600]: time="2025-01-16T09:01:58.445717833Z" level=info msg="StartContainer for \"8a0dda2cac5c314c6b3184c5928db44d77e1102e3b6600b6d25bac6acade549b\"" Jan 16 09:01:58.446962 kubelet[1949]: I0116 09:01:58.446923 1949 pod_startup_latency_tracker.go:102] "Observed pod startup duration" pod="kube-system/kube-proxy-vqv9d" podStartSLOduration=17.642753184 podStartE2EDuration="20.446811434s" podCreationTimestamp="2025-01-16 09:01:38 +0000 UTC" firstStartedPulling="2025-01-16 09:01:41.531398725 +0000 UTC m=+4.653283372" lastFinishedPulling="2025-01-16 09:01:44.335456993 +0000 UTC m=+7.457341622" observedRunningTime="2025-01-16 09:01:45.385629866 +0000 UTC m=+8.507514521" watchObservedRunningTime="2025-01-16 09:01:58.446811434 +0000 UTC m=+21.568696087" Jan 16 09:01:58.553633 containerd[1600]: time="2025-01-16T09:01:58.553569930Z" level=info msg="StartContainer for \"8a0dda2cac5c314c6b3184c5928db44d77e1102e3b6600b6d25bac6acade549b\" returns successfully" Jan 16 09:01:58.577741 systemd[1]: systemd-sysctl.service: Deactivated successfully. Jan 16 09:01:58.578608 systemd[1]: Stopped systemd-sysctl.service - Apply Kernel Variables. Jan 16 09:01:58.578728 systemd[1]: Stopping systemd-sysctl.service - Apply Kernel Variables... Jan 16 09:01:58.591835 systemd[1]: Starting systemd-sysctl.service - Apply Kernel Variables... 
Jan 16 09:01:58.656527 containerd[1600]: time="2025-01-16T09:01:58.656069804Z" level=info msg="shim disconnected" id=8a0dda2cac5c314c6b3184c5928db44d77e1102e3b6600b6d25bac6acade549b namespace=k8s.io Jan 16 09:01:58.656527 containerd[1600]: time="2025-01-16T09:01:58.656208870Z" level=warning msg="cleaning up after shim disconnected" id=8a0dda2cac5c314c6b3184c5928db44d77e1102e3b6600b6d25bac6acade549b namespace=k8s.io Jan 16 09:01:58.656527 containerd[1600]: time="2025-01-16T09:01:58.656229281Z" level=info msg="cleaning up dead shim" namespace=k8s.io Jan 16 09:01:58.661949 systemd[1]: Finished systemd-sysctl.service - Apply Kernel Variables. Jan 16 09:01:58.758513 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-a84af7cfdf48473bbd96b4791ba0c9960999a2ab6515e1032ec4026f097a0ab4-rootfs.mount: Deactivated successfully. Jan 16 09:01:59.058333 kubelet[1949]: E0116 09:01:59.056286 1949 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Jan 16 09:01:59.415990 kubelet[1949]: E0116 09:01:59.415245 1949 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 67.207.67.2 67.207.67.3 67.207.67.2" Jan 16 09:01:59.419291 containerd[1600]: time="2025-01-16T09:01:59.418824434Z" level=info msg="CreateContainer within sandbox \"8dbc76b1c0b3efab19927556f87f80902534999c19fa437311814b228a3dbe1b\" for container &ContainerMetadata{Name:mount-bpf-fs,Attempt:0,}" Jan 16 09:01:59.471091 containerd[1600]: time="2025-01-16T09:01:59.469062220Z" level=info msg="CreateContainer within sandbox \"8dbc76b1c0b3efab19927556f87f80902534999c19fa437311814b228a3dbe1b\" for &ContainerMetadata{Name:mount-bpf-fs,Attempt:0,} returns container id \"203e3d8be195940057efdeb6ca5dc73ca3022453b6a9c7f82508d4955fad4042\"" Jan 16 09:01:59.472553 containerd[1600]: time="2025-01-16T09:01:59.472331600Z" level=info msg="StartContainer for \"203e3d8be195940057efdeb6ca5dc73ca3022453b6a9c7f82508d4955fad4042\"" Jan 16 09:01:59.612829 containerd[1600]: time="2025-01-16T09:01:59.612032139Z" level=info msg="StartContainer for \"203e3d8be195940057efdeb6ca5dc73ca3022453b6a9c7f82508d4955fad4042\" returns successfully" Jan 16 09:01:59.659180 containerd[1600]: time="2025-01-16T09:01:59.658821134Z" level=info msg="shim disconnected" id=203e3d8be195940057efdeb6ca5dc73ca3022453b6a9c7f82508d4955fad4042 namespace=k8s.io Jan 16 09:01:59.659180 containerd[1600]: time="2025-01-16T09:01:59.658900833Z" level=warning msg="cleaning up after shim disconnected" id=203e3d8be195940057efdeb6ca5dc73ca3022453b6a9c7f82508d4955fad4042 namespace=k8s.io Jan 16 09:01:59.659180 containerd[1600]: time="2025-01-16T09:01:59.658913557Z" level=info msg="cleaning up dead shim" namespace=k8s.io Jan 16 09:01:59.759483 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-203e3d8be195940057efdeb6ca5dc73ca3022453b6a9c7f82508d4955fad4042-rootfs.mount: Deactivated successfully. 
Jan 16 09:02:00.064896 kubelet[1949]: E0116 09:02:00.057151 1949 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Jan 16 09:02:00.424392 kubelet[1949]: E0116 09:02:00.423430 1949 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 67.207.67.2 67.207.67.3 67.207.67.2" Jan 16 09:02:00.428137 containerd[1600]: time="2025-01-16T09:02:00.428039076Z" level=info msg="CreateContainer within sandbox \"8dbc76b1c0b3efab19927556f87f80902534999c19fa437311814b228a3dbe1b\" for container &ContainerMetadata{Name:clean-cilium-state,Attempt:0,}" Jan 16 09:02:00.466073 containerd[1600]: time="2025-01-16T09:02:00.465284320Z" level=info msg="CreateContainer within sandbox \"8dbc76b1c0b3efab19927556f87f80902534999c19fa437311814b228a3dbe1b\" for &ContainerMetadata{Name:clean-cilium-state,Attempt:0,} returns container id \"c3add8e0659b1dea214adff5b8c17a68ce1c0c2482f6c22f91aa59871db2e414\"" Jan 16 09:02:00.471722 containerd[1600]: time="2025-01-16T09:02:00.466887203Z" level=info msg="StartContainer for \"c3add8e0659b1dea214adff5b8c17a68ce1c0c2482f6c22f91aa59871db2e414\"" Jan 16 09:02:00.530481 systemd[1]: run-containerd-runc-k8s.io-c3add8e0659b1dea214adff5b8c17a68ce1c0c2482f6c22f91aa59871db2e414-runc.ABnwMd.mount: Deactivated successfully. Jan 16 09:02:00.608063 containerd[1600]: time="2025-01-16T09:02:00.608008091Z" level=info msg="StartContainer for \"c3add8e0659b1dea214adff5b8c17a68ce1c0c2482f6c22f91aa59871db2e414\" returns successfully" Jan 16 09:02:00.670020 containerd[1600]: time="2025-01-16T09:02:00.669797204Z" level=info msg="shim disconnected" id=c3add8e0659b1dea214adff5b8c17a68ce1c0c2482f6c22f91aa59871db2e414 namespace=k8s.io Jan 16 09:02:00.670020 containerd[1600]: time="2025-01-16T09:02:00.669925898Z" level=warning msg="cleaning up after shim disconnected" id=c3add8e0659b1dea214adff5b8c17a68ce1c0c2482f6c22f91aa59871db2e414 namespace=k8s.io Jan 16 09:02:00.670020 containerd[1600]: time="2025-01-16T09:02:00.669941196Z" level=info msg="cleaning up dead shim" namespace=k8s.io Jan 16 09:02:00.719997 containerd[1600]: time="2025-01-16T09:02:00.717881331Z" level=warning msg="cleanup warnings time=\"2025-01-16T09:02:00Z\" level=warning msg=\"failed to remove runc container\" error=\"runc did not terminate successfully: exit status 255: \" runtime=io.containerd.runc.v2\n" namespace=k8s.io Jan 16 09:02:00.759038 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-c3add8e0659b1dea214adff5b8c17a68ce1c0c2482f6c22f91aa59871db2e414-rootfs.mount: Deactivated successfully. 
Jan 16 09:02:01.065938 kubelet[1949]: E0116 09:02:01.065859 1949 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Jan 16 09:02:01.437307 kubelet[1949]: E0116 09:02:01.436345 1949 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 67.207.67.2 67.207.67.3 67.207.67.2" Jan 16 09:02:01.442006 containerd[1600]: time="2025-01-16T09:02:01.441719469Z" level=info msg="CreateContainer within sandbox \"8dbc76b1c0b3efab19927556f87f80902534999c19fa437311814b228a3dbe1b\" for container &ContainerMetadata{Name:cilium-agent,Attempt:0,}" Jan 16 09:02:01.494766 containerd[1600]: time="2025-01-16T09:02:01.494678499Z" level=info msg="CreateContainer within sandbox \"8dbc76b1c0b3efab19927556f87f80902534999c19fa437311814b228a3dbe1b\" for &ContainerMetadata{Name:cilium-agent,Attempt:0,} returns container id \"428d2254e2ab9e3c32550b011a5bb3edb1cfad3ca0f3bec43b0551a4e7ccfa25\"" Jan 16 09:02:01.496144 containerd[1600]: time="2025-01-16T09:02:01.496004054Z" level=info msg="StartContainer for \"428d2254e2ab9e3c32550b011a5bb3edb1cfad3ca0f3bec43b0551a4e7ccfa25\"" Jan 16 09:02:01.727773 containerd[1600]: time="2025-01-16T09:02:01.727432264Z" level=info msg="StartContainer for \"428d2254e2ab9e3c32550b011a5bb3edb1cfad3ca0f3bec43b0551a4e7ccfa25\" returns successfully" Jan 16 09:02:01.779111 systemd[1]: run-containerd-runc-k8s.io-428d2254e2ab9e3c32550b011a5bb3edb1cfad3ca0f3bec43b0551a4e7ccfa25-runc.RoJ5gG.mount: Deactivated successfully. Jan 16 09:02:01.819113 systemd[1]: run-containerd-runc-k8s.io-428d2254e2ab9e3c32550b011a5bb3edb1cfad3ca0f3bec43b0551a4e7ccfa25-runc.Rhj0oh.mount: Deactivated successfully. Jan 16 09:02:02.056617 kubelet[1949]: I0116 09:02:02.056466 1949 kubelet_node_status.go:497] "Fast updating node status as it just became ready" Jan 16 09:02:02.070186 kubelet[1949]: E0116 09:02:02.070135 1949 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Jan 16 09:02:02.454060 kubelet[1949]: E0116 09:02:02.453414 1949 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 67.207.67.2 67.207.67.3 67.207.67.2" Jan 16 09:02:02.495736 kubelet[1949]: I0116 09:02:02.495592 1949 pod_startup_latency_tracker.go:102] "Observed pod startup duration" pod="kube-system/cilium-vw4db" podStartSLOduration=8.298656021 podStartE2EDuration="24.495533209s" podCreationTimestamp="2025-01-16 09:01:38 +0000 UTC" firstStartedPulling="2025-01-16 09:01:41.538031463 +0000 UTC m=+4.659916094" lastFinishedPulling="2025-01-16 09:01:57.734908639 +0000 UTC m=+20.856793282" observedRunningTime="2025-01-16 09:02:02.495207829 +0000 UTC m=+25.617092482" watchObservedRunningTime="2025-01-16 09:02:02.495533209 +0000 UTC m=+25.617417860" Jan 16 09:02:02.804380 kernel: Initializing XFRM netlink socket Jan 16 09:02:03.072200 kubelet[1949]: E0116 09:02:03.072111 1949 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Jan 16 09:02:03.457797 kubelet[1949]: E0116 09:02:03.457090 1949 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 67.207.67.2 67.207.67.3 67.207.67.2" Jan 16 09:02:04.072892 kubelet[1949]: E0116 09:02:04.072766 1949 file_linux.go:61] "Unable to read config path" err="path 
does not exist, ignoring" path="/etc/kubernetes/manifests" Jan 16 09:02:04.463317 kubelet[1949]: E0116 09:02:04.463155 1949 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 67.207.67.2 67.207.67.3 67.207.67.2" Jan 16 09:02:04.661904 systemd-networkd[1228]: cilium_host: Link UP Jan 16 09:02:04.663448 systemd-networkd[1228]: cilium_net: Link UP Jan 16 09:02:04.668587 systemd-networkd[1228]: cilium_net: Gained carrier Jan 16 09:02:04.668870 systemd-networkd[1228]: cilium_host: Gained carrier Jan 16 09:02:04.948212 systemd-networkd[1228]: cilium_vxlan: Link UP Jan 16 09:02:04.948224 systemd-networkd[1228]: cilium_vxlan: Gained carrier Jan 16 09:02:04.977120 systemd-networkd[1228]: cilium_net: Gained IPv6LL Jan 16 09:02:05.073575 kubelet[1949]: E0116 09:02:05.073484 1949 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Jan 16 09:02:05.404391 systemd-networkd[1228]: cilium_host: Gained IPv6LL Jan 16 09:02:05.478721 kernel: NET: Registered PF_ALG protocol family Jan 16 09:02:06.080590 kubelet[1949]: E0116 09:02:06.080224 1949 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Jan 16 09:02:06.550181 systemd-networkd[1228]: cilium_vxlan: Gained IPv6LL Jan 16 09:02:07.080274 systemd-networkd[1228]: lxc_health: Link UP Jan 16 09:02:07.080861 kubelet[1949]: E0116 09:02:07.080429 1949 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Jan 16 09:02:07.099270 systemd-networkd[1228]: lxc_health: Gained carrier Jan 16 09:02:07.488304 kubelet[1949]: I0116 09:02:07.486286 1949 topology_manager.go:215] "Topology Admit Handler" podUID="c6a2ddfb-febc-45a9-ae18-c2c4c6547238" podNamespace="default" podName="nginx-deployment-6d5f899847-r4d2l" Jan 16 09:02:07.630101 kubelet[1949]: I0116 09:02:07.630030 1949 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-n9bj5\" (UniqueName: \"kubernetes.io/projected/c6a2ddfb-febc-45a9-ae18-c2c4c6547238-kube-api-access-n9bj5\") pod \"nginx-deployment-6d5f899847-r4d2l\" (UID: \"c6a2ddfb-febc-45a9-ae18-c2c4c6547238\") " pod="default/nginx-deployment-6d5f899847-r4d2l" Jan 16 09:02:07.797750 containerd[1600]: time="2025-01-16T09:02:07.797559819Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:nginx-deployment-6d5f899847-r4d2l,Uid:c6a2ddfb-febc-45a9-ae18-c2c4c6547238,Namespace:default,Attempt:0,}" Jan 16 09:02:07.903995 kernel: eth0: renamed from tmpcd50b Jan 16 09:02:07.910646 systemd-networkd[1228]: lxc0e6aca73a26f: Link UP Jan 16 09:02:07.917021 systemd-networkd[1228]: lxc0e6aca73a26f: Gained carrier Jan 16 09:02:07.978581 kubelet[1949]: E0116 09:02:07.978089 1949 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 67.207.67.2 67.207.67.3 67.207.67.2" Jan 16 09:02:08.087368 kubelet[1949]: E0116 09:02:08.080805 1949 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Jan 16 09:02:08.482102 kubelet[1949]: E0116 09:02:08.480310 1949 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 67.207.67.2 67.207.67.3 67.207.67.2" Jan 16 09:02:08.916224 systemd-networkd[1228]: 
lxc_health: Gained IPv6LL Jan 16 09:02:09.081322 kubelet[1949]: E0116 09:02:09.081227 1949 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Jan 16 09:02:09.309486 systemd-networkd[1228]: lxc0e6aca73a26f: Gained IPv6LL Jan 16 09:02:09.507244 kubelet[1949]: E0116 09:02:09.505440 1949 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 67.207.67.2 67.207.67.3 67.207.67.2" Jan 16 09:02:10.081510 kubelet[1949]: E0116 09:02:10.081432 1949 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Jan 16 09:02:10.384265 update_engine[1575]: I20250116 09:02:10.381451 1575 update_attempter.cc:509] Updating boot flags... Jan 16 09:02:10.480028 kernel: BTRFS warning: duplicate device /dev/vda3 devid 1 generation 36 scanned by (udev-worker) (2656) Jan 16 09:02:10.652246 kernel: BTRFS warning: duplicate device /dev/vda3 devid 1 generation 36 scanned by (udev-worker) (3019) Jan 16 09:02:11.082298 kubelet[1949]: E0116 09:02:11.082205 1949 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Jan 16 09:02:12.084728 kubelet[1949]: E0116 09:02:12.084290 1949 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Jan 16 09:02:13.088676 kubelet[1949]: E0116 09:02:13.084563 1949 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Jan 16 09:02:14.089863 kubelet[1949]: E0116 09:02:14.089767 1949 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Jan 16 09:02:15.090490 kubelet[1949]: E0116 09:02:15.090411 1949 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Jan 16 09:02:15.382039 containerd[1600]: time="2025-01-16T09:02:15.381495759Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Jan 16 09:02:15.382039 containerd[1600]: time="2025-01-16T09:02:15.381578855Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Jan 16 09:02:15.382039 containerd[1600]: time="2025-01-16T09:02:15.381596157Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Jan 16 09:02:15.382039 containerd[1600]: time="2025-01-16T09:02:15.381706730Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Jan 16 09:02:15.508337 containerd[1600]: time="2025-01-16T09:02:15.507624337Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:nginx-deployment-6d5f899847-r4d2l,Uid:c6a2ddfb-febc-45a9-ae18-c2c4c6547238,Namespace:default,Attempt:0,} returns sandbox id \"cd50b1a85e6c6958cd67de9bcb46af016f2be405214f0931fbfd2ddee69d1067\"" Jan 16 09:02:15.511762 containerd[1600]: time="2025-01-16T09:02:15.511712567Z" level=info msg="PullImage \"ghcr.io/flatcar/nginx:latest\"" Jan 16 09:02:15.514622 systemd-resolved[1480]: Using degraded feature set TCP instead of UDP for DNS server 67.207.67.3. 
Jan 16 09:02:16.090828 kubelet[1949]: E0116 09:02:16.090733 1949 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Jan 16 09:02:17.091812 kubelet[1949]: E0116 09:02:17.091721 1949 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Jan 16 09:02:18.028342 kubelet[1949]: E0116 09:02:18.028251 1949 file.go:104] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Jan 16 09:02:18.092353 kubelet[1949]: E0116 09:02:18.092189 1949 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Jan 16 09:02:19.092693 kubelet[1949]: E0116 09:02:19.092604 1949 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Jan 16 09:02:20.095367 kubelet[1949]: E0116 09:02:20.095271 1949 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Jan 16 09:02:20.113229 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount2492741711.mount: Deactivated successfully. Jan 16 09:02:21.097667 kubelet[1949]: E0116 09:02:21.097571 1949 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Jan 16 09:02:22.098746 kubelet[1949]: E0116 09:02:22.098690 1949 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Jan 16 09:02:22.976910 containerd[1600]: time="2025-01-16T09:02:22.976685208Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/nginx:latest\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jan 16 09:02:22.978603 containerd[1600]: time="2025-01-16T09:02:22.978494028Z" level=info msg="stop pulling image ghcr.io/flatcar/nginx:latest: active requests=0, bytes read=71036018" Jan 16 09:02:22.980995 containerd[1600]: time="2025-01-16T09:02:22.980165574Z" level=info msg="ImageCreate event name:\"sha256:29ef6eaebfc53650f3a4609edbf9d35e866f56b2c5e01d32d93439031b300f0b\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jan 16 09:02:22.984072 containerd[1600]: time="2025-01-16T09:02:22.984005805Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/nginx@sha256:eca1d1ff18c7af45f86b7e0b572090f563a676ddca3da2ecff678390366335ad\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jan 16 09:02:22.986483 containerd[1600]: time="2025-01-16T09:02:22.986410156Z" level=info msg="Pulled image \"ghcr.io/flatcar/nginx:latest\" with image id \"sha256:29ef6eaebfc53650f3a4609edbf9d35e866f56b2c5e01d32d93439031b300f0b\", repo tag \"ghcr.io/flatcar/nginx:latest\", repo digest \"ghcr.io/flatcar/nginx@sha256:eca1d1ff18c7af45f86b7e0b572090f563a676ddca3da2ecff678390366335ad\", size \"71035896\" in 7.474645684s" Jan 16 09:02:22.986729 containerd[1600]: time="2025-01-16T09:02:22.986701488Z" level=info msg="PullImage \"ghcr.io/flatcar/nginx:latest\" returns image reference \"sha256:29ef6eaebfc53650f3a4609edbf9d35e866f56b2c5e01d32d93439031b300f0b\"" Jan 16 09:02:22.995603 containerd[1600]: time="2025-01-16T09:02:22.990515245Z" level=info msg="CreateContainer within sandbox \"cd50b1a85e6c6958cd67de9bcb46af016f2be405214f0931fbfd2ddee69d1067\" for container &ContainerMetadata{Name:nginx,Attempt:0,}" Jan 16 09:02:23.012303 containerd[1600]: time="2025-01-16T09:02:23.012027032Z" level=info msg="CreateContainer within sandbox 
\"cd50b1a85e6c6958cd67de9bcb46af016f2be405214f0931fbfd2ddee69d1067\" for &ContainerMetadata{Name:nginx,Attempt:0,} returns container id \"4e4e528494732f99eefe6abbfa6275f9aac0fd1ccc444ea824d400ca189e3385\"" Jan 16 09:02:23.013747 containerd[1600]: time="2025-01-16T09:02:23.013404626Z" level=info msg="StartContainer for \"4e4e528494732f99eefe6abbfa6275f9aac0fd1ccc444ea824d400ca189e3385\"" Jan 16 09:02:23.101875 kubelet[1949]: E0116 09:02:23.101570 1949 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Jan 16 09:02:23.224355 containerd[1600]: time="2025-01-16T09:02:23.222809633Z" level=info msg="StartContainer for \"4e4e528494732f99eefe6abbfa6275f9aac0fd1ccc444ea824d400ca189e3385\" returns successfully" Jan 16 09:02:23.624279 kubelet[1949]: I0116 09:02:23.624067 1949 pod_startup_latency_tracker.go:102] "Observed pod startup duration" pod="default/nginx-deployment-6d5f899847-r4d2l" podStartSLOduration=9.147039688 podStartE2EDuration="16.624002732s" podCreationTimestamp="2025-01-16 09:02:07 +0000 UTC" firstStartedPulling="2025-01-16 09:02:15.510749596 +0000 UTC m=+38.632634231" lastFinishedPulling="2025-01-16 09:02:22.987712622 +0000 UTC m=+46.109597275" observedRunningTime="2025-01-16 09:02:23.622297128 +0000 UTC m=+46.744181768" watchObservedRunningTime="2025-01-16 09:02:23.624002732 +0000 UTC m=+46.745887384" Jan 16 09:02:24.102196 kubelet[1949]: E0116 09:02:24.102121 1949 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Jan 16 09:02:25.104246 kubelet[1949]: E0116 09:02:25.104125 1949 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Jan 16 09:02:26.104662 kubelet[1949]: E0116 09:02:26.104528 1949 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Jan 16 09:02:27.085948 kubelet[1949]: I0116 09:02:27.085882 1949 topology_manager.go:215] "Topology Admit Handler" podUID="0ba6435a-d2f9-49d6-bf65-042b3cce4d9c" podNamespace="default" podName="nfs-server-provisioner-0" Jan 16 09:02:27.104883 kubelet[1949]: E0116 09:02:27.104699 1949 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Jan 16 09:02:27.189214 kubelet[1949]: I0116 09:02:27.188867 1949 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"data\" (UniqueName: \"kubernetes.io/empty-dir/0ba6435a-d2f9-49d6-bf65-042b3cce4d9c-data\") pod \"nfs-server-provisioner-0\" (UID: \"0ba6435a-d2f9-49d6-bf65-042b3cce4d9c\") " pod="default/nfs-server-provisioner-0" Jan 16 09:02:27.189214 kubelet[1949]: I0116 09:02:27.189018 1949 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-rjn6l\" (UniqueName: \"kubernetes.io/projected/0ba6435a-d2f9-49d6-bf65-042b3cce4d9c-kube-api-access-rjn6l\") pod \"nfs-server-provisioner-0\" (UID: \"0ba6435a-d2f9-49d6-bf65-042b3cce4d9c\") " pod="default/nfs-server-provisioner-0" Jan 16 09:02:27.394551 containerd[1600]: time="2025-01-16T09:02:27.393139874Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:nfs-server-provisioner-0,Uid:0ba6435a-d2f9-49d6-bf65-042b3cce4d9c,Namespace:default,Attempt:0,}" Jan 16 09:02:27.521530 systemd-networkd[1228]: lxcdb441d45f937: Link UP Jan 16 09:02:27.533120 kernel: eth0: renamed from tmp1dc8a Jan 16 09:02:27.543263 systemd-networkd[1228]: lxcdb441d45f937: 
Gained carrier Jan 16 09:02:28.040425 containerd[1600]: time="2025-01-16T09:02:28.040040631Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Jan 16 09:02:28.040425 containerd[1600]: time="2025-01-16T09:02:28.040128219Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Jan 16 09:02:28.040425 containerd[1600]: time="2025-01-16T09:02:28.040166383Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Jan 16 09:02:28.040425 containerd[1600]: time="2025-01-16T09:02:28.040324626Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Jan 16 09:02:28.103346 systemd[1]: run-containerd-runc-k8s.io-1dc8a88be891ee5d37af2c484626f8aaee2fbbe361998ee69332313887dfc167-runc.CrHsPG.mount: Deactivated successfully. Jan 16 09:02:28.106594 kubelet[1949]: E0116 09:02:28.104870 1949 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Jan 16 09:02:28.194503 containerd[1600]: time="2025-01-16T09:02:28.194427847Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:nfs-server-provisioner-0,Uid:0ba6435a-d2f9-49d6-bf65-042b3cce4d9c,Namespace:default,Attempt:0,} returns sandbox id \"1dc8a88be891ee5d37af2c484626f8aaee2fbbe361998ee69332313887dfc167\"" Jan 16 09:02:28.197388 containerd[1600]: time="2025-01-16T09:02:28.197299895Z" level=info msg="PullImage \"registry.k8s.io/sig-storage/nfs-provisioner:v4.0.8\"" Jan 16 09:02:28.776387 systemd-networkd[1228]: lxcdb441d45f937: Gained IPv6LL Jan 16 09:02:29.134785 kubelet[1949]: E0116 09:02:29.134620 1949 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Jan 16 09:02:30.135066 kubelet[1949]: E0116 09:02:30.134940 1949 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Jan 16 09:02:31.138112 kubelet[1949]: E0116 09:02:31.138053 1949 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Jan 16 09:02:31.779531 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount2771590663.mount: Deactivated successfully. 
Jan 16 09:02:32.139451 kubelet[1949]: E0116 09:02:32.139348 1949 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Jan 16 09:02:33.140635 kubelet[1949]: E0116 09:02:33.140550 1949 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Jan 16 09:02:34.142203 kubelet[1949]: E0116 09:02:34.141305 1949 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Jan 16 09:02:35.142018 kubelet[1949]: E0116 09:02:35.141712 1949 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Jan 16 09:02:35.800254 containerd[1600]: time="2025-01-16T09:02:35.800179185Z" level=info msg="ImageCreate event name:\"registry.k8s.io/sig-storage/nfs-provisioner:v4.0.8\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jan 16 09:02:35.806214 containerd[1600]: time="2025-01-16T09:02:35.804279866Z" level=info msg="stop pulling image registry.k8s.io/sig-storage/nfs-provisioner:v4.0.8: active requests=0, bytes read=91039406" Jan 16 09:02:35.806214 containerd[1600]: time="2025-01-16T09:02:35.805465361Z" level=info msg="ImageCreate event name:\"sha256:fd0b16f70b66b72bcb2f91d556fa33eba02729c44ffc5f2c16130e7f9fbed3c4\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jan 16 09:02:35.813378 containerd[1600]: time="2025-01-16T09:02:35.813272355Z" level=info msg="ImageCreate event name:\"registry.k8s.io/sig-storage/nfs-provisioner@sha256:c825f3d5e28bde099bd7a3daace28772d412c9157ad47fa752a9ad0baafc118d\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jan 16 09:02:35.822009 containerd[1600]: time="2025-01-16T09:02:35.820200465Z" level=info msg="Pulled image \"registry.k8s.io/sig-storage/nfs-provisioner:v4.0.8\" with image id \"sha256:fd0b16f70b66b72bcb2f91d556fa33eba02729c44ffc5f2c16130e7f9fbed3c4\", repo tag \"registry.k8s.io/sig-storage/nfs-provisioner:v4.0.8\", repo digest \"registry.k8s.io/sig-storage/nfs-provisioner@sha256:c825f3d5e28bde099bd7a3daace28772d412c9157ad47fa752a9ad0baafc118d\", size \"91036984\" in 7.622837585s" Jan 16 09:02:35.822009 containerd[1600]: time="2025-01-16T09:02:35.820273336Z" level=info msg="PullImage \"registry.k8s.io/sig-storage/nfs-provisioner:v4.0.8\" returns image reference \"sha256:fd0b16f70b66b72bcb2f91d556fa33eba02729c44ffc5f2c16130e7f9fbed3c4\"" Jan 16 09:02:35.832888 containerd[1600]: time="2025-01-16T09:02:35.832606393Z" level=info msg="CreateContainer within sandbox \"1dc8a88be891ee5d37af2c484626f8aaee2fbbe361998ee69332313887dfc167\" for container &ContainerMetadata{Name:nfs-server-provisioner,Attempt:0,}" Jan 16 09:02:35.871432 containerd[1600]: time="2025-01-16T09:02:35.871311143Z" level=info msg="CreateContainer within sandbox \"1dc8a88be891ee5d37af2c484626f8aaee2fbbe361998ee69332313887dfc167\" for &ContainerMetadata{Name:nfs-server-provisioner,Attempt:0,} returns container id \"c7f8c66851c639c29493ef6a4059b2c1b3e4c373dfb8eb4d41be67a64c6a931b\"" Jan 16 09:02:35.874163 containerd[1600]: time="2025-01-16T09:02:35.873110129Z" level=info msg="StartContainer for \"c7f8c66851c639c29493ef6a4059b2c1b3e4c373dfb8eb4d41be67a64c6a931b\"" Jan 16 09:02:36.014499 containerd[1600]: time="2025-01-16T09:02:36.014311816Z" level=info msg="StartContainer for \"c7f8c66851c639c29493ef6a4059b2c1b3e4c373dfb8eb4d41be67a64c6a931b\" returns successfully" Jan 16 09:02:36.142654 kubelet[1949]: E0116 09:02:36.142568 1949 file_linux.go:61] 
"Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Jan 16 09:02:36.700461 kubelet[1949]: I0116 09:02:36.700024 1949 pod_startup_latency_tracker.go:102] "Observed pod startup duration" pod="default/nfs-server-provisioner-0" podStartSLOduration=2.075587963 podStartE2EDuration="9.699920983s" podCreationTimestamp="2025-01-16 09:02:27 +0000 UTC" firstStartedPulling="2025-01-16 09:02:28.196794238 +0000 UTC m=+51.318678870" lastFinishedPulling="2025-01-16 09:02:35.821127252 +0000 UTC m=+58.943011890" observedRunningTime="2025-01-16 09:02:36.699442559 +0000 UTC m=+59.821327213" watchObservedRunningTime="2025-01-16 09:02:36.699920983 +0000 UTC m=+59.821805635" Jan 16 09:02:37.142892 kubelet[1949]: E0116 09:02:37.142801 1949 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Jan 16 09:02:38.028745 kubelet[1949]: E0116 09:02:38.028653 1949 file.go:104] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Jan 16 09:02:38.144992 kubelet[1949]: E0116 09:02:38.143177 1949 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Jan 16 09:02:39.143994 kubelet[1949]: E0116 09:02:39.143825 1949 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Jan 16 09:02:40.144419 kubelet[1949]: E0116 09:02:40.144209 1949 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Jan 16 09:02:41.145793 kubelet[1949]: E0116 09:02:41.145618 1949 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Jan 16 09:02:42.146649 kubelet[1949]: E0116 09:02:42.146120 1949 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Jan 16 09:02:43.148999 kubelet[1949]: E0116 09:02:43.146946 1949 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Jan 16 09:02:44.149363 kubelet[1949]: E0116 09:02:44.149285 1949 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Jan 16 09:02:45.149616 kubelet[1949]: E0116 09:02:45.149536 1949 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Jan 16 09:02:45.693002 kubelet[1949]: I0116 09:02:45.692371 1949 topology_manager.go:215] "Topology Admit Handler" podUID="5f000786-1aac-45db-b6d7-cbff712bcf6e" podNamespace="default" podName="test-pod-1" Jan 16 09:02:45.801154 kubelet[1949]: I0116 09:02:45.801082 1949 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-vx88v\" (UniqueName: \"kubernetes.io/projected/5f000786-1aac-45db-b6d7-cbff712bcf6e-kube-api-access-vx88v\") pod \"test-pod-1\" (UID: \"5f000786-1aac-45db-b6d7-cbff712bcf6e\") " pod="default/test-pod-1" Jan 16 09:02:45.801685 kubelet[1949]: I0116 09:02:45.801644 1949 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"pvc-431ddff8-15c3-4d1f-845f-923ef97a7e44\" (UniqueName: \"kubernetes.io/nfs/5f000786-1aac-45db-b6d7-cbff712bcf6e-pvc-431ddff8-15c3-4d1f-845f-923ef97a7e44\") pod \"test-pod-1\" (UID: \"5f000786-1aac-45db-b6d7-cbff712bcf6e\") " pod="default/test-pod-1" Jan 16 09:02:45.945067 kernel: FS-Cache: 
Loaded Jan 16 09:02:46.064494 kernel: RPC: Registered named UNIX socket transport module. Jan 16 09:02:46.064754 kernel: RPC: Registered udp transport module. Jan 16 09:02:46.064819 kernel: RPC: Registered tcp transport module. Jan 16 09:02:46.068100 kernel: RPC: Registered tcp-with-tls transport module. Jan 16 09:02:46.069205 kernel: RPC: Registered tcp NFSv4.1 backchannel transport module. Jan 16 09:02:46.151700 kubelet[1949]: E0116 09:02:46.151590 1949 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Jan 16 09:02:46.522153 kernel: NFS: Registering the id_resolver key type Jan 16 09:02:46.524111 kernel: Key type id_resolver registered Jan 16 09:02:46.526142 kernel: Key type id_legacy registered Jan 16 09:02:46.579894 nfsidmap[3349]: nss_getpwnam: name 'root@nfs-server-provisioner.default.svc.cluster.local' does not map into domain '2.0-e-4ce9573906' Jan 16 09:02:46.606825 nfsidmap[3350]: nss_name_to_gid: name 'root@nfs-server-provisioner.default.svc.cluster.local' does not map into domain '2.0-e-4ce9573906' Jan 16 09:02:46.899565 containerd[1600]: time="2025-01-16T09:02:46.899020933Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:test-pod-1,Uid:5f000786-1aac-45db-b6d7-cbff712bcf6e,Namespace:default,Attempt:0,}" Jan 16 09:02:46.959159 systemd-networkd[1228]: lxc12cc21841afa: Link UP Jan 16 09:02:46.969001 kernel: eth0: renamed from tmp9dfbf Jan 16 09:02:46.982233 systemd-networkd[1228]: lxc12cc21841afa: Gained carrier Jan 16 09:02:47.156366 kubelet[1949]: E0116 09:02:47.156158 1949 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Jan 16 09:02:47.253839 containerd[1600]: time="2025-01-16T09:02:47.253640677Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Jan 16 09:02:47.253839 containerd[1600]: time="2025-01-16T09:02:47.253719535Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Jan 16 09:02:47.253839 containerd[1600]: time="2025-01-16T09:02:47.253735942Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Jan 16 09:02:47.255054 containerd[1600]: time="2025-01-16T09:02:47.254007421Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Jan 16 09:02:47.292901 systemd[1]: run-containerd-runc-k8s.io-9dfbf50a70422ce9d8e60d2e80c0396e9f736443ffb9ea0a86aac01cb7fd5789-runc.eT3Wbm.mount: Deactivated successfully. 
Jan 16 09:02:47.365077 containerd[1600]: time="2025-01-16T09:02:47.364946867Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:test-pod-1,Uid:5f000786-1aac-45db-b6d7-cbff712bcf6e,Namespace:default,Attempt:0,} returns sandbox id \"9dfbf50a70422ce9d8e60d2e80c0396e9f736443ffb9ea0a86aac01cb7fd5789\"" Jan 16 09:02:47.367776 containerd[1600]: time="2025-01-16T09:02:47.367722742Z" level=info msg="PullImage \"ghcr.io/flatcar/nginx:latest\"" Jan 16 09:02:47.784197 containerd[1600]: time="2025-01-16T09:02:47.782610924Z" level=info msg="ImageUpdate event name:\"ghcr.io/flatcar/nginx:latest\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jan 16 09:02:47.784526 containerd[1600]: time="2025-01-16T09:02:47.784460810Z" level=info msg="stop pulling image ghcr.io/flatcar/nginx:latest: active requests=0, bytes read=61" Jan 16 09:02:47.792338 containerd[1600]: time="2025-01-16T09:02:47.792237210Z" level=info msg="Pulled image \"ghcr.io/flatcar/nginx:latest\" with image id \"sha256:29ef6eaebfc53650f3a4609edbf9d35e866f56b2c5e01d32d93439031b300f0b\", repo tag \"ghcr.io/flatcar/nginx:latest\", repo digest \"ghcr.io/flatcar/nginx@sha256:eca1d1ff18c7af45f86b7e0b572090f563a676ddca3da2ecff678390366335ad\", size \"71035896\" in 424.206986ms" Jan 16 09:02:47.792338 containerd[1600]: time="2025-01-16T09:02:47.792330262Z" level=info msg="PullImage \"ghcr.io/flatcar/nginx:latest\" returns image reference \"sha256:29ef6eaebfc53650f3a4609edbf9d35e866f56b2c5e01d32d93439031b300f0b\"" Jan 16 09:02:47.797829 containerd[1600]: time="2025-01-16T09:02:47.797405446Z" level=info msg="CreateContainer within sandbox \"9dfbf50a70422ce9d8e60d2e80c0396e9f736443ffb9ea0a86aac01cb7fd5789\" for container &ContainerMetadata{Name:test,Attempt:0,}" Jan 16 09:02:47.820698 containerd[1600]: time="2025-01-16T09:02:47.820325053Z" level=info msg="CreateContainer within sandbox \"9dfbf50a70422ce9d8e60d2e80c0396e9f736443ffb9ea0a86aac01cb7fd5789\" for &ContainerMetadata{Name:test,Attempt:0,} returns container id \"d0c59183d250d12e1c6d83df306531f9a51a0811c0dde3be3ff5a634b13bc291\"" Jan 16 09:02:47.822259 containerd[1600]: time="2025-01-16T09:02:47.822173781Z" level=info msg="StartContainer for \"d0c59183d250d12e1c6d83df306531f9a51a0811c0dde3be3ff5a634b13bc291\"" Jan 16 09:02:47.953462 containerd[1600]: time="2025-01-16T09:02:47.950798961Z" level=info msg="StartContainer for \"d0c59183d250d12e1c6d83df306531f9a51a0811c0dde3be3ff5a634b13bc291\" returns successfully" Jan 16 09:02:48.157165 kubelet[1949]: E0116 09:02:48.157073 1949 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Jan 16 09:02:48.279056 systemd-networkd[1228]: lxc12cc21841afa: Gained IPv6LL Jan 16 09:02:48.742871 kubelet[1949]: I0116 09:02:48.742825 1949 pod_startup_latency_tracker.go:102] "Observed pod startup duration" pod="default/test-pod-1" podStartSLOduration=21.316658137 podStartE2EDuration="21.742775535s" podCreationTimestamp="2025-01-16 09:02:27 +0000 UTC" firstStartedPulling="2025-01-16 09:02:47.367070511 +0000 UTC m=+70.488955148" lastFinishedPulling="2025-01-16 09:02:47.793187916 +0000 UTC m=+70.915072546" observedRunningTime="2025-01-16 09:02:48.742218148 +0000 UTC m=+71.864102799" watchObservedRunningTime="2025-01-16 09:02:48.742775535 +0000 UTC m=+71.864660187" Jan 16 09:02:49.159346 kubelet[1949]: E0116 09:02:49.159236 1949 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Jan 16 09:02:50.167063 kubelet[1949]: E0116 
09:02:50.161252 1949 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Jan 16 09:02:51.168031 kubelet[1949]: E0116 09:02:51.167864 1949 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Jan 16 09:02:51.336228 containerd[1600]: time="2025-01-16T09:02:51.336068296Z" level=error msg="failed to reload cni configuration after receiving fs change event(REMOVE \"/etc/cni/net.d/05-cilium.conf\")" error="cni config load failed: no network config found in /etc/cni/net.d: cni plugin not initialized: failed to load cni config" Jan 16 09:02:51.359757 containerd[1600]: time="2025-01-16T09:02:51.359629291Z" level=info msg="StopContainer for \"428d2254e2ab9e3c32550b011a5bb3edb1cfad3ca0f3bec43b0551a4e7ccfa25\" with timeout 2 (s)" Jan 16 09:02:51.360698 containerd[1600]: time="2025-01-16T09:02:51.360429120Z" level=info msg="Stop container \"428d2254e2ab9e3c32550b011a5bb3edb1cfad3ca0f3bec43b0551a4e7ccfa25\" with signal terminated" Jan 16 09:02:51.381052 systemd-networkd[1228]: lxc_health: Link DOWN Jan 16 09:02:51.381061 systemd-networkd[1228]: lxc_health: Lost carrier Jan 16 09:02:51.498508 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-428d2254e2ab9e3c32550b011a5bb3edb1cfad3ca0f3bec43b0551a4e7ccfa25-rootfs.mount: Deactivated successfully. Jan 16 09:02:51.508562 containerd[1600]: time="2025-01-16T09:02:51.507643079Z" level=info msg="shim disconnected" id=428d2254e2ab9e3c32550b011a5bb3edb1cfad3ca0f3bec43b0551a4e7ccfa25 namespace=k8s.io Jan 16 09:02:51.508562 containerd[1600]: time="2025-01-16T09:02:51.507736047Z" level=warning msg="cleaning up after shim disconnected" id=428d2254e2ab9e3c32550b011a5bb3edb1cfad3ca0f3bec43b0551a4e7ccfa25 namespace=k8s.io Jan 16 09:02:51.508562 containerd[1600]: time="2025-01-16T09:02:51.507753511Z" level=info msg="cleaning up dead shim" namespace=k8s.io Jan 16 09:02:51.533349 containerd[1600]: time="2025-01-16T09:02:51.533276579Z" level=info msg="StopContainer for \"428d2254e2ab9e3c32550b011a5bb3edb1cfad3ca0f3bec43b0551a4e7ccfa25\" returns successfully" Jan 16 09:02:51.534562 containerd[1600]: time="2025-01-16T09:02:51.534415105Z" level=info msg="StopPodSandbox for \"8dbc76b1c0b3efab19927556f87f80902534999c19fa437311814b228a3dbe1b\"" Jan 16 09:02:51.534659 containerd[1600]: time="2025-01-16T09:02:51.534500902Z" level=info msg="Container to stop \"8a0dda2cac5c314c6b3184c5928db44d77e1102e3b6600b6d25bac6acade549b\" must be in running or unknown state, current state \"CONTAINER_EXITED\"" Jan 16 09:02:51.534659 containerd[1600]: time="2025-01-16T09:02:51.534578349Z" level=info msg="Container to stop \"c3add8e0659b1dea214adff5b8c17a68ce1c0c2482f6c22f91aa59871db2e414\" must be in running or unknown state, current state \"CONTAINER_EXITED\"" Jan 16 09:02:51.534659 containerd[1600]: time="2025-01-16T09:02:51.534591995Z" level=info msg="Container to stop \"428d2254e2ab9e3c32550b011a5bb3edb1cfad3ca0f3bec43b0551a4e7ccfa25\" must be in running or unknown state, current state \"CONTAINER_EXITED\"" Jan 16 09:02:51.534659 containerd[1600]: time="2025-01-16T09:02:51.534610007Z" level=info msg="Container to stop \"a84af7cfdf48473bbd96b4791ba0c9960999a2ab6515e1032ec4026f097a0ab4\" must be in running or unknown state, current state \"CONTAINER_EXITED\"" Jan 16 09:02:51.534659 containerd[1600]: time="2025-01-16T09:02:51.534641260Z" level=info msg="Container to stop \"203e3d8be195940057efdeb6ca5dc73ca3022453b6a9c7f82508d4955fad4042\" must be in running 
or unknown state, current state \"CONTAINER_EXITED\"" Jan 16 09:02:51.539480 systemd[1]: run-containerd-io.containerd.grpc.v1.cri-sandboxes-8dbc76b1c0b3efab19927556f87f80902534999c19fa437311814b228a3dbe1b-shm.mount: Deactivated successfully. Jan 16 09:02:51.600028 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-8dbc76b1c0b3efab19927556f87f80902534999c19fa437311814b228a3dbe1b-rootfs.mount: Deactivated successfully. Jan 16 09:02:51.624326 containerd[1600]: time="2025-01-16T09:02:51.622974620Z" level=info msg="shim disconnected" id=8dbc76b1c0b3efab19927556f87f80902534999c19fa437311814b228a3dbe1b namespace=k8s.io Jan 16 09:02:51.624326 containerd[1600]: time="2025-01-16T09:02:51.623075273Z" level=warning msg="cleaning up after shim disconnected" id=8dbc76b1c0b3efab19927556f87f80902534999c19fa437311814b228a3dbe1b namespace=k8s.io Jan 16 09:02:51.624326 containerd[1600]: time="2025-01-16T09:02:51.623088791Z" level=info msg="cleaning up dead shim" namespace=k8s.io Jan 16 09:02:51.658488 containerd[1600]: time="2025-01-16T09:02:51.658438048Z" level=info msg="TearDown network for sandbox \"8dbc76b1c0b3efab19927556f87f80902534999c19fa437311814b228a3dbe1b\" successfully" Jan 16 09:02:51.658808 containerd[1600]: time="2025-01-16T09:02:51.658706103Z" level=info msg="StopPodSandbox for \"8dbc76b1c0b3efab19927556f87f80902534999c19fa437311814b228a3dbe1b\" returns successfully" Jan 16 09:02:51.738597 kubelet[1949]: I0116 09:02:51.738558 1949 scope.go:117] "RemoveContainer" containerID="428d2254e2ab9e3c32550b011a5bb3edb1cfad3ca0f3bec43b0551a4e7ccfa25" Jan 16 09:02:51.741345 containerd[1600]: time="2025-01-16T09:02:51.741265103Z" level=info msg="RemoveContainer for \"428d2254e2ab9e3c32550b011a5bb3edb1cfad3ca0f3bec43b0551a4e7ccfa25\"" Jan 16 09:02:51.753884 containerd[1600]: time="2025-01-16T09:02:51.753473553Z" level=info msg="RemoveContainer for \"428d2254e2ab9e3c32550b011a5bb3edb1cfad3ca0f3bec43b0551a4e7ccfa25\" returns successfully" Jan 16 09:02:51.757349 kubelet[1949]: I0116 09:02:51.756227 1949 reconciler_common.go:172] "operationExecutor.UnmountVolume started for volume \"bpf-maps\" (UniqueName: \"kubernetes.io/host-path/5f7fd5e7-27fc-405d-8080-db1fd7edfbc2-bpf-maps\") pod \"5f7fd5e7-27fc-405d-8080-db1fd7edfbc2\" (UID: \"5f7fd5e7-27fc-405d-8080-db1fd7edfbc2\") " Jan 16 09:02:51.757349 kubelet[1949]: I0116 09:02:51.756296 1949 reconciler_common.go:172] "operationExecutor.UnmountVolume started for volume \"hubble-tls\" (UniqueName: \"kubernetes.io/projected/5f7fd5e7-27fc-405d-8080-db1fd7edfbc2-hubble-tls\") pod \"5f7fd5e7-27fc-405d-8080-db1fd7edfbc2\" (UID: \"5f7fd5e7-27fc-405d-8080-db1fd7edfbc2\") " Jan 16 09:02:51.757349 kubelet[1949]: I0116 09:02:51.756332 1949 reconciler_common.go:172] "operationExecutor.UnmountVolume started for volume \"kube-api-access-2hsgs\" (UniqueName: \"kubernetes.io/projected/5f7fd5e7-27fc-405d-8080-db1fd7edfbc2-kube-api-access-2hsgs\") pod \"5f7fd5e7-27fc-405d-8080-db1fd7edfbc2\" (UID: \"5f7fd5e7-27fc-405d-8080-db1fd7edfbc2\") " Jan 16 09:02:51.757349 kubelet[1949]: I0116 09:02:51.756370 1949 reconciler_common.go:172] "operationExecutor.UnmountVolume started for volume \"clustermesh-secrets\" (UniqueName: \"kubernetes.io/secret/5f7fd5e7-27fc-405d-8080-db1fd7edfbc2-clustermesh-secrets\") pod \"5f7fd5e7-27fc-405d-8080-db1fd7edfbc2\" (UID: \"5f7fd5e7-27fc-405d-8080-db1fd7edfbc2\") " Jan 16 09:02:51.757349 kubelet[1949]: I0116 09:02:51.756397 1949 reconciler_common.go:172] "operationExecutor.UnmountVolume started for volume \"host-proc-sys-kernel\" (UniqueName: 
\"kubernetes.io/host-path/5f7fd5e7-27fc-405d-8080-db1fd7edfbc2-host-proc-sys-kernel\") pod \"5f7fd5e7-27fc-405d-8080-db1fd7edfbc2\" (UID: \"5f7fd5e7-27fc-405d-8080-db1fd7edfbc2\") " Jan 16 09:02:51.757349 kubelet[1949]: I0116 09:02:51.756424 1949 reconciler_common.go:172] "operationExecutor.UnmountVolume started for volume \"cilium-run\" (UniqueName: \"kubernetes.io/host-path/5f7fd5e7-27fc-405d-8080-db1fd7edfbc2-cilium-run\") pod \"5f7fd5e7-27fc-405d-8080-db1fd7edfbc2\" (UID: \"5f7fd5e7-27fc-405d-8080-db1fd7edfbc2\") " Jan 16 09:02:51.757849 kubelet[1949]: I0116 09:02:51.756455 1949 reconciler_common.go:172] "operationExecutor.UnmountVolume started for volume \"cilium-config-path\" (UniqueName: \"kubernetes.io/configmap/5f7fd5e7-27fc-405d-8080-db1fd7edfbc2-cilium-config-path\") pod \"5f7fd5e7-27fc-405d-8080-db1fd7edfbc2\" (UID: \"5f7fd5e7-27fc-405d-8080-db1fd7edfbc2\") " Jan 16 09:02:51.757849 kubelet[1949]: I0116 09:02:51.756483 1949 reconciler_common.go:172] "operationExecutor.UnmountVolume started for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/5f7fd5e7-27fc-405d-8080-db1fd7edfbc2-lib-modules\") pod \"5f7fd5e7-27fc-405d-8080-db1fd7edfbc2\" (UID: \"5f7fd5e7-27fc-405d-8080-db1fd7edfbc2\") " Jan 16 09:02:51.757849 kubelet[1949]: I0116 09:02:51.756519 1949 reconciler_common.go:172] "operationExecutor.UnmountVolume started for volume \"cilium-cgroup\" (UniqueName: \"kubernetes.io/host-path/5f7fd5e7-27fc-405d-8080-db1fd7edfbc2-cilium-cgroup\") pod \"5f7fd5e7-27fc-405d-8080-db1fd7edfbc2\" (UID: \"5f7fd5e7-27fc-405d-8080-db1fd7edfbc2\") " Jan 16 09:02:51.757849 kubelet[1949]: I0116 09:02:51.756545 1949 reconciler_common.go:172] "operationExecutor.UnmountVolume started for volume \"etc-cni-netd\" (UniqueName: \"kubernetes.io/host-path/5f7fd5e7-27fc-405d-8080-db1fd7edfbc2-etc-cni-netd\") pod \"5f7fd5e7-27fc-405d-8080-db1fd7edfbc2\" (UID: \"5f7fd5e7-27fc-405d-8080-db1fd7edfbc2\") " Jan 16 09:02:51.757849 kubelet[1949]: I0116 09:02:51.756570 1949 reconciler_common.go:172] "operationExecutor.UnmountVolume started for volume \"xtables-lock\" (UniqueName: \"kubernetes.io/host-path/5f7fd5e7-27fc-405d-8080-db1fd7edfbc2-xtables-lock\") pod \"5f7fd5e7-27fc-405d-8080-db1fd7edfbc2\" (UID: \"5f7fd5e7-27fc-405d-8080-db1fd7edfbc2\") " Jan 16 09:02:51.757849 kubelet[1949]: I0116 09:02:51.756599 1949 reconciler_common.go:172] "operationExecutor.UnmountVolume started for volume \"cni-path\" (UniqueName: \"kubernetes.io/host-path/5f7fd5e7-27fc-405d-8080-db1fd7edfbc2-cni-path\") pod \"5f7fd5e7-27fc-405d-8080-db1fd7edfbc2\" (UID: \"5f7fd5e7-27fc-405d-8080-db1fd7edfbc2\") " Jan 16 09:02:51.758232 kubelet[1949]: I0116 09:02:51.756628 1949 reconciler_common.go:172] "operationExecutor.UnmountVolume started for volume \"hostproc\" (UniqueName: \"kubernetes.io/host-path/5f7fd5e7-27fc-405d-8080-db1fd7edfbc2-hostproc\") pod \"5f7fd5e7-27fc-405d-8080-db1fd7edfbc2\" (UID: \"5f7fd5e7-27fc-405d-8080-db1fd7edfbc2\") " Jan 16 09:02:51.758232 kubelet[1949]: I0116 09:02:51.756655 1949 reconciler_common.go:172] "operationExecutor.UnmountVolume started for volume \"host-proc-sys-net\" (UniqueName: \"kubernetes.io/host-path/5f7fd5e7-27fc-405d-8080-db1fd7edfbc2-host-proc-sys-net\") pod \"5f7fd5e7-27fc-405d-8080-db1fd7edfbc2\" (UID: \"5f7fd5e7-27fc-405d-8080-db1fd7edfbc2\") " Jan 16 09:02:51.758232 kubelet[1949]: I0116 09:02:51.756769 1949 operation_generator.go:887] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/5f7fd5e7-27fc-405d-8080-db1fd7edfbc2-host-proc-sys-net" (OuterVolumeSpecName: 
"host-proc-sys-net") pod "5f7fd5e7-27fc-405d-8080-db1fd7edfbc2" (UID: "5f7fd5e7-27fc-405d-8080-db1fd7edfbc2"). InnerVolumeSpecName "host-proc-sys-net". PluginName "kubernetes.io/host-path", VolumeGidValue "" Jan 16 09:02:51.758232 kubelet[1949]: I0116 09:02:51.757069 1949 operation_generator.go:887] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/5f7fd5e7-27fc-405d-8080-db1fd7edfbc2-cilium-run" (OuterVolumeSpecName: "cilium-run") pod "5f7fd5e7-27fc-405d-8080-db1fd7edfbc2" (UID: "5f7fd5e7-27fc-405d-8080-db1fd7edfbc2"). InnerVolumeSpecName "cilium-run". PluginName "kubernetes.io/host-path", VolumeGidValue "" Jan 16 09:02:51.758232 kubelet[1949]: I0116 09:02:51.757152 1949 operation_generator.go:887] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/5f7fd5e7-27fc-405d-8080-db1fd7edfbc2-bpf-maps" (OuterVolumeSpecName: "bpf-maps") pod "5f7fd5e7-27fc-405d-8080-db1fd7edfbc2" (UID: "5f7fd5e7-27fc-405d-8080-db1fd7edfbc2"). InnerVolumeSpecName "bpf-maps". PluginName "kubernetes.io/host-path", VolumeGidValue "" Jan 16 09:02:51.759933 kubelet[1949]: I0116 09:02:51.758903 1949 scope.go:117] "RemoveContainer" containerID="c3add8e0659b1dea214adff5b8c17a68ce1c0c2482f6c22f91aa59871db2e414" Jan 16 09:02:51.764538 kubelet[1949]: I0116 09:02:51.762526 1949 operation_generator.go:887] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/5f7fd5e7-27fc-405d-8080-db1fd7edfbc2-lib-modules" (OuterVolumeSpecName: "lib-modules") pod "5f7fd5e7-27fc-405d-8080-db1fd7edfbc2" (UID: "5f7fd5e7-27fc-405d-8080-db1fd7edfbc2"). InnerVolumeSpecName "lib-modules". PluginName "kubernetes.io/host-path", VolumeGidValue "" Jan 16 09:02:51.764538 kubelet[1949]: I0116 09:02:51.763194 1949 operation_generator.go:887] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/5f7fd5e7-27fc-405d-8080-db1fd7edfbc2-cilium-cgroup" (OuterVolumeSpecName: "cilium-cgroup") pod "5f7fd5e7-27fc-405d-8080-db1fd7edfbc2" (UID: "5f7fd5e7-27fc-405d-8080-db1fd7edfbc2"). InnerVolumeSpecName "cilium-cgroup". PluginName "kubernetes.io/host-path", VolumeGidValue "" Jan 16 09:02:51.767986 kubelet[1949]: I0116 09:02:51.763224 1949 operation_generator.go:887] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/5f7fd5e7-27fc-405d-8080-db1fd7edfbc2-etc-cni-netd" (OuterVolumeSpecName: "etc-cni-netd") pod "5f7fd5e7-27fc-405d-8080-db1fd7edfbc2" (UID: "5f7fd5e7-27fc-405d-8080-db1fd7edfbc2"). InnerVolumeSpecName "etc-cni-netd". PluginName "kubernetes.io/host-path", VolumeGidValue "" Jan 16 09:02:51.768584 kubelet[1949]: I0116 09:02:51.768232 1949 operation_generator.go:887] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/5f7fd5e7-27fc-405d-8080-db1fd7edfbc2-xtables-lock" (OuterVolumeSpecName: "xtables-lock") pod "5f7fd5e7-27fc-405d-8080-db1fd7edfbc2" (UID: "5f7fd5e7-27fc-405d-8080-db1fd7edfbc2"). InnerVolumeSpecName "xtables-lock". PluginName "kubernetes.io/host-path", VolumeGidValue "" Jan 16 09:02:51.769755 systemd[1]: var-lib-kubelet-pods-5f7fd5e7\x2d27fc\x2d405d\x2d8080\x2ddb1fd7edfbc2-volumes-kubernetes.io\x7eprojected-hubble\x2dtls.mount: Deactivated successfully. Jan 16 09:02:51.770647 kubelet[1949]: I0116 09:02:51.768534 1949 operation_generator.go:887] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/5f7fd5e7-27fc-405d-8080-db1fd7edfbc2-cni-path" (OuterVolumeSpecName: "cni-path") pod "5f7fd5e7-27fc-405d-8080-db1fd7edfbc2" (UID: "5f7fd5e7-27fc-405d-8080-db1fd7edfbc2"). InnerVolumeSpecName "cni-path". 
PluginName "kubernetes.io/host-path", VolumeGidValue "" Jan 16 09:02:51.770647 kubelet[1949]: I0116 09:02:51.768559 1949 operation_generator.go:887] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/5f7fd5e7-27fc-405d-8080-db1fd7edfbc2-hostproc" (OuterVolumeSpecName: "hostproc") pod "5f7fd5e7-27fc-405d-8080-db1fd7edfbc2" (UID: "5f7fd5e7-27fc-405d-8080-db1fd7edfbc2"). InnerVolumeSpecName "hostproc". PluginName "kubernetes.io/host-path", VolumeGidValue "" Jan 16 09:02:51.774298 kubelet[1949]: I0116 09:02:51.773462 1949 operation_generator.go:887] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/5f7fd5e7-27fc-405d-8080-db1fd7edfbc2-host-proc-sys-kernel" (OuterVolumeSpecName: "host-proc-sys-kernel") pod "5f7fd5e7-27fc-405d-8080-db1fd7edfbc2" (UID: "5f7fd5e7-27fc-405d-8080-db1fd7edfbc2"). InnerVolumeSpecName "host-proc-sys-kernel". PluginName "kubernetes.io/host-path", VolumeGidValue "" Jan 16 09:02:51.774298 kubelet[1949]: I0116 09:02:51.773662 1949 operation_generator.go:887] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/5f7fd5e7-27fc-405d-8080-db1fd7edfbc2-clustermesh-secrets" (OuterVolumeSpecName: "clustermesh-secrets") pod "5f7fd5e7-27fc-405d-8080-db1fd7edfbc2" (UID: "5f7fd5e7-27fc-405d-8080-db1fd7edfbc2"). InnerVolumeSpecName "clustermesh-secrets". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 16 09:02:51.774298 kubelet[1949]: I0116 09:02:51.774094 1949 operation_generator.go:887] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/5f7fd5e7-27fc-405d-8080-db1fd7edfbc2-hubble-tls" (OuterVolumeSpecName: "hubble-tls") pod "5f7fd5e7-27fc-405d-8080-db1fd7edfbc2" (UID: "5f7fd5e7-27fc-405d-8080-db1fd7edfbc2"). InnerVolumeSpecName "hubble-tls". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 16 09:02:51.776970 kubelet[1949]: I0116 09:02:51.776447 1949 operation_generator.go:887] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/5f7fd5e7-27fc-405d-8080-db1fd7edfbc2-cilium-config-path" (OuterVolumeSpecName: "cilium-config-path") pod "5f7fd5e7-27fc-405d-8080-db1fd7edfbc2" (UID: "5f7fd5e7-27fc-405d-8080-db1fd7edfbc2"). InnerVolumeSpecName "cilium-config-path". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 16 09:02:51.778379 containerd[1600]: time="2025-01-16T09:02:51.777879189Z" level=info msg="RemoveContainer for \"c3add8e0659b1dea214adff5b8c17a68ce1c0c2482f6c22f91aa59871db2e414\"" Jan 16 09:02:51.781325 kubelet[1949]: I0116 09:02:51.781245 1949 operation_generator.go:887] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/5f7fd5e7-27fc-405d-8080-db1fd7edfbc2-kube-api-access-2hsgs" (OuterVolumeSpecName: "kube-api-access-2hsgs") pod "5f7fd5e7-27fc-405d-8080-db1fd7edfbc2" (UID: "5f7fd5e7-27fc-405d-8080-db1fd7edfbc2"). InnerVolumeSpecName "kube-api-access-2hsgs". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 16 09:02:51.784862 containerd[1600]: time="2025-01-16T09:02:51.784775530Z" level=info msg="RemoveContainer for \"c3add8e0659b1dea214adff5b8c17a68ce1c0c2482f6c22f91aa59871db2e414\" returns successfully" Jan 16 09:02:51.785477 kubelet[1949]: I0116 09:02:51.785290 1949 scope.go:117] "RemoveContainer" containerID="203e3d8be195940057efdeb6ca5dc73ca3022453b6a9c7f82508d4955fad4042" Jan 16 09:02:51.789676 containerd[1600]: time="2025-01-16T09:02:51.789626345Z" level=info msg="RemoveContainer for \"203e3d8be195940057efdeb6ca5dc73ca3022453b6a9c7f82508d4955fad4042\"" Jan 16 09:02:51.795398 containerd[1600]: time="2025-01-16T09:02:51.794806951Z" level=info msg="RemoveContainer for \"203e3d8be195940057efdeb6ca5dc73ca3022453b6a9c7f82508d4955fad4042\" returns successfully" Jan 16 09:02:51.795609 kubelet[1949]: I0116 09:02:51.795261 1949 scope.go:117] "RemoveContainer" containerID="8a0dda2cac5c314c6b3184c5928db44d77e1102e3b6600b6d25bac6acade549b" Jan 16 09:02:51.797883 containerd[1600]: time="2025-01-16T09:02:51.797437190Z" level=info msg="RemoveContainer for \"8a0dda2cac5c314c6b3184c5928db44d77e1102e3b6600b6d25bac6acade549b\"" Jan 16 09:02:51.801144 containerd[1600]: time="2025-01-16T09:02:51.800946620Z" level=info msg="RemoveContainer for \"8a0dda2cac5c314c6b3184c5928db44d77e1102e3b6600b6d25bac6acade549b\" returns successfully" Jan 16 09:02:51.801581 kubelet[1949]: I0116 09:02:51.801550 1949 scope.go:117] "RemoveContainer" containerID="a84af7cfdf48473bbd96b4791ba0c9960999a2ab6515e1032ec4026f097a0ab4" Jan 16 09:02:51.805512 containerd[1600]: time="2025-01-16T09:02:51.805455356Z" level=info msg="RemoveContainer for \"a84af7cfdf48473bbd96b4791ba0c9960999a2ab6515e1032ec4026f097a0ab4\"" Jan 16 09:02:51.809625 containerd[1600]: time="2025-01-16T09:02:51.809090994Z" level=info msg="RemoveContainer for \"a84af7cfdf48473bbd96b4791ba0c9960999a2ab6515e1032ec4026f097a0ab4\" returns successfully" Jan 16 09:02:51.810289 kubelet[1949]: I0116 09:02:51.810109 1949 scope.go:117] "RemoveContainer" containerID="428d2254e2ab9e3c32550b011a5bb3edb1cfad3ca0f3bec43b0551a4e7ccfa25" Jan 16 09:02:51.811880 kubelet[1949]: E0116 09:02:51.810925 1949 remote_runtime.go:432] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = an error occurred when try to find container \"428d2254e2ab9e3c32550b011a5bb3edb1cfad3ca0f3bec43b0551a4e7ccfa25\": not found" containerID="428d2254e2ab9e3c32550b011a5bb3edb1cfad3ca0f3bec43b0551a4e7ccfa25" Jan 16 09:02:51.811880 kubelet[1949]: I0116 09:02:51.811073 1949 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"containerd","ID":"428d2254e2ab9e3c32550b011a5bb3edb1cfad3ca0f3bec43b0551a4e7ccfa25"} err="failed to get container status \"428d2254e2ab9e3c32550b011a5bb3edb1cfad3ca0f3bec43b0551a4e7ccfa25\": rpc error: code = NotFound desc = an error occurred when try to find container \"428d2254e2ab9e3c32550b011a5bb3edb1cfad3ca0f3bec43b0551a4e7ccfa25\": not found" Jan 16 09:02:51.811880 kubelet[1949]: I0116 09:02:51.811101 1949 scope.go:117] "RemoveContainer" containerID="c3add8e0659b1dea214adff5b8c17a68ce1c0c2482f6c22f91aa59871db2e414" Jan 16 09:02:51.811880 kubelet[1949]: E0116 09:02:51.811639 1949 remote_runtime.go:432] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = an error occurred when try to find container \"c3add8e0659b1dea214adff5b8c17a68ce1c0c2482f6c22f91aa59871db2e414\": not found" 
containerID="c3add8e0659b1dea214adff5b8c17a68ce1c0c2482f6c22f91aa59871db2e414" Jan 16 09:02:51.811880 kubelet[1949]: I0116 09:02:51.811698 1949 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"containerd","ID":"c3add8e0659b1dea214adff5b8c17a68ce1c0c2482f6c22f91aa59871db2e414"} err="failed to get container status \"c3add8e0659b1dea214adff5b8c17a68ce1c0c2482f6c22f91aa59871db2e414\": rpc error: code = NotFound desc = an error occurred when try to find container \"c3add8e0659b1dea214adff5b8c17a68ce1c0c2482f6c22f91aa59871db2e414\": not found" Jan 16 09:02:51.811880 kubelet[1949]: I0116 09:02:51.811721 1949 scope.go:117] "RemoveContainer" containerID="203e3d8be195940057efdeb6ca5dc73ca3022453b6a9c7f82508d4955fad4042" Jan 16 09:02:51.813139 containerd[1600]: time="2025-01-16T09:02:51.810603357Z" level=error msg="ContainerStatus for \"428d2254e2ab9e3c32550b011a5bb3edb1cfad3ca0f3bec43b0551a4e7ccfa25\" failed" error="rpc error: code = NotFound desc = an error occurred when try to find container \"428d2254e2ab9e3c32550b011a5bb3edb1cfad3ca0f3bec43b0551a4e7ccfa25\": not found" Jan 16 09:02:51.813139 containerd[1600]: time="2025-01-16T09:02:51.811391207Z" level=error msg="ContainerStatus for \"c3add8e0659b1dea214adff5b8c17a68ce1c0c2482f6c22f91aa59871db2e414\" failed" error="rpc error: code = NotFound desc = an error occurred when try to find container \"c3add8e0659b1dea214adff5b8c17a68ce1c0c2482f6c22f91aa59871db2e414\": not found" Jan 16 09:02:51.813369 containerd[1600]: time="2025-01-16T09:02:51.813321074Z" level=error msg="ContainerStatus for \"203e3d8be195940057efdeb6ca5dc73ca3022453b6a9c7f82508d4955fad4042\" failed" error="rpc error: code = NotFound desc = an error occurred when try to find container \"203e3d8be195940057efdeb6ca5dc73ca3022453b6a9c7f82508d4955fad4042\": not found" Jan 16 09:02:51.813790 kubelet[1949]: E0116 09:02:51.813588 1949 remote_runtime.go:432] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = an error occurred when try to find container \"203e3d8be195940057efdeb6ca5dc73ca3022453b6a9c7f82508d4955fad4042\": not found" containerID="203e3d8be195940057efdeb6ca5dc73ca3022453b6a9c7f82508d4955fad4042" Jan 16 09:02:51.813790 kubelet[1949]: I0116 09:02:51.813648 1949 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"containerd","ID":"203e3d8be195940057efdeb6ca5dc73ca3022453b6a9c7f82508d4955fad4042"} err="failed to get container status \"203e3d8be195940057efdeb6ca5dc73ca3022453b6a9c7f82508d4955fad4042\": rpc error: code = NotFound desc = an error occurred when try to find container \"203e3d8be195940057efdeb6ca5dc73ca3022453b6a9c7f82508d4955fad4042\": not found" Jan 16 09:02:51.813790 kubelet[1949]: I0116 09:02:51.813667 1949 scope.go:117] "RemoveContainer" containerID="8a0dda2cac5c314c6b3184c5928db44d77e1102e3b6600b6d25bac6acade549b" Jan 16 09:02:51.814163 containerd[1600]: time="2025-01-16T09:02:51.814084915Z" level=error msg="ContainerStatus for \"8a0dda2cac5c314c6b3184c5928db44d77e1102e3b6600b6d25bac6acade549b\" failed" error="rpc error: code = NotFound desc = an error occurred when try to find container \"8a0dda2cac5c314c6b3184c5928db44d77e1102e3b6600b6d25bac6acade549b\": not found" Jan 16 09:02:51.814398 kubelet[1949]: E0116 09:02:51.814374 1949 remote_runtime.go:432] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = an error occurred when try to find container \"8a0dda2cac5c314c6b3184c5928db44d77e1102e3b6600b6d25bac6acade549b\": not found" 
containerID="8a0dda2cac5c314c6b3184c5928db44d77e1102e3b6600b6d25bac6acade549b" Jan 16 09:02:51.814463 kubelet[1949]: I0116 09:02:51.814436 1949 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"containerd","ID":"8a0dda2cac5c314c6b3184c5928db44d77e1102e3b6600b6d25bac6acade549b"} err="failed to get container status \"8a0dda2cac5c314c6b3184c5928db44d77e1102e3b6600b6d25bac6acade549b\": rpc error: code = NotFound desc = an error occurred when try to find container \"8a0dda2cac5c314c6b3184c5928db44d77e1102e3b6600b6d25bac6acade549b\": not found" Jan 16 09:02:51.814463 kubelet[1949]: I0116 09:02:51.814453 1949 scope.go:117] "RemoveContainer" containerID="a84af7cfdf48473bbd96b4791ba0c9960999a2ab6515e1032ec4026f097a0ab4" Jan 16 09:02:51.814938 containerd[1600]: time="2025-01-16T09:02:51.814876608Z" level=error msg="ContainerStatus for \"a84af7cfdf48473bbd96b4791ba0c9960999a2ab6515e1032ec4026f097a0ab4\" failed" error="rpc error: code = NotFound desc = an error occurred when try to find container \"a84af7cfdf48473bbd96b4791ba0c9960999a2ab6515e1032ec4026f097a0ab4\": not found" Jan 16 09:02:51.815329 kubelet[1949]: E0116 09:02:51.815210 1949 remote_runtime.go:432] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = an error occurred when try to find container \"a84af7cfdf48473bbd96b4791ba0c9960999a2ab6515e1032ec4026f097a0ab4\": not found" containerID="a84af7cfdf48473bbd96b4791ba0c9960999a2ab6515e1032ec4026f097a0ab4" Jan 16 09:02:51.815329 kubelet[1949]: I0116 09:02:51.815263 1949 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"containerd","ID":"a84af7cfdf48473bbd96b4791ba0c9960999a2ab6515e1032ec4026f097a0ab4"} err="failed to get container status \"a84af7cfdf48473bbd96b4791ba0c9960999a2ab6515e1032ec4026f097a0ab4\": rpc error: code = NotFound desc = an error occurred when try to find container \"a84af7cfdf48473bbd96b4791ba0c9960999a2ab6515e1032ec4026f097a0ab4\": not found" Jan 16 09:02:51.857804 kubelet[1949]: I0116 09:02:51.857700 1949 reconciler_common.go:300] "Volume detached for volume \"bpf-maps\" (UniqueName: \"kubernetes.io/host-path/5f7fd5e7-27fc-405d-8080-db1fd7edfbc2-bpf-maps\") on node \"143.110.229.235\" DevicePath \"\"" Jan 16 09:02:51.857804 kubelet[1949]: I0116 09:02:51.857768 1949 reconciler_common.go:300] "Volume detached for volume \"kube-api-access-2hsgs\" (UniqueName: \"kubernetes.io/projected/5f7fd5e7-27fc-405d-8080-db1fd7edfbc2-kube-api-access-2hsgs\") on node \"143.110.229.235\" DevicePath \"\"" Jan 16 09:02:51.857804 kubelet[1949]: I0116 09:02:51.857787 1949 reconciler_common.go:300] "Volume detached for volume \"clustermesh-secrets\" (UniqueName: \"kubernetes.io/secret/5f7fd5e7-27fc-405d-8080-db1fd7edfbc2-clustermesh-secrets\") on node \"143.110.229.235\" DevicePath \"\"" Jan 16 09:02:51.857804 kubelet[1949]: I0116 09:02:51.857808 1949 reconciler_common.go:300] "Volume detached for volume \"host-proc-sys-kernel\" (UniqueName: \"kubernetes.io/host-path/5f7fd5e7-27fc-405d-8080-db1fd7edfbc2-host-proc-sys-kernel\") on node \"143.110.229.235\" DevicePath \"\"" Jan 16 09:02:51.857804 kubelet[1949]: I0116 09:02:51.857827 1949 reconciler_common.go:300] "Volume detached for volume \"hubble-tls\" (UniqueName: \"kubernetes.io/projected/5f7fd5e7-27fc-405d-8080-db1fd7edfbc2-hubble-tls\") on node \"143.110.229.235\" DevicePath \"\"" Jan 16 09:02:51.858707 kubelet[1949]: I0116 09:02:51.857843 1949 reconciler_common.go:300] "Volume detached for volume \"cilium-config-path\" (UniqueName: 
\"kubernetes.io/configmap/5f7fd5e7-27fc-405d-8080-db1fd7edfbc2-cilium-config-path\") on node \"143.110.229.235\" DevicePath \"\"" Jan 16 09:02:51.858707 kubelet[1949]: I0116 09:02:51.857860 1949 reconciler_common.go:300] "Volume detached for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/5f7fd5e7-27fc-405d-8080-db1fd7edfbc2-lib-modules\") on node \"143.110.229.235\" DevicePath \"\"" Jan 16 09:02:51.858707 kubelet[1949]: I0116 09:02:51.857877 1949 reconciler_common.go:300] "Volume detached for volume \"cilium-cgroup\" (UniqueName: \"kubernetes.io/host-path/5f7fd5e7-27fc-405d-8080-db1fd7edfbc2-cilium-cgroup\") on node \"143.110.229.235\" DevicePath \"\"" Jan 16 09:02:51.858707 kubelet[1949]: I0116 09:02:51.857896 1949 reconciler_common.go:300] "Volume detached for volume \"etc-cni-netd\" (UniqueName: \"kubernetes.io/host-path/5f7fd5e7-27fc-405d-8080-db1fd7edfbc2-etc-cni-netd\") on node \"143.110.229.235\" DevicePath \"\"" Jan 16 09:02:51.858707 kubelet[1949]: I0116 09:02:51.857915 1949 reconciler_common.go:300] "Volume detached for volume \"xtables-lock\" (UniqueName: \"kubernetes.io/host-path/5f7fd5e7-27fc-405d-8080-db1fd7edfbc2-xtables-lock\") on node \"143.110.229.235\" DevicePath \"\"" Jan 16 09:02:51.858707 kubelet[1949]: I0116 09:02:51.857933 1949 reconciler_common.go:300] "Volume detached for volume \"cilium-run\" (UniqueName: \"kubernetes.io/host-path/5f7fd5e7-27fc-405d-8080-db1fd7edfbc2-cilium-run\") on node \"143.110.229.235\" DevicePath \"\"" Jan 16 09:02:51.858707 kubelet[1949]: I0116 09:02:51.857949 1949 reconciler_common.go:300] "Volume detached for volume \"hostproc\" (UniqueName: \"kubernetes.io/host-path/5f7fd5e7-27fc-405d-8080-db1fd7edfbc2-hostproc\") on node \"143.110.229.235\" DevicePath \"\"" Jan 16 09:02:51.858707 kubelet[1949]: I0116 09:02:51.857999 1949 reconciler_common.go:300] "Volume detached for volume \"host-proc-sys-net\" (UniqueName: \"kubernetes.io/host-path/5f7fd5e7-27fc-405d-8080-db1fd7edfbc2-host-proc-sys-net\") on node \"143.110.229.235\" DevicePath \"\"" Jan 16 09:02:51.859165 kubelet[1949]: I0116 09:02:51.858016 1949 reconciler_common.go:300] "Volume detached for volume \"cni-path\" (UniqueName: \"kubernetes.io/host-path/5f7fd5e7-27fc-405d-8080-db1fd7edfbc2-cni-path\") on node \"143.110.229.235\" DevicePath \"\"" Jan 16 09:02:52.168914 kubelet[1949]: E0116 09:02:52.168776 1949 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Jan 16 09:02:52.300266 kubelet[1949]: I0116 09:02:52.300160 1949 kubelet_volumes.go:161] "Cleaned up orphaned pod volumes dir" podUID="5f7fd5e7-27fc-405d-8080-db1fd7edfbc2" path="/var/lib/kubelet/pods/5f7fd5e7-27fc-405d-8080-db1fd7edfbc2/volumes" Jan 16 09:02:52.315122 systemd[1]: var-lib-kubelet-pods-5f7fd5e7\x2d27fc\x2d405d\x2d8080\x2ddb1fd7edfbc2-volumes-kubernetes.io\x7eprojected-kube\x2dapi\x2daccess\x2d2hsgs.mount: Deactivated successfully. Jan 16 09:02:52.315408 systemd[1]: var-lib-kubelet-pods-5f7fd5e7\x2d27fc\x2d405d\x2d8080\x2ddb1fd7edfbc2-volumes-kubernetes.io\x7esecret-clustermesh\x2dsecrets.mount: Deactivated successfully. 
Jan 16 09:02:53.169664 kubelet[1949]: E0116 09:02:53.169513 1949 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Jan 16 09:02:53.247744 kubelet[1949]: E0116 09:02:53.247676 1949 kubelet.go:2892] "Container runtime network not ready" networkReady="NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" Jan 16 09:02:54.170517 kubelet[1949]: E0116 09:02:54.170438 1949 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Jan 16 09:02:54.377998 kubelet[1949]: I0116 09:02:54.376556 1949 topology_manager.go:215] "Topology Admit Handler" podUID="c17b9bde-955f-4758-b6ff-d5416a21154e" podNamespace="kube-system" podName="cilium-operator-5cc964979-cr99p" Jan 16 09:02:54.377998 kubelet[1949]: E0116 09:02:54.376659 1949 cpu_manager.go:395] "RemoveStaleState: removing container" podUID="5f7fd5e7-27fc-405d-8080-db1fd7edfbc2" containerName="apply-sysctl-overwrites" Jan 16 09:02:54.377998 kubelet[1949]: E0116 09:02:54.376682 1949 cpu_manager.go:395] "RemoveStaleState: removing container" podUID="5f7fd5e7-27fc-405d-8080-db1fd7edfbc2" containerName="mount-cgroup" Jan 16 09:02:54.377998 kubelet[1949]: E0116 09:02:54.376690 1949 cpu_manager.go:395] "RemoveStaleState: removing container" podUID="5f7fd5e7-27fc-405d-8080-db1fd7edfbc2" containerName="mount-bpf-fs" Jan 16 09:02:54.377998 kubelet[1949]: E0116 09:02:54.376700 1949 cpu_manager.go:395] "RemoveStaleState: removing container" podUID="5f7fd5e7-27fc-405d-8080-db1fd7edfbc2" containerName="clean-cilium-state" Jan 16 09:02:54.377998 kubelet[1949]: E0116 09:02:54.376708 1949 cpu_manager.go:395] "RemoveStaleState: removing container" podUID="5f7fd5e7-27fc-405d-8080-db1fd7edfbc2" containerName="cilium-agent" Jan 16 09:02:54.377998 kubelet[1949]: I0116 09:02:54.376728 1949 memory_manager.go:354] "RemoveStaleState removing state" podUID="5f7fd5e7-27fc-405d-8080-db1fd7edfbc2" containerName="cilium-agent" Jan 16 09:02:54.427583 kubelet[1949]: I0116 09:02:54.427384 1949 topology_manager.go:215] "Topology Admit Handler" podUID="52dcd437-b506-48c6-aab7-d926092eb12e" podNamespace="kube-system" podName="cilium-79pbl" Jan 16 09:02:54.481561 kubelet[1949]: I0116 09:02:54.481484 1949 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-q88w7\" (UniqueName: \"kubernetes.io/projected/c17b9bde-955f-4758-b6ff-d5416a21154e-kube-api-access-q88w7\") pod \"cilium-operator-5cc964979-cr99p\" (UID: \"c17b9bde-955f-4758-b6ff-d5416a21154e\") " pod="kube-system/cilium-operator-5cc964979-cr99p" Jan 16 09:02:54.482012 kubelet[1949]: I0116 09:02:54.481988 1949 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cilium-config-path\" (UniqueName: \"kubernetes.io/configmap/c17b9bde-955f-4758-b6ff-d5416a21154e-cilium-config-path\") pod \"cilium-operator-5cc964979-cr99p\" (UID: \"c17b9bde-955f-4758-b6ff-d5416a21154e\") " pod="kube-system/cilium-operator-5cc964979-cr99p" Jan 16 09:02:54.592300 kubelet[1949]: I0116 09:02:54.592238 1949 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cilium-ipsec-secrets\" (UniqueName: \"kubernetes.io/secret/52dcd437-b506-48c6-aab7-d926092eb12e-cilium-ipsec-secrets\") pod \"cilium-79pbl\" (UID: \"52dcd437-b506-48c6-aab7-d926092eb12e\") " pod="kube-system/cilium-79pbl" Jan 16 09:02:54.592906 kubelet[1949]: I0116 09:02:54.592835 
1949 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cilium-run\" (UniqueName: \"kubernetes.io/host-path/52dcd437-b506-48c6-aab7-d926092eb12e-cilium-run\") pod \"cilium-79pbl\" (UID: \"52dcd437-b506-48c6-aab7-d926092eb12e\") " pod="kube-system/cilium-79pbl" Jan 16 09:02:54.594018 kubelet[1949]: I0116 09:02:54.593137 1949 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cilium-cgroup\" (UniqueName: \"kubernetes.io/host-path/52dcd437-b506-48c6-aab7-d926092eb12e-cilium-cgroup\") pod \"cilium-79pbl\" (UID: \"52dcd437-b506-48c6-aab7-d926092eb12e\") " pod="kube-system/cilium-79pbl" Jan 16 09:02:54.594018 kubelet[1949]: I0116 09:02:54.593261 1949 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-proc-sys-kernel\" (UniqueName: \"kubernetes.io/host-path/52dcd437-b506-48c6-aab7-d926092eb12e-host-proc-sys-kernel\") pod \"cilium-79pbl\" (UID: \"52dcd437-b506-48c6-aab7-d926092eb12e\") " pod="kube-system/cilium-79pbl" Jan 16 09:02:54.594018 kubelet[1949]: I0116 09:02:54.593301 1949 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"bpf-maps\" (UniqueName: \"kubernetes.io/host-path/52dcd437-b506-48c6-aab7-d926092eb12e-bpf-maps\") pod \"cilium-79pbl\" (UID: \"52dcd437-b506-48c6-aab7-d926092eb12e\") " pod="kube-system/cilium-79pbl" Jan 16 09:02:54.594018 kubelet[1949]: I0116 09:02:54.593331 1949 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/52dcd437-b506-48c6-aab7-d926092eb12e-lib-modules\") pod \"cilium-79pbl\" (UID: \"52dcd437-b506-48c6-aab7-d926092eb12e\") " pod="kube-system/cilium-79pbl" Jan 16 09:02:54.594018 kubelet[1949]: I0116 09:02:54.593364 1949 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"xtables-lock\" (UniqueName: \"kubernetes.io/host-path/52dcd437-b506-48c6-aab7-d926092eb12e-xtables-lock\") pod \"cilium-79pbl\" (UID: \"52dcd437-b506-48c6-aab7-d926092eb12e\") " pod="kube-system/cilium-79pbl" Jan 16 09:02:54.594018 kubelet[1949]: I0116 09:02:54.593392 1949 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"hubble-tls\" (UniqueName: \"kubernetes.io/projected/52dcd437-b506-48c6-aab7-d926092eb12e-hubble-tls\") pod \"cilium-79pbl\" (UID: \"52dcd437-b506-48c6-aab7-d926092eb12e\") " pod="kube-system/cilium-79pbl" Jan 16 09:02:54.594514 kubelet[1949]: I0116 09:02:54.593414 1949 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-v5r9f\" (UniqueName: \"kubernetes.io/projected/52dcd437-b506-48c6-aab7-d926092eb12e-kube-api-access-v5r9f\") pod \"cilium-79pbl\" (UID: \"52dcd437-b506-48c6-aab7-d926092eb12e\") " pod="kube-system/cilium-79pbl" Jan 16 09:02:54.594514 kubelet[1949]: I0116 09:02:54.593481 1949 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"hostproc\" (UniqueName: \"kubernetes.io/host-path/52dcd437-b506-48c6-aab7-d926092eb12e-hostproc\") pod \"cilium-79pbl\" (UID: \"52dcd437-b506-48c6-aab7-d926092eb12e\") " pod="kube-system/cilium-79pbl" Jan 16 09:02:54.594514 kubelet[1949]: I0116 09:02:54.593538 1949 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-proc-sys-net\" (UniqueName: 
\"kubernetes.io/host-path/52dcd437-b506-48c6-aab7-d926092eb12e-host-proc-sys-net\") pod \"cilium-79pbl\" (UID: \"52dcd437-b506-48c6-aab7-d926092eb12e\") " pod="kube-system/cilium-79pbl" Jan 16 09:02:54.594514 kubelet[1949]: I0116 09:02:54.593567 1949 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"clustermesh-secrets\" (UniqueName: \"kubernetes.io/secret/52dcd437-b506-48c6-aab7-d926092eb12e-clustermesh-secrets\") pod \"cilium-79pbl\" (UID: \"52dcd437-b506-48c6-aab7-d926092eb12e\") " pod="kube-system/cilium-79pbl" Jan 16 09:02:54.594514 kubelet[1949]: I0116 09:02:54.593598 1949 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cilium-config-path\" (UniqueName: \"kubernetes.io/configmap/52dcd437-b506-48c6-aab7-d926092eb12e-cilium-config-path\") pod \"cilium-79pbl\" (UID: \"52dcd437-b506-48c6-aab7-d926092eb12e\") " pod="kube-system/cilium-79pbl" Jan 16 09:02:54.594514 kubelet[1949]: I0116 09:02:54.593651 1949 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cni-path\" (UniqueName: \"kubernetes.io/host-path/52dcd437-b506-48c6-aab7-d926092eb12e-cni-path\") pod \"cilium-79pbl\" (UID: \"52dcd437-b506-48c6-aab7-d926092eb12e\") " pod="kube-system/cilium-79pbl" Jan 16 09:02:54.594795 kubelet[1949]: I0116 09:02:54.593682 1949 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"etc-cni-netd\" (UniqueName: \"kubernetes.io/host-path/52dcd437-b506-48c6-aab7-d926092eb12e-etc-cni-netd\") pod \"cilium-79pbl\" (UID: \"52dcd437-b506-48c6-aab7-d926092eb12e\") " pod="kube-system/cilium-79pbl" Jan 16 09:02:54.683631 kubelet[1949]: E0116 09:02:54.682670 1949 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 67.207.67.2 67.207.67.3 67.207.67.2" Jan 16 09:02:54.686124 containerd[1600]: time="2025-01-16T09:02:54.685239296Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:cilium-operator-5cc964979-cr99p,Uid:c17b9bde-955f-4758-b6ff-d5416a21154e,Namespace:kube-system,Attempt:0,}" Jan 16 09:02:54.763496 containerd[1600]: time="2025-01-16T09:02:54.762967447Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Jan 16 09:02:54.763496 containerd[1600]: time="2025-01-16T09:02:54.763052279Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Jan 16 09:02:54.763496 containerd[1600]: time="2025-01-16T09:02:54.763073071Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Jan 16 09:02:54.763496 containerd[1600]: time="2025-01-16T09:02:54.763217457Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." 
runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Jan 16 09:02:54.849816 containerd[1600]: time="2025-01-16T09:02:54.849772780Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:cilium-operator-5cc964979-cr99p,Uid:c17b9bde-955f-4758-b6ff-d5416a21154e,Namespace:kube-system,Attempt:0,} returns sandbox id \"996ddabefa432533fd76866697af51318421158d5d68c066442ba1ef0bdecbeb\"" Jan 16 09:02:54.851936 kubelet[1949]: E0116 09:02:54.851218 1949 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 67.207.67.2 67.207.67.3 67.207.67.2" Jan 16 09:02:54.853387 containerd[1600]: time="2025-01-16T09:02:54.853281415Z" level=info msg="PullImage \"quay.io/cilium/operator-generic:v1.12.5@sha256:b296eb7f0f7656a5cc19724f40a8a7121b7fd725278b7d61dc91fe0b7ffd7c0e\"" Jan 16 09:02:55.039177 kubelet[1949]: E0116 09:02:55.035427 1949 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 67.207.67.2 67.207.67.3 67.207.67.2" Jan 16 09:02:55.039734 containerd[1600]: time="2025-01-16T09:02:55.039680775Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:cilium-79pbl,Uid:52dcd437-b506-48c6-aab7-d926092eb12e,Namespace:kube-system,Attempt:0,}" Jan 16 09:02:55.095307 containerd[1600]: time="2025-01-16T09:02:55.094826089Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Jan 16 09:02:55.095307 containerd[1600]: time="2025-01-16T09:02:55.094935309Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Jan 16 09:02:55.095307 containerd[1600]: time="2025-01-16T09:02:55.094987126Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Jan 16 09:02:55.095307 containerd[1600]: time="2025-01-16T09:02:55.095155518Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." 
runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Jan 16 09:02:55.168561 containerd[1600]: time="2025-01-16T09:02:55.168439643Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:cilium-79pbl,Uid:52dcd437-b506-48c6-aab7-d926092eb12e,Namespace:kube-system,Attempt:0,} returns sandbox id \"4418be4dcb4c3cc31b3259bd9a2c354b286529a6adae6cca1bf45727c4b751f2\"" Jan 16 09:02:55.170188 kubelet[1949]: E0116 09:02:55.169760 1949 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 67.207.67.2 67.207.67.3 67.207.67.2" Jan 16 09:02:55.170751 kubelet[1949]: E0116 09:02:55.170589 1949 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Jan 16 09:02:55.175591 containerd[1600]: time="2025-01-16T09:02:55.175074191Z" level=info msg="CreateContainer within sandbox \"4418be4dcb4c3cc31b3259bd9a2c354b286529a6adae6cca1bf45727c4b751f2\" for container &ContainerMetadata{Name:mount-cgroup,Attempt:0,}" Jan 16 09:02:55.196252 containerd[1600]: time="2025-01-16T09:02:55.195913379Z" level=info msg="CreateContainer within sandbox \"4418be4dcb4c3cc31b3259bd9a2c354b286529a6adae6cca1bf45727c4b751f2\" for &ContainerMetadata{Name:mount-cgroup,Attempt:0,} returns container id \"8f028d553dce835753891b94385505fcf160e485422d0769e73f2552c1b854cf\"" Jan 16 09:02:55.197532 containerd[1600]: time="2025-01-16T09:02:55.197441888Z" level=info msg="StartContainer for \"8f028d553dce835753891b94385505fcf160e485422d0769e73f2552c1b854cf\"" Jan 16 09:02:55.322832 containerd[1600]: time="2025-01-16T09:02:55.321385791Z" level=info msg="StartContainer for \"8f028d553dce835753891b94385505fcf160e485422d0769e73f2552c1b854cf\" returns successfully" Jan 16 09:02:55.389862 containerd[1600]: time="2025-01-16T09:02:55.389022495Z" level=info msg="shim disconnected" id=8f028d553dce835753891b94385505fcf160e485422d0769e73f2552c1b854cf namespace=k8s.io Jan 16 09:02:55.389862 containerd[1600]: time="2025-01-16T09:02:55.389234135Z" level=warning msg="cleaning up after shim disconnected" id=8f028d553dce835753891b94385505fcf160e485422d0769e73f2552c1b854cf namespace=k8s.io Jan 16 09:02:55.389862 containerd[1600]: time="2025-01-16T09:02:55.389250142Z" level=info msg="cleaning up dead shim" namespace=k8s.io Jan 16 09:02:55.773061 kubelet[1949]: E0116 09:02:55.772844 1949 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 67.207.67.2 67.207.67.3 67.207.67.2" Jan 16 09:02:55.787157 containerd[1600]: time="2025-01-16T09:02:55.787044507Z" level=info msg="CreateContainer within sandbox \"4418be4dcb4c3cc31b3259bd9a2c354b286529a6adae6cca1bf45727c4b751f2\" for container &ContainerMetadata{Name:apply-sysctl-overwrites,Attempt:0,}" Jan 16 09:02:55.820913 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount632082882.mount: Deactivated successfully. 
Jan 16 09:02:55.829472 containerd[1600]: time="2025-01-16T09:02:55.829377268Z" level=info msg="CreateContainer within sandbox \"4418be4dcb4c3cc31b3259bd9a2c354b286529a6adae6cca1bf45727c4b751f2\" for &ContainerMetadata{Name:apply-sysctl-overwrites,Attempt:0,} returns container id \"32af1a4d69564970584ced7d270a4aa1b553afb8b26e22e7e53017b247f85bbd\"" Jan 16 09:02:55.834834 containerd[1600]: time="2025-01-16T09:02:55.830727349Z" level=info msg="StartContainer for \"32af1a4d69564970584ced7d270a4aa1b553afb8b26e22e7e53017b247f85bbd\"" Jan 16 09:02:55.958853 containerd[1600]: time="2025-01-16T09:02:55.956809964Z" level=info msg="StartContainer for \"32af1a4d69564970584ced7d270a4aa1b553afb8b26e22e7e53017b247f85bbd\" returns successfully" Jan 16 09:02:56.030016 containerd[1600]: time="2025-01-16T09:02:56.029795490Z" level=info msg="shim disconnected" id=32af1a4d69564970584ced7d270a4aa1b553afb8b26e22e7e53017b247f85bbd namespace=k8s.io Jan 16 09:02:56.030387 containerd[1600]: time="2025-01-16T09:02:56.029884807Z" level=warning msg="cleaning up after shim disconnected" id=32af1a4d69564970584ced7d270a4aa1b553afb8b26e22e7e53017b247f85bbd namespace=k8s.io Jan 16 09:02:56.030387 containerd[1600]: time="2025-01-16T09:02:56.030264307Z" level=info msg="cleaning up dead shim" namespace=k8s.io Jan 16 09:02:56.171438 kubelet[1949]: E0116 09:02:56.171334 1949 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Jan 16 09:02:56.666051 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-32af1a4d69564970584ced7d270a4aa1b553afb8b26e22e7e53017b247f85bbd-rootfs.mount: Deactivated successfully. Jan 16 09:02:56.783967 kubelet[1949]: E0116 09:02:56.783015 1949 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 67.207.67.2 67.207.67.3 67.207.67.2" Jan 16 09:02:56.787899 containerd[1600]: time="2025-01-16T09:02:56.787616476Z" level=info msg="CreateContainer within sandbox \"4418be4dcb4c3cc31b3259bd9a2c354b286529a6adae6cca1bf45727c4b751f2\" for container &ContainerMetadata{Name:mount-bpf-fs,Attempt:0,}" Jan 16 09:02:56.835627 containerd[1600]: time="2025-01-16T09:02:56.835487924Z" level=info msg="CreateContainer within sandbox \"4418be4dcb4c3cc31b3259bd9a2c354b286529a6adae6cca1bf45727c4b751f2\" for &ContainerMetadata{Name:mount-bpf-fs,Attempt:0,} returns container id \"9a7a80e69447ee026fc979e2fd9509e48410ebbbffdb106c6d453fc173e6dbb3\"" Jan 16 09:02:56.841788 containerd[1600]: time="2025-01-16T09:02:56.841351631Z" level=info msg="StartContainer for \"9a7a80e69447ee026fc979e2fd9509e48410ebbbffdb106c6d453fc173e6dbb3\"" Jan 16 09:02:56.996921 containerd[1600]: time="2025-01-16T09:02:56.996546083Z" level=info msg="StartContainer for \"9a7a80e69447ee026fc979e2fd9509e48410ebbbffdb106c6d453fc173e6dbb3\" returns successfully" Jan 16 09:02:57.071591 containerd[1600]: time="2025-01-16T09:02:57.071191714Z" level=info msg="shim disconnected" id=9a7a80e69447ee026fc979e2fd9509e48410ebbbffdb106c6d453fc173e6dbb3 namespace=k8s.io Jan 16 09:02:57.071591 containerd[1600]: time="2025-01-16T09:02:57.071280172Z" level=warning msg="cleaning up after shim disconnected" id=9a7a80e69447ee026fc979e2fd9509e48410ebbbffdb106c6d453fc173e6dbb3 namespace=k8s.io Jan 16 09:02:57.071591 containerd[1600]: time="2025-01-16T09:02:57.071295423Z" level=info msg="cleaning up dead shim" namespace=k8s.io Jan 16 09:02:57.172313 kubelet[1949]: E0116 09:02:57.172213 1949 file_linux.go:61] "Unable to read 
config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Jan 16 09:02:57.665808 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-9a7a80e69447ee026fc979e2fd9509e48410ebbbffdb106c6d453fc173e6dbb3-rootfs.mount: Deactivated successfully. Jan 16 09:02:57.794663 kubelet[1949]: E0116 09:02:57.794042 1949 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 67.207.67.2 67.207.67.3 67.207.67.2" Jan 16 09:02:57.801246 containerd[1600]: time="2025-01-16T09:02:57.800134486Z" level=info msg="CreateContainer within sandbox \"4418be4dcb4c3cc31b3259bd9a2c354b286529a6adae6cca1bf45727c4b751f2\" for container &ContainerMetadata{Name:clean-cilium-state,Attempt:0,}" Jan 16 09:02:57.823848 containerd[1600]: time="2025-01-16T09:02:57.822491373Z" level=info msg="CreateContainer within sandbox \"4418be4dcb4c3cc31b3259bd9a2c354b286529a6adae6cca1bf45727c4b751f2\" for &ContainerMetadata{Name:clean-cilium-state,Attempt:0,} returns container id \"2f74108d616008f8d14d38cd7644322e409086d419544c52eba650d66236182a\"" Jan 16 09:02:57.827065 containerd[1600]: time="2025-01-16T09:02:57.826148336Z" level=info msg="StartContainer for \"2f74108d616008f8d14d38cd7644322e409086d419544c52eba650d66236182a\"" Jan 16 09:02:57.952198 containerd[1600]: time="2025-01-16T09:02:57.952013876Z" level=info msg="StartContainer for \"2f74108d616008f8d14d38cd7644322e409086d419544c52eba650d66236182a\" returns successfully" Jan 16 09:02:58.028053 kubelet[1949]: E0116 09:02:58.027984 1949 file.go:104] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Jan 16 09:02:58.039178 containerd[1600]: time="2025-01-16T09:02:58.038845199Z" level=info msg="shim disconnected" id=2f74108d616008f8d14d38cd7644322e409086d419544c52eba650d66236182a namespace=k8s.io Jan 16 09:02:58.039178 containerd[1600]: time="2025-01-16T09:02:58.038984322Z" level=warning msg="cleaning up after shim disconnected" id=2f74108d616008f8d14d38cd7644322e409086d419544c52eba650d66236182a namespace=k8s.io Jan 16 09:02:58.039178 containerd[1600]: time="2025-01-16T09:02:58.039005448Z" level=info msg="cleaning up dead shim" namespace=k8s.io Jan 16 09:02:58.083541 containerd[1600]: time="2025-01-16T09:02:58.083228946Z" level=warning msg="cleanup warnings time=\"2025-01-16T09:02:58Z\" level=warning msg=\"failed to remove runc container\" error=\"runc did not terminate successfully: exit status 255: \" runtime=io.containerd.runc.v2\n" namespace=k8s.io Jan 16 09:02:58.173542 kubelet[1949]: E0116 09:02:58.173447 1949 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Jan 16 09:02:58.249612 kubelet[1949]: E0116 09:02:58.249379 1949 kubelet.go:2892] "Container runtime network not ready" networkReady="NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" Jan 16 09:02:58.666708 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-2f74108d616008f8d14d38cd7644322e409086d419544c52eba650d66236182a-rootfs.mount: Deactivated successfully. 
Jan 16 09:02:58.798998 kubelet[1949]: E0116 09:02:58.798216 1949 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 67.207.67.2 67.207.67.3 67.207.67.2" Jan 16 09:02:58.820212 containerd[1600]: time="2025-01-16T09:02:58.805351447Z" level=info msg="CreateContainer within sandbox \"4418be4dcb4c3cc31b3259bd9a2c354b286529a6adae6cca1bf45727c4b751f2\" for container &ContainerMetadata{Name:cilium-agent,Attempt:0,}" Jan 16 09:02:58.894060 containerd[1600]: time="2025-01-16T09:02:58.893699043Z" level=info msg="CreateContainer within sandbox \"4418be4dcb4c3cc31b3259bd9a2c354b286529a6adae6cca1bf45727c4b751f2\" for &ContainerMetadata{Name:cilium-agent,Attempt:0,} returns container id \"7393a852a25c88abdc324cf5c2e1dcf8a35f6ca3cb01a345b29e94b4da027b2d\"" Jan 16 09:02:58.902786 containerd[1600]: time="2025-01-16T09:02:58.897602312Z" level=info msg="StartContainer for \"7393a852a25c88abdc324cf5c2e1dcf8a35f6ca3cb01a345b29e94b4da027b2d\"" Jan 16 09:02:59.056152 containerd[1600]: time="2025-01-16T09:02:59.054610984Z" level=info msg="StartContainer for \"7393a852a25c88abdc324cf5c2e1dcf8a35f6ca3cb01a345b29e94b4da027b2d\" returns successfully" Jan 16 09:02:59.174719 kubelet[1949]: E0116 09:02:59.174581 1949 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Jan 16 09:02:59.761146 kernel: alg: No test for seqiv(rfc4106(gcm(aes))) (seqiv(rfc4106-gcm-aesni)) Jan 16 09:02:59.811036 kubelet[1949]: E0116 09:02:59.810677 1949 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 67.207.67.2 67.207.67.3 67.207.67.2" Jan 16 09:03:00.175175 kubelet[1949]: E0116 09:03:00.175114 1949 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Jan 16 09:03:00.228995 kubelet[1949]: I0116 09:03:00.227634 1949 setters.go:568] "Node became not ready" node="143.110.229.235" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-01-16T09:03:00Z","lastTransitionTime":"2025-01-16T09:03:00Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized"} Jan 16 09:03:00.569729 containerd[1600]: time="2025-01-16T09:03:00.568400872Z" level=info msg="ImageCreate event name:\"quay.io/cilium/operator-generic@sha256:b296eb7f0f7656a5cc19724f40a8a7121b7fd725278b7d61dc91fe0b7ffd7c0e\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jan 16 09:03:00.572413 containerd[1600]: time="2025-01-16T09:03:00.572320434Z" level=info msg="stop pulling image quay.io/cilium/operator-generic@sha256:b296eb7f0f7656a5cc19724f40a8a7121b7fd725278b7d61dc91fe0b7ffd7c0e: active requests=0, bytes read=18907245" Jan 16 09:03:00.578718 containerd[1600]: time="2025-01-16T09:03:00.574091080Z" level=info msg="ImageCreate event name:\"sha256:ed355de9f59fe391dbe53f3c7c7a60baab3c3a9b7549aa54d10b87fff7dacf7c\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jan 16 09:03:00.578718 containerd[1600]: time="2025-01-16T09:03:00.577090319Z" level=info msg="Pulled image \"quay.io/cilium/operator-generic:v1.12.5@sha256:b296eb7f0f7656a5cc19724f40a8a7121b7fd725278b7d61dc91fe0b7ffd7c0e\" with image id \"sha256:ed355de9f59fe391dbe53f3c7c7a60baab3c3a9b7549aa54d10b87fff7dacf7c\", repo tag \"\", repo digest 
\"quay.io/cilium/operator-generic@sha256:b296eb7f0f7656a5cc19724f40a8a7121b7fd725278b7d61dc91fe0b7ffd7c0e\", size \"18897442\" in 5.723719336s" Jan 16 09:03:00.578718 containerd[1600]: time="2025-01-16T09:03:00.577148857Z" level=info msg="PullImage \"quay.io/cilium/operator-generic:v1.12.5@sha256:b296eb7f0f7656a5cc19724f40a8a7121b7fd725278b7d61dc91fe0b7ffd7c0e\" returns image reference \"sha256:ed355de9f59fe391dbe53f3c7c7a60baab3c3a9b7549aa54d10b87fff7dacf7c\"" Jan 16 09:03:00.581265 containerd[1600]: time="2025-01-16T09:03:00.581211433Z" level=info msg="CreateContainer within sandbox \"996ddabefa432533fd76866697af51318421158d5d68c066442ba1ef0bdecbeb\" for container &ContainerMetadata{Name:cilium-operator,Attempt:0,}" Jan 16 09:03:00.599074 containerd[1600]: time="2025-01-16T09:03:00.598757133Z" level=info msg="CreateContainer within sandbox \"996ddabefa432533fd76866697af51318421158d5d68c066442ba1ef0bdecbeb\" for &ContainerMetadata{Name:cilium-operator,Attempt:0,} returns container id \"13edac3a6316626c7b51d88d609ce2cec2252c4c57810dd976e6b5e265a0e707\"" Jan 16 09:03:00.601213 containerd[1600]: time="2025-01-16T09:03:00.601158227Z" level=info msg="StartContainer for \"13edac3a6316626c7b51d88d609ce2cec2252c4c57810dd976e6b5e265a0e707\"" Jan 16 09:03:00.734438 containerd[1600]: time="2025-01-16T09:03:00.733104398Z" level=info msg="StartContainer for \"13edac3a6316626c7b51d88d609ce2cec2252c4c57810dd976e6b5e265a0e707\" returns successfully" Jan 16 09:03:00.816391 kubelet[1949]: E0116 09:03:00.815543 1949 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 67.207.67.2 67.207.67.3 67.207.67.2" Jan 16 09:03:00.849736 kubelet[1949]: I0116 09:03:00.849561 1949 pod_startup_latency_tracker.go:102] "Observed pod startup duration" pod="kube-system/cilium-79pbl" podStartSLOduration=6.849493192 podStartE2EDuration="6.849493192s" podCreationTimestamp="2025-01-16 09:02:54 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-01-16 09:02:59.864095631 +0000 UTC m=+82.985980283" watchObservedRunningTime="2025-01-16 09:03:00.849493192 +0000 UTC m=+83.971377875" Jan 16 09:03:01.042853 kubelet[1949]: E0116 09:03:01.042785 1949 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 67.207.67.2 67.207.67.3 67.207.67.2" Jan 16 09:03:01.179041 kubelet[1949]: E0116 09:03:01.176386 1949 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Jan 16 09:03:01.822144 kubelet[1949]: E0116 09:03:01.820694 1949 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 67.207.67.2 67.207.67.3 67.207.67.2" Jan 16 09:03:02.178279 kubelet[1949]: E0116 09:03:02.178117 1949 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Jan 16 09:03:02.291940 kubelet[1949]: E0116 09:03:02.290910 1949 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 67.207.67.2 67.207.67.3 67.207.67.2" Jan 16 09:03:03.181440 kubelet[1949]: E0116 09:03:03.179042 1949 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Jan 16 
09:03:04.179874 kubelet[1949]: E0116 09:03:04.179812 1949 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Jan 16 09:03:05.180900 kubelet[1949]: E0116 09:03:05.180820 1949 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Jan 16 09:03:05.230180 systemd-networkd[1228]: lxc_health: Link UP Jan 16 09:03:05.234767 systemd-networkd[1228]: lxc_health: Gained carrier Jan 16 09:03:06.182046 kubelet[1949]: E0116 09:03:06.181969 1949 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Jan 16 09:03:06.453840 systemd-networkd[1228]: lxc_health: Gained IPv6LL Jan 16 09:03:07.046009 kubelet[1949]: E0116 09:03:07.044086 1949 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 67.207.67.2 67.207.67.3 67.207.67.2" Jan 16 09:03:07.076068 kubelet[1949]: I0116 09:03:07.075990 1949 pod_startup_latency_tracker.go:102] "Observed pod startup duration" pod="kube-system/cilium-operator-5cc964979-cr99p" podStartSLOduration=7.351060427 podStartE2EDuration="13.07588863s" podCreationTimestamp="2025-01-16 09:02:54 +0000 UTC" firstStartedPulling="2025-01-16 09:02:54.852742693 +0000 UTC m=+77.974627387" lastFinishedPulling="2025-01-16 09:03:00.577570957 +0000 UTC m=+83.699455590" observedRunningTime="2025-01-16 09:03:00.850883863 +0000 UTC m=+83.972768515" watchObservedRunningTime="2025-01-16 09:03:07.07588863 +0000 UTC m=+90.197773284" Jan 16 09:03:07.184006 kubelet[1949]: E0116 09:03:07.182809 1949 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Jan 16 09:03:07.861002 kubelet[1949]: E0116 09:03:07.858152 1949 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 67.207.67.2 67.207.67.3 67.207.67.2" Jan 16 09:03:08.185586 kubelet[1949]: E0116 09:03:08.183903 1949 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Jan 16 09:03:08.865547 kubelet[1949]: E0116 09:03:08.865488 1949 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 67.207.67.2 67.207.67.3 67.207.67.2" Jan 16 09:03:09.185051 kubelet[1949]: E0116 09:03:09.184740 1949 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Jan 16 09:03:10.185180 kubelet[1949]: E0116 09:03:10.185112 1949 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Jan 16 09:03:11.185734 kubelet[1949]: E0116 09:03:11.185671 1949 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Jan 16 09:03:12.186929 kubelet[1949]: E0116 09:03:12.186845 1949 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Jan 16 09:03:13.187560 kubelet[1949]: E0116 09:03:13.187474 1949 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Jan 16 09:03:14.188249 kubelet[1949]: E0116 09:03:14.188133 1949 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" 
path="/etc/kubernetes/manifests"