Jan 30 12:57:05.046770 kernel: Linux version 6.6.74-flatcar (build@pony-truck.infra.kinvolk.io) (x86_64-cros-linux-gnu-gcc (Gentoo Hardened 14.2.1_p20241116 p3) 14.2.1 20241116, GNU ld (Gentoo 2.42 p6) 2.42.0) #1 SMP PREEMPT_DYNAMIC Wed Jan 29 09:29:54 -00 2025
Jan 30 12:57:05.046810 kernel: Command line: BOOT_IMAGE=/flatcar/vmlinuz-a mount.usr=/dev/mapper/usr verity.usr=PARTUUID=7130c94a-213a-4e5a-8e26-6cce9662f132 rootflags=rw mount.usrflags=ro consoleblank=0 root=LABEL=ROOT console=ttyS0,115200n8 console=tty0 flatcar.first_boot=detected flatcar.oem.id=digitalocean verity.usrhash=fe60919b0c6f6abb7495678f87f7024e97a038fc343fa31a123a43ef5f489466
Jan 30 12:57:05.046829 kernel: BIOS-provided physical RAM map:
Jan 30 12:57:05.046840 kernel: BIOS-e820: [mem 0x0000000000000000-0x000000000009fbff] usable
Jan 30 12:57:05.046849 kernel: BIOS-e820: [mem 0x000000000009fc00-0x000000000009ffff] reserved
Jan 30 12:57:05.046860 kernel: BIOS-e820: [mem 0x00000000000f0000-0x00000000000fffff] reserved
Jan 30 12:57:05.046872 kernel: BIOS-e820: [mem 0x0000000000100000-0x000000007ffdafff] usable
Jan 30 12:57:05.046884 kernel: BIOS-e820: [mem 0x000000007ffdb000-0x000000007fffffff] reserved
Jan 30 12:57:05.046895 kernel: BIOS-e820: [mem 0x00000000feffc000-0x00000000feffffff] reserved
Jan 30 12:57:05.046906 kernel: BIOS-e820: [mem 0x00000000fffc0000-0x00000000ffffffff] reserved
Jan 30 12:57:05.046922 kernel: NX (Execute Disable) protection: active
Jan 30 12:57:05.046932 kernel: APIC: Static calls initialized
Jan 30 12:57:05.046949 kernel: SMBIOS 2.8 present.
Jan 30 12:57:05.049258 kernel: DMI: DigitalOcean Droplet/Droplet, BIOS 20171212 12/12/2017
Jan 30 12:57:05.049309 kernel: Hypervisor detected: KVM
Jan 30 12:57:05.049323 kernel: kvm-clock: Using msrs 4b564d01 and 4b564d00
Jan 30 12:57:05.049353 kernel: kvm-clock: using sched offset of 3578976749 cycles
Jan 30 12:57:05.049368 kernel: clocksource: kvm-clock: mask: 0xffffffffffffffff max_cycles: 0x1cd42e4dffb, max_idle_ns: 881590591483 ns
Jan 30 12:57:05.049380 kernel: tsc: Detected 1995.307 MHz processor
Jan 30 12:57:05.049389 kernel: e820: update [mem 0x00000000-0x00000fff] usable ==> reserved
Jan 30 12:57:05.049397 kernel: e820: remove [mem 0x000a0000-0x000fffff] usable
Jan 30 12:57:05.049405 kernel: last_pfn = 0x7ffdb max_arch_pfn = 0x400000000
Jan 30 12:57:05.049413 kernel: MTRR map: 4 entries (3 fixed + 1 variable; max 19), built from 8 variable MTRRs
Jan 30 12:57:05.049421 kernel: x86/PAT: Configuration [0-7]: WB WC UC- UC WB WP UC- WT
Jan 30 12:57:05.049432 kernel: ACPI: Early table checksum verification disabled
Jan 30 12:57:05.049439 kernel: ACPI: RSDP 0x00000000000F5950 000014 (v00 BOCHS )
Jan 30 12:57:05.049447 kernel: ACPI: RSDT 0x000000007FFE1986 000038 (v01 BOCHS BXPC 00000001 BXPC 00000001)
Jan 30 12:57:05.049455 kernel: ACPI: FACP 0x000000007FFE176A 000074 (v01 BOCHS BXPC 00000001 BXPC 00000001)
Jan 30 12:57:05.049463 kernel: ACPI: DSDT 0x000000007FFE0040 00172A (v01 BOCHS BXPC 00000001 BXPC 00000001)
Jan 30 12:57:05.049470 kernel: ACPI: FACS 0x000000007FFE0000 000040
Jan 30 12:57:05.049478 kernel: ACPI: APIC 0x000000007FFE17DE 000080 (v01 BOCHS BXPC 00000001 BXPC 00000001)
Jan 30 12:57:05.049485 kernel: ACPI: HPET 0x000000007FFE185E 000038 (v01 BOCHS BXPC 00000001 BXPC 00000001)
Jan 30 12:57:05.049493 kernel: ACPI: SRAT 0x000000007FFE1896 0000C8 (v01 BOCHS BXPC 00000001 BXPC 00000001)
Jan 30 12:57:05.049503 kernel: ACPI: WAET 0x000000007FFE195E 000028 (v01 BOCHS BXPC 00000001 BXPC 00000001)
Jan 30 12:57:05.049516 kernel: ACPI: Reserving FACP table memory at [mem 0x7ffe176a-0x7ffe17dd]
Jan 30 12:57:05.049528 kernel: ACPI: Reserving DSDT table memory at [mem 0x7ffe0040-0x7ffe1769]
Jan 30 12:57:05.049537 kernel: ACPI: Reserving FACS table memory at [mem 0x7ffe0000-0x7ffe003f]
Jan 30 12:57:05.049546 kernel: ACPI: Reserving APIC table memory at [mem 0x7ffe17de-0x7ffe185d]
Jan 30 12:57:05.049558 kernel: ACPI: Reserving HPET table memory at [mem 0x7ffe185e-0x7ffe1895]
Jan 30 12:57:05.049571 kernel: ACPI: Reserving SRAT table memory at [mem 0x7ffe1896-0x7ffe195d]
Jan 30 12:57:05.049588 kernel: ACPI: Reserving WAET table memory at [mem 0x7ffe195e-0x7ffe1985]
Jan 30 12:57:05.049603 kernel: SRAT: PXM 0 -> APIC 0x00 -> Node 0
Jan 30 12:57:05.049614 kernel: SRAT: PXM 0 -> APIC 0x01 -> Node 0
Jan 30 12:57:05.049622 kernel: ACPI: SRAT: Node 0 PXM 0 [mem 0x00000000-0x0009ffff]
Jan 30 12:57:05.049635 kernel: ACPI: SRAT: Node 0 PXM 0 [mem 0x00100000-0x7fffffff]
Jan 30 12:57:05.049652 kernel: NUMA: Node 0 [mem 0x00000000-0x0009ffff] + [mem 0x00100000-0x7ffdafff] -> [mem 0x00000000-0x7ffdafff]
Jan 30 12:57:05.049665 kernel: NODE_DATA(0) allocated [mem 0x7ffd5000-0x7ffdafff]
Jan 30 12:57:05.049679 kernel: Zone ranges:
Jan 30 12:57:05.049687 kernel: DMA [mem 0x0000000000001000-0x0000000000ffffff]
Jan 30 12:57:05.049696 kernel: DMA32 [mem 0x0000000001000000-0x000000007ffdafff]
Jan 30 12:57:05.049704 kernel: Normal empty
Jan 30 12:57:05.049712 kernel: Movable zone start for each node
Jan 30 12:57:05.049720 kernel: Early memory node ranges
Jan 30 12:57:05.049729 kernel: node 0: [mem 0x0000000000001000-0x000000000009efff]
Jan 30 12:57:05.049741 kernel: node 0: [mem 0x0000000000100000-0x000000007ffdafff]
Jan 30 12:57:05.049750 kernel: Initmem setup node 0 [mem 0x0000000000001000-0x000000007ffdafff]
Jan 30 12:57:05.049758 kernel: On node 0, zone DMA: 1 pages in unavailable ranges
Jan 30 12:57:05.049769 kernel: On node 0, zone DMA: 97 pages in unavailable ranges
Jan 30 12:57:05.049781 kernel: On node 0, zone DMA32: 37 pages in unavailable ranges
Jan 30 12:57:05.049789 kernel: ACPI: PM-Timer IO Port: 0x608
Jan 30 12:57:05.049797 kernel: ACPI: LAPIC_NMI (acpi_id[0xff] dfl dfl lint[0x1])
Jan 30 12:57:05.049808 kernel: IOAPIC[0]: apic_id 0, version 17, address 0xfec00000, GSI 0-23
Jan 30 12:57:05.049819 kernel: ACPI: INT_SRC_OVR (bus 0 bus_irq 0 global_irq 2 dfl dfl)
Jan 30 12:57:05.049827 kernel: ACPI: INT_SRC_OVR (bus 0 bus_irq 5 global_irq 5 high level)
Jan 30 12:57:05.049835 kernel: ACPI: INT_SRC_OVR (bus 0 bus_irq 9 global_irq 9 high level)
Jan 30 12:57:05.049848 kernel: ACPI: INT_SRC_OVR (bus 0 bus_irq 10 global_irq 10 high level)
Jan 30 12:57:05.049860 kernel: ACPI: INT_SRC_OVR (bus 0 bus_irq 11 global_irq 11 high level)
Jan 30 12:57:05.049871 kernel: ACPI: Using ACPI (MADT) for SMP configuration information
Jan 30 12:57:05.049881 kernel: ACPI: HPET id: 0x8086a201 base: 0xfed00000
Jan 30 12:57:05.049889 kernel: TSC deadline timer available
Jan 30 12:57:05.049897 kernel: smpboot: Allowing 2 CPUs, 0 hotplug CPUs
Jan 30 12:57:05.049910 kernel: kvm-guest: APIC: eoi() replaced with kvm_guest_apic_eoi_write()
Jan 30 12:57:05.049918 kernel: [mem 0x80000000-0xfeffbfff] available for PCI devices
Jan 30 12:57:05.049930 kernel: Booting paravirtualized kernel on KVM
Jan 30 12:57:05.049942 kernel: clocksource: refined-jiffies: mask: 0xffffffff max_cycles: 0xffffffff, max_idle_ns: 1910969940391419 ns
Jan 30 12:57:05.049957 kernel: setup_percpu: NR_CPUS:512 nr_cpumask_bits:2 nr_cpu_ids:2 nr_node_ids:1
Jan 30 12:57:05.049967 kernel: percpu: Embedded 58 pages/cpu s197032 r8192 d32344 u1048576
Jan 30 12:57:05.049981 kernel: pcpu-alloc: s197032 r8192 d32344 u1048576 alloc=1*2097152
Jan 30 12:57:05.049991 kernel: pcpu-alloc: [0] 0 1
Jan 30 12:57:05.049999 kernel: kvm-guest: PV spinlocks disabled, no host support
Jan 30 12:57:05.050014 kernel: Kernel command line: rootflags=rw mount.usrflags=ro BOOT_IMAGE=/flatcar/vmlinuz-a mount.usr=/dev/mapper/usr verity.usr=PARTUUID=7130c94a-213a-4e5a-8e26-6cce9662f132 rootflags=rw mount.usrflags=ro consoleblank=0 root=LABEL=ROOT console=ttyS0,115200n8 console=tty0 flatcar.first_boot=detected flatcar.oem.id=digitalocean verity.usrhash=fe60919b0c6f6abb7495678f87f7024e97a038fc343fa31a123a43ef5f489466
Jan 30 12:57:05.050025 kernel: Unknown kernel command line parameters "BOOT_IMAGE=/flatcar/vmlinuz-a", will be passed to user space.
Jan 30 12:57:05.050037 kernel: random: crng init done
Jan 30 12:57:05.050050 kernel: Dentry cache hash table entries: 262144 (order: 9, 2097152 bytes, linear)
Jan 30 12:57:05.050062 kernel: Inode-cache hash table entries: 131072 (order: 8, 1048576 bytes, linear)
Jan 30 12:57:05.050071 kernel: Fallback order for Node 0: 0
Jan 30 12:57:05.050085 kernel: Built 1 zonelists, mobility grouping on. Total pages: 515803
Jan 30 12:57:05.050093 kernel: Policy zone: DMA32
Jan 30 12:57:05.050101 kernel: mem auto-init: stack:off, heap alloc:off, heap free:off
Jan 30 12:57:05.050114 kernel: Memory: 1969156K/2096612K available (14336K kernel code, 2301K rwdata, 22800K rodata, 43320K init, 1752K bss, 127196K reserved, 0K cma-reserved)
Jan 30 12:57:05.050128 kernel: SLUB: HWalign=64, Order=0-3, MinObjects=0, CPUs=2, Nodes=1
Jan 30 12:57:05.050140 kernel: Kernel/User page tables isolation: enabled
Jan 30 12:57:05.050165 kernel: ftrace: allocating 37893 entries in 149 pages
Jan 30 12:57:05.050173 kernel: ftrace: allocated 149 pages with 4 groups
Jan 30 12:57:05.050181 kernel: Dynamic Preempt: voluntary
Jan 30 12:57:05.050189 kernel: rcu: Preemptible hierarchical RCU implementation.
Jan 30 12:57:05.050198 kernel: rcu: RCU event tracing is enabled.
Jan 30 12:57:05.050206 kernel: rcu: RCU restricting CPUs from NR_CPUS=512 to nr_cpu_ids=2.
Jan 30 12:57:05.050215 kernel: Trampoline variant of Tasks RCU enabled.
Jan 30 12:57:05.050222 kernel: Rude variant of Tasks RCU enabled.
Jan 30 12:57:05.050233 kernel: Tracing variant of Tasks RCU enabled.
Jan 30 12:57:05.050241 kernel: rcu: RCU calculated value of scheduler-enlistment delay is 100 jiffies.
Jan 30 12:57:05.050249 kernel: rcu: Adjusting geometry for rcu_fanout_leaf=16, nr_cpu_ids=2
Jan 30 12:57:05.050257 kernel: NR_IRQS: 33024, nr_irqs: 440, preallocated irqs: 16
Jan 30 12:57:05.050265 kernel: rcu: srcu_init: Setting srcu_struct sizes based on contention.
Jan 30 12:57:05.050277 kernel: Console: colour VGA+ 80x25
Jan 30 12:57:05.050285 kernel: printk: console [tty0] enabled
Jan 30 12:57:05.050293 kernel: printk: console [ttyS0] enabled
Jan 30 12:57:05.050301 kernel: ACPI: Core revision 20230628
Jan 30 12:57:05.050309 kernel: clocksource: hpet: mask: 0xffffffff max_cycles: 0xffffffff, max_idle_ns: 19112604467 ns
Jan 30 12:57:05.050320 kernel: APIC: Switch to symmetric I/O mode setup
Jan 30 12:57:05.050327 kernel: x2apic enabled
Jan 30 12:57:05.050335 kernel: APIC: Switched APIC routing to: physical x2apic
Jan 30 12:57:05.050348 kernel: ..TIMER: vector=0x30 apic1=0 pin1=2 apic2=-1 pin2=-1
Jan 30 12:57:05.050362 kernel: clocksource: tsc-early: mask: 0xffffffffffffffff max_cycles: 0x3985ba32100, max_idle_ns: 881590654722 ns
Jan 30 12:57:05.050371 kernel: Calibrating delay loop (skipped) preset value.. 3990.61 BogoMIPS (lpj=1995307)
Jan 30 12:57:05.050379 kernel: Last level iTLB entries: 4KB 0, 2MB 0, 4MB 0
Jan 30 12:57:05.050387 kernel: Last level dTLB entries: 4KB 0, 2MB 0, 4MB 0, 1GB 0
Jan 30 12:57:05.050406 kernel: Spectre V1 : Mitigation: usercopy/swapgs barriers and __user pointer sanitization
Jan 30 12:57:05.050414 kernel: Spectre V2 : Mitigation: Retpolines
Jan 30 12:57:05.050423 kernel: Spectre V2 : Spectre v2 / SpectreRSB mitigation: Filling RSB on context switch
Jan 30 12:57:05.050433 kernel: Spectre V2 : Spectre v2 / SpectreRSB : Filling RSB on VMEXIT
Jan 30 12:57:05.050442 kernel: Spectre V2 : Enabling Restricted Speculation for firmware calls
Jan 30 12:57:05.050451 kernel: Spectre V2 : mitigation: Enabling conditional Indirect Branch Prediction Barrier
Jan 30 12:57:05.050465 kernel: Speculative Store Bypass: Mitigation: Speculative Store Bypass disabled via prctl
Jan 30 12:57:05.050473 kernel: MDS: Mitigation: Clear CPU buffers
Jan 30 12:57:05.050482 kernel: MMIO Stale Data: Vulnerable: Clear CPU buffers attempted, no microcode
Jan 30 12:57:05.050497 kernel: x86/fpu: Supporting XSAVE feature 0x001: 'x87 floating point registers'
Jan 30 12:57:05.050506 kernel: x86/fpu: Supporting XSAVE feature 0x002: 'SSE registers'
Jan 30 12:57:05.050515 kernel: x86/fpu: Supporting XSAVE feature 0x004: 'AVX registers'
Jan 30 12:57:05.050524 kernel: x86/fpu: xstate_offset[2]: 576, xstate_sizes[2]: 256
Jan 30 12:57:05.050533 kernel: x86/fpu: Enabled xstate features 0x7, context size is 832 bytes, using 'standard' format.
Jan 30 12:57:05.050543 kernel: Freeing SMP alternatives memory: 32K
Jan 30 12:57:05.050556 kernel: pid_max: default: 32768 minimum: 301
Jan 30 12:57:05.050564 kernel: LSM: initializing lsm=lockdown,capability,landlock,selinux,integrity
Jan 30 12:57:05.050582 kernel: landlock: Up and running.
Jan 30 12:57:05.050592 kernel: SELinux: Initializing.
Jan 30 12:57:05.050600 kernel: Mount-cache hash table entries: 4096 (order: 3, 32768 bytes, linear)
Jan 30 12:57:05.050609 kernel: Mountpoint-cache hash table entries: 4096 (order: 3, 32768 bytes, linear)
Jan 30 12:57:05.050619 kernel: smpboot: CPU0: Intel DO-Regular (family: 0x6, model: 0x4f, stepping: 0x1)
Jan 30 12:57:05.050634 kernel: RCU Tasks: Setting shift to 1 and lim to 1 rcu_task_cb_adjust=1 rcu_task_cpu_ids=2.
Jan 30 12:57:05.050644 kernel: RCU Tasks Rude: Setting shift to 1 and lim to 1 rcu_task_cb_adjust=1 rcu_task_cpu_ids=2.
Jan 30 12:57:05.050653 kernel: RCU Tasks Trace: Setting shift to 1 and lim to 1 rcu_task_cb_adjust=1 rcu_task_cpu_ids=2.
Jan 30 12:57:05.050664 kernel: Performance Events: unsupported p6 CPU model 79 no PMU driver, software events only.
Jan 30 12:57:05.050673 kernel: signal: max sigframe size: 1776
Jan 30 12:57:05.050681 kernel: rcu: Hierarchical SRCU implementation.
Jan 30 12:57:05.050690 kernel: rcu: Max phase no-delay instances is 400.
Jan 30 12:57:05.050703 kernel: NMI watchdog: Perf NMI watchdog permanently disabled
Jan 30 12:57:05.050713 kernel: smp: Bringing up secondary CPUs ...
Jan 30 12:57:05.050722 kernel: smpboot: x86: Booting SMP configuration:
Jan 30 12:57:05.050731 kernel: .... node #0, CPUs: #1
Jan 30 12:57:05.050739 kernel: smp: Brought up 1 node, 2 CPUs
Jan 30 12:57:05.050752 kernel: smpboot: Max logical packages: 1
Jan 30 12:57:05.050766 kernel: smpboot: Total of 2 processors activated (7981.22 BogoMIPS)
Jan 30 12:57:05.050778 kernel: devtmpfs: initialized
Jan 30 12:57:05.050786 kernel: x86/mm: Memory block size: 128MB
Jan 30 12:57:05.050795 kernel: clocksource: jiffies: mask: 0xffffffff max_cycles: 0xffffffff, max_idle_ns: 1911260446275000 ns
Jan 30 12:57:05.050811 kernel: futex hash table entries: 512 (order: 3, 32768 bytes, linear)
Jan 30 12:57:05.050820 kernel: pinctrl core: initialized pinctrl subsystem
Jan 30 12:57:05.050829 kernel: NET: Registered PF_NETLINK/PF_ROUTE protocol family
Jan 30 12:57:05.050843 kernel: audit: initializing netlink subsys (disabled)
Jan 30 12:57:05.050857 kernel: audit: type=2000 audit(1738241824.081:1): state=initialized audit_enabled=0 res=1
Jan 30 12:57:05.050874 kernel: thermal_sys: Registered thermal governor 'step_wise'
Jan 30 12:57:05.050888 kernel: thermal_sys: Registered thermal governor 'user_space'
Jan 30 12:57:05.050904 kernel: cpuidle: using governor menu
Jan 30 12:57:05.050918 kernel: acpiphp: ACPI Hot Plug PCI Controller Driver version: 0.5
Jan 30 12:57:05.050931 kernel: dca service started, version 1.12.1
Jan 30 12:57:05.050944 kernel: PCI: Using configuration type 1 for base access
Jan 30 12:57:05.050958 kernel: kprobes: kprobe jump-optimization is enabled. All kprobes are optimized if possible.
Jan 30 12:57:05.050969 kernel: HugeTLB: registered 2.00 MiB page size, pre-allocated 0 pages
Jan 30 12:57:05.050979 kernel: HugeTLB: 28 KiB vmemmap can be freed for a 2.00 MiB page
Jan 30 12:57:05.050992 kernel: ACPI: Added _OSI(Module Device)
Jan 30 12:57:05.051000 kernel: ACPI: Added _OSI(Processor Device)
Jan 30 12:57:05.051009 kernel: ACPI: Added _OSI(3.0 _SCP Extensions)
Jan 30 12:57:05.051023 kernel: ACPI: Added _OSI(Processor Aggregator Device)
Jan 30 12:57:05.051037 kernel: ACPI: 1 ACPI AML tables successfully acquired and loaded
Jan 30 12:57:05.051051 kernel: ACPI: _OSC evaluation for CPUs failed, trying _PDC
Jan 30 12:57:05.051065 kernel: ACPI: Interpreter enabled
Jan 30 12:57:05.051073 kernel: ACPI: PM: (supports S0 S5)
Jan 30 12:57:05.051082 kernel: ACPI: Using IOAPIC for interrupt routing
Jan 30 12:57:05.051095 kernel: PCI: Using host bridge windows from ACPI; if necessary, use "pci=nocrs" and report a bug
Jan 30 12:57:05.051104 kernel: PCI: Using E820 reservations for host bridge windows
Jan 30 12:57:05.051112 kernel: ACPI: Enabled 2 GPEs in block 00 to 0F
Jan 30 12:57:05.051127 kernel: ACPI: PCI Root Bridge [PCI0] (domain 0000 [bus 00-ff])
Jan 30 12:57:05.052278 kernel: acpi PNP0A03:00: _OSC: OS supports [ASPM ClockPM Segments MSI HPX-Type3]
Jan 30 12:57:05.052476 kernel: acpi PNP0A03:00: _OSC: not requesting OS control; OS requires [ExtendedConfig ASPM ClockPM MSI]
Jan 30 12:57:05.052584 kernel: acpi PNP0A03:00: fail to add MMCONFIG information, can't access extended configuration space under this bridge
Jan 30 12:57:05.052602 kernel: acpiphp: Slot [3] registered
Jan 30 12:57:05.052611 kernel: acpiphp: Slot [4] registered
Jan 30 12:57:05.052622 kernel: acpiphp: Slot [5] registered
Jan 30 12:57:05.052634 kernel: acpiphp: Slot [6] registered
Jan 30 12:57:05.052645 kernel: acpiphp: Slot [7] registered
Jan 30 12:57:05.052653 kernel: acpiphp: Slot [8] registered
Jan 30 12:57:05.052662 kernel: acpiphp: Slot [9] registered
Jan 30 12:57:05.052670 kernel: acpiphp: Slot [10] registered
Jan 30 12:57:05.052681 kernel: acpiphp: Slot [11] registered
Jan 30 12:57:05.052697 kernel: acpiphp: Slot [12] registered
Jan 30 12:57:05.052711 kernel: acpiphp: Slot [13] registered
Jan 30 12:57:05.052720 kernel: acpiphp: Slot [14] registered
Jan 30 12:57:05.052728 kernel: acpiphp: Slot [15] registered
Jan 30 12:57:05.052737 kernel: acpiphp: Slot [16] registered
Jan 30 12:57:05.052745 kernel: acpiphp: Slot [17] registered
Jan 30 12:57:05.052754 kernel: acpiphp: Slot [18] registered
Jan 30 12:57:05.052762 kernel: acpiphp: Slot [19] registered
Jan 30 12:57:05.052771 kernel: acpiphp: Slot [20] registered
Jan 30 12:57:05.052786 kernel: acpiphp: Slot [21] registered
Jan 30 12:57:05.052803 kernel: acpiphp: Slot [22] registered
Jan 30 12:57:05.052814 kernel: acpiphp: Slot [23] registered
Jan 30 12:57:05.052823 kernel: acpiphp: Slot [24] registered
Jan 30 12:57:05.052831 kernel: acpiphp: Slot [25] registered
Jan 30 12:57:05.052840 kernel: acpiphp: Slot [26] registered
Jan 30 12:57:05.052848 kernel: acpiphp: Slot [27] registered
Jan 30 12:57:05.052859 kernel: acpiphp: Slot [28] registered
Jan 30 12:57:05.052873 kernel: acpiphp: Slot [29] registered
Jan 30 12:57:05.052887 kernel: acpiphp: Slot [30] registered
Jan 30 12:57:05.052903 kernel: acpiphp: Slot [31] registered
Jan 30 12:57:05.052917 kernel: PCI host bridge to bus 0000:00
Jan 30 12:57:05.053067 kernel: pci_bus 0000:00: root bus resource [io 0x0000-0x0cf7 window]
Jan 30 12:57:05.053243 kernel: pci_bus 0000:00: root bus resource [io 0x0d00-0xffff window]
Jan 30 12:57:05.053380 kernel: pci_bus 0000:00: root bus resource [mem 0x000a0000-0x000bffff window]
Jan 30 12:57:05.053470 kernel: pci_bus 0000:00: root bus resource [mem 0x80000000-0xfebfffff window]
Jan 30 12:57:05.053559 kernel: pci_bus 0000:00: root bus resource [mem 0x100000000-0x17fffffff window]
Jan 30 12:57:05.053646 kernel: pci_bus 0000:00: root bus resource [bus 00-ff]
Jan 30 12:57:05.053821 kernel: pci 0000:00:00.0: [8086:1237] type 00 class 0x060000
Jan 30 12:57:05.053950 kernel: pci 0000:00:01.0: [8086:7000] type 00 class 0x060100
Jan 30 12:57:05.054070 kernel: pci 0000:00:01.1: [8086:7010] type 00 class 0x010180
Jan 30 12:57:05.054285 kernel: pci 0000:00:01.1: reg 0x20: [io 0xc1e0-0xc1ef]
Jan 30 12:57:05.054408 kernel: pci 0000:00:01.1: legacy IDE quirk: reg 0x10: [io 0x01f0-0x01f7]
Jan 30 12:57:05.054536 kernel: pci 0000:00:01.1: legacy IDE quirk: reg 0x14: [io 0x03f6]
Jan 30 12:57:05.054651 kernel: pci 0000:00:01.1: legacy IDE quirk: reg 0x18: [io 0x0170-0x0177]
Jan 30 12:57:05.054766 kernel: pci 0000:00:01.1: legacy IDE quirk: reg 0x1c: [io 0x0376]
Jan 30 12:57:05.054924 kernel: pci 0000:00:01.2: [8086:7020] type 00 class 0x0c0300
Jan 30 12:57:05.055024 kernel: pci 0000:00:01.2: reg 0x20: [io 0xc180-0xc19f]
Jan 30 12:57:05.055139 kernel: pci 0000:00:01.3: [8086:7113] type 00 class 0x068000
Jan 30 12:57:05.055254 kernel: pci 0000:00:01.3: quirk: [io 0x0600-0x063f] claimed by PIIX4 ACPI
Jan 30 12:57:05.055359 kernel: pci 0000:00:01.3: quirk: [io 0x0700-0x070f] claimed by PIIX4 SMB
Jan 30 12:57:05.055468 kernel: pci 0000:00:02.0: [1af4:1050] type 00 class 0x030000
Jan 30 12:57:05.055571 kernel: pci 0000:00:02.0: reg 0x10: [mem 0xfe000000-0xfe7fffff pref]
Jan 30 12:57:05.055677 kernel: pci 0000:00:02.0: reg 0x18: [mem 0xfe800000-0xfe803fff 64bit pref]
Jan 30 12:57:05.055816 kernel: pci 0000:00:02.0: reg 0x20: [mem 0xfebf0000-0xfebf0fff]
Jan 30 12:57:05.055928 kernel: pci 0000:00:02.0: reg 0x30: [mem 0xfebe0000-0xfebeffff pref]
Jan 30 12:57:05.056034 kernel: pci 0000:00:02.0: Video device with shadowed ROM at [mem 0x000c0000-0x000dffff]
Jan 30 12:57:05.056220 kernel: pci 0000:00:03.0: [1af4:1000] type 00 class 0x020000
Jan 30 12:57:05.056323 kernel: pci 0000:00:03.0: reg 0x10: [io 0xc1a0-0xc1bf]
Jan 30 12:57:05.056436 kernel: pci 0000:00:03.0: reg 0x14: [mem 0xfebf1000-0xfebf1fff]
Jan 30 12:57:05.056584 kernel: pci 0000:00:03.0: reg 0x20: [mem 0xfe804000-0xfe807fff 64bit pref]
Jan 30 12:57:05.056743 kernel: pci 0000:00:04.0: [1af4:1000] type 00 class 0x020000
Jan 30 12:57:05.056899 kernel: pci 0000:00:04.0: reg 0x10: [io 0xc1c0-0xc1df]
Jan 30 12:57:05.057050 kernel: pci 0000:00:04.0: reg 0x14: [mem 0xfebf2000-0xfebf2fff]
Jan 30 12:57:05.057200 kernel: pci 0000:00:04.0: reg 0x20: [mem 0xfe808000-0xfe80bfff 64bit pref]
Jan 30 12:57:05.057370 kernel: pci 0000:00:05.0: [1af4:1004] type 00 class 0x010000
Jan 30 12:57:05.057467 kernel: pci 0000:00:05.0: reg 0x10: [io 0xc100-0xc13f]
Jan 30 12:57:05.057561 kernel: pci 0000:00:05.0: reg 0x14: [mem 0xfebf3000-0xfebf3fff]
Jan 30 12:57:05.057656 kernel: pci 0000:00:05.0: reg 0x20: [mem 0xfe80c000-0xfe80ffff 64bit pref]
Jan 30 12:57:05.057767 kernel: pci 0000:00:06.0: [1af4:1001] type 00 class 0x010000
Jan 30 12:57:05.057915 kernel: pci 0000:00:06.0: reg 0x10: [io 0xc000-0xc07f]
Jan 30 12:57:05.058075 kernel: pci 0000:00:06.0: reg 0x14: [mem 0xfebf4000-0xfebf4fff]
Jan 30 12:57:05.058250 kernel: pci 0000:00:06.0: reg 0x20: [mem 0xfe810000-0xfe813fff 64bit pref]
Jan 30 12:57:05.059513 kernel: pci 0000:00:07.0: [1af4:1001] type 00 class 0x010000
Jan 30 12:57:05.059690 kernel: pci 0000:00:07.0: reg 0x10: [io 0xc080-0xc0ff]
Jan 30 12:57:05.059842 kernel: pci 0000:00:07.0: reg 0x14: [mem 0xfebf5000-0xfebf5fff]
Jan 30 12:57:05.059944 kernel: pci 0000:00:07.0: reg 0x20: [mem 0xfe814000-0xfe817fff 64bit pref]
Jan 30 12:57:05.060056 kernel: pci 0000:00:08.0: [1af4:1002] type 00 class 0x00ff00
Jan 30 12:57:05.060203 kernel: pci 0000:00:08.0: reg 0x10: [io 0xc140-0xc17f]
Jan 30 12:57:05.060303 kernel: pci 0000:00:08.0: reg 0x20: [mem 0xfe818000-0xfe81bfff 64bit pref]
Jan 30 12:57:05.060316 kernel: ACPI: PCI: Interrupt link LNKA configured for IRQ 10
Jan 30 12:57:05.060325 kernel: ACPI: PCI: Interrupt link LNKB configured for IRQ 10
Jan 30 12:57:05.060334 kernel: ACPI: PCI: Interrupt link LNKC configured for IRQ 11
Jan 30 12:57:05.060343 kernel: ACPI: PCI: Interrupt link LNKD configured for IRQ 11
Jan 30 12:57:05.060356 kernel: ACPI: PCI: Interrupt link LNKS configured for IRQ 9
Jan 30 12:57:05.060365 kernel: iommu: Default domain type: Translated
Jan 30 12:57:05.060373 kernel: iommu: DMA domain TLB invalidation policy: lazy mode
Jan 30 12:57:05.060382 kernel: PCI: Using ACPI for IRQ routing
Jan 30 12:57:05.060391 kernel: PCI: pci_cache_line_size set to 64 bytes
Jan 30 12:57:05.060400 kernel: e820: reserve RAM buffer [mem 0x0009fc00-0x0009ffff]
Jan 30 12:57:05.060408 kernel: e820: reserve RAM buffer [mem 0x7ffdb000-0x7fffffff]
Jan 30 12:57:05.060508 kernel: pci 0000:00:02.0: vgaarb: setting as boot VGA device
Jan 30 12:57:05.060622 kernel: pci 0000:00:02.0: vgaarb: bridge control possible
Jan 30 12:57:05.060751 kernel: pci 0000:00:02.0: vgaarb: VGA device added: decodes=io+mem,owns=io+mem,locks=none
Jan 30 12:57:05.060769 kernel: vgaarb: loaded
Jan 30 12:57:05.060779 kernel: hpet0: at MMIO 0xfed00000, IRQs 2, 8, 0
Jan 30 12:57:05.060792 kernel: hpet0: 3 comparators, 64-bit 100.000000 MHz counter
Jan 30 12:57:05.060805 kernel: clocksource: Switched to clocksource kvm-clock
Jan 30 12:57:05.060818 kernel: VFS: Disk quotas dquot_6.6.0
Jan 30 12:57:05.060833 kernel: VFS: Dquot-cache hash table entries: 512 (order 0, 4096 bytes)
Jan 30 12:57:05.060847 kernel: pnp: PnP ACPI init
Jan 30 12:57:05.060862 kernel: pnp: PnP ACPI: found 4 devices
Jan 30 12:57:05.060882 kernel: clocksource: acpi_pm: mask: 0xffffff max_cycles: 0xffffff, max_idle_ns: 2085701024 ns
Jan 30 12:57:05.060896 kernel: NET: Registered PF_INET protocol family
Jan 30 12:57:05.060911 kernel: IP idents hash table entries: 32768 (order: 6, 262144 bytes, linear)
Jan 30 12:57:05.060926 kernel: tcp_listen_portaddr_hash hash table entries: 1024 (order: 2, 16384 bytes, linear)
Jan 30 12:57:05.060941 kernel: Table-perturb hash table entries: 65536 (order: 6, 262144 bytes, linear)
Jan 30 12:57:05.060955 kernel: TCP established hash table entries: 16384 (order: 5, 131072 bytes, linear)
Jan 30 12:57:05.060970 kernel: TCP bind hash table entries: 16384 (order: 7, 524288 bytes, linear)
Jan 30 12:57:05.060983 kernel: TCP: Hash tables configured (established 16384 bind 16384)
Jan 30 12:57:05.060997 kernel: UDP hash table entries: 1024 (order: 3, 32768 bytes, linear)
Jan 30 12:57:05.061016 kernel: UDP-Lite hash table entries: 1024 (order: 3, 32768 bytes, linear)
Jan 30 12:57:05.061030 kernel: NET: Registered PF_UNIX/PF_LOCAL protocol family
Jan 30 12:57:05.061045 kernel: NET: Registered PF_XDP protocol family
Jan 30 12:57:05.064350 kernel: pci_bus 0000:00: resource 4 [io 0x0000-0x0cf7 window]
Jan 30 12:57:05.064510 kernel: pci_bus 0000:00: resource 5 [io 0x0d00-0xffff window]
Jan 30 12:57:05.064631 kernel: pci_bus 0000:00: resource 6 [mem 0x000a0000-0x000bffff window]
Jan 30 12:57:05.064764 kernel: pci_bus 0000:00: resource 7 [mem 0x80000000-0xfebfffff window]
Jan 30 12:57:05.064888 kernel: pci_bus 0000:00: resource 8 [mem 0x100000000-0x17fffffff window]
Jan 30 12:57:05.065035 kernel: pci 0000:00:01.0: PIIX3: Enabling Passive Release
Jan 30 12:57:05.065220 kernel: pci 0000:00:00.0: Limiting direct PCI/PCI transfers
Jan 30 12:57:05.065237 kernel: ACPI: \_SB_.LNKD: Enabled at IRQ 11
Jan 30 12:57:05.065363 kernel: pci 0000:00:01.2: quirk_usb_early_handoff+0x0/0x7a0 took 41283 usecs
Jan 30 12:57:05.065376 kernel: PCI: CLS 0 bytes, default 64
Jan 30 12:57:05.065386 kernel: RAPL PMU: API unit is 2^-32 Joules, 0 fixed counters, 10737418240 ms ovfl timer
Jan 30 12:57:05.065395 kernel: clocksource: tsc: mask: 0xffffffffffffffff max_cycles: 0x3985ba32100, max_idle_ns: 881590654722 ns
Jan 30 12:57:05.065404 kernel: Initialise system trusted keyrings
Jan 30 12:57:05.065420 kernel: workingset: timestamp_bits=39 max_order=19 bucket_order=0
Jan 30 12:57:05.065429 kernel: Key type asymmetric registered
Jan 30 12:57:05.065440 kernel: Asymmetric key parser 'x509' registered
Jan 30 12:57:05.065455 kernel: Block layer SCSI generic (bsg) driver version 0.4 loaded (major 251)
Jan 30 12:57:05.065466 kernel: io scheduler mq-deadline registered
Jan 30 12:57:05.065475 kernel: io scheduler kyber registered
Jan 30 12:57:05.065484 kernel: io scheduler bfq registered
Jan 30 12:57:05.065493 kernel: ioatdma: Intel(R) QuickData Technology Driver 5.00
Jan 30 12:57:05.065502 kernel: ACPI: \_SB_.LNKB: Enabled at IRQ 10
Jan 30 12:57:05.065511 kernel: ACPI: \_SB_.LNKC: Enabled at IRQ 11
Jan 30 12:57:05.065523 kernel: ACPI: \_SB_.LNKA: Enabled at IRQ 10
Jan 30 12:57:05.065532 kernel: Serial: 8250/16550 driver, 4 ports, IRQ sharing enabled
Jan 30 12:57:05.065541 kernel: 00:00: ttyS0 at I/O 0x3f8 (irq = 4, base_baud = 115200) is a 16550A
Jan 30 12:57:05.065550 kernel: i8042: PNP: PS/2 Controller [PNP0303:KBD,PNP0f13:MOU] at 0x60,0x64 irq 1,12
Jan 30 12:57:05.065559 kernel: serio: i8042 KBD port at 0x60,0x64 irq 1
Jan 30 12:57:05.065568 kernel: serio: i8042 AUX port at 0x60,0x64 irq 12
Jan 30 12:57:05.065711 kernel: rtc_cmos 00:03: RTC can wake from S4
Jan 30 12:57:05.065726 kernel: input: AT Translated Set 2 keyboard as /devices/platform/i8042/serio0/input/input0
Jan 30 12:57:05.065824 kernel: rtc_cmos 00:03: registered as rtc0
Jan 30 12:57:05.065925 kernel: rtc_cmos 00:03: setting system clock to 2025-01-30T12:57:04 UTC (1738241824)
Jan 30 12:57:05.066068 kernel: rtc_cmos 00:03: alarms up to one day, y3k, 242 bytes nvram
Jan 30 12:57:05.066087 kernel: intel_pstate: CPU model not supported
Jan 30 12:57:05.066101 kernel: NET: Registered PF_INET6 protocol family
Jan 30 12:57:05.066109 kernel: Segment Routing with IPv6
Jan 30 12:57:05.066119 kernel: In-situ OAM (IOAM) with IPv6
Jan 30 12:57:05.066128 kernel: NET: Registered PF_PACKET protocol family
Jan 30 12:57:05.066142 kernel: Key type dns_resolver registered
Jan 30 12:57:05.068547 kernel: IPI shorthand broadcast: enabled
Jan 30 12:57:05.068577 kernel: sched_clock: Marking stable (1440006883, 198738610)->(1670709177, -31963684)
Jan 30 12:57:05.068587 kernel: registered taskstats version 1
Jan 30 12:57:05.068596 kernel: Loading compiled-in X.509 certificates
Jan 30 12:57:05.068605 kernel: Loaded X.509 cert 'Kinvolk GmbH: Module signing key for 6.6.74-flatcar: 7f0738935740330d55027faa5877e7155d5f24f4'
Jan 30 12:57:05.068614 kernel: Key type .fscrypt registered
Jan 30 12:57:05.068623 kernel: Key type fscrypt-provisioning registered
Jan 30 12:57:05.068633 kernel: ima: No TPM chip found, activating TPM-bypass!
Jan 30 12:57:05.068651 kernel: ima: Allocated hash algorithm: sha1
Jan 30 12:57:05.068659 kernel: ima: No architecture policies found
Jan 30 12:57:05.068669 kernel: clk: Disabling unused clocks
Jan 30 12:57:05.068677 kernel: Freeing unused kernel image (initmem) memory: 43320K
Jan 30 12:57:05.068686 kernel: Write protecting the kernel read-only data: 38912k
Jan 30 12:57:05.068714 kernel: Freeing unused kernel image (rodata/data gap) memory: 1776K
Jan 30 12:57:05.068725 kernel: Run /init as init process
Jan 30 12:57:05.068735 kernel: with arguments:
Jan 30 12:57:05.068744 kernel: /init
Jan 30 12:57:05.068755 kernel: with environment:
Jan 30 12:57:05.068764 kernel: HOME=/
Jan 30 12:57:05.068772 kernel: TERM=linux
Jan 30 12:57:05.068781 kernel: BOOT_IMAGE=/flatcar/vmlinuz-a
Jan 30 12:57:05.068795 systemd[1]: systemd 255 running in system mode (+PAM +AUDIT +SELINUX -APPARMOR +IMA +SMACK +SECCOMP +GCRYPT -GNUTLS +OPENSSL -ACL +BLKID +CURL +ELFUTILS -FIDO2 +IDN2 -IDN +IPTC +KMOD +LIBCRYPTSETUP +LIBFDISK +PCRE2 -PWQUALITY -P11KIT -QRENCODE +TPM2 +BZIP2 +LZ4 +XZ +ZLIB +ZSTD -BPF_FRAMEWORK -XKBCOMMON +UTMP -SYSVINIT default-hierarchy=unified)
Jan 30 12:57:05.068809 systemd[1]: Detected virtualization kvm.
Jan 30 12:57:05.068819 systemd[1]: Detected architecture x86-64.
Jan 30 12:57:05.068829 systemd[1]: Running in initrd.
Jan 30 12:57:05.068841 systemd[1]: No hostname configured, using default hostname.
Jan 30 12:57:05.068851 systemd[1]: Hostname set to .
Jan 30 12:57:05.068861 systemd[1]: Initializing machine ID from VM UUID.
Jan 30 12:57:05.068870 systemd[1]: Queued start job for default target initrd.target.
Jan 30 12:57:05.068880 systemd[1]: Started clevis-luks-askpass.path - Forward Password Requests to Clevis Directory Watch.
Jan 30 12:57:05.068889 systemd[1]: Started systemd-ask-password-console.path - Dispatch Password Requests to Console Directory Watch.
Jan 30 12:57:05.068901 systemd[1]: Expecting device dev-disk-by\x2dlabel-EFI\x2dSYSTEM.device - /dev/disk/by-label/EFI-SYSTEM...
Jan 30 12:57:05.068910 systemd[1]: Expecting device dev-disk-by\x2dlabel-OEM.device - /dev/disk/by-label/OEM...
Jan 30 12:57:05.068922 systemd[1]: Expecting device dev-disk-by\x2dlabel-ROOT.device - /dev/disk/by-label/ROOT...
Jan 30 12:57:05.068933 systemd[1]: Expecting device dev-disk-by\x2dpartlabel-USR\x2dA.device - /dev/disk/by-partlabel/USR-A...
Jan 30 12:57:05.068949 systemd[1]: Expecting device dev-disk-by\x2dpartuuid-7130c94a\x2d213a\x2d4e5a\x2d8e26\x2d6cce9662f132.device - /dev/disk/by-partuuid/7130c94a-213a-4e5a-8e26-6cce9662f132...
Jan 30 12:57:05.068959 systemd[1]: Expecting device dev-mapper-usr.device - /dev/mapper/usr...
Jan 30 12:57:05.068968 systemd[1]: Reached target cryptsetup-pre.target - Local Encrypted Volumes (Pre).
Jan 30 12:57:05.068978 systemd[1]: Reached target cryptsetup.target - Local Encrypted Volumes.
Jan 30 12:57:05.068990 systemd[1]: Reached target paths.target - Path Units.
Jan 30 12:57:05.069000 systemd[1]: Reached target slices.target - Slice Units.
Jan 30 12:57:05.069010 systemd[1]: Reached target swap.target - Swaps.
Jan 30 12:57:05.069022 systemd[1]: Reached target timers.target - Timer Units.
Jan 30 12:57:05.069031 systemd[1]: Listening on iscsid.socket - Open-iSCSI iscsid Socket.
Jan 30 12:57:05.069042 systemd[1]: Listening on iscsiuio.socket - Open-iSCSI iscsiuio Socket.
Jan 30 12:57:05.069057 systemd[1]: Listening on systemd-journald-dev-log.socket - Journal Socket (/dev/log).
Jan 30 12:57:05.069071 systemd[1]: Listening on systemd-journald.socket - Journal Socket. Jan 30 12:57:05.069085 systemd[1]: Listening on systemd-networkd.socket - Network Service Netlink Socket. Jan 30 12:57:05.069100 systemd[1]: Listening on systemd-udevd-control.socket - udev Control Socket. Jan 30 12:57:05.069114 systemd[1]: Listening on systemd-udevd-kernel.socket - udev Kernel Socket. Jan 30 12:57:05.069123 systemd[1]: Reached target sockets.target - Socket Units. Jan 30 12:57:05.069133 systemd[1]: Starting ignition-setup-pre.service - Ignition env setup... Jan 30 12:57:05.069142 systemd[1]: Starting kmod-static-nodes.service - Create List of Static Device Nodes... Jan 30 12:57:05.069170 systemd[1]: Finished network-cleanup.service - Network Cleanup. Jan 30 12:57:05.069180 systemd[1]: Starting systemd-fsck-usr.service... Jan 30 12:57:05.069189 systemd[1]: Starting systemd-journald.service - Journal Service... Jan 30 12:57:05.069199 systemd[1]: Starting systemd-modules-load.service - Load Kernel Modules... Jan 30 12:57:05.069208 systemd[1]: Starting systemd-vconsole-setup.service - Virtual Console Setup... Jan 30 12:57:05.069218 systemd[1]: Finished ignition-setup-pre.service - Ignition env setup. Jan 30 12:57:05.069233 systemd[1]: Finished kmod-static-nodes.service - Create List of Static Device Nodes. Jan 30 12:57:05.069327 systemd-journald[182]: Collecting audit messages is disabled. Jan 30 12:57:05.069358 systemd[1]: Finished systemd-fsck-usr.service. Jan 30 12:57:05.069369 systemd[1]: Starting systemd-tmpfiles-setup-dev-early.service - Create Static Device Nodes in /dev gracefully... Jan 30 12:57:05.069383 systemd-journald[182]: Journal started Jan 30 12:57:05.069408 systemd-journald[182]: Runtime Journal (/run/log/journal/1b3b53d15c7b419dba08b30676343e2c) is 4.9M, max 39.3M, 34.4M free. 
Jan 30 12:57:05.048876 systemd-modules-load[183]: Inserted module 'overlay' Jan 30 12:57:05.093185 kernel: bridge: filtering via arp/ip/ip6tables is no longer available by default. Update your scripts to load br_netfilter if you need this. Jan 30 12:57:05.095134 systemd-modules-load[183]: Inserted module 'br_netfilter' Jan 30 12:57:05.124406 kernel: Bridge firewalling registered Jan 30 12:57:05.127209 systemd[1]: Started systemd-journald.service - Journal Service. Jan 30 12:57:05.127788 systemd[1]: Finished systemd-modules-load.service - Load Kernel Modules. Jan 30 12:57:05.128568 systemd[1]: Finished systemd-vconsole-setup.service - Virtual Console Setup. Jan 30 12:57:05.134733 systemd[1]: Finished systemd-tmpfiles-setup-dev-early.service - Create Static Device Nodes in /dev gracefully. Jan 30 12:57:05.144446 systemd[1]: Starting dracut-cmdline-ask.service - dracut ask for additional cmdline parameters... Jan 30 12:57:05.157395 systemd[1]: Starting systemd-sysctl.service - Apply Kernel Variables... Jan 30 12:57:05.160582 systemd[1]: Starting systemd-tmpfiles-setup-dev.service - Create Static Device Nodes in /dev... Jan 30 12:57:05.168416 systemd[1]: Starting systemd-tmpfiles-setup.service - Create System Files and Directories... Jan 30 12:57:05.171278 systemd[1]: Finished systemd-sysctl.service - Apply Kernel Variables. Jan 30 12:57:05.190837 systemd[1]: Finished systemd-tmpfiles-setup-dev.service - Create Static Device Nodes in /dev. Jan 30 12:57:05.192808 systemd[1]: Finished systemd-tmpfiles-setup.service - Create System Files and Directories. Jan 30 12:57:05.193759 systemd[1]: Finished dracut-cmdline-ask.service - dracut ask for additional cmdline parameters. Jan 30 12:57:05.200457 systemd[1]: Starting dracut-cmdline.service - dracut cmdline hook... Jan 30 12:57:05.204357 systemd[1]: Starting systemd-resolved.service - Network Name Resolution... 
Jan 30 12:57:05.220582 dracut-cmdline[218]: dracut-dracut-053 Jan 30 12:57:05.227537 dracut-cmdline[218]: Using kernel command line parameters: rd.driver.pre=btrfs rootflags=rw mount.usrflags=ro BOOT_IMAGE=/flatcar/vmlinuz-a mount.usr=/dev/mapper/usr verity.usr=PARTUUID=7130c94a-213a-4e5a-8e26-6cce9662f132 rootflags=rw mount.usrflags=ro consoleblank=0 root=LABEL=ROOT console=ttyS0,115200n8 console=tty0 flatcar.first_boot=detected flatcar.oem.id=digitalocean verity.usrhash=fe60919b0c6f6abb7495678f87f7024e97a038fc343fa31a123a43ef5f489466 Jan 30 12:57:05.243351 systemd-resolved[221]: Positive Trust Anchors: Jan 30 12:57:05.243377 systemd-resolved[221]: . IN DS 20326 8 2 e06d44b80b8f1d39a95c0b0d7c65d08458e880409bbc683457104237c7f8ec8d Jan 30 12:57:05.243429 systemd-resolved[221]: Negative trust anchors: home.arpa 10.in-addr.arpa 16.172.in-addr.arpa 17.172.in-addr.arpa 18.172.in-addr.arpa 19.172.in-addr.arpa 20.172.in-addr.arpa 21.172.in-addr.arpa 22.172.in-addr.arpa 23.172.in-addr.arpa 24.172.in-addr.arpa 25.172.in-addr.arpa 26.172.in-addr.arpa 27.172.in-addr.arpa 28.172.in-addr.arpa 29.172.in-addr.arpa 30.172.in-addr.arpa 31.172.in-addr.arpa 170.0.0.192.in-addr.arpa 171.0.0.192.in-addr.arpa 168.192.in-addr.arpa d.f.ip6.arpa ipv4only.arpa resolver.arpa corp home internal intranet lan local private test Jan 30 12:57:05.248325 systemd-resolved[221]: Defaulting to hostname 'linux'. Jan 30 12:57:05.250236 systemd[1]: Started systemd-resolved.service - Network Name Resolution. Jan 30 12:57:05.251725 systemd[1]: Reached target nss-lookup.target - Host and Network Name Lookups. Jan 30 12:57:05.347210 kernel: SCSI subsystem initialized Jan 30 12:57:05.359204 kernel: Loading iSCSI transport class v2.0-870. 
Jan 30 12:57:05.373209 kernel: iscsi: registered transport (tcp) Jan 30 12:57:05.400876 kernel: iscsi: registered transport (qla4xxx) Jan 30 12:57:05.400980 kernel: QLogic iSCSI HBA Driver Jan 30 12:57:05.464241 systemd[1]: Finished dracut-cmdline.service - dracut cmdline hook. Jan 30 12:57:05.471462 systemd[1]: Starting dracut-pre-udev.service - dracut pre-udev hook... Jan 30 12:57:05.507258 kernel: device-mapper: core: CONFIG_IMA_DISABLE_HTABLE is disabled. Duplicate IMA measurements will not be recorded in the IMA log. Jan 30 12:57:05.507353 kernel: device-mapper: uevent: version 1.0.3 Jan 30 12:57:05.507376 kernel: device-mapper: ioctl: 4.48.0-ioctl (2023-03-01) initialised: dm-devel@redhat.com Jan 30 12:57:05.559256 kernel: raid6: avx2x4 gen() 22479 MB/s Jan 30 12:57:05.576242 kernel: raid6: avx2x2 gen() 26515 MB/s Jan 30 12:57:05.593527 kernel: raid6: avx2x1 gen() 22464 MB/s Jan 30 12:57:05.593628 kernel: raid6: using algorithm avx2x2 gen() 26515 MB/s Jan 30 12:57:05.612207 kernel: raid6: .... xor() 17852 MB/s, rmw enabled Jan 30 12:57:05.612300 kernel: raid6: using avx2x2 recovery algorithm Jan 30 12:57:05.637197 kernel: xor: automatically using best checksumming function avx Jan 30 12:57:05.819209 kernel: Btrfs loaded, zoned=no, fsverity=no Jan 30 12:57:05.835545 systemd[1]: Finished dracut-pre-udev.service - dracut pre-udev hook. Jan 30 12:57:05.847526 systemd[1]: Starting systemd-udevd.service - Rule-based Manager for Device Events and Files... Jan 30 12:57:05.863665 systemd-udevd[404]: Using default interface naming scheme 'v255'. Jan 30 12:57:05.869626 systemd[1]: Started systemd-udevd.service - Rule-based Manager for Device Events and Files. Jan 30 12:57:05.879476 systemd[1]: Starting dracut-pre-trigger.service - dracut pre-trigger hook... Jan 30 12:57:05.901898 dracut-pre-trigger[411]: rd.md=0: removing MD RAID activation Jan 30 12:57:05.950055 systemd[1]: Finished dracut-pre-trigger.service - dracut pre-trigger hook. 
Jan 30 12:57:05.957578 systemd[1]: Starting systemd-udev-trigger.service - Coldplug All udev Devices... Jan 30 12:57:06.021622 systemd[1]: Finished systemd-udev-trigger.service - Coldplug All udev Devices. Jan 30 12:57:06.032010 systemd[1]: Starting dracut-initqueue.service - dracut initqueue hook... Jan 30 12:57:06.060795 systemd[1]: Finished dracut-initqueue.service - dracut initqueue hook. Jan 30 12:57:06.063981 systemd[1]: Reached target remote-fs-pre.target - Preparation for Remote File Systems. Jan 30 12:57:06.065939 systemd[1]: Reached target remote-cryptsetup.target - Remote Encrypted Volumes. Jan 30 12:57:06.068523 systemd[1]: Reached target remote-fs.target - Remote File Systems. Jan 30 12:57:06.078479 systemd[1]: Starting dracut-pre-mount.service - dracut pre-mount hook... Jan 30 12:57:06.108269 systemd[1]: Finished dracut-pre-mount.service - dracut pre-mount hook. Jan 30 12:57:06.140185 kernel: virtio_blk virtio4: 1/0/0 default/read/poll queues Jan 30 12:57:06.243068 kernel: scsi host0: Virtio SCSI HBA Jan 30 12:57:06.243265 kernel: virtio_blk virtio4: [vda] 125829120 512-byte logical blocks (64.4 GB/60.0 GiB) Jan 30 12:57:06.243380 kernel: cryptd: max_cpu_qlen set to 1000 Jan 30 12:57:06.243393 kernel: GPT:Primary header thinks Alt. header is not at the end of the disk. Jan 30 12:57:06.243405 kernel: GPT:9289727 != 125829119 Jan 30 12:57:06.243416 kernel: GPT:Alternate GPT header not at the end of the disk. Jan 30 12:57:06.243427 kernel: GPT:9289727 != 125829119 Jan 30 12:57:06.243442 kernel: GPT: Use GNU Parted to correct GPT errors. Jan 30 12:57:06.243453 kernel: vda: vda1 vda2 vda3 vda4 vda6 vda7 vda9 Jan 30 12:57:06.243464 kernel: ACPI: bus type USB registered Jan 30 12:57:06.243476 kernel: usbcore: registered new interface driver usbfs Jan 30 12:57:06.243487 kernel: libata version 3.00 loaded. 
Jan 30 12:57:06.243498 kernel: usbcore: registered new interface driver hub Jan 30 12:57:06.243509 kernel: ata_piix 0000:00:01.1: version 2.13 Jan 30 12:57:06.261542 kernel: usbcore: registered new device driver usb Jan 30 12:57:06.261562 kernel: virtio_blk virtio5: 1/0/0 default/read/poll queues Jan 30 12:57:06.262436 kernel: scsi host1: ata_piix Jan 30 12:57:06.262627 kernel: AVX2 version of gcm_enc/dec engaged. Jan 30 12:57:06.262650 kernel: AES CTR mode by8 optimization enabled Jan 30 12:57:06.262668 kernel: virtio_blk virtio5: [vdb] 932 512-byte logical blocks (477 kB/466 KiB) Jan 30 12:57:06.262798 kernel: scsi host2: ata_piix Jan 30 12:57:06.262991 kernel: ata1: PATA max MWDMA2 cmd 0x1f0 ctl 0x3f6 bmdma 0xc1e0 irq 14 Jan 30 12:57:06.263011 kernel: ata2: PATA max MWDMA2 cmd 0x170 ctl 0x376 bmdma 0xc1e8 irq 15 Jan 30 12:57:06.202232 systemd[1]: dracut-cmdline-ask.service: Deactivated successfully. Jan 30 12:57:06.202378 systemd[1]: Stopped dracut-cmdline-ask.service - dracut ask for additional cmdline parameters. Jan 30 12:57:06.203235 systemd[1]: Stopping dracut-cmdline-ask.service - dracut ask for additional cmdline parameters... Jan 30 12:57:06.203793 systemd[1]: systemd-vconsole-setup.service: Deactivated successfully. Jan 30 12:57:06.203958 systemd[1]: Stopped systemd-vconsole-setup.service - Virtual Console Setup. Jan 30 12:57:06.204703 systemd[1]: Stopping systemd-vconsole-setup.service - Virtual Console Setup... Jan 30 12:57:06.214395 systemd[1]: Starting systemd-vconsole-setup.service - Virtual Console Setup... Jan 30 12:57:06.308319 systemd[1]: Finished systemd-vconsole-setup.service - Virtual Console Setup. Jan 30 12:57:06.316460 systemd[1]: Starting dracut-cmdline-ask.service - dracut ask for additional cmdline parameters... Jan 30 12:57:06.341475 systemd[1]: Finished dracut-cmdline-ask.service - dracut ask for additional cmdline parameters. 
Jan 30 12:57:06.465228 kernel: BTRFS: device label OEM devid 1 transid 14 /dev/vda6 scanned by (udev-worker) (448) Jan 30 12:57:06.472189 kernel: BTRFS: device fsid f8084233-4a6f-4e67-af0b-519e43b19e58 devid 1 transid 41 /dev/vda3 scanned by (udev-worker) (459) Jan 30 12:57:06.479754 kernel: uhci_hcd 0000:00:01.2: UHCI Host Controller Jan 30 12:57:06.490745 kernel: uhci_hcd 0000:00:01.2: new USB bus registered, assigned bus number 1 Jan 30 12:57:06.491041 kernel: uhci_hcd 0000:00:01.2: detected 2 ports Jan 30 12:57:06.491311 kernel: uhci_hcd 0000:00:01.2: irq 11, io port 0x0000c180 Jan 30 12:57:06.491501 kernel: hub 1-0:1.0: USB hub found Jan 30 12:57:06.491749 kernel: hub 1-0:1.0: 2 ports detected Jan 30 12:57:06.481130 systemd[1]: Found device dev-disk-by\x2dlabel-ROOT.device - /dev/disk/by-label/ROOT. Jan 30 12:57:06.492339 systemd[1]: Found device dev-disk-by\x2dlabel-EFI\x2dSYSTEM.device - /dev/disk/by-label/EFI-SYSTEM. Jan 30 12:57:06.500763 systemd[1]: Found device dev-disk-by\x2dlabel-OEM.device - /dev/disk/by-label/OEM. Jan 30 12:57:06.505962 systemd[1]: Found device dev-disk-by\x2dpartuuid-7130c94a\x2d213a\x2d4e5a\x2d8e26\x2d6cce9662f132.device - /dev/disk/by-partuuid/7130c94a-213a-4e5a-8e26-6cce9662f132. Jan 30 12:57:06.506693 systemd[1]: Found device dev-disk-by\x2dpartlabel-USR\x2dA.device - /dev/disk/by-partlabel/USR-A. Jan 30 12:57:06.513517 systemd[1]: Starting disk-uuid.service - Generate new UUID for disk GPT if necessary... Jan 30 12:57:06.522251 disk-uuid[548]: Primary Header is updated. Jan 30 12:57:06.522251 disk-uuid[548]: Secondary Entries is updated. Jan 30 12:57:06.522251 disk-uuid[548]: Secondary Header is updated. Jan 30 12:57:06.527247 kernel: vda: vda1 vda2 vda3 vda4 vda6 vda7 vda9 Jan 30 12:57:06.533178 kernel: vda: vda1 vda2 vda3 vda4 vda6 vda7 vda9 Jan 30 12:57:07.537243 kernel: vda: vda1 vda2 vda3 vda4 vda6 vda7 vda9 Jan 30 12:57:07.537825 disk-uuid[549]: The operation has completed successfully. 
Jan 30 12:57:07.594504 systemd[1]: disk-uuid.service: Deactivated successfully. Jan 30 12:57:07.594660 systemd[1]: Finished disk-uuid.service - Generate new UUID for disk GPT if necessary. Jan 30 12:57:07.614492 systemd[1]: Starting verity-setup.service - Verity Setup for /dev/mapper/usr... Jan 30 12:57:07.621196 sh[560]: Success Jan 30 12:57:07.638183 kernel: device-mapper: verity: sha256 using implementation "sha256-avx2" Jan 30 12:57:07.703650 systemd[1]: Found device dev-mapper-usr.device - /dev/mapper/usr. Jan 30 12:57:07.715363 systemd[1]: Mounting sysusr-usr.mount - /sysusr/usr... Jan 30 12:57:07.717027 systemd[1]: Finished verity-setup.service - Verity Setup for /dev/mapper/usr. Jan 30 12:57:07.741237 kernel: BTRFS info (device dm-0): first mount of filesystem f8084233-4a6f-4e67-af0b-519e43b19e58 Jan 30 12:57:07.741361 kernel: BTRFS info (device dm-0): using crc32c (crc32c-intel) checksum algorithm Jan 30 12:57:07.741384 kernel: BTRFS warning (device dm-0): 'nologreplay' is deprecated, use 'rescue=nologreplay' instead Jan 30 12:57:07.742794 kernel: BTRFS info (device dm-0): disabling log replay at mount time Jan 30 12:57:07.742873 kernel: BTRFS info (device dm-0): using free space tree Jan 30 12:57:07.752635 systemd[1]: Mounted sysusr-usr.mount - /sysusr/usr. Jan 30 12:57:07.753914 systemd[1]: afterburn-network-kargs.service - Afterburn Initrd Setup Network Kernel Arguments was skipped because no trigger condition checks were met. Jan 30 12:57:07.765572 systemd[1]: Starting ignition-setup.service - Ignition (setup)... Jan 30 12:57:07.770402 systemd[1]: Starting parse-ip-for-networkd.service - Write systemd-networkd units from cmdline... 
Jan 30 12:57:07.780335 kernel: BTRFS info (device vda6): first mount of filesystem 8f723f8b-dc93-4eaf-8b2c-0038aa5af52c Jan 30 12:57:07.780396 kernel: BTRFS info (device vda6): using crc32c (crc32c-intel) checksum algorithm Jan 30 12:57:07.782944 kernel: BTRFS info (device vda6): using free space tree Jan 30 12:57:07.789293 kernel: BTRFS info (device vda6): auto enabling async discard Jan 30 12:57:07.803304 kernel: BTRFS info (device vda6): last unmount of filesystem 8f723f8b-dc93-4eaf-8b2c-0038aa5af52c Jan 30 12:57:07.802892 systemd[1]: mnt-oem.mount: Deactivated successfully. Jan 30 12:57:07.811931 systemd[1]: Finished ignition-setup.service - Ignition (setup). Jan 30 12:57:07.820522 systemd[1]: Starting ignition-fetch-offline.service - Ignition (fetch-offline)... Jan 30 12:57:07.919101 systemd[1]: Finished parse-ip-for-networkd.service - Write systemd-networkd units from cmdline. Jan 30 12:57:07.930558 systemd[1]: Starting systemd-networkd.service - Network Configuration... Jan 30 12:57:07.965813 systemd-networkd[744]: lo: Link UP Jan 30 12:57:07.966989 systemd-networkd[744]: lo: Gained carrier Jan 30 12:57:07.971036 systemd-networkd[744]: Enumeration completed Jan 30 12:57:07.971493 systemd-networkd[744]: eth0: found matching network '/usr/lib/systemd/network/yy-digitalocean.network', based on potentially unpredictable interface name. Jan 30 12:57:07.971497 systemd-networkd[744]: eth0: Configuring with /usr/lib/systemd/network/yy-digitalocean.network. Jan 30 12:57:07.973901 systemd[1]: Started systemd-networkd.service - Network Configuration. Jan 30 12:57:07.975033 systemd-networkd[744]: eth1: found matching network '/usr/lib/systemd/network/zz-default.network', based on potentially unpredictable interface name. Jan 30 12:57:07.975038 systemd-networkd[744]: eth1: Configuring with /usr/lib/systemd/network/zz-default.network. Jan 30 12:57:07.976974 systemd[1]: Reached target network.target - Network. 
Jan 30 12:57:07.989255 systemd-networkd[744]: eth0: Link UP Jan 30 12:57:07.989268 systemd-networkd[744]: eth0: Gained carrier Jan 30 12:57:07.989287 systemd-networkd[744]: eth0: found matching network '/usr/lib/systemd/network/yy-digitalocean.network', based on potentially unpredictable interface name. Jan 30 12:57:07.995789 systemd-networkd[744]: eth1: Link UP Jan 30 12:57:07.995802 systemd-networkd[744]: eth1: Gained carrier Jan 30 12:57:07.995822 systemd-networkd[744]: eth1: found matching network '/usr/lib/systemd/network/zz-default.network', based on potentially unpredictable interface name. Jan 30 12:57:08.009999 ignition[658]: Ignition 2.20.0 Jan 30 12:57:08.010016 ignition[658]: Stage: fetch-offline Jan 30 12:57:08.010061 ignition[658]: no configs at "/usr/lib/ignition/base.d" Jan 30 12:57:08.010072 ignition[658]: no config dir at "/usr/lib/ignition/base.platform.d/digitalocean" Jan 30 12:57:08.012479 systemd-networkd[744]: eth0: DHCPv4 address 159.223.192.231/20, gateway 159.223.192.1 acquired from 169.254.169.253 Jan 30 12:57:08.014821 ignition[658]: parsed url from cmdline: "" Jan 30 12:57:08.014908 ignition[658]: no config URL provided Jan 30 12:57:08.015510 ignition[658]: reading system config file "/usr/lib/ignition/user.ign" Jan 30 12:57:08.015539 ignition[658]: no config at "/usr/lib/ignition/user.ign" Jan 30 12:57:08.018327 systemd-networkd[744]: eth1: DHCPv4 address 10.124.0.17/20 acquired from 169.254.169.253 Jan 30 12:57:08.015552 ignition[658]: failed to fetch config: resource requires networking Jan 30 12:57:08.018608 systemd[1]: Finished ignition-fetch-offline.service - Ignition (fetch-offline). Jan 30 12:57:08.015881 ignition[658]: Ignition finished successfully Jan 30 12:57:08.027458 systemd[1]: Starting ignition-fetch.service - Ignition (fetch)... 
Jan 30 12:57:08.058002 ignition[752]: Ignition 2.20.0 Jan 30 12:57:08.058020 ignition[752]: Stage: fetch Jan 30 12:57:08.060294 ignition[752]: no configs at "/usr/lib/ignition/base.d" Jan 30 12:57:08.061132 ignition[752]: no config dir at "/usr/lib/ignition/base.platform.d/digitalocean" Jan 30 12:57:08.062382 ignition[752]: parsed url from cmdline: "" Jan 30 12:57:08.062472 ignition[752]: no config URL provided Jan 30 12:57:08.063030 ignition[752]: reading system config file "/usr/lib/ignition/user.ign" Jan 30 12:57:08.063056 ignition[752]: no config at "/usr/lib/ignition/user.ign" Jan 30 12:57:08.063106 ignition[752]: GET http://169.254.169.254/metadata/v1/user-data: attempt #1 Jan 30 12:57:08.084080 ignition[752]: GET result: OK Jan 30 12:57:08.084288 ignition[752]: parsing config with SHA512: 994cef758f3615481e60cda35e422e8df48e1acc28066897ce1a1ab70d69bf61bdd01f92bef432b5e4f6bc8f45d7c704bb7c129692c51b6f79c179c378745219 Jan 30 12:57:08.089362 unknown[752]: fetched base config from "system" Jan 30 12:57:08.089379 unknown[752]: fetched base config from "system" Jan 30 12:57:08.089733 ignition[752]: fetch: fetch complete Jan 30 12:57:08.089389 unknown[752]: fetched user config from "digitalocean" Jan 30 12:57:08.089739 ignition[752]: fetch: fetch passed Jan 30 12:57:08.091941 systemd[1]: Finished ignition-fetch.service - Ignition (fetch). Jan 30 12:57:08.089807 ignition[752]: Ignition finished successfully Jan 30 12:57:08.100512 systemd[1]: Starting ignition-kargs.service - Ignition (kargs)... Jan 30 12:57:08.126223 ignition[758]: Ignition 2.20.0 Jan 30 12:57:08.126245 ignition[758]: Stage: kargs Jan 30 12:57:08.126440 ignition[758]: no configs at "/usr/lib/ignition/base.d" Jan 30 12:57:08.128723 systemd[1]: Finished ignition-kargs.service - Ignition (kargs). 
Jan 30 12:57:08.126451 ignition[758]: no config dir at "/usr/lib/ignition/base.platform.d/digitalocean" Jan 30 12:57:08.127316 ignition[758]: kargs: kargs passed Jan 30 12:57:08.127373 ignition[758]: Ignition finished successfully Jan 30 12:57:08.142461 systemd[1]: Starting ignition-disks.service - Ignition (disks)... Jan 30 12:57:08.157175 ignition[765]: Ignition 2.20.0 Jan 30 12:57:08.157194 ignition[765]: Stage: disks Jan 30 12:57:08.157548 ignition[765]: no configs at "/usr/lib/ignition/base.d" Jan 30 12:57:08.160968 systemd[1]: Finished ignition-disks.service - Ignition (disks). Jan 30 12:57:08.157565 ignition[765]: no config dir at "/usr/lib/ignition/base.platform.d/digitalocean" Jan 30 12:57:08.158652 ignition[765]: disks: disks passed Jan 30 12:57:08.162080 systemd[1]: Reached target initrd-root-device.target - Initrd Root Device. Jan 30 12:57:08.158724 ignition[765]: Ignition finished successfully Jan 30 12:57:08.167872 systemd[1]: Reached target local-fs-pre.target - Preparation for Local File Systems. Jan 30 12:57:08.168842 systemd[1]: Reached target local-fs.target - Local File Systems. Jan 30 12:57:08.169930 systemd[1]: Reached target sysinit.target - System Initialization. Jan 30 12:57:08.170927 systemd[1]: Reached target basic.target - Basic System. Jan 30 12:57:08.178500 systemd[1]: Starting systemd-fsck-root.service - File System Check on /dev/disk/by-label/ROOT... Jan 30 12:57:08.212813 systemd-fsck[773]: ROOT: clean, 14/553520 files, 52654/553472 blocks Jan 30 12:57:08.216466 systemd[1]: Finished systemd-fsck-root.service - File System Check on /dev/disk/by-label/ROOT. Jan 30 12:57:08.225329 systemd[1]: Mounting sysroot.mount - /sysroot... Jan 30 12:57:08.340183 kernel: EXT4-fs (vda9): mounted filesystem cdc615db-d057-439f-af25-aa57b1c399e2 r/w with ordered data mode. Quota mode: none. Jan 30 12:57:08.340936 systemd[1]: Mounted sysroot.mount - /sysroot. 
Jan 30 12:57:08.342465 systemd[1]: Reached target initrd-root-fs.target - Initrd Root File System. Jan 30 12:57:08.348293 systemd[1]: Mounting sysroot-oem.mount - /sysroot/oem... Jan 30 12:57:08.366373 systemd[1]: Mounting sysroot-usr.mount - /sysroot/usr... Jan 30 12:57:08.369052 systemd[1]: Starting flatcar-afterburn-network.service - Flatcar Afterburn network service... Jan 30 12:57:08.372375 systemd[1]: Starting flatcar-metadata-hostname.service - Flatcar Metadata Hostname Agent... Jan 30 12:57:08.375281 systemd[1]: ignition-remount-sysroot.service - Remount /sysroot read-write for Ignition was skipped because of an unmet condition check (ConditionPathIsReadWrite=!/sysroot). Jan 30 12:57:08.377952 kernel: BTRFS: device label OEM devid 1 transid 15 /dev/vda6 scanned by mount (781) Jan 30 12:57:08.375323 systemd[1]: Reached target ignition-diskful.target - Ignition Boot Disk Setup. Jan 30 12:57:08.382352 kernel: BTRFS info (device vda6): first mount of filesystem 8f723f8b-dc93-4eaf-8b2c-0038aa5af52c Jan 30 12:57:08.382430 kernel: BTRFS info (device vda6): using crc32c (crc32c-intel) checksum algorithm Jan 30 12:57:08.382451 kernel: BTRFS info (device vda6): using free space tree Jan 30 12:57:08.385911 systemd[1]: Mounted sysroot-usr.mount - /sysroot/usr. Jan 30 12:57:08.393492 systemd[1]: Starting initrd-setup-root.service - Root filesystem setup... Jan 30 12:57:08.404634 kernel: BTRFS info (device vda6): auto enabling async discard Jan 30 12:57:08.412231 systemd[1]: Mounted sysroot-oem.mount - /sysroot/oem. 
Jan 30 12:57:08.492199 initrd-setup-root[812]: cut: /sysroot/etc/passwd: No such file or directory Jan 30 12:57:08.495586 coreos-metadata[783]: Jan 30 12:57:08.495 INFO Fetching http://169.254.169.254/metadata/v1.json: Attempt #1 Jan 30 12:57:08.498351 coreos-metadata[784]: Jan 30 12:57:08.496 INFO Fetching http://169.254.169.254/metadata/v1.json: Attempt #1 Jan 30 12:57:08.502599 initrd-setup-root[819]: cut: /sysroot/etc/group: No such file or directory Jan 30 12:57:08.508676 initrd-setup-root[826]: cut: /sysroot/etc/shadow: No such file or directory Jan 30 12:57:08.511686 coreos-metadata[784]: Jan 30 12:57:08.511 INFO Fetch successful Jan 30 12:57:08.514766 coreos-metadata[783]: Jan 30 12:57:08.514 INFO Fetch successful Jan 30 12:57:08.520828 coreos-metadata[784]: Jan 30 12:57:08.519 INFO wrote hostname ci-4186.1.0-8-5f4ae1611a to /sysroot/etc/hostname Jan 30 12:57:08.523428 initrd-setup-root[833]: cut: /sysroot/etc/gshadow: No such file or directory Jan 30 12:57:08.521767 systemd[1]: Finished flatcar-metadata-hostname.service - Flatcar Metadata Hostname Agent. Jan 30 12:57:08.525980 systemd[1]: flatcar-afterburn-network.service: Deactivated successfully. Jan 30 12:57:08.526921 systemd[1]: Finished flatcar-afterburn-network.service - Flatcar Afterburn network service. Jan 30 12:57:08.631937 systemd[1]: Finished initrd-setup-root.service - Root filesystem setup. Jan 30 12:57:08.637294 systemd[1]: Starting ignition-mount.service - Ignition (mount)... Jan 30 12:57:08.641393 systemd[1]: Starting sysroot-boot.service - /sysroot/boot... Jan 30 12:57:08.651258 kernel: BTRFS info (device vda6): last unmount of filesystem 8f723f8b-dc93-4eaf-8b2c-0038aa5af52c Jan 30 12:57:08.681302 systemd[1]: Finished sysroot-boot.service - /sysroot/boot. 
Jan 30 12:57:08.691703 ignition[903]: INFO : Ignition 2.20.0 Jan 30 12:57:08.694114 ignition[903]: INFO : Stage: mount Jan 30 12:57:08.694114 ignition[903]: INFO : no configs at "/usr/lib/ignition/base.d" Jan 30 12:57:08.694114 ignition[903]: INFO : no config dir at "/usr/lib/ignition/base.platform.d/digitalocean" Jan 30 12:57:08.694114 ignition[903]: INFO : mount: mount passed Jan 30 12:57:08.694114 ignition[903]: INFO : Ignition finished successfully Jan 30 12:57:08.696146 systemd[1]: Finished ignition-mount.service - Ignition (mount). Jan 30 12:57:08.704419 systemd[1]: Starting ignition-files.service - Ignition (files)... Jan 30 12:57:08.736985 systemd[1]: sysroot-oem.mount: Deactivated successfully. Jan 30 12:57:08.749590 systemd[1]: Mounting sysroot-oem.mount - /sysroot/oem... Jan 30 12:57:08.761624 kernel: BTRFS: device label OEM devid 1 transid 16 /dev/vda6 scanned by mount (915) Jan 30 12:57:08.761684 kernel: BTRFS info (device vda6): first mount of filesystem 8f723f8b-dc93-4eaf-8b2c-0038aa5af52c Jan 30 12:57:08.764985 kernel: BTRFS info (device vda6): using crc32c (crc32c-intel) checksum algorithm Jan 30 12:57:08.765071 kernel: BTRFS info (device vda6): using free space tree Jan 30 12:57:08.770247 kernel: BTRFS info (device vda6): auto enabling async discard Jan 30 12:57:08.772937 systemd[1]: Mounted sysroot-oem.mount - /sysroot/oem. 
Jan 30 12:57:08.800499 ignition[932]: INFO : Ignition 2.20.0 Jan 30 12:57:08.800499 ignition[932]: INFO : Stage: files Jan 30 12:57:08.802321 ignition[932]: INFO : no configs at "/usr/lib/ignition/base.d" Jan 30 12:57:08.802321 ignition[932]: INFO : no config dir at "/usr/lib/ignition/base.platform.d/digitalocean" Jan 30 12:57:08.802321 ignition[932]: DEBUG : files: compiled without relabeling support, skipping Jan 30 12:57:08.804823 ignition[932]: INFO : files: ensureUsers: op(1): [started] creating or modifying user "core" Jan 30 12:57:08.804823 ignition[932]: DEBUG : files: ensureUsers: op(1): executing: "usermod" "--root" "/sysroot" "core" Jan 30 12:57:08.806586 ignition[932]: INFO : files: ensureUsers: op(1): [finished] creating or modifying user "core" Jan 30 12:57:08.807328 ignition[932]: INFO : files: ensureUsers: op(2): [started] adding ssh keys to user "core" Jan 30 12:57:08.807328 ignition[932]: INFO : files: ensureUsers: op(2): [finished] adding ssh keys to user "core" Jan 30 12:57:08.807108 unknown[932]: wrote ssh authorized keys file for user: core Jan 30 12:57:08.809723 ignition[932]: INFO : files: createFilesystemsFiles: createFiles: op(3): [started] writing file "/sysroot/home/core/install.sh" Jan 30 12:57:08.809723 ignition[932]: INFO : files: createFilesystemsFiles: createFiles: op(3): [finished] writing file "/sysroot/home/core/install.sh" Jan 30 12:57:08.809723 ignition[932]: INFO : files: createFilesystemsFiles: createFiles: op(4): [started] writing file "/sysroot/etc/flatcar/update.conf" Jan 30 12:57:08.809723 ignition[932]: INFO : files: createFilesystemsFiles: createFiles: op(4): [finished] writing file "/sysroot/etc/flatcar/update.conf" Jan 30 12:57:08.809723 ignition[932]: INFO : files: createFilesystemsFiles: createFiles: op(5): [started] writing link "/sysroot/etc/extensions/kubernetes.raw" -> "/opt/extensions/kubernetes/kubernetes-v1.32.0-x86-64.raw" Jan 30 12:57:08.809723 ignition[932]: INFO : files: createFilesystemsFiles: 
createFiles: op(5): [finished] writing link "/sysroot/etc/extensions/kubernetes.raw" -> "/opt/extensions/kubernetes/kubernetes-v1.32.0-x86-64.raw" Jan 30 12:57:08.809723 ignition[932]: INFO : files: createFilesystemsFiles: createFiles: op(6): [started] writing file "/sysroot/opt/extensions/kubernetes/kubernetes-v1.32.0-x86-64.raw" Jan 30 12:57:08.809723 ignition[932]: INFO : files: createFilesystemsFiles: createFiles: op(6): GET https://github.com/flatcar/sysext-bakery/releases/download/latest/kubernetes-v1.32.0-x86-64.raw: attempt #1 Jan 30 12:57:09.287664 ignition[932]: INFO : files: createFilesystemsFiles: createFiles: op(6): GET result: OK Jan 30 12:57:09.559704 ignition[932]: INFO : files: createFilesystemsFiles: createFiles: op(6): [finished] writing file "/sysroot/opt/extensions/kubernetes/kubernetes-v1.32.0-x86-64.raw" Jan 30 12:57:09.561037 ignition[932]: INFO : files: createResultFile: createFiles: op(7): [started] writing file "/sysroot/etc/.ignition-result.json" Jan 30 12:57:09.561037 ignition[932]: INFO : files: createResultFile: createFiles: op(7): [finished] writing file "/sysroot/etc/.ignition-result.json" Jan 30 12:57:09.561037 ignition[932]: INFO : files: files passed Jan 30 12:57:09.561037 ignition[932]: INFO : Ignition finished successfully Jan 30 12:57:09.561174 systemd[1]: Finished ignition-files.service - Ignition (files). Jan 30 12:57:09.568394 systemd[1]: Starting ignition-quench.service - Ignition (record completion)... Jan 30 12:57:09.571534 systemd[1]: Starting initrd-setup-root-after-ignition.service - Root filesystem completion... Jan 30 12:57:09.578074 systemd[1]: ignition-quench.service: Deactivated successfully. Jan 30 12:57:09.578195 systemd[1]: Finished ignition-quench.service - Ignition (record completion). 
Jan 30 12:57:09.596278 initrd-setup-root-after-ignition[960]: grep: /sysroot/etc/flatcar/enabled-sysext.conf: No such file or directory Jan 30 12:57:09.598007 initrd-setup-root-after-ignition[960]: grep: /sysroot/usr/share/flatcar/enabled-sysext.conf: No such file or directory Jan 30 12:57:09.599071 initrd-setup-root-after-ignition[964]: grep: /sysroot/etc/flatcar/enabled-sysext.conf: No such file or directory Jan 30 12:57:09.600403 systemd[1]: Finished initrd-setup-root-after-ignition.service - Root filesystem completion. Jan 30 12:57:09.601638 systemd[1]: Reached target ignition-complete.target - Ignition Complete. Jan 30 12:57:09.608461 systemd[1]: Starting initrd-parse-etc.service - Mountpoints Configured in the Real Root... Jan 30 12:57:09.649618 systemd-networkd[744]: eth1: Gained IPv6LL Jan 30 12:57:09.654923 systemd[1]: initrd-parse-etc.service: Deactivated successfully. Jan 30 12:57:09.655039 systemd[1]: Finished initrd-parse-etc.service - Mountpoints Configured in the Real Root. Jan 30 12:57:09.655975 systemd[1]: Reached target initrd-fs.target - Initrd File Systems. Jan 30 12:57:09.656730 systemd[1]: Reached target initrd.target - Initrd Default Target. Jan 30 12:57:09.658133 systemd[1]: dracut-mount.service - dracut mount hook was skipped because no trigger condition checks were met. Jan 30 12:57:09.660397 systemd[1]: Starting dracut-pre-pivot.service - dracut pre-pivot and cleanup hook... Jan 30 12:57:09.682298 systemd[1]: Finished dracut-pre-pivot.service - dracut pre-pivot and cleanup hook. Jan 30 12:57:09.691433 systemd[1]: Starting initrd-cleanup.service - Cleaning Up and Shutting Down Daemons... Jan 30 12:57:09.704018 systemd[1]: Stopped target nss-lookup.target - Host and Network Name Lookups. Jan 30 12:57:09.704772 systemd[1]: Stopped target remote-cryptsetup.target - Remote Encrypted Volumes. Jan 30 12:57:09.706259 systemd[1]: Stopped target timers.target - Timer Units. 
Jan 30 12:57:09.707280 systemd[1]: dracut-pre-pivot.service: Deactivated successfully.
Jan 30 12:57:09.707417 systemd[1]: Stopped dracut-pre-pivot.service - dracut pre-pivot and cleanup hook.
Jan 30 12:57:09.708984 systemd[1]: Stopped target initrd.target - Initrd Default Target.
Jan 30 12:57:09.710461 systemd[1]: Stopped target basic.target - Basic System.
Jan 30 12:57:09.711527 systemd[1]: Stopped target ignition-complete.target - Ignition Complete.
Jan 30 12:57:09.712566 systemd[1]: Stopped target ignition-diskful.target - Ignition Boot Disk Setup.
Jan 30 12:57:09.713754 systemd[1]: Stopped target initrd-root-device.target - Initrd Root Device.
Jan 30 12:57:09.714875 systemd[1]: Stopped target remote-fs.target - Remote File Systems.
Jan 30 12:57:09.715982 systemd[1]: Stopped target remote-fs-pre.target - Preparation for Remote File Systems.
Jan 30 12:57:09.717420 systemd[1]: Stopped target sysinit.target - System Initialization.
Jan 30 12:57:09.718584 systemd[1]: Stopped target local-fs.target - Local File Systems.
Jan 30 12:57:09.719736 systemd[1]: Stopped target swap.target - Swaps.
Jan 30 12:57:09.720787 systemd[1]: dracut-pre-mount.service: Deactivated successfully.
Jan 30 12:57:09.720982 systemd[1]: Stopped dracut-pre-mount.service - dracut pre-mount hook.
Jan 30 12:57:09.722551 systemd[1]: Stopped target cryptsetup.target - Local Encrypted Volumes.
Jan 30 12:57:09.723264 systemd[1]: Stopped target cryptsetup-pre.target - Local Encrypted Volumes (Pre).
Jan 30 12:57:09.724592 systemd[1]: clevis-luks-askpass.path: Deactivated successfully.
Jan 30 12:57:09.724706 systemd[1]: Stopped clevis-luks-askpass.path - Forward Password Requests to Clevis Directory Watch.
Jan 30 12:57:09.726015 systemd[1]: dracut-initqueue.service: Deactivated successfully.
Jan 30 12:57:09.726206 systemd[1]: Stopped dracut-initqueue.service - dracut initqueue hook.
Jan 30 12:57:09.727814 systemd[1]: initrd-setup-root-after-ignition.service: Deactivated successfully.
Jan 30 12:57:09.727962 systemd[1]: Stopped initrd-setup-root-after-ignition.service - Root filesystem completion.
Jan 30 12:57:09.729766 systemd[1]: ignition-files.service: Deactivated successfully.
Jan 30 12:57:09.729890 systemd[1]: Stopped ignition-files.service - Ignition (files).
Jan 30 12:57:09.731076 systemd[1]: flatcar-metadata-hostname.service: Deactivated successfully.
Jan 30 12:57:09.731298 systemd[1]: Stopped flatcar-metadata-hostname.service - Flatcar Metadata Hostname Agent.
Jan 30 12:57:09.741693 systemd[1]: Stopping ignition-mount.service - Ignition (mount)...
Jan 30 12:57:09.748503 systemd[1]: Stopping sysroot-boot.service - /sysroot/boot...
Jan 30 12:57:09.749958 systemd[1]: systemd-udev-trigger.service: Deactivated successfully.
Jan 30 12:57:09.750949 systemd[1]: Stopped systemd-udev-trigger.service - Coldplug All udev Devices.
Jan 30 12:57:09.752478 systemd[1]: dracut-pre-trigger.service: Deactivated successfully.
Jan 30 12:57:09.752632 systemd[1]: Stopped dracut-pre-trigger.service - dracut pre-trigger hook.
Jan 30 12:57:09.764251 ignition[984]: INFO : Ignition 2.20.0
Jan 30 12:57:09.764251 ignition[984]: INFO : Stage: umount
Jan 30 12:57:09.774023 ignition[984]: INFO : no configs at "/usr/lib/ignition/base.d"
Jan 30 12:57:09.774023 ignition[984]: INFO : no config dir at "/usr/lib/ignition/base.platform.d/digitalocean"
Jan 30 12:57:09.774023 ignition[984]: INFO : umount: umount passed
Jan 30 12:57:09.774023 ignition[984]: INFO : Ignition finished successfully
Jan 30 12:57:09.767908 systemd[1]: initrd-cleanup.service: Deactivated successfully.
Jan 30 12:57:09.768083 systemd[1]: Finished initrd-cleanup.service - Cleaning Up and Shutting Down Daemons.
Jan 30 12:57:09.771531 systemd[1]: ignition-mount.service: Deactivated successfully.
Jan 30 12:57:09.771658 systemd[1]: Stopped ignition-mount.service - Ignition (mount).
Jan 30 12:57:09.773339 systemd[1]: ignition-disks.service: Deactivated successfully.
Jan 30 12:57:09.773498 systemd[1]: Stopped ignition-disks.service - Ignition (disks).
Jan 30 12:57:09.774620 systemd[1]: ignition-kargs.service: Deactivated successfully.
Jan 30 12:57:09.774685 systemd[1]: Stopped ignition-kargs.service - Ignition (kargs).
Jan 30 12:57:09.777267 systemd[1]: ignition-fetch.service: Deactivated successfully.
Jan 30 12:57:09.777349 systemd[1]: Stopped ignition-fetch.service - Ignition (fetch).
Jan 30 12:57:09.778688 systemd-networkd[744]: eth0: Gained IPv6LL
Jan 30 12:57:09.779597 systemd[1]: Stopped target network.target - Network.
Jan 30 12:57:09.781390 systemd[1]: ignition-fetch-offline.service: Deactivated successfully.
Jan 30 12:57:09.781481 systemd[1]: Stopped ignition-fetch-offline.service - Ignition (fetch-offline).
Jan 30 12:57:09.783204 systemd[1]: Stopped target paths.target - Path Units.
Jan 30 12:57:09.786002 systemd[1]: systemd-ask-password-console.path: Deactivated successfully.
Jan 30 12:57:09.791250 systemd[1]: Stopped systemd-ask-password-console.path - Dispatch Password Requests to Console Directory Watch.
Jan 30 12:57:09.792763 systemd[1]: Stopped target slices.target - Slice Units.
Jan 30 12:57:09.793655 systemd[1]: Stopped target sockets.target - Socket Units.
Jan 30 12:57:09.798093 systemd[1]: iscsid.socket: Deactivated successfully.
Jan 30 12:57:09.798175 systemd[1]: Closed iscsid.socket - Open-iSCSI iscsid Socket.
Jan 30 12:57:09.799351 systemd[1]: iscsiuio.socket: Deactivated successfully.
Jan 30 12:57:09.799400 systemd[1]: Closed iscsiuio.socket - Open-iSCSI iscsiuio Socket.
Jan 30 12:57:09.800552 systemd[1]: ignition-setup.service: Deactivated successfully.
Jan 30 12:57:09.800626 systemd[1]: Stopped ignition-setup.service - Ignition (setup).
Jan 30 12:57:09.803784 systemd[1]: ignition-setup-pre.service: Deactivated successfully.
Jan 30 12:57:09.803860 systemd[1]: Stopped ignition-setup-pre.service - Ignition env setup.
Jan 30 12:57:09.804662 systemd[1]: Stopping systemd-networkd.service - Network Configuration...
Jan 30 12:57:09.806318 systemd[1]: Stopping systemd-resolved.service - Network Name Resolution...
Jan 30 12:57:09.808328 systemd[1]: sysroot-boot.mount: Deactivated successfully.
Jan 30 12:57:09.809239 systemd-networkd[744]: eth0: DHCPv6 lease lost
Jan 30 12:57:09.809528 systemd[1]: sysroot-boot.service: Deactivated successfully.
Jan 30 12:57:09.809648 systemd[1]: Stopped sysroot-boot.service - /sysroot/boot.
Jan 30 12:57:09.814470 systemd-networkd[744]: eth1: DHCPv6 lease lost
Jan 30 12:57:09.816347 systemd[1]: systemd-resolved.service: Deactivated successfully.
Jan 30 12:57:09.816504 systemd[1]: Stopped systemd-resolved.service - Network Name Resolution.
Jan 30 12:57:09.820583 systemd[1]: systemd-networkd.service: Deactivated successfully.
Jan 30 12:57:09.820935 systemd[1]: Stopped systemd-networkd.service - Network Configuration.
Jan 30 12:57:09.824799 systemd[1]: systemd-networkd.socket: Deactivated successfully.
Jan 30 12:57:09.824901 systemd[1]: Closed systemd-networkd.socket - Network Service Netlink Socket.
Jan 30 12:57:09.825732 systemd[1]: initrd-setup-root.service: Deactivated successfully.
Jan 30 12:57:09.825798 systemd[1]: Stopped initrd-setup-root.service - Root filesystem setup.
Jan 30 12:57:09.832380 systemd[1]: Stopping network-cleanup.service - Network Cleanup...
Jan 30 12:57:09.833624 systemd[1]: parse-ip-for-networkd.service: Deactivated successfully.
Jan 30 12:57:09.834128 systemd[1]: Stopped parse-ip-for-networkd.service - Write systemd-networkd units from cmdline.
Jan 30 12:57:09.834935 systemd[1]: systemd-sysctl.service: Deactivated successfully.
Jan 30 12:57:09.834985 systemd[1]: Stopped systemd-sysctl.service - Apply Kernel Variables.
Jan 30 12:57:09.835492 systemd[1]: systemd-modules-load.service: Deactivated successfully.
Jan 30 12:57:09.835533 systemd[1]: Stopped systemd-modules-load.service - Load Kernel Modules.
Jan 30 12:57:09.836032 systemd[1]: systemd-tmpfiles-setup.service: Deactivated successfully.
Jan 30 12:57:09.836076 systemd[1]: Stopped systemd-tmpfiles-setup.service - Create System Files and Directories.
Jan 30 12:57:09.836776 systemd[1]: Stopping systemd-udevd.service - Rule-based Manager for Device Events and Files...
Jan 30 12:57:09.851956 systemd[1]: systemd-udevd.service: Deactivated successfully.
Jan 30 12:57:09.852251 systemd[1]: Stopped systemd-udevd.service - Rule-based Manager for Device Events and Files.
Jan 30 12:57:09.854065 systemd[1]: systemd-udevd-control.socket: Deactivated successfully.
Jan 30 12:57:09.854516 systemd[1]: Closed systemd-udevd-control.socket - udev Control Socket.
Jan 30 12:57:09.856231 systemd[1]: systemd-udevd-kernel.socket: Deactivated successfully.
Jan 30 12:57:09.856299 systemd[1]: Closed systemd-udevd-kernel.socket - udev Kernel Socket.
Jan 30 12:57:09.857412 systemd[1]: dracut-pre-udev.service: Deactivated successfully.
Jan 30 12:57:09.857476 systemd[1]: Stopped dracut-pre-udev.service - dracut pre-udev hook.
Jan 30 12:57:09.859165 systemd[1]: dracut-cmdline.service: Deactivated successfully.
Jan 30 12:57:09.859228 systemd[1]: Stopped dracut-cmdline.service - dracut cmdline hook.
Jan 30 12:57:09.860340 systemd[1]: dracut-cmdline-ask.service: Deactivated successfully.
Jan 30 12:57:09.860415 systemd[1]: Stopped dracut-cmdline-ask.service - dracut ask for additional cmdline parameters.
Jan 30 12:57:09.872959 systemd[1]: Starting initrd-udevadm-cleanup-db.service - Cleanup udev Database...
Jan 30 12:57:09.873652 systemd[1]: systemd-tmpfiles-setup-dev.service: Deactivated successfully.
Jan 30 12:57:09.873731 systemd[1]: Stopped systemd-tmpfiles-setup-dev.service - Create Static Device Nodes in /dev.
Jan 30 12:57:09.874464 systemd[1]: systemd-tmpfiles-setup-dev-early.service: Deactivated successfully.
Jan 30 12:57:09.874535 systemd[1]: Stopped systemd-tmpfiles-setup-dev-early.service - Create Static Device Nodes in /dev gracefully.
Jan 30 12:57:09.875211 systemd[1]: kmod-static-nodes.service: Deactivated successfully.
Jan 30 12:57:09.875273 systemd[1]: Stopped kmod-static-nodes.service - Create List of Static Device Nodes.
Jan 30 12:57:09.876772 systemd[1]: systemd-vconsole-setup.service: Deactivated successfully.
Jan 30 12:57:09.876845 systemd[1]: Stopped systemd-vconsole-setup.service - Virtual Console Setup.
Jan 30 12:57:09.878516 systemd[1]: network-cleanup.service: Deactivated successfully.
Jan 30 12:57:09.878636 systemd[1]: Stopped network-cleanup.service - Network Cleanup.
Jan 30 12:57:09.885632 systemd[1]: initrd-udevadm-cleanup-db.service: Deactivated successfully.
Jan 30 12:57:09.885753 systemd[1]: Finished initrd-udevadm-cleanup-db.service - Cleanup udev Database.
Jan 30 12:57:09.887606 systemd[1]: Reached target initrd-switch-root.target - Switch Root.
Jan 30 12:57:09.900545 systemd[1]: Starting initrd-switch-root.service - Switch Root...
Jan 30 12:57:09.911796 systemd[1]: Switching root.
Jan 30 12:57:09.986721 systemd-journald[182]: Journal stopped
Jan 30 12:57:11.445094 systemd-journald[182]: Received SIGTERM from PID 1 (systemd).
Jan 30 12:57:11.445351 kernel: SELinux: policy capability network_peer_controls=1
Jan 30 12:57:11.445383 kernel: SELinux: policy capability open_perms=1
Jan 30 12:57:11.445404 kernel: SELinux: policy capability extended_socket_class=1
Jan 30 12:57:11.445425 kernel: SELinux: policy capability always_check_network=0
Jan 30 12:57:11.445451 kernel: SELinux: policy capability cgroup_seclabel=1
Jan 30 12:57:11.445471 kernel: SELinux: policy capability nnp_nosuid_transition=1
Jan 30 12:57:11.445490 kernel: SELinux: policy capability genfs_seclabel_symlinks=0
Jan 30 12:57:11.445502 kernel: SELinux: policy capability ioctl_skip_cloexec=0
Jan 30 12:57:11.445513 kernel: audit: type=1403 audit(1738241830.220:2): auid=4294967295 ses=4294967295 lsm=selinux res=1
Jan 30 12:57:11.445528 systemd[1]: Successfully loaded SELinux policy in 52.019ms.
Jan 30 12:57:11.445550 systemd[1]: Relabeled /dev, /dev/shm, /run, /sys/fs/cgroup in 17.227ms.
Jan 30 12:57:11.445566 systemd[1]: systemd 255 running in system mode (+PAM +AUDIT +SELINUX -APPARMOR +IMA +SMACK +SECCOMP +GCRYPT -GNUTLS +OPENSSL -ACL +BLKID +CURL +ELFUTILS -FIDO2 +IDN2 -IDN +IPTC +KMOD +LIBCRYPTSETUP +LIBFDISK +PCRE2 -PWQUALITY -P11KIT -QRENCODE +TPM2 +BZIP2 +LZ4 +XZ +ZLIB +ZSTD -BPF_FRAMEWORK -XKBCOMMON +UTMP -SYSVINIT default-hierarchy=unified)
Jan 30 12:57:11.445579 systemd[1]: Detected virtualization kvm.
Jan 30 12:57:11.445595 systemd[1]: Detected architecture x86-64.
Jan 30 12:57:11.445606 systemd[1]: Detected first boot.
Jan 30 12:57:11.445622 systemd[1]: Hostname set to .
Jan 30 12:57:11.445638 systemd[1]: Initializing machine ID from VM UUID.
Jan 30 12:57:11.445650 zram_generator::config[1030]: No configuration found.
Jan 30 12:57:11.445665 systemd[1]: Populated /etc with preset unit settings.
Jan 30 12:57:11.445677 systemd[1]: initrd-switch-root.service: Deactivated successfully.
Jan 30 12:57:11.445689 systemd[1]: Stopped initrd-switch-root.service - Switch Root.
Jan 30 12:57:11.445705 systemd[1]: systemd-journald.service: Scheduled restart job, restart counter is at 1.
Jan 30 12:57:11.445718 systemd[1]: Created slice system-addon\x2dconfig.slice - Slice /system/addon-config.
Jan 30 12:57:11.445739 systemd[1]: Created slice system-addon\x2drun.slice - Slice /system/addon-run.
Jan 30 12:57:11.445761 systemd[1]: Created slice system-getty.slice - Slice /system/getty.
Jan 30 12:57:11.445778 systemd[1]: Created slice system-modprobe.slice - Slice /system/modprobe.
Jan 30 12:57:11.445796 systemd[1]: Created slice system-serial\x2dgetty.slice - Slice /system/serial-getty.
Jan 30 12:57:11.445813 systemd[1]: Created slice system-system\x2dcloudinit.slice - Slice /system/system-cloudinit.
Jan 30 12:57:11.445830 systemd[1]: Created slice system-systemd\x2dfsck.slice - Slice /system/systemd-fsck.
Jan 30 12:57:11.445851 systemd[1]: Created slice user.slice - User and Session Slice.
Jan 30 12:57:11.445869 systemd[1]: Started clevis-luks-askpass.path - Forward Password Requests to Clevis Directory Watch.
Jan 30 12:57:11.445884 systemd[1]: Started systemd-ask-password-console.path - Dispatch Password Requests to Console Directory Watch.
Jan 30 12:57:11.445897 systemd[1]: Started systemd-ask-password-wall.path - Forward Password Requests to Wall Directory Watch.
Jan 30 12:57:11.445909 systemd[1]: Set up automount boot.automount - Boot partition Automount Point.
Jan 30 12:57:11.445921 systemd[1]: Set up automount proc-sys-fs-binfmt_misc.automount - Arbitrary Executable File Formats File System Automount Point.
Jan 30 12:57:11.445934 systemd[1]: Expecting device dev-disk-by\x2dlabel-OEM.device - /dev/disk/by-label/OEM...
Jan 30 12:57:11.445946 systemd[1]: Expecting device dev-ttyS0.device - /dev/ttyS0...
Jan 30 12:57:11.445958 systemd[1]: Reached target cryptsetup-pre.target - Local Encrypted Volumes (Pre).
Jan 30 12:57:11.445972 systemd[1]: Stopped target initrd-switch-root.target - Switch Root.
Jan 30 12:57:11.445996 systemd[1]: Stopped target initrd-fs.target - Initrd File Systems.
Jan 30 12:57:11.446016 systemd[1]: Stopped target initrd-root-fs.target - Initrd Root File System.
Jan 30 12:57:11.446034 systemd[1]: Reached target integritysetup.target - Local Integrity Protected Volumes.
Jan 30 12:57:11.446051 systemd[1]: Reached target remote-cryptsetup.target - Remote Encrypted Volumes.
Jan 30 12:57:11.446068 systemd[1]: Reached target remote-fs.target - Remote File Systems.
Jan 30 12:57:11.446088 systemd[1]: Reached target slices.target - Slice Units.
Jan 30 12:57:11.446105 systemd[1]: Reached target swap.target - Swaps.
Jan 30 12:57:11.446123 systemd[1]: Reached target veritysetup.target - Local Verity Protected Volumes.
Jan 30 12:57:11.446142 systemd[1]: Listening on systemd-coredump.socket - Process Core Dump Socket.
Jan 30 12:57:11.446181 systemd[1]: Listening on systemd-networkd.socket - Network Service Netlink Socket.
Jan 30 12:57:11.446202 systemd[1]: Listening on systemd-udevd-control.socket - udev Control Socket.
Jan 30 12:57:11.446222 systemd[1]: Listening on systemd-udevd-kernel.socket - udev Kernel Socket.
Jan 30 12:57:11.446242 systemd[1]: Listening on systemd-userdbd.socket - User Database Manager Socket.
Jan 30 12:57:11.446263 systemd[1]: Mounting dev-hugepages.mount - Huge Pages File System...
Jan 30 12:57:11.446284 systemd[1]: Mounting dev-mqueue.mount - POSIX Message Queue File System...
Jan 30 12:57:11.446309 systemd[1]: Mounting media.mount - External Media Directory...
Jan 30 12:57:11.446330 systemd[1]: proc-xen.mount - /proc/xen was skipped because of an unmet condition check (ConditionVirtualization=xen).
Jan 30 12:57:11.446351 systemd[1]: Mounting sys-kernel-debug.mount - Kernel Debug File System...
Jan 30 12:57:11.446373 systemd[1]: Mounting sys-kernel-tracing.mount - Kernel Trace File System...
Jan 30 12:57:11.446394 systemd[1]: Mounting tmp.mount - Temporary Directory /tmp...
Jan 30 12:57:11.446416 systemd[1]: var-lib-machines.mount - Virtual Machine and Container Storage (Compatibility) was skipped because of an unmet condition check (ConditionPathExists=/var/lib/machines.raw).
Jan 30 12:57:11.446437 systemd[1]: Reached target machines.target - Containers.
Jan 30 12:57:11.446457 systemd[1]: Starting flatcar-tmpfiles.service - Create missing system files...
Jan 30 12:57:11.446483 systemd[1]: ignition-delete-config.service - Ignition (delete config) was skipped because no trigger condition checks were met.
Jan 30 12:57:11.446505 systemd[1]: Starting kmod-static-nodes.service - Create List of Static Device Nodes...
Jan 30 12:57:11.446526 systemd[1]: Starting modprobe@configfs.service - Load Kernel Module configfs...
Jan 30 12:57:11.446544 systemd[1]: Starting modprobe@dm_mod.service - Load Kernel Module dm_mod...
Jan 30 12:57:11.446563 systemd[1]: Starting modprobe@drm.service - Load Kernel Module drm...
Jan 30 12:57:11.446583 systemd[1]: Starting modprobe@efi_pstore.service - Load Kernel Module efi_pstore...
Jan 30 12:57:11.446603 systemd[1]: Starting modprobe@fuse.service - Load Kernel Module fuse...
Jan 30 12:57:11.446625 systemd[1]: Starting modprobe@loop.service - Load Kernel Module loop...
Jan 30 12:57:11.446651 systemd[1]: setup-nsswitch.service - Create /etc/nsswitch.conf was skipped because of an unmet condition check (ConditionPathExists=!/etc/nsswitch.conf).
Jan 30 12:57:11.446671 systemd[1]: systemd-fsck-root.service: Deactivated successfully.
Jan 30 12:57:11.446693 systemd[1]: Stopped systemd-fsck-root.service - File System Check on Root Device.
Jan 30 12:57:11.446714 systemd[1]: systemd-fsck-usr.service: Deactivated successfully.
Jan 30 12:57:11.446734 systemd[1]: Stopped systemd-fsck-usr.service.
Jan 30 12:57:11.446755 systemd[1]: Starting systemd-journald.service - Journal Service...
Jan 30 12:57:11.446776 systemd[1]: Starting systemd-modules-load.service - Load Kernel Modules...
Jan 30 12:57:11.446797 systemd[1]: Starting systemd-network-generator.service - Generate network units from Kernel command line...
Jan 30 12:57:11.446819 systemd[1]: Starting systemd-remount-fs.service - Remount Root and Kernel File Systems...
Jan 30 12:57:11.446843 systemd[1]: Starting systemd-udev-trigger.service - Coldplug All udev Devices...
Jan 30 12:57:11.446865 systemd[1]: verity-setup.service: Deactivated successfully.
Jan 30 12:57:11.446885 systemd[1]: Stopped verity-setup.service.
Jan 30 12:57:11.446908 systemd[1]: xenserver-pv-version.service - Set fake PV driver version for XenServer was skipped because of an unmet condition check (ConditionVirtualization=xen).
Jan 30 12:57:11.446929 kernel: ACPI: bus type drm_connector registered
Jan 30 12:57:11.446951 systemd[1]: Mounted dev-hugepages.mount - Huge Pages File System.
Jan 30 12:57:11.446970 systemd[1]: Mounted dev-mqueue.mount - POSIX Message Queue File System.
Jan 30 12:57:11.446987 systemd[1]: Mounted media.mount - External Media Directory.
Jan 30 12:57:11.447004 systemd[1]: Mounted sys-kernel-debug.mount - Kernel Debug File System.
Jan 30 12:57:11.447026 kernel: loop: module loaded
Jan 30 12:57:11.447045 systemd[1]: Mounted sys-kernel-tracing.mount - Kernel Trace File System.
Jan 30 12:57:11.447074 systemd[1]: Mounted tmp.mount - Temporary Directory /tmp.
Jan 30 12:57:11.447095 kernel: fuse: init (API version 7.39)
Jan 30 12:57:11.447116 systemd[1]: Finished kmod-static-nodes.service - Create List of Static Device Nodes.
Jan 30 12:57:11.447144 systemd[1]: modprobe@configfs.service: Deactivated successfully.
Jan 30 12:57:11.447259 systemd[1]: Finished modprobe@configfs.service - Load Kernel Module configfs.
Jan 30 12:57:11.447280 systemd[1]: modprobe@dm_mod.service: Deactivated successfully.
Jan 30 12:57:11.447300 systemd[1]: Finished modprobe@dm_mod.service - Load Kernel Module dm_mod.
Jan 30 12:57:11.447320 systemd[1]: modprobe@drm.service: Deactivated successfully.
Jan 30 12:57:11.447345 systemd[1]: Finished modprobe@drm.service - Load Kernel Module drm.
Jan 30 12:57:11.447366 systemd[1]: modprobe@efi_pstore.service: Deactivated successfully.
Jan 30 12:57:11.447386 systemd[1]: Finished modprobe@efi_pstore.service - Load Kernel Module efi_pstore.
Jan 30 12:57:11.447404 systemd[1]: modprobe@fuse.service: Deactivated successfully.
Jan 30 12:57:11.447422 systemd[1]: Finished modprobe@fuse.service - Load Kernel Module fuse.
Jan 30 12:57:11.447442 systemd[1]: modprobe@loop.service: Deactivated successfully.
Jan 30 12:57:11.447462 systemd[1]: Finished modprobe@loop.service - Load Kernel Module loop.
Jan 30 12:57:11.447481 systemd[1]: Finished systemd-modules-load.service - Load Kernel Modules.
Jan 30 12:57:11.447503 systemd[1]: Finished systemd-network-generator.service - Generate network units from Kernel command line.
Jan 30 12:57:11.447529 systemd[1]: Finished systemd-remount-fs.service - Remount Root and Kernel File Systems.
Jan 30 12:57:11.447601 systemd-journald[1105]: Collecting audit messages is disabled.
Jan 30 12:57:11.447641 systemd[1]: Reached target network-pre.target - Preparation for Network.
Jan 30 12:57:11.447661 systemd-journald[1105]: Journal started
Jan 30 12:57:11.447696 systemd-journald[1105]: Runtime Journal (/run/log/journal/1b3b53d15c7b419dba08b30676343e2c) is 4.9M, max 39.3M, 34.4M free.
Jan 30 12:57:10.956826 systemd[1]: Queued start job for default target multi-user.target.
Jan 30 12:57:11.464971 systemd[1]: Mounting sys-fs-fuse-connections.mount - FUSE Control File System...
Jan 30 12:57:10.979983 systemd[1]: Unnecessary job was removed for dev-vda6.device - /dev/vda6.
Jan 30 12:57:10.980655 systemd[1]: systemd-journald.service: Deactivated successfully.
Jan 30 12:57:11.473722 systemd[1]: Mounting sys-kernel-config.mount - Kernel Configuration File System...
Jan 30 12:57:11.478236 systemd[1]: remount-root.service - Remount Root File System was skipped because of an unmet condition check (ConditionPathIsReadWrite=!/).
Jan 30 12:57:11.478355 systemd[1]: Reached target local-fs.target - Local File Systems.
Jan 30 12:57:11.487249 systemd[1]: Listening on systemd-sysext.socket - System Extension Image Management (Varlink).
Jan 30 12:57:11.500200 systemd[1]: Starting dracut-shutdown.service - Restore /run/initramfs on shutdown...
Jan 30 12:57:11.504497 systemd[1]: Starting ldconfig.service - Rebuild Dynamic Linker Cache...
Jan 30 12:57:11.509437 systemd[1]: systemd-binfmt.service - Set Up Additional Binary Formats was skipped because no trigger condition checks were met.
Jan 30 12:57:11.520369 systemd[1]: Starting systemd-hwdb-update.service - Rebuild Hardware Database...
Jan 30 12:57:11.525225 systemd[1]: systemd-pstore.service - Platform Persistent Storage Archival was skipped because of an unmet condition check (ConditionDirectoryNotEmpty=/sys/fs/pstore).
Jan 30 12:57:11.534016 systemd[1]: Starting systemd-random-seed.service - Load/Save OS Random Seed...
Jan 30 12:57:11.538208 systemd[1]: systemd-repart.service - Repartition Root Disk was skipped because no trigger condition checks were met.
Jan 30 12:57:11.549208 systemd[1]: Starting systemd-sysctl.service - Apply Kernel Variables...
Jan 30 12:57:11.569077 systemd[1]: Starting systemd-sysext.service - Merge System Extension Images into /usr/ and /opt/...
Jan 30 12:57:11.580723 systemd[1]: Starting systemd-tmpfiles-setup-dev-early.service - Create Static Device Nodes in /dev gracefully...
Jan 30 12:57:11.583382 systemd[1]: Started systemd-journald.service - Journal Service.
Jan 30 12:57:11.589278 systemd[1]: Finished flatcar-tmpfiles.service - Create missing system files.
Jan 30 12:57:11.591574 systemd[1]: Finished systemd-udev-trigger.service - Coldplug All udev Devices.
Jan 30 12:57:11.592459 systemd[1]: Mounted sys-fs-fuse-connections.mount - FUSE Control File System.
Jan 30 12:57:11.593872 systemd[1]: Mounted sys-kernel-config.mount - Kernel Configuration File System.
Jan 30 12:57:11.596238 systemd[1]: Finished dracut-shutdown.service - Restore /run/initramfs on shutdown.
Jan 30 12:57:11.614908 systemd[1]: Finished systemd-random-seed.service - Load/Save OS Random Seed.
Jan 30 12:57:11.647527 kernel: loop0: detected capacity change from 0 to 138184
Jan 30 12:57:11.650069 systemd[1]: Reached target first-boot-complete.target - First Boot Complete.
Jan 30 12:57:11.663632 systemd[1]: Starting systemd-journal-flush.service - Flush Journal to Persistent Storage...
Jan 30 12:57:11.679116 systemd[1]: Starting systemd-machine-id-commit.service - Commit a transient machine-id on disk...
Jan 30 12:57:11.689463 systemd[1]: Starting systemd-udev-settle.service - Wait for udev To Complete Device Initialization...
Jan 30 12:57:11.690469 systemd[1]: Finished systemd-sysctl.service - Apply Kernel Variables.
Jan 30 12:57:11.709275 kernel: squashfs: version 4.0 (2009/01/31) Phillip Lougher
Jan 30 12:57:11.739021 kernel: loop1: detected capacity change from 0 to 218376
Jan 30 12:57:11.742344 systemd[1]: etc-machine\x2did.mount: Deactivated successfully.
Jan 30 12:57:11.743895 systemd[1]: Finished systemd-machine-id-commit.service - Commit a transient machine-id on disk.
Jan 30 12:57:11.753015 systemd-journald[1105]: Time spent on flushing to /var/log/journal/1b3b53d15c7b419dba08b30676343e2c is 64.186ms for 983 entries.
Jan 30 12:57:11.753015 systemd-journald[1105]: System Journal (/var/log/journal/1b3b53d15c7b419dba08b30676343e2c) is 8.0M, max 195.6M, 187.6M free.
Jan 30 12:57:11.830796 systemd-journald[1105]: Received client request to flush runtime journal.
Jan 30 12:57:11.755188 udevadm[1157]: systemd-udev-settle.service is deprecated. Please fix lvm2-activation.service, lvm2-activation-early.service not to pull it in.
Jan 30 12:57:11.759394 systemd-tmpfiles[1129]: ACLs are not supported, ignoring.
Jan 30 12:57:11.759408 systemd-tmpfiles[1129]: ACLs are not supported, ignoring.
Jan 30 12:57:11.779771 systemd[1]: Finished systemd-tmpfiles-setup-dev-early.service - Create Static Device Nodes in /dev gracefully.
Jan 30 12:57:11.793741 systemd[1]: Starting systemd-sysusers.service - Create System Users...
Jan 30 12:57:11.833628 systemd[1]: Finished systemd-journal-flush.service - Flush Journal to Persistent Storage.
Jan 30 12:57:11.838203 kernel: loop2: detected capacity change from 0 to 141000
Jan 30 12:57:11.878873 systemd[1]: Finished systemd-sysusers.service - Create System Users.
Jan 30 12:57:11.901492 systemd[1]: Starting systemd-tmpfiles-setup-dev.service - Create Static Device Nodes in /dev...
Jan 30 12:57:11.916204 kernel: loop3: detected capacity change from 0 to 8
Jan 30 12:57:11.961079 kernel: loop4: detected capacity change from 0 to 138184
Jan 30 12:57:11.991199 kernel: loop5: detected capacity change from 0 to 218376
Jan 30 12:57:11.995888 systemd-tmpfiles[1170]: ACLs are not supported, ignoring.
Jan 30 12:57:11.996851 systemd-tmpfiles[1170]: ACLs are not supported, ignoring.
Jan 30 12:57:12.011385 systemd[1]: Finished systemd-tmpfiles-setup-dev.service - Create Static Device Nodes in /dev.
Jan 30 12:57:12.033502 kernel: loop6: detected capacity change from 0 to 141000
Jan 30 12:57:12.052347 kernel: loop7: detected capacity change from 0 to 8
Jan 30 12:57:12.055061 (sd-merge)[1173]: Using extensions 'containerd-flatcar', 'docker-flatcar', 'kubernetes', 'oem-digitalocean'.
Jan 30 12:57:12.058821 (sd-merge)[1173]: Merged extensions into '/usr'.
Jan 30 12:57:12.073525 systemd[1]: Reloading requested from client PID 1128 ('systemd-sysext') (unit systemd-sysext.service)...
Jan 30 12:57:12.073561 systemd[1]: Reloading...
Jan 30 12:57:12.252192 zram_generator::config[1200]: No configuration found.
Jan 30 12:57:12.378477 ldconfig[1124]: /sbin/ldconfig: /usr/lib/ld.so.conf is not an ELF file - it has the wrong magic bytes at the start.
Jan 30 12:57:12.481426 systemd[1]: /usr/lib/systemd/system/docker.socket:6: ListenStream= references a path below legacy directory /var/run/, updating /var/run/docker.sock → /run/docker.sock; please update the unit file accordingly.
Jan 30 12:57:12.543960 systemd[1]: Reloading finished in 469 ms.
Jan 30 12:57:12.567018 systemd[1]: Finished ldconfig.service - Rebuild Dynamic Linker Cache.
Jan 30 12:57:12.570870 systemd[1]: Finished systemd-sysext.service - Merge System Extension Images into /usr/ and /opt/.
Jan 30 12:57:12.584017 systemd[1]: Starting ensure-sysext.service...
Jan 30 12:57:12.587540 systemd[1]: Starting systemd-tmpfiles-setup.service - Create System Files and Directories...
Jan 30 12:57:12.602279 systemd[1]: Reloading requested from client PID 1243 ('systemctl') (unit ensure-sysext.service)...
Jan 30 12:57:12.602315 systemd[1]: Reloading...
Jan 30 12:57:12.634437 systemd-tmpfiles[1244]: /usr/lib/tmpfiles.d/provision.conf:20: Duplicate line for path "/root", ignoring.
Jan 30 12:57:12.635391 systemd-tmpfiles[1244]: /usr/lib/tmpfiles.d/systemd-flatcar.conf:6: Duplicate line for path "/var/log/journal", ignoring.
Jan 30 12:57:12.637301 systemd-tmpfiles[1244]: /usr/lib/tmpfiles.d/systemd.conf:29: Duplicate line for path "/var/lib/systemd", ignoring.
Jan 30 12:57:12.637961 systemd-tmpfiles[1244]: ACLs are not supported, ignoring.
Jan 30 12:57:12.638464 systemd-tmpfiles[1244]: ACLs are not supported, ignoring.
Jan 30 12:57:12.644858 systemd-tmpfiles[1244]: Detected autofs mount point /boot during canonicalization of boot.
Jan 30 12:57:12.644988 systemd-tmpfiles[1244]: Skipping /boot
Jan 30 12:57:12.666807 systemd-tmpfiles[1244]: Detected autofs mount point /boot during canonicalization of boot.
Jan 30 12:57:12.666968 systemd-tmpfiles[1244]: Skipping /boot
Jan 30 12:57:12.744230 zram_generator::config[1282]: No configuration found.
Jan 30 12:57:12.881653 systemd[1]: /usr/lib/systemd/system/docker.socket:6: ListenStream= references a path below legacy directory /var/run/, updating /var/run/docker.sock → /run/docker.sock; please update the unit file accordingly.
Jan 30 12:57:12.946690 systemd[1]: Reloading finished in 343 ms.
Jan 30 12:57:12.967974 systemd[1]: Finished systemd-hwdb-update.service - Rebuild Hardware Database.
Jan 30 12:57:12.979903 systemd[1]: Finished systemd-tmpfiles-setup.service - Create System Files and Directories.
Jan 30 12:57:12.995658 systemd[1]: Starting audit-rules.service - Load Audit Rules...
Jan 30 12:57:13.002321 systemd[1]: Starting clean-ca-certificates.service - Clean up broken links in /etc/ssl/certs...
Jan 30 12:57:13.006476 systemd[1]: Starting systemd-journal-catalog-update.service - Rebuild Journal Catalog...
Jan 30 12:57:13.017652 systemd[1]: Starting systemd-resolved.service - Network Name Resolution...
Jan 30 12:57:13.023680 systemd[1]: Starting systemd-udevd.service - Rule-based Manager for Device Events and Files...
Jan 30 12:57:13.035623 systemd[1]: Starting systemd-update-utmp.service - Record System Boot/Shutdown in UTMP...
Jan 30 12:57:13.041738 systemd[1]: proc-xen.mount - /proc/xen was skipped because of an unmet condition check (ConditionVirtualization=xen).
Jan 30 12:57:13.042035 systemd[1]: ignition-delete-config.service - Ignition (delete config) was skipped because no trigger condition checks were met.
Jan 30 12:57:13.050654 systemd[1]: Starting modprobe@dm_mod.service - Load Kernel Module dm_mod...
Jan 30 12:57:13.063315 systemd[1]: Starting modprobe@efi_pstore.service - Load Kernel Module efi_pstore...
Jan 30 12:57:13.077061 systemd[1]: Starting modprobe@loop.service - Load Kernel Module loop...
Jan 30 12:57:13.084873 systemd[1]: systemd-binfmt.service - Set Up Additional Binary Formats was skipped because no trigger condition checks were met.
Jan 30 12:57:13.085473 systemd[1]: xenserver-pv-version.service - Set fake PV driver version for XenServer was skipped because of an unmet condition check (ConditionVirtualization=xen).
Jan 30 12:57:13.104259 systemd[1]: Finished systemd-journal-catalog-update.service - Rebuild Journal Catalog.
Jan 30 12:57:13.120258 systemd[1]: Finished systemd-update-utmp.service - Record System Boot/Shutdown in UTMP.
Jan 30 12:57:13.124773 systemd[1]: proc-xen.mount - /proc/xen was skipped because of an unmet condition check (ConditionVirtualization=xen).
Jan 30 12:57:13.126295 systemd[1]: ignition-delete-config.service - Ignition (delete config) was skipped because no trigger condition checks were met.
Jan 30 12:57:13.126637 systemd[1]: systemd-binfmt.service - Set Up Additional Binary Formats was skipped because no trigger condition checks were met.
Jan 30 12:57:13.133635 systemd[1]: Starting systemd-update-done.service - Update is Completed...
Jan 30 12:57:13.143576 systemd[1]: Starting systemd-userdbd.service - User Database Manager...
Jan 30 12:57:13.144375 systemd[1]: xenserver-pv-version.service - Set fake PV driver version for XenServer was skipped because of an unmet condition check (ConditionVirtualization=xen).
Jan 30 12:57:13.145908 systemd[1]: Finished clean-ca-certificates.service - Clean up broken links in /etc/ssl/certs.
Jan 30 12:57:13.147939 systemd[1]: modprobe@dm_mod.service: Deactivated successfully.
Jan 30 12:57:13.148854 systemd[1]: Finished modprobe@dm_mod.service - Load Kernel Module dm_mod.
Jan 30 12:57:13.151003 systemd[1]: modprobe@efi_pstore.service: Deactivated successfully.
Jan 30 12:57:13.152435 systemd[1]: Finished modprobe@efi_pstore.service - Load Kernel Module efi_pstore.
Jan 30 12:57:13.154117 systemd[1]: modprobe@loop.service: Deactivated successfully.
Jan 30 12:57:13.155435 systemd[1]: Finished modprobe@loop.service - Load Kernel Module loop.
Jan 30 12:57:13.172854 systemd[1]: proc-xen.mount - /proc/xen was skipped because of an unmet condition check (ConditionVirtualization=xen).
Jan 30 12:57:13.175541 systemd[1]: ignition-delete-config.service - Ignition (delete config) was skipped because no trigger condition checks were met.
Jan 30 12:57:13.185619 systemd[1]: Starting modprobe@dm_mod.service - Load Kernel Module dm_mod...
Jan 30 12:57:13.197838 systemd[1]: Starting modprobe@drm.service - Load Kernel Module drm...
Jan 30 12:57:13.203566 systemd[1]: Starting modprobe@efi_pstore.service - Load Kernel Module efi_pstore...
Jan 30 12:57:13.209382 systemd[1]: Starting modprobe@loop.service - Load Kernel Module loop...
Jan 30 12:57:13.210423 systemd[1]: systemd-binfmt.service - Set Up Additional Binary Formats was skipped because no trigger condition checks were met.
Jan 30 12:57:13.210683 systemd[1]: update-ca-certificates.service - Update CA bundle at /etc/ssl/certs/ca-certificates.crt was skipped because of an unmet condition check (ConditionPathIsSymbolicLink=!/etc/ssl/certs/ca-certificates.crt).
Jan 30 12:57:13.210817 systemd[1]: xenserver-pv-version.service - Set fake PV driver version for XenServer was skipped because of an unmet condition check (ConditionVirtualization=xen).
Jan 30 12:57:13.214504 systemd[1]: Finished systemd-update-done.service - Update is Completed.
Jan 30 12:57:13.217356 systemd[1]: modprobe@dm_mod.service: Deactivated successfully.
Jan 30 12:57:13.217599 systemd[1]: Finished modprobe@dm_mod.service - Load Kernel Module dm_mod.
Jan 30 12:57:13.220777 systemd[1]: modprobe@drm.service: Deactivated successfully.
Jan 30 12:57:13.221032 systemd-udevd[1320]: Using default interface naming scheme 'v255'.
Jan 30 12:57:13.224399 augenrules[1355]: No rules
Jan 30 12:57:13.221769 systemd[1]: Finished modprobe@drm.service - Load Kernel Module drm.
Jan 30 12:57:13.223943 systemd[1]: audit-rules.service: Deactivated successfully.
Jan 30 12:57:13.224789 systemd[1]: Finished audit-rules.service - Load Audit Rules.
Jan 30 12:57:13.238596 systemd[1]: Finished ensure-sysext.service.
Jan 30 12:57:13.253273 systemd[1]: Starting systemd-timesyncd.service - Network Time Synchronization...
Jan 30 12:57:13.256071 systemd[1]: modprobe@efi_pstore.service: Deactivated successfully.
Jan 30 12:57:13.256355 systemd[1]: Finished modprobe@efi_pstore.service - Load Kernel Module efi_pstore.
Jan 30 12:57:13.259497 systemd[1]: systemd-pstore.service - Platform Persistent Storage Archival was skipped because of an unmet condition check (ConditionDirectoryNotEmpty=/sys/fs/pstore).
Jan 30 12:57:13.267133 systemd[1]: Started systemd-udevd.service - Rule-based Manager for Device Events and Files.
Jan 30 12:57:13.269716 systemd[1]: modprobe@loop.service: Deactivated successfully.
Jan 30 12:57:13.269983 systemd[1]: Finished modprobe@loop.service - Load Kernel Module loop.
Jan 30 12:57:13.284884 systemd[1]: Starting systemd-networkd.service - Network Configuration...
Jan 30 12:57:13.286596 systemd[1]: systemd-repart.service - Repartition Root Disk was skipped because no trigger condition checks were met.
Jan 30 12:57:13.286797 systemd[1]: Started systemd-userdbd.service - User Database Manager.
Jan 30 12:57:13.441287 systemd-networkd[1374]: lo: Link UP
Jan 30 12:57:13.443218 systemd-networkd[1374]: lo: Gained carrier
Jan 30 12:57:13.450894 systemd-resolved[1318]: Positive Trust Anchors:
Jan 30 12:57:13.450914 systemd-resolved[1318]: . IN DS 20326 8 2 e06d44b80b8f1d39a95c0b0d7c65d08458e880409bbc683457104237c7f8ec8d
Jan 30 12:57:13.450953 systemd-resolved[1318]: Negative trust anchors: home.arpa 10.in-addr.arpa 16.172.in-addr.arpa 17.172.in-addr.arpa 18.172.in-addr.arpa 19.172.in-addr.arpa 20.172.in-addr.arpa 21.172.in-addr.arpa 22.172.in-addr.arpa 23.172.in-addr.arpa 24.172.in-addr.arpa 25.172.in-addr.arpa 26.172.in-addr.arpa 27.172.in-addr.arpa 28.172.in-addr.arpa 29.172.in-addr.arpa 30.172.in-addr.arpa 31.172.in-addr.arpa 170.0.0.192.in-addr.arpa 171.0.0.192.in-addr.arpa 168.192.in-addr.arpa d.f.ip6.arpa ipv4only.arpa resolver.arpa corp home internal intranet lan local private test
Jan 30 12:57:13.451926 systemd[1]: Started systemd-timesyncd.service - Network Time Synchronization.
Jan 30 12:57:13.454252 systemd[1]: Reached target time-set.target - System Time Set.
Jan 30 12:57:13.467310 systemd-resolved[1318]: Using system hostname 'ci-4186.1.0-8-5f4ae1611a'.
Jan 30 12:57:13.472978 systemd[1]: Started systemd-resolved.service - Network Name Resolution.
Jan 30 12:57:13.473816 systemd[1]: Reached target nss-lookup.target - Host and Network Name Lookups.
Jan 30 12:57:13.478834 systemd-networkd[1374]: Enumeration completed
Jan 30 12:57:13.479105 systemd-networkd[1374]: eth0: Configuring with /run/systemd/network/10-8a:a0:9a:59:a1:94.network.
Jan 30 12:57:13.479401 systemd[1]: Started systemd-networkd.service - Network Configuration.
Jan 30 12:57:13.480598 systemd[1]: Reached target network.target - Network.
Jan 30 12:57:13.480840 systemd-networkd[1374]: eth0: Link UP
Jan 30 12:57:13.482211 systemd-networkd[1374]: eth0: Gained carrier
Jan 30 12:57:13.490465 systemd-timesyncd[1365]: Network configuration changed, trying to establish connection.
Jan 30 12:57:13.490523 systemd[1]: Starting systemd-networkd-wait-online.service - Wait for Network to be Configured...
Jan 30 12:57:13.495309 systemd-timesyncd[1365]: Network configuration changed, trying to establish connection.
Jan 30 12:57:13.509948 systemd[1]: Condition check resulted in dev-ttyS0.device - /dev/ttyS0 being skipped.
Jan 30 12:57:13.552189 systemd[1]: Mounting media-configdrive.mount - /media/configdrive...
Jan 30 12:57:13.552804 systemd[1]: proc-xen.mount - /proc/xen was skipped because of an unmet condition check (ConditionVirtualization=xen).
Jan 30 12:57:13.552946 systemd[1]: ignition-delete-config.service - Ignition (delete config) was skipped because no trigger condition checks were met.
Jan 30 12:57:13.556059 systemd[1]: Starting modprobe@dm_mod.service - Load Kernel Module dm_mod...
Jan 30 12:57:13.566670 systemd[1]: Starting modprobe@efi_pstore.service - Load Kernel Module efi_pstore...
Jan 30 12:57:13.571704 systemd[1]: Starting modprobe@loop.service - Load Kernel Module loop...
Jan 30 12:57:13.573475 systemd[1]: systemd-binfmt.service - Set Up Additional Binary Formats was skipped because no trigger condition checks were met.
Jan 30 12:57:13.573538 systemd[1]: update-ca-certificates.service - Update CA bundle at /etc/ssl/certs/ca-certificates.crt was skipped because of an unmet condition check (ConditionPathIsSymbolicLink=!/etc/ssl/certs/ca-certificates.crt).
Jan 30 12:57:13.573559 systemd[1]: xenserver-pv-version.service - Set fake PV driver version for XenServer was skipped because of an unmet condition check (ConditionVirtualization=xen).
Jan 30 12:57:13.574087 systemd[1]: modprobe@dm_mod.service: Deactivated successfully.
Jan 30 12:57:13.576255 systemd[1]: Finished modprobe@dm_mod.service - Load Kernel Module dm_mod.
Jan 30 12:57:13.597657 systemd[1]: modprobe@loop.service: Deactivated successfully.
Jan 30 12:57:13.599312 systemd[1]: Finished modprobe@loop.service - Load Kernel Module loop.
Jan 30 12:57:13.600680 systemd[1]: systemd-repart.service - Repartition Root Disk was skipped because no trigger condition checks were met.
Jan 30 12:57:13.604198 kernel: ISO 9660 Extensions: RRIP_1991A
Jan 30 12:57:13.608601 systemd[1]: Mounted media-configdrive.mount - /media/configdrive.
Jan 30 12:57:13.614651 systemd[1]: modprobe@efi_pstore.service: Deactivated successfully.
Jan 30 12:57:13.615181 systemd[1]: Finished modprobe@efi_pstore.service - Load Kernel Module efi_pstore.
Jan 30 12:57:13.623759 systemd[1]: systemd-pstore.service - Platform Persistent Storage Archival was skipped because of an unmet condition check (ConditionDirectoryNotEmpty=/sys/fs/pstore).
Jan 30 12:57:13.632309 kernel: input: Power Button as /devices/LNXSYSTM:00/LNXPWRBN:00/input/input2
Jan 30 12:57:13.647624 kernel: BTRFS warning: duplicate device /dev/vda3 devid 1 generation 41 scanned by (udev-worker) (1390)
Jan 30 12:57:13.647721 kernel: ACPI: button: Power Button [PWRF]
Jan 30 12:57:13.719191 kernel: piix4_smbus 0000:00:01.3: SMBus Host Controller at 0x700, revision 0
Jan 30 12:57:13.730464 kernel: input: ImExPS/2 Generic Explorer Mouse as /devices/platform/i8042/serio1/input/input3
Jan 30 12:57:13.740988 systemd-networkd[1374]: eth1: Configuring with /run/systemd/network/10-d2:bb:12:1c:8f:d4.network.
Jan 30 12:57:13.741877 systemd-timesyncd[1365]: Network configuration changed, trying to establish connection.
Jan 30 12:57:13.742756 systemd-networkd[1374]: eth1: Link UP
Jan 30 12:57:13.743240 systemd-networkd[1374]: eth1: Gained carrier
Jan 30 12:57:13.750251 systemd-timesyncd[1365]: Network configuration changed, trying to establish connection.
Jan 30 12:57:13.751813 systemd-timesyncd[1365]: Network configuration changed, trying to establish connection.
Jan 30 12:57:13.796284 systemd[1]: Starting systemd-vconsole-setup.service - Virtual Console Setup...
Jan 30 12:57:13.803086 systemd[1]: Found device dev-disk-by\x2dlabel-OEM.device - /dev/disk/by-label/OEM.
Jan 30 12:57:13.812533 systemd[1]: Starting systemd-fsck@dev-disk-by\x2dlabel-OEM.service - File System Check on /dev/disk/by-label/OEM...
Jan 30 12:57:13.837204 kernel: mousedev: PS/2 mouse device common for all mice
Jan 30 12:57:13.848195 kernel: [drm] pci: virtio-vga detected at 0000:00:02.0
Jan 30 12:57:13.856585 systemd[1]: Finished systemd-fsck@dev-disk-by\x2dlabel-OEM.service - File System Check on /dev/disk/by-label/OEM.
Jan 30 12:57:13.858796 kernel: virtio-pci 0000:00:02.0: vgaarb: deactivate vga console
Jan 30 12:57:13.867189 kernel: Console: switching to colour dummy device 80x25
Jan 30 12:57:13.867318 kernel: [drm] features: -virgl +edid -resource_blob -host_visible
Jan 30 12:57:13.867340 kernel: [drm] features: -context_init
Jan 30 12:57:13.871276 kernel: [drm] number of scanouts: 1
Jan 30 12:57:13.871355 kernel: [drm] number of cap sets: 0
Jan 30 12:57:13.876194 kernel: [drm] Initialized virtio_gpu 0.1.0 0 for 0000:00:02.0 on minor 0
Jan 30 12:57:13.884657 kernel: fbcon: virtio_gpudrmfb (fb0) is primary device
Jan 30 12:57:13.884733 kernel: Console: switching to colour frame buffer device 128x48
Jan 30 12:57:13.897192 kernel: virtio-pci 0000:00:02.0: [drm] fb0: virtio_gpudrmfb frame buffer device
Jan 30 12:57:13.915149 systemd[1]: systemd-vconsole-setup.service: Deactivated successfully.
Jan 30 12:57:13.915762 systemd[1]: Stopped systemd-vconsole-setup.service - Virtual Console Setup.
Jan 30 12:57:13.941549 systemd[1]: Starting systemd-vconsole-setup.service - Virtual Console Setup...
Jan 30 12:57:13.947055 systemd[1]: systemd-vconsole-setup.service: Deactivated successfully.
Jan 30 12:57:13.947521 systemd[1]: Stopped systemd-vconsole-setup.service - Virtual Console Setup.
Jan 30 12:57:13.953618 systemd[1]: Starting systemd-vconsole-setup.service - Virtual Console Setup...
Jan 30 12:57:14.049053 systemd[1]: Finished systemd-vconsole-setup.service - Virtual Console Setup.
Jan 30 12:57:14.051260 kernel: EDAC MC: Ver: 3.0.0
Jan 30 12:57:14.082894 systemd[1]: Finished systemd-udev-settle.service - Wait for udev To Complete Device Initialization.
Jan 30 12:57:14.090527 systemd[1]: Starting lvm2-activation-early.service - Activation of LVM2 logical volumes...
Jan 30 12:57:14.111194 lvm[1434]: WARNING: Failed to connect to lvmetad. Falling back to device scanning.
Jan 30 12:57:14.149219 systemd[1]: Finished lvm2-activation-early.service - Activation of LVM2 logical volumes.
Jan 30 12:57:14.149673 systemd[1]: Reached target cryptsetup.target - Local Encrypted Volumes.
Jan 30 12:57:14.149773 systemd[1]: Reached target sysinit.target - System Initialization.
Jan 30 12:57:14.149950 systemd[1]: Started motdgen.path - Watch for update engine configuration changes.
Jan 30 12:57:14.150066 systemd[1]: Started user-cloudinit@var-lib-flatcar\x2dinstall-user_data.path - Watch for a cloud-config at /var/lib/flatcar-install/user_data.
Jan 30 12:57:14.150422 systemd[1]: Started logrotate.timer - Daily rotation of log files.
Jan 30 12:57:14.150641 systemd[1]: Started mdadm.timer - Weekly check for MD array's redundancy information..
Jan 30 12:57:14.150728 systemd[1]: Started systemd-tmpfiles-clean.timer - Daily Cleanup of Temporary Directories.
Jan 30 12:57:14.150784 systemd[1]: update-engine-stub.timer - Update Engine Stub Timer was skipped because of an unmet condition check (ConditionPathExists=/usr/.noupdate).
Jan 30 12:57:14.150826 systemd[1]: Reached target paths.target - Path Units.
Jan 30 12:57:14.150878 systemd[1]: Reached target timers.target - Timer Units.
Jan 30 12:57:14.152806 systemd[1]: Listening on dbus.socket - D-Bus System Message Bus Socket.
Jan 30 12:57:14.154935 systemd[1]: Starting docker.socket - Docker Socket for the API...
Jan 30 12:57:14.161687 systemd[1]: Listening on sshd.socket - OpenSSH Server Socket.
Jan 30 12:57:14.164919 systemd[1]: Starting lvm2-activation.service - Activation of LVM2 logical volumes...
Jan 30 12:57:14.166967 systemd[1]: Listening on docker.socket - Docker Socket for the API.
Jan 30 12:57:14.169174 systemd[1]: Reached target sockets.target - Socket Units.
Jan 30 12:57:14.169708 systemd[1]: Reached target basic.target - Basic System.
Jan 30 12:57:14.170250 systemd[1]: addon-config@oem.service - Configure Addon /oem was skipped because no trigger condition checks were met.
Jan 30 12:57:14.170286 systemd[1]: addon-run@oem.service - Run Addon /oem was skipped because no trigger condition checks were met.
Jan 30 12:57:14.174404 systemd[1]: Starting containerd.service - containerd container runtime...
Jan 30 12:57:14.178621 lvm[1438]: WARNING: Failed to connect to lvmetad. Falling back to device scanning.
Jan 30 12:57:14.185593 systemd[1]: Starting coreos-metadata.service - Flatcar Metadata Agent...
Jan 30 12:57:14.191551 systemd[1]: Starting dbus.service - D-Bus System Message Bus...
Jan 30 12:57:14.203380 systemd[1]: Starting enable-oem-cloudinit.service - Enable cloudinit...
Jan 30 12:57:14.206449 systemd[1]: Starting extend-filesystems.service - Extend Filesystems...
Jan 30 12:57:14.208847 systemd[1]: flatcar-setup-environment.service - Modifies /etc/environment for CoreOS was skipped because of an unmet condition check (ConditionPathExists=/oem/bin/flatcar-setup-environment).
Jan 30 12:57:14.211655 systemd[1]: Starting motdgen.service - Generate /run/flatcar/motd...
Jan 30 12:57:14.223431 systemd[1]: Starting ssh-key-proc-cmdline.service - Install an ssh key from /proc/cmdline...
Jan 30 12:57:14.229379 systemd[1]: Starting sshd-keygen.service - Generate sshd host keys...
Jan 30 12:57:14.240445 systemd[1]: Starting systemd-logind.service - User Login Management...
Jan 30 12:57:14.242630 systemd[1]: tcsd.service - TCG Core Services Daemon was skipped because of an unmet condition check (ConditionPathExists=/dev/tpm0).
Jan 30 12:57:14.244130 systemd[1]: cgroup compatibility translation between legacy and unified hierarchy settings activated. See cgroup-compat debug messages for details.
Jan 30 12:57:14.247734 jq[1442]: false
Jan 30 12:57:14.251498 systemd[1]: Starting update-engine.service - Update Engine...
Jan 30 12:57:14.262344 systemd[1]: Starting update-ssh-keys-after-ignition.service - Run update-ssh-keys once after Ignition...
Jan 30 12:57:14.264640 systemd[1]: Finished lvm2-activation.service - Activation of LVM2 logical volumes.
Jan 30 12:57:14.268663 systemd[1]: enable-oem-cloudinit.service: Skipped due to 'exec-condition'.
Jan 30 12:57:14.268853 systemd[1]: Condition check resulted in enable-oem-cloudinit.service - Enable cloudinit being skipped.
Jan 30 12:57:14.272801 systemd[1]: ssh-key-proc-cmdline.service: Deactivated successfully.
Jan 30 12:57:14.273217 systemd[1]: Finished ssh-key-proc-cmdline.service - Install an ssh key from /proc/cmdline.
Jan 30 12:57:14.315016 extend-filesystems[1445]: Found loop4
Jan 30 12:57:14.326578 extend-filesystems[1445]: Found loop5
Jan 30 12:57:14.326578 extend-filesystems[1445]: Found loop6
Jan 30 12:57:14.326578 extend-filesystems[1445]: Found loop7
Jan 30 12:57:14.326578 extend-filesystems[1445]: Found vda
Jan 30 12:57:14.326578 extend-filesystems[1445]: Found vda1
Jan 30 12:57:14.326578 extend-filesystems[1445]: Found vda2
Jan 30 12:57:14.326578 extend-filesystems[1445]: Found vda3
Jan 30 12:57:14.326578 extend-filesystems[1445]: Found usr
Jan 30 12:57:14.326578 extend-filesystems[1445]: Found vda4
Jan 30 12:57:14.326578 extend-filesystems[1445]: Found vda6
Jan 30 12:57:14.326578 extend-filesystems[1445]: Found vda7
Jan 30 12:57:14.326578 extend-filesystems[1445]: Found vda9
Jan 30 12:57:14.326578 extend-filesystems[1445]: Checking size of /dev/vda9
Jan 30 12:57:14.407662 jq[1451]: true
Jan 30 12:57:14.407843 update_engine[1450]: I20250130 12:57:14.374756 1450 main.cc:92] Flatcar Update Engine starting
Jan 30 12:57:14.361637 systemd[1]: motdgen.service: Deactivated successfully.
Jan 30 12:57:14.364133 dbus-daemon[1441]: [system] SELinux support is enabled
Jan 30 12:57:14.425199 coreos-metadata[1440]: Jan 30 12:57:14.416 INFO Fetching http://169.254.169.254/metadata/v1.json: Attempt #1
Jan 30 12:57:14.425561 extend-filesystems[1445]: Resized partition /dev/vda9
Jan 30 12:57:14.361866 systemd[1]: Finished motdgen.service - Generate /run/flatcar/motd.
Jan 30 12:57:14.436334 update_engine[1450]: I20250130 12:57:14.420629 1450 update_check_scheduler.cc:74] Next update check in 3m26s
Jan 30 12:57:14.436677 extend-filesystems[1475]: resize2fs 1.47.1 (20-May-2024)
Jan 30 12:57:14.451934 kernel: EXT4-fs (vda9): resizing filesystem from 553472 to 15121403 blocks
Jan 30 12:57:14.365467 systemd[1]: Started dbus.service - D-Bus System Message Bus.
Jan 30 12:57:14.374962 systemd[1]: system-cloudinit@usr-share-oem-cloud\x2dconfig.yml.service - Load cloud-config from /usr/share/oem/cloud-config.yml was skipped because of an unmet condition check (ConditionFileNotEmpty=/usr/share/oem/cloud-config.yml).
Jan 30 12:57:14.452805 coreos-metadata[1440]: Jan 30 12:57:14.452 INFO Fetch successful
Jan 30 12:57:14.375005 systemd[1]: Reached target system-config.target - Load system-provided cloud configs.
Jan 30 12:57:14.385700 systemd[1]: user-cloudinit-proc-cmdline.service - Load cloud-config from url defined in /proc/cmdline was skipped because of an unmet condition check (ConditionKernelCommandLine=cloud-config-url).
Jan 30 12:57:14.385803 systemd[1]: user-configdrive.service - Load cloud-config from /media/configdrive was skipped because of an unmet condition check (ConditionKernelCommandLine=!flatcar.oem.id=digitalocean).
Jan 30 12:57:14.385832 systemd[1]: Reached target user-config.target - Load user-provided cloud configs.
Jan 30 12:57:14.390991 (ntainerd)[1470]: containerd.service: Referenced but unset environment variable evaluates to an empty string: TORCX_IMAGEDIR, TORCX_UNPACKDIR
Jan 30 12:57:14.402193 systemd[1]: Started update-engine.service - Update Engine.
Jan 30 12:57:14.422487 systemd[1]: Started locksmithd.service - Cluster reboot manager.
Jan 30 12:57:14.469369 jq[1469]: true
Jan 30 12:57:14.474190 kernel: BTRFS warning: duplicate device /dev/vda3 devid 1 generation 41 scanned by (udev-worker) (1382)
Jan 30 12:57:14.549050 systemd[1]: Finished coreos-metadata.service - Flatcar Metadata Agent.
Jan 30 12:57:14.558362 systemd[1]: packet-phone-home.service - Report Success to Packet was skipped because no trigger condition checks were met.
Jan 30 12:57:14.615778 systemd-logind[1449]: New seat seat0.
Jan 30 12:57:14.621991 systemd-logind[1449]: Watching system buttons on /dev/input/event1 (Power Button)
Jan 30 12:57:14.622019 systemd-logind[1449]: Watching system buttons on /dev/input/event0 (AT Translated Set 2 keyboard)
Jan 30 12:57:14.625663 systemd[1]: Started systemd-logind.service - User Login Management.
Jan 30 12:57:14.635181 kernel: EXT4-fs (vda9): resized filesystem to 15121403
Jan 30 12:57:14.670203 extend-filesystems[1475]: Filesystem at /dev/vda9 is mounted on /; on-line resizing required
Jan 30 12:57:14.670203 extend-filesystems[1475]: old_desc_blocks = 1, new_desc_blocks = 8
Jan 30 12:57:14.670203 extend-filesystems[1475]: The filesystem on /dev/vda9 is now 15121403 (4k) blocks long.
Jan 30 12:57:14.686611 extend-filesystems[1445]: Resized filesystem in /dev/vda9
Jan 30 12:57:14.686611 extend-filesystems[1445]: Found vdb
Jan 30 12:57:14.670790 systemd[1]: extend-filesystems.service: Deactivated successfully.
Jan 30 12:57:14.672747 systemd[1]: Finished extend-filesystems.service - Extend Filesystems.
Jan 30 12:57:14.749192 bash[1499]: Updated "/home/core/.ssh/authorized_keys"
Jan 30 12:57:14.751002 systemd[1]: Finished update-ssh-keys-after-ignition.service - Run update-ssh-keys once after Ignition.
Jan 30 12:57:14.764900 systemd[1]: Starting sshkeys.service...
Jan 30 12:57:14.780247 locksmithd[1476]: locksmithd starting currentOperation="UPDATE_STATUS_IDLE" strategy="reboot"
Jan 30 12:57:14.823755 systemd[1]: Created slice system-coreos\x2dmetadata\x2dsshkeys.slice - Slice /system/coreos-metadata-sshkeys.
Jan 30 12:57:14.839770 systemd[1]: Starting coreos-metadata-sshkeys@core.service - Flatcar Metadata Agent (SSH Keys)...
Jan 30 12:57:14.881007 coreos-metadata[1513]: Jan 30 12:57:14.880 INFO Fetching http://169.254.169.254/metadata/v1.json: Attempt #1
Jan 30 12:57:14.897491 systemd-networkd[1374]: eth0: Gained IPv6LL
Jan 30 12:57:14.898126 systemd-timesyncd[1365]: Network configuration changed, trying to establish connection.
Jan 30 12:57:14.904806 coreos-metadata[1513]: Jan 30 12:57:14.901 INFO Fetch successful
Jan 30 12:57:14.908758 systemd[1]: Finished systemd-networkd-wait-online.service - Wait for Network to be Configured.
Jan 30 12:57:14.911753 systemd[1]: Reached target network-online.target - Network is Online.
Jan 30 12:57:14.921969 unknown[1513]: wrote ssh authorized keys file for user: core
Jan 30 12:57:14.936767 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent...
Jan 30 12:57:14.948275 systemd[1]: Starting nvidia.service - NVIDIA Configure Service...
Jan 30 12:57:15.019275 update-ssh-keys[1519]: Updated "/home/core/.ssh/authorized_keys"
Jan 30 12:57:15.011967 systemd[1]: Finished coreos-metadata-sshkeys@core.service - Flatcar Metadata Agent (SSH Keys).
Jan 30 12:57:15.015515 systemd[1]: Finished sshkeys.service.
Jan 30 12:57:15.047356 systemd[1]: Finished nvidia.service - NVIDIA Configure Service.
Jan 30 12:57:15.081276 sshd_keygen[1466]: ssh-keygen: generating new host keys: RSA ECDSA ED25519
Jan 30 12:57:15.085595 containerd[1470]: time="2025-01-30T12:57:15.085470101Z" level=info msg="starting containerd" revision=9b2ad7760328148397346d10c7b2004271249db4 version=v1.7.23
Jan 30 12:57:15.089925 systemd-networkd[1374]: eth1: Gained IPv6LL
Jan 30 12:57:15.090348 systemd-timesyncd[1365]: Network configuration changed, trying to establish connection.
Jan 30 12:57:15.121777 systemd[1]: Finished sshd-keygen.service - Generate sshd host keys.
Jan 30 12:57:15.132323 systemd[1]: Starting issuegen.service - Generate /run/issue...
Jan 30 12:57:15.137496 containerd[1470]: time="2025-01-30T12:57:15.134455330Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.aufs\"..." type=io.containerd.snapshotter.v1
Jan 30 12:57:15.137551 containerd[1470]: time="2025-01-30T12:57:15.137483065Z" level=info msg="skip loading plugin \"io.containerd.snapshotter.v1.aufs\"..." error="aufs is not supported (modprobe aufs failed: exit status 1 \"modprobe: FATAL: Module aufs not found in directory /lib/modules/6.6.74-flatcar\\n\"): skip plugin" type=io.containerd.snapshotter.v1
Jan 30 12:57:15.137551 containerd[1470]: time="2025-01-30T12:57:15.137526602Z" level=info msg="loading plugin \"io.containerd.event.v1.exchange\"..." type=io.containerd.event.v1
Jan 30 12:57:15.137551 containerd[1470]: time="2025-01-30T12:57:15.137545523Z" level=info msg="loading plugin \"io.containerd.internal.v1.opt\"..." type=io.containerd.internal.v1
Jan 30 12:57:15.138892 containerd[1470]: time="2025-01-30T12:57:15.137739391Z" level=info msg="loading plugin \"io.containerd.warning.v1.deprecations\"..." type=io.containerd.warning.v1
Jan 30 12:57:15.138892 containerd[1470]: time="2025-01-30T12:57:15.137768152Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.blockfile\"..." type=io.containerd.snapshotter.v1
Jan 30 12:57:15.138892 containerd[1470]: time="2025-01-30T12:57:15.137841043Z" level=info msg="skip loading plugin \"io.containerd.snapshotter.v1.blockfile\"..." error="no scratch file generator: skip plugin" type=io.containerd.snapshotter.v1
Jan 30 12:57:15.138892 containerd[1470]: time="2025-01-30T12:57:15.137854592Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.btrfs\"..." type=io.containerd.snapshotter.v1
Jan 30 12:57:15.138892 containerd[1470]: time="2025-01-30T12:57:15.138034144Z" level=info msg="skip loading plugin \"io.containerd.snapshotter.v1.btrfs\"..." error="path /var/lib/containerd/io.containerd.snapshotter.v1.btrfs (ext4) must be a btrfs filesystem to be used with the btrfs snapshotter: skip plugin" type=io.containerd.snapshotter.v1
Jan 30 12:57:15.138892 containerd[1470]: time="2025-01-30T12:57:15.138048953Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.devmapper\"..." type=io.containerd.snapshotter.v1
Jan 30 12:57:15.138892 containerd[1470]: time="2025-01-30T12:57:15.138062051Z" level=info msg="skip loading plugin \"io.containerd.snapshotter.v1.devmapper\"..." error="devmapper not configured: skip plugin" type=io.containerd.snapshotter.v1
Jan 30 12:57:15.138892 containerd[1470]: time="2025-01-30T12:57:15.138071675Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.native\"..." type=io.containerd.snapshotter.v1
Jan 30 12:57:15.138892 containerd[1470]: time="2025-01-30T12:57:15.138149309Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.overlayfs\"..." type=io.containerd.snapshotter.v1
Jan 30 12:57:15.138892 containerd[1470]: time="2025-01-30T12:57:15.138420292Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.zfs\"..." type=io.containerd.snapshotter.v1
Jan 30 12:57:15.138892 containerd[1470]: time="2025-01-30T12:57:15.138556052Z" level=info msg="skip loading plugin \"io.containerd.snapshotter.v1.zfs\"..." error="path /var/lib/containerd/io.containerd.snapshotter.v1.zfs must be a zfs filesystem to be used with the zfs snapshotter: skip plugin" type=io.containerd.snapshotter.v1
Jan 30 12:57:15.139289 containerd[1470]: time="2025-01-30T12:57:15.138576691Z" level=info msg="loading plugin \"io.containerd.content.v1.content\"..." type=io.containerd.content.v1
Jan 30 12:57:15.139289 containerd[1470]: time="2025-01-30T12:57:15.138653681Z" level=info msg="loading plugin \"io.containerd.metadata.v1.bolt\"..." type=io.containerd.metadata.v1
Jan 30 12:57:15.139289 containerd[1470]: time="2025-01-30T12:57:15.138696830Z" level=info msg="metadata content store policy set" policy=shared
Jan 30 12:57:15.151988 containerd[1470]: time="2025-01-30T12:57:15.151435340Z" level=info msg="loading plugin \"io.containerd.gc.v1.scheduler\"..." type=io.containerd.gc.v1
Jan 30 12:57:15.151988 containerd[1470]: time="2025-01-30T12:57:15.151527426Z" level=info msg="loading plugin \"io.containerd.differ.v1.walking\"..." type=io.containerd.differ.v1
Jan 30 12:57:15.151988 containerd[1470]: time="2025-01-30T12:57:15.151545394Z" level=info msg="loading plugin \"io.containerd.lease.v1.manager\"..." type=io.containerd.lease.v1
Jan 30 12:57:15.151988 containerd[1470]: time="2025-01-30T12:57:15.151563747Z" level=info msg="loading plugin \"io.containerd.streaming.v1.manager\"..." type=io.containerd.streaming.v1
Jan 30 12:57:15.151988 containerd[1470]: time="2025-01-30T12:57:15.151653406Z" level=info msg="loading plugin \"io.containerd.runtime.v1.linux\"..." type=io.containerd.runtime.v1
Jan 30 12:57:15.151988 containerd[1470]: time="2025-01-30T12:57:15.151894617Z" level=info msg="loading plugin \"io.containerd.monitor.v1.cgroups\"..." type=io.containerd.monitor.v1
Jan 30 12:57:15.152663 containerd[1470]: time="2025-01-30T12:57:15.152636939Z" level=info msg="loading plugin \"io.containerd.runtime.v2.task\"..." type=io.containerd.runtime.v2
Jan 30 12:57:15.153193 containerd[1470]: time="2025-01-30T12:57:15.152923039Z" level=info msg="loading plugin \"io.containerd.runtime.v2.shim\"..." type=io.containerd.runtime.v2
Jan 30 12:57:15.153193 containerd[1470]: time="2025-01-30T12:57:15.152948372Z" level=info msg="loading plugin \"io.containerd.sandbox.store.v1.local\"..." type=io.containerd.sandbox.store.v1
Jan 30 12:57:15.153193 containerd[1470]: time="2025-01-30T12:57:15.152965514Z" level=info msg="loading plugin \"io.containerd.sandbox.controller.v1.local\"..." type=io.containerd.sandbox.controller.v1
Jan 30 12:57:15.153193 containerd[1470]: time="2025-01-30T12:57:15.152979855Z" level=info msg="loading plugin \"io.containerd.service.v1.containers-service\"..." type=io.containerd.service.v1
Jan 30 12:57:15.153193 containerd[1470]: time="2025-01-30T12:57:15.152993740Z" level=info msg="loading plugin \"io.containerd.service.v1.content-service\"..." type=io.containerd.service.v1
Jan 30 12:57:15.153193 containerd[1470]: time="2025-01-30T12:57:15.153006065Z" level=info msg="loading plugin \"io.containerd.service.v1.diff-service\"..." type=io.containerd.service.v1
Jan 30 12:57:15.153193 containerd[1470]: time="2025-01-30T12:57:15.153022355Z" level=info msg="loading plugin \"io.containerd.service.v1.images-service\"..." type=io.containerd.service.v1
Jan 30 12:57:15.153193 containerd[1470]: time="2025-01-30T12:57:15.153040404Z" level=info msg="loading plugin \"io.containerd.service.v1.introspection-service\"..." type=io.containerd.service.v1
Jan 30 12:57:15.153193 containerd[1470]: time="2025-01-30T12:57:15.153063091Z" level=info msg="loading plugin \"io.containerd.service.v1.namespaces-service\"..." type=io.containerd.service.v1
Jan 30 12:57:15.153193 containerd[1470]: time="2025-01-30T12:57:15.153076768Z" level=info msg="loading plugin \"io.containerd.service.v1.snapshots-service\"..." type=io.containerd.service.v1
Jan 30 12:57:15.153193 containerd[1470]: time="2025-01-30T12:57:15.153089235Z" level=info msg="loading plugin \"io.containerd.service.v1.tasks-service\"..." type=io.containerd.service.v1
Jan 30 12:57:15.153193 containerd[1470]: time="2025-01-30T12:57:15.153113797Z" level=info msg="loading plugin \"io.containerd.grpc.v1.containers\"..." type=io.containerd.grpc.v1
Jan 30 12:57:15.153193 containerd[1470]: time="2025-01-30T12:57:15.153128196Z" level=info msg="loading plugin \"io.containerd.grpc.v1.content\"..." type=io.containerd.grpc.v1
Jan 30 12:57:15.153193 containerd[1470]: time="2025-01-30T12:57:15.153142548Z" level=info msg="loading plugin \"io.containerd.grpc.v1.diff\"..." type=io.containerd.grpc.v1
Jan 30 12:57:15.153926 containerd[1470]: time="2025-01-30T12:57:15.153610535Z" level=info msg="loading plugin \"io.containerd.grpc.v1.events\"..." type=io.containerd.grpc.v1
Jan 30 12:57:15.153926 containerd[1470]: time="2025-01-30T12:57:15.153637979Z" level=info msg="loading plugin \"io.containerd.grpc.v1.images\"..." type=io.containerd.grpc.v1
Jan 30 12:57:15.153926 containerd[1470]: time="2025-01-30T12:57:15.153652588Z" level=info msg="loading plugin \"io.containerd.grpc.v1.introspection\"..." type=io.containerd.grpc.v1
Jan 30 12:57:15.153926 containerd[1470]: time="2025-01-30T12:57:15.153666129Z" level=info msg="loading plugin \"io.containerd.grpc.v1.leases\"..." type=io.containerd.grpc.v1
Jan 30 12:57:15.153926 containerd[1470]: time="2025-01-30T12:57:15.153679433Z" level=info msg="loading plugin \"io.containerd.grpc.v1.namespaces\"..." type=io.containerd.grpc.v1
Jan 30 12:57:15.153926 containerd[1470]: time="2025-01-30T12:57:15.153711184Z" level=info msg="loading plugin \"io.containerd.grpc.v1.sandbox-controllers\"..." type=io.containerd.grpc.v1
Jan 30 12:57:15.153926 containerd[1470]: time="2025-01-30T12:57:15.153727107Z" level=info msg="loading plugin \"io.containerd.grpc.v1.sandboxes\"..." type=io.containerd.grpc.v1
Jan 30 12:57:15.153926 containerd[1470]: time="2025-01-30T12:57:15.153740199Z" level=info msg="loading plugin \"io.containerd.grpc.v1.snapshots\"..." type=io.containerd.grpc.v1
Jan 30 12:57:15.153926 containerd[1470]: time="2025-01-30T12:57:15.153753070Z" level=info msg="loading plugin \"io.containerd.grpc.v1.streaming\"..." type=io.containerd.grpc.v1
Jan 30 12:57:15.153926 containerd[1470]: time="2025-01-30T12:57:15.153768702Z" level=info msg="loading plugin \"io.containerd.grpc.v1.tasks\"..." type=io.containerd.grpc.v1
Jan 30 12:57:15.153926 containerd[1470]: time="2025-01-30T12:57:15.153791913Z" level=info msg="loading plugin \"io.containerd.transfer.v1.local\"..." type=io.containerd.transfer.v1
Jan 30 12:57:15.153926 containerd[1470]: time="2025-01-30T12:57:15.153816609Z" level=info msg="loading plugin \"io.containerd.grpc.v1.transfer\"..." type=io.containerd.grpc.v1
Jan 30 12:57:15.153926 containerd[1470]: time="2025-01-30T12:57:15.153832476Z" level=info msg="loading plugin \"io.containerd.grpc.v1.version\"..." type=io.containerd.grpc.v1
Jan 30 12:57:15.153926 containerd[1470]: time="2025-01-30T12:57:15.153843369Z" level=info msg="loading plugin \"io.containerd.internal.v1.restart\"..." type=io.containerd.internal.v1
Jan 30 12:57:15.155477 containerd[1470]: time="2025-01-30T12:57:15.154328753Z" level=info msg="loading plugin \"io.containerd.tracing.processor.v1.otlp\"..." type=io.containerd.tracing.processor.v1
Jan 30 12:57:15.155477 containerd[1470]: time="2025-01-30T12:57:15.154433703Z" level=info msg="skip loading plugin \"io.containerd.tracing.processor.v1.otlp\"..." error="skip plugin: tracing endpoint not configured" type=io.containerd.tracing.processor.v1
Jan 30 12:57:15.155477 containerd[1470]: time="2025-01-30T12:57:15.154448149Z" level=info msg="loading plugin \"io.containerd.internal.v1.tracing\"..." type=io.containerd.internal.v1
Jan 30 12:57:15.155477 containerd[1470]: time="2025-01-30T12:57:15.154460626Z" level=info msg="skip loading plugin \"io.containerd.internal.v1.tracing\"..." error="skip plugin: tracing endpoint not configured" type=io.containerd.internal.v1
Jan 30 12:57:15.155477 containerd[1470]: time="2025-01-30T12:57:15.154470906Z" level=info msg="loading plugin \"io.containerd.grpc.v1.healthcheck\"..." type=io.containerd.grpc.v1
Jan 30 12:57:15.155477 containerd[1470]: time="2025-01-30T12:57:15.154485415Z" level=info msg="loading plugin \"io.containerd.nri.v1.nri\"..." type=io.containerd.nri.v1
Jan 30 12:57:15.155477 containerd[1470]: time="2025-01-30T12:57:15.154496282Z" level=info msg="NRI interface is disabled by configuration."
Jan 30 12:57:15.155477 containerd[1470]: time="2025-01-30T12:57:15.154506895Z" level=info msg="loading plugin \"io.containerd.grpc.v1.cri\"..." type=io.containerd.grpc.v1
Jan 30 12:57:15.155865 containerd[1470]: time="2025-01-30T12:57:15.154830762Z" level=info msg="Start cri plugin with config {PluginConfig:{ContainerdConfig:{Snapshotter:overlayfs DefaultRuntimeName:runc DefaultRuntime:{Type: Path: Engine: PodAnnotations:[] ContainerAnnotations:[] Root: Options:map[] PrivilegedWithoutHostDevices:false PrivilegedWithoutHostDevicesAllDevicesAllowed:false BaseRuntimeSpec: NetworkPluginConfDir: NetworkPluginMaxConfNum:0 Snapshotter: SandboxMode:} UntrustedWorkloadRuntime:{Type: Path: Engine: PodAnnotations:[] ContainerAnnotations:[] Root: Options:map[] PrivilegedWithoutHostDevices:false PrivilegedWithoutHostDevicesAllDevicesAllowed:false BaseRuntimeSpec: NetworkPluginConfDir: NetworkPluginMaxConfNum:0 Snapshotter: SandboxMode:} Runtimes:map[runc:{Type:io.containerd.runc.v2 Path: Engine: PodAnnotations:[] ContainerAnnotations:[] Root: Options:map[SystemdCgroup:true] PrivilegedWithoutHostDevices:false PrivilegedWithoutHostDevicesAllDevicesAllowed:false BaseRuntimeSpec: NetworkPluginConfDir: NetworkPluginMaxConfNum:0
Snapshotter: SandboxMode:podsandbox}] NoPivot:false DisableSnapshotAnnotations:true DiscardUnpackedLayers:false IgnoreBlockIONotEnabledErrors:false IgnoreRdtNotEnabledErrors:false} CniConfig:{NetworkPluginBinDir:/opt/cni/bin NetworkPluginConfDir:/etc/cni/net.d NetworkPluginMaxConfNum:1 NetworkPluginSetupSerially:false NetworkPluginConfTemplate: IPPreference:} Registry:{ConfigPath: Mirrors:map[] Configs:map[] Auths:map[] Headers:map[]} ImageDecryption:{KeyModel:node} DisableTCPService:true StreamServerAddress:127.0.0.1 StreamServerPort:0 StreamIdleTimeout:4h0m0s EnableSelinux:true SelinuxCategoryRange:1024 SandboxImage:registry.k8s.io/pause:3.8 StatsCollectPeriod:10 SystemdCgroup:false EnableTLSStreaming:false X509KeyPairStreaming:{TLSCertFile: TLSKeyFile:} MaxContainerLogLineSize:16384 DisableCgroup:false DisableApparmor:false RestrictOOMScoreAdj:false MaxConcurrentDownloads:3 DisableProcMount:false UnsetSeccompProfile: TolerateMissingHugetlbController:true DisableHugetlbController:true DeviceOwnershipFromSecurityContext:false IgnoreImageDefinedVolumes:false NetNSMountsUnderStateDir:false EnableUnprivilegedPorts:false EnableUnprivilegedICMP:false EnableCDI:false CDISpecDirs:[/etc/cdi /var/run/cdi] ImagePullProgressTimeout:5m0s DrainExecSyncIOTimeout:0s ImagePullWithSyncFs:false IgnoreDeprecationWarnings:[]} ContainerdRootDir:/var/lib/containerd ContainerdEndpoint:/run/containerd/containerd.sock RootDir:/var/lib/containerd/io.containerd.grpc.v1.cri StateDir:/run/containerd/io.containerd.grpc.v1.cri}" Jan 30 12:57:15.155865 containerd[1470]: time="2025-01-30T12:57:15.154878535Z" level=info msg="Connect containerd service" Jan 30 12:57:15.155865 containerd[1470]: time="2025-01-30T12:57:15.154914038Z" level=info msg="using legacy CRI server" Jan 30 12:57:15.155865 containerd[1470]: time="2025-01-30T12:57:15.154921211Z" level=info msg="using experimental NRI integration - disable nri plugin to prevent this" Jan 30 12:57:15.155865 containerd[1470]: 
time="2025-01-30T12:57:15.155058518Z" level=info msg="Get image filesystem path \"/var/lib/containerd/io.containerd.snapshotter.v1.overlayfs\"" Jan 30 12:57:15.160726 containerd[1470]: time="2025-01-30T12:57:15.160419147Z" level=error msg="failed to load cni during init, please check CRI plugin status before setting up network for pods" error="cni config load failed: no network config found in /etc/cni/net.d: cni plugin not initialized: failed to load cni config" Jan 30 12:57:15.160928 containerd[1470]: time="2025-01-30T12:57:15.160896226Z" level=info msg="Start subscribing containerd event" Jan 30 12:57:15.161332 containerd[1470]: time="2025-01-30T12:57:15.161009788Z" level=info msg="Start recovering state" Jan 30 12:57:15.161332 containerd[1470]: time="2025-01-30T12:57:15.161101264Z" level=info msg="Start event monitor" Jan 30 12:57:15.161332 containerd[1470]: time="2025-01-30T12:57:15.161121385Z" level=info msg="Start snapshots syncer" Jan 30 12:57:15.161332 containerd[1470]: time="2025-01-30T12:57:15.161134221Z" level=info msg="Start cni network conf syncer for default" Jan 30 12:57:15.161332 containerd[1470]: time="2025-01-30T12:57:15.161147416Z" level=info msg="Start streaming server" Jan 30 12:57:15.161892 containerd[1470]: time="2025-01-30T12:57:15.161869412Z" level=info msg=serving... address=/run/containerd/containerd.sock.ttrpc Jan 30 12:57:15.162427 containerd[1470]: time="2025-01-30T12:57:15.162409471Z" level=info msg=serving... address=/run/containerd/containerd.sock Jan 30 12:57:15.163417 containerd[1470]: time="2025-01-30T12:57:15.162599175Z" level=info msg="containerd successfully booted in 0.080509s" Jan 30 12:57:15.162748 systemd[1]: Started containerd.service - containerd container runtime. Jan 30 12:57:15.174730 systemd[1]: issuegen.service: Deactivated successfully. Jan 30 12:57:15.175096 systemd[1]: Finished issuegen.service - Generate /run/issue. 
Jan 30 12:57:15.185717 systemd[1]: Starting systemd-user-sessions.service - Permit User Sessions... Jan 30 12:57:15.203679 systemd[1]: Created slice system-sshd.slice - Slice /system/sshd. Jan 30 12:57:15.214851 systemd[1]: Started sshd@0-159.223.192.231:22-139.178.68.195:34378.service - OpenSSH per-connection server daemon (139.178.68.195:34378). Jan 30 12:57:15.221299 systemd[1]: Finished systemd-user-sessions.service - Permit User Sessions. Jan 30 12:57:15.237595 systemd[1]: Started getty@tty1.service - Getty on tty1. Jan 30 12:57:15.242617 systemd[1]: Started serial-getty@ttyS0.service - Serial Getty on ttyS0. Jan 30 12:57:15.244932 systemd[1]: Reached target getty.target - Login Prompts. Jan 30 12:57:15.326758 sshd[1549]: Accepted publickey for core from 139.178.68.195 port 34378 ssh2: RSA SHA256:SFnHtt+NvFpqnNn2/BMXZVgPxdeWFU7F3mq/iAucyCA Jan 30 12:57:15.328418 sshd-session[1549]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Jan 30 12:57:15.339717 systemd[1]: Created slice user-500.slice - User Slice of UID 500. Jan 30 12:57:15.348665 systemd[1]: Starting user-runtime-dir@500.service - User Runtime Directory /run/user/500... Jan 30 12:57:15.359045 systemd-logind[1449]: New session 1 of user core. Jan 30 12:57:15.389865 systemd[1]: Finished user-runtime-dir@500.service - User Runtime Directory /run/user/500. Jan 30 12:57:15.401850 systemd[1]: Starting user@500.service - User Manager for UID 500... Jan 30 12:57:15.422610 (systemd)[1556]: pam_unix(systemd-user:session): session opened for user core(uid=500) by (uid=0) Jan 30 12:57:15.572661 systemd[1556]: Queued start job for default target default.target. Jan 30 12:57:15.579031 systemd[1556]: Created slice app.slice - User Application Slice. Jan 30 12:57:15.579088 systemd[1556]: Reached target paths.target - Paths. Jan 30 12:57:15.579111 systemd[1556]: Reached target timers.target - Timers. 
Jan 30 12:57:15.583491 systemd[1556]: Starting dbus.socket - D-Bus User Message Bus Socket... Jan 30 12:57:15.611901 systemd[1556]: Listening on dbus.socket - D-Bus User Message Bus Socket. Jan 30 12:57:15.612095 systemd[1556]: Reached target sockets.target - Sockets. Jan 30 12:57:15.612119 systemd[1556]: Reached target basic.target - Basic System. Jan 30 12:57:15.612218 systemd[1556]: Reached target default.target - Main User Target. Jan 30 12:57:15.612264 systemd[1556]: Startup finished in 177ms. Jan 30 12:57:15.612459 systemd[1]: Started user@500.service - User Manager for UID 500. Jan 30 12:57:15.625614 systemd[1]: Started session-1.scope - Session 1 of User core. Jan 30 12:57:15.707795 systemd[1]: Started sshd@1-159.223.192.231:22-139.178.68.195:34384.service - OpenSSH per-connection server daemon (139.178.68.195:34384). Jan 30 12:57:15.790225 sshd[1567]: Accepted publickey for core from 139.178.68.195 port 34384 ssh2: RSA SHA256:SFnHtt+NvFpqnNn2/BMXZVgPxdeWFU7F3mq/iAucyCA Jan 30 12:57:15.792355 sshd-session[1567]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Jan 30 12:57:15.799759 systemd-logind[1449]: New session 2 of user core. Jan 30 12:57:15.804758 systemd[1]: Started session-2.scope - Session 2 of User core. Jan 30 12:57:15.879111 sshd[1569]: Connection closed by 139.178.68.195 port 34384 Jan 30 12:57:15.877989 sshd-session[1567]: pam_unix(sshd:session): session closed for user core Jan 30 12:57:15.891331 systemd[1]: sshd@1-159.223.192.231:22-139.178.68.195:34384.service: Deactivated successfully. Jan 30 12:57:15.894607 systemd[1]: session-2.scope: Deactivated successfully. Jan 30 12:57:15.899068 systemd-logind[1449]: Session 2 logged out. Waiting for processes to exit. Jan 30 12:57:15.906572 systemd[1]: Started sshd@2-159.223.192.231:22-139.178.68.195:34392.service - OpenSSH per-connection server daemon (139.178.68.195:34392). Jan 30 12:57:15.910443 systemd-logind[1449]: Removed session 2. 
Jan 30 12:57:15.973992 sshd[1574]: Accepted publickey for core from 139.178.68.195 port 34392 ssh2: RSA SHA256:SFnHtt+NvFpqnNn2/BMXZVgPxdeWFU7F3mq/iAucyCA Jan 30 12:57:15.976622 sshd-session[1574]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Jan 30 12:57:15.983285 systemd-logind[1449]: New session 3 of user core. Jan 30 12:57:15.988465 systemd[1]: Started session-3.scope - Session 3 of User core. Jan 30 12:57:16.055174 sshd[1576]: Connection closed by 139.178.68.195 port 34392 Jan 30 12:57:16.055946 sshd-session[1574]: pam_unix(sshd:session): session closed for user core Jan 30 12:57:16.060327 systemd[1]: sshd@2-159.223.192.231:22-139.178.68.195:34392.service: Deactivated successfully. Jan 30 12:57:16.063507 systemd[1]: session-3.scope: Deactivated successfully. Jan 30 12:57:16.066528 systemd-logind[1449]: Session 3 logged out. Waiting for processes to exit. Jan 30 12:57:16.068075 systemd-logind[1449]: Removed session 3. Jan 30 12:57:16.287675 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent. Jan 30 12:57:16.291459 systemd[1]: Reached target multi-user.target - Multi-User System. Jan 30 12:57:16.293913 systemd[1]: Startup finished in 1.617s (kernel) + 5.458s (initrd) + 6.123s (userspace) = 13.199s. 
Jan 30 12:57:16.303316 (kubelet)[1585]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS, KUBELET_KUBEADM_ARGS Jan 30 12:57:16.326878 agetty[1551]: failed to open credentials directory Jan 30 12:57:16.328742 agetty[1552]: failed to open credentials directory Jan 30 12:57:17.052273 kubelet[1585]: E0130 12:57:17.052126 1585 run.go:72] "command failed" err="failed to load kubelet config file, path: /var/lib/kubelet/config.yaml, error: failed to load Kubelet config file /var/lib/kubelet/config.yaml, error failed to read kubelet config file \"/var/lib/kubelet/config.yaml\", error: open /var/lib/kubelet/config.yaml: no such file or directory" Jan 30 12:57:17.056031 systemd[1]: kubelet.service: Main process exited, code=exited, status=1/FAILURE Jan 30 12:57:17.056314 systemd[1]: kubelet.service: Failed with result 'exit-code'. Jan 30 12:57:17.056796 systemd[1]: kubelet.service: Consumed 1.472s CPU time. Jan 30 12:57:26.070026 systemd[1]: Started sshd@3-159.223.192.231:22-139.178.68.195:46666.service - OpenSSH per-connection server daemon (139.178.68.195:46666). Jan 30 12:57:26.137991 sshd[1597]: Accepted publickey for core from 139.178.68.195 port 46666 ssh2: RSA SHA256:SFnHtt+NvFpqnNn2/BMXZVgPxdeWFU7F3mq/iAucyCA Jan 30 12:57:26.139938 sshd-session[1597]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Jan 30 12:57:26.146486 systemd-logind[1449]: New session 4 of user core. Jan 30 12:57:26.153577 systemd[1]: Started session-4.scope - Session 4 of User core. Jan 30 12:57:26.216822 sshd[1599]: Connection closed by 139.178.68.195 port 46666 Jan 30 12:57:26.217791 sshd-session[1597]: pam_unix(sshd:session): session closed for user core Jan 30 12:57:26.232340 systemd[1]: sshd@3-159.223.192.231:22-139.178.68.195:46666.service: Deactivated successfully. Jan 30 12:57:26.235206 systemd[1]: session-4.scope: Deactivated successfully. 
Jan 30 12:57:26.238393 systemd-logind[1449]: Session 4 logged out. Waiting for processes to exit. Jan 30 12:57:26.242712 systemd[1]: Started sshd@4-159.223.192.231:22-139.178.68.195:46676.service - OpenSSH per-connection server daemon (139.178.68.195:46676). Jan 30 12:57:26.245093 systemd-logind[1449]: Removed session 4. Jan 30 12:57:26.320439 sshd[1604]: Accepted publickey for core from 139.178.68.195 port 46676 ssh2: RSA SHA256:SFnHtt+NvFpqnNn2/BMXZVgPxdeWFU7F3mq/iAucyCA Jan 30 12:57:26.322625 sshd-session[1604]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Jan 30 12:57:26.330599 systemd-logind[1449]: New session 5 of user core. Jan 30 12:57:26.337524 systemd[1]: Started session-5.scope - Session 5 of User core. Jan 30 12:57:26.399669 sshd[1606]: Connection closed by 139.178.68.195 port 46676 Jan 30 12:57:26.400712 sshd-session[1604]: pam_unix(sshd:session): session closed for user core Jan 30 12:57:26.410000 systemd[1]: sshd@4-159.223.192.231:22-139.178.68.195:46676.service: Deactivated successfully. Jan 30 12:57:26.412296 systemd[1]: session-5.scope: Deactivated successfully. Jan 30 12:57:26.414495 systemd-logind[1449]: Session 5 logged out. Waiting for processes to exit. Jan 30 12:57:26.422645 systemd[1]: Started sshd@5-159.223.192.231:22-139.178.68.195:46684.service - OpenSSH per-connection server daemon (139.178.68.195:46684). Jan 30 12:57:26.425041 systemd-logind[1449]: Removed session 5. Jan 30 12:57:26.491737 sshd[1611]: Accepted publickey for core from 139.178.68.195 port 46684 ssh2: RSA SHA256:SFnHtt+NvFpqnNn2/BMXZVgPxdeWFU7F3mq/iAucyCA Jan 30 12:57:26.494098 sshd-session[1611]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Jan 30 12:57:26.503348 systemd-logind[1449]: New session 6 of user core. Jan 30 12:57:26.514511 systemd[1]: Started session-6.scope - Session 6 of User core. 
Jan 30 12:57:26.580477 sshd[1613]: Connection closed by 139.178.68.195 port 46684 Jan 30 12:57:26.582749 sshd-session[1611]: pam_unix(sshd:session): session closed for user core Jan 30 12:57:26.591182 systemd[1]: sshd@5-159.223.192.231:22-139.178.68.195:46684.service: Deactivated successfully. Jan 30 12:57:26.592993 systemd[1]: session-6.scope: Deactivated successfully. Jan 30 12:57:26.595478 systemd-logind[1449]: Session 6 logged out. Waiting for processes to exit. Jan 30 12:57:26.600733 systemd[1]: Started sshd@6-159.223.192.231:22-139.178.68.195:46692.service - OpenSSH per-connection server daemon (139.178.68.195:46692). Jan 30 12:57:26.603276 systemd-logind[1449]: Removed session 6. Jan 30 12:57:26.663080 sshd[1618]: Accepted publickey for core from 139.178.68.195 port 46692 ssh2: RSA SHA256:SFnHtt+NvFpqnNn2/BMXZVgPxdeWFU7F3mq/iAucyCA Jan 30 12:57:26.665816 sshd-session[1618]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Jan 30 12:57:26.673507 systemd-logind[1449]: New session 7 of user core. Jan 30 12:57:26.680458 systemd[1]: Started session-7.scope - Session 7 of User core. Jan 30 12:57:26.751785 sudo[1621]: core : PWD=/home/core ; USER=root ; COMMAND=/usr/sbin/setenforce 1 Jan 30 12:57:26.752656 sudo[1621]: pam_unix(sudo:session): session opened for user root(uid=0) by core(uid=500) Jan 30 12:57:26.769727 sudo[1621]: pam_unix(sudo:session): session closed for user root Jan 30 12:57:26.773793 sshd[1620]: Connection closed by 139.178.68.195 port 46692 Jan 30 12:57:26.774549 sshd-session[1618]: pam_unix(sshd:session): session closed for user core Jan 30 12:57:26.785097 systemd[1]: sshd@6-159.223.192.231:22-139.178.68.195:46692.service: Deactivated successfully. Jan 30 12:57:26.787684 systemd[1]: session-7.scope: Deactivated successfully. Jan 30 12:57:26.790503 systemd-logind[1449]: Session 7 logged out. Waiting for processes to exit. 
Jan 30 12:57:26.796646 systemd[1]: Started sshd@7-159.223.192.231:22-139.178.68.195:46704.service - OpenSSH per-connection server daemon (139.178.68.195:46704). Jan 30 12:57:26.798820 systemd-logind[1449]: Removed session 7. Jan 30 12:57:26.846412 sshd[1626]: Accepted publickey for core from 139.178.68.195 port 46704 ssh2: RSA SHA256:SFnHtt+NvFpqnNn2/BMXZVgPxdeWFU7F3mq/iAucyCA Jan 30 12:57:26.848763 sshd-session[1626]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Jan 30 12:57:26.855329 systemd-logind[1449]: New session 8 of user core. Jan 30 12:57:26.860513 systemd[1]: Started session-8.scope - Session 8 of User core. Jan 30 12:57:26.922391 sudo[1630]: core : PWD=/home/core ; USER=root ; COMMAND=/usr/sbin/rm -rf /etc/audit/rules.d/80-selinux.rules /etc/audit/rules.d/99-default.rules Jan 30 12:57:26.922884 sudo[1630]: pam_unix(sudo:session): session opened for user root(uid=0) by core(uid=500) Jan 30 12:57:26.928655 sudo[1630]: pam_unix(sudo:session): session closed for user root Jan 30 12:57:26.938094 sudo[1629]: core : PWD=/home/core ; USER=root ; COMMAND=/usr/sbin/systemctl restart audit-rules Jan 30 12:57:26.938725 sudo[1629]: pam_unix(sudo:session): session opened for user root(uid=0) by core(uid=500) Jan 30 12:57:26.956756 systemd[1]: Starting audit-rules.service - Load Audit Rules... Jan 30 12:57:27.008430 augenrules[1652]: No rules Jan 30 12:57:27.009519 systemd[1]: audit-rules.service: Deactivated successfully. Jan 30 12:57:27.009994 systemd[1]: Finished audit-rules.service - Load Audit Rules. Jan 30 12:57:27.012965 sudo[1629]: pam_unix(sudo:session): session closed for user root Jan 30 12:57:27.017717 sshd[1628]: Connection closed by 139.178.68.195 port 46704 Jan 30 12:57:27.016901 sshd-session[1626]: pam_unix(sshd:session): session closed for user core Jan 30 12:57:27.029809 systemd[1]: sshd@7-159.223.192.231:22-139.178.68.195:46704.service: Deactivated successfully. 
Jan 30 12:57:27.032033 systemd[1]: session-8.scope: Deactivated successfully. Jan 30 12:57:27.033292 systemd-logind[1449]: Session 8 logged out. Waiting for processes to exit. Jan 30 12:57:27.039798 systemd[1]: Started sshd@8-159.223.192.231:22-139.178.68.195:46714.service - OpenSSH per-connection server daemon (139.178.68.195:46714). Jan 30 12:57:27.041953 systemd-logind[1449]: Removed session 8. Jan 30 12:57:27.086146 systemd[1]: kubelet.service: Scheduled restart job, restart counter is at 1. Jan 30 12:57:27.095611 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent... Jan 30 12:57:27.102289 sshd[1660]: Accepted publickey for core from 139.178.68.195 port 46714 ssh2: RSA SHA256:SFnHtt+NvFpqnNn2/BMXZVgPxdeWFU7F3mq/iAucyCA Jan 30 12:57:27.104031 sshd-session[1660]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Jan 30 12:57:27.110143 systemd-logind[1449]: New session 9 of user core. Jan 30 12:57:27.113045 systemd[1]: Started session-9.scope - Session 9 of User core. Jan 30 12:57:27.185334 sudo[1666]: core : PWD=/home/core ; USER=root ; COMMAND=/home/core/install.sh Jan 30 12:57:27.185906 sudo[1666]: pam_unix(sudo:session): session opened for user root(uid=0) by core(uid=500) Jan 30 12:57:27.279448 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent. 
Jan 30 12:57:27.285713 (kubelet)[1681]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS, KUBELET_KUBEADM_ARGS Jan 30 12:57:27.372686 kubelet[1681]: E0130 12:57:27.372531 1681 run.go:72] "command failed" err="failed to load kubelet config file, path: /var/lib/kubelet/config.yaml, error: failed to load Kubelet config file /var/lib/kubelet/config.yaml, error failed to read kubelet config file \"/var/lib/kubelet/config.yaml\", error: open /var/lib/kubelet/config.yaml: no such file or directory" Jan 30 12:57:27.379510 systemd[1]: kubelet.service: Main process exited, code=exited, status=1/FAILURE Jan 30 12:57:27.379715 systemd[1]: kubelet.service: Failed with result 'exit-code'. Jan 30 12:57:27.964426 systemd[1]: Stopped kubelet.service - kubelet: The Kubernetes Node Agent. Jan 30 12:57:27.977739 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent... Jan 30 12:57:28.017663 systemd[1]: Reloading requested from client PID 1711 ('systemctl') (unit session-9.scope)... Jan 30 12:57:28.017885 systemd[1]: Reloading... Jan 30 12:57:28.212462 zram_generator::config[1761]: No configuration found. Jan 30 12:57:28.349721 systemd[1]: /usr/lib/systemd/system/docker.socket:6: ListenStream= references a path below legacy directory /var/run/, updating /var/run/docker.sock → /run/docker.sock; please update the unit file accordingly. Jan 30 12:57:28.449286 systemd[1]: Reloading finished in 430 ms. Jan 30 12:57:28.508718 systemd[1]: kubelet.service: Control process exited, code=killed, status=15/TERM Jan 30 12:57:28.508813 systemd[1]: kubelet.service: Failed with result 'signal'. Jan 30 12:57:28.509224 systemd[1]: Stopped kubelet.service - kubelet: The Kubernetes Node Agent. Jan 30 12:57:28.516728 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent... Jan 30 12:57:28.693558 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent. 
Jan 30 12:57:28.710847 (kubelet)[1802]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS Jan 30 12:57:28.765658 kubelet[1802]: Flag --container-runtime-endpoint has been deprecated, This parameter should be set via the config file specified by the Kubelet's --config flag. See https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/ for more information. Jan 30 12:57:28.765658 kubelet[1802]: Flag --pod-infra-container-image has been deprecated, will be removed in 1.35. Image garbage collector will get sandbox image information from CRI. Jan 30 12:57:28.765658 kubelet[1802]: Flag --volume-plugin-dir has been deprecated, This parameter should be set via the config file specified by the Kubelet's --config flag. See https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/ for more information. Jan 30 12:57:28.766216 kubelet[1802]: I0130 12:57:28.765714 1802 server.go:215] "--pod-infra-container-image will not be pruned by the image garbage collector in kubelet and should also be set in the remote runtime" Jan 30 12:57:28.973042 kubelet[1802]: I0130 12:57:28.972869 1802 server.go:520] "Kubelet version" kubeletVersion="v1.32.0" Jan 30 12:57:28.973042 kubelet[1802]: I0130 12:57:28.972917 1802 server.go:522] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK="" Jan 30 12:57:28.973672 kubelet[1802]: I0130 12:57:28.973352 1802 server.go:954] "Client rotation is on, will bootstrap in background" Jan 30 12:57:29.007198 kubelet[1802]: I0130 12:57:29.007032 1802 dynamic_cafile_content.go:161] "Starting controller" name="client-ca-bundle::/etc/kubernetes/pki/ca.crt" Jan 30 12:57:29.021352 kubelet[1802]: E0130 12:57:29.020927 1802 log.go:32] "RuntimeConfig from runtime service failed" err="rpc error: code = Unimplemented desc = unknown method RuntimeConfig for service runtime.v1.RuntimeService" Jan 30 12:57:29.021352 kubelet[1802]: I0130 12:57:29.020997 1802 server.go:1421] "CRI 
implementation should be updated to support RuntimeConfig when KubeletCgroupDriverFromCRI feature gate has been enabled. Falling back to using cgroupDriver from kubelet config." Jan 30 12:57:29.026002 kubelet[1802]: I0130 12:57:29.025933 1802 server.go:772] "--cgroups-per-qos enabled, but --cgroup-root was not specified. defaulting to /" Jan 30 12:57:29.027788 kubelet[1802]: I0130 12:57:29.027680 1802 container_manager_linux.go:268] "Container manager verified user specified cgroup-root exists" cgroupRoot=[] Jan 30 12:57:29.028547 kubelet[1802]: I0130 12:57:29.027769 1802 container_manager_linux.go:273] "Creating Container Manager object based on Node Config" nodeConfig={"NodeName":"159.223.192.231","RuntimeCgroupsName":"","SystemCgroupsName":"","KubeletCgroupsName":"","KubeletOOMScoreAdj":-999,"ContainerRuntime":"","CgroupsPerQOS":true,"CgroupRoot":"/","CgroupDriver":"systemd","KubeletRootDir":"/var/lib/kubelet","ProtectKernelDefaults":false,"KubeReservedCgroupName":"","SystemReservedCgroupName":"","ReservedSystemCPUs":{},"EnforceNodeAllocatable":{"pods":{}},"KubeReserved":null,"SystemReserved":null,"HardEvictionThresholds":[{"Signal":"imagefs.inodesFree","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.05},"GracePeriod":0,"MinReclaim":null},{"Signal":"memory.available","Operator":"LessThan","Value":{"Quantity":"100Mi","Percentage":0},"GracePeriod":0,"MinReclaim":null},{"Signal":"nodefs.available","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.1},"GracePeriod":0,"MinReclaim":null},{"Signal":"nodefs.inodesFree","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.05},"GracePeriod":0,"MinReclaim":null},{"Signal":"imagefs.available","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.15},"GracePeriod":0,"MinReclaim":null}],"QOSReserved":{},"CPUManagerPolicy":"none","CPUManagerPolicyOptions":null,"TopologyManagerScope":"container","CPUManagerReconcilePeriod":10000000000,"ExperimentalMemoryManagerPolicy":"None","Experimenta
lMemoryManagerReservedMemory":null,"PodPidsLimit":-1,"EnforceCPULimits":true,"CPUCFSQuotaPeriod":100000000,"TopologyManagerPolicy":"none","TopologyManagerPolicyOptions":null,"CgroupVersion":2} Jan 30 12:57:29.028547 kubelet[1802]: I0130 12:57:29.028470 1802 topology_manager.go:138] "Creating topology manager with none policy" Jan 30 12:57:29.028547 kubelet[1802]: I0130 12:57:29.028484 1802 container_manager_linux.go:304] "Creating device plugin manager" Jan 30 12:57:29.028892 kubelet[1802]: I0130 12:57:29.028655 1802 state_mem.go:36] "Initialized new in-memory state store" Jan 30 12:57:29.035071 kubelet[1802]: I0130 12:57:29.034545 1802 kubelet.go:446] "Attempting to sync node with API server" Jan 30 12:57:29.035071 kubelet[1802]: I0130 12:57:29.034622 1802 kubelet.go:341] "Adding static pod path" path="/etc/kubernetes/manifests" Jan 30 12:57:29.035071 kubelet[1802]: I0130 12:57:29.034660 1802 kubelet.go:352] "Adding apiserver pod source" Jan 30 12:57:29.035071 kubelet[1802]: I0130 12:57:29.034677 1802 apiserver.go:42] "Waiting for node sync before watching apiserver pods" Jan 30 12:57:29.035373 kubelet[1802]: E0130 12:57:29.035332 1802 file.go:98] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Jan 30 12:57:29.036336 kubelet[1802]: E0130 12:57:29.036307 1802 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Jan 30 12:57:29.039107 kubelet[1802]: I0130 12:57:29.038476 1802 kuberuntime_manager.go:269] "Container runtime initialized" containerRuntime="containerd" version="v1.7.23" apiVersion="v1" Jan 30 12:57:29.039107 kubelet[1802]: I0130 12:57:29.038920 1802 kubelet.go:890] "Not starting ClusterTrustBundle informer because we are in static kubelet mode" Jan 30 12:57:29.039772 kubelet[1802]: W0130 12:57:29.039745 1802 probe.go:272] Flexvolume plugin directory at /opt/libexec/kubernetes/kubelet-plugins/volume/exec/ does not exist. Recreating. 
Jan 30 12:57:29.042853 kubelet[1802]: I0130 12:57:29.042807 1802 watchdog_linux.go:99] "Systemd watchdog is not enabled" Jan 30 12:57:29.044998 kubelet[1802]: I0130 12:57:29.044972 1802 server.go:1287] "Started kubelet" Jan 30 12:57:29.048270 kubelet[1802]: I0130 12:57:29.048210 1802 fs_resource_analyzer.go:67] "Starting FS ResourceAnalyzer" Jan 30 12:57:29.055554 kubelet[1802]: I0130 12:57:29.055263 1802 server.go:169] "Starting to listen" address="0.0.0.0" port=10250 Jan 30 12:57:29.057196 kubelet[1802]: I0130 12:57:29.056714 1802 server.go:490] "Adding debug handlers to kubelet server" Jan 30 12:57:29.058044 kubelet[1802]: I0130 12:57:29.057961 1802 ratelimit.go:55] "Setting rate limiting for endpoint" service="podresources" qps=100 burstTokens=10 Jan 30 12:57:29.058352 kubelet[1802]: I0130 12:57:29.058332 1802 server.go:243] "Starting to serve the podresources API" endpoint="unix:/var/lib/kubelet/pod-resources/kubelet.sock" Jan 30 12:57:29.063200 kubelet[1802]: I0130 12:57:29.062382 1802 dynamic_serving_content.go:135] "Starting controller" name="kubelet-server-cert-files::/var/lib/kubelet/pki/kubelet.crt::/var/lib/kubelet/pki/kubelet.key" Jan 30 12:57:29.065215 kubelet[1802]: I0130 12:57:29.063473 1802 volume_manager.go:297] "Starting Kubelet Volume Manager" Jan 30 12:57:29.065215 kubelet[1802]: E0130 12:57:29.063854 1802 kubelet_node_status.go:467] "Error getting the current node from lister" err="node \"159.223.192.231\" not found" Jan 30 12:57:29.065215 kubelet[1802]: I0130 12:57:29.064215 1802 desired_state_of_world_populator.go:149] "Desired state populator starts to run" Jan 30 12:57:29.065215 kubelet[1802]: I0130 12:57:29.064302 1802 reconciler.go:26] "Reconciler: start to sync state" Jan 30 12:57:29.065548 kubelet[1802]: E0130 12:57:29.064834 1802 kubelet.go:1561] "Image garbage collection failed once. 
Stats initialization may not have completed yet" err="invalid capacity 0 on image filesystem" Jan 30 12:57:29.066477 kubelet[1802]: I0130 12:57:29.066382 1802 factory.go:221] Registration of the systemd container factory successfully Jan 30 12:57:29.066950 kubelet[1802]: I0130 12:57:29.066906 1802 factory.go:219] Registration of the crio container factory failed: Get "http://%2Fvar%2Frun%2Fcrio%2Fcrio.sock/info": dial unix /var/run/crio/crio.sock: connect: no such file or directory Jan 30 12:57:29.070098 kubelet[1802]: I0130 12:57:29.069924 1802 factory.go:221] Registration of the containerd container factory successfully Jan 30 12:57:29.095927 kubelet[1802]: E0130 12:57:29.089574 1802 nodelease.go:49] "Failed to get node when trying to set owner ref to the node lease" err="nodes \"159.223.192.231\" not found" node="159.223.192.231" Jan 30 12:57:29.096746 kubelet[1802]: I0130 12:57:29.096712 1802 cpu_manager.go:221] "Starting CPU manager" policy="none" Jan 30 12:57:29.096968 kubelet[1802]: I0130 12:57:29.096944 1802 cpu_manager.go:222] "Reconciling" reconcilePeriod="10s" Jan 30 12:57:29.097201 kubelet[1802]: I0130 12:57:29.097101 1802 state_mem.go:36] "Initialized new in-memory state store" Jan 30 12:57:29.104148 kubelet[1802]: I0130 12:57:29.104007 1802 policy_none.go:49] "None policy: Start" Jan 30 12:57:29.104148 kubelet[1802]: I0130 12:57:29.104215 1802 memory_manager.go:186] "Starting memorymanager" policy="None" Jan 30 12:57:29.104148 kubelet[1802]: I0130 12:57:29.104251 1802 state_mem.go:35] "Initializing new in-memory state store" Jan 30 12:57:29.123085 systemd[1]: Created slice kubepods.slice - libcontainer container kubepods.slice. Jan 30 12:57:29.154123 systemd[1]: Created slice kubepods-burstable.slice - libcontainer container kubepods-burstable.slice. Jan 30 12:57:29.162670 systemd[1]: Created slice kubepods-besteffort.slice - libcontainer container kubepods-besteffort.slice. 
Jan 30 12:57:29.164135 kubelet[1802]: E0130 12:57:29.164088 1802 kubelet_node_status.go:467] "Error getting the current node from lister" err="node \"159.223.192.231\" not found"
Jan 30 12:57:29.171359 kubelet[1802]: I0130 12:57:29.171310 1802 manager.go:519] "Failed to read data from checkpoint" checkpoint="kubelet_internal_checkpoint" err="checkpoint is not found"
Jan 30 12:57:29.171878 kubelet[1802]: I0130 12:57:29.171847 1802 eviction_manager.go:189] "Eviction manager: starting control loop"
Jan 30 12:57:29.172344 kubelet[1802]: I0130 12:57:29.172209 1802 container_log_manager.go:189] "Initializing container log rotate workers" workers=1 monitorPeriod="10s"
Jan 30 12:57:29.172978 kubelet[1802]: I0130 12:57:29.172919 1802 plugin_manager.go:118] "Starting Kubelet Plugin Manager"
Jan 30 12:57:29.181655 kubelet[1802]: E0130 12:57:29.181608 1802 eviction_manager.go:267] "eviction manager: failed to check if we have separate container filesystem. Ignoring." err="no imagefs label for configured runtime"
Jan 30 12:57:29.182365 kubelet[1802]: E0130 12:57:29.182319 1802 eviction_manager.go:292] "Eviction manager: failed to get summary stats" err="failed to get node info: node \"159.223.192.231\" not found"
Jan 30 12:57:29.204766 kubelet[1802]: I0130 12:57:29.204643 1802 kubelet_network_linux.go:50] "Initialized iptables rules." protocol="IPv4"
Jan 30 12:57:29.208225 kubelet[1802]: I0130 12:57:29.207842 1802 kubelet_network_linux.go:50] "Initialized iptables rules." protocol="IPv6"
Jan 30 12:57:29.208225 kubelet[1802]: I0130 12:57:29.207934 1802 status_manager.go:227] "Starting to sync pod status with apiserver"
Jan 30 12:57:29.209415 kubelet[1802]: I0130 12:57:29.209341 1802 watchdog_linux.go:127] "Systemd watchdog is not enabled or the interval is invalid, so health checking will not be started."
Jan 30 12:57:29.209415 kubelet[1802]: I0130 12:57:29.209398 1802 kubelet.go:2388] "Starting kubelet main sync loop"
Jan 30 12:57:29.209636 kubelet[1802]: E0130 12:57:29.209578 1802 kubelet.go:2412] "Skipping pod synchronization" err="PLEG is not healthy: pleg has yet to be successful"
Jan 30 12:57:29.274996 kubelet[1802]: I0130 12:57:29.274422 1802 kubelet_node_status.go:76] "Attempting to register node" node="159.223.192.231"
Jan 30 12:57:29.284829 kubelet[1802]: I0130 12:57:29.284120 1802 kubelet_node_status.go:79] "Successfully registered node" node="159.223.192.231"
Jan 30 12:57:29.284829 kubelet[1802]: E0130 12:57:29.284211 1802 kubelet_node_status.go:549] "Error updating node status, will retry" err="error getting node \"159.223.192.231\": node \"159.223.192.231\" not found"
Jan 30 12:57:29.299313 kubelet[1802]: I0130 12:57:29.299249 1802 kuberuntime_manager.go:1702] "Updating runtime config through cri with podcidr" CIDR="192.168.1.0/24"
Jan 30 12:57:29.300207 containerd[1470]: time="2025-01-30T12:57:29.299956610Z" level=info msg="No cni config template is specified, wait for other system components to drop the config."
Jan 30 12:57:29.301131 kubelet[1802]: I0130 12:57:29.300385 1802 kubelet_network.go:61] "Updating Pod CIDR" originalPodCIDR="" newPodCIDR="192.168.1.0/24"
Jan 30 12:57:29.752299 sudo[1666]: pam_unix(sudo:session): session closed for user root
Jan 30 12:57:29.756081 sshd[1665]: Connection closed by 139.178.68.195 port 46714
Jan 30 12:57:29.756857 sshd-session[1660]: pam_unix(sshd:session): session closed for user core
Jan 30 12:57:29.761706 systemd-logind[1449]: Session 9 logged out. Waiting for processes to exit.
Jan 30 12:57:29.762508 systemd[1]: sshd@8-159.223.192.231:22-139.178.68.195:46714.service: Deactivated successfully.
Jan 30 12:57:29.766199 systemd[1]: session-9.scope: Deactivated successfully.
Jan 30 12:57:29.768116 systemd-logind[1449]: Removed session 9.
Jan 30 12:57:29.976349 kubelet[1802]: I0130 12:57:29.976220 1802 transport.go:147] "Certificate rotation detected, shutting down client connections to start using new credentials"
Jan 30 12:57:29.977120 kubelet[1802]: W0130 12:57:29.976600 1802 reflector.go:492] k8s.io/client-go/informers/factory.go:160: watch of *v1.Service ended with: very short watch: k8s.io/client-go/informers/factory.go:160: Unexpected watch close - watch lasted less than a second and no items received
Jan 30 12:57:29.977120 kubelet[1802]: W0130 12:57:29.976939 1802 reflector.go:492] k8s.io/client-go/informers/factory.go:160: watch of *v1.CSIDriver ended with: very short watch: k8s.io/client-go/informers/factory.go:160: Unexpected watch close - watch lasted less than a second and no items received
Jan 30 12:57:29.977120 kubelet[1802]: W0130 12:57:29.977011 1802 reflector.go:492] k8s.io/client-go/informers/factory.go:160: watch of *v1.RuntimeClass ended with: very short watch: k8s.io/client-go/informers/factory.go:160: Unexpected watch close - watch lasted less than a second and no items received
Jan 30 12:57:30.036813 kubelet[1802]: I0130 12:57:30.036415 1802 apiserver.go:52] "Watching apiserver"
Jan 30 12:57:30.036813 kubelet[1802]: E0130 12:57:30.036612 1802 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests"
Jan 30 12:57:30.056424 systemd[1]: Created slice kubepods-besteffort-pod5ed7f5d3_6674_4e66_a79e_75310bb8d49b.slice - libcontainer container kubepods-besteffort-pod5ed7f5d3_6674_4e66_a79e_75310bb8d49b.slice.
Jan 30 12:57:30.065261 kubelet[1802]: I0130 12:57:30.065204 1802 desired_state_of_world_populator.go:157] "Finished populating initial desired state of world"
Jan 30 12:57:30.070686 kubelet[1802]: I0130 12:57:30.070599 1802 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"hubble-tls\" (UniqueName: \"kubernetes.io/projected/4c8904a4-405f-445d-9b96-91db040d7b3e-hubble-tls\") pod \"cilium-pqknc\" (UID: \"4c8904a4-405f-445d-9b96-91db040d7b3e\") " pod="kube-system/cilium-pqknc"
Jan 30 12:57:30.070686 kubelet[1802]: I0130 12:57:30.070638 1802 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cilium-run\" (UniqueName: \"kubernetes.io/host-path/4c8904a4-405f-445d-9b96-91db040d7b3e-cilium-run\") pod \"cilium-pqknc\" (UID: \"4c8904a4-405f-445d-9b96-91db040d7b3e\") " pod="kube-system/cilium-pqknc"
Jan 30 12:57:30.070686 kubelet[1802]: I0130 12:57:30.070662 1802 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"etc-cni-netd\" (UniqueName: \"kubernetes.io/host-path/4c8904a4-405f-445d-9b96-91db040d7b3e-etc-cni-netd\") pod \"cilium-pqknc\" (UID: \"4c8904a4-405f-445d-9b96-91db040d7b3e\") " pod="kube-system/cilium-pqknc"
Jan 30 12:57:30.070686 kubelet[1802]: I0130 12:57:30.070702 1802 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"xtables-lock\" (UniqueName: \"kubernetes.io/host-path/4c8904a4-405f-445d-9b96-91db040d7b3e-xtables-lock\") pod \"cilium-pqknc\" (UID: \"4c8904a4-405f-445d-9b96-91db040d7b3e\") " pod="kube-system/cilium-pqknc"
Jan 30 12:57:30.070947 kubelet[1802]: I0130 12:57:30.070721 1802 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"xtables-lock\" (UniqueName: \"kubernetes.io/host-path/5ed7f5d3-6674-4e66-a79e-75310bb8d49b-xtables-lock\") pod \"kube-proxy-5hlm6\" (UID: \"5ed7f5d3-6674-4e66-a79e-75310bb8d49b\") " pod="kube-system/kube-proxy-5hlm6"
Jan 30 12:57:30.070947 kubelet[1802]: I0130 12:57:30.070736 1802 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/5ed7f5d3-6674-4e66-a79e-75310bb8d49b-lib-modules\") pod \"kube-proxy-5hlm6\" (UID: \"5ed7f5d3-6674-4e66-a79e-75310bb8d49b\") " pod="kube-system/kube-proxy-5hlm6"
Jan 30 12:57:30.070947 kubelet[1802]: I0130 12:57:30.070767 1802 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cilium-config-path\" (UniqueName: \"kubernetes.io/configmap/4c8904a4-405f-445d-9b96-91db040d7b3e-cilium-config-path\") pod \"cilium-pqknc\" (UID: \"4c8904a4-405f-445d-9b96-91db040d7b3e\") " pod="kube-system/cilium-pqknc"
Jan 30 12:57:30.070947 kubelet[1802]: I0130 12:57:30.070783 1802 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-l9btv\" (UniqueName: \"kubernetes.io/projected/4c8904a4-405f-445d-9b96-91db040d7b3e-kube-api-access-l9btv\") pod \"cilium-pqknc\" (UID: \"4c8904a4-405f-445d-9b96-91db040d7b3e\") " pod="kube-system/cilium-pqknc"
Jan 30 12:57:30.070947 kubelet[1802]: I0130 12:57:30.070799 1802 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-proxy\" (UniqueName: \"kubernetes.io/configmap/5ed7f5d3-6674-4e66-a79e-75310bb8d49b-kube-proxy\") pod \"kube-proxy-5hlm6\" (UID: \"5ed7f5d3-6674-4e66-a79e-75310bb8d49b\") " pod="kube-system/kube-proxy-5hlm6"
Jan 30 12:57:30.071099 kubelet[1802]: I0130 12:57:30.070828 1802 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-proc-sys-net\" (UniqueName: \"kubernetes.io/host-path/4c8904a4-405f-445d-9b96-91db040d7b3e-host-proc-sys-net\") pod \"cilium-pqknc\" (UID: \"4c8904a4-405f-445d-9b96-91db040d7b3e\") " pod="kube-system/cilium-pqknc"
Jan 30 12:57:30.071099 kubelet[1802]: I0130 12:57:30.070846 1802 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cni-path\" (UniqueName: \"kubernetes.io/host-path/4c8904a4-405f-445d-9b96-91db040d7b3e-cni-path\") pod \"cilium-pqknc\" (UID: \"4c8904a4-405f-445d-9b96-91db040d7b3e\") " pod="kube-system/cilium-pqknc"
Jan 30 12:57:30.071099 kubelet[1802]: I0130 12:57:30.070860 1802 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/4c8904a4-405f-445d-9b96-91db040d7b3e-lib-modules\") pod \"cilium-pqknc\" (UID: \"4c8904a4-405f-445d-9b96-91db040d7b3e\") " pod="kube-system/cilium-pqknc"
Jan 30 12:57:30.071099 kubelet[1802]: I0130 12:57:30.070873 1802 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"clustermesh-secrets\" (UniqueName: \"kubernetes.io/secret/4c8904a4-405f-445d-9b96-91db040d7b3e-clustermesh-secrets\") pod \"cilium-pqknc\" (UID: \"4c8904a4-405f-445d-9b96-91db040d7b3e\") " pod="kube-system/cilium-pqknc"
Jan 30 12:57:30.071099 kubelet[1802]: I0130 12:57:30.070889 1802 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-proc-sys-kernel\" (UniqueName: \"kubernetes.io/host-path/4c8904a4-405f-445d-9b96-91db040d7b3e-host-proc-sys-kernel\") pod \"cilium-pqknc\" (UID: \"4c8904a4-405f-445d-9b96-91db040d7b3e\") " pod="kube-system/cilium-pqknc"
Jan 30 12:57:30.071278 kubelet[1802]: I0130 12:57:30.070903 1802 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-tlswx\" (UniqueName: \"kubernetes.io/projected/5ed7f5d3-6674-4e66-a79e-75310bb8d49b-kube-api-access-tlswx\") pod \"kube-proxy-5hlm6\" (UID: \"5ed7f5d3-6674-4e66-a79e-75310bb8d49b\") " pod="kube-system/kube-proxy-5hlm6"
Jan 30 12:57:30.071278 kubelet[1802]: I0130 12:57:30.070932 1802 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"bpf-maps\" (UniqueName: \"kubernetes.io/host-path/4c8904a4-405f-445d-9b96-91db040d7b3e-bpf-maps\") pod \"cilium-pqknc\" (UID: \"4c8904a4-405f-445d-9b96-91db040d7b3e\") " pod="kube-system/cilium-pqknc"
Jan 30 12:57:30.071278 kubelet[1802]: I0130 12:57:30.070947 1802 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"hostproc\" (UniqueName: \"kubernetes.io/host-path/4c8904a4-405f-445d-9b96-91db040d7b3e-hostproc\") pod \"cilium-pqknc\" (UID: \"4c8904a4-405f-445d-9b96-91db040d7b3e\") " pod="kube-system/cilium-pqknc"
Jan 30 12:57:30.071278 kubelet[1802]: I0130 12:57:30.070961 1802 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cilium-cgroup\" (UniqueName: \"kubernetes.io/host-path/4c8904a4-405f-445d-9b96-91db040d7b3e-cilium-cgroup\") pod \"cilium-pqknc\" (UID: \"4c8904a4-405f-445d-9b96-91db040d7b3e\") " pod="kube-system/cilium-pqknc"
Jan 30 12:57:30.071690 systemd[1]: Created slice kubepods-burstable-pod4c8904a4_405f_445d_9b96_91db040d7b3e.slice - libcontainer container kubepods-burstable-pod4c8904a4_405f_445d_9b96_91db040d7b3e.slice.
Jan 30 12:57:30.367817 kubelet[1802]: E0130 12:57:30.367632 1802 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 67.207.67.2 67.207.67.3 67.207.67.2"
Jan 30 12:57:30.369444 containerd[1470]: time="2025-01-30T12:57:30.368808490Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-proxy-5hlm6,Uid:5ed7f5d3-6674-4e66-a79e-75310bb8d49b,Namespace:kube-system,Attempt:0,}"
Jan 30 12:57:30.383528 kubelet[1802]: E0130 12:57:30.383482 1802 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 67.207.67.2 67.207.67.3 67.207.67.2"
Jan 30 12:57:30.384749 containerd[1470]: time="2025-01-30T12:57:30.384595023Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:cilium-pqknc,Uid:4c8904a4-405f-445d-9b96-91db040d7b3e,Namespace:kube-system,Attempt:0,}"
Jan 30 12:57:30.962216 containerd[1470]: time="2025-01-30T12:57:30.961286151Z" level=info msg="ImageCreate event name:\"registry.k8s.io/pause:3.8\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"} labels:{key:\"io.cri-containerd.pinned\" value:\"pinned\"}"
Jan 30 12:57:30.963216 containerd[1470]: time="2025-01-30T12:57:30.963104952Z" level=info msg="stop pulling image registry.k8s.io/pause:3.8: active requests=0, bytes read=312056"
Jan 30 12:57:30.966180 containerd[1470]: time="2025-01-30T12:57:30.966073674Z" level=info msg="ImageUpdate event name:\"registry.k8s.io/pause:3.8\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"} labels:{key:\"io.cri-containerd.pinned\" value:\"pinned\"}"
Jan 30 12:57:30.967172 containerd[1470]: time="2025-01-30T12:57:30.967102134Z" level=info msg="ImageCreate event name:\"sha256:4873874c08efc72e9729683a83ffbb7502ee729e9a5ac097723806ea7fa13517\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"} labels:{key:\"io.cri-containerd.pinned\" value:\"pinned\"}"
Jan 30 12:57:30.968299 containerd[1470]: time="2025-01-30T12:57:30.968228244Z" level=info msg="stop pulling image registry.k8s.io/pause:3.8: active requests=0, bytes read=0"
Jan 30 12:57:30.971772 containerd[1470]: time="2025-01-30T12:57:30.971630146Z" level=info msg="ImageCreate event name:\"registry.k8s.io/pause@sha256:9001185023633d17a2f98ff69b6ff2615b8ea02a825adffa40422f51dfdcde9d\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"} labels:{key:\"io.cri-containerd.pinned\" value:\"pinned\"}"
Jan 30 12:57:30.973221 containerd[1470]: time="2025-01-30T12:57:30.972649339Z" level=info msg="Pulled image \"registry.k8s.io/pause:3.8\" with image id \"sha256:4873874c08efc72e9729683a83ffbb7502ee729e9a5ac097723806ea7fa13517\", repo tag \"registry.k8s.io/pause:3.8\", repo digest \"registry.k8s.io/pause@sha256:9001185023633d17a2f98ff69b6ff2615b8ea02a825adffa40422f51dfdcde9d\", size \"311286\" in 603.664675ms"
Jan 30 12:57:30.979904 containerd[1470]: time="2025-01-30T12:57:30.979827139Z" level=info msg="Pulled image \"registry.k8s.io/pause:3.8\" with image id \"sha256:4873874c08efc72e9729683a83ffbb7502ee729e9a5ac097723806ea7fa13517\", repo tag \"registry.k8s.io/pause:3.8\", repo digest \"registry.k8s.io/pause@sha256:9001185023633d17a2f98ff69b6ff2615b8ea02a825adffa40422f51dfdcde9d\", size \"311286\" in 595.081827ms"
Jan 30 12:57:31.038593 kubelet[1802]: E0130 12:57:31.038497 1802 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests"
Jan 30 12:57:31.154611 containerd[1470]: time="2025-01-30T12:57:31.151002091Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1
Jan 30 12:57:31.154979 containerd[1470]: time="2025-01-30T12:57:31.151497759Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1
Jan 30 12:57:31.155099 containerd[1470]: time="2025-01-30T12:57:31.154967871Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1
Jan 30 12:57:31.155099 containerd[1470]: time="2025-01-30T12:57:31.154999122Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
Jan 30 12:57:31.156406 containerd[1470]: time="2025-01-30T12:57:31.156271974Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
Jan 30 12:57:31.157954 containerd[1470]: time="2025-01-30T12:57:31.157081217Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1
Jan 30 12:57:31.157954 containerd[1470]: time="2025-01-30T12:57:31.157178652Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
Jan 30 12:57:31.157954 containerd[1470]: time="2025-01-30T12:57:31.157338997Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
Jan 30 12:57:31.192494 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount4132486575.mount: Deactivated successfully.
Jan 30 12:57:31.286233 systemd[1]: run-containerd-runc-k8s.io-0c5eda6279802109e5e72800c706e489aa9a6792aa9874d32a1d14c180e2a87a-runc.lMsDSx.mount: Deactivated successfully.
Jan 30 12:57:31.299336 systemd[1]: run-containerd-runc-k8s.io-64ccb3a6938bd87edb9fa52b0b9d72fb4e8d7257ec042a1420a07d4a5b6a2056-runc.taqp1B.mount: Deactivated successfully.
Jan 30 12:57:31.312559 systemd[1]: Started cri-containerd-0c5eda6279802109e5e72800c706e489aa9a6792aa9874d32a1d14c180e2a87a.scope - libcontainer container 0c5eda6279802109e5e72800c706e489aa9a6792aa9874d32a1d14c180e2a87a.
Jan 30 12:57:31.316685 systemd[1]: Started cri-containerd-64ccb3a6938bd87edb9fa52b0b9d72fb4e8d7257ec042a1420a07d4a5b6a2056.scope - libcontainer container 64ccb3a6938bd87edb9fa52b0b9d72fb4e8d7257ec042a1420a07d4a5b6a2056.
Jan 30 12:57:31.376923 containerd[1470]: time="2025-01-30T12:57:31.376867216Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:cilium-pqknc,Uid:4c8904a4-405f-445d-9b96-91db040d7b3e,Namespace:kube-system,Attempt:0,} returns sandbox id \"0c5eda6279802109e5e72800c706e489aa9a6792aa9874d32a1d14c180e2a87a\""
Jan 30 12:57:31.379769 kubelet[1802]: E0130 12:57:31.379246 1802 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 67.207.67.2 67.207.67.3 67.207.67.2"
Jan 30 12:57:31.382827 containerd[1470]: time="2025-01-30T12:57:31.382643521Z" level=info msg="PullImage \"quay.io/cilium/cilium:v1.12.5@sha256:06ce2b0a0a472e73334a7504ee5c5d8b2e2d7b72ef728ad94e564740dd505be5\""
Jan 30 12:57:31.389787 containerd[1470]: time="2025-01-30T12:57:31.389428433Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-proxy-5hlm6,Uid:5ed7f5d3-6674-4e66-a79e-75310bb8d49b,Namespace:kube-system,Attempt:0,} returns sandbox id \"64ccb3a6938bd87edb9fa52b0b9d72fb4e8d7257ec042a1420a07d4a5b6a2056\""
Jan 30 12:57:31.390620 kubelet[1802]: E0130 12:57:31.390585 1802 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 67.207.67.2 67.207.67.3 67.207.67.2"
Jan 30 12:57:32.038865 kubelet[1802]: E0130 12:57:32.038791 1802 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests"
Jan 30 12:57:33.039439 kubelet[1802]: E0130 12:57:33.039387 1802 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests"
Jan 30 12:57:34.041260 kubelet[1802]: E0130 12:57:34.041102 1802 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests"
Jan 30 12:57:35.041496 kubelet[1802]: E0130 12:57:35.041420 1802 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests"
Jan 30 12:57:36.041825 kubelet[1802]: E0130 12:57:36.041721 1802 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests"
Jan 30 12:57:36.674711 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount1097328922.mount: Deactivated successfully.
Jan 30 12:57:37.042616 kubelet[1802]: E0130 12:57:37.042474 1802 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests"
Jan 30 12:57:38.044314 kubelet[1802]: E0130 12:57:38.044223 1802 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests"
Jan 30 12:57:39.045013 kubelet[1802]: E0130 12:57:39.044913 1802 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests"
Jan 30 12:57:39.432811 containerd[1470]: time="2025-01-30T12:57:39.432612167Z" level=info msg="ImageCreate event name:\"quay.io/cilium/cilium@sha256:06ce2b0a0a472e73334a7504ee5c5d8b2e2d7b72ef728ad94e564740dd505be5\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Jan 30 12:57:39.433942 containerd[1470]: time="2025-01-30T12:57:39.433791509Z" level=info msg="stop pulling image quay.io/cilium/cilium@sha256:06ce2b0a0a472e73334a7504ee5c5d8b2e2d7b72ef728ad94e564740dd505be5: active requests=0, bytes read=166730503"
Jan 30 12:57:39.434508 containerd[1470]: time="2025-01-30T12:57:39.434472357Z" level=info msg="ImageCreate event name:\"sha256:3e35b3e9f295e7748482d40ed499b0ff7961f1f128d479d8e6682b3245bba69b\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Jan 30 12:57:39.436321 containerd[1470]: time="2025-01-30T12:57:39.436284527Z" level=info msg="Pulled image \"quay.io/cilium/cilium:v1.12.5@sha256:06ce2b0a0a472e73334a7504ee5c5d8b2e2d7b72ef728ad94e564740dd505be5\" with image id \"sha256:3e35b3e9f295e7748482d40ed499b0ff7961f1f128d479d8e6682b3245bba69b\", repo tag \"\", repo digest \"quay.io/cilium/cilium@sha256:06ce2b0a0a472e73334a7504ee5c5d8b2e2d7b72ef728ad94e564740dd505be5\", size \"166719855\" in 8.053204896s"
Jan 30 12:57:39.436446 containerd[1470]: time="2025-01-30T12:57:39.436431665Z" level=info msg="PullImage \"quay.io/cilium/cilium:v1.12.5@sha256:06ce2b0a0a472e73334a7504ee5c5d8b2e2d7b72ef728ad94e564740dd505be5\" returns image reference \"sha256:3e35b3e9f295e7748482d40ed499b0ff7961f1f128d479d8e6682b3245bba69b\""
Jan 30 12:57:39.438204 containerd[1470]: time="2025-01-30T12:57:39.438106988Z" level=info msg="PullImage \"registry.k8s.io/kube-proxy:v1.32.1\""
Jan 30 12:57:39.439764 containerd[1470]: time="2025-01-30T12:57:39.439718119Z" level=info msg="CreateContainer within sandbox \"0c5eda6279802109e5e72800c706e489aa9a6792aa9874d32a1d14c180e2a87a\" for container &ContainerMetadata{Name:mount-cgroup,Attempt:0,}"
Jan 30 12:57:39.440415 systemd-resolved[1318]: Using degraded feature set UDP instead of UDP+EDNS0 for DNS server 67.207.67.3.
Jan 30 12:57:39.463963 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount4247486309.mount: Deactivated successfully.
Jan 30 12:57:39.471976 containerd[1470]: time="2025-01-30T12:57:39.471769970Z" level=info msg="CreateContainer within sandbox \"0c5eda6279802109e5e72800c706e489aa9a6792aa9874d32a1d14c180e2a87a\" for &ContainerMetadata{Name:mount-cgroup,Attempt:0,} returns container id \"84a5ebeddf60e733b817354888c94433e5290d83a19c5500cebdbe877583bd9f\""
Jan 30 12:57:39.472949 containerd[1470]: time="2025-01-30T12:57:39.472908171Z" level=info msg="StartContainer for \"84a5ebeddf60e733b817354888c94433e5290d83a19c5500cebdbe877583bd9f\""
Jan 30 12:57:39.529529 systemd[1]: Started cri-containerd-84a5ebeddf60e733b817354888c94433e5290d83a19c5500cebdbe877583bd9f.scope - libcontainer container 84a5ebeddf60e733b817354888c94433e5290d83a19c5500cebdbe877583bd9f.
Jan 30 12:57:39.588146 containerd[1470]: time="2025-01-30T12:57:39.587913691Z" level=info msg="StartContainer for \"84a5ebeddf60e733b817354888c94433e5290d83a19c5500cebdbe877583bd9f\" returns successfully"
Jan 30 12:57:39.603088 systemd[1]: cri-containerd-84a5ebeddf60e733b817354888c94433e5290d83a19c5500cebdbe877583bd9f.scope: Deactivated successfully.
Jan 30 12:57:39.735327 containerd[1470]: time="2025-01-30T12:57:39.734767737Z" level=info msg="shim disconnected" id=84a5ebeddf60e733b817354888c94433e5290d83a19c5500cebdbe877583bd9f namespace=k8s.io
Jan 30 12:57:39.735327 containerd[1470]: time="2025-01-30T12:57:39.734942196Z" level=warning msg="cleaning up after shim disconnected" id=84a5ebeddf60e733b817354888c94433e5290d83a19c5500cebdbe877583bd9f namespace=k8s.io
Jan 30 12:57:39.735327 containerd[1470]: time="2025-01-30T12:57:39.734966370Z" level=info msg="cleaning up dead shim" namespace=k8s.io
Jan 30 12:57:40.046340 kubelet[1802]: E0130 12:57:40.046037 1802 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests"
Jan 30 12:57:40.285121 kubelet[1802]: E0130 12:57:40.285057 1802 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 67.207.67.2 67.207.67.3 67.207.67.2"
Jan 30 12:57:40.290723 containerd[1470]: time="2025-01-30T12:57:40.290325877Z" level=info msg="CreateContainer within sandbox \"0c5eda6279802109e5e72800c706e489aa9a6792aa9874d32a1d14c180e2a87a\" for container &ContainerMetadata{Name:apply-sysctl-overwrites,Attempt:0,}"
Jan 30 12:57:40.326061 containerd[1470]: time="2025-01-30T12:57:40.325039907Z" level=info msg="CreateContainer within sandbox \"0c5eda6279802109e5e72800c706e489aa9a6792aa9874d32a1d14c180e2a87a\" for &ContainerMetadata{Name:apply-sysctl-overwrites,Attempt:0,} returns container id \"ce41abe5d9dfda1e6e5e6ec10ecc560fcb7d177c65524f444f855836c2543a69\""
Jan 30 12:57:40.326061 containerd[1470]: time="2025-01-30T12:57:40.325933390Z" level=info msg="StartContainer for \"ce41abe5d9dfda1e6e5e6ec10ecc560fcb7d177c65524f444f855836c2543a69\""
Jan 30 12:57:40.390614 systemd[1]: Started cri-containerd-ce41abe5d9dfda1e6e5e6ec10ecc560fcb7d177c65524f444f855836c2543a69.scope - libcontainer container ce41abe5d9dfda1e6e5e6ec10ecc560fcb7d177c65524f444f855836c2543a69.
Jan 30 12:57:40.452008 containerd[1470]: time="2025-01-30T12:57:40.451844675Z" level=info msg="StartContainer for \"ce41abe5d9dfda1e6e5e6ec10ecc560fcb7d177c65524f444f855836c2543a69\" returns successfully"
Jan 30 12:57:40.465494 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-84a5ebeddf60e733b817354888c94433e5290d83a19c5500cebdbe877583bd9f-rootfs.mount: Deactivated successfully.
Jan 30 12:57:40.497569 systemd[1]: systemd-sysctl.service: Deactivated successfully.
Jan 30 12:57:40.499010 systemd[1]: Stopped systemd-sysctl.service - Apply Kernel Variables.
Jan 30 12:57:40.500021 systemd[1]: Stopping systemd-sysctl.service - Apply Kernel Variables...
Jan 30 12:57:40.507843 systemd[1]: Starting systemd-sysctl.service - Apply Kernel Variables...
Jan 30 12:57:40.508190 systemd[1]: cri-containerd-ce41abe5d9dfda1e6e5e6ec10ecc560fcb7d177c65524f444f855836c2543a69.scope: Deactivated successfully.
Jan 30 12:57:40.560220 systemd[1]: Finished systemd-sysctl.service - Apply Kernel Variables.
Jan 30 12:57:40.580423 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-ce41abe5d9dfda1e6e5e6ec10ecc560fcb7d177c65524f444f855836c2543a69-rootfs.mount: Deactivated successfully.
Jan 30 12:57:40.607931 containerd[1470]: time="2025-01-30T12:57:40.607598380Z" level=info msg="shim disconnected" id=ce41abe5d9dfda1e6e5e6ec10ecc560fcb7d177c65524f444f855836c2543a69 namespace=k8s.io
Jan 30 12:57:40.607931 containerd[1470]: time="2025-01-30T12:57:40.607682446Z" level=warning msg="cleaning up after shim disconnected" id=ce41abe5d9dfda1e6e5e6ec10ecc560fcb7d177c65524f444f855836c2543a69 namespace=k8s.io
Jan 30 12:57:40.607931 containerd[1470]: time="2025-01-30T12:57:40.607697599Z" level=info msg="cleaning up dead shim" namespace=k8s.io
Jan 30 12:57:40.857911 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount1374260250.mount: Deactivated successfully.
Jan 30 12:57:41.046815 kubelet[1802]: E0130 12:57:41.046759 1802 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests"
Jan 30 12:57:41.291037 kubelet[1802]: E0130 12:57:41.289271 1802 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 67.207.67.2 67.207.67.3 67.207.67.2"
Jan 30 12:57:41.295104 containerd[1470]: time="2025-01-30T12:57:41.295043182Z" level=info msg="CreateContainer within sandbox \"0c5eda6279802109e5e72800c706e489aa9a6792aa9874d32a1d14c180e2a87a\" for container &ContainerMetadata{Name:mount-bpf-fs,Attempt:0,}"
Jan 30 12:57:41.336422 containerd[1470]: time="2025-01-30T12:57:41.336349366Z" level=info msg="CreateContainer within sandbox \"0c5eda6279802109e5e72800c706e489aa9a6792aa9874d32a1d14c180e2a87a\" for &ContainerMetadata{Name:mount-bpf-fs,Attempt:0,} returns container id \"98ddf2cc9cbc8ef22b2c7e27b81603c9cec7c9264f464c91333420bd11148294\""
Jan 30 12:57:41.337492 containerd[1470]: time="2025-01-30T12:57:41.337442260Z" level=info msg="StartContainer for \"98ddf2cc9cbc8ef22b2c7e27b81603c9cec7c9264f464c91333420bd11148294\""
Jan 30 12:57:41.411062 systemd[1]: Started cri-containerd-98ddf2cc9cbc8ef22b2c7e27b81603c9cec7c9264f464c91333420bd11148294.scope - libcontainer container 98ddf2cc9cbc8ef22b2c7e27b81603c9cec7c9264f464c91333420bd11148294.
Jan 30 12:57:41.462068 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount170655768.mount: Deactivated successfully.
Jan 30 12:57:41.483261 containerd[1470]: time="2025-01-30T12:57:41.482676297Z" level=info msg="StartContainer for \"98ddf2cc9cbc8ef22b2c7e27b81603c9cec7c9264f464c91333420bd11148294\" returns successfully"
Jan 30 12:57:41.486280 systemd[1]: cri-containerd-98ddf2cc9cbc8ef22b2c7e27b81603c9cec7c9264f464c91333420bd11148294.scope: Deactivated successfully.
Jan 30 12:57:41.554216 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-98ddf2cc9cbc8ef22b2c7e27b81603c9cec7c9264f464c91333420bd11148294-rootfs.mount: Deactivated successfully.
Jan 30 12:57:41.649564 containerd[1470]: time="2025-01-30T12:57:41.649059236Z" level=info msg="shim disconnected" id=98ddf2cc9cbc8ef22b2c7e27b81603c9cec7c9264f464c91333420bd11148294 namespace=k8s.io
Jan 30 12:57:41.649564 containerd[1470]: time="2025-01-30T12:57:41.649248207Z" level=warning msg="cleaning up after shim disconnected" id=98ddf2cc9cbc8ef22b2c7e27b81603c9cec7c9264f464c91333420bd11148294 namespace=k8s.io
Jan 30 12:57:41.649564 containerd[1470]: time="2025-01-30T12:57:41.649266744Z" level=info msg="cleaning up dead shim" namespace=k8s.io
Jan 30 12:57:41.789601 containerd[1470]: time="2025-01-30T12:57:41.789296789Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-proxy:v1.32.1\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Jan 30 12:57:41.790713 containerd[1470]: time="2025-01-30T12:57:41.790621030Z" level=info msg="stop pulling image registry.k8s.io/kube-proxy:v1.32.1: active requests=0, bytes read=30909466"
Jan 30 12:57:41.791933 containerd[1470]: time="2025-01-30T12:57:41.791833120Z" level=info msg="ImageCreate event name:\"sha256:e29f9c7391fd92d96bc72026fc755b0f9589536e36ecd7102161f1ded087897a\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Jan 30 12:57:41.794331 containerd[1470]: time="2025-01-30T12:57:41.794258586Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-proxy@sha256:0244651801747edf2368222f93a7d17cba6e668a890db72532d6b67a7e06dca5\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Jan 30 12:57:41.795400 containerd[1470]: time="2025-01-30T12:57:41.795240397Z" level=info msg="Pulled image \"registry.k8s.io/kube-proxy:v1.32.1\" with image id \"sha256:e29f9c7391fd92d96bc72026fc755b0f9589536e36ecd7102161f1ded087897a\", repo tag \"registry.k8s.io/kube-proxy:v1.32.1\", repo digest \"registry.k8s.io/kube-proxy@sha256:0244651801747edf2368222f93a7d17cba6e668a890db72532d6b67a7e06dca5\", size \"30908485\" in 2.357081717s"
Jan 30 12:57:41.795400 containerd[1470]: time="2025-01-30T12:57:41.795289447Z" level=info msg="PullImage \"registry.k8s.io/kube-proxy:v1.32.1\" returns image reference \"sha256:e29f9c7391fd92d96bc72026fc755b0f9589536e36ecd7102161f1ded087897a\""
Jan 30 12:57:41.799013 containerd[1470]: time="2025-01-30T12:57:41.798826302Z" level=info msg="CreateContainer within sandbox \"64ccb3a6938bd87edb9fa52b0b9d72fb4e8d7257ec042a1420a07d4a5b6a2056\" for container &ContainerMetadata{Name:kube-proxy,Attempt:0,}"
Jan 30 12:57:41.843798 containerd[1470]: time="2025-01-30T12:57:41.843618005Z" level=info msg="CreateContainer within sandbox \"64ccb3a6938bd87edb9fa52b0b9d72fb4e8d7257ec042a1420a07d4a5b6a2056\" for &ContainerMetadata{Name:kube-proxy,Attempt:0,} returns container id \"655156428ffae6b630e6f1597473e310efae6e9b229c6f3a19c4946d3e9aa6e7\""
Jan 30 12:57:41.845464 containerd[1470]: time="2025-01-30T12:57:41.845319152Z" level=info msg="StartContainer for \"655156428ffae6b630e6f1597473e310efae6e9b229c6f3a19c4946d3e9aa6e7\""
Jan 30 12:57:41.894638 systemd[1]: Started cri-containerd-655156428ffae6b630e6f1597473e310efae6e9b229c6f3a19c4946d3e9aa6e7.scope - libcontainer container 655156428ffae6b630e6f1597473e310efae6e9b229c6f3a19c4946d3e9aa6e7.
Jan 30 12:57:41.963628 containerd[1470]: time="2025-01-30T12:57:41.963554882Z" level=info msg="StartContainer for \"655156428ffae6b630e6f1597473e310efae6e9b229c6f3a19c4946d3e9aa6e7\" returns successfully" Jan 30 12:57:42.047339 kubelet[1802]: E0130 12:57:42.047235 1802 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Jan 30 12:57:42.294685 kubelet[1802]: E0130 12:57:42.293819 1802 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 67.207.67.2 67.207.67.3 67.207.67.2" Jan 30 12:57:42.301719 kubelet[1802]: E0130 12:57:42.301094 1802 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 67.207.67.2 67.207.67.3 67.207.67.2" Jan 30 12:57:42.305719 containerd[1470]: time="2025-01-30T12:57:42.305434724Z" level=info msg="CreateContainer within sandbox \"0c5eda6279802109e5e72800c706e489aa9a6792aa9874d32a1d14c180e2a87a\" for container &ContainerMetadata{Name:clean-cilium-state,Attempt:0,}" Jan 30 12:57:42.312387 kubelet[1802]: I0130 12:57:42.312169 1802 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/kube-proxy-5hlm6" podStartSLOduration=1.9070863070000001 podStartE2EDuration="12.31212984s" podCreationTimestamp="2025-01-30 12:57:30 +0000 UTC" firstStartedPulling="2025-01-30 12:57:31.391384873 +0000 UTC m=+2.676036758" lastFinishedPulling="2025-01-30 12:57:41.796428433 +0000 UTC m=+13.081080291" observedRunningTime="2025-01-30 12:57:42.311992845 +0000 UTC m=+13.596644733" watchObservedRunningTime="2025-01-30 12:57:42.31212984 +0000 UTC m=+13.596781710" Jan 30 12:57:42.334044 containerd[1470]: time="2025-01-30T12:57:42.333980095Z" level=info msg="CreateContainer within sandbox \"0c5eda6279802109e5e72800c706e489aa9a6792aa9874d32a1d14c180e2a87a\" for 
&ContainerMetadata{Name:clean-cilium-state,Attempt:0,} returns container id \"7f0dc8d2251b3b1658ee56aac5a8c8f121c54193665d92bd225c42847b1adc98\"" Jan 30 12:57:42.335357 containerd[1470]: time="2025-01-30T12:57:42.334978907Z" level=info msg="StartContainer for \"7f0dc8d2251b3b1658ee56aac5a8c8f121c54193665d92bd225c42847b1adc98\"" Jan 30 12:57:42.401448 systemd[1]: Started cri-containerd-7f0dc8d2251b3b1658ee56aac5a8c8f121c54193665d92bd225c42847b1adc98.scope - libcontainer container 7f0dc8d2251b3b1658ee56aac5a8c8f121c54193665d92bd225c42847b1adc98. Jan 30 12:57:42.463268 containerd[1470]: time="2025-01-30T12:57:42.462487819Z" level=info msg="StartContainer for \"7f0dc8d2251b3b1658ee56aac5a8c8f121c54193665d92bd225c42847b1adc98\" returns successfully" Jan 30 12:57:42.463026 systemd[1]: cri-containerd-7f0dc8d2251b3b1658ee56aac5a8c8f121c54193665d92bd225c42847b1adc98.scope: Deactivated successfully. Jan 30 12:57:42.503729 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-7f0dc8d2251b3b1658ee56aac5a8c8f121c54193665d92bd225c42847b1adc98-rootfs.mount: Deactivated successfully. Jan 30 12:57:42.530901 containerd[1470]: time="2025-01-30T12:57:42.530586805Z" level=info msg="shim disconnected" id=7f0dc8d2251b3b1658ee56aac5a8c8f121c54193665d92bd225c42847b1adc98 namespace=k8s.io Jan 30 12:57:42.530901 containerd[1470]: time="2025-01-30T12:57:42.530701078Z" level=warning msg="cleaning up after shim disconnected" id=7f0dc8d2251b3b1658ee56aac5a8c8f121c54193665d92bd225c42847b1adc98 namespace=k8s.io Jan 30 12:57:42.530901 containerd[1470]: time="2025-01-30T12:57:42.530720748Z" level=info msg="cleaning up dead shim" namespace=k8s.io Jan 30 12:57:42.546503 systemd-resolved[1318]: Using degraded feature set UDP instead of UDP+EDNS0 for DNS server 67.207.67.2. 
Jan 30 12:57:43.048556 kubelet[1802]: E0130 12:57:43.048485 1802 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Jan 30 12:57:43.308331 kubelet[1802]: E0130 12:57:43.308111 1802 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 67.207.67.2 67.207.67.3 67.207.67.2" Jan 30 12:57:43.308840 kubelet[1802]: E0130 12:57:43.308581 1802 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 67.207.67.2 67.207.67.3 67.207.67.2" Jan 30 12:57:43.312409 containerd[1470]: time="2025-01-30T12:57:43.312213464Z" level=info msg="CreateContainer within sandbox \"0c5eda6279802109e5e72800c706e489aa9a6792aa9874d32a1d14c180e2a87a\" for container &ContainerMetadata{Name:cilium-agent,Attempt:0,}" Jan 30 12:57:43.387450 containerd[1470]: time="2025-01-30T12:57:43.387351529Z" level=info msg="CreateContainer within sandbox \"0c5eda6279802109e5e72800c706e489aa9a6792aa9874d32a1d14c180e2a87a\" for &ContainerMetadata{Name:cilium-agent,Attempt:0,} returns container id \"dbb2948b80affc29925cba5530d393fe72a74b383f0063098bdc7f6b44439e4d\"" Jan 30 12:57:43.390270 containerd[1470]: time="2025-01-30T12:57:43.388140992Z" level=info msg="StartContainer for \"dbb2948b80affc29925cba5530d393fe72a74b383f0063098bdc7f6b44439e4d\"" Jan 30 12:57:43.431471 systemd[1]: Started cri-containerd-dbb2948b80affc29925cba5530d393fe72a74b383f0063098bdc7f6b44439e4d.scope - libcontainer container dbb2948b80affc29925cba5530d393fe72a74b383f0063098bdc7f6b44439e4d. 
Jan 30 12:57:43.499875 containerd[1470]: time="2025-01-30T12:57:43.499783000Z" level=info msg="StartContainer for \"dbb2948b80affc29925cba5530d393fe72a74b383f0063098bdc7f6b44439e4d\" returns successfully" Jan 30 12:57:43.635400 kubelet[1802]: I0130 12:57:43.634653 1802 kubelet_node_status.go:502] "Fast updating node status as it just became ready" Jan 30 12:57:44.050633 kubelet[1802]: E0130 12:57:44.050292 1802 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Jan 30 12:57:44.058218 kernel: Initializing XFRM netlink socket Jan 30 12:57:44.315295 kubelet[1802]: E0130 12:57:44.315042 1802 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 67.207.67.2 67.207.67.3 67.207.67.2" Jan 30 12:57:44.342575 kubelet[1802]: I0130 12:57:44.342434 1802 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/cilium-pqknc" podStartSLOduration=6.285764861 podStartE2EDuration="14.342403188s" podCreationTimestamp="2025-01-30 12:57:30 +0000 UTC" firstStartedPulling="2025-01-30 12:57:31.380697083 +0000 UTC m=+2.665348969" lastFinishedPulling="2025-01-30 12:57:39.43733544 +0000 UTC m=+10.721987296" observedRunningTime="2025-01-30 12:57:44.342243806 +0000 UTC m=+15.626895697" watchObservedRunningTime="2025-01-30 12:57:44.342403188 +0000 UTC m=+15.627055064" Jan 30 12:57:45.051192 kubelet[1802]: E0130 12:57:45.051083 1802 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Jan 30 12:57:45.317196 kubelet[1802]: E0130 12:57:45.316976 1802 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 67.207.67.2 67.207.67.3 67.207.67.2" Jan 30 12:57:46.346044 systemd-resolved[1318]: Clock change detected. Flushing caches. 
Jan 30 12:57:46.346631 systemd-timesyncd[1365]: Contacted time server 172.234.37.140:123 (2.flatcar.pool.ntp.org). Jan 30 12:57:46.346743 systemd-timesyncd[1365]: Initial clock synchronization to Thu 2025-01-30 12:57:46.345756 UTC. Jan 30 12:57:46.549852 systemd-resolved[1318]: Using degraded feature set TCP instead of UDP for DNS server 67.207.67.3. Jan 30 12:57:46.688628 systemd-networkd[1374]: cilium_host: Link UP Jan 30 12:57:46.688923 systemd-networkd[1374]: cilium_net: Link UP Jan 30 12:57:46.689641 systemd-networkd[1374]: cilium_net: Gained carrier Jan 30 12:57:46.689913 systemd-networkd[1374]: cilium_host: Gained carrier Jan 30 12:57:46.848241 systemd-networkd[1374]: cilium_vxlan: Link UP Jan 30 12:57:46.848251 systemd-networkd[1374]: cilium_vxlan: Gained carrier Jan 30 12:57:46.983790 kubelet[1802]: E0130 12:57:46.983719 1802 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Jan 30 12:57:47.170264 kernel: NET: Registered PF_ALG protocol family Jan 30 12:57:47.250601 kubelet[1802]: E0130 12:57:47.250555 1802 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 67.207.67.2 67.207.67.3 67.207.67.2" Jan 30 12:57:47.318201 systemd-networkd[1374]: cilium_host: Gained IPv6LL Jan 30 12:57:47.446549 systemd-networkd[1374]: cilium_net: Gained IPv6LL Jan 30 12:57:47.847283 systemd[1]: Created slice kubepods-besteffort-podb4f26ab3_b1a9_4830_8e77_febbc035d230.slice - libcontainer container kubepods-besteffort-podb4f26ab3_b1a9_4830_8e77_febbc035d230.slice. 
Jan 30 12:57:47.956771 kubelet[1802]: I0130 12:57:47.956700 1802 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-dklt7\" (UniqueName: \"kubernetes.io/projected/b4f26ab3-b1a9-4830-8e77-febbc035d230-kube-api-access-dklt7\") pod \"nginx-deployment-7fcdb87857-msw2v\" (UID: \"b4f26ab3-b1a9-4830-8e77-febbc035d230\") " pod="default/nginx-deployment-7fcdb87857-msw2v" Jan 30 12:57:47.957817 systemd-networkd[1374]: cilium_vxlan: Gained IPv6LL Jan 30 12:57:47.983965 kubelet[1802]: E0130 12:57:47.983907 1802 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Jan 30 12:57:48.138407 systemd-networkd[1374]: lxc_health: Link UP Jan 30 12:57:48.150132 systemd-networkd[1374]: lxc_health: Gained carrier Jan 30 12:57:48.156706 containerd[1470]: time="2025-01-30T12:57:48.156320262Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:nginx-deployment-7fcdb87857-msw2v,Uid:b4f26ab3-b1a9-4830-8e77-febbc035d230,Namespace:default,Attempt:0,}" Jan 30 12:57:48.330242 kubelet[1802]: E0130 12:57:48.329161 1802 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 67.207.67.2 67.207.67.3 67.207.67.2" Jan 30 12:57:48.729568 systemd-networkd[1374]: lxc62d213e9cfda: Link UP Jan 30 12:57:48.732414 kernel: eth0: renamed from tmpcb98e Jan 30 12:57:48.740774 systemd-networkd[1374]: lxc62d213e9cfda: Gained carrier Jan 30 12:57:48.986284 kubelet[1802]: E0130 12:57:48.984982 1802 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Jan 30 12:57:49.257490 kubelet[1802]: E0130 12:57:49.257035 1802 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 67.207.67.2 67.207.67.3 67.207.67.2" Jan 30 12:57:49.301566 
systemd-networkd[1374]: lxc_health: Gained IPv6LL Jan 30 12:57:49.967520 kubelet[1802]: E0130 12:57:49.967442 1802 file.go:104] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Jan 30 12:57:49.987647 kubelet[1802]: E0130 12:57:49.987572 1802 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Jan 30 12:57:50.260040 kubelet[1802]: E0130 12:57:50.258825 1802 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 67.207.67.2 67.207.67.3 67.207.67.2" Jan 30 12:57:50.709656 systemd-networkd[1374]: lxc62d213e9cfda: Gained IPv6LL Jan 30 12:57:50.989012 kubelet[1802]: E0130 12:57:50.988821 1802 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Jan 30 12:57:51.990376 kubelet[1802]: E0130 12:57:51.990303 1802 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Jan 30 12:57:52.990580 kubelet[1802]: E0130 12:57:52.990497 1802 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Jan 30 12:57:53.991345 kubelet[1802]: E0130 12:57:53.991250 1802 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Jan 30 12:57:54.613353 containerd[1470]: time="2025-01-30T12:57:54.613023679Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Jan 30 12:57:54.613353 containerd[1470]: time="2025-01-30T12:57:54.613173983Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." 
runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Jan 30 12:57:54.613353 containerd[1470]: time="2025-01-30T12:57:54.613202574Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Jan 30 12:57:54.614331 containerd[1470]: time="2025-01-30T12:57:54.613387752Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Jan 30 12:57:54.651536 systemd[1]: Started cri-containerd-cb98e6cb8d6e58a05b11c14995084ddf2172b2a43a4cfa154fbb9e2f72e39b1b.scope - libcontainer container cb98e6cb8d6e58a05b11c14995084ddf2172b2a43a4cfa154fbb9e2f72e39b1b. Jan 30 12:57:54.718017 containerd[1470]: time="2025-01-30T12:57:54.717948424Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:nginx-deployment-7fcdb87857-msw2v,Uid:b4f26ab3-b1a9-4830-8e77-febbc035d230,Namespace:default,Attempt:0,} returns sandbox id \"cb98e6cb8d6e58a05b11c14995084ddf2172b2a43a4cfa154fbb9e2f72e39b1b\"" Jan 30 12:57:54.720282 containerd[1470]: time="2025-01-30T12:57:54.720182879Z" level=info msg="PullImage \"ghcr.io/flatcar/nginx:latest\"" Jan 30 12:57:54.992339 kubelet[1802]: E0130 12:57:54.992266 1802 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Jan 30 12:57:55.992606 kubelet[1802]: E0130 12:57:55.992509 1802 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Jan 30 12:57:56.993719 kubelet[1802]: E0130 12:57:56.993620 1802 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Jan 30 12:57:57.858018 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount995977247.mount: Deactivated successfully. 
Jan 30 12:57:57.994492 kubelet[1802]: E0130 12:57:57.994401 1802 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Jan 30 12:57:58.995683 kubelet[1802]: E0130 12:57:58.995618 1802 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Jan 30 12:57:59.366226 containerd[1470]: time="2025-01-30T12:57:59.365982867Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/nginx:latest\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jan 30 12:57:59.367948 containerd[1470]: time="2025-01-30T12:57:59.367850313Z" level=info msg="stop pulling image ghcr.io/flatcar/nginx:latest: active requests=0, bytes read=71015561" Jan 30 12:57:59.368739 containerd[1470]: time="2025-01-30T12:57:59.368650920Z" level=info msg="ImageCreate event name:\"sha256:0dcfd986e814f68db775fba6b61fbaec3761562dc2ae3043d38dbff123e1bb1e\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jan 30 12:57:59.373172 containerd[1470]: time="2025-01-30T12:57:59.373071525Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/nginx@sha256:2ffeb5a7ca68f2017f0bc48251750a6e40fcd3c341b94a22fc7812dcabbb84db\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jan 30 12:57:59.375069 containerd[1470]: time="2025-01-30T12:57:59.374593078Z" level=info msg="Pulled image \"ghcr.io/flatcar/nginx:latest\" with image id \"sha256:0dcfd986e814f68db775fba6b61fbaec3761562dc2ae3043d38dbff123e1bb1e\", repo tag \"ghcr.io/flatcar/nginx:latest\", repo digest \"ghcr.io/flatcar/nginx@sha256:2ffeb5a7ca68f2017f0bc48251750a6e40fcd3c341b94a22fc7812dcabbb84db\", size \"71015439\" in 4.654325969s" Jan 30 12:57:59.375069 containerd[1470]: time="2025-01-30T12:57:59.374645028Z" level=info msg="PullImage \"ghcr.io/flatcar/nginx:latest\" returns image reference \"sha256:0dcfd986e814f68db775fba6b61fbaec3761562dc2ae3043d38dbff123e1bb1e\"" Jan 30 12:57:59.378025 containerd[1470]: 
time="2025-01-30T12:57:59.377875582Z" level=info msg="CreateContainer within sandbox \"cb98e6cb8d6e58a05b11c14995084ddf2172b2a43a4cfa154fbb9e2f72e39b1b\" for container &ContainerMetadata{Name:nginx,Attempt:0,}" Jan 30 12:57:59.397641 containerd[1470]: time="2025-01-30T12:57:59.397394326Z" level=info msg="CreateContainer within sandbox \"cb98e6cb8d6e58a05b11c14995084ddf2172b2a43a4cfa154fbb9e2f72e39b1b\" for &ContainerMetadata{Name:nginx,Attempt:0,} returns container id \"3c37fb1dea89d6ac0653335915b6b5fb2bb043d006c321e33eebae1ebd664be0\"" Jan 30 12:57:59.399552 containerd[1470]: time="2025-01-30T12:57:59.398489867Z" level=info msg="StartContainer for \"3c37fb1dea89d6ac0653335915b6b5fb2bb043d006c321e33eebae1ebd664be0\"" Jan 30 12:57:59.503626 systemd[1]: Started cri-containerd-3c37fb1dea89d6ac0653335915b6b5fb2bb043d006c321e33eebae1ebd664be0.scope - libcontainer container 3c37fb1dea89d6ac0653335915b6b5fb2bb043d006c321e33eebae1ebd664be0. Jan 30 12:57:59.545149 containerd[1470]: time="2025-01-30T12:57:59.544966449Z" level=info msg="StartContainer for \"3c37fb1dea89d6ac0653335915b6b5fb2bb043d006c321e33eebae1ebd664be0\" returns successfully" Jan 30 12:57:59.996680 kubelet[1802]: E0130 12:57:59.996584 1802 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Jan 30 12:58:00.997795 kubelet[1802]: E0130 12:58:00.997716 1802 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Jan 30 12:58:01.028827 update_engine[1450]: I20250130 12:58:01.028669 1450 update_attempter.cc:509] Updating boot flags... 
Jan 30 12:58:01.074302 kernel: BTRFS warning: duplicate device /dev/vda3 devid 1 generation 41 scanned by (udev-worker) (3015) Jan 30 12:58:01.174729 kernel: BTRFS warning: duplicate device /dev/vda3 devid 1 generation 41 scanned by (udev-worker) (3013) Jan 30 12:58:01.263440 kernel: BTRFS warning: duplicate device /dev/vda3 devid 1 generation 41 scanned by (udev-worker) (3013) Jan 30 12:58:01.998115 kubelet[1802]: E0130 12:58:01.998027 1802 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Jan 30 12:58:02.998667 kubelet[1802]: E0130 12:58:02.998598 1802 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Jan 30 12:58:03.998883 kubelet[1802]: E0130 12:58:03.998817 1802 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Jan 30 12:58:04.999134 kubelet[1802]: E0130 12:58:04.999017 1802 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Jan 30 12:58:05.915020 kubelet[1802]: I0130 12:58:05.914929 1802 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="default/nginx-deployment-7fcdb87857-msw2v" podStartSLOduration=14.258415506 podStartE2EDuration="18.914903483s" podCreationTimestamp="2025-01-30 12:57:47 +0000 UTC" firstStartedPulling="2025-01-30 12:57:54.719577496 +0000 UTC m=+25.072173431" lastFinishedPulling="2025-01-30 12:57:59.376065503 +0000 UTC m=+29.728661408" observedRunningTime="2025-01-30 12:58:00.309162158 +0000 UTC m=+30.661758100" watchObservedRunningTime="2025-01-30 12:58:05.914903483 +0000 UTC m=+36.267499420" Jan 30 12:58:05.923421 systemd[1]: Created slice kubepods-besteffort-pod3f1cf6e3_df2f_4131_9808_d3d39789e37b.slice - libcontainer container kubepods-besteffort-pod3f1cf6e3_df2f_4131_9808_d3d39789e37b.slice. 
Jan 30 12:58:05.984966 kubelet[1802]: I0130 12:58:05.984914 1802 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"data\" (UniqueName: \"kubernetes.io/empty-dir/3f1cf6e3-df2f-4131-9808-d3d39789e37b-data\") pod \"nfs-server-provisioner-0\" (UID: \"3f1cf6e3-df2f-4131-9808-d3d39789e37b\") " pod="default/nfs-server-provisioner-0" Jan 30 12:58:05.985345 kubelet[1802]: I0130 12:58:05.985296 1802 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-nz2xk\" (UniqueName: \"kubernetes.io/projected/3f1cf6e3-df2f-4131-9808-d3d39789e37b-kube-api-access-nz2xk\") pod \"nfs-server-provisioner-0\" (UID: \"3f1cf6e3-df2f-4131-9808-d3d39789e37b\") " pod="default/nfs-server-provisioner-0" Jan 30 12:58:06.000337 kubelet[1802]: E0130 12:58:06.000259 1802 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Jan 30 12:58:06.228083 containerd[1470]: time="2025-01-30T12:58:06.228029669Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:nfs-server-provisioner-0,Uid:3f1cf6e3-df2f-4131-9808-d3d39789e37b,Namespace:default,Attempt:0,}" Jan 30 12:58:06.278805 systemd-networkd[1374]: lxc46d958216077: Link UP Jan 30 12:58:06.286284 kernel: eth0: renamed from tmpa0ec7 Jan 30 12:58:06.293843 systemd-networkd[1374]: lxc46d958216077: Gained carrier Jan 30 12:58:06.587646 containerd[1470]: time="2025-01-30T12:58:06.586888191Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Jan 30 12:58:06.587646 containerd[1470]: time="2025-01-30T12:58:06.586985432Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Jan 30 12:58:06.587646 containerd[1470]: time="2025-01-30T12:58:06.587005825Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." 
runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Jan 30 12:58:06.587646 containerd[1470]: time="2025-01-30T12:58:06.587149651Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Jan 30 12:58:06.618782 systemd[1]: Started cri-containerd-a0ec7cca93d266f5b4dba3f616099b6953c8e210ee724d72cbd80ad5226507f7.scope - libcontainer container a0ec7cca93d266f5b4dba3f616099b6953c8e210ee724d72cbd80ad5226507f7. Jan 30 12:58:06.686584 containerd[1470]: time="2025-01-30T12:58:06.686508317Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:nfs-server-provisioner-0,Uid:3f1cf6e3-df2f-4131-9808-d3d39789e37b,Namespace:default,Attempt:0,} returns sandbox id \"a0ec7cca93d266f5b4dba3f616099b6953c8e210ee724d72cbd80ad5226507f7\"" Jan 30 12:58:06.688940 containerd[1470]: time="2025-01-30T12:58:06.688889629Z" level=info msg="PullImage \"registry.k8s.io/sig-storage/nfs-provisioner:v4.0.8\"" Jan 30 12:58:07.001087 kubelet[1802]: E0130 12:58:07.001000 1802 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Jan 30 12:58:07.349773 systemd-networkd[1374]: lxc46d958216077: Gained IPv6LL Jan 30 12:58:08.001677 kubelet[1802]: E0130 12:58:08.001594 1802 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Jan 30 12:58:09.002597 kubelet[1802]: E0130 12:58:09.002535 1802 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Jan 30 12:58:09.086015 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount1983615316.mount: Deactivated successfully. 
Jan 30 12:58:09.967414 kubelet[1802]: E0130 12:58:09.967278 1802 file.go:104] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Jan 30 12:58:10.003118 kubelet[1802]: E0130 12:58:10.002958 1802 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Jan 30 12:58:11.003374 kubelet[1802]: E0130 12:58:11.003309 1802 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Jan 30 12:58:11.590359 containerd[1470]: time="2025-01-30T12:58:11.589923183Z" level=info msg="ImageCreate event name:\"registry.k8s.io/sig-storage/nfs-provisioner:v4.0.8\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jan 30 12:58:11.592241 containerd[1470]: time="2025-01-30T12:58:11.591702757Z" level=info msg="stop pulling image registry.k8s.io/sig-storage/nfs-provisioner:v4.0.8: active requests=0, bytes read=91039406" Jan 30 12:58:11.593257 containerd[1470]: time="2025-01-30T12:58:11.593126715Z" level=info msg="ImageCreate event name:\"sha256:fd0b16f70b66b72bcb2f91d556fa33eba02729c44ffc5f2c16130e7f9fbed3c4\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jan 30 12:58:11.599512 containerd[1470]: time="2025-01-30T12:58:11.599432126Z" level=info msg="ImageCreate event name:\"registry.k8s.io/sig-storage/nfs-provisioner@sha256:c825f3d5e28bde099bd7a3daace28772d412c9157ad47fa752a9ad0baafc118d\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jan 30 12:58:11.600865 containerd[1470]: time="2025-01-30T12:58:11.600636286Z" level=info msg="Pulled image \"registry.k8s.io/sig-storage/nfs-provisioner:v4.0.8\" with image id \"sha256:fd0b16f70b66b72bcb2f91d556fa33eba02729c44ffc5f2c16130e7f9fbed3c4\", repo tag \"registry.k8s.io/sig-storage/nfs-provisioner:v4.0.8\", repo digest \"registry.k8s.io/sig-storage/nfs-provisioner@sha256:c825f3d5e28bde099bd7a3daace28772d412c9157ad47fa752a9ad0baafc118d\", size \"91036984\" 
in 4.91169375s" Jan 30 12:58:11.600865 containerd[1470]: time="2025-01-30T12:58:11.600696468Z" level=info msg="PullImage \"registry.k8s.io/sig-storage/nfs-provisioner:v4.0.8\" returns image reference \"sha256:fd0b16f70b66b72bcb2f91d556fa33eba02729c44ffc5f2c16130e7f9fbed3c4\"" Jan 30 12:58:11.605182 containerd[1470]: time="2025-01-30T12:58:11.604981043Z" level=info msg="CreateContainer within sandbox \"a0ec7cca93d266f5b4dba3f616099b6953c8e210ee724d72cbd80ad5226507f7\" for container &ContainerMetadata{Name:nfs-server-provisioner,Attempt:0,}" Jan 30 12:58:11.631140 containerd[1470]: time="2025-01-30T12:58:11.630973550Z" level=info msg="CreateContainer within sandbox \"a0ec7cca93d266f5b4dba3f616099b6953c8e210ee724d72cbd80ad5226507f7\" for &ContainerMetadata{Name:nfs-server-provisioner,Attempt:0,} returns container id \"23152806e3c00353e63cc08e812ce5be5ec255e5be610ec4368cb70e56069af4\"" Jan 30 12:58:11.632178 containerd[1470]: time="2025-01-30T12:58:11.632116092Z" level=info msg="StartContainer for \"23152806e3c00353e63cc08e812ce5be5ec255e5be610ec4368cb70e56069af4\"" Jan 30 12:58:11.678521 systemd[1]: Started cri-containerd-23152806e3c00353e63cc08e812ce5be5ec255e5be610ec4368cb70e56069af4.scope - libcontainer container 23152806e3c00353e63cc08e812ce5be5ec255e5be610ec4368cb70e56069af4. 
Jan 30 12:58:11.759517 containerd[1470]: time="2025-01-30T12:58:11.759448165Z" level=info msg="StartContainer for \"23152806e3c00353e63cc08e812ce5be5ec255e5be610ec4368cb70e56069af4\" returns successfully" Jan 30 12:58:12.004661 kubelet[1802]: E0130 12:58:12.004561 1802 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Jan 30 12:58:13.005900 kubelet[1802]: E0130 12:58:13.005812 1802 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Jan 30 12:58:14.007009 kubelet[1802]: E0130 12:58:14.006883 1802 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Jan 30 12:58:15.007417 kubelet[1802]: E0130 12:58:15.007330 1802 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Jan 30 12:58:16.008045 kubelet[1802]: E0130 12:58:16.007965 1802 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Jan 30 12:58:17.008753 kubelet[1802]: E0130 12:58:17.008683 1802 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Jan 30 12:58:18.009340 kubelet[1802]: E0130 12:58:18.009259 1802 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Jan 30 12:58:19.009757 kubelet[1802]: E0130 12:58:19.009683 1802 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Jan 30 12:58:20.010701 kubelet[1802]: E0130 12:58:20.010589 1802 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Jan 30 12:58:21.011454 kubelet[1802]: E0130 12:58:21.011378 1802 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" 
path="/etc/kubernetes/manifests" Jan 30 12:58:21.282529 kubelet[1802]: I0130 12:58:21.282251 1802 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="default/nfs-server-provisioner-0" podStartSLOduration=11.367947057 podStartE2EDuration="16.282185221s" podCreationTimestamp="2025-01-30 12:58:05 +0000 UTC" firstStartedPulling="2025-01-30 12:58:06.688560961 +0000 UTC m=+37.041156876" lastFinishedPulling="2025-01-30 12:58:11.602799115 +0000 UTC m=+41.955395040" observedRunningTime="2025-01-30 12:58:12.36077363 +0000 UTC m=+42.713369563" watchObservedRunningTime="2025-01-30 12:58:21.282185221 +0000 UTC m=+51.634781155" Jan 30 12:58:21.290871 systemd[1]: Created slice kubepods-besteffort-pod9b0e678d_f68c_4add_871d_a10ecdb52d30.slice - libcontainer container kubepods-besteffort-pod9b0e678d_f68c_4add_871d_a10ecdb52d30.slice. Jan 30 12:58:21.393410 kubelet[1802]: I0130 12:58:21.393292 1802 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"pvc-667a8f68-de02-47f0-b253-77f07ec6da49\" (UniqueName: \"kubernetes.io/nfs/9b0e678d-f68c-4add-871d-a10ecdb52d30-pvc-667a8f68-de02-47f0-b253-77f07ec6da49\") pod \"test-pod-1\" (UID: \"9b0e678d-f68c-4add-871d-a10ecdb52d30\") " pod="default/test-pod-1" Jan 30 12:58:21.393410 kubelet[1802]: I0130 12:58:21.393351 1802 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-2hjr4\" (UniqueName: \"kubernetes.io/projected/9b0e678d-f68c-4add-871d-a10ecdb52d30-kube-api-access-2hjr4\") pod \"test-pod-1\" (UID: \"9b0e678d-f68c-4add-871d-a10ecdb52d30\") " pod="default/test-pod-1" Jan 30 12:58:21.529836 kernel: FS-Cache: Loaded Jan 30 12:58:21.613540 kernel: RPC: Registered named UNIX socket transport module. Jan 30 12:58:21.613731 kernel: RPC: Registered udp transport module. Jan 30 12:58:21.614553 kernel: RPC: Registered tcp transport module. Jan 30 12:58:21.615451 kernel: RPC: Registered tcp-with-tls transport module. 
Jan 30 12:58:21.616611 kernel: RPC: Registered tcp NFSv4.1 backchannel transport module. Jan 30 12:58:21.871406 kernel: NFS: Registering the id_resolver key type Jan 30 12:58:21.871591 kernel: Key type id_resolver registered Jan 30 12:58:21.874051 kernel: Key type id_legacy registered Jan 30 12:58:21.922293 nfsidmap[3212]: nss_getpwnam: name 'root@nfs-server-provisioner.default.svc.cluster.local' does not map into domain '1.0-8-5f4ae1611a' Jan 30 12:58:21.927523 nfsidmap[3213]: nss_name_to_gid: name 'root@nfs-server-provisioner.default.svc.cluster.local' does not map into domain '1.0-8-5f4ae1611a' Jan 30 12:58:22.012109 kubelet[1802]: E0130 12:58:22.012027 1802 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Jan 30 12:58:22.198107 containerd[1470]: time="2025-01-30T12:58:22.197279337Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:test-pod-1,Uid:9b0e678d-f68c-4add-871d-a10ecdb52d30,Namespace:default,Attempt:0,}" Jan 30 12:58:22.239462 systemd-networkd[1374]: lxc85e32ae8e34b: Link UP Jan 30 12:58:22.251239 kernel: eth0: renamed from tmpa74a5 Jan 30 12:58:22.263404 systemd-networkd[1374]: lxc85e32ae8e34b: Gained carrier Jan 30 12:58:22.511536 containerd[1470]: time="2025-01-30T12:58:22.511352062Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Jan 30 12:58:22.511536 containerd[1470]: time="2025-01-30T12:58:22.511476006Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Jan 30 12:58:22.511810 containerd[1470]: time="2025-01-30T12:58:22.511513430Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Jan 30 12:58:22.511810 containerd[1470]: time="2025-01-30T12:58:22.511661067Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." 
runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Jan 30 12:58:22.547639 systemd[1]: Started cri-containerd-a74a567b4cea4d8e0d309d15190b7c259b8334ba93e43ca08860c0da34dab597.scope - libcontainer container a74a567b4cea4d8e0d309d15190b7c259b8334ba93e43ca08860c0da34dab597. Jan 30 12:58:22.612717 containerd[1470]: time="2025-01-30T12:58:22.612597713Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:test-pod-1,Uid:9b0e678d-f68c-4add-871d-a10ecdb52d30,Namespace:default,Attempt:0,} returns sandbox id \"a74a567b4cea4d8e0d309d15190b7c259b8334ba93e43ca08860c0da34dab597\"" Jan 30 12:58:22.615072 containerd[1470]: time="2025-01-30T12:58:22.614979341Z" level=info msg="PullImage \"ghcr.io/flatcar/nginx:latest\"" Jan 30 12:58:22.975655 containerd[1470]: time="2025-01-30T12:58:22.975447478Z" level=info msg="ImageUpdate event name:\"ghcr.io/flatcar/nginx:latest\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jan 30 12:58:22.976732 containerd[1470]: time="2025-01-30T12:58:22.976648875Z" level=info msg="stop pulling image ghcr.io/flatcar/nginx:latest: active requests=0, bytes read=61" Jan 30 12:58:22.981336 containerd[1470]: time="2025-01-30T12:58:22.981191322Z" level=info msg="Pulled image \"ghcr.io/flatcar/nginx:latest\" with image id \"sha256:0dcfd986e814f68db775fba6b61fbaec3761562dc2ae3043d38dbff123e1bb1e\", repo tag \"ghcr.io/flatcar/nginx:latest\", repo digest \"ghcr.io/flatcar/nginx@sha256:2ffeb5a7ca68f2017f0bc48251750a6e40fcd3c341b94a22fc7812dcabbb84db\", size \"71015439\" in 366.049148ms" Jan 30 12:58:22.981748 containerd[1470]: time="2025-01-30T12:58:22.981581835Z" level=info msg="PullImage \"ghcr.io/flatcar/nginx:latest\" returns image reference \"sha256:0dcfd986e814f68db775fba6b61fbaec3761562dc2ae3043d38dbff123e1bb1e\"" Jan 30 12:58:22.984962 containerd[1470]: time="2025-01-30T12:58:22.984871296Z" level=info msg="CreateContainer within sandbox \"a74a567b4cea4d8e0d309d15190b7c259b8334ba93e43ca08860c0da34dab597\" for container 
&ContainerMetadata{Name:test,Attempt:0,}" Jan 30 12:58:23.008728 containerd[1470]: time="2025-01-30T12:58:23.008599828Z" level=info msg="CreateContainer within sandbox \"a74a567b4cea4d8e0d309d15190b7c259b8334ba93e43ca08860c0da34dab597\" for &ContainerMetadata{Name:test,Attempt:0,} returns container id \"d33fb0243cb4484b949ffd260ee250a4388f077c933d03acf415d5d68467752f\"" Jan 30 12:58:23.010109 containerd[1470]: time="2025-01-30T12:58:23.009860852Z" level=info msg="StartContainer for \"d33fb0243cb4484b949ffd260ee250a4388f077c933d03acf415d5d68467752f\"" Jan 30 12:58:23.012325 kubelet[1802]: E0130 12:58:23.012187 1802 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Jan 30 12:58:23.054623 systemd[1]: Started cri-containerd-d33fb0243cb4484b949ffd260ee250a4388f077c933d03acf415d5d68467752f.scope - libcontainer container d33fb0243cb4484b949ffd260ee250a4388f077c933d03acf415d5d68467752f. Jan 30 12:58:23.098924 containerd[1470]: time="2025-01-30T12:58:23.096792502Z" level=info msg="StartContainer for \"d33fb0243cb4484b949ffd260ee250a4388f077c933d03acf415d5d68467752f\" returns successfully" Jan 30 12:58:23.387686 kubelet[1802]: I0130 12:58:23.385327 1802 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="default/test-pod-1" podStartSLOduration=17.016395344 podStartE2EDuration="17.385299395s" podCreationTimestamp="2025-01-30 12:58:06 +0000 UTC" firstStartedPulling="2025-01-30 12:58:22.61399319 +0000 UTC m=+52.966589126" lastFinishedPulling="2025-01-30 12:58:22.98289726 +0000 UTC m=+53.335493177" observedRunningTime="2025-01-30 12:58:23.385045829 +0000 UTC m=+53.737641759" watchObservedRunningTime="2025-01-30 12:58:23.385299395 +0000 UTC m=+53.737895331" Jan 30 12:58:24.012486 kubelet[1802]: E0130 12:58:24.012410 1802 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Jan 30 12:58:24.053657 systemd-networkd[1374]: 
lxc85e32ae8e34b: Gained IPv6LL Jan 30 12:58:24.118753 systemd[1]: Started sshd@9-159.223.192.231:22-218.92.0.165:62320.service - OpenSSH per-connection server daemon (218.92.0.165:62320). Jan 30 12:58:25.013798 kubelet[1802]: E0130 12:58:25.013681 1802 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Jan 30 12:58:26.014559 kubelet[1802]: E0130 12:58:26.014485 1802 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Jan 30 12:58:27.015204 kubelet[1802]: E0130 12:58:27.015131 1802 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Jan 30 12:58:28.016383 kubelet[1802]: E0130 12:58:28.016307 1802 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Jan 30 12:58:29.017559 kubelet[1802]: E0130 12:58:29.017472 1802 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Jan 30 12:58:29.397638 containerd[1470]: time="2025-01-30T12:58:29.397459992Z" level=error msg="failed to reload cni configuration after receiving fs change event(REMOVE \"/etc/cni/net.d/05-cilium.conf\")" error="cni config load failed: no network config found in /etc/cni/net.d: cni plugin not initialized: failed to load cni config" Jan 30 12:58:29.405663 containerd[1470]: time="2025-01-30T12:58:29.405592992Z" level=info msg="StopContainer for \"dbb2948b80affc29925cba5530d393fe72a74b383f0063098bdc7f6b44439e4d\" with timeout 2 (s)" Jan 30 12:58:29.406077 containerd[1470]: time="2025-01-30T12:58:29.405995460Z" level=info msg="Stop container \"dbb2948b80affc29925cba5530d393fe72a74b383f0063098bdc7f6b44439e4d\" with signal terminated" Jan 30 12:58:29.417592 systemd-networkd[1374]: lxc_health: Link DOWN Jan 30 12:58:29.417609 systemd-networkd[1374]: lxc_health: Lost carrier Jan 30 12:58:29.435856 
systemd[1]: cri-containerd-dbb2948b80affc29925cba5530d393fe72a74b383f0063098bdc7f6b44439e4d.scope: Deactivated successfully. Jan 30 12:58:29.437014 systemd[1]: cri-containerd-dbb2948b80affc29925cba5530d393fe72a74b383f0063098bdc7f6b44439e4d.scope: Consumed 10.056s CPU time. Jan 30 12:58:29.464724 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-dbb2948b80affc29925cba5530d393fe72a74b383f0063098bdc7f6b44439e4d-rootfs.mount: Deactivated successfully. Jan 30 12:58:29.482173 containerd[1470]: time="2025-01-30T12:58:29.482082436Z" level=info msg="shim disconnected" id=dbb2948b80affc29925cba5530d393fe72a74b383f0063098bdc7f6b44439e4d namespace=k8s.io Jan 30 12:58:29.482636 containerd[1470]: time="2025-01-30T12:58:29.482264186Z" level=warning msg="cleaning up after shim disconnected" id=dbb2948b80affc29925cba5530d393fe72a74b383f0063098bdc7f6b44439e4d namespace=k8s.io Jan 30 12:58:29.482636 containerd[1470]: time="2025-01-30T12:58:29.482277559Z" level=info msg="cleaning up dead shim" namespace=k8s.io Jan 30 12:58:29.502758 containerd[1470]: time="2025-01-30T12:58:29.502691263Z" level=info msg="StopContainer for \"dbb2948b80affc29925cba5530d393fe72a74b383f0063098bdc7f6b44439e4d\" returns successfully" Jan 30 12:58:29.503518 containerd[1470]: time="2025-01-30T12:58:29.503458957Z" level=info msg="StopPodSandbox for \"0c5eda6279802109e5e72800c706e489aa9a6792aa9874d32a1d14c180e2a87a\"" Jan 30 12:58:29.503613 containerd[1470]: time="2025-01-30T12:58:29.503516154Z" level=info msg="Container to stop \"ce41abe5d9dfda1e6e5e6ec10ecc560fcb7d177c65524f444f855836c2543a69\" must be in running or unknown state, current state \"CONTAINER_EXITED\"" Jan 30 12:58:29.503613 containerd[1470]: time="2025-01-30T12:58:29.503564309Z" level=info msg="Container to stop \"98ddf2cc9cbc8ef22b2c7e27b81603c9cec7c9264f464c91333420bd11148294\" must be in running or unknown state, current state \"CONTAINER_EXITED\"" Jan 30 12:58:29.503613 containerd[1470]: time="2025-01-30T12:58:29.503573929Z" 
level=info msg="Container to stop \"7f0dc8d2251b3b1658ee56aac5a8c8f121c54193665d92bd225c42847b1adc98\" must be in running or unknown state, current state \"CONTAINER_EXITED\"" Jan 30 12:58:29.503613 containerd[1470]: time="2025-01-30T12:58:29.503586742Z" level=info msg="Container to stop \"84a5ebeddf60e733b817354888c94433e5290d83a19c5500cebdbe877583bd9f\" must be in running or unknown state, current state \"CONTAINER_EXITED\"" Jan 30 12:58:29.503613 containerd[1470]: time="2025-01-30T12:58:29.503595833Z" level=info msg="Container to stop \"dbb2948b80affc29925cba5530d393fe72a74b383f0063098bdc7f6b44439e4d\" must be in running or unknown state, current state \"CONTAINER_EXITED\"" Jan 30 12:58:29.506545 systemd[1]: run-containerd-io.containerd.grpc.v1.cri-sandboxes-0c5eda6279802109e5e72800c706e489aa9a6792aa9874d32a1d14c180e2a87a-shm.mount: Deactivated successfully. Jan 30 12:58:29.515095 systemd[1]: cri-containerd-0c5eda6279802109e5e72800c706e489aa9a6792aa9874d32a1d14c180e2a87a.scope: Deactivated successfully. Jan 30 12:58:29.541753 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-0c5eda6279802109e5e72800c706e489aa9a6792aa9874d32a1d14c180e2a87a-rootfs.mount: Deactivated successfully. 
Jan 30 12:58:29.546832 containerd[1470]: time="2025-01-30T12:58:29.546551162Z" level=info msg="shim disconnected" id=0c5eda6279802109e5e72800c706e489aa9a6792aa9874d32a1d14c180e2a87a namespace=k8s.io Jan 30 12:58:29.546832 containerd[1470]: time="2025-01-30T12:58:29.546619815Z" level=warning msg="cleaning up after shim disconnected" id=0c5eda6279802109e5e72800c706e489aa9a6792aa9874d32a1d14c180e2a87a namespace=k8s.io Jan 30 12:58:29.546832 containerd[1470]: time="2025-01-30T12:58:29.546632632Z" level=info msg="cleaning up dead shim" namespace=k8s.io Jan 30 12:58:29.564380 containerd[1470]: time="2025-01-30T12:58:29.564309136Z" level=info msg="TearDown network for sandbox \"0c5eda6279802109e5e72800c706e489aa9a6792aa9874d32a1d14c180e2a87a\" successfully" Jan 30 12:58:29.564380 containerd[1470]: time="2025-01-30T12:58:29.564356069Z" level=info msg="StopPodSandbox for \"0c5eda6279802109e5e72800c706e489aa9a6792aa9874d32a1d14c180e2a87a\" returns successfully" Jan 30 12:58:29.654572 kubelet[1802]: I0130 12:58:29.653604 1802 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"cilium-cgroup\" (UniqueName: \"kubernetes.io/host-path/4c8904a4-405f-445d-9b96-91db040d7b3e-cilium-cgroup\") pod \"4c8904a4-405f-445d-9b96-91db040d7b3e\" (UID: \"4c8904a4-405f-445d-9b96-91db040d7b3e\") " Jan 30 12:58:29.654572 kubelet[1802]: I0130 12:58:29.653660 1802 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"host-proc-sys-net\" (UniqueName: \"kubernetes.io/host-path/4c8904a4-405f-445d-9b96-91db040d7b3e-host-proc-sys-net\") pod \"4c8904a4-405f-445d-9b96-91db040d7b3e\" (UID: \"4c8904a4-405f-445d-9b96-91db040d7b3e\") " Jan 30 12:58:29.654572 kubelet[1802]: I0130 12:58:29.653694 1802 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"hubble-tls\" (UniqueName: \"kubernetes.io/projected/4c8904a4-405f-445d-9b96-91db040d7b3e-hubble-tls\") pod \"4c8904a4-405f-445d-9b96-91db040d7b3e\" (UID: 
\"4c8904a4-405f-445d-9b96-91db040d7b3e\") " Jan 30 12:58:29.654572 kubelet[1802]: I0130 12:58:29.653710 1802 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"cilium-run\" (UniqueName: \"kubernetes.io/host-path/4c8904a4-405f-445d-9b96-91db040d7b3e-cilium-run\") pod \"4c8904a4-405f-445d-9b96-91db040d7b3e\" (UID: \"4c8904a4-405f-445d-9b96-91db040d7b3e\") " Jan 30 12:58:29.654572 kubelet[1802]: I0130 12:58:29.653725 1802 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"xtables-lock\" (UniqueName: \"kubernetes.io/host-path/4c8904a4-405f-445d-9b96-91db040d7b3e-xtables-lock\") pod \"4c8904a4-405f-445d-9b96-91db040d7b3e\" (UID: \"4c8904a4-405f-445d-9b96-91db040d7b3e\") " Jan 30 12:58:29.654572 kubelet[1802]: I0130 12:58:29.653749 1802 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"cilium-config-path\" (UniqueName: \"kubernetes.io/configmap/4c8904a4-405f-445d-9b96-91db040d7b3e-cilium-config-path\") pod \"4c8904a4-405f-445d-9b96-91db040d7b3e\" (UID: \"4c8904a4-405f-445d-9b96-91db040d7b3e\") " Jan 30 12:58:29.654907 kubelet[1802]: I0130 12:58:29.653764 1802 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/4c8904a4-405f-445d-9b96-91db040d7b3e-lib-modules\") pod \"4c8904a4-405f-445d-9b96-91db040d7b3e\" (UID: \"4c8904a4-405f-445d-9b96-91db040d7b3e\") " Jan 30 12:58:29.654907 kubelet[1802]: I0130 12:58:29.653750 1802 operation_generator.go:780] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/4c8904a4-405f-445d-9b96-91db040d7b3e-cilium-cgroup" (OuterVolumeSpecName: "cilium-cgroup") pod "4c8904a4-405f-445d-9b96-91db040d7b3e" (UID: "4c8904a4-405f-445d-9b96-91db040d7b3e"). InnerVolumeSpecName "cilium-cgroup". 
PluginName "kubernetes.io/host-path", VolumeGIDValue "" Jan 30 12:58:29.654907 kubelet[1802]: I0130 12:58:29.653780 1802 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"hostproc\" (UniqueName: \"kubernetes.io/host-path/4c8904a4-405f-445d-9b96-91db040d7b3e-hostproc\") pod \"4c8904a4-405f-445d-9b96-91db040d7b3e\" (UID: \"4c8904a4-405f-445d-9b96-91db040d7b3e\") " Jan 30 12:58:29.654907 kubelet[1802]: I0130 12:58:29.653823 1802 operation_generator.go:780] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/4c8904a4-405f-445d-9b96-91db040d7b3e-hostproc" (OuterVolumeSpecName: "hostproc") pod "4c8904a4-405f-445d-9b96-91db040d7b3e" (UID: "4c8904a4-405f-445d-9b96-91db040d7b3e"). InnerVolumeSpecName "hostproc". PluginName "kubernetes.io/host-path", VolumeGIDValue "" Jan 30 12:58:29.654907 kubelet[1802]: I0130 12:58:29.653853 1802 operation_generator.go:780] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/4c8904a4-405f-445d-9b96-91db040d7b3e-host-proc-sys-net" (OuterVolumeSpecName: "host-proc-sys-net") pod "4c8904a4-405f-445d-9b96-91db040d7b3e" (UID: "4c8904a4-405f-445d-9b96-91db040d7b3e"). InnerVolumeSpecName "host-proc-sys-net". 
PluginName "kubernetes.io/host-path", VolumeGIDValue "" Jan 30 12:58:29.655111 kubelet[1802]: I0130 12:58:29.653866 1802 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"bpf-maps\" (UniqueName: \"kubernetes.io/host-path/4c8904a4-405f-445d-9b96-91db040d7b3e-bpf-maps\") pod \"4c8904a4-405f-445d-9b96-91db040d7b3e\" (UID: \"4c8904a4-405f-445d-9b96-91db040d7b3e\") " Jan 30 12:58:29.655111 kubelet[1802]: I0130 12:58:29.653900 1802 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"etc-cni-netd\" (UniqueName: \"kubernetes.io/host-path/4c8904a4-405f-445d-9b96-91db040d7b3e-etc-cni-netd\") pod \"4c8904a4-405f-445d-9b96-91db040d7b3e\" (UID: \"4c8904a4-405f-445d-9b96-91db040d7b3e\") " Jan 30 12:58:29.655111 kubelet[1802]: I0130 12:58:29.653937 1802 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"kube-api-access-l9btv\" (UniqueName: \"kubernetes.io/projected/4c8904a4-405f-445d-9b96-91db040d7b3e-kube-api-access-l9btv\") pod \"4c8904a4-405f-445d-9b96-91db040d7b3e\" (UID: \"4c8904a4-405f-445d-9b96-91db040d7b3e\") " Jan 30 12:58:29.655111 kubelet[1802]: I0130 12:58:29.653964 1802 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"cni-path\" (UniqueName: \"kubernetes.io/host-path/4c8904a4-405f-445d-9b96-91db040d7b3e-cni-path\") pod \"4c8904a4-405f-445d-9b96-91db040d7b3e\" (UID: \"4c8904a4-405f-445d-9b96-91db040d7b3e\") " Jan 30 12:58:29.655111 kubelet[1802]: I0130 12:58:29.653995 1802 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"clustermesh-secrets\" (UniqueName: \"kubernetes.io/secret/4c8904a4-405f-445d-9b96-91db040d7b3e-clustermesh-secrets\") pod \"4c8904a4-405f-445d-9b96-91db040d7b3e\" (UID: \"4c8904a4-405f-445d-9b96-91db040d7b3e\") " Jan 30 12:58:29.655111 kubelet[1802]: I0130 12:58:29.654015 1802 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"host-proc-sys-kernel\" (UniqueName: 
\"kubernetes.io/host-path/4c8904a4-405f-445d-9b96-91db040d7b3e-host-proc-sys-kernel\") pod \"4c8904a4-405f-445d-9b96-91db040d7b3e\" (UID: \"4c8904a4-405f-445d-9b96-91db040d7b3e\") " Jan 30 12:58:29.655278 kubelet[1802]: I0130 12:58:29.654118 1802 reconciler_common.go:299] "Volume detached for volume \"cilium-cgroup\" (UniqueName: \"kubernetes.io/host-path/4c8904a4-405f-445d-9b96-91db040d7b3e-cilium-cgroup\") on node \"159.223.192.231\" DevicePath \"\"" Jan 30 12:58:29.655278 kubelet[1802]: I0130 12:58:29.654134 1802 reconciler_common.go:299] "Volume detached for volume \"host-proc-sys-net\" (UniqueName: \"kubernetes.io/host-path/4c8904a4-405f-445d-9b96-91db040d7b3e-host-proc-sys-net\") on node \"159.223.192.231\" DevicePath \"\"" Jan 30 12:58:29.655278 kubelet[1802]: I0130 12:58:29.654147 1802 reconciler_common.go:299] "Volume detached for volume \"hostproc\" (UniqueName: \"kubernetes.io/host-path/4c8904a4-405f-445d-9b96-91db040d7b3e-hostproc\") on node \"159.223.192.231\" DevicePath \"\"" Jan 30 12:58:29.655278 kubelet[1802]: I0130 12:58:29.654181 1802 operation_generator.go:780] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/4c8904a4-405f-445d-9b96-91db040d7b3e-host-proc-sys-kernel" (OuterVolumeSpecName: "host-proc-sys-kernel") pod "4c8904a4-405f-445d-9b96-91db040d7b3e" (UID: "4c8904a4-405f-445d-9b96-91db040d7b3e"). InnerVolumeSpecName "host-proc-sys-kernel". PluginName "kubernetes.io/host-path", VolumeGIDValue "" Jan 30 12:58:29.655278 kubelet[1802]: I0130 12:58:29.654205 1802 operation_generator.go:780] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/4c8904a4-405f-445d-9b96-91db040d7b3e-bpf-maps" (OuterVolumeSpecName: "bpf-maps") pod "4c8904a4-405f-445d-9b96-91db040d7b3e" (UID: "4c8904a4-405f-445d-9b96-91db040d7b3e"). InnerVolumeSpecName "bpf-maps". 
PluginName "kubernetes.io/host-path", VolumeGIDValue "" Jan 30 12:58:29.655278 kubelet[1802]: I0130 12:58:29.654278 1802 operation_generator.go:780] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/4c8904a4-405f-445d-9b96-91db040d7b3e-etc-cni-netd" (OuterVolumeSpecName: "etc-cni-netd") pod "4c8904a4-405f-445d-9b96-91db040d7b3e" (UID: "4c8904a4-405f-445d-9b96-91db040d7b3e"). InnerVolumeSpecName "etc-cni-netd". PluginName "kubernetes.io/host-path", VolumeGIDValue "" Jan 30 12:58:29.661329 kubelet[1802]: I0130 12:58:29.659621 1802 operation_generator.go:780] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/4c8904a4-405f-445d-9b96-91db040d7b3e-cilium-run" (OuterVolumeSpecName: "cilium-run") pod "4c8904a4-405f-445d-9b96-91db040d7b3e" (UID: "4c8904a4-405f-445d-9b96-91db040d7b3e"). InnerVolumeSpecName "cilium-run". PluginName "kubernetes.io/host-path", VolumeGIDValue "" Jan 30 12:58:29.661329 kubelet[1802]: I0130 12:58:29.659687 1802 operation_generator.go:780] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/4c8904a4-405f-445d-9b96-91db040d7b3e-xtables-lock" (OuterVolumeSpecName: "xtables-lock") pod "4c8904a4-405f-445d-9b96-91db040d7b3e" (UID: "4c8904a4-405f-445d-9b96-91db040d7b3e"). InnerVolumeSpecName "xtables-lock". PluginName "kubernetes.io/host-path", VolumeGIDValue "" Jan 30 12:58:29.661962 systemd[1]: var-lib-kubelet-pods-4c8904a4\x2d405f\x2d445d\x2d9b96\x2d91db040d7b3e-volumes-kubernetes.io\x7eprojected-kube\x2dapi\x2daccess\x2dl9btv.mount: Deactivated successfully. Jan 30 12:58:29.663417 kubelet[1802]: I0130 12:58:29.663229 1802 operation_generator.go:780] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/4c8904a4-405f-445d-9b96-91db040d7b3e-cni-path" (OuterVolumeSpecName: "cni-path") pod "4c8904a4-405f-445d-9b96-91db040d7b3e" (UID: "4c8904a4-405f-445d-9b96-91db040d7b3e"). InnerVolumeSpecName "cni-path". 
PluginName "kubernetes.io/host-path", VolumeGIDValue "" Jan 30 12:58:29.663754 kubelet[1802]: I0130 12:58:29.663654 1802 operation_generator.go:780] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/4c8904a4-405f-445d-9b96-91db040d7b3e-lib-modules" (OuterVolumeSpecName: "lib-modules") pod "4c8904a4-405f-445d-9b96-91db040d7b3e" (UID: "4c8904a4-405f-445d-9b96-91db040d7b3e"). InnerVolumeSpecName "lib-modules". PluginName "kubernetes.io/host-path", VolumeGIDValue "" Jan 30 12:58:29.664059 kubelet[1802]: I0130 12:58:29.663942 1802 operation_generator.go:780] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/4c8904a4-405f-445d-9b96-91db040d7b3e-hubble-tls" (OuterVolumeSpecName: "hubble-tls") pod "4c8904a4-405f-445d-9b96-91db040d7b3e" (UID: "4c8904a4-405f-445d-9b96-91db040d7b3e"). InnerVolumeSpecName "hubble-tls". PluginName "kubernetes.io/projected", VolumeGIDValue "" Jan 30 12:58:29.664059 kubelet[1802]: I0130 12:58:29.664034 1802 operation_generator.go:780] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/4c8904a4-405f-445d-9b96-91db040d7b3e-kube-api-access-l9btv" (OuterVolumeSpecName: "kube-api-access-l9btv") pod "4c8904a4-405f-445d-9b96-91db040d7b3e" (UID: "4c8904a4-405f-445d-9b96-91db040d7b3e"). InnerVolumeSpecName "kube-api-access-l9btv". PluginName "kubernetes.io/projected", VolumeGIDValue "" Jan 30 12:58:29.665704 kubelet[1802]: I0130 12:58:29.665596 1802 operation_generator.go:780] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/4c8904a4-405f-445d-9b96-91db040d7b3e-cilium-config-path" (OuterVolumeSpecName: "cilium-config-path") pod "4c8904a4-405f-445d-9b96-91db040d7b3e" (UID: "4c8904a4-405f-445d-9b96-91db040d7b3e"). InnerVolumeSpecName "cilium-config-path". 
PluginName "kubernetes.io/configmap", VolumeGIDValue "" Jan 30 12:58:29.667895 kubelet[1802]: I0130 12:58:29.667778 1802 operation_generator.go:780] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/4c8904a4-405f-445d-9b96-91db040d7b3e-clustermesh-secrets" (OuterVolumeSpecName: "clustermesh-secrets") pod "4c8904a4-405f-445d-9b96-91db040d7b3e" (UID: "4c8904a4-405f-445d-9b96-91db040d7b3e"). InnerVolumeSpecName "clustermesh-secrets". PluginName "kubernetes.io/secret", VolumeGIDValue "" Jan 30 12:58:29.754503 kubelet[1802]: I0130 12:58:29.754448 1802 reconciler_common.go:299] "Volume detached for volume \"hubble-tls\" (UniqueName: \"kubernetes.io/projected/4c8904a4-405f-445d-9b96-91db040d7b3e-hubble-tls\") on node \"159.223.192.231\" DevicePath \"\"" Jan 30 12:58:29.754503 kubelet[1802]: I0130 12:58:29.754494 1802 reconciler_common.go:299] "Volume detached for volume \"cilium-run\" (UniqueName: \"kubernetes.io/host-path/4c8904a4-405f-445d-9b96-91db040d7b3e-cilium-run\") on node \"159.223.192.231\" DevicePath \"\"" Jan 30 12:58:29.754503 kubelet[1802]: I0130 12:58:29.754512 1802 reconciler_common.go:299] "Volume detached for volume \"xtables-lock\" (UniqueName: \"kubernetes.io/host-path/4c8904a4-405f-445d-9b96-91db040d7b3e-xtables-lock\") on node \"159.223.192.231\" DevicePath \"\"" Jan 30 12:58:29.754503 kubelet[1802]: I0130 12:58:29.754525 1802 reconciler_common.go:299] "Volume detached for volume \"cilium-config-path\" (UniqueName: \"kubernetes.io/configmap/4c8904a4-405f-445d-9b96-91db040d7b3e-cilium-config-path\") on node \"159.223.192.231\" DevicePath \"\"" Jan 30 12:58:29.754772 kubelet[1802]: I0130 12:58:29.754541 1802 reconciler_common.go:299] "Volume detached for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/4c8904a4-405f-445d-9b96-91db040d7b3e-lib-modules\") on node \"159.223.192.231\" DevicePath \"\"" Jan 30 12:58:29.754772 kubelet[1802]: I0130 12:58:29.754552 1802 reconciler_common.go:299] "Volume detached for volume 
\"bpf-maps\" (UniqueName: \"kubernetes.io/host-path/4c8904a4-405f-445d-9b96-91db040d7b3e-bpf-maps\") on node \"159.223.192.231\" DevicePath \"\"" Jan 30 12:58:29.754772 kubelet[1802]: I0130 12:58:29.754560 1802 reconciler_common.go:299] "Volume detached for volume \"etc-cni-netd\" (UniqueName: \"kubernetes.io/host-path/4c8904a4-405f-445d-9b96-91db040d7b3e-etc-cni-netd\") on node \"159.223.192.231\" DevicePath \"\"" Jan 30 12:58:29.754772 kubelet[1802]: I0130 12:58:29.754568 1802 reconciler_common.go:299] "Volume detached for volume \"kube-api-access-l9btv\" (UniqueName: \"kubernetes.io/projected/4c8904a4-405f-445d-9b96-91db040d7b3e-kube-api-access-l9btv\") on node \"159.223.192.231\" DevicePath \"\"" Jan 30 12:58:29.754772 kubelet[1802]: I0130 12:58:29.754577 1802 reconciler_common.go:299] "Volume detached for volume \"cni-path\" (UniqueName: \"kubernetes.io/host-path/4c8904a4-405f-445d-9b96-91db040d7b3e-cni-path\") on node \"159.223.192.231\" DevicePath \"\"" Jan 30 12:58:29.754772 kubelet[1802]: I0130 12:58:29.754588 1802 reconciler_common.go:299] "Volume detached for volume \"clustermesh-secrets\" (UniqueName: \"kubernetes.io/secret/4c8904a4-405f-445d-9b96-91db040d7b3e-clustermesh-secrets\") on node \"159.223.192.231\" DevicePath \"\"" Jan 30 12:58:29.754772 kubelet[1802]: I0130 12:58:29.754597 1802 reconciler_common.go:299] "Volume detached for volume \"host-proc-sys-kernel\" (UniqueName: \"kubernetes.io/host-path/4c8904a4-405f-445d-9b96-91db040d7b3e-host-proc-sys-kernel\") on node \"159.223.192.231\" DevicePath \"\"" Jan 30 12:58:29.967577 kubelet[1802]: E0130 12:58:29.967497 1802 file.go:104] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Jan 30 12:58:29.998277 kubelet[1802]: I0130 12:58:29.998183 1802 scope.go:117] "RemoveContainer" containerID="7f0dc8d2251b3b1658ee56aac5a8c8f121c54193665d92bd225c42847b1adc98" Jan 30 12:58:30.000975 containerd[1470]: time="2025-01-30T12:58:30.000478662Z" level=info 
msg="RemoveContainer for \"7f0dc8d2251b3b1658ee56aac5a8c8f121c54193665d92bd225c42847b1adc98\"" Jan 30 12:58:30.006118 containerd[1470]: time="2025-01-30T12:58:30.005733698Z" level=info msg="RemoveContainer for \"7f0dc8d2251b3b1658ee56aac5a8c8f121c54193665d92bd225c42847b1adc98\" returns successfully" Jan 30 12:58:30.006720 kubelet[1802]: I0130 12:58:30.006664 1802 scope.go:117] "RemoveContainer" containerID="84a5ebeddf60e733b817354888c94433e5290d83a19c5500cebdbe877583bd9f" Jan 30 12:58:30.022570 containerd[1470]: time="2025-01-30T12:58:30.009518682Z" level=info msg="RemoveContainer for \"84a5ebeddf60e733b817354888c94433e5290d83a19c5500cebdbe877583bd9f\"" Jan 30 12:58:30.022779 kubelet[1802]: E0130 12:58:30.022587 1802 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Jan 30 12:58:30.027928 containerd[1470]: time="2025-01-30T12:58:30.027822753Z" level=info msg="RemoveContainer for \"84a5ebeddf60e733b817354888c94433e5290d83a19c5500cebdbe877583bd9f\" returns successfully" Jan 30 12:58:30.028670 kubelet[1802]: I0130 12:58:30.028480 1802 scope.go:117] "RemoveContainer" containerID="dbb2948b80affc29925cba5530d393fe72a74b383f0063098bdc7f6b44439e4d" Jan 30 12:58:30.030785 containerd[1470]: time="2025-01-30T12:58:30.030728405Z" level=info msg="RemoveContainer for \"dbb2948b80affc29925cba5530d393fe72a74b383f0063098bdc7f6b44439e4d\"" Jan 30 12:58:30.035818 containerd[1470]: time="2025-01-30T12:58:30.035739640Z" level=info msg="RemoveContainer for \"dbb2948b80affc29925cba5530d393fe72a74b383f0063098bdc7f6b44439e4d\" returns successfully" Jan 30 12:58:30.036857 kubelet[1802]: I0130 12:58:30.036800 1802 scope.go:117] "RemoveContainer" containerID="ce41abe5d9dfda1e6e5e6ec10ecc560fcb7d177c65524f444f855836c2543a69" Jan 30 12:58:30.039480 containerd[1470]: time="2025-01-30T12:58:30.039325896Z" level=info msg="RemoveContainer for \"ce41abe5d9dfda1e6e5e6ec10ecc560fcb7d177c65524f444f855836c2543a69\"" Jan 30 12:58:30.043694 
containerd[1470]: time="2025-01-30T12:58:30.043473132Z" level=info msg="RemoveContainer for \"ce41abe5d9dfda1e6e5e6ec10ecc560fcb7d177c65524f444f855836c2543a69\" returns successfully" Jan 30 12:58:30.044503 kubelet[1802]: I0130 12:58:30.044318 1802 scope.go:117] "RemoveContainer" containerID="98ddf2cc9cbc8ef22b2c7e27b81603c9cec7c9264f464c91333420bd11148294" Jan 30 12:58:30.047168 containerd[1470]: time="2025-01-30T12:58:30.047079593Z" level=info msg="RemoveContainer for \"98ddf2cc9cbc8ef22b2c7e27b81603c9cec7c9264f464c91333420bd11148294\"" Jan 30 12:58:30.056352 containerd[1470]: time="2025-01-30T12:58:30.053368087Z" level=info msg="RemoveContainer for \"98ddf2cc9cbc8ef22b2c7e27b81603c9cec7c9264f464c91333420bd11148294\" returns successfully" Jan 30 12:58:30.059139 containerd[1470]: time="2025-01-30T12:58:30.058819778Z" level=info msg="StopPodSandbox for \"0c5eda6279802109e5e72800c706e489aa9a6792aa9874d32a1d14c180e2a87a\"" Jan 30 12:58:30.060942 containerd[1470]: time="2025-01-30T12:58:30.059286980Z" level=info msg="TearDown network for sandbox \"0c5eda6279802109e5e72800c706e489aa9a6792aa9874d32a1d14c180e2a87a\" successfully" Jan 30 12:58:30.060942 containerd[1470]: time="2025-01-30T12:58:30.059318898Z" level=info msg="StopPodSandbox for \"0c5eda6279802109e5e72800c706e489aa9a6792aa9874d32a1d14c180e2a87a\" returns successfully" Jan 30 12:58:30.060942 containerd[1470]: time="2025-01-30T12:58:30.060252799Z" level=info msg="RemovePodSandbox for \"0c5eda6279802109e5e72800c706e489aa9a6792aa9874d32a1d14c180e2a87a\"" Jan 30 12:58:30.060942 containerd[1470]: time="2025-01-30T12:58:30.060285673Z" level=info msg="Forcibly stopping sandbox \"0c5eda6279802109e5e72800c706e489aa9a6792aa9874d32a1d14c180e2a87a\"" Jan 30 12:58:30.060942 containerd[1470]: time="2025-01-30T12:58:30.060487335Z" level=info msg="TearDown network for sandbox \"0c5eda6279802109e5e72800c706e489aa9a6792aa9874d32a1d14c180e2a87a\" successfully" Jan 30 12:58:30.065877 containerd[1470]: 
time="2025-01-30T12:58:30.065563771Z" level=warning msg="Failed to get podSandbox status for container event for sandboxID \"0c5eda6279802109e5e72800c706e489aa9a6792aa9874d32a1d14c180e2a87a\": an error occurred when try to find sandbox: not found. Sending the event with nil podSandboxStatus." Jan 30 12:58:30.065877 containerd[1470]: time="2025-01-30T12:58:30.065668373Z" level=info msg="RemovePodSandbox \"0c5eda6279802109e5e72800c706e489aa9a6792aa9874d32a1d14c180e2a87a\" returns successfully" Jan 30 12:58:30.137994 kubelet[1802]: E0130 12:58:30.137916 1802 kubelet.go:3008] "Container runtime network not ready" networkReady="NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" Jan 30 12:58:30.152946 systemd[1]: Removed slice kubepods-burstable-pod4c8904a4_405f_445d_9b96_91db040d7b3e.slice - libcontainer container kubepods-burstable-pod4c8904a4_405f_445d_9b96_91db040d7b3e.slice. Jan 30 12:58:30.153472 systemd[1]: kubepods-burstable-pod4c8904a4_405f_445d_9b96_91db040d7b3e.slice: Consumed 10.202s CPU time. Jan 30 12:58:30.376997 systemd[1]: var-lib-kubelet-pods-4c8904a4\x2d405f\x2d445d\x2d9b96\x2d91db040d7b3e-volumes-kubernetes.io\x7esecret-clustermesh\x2dsecrets.mount: Deactivated successfully. Jan 30 12:58:30.377203 systemd[1]: var-lib-kubelet-pods-4c8904a4\x2d405f\x2d445d\x2d9b96\x2d91db040d7b3e-volumes-kubernetes.io\x7eprojected-hubble\x2dtls.mount: Deactivated successfully. 
Jan 30 12:58:31.023396 kubelet[1802]: E0130 12:58:31.023328 1802 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests"
Jan 30 12:58:31.570033 kubelet[1802]: I0130 12:58:31.569938 1802 setters.go:602] "Node became not ready" node="159.223.192.231" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-01-30T12:58:31Z","lastTransitionTime":"2025-01-30T12:58:31Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized"}
Jan 30 12:58:32.023951 kubelet[1802]: E0130 12:58:32.023863 1802 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests"
Jan 30 12:58:32.148056 kubelet[1802]: I0130 12:58:32.148007 1802 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="4c8904a4-405f-445d-9b96-91db040d7b3e" path="/var/lib/kubelet/pods/4c8904a4-405f-445d-9b96-91db040d7b3e/volumes"
Jan 30 12:58:32.463611 kubelet[1802]: I0130 12:58:32.463500 1802 memory_manager.go:355] "RemoveStaleState removing state" podUID="4c8904a4-405f-445d-9b96-91db040d7b3e" containerName="cilium-agent"
Jan 30 12:58:32.473002 systemd[1]: Created slice kubepods-besteffort-podd35b32c0_150c_4d73_bfee_606ed7953272.slice - libcontainer container kubepods-besteffort-podd35b32c0_150c_4d73_bfee_606ed7953272.slice.
Jan 30 12:58:32.566060 systemd[1]: Created slice kubepods-burstable-podbe50b606_33c8_4920_9ceb_e9504e0ac77f.slice - libcontainer container kubepods-burstable-podbe50b606_33c8_4920_9ceb_e9504e0ac77f.slice.
Jan 30 12:58:32.575041 kubelet[1802]: I0130 12:58:32.574921 1802 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cilium-config-path\" (UniqueName: \"kubernetes.io/configmap/d35b32c0-150c-4d73-bfee-606ed7953272-cilium-config-path\") pod \"cilium-operator-6c4d7847fc-r986t\" (UID: \"d35b32c0-150c-4d73-bfee-606ed7953272\") " pod="kube-system/cilium-operator-6c4d7847fc-r986t"
Jan 30 12:58:32.575041 kubelet[1802]: I0130 12:58:32.574997 1802 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-qwvw4\" (UniqueName: \"kubernetes.io/projected/d35b32c0-150c-4d73-bfee-606ed7953272-kube-api-access-qwvw4\") pod \"cilium-operator-6c4d7847fc-r986t\" (UID: \"d35b32c0-150c-4d73-bfee-606ed7953272\") " pod="kube-system/cilium-operator-6c4d7847fc-r986t"
Jan 30 12:58:32.675631 kubelet[1802]: I0130 12:58:32.675559 1802 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cni-path\" (UniqueName: \"kubernetes.io/host-path/be50b606-33c8-4920-9ceb-e9504e0ac77f-cni-path\") pod \"cilium-mzt28\" (UID: \"be50b606-33c8-4920-9ceb-e9504e0ac77f\") " pod="kube-system/cilium-mzt28"
Jan 30 12:58:32.675631 kubelet[1802]: I0130 12:58:32.675630 1802 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cilium-run\" (UniqueName: \"kubernetes.io/host-path/be50b606-33c8-4920-9ceb-e9504e0ac77f-cilium-run\") pod \"cilium-mzt28\" (UID: \"be50b606-33c8-4920-9ceb-e9504e0ac77f\") " pod="kube-system/cilium-mzt28"
Jan 30 12:58:32.675631 kubelet[1802]: I0130 12:58:32.675662 1802 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"bpf-maps\" (UniqueName: \"kubernetes.io/host-path/be50b606-33c8-4920-9ceb-e9504e0ac77f-bpf-maps\") pod \"cilium-mzt28\" (UID: \"be50b606-33c8-4920-9ceb-e9504e0ac77f\") " pod="kube-system/cilium-mzt28"
Jan 30 12:58:32.675631 kubelet[1802]: I0130 12:58:32.675688 1802 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/be50b606-33c8-4920-9ceb-e9504e0ac77f-lib-modules\") pod \"cilium-mzt28\" (UID: \"be50b606-33c8-4920-9ceb-e9504e0ac77f\") " pod="kube-system/cilium-mzt28"
Jan 30 12:58:32.676141 kubelet[1802]: I0130 12:58:32.675713 1802 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cilium-ipsec-secrets\" (UniqueName: \"kubernetes.io/secret/be50b606-33c8-4920-9ceb-e9504e0ac77f-cilium-ipsec-secrets\") pod \"cilium-mzt28\" (UID: \"be50b606-33c8-4920-9ceb-e9504e0ac77f\") " pod="kube-system/cilium-mzt28"
Jan 30 12:58:32.676141 kubelet[1802]: I0130 12:58:32.675740 1802 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-proc-sys-kernel\" (UniqueName: \"kubernetes.io/host-path/be50b606-33c8-4920-9ceb-e9504e0ac77f-host-proc-sys-kernel\") pod \"cilium-mzt28\" (UID: \"be50b606-33c8-4920-9ceb-e9504e0ac77f\") " pod="kube-system/cilium-mzt28"
Jan 30 12:58:32.676141 kubelet[1802]: I0130 12:58:32.675792 1802 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"etc-cni-netd\" (UniqueName: \"kubernetes.io/host-path/be50b606-33c8-4920-9ceb-e9504e0ac77f-etc-cni-netd\") pod \"cilium-mzt28\" (UID: \"be50b606-33c8-4920-9ceb-e9504e0ac77f\") " pod="kube-system/cilium-mzt28"
Jan 30 12:58:32.676141 kubelet[1802]: I0130 12:58:32.675821 1802 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"hostproc\" (UniqueName: \"kubernetes.io/host-path/be50b606-33c8-4920-9ceb-e9504e0ac77f-hostproc\") pod \"cilium-mzt28\" (UID: \"be50b606-33c8-4920-9ceb-e9504e0ac77f\") " pod="kube-system/cilium-mzt28"
Jan 30 12:58:32.676141 kubelet[1802]: I0130 12:58:32.675844 1802 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"xtables-lock\" (UniqueName: \"kubernetes.io/host-path/be50b606-33c8-4920-9ceb-e9504e0ac77f-xtables-lock\") pod \"cilium-mzt28\" (UID: \"be50b606-33c8-4920-9ceb-e9504e0ac77f\") " pod="kube-system/cilium-mzt28"
Jan 30 12:58:32.676141 kubelet[1802]: I0130 12:58:32.675865 1802 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-proc-sys-net\" (UniqueName: \"kubernetes.io/host-path/be50b606-33c8-4920-9ceb-e9504e0ac77f-host-proc-sys-net\") pod \"cilium-mzt28\" (UID: \"be50b606-33c8-4920-9ceb-e9504e0ac77f\") " pod="kube-system/cilium-mzt28"
Jan 30 12:58:32.676399 kubelet[1802]: I0130 12:58:32.675888 1802 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"hubble-tls\" (UniqueName: \"kubernetes.io/projected/be50b606-33c8-4920-9ceb-e9504e0ac77f-hubble-tls\") pod \"cilium-mzt28\" (UID: \"be50b606-33c8-4920-9ceb-e9504e0ac77f\") " pod="kube-system/cilium-mzt28"
Jan 30 12:58:32.676399 kubelet[1802]: I0130 12:58:32.675912 1802 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-f7nbj\" (UniqueName: \"kubernetes.io/projected/be50b606-33c8-4920-9ceb-e9504e0ac77f-kube-api-access-f7nbj\") pod \"cilium-mzt28\" (UID: \"be50b606-33c8-4920-9ceb-e9504e0ac77f\") " pod="kube-system/cilium-mzt28"
Jan 30 12:58:32.676399 kubelet[1802]: I0130 12:58:32.675962 1802 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cilium-cgroup\" (UniqueName: \"kubernetes.io/host-path/be50b606-33c8-4920-9ceb-e9504e0ac77f-cilium-cgroup\") pod \"cilium-mzt28\" (UID: \"be50b606-33c8-4920-9ceb-e9504e0ac77f\") " pod="kube-system/cilium-mzt28"
Jan 30 12:58:32.676399 kubelet[1802]: I0130 12:58:32.675990 1802 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"clustermesh-secrets\" (UniqueName: \"kubernetes.io/secret/be50b606-33c8-4920-9ceb-e9504e0ac77f-clustermesh-secrets\") pod \"cilium-mzt28\" (UID: \"be50b606-33c8-4920-9ceb-e9504e0ac77f\") " pod="kube-system/cilium-mzt28"
Jan 30 12:58:32.676399 kubelet[1802]: I0130 12:58:32.676023 1802 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cilium-config-path\" (UniqueName: \"kubernetes.io/configmap/be50b606-33c8-4920-9ceb-e9504e0ac77f-cilium-config-path\") pod \"cilium-mzt28\" (UID: \"be50b606-33c8-4920-9ceb-e9504e0ac77f\") " pod="kube-system/cilium-mzt28"
Jan 30 12:58:32.776158 kubelet[1802]: E0130 12:58:32.775942 1802 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 67.207.67.2 67.207.67.3 67.207.67.2"
Jan 30 12:58:32.780000 containerd[1470]: time="2025-01-30T12:58:32.779427014Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:cilium-operator-6c4d7847fc-r986t,Uid:d35b32c0-150c-4d73-bfee-606ed7953272,Namespace:kube-system,Attempt:0,}"
Jan 30 12:58:32.832819 containerd[1470]: time="2025-01-30T12:58:32.832656815Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1
Jan 30 12:58:32.833130 containerd[1470]: time="2025-01-30T12:58:32.832844103Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1
Jan 30 12:58:32.833130 containerd[1470]: time="2025-01-30T12:58:32.832871585Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
Jan 30 12:58:32.833377 containerd[1470]: time="2025-01-30T12:58:32.833150843Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
Jan 30 12:58:32.856539 systemd[1]: Started cri-containerd-8f9ce25e91fca73ecb3b262b4c663161700efd3b552638ef404eb3542d6d9566.scope - libcontainer container 8f9ce25e91fca73ecb3b262b4c663161700efd3b552638ef404eb3542d6d9566.
Jan 30 12:58:32.881400 kubelet[1802]: E0130 12:58:32.881346 1802 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 67.207.67.2 67.207.67.3 67.207.67.2"
Jan 30 12:58:32.883254 containerd[1470]: time="2025-01-30T12:58:32.883161985Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:cilium-mzt28,Uid:be50b606-33c8-4920-9ceb-e9504e0ac77f,Namespace:kube-system,Attempt:0,}"
Jan 30 12:58:32.918854 containerd[1470]: time="2025-01-30T12:58:32.918428785Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1
Jan 30 12:58:32.918854 containerd[1470]: time="2025-01-30T12:58:32.918588536Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1
Jan 30 12:58:32.918854 containerd[1470]: time="2025-01-30T12:58:32.918615711Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
Jan 30 12:58:32.918854 containerd[1470]: time="2025-01-30T12:58:32.918755231Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
Jan 30 12:58:32.929562 containerd[1470]: time="2025-01-30T12:58:32.929509339Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:cilium-operator-6c4d7847fc-r986t,Uid:d35b32c0-150c-4d73-bfee-606ed7953272,Namespace:kube-system,Attempt:0,} returns sandbox id \"8f9ce25e91fca73ecb3b262b4c663161700efd3b552638ef404eb3542d6d9566\""
Jan 30 12:58:32.930702 kubelet[1802]: E0130 12:58:32.930676 1802 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 67.207.67.2 67.207.67.3 67.207.67.2"
Jan 30 12:58:32.931956 containerd[1470]: time="2025-01-30T12:58:32.931913474Z" level=info msg="PullImage \"quay.io/cilium/operator-generic:v1.12.5@sha256:b296eb7f0f7656a5cc19724f40a8a7121b7fd725278b7d61dc91fe0b7ffd7c0e\""
Jan 30 12:58:32.953647 systemd[1]: Started cri-containerd-d61de548cbcd3cdca4a9ba09a54bdd794b2cec89e5d5839ae14baea7a3035348.scope - libcontainer container d61de548cbcd3cdca4a9ba09a54bdd794b2cec89e5d5839ae14baea7a3035348.
Jan 30 12:58:32.991332 containerd[1470]: time="2025-01-30T12:58:32.991109250Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:cilium-mzt28,Uid:be50b606-33c8-4920-9ceb-e9504e0ac77f,Namespace:kube-system,Attempt:0,} returns sandbox id \"d61de548cbcd3cdca4a9ba09a54bdd794b2cec89e5d5839ae14baea7a3035348\""
Jan 30 12:58:32.992599 kubelet[1802]: E0130 12:58:32.992303 1802 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 67.207.67.2 67.207.67.3 67.207.67.2"
Jan 30 12:58:32.996051 containerd[1470]: time="2025-01-30T12:58:32.995743096Z" level=info msg="CreateContainer within sandbox \"d61de548cbcd3cdca4a9ba09a54bdd794b2cec89e5d5839ae14baea7a3035348\" for container &ContainerMetadata{Name:mount-cgroup,Attempt:0,}"
Jan 30 12:58:33.012633 containerd[1470]: time="2025-01-30T12:58:33.012529222Z" level=info msg="CreateContainer within sandbox \"d61de548cbcd3cdca4a9ba09a54bdd794b2cec89e5d5839ae14baea7a3035348\" for &ContainerMetadata{Name:mount-cgroup,Attempt:0,} returns container id \"1a91a5039b8415d13d83ba94b237bd00dd3259c0bc8a0a6aeb41e9add1255b6f\""
Jan 30 12:58:33.013739 containerd[1470]: time="2025-01-30T12:58:33.013623909Z" level=info msg="StartContainer for \"1a91a5039b8415d13d83ba94b237bd00dd3259c0bc8a0a6aeb41e9add1255b6f\""
Jan 30 12:58:33.024751 kubelet[1802]: E0130 12:58:33.024649 1802 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests"
Jan 30 12:58:33.048523 systemd[1]: Started cri-containerd-1a91a5039b8415d13d83ba94b237bd00dd3259c0bc8a0a6aeb41e9add1255b6f.scope - libcontainer container 1a91a5039b8415d13d83ba94b237bd00dd3259c0bc8a0a6aeb41e9add1255b6f.
Jan 30 12:58:33.086994 containerd[1470]: time="2025-01-30T12:58:33.085546725Z" level=info msg="StartContainer for \"1a91a5039b8415d13d83ba94b237bd00dd3259c0bc8a0a6aeb41e9add1255b6f\" returns successfully"
Jan 30 12:58:33.101013 systemd[1]: cri-containerd-1a91a5039b8415d13d83ba94b237bd00dd3259c0bc8a0a6aeb41e9add1255b6f.scope: Deactivated successfully.
Jan 30 12:58:33.144168 containerd[1470]: time="2025-01-30T12:58:33.144074875Z" level=info msg="shim disconnected" id=1a91a5039b8415d13d83ba94b237bd00dd3259c0bc8a0a6aeb41e9add1255b6f namespace=k8s.io
Jan 30 12:58:33.144168 containerd[1470]: time="2025-01-30T12:58:33.144142895Z" level=warning msg="cleaning up after shim disconnected" id=1a91a5039b8415d13d83ba94b237bd00dd3259c0bc8a0a6aeb41e9add1255b6f namespace=k8s.io
Jan 30 12:58:33.144168 containerd[1470]: time="2025-01-30T12:58:33.144152691Z" level=info msg="cleaning up dead shim" namespace=k8s.io
Jan 30 12:58:33.401301 kubelet[1802]: E0130 12:58:33.401115 1802 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 67.207.67.2 67.207.67.3 67.207.67.2"
Jan 30 12:58:33.404600 containerd[1470]: time="2025-01-30T12:58:33.404529553Z" level=info msg="CreateContainer within sandbox \"d61de548cbcd3cdca4a9ba09a54bdd794b2cec89e5d5839ae14baea7a3035348\" for container &ContainerMetadata{Name:apply-sysctl-overwrites,Attempt:0,}"
Jan 30 12:58:33.420973 containerd[1470]: time="2025-01-30T12:58:33.420887602Z" level=info msg="CreateContainer within sandbox \"d61de548cbcd3cdca4a9ba09a54bdd794b2cec89e5d5839ae14baea7a3035348\" for &ContainerMetadata{Name:apply-sysctl-overwrites,Attempt:0,} returns container id \"58fbb01bf39620ed57ad5be017b5812ac6286fc6b05bfed598fc6b5af1df5eee\""
Jan 30 12:58:33.422129 containerd[1470]: time="2025-01-30T12:58:33.422066960Z" level=info msg="StartContainer for \"58fbb01bf39620ed57ad5be017b5812ac6286fc6b05bfed598fc6b5af1df5eee\""
Jan 30 12:58:33.463560 systemd[1]: Started cri-containerd-58fbb01bf39620ed57ad5be017b5812ac6286fc6b05bfed598fc6b5af1df5eee.scope - libcontainer container 58fbb01bf39620ed57ad5be017b5812ac6286fc6b05bfed598fc6b5af1df5eee.
Jan 30 12:58:33.506668 containerd[1470]: time="2025-01-30T12:58:33.506563470Z" level=info msg="StartContainer for \"58fbb01bf39620ed57ad5be017b5812ac6286fc6b05bfed598fc6b5af1df5eee\" returns successfully"
Jan 30 12:58:33.516861 systemd[1]: cri-containerd-58fbb01bf39620ed57ad5be017b5812ac6286fc6b05bfed598fc6b5af1df5eee.scope: Deactivated successfully.
Jan 30 12:58:33.548806 containerd[1470]: time="2025-01-30T12:58:33.548706405Z" level=info msg="shim disconnected" id=58fbb01bf39620ed57ad5be017b5812ac6286fc6b05bfed598fc6b5af1df5eee namespace=k8s.io
Jan 30 12:58:33.549288 containerd[1470]: time="2025-01-30T12:58:33.548782459Z" level=warning msg="cleaning up after shim disconnected" id=58fbb01bf39620ed57ad5be017b5812ac6286fc6b05bfed598fc6b5af1df5eee namespace=k8s.io
Jan 30 12:58:33.549288 containerd[1470]: time="2025-01-30T12:58:33.549047771Z" level=info msg="cleaning up dead shim" namespace=k8s.io
Jan 30 12:58:34.025757 kubelet[1802]: E0130 12:58:34.025687 1802 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests"
Jan 30 12:58:34.411730 kubelet[1802]: E0130 12:58:34.409957 1802 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 67.207.67.2 67.207.67.3 67.207.67.2"
Jan 30 12:58:34.413082 containerd[1470]: time="2025-01-30T12:58:34.413033139Z" level=info msg="CreateContainer within sandbox \"d61de548cbcd3cdca4a9ba09a54bdd794b2cec89e5d5839ae14baea7a3035348\" for container &ContainerMetadata{Name:mount-bpf-fs,Attempt:0,}"
Jan 30 12:58:34.436275 containerd[1470]: time="2025-01-30T12:58:34.436038121Z" level=info msg="CreateContainer within sandbox \"d61de548cbcd3cdca4a9ba09a54bdd794b2cec89e5d5839ae14baea7a3035348\" for &ContainerMetadata{Name:mount-bpf-fs,Attempt:0,} returns container id \"0bc192e253485003f289882467706f393f46ea73a12a70db6c6da9c03e11e9b4\""
Jan 30 12:58:34.437941 containerd[1470]: time="2025-01-30T12:58:34.437861808Z" level=info msg="StartContainer for \"0bc192e253485003f289882467706f393f46ea73a12a70db6c6da9c03e11e9b4\""
Jan 30 12:58:34.486269 systemd[1]: run-containerd-runc-k8s.io-0bc192e253485003f289882467706f393f46ea73a12a70db6c6da9c03e11e9b4-runc.IiHJcI.mount: Deactivated successfully.
Jan 30 12:58:34.496584 systemd[1]: Started cri-containerd-0bc192e253485003f289882467706f393f46ea73a12a70db6c6da9c03e11e9b4.scope - libcontainer container 0bc192e253485003f289882467706f393f46ea73a12a70db6c6da9c03e11e9b4.
Jan 30 12:58:34.538754 containerd[1470]: time="2025-01-30T12:58:34.538683312Z" level=info msg="StartContainer for \"0bc192e253485003f289882467706f393f46ea73a12a70db6c6da9c03e11e9b4\" returns successfully"
Jan 30 12:58:34.543037 systemd[1]: cri-containerd-0bc192e253485003f289882467706f393f46ea73a12a70db6c6da9c03e11e9b4.scope: Deactivated successfully.
Jan 30 12:58:34.578078 containerd[1470]: time="2025-01-30T12:58:34.577894807Z" level=info msg="shim disconnected" id=0bc192e253485003f289882467706f393f46ea73a12a70db6c6da9c03e11e9b4 namespace=k8s.io
Jan 30 12:58:34.578506 containerd[1470]: time="2025-01-30T12:58:34.578035044Z" level=warning msg="cleaning up after shim disconnected" id=0bc192e253485003f289882467706f393f46ea73a12a70db6c6da9c03e11e9b4 namespace=k8s.io
Jan 30 12:58:34.578506 containerd[1470]: time="2025-01-30T12:58:34.578494241Z" level=info msg="cleaning up dead shim" namespace=k8s.io
Jan 30 12:58:34.704974 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-0bc192e253485003f289882467706f393f46ea73a12a70db6c6da9c03e11e9b4-rootfs.mount: Deactivated successfully.
Jan 30 12:58:35.026879 kubelet[1802]: E0130 12:58:35.026709 1802 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests"
Jan 30 12:58:35.139645 kubelet[1802]: E0130 12:58:35.139560 1802 kubelet.go:3008] "Container runtime network not ready" networkReady="NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized"
Jan 30 12:58:35.223117 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount1574116622.mount: Deactivated successfully.
Jan 30 12:58:35.416735 kubelet[1802]: E0130 12:58:35.416323 1802 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 67.207.67.2 67.207.67.3 67.207.67.2"
Jan 30 12:58:35.419697 containerd[1470]: time="2025-01-30T12:58:35.419646603Z" level=info msg="CreateContainer within sandbox \"d61de548cbcd3cdca4a9ba09a54bdd794b2cec89e5d5839ae14baea7a3035348\" for container &ContainerMetadata{Name:clean-cilium-state,Attempt:0,}"
Jan 30 12:58:35.452298 containerd[1470]: time="2025-01-30T12:58:35.452242013Z" level=info msg="CreateContainer within sandbox \"d61de548cbcd3cdca4a9ba09a54bdd794b2cec89e5d5839ae14baea7a3035348\" for &ContainerMetadata{Name:clean-cilium-state,Attempt:0,} returns container id \"0b8686528eec3a8dc1a799b546d2cc385f7aa466889aea55f0cd23ac98af00c3\""
Jan 30 12:58:35.453292 containerd[1470]: time="2025-01-30T12:58:35.453260468Z" level=info msg="StartContainer for \"0b8686528eec3a8dc1a799b546d2cc385f7aa466889aea55f0cd23ac98af00c3\""
Jan 30 12:58:35.502478 systemd[1]: Started cri-containerd-0b8686528eec3a8dc1a799b546d2cc385f7aa466889aea55f0cd23ac98af00c3.scope - libcontainer container 0b8686528eec3a8dc1a799b546d2cc385f7aa466889aea55f0cd23ac98af00c3.
Jan 30 12:58:35.551584 systemd[1]: cri-containerd-0b8686528eec3a8dc1a799b546d2cc385f7aa466889aea55f0cd23ac98af00c3.scope: Deactivated successfully.
Jan 30 12:58:35.560268 containerd[1470]: time="2025-01-30T12:58:35.560175533Z" level=info msg="StartContainer for \"0b8686528eec3a8dc1a799b546d2cc385f7aa466889aea55f0cd23ac98af00c3\" returns successfully"
Jan 30 12:58:35.590409 containerd[1470]: time="2025-01-30T12:58:35.590325899Z" level=info msg="shim disconnected" id=0b8686528eec3a8dc1a799b546d2cc385f7aa466889aea55f0cd23ac98af00c3 namespace=k8s.io
Jan 30 12:58:35.591410 containerd[1470]: time="2025-01-30T12:58:35.591028748Z" level=warning msg="cleaning up after shim disconnected" id=0b8686528eec3a8dc1a799b546d2cc385f7aa466889aea55f0cd23ac98af00c3 namespace=k8s.io
Jan 30 12:58:35.591410 containerd[1470]: time="2025-01-30T12:58:35.591051990Z" level=info msg="cleaning up dead shim" namespace=k8s.io
Jan 30 12:58:36.027057 kubelet[1802]: E0130 12:58:36.026973 1802 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests"
Jan 30 12:58:36.423407 kubelet[1802]: E0130 12:58:36.422676 1802 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 67.207.67.2 67.207.67.3 67.207.67.2"
Jan 30 12:58:36.425192 containerd[1470]: time="2025-01-30T12:58:36.425128760Z" level=info msg="CreateContainer within sandbox \"d61de548cbcd3cdca4a9ba09a54bdd794b2cec89e5d5839ae14baea7a3035348\" for container &ContainerMetadata{Name:cilium-agent,Attempt:0,}"
Jan 30 12:58:36.452902 containerd[1470]: time="2025-01-30T12:58:36.452719850Z" level=info msg="CreateContainer within sandbox \"d61de548cbcd3cdca4a9ba09a54bdd794b2cec89e5d5839ae14baea7a3035348\" for &ContainerMetadata{Name:cilium-agent,Attempt:0,} returns container id \"194889f054a6a45fdbe8b9e51b1734c364fabc3caeb1a22efd3341732300b7b1\""
Jan 30 12:58:36.454109 containerd[1470]: time="2025-01-30T12:58:36.453895718Z" level=info msg="StartContainer for \"194889f054a6a45fdbe8b9e51b1734c364fabc3caeb1a22efd3341732300b7b1\""
Jan 30 12:58:36.502514 systemd[1]: Started cri-containerd-194889f054a6a45fdbe8b9e51b1734c364fabc3caeb1a22efd3341732300b7b1.scope - libcontainer container 194889f054a6a45fdbe8b9e51b1734c364fabc3caeb1a22efd3341732300b7b1.
Jan 30 12:58:36.546442 containerd[1470]: time="2025-01-30T12:58:36.544390696Z" level=info msg="StartContainer for \"194889f054a6a45fdbe8b9e51b1734c364fabc3caeb1a22efd3341732300b7b1\" returns successfully"
Jan 30 12:58:36.704504 systemd[1]: run-containerd-runc-k8s.io-194889f054a6a45fdbe8b9e51b1734c364fabc3caeb1a22efd3341732300b7b1-runc.qxDAFb.mount: Deactivated successfully.
Jan 30 12:58:37.027886 kubelet[1802]: E0130 12:58:37.027776 1802 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests"
Jan 30 12:58:37.135264 kernel: alg: No test for seqiv(rfc4106(gcm(aes))) (seqiv(rfc4106-gcm-aesni))
Jan 30 12:58:37.443691 kubelet[1802]: E0130 12:58:37.443414 1802 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 67.207.67.2 67.207.67.3 67.207.67.2"
Jan 30 12:58:37.823972 containerd[1470]: time="2025-01-30T12:58:37.823882486Z" level=info msg="ImageCreate event name:\"quay.io/cilium/operator-generic@sha256:b296eb7f0f7656a5cc19724f40a8a7121b7fd725278b7d61dc91fe0b7ffd7c0e\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Jan 30 12:58:37.825451 containerd[1470]: time="2025-01-30T12:58:37.825362581Z" level=info msg="stop pulling image quay.io/cilium/operator-generic@sha256:b296eb7f0f7656a5cc19724f40a8a7121b7fd725278b7d61dc91fe0b7ffd7c0e: active requests=0, bytes read=18904197"
Jan 30 12:58:37.826702 containerd[1470]: time="2025-01-30T12:58:37.826552808Z" level=info msg="ImageCreate event name:\"sha256:ed355de9f59fe391dbe53f3c7c7a60baab3c3a9b7549aa54d10b87fff7dacf7c\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Jan 30 12:58:37.829886 containerd[1470]: time="2025-01-30T12:58:37.829818918Z" level=info msg="Pulled image \"quay.io/cilium/operator-generic:v1.12.5@sha256:b296eb7f0f7656a5cc19724f40a8a7121b7fd725278b7d61dc91fe0b7ffd7c0e\" with image id \"sha256:ed355de9f59fe391dbe53f3c7c7a60baab3c3a9b7549aa54d10b87fff7dacf7c\", repo tag \"\", repo digest \"quay.io/cilium/operator-generic@sha256:b296eb7f0f7656a5cc19724f40a8a7121b7fd725278b7d61dc91fe0b7ffd7c0e\", size \"18897442\" in 4.897740467s"
Jan 30 12:58:37.830262 containerd[1470]: time="2025-01-30T12:58:37.830092056Z" level=info msg="PullImage \"quay.io/cilium/operator-generic:v1.12.5@sha256:b296eb7f0f7656a5cc19724f40a8a7121b7fd725278b7d61dc91fe0b7ffd7c0e\" returns image reference \"sha256:ed355de9f59fe391dbe53f3c7c7a60baab3c3a9b7549aa54d10b87fff7dacf7c\""
Jan 30 12:58:37.833314 containerd[1470]: time="2025-01-30T12:58:37.833055203Z" level=info msg="CreateContainer within sandbox \"8f9ce25e91fca73ecb3b262b4c663161700efd3b552638ef404eb3542d6d9566\" for container &ContainerMetadata{Name:cilium-operator,Attempt:0,}"
Jan 30 12:58:37.864743 containerd[1470]: time="2025-01-30T12:58:37.864676274Z" level=info msg="CreateContainer within sandbox \"8f9ce25e91fca73ecb3b262b4c663161700efd3b552638ef404eb3542d6d9566\" for &ContainerMetadata{Name:cilium-operator,Attempt:0,} returns container id \"574726af4331ab702959db5c7f4a6df3e07038afff8ddbfe19009dcbe4183877\""
Jan 30 12:58:37.865919 containerd[1470]: time="2025-01-30T12:58:37.865881906Z" level=info msg="StartContainer for \"574726af4331ab702959db5c7f4a6df3e07038afff8ddbfe19009dcbe4183877\""
Jan 30 12:58:37.924600 systemd[1]: Started cri-containerd-574726af4331ab702959db5c7f4a6df3e07038afff8ddbfe19009dcbe4183877.scope - libcontainer container 574726af4331ab702959db5c7f4a6df3e07038afff8ddbfe19009dcbe4183877.
Jan 30 12:58:37.977988 containerd[1470]: time="2025-01-30T12:58:37.977888172Z" level=info msg="StartContainer for \"574726af4331ab702959db5c7f4a6df3e07038afff8ddbfe19009dcbe4183877\" returns successfully"
Jan 30 12:58:38.028672 kubelet[1802]: E0130 12:58:38.028579 1802 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests"
Jan 30 12:58:38.446886 kubelet[1802]: E0130 12:58:38.446843 1802 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 67.207.67.2 67.207.67.3 67.207.67.2"
Jan 30 12:58:38.460899 kubelet[1802]: I0130 12:58:38.460813 1802 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/cilium-mzt28" podStartSLOduration=6.460789618 podStartE2EDuration="6.460789618s" podCreationTimestamp="2025-01-30 12:58:32 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-01-30 12:58:37.470991312 +0000 UTC m=+67.823587281" watchObservedRunningTime="2025-01-30 12:58:38.460789618 +0000 UTC m=+68.813385563"
Jan 30 12:58:38.852083 systemd[1]: run-containerd-runc-k8s.io-574726af4331ab702959db5c7f4a6df3e07038afff8ddbfe19009dcbe4183877-runc.LUM8vS.mount: Deactivated successfully.
Jan 30 12:58:38.882794 kubelet[1802]: E0130 12:58:38.882744 1802 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 67.207.67.2 67.207.67.3 67.207.67.2"
Jan 30 12:58:39.029682 kubelet[1802]: E0130 12:58:39.029605 1802 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests"
Jan 30 12:58:39.455885 kubelet[1802]: E0130 12:58:39.455830 1802 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 67.207.67.2 67.207.67.3 67.207.67.2"
Jan 30 12:58:40.030673 kubelet[1802]: E0130 12:58:40.030591 1802 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests"
Jan 30 12:58:40.813041 systemd-networkd[1374]: lxc_health: Link UP
Jan 30 12:58:40.823777 systemd-networkd[1374]: lxc_health: Gained carrier
Jan 30 12:58:40.883543 kubelet[1802]: E0130 12:58:40.883494 1802 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 67.207.67.2 67.207.67.3 67.207.67.2"
Jan 30 12:58:40.929866 kubelet[1802]: I0130 12:58:40.929775 1802 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/cilium-operator-6c4d7847fc-r986t" podStartSLOduration=4.029928958 podStartE2EDuration="8.929749609s" podCreationTimestamp="2025-01-30 12:58:32 +0000 UTC" firstStartedPulling="2025-01-30 12:58:32.931486245 +0000 UTC m=+63.284082150" lastFinishedPulling="2025-01-30 12:58:37.831306872 +0000 UTC m=+68.183902801" observedRunningTime="2025-01-30 12:58:38.461366565 +0000 UTC m=+68.813962470" watchObservedRunningTime="2025-01-30 12:58:40.929749609 +0000 UTC m=+71.282345546"
Jan 30 12:58:41.031237 kubelet[1802]: E0130 12:58:41.031135 1802 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests"
Jan 30 12:58:41.468477 kubelet[1802]: E0130 12:58:41.468437 1802 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 67.207.67.2 67.207.67.3 67.207.67.2"
Jan 30 12:58:41.707890 systemd[1]: run-containerd-runc-k8s.io-194889f054a6a45fdbe8b9e51b1734c364fabc3caeb1a22efd3341732300b7b1-runc.QJYNK0.mount: Deactivated successfully.
Jan 30 12:58:42.031591 kubelet[1802]: E0130 12:58:42.031505 1802 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests"
Jan 30 12:58:42.359943 systemd-networkd[1374]: lxc_health: Gained IPv6LL
Jan 30 12:58:42.471926 kubelet[1802]: E0130 12:58:42.471800 1802 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 67.207.67.2 67.207.67.3 67.207.67.2"
Jan 30 12:58:43.031804 kubelet[1802]: E0130 12:58:43.031724 1802 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests"
Jan 30 12:58:44.009503 systemd[1]: run-containerd-runc-k8s.io-194889f054a6a45fdbe8b9e51b1734c364fabc3caeb1a22efd3341732300b7b1-runc.j6KSap.mount: Deactivated successfully.
Jan 30 12:58:44.032864 kubelet[1802]: E0130 12:58:44.032778 1802 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests"
Jan 30 12:58:45.034459 kubelet[1802]: E0130 12:58:45.034391 1802 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests"
Jan 30 12:58:45.143240 kubelet[1802]: E0130 12:58:45.142267 1802 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 67.207.67.2 67.207.67.3 67.207.67.2"
Jan 30 12:58:46.034944 kubelet[1802]: E0130 12:58:46.034794 1802 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests"
Jan 30 12:58:47.035117 kubelet[1802]: E0130 12:58:47.035027 1802 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests"
Jan 30 12:58:48.035998 kubelet[1802]: E0130 12:58:48.035912 1802 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests"
Jan 30 12:58:48.447771 systemd[1]: run-containerd-runc-k8s.io-194889f054a6a45fdbe8b9e51b1734c364fabc3caeb1a22efd3341732300b7b1-runc.mHoiS2.mount: Deactivated successfully.
Jan 30 12:58:49.037114 kubelet[1802]: E0130 12:58:49.037043 1802 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests"
Jan 30 12:58:49.967246 kubelet[1802]: E0130 12:58:49.967162 1802 file.go:104] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests"
Jan 30 12:58:50.037755 kubelet[1802]: E0130 12:58:50.037653 1802 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests"