Jan 30 13:59:44.968998 kernel: Linux version 6.6.74-flatcar (build@pony-truck.infra.kinvolk.io) (x86_64-cros-linux-gnu-gcc (Gentoo Hardened 13.3.1_p20240614 p17) 13.3.1 20240614, GNU ld (Gentoo 2.42 p3) 2.42.0) #1 SMP PREEMPT_DYNAMIC Wed Jan 29 10:09:32 -00 2025 Jan 30 13:59:44.969029 kernel: Command line: BOOT_IMAGE=/flatcar/vmlinuz-a mount.usr=/dev/mapper/usr verity.usr=PARTUUID=7130c94a-213a-4e5a-8e26-6cce9662f132 rootflags=rw mount.usrflags=ro consoleblank=0 root=LABEL=ROOT console=ttyS0,115200n8 console=tty0 flatcar.first_boot=detected flatcar.oem.id=digitalocean verity.usrhash=befc9792b021bef43c896e00e1d5172b6224dbafc9b6c92b267e5e544378e681 Jan 30 13:59:44.969046 kernel: BIOS-provided physical RAM map: Jan 30 13:59:44.969055 kernel: BIOS-e820: [mem 0x0000000000000000-0x000000000009fbff] usable Jan 30 13:59:44.969064 kernel: BIOS-e820: [mem 0x000000000009fc00-0x000000000009ffff] reserved Jan 30 13:59:44.969074 kernel: BIOS-e820: [mem 0x00000000000f0000-0x00000000000fffff] reserved Jan 30 13:59:44.969085 kernel: BIOS-e820: [mem 0x0000000000100000-0x000000007ffd7fff] usable Jan 30 13:59:44.969095 kernel: BIOS-e820: [mem 0x000000007ffd8000-0x000000007fffffff] reserved Jan 30 13:59:44.969105 kernel: BIOS-e820: [mem 0x00000000feffc000-0x00000000feffffff] reserved Jan 30 13:59:44.969118 kernel: BIOS-e820: [mem 0x00000000fffc0000-0x00000000ffffffff] reserved Jan 30 13:59:44.969127 kernel: NX (Execute Disable) protection: active Jan 30 13:59:44.969137 kernel: APIC: Static calls initialized Jan 30 13:59:44.969147 kernel: SMBIOS 2.8 present. Jan 30 13:59:44.969157 kernel: DMI: DigitalOcean Droplet/Droplet, BIOS 20171212 12/12/2017 Jan 30 13:59:44.969170 kernel: Hypervisor detected: KVM Jan 30 13:59:44.969184 kernel: kvm-clock: Using msrs 4b564d01 and 4b564d00 Jan 30 13:59:44.969195 kernel: kvm-clock: using sched offset of 2973947571 cycles Jan 30 13:59:44.969207 kernel: clocksource: kvm-clock: mask: 0xffffffffffffffff max_cycles: 0x1cd42e4dffb, max_idle_ns: 881590591483 ns Jan 30 13:59:44.969219 kernel: tsc: Detected 2494.138 MHz processor Jan 30 13:59:44.969230 kernel: e820: update [mem 0x00000000-0x00000fff] usable ==> reserved Jan 30 13:59:44.969241 kernel: e820: remove [mem 0x000a0000-0x000fffff] usable Jan 30 13:59:44.969252 kernel: last_pfn = 0x7ffd8 max_arch_pfn = 0x400000000 Jan 30 13:59:44.969264 kernel: MTRR map: 4 entries (3 fixed + 1 variable; max 19), built from 8 variable MTRRs Jan 30 13:59:44.969275 kernel: x86/PAT: Configuration [0-7]: WB WC UC- UC WB WP UC- WT Jan 30 13:59:44.969289 kernel: ACPI: Early table checksum verification disabled Jan 30 13:59:44.969301 kernel: ACPI: RSDP 0x00000000000F5A50 000014 (v00 BOCHS ) Jan 30 13:59:44.969312 kernel: ACPI: RSDT 0x000000007FFE1986 000038 (v01 BOCHS BXPC 00000001 BXPC 00000001) Jan 30 13:59:44.969323 kernel: ACPI: FACP 0x000000007FFE176A 000074 (v01 BOCHS BXPC 00000001 BXPC 00000001) Jan 30 13:59:44.969335 kernel: ACPI: DSDT 0x000000007FFE0040 00172A (v01 BOCHS BXPC 00000001 BXPC 00000001) Jan 30 13:59:44.969346 kernel: ACPI: FACS 0x000000007FFE0000 000040 Jan 30 13:59:44.969357 kernel: ACPI: APIC 0x000000007FFE17DE 000080 (v01 BOCHS BXPC 00000001 BXPC 00000001) Jan 30 13:59:44.969368 kernel: ACPI: HPET 0x000000007FFE185E 000038 (v01 BOCHS BXPC 00000001 BXPC 00000001) Jan 30 13:59:44.969379 kernel: ACPI: SRAT 0x000000007FFE1896 0000C8 (v01 BOCHS BXPC 00000001 BXPC 00000001) Jan 30 13:59:44.969394 kernel: ACPI: WAET 0x000000007FFE195E 000028 (v01 BOCHS BXPC 00000001 BXPC 00000001) Jan 30 13:59:44.969405 kernel: ACPI: Reserving FACP 
table memory at [mem 0x7ffe176a-0x7ffe17dd] Jan 30 13:59:44.969417 kernel: ACPI: Reserving DSDT table memory at [mem 0x7ffe0040-0x7ffe1769] Jan 30 13:59:44.969428 kernel: ACPI: Reserving FACS table memory at [mem 0x7ffe0000-0x7ffe003f] Jan 30 13:59:44.969439 kernel: ACPI: Reserving APIC table memory at [mem 0x7ffe17de-0x7ffe185d] Jan 30 13:59:44.969450 kernel: ACPI: Reserving HPET table memory at [mem 0x7ffe185e-0x7ffe1895] Jan 30 13:59:44.969461 kernel: ACPI: Reserving SRAT table memory at [mem 0x7ffe1896-0x7ffe195d] Jan 30 13:59:44.969480 kernel: ACPI: Reserving WAET table memory at [mem 0x7ffe195e-0x7ffe1985] Jan 30 13:59:44.969492 kernel: SRAT: PXM 0 -> APIC 0x00 -> Node 0 Jan 30 13:59:44.969504 kernel: SRAT: PXM 0 -> APIC 0x01 -> Node 0 Jan 30 13:59:44.969516 kernel: ACPI: SRAT: Node 0 PXM 0 [mem 0x00000000-0x0009ffff] Jan 30 13:59:44.969528 kernel: ACPI: SRAT: Node 0 PXM 0 [mem 0x00100000-0x7fffffff] Jan 30 13:59:44.969540 kernel: NUMA: Node 0 [mem 0x00000000-0x0009ffff] + [mem 0x00100000-0x7ffd7fff] -> [mem 0x00000000-0x7ffd7fff] Jan 30 13:59:44.969575 kernel: NODE_DATA(0) allocated [mem 0x7ffd2000-0x7ffd7fff] Jan 30 13:59:44.969591 kernel: Zone ranges: Jan 30 13:59:44.969603 kernel: DMA [mem 0x0000000000001000-0x0000000000ffffff] Jan 30 13:59:44.969615 kernel: DMA32 [mem 0x0000000001000000-0x000000007ffd7fff] Jan 30 13:59:44.969626 kernel: Normal empty Jan 30 13:59:44.969638 kernel: Movable zone start for each node Jan 30 13:59:44.969650 kernel: Early memory node ranges Jan 30 13:59:44.969662 kernel: node 0: [mem 0x0000000000001000-0x000000000009efff] Jan 30 13:59:44.969674 kernel: node 0: [mem 0x0000000000100000-0x000000007ffd7fff] Jan 30 13:59:44.969686 kernel: Initmem setup node 0 [mem 0x0000000000001000-0x000000007ffd7fff] Jan 30 13:59:44.969701 kernel: On node 0, zone DMA: 1 pages in unavailable ranges Jan 30 13:59:44.969713 kernel: On node 0, zone DMA: 97 pages in unavailable ranges Jan 30 13:59:44.969725 kernel: On node 0, zone DMA32: 40 pages in unavailable ranges Jan 30 13:59:44.969737 kernel: ACPI: PM-Timer IO Port: 0x608 Jan 30 13:59:44.969749 kernel: ACPI: LAPIC_NMI (acpi_id[0xff] dfl dfl lint[0x1]) Jan 30 13:59:44.969761 kernel: IOAPIC[0]: apic_id 0, version 17, address 0xfec00000, GSI 0-23 Jan 30 13:59:44.969773 kernel: ACPI: INT_SRC_OVR (bus 0 bus_irq 0 global_irq 2 dfl dfl) Jan 30 13:59:44.969785 kernel: ACPI: INT_SRC_OVR (bus 0 bus_irq 5 global_irq 5 high level) Jan 30 13:59:44.969797 kernel: ACPI: INT_SRC_OVR (bus 0 bus_irq 9 global_irq 9 high level) Jan 30 13:59:44.969812 kernel: ACPI: INT_SRC_OVR (bus 0 bus_irq 10 global_irq 10 high level) Jan 30 13:59:44.969824 kernel: ACPI: INT_SRC_OVR (bus 0 bus_irq 11 global_irq 11 high level) Jan 30 13:59:44.969835 kernel: ACPI: Using ACPI (MADT) for SMP configuration information Jan 30 13:59:44.969847 kernel: ACPI: HPET id: 0x8086a201 base: 0xfed00000 Jan 30 13:59:44.969859 kernel: TSC deadline timer available Jan 30 13:59:44.969871 kernel: smpboot: Allowing 2 CPUs, 0 hotplug CPUs Jan 30 13:59:44.969883 kernel: kvm-guest: APIC: eoi() replaced with kvm_guest_apic_eoi_write() Jan 30 13:59:44.969895 kernel: [mem 0x80000000-0xfeffbfff] available for PCI devices Jan 30 13:59:44.969907 kernel: Booting paravirtualized kernel on KVM Jan 30 13:59:44.969922 kernel: clocksource: refined-jiffies: mask: 0xffffffff max_cycles: 0xffffffff, max_idle_ns: 1910969940391419 ns Jan 30 13:59:44.969934 kernel: setup_percpu: NR_CPUS:512 nr_cpumask_bits:2 nr_cpu_ids:2 nr_node_ids:1 Jan 30 13:59:44.969946 kernel: percpu: Embedded 58 pages/cpu 
s197032 r8192 d32344 u1048576 Jan 30 13:59:44.969958 kernel: pcpu-alloc: s197032 r8192 d32344 u1048576 alloc=1*2097152 Jan 30 13:59:44.969970 kernel: pcpu-alloc: [0] 0 1 Jan 30 13:59:44.969982 kernel: kvm-guest: PV spinlocks disabled, no host support Jan 30 13:59:44.969995 kernel: Kernel command line: rootflags=rw mount.usrflags=ro BOOT_IMAGE=/flatcar/vmlinuz-a mount.usr=/dev/mapper/usr verity.usr=PARTUUID=7130c94a-213a-4e5a-8e26-6cce9662f132 rootflags=rw mount.usrflags=ro consoleblank=0 root=LABEL=ROOT console=ttyS0,115200n8 console=tty0 flatcar.first_boot=detected flatcar.oem.id=digitalocean verity.usrhash=befc9792b021bef43c896e00e1d5172b6224dbafc9b6c92b267e5e544378e681 Jan 30 13:59:44.970007 kernel: Unknown kernel command line parameters "BOOT_IMAGE=/flatcar/vmlinuz-a", will be passed to user space. Jan 30 13:59:44.970022 kernel: random: crng init done Jan 30 13:59:44.970034 kernel: Dentry cache hash table entries: 262144 (order: 9, 2097152 bytes, linear) Jan 30 13:59:44.970046 kernel: Inode-cache hash table entries: 131072 (order: 8, 1048576 bytes, linear) Jan 30 13:59:44.970058 kernel: Fallback order for Node 0: 0 Jan 30 13:59:44.970070 kernel: Built 1 zonelists, mobility grouping on. Total pages: 515800 Jan 30 13:59:44.970082 kernel: Policy zone: DMA32 Jan 30 13:59:44.970094 kernel: mem auto-init: stack:off, heap alloc:off, heap free:off Jan 30 13:59:44.970106 kernel: Memory: 1971192K/2096600K available (12288K kernel code, 2301K rwdata, 22728K rodata, 42844K init, 2348K bss, 125148K reserved, 0K cma-reserved) Jan 30 13:59:44.970118 kernel: SLUB: HWalign=64, Order=0-3, MinObjects=0, CPUs=2, Nodes=1 Jan 30 13:59:44.970133 kernel: Kernel/User page tables isolation: enabled Jan 30 13:59:44.970145 kernel: ftrace: allocating 37921 entries in 149 pages Jan 30 13:59:44.970157 kernel: ftrace: allocated 149 pages with 4 groups Jan 30 13:59:44.970169 kernel: Dynamic Preempt: voluntary Jan 30 13:59:44.970181 kernel: rcu: Preemptible hierarchical RCU implementation. Jan 30 13:59:44.970194 kernel: rcu: RCU event tracing is enabled. Jan 30 13:59:44.970206 kernel: rcu: RCU restricting CPUs from NR_CPUS=512 to nr_cpu_ids=2. Jan 30 13:59:44.970218 kernel: Trampoline variant of Tasks RCU enabled. Jan 30 13:59:44.970231 kernel: Rude variant of Tasks RCU enabled. Jan 30 13:59:44.970246 kernel: Tracing variant of Tasks RCU enabled. Jan 30 13:59:44.970258 kernel: rcu: RCU calculated value of scheduler-enlistment delay is 100 jiffies. Jan 30 13:59:44.970270 kernel: rcu: Adjusting geometry for rcu_fanout_leaf=16, nr_cpu_ids=2 Jan 30 13:59:44.970282 kernel: NR_IRQS: 33024, nr_irqs: 440, preallocated irqs: 16 Jan 30 13:59:44.970294 kernel: rcu: srcu_init: Setting srcu_struct sizes based on contention. 
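The kernel command line echoed above (including the dm-verity arguments Flatcar uses to authenticate /usr) is easy to pick apart programmatically. Below is a minimal, hypothetical parser: it splits the line copied from this log into key/value pairs and pulls out the verity and root parameters; nothing beyond the logged string is assumed.

# Minimal sketch: split a kernel command line (as logged above) into key/value
# pairs and show the dm-verity parameters Flatcar passes for /usr.
# CMDLINE is copied verbatim from the boot log; bare flags get the value True.
import shlex

CMDLINE = (
    "BOOT_IMAGE=/flatcar/vmlinuz-a mount.usr=/dev/mapper/usr "
    "verity.usr=PARTUUID=7130c94a-213a-4e5a-8e26-6cce9662f132 rootflags=rw "
    "mount.usrflags=ro consoleblank=0 root=LABEL=ROOT console=ttyS0,115200n8 "
    "console=tty0 flatcar.first_boot=detected flatcar.oem.id=digitalocean "
    "verity.usrhash=befc9792b021bef43c896e00e1d5172b6224dbafc9b6c92b267e5e544378e681"
)

params = {}
for token in shlex.split(CMDLINE):
    key, sep, value = token.partition("=")
    # Repeated keys (e.g. console=) are collected into a list.
    params.setdefault(key, []).append(value if sep else True)

for key in ("root", "mount.usr", "verity.usr", "verity.usrhash", "console"):
    print(key, "=", params.get(key))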
Jan 30 13:59:44.970306 kernel: Console: colour VGA+ 80x25 Jan 30 13:59:44.970318 kernel: printk: console [tty0] enabled Jan 30 13:59:44.970343 kernel: printk: console [ttyS0] enabled Jan 30 13:59:44.970356 kernel: ACPI: Core revision 20230628 Jan 30 13:59:44.970368 kernel: clocksource: hpet: mask: 0xffffffff max_cycles: 0xffffffff, max_idle_ns: 19112604467 ns Jan 30 13:59:44.970384 kernel: APIC: Switch to symmetric I/O mode setup Jan 30 13:59:44.970396 kernel: x2apic enabled Jan 30 13:59:44.970408 kernel: APIC: Switched APIC routing to: physical x2apic Jan 30 13:59:44.970420 kernel: ..TIMER: vector=0x30 apic1=0 pin1=2 apic2=-1 pin2=-1 Jan 30 13:59:44.970432 kernel: clocksource: tsc-early: mask: 0xffffffffffffffff max_cycles: 0x23f39838d43, max_idle_ns: 440795267131 ns Jan 30 13:59:44.970444 kernel: Calibrating delay loop (skipped) preset value.. 4988.27 BogoMIPS (lpj=2494138) Jan 30 13:59:44.970456 kernel: Last level iTLB entries: 4KB 0, 2MB 0, 4MB 0 Jan 30 13:59:44.970469 kernel: Last level dTLB entries: 4KB 0, 2MB 0, 4MB 0, 1GB 0 Jan 30 13:59:44.970493 kernel: Spectre V1 : Mitigation: usercopy/swapgs barriers and __user pointer sanitization Jan 30 13:59:44.970506 kernel: Spectre V2 : Mitigation: Retpolines Jan 30 13:59:44.970519 kernel: Spectre V2 : Spectre v2 / SpectreRSB mitigation: Filling RSB on context switch Jan 30 13:59:44.970535 kernel: Spectre V2 : Spectre v2 / SpectreRSB : Filling RSB on VMEXIT Jan 30 13:59:44.970647 kernel: Spectre V2 : Enabling Restricted Speculation for firmware calls Jan 30 13:59:44.970661 kernel: Spectre V2 : mitigation: Enabling conditional Indirect Branch Prediction Barrier Jan 30 13:59:44.970674 kernel: Speculative Store Bypass: Mitigation: Speculative Store Bypass disabled via prctl Jan 30 13:59:44.970688 kernel: MDS: Mitigation: Clear CPU buffers Jan 30 13:59:44.970701 kernel: MMIO Stale Data: Vulnerable: Clear CPU buffers attempted, no microcode Jan 30 13:59:44.970719 kernel: x86/fpu: Supporting XSAVE feature 0x001: 'x87 floating point registers' Jan 30 13:59:44.970732 kernel: x86/fpu: Supporting XSAVE feature 0x002: 'SSE registers' Jan 30 13:59:44.970745 kernel: x86/fpu: Supporting XSAVE feature 0x004: 'AVX registers' Jan 30 13:59:44.970758 kernel: x86/fpu: xstate_offset[2]: 576, xstate_sizes[2]: 256 Jan 30 13:59:44.970771 kernel: x86/fpu: Enabled xstate features 0x7, context size is 832 bytes, using 'standard' format. Jan 30 13:59:44.970784 kernel: Freeing SMP alternatives memory: 32K Jan 30 13:59:44.970797 kernel: pid_max: default: 32768 minimum: 301 Jan 30 13:59:44.970810 kernel: LSM: initializing lsm=lockdown,capability,landlock,selinux,integrity Jan 30 13:59:44.970826 kernel: landlock: Up and running. Jan 30 13:59:44.970839 kernel: SELinux: Initializing. Jan 30 13:59:44.970852 kernel: Mount-cache hash table entries: 4096 (order: 3, 32768 bytes, linear) Jan 30 13:59:44.970865 kernel: Mountpoint-cache hash table entries: 4096 (order: 3, 32768 bytes, linear) Jan 30 13:59:44.970878 kernel: smpboot: CPU0: Intel DO-Regular (family: 0x6, model: 0x4f, stepping: 0x1) Jan 30 13:59:44.970891 kernel: RCU Tasks: Setting shift to 1 and lim to 1 rcu_task_cb_adjust=1 rcu_task_cpu_ids=2. Jan 30 13:59:44.970904 kernel: RCU Tasks Rude: Setting shift to 1 and lim to 1 rcu_task_cb_adjust=1 rcu_task_cpu_ids=2. Jan 30 13:59:44.970917 kernel: RCU Tasks Trace: Setting shift to 1 and lim to 1 rcu_task_cb_adjust=1 rcu_task_cpu_ids=2. Jan 30 13:59:44.970933 kernel: Performance Events: unsupported p6 CPU model 79 no PMU driver, software events only. 
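The Spectre/MDS/MMIO mitigation lines above are also exposed at runtime under /sys/devices/system/cpu/vulnerabilities/, one file per issue. As a rough illustration (it assumes a Linux host that exposes that directory and must be run on the machine itself), this prints the same mitigation summary the kernel logged at boot:

# Rough illustration: print the CPU vulnerability/mitigation summary that the
# kernel logs at boot. Assumes a Linux host exposing the standard sysfs files.
from pathlib import Path

vuln_dir = Path("/sys/devices/system/cpu/vulnerabilities")
for entry in sorted(vuln_dir.iterdir()):
    # Each file holds a one-line status such as "Mitigation: Retpolines".
    print(f"{entry.name:25s} {entry.read_text().strip()}")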
Jan 30 13:59:44.970946 kernel: signal: max sigframe size: 1776 Jan 30 13:59:44.970958 kernel: rcu: Hierarchical SRCU implementation. Jan 30 13:59:44.970972 kernel: rcu: Max phase no-delay instances is 400. Jan 30 13:59:44.970985 kernel: NMI watchdog: Perf NMI watchdog permanently disabled Jan 30 13:59:44.970997 kernel: smp: Bringing up secondary CPUs ... Jan 30 13:59:44.971010 kernel: smpboot: x86: Booting SMP configuration: Jan 30 13:59:44.971022 kernel: .... node #0, CPUs: #1 Jan 30 13:59:44.971036 kernel: smp: Brought up 1 node, 2 CPUs Jan 30 13:59:44.971049 kernel: smpboot: Max logical packages: 1 Jan 30 13:59:44.971065 kernel: smpboot: Total of 2 processors activated (9976.55 BogoMIPS) Jan 30 13:59:44.971078 kernel: devtmpfs: initialized Jan 30 13:59:44.971092 kernel: x86/mm: Memory block size: 128MB Jan 30 13:59:44.971105 kernel: clocksource: jiffies: mask: 0xffffffff max_cycles: 0xffffffff, max_idle_ns: 1911260446275000 ns Jan 30 13:59:44.971118 kernel: futex hash table entries: 512 (order: 3, 32768 bytes, linear) Jan 30 13:59:44.971130 kernel: pinctrl core: initialized pinctrl subsystem Jan 30 13:59:44.971143 kernel: NET: Registered PF_NETLINK/PF_ROUTE protocol family Jan 30 13:59:44.971156 kernel: audit: initializing netlink subsys (disabled) Jan 30 13:59:44.971170 kernel: audit: type=2000 audit(1738245584.308:1): state=initialized audit_enabled=0 res=1 Jan 30 13:59:44.971185 kernel: thermal_sys: Registered thermal governor 'step_wise' Jan 30 13:59:44.971198 kernel: thermal_sys: Registered thermal governor 'user_space' Jan 30 13:59:44.971211 kernel: cpuidle: using governor menu Jan 30 13:59:44.971224 kernel: acpiphp: ACPI Hot Plug PCI Controller Driver version: 0.5 Jan 30 13:59:44.971237 kernel: dca service started, version 1.12.1 Jan 30 13:59:44.971250 kernel: PCI: Using configuration type 1 for base access Jan 30 13:59:44.971263 kernel: kprobes: kprobe jump-optimization is enabled. All kprobes are optimized if possible. 
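The timing numbers above are internally consistent: with TSC-based calibration the kernel presets loops-per-jiffy from the TSC frequency, and BogoMIPS is lpj * HZ / 500000 (HZ=1000 is implied here, since lpj equals the TSC frequency in kHz). A small sanity check using only values copied from the log:

# Sanity-check the delay-loop numbers from the log above.
# All constants come from the log; HZ=1000 is inferred from them.
TSC_MHZ = 2494.138   # "tsc: Detected 2494.138 MHz processor"
LPJ = 2494138        # "preset value.. 4988.27 BogoMIPS (lpj=2494138)"
HZ = 1000

bogomips = LPJ * HZ / 500_000                     # 4988.276; kernel truncates to 4988.27
print(f"per-CPU BogoMIPS : {bogomips:.3f}")
print(f"2-CPU total      : {2 * bogomips:.2f}")   # 9976.55, as in "Total of 2 processors activated"
print("lpj equals TSC kHz:", LPJ == round(TSC_MHZ * 1000))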
Jan 30 13:59:44.971276 kernel: HugeTLB: registered 2.00 MiB page size, pre-allocated 0 pages Jan 30 13:59:44.971289 kernel: HugeTLB: 28 KiB vmemmap can be freed for a 2.00 MiB page Jan 30 13:59:44.971305 kernel: ACPI: Added _OSI(Module Device) Jan 30 13:59:44.971318 kernel: ACPI: Added _OSI(Processor Device) Jan 30 13:59:44.971331 kernel: ACPI: Added _OSI(3.0 _SCP Extensions) Jan 30 13:59:44.971344 kernel: ACPI: Added _OSI(Processor Aggregator Device) Jan 30 13:59:44.971357 kernel: ACPI: 1 ACPI AML tables successfully acquired and loaded Jan 30 13:59:44.971370 kernel: ACPI: _OSC evaluation for CPUs failed, trying _PDC Jan 30 13:59:44.971383 kernel: ACPI: Interpreter enabled Jan 30 13:59:44.971396 kernel: ACPI: PM: (supports S0 S5) Jan 30 13:59:44.971429 kernel: ACPI: Using IOAPIC for interrupt routing Jan 30 13:59:44.971446 kernel: PCI: Using host bridge windows from ACPI; if necessary, use "pci=nocrs" and report a bug Jan 30 13:59:44.971459 kernel: PCI: Using E820 reservations for host bridge windows Jan 30 13:59:44.971472 kernel: ACPI: Enabled 2 GPEs in block 00 to 0F Jan 30 13:59:44.971485 kernel: ACPI: PCI Root Bridge [PCI0] (domain 0000 [bus 00-ff]) Jan 30 13:59:44.971723 kernel: acpi PNP0A03:00: _OSC: OS supports [ASPM ClockPM Segments MSI HPX-Type3] Jan 30 13:59:44.971855 kernel: acpi PNP0A03:00: _OSC: not requesting OS control; OS requires [ExtendedConfig ASPM ClockPM MSI] Jan 30 13:59:44.971979 kernel: acpi PNP0A03:00: fail to add MMCONFIG information, can't access extended configuration space under this bridge Jan 30 13:59:44.972581 kernel: acpiphp: Slot [3] registered Jan 30 13:59:44.972596 kernel: acpiphp: Slot [4] registered Jan 30 13:59:44.972609 kernel: acpiphp: Slot [5] registered Jan 30 13:59:44.972623 kernel: acpiphp: Slot [6] registered Jan 30 13:59:44.972636 kernel: acpiphp: Slot [7] registered Jan 30 13:59:44.972649 kernel: acpiphp: Slot [8] registered Jan 30 13:59:44.972662 kernel: acpiphp: Slot [9] registered Jan 30 13:59:44.972675 kernel: acpiphp: Slot [10] registered Jan 30 13:59:44.972688 kernel: acpiphp: Slot [11] registered Jan 30 13:59:44.972706 kernel: acpiphp: Slot [12] registered Jan 30 13:59:44.972718 kernel: acpiphp: Slot [13] registered Jan 30 13:59:44.972732 kernel: acpiphp: Slot [14] registered Jan 30 13:59:44.972744 kernel: acpiphp: Slot [15] registered Jan 30 13:59:44.972758 kernel: acpiphp: Slot [16] registered Jan 30 13:59:44.972770 kernel: acpiphp: Slot [17] registered Jan 30 13:59:44.972783 kernel: acpiphp: Slot [18] registered Jan 30 13:59:44.972795 kernel: acpiphp: Slot [19] registered Jan 30 13:59:44.972809 kernel: acpiphp: Slot [20] registered Jan 30 13:59:44.972821 kernel: acpiphp: Slot [21] registered Jan 30 13:59:44.972838 kernel: acpiphp: Slot [22] registered Jan 30 13:59:44.972851 kernel: acpiphp: Slot [23] registered Jan 30 13:59:44.972863 kernel: acpiphp: Slot [24] registered Jan 30 13:59:44.972877 kernel: acpiphp: Slot [25] registered Jan 30 13:59:44.972890 kernel: acpiphp: Slot [26] registered Jan 30 13:59:44.972903 kernel: acpiphp: Slot [27] registered Jan 30 13:59:44.972915 kernel: acpiphp: Slot [28] registered Jan 30 13:59:44.972928 kernel: acpiphp: Slot [29] registered Jan 30 13:59:44.972950 kernel: acpiphp: Slot [30] registered Jan 30 13:59:44.972966 kernel: acpiphp: Slot [31] registered Jan 30 13:59:44.972979 kernel: PCI host bridge to bus 0000:00 Jan 30 13:59:44.973163 kernel: pci_bus 0000:00: root bus resource [io 0x0000-0x0cf7 window] Jan 30 13:59:44.973281 kernel: pci_bus 0000:00: root bus resource [io 0x0d00-0xffff window] 
Jan 30 13:59:44.973394 kernel: pci_bus 0000:00: root bus resource [mem 0x000a0000-0x000bffff window] Jan 30 13:59:44.973503 kernel: pci_bus 0000:00: root bus resource [mem 0x80000000-0xfebfffff window] Jan 30 13:59:44.973630 kernel: pci_bus 0000:00: root bus resource [mem 0x100000000-0x17fffffff window] Jan 30 13:59:44.973738 kernel: pci_bus 0000:00: root bus resource [bus 00-ff] Jan 30 13:59:44.973887 kernel: pci 0000:00:00.0: [8086:1237] type 00 class 0x060000 Jan 30 13:59:44.974029 kernel: pci 0000:00:01.0: [8086:7000] type 00 class 0x060100 Jan 30 13:59:44.974158 kernel: pci 0000:00:01.1: [8086:7010] type 00 class 0x010180 Jan 30 13:59:44.974278 kernel: pci 0000:00:01.1: reg 0x20: [io 0xc1e0-0xc1ef] Jan 30 13:59:44.974434 kernel: pci 0000:00:01.1: legacy IDE quirk: reg 0x10: [io 0x01f0-0x01f7] Jan 30 13:59:44.974572 kernel: pci 0000:00:01.1: legacy IDE quirk: reg 0x14: [io 0x03f6] Jan 30 13:59:44.974704 kernel: pci 0000:00:01.1: legacy IDE quirk: reg 0x18: [io 0x0170-0x0177] Jan 30 13:59:44.974838 kernel: pci 0000:00:01.1: legacy IDE quirk: reg 0x1c: [io 0x0376] Jan 30 13:59:44.974975 kernel: pci 0000:00:01.2: [8086:7020] type 00 class 0x0c0300 Jan 30 13:59:44.975103 kernel: pci 0000:00:01.2: reg 0x20: [io 0xc180-0xc19f] Jan 30 13:59:44.975235 kernel: pci 0000:00:01.3: [8086:7113] type 00 class 0x068000 Jan 30 13:59:44.975365 kernel: pci 0000:00:01.3: quirk: [io 0x0600-0x063f] claimed by PIIX4 ACPI Jan 30 13:59:44.975494 kernel: pci 0000:00:01.3: quirk: [io 0x0700-0x070f] claimed by PIIX4 SMB Jan 30 13:59:44.975641 kernel: pci 0000:00:02.0: [1af4:1050] type 00 class 0x030000 Jan 30 13:59:44.975769 kernel: pci 0000:00:02.0: reg 0x10: [mem 0xfe000000-0xfe7fffff pref] Jan 30 13:59:44.975898 kernel: pci 0000:00:02.0: reg 0x18: [mem 0xfe800000-0xfe803fff 64bit pref] Jan 30 13:59:44.976021 kernel: pci 0000:00:02.0: reg 0x20: [mem 0xfebf0000-0xfebf0fff] Jan 30 13:59:44.976141 kernel: pci 0000:00:02.0: reg 0x30: [mem 0xfebe0000-0xfebeffff pref] Jan 30 13:59:44.976263 kernel: pci 0000:00:02.0: Video device with shadowed ROM at [mem 0x000c0000-0x000dffff] Jan 30 13:59:44.976409 kernel: pci 0000:00:03.0: [1af4:1000] type 00 class 0x020000 Jan 30 13:59:44.976531 kernel: pci 0000:00:03.0: reg 0x10: [io 0xc1a0-0xc1bf] Jan 30 13:59:44.977279 kernel: pci 0000:00:03.0: reg 0x14: [mem 0xfebf1000-0xfebf1fff] Jan 30 13:59:44.977409 kernel: pci 0000:00:03.0: reg 0x20: [mem 0xfe804000-0xfe807fff 64bit pref] Jan 30 13:59:44.977575 kernel: pci 0000:00:04.0: [1af4:1000] type 00 class 0x020000 Jan 30 13:59:44.977698 kernel: pci 0000:00:04.0: reg 0x10: [io 0xc1c0-0xc1df] Jan 30 13:59:44.977818 kernel: pci 0000:00:04.0: reg 0x14: [mem 0xfebf2000-0xfebf2fff] Jan 30 13:59:44.977947 kernel: pci 0000:00:04.0: reg 0x20: [mem 0xfe808000-0xfe80bfff 64bit pref] Jan 30 13:59:44.978079 kernel: pci 0000:00:05.0: [1af4:1004] type 00 class 0x010000 Jan 30 13:59:44.978202 kernel: pci 0000:00:05.0: reg 0x10: [io 0xc100-0xc13f] Jan 30 13:59:44.978322 kernel: pci 0000:00:05.0: reg 0x14: [mem 0xfebf3000-0xfebf3fff] Jan 30 13:59:44.978475 kernel: pci 0000:00:05.0: reg 0x20: [mem 0xfe80c000-0xfe80ffff 64bit pref] Jan 30 13:59:44.978665 kernel: pci 0000:00:06.0: [1af4:1001] type 00 class 0x010000 Jan 30 13:59:44.978789 kernel: pci 0000:00:06.0: reg 0x10: [io 0xc000-0xc07f] Jan 30 13:59:44.978916 kernel: pci 0000:00:06.0: reg 0x14: [mem 0xfebf4000-0xfebf4fff] Jan 30 13:59:44.979034 kernel: pci 0000:00:06.0: reg 0x20: [mem 0xfe810000-0xfe813fff 64bit pref] Jan 30 13:59:44.979165 kernel: pci 0000:00:07.0: [1af4:1001] type 00 class 
0x010000 Jan 30 13:59:44.979285 kernel: pci 0000:00:07.0: reg 0x10: [io 0xc080-0xc0ff] Jan 30 13:59:44.979404 kernel: pci 0000:00:07.0: reg 0x14: [mem 0xfebf5000-0xfebf5fff] Jan 30 13:59:44.979528 kernel: pci 0000:00:07.0: reg 0x20: [mem 0xfe814000-0xfe817fff 64bit pref] Jan 30 13:59:44.979674 kernel: pci 0000:00:08.0: [1af4:1002] type 00 class 0x00ff00 Jan 30 13:59:44.979804 kernel: pci 0000:00:08.0: reg 0x10: [io 0xc140-0xc17f] Jan 30 13:59:44.979926 kernel: pci 0000:00:08.0: reg 0x20: [mem 0xfe818000-0xfe81bfff 64bit pref] Jan 30 13:59:44.979953 kernel: ACPI: PCI: Interrupt link LNKA configured for IRQ 10 Jan 30 13:59:44.979967 kernel: ACPI: PCI: Interrupt link LNKB configured for IRQ 10 Jan 30 13:59:44.979980 kernel: ACPI: PCI: Interrupt link LNKC configured for IRQ 11 Jan 30 13:59:44.979993 kernel: ACPI: PCI: Interrupt link LNKD configured for IRQ 11 Jan 30 13:59:44.980010 kernel: ACPI: PCI: Interrupt link LNKS configured for IRQ 9 Jan 30 13:59:44.980023 kernel: iommu: Default domain type: Translated Jan 30 13:59:44.980036 kernel: iommu: DMA domain TLB invalidation policy: lazy mode Jan 30 13:59:44.980049 kernel: PCI: Using ACPI for IRQ routing Jan 30 13:59:44.980062 kernel: PCI: pci_cache_line_size set to 64 bytes Jan 30 13:59:44.980075 kernel: e820: reserve RAM buffer [mem 0x0009fc00-0x0009ffff] Jan 30 13:59:44.980088 kernel: e820: reserve RAM buffer [mem 0x7ffd8000-0x7fffffff] Jan 30 13:59:44.980212 kernel: pci 0000:00:02.0: vgaarb: setting as boot VGA device Jan 30 13:59:44.980333 kernel: pci 0000:00:02.0: vgaarb: bridge control possible Jan 30 13:59:44.980457 kernel: pci 0000:00:02.0: vgaarb: VGA device added: decodes=io+mem,owns=io+mem,locks=none Jan 30 13:59:44.980474 kernel: vgaarb: loaded Jan 30 13:59:44.980487 kernel: hpet0: at MMIO 0xfed00000, IRQs 2, 8, 0 Jan 30 13:59:44.980500 kernel: hpet0: 3 comparators, 64-bit 100.000000 MHz counter Jan 30 13:59:44.980513 kernel: clocksource: Switched to clocksource kvm-clock Jan 30 13:59:44.980526 kernel: VFS: Disk quotas dquot_6.6.0 Jan 30 13:59:44.980540 kernel: VFS: Dquot-cache hash table entries: 512 (order 0, 4096 bytes) Jan 30 13:59:44.980607 kernel: pnp: PnP ACPI init Jan 30 13:59:44.980625 kernel: pnp: PnP ACPI: found 4 devices Jan 30 13:59:44.980643 kernel: clocksource: acpi_pm: mask: 0xffffff max_cycles: 0xffffff, max_idle_ns: 2085701024 ns Jan 30 13:59:44.980656 kernel: NET: Registered PF_INET protocol family Jan 30 13:59:44.980669 kernel: IP idents hash table entries: 32768 (order: 6, 262144 bytes, linear) Jan 30 13:59:44.980683 kernel: tcp_listen_portaddr_hash hash table entries: 1024 (order: 2, 16384 bytes, linear) Jan 30 13:59:44.980697 kernel: Table-perturb hash table entries: 65536 (order: 6, 262144 bytes, linear) Jan 30 13:59:44.980712 kernel: TCP established hash table entries: 16384 (order: 5, 131072 bytes, linear) Jan 30 13:59:44.980725 kernel: TCP bind hash table entries: 16384 (order: 7, 524288 bytes, linear) Jan 30 13:59:44.980739 kernel: TCP: Hash tables configured (established 16384 bind 16384) Jan 30 13:59:44.980752 kernel: UDP hash table entries: 1024 (order: 3, 32768 bytes, linear) Jan 30 13:59:44.980769 kernel: UDP-Lite hash table entries: 1024 (order: 3, 32768 bytes, linear) Jan 30 13:59:44.980782 kernel: NET: Registered PF_UNIX/PF_LOCAL protocol family Jan 30 13:59:44.980795 kernel: NET: Registered PF_XDP protocol family Jan 30 13:59:44.980928 kernel: pci_bus 0000:00: resource 4 [io 0x0000-0x0cf7 window] Jan 30 13:59:44.981053 kernel: pci_bus 0000:00: resource 5 [io 0x0d00-0xffff window] Jan 30 
13:59:44.981164 kernel: pci_bus 0000:00: resource 6 [mem 0x000a0000-0x000bffff window] Jan 30 13:59:44.981273 kernel: pci_bus 0000:00: resource 7 [mem 0x80000000-0xfebfffff window] Jan 30 13:59:44.981381 kernel: pci_bus 0000:00: resource 8 [mem 0x100000000-0x17fffffff window] Jan 30 13:59:44.981512 kernel: pci 0000:00:01.0: PIIX3: Enabling Passive Release Jan 30 13:59:44.981649 kernel: pci 0000:00:00.0: Limiting direct PCI/PCI transfers Jan 30 13:59:44.981667 kernel: ACPI: \_SB_.LNKD: Enabled at IRQ 11 Jan 30 13:59:44.981786 kernel: pci 0000:00:01.2: quirk_usb_early_handoff+0x0/0x7a0 took 27820 usecs Jan 30 13:59:44.981802 kernel: PCI: CLS 0 bytes, default 64 Jan 30 13:59:44.981816 kernel: RAPL PMU: API unit is 2^-32 Joules, 0 fixed counters, 10737418240 ms ovfl timer Jan 30 13:59:44.981829 kernel: clocksource: tsc: mask: 0xffffffffffffffff max_cycles: 0x23f39838d43, max_idle_ns: 440795267131 ns Jan 30 13:59:44.981843 kernel: Initialise system trusted keyrings Jan 30 13:59:44.981859 kernel: workingset: timestamp_bits=39 max_order=19 bucket_order=0 Jan 30 13:59:44.981872 kernel: Key type asymmetric registered Jan 30 13:59:44.981885 kernel: Asymmetric key parser 'x509' registered Jan 30 13:59:44.981898 kernel: Block layer SCSI generic (bsg) driver version 0.4 loaded (major 251) Jan 30 13:59:44.981911 kernel: io scheduler mq-deadline registered Jan 30 13:59:44.981924 kernel: io scheduler kyber registered Jan 30 13:59:44.981937 kernel: io scheduler bfq registered Jan 30 13:59:44.981949 kernel: ioatdma: Intel(R) QuickData Technology Driver 5.00 Jan 30 13:59:44.981963 kernel: ACPI: \_SB_.LNKB: Enabled at IRQ 10 Jan 30 13:59:44.981976 kernel: ACPI: \_SB_.LNKC: Enabled at IRQ 11 Jan 30 13:59:44.981992 kernel: ACPI: \_SB_.LNKA: Enabled at IRQ 10 Jan 30 13:59:44.982005 kernel: Serial: 8250/16550 driver, 4 ports, IRQ sharing enabled Jan 30 13:59:44.982018 kernel: 00:00: ttyS0 at I/O 0x3f8 (irq = 4, base_baud = 115200) is a 16550A Jan 30 13:59:44.982033 kernel: i8042: PNP: PS/2 Controller [PNP0303:KBD,PNP0f13:MOU] at 0x60,0x64 irq 1,12 Jan 30 13:59:44.982045 kernel: serio: i8042 KBD port at 0x60,0x64 irq 1 Jan 30 13:59:44.982058 kernel: serio: i8042 AUX port at 0x60,0x64 irq 12 Jan 30 13:59:44.982585 kernel: input: AT Translated Set 2 keyboard as /devices/platform/i8042/serio0/input/input0 Jan 30 13:59:44.982750 kernel: rtc_cmos 00:03: RTC can wake from S4 Jan 30 13:59:44.982873 kernel: rtc_cmos 00:03: registered as rtc0 Jan 30 13:59:44.983019 kernel: rtc_cmos 00:03: setting system clock to 2025-01-30T13:59:44 UTC (1738245584) Jan 30 13:59:44.983131 kernel: rtc_cmos 00:03: alarms up to one day, y3k, 242 bytes nvram Jan 30 13:59:44.983147 kernel: intel_pstate: CPU model not supported Jan 30 13:59:44.983160 kernel: NET: Registered PF_INET6 protocol family Jan 30 13:59:44.983173 kernel: Segment Routing with IPv6 Jan 30 13:59:44.983186 kernel: In-situ OAM (IOAM) with IPv6 Jan 30 13:59:44.983199 kernel: NET: Registered PF_PACKET protocol family Jan 30 13:59:44.983215 kernel: Key type dns_resolver registered Jan 30 13:59:44.983229 kernel: IPI shorthand broadcast: enabled Jan 30 13:59:44.983242 kernel: sched_clock: Marking stable (893003333, 92421325)->(1009659451, -24234793) Jan 30 13:59:44.983255 kernel: registered taskstats version 1 Jan 30 13:59:44.983267 kernel: Loading compiled-in X.509 certificates Jan 30 13:59:44.983280 kernel: Loaded X.509 cert 'Kinvolk GmbH: Module signing key for 6.6.74-flatcar: 1efdcbe72fc44d29e4e6411cf9a3e64046be4375' Jan 30 13:59:44.983293 kernel: Key type .fscrypt registered 
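The rtc_cmos entry above records both the wall-clock time and the raw epoch value (2025-01-30T13:59:44 UTC / 1738245584), and the same epoch appears in the earlier audit record. A two-line check using the value from the log:

# Confirm the epoch value logged by rtc_cmos matches the printed UTC time.
from datetime import datetime, timezone

epoch = 1738245584  # from "setting system clock to 2025-01-30T13:59:44 UTC (1738245584)"
print(datetime.fromtimestamp(epoch, tz=timezone.utc).isoformat())
# -> 2025-01-30T13:59:44+00:00, matching the journal timestamps in this log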
Jan 30 13:59:44.983306 kernel: Key type fscrypt-provisioning registered Jan 30 13:59:44.983319 kernel: ima: No TPM chip found, activating TPM-bypass! Jan 30 13:59:44.983335 kernel: ima: Allocated hash algorithm: sha1 Jan 30 13:59:44.983347 kernel: ima: No architecture policies found Jan 30 13:59:44.983360 kernel: clk: Disabling unused clocks Jan 30 13:59:44.983373 kernel: Freeing unused kernel image (initmem) memory: 42844K Jan 30 13:59:44.983387 kernel: Write protecting the kernel read-only data: 36864k Jan 30 13:59:44.983420 kernel: Freeing unused kernel image (rodata/data gap) memory: 1848K Jan 30 13:59:44.983437 kernel: Run /init as init process Jan 30 13:59:44.983450 kernel: with arguments: Jan 30 13:59:44.983467 kernel: /init Jan 30 13:59:44.983483 kernel: with environment: Jan 30 13:59:44.983496 kernel: HOME=/ Jan 30 13:59:44.983509 kernel: TERM=linux Jan 30 13:59:44.983523 kernel: BOOT_IMAGE=/flatcar/vmlinuz-a Jan 30 13:59:44.983539 systemd[1]: systemd 255 running in system mode (+PAM +AUDIT +SELINUX -APPARMOR +IMA +SMACK +SECCOMP +GCRYPT -GNUTLS +OPENSSL -ACL +BLKID +CURL +ELFUTILS -FIDO2 +IDN2 -IDN +IPTC +KMOD +LIBCRYPTSETUP +LIBFDISK +PCRE2 -PWQUALITY -P11KIT -QRENCODE +TPM2 +BZIP2 +LZ4 +XZ +ZLIB +ZSTD -BPF_FRAMEWORK -XKBCOMMON +UTMP -SYSVINIT default-hierarchy=unified) Jan 30 13:59:44.985621 systemd[1]: Detected virtualization kvm. Jan 30 13:59:44.985638 systemd[1]: Detected architecture x86-64. Jan 30 13:59:44.985652 systemd[1]: Running in initrd. Jan 30 13:59:44.985672 systemd[1]: No hostname configured, using default hostname. Jan 30 13:59:44.985686 systemd[1]: Hostname set to . Jan 30 13:59:44.985701 systemd[1]: Initializing machine ID from VM UUID. Jan 30 13:59:44.985715 systemd[1]: Queued start job for default target initrd.target. Jan 30 13:59:44.985729 systemd[1]: Started clevis-luks-askpass.path - Forward Password Requests to Clevis Directory Watch. Jan 30 13:59:44.985744 systemd[1]: Started systemd-ask-password-console.path - Dispatch Password Requests to Console Directory Watch. Jan 30 13:59:44.985759 systemd[1]: Expecting device dev-disk-by\x2dlabel-EFI\x2dSYSTEM.device - /dev/disk/by-label/EFI-SYSTEM... Jan 30 13:59:44.985774 systemd[1]: Expecting device dev-disk-by\x2dlabel-OEM.device - /dev/disk/by-label/OEM... Jan 30 13:59:44.985791 systemd[1]: Expecting device dev-disk-by\x2dlabel-ROOT.device - /dev/disk/by-label/ROOT... Jan 30 13:59:44.985806 systemd[1]: Expecting device dev-disk-by\x2dpartlabel-USR\x2dA.device - /dev/disk/by-partlabel/USR-A... Jan 30 13:59:44.985823 systemd[1]: Expecting device dev-disk-by\x2dpartuuid-7130c94a\x2d213a\x2d4e5a\x2d8e26\x2d6cce9662f132.device - /dev/disk/by-partuuid/7130c94a-213a-4e5a-8e26-6cce9662f132... Jan 30 13:59:44.985837 systemd[1]: Expecting device dev-mapper-usr.device - /dev/mapper/usr... Jan 30 13:59:44.985852 systemd[1]: Reached target cryptsetup-pre.target - Local Encrypted Volumes (Pre). Jan 30 13:59:44.985866 systemd[1]: Reached target cryptsetup.target - Local Encrypted Volumes. Jan 30 13:59:44.985883 systemd[1]: Reached target paths.target - Path Units. Jan 30 13:59:44.985898 systemd[1]: Reached target slices.target - Slice Units. Jan 30 13:59:44.985913 systemd[1]: Reached target swap.target - Swaps. Jan 30 13:59:44.985930 systemd[1]: Reached target timers.target - Timer Units. Jan 30 13:59:44.985944 systemd[1]: Listening on iscsid.socket - Open-iSCSI iscsid Socket. Jan 30 13:59:44.985958 systemd[1]: Listening on iscsiuio.socket - Open-iSCSI iscsiuio Socket. 
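The device units above (e.g. dev-disk-by\x2dlabel-EFI\x2dSYSTEM.device) are the block-device paths run through systemd's path escaping: the leading "/" is dropped, remaining "/" become "-", and bytes outside [A-Za-z0-9:_.] (including "-" itself) are hex-escaped as \xNN. This is what `systemd-escape --path` produces; the function below is a simplified re-implementation for illustration only, not the canonical code:

# Simplified sketch of systemd's path-to-unit-name escaping, to explain the
# \x2d sequences in the .device unit names above. Not the canonical code
# (e.g. multi-byte UTF-8 input is not handled here).
import string

KEEP = set(string.ascii_letters + string.digits + ":_.")

def systemd_escape_path(path: str) -> str:
    trimmed = path.strip("/") or "/"
    out = []
    for i, ch in enumerate(trimmed):
        if ch == "/":
            out.append("-")
        elif ch in KEEP and not (i == 0 and ch == "."):
            out.append(ch)
        else:
            out.append("\\x%02x" % ord(ch))  # '-' becomes \x2d, etc.
    return "".join(out)

print(systemd_escape_path("/dev/disk/by-label/EFI-SYSTEM") + ".device")
# -> dev-disk-by\x2dlabel-EFI\x2dSYSTEM.device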
Jan 30 13:59:44.985976 systemd[1]: Listening on systemd-journald-dev-log.socket - Journal Socket (/dev/log). Jan 30 13:59:44.985990 systemd[1]: Listening on systemd-journald.socket - Journal Socket. Jan 30 13:59:44.986005 systemd[1]: Listening on systemd-networkd.socket - Network Service Netlink Socket. Jan 30 13:59:44.986019 systemd[1]: Listening on systemd-udevd-control.socket - udev Control Socket. Jan 30 13:59:44.986033 systemd[1]: Listening on systemd-udevd-kernel.socket - udev Kernel Socket. Jan 30 13:59:44.986047 systemd[1]: Reached target sockets.target - Socket Units. Jan 30 13:59:44.986061 systemd[1]: Starting ignition-setup-pre.service - Ignition env setup... Jan 30 13:59:44.986076 systemd[1]: Starting kmod-static-nodes.service - Create List of Static Device Nodes... Jan 30 13:59:44.986093 systemd[1]: Finished network-cleanup.service - Network Cleanup. Jan 30 13:59:44.986108 systemd[1]: Starting systemd-fsck-usr.service... Jan 30 13:59:44.986122 systemd[1]: Starting systemd-journald.service - Journal Service... Jan 30 13:59:44.986136 systemd[1]: Starting systemd-modules-load.service - Load Kernel Modules... Jan 30 13:59:44.986150 systemd[1]: Starting systemd-vconsole-setup.service - Virtual Console Setup... Jan 30 13:59:44.986165 systemd[1]: Finished ignition-setup-pre.service - Ignition env setup. Jan 30 13:59:44.986215 systemd-journald[184]: Collecting audit messages is disabled. Jan 30 13:59:44.986252 systemd[1]: Finished kmod-static-nodes.service - Create List of Static Device Nodes. Jan 30 13:59:44.986266 systemd[1]: Finished systemd-fsck-usr.service. Jan 30 13:59:44.986283 systemd-journald[184]: Journal started Jan 30 13:59:44.986316 systemd-journald[184]: Runtime Journal (/run/log/journal/3ac7da44287c4977a116416d5b87ea04) is 4.9M, max 39.3M, 34.4M free. Jan 30 13:59:44.999574 systemd[1]: Starting systemd-tmpfiles-setup-dev-early.service - Create Static Device Nodes in /dev gracefully... Jan 30 13:59:45.005456 systemd-modules-load[185]: Inserted module 'overlay' Jan 30 13:59:45.041567 systemd[1]: Started systemd-journald.service - Journal Service. Jan 30 13:59:45.048581 kernel: bridge: filtering via arp/ip/ip6tables is no longer available by default. Update your scripts to load br_netfilter if you need this. Jan 30 13:59:45.051114 systemd-modules-load[185]: Inserted module 'br_netfilter' Jan 30 13:59:45.051836 kernel: Bridge firewalling registered Jan 30 13:59:45.053929 systemd[1]: Finished systemd-modules-load.service - Load Kernel Modules. Jan 30 13:59:45.061380 systemd[1]: Finished systemd-vconsole-setup.service - Virtual Console Setup. Jan 30 13:59:45.062045 systemd[1]: Finished systemd-tmpfiles-setup-dev-early.service - Create Static Device Nodes in /dev gracefully. Jan 30 13:59:45.072763 systemd[1]: Starting dracut-cmdline-ask.service - dracut ask for additional cmdline parameters... Jan 30 13:59:45.074733 systemd[1]: Starting systemd-sysctl.service - Apply Kernel Variables... Jan 30 13:59:45.078821 systemd[1]: Starting systemd-tmpfiles-setup-dev.service - Create Static Device Nodes in /dev... Jan 30 13:59:45.080460 systemd[1]: Starting systemd-tmpfiles-setup.service - Create System Files and Directories... Jan 30 13:59:45.106094 systemd[1]: Finished systemd-tmpfiles-setup-dev.service - Create Static Device Nodes in /dev. Jan 30 13:59:45.110302 systemd[1]: Finished systemd-tmpfiles-setup.service - Create System Files and Directories. Jan 30 13:59:45.113365 systemd[1]: Finished systemd-sysctl.service - Apply Kernel Variables. 
Jan 30 13:59:45.119877 systemd[1]: Starting systemd-resolved.service - Network Name Resolution... Jan 30 13:59:45.122019 systemd[1]: Finished dracut-cmdline-ask.service - dracut ask for additional cmdline parameters. Jan 30 13:59:45.130208 systemd[1]: Starting dracut-cmdline.service - dracut cmdline hook... Jan 30 13:59:45.160571 dracut-cmdline[218]: dracut-dracut-053 Jan 30 13:59:45.160090 systemd-resolved[216]: Positive Trust Anchors: Jan 30 13:59:45.160099 systemd-resolved[216]: . IN DS 20326 8 2 e06d44b80b8f1d39a95c0b0d7c65d08458e880409bbc683457104237c7f8ec8d Jan 30 13:59:45.160136 systemd-resolved[216]: Negative trust anchors: home.arpa 10.in-addr.arpa 16.172.in-addr.arpa 17.172.in-addr.arpa 18.172.in-addr.arpa 19.172.in-addr.arpa 20.172.in-addr.arpa 21.172.in-addr.arpa 22.172.in-addr.arpa 23.172.in-addr.arpa 24.172.in-addr.arpa 25.172.in-addr.arpa 26.172.in-addr.arpa 27.172.in-addr.arpa 28.172.in-addr.arpa 29.172.in-addr.arpa 30.172.in-addr.arpa 31.172.in-addr.arpa 170.0.0.192.in-addr.arpa 171.0.0.192.in-addr.arpa 168.192.in-addr.arpa d.f.ip6.arpa ipv4only.arpa resolver.arpa corp home internal intranet lan local private test Jan 30 13:59:45.165884 dracut-cmdline[218]: Using kernel command line parameters: rd.driver.pre=btrfs rootflags=rw mount.usrflags=ro BOOT_IMAGE=/flatcar/vmlinuz-a mount.usr=/dev/mapper/usr verity.usr=PARTUUID=7130c94a-213a-4e5a-8e26-6cce9662f132 rootflags=rw mount.usrflags=ro consoleblank=0 root=LABEL=ROOT console=ttyS0,115200n8 console=tty0 flatcar.first_boot=detected flatcar.oem.id=digitalocean verity.usrhash=befc9792b021bef43c896e00e1d5172b6224dbafc9b6c92b267e5e544378e681 Jan 30 13:59:45.167952 systemd-resolved[216]: Defaulting to hostname 'linux'. Jan 30 13:59:45.169571 systemd[1]: Started systemd-resolved.service - Network Name Resolution. Jan 30 13:59:45.170867 systemd[1]: Reached target nss-lookup.target - Host and Network Name Lookups. Jan 30 13:59:45.278596 kernel: SCSI subsystem initialized Jan 30 13:59:45.288600 kernel: Loading iSCSI transport class v2.0-870. Jan 30 13:59:45.299646 kernel: iscsi: registered transport (tcp) Jan 30 13:59:45.321816 kernel: iscsi: registered transport (qla4xxx) Jan 30 13:59:45.321905 kernel: QLogic iSCSI HBA Driver Jan 30 13:59:45.373170 systemd[1]: Finished dracut-cmdline.service - dracut cmdline hook. Jan 30 13:59:45.384847 systemd[1]: Starting dracut-pre-udev.service - dracut pre-udev hook... Jan 30 13:59:45.410034 kernel: device-mapper: core: CONFIG_IMA_DISABLE_HTABLE is disabled. Duplicate IMA measurements will not be recorded in the IMA log. Jan 30 13:59:45.410105 kernel: device-mapper: uevent: version 1.0.3 Jan 30 13:59:45.410692 kernel: device-mapper: ioctl: 4.48.0-ioctl (2023-03-01) initialised: dm-devel@redhat.com Jan 30 13:59:45.454598 kernel: raid6: avx2x4 gen() 17465 MB/s Jan 30 13:59:45.470609 kernel: raid6: avx2x2 gen() 17458 MB/s Jan 30 13:59:45.487861 kernel: raid6: avx2x1 gen() 13157 MB/s Jan 30 13:59:45.487935 kernel: raid6: using algorithm avx2x4 gen() 17465 MB/s Jan 30 13:59:45.505971 kernel: raid6: .... xor() 6985 MB/s, rmw enabled Jan 30 13:59:45.506052 kernel: raid6: using avx2x2 recovery algorithm Jan 30 13:59:45.527593 kernel: xor: automatically using best checksumming function avx Jan 30 13:59:45.706609 kernel: Btrfs loaded, zoned=no, fsverity=no Jan 30 13:59:45.720050 systemd[1]: Finished dracut-pre-udev.service - dracut pre-udev hook. Jan 30 13:59:45.725793 systemd[1]: Starting systemd-udevd.service - Rule-based Manager for Device Events and Files... 
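The raid6 lines above show the kernel benchmarking its P/Q syndrome implementations and picking the fastest (avx2x4 at 17465 MB/s here). Scraping those lines back out of a journal dump is straightforward; a small hypothetical helper:

# Toy parser: pull the "raid6: <algo> gen() <N> MB/s" benchmark lines out of a
# journal/dmesg dump and report the winner, mirroring the kernel's own choice.
import re

LOG = """\
raid6: avx2x4 gen() 17465 MB/s
raid6: avx2x2 gen() 17458 MB/s
raid6: avx2x1 gen() 13157 MB/s
"""

results = {m["algo"]: int(m["mbps"])
           for m in re.finditer(r"raid6: (?P<algo>\S+) gen\(\) (?P<mbps>\d+) MB/s", LOG)}
best = max(results, key=results.get)
print(f"fastest gen(): {best} at {results[best]} MB/s")  # avx2x4 at 17465 MB/s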
Jan 30 13:59:45.742978 systemd-udevd[401]: Using default interface naming scheme 'v255'. Jan 30 13:59:45.748260 systemd[1]: Started systemd-udevd.service - Rule-based Manager for Device Events and Files. Jan 30 13:59:45.756757 systemd[1]: Starting dracut-pre-trigger.service - dracut pre-trigger hook... Jan 30 13:59:45.772343 dracut-pre-trigger[404]: rd.md=0: removing MD RAID activation Jan 30 13:59:45.806637 systemd[1]: Finished dracut-pre-trigger.service - dracut pre-trigger hook. Jan 30 13:59:45.811769 systemd[1]: Starting systemd-udev-trigger.service - Coldplug All udev Devices... Jan 30 13:59:45.869160 systemd[1]: Finished systemd-udev-trigger.service - Coldplug All udev Devices. Jan 30 13:59:45.873765 systemd[1]: Starting dracut-initqueue.service - dracut initqueue hook... Jan 30 13:59:45.890233 systemd[1]: Finished dracut-initqueue.service - dracut initqueue hook. Jan 30 13:59:45.895313 systemd[1]: Reached target remote-fs-pre.target - Preparation for Remote File Systems. Jan 30 13:59:45.896910 systemd[1]: Reached target remote-cryptsetup.target - Remote Encrypted Volumes. Jan 30 13:59:45.898077 systemd[1]: Reached target remote-fs.target - Remote File Systems. Jan 30 13:59:45.904666 systemd[1]: Starting dracut-pre-mount.service - dracut pre-mount hook... Jan 30 13:59:45.922849 systemd[1]: Finished dracut-pre-mount.service - dracut pre-mount hook. Jan 30 13:59:45.947618 kernel: cryptd: max_cpu_qlen set to 1000 Jan 30 13:59:45.952685 kernel: virtio_blk virtio4: 1/0/0 default/read/poll queues Jan 30 13:59:45.994424 kernel: virtio_blk virtio4: [vda] 125829120 512-byte logical blocks (64.4 GB/60.0 GiB) Jan 30 13:59:45.994644 kernel: scsi host0: Virtio SCSI HBA Jan 30 13:59:45.994772 kernel: GPT:Primary header thinks Alt. header is not at the end of the disk. Jan 30 13:59:45.994786 kernel: GPT:9289727 != 125829119 Jan 30 13:59:45.994798 kernel: GPT:Alternate GPT header not at the end of the disk. Jan 30 13:59:45.994809 kernel: GPT:9289727 != 125829119 Jan 30 13:59:45.994830 kernel: GPT: Use GNU Parted to correct GPT errors. Jan 30 13:59:45.994842 kernel: vda: vda1 vda2 vda3 vda4 vda6 vda7 vda9 Jan 30 13:59:45.994853 kernel: AVX2 version of gcm_enc/dec engaged. Jan 30 13:59:45.994865 kernel: AES CTR mode by8 optimization enabled Jan 30 13:59:45.971046 systemd[1]: dracut-cmdline-ask.service: Deactivated successfully. Jan 30 13:59:45.971116 systemd[1]: Stopped dracut-cmdline-ask.service - dracut ask for additional cmdline parameters. Jan 30 13:59:45.972000 systemd[1]: Stopping dracut-cmdline-ask.service - dracut ask for additional cmdline parameters... Jan 30 13:59:45.972370 systemd[1]: systemd-vconsole-setup.service: Deactivated successfully. Jan 30 13:59:45.972419 systemd[1]: Stopped systemd-vconsole-setup.service - Virtual Console Setup. Jan 30 13:59:45.972807 systemd[1]: Stopping systemd-vconsole-setup.service - Virtual Console Setup... Jan 30 13:59:45.982614 systemd[1]: Starting systemd-vconsole-setup.service - Virtual Console Setup... Jan 30 13:59:46.001671 kernel: virtio_blk virtio5: 1/0/0 default/read/poll queues Jan 30 13:59:46.002430 kernel: virtio_blk virtio5: [vdb] 932 512-byte logical blocks (477 kB/466 KiB) Jan 30 13:59:46.043581 kernel: libata version 3.00 loaded. 
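The GPT warnings above are the usual first-boot situation for a cloud image: the backup GPT header still sits at LBA 9289727, the end of the original ~4.4 GiB image, while the provisioned virtual disk has 125829120 sectors (64.4 GB / 60.0 GiB), so the kernel flags the mismatch until the table is rewritten (the disk-uuid service a little further down updates the headers). The arithmetic, using the two LBAs from the log:

# Work out what the "GPT:9289727 != 125829119" warning above means in bytes.
SECTOR = 512
image_sectors = 9_289_727 + 1      # backup-header LBA from the original image, +1
disk_sectors = 125_829_119 + 1     # last LBA of the provisioned disk, +1

print(f"original image : {image_sectors * SECTOR / 2**30:.2f} GiB")   # ~4.43 GiB
print(f"droplet disk   : {disk_sectors * SECTOR / 10**9:.1f} GB "
      f"({disk_sectors * SECTOR / 2**30:.1f} GiB)")                   # 64.4 GB / 60.0 GiB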
Jan 30 13:59:46.049570 kernel: ata_piix 0000:00:01.1: version 2.13 Jan 30 13:59:46.068783 kernel: scsi host1: ata_piix Jan 30 13:59:46.072502 kernel: scsi host2: ata_piix Jan 30 13:59:46.072761 kernel: ata1: PATA max MWDMA2 cmd 0x1f0 ctl 0x3f6 bmdma 0xc1e0 irq 14 Jan 30 13:59:46.072777 kernel: ata2: PATA max MWDMA2 cmd 0x170 ctl 0x376 bmdma 0xc1e8 irq 15 Jan 30 13:59:46.072789 kernel: ACPI: bus type USB registered Jan 30 13:59:46.072801 kernel: usbcore: registered new interface driver usbfs Jan 30 13:59:46.072821 kernel: usbcore: registered new interface driver hub Jan 30 13:59:46.072832 kernel: usbcore: registered new device driver usb Jan 30 13:59:46.079407 systemd[1]: Finished systemd-vconsole-setup.service - Virtual Console Setup. Jan 30 13:59:46.089720 systemd[1]: Starting dracut-cmdline-ask.service - dracut ask for additional cmdline parameters... Jan 30 13:59:46.104729 kernel: BTRFS: device label OEM devid 1 transid 12 /dev/vda6 scanned by (udev-worker) (449) Jan 30 13:59:46.104757 kernel: BTRFS: device fsid 64bb5b5a-85cc-41cc-a02b-2cfaa3e93b0a devid 1 transid 38 /dev/vda3 scanned by (udev-worker) (450) Jan 30 13:59:46.113074 systemd[1]: Found device dev-disk-by\x2dlabel-EFI\x2dSYSTEM.device - /dev/disk/by-label/EFI-SYSTEM. Jan 30 13:59:46.120273 systemd[1]: Found device dev-disk-by\x2dlabel-ROOT.device - /dev/disk/by-label/ROOT. Jan 30 13:59:46.120982 systemd[1]: Finished dracut-cmdline-ask.service - dracut ask for additional cmdline parameters. Jan 30 13:59:46.125537 systemd[1]: Found device dev-disk-by\x2dlabel-OEM.device - /dev/disk/by-label/OEM. Jan 30 13:59:46.129026 systemd[1]: Found device dev-disk-by\x2dpartuuid-7130c94a\x2d213a\x2d4e5a\x2d8e26\x2d6cce9662f132.device - /dev/disk/by-partuuid/7130c94a-213a-4e5a-8e26-6cce9662f132. Jan 30 13:59:46.129462 systemd[1]: Found device dev-disk-by\x2dpartlabel-USR\x2dA.device - /dev/disk/by-partlabel/USR-A. Jan 30 13:59:46.134704 systemd[1]: Starting disk-uuid.service - Generate new UUID for disk GPT if necessary... Jan 30 13:59:46.150931 disk-uuid[548]: Primary Header is updated. Jan 30 13:59:46.150931 disk-uuid[548]: Secondary Entries is updated. Jan 30 13:59:46.150931 disk-uuid[548]: Secondary Header is updated. Jan 30 13:59:46.158573 kernel: vda: vda1 vda2 vda3 vda4 vda6 vda7 vda9 Jan 30 13:59:46.168592 kernel: vda: vda1 vda2 vda3 vda4 vda6 vda7 vda9 Jan 30 13:59:46.278518 kernel: uhci_hcd 0000:00:01.2: UHCI Host Controller Jan 30 13:59:46.285717 kernel: uhci_hcd 0000:00:01.2: new USB bus registered, assigned bus number 1 Jan 30 13:59:46.285862 kernel: uhci_hcd 0000:00:01.2: detected 2 ports Jan 30 13:59:46.285978 kernel: uhci_hcd 0000:00:01.2: irq 11, io port 0x0000c180 Jan 30 13:59:46.286089 kernel: hub 1-0:1.0: USB hub found Jan 30 13:59:46.286231 kernel: hub 1-0:1.0: 2 ports detected Jan 30 13:59:47.167572 kernel: vda: vda1 vda2 vda3 vda4 vda6 vda7 vda9 Jan 30 13:59:47.168070 disk-uuid[549]: The operation has completed successfully. Jan 30 13:59:47.210202 systemd[1]: disk-uuid.service: Deactivated successfully. Jan 30 13:59:47.210355 systemd[1]: Finished disk-uuid.service - Generate new UUID for disk GPT if necessary. Jan 30 13:59:47.223836 systemd[1]: Starting verity-setup.service - Verity Setup for /dev/mapper/usr... Jan 30 13:59:47.229435 sh[560]: Success Jan 30 13:59:47.243646 kernel: device-mapper: verity: sha256 using implementation "sha256-avx2" Jan 30 13:59:47.308926 systemd[1]: Found device dev-mapper-usr.device - /dev/mapper/usr. Jan 30 13:59:47.322755 systemd[1]: Mounting sysusr-usr.mount - /sysusr/usr... 
Jan 30 13:59:47.325260 systemd[1]: Finished verity-setup.service - Verity Setup for /dev/mapper/usr. Jan 30 13:59:47.347859 kernel: BTRFS info (device dm-0): first mount of filesystem 64bb5b5a-85cc-41cc-a02b-2cfaa3e93b0a Jan 30 13:59:47.347955 kernel: BTRFS info (device dm-0): using crc32c (crc32c-intel) checksum algorithm Jan 30 13:59:47.348904 kernel: BTRFS warning (device dm-0): 'nologreplay' is deprecated, use 'rescue=nologreplay' instead Jan 30 13:59:47.350587 kernel: BTRFS info (device dm-0): disabling log replay at mount time Jan 30 13:59:47.350640 kernel: BTRFS info (device dm-0): using free space tree Jan 30 13:59:47.360290 systemd[1]: Mounted sysusr-usr.mount - /sysusr/usr. Jan 30 13:59:47.361931 systemd[1]: afterburn-network-kargs.service - Afterburn Initrd Setup Network Kernel Arguments was skipped because no trigger condition checks were met. Jan 30 13:59:47.368850 systemd[1]: Starting ignition-setup.service - Ignition (setup)... Jan 30 13:59:47.372788 systemd[1]: Starting parse-ip-for-networkd.service - Write systemd-networkd units from cmdline... Jan 30 13:59:47.383023 kernel: BTRFS info (device vda6): first mount of filesystem aa75aabd-8755-4402-b4b6-23093345fe03 Jan 30 13:59:47.383093 kernel: BTRFS info (device vda6): using crc32c (crc32c-intel) checksum algorithm Jan 30 13:59:47.383120 kernel: BTRFS info (device vda6): using free space tree Jan 30 13:59:47.389612 kernel: BTRFS info (device vda6): auto enabling async discard Jan 30 13:59:47.401597 kernel: BTRFS info (device vda6): last unmount of filesystem aa75aabd-8755-4402-b4b6-23093345fe03 Jan 30 13:59:47.402012 systemd[1]: mnt-oem.mount: Deactivated successfully. Jan 30 13:59:47.409812 systemd[1]: Finished ignition-setup.service - Ignition (setup). Jan 30 13:59:47.416841 systemd[1]: Starting ignition-fetch-offline.service - Ignition (fetch-offline)... Jan 30 13:59:47.520300 systemd[1]: Finished parse-ip-for-networkd.service - Write systemd-networkd units from cmdline. Jan 30 13:59:47.531805 systemd[1]: Starting systemd-networkd.service - Network Configuration... Jan 30 13:59:47.569577 ignition[649]: Ignition 2.19.0 Jan 30 13:59:47.569604 ignition[649]: Stage: fetch-offline Jan 30 13:59:47.569674 ignition[649]: no configs at "/usr/lib/ignition/base.d" Jan 30 13:59:47.569693 ignition[649]: no config dir at "/usr/lib/ignition/base.platform.d/digitalocean" Jan 30 13:59:47.569917 ignition[649]: parsed url from cmdline: "" Jan 30 13:59:47.572834 systemd[1]: Finished ignition-fetch-offline.service - Ignition (fetch-offline). Jan 30 13:59:47.569923 ignition[649]: no config URL provided Jan 30 13:59:47.569933 ignition[649]: reading system config file "/usr/lib/ignition/user.ign" Jan 30 13:59:47.569944 ignition[649]: no config at "/usr/lib/ignition/user.ign" Jan 30 13:59:47.569951 ignition[649]: failed to fetch config: resource requires networking Jan 30 13:59:47.570519 ignition[649]: Ignition finished successfully Jan 30 13:59:47.586528 systemd-networkd[745]: lo: Link UP Jan 30 13:59:47.586541 systemd-networkd[745]: lo: Gained carrier Jan 30 13:59:47.588742 systemd-networkd[745]: Enumeration completed Jan 30 13:59:47.588869 systemd[1]: Started systemd-networkd.service - Network Configuration. Jan 30 13:59:47.589512 systemd-networkd[745]: eth0: found matching network '/usr/lib/systemd/network/yy-digitalocean.network', based on potentially unpredictable interface name. Jan 30 13:59:47.589516 systemd-networkd[745]: eth0: Configuring with /usr/lib/systemd/network/yy-digitalocean.network. 
Jan 30 13:59:47.590151 systemd-networkd[745]: eth1: found matching network '/usr/lib/systemd/network/zz-default.network', based on potentially unpredictable interface name. Jan 30 13:59:47.590155 systemd-networkd[745]: eth1: Configuring with /usr/lib/systemd/network/zz-default.network. Jan 30 13:59:47.591055 systemd[1]: Reached target network.target - Network. Jan 30 13:59:47.591818 systemd-networkd[745]: eth0: Link UP Jan 30 13:59:47.591821 systemd-networkd[745]: eth0: Gained carrier Jan 30 13:59:47.591830 systemd-networkd[745]: eth0: found matching network '/usr/lib/systemd/network/yy-digitalocean.network', based on potentially unpredictable interface name. Jan 30 13:59:47.595001 systemd-networkd[745]: eth1: Link UP Jan 30 13:59:47.595006 systemd-networkd[745]: eth1: Gained carrier Jan 30 13:59:47.595017 systemd-networkd[745]: eth1: found matching network '/usr/lib/systemd/network/zz-default.network', based on potentially unpredictable interface name. Jan 30 13:59:47.599981 systemd[1]: Starting ignition-fetch.service - Ignition (fetch)... Jan 30 13:59:47.606648 systemd-networkd[745]: eth1: DHCPv4 address 10.124.0.6/20 acquired from 169.254.169.253 Jan 30 13:59:47.611651 systemd-networkd[745]: eth0: DHCPv4 address 164.92.66.128/20, gateway 164.92.64.1 acquired from 169.254.169.253 Jan 30 13:59:47.623036 ignition[752]: Ignition 2.19.0 Jan 30 13:59:47.623047 ignition[752]: Stage: fetch Jan 30 13:59:47.623307 ignition[752]: no configs at "/usr/lib/ignition/base.d" Jan 30 13:59:47.623332 ignition[752]: no config dir at "/usr/lib/ignition/base.platform.d/digitalocean" Jan 30 13:59:47.623473 ignition[752]: parsed url from cmdline: "" Jan 30 13:59:47.623477 ignition[752]: no config URL provided Jan 30 13:59:47.623483 ignition[752]: reading system config file "/usr/lib/ignition/user.ign" Jan 30 13:59:47.623492 ignition[752]: no config at "/usr/lib/ignition/user.ign" Jan 30 13:59:47.623516 ignition[752]: GET http://169.254.169.254/metadata/v1/user-data: attempt #1 Jan 30 13:59:47.637275 ignition[752]: GET result: OK Jan 30 13:59:47.637399 ignition[752]: parsing config with SHA512: c06dcaa79530e6f8a30791575a7f28af2a6f04407f3b3b42afe1c89384748d842ede6b117f01b74cf05f5abc875927de83d039802360099a6956cebda7772161 Jan 30 13:59:47.641737 unknown[752]: fetched base config from "system" Jan 30 13:59:47.641753 unknown[752]: fetched base config from "system" Jan 30 13:59:47.642227 ignition[752]: fetch: fetch complete Jan 30 13:59:47.641770 unknown[752]: fetched user config from "digitalocean" Jan 30 13:59:47.642232 ignition[752]: fetch: fetch passed Jan 30 13:59:47.644232 systemd[1]: Finished ignition-fetch.service - Ignition (fetch). Jan 30 13:59:47.642297 ignition[752]: Ignition finished successfully Jan 30 13:59:47.649821 systemd[1]: Starting ignition-kargs.service - Ignition (kargs)... Jan 30 13:59:47.684493 ignition[759]: Ignition 2.19.0 Jan 30 13:59:47.684504 ignition[759]: Stage: kargs Jan 30 13:59:47.684732 ignition[759]: no configs at "/usr/lib/ignition/base.d" Jan 30 13:59:47.684744 ignition[759]: no config dir at "/usr/lib/ignition/base.platform.d/digitalocean" Jan 30 13:59:47.685939 ignition[759]: kargs: kargs passed Jan 30 13:59:47.686000 ignition[759]: Ignition finished successfully Jan 30 13:59:47.687754 systemd[1]: Finished ignition-kargs.service - Ignition (kargs). Jan 30 13:59:47.693810 systemd[1]: Starting ignition-disks.service - Ignition (disks)... 
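The Ignition "fetch" stage above pulls the user-provided config from DigitalOcean's link-local metadata service (GET http://169.254.169.254/metadata/v1/user-data) and logs the SHA512 of what it received. A rough equivalent is sketched below; it is illustrative only, not Ignition's actual code, and it can only run from inside a droplet since the address is link-local:

# Illustrative only: fetch the droplet user-data the way the log above shows
# Ignition doing, and print its SHA512 (Ignition logs the same digest).
# Works only from inside a DigitalOcean droplet (link-local metadata address).
import hashlib
import urllib.request

URL = "http://169.254.169.254/metadata/v1/user-data"  # from the log above

with urllib.request.urlopen(URL, timeout=5) as resp:
    body = resp.read()

print(f"{len(body)} bytes, sha512 {hashlib.sha512(body).hexdigest()}")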
Jan 30 13:59:47.717102 ignition[765]: Ignition 2.19.0 Jan 30 13:59:47.717117 ignition[765]: Stage: disks Jan 30 13:59:47.717297 ignition[765]: no configs at "/usr/lib/ignition/base.d" Jan 30 13:59:47.717309 ignition[765]: no config dir at "/usr/lib/ignition/base.platform.d/digitalocean" Jan 30 13:59:47.718016 ignition[765]: disks: disks passed Jan 30 13:59:47.720054 systemd[1]: Finished ignition-disks.service - Ignition (disks). Jan 30 13:59:47.718064 ignition[765]: Ignition finished successfully Jan 30 13:59:47.724619 systemd[1]: Reached target initrd-root-device.target - Initrd Root Device. Jan 30 13:59:47.725052 systemd[1]: Reached target local-fs-pre.target - Preparation for Local File Systems. Jan 30 13:59:47.725816 systemd[1]: Reached target local-fs.target - Local File Systems. Jan 30 13:59:47.726627 systemd[1]: Reached target sysinit.target - System Initialization. Jan 30 13:59:47.727388 systemd[1]: Reached target basic.target - Basic System. Jan 30 13:59:47.738848 systemd[1]: Starting systemd-fsck-root.service - File System Check on /dev/disk/by-label/ROOT... Jan 30 13:59:47.756093 systemd-fsck[773]: ROOT: clean, 14/553520 files, 52654/553472 blocks Jan 30 13:59:47.759690 systemd[1]: Finished systemd-fsck-root.service - File System Check on /dev/disk/by-label/ROOT. Jan 30 13:59:47.763714 systemd[1]: Mounting sysroot.mount - /sysroot... Jan 30 13:59:47.870564 kernel: EXT4-fs (vda9): mounted filesystem 9f41abed-fd12-4e57-bcd4-5c0ef7f8a1bf r/w with ordered data mode. Quota mode: none. Jan 30 13:59:47.871072 systemd[1]: Mounted sysroot.mount - /sysroot. Jan 30 13:59:47.871978 systemd[1]: Reached target initrd-root-fs.target - Initrd Root File System. Jan 30 13:59:47.883712 systemd[1]: Mounting sysroot-oem.mount - /sysroot/oem... Jan 30 13:59:47.886614 systemd[1]: Mounting sysroot-usr.mount - /sysroot/usr... Jan 30 13:59:47.888753 systemd[1]: Starting flatcar-digitalocean-network.service - Flatcar DigitalOcean Network Agent... Jan 30 13:59:47.894770 systemd[1]: Starting flatcar-metadata-hostname.service - Flatcar Metadata Hostname Agent... Jan 30 13:59:47.895809 systemd[1]: ignition-remount-sysroot.service - Remount /sysroot read-write for Ignition was skipped because of an unmet condition check (ConditionPathIsReadWrite=!/sysroot). Jan 30 13:59:47.895845 systemd[1]: Reached target ignition-diskful.target - Ignition Boot Disk Setup. Jan 30 13:59:47.901290 systemd[1]: Mounted sysroot-usr.mount - /sysroot/usr. Jan 30 13:59:47.905789 kernel: BTRFS: device label OEM devid 1 transid 13 /dev/vda6 scanned by mount (781) Jan 30 13:59:47.905816 kernel: BTRFS info (device vda6): first mount of filesystem aa75aabd-8755-4402-b4b6-23093345fe03 Jan 30 13:59:47.905829 kernel: BTRFS info (device vda6): using crc32c (crc32c-intel) checksum algorithm Jan 30 13:59:47.905841 kernel: BTRFS info (device vda6): using free space tree Jan 30 13:59:47.909979 systemd[1]: Starting initrd-setup-root.service - Root filesystem setup... Jan 30 13:59:47.914577 kernel: BTRFS info (device vda6): auto enabling async discard Jan 30 13:59:47.925523 systemd[1]: Mounted sysroot-oem.mount - /sysroot/oem. 
Jan 30 13:59:47.990978 initrd-setup-root[811]: cut: /sysroot/etc/passwd: No such file or directory Jan 30 13:59:47.992804 coreos-metadata[783]: Jan 30 13:59:47.991 INFO Fetching http://169.254.169.254/metadata/v1.json: Attempt #1 Jan 30 13:59:47.997499 coreos-metadata[784]: Jan 30 13:59:47.997 INFO Fetching http://169.254.169.254/metadata/v1.json: Attempt #1 Jan 30 13:59:48.001228 initrd-setup-root[818]: cut: /sysroot/etc/group: No such file or directory Jan 30 13:59:48.002822 coreos-metadata[783]: Jan 30 13:59:48.002 INFO Fetch successful Jan 30 13:59:48.007374 initrd-setup-root[825]: cut: /sysroot/etc/shadow: No such file or directory Jan 30 13:59:48.009940 coreos-metadata[784]: Jan 30 13:59:48.009 INFO Fetch successful Jan 30 13:59:48.010926 systemd[1]: flatcar-digitalocean-network.service: Deactivated successfully. Jan 30 13:59:48.011026 systemd[1]: Finished flatcar-digitalocean-network.service - Flatcar DigitalOcean Network Agent. Jan 30 13:59:48.014823 coreos-metadata[784]: Jan 30 13:59:48.011 INFO wrote hostname ci-4081.3.0-f-9922ae6042 to /sysroot/etc/hostname Jan 30 13:59:48.012698 systemd[1]: Finished flatcar-metadata-hostname.service - Flatcar Metadata Hostname Agent. Jan 30 13:59:48.017724 initrd-setup-root[834]: cut: /sysroot/etc/gshadow: No such file or directory Jan 30 13:59:48.114276 systemd[1]: Finished initrd-setup-root.service - Root filesystem setup. Jan 30 13:59:48.122700 systemd[1]: Starting ignition-mount.service - Ignition (mount)... Jan 30 13:59:48.127839 systemd[1]: Starting sysroot-boot.service - /sysroot/boot... Jan 30 13:59:48.136624 kernel: BTRFS info (device vda6): last unmount of filesystem aa75aabd-8755-4402-b4b6-23093345fe03 Jan 30 13:59:48.157704 systemd[1]: Finished sysroot-boot.service - /sysroot/boot. Jan 30 13:59:48.168263 ignition[902]: INFO : Ignition 2.19.0 Jan 30 13:59:48.169563 ignition[902]: INFO : Stage: mount Jan 30 13:59:48.169563 ignition[902]: INFO : no configs at "/usr/lib/ignition/base.d" Jan 30 13:59:48.169563 ignition[902]: INFO : no config dir at "/usr/lib/ignition/base.platform.d/digitalocean" Jan 30 13:59:48.171719 ignition[902]: INFO : mount: mount passed Jan 30 13:59:48.171719 ignition[902]: INFO : Ignition finished successfully Jan 30 13:59:48.171452 systemd[1]: Finished ignition-mount.service - Ignition (mount). Jan 30 13:59:48.177689 systemd[1]: Starting ignition-files.service - Ignition (files)... Jan 30 13:59:48.347834 systemd[1]: sysroot-oem.mount: Deactivated successfully. Jan 30 13:59:48.355811 systemd[1]: Mounting sysroot-oem.mount - /sysroot/oem... Jan 30 13:59:48.366583 kernel: BTRFS: device label OEM devid 1 transid 14 /dev/vda6 scanned by mount (913) Jan 30 13:59:48.366649 kernel: BTRFS info (device vda6): first mount of filesystem aa75aabd-8755-4402-b4b6-23093345fe03 Jan 30 13:59:48.368883 kernel: BTRFS info (device vda6): using crc32c (crc32c-intel) checksum algorithm Jan 30 13:59:48.368962 kernel: BTRFS info (device vda6): using free space tree Jan 30 13:59:48.374631 kernel: BTRFS info (device vda6): auto enabling async discard Jan 30 13:59:48.376712 systemd[1]: Mounted sysroot-oem.mount - /sysroot/oem. 
Jan 30 13:59:48.408778 ignition[930]: INFO : Ignition 2.19.0 Jan 30 13:59:48.408778 ignition[930]: INFO : Stage: files Jan 30 13:59:48.409907 ignition[930]: INFO : no configs at "/usr/lib/ignition/base.d" Jan 30 13:59:48.409907 ignition[930]: INFO : no config dir at "/usr/lib/ignition/base.platform.d/digitalocean" Jan 30 13:59:48.410979 ignition[930]: DEBUG : files: compiled without relabeling support, skipping Jan 30 13:59:48.411678 ignition[930]: INFO : files: ensureUsers: op(1): [started] creating or modifying user "core" Jan 30 13:59:48.411678 ignition[930]: DEBUG : files: ensureUsers: op(1): executing: "usermod" "--root" "/sysroot" "core" Jan 30 13:59:48.415962 ignition[930]: INFO : files: ensureUsers: op(1): [finished] creating or modifying user "core" Jan 30 13:59:48.416762 ignition[930]: INFO : files: ensureUsers: op(2): [started] adding ssh keys to user "core" Jan 30 13:59:48.417305 ignition[930]: INFO : files: ensureUsers: op(2): [finished] adding ssh keys to user "core" Jan 30 13:59:48.417163 unknown[930]: wrote ssh authorized keys file for user: core Jan 30 13:59:48.418713 ignition[930]: INFO : files: createFilesystemsFiles: createFiles: op(3): [started] writing file "/sysroot/home/core/install.sh" Jan 30 13:59:48.419391 ignition[930]: INFO : files: createFilesystemsFiles: createFiles: op(3): [finished] writing file "/sysroot/home/core/install.sh" Jan 30 13:59:48.419391 ignition[930]: INFO : files: createFilesystemsFiles: createFiles: op(4): [started] writing file "/sysroot/etc/flatcar/update.conf" Jan 30 13:59:48.419391 ignition[930]: INFO : files: createFilesystemsFiles: createFiles: op(4): [finished] writing file "/sysroot/etc/flatcar/update.conf" Jan 30 13:59:48.419391 ignition[930]: INFO : files: createFilesystemsFiles: createFiles: op(5): [started] writing link "/sysroot/etc/extensions/kubernetes.raw" -> "/opt/extensions/kubernetes/kubernetes-v1.30.1-x86-64.raw" Jan 30 13:59:48.419391 ignition[930]: INFO : files: createFilesystemsFiles: createFiles: op(5): [finished] writing link "/sysroot/etc/extensions/kubernetes.raw" -> "/opt/extensions/kubernetes/kubernetes-v1.30.1-x86-64.raw" Jan 30 13:59:48.419391 ignition[930]: INFO : files: createFilesystemsFiles: createFiles: op(6): [started] writing file "/sysroot/opt/extensions/kubernetes/kubernetes-v1.30.1-x86-64.raw" Jan 30 13:59:48.419391 ignition[930]: INFO : files: createFilesystemsFiles: createFiles: op(6): GET https://github.com/flatcar/sysext-bakery/releases/download/latest/kubernetes-v1.30.1-x86-64.raw: attempt #1 Jan 30 13:59:48.779929 ignition[930]: INFO : files: createFilesystemsFiles: createFiles: op(6): GET result: OK Jan 30 13:59:49.038181 ignition[930]: INFO : files: createFilesystemsFiles: createFiles: op(6): [finished] writing file "/sysroot/opt/extensions/kubernetes/kubernetes-v1.30.1-x86-64.raw" Jan 30 13:59:49.040820 ignition[930]: INFO : files: createResultFile: createFiles: op(7): [started] writing file "/sysroot/etc/.ignition-result.json" Jan 30 13:59:49.040820 ignition[930]: INFO : files: createResultFile: createFiles: op(7): [finished] writing file "/sysroot/etc/.ignition-result.json" Jan 30 13:59:49.040820 ignition[930]: INFO : files: files passed Jan 30 13:59:49.040820 ignition[930]: INFO : Ignition finished successfully Jan 30 13:59:49.041147 systemd[1]: Finished ignition-files.service - Ignition (files). Jan 30 13:59:49.046784 systemd[1]: Starting ignition-quench.service - Ignition (record completion)... 
Jan 30 13:59:49.050787 systemd[1]: Starting initrd-setup-root-after-ignition.service - Root filesystem completion... Jan 30 13:59:49.058868 systemd[1]: ignition-quench.service: Deactivated successfully. Jan 30 13:59:49.058985 systemd[1]: Finished ignition-quench.service - Ignition (record completion). Jan 30 13:59:49.068412 initrd-setup-root-after-ignition[959]: grep: /sysroot/etc/flatcar/enabled-sysext.conf: No such file or directory Jan 30 13:59:49.070779 initrd-setup-root-after-ignition[959]: grep: /sysroot/usr/share/flatcar/enabled-sysext.conf: No such file or directory Jan 30 13:59:49.071647 initrd-setup-root-after-ignition[963]: grep: /sysroot/etc/flatcar/enabled-sysext.conf: No such file or directory Jan 30 13:59:49.073539 systemd[1]: Finished initrd-setup-root-after-ignition.service - Root filesystem completion. Jan 30 13:59:49.074203 systemd[1]: Reached target ignition-complete.target - Ignition Complete. Jan 30 13:59:49.080839 systemd[1]: Starting initrd-parse-etc.service - Mountpoints Configured in the Real Root... Jan 30 13:59:49.123795 systemd[1]: initrd-parse-etc.service: Deactivated successfully. Jan 30 13:59:49.124724 systemd[1]: Finished initrd-parse-etc.service - Mountpoints Configured in the Real Root. Jan 30 13:59:49.127056 systemd[1]: Reached target initrd-fs.target - Initrd File Systems. Jan 30 13:59:49.127639 systemd[1]: Reached target initrd.target - Initrd Default Target. Jan 30 13:59:49.128729 systemd[1]: dracut-mount.service - dracut mount hook was skipped because no trigger condition checks were met. Jan 30 13:59:49.134966 systemd[1]: Starting dracut-pre-pivot.service - dracut pre-pivot and cleanup hook... Jan 30 13:59:49.151385 systemd[1]: Finished dracut-pre-pivot.service - dracut pre-pivot and cleanup hook. Jan 30 13:59:49.157808 systemd[1]: Starting initrd-cleanup.service - Cleaning Up and Shutting Down Daemons... Jan 30 13:59:49.168662 systemd[1]: Stopped target nss-lookup.target - Host and Network Name Lookups. Jan 30 13:59:49.169922 systemd[1]: Stopped target remote-cryptsetup.target - Remote Encrypted Volumes. Jan 30 13:59:49.171066 systemd[1]: Stopped target timers.target - Timer Units. Jan 30 13:59:49.171922 systemd[1]: dracut-pre-pivot.service: Deactivated successfully. Jan 30 13:59:49.172055 systemd[1]: Stopped dracut-pre-pivot.service - dracut pre-pivot and cleanup hook. Jan 30 13:59:49.173662 systemd[1]: Stopped target initrd.target - Initrd Default Target. Jan 30 13:59:49.174157 systemd[1]: Stopped target basic.target - Basic System. Jan 30 13:59:49.174541 systemd[1]: Stopped target ignition-complete.target - Ignition Complete. Jan 30 13:59:49.174997 systemd[1]: Stopped target ignition-diskful.target - Ignition Boot Disk Setup. Jan 30 13:59:49.175918 systemd[1]: Stopped target initrd-root-device.target - Initrd Root Device. Jan 30 13:59:49.176675 systemd[1]: Stopped target remote-fs.target - Remote File Systems. Jan 30 13:59:49.177398 systemd[1]: Stopped target remote-fs-pre.target - Preparation for Remote File Systems. Jan 30 13:59:49.178161 systemd[1]: Stopped target sysinit.target - System Initialization. Jan 30 13:59:49.178955 systemd[1]: Stopped target local-fs.target - Local File Systems. Jan 30 13:59:49.179685 systemd[1]: Stopped target swap.target - Swaps. Jan 30 13:59:49.180381 systemd[1]: dracut-pre-mount.service: Deactivated successfully. Jan 30 13:59:49.180561 systemd[1]: Stopped dracut-pre-mount.service - dracut pre-mount hook. Jan 30 13:59:49.181516 systemd[1]: Stopped target cryptsetup.target - Local Encrypted Volumes. 
Jan 30 13:59:49.182459 systemd[1]: Stopped target cryptsetup-pre.target - Local Encrypted Volumes (Pre). Jan 30 13:59:49.183083 systemd[1]: clevis-luks-askpass.path: Deactivated successfully. Jan 30 13:59:49.183322 systemd[1]: Stopped clevis-luks-askpass.path - Forward Password Requests to Clevis Directory Watch. Jan 30 13:59:49.184084 systemd[1]: dracut-initqueue.service: Deactivated successfully. Jan 30 13:59:49.184216 systemd[1]: Stopped dracut-initqueue.service - dracut initqueue hook. Jan 30 13:59:49.185168 systemd[1]: initrd-setup-root-after-ignition.service: Deactivated successfully. Jan 30 13:59:49.185290 systemd[1]: Stopped initrd-setup-root-after-ignition.service - Root filesystem completion. Jan 30 13:59:49.186194 systemd[1]: ignition-files.service: Deactivated successfully. Jan 30 13:59:49.186295 systemd[1]: Stopped ignition-files.service - Ignition (files). Jan 30 13:59:49.187060 systemd[1]: flatcar-metadata-hostname.service: Deactivated successfully. Jan 30 13:59:49.187168 systemd[1]: Stopped flatcar-metadata-hostname.service - Flatcar Metadata Hostname Agent. Jan 30 13:59:49.193876 systemd[1]: Stopping ignition-mount.service - Ignition (mount)... Jan 30 13:59:49.194420 systemd[1]: kmod-static-nodes.service: Deactivated successfully. Jan 30 13:59:49.194686 systemd[1]: Stopped kmod-static-nodes.service - Create List of Static Device Nodes. Jan 30 13:59:49.197728 systemd[1]: Stopping sysroot-boot.service - /sysroot/boot... Jan 30 13:59:49.198143 systemd[1]: systemd-udev-trigger.service: Deactivated successfully. Jan 30 13:59:49.198286 systemd[1]: Stopped systemd-udev-trigger.service - Coldplug All udev Devices. Jan 30 13:59:49.201032 systemd[1]: dracut-pre-trigger.service: Deactivated successfully. Jan 30 13:59:49.201142 systemd[1]: Stopped dracut-pre-trigger.service - dracut pre-trigger hook. Jan 30 13:59:49.210031 systemd[1]: initrd-cleanup.service: Deactivated successfully. Jan 30 13:59:49.210607 systemd[1]: Finished initrd-cleanup.service - Cleaning Up and Shutting Down Daemons. Jan 30 13:59:49.213815 systemd-networkd[745]: eth1: Gained IPv6LL Jan 30 13:59:49.215773 ignition[984]: INFO : Ignition 2.19.0 Jan 30 13:59:49.215773 ignition[984]: INFO : Stage: umount Jan 30 13:59:49.217573 ignition[984]: INFO : no configs at "/usr/lib/ignition/base.d" Jan 30 13:59:49.217573 ignition[984]: INFO : no config dir at "/usr/lib/ignition/base.platform.d/digitalocean" Jan 30 13:59:49.218869 ignition[984]: INFO : umount: umount passed Jan 30 13:59:49.218869 ignition[984]: INFO : Ignition finished successfully Jan 30 13:59:49.220936 systemd[1]: ignition-mount.service: Deactivated successfully. Jan 30 13:59:49.221482 systemd[1]: Stopped ignition-mount.service - Ignition (mount). Jan 30 13:59:49.222700 systemd[1]: ignition-disks.service: Deactivated successfully. Jan 30 13:59:49.222751 systemd[1]: Stopped ignition-disks.service - Ignition (disks). Jan 30 13:59:49.224195 systemd[1]: ignition-kargs.service: Deactivated successfully. Jan 30 13:59:49.224276 systemd[1]: Stopped ignition-kargs.service - Ignition (kargs). Jan 30 13:59:49.224718 systemd[1]: ignition-fetch.service: Deactivated successfully. Jan 30 13:59:49.224763 systemd[1]: Stopped ignition-fetch.service - Ignition (fetch). Jan 30 13:59:49.225124 systemd[1]: Stopped target network.target - Network. Jan 30 13:59:49.225908 systemd[1]: ignition-fetch-offline.service: Deactivated successfully. Jan 30 13:59:49.225957 systemd[1]: Stopped ignition-fetch-offline.service - Ignition (fetch-offline). 
Jan 30 13:59:49.228686 systemd[1]: Stopped target paths.target - Path Units. Jan 30 13:59:49.229113 systemd[1]: systemd-ask-password-console.path: Deactivated successfully. Jan 30 13:59:49.234656 systemd[1]: Stopped systemd-ask-password-console.path - Dispatch Password Requests to Console Directory Watch. Jan 30 13:59:49.235107 systemd[1]: Stopped target slices.target - Slice Units. Jan 30 13:59:49.235409 systemd[1]: Stopped target sockets.target - Socket Units. Jan 30 13:59:49.235792 systemd[1]: iscsid.socket: Deactivated successfully. Jan 30 13:59:49.235858 systemd[1]: Closed iscsid.socket - Open-iSCSI iscsid Socket. Jan 30 13:59:49.236248 systemd[1]: iscsiuio.socket: Deactivated successfully. Jan 30 13:59:49.236314 systemd[1]: Closed iscsiuio.socket - Open-iSCSI iscsiuio Socket. Jan 30 13:59:49.238717 systemd[1]: ignition-setup.service: Deactivated successfully. Jan 30 13:59:49.238804 systemd[1]: Stopped ignition-setup.service - Ignition (setup). Jan 30 13:59:49.239794 systemd[1]: ignition-setup-pre.service: Deactivated successfully. Jan 30 13:59:49.239864 systemd[1]: Stopped ignition-setup-pre.service - Ignition env setup. Jan 30 13:59:49.240886 systemd[1]: Stopping systemd-networkd.service - Network Configuration... Jan 30 13:59:49.244839 systemd[1]: Stopping systemd-resolved.service - Network Name Resolution... Jan 30 13:59:49.246827 systemd[1]: sysroot-boot.mount: Deactivated successfully. Jan 30 13:59:49.247394 systemd[1]: sysroot-boot.service: Deactivated successfully. Jan 30 13:59:49.247503 systemd[1]: Stopped sysroot-boot.service - /sysroot/boot. Jan 30 13:59:49.249239 systemd[1]: systemd-resolved.service: Deactivated successfully. Jan 30 13:59:49.249843 systemd[1]: Stopped systemd-resolved.service - Network Name Resolution. Jan 30 13:59:49.250681 systemd-networkd[745]: eth0: DHCPv6 lease lost Jan 30 13:59:49.253447 systemd[1]: initrd-setup-root.service: Deactivated successfully. Jan 30 13:59:49.253633 systemd-networkd[745]: eth1: DHCPv6 lease lost Jan 30 13:59:49.253936 systemd[1]: Stopped initrd-setup-root.service - Root filesystem setup. Jan 30 13:59:49.254862 systemd[1]: systemd-tmpfiles-setup.service: Deactivated successfully. Jan 30 13:59:49.254910 systemd[1]: Stopped systemd-tmpfiles-setup.service - Create System Files and Directories. Jan 30 13:59:49.256127 systemd[1]: systemd-networkd.service: Deactivated successfully. Jan 30 13:59:49.256226 systemd[1]: Stopped systemd-networkd.service - Network Configuration. Jan 30 13:59:49.258381 systemd[1]: systemd-networkd.socket: Deactivated successfully. Jan 30 13:59:49.258431 systemd[1]: Closed systemd-networkd.socket - Network Service Netlink Socket. Jan 30 13:59:49.263656 systemd[1]: Stopping network-cleanup.service - Network Cleanup... Jan 30 13:59:49.264034 systemd[1]: parse-ip-for-networkd.service: Deactivated successfully. Jan 30 13:59:49.264094 systemd[1]: Stopped parse-ip-for-networkd.service - Write systemd-networkd units from cmdline. Jan 30 13:59:49.264573 systemd[1]: systemd-sysctl.service: Deactivated successfully. Jan 30 13:59:49.264615 systemd[1]: Stopped systemd-sysctl.service - Apply Kernel Variables. Jan 30 13:59:49.266920 systemd[1]: systemd-modules-load.service: Deactivated successfully. Jan 30 13:59:49.266976 systemd[1]: Stopped systemd-modules-load.service - Load Kernel Modules. Jan 30 13:59:49.267474 systemd[1]: Stopping systemd-udevd.service - Rule-based Manager for Device Events and Files... Jan 30 13:59:49.284430 systemd[1]: systemd-udevd.service: Deactivated successfully. 
Jan 30 13:59:49.285089 systemd[1]: Stopped systemd-udevd.service - Rule-based Manager for Device Events and Files. Jan 30 13:59:49.285800 systemd[1]: network-cleanup.service: Deactivated successfully. Jan 30 13:59:49.285884 systemd[1]: Stopped network-cleanup.service - Network Cleanup. Jan 30 13:59:49.287651 systemd[1]: systemd-udevd-control.socket: Deactivated successfully. Jan 30 13:59:49.287730 systemd[1]: Closed systemd-udevd-control.socket - udev Control Socket. Jan 30 13:59:49.288191 systemd[1]: systemd-udevd-kernel.socket: Deactivated successfully. Jan 30 13:59:49.288238 systemd[1]: Closed systemd-udevd-kernel.socket - udev Kernel Socket. Jan 30 13:59:49.288989 systemd[1]: dracut-pre-udev.service: Deactivated successfully. Jan 30 13:59:49.289039 systemd[1]: Stopped dracut-pre-udev.service - dracut pre-udev hook. Jan 30 13:59:49.290198 systemd[1]: dracut-cmdline.service: Deactivated successfully. Jan 30 13:59:49.290241 systemd[1]: Stopped dracut-cmdline.service - dracut cmdline hook. Jan 30 13:59:49.290952 systemd[1]: dracut-cmdline-ask.service: Deactivated successfully. Jan 30 13:59:49.290994 systemd[1]: Stopped dracut-cmdline-ask.service - dracut ask for additional cmdline parameters. Jan 30 13:59:49.296830 systemd[1]: Starting initrd-udevadm-cleanup-db.service - Cleanup udev Database... Jan 30 13:59:49.297268 systemd[1]: systemd-tmpfiles-setup-dev.service: Deactivated successfully. Jan 30 13:59:49.297330 systemd[1]: Stopped systemd-tmpfiles-setup-dev.service - Create Static Device Nodes in /dev. Jan 30 13:59:49.298940 systemd[1]: systemd-vconsole-setup.service: Deactivated successfully. Jan 30 13:59:49.298989 systemd[1]: Stopped systemd-vconsole-setup.service - Virtual Console Setup. Jan 30 13:59:49.305209 systemd[1]: initrd-udevadm-cleanup-db.service: Deactivated successfully. Jan 30 13:59:49.305363 systemd[1]: Finished initrd-udevadm-cleanup-db.service - Cleanup udev Database. Jan 30 13:59:49.306416 systemd[1]: Reached target initrd-switch-root.target - Switch Root. Jan 30 13:59:49.310754 systemd[1]: Starting initrd-switch-root.service - Switch Root... Jan 30 13:59:49.321040 systemd[1]: Switching root. Jan 30 13:59:49.355327 systemd-journald[184]: Journal stopped Jan 30 13:59:50.424661 systemd-journald[184]: Received SIGTERM from PID 1 (systemd). Jan 30 13:59:50.424733 kernel: SELinux: policy capability network_peer_controls=1 Jan 30 13:59:50.424748 kernel: SELinux: policy capability open_perms=1 Jan 30 13:59:50.424764 kernel: SELinux: policy capability extended_socket_class=1 Jan 30 13:59:50.424776 kernel: SELinux: policy capability always_check_network=0 Jan 30 13:59:50.424788 kernel: SELinux: policy capability cgroup_seclabel=1 Jan 30 13:59:50.424799 kernel: SELinux: policy capability nnp_nosuid_transition=1 Jan 30 13:59:50.424814 kernel: SELinux: policy capability genfs_seclabel_symlinks=0 Jan 30 13:59:50.424829 kernel: SELinux: policy capability ioctl_skip_cloexec=0 Jan 30 13:59:50.424854 kernel: audit: type=1403 audit(1738245589.545:2): auid=4294967295 ses=4294967295 lsm=selinux res=1 Jan 30 13:59:50.424874 systemd[1]: Successfully loaded SELinux policy in 38.672ms. Jan 30 13:59:50.424901 systemd[1]: Relabeled /dev, /dev/shm, /run, /sys/fs/cgroup in 13.013ms. 
Jan 30 13:59:50.424920 systemd[1]: systemd 255 running in system mode (+PAM +AUDIT +SELINUX -APPARMOR +IMA +SMACK +SECCOMP +GCRYPT -GNUTLS +OPENSSL -ACL +BLKID +CURL +ELFUTILS -FIDO2 +IDN2 -IDN +IPTC +KMOD +LIBCRYPTSETUP +LIBFDISK +PCRE2 -PWQUALITY -P11KIT -QRENCODE +TPM2 +BZIP2 +LZ4 +XZ +ZLIB +ZSTD -BPF_FRAMEWORK -XKBCOMMON +UTMP -SYSVINIT default-hierarchy=unified) Jan 30 13:59:50.424940 systemd[1]: Detected virtualization kvm. Jan 30 13:59:50.424953 systemd[1]: Detected architecture x86-64. Jan 30 13:59:50.424969 systemd[1]: Detected first boot. Jan 30 13:59:50.424982 systemd[1]: Hostname set to . Jan 30 13:59:50.424995 systemd[1]: Initializing machine ID from VM UUID. Jan 30 13:59:50.425007 zram_generator::config[1026]: No configuration found. Jan 30 13:59:50.425025 systemd[1]: Populated /etc with preset unit settings. Jan 30 13:59:50.425037 systemd[1]: initrd-switch-root.service: Deactivated successfully. Jan 30 13:59:50.425050 systemd[1]: Stopped initrd-switch-root.service - Switch Root. Jan 30 13:59:50.425062 systemd[1]: systemd-journald.service: Scheduled restart job, restart counter is at 1. Jan 30 13:59:50.425076 systemd[1]: Created slice system-addon\x2dconfig.slice - Slice /system/addon-config. Jan 30 13:59:50.425091 systemd[1]: Created slice system-addon\x2drun.slice - Slice /system/addon-run. Jan 30 13:59:50.425103 systemd[1]: Created slice system-getty.slice - Slice /system/getty. Jan 30 13:59:50.425115 systemd[1]: Created slice system-modprobe.slice - Slice /system/modprobe. Jan 30 13:59:50.425128 systemd[1]: Created slice system-serial\x2dgetty.slice - Slice /system/serial-getty. Jan 30 13:59:50.425144 systemd[1]: Created slice system-system\x2dcloudinit.slice - Slice /system/system-cloudinit. Jan 30 13:59:50.425156 systemd[1]: Created slice system-systemd\x2dfsck.slice - Slice /system/systemd-fsck. Jan 30 13:59:50.425168 systemd[1]: Created slice user.slice - User and Session Slice. Jan 30 13:59:50.425181 systemd[1]: Started clevis-luks-askpass.path - Forward Password Requests to Clevis Directory Watch. Jan 30 13:59:50.425199 systemd[1]: Started systemd-ask-password-console.path - Dispatch Password Requests to Console Directory Watch. Jan 30 13:59:50.425212 systemd[1]: Started systemd-ask-password-wall.path - Forward Password Requests to Wall Directory Watch. Jan 30 13:59:50.425225 systemd[1]: Set up automount boot.automount - Boot partition Automount Point. Jan 30 13:59:50.425237 systemd[1]: Set up automount proc-sys-fs-binfmt_misc.automount - Arbitrary Executable File Formats File System Automount Point. Jan 30 13:59:50.425251 systemd[1]: Expecting device dev-disk-by\x2dlabel-OEM.device - /dev/disk/by-label/OEM... Jan 30 13:59:50.425263 systemd[1]: Expecting device dev-ttyS0.device - /dev/ttyS0... Jan 30 13:59:50.425275 systemd[1]: Reached target cryptsetup-pre.target - Local Encrypted Volumes (Pre). Jan 30 13:59:50.425287 systemd[1]: Stopped target initrd-switch-root.target - Switch Root. Jan 30 13:59:50.425303 systemd[1]: Stopped target initrd-fs.target - Initrd File Systems. Jan 30 13:59:50.425315 systemd[1]: Stopped target initrd-root-fs.target - Initrd Root File System. Jan 30 13:59:50.425329 systemd[1]: Reached target integritysetup.target - Local Integrity Protected Volumes. Jan 30 13:59:50.425348 systemd[1]: Reached target remote-cryptsetup.target - Remote Encrypted Volumes. Jan 30 13:59:50.425367 systemd[1]: Reached target remote-fs.target - Remote File Systems. Jan 30 13:59:50.425386 systemd[1]: Reached target slices.target - Slice Units. 
Jan 30 13:59:50.425407 systemd[1]: Reached target swap.target - Swaps. Jan 30 13:59:50.425426 systemd[1]: Reached target veritysetup.target - Local Verity Protected Volumes. Jan 30 13:59:50.425450 systemd[1]: Listening on systemd-coredump.socket - Process Core Dump Socket. Jan 30 13:59:50.425467 systemd[1]: Listening on systemd-networkd.socket - Network Service Netlink Socket. Jan 30 13:59:50.425480 systemd[1]: Listening on systemd-udevd-control.socket - udev Control Socket. Jan 30 13:59:50.425493 systemd[1]: Listening on systemd-udevd-kernel.socket - udev Kernel Socket. Jan 30 13:59:50.425506 systemd[1]: Listening on systemd-userdbd.socket - User Database Manager Socket. Jan 30 13:59:50.425518 systemd[1]: Mounting dev-hugepages.mount - Huge Pages File System... Jan 30 13:59:50.425531 systemd[1]: Mounting dev-mqueue.mount - POSIX Message Queue File System... Jan 30 13:59:50.425565 systemd[1]: Mounting media.mount - External Media Directory... Jan 30 13:59:50.425579 systemd[1]: proc-xen.mount - /proc/xen was skipped because of an unmet condition check (ConditionVirtualization=xen). Jan 30 13:59:50.425595 systemd[1]: Mounting sys-kernel-debug.mount - Kernel Debug File System... Jan 30 13:59:50.425607 systemd[1]: Mounting sys-kernel-tracing.mount - Kernel Trace File System... Jan 30 13:59:50.425619 systemd[1]: Mounting tmp.mount - Temporary Directory /tmp... Jan 30 13:59:50.425632 systemd[1]: var-lib-machines.mount - Virtual Machine and Container Storage (Compatibility) was skipped because of an unmet condition check (ConditionPathExists=/var/lib/machines.raw). Jan 30 13:59:50.425645 systemd[1]: Reached target machines.target - Containers. Jan 30 13:59:50.425658 systemd[1]: Starting flatcar-tmpfiles.service - Create missing system files... Jan 30 13:59:50.425671 systemd[1]: ignition-delete-config.service - Ignition (delete config) was skipped because no trigger condition checks were met. Jan 30 13:59:50.425683 systemd[1]: Starting kmod-static-nodes.service - Create List of Static Device Nodes... Jan 30 13:59:50.425698 systemd[1]: Starting modprobe@configfs.service - Load Kernel Module configfs... Jan 30 13:59:50.425710 systemd[1]: Starting modprobe@dm_mod.service - Load Kernel Module dm_mod... Jan 30 13:59:50.425722 systemd[1]: Starting modprobe@drm.service - Load Kernel Module drm... Jan 30 13:59:50.425735 systemd[1]: Starting modprobe@efi_pstore.service - Load Kernel Module efi_pstore... Jan 30 13:59:50.425747 systemd[1]: Starting modprobe@fuse.service - Load Kernel Module fuse... Jan 30 13:59:50.425760 systemd[1]: Starting modprobe@loop.service - Load Kernel Module loop... Jan 30 13:59:50.425773 systemd[1]: setup-nsswitch.service - Create /etc/nsswitch.conf was skipped because of an unmet condition check (ConditionPathExists=!/etc/nsswitch.conf). Jan 30 13:59:50.425785 systemd[1]: systemd-fsck-root.service: Deactivated successfully. Jan 30 13:59:50.425800 systemd[1]: Stopped systemd-fsck-root.service - File System Check on Root Device. Jan 30 13:59:50.425812 systemd[1]: systemd-fsck-usr.service: Deactivated successfully. Jan 30 13:59:50.425825 systemd[1]: Stopped systemd-fsck-usr.service. Jan 30 13:59:50.425837 kernel: fuse: init (API version 7.39) Jan 30 13:59:50.425849 systemd[1]: Starting systemd-journald.service - Journal Service... Jan 30 13:59:50.425861 systemd[1]: Starting systemd-modules-load.service - Load Kernel Modules... Jan 30 13:59:50.425874 systemd[1]: Starting systemd-network-generator.service - Generate network units from Kernel command line... 
Jan 30 13:59:50.425886 systemd[1]: Starting systemd-remount-fs.service - Remount Root and Kernel File Systems... Jan 30 13:59:50.425912 systemd[1]: Starting systemd-udev-trigger.service - Coldplug All udev Devices... Jan 30 13:59:50.425925 systemd[1]: verity-setup.service: Deactivated successfully. Jan 30 13:59:50.425939 kernel: loop: module loaded Jan 30 13:59:50.425951 systemd[1]: Stopped verity-setup.service. Jan 30 13:59:50.425964 systemd[1]: xenserver-pv-version.service - Set fake PV driver version for XenServer was skipped because of an unmet condition check (ConditionVirtualization=xen). Jan 30 13:59:50.425976 systemd[1]: Mounted dev-hugepages.mount - Huge Pages File System. Jan 30 13:59:50.425988 systemd[1]: Mounted dev-mqueue.mount - POSIX Message Queue File System. Jan 30 13:59:50.426000 systemd[1]: Mounted media.mount - External Media Directory. Jan 30 13:59:50.426013 systemd[1]: Mounted sys-kernel-debug.mount - Kernel Debug File System. Jan 30 13:59:50.426027 systemd[1]: Mounted sys-kernel-tracing.mount - Kernel Trace File System. Jan 30 13:59:50.426039 systemd[1]: Mounted tmp.mount - Temporary Directory /tmp. Jan 30 13:59:50.426052 systemd[1]: Finished kmod-static-nodes.service - Create List of Static Device Nodes. Jan 30 13:59:50.426064 systemd[1]: modprobe@configfs.service: Deactivated successfully. Jan 30 13:59:50.426076 systemd[1]: Finished modprobe@configfs.service - Load Kernel Module configfs. Jan 30 13:59:50.426128 systemd-journald[1106]: Collecting audit messages is disabled. Jan 30 13:59:50.426167 systemd[1]: modprobe@dm_mod.service: Deactivated successfully. Jan 30 13:59:50.426185 systemd[1]: Finished modprobe@dm_mod.service - Load Kernel Module dm_mod. Jan 30 13:59:50.426207 systemd-journald[1106]: Journal started Jan 30 13:59:50.426242 systemd-journald[1106]: Runtime Journal (/run/log/journal/3ac7da44287c4977a116416d5b87ea04) is 4.9M, max 39.3M, 34.4M free. Jan 30 13:59:50.147220 systemd[1]: Queued start job for default target multi-user.target. Jan 30 13:59:50.168141 systemd[1]: Unnecessary job was removed for dev-vda6.device - /dev/vda6. Jan 30 13:59:50.168760 systemd[1]: systemd-journald.service: Deactivated successfully. Jan 30 13:59:50.429563 systemd[1]: Started systemd-journald.service - Journal Service. Jan 30 13:59:50.431887 systemd[1]: modprobe@efi_pstore.service: Deactivated successfully. Jan 30 13:59:50.432066 systemd[1]: Finished modprobe@efi_pstore.service - Load Kernel Module efi_pstore. Jan 30 13:59:50.432841 systemd[1]: modprobe@fuse.service: Deactivated successfully. Jan 30 13:59:50.432988 systemd[1]: Finished modprobe@fuse.service - Load Kernel Module fuse. Jan 30 13:59:50.434458 systemd[1]: modprobe@loop.service: Deactivated successfully. Jan 30 13:59:50.435317 systemd[1]: Finished modprobe@loop.service - Load Kernel Module loop. Jan 30 13:59:50.437029 systemd[1]: Finished systemd-modules-load.service - Load Kernel Modules. Jan 30 13:59:50.466573 kernel: ACPI: bus type drm_connector registered Jan 30 13:59:50.467826 systemd[1]: Mounting sys-fs-fuse-connections.mount - FUSE Control File System... Jan 30 13:59:50.475752 systemd[1]: Mounting sys-kernel-config.mount - Kernel Configuration File System... Jan 30 13:59:50.476412 systemd[1]: systemd-repart.service - Repartition Root Disk was skipped because no trigger condition checks were met. Jan 30 13:59:50.485730 systemd[1]: Starting systemd-sysctl.service - Apply Kernel Variables... Jan 30 13:59:50.487799 systemd[1]: modprobe@drm.service: Deactivated successfully. 
Jan 30 13:59:50.488643 systemd[1]: Finished modprobe@drm.service - Load Kernel Module drm. Jan 30 13:59:50.491956 systemd[1]: Finished systemd-network-generator.service - Generate network units from Kernel command line. Jan 30 13:59:50.492934 systemd[1]: Finished systemd-remount-fs.service - Remount Root and Kernel File Systems. Jan 30 13:59:50.494013 systemd[1]: Mounted sys-fs-fuse-connections.mount - FUSE Control File System. Jan 30 13:59:50.494969 systemd[1]: Mounted sys-kernel-config.mount - Kernel Configuration File System. Jan 30 13:59:50.499917 systemd[1]: Finished flatcar-tmpfiles.service - Create missing system files. Jan 30 13:59:50.506085 systemd[1]: Reached target network-pre.target - Preparation for Network. Jan 30 13:59:50.508014 systemd[1]: remount-root.service - Remount Root File System was skipped because of an unmet condition check (ConditionPathIsReadWrite=!/). Jan 30 13:59:50.508055 systemd[1]: Reached target local-fs.target - Local File Systems. Jan 30 13:59:50.511864 systemd[1]: Listening on systemd-sysext.socket - System Extension Image Management (Varlink). Jan 30 13:59:50.518485 systemd[1]: Starting dracut-shutdown.service - Restore /run/initramfs on shutdown... Jan 30 13:59:50.525891 systemd[1]: Starting ldconfig.service - Rebuild Dynamic Linker Cache... Jan 30 13:59:50.526717 systemd[1]: systemd-binfmt.service - Set Up Additional Binary Formats was skipped because no trigger condition checks were met. Jan 30 13:59:50.536760 systemd[1]: Starting systemd-hwdb-update.service - Rebuild Hardware Database... Jan 30 13:59:50.538685 systemd[1]: Starting systemd-journal-flush.service - Flush Journal to Persistent Storage... Jan 30 13:59:50.544877 systemd[1]: systemd-pstore.service - Platform Persistent Storage Archival was skipped because of an unmet condition check (ConditionDirectoryNotEmpty=/sys/fs/pstore). Jan 30 13:59:50.555010 systemd[1]: Starting systemd-random-seed.service - Load/Save OS Random Seed... Jan 30 13:59:50.564498 systemd[1]: Starting systemd-sysext.service - Merge System Extension Images into /usr/ and /opt/... Jan 30 13:59:50.587886 systemd[1]: Starting systemd-sysusers.service - Create System Users... Jan 30 13:59:50.590100 systemd[1]: Finished systemd-sysctl.service - Apply Kernel Variables. Jan 30 13:59:50.593064 systemd[1]: Finished dracut-shutdown.service - Restore /run/initramfs on shutdown. Jan 30 13:59:50.606612 kernel: loop0: detected capacity change from 0 to 140768 Jan 30 13:59:50.611699 systemd-journald[1106]: Time spent on flushing to /var/log/journal/3ac7da44287c4977a116416d5b87ea04 is 82.643ms for 971 entries. Jan 30 13:59:50.611699 systemd-journald[1106]: System Journal (/var/log/journal/3ac7da44287c4977a116416d5b87ea04) is 8.0M, max 195.6M, 187.6M free. Jan 30 13:59:50.731476 systemd-journald[1106]: Received client request to flush runtime journal. Jan 30 13:59:50.731526 kernel: squashfs: version 4.0 (2009/01/31) Phillip Lougher Jan 30 13:59:50.731566 kernel: loop1: detected capacity change from 0 to 210664 Jan 30 13:59:50.626978 systemd[1]: Finished systemd-random-seed.service - Load/Save OS Random Seed. Jan 30 13:59:50.628536 systemd[1]: Reached target first-boot-complete.target - First Boot Complete. Jan 30 13:59:50.639220 systemd[1]: Starting systemd-machine-id-commit.service - Commit a transient machine-id on disk... Jan 30 13:59:50.689019 systemd[1]: Finished systemd-udev-trigger.service - Coldplug All udev Devices. 
Jan 30 13:59:50.702847 systemd[1]: Starting systemd-udev-settle.service - Wait for udev To Complete Device Initialization... Jan 30 13:59:50.712999 systemd[1]: Finished systemd-sysusers.service - Create System Users. Jan 30 13:59:50.719982 systemd[1]: Starting systemd-tmpfiles-setup-dev.service - Create Static Device Nodes in /dev... Jan 30 13:59:50.734777 systemd[1]: Finished systemd-journal-flush.service - Flush Journal to Persistent Storage. Jan 30 13:59:50.744595 systemd[1]: etc-machine\x2did.mount: Deactivated successfully. Jan 30 13:59:50.745970 systemd[1]: Finished systemd-machine-id-commit.service - Commit a transient machine-id on disk. Jan 30 13:59:50.756745 kernel: loop2: detected capacity change from 0 to 142488 Jan 30 13:59:50.761954 udevadm[1161]: systemd-udev-settle.service is deprecated. Please fix lvm2-activation.service, lvm2-activation-early.service not to pull it in. Jan 30 13:59:50.806206 systemd-tmpfiles[1164]: ACLs are not supported, ignoring. Jan 30 13:59:50.806232 systemd-tmpfiles[1164]: ACLs are not supported, ignoring. Jan 30 13:59:50.816999 kernel: loop3: detected capacity change from 0 to 8 Jan 30 13:59:50.839038 systemd[1]: Finished systemd-tmpfiles-setup-dev.service - Create Static Device Nodes in /dev. Jan 30 13:59:50.848225 kernel: loop4: detected capacity change from 0 to 140768 Jan 30 13:59:50.876585 kernel: loop5: detected capacity change from 0 to 210664 Jan 30 13:59:50.893063 kernel: loop6: detected capacity change from 0 to 142488 Jan 30 13:59:50.913601 kernel: loop7: detected capacity change from 0 to 8 Jan 30 13:59:50.917714 (sd-merge)[1173]: Using extensions 'containerd-flatcar', 'docker-flatcar', 'kubernetes', 'oem-digitalocean'. Jan 30 13:59:50.918249 (sd-merge)[1173]: Merged extensions into '/usr'. Jan 30 13:59:50.928935 systemd[1]: Reloading requested from client PID 1151 ('systemd-sysext') (unit systemd-sysext.service)... Jan 30 13:59:50.928952 systemd[1]: Reloading... Jan 30 13:59:51.091592 zram_generator::config[1199]: No configuration found. Jan 30 13:59:51.217339 ldconfig[1146]: /sbin/ldconfig: /lib/ld.so.conf is not an ELF file - it has the wrong magic bytes at the start. Jan 30 13:59:51.242210 systemd[1]: /usr/lib/systemd/system/docker.socket:6: ListenStream= references a path below legacy directory /var/run/, updating /var/run/docker.sock → /run/docker.sock; please update the unit file accordingly. Jan 30 13:59:51.293381 systemd[1]: Reloading finished in 363 ms. Jan 30 13:59:51.323493 systemd[1]: Finished ldconfig.service - Rebuild Dynamic Linker Cache. Jan 30 13:59:51.325002 systemd[1]: Finished systemd-sysext.service - Merge System Extension Images into /usr/ and /opt/. Jan 30 13:59:51.336940 systemd[1]: Starting ensure-sysext.service... Jan 30 13:59:51.340790 systemd[1]: Starting systemd-tmpfiles-setup.service - Create System Files and Directories... Jan 30 13:59:51.359634 systemd[1]: Reloading requested from client PID 1242 ('systemctl') (unit ensure-sysext.service)... Jan 30 13:59:51.359663 systemd[1]: Reloading... Jan 30 13:59:51.402000 systemd-tmpfiles[1243]: /usr/lib/tmpfiles.d/provision.conf:20: Duplicate line for path "/root", ignoring. Jan 30 13:59:51.402747 systemd-tmpfiles[1243]: /usr/lib/tmpfiles.d/systemd-flatcar.conf:6: Duplicate line for path "/var/log/journal", ignoring. Jan 30 13:59:51.404240 systemd-tmpfiles[1243]: /usr/lib/tmpfiles.d/systemd.conf:29: Duplicate line for path "/var/lib/systemd", ignoring. Jan 30 13:59:51.404727 systemd-tmpfiles[1243]: ACLs are not supported, ignoring. 
Jan 30 13:59:51.404824 systemd-tmpfiles[1243]: ACLs are not supported, ignoring. Jan 30 13:59:51.413015 systemd-tmpfiles[1243]: Detected autofs mount point /boot during canonicalization of boot. Jan 30 13:59:51.413029 systemd-tmpfiles[1243]: Skipping /boot Jan 30 13:59:51.436944 systemd-tmpfiles[1243]: Detected autofs mount point /boot during canonicalization of boot. Jan 30 13:59:51.436959 systemd-tmpfiles[1243]: Skipping /boot Jan 30 13:59:51.489579 zram_generator::config[1272]: No configuration found. Jan 30 13:59:51.639205 systemd[1]: /usr/lib/systemd/system/docker.socket:6: ListenStream= references a path below legacy directory /var/run/, updating /var/run/docker.sock → /run/docker.sock; please update the unit file accordingly. Jan 30 13:59:51.691232 systemd[1]: Reloading finished in 331 ms. Jan 30 13:59:51.705540 systemd[1]: Finished systemd-hwdb-update.service - Rebuild Hardware Database. Jan 30 13:59:51.706799 systemd[1]: Finished systemd-tmpfiles-setup.service - Create System Files and Directories. Jan 30 13:59:51.727896 systemd[1]: Starting audit-rules.service - Load Security Auditing Rules... Jan 30 13:59:51.731847 systemd[1]: Starting clean-ca-certificates.service - Clean up broken links in /etc/ssl/certs... Jan 30 13:59:51.739828 systemd[1]: Starting systemd-journal-catalog-update.service - Rebuild Journal Catalog... Jan 30 13:59:51.744303 systemd[1]: Starting systemd-resolved.service - Network Name Resolution... Jan 30 13:59:51.749910 systemd[1]: Starting systemd-udevd.service - Rule-based Manager for Device Events and Files... Jan 30 13:59:51.760369 systemd[1]: Starting systemd-update-utmp.service - Record System Boot/Shutdown in UTMP... Jan 30 13:59:51.767893 systemd[1]: Starting systemd-userdbd.service - User Database Manager... Jan 30 13:59:51.771728 systemd[1]: proc-xen.mount - /proc/xen was skipped because of an unmet condition check (ConditionVirtualization=xen). Jan 30 13:59:51.772001 systemd[1]: ignition-delete-config.service - Ignition (delete config) was skipped because no trigger condition checks were met. Jan 30 13:59:51.779834 systemd[1]: Starting modprobe@dm_mod.service - Load Kernel Module dm_mod... Jan 30 13:59:51.784094 systemd[1]: Starting modprobe@efi_pstore.service - Load Kernel Module efi_pstore... Jan 30 13:59:51.795897 systemd[1]: Starting modprobe@loop.service - Load Kernel Module loop... Jan 30 13:59:51.797740 systemd[1]: systemd-binfmt.service - Set Up Additional Binary Formats was skipped because no trigger condition checks were met. Jan 30 13:59:51.797892 systemd[1]: xenserver-pv-version.service - Set fake PV driver version for XenServer was skipped because of an unmet condition check (ConditionVirtualization=xen). Jan 30 13:59:51.800314 systemd[1]: proc-xen.mount - /proc/xen was skipped because of an unmet condition check (ConditionVirtualization=xen). Jan 30 13:59:51.800507 systemd[1]: ignition-delete-config.service - Ignition (delete config) was skipped because no trigger condition checks were met. Jan 30 13:59:51.801716 systemd[1]: systemd-binfmt.service - Set Up Additional Binary Formats was skipped because no trigger condition checks were met. Jan 30 13:59:51.801816 systemd[1]: xenserver-pv-version.service - Set fake PV driver version for XenServer was skipped because of an unmet condition check (ConditionVirtualization=xen). Jan 30 13:59:51.805066 systemd[1]: proc-xen.mount - /proc/xen was skipped because of an unmet condition check (ConditionVirtualization=xen). 
Jan 30 13:59:51.805292 systemd[1]: ignition-delete-config.service - Ignition (delete config) was skipped because no trigger condition checks were met. Jan 30 13:59:51.815225 systemd[1]: Starting modprobe@drm.service - Load Kernel Module drm... Jan 30 13:59:51.816920 systemd[1]: systemd-binfmt.service - Set Up Additional Binary Formats was skipped because no trigger condition checks were met. Jan 30 13:59:51.817105 systemd[1]: xenserver-pv-version.service - Set fake PV driver version for XenServer was skipped because of an unmet condition check (ConditionVirtualization=xen). Jan 30 13:59:51.820422 systemd[1]: Finished ensure-sysext.service. Jan 30 13:59:51.836555 systemd[1]: Starting systemd-timesyncd.service - Network Time Synchronization... Jan 30 13:59:51.852962 systemd[1]: Finished systemd-update-utmp.service - Record System Boot/Shutdown in UTMP. Jan 30 13:59:51.868289 systemd[1]: modprobe@efi_pstore.service: Deactivated successfully. Jan 30 13:59:51.868467 systemd[1]: Finished modprobe@efi_pstore.service - Load Kernel Module efi_pstore. Jan 30 13:59:51.870926 augenrules[1345]: No rules Jan 30 13:59:51.872746 systemd[1]: Finished clean-ca-certificates.service - Clean up broken links in /etc/ssl/certs. Jan 30 13:59:51.875728 systemd[1]: Finished audit-rules.service - Load Security Auditing Rules. Jan 30 13:59:51.876737 systemd[1]: modprobe@loop.service: Deactivated successfully. Jan 30 13:59:51.876896 systemd[1]: Finished modprobe@loop.service - Load Kernel Module loop. Jan 30 13:59:51.879474 systemd[1]: modprobe@dm_mod.service: Deactivated successfully. Jan 30 13:59:51.879982 systemd[1]: Finished modprobe@dm_mod.service - Load Kernel Module dm_mod. Jan 30 13:59:51.880741 systemd[1]: modprobe@drm.service: Deactivated successfully. Jan 30 13:59:51.880879 systemd[1]: Finished modprobe@drm.service - Load Kernel Module drm. Jan 30 13:59:51.882675 systemd[1]: systemd-pstore.service - Platform Persistent Storage Archival was skipped because of an unmet condition check (ConditionDirectoryNotEmpty=/sys/fs/pstore). Jan 30 13:59:51.882790 systemd[1]: systemd-repart.service - Repartition Root Disk was skipped because no trigger condition checks were met. Jan 30 13:59:51.882843 systemd[1]: update-ca-certificates.service - Update CA bundle at /etc/ssl/certs/ca-certificates.crt was skipped because of an unmet condition check (ConditionPathIsSymbolicLink=!/etc/ssl/certs/ca-certificates.crt). Jan 30 13:59:51.883359 systemd-udevd[1325]: Using default interface naming scheme 'v255'. Jan 30 13:59:51.886367 systemd[1]: Started systemd-userdbd.service - User Database Manager. Jan 30 13:59:51.890757 systemd[1]: Finished systemd-journal-catalog-update.service - Rebuild Journal Catalog. Jan 30 13:59:51.900811 systemd[1]: Starting systemd-update-done.service - Update is Completed... Jan 30 13:59:51.924883 systemd[1]: Started systemd-udevd.service - Rule-based Manager for Device Events and Files. Jan 30 13:59:51.930774 systemd[1]: Starting systemd-networkd.service - Network Configuration... Jan 30 13:59:51.958900 systemd[1]: Finished systemd-update-done.service - Update is Completed. Jan 30 13:59:52.021885 systemd-networkd[1366]: lo: Link UP Jan 30 13:59:52.022228 systemd-networkd[1366]: lo: Gained carrier Jan 30 13:59:52.023294 systemd-networkd[1366]: Enumeration completed Jan 30 13:59:52.023856 systemd[1]: Started systemd-networkd.service - Network Configuration. Jan 30 13:59:52.035153 systemd[1]: Starting systemd-networkd-wait-online.service - Wait for Network to be Configured... 
Jan 30 13:59:52.070759 systemd[1]: Mounting media-configdrive.mount - /media/configdrive... Jan 30 13:59:52.071330 systemd[1]: proc-xen.mount - /proc/xen was skipped because of an unmet condition check (ConditionVirtualization=xen). Jan 30 13:59:52.071501 systemd[1]: ignition-delete-config.service - Ignition (delete config) was skipped because no trigger condition checks were met. Jan 30 13:59:52.073203 systemd[1]: Starting modprobe@dm_mod.service - Load Kernel Module dm_mod... Jan 30 13:59:52.076770 systemd[1]: Starting modprobe@efi_pstore.service - Load Kernel Module efi_pstore... Jan 30 13:59:52.084882 systemd[1]: Starting modprobe@loop.service - Load Kernel Module loop... Jan 30 13:59:52.085415 systemd[1]: systemd-binfmt.service - Set Up Additional Binary Formats was skipped because no trigger condition checks were met. Jan 30 13:59:52.085457 systemd[1]: update-ca-certificates.service - Update CA bundle at /etc/ssl/certs/ca-certificates.crt was skipped because of an unmet condition check (ConditionPathIsSymbolicLink=!/etc/ssl/certs/ca-certificates.crt). Jan 30 13:59:52.085474 systemd[1]: xenserver-pv-version.service - Set fake PV driver version for XenServer was skipped because of an unmet condition check (ConditionVirtualization=xen). Jan 30 13:59:52.085798 systemd[1]: Condition check resulted in dev-ttyS0.device - /dev/ttyS0 being skipped. Jan 30 13:59:52.087638 systemd[1]: Started systemd-timesyncd.service - Network Time Synchronization. Jan 30 13:59:52.088147 systemd[1]: Reached target time-set.target - System Time Set. Jan 30 13:59:52.090417 systemd-resolved[1324]: Positive Trust Anchors: Jan 30 13:59:52.090429 systemd-resolved[1324]: . IN DS 20326 8 2 e06d44b80b8f1d39a95c0b0d7c65d08458e880409bbc683457104237c7f8ec8d Jan 30 13:59:52.090465 systemd-resolved[1324]: Negative trust anchors: home.arpa 10.in-addr.arpa 16.172.in-addr.arpa 17.172.in-addr.arpa 18.172.in-addr.arpa 19.172.in-addr.arpa 20.172.in-addr.arpa 21.172.in-addr.arpa 22.172.in-addr.arpa 23.172.in-addr.arpa 24.172.in-addr.arpa 25.172.in-addr.arpa 26.172.in-addr.arpa 27.172.in-addr.arpa 28.172.in-addr.arpa 29.172.in-addr.arpa 30.172.in-addr.arpa 31.172.in-addr.arpa 170.0.0.192.in-addr.arpa 171.0.0.192.in-addr.arpa 168.192.in-addr.arpa d.f.ip6.arpa ipv4only.arpa resolver.arpa corp home internal intranet lan local private test Jan 30 13:59:52.101155 systemd[1]: modprobe@loop.service: Deactivated successfully. Jan 30 13:59:52.101751 systemd[1]: Finished modprobe@loop.service - Load Kernel Module loop. Jan 30 13:59:52.109614 kernel: ISO 9660 Extensions: RRIP_1991A Jan 30 13:59:52.112741 systemd[1]: Mounted media-configdrive.mount - /media/configdrive. Jan 30 13:59:52.114078 systemd-resolved[1324]: Using system hostname 'ci-4081.3.0-f-9922ae6042'. Jan 30 13:59:52.116310 systemd[1]: Started systemd-resolved.service - Network Name Resolution. Jan 30 13:59:52.116984 systemd[1]: Reached target network.target - Network. Jan 30 13:59:52.117611 systemd[1]: Reached target nss-lookup.target - Host and Network Name Lookups. Jan 30 13:59:52.128869 systemd[1]: modprobe@dm_mod.service: Deactivated successfully. Jan 30 13:59:52.129027 systemd[1]: Finished modprobe@dm_mod.service - Load Kernel Module dm_mod. Jan 30 13:59:52.129638 systemd[1]: systemd-repart.service - Repartition Root Disk was skipped because no trigger condition checks were met. Jan 30 13:59:52.137653 systemd[1]: modprobe@efi_pstore.service: Deactivated successfully. Jan 30 13:59:52.137830 systemd[1]: Finished modprobe@efi_pstore.service - Load Kernel Module efi_pstore. 
Jan 30 13:59:52.138473 systemd[1]: systemd-pstore.service - Platform Persistent Storage Archival was skipped because of an unmet condition check (ConditionDirectoryNotEmpty=/sys/fs/pstore). Jan 30 13:59:52.147576 kernel: BTRFS warning: duplicate device /dev/vda3 devid 1 generation 38 scanned by (udev-worker) (1371) Jan 30 13:59:52.188591 kernel: piix4_smbus 0000:00:01.3: SMBus Host Controller at 0x700, revision 0 Jan 30 13:59:52.201634 kernel: input: Power Button as /devices/LNXSYSTM:00/LNXPWRBN:00/input/input2 Jan 30 13:59:52.216617 kernel: ACPI: button: Power Button [PWRF] Jan 30 13:59:52.224622 kernel: input: ImExPS/2 Generic Explorer Mouse as /devices/platform/i8042/serio1/input/input3 Jan 30 13:59:52.237277 systemd-networkd[1366]: eth0: Configuring with /run/systemd/network/10-56:45:d8:ab:f0:6b.network. Jan 30 13:59:52.239121 systemd-networkd[1366]: eth0: Link UP Jan 30 13:59:52.239208 systemd-networkd[1366]: eth0: Gained carrier Jan 30 13:59:52.246378 systemd-timesyncd[1338]: Network configuration changed, trying to establish connection. Jan 30 13:59:52.271438 systemd-networkd[1366]: eth1: Configuring with /run/systemd/network/10-8e:a1:c6:fe:90:62.network. Jan 30 13:59:52.272869 systemd-networkd[1366]: eth1: Link UP Jan 30 13:59:52.273500 systemd-networkd[1366]: eth1: Gained carrier Jan 30 13:59:52.296587 kernel: mousedev: PS/2 mouse device common for all mice Jan 30 13:59:52.300035 kernel: [drm] pci: virtio-vga detected at 0000:00:02.0 Jan 30 13:59:52.300120 kernel: virtio-pci 0000:00:02.0: vgaarb: deactivate vga console Jan 30 13:59:52.304924 systemd[1]: Starting systemd-vconsole-setup.service - Virtual Console Setup... Jan 30 13:59:52.305599 kernel: Console: switching to colour dummy device 80x25 Jan 30 13:59:52.314788 kernel: [drm] features: -virgl +edid -resource_blob -host_visible Jan 30 13:59:52.314919 kernel: [drm] features: -context_init Jan 30 13:59:53.005058 systemd-timesyncd[1338]: Contacted time server 135.148.100.14:123 (0.flatcar.pool.ntp.org). Jan 30 13:59:53.005140 systemd-timesyncd[1338]: Initial clock synchronization to Thu 2025-01-30 13:59:53.004879 UTC. Jan 30 13:59:53.005217 systemd-resolved[1324]: Clock change detected. Flushing caches. Jan 30 13:59:53.007342 kernel: [drm] number of scanouts: 1 Jan 30 13:59:53.007414 kernel: [drm] number of cap sets: 0 Jan 30 13:59:53.015739 kernel: [drm] Initialized virtio_gpu 0.1.0 0 for 0000:00:02.0 on minor 0 Jan 30 13:59:53.024078 systemd[1]: Found device dev-disk-by\x2dlabel-OEM.device - /dev/disk/by-label/OEM. Jan 30 13:59:53.031552 systemd[1]: Starting systemd-fsck@dev-disk-by\x2dlabel-OEM.service - File System Check on /dev/disk/by-label/OEM... Jan 30 13:59:53.037333 kernel: fbcon: virtio_gpudrmfb (fb0) is primary device Jan 30 13:59:53.040338 kernel: Console: switching to colour frame buffer device 128x48 Jan 30 13:59:53.049374 kernel: virtio-pci 0000:00:02.0: [drm] fb0: virtio_gpudrmfb frame buffer device Jan 30 13:59:53.052501 systemd[1]: systemd-vconsole-setup.service: Deactivated successfully. Jan 30 13:59:53.054421 systemd[1]: Stopped systemd-vconsole-setup.service - Virtual Console Setup. Jan 30 13:59:53.067668 systemd[1]: Starting systemd-vconsole-setup.service - Virtual Console Setup... Jan 30 13:59:53.085005 systemd[1]: Finished systemd-fsck@dev-disk-by\x2dlabel-OEM.service - File System Check on /dev/disk/by-label/OEM. Jan 30 13:59:53.091204 systemd[1]: systemd-vconsole-setup.service: Deactivated successfully. Jan 30 13:59:53.091528 systemd[1]: Stopped systemd-vconsole-setup.service - Virtual Console Setup. 
Jan 30 13:59:53.106722 systemd[1]: Starting systemd-vconsole-setup.service - Virtual Console Setup... Jan 30 13:59:53.259339 kernel: EDAC MC: Ver: 3.0.0 Jan 30 13:59:53.279370 systemd[1]: Finished systemd-vconsole-setup.service - Virtual Console Setup. Jan 30 13:59:53.283846 systemd[1]: Finished systemd-udev-settle.service - Wait for udev To Complete Device Initialization. Jan 30 13:59:53.292642 systemd[1]: Starting lvm2-activation-early.service - Activation of LVM2 logical volumes... Jan 30 13:59:53.308344 lvm[1425]: WARNING: Failed to connect to lvmetad. Falling back to device scanning. Jan 30 13:59:53.339511 systemd[1]: Finished lvm2-activation-early.service - Activation of LVM2 logical volumes. Jan 30 13:59:53.340682 systemd[1]: Reached target cryptsetup.target - Local Encrypted Volumes. Jan 30 13:59:53.340834 systemd[1]: Reached target sysinit.target - System Initialization. Jan 30 13:59:53.341045 systemd[1]: Started motdgen.path - Watch for update engine configuration changes. Jan 30 13:59:53.341150 systemd[1]: Started user-cloudinit@var-lib-flatcar\x2dinstall-user_data.path - Watch for a cloud-config at /var/lib/flatcar-install/user_data. Jan 30 13:59:53.341563 systemd[1]: Started logrotate.timer - Daily rotation of log files. Jan 30 13:59:53.341861 systemd[1]: Started mdadm.timer - Weekly check for MD array's redundancy information.. Jan 30 13:59:53.341987 systemd[1]: Started systemd-tmpfiles-clean.timer - Daily Cleanup of Temporary Directories. Jan 30 13:59:53.342100 systemd[1]: update-engine-stub.timer - Update Engine Stub Timer was skipped because of an unmet condition check (ConditionPathExists=/usr/.noupdate). Jan 30 13:59:53.342151 systemd[1]: Reached target paths.target - Path Units. Jan 30 13:59:53.342227 systemd[1]: Reached target timers.target - Timer Units. Jan 30 13:59:53.344575 systemd[1]: Listening on dbus.socket - D-Bus System Message Bus Socket. Jan 30 13:59:53.348001 systemd[1]: Starting docker.socket - Docker Socket for the API... Jan 30 13:59:53.363280 systemd[1]: Listening on sshd.socket - OpenSSH Server Socket. Jan 30 13:59:53.374575 systemd[1]: Starting lvm2-activation.service - Activation of LVM2 logical volumes... Jan 30 13:59:53.377508 systemd[1]: Listening on docker.socket - Docker Socket for the API. Jan 30 13:59:53.378148 systemd[1]: Reached target sockets.target - Socket Units. Jan 30 13:59:53.380995 systemd[1]: Reached target basic.target - Basic System. Jan 30 13:59:53.381418 lvm[1430]: WARNING: Failed to connect to lvmetad. Falling back to device scanning. Jan 30 13:59:53.381611 systemd[1]: addon-config@oem.service - Configure Addon /oem was skipped because no trigger condition checks were met. Jan 30 13:59:53.381635 systemd[1]: addon-run@oem.service - Run Addon /oem was skipped because no trigger condition checks were met. Jan 30 13:59:53.383622 systemd[1]: Starting containerd.service - containerd container runtime... Jan 30 13:59:53.399594 systemd[1]: Starting coreos-metadata.service - Flatcar Metadata Agent... Jan 30 13:59:53.404552 systemd[1]: Starting dbus.service - D-Bus System Message Bus... Jan 30 13:59:53.408484 systemd[1]: Starting enable-oem-cloudinit.service - Enable cloudinit... Jan 30 13:59:53.422079 systemd[1]: Starting extend-filesystems.service - Extend Filesystems... Jan 30 13:59:53.423664 systemd[1]: flatcar-setup-environment.service - Modifies /etc/environment for CoreOS was skipped because of an unmet condition check (ConditionPathExists=/oem/bin/flatcar-setup-environment). 
Jan 30 13:59:53.427237 systemd[1]: Starting motdgen.service - Generate /run/flatcar/motd... Jan 30 13:59:53.431836 jq[1434]: false Jan 30 13:59:53.433318 systemd[1]: Starting ssh-key-proc-cmdline.service - Install an ssh key from /proc/cmdline... Jan 30 13:59:53.443675 systemd[1]: Starting sshd-keygen.service - Generate sshd host keys... Jan 30 13:59:53.452751 systemd[1]: Starting systemd-logind.service - User Login Management... Jan 30 13:59:53.453701 systemd[1]: tcsd.service - TCG Core Services Daemon was skipped because of an unmet condition check (ConditionPathExists=/dev/tpm0). Jan 30 13:59:53.454225 systemd[1]: cgroup compatibility translation between legacy and unified hierarchy settings activated. See cgroup-compat debug messages for details. Jan 30 13:59:53.461585 systemd[1]: Starting update-engine.service - Update Engine... Jan 30 13:59:53.465747 systemd[1]: Starting update-ssh-keys-after-ignition.service - Run update-ssh-keys once after Ignition... Jan 30 13:59:53.469199 systemd[1]: Finished lvm2-activation.service - Activation of LVM2 logical volumes. Jan 30 13:59:53.481057 systemd[1]: enable-oem-cloudinit.service: Skipped due to 'exec-condition'. Jan 30 13:59:53.481359 systemd[1]: Condition check resulted in enable-oem-cloudinit.service - Enable cloudinit being skipped. Jan 30 13:59:53.481718 systemd[1]: ssh-key-proc-cmdline.service: Deactivated successfully. Jan 30 13:59:53.482404 systemd[1]: Finished ssh-key-proc-cmdline.service - Install an ssh key from /proc/cmdline. Jan 30 13:59:53.505037 extend-filesystems[1435]: Found loop4 Jan 30 13:59:53.517534 jq[1446]: true Jan 30 13:59:53.517914 extend-filesystems[1435]: Found loop5 Jan 30 13:59:53.517914 extend-filesystems[1435]: Found loop6 Jan 30 13:59:53.517914 extend-filesystems[1435]: Found loop7 Jan 30 13:59:53.517914 extend-filesystems[1435]: Found vda Jan 30 13:59:53.517914 extend-filesystems[1435]: Found vda1 Jan 30 13:59:53.517914 extend-filesystems[1435]: Found vda2 Jan 30 13:59:53.517914 extend-filesystems[1435]: Found vda3 Jan 30 13:59:53.517914 extend-filesystems[1435]: Found usr Jan 30 13:59:53.517914 extend-filesystems[1435]: Found vda4 Jan 30 13:59:53.517914 extend-filesystems[1435]: Found vda6 Jan 30 13:59:53.517914 extend-filesystems[1435]: Found vda7 Jan 30 13:59:53.517914 extend-filesystems[1435]: Found vda9 Jan 30 13:59:53.517914 extend-filesystems[1435]: Checking size of /dev/vda9 Jan 30 13:59:53.583271 coreos-metadata[1432]: Jan 30 13:59:53.557 INFO Fetching http://169.254.169.254/metadata/v1.json: Attempt #1 Jan 30 13:59:53.583271 coreos-metadata[1432]: Jan 30 13:59:53.576 INFO Fetch successful Jan 30 13:59:53.585671 extend-filesystems[1435]: Resized partition /dev/vda9 Jan 30 13:59:53.535168 dbus-daemon[1433]: [system] SELinux support is enabled Jan 30 13:59:53.607919 kernel: BTRFS warning: duplicate device /dev/vda3 devid 1 generation 38 scanned by (udev-worker) (1379) Jan 30 13:59:53.607960 kernel: EXT4-fs (vda9): resizing filesystem from 553472 to 15121403 blocks Jan 30 13:59:53.535488 systemd[1]: Started dbus.service - D-Bus System Message Bus. 
Jan 30 13:59:53.608150 update_engine[1444]: I20250130 13:59:53.536128 1444 main.cc:92] Flatcar Update Engine starting Jan 30 13:59:53.608150 update_engine[1444]: I20250130 13:59:53.542589 1444 update_check_scheduler.cc:74] Next update check in 5m4s Jan 30 13:59:53.612155 jq[1458]: true Jan 30 13:59:53.612436 extend-filesystems[1468]: resize2fs 1.47.1 (20-May-2024) Jan 30 13:59:53.546829 systemd[1]: system-cloudinit@usr-share-oem-cloud\x2dconfig.yml.service - Load cloud-config from /usr/share/oem/cloud-config.yml was skipped because of an unmet condition check (ConditionFileNotEmpty=/usr/share/oem/cloud-config.yml). Jan 30 13:59:53.546883 systemd[1]: Reached target system-config.target - Load system-provided cloud configs. Jan 30 13:59:53.553941 systemd[1]: user-cloudinit-proc-cmdline.service - Load cloud-config from url defined in /proc/cmdline was skipped because of an unmet condition check (ConditionKernelCommandLine=cloud-config-url). Jan 30 13:59:53.554079 systemd[1]: user-configdrive.service - Load cloud-config from /media/configdrive was skipped because of an unmet condition check (ConditionKernelCommandLine=!flatcar.oem.id=digitalocean). Jan 30 13:59:53.554116 systemd[1]: Reached target user-config.target - Load user-provided cloud configs. Jan 30 13:59:53.568027 systemd[1]: Started update-engine.service - Update Engine. Jan 30 13:59:53.591512 systemd[1]: Started locksmithd.service - Cluster reboot manager. Jan 30 13:59:53.608846 (ntainerd)[1460]: containerd.service: Referenced but unset environment variable evaluates to an empty string: TORCX_IMAGEDIR, TORCX_UNPACKDIR Jan 30 13:59:53.609997 systemd[1]: motdgen.service: Deactivated successfully. Jan 30 13:59:53.610194 systemd[1]: Finished motdgen.service - Generate /run/flatcar/motd. Jan 30 13:59:53.701364 kernel: EXT4-fs (vda9): resized filesystem to 15121403 Jan 30 13:59:53.710582 systemd[1]: Finished coreos-metadata.service - Flatcar Metadata Agent. Jan 30 13:59:53.711414 systemd[1]: packet-phone-home.service - Report Success to Packet was skipped because no trigger condition checks were met. Jan 30 13:59:53.731960 extend-filesystems[1468]: Filesystem at /dev/vda9 is mounted on /; on-line resizing required Jan 30 13:59:53.731960 extend-filesystems[1468]: old_desc_blocks = 1, new_desc_blocks = 8 Jan 30 13:59:53.731960 extend-filesystems[1468]: The filesystem on /dev/vda9 is now 15121403 (4k) blocks long. Jan 30 13:59:53.748959 extend-filesystems[1435]: Resized filesystem in /dev/vda9 Jan 30 13:59:53.748959 extend-filesystems[1435]: Found vdb Jan 30 13:59:53.734117 systemd[1]: extend-filesystems.service: Deactivated successfully. Jan 30 13:59:53.734340 systemd[1]: Finished extend-filesystems.service - Extend Filesystems. Jan 30 13:59:53.775629 bash[1490]: Updated "/home/core/.ssh/authorized_keys" Jan 30 13:59:53.779979 systemd[1]: Finished update-ssh-keys-after-ignition.service - Run update-ssh-keys once after Ignition. Jan 30 13:59:53.780181 systemd-logind[1442]: New seat seat0. Jan 30 13:59:53.791137 systemd-logind[1442]: Watching system buttons on /dev/input/event1 (Power Button) Jan 30 13:59:53.791164 systemd-logind[1442]: Watching system buttons on /dev/input/event0 (AT Translated Set 2 keyboard) Jan 30 13:59:53.808448 systemd[1]: Starting sshkeys.service... Jan 30 13:59:53.808972 systemd[1]: Started systemd-logind.service - User Login Management. Jan 30 13:59:53.851896 systemd[1]: Created slice system-coreos\x2dmetadata\x2dsshkeys.slice - Slice /system/coreos-metadata-sshkeys. 
Jan 30 13:59:53.860730 locksmithd[1469]: locksmithd starting currentOperation="UPDATE_STATUS_IDLE" strategy="reboot" Jan 30 13:59:53.863753 systemd[1]: Starting coreos-metadata-sshkeys@core.service - Flatcar Metadata Agent (SSH Keys)... Jan 30 13:59:53.897344 sshd_keygen[1448]: ssh-keygen: generating new host keys: RSA ECDSA ED25519 Jan 30 13:59:53.918226 coreos-metadata[1498]: Jan 30 13:59:53.917 INFO Fetching http://169.254.169.254/metadata/v1.json: Attempt #1 Jan 30 13:59:53.932560 coreos-metadata[1498]: Jan 30 13:59:53.929 INFO Fetch successful Jan 30 13:59:53.941155 unknown[1498]: wrote ssh authorized keys file for user: core Jan 30 13:59:53.951838 systemd[1]: Finished sshd-keygen.service - Generate sshd host keys. Jan 30 13:59:53.977035 systemd[1]: Starting issuegen.service - Generate /run/issue... Jan 30 13:59:53.988329 update-ssh-keys[1512]: Updated "/home/core/.ssh/authorized_keys" Jan 30 13:59:53.990325 systemd[1]: Finished coreos-metadata-sshkeys@core.service - Flatcar Metadata Agent (SSH Keys). Jan 30 13:59:53.995436 systemd[1]: Finished sshkeys.service. Jan 30 13:59:54.006444 systemd[1]: issuegen.service: Deactivated successfully. Jan 30 13:59:54.006770 systemd[1]: Finished issuegen.service - Generate /run/issue. Jan 30 13:59:54.016718 systemd[1]: Starting systemd-user-sessions.service - Permit User Sessions... Jan 30 13:59:54.053676 systemd[1]: Finished systemd-user-sessions.service - Permit User Sessions. Jan 30 13:59:54.064791 systemd[1]: Started getty@tty1.service - Getty on tty1. Jan 30 13:59:54.075739 systemd[1]: Started serial-getty@ttyS0.service - Serial Getty on ttyS0. Jan 30 13:59:54.077577 systemd[1]: Reached target getty.target - Login Prompts. Jan 30 13:59:54.130315 containerd[1460]: time="2025-01-30T13:59:54.130160952Z" level=info msg="starting containerd" revision=174e0d1785eeda18dc2beba45e1d5a188771636b version=v1.7.21 Jan 30 13:59:54.157881 containerd[1460]: time="2025-01-30T13:59:54.157806490Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.aufs\"..." type=io.containerd.snapshotter.v1 Jan 30 13:59:54.159776 containerd[1460]: time="2025-01-30T13:59:54.159725070Z" level=info msg="skip loading plugin \"io.containerd.snapshotter.v1.aufs\"..." error="aufs is not supported (modprobe aufs failed: exit status 1 \"modprobe: FATAL: Module aufs not found in directory /lib/modules/6.6.74-flatcar\\n\"): skip plugin" type=io.containerd.snapshotter.v1 Jan 30 13:59:54.160340 containerd[1460]: time="2025-01-30T13:59:54.159882735Z" level=info msg="loading plugin \"io.containerd.event.v1.exchange\"..." type=io.containerd.event.v1 Jan 30 13:59:54.160340 containerd[1460]: time="2025-01-30T13:59:54.159908934Z" level=info msg="loading plugin \"io.containerd.internal.v1.opt\"..." type=io.containerd.internal.v1 Jan 30 13:59:54.160340 containerd[1460]: time="2025-01-30T13:59:54.160156198Z" level=info msg="loading plugin \"io.containerd.warning.v1.deprecations\"..." type=io.containerd.warning.v1 Jan 30 13:59:54.160340 containerd[1460]: time="2025-01-30T13:59:54.160181145Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.blockfile\"..." type=io.containerd.snapshotter.v1 Jan 30 13:59:54.160340 containerd[1460]: time="2025-01-30T13:59:54.160235267Z" level=info msg="skip loading plugin \"io.containerd.snapshotter.v1.blockfile\"..." error="no scratch file generator: skip plugin" type=io.containerd.snapshotter.v1 Jan 30 13:59:54.160340 containerd[1460]: time="2025-01-30T13:59:54.160246870Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.btrfs\"..." 
type=io.containerd.snapshotter.v1 Jan 30 13:59:54.160670 containerd[1460]: time="2025-01-30T13:59:54.160650695Z" level=info msg="skip loading plugin \"io.containerd.snapshotter.v1.btrfs\"..." error="path /var/lib/containerd/io.containerd.snapshotter.v1.btrfs (ext4) must be a btrfs filesystem to be used with the btrfs snapshotter: skip plugin" type=io.containerd.snapshotter.v1 Jan 30 13:59:54.160719 containerd[1460]: time="2025-01-30T13:59:54.160709114Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.devmapper\"..." type=io.containerd.snapshotter.v1 Jan 30 13:59:54.160776 containerd[1460]: time="2025-01-30T13:59:54.160764105Z" level=info msg="skip loading plugin \"io.containerd.snapshotter.v1.devmapper\"..." error="devmapper not configured: skip plugin" type=io.containerd.snapshotter.v1 Jan 30 13:59:54.160817 containerd[1460]: time="2025-01-30T13:59:54.160807448Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.native\"..." type=io.containerd.snapshotter.v1 Jan 30 13:59:54.160943 containerd[1460]: time="2025-01-30T13:59:54.160928975Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.overlayfs\"..." type=io.containerd.snapshotter.v1 Jan 30 13:59:54.161198 containerd[1460]: time="2025-01-30T13:59:54.161181716Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.zfs\"..." type=io.containerd.snapshotter.v1 Jan 30 13:59:54.161422 containerd[1460]: time="2025-01-30T13:59:54.161403725Z" level=info msg="skip loading plugin \"io.containerd.snapshotter.v1.zfs\"..." error="path /var/lib/containerd/io.containerd.snapshotter.v1.zfs must be a zfs filesystem to be used with the zfs snapshotter: skip plugin" type=io.containerd.snapshotter.v1 Jan 30 13:59:54.161479 containerd[1460]: time="2025-01-30T13:59:54.161468321Z" level=info msg="loading plugin \"io.containerd.content.v1.content\"..." type=io.containerd.content.v1 Jan 30 13:59:54.161611 containerd[1460]: time="2025-01-30T13:59:54.161596270Z" level=info msg="loading plugin \"io.containerd.metadata.v1.bolt\"..." type=io.containerd.metadata.v1 Jan 30 13:59:54.161717 containerd[1460]: time="2025-01-30T13:59:54.161704469Z" level=info msg="metadata content store policy set" policy=shared Jan 30 13:59:54.169912 containerd[1460]: time="2025-01-30T13:59:54.169766554Z" level=info msg="loading plugin \"io.containerd.gc.v1.scheduler\"..." type=io.containerd.gc.v1 Jan 30 13:59:54.169912 containerd[1460]: time="2025-01-30T13:59:54.169849354Z" level=info msg="loading plugin \"io.containerd.differ.v1.walking\"..." type=io.containerd.differ.v1 Jan 30 13:59:54.169912 containerd[1460]: time="2025-01-30T13:59:54.169868528Z" level=info msg="loading plugin \"io.containerd.lease.v1.manager\"..." type=io.containerd.lease.v1 Jan 30 13:59:54.169912 containerd[1460]: time="2025-01-30T13:59:54.169923831Z" level=info msg="loading plugin \"io.containerd.streaming.v1.manager\"..." type=io.containerd.streaming.v1 Jan 30 13:59:54.170208 containerd[1460]: time="2025-01-30T13:59:54.169950668Z" level=info msg="loading plugin \"io.containerd.runtime.v1.linux\"..." type=io.containerd.runtime.v1 Jan 30 13:59:54.170208 containerd[1460]: time="2025-01-30T13:59:54.170127858Z" level=info msg="loading plugin \"io.containerd.monitor.v1.cgroups\"..." type=io.containerd.monitor.v1 Jan 30 13:59:54.170420 containerd[1460]: time="2025-01-30T13:59:54.170391345Z" level=info msg="loading plugin \"io.containerd.runtime.v2.task\"..." 
type=io.containerd.runtime.v2 Jan 30 13:59:54.170516 containerd[1460]: time="2025-01-30T13:59:54.170499792Z" level=info msg="loading plugin \"io.containerd.runtime.v2.shim\"..." type=io.containerd.runtime.v2 Jan 30 13:59:54.170561 containerd[1460]: time="2025-01-30T13:59:54.170518394Z" level=info msg="loading plugin \"io.containerd.sandbox.store.v1.local\"..." type=io.containerd.sandbox.store.v1 Jan 30 13:59:54.170561 containerd[1460]: time="2025-01-30T13:59:54.170531902Z" level=info msg="loading plugin \"io.containerd.sandbox.controller.v1.local\"..." type=io.containerd.sandbox.controller.v1 Jan 30 13:59:54.170561 containerd[1460]: time="2025-01-30T13:59:54.170546995Z" level=info msg="loading plugin \"io.containerd.service.v1.containers-service\"..." type=io.containerd.service.v1 Jan 30 13:59:54.170561 containerd[1460]: time="2025-01-30T13:59:54.170559654Z" level=info msg="loading plugin \"io.containerd.service.v1.content-service\"..." type=io.containerd.service.v1 Jan 30 13:59:54.170719 containerd[1460]: time="2025-01-30T13:59:54.170572824Z" level=info msg="loading plugin \"io.containerd.service.v1.diff-service\"..." type=io.containerd.service.v1 Jan 30 13:59:54.170719 containerd[1460]: time="2025-01-30T13:59:54.170588439Z" level=info msg="loading plugin \"io.containerd.service.v1.images-service\"..." type=io.containerd.service.v1 Jan 30 13:59:54.170719 containerd[1460]: time="2025-01-30T13:59:54.170613319Z" level=info msg="loading plugin \"io.containerd.service.v1.introspection-service\"..." type=io.containerd.service.v1 Jan 30 13:59:54.170719 containerd[1460]: time="2025-01-30T13:59:54.170631996Z" level=info msg="loading plugin \"io.containerd.service.v1.namespaces-service\"..." type=io.containerd.service.v1 Jan 30 13:59:54.170719 containerd[1460]: time="2025-01-30T13:59:54.170645276Z" level=info msg="loading plugin \"io.containerd.service.v1.snapshots-service\"..." type=io.containerd.service.v1 Jan 30 13:59:54.170719 containerd[1460]: time="2025-01-30T13:59:54.170656930Z" level=info msg="loading plugin \"io.containerd.service.v1.tasks-service\"..." type=io.containerd.service.v1 Jan 30 13:59:54.170719 containerd[1460]: time="2025-01-30T13:59:54.170676420Z" level=info msg="loading plugin \"io.containerd.grpc.v1.containers\"..." type=io.containerd.grpc.v1 Jan 30 13:59:54.170719 containerd[1460]: time="2025-01-30T13:59:54.170690585Z" level=info msg="loading plugin \"io.containerd.grpc.v1.content\"..." type=io.containerd.grpc.v1 Jan 30 13:59:54.170719 containerd[1460]: time="2025-01-30T13:59:54.170702733Z" level=info msg="loading plugin \"io.containerd.grpc.v1.diff\"..." type=io.containerd.grpc.v1 Jan 30 13:59:54.170719 containerd[1460]: time="2025-01-30T13:59:54.170715863Z" level=info msg="loading plugin \"io.containerd.grpc.v1.events\"..." type=io.containerd.grpc.v1 Jan 30 13:59:54.171255 containerd[1460]: time="2025-01-30T13:59:54.170727922Z" level=info msg="loading plugin \"io.containerd.grpc.v1.images\"..." type=io.containerd.grpc.v1 Jan 30 13:59:54.171255 containerd[1460]: time="2025-01-30T13:59:54.170740198Z" level=info msg="loading plugin \"io.containerd.grpc.v1.introspection\"..." type=io.containerd.grpc.v1 Jan 30 13:59:54.171255 containerd[1460]: time="2025-01-30T13:59:54.170751264Z" level=info msg="loading plugin \"io.containerd.grpc.v1.leases\"..." type=io.containerd.grpc.v1 Jan 30 13:59:54.171255 containerd[1460]: time="2025-01-30T13:59:54.170763055Z" level=info msg="loading plugin \"io.containerd.grpc.v1.namespaces\"..." 
type=io.containerd.grpc.v1 Jan 30 13:59:54.171255 containerd[1460]: time="2025-01-30T13:59:54.170775768Z" level=info msg="loading plugin \"io.containerd.grpc.v1.sandbox-controllers\"..." type=io.containerd.grpc.v1 Jan 30 13:59:54.171255 containerd[1460]: time="2025-01-30T13:59:54.170795483Z" level=info msg="loading plugin \"io.containerd.grpc.v1.sandboxes\"..." type=io.containerd.grpc.v1 Jan 30 13:59:54.171255 containerd[1460]: time="2025-01-30T13:59:54.170810074Z" level=info msg="loading plugin \"io.containerd.grpc.v1.snapshots\"..." type=io.containerd.grpc.v1 Jan 30 13:59:54.171255 containerd[1460]: time="2025-01-30T13:59:54.170821993Z" level=info msg="loading plugin \"io.containerd.grpc.v1.streaming\"..." type=io.containerd.grpc.v1 Jan 30 13:59:54.171255 containerd[1460]: time="2025-01-30T13:59:54.170863589Z" level=info msg="loading plugin \"io.containerd.grpc.v1.tasks\"..." type=io.containerd.grpc.v1 Jan 30 13:59:54.171255 containerd[1460]: time="2025-01-30T13:59:54.170881167Z" level=info msg="loading plugin \"io.containerd.transfer.v1.local\"..." type=io.containerd.transfer.v1 Jan 30 13:59:54.171255 containerd[1460]: time="2025-01-30T13:59:54.170901765Z" level=info msg="loading plugin \"io.containerd.grpc.v1.transfer\"..." type=io.containerd.grpc.v1 Jan 30 13:59:54.171255 containerd[1460]: time="2025-01-30T13:59:54.170913771Z" level=info msg="loading plugin \"io.containerd.grpc.v1.version\"..." type=io.containerd.grpc.v1 Jan 30 13:59:54.171255 containerd[1460]: time="2025-01-30T13:59:54.170924808Z" level=info msg="loading plugin \"io.containerd.internal.v1.restart\"..." type=io.containerd.internal.v1 Jan 30 13:59:54.171255 containerd[1460]: time="2025-01-30T13:59:54.170984550Z" level=info msg="loading plugin \"io.containerd.tracing.processor.v1.otlp\"..." type=io.containerd.tracing.processor.v1 Jan 30 13:59:54.172703 containerd[1460]: time="2025-01-30T13:59:54.171014328Z" level=info msg="skip loading plugin \"io.containerd.tracing.processor.v1.otlp\"..." error="skip plugin: tracing endpoint not configured" type=io.containerd.tracing.processor.v1 Jan 30 13:59:54.172703 containerd[1460]: time="2025-01-30T13:59:54.171031261Z" level=info msg="loading plugin \"io.containerd.internal.v1.tracing\"..." type=io.containerd.internal.v1 Jan 30 13:59:54.172703 containerd[1460]: time="2025-01-30T13:59:54.171050195Z" level=info msg="skip loading plugin \"io.containerd.internal.v1.tracing\"..." error="skip plugin: tracing endpoint not configured" type=io.containerd.internal.v1 Jan 30 13:59:54.172703 containerd[1460]: time="2025-01-30T13:59:54.171064372Z" level=info msg="loading plugin \"io.containerd.grpc.v1.healthcheck\"..." type=io.containerd.grpc.v1 Jan 30 13:59:54.172703 containerd[1460]: time="2025-01-30T13:59:54.171092738Z" level=info msg="loading plugin \"io.containerd.nri.v1.nri\"..." type=io.containerd.nri.v1 Jan 30 13:59:54.172703 containerd[1460]: time="2025-01-30T13:59:54.171105172Z" level=info msg="NRI interface is disabled by configuration." Jan 30 13:59:54.172703 containerd[1460]: time="2025-01-30T13:59:54.171117605Z" level=info msg="loading plugin \"io.containerd.grpc.v1.cri\"..." 
type=io.containerd.grpc.v1 Jan 30 13:59:54.172992 containerd[1460]: time="2025-01-30T13:59:54.171412216Z" level=info msg="Start cri plugin with config {PluginConfig:{ContainerdConfig:{Snapshotter:overlayfs DefaultRuntimeName:runc DefaultRuntime:{Type: Path: Engine: PodAnnotations:[] ContainerAnnotations:[] Root: Options:map[] PrivilegedWithoutHostDevices:false PrivilegedWithoutHostDevicesAllDevicesAllowed:false BaseRuntimeSpec: NetworkPluginConfDir: NetworkPluginMaxConfNum:0 Snapshotter: SandboxMode:} UntrustedWorkloadRuntime:{Type: Path: Engine: PodAnnotations:[] ContainerAnnotations:[] Root: Options:map[] PrivilegedWithoutHostDevices:false PrivilegedWithoutHostDevicesAllDevicesAllowed:false BaseRuntimeSpec: NetworkPluginConfDir: NetworkPluginMaxConfNum:0 Snapshotter: SandboxMode:} Runtimes:map[runc:{Type:io.containerd.runc.v2 Path: Engine: PodAnnotations:[] ContainerAnnotations:[] Root: Options:map[SystemdCgroup:true] PrivilegedWithoutHostDevices:false PrivilegedWithoutHostDevicesAllDevicesAllowed:false BaseRuntimeSpec: NetworkPluginConfDir: NetworkPluginMaxConfNum:0 Snapshotter: SandboxMode:podsandbox}] NoPivot:false DisableSnapshotAnnotations:true DiscardUnpackedLayers:false IgnoreBlockIONotEnabledErrors:false IgnoreRdtNotEnabledErrors:false} CniConfig:{NetworkPluginBinDir:/opt/cni/bin NetworkPluginConfDir:/etc/cni/net.d NetworkPluginMaxConfNum:1 NetworkPluginSetupSerially:false NetworkPluginConfTemplate: IPPreference:} Registry:{ConfigPath: Mirrors:map[] Configs:map[] Auths:map[] Headers:map[]} ImageDecryption:{KeyModel:node} DisableTCPService:true StreamServerAddress:127.0.0.1 StreamServerPort:0 StreamIdleTimeout:4h0m0s EnableSelinux:true SelinuxCategoryRange:1024 SandboxImage:registry.k8s.io/pause:3.8 StatsCollectPeriod:10 SystemdCgroup:false EnableTLSStreaming:false X509KeyPairStreaming:{TLSCertFile: TLSKeyFile:} MaxContainerLogLineSize:16384 DisableCgroup:false DisableApparmor:false RestrictOOMScoreAdj:false MaxConcurrentDownloads:3 DisableProcMount:false UnsetSeccompProfile: TolerateMissingHugetlbController:true DisableHugetlbController:true DeviceOwnershipFromSecurityContext:false IgnoreImageDefinedVolumes:false NetNSMountsUnderStateDir:false EnableUnprivilegedPorts:false EnableUnprivilegedICMP:false EnableCDI:false CDISpecDirs:[/etc/cdi /var/run/cdi] ImagePullProgressTimeout:5m0s DrainExecSyncIOTimeout:0s ImagePullWithSyncFs:false IgnoreDeprecationWarnings:[]} ContainerdRootDir:/var/lib/containerd ContainerdEndpoint:/run/containerd/containerd.sock RootDir:/var/lib/containerd/io.containerd.grpc.v1.cri StateDir:/run/containerd/io.containerd.grpc.v1.cri}" Jan 30 13:59:54.172992 containerd[1460]: time="2025-01-30T13:59:54.171497363Z" level=info msg="Connect containerd service" Jan 30 13:59:54.172992 containerd[1460]: time="2025-01-30T13:59:54.171544227Z" level=info msg="using legacy CRI server" Jan 30 13:59:54.172992 containerd[1460]: time="2025-01-30T13:59:54.171552002Z" level=info msg="using experimental NRI integration - disable nri plugin to prevent this" Jan 30 13:59:54.172992 containerd[1460]: time="2025-01-30T13:59:54.171959254Z" level=info msg="Get image filesystem path \"/var/lib/containerd/io.containerd.snapshotter.v1.overlayfs\"" Jan 30 13:59:54.172992 containerd[1460]: time="2025-01-30T13:59:54.172953666Z" level=error msg="failed to load cni during init, please check CRI plugin status before setting up network for pods" error="cni config load failed: no network config found in /etc/cni/net.d: cni plugin not initialized: failed to load cni config" Jan 30 13:59:54.174076 
containerd[1460]: time="2025-01-30T13:59:54.174043161Z" level=info msg=serving... address=/run/containerd/containerd.sock.ttrpc Jan 30 13:59:54.174153 containerd[1460]: time="2025-01-30T13:59:54.174103314Z" level=info msg=serving... address=/run/containerd/containerd.sock Jan 30 13:59:54.174212 containerd[1460]: time="2025-01-30T13:59:54.174172367Z" level=info msg="Start subscribing containerd event" Jan 30 13:59:54.174252 containerd[1460]: time="2025-01-30T13:59:54.174221059Z" level=info msg="Start recovering state" Jan 30 13:59:54.174343 containerd[1460]: time="2025-01-30T13:59:54.174289351Z" level=info msg="Start event monitor" Jan 30 13:59:54.174343 containerd[1460]: time="2025-01-30T13:59:54.174328751Z" level=info msg="Start snapshots syncer" Jan 30 13:59:54.175433 containerd[1460]: time="2025-01-30T13:59:54.174340260Z" level=info msg="Start cni network conf syncer for default" Jan 30 13:59:54.175433 containerd[1460]: time="2025-01-30T13:59:54.174359084Z" level=info msg="Start streaming server" Jan 30 13:59:54.174511 systemd[1]: Started containerd.service - containerd container runtime. Jan 30 13:59:54.177630 containerd[1460]: time="2025-01-30T13:59:54.177595312Z" level=info msg="containerd successfully booted in 0.048782s" Jan 30 13:59:54.639621 systemd-networkd[1366]: eth1: Gained IPv6LL Jan 30 13:59:54.643001 systemd[1]: Finished systemd-networkd-wait-online.service - Wait for Network to be Configured. Jan 30 13:59:54.646369 systemd[1]: Reached target network-online.target - Network is Online. Jan 30 13:59:54.656689 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent... Jan 30 13:59:54.660673 systemd[1]: Starting nvidia.service - NVIDIA Configure Service... Jan 30 13:59:54.683094 systemd[1]: Finished nvidia.service - NVIDIA Configure Service. Jan 30 13:59:54.767572 systemd-networkd[1366]: eth0: Gained IPv6LL Jan 30 13:59:55.594546 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent. Jan 30 13:59:55.595478 systemd[1]: Reached target multi-user.target - Multi-User System. Jan 30 13:59:55.596966 systemd[1]: Startup finished in 1.033s (kernel) + 4.843s (initrd) + 5.398s (userspace) = 11.275s. Jan 30 13:59:55.602211 (kubelet)[1549]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS, KUBELET_KUBEADM_ARGS Jan 30 13:59:56.271525 kubelet[1549]: E0130 13:59:56.271441 1549 run.go:74] "command failed" err="failed to load kubelet config file, path: /var/lib/kubelet/config.yaml, error: failed to load Kubelet config file /var/lib/kubelet/config.yaml, error failed to read kubelet config file \"/var/lib/kubelet/config.yaml\", error: open /var/lib/kubelet/config.yaml: no such file or directory" Jan 30 13:59:56.274598 systemd[1]: kubelet.service: Main process exited, code=exited, status=1/FAILURE Jan 30 13:59:56.274774 systemd[1]: kubelet.service: Failed with result 'exit-code'. Jan 30 13:59:56.275240 systemd[1]: kubelet.service: Consumed 1.198s CPU time. Jan 30 13:59:58.893417 systemd[1]: Created slice system-sshd.slice - Slice /system/sshd. Jan 30 13:59:58.903652 systemd[1]: Started sshd@0-164.92.66.128:22-147.75.109.163:45216.service - OpenSSH per-connection server daemon (147.75.109.163:45216). 
Jan 30 13:59:58.960824 sshd[1562]: Accepted publickey for core from 147.75.109.163 port 45216 ssh2: RSA SHA256:nniWhUXb7YSTcabVY3ysk3m2XR3g3yvp0Y+YACmZFTQ Jan 30 13:59:58.963607 sshd[1562]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Jan 30 13:59:58.978741 systemd-logind[1442]: New session 1 of user core. Jan 30 13:59:58.980963 systemd[1]: Created slice user-500.slice - User Slice of UID 500. Jan 30 13:59:58.986693 systemd[1]: Starting user-runtime-dir@500.service - User Runtime Directory /run/user/500... Jan 30 13:59:59.005555 systemd[1]: Finished user-runtime-dir@500.service - User Runtime Directory /run/user/500. Jan 30 13:59:59.012768 systemd[1]: Starting user@500.service - User Manager for UID 500... Jan 30 13:59:59.027668 (systemd)[1566]: pam_unix(systemd-user:session): session opened for user core(uid=500) by (uid=0) Jan 30 13:59:59.144734 systemd[1566]: Queued start job for default target default.target. Jan 30 13:59:59.151561 systemd[1566]: Created slice app.slice - User Application Slice. Jan 30 13:59:59.151594 systemd[1566]: Reached target paths.target - Paths. Jan 30 13:59:59.151609 systemd[1566]: Reached target timers.target - Timers. Jan 30 13:59:59.153445 systemd[1566]: Starting dbus.socket - D-Bus User Message Bus Socket... Jan 30 13:59:59.167619 systemd[1566]: Listening on dbus.socket - D-Bus User Message Bus Socket. Jan 30 13:59:59.167755 systemd[1566]: Reached target sockets.target - Sockets. Jan 30 13:59:59.167771 systemd[1566]: Reached target basic.target - Basic System. Jan 30 13:59:59.167819 systemd[1566]: Reached target default.target - Main User Target. Jan 30 13:59:59.167852 systemd[1566]: Startup finished in 130ms. Jan 30 13:59:59.168179 systemd[1]: Started user@500.service - User Manager for UID 500. Jan 30 13:59:59.170636 systemd[1]: Started session-1.scope - Session 1 of User core. Jan 30 13:59:59.242797 systemd[1]: Started sshd@1-164.92.66.128:22-147.75.109.163:45220.service - OpenSSH per-connection server daemon (147.75.109.163:45220). Jan 30 13:59:59.288630 sshd[1577]: Accepted publickey for core from 147.75.109.163 port 45220 ssh2: RSA SHA256:nniWhUXb7YSTcabVY3ysk3m2XR3g3yvp0Y+YACmZFTQ Jan 30 13:59:59.290378 sshd[1577]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Jan 30 13:59:59.296623 systemd-logind[1442]: New session 2 of user core. Jan 30 13:59:59.303605 systemd[1]: Started session-2.scope - Session 2 of User core. Jan 30 13:59:59.368175 sshd[1577]: pam_unix(sshd:session): session closed for user core Jan 30 13:59:59.385445 systemd[1]: sshd@1-164.92.66.128:22-147.75.109.163:45220.service: Deactivated successfully. Jan 30 13:59:59.387842 systemd[1]: session-2.scope: Deactivated successfully. Jan 30 13:59:59.390510 systemd-logind[1442]: Session 2 logged out. Waiting for processes to exit. Jan 30 13:59:59.402831 systemd[1]: Started sshd@2-164.92.66.128:22-147.75.109.163:45224.service - OpenSSH per-connection server daemon (147.75.109.163:45224). Jan 30 13:59:59.405637 systemd-logind[1442]: Removed session 2. Jan 30 13:59:59.449681 sshd[1584]: Accepted publickey for core from 147.75.109.163 port 45224 ssh2: RSA SHA256:nniWhUXb7YSTcabVY3ysk3m2XR3g3yvp0Y+YACmZFTQ Jan 30 13:59:59.451436 sshd[1584]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Jan 30 13:59:59.456646 systemd-logind[1442]: New session 3 of user core. Jan 30 13:59:59.464917 systemd[1]: Started session-3.scope - Session 3 of User core. 
Jan 30 13:59:59.522869 sshd[1584]: pam_unix(sshd:session): session closed for user core Jan 30 13:59:59.537437 systemd[1]: sshd@2-164.92.66.128:22-147.75.109.163:45224.service: Deactivated successfully. Jan 30 13:59:59.539356 systemd[1]: session-3.scope: Deactivated successfully. Jan 30 13:59:59.540896 systemd-logind[1442]: Session 3 logged out. Waiting for processes to exit. Jan 30 13:59:59.552848 systemd[1]: Started sshd@3-164.92.66.128:22-147.75.109.163:45230.service - OpenSSH per-connection server daemon (147.75.109.163:45230). Jan 30 13:59:59.554175 systemd-logind[1442]: Removed session 3. Jan 30 13:59:59.599783 sshd[1591]: Accepted publickey for core from 147.75.109.163 port 45230 ssh2: RSA SHA256:nniWhUXb7YSTcabVY3ysk3m2XR3g3yvp0Y+YACmZFTQ Jan 30 13:59:59.601657 sshd[1591]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Jan 30 13:59:59.608177 systemd-logind[1442]: New session 4 of user core. Jan 30 13:59:59.613590 systemd[1]: Started session-4.scope - Session 4 of User core. Jan 30 13:59:59.678833 sshd[1591]: pam_unix(sshd:session): session closed for user core Jan 30 13:59:59.689045 systemd[1]: sshd@3-164.92.66.128:22-147.75.109.163:45230.service: Deactivated successfully. Jan 30 13:59:59.691069 systemd[1]: session-4.scope: Deactivated successfully. Jan 30 13:59:59.693581 systemd-logind[1442]: Session 4 logged out. Waiting for processes to exit. Jan 30 13:59:59.703494 systemd[1]: Started sshd@4-164.92.66.128:22-147.75.109.163:45234.service - OpenSSH per-connection server daemon (147.75.109.163:45234). Jan 30 13:59:59.704852 systemd-logind[1442]: Removed session 4. Jan 30 13:59:59.763056 sshd[1598]: Accepted publickey for core from 147.75.109.163 port 45234 ssh2: RSA SHA256:nniWhUXb7YSTcabVY3ysk3m2XR3g3yvp0Y+YACmZFTQ Jan 30 13:59:59.765068 sshd[1598]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Jan 30 13:59:59.771561 systemd-logind[1442]: New session 5 of user core. Jan 30 13:59:59.779617 systemd[1]: Started session-5.scope - Session 5 of User core. Jan 30 13:59:59.847524 sudo[1601]: core : PWD=/home/core ; USER=root ; COMMAND=/usr/sbin/setenforce 1 Jan 30 13:59:59.847838 sudo[1601]: pam_unix(sudo:session): session opened for user root(uid=0) by core(uid=500) Jan 30 13:59:59.858850 sudo[1601]: pam_unix(sudo:session): session closed for user root Jan 30 13:59:59.862565 sshd[1598]: pam_unix(sshd:session): session closed for user core Jan 30 13:59:59.873319 systemd[1]: sshd@4-164.92.66.128:22-147.75.109.163:45234.service: Deactivated successfully. Jan 30 13:59:59.875260 systemd[1]: session-5.scope: Deactivated successfully. Jan 30 13:59:59.876180 systemd-logind[1442]: Session 5 logged out. Waiting for processes to exit. Jan 30 13:59:59.881642 systemd[1]: Started sshd@5-164.92.66.128:22-147.75.109.163:45242.service - OpenSSH per-connection server daemon (147.75.109.163:45242). Jan 30 13:59:59.883620 systemd-logind[1442]: Removed session 5. Jan 30 13:59:59.937900 sshd[1606]: Accepted publickey for core from 147.75.109.163 port 45242 ssh2: RSA SHA256:nniWhUXb7YSTcabVY3ysk3m2XR3g3yvp0Y+YACmZFTQ Jan 30 13:59:59.939494 sshd[1606]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Jan 30 13:59:59.944808 systemd-logind[1442]: New session 6 of user core. Jan 30 13:59:59.955622 systemd[1]: Started session-6.scope - Session 6 of User core. 
Jan 30 14:00:00.021578 sudo[1610]: core : PWD=/home/core ; USER=root ; COMMAND=/usr/bin/rm -rf /etc/audit/rules.d/80-selinux.rules /etc/audit/rules.d/99-default.rules Jan 30 14:00:00.022050 sudo[1610]: pam_unix(sudo:session): session opened for user root(uid=0) by core(uid=500) Jan 30 14:00:00.028447 sudo[1610]: pam_unix(sudo:session): session closed for user root Jan 30 14:00:00.036910 sudo[1609]: core : PWD=/home/core ; USER=root ; COMMAND=/usr/bin/systemctl restart audit-rules Jan 30 14:00:00.037258 sudo[1609]: pam_unix(sudo:session): session opened for user root(uid=0) by core(uid=500) Jan 30 14:00:00.057793 systemd[1]: Stopping audit-rules.service - Load Security Auditing Rules... Jan 30 14:00:00.060379 auditctl[1613]: No rules Jan 30 14:00:00.060783 systemd[1]: audit-rules.service: Deactivated successfully. Jan 30 14:00:00.060992 systemd[1]: Stopped audit-rules.service - Load Security Auditing Rules. Jan 30 14:00:00.067795 systemd[1]: Starting audit-rules.service - Load Security Auditing Rules... Jan 30 14:00:00.110230 augenrules[1631]: No rules Jan 30 14:00:00.111755 systemd[1]: Finished audit-rules.service - Load Security Auditing Rules. Jan 30 14:00:00.113059 sudo[1609]: pam_unix(sudo:session): session closed for user root Jan 30 14:00:00.117045 sshd[1606]: pam_unix(sshd:session): session closed for user core Jan 30 14:00:00.126903 systemd[1]: sshd@5-164.92.66.128:22-147.75.109.163:45242.service: Deactivated successfully. Jan 30 14:00:00.129404 systemd[1]: session-6.scope: Deactivated successfully. Jan 30 14:00:00.131870 systemd-logind[1442]: Session 6 logged out. Waiting for processes to exit. Jan 30 14:00:00.145936 systemd[1]: Started sshd@6-164.92.66.128:22-147.75.109.163:45254.service - OpenSSH per-connection server daemon (147.75.109.163:45254). Jan 30 14:00:00.147734 systemd-logind[1442]: Removed session 6. Jan 30 14:00:00.198093 sshd[1639]: Accepted publickey for core from 147.75.109.163 port 45254 ssh2: RSA SHA256:nniWhUXb7YSTcabVY3ysk3m2XR3g3yvp0Y+YACmZFTQ Jan 30 14:00:00.199239 sshd[1639]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Jan 30 14:00:00.206385 systemd-logind[1442]: New session 7 of user core. Jan 30 14:00:00.211623 systemd[1]: Started session-7.scope - Session 7 of User core. Jan 30 14:00:00.273001 sudo[1642]: core : PWD=/home/core ; USER=root ; COMMAND=/home/core/install.sh Jan 30 14:00:00.273434 sudo[1642]: pam_unix(sudo:session): session opened for user root(uid=0) by core(uid=500) Jan 30 14:00:01.278122 systemd[1]: Stopped kubelet.service - kubelet: The Kubernetes Node Agent. Jan 30 14:00:01.278590 systemd[1]: kubelet.service: Consumed 1.198s CPU time. Jan 30 14:00:01.292286 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent... Jan 30 14:00:01.337291 systemd[1]: Reloading requested from client PID 1680 ('systemctl') (unit session-7.scope)... Jan 30 14:00:01.337343 systemd[1]: Reloading... Jan 30 14:00:01.555822 zram_generator::config[1718]: No configuration found. Jan 30 14:00:01.941649 systemd[1]: /usr/lib/systemd/system/docker.socket:6: ListenStream= references a path below legacy directory /var/run/, updating /var/run/docker.sock → /run/docker.sock; please update the unit file accordingly. Jan 30 14:00:02.096451 systemd[1]: Reloading finished in 758 ms. Jan 30 14:00:02.253103 systemd[1]: kubelet.service: Control process exited, code=killed, status=15/TERM Jan 30 14:00:02.253268 systemd[1]: kubelet.service: Failed with result 'signal'. 
Jan 30 14:00:02.253870 systemd[1]: Stopped kubelet.service - kubelet: The Kubernetes Node Agent. Jan 30 14:00:02.301052 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent... Jan 30 14:00:02.627753 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent. Jan 30 14:00:02.637560 (kubelet)[1771]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS Jan 30 14:00:02.820191 kubelet[1771]: Flag --container-runtime-endpoint has been deprecated, This parameter should be set via the config file specified by the Kubelet's --config flag. See https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/ for more information. Jan 30 14:00:02.820925 kubelet[1771]: Flag --pod-infra-container-image has been deprecated, will be removed in a future release. Image garbage collector will get sandbox image information from CRI. Jan 30 14:00:02.821040 kubelet[1771]: Flag --volume-plugin-dir has been deprecated, This parameter should be set via the config file specified by the Kubelet's --config flag. See https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/ for more information. Jan 30 14:00:02.823011 kubelet[1771]: I0130 14:00:02.822895 1771 server.go:205] "--pod-infra-container-image will not be pruned by the image garbage collector in kubelet and should also be set in the remote runtime" Jan 30 14:00:03.870975 kubelet[1771]: I0130 14:00:03.867773 1771 server.go:484] "Kubelet version" kubeletVersion="v1.30.1" Jan 30 14:00:03.870975 kubelet[1771]: I0130 14:00:03.867868 1771 server.go:486] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK="" Jan 30 14:00:03.870975 kubelet[1771]: I0130 14:00:03.868391 1771 server.go:927] "Client rotation is on, will bootstrap in background" Jan 30 14:00:03.922719 kubelet[1771]: I0130 14:00:03.922649 1771 dynamic_cafile_content.go:157] "Starting controller" name="client-ca-bundle::/etc/kubernetes/pki/ca.crt" Jan 30 14:00:03.967340 kubelet[1771]: I0130 14:00:03.967207 1771 server.go:742] "--cgroups-per-qos enabled, but --cgroup-root was not specified. 
defaulting to /" Jan 30 14:00:03.973222 kubelet[1771]: I0130 14:00:03.972469 1771 container_manager_linux.go:265] "Container manager verified user specified cgroup-root exists" cgroupRoot=[] Jan 30 14:00:03.973222 kubelet[1771]: I0130 14:00:03.972614 1771 container_manager_linux.go:270] "Creating Container Manager object based on Node Config" nodeConfig={"NodeName":"164.92.66.128","RuntimeCgroupsName":"","SystemCgroupsName":"","KubeletCgroupsName":"","KubeletOOMScoreAdj":-999,"ContainerRuntime":"","CgroupsPerQOS":true,"CgroupRoot":"/","CgroupDriver":"systemd","KubeletRootDir":"/var/lib/kubelet","ProtectKernelDefaults":false,"KubeReservedCgroupName":"","SystemReservedCgroupName":"","ReservedSystemCPUs":{},"EnforceNodeAllocatable":{"pods":{}},"KubeReserved":null,"SystemReserved":null,"HardEvictionThresholds":[{"Signal":"imagefs.available","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.15},"GracePeriod":0,"MinReclaim":null},{"Signal":"imagefs.inodesFree","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.05},"GracePeriod":0,"MinReclaim":null},{"Signal":"memory.available","Operator":"LessThan","Value":{"Quantity":"100Mi","Percentage":0},"GracePeriod":0,"MinReclaim":null},{"Signal":"nodefs.available","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.1},"GracePeriod":0,"MinReclaim":null},{"Signal":"nodefs.inodesFree","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.05},"GracePeriod":0,"MinReclaim":null}],"QOSReserved":{},"CPUManagerPolicy":"none","CPUManagerPolicyOptions":null,"TopologyManagerScope":"container","CPUManagerReconcilePeriod":10000000000,"ExperimentalMemoryManagerPolicy":"None","ExperimentalMemoryManagerReservedMemory":null,"PodPidsLimit":-1,"EnforceCPULimits":true,"CPUCFSQuotaPeriod":100000000,"TopologyManagerPolicy":"none","TopologyManagerPolicyOptions":null} Jan 30 14:00:03.974773 kubelet[1771]: I0130 14:00:03.974127 1771 topology_manager.go:138] "Creating topology manager with none policy" Jan 30 14:00:03.974773 kubelet[1771]: I0130 14:00:03.974210 1771 container_manager_linux.go:301] "Creating device plugin manager" Jan 30 14:00:03.974773 kubelet[1771]: I0130 14:00:03.974521 1771 state_mem.go:36] "Initialized new in-memory state store" Jan 30 14:00:03.978073 kubelet[1771]: I0130 14:00:03.977452 1771 kubelet.go:400] "Attempting to sync node with API server" Jan 30 14:00:03.978073 kubelet[1771]: I0130 14:00:03.977509 1771 kubelet.go:301] "Adding static pod path" path="/etc/kubernetes/manifests" Jan 30 14:00:03.978073 kubelet[1771]: I0130 14:00:03.977547 1771 kubelet.go:312] "Adding apiserver pod source" Jan 30 14:00:03.978073 kubelet[1771]: I0130 14:00:03.977567 1771 apiserver.go:42] "Waiting for node sync before watching apiserver pods" Jan 30 14:00:03.980705 kubelet[1771]: E0130 14:00:03.980642 1771 file.go:98] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Jan 30 14:00:03.983739 kubelet[1771]: E0130 14:00:03.983675 1771 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Jan 30 14:00:03.993741 kubelet[1771]: I0130 14:00:03.989922 1771 kuberuntime_manager.go:261] "Container runtime initialized" containerRuntime="containerd" version="v1.7.21" apiVersion="v1" Jan 30 14:00:03.998348 kubelet[1771]: I0130 14:00:03.997776 1771 kubelet.go:815] "Not starting ClusterTrustBundle informer because we are in static kubelet mode" Jan 30 14:00:03.998348 kubelet[1771]: W0130 14:00:03.997962 1771 probe.go:272] Flexvolume plugin 
directory at /opt/libexec/kubernetes/kubelet-plugins/volume/exec/ does not exist. Recreating. Jan 30 14:00:04.000173 kubelet[1771]: I0130 14:00:03.999407 1771 server.go:1264] "Started kubelet" Jan 30 14:00:04.027540 kubelet[1771]: I0130 14:00:04.005455 1771 server.go:163] "Starting to listen" address="0.0.0.0" port=10250 Jan 30 14:00:04.047603 kubelet[1771]: I0130 14:00:04.029551 1771 server.go:455] "Adding debug handlers to kubelet server" Jan 30 14:00:04.054388 kubelet[1771]: I0130 14:00:04.052798 1771 ratelimit.go:55] "Setting rate limiting for endpoint" service="podresources" qps=100 burstTokens=10 Jan 30 14:00:04.054388 kubelet[1771]: I0130 14:00:04.053518 1771 server.go:227] "Starting to serve the podresources API" endpoint="unix:/var/lib/kubelet/pod-resources/kubelet.sock" Jan 30 14:00:04.060599 kubelet[1771]: I0130 14:00:04.059967 1771 fs_resource_analyzer.go:67] "Starting FS ResourceAnalyzer" Jan 30 14:00:04.067896 kubelet[1771]: I0130 14:00:04.067646 1771 volume_manager.go:291] "Starting Kubelet Volume Manager" Jan 30 14:00:04.069578 kubelet[1771]: I0130 14:00:04.069502 1771 desired_state_of_world_populator.go:149] "Desired state populator starts to run" Jan 30 14:00:04.069797 kubelet[1771]: I0130 14:00:04.069710 1771 reconciler.go:26] "Reconciler: start to sync state" Jan 30 14:00:04.070438 kubelet[1771]: W0130 14:00:04.070394 1771 reflector.go:547] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Service: services is forbidden: User "system:anonymous" cannot list resource "services" in API group "" at the cluster scope Jan 30 14:00:04.075248 kubelet[1771]: E0130 14:00:04.075191 1771 reflector.go:150] k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Service: failed to list *v1.Service: services is forbidden: User "system:anonymous" cannot list resource "services" in API group "" at the cluster scope Jan 30 14:00:04.075627 kubelet[1771]: W0130 14:00:04.074946 1771 reflector.go:547] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Node: nodes "164.92.66.128" is forbidden: User "system:anonymous" cannot list resource "nodes" in API group "" at the cluster scope Jan 30 14:00:04.076009 kubelet[1771]: E0130 14:00:04.075737 1771 reflector.go:150] k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Node: failed to list *v1.Node: nodes "164.92.66.128" is forbidden: User "system:anonymous" cannot list resource "nodes" in API group "" at the cluster scope Jan 30 14:00:04.078032 kubelet[1771]: I0130 14:00:04.077973 1771 factory.go:221] Registration of the systemd container factory successfully Jan 30 14:00:04.083527 kubelet[1771]: I0130 14:00:04.082297 1771 factory.go:219] Registration of the crio container factory failed: Get "http://%2Fvar%2Frun%2Fcrio%2Fcrio.sock/info": dial unix /var/run/crio/crio.sock: connect: no such file or directory Jan 30 14:00:04.091116 kubelet[1771]: E0130 14:00:04.091061 1771 kubelet.go:1467] "Image garbage collection failed once. 
Stats initialization may not have completed yet" err="invalid capacity 0 on image filesystem" Jan 30 14:00:04.092741 kubelet[1771]: W0130 14:00:04.092677 1771 reflector.go:547] k8s.io/client-go/informers/factory.go:160: failed to list *v1.CSIDriver: csidrivers.storage.k8s.io is forbidden: User "system:anonymous" cannot list resource "csidrivers" in API group "storage.k8s.io" at the cluster scope Jan 30 14:00:04.093416 kubelet[1771]: E0130 14:00:04.093167 1771 reflector.go:150] k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.CSIDriver: failed to list *v1.CSIDriver: csidrivers.storage.k8s.io is forbidden: User "system:anonymous" cannot list resource "csidrivers" in API group "storage.k8s.io" at the cluster scope Jan 30 14:00:04.094451 kubelet[1771]: E0130 14:00:04.093731 1771 event.go:359] "Server rejected event (will not retry!)" err="events is forbidden: User \"system:anonymous\" cannot create resource \"events\" in API group \"\" in the namespace \"default\"" event="&Event{ObjectMeta:{164.92.66.128.181f7d2acec442c3 default 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Node,Namespace:,Name:164.92.66.128,UID:164.92.66.128,APIVersion:,ResourceVersion:,FieldPath:,},Reason:Starting,Message:Starting kubelet.,Source:EventSource{Component:kubelet,Host:164.92.66.128,},FirstTimestamp:2025-01-30 14:00:03.999367875 +0000 UTC m=+1.349432189,LastTimestamp:2025-01-30 14:00:03.999367875 +0000 UTC m=+1.349432189,Count:1,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:164.92.66.128,}" Jan 30 14:00:04.100876 kubelet[1771]: I0130 14:00:04.100534 1771 factory.go:221] Registration of the containerd container factory successfully Jan 30 14:00:04.147166 kubelet[1771]: E0130 14:00:04.139632 1771 nodelease.go:49] "Failed to get node when trying to set owner ref to the node lease" err="nodes \"164.92.66.128\" not found" node="164.92.66.128" Jan 30 14:00:04.183287 kubelet[1771]: I0130 14:00:04.183246 1771 kubelet_node_status.go:73] "Attempting to register node" node="164.92.66.128" Jan 30 14:00:04.187207 kubelet[1771]: I0130 14:00:04.186941 1771 cpu_manager.go:214] "Starting CPU manager" policy="none" Jan 30 14:00:04.187207 kubelet[1771]: I0130 14:00:04.186962 1771 cpu_manager.go:215] "Reconciling" reconcilePeriod="10s" Jan 30 14:00:04.187207 kubelet[1771]: I0130 14:00:04.186987 1771 state_mem.go:36] "Initialized new in-memory state store" Jan 30 14:00:04.202329 kubelet[1771]: I0130 14:00:04.202124 1771 kubelet_node_status.go:76] "Successfully registered node" node="164.92.66.128" Jan 30 14:00:04.205703 kubelet[1771]: I0130 14:00:04.205470 1771 policy_none.go:49] "None policy: Start" Jan 30 14:00:04.213183 kubelet[1771]: I0130 14:00:04.211828 1771 memory_manager.go:170] "Starting memorymanager" policy="None" Jan 30 14:00:04.213183 kubelet[1771]: I0130 14:00:04.211873 1771 state_mem.go:35] "Initializing new in-memory state store" Jan 30 14:00:04.262007 kubelet[1771]: E0130 14:00:04.261951 1771 kubelet_node_status.go:462] "Error getting the current node from lister" err="node \"164.92.66.128\" not found" Jan 30 14:00:04.264616 systemd[1]: Created slice kubepods.slice - libcontainer container kubepods.slice. Jan 30 14:00:04.293585 systemd[1]: Created slice kubepods-burstable.slice - libcontainer container kubepods-burstable.slice. Jan 30 14:00:04.296385 kubelet[1771]: I0130 14:00:04.296127 1771 kubelet_network_linux.go:50] "Initialized iptables rules." 
protocol="IPv4" Jan 30 14:00:04.306440 kubelet[1771]: I0130 14:00:04.305754 1771 kubelet_network_linux.go:50] "Initialized iptables rules." protocol="IPv6" Jan 30 14:00:04.306440 kubelet[1771]: I0130 14:00:04.306017 1771 status_manager.go:217] "Starting to sync pod status with apiserver" Jan 30 14:00:04.306440 kubelet[1771]: I0130 14:00:04.306068 1771 kubelet.go:2337] "Starting kubelet main sync loop" Jan 30 14:00:04.306440 kubelet[1771]: E0130 14:00:04.306170 1771 kubelet.go:2361] "Skipping pod synchronization" err="[container runtime status check may not have completed yet, PLEG is not healthy: pleg has yet to be successful]" Jan 30 14:00:04.313421 systemd[1]: Created slice kubepods-besteffort.slice - libcontainer container kubepods-besteffort.slice. Jan 30 14:00:04.325294 kubelet[1771]: I0130 14:00:04.325055 1771 manager.go:479] "Failed to read data from checkpoint" checkpoint="kubelet_internal_checkpoint" err="checkpoint is not found" Jan 30 14:00:04.328827 kubelet[1771]: I0130 14:00:04.327910 1771 container_log_manager.go:186] "Initializing container log rotate workers" workers=1 monitorPeriod="10s" Jan 30 14:00:04.328827 kubelet[1771]: I0130 14:00:04.328131 1771 plugin_manager.go:118] "Starting Kubelet Plugin Manager" Jan 30 14:00:04.337651 kubelet[1771]: E0130 14:00:04.337606 1771 eviction_manager.go:282] "Eviction manager: failed to get summary stats" err="failed to get node info: node \"164.92.66.128\" not found" Jan 30 14:00:04.365620 kubelet[1771]: E0130 14:00:04.365505 1771 kubelet_node_status.go:462] "Error getting the current node from lister" err="node \"164.92.66.128\" not found" Jan 30 14:00:04.470038 kubelet[1771]: E0130 14:00:04.468685 1771 kubelet_node_status.go:462] "Error getting the current node from lister" err="node \"164.92.66.128\" not found" Jan 30 14:00:04.569150 kubelet[1771]: E0130 14:00:04.569028 1771 kubelet_node_status.go:462] "Error getting the current node from lister" err="node \"164.92.66.128\" not found" Jan 30 14:00:04.671185 kubelet[1771]: E0130 14:00:04.671085 1771 kubelet_node_status.go:462] "Error getting the current node from lister" err="node \"164.92.66.128\" not found" Jan 30 14:00:04.697695 sudo[1642]: pam_unix(sudo:session): session closed for user root Jan 30 14:00:04.710692 sshd[1639]: pam_unix(sshd:session): session closed for user core Jan 30 14:00:04.725751 systemd[1]: sshd@6-164.92.66.128:22-147.75.109.163:45254.service: Deactivated successfully. Jan 30 14:00:04.731240 systemd[1]: session-7.scope: Deactivated successfully. Jan 30 14:00:04.738632 systemd-logind[1442]: Session 7 logged out. Waiting for processes to exit. Jan 30 14:00:04.753187 systemd-logind[1442]: Removed session 7. 
Jan 30 14:00:04.772625 kubelet[1771]: E0130 14:00:04.772548 1771 kubelet_node_status.go:462] "Error getting the current node from lister" err="node \"164.92.66.128\" not found" Jan 30 14:00:04.873109 kubelet[1771]: E0130 14:00:04.872704 1771 kubelet_node_status.go:462] "Error getting the current node from lister" err="node \"164.92.66.128\" not found" Jan 30 14:00:04.873109 kubelet[1771]: I0130 14:00:04.872727 1771 transport.go:147] "Certificate rotation detected, shutting down client connections to start using new credentials" Jan 30 14:00:04.873109 kubelet[1771]: W0130 14:00:04.873051 1771 reflector.go:470] k8s.io/client-go/informers/factory.go:160: watch of *v1.RuntimeClass ended with: very short watch: k8s.io/client-go/informers/factory.go:160: Unexpected watch close - watch lasted less than a second and no items received Jan 30 14:00:04.972995 kubelet[1771]: E0130 14:00:04.972889 1771 kubelet_node_status.go:462] "Error getting the current node from lister" err="node \"164.92.66.128\" not found" Jan 30 14:00:04.986585 kubelet[1771]: E0130 14:00:04.984926 1771 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Jan 30 14:00:05.073671 kubelet[1771]: E0130 14:00:05.073595 1771 kubelet_node_status.go:462] "Error getting the current node from lister" err="node \"164.92.66.128\" not found" Jan 30 14:00:05.174687 kubelet[1771]: E0130 14:00:05.174581 1771 kubelet_node_status.go:462] "Error getting the current node from lister" err="node \"164.92.66.128\" not found" Jan 30 14:00:05.285451 kubelet[1771]: I0130 14:00:05.283373 1771 kuberuntime_manager.go:1523] "Updating runtime config through cri with podcidr" CIDR="192.168.1.0/24" Jan 30 14:00:05.286771 containerd[1460]: time="2025-01-30T14:00:05.286378902Z" level=info msg="No cni config template is specified, wait for other system components to drop the config." Jan 30 14:00:05.287435 kubelet[1771]: I0130 14:00:05.286839 1771 kubelet_network.go:61] "Updating Pod CIDR" originalPodCIDR="" newPodCIDR="192.168.1.0/24" Jan 30 14:00:05.985867 kubelet[1771]: E0130 14:00:05.985771 1771 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Jan 30 14:00:05.985867 kubelet[1771]: I0130 14:00:05.985798 1771 apiserver.go:52] "Watching apiserver" Jan 30 14:00:06.003626 kubelet[1771]: I0130 14:00:06.000916 1771 topology_manager.go:215] "Topology Admit Handler" podUID="16e506e8-e3a9-447d-be22-1ce80016d143" podNamespace="kube-system" podName="kube-proxy-ggtm7" Jan 30 14:00:06.003626 kubelet[1771]: I0130 14:00:06.001114 1771 topology_manager.go:215] "Topology Admit Handler" podUID="c2ee125a-b0d5-458f-aaa5-32012308f211" podNamespace="kube-system" podName="cilium-thxzh" Jan 30 14:00:06.060645 systemd[1]: Created slice kubepods-burstable-podc2ee125a_b0d5_458f_aaa5_32012308f211.slice - libcontainer container kubepods-burstable-podc2ee125a_b0d5_458f_aaa5_32012308f211.slice. Jan 30 14:00:06.071204 kubelet[1771]: I0130 14:00:06.071073 1771 desired_state_of_world_populator.go:157] "Finished populating initial desired state of world" Jan 30 14:00:06.077982 systemd[1]: Created slice kubepods-besteffort-pod16e506e8_e3a9_447d_be22_1ce80016d143.slice - libcontainer container kubepods-besteffort-pod16e506e8_e3a9_447d_be22_1ce80016d143.slice. 
Jan 30 14:00:06.088385 kubelet[1771]: I0130 14:00:06.086531 1771 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cilium-config-path\" (UniqueName: \"kubernetes.io/configmap/c2ee125a-b0d5-458f-aaa5-32012308f211-cilium-config-path\") pod \"cilium-thxzh\" (UID: \"c2ee125a-b0d5-458f-aaa5-32012308f211\") " pod="kube-system/cilium-thxzh" Jan 30 14:00:06.088385 kubelet[1771]: I0130 14:00:06.086644 1771 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-proc-sys-kernel\" (UniqueName: \"kubernetes.io/host-path/c2ee125a-b0d5-458f-aaa5-32012308f211-host-proc-sys-kernel\") pod \"cilium-thxzh\" (UID: \"c2ee125a-b0d5-458f-aaa5-32012308f211\") " pod="kube-system/cilium-thxzh" Jan 30 14:00:06.088385 kubelet[1771]: I0130 14:00:06.086693 1771 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"hubble-tls\" (UniqueName: \"kubernetes.io/projected/c2ee125a-b0d5-458f-aaa5-32012308f211-hubble-tls\") pod \"cilium-thxzh\" (UID: \"c2ee125a-b0d5-458f-aaa5-32012308f211\") " pod="kube-system/cilium-thxzh" Jan 30 14:00:06.088385 kubelet[1771]: I0130 14:00:06.086737 1771 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-jh4dd\" (UniqueName: \"kubernetes.io/projected/c2ee125a-b0d5-458f-aaa5-32012308f211-kube-api-access-jh4dd\") pod \"cilium-thxzh\" (UID: \"c2ee125a-b0d5-458f-aaa5-32012308f211\") " pod="kube-system/cilium-thxzh" Jan 30 14:00:06.088385 kubelet[1771]: I0130 14:00:06.086765 1771 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cni-path\" (UniqueName: \"kubernetes.io/host-path/c2ee125a-b0d5-458f-aaa5-32012308f211-cni-path\") pod \"cilium-thxzh\" (UID: \"c2ee125a-b0d5-458f-aaa5-32012308f211\") " pod="kube-system/cilium-thxzh" Jan 30 14:00:06.088757 kubelet[1771]: I0130 14:00:06.086833 1771 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/16e506e8-e3a9-447d-be22-1ce80016d143-lib-modules\") pod \"kube-proxy-ggtm7\" (UID: \"16e506e8-e3a9-447d-be22-1ce80016d143\") " pod="kube-system/kube-proxy-ggtm7" Jan 30 14:00:06.088757 kubelet[1771]: I0130 14:00:06.086862 1771 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-d9hg9\" (UniqueName: \"kubernetes.io/projected/16e506e8-e3a9-447d-be22-1ce80016d143-kube-api-access-d9hg9\") pod \"kube-proxy-ggtm7\" (UID: \"16e506e8-e3a9-447d-be22-1ce80016d143\") " pod="kube-system/kube-proxy-ggtm7" Jan 30 14:00:06.088757 kubelet[1771]: I0130 14:00:06.086886 1771 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"hostproc\" (UniqueName: \"kubernetes.io/host-path/c2ee125a-b0d5-458f-aaa5-32012308f211-hostproc\") pod \"cilium-thxzh\" (UID: \"c2ee125a-b0d5-458f-aaa5-32012308f211\") " pod="kube-system/cilium-thxzh" Jan 30 14:00:06.088757 kubelet[1771]: I0130 14:00:06.087023 1771 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/c2ee125a-b0d5-458f-aaa5-32012308f211-lib-modules\") pod \"cilium-thxzh\" (UID: \"c2ee125a-b0d5-458f-aaa5-32012308f211\") " pod="kube-system/cilium-thxzh" Jan 30 14:00:06.088757 kubelet[1771]: I0130 14:00:06.087046 1771 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume 
\"kube-proxy\" (UniqueName: \"kubernetes.io/configmap/16e506e8-e3a9-447d-be22-1ce80016d143-kube-proxy\") pod \"kube-proxy-ggtm7\" (UID: \"16e506e8-e3a9-447d-be22-1ce80016d143\") " pod="kube-system/kube-proxy-ggtm7" Jan 30 14:00:06.088757 kubelet[1771]: I0130 14:00:06.087085 1771 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"bpf-maps\" (UniqueName: \"kubernetes.io/host-path/c2ee125a-b0d5-458f-aaa5-32012308f211-bpf-maps\") pod \"cilium-thxzh\" (UID: \"c2ee125a-b0d5-458f-aaa5-32012308f211\") " pod="kube-system/cilium-thxzh" Jan 30 14:00:06.090155 kubelet[1771]: I0130 14:00:06.087101 1771 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cilium-cgroup\" (UniqueName: \"kubernetes.io/host-path/c2ee125a-b0d5-458f-aaa5-32012308f211-cilium-cgroup\") pod \"cilium-thxzh\" (UID: \"c2ee125a-b0d5-458f-aaa5-32012308f211\") " pod="kube-system/cilium-thxzh" Jan 30 14:00:06.090155 kubelet[1771]: I0130 14:00:06.087117 1771 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"xtables-lock\" (UniqueName: \"kubernetes.io/host-path/c2ee125a-b0d5-458f-aaa5-32012308f211-xtables-lock\") pod \"cilium-thxzh\" (UID: \"c2ee125a-b0d5-458f-aaa5-32012308f211\") " pod="kube-system/cilium-thxzh" Jan 30 14:00:06.090155 kubelet[1771]: I0130 14:00:06.087150 1771 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"clustermesh-secrets\" (UniqueName: \"kubernetes.io/secret/c2ee125a-b0d5-458f-aaa5-32012308f211-clustermesh-secrets\") pod \"cilium-thxzh\" (UID: \"c2ee125a-b0d5-458f-aaa5-32012308f211\") " pod="kube-system/cilium-thxzh" Jan 30 14:00:06.090155 kubelet[1771]: I0130 14:00:06.087168 1771 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-proc-sys-net\" (UniqueName: \"kubernetes.io/host-path/c2ee125a-b0d5-458f-aaa5-32012308f211-host-proc-sys-net\") pod \"cilium-thxzh\" (UID: \"c2ee125a-b0d5-458f-aaa5-32012308f211\") " pod="kube-system/cilium-thxzh" Jan 30 14:00:06.090155 kubelet[1771]: I0130 14:00:06.087184 1771 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"xtables-lock\" (UniqueName: \"kubernetes.io/host-path/16e506e8-e3a9-447d-be22-1ce80016d143-xtables-lock\") pod \"kube-proxy-ggtm7\" (UID: \"16e506e8-e3a9-447d-be22-1ce80016d143\") " pod="kube-system/kube-proxy-ggtm7" Jan 30 14:00:06.090155 kubelet[1771]: I0130 14:00:06.087200 1771 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"etc-cni-netd\" (UniqueName: \"kubernetes.io/host-path/c2ee125a-b0d5-458f-aaa5-32012308f211-etc-cni-netd\") pod \"cilium-thxzh\" (UID: \"c2ee125a-b0d5-458f-aaa5-32012308f211\") " pod="kube-system/cilium-thxzh" Jan 30 14:00:06.090434 kubelet[1771]: I0130 14:00:06.087221 1771 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cilium-run\" (UniqueName: \"kubernetes.io/host-path/c2ee125a-b0d5-458f-aaa5-32012308f211-cilium-run\") pod \"cilium-thxzh\" (UID: \"c2ee125a-b0d5-458f-aaa5-32012308f211\") " pod="kube-system/cilium-thxzh" Jan 30 14:00:06.377101 kubelet[1771]: E0130 14:00:06.376251 1771 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 67.207.67.3 67.207.67.2 67.207.67.3" Jan 30 14:00:06.378346 containerd[1460]: time="2025-01-30T14:00:06.377990636Z" level=info 
msg="RunPodSandbox for &PodSandboxMetadata{Name:cilium-thxzh,Uid:c2ee125a-b0d5-458f-aaa5-32012308f211,Namespace:kube-system,Attempt:0,}" Jan 30 14:00:06.393151 kubelet[1771]: E0130 14:00:06.391105 1771 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 67.207.67.3 67.207.67.2 67.207.67.3" Jan 30 14:00:06.394186 containerd[1460]: time="2025-01-30T14:00:06.393639069Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-proxy-ggtm7,Uid:16e506e8-e3a9-447d-be22-1ce80016d143,Namespace:kube-system,Attempt:0,}" Jan 30 14:00:06.988792 kubelet[1771]: E0130 14:00:06.988732 1771 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Jan 30 14:00:07.192841 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount586443899.mount: Deactivated successfully. Jan 30 14:00:07.229720 containerd[1460]: time="2025-01-30T14:00:07.229633610Z" level=info msg="ImageCreate event name:\"registry.k8s.io/pause:3.8\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"} labels:{key:\"io.cri-containerd.pinned\" value:\"pinned\"}" Jan 30 14:00:07.235250 containerd[1460]: time="2025-01-30T14:00:07.235001257Z" level=info msg="stop pulling image registry.k8s.io/pause:3.8: active requests=0, bytes read=312056" Jan 30 14:00:07.242014 containerd[1460]: time="2025-01-30T14:00:07.241104446Z" level=info msg="ImageUpdate event name:\"registry.k8s.io/pause:3.8\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"} labels:{key:\"io.cri-containerd.pinned\" value:\"pinned\"}" Jan 30 14:00:07.243024 containerd[1460]: time="2025-01-30T14:00:07.242938275Z" level=info msg="ImageCreate event name:\"sha256:4873874c08efc72e9729683a83ffbb7502ee729e9a5ac097723806ea7fa13517\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"} labels:{key:\"io.cri-containerd.pinned\" value:\"pinned\"}" Jan 30 14:00:07.243739 containerd[1460]: time="2025-01-30T14:00:07.243691879Z" level=info msg="stop pulling image registry.k8s.io/pause:3.8: active requests=0, bytes read=0" Jan 30 14:00:07.249681 containerd[1460]: time="2025-01-30T14:00:07.249602426Z" level=info msg="ImageCreate event name:\"registry.k8s.io/pause@sha256:9001185023633d17a2f98ff69b6ff2615b8ea02a825adffa40422f51dfdcde9d\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"} labels:{key:\"io.cri-containerd.pinned\" value:\"pinned\"}" Jan 30 14:00:07.253271 containerd[1460]: time="2025-01-30T14:00:07.252653432Z" level=info msg="Pulled image \"registry.k8s.io/pause:3.8\" with image id \"sha256:4873874c08efc72e9729683a83ffbb7502ee729e9a5ac097723806ea7fa13517\", repo tag \"registry.k8s.io/pause:3.8\", repo digest \"registry.k8s.io/pause@sha256:9001185023633d17a2f98ff69b6ff2615b8ea02a825adffa40422f51dfdcde9d\", size \"311286\" in 872.621348ms" Jan 30 14:00:07.254488 containerd[1460]: time="2025-01-30T14:00:07.254164066Z" level=info msg="Pulled image \"registry.k8s.io/pause:3.8\" with image id \"sha256:4873874c08efc72e9729683a83ffbb7502ee729e9a5ac097723806ea7fa13517\", repo tag \"registry.k8s.io/pause:3.8\", repo digest \"registry.k8s.io/pause@sha256:9001185023633d17a2f98ff69b6ff2615b8ea02a825adffa40422f51dfdcde9d\", size \"311286\" in 860.365916ms" Jan 30 14:00:07.526116 containerd[1460]: time="2025-01-30T14:00:07.520639136Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." 
runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Jan 30 14:00:07.526116 containerd[1460]: time="2025-01-30T14:00:07.525642343Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Jan 30 14:00:07.526980 containerd[1460]: time="2025-01-30T14:00:07.525668562Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Jan 30 14:00:07.526980 containerd[1460]: time="2025-01-30T14:00:07.525926455Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Jan 30 14:00:07.549888 containerd[1460]: time="2025-01-30T14:00:07.549230849Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Jan 30 14:00:07.550633 containerd[1460]: time="2025-01-30T14:00:07.549717830Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Jan 30 14:00:07.550633 containerd[1460]: time="2025-01-30T14:00:07.549752079Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Jan 30 14:00:07.550633 containerd[1460]: time="2025-01-30T14:00:07.550360516Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Jan 30 14:00:07.685660 systemd[1]: Started cri-containerd-9ceb6700be6858927af3f5ca2eb7b7fae75855f5fb1c31e47c48ced931cdcc5d.scope - libcontainer container 9ceb6700be6858927af3f5ca2eb7b7fae75855f5fb1c31e47c48ced931cdcc5d. Jan 30 14:00:07.688154 systemd[1]: Started cri-containerd-cdf396d0bb45d4c65cf5ed4b99ef48154f134016b9a9049eefed55d9a22dd6bd.scope - libcontainer container cdf396d0bb45d4c65cf5ed4b99ef48154f134016b9a9049eefed55d9a22dd6bd. 
Jan 30 14:00:07.763429 containerd[1460]: time="2025-01-30T14:00:07.762585789Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:cilium-thxzh,Uid:c2ee125a-b0d5-458f-aaa5-32012308f211,Namespace:kube-system,Attempt:0,} returns sandbox id \"cdf396d0bb45d4c65cf5ed4b99ef48154f134016b9a9049eefed55d9a22dd6bd\"" Jan 30 14:00:07.768900 kubelet[1771]: E0130 14:00:07.768852 1771 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 67.207.67.3 67.207.67.2 67.207.67.3" Jan 30 14:00:07.770467 containerd[1460]: time="2025-01-30T14:00:07.770393545Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-proxy-ggtm7,Uid:16e506e8-e3a9-447d-be22-1ce80016d143,Namespace:kube-system,Attempt:0,} returns sandbox id \"9ceb6700be6858927af3f5ca2eb7b7fae75855f5fb1c31e47c48ced931cdcc5d\"" Jan 30 14:00:07.773151 kubelet[1771]: E0130 14:00:07.773110 1771 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 67.207.67.3 67.207.67.2 67.207.67.3" Jan 30 14:00:07.774881 containerd[1460]: time="2025-01-30T14:00:07.774794038Z" level=info msg="PullImage \"quay.io/cilium/cilium:v1.12.5@sha256:06ce2b0a0a472e73334a7504ee5c5d8b2e2d7b72ef728ad94e564740dd505be5\"" Jan 30 14:00:07.990546 kubelet[1771]: E0130 14:00:07.990352 1771 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Jan 30 14:00:08.992540 kubelet[1771]: E0130 14:00:08.992455 1771 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Jan 30 14:00:09.993576 kubelet[1771]: E0130 14:00:09.993522 1771 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Jan 30 14:00:10.995137 kubelet[1771]: E0130 14:00:10.994971 1771 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Jan 30 14:00:11.995613 kubelet[1771]: E0130 14:00:11.995524 1771 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Jan 30 14:00:12.560254 systemd-resolved[1324]: Using degraded feature set UDP instead of UDP+EDNS0 for DNS server 67.207.67.3. Jan 30 14:00:12.996605 kubelet[1771]: E0130 14:00:12.996427 1771 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Jan 30 14:00:13.997426 kubelet[1771]: E0130 14:00:13.997360 1771 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Jan 30 14:00:14.526887 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount3945995118.mount: Deactivated successfully. 
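The var-lib-containerd-tmpmounts-containerd\x2dmount*.mount units that systemd keeps reporting as deactivated are the escaped names of containerd's short-lived mount points under /var/lib/containerd/tmpmounts/. As a rough sketch of that naming scheme (simplified from what systemd-escape --path does, not the actual systemd code):

    # Simplified sketch of systemd path escaping: path components are joined with
    # "-", and characters outside [A-Za-z0-9:_.] are written as \xNN escapes.
    # The real systemd-escape handles more edge cases (leading dots, empty paths).
    def systemd_escape_path(path: str) -> str:
        allowed = set("abcdefghijklmnopqrstuvwxyz"
                      "ABCDEFGHIJKLMNOPQRSTUVWXYZ"
                      "0123456789:_.")
        parts = [p for p in path.split("/") if p]
        return "-".join("".join(c if c in allowed else "\\x%02x" % ord(c)
                                for c in part)
                        for part in parts)

    # Reproduces the unit name in the journal entry above:
    # var-lib-containerd-tmpmounts-containerd\x2dmount3945995118.mount
    print(systemd_escape_path("/var/lib/containerd/tmpmounts/containerd-mount3945995118") + ".mount")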
Jan 30 14:00:14.998500 kubelet[1771]: E0130 14:00:14.998291 1771 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Jan 30 14:00:15.998527 kubelet[1771]: E0130 14:00:15.998470 1771 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Jan 30 14:00:16.999381 kubelet[1771]: E0130 14:00:16.999265 1771 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Jan 30 14:00:17.056149 containerd[1460]: time="2025-01-30T14:00:17.055911030Z" level=info msg="ImageCreate event name:\"quay.io/cilium/cilium@sha256:06ce2b0a0a472e73334a7504ee5c5d8b2e2d7b72ef728ad94e564740dd505be5\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jan 30 14:00:17.057583 containerd[1460]: time="2025-01-30T14:00:17.057321212Z" level=info msg="stop pulling image quay.io/cilium/cilium@sha256:06ce2b0a0a472e73334a7504ee5c5d8b2e2d7b72ef728ad94e564740dd505be5: active requests=0, bytes read=166730503" Jan 30 14:00:17.058545 containerd[1460]: time="2025-01-30T14:00:17.058495156Z" level=info msg="ImageCreate event name:\"sha256:3e35b3e9f295e7748482d40ed499b0ff7961f1f128d479d8e6682b3245bba69b\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jan 30 14:00:17.060743 containerd[1460]: time="2025-01-30T14:00:17.060541162Z" level=info msg="Pulled image \"quay.io/cilium/cilium:v1.12.5@sha256:06ce2b0a0a472e73334a7504ee5c5d8b2e2d7b72ef728ad94e564740dd505be5\" with image id \"sha256:3e35b3e9f295e7748482d40ed499b0ff7961f1f128d479d8e6682b3245bba69b\", repo tag \"\", repo digest \"quay.io/cilium/cilium@sha256:06ce2b0a0a472e73334a7504ee5c5d8b2e2d7b72ef728ad94e564740dd505be5\", size \"166719855\" in 9.285658109s" Jan 30 14:00:17.060743 containerd[1460]: time="2025-01-30T14:00:17.060585228Z" level=info msg="PullImage \"quay.io/cilium/cilium:v1.12.5@sha256:06ce2b0a0a472e73334a7504ee5c5d8b2e2d7b72ef728ad94e564740dd505be5\" returns image reference \"sha256:3e35b3e9f295e7748482d40ed499b0ff7961f1f128d479d8e6682b3245bba69b\"" Jan 30 14:00:17.063400 containerd[1460]: time="2025-01-30T14:00:17.063192068Z" level=info msg="PullImage \"registry.k8s.io/kube-proxy:v1.30.9\"" Jan 30 14:00:17.064873 containerd[1460]: time="2025-01-30T14:00:17.064838430Z" level=info msg="CreateContainer within sandbox \"cdf396d0bb45d4c65cf5ed4b99ef48154f134016b9a9049eefed55d9a22dd6bd\" for container &ContainerMetadata{Name:mount-cgroup,Attempt:0,}" Jan 30 14:00:17.091428 containerd[1460]: time="2025-01-30T14:00:17.091268428Z" level=info msg="CreateContainer within sandbox \"cdf396d0bb45d4c65cf5ed4b99ef48154f134016b9a9049eefed55d9a22dd6bd\" for &ContainerMetadata{Name:mount-cgroup,Attempt:0,} returns container id \"c667fa1839ee923826339272f55fb8e8b4048ab0737f1ea3ebeb89a55d506f23\"" Jan 30 14:00:17.092792 containerd[1460]: time="2025-01-30T14:00:17.092745071Z" level=info msg="StartContainer for \"c667fa1839ee923826339272f55fb8e8b4048ab0737f1ea3ebeb89a55d506f23\"" Jan 30 14:00:17.139516 systemd[1]: Started cri-containerd-c667fa1839ee923826339272f55fb8e8b4048ab0737f1ea3ebeb89a55d506f23.scope - libcontainer container c667fa1839ee923826339272f55fb8e8b4048ab0737f1ea3ebeb89a55d506f23. 
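The Cilium image in the pull record above is referenced by digest (the @sha256:... suffix in the PullImage request), which appears to be why containerd reports an empty repo tag and only a repo digest for it. A minimal sketch of splitting such a reference into name, tag, and digest (illustrative only; real parsing lives in containerd's reference handling):

    # Split an OCI image reference of the form name[:tag][@digest] into parts.
    # Sketch only; a real reference parser accepts and validates more forms.
    def split_reference(ref: str):
        digest = None
        if "@" in ref:
            ref, digest = ref.split("@", 1)
        name, tag = ref, None
        if ":" in ref.rsplit("/", 1)[-1]:   # a ':' after the last '/' marks a tag
            name, tag = ref.rsplit(":", 1)
        return name, tag, digest

    ref = ("quay.io/cilium/cilium:v1.12.5"
           "@sha256:06ce2b0a0a472e73334a7504ee5c5d8b2e2d7b72ef728ad94e564740dd505be5")
    print(split_reference(ref))
    # ('quay.io/cilium/cilium', 'v1.12.5', 'sha256:06ce2b0a...')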
Jan 30 14:00:17.184485 containerd[1460]: time="2025-01-30T14:00:17.184387220Z" level=info msg="StartContainer for \"c667fa1839ee923826339272f55fb8e8b4048ab0737f1ea3ebeb89a55d506f23\" returns successfully" Jan 30 14:00:17.199163 systemd[1]: cri-containerd-c667fa1839ee923826339272f55fb8e8b4048ab0737f1ea3ebeb89a55d506f23.scope: Deactivated successfully. Jan 30 14:00:17.229032 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-c667fa1839ee923826339272f55fb8e8b4048ab0737f1ea3ebeb89a55d506f23-rootfs.mount: Deactivated successfully. Jan 30 14:00:17.313129 containerd[1460]: time="2025-01-30T14:00:17.311837143Z" level=info msg="shim disconnected" id=c667fa1839ee923826339272f55fb8e8b4048ab0737f1ea3ebeb89a55d506f23 namespace=k8s.io Jan 30 14:00:17.313129 containerd[1460]: time="2025-01-30T14:00:17.311990470Z" level=warning msg="cleaning up after shim disconnected" id=c667fa1839ee923826339272f55fb8e8b4048ab0737f1ea3ebeb89a55d506f23 namespace=k8s.io Jan 30 14:00:17.313129 containerd[1460]: time="2025-01-30T14:00:17.312006487Z" level=info msg="cleaning up dead shim" namespace=k8s.io Jan 30 14:00:17.415416 kubelet[1771]: E0130 14:00:17.415371 1771 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 67.207.67.3 67.207.67.2 67.207.67.3" Jan 30 14:00:17.418981 containerd[1460]: time="2025-01-30T14:00:17.418914302Z" level=info msg="CreateContainer within sandbox \"cdf396d0bb45d4c65cf5ed4b99ef48154f134016b9a9049eefed55d9a22dd6bd\" for container &ContainerMetadata{Name:apply-sysctl-overwrites,Attempt:0,}" Jan 30 14:00:17.448357 containerd[1460]: time="2025-01-30T14:00:17.448216122Z" level=info msg="CreateContainer within sandbox \"cdf396d0bb45d4c65cf5ed4b99ef48154f134016b9a9049eefed55d9a22dd6bd\" for &ContainerMetadata{Name:apply-sysctl-overwrites,Attempt:0,} returns container id \"fdef02c28193b04e7c254f9a5a2473e99a0021facf5dd11f32bf8ff082da00e0\"" Jan 30 14:00:17.449327 containerd[1460]: time="2025-01-30T14:00:17.449196658Z" level=info msg="StartContainer for \"fdef02c28193b04e7c254f9a5a2473e99a0021facf5dd11f32bf8ff082da00e0\"" Jan 30 14:00:17.480732 systemd[1]: Started cri-containerd-fdef02c28193b04e7c254f9a5a2473e99a0021facf5dd11f32bf8ff082da00e0.scope - libcontainer container fdef02c28193b04e7c254f9a5a2473e99a0021facf5dd11f32bf8ff082da00e0. Jan 30 14:00:17.524875 containerd[1460]: time="2025-01-30T14:00:17.524802915Z" level=info msg="StartContainer for \"fdef02c28193b04e7c254f9a5a2473e99a0021facf5dd11f32bf8ff082da00e0\" returns successfully" Jan 30 14:00:17.541083 systemd[1]: systemd-sysctl.service: Deactivated successfully. Jan 30 14:00:17.541442 systemd[1]: Stopped systemd-sysctl.service - Apply Kernel Variables. Jan 30 14:00:17.541530 systemd[1]: Stopping systemd-sysctl.service - Apply Kernel Variables... Jan 30 14:00:17.550055 systemd[1]: Starting systemd-sysctl.service - Apply Kernel Variables... Jan 30 14:00:17.550297 systemd[1]: cri-containerd-fdef02c28193b04e7c254f9a5a2473e99a0021facf5dd11f32bf8ff082da00e0.scope: Deactivated successfully. Jan 30 14:00:17.582187 systemd[1]: Finished systemd-sysctl.service - Apply Kernel Variables. 
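The recurring dns.go:153 "Nameserver limits exceeded" errors come from the kubelet capping a pod's resolv.conf at three nameservers (mirroring the glibc resolver's MAXNS limit of 3); whatever the droplet's upstream resolv.conf contains, only the first three entries survive, which is how the applied line ends up as "67.207.67.3 67.207.67.2 67.207.67.3". A hedged sketch of that truncation, using a hypothetical four-entry list since the node's actual resolv.conf is not shown in this journal:

    # Sketch: keep at most 3 nameservers, as the kubelet does when assembling a
    # pod's resolv.conf; anything beyond the limit is omitted with a warning.
    MAX_NAMESERVERS = 3

    def applied_nameservers(resolv_conf: str) -> list:
        servers = [line.split()[1] for line in resolv_conf.splitlines()
                   if line.startswith("nameserver") and len(line.split()) > 1]
        return servers[:MAX_NAMESERVERS]

    sample_lines = [
        "nameserver 67.207.67.3",
        "nameserver 67.207.67.2",
        "nameserver 67.207.67.3",
        "nameserver 192.0.2.53",   # hypothetical fourth entry that gets dropped
    ]
    print(applied_nameservers("\n".join(sample_lines)))
    # ['67.207.67.3', '67.207.67.2', '67.207.67.3']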
Jan 30 14:00:17.585340 containerd[1460]: time="2025-01-30T14:00:17.584826098Z" level=info msg="shim disconnected" id=fdef02c28193b04e7c254f9a5a2473e99a0021facf5dd11f32bf8ff082da00e0 namespace=k8s.io Jan 30 14:00:17.585340 containerd[1460]: time="2025-01-30T14:00:17.584901656Z" level=warning msg="cleaning up after shim disconnected" id=fdef02c28193b04e7c254f9a5a2473e99a0021facf5dd11f32bf8ff082da00e0 namespace=k8s.io Jan 30 14:00:17.585340 containerd[1460]: time="2025-01-30T14:00:17.584914540Z" level=info msg="cleaning up dead shim" namespace=k8s.io Jan 30 14:00:17.999980 kubelet[1771]: E0130 14:00:17.999628 1771 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Jan 30 14:00:18.420723 kubelet[1771]: E0130 14:00:18.420280 1771 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 67.207.67.3 67.207.67.2 67.207.67.3" Jan 30 14:00:18.423536 containerd[1460]: time="2025-01-30T14:00:18.423475633Z" level=info msg="CreateContainer within sandbox \"cdf396d0bb45d4c65cf5ed4b99ef48154f134016b9a9049eefed55d9a22dd6bd\" for container &ContainerMetadata{Name:mount-bpf-fs,Attempt:0,}" Jan 30 14:00:18.487115 containerd[1460]: time="2025-01-30T14:00:18.487024741Z" level=info msg="CreateContainer within sandbox \"cdf396d0bb45d4c65cf5ed4b99ef48154f134016b9a9049eefed55d9a22dd6bd\" for &ContainerMetadata{Name:mount-bpf-fs,Attempt:0,} returns container id \"89da84638ba9d6330c890e164e82bffcea300c2309ef72567ebb615a5258849d\"" Jan 30 14:00:18.489374 containerd[1460]: time="2025-01-30T14:00:18.488357511Z" level=info msg="StartContainer for \"89da84638ba9d6330c890e164e82bffcea300c2309ef72567ebb615a5258849d\"" Jan 30 14:00:18.493734 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount754971630.mount: Deactivated successfully. Jan 30 14:00:18.555337 systemd[1]: Started cri-containerd-89da84638ba9d6330c890e164e82bffcea300c2309ef72567ebb615a5258849d.scope - libcontainer container 89da84638ba9d6330c890e164e82bffcea300c2309ef72567ebb615a5258849d. Jan 30 14:00:18.626371 systemd[1]: cri-containerd-89da84638ba9d6330c890e164e82bffcea300c2309ef72567ebb615a5258849d.scope: Deactivated successfully. 
Jan 30 14:00:18.629049 containerd[1460]: time="2025-01-30T14:00:18.628877472Z" level=info msg="StartContainer for \"89da84638ba9d6330c890e164e82bffcea300c2309ef72567ebb615a5258849d\" returns successfully" Jan 30 14:00:18.762769 containerd[1460]: time="2025-01-30T14:00:18.762340743Z" level=info msg="shim disconnected" id=89da84638ba9d6330c890e164e82bffcea300c2309ef72567ebb615a5258849d namespace=k8s.io Jan 30 14:00:18.762769 containerd[1460]: time="2025-01-30T14:00:18.762473639Z" level=warning msg="cleaning up after shim disconnected" id=89da84638ba9d6330c890e164e82bffcea300c2309ef72567ebb615a5258849d namespace=k8s.io Jan 30 14:00:18.762769 containerd[1460]: time="2025-01-30T14:00:18.762498188Z" level=info msg="cleaning up dead shim" namespace=k8s.io Jan 30 14:00:18.806408 containerd[1460]: time="2025-01-30T14:00:18.806170330Z" level=warning msg="cleanup warnings time=\"2025-01-30T14:00:18Z\" level=warning msg=\"failed to remove runc container\" error=\"runc did not terminate successfully: exit status 255: \" runtime=io.containerd.runc.v2\n" namespace=k8s.io Jan 30 14:00:19.001704 kubelet[1771]: E0130 14:00:19.001506 1771 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Jan 30 14:00:19.077897 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-89da84638ba9d6330c890e164e82bffcea300c2309ef72567ebb615a5258849d-rootfs.mount: Deactivated successfully. Jan 30 14:00:19.181912 containerd[1460]: time="2025-01-30T14:00:19.181848256Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-proxy:v1.30.9\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jan 30 14:00:19.183506 containerd[1460]: time="2025-01-30T14:00:19.183435442Z" level=info msg="stop pulling image registry.k8s.io/kube-proxy:v1.30.9: active requests=0, bytes read=29058337" Jan 30 14:00:19.184551 containerd[1460]: time="2025-01-30T14:00:19.184375460Z" level=info msg="ImageCreate event name:\"sha256:4c369683c359609256b8907f424fc2355f1e7e3eeb7295b1fd8ffc5304f4cede\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jan 30 14:00:19.194607 containerd[1460]: time="2025-01-30T14:00:19.193479540Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-proxy@sha256:d78dc40d97ff862fd8ddb47f80a5ba3feec17bc73e58a60e963885e33faa0083\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jan 30 14:00:19.194607 containerd[1460]: time="2025-01-30T14:00:19.194430755Z" level=info msg="Pulled image \"registry.k8s.io/kube-proxy:v1.30.9\" with image id \"sha256:4c369683c359609256b8907f424fc2355f1e7e3eeb7295b1fd8ffc5304f4cede\", repo tag \"registry.k8s.io/kube-proxy:v1.30.9\", repo digest \"registry.k8s.io/kube-proxy@sha256:d78dc40d97ff862fd8ddb47f80a5ba3feec17bc73e58a60e963885e33faa0083\", size \"29057356\" in 2.131199531s" Jan 30 14:00:19.194607 containerd[1460]: time="2025-01-30T14:00:19.194474574Z" level=info msg="PullImage \"registry.k8s.io/kube-proxy:v1.30.9\" returns image reference \"sha256:4c369683c359609256b8907f424fc2355f1e7e3eeb7295b1fd8ffc5304f4cede\"" Jan 30 14:00:19.197541 containerd[1460]: time="2025-01-30T14:00:19.197461794Z" level=info msg="CreateContainer within sandbox \"9ceb6700be6858927af3f5ca2eb7b7fae75855f5fb1c31e47c48ced931cdcc5d\" for container &ContainerMetadata{Name:kube-proxy,Attempt:0,}" Jan 30 14:00:19.215459 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount1472877268.mount: Deactivated successfully. 
Jan 30 14:00:19.223822 containerd[1460]: time="2025-01-30T14:00:19.223727563Z" level=info msg="CreateContainer within sandbox \"9ceb6700be6858927af3f5ca2eb7b7fae75855f5fb1c31e47c48ced931cdcc5d\" for &ContainerMetadata{Name:kube-proxy,Attempt:0,} returns container id \"9319a476f928f3537d0840367b5c4ce4106a45b85f5783bc17eb7a6adc81c457\"" Jan 30 14:00:19.226481 containerd[1460]: time="2025-01-30T14:00:19.225064581Z" level=info msg="StartContainer for \"9319a476f928f3537d0840367b5c4ce4106a45b85f5783bc17eb7a6adc81c457\"" Jan 30 14:00:19.271591 systemd[1]: Started cri-containerd-9319a476f928f3537d0840367b5c4ce4106a45b85f5783bc17eb7a6adc81c457.scope - libcontainer container 9319a476f928f3537d0840367b5c4ce4106a45b85f5783bc17eb7a6adc81c457. Jan 30 14:00:19.323378 containerd[1460]: time="2025-01-30T14:00:19.323295803Z" level=info msg="StartContainer for \"9319a476f928f3537d0840367b5c4ce4106a45b85f5783bc17eb7a6adc81c457\" returns successfully" Jan 30 14:00:19.425868 kubelet[1771]: E0130 14:00:19.425184 1771 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 67.207.67.3 67.207.67.2 67.207.67.3" Jan 30 14:00:19.429672 kubelet[1771]: E0130 14:00:19.429556 1771 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 67.207.67.3 67.207.67.2 67.207.67.3" Jan 30 14:00:19.433010 containerd[1460]: time="2025-01-30T14:00:19.432952040Z" level=info msg="CreateContainer within sandbox \"cdf396d0bb45d4c65cf5ed4b99ef48154f134016b9a9049eefed55d9a22dd6bd\" for container &ContainerMetadata{Name:clean-cilium-state,Attempt:0,}" Jan 30 14:00:19.453610 containerd[1460]: time="2025-01-30T14:00:19.453548240Z" level=info msg="CreateContainer within sandbox \"cdf396d0bb45d4c65cf5ed4b99ef48154f134016b9a9049eefed55d9a22dd6bd\" for &ContainerMetadata{Name:clean-cilium-state,Attempt:0,} returns container id \"5f0f58192f2fc81bcd8efa1403f68650ddc740c74823e7dab5edd46a58927b38\"" Jan 30 14:00:19.454567 containerd[1460]: time="2025-01-30T14:00:19.454512311Z" level=info msg="StartContainer for \"5f0f58192f2fc81bcd8efa1403f68650ddc740c74823e7dab5edd46a58927b38\"" Jan 30 14:00:19.482695 kubelet[1771]: I0130 14:00:19.482629 1771 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/kube-proxy-ggtm7" podStartSLOduration=4.061432334 podStartE2EDuration="15.482606826s" podCreationTimestamp="2025-01-30 14:00:04 +0000 UTC" firstStartedPulling="2025-01-30 14:00:07.774464262 +0000 UTC m=+5.124528569" lastFinishedPulling="2025-01-30 14:00:19.195638754 +0000 UTC m=+16.545703061" observedRunningTime="2025-01-30 14:00:19.446693354 +0000 UTC m=+16.796757670" watchObservedRunningTime="2025-01-30 14:00:19.482606826 +0000 UTC m=+16.832671139" Jan 30 14:00:19.506171 systemd[1]: Started cri-containerd-5f0f58192f2fc81bcd8efa1403f68650ddc740c74823e7dab5edd46a58927b38.scope - libcontainer container 5f0f58192f2fc81bcd8efa1403f68650ddc740c74823e7dab5edd46a58927b38. Jan 30 14:00:19.553345 systemd[1]: cri-containerd-5f0f58192f2fc81bcd8efa1403f68650ddc740c74823e7dab5edd46a58927b38.scope: Deactivated successfully. 
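The pod_startup_latency_tracker entry for kube-proxy-ggtm7 can be re-derived from its own fields, at least for the values logged here: podStartE2EDuration is watchObservedRunningTime minus podCreationTimestamp, and podStartSLOduration subtracts the image-pull window (lastFinishedPulling minus firstStartedPulling) from that. A quick arithmetic check, with timestamps trimmed to microseconds so the result matches to within rounding:

    # Re-derive the kube-proxy startup numbers from the tracker line above.
    from datetime import datetime, timezone

    def parse(ts: str) -> datetime:
        date_part, frac = ts.split(".")
        return datetime.strptime(date_part, "%Y-%m-%d %H:%M:%S").replace(
            microsecond=int(frac[:6]), tzinfo=timezone.utc)

    created    = parse("2025-01-30 14:00:04.000000000")  # podCreationTimestamp
    first_pull = parse("2025-01-30 14:00:07.774464262")  # firstStartedPulling
    last_pull  = parse("2025-01-30 14:00:19.195638754")  # lastFinishedPulling
    observed   = parse("2025-01-30 14:00:19.482606826")  # watchObservedRunningTime

    e2e = (observed - created).total_seconds()
    slo = e2e - (last_pull - first_pull).total_seconds()
    print(f"E2E ~ {e2e:.6f}s, SLO ~ {slo:.6f}s")
    # E2E ~ 15.482606s, SLO ~ 4.061432s, matching the logged
    # podStartE2EDuration=15.482606826s and podStartSLOduration=4.061432334.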
Jan 30 14:00:19.556930 containerd[1460]: time="2025-01-30T14:00:19.556806007Z" level=info msg="StartContainer for \"5f0f58192f2fc81bcd8efa1403f68650ddc740c74823e7dab5edd46a58927b38\" returns successfully" Jan 30 14:00:19.634150 containerd[1460]: time="2025-01-30T14:00:19.634072883Z" level=info msg="shim disconnected" id=5f0f58192f2fc81bcd8efa1403f68650ddc740c74823e7dab5edd46a58927b38 namespace=k8s.io Jan 30 14:00:19.635092 containerd[1460]: time="2025-01-30T14:00:19.634630767Z" level=warning msg="cleaning up after shim disconnected" id=5f0f58192f2fc81bcd8efa1403f68650ddc740c74823e7dab5edd46a58927b38 namespace=k8s.io Jan 30 14:00:19.635092 containerd[1460]: time="2025-01-30T14:00:19.634656725Z" level=info msg="cleaning up dead shim" namespace=k8s.io Jan 30 14:00:20.002586 kubelet[1771]: E0130 14:00:20.002496 1771 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Jan 30 14:00:20.078216 systemd[1]: run-containerd-runc-k8s.io-9319a476f928f3537d0840367b5c4ce4106a45b85f5783bc17eb7a6adc81c457-runc.7nN5ej.mount: Deactivated successfully. Jan 30 14:00:20.175636 systemd-resolved[1324]: Using degraded feature set UDP instead of UDP+EDNS0 for DNS server 67.207.67.2. Jan 30 14:00:20.436224 kubelet[1771]: E0130 14:00:20.435430 1771 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 67.207.67.3 67.207.67.2 67.207.67.3" Jan 30 14:00:20.436224 kubelet[1771]: E0130 14:00:20.435690 1771 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 67.207.67.3 67.207.67.2 67.207.67.3" Jan 30 14:00:20.438745 containerd[1460]: time="2025-01-30T14:00:20.438686040Z" level=info msg="CreateContainer within sandbox \"cdf396d0bb45d4c65cf5ed4b99ef48154f134016b9a9049eefed55d9a22dd6bd\" for container &ContainerMetadata{Name:cilium-agent,Attempt:0,}" Jan 30 14:00:20.493971 containerd[1460]: time="2025-01-30T14:00:20.493852537Z" level=info msg="CreateContainer within sandbox \"cdf396d0bb45d4c65cf5ed4b99ef48154f134016b9a9049eefed55d9a22dd6bd\" for &ContainerMetadata{Name:cilium-agent,Attempt:0,} returns container id \"621835f2ca6d4ea049e36a4ab8cf0d0fe137f7aa8baa077c2d83a11262704f94\"" Jan 30 14:00:20.496793 containerd[1460]: time="2025-01-30T14:00:20.495203205Z" level=info msg="StartContainer for \"621835f2ca6d4ea049e36a4ab8cf0d0fe137f7aa8baa077c2d83a11262704f94\"" Jan 30 14:00:20.555695 systemd[1]: Started cri-containerd-621835f2ca6d4ea049e36a4ab8cf0d0fe137f7aa8baa077c2d83a11262704f94.scope - libcontainer container 621835f2ca6d4ea049e36a4ab8cf0d0fe137f7aa8baa077c2d83a11262704f94. 
Jan 30 14:00:20.620294 containerd[1460]: time="2025-01-30T14:00:20.619409720Z" level=info msg="StartContainer for \"621835f2ca6d4ea049e36a4ab8cf0d0fe137f7aa8baa077c2d83a11262704f94\" returns successfully" Jan 30 14:00:20.737469 kubelet[1771]: I0130 14:00:20.736357 1771 kubelet_node_status.go:497] "Fast updating node status as it just became ready" Jan 30 14:00:21.003472 kubelet[1771]: E0130 14:00:21.003291 1771 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Jan 30 14:00:21.131606 kernel: Initializing XFRM netlink socket Jan 30 14:00:21.443275 kubelet[1771]: E0130 14:00:21.442920 1771 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 67.207.67.3 67.207.67.2 67.207.67.3" Jan 30 14:00:22.004243 kubelet[1771]: E0130 14:00:22.004136 1771 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Jan 30 14:00:22.445700 kubelet[1771]: E0130 14:00:22.445654 1771 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 67.207.67.3 67.207.67.2 67.207.67.3" Jan 30 14:00:22.826156 systemd-networkd[1366]: cilium_host: Link UP Jan 30 14:00:22.826607 systemd-networkd[1366]: cilium_net: Link UP Jan 30 14:00:22.826838 systemd-networkd[1366]: cilium_net: Gained carrier Jan 30 14:00:22.827057 systemd-networkd[1366]: cilium_host: Gained carrier Jan 30 14:00:22.976415 systemd-networkd[1366]: cilium_vxlan: Link UP Jan 30 14:00:22.976425 systemd-networkd[1366]: cilium_vxlan: Gained carrier Jan 30 14:00:23.005398 kubelet[1771]: E0130 14:00:23.005278 1771 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Jan 30 14:00:23.200099 systemd-networkd[1366]: cilium_net: Gained IPv6LL Jan 30 14:00:23.281426 kernel: NET: Registered PF_ALG protocol family Jan 30 14:00:23.449281 kubelet[1771]: E0130 14:00:23.449227 1771 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 67.207.67.3 67.207.67.2 67.207.67.3" Jan 30 14:00:23.539221 kubelet[1771]: I0130 14:00:23.539008 1771 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/cilium-thxzh" podStartSLOduration=10.249205363 podStartE2EDuration="19.538975176s" podCreationTimestamp="2025-01-30 14:00:04 +0000 UTC" firstStartedPulling="2025-01-30 14:00:07.772418222 +0000 UTC m=+5.122482529" lastFinishedPulling="2025-01-30 14:00:17.06218805 +0000 UTC m=+14.412252342" observedRunningTime="2025-01-30 14:00:21.479185369 +0000 UTC m=+18.829249682" watchObservedRunningTime="2025-01-30 14:00:23.538975176 +0000 UTC m=+20.889039481" Jan 30 14:00:23.539416 kubelet[1771]: I0130 14:00:23.539384 1771 topology_manager.go:215] "Topology Admit Handler" podUID="ef34f9da-1fd2-4637-b36e-9d579fdc1236" podNamespace="default" podName="nginx-deployment-85f456d6dd-nth2q" Jan 30 14:00:23.551712 systemd[1]: Created slice kubepods-besteffort-podef34f9da_1fd2_4637_b36e_9d579fdc1236.slice - libcontainer container kubepods-besteffort-podef34f9da_1fd2_4637_b36e_9d579fdc1236.slice. 
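As the cilium-agent comes up, systemd-networkd reports the agent's virtual devices being created: cilium_host and cilium_net (a veth pair), the cilium_vxlan overlay device, and shortly afterwards lxc_health plus a per-pod lxc* interface for each sandbox. One way to list the same set directly on such a node, assuming iproute2 with JSON output support is available:

    # List Cilium-managed network interfaces on the node via "ip -j link show".
    # Assumes iproute2 with JSON support; the names are those this journal shows.
    import json
    import subprocess

    out = subprocess.run(["ip", "-j", "link", "show"],
                         check=True, capture_output=True, text=True).stdout
    cilium_ifaces = [link["ifname"] for link in json.loads(out)
                     if link["ifname"].startswith(("cilium_", "lxc"))]
    print(cilium_ifaces)
    # e.g. ['cilium_net', 'cilium_host', 'cilium_vxlan', 'lxc_health', 'lxc9cdf221bf4bf']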
Jan 30 14:00:23.568366 systemd-networkd[1366]: cilium_host: Gained IPv6LL Jan 30 14:00:23.570873 kubelet[1771]: I0130 14:00:23.569551 1771 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-278k7\" (UniqueName: \"kubernetes.io/projected/ef34f9da-1fd2-4637-b36e-9d579fdc1236-kube-api-access-278k7\") pod \"nginx-deployment-85f456d6dd-nth2q\" (UID: \"ef34f9da-1fd2-4637-b36e-9d579fdc1236\") " pod="default/nginx-deployment-85f456d6dd-nth2q" Jan 30 14:00:23.858605 containerd[1460]: time="2025-01-30T14:00:23.858062981Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:nginx-deployment-85f456d6dd-nth2q,Uid:ef34f9da-1fd2-4637-b36e-9d579fdc1236,Namespace:default,Attempt:0,}" Jan 30 14:00:23.978173 kubelet[1771]: E0130 14:00:23.978094 1771 file.go:104] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Jan 30 14:00:24.005897 kubelet[1771]: E0130 14:00:24.005835 1771 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Jan 30 14:00:24.296849 systemd-networkd[1366]: lxc_health: Link UP Jan 30 14:00:24.297145 systemd-networkd[1366]: lxc_health: Gained carrier Jan 30 14:00:24.450859 kubelet[1771]: E0130 14:00:24.450821 1771 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 67.207.67.3 67.207.67.2 67.207.67.3" Jan 30 14:00:24.784441 systemd-networkd[1366]: cilium_vxlan: Gained IPv6LL Jan 30 14:00:24.973107 systemd-networkd[1366]: lxc9cdf221bf4bf: Link UP Jan 30 14:00:24.980897 kernel: eth0: renamed from tmp8a104 Jan 30 14:00:24.987237 systemd-networkd[1366]: lxc9cdf221bf4bf: Gained carrier Jan 30 14:00:25.008398 kubelet[1771]: E0130 14:00:25.006419 1771 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Jan 30 14:00:25.424528 systemd-networkd[1366]: lxc_health: Gained IPv6LL Jan 30 14:00:25.455023 kubelet[1771]: E0130 14:00:25.454696 1771 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 67.207.67.3 67.207.67.2 67.207.67.3" Jan 30 14:00:26.006901 kubelet[1771]: E0130 14:00:26.006825 1771 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Jan 30 14:00:26.447699 systemd-networkd[1366]: lxc9cdf221bf4bf: Gained IPv6LL Jan 30 14:00:26.464259 kubelet[1771]: E0130 14:00:26.464209 1771 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 67.207.67.3 67.207.67.2 67.207.67.3" Jan 30 14:00:27.007327 kubelet[1771]: E0130 14:00:27.007233 1771 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Jan 30 14:00:28.007461 kubelet[1771]: E0130 14:00:28.007388 1771 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Jan 30 14:00:29.007810 kubelet[1771]: E0130 14:00:29.007739 1771 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Jan 30 14:00:29.610624 containerd[1460]: time="2025-01-30T14:00:29.609855869Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." 
runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Jan 30 14:00:29.611765 containerd[1460]: time="2025-01-30T14:00:29.610580591Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Jan 30 14:00:29.611765 containerd[1460]: time="2025-01-30T14:00:29.610814053Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Jan 30 14:00:29.611765 containerd[1460]: time="2025-01-30T14:00:29.611419326Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Jan 30 14:00:29.644599 systemd[1]: Started cri-containerd-8a1043eec48a1be667b9c5461ee76dbbe979bea393f81314a286a91d75869245.scope - libcontainer container 8a1043eec48a1be667b9c5461ee76dbbe979bea393f81314a286a91d75869245. Jan 30 14:00:29.696581 containerd[1460]: time="2025-01-30T14:00:29.696538779Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:nginx-deployment-85f456d6dd-nth2q,Uid:ef34f9da-1fd2-4637-b36e-9d579fdc1236,Namespace:default,Attempt:0,} returns sandbox id \"8a1043eec48a1be667b9c5461ee76dbbe979bea393f81314a286a91d75869245\"" Jan 30 14:00:29.698898 containerd[1460]: time="2025-01-30T14:00:29.698847712Z" level=info msg="PullImage \"ghcr.io/flatcar/nginx:latest\"" Jan 30 14:00:29.701517 systemd-resolved[1324]: Using degraded feature set TCP instead of UDP for DNS server 67.207.67.2. Jan 30 14:00:30.008854 kubelet[1771]: E0130 14:00:30.008168 1771 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Jan 30 14:00:31.009068 kubelet[1771]: E0130 14:00:31.008980 1771 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Jan 30 14:00:32.009471 kubelet[1771]: E0130 14:00:32.009311 1771 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Jan 30 14:00:32.705278 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount3640814354.mount: Deactivated successfully. 
Jan 30 14:00:33.010346 kubelet[1771]: E0130 14:00:33.009983 1771 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Jan 30 14:00:34.011114 kubelet[1771]: E0130 14:00:34.011022 1771 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Jan 30 14:00:34.046083 containerd[1460]: time="2025-01-30T14:00:34.044763097Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/nginx:latest\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jan 30 14:00:34.046083 containerd[1460]: time="2025-01-30T14:00:34.045975269Z" level=info msg="stop pulling image ghcr.io/flatcar/nginx:latest: active requests=0, bytes read=71015561" Jan 30 14:00:34.046083 containerd[1460]: time="2025-01-30T14:00:34.046024533Z" level=info msg="ImageCreate event name:\"sha256:0dcfd986e814f68db775fba6b61fbaec3761562dc2ae3043d38dbff123e1bb1e\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jan 30 14:00:34.048798 containerd[1460]: time="2025-01-30T14:00:34.048755268Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/nginx@sha256:2ffeb5a7ca68f2017f0bc48251750a6e40fcd3c341b94a22fc7812dcabbb84db\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jan 30 14:00:34.049752 containerd[1460]: time="2025-01-30T14:00:34.049715503Z" level=info msg="Pulled image \"ghcr.io/flatcar/nginx:latest\" with image id \"sha256:0dcfd986e814f68db775fba6b61fbaec3761562dc2ae3043d38dbff123e1bb1e\", repo tag \"ghcr.io/flatcar/nginx:latest\", repo digest \"ghcr.io/flatcar/nginx@sha256:2ffeb5a7ca68f2017f0bc48251750a6e40fcd3c341b94a22fc7812dcabbb84db\", size \"71015439\" in 4.350832115s" Jan 30 14:00:34.049846 containerd[1460]: time="2025-01-30T14:00:34.049757812Z" level=info msg="PullImage \"ghcr.io/flatcar/nginx:latest\" returns image reference \"sha256:0dcfd986e814f68db775fba6b61fbaec3761562dc2ae3043d38dbff123e1bb1e\"" Jan 30 14:00:34.052869 containerd[1460]: time="2025-01-30T14:00:34.052838095Z" level=info msg="CreateContainer within sandbox \"8a1043eec48a1be667b9c5461ee76dbbe979bea393f81314a286a91d75869245\" for container &ContainerMetadata{Name:nginx,Attempt:0,}" Jan 30 14:00:34.069539 containerd[1460]: time="2025-01-30T14:00:34.069489049Z" level=info msg="CreateContainer within sandbox \"8a1043eec48a1be667b9c5461ee76dbbe979bea393f81314a286a91d75869245\" for &ContainerMetadata{Name:nginx,Attempt:0,} returns container id \"4a064a7647ea3f72ae74551ac95b5062ff42540df6cdc02e54a4e5175d1a67f4\"" Jan 30 14:00:34.070673 containerd[1460]: time="2025-01-30T14:00:34.070644616Z" level=info msg="StartContainer for \"4a064a7647ea3f72ae74551ac95b5062ff42540df6cdc02e54a4e5175d1a67f4\"" Jan 30 14:00:34.156276 systemd[1]: run-containerd-runc-k8s.io-4a064a7647ea3f72ae74551ac95b5062ff42540df6cdc02e54a4e5175d1a67f4-runc.A8uls6.mount: Deactivated successfully. Jan 30 14:00:34.168725 systemd[1]: Started cri-containerd-4a064a7647ea3f72ae74551ac95b5062ff42540df6cdc02e54a4e5175d1a67f4.scope - libcontainer container 4a064a7647ea3f72ae74551ac95b5062ff42540df6cdc02e54a4e5175d1a67f4. 
Jan 30 14:00:34.202837 containerd[1460]: time="2025-01-30T14:00:34.202088009Z" level=info msg="StartContainer for \"4a064a7647ea3f72ae74551ac95b5062ff42540df6cdc02e54a4e5175d1a67f4\" returns successfully" Jan 30 14:00:35.011463 kubelet[1771]: E0130 14:00:35.011415 1771 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Jan 30 14:00:36.012461 kubelet[1771]: E0130 14:00:36.012391 1771 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Jan 30 14:00:37.012593 kubelet[1771]: E0130 14:00:37.012531 1771 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Jan 30 14:00:38.013670 kubelet[1771]: E0130 14:00:38.013594 1771 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Jan 30 14:00:38.326563 update_engine[1444]: I20250130 14:00:38.325624 1444 update_attempter.cc:509] Updating boot flags... Jan 30 14:00:38.371195 kernel: BTRFS warning: duplicate device /dev/vda3 devid 1 generation 38 scanned by (udev-worker) (2973) Jan 30 14:00:38.449688 kernel: BTRFS warning: duplicate device /dev/vda3 devid 1 generation 38 scanned by (udev-worker) (2976) Jan 30 14:00:38.532029 kernel: BTRFS warning: duplicate device /dev/vda3 devid 1 generation 38 scanned by (udev-worker) (2976) Jan 30 14:00:39.014455 kubelet[1771]: E0130 14:00:39.014385 1771 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Jan 30 14:00:40.014670 kubelet[1771]: E0130 14:00:40.014583 1771 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Jan 30 14:00:41.014949 kubelet[1771]: E0130 14:00:41.014801 1771 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Jan 30 14:00:42.015493 kubelet[1771]: E0130 14:00:42.015423 1771 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Jan 30 14:00:43.018432 kubelet[1771]: E0130 14:00:43.016578 1771 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Jan 30 14:00:43.103736 kubelet[1771]: I0130 14:00:43.103554 1771 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="default/nginx-deployment-85f456d6dd-nth2q" podStartSLOduration=15.751087536 podStartE2EDuration="20.103491424s" podCreationTimestamp="2025-01-30 14:00:23 +0000 UTC" firstStartedPulling="2025-01-30 14:00:29.698481667 +0000 UTC m=+27.048545971" lastFinishedPulling="2025-01-30 14:00:34.050885555 +0000 UTC m=+31.400949859" observedRunningTime="2025-01-30 14:00:34.506123789 +0000 UTC m=+31.856188102" watchObservedRunningTime="2025-01-30 14:00:43.103491424 +0000 UTC m=+40.453555777" Jan 30 14:00:43.104600 kubelet[1771]: I0130 14:00:43.104140 1771 topology_manager.go:215] "Topology Admit Handler" podUID="c0f669fa-452a-4360-8981-a153ff480b6e" podNamespace="default" podName="nfs-server-provisioner-0" Jan 30 14:00:43.111491 systemd[1]: Created slice kubepods-besteffort-podc0f669fa_452a_4360_8981_a153ff480b6e.slice - libcontainer container kubepods-besteffort-podc0f669fa_452a_4360_8981_a153ff480b6e.slice. 
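Each "Created slice kubepods-besteffort-pod....slice" entry is systemd carving out the pod's cgroup; for a BestEffort pod the slice name is simply the pod UID with its dashes turned into underscores under the kubepods-besteffort prefix (Burstable and Guaranteed pods land under different parent slices). The pattern can be checked against the pod UIDs already seen in this journal:

    # Reproduce the systemd slice name the kubelet uses for a BestEffort pod's
    # cgroup. Sketch of the naming visible in this journal; other QoS classes differ.
    def besteffort_pod_slice(pod_uid: str) -> str:
        return "kubepods-besteffort-pod" + pod_uid.replace("-", "_") + ".slice"

    print(besteffort_pod_slice("ef34f9da-1fd2-4637-b36e-9d579fdc1236"))  # nginx pod
    print(besteffort_pod_slice("c0f669fa-452a-4360-8981-a153ff480b6e"))  # nfs-server-provisioner-0
    # kubepods-besteffort-podef34f9da_1fd2_4637_b36e_9d579fdc1236.slice
    # kubepods-besteffort-podc0f669fa_452a_4360_8981_a153ff480b6e.slice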
Jan 30 14:00:43.211623 kubelet[1771]: I0130 14:00:43.211552 1771 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-x7knv\" (UniqueName: \"kubernetes.io/projected/c0f669fa-452a-4360-8981-a153ff480b6e-kube-api-access-x7knv\") pod \"nfs-server-provisioner-0\" (UID: \"c0f669fa-452a-4360-8981-a153ff480b6e\") " pod="default/nfs-server-provisioner-0" Jan 30 14:00:43.211888 kubelet[1771]: I0130 14:00:43.211856 1771 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"data\" (UniqueName: \"kubernetes.io/empty-dir/c0f669fa-452a-4360-8981-a153ff480b6e-data\") pod \"nfs-server-provisioner-0\" (UID: \"c0f669fa-452a-4360-8981-a153ff480b6e\") " pod="default/nfs-server-provisioner-0" Jan 30 14:00:43.416163 containerd[1460]: time="2025-01-30T14:00:43.415692367Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:nfs-server-provisioner-0,Uid:c0f669fa-452a-4360-8981-a153ff480b6e,Namespace:default,Attempt:0,}" Jan 30 14:00:43.472945 systemd-networkd[1366]: lxce39426819295: Link UP Jan 30 14:00:43.486196 kernel: eth0: renamed from tmp5ad3e Jan 30 14:00:43.490374 systemd-networkd[1366]: lxce39426819295: Gained carrier Jan 30 14:00:43.704800 containerd[1460]: time="2025-01-30T14:00:43.704334883Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Jan 30 14:00:43.705404 containerd[1460]: time="2025-01-30T14:00:43.705187669Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Jan 30 14:00:43.706106 containerd[1460]: time="2025-01-30T14:00:43.705406700Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Jan 30 14:00:43.706495 containerd[1460]: time="2025-01-30T14:00:43.706416873Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Jan 30 14:00:43.745624 systemd[1]: Started cri-containerd-5ad3e74ebe1c5f51b007417ef2c17a8649634d5c886221cc5ee5cee4b1d072f2.scope - libcontainer container 5ad3e74ebe1c5f51b007417ef2c17a8649634d5c886221cc5ee5cee4b1d072f2. Jan 30 14:00:43.801692 containerd[1460]: time="2025-01-30T14:00:43.801637503Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:nfs-server-provisioner-0,Uid:c0f669fa-452a-4360-8981-a153ff480b6e,Namespace:default,Attempt:0,} returns sandbox id \"5ad3e74ebe1c5f51b007417ef2c17a8649634d5c886221cc5ee5cee4b1d072f2\"" Jan 30 14:00:43.804052 containerd[1460]: time="2025-01-30T14:00:43.804013309Z" level=info msg="PullImage \"registry.k8s.io/sig-storage/nfs-provisioner:v4.0.8\"" Jan 30 14:00:43.978692 kubelet[1771]: E0130 14:00:43.978534 1771 file.go:104] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Jan 30 14:00:44.017432 kubelet[1771]: E0130 14:00:44.017361 1771 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Jan 30 14:00:44.687495 systemd-networkd[1366]: lxce39426819295: Gained IPv6LL Jan 30 14:00:45.017971 kubelet[1771]: E0130 14:00:45.017599 1771 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Jan 30 14:00:45.861593 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount2168212622.mount: Deactivated successfully. 
Jan 30 14:00:46.018755 kubelet[1771]: E0130 14:00:46.018695 1771 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Jan 30 14:00:47.019143 kubelet[1771]: E0130 14:00:47.019060 1771 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Jan 30 14:00:48.019756 kubelet[1771]: E0130 14:00:48.019698 1771 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Jan 30 14:00:48.053950 containerd[1460]: time="2025-01-30T14:00:48.052576504Z" level=info msg="ImageCreate event name:\"registry.k8s.io/sig-storage/nfs-provisioner:v4.0.8\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jan 30 14:00:48.055872 containerd[1460]: time="2025-01-30T14:00:48.055814821Z" level=info msg="stop pulling image registry.k8s.io/sig-storage/nfs-provisioner:v4.0.8: active requests=0, bytes read=91039406" Jan 30 14:00:48.057319 containerd[1460]: time="2025-01-30T14:00:48.057262569Z" level=info msg="ImageCreate event name:\"sha256:fd0b16f70b66b72bcb2f91d556fa33eba02729c44ffc5f2c16130e7f9fbed3c4\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jan 30 14:00:48.060418 containerd[1460]: time="2025-01-30T14:00:48.060353196Z" level=info msg="ImageCreate event name:\"registry.k8s.io/sig-storage/nfs-provisioner@sha256:c825f3d5e28bde099bd7a3daace28772d412c9157ad47fa752a9ad0baafc118d\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jan 30 14:00:48.061680 containerd[1460]: time="2025-01-30T14:00:48.061622195Z" level=info msg="Pulled image \"registry.k8s.io/sig-storage/nfs-provisioner:v4.0.8\" with image id \"sha256:fd0b16f70b66b72bcb2f91d556fa33eba02729c44ffc5f2c16130e7f9fbed3c4\", repo tag \"registry.k8s.io/sig-storage/nfs-provisioner:v4.0.8\", repo digest \"registry.k8s.io/sig-storage/nfs-provisioner@sha256:c825f3d5e28bde099bd7a3daace28772d412c9157ad47fa752a9ad0baafc118d\", size \"91036984\" in 4.257561056s" Jan 30 14:00:48.061680 containerd[1460]: time="2025-01-30T14:00:48.061685270Z" level=info msg="PullImage \"registry.k8s.io/sig-storage/nfs-provisioner:v4.0.8\" returns image reference \"sha256:fd0b16f70b66b72bcb2f91d556fa33eba02729c44ffc5f2c16130e7f9fbed3c4\"" Jan 30 14:00:48.064912 containerd[1460]: time="2025-01-30T14:00:48.064865304Z" level=info msg="CreateContainer within sandbox \"5ad3e74ebe1c5f51b007417ef2c17a8649634d5c886221cc5ee5cee4b1d072f2\" for container &ContainerMetadata{Name:nfs-server-provisioner,Attempt:0,}" Jan 30 14:00:48.084061 containerd[1460]: time="2025-01-30T14:00:48.083977179Z" level=info msg="CreateContainer within sandbox \"5ad3e74ebe1c5f51b007417ef2c17a8649634d5c886221cc5ee5cee4b1d072f2\" for &ContainerMetadata{Name:nfs-server-provisioner,Attempt:0,} returns container id \"1755876b6f23b66700733fd7ed7f43017fb462c6b05a00f346744505ffb6fa2e\"" Jan 30 14:00:48.084913 containerd[1460]: time="2025-01-30T14:00:48.084828420Z" level=info msg="StartContainer for \"1755876b6f23b66700733fd7ed7f43017fb462c6b05a00f346744505ffb6fa2e\"" Jan 30 14:00:48.116621 systemd[1]: run-containerd-runc-k8s.io-1755876b6f23b66700733fd7ed7f43017fb462c6b05a00f346744505ffb6fa2e-runc.Nn3LnG.mount: Deactivated successfully. Jan 30 14:00:48.128536 systemd[1]: Started cri-containerd-1755876b6f23b66700733fd7ed7f43017fb462c6b05a00f346744505ffb6fa2e.scope - libcontainer container 1755876b6f23b66700733fd7ed7f43017fb462c6b05a00f346744505ffb6fa2e. 
Jan 30 14:00:48.160983 containerd[1460]: time="2025-01-30T14:00:48.160920871Z" level=info msg="StartContainer for \"1755876b6f23b66700733fd7ed7f43017fb462c6b05a00f346744505ffb6fa2e\" returns successfully" Jan 30 14:00:48.552012 kubelet[1771]: I0130 14:00:48.551938 1771 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="default/nfs-server-provisioner-0" podStartSLOduration=1.292439562 podStartE2EDuration="5.551915251s" podCreationTimestamp="2025-01-30 14:00:43 +0000 UTC" firstStartedPulling="2025-01-30 14:00:43.803477637 +0000 UTC m=+41.153541942" lastFinishedPulling="2025-01-30 14:00:48.062953318 +0000 UTC m=+45.413017631" observedRunningTime="2025-01-30 14:00:48.550168831 +0000 UTC m=+45.900233143" watchObservedRunningTime="2025-01-30 14:00:48.551915251 +0000 UTC m=+45.901979554" Jan 30 14:00:49.020747 kubelet[1771]: E0130 14:00:49.020681 1771 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Jan 30 14:00:50.021810 kubelet[1771]: E0130 14:00:50.021755 1771 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Jan 30 14:00:51.022978 kubelet[1771]: E0130 14:00:51.022888 1771 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Jan 30 14:00:52.023454 kubelet[1771]: E0130 14:00:52.023386 1771 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Jan 30 14:00:53.024243 kubelet[1771]: E0130 14:00:53.024170 1771 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Jan 30 14:00:54.024488 kubelet[1771]: E0130 14:00:54.024362 1771 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Jan 30 14:00:55.025616 kubelet[1771]: E0130 14:00:55.025541 1771 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Jan 30 14:00:56.026607 kubelet[1771]: E0130 14:00:56.026531 1771 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Jan 30 14:00:57.027610 kubelet[1771]: E0130 14:00:57.027516 1771 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Jan 30 14:00:58.028411 kubelet[1771]: E0130 14:00:58.028346 1771 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Jan 30 14:00:58.187349 kubelet[1771]: I0130 14:00:58.187239 1771 topology_manager.go:215] "Topology Admit Handler" podUID="e7045669-b66e-49cb-acde-cc1c2f9bcd2e" podNamespace="default" podName="test-pod-1" Jan 30 14:00:58.195277 systemd[1]: Created slice kubepods-besteffort-pode7045669_b66e_49cb_acde_cc1c2f9bcd2e.slice - libcontainer container kubepods-besteffort-pode7045669_b66e_49cb_acde_cc1c2f9bcd2e.slice. 
Jan 30 14:00:58.217749 kubelet[1771]: I0130 14:00:58.217697 1771 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-v2g5s\" (UniqueName: \"kubernetes.io/projected/e7045669-b66e-49cb-acde-cc1c2f9bcd2e-kube-api-access-v2g5s\") pod \"test-pod-1\" (UID: \"e7045669-b66e-49cb-acde-cc1c2f9bcd2e\") " pod="default/test-pod-1" Jan 30 14:00:58.217940 kubelet[1771]: I0130 14:00:58.217765 1771 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"pvc-7c8bc4d1-9edd-473f-b173-46a59b3cf94d\" (UniqueName: \"kubernetes.io/nfs/e7045669-b66e-49cb-acde-cc1c2f9bcd2e-pvc-7c8bc4d1-9edd-473f-b173-46a59b3cf94d\") pod \"test-pod-1\" (UID: \"e7045669-b66e-49cb-acde-cc1c2f9bcd2e\") " pod="default/test-pod-1" Jan 30 14:00:58.353467 kernel: FS-Cache: Loaded Jan 30 14:00:58.434580 kernel: RPC: Registered named UNIX socket transport module. Jan 30 14:00:58.434732 kernel: RPC: Registered udp transport module. Jan 30 14:00:58.434756 kernel: RPC: Registered tcp transport module. Jan 30 14:00:58.434774 kernel: RPC: Registered tcp-with-tls transport module. Jan 30 14:00:58.435586 kernel: RPC: Registered tcp NFSv4.1 backchannel transport module. Jan 30 14:00:58.756509 kernel: NFS: Registering the id_resolver key type Jan 30 14:00:58.758424 kernel: Key type id_resolver registered Jan 30 14:00:58.761005 kernel: Key type id_legacy registered Jan 30 14:00:58.801466 nfsidmap[3164]: nss_getpwnam: name 'root@nfs-server-provisioner.default.svc.cluster.local' does not map into domain '3.0-f-9922ae6042' Jan 30 14:00:58.805867 nfsidmap[3165]: nss_name_to_gid: name 'root@nfs-server-provisioner.default.svc.cluster.local' does not map into domain '3.0-f-9922ae6042' Jan 30 14:00:59.029582 kubelet[1771]: E0130 14:00:59.029420 1771 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Jan 30 14:00:59.099566 containerd[1460]: time="2025-01-30T14:00:59.099181049Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:test-pod-1,Uid:e7045669-b66e-49cb-acde-cc1c2f9bcd2e,Namespace:default,Attempt:0,}" Jan 30 14:00:59.139919 systemd-networkd[1366]: lxc140755cd4b2f: Link UP Jan 30 14:00:59.145654 kernel: eth0: renamed from tmpaa646 Jan 30 14:00:59.155194 systemd-networkd[1366]: lxc140755cd4b2f: Gained carrier Jan 30 14:00:59.341699 containerd[1460]: time="2025-01-30T14:00:59.341120462Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Jan 30 14:00:59.341699 containerd[1460]: time="2025-01-30T14:00:59.341282115Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Jan 30 14:00:59.341699 containerd[1460]: time="2025-01-30T14:00:59.341297260Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Jan 30 14:00:59.341699 containerd[1460]: time="2025-01-30T14:00:59.341414676Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Jan 30 14:00:59.380632 systemd[1]: Started cri-containerd-aa64636efc69e7e4a8aa4944955d8877e88a415a559eafbff12f1e732e199d64.scope - libcontainer container aa64636efc69e7e4a8aa4944955d8877e88a415a559eafbff12f1e732e199d64. 
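The nfsidmap messages above appear while the test pod's NFS volume is being mounted: NFSv4 transmits owners as user@domain strings, and the client only maps them to local accounts when the domain part matches the node's idmapping domain. Here nfs-server-provisioner.default.svc.cluster.local does not match the node's domain 3.0-f-9922ae6042 (which looks hostname-derived), so the mapping falls back, typically to the nobody user. A hedged sketch of the comparison being reported:

    # Sketch of the NFSv4 idmapping check behind the nfsidmap lines above:
    # the principal's domain must equal the local idmapping domain.
    def maps_into_domain(principal: str, local_domain: str) -> bool:
        _, _, domain = principal.partition("@")
        return domain.lower() == local_domain.lower()

    print(maps_into_domain("root@nfs-server-provisioner.default.svc.cluster.local",
                           "3.0-f-9922ae6042"))
    # False, i.e. "does not map into domain"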
Jan 30 14:00:59.438192 containerd[1460]: time="2025-01-30T14:00:59.438128330Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:test-pod-1,Uid:e7045669-b66e-49cb-acde-cc1c2f9bcd2e,Namespace:default,Attempt:0,} returns sandbox id \"aa64636efc69e7e4a8aa4944955d8877e88a415a559eafbff12f1e732e199d64\"" Jan 30 14:00:59.441487 containerd[1460]: time="2025-01-30T14:00:59.441434971Z" level=info msg="PullImage \"ghcr.io/flatcar/nginx:latest\"" Jan 30 14:00:59.813992 containerd[1460]: time="2025-01-30T14:00:59.813927663Z" level=info msg="ImageUpdate event name:\"ghcr.io/flatcar/nginx:latest\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jan 30 14:00:59.814837 containerd[1460]: time="2025-01-30T14:00:59.814779044Z" level=info msg="stop pulling image ghcr.io/flatcar/nginx:latest: active requests=0, bytes read=61" Jan 30 14:00:59.818042 containerd[1460]: time="2025-01-30T14:00:59.817986908Z" level=info msg="Pulled image \"ghcr.io/flatcar/nginx:latest\" with image id \"sha256:0dcfd986e814f68db775fba6b61fbaec3761562dc2ae3043d38dbff123e1bb1e\", repo tag \"ghcr.io/flatcar/nginx:latest\", repo digest \"ghcr.io/flatcar/nginx@sha256:2ffeb5a7ca68f2017f0bc48251750a6e40fcd3c341b94a22fc7812dcabbb84db\", size \"71015439\" in 376.500276ms" Jan 30 14:00:59.818042 containerd[1460]: time="2025-01-30T14:00:59.818042416Z" level=info msg="PullImage \"ghcr.io/flatcar/nginx:latest\" returns image reference \"sha256:0dcfd986e814f68db775fba6b61fbaec3761562dc2ae3043d38dbff123e1bb1e\"" Jan 30 14:00:59.820591 containerd[1460]: time="2025-01-30T14:00:59.820548132Z" level=info msg="CreateContainer within sandbox \"aa64636efc69e7e4a8aa4944955d8877e88a415a559eafbff12f1e732e199d64\" for container &ContainerMetadata{Name:test,Attempt:0,}" Jan 30 14:00:59.839012 containerd[1460]: time="2025-01-30T14:00:59.838956402Z" level=info msg="CreateContainer within sandbox \"aa64636efc69e7e4a8aa4944955d8877e88a415a559eafbff12f1e732e199d64\" for &ContainerMetadata{Name:test,Attempt:0,} returns container id \"b44aeb086adea542b16cb12bef5e134c0ead1f39eee3fa36c279297e27d7bbbe\"" Jan 30 14:00:59.839959 containerd[1460]: time="2025-01-30T14:00:59.839913545Z" level=info msg="StartContainer for \"b44aeb086adea542b16cb12bef5e134c0ead1f39eee3fa36c279297e27d7bbbe\"" Jan 30 14:00:59.874596 systemd[1]: Started cri-containerd-b44aeb086adea542b16cb12bef5e134c0ead1f39eee3fa36c279297e27d7bbbe.scope - libcontainer container b44aeb086adea542b16cb12bef5e134c0ead1f39eee3fa36c279297e27d7bbbe. 
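Note the contrast in pull times: the first pull of ghcr.io/flatcar/nginx:latest for the deployment took about 4.35s, while this second pull for test-pod-1 finishes in roughly 376ms with only 61 bytes read, evidently because the layers are already in the containerd content store and only the manifest needs re-resolving (hence the ImageUpdate rather than ImageCreate event). A small sketch for pulling such durations out of journal text like the lines above; the regex only covers the plain Ns/Nms durations seen here:

    # Extract image names and pull durations from containerd journal text.
    import re

    PULL_RE = re.compile(r'Pulled image \\"(?P<image>[^"\\]+)\\".*? in (?P<dur>[0-9.]+m?s)')

    def pull_durations(journal_text: str):
        return [(m.group("image"), m.group("dur"))
                for m in PULL_RE.finditer(journal_text)]

    # Fed this section's journal text, it yields pairs such as
    # ('registry.k8s.io/pause:3.8', '872.621348ms') and
    # ('ghcr.io/flatcar/nginx:latest', '376.500276ms').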
Jan 30 14:00:59.909963 containerd[1460]: time="2025-01-30T14:00:59.909369278Z" level=info msg="StartContainer for \"b44aeb086adea542b16cb12bef5e134c0ead1f39eee3fa36c279297e27d7bbbe\" returns successfully" Jan 30 14:01:00.030219 kubelet[1771]: E0130 14:01:00.030150 1771 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Jan 30 14:01:00.583415 kubelet[1771]: I0130 14:01:00.583328 1771 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="default/test-pod-1" podStartSLOduration=17.205201745 podStartE2EDuration="17.58327796s" podCreationTimestamp="2025-01-30 14:00:43 +0000 UTC" firstStartedPulling="2025-01-30 14:00:59.440760761 +0000 UTC m=+56.790825054" lastFinishedPulling="2025-01-30 14:00:59.818836963 +0000 UTC m=+57.168901269" observedRunningTime="2025-01-30 14:01:00.583231735 +0000 UTC m=+57.933296049" watchObservedRunningTime="2025-01-30 14:01:00.58327796 +0000 UTC m=+57.933342264" Jan 30 14:01:00.625446 systemd-networkd[1366]: lxc140755cd4b2f: Gained IPv6LL Jan 30 14:01:01.031371 kubelet[1771]: E0130 14:01:01.031282 1771 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Jan 30 14:01:02.032368 kubelet[1771]: E0130 14:01:02.032262 1771 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Jan 30 14:01:03.032879 kubelet[1771]: E0130 14:01:03.032784 1771 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Jan 30 14:01:03.977953 kubelet[1771]: E0130 14:01:03.977886 1771 file.go:104] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Jan 30 14:01:04.033519 kubelet[1771]: E0130 14:01:04.033447 1771 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Jan 30 14:01:05.034639 kubelet[1771]: E0130 14:01:05.034568 1771 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Jan 30 14:01:06.035458 kubelet[1771]: E0130 14:01:06.035392 1771 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Jan 30 14:01:07.036467 kubelet[1771]: E0130 14:01:07.036403 1771 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Jan 30 14:01:07.119610 systemd[1]: run-containerd-runc-k8s.io-621835f2ca6d4ea049e36a4ab8cf0d0fe137f7aa8baa077c2d83a11262704f94-runc.bSIt2S.mount: Deactivated successfully. 
Jan 30 14:01:07.135726 containerd[1460]: time="2025-01-30T14:01:07.135615783Z" level=error msg="failed to reload cni configuration after receiving fs change event(REMOVE \"/etc/cni/net.d/05-cilium.conf\")" error="cni config load failed: no network config found in /etc/cni/net.d: cni plugin not initialized: failed to load cni config" Jan 30 14:01:07.143885 containerd[1460]: time="2025-01-30T14:01:07.143790874Z" level=info msg="StopContainer for \"621835f2ca6d4ea049e36a4ab8cf0d0fe137f7aa8baa077c2d83a11262704f94\" with timeout 2 (s)" Jan 30 14:01:07.144383 containerd[1460]: time="2025-01-30T14:01:07.144353809Z" level=info msg="Stop container \"621835f2ca6d4ea049e36a4ab8cf0d0fe137f7aa8baa077c2d83a11262704f94\" with signal terminated" Jan 30 14:01:07.154030 systemd-networkd[1366]: lxc_health: Link DOWN Jan 30 14:01:07.154042 systemd-networkd[1366]: lxc_health: Lost carrier Jan 30 14:01:07.178269 systemd[1]: cri-containerd-621835f2ca6d4ea049e36a4ab8cf0d0fe137f7aa8baa077c2d83a11262704f94.scope: Deactivated successfully. Jan 30 14:01:07.179278 systemd[1]: cri-containerd-621835f2ca6d4ea049e36a4ab8cf0d0fe137f7aa8baa077c2d83a11262704f94.scope: Consumed 8.461s CPU time. Jan 30 14:01:07.206355 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-621835f2ca6d4ea049e36a4ab8cf0d0fe137f7aa8baa077c2d83a11262704f94-rootfs.mount: Deactivated successfully. Jan 30 14:01:07.287651 containerd[1460]: time="2025-01-30T14:01:07.287295058Z" level=info msg="shim disconnected" id=621835f2ca6d4ea049e36a4ab8cf0d0fe137f7aa8baa077c2d83a11262704f94 namespace=k8s.io Jan 30 14:01:07.287651 containerd[1460]: time="2025-01-30T14:01:07.287370867Z" level=warning msg="cleaning up after shim disconnected" id=621835f2ca6d4ea049e36a4ab8cf0d0fe137f7aa8baa077c2d83a11262704f94 namespace=k8s.io Jan 30 14:01:07.287651 containerd[1460]: time="2025-01-30T14:01:07.287380012Z" level=info msg="cleaning up dead shim" namespace=k8s.io Jan 30 14:01:07.306881 containerd[1460]: time="2025-01-30T14:01:07.306667386Z" level=info msg="StopContainer for \"621835f2ca6d4ea049e36a4ab8cf0d0fe137f7aa8baa077c2d83a11262704f94\" returns successfully" Jan 30 14:01:07.307924 containerd[1460]: time="2025-01-30T14:01:07.307694666Z" level=info msg="StopPodSandbox for \"cdf396d0bb45d4c65cf5ed4b99ef48154f134016b9a9049eefed55d9a22dd6bd\"" Jan 30 14:01:07.307924 containerd[1460]: time="2025-01-30T14:01:07.307742983Z" level=info msg="Container to stop \"89da84638ba9d6330c890e164e82bffcea300c2309ef72567ebb615a5258849d\" must be in running or unknown state, current state \"CONTAINER_EXITED\"" Jan 30 14:01:07.307924 containerd[1460]: time="2025-01-30T14:01:07.307755398Z" level=info msg="Container to stop \"c667fa1839ee923826339272f55fb8e8b4048ab0737f1ea3ebeb89a55d506f23\" must be in running or unknown state, current state \"CONTAINER_EXITED\"" Jan 30 14:01:07.307924 containerd[1460]: time="2025-01-30T14:01:07.307764925Z" level=info msg="Container to stop \"fdef02c28193b04e7c254f9a5a2473e99a0021facf5dd11f32bf8ff082da00e0\" must be in running or unknown state, current state \"CONTAINER_EXITED\"" Jan 30 14:01:07.307924 containerd[1460]: time="2025-01-30T14:01:07.307797576Z" level=info msg="Container to stop \"5f0f58192f2fc81bcd8efa1403f68650ddc740c74823e7dab5edd46a58927b38\" must be in running or unknown state, current state \"CONTAINER_EXITED\"" Jan 30 14:01:07.307924 containerd[1460]: time="2025-01-30T14:01:07.307812042Z" level=info msg="Container to stop \"621835f2ca6d4ea049e36a4ab8cf0d0fe137f7aa8baa077c2d83a11262704f94\" must be in running or unknown state, 
current state \"CONTAINER_EXITED\"" Jan 30 14:01:07.310593 systemd[1]: run-containerd-io.containerd.grpc.v1.cri-sandboxes-cdf396d0bb45d4c65cf5ed4b99ef48154f134016b9a9049eefed55d9a22dd6bd-shm.mount: Deactivated successfully. Jan 30 14:01:07.320019 systemd[1]: cri-containerd-cdf396d0bb45d4c65cf5ed4b99ef48154f134016b9a9049eefed55d9a22dd6bd.scope: Deactivated successfully. Jan 30 14:01:07.349606 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-cdf396d0bb45d4c65cf5ed4b99ef48154f134016b9a9049eefed55d9a22dd6bd-rootfs.mount: Deactivated successfully. Jan 30 14:01:07.354819 containerd[1460]: time="2025-01-30T14:01:07.354551895Z" level=info msg="shim disconnected" id=cdf396d0bb45d4c65cf5ed4b99ef48154f134016b9a9049eefed55d9a22dd6bd namespace=k8s.io Jan 30 14:01:07.354819 containerd[1460]: time="2025-01-30T14:01:07.354602177Z" level=warning msg="cleaning up after shim disconnected" id=cdf396d0bb45d4c65cf5ed4b99ef48154f134016b9a9049eefed55d9a22dd6bd namespace=k8s.io Jan 30 14:01:07.354819 containerd[1460]: time="2025-01-30T14:01:07.354610314Z" level=info msg="cleaning up dead shim" namespace=k8s.io Jan 30 14:01:07.372032 containerd[1460]: time="2025-01-30T14:01:07.371653679Z" level=info msg="TearDown network for sandbox \"cdf396d0bb45d4c65cf5ed4b99ef48154f134016b9a9049eefed55d9a22dd6bd\" successfully" Jan 30 14:01:07.372032 containerd[1460]: time="2025-01-30T14:01:07.371712076Z" level=info msg="StopPodSandbox for \"cdf396d0bb45d4c65cf5ed4b99ef48154f134016b9a9049eefed55d9a22dd6bd\" returns successfully" Jan 30 14:01:07.484360 kubelet[1771]: I0130 14:01:07.484082 1771 reconciler_common.go:161] "operationExecutor.UnmountVolume started for volume \"hostproc\" (UniqueName: \"kubernetes.io/host-path/c2ee125a-b0d5-458f-aaa5-32012308f211-hostproc\") pod \"c2ee125a-b0d5-458f-aaa5-32012308f211\" (UID: \"c2ee125a-b0d5-458f-aaa5-32012308f211\") " Jan 30 14:01:07.484360 kubelet[1771]: I0130 14:01:07.484168 1771 reconciler_common.go:161] "operationExecutor.UnmountVolume started for volume \"clustermesh-secrets\" (UniqueName: \"kubernetes.io/secret/c2ee125a-b0d5-458f-aaa5-32012308f211-clustermesh-secrets\") pod \"c2ee125a-b0d5-458f-aaa5-32012308f211\" (UID: \"c2ee125a-b0d5-458f-aaa5-32012308f211\") " Jan 30 14:01:07.484360 kubelet[1771]: I0130 14:01:07.484198 1771 reconciler_common.go:161] "operationExecutor.UnmountVolume started for volume \"cilium-run\" (UniqueName: \"kubernetes.io/host-path/c2ee125a-b0d5-458f-aaa5-32012308f211-cilium-run\") pod \"c2ee125a-b0d5-458f-aaa5-32012308f211\" (UID: \"c2ee125a-b0d5-458f-aaa5-32012308f211\") " Jan 30 14:01:07.484360 kubelet[1771]: I0130 14:01:07.484225 1771 reconciler_common.go:161] "operationExecutor.UnmountVolume started for volume \"hubble-tls\" (UniqueName: \"kubernetes.io/projected/c2ee125a-b0d5-458f-aaa5-32012308f211-hubble-tls\") pod \"c2ee125a-b0d5-458f-aaa5-32012308f211\" (UID: \"c2ee125a-b0d5-458f-aaa5-32012308f211\") " Jan 30 14:01:07.484360 kubelet[1771]: I0130 14:01:07.484247 1771 reconciler_common.go:161] "operationExecutor.UnmountVolume started for volume \"cni-path\" (UniqueName: \"kubernetes.io/host-path/c2ee125a-b0d5-458f-aaa5-32012308f211-cni-path\") pod \"c2ee125a-b0d5-458f-aaa5-32012308f211\" (UID: \"c2ee125a-b0d5-458f-aaa5-32012308f211\") " Jan 30 14:01:07.484360 kubelet[1771]: I0130 14:01:07.484271 1771 reconciler_common.go:161] "operationExecutor.UnmountVolume started for volume \"xtables-lock\" (UniqueName: \"kubernetes.io/host-path/c2ee125a-b0d5-458f-aaa5-32012308f211-xtables-lock\") pod \"c2ee125a-b0d5-458f-aaa5-32012308f211\" (UID: 
\"c2ee125a-b0d5-458f-aaa5-32012308f211\") " Jan 30 14:01:07.484942 kubelet[1771]: I0130 14:01:07.484296 1771 reconciler_common.go:161] "operationExecutor.UnmountVolume started for volume \"host-proc-sys-net\" (UniqueName: \"kubernetes.io/host-path/c2ee125a-b0d5-458f-aaa5-32012308f211-host-proc-sys-net\") pod \"c2ee125a-b0d5-458f-aaa5-32012308f211\" (UID: \"c2ee125a-b0d5-458f-aaa5-32012308f211\") " Jan 30 14:01:07.484942 kubelet[1771]: I0130 14:01:07.484366 1771 reconciler_common.go:161] "operationExecutor.UnmountVolume started for volume \"cilium-config-path\" (UniqueName: \"kubernetes.io/configmap/c2ee125a-b0d5-458f-aaa5-32012308f211-cilium-config-path\") pod \"c2ee125a-b0d5-458f-aaa5-32012308f211\" (UID: \"c2ee125a-b0d5-458f-aaa5-32012308f211\") " Jan 30 14:01:07.484942 kubelet[1771]: I0130 14:01:07.484388 1771 reconciler_common.go:161] "operationExecutor.UnmountVolume started for volume \"cilium-cgroup\" (UniqueName: \"kubernetes.io/host-path/c2ee125a-b0d5-458f-aaa5-32012308f211-cilium-cgroup\") pod \"c2ee125a-b0d5-458f-aaa5-32012308f211\" (UID: \"c2ee125a-b0d5-458f-aaa5-32012308f211\") " Jan 30 14:01:07.484942 kubelet[1771]: I0130 14:01:07.484418 1771 reconciler_common.go:161] "operationExecutor.UnmountVolume started for volume \"host-proc-sys-kernel\" (UniqueName: \"kubernetes.io/host-path/c2ee125a-b0d5-458f-aaa5-32012308f211-host-proc-sys-kernel\") pod \"c2ee125a-b0d5-458f-aaa5-32012308f211\" (UID: \"c2ee125a-b0d5-458f-aaa5-32012308f211\") " Jan 30 14:01:07.484942 kubelet[1771]: I0130 14:01:07.484468 1771 reconciler_common.go:161] "operationExecutor.UnmountVolume started for volume \"bpf-maps\" (UniqueName: \"kubernetes.io/host-path/c2ee125a-b0d5-458f-aaa5-32012308f211-bpf-maps\") pod \"c2ee125a-b0d5-458f-aaa5-32012308f211\" (UID: \"c2ee125a-b0d5-458f-aaa5-32012308f211\") " Jan 30 14:01:07.484942 kubelet[1771]: I0130 14:01:07.484503 1771 reconciler_common.go:161] "operationExecutor.UnmountVolume started for volume \"kube-api-access-jh4dd\" (UniqueName: \"kubernetes.io/projected/c2ee125a-b0d5-458f-aaa5-32012308f211-kube-api-access-jh4dd\") pod \"c2ee125a-b0d5-458f-aaa5-32012308f211\" (UID: \"c2ee125a-b0d5-458f-aaa5-32012308f211\") " Jan 30 14:01:07.486088 kubelet[1771]: I0130 14:01:07.484526 1771 reconciler_common.go:161] "operationExecutor.UnmountVolume started for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/c2ee125a-b0d5-458f-aaa5-32012308f211-lib-modules\") pod \"c2ee125a-b0d5-458f-aaa5-32012308f211\" (UID: \"c2ee125a-b0d5-458f-aaa5-32012308f211\") " Jan 30 14:01:07.486088 kubelet[1771]: I0130 14:01:07.484555 1771 reconciler_common.go:161] "operationExecutor.UnmountVolume started for volume \"etc-cni-netd\" (UniqueName: \"kubernetes.io/host-path/c2ee125a-b0d5-458f-aaa5-32012308f211-etc-cni-netd\") pod \"c2ee125a-b0d5-458f-aaa5-32012308f211\" (UID: \"c2ee125a-b0d5-458f-aaa5-32012308f211\") " Jan 30 14:01:07.486088 kubelet[1771]: I0130 14:01:07.484694 1771 operation_generator.go:887] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/c2ee125a-b0d5-458f-aaa5-32012308f211-etc-cni-netd" (OuterVolumeSpecName: "etc-cni-netd") pod "c2ee125a-b0d5-458f-aaa5-32012308f211" (UID: "c2ee125a-b0d5-458f-aaa5-32012308f211"). InnerVolumeSpecName "etc-cni-netd". 
PluginName "kubernetes.io/host-path", VolumeGidValue "" Jan 30 14:01:07.486088 kubelet[1771]: I0130 14:01:07.484754 1771 operation_generator.go:887] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/c2ee125a-b0d5-458f-aaa5-32012308f211-hostproc" (OuterVolumeSpecName: "hostproc") pod "c2ee125a-b0d5-458f-aaa5-32012308f211" (UID: "c2ee125a-b0d5-458f-aaa5-32012308f211"). InnerVolumeSpecName "hostproc". PluginName "kubernetes.io/host-path", VolumeGidValue "" Jan 30 14:01:07.486088 kubelet[1771]: I0130 14:01:07.485380 1771 operation_generator.go:887] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/c2ee125a-b0d5-458f-aaa5-32012308f211-cilium-run" (OuterVolumeSpecName: "cilium-run") pod "c2ee125a-b0d5-458f-aaa5-32012308f211" (UID: "c2ee125a-b0d5-458f-aaa5-32012308f211"). InnerVolumeSpecName "cilium-run". PluginName "kubernetes.io/host-path", VolumeGidValue "" Jan 30 14:01:07.488822 kubelet[1771]: I0130 14:01:07.488420 1771 operation_generator.go:887] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/c2ee125a-b0d5-458f-aaa5-32012308f211-cilium-cgroup" (OuterVolumeSpecName: "cilium-cgroup") pod "c2ee125a-b0d5-458f-aaa5-32012308f211" (UID: "c2ee125a-b0d5-458f-aaa5-32012308f211"). InnerVolumeSpecName "cilium-cgroup". PluginName "kubernetes.io/host-path", VolumeGidValue "" Jan 30 14:01:07.488822 kubelet[1771]: I0130 14:01:07.488480 1771 operation_generator.go:887] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/c2ee125a-b0d5-458f-aaa5-32012308f211-host-proc-sys-kernel" (OuterVolumeSpecName: "host-proc-sys-kernel") pod "c2ee125a-b0d5-458f-aaa5-32012308f211" (UID: "c2ee125a-b0d5-458f-aaa5-32012308f211"). InnerVolumeSpecName "host-proc-sys-kernel". PluginName "kubernetes.io/host-path", VolumeGidValue "" Jan 30 14:01:07.488822 kubelet[1771]: I0130 14:01:07.488501 1771 operation_generator.go:887] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/c2ee125a-b0d5-458f-aaa5-32012308f211-bpf-maps" (OuterVolumeSpecName: "bpf-maps") pod "c2ee125a-b0d5-458f-aaa5-32012308f211" (UID: "c2ee125a-b0d5-458f-aaa5-32012308f211"). InnerVolumeSpecName "bpf-maps". PluginName "kubernetes.io/host-path", VolumeGidValue "" Jan 30 14:01:07.488822 kubelet[1771]: I0130 14:01:07.488583 1771 operation_generator.go:887] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/c2ee125a-b0d5-458f-aaa5-32012308f211-cni-path" (OuterVolumeSpecName: "cni-path") pod "c2ee125a-b0d5-458f-aaa5-32012308f211" (UID: "c2ee125a-b0d5-458f-aaa5-32012308f211"). InnerVolumeSpecName "cni-path". PluginName "kubernetes.io/host-path", VolumeGidValue "" Jan 30 14:01:07.488822 kubelet[1771]: I0130 14:01:07.488641 1771 operation_generator.go:887] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/c2ee125a-b0d5-458f-aaa5-32012308f211-xtables-lock" (OuterVolumeSpecName: "xtables-lock") pod "c2ee125a-b0d5-458f-aaa5-32012308f211" (UID: "c2ee125a-b0d5-458f-aaa5-32012308f211"). InnerVolumeSpecName "xtables-lock". PluginName "kubernetes.io/host-path", VolumeGidValue "" Jan 30 14:01:07.489094 kubelet[1771]: I0130 14:01:07.488672 1771 operation_generator.go:887] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/c2ee125a-b0d5-458f-aaa5-32012308f211-host-proc-sys-net" (OuterVolumeSpecName: "host-proc-sys-net") pod "c2ee125a-b0d5-458f-aaa5-32012308f211" (UID: "c2ee125a-b0d5-458f-aaa5-32012308f211"). InnerVolumeSpecName "host-proc-sys-net". 
PluginName "kubernetes.io/host-path", VolumeGidValue "" Jan 30 14:01:07.491505 kubelet[1771]: I0130 14:01:07.491330 1771 operation_generator.go:887] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/c2ee125a-b0d5-458f-aaa5-32012308f211-cilium-config-path" (OuterVolumeSpecName: "cilium-config-path") pod "c2ee125a-b0d5-458f-aaa5-32012308f211" (UID: "c2ee125a-b0d5-458f-aaa5-32012308f211"). InnerVolumeSpecName "cilium-config-path". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 30 14:01:07.491626 kubelet[1771]: I0130 14:01:07.491528 1771 operation_generator.go:887] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/c2ee125a-b0d5-458f-aaa5-32012308f211-clustermesh-secrets" (OuterVolumeSpecName: "clustermesh-secrets") pod "c2ee125a-b0d5-458f-aaa5-32012308f211" (UID: "c2ee125a-b0d5-458f-aaa5-32012308f211"). InnerVolumeSpecName "clustermesh-secrets". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 30 14:01:07.491626 kubelet[1771]: I0130 14:01:07.491607 1771 operation_generator.go:887] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/c2ee125a-b0d5-458f-aaa5-32012308f211-lib-modules" (OuterVolumeSpecName: "lib-modules") pod "c2ee125a-b0d5-458f-aaa5-32012308f211" (UID: "c2ee125a-b0d5-458f-aaa5-32012308f211"). InnerVolumeSpecName "lib-modules". PluginName "kubernetes.io/host-path", VolumeGidValue "" Jan 30 14:01:07.492581 kubelet[1771]: I0130 14:01:07.492233 1771 operation_generator.go:887] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/c2ee125a-b0d5-458f-aaa5-32012308f211-hubble-tls" (OuterVolumeSpecName: "hubble-tls") pod "c2ee125a-b0d5-458f-aaa5-32012308f211" (UID: "c2ee125a-b0d5-458f-aaa5-32012308f211"). InnerVolumeSpecName "hubble-tls". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 30 14:01:07.492839 kubelet[1771]: I0130 14:01:07.492689 1771 operation_generator.go:887] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/c2ee125a-b0d5-458f-aaa5-32012308f211-kube-api-access-jh4dd" (OuterVolumeSpecName: "kube-api-access-jh4dd") pod "c2ee125a-b0d5-458f-aaa5-32012308f211" (UID: "c2ee125a-b0d5-458f-aaa5-32012308f211"). InnerVolumeSpecName "kube-api-access-jh4dd". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 30 14:01:07.584972 kubelet[1771]: I0130 14:01:07.584845 1771 reconciler_common.go:289] "Volume detached for volume \"host-proc-sys-kernel\" (UniqueName: \"kubernetes.io/host-path/c2ee125a-b0d5-458f-aaa5-32012308f211-host-proc-sys-kernel\") on node \"164.92.66.128\" DevicePath \"\"" Jan 30 14:01:07.584972 kubelet[1771]: I0130 14:01:07.584890 1771 reconciler_common.go:289] "Volume detached for volume \"bpf-maps\" (UniqueName: \"kubernetes.io/host-path/c2ee125a-b0d5-458f-aaa5-32012308f211-bpf-maps\") on node \"164.92.66.128\" DevicePath \"\"" Jan 30 14:01:07.584972 kubelet[1771]: I0130 14:01:07.584903 1771 reconciler_common.go:289] "Volume detached for volume \"etc-cni-netd\" (UniqueName: \"kubernetes.io/host-path/c2ee125a-b0d5-458f-aaa5-32012308f211-etc-cni-netd\") on node \"164.92.66.128\" DevicePath \"\"" Jan 30 14:01:07.584972 kubelet[1771]: I0130 14:01:07.584918 1771 reconciler_common.go:289] "Volume detached for volume \"kube-api-access-jh4dd\" (UniqueName: \"kubernetes.io/projected/c2ee125a-b0d5-458f-aaa5-32012308f211-kube-api-access-jh4dd\") on node \"164.92.66.128\" DevicePath \"\"" Jan 30 14:01:07.584972 kubelet[1771]: I0130 14:01:07.584928 1771 reconciler_common.go:289] "Volume detached for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/c2ee125a-b0d5-458f-aaa5-32012308f211-lib-modules\") on node \"164.92.66.128\" DevicePath \"\"" Jan 30 14:01:07.584972 kubelet[1771]: I0130 14:01:07.584942 1771 reconciler_common.go:289] "Volume detached for volume \"cni-path\" (UniqueName: \"kubernetes.io/host-path/c2ee125a-b0d5-458f-aaa5-32012308f211-cni-path\") on node \"164.92.66.128\" DevicePath \"\"" Jan 30 14:01:07.584972 kubelet[1771]: I0130 14:01:07.584953 1771 reconciler_common.go:289] "Volume detached for volume \"hostproc\" (UniqueName: \"kubernetes.io/host-path/c2ee125a-b0d5-458f-aaa5-32012308f211-hostproc\") on node \"164.92.66.128\" DevicePath \"\"" Jan 30 14:01:07.584972 kubelet[1771]: I0130 14:01:07.584964 1771 reconciler_common.go:289] "Volume detached for volume \"clustermesh-secrets\" (UniqueName: \"kubernetes.io/secret/c2ee125a-b0d5-458f-aaa5-32012308f211-clustermesh-secrets\") on node \"164.92.66.128\" DevicePath \"\"" Jan 30 14:01:07.585470 kubelet[1771]: I0130 14:01:07.584973 1771 reconciler_common.go:289] "Volume detached for volume \"cilium-run\" (UniqueName: \"kubernetes.io/host-path/c2ee125a-b0d5-458f-aaa5-32012308f211-cilium-run\") on node \"164.92.66.128\" DevicePath \"\"" Jan 30 14:01:07.585470 kubelet[1771]: I0130 14:01:07.584980 1771 reconciler_common.go:289] "Volume detached for volume \"hubble-tls\" (UniqueName: \"kubernetes.io/projected/c2ee125a-b0d5-458f-aaa5-32012308f211-hubble-tls\") on node \"164.92.66.128\" DevicePath \"\"" Jan 30 14:01:07.585470 kubelet[1771]: I0130 14:01:07.584989 1771 reconciler_common.go:289] "Volume detached for volume \"cilium-cgroup\" (UniqueName: \"kubernetes.io/host-path/c2ee125a-b0d5-458f-aaa5-32012308f211-cilium-cgroup\") on node \"164.92.66.128\" DevicePath \"\"" Jan 30 14:01:07.585470 kubelet[1771]: I0130 14:01:07.584997 1771 reconciler_common.go:289] "Volume detached for volume \"xtables-lock\" (UniqueName: \"kubernetes.io/host-path/c2ee125a-b0d5-458f-aaa5-32012308f211-xtables-lock\") on node \"164.92.66.128\" DevicePath \"\"" Jan 30 14:01:07.585470 kubelet[1771]: I0130 14:01:07.585005 1771 reconciler_common.go:289] "Volume detached for volume \"host-proc-sys-net\" (UniqueName: 
\"kubernetes.io/host-path/c2ee125a-b0d5-458f-aaa5-32012308f211-host-proc-sys-net\") on node \"164.92.66.128\" DevicePath \"\"" Jan 30 14:01:07.585470 kubelet[1771]: I0130 14:01:07.585012 1771 reconciler_common.go:289] "Volume detached for volume \"cilium-config-path\" (UniqueName: \"kubernetes.io/configmap/c2ee125a-b0d5-458f-aaa5-32012308f211-cilium-config-path\") on node \"164.92.66.128\" DevicePath \"\"" Jan 30 14:01:07.587095 kubelet[1771]: I0130 14:01:07.587064 1771 scope.go:117] "RemoveContainer" containerID="621835f2ca6d4ea049e36a4ab8cf0d0fe137f7aa8baa077c2d83a11262704f94" Jan 30 14:01:07.590398 containerd[1460]: time="2025-01-30T14:01:07.590358230Z" level=info msg="RemoveContainer for \"621835f2ca6d4ea049e36a4ab8cf0d0fe137f7aa8baa077c2d83a11262704f94\"" Jan 30 14:01:07.594230 systemd[1]: Removed slice kubepods-burstable-podc2ee125a_b0d5_458f_aaa5_32012308f211.slice - libcontainer container kubepods-burstable-podc2ee125a_b0d5_458f_aaa5_32012308f211.slice. Jan 30 14:01:07.594354 systemd[1]: kubepods-burstable-podc2ee125a_b0d5_458f_aaa5_32012308f211.slice: Consumed 8.590s CPU time. Jan 30 14:01:07.599626 containerd[1460]: time="2025-01-30T14:01:07.599454467Z" level=info msg="RemoveContainer for \"621835f2ca6d4ea049e36a4ab8cf0d0fe137f7aa8baa077c2d83a11262704f94\" returns successfully" Jan 30 14:01:07.600029 kubelet[1771]: I0130 14:01:07.599993 1771 scope.go:117] "RemoveContainer" containerID="5f0f58192f2fc81bcd8efa1403f68650ddc740c74823e7dab5edd46a58927b38" Jan 30 14:01:07.611334 containerd[1460]: time="2025-01-30T14:01:07.609038772Z" level=info msg="RemoveContainer for \"5f0f58192f2fc81bcd8efa1403f68650ddc740c74823e7dab5edd46a58927b38\"" Jan 30 14:01:07.615065 containerd[1460]: time="2025-01-30T14:01:07.615012720Z" level=info msg="RemoveContainer for \"5f0f58192f2fc81bcd8efa1403f68650ddc740c74823e7dab5edd46a58927b38\" returns successfully" Jan 30 14:01:07.615707 kubelet[1771]: I0130 14:01:07.615477 1771 scope.go:117] "RemoveContainer" containerID="89da84638ba9d6330c890e164e82bffcea300c2309ef72567ebb615a5258849d" Jan 30 14:01:07.617847 containerd[1460]: time="2025-01-30T14:01:07.617800603Z" level=info msg="RemoveContainer for \"89da84638ba9d6330c890e164e82bffcea300c2309ef72567ebb615a5258849d\"" Jan 30 14:01:07.620887 containerd[1460]: time="2025-01-30T14:01:07.620836356Z" level=info msg="RemoveContainer for \"89da84638ba9d6330c890e164e82bffcea300c2309ef72567ebb615a5258849d\" returns successfully" Jan 30 14:01:07.621163 kubelet[1771]: I0130 14:01:07.621082 1771 scope.go:117] "RemoveContainer" containerID="fdef02c28193b04e7c254f9a5a2473e99a0021facf5dd11f32bf8ff082da00e0" Jan 30 14:01:07.622624 containerd[1460]: time="2025-01-30T14:01:07.622565285Z" level=info msg="RemoveContainer for \"fdef02c28193b04e7c254f9a5a2473e99a0021facf5dd11f32bf8ff082da00e0\"" Jan 30 14:01:07.625956 containerd[1460]: time="2025-01-30T14:01:07.625494767Z" level=info msg="RemoveContainer for \"fdef02c28193b04e7c254f9a5a2473e99a0021facf5dd11f32bf8ff082da00e0\" returns successfully" Jan 30 14:01:07.626099 kubelet[1771]: I0130 14:01:07.625739 1771 scope.go:117] "RemoveContainer" containerID="c667fa1839ee923826339272f55fb8e8b4048ab0737f1ea3ebeb89a55d506f23" Jan 30 14:01:07.627789 containerd[1460]: time="2025-01-30T14:01:07.627748102Z" level=info msg="RemoveContainer for \"c667fa1839ee923826339272f55fb8e8b4048ab0737f1ea3ebeb89a55d506f23\"" Jan 30 14:01:07.630950 containerd[1460]: time="2025-01-30T14:01:07.630874080Z" level=info msg="RemoveContainer for \"c667fa1839ee923826339272f55fb8e8b4048ab0737f1ea3ebeb89a55d506f23\" 
returns successfully" Jan 30 14:01:07.631224 kubelet[1771]: I0130 14:01:07.631181 1771 scope.go:117] "RemoveContainer" containerID="621835f2ca6d4ea049e36a4ab8cf0d0fe137f7aa8baa077c2d83a11262704f94" Jan 30 14:01:07.631694 containerd[1460]: time="2025-01-30T14:01:07.631629401Z" level=error msg="ContainerStatus for \"621835f2ca6d4ea049e36a4ab8cf0d0fe137f7aa8baa077c2d83a11262704f94\" failed" error="rpc error: code = NotFound desc = an error occurred when try to find container \"621835f2ca6d4ea049e36a4ab8cf0d0fe137f7aa8baa077c2d83a11262704f94\": not found" Jan 30 14:01:07.631908 kubelet[1771]: E0130 14:01:07.631859 1771 remote_runtime.go:432] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = an error occurred when try to find container \"621835f2ca6d4ea049e36a4ab8cf0d0fe137f7aa8baa077c2d83a11262704f94\": not found" containerID="621835f2ca6d4ea049e36a4ab8cf0d0fe137f7aa8baa077c2d83a11262704f94" Jan 30 14:01:07.632066 kubelet[1771]: I0130 14:01:07.631899 1771 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"containerd","ID":"621835f2ca6d4ea049e36a4ab8cf0d0fe137f7aa8baa077c2d83a11262704f94"} err="failed to get container status \"621835f2ca6d4ea049e36a4ab8cf0d0fe137f7aa8baa077c2d83a11262704f94\": rpc error: code = NotFound desc = an error occurred when try to find container \"621835f2ca6d4ea049e36a4ab8cf0d0fe137f7aa8baa077c2d83a11262704f94\": not found" Jan 30 14:01:07.632066 kubelet[1771]: I0130 14:01:07.631981 1771 scope.go:117] "RemoveContainer" containerID="5f0f58192f2fc81bcd8efa1403f68650ddc740c74823e7dab5edd46a58927b38" Jan 30 14:01:07.632226 containerd[1460]: time="2025-01-30T14:01:07.632192575Z" level=error msg="ContainerStatus for \"5f0f58192f2fc81bcd8efa1403f68650ddc740c74823e7dab5edd46a58927b38\" failed" error="rpc error: code = NotFound desc = an error occurred when try to find container \"5f0f58192f2fc81bcd8efa1403f68650ddc740c74823e7dab5edd46a58927b38\": not found" Jan 30 14:01:07.632352 kubelet[1771]: E0130 14:01:07.632329 1771 remote_runtime.go:432] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = an error occurred when try to find container \"5f0f58192f2fc81bcd8efa1403f68650ddc740c74823e7dab5edd46a58927b38\": not found" containerID="5f0f58192f2fc81bcd8efa1403f68650ddc740c74823e7dab5edd46a58927b38" Jan 30 14:01:07.632416 kubelet[1771]: I0130 14:01:07.632349 1771 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"containerd","ID":"5f0f58192f2fc81bcd8efa1403f68650ddc740c74823e7dab5edd46a58927b38"} err="failed to get container status \"5f0f58192f2fc81bcd8efa1403f68650ddc740c74823e7dab5edd46a58927b38\": rpc error: code = NotFound desc = an error occurred when try to find container \"5f0f58192f2fc81bcd8efa1403f68650ddc740c74823e7dab5edd46a58927b38\": not found" Jan 30 14:01:07.632416 kubelet[1771]: I0130 14:01:07.632366 1771 scope.go:117] "RemoveContainer" containerID="89da84638ba9d6330c890e164e82bffcea300c2309ef72567ebb615a5258849d" Jan 30 14:01:07.632856 containerd[1460]: time="2025-01-30T14:01:07.632710212Z" level=error msg="ContainerStatus for \"89da84638ba9d6330c890e164e82bffcea300c2309ef72567ebb615a5258849d\" failed" error="rpc error: code = NotFound desc = an error occurred when try to find container \"89da84638ba9d6330c890e164e82bffcea300c2309ef72567ebb615a5258849d\": not found" Jan 30 14:01:07.632942 kubelet[1771]: E0130 14:01:07.632856 1771 remote_runtime.go:432] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc 
= an error occurred when try to find container \"89da84638ba9d6330c890e164e82bffcea300c2309ef72567ebb615a5258849d\": not found" containerID="89da84638ba9d6330c890e164e82bffcea300c2309ef72567ebb615a5258849d" Jan 30 14:01:07.632942 kubelet[1771]: I0130 14:01:07.632887 1771 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"containerd","ID":"89da84638ba9d6330c890e164e82bffcea300c2309ef72567ebb615a5258849d"} err="failed to get container status \"89da84638ba9d6330c890e164e82bffcea300c2309ef72567ebb615a5258849d\": rpc error: code = NotFound desc = an error occurred when try to find container \"89da84638ba9d6330c890e164e82bffcea300c2309ef72567ebb615a5258849d\": not found" Jan 30 14:01:07.632942 kubelet[1771]: I0130 14:01:07.632903 1771 scope.go:117] "RemoveContainer" containerID="fdef02c28193b04e7c254f9a5a2473e99a0021facf5dd11f32bf8ff082da00e0" Jan 30 14:01:07.633399 containerd[1460]: time="2025-01-30T14:01:07.633330563Z" level=error msg="ContainerStatus for \"fdef02c28193b04e7c254f9a5a2473e99a0021facf5dd11f32bf8ff082da00e0\" failed" error="rpc error: code = NotFound desc = an error occurred when try to find container \"fdef02c28193b04e7c254f9a5a2473e99a0021facf5dd11f32bf8ff082da00e0\": not found" Jan 30 14:01:07.633517 kubelet[1771]: E0130 14:01:07.633491 1771 remote_runtime.go:432] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = an error occurred when try to find container \"fdef02c28193b04e7c254f9a5a2473e99a0021facf5dd11f32bf8ff082da00e0\": not found" containerID="fdef02c28193b04e7c254f9a5a2473e99a0021facf5dd11f32bf8ff082da00e0" Jan 30 14:01:07.633623 kubelet[1771]: I0130 14:01:07.633523 1771 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"containerd","ID":"fdef02c28193b04e7c254f9a5a2473e99a0021facf5dd11f32bf8ff082da00e0"} err="failed to get container status \"fdef02c28193b04e7c254f9a5a2473e99a0021facf5dd11f32bf8ff082da00e0\": rpc error: code = NotFound desc = an error occurred when try to find container \"fdef02c28193b04e7c254f9a5a2473e99a0021facf5dd11f32bf8ff082da00e0\": not found" Jan 30 14:01:07.633623 kubelet[1771]: I0130 14:01:07.633544 1771 scope.go:117] "RemoveContainer" containerID="c667fa1839ee923826339272f55fb8e8b4048ab0737f1ea3ebeb89a55d506f23" Jan 30 14:01:07.633766 containerd[1460]: time="2025-01-30T14:01:07.633722207Z" level=error msg="ContainerStatus for \"c667fa1839ee923826339272f55fb8e8b4048ab0737f1ea3ebeb89a55d506f23\" failed" error="rpc error: code = NotFound desc = an error occurred when try to find container \"c667fa1839ee923826339272f55fb8e8b4048ab0737f1ea3ebeb89a55d506f23\": not found" Jan 30 14:01:07.633920 kubelet[1771]: E0130 14:01:07.633818 1771 remote_runtime.go:432] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = an error occurred when try to find container \"c667fa1839ee923826339272f55fb8e8b4048ab0737f1ea3ebeb89a55d506f23\": not found" containerID="c667fa1839ee923826339272f55fb8e8b4048ab0737f1ea3ebeb89a55d506f23" Jan 30 14:01:07.633920 kubelet[1771]: I0130 14:01:07.633842 1771 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"containerd","ID":"c667fa1839ee923826339272f55fb8e8b4048ab0737f1ea3ebeb89a55d506f23"} err="failed to get container status \"c667fa1839ee923826339272f55fb8e8b4048ab0737f1ea3ebeb89a55d506f23\": rpc error: code = NotFound desc = an error occurred when try to find container \"c667fa1839ee923826339272f55fb8e8b4048ab0737f1ea3ebeb89a55d506f23\": not found" Jan 30 14:01:08.036862 
kubelet[1771]: E0130 14:01:08.036783 1771 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Jan 30 14:01:08.116223 systemd[1]: var-lib-kubelet-pods-c2ee125a\x2db0d5\x2d458f\x2daaa5\x2d32012308f211-volumes-kubernetes.io\x7eprojected-kube\x2dapi\x2daccess\x2djh4dd.mount: Deactivated successfully. Jan 30 14:01:08.116410 systemd[1]: var-lib-kubelet-pods-c2ee125a\x2db0d5\x2d458f\x2daaa5\x2d32012308f211-volumes-kubernetes.io\x7esecret-clustermesh\x2dsecrets.mount: Deactivated successfully. Jan 30 14:01:08.116526 systemd[1]: var-lib-kubelet-pods-c2ee125a\x2db0d5\x2d458f\x2daaa5\x2d32012308f211-volumes-kubernetes.io\x7eprojected-hubble\x2dtls.mount: Deactivated successfully. Jan 30 14:01:08.313098 kubelet[1771]: I0130 14:01:08.312431 1771 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="c2ee125a-b0d5-458f-aaa5-32012308f211" path="/var/lib/kubelet/pods/c2ee125a-b0d5-458f-aaa5-32012308f211/volumes" Jan 30 14:01:09.037552 kubelet[1771]: E0130 14:01:09.037478 1771 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Jan 30 14:01:09.371999 kubelet[1771]: E0130 14:01:09.371922 1771 kubelet.go:2900] "Container runtime network not ready" networkReady="NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" Jan 30 14:01:09.829617 kubelet[1771]: I0130 14:01:09.828683 1771 topology_manager.go:215] "Topology Admit Handler" podUID="05f17d54-f4fd-410c-bc59-4689ef898761" podNamespace="kube-system" podName="cilium-operator-599987898-xn42r" Jan 30 14:01:09.829617 kubelet[1771]: E0130 14:01:09.828748 1771 cpu_manager.go:395] "RemoveStaleState: removing container" podUID="c2ee125a-b0d5-458f-aaa5-32012308f211" containerName="mount-cgroup" Jan 30 14:01:09.829617 kubelet[1771]: E0130 14:01:09.828759 1771 cpu_manager.go:395] "RemoveStaleState: removing container" podUID="c2ee125a-b0d5-458f-aaa5-32012308f211" containerName="apply-sysctl-overwrites" Jan 30 14:01:09.829617 kubelet[1771]: E0130 14:01:09.828766 1771 cpu_manager.go:395] "RemoveStaleState: removing container" podUID="c2ee125a-b0d5-458f-aaa5-32012308f211" containerName="mount-bpf-fs" Jan 30 14:01:09.829617 kubelet[1771]: E0130 14:01:09.828772 1771 cpu_manager.go:395] "RemoveStaleState: removing container" podUID="c2ee125a-b0d5-458f-aaa5-32012308f211" containerName="clean-cilium-state" Jan 30 14:01:09.829617 kubelet[1771]: E0130 14:01:09.828779 1771 cpu_manager.go:395] "RemoveStaleState: removing container" podUID="c2ee125a-b0d5-458f-aaa5-32012308f211" containerName="cilium-agent" Jan 30 14:01:09.829617 kubelet[1771]: I0130 14:01:09.828798 1771 memory_manager.go:354] "RemoveStaleState removing state" podUID="c2ee125a-b0d5-458f-aaa5-32012308f211" containerName="cilium-agent" Jan 30 14:01:09.835723 systemd[1]: Created slice kubepods-besteffort-pod05f17d54_f4fd_410c_bc59_4689ef898761.slice - libcontainer container kubepods-besteffort-pod05f17d54_f4fd_410c_bc59_4689ef898761.slice. 
Jan 30 14:01:09.897355 kubelet[1771]: I0130 14:01:09.897270 1771 topology_manager.go:215] "Topology Admit Handler" podUID="3a28c101-595b-4344-a0d4-10b2947466e6" podNamespace="kube-system" podName="cilium-mbq8h" Jan 30 14:01:09.898292 kubelet[1771]: I0130 14:01:09.897911 1771 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-8rzn7\" (UniqueName: \"kubernetes.io/projected/05f17d54-f4fd-410c-bc59-4689ef898761-kube-api-access-8rzn7\") pod \"cilium-operator-599987898-xn42r\" (UID: \"05f17d54-f4fd-410c-bc59-4689ef898761\") " pod="kube-system/cilium-operator-599987898-xn42r" Jan 30 14:01:09.898292 kubelet[1771]: I0130 14:01:09.897962 1771 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cilium-config-path\" (UniqueName: \"kubernetes.io/configmap/05f17d54-f4fd-410c-bc59-4689ef898761-cilium-config-path\") pod \"cilium-operator-599987898-xn42r\" (UID: \"05f17d54-f4fd-410c-bc59-4689ef898761\") " pod="kube-system/cilium-operator-599987898-xn42r" Jan 30 14:01:09.906796 systemd[1]: Created slice kubepods-burstable-pod3a28c101_595b_4344_a0d4_10b2947466e6.slice - libcontainer container kubepods-burstable-pod3a28c101_595b_4344_a0d4_10b2947466e6.slice. Jan 30 14:01:10.000352 kubelet[1771]: I0130 14:01:09.998717 1771 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cni-path\" (UniqueName: \"kubernetes.io/host-path/3a28c101-595b-4344-a0d4-10b2947466e6-cni-path\") pod \"cilium-mbq8h\" (UID: \"3a28c101-595b-4344-a0d4-10b2947466e6\") " pod="kube-system/cilium-mbq8h" Jan 30 14:01:10.000352 kubelet[1771]: I0130 14:01:09.998766 1771 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"hostproc\" (UniqueName: \"kubernetes.io/host-path/3a28c101-595b-4344-a0d4-10b2947466e6-hostproc\") pod \"cilium-mbq8h\" (UID: \"3a28c101-595b-4344-a0d4-10b2947466e6\") " pod="kube-system/cilium-mbq8h" Jan 30 14:01:10.000352 kubelet[1771]: I0130 14:01:09.998787 1771 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"xtables-lock\" (UniqueName: \"kubernetes.io/host-path/3a28c101-595b-4344-a0d4-10b2947466e6-xtables-lock\") pod \"cilium-mbq8h\" (UID: \"3a28c101-595b-4344-a0d4-10b2947466e6\") " pod="kube-system/cilium-mbq8h" Jan 30 14:01:10.000352 kubelet[1771]: I0130 14:01:09.998805 1771 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cilium-config-path\" (UniqueName: \"kubernetes.io/configmap/3a28c101-595b-4344-a0d4-10b2947466e6-cilium-config-path\") pod \"cilium-mbq8h\" (UID: \"3a28c101-595b-4344-a0d4-10b2947466e6\") " pod="kube-system/cilium-mbq8h" Jan 30 14:01:10.000352 kubelet[1771]: I0130 14:01:09.998828 1771 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-pmf89\" (UniqueName: \"kubernetes.io/projected/3a28c101-595b-4344-a0d4-10b2947466e6-kube-api-access-pmf89\") pod \"cilium-mbq8h\" (UID: \"3a28c101-595b-4344-a0d4-10b2947466e6\") " pod="kube-system/cilium-mbq8h" Jan 30 14:01:10.000352 kubelet[1771]: I0130 14:01:09.998856 1771 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"bpf-maps\" (UniqueName: \"kubernetes.io/host-path/3a28c101-595b-4344-a0d4-10b2947466e6-bpf-maps\") pod \"cilium-mbq8h\" (UID: \"3a28c101-595b-4344-a0d4-10b2947466e6\") " pod="kube-system/cilium-mbq8h" Jan 30 14:01:10.000665 kubelet[1771]: I0130 
14:01:09.998871 1771 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cilium-cgroup\" (UniqueName: \"kubernetes.io/host-path/3a28c101-595b-4344-a0d4-10b2947466e6-cilium-cgroup\") pod \"cilium-mbq8h\" (UID: \"3a28c101-595b-4344-a0d4-10b2947466e6\") " pod="kube-system/cilium-mbq8h" Jan 30 14:01:10.000665 kubelet[1771]: I0130 14:01:09.998889 1771 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"etc-cni-netd\" (UniqueName: \"kubernetes.io/host-path/3a28c101-595b-4344-a0d4-10b2947466e6-etc-cni-netd\") pod \"cilium-mbq8h\" (UID: \"3a28c101-595b-4344-a0d4-10b2947466e6\") " pod="kube-system/cilium-mbq8h" Jan 30 14:01:10.000665 kubelet[1771]: I0130 14:01:09.998903 1771 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-proc-sys-net\" (UniqueName: \"kubernetes.io/host-path/3a28c101-595b-4344-a0d4-10b2947466e6-host-proc-sys-net\") pod \"cilium-mbq8h\" (UID: \"3a28c101-595b-4344-a0d4-10b2947466e6\") " pod="kube-system/cilium-mbq8h" Jan 30 14:01:10.000665 kubelet[1771]: I0130 14:01:09.998920 1771 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cilium-run\" (UniqueName: \"kubernetes.io/host-path/3a28c101-595b-4344-a0d4-10b2947466e6-cilium-run\") pod \"cilium-mbq8h\" (UID: \"3a28c101-595b-4344-a0d4-10b2947466e6\") " pod="kube-system/cilium-mbq8h" Jan 30 14:01:10.000665 kubelet[1771]: I0130 14:01:09.998942 1771 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/3a28c101-595b-4344-a0d4-10b2947466e6-lib-modules\") pod \"cilium-mbq8h\" (UID: \"3a28c101-595b-4344-a0d4-10b2947466e6\") " pod="kube-system/cilium-mbq8h" Jan 30 14:01:10.000665 kubelet[1771]: I0130 14:01:09.998967 1771 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"clustermesh-secrets\" (UniqueName: \"kubernetes.io/secret/3a28c101-595b-4344-a0d4-10b2947466e6-clustermesh-secrets\") pod \"cilium-mbq8h\" (UID: \"3a28c101-595b-4344-a0d4-10b2947466e6\") " pod="kube-system/cilium-mbq8h" Jan 30 14:01:10.000813 kubelet[1771]: I0130 14:01:09.998995 1771 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cilium-ipsec-secrets\" (UniqueName: \"kubernetes.io/secret/3a28c101-595b-4344-a0d4-10b2947466e6-cilium-ipsec-secrets\") pod \"cilium-mbq8h\" (UID: \"3a28c101-595b-4344-a0d4-10b2947466e6\") " pod="kube-system/cilium-mbq8h" Jan 30 14:01:10.000813 kubelet[1771]: I0130 14:01:09.999016 1771 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-proc-sys-kernel\" (UniqueName: \"kubernetes.io/host-path/3a28c101-595b-4344-a0d4-10b2947466e6-host-proc-sys-kernel\") pod \"cilium-mbq8h\" (UID: \"3a28c101-595b-4344-a0d4-10b2947466e6\") " pod="kube-system/cilium-mbq8h" Jan 30 14:01:10.000813 kubelet[1771]: I0130 14:01:09.999031 1771 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"hubble-tls\" (UniqueName: \"kubernetes.io/projected/3a28c101-595b-4344-a0d4-10b2947466e6-hubble-tls\") pod \"cilium-mbq8h\" (UID: \"3a28c101-595b-4344-a0d4-10b2947466e6\") " pod="kube-system/cilium-mbq8h" Jan 30 14:01:10.038593 kubelet[1771]: E0130 14:01:10.038527 1771 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Jan 30 14:01:10.140766 
kubelet[1771]: E0130 14:01:10.140366 1771 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 67.207.67.3 67.207.67.2 67.207.67.3" Jan 30 14:01:10.141924 containerd[1460]: time="2025-01-30T14:01:10.141442921Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:cilium-operator-599987898-xn42r,Uid:05f17d54-f4fd-410c-bc59-4689ef898761,Namespace:kube-system,Attempt:0,}" Jan 30 14:01:10.169628 containerd[1460]: time="2025-01-30T14:01:10.169367700Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Jan 30 14:01:10.169628 containerd[1460]: time="2025-01-30T14:01:10.169450350Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Jan 30 14:01:10.169628 containerd[1460]: time="2025-01-30T14:01:10.169461538Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Jan 30 14:01:10.169817 containerd[1460]: time="2025-01-30T14:01:10.169578801Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Jan 30 14:01:10.192658 systemd[1]: Started cri-containerd-1f05af5a3c18bb45d02ed6c9c5c459faf20ef5b7792a415dae3366ca36be9f55.scope - libcontainer container 1f05af5a3c18bb45d02ed6c9c5c459faf20ef5b7792a415dae3366ca36be9f55. Jan 30 14:01:10.216454 kubelet[1771]: E0130 14:01:10.216363 1771 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 67.207.67.3 67.207.67.2 67.207.67.3" Jan 30 14:01:10.217733 containerd[1460]: time="2025-01-30T14:01:10.217687439Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:cilium-mbq8h,Uid:3a28c101-595b-4344-a0d4-10b2947466e6,Namespace:kube-system,Attempt:0,}" Jan 30 14:01:10.251439 containerd[1460]: time="2025-01-30T14:01:10.251266481Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Jan 30 14:01:10.251642 containerd[1460]: time="2025-01-30T14:01:10.251569210Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Jan 30 14:01:10.252068 containerd[1460]: time="2025-01-30T14:01:10.251815360Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Jan 30 14:01:10.252402 containerd[1460]: time="2025-01-30T14:01:10.252349701Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." 
runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Jan 30 14:01:10.254804 containerd[1460]: time="2025-01-30T14:01:10.254664499Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:cilium-operator-599987898-xn42r,Uid:05f17d54-f4fd-410c-bc59-4689ef898761,Namespace:kube-system,Attempt:0,} returns sandbox id \"1f05af5a3c18bb45d02ed6c9c5c459faf20ef5b7792a415dae3366ca36be9f55\"" Jan 30 14:01:10.255733 kubelet[1771]: E0130 14:01:10.255484 1771 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 67.207.67.3 67.207.67.2 67.207.67.3" Jan 30 14:01:10.258272 containerd[1460]: time="2025-01-30T14:01:10.258232079Z" level=info msg="PullImage \"quay.io/cilium/operator-generic:v1.12.5@sha256:b296eb7f0f7656a5cc19724f40a8a7121b7fd725278b7d61dc91fe0b7ffd7c0e\"" Jan 30 14:01:10.279571 systemd[1]: Started cri-containerd-1631586b285616173e75981e7b37601bf92f4999fc17f63f7456eb372b66c5e6.scope - libcontainer container 1631586b285616173e75981e7b37601bf92f4999fc17f63f7456eb372b66c5e6. Jan 30 14:01:10.317750 containerd[1460]: time="2025-01-30T14:01:10.317574645Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:cilium-mbq8h,Uid:3a28c101-595b-4344-a0d4-10b2947466e6,Namespace:kube-system,Attempt:0,} returns sandbox id \"1631586b285616173e75981e7b37601bf92f4999fc17f63f7456eb372b66c5e6\"" Jan 30 14:01:10.319213 kubelet[1771]: E0130 14:01:10.318754 1771 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 67.207.67.3 67.207.67.2 67.207.67.3" Jan 30 14:01:10.321264 containerd[1460]: time="2025-01-30T14:01:10.321219350Z" level=info msg="CreateContainer within sandbox \"1631586b285616173e75981e7b37601bf92f4999fc17f63f7456eb372b66c5e6\" for container &ContainerMetadata{Name:mount-cgroup,Attempt:0,}" Jan 30 14:01:10.335621 containerd[1460]: time="2025-01-30T14:01:10.335565535Z" level=info msg="CreateContainer within sandbox \"1631586b285616173e75981e7b37601bf92f4999fc17f63f7456eb372b66c5e6\" for &ContainerMetadata{Name:mount-cgroup,Attempt:0,} returns container id \"6fa7dc213a6c39aba319a25d1fc5b3ebd99aad89cab0544cc5d5667827481f63\"" Jan 30 14:01:10.337717 containerd[1460]: time="2025-01-30T14:01:10.336598842Z" level=info msg="StartContainer for \"6fa7dc213a6c39aba319a25d1fc5b3ebd99aad89cab0544cc5d5667827481f63\"" Jan 30 14:01:10.372576 systemd[1]: Started cri-containerd-6fa7dc213a6c39aba319a25d1fc5b3ebd99aad89cab0544cc5d5667827481f63.scope - libcontainer container 6fa7dc213a6c39aba319a25d1fc5b3ebd99aad89cab0544cc5d5667827481f63. Jan 30 14:01:10.403132 containerd[1460]: time="2025-01-30T14:01:10.402985610Z" level=info msg="StartContainer for \"6fa7dc213a6c39aba319a25d1fc5b3ebd99aad89cab0544cc5d5667827481f63\" returns successfully" Jan 30 14:01:10.419122 systemd[1]: cri-containerd-6fa7dc213a6c39aba319a25d1fc5b3ebd99aad89cab0544cc5d5667827481f63.scope: Deactivated successfully. 
Jan 30 14:01:10.464776 containerd[1460]: time="2025-01-30T14:01:10.464700340Z" level=info msg="shim disconnected" id=6fa7dc213a6c39aba319a25d1fc5b3ebd99aad89cab0544cc5d5667827481f63 namespace=k8s.io Jan 30 14:01:10.464776 containerd[1460]: time="2025-01-30T14:01:10.464771760Z" level=warning msg="cleaning up after shim disconnected" id=6fa7dc213a6c39aba319a25d1fc5b3ebd99aad89cab0544cc5d5667827481f63 namespace=k8s.io Jan 30 14:01:10.464776 containerd[1460]: time="2025-01-30T14:01:10.464783353Z" level=info msg="cleaning up dead shim" namespace=k8s.io Jan 30 14:01:10.599347 kubelet[1771]: E0130 14:01:10.599294 1771 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 67.207.67.3 67.207.67.2 67.207.67.3" Jan 30 14:01:10.602015 containerd[1460]: time="2025-01-30T14:01:10.601958651Z" level=info msg="CreateContainer within sandbox \"1631586b285616173e75981e7b37601bf92f4999fc17f63f7456eb372b66c5e6\" for container &ContainerMetadata{Name:apply-sysctl-overwrites,Attempt:0,}" Jan 30 14:01:10.616982 containerd[1460]: time="2025-01-30T14:01:10.616806929Z" level=info msg="CreateContainer within sandbox \"1631586b285616173e75981e7b37601bf92f4999fc17f63f7456eb372b66c5e6\" for &ContainerMetadata{Name:apply-sysctl-overwrites,Attempt:0,} returns container id \"d611db57aaab5c718c153beff2c1537d8ecc8ef01deb8edfe35956c792a8babf\"" Jan 30 14:01:10.617954 containerd[1460]: time="2025-01-30T14:01:10.617776789Z" level=info msg="StartContainer for \"d611db57aaab5c718c153beff2c1537d8ecc8ef01deb8edfe35956c792a8babf\"" Jan 30 14:01:10.653642 systemd[1]: Started cri-containerd-d611db57aaab5c718c153beff2c1537d8ecc8ef01deb8edfe35956c792a8babf.scope - libcontainer container d611db57aaab5c718c153beff2c1537d8ecc8ef01deb8edfe35956c792a8babf. Jan 30 14:01:10.690473 containerd[1460]: time="2025-01-30T14:01:10.690326521Z" level=info msg="StartContainer for \"d611db57aaab5c718c153beff2c1537d8ecc8ef01deb8edfe35956c792a8babf\" returns successfully" Jan 30 14:01:10.698537 systemd[1]: cri-containerd-d611db57aaab5c718c153beff2c1537d8ecc8ef01deb8edfe35956c792a8babf.scope: Deactivated successfully. 
Jan 30 14:01:10.728500 containerd[1460]: time="2025-01-30T14:01:10.728204407Z" level=info msg="shim disconnected" id=d611db57aaab5c718c153beff2c1537d8ecc8ef01deb8edfe35956c792a8babf namespace=k8s.io
Jan 30 14:01:10.728500 containerd[1460]: time="2025-01-30T14:01:10.728267775Z" level=warning msg="cleaning up after shim disconnected" id=d611db57aaab5c718c153beff2c1537d8ecc8ef01deb8edfe35956c792a8babf namespace=k8s.io
Jan 30 14:01:10.728500 containerd[1460]: time="2025-01-30T14:01:10.728278884Z" level=info msg="cleaning up dead shim" namespace=k8s.io
Jan 30 14:01:11.039168 kubelet[1771]: E0130 14:01:11.038985 1771 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests"
Jan 30 14:01:11.613123 kubelet[1771]: E0130 14:01:11.612934 1771 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 67.207.67.3 67.207.67.2 67.207.67.3"
Jan 30 14:01:11.615593 containerd[1460]: time="2025-01-30T14:01:11.615471472Z" level=info msg="CreateContainer within sandbox \"1631586b285616173e75981e7b37601bf92f4999fc17f63f7456eb372b66c5e6\" for container &ContainerMetadata{Name:mount-bpf-fs,Attempt:0,}"
Jan 30 14:01:11.685944 containerd[1460]: time="2025-01-30T14:01:11.685582092Z" level=info msg="CreateContainer within sandbox \"1631586b285616173e75981e7b37601bf92f4999fc17f63f7456eb372b66c5e6\" for &ContainerMetadata{Name:mount-bpf-fs,Attempt:0,} returns container id \"1d5b5d07544a29205f3ed8afa20ff898311140c5f8b6de970aa9998e496d1455\""
Jan 30 14:01:11.688079 containerd[1460]: time="2025-01-30T14:01:11.686365502Z" level=info msg="StartContainer for \"1d5b5d07544a29205f3ed8afa20ff898311140c5f8b6de970aa9998e496d1455\""
Jan 30 14:01:11.745636 systemd[1]: Started cri-containerd-1d5b5d07544a29205f3ed8afa20ff898311140c5f8b6de970aa9998e496d1455.scope - libcontainer container 1d5b5d07544a29205f3ed8afa20ff898311140c5f8b6de970aa9998e496d1455.
Jan 30 14:01:11.791546 containerd[1460]: time="2025-01-30T14:01:11.791484535Z" level=info msg="StartContainer for \"1d5b5d07544a29205f3ed8afa20ff898311140c5f8b6de970aa9998e496d1455\" returns successfully"
Jan 30 14:01:11.798345 systemd[1]: cri-containerd-1d5b5d07544a29205f3ed8afa20ff898311140c5f8b6de970aa9998e496d1455.scope: Deactivated successfully.
Jan 30 14:01:11.848491 containerd[1460]: time="2025-01-30T14:01:11.848216144Z" level=info msg="shim disconnected" id=1d5b5d07544a29205f3ed8afa20ff898311140c5f8b6de970aa9998e496d1455 namespace=k8s.io
Jan 30 14:01:11.848491 containerd[1460]: time="2025-01-30T14:01:11.848276207Z" level=warning msg="cleaning up after shim disconnected" id=1d5b5d07544a29205f3ed8afa20ff898311140c5f8b6de970aa9998e496d1455 namespace=k8s.io
Jan 30 14:01:11.848491 containerd[1460]: time="2025-01-30T14:01:11.848284776Z" level=info msg="cleaning up dead shim" namespace=k8s.io
Jan 30 14:01:12.040240 kubelet[1771]: E0130 14:01:12.040068 1771 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests"
Jan 30 14:01:12.071669 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount1685455120.mount: Deactivated successfully.
Jan 30 14:01:12.071800 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-1d5b5d07544a29205f3ed8afa20ff898311140c5f8b6de970aa9998e496d1455-rootfs.mount: Deactivated successfully.
Jan 30 14:01:12.364337 containerd[1460]: time="2025-01-30T14:01:12.364169765Z" level=info msg="ImageCreate event name:\"quay.io/cilium/operator-generic@sha256:b296eb7f0f7656a5cc19724f40a8a7121b7fd725278b7d61dc91fe0b7ffd7c0e\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Jan 30 14:01:12.365334 containerd[1460]: time="2025-01-30T14:01:12.365047755Z" level=info msg="stop pulling image quay.io/cilium/operator-generic@sha256:b296eb7f0f7656a5cc19724f40a8a7121b7fd725278b7d61dc91fe0b7ffd7c0e: active requests=0, bytes read=18904197"
Jan 30 14:01:12.366625 containerd[1460]: time="2025-01-30T14:01:12.366560695Z" level=info msg="ImageCreate event name:\"sha256:ed355de9f59fe391dbe53f3c7c7a60baab3c3a9b7549aa54d10b87fff7dacf7c\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Jan 30 14:01:12.369338 containerd[1460]: time="2025-01-30T14:01:12.368900885Z" level=info msg="Pulled image \"quay.io/cilium/operator-generic:v1.12.5@sha256:b296eb7f0f7656a5cc19724f40a8a7121b7fd725278b7d61dc91fe0b7ffd7c0e\" with image id \"sha256:ed355de9f59fe391dbe53f3c7c7a60baab3c3a9b7549aa54d10b87fff7dacf7c\", repo tag \"\", repo digest \"quay.io/cilium/operator-generic@sha256:b296eb7f0f7656a5cc19724f40a8a7121b7fd725278b7d61dc91fe0b7ffd7c0e\", size \"18897442\" in 2.110329355s"
Jan 30 14:01:12.369338 containerd[1460]: time="2025-01-30T14:01:12.368958874Z" level=info msg="PullImage \"quay.io/cilium/operator-generic:v1.12.5@sha256:b296eb7f0f7656a5cc19724f40a8a7121b7fd725278b7d61dc91fe0b7ffd7c0e\" returns image reference \"sha256:ed355de9f59fe391dbe53f3c7c7a60baab3c3a9b7549aa54d10b87fff7dacf7c\""
Jan 30 14:01:12.372389 containerd[1460]: time="2025-01-30T14:01:12.372355527Z" level=info msg="CreateContainer within sandbox \"1f05af5a3c18bb45d02ed6c9c5c459faf20ef5b7792a415dae3366ca36be9f55\" for container &ContainerMetadata{Name:cilium-operator,Attempt:0,}"
Jan 30 14:01:12.385608 containerd[1460]: time="2025-01-30T14:01:12.385542628Z" level=info msg="CreateContainer within sandbox \"1f05af5a3c18bb45d02ed6c9c5c459faf20ef5b7792a415dae3366ca36be9f55\" for &ContainerMetadata{Name:cilium-operator,Attempt:0,} returns container id \"13e2b49d6a1257a3b69c5770f32e0c3050ae2cc62b5e4bf4f60e7da5eae3c1cb\""
Jan 30 14:01:12.389336 containerd[1460]: time="2025-01-30T14:01:12.386834663Z" level=info msg="StartContainer for \"13e2b49d6a1257a3b69c5770f32e0c3050ae2cc62b5e4bf4f60e7da5eae3c1cb\""
Jan 30 14:01:12.426699 systemd[1]: Started cri-containerd-13e2b49d6a1257a3b69c5770f32e0c3050ae2cc62b5e4bf4f60e7da5eae3c1cb.scope - libcontainer container 13e2b49d6a1257a3b69c5770f32e0c3050ae2cc62b5e4bf4f60e7da5eae3c1cb.
Jan 30 14:01:12.462659 containerd[1460]: time="2025-01-30T14:01:12.462598295Z" level=info msg="StartContainer for \"13e2b49d6a1257a3b69c5770f32e0c3050ae2cc62b5e4bf4f60e7da5eae3c1cb\" returns successfully"
Jan 30 14:01:12.621025 kubelet[1771]: E0130 14:01:12.620893 1771 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 67.207.67.3 67.207.67.2 67.207.67.3"
Jan 30 14:01:12.625477 kubelet[1771]: E0130 14:01:12.625436 1771 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 67.207.67.3 67.207.67.2 67.207.67.3"
Jan 30 14:01:12.628394 containerd[1460]: time="2025-01-30T14:01:12.627407497Z" level=info msg="CreateContainer within sandbox \"1631586b285616173e75981e7b37601bf92f4999fc17f63f7456eb372b66c5e6\" for container &ContainerMetadata{Name:clean-cilium-state,Attempt:0,}"
Jan 30 14:01:12.660957 containerd[1460]: time="2025-01-30T14:01:12.660886840Z" level=info msg="CreateContainer within sandbox \"1631586b285616173e75981e7b37601bf92f4999fc17f63f7456eb372b66c5e6\" for &ContainerMetadata{Name:clean-cilium-state,Attempt:0,} returns container id \"d78c120ed9269b4931ed71a4427a90894d23d2a4ab5a35216346fce43e6f059f\""
Jan 30 14:01:12.661923 containerd[1460]: time="2025-01-30T14:01:12.661880941Z" level=info msg="StartContainer for \"d78c120ed9269b4931ed71a4427a90894d23d2a4ab5a35216346fce43e6f059f\""
Jan 30 14:01:12.706624 systemd[1]: Started cri-containerd-d78c120ed9269b4931ed71a4427a90894d23d2a4ab5a35216346fce43e6f059f.scope - libcontainer container d78c120ed9269b4931ed71a4427a90894d23d2a4ab5a35216346fce43e6f059f.
Jan 30 14:01:12.729588 kubelet[1771]: I0130 14:01:12.728993 1771 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/cilium-operator-599987898-xn42r" podStartSLOduration=1.6156183579999999 podStartE2EDuration="3.728968334s" podCreationTimestamp="2025-01-30 14:01:09 +0000 UTC" firstStartedPulling="2025-01-30 14:01:10.2566026 +0000 UTC m=+67.606666891" lastFinishedPulling="2025-01-30 14:01:12.369952558 +0000 UTC m=+69.720016867" observedRunningTime="2025-01-30 14:01:12.728518593 +0000 UTC m=+70.078582903" watchObservedRunningTime="2025-01-30 14:01:12.728968334 +0000 UTC m=+70.079032647"
Jan 30 14:01:12.752222 systemd[1]: cri-containerd-d78c120ed9269b4931ed71a4427a90894d23d2a4ab5a35216346fce43e6f059f.scope: Deactivated successfully.
Jan 30 14:01:12.753461 containerd[1460]: time="2025-01-30T14:01:12.753416820Z" level=info msg="StartContainer for \"d78c120ed9269b4931ed71a4427a90894d23d2a4ab5a35216346fce43e6f059f\" returns successfully"
Jan 30 14:01:12.790352 containerd[1460]: time="2025-01-30T14:01:12.790054866Z" level=info msg="shim disconnected" id=d78c120ed9269b4931ed71a4427a90894d23d2a4ab5a35216346fce43e6f059f namespace=k8s.io
Jan 30 14:01:12.790352 containerd[1460]: time="2025-01-30T14:01:12.790128559Z" level=warning msg="cleaning up after shim disconnected" id=d78c120ed9269b4931ed71a4427a90894d23d2a4ab5a35216346fce43e6f059f namespace=k8s.io
Jan 30 14:01:12.790352 containerd[1460]: time="2025-01-30T14:01:12.790141020Z" level=info msg="cleaning up dead shim" namespace=k8s.io
Jan 30 14:01:13.041399 kubelet[1771]: E0130 14:01:13.040484 1771 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests"
Jan 30 14:01:13.631527 kubelet[1771]: E0130 14:01:13.631476 1771 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 67.207.67.3 67.207.67.2 67.207.67.3"
Jan 30 14:01:13.632257 kubelet[1771]: E0130 14:01:13.631952 1771 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 67.207.67.3 67.207.67.2 67.207.67.3"
Jan 30 14:01:13.635147 containerd[1460]: time="2025-01-30T14:01:13.635105398Z" level=info msg="CreateContainer within sandbox \"1631586b285616173e75981e7b37601bf92f4999fc17f63f7456eb372b66c5e6\" for container &ContainerMetadata{Name:cilium-agent,Attempt:0,}"
Jan 30 14:01:13.659046 containerd[1460]: time="2025-01-30T14:01:13.658991935Z" level=info msg="CreateContainer within sandbox \"1631586b285616173e75981e7b37601bf92f4999fc17f63f7456eb372b66c5e6\" for &ContainerMetadata{Name:cilium-agent,Attempt:0,} returns container id \"6cc4b5a7323e5160f6832b7bf2dbde5edf40574e52d41f144bda6f3c350cef63\""
Jan 30 14:01:13.660011 containerd[1460]: time="2025-01-30T14:01:13.659968883Z" level=info msg="StartContainer for \"6cc4b5a7323e5160f6832b7bf2dbde5edf40574e52d41f144bda6f3c350cef63\""
Jan 30 14:01:13.700583 systemd[1]: Started cri-containerd-6cc4b5a7323e5160f6832b7bf2dbde5edf40574e52d41f144bda6f3c350cef63.scope - libcontainer container 6cc4b5a7323e5160f6832b7bf2dbde5edf40574e52d41f144bda6f3c350cef63.
Jan 30 14:01:13.751490 containerd[1460]: time="2025-01-30T14:01:13.751382462Z" level=info msg="StartContainer for \"6cc4b5a7323e5160f6832b7bf2dbde5edf40574e52d41f144bda6f3c350cef63\" returns successfully"
Jan 30 14:01:14.042545 kubelet[1771]: E0130 14:01:14.041276 1771 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests"
Jan 30 14:01:14.069981 systemd[1]: run-containerd-runc-k8s.io-6cc4b5a7323e5160f6832b7bf2dbde5edf40574e52d41f144bda6f3c350cef63-runc.dWUiZM.mount: Deactivated successfully.
Jan 30 14:01:14.182562 kernel: alg: No test for seqiv(rfc4106(gcm(aes))) (seqiv(rfc4106-gcm-aesni))
Jan 30 14:01:14.637982 kubelet[1771]: E0130 14:01:14.637938 1771 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 67.207.67.3 67.207.67.2 67.207.67.3"
Jan 30 14:01:14.670503 kubelet[1771]: I0130 14:01:14.670434 1771 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/cilium-mbq8h" podStartSLOduration=5.670412073 podStartE2EDuration="5.670412073s" podCreationTimestamp="2025-01-30 14:01:09 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-01-30 14:01:14.670238775 +0000 UTC m=+72.020303087" watchObservedRunningTime="2025-01-30 14:01:14.670412073 +0000 UTC m=+72.020476385"
Jan 30 14:01:15.042486 kubelet[1771]: E0130 14:01:15.042281 1771 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests"
Jan 30 14:01:16.043472 kubelet[1771]: E0130 14:01:16.043392 1771 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests"
Jan 30 14:01:16.218081 kubelet[1771]: E0130 14:01:16.217617 1771 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 67.207.67.3 67.207.67.2 67.207.67.3"
Jan 30 14:01:16.926510 systemd[1]: run-containerd-runc-k8s.io-6cc4b5a7323e5160f6832b7bf2dbde5edf40574e52d41f144bda6f3c350cef63-runc.BbRTEU.mount: Deactivated successfully.
Jan 30 14:01:17.044152 kubelet[1771]: E0130 14:01:17.043955 1771 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests"
Jan 30 14:01:17.337532 systemd-networkd[1366]: lxc_health: Link UP
Jan 30 14:01:17.357213 systemd-networkd[1366]: lxc_health: Gained carrier
Jan 30 14:01:18.044778 kubelet[1771]: E0130 14:01:18.044708 1771 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests"
Jan 30 14:01:18.232913 kubelet[1771]: E0130 14:01:18.232870 1771 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 67.207.67.3 67.207.67.2 67.207.67.3"
Jan 30 14:01:18.543546 systemd-networkd[1366]: lxc_health: Gained IPv6LL
Jan 30 14:01:18.685115 kubelet[1771]: E0130 14:01:18.685065 1771 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 67.207.67.3 67.207.67.2 67.207.67.3"
Jan 30 14:01:19.045574 kubelet[1771]: E0130 14:01:19.045482 1771 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests"
Jan 30 14:01:20.046523 kubelet[1771]: E0130 14:01:20.046442 1771 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests"
Jan 30 14:01:21.047108 kubelet[1771]: E0130 14:01:21.047030 1771 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests"
Jan 30 14:01:22.047339 kubelet[1771]: E0130 14:01:22.047261 1771 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests"
Jan 30 14:01:23.047825 kubelet[1771]: E0130 14:01:23.047756 1771 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests"
Jan 30 14:01:23.978088 kubelet[1771]: E0130 14:01:23.978021 1771 file.go:104] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests"
Jan 30 14:01:24.048036 kubelet[1771]: E0130 14:01:24.047980 1771 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests"
Jan 30 14:01:24.311055 kubelet[1771]: E0130 14:01:24.310879 1771 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 67.207.67.3 67.207.67.2 67.207.67.3"
Jan 30 14:01:25.048440 kubelet[1771]: E0130 14:01:25.048369 1771 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests"
Jan 30 14:01:26.049335 kubelet[1771]: E0130 14:01:26.048510 1771 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests"
Jan 30 14:01:26.059890 systemd[1]: run-containerd-runc-k8s.io-6cc4b5a7323e5160f6832b7bf2dbde5edf40574e52d41f144bda6f3c350cef63-runc.o4alBm.mount: Deactivated successfully.
Jan 30 14:01:27.049232 kubelet[1771]: E0130 14:01:27.049104 1771 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests"
Jan 30 14:01:28.050053 kubelet[1771]: E0130 14:01:28.049989 1771 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests"
Jan 30 14:01:29.050966 kubelet[1771]: E0130 14:01:29.050902 1771 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests"