Jan 13 21:25:53.889623 kernel: Linux version 6.6.71-flatcar (build@pony-truck.infra.kinvolk.io) (x86_64-cros-linux-gnu-gcc (Gentoo Hardened 13.3.1_p20240614 p17) 13.3.1 20240614, GNU ld (Gentoo 2.42 p3) 2.42.0) #1 SMP PREEMPT_DYNAMIC Mon Jan 13 19:40:50 -00 2025
Jan 13 21:25:53.889644 kernel: Command line: BOOT_IMAGE=/flatcar/vmlinuz-a mount.usr=/dev/mapper/usr verity.usr=PARTUUID=7130c94a-213a-4e5a-8e26-6cce9662f132 rootflags=rw mount.usrflags=ro consoleblank=0 root=LABEL=ROOT console=ttyS0,115200 flatcar.first_boot=detected verity.usrhash=8945029ddd0f3864592f8746dde99cfcba228e0d3cb946f5938103dbe8733507
Jan 13 21:25:53.889655 kernel: BIOS-provided physical RAM map:
Jan 13 21:25:53.889662 kernel: BIOS-e820: [mem 0x0000000000000000-0x000000000009fbff] usable
Jan 13 21:25:53.889668 kernel: BIOS-e820: [mem 0x000000000009fc00-0x000000000009ffff] reserved
Jan 13 21:25:53.889674 kernel: BIOS-e820: [mem 0x00000000000f0000-0x00000000000fffff] reserved
Jan 13 21:25:53.889681 kernel: BIOS-e820: [mem 0x0000000000100000-0x000000009cfdbfff] usable
Jan 13 21:25:53.889688 kernel: BIOS-e820: [mem 0x000000009cfdc000-0x000000009cffffff] reserved
Jan 13 21:25:53.889694 kernel: BIOS-e820: [mem 0x00000000b0000000-0x00000000bfffffff] reserved
Jan 13 21:25:53.889703 kernel: BIOS-e820: [mem 0x00000000fed1c000-0x00000000fed1ffff] reserved
Jan 13 21:25:53.889709 kernel: BIOS-e820: [mem 0x00000000feffc000-0x00000000feffffff] reserved
Jan 13 21:25:53.889715 kernel: BIOS-e820: [mem 0x00000000fffc0000-0x00000000ffffffff] reserved
Jan 13 21:25:53.889725 kernel: BIOS-e820: [mem 0x000000fd00000000-0x000000ffffffffff] reserved
Jan 13 21:25:53.889732 kernel: NX (Execute Disable) protection: active
Jan 13 21:25:53.889739 kernel: APIC: Static calls initialized
Jan 13 21:25:53.889752 kernel: SMBIOS 2.8 present.
Jan 13 21:25:53.889759 kernel: DMI: QEMU Standard PC (Q35 + ICH9, 2009), BIOS 1.16.2-debian-1.16.2-1 04/01/2014
Jan 13 21:25:53.889766 kernel: Hypervisor detected: KVM
Jan 13 21:25:53.889779 kernel: kvm-clock: Using msrs 4b564d01 and 4b564d00
Jan 13 21:25:53.889786 kernel: kvm-clock: using sched offset of 2793413808 cycles
Jan 13 21:25:53.889794 kernel: clocksource: kvm-clock: mask: 0xffffffffffffffff max_cycles: 0x1cd42e4dffb, max_idle_ns: 881590591483 ns
Jan 13 21:25:53.889801 kernel: tsc: Detected 2794.748 MHz processor
Jan 13 21:25:53.889808 kernel: e820: update [mem 0x00000000-0x00000fff] usable ==> reserved
Jan 13 21:25:53.889815 kernel: e820: remove [mem 0x000a0000-0x000fffff] usable
Jan 13 21:25:53.889826 kernel: last_pfn = 0x9cfdc max_arch_pfn = 0x400000000
Jan 13 21:25:53.889833 kernel: MTRR map: 4 entries (3 fixed + 1 variable; max 19), built from 8 variable MTRRs
Jan 13 21:25:53.889840 kernel: x86/PAT: Configuration [0-7]: WB WC UC- UC WB WP UC- WT
Jan 13 21:25:53.889847 kernel: Using GB pages for direct mapping
Jan 13 21:25:53.889854 kernel: ACPI: Early table checksum verification disabled
Jan 13 21:25:53.889861 kernel: ACPI: RSDP 0x00000000000F59D0 000014 (v00 BOCHS )
Jan 13 21:25:53.889868 kernel: ACPI: RSDT 0x000000009CFE2408 000038 (v01 BOCHS BXPC 00000001 BXPC 00000001)
Jan 13 21:25:53.889876 kernel: ACPI: FACP 0x000000009CFE21E8 0000F4 (v03 BOCHS BXPC 00000001 BXPC 00000001)
Jan 13 21:25:53.889883 kernel: ACPI: DSDT 0x000000009CFE0040 0021A8 (v01 BOCHS BXPC 00000001 BXPC 00000001)
Jan 13 21:25:53.889892 kernel: ACPI: FACS 0x000000009CFE0000 000040
Jan 13 21:25:53.889899 kernel: ACPI: APIC 0x000000009CFE22DC 000090 (v01 BOCHS BXPC 00000001 BXPC 00000001)
Jan 13 21:25:53.889906 kernel: ACPI: HPET 0x000000009CFE236C 000038 (v01 BOCHS BXPC 00000001 BXPC 00000001)
Jan 13 21:25:53.889913 kernel: ACPI: MCFG 0x000000009CFE23A4 00003C (v01 BOCHS BXPC 00000001 BXPC 00000001)
Jan 13 21:25:53.889920 kernel: ACPI: WAET 0x000000009CFE23E0 000028 (v01 BOCHS BXPC 00000001 BXPC 00000001)
Jan 13 21:25:53.889927 kernel: ACPI: Reserving FACP table memory at [mem 0x9cfe21e8-0x9cfe22db]
Jan 13 21:25:53.889934 kernel: ACPI: Reserving DSDT table memory at [mem 0x9cfe0040-0x9cfe21e7]
Jan 13 21:25:53.889945 kernel: ACPI: Reserving FACS table memory at [mem 0x9cfe0000-0x9cfe003f]
Jan 13 21:25:53.889955 kernel: ACPI: Reserving APIC table memory at [mem 0x9cfe22dc-0x9cfe236b]
Jan 13 21:25:53.889962 kernel: ACPI: Reserving HPET table memory at [mem 0x9cfe236c-0x9cfe23a3]
Jan 13 21:25:53.889969 kernel: ACPI: Reserving MCFG table memory at [mem 0x9cfe23a4-0x9cfe23df]
Jan 13 21:25:53.889976 kernel: ACPI: Reserving WAET table memory at [mem 0x9cfe23e0-0x9cfe2407]
Jan 13 21:25:53.889986 kernel: No NUMA configuration found
Jan 13 21:25:53.889993 kernel: Faking a node at [mem 0x0000000000000000-0x000000009cfdbfff]
Jan 13 21:25:53.890003 kernel: NODE_DATA(0) allocated [mem 0x9cfd6000-0x9cfdbfff]
Jan 13 21:25:53.890010 kernel: Zone ranges:
Jan 13 21:25:53.890018 kernel: DMA [mem 0x0000000000001000-0x0000000000ffffff]
Jan 13 21:25:53.890025 kernel: DMA32 [mem 0x0000000001000000-0x000000009cfdbfff]
Jan 13 21:25:53.890032 kernel: Normal empty
Jan 13 21:25:53.890039 kernel: Movable zone start for each node
Jan 13 21:25:53.890046 kernel: Early memory node ranges
Jan 13 21:25:53.890054 kernel: node 0: [mem 0x0000000000001000-0x000000000009efff]
Jan 13 21:25:53.890061 kernel: node 0: [mem 0x0000000000100000-0x000000009cfdbfff]
Jan 13 21:25:53.890068 kernel: Initmem setup node 0 [mem 0x0000000000001000-0x000000009cfdbfff]
Jan 13 21:25:53.890078 kernel: On node 0, zone DMA: 1 pages in unavailable ranges
Jan 13 21:25:53.890087 kernel: On node 0, zone DMA: 97 pages in unavailable ranges
Jan 13 21:25:53.890094 kernel: On node 0, zone DMA32: 12324 pages in unavailable ranges
Jan 13 21:25:53.890102 kernel: ACPI: PM-Timer IO Port: 0x608
Jan 13 21:25:53.890109 kernel: ACPI: LAPIC_NMI (acpi_id[0xff] dfl dfl lint[0x1])
Jan 13 21:25:53.890116 kernel: IOAPIC[0]: apic_id 0, version 17, address 0xfec00000, GSI 0-23
Jan 13 21:25:53.890123 kernel: ACPI: INT_SRC_OVR (bus 0 bus_irq 0 global_irq 2 dfl dfl)
Jan 13 21:25:53.890131 kernel: ACPI: INT_SRC_OVR (bus 0 bus_irq 5 global_irq 5 high level)
Jan 13 21:25:53.890138 kernel: ACPI: INT_SRC_OVR (bus 0 bus_irq 9 global_irq 9 high level)
Jan 13 21:25:53.890148 kernel: ACPI: INT_SRC_OVR (bus 0 bus_irq 10 global_irq 10 high level)
Jan 13 21:25:53.890155 kernel: ACPI: INT_SRC_OVR (bus 0 bus_irq 11 global_irq 11 high level)
Jan 13 21:25:53.890162 kernel: ACPI: Using ACPI (MADT) for SMP configuration information
Jan 13 21:25:53.890170 kernel: ACPI: HPET id: 0x8086a201 base: 0xfed00000
Jan 13 21:25:53.890177 kernel: TSC deadline timer available
Jan 13 21:25:53.890184 kernel: smpboot: Allowing 4 CPUs, 0 hotplug CPUs
Jan 13 21:25:53.890191 kernel: kvm-guest: APIC: eoi() replaced with kvm_guest_apic_eoi_write()
Jan 13 21:25:53.890199 kernel: kvm-guest: KVM setup pv remote TLB flush
Jan 13 21:25:53.890208 kernel: kvm-guest: setup PV sched yield
Jan 13 21:25:53.890218 kernel: [mem 0xc0000000-0xfed1bfff] available for PCI devices
Jan 13 21:25:53.890225 kernel: Booting paravirtualized kernel on KVM
Jan 13 21:25:53.890233 kernel: clocksource: refined-jiffies: mask: 0xffffffff max_cycles: 0xffffffff, max_idle_ns: 1910969940391419 ns
Jan 13 21:25:53.890240 kernel: setup_percpu: NR_CPUS:512 nr_cpumask_bits:4 nr_cpu_ids:4 nr_node_ids:1
Jan 13 21:25:53.890247 kernel: percpu: Embedded 58 pages/cpu s197032 r8192 d32344 u524288
Jan 13 21:25:53.890255 kernel: pcpu-alloc: s197032 r8192 d32344 u524288 alloc=1*2097152
Jan 13 21:25:53.890262 kernel: pcpu-alloc: [0] 0 1 2 3
Jan 13 21:25:53.890269 kernel: kvm-guest: PV spinlocks enabled
Jan 13 21:25:53.890276 kernel: PV qspinlock hash table entries: 256 (order: 0, 4096 bytes, linear)
Jan 13 21:25:53.890287 kernel: Kernel command line: rootflags=rw mount.usrflags=ro BOOT_IMAGE=/flatcar/vmlinuz-a mount.usr=/dev/mapper/usr verity.usr=PARTUUID=7130c94a-213a-4e5a-8e26-6cce9662f132 rootflags=rw mount.usrflags=ro consoleblank=0 root=LABEL=ROOT console=ttyS0,115200 flatcar.first_boot=detected verity.usrhash=8945029ddd0f3864592f8746dde99cfcba228e0d3cb946f5938103dbe8733507
Jan 13 21:25:53.890295 kernel: Unknown kernel command line parameters "BOOT_IMAGE=/flatcar/vmlinuz-a", will be passed to user space.
Jan 13 21:25:53.890302 kernel: random: crng init done
Jan 13 21:25:53.890310 kernel: Dentry cache hash table entries: 524288 (order: 10, 4194304 bytes, linear)
Jan 13 21:25:53.890317 kernel: Inode-cache hash table entries: 262144 (order: 9, 2097152 bytes, linear)
Jan 13 21:25:53.890324 kernel: Fallback order for Node 0: 0
Jan 13 21:25:53.890332 kernel: Built 1 zonelists, mobility grouping on. Total pages: 632732
Jan 13 21:25:53.890339 kernel: Policy zone: DMA32
Jan 13 21:25:53.890349 kernel: mem auto-init: stack:off, heap alloc:off, heap free:off
Jan 13 21:25:53.890356 kernel: Memory: 2434592K/2571752K available (12288K kernel code, 2299K rwdata, 22728K rodata, 42844K init, 2348K bss, 136900K reserved, 0K cma-reserved)
Jan 13 21:25:53.890364 kernel: SLUB: HWalign=64, Order=0-3, MinObjects=0, CPUs=4, Nodes=1
Jan 13 21:25:53.890371 kernel: ftrace: allocating 37918 entries in 149 pages
Jan 13 21:25:53.890378 kernel: ftrace: allocated 149 pages with 4 groups
Jan 13 21:25:53.890386 kernel: Dynamic Preempt: voluntary
Jan 13 21:25:53.890393 kernel: rcu: Preemptible hierarchical RCU implementation.
Jan 13 21:25:53.890449 kernel: rcu: RCU event tracing is enabled.
Jan 13 21:25:53.890458 kernel: rcu: RCU restricting CPUs from NR_CPUS=512 to nr_cpu_ids=4.
Jan 13 21:25:53.890469 kernel: Trampoline variant of Tasks RCU enabled.
Jan 13 21:25:53.890477 kernel: Rude variant of Tasks RCU enabled.
Jan 13 21:25:53.890484 kernel: Tracing variant of Tasks RCU enabled.
Jan 13 21:25:53.890492 kernel: rcu: RCU calculated value of scheduler-enlistment delay is 100 jiffies.
Jan 13 21:25:53.890501 kernel: rcu: Adjusting geometry for rcu_fanout_leaf=16, nr_cpu_ids=4
Jan 13 21:25:53.890509 kernel: NR_IRQS: 33024, nr_irqs: 456, preallocated irqs: 16
Jan 13 21:25:53.890517 kernel: rcu: srcu_init: Setting srcu_struct sizes based on contention.
Jan 13 21:25:53.890524 kernel: Console: colour VGA+ 80x25
Jan 13 21:25:53.890531 kernel: printk: console [ttyS0] enabled
Jan 13 21:25:53.890541 kernel: ACPI: Core revision 20230628
Jan 13 21:25:53.890549 kernel: clocksource: hpet: mask: 0xffffffff max_cycles: 0xffffffff, max_idle_ns: 19112604467 ns
Jan 13 21:25:53.890557 kernel: APIC: Switch to symmetric I/O mode setup
Jan 13 21:25:53.890564 kernel: x2apic enabled
Jan 13 21:25:53.890571 kernel: APIC: Switched APIC routing to: physical x2apic
Jan 13 21:25:53.890579 kernel: kvm-guest: APIC: send_IPI_mask() replaced with kvm_send_ipi_mask()
Jan 13 21:25:53.890586 kernel: kvm-guest: APIC: send_IPI_mask_allbutself() replaced with kvm_send_ipi_mask_allbutself()
Jan 13 21:25:53.890594 kernel: kvm-guest: setup PV IPIs
Jan 13 21:25:53.890611 kernel: ..TIMER: vector=0x30 apic1=0 pin1=2 apic2=-1 pin2=-1
Jan 13 21:25:53.890619 kernel: tsc: Marking TSC unstable due to TSCs unsynchronized
Jan 13 21:25:53.890626 kernel: Calibrating delay loop (skipped) preset value.. 5589.49 BogoMIPS (lpj=2794748)
Jan 13 21:25:53.890634 kernel: x86/cpu: User Mode Instruction Prevention (UMIP) activated
Jan 13 21:25:53.890644 kernel: Last level iTLB entries: 4KB 512, 2MB 255, 4MB 127
Jan 13 21:25:53.890652 kernel: Last level dTLB entries: 4KB 512, 2MB 255, 4MB 127, 1GB 0
Jan 13 21:25:53.890659 kernel: Spectre V1 : Mitigation: usercopy/swapgs barriers and __user pointer sanitization
Jan 13 21:25:53.890667 kernel: Spectre V2 : Mitigation: Retpolines
Jan 13 21:25:53.890675 kernel: Spectre V2 : Spectre v2 / SpectreRSB mitigation: Filling RSB on context switch
Jan 13 21:25:53.890685 kernel: Spectre V2 : Spectre v2 / SpectreRSB : Filling RSB on VMEXIT
Jan 13 21:25:53.890692 kernel: Spectre V2 : Enabling Speculation Barrier for firmware calls
Jan 13 21:25:53.890702 kernel: RETBleed: Mitigation: untrained return thunk
Jan 13 21:25:53.890710 kernel: Spectre V2 : mitigation: Enabling conditional Indirect Branch Prediction Barrier
Jan 13 21:25:53.890718 kernel: Speculative Store Bypass: Mitigation: Speculative Store Bypass disabled via prctl
Jan 13 21:25:53.890725 kernel: Speculative Return Stack Overflow: IBPB-extending microcode not applied!
Jan 13 21:25:53.890734 kernel: Speculative Return Stack Overflow: WARNING: See https://kernel.org/doc/html/latest/admin-guide/hw-vuln/srso.html for mitigation options.
Jan 13 21:25:53.890741 kernel: Speculative Return Stack Overflow: Vulnerable: Safe RET, no microcode
Jan 13 21:25:53.890752 kernel: x86/fpu: Supporting XSAVE feature 0x001: 'x87 floating point registers'
Jan 13 21:25:53.890759 kernel: x86/fpu: Supporting XSAVE feature 0x002: 'SSE registers'
Jan 13 21:25:53.890773 kernel: x86/fpu: Supporting XSAVE feature 0x004: 'AVX registers'
Jan 13 21:25:53.890781 kernel: x86/fpu: xstate_offset[2]: 576, xstate_sizes[2]: 256
Jan 13 21:25:53.890789 kernel: x86/fpu: Enabled xstate features 0x7, context size is 832 bytes, using 'compacted' format.
Jan 13 21:25:53.890796 kernel: Freeing SMP alternatives memory: 32K
Jan 13 21:25:53.890804 kernel: pid_max: default: 32768 minimum: 301
Jan 13 21:25:53.890812 kernel: LSM: initializing lsm=lockdown,capability,landlock,selinux,integrity
Jan 13 21:25:53.890820 kernel: landlock: Up and running.
Jan 13 21:25:53.890830 kernel: SELinux: Initializing.
Jan 13 21:25:53.890837 kernel: Mount-cache hash table entries: 8192 (order: 4, 65536 bytes, linear)
Jan 13 21:25:53.890845 kernel: Mountpoint-cache hash table entries: 8192 (order: 4, 65536 bytes, linear)
Jan 13 21:25:53.890853 kernel: smpboot: CPU0: AMD EPYC 7402P 24-Core Processor (family: 0x17, model: 0x31, stepping: 0x0)
Jan 13 21:25:53.890861 kernel: RCU Tasks: Setting shift to 2 and lim to 1 rcu_task_cb_adjust=1 rcu_task_cpu_ids=4.
Jan 13 21:25:53.890869 kernel: RCU Tasks Rude: Setting shift to 2 and lim to 1 rcu_task_cb_adjust=1 rcu_task_cpu_ids=4.
Jan 13 21:25:53.890877 kernel: RCU Tasks Trace: Setting shift to 2 and lim to 1 rcu_task_cb_adjust=1 rcu_task_cpu_ids=4.
Jan 13 21:25:53.890884 kernel: Performance Events: Fam17h+ core perfctr, AMD PMU driver.
Jan 13 21:25:53.890894 kernel: ... version: 0
Jan 13 21:25:53.890904 kernel: ... bit width: 48
Jan 13 21:25:53.890912 kernel: ... generic registers: 6
Jan 13 21:25:53.890919 kernel: ... value mask: 0000ffffffffffff
Jan 13 21:25:53.890927 kernel: ... max period: 00007fffffffffff
Jan 13 21:25:53.890935 kernel: ... fixed-purpose events: 0
Jan 13 21:25:53.890942 kernel: ... event mask: 000000000000003f
Jan 13 21:25:53.890950 kernel: signal: max sigframe size: 1776
Jan 13 21:25:53.890957 kernel: rcu: Hierarchical SRCU implementation.
Jan 13 21:25:53.890965 kernel: rcu: Max phase no-delay instances is 400.
Jan 13 21:25:53.890975 kernel: smp: Bringing up secondary CPUs ...
Jan 13 21:25:53.890983 kernel: smpboot: x86: Booting SMP configuration:
Jan 13 21:25:53.890990 kernel: .... node #0, CPUs: #1 #2 #3
Jan 13 21:25:53.890998 kernel: smp: Brought up 1 node, 4 CPUs
Jan 13 21:25:53.891005 kernel: smpboot: Max logical packages: 1
Jan 13 21:25:53.891013 kernel: smpboot: Total of 4 processors activated (22357.98 BogoMIPS)
Jan 13 21:25:53.891021 kernel: devtmpfs: initialized
Jan 13 21:25:53.891028 kernel: x86/mm: Memory block size: 128MB
Jan 13 21:25:53.891036 kernel: clocksource: jiffies: mask: 0xffffffff max_cycles: 0xffffffff, max_idle_ns: 1911260446275000 ns
Jan 13 21:25:53.891046 kernel: futex hash table entries: 1024 (order: 4, 65536 bytes, linear)
Jan 13 21:25:53.891054 kernel: pinctrl core: initialized pinctrl subsystem
Jan 13 21:25:53.891062 kernel: NET: Registered PF_NETLINK/PF_ROUTE protocol family
Jan 13 21:25:53.891069 kernel: audit: initializing netlink subsys (disabled)
Jan 13 21:25:53.891077 kernel: audit: type=2000 audit(1736803553.553:1): state=initialized audit_enabled=0 res=1
Jan 13 21:25:53.891085 kernel: thermal_sys: Registered thermal governor 'step_wise'
Jan 13 21:25:53.891092 kernel: thermal_sys: Registered thermal governor 'user_space'
Jan 13 21:25:53.891100 kernel: cpuidle: using governor menu
Jan 13 21:25:53.891108 kernel: acpiphp: ACPI Hot Plug PCI Controller Driver version: 0.5
Jan 13 21:25:53.891118 kernel: dca service started, version 1.12.1
Jan 13 21:25:53.891125 kernel: PCI: MMCONFIG for domain 0000 [bus 00-ff] at [mem 0xb0000000-0xbfffffff] (base 0xb0000000)
Jan 13 21:25:53.891133 kernel: PCI: MMCONFIG at [mem 0xb0000000-0xbfffffff] reserved as E820 entry
Jan 13 21:25:53.891141 kernel: PCI: Using configuration type 1 for base access
Jan 13 21:25:53.891149 kernel: kprobes: kprobe jump-optimization is enabled. All kprobes are optimized if possible.
Jan 13 21:25:53.891156 kernel: HugeTLB: registered 1.00 GiB page size, pre-allocated 0 pages
Jan 13 21:25:53.891164 kernel: HugeTLB: 16380 KiB vmemmap can be freed for a 1.00 GiB page
Jan 13 21:25:53.891172 kernel: HugeTLB: registered 2.00 MiB page size, pre-allocated 0 pages
Jan 13 21:25:53.891179 kernel: HugeTLB: 28 KiB vmemmap can be freed for a 2.00 MiB page
Jan 13 21:25:53.891190 kernel: ACPI: Added _OSI(Module Device)
Jan 13 21:25:53.891197 kernel: ACPI: Added _OSI(Processor Device)
Jan 13 21:25:53.891205 kernel: ACPI: Added _OSI(3.0 _SCP Extensions)
Jan 13 21:25:53.891212 kernel: ACPI: Added _OSI(Processor Aggregator Device)
Jan 13 21:25:53.891220 kernel: ACPI: 1 ACPI AML tables successfully acquired and loaded
Jan 13 21:25:53.891228 kernel: ACPI: _OSC evaluation for CPUs failed, trying _PDC
Jan 13 21:25:53.891235 kernel: ACPI: Interpreter enabled
Jan 13 21:25:53.891243 kernel: ACPI: PM: (supports S0 S3 S5)
Jan 13 21:25:53.891250 kernel: ACPI: Using IOAPIC for interrupt routing
Jan 13 21:25:53.891262 kernel: PCI: Using host bridge windows from ACPI; if necessary, use "pci=nocrs" and report a bug
Jan 13 21:25:53.891270 kernel: PCI: Using E820 reservations for host bridge windows
Jan 13 21:25:53.891278 kernel: ACPI: Enabled 2 GPEs in block 00 to 3F
Jan 13 21:25:53.891286 kernel: ACPI: PCI Root Bridge [PCI0] (domain 0000 [bus 00-ff])
Jan 13 21:25:53.891510 kernel: acpi PNP0A08:00: _OSC: OS supports [ExtendedConfig ASPM ClockPM Segments MSI HPX-Type3]
Jan 13 21:25:53.891647 kernel: acpi PNP0A08:00: _OSC: platform does not support [PCIeHotplug LTR]
Jan 13 21:25:53.891784 kernel: acpi PNP0A08:00: _OSC: OS now controls [PME AER PCIeCapability]
Jan 13 21:25:53.891799 kernel: PCI host bridge to bus 0000:00
Jan 13 21:25:53.891941 kernel: pci_bus 0000:00: root bus resource [io 0x0000-0x0cf7 window]
Jan 13 21:25:53.892059 kernel: pci_bus 0000:00: root bus resource [io 0x0d00-0xffff window]
Jan 13 21:25:53.892175 kernel: pci_bus 0000:00: root bus resource [mem 0x000a0000-0x000bffff window]
Jan 13 21:25:53.892288 kernel: pci_bus 0000:00: root bus resource [mem 0x9d000000-0xafffffff window]
Jan 13 21:25:53.892401 kernel: pci_bus 0000:00: root bus resource [mem 0xc0000000-0xfebfffff window]
Jan 13 21:25:53.892534 kernel: pci_bus 0000:00: root bus resource [mem 0x100000000-0x8ffffffff window]
Jan 13 21:25:53.892654 kernel: pci_bus 0000:00: root bus resource [bus 00-ff]
Jan 13 21:25:53.892814 kernel: pci 0000:00:00.0: [8086:29c0] type 00 class 0x060000
Jan 13 21:25:53.892955 kernel: pci 0000:00:01.0: [1234:1111] type 00 class 0x030000
Jan 13 21:25:53.893082 kernel: pci 0000:00:01.0: reg 0x10: [mem 0xfd000000-0xfdffffff pref]
Jan 13 21:25:53.893209 kernel: pci 0000:00:01.0: reg 0x18: [mem 0xfebd0000-0xfebd0fff]
Jan 13 21:25:53.893334 kernel: pci 0000:00:01.0: reg 0x30: [mem 0xfebc0000-0xfebcffff pref]
Jan 13 21:25:53.893597 kernel: pci 0000:00:01.0: Video device with shadowed ROM at [mem 0x000c0000-0x000dffff]
Jan 13 21:25:53.893753 kernel: pci 0000:00:02.0: [1af4:1005] type 00 class 0x00ff00
Jan 13 21:25:53.893889 kernel: pci 0000:00:02.0: reg 0x10: [io 0xc0c0-0xc0df]
Jan 13 21:25:53.894015 kernel: pci 0000:00:02.0: reg 0x14: [mem 0xfebd1000-0xfebd1fff]
Jan 13 21:25:53.894142 kernel: pci 0000:00:02.0: reg 0x20: [mem 0xfe000000-0xfe003fff 64bit pref]
Jan 13 21:25:53.894288 kernel: pci 0000:00:03.0: [1af4:1001] type 00 class 0x010000
Jan 13 21:25:53.894433 kernel: pci 0000:00:03.0: reg 0x10: [io 0xc000-0xc07f]
Jan 13 21:25:53.894562 kernel: pci 0000:00:03.0: reg 0x14: [mem 0xfebd2000-0xfebd2fff]
Jan 13 21:25:53.894693 kernel: pci 0000:00:03.0: reg 0x20: [mem 0xfe004000-0xfe007fff 64bit pref]
Jan 13 21:25:53.894935 kernel: pci 0000:00:04.0: [1af4:1000] type 00 class 0x020000
Jan 13 21:25:53.895069 kernel: pci 0000:00:04.0: reg 0x10: [io 0xc0e0-0xc0ff]
Jan 13 21:25:53.895247 kernel: pci 0000:00:04.0: reg 0x14: [mem 0xfebd3000-0xfebd3fff]
Jan 13 21:25:53.895381 kernel: pci 0000:00:04.0: reg 0x20: [mem 0xfe008000-0xfe00bfff 64bit pref]
Jan 13 21:25:53.895534 kernel: pci 0000:00:04.0: reg 0x30: [mem 0xfeb80000-0xfebbffff pref]
Jan 13 21:25:53.895689 kernel: pci 0000:00:1f.0: [8086:2918] type 00 class 0x060100
Jan 13 21:25:53.895830 kernel: pci 0000:00:1f.0: quirk: [io 0x0600-0x067f] claimed by ICH6 ACPI/GPIO/TCO
Jan 13 21:25:53.895973 kernel: pci 0000:00:1f.2: [8086:2922] type 00 class 0x010601
Jan 13 21:25:53.896100 kernel: pci 0000:00:1f.2: reg 0x20: [io 0xc100-0xc11f]
Jan 13 21:25:53.896225 kernel: pci 0000:00:1f.2: reg 0x24: [mem 0xfebd4000-0xfebd4fff]
Jan 13 21:25:53.896363 kernel: pci 0000:00:1f.3: [8086:2930] type 00 class 0x0c0500
Jan 13 21:25:53.896651 kernel: pci 0000:00:1f.3: reg 0x20: [io 0x0700-0x073f]
Jan 13 21:25:53.896670 kernel: ACPI: PCI: Interrupt link LNKA configured for IRQ 10
Jan 13 21:25:53.896678 kernel: ACPI: PCI: Interrupt link LNKB configured for IRQ 10
Jan 13 21:25:53.896686 kernel: ACPI: PCI: Interrupt link LNKC configured for IRQ 11
Jan 13 21:25:53.896693 kernel: ACPI: PCI: Interrupt link LNKD configured for IRQ 11
Jan 13 21:25:53.896701 kernel: ACPI: PCI: Interrupt link LNKE configured for IRQ 10
Jan 13 21:25:53.896709 kernel: ACPI: PCI: Interrupt link LNKF configured for IRQ 10
Jan 13 21:25:53.896717 kernel: ACPI: PCI: Interrupt link LNKG configured for IRQ 11
Jan 13 21:25:53.896725 kernel: ACPI: PCI: Interrupt link LNKH configured for IRQ 11
Jan 13 21:25:53.896732 kernel: ACPI: PCI: Interrupt link GSIA configured for IRQ 16
Jan 13 21:25:53.896743 kernel: ACPI: PCI: Interrupt link GSIB configured for IRQ 17
Jan 13 21:25:53.896751 kernel: ACPI: PCI: Interrupt link GSIC configured for IRQ 18
Jan 13 21:25:53.896758 kernel: ACPI: PCI: Interrupt link GSID configured for IRQ 19
Jan 13 21:25:53.896766 kernel: ACPI: PCI: Interrupt link GSIE configured for IRQ 20
Jan 13 21:25:53.896783 kernel: ACPI: PCI: Interrupt link GSIF configured for IRQ 21
Jan 13 21:25:53.896791 kernel: ACPI: PCI: Interrupt link GSIG configured for IRQ 22
Jan 13 21:25:53.896799 kernel: ACPI: PCI: Interrupt link GSIH configured for IRQ 23
Jan 13 21:25:53.896806 kernel: iommu: Default domain type: Translated
Jan 13 21:25:53.896814 kernel: iommu: DMA domain TLB invalidation policy: lazy mode
Jan 13 21:25:53.896825 kernel: PCI: Using ACPI for IRQ routing
Jan 13 21:25:53.896832 kernel: PCI: pci_cache_line_size set to 64 bytes
Jan 13 21:25:53.896840 kernel: e820: reserve RAM buffer [mem 0x0009fc00-0x0009ffff]
Jan 13 21:25:53.896848 kernel: e820: reserve RAM buffer [mem 0x9cfdc000-0x9fffffff]
Jan 13 21:25:53.896975 kernel: pci 0000:00:01.0: vgaarb: setting as boot VGA device
Jan 13 21:25:53.897100 kernel: pci 0000:00:01.0: vgaarb: bridge control possible
Jan 13 21:25:53.897225 kernel: pci 0000:00:01.0: vgaarb: VGA device added: decodes=io+mem,owns=io+mem,locks=none
Jan 13 21:25:53.897235 kernel: vgaarb: loaded
Jan 13 21:25:53.897247 kernel: hpet0: at MMIO 0xfed00000, IRQs 2, 8, 0
Jan 13 21:25:53.897255 kernel: hpet0: 3 comparators, 64-bit 100.000000 MHz counter
Jan 13 21:25:53.897263 kernel: clocksource: Switched to clocksource kvm-clock
Jan 13 21:25:53.897271 kernel: VFS: Disk quotas dquot_6.6.0
Jan 13 21:25:53.897279 kernel: VFS: Dquot-cache hash table entries: 512 (order 0, 4096 bytes)
Jan 13 21:25:53.897286 kernel: pnp: PnP ACPI init
Jan 13 21:25:53.897529 kernel: system 00:05: [mem 0xb0000000-0xbfffffff window] has been reserved
Jan 13 21:25:53.897543 kernel: pnp: PnP ACPI: found 6 devices
Jan 13 21:25:53.897556 kernel: clocksource: acpi_pm: mask: 0xffffff max_cycles: 0xffffff, max_idle_ns: 2085701024 ns
Jan 13 21:25:53.897564 kernel: NET: Registered PF_INET protocol family
Jan 13 21:25:53.897572 kernel: IP idents hash table entries: 65536 (order: 7, 524288 bytes, linear)
Jan 13 21:25:53.897580 kernel: tcp_listen_portaddr_hash hash table entries: 2048 (order: 3, 32768 bytes, linear)
Jan 13 21:25:53.897587 kernel: Table-perturb hash table entries: 65536 (order: 6, 262144 bytes, linear)
Jan 13 21:25:53.897596 kernel: TCP established hash table entries: 32768 (order: 6, 262144 bytes, linear)
Jan 13 21:25:53.897603 kernel: TCP bind hash table entries: 32768 (order: 8, 1048576 bytes, linear)
Jan 13 21:25:53.897611 kernel: TCP: Hash tables configured (established 32768 bind 32768)
Jan 13 21:25:53.897619 kernel: UDP hash table entries: 2048 (order: 4, 65536 bytes, linear)
Jan 13 21:25:53.897629 kernel: UDP-Lite hash table entries: 2048 (order: 4, 65536 bytes, linear)
Jan 13 21:25:53.897637 kernel: NET: Registered PF_UNIX/PF_LOCAL protocol family
Jan 13 21:25:53.897645 kernel: NET: Registered PF_XDP protocol family
Jan 13 21:25:53.897776 kernel: pci_bus 0000:00: resource 4 [io 0x0000-0x0cf7 window]
Jan 13 21:25:53.897897 kernel: pci_bus 0000:00: resource 5 [io 0x0d00-0xffff window]
Jan 13 21:25:53.898014 kernel: pci_bus 0000:00: resource 6 [mem 0x000a0000-0x000bffff window]
Jan 13 21:25:53.898129 kernel: pci_bus 0000:00: resource 7 [mem 0x9d000000-0xafffffff window]
Jan 13 21:25:53.898244 kernel: pci_bus 0000:00: resource 8 [mem 0xc0000000-0xfebfffff window]
Jan 13 21:25:53.898364 kernel: pci_bus 0000:00: resource 9 [mem 0x100000000-0x8ffffffff window]
Jan 13 21:25:53.898374 kernel: PCI: CLS 0 bytes, default 64
Jan 13 21:25:53.898382 kernel: Initialise system trusted keyrings
Jan 13 21:25:53.898390 kernel: workingset: timestamp_bits=39 max_order=20 bucket_order=0
Jan 13 21:25:53.898398 kernel: Key type asymmetric registered
Jan 13 21:25:53.898420 kernel: Asymmetric key parser 'x509' registered
Jan 13 21:25:53.898428 kernel: Block layer SCSI generic (bsg) driver version 0.4 loaded (major 251)
Jan 13 21:25:53.898436 kernel: io scheduler mq-deadline registered
Jan 13 21:25:53.898443 kernel: io scheduler kyber registered
Jan 13 21:25:53.898451 kernel: io scheduler bfq registered
Jan 13 21:25:53.898463 kernel: ioatdma: Intel(R) QuickData Technology Driver 5.00
Jan 13 21:25:53.898472 kernel: ACPI: \_SB_.GSIG: Enabled at IRQ 22
Jan 13 21:25:53.898479 kernel: ACPI: \_SB_.GSIH: Enabled at IRQ 23
Jan 13 21:25:53.898487 kernel: ACPI: \_SB_.GSIE: Enabled at IRQ 20
Jan 13 21:25:53.898495 kernel: Serial: 8250/16550 driver, 4 ports, IRQ sharing enabled
Jan 13 21:25:53.898503 kernel: 00:03: ttyS0 at I/O 0x3f8 (irq = 4, base_baud = 115200) is a 16550A
Jan 13 21:25:53.898511 kernel: i8042: PNP: PS/2 Controller [PNP0303:KBD,PNP0f13:MOU] at 0x60,0x64 irq 1,12
Jan 13 21:25:53.898519 kernel: serio: i8042 KBD port at 0x60,0x64 irq 1
Jan 13 21:25:53.898526 kernel: serio: i8042 AUX port at 0x60,0x64 irq 12
Jan 13 21:25:53.898671 kernel: rtc_cmos 00:04: RTC can wake from S4
Jan 13 21:25:53.898683 kernel: input: AT Translated Set 2 keyboard as /devices/platform/i8042/serio0/input/input0
Jan 13 21:25:53.898809 kernel: rtc_cmos 00:04: registered as rtc0
Jan 13 21:25:53.898930 kernel: rtc_cmos 00:04: setting system clock to 2025-01-13T21:25:53 UTC (1736803553)
Jan 13 21:25:53.899048 kernel: rtc_cmos 00:04: alarms up to one day, y3k, 242 bytes nvram, hpet irqs
Jan 13 21:25:53.899058 kernel: amd_pstate: the _CPC object is not present in SBIOS or ACPI disabled
Jan 13 21:25:53.899066 kernel: NET: Registered PF_INET6 protocol family
Jan 13 21:25:53.899078 kernel: Segment Routing with IPv6
Jan 13 21:25:53.899085 kernel: In-situ OAM (IOAM) with IPv6
Jan 13 21:25:53.899093 kernel: NET: Registered PF_PACKET protocol family
Jan 13 21:25:53.899101 kernel: Key type dns_resolver registered
Jan 13 21:25:53.899109 kernel: IPI shorthand broadcast: enabled
Jan 13 21:25:53.899116 kernel: sched_clock: Marking stable (829005479, 104967544)->(994584371, -60611348)
Jan 13 21:25:53.899124 kernel: registered taskstats version 1
Jan 13 21:25:53.899132 kernel: Loading compiled-in X.509 certificates
Jan 13 21:25:53.899140 kernel: Loaded X.509 cert 'Kinvolk GmbH: Module signing key for 6.6.71-flatcar: e8ca4908f7ff887d90a0430272c92dde55624447'
Jan 13 21:25:53.899150 kernel: Key type .fscrypt registered
Jan 13 21:25:53.899158 kernel: Key type fscrypt-provisioning registered
Jan 13 21:25:53.899166 kernel: ima: No TPM chip found, activating TPM-bypass!
Jan 13 21:25:53.899174 kernel: ima: Allocated hash algorithm: sha1
Jan 13 21:25:53.899181 kernel: ima: No architecture policies found
Jan 13 21:25:53.899189 kernel: clk: Disabling unused clocks
Jan 13 21:25:53.899197 kernel: Freeing unused kernel image (initmem) memory: 42844K
Jan 13 21:25:53.899205 kernel: Write protecting the kernel read-only data: 36864k
Jan 13 21:25:53.899213 kernel: Freeing unused kernel image (rodata/data gap) memory: 1848K
Jan 13 21:25:53.899223 kernel: Run /init as init process
Jan 13 21:25:53.899231 kernel: with arguments:
Jan 13 21:25:53.899238 kernel: /init
Jan 13 21:25:53.899246 kernel: with environment:
Jan 13 21:25:53.899254 kernel: HOME=/
Jan 13 21:25:53.899261 kernel: TERM=linux
Jan 13 21:25:53.899269 kernel: BOOT_IMAGE=/flatcar/vmlinuz-a
Jan 13 21:25:53.899279 systemd[1]: systemd 255 running in system mode (+PAM +AUDIT +SELINUX -APPARMOR +IMA +SMACK +SECCOMP +GCRYPT -GNUTLS +OPENSSL -ACL +BLKID +CURL +ELFUTILS -FIDO2 +IDN2 -IDN +IPTC +KMOD +LIBCRYPTSETUP +LIBFDISK +PCRE2 -PWQUALITY -P11KIT -QRENCODE +TPM2 +BZIP2 +LZ4 +XZ +ZLIB +ZSTD -BPF_FRAMEWORK -XKBCOMMON +UTMP -SYSVINIT default-hierarchy=unified)
Jan 13 21:25:53.899292 systemd[1]: Detected virtualization kvm.
Jan 13 21:25:53.899300 systemd[1]: Detected architecture x86-64.
Jan 13 21:25:53.899309 systemd[1]: Running in initrd.
Jan 13 21:25:53.899317 systemd[1]: No hostname configured, using default hostname.
Jan 13 21:25:53.899325 systemd[1]: Hostname set to .
Jan 13 21:25:53.899334 systemd[1]: Initializing machine ID from VM UUID.
Jan 13 21:25:53.899342 systemd[1]: Queued start job for default target initrd.target.
Jan 13 21:25:53.899350 systemd[1]: Started clevis-luks-askpass.path - Forward Password Requests to Clevis Directory Watch.
Jan 13 21:25:53.899362 systemd[1]: Started systemd-ask-password-console.path - Dispatch Password Requests to Console Directory Watch.
Jan 13 21:25:53.899371 systemd[1]: Expecting device dev-disk-by\x2dlabel-EFI\x2dSYSTEM.device - /dev/disk/by-label/EFI-SYSTEM...
Jan 13 21:25:53.899391 systemd[1]: Expecting device dev-disk-by\x2dlabel-OEM.device - /dev/disk/by-label/OEM...
Jan 13 21:25:53.899403 systemd[1]: Expecting device dev-disk-by\x2dlabel-ROOT.device - /dev/disk/by-label/ROOT...
Jan 13 21:25:53.899482 systemd[1]: Expecting device dev-disk-by\x2dpartlabel-USR\x2dA.device - /dev/disk/by-partlabel/USR-A...
Jan 13 21:25:53.899497 systemd[1]: Expecting device dev-disk-by\x2dpartuuid-7130c94a\x2d213a\x2d4e5a\x2d8e26\x2d6cce9662f132.device - /dev/disk/by-partuuid/7130c94a-213a-4e5a-8e26-6cce9662f132...
Jan 13 21:25:53.899506 systemd[1]: Expecting device dev-mapper-usr.device - /dev/mapper/usr...
Jan 13 21:25:53.899515 systemd[1]: Reached target cryptsetup-pre.target - Local Encrypted Volumes (Pre).
Jan 13 21:25:53.899523 systemd[1]: Reached target cryptsetup.target - Local Encrypted Volumes.
Jan 13 21:25:53.899532 systemd[1]: Reached target paths.target - Path Units.
Jan 13 21:25:53.899541 systemd[1]: Reached target slices.target - Slice Units.
Jan 13 21:25:53.899549 systemd[1]: Reached target swap.target - Swaps.
Jan 13 21:25:53.899558 systemd[1]: Reached target timers.target - Timer Units.
Jan 13 21:25:53.899569 systemd[1]: Listening on iscsid.socket - Open-iSCSI iscsid Socket.
Jan 13 21:25:53.899577 systemd[1]: Listening on iscsiuio.socket - Open-iSCSI iscsiuio Socket.
Jan 13 21:25:53.899586 systemd[1]: Listening on systemd-journald-dev-log.socket - Journal Socket (/dev/log).
Jan 13 21:25:53.899594 systemd[1]: Listening on systemd-journald.socket - Journal Socket.
Jan 13 21:25:53.899603 systemd[1]: Listening on systemd-networkd.socket - Network Service Netlink Socket.
Jan 13 21:25:53.899611 systemd[1]: Listening on systemd-udevd-control.socket - udev Control Socket.
Jan 13 21:25:53.899620 systemd[1]: Listening on systemd-udevd-kernel.socket - udev Kernel Socket.
Jan 13 21:25:53.899628 systemd[1]: Reached target sockets.target - Socket Units.
Jan 13 21:25:53.899637 systemd[1]: Starting ignition-setup-pre.service - Ignition env setup...
Jan 13 21:25:53.899648 systemd[1]: Starting kmod-static-nodes.service - Create List of Static Device Nodes...
Jan 13 21:25:53.899656 systemd[1]: Finished network-cleanup.service - Network Cleanup.
Jan 13 21:25:53.899665 systemd[1]: Starting systemd-fsck-usr.service...
Jan 13 21:25:53.899673 systemd[1]: Starting systemd-journald.service - Journal Service...
Jan 13 21:25:53.899682 systemd[1]: Starting systemd-modules-load.service - Load Kernel Modules...
Jan 13 21:25:53.899690 systemd[1]: Starting systemd-vconsole-setup.service - Virtual Console Setup...
Jan 13 21:25:53.899699 systemd[1]: Finished ignition-setup-pre.service - Ignition env setup.
Jan 13 21:25:53.899707 systemd[1]: Finished kmod-static-nodes.service - Create List of Static Device Nodes.
Jan 13 21:25:53.899718 systemd[1]: Finished systemd-fsck-usr.service.
Jan 13 21:25:53.899749 systemd-journald[191]: Collecting audit messages is disabled.
Jan 13 21:25:53.899777 systemd[1]: Starting systemd-tmpfiles-setup-dev-early.service - Create Static Device Nodes in /dev gracefully...
Jan 13 21:25:53.899789 systemd[1]: Finished systemd-tmpfiles-setup-dev-early.service - Create Static Device Nodes in /dev gracefully.
Jan 13 21:25:53.899798 systemd-journald[191]: Journal started
Jan 13 21:25:53.899818 systemd-journald[191]: Runtime Journal (/run/log/journal/6456edc7c205402d8a90d288ace0035f) is 6.0M, max 48.4M, 42.3M free.
Jan 13 21:25:53.895278 systemd-modules-load[194]: Inserted module 'overlay'
Jan 13 21:25:53.933651 kernel: bridge: filtering via arp/ip/ip6tables is no longer available by default. Update your scripts to load br_netfilter if you need this.
Jan 13 21:25:53.933667 kernel: Bridge firewalling registered
Jan 13 21:25:53.933677 systemd[1]: Started systemd-journald.service - Journal Service.
Jan 13 21:25:53.922725 systemd-modules-load[194]: Inserted module 'br_netfilter'
Jan 13 21:25:53.934010 systemd[1]: Finished systemd-modules-load.service - Load Kernel Modules.
Jan 13 21:25:53.947583 systemd[1]: Starting systemd-sysctl.service - Apply Kernel Variables...
Jan 13 21:25:53.948555 systemd[1]: Starting systemd-tmpfiles-setup-dev.service - Create Static Device Nodes in /dev...
Jan 13 21:25:53.950833 systemd[1]: Starting systemd-tmpfiles-setup.service - Create System Files and Directories...
Jan 13 21:25:53.953092 systemd[1]: Finished systemd-vconsole-setup.service - Virtual Console Setup.
Jan 13 21:25:53.961108 systemd[1]: Starting dracut-cmdline-ask.service - dracut ask for additional cmdline parameters...
Jan 13 21:25:53.967865 systemd[1]: Finished systemd-tmpfiles-setup-dev.service - Create Static Device Nodes in /dev.
Jan 13 21:25:53.969028 systemd[1]: Finished systemd-tmpfiles-setup.service - Create System Files and Directories.
Jan 13 21:25:53.974790 systemd[1]: Finished systemd-sysctl.service - Apply Kernel Variables.
Jan 13 21:25:53.980635 systemd[1]: Starting systemd-resolved.service - Network Name Resolution...
Jan 13 21:25:53.981358 systemd[1]: Finished dracut-cmdline-ask.service - dracut ask for additional cmdline parameters.
Jan 13 21:25:53.984241 systemd[1]: Starting dracut-cmdline.service - dracut cmdline hook...
Jan 13 21:25:54.000215 dracut-cmdline[231]: dracut-dracut-053
Jan 13 21:25:54.004270 dracut-cmdline[231]: Using kernel command line parameters: rd.driver.pre=btrfs rootflags=rw mount.usrflags=ro BOOT_IMAGE=/flatcar/vmlinuz-a mount.usr=/dev/mapper/usr verity.usr=PARTUUID=7130c94a-213a-4e5a-8e26-6cce9662f132 rootflags=rw mount.usrflags=ro consoleblank=0 root=LABEL=ROOT console=ttyS0,115200 flatcar.first_boot=detected verity.usrhash=8945029ddd0f3864592f8746dde99cfcba228e0d3cb946f5938103dbe8733507
Jan 13 21:25:54.015388 systemd-resolved[226]: Positive Trust Anchors:
Jan 13 21:25:54.015418 systemd-resolved[226]: .
IN DS 20326 8 2 e06d44b80b8f1d39a95c0b0d7c65d08458e880409bbc683457104237c7f8ec8d Jan 13 21:25:54.015452 systemd-resolved[226]: Negative trust anchors: home.arpa 10.in-addr.arpa 16.172.in-addr.arpa 17.172.in-addr.arpa 18.172.in-addr.arpa 19.172.in-addr.arpa 20.172.in-addr.arpa 21.172.in-addr.arpa 22.172.in-addr.arpa 23.172.in-addr.arpa 24.172.in-addr.arpa 25.172.in-addr.arpa 26.172.in-addr.arpa 27.172.in-addr.arpa 28.172.in-addr.arpa 29.172.in-addr.arpa 30.172.in-addr.arpa 31.172.in-addr.arpa 170.0.0.192.in-addr.arpa 171.0.0.192.in-addr.arpa 168.192.in-addr.arpa d.f.ip6.arpa ipv4only.arpa resolver.arpa corp home internal intranet lan local private test Jan 13 21:25:54.018010 systemd-resolved[226]: Defaulting to hostname 'linux'. Jan 13 21:25:54.019236 systemd[1]: Started systemd-resolved.service - Network Name Resolution. Jan 13 21:25:54.024945 systemd[1]: Reached target nss-lookup.target - Host and Network Name Lookups. Jan 13 21:25:54.109461 kernel: SCSI subsystem initialized Jan 13 21:25:54.118440 kernel: Loading iSCSI transport class v2.0-870. Jan 13 21:25:54.129448 kernel: iscsi: registered transport (tcp) Jan 13 21:25:54.150444 kernel: iscsi: registered transport (qla4xxx) Jan 13 21:25:54.150477 kernel: QLogic iSCSI HBA Driver Jan 13 21:25:54.198546 systemd[1]: Finished dracut-cmdline.service - dracut cmdline hook. Jan 13 21:25:54.213555 systemd[1]: Starting dracut-pre-udev.service - dracut pre-udev hook... Jan 13 21:25:54.240917 kernel: device-mapper: core: CONFIG_IMA_DISABLE_HTABLE is disabled. Duplicate IMA measurements will not be recorded in the IMA log. 
Jan 13 21:25:54.240974 kernel: device-mapper: uevent: version 1.0.3 Jan 13 21:25:54.242231 kernel: device-mapper: ioctl: 4.48.0-ioctl (2023-03-01) initialised: dm-devel@redhat.com Jan 13 21:25:54.288476 kernel: raid6: avx2x4 gen() 28652 MB/s Jan 13 21:25:54.305442 kernel: raid6: avx2x2 gen() 30188 MB/s Jan 13 21:25:54.322524 kernel: raid6: avx2x1 gen() 25855 MB/s Jan 13 21:25:54.322550 kernel: raid6: using algorithm avx2x2 gen() 30188 MB/s Jan 13 21:25:54.340552 kernel: raid6: .... xor() 19847 MB/s, rmw enabled Jan 13 21:25:54.340577 kernel: raid6: using avx2x2 recovery algorithm Jan 13 21:25:54.361440 kernel: xor: automatically using best checksumming function avx Jan 13 21:25:54.517444 kernel: Btrfs loaded, zoned=no, fsverity=no Jan 13 21:25:54.530710 systemd[1]: Finished dracut-pre-udev.service - dracut pre-udev hook. Jan 13 21:25:54.546676 systemd[1]: Starting systemd-udevd.service - Rule-based Manager for Device Events and Files... Jan 13 21:25:54.559671 systemd-udevd[413]: Using default interface naming scheme 'v255'. Jan 13 21:25:54.564599 systemd[1]: Started systemd-udevd.service - Rule-based Manager for Device Events and Files. Jan 13 21:25:54.579533 systemd[1]: Starting dracut-pre-trigger.service - dracut pre-trigger hook... Jan 13 21:25:54.598921 dracut-pre-trigger[419]: rd.md=0: removing MD RAID activation Jan 13 21:25:54.626779 systemd[1]: Finished dracut-pre-trigger.service - dracut pre-trigger hook. Jan 13 21:25:54.635591 systemd[1]: Starting systemd-udev-trigger.service - Coldplug All udev Devices... Jan 13 21:25:54.708684 systemd[1]: Finished systemd-udev-trigger.service - Coldplug All udev Devices. Jan 13 21:25:54.714662 systemd[1]: Starting dracut-initqueue.service - dracut initqueue hook... Jan 13 21:25:54.730096 systemd[1]: Finished dracut-initqueue.service - dracut initqueue hook. Jan 13 21:25:54.734851 systemd[1]: Reached target remote-fs-pre.target - Preparation for Remote File Systems. 
Jan 13 21:25:54.735324 systemd[1]: Reached target remote-cryptsetup.target - Remote Encrypted Volumes. Jan 13 21:25:54.735903 systemd[1]: Reached target remote-fs.target - Remote File Systems. Jan 13 21:25:54.747673 systemd[1]: Starting dracut-pre-mount.service - dracut pre-mount hook... Jan 13 21:25:54.758448 systemd[1]: Finished dracut-pre-mount.service - dracut pre-mount hook. Jan 13 21:25:54.767448 kernel: cryptd: max_cpu_qlen set to 1000 Jan 13 21:25:54.769455 kernel: virtio_blk virtio1: 4/0/0 default/read/poll queues Jan 13 21:25:54.791347 kernel: virtio_blk virtio1: [vda] 19775488 512-byte logical blocks (10.1 GB/9.43 GiB) Jan 13 21:25:54.791558 kernel: GPT:Primary header thinks Alt. header is not at the end of the disk. Jan 13 21:25:54.791575 kernel: GPT:9289727 != 19775487 Jan 13 21:25:54.791589 kernel: GPT:Alternate GPT header not at the end of the disk. Jan 13 21:25:54.791603 kernel: GPT:9289727 != 19775487 Jan 13 21:25:54.791616 kernel: GPT: Use GNU Parted to correct GPT errors. Jan 13 21:25:54.791630 kernel: vda: vda1 vda2 vda3 vda4 vda6 vda7 vda9 Jan 13 21:25:54.791644 kernel: AVX2 version of gcm_enc/dec engaged. Jan 13 21:25:54.791666 kernel: AES CTR mode by8 optimization enabled Jan 13 21:25:54.787249 systemd[1]: dracut-cmdline-ask.service: Deactivated successfully. Jan 13 21:25:54.793082 kernel: libata version 3.00 loaded. Jan 13 21:25:54.787437 systemd[1]: Stopped dracut-cmdline-ask.service - dracut ask for additional cmdline parameters. Jan 13 21:25:54.789679 systemd[1]: Stopping dracut-cmdline-ask.service - dracut ask for additional cmdline parameters... Jan 13 21:25:54.791023 systemd[1]: systemd-vconsole-setup.service: Deactivated successfully. Jan 13 21:25:54.791213 systemd[1]: Stopped systemd-vconsole-setup.service - Virtual Console Setup. Jan 13 21:25:54.798076 systemd[1]: Stopping systemd-vconsole-setup.service - Virtual Console Setup... 
Jan 13 21:25:54.807511 kernel: ahci 0000:00:1f.2: version 3.0 Jan 13 21:25:54.847087 kernel: ACPI: \_SB_.GSIA: Enabled at IRQ 16 Jan 13 21:25:54.847110 kernel: ahci 0000:00:1f.2: AHCI 0001.0000 32 slots 6 ports 1.5 Gbps 0x3f impl SATA mode Jan 13 21:25:54.847296 kernel: ahci 0000:00:1f.2: flags: 64bit ncq only Jan 13 21:25:54.847495 kernel: scsi host0: ahci Jan 13 21:25:54.847681 kernel: scsi host1: ahci Jan 13 21:25:54.847871 kernel: BTRFS: device fsid b8e2d3c5-4bed-4339-bed5-268c66823686 devid 1 transid 37 /dev/vda3 scanned by (udev-worker) (459) Jan 13 21:25:54.847886 kernel: scsi host2: ahci Jan 13 21:25:54.848058 kernel: BTRFS: device label OEM devid 1 transid 12 /dev/vda6 scanned by (udev-worker) (458) Jan 13 21:25:54.848072 kernel: scsi host3: ahci Jan 13 21:25:54.848252 kernel: scsi host4: ahci Jan 13 21:25:54.848467 kernel: scsi host5: ahci Jan 13 21:25:54.848672 kernel: ata1: SATA max UDMA/133 abar m4096@0xfebd4000 port 0xfebd4100 irq 34 Jan 13 21:25:54.848688 kernel: ata2: SATA max UDMA/133 abar m4096@0xfebd4000 port 0xfebd4180 irq 34 Jan 13 21:25:54.848702 kernel: ata3: SATA max UDMA/133 abar m4096@0xfebd4000 port 0xfebd4200 irq 34 Jan 13 21:25:54.848716 kernel: ata4: SATA max UDMA/133 abar m4096@0xfebd4000 port 0xfebd4280 irq 34 Jan 13 21:25:54.848729 kernel: ata5: SATA max UDMA/133 abar m4096@0xfebd4000 port 0xfebd4300 irq 34 Jan 13 21:25:54.848752 kernel: ata6: SATA max UDMA/133 abar m4096@0xfebd4000 port 0xfebd4380 irq 34 Jan 13 21:25:54.813418 systemd[1]: Starting systemd-vconsole-setup.service - Virtual Console Setup... Jan 13 21:25:54.843980 systemd[1]: Found device dev-disk-by\x2dlabel-ROOT.device - /dev/disk/by-label/ROOT. Jan 13 21:25:54.850548 systemd[1]: Found device dev-disk-by\x2dlabel-EFI\x2dSYSTEM.device - /dev/disk/by-label/EFI-SYSTEM. Jan 13 21:25:54.860114 systemd[1]: Found device dev-disk-by\x2dpartlabel-USR\x2dA.device - /dev/disk/by-partlabel/USR-A. 
Jan 13 21:25:54.860175 systemd[1]: Found device dev-disk-by\x2dpartuuid-7130c94a\x2d213a\x2d4e5a\x2d8e26\x2d6cce9662f132.device - /dev/disk/by-partuuid/7130c94a-213a-4e5a-8e26-6cce9662f132. Jan 13 21:25:54.866022 systemd[1]: Found device dev-disk-by\x2dlabel-OEM.device - /dev/disk/by-label/OEM. Jan 13 21:25:54.899893 systemd[1]: Starting disk-uuid.service - Generate new UUID for disk GPT if necessary... Jan 13 21:25:54.932154 kernel: vda: vda1 vda2 vda3 vda4 vda6 vda7 vda9 Jan 13 21:25:54.932178 kernel: vda: vda1 vda2 vda3 vda4 vda6 vda7 vda9 Jan 13 21:25:54.932189 disk-uuid[550]: Primary Header is updated. Jan 13 21:25:54.932189 disk-uuid[550]: Secondary Entries is updated. Jan 13 21:25:54.932189 disk-uuid[550]: Secondary Header is updated. Jan 13 21:25:54.938209 systemd[1]: Finished systemd-vconsole-setup.service - Virtual Console Setup. Jan 13 21:25:54.957573 systemd[1]: Starting dracut-cmdline-ask.service - dracut ask for additional cmdline parameters... Jan 13 21:25:54.984197 systemd[1]: Finished dracut-cmdline-ask.service - dracut ask for additional cmdline parameters. 
Jan 13 21:25:55.154631 kernel: ata5: SATA link down (SStatus 0 SControl 300) Jan 13 21:25:55.154692 kernel: ata4: SATA link down (SStatus 0 SControl 300) Jan 13 21:25:55.154713 kernel: ata3: SATA link up 1.5 Gbps (SStatus 113 SControl 300) Jan 13 21:25:55.156454 kernel: ata2: SATA link down (SStatus 0 SControl 300) Jan 13 21:25:55.156560 kernel: ata1: SATA link down (SStatus 0 SControl 300) Jan 13 21:25:55.157444 kernel: ata3.00: ATAPI: QEMU DVD-ROM, 2.5+, max UDMA/100 Jan 13 21:25:55.158720 kernel: ata3.00: applying bridge limits Jan 13 21:25:55.158756 kernel: ata3.00: configured for UDMA/100 Jan 13 21:25:55.159433 kernel: scsi 2:0:0:0: CD-ROM QEMU QEMU DVD-ROM 2.5+ PQ: 0 ANSI: 5 Jan 13 21:25:55.164431 kernel: ata6: SATA link down (SStatus 0 SControl 300) Jan 13 21:25:55.224439 kernel: sr 2:0:0:0: [sr0] scsi3-mmc drive: 4x/4x cd/rw xa/form2 tray Jan 13 21:25:55.243194 kernel: cdrom: Uniform CD-ROM driver Revision: 3.20 Jan 13 21:25:55.243214 kernel: sr 2:0:0:0: Attached scsi CD-ROM sr0 Jan 13 21:25:55.919077 disk-uuid[551]: The operation has completed successfully. Jan 13 21:25:55.920482 kernel: vda: vda1 vda2 vda3 vda4 vda6 vda7 vda9 Jan 13 21:25:55.951895 systemd[1]: disk-uuid.service: Deactivated successfully. Jan 13 21:25:55.952047 systemd[1]: Finished disk-uuid.service - Generate new UUID for disk GPT if necessary. Jan 13 21:25:55.979807 systemd[1]: Starting verity-setup.service - Verity Setup for /dev/mapper/usr... Jan 13 21:25:55.983068 sh[590]: Success Jan 13 21:25:55.996459 kernel: device-mapper: verity: sha256 using implementation "sha256-ni" Jan 13 21:25:56.039739 systemd[1]: Found device dev-mapper-usr.device - /dev/mapper/usr. Jan 13 21:25:56.056279 systemd[1]: Mounting sysusr-usr.mount - /sysusr/usr... Jan 13 21:25:56.059526 systemd[1]: Finished verity-setup.service - Verity Setup for /dev/mapper/usr. 
Jan 13 21:25:56.078768 kernel: BTRFS info (device dm-0): first mount of filesystem b8e2d3c5-4bed-4339-bed5-268c66823686 Jan 13 21:25:56.078837 kernel: BTRFS info (device dm-0): using crc32c (crc32c-intel) checksum algorithm Jan 13 21:25:56.078853 kernel: BTRFS warning (device dm-0): 'nologreplay' is deprecated, use 'rescue=nologreplay' instead Jan 13 21:25:56.080026 kernel: BTRFS info (device dm-0): disabling log replay at mount time Jan 13 21:25:56.080953 kernel: BTRFS info (device dm-0): using free space tree Jan 13 21:25:56.087567 systemd[1]: Mounted sysusr-usr.mount - /sysusr/usr. Jan 13 21:25:56.088049 systemd[1]: afterburn-network-kargs.service - Afterburn Initrd Setup Network Kernel Arguments was skipped because no trigger condition checks were met. Jan 13 21:25:56.104707 systemd[1]: Starting ignition-setup.service - Ignition (setup)... Jan 13 21:25:56.105901 systemd[1]: Starting parse-ip-for-networkd.service - Write systemd-networkd units from cmdline... Jan 13 21:25:56.120526 kernel: BTRFS info (device vda6): first mount of filesystem 70d8a0b5-70da-4efb-a618-d15543718b1e Jan 13 21:25:56.120556 kernel: BTRFS info (device vda6): using crc32c (crc32c-intel) checksum algorithm Jan 13 21:25:56.120568 kernel: BTRFS info (device vda6): using free space tree Jan 13 21:25:56.123458 kernel: BTRFS info (device vda6): auto enabling async discard Jan 13 21:25:56.133549 systemd[1]: mnt-oem.mount: Deactivated successfully. Jan 13 21:25:56.135455 kernel: BTRFS info (device vda6): last unmount of filesystem 70d8a0b5-70da-4efb-a618-d15543718b1e Jan 13 21:25:56.144393 systemd[1]: Finished ignition-setup.service - Ignition (setup). Jan 13 21:25:56.151586 systemd[1]: Starting ignition-fetch-offline.service - Ignition (fetch-offline)... Jan 13 21:25:56.348182 systemd[1]: Finished parse-ip-for-networkd.service - Write systemd-networkd units from cmdline. Jan 13 21:25:56.358586 systemd[1]: Starting systemd-networkd.service - Network Configuration... 
Jan 13 21:25:56.376647 ignition[690]: Ignition 2.19.0 Jan 13 21:25:56.376668 ignition[690]: Stage: fetch-offline Jan 13 21:25:56.376723 ignition[690]: no configs at "/usr/lib/ignition/base.d" Jan 13 21:25:56.376734 ignition[690]: no config dir at "/usr/lib/ignition/base.platform.d/qemu" Jan 13 21:25:56.376842 ignition[690]: parsed url from cmdline: "" Jan 13 21:25:56.376846 ignition[690]: no config URL provided Jan 13 21:25:56.376852 ignition[690]: reading system config file "/usr/lib/ignition/user.ign" Jan 13 21:25:56.376862 ignition[690]: no config at "/usr/lib/ignition/user.ign" Jan 13 21:25:56.376896 ignition[690]: op(1): [started] loading QEMU firmware config module Jan 13 21:25:56.376902 ignition[690]: op(1): executing: "modprobe" "qemu_fw_cfg" Jan 13 21:25:56.387394 systemd-networkd[776]: lo: Link UP Jan 13 21:25:56.387422 systemd-networkd[776]: lo: Gained carrier Jan 13 21:25:56.389481 systemd-networkd[776]: Enumeration completed Jan 13 21:25:56.390019 systemd-networkd[776]: eth0: found matching network '/usr/lib/systemd/network/zz-default.network', based on potentially unpredictable interface name. Jan 13 21:25:56.390023 systemd-networkd[776]: eth0: Configuring with /usr/lib/systemd/network/zz-default.network. Jan 13 21:25:56.391030 systemd[1]: Started systemd-networkd.service - Network Configuration. Jan 13 21:25:56.391464 ignition[690]: op(1): [finished] loading QEMU firmware config module Jan 13 21:25:56.391484 systemd[1]: Reached target network.target - Network. Jan 13 21:25:56.397944 ignition[690]: parsing config with SHA512: 91f41ed681db719f90ec826f3e0c6b85d55883889523c7bca796d7bbba45fe76b5465f9fea9d011f6899d7eda9e1193b481b84f78e80640147c607e0ba14e8f9 Jan 13 21:25:56.392742 systemd-networkd[776]: eth0: Link UP Jan 13 21:25:56.392746 systemd-networkd[776]: eth0: Gained carrier Jan 13 21:25:56.392754 systemd-networkd[776]: eth0: found matching network '/usr/lib/systemd/network/zz-default.network', based on potentially unpredictable interface name. 
Jan 13 21:25:56.400787 ignition[690]: fetch-offline: fetch-offline passed Jan 13 21:25:56.400529 unknown[690]: fetched base config from "system" Jan 13 21:25:56.400857 ignition[690]: Ignition finished successfully Jan 13 21:25:56.400537 unknown[690]: fetched user config from "qemu" Jan 13 21:25:56.403310 systemd[1]: Finished ignition-fetch-offline.service - Ignition (fetch-offline). Jan 13 21:25:56.404662 systemd[1]: ignition-fetch.service - Ignition (fetch) was skipped because of an unmet condition check (ConditionPathExists=!/run/ignition.json). Jan 13 21:25:56.411566 systemd[1]: Starting ignition-kargs.service - Ignition (kargs)... Jan 13 21:25:56.412482 systemd-networkd[776]: eth0: DHCPv4 address 10.0.0.115/16, gateway 10.0.0.1 acquired from 10.0.0.1 Jan 13 21:25:56.428490 ignition[783]: Ignition 2.19.0 Jan 13 21:25:56.428505 ignition[783]: Stage: kargs Jan 13 21:25:56.428723 ignition[783]: no configs at "/usr/lib/ignition/base.d" Jan 13 21:25:56.428736 ignition[783]: no config dir at "/usr/lib/ignition/base.platform.d/qemu" Jan 13 21:25:56.429551 ignition[783]: kargs: kargs passed Jan 13 21:25:56.429601 ignition[783]: Ignition finished successfully Jan 13 21:25:56.436602 systemd[1]: Finished ignition-kargs.service - Ignition (kargs). Jan 13 21:25:56.443862 systemd[1]: Starting ignition-disks.service - Ignition (disks)... Jan 13 21:25:56.459598 ignition[792]: Ignition 2.19.0 Jan 13 21:25:56.459610 ignition[792]: Stage: disks Jan 13 21:25:56.459805 ignition[792]: no configs at "/usr/lib/ignition/base.d" Jan 13 21:25:56.459818 ignition[792]: no config dir at "/usr/lib/ignition/base.platform.d/qemu" Jan 13 21:25:56.463421 ignition[792]: disks: disks passed Jan 13 21:25:56.463483 ignition[792]: Ignition finished successfully Jan 13 21:25:56.467067 systemd[1]: Finished ignition-disks.service - Ignition (disks). Jan 13 21:25:56.468348 systemd[1]: Reached target initrd-root-device.target - Initrd Root Device. 
Jan 13 21:25:56.470300 systemd[1]: Reached target local-fs-pre.target - Preparation for Local File Systems. Jan 13 21:25:56.471572 systemd[1]: Reached target local-fs.target - Local File Systems. Jan 13 21:25:56.471990 systemd[1]: Reached target sysinit.target - System Initialization. Jan 13 21:25:56.472312 systemd[1]: Reached target basic.target - Basic System. Jan 13 21:25:56.481540 systemd[1]: Starting systemd-fsck-root.service - File System Check on /dev/disk/by-label/ROOT... Jan 13 21:25:56.519958 systemd-fsck[802]: ROOT: clean, 14/553520 files, 52654/553472 blocks Jan 13 21:25:56.552298 systemd[1]: Finished systemd-fsck-root.service - File System Check on /dev/disk/by-label/ROOT. Jan 13 21:25:56.563669 systemd[1]: Mounting sysroot.mount - /sysroot... Jan 13 21:25:56.654445 kernel: EXT4-fs (vda9): mounted filesystem 39899d4c-a8b1-4feb-9875-e812cc535888 r/w with ordered data mode. Quota mode: none. Jan 13 21:25:56.655200 systemd[1]: Mounted sysroot.mount - /sysroot. Jan 13 21:25:56.656511 systemd[1]: Reached target initrd-root-fs.target - Initrd Root File System. Jan 13 21:25:56.672502 systemd[1]: Mounting sysroot-oem.mount - /sysroot/oem... Jan 13 21:25:56.674687 systemd[1]: Mounting sysroot-usr.mount - /sysroot/usr... Jan 13 21:25:56.675244 systemd[1]: flatcar-metadata-hostname.service - Flatcar Metadata Hostname Agent was skipped because no trigger condition checks were met. Jan 13 21:25:56.681019 kernel: BTRFS: device label OEM devid 1 transid 13 /dev/vda6 scanned by mount (811) Jan 13 21:25:56.681050 kernel: BTRFS info (device vda6): first mount of filesystem 70d8a0b5-70da-4efb-a618-d15543718b1e Jan 13 21:25:56.675288 systemd[1]: ignition-remount-sysroot.service - Remount /sysroot read-write for Ignition was skipped because of an unmet condition check (ConditionPathIsReadWrite=!/sysroot). 
Jan 13 21:25:56.687526 kernel: BTRFS info (device vda6): using crc32c (crc32c-intel) checksum algorithm Jan 13 21:25:56.687551 kernel: BTRFS info (device vda6): using free space tree Jan 13 21:25:56.687562 kernel: BTRFS info (device vda6): auto enabling async discard Jan 13 21:25:56.675311 systemd[1]: Reached target ignition-diskful.target - Ignition Boot Disk Setup. Jan 13 21:25:56.684509 systemd[1]: Mounted sysroot-usr.mount - /sysroot/usr. Jan 13 21:25:56.688690 systemd[1]: Mounted sysroot-oem.mount - /sysroot/oem. Jan 13 21:25:56.698653 systemd[1]: Starting initrd-setup-root.service - Root filesystem setup... Jan 13 21:25:56.736276 initrd-setup-root[836]: cut: /sysroot/etc/passwd: No such file or directory Jan 13 21:25:56.743442 initrd-setup-root[843]: cut: /sysroot/etc/group: No such file or directory Jan 13 21:25:56.748939 initrd-setup-root[850]: cut: /sysroot/etc/shadow: No such file or directory Jan 13 21:25:56.756582 initrd-setup-root[857]: cut: /sysroot/etc/gshadow: No such file or directory Jan 13 21:25:56.884941 systemd[1]: Finished initrd-setup-root.service - Root filesystem setup. Jan 13 21:25:56.896668 systemd[1]: Starting ignition-mount.service - Ignition (mount)... Jan 13 21:25:56.897949 systemd[1]: Starting sysroot-boot.service - /sysroot/boot... Jan 13 21:25:56.911490 kernel: BTRFS info (device vda6): last unmount of filesystem 70d8a0b5-70da-4efb-a618-d15543718b1e Jan 13 21:25:56.929285 systemd[1]: Finished sysroot-boot.service - /sysroot/boot. 
Jan 13 21:25:56.947971 ignition[926]: INFO : Ignition 2.19.0 Jan 13 21:25:56.947971 ignition[926]: INFO : Stage: mount Jan 13 21:25:56.950070 ignition[926]: INFO : no configs at "/usr/lib/ignition/base.d" Jan 13 21:25:56.950070 ignition[926]: INFO : no config dir at "/usr/lib/ignition/base.platform.d/qemu" Jan 13 21:25:56.950070 ignition[926]: INFO : mount: mount passed Jan 13 21:25:56.950070 ignition[926]: INFO : Ignition finished successfully Jan 13 21:25:56.957125 systemd[1]: Finished ignition-mount.service - Ignition (mount). Jan 13 21:25:56.968591 systemd[1]: Starting ignition-files.service - Ignition (files)... Jan 13 21:25:57.077585 systemd[1]: sysroot-oem.mount: Deactivated successfully. Jan 13 21:25:57.086752 systemd[1]: Mounting sysroot-oem.mount - /sysroot/oem... Jan 13 21:25:57.095909 kernel: BTRFS: device label OEM devid 1 transid 14 /dev/vda6 scanned by mount (939) Jan 13 21:25:57.095941 kernel: BTRFS info (device vda6): first mount of filesystem 70d8a0b5-70da-4efb-a618-d15543718b1e Jan 13 21:25:57.095953 kernel: BTRFS info (device vda6): using crc32c (crc32c-intel) checksum algorithm Jan 13 21:25:57.096765 kernel: BTRFS info (device vda6): using free space tree Jan 13 21:25:57.100435 kernel: BTRFS info (device vda6): auto enabling async discard Jan 13 21:25:57.101580 systemd[1]: Mounted sysroot-oem.mount - /sysroot/oem. 
Jan 13 21:25:57.215295 ignition[956]: INFO : Ignition 2.19.0 Jan 13 21:25:57.215295 ignition[956]: INFO : Stage: files Jan 13 21:25:57.217478 ignition[956]: INFO : no configs at "/usr/lib/ignition/base.d" Jan 13 21:25:57.217478 ignition[956]: INFO : no config dir at "/usr/lib/ignition/base.platform.d/qemu" Jan 13 21:25:57.217478 ignition[956]: DEBUG : files: compiled without relabeling support, skipping Jan 13 21:25:57.220973 ignition[956]: INFO : files: ensureUsers: op(1): [started] creating or modifying user "core" Jan 13 21:25:57.220973 ignition[956]: DEBUG : files: ensureUsers: op(1): executing: "usermod" "--root" "/sysroot" "core" Jan 13 21:25:57.227058 ignition[956]: INFO : files: ensureUsers: op(1): [finished] creating or modifying user "core" Jan 13 21:25:57.228484 ignition[956]: INFO : files: ensureUsers: op(2): [started] adding ssh keys to user "core" Jan 13 21:25:57.228484 ignition[956]: INFO : files: ensureUsers: op(2): [finished] adding ssh keys to user "core" Jan 13 21:25:57.227913 unknown[956]: wrote ssh authorized keys file for user: core Jan 13 21:25:57.232734 ignition[956]: INFO : files: createFilesystemsFiles: createFiles: op(3): [started] writing file "/sysroot/home/core/install.sh" Jan 13 21:25:57.232734 ignition[956]: INFO : files: createFilesystemsFiles: createFiles: op(3): [finished] writing file "/sysroot/home/core/install.sh" Jan 13 21:25:57.232734 ignition[956]: INFO : files: createFilesystemsFiles: createFiles: op(4): [started] writing file "/sysroot/etc/flatcar/update.conf" Jan 13 21:25:57.232734 ignition[956]: INFO : files: createFilesystemsFiles: createFiles: op(4): [finished] writing file "/sysroot/etc/flatcar/update.conf" Jan 13 21:25:57.232734 ignition[956]: INFO : files: createFilesystemsFiles: createFiles: op(5): [started] writing link "/sysroot/etc/extensions/kubernetes.raw" -> "/opt/extensions/kubernetes/kubernetes-v1.31.0-x86-64.raw" Jan 13 21:25:57.232734 ignition[956]: INFO : files: createFilesystemsFiles: createFiles: 
op(5): [finished] writing link "/sysroot/etc/extensions/kubernetes.raw" -> "/opt/extensions/kubernetes/kubernetes-v1.31.0-x86-64.raw" Jan 13 21:25:57.232734 ignition[956]: INFO : files: createFilesystemsFiles: createFiles: op(6): [started] writing file "/sysroot/opt/extensions/kubernetes/kubernetes-v1.31.0-x86-64.raw" Jan 13 21:25:57.232734 ignition[956]: INFO : files: createFilesystemsFiles: createFiles: op(6): GET https://github.com/flatcar/sysext-bakery/releases/download/latest/kubernetes-v1.31.0-x86-64.raw: attempt #1 Jan 13 21:25:57.578508 ignition[956]: INFO : files: createFilesystemsFiles: createFiles: op(6): GET result: OK Jan 13 21:25:58.075703 systemd-networkd[776]: eth0: Gained IPv6LL Jan 13 21:25:58.166991 ignition[956]: INFO : files: createFilesystemsFiles: createFiles: op(6): [finished] writing file "/sysroot/opt/extensions/kubernetes/kubernetes-v1.31.0-x86-64.raw" Jan 13 21:25:58.170208 ignition[956]: INFO : files: op(7): [started] processing unit "coreos-metadata.service" Jan 13 21:25:58.170208 ignition[956]: INFO : files: op(7): op(8): [started] writing unit "coreos-metadata.service" at "/sysroot/etc/systemd/system/coreos-metadata.service" Jan 13 21:25:58.170208 ignition[956]: INFO : files: op(7): op(8): [finished] writing unit "coreos-metadata.service" at "/sysroot/etc/systemd/system/coreos-metadata.service" Jan 13 21:25:58.170208 ignition[956]: INFO : files: op(7): [finished] processing unit "coreos-metadata.service" Jan 13 21:25:58.170208 ignition[956]: INFO : files: op(9): [started] setting preset to disabled for "coreos-metadata.service" Jan 13 21:25:58.200912 ignition[956]: INFO : files: op(9): op(a): [started] removing enablement symlink(s) for "coreos-metadata.service" Jan 13 21:25:58.208000 ignition[956]: INFO : files: op(9): op(a): [finished] removing enablement symlink(s) for "coreos-metadata.service" Jan 13 21:25:58.209741 ignition[956]: INFO : files: op(9): [finished] setting preset to disabled for "coreos-metadata.service" Jan 13 
21:25:58.211348 ignition[956]: INFO : files: createResultFile: createFiles: op(b): [started] writing file "/sysroot/etc/.ignition-result.json" Jan 13 21:25:58.213160 ignition[956]: INFO : files: createResultFile: createFiles: op(b): [finished] writing file "/sysroot/etc/.ignition-result.json" Jan 13 21:25:58.214924 ignition[956]: INFO : files: files passed Jan 13 21:25:58.215712 ignition[956]: INFO : Ignition finished successfully Jan 13 21:25:58.218783 systemd[1]: Finished ignition-files.service - Ignition (files). Jan 13 21:25:58.231609 systemd[1]: Starting ignition-quench.service - Ignition (record completion)... Jan 13 21:25:58.233640 systemd[1]: Starting initrd-setup-root-after-ignition.service - Root filesystem completion... Jan 13 21:25:58.238100 systemd[1]: ignition-quench.service: Deactivated successfully. Jan 13 21:25:58.238239 systemd[1]: Finished ignition-quench.service - Ignition (record completion). Jan 13 21:25:58.247686 initrd-setup-root-after-ignition[984]: grep: /sysroot/oem/oem-release: No such file or directory Jan 13 21:25:58.251076 initrd-setup-root-after-ignition[986]: grep: /sysroot/etc/flatcar/enabled-sysext.conf: No such file or directory Jan 13 21:25:58.251076 initrd-setup-root-after-ignition[986]: grep: /sysroot/usr/share/flatcar/enabled-sysext.conf: No such file or directory Jan 13 21:25:58.254689 initrd-setup-root-after-ignition[990]: grep: /sysroot/etc/flatcar/enabled-sysext.conf: No such file or directory Jan 13 21:25:58.257962 systemd[1]: Finished initrd-setup-root-after-ignition.service - Root filesystem completion. Jan 13 21:25:58.258437 systemd[1]: Reached target ignition-complete.target - Ignition Complete. Jan 13 21:25:58.275553 systemd[1]: Starting initrd-parse-etc.service - Mountpoints Configured in the Real Root... Jan 13 21:25:58.302922 systemd[1]: initrd-parse-etc.service: Deactivated successfully. Jan 13 21:25:58.303068 systemd[1]: Finished initrd-parse-etc.service - Mountpoints Configured in the Real Root. 
Jan 13 21:25:58.303833 systemd[1]: Reached target initrd-fs.target - Initrd File Systems. Jan 13 21:25:58.306986 systemd[1]: Reached target initrd.target - Initrd Default Target. Jan 13 21:25:58.307383 systemd[1]: dracut-mount.service - dracut mount hook was skipped because no trigger condition checks were met. Jan 13 21:25:58.308246 systemd[1]: Starting dracut-pre-pivot.service - dracut pre-pivot and cleanup hook... Jan 13 21:25:58.338726 systemd[1]: Finished dracut-pre-pivot.service - dracut pre-pivot and cleanup hook. Jan 13 21:25:58.348614 systemd[1]: Starting initrd-cleanup.service - Cleaning Up and Shutting Down Daemons... Jan 13 21:25:58.360884 systemd[1]: Stopped target nss-lookup.target - Host and Network Name Lookups. Jan 13 21:25:58.363321 systemd[1]: Stopped target remote-cryptsetup.target - Remote Encrypted Volumes. Jan 13 21:25:58.364668 systemd[1]: Stopped target timers.target - Timer Units. Jan 13 21:25:58.366583 systemd[1]: dracut-pre-pivot.service: Deactivated successfully. Jan 13 21:25:58.366742 systemd[1]: Stopped dracut-pre-pivot.service - dracut pre-pivot and cleanup hook. Jan 13 21:25:58.368878 systemd[1]: Stopped target initrd.target - Initrd Default Target. Jan 13 21:25:58.370592 systemd[1]: Stopped target basic.target - Basic System. Jan 13 21:25:58.372613 systemd[1]: Stopped target ignition-complete.target - Ignition Complete. Jan 13 21:25:58.374660 systemd[1]: Stopped target ignition-diskful.target - Ignition Boot Disk Setup. Jan 13 21:25:58.376691 systemd[1]: Stopped target initrd-root-device.target - Initrd Root Device. Jan 13 21:25:58.378825 systemd[1]: Stopped target remote-fs.target - Remote File Systems. Jan 13 21:25:58.380935 systemd[1]: Stopped target remote-fs-pre.target - Preparation for Remote File Systems. Jan 13 21:25:58.383229 systemd[1]: Stopped target sysinit.target - System Initialization. Jan 13 21:25:58.385214 systemd[1]: Stopped target local-fs.target - Local File Systems. 
Jan 13 21:25:58.387421 systemd[1]: Stopped target swap.target - Swaps.
Jan 13 21:25:58.389186 systemd[1]: dracut-pre-mount.service: Deactivated successfully.
Jan 13 21:25:58.389303 systemd[1]: Stopped dracut-pre-mount.service - dracut pre-mount hook.
Jan 13 21:25:58.391528 systemd[1]: Stopped target cryptsetup.target - Local Encrypted Volumes.
Jan 13 21:25:58.393571 systemd[1]: Stopped target cryptsetup-pre.target - Local Encrypted Volumes (Pre).
Jan 13 21:25:58.395535 systemd[1]: clevis-luks-askpass.path: Deactivated successfully.
Jan 13 21:25:58.395668 systemd[1]: Stopped clevis-luks-askpass.path - Forward Password Requests to Clevis Directory Watch.
Jan 13 21:25:58.397718 systemd[1]: dracut-initqueue.service: Deactivated successfully.
Jan 13 21:25:58.397866 systemd[1]: Stopped dracut-initqueue.service - dracut initqueue hook.
Jan 13 21:25:58.400018 systemd[1]: ignition-fetch-offline.service: Deactivated successfully.
Jan 13 21:25:58.400143 systemd[1]: Stopped ignition-fetch-offline.service - Ignition (fetch-offline).
Jan 13 21:25:58.402269 systemd[1]: Stopped target paths.target - Path Units.
Jan 13 21:25:58.403989 systemd[1]: systemd-ask-password-console.path: Deactivated successfully.
Jan 13 21:25:58.407490 systemd[1]: Stopped systemd-ask-password-console.path - Dispatch Password Requests to Console Directory Watch.
Jan 13 21:25:58.409669 systemd[1]: Stopped target slices.target - Slice Units.
Jan 13 21:25:58.411697 systemd[1]: Stopped target sockets.target - Socket Units.
Jan 13 21:25:58.413498 systemd[1]: iscsid.socket: Deactivated successfully.
Jan 13 21:25:58.413614 systemd[1]: Closed iscsid.socket - Open-iSCSI iscsid Socket.
Jan 13 21:25:58.415686 systemd[1]: iscsiuio.socket: Deactivated successfully.
Jan 13 21:25:58.415806 systemd[1]: Closed iscsiuio.socket - Open-iSCSI iscsiuio Socket.
Jan 13 21:25:58.418189 systemd[1]: initrd-setup-root-after-ignition.service: Deactivated successfully.
Jan 13 21:25:58.418321 systemd[1]: Stopped initrd-setup-root-after-ignition.service - Root filesystem completion.
Jan 13 21:25:58.420251 systemd[1]: ignition-files.service: Deactivated successfully.
Jan 13 21:25:58.420362 systemd[1]: Stopped ignition-files.service - Ignition (files).
Jan 13 21:25:58.437570 systemd[1]: Stopping ignition-mount.service - Ignition (mount)...
Jan 13 21:25:58.440450 systemd[1]: Stopping sysroot-boot.service - /sysroot/boot...
Jan 13 21:25:58.441366 systemd[1]: systemd-udev-trigger.service: Deactivated successfully.
Jan 13 21:25:58.441512 systemd[1]: Stopped systemd-udev-trigger.service - Coldplug All udev Devices.
Jan 13 21:25:58.443694 systemd[1]: dracut-pre-trigger.service: Deactivated successfully.
Jan 13 21:25:58.443802 systemd[1]: Stopped dracut-pre-trigger.service - dracut pre-trigger hook.
Jan 13 21:25:58.449497 systemd[1]: initrd-cleanup.service: Deactivated successfully.
Jan 13 21:25:58.449617 systemd[1]: Finished initrd-cleanup.service - Cleaning Up and Shutting Down Daemons.
Jan 13 21:25:58.454272 ignition[1010]: INFO : Ignition 2.19.0
Jan 13 21:25:58.454272 ignition[1010]: INFO : Stage: umount
Jan 13 21:25:58.456134 ignition[1010]: INFO : no configs at "/usr/lib/ignition/base.d"
Jan 13 21:25:58.456134 ignition[1010]: INFO : no config dir at "/usr/lib/ignition/base.platform.d/qemu"
Jan 13 21:25:58.456134 ignition[1010]: INFO : umount: umount passed
Jan 13 21:25:58.456134 ignition[1010]: INFO : Ignition finished successfully
Jan 13 21:25:58.457705 systemd[1]: ignition-mount.service: Deactivated successfully.
Jan 13 21:25:58.457829 systemd[1]: Stopped ignition-mount.service - Ignition (mount).
Jan 13 21:25:58.459806 systemd[1]: Stopped target network.target - Network.
Jan 13 21:25:58.461376 systemd[1]: ignition-disks.service: Deactivated successfully.
Jan 13 21:25:58.461483 systemd[1]: Stopped ignition-disks.service - Ignition (disks).
Jan 13 21:25:58.463306 systemd[1]: ignition-kargs.service: Deactivated successfully.
Jan 13 21:25:58.463359 systemd[1]: Stopped ignition-kargs.service - Ignition (kargs).
Jan 13 21:25:58.465362 systemd[1]: ignition-setup.service: Deactivated successfully.
Jan 13 21:25:58.465428 systemd[1]: Stopped ignition-setup.service - Ignition (setup).
Jan 13 21:25:58.467433 systemd[1]: ignition-setup-pre.service: Deactivated successfully.
Jan 13 21:25:58.467486 systemd[1]: Stopped ignition-setup-pre.service - Ignition env setup.
Jan 13 21:25:58.469509 systemd[1]: Stopping systemd-networkd.service - Network Configuration...
Jan 13 21:25:58.471477 systemd[1]: Stopping systemd-resolved.service - Network Name Resolution...
Jan 13 21:25:58.473453 systemd-networkd[776]: eth0: DHCPv6 lease lost
Jan 13 21:25:58.474823 systemd[1]: sysroot-boot.mount: Deactivated successfully.
Jan 13 21:25:58.476309 systemd[1]: systemd-networkd.service: Deactivated successfully.
Jan 13 21:25:58.476478 systemd[1]: Stopped systemd-networkd.service - Network Configuration.
Jan 13 21:25:58.478115 systemd[1]: systemd-networkd.socket: Deactivated successfully.
Jan 13 21:25:58.478173 systemd[1]: Closed systemd-networkd.socket - Network Service Netlink Socket.
Jan 13 21:25:58.486566 systemd[1]: Stopping network-cleanup.service - Network Cleanup...
Jan 13 21:25:58.488477 systemd[1]: parse-ip-for-networkd.service: Deactivated successfully.
Jan 13 21:25:58.488559 systemd[1]: Stopped parse-ip-for-networkd.service - Write systemd-networkd units from cmdline.
Jan 13 21:25:58.490966 systemd[1]: Stopping systemd-udevd.service - Rule-based Manager for Device Events and Files...
Jan 13 21:25:58.495097 systemd[1]: systemd-resolved.service: Deactivated successfully.
Jan 13 21:25:58.495228 systemd[1]: Stopped systemd-resolved.service - Network Name Resolution.
Jan 13 21:25:58.508687 systemd[1]: systemd-udevd.service: Deactivated successfully.
Jan 13 21:25:58.508896 systemd[1]: Stopped systemd-udevd.service - Rule-based Manager for Device Events and Files.
Jan 13 21:25:58.510725 systemd[1]: systemd-udevd-control.socket: Deactivated successfully.
Jan 13 21:25:58.510814 systemd[1]: Closed systemd-udevd-control.socket - udev Control Socket.
Jan 13 21:25:58.512661 systemd[1]: systemd-udevd-kernel.socket: Deactivated successfully.
Jan 13 21:25:58.512707 systemd[1]: Closed systemd-udevd-kernel.socket - udev Kernel Socket.
Jan 13 21:25:58.512970 systemd[1]: dracut-pre-udev.service: Deactivated successfully.
Jan 13 21:25:58.513019 systemd[1]: Stopped dracut-pre-udev.service - dracut pre-udev hook.
Jan 13 21:25:58.513745 systemd[1]: dracut-cmdline.service: Deactivated successfully.
Jan 13 21:25:58.513793 systemd[1]: Stopped dracut-cmdline.service - dracut cmdline hook.
Jan 13 21:25:58.519717 systemd[1]: dracut-cmdline-ask.service: Deactivated successfully.
Jan 13 21:25:58.519773 systemd[1]: Stopped dracut-cmdline-ask.service - dracut ask for additional cmdline parameters.
Jan 13 21:25:58.523562 systemd[1]: Starting initrd-udevadm-cleanup-db.service - Cleanup udev Database...
Jan 13 21:25:58.524041 systemd[1]: systemd-sysctl.service: Deactivated successfully.
Jan 13 21:25:58.524095 systemd[1]: Stopped systemd-sysctl.service - Apply Kernel Variables.
Jan 13 21:25:58.524402 systemd[1]: systemd-modules-load.service: Deactivated successfully.
Jan 13 21:25:58.524463 systemd[1]: Stopped systemd-modules-load.service - Load Kernel Modules.
Jan 13 21:25:58.525077 systemd[1]: systemd-tmpfiles-setup-dev.service: Deactivated successfully.
Jan 13 21:25:58.525123 systemd[1]: Stopped systemd-tmpfiles-setup-dev.service - Create Static Device Nodes in /dev.
Jan 13 21:25:58.525401 systemd[1]: systemd-tmpfiles-setup-dev-early.service: Deactivated successfully.
Jan 13 21:25:58.525462 systemd[1]: Stopped systemd-tmpfiles-setup-dev-early.service - Create Static Device Nodes in /dev gracefully.
Jan 13 21:25:58.525910 systemd[1]: kmod-static-nodes.service: Deactivated successfully.
Jan 13 21:25:58.525953 systemd[1]: Stopped kmod-static-nodes.service - Create List of Static Device Nodes.
Jan 13 21:25:58.526236 systemd[1]: systemd-tmpfiles-setup.service: Deactivated successfully.
Jan 13 21:25:58.526279 systemd[1]: Stopped systemd-tmpfiles-setup.service - Create System Files and Directories.
Jan 13 21:25:58.526916 systemd[1]: systemd-vconsole-setup.service: Deactivated successfully.
Jan 13 21:25:58.526963 systemd[1]: Stopped systemd-vconsole-setup.service - Virtual Console Setup.
Jan 13 21:25:58.542929 systemd[1]: network-cleanup.service: Deactivated successfully.
Jan 13 21:25:58.543102 systemd[1]: Stopped network-cleanup.service - Network Cleanup.
Jan 13 21:25:58.548105 systemd[1]: initrd-udevadm-cleanup-db.service: Deactivated successfully.
Jan 13 21:25:58.548223 systemd[1]: Finished initrd-udevadm-cleanup-db.service - Cleanup udev Database.
Jan 13 21:25:58.648866 systemd[1]: sysroot-boot.service: Deactivated successfully.
Jan 13 21:25:58.648999 systemd[1]: Stopped sysroot-boot.service - /sysroot/boot.
Jan 13 21:25:58.651015 systemd[1]: Reached target initrd-switch-root.target - Switch Root.
Jan 13 21:25:58.652743 systemd[1]: initrd-setup-root.service: Deactivated successfully.
Jan 13 21:25:58.652801 systemd[1]: Stopped initrd-setup-root.service - Root filesystem setup.
Jan 13 21:25:58.666533 systemd[1]: Starting initrd-switch-root.service - Switch Root...
Jan 13 21:25:58.674663 systemd[1]: Switching root.
Jan 13 21:25:58.705256 systemd-journald[191]: Journal stopped
Jan 13 21:25:59.834196 systemd-journald[191]: Received SIGTERM from PID 1 (systemd).
Jan 13 21:25:59.834269 kernel: SELinux: policy capability network_peer_controls=1
Jan 13 21:25:59.834288 kernel: SELinux: policy capability open_perms=1
Jan 13 21:25:59.834299 kernel: SELinux: policy capability extended_socket_class=1
Jan 13 21:25:59.834311 kernel: SELinux: policy capability always_check_network=0
Jan 13 21:25:59.834328 kernel: SELinux: policy capability cgroup_seclabel=1
Jan 13 21:25:59.834349 kernel: SELinux: policy capability nnp_nosuid_transition=1
Jan 13 21:25:59.834364 kernel: SELinux: policy capability genfs_seclabel_symlinks=0
Jan 13 21:25:59.834376 kernel: SELinux: policy capability ioctl_skip_cloexec=0
Jan 13 21:25:59.834388 kernel: audit: type=1403 audit(1736803559.025:2): auid=4294967295 ses=4294967295 lsm=selinux res=1
Jan 13 21:25:59.834401 systemd[1]: Successfully loaded SELinux policy in 39.822ms.
Jan 13 21:25:59.834439 systemd[1]: Relabeled /dev, /dev/shm, /run, /sys/fs/cgroup in 13.444ms.
Jan 13 21:25:59.834457 systemd[1]: systemd 255 running in system mode (+PAM +AUDIT +SELINUX -APPARMOR +IMA +SMACK +SECCOMP +GCRYPT -GNUTLS +OPENSSL -ACL +BLKID +CURL +ELFUTILS -FIDO2 +IDN2 -IDN +IPTC +KMOD +LIBCRYPTSETUP +LIBFDISK +PCRE2 -PWQUALITY -P11KIT -QRENCODE +TPM2 +BZIP2 +LZ4 +XZ +ZLIB +ZSTD -BPF_FRAMEWORK -XKBCOMMON +UTMP -SYSVINIT default-hierarchy=unified)
Jan 13 21:25:59.834470 systemd[1]: Detected virtualization kvm.
Jan 13 21:25:59.834487 systemd[1]: Detected architecture x86-64.
Jan 13 21:25:59.834506 systemd[1]: Detected first boot.
Jan 13 21:25:59.834519 systemd[1]: Initializing machine ID from VM UUID.
Jan 13 21:25:59.834537 zram_generator::config[1054]: No configuration found.
Jan 13 21:25:59.834550 systemd[1]: Populated /etc with preset unit settings.
Jan 13 21:25:59.834563 systemd[1]: initrd-switch-root.service: Deactivated successfully.
Jan 13 21:25:59.834581 systemd[1]: Stopped initrd-switch-root.service - Switch Root.
Jan 13 21:25:59.834593 systemd[1]: systemd-journald.service: Scheduled restart job, restart counter is at 1.
Jan 13 21:25:59.834606 systemd[1]: Created slice system-addon\x2dconfig.slice - Slice /system/addon-config.
Jan 13 21:25:59.834626 systemd[1]: Created slice system-addon\x2drun.slice - Slice /system/addon-run.
Jan 13 21:25:59.834638 systemd[1]: Created slice system-getty.slice - Slice /system/getty.
Jan 13 21:25:59.834650 systemd[1]: Created slice system-modprobe.slice - Slice /system/modprobe.
Jan 13 21:25:59.834663 systemd[1]: Created slice system-serial\x2dgetty.slice - Slice /system/serial-getty.
Jan 13 21:25:59.834683 systemd[1]: Created slice system-system\x2dcloudinit.slice - Slice /system/system-cloudinit.
Jan 13 21:25:59.834695 systemd[1]: Created slice system-systemd\x2dfsck.slice - Slice /system/systemd-fsck.
Jan 13 21:25:59.834713 systemd[1]: Created slice user.slice - User and Session Slice.
Jan 13 21:25:59.834726 systemd[1]: Started clevis-luks-askpass.path - Forward Password Requests to Clevis Directory Watch.
Jan 13 21:25:59.834739 systemd[1]: Started systemd-ask-password-console.path - Dispatch Password Requests to Console Directory Watch.
Jan 13 21:25:59.834751 systemd[1]: Started systemd-ask-password-wall.path - Forward Password Requests to Wall Directory Watch.
Jan 13 21:25:59.834771 systemd[1]: Set up automount boot.automount - Boot partition Automount Point.
Jan 13 21:25:59.834784 systemd[1]: Set up automount proc-sys-fs-binfmt_misc.automount - Arbitrary Executable File Formats File System Automount Point.
Jan 13 21:25:59.834796 systemd[1]: Expecting device dev-disk-by\x2dlabel-OEM.device - /dev/disk/by-label/OEM...
Jan 13 21:25:59.834809 systemd[1]: Expecting device dev-ttyS0.device - /dev/ttyS0...
Jan 13 21:25:59.834821 systemd[1]: Reached target cryptsetup-pre.target - Local Encrypted Volumes (Pre).
Jan 13 21:25:59.834840 systemd[1]: Stopped target initrd-switch-root.target - Switch Root.
Jan 13 21:25:59.834852 systemd[1]: Stopped target initrd-fs.target - Initrd File Systems.
Jan 13 21:25:59.834864 systemd[1]: Stopped target initrd-root-fs.target - Initrd Root File System.
Jan 13 21:25:59.834876 systemd[1]: Reached target integritysetup.target - Local Integrity Protected Volumes.
Jan 13 21:25:59.834888 systemd[1]: Reached target remote-cryptsetup.target - Remote Encrypted Volumes.
Jan 13 21:25:59.834900 systemd[1]: Reached target remote-fs.target - Remote File Systems.
Jan 13 21:25:59.834913 systemd[1]: Reached target slices.target - Slice Units.
Jan 13 21:25:59.834930 systemd[1]: Reached target swap.target - Swaps.
Jan 13 21:25:59.834942 systemd[1]: Reached target veritysetup.target - Local Verity Protected Volumes.
Jan 13 21:25:59.834955 systemd[1]: Listening on systemd-coredump.socket - Process Core Dump Socket.
Jan 13 21:25:59.834976 systemd[1]: Listening on systemd-networkd.socket - Network Service Netlink Socket.
Jan 13 21:25:59.834988 systemd[1]: Listening on systemd-udevd-control.socket - udev Control Socket.
Jan 13 21:25:59.835004 systemd[1]: Listening on systemd-udevd-kernel.socket - udev Kernel Socket.
Jan 13 21:25:59.835016 systemd[1]: Listening on systemd-userdbd.socket - User Database Manager Socket.
Jan 13 21:25:59.835029 systemd[1]: Mounting dev-hugepages.mount - Huge Pages File System...
Jan 13 21:25:59.835041 systemd[1]: Mounting dev-mqueue.mount - POSIX Message Queue File System...
Jan 13 21:25:59.835054 systemd[1]: Mounting media.mount - External Media Directory...
Jan 13 21:25:59.835072 systemd[1]: proc-xen.mount - /proc/xen was skipped because of an unmet condition check (ConditionVirtualization=xen).
Jan 13 21:25:59.835085 systemd[1]: Mounting sys-kernel-debug.mount - Kernel Debug File System...
Jan 13 21:25:59.835097 systemd[1]: Mounting sys-kernel-tracing.mount - Kernel Trace File System...
Jan 13 21:25:59.835110 systemd[1]: Mounting tmp.mount - Temporary Directory /tmp...
Jan 13 21:25:59.835122 systemd[1]: var-lib-machines.mount - Virtual Machine and Container Storage (Compatibility) was skipped because of an unmet condition check (ConditionPathExists=/var/lib/machines.raw).
Jan 13 21:25:59.835135 systemd[1]: Reached target machines.target - Containers.
Jan 13 21:25:59.835147 systemd[1]: Starting flatcar-tmpfiles.service - Create missing system files...
Jan 13 21:25:59.835160 systemd[1]: ignition-delete-config.service - Ignition (delete config) was skipped because no trigger condition checks were met.
Jan 13 21:25:59.835177 systemd[1]: Starting kmod-static-nodes.service - Create List of Static Device Nodes...
Jan 13 21:25:59.835190 systemd[1]: Starting modprobe@configfs.service - Load Kernel Module configfs...
Jan 13 21:25:59.835202 systemd[1]: Starting modprobe@dm_mod.service - Load Kernel Module dm_mod...
Jan 13 21:25:59.835215 systemd[1]: Starting modprobe@drm.service - Load Kernel Module drm...
Jan 13 21:25:59.835228 systemd[1]: Starting modprobe@efi_pstore.service - Load Kernel Module efi_pstore...
Jan 13 21:25:59.835240 systemd[1]: Starting modprobe@fuse.service - Load Kernel Module fuse...
Jan 13 21:25:59.835254 systemd[1]: Starting modprobe@loop.service - Load Kernel Module loop...
Jan 13 21:25:59.835266 systemd[1]: setup-nsswitch.service - Create /etc/nsswitch.conf was skipped because of an unmet condition check (ConditionPathExists=!/etc/nsswitch.conf).
Jan 13 21:25:59.835283 systemd[1]: systemd-fsck-root.service: Deactivated successfully.
Jan 13 21:25:59.835296 systemd[1]: Stopped systemd-fsck-root.service - File System Check on Root Device.
Jan 13 21:25:59.835308 systemd[1]: systemd-fsck-usr.service: Deactivated successfully.
Jan 13 21:25:59.835320 systemd[1]: Stopped systemd-fsck-usr.service.
Jan 13 21:25:59.835332 kernel: loop: module loaded
Jan 13 21:25:59.835344 systemd[1]: Starting systemd-journald.service - Journal Service...
Jan 13 21:25:59.835356 kernel: fuse: init (API version 7.39)
Jan 13 21:25:59.835368 systemd[1]: Starting systemd-modules-load.service - Load Kernel Modules...
Jan 13 21:25:59.835380 systemd[1]: Starting systemd-network-generator.service - Generate network units from Kernel command line...
Jan 13 21:25:59.835398 systemd[1]: Starting systemd-remount-fs.service - Remount Root and Kernel File Systems...
Jan 13 21:25:59.835424 systemd[1]: Starting systemd-udev-trigger.service - Coldplug All udev Devices...
Jan 13 21:25:59.835436 systemd[1]: verity-setup.service: Deactivated successfully.
Jan 13 21:25:59.835449 systemd[1]: Stopped verity-setup.service.
Jan 13 21:25:59.835461 systemd[1]: xenserver-pv-version.service - Set fake PV driver version for XenServer was skipped because of an unmet condition check (ConditionVirtualization=xen).
Jan 13 21:25:59.835474 systemd[1]: Mounted dev-hugepages.mount - Huge Pages File System.
Jan 13 21:25:59.835486 systemd[1]: Mounted dev-mqueue.mount - POSIX Message Queue File System.
Jan 13 21:25:59.835498 systemd[1]: Mounted media.mount - External Media Directory.
Jan 13 21:25:59.835511 systemd[1]: Mounted sys-kernel-debug.mount - Kernel Debug File System.
Jan 13 21:25:59.835548 systemd-journald[1117]: Collecting audit messages is disabled.
Jan 13 21:25:59.835572 systemd[1]: Mounted sys-kernel-tracing.mount - Kernel Trace File System.
Jan 13 21:25:59.835585 systemd-journald[1117]: Journal started
Jan 13 21:25:59.835620 systemd-journald[1117]: Runtime Journal (/run/log/journal/6456edc7c205402d8a90d288ace0035f) is 6.0M, max 48.4M, 42.3M free.
Jan 13 21:25:59.555278 systemd[1]: Queued start job for default target multi-user.target.
Jan 13 21:25:59.579121 systemd[1]: Unnecessary job was removed for dev-vda6.device - /dev/vda6.
Jan 13 21:25:59.579632 systemd[1]: systemd-journald.service: Deactivated successfully.
Jan 13 21:25:59.837510 kernel: ACPI: bus type drm_connector registered
Jan 13 21:25:59.837534 systemd[1]: Started systemd-journald.service - Journal Service.
Jan 13 21:25:59.840840 systemd[1]: Mounted tmp.mount - Temporary Directory /tmp.
Jan 13 21:25:59.842498 systemd[1]: Finished kmod-static-nodes.service - Create List of Static Device Nodes.
Jan 13 21:25:59.854768 systemd[1]: modprobe@configfs.service: Deactivated successfully.
Jan 13 21:25:59.855017 systemd[1]: Finished modprobe@configfs.service - Load Kernel Module configfs.
Jan 13 21:25:59.856932 systemd[1]: modprobe@dm_mod.service: Deactivated successfully.
Jan 13 21:25:59.857169 systemd[1]: Finished modprobe@dm_mod.service - Load Kernel Module dm_mod.
Jan 13 21:25:59.859023 systemd[1]: modprobe@drm.service: Deactivated successfully.
Jan 13 21:25:59.859264 systemd[1]: Finished modprobe@drm.service - Load Kernel Module drm.
Jan 13 21:25:59.873239 systemd[1]: modprobe@efi_pstore.service: Deactivated successfully.
Jan 13 21:25:59.873701 systemd[1]: Finished modprobe@efi_pstore.service - Load Kernel Module efi_pstore.
Jan 13 21:25:59.875652 systemd[1]: modprobe@fuse.service: Deactivated successfully.
Jan 13 21:25:59.875893 systemd[1]: Finished modprobe@fuse.service - Load Kernel Module fuse.
Jan 13 21:25:59.877818 systemd[1]: modprobe@loop.service: Deactivated successfully.
Jan 13 21:25:59.878053 systemd[1]: Finished modprobe@loop.service - Load Kernel Module loop.
Jan 13 21:25:59.879856 systemd[1]: Finished systemd-modules-load.service - Load Kernel Modules.
Jan 13 21:25:59.881750 systemd[1]: Finished systemd-network-generator.service - Generate network units from Kernel command line.
Jan 13 21:25:59.883709 systemd[1]: Finished systemd-remount-fs.service - Remount Root and Kernel File Systems.
Jan 13 21:25:59.903672 systemd[1]: Reached target network-pre.target - Preparation for Network.
Jan 13 21:25:59.910536 systemd[1]: Mounting sys-fs-fuse-connections.mount - FUSE Control File System...
Jan 13 21:25:59.915361 systemd[1]: Mounting sys-kernel-config.mount - Kernel Configuration File System...
Jan 13 21:25:59.916783 systemd[1]: remount-root.service - Remount Root File System was skipped because of an unmet condition check (ConditionPathIsReadWrite=!/).
Jan 13 21:25:59.916945 systemd[1]: Reached target local-fs.target - Local File Systems.
Jan 13 21:25:59.919694 systemd[1]: Listening on systemd-sysext.socket - System Extension Image Management (Varlink).
Jan 13 21:25:59.932275 systemd[1]: Starting dracut-shutdown.service - Restore /run/initramfs on shutdown...
Jan 13 21:25:59.934653 systemd[1]: Starting ldconfig.service - Rebuild Dynamic Linker Cache...
Jan 13 21:25:59.935849 systemd[1]: systemd-binfmt.service - Set Up Additional Binary Formats was skipped because no trigger condition checks were met.
Jan 13 21:25:59.937359 systemd[1]: Starting systemd-hwdb-update.service - Rebuild Hardware Database...
Jan 13 21:25:59.939638 systemd[1]: Starting systemd-journal-flush.service - Flush Journal to Persistent Storage...
Jan 13 21:25:59.940936 systemd[1]: systemd-pstore.service - Platform Persistent Storage Archival was skipped because of an unmet condition check (ConditionDirectoryNotEmpty=/sys/fs/pstore).
Jan 13 21:25:59.942056 systemd[1]: Starting systemd-random-seed.service - Load/Save OS Random Seed...
Jan 13 21:25:59.943248 systemd[1]: systemd-repart.service - Repartition Root Disk was skipped because no trigger condition checks were met.
Jan 13 21:25:59.945518 systemd[1]: Starting systemd-sysctl.service - Apply Kernel Variables...
Jan 13 21:25:59.948023 systemd[1]: Starting systemd-sysext.service - Merge System Extension Images into /usr/ and /opt/...
Jan 13 21:25:59.953075 systemd[1]: Starting systemd-tmpfiles-setup-dev-early.service - Create Static Device Nodes in /dev gracefully...
Jan 13 21:25:59.956365 systemd-journald[1117]: Time spent on flushing to /var/log/journal/6456edc7c205402d8a90d288ace0035f is 17.456ms for 935 entries.
Jan 13 21:25:59.956365 systemd-journald[1117]: System Journal (/var/log/journal/6456edc7c205402d8a90d288ace0035f) is 8.0M, max 195.6M, 187.6M free.
Jan 13 21:25:59.986716 systemd-journald[1117]: Received client request to flush runtime journal.
Jan 13 21:25:59.986752 kernel: loop0: detected capacity change from 0 to 142488
Jan 13 21:25:59.960847 systemd[1]: Finished flatcar-tmpfiles.service - Create missing system files.
Jan 13 21:25:59.962582 systemd[1]: Finished systemd-udev-trigger.service - Coldplug All udev Devices.
Jan 13 21:25:59.965245 systemd[1]: Mounted sys-fs-fuse-connections.mount - FUSE Control File System.
Jan 13 21:25:59.966702 systemd[1]: Mounted sys-kernel-config.mount - Kernel Configuration File System.
Jan 13 21:25:59.968923 systemd[1]: Finished dracut-shutdown.service - Restore /run/initramfs on shutdown.
Jan 13 21:25:59.971641 systemd[1]: Finished systemd-random-seed.service - Load/Save OS Random Seed.
Jan 13 21:25:59.984145 systemd[1]: Reached target first-boot-complete.target - First Boot Complete.
Jan 13 21:25:59.998638 systemd[1]: Starting systemd-machine-id-commit.service - Commit a transient machine-id on disk...
Jan 13 21:26:00.001442 kernel: squashfs: version 4.0 (2009/01/31) Phillip Lougher
Jan 13 21:26:00.002239 systemd[1]: Starting systemd-udev-settle.service - Wait for udev To Complete Device Initialization...
Jan 13 21:26:00.004100 systemd[1]: Finished systemd-journal-flush.service - Flush Journal to Persistent Storage.
Jan 13 21:26:00.007204 systemd[1]: Finished systemd-sysctl.service - Apply Kernel Variables.
Jan 13 21:26:00.011569 systemd-tmpfiles[1168]: ACLs are not supported, ignoring.
Jan 13 21:26:00.011590 systemd-tmpfiles[1168]: ACLs are not supported, ignoring.
Jan 13 21:26:00.020553 systemd[1]: Finished systemd-tmpfiles-setup-dev-early.service - Create Static Device Nodes in /dev gracefully.
Jan 13 21:26:00.027070 kernel: loop1: detected capacity change from 0 to 205544
Jan 13 21:26:00.029111 systemd[1]: Starting systemd-sysusers.service - Create System Users...
Jan 13 21:26:00.031056 systemd[1]: etc-machine\x2did.mount: Deactivated successfully.
Jan 13 21:26:00.032513 systemd[1]: Finished systemd-machine-id-commit.service - Commit a transient machine-id on disk.
Jan 13 21:26:00.038111 udevadm[1183]: systemd-udev-settle.service is deprecated. Please fix lvm2-activation-early.service, lvm2-activation.service not to pull it in.
Jan 13 21:26:00.062891 systemd[1]: Finished systemd-sysusers.service - Create System Users.
Jan 13 21:26:00.068448 kernel: loop2: detected capacity change from 0 to 140768
Jan 13 21:26:00.071616 systemd[1]: Starting systemd-tmpfiles-setup-dev.service - Create Static Device Nodes in /dev...
Jan 13 21:26:00.092036 systemd-tmpfiles[1192]: ACLs are not supported, ignoring.
Jan 13 21:26:00.092066 systemd-tmpfiles[1192]: ACLs are not supported, ignoring.
Jan 13 21:26:00.098848 systemd[1]: Finished systemd-tmpfiles-setup-dev.service - Create Static Device Nodes in /dev.
Jan 13 21:26:00.107564 kernel: loop3: detected capacity change from 0 to 142488
Jan 13 21:26:00.118453 kernel: loop4: detected capacity change from 0 to 205544
Jan 13 21:26:00.125563 kernel: loop5: detected capacity change from 0 to 140768
Jan 13 21:26:00.137338 (sd-merge)[1196]: Using extensions 'containerd-flatcar', 'docker-flatcar', 'kubernetes'.
Jan 13 21:26:00.137991 (sd-merge)[1196]: Merged extensions into '/usr'.
Jan 13 21:26:00.142428 systemd[1]: Reloading requested from client PID 1167 ('systemd-sysext') (unit systemd-sysext.service)...
Jan 13 21:26:00.142446 systemd[1]: Reloading...
Jan 13 21:26:00.206446 zram_generator::config[1222]: No configuration found.
Jan 13 21:26:00.303927 ldconfig[1162]: /sbin/ldconfig: /lib/ld.so.conf is not an ELF file - it has the wrong magic bytes at the start.
Jan 13 21:26:00.351853 systemd[1]: /usr/lib/systemd/system/docker.socket:6: ListenStream= references a path below legacy directory /var/run/, updating /var/run/docker.sock → /run/docker.sock; please update the unit file accordingly.
Jan 13 21:26:00.401304 systemd[1]: Reloading finished in 258 ms.
Jan 13 21:26:00.437068 systemd[1]: Finished ldconfig.service - Rebuild Dynamic Linker Cache.
Jan 13 21:26:00.438683 systemd[1]: Finished systemd-sysext.service - Merge System Extension Images into /usr/ and /opt/.
Jan 13 21:26:00.452609 systemd[1]: Starting ensure-sysext.service...
Jan 13 21:26:00.454574 systemd[1]: Starting systemd-tmpfiles-setup.service - Create System Files and Directories...
Jan 13 21:26:00.460233 systemd[1]: Reloading requested from client PID 1259 ('systemctl') (unit ensure-sysext.service)...
Jan 13 21:26:00.460248 systemd[1]: Reloading...
Jan 13 21:26:00.476519 systemd-tmpfiles[1260]: /usr/lib/tmpfiles.d/provision.conf:20: Duplicate line for path "/root", ignoring.
Jan 13 21:26:00.476905 systemd-tmpfiles[1260]: /usr/lib/tmpfiles.d/systemd-flatcar.conf:6: Duplicate line for path "/var/log/journal", ignoring.
Jan 13 21:26:00.477929 systemd-tmpfiles[1260]: /usr/lib/tmpfiles.d/systemd.conf:29: Duplicate line for path "/var/lib/systemd", ignoring.
Jan 13 21:26:00.478236 systemd-tmpfiles[1260]: ACLs are not supported, ignoring.
Jan 13 21:26:00.478316 systemd-tmpfiles[1260]: ACLs are not supported, ignoring.
Jan 13 21:26:00.481881 systemd-tmpfiles[1260]: Detected autofs mount point /boot during canonicalization of boot.
Jan 13 21:26:00.481893 systemd-tmpfiles[1260]: Skipping /boot
Jan 13 21:26:00.496569 systemd-tmpfiles[1260]: Detected autofs mount point /boot during canonicalization of boot.
Jan 13 21:26:00.496587 systemd-tmpfiles[1260]: Skipping /boot
Jan 13 21:26:00.514443 zram_generator::config[1290]: No configuration found.
Jan 13 21:26:00.619922 systemd[1]: /usr/lib/systemd/system/docker.socket:6: ListenStream= references a path below legacy directory /var/run/, updating /var/run/docker.sock → /run/docker.sock; please update the unit file accordingly.
Jan 13 21:26:00.669218 systemd[1]: Reloading finished in 208 ms.
Jan 13 21:26:00.688531 systemd[1]: Finished systemd-hwdb-update.service - Rebuild Hardware Database.
Jan 13 21:26:00.704955 systemd[1]: Finished systemd-tmpfiles-setup.service - Create System Files and Directories.
Jan 13 21:26:00.714168 systemd[1]: Starting audit-rules.service - Load Security Auditing Rules...
Jan 13 21:26:00.717097 systemd[1]: Starting clean-ca-certificates.service - Clean up broken links in /etc/ssl/certs...
Jan 13 21:26:00.719941 systemd[1]: Starting systemd-journal-catalog-update.service - Rebuild Journal Catalog...
Jan 13 21:26:00.726390 systemd[1]: Starting systemd-resolved.service - Network Name Resolution...
Jan 13 21:26:00.737081 systemd[1]: Starting systemd-udevd.service - Rule-based Manager for Device Events and Files...
Jan 13 21:26:00.740351 systemd[1]: Starting systemd-update-utmp.service - Record System Boot/Shutdown in UTMP...
Jan 13 21:26:00.745358 systemd[1]: proc-xen.mount - /proc/xen was skipped because of an unmet condition check (ConditionVirtualization=xen).
Jan 13 21:26:00.745728 systemd[1]: ignition-delete-config.service - Ignition (delete config) was skipped because no trigger condition checks were met.
Jan 13 21:26:00.747204 systemd[1]: Starting modprobe@dm_mod.service - Load Kernel Module dm_mod...
Jan 13 21:26:00.765875 systemd[1]: Starting modprobe@efi_pstore.service - Load Kernel Module efi_pstore...
Jan 13 21:26:00.770721 systemd[1]: Starting modprobe@loop.service - Load Kernel Module loop...
Jan 13 21:26:00.774654 systemd[1]: systemd-binfmt.service - Set Up Additional Binary Formats was skipped because no trigger condition checks were met.
Jan 13 21:26:00.775599 systemd-udevd[1332]: Using default interface naming scheme 'v255'.
Jan 13 21:26:00.777186 systemd[1]: Starting systemd-userdbd.service - User Database Manager...
Jan 13 21:26:00.778686 systemd[1]: xenserver-pv-version.service - Set fake PV driver version for XenServer was skipped because of an unmet condition check (ConditionVirtualization=xen).
Jan 13 21:26:00.780377 systemd[1]: modprobe@dm_mod.service: Deactivated successfully.
Jan 13 21:26:00.780666 systemd[1]: Finished modprobe@dm_mod.service - Load Kernel Module dm_mod.
Jan 13 21:26:00.788053 systemd[1]: modprobe@efi_pstore.service: Deactivated successfully.
Jan 13 21:26:00.788297 systemd[1]: Finished modprobe@efi_pstore.service - Load Kernel Module efi_pstore.
Jan 13 21:26:00.790852 systemd[1]: Finished systemd-journal-catalog-update.service - Rebuild Journal Catalog.
Jan 13 21:26:00.792809 systemd[1]: modprobe@loop.service: Deactivated successfully.
Jan 13 21:26:00.793017 systemd[1]: Finished modprobe@loop.service - Load Kernel Module loop.
Jan 13 21:26:00.804901 systemd[1]: Finished systemd-update-utmp.service - Record System Boot/Shutdown in UTMP.
Jan 13 21:26:00.806633 augenrules[1355]: No rules
Jan 13 21:26:00.808292 systemd[1]: Finished audit-rules.service - Load Security Auditing Rules.
Jan 13 21:26:00.814033 systemd[1]: Finished ensure-sysext.service.
Jan 13 21:26:00.816032 systemd[1]: proc-xen.mount - /proc/xen was skipped because of an unmet condition check (ConditionVirtualization=xen).
Jan 13 21:26:00.816256 systemd[1]: ignition-delete-config.service - Ignition (delete config) was skipped because no trigger condition checks were met.
Jan 13 21:26:00.823778 systemd[1]: Starting modprobe@dm_mod.service - Load Kernel Module dm_mod...
Jan 13 21:26:00.830651 systemd[1]: Starting modprobe@drm.service - Load Kernel Module drm...
Jan 13 21:26:00.836623 systemd[1]: Starting modprobe@efi_pstore.service - Load Kernel Module efi_pstore...
Jan 13 21:26:00.844080 systemd[1]: Starting modprobe@loop.service - Load Kernel Module loop... Jan 13 21:26:00.845448 systemd[1]: systemd-binfmt.service - Set Up Additional Binary Formats was skipped because no trigger condition checks were met. Jan 13 21:26:00.849565 systemd[1]: Starting systemd-timesyncd.service - Network Time Synchronization... Jan 13 21:26:00.856556 systemd[1]: Starting systemd-update-done.service - Update is Completed... Jan 13 21:26:00.857858 systemd[1]: xenserver-pv-version.service - Set fake PV driver version for XenServer was skipped because of an unmet condition check (ConditionVirtualization=xen). Jan 13 21:26:00.858212 systemd[1]: Started systemd-udevd.service - Rule-based Manager for Device Events and Files. Jan 13 21:26:00.859892 systemd[1]: Started systemd-userdbd.service - User Database Manager. Jan 13 21:26:00.863472 systemd[1]: Finished clean-ca-certificates.service - Clean up broken links in /etc/ssl/certs. Jan 13 21:26:00.865423 systemd[1]: modprobe@dm_mod.service: Deactivated successfully. Jan 13 21:26:00.865659 systemd[1]: Finished modprobe@dm_mod.service - Load Kernel Module dm_mod. Jan 13 21:26:00.867374 systemd[1]: modprobe@drm.service: Deactivated successfully. Jan 13 21:26:00.867639 systemd[1]: Finished modprobe@drm.service - Load Kernel Module drm. Jan 13 21:26:00.869255 systemd[1]: modprobe@efi_pstore.service: Deactivated successfully. Jan 13 21:26:00.870005 systemd[1]: Finished modprobe@efi_pstore.service - Load Kernel Module efi_pstore. Jan 13 21:26:00.874176 systemd[1]: modprobe@loop.service: Deactivated successfully. Jan 13 21:26:00.874367 systemd[1]: Finished modprobe@loop.service - Load Kernel Module loop. Jan 13 21:26:00.897836 systemd[1]: Condition check resulted in dev-ttyS0.device - /dev/ttyS0 being skipped. 
Jan 13 21:26:00.907462 kernel: BTRFS warning: duplicate device /dev/vda3 devid 1 generation 37 scanned by (udev-worker) (1376) Jan 13 21:26:00.907485 systemd[1]: Starting systemd-networkd.service - Network Configuration... Jan 13 21:26:00.910507 systemd[1]: systemd-pstore.service - Platform Persistent Storage Archival was skipped because of an unmet condition check (ConditionDirectoryNotEmpty=/sys/fs/pstore). Jan 13 21:26:00.910626 systemd[1]: systemd-repart.service - Repartition Root Disk was skipped because no trigger condition checks were met. Jan 13 21:26:00.910664 systemd[1]: update-ca-certificates.service - Update CA bundle at /etc/ssl/certs/ca-certificates.crt was skipped because of an unmet condition check (ConditionPathIsSymbolicLink=!/etc/ssl/certs/ca-certificates.crt). Jan 13 21:26:00.915482 systemd[1]: Finished systemd-update-done.service - Update is Completed. Jan 13 21:26:00.929962 systemd-resolved[1330]: Positive Trust Anchors: Jan 13 21:26:00.929978 systemd-resolved[1330]: . IN DS 20326 8 2 e06d44b80b8f1d39a95c0b0d7c65d08458e880409bbc683457104237c7f8ec8d Jan 13 21:26:00.930011 systemd-resolved[1330]: Negative trust anchors: home.arpa 10.in-addr.arpa 16.172.in-addr.arpa 17.172.in-addr.arpa 18.172.in-addr.arpa 19.172.in-addr.arpa 20.172.in-addr.arpa 21.172.in-addr.arpa 22.172.in-addr.arpa 23.172.in-addr.arpa 24.172.in-addr.arpa 25.172.in-addr.arpa 26.172.in-addr.arpa 27.172.in-addr.arpa 28.172.in-addr.arpa 29.172.in-addr.arpa 30.172.in-addr.arpa 31.172.in-addr.arpa 170.0.0.192.in-addr.arpa 171.0.0.192.in-addr.arpa 168.192.in-addr.arpa d.f.ip6.arpa ipv4only.arpa resolver.arpa corp home internal intranet lan local private test Jan 13 21:26:00.936201 systemd-resolved[1330]: Defaulting to hostname 'linux'. Jan 13 21:26:00.938012 systemd[1]: Started systemd-resolved.service - Network Name Resolution. Jan 13 21:26:00.948472 systemd[1]: Reached target nss-lookup.target - Host and Network Name Lookups. 
Jan 13 21:26:00.984566 systemd[1]: Found device dev-disk-by\x2dlabel-OEM.device - /dev/disk/by-label/OEM. Jan 13 21:26:01.010608 kernel: input: Power Button as /devices/LNXSYSTM:00/LNXPWRBN:00/input/input3 Jan 13 21:26:01.010664 systemd[1]: Starting systemd-fsck@dev-disk-by\x2dlabel-OEM.service - File System Check on /dev/disk/by-label/OEM... Jan 13 21:26:01.017146 kernel: i801_smbus 0000:00:1f.3: SMBus using PCI interrupt Jan 13 21:26:01.020932 kernel: i2c i2c-0: 1/1 memory slots populated (from DMI) Jan 13 21:26:01.021170 kernel: ACPI: button: Power Button [PWRF] Jan 13 21:26:01.021190 kernel: i2c i2c-0: Memory type 0x07 not supported yet, not instantiating SPD Jan 13 21:26:01.021371 kernel: input: ImExPS/2 Generic Explorer Mouse as /devices/platform/i8042/serio1/input/input4 Jan 13 21:26:01.020541 systemd-networkd[1400]: lo: Link UP Jan 13 21:26:01.020546 systemd-networkd[1400]: lo: Gained carrier Jan 13 21:26:01.022272 systemd-networkd[1400]: Enumeration completed Jan 13 21:26:01.022728 systemd-networkd[1400]: eth0: found matching network '/usr/lib/systemd/network/zz-default.network', based on potentially unpredictable interface name. Jan 13 21:26:01.022732 systemd-networkd[1400]: eth0: Configuring with /usr/lib/systemd/network/zz-default.network. Jan 13 21:26:01.023341 systemd[1]: Started systemd-networkd.service - Network Configuration. Jan 13 21:26:01.024038 systemd-networkd[1400]: eth0: Link UP Jan 13 21:26:01.024051 systemd-networkd[1400]: eth0: Gained carrier Jan 13 21:26:01.024064 systemd-networkd[1400]: eth0: found matching network '/usr/lib/systemd/network/zz-default.network', based on potentially unpredictable interface name. Jan 13 21:26:01.024767 systemd[1]: Reached target network.target - Network. Jan 13 21:26:01.035687 systemd[1]: Starting systemd-networkd-wait-online.service - Wait for Network to be Configured... Jan 13 21:26:01.037256 systemd[1]: Started systemd-timesyncd.service - Network Time Synchronization. 
Jan 13 21:26:01.038532 systemd-networkd[1400]: eth0: DHCPv4 address 10.0.0.115/16, gateway 10.0.0.1 acquired from 10.0.0.1 Jan 13 21:26:01.039013 systemd[1]: Finished systemd-fsck@dev-disk-by\x2dlabel-OEM.service - File System Check on /dev/disk/by-label/OEM. Jan 13 21:26:01.041092 systemd-timesyncd[1383]: Network configuration changed, trying to establish connection. Jan 13 21:26:01.041503 systemd[1]: Reached target time-set.target - System Time Set. Jan 13 21:26:01.902894 systemd-resolved[1330]: Clock change detected. Flushing caches. Jan 13 21:26:01.902924 systemd-timesyncd[1383]: Contacted time server 10.0.0.1:123 (10.0.0.1). Jan 13 21:26:01.902989 systemd-timesyncd[1383]: Initial clock synchronization to Mon 2025-01-13 21:26:01.902819 UTC. Jan 13 21:26:01.919961 kernel: mousedev: PS/2 mouse device common for all mice Jan 13 21:26:01.927301 systemd[1]: Starting systemd-vconsole-setup.service - Virtual Console Setup... Jan 13 21:26:02.028297 systemd[1]: Finished systemd-vconsole-setup.service - Virtual Console Setup. Jan 13 21:26:02.033159 kernel: kvm_amd: TSC scaling supported Jan 13 21:26:02.033194 kernel: kvm_amd: Nested Virtualization enabled Jan 13 21:26:02.033207 kernel: kvm_amd: Nested Paging enabled Jan 13 21:26:02.034133 kernel: kvm_amd: LBR virtualization supported Jan 13 21:26:02.034152 kernel: kvm_amd: Virtual VMLOAD VMSAVE supported Jan 13 21:26:02.035152 kernel: kvm_amd: Virtual GIF supported Jan 13 21:26:02.054978 kernel: EDAC MC: Ver: 3.0.0 Jan 13 21:26:02.090628 systemd[1]: Finished systemd-udev-settle.service - Wait for udev To Complete Device Initialization. Jan 13 21:26:02.104211 systemd[1]: Starting lvm2-activation-early.service - Activation of LVM2 logical volumes... Jan 13 21:26:02.114245 lvm[1424]: WARNING: Failed to connect to lvmetad. Falling back to device scanning. Jan 13 21:26:02.155313 systemd[1]: Finished lvm2-activation-early.service - Activation of LVM2 logical volumes. 
Jan 13 21:26:02.156833 systemd[1]: Reached target cryptsetup.target - Local Encrypted Volumes. Jan 13 21:26:02.158023 systemd[1]: Reached target sysinit.target - System Initialization. Jan 13 21:26:02.159239 systemd[1]: Started motdgen.path - Watch for update engine configuration changes. Jan 13 21:26:02.160534 systemd[1]: Started user-cloudinit@var-lib-flatcar\x2dinstall-user_data.path - Watch for a cloud-config at /var/lib/flatcar-install/user_data. Jan 13 21:26:02.162032 systemd[1]: Started logrotate.timer - Daily rotation of log files. Jan 13 21:26:02.163325 systemd[1]: Started mdadm.timer - Weekly check for MD array's redundancy information.. Jan 13 21:26:02.164608 systemd[1]: Started systemd-tmpfiles-clean.timer - Daily Cleanup of Temporary Directories. Jan 13 21:26:02.165871 systemd[1]: update-engine-stub.timer - Update Engine Stub Timer was skipped because of an unmet condition check (ConditionPathExists=/usr/.noupdate). Jan 13 21:26:02.165904 systemd[1]: Reached target paths.target - Path Units. Jan 13 21:26:02.167013 systemd[1]: Reached target timers.target - Timer Units. Jan 13 21:26:02.168575 systemd[1]: Listening on dbus.socket - D-Bus System Message Bus Socket. Jan 13 21:26:02.171321 systemd[1]: Starting docker.socket - Docker Socket for the API... Jan 13 21:26:02.181091 systemd[1]: Listening on sshd.socket - OpenSSH Server Socket. Jan 13 21:26:02.186868 systemd[1]: Starting lvm2-activation.service - Activation of LVM2 logical volumes... Jan 13 21:26:02.188423 systemd[1]: Listening on docker.socket - Docker Socket for the API. Jan 13 21:26:02.189571 systemd[1]: Reached target sockets.target - Socket Units. Jan 13 21:26:02.190533 systemd[1]: Reached target basic.target - Basic System. Jan 13 21:26:02.191478 systemd[1]: addon-config@oem.service - Configure Addon /oem was skipped because no trigger condition checks were met. Jan 13 21:26:02.191510 systemd[1]: addon-run@oem.service - Run Addon /oem was skipped because no trigger condition checks were met. 
Jan 13 21:26:02.192548 systemd[1]: Starting containerd.service - containerd container runtime... Jan 13 21:26:02.194995 systemd[1]: Starting dbus.service - D-Bus System Message Bus... Jan 13 21:26:02.200275 systemd[1]: Starting enable-oem-cloudinit.service - Enable cloudinit... Jan 13 21:26:02.210736 systemd[1]: Starting extend-filesystems.service - Extend Filesystems... Jan 13 21:26:02.213223 systemd[1]: flatcar-setup-environment.service - Modifies /etc/environment for CoreOS was skipped because of an unmet condition check (ConditionPathExists=/oem/bin/flatcar-setup-environment). Jan 13 21:26:02.214686 lvm[1428]: WARNING: Failed to connect to lvmetad. Falling back to device scanning. Jan 13 21:26:02.215841 jq[1431]: false Jan 13 21:26:02.218165 systemd[1]: Starting motdgen.service - Generate /run/flatcar/motd... Jan 13 21:26:02.224160 systemd[1]: Starting ssh-key-proc-cmdline.service - Install an ssh key from /proc/cmdline... Jan 13 21:26:02.231178 systemd[1]: Starting sshd-keygen.service - Generate sshd host keys... 
Jan 13 21:26:02.235064 extend-filesystems[1432]: Found loop3 Jan 13 21:26:02.236176 extend-filesystems[1432]: Found loop4 Jan 13 21:26:02.236176 extend-filesystems[1432]: Found loop5 Jan 13 21:26:02.236176 extend-filesystems[1432]: Found sr0 Jan 13 21:26:02.236176 extend-filesystems[1432]: Found vda Jan 13 21:26:02.236176 extend-filesystems[1432]: Found vda1 Jan 13 21:26:02.236176 extend-filesystems[1432]: Found vda2 Jan 13 21:26:02.236176 extend-filesystems[1432]: Found vda3 Jan 13 21:26:02.236176 extend-filesystems[1432]: Found usr Jan 13 21:26:02.236176 extend-filesystems[1432]: Found vda4 Jan 13 21:26:02.236176 extend-filesystems[1432]: Found vda6 Jan 13 21:26:02.236176 extend-filesystems[1432]: Found vda7 Jan 13 21:26:02.236176 extend-filesystems[1432]: Found vda9 Jan 13 21:26:02.236176 extend-filesystems[1432]: Checking size of /dev/vda9 Jan 13 21:26:02.250441 dbus-daemon[1430]: [system] SELinux support is enabled Jan 13 21:26:02.240308 systemd[1]: Starting systemd-logind.service - User Login Management... Jan 13 21:26:02.244799 systemd[1]: tcsd.service - TCG Core Services Daemon was skipped because of an unmet condition check (ConditionPathExists=/dev/tpm0). Jan 13 21:26:02.245442 systemd[1]: cgroup compatibility translation between legacy and unified hierarchy settings activated. See cgroup-compat debug messages for details. Jan 13 21:26:02.264873 jq[1448]: true Jan 13 21:26:02.246466 systemd[1]: Starting update-engine.service - Update Engine... Jan 13 21:26:02.274999 update_engine[1445]: I20250113 21:26:02.267501 1445 main.cc:92] Flatcar Update Engine starting Jan 13 21:26:02.274999 update_engine[1445]: I20250113 21:26:02.269165 1445 update_check_scheduler.cc:74] Next update check in 3m25s Jan 13 21:26:02.250980 systemd[1]: Starting update-ssh-keys-after-ignition.service - Run update-ssh-keys once after Ignition... Jan 13 21:26:02.254268 systemd[1]: Started dbus.service - D-Bus System Message Bus. 
Jan 13 21:26:02.259061 systemd[1]: Finished lvm2-activation.service - Activation of LVM2 logical volumes. Jan 13 21:26:02.274517 systemd[1]: enable-oem-cloudinit.service: Skipped due to 'exec-condition'. Jan 13 21:26:02.275555 systemd[1]: Condition check resulted in enable-oem-cloudinit.service - Enable cloudinit being skipped. Jan 13 21:26:02.276141 systemd[1]: motdgen.service: Deactivated successfully. Jan 13 21:26:02.276406 systemd[1]: Finished motdgen.service - Generate /run/flatcar/motd. Jan 13 21:26:02.278200 systemd[1]: ssh-key-proc-cmdline.service: Deactivated successfully. Jan 13 21:26:02.278444 systemd[1]: Finished ssh-key-proc-cmdline.service - Install an ssh key from /proc/cmdline. Jan 13 21:26:02.292125 systemd[1]: system-cloudinit@usr-share-oem-cloud\x2dconfig.yml.service - Load cloud-config from /usr/share/oem/cloud-config.yml was skipped because of an unmet condition check (ConditionFileNotEmpty=/usr/share/oem/cloud-config.yml). Jan 13 21:26:02.292175 systemd[1]: Reached target system-config.target - Load system-provided cloud configs. Jan 13 21:26:02.294926 jq[1451]: true Jan 13 21:26:02.298196 systemd[1]: user-cloudinit-proc-cmdline.service - Load cloud-config from url defined in /proc/cmdline was skipped because of an unmet condition check (ConditionKernelCommandLine=cloud-config-url). Jan 13 21:26:02.298226 systemd[1]: Reached target user-config.target - Load user-provided cloud configs. Jan 13 21:26:02.299507 extend-filesystems[1432]: Resized partition /dev/vda9 Jan 13 21:26:02.305105 extend-filesystems[1462]: resize2fs 1.47.1 (20-May-2024) Jan 13 21:26:02.316434 kernel: BTRFS warning: duplicate device /dev/vda3 devid 1 generation 37 scanned by (udev-worker) (1373) Jan 13 21:26:02.312528 (ntainerd)[1452]: containerd.service: Referenced but unset environment variable evaluates to an empty string: TORCX_IMAGEDIR, TORCX_UNPACKDIR Jan 13 21:26:02.316188 systemd[1]: Started update-engine.service - Update Engine. 
Jan 13 21:26:02.321354 systemd-logind[1443]: Watching system buttons on /dev/input/event1 (Power Button) Jan 13 21:26:02.321388 systemd-logind[1443]: Watching system buttons on /dev/input/event0 (AT Translated Set 2 keyboard) Jan 13 21:26:02.322185 systemd-logind[1443]: New seat seat0. Jan 13 21:26:02.322343 systemd[1]: Started locksmithd.service - Cluster reboot manager. Jan 13 21:26:02.331898 systemd[1]: Started systemd-logind.service - User Login Management. Jan 13 21:26:02.350001 kernel: EXT4-fs (vda9): resizing filesystem from 553472 to 1864699 blocks Jan 13 21:26:02.362298 sshd_keygen[1449]: ssh-keygen: generating new host keys: RSA ECDSA ED25519 Jan 13 21:26:02.387746 systemd[1]: Finished sshd-keygen.service - Generate sshd host keys. Jan 13 21:26:02.400250 systemd[1]: Starting issuegen.service - Generate /run/issue... Jan 13 21:26:02.407384 systemd[1]: issuegen.service: Deactivated successfully. Jan 13 21:26:02.407648 systemd[1]: Finished issuegen.service - Generate /run/issue. Jan 13 21:26:02.410548 systemd[1]: Starting systemd-user-sessions.service - Permit User Sessions... Jan 13 21:26:02.503103 locksmithd[1467]: locksmithd starting currentOperation="UPDATE_STATUS_IDLE" strategy="reboot" Jan 13 21:26:02.509872 systemd[1]: Finished systemd-user-sessions.service - Permit User Sessions. Jan 13 21:26:02.523450 systemd[1]: Started getty@tty1.service - Getty on tty1. Jan 13 21:26:02.525962 systemd[1]: Started serial-getty@ttyS0.service - Serial Getty on ttyS0. Jan 13 21:26:02.532611 systemd[1]: Reached target getty.target - Login Prompts. Jan 13 21:26:02.629963 kernel: EXT4-fs (vda9): resized filesystem to 1864699 Jan 13 21:26:02.643396 systemd[1]: Created slice system-sshd.slice - Slice /system/sshd. Jan 13 21:26:02.652216 systemd[1]: Started sshd@0-10.0.0.115:22-10.0.0.1:60544.service - OpenSSH per-connection server daemon (10.0.0.1:60544). Jan 13 21:26:02.687747 systemd[1]: sshd@0-10.0.0.115:22-10.0.0.1:60544.service: Deactivated successfully. 
Jan 13 21:26:02.705160 sshd[1503]: Connection closed by authenticating user core 10.0.0.1 port 60544 [preauth] Jan 13 21:26:02.705362 extend-filesystems[1462]: Filesystem at /dev/vda9 is mounted on /; on-line resizing required Jan 13 21:26:02.705362 extend-filesystems[1462]: old_desc_blocks = 1, new_desc_blocks = 1 Jan 13 21:26:02.705362 extend-filesystems[1462]: The filesystem on /dev/vda9 is now 1864699 (4k) blocks long. Jan 13 21:26:02.709805 extend-filesystems[1432]: Resized filesystem in /dev/vda9 Jan 13 21:26:02.713428 systemd[1]: extend-filesystems.service: Deactivated successfully. Jan 13 21:26:02.713840 systemd[1]: Finished extend-filesystems.service - Extend Filesystems. Jan 13 21:26:02.796759 bash[1480]: Updated "/home/core/.ssh/authorized_keys" Jan 13 21:26:02.797974 systemd[1]: Finished update-ssh-keys-after-ignition.service - Run update-ssh-keys once after Ignition. Jan 13 21:26:02.801243 systemd[1]: sshkeys.service was skipped because no trigger condition checks were met. Jan 13 21:26:02.805482 containerd[1452]: time="2025-01-13T21:26:02.805370016Z" level=info msg="starting containerd" revision=174e0d1785eeda18dc2beba45e1d5a188771636b version=v1.7.21 Jan 13 21:26:02.827498 containerd[1452]: time="2025-01-13T21:26:02.827436980Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.aufs\"..." type=io.containerd.snapshotter.v1 Jan 13 21:26:02.829286 containerd[1452]: time="2025-01-13T21:26:02.829232116Z" level=info msg="skip loading plugin \"io.containerd.snapshotter.v1.aufs\"..." error="aufs is not supported (modprobe aufs failed: exit status 1 \"modprobe: FATAL: Module aufs not found in directory /lib/modules/6.6.71-flatcar\\n\"): skip plugin" type=io.containerd.snapshotter.v1 Jan 13 21:26:02.829286 containerd[1452]: time="2025-01-13T21:26:02.829271270Z" level=info msg="loading plugin \"io.containerd.event.v1.exchange\"..." type=io.containerd.event.v1
Jan 13 21:26:02.829359 containerd[1452]: time="2025-01-13T21:26:02.829296537Z" level=info msg="loading plugin \"io.containerd.internal.v1.opt\"..." type=io.containerd.internal.v1 Jan 13 21:26:02.829578 containerd[1452]: time="2025-01-13T21:26:02.829540705Z" level=info msg="loading plugin \"io.containerd.warning.v1.deprecations\"..." type=io.containerd.warning.v1 Jan 13 21:26:02.829578 containerd[1452]: time="2025-01-13T21:26:02.829566163Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.blockfile\"..." type=io.containerd.snapshotter.v1 Jan 13 21:26:02.829692 containerd[1452]: time="2025-01-13T21:26:02.829661562Z" level=info msg="skip loading plugin \"io.containerd.snapshotter.v1.blockfile\"..." error="no scratch file generator: skip plugin" type=io.containerd.snapshotter.v1 Jan 13 21:26:02.829692 containerd[1452]: time="2025-01-13T21:26:02.829684725Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.btrfs\"..." type=io.containerd.snapshotter.v1 Jan 13 21:26:02.829992 containerd[1452]: time="2025-01-13T21:26:02.829957797Z" level=info msg="skip loading plugin \"io.containerd.snapshotter.v1.btrfs\"..." error="path /var/lib/containerd/io.containerd.snapshotter.v1.btrfs (ext4) must be a btrfs filesystem to be used with the btrfs snapshotter: skip plugin" type=io.containerd.snapshotter.v1 Jan 13 21:26:02.829992 containerd[1452]: time="2025-01-13T21:26:02.829982694Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.devmapper\"..." type=io.containerd.snapshotter.v1 Jan 13 21:26:02.830060 containerd[1452]: time="2025-01-13T21:26:02.830001750Z" level=info msg="skip loading plugin \"io.containerd.snapshotter.v1.devmapper\"..." error="devmapper not configured: skip plugin" type=io.containerd.snapshotter.v1 Jan 13 21:26:02.830060 containerd[1452]: time="2025-01-13T21:26:02.830015476Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.native\"..." type=io.containerd.snapshotter.v1
Jan 13 21:26:02.830185 containerd[1452]: time="2025-01-13T21:26:02.830155268Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.overlayfs\"..." type=io.containerd.snapshotter.v1 Jan 13 21:26:02.830482 containerd[1452]: time="2025-01-13T21:26:02.830442035Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.zfs\"..." type=io.containerd.snapshotter.v1 Jan 13 21:26:02.830638 containerd[1452]: time="2025-01-13T21:26:02.830602076Z" level=info msg="skip loading plugin \"io.containerd.snapshotter.v1.zfs\"..." error="path /var/lib/containerd/io.containerd.snapshotter.v1.zfs must be a zfs filesystem to be used with the zfs snapshotter: skip plugin" type=io.containerd.snapshotter.v1 Jan 13 21:26:02.830638 containerd[1452]: time="2025-01-13T21:26:02.830623295Z" level=info msg="loading plugin \"io.containerd.content.v1.content\"..." type=io.containerd.content.v1 Jan 13 21:26:02.830792 containerd[1452]: time="2025-01-13T21:26:02.830756736Z" level=info msg="loading plugin \"io.containerd.metadata.v1.bolt\"..." type=io.containerd.metadata.v1 Jan 13 21:26:02.830864 containerd[1452]: time="2025-01-13T21:26:02.830837667Z" level=info msg="metadata content store policy set" policy=shared Jan 13 21:26:02.836441 containerd[1452]: time="2025-01-13T21:26:02.836404267Z" level=info msg="loading plugin \"io.containerd.gc.v1.scheduler\"..." type=io.containerd.gc.v1 Jan 13 21:26:02.836492 containerd[1452]: time="2025-01-13T21:26:02.836454952Z" level=info msg="loading plugin \"io.containerd.differ.v1.walking\"..." type=io.containerd.differ.v1 Jan 13 21:26:02.836492 containerd[1452]: time="2025-01-13T21:26:02.836478115Z" level=info msg="loading plugin \"io.containerd.lease.v1.manager\"..." type=io.containerd.lease.v1 Jan 13 21:26:02.836562 containerd[1452]: time="2025-01-13T21:26:02.836497401Z" level=info msg="loading plugin \"io.containerd.streaming.v1.manager\"..." type=io.containerd.streaming.v1
Jan 13 21:26:02.836562 containerd[1452]: time="2025-01-13T21:26:02.836517479Z" level=info msg="loading plugin \"io.containerd.runtime.v1.linux\"..." type=io.containerd.runtime.v1 Jan 13 21:26:02.836710 containerd[1452]: time="2025-01-13T21:26:02.836673732Z" level=info msg="loading plugin \"io.containerd.monitor.v1.cgroups\"..." type=io.containerd.monitor.v1 Jan 13 21:26:02.836948 containerd[1452]: time="2025-01-13T21:26:02.836913622Z" level=info msg="loading plugin \"io.containerd.runtime.v2.task\"..." type=io.containerd.runtime.v2 Jan 13 21:26:02.837119 containerd[1452]: time="2025-01-13T21:26:02.837071418Z" level=info msg="loading plugin \"io.containerd.runtime.v2.shim\"..." type=io.containerd.runtime.v2 Jan 13 21:26:02.837119 containerd[1452]: time="2025-01-13T21:26:02.837106253Z" level=info msg="loading plugin \"io.containerd.sandbox.store.v1.local\"..." type=io.containerd.sandbox.store.v1 Jan 13 21:26:02.837187 containerd[1452]: time="2025-01-13T21:26:02.837123395Z" level=info msg="loading plugin \"io.containerd.sandbox.controller.v1.local\"..." type=io.containerd.sandbox.controller.v1 Jan 13 21:26:02.837187 containerd[1452]: time="2025-01-13T21:26:02.837143283Z" level=info msg="loading plugin \"io.containerd.service.v1.containers-service\"..." type=io.containerd.service.v1 Jan 13 21:26:02.837187 containerd[1452]: time="2025-01-13T21:26:02.837160856Z" level=info msg="loading plugin \"io.containerd.service.v1.content-service\"..." type=io.containerd.service.v1 Jan 13 21:26:02.837187 containerd[1452]: time="2025-01-13T21:26:02.837177026Z" level=info msg="loading plugin \"io.containerd.service.v1.diff-service\"..." type=io.containerd.service.v1 Jan 13 21:26:02.837294 containerd[1452]: time="2025-01-13T21:26:02.837195200Z" level=info msg="loading plugin \"io.containerd.service.v1.images-service\"..." type=io.containerd.service.v1
Jan 13 21:26:02.837294 containerd[1452]: time="2025-01-13T21:26:02.837215398Z" level=info msg="loading plugin \"io.containerd.service.v1.introspection-service\"..." type=io.containerd.service.v1 Jan 13 21:26:02.837294 containerd[1452]: time="2025-01-13T21:26:02.837233752Z" level=info msg="loading plugin \"io.containerd.service.v1.namespaces-service\"..." type=io.containerd.service.v1 Jan 13 21:26:02.837294 containerd[1452]: time="2025-01-13T21:26:02.837250504Z" level=info msg="loading plugin \"io.containerd.service.v1.snapshots-service\"..." type=io.containerd.service.v1 Jan 13 21:26:02.837294 containerd[1452]: time="2025-01-13T21:26:02.837265632Z" level=info msg="loading plugin \"io.containerd.service.v1.tasks-service\"..." type=io.containerd.service.v1 Jan 13 21:26:02.837294 containerd[1452]: time="2025-01-13T21:26:02.837289797Z" level=info msg="loading plugin \"io.containerd.grpc.v1.containers\"..." type=io.containerd.grpc.v1 Jan 13 21:26:02.837452 containerd[1452]: time="2025-01-13T21:26:02.837307881Z" level=info msg="loading plugin \"io.containerd.grpc.v1.content\"..." type=io.containerd.grpc.v1 Jan 13 21:26:02.837452 containerd[1452]: time="2025-01-13T21:26:02.837324583Z" level=info msg="loading plugin \"io.containerd.grpc.v1.diff\"..." type=io.containerd.grpc.v1 Jan 13 21:26:02.837452 containerd[1452]: time="2025-01-13T21:26:02.837341184Z" level=info msg="loading plugin \"io.containerd.grpc.v1.events\"..." type=io.containerd.grpc.v1 Jan 13 21:26:02.837452 containerd[1452]: time="2025-01-13T21:26:02.837369697Z" level=info msg="loading plugin \"io.containerd.grpc.v1.images\"..." type=io.containerd.grpc.v1 Jan 13 21:26:02.837452 containerd[1452]: time="2025-01-13T21:26:02.837388943Z" level=info msg="loading plugin \"io.containerd.grpc.v1.introspection\"..." type=io.containerd.grpc.v1 Jan 13 21:26:02.837452 containerd[1452]: time="2025-01-13T21:26:02.837405595Z" level=info msg="loading plugin \"io.containerd.grpc.v1.leases\"..." type=io.containerd.grpc.v1
Jan 13 21:26:02.837452 containerd[1452]: time="2025-01-13T21:26:02.837422917Z" level=info msg="loading plugin \"io.containerd.grpc.v1.namespaces\"..." type=io.containerd.grpc.v1 Jan 13 21:26:02.837452 containerd[1452]: time="2025-01-13T21:26:02.837442414Z" level=info msg="loading plugin \"io.containerd.grpc.v1.sandbox-controllers\"..." type=io.containerd.grpc.v1 Jan 13 21:26:02.837670 containerd[1452]: time="2025-01-13T21:26:02.837463062Z" level=info msg="loading plugin \"io.containerd.grpc.v1.sandboxes\"..." type=io.containerd.grpc.v1 Jan 13 21:26:02.837670 containerd[1452]: time="2025-01-13T21:26:02.837478381Z" level=info msg="loading plugin \"io.containerd.grpc.v1.snapshots\"..." type=io.containerd.grpc.v1 Jan 13 21:26:02.837670 containerd[1452]: time="2025-01-13T21:26:02.837493640Z" level=info msg="loading plugin \"io.containerd.grpc.v1.streaming\"..." type=io.containerd.grpc.v1 Jan 13 21:26:02.837670 containerd[1452]: time="2025-01-13T21:26:02.837510912Z" level=info msg="loading plugin \"io.containerd.grpc.v1.tasks\"..." type=io.containerd.grpc.v1 Jan 13 21:26:02.837670 containerd[1452]: time="2025-01-13T21:26:02.837530088Z" level=info msg="loading plugin \"io.containerd.transfer.v1.local\"..." type=io.containerd.transfer.v1 Jan 13 21:26:02.837670 containerd[1452]: time="2025-01-13T21:26:02.837560175Z" level=info msg="loading plugin \"io.containerd.grpc.v1.transfer\"..." type=io.containerd.grpc.v1 Jan 13 21:26:02.837670 containerd[1452]: time="2025-01-13T21:26:02.837576495Z" level=info msg="loading plugin \"io.containerd.grpc.v1.version\"..." type=io.containerd.grpc.v1 Jan 13 21:26:02.837670 containerd[1452]: time="2025-01-13T21:26:02.837590822Z" level=info msg="loading plugin \"io.containerd.internal.v1.restart\"..." type=io.containerd.internal.v1 Jan 13 21:26:02.837670 containerd[1452]: time="2025-01-13T21:26:02.837652237Z" level=info msg="loading plugin \"io.containerd.tracing.processor.v1.otlp\"..." type=io.containerd.tracing.processor.v1
Jan 13 21:26:02.837904 containerd[1452]: time="2025-01-13T21:26:02.837673347Z" level=info msg="skip loading plugin \"io.containerd.tracing.processor.v1.otlp\"..." error="skip plugin: tracing endpoint not configured" type=io.containerd.tracing.processor.v1 Jan 13 21:26:02.837904 containerd[1452]: time="2025-01-13T21:26:02.837687083Z" level=info msg="loading plugin \"io.containerd.internal.v1.tracing\"..." type=io.containerd.internal.v1 Jan 13 21:26:02.837904 containerd[1452]: time="2025-01-13T21:26:02.837702141Z" level=info msg="skip loading plugin \"io.containerd.internal.v1.tracing\"..." error="skip plugin: tracing endpoint not configured" type=io.containerd.internal.v1 Jan 13 21:26:02.837904 containerd[1452]: time="2025-01-13T21:26:02.837716127Z" level=info msg="loading plugin \"io.containerd.grpc.v1.healthcheck\"..." type=io.containerd.grpc.v1 Jan 13 21:26:02.837904 containerd[1452]: time="2025-01-13T21:26:02.837731937Z" level=info msg="loading plugin \"io.containerd.nri.v1.nri\"..." type=io.containerd.nri.v1 Jan 13 21:26:02.837904 containerd[1452]: time="2025-01-13T21:26:02.837762644Z" level=info msg="NRI interface is disabled by configuration." Jan 13 21:26:02.837904 containerd[1452]: time="2025-01-13T21:26:02.837780167Z" level=info msg="loading plugin \"io.containerd.grpc.v1.cri\"..." type=io.containerd.grpc.v1
Jan 13 21:26:02.838221 containerd[1452]: time="2025-01-13T21:26:02.838124262Z" level=info msg="Start cri plugin with config {PluginConfig:{ContainerdConfig:{Snapshotter:overlayfs DefaultRuntimeName:runc DefaultRuntime:{Type: Path: Engine: PodAnnotations:[] ContainerAnnotations:[] Root: Options:map[] PrivilegedWithoutHostDevices:false PrivilegedWithoutHostDevicesAllDevicesAllowed:false BaseRuntimeSpec: NetworkPluginConfDir: NetworkPluginMaxConfNum:0 Snapshotter: SandboxMode:} UntrustedWorkloadRuntime:{Type: Path: Engine: PodAnnotations:[] ContainerAnnotations:[] Root: Options:map[] PrivilegedWithoutHostDevices:false PrivilegedWithoutHostDevicesAllDevicesAllowed:false BaseRuntimeSpec: NetworkPluginConfDir: NetworkPluginMaxConfNum:0 Snapshotter: SandboxMode:} Runtimes:map[runc:{Type:io.containerd.runc.v2 Path: Engine: PodAnnotations:[] ContainerAnnotations:[] Root: Options:map[SystemdCgroup:true] PrivilegedWithoutHostDevices:false PrivilegedWithoutHostDevicesAllDevicesAllowed:false BaseRuntimeSpec: NetworkPluginConfDir: NetworkPluginMaxConfNum:0 Snapshotter: SandboxMode:podsandbox}] NoPivot:false DisableSnapshotAnnotations:true DiscardUnpackedLayers:false IgnoreBlockIONotEnabledErrors:false IgnoreRdtNotEnabledErrors:false} CniConfig:{NetworkPluginBinDir:/opt/cni/bin NetworkPluginConfDir:/etc/cni/net.d NetworkPluginMaxConfNum:1 NetworkPluginSetupSerially:false NetworkPluginConfTemplate: IPPreference:} Registry:{ConfigPath: Mirrors:map[] Configs:map[] Auths:map[] Headers:map[]} ImageDecryption:{KeyModel:node} DisableTCPService:true StreamServerAddress:127.0.0.1 StreamServerPort:0 StreamIdleTimeout:4h0m0s EnableSelinux:true SelinuxCategoryRange:1024 SandboxImage:registry.k8s.io/pause:3.8 StatsCollectPeriod:10 SystemdCgroup:false EnableTLSStreaming:false X509KeyPairStreaming:{TLSCertFile: TLSKeyFile:} MaxContainerLogLineSize:16384 DisableCgroup:false DisableApparmor:false RestrictOOMScoreAdj:false MaxConcurrentDownloads:3 DisableProcMount:false UnsetSeccompProfile: TolerateMissingHugetlbController:true DisableHugetlbController:true DeviceOwnershipFromSecurityContext:false IgnoreImageDefinedVolumes:false NetNSMountsUnderStateDir:false EnableUnprivilegedPorts:false EnableUnprivilegedICMP:false EnableCDI:false CDISpecDirs:[/etc/cdi /var/run/cdi] ImagePullProgressTimeout:5m0s DrainExecSyncIOTimeout:0s ImagePullWithSyncFs:false IgnoreDeprecationWarnings:[]} ContainerdRootDir:/var/lib/containerd ContainerdEndpoint:/run/containerd/containerd.sock RootDir:/var/lib/containerd/io.containerd.grpc.v1.cri StateDir:/run/containerd/io.containerd.grpc.v1.cri}"
Jan 13 21:26:02.838221 containerd[1452]: time="2025-01-13T21:26:02.838216586Z" level=info msg="Connect containerd service" Jan 13 21:26:02.838419 containerd[1452]: time="2025-01-13T21:26:02.838260067Z" level=info msg="using legacy CRI server" Jan 13 21:26:02.838419 containerd[1452]: time="2025-01-13T21:26:02.838269795Z" level=info msg="using experimental NRI integration - disable nri plugin to prevent this" Jan 13 21:26:02.838419 containerd[1452]: time="2025-01-13T21:26:02.838397906Z" level=info msg="Get image filesystem path \"/var/lib/containerd/io.containerd.snapshotter.v1.overlayfs\"" Jan 13 21:26:02.839139 containerd[1452]: time="2025-01-13T21:26:02.839095895Z" level=error msg="failed to load cni during init, please check CRI plugin status before setting up network for pods" error="cni config load failed: no network config found in /etc/cni/net.d: cni plugin not initialized: failed to load cni config" Jan 13 21:26:02.839279 containerd[1452]: time="2025-01-13T21:26:02.839229756Z" level=info msg="Start subscribing containerd event" Jan 13 21:26:02.839279 containerd[1452]: time="2025-01-13T21:26:02.839277786Z" level=info msg="Start recovering state" Jan 13 21:26:02.839370 containerd[1452]: time="2025-01-13T21:26:02.839348238Z" level=info msg="Start event monitor" Jan 13 21:26:02.839370 containerd[1452]: time="2025-01-13T21:26:02.839367193Z" level=info msg="Start snapshots syncer"
Jan 13 21:26:02.839425 containerd[1452]: time="2025-01-13T21:26:02.839376922Z" level=info msg="Start cni network conf syncer for default" Jan 13 21:26:02.839425 containerd[1452]: time="2025-01-13T21:26:02.839385167Z" level=info msg="Start streaming server" Jan 13 21:26:02.839487 containerd[1452]: time="2025-01-13T21:26:02.839462222Z" level=info msg=serving... address=/run/containerd/containerd.sock.ttrpc Jan 13 21:26:02.839585 containerd[1452]: time="2025-01-13T21:26:02.839549966Z" level=info msg=serving... address=/run/containerd/containerd.sock Jan 13 21:26:02.839677 containerd[1452]: time="2025-01-13T21:26:02.839652418Z" level=info msg="containerd successfully booted in 0.036003s" Jan 13 21:26:02.839752 systemd[1]: Started containerd.service - containerd container runtime. Jan 13 21:26:03.208208 systemd-networkd[1400]: eth0: Gained IPv6LL Jan 13 21:26:03.211565 systemd[1]: Finished systemd-networkd-wait-online.service - Wait for Network to be Configured. Jan 13 21:26:03.213408 systemd[1]: Reached target network-online.target - Network is Online. Jan 13 21:26:03.228233 systemd[1]: Starting coreos-metadata.service - QEMU metadata agent... Jan 13 21:26:03.230990 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent... Jan 13 21:26:03.233449 systemd[1]: Starting nvidia.service - NVIDIA Configure Service... Jan 13 21:26:03.253398 systemd[1]: coreos-metadata.service: Deactivated successfully. Jan 13 21:26:03.253664 systemd[1]: Finished coreos-metadata.service - QEMU metadata agent. Jan 13 21:26:03.256430 systemd[1]: packet-phone-home.service - Report Success to Packet was skipped because no trigger condition checks were met. Jan 13 21:26:03.259633 systemd[1]: Finished nvidia.service - NVIDIA Configure Service. Jan 13 21:26:04.265593 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent. Jan 13 21:26:04.267639 systemd[1]: Reached target multi-user.target - Multi-User System.
Jan 13 21:26:04.269299 systemd[1]: Startup finished in 962ms (kernel) + 5.327s (initrd) + 4.436s (userspace) = 10.727s.
Jan 13 21:26:04.283354 (kubelet)[1541]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS, KUBELET_KUBEADM_ARGS
Jan 13 21:26:04.870389 kubelet[1541]: E0113 21:26:04.870302 1541 run.go:72] "command failed" err="failed to load kubelet config file, path: /var/lib/kubelet/config.yaml, error: failed to load Kubelet config file /var/lib/kubelet/config.yaml, error failed to read kubelet config file \"/var/lib/kubelet/config.yaml\", error: open /var/lib/kubelet/config.yaml: no such file or directory"
Jan 13 21:26:04.875007 systemd[1]: kubelet.service: Main process exited, code=exited, status=1/FAILURE
Jan 13 21:26:04.875368 systemd[1]: kubelet.service: Failed with result 'exit-code'.
Jan 13 21:26:04.876040 systemd[1]: kubelet.service: Consumed 1.511s CPU time.
Jan 13 21:26:12.703436 systemd[1]: Started sshd@1-10.0.0.115:22-10.0.0.1:56662.service - OpenSSH per-connection server daemon (10.0.0.1:56662).
Jan 13 21:26:12.738153 sshd[1554]: Accepted publickey for core from 10.0.0.1 port 56662 ssh2: RSA SHA256:zXffdl8b9kLXcQDMnpblwTEmig+xqtREHhbtSBVXgEc
Jan 13 21:26:12.740561 sshd[1554]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Jan 13 21:26:12.749692 systemd[1]: Created slice user-500.slice - User Slice of UID 500.
Jan 13 21:26:12.760163 systemd[1]: Starting user-runtime-dir@500.service - User Runtime Directory /run/user/500...
Jan 13 21:26:12.761884 systemd-logind[1443]: New session 1 of user core.
Jan 13 21:26:12.773555 systemd[1]: Finished user-runtime-dir@500.service - User Runtime Directory /run/user/500.
Jan 13 21:26:12.777214 systemd[1]: Starting user@500.service - User Manager for UID 500...
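The kubelet failure above reduces to one missing file: /var/lib/kubelet/config.yaml does not exist yet because the node has not been through `kubeadm init`/`kubeadm join`, so the unit exits with status 1 and systemd records the failure. A minimal sketch of the same precondition check (the helper name is mine, not kubelet code):

```python
import os

def kubelet_config_ready(path="/var/lib/kubelet/config.yaml"):
    """Mirror the check the kubelet effectively performs at startup:
    the config file must exist and be readable, otherwise it exits
    with status 1 as seen in the log above."""
    return os.path.isfile(path) and os.access(path, os.R_OK)

# On a node that has not yet been joined to a cluster the file is
# absent and the check fails, matching the log's crash.
print(kubelet_config_ready())
```

Once kubeadm writes the file, the same unit start succeeds without any change to the service definition.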
Jan 13 21:26:12.787061 (systemd)[1558]: pam_unix(systemd-user:session): session opened for user core(uid=500) by (uid=0)
Jan 13 21:26:12.902492 systemd[1558]: Queued start job for default target default.target.
Jan 13 21:26:12.911620 systemd[1558]: Created slice app.slice - User Application Slice.
Jan 13 21:26:12.911653 systemd[1558]: Reached target paths.target - Paths.
Jan 13 21:26:12.911668 systemd[1558]: Reached target timers.target - Timers.
Jan 13 21:26:12.913549 systemd[1558]: Starting dbus.socket - D-Bus User Message Bus Socket...
Jan 13 21:26:12.928482 systemd[1558]: Listening on dbus.socket - D-Bus User Message Bus Socket.
Jan 13 21:26:12.928644 systemd[1558]: Reached target sockets.target - Sockets.
Jan 13 21:26:12.928662 systemd[1558]: Reached target basic.target - Basic System.
Jan 13 21:26:12.928709 systemd[1558]: Reached target default.target - Main User Target.
Jan 13 21:26:12.928757 systemd[1558]: Startup finished in 133ms.
Jan 13 21:26:12.929353 systemd[1]: Started user@500.service - User Manager for UID 500.
Jan 13 21:26:12.931303 systemd[1]: Started session-1.scope - Session 1 of User core.
Jan 13 21:26:12.992777 systemd[1]: Started sshd@2-10.0.0.115:22-10.0.0.1:56666.service - OpenSSH per-connection server daemon (10.0.0.1:56666).
Jan 13 21:26:13.028843 sshd[1569]: Accepted publickey for core from 10.0.0.1 port 56666 ssh2: RSA SHA256:zXffdl8b9kLXcQDMnpblwTEmig+xqtREHhbtSBVXgEc
Jan 13 21:26:13.031146 sshd[1569]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Jan 13 21:26:13.036103 systemd-logind[1443]: New session 2 of user core.
Jan 13 21:26:13.046221 systemd[1]: Started session-2.scope - Session 2 of User core.
Jan 13 21:26:13.102152 sshd[1569]: pam_unix(sshd:session): session closed for user core
Jan 13 21:26:13.119975 systemd[1]: sshd@2-10.0.0.115:22-10.0.0.1:56666.service: Deactivated successfully.
Jan 13 21:26:13.122850 systemd[1]: session-2.scope: Deactivated successfully.
Jan 13 21:26:13.125039 systemd-logind[1443]: Session 2 logged out. Waiting for processes to exit.
Jan 13 21:26:13.144526 systemd[1]: Started sshd@3-10.0.0.115:22-10.0.0.1:56668.service - OpenSSH per-connection server daemon (10.0.0.1:56668).
Jan 13 21:26:13.145777 systemd-logind[1443]: Removed session 2.
Jan 13 21:26:13.177061 sshd[1576]: Accepted publickey for core from 10.0.0.1 port 56668 ssh2: RSA SHA256:zXffdl8b9kLXcQDMnpblwTEmig+xqtREHhbtSBVXgEc
Jan 13 21:26:13.178794 sshd[1576]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Jan 13 21:26:13.183278 systemd-logind[1443]: New session 3 of user core.
Jan 13 21:26:13.200088 systemd[1]: Started session-3.scope - Session 3 of User core.
Jan 13 21:26:13.252488 sshd[1576]: pam_unix(sshd:session): session closed for user core
Jan 13 21:26:13.261919 systemd[1]: sshd@3-10.0.0.115:22-10.0.0.1:56668.service: Deactivated successfully.
Jan 13 21:26:13.263896 systemd[1]: session-3.scope: Deactivated successfully.
Jan 13 21:26:13.265677 systemd-logind[1443]: Session 3 logged out. Waiting for processes to exit.
Jan 13 21:26:13.267181 systemd[1]: Started sshd@4-10.0.0.115:22-10.0.0.1:56674.service - OpenSSH per-connection server daemon (10.0.0.1:56674).
Jan 13 21:26:13.268129 systemd-logind[1443]: Removed session 3.
Jan 13 21:26:13.299674 sshd[1583]: Accepted publickey for core from 10.0.0.1 port 56674 ssh2: RSA SHA256:zXffdl8b9kLXcQDMnpblwTEmig+xqtREHhbtSBVXgEc
Jan 13 21:26:13.301225 sshd[1583]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Jan 13 21:26:13.306036 systemd-logind[1443]: New session 4 of user core.
Jan 13 21:26:13.316137 systemd[1]: Started session-4.scope - Session 4 of User core.
Jan 13 21:26:13.372962 sshd[1583]: pam_unix(sshd:session): session closed for user core
Jan 13 21:26:13.386695 systemd[1]: sshd@4-10.0.0.115:22-10.0.0.1:56674.service: Deactivated successfully.
Jan 13 21:26:13.389545 systemd[1]: session-4.scope: Deactivated successfully.
Jan 13 21:26:13.391684 systemd-logind[1443]: Session 4 logged out. Waiting for processes to exit.
Jan 13 21:26:13.402544 systemd[1]: Started sshd@5-10.0.0.115:22-10.0.0.1:56684.service - OpenSSH per-connection server daemon (10.0.0.1:56684).
Jan 13 21:26:13.403607 systemd-logind[1443]: Removed session 4.
Jan 13 21:26:13.432320 sshd[1590]: Accepted publickey for core from 10.0.0.1 port 56684 ssh2: RSA SHA256:zXffdl8b9kLXcQDMnpblwTEmig+xqtREHhbtSBVXgEc
Jan 13 21:26:13.434375 sshd[1590]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Jan 13 21:26:13.439201 systemd-logind[1443]: New session 5 of user core.
Jan 13 21:26:13.454168 systemd[1]: Started session-5.scope - Session 5 of User core.
Jan 13 21:26:13.516815 sudo[1593]: core : PWD=/home/core ; USER=root ; COMMAND=/usr/sbin/setenforce 1
Jan 13 21:26:13.517322 sudo[1593]: pam_unix(sudo:session): session opened for user root(uid=0) by core(uid=500)
Jan 13 21:26:13.790222 sudo[1593]: pam_unix(sudo:session): session closed for user root
Jan 13 21:26:13.792594 sshd[1590]: pam_unix(sshd:session): session closed for user core
Jan 13 21:26:13.809766 systemd[1]: sshd@5-10.0.0.115:22-10.0.0.1:56684.service: Deactivated successfully.
Jan 13 21:26:13.812683 systemd[1]: session-5.scope: Deactivated successfully.
Jan 13 21:26:13.814995 systemd-logind[1443]: Session 5 logged out. Waiting for processes to exit.
Jan 13 21:26:13.825403 systemd[1]: Started sshd@6-10.0.0.115:22-10.0.0.1:56688.service - OpenSSH per-connection server daemon (10.0.0.1:56688).
Jan 13 21:26:13.826728 systemd-logind[1443]: Removed session 5.
Jan 13 21:26:13.858576 sshd[1598]: Accepted publickey for core from 10.0.0.1 port 56688 ssh2: RSA SHA256:zXffdl8b9kLXcQDMnpblwTEmig+xqtREHhbtSBVXgEc
Jan 13 21:26:13.860907 sshd[1598]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Jan 13 21:26:13.865809 systemd-logind[1443]: New session 6 of user core.
Jan 13 21:26:13.875096 systemd[1]: Started session-6.scope - Session 6 of User core.
Jan 13 21:26:13.931248 sudo[1602]: core : PWD=/home/core ; USER=root ; COMMAND=/usr/bin/rm -rf /etc/audit/rules.d/80-selinux.rules /etc/audit/rules.d/99-default.rules
Jan 13 21:26:13.931596 sudo[1602]: pam_unix(sudo:session): session opened for user root(uid=0) by core(uid=500)
Jan 13 21:26:13.936019 sudo[1602]: pam_unix(sudo:session): session closed for user root
Jan 13 21:26:13.943440 sudo[1601]: core : PWD=/home/core ; USER=root ; COMMAND=/usr/bin/systemctl restart audit-rules
Jan 13 21:26:13.943810 sudo[1601]: pam_unix(sudo:session): session opened for user root(uid=0) by core(uid=500)
Jan 13 21:26:13.962267 systemd[1]: Stopping audit-rules.service - Load Security Auditing Rules...
Jan 13 21:26:13.964170 auditctl[1605]: No rules
Jan 13 21:26:13.965647 systemd[1]: audit-rules.service: Deactivated successfully.
Jan 13 21:26:13.966100 systemd[1]: Stopped audit-rules.service - Load Security Auditing Rules.
Jan 13 21:26:13.968232 systemd[1]: Starting audit-rules.service - Load Security Auditing Rules...
Jan 13 21:26:14.002449 augenrules[1623]: No rules
Jan 13 21:26:14.004425 systemd[1]: Finished audit-rules.service - Load Security Auditing Rules.
Jan 13 21:26:14.005920 sudo[1601]: pam_unix(sudo:session): session closed for user root
Jan 13 21:26:14.008207 sshd[1598]: pam_unix(sshd:session): session closed for user core
Jan 13 21:26:14.020163 systemd[1]: sshd@6-10.0.0.115:22-10.0.0.1:56688.service: Deactivated successfully.
Jan 13 21:26:14.022175 systemd[1]: session-6.scope: Deactivated successfully.
Jan 13 21:26:14.023951 systemd-logind[1443]: Session 6 logged out. Waiting for processes to exit.
Jan 13 21:26:14.025303 systemd[1]: Started sshd@7-10.0.0.115:22-10.0.0.1:56694.service - OpenSSH per-connection server daemon (10.0.0.1:56694).
Jan 13 21:26:14.026244 systemd-logind[1443]: Removed session 6.
Jan 13 21:26:14.070071 sshd[1631]: Accepted publickey for core from 10.0.0.1 port 56694 ssh2: RSA SHA256:zXffdl8b9kLXcQDMnpblwTEmig+xqtREHhbtSBVXgEc
Jan 13 21:26:14.071812 sshd[1631]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Jan 13 21:26:14.076444 systemd-logind[1443]: New session 7 of user core.
Jan 13 21:26:14.090163 systemd[1]: Started session-7.scope - Session 7 of User core.
Jan 13 21:26:14.145459 sudo[1634]: core : PWD=/home/core ; USER=root ; COMMAND=/home/core/install.sh
Jan 13 21:26:14.145832 sudo[1634]: pam_unix(sudo:session): session opened for user root(uid=0) by core(uid=500)
Jan 13 21:26:14.174242 systemd[1]: Starting coreos-metadata.service - QEMU metadata agent...
Jan 13 21:26:14.195255 systemd[1]: coreos-metadata.service: Deactivated successfully.
Jan 13 21:26:14.195635 systemd[1]: Finished coreos-metadata.service - QEMU metadata agent.
Jan 13 21:26:14.732597 systemd[1]: Stopped kubelet.service - kubelet: The Kubernetes Node Agent.
Jan 13 21:26:14.732781 systemd[1]: kubelet.service: Consumed 1.511s CPU time.
Jan 13 21:26:14.751363 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent...
Jan 13 21:26:14.778599 systemd[1]: Reloading requested from client PID 1674 ('systemctl') (unit session-7.scope)...
Jan 13 21:26:14.778618 systemd[1]: Reloading...
Jan 13 21:26:14.873838 zram_generator::config[1715]: No configuration found.
Jan 13 21:26:15.142569 systemd[1]: /usr/lib/systemd/system/docker.socket:6: ListenStream= references a path below legacy directory /var/run/, updating /var/run/docker.sock → /run/docker.sock; please update the unit file accordingly.
Jan 13 21:26:15.221304 systemd[1]: Reloading finished in 442 ms.
Jan 13 21:26:15.268713 systemd[1]: kubelet.service: Control process exited, code=killed, status=15/TERM
Jan 13 21:26:15.268822 systemd[1]: kubelet.service: Failed with result 'signal'.
Jan 13 21:26:15.269127 systemd[1]: Stopped kubelet.service - kubelet: The Kubernetes Node Agent.
Jan 13 21:26:15.272051 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent...
Jan 13 21:26:15.429335 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent.
Jan 13 21:26:15.433972 (kubelet)[1761]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS
Jan 13 21:26:15.555812 kubelet[1761]: Flag --container-runtime-endpoint has been deprecated, This parameter should be set via the config file specified by the Kubelet's --config flag. See https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/ for more information.
Jan 13 21:26:15.555812 kubelet[1761]: Flag --pod-infra-container-image has been deprecated, will be removed in a future release. Image garbage collector will get sandbox image information from CRI.
Jan 13 21:26:15.555812 kubelet[1761]: Flag --volume-plugin-dir has been deprecated, This parameter should be set via the config file specified by the Kubelet's --config flag. See https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/ for more information.
Jan 13 21:26:15.556276 kubelet[1761]: I0113 21:26:15.555862 1761 server.go:206] "--pod-infra-container-image will not be pruned by the image garbage collector in kubelet and should also be set in the remote runtime"
Jan 13 21:26:15.858173 kubelet[1761]: I0113 21:26:15.858101 1761 server.go:486] "Kubelet version" kubeletVersion="v1.31.0"
Jan 13 21:26:15.858173 kubelet[1761]: I0113 21:26:15.858151 1761 server.go:488] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK=""
Jan 13 21:26:15.858503 kubelet[1761]: I0113 21:26:15.858477 1761 server.go:929] "Client rotation is on, will bootstrap in background"
Jan 13 21:26:15.878026 kubelet[1761]: I0113 21:26:15.877977 1761 dynamic_cafile_content.go:160] "Starting controller" name="client-ca-bundle::/etc/kubernetes/pki/ca.crt"
Jan 13 21:26:15.888950 kubelet[1761]: E0113 21:26:15.888879 1761 log.go:32] "RuntimeConfig from runtime service failed" err="rpc error: code = Unimplemented desc = unknown method RuntimeConfig for service runtime.v1.RuntimeService"
Jan 13 21:26:15.888950 kubelet[1761]: I0113 21:26:15.888927 1761 server.go:1403] "CRI implementation should be updated to support RuntimeConfig when KubeletCgroupDriverFromCRI feature gate has been enabled. Falling back to using cgroupDriver from kubelet config."
Jan 13 21:26:15.895644 kubelet[1761]: I0113 21:26:15.895600 1761 server.go:744] "--cgroups-per-qos enabled, but --cgroup-root was not specified. defaulting to /"
Jan 13 21:26:15.896777 kubelet[1761]: I0113 21:26:15.896724 1761 swap_util.go:113] "Swap is on" /proc/swaps contents="Filename\t\t\t\tType\t\tSize\t\tUsed\t\tPriority"
Jan 13 21:26:15.896977 kubelet[1761]: I0113 21:26:15.896909 1761 container_manager_linux.go:264] "Container manager verified user specified cgroup-root exists" cgroupRoot=[]
Jan 13 21:26:15.897159 kubelet[1761]: I0113 21:26:15.896965 1761 container_manager_linux.go:269] "Creating Container Manager object based on Node Config" nodeConfig={"NodeName":"10.0.0.115","RuntimeCgroupsName":"","SystemCgroupsName":"","KubeletCgroupsName":"","KubeletOOMScoreAdj":-999,"ContainerRuntime":"","CgroupsPerQOS":true,"CgroupRoot":"/","CgroupDriver":"systemd","KubeletRootDir":"/var/lib/kubelet","ProtectKernelDefaults":false,"KubeReservedCgroupName":"","SystemReservedCgroupName":"","ReservedSystemCPUs":{},"EnforceNodeAllocatable":{"pods":{}},"KubeReserved":null,"SystemReserved":null,"HardEvictionThresholds":[{"Signal":"memory.available","Operator":"LessThan","Value":{"Quantity":"100Mi","Percentage":0},"GracePeriod":0,"MinReclaim":null},{"Signal":"nodefs.available","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.1},"GracePeriod":0,"MinReclaim":null},{"Signal":"nodefs.inodesFree","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.05},"GracePeriod":0,"MinReclaim":null},{"Signal":"imagefs.available","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.15},"GracePeriod":0,"MinReclaim":null},{"Signal":"imagefs.inodesFree","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.05},"GracePeriod":0,"MinReclaim":null}],"QOSReserved":{},"CPUManagerPolicy":"none","CPUManagerPolicyOptions":null,"TopologyManagerScope":"container","CPUManagerReconcilePeriod":10000000000,"ExperimentalMemoryManagerPolicy":"None","ExperimentalMemoryManagerReservedMemory":null,"PodPidsLimit":-1,"EnforceCPULimits":true,"CPUCFSQuotaPeriod":100000000,"TopologyManagerPolicy":"none","TopologyManagerPolicyOptions":null,"CgroupVersion":2}
Jan 13 21:26:15.897159 kubelet[1761]: I0113 21:26:15.897158 1761 topology_manager.go:138] "Creating topology manager with none policy"
Jan 13 21:26:15.897255 kubelet[1761]: I0113 21:26:15.897168 1761 container_manager_linux.go:300] "Creating device plugin manager"
Jan 13 21:26:15.897327 kubelet[1761]: I0113 21:26:15.897301 1761 state_mem.go:36] "Initialized new in-memory state store"
Jan 13 21:26:15.899943 kubelet[1761]: I0113 21:26:15.899883 1761 kubelet.go:408] "Attempting to sync node with API server"
Jan 13 21:26:15.899943 kubelet[1761]: I0113 21:26:15.899915 1761 kubelet.go:303] "Adding static pod path" path="/etc/kubernetes/manifests"
Jan 13 21:26:15.900003 kubelet[1761]: I0113 21:26:15.899971 1761 kubelet.go:314] "Adding apiserver pod source"
Jan 13 21:26:15.900003 kubelet[1761]: I0113 21:26:15.899995 1761 apiserver.go:42] "Waiting for node sync before watching apiserver pods"
Jan 13 21:26:15.900106 kubelet[1761]: E0113 21:26:15.900068 1761 file.go:98] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests"
Jan 13 21:26:15.900162 kubelet[1761]: E0113 21:26:15.900143 1761 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests"
Jan 13 21:26:15.905286 kubelet[1761]: I0113 21:26:15.905251 1761 kuberuntime_manager.go:262] "Container runtime initialized" containerRuntime="containerd" version="v1.7.21" apiVersion="v1"
Jan 13 21:26:15.906908 kubelet[1761]: I0113 21:26:15.906881 1761 kubelet.go:837] "Not starting ClusterTrustBundle informer because we are in static kubelet mode"
Jan 13 21:26:15.907400 kubelet[1761]: W0113 21:26:15.907375 1761 probe.go:272] Flexvolume plugin directory at /opt/libexec/kubernetes/kubelet-plugins/volume/exec/ does not exist. Recreating.
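The huge nodeConfig line above is a Go struct dump, but its HardEvictionThresholds field is plain JSON and can be pulled back out for inspection. A small sketch (the array literal is copied verbatim from the log; the `describe` helper is mine, for illustration only):

```python
import json

# HardEvictionThresholds excerpt from the nodeConfig dump above,
# trimmed to the fields used here.
thresholds = json.loads("""
[{"Signal":"memory.available","Operator":"LessThan","Value":{"Quantity":"100Mi","Percentage":0}},
 {"Signal":"nodefs.available","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.1}},
 {"Signal":"nodefs.inodesFree","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.05}},
 {"Signal":"imagefs.available","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.15}},
 {"Signal":"imagefs.inodesFree","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.05}}]
""")

def describe(t):
    """Render one threshold as 'signal < amount', using the absolute
    Quantity when present and the Percentage otherwise."""
    v = t["Value"]
    amount = v["Quantity"] if v["Quantity"] is not None else f'{v["Percentage"]:.0%}'
    return f'{t["Signal"]} < {amount}'

for t in thresholds:
    print(describe(t))   # e.g. memory.available < 100Mi
```

These are the kubelet's default hard eviction thresholds: eviction starts when free memory drops below 100Mi, or node/image filesystem space and inodes fall below the listed percentages.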
Jan 13 21:26:15.908194 kubelet[1761]: I0113 21:26:15.908089 1761 server.go:1269] "Started kubelet"
Jan 13 21:26:15.909015 kubelet[1761]: I0113 21:26:15.908247 1761 ratelimit.go:55] "Setting rate limiting for endpoint" service="podresources" qps=100 burstTokens=10
Jan 13 21:26:15.909015 kubelet[1761]: W0113 21:26:15.908783 1761 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Service: services is forbidden: User "system:anonymous" cannot list resource "services" in API group "" at the cluster scope
Jan 13 21:26:15.909015 kubelet[1761]: E0113 21:26:15.908823 1761 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Service: failed to list *v1.Service: services is forbidden: User \"system:anonymous\" cannot list resource \"services\" in API group \"\" at the cluster scope" logger="UnhandledError"
Jan 13 21:26:15.909015 kubelet[1761]: I0113 21:26:15.908843 1761 server.go:236] "Starting to serve the podresources API" endpoint="unix:/var/lib/kubelet/pod-resources/kubelet.sock"
Jan 13 21:26:15.909015 kubelet[1761]: W0113 21:26:15.908912 1761 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Node: nodes "10.0.0.115" is forbidden: User "system:anonymous" cannot list resource "nodes" in API group "" at the cluster scope
Jan 13 21:26:15.909015 kubelet[1761]: E0113 21:26:15.908926 1761 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Node: failed to list *v1.Node: nodes \"10.0.0.115\" is forbidden: User \"system:anonymous\" cannot list resource \"nodes\" in API group \"\" at the cluster scope" logger="UnhandledError"
Jan 13 21:26:15.909015 kubelet[1761]: I0113 21:26:15.909003 1761 server.go:163] "Starting to listen" address="0.0.0.0" port=10250
Jan 13 21:26:15.909816 kubelet[1761]: I0113 21:26:15.909554 1761 fs_resource_analyzer.go:67] "Starting FS ResourceAnalyzer"
Jan 13 21:26:15.910342 kubelet[1761]: I0113 21:26:15.910312 1761 server.go:460] "Adding debug handlers to kubelet server"
Jan 13 21:26:15.911109 kubelet[1761]: I0113 21:26:15.911075 1761 dynamic_serving_content.go:135] "Starting controller" name="kubelet-server-cert-files::/var/lib/kubelet/pki/kubelet.crt::/var/lib/kubelet/pki/kubelet.key"
Jan 13 21:26:15.911309 kubelet[1761]: I0113 21:26:15.911282 1761 volume_manager.go:289] "Starting Kubelet Volume Manager"
Jan 13 21:26:15.911555 kubelet[1761]: I0113 21:26:15.911387 1761 desired_state_of_world_populator.go:146] "Desired state populator starts to run"
Jan 13 21:26:15.911555 kubelet[1761]: I0113 21:26:15.911464 1761 reconciler.go:26] "Reconciler: start to sync state"
Jan 13 21:26:15.911989 kubelet[1761]: E0113 21:26:15.911955 1761 kubelet.go:1478] "Image garbage collection failed once. Stats initialization may not have completed yet" err="invalid capacity 0 on image filesystem"
Jan 13 21:26:15.913288 kubelet[1761]: I0113 21:26:15.912793 1761 factory.go:221] Registration of the systemd container factory successfully
Jan 13 21:26:15.913288 kubelet[1761]: I0113 21:26:15.912913 1761 factory.go:219] Registration of the crio container factory failed: Get "http://%2Fvar%2Frun%2Fcrio%2Fcrio.sock/info": dial unix /var/run/crio/crio.sock: connect: no such file or directory
Jan 13 21:26:15.913288 kubelet[1761]: E0113 21:26:15.913032 1761 kubelet_node_status.go:453] "Error getting the current node from lister" err="node \"10.0.0.115\" not found"
Jan 13 21:26:15.914361 kubelet[1761]: I0113 21:26:15.914325 1761 factory.go:221] Registration of the containerd container factory successfully
Jan 13 21:26:15.935272 kubelet[1761]: I0113 21:26:15.935196 1761 cpu_manager.go:214] "Starting CPU manager" policy="none"
Jan 13 21:26:15.935272 kubelet[1761]: I0113 21:26:15.935215 1761 cpu_manager.go:215] "Reconciling" reconcilePeriod="10s"
Jan 13 21:26:15.935272 kubelet[1761]: I0113 21:26:15.935236 1761 state_mem.go:36] "Initialized new in-memory state store"
Jan 13 21:26:15.935747 kubelet[1761]: E0113 21:26:15.935700 1761 nodelease.go:49] "Failed to get node when trying to set owner ref to the node lease" err="nodes \"10.0.0.115\" not found" node="10.0.0.115"
Jan 13 21:26:16.013190 kubelet[1761]: E0113 21:26:16.013127 1761 kubelet_node_status.go:453] "Error getting the current node from lister" err="node \"10.0.0.115\" not found"
Jan 13 21:26:16.114518 kubelet[1761]: E0113 21:26:16.114338 1761 kubelet_node_status.go:453] "Error getting the current node from lister" err="node \"10.0.0.115\" not found"
Jan 13 21:26:16.215377 kubelet[1761]: E0113 21:26:16.215301 1761 kubelet_node_status.go:453] "Error getting the current node from lister" err="node \"10.0.0.115\" not found"
Jan 13 21:26:16.284327 kubelet[1761]: E0113 21:26:16.284253 1761 csi_plugin.go:305] Failed to initialize CSINode: error updating CSINode annotation: timed out waiting for the condition; caused by: nodes "10.0.0.115" not found
Jan 13 21:26:16.316047 kubelet[1761]: E0113 21:26:16.315978 1761 kubelet_node_status.go:453] "Error getting the current node from lister" err="node \"10.0.0.115\" not found"
Jan 13 21:26:16.327639 kubelet[1761]: I0113 21:26:16.327595 1761 policy_none.go:49] "None policy: Start"
Jan 13 21:26:16.328612 kubelet[1761]: I0113 21:26:16.328581 1761 memory_manager.go:170] "Starting memorymanager" policy="None"
Jan 13 21:26:16.328612 kubelet[1761]: I0113 21:26:16.328612 1761 state_mem.go:35] "Initializing new in-memory state store"
Jan 13 21:26:16.336650 systemd[1]: Created slice kubepods.slice - libcontainer container kubepods.slice.
Jan 13 21:26:16.346921 systemd[1]: Created slice kubepods-burstable.slice - libcontainer container kubepods-burstable.slice.
Jan 13 21:26:16.350373 systemd[1]: Created slice kubepods-besteffort.slice - libcontainer container kubepods-besteffort.slice.
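The kubepods.slice / kubepods-burstable.slice / kubepods-besteffort.slice hierarchy created above follows systemd's slice naming rules: `-` acts as a hierarchy separator inside a slice name, so when the kubelet (with `CgroupDriver: systemd`, as in the nodeConfig dump) later builds a per-pod slice it escapes dashes in the pod UID to underscores. A sketch of that mapping, assuming the standard kubepods naming scheme (the helper is mine, not kubelet code):

```python
def pod_slice_name(pod_uid, qos="besteffort"):
    """Build the per-pod systemd slice name under kubepods.slice.
    Dashes in the UID are escaped to underscores because systemd
    treats '-' as a hierarchy separator in slice names; Guaranteed
    pods (qos="") sit directly under kubepods.slice."""
    escaped = pod_uid.replace("-", "_")
    qos_part = f"-{qos}" if qos else ""
    return f"kubepods{qos_part}-pod{escaped}.slice"

# UID taken from a pod that appears later in this boot log.
print(pod_slice_name("7ff3e09a-9fc7-452a-b2a9-ad393cb0acc7"))
# kubepods-besteffort-pod7ff3e09a_9fc7_452a_b2a9_ad393cb0acc7.slice
```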
Jan 13 21:26:16.356307 kubelet[1761]: I0113 21:26:16.356046 1761 manager.go:510] "Failed to read data from checkpoint" checkpoint="kubelet_internal_checkpoint" err="checkpoint is not found"
Jan 13 21:26:16.356307 kubelet[1761]: I0113 21:26:16.356279 1761 eviction_manager.go:189] "Eviction manager: starting control loop"
Jan 13 21:26:16.356307 kubelet[1761]: I0113 21:26:16.356297 1761 container_log_manager.go:189] "Initializing container log rotate workers" workers=1 monitorPeriod="10s"
Jan 13 21:26:16.356803 kubelet[1761]: I0113 21:26:16.356572 1761 plugin_manager.go:118] "Starting Kubelet Plugin Manager"
Jan 13 21:26:16.358462 kubelet[1761]: E0113 21:26:16.358366 1761 eviction_manager.go:285] "Eviction manager: failed to get summary stats" err="failed to get node info: node \"10.0.0.115\" not found"
Jan 13 21:26:16.358593 kubelet[1761]: I0113 21:26:16.358538 1761 kubelet_network_linux.go:50] "Initialized iptables rules." protocol="IPv4"
Jan 13 21:26:16.360426 kubelet[1761]: I0113 21:26:16.360393 1761 kubelet_network_linux.go:50] "Initialized iptables rules." protocol="IPv6"
Jan 13 21:26:16.360521 kubelet[1761]: I0113 21:26:16.360439 1761 status_manager.go:217] "Starting to sync pod status with apiserver"
Jan 13 21:26:16.360521 kubelet[1761]: I0113 21:26:16.360465 1761 kubelet.go:2321] "Starting kubelet main sync loop"
Jan 13 21:26:16.360598 kubelet[1761]: E0113 21:26:16.360577 1761 kubelet.go:2345] "Skipping pod synchronization" err="PLEG is not healthy: pleg has yet to be successful"
Jan 13 21:26:16.457870 kubelet[1761]: I0113 21:26:16.457699 1761 kubelet_node_status.go:72] "Attempting to register node" node="10.0.0.115"
Jan 13 21:26:16.499429 kubelet[1761]: I0113 21:26:16.499341 1761 kubelet_node_status.go:75] "Successfully registered node" node="10.0.0.115"
Jan 13 21:26:16.499429 kubelet[1761]: E0113 21:26:16.499387 1761 kubelet_node_status.go:535] "Error updating node status, will retry" err="error getting node \"10.0.0.115\": node \"10.0.0.115\" not found"
Jan 13 21:26:16.530218 kubelet[1761]: E0113 21:26:16.530133 1761 kubelet_node_status.go:453] "Error getting the current node from lister" err="node \"10.0.0.115\" not found"
Jan 13 21:26:16.631135 kubelet[1761]: E0113 21:26:16.631036 1761 kubelet_node_status.go:453] "Error getting the current node from lister" err="node \"10.0.0.115\" not found"
Jan 13 21:26:16.732191 kubelet[1761]: E0113 21:26:16.732001 1761 kubelet_node_status.go:453] "Error getting the current node from lister" err="node \"10.0.0.115\" not found"
Jan 13 21:26:16.833120 kubelet[1761]: E0113 21:26:16.833025 1761 kubelet_node_status.go:453] "Error getting the current node from lister" err="node \"10.0.0.115\" not found"
Jan 13 21:26:16.860336 kubelet[1761]: I0113 21:26:16.860264 1761 transport.go:147] "Certificate rotation detected, shutting down client connections to start using new credentials"
Jan 13 21:26:16.860548 kubelet[1761]: W0113 21:26:16.860524 1761 reflector.go:484] k8s.io/client-go/informers/factory.go:160: watch of *v1.CSIDriver ended with: very short watch: k8s.io/client-go/informers/factory.go:160: Unexpected watch close - watch lasted less than a second and no items received
Jan 13 21:26:16.860614 kubelet[1761]: W0113 21:26:16.860579 1761 reflector.go:484] k8s.io/client-go/informers/factory.go:160: watch of *v1.RuntimeClass ended with: very short watch: k8s.io/client-go/informers/factory.go:160: Unexpected watch close - watch lasted less than a second and no items received
Jan 13 21:26:16.901075 kubelet[1761]: E0113 21:26:16.900989 1761 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests"
Jan 13 21:26:16.933698 kubelet[1761]: E0113 21:26:16.933604 1761 kubelet_node_status.go:453] "Error getting the current node from lister" err="node \"10.0.0.115\" not found"
Jan 13 21:26:16.954645 sudo[1634]: pam_unix(sudo:session): session closed for user root
Jan 13 21:26:16.956833 sshd[1631]: pam_unix(sshd:session): session closed for user core
Jan 13 21:26:16.961786 systemd[1]: sshd@7-10.0.0.115:22-10.0.0.1:56694.service: Deactivated successfully.
Jan 13 21:26:16.964430 systemd[1]: session-7.scope: Deactivated successfully.
Jan 13 21:26:16.965508 systemd-logind[1443]: Session 7 logged out. Waiting for processes to exit.
Jan 13 21:26:16.966699 systemd-logind[1443]: Removed session 7.
Jan 13 21:26:17.034686 kubelet[1761]: E0113 21:26:17.034465 1761 kubelet_node_status.go:453] "Error getting the current node from lister" err="node \"10.0.0.115\" not found"
Jan 13 21:26:17.136361 kubelet[1761]: I0113 21:26:17.136313 1761 kuberuntime_manager.go:1633] "Updating runtime config through cri with podcidr" CIDR="192.168.1.0/24"
Jan 13 21:26:17.136760 containerd[1452]: time="2025-01-13T21:26:17.136716747Z" level=info msg="No cni config template is specified, wait for other system components to drop the config."
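Once the node registers, the kubelet pushes its assigned PodCIDR (192.168.1.0/24 above) to the runtime over CRI, and containerd waits for a CNI config derived from it. A stdlib sketch of what that CIDR gives this node (variable names are illustrative; the /16 parent is a typical cluster-cidr layout, not something stated in the log):

```python
import ipaddress

# PodCIDR assigned to node 10.0.0.115, from the log above.
pod_cidr = ipaddress.ip_network("192.168.1.0/24")

# Addresses available for pods on this node (network and
# broadcast addresses excluded).
print(pod_cidr.num_addresses - 2)            # 254

# A plausible cluster-wide parent range this per-node /24 could
# have been carved from (assumption, for illustration).
print(pod_cidr.supernet(new_prefix=16))      # 192.168.0.0/16
```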
Jan 13 21:26:17.137226 kubelet[1761]: I0113 21:26:17.136913 1761 kubelet_network.go:61] "Updating Pod CIDR" originalPodCIDR="" newPodCIDR="192.168.1.0/24"
Jan 13 21:26:17.902071 kubelet[1761]: I0113 21:26:17.902022 1761 apiserver.go:52] "Watching apiserver"
Jan 13 21:26:17.902669 kubelet[1761]: E0113 21:26:17.902018 1761 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests"
Jan 13 21:26:17.911757 kubelet[1761]: I0113 21:26:17.911727 1761 desired_state_of_world_populator.go:154] "Finished populating initial desired state of world"
Jan 13 21:26:17.913359 systemd[1]: Created slice kubepods-besteffort-pod7ff3e09a_9fc7_452a_b2a9_ad393cb0acc7.slice - libcontainer container kubepods-besteffort-pod7ff3e09a_9fc7_452a_b2a9_ad393cb0acc7.slice.
Jan 13 21:26:17.924254 kubelet[1761]: I0113 21:26:17.923859 1761 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-proc-sys-net\" (UniqueName: \"kubernetes.io/host-path/49bb30a2-f3e9-4a7e-a6dc-b99f7fc3c8f4-host-proc-sys-net\") pod \"cilium-rf8fm\" (UID: \"49bb30a2-f3e9-4a7e-a6dc-b99f7fc3c8f4\") " pod="kube-system/cilium-rf8fm"
Jan 13 21:26:17.924254 kubelet[1761]: I0113 21:26:17.923905 1761 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-proc-sys-kernel\" (UniqueName: \"kubernetes.io/host-path/49bb30a2-f3e9-4a7e-a6dc-b99f7fc3c8f4-host-proc-sys-kernel\") pod \"cilium-rf8fm\" (UID: \"49bb30a2-f3e9-4a7e-a6dc-b99f7fc3c8f4\") " pod="kube-system/cilium-rf8fm"
Jan 13 21:26:17.924254 kubelet[1761]: I0113 21:26:17.923924 1761 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-4n8pv\" (UniqueName: \"kubernetes.io/projected/49bb30a2-f3e9-4a7e-a6dc-b99f7fc3c8f4-kube-api-access-4n8pv\") pod \"cilium-rf8fm\" (UID: \"49bb30a2-f3e9-4a7e-a6dc-b99f7fc3c8f4\") " pod="kube-system/cilium-rf8fm"
Jan 13 21:26:17.924254 kubelet[1761]: I0113 21:26:17.923957 1761 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-68g5w\" (UniqueName: \"kubernetes.io/projected/7ff3e09a-9fc7-452a-b2a9-ad393cb0acc7-kube-api-access-68g5w\") pod \"kube-proxy-sdwjr\" (UID: \"7ff3e09a-9fc7-452a-b2a9-ad393cb0acc7\") " pod="kube-system/kube-proxy-sdwjr"
Jan 13 21:26:17.924254 kubelet[1761]: I0113 21:26:17.923988 1761 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"hostproc\" (UniqueName: \"kubernetes.io/host-path/49bb30a2-f3e9-4a7e-a6dc-b99f7fc3c8f4-hostproc\") pod \"cilium-rf8fm\" (UID: \"49bb30a2-f3e9-4a7e-a6dc-b99f7fc3c8f4\") " pod="kube-system/cilium-rf8fm"
Jan 13 21:26:17.924479 kubelet[1761]: I0113 21:26:17.924013 1761 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"etc-cni-netd\" (UniqueName: \"kubernetes.io/host-path/49bb30a2-f3e9-4a7e-a6dc-b99f7fc3c8f4-etc-cni-netd\") pod \"cilium-rf8fm\" (UID: \"49bb30a2-f3e9-4a7e-a6dc-b99f7fc3c8f4\") " pod="kube-system/cilium-rf8fm"
Jan 13 21:26:17.924479 kubelet[1761]: I0113 21:26:17.924028 1761 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/49bb30a2-f3e9-4a7e-a6dc-b99f7fc3c8f4-lib-modules\") pod \"cilium-rf8fm\" (UID: \"49bb30a2-f3e9-4a7e-a6dc-b99f7fc3c8f4\") " pod="kube-system/cilium-rf8fm"
Jan 13 21:26:17.924479 kubelet[1761]: I0113 21:26:17.924041 1761 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"clustermesh-secrets\" (UniqueName: \"kubernetes.io/secret/49bb30a2-f3e9-4a7e-a6dc-b99f7fc3c8f4-clustermesh-secrets\") pod \"cilium-rf8fm\" (UID: \"49bb30a2-f3e9-4a7e-a6dc-b99f7fc3c8f4\") " pod="kube-system/cilium-rf8fm"
Jan 13 21:26:17.924479 kubelet[1761]: I0113 21:26:17.924055 1761 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"bpf-maps\" (UniqueName: \"kubernetes.io/host-path/49bb30a2-f3e9-4a7e-a6dc-b99f7fc3c8f4-bpf-maps\") pod \"cilium-rf8fm\" (UID: \"49bb30a2-f3e9-4a7e-a6dc-b99f7fc3c8f4\") " pod="kube-system/cilium-rf8fm"
Jan 13 21:26:17.924479 kubelet[1761]: I0113 21:26:17.924072 1761 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cilium-config-path\" (UniqueName: \"kubernetes.io/configmap/49bb30a2-f3e9-4a7e-a6dc-b99f7fc3c8f4-cilium-config-path\") pod \"cilium-rf8fm\" (UID: \"49bb30a2-f3e9-4a7e-a6dc-b99f7fc3c8f4\") " pod="kube-system/cilium-rf8fm"
Jan 13 21:26:17.924479 kubelet[1761]: I0113 21:26:17.924085 1761 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-proxy\" (UniqueName: \"kubernetes.io/configmap/7ff3e09a-9fc7-452a-b2a9-ad393cb0acc7-kube-proxy\") pod \"kube-proxy-sdwjr\" (UID: \"7ff3e09a-9fc7-452a-b2a9-ad393cb0acc7\") " pod="kube-system/kube-proxy-sdwjr"
Jan 13 21:26:17.924597 kubelet[1761]: I0113 21:26:17.924099 1761 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"xtables-lock\" (UniqueName: \"kubernetes.io/host-path/7ff3e09a-9fc7-452a-b2a9-ad393cb0acc7-xtables-lock\") pod \"kube-proxy-sdwjr\" (UID: \"7ff3e09a-9fc7-452a-b2a9-ad393cb0acc7\") " pod="kube-system/kube-proxy-sdwjr"
Jan 13 21:26:17.924597 kubelet[1761]: I0113 21:26:17.924113 1761 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/7ff3e09a-9fc7-452a-b2a9-ad393cb0acc7-lib-modules\") pod \"kube-proxy-sdwjr\" (UID: \"7ff3e09a-9fc7-452a-b2a9-ad393cb0acc7\") " pod="kube-system/kube-proxy-sdwjr"
Jan 13 21:26:17.924597 kubelet[1761]: I0113 21:26:17.924126 1761 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"hubble-tls\" (UniqueName: \"kubernetes.io/projected/49bb30a2-f3e9-4a7e-a6dc-b99f7fc3c8f4-hubble-tls\") pod
\"cilium-rf8fm\" (UID: \"49bb30a2-f3e9-4a7e-a6dc-b99f7fc3c8f4\") " pod="kube-system/cilium-rf8fm" Jan 13 21:26:17.924597 kubelet[1761]: I0113 21:26:17.924141 1761 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cilium-run\" (UniqueName: \"kubernetes.io/host-path/49bb30a2-f3e9-4a7e-a6dc-b99f7fc3c8f4-cilium-run\") pod \"cilium-rf8fm\" (UID: \"49bb30a2-f3e9-4a7e-a6dc-b99f7fc3c8f4\") " pod="kube-system/cilium-rf8fm" Jan 13 21:26:17.924597 kubelet[1761]: I0113 21:26:17.924157 1761 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cilium-cgroup\" (UniqueName: \"kubernetes.io/host-path/49bb30a2-f3e9-4a7e-a6dc-b99f7fc3c8f4-cilium-cgroup\") pod \"cilium-rf8fm\" (UID: \"49bb30a2-f3e9-4a7e-a6dc-b99f7fc3c8f4\") " pod="kube-system/cilium-rf8fm" Jan 13 21:26:17.924597 kubelet[1761]: I0113 21:26:17.924177 1761 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cni-path\" (UniqueName: \"kubernetes.io/host-path/49bb30a2-f3e9-4a7e-a6dc-b99f7fc3c8f4-cni-path\") pod \"cilium-rf8fm\" (UID: \"49bb30a2-f3e9-4a7e-a6dc-b99f7fc3c8f4\") " pod="kube-system/cilium-rf8fm" Jan 13 21:26:17.924773 kubelet[1761]: I0113 21:26:17.924211 1761 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"xtables-lock\" (UniqueName: \"kubernetes.io/host-path/49bb30a2-f3e9-4a7e-a6dc-b99f7fc3c8f4-xtables-lock\") pod \"cilium-rf8fm\" (UID: \"49bb30a2-f3e9-4a7e-a6dc-b99f7fc3c8f4\") " pod="kube-system/cilium-rf8fm" Jan 13 21:26:17.925756 systemd[1]: Created slice kubepods-burstable-pod49bb30a2_f3e9_4a7e_a6dc_b99f7fc3c8f4.slice - libcontainer container kubepods-burstable-pod49bb30a2_f3e9_4a7e_a6dc_b99f7fc3c8f4.slice. 
Jan 13 21:26:18.224805 kubelet[1761]: E0113 21:26:18.224679 1761 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Jan 13 21:26:18.225508 containerd[1452]: time="2025-01-13T21:26:18.225414113Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-proxy-sdwjr,Uid:7ff3e09a-9fc7-452a-b2a9-ad393cb0acc7,Namespace:kube-system,Attempt:0,}"
Jan 13 21:26:18.239758 kubelet[1761]: E0113 21:26:18.239738 1761 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Jan 13 21:26:18.240230 containerd[1452]: time="2025-01-13T21:26:18.240174414Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:cilium-rf8fm,Uid:49bb30a2-f3e9-4a7e-a6dc-b99f7fc3c8f4,Namespace:kube-system,Attempt:0,}"
Jan 13 21:26:18.902274 kubelet[1761]: E0113 21:26:18.902211 1761 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests"
Jan 13 21:26:19.016370 containerd[1452]: time="2025-01-13T21:26:19.016287052Z" level=info msg="ImageCreate event name:\"registry.k8s.io/pause:3.8\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"} labels:{key:\"io.cri-containerd.pinned\" value:\"pinned\"}"
Jan 13 21:26:19.017052 containerd[1452]: time="2025-01-13T21:26:19.016995892Z" level=info msg="stop pulling image registry.k8s.io/pause:3.8: active requests=0, bytes read=312056"
Jan 13 21:26:19.017962 containerd[1452]: time="2025-01-13T21:26:19.017911138Z" level=info msg="ImageUpdate event name:\"registry.k8s.io/pause:3.8\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"} labels:{key:\"io.cri-containerd.pinned\" value:\"pinned\"}"
Jan 13 21:26:19.018888 containerd[1452]: time="2025-01-13T21:26:19.018854157Z" level=info msg="ImageCreate event name:\"sha256:4873874c08efc72e9729683a83ffbb7502ee729e9a5ac097723806ea7fa13517\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"} labels:{key:\"io.cri-containerd.pinned\" value:\"pinned\"}"
Jan 13 21:26:19.019577 containerd[1452]: time="2025-01-13T21:26:19.019544141Z" level=info msg="stop pulling image registry.k8s.io/pause:3.8: active requests=0, bytes read=0"
Jan 13 21:26:19.021066 containerd[1452]: time="2025-01-13T21:26:19.021030107Z" level=info msg="ImageCreate event name:\"registry.k8s.io/pause@sha256:9001185023633d17a2f98ff69b6ff2615b8ea02a825adffa40422f51dfdcde9d\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"} labels:{key:\"io.cri-containerd.pinned\" value:\"pinned\"}"
Jan 13 21:26:19.023273 containerd[1452]: time="2025-01-13T21:26:19.023232177Z" level=info msg="Pulled image \"registry.k8s.io/pause:3.8\" with image id \"sha256:4873874c08efc72e9729683a83ffbb7502ee729e9a5ac097723806ea7fa13517\", repo tag \"registry.k8s.io/pause:3.8\", repo digest \"registry.k8s.io/pause@sha256:9001185023633d17a2f98ff69b6ff2615b8ea02a825adffa40422f51dfdcde9d\", size \"311286\" in 782.971781ms"
Jan 13 21:26:19.023844 containerd[1452]: time="2025-01-13T21:26:19.023812175Z" level=info msg="Pulled image \"registry.k8s.io/pause:3.8\" with image id \"sha256:4873874c08efc72e9729683a83ffbb7502ee729e9a5ac097723806ea7fa13517\", repo tag \"registry.k8s.io/pause:3.8\", repo digest \"registry.k8s.io/pause@sha256:9001185023633d17a2f98ff69b6ff2615b8ea02a825adffa40422f51dfdcde9d\", size \"311286\" in 798.291742ms"
Jan 13 21:26:19.029516 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount1988759020.mount: Deactivated successfully.
Jan 13 21:26:19.285097 containerd[1452]: time="2025-01-13T21:26:19.284625553Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1
Jan 13 21:26:19.285097 containerd[1452]: time="2025-01-13T21:26:19.284754084Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1
Jan 13 21:26:19.285097 containerd[1452]: time="2025-01-13T21:26:19.284774452Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
Jan 13 21:26:19.285097 containerd[1452]: time="2025-01-13T21:26:19.284891592Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
Jan 13 21:26:19.287850 containerd[1452]: time="2025-01-13T21:26:19.286442731Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1
Jan 13 21:26:19.287850 containerd[1452]: time="2025-01-13T21:26:19.286500519Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1
Jan 13 21:26:19.287850 containerd[1452]: time="2025-01-13T21:26:19.286511099Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
Jan 13 21:26:19.288240 containerd[1452]: time="2025-01-13T21:26:19.286681529Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
Jan 13 21:26:19.493104 systemd[1]: Started cri-containerd-4b05c64b546c5668ddc6a2b86b4f12372042145f005dd19c77f070985be56ee5.scope - libcontainer container 4b05c64b546c5668ddc6a2b86b4f12372042145f005dd19c77f070985be56ee5.
Jan 13 21:26:19.494913 systemd[1]: Started cri-containerd-a8fdf1c3f6acb37a3ae839abf59ef22b842457ff23c671de616bda46d9dc3e6a.scope - libcontainer container a8fdf1c3f6acb37a3ae839abf59ef22b842457ff23c671de616bda46d9dc3e6a.
Jan 13 21:26:19.521376 containerd[1452]: time="2025-01-13T21:26:19.521316723Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:cilium-rf8fm,Uid:49bb30a2-f3e9-4a7e-a6dc-b99f7fc3c8f4,Namespace:kube-system,Attempt:0,} returns sandbox id \"a8fdf1c3f6acb37a3ae839abf59ef22b842457ff23c671de616bda46d9dc3e6a\""
Jan 13 21:26:19.522820 kubelet[1761]: E0113 21:26:19.522781 1761 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Jan 13 21:26:19.524289 containerd[1452]: time="2025-01-13T21:26:19.524249493Z" level=info msg="PullImage \"quay.io/cilium/cilium:v1.12.5@sha256:06ce2b0a0a472e73334a7504ee5c5d8b2e2d7b72ef728ad94e564740dd505be5\""
Jan 13 21:26:19.526124 containerd[1452]: time="2025-01-13T21:26:19.526091808Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-proxy-sdwjr,Uid:7ff3e09a-9fc7-452a-b2a9-ad393cb0acc7,Namespace:kube-system,Attempt:0,} returns sandbox id \"4b05c64b546c5668ddc6a2b86b4f12372042145f005dd19c77f070985be56ee5\""
Jan 13 21:26:19.526707 kubelet[1761]: E0113 21:26:19.526678 1761 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Jan 13 21:26:19.902958 kubelet[1761]: E0113 21:26:19.902894 1761 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests"
Jan 13 21:26:20.903976 kubelet[1761]: E0113 21:26:20.903883 1761 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests"
Jan 13 21:26:21.904621 kubelet[1761]: E0113 21:26:21.904525 1761 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests"
Jan 13 21:26:22.905777 kubelet[1761]: E0113 21:26:22.905713 1761 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests"
Jan 13 21:26:23.613032 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount2161053662.mount: Deactivated successfully.
Jan 13 21:26:23.905972 kubelet[1761]: E0113 21:26:23.905802 1761 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests"
Jan 13 21:26:24.924379 kubelet[1761]: E0113 21:26:24.924275 1761 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests"
Jan 13 21:26:25.925454 kubelet[1761]: E0113 21:26:25.925391 1761 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests"
Jan 13 21:26:26.925887 kubelet[1761]: E0113 21:26:26.925806 1761 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests"
Jan 13 21:26:27.115638 containerd[1452]: time="2025-01-13T21:26:27.115570811Z" level=info msg="ImageCreate event name:\"quay.io/cilium/cilium@sha256:06ce2b0a0a472e73334a7504ee5c5d8b2e2d7b72ef728ad94e564740dd505be5\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Jan 13 21:26:27.116609 containerd[1452]: time="2025-01-13T21:26:27.116568101Z" level=info msg="stop pulling image quay.io/cilium/cilium@sha256:06ce2b0a0a472e73334a7504ee5c5d8b2e2d7b72ef728ad94e564740dd505be5: active requests=0, bytes read=166735311"
Jan 13 21:26:27.117908 containerd[1452]: time="2025-01-13T21:26:27.117873058Z" level=info msg="ImageCreate event name:\"sha256:3e35b3e9f295e7748482d40ed499b0ff7961f1f128d479d8e6682b3245bba69b\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Jan 13 21:26:27.119685 containerd[1452]: time="2025-01-13T21:26:27.119653587Z" level=info msg="Pulled image \"quay.io/cilium/cilium:v1.12.5@sha256:06ce2b0a0a472e73334a7504ee5c5d8b2e2d7b72ef728ad94e564740dd505be5\" with image id \"sha256:3e35b3e9f295e7748482d40ed499b0ff7961f1f128d479d8e6682b3245bba69b\", repo tag \"\", repo digest \"quay.io/cilium/cilium@sha256:06ce2b0a0a472e73334a7504ee5c5d8b2e2d7b72ef728ad94e564740dd505be5\", size \"166719855\" in 7.595362115s"
Jan 13 21:26:27.119685 containerd[1452]: time="2025-01-13T21:26:27.119681620Z" level=info msg="PullImage \"quay.io/cilium/cilium:v1.12.5@sha256:06ce2b0a0a472e73334a7504ee5c5d8b2e2d7b72ef728ad94e564740dd505be5\" returns image reference \"sha256:3e35b3e9f295e7748482d40ed499b0ff7961f1f128d479d8e6682b3245bba69b\""
Jan 13 21:26:27.120732 containerd[1452]: time="2025-01-13T21:26:27.120697365Z" level=info msg="PullImage \"registry.k8s.io/kube-proxy:v1.31.4\""
Jan 13 21:26:27.121785 containerd[1452]: time="2025-01-13T21:26:27.121754197Z" level=info msg="CreateContainer within sandbox \"a8fdf1c3f6acb37a3ae839abf59ef22b842457ff23c671de616bda46d9dc3e6a\" for container &ContainerMetadata{Name:mount-cgroup,Attempt:0,}"
Jan 13 21:26:27.140190 containerd[1452]: time="2025-01-13T21:26:27.140136530Z" level=info msg="CreateContainer within sandbox \"a8fdf1c3f6acb37a3ae839abf59ef22b842457ff23c671de616bda46d9dc3e6a\" for &ContainerMetadata{Name:mount-cgroup,Attempt:0,} returns container id \"5b69aef039d43e049753b1464b4aca853c8bcbf4b56bd80b1365a1da22bb6a16\""
Jan 13 21:26:27.140858 containerd[1452]: time="2025-01-13T21:26:27.140814582Z" level=info msg="StartContainer for \"5b69aef039d43e049753b1464b4aca853c8bcbf4b56bd80b1365a1da22bb6a16\""
Jan 13 21:26:27.278108 systemd[1]: Started cri-containerd-5b69aef039d43e049753b1464b4aca853c8bcbf4b56bd80b1365a1da22bb6a16.scope - libcontainer container 5b69aef039d43e049753b1464b4aca853c8bcbf4b56bd80b1365a1da22bb6a16.
Jan 13 21:26:27.308107 containerd[1452]: time="2025-01-13T21:26:27.308055688Z" level=info msg="StartContainer for \"5b69aef039d43e049753b1464b4aca853c8bcbf4b56bd80b1365a1da22bb6a16\" returns successfully"
Jan 13 21:26:27.322533 systemd[1]: cri-containerd-5b69aef039d43e049753b1464b4aca853c8bcbf4b56bd80b1365a1da22bb6a16.scope: Deactivated successfully.
Jan 13 21:26:27.383152 kubelet[1761]: E0113 21:26:27.383105 1761 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Jan 13 21:26:27.926626 kubelet[1761]: E0113 21:26:27.926555 1761 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests"
Jan 13 21:26:27.966840 containerd[1452]: time="2025-01-13T21:26:27.966621815Z" level=info msg="shim disconnected" id=5b69aef039d43e049753b1464b4aca853c8bcbf4b56bd80b1365a1da22bb6a16 namespace=k8s.io
Jan 13 21:26:27.966840 containerd[1452]: time="2025-01-13T21:26:27.966787686Z" level=warning msg="cleaning up after shim disconnected" id=5b69aef039d43e049753b1464b4aca853c8bcbf4b56bd80b1365a1da22bb6a16 namespace=k8s.io
Jan 13 21:26:27.966840 containerd[1452]: time="2025-01-13T21:26:27.966798647Z" level=info msg="cleaning up dead shim" namespace=k8s.io
Jan 13 21:26:28.134428 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-5b69aef039d43e049753b1464b4aca853c8bcbf4b56bd80b1365a1da22bb6a16-rootfs.mount: Deactivated successfully.
Jan 13 21:26:28.391832 kubelet[1761]: E0113 21:26:28.391662 1761 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Jan 13 21:26:28.396458 containerd[1452]: time="2025-01-13T21:26:28.394729920Z" level=info msg="CreateContainer within sandbox \"a8fdf1c3f6acb37a3ae839abf59ef22b842457ff23c671de616bda46d9dc3e6a\" for container &ContainerMetadata{Name:apply-sysctl-overwrites,Attempt:0,}"
Jan 13 21:26:28.490036 containerd[1452]: time="2025-01-13T21:26:28.489976852Z" level=info msg="CreateContainer within sandbox \"a8fdf1c3f6acb37a3ae839abf59ef22b842457ff23c671de616bda46d9dc3e6a\" for &ContainerMetadata{Name:apply-sysctl-overwrites,Attempt:0,} returns container id \"32e1c753291d22e33aa1c7bdfd16710f6107022d77999f77e14554f3acf8757f\""
Jan 13 21:26:28.490568 containerd[1452]: time="2025-01-13T21:26:28.490529268Z" level=info msg="StartContainer for \"32e1c753291d22e33aa1c7bdfd16710f6107022d77999f77e14554f3acf8757f\""
Jan 13 21:26:28.605074 systemd[1]: Started cri-containerd-32e1c753291d22e33aa1c7bdfd16710f6107022d77999f77e14554f3acf8757f.scope - libcontainer container 32e1c753291d22e33aa1c7bdfd16710f6107022d77999f77e14554f3acf8757f.
Jan 13 21:26:28.640725 containerd[1452]: time="2025-01-13T21:26:28.640680526Z" level=info msg="StartContainer for \"32e1c753291d22e33aa1c7bdfd16710f6107022d77999f77e14554f3acf8757f\" returns successfully"
Jan 13 21:26:28.655117 systemd[1]: systemd-sysctl.service: Deactivated successfully.
Jan 13 21:26:28.655359 systemd[1]: Stopped systemd-sysctl.service - Apply Kernel Variables.
Jan 13 21:26:28.655446 systemd[1]: Stopping systemd-sysctl.service - Apply Kernel Variables...
Jan 13 21:26:28.666272 systemd[1]: Starting systemd-sysctl.service - Apply Kernel Variables...
Jan 13 21:26:28.666512 systemd[1]: cri-containerd-32e1c753291d22e33aa1c7bdfd16710f6107022d77999f77e14554f3acf8757f.scope: Deactivated successfully.
Jan 13 21:26:28.692676 systemd[1]: Finished systemd-sysctl.service - Apply Kernel Variables.
Jan 13 21:26:28.800888 containerd[1452]: time="2025-01-13T21:26:28.800807000Z" level=info msg="shim disconnected" id=32e1c753291d22e33aa1c7bdfd16710f6107022d77999f77e14554f3acf8757f namespace=k8s.io
Jan 13 21:26:28.800888 containerd[1452]: time="2025-01-13T21:26:28.800862684Z" level=warning msg="cleaning up after shim disconnected" id=32e1c753291d22e33aa1c7bdfd16710f6107022d77999f77e14554f3acf8757f namespace=k8s.io
Jan 13 21:26:28.800888 containerd[1452]: time="2025-01-13T21:26:28.800872082Z" level=info msg="cleaning up dead shim" namespace=k8s.io
Jan 13 21:26:28.927219 kubelet[1761]: E0113 21:26:28.926848 1761 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests"
Jan 13 21:26:29.134888 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-32e1c753291d22e33aa1c7bdfd16710f6107022d77999f77e14554f3acf8757f-rootfs.mount: Deactivated successfully.
Jan 13 21:26:29.135064 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount1116155342.mount: Deactivated successfully.
Jan 13 21:26:29.395326 kubelet[1761]: E0113 21:26:29.395287 1761 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Jan 13 21:26:29.397308 containerd[1452]: time="2025-01-13T21:26:29.397260774Z" level=info msg="CreateContainer within sandbox \"a8fdf1c3f6acb37a3ae839abf59ef22b842457ff23c671de616bda46d9dc3e6a\" for container &ContainerMetadata{Name:mount-bpf-fs,Attempt:0,}"
Jan 13 21:26:29.533203 containerd[1452]: time="2025-01-13T21:26:29.533133994Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-proxy:v1.31.4\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Jan 13 21:26:29.539531 containerd[1452]: time="2025-01-13T21:26:29.539428649Z" level=info msg="stop pulling image registry.k8s.io/kube-proxy:v1.31.4: active requests=0, bytes read=30230243"
Jan 13 21:26:29.541150 containerd[1452]: time="2025-01-13T21:26:29.541106816Z" level=info msg="ImageCreate event name:\"sha256:ebf80573666f86f115452db568feb34f6f771c3bdc7bfed14b9577f992cfa300\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Jan 13 21:26:29.546194 containerd[1452]: time="2025-01-13T21:26:29.546139504Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-proxy@sha256:1739b3febca392035bf6edfe31efdfa55226be7b57389b2001ae357f7dcb99cf\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Jan 13 21:26:29.546498 containerd[1452]: time="2025-01-13T21:26:29.546453022Z" level=info msg="CreateContainer within sandbox \"a8fdf1c3f6acb37a3ae839abf59ef22b842457ff23c671de616bda46d9dc3e6a\" for &ContainerMetadata{Name:mount-bpf-fs,Attempt:0,} returns container id \"b5297dfb2d3639acf59012412fb2d2d2b7c132946e116b17cbaefb0d3c8449a0\""
Jan 13 21:26:29.547344 containerd[1452]: time="2025-01-13T21:26:29.546760739Z" level=info msg="Pulled image \"registry.k8s.io/kube-proxy:v1.31.4\" with image id \"sha256:ebf80573666f86f115452db568feb34f6f771c3bdc7bfed14b9577f992cfa300\", repo tag \"registry.k8s.io/kube-proxy:v1.31.4\", repo digest \"registry.k8s.io/kube-proxy@sha256:1739b3febca392035bf6edfe31efdfa55226be7b57389b2001ae357f7dcb99cf\", size \"30229262\" in 2.426036274s"
Jan 13 21:26:29.547344 containerd[1452]: time="2025-01-13T21:26:29.546796907Z" level=info msg="PullImage \"registry.k8s.io/kube-proxy:v1.31.4\" returns image reference \"sha256:ebf80573666f86f115452db568feb34f6f771c3bdc7bfed14b9577f992cfa300\""
Jan 13 21:26:29.547344 containerd[1452]: time="2025-01-13T21:26:29.547042087Z" level=info msg="StartContainer for \"b5297dfb2d3639acf59012412fb2d2d2b7c132946e116b17cbaefb0d3c8449a0\""
Jan 13 21:26:29.552054 containerd[1452]: time="2025-01-13T21:26:29.551344826Z" level=info msg="CreateContainer within sandbox \"4b05c64b546c5668ddc6a2b86b4f12372042145f005dd19c77f070985be56ee5\" for container &ContainerMetadata{Name:kube-proxy,Attempt:0,}"
Jan 13 21:26:29.567653 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount36879470.mount: Deactivated successfully.
Jan 13 21:26:29.572909 containerd[1452]: time="2025-01-13T21:26:29.572852541Z" level=info msg="CreateContainer within sandbox \"4b05c64b546c5668ddc6a2b86b4f12372042145f005dd19c77f070985be56ee5\" for &ContainerMetadata{Name:kube-proxy,Attempt:0,} returns container id \"7c0f3140d3650f79af9e7f9c5b0b82e9f5662e7ead724680b76b912d35df87f8\""
Jan 13 21:26:29.573756 containerd[1452]: time="2025-01-13T21:26:29.573709167Z" level=info msg="StartContainer for \"7c0f3140d3650f79af9e7f9c5b0b82e9f5662e7ead724680b76b912d35df87f8\""
Jan 13 21:26:29.585081 systemd[1]: Started cri-containerd-b5297dfb2d3639acf59012412fb2d2d2b7c132946e116b17cbaefb0d3c8449a0.scope - libcontainer container b5297dfb2d3639acf59012412fb2d2d2b7c132946e116b17cbaefb0d3c8449a0.
Jan 13 21:26:29.696075 systemd[1]: Started cri-containerd-7c0f3140d3650f79af9e7f9c5b0b82e9f5662e7ead724680b76b912d35df87f8.scope - libcontainer container 7c0f3140d3650f79af9e7f9c5b0b82e9f5662e7ead724680b76b912d35df87f8.
Jan 13 21:26:29.705356 containerd[1452]: time="2025-01-13T21:26:29.705285791Z" level=info msg="StartContainer for \"b5297dfb2d3639acf59012412fb2d2d2b7c132946e116b17cbaefb0d3c8449a0\" returns successfully"
Jan 13 21:26:29.707180 systemd[1]: cri-containerd-b5297dfb2d3639acf59012412fb2d2d2b7c132946e116b17cbaefb0d3c8449a0.scope: Deactivated successfully.
Jan 13 21:26:29.739216 containerd[1452]: time="2025-01-13T21:26:29.738789051Z" level=info msg="StartContainer for \"7c0f3140d3650f79af9e7f9c5b0b82e9f5662e7ead724680b76b912d35df87f8\" returns successfully"
Jan 13 21:26:29.927289 kubelet[1761]: E0113 21:26:29.927236 1761 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests"
Jan 13 21:26:29.993188 containerd[1452]: time="2025-01-13T21:26:29.993018592Z" level=info msg="shim disconnected" id=b5297dfb2d3639acf59012412fb2d2d2b7c132946e116b17cbaefb0d3c8449a0 namespace=k8s.io
Jan 13 21:26:29.993188 containerd[1452]: time="2025-01-13T21:26:29.993075759Z" level=warning msg="cleaning up after shim disconnected" id=b5297dfb2d3639acf59012412fb2d2d2b7c132946e116b17cbaefb0d3c8449a0 namespace=k8s.io
Jan 13 21:26:29.993188 containerd[1452]: time="2025-01-13T21:26:29.993085788Z" level=info msg="cleaning up dead shim" namespace=k8s.io
Jan 13 21:26:30.135475 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-b5297dfb2d3639acf59012412fb2d2d2b7c132946e116b17cbaefb0d3c8449a0-rootfs.mount: Deactivated successfully.
Jan 13 21:26:30.397470 kubelet[1761]: E0113 21:26:30.397365 1761 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Jan 13 21:26:30.399065 kubelet[1761]: E0113 21:26:30.399047 1761 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Jan 13 21:26:30.400581 containerd[1452]: time="2025-01-13T21:26:30.400529821Z" level=info msg="CreateContainer within sandbox \"a8fdf1c3f6acb37a3ae839abf59ef22b842457ff23c671de616bda46d9dc3e6a\" for container &ContainerMetadata{Name:clean-cilium-state,Attempt:0,}"
Jan 13 21:26:30.423875 kubelet[1761]: I0113 21:26:30.423796 1761 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/kube-proxy-sdwjr" podStartSLOduration=4.403511594 podStartE2EDuration="14.42377795s" podCreationTimestamp="2025-01-13 21:26:16 +0000 UTC" firstStartedPulling="2025-01-13 21:26:19.52732977 +0000 UTC m=+4.059548694" lastFinishedPulling="2025-01-13 21:26:29.547596126 +0000 UTC m=+14.079815050" observedRunningTime="2025-01-13 21:26:30.423639009 +0000 UTC m=+14.955857933" watchObservedRunningTime="2025-01-13 21:26:30.42377795 +0000 UTC m=+14.955996884"
Jan 13 21:26:30.434217 containerd[1452]: time="2025-01-13T21:26:30.434172603Z" level=info msg="CreateContainer within sandbox \"a8fdf1c3f6acb37a3ae839abf59ef22b842457ff23c671de616bda46d9dc3e6a\" for &ContainerMetadata{Name:clean-cilium-state,Attempt:0,} returns container id \"a901a52a476d3b0e6c95aae48525ac13b2d879a0928552550a1992db532dc11d\""
Jan 13 21:26:30.434679 containerd[1452]: time="2025-01-13T21:26:30.434650890Z" level=info msg="StartContainer for \"a901a52a476d3b0e6c95aae48525ac13b2d879a0928552550a1992db532dc11d\""
Jan 13 21:26:30.469071 systemd[1]: Started cri-containerd-a901a52a476d3b0e6c95aae48525ac13b2d879a0928552550a1992db532dc11d.scope - libcontainer container a901a52a476d3b0e6c95aae48525ac13b2d879a0928552550a1992db532dc11d.
Jan 13 21:26:30.493253 systemd[1]: cri-containerd-a901a52a476d3b0e6c95aae48525ac13b2d879a0928552550a1992db532dc11d.scope: Deactivated successfully.
Jan 13 21:26:30.495310 containerd[1452]: time="2025-01-13T21:26:30.495274261Z" level=info msg="StartContainer for \"a901a52a476d3b0e6c95aae48525ac13b2d879a0928552550a1992db532dc11d\" returns successfully"
Jan 13 21:26:30.519253 containerd[1452]: time="2025-01-13T21:26:30.519177949Z" level=info msg="shim disconnected" id=a901a52a476d3b0e6c95aae48525ac13b2d879a0928552550a1992db532dc11d namespace=k8s.io
Jan 13 21:26:30.519253 containerd[1452]: time="2025-01-13T21:26:30.519240186Z" level=warning msg="cleaning up after shim disconnected" id=a901a52a476d3b0e6c95aae48525ac13b2d879a0928552550a1992db532dc11d namespace=k8s.io
Jan 13 21:26:30.519253 containerd[1452]: time="2025-01-13T21:26:30.519252809Z" level=info msg="cleaning up dead shim" namespace=k8s.io
Jan 13 21:26:30.927442 kubelet[1761]: E0113 21:26:30.927374 1761 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests"
Jan 13 21:26:31.134963 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-a901a52a476d3b0e6c95aae48525ac13b2d879a0928552550a1992db532dc11d-rootfs.mount: Deactivated successfully.
Jan 13 21:26:31.402655 kubelet[1761]: E0113 21:26:31.402616 1761 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Jan 13 21:26:31.402655 kubelet[1761]: E0113 21:26:31.402645 1761 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Jan 13 21:26:31.404385 containerd[1452]: time="2025-01-13T21:26:31.404339863Z" level=info msg="CreateContainer within sandbox \"a8fdf1c3f6acb37a3ae839abf59ef22b842457ff23c671de616bda46d9dc3e6a\" for container &ContainerMetadata{Name:cilium-agent,Attempt:0,}"
Jan 13 21:26:31.424287 containerd[1452]: time="2025-01-13T21:26:31.424218723Z" level=info msg="CreateContainer within sandbox \"a8fdf1c3f6acb37a3ae839abf59ef22b842457ff23c671de616bda46d9dc3e6a\" for &ContainerMetadata{Name:cilium-agent,Attempt:0,} returns container id \"c4761e9c4b09964fe3dc6e79506619a151c6a8b2c0c52a0889d6f01a60ca7991\""
Jan 13 21:26:31.424831 containerd[1452]: time="2025-01-13T21:26:31.424804061Z" level=info msg="StartContainer for \"c4761e9c4b09964fe3dc6e79506619a151c6a8b2c0c52a0889d6f01a60ca7991\""
Jan 13 21:26:31.457085 systemd[1]: Started cri-containerd-c4761e9c4b09964fe3dc6e79506619a151c6a8b2c0c52a0889d6f01a60ca7991.scope - libcontainer container c4761e9c4b09964fe3dc6e79506619a151c6a8b2c0c52a0889d6f01a60ca7991.
Jan 13 21:26:31.486212 containerd[1452]: time="2025-01-13T21:26:31.486167239Z" level=info msg="StartContainer for \"c4761e9c4b09964fe3dc6e79506619a151c6a8b2c0c52a0889d6f01a60ca7991\" returns successfully" Jan 13 21:26:31.600123 kubelet[1761]: I0113 21:26:31.599914 1761 kubelet_node_status.go:488] "Fast updating node status as it just became ready" Jan 13 21:26:31.927637 kubelet[1761]: E0113 21:26:31.927593 1761 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Jan 13 21:26:31.968980 kernel: Initializing XFRM netlink socket Jan 13 21:26:32.407120 kubelet[1761]: E0113 21:26:32.407058 1761 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jan 13 21:26:32.928700 kubelet[1761]: E0113 21:26:32.928609 1761 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Jan 13 21:26:33.131485 kubelet[1761]: I0113 21:26:33.131405 1761 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/cilium-rf8fm" podStartSLOduration=9.534486724 podStartE2EDuration="17.131378127s" podCreationTimestamp="2025-01-13 21:26:16 +0000 UTC" firstStartedPulling="2025-01-13 21:26:19.523690605 +0000 UTC m=+4.055909529" lastFinishedPulling="2025-01-13 21:26:27.120582008 +0000 UTC m=+11.652800932" observedRunningTime="2025-01-13 21:26:32.577570399 +0000 UTC m=+17.109789354" watchObservedRunningTime="2025-01-13 21:26:33.131378127 +0000 UTC m=+17.663597051" Jan 13 21:26:33.138895 systemd[1]: Created slice kubepods-besteffort-podde627da9_f78d_43bb_9f2f_1a528608d8c7.slice - libcontainer container kubepods-besteffort-podde627da9_f78d_43bb_9f2f_1a528608d8c7.slice. 
Jan 13 21:26:33.224717 kubelet[1761]: I0113 21:26:33.224562 1761 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-vggkj\" (UniqueName: \"kubernetes.io/projected/de627da9-f78d-43bb-9f2f-1a528608d8c7-kube-api-access-vggkj\") pod \"nginx-deployment-8587fbcb89-vtsqd\" (UID: \"de627da9-f78d-43bb-9f2f-1a528608d8c7\") " pod="default/nginx-deployment-8587fbcb89-vtsqd" Jan 13 21:26:33.408908 kubelet[1761]: E0113 21:26:33.408862 1761 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jan 13 21:26:33.442643 containerd[1452]: time="2025-01-13T21:26:33.442533524Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:nginx-deployment-8587fbcb89-vtsqd,Uid:de627da9-f78d-43bb-9f2f-1a528608d8c7,Namespace:default,Attempt:0,}" Jan 13 21:26:33.687604 systemd-networkd[1400]: cilium_host: Link UP Jan 13 21:26:33.687870 systemd-networkd[1400]: cilium_net: Link UP Jan 13 21:26:33.688149 systemd-networkd[1400]: cilium_net: Gained carrier Jan 13 21:26:33.688425 systemd-networkd[1400]: cilium_host: Gained carrier Jan 13 21:26:33.688673 systemd-networkd[1400]: cilium_net: Gained IPv6LL Jan 13 21:26:33.805508 systemd-networkd[1400]: cilium_vxlan: Link UP Jan 13 21:26:33.805521 systemd-networkd[1400]: cilium_vxlan: Gained carrier Jan 13 21:26:33.928972 kubelet[1761]: E0113 21:26:33.928895 1761 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Jan 13 21:26:34.023990 kernel: NET: Registered PF_ALG protocol family Jan 13 21:26:34.410835 kubelet[1761]: E0113 21:26:34.410792 1761 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jan 13 21:26:34.684028 systemd-networkd[1400]: lxc_health: Link UP Jan 13 21:26:34.695176 
systemd-networkd[1400]: lxc_health: Gained carrier Jan 13 21:26:34.697655 systemd-networkd[1400]: cilium_host: Gained IPv6LL Jan 13 21:26:34.929247 kubelet[1761]: E0113 21:26:34.929185 1761 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Jan 13 21:26:35.019009 systemd-networkd[1400]: lxccadca74495c3: Link UP Jan 13 21:26:35.031984 kernel: eth0: renamed from tmp12044 Jan 13 21:26:35.040310 systemd-networkd[1400]: lxccadca74495c3: Gained carrier Jan 13 21:26:35.656211 systemd-networkd[1400]: cilium_vxlan: Gained IPv6LL Jan 13 21:26:35.900842 kubelet[1761]: E0113 21:26:35.900775 1761 file.go:104] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Jan 13 21:26:35.929646 kubelet[1761]: E0113 21:26:35.929465 1761 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Jan 13 21:26:36.241422 kubelet[1761]: E0113 21:26:36.241285 1761 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jan 13 21:26:36.413954 kubelet[1761]: E0113 21:26:36.413902 1761 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jan 13 21:26:36.680118 systemd-networkd[1400]: lxccadca74495c3: Gained IPv6LL Jan 13 21:26:36.680565 systemd-networkd[1400]: lxc_health: Gained IPv6LL Jan 13 21:26:36.930251 kubelet[1761]: E0113 21:26:36.930169 1761 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Jan 13 21:26:37.416052 kubelet[1761]: E0113 21:26:37.416013 1761 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jan 13 
21:26:37.930368 kubelet[1761]: E0113 21:26:37.930294 1761 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Jan 13 21:26:38.684393 containerd[1452]: time="2025-01-13T21:26:38.684272017Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Jan 13 21:26:38.684393 containerd[1452]: time="2025-01-13T21:26:38.684354745Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Jan 13 21:26:38.684393 containerd[1452]: time="2025-01-13T21:26:38.684376637Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Jan 13 21:26:38.685004 containerd[1452]: time="2025-01-13T21:26:38.684474915Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Jan 13 21:26:38.716119 systemd[1]: Started cri-containerd-12044f60a21a3f723a182b6d4192071101f14d0c88b7f6c3a114cd65a9778995.scope - libcontainer container 12044f60a21a3f723a182b6d4192071101f14d0c88b7f6c3a114cd65a9778995. 
Jan 13 21:26:38.731418 systemd-resolved[1330]: Failed to determine the local hostname and LLMNR/mDNS names, ignoring: No such device or address Jan 13 21:26:38.761529 containerd[1452]: time="2025-01-13T21:26:38.761470447Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:nginx-deployment-8587fbcb89-vtsqd,Uid:de627da9-f78d-43bb-9f2f-1a528608d8c7,Namespace:default,Attempt:0,} returns sandbox id \"12044f60a21a3f723a182b6d4192071101f14d0c88b7f6c3a114cd65a9778995\"" Jan 13 21:26:38.763514 containerd[1452]: time="2025-01-13T21:26:38.763471507Z" level=info msg="PullImage \"ghcr.io/flatcar/nginx:latest\"" Jan 13 21:26:38.930917 kubelet[1761]: E0113 21:26:38.930823 1761 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Jan 13 21:26:39.931076 kubelet[1761]: E0113 21:26:39.931013 1761 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Jan 13 21:26:40.931488 kubelet[1761]: E0113 21:26:40.931422 1761 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Jan 13 21:26:41.786804 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount1934178171.mount: Deactivated successfully. 
Jan 13 21:26:41.932401 kubelet[1761]: E0113 21:26:41.932337 1761 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests"
Jan 13 21:26:42.933360 kubelet[1761]: E0113 21:26:42.933220 1761 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests"
Jan 13 21:26:43.291255 containerd[1452]: time="2025-01-13T21:26:43.291116887Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/nginx:latest\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Jan 13 21:26:43.292098 containerd[1452]: time="2025-01-13T21:26:43.292064842Z" level=info msg="stop pulling image ghcr.io/flatcar/nginx:latest: active requests=0, bytes read=71036018"
Jan 13 21:26:43.293478 containerd[1452]: time="2025-01-13T21:26:43.293438506Z" level=info msg="ImageCreate event name:\"sha256:29ef6eaebfc53650f3a4609edbf9d35e866f56b2c5e01d32d93439031b300f0b\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Jan 13 21:26:43.296189 containerd[1452]: time="2025-01-13T21:26:43.296155358Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/nginx@sha256:eca1d1ff18c7af45f86b7e0b572090f563a676ddca3da2ecff678390366335ad\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Jan 13 21:26:43.297269 containerd[1452]: time="2025-01-13T21:26:43.297229603Z" level=info msg="Pulled image \"ghcr.io/flatcar/nginx:latest\" with image id \"sha256:29ef6eaebfc53650f3a4609edbf9d35e866f56b2c5e01d32d93439031b300f0b\", repo tag \"ghcr.io/flatcar/nginx:latest\", repo digest \"ghcr.io/flatcar/nginx@sha256:eca1d1ff18c7af45f86b7e0b572090f563a676ddca3da2ecff678390366335ad\", size \"71035896\" in 4.533715725s"
Jan 13 21:26:43.297269 containerd[1452]: time="2025-01-13T21:26:43.297259690Z" level=info msg="PullImage \"ghcr.io/flatcar/nginx:latest\" returns image reference \"sha256:29ef6eaebfc53650f3a4609edbf9d35e866f56b2c5e01d32d93439031b300f0b\""
Jan 13 21:26:43.299308 containerd[1452]: time="2025-01-13T21:26:43.299258766Z" level=info msg="CreateContainer within sandbox \"12044f60a21a3f723a182b6d4192071101f14d0c88b7f6c3a114cd65a9778995\" for container &ContainerMetadata{Name:nginx,Attempt:0,}"
Jan 13 21:26:43.313772 containerd[1452]: time="2025-01-13T21:26:43.313726937Z" level=info msg="CreateContainer within sandbox \"12044f60a21a3f723a182b6d4192071101f14d0c88b7f6c3a114cd65a9778995\" for &ContainerMetadata{Name:nginx,Attempt:0,} returns container id \"58adaf72847722a8decb426448b5e5470f67a5ca82ffb05bcf690f983e13f69f\""
Jan 13 21:26:43.314405 containerd[1452]: time="2025-01-13T21:26:43.314302993Z" level=info msg="StartContainer for \"58adaf72847722a8decb426448b5e5470f67a5ca82ffb05bcf690f983e13f69f\""
Jan 13 21:26:43.357183 systemd[1]: Started cri-containerd-58adaf72847722a8decb426448b5e5470f67a5ca82ffb05bcf690f983e13f69f.scope - libcontainer container 58adaf72847722a8decb426448b5e5470f67a5ca82ffb05bcf690f983e13f69f.
Jan 13 21:26:43.389205 containerd[1452]: time="2025-01-13T21:26:43.389165338Z" level=info msg="StartContainer for \"58adaf72847722a8decb426448b5e5470f67a5ca82ffb05bcf690f983e13f69f\" returns successfully"
Jan 13 21:26:43.503873 kubelet[1761]: I0113 21:26:43.503795 1761 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="default/nginx-deployment-8587fbcb89-vtsqd" podStartSLOduration=5.968813744 podStartE2EDuration="10.503775754s" podCreationTimestamp="2025-01-13 21:26:33 +0000 UTC" firstStartedPulling="2025-01-13 21:26:38.763128862 +0000 UTC m=+23.295347786" lastFinishedPulling="2025-01-13 21:26:43.298090872 +0000 UTC m=+27.830309796" observedRunningTime="2025-01-13 21:26:43.503648823 +0000 UTC m=+28.035867767" watchObservedRunningTime="2025-01-13 21:26:43.503775754 +0000 UTC m=+28.035994678"
Jan 13 21:26:43.934163 kubelet[1761]: E0113 21:26:43.934102 1761 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests"
Jan 13 21:26:44.934339 kubelet[1761]: E0113 21:26:44.934253 1761 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests"
Jan 13 21:26:45.346692 systemd[1]: Created slice kubepods-besteffort-podbe573e07_f5b6_432d_a733_77326da12a49.slice - libcontainer container kubepods-besteffort-podbe573e07_f5b6_432d_a733_77326da12a49.slice.
Jan 13 21:26:45.406561 kubelet[1761]: I0113 21:26:45.406484 1761 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-rvmbw\" (UniqueName: \"kubernetes.io/projected/be573e07-f5b6-432d-a733-77326da12a49-kube-api-access-rvmbw\") pod \"nfs-server-provisioner-0\" (UID: \"be573e07-f5b6-432d-a733-77326da12a49\") " pod="default/nfs-server-provisioner-0"
Jan 13 21:26:45.406561 kubelet[1761]: I0113 21:26:45.406536 1761 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"data\" (UniqueName: \"kubernetes.io/empty-dir/be573e07-f5b6-432d-a733-77326da12a49-data\") pod \"nfs-server-provisioner-0\" (UID: \"be573e07-f5b6-432d-a733-77326da12a49\") " pod="default/nfs-server-provisioner-0"
Jan 13 21:26:45.650102 containerd[1452]: time="2025-01-13T21:26:45.649971182Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:nfs-server-provisioner-0,Uid:be573e07-f5b6-432d-a733-77326da12a49,Namespace:default,Attempt:0,}"
Jan 13 21:26:45.684232 systemd-networkd[1400]: lxc72e9e6ecd1f0: Link UP
Jan 13 21:26:45.697972 kernel: eth0: renamed from tmpc6d98
Jan 13 21:26:45.705253 systemd-networkd[1400]: lxc72e9e6ecd1f0: Gained carrier
Jan 13 21:26:45.935278 kubelet[1761]: E0113 21:26:45.935129 1761 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests"
Jan 13 21:26:46.254647 containerd[1452]: time="2025-01-13T21:26:46.254414567Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1
Jan 13 21:26:46.254647 containerd[1452]: time="2025-01-13T21:26:46.254491854Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1
Jan 13 21:26:46.254647 containerd[1452]: time="2025-01-13T21:26:46.254530877Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
Jan 13 21:26:46.254814 containerd[1452]: time="2025-01-13T21:26:46.254625017Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
Jan 13 21:26:46.282104 systemd[1]: Started cri-containerd-c6d985c23f3d4104f2a4e87cf75228b3471e924ca805f398efe1f1ba15e9182e.scope - libcontainer container c6d985c23f3d4104f2a4e87cf75228b3471e924ca805f398efe1f1ba15e9182e.
Jan 13 21:26:46.292837 systemd-resolved[1330]: Failed to determine the local hostname and LLMNR/mDNS names, ignoring: No such device or address
Jan 13 21:26:46.316032 containerd[1452]: time="2025-01-13T21:26:46.315979053Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:nfs-server-provisioner-0,Uid:be573e07-f5b6-432d-a733-77326da12a49,Namespace:default,Attempt:0,} returns sandbox id \"c6d985c23f3d4104f2a4e87cf75228b3471e924ca805f398efe1f1ba15e9182e\""
Jan 13 21:26:46.317455 containerd[1452]: time="2025-01-13T21:26:46.317433584Z" level=info msg="PullImage \"registry.k8s.io/sig-storage/nfs-provisioner:v4.0.8\""
Jan 13 21:26:46.936031 kubelet[1761]: E0113 21:26:46.935982 1761 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests"
Jan 13 21:26:47.240837 systemd-networkd[1400]: lxc72e9e6ecd1f0: Gained IPv6LL
Jan 13 21:26:47.543573 update_engine[1445]: I20250113 21:26:47.543329 1445 update_attempter.cc:509] Updating boot flags...
Jan 13 21:26:47.649966 kernel: BTRFS warning: duplicate device /dev/vda3 devid 1 generation 37 scanned by (udev-worker) (2981)
Jan 13 21:26:47.936571 kubelet[1761]: E0113 21:26:47.936511 1761 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests"
Jan 13 21:26:48.727218 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount3106519095.mount: Deactivated successfully.
Jan 13 21:26:48.937282 kubelet[1761]: E0113 21:26:48.937200 1761 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests"
Jan 13 21:26:49.937655 kubelet[1761]: E0113 21:26:49.937536 1761 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests"
Jan 13 21:26:50.938074 kubelet[1761]: E0113 21:26:50.937999 1761 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests"
Jan 13 21:26:51.076736 containerd[1452]: time="2025-01-13T21:26:51.076677886Z" level=info msg="ImageCreate event name:\"registry.k8s.io/sig-storage/nfs-provisioner:v4.0.8\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Jan 13 21:26:51.077471 containerd[1452]: time="2025-01-13T21:26:51.077438095Z" level=info msg="stop pulling image registry.k8s.io/sig-storage/nfs-provisioner:v4.0.8: active requests=0, bytes read=91039406"
Jan 13 21:26:51.078874 containerd[1452]: time="2025-01-13T21:26:51.078845407Z" level=info msg="ImageCreate event name:\"sha256:fd0b16f70b66b72bcb2f91d556fa33eba02729c44ffc5f2c16130e7f9fbed3c4\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Jan 13 21:26:51.083055 containerd[1452]: time="2025-01-13T21:26:51.082999287Z" level=info msg="ImageCreate event name:\"registry.k8s.io/sig-storage/nfs-provisioner@sha256:c825f3d5e28bde099bd7a3daace28772d412c9157ad47fa752a9ad0baafc118d\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Jan 13 21:26:51.084026 containerd[1452]: time="2025-01-13T21:26:51.083978861Z" level=info msg="Pulled image \"registry.k8s.io/sig-storage/nfs-provisioner:v4.0.8\" with image id \"sha256:fd0b16f70b66b72bcb2f91d556fa33eba02729c44ffc5f2c16130e7f9fbed3c4\", repo tag \"registry.k8s.io/sig-storage/nfs-provisioner:v4.0.8\", repo digest \"registry.k8s.io/sig-storage/nfs-provisioner@sha256:c825f3d5e28bde099bd7a3daace28772d412c9157ad47fa752a9ad0baafc118d\", size \"91036984\" in 4.766514198s"
Jan 13 21:26:51.084026 containerd[1452]: time="2025-01-13T21:26:51.084015089Z" level=info msg="PullImage \"registry.k8s.io/sig-storage/nfs-provisioner:v4.0.8\" returns image reference \"sha256:fd0b16f70b66b72bcb2f91d556fa33eba02729c44ffc5f2c16130e7f9fbed3c4\""
Jan 13 21:26:51.086199 containerd[1452]: time="2025-01-13T21:26:51.086167372Z" level=info msg="CreateContainer within sandbox \"c6d985c23f3d4104f2a4e87cf75228b3471e924ca805f398efe1f1ba15e9182e\" for container &ContainerMetadata{Name:nfs-server-provisioner,Attempt:0,}"
Jan 13 21:26:51.102977 containerd[1452]: time="2025-01-13T21:26:51.102914721Z" level=info msg="CreateContainer within sandbox \"c6d985c23f3d4104f2a4e87cf75228b3471e924ca805f398efe1f1ba15e9182e\" for &ContainerMetadata{Name:nfs-server-provisioner,Attempt:0,} returns container id \"d7d776ded43e576832eaf5702bc53f2f5fabdacbb983b497369e3ecc13ad2fce\""
Jan 13 21:26:51.103410 containerd[1452]: time="2025-01-13T21:26:51.103357919Z" level=info msg="StartContainer for \"d7d776ded43e576832eaf5702bc53f2f5fabdacbb983b497369e3ecc13ad2fce\""
Jan 13 21:26:51.179095 systemd[1]: Started cri-containerd-d7d776ded43e576832eaf5702bc53f2f5fabdacbb983b497369e3ecc13ad2fce.scope - libcontainer container d7d776ded43e576832eaf5702bc53f2f5fabdacbb983b497369e3ecc13ad2fce.
Jan 13 21:26:51.298667 containerd[1452]: time="2025-01-13T21:26:51.298529299Z" level=info msg="StartContainer for \"d7d776ded43e576832eaf5702bc53f2f5fabdacbb983b497369e3ecc13ad2fce\" returns successfully"
Jan 13 21:26:51.519434 kubelet[1761]: I0113 21:26:51.519357 1761 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="default/nfs-server-provisioner-0" podStartSLOduration=1.751837493 podStartE2EDuration="6.5193383s" podCreationTimestamp="2025-01-13 21:26:45 +0000 UTC" firstStartedPulling="2025-01-13 21:26:46.317186565 +0000 UTC m=+30.849405489" lastFinishedPulling="2025-01-13 21:26:51.084687372 +0000 UTC m=+35.616906296" observedRunningTime="2025-01-13 21:26:51.519168319 +0000 UTC m=+36.051387233" watchObservedRunningTime="2025-01-13 21:26:51.5193383 +0000 UTC m=+36.051557224"
Jan 13 21:26:51.938622 kubelet[1761]: E0113 21:26:51.938549 1761 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests"
Jan 13 21:26:52.938966 kubelet[1761]: E0113 21:26:52.938867 1761 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests"
Jan 13 21:26:53.939472 kubelet[1761]: E0113 21:26:53.939421 1761 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests"
Jan 13 21:26:54.940056 kubelet[1761]: E0113 21:26:54.939963 1761 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests"
Jan 13 21:26:55.900407 kubelet[1761]: E0113 21:26:55.900330 1761 file.go:104] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests"
Jan 13 21:26:55.941173 kubelet[1761]: E0113 21:26:55.941106 1761 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests"
Jan 13 21:26:56.942328 kubelet[1761]: E0113 21:26:56.942268 1761 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests"
Jan 13 21:26:57.942887 kubelet[1761]: E0113 21:26:57.942829 1761 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests"
Jan 13 21:26:58.943445 kubelet[1761]: E0113 21:26:58.943393 1761 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests"
Jan 13 21:26:59.944491 kubelet[1761]: E0113 21:26:59.944434 1761 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests"
Jan 13 21:27:00.811223 systemd[1]: Created slice kubepods-besteffort-pod7f674d64_ed3e_4f5f_b60f_c524abe18b8b.slice - libcontainer container kubepods-besteffort-pod7f674d64_ed3e_4f5f_b60f_c524abe18b8b.slice.
Jan 13 21:27:00.945258 kubelet[1761]: E0113 21:27:00.945201 1761 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests"
Jan 13 21:27:00.995590 kubelet[1761]: I0113 21:27:00.995518 1761 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-9f295\" (UniqueName: \"kubernetes.io/projected/7f674d64-ed3e-4f5f-b60f-c524abe18b8b-kube-api-access-9f295\") pod \"test-pod-1\" (UID: \"7f674d64-ed3e-4f5f-b60f-c524abe18b8b\") " pod="default/test-pod-1"
Jan 13 21:27:00.995590 kubelet[1761]: I0113 21:27:00.995577 1761 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"pvc-720aef68-e339-437e-a5a2-2887a6889361\" (UniqueName: \"kubernetes.io/nfs/7f674d64-ed3e-4f5f-b60f-c524abe18b8b-pvc-720aef68-e339-437e-a5a2-2887a6889361\") pod \"test-pod-1\" (UID: \"7f674d64-ed3e-4f5f-b60f-c524abe18b8b\") " pod="default/test-pod-1"
Jan 13 21:27:01.149965 kernel: FS-Cache: Loaded
Jan 13 21:27:01.222261 kernel: RPC: Registered named UNIX socket transport module.
Jan 13 21:27:01.222347 kernel: RPC: Registered udp transport module.
Jan 13 21:27:01.222369 kernel: RPC: Registered tcp transport module.
Jan 13 21:27:01.222385 kernel: RPC: Registered tcp-with-tls transport module.
Jan 13 21:27:01.223725 kernel: RPC: Registered tcp NFSv4.1 backchannel transport module.
Jan 13 21:27:01.539387 kernel: NFS: Registering the id_resolver key type
Jan 13 21:27:01.539551 kernel: Key type id_resolver registered
Jan 13 21:27:01.539578 kernel: Key type id_legacy registered
Jan 13 21:27:01.569547 nfsidmap[3150]: nss_getpwnam: name 'root@nfs-server-provisioner.default.svc.cluster.local' does not map into domain 'localdomain'
Jan 13 21:27:01.577995 nfsidmap[3153]: nss_name_to_gid: name 'root@nfs-server-provisioner.default.svc.cluster.local' does not map into domain 'localdomain'
Jan 13 21:27:01.714912 containerd[1452]: time="2025-01-13T21:27:01.714842149Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:test-pod-1,Uid:7f674d64-ed3e-4f5f-b60f-c524abe18b8b,Namespace:default,Attempt:0,}"
Jan 13 21:27:01.863960 kernel: eth0: renamed from tmp07518
Jan 13 21:27:01.870180 systemd-networkd[1400]: lxcd0c9a2d36771: Link UP
Jan 13 21:27:01.872194 systemd-networkd[1400]: lxcd0c9a2d36771: Gained carrier
Jan 13 21:27:01.946235 kubelet[1761]: E0113 21:27:01.946180 1761 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests"
Jan 13 21:27:02.108603 containerd[1452]: time="2025-01-13T21:27:02.108478112Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1
Jan 13 21:27:02.108603 containerd[1452]: time="2025-01-13T21:27:02.108544106Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1
Jan 13 21:27:02.108603 containerd[1452]: time="2025-01-13T21:27:02.108556510Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
Jan 13 21:27:02.108844 containerd[1452]: time="2025-01-13T21:27:02.108653211Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
Jan 13 21:27:02.137199 systemd[1]: Started cri-containerd-075189fbaa28797d630615998906328127ad1fc91ed092f25e2db1198c60b7d5.scope - libcontainer container 075189fbaa28797d630615998906328127ad1fc91ed092f25e2db1198c60b7d5.
Jan 13 21:27:02.149903 systemd-resolved[1330]: Failed to determine the local hostname and LLMNR/mDNS names, ignoring: No such device or address
Jan 13 21:27:02.174097 containerd[1452]: time="2025-01-13T21:27:02.174038072Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:test-pod-1,Uid:7f674d64-ed3e-4f5f-b60f-c524abe18b8b,Namespace:default,Attempt:0,} returns sandbox id \"075189fbaa28797d630615998906328127ad1fc91ed092f25e2db1198c60b7d5\""
Jan 13 21:27:02.175677 containerd[1452]: time="2025-01-13T21:27:02.175636403Z" level=info msg="PullImage \"ghcr.io/flatcar/nginx:latest\""
Jan 13 21:27:02.563437 containerd[1452]: time="2025-01-13T21:27:02.563381120Z" level=info msg="ImageUpdate event name:\"ghcr.io/flatcar/nginx:latest\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Jan 13 21:27:02.564176 containerd[1452]: time="2025-01-13T21:27:02.564125572Z" level=info msg="stop pulling image ghcr.io/flatcar/nginx:latest: active requests=0, bytes read=61"
Jan 13 21:27:02.566632 containerd[1452]: time="2025-01-13T21:27:02.566587821Z" level=info msg="Pulled image \"ghcr.io/flatcar/nginx:latest\" with image id \"sha256:29ef6eaebfc53650f3a4609edbf9d35e866f56b2c5e01d32d93439031b300f0b\", repo tag \"ghcr.io/flatcar/nginx:latest\", repo digest \"ghcr.io/flatcar/nginx@sha256:eca1d1ff18c7af45f86b7e0b572090f563a676ddca3da2ecff678390366335ad\", size \"71035896\" in 390.919618ms"
Jan 13 21:27:02.566632 containerd[1452]: time="2025-01-13T21:27:02.566617456Z" level=info msg="PullImage \"ghcr.io/flatcar/nginx:latest\" returns image reference \"sha256:29ef6eaebfc53650f3a4609edbf9d35e866f56b2c5e01d32d93439031b300f0b\""
Jan 13 21:27:02.568682 containerd[1452]: time="2025-01-13T21:27:02.568635418Z" level=info msg="CreateContainer within sandbox \"075189fbaa28797d630615998906328127ad1fc91ed092f25e2db1198c60b7d5\" for container &ContainerMetadata{Name:test,Attempt:0,}"
Jan 13 21:27:02.585613 containerd[1452]: time="2025-01-13T21:27:02.585553765Z" level=info msg="CreateContainer within sandbox \"075189fbaa28797d630615998906328127ad1fc91ed092f25e2db1198c60b7d5\" for &ContainerMetadata{Name:test,Attempt:0,} returns container id \"9977df894dec0d4eb84fc17957aeb6abac0ddbad18aab11a705da37190ec5dc9\""
Jan 13 21:27:02.586275 containerd[1452]: time="2025-01-13T21:27:02.586238495Z" level=info msg="StartContainer for \"9977df894dec0d4eb84fc17957aeb6abac0ddbad18aab11a705da37190ec5dc9\""
Jan 13 21:27:02.623224 systemd[1]: Started cri-containerd-9977df894dec0d4eb84fc17957aeb6abac0ddbad18aab11a705da37190ec5dc9.scope - libcontainer container 9977df894dec0d4eb84fc17957aeb6abac0ddbad18aab11a705da37190ec5dc9.
Jan 13 21:27:02.651250 containerd[1452]: time="2025-01-13T21:27:02.651188146Z" level=info msg="StartContainer for \"9977df894dec0d4eb84fc17957aeb6abac0ddbad18aab11a705da37190ec5dc9\" returns successfully"
Jan 13 21:27:02.946637 kubelet[1761]: E0113 21:27:02.946486 1761 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests"
Jan 13 21:27:03.496150 systemd-networkd[1400]: lxcd0c9a2d36771: Gained IPv6LL
Jan 13 21:27:03.543442 kubelet[1761]: I0113 21:27:03.543367 1761 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="default/test-pod-1" podStartSLOduration=18.151351524 podStartE2EDuration="18.54334797s" podCreationTimestamp="2025-01-13 21:26:45 +0000 UTC" firstStartedPulling="2025-01-13 21:27:02.175308896 +0000 UTC m=+46.707527820" lastFinishedPulling="2025-01-13 21:27:02.567305342 +0000 UTC m=+47.099524266" observedRunningTime="2025-01-13 21:27:03.543200372 +0000 UTC m=+48.075419296" watchObservedRunningTime="2025-01-13 21:27:03.54334797 +0000 UTC m=+48.075566894"
Jan 13 21:27:03.947284 kubelet[1761]: E0113 21:27:03.947181 1761 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests"
Jan 13 21:27:04.947870 kubelet[1761]: E0113 21:27:04.947787 1761 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests"
Jan 13 21:27:05.948325 kubelet[1761]: E0113 21:27:05.948238 1761 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests"
Jan 13 21:27:06.949313 kubelet[1761]: E0113 21:27:06.949232 1761 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests"
Jan 13 21:27:07.949474 kubelet[1761]: E0113 21:27:07.949393 1761 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests"
Jan 13 21:27:08.128744 containerd[1452]: time="2025-01-13T21:27:08.128682023Z" level=error msg="failed to reload cni configuration after receiving fs change event(REMOVE \"/etc/cni/net.d/05-cilium.conf\")" error="cni config load failed: no network config found in /etc/cni/net.d: cni plugin not initialized: failed to load cni config"
Jan 13 21:27:08.136410 containerd[1452]: time="2025-01-13T21:27:08.136361658Z" level=info msg="StopContainer for \"c4761e9c4b09964fe3dc6e79506619a151c6a8b2c0c52a0889d6f01a60ca7991\" with timeout 2 (s)"
Jan 13 21:27:08.136631 containerd[1452]: time="2025-01-13T21:27:08.136607872Z" level=info msg="Stop container \"c4761e9c4b09964fe3dc6e79506619a151c6a8b2c0c52a0889d6f01a60ca7991\" with signal terminated"
Jan 13 21:27:08.142774 systemd-networkd[1400]: lxc_health: Link DOWN
Jan 13 21:27:08.142785 systemd-networkd[1400]: lxc_health: Lost carrier
Jan 13 21:27:08.173348 systemd[1]: cri-containerd-c4761e9c4b09964fe3dc6e79506619a151c6a8b2c0c52a0889d6f01a60ca7991.scope: Deactivated successfully.
Jan 13 21:27:08.173653 systemd[1]: cri-containerd-c4761e9c4b09964fe3dc6e79506619a151c6a8b2c0c52a0889d6f01a60ca7991.scope: Consumed 7.281s CPU time.
Jan 13 21:27:08.194271 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-c4761e9c4b09964fe3dc6e79506619a151c6a8b2c0c52a0889d6f01a60ca7991-rootfs.mount: Deactivated successfully.
Jan 13 21:27:08.204612 containerd[1452]: time="2025-01-13T21:27:08.204475218Z" level=info msg="shim disconnected" id=c4761e9c4b09964fe3dc6e79506619a151c6a8b2c0c52a0889d6f01a60ca7991 namespace=k8s.io
Jan 13 21:27:08.204612 containerd[1452]: time="2025-01-13T21:27:08.204540600Z" level=warning msg="cleaning up after shim disconnected" id=c4761e9c4b09964fe3dc6e79506619a151c6a8b2c0c52a0889d6f01a60ca7991 namespace=k8s.io
Jan 13 21:27:08.204612 containerd[1452]: time="2025-01-13T21:27:08.204549587Z" level=info msg="cleaning up dead shim" namespace=k8s.io
Jan 13 21:27:08.268892 containerd[1452]: time="2025-01-13T21:27:08.268835137Z" level=info msg="StopContainer for \"c4761e9c4b09964fe3dc6e79506619a151c6a8b2c0c52a0889d6f01a60ca7991\" returns successfully"
Jan 13 21:27:08.269691 containerd[1452]: time="2025-01-13T21:27:08.269641664Z" level=info msg="StopPodSandbox for \"a8fdf1c3f6acb37a3ae839abf59ef22b842457ff23c671de616bda46d9dc3e6a\""
Jan 13 21:27:08.269691 containerd[1452]: time="2025-01-13T21:27:08.269690175Z" level=info msg="Container to stop \"32e1c753291d22e33aa1c7bdfd16710f6107022d77999f77e14554f3acf8757f\" must be in running or unknown state, current state \"CONTAINER_EXITED\""
Jan 13 21:27:08.269691 containerd[1452]: time="2025-01-13T21:27:08.269704562Z" level=info msg="Container to stop \"5b69aef039d43e049753b1464b4aca853c8bcbf4b56bd80b1365a1da22bb6a16\" must be in running or unknown state, current state \"CONTAINER_EXITED\""
Jan 13 21:27:08.269922 containerd[1452]: time="2025-01-13T21:27:08.269714922Z" level=info msg="Container to stop \"a901a52a476d3b0e6c95aae48525ac13b2d879a0928552550a1992db532dc11d\" must be in running or unknown state, current state \"CONTAINER_EXITED\""
Jan 13 21:27:08.269922 containerd[1452]: time="2025-01-13T21:27:08.269724961Z" level=info msg="Container to stop \"c4761e9c4b09964fe3dc6e79506619a151c6a8b2c0c52a0889d6f01a60ca7991\" must be in running or unknown state, current state \"CONTAINER_EXITED\""
Jan 13 21:27:08.269922 containerd[1452]: time="2025-01-13T21:27:08.269735110Z" level=info msg="Container to stop \"b5297dfb2d3639acf59012412fb2d2d2b7c132946e116b17cbaefb0d3c8449a0\" must be in running or unknown state, current state \"CONTAINER_EXITED\""
Jan 13 21:27:08.271804 systemd[1]: run-containerd-io.containerd.grpc.v1.cri-sandboxes-a8fdf1c3f6acb37a3ae839abf59ef22b842457ff23c671de616bda46d9dc3e6a-shm.mount: Deactivated successfully.
Jan 13 21:27:08.276698 systemd[1]: cri-containerd-a8fdf1c3f6acb37a3ae839abf59ef22b842457ff23c671de616bda46d9dc3e6a.scope: Deactivated successfully.
Jan 13 21:27:08.298384 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-a8fdf1c3f6acb37a3ae839abf59ef22b842457ff23c671de616bda46d9dc3e6a-rootfs.mount: Deactivated successfully.
Jan 13 21:27:08.356864 containerd[1452]: time="2025-01-13T21:27:08.356787633Z" level=info msg="shim disconnected" id=a8fdf1c3f6acb37a3ae839abf59ef22b842457ff23c671de616bda46d9dc3e6a namespace=k8s.io
Jan 13 21:27:08.356864 containerd[1452]: time="2025-01-13T21:27:08.356844972Z" level=warning msg="cleaning up after shim disconnected" id=a8fdf1c3f6acb37a3ae839abf59ef22b842457ff23c671de616bda46d9dc3e6a namespace=k8s.io
Jan 13 21:27:08.356864 containerd[1452]: time="2025-01-13T21:27:08.356853558Z" level=info msg="cleaning up dead shim" namespace=k8s.io
Jan 13 21:27:08.371731 containerd[1452]: time="2025-01-13T21:27:08.371680737Z" level=info msg="TearDown network for sandbox \"a8fdf1c3f6acb37a3ae839abf59ef22b842457ff23c671de616bda46d9dc3e6a\" successfully"
Jan 13 21:27:08.371731 containerd[1452]: time="2025-01-13T21:27:08.371717045Z" level=info msg="StopPodSandbox for \"a8fdf1c3f6acb37a3ae839abf59ef22b842457ff23c671de616bda46d9dc3e6a\" returns successfully"
Jan 13 21:27:08.539714 kubelet[1761]: I0113 21:27:08.539529 1761 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"host-proc-sys-kernel\" (UniqueName: \"kubernetes.io/host-path/49bb30a2-f3e9-4a7e-a6dc-b99f7fc3c8f4-host-proc-sys-kernel\") pod
\"49bb30a2-f3e9-4a7e-a6dc-b99f7fc3c8f4\" (UID: \"49bb30a2-f3e9-4a7e-a6dc-b99f7fc3c8f4\") " Jan 13 21:27:08.539714 kubelet[1761]: I0113 21:27:08.539583 1761 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"clustermesh-secrets\" (UniqueName: \"kubernetes.io/secret/49bb30a2-f3e9-4a7e-a6dc-b99f7fc3c8f4-clustermesh-secrets\") pod \"49bb30a2-f3e9-4a7e-a6dc-b99f7fc3c8f4\" (UID: \"49bb30a2-f3e9-4a7e-a6dc-b99f7fc3c8f4\") " Jan 13 21:27:08.539714 kubelet[1761]: I0113 21:27:08.539603 1761 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"bpf-maps\" (UniqueName: \"kubernetes.io/host-path/49bb30a2-f3e9-4a7e-a6dc-b99f7fc3c8f4-bpf-maps\") pod \"49bb30a2-f3e9-4a7e-a6dc-b99f7fc3c8f4\" (UID: \"49bb30a2-f3e9-4a7e-a6dc-b99f7fc3c8f4\") " Jan 13 21:27:08.539714 kubelet[1761]: I0113 21:27:08.539621 1761 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"cilium-cgroup\" (UniqueName: \"kubernetes.io/host-path/49bb30a2-f3e9-4a7e-a6dc-b99f7fc3c8f4-cilium-cgroup\") pod \"49bb30a2-f3e9-4a7e-a6dc-b99f7fc3c8f4\" (UID: \"49bb30a2-f3e9-4a7e-a6dc-b99f7fc3c8f4\") " Jan 13 21:27:08.539714 kubelet[1761]: I0113 21:27:08.539638 1761 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-4n8pv\" (UniqueName: \"kubernetes.io/projected/49bb30a2-f3e9-4a7e-a6dc-b99f7fc3c8f4-kube-api-access-4n8pv\") pod \"49bb30a2-f3e9-4a7e-a6dc-b99f7fc3c8f4\" (UID: \"49bb30a2-f3e9-4a7e-a6dc-b99f7fc3c8f4\") " Jan 13 21:27:08.539714 kubelet[1761]: I0113 21:27:08.539657 1761 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"cilium-config-path\" (UniqueName: \"kubernetes.io/configmap/49bb30a2-f3e9-4a7e-a6dc-b99f7fc3c8f4-cilium-config-path\") pod \"49bb30a2-f3e9-4a7e-a6dc-b99f7fc3c8f4\" (UID: \"49bb30a2-f3e9-4a7e-a6dc-b99f7fc3c8f4\") " Jan 13 21:27:08.540043 kubelet[1761]: I0113 21:27:08.539672 1761 reconciler_common.go:159] "operationExecutor.UnmountVolume started for 
volume \"hubble-tls\" (UniqueName: \"kubernetes.io/projected/49bb30a2-f3e9-4a7e-a6dc-b99f7fc3c8f4-hubble-tls\") pod \"49bb30a2-f3e9-4a7e-a6dc-b99f7fc3c8f4\" (UID: \"49bb30a2-f3e9-4a7e-a6dc-b99f7fc3c8f4\") " Jan 13 21:27:08.540043 kubelet[1761]: I0113 21:27:08.539686 1761 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"cni-path\" (UniqueName: \"kubernetes.io/host-path/49bb30a2-f3e9-4a7e-a6dc-b99f7fc3c8f4-cni-path\") pod \"49bb30a2-f3e9-4a7e-a6dc-b99f7fc3c8f4\" (UID: \"49bb30a2-f3e9-4a7e-a6dc-b99f7fc3c8f4\") " Jan 13 21:27:08.540043 kubelet[1761]: I0113 21:27:08.539701 1761 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"host-proc-sys-net\" (UniqueName: \"kubernetes.io/host-path/49bb30a2-f3e9-4a7e-a6dc-b99f7fc3c8f4-host-proc-sys-net\") pod \"49bb30a2-f3e9-4a7e-a6dc-b99f7fc3c8f4\" (UID: \"49bb30a2-f3e9-4a7e-a6dc-b99f7fc3c8f4\") " Jan 13 21:27:08.540043 kubelet[1761]: I0113 21:27:08.539715 1761 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"etc-cni-netd\" (UniqueName: \"kubernetes.io/host-path/49bb30a2-f3e9-4a7e-a6dc-b99f7fc3c8f4-etc-cni-netd\") pod \"49bb30a2-f3e9-4a7e-a6dc-b99f7fc3c8f4\" (UID: \"49bb30a2-f3e9-4a7e-a6dc-b99f7fc3c8f4\") " Jan 13 21:27:08.540043 kubelet[1761]: I0113 21:27:08.539728 1761 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"hostproc\" (UniqueName: \"kubernetes.io/host-path/49bb30a2-f3e9-4a7e-a6dc-b99f7fc3c8f4-hostproc\") pod \"49bb30a2-f3e9-4a7e-a6dc-b99f7fc3c8f4\" (UID: \"49bb30a2-f3e9-4a7e-a6dc-b99f7fc3c8f4\") " Jan 13 21:27:08.540043 kubelet[1761]: I0113 21:27:08.539741 1761 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"cilium-run\" (UniqueName: \"kubernetes.io/host-path/49bb30a2-f3e9-4a7e-a6dc-b99f7fc3c8f4-cilium-run\") pod \"49bb30a2-f3e9-4a7e-a6dc-b99f7fc3c8f4\" (UID: \"49bb30a2-f3e9-4a7e-a6dc-b99f7fc3c8f4\") " Jan 13 21:27:08.540187 kubelet[1761]: I0113 21:27:08.539758 1761 
reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/49bb30a2-f3e9-4a7e-a6dc-b99f7fc3c8f4-lib-modules\") pod \"49bb30a2-f3e9-4a7e-a6dc-b99f7fc3c8f4\" (UID: \"49bb30a2-f3e9-4a7e-a6dc-b99f7fc3c8f4\") " Jan 13 21:27:08.540187 kubelet[1761]: I0113 21:27:08.539774 1761 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"xtables-lock\" (UniqueName: \"kubernetes.io/host-path/49bb30a2-f3e9-4a7e-a6dc-b99f7fc3c8f4-xtables-lock\") pod \"49bb30a2-f3e9-4a7e-a6dc-b99f7fc3c8f4\" (UID: \"49bb30a2-f3e9-4a7e-a6dc-b99f7fc3c8f4\") " Jan 13 21:27:08.540187 kubelet[1761]: I0113 21:27:08.539720 1761 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/49bb30a2-f3e9-4a7e-a6dc-b99f7fc3c8f4-host-proc-sys-kernel" (OuterVolumeSpecName: "host-proc-sys-kernel") pod "49bb30a2-f3e9-4a7e-a6dc-b99f7fc3c8f4" (UID: "49bb30a2-f3e9-4a7e-a6dc-b99f7fc3c8f4"). InnerVolumeSpecName "host-proc-sys-kernel". PluginName "kubernetes.io/host-path", VolumeGidValue "" Jan 13 21:27:08.540187 kubelet[1761]: I0113 21:27:08.539726 1761 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/49bb30a2-f3e9-4a7e-a6dc-b99f7fc3c8f4-bpf-maps" (OuterVolumeSpecName: "bpf-maps") pod "49bb30a2-f3e9-4a7e-a6dc-b99f7fc3c8f4" (UID: "49bb30a2-f3e9-4a7e-a6dc-b99f7fc3c8f4"). InnerVolumeSpecName "bpf-maps". PluginName "kubernetes.io/host-path", VolumeGidValue "" Jan 13 21:27:08.540187 kubelet[1761]: I0113 21:27:08.539838 1761 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/49bb30a2-f3e9-4a7e-a6dc-b99f7fc3c8f4-xtables-lock" (OuterVolumeSpecName: "xtables-lock") pod "49bb30a2-f3e9-4a7e-a6dc-b99f7fc3c8f4" (UID: "49bb30a2-f3e9-4a7e-a6dc-b99f7fc3c8f4"). InnerVolumeSpecName "xtables-lock". 
PluginName "kubernetes.io/host-path", VolumeGidValue "" Jan 13 21:27:08.540311 kubelet[1761]: I0113 21:27:08.539854 1761 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/49bb30a2-f3e9-4a7e-a6dc-b99f7fc3c8f4-cni-path" (OuterVolumeSpecName: "cni-path") pod "49bb30a2-f3e9-4a7e-a6dc-b99f7fc3c8f4" (UID: "49bb30a2-f3e9-4a7e-a6dc-b99f7fc3c8f4"). InnerVolumeSpecName "cni-path". PluginName "kubernetes.io/host-path", VolumeGidValue "" Jan 13 21:27:08.540311 kubelet[1761]: I0113 21:27:08.539869 1761 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/49bb30a2-f3e9-4a7e-a6dc-b99f7fc3c8f4-host-proc-sys-net" (OuterVolumeSpecName: "host-proc-sys-net") pod "49bb30a2-f3e9-4a7e-a6dc-b99f7fc3c8f4" (UID: "49bb30a2-f3e9-4a7e-a6dc-b99f7fc3c8f4"). InnerVolumeSpecName "host-proc-sys-net". PluginName "kubernetes.io/host-path", VolumeGidValue "" Jan 13 21:27:08.540311 kubelet[1761]: I0113 21:27:08.539882 1761 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/49bb30a2-f3e9-4a7e-a6dc-b99f7fc3c8f4-etc-cni-netd" (OuterVolumeSpecName: "etc-cni-netd") pod "49bb30a2-f3e9-4a7e-a6dc-b99f7fc3c8f4" (UID: "49bb30a2-f3e9-4a7e-a6dc-b99f7fc3c8f4"). InnerVolumeSpecName "etc-cni-netd". PluginName "kubernetes.io/host-path", VolumeGidValue "" Jan 13 21:27:08.540311 kubelet[1761]: I0113 21:27:08.539896 1761 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/49bb30a2-f3e9-4a7e-a6dc-b99f7fc3c8f4-hostproc" (OuterVolumeSpecName: "hostproc") pod "49bb30a2-f3e9-4a7e-a6dc-b99f7fc3c8f4" (UID: "49bb30a2-f3e9-4a7e-a6dc-b99f7fc3c8f4"). InnerVolumeSpecName "hostproc". 
PluginName "kubernetes.io/host-path", VolumeGidValue "" Jan 13 21:27:08.540311 kubelet[1761]: I0113 21:27:08.539908 1761 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/49bb30a2-f3e9-4a7e-a6dc-b99f7fc3c8f4-cilium-run" (OuterVolumeSpecName: "cilium-run") pod "49bb30a2-f3e9-4a7e-a6dc-b99f7fc3c8f4" (UID: "49bb30a2-f3e9-4a7e-a6dc-b99f7fc3c8f4"). InnerVolumeSpecName "cilium-run". PluginName "kubernetes.io/host-path", VolumeGidValue "" Jan 13 21:27:08.540430 kubelet[1761]: I0113 21:27:08.540020 1761 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/49bb30a2-f3e9-4a7e-a6dc-b99f7fc3c8f4-lib-modules" (OuterVolumeSpecName: "lib-modules") pod "49bb30a2-f3e9-4a7e-a6dc-b99f7fc3c8f4" (UID: "49bb30a2-f3e9-4a7e-a6dc-b99f7fc3c8f4"). InnerVolumeSpecName "lib-modules". PluginName "kubernetes.io/host-path", VolumeGidValue "" Jan 13 21:27:08.540430 kubelet[1761]: I0113 21:27:08.540210 1761 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/49bb30a2-f3e9-4a7e-a6dc-b99f7fc3c8f4-cilium-cgroup" (OuterVolumeSpecName: "cilium-cgroup") pod "49bb30a2-f3e9-4a7e-a6dc-b99f7fc3c8f4" (UID: "49bb30a2-f3e9-4a7e-a6dc-b99f7fc3c8f4"). InnerVolumeSpecName "cilium-cgroup". PluginName "kubernetes.io/host-path", VolumeGidValue "" Jan 13 21:27:08.544404 kubelet[1761]: I0113 21:27:08.543506 1761 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/49bb30a2-f3e9-4a7e-a6dc-b99f7fc3c8f4-kube-api-access-4n8pv" (OuterVolumeSpecName: "kube-api-access-4n8pv") pod "49bb30a2-f3e9-4a7e-a6dc-b99f7fc3c8f4" (UID: "49bb30a2-f3e9-4a7e-a6dc-b99f7fc3c8f4"). InnerVolumeSpecName "kube-api-access-4n8pv". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 13 21:27:08.544404 kubelet[1761]: I0113 21:27:08.544069 1761 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/49bb30a2-f3e9-4a7e-a6dc-b99f7fc3c8f4-cilium-config-path" (OuterVolumeSpecName: "cilium-config-path") pod "49bb30a2-f3e9-4a7e-a6dc-b99f7fc3c8f4" (UID: "49bb30a2-f3e9-4a7e-a6dc-b99f7fc3c8f4"). InnerVolumeSpecName "cilium-config-path". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 13 21:27:08.544404 kubelet[1761]: I0113 21:27:08.544326 1761 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/49bb30a2-f3e9-4a7e-a6dc-b99f7fc3c8f4-clustermesh-secrets" (OuterVolumeSpecName: "clustermesh-secrets") pod "49bb30a2-f3e9-4a7e-a6dc-b99f7fc3c8f4" (UID: "49bb30a2-f3e9-4a7e-a6dc-b99f7fc3c8f4"). InnerVolumeSpecName "clustermesh-secrets". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 13 21:27:08.545888 systemd[1]: var-lib-kubelet-pods-49bb30a2\x2df3e9\x2d4a7e\x2da6dc\x2db99f7fc3c8f4-volumes-kubernetes.io\x7eprojected-kube\x2dapi\x2daccess\x2d4n8pv.mount: Deactivated successfully. Jan 13 21:27:08.546062 systemd[1]: var-lib-kubelet-pods-49bb30a2\x2df3e9\x2d4a7e\x2da6dc\x2db99f7fc3c8f4-volumes-kubernetes.io\x7esecret-clustermesh\x2dsecrets.mount: Deactivated successfully. Jan 13 21:27:08.546178 kubelet[1761]: I0113 21:27:08.546075 1761 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/49bb30a2-f3e9-4a7e-a6dc-b99f7fc3c8f4-hubble-tls" (OuterVolumeSpecName: "hubble-tls") pod "49bb30a2-f3e9-4a7e-a6dc-b99f7fc3c8f4" (UID: "49bb30a2-f3e9-4a7e-a6dc-b99f7fc3c8f4"). InnerVolumeSpecName "hubble-tls". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 13 21:27:08.548903 kubelet[1761]: I0113 21:27:08.548877 1761 scope.go:117] "RemoveContainer" containerID="c4761e9c4b09964fe3dc6e79506619a151c6a8b2c0c52a0889d6f01a60ca7991" Jan 13 21:27:08.550307 containerd[1452]: time="2025-01-13T21:27:08.550272954Z" level=info msg="RemoveContainer for \"c4761e9c4b09964fe3dc6e79506619a151c6a8b2c0c52a0889d6f01a60ca7991\"" Jan 13 21:27:08.554478 systemd[1]: Removed slice kubepods-burstable-pod49bb30a2_f3e9_4a7e_a6dc_b99f7fc3c8f4.slice - libcontainer container kubepods-burstable-pod49bb30a2_f3e9_4a7e_a6dc_b99f7fc3c8f4.slice. Jan 13 21:27:08.554848 systemd[1]: kubepods-burstable-pod49bb30a2_f3e9_4a7e_a6dc_b99f7fc3c8f4.slice: Consumed 7.478s CPU time. Jan 13 21:27:08.606718 containerd[1452]: time="2025-01-13T21:27:08.606674754Z" level=info msg="RemoveContainer for \"c4761e9c4b09964fe3dc6e79506619a151c6a8b2c0c52a0889d6f01a60ca7991\" returns successfully" Jan 13 21:27:08.607072 kubelet[1761]: I0113 21:27:08.607029 1761 scope.go:117] "RemoveContainer" containerID="a901a52a476d3b0e6c95aae48525ac13b2d879a0928552550a1992db532dc11d" Jan 13 21:27:08.608253 containerd[1452]: time="2025-01-13T21:27:08.608196988Z" level=info msg="RemoveContainer for \"a901a52a476d3b0e6c95aae48525ac13b2d879a0928552550a1992db532dc11d\"" Jan 13 21:27:08.640517 kubelet[1761]: I0113 21:27:08.640470 1761 reconciler_common.go:288] "Volume detached for volume \"xtables-lock\" (UniqueName: \"kubernetes.io/host-path/49bb30a2-f3e9-4a7e-a6dc-b99f7fc3c8f4-xtables-lock\") on node \"10.0.0.115\" DevicePath \"\"" Jan 13 21:27:08.640517 kubelet[1761]: I0113 21:27:08.640494 1761 reconciler_common.go:288] "Volume detached for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/49bb30a2-f3e9-4a7e-a6dc-b99f7fc3c8f4-lib-modules\") on node \"10.0.0.115\" DevicePath \"\"" Jan 13 21:27:08.640517 kubelet[1761]: I0113 21:27:08.640503 1761 reconciler_common.go:288] "Volume detached for volume \"cilium-cgroup\" (UniqueName: 
\"kubernetes.io/host-path/49bb30a2-f3e9-4a7e-a6dc-b99f7fc3c8f4-cilium-cgroup\") on node \"10.0.0.115\" DevicePath \"\"" Jan 13 21:27:08.640517 kubelet[1761]: I0113 21:27:08.640513 1761 reconciler_common.go:288] "Volume detached for volume \"host-proc-sys-kernel\" (UniqueName: \"kubernetes.io/host-path/49bb30a2-f3e9-4a7e-a6dc-b99f7fc3c8f4-host-proc-sys-kernel\") on node \"10.0.0.115\" DevicePath \"\"" Jan 13 21:27:08.640517 kubelet[1761]: I0113 21:27:08.640524 1761 reconciler_common.go:288] "Volume detached for volume \"clustermesh-secrets\" (UniqueName: \"kubernetes.io/secret/49bb30a2-f3e9-4a7e-a6dc-b99f7fc3c8f4-clustermesh-secrets\") on node \"10.0.0.115\" DevicePath \"\"" Jan 13 21:27:08.640517 kubelet[1761]: I0113 21:27:08.640532 1761 reconciler_common.go:288] "Volume detached for volume \"bpf-maps\" (UniqueName: \"kubernetes.io/host-path/49bb30a2-f3e9-4a7e-a6dc-b99f7fc3c8f4-bpf-maps\") on node \"10.0.0.115\" DevicePath \"\"" Jan 13 21:27:08.640736 kubelet[1761]: I0113 21:27:08.640541 1761 reconciler_common.go:288] "Volume detached for volume \"hubble-tls\" (UniqueName: \"kubernetes.io/projected/49bb30a2-f3e9-4a7e-a6dc-b99f7fc3c8f4-hubble-tls\") on node \"10.0.0.115\" DevicePath \"\"" Jan 13 21:27:08.640736 kubelet[1761]: I0113 21:27:08.640549 1761 reconciler_common.go:288] "Volume detached for volume \"cni-path\" (UniqueName: \"kubernetes.io/host-path/49bb30a2-f3e9-4a7e-a6dc-b99f7fc3c8f4-cni-path\") on node \"10.0.0.115\" DevicePath \"\"" Jan 13 21:27:08.640736 kubelet[1761]: I0113 21:27:08.640557 1761 reconciler_common.go:288] "Volume detached for volume \"kube-api-access-4n8pv\" (UniqueName: \"kubernetes.io/projected/49bb30a2-f3e9-4a7e-a6dc-b99f7fc3c8f4-kube-api-access-4n8pv\") on node \"10.0.0.115\" DevicePath \"\"" Jan 13 21:27:08.640736 kubelet[1761]: I0113 21:27:08.640566 1761 reconciler_common.go:288] "Volume detached for volume \"cilium-config-path\" (UniqueName: \"kubernetes.io/configmap/49bb30a2-f3e9-4a7e-a6dc-b99f7fc3c8f4-cilium-config-path\") on 
node \"10.0.0.115\" DevicePath \"\"" Jan 13 21:27:08.640736 kubelet[1761]: I0113 21:27:08.640574 1761 reconciler_common.go:288] "Volume detached for volume \"cilium-run\" (UniqueName: \"kubernetes.io/host-path/49bb30a2-f3e9-4a7e-a6dc-b99f7fc3c8f4-cilium-run\") on node \"10.0.0.115\" DevicePath \"\"" Jan 13 21:27:08.640736 kubelet[1761]: I0113 21:27:08.640581 1761 reconciler_common.go:288] "Volume detached for volume \"host-proc-sys-net\" (UniqueName: \"kubernetes.io/host-path/49bb30a2-f3e9-4a7e-a6dc-b99f7fc3c8f4-host-proc-sys-net\") on node \"10.0.0.115\" DevicePath \"\"" Jan 13 21:27:08.640736 kubelet[1761]: I0113 21:27:08.640589 1761 reconciler_common.go:288] "Volume detached for volume \"etc-cni-netd\" (UniqueName: \"kubernetes.io/host-path/49bb30a2-f3e9-4a7e-a6dc-b99f7fc3c8f4-etc-cni-netd\") on node \"10.0.0.115\" DevicePath \"\"" Jan 13 21:27:08.640736 kubelet[1761]: I0113 21:27:08.640596 1761 reconciler_common.go:288] "Volume detached for volume \"hostproc\" (UniqueName: \"kubernetes.io/host-path/49bb30a2-f3e9-4a7e-a6dc-b99f7fc3c8f4-hostproc\") on node \"10.0.0.115\" DevicePath \"\"" Jan 13 21:27:08.667101 containerd[1452]: time="2025-01-13T21:27:08.667044939Z" level=info msg="RemoveContainer for \"a901a52a476d3b0e6c95aae48525ac13b2d879a0928552550a1992db532dc11d\" returns successfully" Jan 13 21:27:08.667288 kubelet[1761]: I0113 21:27:08.667264 1761 scope.go:117] "RemoveContainer" containerID="b5297dfb2d3639acf59012412fb2d2d2b7c132946e116b17cbaefb0d3c8449a0" Jan 13 21:27:08.668422 containerd[1452]: time="2025-01-13T21:27:08.668396531Z" level=info msg="RemoveContainer for \"b5297dfb2d3639acf59012412fb2d2d2b7c132946e116b17cbaefb0d3c8449a0\"" Jan 13 21:27:08.679783 containerd[1452]: time="2025-01-13T21:27:08.679723988Z" level=info msg="RemoveContainer for \"b5297dfb2d3639acf59012412fb2d2d2b7c132946e116b17cbaefb0d3c8449a0\" returns successfully" Jan 13 21:27:08.680028 kubelet[1761]: I0113 21:27:08.679896 1761 scope.go:117] "RemoveContainer" 
containerID="32e1c753291d22e33aa1c7bdfd16710f6107022d77999f77e14554f3acf8757f" Jan 13 21:27:08.681083 containerd[1452]: time="2025-01-13T21:27:08.681053389Z" level=info msg="RemoveContainer for \"32e1c753291d22e33aa1c7bdfd16710f6107022d77999f77e14554f3acf8757f\"" Jan 13 21:27:08.684903 containerd[1452]: time="2025-01-13T21:27:08.684871741Z" level=info msg="RemoveContainer for \"32e1c753291d22e33aa1c7bdfd16710f6107022d77999f77e14554f3acf8757f\" returns successfully" Jan 13 21:27:08.685125 kubelet[1761]: I0113 21:27:08.685039 1761 scope.go:117] "RemoveContainer" containerID="5b69aef039d43e049753b1464b4aca853c8bcbf4b56bd80b1365a1da22bb6a16" Jan 13 21:27:08.686677 containerd[1452]: time="2025-01-13T21:27:08.686633384Z" level=info msg="RemoveContainer for \"5b69aef039d43e049753b1464b4aca853c8bcbf4b56bd80b1365a1da22bb6a16\"" Jan 13 21:27:08.690299 containerd[1452]: time="2025-01-13T21:27:08.690268952Z" level=info msg="RemoveContainer for \"5b69aef039d43e049753b1464b4aca853c8bcbf4b56bd80b1365a1da22bb6a16\" returns successfully" Jan 13 21:27:08.690465 kubelet[1761]: I0113 21:27:08.690442 1761 scope.go:117] "RemoveContainer" containerID="c4761e9c4b09964fe3dc6e79506619a151c6a8b2c0c52a0889d6f01a60ca7991" Jan 13 21:27:08.690699 containerd[1452]: time="2025-01-13T21:27:08.690650641Z" level=error msg="ContainerStatus for \"c4761e9c4b09964fe3dc6e79506619a151c6a8b2c0c52a0889d6f01a60ca7991\" failed" error="rpc error: code = NotFound desc = an error occurred when try to find container \"c4761e9c4b09964fe3dc6e79506619a151c6a8b2c0c52a0889d6f01a60ca7991\": not found" Jan 13 21:27:08.690835 kubelet[1761]: E0113 21:27:08.690810 1761 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = an error occurred when try to find container \"c4761e9c4b09964fe3dc6e79506619a151c6a8b2c0c52a0889d6f01a60ca7991\": not found" containerID="c4761e9c4b09964fe3dc6e79506619a151c6a8b2c0c52a0889d6f01a60ca7991" Jan 13 21:27:08.690990 kubelet[1761]: I0113 21:27:08.690863 
1761 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"containerd","ID":"c4761e9c4b09964fe3dc6e79506619a151c6a8b2c0c52a0889d6f01a60ca7991"} err="failed to get container status \"c4761e9c4b09964fe3dc6e79506619a151c6a8b2c0c52a0889d6f01a60ca7991\": rpc error: code = NotFound desc = an error occurred when try to find container \"c4761e9c4b09964fe3dc6e79506619a151c6a8b2c0c52a0889d6f01a60ca7991\": not found" Jan 13 21:27:08.690990 kubelet[1761]: I0113 21:27:08.690974 1761 scope.go:117] "RemoveContainer" containerID="a901a52a476d3b0e6c95aae48525ac13b2d879a0928552550a1992db532dc11d" Jan 13 21:27:08.691191 containerd[1452]: time="2025-01-13T21:27:08.691152134Z" level=error msg="ContainerStatus for \"a901a52a476d3b0e6c95aae48525ac13b2d879a0928552550a1992db532dc11d\" failed" error="rpc error: code = NotFound desc = an error occurred when try to find container \"a901a52a476d3b0e6c95aae48525ac13b2d879a0928552550a1992db532dc11d\": not found" Jan 13 21:27:08.691317 kubelet[1761]: E0113 21:27:08.691282 1761 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = an error occurred when try to find container \"a901a52a476d3b0e6c95aae48525ac13b2d879a0928552550a1992db532dc11d\": not found" containerID="a901a52a476d3b0e6c95aae48525ac13b2d879a0928552550a1992db532dc11d" Jan 13 21:27:08.691349 kubelet[1761]: I0113 21:27:08.691314 1761 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"containerd","ID":"a901a52a476d3b0e6c95aae48525ac13b2d879a0928552550a1992db532dc11d"} err="failed to get container status \"a901a52a476d3b0e6c95aae48525ac13b2d879a0928552550a1992db532dc11d\": rpc error: code = NotFound desc = an error occurred when try to find container \"a901a52a476d3b0e6c95aae48525ac13b2d879a0928552550a1992db532dc11d\": not found" Jan 13 21:27:08.691349 kubelet[1761]: I0113 21:27:08.691337 1761 scope.go:117] "RemoveContainer" 
containerID="b5297dfb2d3639acf59012412fb2d2d2b7c132946e116b17cbaefb0d3c8449a0" Jan 13 21:27:08.691577 containerd[1452]: time="2025-01-13T21:27:08.691517561Z" level=error msg="ContainerStatus for \"b5297dfb2d3639acf59012412fb2d2d2b7c132946e116b17cbaefb0d3c8449a0\" failed" error="rpc error: code = NotFound desc = an error occurred when try to find container \"b5297dfb2d3639acf59012412fb2d2d2b7c132946e116b17cbaefb0d3c8449a0\": not found" Jan 13 21:27:08.691663 kubelet[1761]: E0113 21:27:08.691642 1761 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = an error occurred when try to find container \"b5297dfb2d3639acf59012412fb2d2d2b7c132946e116b17cbaefb0d3c8449a0\": not found" containerID="b5297dfb2d3639acf59012412fb2d2d2b7c132946e116b17cbaefb0d3c8449a0" Jan 13 21:27:08.691701 kubelet[1761]: I0113 21:27:08.691660 1761 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"containerd","ID":"b5297dfb2d3639acf59012412fb2d2d2b7c132946e116b17cbaefb0d3c8449a0"} err="failed to get container status \"b5297dfb2d3639acf59012412fb2d2d2b7c132946e116b17cbaefb0d3c8449a0\": rpc error: code = NotFound desc = an error occurred when try to find container \"b5297dfb2d3639acf59012412fb2d2d2b7c132946e116b17cbaefb0d3c8449a0\": not found" Jan 13 21:27:08.691701 kubelet[1761]: I0113 21:27:08.691672 1761 scope.go:117] "RemoveContainer" containerID="32e1c753291d22e33aa1c7bdfd16710f6107022d77999f77e14554f3acf8757f" Jan 13 21:27:08.691857 containerd[1452]: time="2025-01-13T21:27:08.691824208Z" level=error msg="ContainerStatus for \"32e1c753291d22e33aa1c7bdfd16710f6107022d77999f77e14554f3acf8757f\" failed" error="rpc error: code = NotFound desc = an error occurred when try to find container \"32e1c753291d22e33aa1c7bdfd16710f6107022d77999f77e14554f3acf8757f\": not found" Jan 13 21:27:08.691970 kubelet[1761]: E0113 21:27:08.691922 1761 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc 
= an error occurred when try to find container \"32e1c753291d22e33aa1c7bdfd16710f6107022d77999f77e14554f3acf8757f\": not found" containerID="32e1c753291d22e33aa1c7bdfd16710f6107022d77999f77e14554f3acf8757f" Jan 13 21:27:08.692011 kubelet[1761]: I0113 21:27:08.691972 1761 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"containerd","ID":"32e1c753291d22e33aa1c7bdfd16710f6107022d77999f77e14554f3acf8757f"} err="failed to get container status \"32e1c753291d22e33aa1c7bdfd16710f6107022d77999f77e14554f3acf8757f\": rpc error: code = NotFound desc = an error occurred when try to find container \"32e1c753291d22e33aa1c7bdfd16710f6107022d77999f77e14554f3acf8757f\": not found" Jan 13 21:27:08.692011 kubelet[1761]: I0113 21:27:08.691989 1761 scope.go:117] "RemoveContainer" containerID="5b69aef039d43e049753b1464b4aca853c8bcbf4b56bd80b1365a1da22bb6a16" Jan 13 21:27:08.692176 containerd[1452]: time="2025-01-13T21:27:08.692145452Z" level=error msg="ContainerStatus for \"5b69aef039d43e049753b1464b4aca853c8bcbf4b56bd80b1365a1da22bb6a16\" failed" error="rpc error: code = NotFound desc = an error occurred when try to find container \"5b69aef039d43e049753b1464b4aca853c8bcbf4b56bd80b1365a1da22bb6a16\": not found" Jan 13 21:27:08.692302 kubelet[1761]: E0113 21:27:08.692275 1761 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = an error occurred when try to find container \"5b69aef039d43e049753b1464b4aca853c8bcbf4b56bd80b1365a1da22bb6a16\": not found" containerID="5b69aef039d43e049753b1464b4aca853c8bcbf4b56bd80b1365a1da22bb6a16" Jan 13 21:27:08.692366 kubelet[1761]: I0113 21:27:08.692302 1761 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"containerd","ID":"5b69aef039d43e049753b1464b4aca853c8bcbf4b56bd80b1365a1da22bb6a16"} err="failed to get container status \"5b69aef039d43e049753b1464b4aca853c8bcbf4b56bd80b1365a1da22bb6a16\": rpc error: code = NotFound desc = an error occurred 
when try to find container \"5b69aef039d43e049753b1464b4aca853c8bcbf4b56bd80b1365a1da22bb6a16\": not found" Jan 13 21:27:08.950449 kubelet[1761]: E0113 21:27:08.950374 1761 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Jan 13 21:27:09.114040 systemd[1]: var-lib-kubelet-pods-49bb30a2\x2df3e9\x2d4a7e\x2da6dc\x2db99f7fc3c8f4-volumes-kubernetes.io\x7eprojected-hubble\x2dtls.mount: Deactivated successfully. Jan 13 21:27:09.951453 kubelet[1761]: E0113 21:27:09.951386 1761 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Jan 13 21:27:10.364619 kubelet[1761]: I0113 21:27:10.364558 1761 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="49bb30a2-f3e9-4a7e-a6dc-b99f7fc3c8f4" path="/var/lib/kubelet/pods/49bb30a2-f3e9-4a7e-a6dc-b99f7fc3c8f4/volumes" Jan 13 21:27:10.641989 kubelet[1761]: E0113 21:27:10.641814 1761 cpu_manager.go:395] "RemoveStaleState: removing container" podUID="49bb30a2-f3e9-4a7e-a6dc-b99f7fc3c8f4" containerName="clean-cilium-state" Jan 13 21:27:10.641989 kubelet[1761]: E0113 21:27:10.641837 1761 cpu_manager.go:395] "RemoveStaleState: removing container" podUID="49bb30a2-f3e9-4a7e-a6dc-b99f7fc3c8f4" containerName="mount-cgroup" Jan 13 21:27:10.641989 kubelet[1761]: E0113 21:27:10.641844 1761 cpu_manager.go:395] "RemoveStaleState: removing container" podUID="49bb30a2-f3e9-4a7e-a6dc-b99f7fc3c8f4" containerName="cilium-agent" Jan 13 21:27:10.641989 kubelet[1761]: E0113 21:27:10.641850 1761 cpu_manager.go:395] "RemoveStaleState: removing container" podUID="49bb30a2-f3e9-4a7e-a6dc-b99f7fc3c8f4" containerName="apply-sysctl-overwrites" Jan 13 21:27:10.641989 kubelet[1761]: E0113 21:27:10.641856 1761 cpu_manager.go:395] "RemoveStaleState: removing container" podUID="49bb30a2-f3e9-4a7e-a6dc-b99f7fc3c8f4" containerName="mount-bpf-fs" Jan 13 21:27:10.641989 kubelet[1761]: I0113 21:27:10.641876 1761 
memory_manager.go:354] "RemoveStaleState removing state" podUID="49bb30a2-f3e9-4a7e-a6dc-b99f7fc3c8f4" containerName="cilium-agent" Jan 13 21:27:10.647463 systemd[1]: Created slice kubepods-besteffort-pod23852b1c_4b91_472b_8ad9_c33158887379.slice - libcontainer container kubepods-besteffort-pod23852b1c_4b91_472b_8ad9_c33158887379.slice. Jan 13 21:27:10.653046 systemd[1]: Created slice kubepods-burstable-pod35e8c668_a689_448b_9bdd_5378309a8f8c.slice - libcontainer container kubepods-burstable-pod35e8c668_a689_448b_9bdd_5378309a8f8c.slice. Jan 13 21:27:10.756039 kubelet[1761]: I0113 21:27:10.755985 1761 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-2ql9v\" (UniqueName: \"kubernetes.io/projected/35e8c668-a689-448b-9bdd-5378309a8f8c-kube-api-access-2ql9v\") pod \"cilium-5hnwk\" (UID: \"35e8c668-a689-448b-9bdd-5378309a8f8c\") " pod="kube-system/cilium-5hnwk" Jan 13 21:27:10.756242 kubelet[1761]: I0113 21:27:10.756062 1761 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cni-path\" (UniqueName: \"kubernetes.io/host-path/35e8c668-a689-448b-9bdd-5378309a8f8c-cni-path\") pod \"cilium-5hnwk\" (UID: \"35e8c668-a689-448b-9bdd-5378309a8f8c\") " pod="kube-system/cilium-5hnwk" Jan 13 21:27:10.756242 kubelet[1761]: I0113 21:27:10.756094 1761 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cilium-config-path\" (UniqueName: \"kubernetes.io/configmap/35e8c668-a689-448b-9bdd-5378309a8f8c-cilium-config-path\") pod \"cilium-5hnwk\" (UID: \"35e8c668-a689-448b-9bdd-5378309a8f8c\") " pod="kube-system/cilium-5hnwk" Jan 13 21:27:10.756242 kubelet[1761]: I0113 21:27:10.756114 1761 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cilium-ipsec-secrets\" (UniqueName: \"kubernetes.io/secret/35e8c668-a689-448b-9bdd-5378309a8f8c-cilium-ipsec-secrets\") pod \"cilium-5hnwk\" (UID: 
\"35e8c668-a689-448b-9bdd-5378309a8f8c\") " pod="kube-system/cilium-5hnwk" Jan 13 21:27:10.756242 kubelet[1761]: I0113 21:27:10.756142 1761 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-rq4pm\" (UniqueName: \"kubernetes.io/projected/23852b1c-4b91-472b-8ad9-c33158887379-kube-api-access-rq4pm\") pod \"cilium-operator-5d85765b45-4qxl6\" (UID: \"23852b1c-4b91-472b-8ad9-c33158887379\") " pod="kube-system/cilium-operator-5d85765b45-4qxl6" Jan 13 21:27:10.756242 kubelet[1761]: I0113 21:27:10.756160 1761 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"etc-cni-netd\" (UniqueName: \"kubernetes.io/host-path/35e8c668-a689-448b-9bdd-5378309a8f8c-etc-cni-netd\") pod \"cilium-5hnwk\" (UID: \"35e8c668-a689-448b-9bdd-5378309a8f8c\") " pod="kube-system/cilium-5hnwk" Jan 13 21:27:10.756426 kubelet[1761]: I0113 21:27:10.756183 1761 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"xtables-lock\" (UniqueName: \"kubernetes.io/host-path/35e8c668-a689-448b-9bdd-5378309a8f8c-xtables-lock\") pod \"cilium-5hnwk\" (UID: \"35e8c668-a689-448b-9bdd-5378309a8f8c\") " pod="kube-system/cilium-5hnwk" Jan 13 21:27:10.756426 kubelet[1761]: I0113 21:27:10.756199 1761 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-proc-sys-kernel\" (UniqueName: \"kubernetes.io/host-path/35e8c668-a689-448b-9bdd-5378309a8f8c-host-proc-sys-kernel\") pod \"cilium-5hnwk\" (UID: \"35e8c668-a689-448b-9bdd-5378309a8f8c\") " pod="kube-system/cilium-5hnwk" Jan 13 21:27:10.756426 kubelet[1761]: I0113 21:27:10.756290 1761 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cilium-cgroup\" (UniqueName: \"kubernetes.io/host-path/35e8c668-a689-448b-9bdd-5378309a8f8c-cilium-cgroup\") pod \"cilium-5hnwk\" (UID: \"35e8c668-a689-448b-9bdd-5378309a8f8c\") " 
pod="kube-system/cilium-5hnwk" Jan 13 21:27:10.756426 kubelet[1761]: I0113 21:27:10.756343 1761 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/35e8c668-a689-448b-9bdd-5378309a8f8c-lib-modules\") pod \"cilium-5hnwk\" (UID: \"35e8c668-a689-448b-9bdd-5378309a8f8c\") " pod="kube-system/cilium-5hnwk" Jan 13 21:27:10.756426 kubelet[1761]: I0113 21:27:10.756374 1761 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"bpf-maps\" (UniqueName: \"kubernetes.io/host-path/35e8c668-a689-448b-9bdd-5378309a8f8c-bpf-maps\") pod \"cilium-5hnwk\" (UID: \"35e8c668-a689-448b-9bdd-5378309a8f8c\") " pod="kube-system/cilium-5hnwk" Jan 13 21:27:10.756426 kubelet[1761]: I0113 21:27:10.756392 1761 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"hostproc\" (UniqueName: \"kubernetes.io/host-path/35e8c668-a689-448b-9bdd-5378309a8f8c-hostproc\") pod \"cilium-5hnwk\" (UID: \"35e8c668-a689-448b-9bdd-5378309a8f8c\") " pod="kube-system/cilium-5hnwk" Jan 13 21:27:10.756694 kubelet[1761]: I0113 21:27:10.756414 1761 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-proc-sys-net\" (UniqueName: \"kubernetes.io/host-path/35e8c668-a689-448b-9bdd-5378309a8f8c-host-proc-sys-net\") pod \"cilium-5hnwk\" (UID: \"35e8c668-a689-448b-9bdd-5378309a8f8c\") " pod="kube-system/cilium-5hnwk" Jan 13 21:27:10.756694 kubelet[1761]: I0113 21:27:10.756436 1761 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cilium-config-path\" (UniqueName: \"kubernetes.io/configmap/23852b1c-4b91-472b-8ad9-c33158887379-cilium-config-path\") pod \"cilium-operator-5d85765b45-4qxl6\" (UID: \"23852b1c-4b91-472b-8ad9-c33158887379\") " pod="kube-system/cilium-operator-5d85765b45-4qxl6" Jan 13 21:27:10.756694 kubelet[1761]: I0113 21:27:10.756462 1761 
reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"clustermesh-secrets\" (UniqueName: \"kubernetes.io/secret/35e8c668-a689-448b-9bdd-5378309a8f8c-clustermesh-secrets\") pod \"cilium-5hnwk\" (UID: \"35e8c668-a689-448b-9bdd-5378309a8f8c\") " pod="kube-system/cilium-5hnwk" Jan 13 21:27:10.756694 kubelet[1761]: I0113 21:27:10.756488 1761 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"hubble-tls\" (UniqueName: \"kubernetes.io/projected/35e8c668-a689-448b-9bdd-5378309a8f8c-hubble-tls\") pod \"cilium-5hnwk\" (UID: \"35e8c668-a689-448b-9bdd-5378309a8f8c\") " pod="kube-system/cilium-5hnwk" Jan 13 21:27:10.756694 kubelet[1761]: I0113 21:27:10.756569 1761 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cilium-run\" (UniqueName: \"kubernetes.io/host-path/35e8c668-a689-448b-9bdd-5378309a8f8c-cilium-run\") pod \"cilium-5hnwk\" (UID: \"35e8c668-a689-448b-9bdd-5378309a8f8c\") " pod="kube-system/cilium-5hnwk" Jan 13 21:27:10.950698 kubelet[1761]: E0113 21:27:10.950488 1761 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jan 13 21:27:10.951389 containerd[1452]: time="2025-01-13T21:27:10.951161134Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:cilium-operator-5d85765b45-4qxl6,Uid:23852b1c-4b91-472b-8ad9-c33158887379,Namespace:kube-system,Attempt:0,}" Jan 13 21:27:10.952480 kubelet[1761]: E0113 21:27:10.952443 1761 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Jan 13 21:27:10.965126 kubelet[1761]: E0113 21:27:10.965075 1761 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jan 13 21:27:10.965794 containerd[1452]: 
time="2025-01-13T21:27:10.965749556Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:cilium-5hnwk,Uid:35e8c668-a689-448b-9bdd-5378309a8f8c,Namespace:kube-system,Attempt:0,}" Jan 13 21:27:10.976295 containerd[1452]: time="2025-01-13T21:27:10.976002235Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Jan 13 21:27:10.976295 containerd[1452]: time="2025-01-13T21:27:10.976078899Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Jan 13 21:27:10.976295 containerd[1452]: time="2025-01-13T21:27:10.976095530Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Jan 13 21:27:10.976295 containerd[1452]: time="2025-01-13T21:27:10.976196480Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Jan 13 21:27:10.989202 containerd[1452]: time="2025-01-13T21:27:10.989090253Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Jan 13 21:27:10.989202 containerd[1452]: time="2025-01-13T21:27:10.989163712Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Jan 13 21:27:10.989385 containerd[1452]: time="2025-01-13T21:27:10.989182107Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Jan 13 21:27:10.989385 containerd[1452]: time="2025-01-13T21:27:10.989274480Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." 
runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Jan 13 21:27:11.000191 systemd[1]: Started cri-containerd-0e2e206cef74c41bb17c8713eb957e7f335b6ccda5afe14f52e075f5f227cf55.scope - libcontainer container 0e2e206cef74c41bb17c8713eb957e7f335b6ccda5afe14f52e075f5f227cf55. Jan 13 21:27:11.004871 systemd[1]: Started cri-containerd-6ae8a06b061cea463458033efd31123b905ea296cda3ce8d12ce40b91e889922.scope - libcontainer container 6ae8a06b061cea463458033efd31123b905ea296cda3ce8d12ce40b91e889922. Jan 13 21:27:11.031980 containerd[1452]: time="2025-01-13T21:27:11.031881624Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:cilium-5hnwk,Uid:35e8c668-a689-448b-9bdd-5378309a8f8c,Namespace:kube-system,Attempt:0,} returns sandbox id \"6ae8a06b061cea463458033efd31123b905ea296cda3ce8d12ce40b91e889922\"" Jan 13 21:27:11.033062 kubelet[1761]: E0113 21:27:11.032925 1761 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jan 13 21:27:11.037845 containerd[1452]: time="2025-01-13T21:27:11.037789091Z" level=info msg="CreateContainer within sandbox \"6ae8a06b061cea463458033efd31123b905ea296cda3ce8d12ce40b91e889922\" for container &ContainerMetadata{Name:mount-cgroup,Attempt:0,}" Jan 13 21:27:11.045197 containerd[1452]: time="2025-01-13T21:27:11.045053618Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:cilium-operator-5d85765b45-4qxl6,Uid:23852b1c-4b91-472b-8ad9-c33158887379,Namespace:kube-system,Attempt:0,} returns sandbox id \"0e2e206cef74c41bb17c8713eb957e7f335b6ccda5afe14f52e075f5f227cf55\"" Jan 13 21:27:11.045858 kubelet[1761]: E0113 21:27:11.045831 1761 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jan 13 21:27:11.046841 containerd[1452]: time="2025-01-13T21:27:11.046808628Z" level=info msg="PullImage 
\"quay.io/cilium/operator-generic:v1.12.5@sha256:b296eb7f0f7656a5cc19724f40a8a7121b7fd725278b7d61dc91fe0b7ffd7c0e\"" Jan 13 21:27:11.058320 containerd[1452]: time="2025-01-13T21:27:11.058239918Z" level=info msg="CreateContainer within sandbox \"6ae8a06b061cea463458033efd31123b905ea296cda3ce8d12ce40b91e889922\" for &ContainerMetadata{Name:mount-cgroup,Attempt:0,} returns container id \"a75a8e54cfa2bcd5e236e131f2152c776f98ae03bb1bcc3017ea48be8c3e62e8\"" Jan 13 21:27:11.058912 containerd[1452]: time="2025-01-13T21:27:11.058846167Z" level=info msg="StartContainer for \"a75a8e54cfa2bcd5e236e131f2152c776f98ae03bb1bcc3017ea48be8c3e62e8\"" Jan 13 21:27:11.090117 systemd[1]: Started cri-containerd-a75a8e54cfa2bcd5e236e131f2152c776f98ae03bb1bcc3017ea48be8c3e62e8.scope - libcontainer container a75a8e54cfa2bcd5e236e131f2152c776f98ae03bb1bcc3017ea48be8c3e62e8. Jan 13 21:27:11.116372 containerd[1452]: time="2025-01-13T21:27:11.116309929Z" level=info msg="StartContainer for \"a75a8e54cfa2bcd5e236e131f2152c776f98ae03bb1bcc3017ea48be8c3e62e8\" returns successfully" Jan 13 21:27:11.127649 systemd[1]: cri-containerd-a75a8e54cfa2bcd5e236e131f2152c776f98ae03bb1bcc3017ea48be8c3e62e8.scope: Deactivated successfully. 
Jan 13 21:27:11.162387 containerd[1452]: time="2025-01-13T21:27:11.162315939Z" level=info msg="shim disconnected" id=a75a8e54cfa2bcd5e236e131f2152c776f98ae03bb1bcc3017ea48be8c3e62e8 namespace=k8s.io Jan 13 21:27:11.162387 containerd[1452]: time="2025-01-13T21:27:11.162373307Z" level=warning msg="cleaning up after shim disconnected" id=a75a8e54cfa2bcd5e236e131f2152c776f98ae03bb1bcc3017ea48be8c3e62e8 namespace=k8s.io Jan 13 21:27:11.162387 containerd[1452]: time="2025-01-13T21:27:11.162382324Z" level=info msg="cleaning up dead shim" namespace=k8s.io Jan 13 21:27:11.369025 kubelet[1761]: E0113 21:27:11.368929 1761 kubelet.go:2901] "Container runtime network not ready" networkReady="NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" Jan 13 21:27:11.557297 kubelet[1761]: E0113 21:27:11.557234 1761 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jan 13 21:27:11.558982 containerd[1452]: time="2025-01-13T21:27:11.558926941Z" level=info msg="CreateContainer within sandbox \"6ae8a06b061cea463458033efd31123b905ea296cda3ce8d12ce40b91e889922\" for container &ContainerMetadata{Name:apply-sysctl-overwrites,Attempt:0,}" Jan 13 21:27:11.571441 containerd[1452]: time="2025-01-13T21:27:11.571380884Z" level=info msg="CreateContainer within sandbox \"6ae8a06b061cea463458033efd31123b905ea296cda3ce8d12ce40b91e889922\" for &ContainerMetadata{Name:apply-sysctl-overwrites,Attempt:0,} returns container id \"82e2800bb11059bda20c4ae8df8d7527a169b10f0276af59b33312f86463d9e4\"" Jan 13 21:27:11.571778 containerd[1452]: time="2025-01-13T21:27:11.571751901Z" level=info msg="StartContainer for \"82e2800bb11059bda20c4ae8df8d7527a169b10f0276af59b33312f86463d9e4\"" Jan 13 21:27:11.603081 systemd[1]: Started cri-containerd-82e2800bb11059bda20c4ae8df8d7527a169b10f0276af59b33312f86463d9e4.scope - libcontainer container 
82e2800bb11059bda20c4ae8df8d7527a169b10f0276af59b33312f86463d9e4. Jan 13 21:27:11.627375 containerd[1452]: time="2025-01-13T21:27:11.627245509Z" level=info msg="StartContainer for \"82e2800bb11059bda20c4ae8df8d7527a169b10f0276af59b33312f86463d9e4\" returns successfully" Jan 13 21:27:11.634698 systemd[1]: cri-containerd-82e2800bb11059bda20c4ae8df8d7527a169b10f0276af59b33312f86463d9e4.scope: Deactivated successfully. Jan 13 21:27:11.656731 containerd[1452]: time="2025-01-13T21:27:11.656644138Z" level=info msg="shim disconnected" id=82e2800bb11059bda20c4ae8df8d7527a169b10f0276af59b33312f86463d9e4 namespace=k8s.io Jan 13 21:27:11.656731 containerd[1452]: time="2025-01-13T21:27:11.656707547Z" level=warning msg="cleaning up after shim disconnected" id=82e2800bb11059bda20c4ae8df8d7527a169b10f0276af59b33312f86463d9e4 namespace=k8s.io Jan 13 21:27:11.656731 containerd[1452]: time="2025-01-13T21:27:11.656718658Z" level=info msg="cleaning up dead shim" namespace=k8s.io Jan 13 21:27:11.952912 kubelet[1761]: E0113 21:27:11.952740 1761 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Jan 13 21:27:12.395388 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount3187411884.mount: Deactivated successfully. 
Jan 13 21:27:12.562475 kubelet[1761]: E0113 21:27:12.562400 1761 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jan 13 21:27:12.565799 containerd[1452]: time="2025-01-13T21:27:12.564887321Z" level=info msg="CreateContainer within sandbox \"6ae8a06b061cea463458033efd31123b905ea296cda3ce8d12ce40b91e889922\" for container &ContainerMetadata{Name:mount-bpf-fs,Attempt:0,}" Jan 13 21:27:12.588900 containerd[1452]: time="2025-01-13T21:27:12.588806011Z" level=info msg="CreateContainer within sandbox \"6ae8a06b061cea463458033efd31123b905ea296cda3ce8d12ce40b91e889922\" for &ContainerMetadata{Name:mount-bpf-fs,Attempt:0,} returns container id \"81eeddab8e8cedce2d89e8bf59e461c0a54e7a48191d0d6dfde8006fe0f3310b\"" Jan 13 21:27:12.589432 containerd[1452]: time="2025-01-13T21:27:12.589396782Z" level=info msg="StartContainer for \"81eeddab8e8cedce2d89e8bf59e461c0a54e7a48191d0d6dfde8006fe0f3310b\"" Jan 13 21:27:12.617073 systemd[1]: Started cri-containerd-81eeddab8e8cedce2d89e8bf59e461c0a54e7a48191d0d6dfde8006fe0f3310b.scope - libcontainer container 81eeddab8e8cedce2d89e8bf59e461c0a54e7a48191d0d6dfde8006fe0f3310b. Jan 13 21:27:12.648226 systemd[1]: cri-containerd-81eeddab8e8cedce2d89e8bf59e461c0a54e7a48191d0d6dfde8006fe0f3310b.scope: Deactivated successfully. 
Jan 13 21:27:12.649908 containerd[1452]: time="2025-01-13T21:27:12.649844755Z" level=info msg="StartContainer for \"81eeddab8e8cedce2d89e8bf59e461c0a54e7a48191d0d6dfde8006fe0f3310b\" returns successfully" Jan 13 21:27:12.873391 containerd[1452]: time="2025-01-13T21:27:12.873323357Z" level=info msg="shim disconnected" id=81eeddab8e8cedce2d89e8bf59e461c0a54e7a48191d0d6dfde8006fe0f3310b namespace=k8s.io Jan 13 21:27:12.873391 containerd[1452]: time="2025-01-13T21:27:12.873368201Z" level=warning msg="cleaning up after shim disconnected" id=81eeddab8e8cedce2d89e8bf59e461c0a54e7a48191d0d6dfde8006fe0f3310b namespace=k8s.io Jan 13 21:27:12.873391 containerd[1452]: time="2025-01-13T21:27:12.873378601Z" level=info msg="cleaning up dead shim" namespace=k8s.io Jan 13 21:27:12.898652 containerd[1452]: time="2025-01-13T21:27:12.898514619Z" level=info msg="ImageCreate event name:\"quay.io/cilium/operator-generic@sha256:b296eb7f0f7656a5cc19724f40a8a7121b7fd725278b7d61dc91fe0b7ffd7c0e\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jan 13 21:27:12.899340 containerd[1452]: time="2025-01-13T21:27:12.899278285Z" level=info msg="stop pulling image quay.io/cilium/operator-generic@sha256:b296eb7f0f7656a5cc19724f40a8a7121b7fd725278b7d61dc91fe0b7ffd7c0e: active requests=0, bytes read=18907177" Jan 13 21:27:12.900481 containerd[1452]: time="2025-01-13T21:27:12.900453253Z" level=info msg="ImageCreate event name:\"sha256:ed355de9f59fe391dbe53f3c7c7a60baab3c3a9b7549aa54d10b87fff7dacf7c\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jan 13 21:27:12.901654 containerd[1452]: time="2025-01-13T21:27:12.901617762Z" level=info msg="Pulled image \"quay.io/cilium/operator-generic:v1.12.5@sha256:b296eb7f0f7656a5cc19724f40a8a7121b7fd725278b7d61dc91fe0b7ffd7c0e\" with image id \"sha256:ed355de9f59fe391dbe53f3c7c7a60baab3c3a9b7549aa54d10b87fff7dacf7c\", repo tag \"\", repo digest 
\"quay.io/cilium/operator-generic@sha256:b296eb7f0f7656a5cc19724f40a8a7121b7fd725278b7d61dc91fe0b7ffd7c0e\", size \"18897442\" in 1.854771294s" Jan 13 21:27:12.901694 containerd[1452]: time="2025-01-13T21:27:12.901656525Z" level=info msg="PullImage \"quay.io/cilium/operator-generic:v1.12.5@sha256:b296eb7f0f7656a5cc19724f40a8a7121b7fd725278b7d61dc91fe0b7ffd7c0e\" returns image reference \"sha256:ed355de9f59fe391dbe53f3c7c7a60baab3c3a9b7549aa54d10b87fff7dacf7c\"" Jan 13 21:27:12.903732 containerd[1452]: time="2025-01-13T21:27:12.903688625Z" level=info msg="CreateContainer within sandbox \"0e2e206cef74c41bb17c8713eb957e7f335b6ccda5afe14f52e075f5f227cf55\" for container &ContainerMetadata{Name:cilium-operator,Attempt:0,}" Jan 13 21:27:12.916071 containerd[1452]: time="2025-01-13T21:27:12.916036855Z" level=info msg="CreateContainer within sandbox \"0e2e206cef74c41bb17c8713eb957e7f335b6ccda5afe14f52e075f5f227cf55\" for &ContainerMetadata{Name:cilium-operator,Attempt:0,} returns container id \"9098f2d83eb7aafe268bab54dba10d688149a697cf358e1e0adeefc480ec006c\"" Jan 13 21:27:12.916754 containerd[1452]: time="2025-01-13T21:27:12.916722253Z" level=info msg="StartContainer for \"9098f2d83eb7aafe268bab54dba10d688149a697cf358e1e0adeefc480ec006c\"" Jan 13 21:27:12.953142 kubelet[1761]: E0113 21:27:12.953093 1761 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Jan 13 21:27:12.954074 systemd[1]: Started cri-containerd-9098f2d83eb7aafe268bab54dba10d688149a697cf358e1e0adeefc480ec006c.scope - libcontainer container 9098f2d83eb7aafe268bab54dba10d688149a697cf358e1e0adeefc480ec006c. 
Jan 13 21:27:12.983124 containerd[1452]: time="2025-01-13T21:27:12.983079786Z" level=info msg="StartContainer for \"9098f2d83eb7aafe268bab54dba10d688149a697cf358e1e0adeefc480ec006c\" returns successfully" Jan 13 21:27:13.565674 kubelet[1761]: E0113 21:27:13.565634 1761 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jan 13 21:27:13.567553 kubelet[1761]: E0113 21:27:13.567525 1761 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jan 13 21:27:13.569323 containerd[1452]: time="2025-01-13T21:27:13.569282024Z" level=info msg="CreateContainer within sandbox \"6ae8a06b061cea463458033efd31123b905ea296cda3ce8d12ce40b91e889922\" for container &ContainerMetadata{Name:clean-cilium-state,Attempt:0,}" Jan 13 21:27:13.575105 kubelet[1761]: I0113 21:27:13.575040 1761 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/cilium-operator-5d85765b45-4qxl6" podStartSLOduration=1.719144778 podStartE2EDuration="3.575018344s" podCreationTimestamp="2025-01-13 21:27:10 +0000 UTC" firstStartedPulling="2025-01-13 21:27:11.046503984 +0000 UTC m=+55.578722898" lastFinishedPulling="2025-01-13 21:27:12.90237754 +0000 UTC m=+57.434596464" observedRunningTime="2025-01-13 21:27:13.574996724 +0000 UTC m=+58.107215658" watchObservedRunningTime="2025-01-13 21:27:13.575018344 +0000 UTC m=+58.107237268" Jan 13 21:27:13.585096 containerd[1452]: time="2025-01-13T21:27:13.585045440Z" level=info msg="CreateContainer within sandbox \"6ae8a06b061cea463458033efd31123b905ea296cda3ce8d12ce40b91e889922\" for &ContainerMetadata{Name:clean-cilium-state,Attempt:0,} returns container id \"699e866b28a6ddb90a19d8935a53c88c323983e3b63e2b47d0472d91bb15e23a\"" Jan 13 21:27:13.585578 containerd[1452]: time="2025-01-13T21:27:13.585534098Z" level=info 
msg="StartContainer for \"699e866b28a6ddb90a19d8935a53c88c323983e3b63e2b47d0472d91bb15e23a\"" Jan 13 21:27:13.616102 systemd[1]: Started cri-containerd-699e866b28a6ddb90a19d8935a53c88c323983e3b63e2b47d0472d91bb15e23a.scope - libcontainer container 699e866b28a6ddb90a19d8935a53c88c323983e3b63e2b47d0472d91bb15e23a. Jan 13 21:27:13.640853 systemd[1]: cri-containerd-699e866b28a6ddb90a19d8935a53c88c323983e3b63e2b47d0472d91bb15e23a.scope: Deactivated successfully. Jan 13 21:27:13.642779 containerd[1452]: time="2025-01-13T21:27:13.642748598Z" level=info msg="StartContainer for \"699e866b28a6ddb90a19d8935a53c88c323983e3b63e2b47d0472d91bb15e23a\" returns successfully" Jan 13 21:27:13.663156 containerd[1452]: time="2025-01-13T21:27:13.663089608Z" level=info msg="shim disconnected" id=699e866b28a6ddb90a19d8935a53c88c323983e3b63e2b47d0472d91bb15e23a namespace=k8s.io Jan 13 21:27:13.663156 containerd[1452]: time="2025-01-13T21:27:13.663151824Z" level=warning msg="cleaning up after shim disconnected" id=699e866b28a6ddb90a19d8935a53c88c323983e3b63e2b47d0472d91bb15e23a namespace=k8s.io Jan 13 21:27:13.663430 containerd[1452]: time="2025-01-13T21:27:13.663166742Z" level=info msg="cleaning up dead shim" namespace=k8s.io Jan 13 21:27:13.953351 kubelet[1761]: E0113 21:27:13.953210 1761 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Jan 13 21:27:14.571740 kubelet[1761]: E0113 21:27:14.571691 1761 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jan 13 21:27:14.572202 kubelet[1761]: E0113 21:27:14.572161 1761 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jan 13 21:27:14.573801 containerd[1452]: time="2025-01-13T21:27:14.573761976Z" level=info msg="CreateContainer within 
sandbox \"6ae8a06b061cea463458033efd31123b905ea296cda3ce8d12ce40b91e889922\" for container &ContainerMetadata{Name:cilium-agent,Attempt:0,}" Jan 13 21:27:14.715331 containerd[1452]: time="2025-01-13T21:27:14.715271606Z" level=info msg="CreateContainer within sandbox \"6ae8a06b061cea463458033efd31123b905ea296cda3ce8d12ce40b91e889922\" for &ContainerMetadata{Name:cilium-agent,Attempt:0,} returns container id \"239e67595a4b49b7a0bb26bd7ebbd3a854dd15183cfd5ef2fc9e11d29159ffe8\"" Jan 13 21:27:14.715822 containerd[1452]: time="2025-01-13T21:27:14.715783317Z" level=info msg="StartContainer for \"239e67595a4b49b7a0bb26bd7ebbd3a854dd15183cfd5ef2fc9e11d29159ffe8\"" Jan 13 21:27:14.753181 systemd[1]: Started cri-containerd-239e67595a4b49b7a0bb26bd7ebbd3a854dd15183cfd5ef2fc9e11d29159ffe8.scope - libcontainer container 239e67595a4b49b7a0bb26bd7ebbd3a854dd15183cfd5ef2fc9e11d29159ffe8. Jan 13 21:27:14.783910 containerd[1452]: time="2025-01-13T21:27:14.783829707Z" level=info msg="StartContainer for \"239e67595a4b49b7a0bb26bd7ebbd3a854dd15183cfd5ef2fc9e11d29159ffe8\" returns successfully" Jan 13 21:27:14.954377 kubelet[1761]: E0113 21:27:14.954173 1761 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Jan 13 21:27:15.239967 kernel: alg: No test for seqiv(rfc4106(gcm(aes))) (seqiv(rfc4106-gcm-aesni)) Jan 13 21:27:15.576340 kubelet[1761]: E0113 21:27:15.576286 1761 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jan 13 21:27:15.591199 kubelet[1761]: I0113 21:27:15.591111 1761 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/cilium-5hnwk" podStartSLOduration=5.591089856 podStartE2EDuration="5.591089856s" podCreationTimestamp="2025-01-13 21:27:10 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" 
observedRunningTime="2025-01-13 21:27:15.590403497 +0000 UTC m=+60.122622421" watchObservedRunningTime="2025-01-13 21:27:15.591089856 +0000 UTC m=+60.123308780" Jan 13 21:27:15.901214 kubelet[1761]: E0113 21:27:15.901041 1761 file.go:104] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Jan 13 21:27:15.913756 containerd[1452]: time="2025-01-13T21:27:15.913719186Z" level=info msg="StopPodSandbox for \"a8fdf1c3f6acb37a3ae839abf59ef22b842457ff23c671de616bda46d9dc3e6a\"" Jan 13 21:27:15.914163 containerd[1452]: time="2025-01-13T21:27:15.913801290Z" level=info msg="TearDown network for sandbox \"a8fdf1c3f6acb37a3ae839abf59ef22b842457ff23c671de616bda46d9dc3e6a\" successfully" Jan 13 21:27:15.914163 containerd[1452]: time="2025-01-13T21:27:15.913811739Z" level=info msg="StopPodSandbox for \"a8fdf1c3f6acb37a3ae839abf59ef22b842457ff23c671de616bda46d9dc3e6a\" returns successfully" Jan 13 21:27:15.914163 containerd[1452]: time="2025-01-13T21:27:15.914153020Z" level=info msg="RemovePodSandbox for \"a8fdf1c3f6acb37a3ae839abf59ef22b842457ff23c671de616bda46d9dc3e6a\"" Jan 13 21:27:15.914240 containerd[1452]: time="2025-01-13T21:27:15.914179150Z" level=info msg="Forcibly stopping sandbox \"a8fdf1c3f6acb37a3ae839abf59ef22b842457ff23c671de616bda46d9dc3e6a\"" Jan 13 21:27:15.914270 containerd[1452]: time="2025-01-13T21:27:15.914248520Z" level=info msg="TearDown network for sandbox \"a8fdf1c3f6acb37a3ae839abf59ef22b842457ff23c671de616bda46d9dc3e6a\" successfully" Jan 13 21:27:15.919970 containerd[1452]: time="2025-01-13T21:27:15.919925036Z" level=warning msg="Failed to get podSandbox status for container event for sandboxID \"a8fdf1c3f6acb37a3ae839abf59ef22b842457ff23c671de616bda46d9dc3e6a\": an error occurred when try to find sandbox: not found. Sending the event with nil podSandboxStatus." 
Jan 13 21:27:15.920020 containerd[1452]: time="2025-01-13T21:27:15.919983816Z" level=info msg="RemovePodSandbox \"a8fdf1c3f6acb37a3ae839abf59ef22b842457ff23c671de616bda46d9dc3e6a\" returns successfully" Jan 13 21:27:15.954533 kubelet[1761]: E0113 21:27:15.954504 1761 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Jan 13 21:27:16.955401 kubelet[1761]: E0113 21:27:16.955315 1761 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Jan 13 21:27:16.966504 kubelet[1761]: E0113 21:27:16.966448 1761 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jan 13 21:27:17.155320 systemd[1]: run-containerd-runc-k8s.io-239e67595a4b49b7a0bb26bd7ebbd3a854dd15183cfd5ef2fc9e11d29159ffe8-runc.yozKAC.mount: Deactivated successfully. Jan 13 21:27:17.956244 kubelet[1761]: E0113 21:27:17.956193 1761 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Jan 13 21:27:18.454559 systemd-networkd[1400]: lxc_health: Link UP Jan 13 21:27:18.463662 systemd-networkd[1400]: lxc_health: Gained carrier Jan 13 21:27:18.956955 kubelet[1761]: E0113 21:27:18.956886 1761 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Jan 13 21:27:18.967457 kubelet[1761]: E0113 21:27:18.967425 1761 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jan 13 21:27:19.583264 kubelet[1761]: E0113 21:27:19.583217 1761 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jan 13 21:27:19.816165 systemd-networkd[1400]: 
lxc_health: Gained IPv6LL Jan 13 21:27:19.957271 kubelet[1761]: E0113 21:27:19.957066 1761 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Jan 13 21:27:20.585136 kubelet[1761]: E0113 21:27:20.585084 1761 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jan 13 21:27:20.957453 kubelet[1761]: E0113 21:27:20.957269 1761 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Jan 13 21:27:21.957890 kubelet[1761]: E0113 21:27:21.957822 1761 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Jan 13 21:27:22.959059 kubelet[1761]: E0113 21:27:22.958972 1761 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Jan 13 21:27:23.959752 kubelet[1761]: E0113 21:27:23.959676 1761 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Jan 13 21:27:24.960478 kubelet[1761]: E0113 21:27:24.960418 1761 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests"