Jan 13 21:25:16.921171 kernel: Linux version 6.6.71-flatcar (build@pony-truck.infra.kinvolk.io) (x86_64-cros-linux-gnu-gcc (Gentoo Hardened 13.3.1_p20240614 p17) 13.3.1 20240614, GNU ld (Gentoo 2.42 p3) 2.42.0) #1 SMP PREEMPT_DYNAMIC Mon Jan 13 19:40:50 -00 2025
Jan 13 21:25:16.921192 kernel: Command line: BOOT_IMAGE=/flatcar/vmlinuz-a mount.usr=/dev/mapper/usr verity.usr=PARTUUID=7130c94a-213a-4e5a-8e26-6cce9662f132 rootflags=rw mount.usrflags=ro consoleblank=0 root=LABEL=ROOT console=ttyS0,115200 flatcar.first_boot=detected verity.usrhash=8945029ddd0f3864592f8746dde99cfcba228e0d3cb946f5938103dbe8733507
Jan 13 21:25:16.921203 kernel: BIOS-provided physical RAM map:
Jan 13 21:25:16.921209 kernel: BIOS-e820: [mem 0x0000000000000000-0x000000000009fbff] usable
Jan 13 21:25:16.921215 kernel: BIOS-e820: [mem 0x000000000009fc00-0x000000000009ffff] reserved
Jan 13 21:25:16.921221 kernel: BIOS-e820: [mem 0x00000000000f0000-0x00000000000fffff] reserved
Jan 13 21:25:16.921229 kernel: BIOS-e820: [mem 0x0000000000100000-0x000000009cfdbfff] usable
Jan 13 21:25:16.921235 kernel: BIOS-e820: [mem 0x000000009cfdc000-0x000000009cffffff] reserved
Jan 13 21:25:16.921242 kernel: BIOS-e820: [mem 0x00000000b0000000-0x00000000bfffffff] reserved
Jan 13 21:25:16.921341 kernel: BIOS-e820: [mem 0x00000000fed1c000-0x00000000fed1ffff] reserved
Jan 13 21:25:16.921348 kernel: BIOS-e820: [mem 0x00000000feffc000-0x00000000feffffff] reserved
Jan 13 21:25:16.921354 kernel: BIOS-e820: [mem 0x00000000fffc0000-0x00000000ffffffff] reserved
Jan 13 21:25:16.921360 kernel: BIOS-e820: [mem 0x000000fd00000000-0x000000ffffffffff] reserved
Jan 13 21:25:16.921366 kernel: NX (Execute Disable) protection: active
Jan 13 21:25:16.921374 kernel: APIC: Static calls initialized
Jan 13 21:25:16.921384 kernel: SMBIOS 2.8 present.
Jan 13 21:25:16.921390 kernel: DMI: QEMU Standard PC (Q35 + ICH9, 2009), BIOS 1.16.2-debian-1.16.2-1 04/01/2014
Jan 13 21:25:16.921397 kernel: Hypervisor detected: KVM
Jan 13 21:25:16.921404 kernel: kvm-clock: Using msrs 4b564d01 and 4b564d00
Jan 13 21:25:16.921410 kernel: kvm-clock: using sched offset of 2175854314 cycles
Jan 13 21:25:16.921417 kernel: clocksource: kvm-clock: mask: 0xffffffffffffffff max_cycles: 0x1cd42e4dffb, max_idle_ns: 881590591483 ns
Jan 13 21:25:16.921425 kernel: tsc: Detected 2794.748 MHz processor
Jan 13 21:25:16.921432 kernel: e820: update [mem 0x00000000-0x00000fff] usable ==> reserved
Jan 13 21:25:16.921439 kernel: e820: remove [mem 0x000a0000-0x000fffff] usable
Jan 13 21:25:16.921446 kernel: last_pfn = 0x9cfdc max_arch_pfn = 0x400000000
Jan 13 21:25:16.921455 kernel: MTRR map: 4 entries (3 fixed + 1 variable; max 19), built from 8 variable MTRRs
Jan 13 21:25:16.921462 kernel: x86/PAT: Configuration [0-7]: WB WC UC- UC WB WP UC- WT
Jan 13 21:25:16.921469 kernel: Using GB pages for direct mapping
Jan 13 21:25:16.921476 kernel: ACPI: Early table checksum verification disabled
Jan 13 21:25:16.921483 kernel: ACPI: RSDP 0x00000000000F59D0 000014 (v00 BOCHS )
Jan 13 21:25:16.921490 kernel: ACPI: RSDT 0x000000009CFE2408 000038 (v01 BOCHS BXPC 00000001 BXPC 00000001)
Jan 13 21:25:16.921497 kernel: ACPI: FACP 0x000000009CFE21E8 0000F4 (v03 BOCHS BXPC 00000001 BXPC 00000001)
Jan 13 21:25:16.921504 kernel: ACPI: DSDT 0x000000009CFE0040 0021A8 (v01 BOCHS BXPC 00000001 BXPC 00000001)
Jan 13 21:25:16.921513 kernel: ACPI: FACS 0x000000009CFE0000 000040
Jan 13 21:25:16.921520 kernel: ACPI: APIC 0x000000009CFE22DC 000090 (v01 BOCHS BXPC 00000001 BXPC 00000001)
Jan 13 21:25:16.921527 kernel: ACPI: HPET 0x000000009CFE236C 000038 (v01 BOCHS BXPC 00000001 BXPC 00000001)
Jan 13 21:25:16.921533 kernel: ACPI: MCFG 0x000000009CFE23A4 00003C (v01 BOCHS BXPC 00000001 BXPC 00000001)
Jan 13 21:25:16.921540 kernel: ACPI: WAET 0x000000009CFE23E0 000028 (v01 BOCHS BXPC 00000001 BXPC 00000001)
Jan 13 21:25:16.921547 kernel: ACPI: Reserving FACP table memory at [mem 0x9cfe21e8-0x9cfe22db]
Jan 13 21:25:16.921554 kernel: ACPI: Reserving DSDT table memory at [mem 0x9cfe0040-0x9cfe21e7]
Jan 13 21:25:16.921564 kernel: ACPI: Reserving FACS table memory at [mem 0x9cfe0000-0x9cfe003f]
Jan 13 21:25:16.921574 kernel: ACPI: Reserving APIC table memory at [mem 0x9cfe22dc-0x9cfe236b]
Jan 13 21:25:16.921581 kernel: ACPI: Reserving HPET table memory at [mem 0x9cfe236c-0x9cfe23a3]
Jan 13 21:25:16.921588 kernel: ACPI: Reserving MCFG table memory at [mem 0x9cfe23a4-0x9cfe23df]
Jan 13 21:25:16.921595 kernel: ACPI: Reserving WAET table memory at [mem 0x9cfe23e0-0x9cfe2407]
Jan 13 21:25:16.921602 kernel: No NUMA configuration found
Jan 13 21:25:16.921609 kernel: Faking a node at [mem 0x0000000000000000-0x000000009cfdbfff]
Jan 13 21:25:16.921617 kernel: NODE_DATA(0) allocated [mem 0x9cfd6000-0x9cfdbfff]
Jan 13 21:25:16.921626 kernel: Zone ranges:
Jan 13 21:25:16.921633 kernel: DMA [mem 0x0000000000001000-0x0000000000ffffff]
Jan 13 21:25:16.921640 kernel: DMA32 [mem 0x0000000001000000-0x000000009cfdbfff]
Jan 13 21:25:16.921647 kernel: Normal empty
Jan 13 21:25:16.921654 kernel: Movable zone start for each node
Jan 13 21:25:16.921661 kernel: Early memory node ranges
Jan 13 21:25:16.921668 kernel: node 0: [mem 0x0000000000001000-0x000000000009efff]
Jan 13 21:25:16.921675 kernel: node 0: [mem 0x0000000000100000-0x000000009cfdbfff]
Jan 13 21:25:16.921682 kernel: Initmem setup node 0 [mem 0x0000000000001000-0x000000009cfdbfff]
Jan 13 21:25:16.921692 kernel: On node 0, zone DMA: 1 pages in unavailable ranges
Jan 13 21:25:16.921699 kernel: On node 0, zone DMA: 97 pages in unavailable ranges
Jan 13 21:25:16.921706 kernel: On node 0, zone DMA32: 12324 pages in unavailable ranges
Jan 13 21:25:16.921713 kernel: ACPI: PM-Timer IO Port: 0x608
Jan 13 21:25:16.921720 kernel: ACPI: LAPIC_NMI (acpi_id[0xff] dfl dfl lint[0x1])
Jan 13 21:25:16.921728 kernel: IOAPIC[0]: apic_id 0, version 17, address 0xfec00000, GSI 0-23
Jan 13 21:25:16.921735 kernel: ACPI: INT_SRC_OVR (bus 0 bus_irq 0 global_irq 2 dfl dfl)
Jan 13 21:25:16.921742 kernel: ACPI: INT_SRC_OVR (bus 0 bus_irq 5 global_irq 5 high level)
Jan 13 21:25:16.921756 kernel: ACPI: INT_SRC_OVR (bus 0 bus_irq 9 global_irq 9 high level)
Jan 13 21:25:16.921766 kernel: ACPI: INT_SRC_OVR (bus 0 bus_irq 10 global_irq 10 high level)
Jan 13 21:25:16.921773 kernel: ACPI: INT_SRC_OVR (bus 0 bus_irq 11 global_irq 11 high level)
Jan 13 21:25:16.921780 kernel: ACPI: Using ACPI (MADT) for SMP configuration information
Jan 13 21:25:16.921787 kernel: ACPI: HPET id: 0x8086a201 base: 0xfed00000
Jan 13 21:25:16.921794 kernel: TSC deadline timer available
Jan 13 21:25:16.921801 kernel: smpboot: Allowing 4 CPUs, 0 hotplug CPUs
Jan 13 21:25:16.921809 kernel: kvm-guest: APIC: eoi() replaced with kvm_guest_apic_eoi_write()
Jan 13 21:25:16.921816 kernel: kvm-guest: KVM setup pv remote TLB flush
Jan 13 21:25:16.921823 kernel: kvm-guest: setup PV sched yield
Jan 13 21:25:16.921830 kernel: [mem 0xc0000000-0xfed1bfff] available for PCI devices
Jan 13 21:25:16.921839 kernel: Booting paravirtualized kernel on KVM
Jan 13 21:25:16.921847 kernel: clocksource: refined-jiffies: mask: 0xffffffff max_cycles: 0xffffffff, max_idle_ns: 1910969940391419 ns
Jan 13 21:25:16.921854 kernel: setup_percpu: NR_CPUS:512 nr_cpumask_bits:4 nr_cpu_ids:4 nr_node_ids:1
Jan 13 21:25:16.921861 kernel: percpu: Embedded 58 pages/cpu s197032 r8192 d32344 u524288
Jan 13 21:25:16.921868 kernel: pcpu-alloc: s197032 r8192 d32344 u524288 alloc=1*2097152
Jan 13 21:25:16.921875 kernel: pcpu-alloc: [0] 0 1 2 3
Jan 13 21:25:16.921882 kernel: kvm-guest: PV spinlocks enabled
Jan 13 21:25:16.921889 kernel: PV qspinlock hash table entries: 256 (order: 0, 4096 bytes, linear)
Jan 13 21:25:16.921897 kernel: Kernel command line: rootflags=rw mount.usrflags=ro BOOT_IMAGE=/flatcar/vmlinuz-a mount.usr=/dev/mapper/usr verity.usr=PARTUUID=7130c94a-213a-4e5a-8e26-6cce9662f132 rootflags=rw mount.usrflags=ro consoleblank=0 root=LABEL=ROOT console=ttyS0,115200 flatcar.first_boot=detected verity.usrhash=8945029ddd0f3864592f8746dde99cfcba228e0d3cb946f5938103dbe8733507
Jan 13 21:25:16.921907 kernel: Unknown kernel command line parameters "BOOT_IMAGE=/flatcar/vmlinuz-a", will be passed to user space.
Jan 13 21:25:16.921914 kernel: random: crng init done
Jan 13 21:25:16.921921 kernel: Dentry cache hash table entries: 524288 (order: 10, 4194304 bytes, linear)
Jan 13 21:25:16.921929 kernel: Inode-cache hash table entries: 262144 (order: 9, 2097152 bytes, linear)
Jan 13 21:25:16.921936 kernel: Fallback order for Node 0: 0
Jan 13 21:25:16.921943 kernel: Built 1 zonelists, mobility grouping on. Total pages: 632732
Jan 13 21:25:16.921950 kernel: Policy zone: DMA32
Jan 13 21:25:16.921957 kernel: mem auto-init: stack:off, heap alloc:off, heap free:off
Jan 13 21:25:16.921967 kernel: Memory: 2434592K/2571752K available (12288K kernel code, 2299K rwdata, 22728K rodata, 42844K init, 2348K bss, 136900K reserved, 0K cma-reserved)
Jan 13 21:25:16.921974 kernel: SLUB: HWalign=64, Order=0-3, MinObjects=0, CPUs=4, Nodes=1
Jan 13 21:25:16.921981 kernel: ftrace: allocating 37918 entries in 149 pages
Jan 13 21:25:16.921989 kernel: ftrace: allocated 149 pages with 4 groups
Jan 13 21:25:16.921996 kernel: Dynamic Preempt: voluntary
Jan 13 21:25:16.922003 kernel: rcu: Preemptible hierarchical RCU implementation.
Jan 13 21:25:16.922010 kernel: rcu: RCU event tracing is enabled.
Jan 13 21:25:16.922018 kernel: rcu: RCU restricting CPUs from NR_CPUS=512 to nr_cpu_ids=4.
Jan 13 21:25:16.922025 kernel: Trampoline variant of Tasks RCU enabled.
Jan 13 21:25:16.922035 kernel: Rude variant of Tasks RCU enabled.
Jan 13 21:25:16.922042 kernel: Tracing variant of Tasks RCU enabled.
Jan 13 21:25:16.922049 kernel: rcu: RCU calculated value of scheduler-enlistment delay is 100 jiffies.
Jan 13 21:25:16.922056 kernel: rcu: Adjusting geometry for rcu_fanout_leaf=16, nr_cpu_ids=4
Jan 13 21:25:16.922064 kernel: NR_IRQS: 33024, nr_irqs: 456, preallocated irqs: 16
Jan 13 21:25:16.922071 kernel: rcu: srcu_init: Setting srcu_struct sizes based on contention.
Jan 13 21:25:16.922078 kernel: Console: colour VGA+ 80x25
Jan 13 21:25:16.922085 kernel: printk: console [ttyS0] enabled
Jan 13 21:25:16.922092 kernel: ACPI: Core revision 20230628
Jan 13 21:25:16.922102 kernel: clocksource: hpet: mask: 0xffffffff max_cycles: 0xffffffff, max_idle_ns: 19112604467 ns
Jan 13 21:25:16.922109 kernel: APIC: Switch to symmetric I/O mode setup
Jan 13 21:25:16.922117 kernel: x2apic enabled
Jan 13 21:25:16.922124 kernel: APIC: Switched APIC routing to: physical x2apic
Jan 13 21:25:16.922131 kernel: kvm-guest: APIC: send_IPI_mask() replaced with kvm_send_ipi_mask()
Jan 13 21:25:16.922138 kernel: kvm-guest: APIC: send_IPI_mask_allbutself() replaced with kvm_send_ipi_mask_allbutself()
Jan 13 21:25:16.922146 kernel: kvm-guest: setup PV IPIs
Jan 13 21:25:16.922162 kernel: ..TIMER: vector=0x30 apic1=0 pin1=2 apic2=-1 pin2=-1
Jan 13 21:25:16.922170 kernel: tsc: Marking TSC unstable due to TSCs unsynchronized
Jan 13 21:25:16.922177 kernel: Calibrating delay loop (skipped) preset value.. 5589.49 BogoMIPS (lpj=2794748)
Jan 13 21:25:16.922185 kernel: x86/cpu: User Mode Instruction Prevention (UMIP) activated
Jan 13 21:25:16.922192 kernel: Last level iTLB entries: 4KB 512, 2MB 255, 4MB 127
Jan 13 21:25:16.922202 kernel: Last level dTLB entries: 4KB 512, 2MB 255, 4MB 127, 1GB 0
Jan 13 21:25:16.922210 kernel: Spectre V1 : Mitigation: usercopy/swapgs barriers and __user pointer sanitization
Jan 13 21:25:16.922217 kernel: Spectre V2 : Mitigation: Retpolines
Jan 13 21:25:16.922225 kernel: Spectre V2 : Spectre v2 / SpectreRSB mitigation: Filling RSB on context switch
Jan 13 21:25:16.922232 kernel: Spectre V2 : Spectre v2 / SpectreRSB : Filling RSB on VMEXIT
Jan 13 21:25:16.922242 kernel: Spectre V2 : Enabling Speculation Barrier for firmware calls
Jan 13 21:25:16.922261 kernel: RETBleed: Mitigation: untrained return thunk
Jan 13 21:25:16.922269 kernel: Spectre V2 : mitigation: Enabling conditional Indirect Branch Prediction Barrier
Jan 13 21:25:16.922277 kernel: Speculative Store Bypass: Mitigation: Speculative Store Bypass disabled via prctl
Jan 13 21:25:16.922285 kernel: Speculative Return Stack Overflow: IBPB-extending microcode not applied!
Jan 13 21:25:16.922293 kernel: Speculative Return Stack Overflow: WARNING: See https://kernel.org/doc/html/latest/admin-guide/hw-vuln/srso.html for mitigation options.
Jan 13 21:25:16.922300 kernel: Speculative Return Stack Overflow: Vulnerable: Safe RET, no microcode
Jan 13 21:25:16.922308 kernel: x86/fpu: Supporting XSAVE feature 0x001: 'x87 floating point registers'
Jan 13 21:25:16.922318 kernel: x86/fpu: Supporting XSAVE feature 0x002: 'SSE registers'
Jan 13 21:25:16.922326 kernel: x86/fpu: Supporting XSAVE feature 0x004: 'AVX registers'
Jan 13 21:25:16.922333 kernel: x86/fpu: xstate_offset[2]: 576, xstate_sizes[2]: 256
Jan 13 21:25:16.922341 kernel: x86/fpu: Enabled xstate features 0x7, context size is 832 bytes, using 'compacted' format.
Jan 13 21:25:16.922348 kernel: Freeing SMP alternatives memory: 32K
Jan 13 21:25:16.922356 kernel: pid_max: default: 32768 minimum: 301
Jan 13 21:25:16.922363 kernel: LSM: initializing lsm=lockdown,capability,landlock,selinux,integrity
Jan 13 21:25:16.922371 kernel: landlock: Up and running.
Jan 13 21:25:16.922378 kernel: SELinux: Initializing.
Jan 13 21:25:16.922388 kernel: Mount-cache hash table entries: 8192 (order: 4, 65536 bytes, linear)
Jan 13 21:25:16.922396 kernel: Mountpoint-cache hash table entries: 8192 (order: 4, 65536 bytes, linear)
Jan 13 21:25:16.922403 kernel: smpboot: CPU0: AMD EPYC 7402P 24-Core Processor (family: 0x17, model: 0x31, stepping: 0x0)
Jan 13 21:25:16.922411 kernel: RCU Tasks: Setting shift to 2 and lim to 1 rcu_task_cb_adjust=1 rcu_task_cpu_ids=4.
Jan 13 21:25:16.922419 kernel: RCU Tasks Rude: Setting shift to 2 and lim to 1 rcu_task_cb_adjust=1 rcu_task_cpu_ids=4.
Jan 13 21:25:16.922426 kernel: RCU Tasks Trace: Setting shift to 2 and lim to 1 rcu_task_cb_adjust=1 rcu_task_cpu_ids=4.
Jan 13 21:25:16.922434 kernel: Performance Events: Fam17h+ core perfctr, AMD PMU driver.
Jan 13 21:25:16.922441 kernel: ... version:                0
Jan 13 21:25:16.922451 kernel: ... bit width:              48
Jan 13 21:25:16.922458 kernel: ... generic registers:      6
Jan 13 21:25:16.922466 kernel: ... value mask:             0000ffffffffffff
Jan 13 21:25:16.922473 kernel: ... max period:             00007fffffffffff
Jan 13 21:25:16.922481 kernel: ... fixed-purpose events:   0
Jan 13 21:25:16.922488 kernel: ... event mask:             000000000000003f
Jan 13 21:25:16.922495 kernel: signal: max sigframe size: 1776
Jan 13 21:25:16.922503 kernel: rcu: Hierarchical SRCU implementation.
Jan 13 21:25:16.922510 kernel: rcu: Max phase no-delay instances is 400.
Jan 13 21:25:16.922518 kernel: smp: Bringing up secondary CPUs ...
Jan 13 21:25:16.922528 kernel: smpboot: x86: Booting SMP configuration:
Jan 13 21:25:16.922535 kernel: .... node #0, CPUs: #1 #2 #3
Jan 13 21:25:16.922542 kernel: smp: Brought up 1 node, 4 CPUs
Jan 13 21:25:16.922550 kernel: smpboot: Max logical packages: 1
Jan 13 21:25:16.922557 kernel: smpboot: Total of 4 processors activated (22357.98 BogoMIPS)
Jan 13 21:25:16.922565 kernel: devtmpfs: initialized
Jan 13 21:25:16.922572 kernel: x86/mm: Memory block size: 128MB
Jan 13 21:25:16.922580 kernel: clocksource: jiffies: mask: 0xffffffff max_cycles: 0xffffffff, max_idle_ns: 1911260446275000 ns
Jan 13 21:25:16.922587 kernel: futex hash table entries: 1024 (order: 4, 65536 bytes, linear)
Jan 13 21:25:16.922597 kernel: pinctrl core: initialized pinctrl subsystem
Jan 13 21:25:16.922605 kernel: NET: Registered PF_NETLINK/PF_ROUTE protocol family
Jan 13 21:25:16.922612 kernel: audit: initializing netlink subsys (disabled)
Jan 13 21:25:16.922620 kernel: audit: type=2000 audit(1736803516.660:1): state=initialized audit_enabled=0 res=1
Jan 13 21:25:16.922627 kernel: thermal_sys: Registered thermal governor 'step_wise'
Jan 13 21:25:16.922634 kernel: thermal_sys: Registered thermal governor 'user_space'
Jan 13 21:25:16.922642 kernel: cpuidle: using governor menu
Jan 13 21:25:16.922649 kernel: acpiphp: ACPI Hot Plug PCI Controller Driver version: 0.5
Jan 13 21:25:16.922657 kernel: dca service started, version 1.12.1
Jan 13 21:25:16.922667 kernel: PCI: MMCONFIG for domain 0000 [bus 00-ff] at [mem 0xb0000000-0xbfffffff] (base 0xb0000000)
Jan 13 21:25:16.922674 kernel: PCI: MMCONFIG at [mem 0xb0000000-0xbfffffff] reserved as E820 entry
Jan 13 21:25:16.922682 kernel: PCI: Using configuration type 1 for base access
Jan 13 21:25:16.922689 kernel: kprobes: kprobe jump-optimization is enabled. All kprobes are optimized if possible.
Jan 13 21:25:16.922697 kernel: HugeTLB: registered 1.00 GiB page size, pre-allocated 0 pages
Jan 13 21:25:16.922704 kernel: HugeTLB: 16380 KiB vmemmap can be freed for a 1.00 GiB page
Jan 13 21:25:16.922712 kernel: HugeTLB: registered 2.00 MiB page size, pre-allocated 0 pages
Jan 13 21:25:16.922719 kernel: HugeTLB: 28 KiB vmemmap can be freed for a 2.00 MiB page
Jan 13 21:25:16.922727 kernel: ACPI: Added _OSI(Module Device)
Jan 13 21:25:16.922737 kernel: ACPI: Added _OSI(Processor Device)
Jan 13 21:25:16.922744 kernel: ACPI: Added _OSI(3.0 _SCP Extensions)
Jan 13 21:25:16.922760 kernel: ACPI: Added _OSI(Processor Aggregator Device)
Jan 13 21:25:16.922768 kernel: ACPI: 1 ACPI AML tables successfully acquired and loaded
Jan 13 21:25:16.922775 kernel: ACPI: _OSC evaluation for CPUs failed, trying _PDC
Jan 13 21:25:16.922783 kernel: ACPI: Interpreter enabled
Jan 13 21:25:16.922790 kernel: ACPI: PM: (supports S0 S3 S5)
Jan 13 21:25:16.922798 kernel: ACPI: Using IOAPIC for interrupt routing
Jan 13 21:25:16.922805 kernel: PCI: Using host bridge windows from ACPI; if necessary, use "pci=nocrs" and report a bug
Jan 13 21:25:16.922815 kernel: PCI: Using E820 reservations for host bridge windows
Jan 13 21:25:16.922823 kernel: ACPI: Enabled 2 GPEs in block 00 to 3F
Jan 13 21:25:16.922830 kernel: ACPI: PCI Root Bridge [PCI0] (domain 0000 [bus 00-ff])
Jan 13 21:25:16.923001 kernel: acpi PNP0A08:00: _OSC: OS supports [ExtendedConfig ASPM ClockPM Segments MSI HPX-Type3]
Jan 13 21:25:16.923130 kernel: acpi PNP0A08:00: _OSC: platform does not support [PCIeHotplug LTR]
Jan 13 21:25:16.923275 kernel: acpi PNP0A08:00: _OSC: OS now controls [PME AER PCIeCapability]
Jan 13 21:25:16.923286 kernel: PCI host bridge to bus 0000:00
Jan 13 21:25:16.923507 kernel: pci_bus 0000:00: root bus resource [io 0x0000-0x0cf7 window]
Jan 13 21:25:16.923637 kernel: pci_bus 0000:00: root bus resource [io 0x0d00-0xffff window]
Jan 13 21:25:16.923759 kernel: pci_bus 0000:00: root bus resource [mem 0x000a0000-0x000bffff window]
Jan 13 21:25:16.923875 kernel: pci_bus 0000:00: root bus resource [mem 0x9d000000-0xafffffff window]
Jan 13 21:25:16.923983 kernel: pci_bus 0000:00: root bus resource [mem 0xc0000000-0xfebfffff window]
Jan 13 21:25:16.924090 kernel: pci_bus 0000:00: root bus resource [mem 0x100000000-0x8ffffffff window]
Jan 13 21:25:16.924198 kernel: pci_bus 0000:00: root bus resource [bus 00-ff]
Jan 13 21:25:16.924375 kernel: pci 0000:00:00.0: [8086:29c0] type 00 class 0x060000
Jan 13 21:25:16.924504 kernel: pci 0000:00:01.0: [1234:1111] type 00 class 0x030000
Jan 13 21:25:16.924621 kernel: pci 0000:00:01.0: reg 0x10: [mem 0xfd000000-0xfdffffff pref]
Jan 13 21:25:16.924737 kernel: pci 0000:00:01.0: reg 0x18: [mem 0xfebd0000-0xfebd0fff]
Jan 13 21:25:16.924863 kernel: pci 0000:00:01.0: reg 0x30: [mem 0xfebc0000-0xfebcffff pref]
Jan 13 21:25:16.925037 kernel: pci 0000:00:01.0: Video device with shadowed ROM at [mem 0x000c0000-0x000dffff]
Jan 13 21:25:16.925204 kernel: pci 0000:00:02.0: [1af4:1005] type 00 class 0x00ff00
Jan 13 21:25:16.925437 kernel: pci 0000:00:02.0: reg 0x10: [io 0xc0c0-0xc0df]
Jan 13 21:25:16.925629 kernel: pci 0000:00:02.0: reg 0x14: [mem 0xfebd1000-0xfebd1fff]
Jan 13 21:25:16.925765 kernel: pci 0000:00:02.0: reg 0x20: [mem 0xfe000000-0xfe003fff 64bit pref]
Jan 13 21:25:16.925907 kernel: pci 0000:00:03.0: [1af4:1001] type 00 class 0x010000
Jan 13 21:25:16.926028 kernel: pci 0000:00:03.0: reg 0x10: [io 0xc000-0xc07f]
Jan 13 21:25:16.926149 kernel: pci 0000:00:03.0: reg 0x14: [mem 0xfebd2000-0xfebd2fff]
Jan 13 21:25:16.926298 kernel: pci 0000:00:03.0: reg 0x20: [mem 0xfe004000-0xfe007fff 64bit pref]
Jan 13 21:25:16.926427 kernel: pci 0000:00:04.0: [1af4:1000] type 00 class 0x020000
Jan 13 21:25:16.926546 kernel: pci 0000:00:04.0: reg 0x10: [io 0xc0e0-0xc0ff]
Jan 13 21:25:16.926664 kernel: pci 0000:00:04.0: reg 0x14: [mem 0xfebd3000-0xfebd3fff]
Jan 13 21:25:16.926791 kernel: pci 0000:00:04.0: reg 0x20: [mem 0xfe008000-0xfe00bfff 64bit pref]
Jan 13 21:25:16.926911 kernel: pci 0000:00:04.0: reg 0x30: [mem 0xfeb80000-0xfebbffff pref]
Jan 13 21:25:16.927146 kernel: pci 0000:00:1f.0: [8086:2918] type 00 class 0x060100
Jan 13 21:25:16.927312 kernel: pci 0000:00:1f.0: quirk: [io 0x0600-0x067f] claimed by ICH6 ACPI/GPIO/TCO
Jan 13 21:25:16.927442 kernel: pci 0000:00:1f.2: [8086:2922] type 00 class 0x010601
Jan 13 21:25:16.927561 kernel: pci 0000:00:1f.2: reg 0x20: [io 0xc100-0xc11f]
Jan 13 21:25:16.927679 kernel: pci 0000:00:1f.2: reg 0x24: [mem 0xfebd4000-0xfebd4fff]
Jan 13 21:25:16.927816 kernel: pci 0000:00:1f.3: [8086:2930] type 00 class 0x0c0500
Jan 13 21:25:16.927934 kernel: pci 0000:00:1f.3: reg 0x20: [io 0x0700-0x073f]
Jan 13 21:25:16.927945 kernel: ACPI: PCI: Interrupt link LNKA configured for IRQ 10
Jan 13 21:25:16.927957 kernel: ACPI: PCI: Interrupt link LNKB configured for IRQ 10
Jan 13 21:25:16.927965 kernel: ACPI: PCI: Interrupt link LNKC configured for IRQ 11
Jan 13 21:25:16.927972 kernel: ACPI: PCI: Interrupt link LNKD configured for IRQ 11
Jan 13 21:25:16.927980 kernel: ACPI: PCI: Interrupt link LNKE configured for IRQ 10
Jan 13 21:25:16.927988 kernel: ACPI: PCI: Interrupt link LNKF configured for IRQ 10
Jan 13 21:25:16.927996 kernel: ACPI: PCI: Interrupt link LNKG configured for IRQ 11
Jan 13 21:25:16.928003 kernel: ACPI: PCI: Interrupt link LNKH configured for IRQ 11
Jan 13 21:25:16.928011 kernel: ACPI: PCI: Interrupt link GSIA configured for IRQ 16
Jan 13 21:25:16.928018 kernel: ACPI: PCI: Interrupt link GSIB configured for IRQ 17
Jan 13 21:25:16.928029 kernel: ACPI: PCI: Interrupt link GSIC configured for IRQ 18
Jan 13 21:25:16.928037 kernel: ACPI: PCI: Interrupt link GSID configured for IRQ 19
Jan 13 21:25:16.928044 kernel: ACPI: PCI: Interrupt link GSIE configured for IRQ 20
Jan 13 21:25:16.928052 kernel: ACPI: PCI: Interrupt link GSIF configured for IRQ 21
Jan 13 21:25:16.928060 kernel: ACPI: PCI: Interrupt link GSIG configured for IRQ 22
Jan 13 21:25:16.928067 kernel: ACPI: PCI: Interrupt link GSIH configured for IRQ 23
Jan 13 21:25:16.928075 kernel: iommu: Default domain type: Translated
Jan 13 21:25:16.928082 kernel: iommu: DMA domain TLB invalidation policy: lazy mode
Jan 13 21:25:16.928090 kernel: PCI: Using ACPI for IRQ routing
Jan 13 21:25:16.928100 kernel: PCI: pci_cache_line_size set to 64 bytes
Jan 13 21:25:16.928108 kernel: e820: reserve RAM buffer [mem 0x0009fc00-0x0009ffff]
Jan 13 21:25:16.928116 kernel: e820: reserve RAM buffer [mem 0x9cfdc000-0x9fffffff]
Jan 13 21:25:16.928566 kernel: pci 0000:00:01.0: vgaarb: setting as boot VGA device
Jan 13 21:25:16.928691 kernel: pci 0000:00:01.0: vgaarb: bridge control possible
Jan 13 21:25:16.928824 kernel: pci 0000:00:01.0: vgaarb: VGA device added: decodes=io+mem,owns=io+mem,locks=none
Jan 13 21:25:16.928835 kernel: vgaarb: loaded
Jan 13 21:25:16.928843 kernel: hpet0: at MMIO 0xfed00000, IRQs 2, 8, 0
Jan 13 21:25:16.928856 kernel: hpet0: 3 comparators, 64-bit 100.000000 MHz counter
Jan 13 21:25:16.928863 kernel: clocksource: Switched to clocksource kvm-clock
Jan 13 21:25:16.928871 kernel: VFS: Disk quotas dquot_6.6.0
Jan 13 21:25:16.928879 kernel: VFS: Dquot-cache hash table entries: 512 (order 0, 4096 bytes)
Jan 13 21:25:16.928886 kernel: pnp: PnP ACPI init
Jan 13 21:25:16.929015 kernel: system 00:05: [mem 0xb0000000-0xbfffffff window] has been reserved
Jan 13 21:25:16.929027 kernel: pnp: PnP ACPI: found 6 devices
Jan 13 21:25:16.929035 kernel: clocksource: acpi_pm: mask: 0xffffff max_cycles: 0xffffff, max_idle_ns: 2085701024 ns
Jan 13 21:25:16.929046 kernel: NET: Registered PF_INET protocol family
Jan 13 21:25:16.929054 kernel: IP idents hash table entries: 65536 (order: 7, 524288 bytes, linear)
Jan 13 21:25:16.929062 kernel: tcp_listen_portaddr_hash hash table entries: 2048 (order: 3, 32768 bytes, linear)
Jan 13 21:25:16.929070 kernel: Table-perturb hash table entries: 65536 (order: 6, 262144 bytes, linear)
Jan 13 21:25:16.929078 kernel: TCP established hash table entries: 32768 (order: 6, 262144 bytes, linear)
Jan 13 21:25:16.929085 kernel: TCP bind hash table entries: 32768 (order: 8, 1048576 bytes, linear)
Jan 13 21:25:16.929093 kernel: TCP: Hash tables configured (established 32768 bind 32768)
Jan 13 21:25:16.929100 kernel: UDP hash table entries: 2048 (order: 4, 65536 bytes, linear)
Jan 13 21:25:16.929108 kernel: UDP-Lite hash table entries: 2048 (order: 4, 65536 bytes, linear)
Jan 13 21:25:16.929118 kernel: NET: Registered PF_UNIX/PF_LOCAL protocol family
Jan 13 21:25:16.929125 kernel: NET: Registered PF_XDP protocol family
Jan 13 21:25:16.929236 kernel: pci_bus 0000:00: resource 4 [io 0x0000-0x0cf7 window]
Jan 13 21:25:16.929371 kernel: pci_bus 0000:00: resource 5 [io 0x0d00-0xffff window]
Jan 13 21:25:16.929480 kernel: pci_bus 0000:00: resource 6 [mem 0x000a0000-0x000bffff window]
Jan 13 21:25:16.929586 kernel: pci_bus 0000:00: resource 7 [mem 0x9d000000-0xafffffff window]
Jan 13 21:25:16.929692 kernel: pci_bus 0000:00: resource 8 [mem 0xc0000000-0xfebfffff window]
Jan 13 21:25:16.929809 kernel: pci_bus 0000:00: resource 9 [mem 0x100000000-0x8ffffffff window]
Jan 13 21:25:16.929823 kernel: PCI: CLS 0 bytes, default 64
Jan 13 21:25:16.929831 kernel: Initialise system trusted keyrings
Jan 13 21:25:16.929839 kernel: workingset: timestamp_bits=39 max_order=20 bucket_order=0
Jan 13 21:25:16.929846 kernel: Key type asymmetric registered
Jan 13 21:25:16.929854 kernel: Asymmetric key parser 'x509' registered
Jan 13 21:25:16.929862 kernel: Block layer SCSI generic (bsg) driver version 0.4 loaded (major 251)
Jan 13 21:25:16.929869 kernel: io scheduler mq-deadline registered
Jan 13 21:25:16.929877 kernel: io scheduler kyber registered
Jan 13 21:25:16.929884 kernel: io scheduler bfq registered
Jan 13 21:25:16.929894 kernel: ioatdma: Intel(R) QuickData Technology Driver 5.00
Jan 13 21:25:16.929902 kernel: ACPI: \_SB_.GSIG: Enabled at IRQ 22
Jan 13 21:25:16.929910 kernel: ACPI: \_SB_.GSIH: Enabled at IRQ 23
Jan 13 21:25:16.929918 kernel: ACPI: \_SB_.GSIE: Enabled at IRQ 20
Jan 13 21:25:16.929925 kernel: Serial: 8250/16550 driver, 4 ports, IRQ sharing enabled
Jan 13 21:25:16.929933 kernel: 00:03: ttyS0 at I/O 0x3f8 (irq = 4, base_baud = 115200) is a 16550A
Jan 13 21:25:16.929941 kernel: i8042: PNP: PS/2 Controller [PNP0303:KBD,PNP0f13:MOU] at 0x60,0x64 irq 1,12
Jan 13 21:25:16.929948 kernel: serio: i8042 KBD port at 0x60,0x64 irq 1
Jan 13 21:25:16.929956 kernel: serio: i8042 AUX port at 0x60,0x64 irq 12
Jan 13 21:25:16.930090 kernel: rtc_cmos 00:04: RTC can wake from S4
Jan 13 21:25:16.930102 kernel: input: AT Translated Set 2 keyboard as /devices/platform/i8042/serio0/input/input0
Jan 13 21:25:16.930216 kernel: rtc_cmos 00:04: registered as rtc0
Jan 13 21:25:16.930417 kernel: rtc_cmos 00:04: setting system clock to 2025-01-13T21:25:16 UTC (1736803516)
Jan 13 21:25:16.930531 kernel: rtc_cmos 00:04: alarms up to one day, y3k, 242 bytes nvram, hpet irqs
Jan 13 21:25:16.930541 kernel: amd_pstate: the _CPC object is not present in SBIOS or ACPI disabled
Jan 13 21:25:16.930549 kernel: NET: Registered PF_INET6 protocol family
Jan 13 21:25:16.930557 kernel: Segment Routing with IPv6
Jan 13 21:25:16.930569 kernel: In-situ OAM (IOAM) with IPv6
Jan 13 21:25:16.930576 kernel: NET: Registered PF_PACKET protocol family
Jan 13 21:25:16.930584 kernel: Key type dns_resolver registered
Jan 13 21:25:16.930592 kernel: IPI shorthand broadcast: enabled
Jan 13 21:25:16.930600 kernel: sched_clock: Marking stable (605002882, 105109209)->(726378919, -16266828)
Jan 13 21:25:16.930607 kernel: registered taskstats version 1
Jan 13 21:25:16.930615 kernel: Loading compiled-in X.509 certificates
Jan 13 21:25:16.930623 kernel: Loaded X.509 cert 'Kinvolk GmbH: Module signing key for 6.6.71-flatcar: e8ca4908f7ff887d90a0430272c92dde55624447'
Jan 13 21:25:16.930631 kernel: Key type .fscrypt registered
Jan 13 21:25:16.930640 kernel: Key type fscrypt-provisioning registered
Jan 13 21:25:16.930648 kernel: ima: No TPM chip found, activating TPM-bypass!
Jan 13 21:25:16.930656 kernel: ima: Allocated hash algorithm: sha1
Jan 13 21:25:16.930664 kernel: ima: No architecture policies found
Jan 13 21:25:16.930671 kernel: clk: Disabling unused clocks
Jan 13 21:25:16.930679 kernel: Freeing unused kernel image (initmem) memory: 42844K
Jan 13 21:25:16.930687 kernel: Write protecting the kernel read-only data: 36864k
Jan 13 21:25:16.930694 kernel: Freeing unused kernel image (rodata/data gap) memory: 1848K
Jan 13 21:25:16.930702 kernel: Run /init as init process
Jan 13 21:25:16.930712 kernel:   with arguments:
Jan 13 21:25:16.930720 kernel:     /init
Jan 13 21:25:16.930727 kernel:   with environment:
Jan 13 21:25:16.930735 kernel:     HOME=/
Jan 13 21:25:16.930742 kernel:     TERM=linux
Jan 13 21:25:16.930758 kernel:     BOOT_IMAGE=/flatcar/vmlinuz-a
Jan 13 21:25:16.930770 systemd[1]: systemd 255 running in system mode (+PAM +AUDIT +SELINUX -APPARMOR +IMA +SMACK +SECCOMP +GCRYPT -GNUTLS +OPENSSL -ACL +BLKID +CURL +ELFUTILS -FIDO2 +IDN2 -IDN +IPTC +KMOD +LIBCRYPTSETUP +LIBFDISK +PCRE2 -PWQUALITY -P11KIT -QRENCODE +TPM2 +BZIP2 +LZ4 +XZ +ZLIB +ZSTD -BPF_FRAMEWORK -XKBCOMMON +UTMP -SYSVINIT default-hierarchy=unified)
Jan 13 21:25:16.930782 systemd[1]: Detected virtualization kvm.
Jan 13 21:25:16.930794 systemd[1]: Detected architecture x86-64.
Jan 13 21:25:16.930802 systemd[1]: Running in initrd.
Jan 13 21:25:16.930810 systemd[1]: No hostname configured, using default hostname.
Jan 13 21:25:16.930818 systemd[1]: Hostname set to .
Jan 13 21:25:16.930826 systemd[1]: Initializing machine ID from VM UUID.
Jan 13 21:25:16.930834 systemd[1]: Queued start job for default target initrd.target.
Jan 13 21:25:16.930843 systemd[1]: Started clevis-luks-askpass.path - Forward Password Requests to Clevis Directory Watch.
Jan 13 21:25:16.930851 systemd[1]: Started systemd-ask-password-console.path - Dispatch Password Requests to Console Directory Watch.
Jan 13 21:25:16.930863 systemd[1]: Expecting device dev-disk-by\x2dlabel-EFI\x2dSYSTEM.device - /dev/disk/by-label/EFI-SYSTEM...
Jan 13 21:25:16.930884 systemd[1]: Expecting device dev-disk-by\x2dlabel-OEM.device - /dev/disk/by-label/OEM...
Jan 13 21:25:16.930895 systemd[1]: Expecting device dev-disk-by\x2dlabel-ROOT.device - /dev/disk/by-label/ROOT...
Jan 13 21:25:16.930904 systemd[1]: Expecting device dev-disk-by\x2dpartlabel-USR\x2dA.device - /dev/disk/by-partlabel/USR-A...
Jan 13 21:25:16.930914 systemd[1]: Expecting device dev-disk-by\x2dpartuuid-7130c94a\x2d213a\x2d4e5a\x2d8e26\x2d6cce9662f132.device - /dev/disk/by-partuuid/7130c94a-213a-4e5a-8e26-6cce9662f132...
Jan 13 21:25:16.930925 systemd[1]: Expecting device dev-mapper-usr.device - /dev/mapper/usr...
Jan 13 21:25:16.930934 systemd[1]: Reached target cryptsetup-pre.target - Local Encrypted Volumes (Pre).
Jan 13 21:25:16.930942 systemd[1]: Reached target cryptsetup.target - Local Encrypted Volumes.
Jan 13 21:25:16.930951 systemd[1]: Reached target paths.target - Path Units.
Jan 13 21:25:16.930959 systemd[1]: Reached target slices.target - Slice Units.
Jan 13 21:25:16.930968 systemd[1]: Reached target swap.target - Swaps.
Jan 13 21:25:16.930976 systemd[1]: Reached target timers.target - Timer Units.
Jan 13 21:25:16.930984 systemd[1]: Listening on iscsid.socket - Open-iSCSI iscsid Socket.
Jan 13 21:25:16.930995 systemd[1]: Listening on iscsiuio.socket - Open-iSCSI iscsiuio Socket.
Jan 13 21:25:16.931004 systemd[1]: Listening on systemd-journald-dev-log.socket - Journal Socket (/dev/log).
Jan 13 21:25:16.931012 systemd[1]: Listening on systemd-journald.socket - Journal Socket.
Jan 13 21:25:16.931021 systemd[1]: Listening on systemd-networkd.socket - Network Service Netlink Socket.
Jan 13 21:25:16.931030 systemd[1]: Listening on systemd-udevd-control.socket - udev Control Socket.
Jan 13 21:25:16.931038 systemd[1]: Listening on systemd-udevd-kernel.socket - udev Kernel Socket.
Jan 13 21:25:16.931047 systemd[1]: Reached target sockets.target - Socket Units.
Jan 13 21:25:16.931055 systemd[1]: Starting ignition-setup-pre.service - Ignition env setup...
Jan 13 21:25:16.931064 systemd[1]: Starting kmod-static-nodes.service - Create List of Static Device Nodes...
Jan 13 21:25:16.931074 systemd[1]: Finished network-cleanup.service - Network Cleanup.
Jan 13 21:25:16.931083 systemd[1]: Starting systemd-fsck-usr.service...
Jan 13 21:25:16.931091 systemd[1]: Starting systemd-journald.service - Journal Service...
Jan 13 21:25:16.931099 systemd[1]: Starting systemd-modules-load.service - Load Kernel Modules...
Jan 13 21:25:16.931108 systemd[1]: Starting systemd-vconsole-setup.service - Virtual Console Setup...
Jan 13 21:25:16.931118 systemd[1]: Finished ignition-setup-pre.service - Ignition env setup.
Jan 13 21:25:16.931127 systemd[1]: Finished kmod-static-nodes.service - Create List of Static Device Nodes.
Jan 13 21:25:16.931136 systemd[1]: Finished systemd-fsck-usr.service.
Jan 13 21:25:16.931166 systemd-journald[193]: Collecting audit messages is disabled.
Jan 13 21:25:16.931187 systemd[1]: Starting systemd-tmpfiles-setup-dev-early.service - Create Static Device Nodes in /dev gracefully...
Jan 13 21:25:16.931198 systemd[1]: Finished systemd-tmpfiles-setup-dev-early.service - Create Static Device Nodes in /dev gracefully.
Jan 13 21:25:16.931207 systemd-journald[193]: Journal started
Jan 13 21:25:16.931227 systemd-journald[193]: Runtime Journal (/run/log/journal/f89f3922cb1641e9b645b3e89c0d850a) is 6.0M, max 48.4M, 42.3M free.
Jan 13 21:25:16.925584 systemd-modules-load[194]: Inserted module 'overlay'
Jan 13 21:25:16.964377 kernel: bridge: filtering via arp/ip/ip6tables is no longer available by default. Update your scripts to load br_netfilter if you need this.
Jan 13 21:25:16.964407 kernel: Bridge firewalling registered
Jan 13 21:25:16.964419 systemd[1]: Started systemd-journald.service - Journal Service.
Jan 13 21:25:16.953404 systemd-modules-load[194]: Inserted module 'br_netfilter'
Jan 13 21:25:16.965016 systemd[1]: Finished systemd-modules-load.service - Load Kernel Modules.
Jan 13 21:25:16.967916 systemd[1]: Finished systemd-vconsole-setup.service - Virtual Console Setup.
Jan 13 21:25:16.975499 systemd[1]: Starting dracut-cmdline-ask.service - dracut ask for additional cmdline parameters...
Jan 13 21:25:16.976802 systemd[1]: Starting systemd-sysctl.service - Apply Kernel Variables...
Jan 13 21:25:16.979850 systemd[1]: Starting systemd-tmpfiles-setup-dev.service - Create Static Device Nodes in /dev...
Jan 13 21:25:16.982464 systemd[1]: Starting systemd-tmpfiles-setup.service - Create System Files and Directories...
Jan 13 21:25:16.990374 systemd[1]: Finished dracut-cmdline-ask.service - dracut ask for additional cmdline parameters.
Jan 13 21:25:16.993231 systemd[1]: Starting dracut-cmdline.service - dracut cmdline hook...
Jan 13 21:25:16.995917 systemd[1]: Finished systemd-sysctl.service - Apply Kernel Variables.
Jan 13 21:25:16.998874 systemd[1]: Finished systemd-tmpfiles-setup.service - Create System Files and Directories.
Jan 13 21:25:17.002985 systemd[1]: Starting systemd-resolved.service - Network Name Resolution...
Jan 13 21:25:17.005448 systemd[1]: Finished systemd-tmpfiles-setup-dev.service - Create Static Device Nodes in /dev.
Jan 13 21:25:17.009456 dracut-cmdline[224]: dracut-dracut-053
Jan 13 21:25:17.011946 dracut-cmdline[224]: Using kernel command line parameters: rd.driver.pre=btrfs rootflags=rw mount.usrflags=ro BOOT_IMAGE=/flatcar/vmlinuz-a mount.usr=/dev/mapper/usr verity.usr=PARTUUID=7130c94a-213a-4e5a-8e26-6cce9662f132 rootflags=rw mount.usrflags=ro consoleblank=0 root=LABEL=ROOT console=ttyS0,115200 flatcar.first_boot=detected verity.usrhash=8945029ddd0f3864592f8746dde99cfcba228e0d3cb946f5938103dbe8733507
Jan 13 21:25:17.043832 systemd-resolved[227]: Positive Trust Anchors:
Jan 13 21:25:17.043849 systemd-resolved[227]: . IN DS 20326 8 2 e06d44b80b8f1d39a95c0b0d7c65d08458e880409bbc683457104237c7f8ec8d
Jan 13 21:25:17.043888 systemd-resolved[227]: Negative trust anchors: home.arpa 10.in-addr.arpa 16.172.in-addr.arpa 17.172.in-addr.arpa 18.172.in-addr.arpa 19.172.in-addr.arpa 20.172.in-addr.arpa 21.172.in-addr.arpa 22.172.in-addr.arpa 23.172.in-addr.arpa 24.172.in-addr.arpa 25.172.in-addr.arpa 26.172.in-addr.arpa 27.172.in-addr.arpa 28.172.in-addr.arpa 29.172.in-addr.arpa 30.172.in-addr.arpa 31.172.in-addr.arpa 170.0.0.192.in-addr.arpa 171.0.0.192.in-addr.arpa 168.192.in-addr.arpa d.f.ip6.arpa ipv4only.arpa resolver.arpa corp home internal intranet lan local private test
Jan 13 21:25:17.046928 systemd-resolved[227]: Defaulting to hostname 'linux'.
Jan 13 21:25:17.048126 systemd[1]: Started systemd-resolved.service - Network Name Resolution.
Jan 13 21:25:17.053766 systemd[1]: Reached target nss-lookup.target - Host and Network Name Lookups.
Jan 13 21:25:17.097308 kernel: SCSI subsystem initialized
Jan 13 21:25:17.106294 kernel: Loading iSCSI transport class v2.0-870.
Jan 13 21:25:17.117308 kernel: iscsi: registered transport (tcp)
Jan 13 21:25:17.138293 kernel: iscsi: registered transport (qla4xxx)
Jan 13 21:25:17.138348 kernel: QLogic iSCSI HBA Driver
Jan 13 21:25:17.193280 systemd[1]: Finished dracut-cmdline.service - dracut cmdline hook.
Jan 13 21:25:17.203387 systemd[1]: Starting dracut-pre-udev.service - dracut pre-udev hook...
Jan 13 21:25:17.233350 kernel: device-mapper: core: CONFIG_IMA_DISABLE_HTABLE is disabled. Duplicate IMA measurements will not be recorded in the IMA log.
Jan 13 21:25:17.233452 kernel: device-mapper: uevent: version 1.0.3
Jan 13 21:25:17.233469 kernel: device-mapper: ioctl: 4.48.0-ioctl (2023-03-01) initialised: dm-devel@redhat.com
Jan 13 21:25:17.279292 kernel: raid6: avx2x4 gen() 29307 MB/s
Jan 13 21:25:17.296273 kernel: raid6: avx2x2 gen() 26359 MB/s
Jan 13 21:25:17.313523 kernel: raid6: avx2x1 gen() 22740 MB/s
Jan 13 21:25:17.313604 kernel: raid6: using algorithm avx2x4 gen() 29307 MB/s
Jan 13 21:25:17.331618 kernel: raid6: .... xor() 6156 MB/s, rmw enabled
Jan 13 21:25:17.331694 kernel: raid6: using avx2x2 recovery algorithm
Jan 13 21:25:17.352295 kernel: xor: automatically using best checksumming function avx
Jan 13 21:25:17.507287 kernel: Btrfs loaded, zoned=no, fsverity=no
Jan 13 21:25:17.520151 systemd[1]: Finished dracut-pre-udev.service - dracut pre-udev hook.
Jan 13 21:25:17.533516 systemd[1]: Starting systemd-udevd.service - Rule-based Manager for Device Events and Files...
Jan 13 21:25:17.547853 systemd-udevd[411]: Using default interface naming scheme 'v255'.
Jan 13 21:25:17.553494 systemd[1]: Started systemd-udevd.service - Rule-based Manager for Device Events and Files.
Jan 13 21:25:17.560523 systemd[1]: Starting dracut-pre-trigger.service - dracut pre-trigger hook...
Jan 13 21:25:17.577650 dracut-pre-trigger[415]: rd.md=0: removing MD RAID activation
Jan 13 21:25:17.619665 systemd[1]: Finished dracut-pre-trigger.service - dracut pre-trigger hook.
Jan 13 21:25:17.632569 systemd[1]: Starting systemd-udev-trigger.service - Coldplug All udev Devices...
Jan 13 21:25:17.713125 systemd[1]: Finished systemd-udev-trigger.service - Coldplug All udev Devices.
Jan 13 21:25:17.724398 systemd[1]: Starting dracut-initqueue.service - dracut initqueue hook...
Jan 13 21:25:17.738848 systemd[1]: Finished dracut-initqueue.service - dracut initqueue hook.
Jan 13 21:25:17.739816 systemd[1]: Reached target remote-fs-pre.target - Preparation for Remote File Systems.
Jan 13 21:25:17.744521 systemd[1]: Reached target remote-cryptsetup.target - Remote Encrypted Volumes.
Jan 13 21:25:17.747413 systemd[1]: Reached target remote-fs.target - Remote File Systems.
Jan 13 21:25:17.752271 kernel: cryptd: max_cpu_qlen set to 1000
Jan 13 21:25:17.757503 systemd[1]: Starting dracut-pre-mount.service - dracut pre-mount hook...
Jan 13 21:25:17.765287 kernel: virtio_blk virtio1: 4/0/0 default/read/poll queues
Jan 13 21:25:17.790635 kernel: AVX2 version of gcm_enc/dec engaged.
Jan 13 21:25:17.790675 kernel: AES CTR mode by8 optimization enabled
Jan 13 21:25:17.790698 kernel: virtio_blk virtio1: [vda] 19775488 512-byte logical blocks (10.1 GB/9.43 GiB)
Jan 13 21:25:17.790995 kernel: GPT:Primary header thinks Alt. header is not at the end of the disk.
Jan 13 21:25:17.791009 kernel: GPT:9289727 != 19775487
Jan 13 21:25:17.791021 kernel: GPT:Alternate GPT header not at the end of the disk.
Jan 13 21:25:17.791033 kernel: GPT:9289727 != 19775487
Jan 13 21:25:17.791044 kernel: GPT: Use GNU Parted to correct GPT errors.
Jan 13 21:25:17.791056 kernel: vda: vda1 vda2 vda3 vda4 vda6 vda7 vda9
Jan 13 21:25:17.773765 systemd[1]: Finished dracut-pre-mount.service - dracut pre-mount hook.
Jan 13 21:25:17.785305 systemd[1]: dracut-cmdline-ask.service: Deactivated successfully.
Jan 13 21:25:17.785389 systemd[1]: Stopped dracut-cmdline-ask.service - dracut ask for additional cmdline parameters.
Jan 13 21:25:17.789087 systemd[1]: Stopping dracut-cmdline-ask.service - dracut ask for additional cmdline parameters...
Jan 13 21:25:17.790756 systemd[1]: systemd-vconsole-setup.service: Deactivated successfully.
Jan 13 21:25:17.790895 systemd[1]: Stopped systemd-vconsole-setup.service - Virtual Console Setup.
Jan 13 21:25:17.793441 systemd[1]: Stopping systemd-vconsole-setup.service - Virtual Console Setup...
Jan 13 21:25:17.805382 kernel: libata version 3.00 loaded.
Jan 13 21:25:17.805494 systemd[1]: Starting systemd-vconsole-setup.service - Virtual Console Setup...
Jan 13 21:25:17.814335 kernel: ahci 0000:00:1f.2: version 3.0
Jan 13 21:25:17.845498 kernel: ACPI: \_SB_.GSIA: Enabled at IRQ 16
Jan 13 21:25:17.845516 kernel: ahci 0000:00:1f.2: AHCI 0001.0000 32 slots 6 ports 1.5 Gbps 0x3f impl SATA mode
Jan 13 21:25:17.845677 kernel: ahci 0000:00:1f.2: flags: 64bit ncq only
Jan 13 21:25:17.845834 kernel: scsi host0: ahci
Jan 13 21:25:17.845987 kernel: scsi host1: ahci
Jan 13 21:25:17.846129 kernel: BTRFS: device fsid b8e2d3c5-4bed-4339-bed5-268c66823686 devid 1 transid 37 /dev/vda3 scanned by (udev-worker) (472)
Jan 13 21:25:17.846140 kernel: BTRFS: device label OEM devid 1 transid 12 /dev/vda6 scanned by (udev-worker) (455)
Jan 13 21:25:17.846151 kernel: scsi host2: ahci
Jan 13 21:25:17.846326 kernel: scsi host3: ahci
Jan 13 21:25:17.846472 kernel: scsi host4: ahci
Jan 13 21:25:17.846614 kernel: scsi host5: ahci
Jan 13 21:25:17.846765 kernel: ata1: SATA max UDMA/133 abar m4096@0xfebd4000 port 0xfebd4100 irq 34
Jan 13 21:25:17.846776 kernel: ata2: SATA max UDMA/133 abar m4096@0xfebd4000 port 0xfebd4180 irq 34
Jan 13 21:25:17.846786 kernel: ata3: SATA max UDMA/133 abar m4096@0xfebd4000 port 0xfebd4200 irq 34
Jan 13 21:25:17.846796 kernel: ata4: SATA max UDMA/133 abar m4096@0xfebd4000 port 0xfebd4280 irq 34
Jan 13 21:25:17.846806 kernel: ata5: SATA max UDMA/133 abar m4096@0xfebd4000 port 0xfebd4300 irq 34
Jan 13 21:25:17.846820 kernel: ata6: SATA max UDMA/133 abar m4096@0xfebd4000 port 0xfebd4380 irq 34
Jan 13 21:25:17.838530 systemd[1]: Found device dev-disk-by\x2dlabel-EFI\x2dSYSTEM.device - /dev/disk/by-label/EFI-SYSTEM.
Jan 13 21:25:17.879693 systemd[1]: Finished systemd-vconsole-setup.service - Virtual Console Setup.
Jan 13 21:25:17.889076 systemd[1]: Found device dev-disk-by\x2dlabel-ROOT.device - /dev/disk/by-label/ROOT.
Jan 13 21:25:17.908492 systemd[1]: Found device dev-disk-by\x2dlabel-OEM.device - /dev/disk/by-label/OEM.
Jan 13 21:25:17.914232 systemd[1]: Found device dev-disk-by\x2dpartlabel-USR\x2dA.device - /dev/disk/by-partlabel/USR-A.
Jan 13 21:25:17.916779 systemd[1]: Found device dev-disk-by\x2dpartuuid-7130c94a\x2d213a\x2d4e5a\x2d8e26\x2d6cce9662f132.device - /dev/disk/by-partuuid/7130c94a-213a-4e5a-8e26-6cce9662f132.
Jan 13 21:25:17.936449 systemd[1]: Starting disk-uuid.service - Generate new UUID for disk GPT if necessary...
Jan 13 21:25:17.938628 systemd[1]: Starting dracut-cmdline-ask.service - dracut ask for additional cmdline parameters...
Jan 13 21:25:17.947875 disk-uuid[565]: Primary Header is updated.
Jan 13 21:25:17.947875 disk-uuid[565]: Secondary Entries is updated.
Jan 13 21:25:17.947875 disk-uuid[565]: Secondary Header is updated.
Jan 13 21:25:17.952279 kernel: vda: vda1 vda2 vda3 vda4 vda6 vda7 vda9
Jan 13 21:25:17.957317 kernel: vda: vda1 vda2 vda3 vda4 vda6 vda7 vda9
Jan 13 21:25:17.962105 systemd[1]: Finished dracut-cmdline-ask.service - dracut ask for additional cmdline parameters.
Jan 13 21:25:17.964671 kernel: vda: vda1 vda2 vda3 vda4 vda6 vda7 vda9
Jan 13 21:25:18.155664 kernel: ata6: SATA link down (SStatus 0 SControl 300)
Jan 13 21:25:18.155746 kernel: ata5: SATA link down (SStatus 0 SControl 300)
Jan 13 21:25:18.155769 kernel: ata4: SATA link down (SStatus 0 SControl 300)
Jan 13 21:25:18.157307 kernel: ata3: SATA link up 1.5 Gbps (SStatus 113 SControl 300)
Jan 13 21:25:18.158284 kernel: ata1: SATA link down (SStatus 0 SControl 300)
Jan 13 21:25:18.159286 kernel: ata2: SATA link down (SStatus 0 SControl 300)
Jan 13 21:25:18.159315 kernel: ata3.00: ATAPI: QEMU DVD-ROM, 2.5+, max UDMA/100
Jan 13 21:25:18.160287 kernel: ata3.00: applying bridge limits
Jan 13 21:25:18.161288 kernel: ata3.00: configured for UDMA/100
Jan 13 21:25:18.161315 kernel: scsi 2:0:0:0: CD-ROM QEMU QEMU DVD-ROM 2.5+ PQ: 0 ANSI: 5
Jan 13 21:25:18.206847 kernel: sr 2:0:0:0: [sr0] scsi3-mmc drive: 4x/4x cd/rw xa/form2 tray
Jan 13 21:25:18.218968 kernel: cdrom: Uniform CD-ROM driver Revision: 3.20
Jan 13 21:25:18.218988 kernel: sr 2:0:0:0: Attached scsi CD-ROM sr0
Jan 13 21:25:18.962323 kernel: vda: vda1 vda2 vda3 vda4 vda6 vda7 vda9
Jan 13 21:25:18.962617 disk-uuid[569]: The operation has completed successfully.
Jan 13 21:25:18.989627 systemd[1]: disk-uuid.service: Deactivated successfully.
Jan 13 21:25:18.989796 systemd[1]: Finished disk-uuid.service - Generate new UUID for disk GPT if necessary.
Jan 13 21:25:19.014410 systemd[1]: Starting verity-setup.service - Verity Setup for /dev/mapper/usr...
Jan 13 21:25:19.017930 sh[593]: Success
Jan 13 21:25:19.030286 kernel: device-mapper: verity: sha256 using implementation "sha256-ni"
Jan 13 21:25:19.062765 systemd[1]: Found device dev-mapper-usr.device - /dev/mapper/usr.
Jan 13 21:25:19.077813 systemd[1]: Mounting sysusr-usr.mount - /sysusr/usr...
Jan 13 21:25:19.080444 systemd[1]: Finished verity-setup.service - Verity Setup for /dev/mapper/usr.
Jan 13 21:25:19.091209 kernel: BTRFS info (device dm-0): first mount of filesystem b8e2d3c5-4bed-4339-bed5-268c66823686
Jan 13 21:25:19.091237 kernel: BTRFS info (device dm-0): using crc32c (crc32c-intel) checksum algorithm
Jan 13 21:25:19.091261 kernel: BTRFS warning (device dm-0): 'nologreplay' is deprecated, use 'rescue=nologreplay' instead
Jan 13 21:25:19.092282 kernel: BTRFS info (device dm-0): disabling log replay at mount time
Jan 13 21:25:19.093615 kernel: BTRFS info (device dm-0): using free space tree
Jan 13 21:25:19.097970 systemd[1]: Mounted sysusr-usr.mount - /sysusr/usr.
Jan 13 21:25:19.100316 systemd[1]: afterburn-network-kargs.service - Afterburn Initrd Setup Network Kernel Arguments was skipped because no trigger condition checks were met.
Jan 13 21:25:19.107426 systemd[1]: Starting ignition-setup.service - Ignition (setup)...
Jan 13 21:25:19.110118 systemd[1]: Starting parse-ip-for-networkd.service - Write systemd-networkd units from cmdline...
Jan 13 21:25:19.118314 kernel: BTRFS info (device vda6): first mount of filesystem 70d8a0b5-70da-4efb-a618-d15543718b1e
Jan 13 21:25:19.118342 kernel: BTRFS info (device vda6): using crc32c (crc32c-intel) checksum algorithm
Jan 13 21:25:19.118358 kernel: BTRFS info (device vda6): using free space tree
Jan 13 21:25:19.121305 kernel: BTRFS info (device vda6): auto enabling async discard
Jan 13 21:25:19.130486 systemd[1]: mnt-oem.mount: Deactivated successfully.
Jan 13 21:25:19.132499 kernel: BTRFS info (device vda6): last unmount of filesystem 70d8a0b5-70da-4efb-a618-d15543718b1e
Jan 13 21:25:19.141953 systemd[1]: Finished ignition-setup.service - Ignition (setup).
Jan 13 21:25:19.146464 systemd[1]: Starting ignition-fetch-offline.service - Ignition (fetch-offline)...
Jan 13 21:25:19.202529 ignition[683]: Ignition 2.19.0
Jan 13 21:25:19.202540 ignition[683]: Stage: fetch-offline
Jan 13 21:25:19.202577 ignition[683]: no configs at "/usr/lib/ignition/base.d"
Jan 13 21:25:19.202586 ignition[683]: no config dir at "/usr/lib/ignition/base.platform.d/qemu"
Jan 13 21:25:19.202675 ignition[683]: parsed url from cmdline: ""
Jan 13 21:25:19.202678 ignition[683]: no config URL provided
Jan 13 21:25:19.202683 ignition[683]: reading system config file "/usr/lib/ignition/user.ign"
Jan 13 21:25:19.202701 ignition[683]: no config at "/usr/lib/ignition/user.ign"
Jan 13 21:25:19.202724 ignition[683]: op(1): [started] loading QEMU firmware config module
Jan 13 21:25:19.202730 ignition[683]: op(1): executing: "modprobe" "qemu_fw_cfg"
Jan 13 21:25:19.214120 ignition[683]: op(1): [finished] loading QEMU firmware config module
Jan 13 21:25:19.215581 ignition[683]: parsing config with SHA512: f3fa0f3959480fe61c026068d795178faf8f0a819f35f993371e7616bde95ba2ba82253cd2e90677a7e997ef1468517bebba9cf8d0092a07142984eca92c099f
Jan 13 21:25:19.218027 unknown[683]: fetched base config from "system"
Jan 13 21:25:19.218541 unknown[683]: fetched user config from "qemu"
Jan 13 21:25:19.218824 ignition[683]: fetch-offline: fetch-offline passed
Jan 13 21:25:19.218893 ignition[683]: Ignition finished successfully
Jan 13 21:25:19.221367 systemd[1]: Finished ignition-fetch-offline.service - Ignition (fetch-offline).
Jan 13 21:25:19.244134 systemd[1]: Finished parse-ip-for-networkd.service - Write systemd-networkd units from cmdline.
Jan 13 21:25:19.255418 systemd[1]: Starting systemd-networkd.service - Network Configuration...
Jan 13 21:25:19.276318 systemd-networkd[784]: lo: Link UP
Jan 13 21:25:19.276328 systemd-networkd[784]: lo: Gained carrier
Jan 13 21:25:19.277857 systemd-networkd[784]: Enumeration completed
Jan 13 21:25:19.277940 systemd[1]: Started systemd-networkd.service - Network Configuration.
Jan 13 21:25:19.278272 systemd-networkd[784]: eth0: found matching network '/usr/lib/systemd/network/zz-default.network', based on potentially unpredictable interface name.
Jan 13 21:25:19.278277 systemd-networkd[784]: eth0: Configuring with /usr/lib/systemd/network/zz-default.network.
Jan 13 21:25:19.279224 systemd-networkd[784]: eth0: Link UP
Jan 13 21:25:19.279228 systemd-networkd[784]: eth0: Gained carrier
Jan 13 21:25:19.279234 systemd-networkd[784]: eth0: found matching network '/usr/lib/systemd/network/zz-default.network', based on potentially unpredictable interface name.
Jan 13 21:25:19.280353 systemd[1]: Reached target network.target - Network.
Jan 13 21:25:19.282162 systemd[1]: ignition-fetch.service - Ignition (fetch) was skipped because of an unmet condition check (ConditionPathExists=!/run/ignition.json).
Jan 13 21:25:19.293293 systemd-networkd[784]: eth0: DHCPv4 address 10.0.0.106/16, gateway 10.0.0.1 acquired from 10.0.0.1
Jan 13 21:25:19.293400 systemd[1]: Starting ignition-kargs.service - Ignition (kargs)...
Jan 13 21:25:19.306492 ignition[787]: Ignition 2.19.0
Jan 13 21:25:19.306503 ignition[787]: Stage: kargs
Jan 13 21:25:19.306654 ignition[787]: no configs at "/usr/lib/ignition/base.d"
Jan 13 21:25:19.306665 ignition[787]: no config dir at "/usr/lib/ignition/base.platform.d/qemu"
Jan 13 21:25:19.307288 ignition[787]: kargs: kargs passed
Jan 13 21:25:19.307329 ignition[787]: Ignition finished successfully
Jan 13 21:25:19.312003 systemd[1]: Finished ignition-kargs.service - Ignition (kargs).
Jan 13 21:25:19.325397 systemd[1]: Starting ignition-disks.service - Ignition (disks)...
Jan 13 21:25:19.336196 ignition[797]: Ignition 2.19.0
Jan 13 21:25:19.336206 ignition[797]: Stage: disks
Jan 13 21:25:19.336378 ignition[797]: no configs at "/usr/lib/ignition/base.d"
Jan 13 21:25:19.336389 ignition[797]: no config dir at "/usr/lib/ignition/base.platform.d/qemu"
Jan 13 21:25:19.339860 ignition[797]: disks: disks passed
Jan 13 21:25:19.340485 ignition[797]: Ignition finished successfully
Jan 13 21:25:19.343239 systemd[1]: Finished ignition-disks.service - Ignition (disks).
Jan 13 21:25:19.345599 systemd[1]: Reached target initrd-root-device.target - Initrd Root Device.
Jan 13 21:25:19.347729 systemd[1]: Reached target local-fs-pre.target - Preparation for Local File Systems.
Jan 13 21:25:19.350062 systemd[1]: Reached target local-fs.target - Local File Systems.
Jan 13 21:25:19.352077 systemd[1]: Reached target sysinit.target - System Initialization.
Jan 13 21:25:19.354084 systemd[1]: Reached target basic.target - Basic System.
Jan 13 21:25:19.366400 systemd[1]: Starting systemd-fsck-root.service - File System Check on /dev/disk/by-label/ROOT...
Jan 13 21:25:19.392302 systemd-fsck[808]: ROOT: clean, 14/553520 files, 52654/553472 blocks
Jan 13 21:25:19.565309 systemd[1]: Finished systemd-fsck-root.service - File System Check on /dev/disk/by-label/ROOT.
Jan 13 21:25:19.575341 systemd[1]: Mounting sysroot.mount - /sysroot...
Jan 13 21:25:19.694283 kernel: EXT4-fs (vda9): mounted filesystem 39899d4c-a8b1-4feb-9875-e812cc535888 r/w with ordered data mode. Quota mode: none.
Jan 13 21:25:19.695003 systemd[1]: Mounted sysroot.mount - /sysroot.
Jan 13 21:25:19.696059 systemd[1]: Reached target initrd-root-fs.target - Initrd Root File System.
Jan 13 21:25:19.709326 systemd[1]: Mounting sysroot-oem.mount - /sysroot/oem...
Jan 13 21:25:19.711280 systemd[1]: Mounting sysroot-usr.mount - /sysroot/usr...
Jan 13 21:25:19.711943 systemd[1]: flatcar-metadata-hostname.service - Flatcar Metadata Hostname Agent was skipped because no trigger condition checks were met.
Jan 13 21:25:19.711979 systemd[1]: ignition-remount-sysroot.service - Remount /sysroot read-write for Ignition was skipped because of an unmet condition check (ConditionPathIsReadWrite=!/sysroot).
Jan 13 21:25:19.719739 kernel: BTRFS: device label OEM devid 1 transid 13 /dev/vda6 scanned by mount (816)
Jan 13 21:25:19.711999 systemd[1]: Reached target ignition-diskful.target - Ignition Boot Disk Setup.
Jan 13 21:25:19.723585 kernel: BTRFS info (device vda6): first mount of filesystem 70d8a0b5-70da-4efb-a618-d15543718b1e
Jan 13 21:25:19.723603 kernel: BTRFS info (device vda6): using crc32c (crc32c-intel) checksum algorithm
Jan 13 21:25:19.723617 kernel: BTRFS info (device vda6): using free space tree
Jan 13 21:25:19.725277 kernel: BTRFS info (device vda6): auto enabling async discard
Jan 13 21:25:19.727344 systemd[1]: Mounted sysroot-oem.mount - /sysroot/oem.
Jan 13 21:25:19.742550 systemd[1]: Mounted sysroot-usr.mount - /sysroot/usr.
Jan 13 21:25:19.744417 systemd[1]: Starting initrd-setup-root.service - Root filesystem setup...
Jan 13 21:25:19.780623 initrd-setup-root[841]: cut: /sysroot/etc/passwd: No such file or directory
Jan 13 21:25:19.784561 initrd-setup-root[848]: cut: /sysroot/etc/group: No such file or directory
Jan 13 21:25:19.788434 initrd-setup-root[855]: cut: /sysroot/etc/shadow: No such file or directory
Jan 13 21:25:19.792870 initrd-setup-root[862]: cut: /sysroot/etc/gshadow: No such file or directory
Jan 13 21:25:19.869111 systemd[1]: Finished initrd-setup-root.service - Root filesystem setup.
Jan 13 21:25:19.872425 systemd[1]: Starting ignition-mount.service - Ignition (mount)...
Jan 13 21:25:19.875089 systemd[1]: Starting sysroot-boot.service - /sysroot/boot...
Jan 13 21:25:19.883279 kernel: BTRFS info (device vda6): last unmount of filesystem 70d8a0b5-70da-4efb-a618-d15543718b1e
Jan 13 21:25:19.898366 systemd[1]: Finished sysroot-boot.service - /sysroot/boot.
Jan 13 21:25:20.010593 ignition[933]: INFO : Ignition 2.19.0
Jan 13 21:25:20.010593 ignition[933]: INFO : Stage: mount
Jan 13 21:25:20.012334 ignition[933]: INFO : no configs at "/usr/lib/ignition/base.d"
Jan 13 21:25:20.012334 ignition[933]: INFO : no config dir at "/usr/lib/ignition/base.platform.d/qemu"
Jan 13 21:25:20.014766 ignition[933]: INFO : mount: mount passed
Jan 13 21:25:20.015483 ignition[933]: INFO : Ignition finished successfully
Jan 13 21:25:20.018372 systemd[1]: Finished ignition-mount.service - Ignition (mount).
Jan 13 21:25:20.026436 systemd[1]: Starting ignition-files.service - Ignition (files)...
Jan 13 21:25:20.090137 systemd[1]: sysroot-oem.mount: Deactivated successfully.
Jan 13 21:25:20.103393 systemd[1]: Mounting sysroot-oem.mount - /sysroot/oem...
Jan 13 21:25:20.110697 kernel: BTRFS: device label OEM devid 1 transid 14 /dev/vda6 scanned by mount (943)
Jan 13 21:25:20.110726 kernel: BTRFS info (device vda6): first mount of filesystem 70d8a0b5-70da-4efb-a618-d15543718b1e
Jan 13 21:25:20.110737 kernel: BTRFS info (device vda6): using crc32c (crc32c-intel) checksum algorithm
Jan 13 21:25:20.112269 kernel: BTRFS info (device vda6): using free space tree
Jan 13 21:25:20.115270 kernel: BTRFS info (device vda6): auto enabling async discard
Jan 13 21:25:20.116113 systemd[1]: Mounted sysroot-oem.mount - /sysroot/oem.
Jan 13 21:25:20.137456 ignition[960]: INFO : Ignition 2.19.0
Jan 13 21:25:20.137456 ignition[960]: INFO : Stage: files
Jan 13 21:25:20.139372 ignition[960]: INFO : no configs at "/usr/lib/ignition/base.d"
Jan 13 21:25:20.139372 ignition[960]: INFO : no config dir at "/usr/lib/ignition/base.platform.d/qemu"
Jan 13 21:25:20.139372 ignition[960]: DEBUG : files: compiled without relabeling support, skipping
Jan 13 21:25:20.142645 ignition[960]: INFO : files: ensureUsers: op(1): [started] creating or modifying user "core"
Jan 13 21:25:20.142645 ignition[960]: DEBUG : files: ensureUsers: op(1): executing: "usermod" "--root" "/sysroot" "core"
Jan 13 21:25:20.145524 ignition[960]: INFO : files: ensureUsers: op(1): [finished] creating or modifying user "core"
Jan 13 21:25:20.146912 ignition[960]: INFO : files: ensureUsers: op(2): [started] adding ssh keys to user "core"
Jan 13 21:25:20.148512 unknown[960]: wrote ssh authorized keys file for user: core
Jan 13 21:25:20.149573 ignition[960]: INFO : files: ensureUsers: op(2): [finished] adding ssh keys to user "core"
Jan 13 21:25:20.151493 ignition[960]: INFO : files: createFilesystemsFiles: createFiles: op(3): [started] writing file "/sysroot/home/core/install.sh"
Jan 13 21:25:20.153244 ignition[960]: INFO : files: createFilesystemsFiles: createFiles: op(3): [finished] writing file "/sysroot/home/core/install.sh"
Jan 13 21:25:20.155044 ignition[960]: INFO : files: createFilesystemsFiles: createFiles: op(4): [started] writing file "/sysroot/etc/flatcar/update.conf"
Jan 13 21:25:20.155044 ignition[960]: INFO : files: createFilesystemsFiles: createFiles: op(4): [finished] writing file "/sysroot/etc/flatcar/update.conf"
Jan 13 21:25:20.155044 ignition[960]: INFO : files: createFilesystemsFiles: createFiles: op(5): [started] writing link "/sysroot/etc/extensions/kubernetes.raw" -> "/opt/extensions/kubernetes/kubernetes-v1.30.1-x86-64.raw"
Jan 13 21:25:20.155044 ignition[960]: INFO : files: createFilesystemsFiles: createFiles: op(5): [finished] writing link "/sysroot/etc/extensions/kubernetes.raw" -> "/opt/extensions/kubernetes/kubernetes-v1.30.1-x86-64.raw"
Jan 13 21:25:20.155044 ignition[960]: INFO : files: createFilesystemsFiles: createFiles: op(6): [started] writing file "/sysroot/opt/extensions/kubernetes/kubernetes-v1.30.1-x86-64.raw"
Jan 13 21:25:20.155044 ignition[960]: INFO : files: createFilesystemsFiles: createFiles: op(6): GET https://github.com/flatcar/sysext-bakery/releases/download/latest/kubernetes-v1.30.1-x86-64.raw: attempt #1
Jan 13 21:25:20.498648 ignition[960]: INFO : files: createFilesystemsFiles: createFiles: op(6): GET result: OK
Jan 13 21:25:20.709313 systemd-networkd[784]: eth0: Gained IPv6LL
Jan 13 21:25:20.829211 ignition[960]: INFO : files: createFilesystemsFiles: createFiles: op(6): [finished] writing file "/sysroot/opt/extensions/kubernetes/kubernetes-v1.30.1-x86-64.raw"
Jan 13 21:25:20.829211 ignition[960]: INFO : files: op(7): [started] processing unit "coreos-metadata.service"
Jan 13 21:25:20.833188 ignition[960]: INFO : files: op(7): op(8): [started] writing unit "coreos-metadata.service" at "/sysroot/etc/systemd/system/coreos-metadata.service"
Jan 13 21:25:20.833188 ignition[960]: INFO : files: op(7): op(8): [finished] writing unit "coreos-metadata.service" at "/sysroot/etc/systemd/system/coreos-metadata.service"
Jan 13 21:25:20.833188 ignition[960]: INFO : files: op(7): [finished] processing unit "coreos-metadata.service"
Jan 13 21:25:20.833188 ignition[960]: INFO : files: op(9): [started] setting preset to disabled for "coreos-metadata.service"
Jan 13 21:25:20.854127 ignition[960]: INFO : files: op(9): op(a): [started] removing enablement symlink(s) for "coreos-metadata.service"
Jan 13 21:25:20.861198 ignition[960]: INFO : files: op(9): op(a): [finished] removing enablement symlink(s) for "coreos-metadata.service"
Jan 13 21:25:20.863100 ignition[960]: INFO : files: op(9): [finished] setting preset to disabled for "coreos-metadata.service"
Jan 13 21:25:20.863100 ignition[960]: INFO : files: createResultFile: createFiles: op(b): [started] writing file "/sysroot/etc/.ignition-result.json"
Jan 13 21:25:20.863100 ignition[960]: INFO : files: createResultFile: createFiles: op(b): [finished] writing file "/sysroot/etc/.ignition-result.json"
Jan 13 21:25:20.863100 ignition[960]: INFO : files: files passed
Jan 13 21:25:20.863100 ignition[960]: INFO : Ignition finished successfully
Jan 13 21:25:20.864694 systemd[1]: Finished ignition-files.service - Ignition (files).
Jan 13 21:25:20.882519 systemd[1]: Starting ignition-quench.service - Ignition (record completion)...
Jan 13 21:25:20.884552 systemd[1]: Starting initrd-setup-root-after-ignition.service - Root filesystem completion...
Jan 13 21:25:20.886570 systemd[1]: ignition-quench.service: Deactivated successfully.
Jan 13 21:25:20.886694 systemd[1]: Finished ignition-quench.service - Ignition (record completion).
Jan 13 21:25:20.894624 initrd-setup-root-after-ignition[989]: grep: /sysroot/oem/oem-release: No such file or directory
Jan 13 21:25:20.897508 initrd-setup-root-after-ignition[991]: grep: /sysroot/etc/flatcar/enabled-sysext.conf: No such file or directory
Jan 13 21:25:20.899368 initrd-setup-root-after-ignition[991]: grep:
Jan 13 21:25:20.899368 initrd-setup-root-after-ignition[995]: grep: /sysroot/etc/flatcar/enabled-sysext.conf: No such file or directory
Jan 13 21:25:20.904510 initrd-setup-root-after-ignition[991]: /sysroot/usr/share/flatcar/enabled-sysext.conf: No such file or directory
Jan 13 21:25:20.900138 systemd[1]: Finished initrd-setup-root-after-ignition.service - Root filesystem completion.
Jan 13 21:25:20.901720 systemd[1]: Reached target ignition-complete.target - Ignition Complete.
Jan 13 21:25:20.928416 systemd[1]: Starting initrd-parse-etc.service - Mountpoints Configured in the Real Root...
Jan 13 21:25:20.951941 systemd[1]: initrd-parse-etc.service: Deactivated successfully.
Jan 13 21:25:20.952064 systemd[1]: Finished initrd-parse-etc.service - Mountpoints Configured in the Real Root.
Jan 13 21:25:20.955067 systemd[1]: Reached target initrd-fs.target - Initrd File Systems.
Jan 13 21:25:20.957062 systemd[1]: Reached target initrd.target - Initrd Default Target.
Jan 13 21:25:20.957535 systemd[1]: dracut-mount.service - dracut mount hook was skipped because no trigger condition checks were met.
Jan 13 21:25:20.975412 systemd[1]: Starting dracut-pre-pivot.service - dracut pre-pivot and cleanup hook...
Jan 13 21:25:20.988459 systemd[1]: Finished dracut-pre-pivot.service - dracut pre-pivot and cleanup hook.
Jan 13 21:25:21.001445 systemd[1]: Starting initrd-cleanup.service - Cleaning Up and Shutting Down Daemons...
Jan 13 21:25:21.014952 systemd[1]: Stopped target nss-lookup.target - Host and Network Name Lookups.
Jan 13 21:25:21.015779 systemd[1]: Stopped target remote-cryptsetup.target - Remote Encrypted Volumes.
Jan 13 21:25:21.016146 systemd[1]: Stopped target timers.target - Timer Units.
Jan 13 21:25:21.016921 systemd[1]: dracut-pre-pivot.service: Deactivated successfully.
Jan 13 21:25:21.017019 systemd[1]: Stopped dracut-pre-pivot.service - dracut pre-pivot and cleanup hook.
Jan 13 21:25:21.017980 systemd[1]: Stopped target initrd.target - Initrd Default Target.
Jan 13 21:25:21.018565 systemd[1]: Stopped target basic.target - Basic System.
Jan 13 21:25:21.018937 systemd[1]: Stopped target ignition-complete.target - Ignition Complete.
Jan 13 21:25:21.019518 systemd[1]: Stopped target ignition-diskful.target - Ignition Boot Disk Setup.
Jan 13 21:25:21.019881 systemd[1]: Stopped target initrd-root-device.target - Initrd Root Device.
Jan 13 21:25:21.020277 systemd[1]: Stopped target remote-fs.target - Remote File Systems.
Jan 13 21:25:21.020856 systemd[1]: Stopped target remote-fs-pre.target - Preparation for Remote File Systems.
Jan 13 21:25:21.021261 systemd[1]: Stopped target sysinit.target - System Initialization.
Jan 13 21:25:21.041616 systemd[1]: Stopped target local-fs.target - Local File Systems.
Jan 13 21:25:21.042244 systemd[1]: Stopped target swap.target - Swaps.
Jan 13 21:25:21.042804 systemd[1]: dracut-pre-mount.service: Deactivated successfully.
Jan 13 21:25:21.042941 systemd[1]: Stopped dracut-pre-mount.service - dracut pre-mount hook.
Jan 13 21:25:21.047208 systemd[1]: Stopped target cryptsetup.target - Local Encrypted Volumes.
Jan 13 21:25:21.047819 systemd[1]: Stopped target cryptsetup-pre.target - Local Encrypted Volumes (Pre).
Jan 13 21:25:21.048167 systemd[1]: clevis-luks-askpass.path: Deactivated successfully.
Jan 13 21:25:21.054191 systemd[1]: Stopped clevis-luks-askpass.path - Forward Password Requests to Clevis Directory Watch.
Jan 13 21:25:21.054849 systemd[1]: dracut-initqueue.service: Deactivated successfully.
Jan 13 21:25:21.054951 systemd[1]: Stopped dracut-initqueue.service - dracut initqueue hook.
Jan 13 21:25:21.055409 systemd[1]: ignition-fetch-offline.service: Deactivated successfully.
Jan 13 21:25:21.055514 systemd[1]: Stopped ignition-fetch-offline.service - Ignition (fetch-offline).
Jan 13 21:25:21.055871 systemd[1]: Stopped target paths.target - Path Units.
Jan 13 21:25:21.056175 systemd[1]: systemd-ask-password-console.path: Deactivated successfully.
Jan 13 21:25:21.068315 systemd[1]: Stopped systemd-ask-password-console.path - Dispatch Password Requests to Console Directory Watch.
Jan 13 21:25:21.071399 systemd[1]: Stopped target slices.target - Slice Units.
Jan 13 21:25:21.071726 systemd[1]: Stopped target sockets.target - Socket Units.
Jan 13 21:25:21.073723 systemd[1]: iscsid.socket: Deactivated successfully.
Jan 13 21:25:21.073834 systemd[1]: Closed iscsid.socket - Open-iSCSI iscsid Socket.
Jan 13 21:25:21.075759 systemd[1]: iscsiuio.socket: Deactivated successfully.
Jan 13 21:25:21.075877 systemd[1]: Closed iscsiuio.socket - Open-iSCSI iscsiuio Socket.
Jan 13 21:25:21.079513 systemd[1]: initrd-setup-root-after-ignition.service: Deactivated successfully.
Jan 13 21:25:21.079622 systemd[1]: Stopped initrd-setup-root-after-ignition.service - Root filesystem completion.
Jan 13 21:25:21.080119 systemd[1]: ignition-files.service: Deactivated successfully.
Jan 13 21:25:21.080216 systemd[1]: Stopped ignition-files.service - Ignition (files).
Jan 13 21:25:21.098381 systemd[1]: Stopping ignition-mount.service - Ignition (mount)...
Jan 13 21:25:21.099300 systemd[1]: Stopping sysroot-boot.service - /sysroot/boot...
Jan 13 21:25:21.100596 systemd[1]: systemd-udev-trigger.service: Deactivated successfully.
Jan 13 21:25:21.100726 systemd[1]: Stopped systemd-udev-trigger.service - Coldplug All udev Devices.
Jan 13 21:25:21.101017 systemd[1]: dracut-pre-trigger.service: Deactivated successfully.
Jan 13 21:25:21.101110 systemd[1]: Stopped dracut-pre-trigger.service - dracut pre-trigger hook.
Jan 13 21:25:21.109470 systemd[1]: initrd-cleanup.service: Deactivated successfully.
Jan 13 21:25:21.109610 systemd[1]: Finished initrd-cleanup.service - Cleaning Up and Shutting Down Daemons.
Jan 13 21:25:21.122446 ignition[1016]: INFO : Ignition 2.19.0
Jan 13 21:25:21.122446 ignition[1016]: INFO : Stage: umount
Jan 13 21:25:21.124169 ignition[1016]: INFO : no configs at "/usr/lib/ignition/base.d"
Jan 13 21:25:21.124169 ignition[1016]: INFO : no config dir at "/usr/lib/ignition/base.platform.d/qemu"
Jan 13 21:25:21.126717 ignition[1016]: INFO : umount: umount passed
Jan 13 21:25:21.127663 ignition[1016]: INFO : Ignition finished successfully
Jan 13 21:25:21.126755 systemd[1]: sysroot-boot.mount: Deactivated successfully.
Jan 13 21:25:21.131489 systemd[1]: ignition-mount.service: Deactivated successfully.
Jan 13 21:25:21.131613 systemd[1]: Stopped ignition-mount.service - Ignition (mount).
Jan 13 21:25:21.132403 systemd[1]: Stopped target network.target - Network.
Jan 13 21:25:21.132695 systemd[1]: ignition-disks.service: Deactivated successfully.
Jan 13 21:25:21.132744 systemd[1]: Stopped ignition-disks.service - Ignition (disks).
Jan 13 21:25:21.133057 systemd[1]: ignition-kargs.service: Deactivated successfully.
Jan 13 21:25:21.133100 systemd[1]: Stopped ignition-kargs.service - Ignition (kargs).
Jan 13 21:25:21.133732 systemd[1]: ignition-setup.service: Deactivated successfully.
Jan 13 21:25:21.133775 systemd[1]: Stopped ignition-setup.service - Ignition (setup).
Jan 13 21:25:21.134057 systemd[1]: ignition-setup-pre.service: Deactivated successfully.
Jan 13 21:25:21.134099 systemd[1]: Stopped ignition-setup-pre.service - Ignition env setup.
Jan 13 21:25:21.134681 systemd[1]: Stopping systemd-networkd.service - Network Configuration...
Jan 13 21:25:21.134973 systemd[1]: Stopping systemd-resolved.service - Network Name Resolution...
Jan 13 21:25:21.156299 systemd-networkd[784]: eth0: DHCPv6 lease lost
Jan 13 21:25:21.158236 systemd[1]: systemd-resolved.service: Deactivated successfully.
Jan 13 21:25:21.158411 systemd[1]: Stopped systemd-resolved.service - Network Name Resolution.
Jan 13 21:25:21.161529 systemd[1]: systemd-networkd.service: Deactivated successfully.
Jan 13 21:25:21.161720 systemd[1]: Stopped systemd-networkd.service - Network Configuration.
Jan 13 21:25:21.163741 systemd[1]: systemd-networkd.socket: Deactivated successfully.
Jan 13 21:25:21.163806 systemd[1]: Closed systemd-networkd.socket - Network Service Netlink Socket.
Jan 13 21:25:21.182369 systemd[1]: Stopping network-cleanup.service - Network Cleanup...
Jan 13 21:25:21.182809 systemd[1]: parse-ip-for-networkd.service: Deactivated successfully.
Jan 13 21:25:21.182866 systemd[1]: Stopped parse-ip-for-networkd.service - Write systemd-networkd units from cmdline.
Jan 13 21:25:21.183185 systemd[1]: systemd-sysctl.service: Deactivated successfully.
Jan 13 21:25:21.183230 systemd[1]: Stopped systemd-sysctl.service - Apply Kernel Variables.
Jan 13 21:25:21.183675 systemd[1]: systemd-modules-load.service: Deactivated successfully.
Jan 13 21:25:21.183719 systemd[1]: Stopped systemd-modules-load.service - Load Kernel Modules.
Jan 13 21:25:21.184004 systemd[1]: systemd-tmpfiles-setup.service: Deactivated successfully.
Jan 13 21:25:21.184047 systemd[1]: Stopped systemd-tmpfiles-setup.service - Create System Files and Directories.
Jan 13 21:25:21.192009 systemd[1]: Stopping systemd-udevd.service - Rule-based Manager for Device Events and Files...
Jan 13 21:25:21.204091 systemd[1]: network-cleanup.service: Deactivated successfully.
Jan 13 21:25:21.204227 systemd[1]: Stopped network-cleanup.service - Network Cleanup.
Jan 13 21:25:21.213041 systemd[1]: systemd-udevd.service: Deactivated successfully.
Jan 13 21:25:21.213220 systemd[1]: Stopped systemd-udevd.service - Rule-based Manager for Device Events and Files.
Jan 13 21:25:21.213770 systemd[1]: systemd-udevd-control.socket: Deactivated successfully.
Jan 13 21:25:21.213817 systemd[1]: Closed systemd-udevd-control.socket - udev Control Socket.
Jan 13 21:25:21.217446 systemd[1]: systemd-udevd-kernel.socket: Deactivated successfully.
Jan 13 21:25:21.217483 systemd[1]: Closed systemd-udevd-kernel.socket - udev Kernel Socket.
Jan 13 21:25:21.217806 systemd[1]: dracut-pre-udev.service: Deactivated successfully.
Jan 13 21:25:21.217850 systemd[1]: Stopped dracut-pre-udev.service - dracut pre-udev hook.
Jan 13 21:25:21.218759 systemd[1]: dracut-cmdline.service: Deactivated successfully.
Jan 13 21:25:21.218804 systemd[1]: Stopped dracut-cmdline.service - dracut cmdline hook.
Jan 13 21:25:21.225882 systemd[1]: dracut-cmdline-ask.service: Deactivated successfully.
Jan 13 21:25:21.225929 systemd[1]: Stopped dracut-cmdline-ask.service - dracut ask for additional cmdline parameters.
Jan 13 21:25:21.250375 systemd[1]: Starting initrd-udevadm-cleanup-db.service - Cleanup udev Database...
Jan 13 21:25:21.252830 systemd[1]: systemd-tmpfiles-setup-dev.service: Deactivated successfully.
Jan 13 21:25:21.254079 systemd[1]: Stopped systemd-tmpfiles-setup-dev.service - Create Static Device Nodes in /dev.
Jan 13 21:25:21.256756 systemd[1]: systemd-tmpfiles-setup-dev-early.service: Deactivated successfully.
Jan 13 21:25:21.257970 systemd[1]: Stopped systemd-tmpfiles-setup-dev-early.service - Create Static Device Nodes in /dev gracefully.
Jan 13 21:25:21.260950 systemd[1]: kmod-static-nodes.service: Deactivated successfully.
Jan 13 21:25:21.261011 systemd[1]: Stopped kmod-static-nodes.service - Create List of Static Device Nodes.
Jan 13 21:25:21.264772 systemd[1]: systemd-vconsole-setup.service: Deactivated successfully.
Jan 13 21:25:21.265914 systemd[1]: Stopped systemd-vconsole-setup.service - Virtual Console Setup.
Jan 13 21:25:21.268800 systemd[1]: initrd-udevadm-cleanup-db.service: Deactivated successfully.
Jan 13 21:25:21.270097 systemd[1]: Finished initrd-udevadm-cleanup-db.service - Cleanup udev Database.
Jan 13 21:25:21.521888 systemd[1]: sysroot-boot.service: Deactivated successfully.
Jan 13 21:25:21.522078 systemd[1]: Stopped sysroot-boot.service - /sysroot/boot.
Jan 13 21:25:21.522965 systemd[1]: Reached target initrd-switch-root.target - Switch Root.
Jan 13 21:25:21.525116 systemd[1]: initrd-setup-root.service: Deactivated successfully.
Jan 13 21:25:21.525182 systemd[1]: Stopped initrd-setup-root.service - Root filesystem setup.
Jan 13 21:25:21.539419 systemd[1]: Starting initrd-switch-root.service - Switch Root...
Jan 13 21:25:21.548799 systemd[1]: Switching root.
Jan 13 21:25:21.583305 systemd-journald[193]: Journal stopped
Jan 13 21:25:22.844624 systemd-journald[193]: Received SIGTERM from PID 1 (systemd).
Jan 13 21:25:22.844695 kernel: SELinux: policy capability network_peer_controls=1
Jan 13 21:25:22.844713 kernel: SELinux: policy capability open_perms=1
Jan 13 21:25:22.844724 kernel: SELinux: policy capability extended_socket_class=1
Jan 13 21:25:22.844735 kernel: SELinux: policy capability always_check_network=0
Jan 13 21:25:22.844750 kernel: SELinux: policy capability cgroup_seclabel=1
Jan 13 21:25:22.844761 kernel: SELinux: policy capability nnp_nosuid_transition=1
Jan 13 21:25:22.844774 kernel: SELinux: policy capability genfs_seclabel_symlinks=0
Jan 13 21:25:22.844786 kernel: SELinux: policy capability ioctl_skip_cloexec=0
Jan 13 21:25:22.844797 kernel: audit: type=1403 audit(1736803522.125:2): auid=4294967295 ses=4294967295 lsm=selinux res=1
Jan 13 21:25:22.844809 systemd[1]: Successfully loaded SELinux policy in 39.846ms.
Jan 13 21:25:22.844829 systemd[1]: Relabeled /dev, /dev/shm, /run, /sys/fs/cgroup in 12.447ms.
Jan 13 21:25:22.844842 systemd[1]: systemd 255 running in system mode (+PAM +AUDIT +SELINUX -APPARMOR +IMA +SMACK +SECCOMP +GCRYPT -GNUTLS +OPENSSL -ACL +BLKID +CURL +ELFUTILS -FIDO2 +IDN2 -IDN +IPTC +KMOD +LIBCRYPTSETUP +LIBFDISK +PCRE2 -PWQUALITY -P11KIT -QRENCODE +TPM2 +BZIP2 +LZ4 +XZ +ZLIB +ZSTD -BPF_FRAMEWORK -XKBCOMMON +UTMP -SYSVINIT default-hierarchy=unified)
Jan 13 21:25:22.844857 systemd[1]: Detected virtualization kvm.
Jan 13 21:25:22.844869 systemd[1]: Detected architecture x86-64.
Jan 13 21:25:22.844880 systemd[1]: Detected first boot.
Jan 13 21:25:22.844892 systemd[1]: Initializing machine ID from VM UUID.
Jan 13 21:25:22.844906 zram_generator::config[1062]: No configuration found.
Jan 13 21:25:22.844922 systemd[1]: Populated /etc with preset unit settings.
Jan 13 21:25:22.844934 systemd[1]: initrd-switch-root.service: Deactivated successfully.
Jan 13 21:25:22.844945 systemd[1]: Stopped initrd-switch-root.service - Switch Root.
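The SELinux policy-load and relabel entries above report durations inline ("in 39.846ms", "in 12.447ms"). When auditing a boot like this one, such timings can be scraped from a journal dump with a short parser; this is only a sketch, and the regex below is an assumption derived from the line format shown in this log rather than any journald API.

```python
import re

# Matches inline duration reports like "in 39.846ms" -- a pattern assumed
# from the log lines above, not a documented journald format guarantee.
DURATION_RE = re.compile(r"in ([0-9.]+)ms")

def extract_durations(lines):
    """Return every millisecond duration mentioned in the given log lines."""
    found = []
    for line in lines:
        for m in DURATION_RE.finditer(line):
            found.append(float(m.group(1)))
    return found

# Two lines copied verbatim (minus timestamps) from this boot log.
sample = [
    "systemd[1]: Successfully loaded SELinux policy in 39.846ms.",
    "systemd[1]: Relabeled /dev, /dev/shm, /run, /sys/fs/cgroup in 12.447ms.",
]
print(extract_durations(sample))  # → [39.846, 12.447]
```

Feeding it the whole journal would surface the slowest early-boot steps at a glance.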
Jan 13 21:25:22.844957 systemd[1]: systemd-journald.service: Scheduled restart job, restart counter is at 1.
Jan 13 21:25:22.844970 systemd[1]: Created slice system-addon\x2dconfig.slice - Slice /system/addon-config.
Jan 13 21:25:22.844982 systemd[1]: Created slice system-addon\x2drun.slice - Slice /system/addon-run.
Jan 13 21:25:22.844994 systemd[1]: Created slice system-getty.slice - Slice /system/getty.
Jan 13 21:25:22.845006 systemd[1]: Created slice system-modprobe.slice - Slice /system/modprobe.
Jan 13 21:25:22.845018 systemd[1]: Created slice system-serial\x2dgetty.slice - Slice /system/serial-getty.
Jan 13 21:25:22.845032 systemd[1]: Created slice system-system\x2dcloudinit.slice - Slice /system/system-cloudinit.
Jan 13 21:25:22.845046 systemd[1]: Created slice system-systemd\x2dfsck.slice - Slice /system/systemd-fsck.
Jan 13 21:25:22.845057 systemd[1]: Created slice user.slice - User and Session Slice.
Jan 13 21:25:22.845069 systemd[1]: Started clevis-luks-askpass.path - Forward Password Requests to Clevis Directory Watch.
Jan 13 21:25:22.845080 systemd[1]: Started systemd-ask-password-console.path - Dispatch Password Requests to Console Directory Watch.
Jan 13 21:25:22.845092 systemd[1]: Started systemd-ask-password-wall.path - Forward Password Requests to Wall Directory Watch.
Jan 13 21:25:22.845109 systemd[1]: Set up automount boot.automount - Boot partition Automount Point.
Jan 13 21:25:22.845121 systemd[1]: Set up automount proc-sys-fs-binfmt_misc.automount - Arbitrary Executable File Formats File System Automount Point.
Jan 13 21:25:22.845135 systemd[1]: Expecting device dev-disk-by\x2dlabel-OEM.device - /dev/disk/by-label/OEM...
Jan 13 21:25:22.845147 systemd[1]: Expecting device dev-ttyS0.device - /dev/ttyS0...
Jan 13 21:25:22.845158 systemd[1]: Reached target cryptsetup-pre.target - Local Encrypted Volumes (Pre).
Jan 13 21:25:22.845170 systemd[1]: Stopped target initrd-switch-root.target - Switch Root.
Jan 13 21:25:22.845182 systemd[1]: Stopped target initrd-fs.target - Initrd File Systems.
Jan 13 21:25:22.845194 systemd[1]: Stopped target initrd-root-fs.target - Initrd Root File System.
Jan 13 21:25:22.845206 systemd[1]: Reached target integritysetup.target - Local Integrity Protected Volumes.
Jan 13 21:25:22.845217 systemd[1]: Reached target remote-cryptsetup.target - Remote Encrypted Volumes.
Jan 13 21:25:22.845232 systemd[1]: Reached target remote-fs.target - Remote File Systems.
Jan 13 21:25:22.845244 systemd[1]: Reached target slices.target - Slice Units.
Jan 13 21:25:22.845315 systemd[1]: Reached target swap.target - Swaps.
Jan 13 21:25:22.845327 systemd[1]: Reached target veritysetup.target - Local Verity Protected Volumes.
Jan 13 21:25:22.845339 systemd[1]: Listening on systemd-coredump.socket - Process Core Dump Socket.
Jan 13 21:25:22.845351 systemd[1]: Listening on systemd-networkd.socket - Network Service Netlink Socket.
Jan 13 21:25:22.845364 systemd[1]: Listening on systemd-udevd-control.socket - udev Control Socket.
Jan 13 21:25:22.845375 systemd[1]: Listening on systemd-udevd-kernel.socket - udev Kernel Socket.
Jan 13 21:25:22.845388 systemd[1]: Listening on systemd-userdbd.socket - User Database Manager Socket.
Jan 13 21:25:22.845400 systemd[1]: Mounting dev-hugepages.mount - Huge Pages File System...
Jan 13 21:25:22.845415 systemd[1]: Mounting dev-mqueue.mount - POSIX Message Queue File System...
Jan 13 21:25:22.845428 systemd[1]: Mounting media.mount - External Media Directory...
Jan 13 21:25:22.845440 systemd[1]: proc-xen.mount - /proc/xen was skipped because of an unmet condition check (ConditionVirtualization=xen).
Jan 13 21:25:22.845452 systemd[1]: Mounting sys-kernel-debug.mount - Kernel Debug File System...
Jan 13 21:25:22.845464 systemd[1]: Mounting sys-kernel-tracing.mount - Kernel Trace File System...
Jan 13 21:25:22.845476 systemd[1]: Mounting tmp.mount - Temporary Directory /tmp...
Jan 13 21:25:22.845488 systemd[1]: var-lib-machines.mount - Virtual Machine and Container Storage (Compatibility) was skipped because of an unmet condition check (ConditionPathExists=/var/lib/machines.raw).
Jan 13 21:25:22.845500 systemd[1]: Reached target machines.target - Containers.
Jan 13 21:25:22.845514 systemd[1]: Starting flatcar-tmpfiles.service - Create missing system files...
Jan 13 21:25:22.845527 systemd[1]: ignition-delete-config.service - Ignition (delete config) was skipped because no trigger condition checks were met.
Jan 13 21:25:22.845539 systemd[1]: Starting kmod-static-nodes.service - Create List of Static Device Nodes...
Jan 13 21:25:22.845551 systemd[1]: Starting modprobe@configfs.service - Load Kernel Module configfs...
Jan 13 21:25:22.845563 systemd[1]: Starting modprobe@dm_mod.service - Load Kernel Module dm_mod...
Jan 13 21:25:22.845576 systemd[1]: Starting modprobe@drm.service - Load Kernel Module drm...
Jan 13 21:25:22.845600 systemd[1]: Starting modprobe@efi_pstore.service - Load Kernel Module efi_pstore...
Jan 13 21:25:22.845615 systemd[1]: Starting modprobe@fuse.service - Load Kernel Module fuse...
Jan 13 21:25:22.845634 systemd[1]: Starting modprobe@loop.service - Load Kernel Module loop...
Jan 13 21:25:22.845650 systemd[1]: setup-nsswitch.service - Create /etc/nsswitch.conf was skipped because of an unmet condition check (ConditionPathExists=!/etc/nsswitch.conf).
Jan 13 21:25:22.845666 systemd[1]: systemd-fsck-root.service: Deactivated successfully.
Jan 13 21:25:22.845679 systemd[1]: Stopped systemd-fsck-root.service - File System Check on Root Device.
Jan 13 21:25:22.845690 systemd[1]: systemd-fsck-usr.service: Deactivated successfully.
Jan 13 21:25:22.845707 systemd[1]: Stopped systemd-fsck-usr.service.
Jan 13 21:25:22.845720 kernel: fuse: init (API version 7.39)
Jan 13 21:25:22.845732 systemd[1]: Starting systemd-journald.service - Journal Service...
Jan 13 21:25:22.845744 systemd[1]: Starting systemd-modules-load.service - Load Kernel Modules...
Jan 13 21:25:22.845767 kernel: loop: module loaded
Jan 13 21:25:22.845782 kernel: ACPI: bus type drm_connector registered
Jan 13 21:25:22.845798 systemd[1]: Starting systemd-network-generator.service - Generate network units from Kernel command line...
Jan 13 21:25:22.845813 systemd[1]: Starting systemd-remount-fs.service - Remount Root and Kernel File Systems...
Jan 13 21:25:22.845825 systemd[1]: Starting systemd-udev-trigger.service - Coldplug All udev Devices...
Jan 13 21:25:22.845837 systemd[1]: verity-setup.service: Deactivated successfully.
Jan 13 21:25:22.845848 systemd[1]: Stopped verity-setup.service.
Jan 13 21:25:22.845861 systemd[1]: xenserver-pv-version.service - Set fake PV driver version for XenServer was skipped because of an unmet condition check (ConditionVirtualization=xen).
Jan 13 21:25:22.845889 systemd-journald[1136]: Collecting audit messages is disabled.
Jan 13 21:25:22.845914 systemd[1]: Mounted dev-hugepages.mount - Huge Pages File System.
Jan 13 21:25:22.845926 systemd-journald[1136]: Journal started
Jan 13 21:25:22.845947 systemd-journald[1136]: Runtime Journal (/run/log/journal/f89f3922cb1641e9b645b3e89c0d850a) is 6.0M, max 48.4M, 42.3M free.
Jan 13 21:25:22.615752 systemd[1]: Queued start job for default target multi-user.target.
Jan 13 21:25:22.634740 systemd[1]: Unnecessary job was removed for dev-vda6.device - /dev/vda6.
Jan 13 21:25:22.635187 systemd[1]: systemd-journald.service: Deactivated successfully.
Jan 13 21:25:22.848300 systemd[1]: Started systemd-journald.service - Journal Service.
Jan 13 21:25:22.849676 systemd[1]: Mounted dev-mqueue.mount - POSIX Message Queue File System.
Jan 13 21:25:22.850884 systemd[1]: Mounted media.mount - External Media Directory.
Jan 13 21:25:22.851973 systemd[1]: Mounted sys-kernel-debug.mount - Kernel Debug File System.
Jan 13 21:25:22.853153 systemd[1]: Mounted sys-kernel-tracing.mount - Kernel Trace File System.
Jan 13 21:25:22.854361 systemd[1]: Mounted tmp.mount - Temporary Directory /tmp.
Jan 13 21:25:22.855578 systemd[1]: Finished flatcar-tmpfiles.service - Create missing system files.
Jan 13 21:25:22.857015 systemd[1]: Finished kmod-static-nodes.service - Create List of Static Device Nodes.
Jan 13 21:25:22.858566 systemd[1]: modprobe@configfs.service: Deactivated successfully.
Jan 13 21:25:22.858766 systemd[1]: Finished modprobe@configfs.service - Load Kernel Module configfs.
Jan 13 21:25:22.860281 systemd[1]: modprobe@dm_mod.service: Deactivated successfully.
Jan 13 21:25:22.860474 systemd[1]: Finished modprobe@dm_mod.service - Load Kernel Module dm_mod.
Jan 13 21:25:22.861959 systemd[1]: modprobe@drm.service: Deactivated successfully.
Jan 13 21:25:22.862128 systemd[1]: Finished modprobe@drm.service - Load Kernel Module drm.
Jan 13 21:25:22.863500 systemd[1]: modprobe@efi_pstore.service: Deactivated successfully.
Jan 13 21:25:22.863677 systemd[1]: Finished modprobe@efi_pstore.service - Load Kernel Module efi_pstore.
Jan 13 21:25:22.865173 systemd[1]: modprobe@fuse.service: Deactivated successfully.
Jan 13 21:25:22.865446 systemd[1]: Finished modprobe@fuse.service - Load Kernel Module fuse.
Jan 13 21:25:22.866845 systemd[1]: modprobe@loop.service: Deactivated successfully.
Jan 13 21:25:22.867015 systemd[1]: Finished modprobe@loop.service - Load Kernel Module loop.
Jan 13 21:25:22.868430 systemd[1]: Finished systemd-modules-load.service - Load Kernel Modules.
Jan 13 21:25:22.869824 systemd[1]: Finished systemd-network-generator.service - Generate network units from Kernel command line.
Jan 13 21:25:22.871513 systemd[1]: Finished systemd-remount-fs.service - Remount Root and Kernel File Systems.
Jan 13 21:25:22.884427 systemd[1]: Reached target network-pre.target - Preparation for Network.
Jan 13 21:25:22.895359 systemd[1]: Mounting sys-fs-fuse-connections.mount - FUSE Control File System...
Jan 13 21:25:22.897654 systemd[1]: Mounting sys-kernel-config.mount - Kernel Configuration File System...
Jan 13 21:25:22.898971 systemd[1]: remount-root.service - Remount Root File System was skipped because of an unmet condition check (ConditionPathIsReadWrite=!/).
Jan 13 21:25:22.899011 systemd[1]: Reached target local-fs.target - Local File Systems.
Jan 13 21:25:22.901402 systemd[1]: Listening on systemd-sysext.socket - System Extension Image Management (Varlink).
Jan 13 21:25:22.904091 systemd[1]: Starting dracut-shutdown.service - Restore /run/initramfs on shutdown...
Jan 13 21:25:22.906505 systemd[1]: Starting ldconfig.service - Rebuild Dynamic Linker Cache...
Jan 13 21:25:22.907729 systemd[1]: systemd-binfmt.service - Set Up Additional Binary Formats was skipped because no trigger condition checks were met.
Jan 13 21:25:22.909714 systemd[1]: Starting systemd-hwdb-update.service - Rebuild Hardware Database...
Jan 13 21:25:22.912569 systemd[1]: Starting systemd-journal-flush.service - Flush Journal to Persistent Storage...
Jan 13 21:25:22.913840 systemd[1]: systemd-pstore.service - Platform Persistent Storage Archival was skipped because of an unmet condition check (ConditionDirectoryNotEmpty=/sys/fs/pstore).
Jan 13 21:25:22.917495 systemd[1]: Starting systemd-random-seed.service - Load/Save OS Random Seed...
Jan 13 21:25:22.918882 systemd[1]: systemd-repart.service - Repartition Root Disk was skipped because no trigger condition checks were met.
Jan 13 21:25:22.921021 systemd[1]: Starting systemd-sysctl.service - Apply Kernel Variables...
Jan 13 21:25:22.928137 systemd[1]: Starting systemd-sysext.service - Merge System Extension Images into /usr/ and /opt/...
Jan 13 21:25:22.942389 systemd-journald[1136]: Time spent on flushing to /var/log/journal/f89f3922cb1641e9b645b3e89c0d850a is 13.076ms for 937 entries.
Jan 13 21:25:22.942389 systemd-journald[1136]: System Journal (/var/log/journal/f89f3922cb1641e9b645b3e89c0d850a) is 8.0M, max 195.6M, 187.6M free.
Jan 13 21:25:23.356602 systemd-journald[1136]: Received client request to flush runtime journal.
Jan 13 21:25:23.356660 kernel: loop0: detected capacity change from 0 to 210664
Jan 13 21:25:23.356694 kernel: squashfs: version 4.0 (2009/01/31) Phillip Lougher
Jan 13 21:25:23.356716 kernel: loop1: detected capacity change from 0 to 142488
Jan 13 21:25:23.356732 kernel: loop2: detected capacity change from 0 to 140768
Jan 13 21:25:23.356748 kernel: loop3: detected capacity change from 0 to 210664
Jan 13 21:25:23.356763 kernel: loop4: detected capacity change from 0 to 142488
Jan 13 21:25:23.356782 kernel: loop5: detected capacity change from 0 to 140768
Jan 13 21:25:22.949379 systemd[1]: Starting systemd-tmpfiles-setup-dev-early.service - Create Static Device Nodes in /dev gracefully...
Jan 13 21:25:22.952325 systemd[1]: Mounted sys-fs-fuse-connections.mount - FUSE Control File System.
Jan 13 21:25:22.953646 systemd[1]: Mounted sys-kernel-config.mount - Kernel Configuration File System.
Jan 13 21:25:22.955102 systemd[1]: Finished dracut-shutdown.service - Restore /run/initramfs on shutdown.
Jan 13 21:25:22.975440 systemd[1]: Finished systemd-udev-trigger.service - Coldplug All udev Devices.
Jan 13 21:25:22.986858 systemd[1]: Starting systemd-udev-settle.service - Wait for udev To Complete Device Initialization...
Jan 13 21:25:23.007107 systemd-tmpfiles[1178]: ACLs are not supported, ignoring.
Jan 13 21:25:23.007125 systemd-tmpfiles[1178]: ACLs are not supported, ignoring.
Jan 13 21:25:23.008094 systemd[1]: Finished systemd-sysctl.service - Apply Kernel Variables.
Jan 13 21:25:23.009624 udevadm[1184]: systemd-udev-settle.service is deprecated. Please fix lvm2-activation.service, lvm2-activation-early.service not to pull it in.
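The flush report above ("13.076ms for 937 entries") lets the per-entry cost of the journal flush be estimated directly. A minimal back-of-the-envelope calculation, using only the two figures quoted in this log:

```python
# Figures copied verbatim from the systemd-journald flush report above.
flush_ms = 13.076   # total time spent flushing to /var/log/journal/...
entries = 937       # number of journal entries flushed

# Average cost per entry, in microseconds.
per_entry_us = flush_ms * 1000 / entries
print(f"{per_entry_us:.1f} us per entry")  # → 14.0 us per entry
```

Roughly 14 microseconds per entry is well within the normal range for a flush to a virtio disk, so nothing in this report suggests a storage bottleneck on this boot.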
Jan 13 21:25:23.014475 systemd[1]: Finished systemd-tmpfiles-setup-dev-early.service - Create Static Device Nodes in /dev gracefully.
Jan 13 21:25:23.026404 systemd[1]: Starting systemd-sysusers.service - Create System Users...
Jan 13 21:25:23.056783 systemd[1]: Finished systemd-sysusers.service - Create System Users.
Jan 13 21:25:23.071438 systemd[1]: Starting systemd-tmpfiles-setup-dev.service - Create Static Device Nodes in /dev...
Jan 13 21:25:23.078446 systemd[1]: Finished systemd-random-seed.service - Load/Save OS Random Seed.
Jan 13 21:25:23.080518 systemd[1]: Reached target first-boot-complete.target - First Boot Complete.
Jan 13 21:25:23.083029 systemd[1]: Starting systemd-machine-id-commit.service - Commit a transient machine-id on disk...
Jan 13 21:25:23.090761 systemd-tmpfiles[1191]: ACLs are not supported, ignoring.
Jan 13 21:25:23.090775 systemd-tmpfiles[1191]: ACLs are not supported, ignoring.
Jan 13 21:25:23.096803 systemd[1]: Finished systemd-tmpfiles-setup-dev.service - Create Static Device Nodes in /dev.
Jan 13 21:25:23.359509 systemd[1]: Finished systemd-journal-flush.service - Flush Journal to Persistent Storage.
Jan 13 21:25:23.360464 (sd-merge)[1198]: Using extensions 'containerd-flatcar', 'docker-flatcar', 'kubernetes'.
Jan 13 21:25:23.361073 (sd-merge)[1198]: Merged extensions into '/usr'.
Jan 13 21:25:23.421299 systemd[1]: Reloading requested from client PID 1176 ('systemd-sysext') (unit systemd-sysext.service)...
Jan 13 21:25:23.421315 systemd[1]: Reloading...
Jan 13 21:25:23.483278 zram_generator::config[1232]: No configuration found.
Jan 13 21:25:23.562789 ldconfig[1171]: /sbin/ldconfig: /lib/ld.so.conf is not an ELF file - it has the wrong magic bytes at the start.
Jan 13 21:25:23.593475 systemd[1]: /usr/lib/systemd/system/docker.socket:6: ListenStream= references a path below legacy directory /var/run/, updating /var/run/docker.sock → /run/docker.sock; please update the unit file accordingly.
Jan 13 21:25:23.643183 systemd[1]: Reloading finished in 221 ms.
Jan 13 21:25:23.680757 systemd[1]: etc-machine\x2did.mount: Deactivated successfully.
Jan 13 21:25:23.682188 systemd[1]: Finished ldconfig.service - Rebuild Dynamic Linker Cache.
Jan 13 21:25:23.686529 systemd[1]: Finished systemd-sysext.service - Merge System Extension Images into /usr/ and /opt/.
Jan 13 21:25:23.688216 systemd[1]: Finished systemd-machine-id-commit.service - Commit a transient machine-id on disk.
Jan 13 21:25:23.702406 systemd[1]: Starting ensure-sysext.service...
Jan 13 21:25:23.704577 systemd[1]: Starting systemd-tmpfiles-setup.service - Create System Files and Directories...
Jan 13 21:25:23.713229 systemd[1]: Reloading requested from client PID 1267 ('systemctl') (unit ensure-sysext.service)...
Jan 13 21:25:23.713242 systemd[1]: Reloading...
Jan 13 21:25:23.728734 systemd-tmpfiles[1268]: /usr/lib/tmpfiles.d/provision.conf:20: Duplicate line for path "/root", ignoring.
Jan 13 21:25:23.729199 systemd-tmpfiles[1268]: /usr/lib/tmpfiles.d/systemd-flatcar.conf:6: Duplicate line for path "/var/log/journal", ignoring.
Jan 13 21:25:23.730468 systemd-tmpfiles[1268]: /usr/lib/tmpfiles.d/systemd.conf:29: Duplicate line for path "/var/lib/systemd", ignoring.
Jan 13 21:25:23.730890 systemd-tmpfiles[1268]: ACLs are not supported, ignoring.
Jan 13 21:25:23.730986 systemd-tmpfiles[1268]: ACLs are not supported, ignoring.
Jan 13 21:25:23.736079 systemd-tmpfiles[1268]: Detected autofs mount point /boot during canonicalization of boot.
Jan 13 21:25:23.736387 systemd-tmpfiles[1268]: Skipping /boot
Jan 13 21:25:23.754851 systemd-tmpfiles[1268]: Detected autofs mount point /boot during canonicalization of boot.
Jan 13 21:25:23.754996 systemd-tmpfiles[1268]: Skipping /boot
Jan 13 21:25:23.763683 zram_generator::config[1295]: No configuration found.
Jan 13 21:25:23.900796 systemd[1]: /usr/lib/systemd/system/docker.socket:6: ListenStream= references a path below legacy directory /var/run/, updating /var/run/docker.sock → /run/docker.sock; please update the unit file accordingly.
Jan 13 21:25:23.954159 systemd[1]: Reloading finished in 240 ms.
Jan 13 21:25:23.987696 systemd[1]: Finished systemd-tmpfiles-setup.service - Create System Files and Directories.
Jan 13 21:25:23.994294 systemd[1]: Starting audit-rules.service - Load Security Auditing Rules...
Jan 13 21:25:23.997222 systemd[1]: Starting clean-ca-certificates.service - Clean up broken links in /etc/ssl/certs...
Jan 13 21:25:23.999980 systemd[1]: Starting systemd-journal-catalog-update.service - Rebuild Journal Catalog...
Jan 13 21:25:24.004319 systemd[1]: Starting systemd-resolved.service - Network Name Resolution...
Jan 13 21:25:24.008072 systemd[1]: Starting systemd-update-utmp.service - Record System Boot/Shutdown in UTMP...
Jan 13 21:25:24.016108 systemd[1]: proc-xen.mount - /proc/xen was skipped because of an unmet condition check (ConditionVirtualization=xen).
Jan 13 21:25:24.016353 systemd[1]: ignition-delete-config.service - Ignition (delete config) was skipped because no trigger condition checks were met.
Jan 13 21:25:24.017651 systemd[1]: Starting modprobe@dm_mod.service - Load Kernel Module dm_mod...
Jan 13 21:25:24.022364 systemd[1]: Starting modprobe@efi_pstore.service - Load Kernel Module efi_pstore...
Jan 13 21:25:24.028172 systemd[1]: Starting modprobe@loop.service - Load Kernel Module loop...
Jan 13 21:25:24.029534 systemd[1]: systemd-binfmt.service - Set Up Additional Binary Formats was skipped because no trigger condition checks were met.
Jan 13 21:25:24.032307 systemd[1]: Starting systemd-userdbd.service - User Database Manager...
Jan 13 21:25:24.034775 systemd[1]: xenserver-pv-version.service - Set fake PV driver version for XenServer was skipped because of an unmet condition check (ConditionVirtualization=xen).
Jan 13 21:25:24.036164 systemd[1]: modprobe@dm_mod.service: Deactivated successfully.
Jan 13 21:25:24.036423 systemd[1]: Finished modprobe@dm_mod.service - Load Kernel Module dm_mod.
Jan 13 21:25:24.038496 systemd[1]: modprobe@efi_pstore.service: Deactivated successfully.
Jan 13 21:25:24.038718 systemd[1]: Finished modprobe@efi_pstore.service - Load Kernel Module efi_pstore.
Jan 13 21:25:24.041572 systemd[1]: modprobe@loop.service: Deactivated successfully.
Jan 13 21:25:24.041822 systemd[1]: Finished modprobe@loop.service - Load Kernel Module loop.
Jan 13 21:25:24.055101 systemd[1]: Finished systemd-journal-catalog-update.service - Rebuild Journal Catalog.
Jan 13 21:25:24.058155 systemd[1]: Finished systemd-update-utmp.service - Record System Boot/Shutdown in UTMP.
Jan 13 21:25:24.066511 systemd[1]: Finished ensure-sysext.service.
Jan 13 21:25:24.068733 systemd[1]: proc-xen.mount - /proc/xen was skipped because of an unmet condition check (ConditionVirtualization=xen).
Jan 13 21:25:24.068987 systemd[1]: ignition-delete-config.service - Ignition (delete config) was skipped because no trigger condition checks were met.
Jan 13 21:25:24.077577 systemd[1]: Starting modprobe@dm_mod.service - Load Kernel Module dm_mod...
Jan 13 21:25:24.083412 systemd[1]: Starting modprobe@drm.service - Load Kernel Module drm...
Jan 13 21:25:24.086751 augenrules[1367]: No rules
Jan 13 21:25:24.089161 systemd[1]: Starting modprobe@efi_pstore.service - Load Kernel Module efi_pstore...
Jan 13 21:25:24.092386 systemd[1]: Starting modprobe@loop.service - Load Kernel Module loop...
Jan 13 21:25:24.094026 systemd[1]: systemd-binfmt.service - Set Up Additional Binary Formats was skipped because no trigger condition checks were met.
Jan 13 21:25:24.097196 systemd[1]: Starting systemd-timesyncd.service - Network Time Synchronization...
Jan 13 21:25:24.098617 systemd[1]: xenserver-pv-version.service - Set fake PV driver version for XenServer was skipped because of an unmet condition check (ConditionVirtualization=xen).
Jan 13 21:25:24.099087 systemd[1]: Started systemd-userdbd.service - User Database Manager.
Jan 13 21:25:24.100918 systemd[1]: Finished systemd-hwdb-update.service - Rebuild Hardware Database.
Jan 13 21:25:24.103895 systemd[1]: Finished audit-rules.service - Load Security Auditing Rules.
Jan 13 21:25:24.105691 systemd[1]: modprobe@dm_mod.service: Deactivated successfully.
Jan 13 21:25:24.106120 systemd[1]: Finished modprobe@dm_mod.service - Load Kernel Module dm_mod.
Jan 13 21:25:24.107785 systemd[1]: modprobe@drm.service: Deactivated successfully.
Jan 13 21:25:24.107996 systemd[1]: Finished modprobe@drm.service - Load Kernel Module drm.
Jan 13 21:25:24.110037 systemd[1]: modprobe@efi_pstore.service: Deactivated successfully.
Jan 13 21:25:24.110219 systemd[1]: Finished modprobe@efi_pstore.service - Load Kernel Module efi_pstore.
Jan 13 21:25:24.111950 systemd[1]: modprobe@loop.service: Deactivated successfully.
Jan 13 21:25:24.112165 systemd[1]: Finished modprobe@loop.service - Load Kernel Module loop.
Jan 13 21:25:24.120194 systemd[1]: systemd-pstore.service - Platform Persistent Storage Archival was skipped because of an unmet condition check (ConditionDirectoryNotEmpty=/sys/fs/pstore).
Jan 13 21:25:24.120344 systemd[1]: systemd-repart.service - Repartition Root Disk was skipped because no trigger condition checks were met.
Jan 13 21:25:24.127596 systemd[1]: Starting systemd-udevd.service - Rule-based Manager for Device Events and Files...
Jan 13 21:25:24.131735 systemd[1]: Starting systemd-update-done.service - Update is Completed...
Jan 13 21:25:24.134482 systemd[1]: Finished clean-ca-certificates.service - Clean up broken links in /etc/ssl/certs.
Jan 13 21:25:24.138460 systemd[1]: update-ca-certificates.service - Update CA bundle at /etc/ssl/certs/ca-certificates.crt was skipped because of an unmet condition check (ConditionPathIsSymbolicLink=!/etc/ssl/certs/ca-certificates.crt).
Jan 13 21:25:24.151382 systemd[1]: Finished systemd-update-done.service - Update is Completed.
Jan 13 21:25:24.168181 systemd-resolved[1337]: Positive Trust Anchors:
Jan 13 21:25:24.168198 systemd-resolved[1337]: . IN DS 20326 8 2 e06d44b80b8f1d39a95c0b0d7c65d08458e880409bbc683457104237c7f8ec8d
Jan 13 21:25:24.168229 systemd-resolved[1337]: Negative trust anchors: home.arpa 10.in-addr.arpa 16.172.in-addr.arpa 17.172.in-addr.arpa 18.172.in-addr.arpa 19.172.in-addr.arpa 20.172.in-addr.arpa 21.172.in-addr.arpa 22.172.in-addr.arpa 23.172.in-addr.arpa 24.172.in-addr.arpa 25.172.in-addr.arpa 26.172.in-addr.arpa 27.172.in-addr.arpa 28.172.in-addr.arpa 29.172.in-addr.arpa 30.172.in-addr.arpa 31.172.in-addr.arpa 170.0.0.192.in-addr.arpa 171.0.0.192.in-addr.arpa 168.192.in-addr.arpa d.f.ip6.arpa ipv4only.arpa resolver.arpa corp home internal intranet lan local private test
Jan 13 21:25:24.169977 systemd-udevd[1384]: Using default interface naming scheme 'v255'.
Jan 13 21:25:24.172283 systemd-resolved[1337]: Defaulting to hostname 'linux'.
Jan 13 21:25:24.174080 systemd[1]: Started systemd-resolved.service - Network Name Resolution.
Jan 13 21:25:24.175427 systemd[1]: Reached target nss-lookup.target - Host and Network Name Lookups.
Jan 13 21:25:24.191904 systemd[1]: Started systemd-timesyncd.service - Network Time Synchronization.
Jan 13 21:25:24.193526 systemd[1]: Reached target time-set.target - System Time Set.
Jan 13 21:25:24.195191 systemd[1]: Started systemd-udevd.service - Rule-based Manager for Device Events and Files.
Jan 13 21:25:24.209424 systemd[1]: Starting systemd-networkd.service - Network Configuration...
Jan 13 21:25:24.236363 systemd[1]: Condition check resulted in dev-ttyS0.device - /dev/ttyS0 being skipped.
Jan 13 21:25:24.241269 kernel: BTRFS warning: duplicate device /dev/vda3 devid 1 generation 37 scanned by (udev-worker) (1390)
Jan 13 21:25:24.269592 systemd-networkd[1393]: lo: Link UP
Jan 13 21:25:24.269608 systemd-networkd[1393]: lo: Gained carrier
Jan 13 21:25:24.271740 systemd-networkd[1393]: Enumeration completed
Jan 13 21:25:24.271853 systemd[1]: Started systemd-networkd.service - Network Configuration.
Jan 13 21:25:24.273718 systemd-networkd[1393]: eth0: found matching network '/usr/lib/systemd/network/zz-default.network', based on potentially unpredictable interface name.
Jan 13 21:25:24.273733 systemd-networkd[1393]: eth0: Configuring with /usr/lib/systemd/network/zz-default.network.
Jan 13 21:25:24.274575 systemd[1]: Reached target network.target - Network.
Jan 13 21:25:24.277443 systemd-networkd[1393]: eth0: Link UP
Jan 13 21:25:24.277459 systemd-networkd[1393]: eth0: Gained carrier
Jan 13 21:25:24.277478 systemd-networkd[1393]: eth0: found matching network '/usr/lib/systemd/network/zz-default.network', based on potentially unpredictable interface name.
Jan 13 21:25:24.283694 systemd[1]: Starting systemd-networkd-wait-online.service - Wait for Network to be Configured...
Jan 13 21:25:24.296473 systemd-networkd[1393]: eth0: DHCPv4 address 10.0.0.106/16, gateway 10.0.0.1 acquired from 10.0.0.1
Jan 13 21:25:24.299069 systemd-timesyncd[1377]: Network configuration changed, trying to establish connection.
Jan 13 21:25:24.299208 systemd[1]: Found device dev-disk-by\x2dlabel-OEM.device - /dev/disk/by-label/OEM.
Jan 13 21:25:25.068263 systemd-timesyncd[1377]: Contacted time server 10.0.0.1:123 (10.0.0.1).
Jan 13 21:25:25.068316 systemd-timesyncd[1377]: Initial clock synchronization to Mon 2025-01-13 21:25:25.068162 UTC.
Jan 13 21:25:25.069059 systemd-resolved[1337]: Clock change detected. Flushing caches.
Jan 13 21:25:25.069589 systemd-networkd[1393]: eth0: found matching network '/usr/lib/systemd/network/zz-default.network', based on potentially unpredictable interface name.
Jan 13 21:25:25.069768 kernel: input: Power Button as /devices/LNXSYSTM:00/LNXPWRBN:00/input/input3
Jan 13 21:25:25.075767 kernel: ACPI: button: Power Button [PWRF]
Jan 13 21:25:25.077005 systemd[1]: Starting systemd-fsck@dev-disk-by\x2dlabel-OEM.service - File System Check on /dev/disk/by-label/OEM...
Jan 13 21:25:25.093773 kernel: input: ImExPS/2 Generic Explorer Mouse as /devices/platform/i8042/serio1/input/input4
Jan 13 21:25:25.095546 systemd[1]: Finished systemd-fsck@dev-disk-by\x2dlabel-OEM.service - File System Check on /dev/disk/by-label/OEM.
Jan 13 21:25:25.118602 kernel: i801_smbus 0000:00:1f.3: SMBus using PCI interrupt
Jan 13 21:25:25.119933 kernel: i2c i2c-0: 1/1 memory slots populated (from DMI)
Jan 13 21:25:25.120123 kernel: i2c i2c-0: Memory type 0x07 not supported yet, not instantiating SPD
Jan 13 21:25:25.146310 systemd[1]: Starting systemd-vconsole-setup.service - Virtual Console Setup...
Jan 13 21:25:25.152755 kernel: mousedev: PS/2 mouse device common for all mice
Jan 13 21:25:25.230778 kernel: kvm_amd: TSC scaling supported
Jan 13 21:25:25.230903 kernel: kvm_amd: Nested Virtualization enabled
Jan 13 21:25:25.230976 kernel: kvm_amd: Nested Paging enabled
Jan 13 21:25:25.231006 kernel: kvm_amd: LBR virtualization supported
Jan 13 21:25:25.231048 kernel: kvm_amd: Virtual VMLOAD VMSAVE supported
Jan 13 21:25:25.231081 kernel: kvm_amd: Virtual GIF supported
Jan 13 21:25:25.241342 systemd[1]: Finished systemd-vconsole-setup.service - Virtual Console Setup.
Jan 13 21:25:25.253793 kernel: EDAC MC: Ver: 3.0.0
Jan 13 21:25:25.284417 systemd[1]: Finished systemd-udev-settle.service - Wait for udev To Complete Device Initialization.
Jan 13 21:25:25.297946 systemd[1]: Starting lvm2-activation-early.service - Activation of LVM2 logical volumes...
Jan 13 21:25:25.306525 lvm[1433]: WARNING: Failed to connect to lvmetad. Falling back to device scanning.
Jan 13 21:25:25.336306 systemd[1]: Finished lvm2-activation-early.service - Activation of LVM2 logical volumes.
Jan 13 21:25:25.354500 systemd[1]: Reached target cryptsetup.target - Local Encrypted Volumes.
Jan 13 21:25:25.355659 systemd[1]: Reached target sysinit.target - System Initialization.
Jan 13 21:25:25.356867 systemd[1]: Started motdgen.path - Watch for update engine configuration changes.
Jan 13 21:25:25.358181 systemd[1]: Started user-cloudinit@var-lib-flatcar\x2dinstall-user_data.path - Watch for a cloud-config at /var/lib/flatcar-install/user_data.
Jan 13 21:25:25.359763 systemd[1]: Started logrotate.timer - Daily rotation of log files.
Jan 13 21:25:25.361041 systemd[1]: Started mdadm.timer - Weekly check for MD array's redundancy information..
Jan 13 21:25:25.362301 systemd[1]: Started systemd-tmpfiles-clean.timer - Daily Cleanup of Temporary Directories.
Jan 13 21:25:25.363540 systemd[1]: update-engine-stub.timer - Update Engine Stub Timer was skipped because of an unmet condition check (ConditionPathExists=/usr/.noupdate).
Jan 13 21:25:25.363576 systemd[1]: Reached target paths.target - Path Units.
Jan 13 21:25:25.364504 systemd[1]: Reached target timers.target - Timer Units.
Jan 13 21:25:25.366084 systemd[1]: Listening on dbus.socket - D-Bus System Message Bus Socket.
Jan 13 21:25:25.368808 systemd[1]: Starting docker.socket - Docker Socket for the API...
Jan 13 21:25:25.403409 systemd[1]: Listening on sshd.socket - OpenSSH Server Socket.
Jan 13 21:25:25.406186 systemd[1]: Starting lvm2-activation.service - Activation of LVM2 logical volumes...
Jan 13 21:25:25.407767 systemd[1]: Listening on docker.socket - Docker Socket for the API.
Jan 13 21:25:25.408933 systemd[1]: Reached target sockets.target - Socket Units.
Jan 13 21:25:25.409918 systemd[1]: Reached target basic.target - Basic System.
Jan 13 21:25:25.410905 systemd[1]: addon-config@oem.service - Configure Addon /oem was skipped because no trigger condition checks were met.
Jan 13 21:25:25.410934 systemd[1]: addon-run@oem.service - Run Addon /oem was skipped because no trigger condition checks were met.
Jan 13 21:25:25.411891 systemd[1]: Starting containerd.service - containerd container runtime...
Jan 13 21:25:25.413996 systemd[1]: Starting dbus.service - D-Bus System Message Bus...
Jan 13 21:25:25.418021 systemd[1]: Starting enable-oem-cloudinit.service - Enable cloudinit...
Jan 13 21:25:25.420206 lvm[1437]: WARNING: Failed to connect to lvmetad. Falling back to device scanning.
Jan 13 21:25:25.420583 systemd[1]: Starting extend-filesystems.service - Extend Filesystems...
Jan 13 21:25:25.421628 systemd[1]: flatcar-setup-environment.service - Modifies /etc/environment for CoreOS was skipped because of an unmet condition check (ConditionPathExists=/oem/bin/flatcar-setup-environment).
Jan 13 21:25:25.426906 systemd[1]: Starting motdgen.service - Generate /run/flatcar/motd...
Jan 13 21:25:25.430195 jq[1440]: false
Jan 13 21:25:25.431225 systemd[1]: Starting ssh-key-proc-cmdline.service - Install an ssh key from /proc/cmdline...
Jan 13 21:25:25.433951 systemd[1]: Starting sshd-keygen.service - Generate sshd host keys...
Jan 13 21:25:25.440892 systemd[1]: Starting systemd-logind.service - User Login Management...
Jan 13 21:25:25.442414 systemd[1]: tcsd.service - TCG Core Services Daemon was skipped because of an unmet condition check (ConditionPathExists=/dev/tpm0).
Jan 13 21:25:25.444029 systemd[1]: cgroup compatibility translation between legacy and unified hierarchy settings activated. See cgroup-compat debug messages for details.
Jan 13 21:25:25.444893 systemd[1]: Starting update-engine.service - Update Engine...
Jan 13 21:25:25.446788 systemd[1]: Starting update-ssh-keys-after-ignition.service - Run update-ssh-keys once after Ignition...
Jan 13 21:25:25.446838 dbus-daemon[1439]: [system] SELinux support is enabled
Jan 13 21:25:25.448539 systemd[1]: Started dbus.service - D-Bus System Message Bus.
Jan 13 21:25:25.451285 extend-filesystems[1441]: Found loop3
Jan 13 21:25:25.453343 extend-filesystems[1441]: Found loop4
Jan 13 21:25:25.453343 extend-filesystems[1441]: Found loop5
Jan 13 21:25:25.453343 extend-filesystems[1441]: Found sr0
Jan 13 21:25:25.453343 extend-filesystems[1441]: Found vda
Jan 13 21:25:25.453343 extend-filesystems[1441]: Found vda1
Jan 13 21:25:25.453343 extend-filesystems[1441]: Found vda2
Jan 13 21:25:25.453343 extend-filesystems[1441]: Found vda3
Jan 13 21:25:25.453343 extend-filesystems[1441]: Found usr
Jan 13 21:25:25.453343 extend-filesystems[1441]: Found vda4
Jan 13 21:25:25.453343 extend-filesystems[1441]: Found vda6
Jan 13 21:25:25.453343 extend-filesystems[1441]: Found vda7
Jan 13 21:25:25.453343 extend-filesystems[1441]: Found vda9
Jan 13 21:25:25.453343 extend-filesystems[1441]: Checking size of /dev/vda9
Jan 13 21:25:25.458701 systemd[1]: Finished lvm2-activation.service - Activation of LVM2 logical volumes.
Jan 13 21:25:25.467167 systemd[1]: enable-oem-cloudinit.service: Skipped due to 'exec-condition'.
Jan 13 21:25:25.467627 jq[1453]: true
Jan 13 21:25:25.468355 systemd[1]: Condition check resulted in enable-oem-cloudinit.service - Enable cloudinit being skipped.
Jan 13 21:25:25.469629 systemd[1]: motdgen.service: Deactivated successfully.
Jan 13 21:25:25.469910 systemd[1]: Finished motdgen.service - Generate /run/flatcar/motd.
Jan 13 21:25:25.472210 systemd[1]: ssh-key-proc-cmdline.service: Deactivated successfully.
Jan 13 21:25:25.472471 systemd[1]: Finished ssh-key-proc-cmdline.service - Install an ssh key from /proc/cmdline.
Jan 13 21:25:25.482533 update_engine[1449]: I20250113 21:25:25.482450 1449 main.cc:92] Flatcar Update Engine starting
Jan 13 21:25:25.488192 (ntainerd)[1461]: containerd.service: Referenced but unset environment variable evaluates to an empty string: TORCX_IMAGEDIR, TORCX_UNPACKDIR
Jan 13 21:25:25.489692 systemd[1]: system-cloudinit@usr-share-oem-cloud\x2dconfig.yml.service - Load cloud-config from /usr/share/oem/cloud-config.yml was skipped because of an unmet condition check (ConditionFileNotEmpty=/usr/share/oem/cloud-config.yml).
Jan 13 21:25:25.489732 systemd[1]: Reached target system-config.target - Load system-provided cloud configs.
Jan 13 21:25:25.493596 jq[1460]: true
Jan 13 21:25:25.494192 extend-filesystems[1441]: Resized partition /dev/vda9
Jan 13 21:25:25.501612 systemd[1]: user-cloudinit-proc-cmdline.service - Load cloud-config from url defined in /proc/cmdline was skipped because of an unmet condition check (ConditionKernelCommandLine=cloud-config-url).
Jan 13 21:25:25.502178 update_engine[1449]: I20250113 21:25:25.500977 1449 update_check_scheduler.cc:74] Next update check in 5m55s
Jan 13 21:25:25.501632 systemd[1]: Reached target user-config.target - Load user-provided cloud configs.
Jan 13 21:25:25.503650 extend-filesystems[1471]: resize2fs 1.47.1 (20-May-2024)
Jan 13 21:25:25.510064 systemd[1]: Started update-engine.service - Update Engine.
Jan 13 21:25:25.515488 systemd[1]: Started locksmithd.service - Cluster reboot manager.
Jan 13 21:25:25.519827 kernel: BTRFS warning: duplicate device /dev/vda3 devid 1 generation 37 scanned by (udev-worker) (1390)
Jan 13 21:25:25.559433 systemd-logind[1446]: Watching system buttons on /dev/input/event1 (Power Button)
Jan 13 21:25:25.559464 systemd-logind[1446]: Watching system buttons on /dev/input/event0 (AT Translated Set 2 keyboard)
Jan 13 21:25:25.559719 systemd-logind[1446]: New seat seat0.
Jan 13 21:25:25.563517 systemd[1]: Started systemd-logind.service - User Login Management.
Jan 13 21:25:25.593782 kernel: EXT4-fs (vda9): resizing filesystem from 553472 to 1864699 blocks
Jan 13 21:25:25.659652 locksmithd[1474]: locksmithd starting currentOperation="UPDATE_STATUS_IDLE" strategy="reboot"
Jan 13 21:25:25.779252 sshd_keygen[1458]: ssh-keygen: generating new host keys: RSA ECDSA ED25519
Jan 13 21:25:25.803942 systemd[1]: Finished sshd-keygen.service - Generate sshd host keys.
Jan 13 21:25:25.821038 systemd[1]: Starting issuegen.service - Generate /run/issue...
Jan 13 21:25:25.829849 systemd[1]: issuegen.service: Deactivated successfully.
Jan 13 21:25:25.830145 systemd[1]: Finished issuegen.service - Generate /run/issue.
Jan 13 21:25:25.839105 systemd[1]: Starting systemd-user-sessions.service - Permit User Sessions...
Jan 13 21:25:25.851764 kernel: EXT4-fs (vda9): resized filesystem to 1864699
Jan 13 21:25:25.855763 systemd[1]: Finished systemd-user-sessions.service - Permit User Sessions.
Jan 13 21:25:25.864009 systemd[1]: Started getty@tty1.service - Getty on tty1.
Jan 13 21:25:25.866417 systemd[1]: Started serial-getty@ttyS0.service - Serial Getty on ttyS0.
Jan 13 21:25:25.867994 systemd[1]: Reached target getty.target - Login Prompts.
Jan 13 21:25:25.887510 extend-filesystems[1471]: Filesystem at /dev/vda9 is mounted on /; on-line resizing required
Jan 13 21:25:25.887510 extend-filesystems[1471]: old_desc_blocks = 1, new_desc_blocks = 1
Jan 13 21:25:25.887510 extend-filesystems[1471]: The filesystem on /dev/vda9 is now 1864699 (4k) blocks long.
Jan 13 21:25:25.891351 extend-filesystems[1441]: Resized filesystem in /dev/vda9
Jan 13 21:25:25.892251 bash[1489]: Updated "/home/core/.ssh/authorized_keys"
Jan 13 21:25:25.891398 systemd[1]: extend-filesystems.service: Deactivated successfully.
Jan 13 21:25:25.892464 systemd[1]: Finished extend-filesystems.service - Extend Filesystems.
Jan 13 21:25:25.894207 systemd[1]: Finished update-ssh-keys-after-ignition.service - Run update-ssh-keys once after Ignition.
Jan 13 21:25:25.897671 systemd[1]: sshkeys.service was skipped because no trigger condition checks were met.
Jan 13 21:25:25.973770 containerd[1461]: time="2025-01-13T21:25:25.973655711Z" level=info msg="starting containerd" revision=174e0d1785eeda18dc2beba45e1d5a188771636b version=v1.7.21
Jan 13 21:25:25.995357 containerd[1461]: time="2025-01-13T21:25:25.995285324Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.aufs\"..." type=io.containerd.snapshotter.v1
Jan 13 21:25:25.997155 containerd[1461]: time="2025-01-13T21:25:25.997115737Z" level=info msg="skip loading plugin \"io.containerd.snapshotter.v1.aufs\"..." error="aufs is not supported (modprobe aufs failed: exit status 1 \"modprobe: FATAL: Module aufs not found in directory /lib/modules/6.6.71-flatcar\\n\"): skip plugin" type=io.containerd.snapshotter.v1
Jan 13 21:25:25.997155 containerd[1461]: time="2025-01-13T21:25:25.997144200Z" level=info msg="loading plugin \"io.containerd.event.v1.exchange\"..." type=io.containerd.event.v1
Jan 13 21:25:25.997211 containerd[1461]: time="2025-01-13T21:25:25.997159188Z" level=info msg="loading plugin \"io.containerd.internal.v1.opt\"..." type=io.containerd.internal.v1
Jan 13 21:25:25.997394 containerd[1461]: time="2025-01-13T21:25:25.997368821Z" level=info msg="loading plugin \"io.containerd.warning.v1.deprecations\"..." type=io.containerd.warning.v1
Jan 13 21:25:25.997415 containerd[1461]: time="2025-01-13T21:25:25.997393297Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.blockfile\"..." type=io.containerd.snapshotter.v1
Jan 13 21:25:25.997488 containerd[1461]: time="2025-01-13T21:25:25.997463308Z" level=info msg="skip loading plugin \"io.containerd.snapshotter.v1.blockfile\"..." error="no scratch file generator: skip plugin" type=io.containerd.snapshotter.v1
Jan 13 21:25:25.997488 containerd[1461]: time="2025-01-13T21:25:25.997479358Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.btrfs\"..." type=io.containerd.snapshotter.v1
Jan 13 21:25:25.997712 containerd[1461]: time="2025-01-13T21:25:25.997686186Z" level=info msg="skip loading plugin \"io.containerd.snapshotter.v1.btrfs\"..." error="path /var/lib/containerd/io.containerd.snapshotter.v1.btrfs (ext4) must be a btrfs filesystem to be used with the btrfs snapshotter: skip plugin" type=io.containerd.snapshotter.v1
Jan 13 21:25:25.997712 containerd[1461]: time="2025-01-13T21:25:25.997704601Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.devmapper\"..." type=io.containerd.snapshotter.v1
Jan 13 21:25:25.997765 containerd[1461]: time="2025-01-13T21:25:25.997718206Z" level=info msg="skip loading plugin \"io.containerd.snapshotter.v1.devmapper\"..." error="devmapper not configured: skip plugin" type=io.containerd.snapshotter.v1
Jan 13 21:25:25.997765 containerd[1461]: time="2025-01-13T21:25:25.997728135Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.native\"..." type=io.containerd.snapshotter.v1
Jan 13 21:25:25.997847 containerd[1461]: time="2025-01-13T21:25:25.997830056Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.overlayfs\"..." type=io.containerd.snapshotter.v1
Jan 13 21:25:25.998126 containerd[1461]: time="2025-01-13T21:25:25.998091196Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.zfs\"..." type=io.containerd.snapshotter.v1
Jan 13 21:25:25.998244 containerd[1461]: time="2025-01-13T21:25:25.998218495Z" level=info msg="skip loading plugin \"io.containerd.snapshotter.v1.zfs\"..." error="path /var/lib/containerd/io.containerd.snapshotter.v1.zfs must be a zfs filesystem to be used with the zfs snapshotter: skip plugin" type=io.containerd.snapshotter.v1
Jan 13 21:25:25.998244 containerd[1461]: time="2025-01-13T21:25:25.998233944Z" level=info msg="loading plugin \"io.containerd.content.v1.content\"..." type=io.containerd.content.v1
Jan 13 21:25:25.998367 containerd[1461]: time="2025-01-13T21:25:25.998344501Z" level=info msg="loading plugin \"io.containerd.metadata.v1.bolt\"..." type=io.containerd.metadata.v1
Jan 13 21:25:25.998416 containerd[1461]: time="2025-01-13T21:25:25.998399945Z" level=info msg="metadata content store policy set" policy=shared
Jan 13 21:25:26.004080 containerd[1461]: time="2025-01-13T21:25:26.004031757Z" level=info msg="loading plugin \"io.containerd.gc.v1.scheduler\"..." type=io.containerd.gc.v1
Jan 13 21:25:26.004141 containerd[1461]: time="2025-01-13T21:25:26.004090837Z" level=info msg="loading plugin \"io.containerd.differ.v1.walking\"..." type=io.containerd.differ.v1
Jan 13 21:25:26.004141 containerd[1461]: time="2025-01-13T21:25:26.004113971Z" level=info msg="loading plugin \"io.containerd.lease.v1.manager\"..." type=io.containerd.lease.v1
Jan 13 21:25:26.004201 containerd[1461]: time="2025-01-13T21:25:26.004142374Z" level=info msg="loading plugin \"io.containerd.streaming.v1.manager\"..." type=io.containerd.streaming.v1
Jan 13 21:25:26.004201 containerd[1461]: time="2025-01-13T21:25:26.004162341Z" level=info msg="loading plugin \"io.containerd.runtime.v1.linux\"..." type=io.containerd.runtime.v1
Jan 13 21:25:26.004348 containerd[1461]: time="2025-01-13T21:25:26.004314897Z" level=info msg="loading plugin \"io.containerd.monitor.v1.cgroups\"..." type=io.containerd.monitor.v1
Jan 13 21:25:26.004607 containerd[1461]: time="2025-01-13T21:25:26.004575015Z" level=info msg="loading plugin \"io.containerd.runtime.v2.task\"..." type=io.containerd.runtime.v2
Jan 13 21:25:26.004751 containerd[1461]: time="2025-01-13T21:25:26.004706201Z" level=info msg="loading plugin \"io.containerd.runtime.v2.shim\"..." type=io.containerd.runtime.v2
Jan 13 21:25:26.004751 containerd[1461]: time="2025-01-13T21:25:26.004728553Z" level=info msg="loading plugin \"io.containerd.sandbox.store.v1.local\"..." type=io.containerd.sandbox.store.v1
Jan 13 21:25:26.004815 containerd[1461]: time="2025-01-13T21:25:26.004763930Z" level=info msg="loading plugin \"io.containerd.sandbox.controller.v1.local\"..." type=io.containerd.sandbox.controller.v1
Jan 13 21:25:26.004815 containerd[1461]: time="2025-01-13T21:25:26.004783306Z" level=info msg="loading plugin \"io.containerd.service.v1.containers-service\"..." type=io.containerd.service.v1
Jan 13 21:25:26.004815 containerd[1461]: time="2025-01-13T21:25:26.004802031Z" level=info msg="loading plugin \"io.containerd.service.v1.content-service\"..." type=io.containerd.service.v1
Jan 13 21:25:26.004888 containerd[1461]: time="2025-01-13T21:25:26.004824964Z" level=info msg="loading plugin \"io.containerd.service.v1.diff-service\"..." type=io.containerd.service.v1
Jan 13 21:25:26.004888 containerd[1461]: time="2025-01-13T21:25:26.004848348Z" level=info msg="loading plugin \"io.containerd.service.v1.images-service\"..." type=io.containerd.service.v1
Jan 13 21:25:26.004888 containerd[1461]: time="2025-01-13T21:25:26.004866522Z" level=info msg="loading plugin \"io.containerd.service.v1.introspection-service\"..." type=io.containerd.service.v1
Jan 13 21:25:26.004888 containerd[1461]: time="2025-01-13T21:25:26.004883534Z" level=info msg="loading plugin \"io.containerd.service.v1.namespaces-service\"..." type=io.containerd.service.v1
Jan 13 21:25:26.005001 containerd[1461]: time="2025-01-13T21:25:26.004907619Z" level=info msg="loading plugin \"io.containerd.service.v1.snapshots-service\"..." type=io.containerd.service.v1
Jan 13 21:25:26.005001 containerd[1461]: time="2025-01-13T21:25:26.004923659Z" level=info msg="loading plugin \"io.containerd.service.v1.tasks-service\"..." type=io.containerd.service.v1
Jan 13 21:25:26.005001 containerd[1461]: time="2025-01-13T21:25:26.004950690Z" level=info msg="loading plugin \"io.containerd.grpc.v1.containers\"..." type=io.containerd.grpc.v1
Jan 13 21:25:26.005001 containerd[1461]: time="2025-01-13T21:25:26.004970136Z" level=info msg="loading plugin \"io.containerd.grpc.v1.content\"..." type=io.containerd.grpc.v1
Jan 13 21:25:26.005001 containerd[1461]: time="2025-01-13T21:25:26.004986838Z" level=info msg="loading plugin \"io.containerd.grpc.v1.diff\"..." type=io.containerd.grpc.v1
Jan 13 21:25:26.005155 containerd[1461]: time="2025-01-13T21:25:26.005003208Z" level=info msg="loading plugin \"io.containerd.grpc.v1.events\"..." type=io.containerd.grpc.v1
Jan 13 21:25:26.005155 containerd[1461]: time="2025-01-13T21:25:26.005027995Z" level=info msg="loading plugin \"io.containerd.grpc.v1.images\"..." type=io.containerd.grpc.v1
Jan 13 21:25:26.005155 containerd[1461]: time="2025-01-13T21:25:26.005045788Z" level=info msg="loading plugin \"io.containerd.grpc.v1.introspection\"..." type=io.containerd.grpc.v1
Jan 13 21:25:26.005155 containerd[1461]: time="2025-01-13T21:25:26.005061498Z" level=info msg="loading plugin \"io.containerd.grpc.v1.leases\"..." type=io.containerd.grpc.v1
Jan 13 21:25:26.005155 containerd[1461]: time="2025-01-13T21:25:26.005077868Z" level=info msg="loading plugin \"io.containerd.grpc.v1.namespaces\"..." type=io.containerd.grpc.v1
Jan 13 21:25:26.005155 containerd[1461]: time="2025-01-13T21:25:26.005096774Z" level=info msg="loading plugin \"io.containerd.grpc.v1.sandbox-controllers\"..." type=io.containerd.grpc.v1
Jan 13 21:25:26.005155 containerd[1461]: time="2025-01-13T21:25:26.005117102Z" level=info msg="loading plugin \"io.containerd.grpc.v1.sandboxes\"..." type=io.containerd.grpc.v1
Jan 13 21:25:26.005155 containerd[1461]: time="2025-01-13T21:25:26.005139744Z" level=info msg="loading plugin \"io.containerd.grpc.v1.snapshots\"..." type=io.containerd.grpc.v1
Jan 13 21:25:26.005155 containerd[1461]: time="2025-01-13T21:25:26.005156907Z" level=info msg="loading plugin \"io.containerd.grpc.v1.streaming\"..." type=io.containerd.grpc.v1
Jan 13 21:25:26.005368 containerd[1461]: time="2025-01-13T21:25:26.005179559Z" level=info msg="loading plugin \"io.containerd.grpc.v1.tasks\"..." type=io.containerd.grpc.v1
Jan 13 21:25:26.005368 containerd[1461]: time="2025-01-13T21:25:26.005200619Z" level=info msg="loading plugin \"io.containerd.transfer.v1.local\"..." type=io.containerd.transfer.v1
Jan 13 21:25:26.005368 containerd[1461]: time="2025-01-13T21:25:26.005246084Z" level=info msg="loading plugin \"io.containerd.grpc.v1.transfer\"..." type=io.containerd.grpc.v1
Jan 13 21:25:26.005368 containerd[1461]: time="2025-01-13T21:25:26.005264248Z" level=info msg="loading plugin \"io.containerd.grpc.v1.version\"..." type=io.containerd.grpc.v1
Jan 13 21:25:26.005368 containerd[1461]: time="2025-01-13T21:25:26.005278575Z" level=info msg="loading plugin \"io.containerd.internal.v1.restart\"..." type=io.containerd.internal.v1
Jan 13 21:25:26.005682 containerd[1461]: time="2025-01-13T21:25:26.005636987Z" level=info msg="loading plugin \"io.containerd.tracing.processor.v1.otlp\"..." type=io.containerd.tracing.processor.v1
Jan 13 21:25:26.005780 containerd[1461]: time="2025-01-13T21:25:26.005722838Z" level=info msg="skip loading plugin \"io.containerd.tracing.processor.v1.otlp\"..." error="skip plugin: tracing endpoint not configured" type=io.containerd.tracing.processor.v1
Jan 13 21:25:26.005780 containerd[1461]: time="2025-01-13T21:25:26.005774665Z" level=info msg="loading plugin \"io.containerd.internal.v1.tracing\"..." type=io.containerd.internal.v1
Jan 13 21:25:26.005841 containerd[1461]: time="2025-01-13T21:25:26.005805262Z" level=info msg="skip loading plugin \"io.containerd.internal.v1.tracing\"..." error="skip plugin: tracing endpoint not configured" type=io.containerd.internal.v1
Jan 13 21:25:26.005841 containerd[1461]: time="2025-01-13T21:25:26.005820812Z" level=info msg="loading plugin \"io.containerd.grpc.v1.healthcheck\"..." type=io.containerd.grpc.v1
Jan 13 21:25:26.005891 containerd[1461]: time="2025-01-13T21:25:26.005844807Z" level=info msg="loading plugin \"io.containerd.nri.v1.nri\"..." type=io.containerd.nri.v1
Jan 13 21:25:26.005891 containerd[1461]: time="2025-01-13T21:25:26.005869132Z" level=info msg="NRI interface is disabled by configuration."
Jan 13 21:25:26.005953 containerd[1461]: time="2025-01-13T21:25:26.005892366Z" level=info msg="loading plugin \"io.containerd.grpc.v1.cri\"..." type=io.containerd.grpc.v1
Jan 13 21:25:26.006541 containerd[1461]: time="2025-01-13T21:25:26.006305901Z" level=info msg="Start cri plugin with config {PluginConfig:{ContainerdConfig:{Snapshotter:overlayfs DefaultRuntimeName:runc DefaultRuntime:{Type: Path: Engine: PodAnnotations:[] ContainerAnnotations:[] Root: Options:map[] PrivilegedWithoutHostDevices:false PrivilegedWithoutHostDevicesAllDevicesAllowed:false BaseRuntimeSpec: NetworkPluginConfDir: NetworkPluginMaxConfNum:0 Snapshotter: SandboxMode:} UntrustedWorkloadRuntime:{Type: Path: Engine: PodAnnotations:[] ContainerAnnotations:[] Root: Options:map[] PrivilegedWithoutHostDevices:false PrivilegedWithoutHostDevicesAllDevicesAllowed:false BaseRuntimeSpec: NetworkPluginConfDir: NetworkPluginMaxConfNum:0 Snapshotter: SandboxMode:} Runtimes:map[runc:{Type:io.containerd.runc.v2 Path: Engine: PodAnnotations:[] ContainerAnnotations:[] Root: Options:map[SystemdCgroup:true] PrivilegedWithoutHostDevices:false PrivilegedWithoutHostDevicesAllDevicesAllowed:false BaseRuntimeSpec: NetworkPluginConfDir: NetworkPluginMaxConfNum:0 Snapshotter: SandboxMode:podsandbox}] NoPivot:false DisableSnapshotAnnotations:true DiscardUnpackedLayers:false IgnoreBlockIONotEnabledErrors:false IgnoreRdtNotEnabledErrors:false} CniConfig:{NetworkPluginBinDir:/opt/cni/bin NetworkPluginConfDir:/etc/cni/net.d NetworkPluginMaxConfNum:1 NetworkPluginSetupSerially:false NetworkPluginConfTemplate: IPPreference:} Registry:{ConfigPath: Mirrors:map[] Configs:map[] Auths:map[] Headers:map[]} ImageDecryption:{KeyModel:node} DisableTCPService:true StreamServerAddress:127.0.0.1 StreamServerPort:0 StreamIdleTimeout:4h0m0s EnableSelinux:true SelinuxCategoryRange:1024 SandboxImage:registry.k8s.io/pause:3.8 StatsCollectPeriod:10 SystemdCgroup:false EnableTLSStreaming:false X509KeyPairStreaming:{TLSCertFile: TLSKeyFile:} MaxContainerLogLineSize:16384 DisableCgroup:false DisableApparmor:false RestrictOOMScoreAdj:false MaxConcurrentDownloads:3 DisableProcMount:false UnsetSeccompProfile: TolerateMissingHugetlbController:true DisableHugetlbController:true DeviceOwnershipFromSecurityContext:false IgnoreImageDefinedVolumes:false NetNSMountsUnderStateDir:false EnableUnprivilegedPorts:false EnableUnprivilegedICMP:false EnableCDI:false CDISpecDirs:[/etc/cdi /var/run/cdi] ImagePullProgressTimeout:5m0s DrainExecSyncIOTimeout:0s ImagePullWithSyncFs:false IgnoreDeprecationWarnings:[]} ContainerdRootDir:/var/lib/containerd ContainerdEndpoint:/run/containerd/containerd.sock RootDir:/var/lib/containerd/io.containerd.grpc.v1.cri StateDir:/run/containerd/io.containerd.grpc.v1.cri}"
Jan 13 21:25:26.006541 containerd[1461]: time="2025-01-13T21:25:26.006477593Z" level=info msg="Connect containerd service"
Jan 13 21:25:26.006823 containerd[1461]: time="2025-01-13T21:25:26.006730618Z" level=info msg="using legacy CRI server"
Jan 13 21:25:26.006823 containerd[1461]: time="2025-01-13T21:25:26.006808885Z" level=info msg="using experimental NRI integration - disable nri plugin to prevent this"
Jan 13 21:25:26.006960 containerd[1461]: time="2025-01-13T21:25:26.006926686Z" level=info msg="Get image filesystem path \"/var/lib/containerd/io.containerd.snapshotter.v1.overlayfs\""
Jan 13 21:25:26.007672 containerd[1461]: time="2025-01-13T21:25:26.007632459Z" level=error msg="failed to load cni during init, please check CRI plugin status before setting up network for pods" error="cni config load failed: no network config found in /etc/cni/net.d: cni plugin not initialized: failed to load cni config"
Jan 13 21:25:26.008139 containerd[1461]: time="2025-01-13T21:25:26.007791147Z" level=info msg="Start subscribing containerd event"
Jan 13 21:25:26.008139 containerd[1461]: time="2025-01-13T21:25:26.007860687Z" level=info msg="Start recovering state"
Jan 13 21:25:26.008139 containerd[1461]: time="2025-01-13T21:25:26.007944725Z" level=info msg="Start event monitor"
Jan 13 21:25:26.008139 containerd[1461]: time="2025-01-13T21:25:26.007967167Z" level=info msg="Start snapshots syncer"
Jan 13 21:25:26.008139 containerd[1461]: time="2025-01-13T21:25:26.007982616Z" level=info msg="Start cni network conf syncer for default"
Jan 13 21:25:26.008139 containerd[1461]: time="2025-01-13T21:25:26.007991623Z" level=info msg="Start streaming server"
Jan 13 21:25:26.008139 containerd[1461]: time="2025-01-13T21:25:26.008056384Z" level=info msg=serving... address=/run/containerd/containerd.sock.ttrpc
Jan 13 21:25:26.008139 containerd[1461]: time="2025-01-13T21:25:26.008127958Z" level=info msg=serving... address=/run/containerd/containerd.sock
Jan 13 21:25:26.008307 systemd[1]: Started containerd.service - containerd container runtime.
Jan 13 21:25:26.008828 containerd[1461]: time="2025-01-13T21:25:26.008793987Z" level=info msg="containerd successfully booted in 0.036219s"
Jan 13 21:25:26.660985 systemd-networkd[1393]: eth0: Gained IPv6LL
Jan 13 21:25:26.664382 systemd[1]: Finished systemd-networkd-wait-online.service - Wait for Network to be Configured.
Jan 13 21:25:26.666216 systemd[1]: Reached target network-online.target - Network is Online.
Jan 13 21:25:26.680940 systemd[1]: Starting coreos-metadata.service - QEMU metadata agent...
Jan 13 21:25:26.683373 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent...
Jan 13 21:25:26.685490 systemd[1]: Starting nvidia.service - NVIDIA Configure Service...
Jan 13 21:25:26.703313 systemd[1]: coreos-metadata.service: Deactivated successfully.
Jan 13 21:25:26.703570 systemd[1]: Finished coreos-metadata.service - QEMU metadata agent. Jan 13 21:25:26.705304 systemd[1]: packet-phone-home.service - Report Success to Packet was skipped because no trigger condition checks were met. Jan 13 21:25:26.706585 systemd[1]: Finished nvidia.service - NVIDIA Configure Service. Jan 13 21:25:27.334401 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent. Jan 13 21:25:27.336157 systemd[1]: Reached target multi-user.target - Multi-User System. Jan 13 21:25:27.338010 systemd[1]: Startup finished in 767ms (kernel) + 5.402s (initrd) + 4.483s (userspace) = 10.653s. Jan 13 21:25:27.356105 (kubelet)[1545]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS, KUBELET_KUBEADM_ARGS Jan 13 21:25:27.791957 kubelet[1545]: E0113 21:25:27.791830 1545 run.go:74] "command failed" err="failed to load kubelet config file, path: /var/lib/kubelet/config.yaml, error: failed to load Kubelet config file /var/lib/kubelet/config.yaml, error failed to read kubelet config file \"/var/lib/kubelet/config.yaml\", error: open /var/lib/kubelet/config.yaml: no such file or directory" Jan 13 21:25:27.795721 systemd[1]: kubelet.service: Main process exited, code=exited, status=1/FAILURE Jan 13 21:25:27.795957 systemd[1]: kubelet.service: Failed with result 'exit-code'. Jan 13 21:25:35.400104 systemd[1]: Created slice system-sshd.slice - Slice /system/sshd. Jan 13 21:25:35.401648 systemd[1]: Started sshd@0-10.0.0.106:22-10.0.0.1:52970.service - OpenSSH per-connection server daemon (10.0.0.1:52970). Jan 13 21:25:35.453337 sshd[1559]: Accepted publickey for core from 10.0.0.1 port 52970 ssh2: RSA SHA256:zXffdl8b9kLXcQDMnpblwTEmig+xqtREHhbtSBVXgEc Jan 13 21:25:35.455643 sshd[1559]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Jan 13 21:25:35.464056 systemd[1]: Created slice user-500.slice - User Slice of UID 500. 
Jan 13 21:25:35.473960 systemd[1]: Starting user-runtime-dir@500.service - User Runtime Directory /run/user/500... Jan 13 21:25:35.475803 systemd-logind[1446]: New session 1 of user core. Jan 13 21:25:35.486722 systemd[1]: Finished user-runtime-dir@500.service - User Runtime Directory /run/user/500. Jan 13 21:25:35.500022 systemd[1]: Starting user@500.service - User Manager for UID 500... Jan 13 21:25:35.503367 (systemd)[1563]: pam_unix(systemd-user:session): session opened for user core(uid=500) by (uid=0) Jan 13 21:25:35.631438 systemd[1563]: Queued start job for default target default.target. Jan 13 21:25:35.641286 systemd[1563]: Created slice app.slice - User Application Slice. Jan 13 21:25:35.641315 systemd[1563]: Reached target paths.target - Paths. Jan 13 21:25:35.641331 systemd[1563]: Reached target timers.target - Timers. Jan 13 21:25:35.643318 systemd[1563]: Starting dbus.socket - D-Bus User Message Bus Socket... Jan 13 21:25:35.655774 systemd[1563]: Listening on dbus.socket - D-Bus User Message Bus Socket. Jan 13 21:25:35.655935 systemd[1563]: Reached target sockets.target - Sockets. Jan 13 21:25:35.655956 systemd[1563]: Reached target basic.target - Basic System. Jan 13 21:25:35.655999 systemd[1563]: Reached target default.target - Main User Target. Jan 13 21:25:35.656041 systemd[1563]: Startup finished in 144ms. Jan 13 21:25:35.656718 systemd[1]: Started user@500.service - User Manager for UID 500. Jan 13 21:25:35.658573 systemd[1]: Started session-1.scope - Session 1 of User core. Jan 13 21:25:35.730046 systemd[1]: Started sshd@1-10.0.0.106:22-10.0.0.1:52982.service - OpenSSH per-connection server daemon (10.0.0.1:52982). Jan 13 21:25:35.771968 sshd[1574]: Accepted publickey for core from 10.0.0.1 port 52982 ssh2: RSA SHA256:zXffdl8b9kLXcQDMnpblwTEmig+xqtREHhbtSBVXgEc Jan 13 21:25:35.773647 sshd[1574]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Jan 13 21:25:35.778548 systemd-logind[1446]: New session 2 of user core. 
Jan 13 21:25:35.787892 systemd[1]: Started session-2.scope - Session 2 of User core. Jan 13 21:25:35.842794 sshd[1574]: pam_unix(sshd:session): session closed for user core Jan 13 21:25:35.854607 systemd[1]: sshd@1-10.0.0.106:22-10.0.0.1:52982.service: Deactivated successfully. Jan 13 21:25:35.856712 systemd[1]: session-2.scope: Deactivated successfully. Jan 13 21:25:35.858437 systemd-logind[1446]: Session 2 logged out. Waiting for processes to exit. Jan 13 21:25:35.859686 systemd[1]: Started sshd@2-10.0.0.106:22-10.0.0.1:52994.service - OpenSSH per-connection server daemon (10.0.0.1:52994). Jan 13 21:25:35.860508 systemd-logind[1446]: Removed session 2. Jan 13 21:25:35.895779 sshd[1581]: Accepted publickey for core from 10.0.0.1 port 52994 ssh2: RSA SHA256:zXffdl8b9kLXcQDMnpblwTEmig+xqtREHhbtSBVXgEc Jan 13 21:25:35.897576 sshd[1581]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Jan 13 21:25:35.901953 systemd-logind[1446]: New session 3 of user core. Jan 13 21:25:35.912099 systemd[1]: Started session-3.scope - Session 3 of User core. Jan 13 21:25:35.965610 sshd[1581]: pam_unix(sshd:session): session closed for user core Jan 13 21:25:35.980907 systemd[1]: sshd@2-10.0.0.106:22-10.0.0.1:52994.service: Deactivated successfully. Jan 13 21:25:35.982614 systemd[1]: session-3.scope: Deactivated successfully. Jan 13 21:25:35.984014 systemd-logind[1446]: Session 3 logged out. Waiting for processes to exit. Jan 13 21:25:35.992974 systemd[1]: Started sshd@3-10.0.0.106:22-10.0.0.1:53000.service - OpenSSH per-connection server daemon (10.0.0.1:53000). Jan 13 21:25:35.994251 systemd-logind[1446]: Removed session 3. Jan 13 21:25:36.031473 sshd[1588]: Accepted publickey for core from 10.0.0.1 port 53000 ssh2: RSA SHA256:zXffdl8b9kLXcQDMnpblwTEmig+xqtREHhbtSBVXgEc Jan 13 21:25:36.033201 sshd[1588]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Jan 13 21:25:36.037839 systemd-logind[1446]: New session 4 of user core. 
Jan 13 21:25:36.047879 systemd[1]: Started session-4.scope - Session 4 of User core. Jan 13 21:25:36.103179 sshd[1588]: pam_unix(sshd:session): session closed for user core Jan 13 21:25:36.116962 systemd[1]: sshd@3-10.0.0.106:22-10.0.0.1:53000.service: Deactivated successfully. Jan 13 21:25:36.118606 systemd[1]: session-4.scope: Deactivated successfully. Jan 13 21:25:36.120380 systemd-logind[1446]: Session 4 logged out. Waiting for processes to exit. Jan 13 21:25:36.121632 systemd[1]: Started sshd@4-10.0.0.106:22-10.0.0.1:53014.service - OpenSSH per-connection server daemon (10.0.0.1:53014). Jan 13 21:25:36.122650 systemd-logind[1446]: Removed session 4. Jan 13 21:25:36.161269 sshd[1595]: Accepted publickey for core from 10.0.0.1 port 53014 ssh2: RSA SHA256:zXffdl8b9kLXcQDMnpblwTEmig+xqtREHhbtSBVXgEc Jan 13 21:25:36.162775 sshd[1595]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Jan 13 21:25:36.167514 systemd-logind[1446]: New session 5 of user core. Jan 13 21:25:36.180995 systemd[1]: Started session-5.scope - Session 5 of User core. Jan 13 21:25:36.239386 sudo[1598]: core : PWD=/home/core ; USER=root ; COMMAND=/usr/sbin/setenforce 1 Jan 13 21:25:36.239809 sudo[1598]: pam_unix(sudo:session): session opened for user root(uid=0) by core(uid=500) Jan 13 21:25:36.263686 sudo[1598]: pam_unix(sudo:session): session closed for user root Jan 13 21:25:36.265919 sshd[1595]: pam_unix(sshd:session): session closed for user core Jan 13 21:25:36.279688 systemd[1]: sshd@4-10.0.0.106:22-10.0.0.1:53014.service: Deactivated successfully. Jan 13 21:25:36.281517 systemd[1]: session-5.scope: Deactivated successfully. Jan 13 21:25:36.283118 systemd-logind[1446]: Session 5 logged out. Waiting for processes to exit. Jan 13 21:25:36.297069 systemd[1]: Started sshd@5-10.0.0.106:22-10.0.0.1:53028.service - OpenSSH per-connection server daemon (10.0.0.1:53028). Jan 13 21:25:36.298014 systemd-logind[1446]: Removed session 5. 
Jan 13 21:25:36.333314 sshd[1603]: Accepted publickey for core from 10.0.0.1 port 53028 ssh2: RSA SHA256:zXffdl8b9kLXcQDMnpblwTEmig+xqtREHhbtSBVXgEc Jan 13 21:25:36.335023 sshd[1603]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Jan 13 21:25:36.338984 systemd-logind[1446]: New session 6 of user core. Jan 13 21:25:36.349931 systemd[1]: Started session-6.scope - Session 6 of User core. Jan 13 21:25:36.403292 sudo[1607]: core : PWD=/home/core ; USER=root ; COMMAND=/usr/bin/rm -rf /etc/audit/rules.d/80-selinux.rules /etc/audit/rules.d/99-default.rules Jan 13 21:25:36.403607 sudo[1607]: pam_unix(sudo:session): session opened for user root(uid=0) by core(uid=500) Jan 13 21:25:36.407284 sudo[1607]: pam_unix(sudo:session): session closed for user root Jan 13 21:25:36.412706 sudo[1606]: core : PWD=/home/core ; USER=root ; COMMAND=/usr/bin/systemctl restart audit-rules Jan 13 21:25:36.413047 sudo[1606]: pam_unix(sudo:session): session opened for user root(uid=0) by core(uid=500) Jan 13 21:25:36.433959 systemd[1]: Stopping audit-rules.service - Load Security Auditing Rules... Jan 13 21:25:36.435693 auditctl[1610]: No rules Jan 13 21:25:36.437008 systemd[1]: audit-rules.service: Deactivated successfully. Jan 13 21:25:36.437262 systemd[1]: Stopped audit-rules.service - Load Security Auditing Rules. Jan 13 21:25:36.438940 systemd[1]: Starting audit-rules.service - Load Security Auditing Rules... Jan 13 21:25:36.479611 augenrules[1628]: No rules Jan 13 21:25:36.481393 systemd[1]: Finished audit-rules.service - Load Security Auditing Rules. Jan 13 21:25:36.482817 sudo[1606]: pam_unix(sudo:session): session closed for user root Jan 13 21:25:36.484635 sshd[1603]: pam_unix(sshd:session): session closed for user core Jan 13 21:25:36.505094 systemd[1]: sshd@5-10.0.0.106:22-10.0.0.1:53028.service: Deactivated successfully. Jan 13 21:25:36.507413 systemd[1]: session-6.scope: Deactivated successfully. 
Jan 13 21:25:36.509343 systemd-logind[1446]: Session 6 logged out. Waiting for processes to exit. Jan 13 21:25:36.523001 systemd[1]: Started sshd@6-10.0.0.106:22-10.0.0.1:53036.service - OpenSSH per-connection server daemon (10.0.0.1:53036). Jan 13 21:25:36.523903 systemd-logind[1446]: Removed session 6. Jan 13 21:25:36.555449 sshd[1636]: Accepted publickey for core from 10.0.0.1 port 53036 ssh2: RSA SHA256:zXffdl8b9kLXcQDMnpblwTEmig+xqtREHhbtSBVXgEc Jan 13 21:25:36.556916 sshd[1636]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Jan 13 21:25:36.560584 systemd-logind[1446]: New session 7 of user core. Jan 13 21:25:36.569861 systemd[1]: Started session-7.scope - Session 7 of User core. Jan 13 21:25:36.621859 sudo[1639]: core : PWD=/home/core ; USER=root ; COMMAND=/home/core/install.sh Jan 13 21:25:36.622187 sudo[1639]: pam_unix(sudo:session): session opened for user root(uid=0) by core(uid=500) Jan 13 21:25:36.644011 systemd[1]: Starting coreos-metadata.service - QEMU metadata agent... Jan 13 21:25:36.661413 systemd[1]: coreos-metadata.service: Deactivated successfully. Jan 13 21:25:36.661643 systemd[1]: Finished coreos-metadata.service - QEMU metadata agent. Jan 13 21:25:37.515722 systemd[1]: Stopped kubelet.service - kubelet: The Kubernetes Node Agent. Jan 13 21:25:37.525961 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent... Jan 13 21:25:37.543448 systemd[1]: Reloading requested from client PID 1686 ('systemctl') (unit session-7.scope)... Jan 13 21:25:37.543466 systemd[1]: Reloading... Jan 13 21:25:37.624798 zram_generator::config[1724]: No configuration found. Jan 13 21:25:37.873244 systemd[1]: /usr/lib/systemd/system/docker.socket:6: ListenStream= references a path below legacy directory /var/run/, updating /var/run/docker.sock → /run/docker.sock; please update the unit file accordingly. Jan 13 21:25:37.952173 systemd[1]: Reloading finished in 408 ms. 
Jan 13 21:25:37.998077 systemd[1]: kubelet.service: Control process exited, code=killed, status=15/TERM Jan 13 21:25:37.998169 systemd[1]: kubelet.service: Failed with result 'signal'. Jan 13 21:25:37.998444 systemd[1]: Stopped kubelet.service - kubelet: The Kubernetes Node Agent. Jan 13 21:25:38.000114 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent... Jan 13 21:25:38.146220 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent. Jan 13 21:25:38.150354 (kubelet)[1772]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS Jan 13 21:25:38.189292 kubelet[1772]: Flag --container-runtime-endpoint has been deprecated, This parameter should be set via the config file specified by the Kubelet's --config flag. See https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/ for more information. Jan 13 21:25:38.189292 kubelet[1772]: Flag --pod-infra-container-image has been deprecated, will be removed in a future release. Image garbage collector will get sandbox image information from CRI. Jan 13 21:25:38.189292 kubelet[1772]: Flag --volume-plugin-dir has been deprecated, This parameter should be set via the config file specified by the Kubelet's --config flag. See https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/ for more information. 
Jan 13 21:25:38.189663 kubelet[1772]: I0113 21:25:38.189551 1772 server.go:205] "--pod-infra-container-image will not be pruned by the image garbage collector in kubelet and should also be set in the remote runtime" Jan 13 21:25:38.473803 kubelet[1772]: I0113 21:25:38.473675 1772 server.go:484] "Kubelet version" kubeletVersion="v1.30.1" Jan 13 21:25:38.473803 kubelet[1772]: I0113 21:25:38.473715 1772 server.go:486] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK="" Jan 13 21:25:38.474123 kubelet[1772]: I0113 21:25:38.474087 1772 server.go:927] "Client rotation is on, will bootstrap in background" Jan 13 21:25:38.490731 kubelet[1772]: I0113 21:25:38.490671 1772 dynamic_cafile_content.go:157] "Starting controller" name="client-ca-bundle::/etc/kubernetes/pki/ca.crt" Jan 13 21:25:38.502362 kubelet[1772]: I0113 21:25:38.502317 1772 server.go:742] "--cgroups-per-qos enabled, but --cgroup-root was not specified. defaulting to /" Jan 13 21:25:38.502582 kubelet[1772]: I0113 21:25:38.502537 1772 container_manager_linux.go:265] "Container manager verified user specified cgroup-root exists" cgroupRoot=[] Jan 13 21:25:38.502754 kubelet[1772]: I0113 21:25:38.502567 1772 container_manager_linux.go:270] "Creating Container Manager object based on Node Config" 
nodeConfig={"NodeName":"10.0.0.106","RuntimeCgroupsName":"","SystemCgroupsName":"","KubeletCgroupsName":"","KubeletOOMScoreAdj":-999,"ContainerRuntime":"","CgroupsPerQOS":true,"CgroupRoot":"/","CgroupDriver":"systemd","KubeletRootDir":"/var/lib/kubelet","ProtectKernelDefaults":false,"KubeReservedCgroupName":"","SystemReservedCgroupName":"","ReservedSystemCPUs":{},"EnforceNodeAllocatable":{"pods":{}},"KubeReserved":null,"SystemReserved":null,"HardEvictionThresholds":[{"Signal":"memory.available","Operator":"LessThan","Value":{"Quantity":"100Mi","Percentage":0},"GracePeriod":0,"MinReclaim":null},{"Signal":"nodefs.available","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.1},"GracePeriod":0,"MinReclaim":null},{"Signal":"nodefs.inodesFree","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.05},"GracePeriod":0,"MinReclaim":null},{"Signal":"imagefs.available","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.15},"GracePeriod":0,"MinReclaim":null},{"Signal":"imagefs.inodesFree","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.05},"GracePeriod":0,"MinReclaim":null}],"QOSReserved":{},"CPUManagerPolicy":"none","CPUManagerPolicyOptions":null,"TopologyManagerScope":"container","CPUManagerReconcilePeriod":10000000000,"ExperimentalMemoryManagerPolicy":"None","ExperimentalMemoryManagerReservedMemory":null,"PodPidsLimit":-1,"EnforceCPULimits":true,"CPUCFSQuotaPeriod":100000000,"TopologyManagerPolicy":"none","TopologyManagerPolicyOptions":null} Jan 13 21:25:38.503204 kubelet[1772]: I0113 21:25:38.503187 1772 topology_manager.go:138] "Creating topology manager with none policy" Jan 13 21:25:38.503204 kubelet[1772]: I0113 21:25:38.503203 1772 container_manager_linux.go:301] "Creating device plugin manager" Jan 13 21:25:38.503348 kubelet[1772]: I0113 21:25:38.503342 1772 state_mem.go:36] "Initialized new in-memory state store" Jan 13 21:25:38.503985 kubelet[1772]: I0113 21:25:38.503957 1772 kubelet.go:400] "Attempting to sync node with 
API server" Jan 13 21:25:38.503985 kubelet[1772]: I0113 21:25:38.503976 1772 kubelet.go:301] "Adding static pod path" path="/etc/kubernetes/manifests" Jan 13 21:25:38.504047 kubelet[1772]: I0113 21:25:38.503995 1772 kubelet.go:312] "Adding apiserver pod source" Jan 13 21:25:38.504047 kubelet[1772]: I0113 21:25:38.504013 1772 apiserver.go:42] "Waiting for node sync before watching apiserver pods" Jan 13 21:25:38.504148 kubelet[1772]: E0113 21:25:38.504120 1772 file.go:98] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Jan 13 21:25:38.504181 kubelet[1772]: E0113 21:25:38.504163 1772 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Jan 13 21:25:38.507931 kubelet[1772]: I0113 21:25:38.507904 1772 kuberuntime_manager.go:261] "Container runtime initialized" containerRuntime="containerd" version="v1.7.21" apiVersion="v1" Jan 13 21:25:38.508534 kubelet[1772]: W0113 21:25:38.508515 1772 reflector.go:547] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Node: nodes "10.0.0.106" is forbidden: User "system:anonymous" cannot list resource "nodes" in API group "" at the cluster scope Jan 13 21:25:38.508582 kubelet[1772]: E0113 21:25:38.508551 1772 reflector.go:150] k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Node: failed to list *v1.Node: nodes "10.0.0.106" is forbidden: User "system:anonymous" cannot list resource "nodes" in API group "" at the cluster scope Jan 13 21:25:38.508658 kubelet[1772]: W0113 21:25:38.508628 1772 reflector.go:547] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Service: services is forbidden: User "system:anonymous" cannot list resource "services" in API group "" at the cluster scope Jan 13 21:25:38.508730 kubelet[1772]: E0113 21:25:38.508670 1772 reflector.go:150] k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Service: failed to list *v1.Service: services is forbidden: User 
"system:anonymous" cannot list resource "services" in API group "" at the cluster scope Jan 13 21:25:38.509291 kubelet[1772]: I0113 21:25:38.509256 1772 kubelet.go:815] "Not starting ClusterTrustBundle informer because we are in static kubelet mode" Jan 13 21:25:38.509364 kubelet[1772]: W0113 21:25:38.509345 1772 probe.go:272] Flexvolume plugin directory at /opt/libexec/kubernetes/kubelet-plugins/volume/exec/ does not exist. Recreating. Jan 13 21:25:38.510201 kubelet[1772]: I0113 21:25:38.510015 1772 server.go:1264] "Started kubelet" Jan 13 21:25:38.510201 kubelet[1772]: I0113 21:25:38.510079 1772 server.go:163] "Starting to listen" address="0.0.0.0" port=10250 Jan 13 21:25:38.510963 kubelet[1772]: I0113 21:25:38.510374 1772 ratelimit.go:55] "Setting rate limiting for endpoint" service="podresources" qps=100 burstTokens=10 Jan 13 21:25:38.511301 kubelet[1772]: I0113 21:25:38.511086 1772 fs_resource_analyzer.go:67] "Starting FS ResourceAnalyzer" Jan 13 21:25:38.513844 kubelet[1772]: I0113 21:25:38.513818 1772 server.go:455] "Adding debug handlers to kubelet server" Jan 13 21:25:38.515148 kubelet[1772]: I0113 21:25:38.515126 1772 volume_manager.go:291] "Starting Kubelet Volume Manager" Jan 13 21:25:38.515218 kubelet[1772]: I0113 21:25:38.511161 1772 server.go:227] "Starting to serve the podresources API" endpoint="unix:/var/lib/kubelet/pod-resources/kubelet.sock" Jan 13 21:25:38.516109 kubelet[1772]: I0113 21:25:38.516076 1772 desired_state_of_world_populator.go:149] "Desired state populator starts to run" Jan 13 21:25:38.516215 kubelet[1772]: I0113 21:25:38.516183 1772 reconciler.go:26] "Reconciler: start to sync state" Jan 13 21:25:38.518729 kubelet[1772]: I0113 21:25:38.517412 1772 factory.go:221] Registration of the systemd container factory successfully Jan 13 21:25:38.518729 kubelet[1772]: I0113 21:25:38.517523 1772 factory.go:219] Registration of the crio container factory failed: Get "http://%2Fvar%2Frun%2Fcrio%2Fcrio.sock/info": dial unix 
/var/run/crio/crio.sock: connect: no such file or directory Jan 13 21:25:38.518729 kubelet[1772]: E0113 21:25:38.518321 1772 kubelet.go:1467] "Image garbage collection failed once. Stats initialization may not have completed yet" err="invalid capacity 0 on image filesystem" Jan 13 21:25:38.519103 kubelet[1772]: I0113 21:25:38.519086 1772 factory.go:221] Registration of the containerd container factory successfully Jan 13 21:25:38.526038 kubelet[1772]: E0113 21:25:38.525996 1772 nodelease.go:49] "Failed to get node when trying to set owner ref to the node lease" err="nodes \"10.0.0.106\" not found" node="10.0.0.106" Jan 13 21:25:38.530618 kubelet[1772]: I0113 21:25:38.530600 1772 cpu_manager.go:214] "Starting CPU manager" policy="none" Jan 13 21:25:38.530735 kubelet[1772]: I0113 21:25:38.530683 1772 cpu_manager.go:215] "Reconciling" reconcilePeriod="10s" Jan 13 21:25:38.530735 kubelet[1772]: I0113 21:25:38.530718 1772 state_mem.go:36] "Initialized new in-memory state store" Jan 13 21:25:38.616574 kubelet[1772]: I0113 21:25:38.616543 1772 kubelet_node_status.go:73] "Attempting to register node" node="10.0.0.106" Jan 13 21:25:38.678919 kubelet[1772]: I0113 21:25:38.678885 1772 kubelet_node_status.go:76] "Successfully registered node" node="10.0.0.106" Jan 13 21:25:38.745155 kubelet[1772]: E0113 21:25:38.745029 1772 kubelet_node_status.go:462] "Error getting the current node from lister" err="node \"10.0.0.106\" not found" Jan 13 21:25:38.844319 kubelet[1772]: I0113 21:25:38.844282 1772 policy_none.go:49] "None policy: Start" Jan 13 21:25:38.845096 kubelet[1772]: I0113 21:25:38.845070 1772 memory_manager.go:170] "Starting memorymanager" policy="None" Jan 13 21:25:38.845096 kubelet[1772]: E0113 21:25:38.845092 1772 kubelet_node_status.go:462] "Error getting the current node from lister" err="node \"10.0.0.106\" not found" Jan 13 21:25:38.845096 kubelet[1772]: I0113 21:25:38.845103 1772 state_mem.go:35] "Initializing new in-memory state store" Jan 13 21:25:38.852979 
systemd[1]: Created slice kubepods.slice - libcontainer container kubepods.slice. Jan 13 21:25:38.861494 sudo[1639]: pam_unix(sudo:session): session closed for user root Jan 13 21:25:38.863398 sshd[1636]: pam_unix(sshd:session): session closed for user core Jan 13 21:25:38.867993 systemd[1]: sshd@6-10.0.0.106:22-10.0.0.1:53036.service: Deactivated successfully. Jan 13 21:25:38.870784 systemd[1]: session-7.scope: Deactivated successfully. Jan 13 21:25:38.872670 systemd-logind[1446]: Session 7 logged out. Waiting for processes to exit. Jan 13 21:25:38.873575 systemd[1]: Created slice kubepods-burstable.slice - libcontainer container kubepods-burstable.slice. Jan 13 21:25:38.874435 kubelet[1772]: I0113 21:25:38.874358 1772 kubelet_network_linux.go:50] "Initialized iptables rules." protocol="IPv4" Jan 13 21:25:38.874455 systemd-logind[1446]: Removed session 7. Jan 13 21:25:38.875889 kubelet[1772]: I0113 21:25:38.875859 1772 kubelet_network_linux.go:50] "Initialized iptables rules." protocol="IPv6" Jan 13 21:25:38.875889 kubelet[1772]: I0113 21:25:38.875879 1772 status_manager.go:217] "Starting to sync pod status with apiserver" Jan 13 21:25:38.875889 kubelet[1772]: I0113 21:25:38.875896 1772 kubelet.go:2337] "Starting kubelet main sync loop" Jan 13 21:25:38.876025 kubelet[1772]: E0113 21:25:38.875931 1772 kubelet.go:2361] "Skipping pod synchronization" err="[container runtime status check may not have completed yet, PLEG is not healthy: pleg has yet to be successful]" Jan 13 21:25:38.878461 systemd[1]: Created slice kubepods-besteffort.slice - libcontainer container kubepods-besteffort.slice. 
Jan 13 21:25:38.890995 kubelet[1772]: I0113 21:25:38.890919 1772 manager.go:479] "Failed to read data from checkpoint" checkpoint="kubelet_internal_checkpoint" err="checkpoint is not found" Jan 13 21:25:38.891249 kubelet[1772]: I0113 21:25:38.891202 1772 container_log_manager.go:186] "Initializing container log rotate workers" workers=1 monitorPeriod="10s" Jan 13 21:25:38.891429 kubelet[1772]: I0113 21:25:38.891362 1772 plugin_manager.go:118] "Starting Kubelet Plugin Manager" Jan 13 21:25:38.892485 kubelet[1772]: E0113 21:25:38.892461 1772 eviction_manager.go:282] "Eviction manager: failed to get summary stats" err="failed to get node info: node \"10.0.0.106\" not found" Jan 13 21:25:38.945443 kubelet[1772]: E0113 21:25:38.945383 1772 kubelet_node_status.go:462] "Error getting the current node from lister" err="node \"10.0.0.106\" not found" Jan 13 21:25:39.045887 kubelet[1772]: E0113 21:25:39.045712 1772 kubelet_node_status.go:462] "Error getting the current node from lister" err="node \"10.0.0.106\" not found" Jan 13 21:25:39.146536 kubelet[1772]: E0113 21:25:39.146467 1772 kubelet_node_status.go:462] "Error getting the current node from lister" err="node \"10.0.0.106\" not found" Jan 13 21:25:39.247273 kubelet[1772]: E0113 21:25:39.247201 1772 kubelet_node_status.go:462] "Error getting the current node from lister" err="node \"10.0.0.106\" not found" Jan 13 21:25:39.348934 kubelet[1772]: I0113 21:25:39.348809 1772 kuberuntime_manager.go:1523] "Updating runtime config through cri with podcidr" CIDR="192.168.1.0/24" Jan 13 21:25:39.349145 containerd[1461]: time="2025-01-13T21:25:39.349088471Z" level=info msg="No cni config template is specified, wait for other system components to drop the config." 
Jan 13 21:25:39.349616 kubelet[1772]: I0113 21:25:39.349242 1772 kubelet_network.go:61] "Updating Pod CIDR" originalPodCIDR="" newPodCIDR="192.168.1.0/24" Jan 13 21:25:39.476455 kubelet[1772]: I0113 21:25:39.476396 1772 transport.go:147] "Certificate rotation detected, shutting down client connections to start using new credentials" Jan 13 21:25:39.476611 kubelet[1772]: W0113 21:25:39.476575 1772 reflector.go:470] k8s.io/client-go/informers/factory.go:160: watch of *v1.CSIDriver ended with: very short watch: k8s.io/client-go/informers/factory.go:160: Unexpected watch close - watch lasted less than a second and no items received Jan 13 21:25:39.476654 kubelet[1772]: W0113 21:25:39.476619 1772 reflector.go:470] k8s.io/client-go/informers/factory.go:160: watch of *v1.RuntimeClass ended with: very short watch: k8s.io/client-go/informers/factory.go:160: Unexpected watch close - watch lasted less than a second and no items received Jan 13 21:25:39.476722 kubelet[1772]: W0113 21:25:39.476669 1772 reflector.go:470] k8s.io/client-go/informers/factory.go:160: watch of *v1.Node ended with: very short watch: k8s.io/client-go/informers/factory.go:160: Unexpected watch close - watch lasted less than a second and no items received Jan 13 21:25:39.504832 kubelet[1772]: I0113 21:25:39.504789 1772 apiserver.go:52] "Watching apiserver" Jan 13 21:25:39.504871 kubelet[1772]: E0113 21:25:39.504828 1772 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Jan 13 21:25:39.511782 kubelet[1772]: I0113 21:25:39.511736 1772 topology_manager.go:215] "Topology Admit Handler" podUID="9cd07f28-31cd-4201-b6e0-a2b6c24f55bd" podNamespace="kube-system" podName="cilium-kch9h" Jan 13 21:25:39.511889 kubelet[1772]: I0113 21:25:39.511868 1772 topology_manager.go:215] "Topology Admit Handler" podUID="99518aff-46c2-4494-9748-6656a91c8c24" podNamespace="kube-system" podName="kube-proxy-jtfpw" Jan 13 21:25:39.517546 kubelet[1772]: I0113 
21:25:39.517515 1772 desired_state_of_world_populator.go:157] "Finished populating initial desired state of world" Jan 13 21:25:39.518088 systemd[1]: Created slice kubepods-besteffort-pod99518aff_46c2_4494_9748_6656a91c8c24.slice - libcontainer container kubepods-besteffort-pod99518aff_46c2_4494_9748_6656a91c8c24.slice. Jan 13 21:25:39.522180 kubelet[1772]: I0113 21:25:39.522152 1772 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"xtables-lock\" (UniqueName: \"kubernetes.io/host-path/9cd07f28-31cd-4201-b6e0-a2b6c24f55bd-xtables-lock\") pod \"cilium-kch9h\" (UID: \"9cd07f28-31cd-4201-b6e0-a2b6c24f55bd\") " pod="kube-system/cilium-kch9h" Jan 13 21:25:39.522232 kubelet[1772]: I0113 21:25:39.522183 1772 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"xtables-lock\" (UniqueName: \"kubernetes.io/host-path/99518aff-46c2-4494-9748-6656a91c8c24-xtables-lock\") pod \"kube-proxy-jtfpw\" (UID: \"99518aff-46c2-4494-9748-6656a91c8c24\") " pod="kube-system/kube-proxy-jtfpw" Jan 13 21:25:39.522232 kubelet[1772]: I0113 21:25:39.522200 1772 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-jh6gn\" (UniqueName: \"kubernetes.io/projected/99518aff-46c2-4494-9748-6656a91c8c24-kube-api-access-jh6gn\") pod \"kube-proxy-jtfpw\" (UID: \"99518aff-46c2-4494-9748-6656a91c8c24\") " pod="kube-system/kube-proxy-jtfpw" Jan 13 21:25:39.522232 kubelet[1772]: I0113 21:25:39.522216 1772 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"etc-cni-netd\" (UniqueName: \"kubernetes.io/host-path/9cd07f28-31cd-4201-b6e0-a2b6c24f55bd-etc-cni-netd\") pod \"cilium-kch9h\" (UID: \"9cd07f28-31cd-4201-b6e0-a2b6c24f55bd\") " pod="kube-system/cilium-kch9h" Jan 13 21:25:39.522333 kubelet[1772]: I0113 21:25:39.522235 1772 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume 
\"cilium-cgroup\" (UniqueName: \"kubernetes.io/host-path/9cd07f28-31cd-4201-b6e0-a2b6c24f55bd-cilium-cgroup\") pod \"cilium-kch9h\" (UID: \"9cd07f28-31cd-4201-b6e0-a2b6c24f55bd\") " pod="kube-system/cilium-kch9h" Jan 13 21:25:39.522333 kubelet[1772]: I0113 21:25:39.522256 1772 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cilium-config-path\" (UniqueName: \"kubernetes.io/configmap/9cd07f28-31cd-4201-b6e0-a2b6c24f55bd-cilium-config-path\") pod \"cilium-kch9h\" (UID: \"9cd07f28-31cd-4201-b6e0-a2b6c24f55bd\") " pod="kube-system/cilium-kch9h" Jan 13 21:25:39.522333 kubelet[1772]: I0113 21:25:39.522288 1772 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-proc-sys-net\" (UniqueName: \"kubernetes.io/host-path/9cd07f28-31cd-4201-b6e0-a2b6c24f55bd-host-proc-sys-net\") pod \"cilium-kch9h\" (UID: \"9cd07f28-31cd-4201-b6e0-a2b6c24f55bd\") " pod="kube-system/cilium-kch9h" Jan 13 21:25:39.522333 kubelet[1772]: I0113 21:25:39.522302 1772 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-proxy\" (UniqueName: \"kubernetes.io/configmap/99518aff-46c2-4494-9748-6656a91c8c24-kube-proxy\") pod \"kube-proxy-jtfpw\" (UID: \"99518aff-46c2-4494-9748-6656a91c8c24\") " pod="kube-system/kube-proxy-jtfpw" Jan 13 21:25:39.522333 kubelet[1772]: I0113 21:25:39.522316 1772 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"hostproc\" (UniqueName: \"kubernetes.io/host-path/9cd07f28-31cd-4201-b6e0-a2b6c24f55bd-hostproc\") pod \"cilium-kch9h\" (UID: \"9cd07f28-31cd-4201-b6e0-a2b6c24f55bd\") " pod="kube-system/cilium-kch9h" Jan 13 21:25:39.522483 kubelet[1772]: I0113 21:25:39.522334 1772 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-r4wh5\" (UniqueName: 
\"kubernetes.io/projected/9cd07f28-31cd-4201-b6e0-a2b6c24f55bd-kube-api-access-r4wh5\") pod \"cilium-kch9h\" (UID: \"9cd07f28-31cd-4201-b6e0-a2b6c24f55bd\") " pod="kube-system/cilium-kch9h" Jan 13 21:25:39.522483 kubelet[1772]: I0113 21:25:39.522353 1772 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/99518aff-46c2-4494-9748-6656a91c8c24-lib-modules\") pod \"kube-proxy-jtfpw\" (UID: \"99518aff-46c2-4494-9748-6656a91c8c24\") " pod="kube-system/kube-proxy-jtfpw" Jan 13 21:25:39.522483 kubelet[1772]: I0113 21:25:39.522372 1772 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"clustermesh-secrets\" (UniqueName: \"kubernetes.io/secret/9cd07f28-31cd-4201-b6e0-a2b6c24f55bd-clustermesh-secrets\") pod \"cilium-kch9h\" (UID: \"9cd07f28-31cd-4201-b6e0-a2b6c24f55bd\") " pod="kube-system/cilium-kch9h" Jan 13 21:25:39.522483 kubelet[1772]: I0113 21:25:39.522390 1772 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"bpf-maps\" (UniqueName: \"kubernetes.io/host-path/9cd07f28-31cd-4201-b6e0-a2b6c24f55bd-bpf-maps\") pod \"cilium-kch9h\" (UID: \"9cd07f28-31cd-4201-b6e0-a2b6c24f55bd\") " pod="kube-system/cilium-kch9h" Jan 13 21:25:39.522483 kubelet[1772]: I0113 21:25:39.522408 1772 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cni-path\" (UniqueName: \"kubernetes.io/host-path/9cd07f28-31cd-4201-b6e0-a2b6c24f55bd-cni-path\") pod \"cilium-kch9h\" (UID: \"9cd07f28-31cd-4201-b6e0-a2b6c24f55bd\") " pod="kube-system/cilium-kch9h" Jan 13 21:25:39.522483 kubelet[1772]: I0113 21:25:39.522426 1772 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/9cd07f28-31cd-4201-b6e0-a2b6c24f55bd-lib-modules\") pod \"cilium-kch9h\" (UID: 
\"9cd07f28-31cd-4201-b6e0-a2b6c24f55bd\") " pod="kube-system/cilium-kch9h" Jan 13 21:25:39.522651 kubelet[1772]: I0113 21:25:39.522446 1772 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-proc-sys-kernel\" (UniqueName: \"kubernetes.io/host-path/9cd07f28-31cd-4201-b6e0-a2b6c24f55bd-host-proc-sys-kernel\") pod \"cilium-kch9h\" (UID: \"9cd07f28-31cd-4201-b6e0-a2b6c24f55bd\") " pod="kube-system/cilium-kch9h" Jan 13 21:25:39.522651 kubelet[1772]: I0113 21:25:39.522463 1772 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"hubble-tls\" (UniqueName: \"kubernetes.io/projected/9cd07f28-31cd-4201-b6e0-a2b6c24f55bd-hubble-tls\") pod \"cilium-kch9h\" (UID: \"9cd07f28-31cd-4201-b6e0-a2b6c24f55bd\") " pod="kube-system/cilium-kch9h" Jan 13 21:25:39.522651 kubelet[1772]: I0113 21:25:39.522495 1772 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cilium-run\" (UniqueName: \"kubernetes.io/host-path/9cd07f28-31cd-4201-b6e0-a2b6c24f55bd-cilium-run\") pod \"cilium-kch9h\" (UID: \"9cd07f28-31cd-4201-b6e0-a2b6c24f55bd\") " pod="kube-system/cilium-kch9h" Jan 13 21:25:39.528456 systemd[1]: Created slice kubepods-burstable-pod9cd07f28_31cd_4201_b6e0_a2b6c24f55bd.slice - libcontainer container kubepods-burstable-pod9cd07f28_31cd_4201_b6e0_a2b6c24f55bd.slice. 
Jan 13 21:25:39.827106 kubelet[1772]: E0113 21:25:39.826959 1772 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jan 13 21:25:39.827710 containerd[1461]: time="2025-01-13T21:25:39.827665324Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-proxy-jtfpw,Uid:99518aff-46c2-4494-9748-6656a91c8c24,Namespace:kube-system,Attempt:0,}" Jan 13 21:25:39.839266 kubelet[1772]: E0113 21:25:39.839229 1772 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jan 13 21:25:39.839596 containerd[1461]: time="2025-01-13T21:25:39.839563718Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:cilium-kch9h,Uid:9cd07f28-31cd-4201-b6e0-a2b6c24f55bd,Namespace:kube-system,Attempt:0,}" Jan 13 21:25:40.459821 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount2911187633.mount: Deactivated successfully. 
Jan 13 21:25:40.470074 containerd[1461]: time="2025-01-13T21:25:40.469982077Z" level=info msg="ImageCreate event name:\"registry.k8s.io/pause:3.8\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"} labels:{key:\"io.cri-containerd.pinned\" value:\"pinned\"}" Jan 13 21:25:40.471951 containerd[1461]: time="2025-01-13T21:25:40.471893291Z" level=info msg="stop pulling image registry.k8s.io/pause:3.8: active requests=0, bytes read=312056" Jan 13 21:25:40.473881 containerd[1461]: time="2025-01-13T21:25:40.473829132Z" level=info msg="ImageUpdate event name:\"registry.k8s.io/pause:3.8\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"} labels:{key:\"io.cri-containerd.pinned\" value:\"pinned\"}" Jan 13 21:25:40.475014 containerd[1461]: time="2025-01-13T21:25:40.474961004Z" level=info msg="ImageCreate event name:\"sha256:4873874c08efc72e9729683a83ffbb7502ee729e9a5ac097723806ea7fa13517\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"} labels:{key:\"io.cri-containerd.pinned\" value:\"pinned\"}" Jan 13 21:25:40.475478 containerd[1461]: time="2025-01-13T21:25:40.475430174Z" level=info msg="stop pulling image registry.k8s.io/pause:3.8: active requests=0, bytes read=0" Jan 13 21:25:40.477917 containerd[1461]: time="2025-01-13T21:25:40.477873596Z" level=info msg="ImageCreate event name:\"registry.k8s.io/pause@sha256:9001185023633d17a2f98ff69b6ff2615b8ea02a825adffa40422f51dfdcde9d\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"} labels:{key:\"io.cri-containerd.pinned\" value:\"pinned\"}" Jan 13 21:25:40.480182 containerd[1461]: time="2025-01-13T21:25:40.480125690Z" level=info msg="Pulled image \"registry.k8s.io/pause:3.8\" with image id \"sha256:4873874c08efc72e9729683a83ffbb7502ee729e9a5ac097723806ea7fa13517\", repo tag \"registry.k8s.io/pause:3.8\", repo digest \"registry.k8s.io/pause@sha256:9001185023633d17a2f98ff69b6ff2615b8ea02a825adffa40422f51dfdcde9d\", size \"311286\" in 640.493023ms" Jan 13 21:25:40.480864 containerd[1461]: 
time="2025-01-13T21:25:40.480823128Z" level=info msg="Pulled image \"registry.k8s.io/pause:3.8\" with image id \"sha256:4873874c08efc72e9729683a83ffbb7502ee729e9a5ac097723806ea7fa13517\", repo tag \"registry.k8s.io/pause:3.8\", repo digest \"registry.k8s.io/pause@sha256:9001185023633d17a2f98ff69b6ff2615b8ea02a825adffa40422f51dfdcde9d\", size \"311286\" in 653.048158ms" Jan 13 21:25:40.505087 kubelet[1772]: E0113 21:25:40.505044 1772 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Jan 13 21:25:40.591973 containerd[1461]: time="2025-01-13T21:25:40.591828182Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Jan 13 21:25:40.591973 containerd[1461]: time="2025-01-13T21:25:40.591923240Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Jan 13 21:25:40.591973 containerd[1461]: time="2025-01-13T21:25:40.591941605Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Jan 13 21:25:40.592273 containerd[1461]: time="2025-01-13T21:25:40.592168109Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Jan 13 21:25:40.592891 containerd[1461]: time="2025-01-13T21:25:40.592406647Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Jan 13 21:25:40.592891 containerd[1461]: time="2025-01-13T21:25:40.592467020Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Jan 13 21:25:40.592891 containerd[1461]: time="2025-01-13T21:25:40.592480696Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." 
runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Jan 13 21:25:40.592891 containerd[1461]: time="2025-01-13T21:25:40.592578008Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Jan 13 21:25:40.666898 systemd[1]: Started cri-containerd-1dc0ed26dc9a4d12b8a29f5314562ffa135d8574fef0e7319ae07a0da53dcd08.scope - libcontainer container 1dc0ed26dc9a4d12b8a29f5314562ffa135d8574fef0e7319ae07a0da53dcd08. Jan 13 21:25:40.669557 systemd[1]: Started cri-containerd-68e548d2a42e7b2c422fd77d27492d0caa256f29f29bfb71c24578b38109003d.scope - libcontainer container 68e548d2a42e7b2c422fd77d27492d0caa256f29f29bfb71c24578b38109003d. Jan 13 21:25:40.693280 containerd[1461]: time="2025-01-13T21:25:40.693127655Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:cilium-kch9h,Uid:9cd07f28-31cd-4201-b6e0-a2b6c24f55bd,Namespace:kube-system,Attempt:0,} returns sandbox id \"1dc0ed26dc9a4d12b8a29f5314562ffa135d8574fef0e7319ae07a0da53dcd08\"" Jan 13 21:25:40.694069 kubelet[1772]: E0113 21:25:40.693950 1772 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jan 13 21:25:40.695995 containerd[1461]: time="2025-01-13T21:25:40.695641679Z" level=info msg="PullImage \"quay.io/cilium/cilium:v1.12.5@sha256:06ce2b0a0a472e73334a7504ee5c5d8b2e2d7b72ef728ad94e564740dd505be5\"" Jan 13 21:25:40.700653 containerd[1461]: time="2025-01-13T21:25:40.700607041Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-proxy-jtfpw,Uid:99518aff-46c2-4494-9748-6656a91c8c24,Namespace:kube-system,Attempt:0,} returns sandbox id \"68e548d2a42e7b2c422fd77d27492d0caa256f29f29bfb71c24578b38109003d\"" Jan 13 21:25:40.701260 kubelet[1772]: E0113 21:25:40.701231 1772 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 
1.1.1.1 1.0.0.1 8.8.8.8" Jan 13 21:25:41.506140 kubelet[1772]: E0113 21:25:41.506083 1772 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Jan 13 21:25:42.507199 kubelet[1772]: E0113 21:25:42.507140 1772 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Jan 13 21:25:43.507427 kubelet[1772]: E0113 21:25:43.507342 1772 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Jan 13 21:25:44.508439 kubelet[1772]: E0113 21:25:44.508379 1772 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Jan 13 21:25:45.509427 kubelet[1772]: E0113 21:25:45.509351 1772 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Jan 13 21:25:46.510400 kubelet[1772]: E0113 21:25:46.510364 1772 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Jan 13 21:25:47.318886 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount2742016797.mount: Deactivated successfully. 
Jan 13 21:25:47.511119 kubelet[1772]: E0113 21:25:47.511070 1772 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Jan 13 21:25:48.511289 kubelet[1772]: E0113 21:25:48.511245 1772 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Jan 13 21:25:49.511900 kubelet[1772]: E0113 21:25:49.511839 1772 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Jan 13 21:25:49.821764 containerd[1461]: time="2025-01-13T21:25:49.821622651Z" level=info msg="ImageCreate event name:\"quay.io/cilium/cilium@sha256:06ce2b0a0a472e73334a7504ee5c5d8b2e2d7b72ef728ad94e564740dd505be5\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jan 13 21:25:49.822695 containerd[1461]: time="2025-01-13T21:25:49.822656991Z" level=info msg="stop pulling image quay.io/cilium/cilium@sha256:06ce2b0a0a472e73334a7504ee5c5d8b2e2d7b72ef728ad94e564740dd505be5: active requests=0, bytes read=166733523" Jan 13 21:25:49.824509 containerd[1461]: time="2025-01-13T21:25:49.824482544Z" level=info msg="ImageCreate event name:\"sha256:3e35b3e9f295e7748482d40ed499b0ff7961f1f128d479d8e6682b3245bba69b\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jan 13 21:25:49.826142 containerd[1461]: time="2025-01-13T21:25:49.826100258Z" level=info msg="Pulled image \"quay.io/cilium/cilium:v1.12.5@sha256:06ce2b0a0a472e73334a7504ee5c5d8b2e2d7b72ef728ad94e564740dd505be5\" with image id \"sha256:3e35b3e9f295e7748482d40ed499b0ff7961f1f128d479d8e6682b3245bba69b\", repo tag \"\", repo digest \"quay.io/cilium/cilium@sha256:06ce2b0a0a472e73334a7504ee5c5d8b2e2d7b72ef728ad94e564740dd505be5\", size \"166719855\" in 9.130417462s" Jan 13 21:25:49.826188 containerd[1461]: time="2025-01-13T21:25:49.826137388Z" level=info msg="PullImage 
\"quay.io/cilium/cilium:v1.12.5@sha256:06ce2b0a0a472e73334a7504ee5c5d8b2e2d7b72ef728ad94e564740dd505be5\" returns image reference \"sha256:3e35b3e9f295e7748482d40ed499b0ff7961f1f128d479d8e6682b3245bba69b\"" Jan 13 21:25:49.827817 containerd[1461]: time="2025-01-13T21:25:49.827796589Z" level=info msg="PullImage \"registry.k8s.io/kube-proxy:v1.30.8\"" Jan 13 21:25:49.828849 containerd[1461]: time="2025-01-13T21:25:49.828821291Z" level=info msg="CreateContainer within sandbox \"1dc0ed26dc9a4d12b8a29f5314562ffa135d8574fef0e7319ae07a0da53dcd08\" for container &ContainerMetadata{Name:mount-cgroup,Attempt:0,}" Jan 13 21:25:49.843511 containerd[1461]: time="2025-01-13T21:25:49.843457058Z" level=info msg="CreateContainer within sandbox \"1dc0ed26dc9a4d12b8a29f5314562ffa135d8574fef0e7319ae07a0da53dcd08\" for &ContainerMetadata{Name:mount-cgroup,Attempt:0,} returns container id \"5bf1228addb648ea8d20b3dbdc8d62513f4565f42db8f5a32ee537674f7ab57b\"" Jan 13 21:25:49.844111 containerd[1461]: time="2025-01-13T21:25:49.844080458Z" level=info msg="StartContainer for \"5bf1228addb648ea8d20b3dbdc8d62513f4565f42db8f5a32ee537674f7ab57b\"" Jan 13 21:25:49.877878 systemd[1]: Started cri-containerd-5bf1228addb648ea8d20b3dbdc8d62513f4565f42db8f5a32ee537674f7ab57b.scope - libcontainer container 5bf1228addb648ea8d20b3dbdc8d62513f4565f42db8f5a32ee537674f7ab57b. Jan 13 21:25:49.903778 containerd[1461]: time="2025-01-13T21:25:49.903677644Z" level=info msg="StartContainer for \"5bf1228addb648ea8d20b3dbdc8d62513f4565f42db8f5a32ee537674f7ab57b\" returns successfully" Jan 13 21:25:49.917495 systemd[1]: cri-containerd-5bf1228addb648ea8d20b3dbdc8d62513f4565f42db8f5a32ee537674f7ab57b.scope: Deactivated successfully. 
Jan 13 21:25:50.475476 containerd[1461]: time="2025-01-13T21:25:50.475421337Z" level=info msg="shim disconnected" id=5bf1228addb648ea8d20b3dbdc8d62513f4565f42db8f5a32ee537674f7ab57b namespace=k8s.io Jan 13 21:25:50.475476 containerd[1461]: time="2025-01-13T21:25:50.475473124Z" level=warning msg="cleaning up after shim disconnected" id=5bf1228addb648ea8d20b3dbdc8d62513f4565f42db8f5a32ee537674f7ab57b namespace=k8s.io Jan 13 21:25:50.475476 containerd[1461]: time="2025-01-13T21:25:50.475483844Z" level=info msg="cleaning up dead shim" namespace=k8s.io Jan 13 21:25:50.512268 kubelet[1772]: E0113 21:25:50.512232 1772 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Jan 13 21:25:50.839038 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-5bf1228addb648ea8d20b3dbdc8d62513f4565f42db8f5a32ee537674f7ab57b-rootfs.mount: Deactivated successfully. Jan 13 21:25:50.901117 kubelet[1772]: E0113 21:25:50.901078 1772 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jan 13 21:25:50.904778 containerd[1461]: time="2025-01-13T21:25:50.904725926Z" level=info msg="CreateContainer within sandbox \"1dc0ed26dc9a4d12b8a29f5314562ffa135d8574fef0e7319ae07a0da53dcd08\" for container &ContainerMetadata{Name:apply-sysctl-overwrites,Attempt:0,}" Jan 13 21:25:50.925681 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount864238969.mount: Deactivated successfully. 
Jan 13 21:25:50.936754 containerd[1461]: time="2025-01-13T21:25:50.936701592Z" level=info msg="CreateContainer within sandbox \"1dc0ed26dc9a4d12b8a29f5314562ffa135d8574fef0e7319ae07a0da53dcd08\" for &ContainerMetadata{Name:apply-sysctl-overwrites,Attempt:0,} returns container id \"d56cb87e5fc573af8d8d09072cfb1d0fb6b77ed00e2606d92ff087207d30ab16\"" Jan 13 21:25:50.937781 containerd[1461]: time="2025-01-13T21:25:50.937728157Z" level=info msg="StartContainer for \"d56cb87e5fc573af8d8d09072cfb1d0fb6b77ed00e2606d92ff087207d30ab16\"" Jan 13 21:25:50.967983 systemd[1]: Started cri-containerd-d56cb87e5fc573af8d8d09072cfb1d0fb6b77ed00e2606d92ff087207d30ab16.scope - libcontainer container d56cb87e5fc573af8d8d09072cfb1d0fb6b77ed00e2606d92ff087207d30ab16. Jan 13 21:25:51.018975 systemd[1]: systemd-sysctl.service: Deactivated successfully. Jan 13 21:25:51.019431 systemd[1]: Stopped systemd-sysctl.service - Apply Kernel Variables. Jan 13 21:25:51.019518 systemd[1]: Stopping systemd-sysctl.service - Apply Kernel Variables... Jan 13 21:25:51.029539 containerd[1461]: time="2025-01-13T21:25:51.029493520Z" level=info msg="StartContainer for \"d56cb87e5fc573af8d8d09072cfb1d0fb6b77ed00e2606d92ff087207d30ab16\" returns successfully" Jan 13 21:25:51.030309 systemd[1]: Starting systemd-sysctl.service - Apply Kernel Variables... Jan 13 21:25:51.030573 systemd[1]: cri-containerd-d56cb87e5fc573af8d8d09072cfb1d0fb6b77ed00e2606d92ff087207d30ab16.scope: Deactivated successfully. Jan 13 21:25:51.056495 systemd[1]: Finished systemd-sysctl.service - Apply Kernel Variables. 
Jan 13 21:25:51.097870 containerd[1461]: time="2025-01-13T21:25:51.097721252Z" level=info msg="shim disconnected" id=d56cb87e5fc573af8d8d09072cfb1d0fb6b77ed00e2606d92ff087207d30ab16 namespace=k8s.io Jan 13 21:25:51.097870 containerd[1461]: time="2025-01-13T21:25:51.097795982Z" level=warning msg="cleaning up after shim disconnected" id=d56cb87e5fc573af8d8d09072cfb1d0fb6b77ed00e2606d92ff087207d30ab16 namespace=k8s.io Jan 13 21:25:51.097870 containerd[1461]: time="2025-01-13T21:25:51.097804748Z" level=info msg="cleaning up dead shim" namespace=k8s.io Jan 13 21:25:51.512994 kubelet[1772]: E0113 21:25:51.512862 1772 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Jan 13 21:25:51.904418 kubelet[1772]: E0113 21:25:51.904287 1772 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jan 13 21:25:51.907022 containerd[1461]: time="2025-01-13T21:25:51.906982836Z" level=info msg="CreateContainer within sandbox \"1dc0ed26dc9a4d12b8a29f5314562ffa135d8574fef0e7319ae07a0da53dcd08\" for container &ContainerMetadata{Name:mount-bpf-fs,Attempt:0,}" Jan 13 21:25:51.928929 containerd[1461]: time="2025-01-13T21:25:51.928878919Z" level=info msg="CreateContainer within sandbox \"1dc0ed26dc9a4d12b8a29f5314562ffa135d8574fef0e7319ae07a0da53dcd08\" for &ContainerMetadata{Name:mount-bpf-fs,Attempt:0,} returns container id \"c237ca27fe713c51c9d2408dbf5a8e9207bc5a88a1063cbd46af5b3ac30208b0\"" Jan 13 21:25:51.929389 containerd[1461]: time="2025-01-13T21:25:51.929367786Z" level=info msg="StartContainer for \"c237ca27fe713c51c9d2408dbf5a8e9207bc5a88a1063cbd46af5b3ac30208b0\"" Jan 13 21:25:51.936053 containerd[1461]: time="2025-01-13T21:25:51.936002368Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-proxy:v1.30.8\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jan 13 21:25:51.936779 
containerd[1461]: time="2025-01-13T21:25:51.936613594Z" level=info msg="stop pulling image registry.k8s.io/kube-proxy:v1.30.8: active requests=0, bytes read=29057470" Jan 13 21:25:51.937823 containerd[1461]: time="2025-01-13T21:25:51.937798516Z" level=info msg="ImageCreate event name:\"sha256:ce61fda67eb41cf09d2b984e7979e289b5042e3983ddfc67be678425632cc0d2\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jan 13 21:25:51.940018 containerd[1461]: time="2025-01-13T21:25:51.939757099Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-proxy@sha256:f6d6be9417e22af78905000ac4fd134896bacd2188ea63c7cac8edd7a5d7e9b5\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jan 13 21:25:51.941902 containerd[1461]: time="2025-01-13T21:25:51.940925951Z" level=info msg="Pulled image \"registry.k8s.io/kube-proxy:v1.30.8\" with image id \"sha256:ce61fda67eb41cf09d2b984e7979e289b5042e3983ddfc67be678425632cc0d2\", repo tag \"registry.k8s.io/kube-proxy:v1.30.8\", repo digest \"registry.k8s.io/kube-proxy@sha256:f6d6be9417e22af78905000ac4fd134896bacd2188ea63c7cac8edd7a5d7e9b5\", size \"29056489\" in 2.113098264s" Jan 13 21:25:51.941902 containerd[1461]: time="2025-01-13T21:25:51.941817233Z" level=info msg="PullImage \"registry.k8s.io/kube-proxy:v1.30.8\" returns image reference \"sha256:ce61fda67eb41cf09d2b984e7979e289b5042e3983ddfc67be678425632cc0d2\"" Jan 13 21:25:51.947338 containerd[1461]: time="2025-01-13T21:25:51.947288624Z" level=info msg="CreateContainer within sandbox \"68e548d2a42e7b2c422fd77d27492d0caa256f29f29bfb71c24578b38109003d\" for container &ContainerMetadata{Name:kube-proxy,Attempt:0,}" Jan 13 21:25:51.969309 containerd[1461]: time="2025-01-13T21:25:51.969266520Z" level=info msg="CreateContainer within sandbox \"68e548d2a42e7b2c422fd77d27492d0caa256f29f29bfb71c24578b38109003d\" for &ContainerMetadata{Name:kube-proxy,Attempt:0,} returns container id \"4680a50bfdc58255c654744265ec48288e71dcfec0aff027b4b7a5207a6f9104\"" Jan 13 
21:25:51.969617 containerd[1461]: time="2025-01-13T21:25:51.969596689Z" level=info msg="StartContainer for \"4680a50bfdc58255c654744265ec48288e71dcfec0aff027b4b7a5207a6f9104\"" Jan 13 21:25:51.969883 systemd[1]: Started cri-containerd-c237ca27fe713c51c9d2408dbf5a8e9207bc5a88a1063cbd46af5b3ac30208b0.scope - libcontainer container c237ca27fe713c51c9d2408dbf5a8e9207bc5a88a1063cbd46af5b3ac30208b0. Jan 13 21:25:52.018989 systemd[1]: Started cri-containerd-4680a50bfdc58255c654744265ec48288e71dcfec0aff027b4b7a5207a6f9104.scope - libcontainer container 4680a50bfdc58255c654744265ec48288e71dcfec0aff027b4b7a5207a6f9104. Jan 13 21:25:52.031235 systemd[1]: cri-containerd-c237ca27fe713c51c9d2408dbf5a8e9207bc5a88a1063cbd46af5b3ac30208b0.scope: Deactivated successfully. Jan 13 21:25:52.033474 containerd[1461]: time="2025-01-13T21:25:52.032902771Z" level=info msg="StartContainer for \"c237ca27fe713c51c9d2408dbf5a8e9207bc5a88a1063cbd46af5b3ac30208b0\" returns successfully" Jan 13 21:25:52.052828 containerd[1461]: time="2025-01-13T21:25:52.052720696Z" level=info msg="StartContainer for \"4680a50bfdc58255c654744265ec48288e71dcfec0aff027b4b7a5207a6f9104\" returns successfully" Jan 13 21:25:52.390588 containerd[1461]: time="2025-01-13T21:25:52.390409413Z" level=info msg="shim disconnected" id=c237ca27fe713c51c9d2408dbf5a8e9207bc5a88a1063cbd46af5b3ac30208b0 namespace=k8s.io Jan 13 21:25:52.390588 containerd[1461]: time="2025-01-13T21:25:52.390468995Z" level=warning msg="cleaning up after shim disconnected" id=c237ca27fe713c51c9d2408dbf5a8e9207bc5a88a1063cbd46af5b3ac30208b0 namespace=k8s.io Jan 13 21:25:52.390588 containerd[1461]: time="2025-01-13T21:25:52.390482280Z" level=info msg="cleaning up dead shim" namespace=k8s.io Jan 13 21:25:52.513880 kubelet[1772]: E0113 21:25:52.513829 1772 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Jan 13 21:25:52.840109 systemd[1]: 
run-containerd-io.containerd.runtime.v2.task-k8s.io-c237ca27fe713c51c9d2408dbf5a8e9207bc5a88a1063cbd46af5b3ac30208b0-rootfs.mount: Deactivated successfully. Jan 13 21:25:52.906933 kubelet[1772]: E0113 21:25:52.906898 1772 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jan 13 21:25:52.908466 kubelet[1772]: E0113 21:25:52.908436 1772 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jan 13 21:25:52.910130 containerd[1461]: time="2025-01-13T21:25:52.910099939Z" level=info msg="CreateContainer within sandbox \"1dc0ed26dc9a4d12b8a29f5314562ffa135d8574fef0e7319ae07a0da53dcd08\" for container &ContainerMetadata{Name:clean-cilium-state,Attempt:0,}" Jan 13 21:25:52.961680 kubelet[1772]: I0113 21:25:52.961596 1772 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/kube-proxy-jtfpw" podStartSLOduration=3.720195303 podStartE2EDuration="14.961579281s" podCreationTimestamp="2025-01-13 21:25:38 +0000 UTC" firstStartedPulling="2025-01-13 21:25:40.701652231 +0000 UTC m=+2.547659261" lastFinishedPulling="2025-01-13 21:25:51.943036209 +0000 UTC m=+13.789043239" observedRunningTime="2025-01-13 21:25:52.946679538 +0000 UTC m=+14.792686578" watchObservedRunningTime="2025-01-13 21:25:52.961579281 +0000 UTC m=+14.807586311" Jan 13 21:25:52.963773 containerd[1461]: time="2025-01-13T21:25:52.963709486Z" level=info msg="CreateContainer within sandbox \"1dc0ed26dc9a4d12b8a29f5314562ffa135d8574fef0e7319ae07a0da53dcd08\" for &ContainerMetadata{Name:clean-cilium-state,Attempt:0,} returns container id \"38d83ca3df077aa83a6b8996357d5ff28aaecec20c7833d3a95b86c7bdb942f0\"" Jan 13 21:25:52.964336 containerd[1461]: time="2025-01-13T21:25:52.964289483Z" level=info msg="StartContainer for 
\"38d83ca3df077aa83a6b8996357d5ff28aaecec20c7833d3a95b86c7bdb942f0\"" Jan 13 21:25:53.001939 systemd[1]: Started cri-containerd-38d83ca3df077aa83a6b8996357d5ff28aaecec20c7833d3a95b86c7bdb942f0.scope - libcontainer container 38d83ca3df077aa83a6b8996357d5ff28aaecec20c7833d3a95b86c7bdb942f0. Jan 13 21:25:53.024611 systemd[1]: cri-containerd-38d83ca3df077aa83a6b8996357d5ff28aaecec20c7833d3a95b86c7bdb942f0.scope: Deactivated successfully. Jan 13 21:25:53.026622 containerd[1461]: time="2025-01-13T21:25:53.026566164Z" level=info msg="StartContainer for \"38d83ca3df077aa83a6b8996357d5ff28aaecec20c7833d3a95b86c7bdb942f0\" returns successfully" Jan 13 21:25:53.050370 containerd[1461]: time="2025-01-13T21:25:53.050257815Z" level=info msg="shim disconnected" id=38d83ca3df077aa83a6b8996357d5ff28aaecec20c7833d3a95b86c7bdb942f0 namespace=k8s.io Jan 13 21:25:53.050370 containerd[1461]: time="2025-01-13T21:25:53.050341682Z" level=warning msg="cleaning up after shim disconnected" id=38d83ca3df077aa83a6b8996357d5ff28aaecec20c7833d3a95b86c7bdb942f0 namespace=k8s.io Jan 13 21:25:53.050370 containerd[1461]: time="2025-01-13T21:25:53.050353244Z" level=info msg="cleaning up dead shim" namespace=k8s.io Jan 13 21:25:53.514463 kubelet[1772]: E0113 21:25:53.514410 1772 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Jan 13 21:25:53.839713 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-38d83ca3df077aa83a6b8996357d5ff28aaecec20c7833d3a95b86c7bdb942f0-rootfs.mount: Deactivated successfully. 
Jan 13 21:25:53.912525 kubelet[1772]: E0113 21:25:53.912425 1772 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jan 13 21:25:53.912525 kubelet[1772]: E0113 21:25:53.912489 1772 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jan 13 21:25:53.914583 containerd[1461]: time="2025-01-13T21:25:53.914535434Z" level=info msg="CreateContainer within sandbox \"1dc0ed26dc9a4d12b8a29f5314562ffa135d8574fef0e7319ae07a0da53dcd08\" for container &ContainerMetadata{Name:cilium-agent,Attempt:0,}" Jan 13 21:25:53.930917 containerd[1461]: time="2025-01-13T21:25:53.930869776Z" level=info msg="CreateContainer within sandbox \"1dc0ed26dc9a4d12b8a29f5314562ffa135d8574fef0e7319ae07a0da53dcd08\" for &ContainerMetadata{Name:cilium-agent,Attempt:0,} returns container id \"ab1e567dab13f42195a4ded3489ebf284114fb43a38582ea75a7611538451e0e\"" Jan 13 21:25:53.931370 containerd[1461]: time="2025-01-13T21:25:53.931337293Z" level=info msg="StartContainer for \"ab1e567dab13f42195a4ded3489ebf284114fb43a38582ea75a7611538451e0e\"" Jan 13 21:25:53.962878 systemd[1]: Started cri-containerd-ab1e567dab13f42195a4ded3489ebf284114fb43a38582ea75a7611538451e0e.scope - libcontainer container ab1e567dab13f42195a4ded3489ebf284114fb43a38582ea75a7611538451e0e. 
Jan 13 21:25:53.996052 containerd[1461]: time="2025-01-13T21:25:53.995999077Z" level=info msg="StartContainer for \"ab1e567dab13f42195a4ded3489ebf284114fb43a38582ea75a7611538451e0e\" returns successfully" Jan 13 21:25:54.173719 kubelet[1772]: I0113 21:25:54.173598 1772 kubelet_node_status.go:497] "Fast updating node status as it just became ready" Jan 13 21:25:54.485811 kernel: Initializing XFRM netlink socket Jan 13 21:25:54.515532 kubelet[1772]: E0113 21:25:54.515457 1772 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Jan 13 21:25:54.916763 kubelet[1772]: E0113 21:25:54.916627 1772 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jan 13 21:25:54.928907 kubelet[1772]: I0113 21:25:54.928869 1772 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/cilium-kch9h" podStartSLOduration=7.7961730849999995 podStartE2EDuration="16.928856347s" podCreationTimestamp="2025-01-13 21:25:38 +0000 UTC" firstStartedPulling="2025-01-13 21:25:40.694550212 +0000 UTC m=+2.540557242" lastFinishedPulling="2025-01-13 21:25:49.827233473 +0000 UTC m=+11.673240504" observedRunningTime="2025-01-13 21:25:54.927881028 +0000 UTC m=+16.773888058" watchObservedRunningTime="2025-01-13 21:25:54.928856347 +0000 UTC m=+16.774863377" Jan 13 21:25:54.930040 kubelet[1772]: I0113 21:25:54.929995 1772 topology_manager.go:215] "Topology Admit Handler" podUID="ffdd717f-7baa-41a4-8c34-72cbbe4f6447" podNamespace="default" podName="nginx-deployment-85f456d6dd-wjvvs" Jan 13 21:25:54.935396 systemd[1]: Created slice kubepods-besteffort-podffdd717f_7baa_41a4_8c34_72cbbe4f6447.slice - libcontainer container kubepods-besteffort-podffdd717f_7baa_41a4_8c34_72cbbe4f6447.slice. 
Jan 13 21:25:55.044766 kubelet[1772]: I0113 21:25:55.044380 1772 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-8sznk\" (UniqueName: \"kubernetes.io/projected/ffdd717f-7baa-41a4-8c34-72cbbe4f6447-kube-api-access-8sznk\") pod \"nginx-deployment-85f456d6dd-wjvvs\" (UID: \"ffdd717f-7baa-41a4-8c34-72cbbe4f6447\") " pod="default/nginx-deployment-85f456d6dd-wjvvs"
Jan 13 21:25:55.238409 containerd[1461]: time="2025-01-13T21:25:55.238295687Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:nginx-deployment-85f456d6dd-wjvvs,Uid:ffdd717f-7baa-41a4-8c34-72cbbe4f6447,Namespace:default,Attempt:0,}"
Jan 13 21:25:55.515696 kubelet[1772]: E0113 21:25:55.515576 1772 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests"
Jan 13 21:25:55.918994 kubelet[1772]: E0113 21:25:55.918875 1772 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Jan 13 21:25:56.176844 systemd-networkd[1393]: cilium_host: Link UP
Jan 13 21:25:56.177016 systemd-networkd[1393]: cilium_net: Link UP
Jan 13 21:25:56.177783 systemd-networkd[1393]: cilium_net: Gained carrier
Jan 13 21:25:56.177995 systemd-networkd[1393]: cilium_host: Gained carrier
Jan 13 21:25:56.178138 systemd-networkd[1393]: cilium_net: Gained IPv6LL
Jan 13 21:25:56.178320 systemd-networkd[1393]: cilium_host: Gained IPv6LL
Jan 13 21:25:56.287885 systemd-networkd[1393]: cilium_vxlan: Link UP
Jan 13 21:25:56.287895 systemd-networkd[1393]: cilium_vxlan: Gained carrier
Jan 13 21:25:56.516471 kubelet[1772]: E0113 21:25:56.516328 1772 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests"
Jan 13 21:25:56.615784 kernel: NET: Registered PF_ALG protocol family
Jan 13 21:25:56.920704 kubelet[1772]: E0113 21:25:56.920598 1772 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Jan 13 21:25:57.325601 systemd-networkd[1393]: lxc_health: Link UP
Jan 13 21:25:57.336524 systemd-networkd[1393]: lxc_health: Gained carrier
Jan 13 21:25:57.497265 systemd-networkd[1393]: lxcacd735e608be: Link UP
Jan 13 21:25:57.509772 kernel: eth0: renamed from tmp745cc
Jan 13 21:25:57.514792 systemd-networkd[1393]: lxcacd735e608be: Gained carrier
Jan 13 21:25:57.518166 kubelet[1772]: E0113 21:25:57.518134 1772 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests"
Jan 13 21:25:58.149121 systemd-networkd[1393]: cilium_vxlan: Gained IPv6LL
Jan 13 21:25:58.404904 systemd-networkd[1393]: lxc_health: Gained IPv6LL
Jan 13 21:25:58.505139 kubelet[1772]: E0113 21:25:58.505079 1772 file.go:104] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests"
Jan 13 21:25:58.518603 kubelet[1772]: E0113 21:25:58.518571 1772 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests"
Jan 13 21:25:58.916885 systemd-networkd[1393]: lxcacd735e608be: Gained IPv6LL
Jan 13 21:25:59.257325 kubelet[1772]: E0113 21:25:59.257193 1772 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Jan 13 21:25:59.519274 kubelet[1772]: E0113 21:25:59.519119 1772 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests"
Jan 13 21:25:59.924438 kubelet[1772]: E0113 21:25:59.924395 1772 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Jan 13 21:26:00.520213 kubelet[1772]: E0113 21:26:00.520147 1772 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests"
Jan 13 21:26:00.926076 kubelet[1772]: E0113 21:26:00.926041 1772 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Jan 13 21:26:01.103126 containerd[1461]: time="2025-01-13T21:26:01.102466833Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1
Jan 13 21:26:01.103126 containerd[1461]: time="2025-01-13T21:26:01.103088334Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1
Jan 13 21:26:01.103126 containerd[1461]: time="2025-01-13T21:26:01.103101640Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
Jan 13 21:26:01.103624 containerd[1461]: time="2025-01-13T21:26:01.103190120Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
Jan 13 21:26:01.126002 systemd[1]: Started cri-containerd-745cc0cb23bf03a40e98968ae1e608523128d962429c7a4903fc154542ebcd3b.scope - libcontainer container 745cc0cb23bf03a40e98968ae1e608523128d962429c7a4903fc154542ebcd3b.
Jan 13 21:26:01.138081 systemd-resolved[1337]: Failed to determine the local hostname and LLMNR/mDNS names, ignoring: No such device or address
Jan 13 21:26:01.162678 containerd[1461]: time="2025-01-13T21:26:01.162643120Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:nginx-deployment-85f456d6dd-wjvvs,Uid:ffdd717f-7baa-41a4-8c34-72cbbe4f6447,Namespace:default,Attempt:0,} returns sandbox id \"745cc0cb23bf03a40e98968ae1e608523128d962429c7a4903fc154542ebcd3b\""
Jan 13 21:26:01.164112 containerd[1461]: time="2025-01-13T21:26:01.164093121Z" level=info msg="PullImage \"ghcr.io/flatcar/nginx:latest\""
Jan 13 21:26:01.520654 kubelet[1772]: E0113 21:26:01.520593 1772 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests"
Jan 13 21:26:02.521643 kubelet[1772]: E0113 21:26:02.521582 1772 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests"
Jan 13 21:26:03.522218 kubelet[1772]: E0113 21:26:03.522176 1772 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests"
Jan 13 21:26:04.523016 kubelet[1772]: E0113 21:26:04.522965 1772 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests"
Jan 13 21:26:04.650760 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount224580200.mount: Deactivated successfully.
Jan 13 21:26:05.523340 kubelet[1772]: E0113 21:26:05.523282 1772 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests"
Jan 13 21:26:06.042484 containerd[1461]: time="2025-01-13T21:26:06.042424063Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/nginx:latest\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Jan 13 21:26:06.043258 containerd[1461]: time="2025-01-13T21:26:06.043190433Z" level=info msg="stop pulling image ghcr.io/flatcar/nginx:latest: active requests=0, bytes read=71036018"
Jan 13 21:26:06.044330 containerd[1461]: time="2025-01-13T21:26:06.044303205Z" level=info msg="ImageCreate event name:\"sha256:29ef6eaebfc53650f3a4609edbf9d35e866f56b2c5e01d32d93439031b300f0b\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Jan 13 21:26:06.047436 containerd[1461]: time="2025-01-13T21:26:06.047396792Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/nginx@sha256:eca1d1ff18c7af45f86b7e0b572090f563a676ddca3da2ecff678390366335ad\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Jan 13 21:26:06.048430 containerd[1461]: time="2025-01-13T21:26:06.048393913Z" level=info msg="Pulled image \"ghcr.io/flatcar/nginx:latest\" with image id \"sha256:29ef6eaebfc53650f3a4609edbf9d35e866f56b2c5e01d32d93439031b300f0b\", repo tag \"ghcr.io/flatcar/nginx:latest\", repo digest \"ghcr.io/flatcar/nginx@sha256:eca1d1ff18c7af45f86b7e0b572090f563a676ddca3da2ecff678390366335ad\", size \"71035896\" in 4.884275144s"
Jan 13 21:26:06.048430 containerd[1461]: time="2025-01-13T21:26:06.048426425Z" level=info msg="PullImage \"ghcr.io/flatcar/nginx:latest\" returns image reference \"sha256:29ef6eaebfc53650f3a4609edbf9d35e866f56b2c5e01d32d93439031b300f0b\""
Jan 13 21:26:06.050496 containerd[1461]: time="2025-01-13T21:26:06.050461104Z" level=info msg="CreateContainer within sandbox \"745cc0cb23bf03a40e98968ae1e608523128d962429c7a4903fc154542ebcd3b\" for container &ContainerMetadata{Name:nginx,Attempt:0,}"
Jan 13 21:26:06.071444 containerd[1461]: time="2025-01-13T21:26:06.071384281Z" level=info msg="CreateContainer within sandbox \"745cc0cb23bf03a40e98968ae1e608523128d962429c7a4903fc154542ebcd3b\" for &ContainerMetadata{Name:nginx,Attempt:0,} returns container id \"d5ef907ab0ef06d522315170268dd2b3453fbb6bf9a48001b6aa0df0e8071ddc\""
Jan 13 21:26:06.072016 containerd[1461]: time="2025-01-13T21:26:06.071961772Z" level=info msg="StartContainer for \"d5ef907ab0ef06d522315170268dd2b3453fbb6bf9a48001b6aa0df0e8071ddc\""
Jan 13 21:26:06.112879 systemd[1]: Started cri-containerd-d5ef907ab0ef06d522315170268dd2b3453fbb6bf9a48001b6aa0df0e8071ddc.scope - libcontainer container d5ef907ab0ef06d522315170268dd2b3453fbb6bf9a48001b6aa0df0e8071ddc.
Jan 13 21:26:06.136928 containerd[1461]: time="2025-01-13T21:26:06.136867424Z" level=info msg="StartContainer for \"d5ef907ab0ef06d522315170268dd2b3453fbb6bf9a48001b6aa0df0e8071ddc\" returns successfully"
Jan 13 21:26:06.524424 kubelet[1772]: E0113 21:26:06.524374 1772 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests"
Jan 13 21:26:06.946762 kubelet[1772]: I0113 21:26:06.946670 1772 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="default/nginx-deployment-85f456d6dd-wjvvs" podStartSLOduration=8.061251383 podStartE2EDuration="12.946648238s" podCreationTimestamp="2025-01-13 21:25:54 +0000 UTC" firstStartedPulling="2025-01-13 21:26:01.163835226 +0000 UTC m=+23.009842256" lastFinishedPulling="2025-01-13 21:26:06.049232081 +0000 UTC m=+27.895239111" observedRunningTime="2025-01-13 21:26:06.946229199 +0000 UTC m=+28.792236229" watchObservedRunningTime="2025-01-13 21:26:06.946648238 +0000 UTC m=+28.792655268"
Jan 13 21:26:07.524999 kubelet[1772]: E0113 21:26:07.524933 1772 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests"
Jan 13 21:26:08.525471 kubelet[1772]: E0113 21:26:08.525413 1772 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests"
Jan 13 21:26:09.525996 kubelet[1772]: E0113 21:26:09.525947 1772 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests"
Jan 13 21:26:10.526288 kubelet[1772]: E0113 21:26:10.526247 1772 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests"
Jan 13 21:26:10.689885 update_engine[1449]: I20250113 21:26:10.689735 1449 update_attempter.cc:509] Updating boot flags...
Jan 13 21:26:10.746878 kernel: BTRFS warning: duplicate device /dev/vda3 devid 1 generation 37 scanned by (udev-worker) (2966)
Jan 13 21:26:10.776794 kernel: BTRFS warning: duplicate device /dev/vda3 devid 1 generation 37 scanned by (udev-worker) (2969)
Jan 13 21:26:11.527013 kubelet[1772]: E0113 21:26:11.526965 1772 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests"
Jan 13 21:26:12.527629 kubelet[1772]: E0113 21:26:12.527565 1772 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests"
Jan 13 21:26:12.586045 kubelet[1772]: I0113 21:26:12.586002 1772 topology_manager.go:215] "Topology Admit Handler" podUID="7d0405a0-fee4-47f6-bb8f-dbf4f83cd335" podNamespace="default" podName="nfs-server-provisioner-0"
Jan 13 21:26:12.592163 systemd[1]: Created slice kubepods-besteffort-pod7d0405a0_fee4_47f6_bb8f_dbf4f83cd335.slice - libcontainer container kubepods-besteffort-pod7d0405a0_fee4_47f6_bb8f_dbf4f83cd335.slice.
Jan 13 21:26:12.712111 kubelet[1772]: I0113 21:26:12.712034 1772 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-wr2cl\" (UniqueName: \"kubernetes.io/projected/7d0405a0-fee4-47f6-bb8f-dbf4f83cd335-kube-api-access-wr2cl\") pod \"nfs-server-provisioner-0\" (UID: \"7d0405a0-fee4-47f6-bb8f-dbf4f83cd335\") " pod="default/nfs-server-provisioner-0"
Jan 13 21:26:12.712111 kubelet[1772]: I0113 21:26:12.712100 1772 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"data\" (UniqueName: \"kubernetes.io/empty-dir/7d0405a0-fee4-47f6-bb8f-dbf4f83cd335-data\") pod \"nfs-server-provisioner-0\" (UID: \"7d0405a0-fee4-47f6-bb8f-dbf4f83cd335\") " pod="default/nfs-server-provisioner-0"
Jan 13 21:26:12.895998 containerd[1461]: time="2025-01-13T21:26:12.895881146Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:nfs-server-provisioner-0,Uid:7d0405a0-fee4-47f6-bb8f-dbf4f83cd335,Namespace:default,Attempt:0,}"
Jan 13 21:26:12.929508 systemd-networkd[1393]: lxc84f9429ede38: Link UP
Jan 13 21:26:12.938774 kernel: eth0: renamed from tmp116e6
Jan 13 21:26:12.948943 systemd-networkd[1393]: lxc84f9429ede38: Gained carrier
Jan 13 21:26:13.162133 containerd[1461]: time="2025-01-13T21:26:13.161714811Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1
Jan 13 21:26:13.162133 containerd[1461]: time="2025-01-13T21:26:13.161797577Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1
Jan 13 21:26:13.162133 containerd[1461]: time="2025-01-13T21:26:13.161810643Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
Jan 13 21:26:13.162133 containerd[1461]: time="2025-01-13T21:26:13.161905843Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
Jan 13 21:26:13.184904 systemd[1]: Started cri-containerd-116e6c648e8cedb75110c6a97b47bb23f36afe9f3207c29e3c74049e2116755d.scope - libcontainer container 116e6c648e8cedb75110c6a97b47bb23f36afe9f3207c29e3c74049e2116755d.
Jan 13 21:26:13.235253 systemd-resolved[1337]: Failed to determine the local hostname and LLMNR/mDNS names, ignoring: No such device or address
Jan 13 21:26:13.270504 containerd[1461]: time="2025-01-13T21:26:13.270451937Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:nfs-server-provisioner-0,Uid:7d0405a0-fee4-47f6-bb8f-dbf4f83cd335,Namespace:default,Attempt:0,} returns sandbox id \"116e6c648e8cedb75110c6a97b47bb23f36afe9f3207c29e3c74049e2116755d\""
Jan 13 21:26:13.272495 containerd[1461]: time="2025-01-13T21:26:13.272450604Z" level=info msg="PullImage \"registry.k8s.io/sig-storage/nfs-provisioner:v4.0.8\""
Jan 13 21:26:13.527897 kubelet[1772]: E0113 21:26:13.527845 1772 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests"
Jan 13 21:26:14.528274 kubelet[1772]: E0113 21:26:14.528210 1772 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests"
Jan 13 21:26:14.852917 systemd-networkd[1393]: lxc84f9429ede38: Gained IPv6LL
Jan 13 21:26:15.545374 kubelet[1772]: E0113 21:26:15.545313 1772 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests"
Jan 13 21:26:15.980714 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount248004175.mount: Deactivated successfully.
Jan 13 21:26:16.545734 kubelet[1772]: E0113 21:26:16.545674 1772 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests"
Jan 13 21:26:17.546163 kubelet[1772]: E0113 21:26:17.546089 1772 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests"
Jan 13 21:26:18.505058 kubelet[1772]: E0113 21:26:18.505019 1772 file.go:104] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests"
Jan 13 21:26:18.511408 containerd[1461]: time="2025-01-13T21:26:18.511354526Z" level=info msg="ImageCreate event name:\"registry.k8s.io/sig-storage/nfs-provisioner:v4.0.8\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Jan 13 21:26:18.512298 containerd[1461]: time="2025-01-13T21:26:18.512262922Z" level=info msg="stop pulling image registry.k8s.io/sig-storage/nfs-provisioner:v4.0.8: active requests=0, bytes read=91039406"
Jan 13 21:26:18.513651 containerd[1461]: time="2025-01-13T21:26:18.513615277Z" level=info msg="ImageCreate event name:\"sha256:fd0b16f70b66b72bcb2f91d556fa33eba02729c44ffc5f2c16130e7f9fbed3c4\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Jan 13 21:26:18.516533 containerd[1461]: time="2025-01-13T21:26:18.516471855Z" level=info msg="ImageCreate event name:\"registry.k8s.io/sig-storage/nfs-provisioner@sha256:c825f3d5e28bde099bd7a3daace28772d412c9157ad47fa752a9ad0baafc118d\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Jan 13 21:26:18.517587 containerd[1461]: time="2025-01-13T21:26:18.517543901Z" level=info msg="Pulled image \"registry.k8s.io/sig-storage/nfs-provisioner:v4.0.8\" with image id \"sha256:fd0b16f70b66b72bcb2f91d556fa33eba02729c44ffc5f2c16130e7f9fbed3c4\", repo tag \"registry.k8s.io/sig-storage/nfs-provisioner:v4.0.8\", repo digest \"registry.k8s.io/sig-storage/nfs-provisioner@sha256:c825f3d5e28bde099bd7a3daace28772d412c9157ad47fa752a9ad0baafc118d\", size \"91036984\" in 5.245046669s"
Jan 13 21:26:18.517587 containerd[1461]: time="2025-01-13T21:26:18.517582404Z" level=info msg="PullImage \"registry.k8s.io/sig-storage/nfs-provisioner:v4.0.8\" returns image reference \"sha256:fd0b16f70b66b72bcb2f91d556fa33eba02729c44ffc5f2c16130e7f9fbed3c4\""
Jan 13 21:26:18.519903 containerd[1461]: time="2025-01-13T21:26:18.519869386Z" level=info msg="CreateContainer within sandbox \"116e6c648e8cedb75110c6a97b47bb23f36afe9f3207c29e3c74049e2116755d\" for container &ContainerMetadata{Name:nfs-server-provisioner,Attempt:0,}"
Jan 13 21:26:18.530813 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount809214334.mount: Deactivated successfully.
Jan 13 21:26:18.532072 containerd[1461]: time="2025-01-13T21:26:18.532042679Z" level=info msg="CreateContainer within sandbox \"116e6c648e8cedb75110c6a97b47bb23f36afe9f3207c29e3c74049e2116755d\" for &ContainerMetadata{Name:nfs-server-provisioner,Attempt:0,} returns container id \"f0ebb3a54c0348d92013b8a947bee3d0299f7408117b286e15d283a606f9e690\""
Jan 13 21:26:18.532485 containerd[1461]: time="2025-01-13T21:26:18.532435560Z" level=info msg="StartContainer for \"f0ebb3a54c0348d92013b8a947bee3d0299f7408117b286e15d283a606f9e690\""
Jan 13 21:26:18.546417 kubelet[1772]: E0113 21:26:18.546381 1772 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests"
Jan 13 21:26:18.602918 systemd[1]: Started cri-containerd-f0ebb3a54c0348d92013b8a947bee3d0299f7408117b286e15d283a606f9e690.scope - libcontainer container f0ebb3a54c0348d92013b8a947bee3d0299f7408117b286e15d283a606f9e690.
Jan 13 21:26:18.737558 containerd[1461]: time="2025-01-13T21:26:18.737507052Z" level=info msg="StartContainer for \"f0ebb3a54c0348d92013b8a947bee3d0299f7408117b286e15d283a606f9e690\" returns successfully"
Jan 13 21:26:18.971654 kubelet[1772]: I0113 21:26:18.971593 1772 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="default/nfs-server-provisioner-0" podStartSLOduration=1.725308171 podStartE2EDuration="6.971575398s" podCreationTimestamp="2025-01-13 21:26:12 +0000 UTC" firstStartedPulling="2025-01-13 21:26:13.272134896 +0000 UTC m=+35.118141926" lastFinishedPulling="2025-01-13 21:26:18.518402123 +0000 UTC m=+40.364409153" observedRunningTime="2025-01-13 21:26:18.971466372 +0000 UTC m=+40.817473402" watchObservedRunningTime="2025-01-13 21:26:18.971575398 +0000 UTC m=+40.817582428"
Jan 13 21:26:19.546785 kubelet[1772]: E0113 21:26:19.546683 1772 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests"
Jan 13 21:26:20.547471 kubelet[1772]: E0113 21:26:20.547408 1772 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests"
Jan 13 21:26:21.548398 kubelet[1772]: E0113 21:26:21.548329 1772 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests"
Jan 13 21:26:22.549287 kubelet[1772]: E0113 21:26:22.549225 1772 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests"
Jan 13 21:26:23.549646 kubelet[1772]: E0113 21:26:23.549574 1772 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests"
Jan 13 21:26:24.550276 kubelet[1772]: E0113 21:26:24.550232 1772 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests"
Jan 13 21:26:25.551470 kubelet[1772]: E0113 21:26:25.551359 1772 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests"
Jan 13 21:26:26.551851 kubelet[1772]: E0113 21:26:26.551801 1772 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests"
Jan 13 21:26:27.552409 kubelet[1772]: E0113 21:26:27.552344 1772 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests"
Jan 13 21:26:28.301230 kubelet[1772]: I0113 21:26:28.301179 1772 topology_manager.go:215] "Topology Admit Handler" podUID="0607a223-66de-4936-9b5a-1c8b5929fee9" podNamespace="default" podName="test-pod-1"
Jan 13 21:26:28.308065 systemd[1]: Created slice kubepods-besteffort-pod0607a223_66de_4936_9b5a_1c8b5929fee9.slice - libcontainer container kubepods-besteffort-pod0607a223_66de_4936_9b5a_1c8b5929fee9.slice.
Jan 13 21:26:28.427905 kubelet[1772]: I0113 21:26:28.427829 1772 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-x8zdc\" (UniqueName: \"kubernetes.io/projected/0607a223-66de-4936-9b5a-1c8b5929fee9-kube-api-access-x8zdc\") pod \"test-pod-1\" (UID: \"0607a223-66de-4936-9b5a-1c8b5929fee9\") " pod="default/test-pod-1"
Jan 13 21:26:28.427905 kubelet[1772]: I0113 21:26:28.427879 1772 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"pvc-822c439f-a3e7-4fcb-8265-869477b1e6a3\" (UniqueName: \"kubernetes.io/nfs/0607a223-66de-4936-9b5a-1c8b5929fee9-pvc-822c439f-a3e7-4fcb-8265-869477b1e6a3\") pod \"test-pod-1\" (UID: \"0607a223-66de-4936-9b5a-1c8b5929fee9\") " pod="default/test-pod-1"
Jan 13 21:26:28.552976 kubelet[1772]: E0113 21:26:28.552873 1772 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests"
Jan 13 21:26:28.558874 kernel: FS-Cache: Loaded
Jan 13 21:26:28.630362 kernel: RPC: Registered named UNIX socket transport module.
Jan 13 21:26:28.630467 kernel: RPC: Registered udp transport module.
Jan 13 21:26:28.630490 kernel: RPC: Registered tcp transport module.
Jan 13 21:26:28.630509 kernel: RPC: Registered tcp-with-tls transport module.
Jan 13 21:26:28.631226 kernel: RPC: Registered tcp NFSv4.1 backchannel transport module.
Jan 13 21:26:28.898104 kernel: NFS: Registering the id_resolver key type
Jan 13 21:26:28.898278 kernel: Key type id_resolver registered
Jan 13 21:26:28.898331 kernel: Key type id_legacy registered
Jan 13 21:26:28.929154 nfsidmap[3156]: nss_getpwnam: name 'root@nfs-server-provisioner.default.svc.cluster.local' does not map into domain 'localdomain'
Jan 13 21:26:28.934772 nfsidmap[3159]: nss_name_to_gid: name 'root@nfs-server-provisioner.default.svc.cluster.local' does not map into domain 'localdomain'
Jan 13 21:26:29.211996 containerd[1461]: time="2025-01-13T21:26:29.211852267Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:test-pod-1,Uid:0607a223-66de-4936-9b5a-1c8b5929fee9,Namespace:default,Attempt:0,}"
Jan 13 21:26:29.543146 systemd-networkd[1393]: lxc93e2deadd4be: Link UP
Jan 13 21:26:29.553709 kubelet[1772]: E0113 21:26:29.553676 1772 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests"
Jan 13 21:26:29.560779 kernel: eth0: renamed from tmp02d60
Jan 13 21:26:29.567632 systemd-networkd[1393]: lxc93e2deadd4be: Gained carrier
Jan 13 21:26:29.798963 containerd[1461]: time="2025-01-13T21:26:29.798788114Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1
Jan 13 21:26:29.799130 containerd[1461]: time="2025-01-13T21:26:29.798853838Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1
Jan 13 21:26:29.799130 containerd[1461]: time="2025-01-13T21:26:29.798886410Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
Jan 13 21:26:29.799130 containerd[1461]: time="2025-01-13T21:26:29.799025612Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
Jan 13 21:26:29.815246 systemd[1]: run-containerd-runc-k8s.io-02d608c7b5f54c45e8eebdd9d7a4decd7c794020d670f8f94feed38a7b0d3e6e-runc.jcg8Bt.mount: Deactivated successfully.
Jan 13 21:26:29.826863 systemd[1]: Started cri-containerd-02d608c7b5f54c45e8eebdd9d7a4decd7c794020d670f8f94feed38a7b0d3e6e.scope - libcontainer container 02d608c7b5f54c45e8eebdd9d7a4decd7c794020d670f8f94feed38a7b0d3e6e.
Jan 13 21:26:29.840271 systemd-resolved[1337]: Failed to determine the local hostname and LLMNR/mDNS names, ignoring: No such device or address
Jan 13 21:26:29.866227 containerd[1461]: time="2025-01-13T21:26:29.866184400Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:test-pod-1,Uid:0607a223-66de-4936-9b5a-1c8b5929fee9,Namespace:default,Attempt:0,} returns sandbox id \"02d608c7b5f54c45e8eebdd9d7a4decd7c794020d670f8f94feed38a7b0d3e6e\""
Jan 13 21:26:29.867791 containerd[1461]: time="2025-01-13T21:26:29.867712316Z" level=info msg="PullImage \"ghcr.io/flatcar/nginx:latest\""
Jan 13 21:26:30.313006 containerd[1461]: time="2025-01-13T21:26:30.312935829Z" level=info msg="ImageUpdate event name:\"ghcr.io/flatcar/nginx:latest\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Jan 13 21:26:30.313604 containerd[1461]: time="2025-01-13T21:26:30.313571455Z" level=info msg="stop pulling image ghcr.io/flatcar/nginx:latest: active requests=0, bytes read=61"
Jan 13 21:26:30.316330 containerd[1461]: time="2025-01-13T21:26:30.316300702Z" level=info msg="Pulled image \"ghcr.io/flatcar/nginx:latest\" with image id \"sha256:29ef6eaebfc53650f3a4609edbf9d35e866f56b2c5e01d32d93439031b300f0b\", repo tag \"ghcr.io/flatcar/nginx:latest\", repo digest \"ghcr.io/flatcar/nginx@sha256:eca1d1ff18c7af45f86b7e0b572090f563a676ddca3da2ecff678390366335ad\", size \"71035896\" in 448.561245ms"
Jan 13 21:26:30.316370 containerd[1461]: time="2025-01-13T21:26:30.316329105Z" level=info msg="PullImage \"ghcr.io/flatcar/nginx:latest\" returns image reference \"sha256:29ef6eaebfc53650f3a4609edbf9d35e866f56b2c5e01d32d93439031b300f0b\""
Jan 13 21:26:30.318384 containerd[1461]: time="2025-01-13T21:26:30.318354586Z" level=info msg="CreateContainer within sandbox \"02d608c7b5f54c45e8eebdd9d7a4decd7c794020d670f8f94feed38a7b0d3e6e\" for container &ContainerMetadata{Name:test,Attempt:0,}"
Jan 13 21:26:30.333706 containerd[1461]: time="2025-01-13T21:26:30.333669318Z" level=info msg="CreateContainer within sandbox \"02d608c7b5f54c45e8eebdd9d7a4decd7c794020d670f8f94feed38a7b0d3e6e\" for &ContainerMetadata{Name:test,Attempt:0,} returns container id \"bc51800efd2ca6eb26064d944a76131faa091974438278617ced9522535e0b1c\""
Jan 13 21:26:30.334208 containerd[1461]: time="2025-01-13T21:26:30.334147769Z" level=info msg="StartContainer for \"bc51800efd2ca6eb26064d944a76131faa091974438278617ced9522535e0b1c\""
Jan 13 21:26:30.363919 systemd[1]: Started cri-containerd-bc51800efd2ca6eb26064d944a76131faa091974438278617ced9522535e0b1c.scope - libcontainer container bc51800efd2ca6eb26064d944a76131faa091974438278617ced9522535e0b1c.
Jan 13 21:26:30.419116 containerd[1461]: time="2025-01-13T21:26:30.419053426Z" level=info msg="StartContainer for \"bc51800efd2ca6eb26064d944a76131faa091974438278617ced9522535e0b1c\" returns successfully"
Jan 13 21:26:30.540001 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount1753719520.mount: Deactivated successfully.
Jan 13 21:26:30.554828 kubelet[1772]: E0113 21:26:30.554787 1772 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests"
Jan 13 21:26:30.660931 systemd-networkd[1393]: lxc93e2deadd4be: Gained IPv6LL
Jan 13 21:26:30.994479 kubelet[1772]: I0113 21:26:30.994418 1772 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="default/test-pod-1" podStartSLOduration=18.544854458 podStartE2EDuration="18.994400677s" podCreationTimestamp="2025-01-13 21:26:12 +0000 UTC" firstStartedPulling="2025-01-13 21:26:29.867462335 +0000 UTC m=+51.713469365" lastFinishedPulling="2025-01-13 21:26:30.317008554 +0000 UTC m=+52.163015584" observedRunningTime="2025-01-13 21:26:30.994288195 +0000 UTC m=+52.840295225" watchObservedRunningTime="2025-01-13 21:26:30.994400677 +0000 UTC m=+52.840407707"
Jan 13 21:26:31.555510 kubelet[1772]: E0113 21:26:31.555449 1772 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests"
Jan 13 21:26:32.556095 kubelet[1772]: E0113 21:26:32.556040 1772 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests"
Jan 13 21:26:33.556959 kubelet[1772]: E0113 21:26:33.556898 1772 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests"
Jan 13 21:26:34.557461 kubelet[1772]: E0113 21:26:34.557398 1772 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests"
Jan 13 21:26:35.263255 containerd[1461]: time="2025-01-13T21:26:35.263199382Z" level=error msg="failed to reload cni configuration after receiving fs change event(REMOVE \"/etc/cni/net.d/05-cilium.conf\")" error="cni config load failed: no network config found in /etc/cni/net.d: cni plugin not initialized: failed to load cni config"
Jan 13 21:26:35.271329 containerd[1461]: time="2025-01-13T21:26:35.271292870Z" level=info msg="StopContainer for \"ab1e567dab13f42195a4ded3489ebf284114fb43a38582ea75a7611538451e0e\" with timeout 2 (s)"
Jan 13 21:26:35.271637 containerd[1461]: time="2025-01-13T21:26:35.271596650Z" level=info msg="Stop container \"ab1e567dab13f42195a4ded3489ebf284114fb43a38582ea75a7611538451e0e\" with signal terminated"
Jan 13 21:26:35.279419 systemd-networkd[1393]: lxc_health: Link DOWN
Jan 13 21:26:35.279427 systemd-networkd[1393]: lxc_health: Lost carrier
Jan 13 21:26:35.314446 systemd[1]: cri-containerd-ab1e567dab13f42195a4ded3489ebf284114fb43a38582ea75a7611538451e0e.scope: Deactivated successfully.
Jan 13 21:26:35.314785 systemd[1]: cri-containerd-ab1e567dab13f42195a4ded3489ebf284114fb43a38582ea75a7611538451e0e.scope: Consumed 7.629s CPU time.
Jan 13 21:26:35.334498 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-ab1e567dab13f42195a4ded3489ebf284114fb43a38582ea75a7611538451e0e-rootfs.mount: Deactivated successfully.
Jan 13 21:26:35.346019 containerd[1461]: time="2025-01-13T21:26:35.345961826Z" level=info msg="shim disconnected" id=ab1e567dab13f42195a4ded3489ebf284114fb43a38582ea75a7611538451e0e namespace=k8s.io
Jan 13 21:26:35.346019 containerd[1461]: time="2025-01-13T21:26:35.346016560Z" level=warning msg="cleaning up after shim disconnected" id=ab1e567dab13f42195a4ded3489ebf284114fb43a38582ea75a7611538451e0e namespace=k8s.io
Jan 13 21:26:35.346201 containerd[1461]: time="2025-01-13T21:26:35.346027480Z" level=info msg="cleaning up dead shim" namespace=k8s.io
Jan 13 21:26:35.368380 containerd[1461]: time="2025-01-13T21:26:35.368322757Z" level=info msg="StopContainer for \"ab1e567dab13f42195a4ded3489ebf284114fb43a38582ea75a7611538451e0e\" returns successfully"
Jan 13 21:26:35.369006 containerd[1461]: time="2025-01-13T21:26:35.368970545Z" level=info msg="StopPodSandbox for \"1dc0ed26dc9a4d12b8a29f5314562ffa135d8574fef0e7319ae07a0da53dcd08\""
Jan 13 21:26:35.369006 containerd[1461]: time="2025-01-13T21:26:35.369006082Z" level=info msg="Container to stop \"ab1e567dab13f42195a4ded3489ebf284114fb43a38582ea75a7611538451e0e\" must be in running or unknown state, current state \"CONTAINER_EXITED\""
Jan 13 21:26:35.369166 containerd[1461]: time="2025-01-13T21:26:35.369017303Z" level=info msg="Container to stop \"5bf1228addb648ea8d20b3dbdc8d62513f4565f42db8f5a32ee537674f7ab57b\" must be in running or unknown state, current state \"CONTAINER_EXITED\""
Jan 13 21:26:35.369166 containerd[1461]: time="2025-01-13T21:26:35.369026712Z" level=info msg="Container to stop \"d56cb87e5fc573af8d8d09072cfb1d0fb6b77ed00e2606d92ff087207d30ab16\" must be in running or unknown state, current state \"CONTAINER_EXITED\""
Jan 13 21:26:35.369166 containerd[1461]: time="2025-01-13T21:26:35.369036259Z" level=info msg="Container to stop \"c237ca27fe713c51c9d2408dbf5a8e9207bc5a88a1063cbd46af5b3ac30208b0\" must be in running or unknown state, current state \"CONTAINER_EXITED\""
Jan 13 21:26:35.369166 containerd[1461]: time="2025-01-13T21:26:35.369047941Z" level=info msg="Container to stop \"38d83ca3df077aa83a6b8996357d5ff28aaecec20c7833d3a95b86c7bdb942f0\" must be in running or unknown state, current state \"CONTAINER_EXITED\""
Jan 13 21:26:35.370857 systemd[1]: run-containerd-io.containerd.grpc.v1.cri-sandboxes-1dc0ed26dc9a4d12b8a29f5314562ffa135d8574fef0e7319ae07a0da53dcd08-shm.mount: Deactivated successfully.
Jan 13 21:26:35.374808 systemd[1]: cri-containerd-1dc0ed26dc9a4d12b8a29f5314562ffa135d8574fef0e7319ae07a0da53dcd08.scope: Deactivated successfully.
Jan 13 21:26:35.393051 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-1dc0ed26dc9a4d12b8a29f5314562ffa135d8574fef0e7319ae07a0da53dcd08-rootfs.mount: Deactivated successfully.
Jan 13 21:26:35.397535 containerd[1461]: time="2025-01-13T21:26:35.397367055Z" level=info msg="shim disconnected" id=1dc0ed26dc9a4d12b8a29f5314562ffa135d8574fef0e7319ae07a0da53dcd08 namespace=k8s.io Jan 13 21:26:35.397535 containerd[1461]: time="2025-01-13T21:26:35.397418942Z" level=warning msg="cleaning up after shim disconnected" id=1dc0ed26dc9a4d12b8a29f5314562ffa135d8574fef0e7319ae07a0da53dcd08 namespace=k8s.io Jan 13 21:26:35.397535 containerd[1461]: time="2025-01-13T21:26:35.397427097Z" level=info msg="cleaning up dead shim" namespace=k8s.io Jan 13 21:26:35.410851 containerd[1461]: time="2025-01-13T21:26:35.410789540Z" level=info msg="TearDown network for sandbox \"1dc0ed26dc9a4d12b8a29f5314562ffa135d8574fef0e7319ae07a0da53dcd08\" successfully" Jan 13 21:26:35.410851 containerd[1461]: time="2025-01-13T21:26:35.410832240Z" level=info msg="StopPodSandbox for \"1dc0ed26dc9a4d12b8a29f5314562ffa135d8574fef0e7319ae07a0da53dcd08\" returns successfully" Jan 13 21:26:35.558678 kubelet[1772]: E0113 21:26:35.558534 1772 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Jan 13 21:26:35.569843 kubelet[1772]: I0113 21:26:35.569807 1772 reconciler_common.go:161] "operationExecutor.UnmountVolume started for volume \"xtables-lock\" (UniqueName: \"kubernetes.io/host-path/9cd07f28-31cd-4201-b6e0-a2b6c24f55bd-xtables-lock\") pod \"9cd07f28-31cd-4201-b6e0-a2b6c24f55bd\" (UID: \"9cd07f28-31cd-4201-b6e0-a2b6c24f55bd\") " Jan 13 21:26:35.569843 kubelet[1772]: I0113 21:26:35.569842 1772 reconciler_common.go:161] "operationExecutor.UnmountVolume started for volume \"cilium-config-path\" (UniqueName: \"kubernetes.io/configmap/9cd07f28-31cd-4201-b6e0-a2b6c24f55bd-cilium-config-path\") pod \"9cd07f28-31cd-4201-b6e0-a2b6c24f55bd\" (UID: \"9cd07f28-31cd-4201-b6e0-a2b6c24f55bd\") " Jan 13 21:26:35.569922 kubelet[1772]: I0113 21:26:35.569859 1772 reconciler_common.go:161] "operationExecutor.UnmountVolume started for volume 
\"bpf-maps\" (UniqueName: \"kubernetes.io/host-path/9cd07f28-31cd-4201-b6e0-a2b6c24f55bd-bpf-maps\") pod \"9cd07f28-31cd-4201-b6e0-a2b6c24f55bd\" (UID: \"9cd07f28-31cd-4201-b6e0-a2b6c24f55bd\") " Jan 13 21:26:35.569922 kubelet[1772]: I0113 21:26:35.569873 1772 reconciler_common.go:161] "operationExecutor.UnmountVolume started for volume \"cni-path\" (UniqueName: \"kubernetes.io/host-path/9cd07f28-31cd-4201-b6e0-a2b6c24f55bd-cni-path\") pod \"9cd07f28-31cd-4201-b6e0-a2b6c24f55bd\" (UID: \"9cd07f28-31cd-4201-b6e0-a2b6c24f55bd\") " Jan 13 21:26:35.569922 kubelet[1772]: I0113 21:26:35.569887 1772 reconciler_common.go:161] "operationExecutor.UnmountVolume started for volume \"host-proc-sys-kernel\" (UniqueName: \"kubernetes.io/host-path/9cd07f28-31cd-4201-b6e0-a2b6c24f55bd-host-proc-sys-kernel\") pod \"9cd07f28-31cd-4201-b6e0-a2b6c24f55bd\" (UID: \"9cd07f28-31cd-4201-b6e0-a2b6c24f55bd\") " Jan 13 21:26:35.569922 kubelet[1772]: I0113 21:26:35.569900 1772 reconciler_common.go:161] "operationExecutor.UnmountVolume started for volume \"cilium-run\" (UniqueName: \"kubernetes.io/host-path/9cd07f28-31cd-4201-b6e0-a2b6c24f55bd-cilium-run\") pod \"9cd07f28-31cd-4201-b6e0-a2b6c24f55bd\" (UID: \"9cd07f28-31cd-4201-b6e0-a2b6c24f55bd\") " Jan 13 21:26:35.569922 kubelet[1772]: I0113 21:26:35.569912 1772 reconciler_common.go:161] "operationExecutor.UnmountVolume started for volume \"cilium-cgroup\" (UniqueName: \"kubernetes.io/host-path/9cd07f28-31cd-4201-b6e0-a2b6c24f55bd-cilium-cgroup\") pod \"9cd07f28-31cd-4201-b6e0-a2b6c24f55bd\" (UID: \"9cd07f28-31cd-4201-b6e0-a2b6c24f55bd\") " Jan 13 21:26:35.569922 kubelet[1772]: I0113 21:26:35.569926 1772 reconciler_common.go:161] "operationExecutor.UnmountVolume started for volume \"host-proc-sys-net\" (UniqueName: \"kubernetes.io/host-path/9cd07f28-31cd-4201-b6e0-a2b6c24f55bd-host-proc-sys-net\") pod \"9cd07f28-31cd-4201-b6e0-a2b6c24f55bd\" (UID: \"9cd07f28-31cd-4201-b6e0-a2b6c24f55bd\") " Jan 13 21:26:35.570068 kubelet[1772]: I0113 
21:26:35.569954 1772 reconciler_common.go:161] "operationExecutor.UnmountVolume started for volume \"clustermesh-secrets\" (UniqueName: \"kubernetes.io/secret/9cd07f28-31cd-4201-b6e0-a2b6c24f55bd-clustermesh-secrets\") pod \"9cd07f28-31cd-4201-b6e0-a2b6c24f55bd\" (UID: \"9cd07f28-31cd-4201-b6e0-a2b6c24f55bd\") " Jan 13 21:26:35.570068 kubelet[1772]: I0113 21:26:35.569945 1772 operation_generator.go:887] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/9cd07f28-31cd-4201-b6e0-a2b6c24f55bd-bpf-maps" (OuterVolumeSpecName: "bpf-maps") pod "9cd07f28-31cd-4201-b6e0-a2b6c24f55bd" (UID: "9cd07f28-31cd-4201-b6e0-a2b6c24f55bd"). InnerVolumeSpecName "bpf-maps". PluginName "kubernetes.io/host-path", VolumeGidValue "" Jan 13 21:26:35.570068 kubelet[1772]: I0113 21:26:35.569971 1772 reconciler_common.go:161] "operationExecutor.UnmountVolume started for volume \"hubble-tls\" (UniqueName: \"kubernetes.io/projected/9cd07f28-31cd-4201-b6e0-a2b6c24f55bd-hubble-tls\") pod \"9cd07f28-31cd-4201-b6e0-a2b6c24f55bd\" (UID: \"9cd07f28-31cd-4201-b6e0-a2b6c24f55bd\") " Jan 13 21:26:35.570068 kubelet[1772]: I0113 21:26:35.570060 1772 reconciler_common.go:161] "operationExecutor.UnmountVolume started for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/9cd07f28-31cd-4201-b6e0-a2b6c24f55bd-lib-modules\") pod \"9cd07f28-31cd-4201-b6e0-a2b6c24f55bd\" (UID: \"9cd07f28-31cd-4201-b6e0-a2b6c24f55bd\") " Jan 13 21:26:35.570160 kubelet[1772]: I0113 21:26:35.570080 1772 reconciler_common.go:161] "operationExecutor.UnmountVolume started for volume \"etc-cni-netd\" (UniqueName: \"kubernetes.io/host-path/9cd07f28-31cd-4201-b6e0-a2b6c24f55bd-etc-cni-netd\") pod \"9cd07f28-31cd-4201-b6e0-a2b6c24f55bd\" (UID: \"9cd07f28-31cd-4201-b6e0-a2b6c24f55bd\") " Jan 13 21:26:35.570160 kubelet[1772]: I0113 21:26:35.570095 1772 reconciler_common.go:161] "operationExecutor.UnmountVolume started for volume \"hostproc\" (UniqueName: 
\"kubernetes.io/host-path/9cd07f28-31cd-4201-b6e0-a2b6c24f55bd-hostproc\") pod \"9cd07f28-31cd-4201-b6e0-a2b6c24f55bd\" (UID: \"9cd07f28-31cd-4201-b6e0-a2b6c24f55bd\") " Jan 13 21:26:35.570160 kubelet[1772]: I0113 21:26:35.570114 1772 reconciler_common.go:161] "operationExecutor.UnmountVolume started for volume \"kube-api-access-r4wh5\" (UniqueName: \"kubernetes.io/projected/9cd07f28-31cd-4201-b6e0-a2b6c24f55bd-kube-api-access-r4wh5\") pod \"9cd07f28-31cd-4201-b6e0-a2b6c24f55bd\" (UID: \"9cd07f28-31cd-4201-b6e0-a2b6c24f55bd\") " Jan 13 21:26:35.570160 kubelet[1772]: I0113 21:26:35.570140 1772 reconciler_common.go:289] "Volume detached for volume \"bpf-maps\" (UniqueName: \"kubernetes.io/host-path/9cd07f28-31cd-4201-b6e0-a2b6c24f55bd-bpf-maps\") on node \"10.0.0.106\" DevicePath \"\"" Jan 13 21:26:35.570627 kubelet[1772]: I0113 21:26:35.570290 1772 operation_generator.go:887] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/9cd07f28-31cd-4201-b6e0-a2b6c24f55bd-xtables-lock" (OuterVolumeSpecName: "xtables-lock") pod "9cd07f28-31cd-4201-b6e0-a2b6c24f55bd" (UID: "9cd07f28-31cd-4201-b6e0-a2b6c24f55bd"). InnerVolumeSpecName "xtables-lock". PluginName "kubernetes.io/host-path", VolumeGidValue "" Jan 13 21:26:35.570627 kubelet[1772]: I0113 21:26:35.570324 1772 operation_generator.go:887] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/9cd07f28-31cd-4201-b6e0-a2b6c24f55bd-cilium-run" (OuterVolumeSpecName: "cilium-run") pod "9cd07f28-31cd-4201-b6e0-a2b6c24f55bd" (UID: "9cd07f28-31cd-4201-b6e0-a2b6c24f55bd"). InnerVolumeSpecName "cilium-run". PluginName "kubernetes.io/host-path", VolumeGidValue "" Jan 13 21:26:35.570627 kubelet[1772]: I0113 21:26:35.570342 1772 operation_generator.go:887] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/9cd07f28-31cd-4201-b6e0-a2b6c24f55bd-cni-path" (OuterVolumeSpecName: "cni-path") pod "9cd07f28-31cd-4201-b6e0-a2b6c24f55bd" (UID: "9cd07f28-31cd-4201-b6e0-a2b6c24f55bd"). 
InnerVolumeSpecName "cni-path". PluginName "kubernetes.io/host-path", VolumeGidValue "" Jan 13 21:26:35.570627 kubelet[1772]: I0113 21:26:35.570356 1772 operation_generator.go:887] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/9cd07f28-31cd-4201-b6e0-a2b6c24f55bd-host-proc-sys-kernel" (OuterVolumeSpecName: "host-proc-sys-kernel") pod "9cd07f28-31cd-4201-b6e0-a2b6c24f55bd" (UID: "9cd07f28-31cd-4201-b6e0-a2b6c24f55bd"). InnerVolumeSpecName "host-proc-sys-kernel". PluginName "kubernetes.io/host-path", VolumeGidValue "" Jan 13 21:26:35.570627 kubelet[1772]: I0113 21:26:35.570372 1772 operation_generator.go:887] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/9cd07f28-31cd-4201-b6e0-a2b6c24f55bd-host-proc-sys-net" (OuterVolumeSpecName: "host-proc-sys-net") pod "9cd07f28-31cd-4201-b6e0-a2b6c24f55bd" (UID: "9cd07f28-31cd-4201-b6e0-a2b6c24f55bd"). InnerVolumeSpecName "host-proc-sys-net". PluginName "kubernetes.io/host-path", VolumeGidValue "" Jan 13 21:26:35.570776 kubelet[1772]: I0113 21:26:35.570385 1772 operation_generator.go:887] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/9cd07f28-31cd-4201-b6e0-a2b6c24f55bd-cilium-cgroup" (OuterVolumeSpecName: "cilium-cgroup") pod "9cd07f28-31cd-4201-b6e0-a2b6c24f55bd" (UID: "9cd07f28-31cd-4201-b6e0-a2b6c24f55bd"). InnerVolumeSpecName "cilium-cgroup". PluginName "kubernetes.io/host-path", VolumeGidValue "" Jan 13 21:26:35.570776 kubelet[1772]: I0113 21:26:35.570403 1772 operation_generator.go:887] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/9cd07f28-31cd-4201-b6e0-a2b6c24f55bd-etc-cni-netd" (OuterVolumeSpecName: "etc-cni-netd") pod "9cd07f28-31cd-4201-b6e0-a2b6c24f55bd" (UID: "9cd07f28-31cd-4201-b6e0-a2b6c24f55bd"). InnerVolumeSpecName "etc-cni-netd". 
PluginName "kubernetes.io/host-path", VolumeGidValue "" Jan 13 21:26:35.570776 kubelet[1772]: I0113 21:26:35.570416 1772 operation_generator.go:887] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/9cd07f28-31cd-4201-b6e0-a2b6c24f55bd-lib-modules" (OuterVolumeSpecName: "lib-modules") pod "9cd07f28-31cd-4201-b6e0-a2b6c24f55bd" (UID: "9cd07f28-31cd-4201-b6e0-a2b6c24f55bd"). InnerVolumeSpecName "lib-modules". PluginName "kubernetes.io/host-path", VolumeGidValue "" Jan 13 21:26:35.573126 kubelet[1772]: I0113 21:26:35.573105 1772 operation_generator.go:887] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/9cd07f28-31cd-4201-b6e0-a2b6c24f55bd-clustermesh-secrets" (OuterVolumeSpecName: "clustermesh-secrets") pod "9cd07f28-31cd-4201-b6e0-a2b6c24f55bd" (UID: "9cd07f28-31cd-4201-b6e0-a2b6c24f55bd"). InnerVolumeSpecName "clustermesh-secrets". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 13 21:26:35.573227 kubelet[1772]: I0113 21:26:35.573214 1772 operation_generator.go:887] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/9cd07f28-31cd-4201-b6e0-a2b6c24f55bd-hostproc" (OuterVolumeSpecName: "hostproc") pod "9cd07f28-31cd-4201-b6e0-a2b6c24f55bd" (UID: "9cd07f28-31cd-4201-b6e0-a2b6c24f55bd"). InnerVolumeSpecName "hostproc". PluginName "kubernetes.io/host-path", VolumeGidValue "" Jan 13 21:26:35.573791 kubelet[1772]: I0113 21:26:35.573725 1772 operation_generator.go:887] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/9cd07f28-31cd-4201-b6e0-a2b6c24f55bd-cilium-config-path" (OuterVolumeSpecName: "cilium-config-path") pod "9cd07f28-31cd-4201-b6e0-a2b6c24f55bd" (UID: "9cd07f28-31cd-4201-b6e0-a2b6c24f55bd"). InnerVolumeSpecName "cilium-config-path". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 13 21:26:35.573974 kubelet[1772]: I0113 21:26:35.573957 1772 operation_generator.go:887] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/9cd07f28-31cd-4201-b6e0-a2b6c24f55bd-kube-api-access-r4wh5" (OuterVolumeSpecName: "kube-api-access-r4wh5") pod "9cd07f28-31cd-4201-b6e0-a2b6c24f55bd" (UID: "9cd07f28-31cd-4201-b6e0-a2b6c24f55bd"). InnerVolumeSpecName "kube-api-access-r4wh5". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 13 21:26:35.574112 systemd[1]: var-lib-kubelet-pods-9cd07f28\x2d31cd\x2d4201\x2db6e0\x2da2b6c24f55bd-volumes-kubernetes.io\x7esecret-clustermesh\x2dsecrets.mount: Deactivated successfully. Jan 13 21:26:35.574235 systemd[1]: var-lib-kubelet-pods-9cd07f28\x2d31cd\x2d4201\x2db6e0\x2da2b6c24f55bd-volumes-kubernetes.io\x7eprojected-hubble\x2dtls.mount: Deactivated successfully. Jan 13 21:26:35.574908 kubelet[1772]: I0113 21:26:35.574877 1772 operation_generator.go:887] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/9cd07f28-31cd-4201-b6e0-a2b6c24f55bd-hubble-tls" (OuterVolumeSpecName: "hubble-tls") pod "9cd07f28-31cd-4201-b6e0-a2b6c24f55bd" (UID: "9cd07f28-31cd-4201-b6e0-a2b6c24f55bd"). InnerVolumeSpecName "hubble-tls". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 13 21:26:35.671045 kubelet[1772]: I0113 21:26:35.670997 1772 reconciler_common.go:289] "Volume detached for volume \"kube-api-access-r4wh5\" (UniqueName: \"kubernetes.io/projected/9cd07f28-31cd-4201-b6e0-a2b6c24f55bd-kube-api-access-r4wh5\") on node \"10.0.0.106\" DevicePath \"\"" Jan 13 21:26:35.671045 kubelet[1772]: I0113 21:26:35.671034 1772 reconciler_common.go:289] "Volume detached for volume \"etc-cni-netd\" (UniqueName: \"kubernetes.io/host-path/9cd07f28-31cd-4201-b6e0-a2b6c24f55bd-etc-cni-netd\") on node \"10.0.0.106\" DevicePath \"\"" Jan 13 21:26:35.671045 kubelet[1772]: I0113 21:26:35.671044 1772 reconciler_common.go:289] "Volume detached for volume \"hostproc\" (UniqueName: \"kubernetes.io/host-path/9cd07f28-31cd-4201-b6e0-a2b6c24f55bd-hostproc\") on node \"10.0.0.106\" DevicePath \"\"" Jan 13 21:26:35.671045 kubelet[1772]: I0113 21:26:35.671054 1772 reconciler_common.go:289] "Volume detached for volume \"host-proc-sys-kernel\" (UniqueName: \"kubernetes.io/host-path/9cd07f28-31cd-4201-b6e0-a2b6c24f55bd-host-proc-sys-kernel\") on node \"10.0.0.106\" DevicePath \"\"" Jan 13 21:26:35.671207 kubelet[1772]: I0113 21:26:35.671062 1772 reconciler_common.go:289] "Volume detached for volume \"cilium-run\" (UniqueName: \"kubernetes.io/host-path/9cd07f28-31cd-4201-b6e0-a2b6c24f55bd-cilium-run\") on node \"10.0.0.106\" DevicePath \"\"" Jan 13 21:26:35.671207 kubelet[1772]: I0113 21:26:35.671071 1772 reconciler_common.go:289] "Volume detached for volume \"xtables-lock\" (UniqueName: \"kubernetes.io/host-path/9cd07f28-31cd-4201-b6e0-a2b6c24f55bd-xtables-lock\") on node \"10.0.0.106\" DevicePath \"\"" Jan 13 21:26:35.671207 kubelet[1772]: I0113 21:26:35.671079 1772 reconciler_common.go:289] "Volume detached for volume \"cilium-config-path\" (UniqueName: \"kubernetes.io/configmap/9cd07f28-31cd-4201-b6e0-a2b6c24f55bd-cilium-config-path\") on node \"10.0.0.106\" DevicePath \"\"" Jan 13 21:26:35.671207 
kubelet[1772]: I0113 21:26:35.671088 1772 reconciler_common.go:289] "Volume detached for volume \"cni-path\" (UniqueName: \"kubernetes.io/host-path/9cd07f28-31cd-4201-b6e0-a2b6c24f55bd-cni-path\") on node \"10.0.0.106\" DevicePath \"\"" Jan 13 21:26:35.671207 kubelet[1772]: I0113 21:26:35.671095 1772 reconciler_common.go:289] "Volume detached for volume \"cilium-cgroup\" (UniqueName: \"kubernetes.io/host-path/9cd07f28-31cd-4201-b6e0-a2b6c24f55bd-cilium-cgroup\") on node \"10.0.0.106\" DevicePath \"\"" Jan 13 21:26:35.671207 kubelet[1772]: I0113 21:26:35.671102 1772 reconciler_common.go:289] "Volume detached for volume \"host-proc-sys-net\" (UniqueName: \"kubernetes.io/host-path/9cd07f28-31cd-4201-b6e0-a2b6c24f55bd-host-proc-sys-net\") on node \"10.0.0.106\" DevicePath \"\"" Jan 13 21:26:35.671207 kubelet[1772]: I0113 21:26:35.671111 1772 reconciler_common.go:289] "Volume detached for volume \"clustermesh-secrets\" (UniqueName: \"kubernetes.io/secret/9cd07f28-31cd-4201-b6e0-a2b6c24f55bd-clustermesh-secrets\") on node \"10.0.0.106\" DevicePath \"\"" Jan 13 21:26:35.671207 kubelet[1772]: I0113 21:26:35.671119 1772 reconciler_common.go:289] "Volume detached for volume \"hubble-tls\" (UniqueName: \"kubernetes.io/projected/9cd07f28-31cd-4201-b6e0-a2b6c24f55bd-hubble-tls\") on node \"10.0.0.106\" DevicePath \"\"" Jan 13 21:26:35.671393 kubelet[1772]: I0113 21:26:35.671126 1772 reconciler_common.go:289] "Volume detached for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/9cd07f28-31cd-4201-b6e0-a2b6c24f55bd-lib-modules\") on node \"10.0.0.106\" DevicePath \"\"" Jan 13 21:26:35.998980 kubelet[1772]: I0113 21:26:35.998948 1772 scope.go:117] "RemoveContainer" containerID="ab1e567dab13f42195a4ded3489ebf284114fb43a38582ea75a7611538451e0e" Jan 13 21:26:36.000791 containerd[1461]: time="2025-01-13T21:26:36.000726928Z" level=info msg="RemoveContainer for \"ab1e567dab13f42195a4ded3489ebf284114fb43a38582ea75a7611538451e0e\"" Jan 13 21:26:36.004918 systemd[1]: Removed 
slice kubepods-burstable-pod9cd07f28_31cd_4201_b6e0_a2b6c24f55bd.slice - libcontainer container kubepods-burstable-pod9cd07f28_31cd_4201_b6e0_a2b6c24f55bd.slice. Jan 13 21:26:36.005176 containerd[1461]: time="2025-01-13T21:26:36.004930360Z" level=info msg="RemoveContainer for \"ab1e567dab13f42195a4ded3489ebf284114fb43a38582ea75a7611538451e0e\" returns successfully" Jan 13 21:26:36.005226 kubelet[1772]: I0113 21:26:36.005173 1772 scope.go:117] "RemoveContainer" containerID="38d83ca3df077aa83a6b8996357d5ff28aaecec20c7833d3a95b86c7bdb942f0" Jan 13 21:26:36.005329 systemd[1]: kubepods-burstable-pod9cd07f28_31cd_4201_b6e0_a2b6c24f55bd.slice: Consumed 7.740s CPU time. Jan 13 21:26:36.006590 containerd[1461]: time="2025-01-13T21:26:36.006544584Z" level=info msg="RemoveContainer for \"38d83ca3df077aa83a6b8996357d5ff28aaecec20c7833d3a95b86c7bdb942f0\"" Jan 13 21:26:36.010467 containerd[1461]: time="2025-01-13T21:26:36.010423756Z" level=info msg="RemoveContainer for \"38d83ca3df077aa83a6b8996357d5ff28aaecec20c7833d3a95b86c7bdb942f0\" returns successfully" Jan 13 21:26:36.011196 kubelet[1772]: I0113 21:26:36.010684 1772 scope.go:117] "RemoveContainer" containerID="c237ca27fe713c51c9d2408dbf5a8e9207bc5a88a1063cbd46af5b3ac30208b0" Jan 13 21:26:36.012402 containerd[1461]: time="2025-01-13T21:26:36.012354766Z" level=info msg="RemoveContainer for \"c237ca27fe713c51c9d2408dbf5a8e9207bc5a88a1063cbd46af5b3ac30208b0\"" Jan 13 21:26:36.016116 containerd[1461]: time="2025-01-13T21:26:36.016080129Z" level=info msg="RemoveContainer for \"c237ca27fe713c51c9d2408dbf5a8e9207bc5a88a1063cbd46af5b3ac30208b0\" returns successfully" Jan 13 21:26:36.016454 kubelet[1772]: I0113 21:26:36.016307 1772 scope.go:117] "RemoveContainer" containerID="d56cb87e5fc573af8d8d09072cfb1d0fb6b77ed00e2606d92ff087207d30ab16" Jan 13 21:26:36.017676 containerd[1461]: time="2025-01-13T21:26:36.017640252Z" level=info msg="RemoveContainer for \"d56cb87e5fc573af8d8d09072cfb1d0fb6b77ed00e2606d92ff087207d30ab16\"" Jan 13 
21:26:36.023000 containerd[1461]: time="2025-01-13T21:26:36.022947018Z" level=info msg="RemoveContainer for \"d56cb87e5fc573af8d8d09072cfb1d0fb6b77ed00e2606d92ff087207d30ab16\" returns successfully" Jan 13 21:26:36.023280 kubelet[1772]: I0113 21:26:36.023246 1772 scope.go:117] "RemoveContainer" containerID="5bf1228addb648ea8d20b3dbdc8d62513f4565f42db8f5a32ee537674f7ab57b" Jan 13 21:26:36.024509 containerd[1461]: time="2025-01-13T21:26:36.024481162Z" level=info msg="RemoveContainer for \"5bf1228addb648ea8d20b3dbdc8d62513f4565f42db8f5a32ee537674f7ab57b\"" Jan 13 21:26:36.027687 containerd[1461]: time="2025-01-13T21:26:36.027648165Z" level=info msg="RemoveContainer for \"5bf1228addb648ea8d20b3dbdc8d62513f4565f42db8f5a32ee537674f7ab57b\" returns successfully" Jan 13 21:26:36.027858 kubelet[1772]: I0113 21:26:36.027833 1772 scope.go:117] "RemoveContainer" containerID="ab1e567dab13f42195a4ded3489ebf284114fb43a38582ea75a7611538451e0e" Jan 13 21:26:36.028111 containerd[1461]: time="2025-01-13T21:26:36.028056423Z" level=error msg="ContainerStatus for \"ab1e567dab13f42195a4ded3489ebf284114fb43a38582ea75a7611538451e0e\" failed" error="rpc error: code = NotFound desc = an error occurred when try to find container \"ab1e567dab13f42195a4ded3489ebf284114fb43a38582ea75a7611538451e0e\": not found" Jan 13 21:26:36.028237 kubelet[1772]: E0113 21:26:36.028214 1772 remote_runtime.go:432] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = an error occurred when try to find container \"ab1e567dab13f42195a4ded3489ebf284114fb43a38582ea75a7611538451e0e\": not found" containerID="ab1e567dab13f42195a4ded3489ebf284114fb43a38582ea75a7611538451e0e" Jan 13 21:26:36.028325 kubelet[1772]: I0113 21:26:36.028244 1772 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"containerd","ID":"ab1e567dab13f42195a4ded3489ebf284114fb43a38582ea75a7611538451e0e"} err="failed to get container status 
\"ab1e567dab13f42195a4ded3489ebf284114fb43a38582ea75a7611538451e0e\": rpc error: code = NotFound desc = an error occurred when try to find container \"ab1e567dab13f42195a4ded3489ebf284114fb43a38582ea75a7611538451e0e\": not found" Jan 13 21:26:36.028325 kubelet[1772]: I0113 21:26:36.028322 1772 scope.go:117] "RemoveContainer" containerID="38d83ca3df077aa83a6b8996357d5ff28aaecec20c7833d3a95b86c7bdb942f0" Jan 13 21:26:36.028574 containerd[1461]: time="2025-01-13T21:26:36.028541284Z" level=error msg="ContainerStatus for \"38d83ca3df077aa83a6b8996357d5ff28aaecec20c7833d3a95b86c7bdb942f0\" failed" error="rpc error: code = NotFound desc = an error occurred when try to find container \"38d83ca3df077aa83a6b8996357d5ff28aaecec20c7833d3a95b86c7bdb942f0\": not found" Jan 13 21:26:36.028727 kubelet[1772]: E0113 21:26:36.028699 1772 remote_runtime.go:432] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = an error occurred when try to find container \"38d83ca3df077aa83a6b8996357d5ff28aaecec20c7833d3a95b86c7bdb942f0\": not found" containerID="38d83ca3df077aa83a6b8996357d5ff28aaecec20c7833d3a95b86c7bdb942f0" Jan 13 21:26:36.028812 kubelet[1772]: I0113 21:26:36.028730 1772 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"containerd","ID":"38d83ca3df077aa83a6b8996357d5ff28aaecec20c7833d3a95b86c7bdb942f0"} err="failed to get container status \"38d83ca3df077aa83a6b8996357d5ff28aaecec20c7833d3a95b86c7bdb942f0\": rpc error: code = NotFound desc = an error occurred when try to find container \"38d83ca3df077aa83a6b8996357d5ff28aaecec20c7833d3a95b86c7bdb942f0\": not found" Jan 13 21:26:36.028812 kubelet[1772]: I0113 21:26:36.028770 1772 scope.go:117] "RemoveContainer" containerID="c237ca27fe713c51c9d2408dbf5a8e9207bc5a88a1063cbd46af5b3ac30208b0" Jan 13 21:26:36.029017 containerd[1461]: time="2025-01-13T21:26:36.028914234Z" level=error msg="ContainerStatus for 
\"c237ca27fe713c51c9d2408dbf5a8e9207bc5a88a1063cbd46af5b3ac30208b0\" failed" error="rpc error: code = NotFound desc = an error occurred when try to find container \"c237ca27fe713c51c9d2408dbf5a8e9207bc5a88a1063cbd46af5b3ac30208b0\": not found" Jan 13 21:26:36.029199 kubelet[1772]: E0113 21:26:36.029041 1772 remote_runtime.go:432] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = an error occurred when try to find container \"c237ca27fe713c51c9d2408dbf5a8e9207bc5a88a1063cbd46af5b3ac30208b0\": not found" containerID="c237ca27fe713c51c9d2408dbf5a8e9207bc5a88a1063cbd46af5b3ac30208b0" Jan 13 21:26:36.029199 kubelet[1772]: I0113 21:26:36.029057 1772 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"containerd","ID":"c237ca27fe713c51c9d2408dbf5a8e9207bc5a88a1063cbd46af5b3ac30208b0"} err="failed to get container status \"c237ca27fe713c51c9d2408dbf5a8e9207bc5a88a1063cbd46af5b3ac30208b0\": rpc error: code = NotFound desc = an error occurred when try to find container \"c237ca27fe713c51c9d2408dbf5a8e9207bc5a88a1063cbd46af5b3ac30208b0\": not found" Jan 13 21:26:36.029199 kubelet[1772]: I0113 21:26:36.029070 1772 scope.go:117] "RemoveContainer" containerID="d56cb87e5fc573af8d8d09072cfb1d0fb6b77ed00e2606d92ff087207d30ab16" Jan 13 21:26:36.029302 containerd[1461]: time="2025-01-13T21:26:36.029204110Z" level=error msg="ContainerStatus for \"d56cb87e5fc573af8d8d09072cfb1d0fb6b77ed00e2606d92ff087207d30ab16\" failed" error="rpc error: code = NotFound desc = an error occurred when try to find container \"d56cb87e5fc573af8d8d09072cfb1d0fb6b77ed00e2606d92ff087207d30ab16\": not found" Jan 13 21:26:36.029330 kubelet[1772]: E0113 21:26:36.029314 1772 remote_runtime.go:432] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = an error occurred when try to find container \"d56cb87e5fc573af8d8d09072cfb1d0fb6b77ed00e2606d92ff087207d30ab16\": not found" 
containerID="d56cb87e5fc573af8d8d09072cfb1d0fb6b77ed00e2606d92ff087207d30ab16" Jan 13 21:26:36.029362 kubelet[1772]: I0113 21:26:36.029335 1772 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"containerd","ID":"d56cb87e5fc573af8d8d09072cfb1d0fb6b77ed00e2606d92ff087207d30ab16"} err="failed to get container status \"d56cb87e5fc573af8d8d09072cfb1d0fb6b77ed00e2606d92ff087207d30ab16\": rpc error: code = NotFound desc = an error occurred when try to find container \"d56cb87e5fc573af8d8d09072cfb1d0fb6b77ed00e2606d92ff087207d30ab16\": not found" Jan 13 21:26:36.029362 kubelet[1772]: I0113 21:26:36.029351 1772 scope.go:117] "RemoveContainer" containerID="5bf1228addb648ea8d20b3dbdc8d62513f4565f42db8f5a32ee537674f7ab57b" Jan 13 21:26:36.029530 containerd[1461]: time="2025-01-13T21:26:36.029502140Z" level=error msg="ContainerStatus for \"5bf1228addb648ea8d20b3dbdc8d62513f4565f42db8f5a32ee537674f7ab57b\" failed" error="rpc error: code = NotFound desc = an error occurred when try to find container \"5bf1228addb648ea8d20b3dbdc8d62513f4565f42db8f5a32ee537674f7ab57b\": not found" Jan 13 21:26:36.029686 kubelet[1772]: E0113 21:26:36.029647 1772 remote_runtime.go:432] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = an error occurred when try to find container \"5bf1228addb648ea8d20b3dbdc8d62513f4565f42db8f5a32ee537674f7ab57b\": not found" containerID="5bf1228addb648ea8d20b3dbdc8d62513f4565f42db8f5a32ee537674f7ab57b" Jan 13 21:26:36.029733 kubelet[1772]: I0113 21:26:36.029699 1772 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"containerd","ID":"5bf1228addb648ea8d20b3dbdc8d62513f4565f42db8f5a32ee537674f7ab57b"} err="failed to get container status \"5bf1228addb648ea8d20b3dbdc8d62513f4565f42db8f5a32ee537674f7ab57b\": rpc error: code = NotFound desc = an error occurred when try to find container \"5bf1228addb648ea8d20b3dbdc8d62513f4565f42db8f5a32ee537674f7ab57b\": not found" Jan 13 
21:26:36.239764 systemd[1]: var-lib-kubelet-pods-9cd07f28\x2d31cd\x2d4201\x2db6e0\x2da2b6c24f55bd-volumes-kubernetes.io\x7eprojected-kube\x2dapi\x2daccess\x2dr4wh5.mount: Deactivated successfully. Jan 13 21:26:36.559539 kubelet[1772]: E0113 21:26:36.559465 1772 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Jan 13 21:26:36.879039 kubelet[1772]: I0113 21:26:36.878911 1772 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="9cd07f28-31cd-4201-b6e0-a2b6c24f55bd" path="/var/lib/kubelet/pods/9cd07f28-31cd-4201-b6e0-a2b6c24f55bd/volumes" Jan 13 21:26:37.560282 kubelet[1772]: E0113 21:26:37.560223 1772 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Jan 13 21:26:37.779640 kubelet[1772]: I0113 21:26:37.779583 1772 topology_manager.go:215] "Topology Admit Handler" podUID="2e1b627d-dad0-48b4-97ea-14ab189b458a" podNamespace="kube-system" podName="cilium-operator-599987898-pvk8l" Jan 13 21:26:37.779640 kubelet[1772]: E0113 21:26:37.779642 1772 cpu_manager.go:395] "RemoveStaleState: removing container" podUID="9cd07f28-31cd-4201-b6e0-a2b6c24f55bd" containerName="mount-cgroup" Jan 13 21:26:37.779640 kubelet[1772]: E0113 21:26:37.779653 1772 cpu_manager.go:395] "RemoveStaleState: removing container" podUID="9cd07f28-31cd-4201-b6e0-a2b6c24f55bd" containerName="mount-bpf-fs" Jan 13 21:26:37.779640 kubelet[1772]: E0113 21:26:37.779660 1772 cpu_manager.go:395] "RemoveStaleState: removing container" podUID="9cd07f28-31cd-4201-b6e0-a2b6c24f55bd" containerName="clean-cilium-state" Jan 13 21:26:37.779869 kubelet[1772]: E0113 21:26:37.779669 1772 cpu_manager.go:395] "RemoveStaleState: removing container" podUID="9cd07f28-31cd-4201-b6e0-a2b6c24f55bd" containerName="apply-sysctl-overwrites" Jan 13 21:26:37.779869 kubelet[1772]: E0113 21:26:37.779677 1772 cpu_manager.go:395] "RemoveStaleState: removing container" 
podUID="9cd07f28-31cd-4201-b6e0-a2b6c24f55bd" containerName="cilium-agent"
Jan 13 21:26:37.779869 kubelet[1772]: I0113 21:26:37.779698 1772 memory_manager.go:354] "RemoveStaleState removing state" podUID="9cd07f28-31cd-4201-b6e0-a2b6c24f55bd" containerName="cilium-agent"
Jan 13 21:26:37.785376 systemd[1]: Created slice kubepods-besteffort-pod2e1b627d_dad0_48b4_97ea_14ab189b458a.slice - libcontainer container kubepods-besteffort-pod2e1b627d_dad0_48b4_97ea_14ab189b458a.slice.
Jan 13 21:26:37.791548 kubelet[1772]: I0113 21:26:37.791518 1772 topology_manager.go:215] "Topology Admit Handler" podUID="342a8ea7-3ce6-4997-99f0-4e08342c20f3" podNamespace="kube-system" podName="cilium-5ggqb"
Jan 13 21:26:37.796710 systemd[1]: Created slice kubepods-burstable-pod342a8ea7_3ce6_4997_99f0_4e08342c20f3.slice - libcontainer container kubepods-burstable-pod342a8ea7_3ce6_4997_99f0_4e08342c20f3.slice.
Jan 13 21:26:37.882450 kubelet[1772]: I0113 21:26:37.882271 1772 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cilium-config-path\" (UniqueName: \"kubernetes.io/configmap/342a8ea7-3ce6-4997-99f0-4e08342c20f3-cilium-config-path\") pod \"cilium-5ggqb\" (UID: \"342a8ea7-3ce6-4997-99f0-4e08342c20f3\") " pod="kube-system/cilium-5ggqb"
Jan 13 21:26:37.882450 kubelet[1772]: I0113 21:26:37.882325 1772 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-proc-sys-net\" (UniqueName: \"kubernetes.io/host-path/342a8ea7-3ce6-4997-99f0-4e08342c20f3-host-proc-sys-net\") pod \"cilium-5ggqb\" (UID: \"342a8ea7-3ce6-4997-99f0-4e08342c20f3\") " pod="kube-system/cilium-5ggqb"
Jan 13 21:26:37.882450 kubelet[1772]: I0113 21:26:37.882351 1772 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cilium-run\" (UniqueName: \"kubernetes.io/host-path/342a8ea7-3ce6-4997-99f0-4e08342c20f3-cilium-run\") pod \"cilium-5ggqb\" (UID: \"342a8ea7-3ce6-4997-99f0-4e08342c20f3\") " pod="kube-system/cilium-5ggqb"
Jan 13 21:26:37.882450 kubelet[1772]: I0113 21:26:37.882371 1772 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cilium-cgroup\" (UniqueName: \"kubernetes.io/host-path/342a8ea7-3ce6-4997-99f0-4e08342c20f3-cilium-cgroup\") pod \"cilium-5ggqb\" (UID: \"342a8ea7-3ce6-4997-99f0-4e08342c20f3\") " pod="kube-system/cilium-5ggqb"
Jan 13 21:26:37.882450 kubelet[1772]: I0113 21:26:37.882395 1772 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/342a8ea7-3ce6-4997-99f0-4e08342c20f3-lib-modules\") pod \"cilium-5ggqb\" (UID: \"342a8ea7-3ce6-4997-99f0-4e08342c20f3\") " pod="kube-system/cilium-5ggqb"
Jan 13 21:26:37.882450 kubelet[1772]: I0113 21:26:37.882418 1772 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"xtables-lock\" (UniqueName: \"kubernetes.io/host-path/342a8ea7-3ce6-4997-99f0-4e08342c20f3-xtables-lock\") pod \"cilium-5ggqb\" (UID: \"342a8ea7-3ce6-4997-99f0-4e08342c20f3\") " pod="kube-system/cilium-5ggqb"
Jan 13 21:26:37.882780 kubelet[1772]: I0113 21:26:37.882439 1772 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-proc-sys-kernel\" (UniqueName: \"kubernetes.io/host-path/342a8ea7-3ce6-4997-99f0-4e08342c20f3-host-proc-sys-kernel\") pod \"cilium-5ggqb\" (UID: \"342a8ea7-3ce6-4997-99f0-4e08342c20f3\") " pod="kube-system/cilium-5ggqb"
Jan 13 21:26:37.882780 kubelet[1772]: I0113 21:26:37.882461 1772 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-bmr4d\" (UniqueName: \"kubernetes.io/projected/2e1b627d-dad0-48b4-97ea-14ab189b458a-kube-api-access-bmr4d\") pod \"cilium-operator-599987898-pvk8l\" (UID: \"2e1b627d-dad0-48b4-97ea-14ab189b458a\") " pod="kube-system/cilium-operator-599987898-pvk8l"
Jan 13 21:26:37.882780 kubelet[1772]: I0113 21:26:37.882485 1772 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cni-path\" (UniqueName: \"kubernetes.io/host-path/342a8ea7-3ce6-4997-99f0-4e08342c20f3-cni-path\") pod \"cilium-5ggqb\" (UID: \"342a8ea7-3ce6-4997-99f0-4e08342c20f3\") " pod="kube-system/cilium-5ggqb"
Jan 13 21:26:37.882780 kubelet[1772]: I0113 21:26:37.882504 1772 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"clustermesh-secrets\" (UniqueName: \"kubernetes.io/secret/342a8ea7-3ce6-4997-99f0-4e08342c20f3-clustermesh-secrets\") pod \"cilium-5ggqb\" (UID: \"342a8ea7-3ce6-4997-99f0-4e08342c20f3\") " pod="kube-system/cilium-5ggqb"
Jan 13 21:26:37.882780 kubelet[1772]: I0113 21:26:37.882523 1772 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"hubble-tls\" (UniqueName: \"kubernetes.io/projected/342a8ea7-3ce6-4997-99f0-4e08342c20f3-hubble-tls\") pod \"cilium-5ggqb\" (UID: \"342a8ea7-3ce6-4997-99f0-4e08342c20f3\") " pod="kube-system/cilium-5ggqb"
Jan 13 21:26:37.883199 kubelet[1772]: I0113 21:26:37.882545 1772 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"etc-cni-netd\" (UniqueName: \"kubernetes.io/host-path/342a8ea7-3ce6-4997-99f0-4e08342c20f3-etc-cni-netd\") pod \"cilium-5ggqb\" (UID: \"342a8ea7-3ce6-4997-99f0-4e08342c20f3\") " pod="kube-system/cilium-5ggqb"
Jan 13 21:26:37.883199 kubelet[1772]: I0113 21:26:37.882561 1772 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cilium-ipsec-secrets\" (UniqueName: \"kubernetes.io/secret/342a8ea7-3ce6-4997-99f0-4e08342c20f3-cilium-ipsec-secrets\") pod \"cilium-5ggqb\" (UID: \"342a8ea7-3ce6-4997-99f0-4e08342c20f3\") " pod="kube-system/cilium-5ggqb"
Jan 13 21:26:37.883199 kubelet[1772]: I0113 21:26:37.882576 1772 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-9n5cn\" (UniqueName: \"kubernetes.io/projected/342a8ea7-3ce6-4997-99f0-4e08342c20f3-kube-api-access-9n5cn\") pod \"cilium-5ggqb\" (UID: \"342a8ea7-3ce6-4997-99f0-4e08342c20f3\") " pod="kube-system/cilium-5ggqb"
Jan 13 21:26:37.883199 kubelet[1772]: I0113 21:26:37.882594 1772 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"bpf-maps\" (UniqueName: \"kubernetes.io/host-path/342a8ea7-3ce6-4997-99f0-4e08342c20f3-bpf-maps\") pod \"cilium-5ggqb\" (UID: \"342a8ea7-3ce6-4997-99f0-4e08342c20f3\") " pod="kube-system/cilium-5ggqb"
Jan 13 21:26:37.883199 kubelet[1772]: I0113 21:26:37.882612 1772 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"hostproc\" (UniqueName: \"kubernetes.io/host-path/342a8ea7-3ce6-4997-99f0-4e08342c20f3-hostproc\") pod \"cilium-5ggqb\" (UID: \"342a8ea7-3ce6-4997-99f0-4e08342c20f3\") " pod="kube-system/cilium-5ggqb"
Jan 13 21:26:37.883310 kubelet[1772]: I0113 21:26:37.882635 1772 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cilium-config-path\" (UniqueName: \"kubernetes.io/configmap/2e1b627d-dad0-48b4-97ea-14ab189b458a-cilium-config-path\") pod \"cilium-operator-599987898-pvk8l\" (UID: \"2e1b627d-dad0-48b4-97ea-14ab189b458a\") " pod="kube-system/cilium-operator-599987898-pvk8l"
Jan 13 21:26:38.088295 kubelet[1772]: E0113 21:26:38.088245 1772 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Jan 13 21:26:38.088928 containerd[1461]: time="2025-01-13T21:26:38.088883998Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:cilium-operator-599987898-pvk8l,Uid:2e1b627d-dad0-48b4-97ea-14ab189b458a,Namespace:kube-system,Attempt:0,}"
Jan 13 21:26:38.107332 kubelet[1772]: E0113 21:26:38.107297 1772 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Jan 13 21:26:38.107805 containerd[1461]: time="2025-01-13T21:26:38.107769720Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:cilium-5ggqb,Uid:342a8ea7-3ce6-4997-99f0-4e08342c20f3,Namespace:kube-system,Attempt:0,}"
Jan 13 21:26:38.363726 containerd[1461]: time="2025-01-13T21:26:38.363199094Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1
Jan 13 21:26:38.363726 containerd[1461]: time="2025-01-13T21:26:38.363341321Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1
Jan 13 21:26:38.363726 containerd[1461]: time="2025-01-13T21:26:38.363386086Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
Jan 13 21:26:38.363726 containerd[1461]: time="2025-01-13T21:26:38.363592863Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
Jan 13 21:26:38.371348 containerd[1461]: time="2025-01-13T21:26:38.368818945Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1
Jan 13 21:26:38.371348 containerd[1461]: time="2025-01-13T21:26:38.368921077Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1
Jan 13 21:26:38.371348 containerd[1461]: time="2025-01-13T21:26:38.368936957Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
Jan 13 21:26:38.371348 containerd[1461]: time="2025-01-13T21:26:38.369035923Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
Jan 13 21:26:38.385894 systemd[1]: Started cri-containerd-a29672adefda52eae4d12447b3ed4ce65ef9191fa596bdc3b7ecd60325741d55.scope - libcontainer container a29672adefda52eae4d12447b3ed4ce65ef9191fa596bdc3b7ecd60325741d55.
Jan 13 21:26:38.389291 systemd[1]: Started cri-containerd-5a4d43cefcf4a8af29100ac94e4c163eb9ce6a4fafef6062f5e9603cf53697a8.scope - libcontainer container 5a4d43cefcf4a8af29100ac94e4c163eb9ce6a4fafef6062f5e9603cf53697a8.
Jan 13 21:26:38.411677 containerd[1461]: time="2025-01-13T21:26:38.411621663Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:cilium-5ggqb,Uid:342a8ea7-3ce6-4997-99f0-4e08342c20f3,Namespace:kube-system,Attempt:0,} returns sandbox id \"5a4d43cefcf4a8af29100ac94e4c163eb9ce6a4fafef6062f5e9603cf53697a8\""
Jan 13 21:26:38.412612 kubelet[1772]: E0113 21:26:38.412587 1772 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Jan 13 21:26:38.415050 containerd[1461]: time="2025-01-13T21:26:38.415020381Z" level=info msg="CreateContainer within sandbox \"5a4d43cefcf4a8af29100ac94e4c163eb9ce6a4fafef6062f5e9603cf53697a8\" for container &ContainerMetadata{Name:mount-cgroup,Attempt:0,}"
Jan 13 21:26:38.434608 containerd[1461]: time="2025-01-13T21:26:38.434569820Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:cilium-operator-599987898-pvk8l,Uid:2e1b627d-dad0-48b4-97ea-14ab189b458a,Namespace:kube-system,Attempt:0,} returns sandbox id \"a29672adefda52eae4d12447b3ed4ce65ef9191fa596bdc3b7ecd60325741d55\""
Jan 13 21:26:38.435298 kubelet[1772]: E0113 21:26:38.435235 1772 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Jan 13 21:26:38.435955 containerd[1461]: time="2025-01-13T21:26:38.435933001Z" level=info msg="PullImage \"quay.io/cilium/operator-generic:v1.12.5@sha256:b296eb7f0f7656a5cc19724f40a8a7121b7fd725278b7d61dc91fe0b7ffd7c0e\""
Jan 13 21:26:38.471608 containerd[1461]: time="2025-01-13T21:26:38.471555606Z" level=info msg="CreateContainer within sandbox \"5a4d43cefcf4a8af29100ac94e4c163eb9ce6a4fafef6062f5e9603cf53697a8\" for &ContainerMetadata{Name:mount-cgroup,Attempt:0,} returns container id \"9e3704392d6e0e9dae45104711c283feb9c9bf1912d8849cf616962932e6ebf5\""
Jan 13 21:26:38.472157 containerd[1461]: time="2025-01-13T21:26:38.472112454Z" level=info msg="StartContainer for \"9e3704392d6e0e9dae45104711c283feb9c9bf1912d8849cf616962932e6ebf5\""
Jan 13 21:26:38.504964 kubelet[1772]: E0113 21:26:38.504927 1772 file.go:104] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests"
Jan 13 21:26:38.505893 systemd[1]: Started cri-containerd-9e3704392d6e0e9dae45104711c283feb9c9bf1912d8849cf616962932e6ebf5.scope - libcontainer container 9e3704392d6e0e9dae45104711c283feb9c9bf1912d8849cf616962932e6ebf5.
Jan 13 21:26:38.519231 containerd[1461]: time="2025-01-13T21:26:38.519169136Z" level=info msg="StopPodSandbox for \"1dc0ed26dc9a4d12b8a29f5314562ffa135d8574fef0e7319ae07a0da53dcd08\""
Jan 13 21:26:38.519339 containerd[1461]: time="2025-01-13T21:26:38.519281707Z" level=info msg="TearDown network for sandbox \"1dc0ed26dc9a4d12b8a29f5314562ffa135d8574fef0e7319ae07a0da53dcd08\" successfully"
Jan 13 21:26:38.519339 containerd[1461]: time="2025-01-13T21:26:38.519293850Z" level=info msg="StopPodSandbox for \"1dc0ed26dc9a4d12b8a29f5314562ffa135d8574fef0e7319ae07a0da53dcd08\" returns successfully"
Jan 13 21:26:38.519604 containerd[1461]: time="2025-01-13T21:26:38.519552927Z" level=info msg="RemovePodSandbox for \"1dc0ed26dc9a4d12b8a29f5314562ffa135d8574fef0e7319ae07a0da53dcd08\""
Jan 13 21:26:38.519604 containerd[1461]: time="2025-01-13T21:26:38.519587141Z" level=info msg="Forcibly stopping sandbox \"1dc0ed26dc9a4d12b8a29f5314562ffa135d8574fef0e7319ae07a0da53dcd08\""
Jan 13 21:26:38.519700 containerd[1461]: time="2025-01-13T21:26:38.519639661Z" level=info msg="TearDown network for sandbox \"1dc0ed26dc9a4d12b8a29f5314562ffa135d8574fef0e7319ae07a0da53dcd08\" successfully"
Jan 13 21:26:38.522911 containerd[1461]: time="2025-01-13T21:26:38.522837690Z" level=warning msg="Failed to get podSandbox status for container event for sandboxID \"1dc0ed26dc9a4d12b8a29f5314562ffa135d8574fef0e7319ae07a0da53dcd08\": an error occurred when try to find sandbox: not found. Sending the event with nil podSandboxStatus."
Jan 13 21:26:38.522911 containerd[1461]: time="2025-01-13T21:26:38.522879428Z" level=info msg="RemovePodSandbox \"1dc0ed26dc9a4d12b8a29f5314562ffa135d8574fef0e7319ae07a0da53dcd08\" returns successfully"
Jan 13 21:26:38.534033 containerd[1461]: time="2025-01-13T21:26:38.534003143Z" level=info msg="StartContainer for \"9e3704392d6e0e9dae45104711c283feb9c9bf1912d8849cf616962932e6ebf5\" returns successfully"
Jan 13 21:26:38.541106 systemd[1]: cri-containerd-9e3704392d6e0e9dae45104711c283feb9c9bf1912d8849cf616962932e6ebf5.scope: Deactivated successfully.
Jan 13 21:26:38.561210 kubelet[1772]: E0113 21:26:38.561164 1772 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests"
Jan 13 21:26:38.573539 containerd[1461]: time="2025-01-13T21:26:38.573467728Z" level=info msg="shim disconnected" id=9e3704392d6e0e9dae45104711c283feb9c9bf1912d8849cf616962932e6ebf5 namespace=k8s.io
Jan 13 21:26:38.573539 containerd[1461]: time="2025-01-13T21:26:38.573521520Z" level=warning msg="cleaning up after shim disconnected" id=9e3704392d6e0e9dae45104711c283feb9c9bf1912d8849cf616962932e6ebf5 namespace=k8s.io
Jan 13 21:26:38.573539 containerd[1461]: time="2025-01-13T21:26:38.573531729Z" level=info msg="cleaning up dead shim" namespace=k8s.io
Jan 13 21:26:38.902924 kubelet[1772]: E0113 21:26:38.902891 1772 kubelet.go:2900] "Container runtime network not ready" networkReady="NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized"
Jan 13 21:26:39.006434 kubelet[1772]: E0113 21:26:39.006391 1772 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Jan 13 21:26:39.008001 containerd[1461]: time="2025-01-13T21:26:39.007963349Z" level=info msg="CreateContainer within sandbox \"5a4d43cefcf4a8af29100ac94e4c163eb9ce6a4fafef6062f5e9603cf53697a8\" for container &ContainerMetadata{Name:apply-sysctl-overwrites,Attempt:0,}"
Jan 13 21:26:39.020898 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount1067051514.mount: Deactivated successfully.
Jan 13 21:26:39.022307 containerd[1461]: time="2025-01-13T21:26:39.022272165Z" level=info msg="CreateContainer within sandbox \"5a4d43cefcf4a8af29100ac94e4c163eb9ce6a4fafef6062f5e9603cf53697a8\" for &ContainerMetadata{Name:apply-sysctl-overwrites,Attempt:0,} returns container id \"755b37ed26506949ba72d1deab7b4835d4c39157f0de1ca3a0b55e3ae5c8cee7\""
Jan 13 21:26:39.022755 containerd[1461]: time="2025-01-13T21:26:39.022721570Z" level=info msg="StartContainer for \"755b37ed26506949ba72d1deab7b4835d4c39157f0de1ca3a0b55e3ae5c8cee7\""
Jan 13 21:26:39.050877 systemd[1]: Started cri-containerd-755b37ed26506949ba72d1deab7b4835d4c39157f0de1ca3a0b55e3ae5c8cee7.scope - libcontainer container 755b37ed26506949ba72d1deab7b4835d4c39157f0de1ca3a0b55e3ae5c8cee7.
Jan 13 21:26:39.074831 containerd[1461]: time="2025-01-13T21:26:39.074779810Z" level=info msg="StartContainer for \"755b37ed26506949ba72d1deab7b4835d4c39157f0de1ca3a0b55e3ae5c8cee7\" returns successfully"
Jan 13 21:26:39.080998 systemd[1]: cri-containerd-755b37ed26506949ba72d1deab7b4835d4c39157f0de1ca3a0b55e3ae5c8cee7.scope: Deactivated successfully.
Jan 13 21:26:39.104302 containerd[1461]: time="2025-01-13T21:26:39.104239656Z" level=info msg="shim disconnected" id=755b37ed26506949ba72d1deab7b4835d4c39157f0de1ca3a0b55e3ae5c8cee7 namespace=k8s.io
Jan 13 21:26:39.104302 containerd[1461]: time="2025-01-13T21:26:39.104298246Z" level=warning msg="cleaning up after shim disconnected" id=755b37ed26506949ba72d1deab7b4835d4c39157f0de1ca3a0b55e3ae5c8cee7 namespace=k8s.io
Jan 13 21:26:39.104302 containerd[1461]: time="2025-01-13T21:26:39.104308565Z" level=info msg="cleaning up dead shim" namespace=k8s.io
Jan 13 21:26:39.561362 kubelet[1772]: E0113 21:26:39.561312 1772 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests"
Jan 13 21:26:39.988288 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-755b37ed26506949ba72d1deab7b4835d4c39157f0de1ca3a0b55e3ae5c8cee7-rootfs.mount: Deactivated successfully.
Jan 13 21:26:40.009284 kubelet[1772]: E0113 21:26:40.009254 1772 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Jan 13 21:26:40.010924 containerd[1461]: time="2025-01-13T21:26:40.010883943Z" level=info msg="CreateContainer within sandbox \"5a4d43cefcf4a8af29100ac94e4c163eb9ce6a4fafef6062f5e9603cf53697a8\" for container &ContainerMetadata{Name:mount-bpf-fs,Attempt:0,}"
Jan 13 21:26:40.024340 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount3572510816.mount: Deactivated successfully.
Jan 13 21:26:40.027985 containerd[1461]: time="2025-01-13T21:26:40.027931004Z" level=info msg="CreateContainer within sandbox \"5a4d43cefcf4a8af29100ac94e4c163eb9ce6a4fafef6062f5e9603cf53697a8\" for &ContainerMetadata{Name:mount-bpf-fs,Attempt:0,} returns container id \"d1469e4ed9a4581dd48069713876ffa1f57630d845ad8765006d1b4f1130c44e\""
Jan 13 21:26:40.028512 containerd[1461]: time="2025-01-13T21:26:40.028474675Z" level=info msg="StartContainer for \"d1469e4ed9a4581dd48069713876ffa1f57630d845ad8765006d1b4f1130c44e\""
Jan 13 21:26:40.060910 systemd[1]: Started cri-containerd-d1469e4ed9a4581dd48069713876ffa1f57630d845ad8765006d1b4f1130c44e.scope - libcontainer container d1469e4ed9a4581dd48069713876ffa1f57630d845ad8765006d1b4f1130c44e.
Jan 13 21:26:40.089129 containerd[1461]: time="2025-01-13T21:26:40.089044515Z" level=info msg="StartContainer for \"d1469e4ed9a4581dd48069713876ffa1f57630d845ad8765006d1b4f1130c44e\" returns successfully"
Jan 13 21:26:40.089212 systemd[1]: cri-containerd-d1469e4ed9a4581dd48069713876ffa1f57630d845ad8765006d1b4f1130c44e.scope: Deactivated successfully.
Jan 13 21:26:40.114755 containerd[1461]: time="2025-01-13T21:26:40.114669089Z" level=info msg="shim disconnected" id=d1469e4ed9a4581dd48069713876ffa1f57630d845ad8765006d1b4f1130c44e namespace=k8s.io
Jan 13 21:26:40.114755 containerd[1461]: time="2025-01-13T21:26:40.114730695Z" level=warning msg="cleaning up after shim disconnected" id=d1469e4ed9a4581dd48069713876ffa1f57630d845ad8765006d1b4f1130c44e namespace=k8s.io
Jan 13 21:26:40.115246 containerd[1461]: time="2025-01-13T21:26:40.114763396Z" level=info msg="cleaning up dead shim" namespace=k8s.io
Jan 13 21:26:40.123302 kubelet[1772]: I0113 21:26:40.123210 1772 setters.go:580] "Node became not ready" node="10.0.0.106" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-01-13T21:26:40Z","lastTransitionTime":"2025-01-13T21:26:40Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized"}
Jan 13 21:26:40.562531 kubelet[1772]: E0113 21:26:40.562468 1772 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests"
Jan 13 21:26:40.988614 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-d1469e4ed9a4581dd48069713876ffa1f57630d845ad8765006d1b4f1130c44e-rootfs.mount: Deactivated successfully.
Jan 13 21:26:41.015576 kubelet[1772]: E0113 21:26:41.015544 1772 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Jan 13 21:26:41.017321 containerd[1461]: time="2025-01-13T21:26:41.017271114Z" level=info msg="CreateContainer within sandbox \"5a4d43cefcf4a8af29100ac94e4c163eb9ce6a4fafef6062f5e9603cf53697a8\" for container &ContainerMetadata{Name:clean-cilium-state,Attempt:0,}"
Jan 13 21:26:41.030042 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount360510683.mount: Deactivated successfully.
Jan 13 21:26:41.031209 containerd[1461]: time="2025-01-13T21:26:41.031162270Z" level=info msg="CreateContainer within sandbox \"5a4d43cefcf4a8af29100ac94e4c163eb9ce6a4fafef6062f5e9603cf53697a8\" for &ContainerMetadata{Name:clean-cilium-state,Attempt:0,} returns container id \"312b6571285a8847e9db4bb883d6823af877630dd9ded87f878d7651494e5c80\""
Jan 13 21:26:41.031730 containerd[1461]: time="2025-01-13T21:26:41.031691234Z" level=info msg="StartContainer for \"312b6571285a8847e9db4bb883d6823af877630dd9ded87f878d7651494e5c80\""
Jan 13 21:26:41.061011 systemd[1]: Started cri-containerd-312b6571285a8847e9db4bb883d6823af877630dd9ded87f878d7651494e5c80.scope - libcontainer container 312b6571285a8847e9db4bb883d6823af877630dd9ded87f878d7651494e5c80.
Jan 13 21:26:41.084530 systemd[1]: cri-containerd-312b6571285a8847e9db4bb883d6823af877630dd9ded87f878d7651494e5c80.scope: Deactivated successfully.
Jan 13 21:26:41.086693 containerd[1461]: time="2025-01-13T21:26:41.086642284Z" level=info msg="StartContainer for \"312b6571285a8847e9db4bb883d6823af877630dd9ded87f878d7651494e5c80\" returns successfully"
Jan 13 21:26:41.109691 containerd[1461]: time="2025-01-13T21:26:41.109618978Z" level=info msg="shim disconnected" id=312b6571285a8847e9db4bb883d6823af877630dd9ded87f878d7651494e5c80 namespace=k8s.io
Jan 13 21:26:41.109691 containerd[1461]: time="2025-01-13T21:26:41.109676467Z" level=warning msg="cleaning up after shim disconnected" id=312b6571285a8847e9db4bb883d6823af877630dd9ded87f878d7651494e5c80 namespace=k8s.io
Jan 13 21:26:41.109691 containerd[1461]: time="2025-01-13T21:26:41.109687778Z" level=info msg="cleaning up dead shim" namespace=k8s.io
Jan 13 21:26:41.563127 kubelet[1772]: E0113 21:26:41.563068 1772 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests"
Jan 13 21:26:41.988523 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-312b6571285a8847e9db4bb883d6823af877630dd9ded87f878d7651494e5c80-rootfs.mount: Deactivated successfully.
Jan 13 21:26:42.019367 kubelet[1772]: E0113 21:26:42.019333 1772 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Jan 13 21:26:42.021079 containerd[1461]: time="2025-01-13T21:26:42.021032372Z" level=info msg="CreateContainer within sandbox \"5a4d43cefcf4a8af29100ac94e4c163eb9ce6a4fafef6062f5e9603cf53697a8\" for container &ContainerMetadata{Name:cilium-agent,Attempt:0,}"
Jan 13 21:26:42.118933 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount1542396022.mount: Deactivated successfully.
Jan 13 21:26:42.121194 containerd[1461]: time="2025-01-13T21:26:42.121137728Z" level=info msg="CreateContainer within sandbox \"5a4d43cefcf4a8af29100ac94e4c163eb9ce6a4fafef6062f5e9603cf53697a8\" for &ContainerMetadata{Name:cilium-agent,Attempt:0,} returns container id \"f69d440b9c251d17d5b7cd06f651c0fc6a7e5d6f90aa43c2fdf5b4ed82fc0060\""
Jan 13 21:26:42.121727 containerd[1461]: time="2025-01-13T21:26:42.121697430Z" level=info msg="StartContainer for \"f69d440b9c251d17d5b7cd06f651c0fc6a7e5d6f90aa43c2fdf5b4ed82fc0060\""
Jan 13 21:26:42.158910 systemd[1]: Started cri-containerd-f69d440b9c251d17d5b7cd06f651c0fc6a7e5d6f90aa43c2fdf5b4ed82fc0060.scope - libcontainer container f69d440b9c251d17d5b7cd06f651c0fc6a7e5d6f90aa43c2fdf5b4ed82fc0060.
Jan 13 21:26:42.191392 containerd[1461]: time="2025-01-13T21:26:42.191340718Z" level=info msg="StartContainer for \"f69d440b9c251d17d5b7cd06f651c0fc6a7e5d6f90aa43c2fdf5b4ed82fc0060\" returns successfully"
Jan 13 21:26:42.563271 kubelet[1772]: E0113 21:26:42.563224 1772 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests"
Jan 13 21:26:42.594804 kernel: alg: No test for seqiv(rfc4106(gcm(aes))) (seqiv(rfc4106-gcm-aesni))
Jan 13 21:26:43.023711 kubelet[1772]: E0113 21:26:43.023666 1772 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Jan 13 21:26:43.109488 kubelet[1772]: I0113 21:26:43.109423 1772 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/cilium-5ggqb" podStartSLOduration=6.109405383 podStartE2EDuration="6.109405383s" podCreationTimestamp="2025-01-13 21:26:37 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-01-13 21:26:43.109337816 +0000 UTC m=+64.955344846" watchObservedRunningTime="2025-01-13 21:26:43.109405383 +0000 UTC m=+64.955412413"
Jan 13 21:26:43.564129 kubelet[1772]: E0113 21:26:43.564057 1772 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests"
Jan 13 21:26:44.108902 kubelet[1772]: E0113 21:26:44.108829 1772 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Jan 13 21:26:44.156808 systemd[1]: run-containerd-runc-k8s.io-f69d440b9c251d17d5b7cd06f651c0fc6a7e5d6f90aa43c2fdf5b4ed82fc0060-runc.bsNmbS.mount: Deactivated successfully.
Jan 13 21:26:44.564277 kubelet[1772]: E0113 21:26:44.564219 1772 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests"
Jan 13 21:26:45.564556 kubelet[1772]: E0113 21:26:45.564503 1772 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests"
Jan 13 21:26:45.680941 systemd-networkd[1393]: lxc_health: Link UP
Jan 13 21:26:45.689327 systemd-networkd[1393]: lxc_health: Gained carrier
Jan 13 21:26:46.109027 kubelet[1772]: E0113 21:26:46.108989 1772 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Jan 13 21:26:46.564760 kubelet[1772]: E0113 21:26:46.564672 1772 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests"
Jan 13 21:26:47.030648 kubelet[1772]: E0113 21:26:47.030622 1772 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Jan 13 21:26:47.109137 systemd-networkd[1393]: lxc_health: Gained IPv6LL
Jan 13 21:26:47.565891 kubelet[1772]: E0113 21:26:47.565832 1772 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests"
Jan 13 21:26:48.032514 kubelet[1772]: E0113 21:26:48.032456 1772 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Jan 13 21:26:48.435333 systemd[1]: run-containerd-runc-k8s.io-f69d440b9c251d17d5b7cd06f651c0fc6a7e5d6f90aa43c2fdf5b4ed82fc0060-runc.eTjbvG.mount: Deactivated successfully.
Jan 13 21:26:48.566276 kubelet[1772]: E0113 21:26:48.566210 1772 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests"
Jan 13 21:26:48.588241 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount4028686409.mount: Deactivated successfully.
Jan 13 21:26:48.911363 containerd[1461]: time="2025-01-13T21:26:48.911243055Z" level=info msg="ImageCreate event name:\"quay.io/cilium/operator-generic@sha256:b296eb7f0f7656a5cc19724f40a8a7121b7fd725278b7d61dc91fe0b7ffd7c0e\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Jan 13 21:26:48.916756 containerd[1461]: time="2025-01-13T21:26:48.915732585Z" level=info msg="stop pulling image quay.io/cilium/operator-generic@sha256:b296eb7f0f7656a5cc19724f40a8a7121b7fd725278b7d61dc91fe0b7ffd7c0e: active requests=0, bytes read=18907241"
Jan 13 21:26:48.916970 containerd[1461]: time="2025-01-13T21:26:48.916937887Z" level=info msg="ImageCreate event name:\"sha256:ed355de9f59fe391dbe53f3c7c7a60baab3c3a9b7549aa54d10b87fff7dacf7c\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Jan 13 21:26:48.918160 containerd[1461]: time="2025-01-13T21:26:48.918117973Z" level=info msg="Pulled image \"quay.io/cilium/operator-generic:v1.12.5@sha256:b296eb7f0f7656a5cc19724f40a8a7121b7fd725278b7d61dc91fe0b7ffd7c0e\" with image id \"sha256:ed355de9f59fe391dbe53f3c7c7a60baab3c3a9b7549aa54d10b87fff7dacf7c\", repo tag \"\", repo digest \"quay.io/cilium/operator-generic@sha256:b296eb7f0f7656a5cc19724f40a8a7121b7fd725278b7d61dc91fe0b7ffd7c0e\", size \"18897442\" in 10.482157801s"
Jan 13 21:26:48.918160 containerd[1461]: time="2025-01-13T21:26:48.918150064Z" level=info msg="PullImage \"quay.io/cilium/operator-generic:v1.12.5@sha256:b296eb7f0f7656a5cc19724f40a8a7121b7fd725278b7d61dc91fe0b7ffd7c0e\" returns image reference \"sha256:ed355de9f59fe391dbe53f3c7c7a60baab3c3a9b7549aa54d10b87fff7dacf7c\""
Jan 13 21:26:48.924248 containerd[1461]: time="2025-01-13T21:26:48.924215921Z" level=info msg="CreateContainer within sandbox \"a29672adefda52eae4d12447b3ed4ce65ef9191fa596bdc3b7ecd60325741d55\" for container &ContainerMetadata{Name:cilium-operator,Attempt:0,}"
Jan 13 21:26:48.943429 containerd[1461]: time="2025-01-13T21:26:48.943385321Z" level=info msg="CreateContainer within sandbox \"a29672adefda52eae4d12447b3ed4ce65ef9191fa596bdc3b7ecd60325741d55\" for &ContainerMetadata{Name:cilium-operator,Attempt:0,} returns container id \"8001b178d7e8bfd1f5419b0e4d18205b7ac27c3fc5f42db25cb18e829453b3a0\""
Jan 13 21:26:48.945316 containerd[1461]: time="2025-01-13T21:26:48.945285979Z" level=info msg="StartContainer for \"8001b178d7e8bfd1f5419b0e4d18205b7ac27c3fc5f42db25cb18e829453b3a0\""
Jan 13 21:26:48.996067 systemd[1]: Started cri-containerd-8001b178d7e8bfd1f5419b0e4d18205b7ac27c3fc5f42db25cb18e829453b3a0.scope - libcontainer container 8001b178d7e8bfd1f5419b0e4d18205b7ac27c3fc5f42db25cb18e829453b3a0.
Jan 13 21:26:49.051413 containerd[1461]: time="2025-01-13T21:26:49.051362833Z" level=info msg="StartContainer for \"8001b178d7e8bfd1f5419b0e4d18205b7ac27c3fc5f42db25cb18e829453b3a0\" returns successfully"
Jan 13 21:26:49.566564 kubelet[1772]: E0113 21:26:49.566471 1772 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests"
Jan 13 21:26:50.042696 kubelet[1772]: E0113 21:26:50.042658 1772 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Jan 13 21:26:50.232491 kubelet[1772]: I0113 21:26:50.232424 1772 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/cilium-operator-599987898-pvk8l" podStartSLOduration=2.749279907 podStartE2EDuration="13.232407669s" podCreationTimestamp="2025-01-13 21:26:37 +0000 UTC" firstStartedPulling="2025-01-13 21:26:38.435686448 +0000 UTC m=+60.281693478" lastFinishedPulling="2025-01-13 21:26:48.91881421 +0000 UTC m=+70.764821240" observedRunningTime="2025-01-13 21:26:50.232195241 +0000 UTC m=+72.078202281" watchObservedRunningTime="2025-01-13 21:26:50.232407669 +0000 UTC m=+72.078414689"
Jan 13 21:26:50.567586 kubelet[1772]: E0113 21:26:50.567526 1772 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests"
Jan 13 21:26:51.044246 kubelet[1772]: E0113 21:26:51.044200 1772 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Jan 13 21:26:51.568050 kubelet[1772]: E0113 21:26:51.567936 1772 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests"
Jan 13 21:26:52.569071 kubelet[1772]: E0113 21:26:52.568997 1772 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests"
Jan 13 21:26:53.569903 kubelet[1772]: E0113 21:26:53.569820 1772 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests"