Jan 30 13:46:12.874588 kernel: Linux version 6.6.74-flatcar (build@pony-truck.infra.kinvolk.io) (x86_64-cros-linux-gnu-gcc (Gentoo Hardened 13.3.1_p20240614 p17) 13.3.1 20240614, GNU ld (Gentoo 2.42 p3) 2.42.0) #1 SMP PREEMPT_DYNAMIC Wed Jan 29 10:09:32 -00 2025 Jan 30 13:46:12.874608 kernel: Command line: BOOT_IMAGE=/flatcar/vmlinuz-a mount.usr=/dev/mapper/usr verity.usr=PARTUUID=7130c94a-213a-4e5a-8e26-6cce9662f132 rootflags=rw mount.usrflags=ro consoleblank=0 root=LABEL=ROOT console=ttyS0,115200 flatcar.first_boot=detected verity.usrhash=befc9792b021bef43c896e00e1d5172b6224dbafc9b6c92b267e5e544378e681 Jan 30 13:46:12.874619 kernel: BIOS-provided physical RAM map: Jan 30 13:46:12.874625 kernel: BIOS-e820: [mem 0x0000000000000000-0x000000000009ffff] usable Jan 30 13:46:12.874631 kernel: BIOS-e820: [mem 0x0000000000100000-0x00000000007fffff] usable Jan 30 13:46:12.874637 kernel: BIOS-e820: [mem 0x0000000000800000-0x0000000000807fff] ACPI NVS Jan 30 13:46:12.874644 kernel: BIOS-e820: [mem 0x0000000000808000-0x000000000080afff] usable Jan 30 13:46:12.874651 kernel: BIOS-e820: [mem 0x000000000080b000-0x000000000080bfff] ACPI NVS Jan 30 13:46:12.874657 kernel: BIOS-e820: [mem 0x000000000080c000-0x000000000080ffff] usable Jan 30 13:46:12.874663 kernel: BIOS-e820: [mem 0x0000000000810000-0x00000000008fffff] ACPI NVS Jan 30 13:46:12.874671 kernel: BIOS-e820: [mem 0x0000000000900000-0x000000009c8eefff] usable Jan 30 13:46:12.874677 kernel: BIOS-e820: [mem 0x000000009c8ef000-0x000000009c9eefff] reserved Jan 30 13:46:12.874684 kernel: BIOS-e820: [mem 0x000000009c9ef000-0x000000009caeefff] type 20 Jan 30 13:46:12.874690 kernel: BIOS-e820: [mem 0x000000009caef000-0x000000009cb6efff] reserved Jan 30 13:46:12.874698 kernel: BIOS-e820: [mem 0x000000009cb6f000-0x000000009cb7efff] ACPI data Jan 30 13:46:12.874705 kernel: BIOS-e820: [mem 0x000000009cb7f000-0x000000009cbfefff] ACPI NVS Jan 30 13:46:12.874713 kernel: BIOS-e820: [mem 0x000000009cbff000-0x000000009cf3ffff] usable Jan 30 13:46:12.874720 kernel: BIOS-e820: [mem 0x000000009cf40000-0x000000009cf5ffff] reserved Jan 30 13:46:12.874727 kernel: BIOS-e820: [mem 0x000000009cf60000-0x000000009cffffff] ACPI NVS Jan 30 13:46:12.874733 kernel: BIOS-e820: [mem 0x00000000b0000000-0x00000000bfffffff] reserved Jan 30 13:46:12.874740 kernel: NX (Execute Disable) protection: active Jan 30 13:46:12.874747 kernel: APIC: Static calls initialized Jan 30 13:46:12.874753 kernel: efi: EFI v2.7 by EDK II Jan 30 13:46:12.874760 kernel: efi: SMBIOS=0x9c9ab000 ACPI=0x9cb7e000 ACPI 2.0=0x9cb7e014 MEMATTR=0x9b674118 Jan 30 13:46:12.874767 kernel: SMBIOS 2.8 present. 
Jan 30 13:46:12.874773 kernel: DMI: QEMU Standard PC (Q35 + ICH9, 2009), BIOS 0.0.0 02/06/2015 Jan 30 13:46:12.874780 kernel: Hypervisor detected: KVM Jan 30 13:46:12.874789 kernel: kvm-clock: Using msrs 4b564d01 and 4b564d00 Jan 30 13:46:12.874795 kernel: kvm-clock: using sched offset of 3899804369 cycles Jan 30 13:46:12.874802 kernel: clocksource: kvm-clock: mask: 0xffffffffffffffff max_cycles: 0x1cd42e4dffb, max_idle_ns: 881590591483 ns Jan 30 13:46:12.874844 kernel: tsc: Detected 2794.750 MHz processor Jan 30 13:46:12.874851 kernel: e820: update [mem 0x00000000-0x00000fff] usable ==> reserved Jan 30 13:46:12.874858 kernel: e820: remove [mem 0x000a0000-0x000fffff] usable Jan 30 13:46:12.874865 kernel: last_pfn = 0x9cf40 max_arch_pfn = 0x400000000 Jan 30 13:46:12.874872 kernel: MTRR map: 4 entries (2 fixed + 2 variable; max 18), built from 8 variable MTRRs Jan 30 13:46:12.874879 kernel: x86/PAT: Configuration [0-7]: WB WC UC- UC WB WP UC- WT Jan 30 13:46:12.874888 kernel: Using GB pages for direct mapping Jan 30 13:46:12.874895 kernel: Secure boot disabled Jan 30 13:46:12.874902 kernel: ACPI: Early table checksum verification disabled Jan 30 13:46:12.874909 kernel: ACPI: RSDP 0x000000009CB7E014 000024 (v02 BOCHS ) Jan 30 13:46:12.874919 kernel: ACPI: XSDT 0x000000009CB7D0E8 000054 (v01 BOCHS BXPC 00000001 01000013) Jan 30 13:46:12.874926 kernel: ACPI: FACP 0x000000009CB79000 0000F4 (v03 BOCHS BXPC 00000001 BXPC 00000001) Jan 30 13:46:12.874934 kernel: ACPI: DSDT 0x000000009CB7A000 0021A8 (v01 BOCHS BXPC 00000001 BXPC 00000001) Jan 30 13:46:12.874943 kernel: ACPI: FACS 0x000000009CBDD000 000040 Jan 30 13:46:12.874950 kernel: ACPI: APIC 0x000000009CB78000 000090 (v01 BOCHS BXPC 00000001 BXPC 00000001) Jan 30 13:46:12.874957 kernel: ACPI: HPET 0x000000009CB77000 000038 (v01 BOCHS BXPC 00000001 BXPC 00000001) Jan 30 13:46:12.874964 kernel: ACPI: MCFG 0x000000009CB76000 00003C (v01 BOCHS BXPC 00000001 BXPC 00000001) Jan 30 13:46:12.874978 kernel: ACPI: WAET 0x000000009CB75000 000028 (v01 BOCHS BXPC 00000001 BXPC 00000001) Jan 30 13:46:12.874985 kernel: ACPI: BGRT 0x000000009CB74000 000038 (v01 INTEL EDK2 00000002 01000013) Jan 30 13:46:12.874992 kernel: ACPI: Reserving FACP table memory at [mem 0x9cb79000-0x9cb790f3] Jan 30 13:46:12.875001 kernel: ACPI: Reserving DSDT table memory at [mem 0x9cb7a000-0x9cb7c1a7] Jan 30 13:46:12.875008 kernel: ACPI: Reserving FACS table memory at [mem 0x9cbdd000-0x9cbdd03f] Jan 30 13:46:12.875015 kernel: ACPI: Reserving APIC table memory at [mem 0x9cb78000-0x9cb7808f] Jan 30 13:46:12.875022 kernel: ACPI: Reserving HPET table memory at [mem 0x9cb77000-0x9cb77037] Jan 30 13:46:12.875029 kernel: ACPI: Reserving MCFG table memory at [mem 0x9cb76000-0x9cb7603b] Jan 30 13:46:12.875036 kernel: ACPI: Reserving WAET table memory at [mem 0x9cb75000-0x9cb75027] Jan 30 13:46:12.875044 kernel: ACPI: Reserving BGRT table memory at [mem 0x9cb74000-0x9cb74037] Jan 30 13:46:12.875051 kernel: No NUMA configuration found Jan 30 13:46:12.875058 kernel: Faking a node at [mem 0x0000000000000000-0x000000009cf3ffff] Jan 30 13:46:12.875067 kernel: NODE_DATA(0) allocated [mem 0x9cea6000-0x9ceabfff] Jan 30 13:46:12.875074 kernel: Zone ranges: Jan 30 13:46:12.875081 kernel: DMA [mem 0x0000000000001000-0x0000000000ffffff] Jan 30 13:46:12.875089 kernel: DMA32 [mem 0x0000000001000000-0x000000009cf3ffff] Jan 30 13:46:12.875096 kernel: Normal empty Jan 30 13:46:12.875103 kernel: Movable zone start for each node Jan 30 13:46:12.875110 kernel: Early memory node ranges Jan 30 13:46:12.875117 
kernel: node 0: [mem 0x0000000000001000-0x000000000009ffff] Jan 30 13:46:12.875124 kernel: node 0: [mem 0x0000000000100000-0x00000000007fffff] Jan 30 13:46:12.875131 kernel: node 0: [mem 0x0000000000808000-0x000000000080afff] Jan 30 13:46:12.875140 kernel: node 0: [mem 0x000000000080c000-0x000000000080ffff] Jan 30 13:46:12.875147 kernel: node 0: [mem 0x0000000000900000-0x000000009c8eefff] Jan 30 13:46:12.875154 kernel: node 0: [mem 0x000000009cbff000-0x000000009cf3ffff] Jan 30 13:46:12.875161 kernel: Initmem setup node 0 [mem 0x0000000000001000-0x000000009cf3ffff] Jan 30 13:46:12.875168 kernel: On node 0, zone DMA: 1 pages in unavailable ranges Jan 30 13:46:12.875175 kernel: On node 0, zone DMA: 96 pages in unavailable ranges Jan 30 13:46:12.875182 kernel: On node 0, zone DMA: 8 pages in unavailable ranges Jan 30 13:46:12.875189 kernel: On node 0, zone DMA: 1 pages in unavailable ranges Jan 30 13:46:12.875196 kernel: On node 0, zone DMA: 240 pages in unavailable ranges Jan 30 13:46:12.875205 kernel: On node 0, zone DMA32: 784 pages in unavailable ranges Jan 30 13:46:12.875212 kernel: On node 0, zone DMA32: 12480 pages in unavailable ranges Jan 30 13:46:12.875219 kernel: ACPI: PM-Timer IO Port: 0x608 Jan 30 13:46:12.875226 kernel: ACPI: LAPIC_NMI (acpi_id[0xff] dfl dfl lint[0x1]) Jan 30 13:46:12.875234 kernel: IOAPIC[0]: apic_id 0, version 17, address 0xfec00000, GSI 0-23 Jan 30 13:46:12.875241 kernel: ACPI: INT_SRC_OVR (bus 0 bus_irq 0 global_irq 2 dfl dfl) Jan 30 13:46:12.875248 kernel: ACPI: INT_SRC_OVR (bus 0 bus_irq 5 global_irq 5 high level) Jan 30 13:46:12.875255 kernel: ACPI: INT_SRC_OVR (bus 0 bus_irq 9 global_irq 9 high level) Jan 30 13:46:12.875262 kernel: ACPI: INT_SRC_OVR (bus 0 bus_irq 10 global_irq 10 high level) Jan 30 13:46:12.875269 kernel: ACPI: INT_SRC_OVR (bus 0 bus_irq 11 global_irq 11 high level) Jan 30 13:46:12.875278 kernel: ACPI: Using ACPI (MADT) for SMP configuration information Jan 30 13:46:12.875285 kernel: ACPI: HPET id: 0x8086a201 base: 0xfed00000 Jan 30 13:46:12.875292 kernel: TSC deadline timer available Jan 30 13:46:12.875299 kernel: smpboot: Allowing 4 CPUs, 0 hotplug CPUs Jan 30 13:46:12.875306 kernel: kvm-guest: APIC: eoi() replaced with kvm_guest_apic_eoi_write() Jan 30 13:46:12.875313 kernel: kvm-guest: KVM setup pv remote TLB flush Jan 30 13:46:12.875320 kernel: kvm-guest: setup PV sched yield Jan 30 13:46:12.875327 kernel: [mem 0xc0000000-0xffffffff] available for PCI devices Jan 30 13:46:12.875334 kernel: Booting paravirtualized kernel on KVM Jan 30 13:46:12.875344 kernel: clocksource: refined-jiffies: mask: 0xffffffff max_cycles: 0xffffffff, max_idle_ns: 1910969940391419 ns Jan 30 13:46:12.875351 kernel: setup_percpu: NR_CPUS:512 nr_cpumask_bits:4 nr_cpu_ids:4 nr_node_ids:1 Jan 30 13:46:12.875358 kernel: percpu: Embedded 58 pages/cpu s197032 r8192 d32344 u524288 Jan 30 13:46:12.875365 kernel: pcpu-alloc: s197032 r8192 d32344 u524288 alloc=1*2097152 Jan 30 13:46:12.875374 kernel: pcpu-alloc: [0] 0 1 2 3 Jan 30 13:46:12.875382 kernel: kvm-guest: PV spinlocks enabled Jan 30 13:46:12.875390 kernel: PV qspinlock hash table entries: 256 (order: 0, 4096 bytes, linear) Jan 30 13:46:12.875400 kernel: Kernel command line: rootflags=rw mount.usrflags=ro BOOT_IMAGE=/flatcar/vmlinuz-a mount.usr=/dev/mapper/usr verity.usr=PARTUUID=7130c94a-213a-4e5a-8e26-6cce9662f132 rootflags=rw mount.usrflags=ro consoleblank=0 root=LABEL=ROOT console=ttyS0,115200 flatcar.first_boot=detected verity.usrhash=befc9792b021bef43c896e00e1d5172b6224dbafc9b6c92b267e5e544378e681 Jan 30 
13:46:12.875409 kernel: Unknown kernel command line parameters "BOOT_IMAGE=/flatcar/vmlinuz-a", will be passed to user space. Jan 30 13:46:12.875416 kernel: random: crng init done Jan 30 13:46:12.875424 kernel: Dentry cache hash table entries: 524288 (order: 10, 4194304 bytes, linear) Jan 30 13:46:12.875431 kernel: Inode-cache hash table entries: 262144 (order: 9, 2097152 bytes, linear) Jan 30 13:46:12.875438 kernel: Fallback order for Node 0: 0 Jan 30 13:46:12.875445 kernel: Built 1 zonelists, mobility grouping on. Total pages: 629759 Jan 30 13:46:12.875452 kernel: Policy zone: DMA32 Jan 30 13:46:12.875459 kernel: mem auto-init: stack:off, heap alloc:off, heap free:off Jan 30 13:46:12.875467 kernel: Memory: 2395612K/2567000K available (12288K kernel code, 2301K rwdata, 22728K rodata, 42844K init, 2348K bss, 171128K reserved, 0K cma-reserved) Jan 30 13:46:12.875476 kernel: SLUB: HWalign=64, Order=0-3, MinObjects=0, CPUs=4, Nodes=1 Jan 30 13:46:12.875483 kernel: ftrace: allocating 37921 entries in 149 pages Jan 30 13:46:12.875490 kernel: ftrace: allocated 149 pages with 4 groups Jan 30 13:46:12.875498 kernel: Dynamic Preempt: voluntary Jan 30 13:46:12.875512 kernel: rcu: Preemptible hierarchical RCU implementation. Jan 30 13:46:12.875522 kernel: rcu: RCU event tracing is enabled. Jan 30 13:46:12.875530 kernel: rcu: RCU restricting CPUs from NR_CPUS=512 to nr_cpu_ids=4. Jan 30 13:46:12.875537 kernel: Trampoline variant of Tasks RCU enabled. Jan 30 13:46:12.875545 kernel: Rude variant of Tasks RCU enabled. Jan 30 13:46:12.875552 kernel: Tracing variant of Tasks RCU enabled. Jan 30 13:46:12.875560 kernel: rcu: RCU calculated value of scheduler-enlistment delay is 100 jiffies. Jan 30 13:46:12.875567 kernel: rcu: Adjusting geometry for rcu_fanout_leaf=16, nr_cpu_ids=4 Jan 30 13:46:12.875577 kernel: NR_IRQS: 33024, nr_irqs: 456, preallocated irqs: 16 Jan 30 13:46:12.875584 kernel: rcu: srcu_init: Setting srcu_struct sizes based on contention. Jan 30 13:46:12.875592 kernel: Console: colour dummy device 80x25 Jan 30 13:46:12.875599 kernel: printk: console [ttyS0] enabled Jan 30 13:46:12.875606 kernel: ACPI: Core revision 20230628 Jan 30 13:46:12.875616 kernel: clocksource: hpet: mask: 0xffffffff max_cycles: 0xffffffff, max_idle_ns: 19112604467 ns Jan 30 13:46:12.875624 kernel: APIC: Switch to symmetric I/O mode setup Jan 30 13:46:12.875631 kernel: x2apic enabled Jan 30 13:46:12.875639 kernel: APIC: Switched APIC routing to: physical x2apic Jan 30 13:46:12.875646 kernel: kvm-guest: APIC: send_IPI_mask() replaced with kvm_send_ipi_mask() Jan 30 13:46:12.875654 kernel: kvm-guest: APIC: send_IPI_mask_allbutself() replaced with kvm_send_ipi_mask_allbutself() Jan 30 13:46:12.875661 kernel: kvm-guest: setup PV IPIs Jan 30 13:46:12.875669 kernel: ..TIMER: vector=0x30 apic1=0 pin1=2 apic2=-1 pin2=-1 Jan 30 13:46:12.875677 kernel: tsc: Marking TSC unstable due to TSCs unsynchronized Jan 30 13:46:12.875686 kernel: Calibrating delay loop (skipped) preset value.. 
5589.50 BogoMIPS (lpj=2794750) Jan 30 13:46:12.875694 kernel: x86/cpu: User Mode Instruction Prevention (UMIP) activated Jan 30 13:46:12.875701 kernel: Last level iTLB entries: 4KB 512, 2MB 255, 4MB 127 Jan 30 13:46:12.875709 kernel: Last level dTLB entries: 4KB 512, 2MB 255, 4MB 127, 1GB 0 Jan 30 13:46:12.875716 kernel: Spectre V1 : Mitigation: usercopy/swapgs barriers and __user pointer sanitization Jan 30 13:46:12.875723 kernel: Spectre V2 : Mitigation: Retpolines Jan 30 13:46:12.875731 kernel: Spectre V2 : Spectre v2 / SpectreRSB mitigation: Filling RSB on context switch Jan 30 13:46:12.875739 kernel: Spectre V2 : Spectre v2 / SpectreRSB : Filling RSB on VMEXIT Jan 30 13:46:12.875746 kernel: Spectre V2 : Enabling Speculation Barrier for firmware calls Jan 30 13:46:12.875755 kernel: RETBleed: Mitigation: untrained return thunk Jan 30 13:46:12.875763 kernel: Spectre V2 : mitigation: Enabling conditional Indirect Branch Prediction Barrier Jan 30 13:46:12.875771 kernel: Speculative Store Bypass: Mitigation: Speculative Store Bypass disabled via prctl Jan 30 13:46:12.875778 kernel: Speculative Return Stack Overflow: IBPB-extending microcode not applied! Jan 30 13:46:12.875786 kernel: Speculative Return Stack Overflow: WARNING: See https://kernel.org/doc/html/latest/admin-guide/hw-vuln/srso.html for mitigation options. Jan 30 13:46:12.875794 kernel: Speculative Return Stack Overflow: Vulnerable: Safe RET, no microcode Jan 30 13:46:12.875801 kernel: x86/fpu: Supporting XSAVE feature 0x001: 'x87 floating point registers' Jan 30 13:46:12.875830 kernel: x86/fpu: Supporting XSAVE feature 0x002: 'SSE registers' Jan 30 13:46:12.875840 kernel: x86/fpu: Supporting XSAVE feature 0x004: 'AVX registers' Jan 30 13:46:12.875848 kernel: x86/fpu: xstate_offset[2]: 576, xstate_sizes[2]: 256 Jan 30 13:46:12.875855 kernel: x86/fpu: Enabled xstate features 0x7, context size is 832 bytes, using 'compacted' format. Jan 30 13:46:12.875863 kernel: Freeing SMP alternatives memory: 32K Jan 30 13:46:12.875870 kernel: pid_max: default: 32768 minimum: 301 Jan 30 13:46:12.875878 kernel: LSM: initializing lsm=lockdown,capability,landlock,selinux,integrity Jan 30 13:46:12.875885 kernel: landlock: Up and running. Jan 30 13:46:12.875892 kernel: SELinux: Initializing. Jan 30 13:46:12.875900 kernel: Mount-cache hash table entries: 8192 (order: 4, 65536 bytes, linear) Jan 30 13:46:12.875909 kernel: Mountpoint-cache hash table entries: 8192 (order: 4, 65536 bytes, linear) Jan 30 13:46:12.875917 kernel: smpboot: CPU0: AMD EPYC 7402P 24-Core Processor (family: 0x17, model: 0x31, stepping: 0x0) Jan 30 13:46:12.875925 kernel: RCU Tasks: Setting shift to 2 and lim to 1 rcu_task_cb_adjust=1 rcu_task_cpu_ids=4. Jan 30 13:46:12.875932 kernel: RCU Tasks Rude: Setting shift to 2 and lim to 1 rcu_task_cb_adjust=1 rcu_task_cpu_ids=4. Jan 30 13:46:12.875940 kernel: RCU Tasks Trace: Setting shift to 2 and lim to 1 rcu_task_cb_adjust=1 rcu_task_cpu_ids=4. Jan 30 13:46:12.875948 kernel: Performance Events: Fam17h+ core perfctr, AMD PMU driver. Jan 30 13:46:12.875955 kernel: ... version: 0 Jan 30 13:46:12.875963 kernel: ... bit width: 48 Jan 30 13:46:12.875976 kernel: ... generic registers: 6 Jan 30 13:46:12.875986 kernel: ... value mask: 0000ffffffffffff Jan 30 13:46:12.875993 kernel: ... max period: 00007fffffffffff Jan 30 13:46:12.876000 kernel: ... fixed-purpose events: 0 Jan 30 13:46:12.876008 kernel: ... 
event mask: 000000000000003f Jan 30 13:46:12.876015 kernel: signal: max sigframe size: 1776 Jan 30 13:46:12.876023 kernel: rcu: Hierarchical SRCU implementation. Jan 30 13:46:12.876030 kernel: rcu: Max phase no-delay instances is 400. Jan 30 13:46:12.876038 kernel: smp: Bringing up secondary CPUs ... Jan 30 13:46:12.876045 kernel: smpboot: x86: Booting SMP configuration: Jan 30 13:46:12.876055 kernel: .... node #0, CPUs: #1 #2 #3 Jan 30 13:46:12.876062 kernel: smp: Brought up 1 node, 4 CPUs Jan 30 13:46:12.876069 kernel: smpboot: Max logical packages: 1 Jan 30 13:46:12.876077 kernel: smpboot: Total of 4 processors activated (22358.00 BogoMIPS) Jan 30 13:46:12.876084 kernel: devtmpfs: initialized Jan 30 13:46:12.876092 kernel: x86/mm: Memory block size: 128MB Jan 30 13:46:12.876099 kernel: ACPI: PM: Registering ACPI NVS region [mem 0x00800000-0x00807fff] (32768 bytes) Jan 30 13:46:12.876107 kernel: ACPI: PM: Registering ACPI NVS region [mem 0x0080b000-0x0080bfff] (4096 bytes) Jan 30 13:46:12.876114 kernel: ACPI: PM: Registering ACPI NVS region [mem 0x00810000-0x008fffff] (983040 bytes) Jan 30 13:46:12.876124 kernel: ACPI: PM: Registering ACPI NVS region [mem 0x9cb7f000-0x9cbfefff] (524288 bytes) Jan 30 13:46:12.876131 kernel: ACPI: PM: Registering ACPI NVS region [mem 0x9cf60000-0x9cffffff] (655360 bytes) Jan 30 13:46:12.876139 kernel: clocksource: jiffies: mask: 0xffffffff max_cycles: 0xffffffff, max_idle_ns: 1911260446275000 ns Jan 30 13:46:12.876147 kernel: futex hash table entries: 1024 (order: 4, 65536 bytes, linear) Jan 30 13:46:12.876154 kernel: pinctrl core: initialized pinctrl subsystem Jan 30 13:46:12.876161 kernel: NET: Registered PF_NETLINK/PF_ROUTE protocol family Jan 30 13:46:12.876169 kernel: audit: initializing netlink subsys (disabled) Jan 30 13:46:12.876176 kernel: audit: type=2000 audit(1738244772.307:1): state=initialized audit_enabled=0 res=1 Jan 30 13:46:12.876184 kernel: thermal_sys: Registered thermal governor 'step_wise' Jan 30 13:46:12.876193 kernel: thermal_sys: Registered thermal governor 'user_space' Jan 30 13:46:12.876201 kernel: cpuidle: using governor menu Jan 30 13:46:12.876208 kernel: acpiphp: ACPI Hot Plug PCI Controller Driver version: 0.5 Jan 30 13:46:12.876215 kernel: dca service started, version 1.12.1 Jan 30 13:46:12.876223 kernel: PCI: MMCONFIG for domain 0000 [bus 00-ff] at [mem 0xb0000000-0xbfffffff] (base 0xb0000000) Jan 30 13:46:12.876231 kernel: PCI: MMCONFIG at [mem 0xb0000000-0xbfffffff] reserved as E820 entry Jan 30 13:46:12.876238 kernel: PCI: Using configuration type 1 for base access Jan 30 13:46:12.876245 kernel: kprobes: kprobe jump-optimization is enabled. All kprobes are optimized if possible. 
Jan 30 13:46:12.876253 kernel: HugeTLB: registered 1.00 GiB page size, pre-allocated 0 pages Jan 30 13:46:12.876262 kernel: HugeTLB: 16380 KiB vmemmap can be freed for a 1.00 GiB page Jan 30 13:46:12.876270 kernel: HugeTLB: registered 2.00 MiB page size, pre-allocated 0 pages Jan 30 13:46:12.876277 kernel: HugeTLB: 28 KiB vmemmap can be freed for a 2.00 MiB page Jan 30 13:46:12.876285 kernel: ACPI: Added _OSI(Module Device) Jan 30 13:46:12.876292 kernel: ACPI: Added _OSI(Processor Device) Jan 30 13:46:12.876299 kernel: ACPI: Added _OSI(3.0 _SCP Extensions) Jan 30 13:46:12.876307 kernel: ACPI: Added _OSI(Processor Aggregator Device) Jan 30 13:46:12.876314 kernel: ACPI: 1 ACPI AML tables successfully acquired and loaded Jan 30 13:46:12.876322 kernel: ACPI: _OSC evaluation for CPUs failed, trying _PDC Jan 30 13:46:12.876331 kernel: ACPI: Interpreter enabled Jan 30 13:46:12.876338 kernel: ACPI: PM: (supports S0 S3 S5) Jan 30 13:46:12.876346 kernel: ACPI: Using IOAPIC for interrupt routing Jan 30 13:46:12.876353 kernel: PCI: Using host bridge windows from ACPI; if necessary, use "pci=nocrs" and report a bug Jan 30 13:46:12.876361 kernel: PCI: Using E820 reservations for host bridge windows Jan 30 13:46:12.876368 kernel: ACPI: Enabled 2 GPEs in block 00 to 3F Jan 30 13:46:12.876385 kernel: ACPI: PCI Root Bridge [PCI0] (domain 0000 [bus 00-ff]) Jan 30 13:46:12.876647 kernel: acpi PNP0A08:00: _OSC: OS supports [ExtendedConfig ASPM ClockPM Segments MSI HPX-Type3] Jan 30 13:46:12.876783 kernel: acpi PNP0A08:00: _OSC: platform does not support [PCIeHotplug LTR] Jan 30 13:46:12.876926 kernel: acpi PNP0A08:00: _OSC: OS now controls [PME AER PCIeCapability] Jan 30 13:46:12.876936 kernel: PCI host bridge to bus 0000:00 Jan 30 13:46:12.877070 kernel: pci_bus 0000:00: root bus resource [io 0x0000-0x0cf7 window] Jan 30 13:46:12.877182 kernel: pci_bus 0000:00: root bus resource [io 0x0d00-0xffff window] Jan 30 13:46:12.877294 kernel: pci_bus 0000:00: root bus resource [mem 0x000a0000-0x000bffff window] Jan 30 13:46:12.877404 kernel: pci_bus 0000:00: root bus resource [mem 0x9d000000-0xafffffff window] Jan 30 13:46:12.877520 kernel: pci_bus 0000:00: root bus resource [mem 0xc0000000-0xfebfffff window] Jan 30 13:46:12.877654 kernel: pci_bus 0000:00: root bus resource [mem 0x800000000-0xfffffffff window] Jan 30 13:46:12.877772 kernel: pci_bus 0000:00: root bus resource [bus 00-ff] Jan 30 13:46:12.877927 kernel: pci 0000:00:00.0: [8086:29c0] type 00 class 0x060000 Jan 30 13:46:12.878066 kernel: pci 0000:00:01.0: [1234:1111] type 00 class 0x030000 Jan 30 13:46:12.878187 kernel: pci 0000:00:01.0: reg 0x10: [mem 0xc0000000-0xc0ffffff pref] Jan 30 13:46:12.878310 kernel: pci 0000:00:01.0: reg 0x18: [mem 0xc1044000-0xc1044fff] Jan 30 13:46:12.878429 kernel: pci 0000:00:01.0: reg 0x30: [mem 0xffff0000-0xffffffff pref] Jan 30 13:46:12.878586 kernel: pci 0000:00:01.0: BAR 0: assigned to efifb Jan 30 13:46:12.878713 kernel: pci 0000:00:01.0: Video device with shadowed ROM at [mem 0x000c0000-0x000dffff] Jan 30 13:46:12.878882 kernel: pci 0000:00:02.0: [1af4:1005] type 00 class 0x00ff00 Jan 30 13:46:12.879016 kernel: pci 0000:00:02.0: reg 0x10: [io 0x6100-0x611f] Jan 30 13:46:12.879137 kernel: pci 0000:00:02.0: reg 0x14: [mem 0xc1043000-0xc1043fff] Jan 30 13:46:12.879262 kernel: pci 0000:00:02.0: reg 0x20: [mem 0x800000000-0x800003fff 64bit pref] Jan 30 13:46:12.879393 kernel: pci 0000:00:03.0: [1af4:1001] type 00 class 0x010000 Jan 30 13:46:12.879514 kernel: pci 0000:00:03.0: reg 0x10: [io 0x6000-0x607f] Jan 30 
13:46:12.879634 kernel: pci 0000:00:03.0: reg 0x14: [mem 0xc1042000-0xc1042fff] Jan 30 13:46:12.879792 kernel: pci 0000:00:03.0: reg 0x20: [mem 0x800004000-0x800007fff 64bit pref] Jan 30 13:46:12.879964 kernel: pci 0000:00:04.0: [1af4:1000] type 00 class 0x020000 Jan 30 13:46:12.880117 kernel: pci 0000:00:04.0: reg 0x10: [io 0x60e0-0x60ff] Jan 30 13:46:12.880248 kernel: pci 0000:00:04.0: reg 0x14: [mem 0xc1041000-0xc1041fff] Jan 30 13:46:12.880367 kernel: pci 0000:00:04.0: reg 0x20: [mem 0x800008000-0x80000bfff 64bit pref] Jan 30 13:46:12.880488 kernel: pci 0000:00:04.0: reg 0x30: [mem 0xfffc0000-0xffffffff pref] Jan 30 13:46:12.880617 kernel: pci 0000:00:1f.0: [8086:2918] type 00 class 0x060100 Jan 30 13:46:12.880736 kernel: pci 0000:00:1f.0: quirk: [io 0x0600-0x067f] claimed by ICH6 ACPI/GPIO/TCO Jan 30 13:46:12.880893 kernel: pci 0000:00:1f.2: [8086:2922] type 00 class 0x010601 Jan 30 13:46:12.881023 kernel: pci 0000:00:1f.2: reg 0x20: [io 0x60c0-0x60df] Jan 30 13:46:12.881147 kernel: pci 0000:00:1f.2: reg 0x24: [mem 0xc1040000-0xc1040fff] Jan 30 13:46:12.881330 kernel: pci 0000:00:1f.3: [8086:2930] type 00 class 0x0c0500 Jan 30 13:46:12.881452 kernel: pci 0000:00:1f.3: reg 0x20: [io 0x6080-0x60bf] Jan 30 13:46:12.881463 kernel: ACPI: PCI: Interrupt link LNKA configured for IRQ 10 Jan 30 13:46:12.881471 kernel: ACPI: PCI: Interrupt link LNKB configured for IRQ 10 Jan 30 13:46:12.881478 kernel: ACPI: PCI: Interrupt link LNKC configured for IRQ 11 Jan 30 13:46:12.881486 kernel: ACPI: PCI: Interrupt link LNKD configured for IRQ 11 Jan 30 13:46:12.881497 kernel: ACPI: PCI: Interrupt link LNKE configured for IRQ 10 Jan 30 13:46:12.881505 kernel: ACPI: PCI: Interrupt link LNKF configured for IRQ 10 Jan 30 13:46:12.881512 kernel: ACPI: PCI: Interrupt link LNKG configured for IRQ 11 Jan 30 13:46:12.881520 kernel: ACPI: PCI: Interrupt link LNKH configured for IRQ 11 Jan 30 13:46:12.881527 kernel: ACPI: PCI: Interrupt link GSIA configured for IRQ 16 Jan 30 13:46:12.881535 kernel: ACPI: PCI: Interrupt link GSIB configured for IRQ 17 Jan 30 13:46:12.881542 kernel: ACPI: PCI: Interrupt link GSIC configured for IRQ 18 Jan 30 13:46:12.881550 kernel: ACPI: PCI: Interrupt link GSID configured for IRQ 19 Jan 30 13:46:12.881557 kernel: ACPI: PCI: Interrupt link GSIE configured for IRQ 20 Jan 30 13:46:12.881567 kernel: ACPI: PCI: Interrupt link GSIF configured for IRQ 21 Jan 30 13:46:12.881575 kernel: ACPI: PCI: Interrupt link GSIG configured for IRQ 22 Jan 30 13:46:12.881582 kernel: ACPI: PCI: Interrupt link GSIH configured for IRQ 23 Jan 30 13:46:12.881590 kernel: iommu: Default domain type: Translated Jan 30 13:46:12.881597 kernel: iommu: DMA domain TLB invalidation policy: lazy mode Jan 30 13:46:12.881604 kernel: efivars: Registered efivars operations Jan 30 13:46:12.881612 kernel: PCI: Using ACPI for IRQ routing Jan 30 13:46:12.881620 kernel: PCI: pci_cache_line_size set to 64 bytes Jan 30 13:46:12.881627 kernel: e820: reserve RAM buffer [mem 0x0080b000-0x008fffff] Jan 30 13:46:12.881637 kernel: e820: reserve RAM buffer [mem 0x00810000-0x008fffff] Jan 30 13:46:12.881644 kernel: e820: reserve RAM buffer [mem 0x9c8ef000-0x9fffffff] Jan 30 13:46:12.881651 kernel: e820: reserve RAM buffer [mem 0x9cf40000-0x9fffffff] Jan 30 13:46:12.881772 kernel: pci 0000:00:01.0: vgaarb: setting as boot VGA device Jan 30 13:46:12.881909 kernel: pci 0000:00:01.0: vgaarb: bridge control possible Jan 30 13:46:12.882039 kernel: pci 0000:00:01.0: vgaarb: VGA device added: decodes=io+mem,owns=io+mem,locks=none Jan 30 
13:46:12.882050 kernel: vgaarb: loaded Jan 30 13:46:12.882057 kernel: hpet0: at MMIO 0xfed00000, IRQs 2, 8, 0 Jan 30 13:46:12.882065 kernel: hpet0: 3 comparators, 64-bit 100.000000 MHz counter Jan 30 13:46:12.882085 kernel: clocksource: Switched to clocksource kvm-clock Jan 30 13:46:12.882094 kernel: VFS: Disk quotas dquot_6.6.0 Jan 30 13:46:12.882102 kernel: VFS: Dquot-cache hash table entries: 512 (order 0, 4096 bytes) Jan 30 13:46:12.882109 kernel: pnp: PnP ACPI init Jan 30 13:46:12.882241 kernel: system 00:05: [mem 0xb0000000-0xbfffffff window] has been reserved Jan 30 13:46:12.882252 kernel: pnp: PnP ACPI: found 6 devices Jan 30 13:46:12.882260 kernel: clocksource: acpi_pm: mask: 0xffffff max_cycles: 0xffffff, max_idle_ns: 2085701024 ns Jan 30 13:46:12.882268 kernel: NET: Registered PF_INET protocol family Jan 30 13:46:12.882279 kernel: IP idents hash table entries: 65536 (order: 7, 524288 bytes, linear) Jan 30 13:46:12.882287 kernel: tcp_listen_portaddr_hash hash table entries: 2048 (order: 3, 32768 bytes, linear) Jan 30 13:46:12.882294 kernel: Table-perturb hash table entries: 65536 (order: 6, 262144 bytes, linear) Jan 30 13:46:12.882302 kernel: TCP established hash table entries: 32768 (order: 6, 262144 bytes, linear) Jan 30 13:46:12.882310 kernel: TCP bind hash table entries: 32768 (order: 8, 1048576 bytes, linear) Jan 30 13:46:12.882317 kernel: TCP: Hash tables configured (established 32768 bind 32768) Jan 30 13:46:12.882325 kernel: UDP hash table entries: 2048 (order: 4, 65536 bytes, linear) Jan 30 13:46:12.882332 kernel: UDP-Lite hash table entries: 2048 (order: 4, 65536 bytes, linear) Jan 30 13:46:12.882340 kernel: NET: Registered PF_UNIX/PF_LOCAL protocol family Jan 30 13:46:12.882350 kernel: NET: Registered PF_XDP protocol family Jan 30 13:46:12.882474 kernel: pci 0000:00:04.0: can't claim BAR 6 [mem 0xfffc0000-0xffffffff pref]: no compatible bridge window Jan 30 13:46:12.882595 kernel: pci 0000:00:04.0: BAR 6: assigned [mem 0x9d000000-0x9d03ffff pref] Jan 30 13:46:12.882709 kernel: pci_bus 0000:00: resource 4 [io 0x0000-0x0cf7 window] Jan 30 13:46:12.882833 kernel: pci_bus 0000:00: resource 5 [io 0x0d00-0xffff window] Jan 30 13:46:12.882944 kernel: pci_bus 0000:00: resource 6 [mem 0x000a0000-0x000bffff window] Jan 30 13:46:12.883063 kernel: pci_bus 0000:00: resource 7 [mem 0x9d000000-0xafffffff window] Jan 30 13:46:12.883231 kernel: pci_bus 0000:00: resource 8 [mem 0xc0000000-0xfebfffff window] Jan 30 13:46:12.883348 kernel: pci_bus 0000:00: resource 9 [mem 0x800000000-0xfffffffff window] Jan 30 13:46:12.883358 kernel: PCI: CLS 0 bytes, default 64 Jan 30 13:46:12.883366 kernel: Initialise system trusted keyrings Jan 30 13:46:12.883374 kernel: workingset: timestamp_bits=39 max_order=20 bucket_order=0 Jan 30 13:46:12.883382 kernel: Key type asymmetric registered Jan 30 13:46:12.883389 kernel: Asymmetric key parser 'x509' registered Jan 30 13:46:12.883397 kernel: Block layer SCSI generic (bsg) driver version 0.4 loaded (major 251) Jan 30 13:46:12.883404 kernel: io scheduler mq-deadline registered Jan 30 13:46:12.883411 kernel: io scheduler kyber registered Jan 30 13:46:12.883422 kernel: io scheduler bfq registered Jan 30 13:46:12.883429 kernel: ioatdma: Intel(R) QuickData Technology Driver 5.00 Jan 30 13:46:12.883437 kernel: ACPI: \_SB_.GSIG: Enabled at IRQ 22 Jan 30 13:46:12.883445 kernel: ACPI: \_SB_.GSIH: Enabled at IRQ 23 Jan 30 13:46:12.883453 kernel: ACPI: \_SB_.GSIE: Enabled at IRQ 20 Jan 30 13:46:12.883460 kernel: Serial: 8250/16550 driver, 4 ports, IRQ sharing enabled 
Jan 30 13:46:12.883468 kernel: 00:03: ttyS0 at I/O 0x3f8 (irq = 4, base_baud = 115200) is a 16550A Jan 30 13:46:12.883476 kernel: i8042: PNP: PS/2 Controller [PNP0303:KBD,PNP0f13:MOU] at 0x60,0x64 irq 1,12 Jan 30 13:46:12.883483 kernel: serio: i8042 KBD port at 0x60,0x64 irq 1 Jan 30 13:46:12.883493 kernel: serio: i8042 AUX port at 0x60,0x64 irq 12 Jan 30 13:46:12.883617 kernel: rtc_cmos 00:04: RTC can wake from S4 Jan 30 13:46:12.883733 kernel: rtc_cmos 00:04: registered as rtc0 Jan 30 13:46:12.883743 kernel: input: AT Translated Set 2 keyboard as /devices/platform/i8042/serio0/input/input0 Jan 30 13:46:12.883886 kernel: rtc_cmos 00:04: setting system clock to 2025-01-30T13:46:12 UTC (1738244772) Jan 30 13:46:12.884014 kernel: rtc_cmos 00:04: alarms up to one day, y3k, 242 bytes nvram, hpet irqs Jan 30 13:46:12.884025 kernel: amd_pstate: the _CPC object is not present in SBIOS or ACPI disabled Jan 30 13:46:12.884032 kernel: efifb: probing for efifb Jan 30 13:46:12.884044 kernel: efifb: framebuffer at 0xc0000000, using 1408k, total 1408k Jan 30 13:46:12.884051 kernel: efifb: mode is 800x600x24, linelength=2400, pages=1 Jan 30 13:46:12.884059 kernel: efifb: scrolling: redraw Jan 30 13:46:12.884066 kernel: efifb: Truecolor: size=0:8:8:8, shift=0:16:8:0 Jan 30 13:46:12.884074 kernel: Console: switching to colour frame buffer device 100x37 Jan 30 13:46:12.884098 kernel: fb0: EFI VGA frame buffer device Jan 30 13:46:12.884107 kernel: pstore: Using crash dump compression: deflate Jan 30 13:46:12.884115 kernel: pstore: Registered efi_pstore as persistent store backend Jan 30 13:46:12.884123 kernel: NET: Registered PF_INET6 protocol family Jan 30 13:46:12.884133 kernel: Segment Routing with IPv6 Jan 30 13:46:12.884141 kernel: In-situ OAM (IOAM) with IPv6 Jan 30 13:46:12.884149 kernel: NET: Registered PF_PACKET protocol family Jan 30 13:46:12.884156 kernel: Key type dns_resolver registered Jan 30 13:46:12.884164 kernel: IPI shorthand broadcast: enabled Jan 30 13:46:12.884172 kernel: sched_clock: Marking stable (562002995, 112332493)->(716646712, -42311224) Jan 30 13:46:12.884180 kernel: registered taskstats version 1 Jan 30 13:46:12.884187 kernel: Loading compiled-in X.509 certificates Jan 30 13:46:12.884195 kernel: Loaded X.509 cert 'Kinvolk GmbH: Module signing key for 6.6.74-flatcar: 1efdcbe72fc44d29e4e6411cf9a3e64046be4375' Jan 30 13:46:12.884205 kernel: Key type .fscrypt registered Jan 30 13:46:12.884213 kernel: Key type fscrypt-provisioning registered Jan 30 13:46:12.884221 kernel: ima: No TPM chip found, activating TPM-bypass! 
Jan 30 13:46:12.884229 kernel: ima: Allocated hash algorithm: sha1 Jan 30 13:46:12.884241 kernel: ima: No architecture policies found Jan 30 13:46:12.884249 kernel: clk: Disabling unused clocks Jan 30 13:46:12.884257 kernel: Freeing unused kernel image (initmem) memory: 42844K Jan 30 13:46:12.884265 kernel: Write protecting the kernel read-only data: 36864k Jan 30 13:46:12.884275 kernel: Freeing unused kernel image (rodata/data gap) memory: 1848K Jan 30 13:46:12.884282 kernel: Run /init as init process Jan 30 13:46:12.884290 kernel: with arguments: Jan 30 13:46:12.884298 kernel: /init Jan 30 13:46:12.884305 kernel: with environment: Jan 30 13:46:12.884313 kernel: HOME=/ Jan 30 13:46:12.884321 kernel: TERM=linux Jan 30 13:46:12.884328 kernel: BOOT_IMAGE=/flatcar/vmlinuz-a Jan 30 13:46:12.884338 systemd[1]: systemd 255 running in system mode (+PAM +AUDIT +SELINUX -APPARMOR +IMA +SMACK +SECCOMP +GCRYPT -GNUTLS +OPENSSL -ACL +BLKID +CURL +ELFUTILS -FIDO2 +IDN2 -IDN +IPTC +KMOD +LIBCRYPTSETUP +LIBFDISK +PCRE2 -PWQUALITY -P11KIT -QRENCODE +TPM2 +BZIP2 +LZ4 +XZ +ZLIB +ZSTD -BPF_FRAMEWORK -XKBCOMMON +UTMP -SYSVINIT default-hierarchy=unified) Jan 30 13:46:12.884350 systemd[1]: Detected virtualization kvm. Jan 30 13:46:12.884359 systemd[1]: Detected architecture x86-64. Jan 30 13:46:12.884367 systemd[1]: Running in initrd. Jan 30 13:46:12.884377 systemd[1]: No hostname configured, using default hostname. Jan 30 13:46:12.884387 systemd[1]: Hostname set to . Jan 30 13:46:12.884396 systemd[1]: Initializing machine ID from VM UUID. Jan 30 13:46:12.884404 systemd[1]: Queued start job for default target initrd.target. Jan 30 13:46:12.884412 systemd[1]: Started clevis-luks-askpass.path - Forward Password Requests to Clevis Directory Watch. Jan 30 13:46:12.884421 systemd[1]: Started systemd-ask-password-console.path - Dispatch Password Requests to Console Directory Watch. Jan 30 13:46:12.884429 systemd[1]: Expecting device dev-disk-by\x2dlabel-EFI\x2dSYSTEM.device - /dev/disk/by-label/EFI-SYSTEM... Jan 30 13:46:12.884438 systemd[1]: Expecting device dev-disk-by\x2dlabel-OEM.device - /dev/disk/by-label/OEM... Jan 30 13:46:12.884446 systemd[1]: Expecting device dev-disk-by\x2dlabel-ROOT.device - /dev/disk/by-label/ROOT... Jan 30 13:46:12.884459 systemd[1]: Expecting device dev-disk-by\x2dpartlabel-USR\x2dA.device - /dev/disk/by-partlabel/USR-A... Jan 30 13:46:12.884469 systemd[1]: Expecting device dev-disk-by\x2dpartuuid-7130c94a\x2d213a\x2d4e5a\x2d8e26\x2d6cce9662f132.device - /dev/disk/by-partuuid/7130c94a-213a-4e5a-8e26-6cce9662f132... Jan 30 13:46:12.884477 systemd[1]: Expecting device dev-mapper-usr.device - /dev/mapper/usr... Jan 30 13:46:12.884486 systemd[1]: Reached target cryptsetup-pre.target - Local Encrypted Volumes (Pre). Jan 30 13:46:12.884494 systemd[1]: Reached target cryptsetup.target - Local Encrypted Volumes. Jan 30 13:46:12.884502 systemd[1]: Reached target paths.target - Path Units. Jan 30 13:46:12.884510 systemd[1]: Reached target slices.target - Slice Units. Jan 30 13:46:12.884521 systemd[1]: Reached target swap.target - Swaps. Jan 30 13:46:12.884529 systemd[1]: Reached target timers.target - Timer Units. Jan 30 13:46:12.884537 systemd[1]: Listening on iscsid.socket - Open-iSCSI iscsid Socket. Jan 30 13:46:12.884546 systemd[1]: Listening on iscsiuio.socket - Open-iSCSI iscsiuio Socket. Jan 30 13:46:12.884554 systemd[1]: Listening on systemd-journald-dev-log.socket - Journal Socket (/dev/log). Jan 30 13:46:12.884562 systemd[1]: Listening on systemd-journald.socket - Journal Socket. 
Jan 30 13:46:12.884571 systemd[1]: Listening on systemd-networkd.socket - Network Service Netlink Socket. Jan 30 13:46:12.884579 systemd[1]: Listening on systemd-udevd-control.socket - udev Control Socket. Jan 30 13:46:12.884587 systemd[1]: Listening on systemd-udevd-kernel.socket - udev Kernel Socket. Jan 30 13:46:12.884598 systemd[1]: Reached target sockets.target - Socket Units. Jan 30 13:46:12.884606 systemd[1]: Starting ignition-setup-pre.service - Ignition env setup... Jan 30 13:46:12.884615 systemd[1]: Starting kmod-static-nodes.service - Create List of Static Device Nodes... Jan 30 13:46:12.884623 systemd[1]: Finished network-cleanup.service - Network Cleanup. Jan 30 13:46:12.884631 systemd[1]: Starting systemd-fsck-usr.service... Jan 30 13:46:12.884639 systemd[1]: Starting systemd-journald.service - Journal Service... Jan 30 13:46:12.884648 systemd[1]: Starting systemd-modules-load.service - Load Kernel Modules... Jan 30 13:46:12.884656 systemd[1]: Starting systemd-vconsole-setup.service - Virtual Console Setup... Jan 30 13:46:12.884667 systemd[1]: Finished ignition-setup-pre.service - Ignition env setup. Jan 30 13:46:12.884675 systemd[1]: Finished kmod-static-nodes.service - Create List of Static Device Nodes. Jan 30 13:46:12.884683 systemd[1]: Finished systemd-fsck-usr.service. Jan 30 13:46:12.884709 systemd-journald[193]: Collecting audit messages is disabled. Jan 30 13:46:12.884729 systemd[1]: Starting systemd-tmpfiles-setup-dev-early.service - Create Static Device Nodes in /dev gracefully... Jan 30 13:46:12.884738 systemd[1]: Finished systemd-vconsole-setup.service - Virtual Console Setup. Jan 30 13:46:12.884747 systemd[1]: Starting dracut-cmdline-ask.service - dracut ask for additional cmdline parameters... Jan 30 13:46:12.884755 systemd-journald[193]: Journal started Jan 30 13:46:12.884774 systemd-journald[193]: Runtime Journal (/run/log/journal/b32db746695a4e9fb70b4dd6b4302222) is 6.0M, max 48.3M, 42.2M free. Jan 30 13:46:12.880108 systemd-modules-load[194]: Inserted module 'overlay' Jan 30 13:46:12.887004 systemd[1]: Started systemd-journald.service - Journal Service. Jan 30 13:46:12.889644 systemd[1]: Finished systemd-tmpfiles-setup-dev-early.service - Create Static Device Nodes in /dev gracefully. Jan 30 13:46:12.891892 systemd[1]: Starting systemd-tmpfiles-setup-dev.service - Create Static Device Nodes in /dev... Jan 30 13:46:12.894992 systemd[1]: Starting systemd-tmpfiles-setup.service - Create System Files and Directories... Jan 30 13:46:12.905434 systemd[1]: Finished systemd-tmpfiles-setup-dev.service - Create Static Device Nodes in /dev. Jan 30 13:46:12.907395 systemd[1]: Finished systemd-tmpfiles-setup.service - Create System Files and Directories. Jan 30 13:46:12.912831 kernel: bridge: filtering via arp/ip/ip6tables is no longer available by default. Update your scripts to load br_netfilter if you need this. Jan 30 13:46:12.915260 systemd-modules-load[194]: Inserted module 'br_netfilter' Jan 30 13:46:12.915832 kernel: Bridge firewalling registered Jan 30 13:46:12.916744 systemd[1]: Finished systemd-modules-load.service - Load Kernel Modules. Jan 30 13:46:12.918426 systemd[1]: Starting systemd-sysctl.service - Apply Kernel Variables... Jan 30 13:46:12.922988 systemd[1]: Finished dracut-cmdline-ask.service - dracut ask for additional cmdline parameters. Jan 30 13:46:12.925055 systemd[1]: Starting dracut-cmdline.service - dracut cmdline hook... Jan 30 13:46:12.931158 systemd[1]: Finished systemd-sysctl.service - Apply Kernel Variables. 
Jan 30 13:46:12.933692 systemd[1]: Starting systemd-resolved.service - Network Name Resolution... Jan 30 13:46:12.945047 dracut-cmdline[225]: dracut-dracut-053 Jan 30 13:46:12.948367 dracut-cmdline[225]: Using kernel command line parameters: rd.driver.pre=btrfs rootflags=rw mount.usrflags=ro BOOT_IMAGE=/flatcar/vmlinuz-a mount.usr=/dev/mapper/usr verity.usr=PARTUUID=7130c94a-213a-4e5a-8e26-6cce9662f132 rootflags=rw mount.usrflags=ro consoleblank=0 root=LABEL=ROOT console=ttyS0,115200 flatcar.first_boot=detected verity.usrhash=befc9792b021bef43c896e00e1d5172b6224dbafc9b6c92b267e5e544378e681 Jan 30 13:46:12.966893 systemd-resolved[228]: Positive Trust Anchors: Jan 30 13:46:12.966910 systemd-resolved[228]: . IN DS 20326 8 2 e06d44b80b8f1d39a95c0b0d7c65d08458e880409bbc683457104237c7f8ec8d Jan 30 13:46:12.966941 systemd-resolved[228]: Negative trust anchors: home.arpa 10.in-addr.arpa 16.172.in-addr.arpa 17.172.in-addr.arpa 18.172.in-addr.arpa 19.172.in-addr.arpa 20.172.in-addr.arpa 21.172.in-addr.arpa 22.172.in-addr.arpa 23.172.in-addr.arpa 24.172.in-addr.arpa 25.172.in-addr.arpa 26.172.in-addr.arpa 27.172.in-addr.arpa 28.172.in-addr.arpa 29.172.in-addr.arpa 30.172.in-addr.arpa 31.172.in-addr.arpa 170.0.0.192.in-addr.arpa 171.0.0.192.in-addr.arpa 168.192.in-addr.arpa d.f.ip6.arpa ipv4only.arpa resolver.arpa corp home internal intranet lan local private test Jan 30 13:46:12.969427 systemd-resolved[228]: Defaulting to hostname 'linux'. Jan 30 13:46:12.970415 systemd[1]: Started systemd-resolved.service - Network Name Resolution. Jan 30 13:46:12.975872 systemd[1]: Reached target nss-lookup.target - Host and Network Name Lookups. Jan 30 13:46:13.046844 kernel: SCSI subsystem initialized Jan 30 13:46:13.056835 kernel: Loading iSCSI transport class v2.0-870. Jan 30 13:46:13.066836 kernel: iscsi: registered transport (tcp) Jan 30 13:46:13.087840 kernel: iscsi: registered transport (qla4xxx) Jan 30 13:46:13.087860 kernel: QLogic iSCSI HBA Driver Jan 30 13:46:13.137870 systemd[1]: Finished dracut-cmdline.service - dracut cmdline hook. Jan 30 13:46:13.148939 systemd[1]: Starting dracut-pre-udev.service - dracut pre-udev hook... Jan 30 13:46:13.175267 kernel: device-mapper: core: CONFIG_IMA_DISABLE_HTABLE is disabled. Duplicate IMA measurements will not be recorded in the IMA log. Jan 30 13:46:13.175297 kernel: device-mapper: uevent: version 1.0.3 Jan 30 13:46:13.176318 kernel: device-mapper: ioctl: 4.48.0-ioctl (2023-03-01) initialised: dm-devel@redhat.com Jan 30 13:46:13.216839 kernel: raid6: avx2x4 gen() 30537 MB/s Jan 30 13:46:13.233837 kernel: raid6: avx2x2 gen() 30996 MB/s Jan 30 13:46:13.250916 kernel: raid6: avx2x1 gen() 26104 MB/s Jan 30 13:46:13.250936 kernel: raid6: using algorithm avx2x2 gen() 30996 MB/s Jan 30 13:46:13.268918 kernel: raid6: .... xor() 19991 MB/s, rmw enabled Jan 30 13:46:13.268936 kernel: raid6: using avx2x2 recovery algorithm Jan 30 13:46:13.289835 kernel: xor: automatically using best checksumming function avx Jan 30 13:46:13.441844 kernel: Btrfs loaded, zoned=no, fsverity=no Jan 30 13:46:13.454379 systemd[1]: Finished dracut-pre-udev.service - dracut pre-udev hook. Jan 30 13:46:13.466998 systemd[1]: Starting systemd-udevd.service - Rule-based Manager for Device Events and Files... Jan 30 13:46:13.478244 systemd-udevd[411]: Using default interface naming scheme 'v255'. Jan 30 13:46:13.482428 systemd[1]: Started systemd-udevd.service - Rule-based Manager for Device Events and Files. 
Jan 30 13:46:13.493998 systemd[1]: Starting dracut-pre-trigger.service - dracut pre-trigger hook... Jan 30 13:46:13.507127 dracut-pre-trigger[419]: rd.md=0: removing MD RAID activation Jan 30 13:46:13.538688 systemd[1]: Finished dracut-pre-trigger.service - dracut pre-trigger hook. Jan 30 13:46:13.551951 systemd[1]: Starting systemd-udev-trigger.service - Coldplug All udev Devices... Jan 30 13:46:13.612057 systemd[1]: Finished systemd-udev-trigger.service - Coldplug All udev Devices. Jan 30 13:46:13.623929 systemd[1]: Starting dracut-initqueue.service - dracut initqueue hook... Jan 30 13:46:13.631277 systemd[1]: Finished dracut-initqueue.service - dracut initqueue hook. Jan 30 13:46:13.633386 systemd[1]: Reached target remote-fs-pre.target - Preparation for Remote File Systems. Jan 30 13:46:13.635066 systemd[1]: Reached target remote-cryptsetup.target - Remote Encrypted Volumes. Jan 30 13:46:13.638919 systemd[1]: Reached target remote-fs.target - Remote File Systems. Jan 30 13:46:13.648650 systemd[1]: Starting dracut-pre-mount.service - dracut pre-mount hook... Jan 30 13:46:13.651838 kernel: virtio_blk virtio1: 4/0/0 default/read/poll queues Jan 30 13:46:13.678230 kernel: cryptd: max_cpu_qlen set to 1000 Jan 30 13:46:13.678253 kernel: virtio_blk virtio1: [vda] 19775488 512-byte logical blocks (10.1 GB/9.43 GiB) Jan 30 13:46:13.678440 kernel: AVX2 version of gcm_enc/dec engaged. Jan 30 13:46:13.678456 kernel: AES CTR mode by8 optimization enabled Jan 30 13:46:13.678470 kernel: GPT:Primary header thinks Alt. header is not at the end of the disk. Jan 30 13:46:13.678485 kernel: GPT:9289727 != 19775487 Jan 30 13:46:13.678499 kernel: GPT:Alternate GPT header not at the end of the disk. Jan 30 13:46:13.678514 kernel: GPT:9289727 != 19775487 Jan 30 13:46:13.678527 kernel: GPT: Use GNU Parted to correct GPT errors. Jan 30 13:46:13.678541 kernel: vda: vda1 vda2 vda3 vda4 vda6 vda7 vda9 Jan 30 13:46:13.661559 systemd[1]: Finished dracut-pre-mount.service - dracut pre-mount hook. Jan 30 13:46:13.673429 systemd[1]: dracut-cmdline-ask.service: Deactivated successfully. Jan 30 13:46:13.673572 systemd[1]: Stopped dracut-cmdline-ask.service - dracut ask for additional cmdline parameters. Jan 30 13:46:13.676914 systemd[1]: Stopping dracut-cmdline-ask.service - dracut ask for additional cmdline parameters... Jan 30 13:46:13.678656 systemd[1]: systemd-vconsole-setup.service: Deactivated successfully. Jan 30 13:46:13.678821 systemd[1]: Stopped systemd-vconsole-setup.service - Virtual Console Setup. Jan 30 13:46:13.679392 systemd[1]: Stopping systemd-vconsole-setup.service - Virtual Console Setup... Jan 30 13:46:13.688352 systemd[1]: Starting systemd-vconsole-setup.service - Virtual Console Setup... Jan 30 13:46:13.707816 kernel: BTRFS: device fsid 64bb5b5a-85cc-41cc-a02b-2cfaa3e93b0a devid 1 transid 38 /dev/vda3 scanned by (udev-worker) (464) Jan 30 13:46:13.707859 kernel: libata version 3.00 loaded. Jan 30 13:46:13.709839 kernel: BTRFS: device label OEM devid 1 transid 12 /dev/vda6 scanned by (udev-worker) (467) Jan 30 13:46:13.711675 systemd[1]: Finished systemd-vconsole-setup.service - Virtual Console Setup. 
Jan 30 13:46:13.716832 kernel: ahci 0000:00:1f.2: version 3.0 Jan 30 13:46:13.732267 kernel: ACPI: \_SB_.GSIA: Enabled at IRQ 16 Jan 30 13:46:13.732282 kernel: ahci 0000:00:1f.2: AHCI 0001.0000 32 slots 6 ports 1.5 Gbps 0x3f impl SATA mode Jan 30 13:46:13.732429 kernel: ahci 0000:00:1f.2: flags: 64bit ncq only Jan 30 13:46:13.732572 kernel: scsi host0: ahci Jan 30 13:46:13.732732 kernel: scsi host1: ahci Jan 30 13:46:13.732901 kernel: scsi host2: ahci Jan 30 13:46:13.733070 kernel: scsi host3: ahci Jan 30 13:46:13.733227 kernel: scsi host4: ahci Jan 30 13:46:13.733368 kernel: scsi host5: ahci Jan 30 13:46:13.733506 kernel: ata1: SATA max UDMA/133 abar m4096@0xc1040000 port 0xc1040100 irq 34 Jan 30 13:46:13.733518 kernel: ata2: SATA max UDMA/133 abar m4096@0xc1040000 port 0xc1040180 irq 34 Jan 30 13:46:13.733532 kernel: ata3: SATA max UDMA/133 abar m4096@0xc1040000 port 0xc1040200 irq 34 Jan 30 13:46:13.733543 kernel: ata4: SATA max UDMA/133 abar m4096@0xc1040000 port 0xc1040280 irq 34 Jan 30 13:46:13.733553 kernel: ata5: SATA max UDMA/133 abar m4096@0xc1040000 port 0xc1040300 irq 34 Jan 30 13:46:13.733563 kernel: ata6: SATA max UDMA/133 abar m4096@0xc1040000 port 0xc1040380 irq 34 Jan 30 13:46:13.725935 systemd[1]: Found device dev-disk-by\x2dlabel-ROOT.device - /dev/disk/by-label/ROOT. Jan 30 13:46:13.736287 systemd[1]: Found device dev-disk-by\x2dlabel-EFI\x2dSYSTEM.device - /dev/disk/by-label/EFI-SYSTEM. Jan 30 13:46:13.744032 systemd[1]: Found device dev-disk-by\x2dpartuuid-7130c94a\x2d213a\x2d4e5a\x2d8e26\x2d6cce9662f132.device - /dev/disk/by-partuuid/7130c94a-213a-4e5a-8e26-6cce9662f132. Jan 30 13:46:13.744289 systemd[1]: Found device dev-disk-by\x2dpartlabel-USR\x2dA.device - /dev/disk/by-partlabel/USR-A. Jan 30 13:46:13.752356 systemd[1]: Found device dev-disk-by\x2dlabel-OEM.device - /dev/disk/by-label/OEM. Jan 30 13:46:13.763923 systemd[1]: Starting disk-uuid.service - Generate new UUID for disk GPT if necessary... Jan 30 13:46:13.764185 systemd[1]: systemd-vconsole-setup.service: Deactivated successfully. Jan 30 13:46:13.764238 systemd[1]: Stopped systemd-vconsole-setup.service - Virtual Console Setup. Jan 30 13:46:13.767336 systemd[1]: Stopping systemd-vconsole-setup.service - Virtual Console Setup... Jan 30 13:46:13.773914 kernel: vda: vda1 vda2 vda3 vda4 vda6 vda7 vda9 Jan 30 13:46:13.773938 disk-uuid[552]: Primary Header is updated. Jan 30 13:46:13.773938 disk-uuid[552]: Secondary Entries is updated. Jan 30 13:46:13.773938 disk-uuid[552]: Secondary Header is updated. Jan 30 13:46:13.769143 systemd[1]: Starting systemd-vconsole-setup.service - Virtual Console Setup... Jan 30 13:46:13.778282 kernel: vda: vda1 vda2 vda3 vda4 vda6 vda7 vda9 Jan 30 13:46:13.787184 systemd[1]: Finished systemd-vconsole-setup.service - Virtual Console Setup. Jan 30 13:46:13.797158 systemd[1]: Starting dracut-cmdline-ask.service - dracut ask for additional cmdline parameters... Jan 30 13:46:13.822740 systemd[1]: Finished dracut-cmdline-ask.service - dracut ask for additional cmdline parameters. 
Jan 30 13:46:14.041069 kernel: ata4: SATA link down (SStatus 0 SControl 300) Jan 30 13:46:14.041117 kernel: ata1: SATA link down (SStatus 0 SControl 300) Jan 30 13:46:14.041135 kernel: ata5: SATA link down (SStatus 0 SControl 300) Jan 30 13:46:14.042844 kernel: ata2: SATA link down (SStatus 0 SControl 300) Jan 30 13:46:14.042904 kernel: ata6: SATA link down (SStatus 0 SControl 300) Jan 30 13:46:14.043840 kernel: ata3: SATA link up 1.5 Gbps (SStatus 113 SControl 300) Jan 30 13:46:14.045287 kernel: ata3.00: ATAPI: QEMU DVD-ROM, 2.5+, max UDMA/100 Jan 30 13:46:14.045305 kernel: ata3.00: applying bridge limits Jan 30 13:46:14.045840 kernel: ata3.00: configured for UDMA/100 Jan 30 13:46:14.047824 kernel: scsi 2:0:0:0: CD-ROM QEMU QEMU DVD-ROM 2.5+ PQ: 0 ANSI: 5 Jan 30 13:46:14.093391 kernel: sr 2:0:0:0: [sr0] scsi3-mmc drive: 4x/4x cd/rw xa/form2 tray Jan 30 13:46:14.110445 kernel: cdrom: Uniform CD-ROM driver Revision: 3.20 Jan 30 13:46:14.110460 kernel: sr 2:0:0:0: Attached scsi CD-ROM sr0 Jan 30 13:46:14.779781 disk-uuid[554]: The operation has completed successfully. Jan 30 13:46:14.781004 kernel: vda: vda1 vda2 vda3 vda4 vda6 vda7 vda9 Jan 30 13:46:14.808569 systemd[1]: disk-uuid.service: Deactivated successfully. Jan 30 13:46:14.808704 systemd[1]: Finished disk-uuid.service - Generate new UUID for disk GPT if necessary. Jan 30 13:46:14.828948 systemd[1]: Starting verity-setup.service - Verity Setup for /dev/mapper/usr... Jan 30 13:46:14.832542 sh[595]: Success Jan 30 13:46:14.844838 kernel: device-mapper: verity: sha256 using implementation "sha256-ni" Jan 30 13:46:14.877194 systemd[1]: Found device dev-mapper-usr.device - /dev/mapper/usr. Jan 30 13:46:14.893230 systemd[1]: Mounting sysusr-usr.mount - /sysusr/usr... Jan 30 13:46:14.895777 systemd[1]: Finished verity-setup.service - Verity Setup for /dev/mapper/usr. Jan 30 13:46:14.907698 kernel: BTRFS info (device dm-0): first mount of filesystem 64bb5b5a-85cc-41cc-a02b-2cfaa3e93b0a Jan 30 13:46:14.907724 kernel: BTRFS info (device dm-0): using crc32c (crc32c-intel) checksum algorithm Jan 30 13:46:14.907736 kernel: BTRFS warning (device dm-0): 'nologreplay' is deprecated, use 'rescue=nologreplay' instead Jan 30 13:46:14.907746 kernel: BTRFS info (device dm-0): disabling log replay at mount time Jan 30 13:46:14.908447 kernel: BTRFS info (device dm-0): using free space tree Jan 30 13:46:14.913179 systemd[1]: Mounted sysusr-usr.mount - /sysusr/usr. Jan 30 13:46:14.915525 systemd[1]: afterburn-network-kargs.service - Afterburn Initrd Setup Network Kernel Arguments was skipped because no trigger condition checks were met. Jan 30 13:46:14.931016 systemd[1]: Starting ignition-setup.service - Ignition (setup)... Jan 30 13:46:14.932772 systemd[1]: Starting parse-ip-for-networkd.service - Write systemd-networkd units from cmdline... Jan 30 13:46:14.940847 kernel: BTRFS info (device vda6): first mount of filesystem aa75aabd-8755-4402-b4b6-23093345fe03 Jan 30 13:46:14.940900 kernel: BTRFS info (device vda6): using crc32c (crc32c-intel) checksum algorithm Jan 30 13:46:14.940911 kernel: BTRFS info (device vda6): using free space tree Jan 30 13:46:14.943832 kernel: BTRFS info (device vda6): auto enabling async discard Jan 30 13:46:14.952896 systemd[1]: mnt-oem.mount: Deactivated successfully. Jan 30 13:46:14.954762 kernel: BTRFS info (device vda6): last unmount of filesystem aa75aabd-8755-4402-b4b6-23093345fe03 Jan 30 13:46:14.963537 systemd[1]: Finished ignition-setup.service - Ignition (setup). 
Jan 30 13:46:14.967973 systemd[1]: Starting ignition-fetch-offline.service - Ignition (fetch-offline)... Jan 30 13:46:15.015159 ignition[688]: Ignition 2.19.0 Jan 30 13:46:15.015172 ignition[688]: Stage: fetch-offline Jan 30 13:46:15.015210 ignition[688]: no configs at "/usr/lib/ignition/base.d" Jan 30 13:46:15.015221 ignition[688]: no config dir at "/usr/lib/ignition/base.platform.d/qemu" Jan 30 13:46:15.015311 ignition[688]: parsed url from cmdline: "" Jan 30 13:46:15.015315 ignition[688]: no config URL provided Jan 30 13:46:15.015320 ignition[688]: reading system config file "/usr/lib/ignition/user.ign" Jan 30 13:46:15.015330 ignition[688]: no config at "/usr/lib/ignition/user.ign" Jan 30 13:46:15.015356 ignition[688]: op(1): [started] loading QEMU firmware config module Jan 30 13:46:15.015362 ignition[688]: op(1): executing: "modprobe" "qemu_fw_cfg" Jan 30 13:46:15.022904 ignition[688]: op(1): [finished] loading QEMU firmware config module Jan 30 13:46:15.022935 ignition[688]: QEMU firmware config was not found. Ignoring... Jan 30 13:46:15.025708 ignition[688]: parsing config with SHA512: 50d99027100a0975489b268b11d1eb165ad3628fefa5192419b93f56c1ca87c2d61fb69cbd6e7529f4d1a10929b3c7121c263da2ea2d90cc4350bce47d67e0fd Jan 30 13:46:15.028601 unknown[688]: fetched base config from "system" Jan 30 13:46:15.028614 unknown[688]: fetched user config from "qemu" Jan 30 13:46:15.028983 ignition[688]: fetch-offline: fetch-offline passed Jan 30 13:46:15.031260 systemd[1]: Finished ignition-fetch-offline.service - Ignition (fetch-offline). Jan 30 13:46:15.029054 ignition[688]: Ignition finished successfully Jan 30 13:46:15.061527 systemd[1]: Finished parse-ip-for-networkd.service - Write systemd-networkd units from cmdline. Jan 30 13:46:15.072123 systemd[1]: Starting systemd-networkd.service - Network Configuration... Jan 30 13:46:15.092441 systemd-networkd[786]: lo: Link UP Jan 30 13:46:15.092453 systemd-networkd[786]: lo: Gained carrier Jan 30 13:46:15.093989 systemd-networkd[786]: Enumeration completed Jan 30 13:46:15.094387 systemd-networkd[786]: eth0: found matching network '/usr/lib/systemd/network/zz-default.network', based on potentially unpredictable interface name. Jan 30 13:46:15.094391 systemd-networkd[786]: eth0: Configuring with /usr/lib/systemd/network/zz-default.network. Jan 30 13:46:15.096517 systemd-networkd[786]: eth0: Link UP Jan 30 13:46:15.096521 systemd-networkd[786]: eth0: Gained carrier Jan 30 13:46:15.096532 systemd-networkd[786]: eth0: found matching network '/usr/lib/systemd/network/zz-default.network', based on potentially unpredictable interface name. Jan 30 13:46:15.097300 systemd[1]: Started systemd-networkd.service - Network Configuration. Jan 30 13:46:15.100380 systemd[1]: Reached target network.target - Network. Jan 30 13:46:15.104390 systemd[1]: ignition-fetch.service - Ignition (fetch) was skipped because of an unmet condition check (ConditionPathExists=!/run/ignition.json). Jan 30 13:46:15.109853 systemd-networkd[786]: eth0: DHCPv4 address 10.0.0.114/16, gateway 10.0.0.1 acquired from 10.0.0.1 Jan 30 13:46:15.117963 systemd[1]: Starting ignition-kargs.service - Ignition (kargs)... 
Jan 30 13:46:15.131881 ignition[788]: Ignition 2.19.0 Jan 30 13:46:15.131892 ignition[788]: Stage: kargs Jan 30 13:46:15.132054 ignition[788]: no configs at "/usr/lib/ignition/base.d" Jan 30 13:46:15.132066 ignition[788]: no config dir at "/usr/lib/ignition/base.platform.d/qemu" Jan 30 13:46:15.132704 ignition[788]: kargs: kargs passed Jan 30 13:46:15.132746 ignition[788]: Ignition finished successfully Jan 30 13:46:15.139468 systemd[1]: Finished ignition-kargs.service - Ignition (kargs). Jan 30 13:46:15.156936 systemd[1]: Starting ignition-disks.service - Ignition (disks)... Jan 30 13:46:15.168987 ignition[798]: Ignition 2.19.0 Jan 30 13:46:15.169002 ignition[798]: Stage: disks Jan 30 13:46:15.169226 ignition[798]: no configs at "/usr/lib/ignition/base.d" Jan 30 13:46:15.169242 ignition[798]: no config dir at "/usr/lib/ignition/base.platform.d/qemu" Jan 30 13:46:15.170253 ignition[798]: disks: disks passed Jan 30 13:46:15.170306 ignition[798]: Ignition finished successfully Jan 30 13:46:15.176226 systemd[1]: Finished ignition-disks.service - Ignition (disks). Jan 30 13:46:15.178418 systemd[1]: Reached target initrd-root-device.target - Initrd Root Device. Jan 30 13:46:15.179106 systemd[1]: Reached target local-fs-pre.target - Preparation for Local File Systems. Jan 30 13:46:15.179429 systemd[1]: Reached target local-fs.target - Local File Systems. Jan 30 13:46:15.179763 systemd[1]: Reached target sysinit.target - System Initialization. Jan 30 13:46:15.180270 systemd[1]: Reached target basic.target - Basic System. Jan 30 13:46:15.209066 systemd[1]: Starting systemd-fsck-root.service - File System Check on /dev/disk/by-label/ROOT... Jan 30 13:46:15.221798 systemd-fsck[808]: ROOT: clean, 14/553520 files, 52654/553472 blocks Jan 30 13:46:15.228177 systemd[1]: Finished systemd-fsck-root.service - File System Check on /dev/disk/by-label/ROOT. Jan 30 13:46:15.238935 systemd[1]: Mounting sysroot.mount - /sysroot... Jan 30 13:46:15.331846 kernel: EXT4-fs (vda9): mounted filesystem 9f41abed-fd12-4e57-bcd4-5c0ef7f8a1bf r/w with ordered data mode. Quota mode: none. Jan 30 13:46:15.332593 systemd[1]: Mounted sysroot.mount - /sysroot. Jan 30 13:46:15.333743 systemd[1]: Reached target initrd-root-fs.target - Initrd Root File System. Jan 30 13:46:15.347772 systemd[1]: Mounting sysroot-oem.mount - /sysroot/oem... Jan 30 13:46:15.350109 systemd[1]: Mounting sysroot-usr.mount - /sysroot/usr... Jan 30 13:46:15.353195 systemd[1]: flatcar-metadata-hostname.service - Flatcar Metadata Hostname Agent was skipped because no trigger condition checks were met. Jan 30 13:46:15.353255 systemd[1]: ignition-remount-sysroot.service - Remount /sysroot read-write for Ignition was skipped because of an unmet condition check (ConditionPathIsReadWrite=!/sysroot). Jan 30 13:46:15.362851 kernel: BTRFS: device label OEM devid 1 transid 13 /dev/vda6 scanned by mount (816) Jan 30 13:46:15.362874 kernel: BTRFS info (device vda6): first mount of filesystem aa75aabd-8755-4402-b4b6-23093345fe03 Jan 30 13:46:15.362885 kernel: BTRFS info (device vda6): using crc32c (crc32c-intel) checksum algorithm Jan 30 13:46:15.362896 kernel: BTRFS info (device vda6): using free space tree Jan 30 13:46:15.355424 systemd[1]: Reached target ignition-diskful.target - Ignition Boot Disk Setup. Jan 30 13:46:15.364822 kernel: BTRFS info (device vda6): auto enabling async discard Jan 30 13:46:15.366111 systemd[1]: Mounted sysroot-usr.mount - /sysroot/usr. Jan 30 13:46:15.367910 systemd[1]: Mounted sysroot-oem.mount - /sysroot/oem. 
Jan 30 13:46:15.387006 systemd[1]: Starting initrd-setup-root.service - Root filesystem setup... Jan 30 13:46:15.421727 initrd-setup-root[840]: cut: /sysroot/etc/passwd: No such file or directory Jan 30 13:46:15.426000 initrd-setup-root[847]: cut: /sysroot/etc/group: No such file or directory Jan 30 13:46:15.429858 initrd-setup-root[854]: cut: /sysroot/etc/shadow: No such file or directory Jan 30 13:46:15.434667 initrd-setup-root[861]: cut: /sysroot/etc/gshadow: No such file or directory Jan 30 13:46:15.519552 systemd[1]: Finished initrd-setup-root.service - Root filesystem setup. Jan 30 13:46:15.532896 systemd[1]: Starting ignition-mount.service - Ignition (mount)... Jan 30 13:46:15.536074 systemd[1]: Starting sysroot-boot.service - /sysroot/boot... Jan 30 13:46:15.540825 kernel: BTRFS info (device vda6): last unmount of filesystem aa75aabd-8755-4402-b4b6-23093345fe03 Jan 30 13:46:15.560197 systemd[1]: Finished sysroot-boot.service - /sysroot/boot. Jan 30 13:46:15.587870 ignition[933]: INFO : Ignition 2.19.0 Jan 30 13:46:15.587870 ignition[933]: INFO : Stage: mount Jan 30 13:46:15.589617 ignition[933]: INFO : no configs at "/usr/lib/ignition/base.d" Jan 30 13:46:15.589617 ignition[933]: INFO : no config dir at "/usr/lib/ignition/base.platform.d/qemu" Jan 30 13:46:15.589617 ignition[933]: INFO : mount: mount passed Jan 30 13:46:15.589617 ignition[933]: INFO : Ignition finished successfully Jan 30 13:46:15.595041 systemd[1]: Finished ignition-mount.service - Ignition (mount). Jan 30 13:46:15.608900 systemd[1]: Starting ignition-files.service - Ignition (files)... Jan 30 13:46:15.906050 systemd[1]: sysroot-oem.mount: Deactivated successfully. Jan 30 13:46:15.919019 systemd[1]: Mounting sysroot-oem.mount - /sysroot/oem... Jan 30 13:46:15.926830 kernel: BTRFS: device label OEM devid 1 transid 14 /dev/vda6 scanned by mount (942) Jan 30 13:46:15.928850 kernel: BTRFS info (device vda6): first mount of filesystem aa75aabd-8755-4402-b4b6-23093345fe03 Jan 30 13:46:15.928872 kernel: BTRFS info (device vda6): using crc32c (crc32c-intel) checksum algorithm Jan 30 13:46:15.928883 kernel: BTRFS info (device vda6): using free space tree Jan 30 13:46:15.931833 kernel: BTRFS info (device vda6): auto enabling async discard Jan 30 13:46:15.933253 systemd[1]: Mounted sysroot-oem.mount - /sysroot/oem. 
Jan 30 13:46:15.965689 ignition[959]: INFO : Ignition 2.19.0 Jan 30 13:46:15.965689 ignition[959]: INFO : Stage: files Jan 30 13:46:15.967494 ignition[959]: INFO : no configs at "/usr/lib/ignition/base.d" Jan 30 13:46:15.967494 ignition[959]: INFO : no config dir at "/usr/lib/ignition/base.platform.d/qemu" Jan 30 13:46:15.967494 ignition[959]: DEBUG : files: compiled without relabeling support, skipping Jan 30 13:46:15.967494 ignition[959]: INFO : files: ensureUsers: op(1): [started] creating or modifying user "core" Jan 30 13:46:15.967494 ignition[959]: DEBUG : files: ensureUsers: op(1): executing: "usermod" "--root" "/sysroot" "core" Jan 30 13:46:15.974203 ignition[959]: INFO : files: ensureUsers: op(1): [finished] creating or modifying user "core" Jan 30 13:46:15.974203 ignition[959]: INFO : files: ensureUsers: op(2): [started] adding ssh keys to user "core" Jan 30 13:46:15.974203 ignition[959]: INFO : files: ensureUsers: op(2): [finished] adding ssh keys to user "core" Jan 30 13:46:15.974203 ignition[959]: INFO : files: createFilesystemsFiles: createFiles: op(3): [started] writing file "/sysroot/etc/flatcar-cgroupv1" Jan 30 13:46:15.974203 ignition[959]: INFO : files: createFilesystemsFiles: createFiles: op(3): [finished] writing file "/sysroot/etc/flatcar-cgroupv1" Jan 30 13:46:15.974203 ignition[959]: INFO : files: createFilesystemsFiles: createFiles: op(4): [started] writing file "/sysroot/home/core/install.sh" Jan 30 13:46:15.974203 ignition[959]: INFO : files: createFilesystemsFiles: createFiles: op(4): [finished] writing file "/sysroot/home/core/install.sh" Jan 30 13:46:15.974203 ignition[959]: INFO : files: createFilesystemsFiles: createFiles: op(5): [started] writing file "/sysroot/etc/flatcar/update.conf" Jan 30 13:46:15.974203 ignition[959]: INFO : files: createFilesystemsFiles: createFiles: op(5): [finished] writing file "/sysroot/etc/flatcar/update.conf" Jan 30 13:46:15.974203 ignition[959]: INFO : files: createFilesystemsFiles: createFiles: op(6): [started] writing link "/sysroot/etc/extensions/kubernetes.raw" -> "/opt/extensions/kubernetes/kubernetes-v1.30.1-x86-64.raw" Jan 30 13:46:15.974203 ignition[959]: INFO : files: createFilesystemsFiles: createFiles: op(6): [finished] writing link "/sysroot/etc/extensions/kubernetes.raw" -> "/opt/extensions/kubernetes/kubernetes-v1.30.1-x86-64.raw" Jan 30 13:46:15.974203 ignition[959]: INFO : files: createFilesystemsFiles: createFiles: op(7): [started] writing file "/sysroot/opt/extensions/kubernetes/kubernetes-v1.30.1-x86-64.raw" Jan 30 13:46:15.974203 ignition[959]: INFO : files: createFilesystemsFiles: createFiles: op(7): GET https://github.com/flatcar/sysext-bakery/releases/download/latest/kubernetes-v1.30.1-x86-64.raw: attempt #1 Jan 30 13:46:15.970745 unknown[959]: wrote ssh authorized keys file for user: core Jan 30 13:46:16.365560 ignition[959]: INFO : files: createFilesystemsFiles: createFiles: op(7): GET result: OK Jan 30 13:46:16.676927 ignition[959]: INFO : files: createFilesystemsFiles: createFiles: op(7): [finished] writing file "/sysroot/opt/extensions/kubernetes/kubernetes-v1.30.1-x86-64.raw" Jan 30 13:46:16.676927 ignition[959]: INFO : files: op(8): [started] processing unit "containerd.service" Jan 30 13:46:16.681127 ignition[959]: INFO : files: op(8): op(9): [started] writing systemd drop-in "10-use-cgroupfs.conf" at "/sysroot/etc/systemd/system/containerd.service.d/10-use-cgroupfs.conf" Jan 30 13:46:16.684257 ignition[959]: INFO : files: op(8): op(9): [finished] writing systemd drop-in "10-use-cgroupfs.conf" at 
"/sysroot/etc/systemd/system/containerd.service.d/10-use-cgroupfs.conf" Jan 30 13:46:16.684257 ignition[959]: INFO : files: op(8): [finished] processing unit "containerd.service" Jan 30 13:46:16.684257 ignition[959]: INFO : files: op(a): [started] processing unit "coreos-metadata.service" Jan 30 13:46:16.684257 ignition[959]: INFO : files: op(a): op(b): [started] writing unit "coreos-metadata.service" at "/sysroot/etc/systemd/system/coreos-metadata.service" Jan 30 13:46:16.684257 ignition[959]: INFO : files: op(a): op(b): [finished] writing unit "coreos-metadata.service" at "/sysroot/etc/systemd/system/coreos-metadata.service" Jan 30 13:46:16.684257 ignition[959]: INFO : files: op(a): [finished] processing unit "coreos-metadata.service" Jan 30 13:46:16.684257 ignition[959]: INFO : files: op(c): [started] setting preset to disabled for "coreos-metadata.service" Jan 30 13:46:16.711621 ignition[959]: INFO : files: op(c): op(d): [started] removing enablement symlink(s) for "coreos-metadata.service" Jan 30 13:46:16.716445 ignition[959]: INFO : files: op(c): op(d): [finished] removing enablement symlink(s) for "coreos-metadata.service" Jan 30 13:46:16.716445 ignition[959]: INFO : files: op(c): [finished] setting preset to disabled for "coreos-metadata.service" Jan 30 13:46:16.719978 ignition[959]: INFO : files: createResultFile: createFiles: op(e): [started] writing file "/sysroot/etc/.ignition-result.json" Jan 30 13:46:16.721913 ignition[959]: INFO : files: createResultFile: createFiles: op(e): [finished] writing file "/sysroot/etc/.ignition-result.json" Jan 30 13:46:16.723663 ignition[959]: INFO : files: files passed Jan 30 13:46:16.724465 ignition[959]: INFO : Ignition finished successfully Jan 30 13:46:16.728203 systemd[1]: Finished ignition-files.service - Ignition (files). Jan 30 13:46:16.737950 systemd[1]: Starting ignition-quench.service - Ignition (record completion)... Jan 30 13:46:16.739709 systemd[1]: Starting initrd-setup-root-after-ignition.service - Root filesystem completion... Jan 30 13:46:16.743927 systemd[1]: ignition-quench.service: Deactivated successfully. Jan 30 13:46:16.744053 systemd[1]: Finished ignition-quench.service - Ignition (record completion). Jan 30 13:46:16.749681 initrd-setup-root-after-ignition[987]: grep: /sysroot/oem/oem-release: No such file or directory Jan 30 13:46:16.752419 initrd-setup-root-after-ignition[989]: grep: /sysroot/etc/flatcar/enabled-sysext.conf: No such file or directory Jan 30 13:46:16.752419 initrd-setup-root-after-ignition[989]: grep: /sysroot/usr/share/flatcar/enabled-sysext.conf: No such file or directory Jan 30 13:46:16.756638 initrd-setup-root-after-ignition[993]: grep: /sysroot/etc/flatcar/enabled-sysext.conf: No such file or directory Jan 30 13:46:16.754784 systemd[1]: Finished initrd-setup-root-after-ignition.service - Root filesystem completion. Jan 30 13:46:16.756827 systemd[1]: Reached target ignition-complete.target - Ignition Complete. Jan 30 13:46:16.761967 systemd-networkd[786]: eth0: Gained IPv6LL Jan 30 13:46:16.764934 systemd[1]: Starting initrd-parse-etc.service - Mountpoints Configured in the Real Root... Jan 30 13:46:16.788177 systemd[1]: initrd-parse-etc.service: Deactivated successfully. Jan 30 13:46:16.788294 systemd[1]: Finished initrd-parse-etc.service - Mountpoints Configured in the Real Root. Jan 30 13:46:16.788964 systemd[1]: Reached target initrd-fs.target - Initrd File Systems. Jan 30 13:46:16.791769 systemd[1]: Reached target initrd.target - Initrd Default Target. 
Jan 30 13:46:16.793654 systemd[1]: dracut-mount.service - dracut mount hook was skipped because no trigger condition checks were met. Jan 30 13:46:16.794386 systemd[1]: Starting dracut-pre-pivot.service - dracut pre-pivot and cleanup hook... Jan 30 13:46:16.814072 systemd[1]: Finished dracut-pre-pivot.service - dracut pre-pivot and cleanup hook. Jan 30 13:46:16.826943 systemd[1]: Starting initrd-cleanup.service - Cleaning Up and Shutting Down Daemons... Jan 30 13:46:16.837348 systemd[1]: Stopped target nss-lookup.target - Host and Network Name Lookups. Jan 30 13:46:16.838590 systemd[1]: Stopped target remote-cryptsetup.target - Remote Encrypted Volumes. Jan 30 13:46:16.840750 systemd[1]: Stopped target timers.target - Timer Units. Jan 30 13:46:16.842714 systemd[1]: dracut-pre-pivot.service: Deactivated successfully. Jan 30 13:46:16.842832 systemd[1]: Stopped dracut-pre-pivot.service - dracut pre-pivot and cleanup hook. Jan 30 13:46:16.845027 systemd[1]: Stopped target initrd.target - Initrd Default Target. Jan 30 13:46:16.846693 systemd[1]: Stopped target basic.target - Basic System. Jan 30 13:46:16.848671 systemd[1]: Stopped target ignition-complete.target - Ignition Complete. Jan 30 13:46:16.850661 systemd[1]: Stopped target ignition-diskful.target - Ignition Boot Disk Setup. Jan 30 13:46:16.852622 systemd[1]: Stopped target initrd-root-device.target - Initrd Root Device. Jan 30 13:46:16.854713 systemd[1]: Stopped target remote-fs.target - Remote File Systems. Jan 30 13:46:16.856816 systemd[1]: Stopped target remote-fs-pre.target - Preparation for Remote File Systems. Jan 30 13:46:16.859001 systemd[1]: Stopped target sysinit.target - System Initialization. Jan 30 13:46:16.860955 systemd[1]: Stopped target local-fs.target - Local File Systems. Jan 30 13:46:16.863107 systemd[1]: Stopped target swap.target - Swaps. Jan 30 13:46:16.864835 systemd[1]: dracut-pre-mount.service: Deactivated successfully. Jan 30 13:46:16.864953 systemd[1]: Stopped dracut-pre-mount.service - dracut pre-mount hook. Jan 30 13:46:16.867032 systemd[1]: Stopped target cryptsetup.target - Local Encrypted Volumes. Jan 30 13:46:16.868607 systemd[1]: Stopped target cryptsetup-pre.target - Local Encrypted Volumes (Pre). Jan 30 13:46:16.870640 systemd[1]: clevis-luks-askpass.path: Deactivated successfully. Jan 30 13:46:16.870756 systemd[1]: Stopped clevis-luks-askpass.path - Forward Password Requests to Clevis Directory Watch. Jan 30 13:46:16.872825 systemd[1]: dracut-initqueue.service: Deactivated successfully. Jan 30 13:46:16.872938 systemd[1]: Stopped dracut-initqueue.service - dracut initqueue hook. Jan 30 13:46:16.875084 systemd[1]: ignition-fetch-offline.service: Deactivated successfully. Jan 30 13:46:16.875190 systemd[1]: Stopped ignition-fetch-offline.service - Ignition (fetch-offline). Jan 30 13:46:16.877173 systemd[1]: Stopped target paths.target - Path Units. Jan 30 13:46:16.878874 systemd[1]: systemd-ask-password-console.path: Deactivated successfully. Jan 30 13:46:16.884898 systemd[1]: Stopped systemd-ask-password-console.path - Dispatch Password Requests to Console Directory Watch. Jan 30 13:46:16.886377 systemd[1]: Stopped target slices.target - Slice Units. Jan 30 13:46:16.888243 systemd[1]: Stopped target sockets.target - Socket Units. Jan 30 13:46:16.890557 systemd[1]: iscsid.socket: Deactivated successfully. Jan 30 13:46:16.890646 systemd[1]: Closed iscsid.socket - Open-iSCSI iscsid Socket. Jan 30 13:46:16.892370 systemd[1]: iscsiuio.socket: Deactivated successfully. 
Jan 30 13:46:16.892455 systemd[1]: Closed iscsiuio.socket - Open-iSCSI iscsiuio Socket. Jan 30 13:46:16.894221 systemd[1]: initrd-setup-root-after-ignition.service: Deactivated successfully. Jan 30 13:46:16.894329 systemd[1]: Stopped initrd-setup-root-after-ignition.service - Root filesystem completion. Jan 30 13:46:16.896220 systemd[1]: ignition-files.service: Deactivated successfully. Jan 30 13:46:16.896322 systemd[1]: Stopped ignition-files.service - Ignition (files). Jan 30 13:46:16.906938 systemd[1]: Stopping ignition-mount.service - Ignition (mount)... Jan 30 13:46:16.908435 systemd[1]: Stopping sysroot-boot.service - /sysroot/boot... Jan 30 13:46:16.909757 systemd[1]: systemd-udev-trigger.service: Deactivated successfully. Jan 30 13:46:16.909890 systemd[1]: Stopped systemd-udev-trigger.service - Coldplug All udev Devices. Jan 30 13:46:16.911890 systemd[1]: dracut-pre-trigger.service: Deactivated successfully. Jan 30 13:46:16.911994 systemd[1]: Stopped dracut-pre-trigger.service - dracut pre-trigger hook. Jan 30 13:46:16.916733 systemd[1]: initrd-cleanup.service: Deactivated successfully. Jan 30 13:46:16.916889 systemd[1]: Finished initrd-cleanup.service - Cleaning Up and Shutting Down Daemons. Jan 30 13:46:16.920548 ignition[1014]: INFO : Ignition 2.19.0 Jan 30 13:46:16.920548 ignition[1014]: INFO : Stage: umount Jan 30 13:46:16.922222 ignition[1014]: INFO : no configs at "/usr/lib/ignition/base.d" Jan 30 13:46:16.922222 ignition[1014]: INFO : no config dir at "/usr/lib/ignition/base.platform.d/qemu" Jan 30 13:46:16.922222 ignition[1014]: INFO : umount: umount passed Jan 30 13:46:16.922222 ignition[1014]: INFO : Ignition finished successfully Jan 30 13:46:16.923652 systemd[1]: ignition-mount.service: Deactivated successfully. Jan 30 13:46:16.923790 systemd[1]: Stopped ignition-mount.service - Ignition (mount). Jan 30 13:46:16.925558 systemd[1]: Stopped target network.target - Network. Jan 30 13:46:16.927036 systemd[1]: ignition-disks.service: Deactivated successfully. Jan 30 13:46:16.927087 systemd[1]: Stopped ignition-disks.service - Ignition (disks). Jan 30 13:46:16.928926 systemd[1]: ignition-kargs.service: Deactivated successfully. Jan 30 13:46:16.928971 systemd[1]: Stopped ignition-kargs.service - Ignition (kargs). Jan 30 13:46:16.930864 systemd[1]: ignition-setup.service: Deactivated successfully. Jan 30 13:46:16.930919 systemd[1]: Stopped ignition-setup.service - Ignition (setup). Jan 30 13:46:16.932699 systemd[1]: ignition-setup-pre.service: Deactivated successfully. Jan 30 13:46:16.932745 systemd[1]: Stopped ignition-setup-pre.service - Ignition env setup. Jan 30 13:46:16.935987 systemd[1]: Stopping systemd-networkd.service - Network Configuration... Jan 30 13:46:16.937824 systemd[1]: Stopping systemd-resolved.service - Network Name Resolution... Jan 30 13:46:16.940537 systemd[1]: sysroot-boot.mount: Deactivated successfully. Jan 30 13:46:16.940849 systemd-networkd[786]: eth0: DHCPv6 lease lost Jan 30 13:46:16.943388 systemd[1]: systemd-networkd.service: Deactivated successfully. Jan 30 13:46:16.943517 systemd[1]: Stopped systemd-networkd.service - Network Configuration. Jan 30 13:46:16.945928 systemd[1]: systemd-networkd.socket: Deactivated successfully. Jan 30 13:46:16.945967 systemd[1]: Closed systemd-networkd.socket - Network Service Netlink Socket. Jan 30 13:46:16.953871 systemd[1]: Stopping network-cleanup.service - Network Cleanup... Jan 30 13:46:16.955591 systemd[1]: parse-ip-for-networkd.service: Deactivated successfully. 
Jan 30 13:46:16.955643 systemd[1]: Stopped parse-ip-for-networkd.service - Write systemd-networkd units from cmdline. Jan 30 13:46:16.957979 systemd[1]: Stopping systemd-udevd.service - Rule-based Manager for Device Events and Files... Jan 30 13:46:16.960220 systemd[1]: systemd-resolved.service: Deactivated successfully. Jan 30 13:46:16.960335 systemd[1]: Stopped systemd-resolved.service - Network Name Resolution. Jan 30 13:46:16.965083 systemd[1]: systemd-sysctl.service: Deactivated successfully. Jan 30 13:46:16.965173 systemd[1]: Stopped systemd-sysctl.service - Apply Kernel Variables. Jan 30 13:46:16.967126 systemd[1]: systemd-modules-load.service: Deactivated successfully. Jan 30 13:46:16.967174 systemd[1]: Stopped systemd-modules-load.service - Load Kernel Modules. Jan 30 13:46:16.969180 systemd[1]: systemd-tmpfiles-setup.service: Deactivated successfully. Jan 30 13:46:16.969227 systemd[1]: Stopped systemd-tmpfiles-setup.service - Create System Files and Directories. Jan 30 13:46:16.975502 systemd[1]: network-cleanup.service: Deactivated successfully. Jan 30 13:46:16.975623 systemd[1]: Stopped network-cleanup.service - Network Cleanup. Jan 30 13:46:16.977552 systemd[1]: systemd-udevd.service: Deactivated successfully. Jan 30 13:46:16.977726 systemd[1]: Stopped systemd-udevd.service - Rule-based Manager for Device Events and Files. Jan 30 13:46:16.979198 systemd[1]: systemd-udevd-control.socket: Deactivated successfully. Jan 30 13:46:16.979244 systemd[1]: Closed systemd-udevd-control.socket - udev Control Socket. Jan 30 13:46:16.980988 systemd[1]: systemd-udevd-kernel.socket: Deactivated successfully. Jan 30 13:46:16.981029 systemd[1]: Closed systemd-udevd-kernel.socket - udev Kernel Socket. Jan 30 13:46:16.982902 systemd[1]: dracut-pre-udev.service: Deactivated successfully. Jan 30 13:46:16.982948 systemd[1]: Stopped dracut-pre-udev.service - dracut pre-udev hook. Jan 30 13:46:16.985019 systemd[1]: dracut-cmdline.service: Deactivated successfully. Jan 30 13:46:16.985065 systemd[1]: Stopped dracut-cmdline.service - dracut cmdline hook. Jan 30 13:46:16.987201 systemd[1]: dracut-cmdline-ask.service: Deactivated successfully. Jan 30 13:46:16.987249 systemd[1]: Stopped dracut-cmdline-ask.service - dracut ask for additional cmdline parameters. Jan 30 13:46:16.999921 systemd[1]: Starting initrd-udevadm-cleanup-db.service - Cleanup udev Database... Jan 30 13:46:17.000994 systemd[1]: systemd-tmpfiles-setup-dev.service: Deactivated successfully. Jan 30 13:46:17.001044 systemd[1]: Stopped systemd-tmpfiles-setup-dev.service - Create Static Device Nodes in /dev. Jan 30 13:46:17.003261 systemd[1]: systemd-tmpfiles-setup-dev-early.service: Deactivated successfully. Jan 30 13:46:17.003309 systemd[1]: Stopped systemd-tmpfiles-setup-dev-early.service - Create Static Device Nodes in /dev gracefully. Jan 30 13:46:17.005459 systemd[1]: kmod-static-nodes.service: Deactivated successfully. Jan 30 13:46:17.005506 systemd[1]: Stopped kmod-static-nodes.service - Create List of Static Device Nodes. Jan 30 13:46:17.007881 systemd[1]: systemd-vconsole-setup.service: Deactivated successfully. Jan 30 13:46:17.007928 systemd[1]: Stopped systemd-vconsole-setup.service - Virtual Console Setup. Jan 30 13:46:17.010390 systemd[1]: initrd-udevadm-cleanup-db.service: Deactivated successfully. Jan 30 13:46:17.010491 systemd[1]: Finished initrd-udevadm-cleanup-db.service - Cleanup udev Database. Jan 30 13:46:17.076448 systemd[1]: sysroot-boot.service: Deactivated successfully. 
Jan 30 13:46:17.076569 systemd[1]: Stopped sysroot-boot.service - /sysroot/boot. Jan 30 13:46:17.079400 systemd[1]: Reached target initrd-switch-root.target - Switch Root. Jan 30 13:46:17.081412 systemd[1]: initrd-setup-root.service: Deactivated successfully. Jan 30 13:46:17.081465 systemd[1]: Stopped initrd-setup-root.service - Root filesystem setup. Jan 30 13:46:17.096939 systemd[1]: Starting initrd-switch-root.service - Switch Root... Jan 30 13:46:17.105661 systemd[1]: Switching root. Jan 30 13:46:17.135160 systemd-journald[193]: Journal stopped Jan 30 13:46:18.179378 systemd-journald[193]: Received SIGTERM from PID 1 (systemd). Jan 30 13:46:18.179448 kernel: SELinux: policy capability network_peer_controls=1 Jan 30 13:46:18.179463 kernel: SELinux: policy capability open_perms=1 Jan 30 13:46:18.179478 kernel: SELinux: policy capability extended_socket_class=1 Jan 30 13:46:18.179491 kernel: SELinux: policy capability always_check_network=0 Jan 30 13:46:18.179507 kernel: SELinux: policy capability cgroup_seclabel=1 Jan 30 13:46:18.179519 kernel: SELinux: policy capability nnp_nosuid_transition=1 Jan 30 13:46:18.179534 kernel: SELinux: policy capability genfs_seclabel_symlinks=0 Jan 30 13:46:18.179545 kernel: SELinux: policy capability ioctl_skip_cloexec=0 Jan 30 13:46:18.179557 kernel: audit: type=1403 audit(1738244777.467:2): auid=4294967295 ses=4294967295 lsm=selinux res=1 Jan 30 13:46:18.179573 systemd[1]: Successfully loaded SELinux policy in 39.520ms. Jan 30 13:46:18.179596 systemd[1]: Relabeled /dev, /dev/shm, /run, /sys/fs/cgroup in 14.156ms. Jan 30 13:46:18.179612 systemd[1]: systemd 255 running in system mode (+PAM +AUDIT +SELINUX -APPARMOR +IMA +SMACK +SECCOMP +GCRYPT -GNUTLS +OPENSSL -ACL +BLKID +CURL +ELFUTILS -FIDO2 +IDN2 -IDN +IPTC +KMOD +LIBCRYPTSETUP +LIBFDISK +PCRE2 -PWQUALITY -P11KIT -QRENCODE +TPM2 +BZIP2 +LZ4 +XZ +ZLIB +ZSTD -BPF_FRAMEWORK -XKBCOMMON +UTMP -SYSVINIT default-hierarchy=unified) Jan 30 13:46:18.179625 systemd[1]: Detected virtualization kvm. Jan 30 13:46:18.179637 systemd[1]: Detected architecture x86-64. Jan 30 13:46:18.179648 systemd[1]: Detected first boot. Jan 30 13:46:18.179660 systemd[1]: Initializing machine ID from VM UUID. Jan 30 13:46:18.179672 zram_generator::config[1088]: No configuration found. Jan 30 13:46:18.179686 systemd[1]: Populated /etc with preset unit settings. Jan 30 13:46:18.179698 systemd[1]: Queued start job for default target multi-user.target. Jan 30 13:46:18.179710 systemd[1]: Unnecessary job was removed for dev-vda6.device - /dev/vda6. Jan 30 13:46:18.179725 systemd[1]: Created slice system-addon\x2dconfig.slice - Slice /system/addon-config. Jan 30 13:46:18.179738 systemd[1]: Created slice system-addon\x2drun.slice - Slice /system/addon-run. Jan 30 13:46:18.179749 systemd[1]: Created slice system-getty.slice - Slice /system/getty. Jan 30 13:46:18.179761 systemd[1]: Created slice system-modprobe.slice - Slice /system/modprobe. Jan 30 13:46:18.179774 systemd[1]: Created slice system-serial\x2dgetty.slice - Slice /system/serial-getty. Jan 30 13:46:18.179786 systemd[1]: Created slice system-system\x2dcloudinit.slice - Slice /system/system-cloudinit. Jan 30 13:46:18.179798 systemd[1]: Created slice system-systemd\x2dfsck.slice - Slice /system/systemd-fsck. Jan 30 13:46:18.181903 systemd[1]: Created slice user.slice - User and Session Slice. Jan 30 13:46:18.181928 systemd[1]: Started clevis-luks-askpass.path - Forward Password Requests to Clevis Directory Watch. 
Jan 30 13:46:18.181941 systemd[1]: Started systemd-ask-password-console.path - Dispatch Password Requests to Console Directory Watch. Jan 30 13:46:18.181953 systemd[1]: Started systemd-ask-password-wall.path - Forward Password Requests to Wall Directory Watch. Jan 30 13:46:18.181965 systemd[1]: Set up automount boot.automount - Boot partition Automount Point. Jan 30 13:46:18.181976 systemd[1]: Set up automount proc-sys-fs-binfmt_misc.automount - Arbitrary Executable File Formats File System Automount Point. Jan 30 13:46:18.181989 systemd[1]: Expecting device dev-disk-by\x2dlabel-OEM.device - /dev/disk/by-label/OEM... Jan 30 13:46:18.182000 systemd[1]: Expecting device dev-ttyS0.device - /dev/ttyS0... Jan 30 13:46:18.182012 systemd[1]: Reached target cryptsetup-pre.target - Local Encrypted Volumes (Pre). Jan 30 13:46:18.182024 systemd[1]: Reached target integritysetup.target - Local Integrity Protected Volumes. Jan 30 13:46:18.182039 systemd[1]: Reached target remote-cryptsetup.target - Remote Encrypted Volumes. Jan 30 13:46:18.182051 systemd[1]: Reached target remote-fs.target - Remote File Systems. Jan 30 13:46:18.182063 systemd[1]: Reached target slices.target - Slice Units. Jan 30 13:46:18.182074 systemd[1]: Reached target swap.target - Swaps. Jan 30 13:46:18.182086 systemd[1]: Reached target veritysetup.target - Local Verity Protected Volumes. Jan 30 13:46:18.182098 systemd[1]: Listening on systemd-coredump.socket - Process Core Dump Socket. Jan 30 13:46:18.182111 systemd[1]: Listening on systemd-journald-dev-log.socket - Journal Socket (/dev/log). Jan 30 13:46:18.182123 systemd[1]: Listening on systemd-journald.socket - Journal Socket. Jan 30 13:46:18.182137 systemd[1]: Listening on systemd-networkd.socket - Network Service Netlink Socket. Jan 30 13:46:18.182149 systemd[1]: Listening on systemd-udevd-control.socket - udev Control Socket. Jan 30 13:46:18.182161 systemd[1]: Listening on systemd-udevd-kernel.socket - udev Kernel Socket. Jan 30 13:46:18.182172 systemd[1]: Listening on systemd-userdbd.socket - User Database Manager Socket. Jan 30 13:46:18.182184 systemd[1]: Mounting dev-hugepages.mount - Huge Pages File System... Jan 30 13:46:18.182196 systemd[1]: Mounting dev-mqueue.mount - POSIX Message Queue File System... Jan 30 13:46:18.182207 systemd[1]: Mounting media.mount - External Media Directory... Jan 30 13:46:18.182219 systemd[1]: proc-xen.mount - /proc/xen was skipped because of an unmet condition check (ConditionVirtualization=xen). Jan 30 13:46:18.182235 systemd[1]: Mounting sys-kernel-debug.mount - Kernel Debug File System... Jan 30 13:46:18.182251 systemd[1]: Mounting sys-kernel-tracing.mount - Kernel Trace File System... Jan 30 13:46:18.182263 systemd[1]: Mounting tmp.mount - Temporary Directory /tmp... Jan 30 13:46:18.182275 systemd[1]: Starting flatcar-tmpfiles.service - Create missing system files... Jan 30 13:46:18.182286 systemd[1]: ignition-delete-config.service - Ignition (delete config) was skipped because no trigger condition checks were met. Jan 30 13:46:18.182298 systemd[1]: Starting kmod-static-nodes.service - Create List of Static Device Nodes... Jan 30 13:46:18.182311 systemd[1]: Starting modprobe@configfs.service - Load Kernel Module configfs... Jan 30 13:46:18.182322 systemd[1]: Starting modprobe@dm_mod.service - Load Kernel Module dm_mod... Jan 30 13:46:18.182334 systemd[1]: Starting modprobe@drm.service - Load Kernel Module drm... Jan 30 13:46:18.182346 systemd[1]: Starting modprobe@efi_pstore.service - Load Kernel Module efi_pstore... 
Jan 30 13:46:18.182360 systemd[1]: Starting modprobe@fuse.service - Load Kernel Module fuse... Jan 30 13:46:18.182373 systemd[1]: Starting modprobe@loop.service - Load Kernel Module loop... Jan 30 13:46:18.182386 systemd[1]: setup-nsswitch.service - Create /etc/nsswitch.conf was skipped because of an unmet condition check (ConditionPathExists=!/etc/nsswitch.conf). Jan 30 13:46:18.182400 systemd[1]: systemd-journald.service: unit configures an IP firewall, but the local system does not support BPF/cgroup firewalling. Jan 30 13:46:18.182413 systemd[1]: systemd-journald.service: (This warning is only shown for the first unit using IP firewalling.) Jan 30 13:46:18.182425 systemd[1]: Starting systemd-journald.service - Journal Service... Jan 30 13:46:18.182436 systemd[1]: Starting systemd-modules-load.service - Load Kernel Modules... Jan 30 13:46:18.182448 kernel: fuse: init (API version 7.39) Jan 30 13:46:18.182462 kernel: loop: module loaded Jan 30 13:46:18.182473 systemd[1]: Starting systemd-network-generator.service - Generate network units from Kernel command line... Jan 30 13:46:18.182485 systemd[1]: Starting systemd-remount-fs.service - Remount Root and Kernel File Systems... Jan 30 13:46:18.182497 systemd[1]: Starting systemd-udev-trigger.service - Coldplug All udev Devices... Jan 30 13:46:18.182511 systemd[1]: xenserver-pv-version.service - Set fake PV driver version for XenServer was skipped because of an unmet condition check (ConditionVirtualization=xen). Jan 30 13:46:18.182522 systemd[1]: Mounted dev-hugepages.mount - Huge Pages File System. Jan 30 13:46:18.182534 kernel: ACPI: bus type drm_connector registered Jan 30 13:46:18.182564 systemd-journald[1169]: Collecting audit messages is disabled. Jan 30 13:46:18.182588 systemd[1]: Mounted dev-mqueue.mount - POSIX Message Queue File System. Jan 30 13:46:18.182600 systemd-journald[1169]: Journal started Jan 30 13:46:18.182622 systemd-journald[1169]: Runtime Journal (/run/log/journal/b32db746695a4e9fb70b4dd6b4302222) is 6.0M, max 48.3M, 42.2M free. Jan 30 13:46:18.182653 systemd[1]: Mounted media.mount - External Media Directory. Jan 30 13:46:18.188470 systemd[1]: Started systemd-journald.service - Journal Service. Jan 30 13:46:18.189131 systemd[1]: Mounted sys-kernel-debug.mount - Kernel Debug File System. Jan 30 13:46:18.190366 systemd[1]: Mounted sys-kernel-tracing.mount - Kernel Trace File System. Jan 30 13:46:18.191629 systemd[1]: Mounted tmp.mount - Temporary Directory /tmp. Jan 30 13:46:18.193037 systemd[1]: Finished flatcar-tmpfiles.service - Create missing system files. Jan 30 13:46:18.194599 systemd[1]: Finished kmod-static-nodes.service - Create List of Static Device Nodes. Jan 30 13:46:18.196225 systemd[1]: modprobe@configfs.service: Deactivated successfully. Jan 30 13:46:18.196444 systemd[1]: Finished modprobe@configfs.service - Load Kernel Module configfs. Jan 30 13:46:18.198143 systemd[1]: modprobe@dm_mod.service: Deactivated successfully. Jan 30 13:46:18.198354 systemd[1]: Finished modprobe@dm_mod.service - Load Kernel Module dm_mod. Jan 30 13:46:18.199819 systemd[1]: modprobe@drm.service: Deactivated successfully. Jan 30 13:46:18.200035 systemd[1]: Finished modprobe@drm.service - Load Kernel Module drm. Jan 30 13:46:18.201420 systemd[1]: modprobe@efi_pstore.service: Deactivated successfully. Jan 30 13:46:18.201629 systemd[1]: Finished modprobe@efi_pstore.service - Load Kernel Module efi_pstore. Jan 30 13:46:18.203173 systemd[1]: modprobe@fuse.service: Deactivated successfully. 
Jan 30 13:46:18.203383 systemd[1]: Finished modprobe@fuse.service - Load Kernel Module fuse. Jan 30 13:46:18.204899 systemd[1]: modprobe@loop.service: Deactivated successfully. Jan 30 13:46:18.205119 systemd[1]: Finished modprobe@loop.service - Load Kernel Module loop. Jan 30 13:46:18.206736 systemd[1]: Finished systemd-modules-load.service - Load Kernel Modules. Jan 30 13:46:18.208243 systemd[1]: Finished systemd-network-generator.service - Generate network units from Kernel command line. Jan 30 13:46:18.210097 systemd[1]: Finished systemd-remount-fs.service - Remount Root and Kernel File Systems. Jan 30 13:46:18.223090 systemd[1]: Reached target network-pre.target - Preparation for Network. Jan 30 13:46:18.231896 systemd[1]: Mounting sys-fs-fuse-connections.mount - FUSE Control File System... Jan 30 13:46:18.234116 systemd[1]: Mounting sys-kernel-config.mount - Kernel Configuration File System... Jan 30 13:46:18.235321 systemd[1]: remount-root.service - Remount Root File System was skipped because of an unmet condition check (ConditionPathIsReadWrite=!/). Jan 30 13:46:18.240046 systemd[1]: Starting systemd-hwdb-update.service - Rebuild Hardware Database... Jan 30 13:46:18.244009 systemd[1]: Starting systemd-journal-flush.service - Flush Journal to Persistent Storage... Jan 30 13:46:18.245256 systemd[1]: systemd-pstore.service - Platform Persistent Storage Archival was skipped because of an unmet condition check (ConditionDirectoryNotEmpty=/sys/fs/pstore). Jan 30 13:46:18.248950 systemd[1]: Starting systemd-random-seed.service - Load/Save OS Random Seed... Jan 30 13:46:18.250093 systemd[1]: systemd-repart.service - Repartition Root Disk was skipped because no trigger condition checks were met. Jan 30 13:46:18.251917 systemd[1]: Starting systemd-sysctl.service - Apply Kernel Variables... Jan 30 13:46:18.255627 systemd[1]: Starting systemd-tmpfiles-setup-dev-early.service - Create Static Device Nodes in /dev gracefully... Jan 30 13:46:18.258408 systemd[1]: Mounted sys-fs-fuse-connections.mount - FUSE Control File System. Jan 30 13:46:18.259694 systemd[1]: Mounted sys-kernel-config.mount - Kernel Configuration File System. Jan 30 13:46:18.261545 systemd-journald[1169]: Time spent on flushing to /var/log/journal/b32db746695a4e9fb70b4dd6b4302222 is 13.005ms for 973 entries. Jan 30 13:46:18.261545 systemd-journald[1169]: System Journal (/var/log/journal/b32db746695a4e9fb70b4dd6b4302222) is 8.0M, max 195.6M, 187.6M free. Jan 30 13:46:18.290822 systemd-journald[1169]: Received client request to flush runtime journal. Jan 30 13:46:18.271629 systemd[1]: Finished systemd-udev-trigger.service - Coldplug All udev Devices. Jan 30 13:46:18.282977 systemd[1]: Starting systemd-udev-settle.service - Wait for udev To Complete Device Initialization... Jan 30 13:46:18.285349 systemd[1]: Finished systemd-random-seed.service - Load/Save OS Random Seed. Jan 30 13:46:18.288173 systemd[1]: Reached target first-boot-complete.target - First Boot Complete. Jan 30 13:46:18.290858 systemd[1]: Finished systemd-sysctl.service - Apply Kernel Variables. Jan 30 13:46:18.294289 systemd[1]: Finished systemd-journal-flush.service - Flush Journal to Persistent Storage. Jan 30 13:46:18.300627 systemd-tmpfiles[1222]: ACLs are not supported, ignoring. Jan 30 13:46:18.300646 systemd-tmpfiles[1222]: ACLs are not supported, ignoring. Jan 30 13:46:18.300676 udevadm[1230]: systemd-udev-settle.service is deprecated. Please fix lvm2-activation.service, lvm2-activation-early.service not to pull it in. 
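The flush report above works out to roughly 13 microseconds per journal entry:

# "Time spent on flushing ... is 13.005ms for 973 entries."
flush_ms, entries = 13.005, 973
print(f"~{flush_ms / entries * 1000:.1f} us per journal entry")  # ~13.4 us
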
Jan 30 13:46:18.308054 systemd[1]: Finished systemd-tmpfiles-setup-dev-early.service - Create Static Device Nodes in /dev gracefully. Jan 30 13:46:18.321004 systemd[1]: Starting systemd-sysusers.service - Create System Users... Jan 30 13:46:18.346652 systemd[1]: Finished systemd-sysusers.service - Create System Users. Jan 30 13:46:18.354027 systemd[1]: Starting systemd-tmpfiles-setup-dev.service - Create Static Device Nodes in /dev... Jan 30 13:46:18.371565 systemd-tmpfiles[1244]: ACLs are not supported, ignoring. Jan 30 13:46:18.371586 systemd-tmpfiles[1244]: ACLs are not supported, ignoring. Jan 30 13:46:18.377240 systemd[1]: Finished systemd-tmpfiles-setup-dev.service - Create Static Device Nodes in /dev. Jan 30 13:46:18.803210 systemd[1]: Finished systemd-hwdb-update.service - Rebuild Hardware Database. Jan 30 13:46:18.822060 systemd[1]: Starting systemd-udevd.service - Rule-based Manager for Device Events and Files... Jan 30 13:46:18.845441 systemd-udevd[1250]: Using default interface naming scheme 'v255'. Jan 30 13:46:18.861043 systemd[1]: Started systemd-udevd.service - Rule-based Manager for Device Events and Files. Jan 30 13:46:18.875043 systemd[1]: Starting systemd-networkd.service - Network Configuration... Jan 30 13:46:18.882796 systemd[1]: Starting systemd-userdbd.service - User Database Manager... Jan 30 13:46:18.891174 systemd[1]: Found device dev-ttyS0.device - /dev/ttyS0. Jan 30 13:46:18.897888 kernel: BTRFS warning: duplicate device /dev/vda3 devid 1 generation 38 scanned by (udev-worker) (1269) Jan 30 13:46:18.942201 systemd[1]: Found device dev-disk-by\x2dlabel-OEM.device - /dev/disk/by-label/OEM. Jan 30 13:46:18.949574 systemd[1]: Started systemd-userdbd.service - User Database Manager. Jan 30 13:46:18.968849 kernel: input: Power Button as /devices/LNXSYSTM:00/LNXPWRBN:00/input/input3 Jan 30 13:46:18.978960 kernel: ACPI: button: Power Button [PWRF] Jan 30 13:46:18.983975 kernel: input: ImExPS/2 Generic Explorer Mouse as /devices/platform/i8042/serio1/input/input4 Jan 30 13:46:18.988870 kernel: i801_smbus 0000:00:1f.3: Enabling SMBus device Jan 30 13:46:18.990765 kernel: i801_smbus 0000:00:1f.3: SMBus using PCI interrupt Jan 30 13:46:18.992065 kernel: i2c i2c-0: 1/1 memory slots populated (from DMI) Jan 30 13:46:18.992239 kernel: i2c i2c-0: Memory type 0x07 not supported yet, not instantiating SPD Jan 30 13:46:19.012926 systemd-networkd[1258]: lo: Link UP Jan 30 13:46:19.013203 systemd-networkd[1258]: lo: Gained carrier Jan 30 13:46:19.014782 systemd-networkd[1258]: Enumeration completed Jan 30 13:46:19.020227 systemd-networkd[1258]: eth0: found matching network '/usr/lib/systemd/network/zz-default.network', based on potentially unpredictable interface name. Jan 30 13:46:19.020236 systemd-networkd[1258]: eth0: Configuring with /usr/lib/systemd/network/zz-default.network. Jan 30 13:46:19.021007 systemd-networkd[1258]: eth0: Link UP Jan 30 13:46:19.021011 systemd-networkd[1258]: eth0: Gained carrier Jan 30 13:46:19.021027 systemd-networkd[1258]: eth0: found matching network '/usr/lib/systemd/network/zz-default.network', based on potentially unpredictable interface name. Jan 30 13:46:19.022126 systemd[1]: Started systemd-networkd.service - Network Configuration. Jan 30 13:46:19.031864 kernel: mousedev: PS/2 mouse device common for all mice Jan 30 13:46:19.033949 systemd[1]: Starting systemd-networkd-wait-online.service - Wait for Network to be Configured... 
Jan 30 13:46:19.037921 systemd-networkd[1258]: eth0: DHCPv4 address 10.0.0.114/16, gateway 10.0.0.1 acquired from 10.0.0.1 Jan 30 13:46:19.041575 systemd[1]: Starting systemd-vconsole-setup.service - Virtual Console Setup... Jan 30 13:46:19.047082 systemd[1]: systemd-vconsole-setup.service: Deactivated successfully. Jan 30 13:46:19.047509 systemd[1]: Stopped systemd-vconsole-setup.service - Virtual Console Setup. Jan 30 13:46:19.051269 systemd[1]: Starting systemd-vconsole-setup.service - Virtual Console Setup... Jan 30 13:46:19.104861 kernel: kvm_amd: TSC scaling supported Jan 30 13:46:19.104925 kernel: kvm_amd: Nested Virtualization enabled Jan 30 13:46:19.104938 kernel: kvm_amd: Nested Paging enabled Jan 30 13:46:19.106032 kernel: kvm_amd: LBR virtualization supported Jan 30 13:46:19.106046 kernel: kvm_amd: Virtual VMLOAD VMSAVE supported Jan 30 13:46:19.107246 kernel: kvm_amd: Virtual GIF supported Jan 30 13:46:19.128242 kernel: EDAC MC: Ver: 3.0.0 Jan 30 13:46:19.133199 systemd[1]: Finished systemd-vconsole-setup.service - Virtual Console Setup. Jan 30 13:46:19.153009 systemd[1]: Finished systemd-udev-settle.service - Wait for udev To Complete Device Initialization. Jan 30 13:46:19.170933 systemd[1]: Starting lvm2-activation-early.service - Activation of LVM2 logical volumes... Jan 30 13:46:19.179012 lvm[1300]: WARNING: Failed to connect to lvmetad. Falling back to device scanning. Jan 30 13:46:19.211029 systemd[1]: Finished lvm2-activation-early.service - Activation of LVM2 logical volumes. Jan 30 13:46:19.213283 systemd[1]: Reached target cryptsetup.target - Local Encrypted Volumes. Jan 30 13:46:19.228004 systemd[1]: Starting lvm2-activation.service - Activation of LVM2 logical volumes... Jan 30 13:46:19.233017 lvm[1303]: WARNING: Failed to connect to lvmetad. Falling back to device scanning. Jan 30 13:46:19.268379 systemd[1]: Finished lvm2-activation.service - Activation of LVM2 logical volumes. Jan 30 13:46:19.269887 systemd[1]: Reached target local-fs-pre.target - Preparation for Local File Systems. Jan 30 13:46:19.271174 systemd[1]: var-lib-machines.mount - Virtual Machine and Container Storage (Compatibility) was skipped because of an unmet condition check (ConditionPathExists=/var/lib/machines.raw). Jan 30 13:46:19.271201 systemd[1]: Reached target local-fs.target - Local File Systems. Jan 30 13:46:19.272257 systemd[1]: Reached target machines.target - Containers. Jan 30 13:46:19.274283 systemd[1]: Listening on systemd-sysext.socket - System Extension Image Management (Varlink). Jan 30 13:46:19.291046 systemd[1]: Starting dracut-shutdown.service - Restore /run/initramfs on shutdown... Jan 30 13:46:19.293789 systemd[1]: Starting ldconfig.service - Rebuild Dynamic Linker Cache... Jan 30 13:46:19.295048 systemd[1]: systemd-binfmt.service - Set Up Additional Binary Formats was skipped because no trigger condition checks were met. Jan 30 13:46:19.295948 systemd[1]: Starting systemd-fsck@dev-disk-by\x2dlabel-OEM.service - File System Check on /dev/disk/by-label/OEM... Jan 30 13:46:19.299561 systemd[1]: Starting systemd-machine-id-commit.service - Commit a transient machine-id on disk... Jan 30 13:46:19.303511 systemd[1]: Starting systemd-sysext.service - Merge System Extension Images into /usr/ and /opt/... Jan 30 13:46:19.305931 systemd[1]: Finished dracut-shutdown.service - Restore /run/initramfs on shutdown. Jan 30 13:46:19.322785 systemd[1]: Finished systemd-fsck@dev-disk-by\x2dlabel-OEM.service - File System Check on /dev/disk/by-label/OEM. 
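The DHCPv4 lease above (10.0.0.114/16 with gateway 10.0.0.1) implies the following network parameters, easily derived with Python's standard ipaddress module:

import ipaddress

iface = ipaddress.ip_interface("10.0.0.114/16")
print(iface.network)                                      # 10.0.0.0/16
print(iface.network.netmask)                              # 255.255.0.0
print(iface.network.broadcast_address)                    # 10.0.255.255
print(ipaddress.ip_address("10.0.0.1") in iface.network)  # True: the gateway is on-link
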
Jan 30 13:46:19.324195 kernel: loop0: detected capacity change from 0 to 142488 Jan 30 13:46:19.327754 systemd[1]: etc-machine\x2did.mount: Deactivated successfully. Jan 30 13:46:19.328572 systemd[1]: Finished systemd-machine-id-commit.service - Commit a transient machine-id on disk. Jan 30 13:46:19.343836 kernel: squashfs: version 4.0 (2009/01/31) Phillip Lougher Jan 30 13:46:19.365842 kernel: loop1: detected capacity change from 0 to 140768 Jan 30 13:46:19.398843 kernel: loop2: detected capacity change from 0 to 210664 Jan 30 13:46:19.425832 kernel: loop3: detected capacity change from 0 to 142488 Jan 30 13:46:19.434843 kernel: loop4: detected capacity change from 0 to 140768 Jan 30 13:46:19.443835 kernel: loop5: detected capacity change from 0 to 210664 Jan 30 13:46:19.448602 (sd-merge)[1323]: Using extensions 'containerd-flatcar', 'docker-flatcar', 'kubernetes'. Jan 30 13:46:19.449219 (sd-merge)[1323]: Merged extensions into '/usr'. Jan 30 13:46:19.453365 systemd[1]: Reloading requested from client PID 1311 ('systemd-sysext') (unit systemd-sysext.service)... Jan 30 13:46:19.453380 systemd[1]: Reloading... Jan 30 13:46:19.506850 zram_generator::config[1351]: No configuration found. Jan 30 13:46:19.549098 ldconfig[1307]: /sbin/ldconfig: /lib/ld.so.conf is not an ELF file - it has the wrong magic bytes at the start. Jan 30 13:46:19.626953 systemd[1]: /usr/lib/systemd/system/docker.socket:6: ListenStream= references a path below legacy directory /var/run/, updating /var/run/docker.sock → /run/docker.sock; please update the unit file accordingly. Jan 30 13:46:19.690764 systemd[1]: Reloading finished in 236 ms. Jan 30 13:46:19.711845 systemd[1]: Finished ldconfig.service - Rebuild Dynamic Linker Cache. Jan 30 13:46:19.713528 systemd[1]: Finished systemd-sysext.service - Merge System Extension Images into /usr/ and /opt/. Jan 30 13:46:19.726998 systemd[1]: Starting ensure-sysext.service... Jan 30 13:46:19.729016 systemd[1]: Starting systemd-tmpfiles-setup.service - Create System Files and Directories... Jan 30 13:46:19.734840 systemd[1]: Reloading requested from client PID 1395 ('systemctl') (unit ensure-sysext.service)... Jan 30 13:46:19.734855 systemd[1]: Reloading... Jan 30 13:46:19.753242 systemd-tmpfiles[1396]: /usr/lib/tmpfiles.d/provision.conf:20: Duplicate line for path "/root", ignoring. Jan 30 13:46:19.753624 systemd-tmpfiles[1396]: /usr/lib/tmpfiles.d/systemd-flatcar.conf:6: Duplicate line for path "/var/log/journal", ignoring. Jan 30 13:46:19.754616 systemd-tmpfiles[1396]: /usr/lib/tmpfiles.d/systemd.conf:29: Duplicate line for path "/var/lib/systemd", ignoring. Jan 30 13:46:19.754938 systemd-tmpfiles[1396]: ACLs are not supported, ignoring. Jan 30 13:46:19.755021 systemd-tmpfiles[1396]: ACLs are not supported, ignoring. Jan 30 13:46:19.758308 systemd-tmpfiles[1396]: Detected autofs mount point /boot during canonicalization of boot. Jan 30 13:46:19.758320 systemd-tmpfiles[1396]: Skipping /boot Jan 30 13:46:19.768692 systemd-tmpfiles[1396]: Detected autofs mount point /boot during canonicalization of boot. Jan 30 13:46:19.768707 systemd-tmpfiles[1396]: Skipping /boot Jan 30 13:46:19.790876 zram_generator::config[1428]: No configuration found. Jan 30 13:46:19.899951 systemd[1]: /usr/lib/systemd/system/docker.socket:6: ListenStream= references a path below legacy directory /var/run/, updating /var/run/docker.sock → /run/docker.sock; please update the unit file accordingly. Jan 30 13:46:19.964663 systemd[1]: Reloading finished in 229 ms. 
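The sysext merge above picked up the containerd-flatcar, docker-flatcar and kubernetes images, the last one reachable through the /etc/extensions/kubernetes.raw symlink written during the Ignition files stage. A much-simplified sketch of that discovery step (the real systemd-sysext also validates extension-release metadata and covers additional hierarchies; the directory list here is an assumption):

from pathlib import Path

# Assumed search locations for sysext images; the kubernetes.raw symlink
# from the Ignition files stage lives under /etc/extensions.
SEARCH_DIRS = ["/etc/extensions", "/run/extensions", "/var/lib/extensions"]

def candidate_extensions():
    for d in SEARCH_DIRS:
        for img in sorted(Path(d).glob("*.raw")):
            # Resolve symlinks such as /etc/extensions/kubernetes.raw -> /opt/extensions/...
            yield img.stem, img.resolve()

if __name__ == "__main__":
    for name, path in candidate_extensions():
        print(f"{name}: {path}")
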
Jan 30 13:46:19.986489 systemd[1]: Finished systemd-tmpfiles-setup.service - Create System Files and Directories. Jan 30 13:46:20.004625 systemd[1]: Starting audit-rules.service - Load Security Auditing Rules... Jan 30 13:46:20.007181 systemd[1]: Starting clean-ca-certificates.service - Clean up broken links in /etc/ssl/certs... Jan 30 13:46:20.011117 systemd[1]: Starting systemd-journal-catalog-update.service - Rebuild Journal Catalog... Jan 30 13:46:20.015023 systemd[1]: Starting systemd-resolved.service - Network Name Resolution... Jan 30 13:46:20.018930 systemd[1]: Starting systemd-update-utmp.service - Record System Boot/Shutdown in UTMP... Jan 30 13:46:20.026461 systemd[1]: proc-xen.mount - /proc/xen was skipped because of an unmet condition check (ConditionVirtualization=xen). Jan 30 13:46:20.026626 systemd[1]: ignition-delete-config.service - Ignition (delete config) was skipped because no trigger condition checks were met. Jan 30 13:46:20.029338 systemd[1]: Starting modprobe@dm_mod.service - Load Kernel Module dm_mod... Jan 30 13:46:20.034510 systemd[1]: Starting modprobe@efi_pstore.service - Load Kernel Module efi_pstore... Jan 30 13:46:20.041073 systemd[1]: Starting modprobe@loop.service - Load Kernel Module loop... Jan 30 13:46:20.044030 systemd[1]: systemd-binfmt.service - Set Up Additional Binary Formats was skipped because no trigger condition checks were met. Jan 30 13:46:20.044136 systemd[1]: xenserver-pv-version.service - Set fake PV driver version for XenServer was skipped because of an unmet condition check (ConditionVirtualization=xen). Jan 30 13:46:20.046251 systemd[1]: modprobe@dm_mod.service: Deactivated successfully. Jan 30 13:46:20.046484 systemd[1]: Finished modprobe@dm_mod.service - Load Kernel Module dm_mod. Jan 30 13:46:20.054361 systemd[1]: Finished systemd-journal-catalog-update.service - Rebuild Journal Catalog. Jan 30 13:46:20.056494 systemd[1]: modprobe@efi_pstore.service: Deactivated successfully. Jan 30 13:46:20.056729 systemd[1]: Finished modprobe@efi_pstore.service - Load Kernel Module efi_pstore. Jan 30 13:46:20.060146 augenrules[1497]: No rules Jan 30 13:46:20.062174 systemd[1]: Finished audit-rules.service - Load Security Auditing Rules. Jan 30 13:46:20.064090 systemd[1]: modprobe@loop.service: Deactivated successfully. Jan 30 13:46:20.064334 systemd[1]: Finished modprobe@loop.service - Load Kernel Module loop. Jan 30 13:46:20.070392 systemd[1]: Finished systemd-update-utmp.service - Record System Boot/Shutdown in UTMP. Jan 30 13:46:20.075068 systemd[1]: proc-xen.mount - /proc/xen was skipped because of an unmet condition check (ConditionVirtualization=xen). Jan 30 13:46:20.075251 systemd[1]: ignition-delete-config.service - Ignition (delete config) was skipped because no trigger condition checks were met. Jan 30 13:46:20.088104 systemd[1]: Starting modprobe@dm_mod.service - Load Kernel Module dm_mod... Jan 30 13:46:20.091712 systemd[1]: Starting modprobe@efi_pstore.service - Load Kernel Module efi_pstore... Jan 30 13:46:20.093827 systemd-resolved[1475]: Positive Trust Anchors: Jan 30 13:46:20.093843 systemd-resolved[1475]: . 
IN DS 20326 8 2 e06d44b80b8f1d39a95c0b0d7c65d08458e880409bbc683457104237c7f8ec8d Jan 30 13:46:20.093873 systemd-resolved[1475]: Negative trust anchors: home.arpa 10.in-addr.arpa 16.172.in-addr.arpa 17.172.in-addr.arpa 18.172.in-addr.arpa 19.172.in-addr.arpa 20.172.in-addr.arpa 21.172.in-addr.arpa 22.172.in-addr.arpa 23.172.in-addr.arpa 24.172.in-addr.arpa 25.172.in-addr.arpa 26.172.in-addr.arpa 27.172.in-addr.arpa 28.172.in-addr.arpa 29.172.in-addr.arpa 30.172.in-addr.arpa 31.172.in-addr.arpa 170.0.0.192.in-addr.arpa 171.0.0.192.in-addr.arpa 168.192.in-addr.arpa d.f.ip6.arpa ipv4only.arpa resolver.arpa corp home internal intranet lan local private test Jan 30 13:46:20.094984 systemd[1]: Starting modprobe@loop.service - Load Kernel Module loop... Jan 30 13:46:20.096164 systemd[1]: systemd-binfmt.service - Set Up Additional Binary Formats was skipped because no trigger condition checks were met. Jan 30 13:46:20.098609 systemd-resolved[1475]: Defaulting to hostname 'linux'. Jan 30 13:46:20.099899 systemd[1]: Starting systemd-update-done.service - Update is Completed... Jan 30 13:46:20.100961 systemd[1]: xenserver-pv-version.service - Set fake PV driver version for XenServer was skipped because of an unmet condition check (ConditionVirtualization=xen). Jan 30 13:46:20.102204 systemd[1]: Started systemd-resolved.service - Network Name Resolution. Jan 30 13:46:20.104137 systemd[1]: Finished clean-ca-certificates.service - Clean up broken links in /etc/ssl/certs. Jan 30 13:46:20.105952 systemd[1]: modprobe@dm_mod.service: Deactivated successfully. Jan 30 13:46:20.106168 systemd[1]: Finished modprobe@dm_mod.service - Load Kernel Module dm_mod. Jan 30 13:46:20.107855 systemd[1]: modprobe@efi_pstore.service: Deactivated successfully. Jan 30 13:46:20.108068 systemd[1]: Finished modprobe@efi_pstore.service - Load Kernel Module efi_pstore. Jan 30 13:46:20.109951 systemd[1]: modprobe@loop.service: Deactivated successfully. Jan 30 13:46:20.110184 systemd[1]: Finished modprobe@loop.service - Load Kernel Module loop. Jan 30 13:46:20.114338 systemd[1]: Finished systemd-update-done.service - Update is Completed. Jan 30 13:46:20.120336 systemd[1]: Reached target network.target - Network. Jan 30 13:46:20.121386 systemd[1]: Reached target nss-lookup.target - Host and Network Name Lookups. Jan 30 13:46:20.122721 systemd[1]: proc-xen.mount - /proc/xen was skipped because of an unmet condition check (ConditionVirtualization=xen). Jan 30 13:46:20.122962 systemd[1]: ignition-delete-config.service - Ignition (delete config) was skipped because no trigger condition checks were met. Jan 30 13:46:20.136086 systemd[1]: Starting modprobe@dm_mod.service - Load Kernel Module dm_mod... Jan 30 13:46:20.138427 systemd[1]: Starting modprobe@drm.service - Load Kernel Module drm... Jan 30 13:46:20.140411 systemd[1]: Starting modprobe@efi_pstore.service - Load Kernel Module efi_pstore... Jan 30 13:46:20.142556 systemd[1]: Starting modprobe@loop.service - Load Kernel Module loop... Jan 30 13:46:20.143788 systemd[1]: systemd-binfmt.service - Set Up Additional Binary Formats was skipped because no trigger condition checks were met. Jan 30 13:46:20.144008 systemd[1]: update-ca-certificates.service - Update CA bundle at /etc/ssl/certs/ca-certificates.crt was skipped because of an unmet condition check (ConditionPathIsSymbolicLink=!/etc/ssl/certs/ca-certificates.crt). 
Jan 30 13:46:20.144229 systemd[1]: xenserver-pv-version.service - Set fake PV driver version for XenServer was skipped because of an unmet condition check (ConditionVirtualization=xen). Jan 30 13:46:20.145574 systemd[1]: modprobe@dm_mod.service: Deactivated successfully. Jan 30 13:46:20.145796 systemd[1]: Finished modprobe@dm_mod.service - Load Kernel Module dm_mod. Jan 30 13:46:20.147503 systemd[1]: modprobe@drm.service: Deactivated successfully. Jan 30 13:46:20.147725 systemd[1]: Finished modprobe@drm.service - Load Kernel Module drm. Jan 30 13:46:20.151223 systemd[1]: Finished ensure-sysext.service. Jan 30 13:46:20.152509 systemd[1]: modprobe@efi_pstore.service: Deactivated successfully. Jan 30 13:46:20.152732 systemd[1]: Finished modprobe@efi_pstore.service - Load Kernel Module efi_pstore. Jan 30 13:46:20.154455 systemd[1]: modprobe@loop.service: Deactivated successfully. Jan 30 13:46:20.154723 systemd[1]: Finished modprobe@loop.service - Load Kernel Module loop. Jan 30 13:46:20.161530 systemd[1]: systemd-pstore.service - Platform Persistent Storage Archival was skipped because of an unmet condition check (ConditionDirectoryNotEmpty=/sys/fs/pstore). Jan 30 13:46:20.161599 systemd[1]: systemd-repart.service - Repartition Root Disk was skipped because no trigger condition checks were met. Jan 30 13:46:20.169007 systemd[1]: Starting systemd-timesyncd.service - Network Time Synchronization... Jan 30 13:46:20.230623 systemd[1]: Started systemd-timesyncd.service - Network Time Synchronization. Jan 30 13:46:20.232249 systemd[1]: Reached target sysinit.target - System Initialization. Jan 30 13:46:20.233424 systemd-timesyncd[1542]: Contacted time server 10.0.0.1:123 (10.0.0.1). Jan 30 13:46:20.233453 systemd[1]: Started motdgen.path - Watch for update engine configuration changes. Jan 30 13:46:20.233471 systemd-timesyncd[1542]: Initial clock synchronization to Thu 2025-01-30 13:46:20.622369 UTC. Jan 30 13:46:20.234717 systemd[1]: Started user-cloudinit@var-lib-flatcar\x2dinstall-user_data.path - Watch for a cloud-config at /var/lib/flatcar-install/user_data. Jan 30 13:46:20.235999 systemd[1]: Started systemd-tmpfiles-clean.timer - Daily Cleanup of Temporary Directories. Jan 30 13:46:20.237267 systemd[1]: update-engine-stub.timer - Update Engine Stub Timer was skipped because of an unmet condition check (ConditionPathExists=/usr/.noupdate). Jan 30 13:46:20.237296 systemd[1]: Reached target paths.target - Path Units. Jan 30 13:46:20.238204 systemd[1]: Reached target time-set.target - System Time Set. Jan 30 13:46:20.239414 systemd[1]: Started logrotate.timer - Daily rotation of log files. Jan 30 13:46:20.240701 systemd[1]: Started mdadm.timer - Weekly check for MD array's redundancy information.. Jan 30 13:46:20.242007 systemd[1]: Reached target timers.target - Timer Units. Jan 30 13:46:20.243529 systemd[1]: Listening on dbus.socket - D-Bus System Message Bus Socket. Jan 30 13:46:20.246659 systemd[1]: Starting docker.socket - Docker Socket for the API... Jan 30 13:46:20.249306 systemd[1]: Listening on sshd.socket - OpenSSH Server Socket. Jan 30 13:46:20.265115 systemd[1]: Listening on docker.socket - Docker Socket for the API. Jan 30 13:46:20.266300 systemd[1]: Reached target sockets.target - Socket Units. Jan 30 13:46:20.267289 systemd[1]: Reached target basic.target - Basic System. Jan 30 13:46:20.268411 systemd[1]: System is tainted: cgroupsv1 Jan 30 13:46:20.268456 systemd[1]: addon-config@oem.service - Configure Addon /oem was skipped because no trigger condition checks were met. 
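The initial synchronization above steps the clock from the journal's own stamp of 13:46:20.233471 to 13:46:20.622369 UTC, i.e. forward by roughly 0.39 s (assuming the journal stamps are already in UTC):

from datetime import datetime

before = datetime.fromisoformat("2025-01-30 13:46:20.233471")  # journal stamp of the sync message
after = datetime.fromisoformat("2025-01-30 13:46:20.622369")   # time the clock was set to
print(f"clock stepped forward by ~{(after - before).total_seconds():.3f} s")  # ~0.389 s
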
Jan 30 13:46:20.268479 systemd[1]: addon-run@oem.service - Run Addon /oem was skipped because no trigger condition checks were met. Jan 30 13:46:20.269869 systemd[1]: Starting containerd.service - containerd container runtime... Jan 30 13:46:20.272095 systemd[1]: Starting dbus.service - D-Bus System Message Bus... Jan 30 13:46:20.274369 systemd[1]: Starting enable-oem-cloudinit.service - Enable cloudinit... Jan 30 13:46:20.277929 systemd[1]: Starting extend-filesystems.service - Extend Filesystems... Jan 30 13:46:20.278975 systemd[1]: flatcar-setup-environment.service - Modifies /etc/environment for CoreOS was skipped because of an unmet condition check (ConditionPathExists=/oem/bin/flatcar-setup-environment). Jan 30 13:46:20.281029 systemd[1]: Starting motdgen.service - Generate /run/flatcar/motd... Jan 30 13:46:20.286084 jq[1548]: false Jan 30 13:46:20.286478 systemd[1]: Starting ssh-key-proc-cmdline.service - Install an ssh key from /proc/cmdline... Jan 30 13:46:20.289995 systemd[1]: Starting sshd-keygen.service - Generate sshd host keys... Jan 30 13:46:20.296018 systemd[1]: Starting systemd-logind.service - User Login Management... Jan 30 13:46:20.299303 systemd[1]: tcsd.service - TCG Core Services Daemon was skipped because of an unmet condition check (ConditionPathExists=/dev/tpm0). Jan 30 13:46:20.302068 systemd[1]: Starting update-engine.service - Update Engine... Jan 30 13:46:20.305903 systemd[1]: Starting update-ssh-keys-after-ignition.service - Run update-ssh-keys once after Ignition... Jan 30 13:46:20.307429 extend-filesystems[1549]: Found loop3 Jan 30 13:46:20.309057 extend-filesystems[1549]: Found loop4 Jan 30 13:46:20.309057 extend-filesystems[1549]: Found loop5 Jan 30 13:46:20.309057 extend-filesystems[1549]: Found sr0 Jan 30 13:46:20.309057 extend-filesystems[1549]: Found vda Jan 30 13:46:20.309057 extend-filesystems[1549]: Found vda1 Jan 30 13:46:20.309057 extend-filesystems[1549]: Found vda2 Jan 30 13:46:20.309057 extend-filesystems[1549]: Found vda3 Jan 30 13:46:20.309057 extend-filesystems[1549]: Found usr Jan 30 13:46:20.309057 extend-filesystems[1549]: Found vda4 Jan 30 13:46:20.309057 extend-filesystems[1549]: Found vda6 Jan 30 13:46:20.309057 extend-filesystems[1549]: Found vda7 Jan 30 13:46:20.309057 extend-filesystems[1549]: Found vda9 Jan 30 13:46:20.309057 extend-filesystems[1549]: Checking size of /dev/vda9 Jan 30 13:46:20.329291 extend-filesystems[1549]: Resized partition /dev/vda9 Jan 30 13:46:20.316418 systemd[1]: Started dbus.service - D-Bus System Message Bus. Jan 30 13:46:20.313574 dbus-daemon[1547]: [system] SELinux support is enabled Jan 30 13:46:20.327349 systemd[1]: enable-oem-cloudinit.service: Skipped due to 'exec-condition'. Jan 30 13:46:20.330572 jq[1565]: true Jan 30 13:46:20.327680 systemd[1]: Condition check resulted in enable-oem-cloudinit.service - Enable cloudinit being skipped. Jan 30 13:46:20.330303 systemd[1]: motdgen.service: Deactivated successfully. Jan 30 13:46:20.330600 systemd[1]: Finished motdgen.service - Generate /run/flatcar/motd. Jan 30 13:46:20.332262 systemd[1]: ssh-key-proc-cmdline.service: Deactivated successfully. Jan 30 13:46:20.332560 systemd[1]: Finished ssh-key-proc-cmdline.service - Install an ssh key from /proc/cmdline. 
Jan 30 13:46:20.335710 extend-filesystems[1576]: resize2fs 1.47.1 (20-May-2024) Jan 30 13:46:20.343908 kernel: EXT4-fs (vda9): resizing filesystem from 553472 to 1864699 blocks Jan 30 13:46:20.343957 update_engine[1563]: I20250130 13:46:20.337052 1563 main.cc:92] Flatcar Update Engine starting Jan 30 13:46:20.343957 update_engine[1563]: I20250130 13:46:20.338380 1563 update_check_scheduler.cc:74] Next update check in 5m31s Jan 30 13:46:20.350597 kernel: BTRFS warning: duplicate device /dev/vda3 devid 1 generation 38 scanned by (udev-worker) (1268) Jan 30 13:46:20.350631 jq[1577]: true Jan 30 13:46:20.354257 (ntainerd)[1578]: containerd.service: Referenced but unset environment variable evaluates to an empty string: TORCX_IMAGEDIR, TORCX_UNPACKDIR Jan 30 13:46:20.369792 systemd[1]: system-cloudinit@usr-share-oem-cloud\x2dconfig.yml.service - Load cloud-config from /usr/share/oem/cloud-config.yml was skipped because of an unmet condition check (ConditionFileNotEmpty=/usr/share/oem/cloud-config.yml). Jan 30 13:46:20.369842 systemd[1]: Reached target system-config.target - Load system-provided cloud configs. Jan 30 13:46:20.371536 systemd[1]: user-cloudinit-proc-cmdline.service - Load cloud-config from url defined in /proc/cmdline was skipped because of an unmet condition check (ConditionKernelCommandLine=cloud-config-url). Jan 30 13:46:20.371560 systemd[1]: Reached target user-config.target - Load user-provided cloud configs. Jan 30 13:46:20.377123 systemd[1]: Started update-engine.service - Update Engine. Jan 30 13:46:20.379826 kernel: EXT4-fs (vda9): resized filesystem to 1864699 Jan 30 13:46:20.380642 systemd[1]: cgroup compatibility translation between legacy and unified hierarchy settings activated. See cgroup-compat debug messages for details. Jan 30 13:46:20.382780 systemd[1]: Started locksmithd.service - Cluster reboot manager. Jan 30 13:46:20.400702 systemd-logind[1559]: Watching system buttons on /dev/input/event1 (Power Button) Jan 30 13:46:20.400722 systemd-logind[1559]: Watching system buttons on /dev/input/event0 (AT Translated Set 2 keyboard) Jan 30 13:46:20.403375 systemd-logind[1559]: New seat seat0. Jan 30 13:46:20.405038 extend-filesystems[1576]: Filesystem at /dev/vda9 is mounted on /; on-line resizing required Jan 30 13:46:20.405038 extend-filesystems[1576]: old_desc_blocks = 1, new_desc_blocks = 1 Jan 30 13:46:20.405038 extend-filesystems[1576]: The filesystem on /dev/vda9 is now 1864699 (4k) blocks long. Jan 30 13:46:20.415962 extend-filesystems[1549]: Resized filesystem in /dev/vda9 Jan 30 13:46:20.405882 systemd[1]: extend-filesystems.service: Deactivated successfully. Jan 30 13:46:20.406220 systemd[1]: Finished extend-filesystems.service - Extend Filesystems. Jan 30 13:46:20.408791 systemd[1]: Started systemd-logind.service - User Login Management. Jan 30 13:46:20.428933 locksmithd[1590]: locksmithd starting currentOperation="UPDATE_STATUS_IDLE" strategy="reboot" Jan 30 13:46:20.432242 bash[1604]: Updated "/home/core/.ssh/authorized_keys" Jan 30 13:46:20.434408 systemd[1]: Finished update-ssh-keys-after-ignition.service - Run update-ssh-keys once after Ignition. Jan 30 13:46:20.436488 systemd[1]: sshkeys.service was skipped because no trigger condition checks were met. Jan 30 13:46:20.519629 sshd_keygen[1575]: ssh-keygen: generating new host keys: RSA ECDSA ED25519 Jan 30 13:46:20.543267 systemd[1]: Finished sshd-keygen.service - Generate sshd host keys. 
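
The resize above is an online grow of the mounted root filesystem: resize2fs 1.47.1 expands /dev/vda9 from 553472 to 1864699 4 KiB blocks (roughly 2.1 GiB to 7.1 GiB) while it is mounted on /. A minimal sketch of the same operation is below; it assumes the partition has already been grown, must run as root, and uses the device path taken from the log. With no explicit size argument, resize2fs grows the filesystem to fill the device.

```python
import subprocess

# Illustrative sketch of the online grow shown above. resize2fs can enlarge a
# mounted ext4 filesystem in place; requires root. The device path is the one
# from the log and would differ on other machines.
def grow_ext4(device: str = "/dev/vda9") -> None:
    subprocess.run(["resize2fs", device], check=True)

if __name__ == "__main__":
    grow_ext4()
```
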
Jan 30 13:46:20.548986 containerd[1578]: time="2025-01-30T13:46:20.548895497Z" level=info msg="starting containerd" revision=174e0d1785eeda18dc2beba45e1d5a188771636b version=v1.7.21 Jan 30 13:46:20.558223 systemd[1]: Starting issuegen.service - Generate /run/issue... Jan 30 13:46:20.566300 systemd[1]: issuegen.service: Deactivated successfully. Jan 30 13:46:20.566628 systemd[1]: Finished issuegen.service - Generate /run/issue. Jan 30 13:46:20.571178 containerd[1578]: time="2025-01-30T13:46:20.571128113Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.aufs\"..." type=io.containerd.snapshotter.v1 Jan 30 13:46:20.572663 containerd[1578]: time="2025-01-30T13:46:20.572619168Z" level=info msg="skip loading plugin \"io.containerd.snapshotter.v1.aufs\"..." error="aufs is not supported (modprobe aufs failed: exit status 1 \"modprobe: FATAL: Module aufs not found in directory /lib/modules/6.6.74-flatcar\\n\"): skip plugin" type=io.containerd.snapshotter.v1 Jan 30 13:46:20.572663 containerd[1578]: time="2025-01-30T13:46:20.572648093Z" level=info msg="loading plugin \"io.containerd.event.v1.exchange\"..." type=io.containerd.event.v1 Jan 30 13:46:20.572663 containerd[1578]: time="2025-01-30T13:46:20.572662470Z" level=info msg="loading plugin \"io.containerd.internal.v1.opt\"..." type=io.containerd.internal.v1 Jan 30 13:46:20.572906 containerd[1578]: time="2025-01-30T13:46:20.572884125Z" level=info msg="loading plugin \"io.containerd.warning.v1.deprecations\"..." type=io.containerd.warning.v1 Jan 30 13:46:20.572933 containerd[1578]: time="2025-01-30T13:46:20.572908230Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.blockfile\"..." type=io.containerd.snapshotter.v1 Jan 30 13:46:20.572995 containerd[1578]: time="2025-01-30T13:46:20.572976889Z" level=info msg="skip loading plugin \"io.containerd.snapshotter.v1.blockfile\"..." error="no scratch file generator: skip plugin" type=io.containerd.snapshotter.v1 Jan 30 13:46:20.573016 containerd[1578]: time="2025-01-30T13:46:20.572994011Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.btrfs\"..." type=io.containerd.snapshotter.v1 Jan 30 13:46:20.573013 systemd[1]: Starting systemd-user-sessions.service - Permit User Sessions... Jan 30 13:46:20.574303 containerd[1578]: time="2025-01-30T13:46:20.573259880Z" level=info msg="skip loading plugin \"io.containerd.snapshotter.v1.btrfs\"..." error="path /var/lib/containerd/io.containerd.snapshotter.v1.btrfs (ext4) must be a btrfs filesystem to be used with the btrfs snapshotter: skip plugin" type=io.containerd.snapshotter.v1 Jan 30 13:46:20.574303 containerd[1578]: time="2025-01-30T13:46:20.573278074Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.devmapper\"..." type=io.containerd.snapshotter.v1 Jan 30 13:46:20.574303 containerd[1578]: time="2025-01-30T13:46:20.573294455Z" level=info msg="skip loading plugin \"io.containerd.snapshotter.v1.devmapper\"..." error="devmapper not configured: skip plugin" type=io.containerd.snapshotter.v1 Jan 30 13:46:20.574303 containerd[1578]: time="2025-01-30T13:46:20.573305195Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.native\"..." type=io.containerd.snapshotter.v1 Jan 30 13:46:20.574303 containerd[1578]: time="2025-01-30T13:46:20.573397207Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.overlayfs\"..." type=io.containerd.snapshotter.v1 Jan 30 13:46:20.574303 containerd[1578]: time="2025-01-30T13:46:20.573623201Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.zfs\"..." 
type=io.containerd.snapshotter.v1 Jan 30 13:46:20.574303 containerd[1578]: time="2025-01-30T13:46:20.573839707Z" level=info msg="skip loading plugin \"io.containerd.snapshotter.v1.zfs\"..." error="path /var/lib/containerd/io.containerd.snapshotter.v1.zfs must be a zfs filesystem to be used with the zfs snapshotter: skip plugin" type=io.containerd.snapshotter.v1 Jan 30 13:46:20.574303 containerd[1578]: time="2025-01-30T13:46:20.573856909Z" level=info msg="loading plugin \"io.containerd.content.v1.content\"..." type=io.containerd.content.v1 Jan 30 13:46:20.574303 containerd[1578]: time="2025-01-30T13:46:20.573974860Z" level=info msg="loading plugin \"io.containerd.metadata.v1.bolt\"..." type=io.containerd.metadata.v1 Jan 30 13:46:20.574303 containerd[1578]: time="2025-01-30T13:46:20.574028621Z" level=info msg="metadata content store policy set" policy=shared Jan 30 13:46:20.582070 containerd[1578]: time="2025-01-30T13:46:20.582035314Z" level=info msg="loading plugin \"io.containerd.gc.v1.scheduler\"..." type=io.containerd.gc.v1 Jan 30 13:46:20.582150 containerd[1578]: time="2025-01-30T13:46:20.582099654Z" level=info msg="loading plugin \"io.containerd.differ.v1.walking\"..." type=io.containerd.differ.v1 Jan 30 13:46:20.582150 containerd[1578]: time="2025-01-30T13:46:20.582118380Z" level=info msg="loading plugin \"io.containerd.lease.v1.manager\"..." type=io.containerd.lease.v1 Jan 30 13:46:20.582150 containerd[1578]: time="2025-01-30T13:46:20.582135231Z" level=info msg="loading plugin \"io.containerd.streaming.v1.manager\"..." type=io.containerd.streaming.v1 Jan 30 13:46:20.582205 containerd[1578]: time="2025-01-30T13:46:20.582150981Z" level=info msg="loading plugin \"io.containerd.runtime.v1.linux\"..." type=io.containerd.runtime.v1 Jan 30 13:46:20.582341 containerd[1578]: time="2025-01-30T13:46:20.582320218Z" level=info msg="loading plugin \"io.containerd.monitor.v1.cgroups\"..." type=io.containerd.monitor.v1 Jan 30 13:46:20.583074 containerd[1578]: time="2025-01-30T13:46:20.583039857Z" level=info msg="loading plugin \"io.containerd.runtime.v2.task\"..." type=io.containerd.runtime.v2 Jan 30 13:46:20.583227 containerd[1578]: time="2025-01-30T13:46:20.583183006Z" level=info msg="loading plugin \"io.containerd.runtime.v2.shim\"..." type=io.containerd.runtime.v2 Jan 30 13:46:20.583227 containerd[1578]: time="2025-01-30T13:46:20.583206079Z" level=info msg="loading plugin \"io.containerd.sandbox.store.v1.local\"..." type=io.containerd.sandbox.store.v1 Jan 30 13:46:20.583275 containerd[1578]: time="2025-01-30T13:46:20.583227299Z" level=info msg="loading plugin \"io.containerd.sandbox.controller.v1.local\"..." type=io.containerd.sandbox.controller.v1 Jan 30 13:46:20.583275 containerd[1578]: time="2025-01-30T13:46:20.583254931Z" level=info msg="loading plugin \"io.containerd.service.v1.containers-service\"..." type=io.containerd.service.v1 Jan 30 13:46:20.583326 containerd[1578]: time="2025-01-30T13:46:20.583273395Z" level=info msg="loading plugin \"io.containerd.service.v1.content-service\"..." type=io.containerd.service.v1 Jan 30 13:46:20.583326 containerd[1578]: time="2025-01-30T13:46:20.583290147Z" level=info msg="loading plugin \"io.containerd.service.v1.diff-service\"..." type=io.containerd.service.v1 Jan 30 13:46:20.583326 containerd[1578]: time="2025-01-30T13:46:20.583310144Z" level=info msg="loading plugin \"io.containerd.service.v1.images-service\"..." 
type=io.containerd.service.v1 Jan 30 13:46:20.583386 containerd[1578]: time="2025-01-30T13:46:20.583328128Z" level=info msg="loading plugin \"io.containerd.service.v1.introspection-service\"..." type=io.containerd.service.v1 Jan 30 13:46:20.583386 containerd[1578]: time="2025-01-30T13:46:20.583344699Z" level=info msg="loading plugin \"io.containerd.service.v1.namespaces-service\"..." type=io.containerd.service.v1 Jan 30 13:46:20.583386 containerd[1578]: time="2025-01-30T13:46:20.583362112Z" level=info msg="loading plugin \"io.containerd.service.v1.snapshots-service\"..." type=io.containerd.service.v1 Jan 30 13:46:20.583386 containerd[1578]: time="2025-01-30T13:46:20.583378693Z" level=info msg="loading plugin \"io.containerd.service.v1.tasks-service\"..." type=io.containerd.service.v1 Jan 30 13:46:20.583463 containerd[1578]: time="2025-01-30T13:46:20.583404912Z" level=info msg="loading plugin \"io.containerd.grpc.v1.containers\"..." type=io.containerd.grpc.v1 Jan 30 13:46:20.583463 containerd[1578]: time="2025-01-30T13:46:20.583426061Z" level=info msg="loading plugin \"io.containerd.grpc.v1.content\"..." type=io.containerd.grpc.v1 Jan 30 13:46:20.583463 containerd[1578]: time="2025-01-30T13:46:20.583443975Z" level=info msg="loading plugin \"io.containerd.grpc.v1.diff\"..." type=io.containerd.grpc.v1 Jan 30 13:46:20.583463 containerd[1578]: time="2025-01-30T13:46:20.583461418Z" level=info msg="loading plugin \"io.containerd.grpc.v1.events\"..." type=io.containerd.grpc.v1 Jan 30 13:46:20.583553 containerd[1578]: time="2025-01-30T13:46:20.583477017Z" level=info msg="loading plugin \"io.containerd.grpc.v1.images\"..." type=io.containerd.grpc.v1 Jan 30 13:46:20.583553 containerd[1578]: time="2025-01-30T13:46:20.583496063Z" level=info msg="loading plugin \"io.containerd.grpc.v1.introspection\"..." type=io.containerd.grpc.v1 Jan 30 13:46:20.583553 containerd[1578]: time="2025-01-30T13:46:20.583525718Z" level=info msg="loading plugin \"io.containerd.grpc.v1.leases\"..." type=io.containerd.grpc.v1 Jan 30 13:46:20.583553 containerd[1578]: time="2025-01-30T13:46:20.583544564Z" level=info msg="loading plugin \"io.containerd.grpc.v1.namespaces\"..." type=io.containerd.grpc.v1 Jan 30 13:46:20.583622 containerd[1578]: time="2025-01-30T13:46:20.583565022Z" level=info msg="loading plugin \"io.containerd.grpc.v1.sandbox-controllers\"..." type=io.containerd.grpc.v1 Jan 30 13:46:20.583622 containerd[1578]: time="2025-01-30T13:46:20.583604867Z" level=info msg="loading plugin \"io.containerd.grpc.v1.sandboxes\"..." type=io.containerd.grpc.v1 Jan 30 13:46:20.583664 containerd[1578]: time="2025-01-30T13:46:20.583624173Z" level=info msg="loading plugin \"io.containerd.grpc.v1.snapshots\"..." type=io.containerd.grpc.v1 Jan 30 13:46:20.583713 containerd[1578]: time="2025-01-30T13:46:20.583674357Z" level=info msg="loading plugin \"io.containerd.grpc.v1.streaming\"..." type=io.containerd.grpc.v1 Jan 30 13:46:20.583765 containerd[1578]: time="2025-01-30T13:46:20.583746462Z" level=info msg="loading plugin \"io.containerd.grpc.v1.tasks\"..." type=io.containerd.grpc.v1 Jan 30 13:46:20.583801 containerd[1578]: time="2025-01-30T13:46:20.583776689Z" level=info msg="loading plugin \"io.containerd.transfer.v1.local\"..." type=io.containerd.transfer.v1 Jan 30 13:46:20.583873 containerd[1578]: time="2025-01-30T13:46:20.583851780Z" level=info msg="loading plugin \"io.containerd.grpc.v1.transfer\"..." 
type=io.containerd.grpc.v1 Jan 30 13:46:20.583896 containerd[1578]: time="2025-01-30T13:46:20.583873911Z" level=info msg="loading plugin \"io.containerd.grpc.v1.version\"..." type=io.containerd.grpc.v1 Jan 30 13:46:20.583926 containerd[1578]: time="2025-01-30T13:46:20.583887767Z" level=info msg="loading plugin \"io.containerd.internal.v1.restart\"..." type=io.containerd.internal.v1 Jan 30 13:46:20.583991 containerd[1578]: time="2025-01-30T13:46:20.583971184Z" level=info msg="loading plugin \"io.containerd.tracing.processor.v1.otlp\"..." type=io.containerd.tracing.processor.v1 Jan 30 13:46:20.584013 containerd[1578]: time="2025-01-30T13:46:20.583999136Z" level=info msg="skip loading plugin \"io.containerd.tracing.processor.v1.otlp\"..." error="skip plugin: tracing endpoint not configured" type=io.containerd.tracing.processor.v1 Jan 30 13:46:20.584035 containerd[1578]: time="2025-01-30T13:46:20.584018643Z" level=info msg="loading plugin \"io.containerd.internal.v1.tracing\"..." type=io.containerd.internal.v1 Jan 30 13:46:20.584054 containerd[1578]: time="2025-01-30T13:46:20.584040904Z" level=info msg="skip loading plugin \"io.containerd.internal.v1.tracing\"..." error="skip plugin: tracing endpoint not configured" type=io.containerd.internal.v1 Jan 30 13:46:20.584075 containerd[1578]: time="2025-01-30T13:46:20.584051694Z" level=info msg="loading plugin \"io.containerd.grpc.v1.healthcheck\"..." type=io.containerd.grpc.v1 Jan 30 13:46:20.584075 containerd[1578]: time="2025-01-30T13:46:20.584067574Z" level=info msg="loading plugin \"io.containerd.nri.v1.nri\"..." type=io.containerd.nri.v1 Jan 30 13:46:20.584130 containerd[1578]: time="2025-01-30T13:46:20.584096138Z" level=info msg="NRI interface is disabled by configuration." Jan 30 13:46:20.584130 containerd[1578]: time="2025-01-30T13:46:20.584112138Z" level=info msg="loading plugin \"io.containerd.grpc.v1.cri\"..." 
type=io.containerd.grpc.v1 Jan 30 13:46:20.584761 containerd[1578]: time="2025-01-30T13:46:20.584652661Z" level=info msg="Start cri plugin with config {PluginConfig:{ContainerdConfig:{Snapshotter:overlayfs DefaultRuntimeName:runc DefaultRuntime:{Type: Path: Engine: PodAnnotations:[] ContainerAnnotations:[] Root: Options:map[] PrivilegedWithoutHostDevices:false PrivilegedWithoutHostDevicesAllDevicesAllowed:false BaseRuntimeSpec: NetworkPluginConfDir: NetworkPluginMaxConfNum:0 Snapshotter: SandboxMode:} UntrustedWorkloadRuntime:{Type: Path: Engine: PodAnnotations:[] ContainerAnnotations:[] Root: Options:map[] PrivilegedWithoutHostDevices:false PrivilegedWithoutHostDevicesAllDevicesAllowed:false BaseRuntimeSpec: NetworkPluginConfDir: NetworkPluginMaxConfNum:0 Snapshotter: SandboxMode:} Runtimes:map[runc:{Type:io.containerd.runc.v2 Path: Engine: PodAnnotations:[] ContainerAnnotations:[] Root: Options:map[SystemdCgroup:false] PrivilegedWithoutHostDevices:false PrivilegedWithoutHostDevicesAllDevicesAllowed:false BaseRuntimeSpec: NetworkPluginConfDir: NetworkPluginMaxConfNum:0 Snapshotter: SandboxMode:podsandbox}] NoPivot:false DisableSnapshotAnnotations:true DiscardUnpackedLayers:false IgnoreBlockIONotEnabledErrors:false IgnoreRdtNotEnabledErrors:false} CniConfig:{NetworkPluginBinDir:/opt/cni/bin NetworkPluginConfDir:/etc/cni/net.d NetworkPluginMaxConfNum:1 NetworkPluginSetupSerially:false NetworkPluginConfTemplate: IPPreference:} Registry:{ConfigPath: Mirrors:map[] Configs:map[] Auths:map[] Headers:map[]} ImageDecryption:{KeyModel:node} DisableTCPService:true StreamServerAddress:127.0.0.1 StreamServerPort:0 StreamIdleTimeout:4h0m0s EnableSelinux:false SelinuxCategoryRange:1024 SandboxImage:registry.k8s.io/pause:3.8 StatsCollectPeriod:10 SystemdCgroup:false EnableTLSStreaming:false X509KeyPairStreaming:{TLSCertFile: TLSKeyFile:} MaxContainerLogLineSize:16384 DisableCgroup:false DisableApparmor:false RestrictOOMScoreAdj:false MaxConcurrentDownloads:3 DisableProcMount:false UnsetSeccompProfile: TolerateMissingHugetlbController:true DisableHugetlbController:true DeviceOwnershipFromSecurityContext:false IgnoreImageDefinedVolumes:false NetNSMountsUnderStateDir:false EnableUnprivilegedPorts:false EnableUnprivilegedICMP:false EnableCDI:false CDISpecDirs:[/etc/cdi /var/run/cdi] ImagePullProgressTimeout:5m0s DrainExecSyncIOTimeout:0s ImagePullWithSyncFs:false IgnoreDeprecationWarnings:[]} ContainerdRootDir:/var/lib/containerd ContainerdEndpoint:/run/containerd/containerd.sock RootDir:/var/lib/containerd/io.containerd.grpc.v1.cri StateDir:/run/containerd/io.containerd.grpc.v1.cri}" Jan 30 13:46:20.584761 containerd[1578]: time="2025-01-30T13:46:20.584745145Z" level=info msg="Connect containerd service" Jan 30 13:46:20.584994 containerd[1578]: time="2025-01-30T13:46:20.584800819Z" level=info msg="using legacy CRI server" Jan 30 13:46:20.584994 containerd[1578]: time="2025-01-30T13:46:20.584826948Z" level=info msg="using experimental NRI integration - disable nri plugin to prevent this" Jan 30 13:46:20.584994 containerd[1578]: time="2025-01-30T13:46:20.584938878Z" level=info msg="Get image filesystem path \"/var/lib/containerd/io.containerd.snapshotter.v1.overlayfs\"" Jan 30 13:46:20.585601 containerd[1578]: time="2025-01-30T13:46:20.585576714Z" level=error msg="failed to load cni during init, please check CRI plugin status before setting up network for pods" error="cni config load failed: no network config found in /etc/cni/net.d: cni plugin not initialized: failed to load cni config" Jan 30 
13:46:20.585769 containerd[1578]: time="2025-01-30T13:46:20.585734760Z" level=info msg="Start subscribing containerd event" Jan 30 13:46:20.585837 containerd[1578]: time="2025-01-30T13:46:20.585787760Z" level=info msg="Start recovering state" Jan 30 13:46:20.585920 containerd[1578]: time="2025-01-30T13:46:20.585886875Z" level=info msg="Start event monitor" Jan 30 13:46:20.585920 containerd[1578]: time="2025-01-30T13:46:20.585905661Z" level=info msg="Start snapshots syncer" Jan 30 13:46:20.585920 containerd[1578]: time="2025-01-30T13:46:20.585915339Z" level=info msg="Start cni network conf syncer for default" Jan 30 13:46:20.586480 containerd[1578]: time="2025-01-30T13:46:20.585923594Z" level=info msg="Start streaming server" Jan 30 13:46:20.586480 containerd[1578]: time="2025-01-30T13:46:20.585979599Z" level=info msg=serving... address=/run/containerd/containerd.sock.ttrpc Jan 30 13:46:20.586480 containerd[1578]: time="2025-01-30T13:46:20.586031106Z" level=info msg=serving... address=/run/containerd/containerd.sock Jan 30 13:46:20.586480 containerd[1578]: time="2025-01-30T13:46:20.586090237Z" level=info msg="containerd successfully booted in 0.038536s" Jan 30 13:46:20.586185 systemd[1]: Started containerd.service - containerd container runtime. Jan 30 13:46:20.587830 systemd[1]: Finished systemd-user-sessions.service - Permit User Sessions. Jan 30 13:46:20.598134 systemd[1]: Started getty@tty1.service - Getty on tty1. Jan 30 13:46:20.600386 systemd[1]: Started serial-getty@ttyS0.service - Serial Getty on ttyS0. Jan 30 13:46:20.601688 systemd[1]: Reached target getty.target - Login Prompts. Jan 30 13:46:20.794065 systemd-networkd[1258]: eth0: Gained IPv6LL Jan 30 13:46:20.797272 systemd[1]: Finished systemd-networkd-wait-online.service - Wait for Network to be Configured. Jan 30 13:46:20.799028 systemd[1]: Reached target network-online.target - Network is Online. Jan 30 13:46:20.812991 systemd[1]: Starting coreos-metadata.service - QEMU metadata agent... Jan 30 13:46:20.816295 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent... Jan 30 13:46:20.818890 systemd[1]: Starting nvidia.service - NVIDIA Configure Service... Jan 30 13:46:20.839466 systemd[1]: coreos-metadata.service: Deactivated successfully. Jan 30 13:46:20.839948 systemd[1]: Finished coreos-metadata.service - QEMU metadata agent. Jan 30 13:46:20.841863 systemd[1]: packet-phone-home.service - Report Success to Packet was skipped because no trigger condition checks were met. Jan 30 13:46:20.842438 systemd[1]: Finished nvidia.service - NVIDIA Configure Service. Jan 30 13:46:21.452613 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent. Jan 30 13:46:21.454362 systemd[1]: Reached target multi-user.target - Multi-User System. Jan 30 13:46:21.456829 systemd[1]: Startup finished in 5.459s (kernel) + 4.026s (userspace) = 9.486s. 
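
At this point containerd reports serving on /run/containerd/containerd.sock (and its ttrpc counterpart), which is what the kubelet and other CRI clients connect to. Below is a hedged readiness sketch that only verifies something is accepting connections on that unix socket; a real client would speak GRPC/ttrpc over it rather than just connecting.

```python
import socket

# Minimal readiness probe for the socket containerd reports serving on above.
# Connect-only check; does not issue any CRI/containerd API calls.
def containerd_socket_ready(path: str = "/run/containerd/containerd.sock",
                            timeout: float = 1.0) -> bool:
    s = socket.socket(socket.AF_UNIX, socket.SOCK_STREAM)
    s.settimeout(timeout)
    try:
        s.connect(path)
        return True
    except OSError:
        return False
    finally:
        s.close()

if __name__ == "__main__":
    print("containerd socket ready:", containerd_socket_ready())
```
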
Jan 30 13:46:21.458071 (kubelet)[1673]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS, KUBELET_KUBEADM_ARGS Jan 30 13:46:21.934240 kubelet[1673]: E0130 13:46:21.934100 1673 run.go:74] "command failed" err="failed to load kubelet config file, path: /var/lib/kubelet/config.yaml, error: failed to load Kubelet config file /var/lib/kubelet/config.yaml, error failed to read kubelet config file \"/var/lib/kubelet/config.yaml\", error: open /var/lib/kubelet/config.yaml: no such file or directory" Jan 30 13:46:21.938475 systemd[1]: kubelet.service: Main process exited, code=exited, status=1/FAILURE Jan 30 13:46:21.938777 systemd[1]: kubelet.service: Failed with result 'exit-code'. Jan 30 13:46:30.206327 systemd[1]: Created slice system-sshd.slice - Slice /system/sshd. Jan 30 13:46:30.219079 systemd[1]: Started sshd@0-10.0.0.114:22-10.0.0.1:41890.service - OpenSSH per-connection server daemon (10.0.0.1:41890). Jan 30 13:46:30.264840 sshd[1687]: Accepted publickey for core from 10.0.0.1 port 41890 ssh2: RSA SHA256:OHll324lP2j8WlAQ+hjipts1Kkp63o3797QuaW9/4NE Jan 30 13:46:30.266981 sshd[1687]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Jan 30 13:46:30.277030 systemd-logind[1559]: New session 1 of user core. Jan 30 13:46:30.278129 systemd[1]: Created slice user-500.slice - User Slice of UID 500. Jan 30 13:46:30.290039 systemd[1]: Starting user-runtime-dir@500.service - User Runtime Directory /run/user/500... Jan 30 13:46:30.303200 systemd[1]: Finished user-runtime-dir@500.service - User Runtime Directory /run/user/500. Jan 30 13:46:30.310165 systemd[1]: Starting user@500.service - User Manager for UID 500... Jan 30 13:46:30.314375 (systemd)[1692]: pam_unix(systemd-user:session): session opened for user core(uid=500) by (uid=0) Jan 30 13:46:30.424486 systemd[1692]: Queued start job for default target default.target. Jan 30 13:46:30.424913 systemd[1692]: Created slice app.slice - User Application Slice. Jan 30 13:46:30.424937 systemd[1692]: Reached target paths.target - Paths. Jan 30 13:46:30.424950 systemd[1692]: Reached target timers.target - Timers. Jan 30 13:46:30.438895 systemd[1692]: Starting dbus.socket - D-Bus User Message Bus Socket... Jan 30 13:46:30.446922 systemd[1692]: Listening on dbus.socket - D-Bus User Message Bus Socket. Jan 30 13:46:30.446989 systemd[1692]: Reached target sockets.target - Sockets. Jan 30 13:46:30.447001 systemd[1692]: Reached target basic.target - Basic System. Jan 30 13:46:30.447038 systemd[1692]: Reached target default.target - Main User Target. Jan 30 13:46:30.447069 systemd[1692]: Startup finished in 126ms. Jan 30 13:46:30.447760 systemd[1]: Started user@500.service - User Manager for UID 500. Jan 30 13:46:30.449288 systemd[1]: Started session-1.scope - Session 1 of User core. Jan 30 13:46:30.511093 systemd[1]: Started sshd@1-10.0.0.114:22-10.0.0.1:41900.service - OpenSSH per-connection server daemon (10.0.0.1:41900). Jan 30 13:46:30.543966 sshd[1705]: Accepted publickey for core from 10.0.0.1 port 41900 ssh2: RSA SHA256:OHll324lP2j8WlAQ+hjipts1Kkp63o3797QuaW9/4NE Jan 30 13:46:30.545559 sshd[1705]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Jan 30 13:46:30.549929 systemd-logind[1559]: New session 2 of user core. Jan 30 13:46:30.564176 systemd[1]: Started session-2.scope - Session 2 of User core. 
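
The kubelet failure above is expected on first boot: the unit exits with status 1 because /var/lib/kubelet/config.yaml does not exist yet; it is normally written later when the node is bootstrapped (for example by kubeadm). A hypothetical pre-flight check for that condition is sketched below, assuming the standard config path shown in the error.

```python
from pathlib import Path
import sys

# Hypothetical pre-flight check mirroring the kubelet error above: verify the
# config file exists before (re)starting the unit. Path taken from the log.
KUBELET_CONFIG = Path("/var/lib/kubelet/config.yaml")

def main() -> int:
    if not KUBELET_CONFIG.is_file():
        print(f"kubelet config missing: {KUBELET_CONFIG}; "
              "expected to appear once the node is bootstrapped", file=sys.stderr)
        return 1
    print("kubelet config present")
    return 0

if __name__ == "__main__":
    sys.exit(main())
```
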
Jan 30 13:46:30.621435 sshd[1705]: pam_unix(sshd:session): session closed for user core Jan 30 13:46:30.637040 systemd[1]: Started sshd@2-10.0.0.114:22-10.0.0.1:41910.service - OpenSSH per-connection server daemon (10.0.0.1:41910). Jan 30 13:46:30.637502 systemd[1]: sshd@1-10.0.0.114:22-10.0.0.1:41900.service: Deactivated successfully. Jan 30 13:46:30.639648 systemd-logind[1559]: Session 2 logged out. Waiting for processes to exit. Jan 30 13:46:30.640880 systemd[1]: session-2.scope: Deactivated successfully. Jan 30 13:46:30.641814 systemd-logind[1559]: Removed session 2. Jan 30 13:46:30.671060 sshd[1710]: Accepted publickey for core from 10.0.0.1 port 41910 ssh2: RSA SHA256:OHll324lP2j8WlAQ+hjipts1Kkp63o3797QuaW9/4NE Jan 30 13:46:30.672947 sshd[1710]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Jan 30 13:46:30.677709 systemd-logind[1559]: New session 3 of user core. Jan 30 13:46:30.693106 systemd[1]: Started session-3.scope - Session 3 of User core. Jan 30 13:46:30.744007 sshd[1710]: pam_unix(sshd:session): session closed for user core Jan 30 13:46:30.753044 systemd[1]: Started sshd@3-10.0.0.114:22-10.0.0.1:41926.service - OpenSSH per-connection server daemon (10.0.0.1:41926). Jan 30 13:46:30.753487 systemd[1]: sshd@2-10.0.0.114:22-10.0.0.1:41910.service: Deactivated successfully. Jan 30 13:46:30.755849 systemd-logind[1559]: Session 3 logged out. Waiting for processes to exit. Jan 30 13:46:30.757231 systemd[1]: session-3.scope: Deactivated successfully. Jan 30 13:46:30.758088 systemd-logind[1559]: Removed session 3. Jan 30 13:46:30.785291 sshd[1718]: Accepted publickey for core from 10.0.0.1 port 41926 ssh2: RSA SHA256:OHll324lP2j8WlAQ+hjipts1Kkp63o3797QuaW9/4NE Jan 30 13:46:30.786904 sshd[1718]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Jan 30 13:46:30.790865 systemd-logind[1559]: New session 4 of user core. Jan 30 13:46:30.802240 systemd[1]: Started session-4.scope - Session 4 of User core. Jan 30 13:46:30.859270 sshd[1718]: pam_unix(sshd:session): session closed for user core Jan 30 13:46:30.869082 systemd[1]: Started sshd@4-10.0.0.114:22-10.0.0.1:51516.service - OpenSSH per-connection server daemon (10.0.0.1:51516). Jan 30 13:46:30.869586 systemd[1]: sshd@3-10.0.0.114:22-10.0.0.1:41926.service: Deactivated successfully. Jan 30 13:46:30.871744 systemd-logind[1559]: Session 4 logged out. Waiting for processes to exit. Jan 30 13:46:30.873211 systemd[1]: session-4.scope: Deactivated successfully. Jan 30 13:46:30.874456 systemd-logind[1559]: Removed session 4. Jan 30 13:46:30.901965 sshd[1726]: Accepted publickey for core from 10.0.0.1 port 51516 ssh2: RSA SHA256:OHll324lP2j8WlAQ+hjipts1Kkp63o3797QuaW9/4NE Jan 30 13:46:30.903537 sshd[1726]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Jan 30 13:46:30.907599 systemd-logind[1559]: New session 5 of user core. Jan 30 13:46:30.917159 systemd[1]: Started session-5.scope - Session 5 of User core. Jan 30 13:46:30.974878 sudo[1733]: core : PWD=/home/core ; USER=root ; COMMAND=/usr/sbin/setenforce 1 Jan 30 13:46:30.975211 sudo[1733]: pam_unix(sudo:session): session opened for user root(uid=0) by core(uid=500) Jan 30 13:46:30.998911 sudo[1733]: pam_unix(sudo:session): session closed for user root Jan 30 13:46:31.000864 sshd[1726]: pam_unix(sshd:session): session closed for user core Jan 30 13:46:31.018279 systemd[1]: Started sshd@5-10.0.0.114:22-10.0.0.1:51528.service - OpenSSH per-connection server daemon (10.0.0.1:51528). 
Jan 30 13:46:31.019187 systemd[1]: sshd@4-10.0.0.114:22-10.0.0.1:51516.service: Deactivated successfully. Jan 30 13:46:31.021802 systemd[1]: session-5.scope: Deactivated successfully. Jan 30 13:46:31.022761 systemd-logind[1559]: Session 5 logged out. Waiting for processes to exit. Jan 30 13:46:31.024503 systemd-logind[1559]: Removed session 5. Jan 30 13:46:31.051562 sshd[1736]: Accepted publickey for core from 10.0.0.1 port 51528 ssh2: RSA SHA256:OHll324lP2j8WlAQ+hjipts1Kkp63o3797QuaW9/4NE Jan 30 13:46:31.053433 sshd[1736]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Jan 30 13:46:31.057812 systemd-logind[1559]: New session 6 of user core. Jan 30 13:46:31.064084 systemd[1]: Started session-6.scope - Session 6 of User core. Jan 30 13:46:31.119588 sudo[1743]: core : PWD=/home/core ; USER=root ; COMMAND=/usr/bin/rm -rf /etc/audit/rules.d/80-selinux.rules /etc/audit/rules.d/99-default.rules Jan 30 13:46:31.119986 sudo[1743]: pam_unix(sudo:session): session opened for user root(uid=0) by core(uid=500) Jan 30 13:46:31.123977 sudo[1743]: pam_unix(sudo:session): session closed for user root Jan 30 13:46:31.130423 sudo[1742]: core : PWD=/home/core ; USER=root ; COMMAND=/usr/bin/systemctl restart audit-rules Jan 30 13:46:31.130842 sudo[1742]: pam_unix(sudo:session): session opened for user root(uid=0) by core(uid=500) Jan 30 13:46:31.153170 systemd[1]: Stopping audit-rules.service - Load Security Auditing Rules... Jan 30 13:46:31.155130 auditctl[1746]: No rules Jan 30 13:46:31.156489 systemd[1]: audit-rules.service: Deactivated successfully. Jan 30 13:46:31.156852 systemd[1]: Stopped audit-rules.service - Load Security Auditing Rules. Jan 30 13:46:31.158881 systemd[1]: Starting audit-rules.service - Load Security Auditing Rules... Jan 30 13:46:31.192882 augenrules[1765]: No rules Jan 30 13:46:31.194718 systemd[1]: Finished audit-rules.service - Load Security Auditing Rules. Jan 30 13:46:31.195963 sudo[1742]: pam_unix(sudo:session): session closed for user root Jan 30 13:46:31.197751 sshd[1736]: pam_unix(sshd:session): session closed for user core Jan 30 13:46:31.217082 systemd[1]: Started sshd@6-10.0.0.114:22-10.0.0.1:51544.service - OpenSSH per-connection server daemon (10.0.0.1:51544). Jan 30 13:46:31.218220 systemd[1]: sshd@5-10.0.0.114:22-10.0.0.1:51528.service: Deactivated successfully. Jan 30 13:46:31.220454 systemd-logind[1559]: Session 6 logged out. Waiting for processes to exit. Jan 30 13:46:31.221498 systemd[1]: session-6.scope: Deactivated successfully. Jan 30 13:46:31.222627 systemd-logind[1559]: Removed session 6. Jan 30 13:46:31.252687 sshd[1771]: Accepted publickey for core from 10.0.0.1 port 51544 ssh2: RSA SHA256:OHll324lP2j8WlAQ+hjipts1Kkp63o3797QuaW9/4NE Jan 30 13:46:31.254372 sshd[1771]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Jan 30 13:46:31.258681 systemd-logind[1559]: New session 7 of user core. Jan 30 13:46:31.274211 systemd[1]: Started session-7.scope - Session 7 of User core. Jan 30 13:46:31.328471 sudo[1779]: core : PWD=/home/core ; USER=root ; COMMAND=/home/core/install.sh Jan 30 13:46:31.328810 sudo[1779]: pam_unix(sudo:session): session opened for user root(uid=0) by core(uid=500) Jan 30 13:46:31.355136 systemd[1]: Starting coreos-metadata.service - QEMU metadata agent... Jan 30 13:46:31.378608 systemd[1]: coreos-metadata.service: Deactivated successfully. Jan 30 13:46:31.378999 systemd[1]: Finished coreos-metadata.service - QEMU metadata agent. 
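
In the session above the default audit rule fragments are removed and audit-rules.service is restarted, after which auditctl and augenrules both report "No rules". The sketch below is a small companion check, not the mechanism used here: it lists whatever rule fragments remain under /etc/audit/rules.d and the rules currently loaded in the kernel. `auditctl -l` requires root.

```python
import subprocess
from pathlib import Path

# Illustrative check alongside the audit-rules activity above: show remaining
# rule fragments and the active in-kernel ruleset ("No rules" once emptied).
def audit_rule_fragments() -> list[str]:
    rules_d = Path("/etc/audit/rules.d")
    return sorted(p.name for p in rules_d.glob("*.rules")) if rules_d.is_dir() else []

def loaded_rules() -> str:
    # auditctl -l prints the active ruleset; run as root.
    result = subprocess.run(["auditctl", "-l"], capture_output=True, text=True)
    return result.stdout.strip() or result.stderr.strip()

if __name__ == "__main__":
    print("fragments:", audit_rule_fragments() or "none")
    print("loaded:", loaded_rules())
```
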
Jan 30 13:46:31.878720 systemd[1]: Stopped kubelet.service - kubelet: The Kubernetes Node Agent. Jan 30 13:46:31.889014 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent... Jan 30 13:46:31.907774 systemd[1]: Reloading requested from client PID 1832 ('systemctl') (unit session-7.scope)... Jan 30 13:46:31.907793 systemd[1]: Reloading... Jan 30 13:46:31.987864 zram_generator::config[1873]: No configuration found. Jan 30 13:46:32.173160 systemd[1]: /usr/lib/systemd/system/docker.socket:6: ListenStream= references a path below legacy directory /var/run/, updating /var/run/docker.sock → /run/docker.sock; please update the unit file accordingly. Jan 30 13:46:32.247183 systemd[1]: Reloading finished in 338 ms. Jan 30 13:46:32.300863 systemd[1]: kubelet.service: Control process exited, code=killed, status=15/TERM Jan 30 13:46:32.300962 systemd[1]: kubelet.service: Failed with result 'signal'. Jan 30 13:46:32.301301 systemd[1]: Stopped kubelet.service - kubelet: The Kubernetes Node Agent. Jan 30 13:46:32.303050 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent... Jan 30 13:46:32.444224 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent. Jan 30 13:46:32.448726 (kubelet)[1930]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS Jan 30 13:46:32.488137 kubelet[1930]: Flag --container-runtime-endpoint has been deprecated, This parameter should be set via the config file specified by the Kubelet's --config flag. See https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/ for more information. Jan 30 13:46:32.488137 kubelet[1930]: Flag --pod-infra-container-image has been deprecated, will be removed in a future release. Image garbage collector will get sandbox image information from CRI. Jan 30 13:46:32.488137 kubelet[1930]: Flag --volume-plugin-dir has been deprecated, This parameter should be set via the config file specified by the Kubelet's --config flag. See https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/ for more information. Jan 30 13:46:32.488530 kubelet[1930]: I0130 13:46:32.488186 1930 server.go:205] "--pod-infra-container-image will not be pruned by the image garbage collector in kubelet and should also be set in the remote runtime" Jan 30 13:46:32.721890 kubelet[1930]: I0130 13:46:32.721747 1930 server.go:484] "Kubelet version" kubeletVersion="v1.30.1" Jan 30 13:46:32.721890 kubelet[1930]: I0130 13:46:32.721779 1930 server.go:486] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK="" Jan 30 13:46:32.722033 kubelet[1930]: I0130 13:46:32.721993 1930 server.go:927] "Client rotation is on, will bootstrap in background" Jan 30 13:46:32.735070 kubelet[1930]: I0130 13:46:32.735041 1930 dynamic_cafile_content.go:157] "Starting controller" name="client-ca-bundle::/etc/kubernetes/pki/ca.crt" Jan 30 13:46:32.751556 kubelet[1930]: I0130 13:46:32.751516 1930 server.go:742] "--cgroups-per-qos enabled, but --cgroup-root was not specified. 
defaulting to /" Jan 30 13:46:32.752919 kubelet[1930]: I0130 13:46:32.752865 1930 container_manager_linux.go:265] "Container manager verified user specified cgroup-root exists" cgroupRoot=[] Jan 30 13:46:32.753095 kubelet[1930]: I0130 13:46:32.752908 1930 container_manager_linux.go:270] "Creating Container Manager object based on Node Config" nodeConfig={"NodeName":"10.0.0.114","RuntimeCgroupsName":"","SystemCgroupsName":"","KubeletCgroupsName":"","KubeletOOMScoreAdj":-999,"ContainerRuntime":"","CgroupsPerQOS":true,"CgroupRoot":"/","CgroupDriver":"cgroupfs","KubeletRootDir":"/var/lib/kubelet","ProtectKernelDefaults":false,"KubeReservedCgroupName":"","SystemReservedCgroupName":"","ReservedSystemCPUs":{},"EnforceNodeAllocatable":{"pods":{}},"KubeReserved":null,"SystemReserved":null,"HardEvictionThresholds":[{"Signal":"imagefs.inodesFree","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.05},"GracePeriod":0,"MinReclaim":null},{"Signal":"memory.available","Operator":"LessThan","Value":{"Quantity":"100Mi","Percentage":0},"GracePeriod":0,"MinReclaim":null},{"Signal":"nodefs.available","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.1},"GracePeriod":0,"MinReclaim":null},{"Signal":"nodefs.inodesFree","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.05},"GracePeriod":0,"MinReclaim":null},{"Signal":"imagefs.available","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.15},"GracePeriod":0,"MinReclaim":null}],"QOSReserved":{},"CPUManagerPolicy":"none","CPUManagerPolicyOptions":null,"TopologyManagerScope":"container","CPUManagerReconcilePeriod":10000000000,"ExperimentalMemoryManagerPolicy":"None","ExperimentalMemoryManagerReservedMemory":null,"PodPidsLimit":-1,"EnforceCPULimits":true,"CPUCFSQuotaPeriod":100000000,"TopologyManagerPolicy":"none","TopologyManagerPolicyOptions":null} Jan 30 13:46:32.753212 kubelet[1930]: I0130 13:46:32.753101 1930 topology_manager.go:138] "Creating topology manager with none policy" Jan 30 13:46:32.753212 kubelet[1930]: I0130 13:46:32.753112 1930 container_manager_linux.go:301] "Creating device plugin manager" Jan 30 13:46:32.753277 kubelet[1930]: I0130 13:46:32.753257 1930 state_mem.go:36] "Initialized new in-memory state store" Jan 30 13:46:32.753914 kubelet[1930]: I0130 13:46:32.753887 1930 kubelet.go:400] "Attempting to sync node with API server" Jan 30 13:46:32.753914 kubelet[1930]: I0130 13:46:32.753906 1930 kubelet.go:301] "Adding static pod path" path="/etc/kubernetes/manifests" Jan 30 13:46:32.753956 kubelet[1930]: I0130 13:46:32.753930 1930 kubelet.go:312] "Adding apiserver pod source" Jan 30 13:46:32.753956 kubelet[1930]: I0130 13:46:32.753950 1930 apiserver.go:42] "Waiting for node sync before watching apiserver pods" Jan 30 13:46:32.754065 kubelet[1930]: E0130 13:46:32.754013 1930 file.go:98] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Jan 30 13:46:32.754132 kubelet[1930]: E0130 13:46:32.754104 1930 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Jan 30 13:46:32.757232 kubelet[1930]: W0130 13:46:32.757178 1930 reflector.go:547] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Node: nodes "10.0.0.114" is forbidden: User "system:anonymous" cannot list resource "nodes" in API group "" at the cluster scope Jan 30 13:46:32.757232 kubelet[1930]: E0130 13:46:32.757209 1930 reflector.go:150] k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Node: failed to list *v1.Node: 
nodes "10.0.0.114" is forbidden: User "system:anonymous" cannot list resource "nodes" in API group "" at the cluster scope Jan 30 13:46:32.757459 kubelet[1930]: W0130 13:46:32.757431 1930 reflector.go:547] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Service: services is forbidden: User "system:anonymous" cannot list resource "services" in API group "" at the cluster scope Jan 30 13:46:32.757511 kubelet[1930]: E0130 13:46:32.757459 1930 reflector.go:150] k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Service: failed to list *v1.Service: services is forbidden: User "system:anonymous" cannot list resource "services" in API group "" at the cluster scope Jan 30 13:46:32.757671 kubelet[1930]: I0130 13:46:32.757640 1930 kuberuntime_manager.go:261] "Container runtime initialized" containerRuntime="containerd" version="v1.7.21" apiVersion="v1" Jan 30 13:46:32.758832 kubelet[1930]: I0130 13:46:32.758799 1930 kubelet.go:815] "Not starting ClusterTrustBundle informer because we are in static kubelet mode" Jan 30 13:46:32.758887 kubelet[1930]: W0130 13:46:32.758879 1930 probe.go:272] Flexvolume plugin directory at /opt/libexec/kubernetes/kubelet-plugins/volume/exec/ does not exist. Recreating. Jan 30 13:46:32.759590 kubelet[1930]: I0130 13:46:32.759561 1930 server.go:1264] "Started kubelet" Jan 30 13:46:32.759746 kubelet[1930]: I0130 13:46:32.759682 1930 server.go:163] "Starting to listen" address="0.0.0.0" port=10250 Jan 30 13:46:32.760416 kubelet[1930]: I0130 13:46:32.759804 1930 ratelimit.go:55] "Setting rate limiting for endpoint" service="podresources" qps=100 burstTokens=10 Jan 30 13:46:32.760416 kubelet[1930]: I0130 13:46:32.760159 1930 server.go:227] "Starting to serve the podresources API" endpoint="unix:/var/lib/kubelet/pod-resources/kubelet.sock" Jan 30 13:46:32.761571 kubelet[1930]: I0130 13:46:32.761213 1930 fs_resource_analyzer.go:67] "Starting FS ResourceAnalyzer" Jan 30 13:46:32.761571 kubelet[1930]: I0130 13:46:32.761241 1930 server.go:455] "Adding debug handlers to kubelet server" Jan 30 13:46:32.763153 kubelet[1930]: E0130 13:46:32.762495 1930 kubelet_node_status.go:462] "Error getting the current node from lister" err="node \"10.0.0.114\" not found" Jan 30 13:46:32.763153 kubelet[1930]: I0130 13:46:32.762539 1930 volume_manager.go:291] "Starting Kubelet Volume Manager" Jan 30 13:46:32.763153 kubelet[1930]: I0130 13:46:32.762630 1930 desired_state_of_world_populator.go:149] "Desired state populator starts to run" Jan 30 13:46:32.763153 kubelet[1930]: I0130 13:46:32.762675 1930 reconciler.go:26] "Reconciler: start to sync state" Jan 30 13:46:32.763874 kubelet[1930]: E0130 13:46:32.763685 1930 kubelet.go:1467] "Image garbage collection failed once. 
Stats initialization may not have completed yet" err="invalid capacity 0 on image filesystem" Jan 30 13:46:32.764554 kubelet[1930]: I0130 13:46:32.764531 1930 factory.go:221] Registration of the systemd container factory successfully Jan 30 13:46:32.764646 kubelet[1930]: I0130 13:46:32.764620 1930 factory.go:219] Registration of the crio container factory failed: Get "http://%2Fvar%2Frun%2Fcrio%2Fcrio.sock/info": dial unix /var/run/crio/crio.sock: connect: no such file or directory Jan 30 13:46:32.765796 kubelet[1930]: I0130 13:46:32.765777 1930 factory.go:221] Registration of the containerd container factory successfully Jan 30 13:46:32.782444 kubelet[1930]: E0130 13:46:32.782373 1930 nodelease.go:49] "Failed to get node when trying to set owner ref to the node lease" err="nodes \"10.0.0.114\" not found" node="10.0.0.114" Jan 30 13:46:32.785257 kubelet[1930]: I0130 13:46:32.785239 1930 cpu_manager.go:214] "Starting CPU manager" policy="none" Jan 30 13:46:32.785416 kubelet[1930]: I0130 13:46:32.785345 1930 cpu_manager.go:215] "Reconciling" reconcilePeriod="10s" Jan 30 13:46:32.785416 kubelet[1930]: I0130 13:46:32.785369 1930 state_mem.go:36] "Initialized new in-memory state store" Jan 30 13:46:32.863835 kubelet[1930]: I0130 13:46:32.863766 1930 kubelet_node_status.go:73] "Attempting to register node" node="10.0.0.114" Jan 30 13:46:32.898400 kubelet[1930]: I0130 13:46:32.898357 1930 kubelet_node_status.go:76] "Successfully registered node" node="10.0.0.114" Jan 30 13:46:33.050567 kubelet[1930]: E0130 13:46:33.050432 1930 kubelet_node_status.go:462] "Error getting the current node from lister" err="node \"10.0.0.114\" not found" Jan 30 13:46:33.117597 kubelet[1930]: I0130 13:46:33.117539 1930 policy_none.go:49] "None policy: Start" Jan 30 13:46:33.118460 kubelet[1930]: I0130 13:46:33.118396 1930 memory_manager.go:170] "Starting memorymanager" policy="None" Jan 30 13:46:33.118460 kubelet[1930]: I0130 13:46:33.118453 1930 state_mem.go:35] "Initializing new in-memory state store" Jan 30 13:46:33.125070 kubelet[1930]: I0130 13:46:33.125036 1930 manager.go:479] "Failed to read data from checkpoint" checkpoint="kubelet_internal_checkpoint" err="checkpoint is not found" Jan 30 13:46:33.125297 kubelet[1930]: I0130 13:46:33.125261 1930 container_log_manager.go:186] "Initializing container log rotate workers" workers=1 monitorPeriod="10s" Jan 30 13:46:33.125394 kubelet[1930]: I0130 13:46:33.125382 1930 plugin_manager.go:118] "Starting Kubelet Plugin Manager" Jan 30 13:46:33.127511 kubelet[1930]: E0130 13:46:33.127479 1930 eviction_manager.go:282] "Eviction manager: failed to get summary stats" err="failed to get node info: node \"10.0.0.114\" not found" Jan 30 13:46:33.129622 sudo[1779]: pam_unix(sudo:session): session closed for user root Jan 30 13:46:33.131624 sshd[1771]: pam_unix(sshd:session): session closed for user core Jan 30 13:46:33.135491 systemd[1]: sshd@6-10.0.0.114:22-10.0.0.1:51544.service: Deactivated successfully. Jan 30 13:46:33.138131 systemd[1]: session-7.scope: Deactivated successfully. Jan 30 13:46:33.138965 systemd-logind[1559]: Session 7 logged out. Waiting for processes to exit. Jan 30 13:46:33.139807 systemd-logind[1559]: Removed session 7. Jan 30 13:46:33.147870 kubelet[1930]: I0130 13:46:33.147797 1930 kubelet_network_linux.go:50] "Initialized iptables rules." protocol="IPv4" Jan 30 13:46:33.149182 kubelet[1930]: I0130 13:46:33.149139 1930 kubelet_network_linux.go:50] "Initialized iptables rules." 
protocol="IPv6" Jan 30 13:46:33.149182 kubelet[1930]: I0130 13:46:33.149180 1930 status_manager.go:217] "Starting to sync pod status with apiserver" Jan 30 13:46:33.149251 kubelet[1930]: I0130 13:46:33.149202 1930 kubelet.go:2337] "Starting kubelet main sync loop" Jan 30 13:46:33.149278 kubelet[1930]: E0130 13:46:33.149253 1930 kubelet.go:2361] "Skipping pod synchronization" err="PLEG is not healthy: pleg has yet to be successful" Jan 30 13:46:33.150590 kubelet[1930]: E0130 13:46:33.150552 1930 kubelet_node_status.go:462] "Error getting the current node from lister" err="node \"10.0.0.114\" not found" Jan 30 13:46:33.251695 kubelet[1930]: E0130 13:46:33.251602 1930 kubelet_node_status.go:462] "Error getting the current node from lister" err="node \"10.0.0.114\" not found" Jan 30 13:46:33.352754 kubelet[1930]: E0130 13:46:33.352620 1930 kubelet_node_status.go:462] "Error getting the current node from lister" err="node \"10.0.0.114\" not found" Jan 30 13:46:33.452948 kubelet[1930]: E0130 13:46:33.452908 1930 kubelet_node_status.go:462] "Error getting the current node from lister" err="node \"10.0.0.114\" not found" Jan 30 13:46:33.553783 kubelet[1930]: E0130 13:46:33.553742 1930 kubelet_node_status.go:462] "Error getting the current node from lister" err="node \"10.0.0.114\" not found" Jan 30 13:46:33.654069 kubelet[1930]: E0130 13:46:33.653969 1930 kubelet_node_status.go:462] "Error getting the current node from lister" err="node \"10.0.0.114\" not found" Jan 30 13:46:33.724232 kubelet[1930]: I0130 13:46:33.724176 1930 transport.go:147] "Certificate rotation detected, shutting down client connections to start using new credentials" Jan 30 13:46:33.724390 kubelet[1930]: W0130 13:46:33.724369 1930 reflector.go:470] k8s.io/client-go/informers/factory.go:160: watch of *v1.RuntimeClass ended with: very short watch: k8s.io/client-go/informers/factory.go:160: Unexpected watch close - watch lasted less than a second and no items received Jan 30 13:46:33.724469 kubelet[1930]: W0130 13:46:33.724393 1930 reflector.go:470] k8s.io/client-go/informers/factory.go:160: watch of *v1.CSIDriver ended with: very short watch: k8s.io/client-go/informers/factory.go:160: Unexpected watch close - watch lasted less than a second and no items received Jan 30 13:46:33.754682 kubelet[1930]: E0130 13:46:33.754665 1930 kubelet_node_status.go:462] "Error getting the current node from lister" err="node \"10.0.0.114\" not found" Jan 30 13:46:33.754682 kubelet[1930]: E0130 13:46:33.754666 1930 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Jan 30 13:46:33.855138 kubelet[1930]: E0130 13:46:33.855101 1930 kubelet_node_status.go:462] "Error getting the current node from lister" err="node \"10.0.0.114\" not found" Jan 30 13:46:33.955602 kubelet[1930]: E0130 13:46:33.955523 1930 kubelet_node_status.go:462] "Error getting the current node from lister" err="node \"10.0.0.114\" not found" Jan 30 13:46:34.056527 kubelet[1930]: I0130 13:46:34.056511 1930 kuberuntime_manager.go:1523] "Updating runtime config through cri with podcidr" CIDR="192.168.1.0/24" Jan 30 13:46:34.056858 containerd[1578]: time="2025-01-30T13:46:34.056822291Z" level=info msg="No cni config template is specified, wait for other system components to drop the config." 
Jan 30 13:46:34.057183 kubelet[1930]: I0130 13:46:34.056984 1930 kubelet_network.go:61] "Updating Pod CIDR" originalPodCIDR="" newPodCIDR="192.168.1.0/24" Jan 30 13:46:34.755653 kubelet[1930]: I0130 13:46:34.755616 1930 apiserver.go:52] "Watching apiserver" Jan 30 13:46:34.755653 kubelet[1930]: E0130 13:46:34.755629 1930 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Jan 30 13:46:34.759348 kubelet[1930]: I0130 13:46:34.759296 1930 topology_manager.go:215] "Topology Admit Handler" podUID="de663b58-e8ae-403f-a76e-8ec55901f8bd" podNamespace="kube-system" podName="cilium-cwvwk" Jan 30 13:46:34.759470 kubelet[1930]: I0130 13:46:34.759450 1930 topology_manager.go:215] "Topology Admit Handler" podUID="279b22ee-166f-4a26-b99a-3ef4f5ad404e" podNamespace="kube-system" podName="kube-proxy-s67t7" Jan 30 13:46:34.763223 kubelet[1930]: I0130 13:46:34.763206 1930 desired_state_of_world_populator.go:157] "Finished populating initial desired state of world" Jan 30 13:46:34.772716 kubelet[1930]: I0130 13:46:34.772671 1930 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"xtables-lock\" (UniqueName: \"kubernetes.io/host-path/de663b58-e8ae-403f-a76e-8ec55901f8bd-xtables-lock\") pod \"cilium-cwvwk\" (UID: \"de663b58-e8ae-403f-a76e-8ec55901f8bd\") " pod="kube-system/cilium-cwvwk" Jan 30 13:46:34.772773 kubelet[1930]: I0130 13:46:34.772727 1930 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-proxy\" (UniqueName: \"kubernetes.io/configmap/279b22ee-166f-4a26-b99a-3ef4f5ad404e-kube-proxy\") pod \"kube-proxy-s67t7\" (UID: \"279b22ee-166f-4a26-b99a-3ef4f5ad404e\") " pod="kube-system/kube-proxy-s67t7" Jan 30 13:46:34.772773 kubelet[1930]: I0130 13:46:34.772749 1930 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cilium-run\" (UniqueName: \"kubernetes.io/host-path/de663b58-e8ae-403f-a76e-8ec55901f8bd-cilium-run\") pod \"cilium-cwvwk\" (UID: \"de663b58-e8ae-403f-a76e-8ec55901f8bd\") " pod="kube-system/cilium-cwvwk" Jan 30 13:46:34.772773 kubelet[1930]: I0130 13:46:34.772767 1930 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"clustermesh-secrets\" (UniqueName: \"kubernetes.io/secret/de663b58-e8ae-403f-a76e-8ec55901f8bd-clustermesh-secrets\") pod \"cilium-cwvwk\" (UID: \"de663b58-e8ae-403f-a76e-8ec55901f8bd\") " pod="kube-system/cilium-cwvwk" Jan 30 13:46:34.772864 kubelet[1930]: I0130 13:46:34.772784 1930 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cilium-config-path\" (UniqueName: \"kubernetes.io/configmap/de663b58-e8ae-403f-a76e-8ec55901f8bd-cilium-config-path\") pod \"cilium-cwvwk\" (UID: \"de663b58-e8ae-403f-a76e-8ec55901f8bd\") " pod="kube-system/cilium-cwvwk" Jan 30 13:46:34.772864 kubelet[1930]: I0130 13:46:34.772799 1930 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"hubble-tls\" (UniqueName: \"kubernetes.io/projected/de663b58-e8ae-403f-a76e-8ec55901f8bd-hubble-tls\") pod \"cilium-cwvwk\" (UID: \"de663b58-e8ae-403f-a76e-8ec55901f8bd\") " pod="kube-system/cilium-cwvwk" Jan 30 13:46:34.772917 kubelet[1930]: I0130 13:46:34.772850 1930 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"xtables-lock\" (UniqueName: \"kubernetes.io/host-path/279b22ee-166f-4a26-b99a-3ef4f5ad404e-xtables-lock\") pod 
\"kube-proxy-s67t7\" (UID: \"279b22ee-166f-4a26-b99a-3ef4f5ad404e\") " pod="kube-system/kube-proxy-s67t7" Jan 30 13:46:34.772917 kubelet[1930]: I0130 13:46:34.772888 1930 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/279b22ee-166f-4a26-b99a-3ef4f5ad404e-lib-modules\") pod \"kube-proxy-s67t7\" (UID: \"279b22ee-166f-4a26-b99a-3ef4f5ad404e\") " pod="kube-system/kube-proxy-s67t7" Jan 30 13:46:34.772917 kubelet[1930]: I0130 13:46:34.772902 1930 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-n6ksd\" (UniqueName: \"kubernetes.io/projected/279b22ee-166f-4a26-b99a-3ef4f5ad404e-kube-api-access-n6ksd\") pod \"kube-proxy-s67t7\" (UID: \"279b22ee-166f-4a26-b99a-3ef4f5ad404e\") " pod="kube-system/kube-proxy-s67t7" Jan 30 13:46:34.773002 kubelet[1930]: I0130 13:46:34.772918 1930 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"bpf-maps\" (UniqueName: \"kubernetes.io/host-path/de663b58-e8ae-403f-a76e-8ec55901f8bd-bpf-maps\") pod \"cilium-cwvwk\" (UID: \"de663b58-e8ae-403f-a76e-8ec55901f8bd\") " pod="kube-system/cilium-cwvwk" Jan 30 13:46:34.773002 kubelet[1930]: I0130 13:46:34.772937 1930 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cni-path\" (UniqueName: \"kubernetes.io/host-path/de663b58-e8ae-403f-a76e-8ec55901f8bd-cni-path\") pod \"cilium-cwvwk\" (UID: \"de663b58-e8ae-403f-a76e-8ec55901f8bd\") " pod="kube-system/cilium-cwvwk" Jan 30 13:46:34.773002 kubelet[1930]: I0130 13:46:34.772953 1930 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/de663b58-e8ae-403f-a76e-8ec55901f8bd-lib-modules\") pod \"cilium-cwvwk\" (UID: \"de663b58-e8ae-403f-a76e-8ec55901f8bd\") " pod="kube-system/cilium-cwvwk" Jan 30 13:46:34.773002 kubelet[1930]: I0130 13:46:34.772971 1930 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"hostproc\" (UniqueName: \"kubernetes.io/host-path/de663b58-e8ae-403f-a76e-8ec55901f8bd-hostproc\") pod \"cilium-cwvwk\" (UID: \"de663b58-e8ae-403f-a76e-8ec55901f8bd\") " pod="kube-system/cilium-cwvwk" Jan 30 13:46:34.773002 kubelet[1930]: I0130 13:46:34.772985 1930 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cilium-cgroup\" (UniqueName: \"kubernetes.io/host-path/de663b58-e8ae-403f-a76e-8ec55901f8bd-cilium-cgroup\") pod \"cilium-cwvwk\" (UID: \"de663b58-e8ae-403f-a76e-8ec55901f8bd\") " pod="kube-system/cilium-cwvwk" Jan 30 13:46:34.773002 kubelet[1930]: I0130 13:46:34.772998 1930 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"etc-cni-netd\" (UniqueName: \"kubernetes.io/host-path/de663b58-e8ae-403f-a76e-8ec55901f8bd-etc-cni-netd\") pod \"cilium-cwvwk\" (UID: \"de663b58-e8ae-403f-a76e-8ec55901f8bd\") " pod="kube-system/cilium-cwvwk" Jan 30 13:46:34.773155 kubelet[1930]: I0130 13:46:34.773032 1930 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-proc-sys-net\" (UniqueName: \"kubernetes.io/host-path/de663b58-e8ae-403f-a76e-8ec55901f8bd-host-proc-sys-net\") pod \"cilium-cwvwk\" (UID: \"de663b58-e8ae-403f-a76e-8ec55901f8bd\") " pod="kube-system/cilium-cwvwk" Jan 30 13:46:34.773155 kubelet[1930]: I0130 13:46:34.773065 1930 reconciler_common.go:247] 
"operationExecutor.VerifyControllerAttachedVolume started for volume \"host-proc-sys-kernel\" (UniqueName: \"kubernetes.io/host-path/de663b58-e8ae-403f-a76e-8ec55901f8bd-host-proc-sys-kernel\") pod \"cilium-cwvwk\" (UID: \"de663b58-e8ae-403f-a76e-8ec55901f8bd\") " pod="kube-system/cilium-cwvwk" Jan 30 13:46:34.773155 kubelet[1930]: I0130 13:46:34.773083 1930 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-fh458\" (UniqueName: \"kubernetes.io/projected/de663b58-e8ae-403f-a76e-8ec55901f8bd-kube-api-access-fh458\") pod \"cilium-cwvwk\" (UID: \"de663b58-e8ae-403f-a76e-8ec55901f8bd\") " pod="kube-system/cilium-cwvwk" Jan 30 13:46:35.063549 kubelet[1930]: E0130 13:46:35.063410 1930 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jan 30 13:46:35.063897 kubelet[1930]: E0130 13:46:35.063843 1930 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jan 30 13:46:35.064237 containerd[1578]: time="2025-01-30T13:46:35.064134261Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-proxy-s67t7,Uid:279b22ee-166f-4a26-b99a-3ef4f5ad404e,Namespace:kube-system,Attempt:0,}" Jan 30 13:46:35.064596 containerd[1578]: time="2025-01-30T13:46:35.064398136Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:cilium-cwvwk,Uid:de663b58-e8ae-403f-a76e-8ec55901f8bd,Namespace:kube-system,Attempt:0,}" Jan 30 13:46:35.615255 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount770135970.mount: Deactivated successfully. Jan 30 13:46:35.623946 containerd[1578]: time="2025-01-30T13:46:35.623892474Z" level=info msg="ImageCreate event name:\"registry.k8s.io/pause:3.8\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"} labels:{key:\"io.cri-containerd.pinned\" value:\"pinned\"}" Jan 30 13:46:35.624822 containerd[1578]: time="2025-01-30T13:46:35.624768585Z" level=info msg="ImageUpdate event name:\"registry.k8s.io/pause:3.8\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"} labels:{key:\"io.cri-containerd.pinned\" value:\"pinned\"}" Jan 30 13:46:35.626514 containerd[1578]: time="2025-01-30T13:46:35.626473155Z" level=info msg="stop pulling image registry.k8s.io/pause:3.8: active requests=0, bytes read=0" Jan 30 13:46:35.627468 containerd[1578]: time="2025-01-30T13:46:35.627443957Z" level=info msg="stop pulling image registry.k8s.io/pause:3.8: active requests=0, bytes read=312056" Jan 30 13:46:35.628682 containerd[1578]: time="2025-01-30T13:46:35.628650280Z" level=info msg="ImageCreate event name:\"sha256:4873874c08efc72e9729683a83ffbb7502ee729e9a5ac097723806ea7fa13517\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"} labels:{key:\"io.cri-containerd.pinned\" value:\"pinned\"}" Jan 30 13:46:35.631320 containerd[1578]: time="2025-01-30T13:46:35.631288669Z" level=info msg="ImageCreate event name:\"registry.k8s.io/pause@sha256:9001185023633d17a2f98ff69b6ff2615b8ea02a825adffa40422f51dfdcde9d\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"} labels:{key:\"io.cri-containerd.pinned\" value:\"pinned\"}" Jan 30 13:46:35.632677 containerd[1578]: time="2025-01-30T13:46:35.632379366Z" level=info msg="Pulled image \"registry.k8s.io/pause:3.8\" with image id \"sha256:4873874c08efc72e9729683a83ffbb7502ee729e9a5ac097723806ea7fa13517\", repo tag \"registry.k8s.io/pause:3.8\", repo 
digest \"registry.k8s.io/pause@sha256:9001185023633d17a2f98ff69b6ff2615b8ea02a825adffa40422f51dfdcde9d\", size \"311286\" in 567.515659ms" Jan 30 13:46:35.636269 containerd[1578]: time="2025-01-30T13:46:35.636232829Z" level=info msg="Pulled image \"registry.k8s.io/pause:3.8\" with image id \"sha256:4873874c08efc72e9729683a83ffbb7502ee729e9a5ac097723806ea7fa13517\", repo tag \"registry.k8s.io/pause:3.8\", repo digest \"registry.k8s.io/pause@sha256:9001185023633d17a2f98ff69b6ff2615b8ea02a825adffa40422f51dfdcde9d\", size \"311286\" in 571.760029ms" Jan 30 13:46:35.745702 containerd[1578]: time="2025-01-30T13:46:35.745515725Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Jan 30 13:46:35.745702 containerd[1578]: time="2025-01-30T13:46:35.745585514Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Jan 30 13:46:35.745702 containerd[1578]: time="2025-01-30T13:46:35.745604279Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Jan 30 13:46:35.746067 containerd[1578]: time="2025-01-30T13:46:35.745791723Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Jan 30 13:46:35.746099 containerd[1578]: time="2025-01-30T13:46:35.746040376Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Jan 30 13:46:35.746125 containerd[1578]: time="2025-01-30T13:46:35.746106986Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Jan 30 13:46:35.746151 containerd[1578]: time="2025-01-30T13:46:35.746132292Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Jan 30 13:46:35.746389 containerd[1578]: time="2025-01-30T13:46:35.746296560Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." 
runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Jan 30 13:46:35.756209 kubelet[1930]: E0130 13:46:35.756163 1930 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Jan 30 13:46:35.826794 containerd[1578]: time="2025-01-30T13:46:35.826750428Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:cilium-cwvwk,Uid:de663b58-e8ae-403f-a76e-8ec55901f8bd,Namespace:kube-system,Attempt:0,} returns sandbox id \"4d7144513a594fb0486968fdbd39c99288e248ba95c01c933c459a35e14f968a\"" Jan 30 13:46:35.828312 kubelet[1930]: E0130 13:46:35.828047 1930 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jan 30 13:46:35.829036 containerd[1578]: time="2025-01-30T13:46:35.829008315Z" level=info msg="PullImage \"quay.io/cilium/cilium:v1.12.5@sha256:06ce2b0a0a472e73334a7504ee5c5d8b2e2d7b72ef728ad94e564740dd505be5\"" Jan 30 13:46:35.829592 containerd[1578]: time="2025-01-30T13:46:35.829550087Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-proxy-s67t7,Uid:279b22ee-166f-4a26-b99a-3ef4f5ad404e,Namespace:kube-system,Attempt:0,} returns sandbox id \"0337d03ec4a6bebe387ee3bbdb2960727d22a494a0e6322bbe66321c099cb26a\"" Jan 30 13:46:35.830053 kubelet[1930]: E0130 13:46:35.830002 1930 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jan 30 13:46:36.757030 kubelet[1930]: E0130 13:46:36.756970 1930 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Jan 30 13:46:37.757357 kubelet[1930]: E0130 13:46:37.757303 1930 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Jan 30 13:46:38.757557 kubelet[1930]: E0130 13:46:38.757524 1930 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Jan 30 13:46:39.012503 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount695658890.mount: Deactivated successfully. 
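The recurring dns.go:153 "Nameserver limits exceeded" entries come from kubelet building each pod's resolv.conf: the resolver only honors three nameservers, so kubelet keeps the first three from the node's resolv.conf (1.1.1.1 1.0.0.1 8.8.8.8 here) and logs that the rest were dropped. A minimal Go sketch of that truncation, assuming three is the limit being enforced; the constant and helper names are illustrative, not kubelet's actual code:

    package main

    import (
    	"fmt"
    	"strings"
    )

    // maxNameservers mirrors the three-nameserver resolv.conf limit that
    // kubelet enforces when assembling a pod's DNS configuration.
    const maxNameservers = 3

    // truncateNameservers keeps the first three entries and reports whether
    // anything was dropped; illustrative only, not kubelet's actual function.
    func truncateNameservers(ns []string) ([]string, bool) {
    	if len(ns) <= maxNameservers {
    		return ns, false
    	}
    	return ns[:maxNameservers], true
    }

    func main() {
    	ns := []string{"1.1.1.1", "1.0.0.1", "8.8.8.8", "9.9.9.9"}
    	applied, truncated := truncateNameservers(ns)
    	if truncated {
    		fmt.Printf("Nameserver limits were exceeded, the applied nameserver line is: %s\n",
    			strings.Join(applied, " "))
    	}
    }

Trimming the node's resolv.conf to three entries (or pointing kubelet's --resolv-conf at a trimmed file) would likely silence this repeated warning.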
Jan 30 13:46:39.758686 kubelet[1930]: E0130 13:46:39.758638 1930 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Jan 30 13:46:40.759866 kubelet[1930]: E0130 13:46:40.759798 1930 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Jan 30 13:46:40.974123 containerd[1578]: time="2025-01-30T13:46:40.974069333Z" level=info msg="ImageCreate event name:\"quay.io/cilium/cilium@sha256:06ce2b0a0a472e73334a7504ee5c5d8b2e2d7b72ef728ad94e564740dd505be5\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jan 30 13:46:40.974783 containerd[1578]: time="2025-01-30T13:46:40.974756177Z" level=info msg="stop pulling image quay.io/cilium/cilium@sha256:06ce2b0a0a472e73334a7504ee5c5d8b2e2d7b72ef728ad94e564740dd505be5: active requests=0, bytes read=166730503" Jan 30 13:46:40.976397 containerd[1578]: time="2025-01-30T13:46:40.976358157Z" level=info msg="ImageCreate event name:\"sha256:3e35b3e9f295e7748482d40ed499b0ff7961f1f128d479d8e6682b3245bba69b\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jan 30 13:46:40.977847 containerd[1578]: time="2025-01-30T13:46:40.977821126Z" level=info msg="Pulled image \"quay.io/cilium/cilium:v1.12.5@sha256:06ce2b0a0a472e73334a7504ee5c5d8b2e2d7b72ef728ad94e564740dd505be5\" with image id \"sha256:3e35b3e9f295e7748482d40ed499b0ff7961f1f128d479d8e6682b3245bba69b\", repo tag \"\", repo digest \"quay.io/cilium/cilium@sha256:06ce2b0a0a472e73334a7504ee5c5d8b2e2d7b72ef728ad94e564740dd505be5\", size \"166719855\" in 5.148672678s" Jan 30 13:46:40.977899 containerd[1578]: time="2025-01-30T13:46:40.977850181Z" level=info msg="PullImage \"quay.io/cilium/cilium:v1.12.5@sha256:06ce2b0a0a472e73334a7504ee5c5d8b2e2d7b72ef728ad94e564740dd505be5\" returns image reference \"sha256:3e35b3e9f295e7748482d40ed499b0ff7961f1f128d479d8e6682b3245bba69b\"" Jan 30 13:46:40.978954 containerd[1578]: time="2025-01-30T13:46:40.978916166Z" level=info msg="PullImage \"registry.k8s.io/kube-proxy:v1.30.9\"" Jan 30 13:46:40.980123 containerd[1578]: time="2025-01-30T13:46:40.980094905Z" level=info msg="CreateContainer within sandbox \"4d7144513a594fb0486968fdbd39c99288e248ba95c01c933c459a35e14f968a\" for container &ContainerMetadata{Name:mount-cgroup,Attempt:0,}" Jan 30 13:46:40.996826 containerd[1578]: time="2025-01-30T13:46:40.996770222Z" level=info msg="CreateContainer within sandbox \"4d7144513a594fb0486968fdbd39c99288e248ba95c01c933c459a35e14f968a\" for &ContainerMetadata{Name:mount-cgroup,Attempt:0,} returns container id \"7564126c361e084c12c0033f262c7b6f5959e21476c0d90a189a869555d1715b\"" Jan 30 13:46:40.997388 containerd[1578]: time="2025-01-30T13:46:40.997352852Z" level=info msg="StartContainer for \"7564126c361e084c12c0033f262c7b6f5959e21476c0d90a189a869555d1715b\"" Jan 30 13:46:41.050985 containerd[1578]: time="2025-01-30T13:46:41.050883694Z" level=info msg="StartContainer for \"7564126c361e084c12c0033f262c7b6f5959e21476c0d90a189a869555d1715b\" returns successfully" Jan 30 13:46:41.165800 kubelet[1930]: E0130 13:46:41.165762 1930 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jan 30 13:46:41.504335 containerd[1578]: time="2025-01-30T13:46:41.504190346Z" level=info msg="shim disconnected" id=7564126c361e084c12c0033f262c7b6f5959e21476c0d90a189a869555d1715b namespace=k8s.io Jan 30 13:46:41.504335 containerd[1578]: 
time="2025-01-30T13:46:41.504253224Z" level=warning msg="cleaning up after shim disconnected" id=7564126c361e084c12c0033f262c7b6f5959e21476c0d90a189a869555d1715b namespace=k8s.io Jan 30 13:46:41.504335 containerd[1578]: time="2025-01-30T13:46:41.504264422Z" level=info msg="cleaning up dead shim" namespace=k8s.io Jan 30 13:46:41.760850 kubelet[1930]: E0130 13:46:41.760701 1930 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Jan 30 13:46:41.991348 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-7564126c361e084c12c0033f262c7b6f5959e21476c0d90a189a869555d1715b-rootfs.mount: Deactivated successfully. Jan 30 13:46:42.168339 kubelet[1930]: E0130 13:46:42.168194 1930 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jan 30 13:46:42.170097 containerd[1578]: time="2025-01-30T13:46:42.170060116Z" level=info msg="CreateContainer within sandbox \"4d7144513a594fb0486968fdbd39c99288e248ba95c01c933c459a35e14f968a\" for container &ContainerMetadata{Name:apply-sysctl-overwrites,Attempt:0,}" Jan 30 13:46:42.186083 containerd[1578]: time="2025-01-30T13:46:42.186027138Z" level=info msg="CreateContainer within sandbox \"4d7144513a594fb0486968fdbd39c99288e248ba95c01c933c459a35e14f968a\" for &ContainerMetadata{Name:apply-sysctl-overwrites,Attempt:0,} returns container id \"09beda90046387238646b4b7773221b12edf8a523f5497addf19f389202d6df7\"" Jan 30 13:46:42.186765 containerd[1578]: time="2025-01-30T13:46:42.186737542Z" level=info msg="StartContainer for \"09beda90046387238646b4b7773221b12edf8a523f5497addf19f389202d6df7\"" Jan 30 13:46:42.238778 containerd[1578]: time="2025-01-30T13:46:42.238740740Z" level=info msg="StartContainer for \"09beda90046387238646b4b7773221b12edf8a523f5497addf19f389202d6df7\" returns successfully" Jan 30 13:46:42.248856 systemd[1]: systemd-sysctl.service: Deactivated successfully. Jan 30 13:46:42.249401 systemd[1]: Stopped systemd-sysctl.service - Apply Kernel Variables. Jan 30 13:46:42.249467 systemd[1]: Stopping systemd-sysctl.service - Apply Kernel Variables... Jan 30 13:46:42.255222 systemd[1]: Starting systemd-sysctl.service - Apply Kernel Variables... Jan 30 13:46:42.271468 systemd[1]: Finished systemd-sysctl.service - Apply Kernel Variables. 
Jan 30 13:46:42.361733 containerd[1578]: time="2025-01-30T13:46:42.361666674Z" level=info msg="shim disconnected" id=09beda90046387238646b4b7773221b12edf8a523f5497addf19f389202d6df7 namespace=k8s.io Jan 30 13:46:42.361733 containerd[1578]: time="2025-01-30T13:46:42.361729677Z" level=warning msg="cleaning up after shim disconnected" id=09beda90046387238646b4b7773221b12edf8a523f5497addf19f389202d6df7 namespace=k8s.io Jan 30 13:46:42.361733 containerd[1578]: time="2025-01-30T13:46:42.361738559Z" level=info msg="cleaning up dead shim" namespace=k8s.io Jan 30 13:46:42.642277 containerd[1578]: time="2025-01-30T13:46:42.642163512Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-proxy:v1.30.9\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jan 30 13:46:42.642917 containerd[1578]: time="2025-01-30T13:46:42.642874088Z" level=info msg="stop pulling image registry.k8s.io/kube-proxy:v1.30.9: active requests=0, bytes read=29058337" Jan 30 13:46:42.643978 containerd[1578]: time="2025-01-30T13:46:42.643946460Z" level=info msg="ImageCreate event name:\"sha256:4c369683c359609256b8907f424fc2355f1e7e3eeb7295b1fd8ffc5304f4cede\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jan 30 13:46:42.646112 containerd[1578]: time="2025-01-30T13:46:42.646082375Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-proxy@sha256:d78dc40d97ff862fd8ddb47f80a5ba3feec17bc73e58a60e963885e33faa0083\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jan 30 13:46:42.646709 containerd[1578]: time="2025-01-30T13:46:42.646669436Z" level=info msg="Pulled image \"registry.k8s.io/kube-proxy:v1.30.9\" with image id \"sha256:4c369683c359609256b8907f424fc2355f1e7e3eeb7295b1fd8ffc5304f4cede\", repo tag \"registry.k8s.io/kube-proxy:v1.30.9\", repo digest \"registry.k8s.io/kube-proxy@sha256:d78dc40d97ff862fd8ddb47f80a5ba3feec17bc73e58a60e963885e33faa0083\", size \"29057356\" in 1.667710203s" Jan 30 13:46:42.646709 containerd[1578]: time="2025-01-30T13:46:42.646705188Z" level=info msg="PullImage \"registry.k8s.io/kube-proxy:v1.30.9\" returns image reference \"sha256:4c369683c359609256b8907f424fc2355f1e7e3eeb7295b1fd8ffc5304f4cede\"" Jan 30 13:46:42.648705 containerd[1578]: time="2025-01-30T13:46:42.648680370Z" level=info msg="CreateContainer within sandbox \"0337d03ec4a6bebe387ee3bbdb2960727d22a494a0e6322bbe66321c099cb26a\" for container &ContainerMetadata{Name:kube-proxy,Attempt:0,}" Jan 30 13:46:42.663335 containerd[1578]: time="2025-01-30T13:46:42.663297444Z" level=info msg="CreateContainer within sandbox \"0337d03ec4a6bebe387ee3bbdb2960727d22a494a0e6322bbe66321c099cb26a\" for &ContainerMetadata{Name:kube-proxy,Attempt:0,} returns container id \"0b5ff76e1ada9bbb6214a56f1340d613e236a8787f0f9dfffac89941a760b27c\"" Jan 30 13:46:42.663685 containerd[1578]: time="2025-01-30T13:46:42.663658710Z" level=info msg="StartContainer for \"0b5ff76e1ada9bbb6214a56f1340d613e236a8787f0f9dfffac89941a760b27c\"" Jan 30 13:46:42.718040 containerd[1578]: time="2025-01-30T13:46:42.717999088Z" level=info msg="StartContainer for \"0b5ff76e1ada9bbb6214a56f1340d613e236a8787f0f9dfffac89941a760b27c\" returns successfully" Jan 30 13:46:42.761333 kubelet[1930]: E0130 13:46:42.761272 1930 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Jan 30 13:46:42.992497 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-09beda90046387238646b4b7773221b12edf8a523f5497addf19f389202d6df7-rootfs.mount: Deactivated successfully. 
Jan 30 13:46:42.992685 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount3462088483.mount: Deactivated successfully. Jan 30 13:46:43.170969 kubelet[1930]: E0130 13:46:43.170937 1930 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jan 30 13:46:43.172553 kubelet[1930]: E0130 13:46:43.172528 1930 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jan 30 13:46:43.172591 containerd[1578]: time="2025-01-30T13:46:43.172555122Z" level=info msg="CreateContainer within sandbox \"4d7144513a594fb0486968fdbd39c99288e248ba95c01c933c459a35e14f968a\" for container &ContainerMetadata{Name:mount-bpf-fs,Attempt:0,}" Jan 30 13:46:43.192433 containerd[1578]: time="2025-01-30T13:46:43.192359759Z" level=info msg="CreateContainer within sandbox \"4d7144513a594fb0486968fdbd39c99288e248ba95c01c933c459a35e14f968a\" for &ContainerMetadata{Name:mount-bpf-fs,Attempt:0,} returns container id \"ea64c5cdc5d7be0febbebaf50f5e6fa662416e71c0e0e3f70c81279ecfad7e6e\"" Jan 30 13:46:43.193060 containerd[1578]: time="2025-01-30T13:46:43.193019147Z" level=info msg="StartContainer for \"ea64c5cdc5d7be0febbebaf50f5e6fa662416e71c0e0e3f70c81279ecfad7e6e\"" Jan 30 13:46:43.247298 containerd[1578]: time="2025-01-30T13:46:43.247186161Z" level=info msg="StartContainer for \"ea64c5cdc5d7be0febbebaf50f5e6fa662416e71c0e0e3f70c81279ecfad7e6e\" returns successfully" Jan 30 13:46:43.465134 containerd[1578]: time="2025-01-30T13:46:43.465059518Z" level=info msg="shim disconnected" id=ea64c5cdc5d7be0febbebaf50f5e6fa662416e71c0e0e3f70c81279ecfad7e6e namespace=k8s.io Jan 30 13:46:43.465134 containerd[1578]: time="2025-01-30T13:46:43.465111589Z" level=warning msg="cleaning up after shim disconnected" id=ea64c5cdc5d7be0febbebaf50f5e6fa662416e71c0e0e3f70c81279ecfad7e6e namespace=k8s.io Jan 30 13:46:43.465134 containerd[1578]: time="2025-01-30T13:46:43.465123181Z" level=info msg="cleaning up dead shim" namespace=k8s.io Jan 30 13:46:43.761797 kubelet[1930]: E0130 13:46:43.761745 1930 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Jan 30 13:46:43.991460 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-ea64c5cdc5d7be0febbebaf50f5e6fa662416e71c0e0e3f70c81279ecfad7e6e-rootfs.mount: Deactivated successfully. 
Jan 30 13:46:44.175303 kubelet[1930]: E0130 13:46:44.175177 1930 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jan 30 13:46:44.175303 kubelet[1930]: E0130 13:46:44.175178 1930 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jan 30 13:46:44.176742 containerd[1578]: time="2025-01-30T13:46:44.176706077Z" level=info msg="CreateContainer within sandbox \"4d7144513a594fb0486968fdbd39c99288e248ba95c01c933c459a35e14f968a\" for container &ContainerMetadata{Name:clean-cilium-state,Attempt:0,}" Jan 30 13:46:44.187035 kubelet[1930]: I0130 13:46:44.186915 1930 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/kube-proxy-s67t7" podStartSLOduration=5.37003167 podStartE2EDuration="12.186902137s" podCreationTimestamp="2025-01-30 13:46:32 +0000 UTC" firstStartedPulling="2025-01-30 13:46:35.830496321 +0000 UTC m=+3.378052345" lastFinishedPulling="2025-01-30 13:46:42.647366788 +0000 UTC m=+10.194922812" observedRunningTime="2025-01-30 13:46:43.192948041 +0000 UTC m=+10.740504065" watchObservedRunningTime="2025-01-30 13:46:44.186902137 +0000 UTC m=+11.734458161" Jan 30 13:46:44.192950 containerd[1578]: time="2025-01-30T13:46:44.192914139Z" level=info msg="CreateContainer within sandbox \"4d7144513a594fb0486968fdbd39c99288e248ba95c01c933c459a35e14f968a\" for &ContainerMetadata{Name:clean-cilium-state,Attempt:0,} returns container id \"8367973baf1226003c22fe8fd7a555e7db45d950cf99c4a35e0bc6818511939d\"" Jan 30 13:46:44.193344 containerd[1578]: time="2025-01-30T13:46:44.193317119Z" level=info msg="StartContainer for \"8367973baf1226003c22fe8fd7a555e7db45d950cf99c4a35e0bc6818511939d\"" Jan 30 13:46:44.246085 containerd[1578]: time="2025-01-30T13:46:44.246046379Z" level=info msg="StartContainer for \"8367973baf1226003c22fe8fd7a555e7db45d950cf99c4a35e0bc6818511939d\" returns successfully" Jan 30 13:46:44.267973 containerd[1578]: time="2025-01-30T13:46:44.267910677Z" level=info msg="shim disconnected" id=8367973baf1226003c22fe8fd7a555e7db45d950cf99c4a35e0bc6818511939d namespace=k8s.io Jan 30 13:46:44.267973 containerd[1578]: time="2025-01-30T13:46:44.267969157Z" level=warning msg="cleaning up after shim disconnected" id=8367973baf1226003c22fe8fd7a555e7db45d950cf99c4a35e0bc6818511939d namespace=k8s.io Jan 30 13:46:44.267973 containerd[1578]: time="2025-01-30T13:46:44.267980384Z" level=info msg="cleaning up dead shim" namespace=k8s.io Jan 30 13:46:44.762355 kubelet[1930]: E0130 13:46:44.762305 1930 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Jan 30 13:46:44.991635 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-8367973baf1226003c22fe8fd7a555e7db45d950cf99c4a35e0bc6818511939d-rootfs.mount: Deactivated successfully. 
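The pod_startup_latency_tracker entry for kube-proxy-s67t7 above is internally consistent: podStartE2EDuration is watchObservedRunningTime minus podCreationTimestamp (13:46:44.186902137 − 13:46:32 = 12.186902137s), and podStartSLOduration is that value minus the image-pull window lastFinishedPulling − firstStartedPulling (6.816870467s), giving 5.37003167s, i.e. the SLO figure excludes time spent pulling images. A small Go sketch reproducing the arithmetic from the timestamps in the entry:

    package main

    import (
    	"fmt"
    	"time"
    )

    func main() {
    	parse := func(s string) time.Time {
    		t, err := time.Parse("2006-01-02 15:04:05.999999999 -0700 MST", s)
    		if err != nil {
    			panic(err)
    		}
    		return t
    	}

    	created := parse("2025-01-30 13:46:32 +0000 UTC")
    	firstPull := parse("2025-01-30 13:46:35.830496321 +0000 UTC")
    	lastPull := parse("2025-01-30 13:46:42.647366788 +0000 UTC")
    	observed := parse("2025-01-30 13:46:44.186902137 +0000 UTC")

    	e2e := observed.Sub(created)       // 12.186902137s = podStartE2EDuration
    	pulling := lastPull.Sub(firstPull) // 6.816870467s spent pulling images
    	slo := e2e - pulling               // 5.37003167s = podStartSLOduration

    	fmt.Println(e2e, pulling, slo)
    }

The same relation holds, to within a nanosecond of rounding, for the later cilium-cwvwk, nfs-server-provisioner-0, and test-pod-1 tracker entries.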
Jan 30 13:46:45.178306 kubelet[1930]: E0130 13:46:45.178219 1930 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jan 30 13:46:45.180192 containerd[1578]: time="2025-01-30T13:46:45.180147599Z" level=info msg="CreateContainer within sandbox \"4d7144513a594fb0486968fdbd39c99288e248ba95c01c933c459a35e14f968a\" for container &ContainerMetadata{Name:cilium-agent,Attempt:0,}" Jan 30 13:46:45.195515 containerd[1578]: time="2025-01-30T13:46:45.195468013Z" level=info msg="CreateContainer within sandbox \"4d7144513a594fb0486968fdbd39c99288e248ba95c01c933c459a35e14f968a\" for &ContainerMetadata{Name:cilium-agent,Attempt:0,} returns container id \"67c41ac4c64249ec66351a4b84ef1e271a2ec3287f397351660d47c3ffff534b\"" Jan 30 13:46:45.195911 containerd[1578]: time="2025-01-30T13:46:45.195839861Z" level=info msg="StartContainer for \"67c41ac4c64249ec66351a4b84ef1e271a2ec3287f397351660d47c3ffff534b\"" Jan 30 13:46:45.250938 containerd[1578]: time="2025-01-30T13:46:45.250899430Z" level=info msg="StartContainer for \"67c41ac4c64249ec66351a4b84ef1e271a2ec3287f397351660d47c3ffff534b\" returns successfully" Jan 30 13:46:45.360316 kubelet[1930]: I0130 13:46:45.360285 1930 kubelet_node_status.go:497] "Fast updating node status as it just became ready" Jan 30 13:46:45.680829 kernel: Initializing XFRM netlink socket Jan 30 13:46:45.762898 kubelet[1930]: E0130 13:46:45.762857 1930 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Jan 30 13:46:46.182069 kubelet[1930]: E0130 13:46:46.182040 1930 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jan 30 13:46:46.762982 kubelet[1930]: E0130 13:46:46.762943 1930 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Jan 30 13:46:47.183730 kubelet[1930]: E0130 13:46:47.183594 1930 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jan 30 13:46:47.357487 systemd-networkd[1258]: cilium_host: Link UP Jan 30 13:46:47.357737 systemd-networkd[1258]: cilium_net: Link UP Jan 30 13:46:47.357999 systemd-networkd[1258]: cilium_net: Gained carrier Jan 30 13:46:47.358241 systemd-networkd[1258]: cilium_host: Gained carrier Jan 30 13:46:47.456131 systemd-networkd[1258]: cilium_vxlan: Link UP Jan 30 13:46:47.456143 systemd-networkd[1258]: cilium_vxlan: Gained carrier Jan 30 13:46:47.652844 kernel: NET: Registered PF_ALG protocol family Jan 30 13:46:47.730031 systemd-networkd[1258]: cilium_net: Gained IPv6LL Jan 30 13:46:47.763898 kubelet[1930]: E0130 13:46:47.763841 1930 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Jan 30 13:46:47.797968 systemd-networkd[1258]: cilium_host: Gained IPv6LL Jan 30 13:46:48.185650 kubelet[1930]: E0130 13:46:48.185546 1930 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jan 30 13:46:48.265439 systemd-networkd[1258]: lxc_health: Link UP Jan 30 13:46:48.277017 systemd-networkd[1258]: lxc_health: Gained carrier Jan 30 13:46:48.764926 kubelet[1930]: E0130 13:46:48.764881 1930 
file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Jan 30 13:46:49.081992 kubelet[1930]: I0130 13:46:49.081854 1930 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/cilium-cwvwk" podStartSLOduration=11.931687288 podStartE2EDuration="17.081838035s" podCreationTimestamp="2025-01-30 13:46:32 +0000 UTC" firstStartedPulling="2025-01-30 13:46:35.828636264 +0000 UTC m=+3.376192288" lastFinishedPulling="2025-01-30 13:46:40.97878701 +0000 UTC m=+8.526343035" observedRunningTime="2025-01-30 13:46:46.19493603 +0000 UTC m=+13.742492054" watchObservedRunningTime="2025-01-30 13:46:49.081838035 +0000 UTC m=+16.629394059" Jan 30 13:46:49.132458 kubelet[1930]: I0130 13:46:49.132410 1930 topology_manager.go:215] "Topology Admit Handler" podUID="748be8ae-f75e-451b-9aae-5722ebb55272" podNamespace="default" podName="nginx-deployment-85f456d6dd-d5h9l" Jan 30 13:46:49.157453 kubelet[1930]: I0130 13:46:49.157423 1930 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-gbbwz\" (UniqueName: \"kubernetes.io/projected/748be8ae-f75e-451b-9aae-5722ebb55272-kube-api-access-gbbwz\") pod \"nginx-deployment-85f456d6dd-d5h9l\" (UID: \"748be8ae-f75e-451b-9aae-5722ebb55272\") " pod="default/nginx-deployment-85f456d6dd-d5h9l" Jan 30 13:46:49.186960 kubelet[1930]: E0130 13:46:49.186939 1930 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jan 30 13:46:49.404713 systemd-networkd[1258]: cilium_vxlan: Gained IPv6LL Jan 30 13:46:49.436468 containerd[1578]: time="2025-01-30T13:46:49.436426341Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:nginx-deployment-85f456d6dd-d5h9l,Uid:748be8ae-f75e-451b-9aae-5722ebb55272,Namespace:default,Attempt:0,}" Jan 30 13:46:49.470338 systemd-networkd[1258]: lxcf91635da9158: Link UP Jan 30 13:46:49.483837 kernel: eth0: renamed from tmpbd5ca Jan 30 13:46:49.489221 systemd-networkd[1258]: lxcf91635da9158: Gained carrier Jan 30 13:46:49.529916 systemd-networkd[1258]: lxc_health: Gained IPv6LL Jan 30 13:46:49.765687 kubelet[1930]: E0130 13:46:49.765564 1930 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Jan 30 13:46:50.188906 kubelet[1930]: E0130 13:46:50.188736 1930 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jan 30 13:46:50.766116 kubelet[1930]: E0130 13:46:50.766075 1930 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Jan 30 13:46:51.066037 systemd-networkd[1258]: lxcf91635da9158: Gained IPv6LL Jan 30 13:46:51.189778 kubelet[1930]: E0130 13:46:51.189732 1930 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jan 30 13:46:51.767017 kubelet[1930]: E0130 13:46:51.766964 1930 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Jan 30 13:46:52.633404 containerd[1578]: time="2025-01-30T13:46:52.632704095Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." 
runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Jan 30 13:46:52.633826 containerd[1578]: time="2025-01-30T13:46:52.633457539Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Jan 30 13:46:52.633826 containerd[1578]: time="2025-01-30T13:46:52.633539616Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Jan 30 13:46:52.633826 containerd[1578]: time="2025-01-30T13:46:52.633673676Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Jan 30 13:46:52.664474 systemd-resolved[1475]: Failed to determine the local hostname and LLMNR/mDNS names, ignoring: No such device or address Jan 30 13:46:52.692749 containerd[1578]: time="2025-01-30T13:46:52.692692844Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:nginx-deployment-85f456d6dd-d5h9l,Uid:748be8ae-f75e-451b-9aae-5722ebb55272,Namespace:default,Attempt:0,} returns sandbox id \"bd5ca22cbcbba3f5f9a0082930e78fa566d9fff8b2184e874a4cb864bf3d997e\"" Jan 30 13:46:52.694383 containerd[1578]: time="2025-01-30T13:46:52.694335576Z" level=info msg="PullImage \"ghcr.io/flatcar/nginx:latest\"" Jan 30 13:46:52.754199 kubelet[1930]: E0130 13:46:52.754123 1930 file.go:104] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Jan 30 13:46:52.767536 kubelet[1930]: E0130 13:46:52.767455 1930 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Jan 30 13:46:53.767642 kubelet[1930]: E0130 13:46:53.767599 1930 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Jan 30 13:46:54.768444 kubelet[1930]: E0130 13:46:54.768381 1930 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Jan 30 13:46:55.756272 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount3830781790.mount: Deactivated successfully. 
Jan 30 13:46:55.769013 kubelet[1930]: E0130 13:46:55.768961 1930 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Jan 30 13:46:56.769637 kubelet[1930]: E0130 13:46:56.769596 1930 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Jan 30 13:46:57.260767 containerd[1578]: time="2025-01-30T13:46:57.260726768Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/nginx:latest\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jan 30 13:46:57.261580 containerd[1578]: time="2025-01-30T13:46:57.261539878Z" level=info msg="stop pulling image ghcr.io/flatcar/nginx:latest: active requests=0, bytes read=71015561" Jan 30 13:46:57.262635 containerd[1578]: time="2025-01-30T13:46:57.262605635Z" level=info msg="ImageCreate event name:\"sha256:0dcfd986e814f68db775fba6b61fbaec3761562dc2ae3043d38dbff123e1bb1e\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jan 30 13:46:57.265118 containerd[1578]: time="2025-01-30T13:46:57.265094701Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/nginx@sha256:2ffeb5a7ca68f2017f0bc48251750a6e40fcd3c341b94a22fc7812dcabbb84db\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jan 30 13:46:57.265893 containerd[1578]: time="2025-01-30T13:46:57.265858266Z" level=info msg="Pulled image \"ghcr.io/flatcar/nginx:latest\" with image id \"sha256:0dcfd986e814f68db775fba6b61fbaec3761562dc2ae3043d38dbff123e1bb1e\", repo tag \"ghcr.io/flatcar/nginx:latest\", repo digest \"ghcr.io/flatcar/nginx@sha256:2ffeb5a7ca68f2017f0bc48251750a6e40fcd3c341b94a22fc7812dcabbb84db\", size \"71015439\" in 4.571487112s" Jan 30 13:46:57.265893 containerd[1578]: time="2025-01-30T13:46:57.265885029Z" level=info msg="PullImage \"ghcr.io/flatcar/nginx:latest\" returns image reference \"sha256:0dcfd986e814f68db775fba6b61fbaec3761562dc2ae3043d38dbff123e1bb1e\"" Jan 30 13:46:57.267645 containerd[1578]: time="2025-01-30T13:46:57.267625129Z" level=info msg="CreateContainer within sandbox \"bd5ca22cbcbba3f5f9a0082930e78fa566d9fff8b2184e874a4cb864bf3d997e\" for container &ContainerMetadata{Name:nginx,Attempt:0,}" Jan 30 13:46:57.279288 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount2339633555.mount: Deactivated successfully. 
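The ghcr.io/flatcar/nginx:latest pull above took about 4.57s while reading roughly 71 MB; when the same tag is pulled again later for test-pod-1 it completes in about 410ms with only 61 bytes read, because every blob is already in containerd's content store and only the manifest is re-resolved (hence the ImageUpdate rather than ImageCreate event there). A minimal sketch of such a pull through the containerd Go client; the socket path and k8s.io namespace mirror this host's setup, and the code is illustrative rather than the CRI plugin's actual pull path:

    package main

    import (
    	"context"
    	"log"
    	"time"

    	"github.com/containerd/containerd"
    	"github.com/containerd/containerd/namespaces"
    )

    func main() {
    	client, err := containerd.New("/run/containerd/containerd.sock")
    	if err != nil {
    		log.Fatal(err)
    	}
    	defer client.Close()
    	ctx := namespaces.WithNamespace(context.Background(), "k8s.io")

    	start := time.Now()
    	// WithPullUnpack also unpacks the layers into the default snapshotter,
    	// so a subsequent CreateContainer can use the image immediately.
    	img, err := client.Pull(ctx, "ghcr.io/flatcar/nginx:latest", containerd.WithPullUnpack)
    	if err != nil {
    		log.Fatal(err)
    	}
    	log.Printf("pulled %s (%s) in %s", img.Name(), img.Target().Digest, time.Since(start))
    	// A second Pull of the same tag mostly re-resolves the manifest and
    	// returns quickly, matching the ~410ms repeat pull seen later in the log.
    }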
Jan 30 13:46:57.280147 containerd[1578]: time="2025-01-30T13:46:57.280124947Z" level=info msg="CreateContainer within sandbox \"bd5ca22cbcbba3f5f9a0082930e78fa566d9fff8b2184e874a4cb864bf3d997e\" for &ContainerMetadata{Name:nginx,Attempt:0,} returns container id \"1fd319645c3750369335c1b5d7c4d16fb4be28772c10f7d4f23bd3722555b268\"" Jan 30 13:46:57.280491 containerd[1578]: time="2025-01-30T13:46:57.280472532Z" level=info msg="StartContainer for \"1fd319645c3750369335c1b5d7c4d16fb4be28772c10f7d4f23bd3722555b268\"" Jan 30 13:46:57.329555 containerd[1578]: time="2025-01-30T13:46:57.329472978Z" level=info msg="StartContainer for \"1fd319645c3750369335c1b5d7c4d16fb4be28772c10f7d4f23bd3722555b268\" returns successfully" Jan 30 13:46:57.770300 kubelet[1930]: E0130 13:46:57.770250 1930 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Jan 30 13:46:58.771006 kubelet[1930]: E0130 13:46:58.770964 1930 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Jan 30 13:46:59.771278 kubelet[1930]: E0130 13:46:59.771203 1930 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Jan 30 13:47:00.772154 kubelet[1930]: E0130 13:47:00.772102 1930 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Jan 30 13:47:01.240278 kubelet[1930]: I0130 13:47:01.240210 1930 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="default/nginx-deployment-85f456d6dd-d5h9l" podStartSLOduration=7.667616757 podStartE2EDuration="12.240191547s" podCreationTimestamp="2025-01-30 13:46:49 +0000 UTC" firstStartedPulling="2025-01-30 13:46:52.694126558 +0000 UTC m=+20.241682572" lastFinishedPulling="2025-01-30 13:46:57.266701328 +0000 UTC m=+24.814257362" observedRunningTime="2025-01-30 13:46:58.208346642 +0000 UTC m=+25.755902676" watchObservedRunningTime="2025-01-30 13:47:01.240191547 +0000 UTC m=+28.787747571" Jan 30 13:47:01.240458 kubelet[1930]: I0130 13:47:01.240428 1930 topology_manager.go:215] "Topology Admit Handler" podUID="fd0c5784-a899-4202-bd95-1541c1128ea0" podNamespace="default" podName="nfs-server-provisioner-0" Jan 30 13:47:01.319459 kubelet[1930]: I0130 13:47:01.319414 1930 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"data\" (UniqueName: \"kubernetes.io/empty-dir/fd0c5784-a899-4202-bd95-1541c1128ea0-data\") pod \"nfs-server-provisioner-0\" (UID: \"fd0c5784-a899-4202-bd95-1541c1128ea0\") " pod="default/nfs-server-provisioner-0" Jan 30 13:47:01.319459 kubelet[1930]: I0130 13:47:01.319458 1930 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-lstlf\" (UniqueName: \"kubernetes.io/projected/fd0c5784-a899-4202-bd95-1541c1128ea0-kube-api-access-lstlf\") pod \"nfs-server-provisioner-0\" (UID: \"fd0c5784-a899-4202-bd95-1541c1128ea0\") " pod="default/nfs-server-provisioner-0" Jan 30 13:47:01.544239 containerd[1578]: time="2025-01-30T13:47:01.544197636Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:nfs-server-provisioner-0,Uid:fd0c5784-a899-4202-bd95-1541c1128ea0,Namespace:default,Attempt:0,}" Jan 30 13:47:01.571448 systemd-networkd[1258]: lxc25e7a51c9b08: Link UP Jan 30 13:47:01.577838 kernel: eth0: renamed from tmp4e654 Jan 30 13:47:01.587336 systemd-networkd[1258]: lxc25e7a51c9b08: Gained carrier Jan 30 13:47:01.773230 kubelet[1930]: 
E0130 13:47:01.773161 1930 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Jan 30 13:47:01.974555 containerd[1578]: time="2025-01-30T13:47:01.974142446Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Jan 30 13:47:01.974555 containerd[1578]: time="2025-01-30T13:47:01.974200736Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Jan 30 13:47:01.974555 containerd[1578]: time="2025-01-30T13:47:01.974212947Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Jan 30 13:47:01.974555 containerd[1578]: time="2025-01-30T13:47:01.974307067Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Jan 30 13:47:02.000900 systemd-resolved[1475]: Failed to determine the local hostname and LLMNR/mDNS names, ignoring: No such device or address Jan 30 13:47:02.026091 containerd[1578]: time="2025-01-30T13:47:02.026029949Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:nfs-server-provisioner-0,Uid:fd0c5784-a899-4202-bd95-1541c1128ea0,Namespace:default,Attempt:0,} returns sandbox id \"4e6541215988506c52df6b9348e449ee9e8651ca479faf8a0b6a3ccba8cb40c9\"" Jan 30 13:47:02.027419 containerd[1578]: time="2025-01-30T13:47:02.027396090Z" level=info msg="PullImage \"registry.k8s.io/sig-storage/nfs-provisioner:v4.0.8\"" Jan 30 13:47:02.773920 kubelet[1930]: E0130 13:47:02.773867 1930 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Jan 30 13:47:03.289978 systemd-networkd[1258]: lxc25e7a51c9b08: Gained IPv6LL Jan 30 13:47:03.775060 kubelet[1930]: E0130 13:47:03.774940 1930 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Jan 30 13:47:04.275732 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount2043904344.mount: Deactivated successfully. Jan 30 13:47:04.776094 kubelet[1930]: E0130 13:47:04.776057 1930 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Jan 30 13:47:05.777157 kubelet[1930]: E0130 13:47:05.777095 1930 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Jan 30 13:47:06.003886 update_engine[1563]: I20250130 13:47:06.003829 1563 update_attempter.cc:509] Updating boot flags... 
Jan 30 13:47:06.079844 kernel: BTRFS warning: duplicate device /dev/vda3 devid 1 generation 38 scanned by (udev-worker) (3206) Jan 30 13:47:06.133896 kernel: BTRFS warning: duplicate device /dev/vda3 devid 1 generation 38 scanned by (udev-worker) (3205) Jan 30 13:47:06.220845 kernel: BTRFS warning: duplicate device /dev/vda3 devid 1 generation 38 scanned by (udev-worker) (3205) Jan 30 13:47:06.778208 kubelet[1930]: E0130 13:47:06.778143 1930 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Jan 30 13:47:07.111996 containerd[1578]: time="2025-01-30T13:47:07.111938925Z" level=info msg="ImageCreate event name:\"registry.k8s.io/sig-storage/nfs-provisioner:v4.0.8\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jan 30 13:47:07.113022 containerd[1578]: time="2025-01-30T13:47:07.112616243Z" level=info msg="stop pulling image registry.k8s.io/sig-storage/nfs-provisioner:v4.0.8: active requests=0, bytes read=91039406" Jan 30 13:47:07.113784 containerd[1578]: time="2025-01-30T13:47:07.113746004Z" level=info msg="ImageCreate event name:\"sha256:fd0b16f70b66b72bcb2f91d556fa33eba02729c44ffc5f2c16130e7f9fbed3c4\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jan 30 13:47:07.116378 containerd[1578]: time="2025-01-30T13:47:07.116317457Z" level=info msg="ImageCreate event name:\"registry.k8s.io/sig-storage/nfs-provisioner@sha256:c825f3d5e28bde099bd7a3daace28772d412c9157ad47fa752a9ad0baafc118d\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jan 30 13:47:07.117417 containerd[1578]: time="2025-01-30T13:47:07.117382053Z" level=info msg="Pulled image \"registry.k8s.io/sig-storage/nfs-provisioner:v4.0.8\" with image id \"sha256:fd0b16f70b66b72bcb2f91d556fa33eba02729c44ffc5f2c16130e7f9fbed3c4\", repo tag \"registry.k8s.io/sig-storage/nfs-provisioner:v4.0.8\", repo digest \"registry.k8s.io/sig-storage/nfs-provisioner@sha256:c825f3d5e28bde099bd7a3daace28772d412c9157ad47fa752a9ad0baafc118d\", size \"91036984\" in 5.089954576s" Jan 30 13:47:07.117461 containerd[1578]: time="2025-01-30T13:47:07.117417397Z" level=info msg="PullImage \"registry.k8s.io/sig-storage/nfs-provisioner:v4.0.8\" returns image reference \"sha256:fd0b16f70b66b72bcb2f91d556fa33eba02729c44ffc5f2c16130e7f9fbed3c4\"" Jan 30 13:47:07.119687 containerd[1578]: time="2025-01-30T13:47:07.119662055Z" level=info msg="CreateContainer within sandbox \"4e6541215988506c52df6b9348e449ee9e8651ca479faf8a0b6a3ccba8cb40c9\" for container &ContainerMetadata{Name:nfs-server-provisioner,Attempt:0,}" Jan 30 13:47:07.133056 containerd[1578]: time="2025-01-30T13:47:07.133009021Z" level=info msg="CreateContainer within sandbox \"4e6541215988506c52df6b9348e449ee9e8651ca479faf8a0b6a3ccba8cb40c9\" for &ContainerMetadata{Name:nfs-server-provisioner,Attempt:0,} returns container id \"1b509a9406382e98ce3276392e69ad40d5c87fe717235d006f54d81537f90693\"" Jan 30 13:47:07.133430 containerd[1578]: time="2025-01-30T13:47:07.133402955Z" level=info msg="StartContainer for \"1b509a9406382e98ce3276392e69ad40d5c87fe717235d006f54d81537f90693\"" Jan 30 13:47:07.227582 containerd[1578]: time="2025-01-30T13:47:07.227534165Z" level=info msg="StartContainer for \"1b509a9406382e98ce3276392e69ad40d5c87fe717235d006f54d81537f90693\" returns successfully" Jan 30 13:47:07.779361 kubelet[1930]: E0130 13:47:07.779300 1930 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Jan 30 13:47:08.235576 kubelet[1930]: I0130 13:47:08.235517 1930 
pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="default/nfs-server-provisioner-0" podStartSLOduration=2.144437912 podStartE2EDuration="7.23550225s" podCreationTimestamp="2025-01-30 13:47:01 +0000 UTC" firstStartedPulling="2025-01-30 13:47:02.027154651 +0000 UTC m=+29.574710676" lastFinishedPulling="2025-01-30 13:47:07.11821899 +0000 UTC m=+34.665775014" observedRunningTime="2025-01-30 13:47:08.235293299 +0000 UTC m=+35.782849323" watchObservedRunningTime="2025-01-30 13:47:08.23550225 +0000 UTC m=+35.783058274" Jan 30 13:47:08.779900 kubelet[1930]: E0130 13:47:08.779836 1930 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Jan 30 13:47:09.780651 kubelet[1930]: E0130 13:47:09.780601 1930 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Jan 30 13:47:10.781699 kubelet[1930]: E0130 13:47:10.781649 1930 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Jan 30 13:47:11.782778 kubelet[1930]: E0130 13:47:11.782747 1930 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Jan 30 13:47:12.754655 kubelet[1930]: E0130 13:47:12.754602 1930 file.go:104] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Jan 30 13:47:12.783191 kubelet[1930]: E0130 13:47:12.783151 1930 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Jan 30 13:47:13.783942 kubelet[1930]: E0130 13:47:13.783885 1930 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Jan 30 13:47:14.784471 kubelet[1930]: E0130 13:47:14.784416 1930 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Jan 30 13:47:15.784644 kubelet[1930]: E0130 13:47:15.784593 1930 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Jan 30 13:47:16.631370 kubelet[1930]: I0130 13:47:16.631320 1930 topology_manager.go:215] "Topology Admit Handler" podUID="5572bb08-7e9d-404c-a9f1-5897ff491d16" podNamespace="default" podName="test-pod-1" Jan 30 13:47:16.784835 kubelet[1930]: E0130 13:47:16.784753 1930 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Jan 30 13:47:16.800986 kubelet[1930]: I0130 13:47:16.800951 1930 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-dsw2b\" (UniqueName: \"kubernetes.io/projected/5572bb08-7e9d-404c-a9f1-5897ff491d16-kube-api-access-dsw2b\") pod \"test-pod-1\" (UID: \"5572bb08-7e9d-404c-a9f1-5897ff491d16\") " pod="default/test-pod-1" Jan 30 13:47:16.800986 kubelet[1930]: I0130 13:47:16.800983 1930 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"pvc-05a6c831-d134-4d5f-8085-568b66793a26\" (UniqueName: \"kubernetes.io/nfs/5572bb08-7e9d-404c-a9f1-5897ff491d16-pvc-05a6c831-d134-4d5f-8085-568b66793a26\") pod \"test-pod-1\" (UID: \"5572bb08-7e9d-404c-a9f1-5897ff491d16\") " pod="default/test-pod-1" Jan 30 13:47:16.922833 kernel: FS-Cache: Loaded Jan 30 13:47:16.987870 kernel: RPC: Registered named UNIX socket transport module. Jan 30 13:47:16.987949 kernel: RPC: Registered udp transport module. 
Jan 30 13:47:16.987969 kernel: RPC: Registered tcp transport module. Jan 30 13:47:16.989258 kernel: RPC: Registered tcp-with-tls transport module. Jan 30 13:47:16.989297 kernel: RPC: Registered tcp NFSv4.1 backchannel transport module. Jan 30 13:47:17.262923 kernel: NFS: Registering the id_resolver key type Jan 30 13:47:17.263085 kernel: Key type id_resolver registered Jan 30 13:47:17.263113 kernel: Key type id_legacy registered Jan 30 13:47:17.287432 nfsidmap[3321]: nss_getpwnam: name 'root@nfs-server-provisioner.default.svc.cluster.local' does not map into domain 'localdomain' Jan 30 13:47:17.291681 nfsidmap[3324]: nss_name_to_gid: name 'root@nfs-server-provisioner.default.svc.cluster.local' does not map into domain 'localdomain' Jan 30 13:47:17.534894 containerd[1578]: time="2025-01-30T13:47:17.534854930Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:test-pod-1,Uid:5572bb08-7e9d-404c-a9f1-5897ff491d16,Namespace:default,Attempt:0,}" Jan 30 13:47:17.559174 systemd-networkd[1258]: lxc48452c4a3481: Link UP Jan 30 13:47:17.568855 kernel: eth0: renamed from tmpa00d3 Jan 30 13:47:17.580521 systemd-networkd[1258]: lxc48452c4a3481: Gained carrier Jan 30 13:47:17.784934 kubelet[1930]: E0130 13:47:17.784880 1930 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Jan 30 13:47:17.808619 containerd[1578]: time="2025-01-30T13:47:17.808532661Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Jan 30 13:47:17.808619 containerd[1578]: time="2025-01-30T13:47:17.808580947Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Jan 30 13:47:17.808619 containerd[1578]: time="2025-01-30T13:47:17.808591029Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Jan 30 13:47:17.809475 containerd[1578]: time="2025-01-30T13:47:17.808680435Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." 
runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Jan 30 13:47:17.837665 systemd-resolved[1475]: Failed to determine the local hostname and LLMNR/mDNS names, ignoring: No such device or address Jan 30 13:47:17.862086 containerd[1578]: time="2025-01-30T13:47:17.862052740Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:test-pod-1,Uid:5572bb08-7e9d-404c-a9f1-5897ff491d16,Namespace:default,Attempt:0,} returns sandbox id \"a00d318b42a7239601b194363598b72a8efd9669820ab5fdc643234e5cd598de\"" Jan 30 13:47:17.863272 containerd[1578]: time="2025-01-30T13:47:17.863250847Z" level=info msg="PullImage \"ghcr.io/flatcar/nginx:latest\"" Jan 30 13:47:18.270104 containerd[1578]: time="2025-01-30T13:47:18.269984987Z" level=info msg="ImageUpdate event name:\"ghcr.io/flatcar/nginx:latest\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jan 30 13:47:18.270871 containerd[1578]: time="2025-01-30T13:47:18.270829438Z" level=info msg="stop pulling image ghcr.io/flatcar/nginx:latest: active requests=0, bytes read=61" Jan 30 13:47:18.273157 containerd[1578]: time="2025-01-30T13:47:18.273131255Z" level=info msg="Pulled image \"ghcr.io/flatcar/nginx:latest\" with image id \"sha256:0dcfd986e814f68db775fba6b61fbaec3761562dc2ae3043d38dbff123e1bb1e\", repo tag \"ghcr.io/flatcar/nginx:latest\", repo digest \"ghcr.io/flatcar/nginx@sha256:2ffeb5a7ca68f2017f0bc48251750a6e40fcd3c341b94a22fc7812dcabbb84db\", size \"71015439\" in 409.853659ms" Jan 30 13:47:18.273206 containerd[1578]: time="2025-01-30T13:47:18.273155759Z" level=info msg="PullImage \"ghcr.io/flatcar/nginx:latest\" returns image reference \"sha256:0dcfd986e814f68db775fba6b61fbaec3761562dc2ae3043d38dbff123e1bb1e\"" Jan 30 13:47:18.274867 containerd[1578]: time="2025-01-30T13:47:18.274829417Z" level=info msg="CreateContainer within sandbox \"a00d318b42a7239601b194363598b72a8efd9669820ab5fdc643234e5cd598de\" for container &ContainerMetadata{Name:test,Attempt:0,}" Jan 30 13:47:18.288103 containerd[1578]: time="2025-01-30T13:47:18.288054384Z" level=info msg="CreateContainer within sandbox \"a00d318b42a7239601b194363598b72a8efd9669820ab5fdc643234e5cd598de\" for &ContainerMetadata{Name:test,Attempt:0,} returns container id \"6cfc8ae6af644f78e7786e619c8a2aa2c69269e55c35c24890b528b7a1d7de0b\"" Jan 30 13:47:18.288593 containerd[1578]: time="2025-01-30T13:47:18.288562962Z" level=info msg="StartContainer for \"6cfc8ae6af644f78e7786e619c8a2aa2c69269e55c35c24890b528b7a1d7de0b\"" Jan 30 13:47:18.343912 containerd[1578]: time="2025-01-30T13:47:18.343860559Z" level=info msg="StartContainer for \"6cfc8ae6af644f78e7786e619c8a2aa2c69269e55c35c24890b528b7a1d7de0b\" returns successfully" Jan 30 13:47:18.785997 kubelet[1930]: E0130 13:47:18.785914 1930 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Jan 30 13:47:18.906008 systemd-networkd[1258]: lxc48452c4a3481: Gained IPv6LL Jan 30 13:47:19.256896 kubelet[1930]: I0130 13:47:19.256761 1930 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="default/test-pod-1" podStartSLOduration=17.846062152000002 podStartE2EDuration="18.256743598s" podCreationTimestamp="2025-01-30 13:47:01 +0000 UTC" firstStartedPulling="2025-01-30 13:47:17.86303287 +0000 UTC m=+45.410588894" lastFinishedPulling="2025-01-30 13:47:18.273714316 +0000 UTC m=+45.821270340" observedRunningTime="2025-01-30 13:47:19.256641998 +0000 UTC m=+46.804198022" watchObservedRunningTime="2025-01-30 13:47:19.256743598 +0000 UTC m=+46.804299632" Jan 30 13:47:19.787096 
kubelet[1930]: E0130 13:47:19.787039 1930 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Jan 30 13:47:20.787535 kubelet[1930]: E0130 13:47:20.787502 1930 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Jan 30 13:47:21.788627 kubelet[1930]: E0130 13:47:21.788572 1930 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Jan 30 13:47:22.789414 kubelet[1930]: E0130 13:47:22.789366 1930 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Jan 30 13:47:23.787488 systemd[1]: run-containerd-runc-k8s.io-67c41ac4c64249ec66351a4b84ef1e271a2ec3287f397351660d47c3ffff534b-runc.XG5Uhn.mount: Deactivated successfully. Jan 30 13:47:23.789558 kubelet[1930]: E0130 13:47:23.789524 1930 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Jan 30 13:47:23.801920 containerd[1578]: time="2025-01-30T13:47:23.801885230Z" level=error msg="failed to reload cni configuration after receiving fs change event(REMOVE \"/etc/cni/net.d/05-cilium.conf\")" error="cni config load failed: no network config found in /etc/cni/net.d: cni plugin not initialized: failed to load cni config" Jan 30 13:47:23.808936 containerd[1578]: time="2025-01-30T13:47:23.808914509Z" level=info msg="StopContainer for \"67c41ac4c64249ec66351a4b84ef1e271a2ec3287f397351660d47c3ffff534b\" with timeout 2 (s)" Jan 30 13:47:23.809173 containerd[1578]: time="2025-01-30T13:47:23.809139207Z" level=info msg="Stop container \"67c41ac4c64249ec66351a4b84ef1e271a2ec3287f397351660d47c3ffff534b\" with signal terminated" Jan 30 13:47:23.815057 systemd-networkd[1258]: lxc_health: Link DOWN Jan 30 13:47:23.815069 systemd-networkd[1258]: lxc_health: Lost carrier Jan 30 13:47:23.862754 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-67c41ac4c64249ec66351a4b84ef1e271a2ec3287f397351660d47c3ffff534b-rootfs.mount: Deactivated successfully. 
Jan 30 13:47:24.005430 containerd[1578]: time="2025-01-30T13:47:24.005354931Z" level=info msg="shim disconnected" id=67c41ac4c64249ec66351a4b84ef1e271a2ec3287f397351660d47c3ffff534b namespace=k8s.io Jan 30 13:47:24.005430 containerd[1578]: time="2025-01-30T13:47:24.005412563Z" level=warning msg="cleaning up after shim disconnected" id=67c41ac4c64249ec66351a4b84ef1e271a2ec3287f397351660d47c3ffff534b namespace=k8s.io Jan 30 13:47:24.005430 containerd[1578]: time="2025-01-30T13:47:24.005420840Z" level=info msg="cleaning up dead shim" namespace=k8s.io Jan 30 13:47:24.023570 containerd[1578]: time="2025-01-30T13:47:24.023529893Z" level=info msg="StopContainer for \"67c41ac4c64249ec66351a4b84ef1e271a2ec3287f397351660d47c3ffff534b\" returns successfully" Jan 30 13:47:24.024205 containerd[1578]: time="2025-01-30T13:47:24.024168598Z" level=info msg="StopPodSandbox for \"4d7144513a594fb0486968fdbd39c99288e248ba95c01c933c459a35e14f968a\"" Jan 30 13:47:24.024261 containerd[1578]: time="2025-01-30T13:47:24.024212341Z" level=info msg="Container to stop \"ea64c5cdc5d7be0febbebaf50f5e6fa662416e71c0e0e3f70c81279ecfad7e6e\" must be in running or unknown state, current state \"CONTAINER_EXITED\"" Jan 30 13:47:24.024261 containerd[1578]: time="2025-01-30T13:47:24.024224757Z" level=info msg="Container to stop \"09beda90046387238646b4b7773221b12edf8a523f5497addf19f389202d6df7\" must be in running or unknown state, current state \"CONTAINER_EXITED\"" Jan 30 13:47:24.024261 containerd[1578]: time="2025-01-30T13:47:24.024234016Z" level=info msg="Container to stop \"8367973baf1226003c22fe8fd7a555e7db45d950cf99c4a35e0bc6818511939d\" must be in running or unknown state, current state \"CONTAINER_EXITED\"" Jan 30 13:47:24.024261 containerd[1578]: time="2025-01-30T13:47:24.024250782Z" level=info msg="Container to stop \"67c41ac4c64249ec66351a4b84ef1e271a2ec3287f397351660d47c3ffff534b\" must be in running or unknown state, current state \"CONTAINER_EXITED\"" Jan 30 13:47:24.024261 containerd[1578]: time="2025-01-30T13:47:24.024260021Z" level=info msg="Container to stop \"7564126c361e084c12c0033f262c7b6f5959e21476c0d90a189a869555d1715b\" must be in running or unknown state, current state \"CONTAINER_EXITED\"" Jan 30 13:47:24.026352 systemd[1]: run-containerd-io.containerd.grpc.v1.cri-sandboxes-4d7144513a594fb0486968fdbd39c99288e248ba95c01c933c459a35e14f968a-shm.mount: Deactivated successfully. Jan 30 13:47:24.046052 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-4d7144513a594fb0486968fdbd39c99288e248ba95c01c933c459a35e14f968a-rootfs.mount: Deactivated successfully. 
Jan 30 13:47:24.050304 containerd[1578]: time="2025-01-30T13:47:24.050235680Z" level=info msg="shim disconnected" id=4d7144513a594fb0486968fdbd39c99288e248ba95c01c933c459a35e14f968a namespace=k8s.io Jan 30 13:47:24.050304 containerd[1578]: time="2025-01-30T13:47:24.050296499Z" level=warning msg="cleaning up after shim disconnected" id=4d7144513a594fb0486968fdbd39c99288e248ba95c01c933c459a35e14f968a namespace=k8s.io Jan 30 13:47:24.050304 containerd[1578]: time="2025-01-30T13:47:24.050307293Z" level=info msg="cleaning up dead shim" namespace=k8s.io Jan 30 13:47:24.061690 containerd[1578]: time="2025-01-30T13:47:24.061649504Z" level=warning msg="cleanup warnings time=\"2025-01-30T13:47:24Z\" level=warning msg=\"failed to remove runc container\" error=\"runc did not terminate successfully: exit status 255: \" runtime=io.containerd.runc.v2\n" namespace=k8s.io Jan 30 13:47:24.062895 containerd[1578]: time="2025-01-30T13:47:24.062850209Z" level=info msg="TearDown network for sandbox \"4d7144513a594fb0486968fdbd39c99288e248ba95c01c933c459a35e14f968a\" successfully" Jan 30 13:47:24.062895 containerd[1578]: time="2025-01-30T13:47:24.062888009Z" level=info msg="StopPodSandbox for \"4d7144513a594fb0486968fdbd39c99288e248ba95c01c933c459a35e14f968a\" returns successfully" Jan 30 13:47:24.238165 kubelet[1930]: I0130 13:47:24.238121 1930 reconciler_common.go:161] "operationExecutor.UnmountVolume started for volume \"clustermesh-secrets\" (UniqueName: \"kubernetes.io/secret/de663b58-e8ae-403f-a76e-8ec55901f8bd-clustermesh-secrets\") pod \"de663b58-e8ae-403f-a76e-8ec55901f8bd\" (UID: \"de663b58-e8ae-403f-a76e-8ec55901f8bd\") " Jan 30 13:47:24.238165 kubelet[1930]: I0130 13:47:24.238176 1930 reconciler_common.go:161] "operationExecutor.UnmountVolume started for volume \"xtables-lock\" (UniqueName: \"kubernetes.io/host-path/de663b58-e8ae-403f-a76e-8ec55901f8bd-xtables-lock\") pod \"de663b58-e8ae-403f-a76e-8ec55901f8bd\" (UID: \"de663b58-e8ae-403f-a76e-8ec55901f8bd\") " Jan 30 13:47:24.238374 kubelet[1930]: I0130 13:47:24.238197 1930 reconciler_common.go:161] "operationExecutor.UnmountVolume started for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/de663b58-e8ae-403f-a76e-8ec55901f8bd-lib-modules\") pod \"de663b58-e8ae-403f-a76e-8ec55901f8bd\" (UID: \"de663b58-e8ae-403f-a76e-8ec55901f8bd\") " Jan 30 13:47:24.238374 kubelet[1930]: I0130 13:47:24.238212 1930 reconciler_common.go:161] "operationExecutor.UnmountVolume started for volume \"kube-api-access-fh458\" (UniqueName: \"kubernetes.io/projected/de663b58-e8ae-403f-a76e-8ec55901f8bd-kube-api-access-fh458\") pod \"de663b58-e8ae-403f-a76e-8ec55901f8bd\" (UID: \"de663b58-e8ae-403f-a76e-8ec55901f8bd\") " Jan 30 13:47:24.238374 kubelet[1930]: I0130 13:47:24.238228 1930 reconciler_common.go:161] "operationExecutor.UnmountVolume started for volume \"cilium-cgroup\" (UniqueName: \"kubernetes.io/host-path/de663b58-e8ae-403f-a76e-8ec55901f8bd-cilium-cgroup\") pod \"de663b58-e8ae-403f-a76e-8ec55901f8bd\" (UID: \"de663b58-e8ae-403f-a76e-8ec55901f8bd\") " Jan 30 13:47:24.238374 kubelet[1930]: I0130 13:47:24.238241 1930 reconciler_common.go:161] "operationExecutor.UnmountVolume started for volume \"cni-path\" (UniqueName: \"kubernetes.io/host-path/de663b58-e8ae-403f-a76e-8ec55901f8bd-cni-path\") pod \"de663b58-e8ae-403f-a76e-8ec55901f8bd\" (UID: \"de663b58-e8ae-403f-a76e-8ec55901f8bd\") " Jan 30 13:47:24.238374 kubelet[1930]: I0130 13:47:24.238254 1930 reconciler_common.go:161] "operationExecutor.UnmountVolume started for volume \"cilium-run\" (UniqueName: 
\"kubernetes.io/host-path/de663b58-e8ae-403f-a76e-8ec55901f8bd-cilium-run\") pod \"de663b58-e8ae-403f-a76e-8ec55901f8bd\" (UID: \"de663b58-e8ae-403f-a76e-8ec55901f8bd\") " Jan 30 13:47:24.238374 kubelet[1930]: I0130 13:47:24.238268 1930 reconciler_common.go:161] "operationExecutor.UnmountVolume started for volume \"cilium-config-path\" (UniqueName: \"kubernetes.io/configmap/de663b58-e8ae-403f-a76e-8ec55901f8bd-cilium-config-path\") pod \"de663b58-e8ae-403f-a76e-8ec55901f8bd\" (UID: \"de663b58-e8ae-403f-a76e-8ec55901f8bd\") " Jan 30 13:47:24.238514 kubelet[1930]: I0130 13:47:24.238281 1930 reconciler_common.go:161] "operationExecutor.UnmountVolume started for volume \"bpf-maps\" (UniqueName: \"kubernetes.io/host-path/de663b58-e8ae-403f-a76e-8ec55901f8bd-bpf-maps\") pod \"de663b58-e8ae-403f-a76e-8ec55901f8bd\" (UID: \"de663b58-e8ae-403f-a76e-8ec55901f8bd\") " Jan 30 13:47:24.238514 kubelet[1930]: I0130 13:47:24.238284 1930 operation_generator.go:887] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/de663b58-e8ae-403f-a76e-8ec55901f8bd-lib-modules" (OuterVolumeSpecName: "lib-modules") pod "de663b58-e8ae-403f-a76e-8ec55901f8bd" (UID: "de663b58-e8ae-403f-a76e-8ec55901f8bd"). InnerVolumeSpecName "lib-modules". PluginName "kubernetes.io/host-path", VolumeGidValue "" Jan 30 13:47:24.238514 kubelet[1930]: I0130 13:47:24.238303 1930 reconciler_common.go:161] "operationExecutor.UnmountVolume started for volume \"host-proc-sys-kernel\" (UniqueName: \"kubernetes.io/host-path/de663b58-e8ae-403f-a76e-8ec55901f8bd-host-proc-sys-kernel\") pod \"de663b58-e8ae-403f-a76e-8ec55901f8bd\" (UID: \"de663b58-e8ae-403f-a76e-8ec55901f8bd\") " Jan 30 13:47:24.238514 kubelet[1930]: I0130 13:47:24.238320 1930 reconciler_common.go:161] "operationExecutor.UnmountVolume started for volume \"hubble-tls\" (UniqueName: \"kubernetes.io/projected/de663b58-e8ae-403f-a76e-8ec55901f8bd-hubble-tls\") pod \"de663b58-e8ae-403f-a76e-8ec55901f8bd\" (UID: \"de663b58-e8ae-403f-a76e-8ec55901f8bd\") " Jan 30 13:47:24.238514 kubelet[1930]: I0130 13:47:24.238334 1930 reconciler_common.go:161] "operationExecutor.UnmountVolume started for volume \"host-proc-sys-net\" (UniqueName: \"kubernetes.io/host-path/de663b58-e8ae-403f-a76e-8ec55901f8bd-host-proc-sys-net\") pod \"de663b58-e8ae-403f-a76e-8ec55901f8bd\" (UID: \"de663b58-e8ae-403f-a76e-8ec55901f8bd\") " Jan 30 13:47:24.238514 kubelet[1930]: I0130 13:47:24.238339 1930 operation_generator.go:887] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/de663b58-e8ae-403f-a76e-8ec55901f8bd-xtables-lock" (OuterVolumeSpecName: "xtables-lock") pod "de663b58-e8ae-403f-a76e-8ec55901f8bd" (UID: "de663b58-e8ae-403f-a76e-8ec55901f8bd"). InnerVolumeSpecName "xtables-lock". 
PluginName "kubernetes.io/host-path", VolumeGidValue "" Jan 30 13:47:24.238713 kubelet[1930]: I0130 13:47:24.238347 1930 reconciler_common.go:161] "operationExecutor.UnmountVolume started for volume \"etc-cni-netd\" (UniqueName: \"kubernetes.io/host-path/de663b58-e8ae-403f-a76e-8ec55901f8bd-etc-cni-netd\") pod \"de663b58-e8ae-403f-a76e-8ec55901f8bd\" (UID: \"de663b58-e8ae-403f-a76e-8ec55901f8bd\") " Jan 30 13:47:24.238713 kubelet[1930]: I0130 13:47:24.238359 1930 reconciler_common.go:161] "operationExecutor.UnmountVolume started for volume \"hostproc\" (UniqueName: \"kubernetes.io/host-path/de663b58-e8ae-403f-a76e-8ec55901f8bd-hostproc\") pod \"de663b58-e8ae-403f-a76e-8ec55901f8bd\" (UID: \"de663b58-e8ae-403f-a76e-8ec55901f8bd\") " Jan 30 13:47:24.238713 kubelet[1930]: I0130 13:47:24.238381 1930 reconciler_common.go:289] "Volume detached for volume \"xtables-lock\" (UniqueName: \"kubernetes.io/host-path/de663b58-e8ae-403f-a76e-8ec55901f8bd-xtables-lock\") on node \"10.0.0.114\" DevicePath \"\"" Jan 30 13:47:24.238713 kubelet[1930]: I0130 13:47:24.238389 1930 reconciler_common.go:289] "Volume detached for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/de663b58-e8ae-403f-a76e-8ec55901f8bd-lib-modules\") on node \"10.0.0.114\" DevicePath \"\"" Jan 30 13:47:24.238713 kubelet[1930]: I0130 13:47:24.238413 1930 operation_generator.go:887] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/de663b58-e8ae-403f-a76e-8ec55901f8bd-hostproc" (OuterVolumeSpecName: "hostproc") pod "de663b58-e8ae-403f-a76e-8ec55901f8bd" (UID: "de663b58-e8ae-403f-a76e-8ec55901f8bd"). InnerVolumeSpecName "hostproc". PluginName "kubernetes.io/host-path", VolumeGidValue "" Jan 30 13:47:24.238713 kubelet[1930]: I0130 13:47:24.238435 1930 operation_generator.go:887] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/de663b58-e8ae-403f-a76e-8ec55901f8bd-bpf-maps" (OuterVolumeSpecName: "bpf-maps") pod "de663b58-e8ae-403f-a76e-8ec55901f8bd" (UID: "de663b58-e8ae-403f-a76e-8ec55901f8bd"). InnerVolumeSpecName "bpf-maps". PluginName "kubernetes.io/host-path", VolumeGidValue "" Jan 30 13:47:24.238879 kubelet[1930]: I0130 13:47:24.238451 1930 operation_generator.go:887] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/de663b58-e8ae-403f-a76e-8ec55901f8bd-host-proc-sys-kernel" (OuterVolumeSpecName: "host-proc-sys-kernel") pod "de663b58-e8ae-403f-a76e-8ec55901f8bd" (UID: "de663b58-e8ae-403f-a76e-8ec55901f8bd"). InnerVolumeSpecName "host-proc-sys-kernel". PluginName "kubernetes.io/host-path", VolumeGidValue "" Jan 30 13:47:24.238879 kubelet[1930]: I0130 13:47:24.238746 1930 operation_generator.go:887] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/de663b58-e8ae-403f-a76e-8ec55901f8bd-cilium-cgroup" (OuterVolumeSpecName: "cilium-cgroup") pod "de663b58-e8ae-403f-a76e-8ec55901f8bd" (UID: "de663b58-e8ae-403f-a76e-8ec55901f8bd"). InnerVolumeSpecName "cilium-cgroup". PluginName "kubernetes.io/host-path", VolumeGidValue "" Jan 30 13:47:24.240530 kubelet[1930]: I0130 13:47:24.240510 1930 operation_generator.go:887] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/de663b58-e8ae-403f-a76e-8ec55901f8bd-cni-path" (OuterVolumeSpecName: "cni-path") pod "de663b58-e8ae-403f-a76e-8ec55901f8bd" (UID: "de663b58-e8ae-403f-a76e-8ec55901f8bd"). InnerVolumeSpecName "cni-path". 
PluginName "kubernetes.io/host-path", VolumeGidValue "" Jan 30 13:47:24.240577 kubelet[1930]: I0130 13:47:24.240550 1930 operation_generator.go:887] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/de663b58-e8ae-403f-a76e-8ec55901f8bd-host-proc-sys-net" (OuterVolumeSpecName: "host-proc-sys-net") pod "de663b58-e8ae-403f-a76e-8ec55901f8bd" (UID: "de663b58-e8ae-403f-a76e-8ec55901f8bd"). InnerVolumeSpecName "host-proc-sys-net". PluginName "kubernetes.io/host-path", VolumeGidValue "" Jan 30 13:47:24.240577 kubelet[1930]: I0130 13:47:24.240569 1930 operation_generator.go:887] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/de663b58-e8ae-403f-a76e-8ec55901f8bd-etc-cni-netd" (OuterVolumeSpecName: "etc-cni-netd") pod "de663b58-e8ae-403f-a76e-8ec55901f8bd" (UID: "de663b58-e8ae-403f-a76e-8ec55901f8bd"). InnerVolumeSpecName "etc-cni-netd". PluginName "kubernetes.io/host-path", VolumeGidValue "" Jan 30 13:47:24.240630 kubelet[1930]: I0130 13:47:24.240593 1930 operation_generator.go:887] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/de663b58-e8ae-403f-a76e-8ec55901f8bd-cilium-run" (OuterVolumeSpecName: "cilium-run") pod "de663b58-e8ae-403f-a76e-8ec55901f8bd" (UID: "de663b58-e8ae-403f-a76e-8ec55901f8bd"). InnerVolumeSpecName "cilium-run". PluginName "kubernetes.io/host-path", VolumeGidValue "" Jan 30 13:47:24.241284 kubelet[1930]: I0130 13:47:24.241246 1930 operation_generator.go:887] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/de663b58-e8ae-403f-a76e-8ec55901f8bd-clustermesh-secrets" (OuterVolumeSpecName: "clustermesh-secrets") pod "de663b58-e8ae-403f-a76e-8ec55901f8bd" (UID: "de663b58-e8ae-403f-a76e-8ec55901f8bd"). InnerVolumeSpecName "clustermesh-secrets". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 30 13:47:24.241650 kubelet[1930]: I0130 13:47:24.241622 1930 operation_generator.go:887] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/de663b58-e8ae-403f-a76e-8ec55901f8bd-cilium-config-path" (OuterVolumeSpecName: "cilium-config-path") pod "de663b58-e8ae-403f-a76e-8ec55901f8bd" (UID: "de663b58-e8ae-403f-a76e-8ec55901f8bd"). InnerVolumeSpecName "cilium-config-path". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 30 13:47:24.241893 kubelet[1930]: I0130 13:47:24.241873 1930 operation_generator.go:887] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/de663b58-e8ae-403f-a76e-8ec55901f8bd-hubble-tls" (OuterVolumeSpecName: "hubble-tls") pod "de663b58-e8ae-403f-a76e-8ec55901f8bd" (UID: "de663b58-e8ae-403f-a76e-8ec55901f8bd"). InnerVolumeSpecName "hubble-tls". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 30 13:47:24.241953 kubelet[1930]: I0130 13:47:24.241907 1930 operation_generator.go:887] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/de663b58-e8ae-403f-a76e-8ec55901f8bd-kube-api-access-fh458" (OuterVolumeSpecName: "kube-api-access-fh458") pod "de663b58-e8ae-403f-a76e-8ec55901f8bd" (UID: "de663b58-e8ae-403f-a76e-8ec55901f8bd"). InnerVolumeSpecName "kube-api-access-fh458". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 30 13:47:24.257749 kubelet[1930]: I0130 13:47:24.257720 1930 scope.go:117] "RemoveContainer" containerID="67c41ac4c64249ec66351a4b84ef1e271a2ec3287f397351660d47c3ffff534b" Jan 30 13:47:24.259057 containerd[1578]: time="2025-01-30T13:47:24.259025245Z" level=info msg="RemoveContainer for \"67c41ac4c64249ec66351a4b84ef1e271a2ec3287f397351660d47c3ffff534b\"" Jan 30 13:47:24.263463 containerd[1578]: time="2025-01-30T13:47:24.263412806Z" level=info msg="RemoveContainer for \"67c41ac4c64249ec66351a4b84ef1e271a2ec3287f397351660d47c3ffff534b\" returns successfully" Jan 30 13:47:24.263730 kubelet[1930]: I0130 13:47:24.263648 1930 scope.go:117] "RemoveContainer" containerID="8367973baf1226003c22fe8fd7a555e7db45d950cf99c4a35e0bc6818511939d" Jan 30 13:47:24.264696 containerd[1578]: time="2025-01-30T13:47:24.264646049Z" level=info msg="RemoveContainer for \"8367973baf1226003c22fe8fd7a555e7db45d950cf99c4a35e0bc6818511939d\"" Jan 30 13:47:24.268352 containerd[1578]: time="2025-01-30T13:47:24.268304715Z" level=info msg="RemoveContainer for \"8367973baf1226003c22fe8fd7a555e7db45d950cf99c4a35e0bc6818511939d\" returns successfully" Jan 30 13:47:24.268531 kubelet[1930]: I0130 13:47:24.268497 1930 scope.go:117] "RemoveContainer" containerID="ea64c5cdc5d7be0febbebaf50f5e6fa662416e71c0e0e3f70c81279ecfad7e6e" Jan 30 13:47:24.269466 containerd[1578]: time="2025-01-30T13:47:24.269440872Z" level=info msg="RemoveContainer for \"ea64c5cdc5d7be0febbebaf50f5e6fa662416e71c0e0e3f70c81279ecfad7e6e\"" Jan 30 13:47:24.273085 containerd[1578]: time="2025-01-30T13:47:24.273058360Z" level=info msg="RemoveContainer for \"ea64c5cdc5d7be0febbebaf50f5e6fa662416e71c0e0e3f70c81279ecfad7e6e\" returns successfully" Jan 30 13:47:24.273320 kubelet[1930]: I0130 13:47:24.273209 1930 scope.go:117] "RemoveContainer" containerID="09beda90046387238646b4b7773221b12edf8a523f5497addf19f389202d6df7" Jan 30 13:47:24.274337 containerd[1578]: time="2025-01-30T13:47:24.274311627Z" level=info msg="RemoveContainer for \"09beda90046387238646b4b7773221b12edf8a523f5497addf19f389202d6df7\"" Jan 30 13:47:24.277288 containerd[1578]: time="2025-01-30T13:47:24.277263494Z" level=info msg="RemoveContainer for \"09beda90046387238646b4b7773221b12edf8a523f5497addf19f389202d6df7\" returns successfully" Jan 30 13:47:24.277466 kubelet[1930]: I0130 13:47:24.277435 1930 scope.go:117] "RemoveContainer" containerID="7564126c361e084c12c0033f262c7b6f5959e21476c0d90a189a869555d1715b" Jan 30 13:47:24.278261 containerd[1578]: time="2025-01-30T13:47:24.278236747Z" level=info msg="RemoveContainer for \"7564126c361e084c12c0033f262c7b6f5959e21476c0d90a189a869555d1715b\"" Jan 30 13:47:24.281537 containerd[1578]: time="2025-01-30T13:47:24.281503040Z" level=info msg="RemoveContainer for \"7564126c361e084c12c0033f262c7b6f5959e21476c0d90a189a869555d1715b\" returns successfully" Jan 30 13:47:24.281673 kubelet[1930]: I0130 13:47:24.281637 1930 scope.go:117] "RemoveContainer" containerID="67c41ac4c64249ec66351a4b84ef1e271a2ec3287f397351660d47c3ffff534b" Jan 30 13:47:24.281921 containerd[1578]: time="2025-01-30T13:47:24.281875760Z" level=error msg="ContainerStatus for \"67c41ac4c64249ec66351a4b84ef1e271a2ec3287f397351660d47c3ffff534b\" failed" error="rpc error: code = NotFound desc = an error occurred when try to find container \"67c41ac4c64249ec66351a4b84ef1e271a2ec3287f397351660d47c3ffff534b\": not found" Jan 30 13:47:24.282022 kubelet[1930]: E0130 13:47:24.282001 1930 remote_runtime.go:432] "ContainerStatus from runtime service 
failed" err="rpc error: code = NotFound desc = an error occurred when try to find container \"67c41ac4c64249ec66351a4b84ef1e271a2ec3287f397351660d47c3ffff534b\": not found" containerID="67c41ac4c64249ec66351a4b84ef1e271a2ec3287f397351660d47c3ffff534b" Jan 30 13:47:24.282095 kubelet[1930]: I0130 13:47:24.282027 1930 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"containerd","ID":"67c41ac4c64249ec66351a4b84ef1e271a2ec3287f397351660d47c3ffff534b"} err="failed to get container status \"67c41ac4c64249ec66351a4b84ef1e271a2ec3287f397351660d47c3ffff534b\": rpc error: code = NotFound desc = an error occurred when try to find container \"67c41ac4c64249ec66351a4b84ef1e271a2ec3287f397351660d47c3ffff534b\": not found" Jan 30 13:47:24.282120 kubelet[1930]: I0130 13:47:24.282096 1930 scope.go:117] "RemoveContainer" containerID="8367973baf1226003c22fe8fd7a555e7db45d950cf99c4a35e0bc6818511939d" Jan 30 13:47:24.282370 containerd[1578]: time="2025-01-30T13:47:24.282329603Z" level=error msg="ContainerStatus for \"8367973baf1226003c22fe8fd7a555e7db45d950cf99c4a35e0bc6818511939d\" failed" error="rpc error: code = NotFound desc = an error occurred when try to find container \"8367973baf1226003c22fe8fd7a555e7db45d950cf99c4a35e0bc6818511939d\": not found" Jan 30 13:47:24.282478 kubelet[1930]: E0130 13:47:24.282456 1930 remote_runtime.go:432] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = an error occurred when try to find container \"8367973baf1226003c22fe8fd7a555e7db45d950cf99c4a35e0bc6818511939d\": not found" containerID="8367973baf1226003c22fe8fd7a555e7db45d950cf99c4a35e0bc6818511939d" Jan 30 13:47:24.282604 kubelet[1930]: I0130 13:47:24.282482 1930 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"containerd","ID":"8367973baf1226003c22fe8fd7a555e7db45d950cf99c4a35e0bc6818511939d"} err="failed to get container status \"8367973baf1226003c22fe8fd7a555e7db45d950cf99c4a35e0bc6818511939d\": rpc error: code = NotFound desc = an error occurred when try to find container \"8367973baf1226003c22fe8fd7a555e7db45d950cf99c4a35e0bc6818511939d\": not found" Jan 30 13:47:24.282604 kubelet[1930]: I0130 13:47:24.282501 1930 scope.go:117] "RemoveContainer" containerID="ea64c5cdc5d7be0febbebaf50f5e6fa662416e71c0e0e3f70c81279ecfad7e6e" Jan 30 13:47:24.282670 containerd[1578]: time="2025-01-30T13:47:24.282639910Z" level=error msg="ContainerStatus for \"ea64c5cdc5d7be0febbebaf50f5e6fa662416e71c0e0e3f70c81279ecfad7e6e\" failed" error="rpc error: code = NotFound desc = an error occurred when try to find container \"ea64c5cdc5d7be0febbebaf50f5e6fa662416e71c0e0e3f70c81279ecfad7e6e\": not found" Jan 30 13:47:24.282739 kubelet[1930]: E0130 13:47:24.282722 1930 remote_runtime.go:432] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = an error occurred when try to find container \"ea64c5cdc5d7be0febbebaf50f5e6fa662416e71c0e0e3f70c81279ecfad7e6e\": not found" containerID="ea64c5cdc5d7be0febbebaf50f5e6fa662416e71c0e0e3f70c81279ecfad7e6e" Jan 30 13:47:24.282775 kubelet[1930]: I0130 13:47:24.282739 1930 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"containerd","ID":"ea64c5cdc5d7be0febbebaf50f5e6fa662416e71c0e0e3f70c81279ecfad7e6e"} err="failed to get container status \"ea64c5cdc5d7be0febbebaf50f5e6fa662416e71c0e0e3f70c81279ecfad7e6e\": rpc error: code = NotFound desc = an error occurred when try to find container 
\"ea64c5cdc5d7be0febbebaf50f5e6fa662416e71c0e0e3f70c81279ecfad7e6e\": not found" Jan 30 13:47:24.282775 kubelet[1930]: I0130 13:47:24.282760 1930 scope.go:117] "RemoveContainer" containerID="09beda90046387238646b4b7773221b12edf8a523f5497addf19f389202d6df7" Jan 30 13:47:24.282955 containerd[1578]: time="2025-01-30T13:47:24.282928692Z" level=error msg="ContainerStatus for \"09beda90046387238646b4b7773221b12edf8a523f5497addf19f389202d6df7\" failed" error="rpc error: code = NotFound desc = an error occurred when try to find container \"09beda90046387238646b4b7773221b12edf8a523f5497addf19f389202d6df7\": not found" Jan 30 13:47:24.283029 kubelet[1930]: E0130 13:47:24.283010 1930 remote_runtime.go:432] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = an error occurred when try to find container \"09beda90046387238646b4b7773221b12edf8a523f5497addf19f389202d6df7\": not found" containerID="09beda90046387238646b4b7773221b12edf8a523f5497addf19f389202d6df7" Jan 30 13:47:24.283067 kubelet[1930]: I0130 13:47:24.283028 1930 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"containerd","ID":"09beda90046387238646b4b7773221b12edf8a523f5497addf19f389202d6df7"} err="failed to get container status \"09beda90046387238646b4b7773221b12edf8a523f5497addf19f389202d6df7\": rpc error: code = NotFound desc = an error occurred when try to find container \"09beda90046387238646b4b7773221b12edf8a523f5497addf19f389202d6df7\": not found" Jan 30 13:47:24.283067 kubelet[1930]: I0130 13:47:24.283039 1930 scope.go:117] "RemoveContainer" containerID="7564126c361e084c12c0033f262c7b6f5959e21476c0d90a189a869555d1715b" Jan 30 13:47:24.283191 containerd[1578]: time="2025-01-30T13:47:24.283162488Z" level=error msg="ContainerStatus for \"7564126c361e084c12c0033f262c7b6f5959e21476c0d90a189a869555d1715b\" failed" error="rpc error: code = NotFound desc = an error occurred when try to find container \"7564126c361e084c12c0033f262c7b6f5959e21476c0d90a189a869555d1715b\": not found" Jan 30 13:47:24.283288 kubelet[1930]: E0130 13:47:24.283267 1930 remote_runtime.go:432] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = an error occurred when try to find container \"7564126c361e084c12c0033f262c7b6f5959e21476c0d90a189a869555d1715b\": not found" containerID="7564126c361e084c12c0033f262c7b6f5959e21476c0d90a189a869555d1715b" Jan 30 13:47:24.283350 kubelet[1930]: I0130 13:47:24.283290 1930 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"containerd","ID":"7564126c361e084c12c0033f262c7b6f5959e21476c0d90a189a869555d1715b"} err="failed to get container status \"7564126c361e084c12c0033f262c7b6f5959e21476c0d90a189a869555d1715b\": rpc error: code = NotFound desc = an error occurred when try to find container \"7564126c361e084c12c0033f262c7b6f5959e21476c0d90a189a869555d1715b\": not found" Jan 30 13:47:24.339590 kubelet[1930]: I0130 13:47:24.339501 1930 reconciler_common.go:289] "Volume detached for volume \"clustermesh-secrets\" (UniqueName: \"kubernetes.io/secret/de663b58-e8ae-403f-a76e-8ec55901f8bd-clustermesh-secrets\") on node \"10.0.0.114\" DevicePath \"\"" Jan 30 13:47:24.339590 kubelet[1930]: I0130 13:47:24.339520 1930 reconciler_common.go:289] "Volume detached for volume \"kube-api-access-fh458\" (UniqueName: \"kubernetes.io/projected/de663b58-e8ae-403f-a76e-8ec55901f8bd-kube-api-access-fh458\") on node \"10.0.0.114\" DevicePath \"\"" Jan 30 13:47:24.339590 kubelet[1930]: I0130 13:47:24.339529 1930 
reconciler_common.go:289] "Volume detached for volume \"bpf-maps\" (UniqueName: \"kubernetes.io/host-path/de663b58-e8ae-403f-a76e-8ec55901f8bd-bpf-maps\") on node \"10.0.0.114\" DevicePath \"\"" Jan 30 13:47:24.339590 kubelet[1930]: I0130 13:47:24.339538 1930 reconciler_common.go:289] "Volume detached for volume \"cilium-cgroup\" (UniqueName: \"kubernetes.io/host-path/de663b58-e8ae-403f-a76e-8ec55901f8bd-cilium-cgroup\") on node \"10.0.0.114\" DevicePath \"\"" Jan 30 13:47:24.339590 kubelet[1930]: I0130 13:47:24.339547 1930 reconciler_common.go:289] "Volume detached for volume \"cni-path\" (UniqueName: \"kubernetes.io/host-path/de663b58-e8ae-403f-a76e-8ec55901f8bd-cni-path\") on node \"10.0.0.114\" DevicePath \"\"" Jan 30 13:47:24.339590 kubelet[1930]: I0130 13:47:24.339555 1930 reconciler_common.go:289] "Volume detached for volume \"cilium-run\" (UniqueName: \"kubernetes.io/host-path/de663b58-e8ae-403f-a76e-8ec55901f8bd-cilium-run\") on node \"10.0.0.114\" DevicePath \"\"" Jan 30 13:47:24.339590 kubelet[1930]: I0130 13:47:24.339562 1930 reconciler_common.go:289] "Volume detached for volume \"cilium-config-path\" (UniqueName: \"kubernetes.io/configmap/de663b58-e8ae-403f-a76e-8ec55901f8bd-cilium-config-path\") on node \"10.0.0.114\" DevicePath \"\"" Jan 30 13:47:24.339590 kubelet[1930]: I0130 13:47:24.339569 1930 reconciler_common.go:289] "Volume detached for volume \"etc-cni-netd\" (UniqueName: \"kubernetes.io/host-path/de663b58-e8ae-403f-a76e-8ec55901f8bd-etc-cni-netd\") on node \"10.0.0.114\" DevicePath \"\"" Jan 30 13:47:24.339870 kubelet[1930]: I0130 13:47:24.339576 1930 reconciler_common.go:289] "Volume detached for volume \"hostproc\" (UniqueName: \"kubernetes.io/host-path/de663b58-e8ae-403f-a76e-8ec55901f8bd-hostproc\") on node \"10.0.0.114\" DevicePath \"\"" Jan 30 13:47:24.339870 kubelet[1930]: I0130 13:47:24.339583 1930 reconciler_common.go:289] "Volume detached for volume \"host-proc-sys-kernel\" (UniqueName: \"kubernetes.io/host-path/de663b58-e8ae-403f-a76e-8ec55901f8bd-host-proc-sys-kernel\") on node \"10.0.0.114\" DevicePath \"\"" Jan 30 13:47:24.339870 kubelet[1930]: I0130 13:47:24.339590 1930 reconciler_common.go:289] "Volume detached for volume \"hubble-tls\" (UniqueName: \"kubernetes.io/projected/de663b58-e8ae-403f-a76e-8ec55901f8bd-hubble-tls\") on node \"10.0.0.114\" DevicePath \"\"" Jan 30 13:47:24.339870 kubelet[1930]: I0130 13:47:24.339598 1930 reconciler_common.go:289] "Volume detached for volume \"host-proc-sys-net\" (UniqueName: \"kubernetes.io/host-path/de663b58-e8ae-403f-a76e-8ec55901f8bd-host-proc-sys-net\") on node \"10.0.0.114\" DevicePath \"\"" Jan 30 13:47:24.783631 systemd[1]: var-lib-kubelet-pods-de663b58\x2de8ae\x2d403f\x2da76e\x2d8ec55901f8bd-volumes-kubernetes.io\x7eprojected-kube\x2dapi\x2daccess\x2dfh458.mount: Deactivated successfully. Jan 30 13:47:24.783857 systemd[1]: var-lib-kubelet-pods-de663b58\x2de8ae\x2d403f\x2da76e\x2d8ec55901f8bd-volumes-kubernetes.io\x7eprojected-hubble\x2dtls.mount: Deactivated successfully. Jan 30 13:47:24.784036 systemd[1]: var-lib-kubelet-pods-de663b58\x2de8ae\x2d403f\x2da76e\x2d8ec55901f8bd-volumes-kubernetes.io\x7esecret-clustermesh\x2dsecrets.mount: Deactivated successfully. 
Jan 30 13:47:24.790138 kubelet[1930]: E0130 13:47:24.790097 1930 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Jan 30 13:47:25.152141 kubelet[1930]: I0130 13:47:25.152033 1930 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="de663b58-e8ae-403f-a76e-8ec55901f8bd" path="/var/lib/kubelet/pods/de663b58-e8ae-403f-a76e-8ec55901f8bd/volumes" Jan 30 13:47:25.791293 kubelet[1930]: E0130 13:47:25.791233 1930 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Jan 30 13:47:26.310032 kubelet[1930]: I0130 13:47:26.309984 1930 topology_manager.go:215] "Topology Admit Handler" podUID="488a6fcd-308a-445b-84db-a7ead64519f3" podNamespace="kube-system" podName="cilium-operator-599987898-ggcv6" Jan 30 13:47:26.310032 kubelet[1930]: E0130 13:47:26.310048 1930 cpu_manager.go:395] "RemoveStaleState: removing container" podUID="de663b58-e8ae-403f-a76e-8ec55901f8bd" containerName="mount-bpf-fs" Jan 30 13:47:26.310229 kubelet[1930]: E0130 13:47:26.310065 1930 cpu_manager.go:395] "RemoveStaleState: removing container" podUID="de663b58-e8ae-403f-a76e-8ec55901f8bd" containerName="mount-cgroup" Jan 30 13:47:26.310229 kubelet[1930]: E0130 13:47:26.310075 1930 cpu_manager.go:395] "RemoveStaleState: removing container" podUID="de663b58-e8ae-403f-a76e-8ec55901f8bd" containerName="apply-sysctl-overwrites" Jan 30 13:47:26.310229 kubelet[1930]: E0130 13:47:26.310084 1930 cpu_manager.go:395] "RemoveStaleState: removing container" podUID="de663b58-e8ae-403f-a76e-8ec55901f8bd" containerName="clean-cilium-state" Jan 30 13:47:26.310229 kubelet[1930]: E0130 13:47:26.310092 1930 cpu_manager.go:395] "RemoveStaleState: removing container" podUID="de663b58-e8ae-403f-a76e-8ec55901f8bd" containerName="cilium-agent" Jan 30 13:47:26.310229 kubelet[1930]: I0130 13:47:26.310114 1930 memory_manager.go:354] "RemoveStaleState removing state" podUID="de663b58-e8ae-403f-a76e-8ec55901f8bd" containerName="cilium-agent" Jan 30 13:47:26.311610 kubelet[1930]: I0130 13:47:26.311561 1930 topology_manager.go:215] "Topology Admit Handler" podUID="d8e6d1f1-90fc-4692-ad81-8cc696b3ae0a" podNamespace="kube-system" podName="cilium-dlx7c" Jan 30 13:47:26.449692 kubelet[1930]: I0130 13:47:26.449647 1930 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"clustermesh-secrets\" (UniqueName: \"kubernetes.io/secret/d8e6d1f1-90fc-4692-ad81-8cc696b3ae0a-clustermesh-secrets\") pod \"cilium-dlx7c\" (UID: \"d8e6d1f1-90fc-4692-ad81-8cc696b3ae0a\") " pod="kube-system/cilium-dlx7c" Jan 30 13:47:26.449692 kubelet[1930]: I0130 13:47:26.449697 1930 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-proc-sys-net\" (UniqueName: \"kubernetes.io/host-path/d8e6d1f1-90fc-4692-ad81-8cc696b3ae0a-host-proc-sys-net\") pod \"cilium-dlx7c\" (UID: \"d8e6d1f1-90fc-4692-ad81-8cc696b3ae0a\") " pod="kube-system/cilium-dlx7c" Jan 30 13:47:26.449927 kubelet[1930]: I0130 13:47:26.449726 1930 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-r5l22\" (UniqueName: \"kubernetes.io/projected/d8e6d1f1-90fc-4692-ad81-8cc696b3ae0a-kube-api-access-r5l22\") pod \"cilium-dlx7c\" (UID: \"d8e6d1f1-90fc-4692-ad81-8cc696b3ae0a\") " pod="kube-system/cilium-dlx7c" Jan 30 13:47:26.449927 kubelet[1930]: I0130 13:47:26.449751 1930 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume 
started for volume \"hostproc\" (UniqueName: \"kubernetes.io/host-path/d8e6d1f1-90fc-4692-ad81-8cc696b3ae0a-hostproc\") pod \"cilium-dlx7c\" (UID: \"d8e6d1f1-90fc-4692-ad81-8cc696b3ae0a\") " pod="kube-system/cilium-dlx7c" Jan 30 13:47:26.449927 kubelet[1930]: I0130 13:47:26.449785 1930 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"bpf-maps\" (UniqueName: \"kubernetes.io/host-path/d8e6d1f1-90fc-4692-ad81-8cc696b3ae0a-bpf-maps\") pod \"cilium-dlx7c\" (UID: \"d8e6d1f1-90fc-4692-ad81-8cc696b3ae0a\") " pod="kube-system/cilium-dlx7c" Jan 30 13:47:26.449927 kubelet[1930]: I0130 13:47:26.449830 1930 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"hubble-tls\" (UniqueName: \"kubernetes.io/projected/d8e6d1f1-90fc-4692-ad81-8cc696b3ae0a-hubble-tls\") pod \"cilium-dlx7c\" (UID: \"d8e6d1f1-90fc-4692-ad81-8cc696b3ae0a\") " pod="kube-system/cilium-dlx7c" Jan 30 13:47:26.449927 kubelet[1930]: I0130 13:47:26.449853 1930 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-kpc57\" (UniqueName: \"kubernetes.io/projected/488a6fcd-308a-445b-84db-a7ead64519f3-kube-api-access-kpc57\") pod \"cilium-operator-599987898-ggcv6\" (UID: \"488a6fcd-308a-445b-84db-a7ead64519f3\") " pod="kube-system/cilium-operator-599987898-ggcv6" Jan 30 13:47:26.450080 kubelet[1930]: I0130 13:47:26.449878 1930 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cilium-run\" (UniqueName: \"kubernetes.io/host-path/d8e6d1f1-90fc-4692-ad81-8cc696b3ae0a-cilium-run\") pod \"cilium-dlx7c\" (UID: \"d8e6d1f1-90fc-4692-ad81-8cc696b3ae0a\") " pod="kube-system/cilium-dlx7c" Jan 30 13:47:26.450080 kubelet[1930]: I0130 13:47:26.449901 1930 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cilium-config-path\" (UniqueName: \"kubernetes.io/configmap/d8e6d1f1-90fc-4692-ad81-8cc696b3ae0a-cilium-config-path\") pod \"cilium-dlx7c\" (UID: \"d8e6d1f1-90fc-4692-ad81-8cc696b3ae0a\") " pod="kube-system/cilium-dlx7c" Jan 30 13:47:26.450080 kubelet[1930]: I0130 13:47:26.449923 1930 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-proc-sys-kernel\" (UniqueName: \"kubernetes.io/host-path/d8e6d1f1-90fc-4692-ad81-8cc696b3ae0a-host-proc-sys-kernel\") pod \"cilium-dlx7c\" (UID: \"d8e6d1f1-90fc-4692-ad81-8cc696b3ae0a\") " pod="kube-system/cilium-dlx7c" Jan 30 13:47:26.450080 kubelet[1930]: I0130 13:47:26.449947 1930 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cni-path\" (UniqueName: \"kubernetes.io/host-path/d8e6d1f1-90fc-4692-ad81-8cc696b3ae0a-cni-path\") pod \"cilium-dlx7c\" (UID: \"d8e6d1f1-90fc-4692-ad81-8cc696b3ae0a\") " pod="kube-system/cilium-dlx7c" Jan 30 13:47:26.450080 kubelet[1930]: I0130 13:47:26.449972 1930 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"etc-cni-netd\" (UniqueName: \"kubernetes.io/host-path/d8e6d1f1-90fc-4692-ad81-8cc696b3ae0a-etc-cni-netd\") pod \"cilium-dlx7c\" (UID: \"d8e6d1f1-90fc-4692-ad81-8cc696b3ae0a\") " pod="kube-system/cilium-dlx7c" Jan 30 13:47:26.450080 kubelet[1930]: I0130 13:47:26.449997 1930 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/d8e6d1f1-90fc-4692-ad81-8cc696b3ae0a-lib-modules\") pod \"cilium-dlx7c\" (UID: 
\"d8e6d1f1-90fc-4692-ad81-8cc696b3ae0a\") " pod="kube-system/cilium-dlx7c" Jan 30 13:47:26.450256 kubelet[1930]: I0130 13:47:26.450021 1930 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"xtables-lock\" (UniqueName: \"kubernetes.io/host-path/d8e6d1f1-90fc-4692-ad81-8cc696b3ae0a-xtables-lock\") pod \"cilium-dlx7c\" (UID: \"d8e6d1f1-90fc-4692-ad81-8cc696b3ae0a\") " pod="kube-system/cilium-dlx7c" Jan 30 13:47:26.450256 kubelet[1930]: I0130 13:47:26.450044 1930 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cilium-ipsec-secrets\" (UniqueName: \"kubernetes.io/secret/d8e6d1f1-90fc-4692-ad81-8cc696b3ae0a-cilium-ipsec-secrets\") pod \"cilium-dlx7c\" (UID: \"d8e6d1f1-90fc-4692-ad81-8cc696b3ae0a\") " pod="kube-system/cilium-dlx7c" Jan 30 13:47:26.450256 kubelet[1930]: I0130 13:47:26.450067 1930 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cilium-config-path\" (UniqueName: \"kubernetes.io/configmap/488a6fcd-308a-445b-84db-a7ead64519f3-cilium-config-path\") pod \"cilium-operator-599987898-ggcv6\" (UID: \"488a6fcd-308a-445b-84db-a7ead64519f3\") " pod="kube-system/cilium-operator-599987898-ggcv6" Jan 30 13:47:26.450256 kubelet[1930]: I0130 13:47:26.450089 1930 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cilium-cgroup\" (UniqueName: \"kubernetes.io/host-path/d8e6d1f1-90fc-4692-ad81-8cc696b3ae0a-cilium-cgroup\") pod \"cilium-dlx7c\" (UID: \"d8e6d1f1-90fc-4692-ad81-8cc696b3ae0a\") " pod="kube-system/cilium-dlx7c" Jan 30 13:47:26.615072 kubelet[1930]: E0130 13:47:26.614920 1930 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jan 30 13:47:26.615552 containerd[1578]: time="2025-01-30T13:47:26.615449159Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:cilium-operator-599987898-ggcv6,Uid:488a6fcd-308a-445b-84db-a7ead64519f3,Namespace:kube-system,Attempt:0,}" Jan 30 13:47:26.616082 kubelet[1930]: E0130 13:47:26.615720 1930 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jan 30 13:47:26.616481 containerd[1578]: time="2025-01-30T13:47:26.616445856Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:cilium-dlx7c,Uid:d8e6d1f1-90fc-4692-ad81-8cc696b3ae0a,Namespace:kube-system,Attempt:0,}" Jan 30 13:47:26.642496 containerd[1578]: time="2025-01-30T13:47:26.642400070Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Jan 30 13:47:26.642496 containerd[1578]: time="2025-01-30T13:47:26.642453884Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Jan 30 13:47:26.642496 containerd[1578]: time="2025-01-30T13:47:26.642478606Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Jan 30 13:47:26.642711 containerd[1578]: time="2025-01-30T13:47:26.642604721Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Jan 30 13:47:26.644719 containerd[1578]: time="2025-01-30T13:47:26.644625735Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." 
runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Jan 30 13:47:26.644719 containerd[1578]: time="2025-01-30T13:47:26.644678455Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Jan 30 13:47:26.644719 containerd[1578]: time="2025-01-30T13:47:26.644693376Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Jan 30 13:47:26.645032 containerd[1578]: time="2025-01-30T13:47:26.644792135Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Jan 30 13:47:26.681690 containerd[1578]: time="2025-01-30T13:47:26.681639772Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:cilium-dlx7c,Uid:d8e6d1f1-90fc-4692-ad81-8cc696b3ae0a,Namespace:kube-system,Attempt:0,} returns sandbox id \"425ad7b11fa8800635d69b2dd226c31a6296d0b19ac24799f7306f33dc53f935\"" Jan 30 13:47:26.682385 kubelet[1930]: E0130 13:47:26.682350 1930 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jan 30 13:47:26.684023 containerd[1578]: time="2025-01-30T13:47:26.683986450Z" level=info msg="CreateContainer within sandbox \"425ad7b11fa8800635d69b2dd226c31a6296d0b19ac24799f7306f33dc53f935\" for container &ContainerMetadata{Name:mount-cgroup,Attempt:0,}" Jan 30 13:47:26.699662 containerd[1578]: time="2025-01-30T13:47:26.699615398Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:cilium-operator-599987898-ggcv6,Uid:488a6fcd-308a-445b-84db-a7ead64519f3,Namespace:kube-system,Attempt:0,} returns sandbox id \"d69ecb8d70779a8272e0c7243ff1ff62e442ff7f0140f6eeb854d808fdcdcdf6\"" Jan 30 13:47:26.700830 kubelet[1930]: E0130 13:47:26.700765 1930 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jan 30 13:47:26.701455 containerd[1578]: time="2025-01-30T13:47:26.701394261Z" level=info msg="PullImage \"quay.io/cilium/operator-generic:v1.12.5@sha256:b296eb7f0f7656a5cc19724f40a8a7121b7fd725278b7d61dc91fe0b7ffd7c0e\"" Jan 30 13:47:26.708160 containerd[1578]: time="2025-01-30T13:47:26.708115494Z" level=info msg="CreateContainer within sandbox \"425ad7b11fa8800635d69b2dd226c31a6296d0b19ac24799f7306f33dc53f935\" for &ContainerMetadata{Name:mount-cgroup,Attempt:0,} returns container id \"780c683ab9d19d6511ae2d38fc39e400ed98b4063683afe2318db1bc5095a9d2\"" Jan 30 13:47:26.708848 containerd[1578]: time="2025-01-30T13:47:26.708550760Z" level=info msg="StartContainer for \"780c683ab9d19d6511ae2d38fc39e400ed98b4063683afe2318db1bc5095a9d2\"" Jan 30 13:47:26.763222 containerd[1578]: time="2025-01-30T13:47:26.763169543Z" level=info msg="StartContainer for \"780c683ab9d19d6511ae2d38fc39e400ed98b4063683afe2318db1bc5095a9d2\" returns successfully" Jan 30 13:47:26.792418 kubelet[1930]: E0130 13:47:26.792369 1930 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Jan 30 13:47:26.807593 containerd[1578]: time="2025-01-30T13:47:26.807532715Z" level=info msg="shim disconnected" id=780c683ab9d19d6511ae2d38fc39e400ed98b4063683afe2318db1bc5095a9d2 namespace=k8s.io Jan 30 13:47:26.807593 containerd[1578]: time="2025-01-30T13:47:26.807584033Z" level=warning msg="cleaning up after shim disconnected" 
id=780c683ab9d19d6511ae2d38fc39e400ed98b4063683afe2318db1bc5095a9d2 namespace=k8s.io Jan 30 13:47:26.807593 containerd[1578]: time="2025-01-30T13:47:26.807592501Z" level=info msg="cleaning up dead shim" namespace=k8s.io Jan 30 13:47:27.269229 kubelet[1930]: E0130 13:47:27.269187 1930 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jan 30 13:47:27.271093 containerd[1578]: time="2025-01-30T13:47:27.271049800Z" level=info msg="CreateContainer within sandbox \"425ad7b11fa8800635d69b2dd226c31a6296d0b19ac24799f7306f33dc53f935\" for container &ContainerMetadata{Name:apply-sysctl-overwrites,Attempt:0,}" Jan 30 13:47:27.460748 containerd[1578]: time="2025-01-30T13:47:27.460700562Z" level=info msg="CreateContainer within sandbox \"425ad7b11fa8800635d69b2dd226c31a6296d0b19ac24799f7306f33dc53f935\" for &ContainerMetadata{Name:apply-sysctl-overwrites,Attempt:0,} returns container id \"981411afd17f789beb85e644106d661d3ffc113fc29eeb95a82fbc1e48ab4e72\"" Jan 30 13:47:27.461193 containerd[1578]: time="2025-01-30T13:47:27.461166349Z" level=info msg="StartContainer for \"981411afd17f789beb85e644106d661d3ffc113fc29eeb95a82fbc1e48ab4e72\"" Jan 30 13:47:27.517756 containerd[1578]: time="2025-01-30T13:47:27.517715109Z" level=info msg="StartContainer for \"981411afd17f789beb85e644106d661d3ffc113fc29eeb95a82fbc1e48ab4e72\" returns successfully" Jan 30 13:47:27.546722 containerd[1578]: time="2025-01-30T13:47:27.546648470Z" level=info msg="shim disconnected" id=981411afd17f789beb85e644106d661d3ffc113fc29eeb95a82fbc1e48ab4e72 namespace=k8s.io Jan 30 13:47:27.546722 containerd[1578]: time="2025-01-30T13:47:27.546703607Z" level=warning msg="cleaning up after shim disconnected" id=981411afd17f789beb85e644106d661d3ffc113fc29eeb95a82fbc1e48ab4e72 namespace=k8s.io Jan 30 13:47:27.546722 containerd[1578]: time="2025-01-30T13:47:27.546713377Z" level=info msg="cleaning up dead shim" namespace=k8s.io Jan 30 13:47:27.793574 kubelet[1930]: E0130 13:47:27.793525 1930 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Jan 30 13:47:28.135721 kubelet[1930]: E0130 13:47:28.135661 1930 kubelet.go:2900] "Container runtime network not ready" networkReady="NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" Jan 30 13:47:28.278953 kubelet[1930]: E0130 13:47:28.278925 1930 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jan 30 13:47:28.282090 containerd[1578]: time="2025-01-30T13:47:28.282058107Z" level=info msg="CreateContainer within sandbox \"425ad7b11fa8800635d69b2dd226c31a6296d0b19ac24799f7306f33dc53f935\" for container &ContainerMetadata{Name:mount-bpf-fs,Attempt:0,}" Jan 30 13:47:28.363568 containerd[1578]: time="2025-01-30T13:47:28.363454805Z" level=info msg="CreateContainer within sandbox \"425ad7b11fa8800635d69b2dd226c31a6296d0b19ac24799f7306f33dc53f935\" for &ContainerMetadata{Name:mount-bpf-fs,Attempt:0,} returns container id \"24e3769f91e39cfa5987c5add67a8858abcf9271c4ef7d69c50c9cdbfee57dfe\"" Jan 30 13:47:28.364051 containerd[1578]: time="2025-01-30T13:47:28.364019587Z" level=info msg="StartContainer for \"24e3769f91e39cfa5987c5add67a8858abcf9271c4ef7d69c50c9cdbfee57dfe\"" Jan 30 13:47:28.425617 containerd[1578]: time="2025-01-30T13:47:28.425405930Z" 
level=info msg="StartContainer for \"24e3769f91e39cfa5987c5add67a8858abcf9271c4ef7d69c50c9cdbfee57dfe\" returns successfully" Jan 30 13:47:28.464959 containerd[1578]: time="2025-01-30T13:47:28.464773338Z" level=info msg="shim disconnected" id=24e3769f91e39cfa5987c5add67a8858abcf9271c4ef7d69c50c9cdbfee57dfe namespace=k8s.io Jan 30 13:47:28.464959 containerd[1578]: time="2025-01-30T13:47:28.464849266Z" level=warning msg="cleaning up after shim disconnected" id=24e3769f91e39cfa5987c5add67a8858abcf9271c4ef7d69c50c9cdbfee57dfe namespace=k8s.io Jan 30 13:47:28.464959 containerd[1578]: time="2025-01-30T13:47:28.464860198Z" level=info msg="cleaning up dead shim" namespace=k8s.io Jan 30 13:47:28.556650 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-24e3769f91e39cfa5987c5add67a8858abcf9271c4ef7d69c50c9cdbfee57dfe-rootfs.mount: Deactivated successfully. Jan 30 13:47:28.662617 containerd[1578]: time="2025-01-30T13:47:28.662568637Z" level=info msg="ImageCreate event name:\"quay.io/cilium/operator-generic@sha256:b296eb7f0f7656a5cc19724f40a8a7121b7fd725278b7d61dc91fe0b7ffd7c0e\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jan 30 13:47:28.663332 containerd[1578]: time="2025-01-30T13:47:28.663191569Z" level=info msg="stop pulling image quay.io/cilium/operator-generic@sha256:b296eb7f0f7656a5cc19724f40a8a7121b7fd725278b7d61dc91fe0b7ffd7c0e: active requests=0, bytes read=18904197" Jan 30 13:47:28.664216 containerd[1578]: time="2025-01-30T13:47:28.664172490Z" level=info msg="ImageCreate event name:\"sha256:ed355de9f59fe391dbe53f3c7c7a60baab3c3a9b7549aa54d10b87fff7dacf7c\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jan 30 13:47:28.665585 containerd[1578]: time="2025-01-30T13:47:28.665549348Z" level=info msg="Pulled image \"quay.io/cilium/operator-generic:v1.12.5@sha256:b296eb7f0f7656a5cc19724f40a8a7121b7fd725278b7d61dc91fe0b7ffd7c0e\" with image id \"sha256:ed355de9f59fe391dbe53f3c7c7a60baab3c3a9b7549aa54d10b87fff7dacf7c\", repo tag \"\", repo digest \"quay.io/cilium/operator-generic@sha256:b296eb7f0f7656a5cc19724f40a8a7121b7fd725278b7d61dc91fe0b7ffd7c0e\", size \"18897442\" in 1.96412321s" Jan 30 13:47:28.665635 containerd[1578]: time="2025-01-30T13:47:28.665585494Z" level=info msg="PullImage \"quay.io/cilium/operator-generic:v1.12.5@sha256:b296eb7f0f7656a5cc19724f40a8a7121b7fd725278b7d61dc91fe0b7ffd7c0e\" returns image reference \"sha256:ed355de9f59fe391dbe53f3c7c7a60baab3c3a9b7549aa54d10b87fff7dacf7c\"" Jan 30 13:47:28.667651 containerd[1578]: time="2025-01-30T13:47:28.667619687Z" level=info msg="CreateContainer within sandbox \"d69ecb8d70779a8272e0c7243ff1ff62e442ff7f0140f6eeb854d808fdcdcdf6\" for container &ContainerMetadata{Name:cilium-operator,Attempt:0,}" Jan 30 13:47:28.677648 containerd[1578]: time="2025-01-30T13:47:28.677565311Z" level=info msg="CreateContainer within sandbox \"d69ecb8d70779a8272e0c7243ff1ff62e442ff7f0140f6eeb854d808fdcdcdf6\" for &ContainerMetadata{Name:cilium-operator,Attempt:0,} returns container id \"fecdf860a1a61400792f34831d26db2d06fb9c13bdc7f0f4bd38a9826b8c34d0\"" Jan 30 13:47:28.678152 containerd[1578]: time="2025-01-30T13:47:28.678099307Z" level=info msg="StartContainer for \"fecdf860a1a61400792f34831d26db2d06fb9c13bdc7f0f4bd38a9826b8c34d0\"" Jan 30 13:47:28.724608 containerd[1578]: time="2025-01-30T13:47:28.724545264Z" level=info msg="StartContainer for \"fecdf860a1a61400792f34831d26db2d06fb9c13bdc7f0f4bd38a9826b8c34d0\" returns successfully" Jan 30 13:47:28.794311 kubelet[1930]: E0130 13:47:28.794263 1930 file_linux.go:61] 
"Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Jan 30 13:47:29.283566 kubelet[1930]: E0130 13:47:29.283513 1930 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jan 30 13:47:29.285437 kubelet[1930]: E0130 13:47:29.285319 1930 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jan 30 13:47:29.285930 containerd[1578]: time="2025-01-30T13:47:29.285891812Z" level=info msg="CreateContainer within sandbox \"425ad7b11fa8800635d69b2dd226c31a6296d0b19ac24799f7306f33dc53f935\" for container &ContainerMetadata{Name:clean-cilium-state,Attempt:0,}" Jan 30 13:47:29.303559 containerd[1578]: time="2025-01-30T13:47:29.303498484Z" level=info msg="CreateContainer within sandbox \"425ad7b11fa8800635d69b2dd226c31a6296d0b19ac24799f7306f33dc53f935\" for &ContainerMetadata{Name:clean-cilium-state,Attempt:0,} returns container id \"5e278e97acb6f98ff777909f73c5789b4b1e372e85f76c739feaea242b5623a9\"" Jan 30 13:47:29.304074 containerd[1578]: time="2025-01-30T13:47:29.303950617Z" level=info msg="StartContainer for \"5e278e97acb6f98ff777909f73c5789b4b1e372e85f76c739feaea242b5623a9\"" Jan 30 13:47:29.510342 containerd[1578]: time="2025-01-30T13:47:29.510296968Z" level=info msg="StartContainer for \"5e278e97acb6f98ff777909f73c5789b4b1e372e85f76c739feaea242b5623a9\" returns successfully" Jan 30 13:47:29.530495 containerd[1578]: time="2025-01-30T13:47:29.530427673Z" level=info msg="shim disconnected" id=5e278e97acb6f98ff777909f73c5789b4b1e372e85f76c739feaea242b5623a9 namespace=k8s.io Jan 30 13:47:29.530495 containerd[1578]: time="2025-01-30T13:47:29.530484191Z" level=warning msg="cleaning up after shim disconnected" id=5e278e97acb6f98ff777909f73c5789b4b1e372e85f76c739feaea242b5623a9 namespace=k8s.io Jan 30 13:47:29.530495 containerd[1578]: time="2025-01-30T13:47:29.530494563Z" level=info msg="cleaning up dead shim" namespace=k8s.io Jan 30 13:47:29.794866 kubelet[1930]: E0130 13:47:29.794830 1930 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Jan 30 13:47:30.288919 kubelet[1930]: E0130 13:47:30.288886 1930 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jan 30 13:47:30.288919 kubelet[1930]: E0130 13:47:30.288901 1930 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jan 30 13:47:30.292950 containerd[1578]: time="2025-01-30T13:47:30.292900871Z" level=info msg="CreateContainer within sandbox \"425ad7b11fa8800635d69b2dd226c31a6296d0b19ac24799f7306f33dc53f935\" for container &ContainerMetadata{Name:cilium-agent,Attempt:0,}" Jan 30 13:47:30.301698 kubelet[1930]: I0130 13:47:30.301660 1930 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/cilium-operator-599987898-ggcv6" podStartSLOduration=2.336512905 podStartE2EDuration="4.301641972s" podCreationTimestamp="2025-01-30 13:47:26 +0000 UTC" firstStartedPulling="2025-01-30 13:47:26.701196264 +0000 UTC m=+54.248752288" lastFinishedPulling="2025-01-30 13:47:28.666325331 +0000 UTC m=+56.213881355" observedRunningTime="2025-01-30 13:47:29.305943201 +0000 
UTC m=+56.853499225" watchObservedRunningTime="2025-01-30 13:47:30.301641972 +0000 UTC m=+57.849197996" Jan 30 13:47:30.307238 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount177642482.mount: Deactivated successfully. Jan 30 13:47:30.307692 containerd[1578]: time="2025-01-30T13:47:30.307659091Z" level=info msg="CreateContainer within sandbox \"425ad7b11fa8800635d69b2dd226c31a6296d0b19ac24799f7306f33dc53f935\" for &ContainerMetadata{Name:cilium-agent,Attempt:0,} returns container id \"f871bad65485d4757b0abdb97253c7009684fb659151cb6ec2fd8e7ca8fd586d\"" Jan 30 13:47:30.308172 containerd[1578]: time="2025-01-30T13:47:30.308147295Z" level=info msg="StartContainer for \"f871bad65485d4757b0abdb97253c7009684fb659151cb6ec2fd8e7ca8fd586d\"" Jan 30 13:47:30.365344 containerd[1578]: time="2025-01-30T13:47:30.365305509Z" level=info msg="StartContainer for \"f871bad65485d4757b0abdb97253c7009684fb659151cb6ec2fd8e7ca8fd586d\" returns successfully" Jan 30 13:47:30.784859 kernel: alg: No test for seqiv(rfc4106(gcm(aes))) (seqiv(rfc4106-gcm-aesni)) Jan 30 13:47:30.795128 kubelet[1930]: E0130 13:47:30.795082 1930 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Jan 30 13:47:31.292440 kubelet[1930]: E0130 13:47:31.292404 1930 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jan 30 13:47:31.313050 kubelet[1930]: I0130 13:47:31.312987 1930 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/cilium-dlx7c" podStartSLOduration=5.312969942 podStartE2EDuration="5.312969942s" podCreationTimestamp="2025-01-30 13:47:26 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-01-30 13:47:31.312455155 +0000 UTC m=+58.860011179" watchObservedRunningTime="2025-01-30 13:47:31.312969942 +0000 UTC m=+58.860525966" Jan 30 13:47:31.795382 kubelet[1930]: E0130 13:47:31.795310 1930 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Jan 30 13:47:32.617735 kubelet[1930]: E0130 13:47:32.617684 1930 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jan 30 13:47:32.754923 kubelet[1930]: E0130 13:47:32.754871 1930 file.go:104] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Jan 30 13:47:32.766451 containerd[1578]: time="2025-01-30T13:47:32.766356697Z" level=info msg="StopPodSandbox for \"4d7144513a594fb0486968fdbd39c99288e248ba95c01c933c459a35e14f968a\"" Jan 30 13:47:32.766451 containerd[1578]: time="2025-01-30T13:47:32.766432463Z" level=info msg="TearDown network for sandbox \"4d7144513a594fb0486968fdbd39c99288e248ba95c01c933c459a35e14f968a\" successfully" Jan 30 13:47:32.766451 containerd[1578]: time="2025-01-30T13:47:32.766442624Z" level=info msg="StopPodSandbox for \"4d7144513a594fb0486968fdbd39c99288e248ba95c01c933c459a35e14f968a\" returns successfully" Jan 30 13:47:32.767204 containerd[1578]: time="2025-01-30T13:47:32.766768087Z" level=info msg="RemovePodSandbox for \"4d7144513a594fb0486968fdbd39c99288e248ba95c01c933c459a35e14f968a\"" Jan 30 13:47:32.767204 containerd[1578]: time="2025-01-30T13:47:32.766787518Z" level=info msg="Forcibly stopping sandbox 
\"4d7144513a594fb0486968fdbd39c99288e248ba95c01c933c459a35e14f968a\"" Jan 30 13:47:32.767204 containerd[1578]: time="2025-01-30T13:47:32.766850898Z" level=info msg="TearDown network for sandbox \"4d7144513a594fb0486968fdbd39c99288e248ba95c01c933c459a35e14f968a\" successfully" Jan 30 13:47:32.772904 containerd[1578]: time="2025-01-30T13:47:32.772863495Z" level=warning msg="Failed to get podSandbox status for container event for sandboxID \"4d7144513a594fb0486968fdbd39c99288e248ba95c01c933c459a35e14f968a\": an error occurred when try to find sandbox: not found. Sending the event with nil podSandboxStatus." Jan 30 13:47:32.773007 containerd[1578]: time="2025-01-30T13:47:32.772909159Z" level=info msg="RemovePodSandbox \"4d7144513a594fb0486968fdbd39c99288e248ba95c01c933c459a35e14f968a\" returns successfully" Jan 30 13:47:32.797201 kubelet[1930]: E0130 13:47:32.797046 1930 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Jan 30 13:47:33.798153 kubelet[1930]: E0130 13:47:33.798099 1930 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Jan 30 13:47:33.971738 systemd-networkd[1258]: lxc_health: Link UP Jan 30 13:47:33.975360 systemd-networkd[1258]: lxc_health: Gained carrier Jan 30 13:47:34.618314 kubelet[1930]: E0130 13:47:34.618245 1930 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jan 30 13:47:34.798469 kubelet[1930]: E0130 13:47:34.798409 1930 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Jan 30 13:47:35.300091 kubelet[1930]: E0130 13:47:35.300049 1930 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jan 30 13:47:35.354057 systemd-networkd[1258]: lxc_health: Gained IPv6LL Jan 30 13:47:35.798909 kubelet[1930]: E0130 13:47:35.798870 1930 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Jan 30 13:47:36.301457 kubelet[1930]: E0130 13:47:36.301296 1930 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jan 30 13:47:36.799350 kubelet[1930]: E0130 13:47:36.799299 1930 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Jan 30 13:47:37.800437 kubelet[1930]: E0130 13:47:37.800394 1930 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Jan 30 13:47:38.801158 kubelet[1930]: E0130 13:47:38.801108 1930 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Jan 30 13:47:39.801732 kubelet[1930]: E0130 13:47:39.801662 1930 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Jan 30 13:47:40.802794 kubelet[1930]: E0130 13:47:40.802732 1930 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests"