Jan 13 21:24:44.897100 kernel: Linux version 6.6.71-flatcar (build@pony-truck.infra.kinvolk.io) (x86_64-cros-linux-gnu-gcc (Gentoo Hardened 13.3.1_p20240614 p17) 13.3.1 20240614, GNU ld (Gentoo 2.42 p3) 2.42.0) #1 SMP PREEMPT_DYNAMIC Mon Jan 13 19:40:50 -00 2025
Jan 13 21:24:44.897126 kernel: Command line: BOOT_IMAGE=/flatcar/vmlinuz-a mount.usr=/dev/mapper/usr verity.usr=PARTUUID=7130c94a-213a-4e5a-8e26-6cce9662f132 rootflags=rw mount.usrflags=ro consoleblank=0 root=LABEL=ROOT console=ttyS0,115200 flatcar.first_boot=detected verity.usrhash=8945029ddd0f3864592f8746dde99cfcba228e0d3cb946f5938103dbe8733507
Jan 13 21:24:44.897141 kernel: BIOS-provided physical RAM map:
Jan 13 21:24:44.897149 kernel: BIOS-e820: [mem 0x0000000000000000-0x000000000009ffff] usable
Jan 13 21:24:44.897157 kernel: BIOS-e820: [mem 0x0000000000100000-0x00000000007fffff] usable
Jan 13 21:24:44.897165 kernel: BIOS-e820: [mem 0x0000000000800000-0x0000000000807fff] ACPI NVS
Jan 13 21:24:44.897175 kernel: BIOS-e820: [mem 0x0000000000808000-0x000000000080afff] usable
Jan 13 21:24:44.897183 kernel: BIOS-e820: [mem 0x000000000080b000-0x000000000080bfff] ACPI NVS
Jan 13 21:24:44.897192 kernel: BIOS-e820: [mem 0x000000000080c000-0x000000000080ffff] usable
Jan 13 21:24:44.897200 kernel: BIOS-e820: [mem 0x0000000000810000-0x00000000008fffff] ACPI NVS
Jan 13 21:24:44.897211 kernel: BIOS-e820: [mem 0x0000000000900000-0x000000009c8eefff] usable
Jan 13 21:24:44.897219 kernel: BIOS-e820: [mem 0x000000009c8ef000-0x000000009c9eefff] reserved
Jan 13 21:24:44.897228 kernel: BIOS-e820: [mem 0x000000009c9ef000-0x000000009caeefff] type 20
Jan 13 21:24:44.897236 kernel: BIOS-e820: [mem 0x000000009caef000-0x000000009cb6efff] reserved
Jan 13 21:24:44.897247 kernel: BIOS-e820: [mem 0x000000009cb6f000-0x000000009cb7efff] ACPI data
Jan 13 21:24:44.897256 kernel: BIOS-e820: [mem 0x000000009cb7f000-0x000000009cbfefff] ACPI NVS
Jan 13 21:24:44.897268 kernel: BIOS-e820: [mem 0x000000009cbff000-0x000000009cf3ffff] usable
Jan 13 21:24:44.897277 kernel: BIOS-e820: [mem 0x000000009cf40000-0x000000009cf5ffff] reserved
Jan 13 21:24:44.897286 kernel: BIOS-e820: [mem 0x000000009cf60000-0x000000009cffffff] ACPI NVS
Jan 13 21:24:44.897295 kernel: BIOS-e820: [mem 0x00000000b0000000-0x00000000bfffffff] reserved
Jan 13 21:24:44.897304 kernel: NX (Execute Disable) protection: active
Jan 13 21:24:44.897326 kernel: APIC: Static calls initialized
Jan 13 21:24:44.897335 kernel: efi: EFI v2.7 by EDK II
Jan 13 21:24:44.897344 kernel: efi: SMBIOS=0x9c9ab000 ACPI=0x9cb7e000 ACPI 2.0=0x9cb7e014 MEMATTR=0x9b674118
Jan 13 21:24:44.897353 kernel: SMBIOS 2.8 present.
Jan 13 21:24:44.897362 kernel: DMI: QEMU Standard PC (Q35 + ICH9, 2009), BIOS 0.0.0 02/06/2015
Jan 13 21:24:44.897371 kernel: Hypervisor detected: KVM
Jan 13 21:24:44.897383 kernel: kvm-clock: Using msrs 4b564d01 and 4b564d00
Jan 13 21:24:44.897392 kernel: kvm-clock: using sched offset of 4216708988 cycles
Jan 13 21:24:44.897402 kernel: clocksource: kvm-clock: mask: 0xffffffffffffffff max_cycles: 0x1cd42e4dffb, max_idle_ns: 881590591483 ns
Jan 13 21:24:44.897411 kernel: tsc: Detected 2794.748 MHz processor
Jan 13 21:24:44.897421 kernel: e820: update [mem 0x00000000-0x00000fff] usable ==> reserved
Jan 13 21:24:44.897431 kernel: e820: remove [mem 0x000a0000-0x000fffff] usable
Jan 13 21:24:44.897456 kernel: last_pfn = 0x9cf40 max_arch_pfn = 0x400000000
Jan 13 21:24:44.897483 kernel: MTRR map: 4 entries (2 fixed + 2 variable; max 18), built from 8 variable MTRRs
Jan 13 21:24:44.897493 kernel: x86/PAT: Configuration [0-7]: WB WC UC- UC WB WP UC- WT
Jan 13 21:24:44.897506 kernel: Using GB pages for direct mapping
Jan 13 21:24:44.897515 kernel: Secure boot disabled
Jan 13 21:24:44.897524 kernel: ACPI: Early table checksum verification disabled
Jan 13 21:24:44.897534 kernel: ACPI: RSDP 0x000000009CB7E014 000024 (v02 BOCHS )
Jan 13 21:24:44.897553 kernel: ACPI: XSDT 0x000000009CB7D0E8 000054 (v01 BOCHS BXPC 00000001 01000013)
Jan 13 21:24:44.897563 kernel: ACPI: FACP 0x000000009CB79000 0000F4 (v03 BOCHS BXPC 00000001 BXPC 00000001)
Jan 13 21:24:44.897572 kernel: ACPI: DSDT 0x000000009CB7A000 0021A8 (v01 BOCHS BXPC 00000001 BXPC 00000001)
Jan 13 21:24:44.897584 kernel: ACPI: FACS 0x000000009CBDD000 000040
Jan 13 21:24:44.897594 kernel: ACPI: APIC 0x000000009CB78000 000090 (v01 BOCHS BXPC 00000001 BXPC 00000001)
Jan 13 21:24:44.897604 kernel: ACPI: HPET 0x000000009CB77000 000038 (v01 BOCHS BXPC 00000001 BXPC 00000001)
Jan 13 21:24:44.897613 kernel: ACPI: MCFG 0x000000009CB76000 00003C (v01 BOCHS BXPC 00000001 BXPC 00000001)
Jan 13 21:24:44.897623 kernel: ACPI: WAET 0x000000009CB75000 000028 (v01 BOCHS BXPC 00000001 BXPC 00000001)
Jan 13 21:24:44.897632 kernel: ACPI: BGRT 0x000000009CB74000 000038 (v01 INTEL EDK2 00000002 01000013)
Jan 13 21:24:44.897642 kernel: ACPI: Reserving FACP table memory at [mem 0x9cb79000-0x9cb790f3]
Jan 13 21:24:44.897654 kernel: ACPI: Reserving DSDT table memory at [mem 0x9cb7a000-0x9cb7c1a7]
Jan 13 21:24:44.897664 kernel: ACPI: Reserving FACS table memory at [mem 0x9cbdd000-0x9cbdd03f]
Jan 13 21:24:44.897674 kernel: ACPI: Reserving APIC table memory at [mem 0x9cb78000-0x9cb7808f]
Jan 13 21:24:44.897683 kernel: ACPI: Reserving HPET table memory at [mem 0x9cb77000-0x9cb77037]
Jan 13 21:24:44.897693 kernel: ACPI: Reserving MCFG table memory at [mem 0x9cb76000-0x9cb7603b]
Jan 13 21:24:44.897705 kernel: ACPI: Reserving WAET table memory at [mem 0x9cb75000-0x9cb75027]
Jan 13 21:24:44.897716 kernel: ACPI: Reserving BGRT table memory at [mem 0x9cb74000-0x9cb74037]
Jan 13 21:24:44.897727 kernel: No NUMA configuration found
Jan 13 21:24:44.897737 kernel: Faking a node at [mem 0x0000000000000000-0x000000009cf3ffff]
Jan 13 21:24:44.897750 kernel: NODE_DATA(0) allocated [mem 0x9cea6000-0x9ceabfff]
Jan 13 21:24:44.897759 kernel: Zone ranges:
Jan 13 21:24:44.897769 kernel: DMA [mem 0x0000000000001000-0x0000000000ffffff]
Jan 13 21:24:44.897778 kernel: DMA32 [mem 0x0000000001000000-0x000000009cf3ffff]
Jan 13 21:24:44.897787 kernel: Normal empty
Jan 13 21:24:44.897797 kernel: Movable zone start for each node
Jan 13 21:24:44.897806 kernel: Early memory node ranges
Jan 13 21:24:44.897816 kernel: node 0: [mem 0x0000000000001000-0x000000000009ffff]
Jan 13 21:24:44.897825 kernel: node 0: [mem 0x0000000000100000-0x00000000007fffff]
Jan 13 21:24:44.897835 kernel: node 0: [mem 0x0000000000808000-0x000000000080afff]
Jan 13 21:24:44.897847 kernel: node 0: [mem 0x000000000080c000-0x000000000080ffff]
Jan 13 21:24:44.897857 kernel: node 0: [mem 0x0000000000900000-0x000000009c8eefff]
Jan 13 21:24:44.897866 kernel: node 0: [mem 0x000000009cbff000-0x000000009cf3ffff]
Jan 13 21:24:44.897875 kernel: Initmem setup node 0 [mem 0x0000000000001000-0x000000009cf3ffff]
Jan 13 21:24:44.897885 kernel: On node 0, zone DMA: 1 pages in unavailable ranges
Jan 13 21:24:44.897894 kernel: On node 0, zone DMA: 96 pages in unavailable ranges
Jan 13 21:24:44.897904 kernel: On node 0, zone DMA: 8 pages in unavailable ranges
Jan 13 21:24:44.897913 kernel: On node 0, zone DMA: 1 pages in unavailable ranges
Jan 13 21:24:44.897922 kernel: On node 0, zone DMA: 240 pages in unavailable ranges
Jan 13 21:24:44.897935 kernel: On node 0, zone DMA32: 784 pages in unavailable ranges
Jan 13 21:24:44.897945 kernel: On node 0, zone DMA32: 12480 pages in unavailable ranges
Jan 13 21:24:44.897955 kernel: ACPI: PM-Timer IO Port: 0x608
Jan 13 21:24:44.897964 kernel: ACPI: LAPIC_NMI (acpi_id[0xff] dfl dfl lint[0x1])
Jan 13 21:24:44.897973 kernel: IOAPIC[0]: apic_id 0, version 17, address 0xfec00000, GSI 0-23
Jan 13 21:24:44.897983 kernel: ACPI: INT_SRC_OVR (bus 0 bus_irq 0 global_irq 2 dfl dfl)
Jan 13 21:24:44.897992 kernel: ACPI: INT_SRC_OVR (bus 0 bus_irq 5 global_irq 5 high level)
Jan 13 21:24:44.898002 kernel: ACPI: INT_SRC_OVR (bus 0 bus_irq 9 global_irq 9 high level)
Jan 13 21:24:44.898012 kernel: ACPI: INT_SRC_OVR (bus 0 bus_irq 10 global_irq 10 high level)
Jan 13 21:24:44.898024 kernel: ACPI: INT_SRC_OVR (bus 0 bus_irq 11 global_irq 11 high level)
Jan 13 21:24:44.898034 kernel: ACPI: Using ACPI (MADT) for SMP configuration information
Jan 13 21:24:44.898044 kernel: ACPI: HPET id: 0x8086a201 base: 0xfed00000
Jan 13 21:24:44.898053 kernel: TSC deadline timer available
Jan 13 21:24:44.898063 kernel: smpboot: Allowing 4 CPUs, 0 hotplug CPUs
Jan 13 21:24:44.898073 kernel: kvm-guest: APIC: eoi() replaced with kvm_guest_apic_eoi_write()
Jan 13 21:24:44.898083 kernel: kvm-guest: KVM setup pv remote TLB flush
Jan 13 21:24:44.898092 kernel: kvm-guest: setup PV sched yield
Jan 13 21:24:44.898102 kernel: [mem 0xc0000000-0xffffffff] available for PCI devices
Jan 13 21:24:44.898112 kernel: Booting paravirtualized kernel on KVM
Jan 13 21:24:44.898125 kernel: clocksource: refined-jiffies: mask: 0xffffffff max_cycles: 0xffffffff, max_idle_ns: 1910969940391419 ns
Jan 13 21:24:44.898135 kernel: setup_percpu: NR_CPUS:512 nr_cpumask_bits:4 nr_cpu_ids:4 nr_node_ids:1
Jan 13 21:24:44.898145 kernel: percpu: Embedded 58 pages/cpu s197032 r8192 d32344 u524288
Jan 13 21:24:44.898155 kernel: pcpu-alloc: s197032 r8192 d32344 u524288 alloc=1*2097152
Jan 13 21:24:44.898164 kernel: pcpu-alloc: [0] 0 1 2 3
Jan 13 21:24:44.898174 kernel: kvm-guest: PV spinlocks enabled
Jan 13 21:24:44.898184 kernel: PV qspinlock hash table entries: 256 (order: 0, 4096 bytes, linear)
Jan 13 21:24:44.898195 kernel: Kernel command line: rootflags=rw mount.usrflags=ro BOOT_IMAGE=/flatcar/vmlinuz-a mount.usr=/dev/mapper/usr verity.usr=PARTUUID=7130c94a-213a-4e5a-8e26-6cce9662f132 rootflags=rw mount.usrflags=ro consoleblank=0 root=LABEL=ROOT console=ttyS0,115200 flatcar.first_boot=detected verity.usrhash=8945029ddd0f3864592f8746dde99cfcba228e0d3cb946f5938103dbe8733507
Jan 13 21:24:44.898208 kernel: Unknown kernel command line parameters "BOOT_IMAGE=/flatcar/vmlinuz-a", will be passed to user space.
Jan 13 21:24:44.898218 kernel: random: crng init done
Jan 13 21:24:44.898227 kernel: Dentry cache hash table entries: 524288 (order: 10, 4194304 bytes, linear)
Jan 13 21:24:44.898236 kernel: Inode-cache hash table entries: 262144 (order: 9, 2097152 bytes, linear)
Jan 13 21:24:44.898246 kernel: Fallback order for Node 0: 0
Jan 13 21:24:44.898256 kernel: Built 1 zonelists, mobility grouping on. Total pages: 629759
Jan 13 21:24:44.898265 kernel: Policy zone: DMA32
Jan 13 21:24:44.898275 kernel: mem auto-init: stack:off, heap alloc:off, heap free:off
Jan 13 21:24:44.898285 kernel: Memory: 2395616K/2567000K available (12288K kernel code, 2299K rwdata, 22728K rodata, 42844K init, 2348K bss, 171124K reserved, 0K cma-reserved)
Jan 13 21:24:44.898297 kernel: SLUB: HWalign=64, Order=0-3, MinObjects=0, CPUs=4, Nodes=1
Jan 13 21:24:44.898307 kernel: ftrace: allocating 37918 entries in 149 pages
Jan 13 21:24:44.898487 kernel: ftrace: allocated 149 pages with 4 groups
Jan 13 21:24:44.898498 kernel: Dynamic Preempt: voluntary
Jan 13 21:24:44.898518 kernel: rcu: Preemptible hierarchical RCU implementation.
Jan 13 21:24:44.898531 kernel: rcu: RCU event tracing is enabled.
Jan 13 21:24:44.898541 kernel: rcu: RCU restricting CPUs from NR_CPUS=512 to nr_cpu_ids=4.
Jan 13 21:24:44.898552 kernel: Trampoline variant of Tasks RCU enabled.
Jan 13 21:24:44.898562 kernel: Rude variant of Tasks RCU enabled.
Jan 13 21:24:44.898572 kernel: Tracing variant of Tasks RCU enabled.
Jan 13 21:24:44.898582 kernel: rcu: RCU calculated value of scheduler-enlistment delay is 100 jiffies.
Jan 13 21:24:44.898592 kernel: rcu: Adjusting geometry for rcu_fanout_leaf=16, nr_cpu_ids=4
Jan 13 21:24:44.898605 kernel: NR_IRQS: 33024, nr_irqs: 456, preallocated irqs: 16
Jan 13 21:24:44.898615 kernel: rcu: srcu_init: Setting srcu_struct sizes based on contention.
Jan 13 21:24:44.898625 kernel: Console: colour dummy device 80x25
Jan 13 21:24:44.898635 kernel: printk: console [ttyS0] enabled
Jan 13 21:24:44.898645 kernel: ACPI: Core revision 20230628
Jan 13 21:24:44.898658 kernel: clocksource: hpet: mask: 0xffffffff max_cycles: 0xffffffff, max_idle_ns: 19112604467 ns
Jan 13 21:24:44.898668 kernel: APIC: Switch to symmetric I/O mode setup
Jan 13 21:24:44.898678 kernel: x2apic enabled
Jan 13 21:24:44.898688 kernel: APIC: Switched APIC routing to: physical x2apic
Jan 13 21:24:44.898698 kernel: kvm-guest: APIC: send_IPI_mask() replaced with kvm_send_ipi_mask()
Jan 13 21:24:44.898708 kernel: kvm-guest: APIC: send_IPI_mask_allbutself() replaced with kvm_send_ipi_mask_allbutself()
Jan 13 21:24:44.898719 kernel: kvm-guest: setup PV IPIs
Jan 13 21:24:44.898729 kernel: ..TIMER: vector=0x30 apic1=0 pin1=2 apic2=-1 pin2=-1
Jan 13 21:24:44.898739 kernel: tsc: Marking TSC unstable due to TSCs unsynchronized
Jan 13 21:24:44.898751 kernel: Calibrating delay loop (skipped) preset value.. 5589.49 BogoMIPS (lpj=2794748)
Jan 13 21:24:44.898761 kernel: x86/cpu: User Mode Instruction Prevention (UMIP) activated
Jan 13 21:24:44.898771 kernel: Last level iTLB entries: 4KB 512, 2MB 255, 4MB 127
Jan 13 21:24:44.898781 kernel: Last level dTLB entries: 4KB 512, 2MB 255, 4MB 127, 1GB 0
Jan 13 21:24:44.898792 kernel: Spectre V1 : Mitigation: usercopy/swapgs barriers and __user pointer sanitization
Jan 13 21:24:44.898802 kernel: Spectre V2 : Mitigation: Retpolines
Jan 13 21:24:44.898812 kernel: Spectre V2 : Spectre v2 / SpectreRSB mitigation: Filling RSB on context switch
Jan 13 21:24:44.898822 kernel: Spectre V2 : Spectre v2 / SpectreRSB : Filling RSB on VMEXIT
Jan 13 21:24:44.898832 kernel: Spectre V2 : Enabling Speculation Barrier for firmware calls
Jan 13 21:24:44.898844 kernel: RETBleed: Mitigation: untrained return thunk
Jan 13 21:24:44.898854 kernel: Spectre V2 : mitigation: Enabling conditional Indirect Branch Prediction Barrier
Jan 13 21:24:44.898864 kernel: Speculative Store Bypass: Mitigation: Speculative Store Bypass disabled via prctl
Jan 13 21:24:44.898873 kernel: Speculative Return Stack Overflow: IBPB-extending microcode not applied!
Jan 13 21:24:44.898884 kernel: Speculative Return Stack Overflow: WARNING: See https://kernel.org/doc/html/latest/admin-guide/hw-vuln/srso.html for mitigation options.
Jan 13 21:24:44.898893 kernel: Speculative Return Stack Overflow: Vulnerable: Safe RET, no microcode
Jan 13 21:24:44.898904 kernel: x86/fpu: Supporting XSAVE feature 0x001: 'x87 floating point registers'
Jan 13 21:24:44.898913 kernel: x86/fpu: Supporting XSAVE feature 0x002: 'SSE registers'
Jan 13 21:24:44.898926 kernel: x86/fpu: Supporting XSAVE feature 0x004: 'AVX registers'
Jan 13 21:24:44.898937 kernel: x86/fpu: xstate_offset[2]: 576, xstate_sizes[2]: 256
Jan 13 21:24:44.898947 kernel: x86/fpu: Enabled xstate features 0x7, context size is 832 bytes, using 'compacted' format.
Jan 13 21:24:44.898957 kernel: Freeing SMP alternatives memory: 32K
Jan 13 21:24:44.898967 kernel: pid_max: default: 32768 minimum: 301
Jan 13 21:24:44.898977 kernel: LSM: initializing lsm=lockdown,capability,landlock,selinux,integrity
Jan 13 21:24:44.898987 kernel: landlock: Up and running.
Jan 13 21:24:44.898997 kernel: SELinux: Initializing.
Jan 13 21:24:44.899007 kernel: Mount-cache hash table entries: 8192 (order: 4, 65536 bytes, linear)
Jan 13 21:24:44.899020 kernel: Mountpoint-cache hash table entries: 8192 (order: 4, 65536 bytes, linear)
Jan 13 21:24:44.899031 kernel: smpboot: CPU0: AMD EPYC 7402P 24-Core Processor (family: 0x17, model: 0x31, stepping: 0x0)
Jan 13 21:24:44.899040 kernel: RCU Tasks: Setting shift to 2 and lim to 1 rcu_task_cb_adjust=1 rcu_task_cpu_ids=4.
Jan 13 21:24:44.899050 kernel: RCU Tasks Rude: Setting shift to 2 and lim to 1 rcu_task_cb_adjust=1 rcu_task_cpu_ids=4.
Jan 13 21:24:44.899060 kernel: RCU Tasks Trace: Setting shift to 2 and lim to 1 rcu_task_cb_adjust=1 rcu_task_cpu_ids=4.
Jan 13 21:24:44.899070 kernel: Performance Events: Fam17h+ core perfctr, AMD PMU driver.
Jan 13 21:24:44.899080 kernel: ... version: 0
Jan 13 21:24:44.899089 kernel: ... bit width: 48
Jan 13 21:24:44.899099 kernel: ... generic registers: 6
Jan 13 21:24:44.899112 kernel: ... value mask: 0000ffffffffffff
Jan 13 21:24:44.899121 kernel: ... max period: 00007fffffffffff
Jan 13 21:24:44.899131 kernel: ... fixed-purpose events: 0
Jan 13 21:24:44.899141 kernel: ... event mask: 000000000000003f
Jan 13 21:24:44.899151 kernel: signal: max sigframe size: 1776
Jan 13 21:24:44.899161 kernel: rcu: Hierarchical SRCU implementation.
Jan 13 21:24:44.899171 kernel: rcu: Max phase no-delay instances is 400.
Jan 13 21:24:44.899182 kernel: smp: Bringing up secondary CPUs ...
Jan 13 21:24:44.899192 kernel: smpboot: x86: Booting SMP configuration:
Jan 13 21:24:44.899206 kernel: .... node #0, CPUs: #1 #2 #3
Jan 13 21:24:44.899216 kernel: smp: Brought up 1 node, 4 CPUs
Jan 13 21:24:44.899226 kernel: smpboot: Max logical packages: 1
Jan 13 21:24:44.899237 kernel: smpboot: Total of 4 processors activated (22357.98 BogoMIPS)
Jan 13 21:24:44.899247 kernel: devtmpfs: initialized
Jan 13 21:24:44.899257 kernel: x86/mm: Memory block size: 128MB
Jan 13 21:24:44.899267 kernel: ACPI: PM: Registering ACPI NVS region [mem 0x00800000-0x00807fff] (32768 bytes)
Jan 13 21:24:44.899277 kernel: ACPI: PM: Registering ACPI NVS region [mem 0x0080b000-0x0080bfff] (4096 bytes)
Jan 13 21:24:44.899287 kernel: ACPI: PM: Registering ACPI NVS region [mem 0x00810000-0x008fffff] (983040 bytes)
Jan 13 21:24:44.899301 kernel: ACPI: PM: Registering ACPI NVS region [mem 0x9cb7f000-0x9cbfefff] (524288 bytes)
Jan 13 21:24:44.899324 kernel: ACPI: PM: Registering ACPI NVS region [mem 0x9cf60000-0x9cffffff] (655360 bytes)
Jan 13 21:24:44.899334 kernel: clocksource: jiffies: mask: 0xffffffff max_cycles: 0xffffffff, max_idle_ns: 1911260446275000 ns
Jan 13 21:24:44.899344 kernel: futex hash table entries: 1024 (order: 4, 65536 bytes, linear)
Jan 13 21:24:44.899354 kernel: pinctrl core: initialized pinctrl subsystem
Jan 13 21:24:44.899363 kernel: NET: Registered PF_NETLINK/PF_ROUTE protocol family
Jan 13 21:24:44.899373 kernel: audit: initializing netlink subsys (disabled)
Jan 13 21:24:44.899383 kernel: audit: type=2000 audit(1736803484.784:1): state=initialized audit_enabled=0 res=1
Jan 13 21:24:44.899393 kernel: thermal_sys: Registered thermal governor 'step_wise'
Jan 13 21:24:44.899406 kernel: thermal_sys: Registered thermal governor 'user_space'
Jan 13 21:24:44.899415 kernel: cpuidle: using governor menu
Jan 13 21:24:44.899425 kernel: acpiphp: ACPI Hot Plug PCI Controller Driver version: 0.5
Jan 13 21:24:44.899434 kernel: dca service started, version 1.12.1
Jan 13 21:24:44.899455 kernel: PCI: MMCONFIG for domain 0000 [bus 00-ff] at [mem 0xb0000000-0xbfffffff] (base 0xb0000000)
Jan 13 21:24:44.899465 kernel: PCI: MMCONFIG at [mem 0xb0000000-0xbfffffff] reserved as E820 entry
Jan 13 21:24:44.899475 kernel: PCI: Using configuration type 1 for base access
Jan 13 21:24:44.899485 kernel: kprobes: kprobe jump-optimization is enabled. All kprobes are optimized if possible.
Jan 13 21:24:44.899495 kernel: HugeTLB: registered 1.00 GiB page size, pre-allocated 0 pages
Jan 13 21:24:44.899508 kernel: HugeTLB: 16380 KiB vmemmap can be freed for a 1.00 GiB page
Jan 13 21:24:44.899518 kernel: HugeTLB: registered 2.00 MiB page size, pre-allocated 0 pages
Jan 13 21:24:44.899529 kernel: HugeTLB: 28 KiB vmemmap can be freed for a 2.00 MiB page
Jan 13 21:24:44.899539 kernel: ACPI: Added _OSI(Module Device)
Jan 13 21:24:44.899549 kernel: ACPI: Added _OSI(Processor Device)
Jan 13 21:24:44.899582 kernel: ACPI: Added _OSI(3.0 _SCP Extensions)
Jan 13 21:24:44.899593 kernel: ACPI: Added _OSI(Processor Aggregator Device)
Jan 13 21:24:44.899603 kernel: ACPI: 1 ACPI AML tables successfully acquired and loaded
Jan 13 21:24:44.899613 kernel: ACPI: _OSC evaluation for CPUs failed, trying _PDC
Jan 13 21:24:44.899627 kernel: ACPI: Interpreter enabled
Jan 13 21:24:44.899638 kernel: ACPI: PM: (supports S0 S3 S5)
Jan 13 21:24:44.899648 kernel: ACPI: Using IOAPIC for interrupt routing
Jan 13 21:24:44.899659 kernel: PCI: Using host bridge windows from ACPI; if necessary, use "pci=nocrs" and report a bug
Jan 13 21:24:44.899669 kernel: PCI: Using E820 reservations for host bridge windows
Jan 13 21:24:44.899680 kernel: ACPI: Enabled 2 GPEs in block 00 to 3F
Jan 13 21:24:44.899690 kernel: ACPI: PCI Root Bridge [PCI0] (domain 0000 [bus 00-ff])
Jan 13 21:24:44.899905 kernel: acpi PNP0A08:00: _OSC: OS supports [ExtendedConfig ASPM ClockPM Segments MSI HPX-Type3]
Jan 13 21:24:44.900071 kernel: acpi PNP0A08:00: _OSC: platform does not support [PCIeHotplug LTR]
Jan 13 21:24:44.900224 kernel: acpi PNP0A08:00: _OSC: OS now controls [PME AER PCIeCapability]
Jan 13 21:24:44.900239 kernel: PCI host bridge to bus 0000:00
Jan 13 21:24:44.900417 kernel: pci_bus 0000:00: root bus resource [io 0x0000-0x0cf7 window]
Jan 13 21:24:44.900572 kernel: pci_bus 0000:00: root bus resource [io 0x0d00-0xffff window]
Jan 13 21:24:44.900719 kernel: pci_bus 0000:00: root bus resource [mem 0x000a0000-0x000bffff window]
Jan 13 21:24:44.900861 kernel: pci_bus 0000:00: root bus resource [mem 0x9d000000-0xafffffff window]
Jan 13 21:24:44.901007 kernel: pci_bus 0000:00: root bus resource [mem 0xc0000000-0xfebfffff window]
Jan 13 21:24:44.901131 kernel: pci_bus 0000:00: root bus resource [mem 0x800000000-0xfffffffff window]
Jan 13 21:24:44.901252 kernel: pci_bus 0000:00: root bus resource [bus 00-ff]
Jan 13 21:24:44.901433 kernel: pci 0000:00:00.0: [8086:29c0] type 00 class 0x060000
Jan 13 21:24:44.901596 kernel: pci 0000:00:01.0: [1234:1111] type 00 class 0x030000
Jan 13 21:24:44.901720 kernel: pci 0000:00:01.0: reg 0x10: [mem 0xc0000000-0xc0ffffff pref]
Jan 13 21:24:44.901863 kernel: pci 0000:00:01.0: reg 0x18: [mem 0xc1044000-0xc1044fff]
Jan 13 21:24:44.902041 kernel: pci 0000:00:01.0: reg 0x30: [mem 0xffff0000-0xffffffff pref]
Jan 13 21:24:44.902198 kernel: pci 0000:00:01.0: BAR 0: assigned to efifb
Jan 13 21:24:44.902371 kernel: pci 0000:00:01.0: Video device with shadowed ROM at [mem 0x000c0000-0x000dffff]
Jan 13 21:24:44.902553 kernel: pci 0000:00:02.0: [1af4:1005] type 00 class 0x00ff00
Jan 13 21:24:44.902709 kernel: pci 0000:00:02.0: reg 0x10: [io 0x6100-0x611f]
Jan 13 21:24:44.902865 kernel: pci 0000:00:02.0: reg 0x14: [mem 0xc1043000-0xc1043fff]
Jan 13 21:24:44.903023 kernel: pci 0000:00:02.0: reg 0x20: [mem 0x800000000-0x800003fff 64bit pref]
Jan 13 21:24:44.903196 kernel: pci 0000:00:03.0: [1af4:1001] type 00 class 0x010000
Jan 13 21:24:44.903370 kernel: pci 0000:00:03.0: reg 0x10: [io 0x6000-0x607f]
Jan 13 21:24:44.903564 kernel: pci 0000:00:03.0: reg 0x14: [mem 0xc1042000-0xc1042fff]
Jan 13 21:24:44.903720 kernel: pci 0000:00:03.0: reg 0x20: [mem 0x800004000-0x800007fff 64bit pref]
Jan 13 21:24:44.903881 kernel: pci 0000:00:04.0: [1af4:1000] type 00 class 0x020000
Jan 13 21:24:44.904036 kernel: pci 0000:00:04.0: reg 0x10: [io 0x60e0-0x60ff]
Jan 13 21:24:44.904194 kernel: pci 0000:00:04.0: reg 0x14: [mem 0xc1041000-0xc1041fff]
Jan 13 21:24:44.904367 kernel: pci 0000:00:04.0: reg 0x20: [mem 0x800008000-0x80000bfff 64bit pref]
Jan 13 21:24:44.904534 kernel: pci 0000:00:04.0: reg 0x30: [mem 0xfffc0000-0xffffffff pref]
Jan 13 21:24:44.904695 kernel: pci 0000:00:1f.0: [8086:2918] type 00 class 0x060100
Jan 13 21:24:44.904849 kernel: pci 0000:00:1f.0: quirk: [io 0x0600-0x067f] claimed by ICH6 ACPI/GPIO/TCO
Jan 13 21:24:44.905011 kernel: pci 0000:00:1f.2: [8086:2922] type 00 class 0x010601
Jan 13 21:24:44.905168 kernel: pci 0000:00:1f.2: reg 0x20: [io 0x60c0-0x60df]
Jan 13 21:24:44.905340 kernel: pci 0000:00:1f.2: reg 0x24: [mem 0xc1040000-0xc1040fff]
Jan 13 21:24:44.905516 kernel: pci 0000:00:1f.3: [8086:2930] type 00 class 0x0c0500
Jan 13 21:24:44.905669 kernel: pci 0000:00:1f.3: reg 0x20: [io 0x6080-0x60bf]
Jan 13 21:24:44.905684 kernel: ACPI: PCI: Interrupt link LNKA configured for IRQ 10
Jan 13 21:24:44.905695 kernel: ACPI: PCI: Interrupt link LNKB configured for IRQ 10
Jan 13 21:24:44.905706 kernel: ACPI: PCI: Interrupt link LNKC configured for IRQ 11
Jan 13 21:24:44.905717 kernel: ACPI: PCI: Interrupt link LNKD configured for IRQ 11
Jan 13 21:24:44.905733 kernel: ACPI: PCI: Interrupt link LNKE configured for IRQ 10
Jan 13 21:24:44.905744 kernel: ACPI: PCI: Interrupt link LNKF configured for IRQ 10
Jan 13 21:24:44.905755 kernel: ACPI: PCI: Interrupt link LNKG configured for IRQ 11
Jan 13 21:24:44.905766 kernel: ACPI: PCI: Interrupt link LNKH configured for IRQ 11
Jan 13 21:24:44.905777 kernel: ACPI: PCI: Interrupt link GSIA configured for IRQ 16
Jan 13 21:24:44.905788 kernel: ACPI: PCI: Interrupt link GSIB configured for IRQ 17
Jan 13 21:24:44.905799 kernel: ACPI: PCI: Interrupt link GSIC configured for IRQ 18
Jan 13 21:24:44.905809 kernel: ACPI: PCI: Interrupt link GSID configured for IRQ 19
Jan 13 21:24:44.905821 kernel: ACPI: PCI: Interrupt link GSIE configured for IRQ 20
Jan 13 21:24:44.905835 kernel: ACPI: PCI: Interrupt link GSIF configured for IRQ 21
Jan 13 21:24:44.905845 kernel: ACPI: PCI: Interrupt link GSIG configured for IRQ 22
Jan 13 21:24:44.905856 kernel: ACPI: PCI: Interrupt link GSIH configured for IRQ 23
Jan 13 21:24:44.905867 kernel: iommu: Default domain type: Translated
Jan 13 21:24:44.905878 kernel: iommu: DMA domain TLB invalidation policy: lazy mode
Jan 13 21:24:44.905889 kernel: efivars: Registered efivars operations
Jan 13 21:24:44.905900 kernel: PCI: Using ACPI for IRQ routing
Jan 13 21:24:44.905911 kernel: PCI: pci_cache_line_size set to 64 bytes
Jan 13 21:24:44.905922 kernel: e820: reserve RAM buffer [mem 0x0080b000-0x008fffff]
Jan 13 21:24:44.905935 kernel: e820: reserve RAM buffer [mem 0x00810000-0x008fffff]
Jan 13 21:24:44.905946 kernel: e820: reserve RAM buffer [mem 0x9c8ef000-0x9fffffff]
Jan 13 21:24:44.905956 kernel: e820: reserve RAM buffer [mem 0x9cf40000-0x9fffffff]
Jan 13 21:24:44.906109 kernel: pci 0000:00:01.0: vgaarb: setting as boot VGA device
Jan 13 21:24:44.906260 kernel: pci 0000:00:01.0: vgaarb: bridge control possible
Jan 13 21:24:44.906452 kernel: pci 0000:00:01.0: vgaarb: VGA device added: decodes=io+mem,owns=io+mem,locks=none
Jan 13 21:24:44.906469 kernel: vgaarb: loaded
Jan 13 21:24:44.906481 kernel: hpet0: at MMIO 0xfed00000, IRQs 2, 8, 0
Jan 13 21:24:44.906492 kernel: hpet0: 3 comparators, 64-bit 100.000000 MHz counter
Jan 13 21:24:44.906508 kernel: clocksource: Switched to clocksource kvm-clock
Jan 13 21:24:44.906519 kernel: VFS: Disk quotas dquot_6.6.0
Jan 13 21:24:44.906530 kernel: VFS: Dquot-cache hash table entries: 512 (order 0, 4096 bytes)
Jan 13 21:24:44.906540 kernel: pnp: PnP ACPI init
Jan 13 21:24:44.906719 kernel: system 00:05: [mem 0xb0000000-0xbfffffff window] has been reserved
Jan 13 21:24:44.906735 kernel: pnp: PnP ACPI: found 6 devices
Jan 13 21:24:44.906747 kernel: clocksource: acpi_pm: mask: 0xffffff max_cycles: 0xffffff, max_idle_ns: 2085701024 ns
Jan 13 21:24:44.906760 kernel: NET: Registered PF_INET protocol family
Jan 13 21:24:44.906777 kernel: IP idents hash table entries: 65536 (order: 7, 524288 bytes, linear)
Jan 13 21:24:44.906789 kernel: tcp_listen_portaddr_hash hash table entries: 2048 (order: 3, 32768 bytes, linear)
Jan 13 21:24:44.906800 kernel: Table-perturb hash table entries: 65536 (order: 6, 262144 bytes, linear)
Jan 13 21:24:44.906811 kernel: TCP established hash table entries: 32768 (order: 6, 262144 bytes, linear)
Jan 13 21:24:44.906822 kernel: TCP bind hash table entries: 32768 (order: 8, 1048576 bytes, linear)
Jan 13 21:24:44.906832 kernel: TCP: Hash tables configured (established 32768 bind 32768)
Jan 13 21:24:44.906843 kernel: UDP hash table entries: 2048 (order: 4, 65536 bytes, linear)
Jan 13 21:24:44.906854 kernel: UDP-Lite hash table entries: 2048 (order: 4, 65536 bytes, linear)
Jan 13 21:24:44.906865 kernel: NET: Registered PF_UNIX/PF_LOCAL protocol family
Jan 13 21:24:44.906879 kernel: NET: Registered PF_XDP protocol family
Jan 13 21:24:44.907030 kernel: pci 0000:00:04.0: can't claim BAR 6 [mem 0xfffc0000-0xffffffff pref]: no compatible bridge window
Jan 13 21:24:44.907180 kernel: pci 0000:00:04.0: BAR 6: assigned [mem 0x9d000000-0x9d03ffff pref]
Jan 13 21:24:44.907384 kernel: pci_bus 0000:00: resource 4 [io 0x0000-0x0cf7 window]
Jan 13 21:24:44.907534 kernel: pci_bus 0000:00: resource 5 [io 0x0d00-0xffff window]
Jan 13 21:24:44.907668 kernel: pci_bus 0000:00: resource 6 [mem 0x000a0000-0x000bffff window]
Jan 13 21:24:44.907800 kernel: pci_bus 0000:00: resource 7 [mem 0x9d000000-0xafffffff window]
Jan 13 21:24:44.907932 kernel: pci_bus 0000:00: resource 8 [mem 0xc0000000-0xfebfffff window]
Jan 13 21:24:44.908069 kernel: pci_bus 0000:00: resource 9 [mem 0x800000000-0xfffffffff window]
Jan 13 21:24:44.908083 kernel: PCI: CLS 0 bytes, default 64
Jan 13 21:24:44.908094 kernel: Initialise system trusted keyrings
Jan 13 21:24:44.908105 kernel: workingset: timestamp_bits=39 max_order=20 bucket_order=0
Jan 13 21:24:44.908116 kernel: Key type asymmetric registered
Jan 13 21:24:44.908126 kernel: Asymmetric key parser 'x509' registered
Jan 13 21:24:44.908137 kernel: Block layer SCSI generic (bsg) driver version 0.4 loaded (major 251)
Jan 13 21:24:44.908148 kernel: io scheduler mq-deadline registered
Jan 13 21:24:44.908162 kernel: io scheduler kyber registered
Jan 13 21:24:44.908173 kernel: io scheduler bfq registered
Jan 13 21:24:44.908184 kernel: ioatdma: Intel(R) QuickData Technology Driver 5.00
Jan 13 21:24:44.908195 kernel: ACPI: \_SB_.GSIG: Enabled at IRQ 22
Jan 13 21:24:44.908206 kernel: ACPI: \_SB_.GSIH: Enabled at IRQ 23
Jan 13 21:24:44.908217 kernel: ACPI: \_SB_.GSIE: Enabled at IRQ 20
Jan 13 21:24:44.908228 kernel: Serial: 8250/16550 driver, 4 ports, IRQ sharing enabled
Jan 13 21:24:44.908239 kernel: 00:03: ttyS0 at I/O 0x3f8 (irq = 4, base_baud = 115200) is a 16550A
Jan 13 21:24:44.908250 kernel: i8042: PNP: PS/2 Controller [PNP0303:KBD,PNP0f13:MOU] at 0x60,0x64 irq 1,12
Jan 13 21:24:44.908261 kernel: serio: i8042 KBD port at 0x60,0x64 irq 1
Jan 13 21:24:44.908275 kernel: serio: i8042 AUX port at 0x60,0x64 irq 12
Jan 13 21:24:44.908440 kernel: rtc_cmos 00:04: RTC can wake from S4
Jan 13 21:24:44.908466 kernel: input: AT Translated Set 2 keyboard as /devices/platform/i8042/serio0/input/input0
Jan 13 21:24:44.908603 kernel: rtc_cmos 00:04: registered as rtc0
Jan 13 21:24:44.908739 kernel: rtc_cmos 00:04: setting system clock to 2025-01-13T21:24:44 UTC (1736803484)
Jan 13 21:24:44.908875 kernel: rtc_cmos 00:04: alarms up to one day, y3k, 242 bytes nvram, hpet irqs
Jan 13 21:24:44.908889 kernel: amd_pstate: the _CPC object is not present in SBIOS or ACPI disabled
Jan 13 21:24:44.908904 kernel: efifb: probing for efifb
Jan 13 21:24:44.908915 kernel: efifb: framebuffer at 0xc0000000, using 1408k, total 1408k
Jan 13 21:24:44.908926 kernel: efifb: mode is 800x600x24, linelength=2400, pages=1
Jan 13 21:24:44.908937 kernel: efifb: scrolling: redraw
Jan 13 21:24:44.908947 kernel: efifb: Truecolor: size=0:8:8:8, shift=0:16:8:0
Jan 13 21:24:44.908959 kernel: Console: switching to colour frame buffer device 100x37
Jan 13 21:24:44.908991 kernel: fb0: EFI VGA frame buffer device
Jan 13 21:24:44.909005 kernel: pstore: Using crash dump compression: deflate
Jan 13 21:24:44.909017 kernel: pstore: Registered efi_pstore as persistent store backend
Jan 13 21:24:44.909031 kernel: NET: Registered PF_INET6 protocol family
Jan 13 21:24:44.909042 kernel: Segment Routing with IPv6
Jan 13 21:24:44.909053 kernel: In-situ OAM (IOAM) with IPv6
Jan 13 21:24:44.909065 kernel: NET: Registered PF_PACKET protocol family
Jan 13 21:24:44.909076 kernel: Key type dns_resolver registered
Jan 13 21:24:44.909087 kernel: IPI shorthand broadcast: enabled
Jan 13 21:24:44.909099 kernel: sched_clock: Marking stable (593002341, 114680008)->(722368549, -14686200)
Jan 13 21:24:44.909110 kernel: registered taskstats version 1
Jan 13 21:24:44.909121 kernel: Loading compiled-in X.509 certificates
Jan 13 21:24:44.909133 kernel: Loaded X.509 cert 'Kinvolk GmbH: Module signing key for 6.6.71-flatcar: e8ca4908f7ff887d90a0430272c92dde55624447'
Jan 13 21:24:44.909146 kernel: Key type .fscrypt registered
Jan 13 21:24:44.909158 kernel: Key type fscrypt-provisioning registered
Jan 13 21:24:44.909169 kernel: ima: No TPM chip found, activating TPM-bypass!
Jan 13 21:24:44.909180 kernel: ima: Allocated hash algorithm: sha1 Jan 13 21:24:44.909192 kernel: ima: No architecture policies found Jan 13 21:24:44.909203 kernel: clk: Disabling unused clocks Jan 13 21:24:44.909214 kernel: Freeing unused kernel image (initmem) memory: 42844K Jan 13 21:24:44.909225 kernel: Write protecting the kernel read-only data: 36864k Jan 13 21:24:44.909239 kernel: Freeing unused kernel image (rodata/data gap) memory: 1848K Jan 13 21:24:44.909251 kernel: Run /init as init process Jan 13 21:24:44.909262 kernel: with arguments: Jan 13 21:24:44.909273 kernel: /init Jan 13 21:24:44.909284 kernel: with environment: Jan 13 21:24:44.909294 kernel: HOME=/ Jan 13 21:24:44.909305 kernel: TERM=linux Jan 13 21:24:44.909330 kernel: BOOT_IMAGE=/flatcar/vmlinuz-a Jan 13 21:24:44.909347 systemd[1]: systemd 255 running in system mode (+PAM +AUDIT +SELINUX -APPARMOR +IMA +SMACK +SECCOMP +GCRYPT -GNUTLS +OPENSSL -ACL +BLKID +CURL +ELFUTILS -FIDO2 +IDN2 -IDN +IPTC +KMOD +LIBCRYPTSETUP +LIBFDISK +PCRE2 -PWQUALITY -P11KIT -QRENCODE +TPM2 +BZIP2 +LZ4 +XZ +ZLIB +ZSTD -BPF_FRAMEWORK -XKBCOMMON +UTMP -SYSVINIT default-hierarchy=unified) Jan 13 21:24:44.909364 systemd[1]: Detected virtualization kvm. Jan 13 21:24:44.909376 systemd[1]: Detected architecture x86-64. Jan 13 21:24:44.909388 systemd[1]: Running in initrd. Jan 13 21:24:44.909403 systemd[1]: No hostname configured, using default hostname. Jan 13 21:24:44.909417 systemd[1]: Hostname set to . Jan 13 21:24:44.909429 systemd[1]: Initializing machine ID from VM UUID. Jan 13 21:24:44.909441 systemd[1]: Queued start job for default target initrd.target. Jan 13 21:24:44.909461 systemd[1]: Started clevis-luks-askpass.path - Forward Password Requests to Clevis Directory Watch. Jan 13 21:24:44.909473 systemd[1]: Started systemd-ask-password-console.path - Dispatch Password Requests to Console Directory Watch. 
Jan 13 21:24:44.909486 systemd[1]: Expecting device dev-disk-by\x2dlabel-EFI\x2dSYSTEM.device - /dev/disk/by-label/EFI-SYSTEM... Jan 13 21:24:44.909499 systemd[1]: Expecting device dev-disk-by\x2dlabel-OEM.device - /dev/disk/by-label/OEM... Jan 13 21:24:44.909511 systemd[1]: Expecting device dev-disk-by\x2dlabel-ROOT.device - /dev/disk/by-label/ROOT... Jan 13 21:24:44.909526 systemd[1]: Expecting device dev-disk-by\x2dpartlabel-USR\x2dA.device - /dev/disk/by-partlabel/USR-A... Jan 13 21:24:44.909541 systemd[1]: Expecting device dev-disk-by\x2dpartuuid-7130c94a\x2d213a\x2d4e5a\x2d8e26\x2d6cce9662f132.device - /dev/disk/by-partuuid/7130c94a-213a-4e5a-8e26-6cce9662f132... Jan 13 21:24:44.909553 systemd[1]: Expecting device dev-mapper-usr.device - /dev/mapper/usr... Jan 13 21:24:44.909565 systemd[1]: Reached target cryptsetup-pre.target - Local Encrypted Volumes (Pre). Jan 13 21:24:44.909578 systemd[1]: Reached target cryptsetup.target - Local Encrypted Volumes. Jan 13 21:24:44.909590 systemd[1]: Reached target paths.target - Path Units. Jan 13 21:24:44.909602 systemd[1]: Reached target slices.target - Slice Units. Jan 13 21:24:44.909616 systemd[1]: Reached target swap.target - Swaps. Jan 13 21:24:44.909628 systemd[1]: Reached target timers.target - Timer Units. Jan 13 21:24:44.909640 systemd[1]: Listening on iscsid.socket - Open-iSCSI iscsid Socket. Jan 13 21:24:44.909652 systemd[1]: Listening on iscsiuio.socket - Open-iSCSI iscsiuio Socket. Jan 13 21:24:44.909664 systemd[1]: Listening on systemd-journald-dev-log.socket - Journal Socket (/dev/log). Jan 13 21:24:44.909676 systemd[1]: Listening on systemd-journald.socket - Journal Socket. Jan 13 21:24:44.909688 systemd[1]: Listening on systemd-networkd.socket - Network Service Netlink Socket. Jan 13 21:24:44.909701 systemd[1]: Listening on systemd-udevd-control.socket - udev Control Socket. Jan 13 21:24:44.909716 systemd[1]: Listening on systemd-udevd-kernel.socket - udev Kernel Socket. 
Jan 13 21:24:44.909728 systemd[1]: Reached target sockets.target - Socket Units. Jan 13 21:24:44.909740 systemd[1]: Starting ignition-setup-pre.service - Ignition env setup... Jan 13 21:24:44.909752 systemd[1]: Starting kmod-static-nodes.service - Create List of Static Device Nodes... Jan 13 21:24:44.909765 systemd[1]: Finished network-cleanup.service - Network Cleanup. Jan 13 21:24:44.909777 systemd[1]: Starting systemd-fsck-usr.service... Jan 13 21:24:44.909789 systemd[1]: Starting systemd-journald.service - Journal Service... Jan 13 21:24:44.909801 systemd[1]: Starting systemd-modules-load.service - Load Kernel Modules... Jan 13 21:24:44.909813 systemd[1]: Starting systemd-vconsole-setup.service - Virtual Console Setup... Jan 13 21:24:44.909828 systemd[1]: Finished ignition-setup-pre.service - Ignition env setup. Jan 13 21:24:44.909840 systemd[1]: Finished kmod-static-nodes.service - Create List of Static Device Nodes. Jan 13 21:24:44.909852 systemd[1]: Finished systemd-fsck-usr.service. Jan 13 21:24:44.909864 systemd[1]: Starting systemd-tmpfiles-setup-dev-early.service - Create Static Device Nodes in /dev gracefully... Jan 13 21:24:44.909900 systemd-journald[191]: Collecting audit messages is disabled. Jan 13 21:24:44.909927 systemd[1]: Finished systemd-vconsole-setup.service - Virtual Console Setup. Jan 13 21:24:44.909940 systemd[1]: Finished systemd-tmpfiles-setup-dev-early.service - Create Static Device Nodes in /dev gracefully. Jan 13 21:24:44.909952 systemd-journald[191]: Journal started Jan 13 21:24:44.909980 systemd-journald[191]: Runtime Journal (/run/log/journal/8d200d217bc94985ac76d4e97939b6e9) is 6.0M, max 48.3M, 42.2M free. Jan 13 21:24:44.895455 systemd-modules-load[193]: Inserted module 'overlay' Jan 13 21:24:44.934500 systemd[1]: Started systemd-journald.service - Journal Service. Jan 13 21:24:44.939348 kernel: bridge: filtering via arp/ip/ip6tables is no longer available by default. Update your scripts to load br_netfilter if you need this. 
Jan 13 21:24:44.941954 systemd-modules-load[193]: Inserted module 'br_netfilter' Jan 13 21:24:44.942889 kernel: Bridge firewalling registered Jan 13 21:24:44.948550 systemd[1]: Starting dracut-cmdline-ask.service - dracut ask for additional cmdline parameters... Jan 13 21:24:44.949674 systemd[1]: Starting systemd-tmpfiles-setup-dev.service - Create Static Device Nodes in /dev... Jan 13 21:24:44.950695 systemd[1]: Starting systemd-tmpfiles-setup.service - Create System Files and Directories... Jan 13 21:24:44.951463 systemd[1]: Finished systemd-modules-load.service - Load Kernel Modules. Jan 13 21:24:44.954658 systemd[1]: Starting systemd-sysctl.service - Apply Kernel Variables... Jan 13 21:24:44.964910 systemd[1]: Finished systemd-tmpfiles-setup-dev.service - Create Static Device Nodes in /dev. Jan 13 21:24:44.968642 systemd[1]: Finished systemd-tmpfiles-setup.service - Create System Files and Directories. Jan 13 21:24:44.971501 systemd[1]: Finished dracut-cmdline-ask.service - dracut ask for additional cmdline parameters. Jan 13 21:24:44.974181 systemd[1]: Finished systemd-sysctl.service - Apply Kernel Variables. Jan 13 21:24:44.991462 systemd[1]: Starting dracut-cmdline.service - dracut cmdline hook... Jan 13 21:24:44.994642 systemd[1]: Starting systemd-resolved.service - Network Name Resolution... Jan 13 21:24:45.002608 dracut-cmdline[229]: dracut-dracut-053 Jan 13 21:24:45.005431 dracut-cmdline[229]: Using kernel command line parameters: rd.driver.pre=btrfs rootflags=rw mount.usrflags=ro BOOT_IMAGE=/flatcar/vmlinuz-a mount.usr=/dev/mapper/usr verity.usr=PARTUUID=7130c94a-213a-4e5a-8e26-6cce9662f132 rootflags=rw mount.usrflags=ro consoleblank=0 root=LABEL=ROOT console=ttyS0,115200 flatcar.first_boot=detected verity.usrhash=8945029ddd0f3864592f8746dde99cfcba228e0d3cb946f5938103dbe8733507 Jan 13 21:24:45.033984 systemd-resolved[231]: Positive Trust Anchors: Jan 13 21:24:45.034000 systemd-resolved[231]: . 
IN DS 20326 8 2 e06d44b80b8f1d39a95c0b0d7c65d08458e880409bbc683457104237c7f8ec8d Jan 13 21:24:45.034031 systemd-resolved[231]: Negative trust anchors: home.arpa 10.in-addr.arpa 16.172.in-addr.arpa 17.172.in-addr.arpa 18.172.in-addr.arpa 19.172.in-addr.arpa 20.172.in-addr.arpa 21.172.in-addr.arpa 22.172.in-addr.arpa 23.172.in-addr.arpa 24.172.in-addr.arpa 25.172.in-addr.arpa 26.172.in-addr.arpa 27.172.in-addr.arpa 28.172.in-addr.arpa 29.172.in-addr.arpa 30.172.in-addr.arpa 31.172.in-addr.arpa 170.0.0.192.in-addr.arpa 171.0.0.192.in-addr.arpa 168.192.in-addr.arpa d.f.ip6.arpa ipv4only.arpa resolver.arpa corp home internal intranet lan local private test Jan 13 21:24:45.046604 systemd-resolved[231]: Defaulting to hostname 'linux'. Jan 13 21:24:45.048481 systemd[1]: Started systemd-resolved.service - Network Name Resolution. Jan 13 21:24:45.049032 systemd[1]: Reached target nss-lookup.target - Host and Network Name Lookups. Jan 13 21:24:45.089338 kernel: SCSI subsystem initialized Jan 13 21:24:45.099339 kernel: Loading iSCSI transport class v2.0-870. Jan 13 21:24:45.109338 kernel: iscsi: registered transport (tcp) Jan 13 21:24:45.130402 kernel: iscsi: registered transport (qla4xxx) Jan 13 21:24:45.130423 kernel: QLogic iSCSI HBA Driver Jan 13 21:24:45.179954 systemd[1]: Finished dracut-cmdline.service - dracut cmdline hook. Jan 13 21:24:45.195453 systemd[1]: Starting dracut-pre-udev.service - dracut pre-udev hook... Jan 13 21:24:45.221371 kernel: device-mapper: core: CONFIG_IMA_DISABLE_HTABLE is disabled. Duplicate IMA measurements will not be recorded in the IMA log. 
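The positive trust anchor logged above is the root zone's DNSSEC DS record: key tag 20326, algorithm 8 (RSA/SHA-256), digest type 2 (SHA-256 of the root DNSKEY). A minimal sketch of splitting such a record into its fields — the function and field names are my own, not part of systemd-resolved:

```python
# Parse a DNSSEC DS record of the form seen in the systemd-resolved log:
#   ". IN DS 20326 8 2 e06d44b8..."
# Field meanings per RFC 4034: key tag, algorithm, digest type, digest.
def parse_ds(record: str) -> dict:
    owner, klass, rtype, key_tag, algorithm, digest_type, digest = record.split()
    assert klass == "IN" and rtype == "DS"
    return {
        "owner": owner,                   # "." is the root zone
        "key_tag": int(key_tag),          # identifies the signing DNSKEY
        "algorithm": int(algorithm),      # 8 = RSA/SHA-256
        "digest_type": int(digest_type),  # 2 = SHA-256 digest of the DNSKEY
        "digest": digest.lower(),
    }

ds = parse_ds(". IN DS 20326 8 2 "
              "e06d44b80b8f1d39a95c0b0d7c65d08458e880409bbc683457104237c7f8ec8d")
```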
Jan 13 21:24:45.221400 kernel: device-mapper: uevent: version 1.0.3 Jan 13 21:24:45.222396 kernel: device-mapper: ioctl: 4.48.0-ioctl (2023-03-01) initialised: dm-devel@redhat.com Jan 13 21:24:45.264343 kernel: raid6: avx2x4 gen() 28899 MB/s Jan 13 21:24:45.281329 kernel: raid6: avx2x2 gen() 31338 MB/s Jan 13 21:24:45.298409 kernel: raid6: avx2x1 gen() 25901 MB/s Jan 13 21:24:45.298428 kernel: raid6: using algorithm avx2x2 gen() 31338 MB/s Jan 13 21:24:45.316421 kernel: raid6: .... xor() 19930 MB/s, rmw enabled Jan 13 21:24:45.316455 kernel: raid6: using avx2x2 recovery algorithm Jan 13 21:24:45.337335 kernel: xor: automatically using best checksumming function avx Jan 13 21:24:45.489349 kernel: Btrfs loaded, zoned=no, fsverity=no Jan 13 21:24:45.501734 systemd[1]: Finished dracut-pre-udev.service - dracut pre-udev hook. Jan 13 21:24:45.512497 systemd[1]: Starting systemd-udevd.service - Rule-based Manager for Device Events and Files... Jan 13 21:24:45.524295 systemd-udevd[414]: Using default interface naming scheme 'v255'. Jan 13 21:24:45.529357 systemd[1]: Started systemd-udevd.service - Rule-based Manager for Device Events and Files. Jan 13 21:24:45.541488 systemd[1]: Starting dracut-pre-trigger.service - dracut pre-trigger hook... Jan 13 21:24:45.555880 dracut-pre-trigger[424]: rd.md=0: removing MD RAID activation Jan 13 21:24:45.588496 systemd[1]: Finished dracut-pre-trigger.service - dracut pre-trigger hook. Jan 13 21:24:45.600528 systemd[1]: Starting systemd-udev-trigger.service - Coldplug All udev Devices... Jan 13 21:24:45.664422 systemd[1]: Finished systemd-udev-trigger.service - Coldplug All udev Devices. Jan 13 21:24:45.680524 systemd[1]: Starting dracut-initqueue.service - dracut initqueue hook... Jan 13 21:24:45.709707 systemd[1]: Finished dracut-initqueue.service - dracut initqueue hook. 
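The raid6 lines above show the kernel benchmarking each SIMD parity-generation routine and keeping the fastest — avx2x2 at 31338 MB/s on this boot. The selection amounts to a max over measured throughputs; a sketch using the numbers from this log:

```python
# Throughputs (MB/s) as benchmarked in the raid6 log lines above.
gen_speeds = {"avx2x4": 28899, "avx2x2": 31338, "avx2x1": 25901}

# The kernel keeps whichever gen() implementation measured fastest.
best = max(gen_speeds, key=gen_speeds.get)
best_speed = gen_speeds[best]
```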
Jan 13 21:24:45.720549 kernel: virtio_blk virtio1: 4/0/0 default/read/poll queues Jan 13 21:24:45.725458 kernel: virtio_blk virtio1: [vda] 19775488 512-byte logical blocks (10.1 GB/9.43 GiB) Jan 13 21:24:45.725705 kernel: GPT:Primary header thinks Alt. header is not at the end of the disk. Jan 13 21:24:45.725722 kernel: GPT:9289727 != 19775487 Jan 13 21:24:45.725736 kernel: GPT:Alternate GPT header not at the end of the disk. Jan 13 21:24:45.725750 kernel: GPT:9289727 != 19775487 Jan 13 21:24:45.725762 kernel: GPT: Use GNU Parted to correct GPT errors. Jan 13 21:24:45.725775 kernel: vda: vda1 vda2 vda3 vda4 vda6 vda7 vda9 Jan 13 21:24:45.713469 systemd[1]: Reached target remote-fs-pre.target - Preparation for Remote File Systems. Jan 13 21:24:45.728413 kernel: cryptd: max_cpu_qlen set to 1000 Jan 13 21:24:45.717558 systemd[1]: Reached target remote-cryptsetup.target - Remote Encrypted Volumes. Jan 13 21:24:45.723546 systemd[1]: Reached target remote-fs.target - Remote File Systems. Jan 13 21:24:45.735536 systemd[1]: Starting dracut-pre-mount.service - dracut pre-mount hook... Jan 13 21:24:45.747944 systemd[1]: Finished dracut-pre-mount.service - dracut pre-mount hook. Jan 13 21:24:45.757333 kernel: libata version 3.00 loaded. Jan 13 21:24:45.757373 kernel: AVX2 version of gcm_enc/dec engaged. Jan 13 21:24:45.757384 kernel: AES CTR mode by8 optimization enabled Jan 13 21:24:45.760954 systemd[1]: dracut-cmdline-ask.service: Deactivated successfully. Jan 13 21:24:45.770723 kernel: BTRFS: device fsid b8e2d3c5-4bed-4339-bed5-268c66823686 devid 1 transid 37 /dev/vda3 scanned by (udev-worker) (460) Jan 13 21:24:45.761108 systemd[1]: Stopped dracut-cmdline-ask.service - dracut ask for additional cmdline parameters. 
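The GPT warnings above mean the alternate (backup) header was found at LBA 9289727 while the virtual disk's last LBA is 19775487: the image was written for a smaller disk and the virtual disk is larger, which disk-uuid.service repairs shortly afterwards. The consistency check the kernel is reporting is simply:

```python
# Values taken from the virtio_blk and GPT lines above.
disk_sectors = 19775488       # 512-byte logical blocks reported for vda
backup_header_lba = 9289727   # where the alternate GPT header was found

# GPT requires the alternate header to occupy the disk's last LBA.
last_lba = disk_sectors - 1
header_misplaced = backup_header_lba != last_lba
```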
Jan 13 21:24:45.775332 kernel: ahci 0000:00:1f.2: version 3.0 Jan 13 21:24:45.789224 kernel: BTRFS: device label OEM devid 1 transid 12 /dev/vda6 scanned by (udev-worker) (462) Jan 13 21:24:45.789243 kernel: ACPI: \_SB_.GSIA: Enabled at IRQ 16 Jan 13 21:24:45.789254 kernel: ahci 0000:00:1f.2: AHCI 0001.0000 32 slots 6 ports 1.5 Gbps 0x3f impl SATA mode Jan 13 21:24:45.789447 kernel: ahci 0000:00:1f.2: flags: 64bit ncq only Jan 13 21:24:45.789591 kernel: scsi host0: ahci Jan 13 21:24:45.789756 kernel: scsi host1: ahci Jan 13 21:24:45.789911 kernel: scsi host2: ahci Jan 13 21:24:45.790052 kernel: scsi host3: ahci Jan 13 21:24:45.790193 kernel: scsi host4: ahci Jan 13 21:24:45.790355 kernel: scsi host5: ahci Jan 13 21:24:45.790511 kernel: ata1: SATA max UDMA/133 abar m4096@0xc1040000 port 0xc1040100 irq 34 Jan 13 21:24:45.790523 kernel: ata2: SATA max UDMA/133 abar m4096@0xc1040000 port 0xc1040180 irq 34 Jan 13 21:24:45.790538 kernel: ata3: SATA max UDMA/133 abar m4096@0xc1040000 port 0xc1040200 irq 34 Jan 13 21:24:45.790548 kernel: ata4: SATA max UDMA/133 abar m4096@0xc1040000 port 0xc1040280 irq 34 Jan 13 21:24:45.790558 kernel: ata5: SATA max UDMA/133 abar m4096@0xc1040000 port 0xc1040300 irq 34 Jan 13 21:24:45.790568 kernel: ata6: SATA max UDMA/133 abar m4096@0xc1040000 port 0xc1040380 irq 34 Jan 13 21:24:45.765529 systemd[1]: Stopping dracut-cmdline-ask.service - dracut ask for additional cmdline parameters... Jan 13 21:24:45.767929 systemd[1]: systemd-vconsole-setup.service: Deactivated successfully. Jan 13 21:24:45.768094 systemd[1]: Stopped systemd-vconsole-setup.service - Virtual Console Setup. Jan 13 21:24:45.769043 systemd[1]: Stopping systemd-vconsole-setup.service - Virtual Console Setup... Jan 13 21:24:45.779630 systemd[1]: Starting systemd-vconsole-setup.service - Virtual Console Setup... Jan 13 21:24:45.795632 systemd[1]: Found device dev-disk-by\x2dlabel-EFI\x2dSYSTEM.device - /dev/disk/by-label/EFI-SYSTEM. 
Jan 13 21:24:45.806461 systemd[1]: Found device dev-disk-by\x2dlabel-ROOT.device - /dev/disk/by-label/ROOT. Jan 13 21:24:45.811482 systemd[1]: Found device dev-disk-by\x2dpartlabel-USR\x2dA.device - /dev/disk/by-partlabel/USR-A. Jan 13 21:24:45.811919 systemd[1]: Found device dev-disk-by\x2dpartuuid-7130c94a\x2d213a\x2d4e5a\x2d8e26\x2d6cce9662f132.device - /dev/disk/by-partuuid/7130c94a-213a-4e5a-8e26-6cce9662f132. Jan 13 21:24:45.817332 systemd[1]: Found device dev-disk-by\x2dlabel-OEM.device - /dev/disk/by-label/OEM. Jan 13 21:24:45.824515 systemd[1]: Starting disk-uuid.service - Generate new UUID for disk GPT if necessary... Jan 13 21:24:45.824817 systemd[1]: systemd-vconsole-setup.service: Deactivated successfully. Jan 13 21:24:45.824883 systemd[1]: Stopped systemd-vconsole-setup.service - Virtual Console Setup. Jan 13 21:24:45.827266 systemd[1]: Stopping systemd-vconsole-setup.service - Virtual Console Setup... Jan 13 21:24:45.828479 systemd[1]: Starting systemd-vconsole-setup.service - Virtual Console Setup... Jan 13 21:24:45.845108 systemd[1]: Finished systemd-vconsole-setup.service - Virtual Console Setup. Jan 13 21:24:45.848114 systemd[1]: Starting dracut-cmdline-ask.service - dracut ask for additional cmdline parameters... Jan 13 21:24:45.875723 systemd[1]: Finished dracut-cmdline-ask.service - dracut ask for additional cmdline parameters. Jan 13 21:24:45.915059 disk-uuid[558]: Primary Header is updated. Jan 13 21:24:45.915059 disk-uuid[558]: Secondary Entries is updated. Jan 13 21:24:45.915059 disk-uuid[558]: Secondary Header is updated. 
Jan 13 21:24:45.920365 kernel: vda: vda1 vda2 vda3 vda4 vda6 vda7 vda9 Jan 13 21:24:45.925350 kernel: vda: vda1 vda2 vda3 vda4 vda6 vda7 vda9 Jan 13 21:24:46.105543 kernel: ata4: SATA link down (SStatus 0 SControl 300) Jan 13 21:24:46.105620 kernel: ata2: SATA link down (SStatus 0 SControl 300) Jan 13 21:24:46.105634 kernel: ata1: SATA link down (SStatus 0 SControl 300) Jan 13 21:24:46.107365 kernel: ata6: SATA link down (SStatus 0 SControl 300) Jan 13 21:24:46.107473 kernel: ata3: SATA link up 1.5 Gbps (SStatus 113 SControl 300) Jan 13 21:24:46.108347 kernel: ata5: SATA link down (SStatus 0 SControl 300) Jan 13 21:24:46.109352 kernel: ata3.00: ATAPI: QEMU DVD-ROM, 2.5+, max UDMA/100 Jan 13 21:24:46.110653 kernel: ata3.00: applying bridge limits Jan 13 21:24:46.110678 kernel: ata3.00: configured for UDMA/100 Jan 13 21:24:46.111334 kernel: scsi 2:0:0:0: CD-ROM QEMU QEMU DVD-ROM 2.5+ PQ: 0 ANSI: 5 Jan 13 21:24:46.154349 kernel: sr 2:0:0:0: [sr0] scsi3-mmc drive: 4x/4x cd/rw xa/form2 tray Jan 13 21:24:46.167055 kernel: cdrom: Uniform CD-ROM driver Revision: 3.20 Jan 13 21:24:46.167078 kernel: sr 2:0:0:0: Attached scsi CD-ROM sr0 Jan 13 21:24:46.943271 disk-uuid[571]: The operation has completed successfully. Jan 13 21:24:46.945062 kernel: vda: vda1 vda2 vda3 vda4 vda6 vda7 vda9 Jan 13 21:24:46.968222 systemd[1]: disk-uuid.service: Deactivated successfully. Jan 13 21:24:46.968382 systemd[1]: Finished disk-uuid.service - Generate new UUID for disk GPT if necessary. Jan 13 21:24:47.002519 systemd[1]: Starting verity-setup.service - Verity Setup for /dev/mapper/usr... Jan 13 21:24:47.006290 sh[598]: Success Jan 13 21:24:47.019356 kernel: device-mapper: verity: sha256 using implementation "sha256-ni" Jan 13 21:24:47.051254 systemd[1]: Found device dev-mapper-usr.device - /dev/mapper/usr. Jan 13 21:24:47.072147 systemd[1]: Mounting sysusr-usr.mount - /sysusr/usr... Jan 13 21:24:47.076615 systemd[1]: Finished verity-setup.service - Verity Setup for /dev/mapper/usr. 
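verity-setup.service above validates the read-only /usr partition against the verity.usrhash= root hash passed on the kernel command line. Conceptually, dm-verity hashes fixed-size data blocks and then hashes those digests in turn until a single root digest remains; a toy one-level reduction of that idea (real dm-verity builds a multi-level salted hash tree, so this shows only the shape of the computation, not the on-disk format):

```python
import hashlib

def verity_root(data: bytes, block_size: int = 4096) -> str:
    # Hash each data block, then hash the concatenated digests.
    # Real dm-verity uses a multi-level, salted tree; this is a sketch.
    blocks = [data[i:i + block_size] for i in range(0, len(data), block_size)]
    level = b"".join(hashlib.sha256(b).digest() for b in blocks)
    return hashlib.sha256(level).hexdigest()

root = verity_root(b"\x00" * 8192)  # two all-zero 4 KiB blocks
```

Any single-bit change in the data changes the root, which is why a mismatch against the command-line hash fails the boot.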
Jan 13 21:24:47.105863 kernel: BTRFS info (device dm-0): first mount of filesystem b8e2d3c5-4bed-4339-bed5-268c66823686 Jan 13 21:24:47.105894 kernel: BTRFS info (device dm-0): using crc32c (crc32c-intel) checksum algorithm Jan 13 21:24:47.105911 kernel: BTRFS warning (device dm-0): 'nologreplay' is deprecated, use 'rescue=nologreplay' instead Jan 13 21:24:47.106887 kernel: BTRFS info (device dm-0): disabling log replay at mount time Jan 13 21:24:47.108327 kernel: BTRFS info (device dm-0): using free space tree Jan 13 21:24:47.112070 systemd[1]: Mounted sysusr-usr.mount - /sysusr/usr. Jan 13 21:24:47.114613 systemd[1]: afterburn-network-kargs.service - Afterburn Initrd Setup Network Kernel Arguments was skipped because no trigger condition checks were met. Jan 13 21:24:47.133487 systemd[1]: Starting ignition-setup.service - Ignition (setup)... Jan 13 21:24:47.134524 systemd[1]: Starting parse-ip-for-networkd.service - Write systemd-networkd units from cmdline... Jan 13 21:24:47.148922 kernel: BTRFS info (device vda6): first mount of filesystem 70d8a0b5-70da-4efb-a618-d15543718b1e Jan 13 21:24:47.148974 kernel: BTRFS info (device vda6): using crc32c (crc32c-intel) checksum algorithm Jan 13 21:24:47.148989 kernel: BTRFS info (device vda6): using free space tree Jan 13 21:24:47.151415 kernel: BTRFS info (device vda6): auto enabling async discard Jan 13 21:24:47.160852 systemd[1]: mnt-oem.mount: Deactivated successfully. Jan 13 21:24:47.166055 kernel: BTRFS info (device vda6): last unmount of filesystem 70d8a0b5-70da-4efb-a618-d15543718b1e Jan 13 21:24:47.234635 systemd[1]: Finished parse-ip-for-networkd.service - Write systemd-networkd units from cmdline. Jan 13 21:24:47.251490 systemd[1]: Starting systemd-networkd.service - Network Configuration... 
Jan 13 21:24:47.273321 systemd-networkd[776]: lo: Link UP Jan 13 21:24:47.273331 systemd-networkd[776]: lo: Gained carrier Jan 13 21:24:47.283232 systemd-networkd[776]: Enumeration completed Jan 13 21:24:47.283324 systemd[1]: Started systemd-networkd.service - Network Configuration. Jan 13 21:24:47.283937 systemd[1]: Reached target network.target - Network. Jan 13 21:24:47.287934 systemd-networkd[776]: eth0: found matching network '/usr/lib/systemd/network/zz-default.network', based on potentially unpredictable interface name. Jan 13 21:24:47.287943 systemd-networkd[776]: eth0: Configuring with /usr/lib/systemd/network/zz-default.network. Jan 13 21:24:47.292035 systemd-networkd[776]: eth0: Link UP Jan 13 21:24:47.292045 systemd-networkd[776]: eth0: Gained carrier Jan 13 21:24:47.292051 systemd-networkd[776]: eth0: found matching network '/usr/lib/systemd/network/zz-default.network', based on potentially unpredictable interface name. Jan 13 21:24:47.317351 systemd-networkd[776]: eth0: DHCPv4 address 10.0.0.88/16, gateway 10.0.0.1 acquired from 10.0.0.1 Jan 13 21:24:47.513295 systemd[1]: Finished ignition-setup.service - Ignition (setup). Jan 13 21:24:47.526549 systemd[1]: Starting ignition-fetch-offline.service - Ignition (fetch-offline)... 
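The DHCPv4 lease above (10.0.0.88/16, gateway 10.0.0.1) can be sanity-checked with Python's standard ipaddress module; a small sketch confirming the gateway is on-link within the leased network:

```python
import ipaddress

# Address and gateway as acquired in the systemd-networkd log line above.
iface = ipaddress.ip_interface("10.0.0.88/16")
gateway = ipaddress.ip_address("10.0.0.1")

network = iface.network            # the /16 the lease places eth0 in
gateway_reachable = gateway in network  # gateway must be on-link
```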
Jan 13 21:24:47.576740 ignition[781]: Ignition 2.19.0 Jan 13 21:24:47.576756 ignition[781]: Stage: fetch-offline Jan 13 21:24:47.576803 ignition[781]: no configs at "/usr/lib/ignition/base.d" Jan 13 21:24:47.576816 ignition[781]: no config dir at "/usr/lib/ignition/base.platform.d/qemu" Jan 13 21:24:47.576927 ignition[781]: parsed url from cmdline: "" Jan 13 21:24:47.576932 ignition[781]: no config URL provided Jan 13 21:24:47.576939 ignition[781]: reading system config file "/usr/lib/ignition/user.ign" Jan 13 21:24:47.576953 ignition[781]: no config at "/usr/lib/ignition/user.ign" Jan 13 21:24:47.576986 ignition[781]: op(1): [started] loading QEMU firmware config module Jan 13 21:24:47.576993 ignition[781]: op(1): executing: "modprobe" "qemu_fw_cfg" Jan 13 21:24:47.585390 ignition[781]: op(1): [finished] loading QEMU firmware config module Jan 13 21:24:47.586940 ignition[781]: parsing config with SHA512: eb0daa4941a711c462ff3d1b8790f127a5add03d06765e0a7dcd4a6aa88e37289c4369da8087ea9131139d4b446cab146a81359cce4fffb5faf9d3d8b27f03cc Jan 13 21:24:47.589435 unknown[781]: fetched base config from "system" Jan 13 21:24:47.589450 unknown[781]: fetched user config from "qemu" Jan 13 21:24:47.589769 ignition[781]: fetch-offline: fetch-offline passed Jan 13 21:24:47.591995 systemd[1]: Finished ignition-fetch-offline.service - Ignition (fetch-offline). Jan 13 21:24:47.589850 ignition[781]: Ignition finished successfully Jan 13 21:24:47.593565 systemd[1]: ignition-fetch.service - Ignition (fetch) was skipped because of an unmet condition check (ConditionPathExists=!/run/ignition.json). Jan 13 21:24:47.600507 systemd[1]: Starting ignition-kargs.service - Ignition (kargs)... 
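Ignition logs the SHA512 of the merged config before applying it (the long hex digest above). Reproducing that kind of fingerprint for a config blob is a one-liner with hashlib — the JSON below is a made-up stand-in, not the actual QEMU-provided config:

```python
import hashlib

# Hypothetical config blob; Ignition hashes the real merged config bytes.
config = b'{"ignition": {"version": "3.3.0"}}'
digest = hashlib.sha512(config).hexdigest()  # 128 hex chars, as in the log
```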
Jan 13 21:24:47.614710 ignition[791]: Ignition 2.19.0 Jan 13 21:24:47.614722 ignition[791]: Stage: kargs Jan 13 21:24:47.614938 ignition[791]: no configs at "/usr/lib/ignition/base.d" Jan 13 21:24:47.614953 ignition[791]: no config dir at "/usr/lib/ignition/base.platform.d/qemu" Jan 13 21:24:47.615914 ignition[791]: kargs: kargs passed Jan 13 21:24:47.615963 ignition[791]: Ignition finished successfully Jan 13 21:24:47.619778 systemd[1]: Finished ignition-kargs.service - Ignition (kargs). Jan 13 21:24:47.634469 systemd[1]: Starting ignition-disks.service - Ignition (disks)... Jan 13 21:24:47.647527 ignition[799]: Ignition 2.19.0 Jan 13 21:24:47.647539 ignition[799]: Stage: disks Jan 13 21:24:47.647730 ignition[799]: no configs at "/usr/lib/ignition/base.d" Jan 13 21:24:47.647743 ignition[799]: no config dir at "/usr/lib/ignition/base.platform.d/qemu" Jan 13 21:24:47.650931 systemd[1]: Finished ignition-disks.service - Ignition (disks). Jan 13 21:24:47.648568 ignition[799]: disks: disks passed Jan 13 21:24:47.652462 systemd[1]: Reached target initrd-root-device.target - Initrd Root Device. Jan 13 21:24:47.648618 ignition[799]: Ignition finished successfully Jan 13 21:24:47.654362 systemd[1]: Reached target local-fs-pre.target - Preparation for Local File Systems. Jan 13 21:24:47.655608 systemd[1]: Reached target local-fs.target - Local File Systems. Jan 13 21:24:47.657139 systemd[1]: Reached target sysinit.target - System Initialization. Jan 13 21:24:47.657537 systemd[1]: Reached target basic.target - Basic System. Jan 13 21:24:47.664465 systemd[1]: Starting systemd-fsck-root.service - File System Check on /dev/disk/by-label/ROOT... Jan 13 21:24:47.677273 systemd-fsck[809]: ROOT: clean, 14/553520 files, 52654/553472 blocks Jan 13 21:24:47.685362 systemd[1]: Finished systemd-fsck-root.service - File System Check on /dev/disk/by-label/ROOT. Jan 13 21:24:47.695437 systemd[1]: Mounting sysroot.mount - /sysroot... 
Jan 13 21:24:47.779341 kernel: EXT4-fs (vda9): mounted filesystem 39899d4c-a8b1-4feb-9875-e812cc535888 r/w with ordered data mode. Quota mode: none. Jan 13 21:24:47.779874 systemd[1]: Mounted sysroot.mount - /sysroot. Jan 13 21:24:47.781265 systemd[1]: Reached target initrd-root-fs.target - Initrd Root File System. Jan 13 21:24:47.800421 systemd[1]: Mounting sysroot-oem.mount - /sysroot/oem... Jan 13 21:24:47.802062 systemd[1]: Mounting sysroot-usr.mount - /sysroot/usr... Jan 13 21:24:47.803518 systemd[1]: flatcar-metadata-hostname.service - Flatcar Metadata Hostname Agent was skipped because no trigger condition checks were met. Jan 13 21:24:47.803558 systemd[1]: ignition-remount-sysroot.service - Remount /sysroot read-write for Ignition was skipped because of an unmet condition check (ConditionPathIsReadWrite=!/sysroot). Jan 13 21:24:47.814750 kernel: BTRFS: device label OEM devid 1 transid 13 /dev/vda6 scanned by mount (817) Jan 13 21:24:47.814777 kernel: BTRFS info (device vda6): first mount of filesystem 70d8a0b5-70da-4efb-a618-d15543718b1e Jan 13 21:24:47.814793 kernel: BTRFS info (device vda6): using crc32c (crc32c-intel) checksum algorithm Jan 13 21:24:47.814807 kernel: BTRFS info (device vda6): using free space tree Jan 13 21:24:47.803577 systemd[1]: Reached target ignition-diskful.target - Ignition Boot Disk Setup. Jan 13 21:24:47.817942 kernel: BTRFS info (device vda6): auto enabling async discard Jan 13 21:24:47.809928 systemd[1]: Mounted sysroot-usr.mount - /sysroot/usr. Jan 13 21:24:47.815495 systemd[1]: Starting initrd-setup-root.service - Root filesystem setup... Jan 13 21:24:47.819664 systemd[1]: Mounted sysroot-oem.mount - /sysroot/oem. 
Jan 13 21:24:47.849543 initrd-setup-root[841]: cut: /sysroot/etc/passwd: No such file or directory Jan 13 21:24:47.853552 initrd-setup-root[848]: cut: /sysroot/etc/group: No such file or directory Jan 13 21:24:47.857209 initrd-setup-root[855]: cut: /sysroot/etc/shadow: No such file or directory Jan 13 21:24:47.860850 initrd-setup-root[862]: cut: /sysroot/etc/gshadow: No such file or directory Jan 13 21:24:47.945677 systemd[1]: Finished initrd-setup-root.service - Root filesystem setup. Jan 13 21:24:47.954407 systemd[1]: Starting ignition-mount.service - Ignition (mount)... Jan 13 21:24:47.956012 systemd[1]: Starting sysroot-boot.service - /sysroot/boot... Jan 13 21:24:47.963375 kernel: BTRFS info (device vda6): last unmount of filesystem 70d8a0b5-70da-4efb-a618-d15543718b1e Jan 13 21:24:47.980531 systemd[1]: Finished sysroot-boot.service - /sysroot/boot. Jan 13 21:24:47.983106 ignition[930]: INFO : Ignition 2.19.0 Jan 13 21:24:47.983106 ignition[930]: INFO : Stage: mount Jan 13 21:24:47.983106 ignition[930]: INFO : no configs at "/usr/lib/ignition/base.d" Jan 13 21:24:47.983106 ignition[930]: INFO : no config dir at "/usr/lib/ignition/base.platform.d/qemu" Jan 13 21:24:47.986795 ignition[930]: INFO : mount: mount passed Jan 13 21:24:47.986795 ignition[930]: INFO : Ignition finished successfully Jan 13 21:24:47.990098 systemd[1]: Finished ignition-mount.service - Ignition (mount). Jan 13 21:24:47.997484 systemd[1]: Starting ignition-files.service - Ignition (files)... Jan 13 21:24:48.105181 systemd[1]: sysroot-oem.mount: Deactivated successfully. Jan 13 21:24:48.119554 systemd[1]: Mounting sysroot-oem.mount - /sysroot/oem... 
Jan 13 21:24:48.126334 kernel: BTRFS: device label OEM devid 1 transid 14 /dev/vda6 scanned by mount (943) Jan 13 21:24:48.126373 kernel: BTRFS info (device vda6): first mount of filesystem 70d8a0b5-70da-4efb-a618-d15543718b1e Jan 13 21:24:48.128014 kernel: BTRFS info (device vda6): using crc32c (crc32c-intel) checksum algorithm Jan 13 21:24:48.128026 kernel: BTRFS info (device vda6): using free space tree Jan 13 21:24:48.131342 kernel: BTRFS info (device vda6): auto enabling async discard Jan 13 21:24:48.133267 systemd[1]: Mounted sysroot-oem.mount - /sysroot/oem. Jan 13 21:24:48.153720 ignition[960]: INFO : Ignition 2.19.0 Jan 13 21:24:48.153720 ignition[960]: INFO : Stage: files Jan 13 21:24:48.155746 ignition[960]: INFO : no configs at "/usr/lib/ignition/base.d" Jan 13 21:24:48.155746 ignition[960]: INFO : no config dir at "/usr/lib/ignition/base.platform.d/qemu" Jan 13 21:24:48.155746 ignition[960]: DEBUG : files: compiled without relabeling support, skipping Jan 13 21:24:48.159838 ignition[960]: INFO : files: ensureUsers: op(1): [started] creating or modifying user "core" Jan 13 21:24:48.159838 ignition[960]: DEBUG : files: ensureUsers: op(1): executing: "usermod" "--root" "/sysroot" "core" Jan 13 21:24:48.159838 ignition[960]: INFO : files: ensureUsers: op(1): [finished] creating or modifying user "core" Jan 13 21:24:48.159838 ignition[960]: INFO : files: ensureUsers: op(2): [started] adding ssh keys to user "core" Jan 13 21:24:48.159838 ignition[960]: INFO : files: ensureUsers: op(2): [finished] adding ssh keys to user "core" Jan 13 21:24:48.159683 unknown[960]: wrote ssh authorized keys file for user: core Jan 13 21:24:48.168992 ignition[960]: INFO : files: createFilesystemsFiles: createFiles: op(3): [started] writing file "/sysroot/home/core/install.sh" Jan 13 21:24:48.168992 ignition[960]: INFO : files: createFilesystemsFiles: createFiles: op(3): [finished] writing file "/sysroot/home/core/install.sh" Jan 13 21:24:48.168992 ignition[960]: INFO : files: 
createFilesystemsFiles: createFiles: op(4): [started] writing file "/sysroot/etc/flatcar/update.conf" Jan 13 21:24:48.168992 ignition[960]: INFO : files: createFilesystemsFiles: createFiles: op(4): [finished] writing file "/sysroot/etc/flatcar/update.conf" Jan 13 21:24:48.168992 ignition[960]: INFO : files: createFilesystemsFiles: createFiles: op(5): [started] writing link "/sysroot/etc/extensions/kubernetes.raw" -> "/opt/extensions/kubernetes/kubernetes-v1.31.0-x86-64.raw" Jan 13 21:24:48.168992 ignition[960]: INFO : files: createFilesystemsFiles: createFiles: op(5): [finished] writing link "/sysroot/etc/extensions/kubernetes.raw" -> "/opt/extensions/kubernetes/kubernetes-v1.31.0-x86-64.raw" Jan 13 21:24:48.168992 ignition[960]: INFO : files: createFilesystemsFiles: createFiles: op(6): [started] writing file "/sysroot/opt/extensions/kubernetes/kubernetes-v1.31.0-x86-64.raw" Jan 13 21:24:48.168992 ignition[960]: INFO : files: createFilesystemsFiles: createFiles: op(6): GET https://github.com/flatcar/sysext-bakery/releases/download/latest/kubernetes-v1.31.0-x86-64.raw: attempt #1 Jan 13 21:24:48.529299 ignition[960]: INFO : files: createFilesystemsFiles: createFiles: op(6): GET result: OK Jan 13 21:24:48.840730 ignition[960]: INFO : files: createFilesystemsFiles: createFiles: op(6): [finished] writing file "/sysroot/opt/extensions/kubernetes/kubernetes-v1.31.0-x86-64.raw" Jan 13 21:24:48.840730 ignition[960]: INFO : files: op(7): [started] processing unit "coreos-metadata.service" Jan 13 21:24:48.844630 ignition[960]: INFO : files: op(7): op(8): [started] writing unit "coreos-metadata.service" at "/sysroot/etc/systemd/system/coreos-metadata.service" Jan 13 21:24:48.844630 ignition[960]: INFO : files: op(7): op(8): [finished] writing unit "coreos-metadata.service" at "/sysroot/etc/systemd/system/coreos-metadata.service" Jan 13 21:24:48.844630 ignition[960]: INFO : files: op(7): [finished] processing unit "coreos-metadata.service" Jan 13 21:24:48.844630 ignition[960]: 
INFO : files: op(9): [started] setting preset to disabled for "coreos-metadata.service" Jan 13 21:24:48.866339 ignition[960]: INFO : files: op(9): op(a): [started] removing enablement symlink(s) for "coreos-metadata.service" Jan 13 21:24:48.871721 ignition[960]: INFO : files: op(9): op(a): [finished] removing enablement symlink(s) for "coreos-metadata.service" Jan 13 21:24:48.873284 ignition[960]: INFO : files: op(9): [finished] setting preset to disabled for "coreos-metadata.service" Jan 13 21:24:48.873284 ignition[960]: INFO : files: createResultFile: createFiles: op(b): [started] writing file "/sysroot/etc/.ignition-result.json" Jan 13 21:24:48.873284 ignition[960]: INFO : files: createResultFile: createFiles: op(b): [finished] writing file "/sysroot/etc/.ignition-result.json" Jan 13 21:24:48.873284 ignition[960]: INFO : files: files passed Jan 13 21:24:48.873284 ignition[960]: INFO : Ignition finished successfully Jan 13 21:24:48.874835 systemd[1]: Finished ignition-files.service - Ignition (files). Jan 13 21:24:48.888463 systemd[1]: Starting ignition-quench.service - Ignition (record completion)... Jan 13 21:24:48.891362 systemd[1]: Starting initrd-setup-root-after-ignition.service - Root filesystem completion... Jan 13 21:24:48.892651 systemd[1]: ignition-quench.service: Deactivated successfully. Jan 13 21:24:48.892775 systemd[1]: Finished ignition-quench.service - Ignition (record completion). 
Jan 13 21:24:48.922776 initrd-setup-root-after-ignition[988]: grep: /sysroot/oem/oem-release: No such file or directory Jan 13 21:24:48.931813 initrd-setup-root-after-ignition[990]: grep: /sysroot/etc/flatcar/enabled-sysext.conf: No such file or directory Jan 13 21:24:48.931813 initrd-setup-root-after-ignition[990]: grep: /sysroot/usr/share/flatcar/enabled-sysext.conf: No such file or directory Jan 13 21:24:48.951224 initrd-setup-root-after-ignition[994]: grep: /sysroot/etc/flatcar/enabled-sysext.conf: No such file or directory Jan 13 21:24:48.934000 systemd[1]: Finished initrd-setup-root-after-ignition.service - Root filesystem completion. Jan 13 21:24:48.948129 systemd[1]: Reached target ignition-complete.target - Ignition Complete. Jan 13 21:24:48.959480 systemd[1]: Starting initrd-parse-etc.service - Mountpoints Configured in the Real Root... Jan 13 21:24:48.984256 systemd[1]: initrd-parse-etc.service: Deactivated successfully. Jan 13 21:24:48.984399 systemd[1]: Finished initrd-parse-etc.service - Mountpoints Configured in the Real Root. Jan 13 21:24:48.986822 systemd[1]: Reached target initrd-fs.target - Initrd File Systems. Jan 13 21:24:48.987214 systemd[1]: Reached target initrd.target - Initrd Default Target. Jan 13 21:24:48.987600 systemd[1]: dracut-mount.service - dracut mount hook was skipped because no trigger condition checks were met. Jan 13 21:24:48.991623 systemd-networkd[776]: eth0: Gained IPv6LL Jan 13 21:24:48.994463 systemd[1]: Starting dracut-pre-pivot.service - dracut pre-pivot and cleanup hook... Jan 13 21:24:49.008518 systemd[1]: Finished dracut-pre-pivot.service - dracut pre-pivot and cleanup hook. Jan 13 21:24:49.020502 systemd[1]: Starting initrd-cleanup.service - Cleaning Up and Shutting Down Daemons... Jan 13 21:24:49.029108 systemd[1]: Stopped target nss-lookup.target - Host and Network Name Lookups. Jan 13 21:24:49.030410 systemd[1]: Stopped target remote-cryptsetup.target - Remote Encrypted Volumes. 
Jan 13 21:24:49.032643 systemd[1]: Stopped target timers.target - Timer Units. Jan 13 21:24:49.034671 systemd[1]: dracut-pre-pivot.service: Deactivated successfully. Jan 13 21:24:49.034781 systemd[1]: Stopped dracut-pre-pivot.service - dracut pre-pivot and cleanup hook. Jan 13 21:24:49.037091 systemd[1]: Stopped target initrd.target - Initrd Default Target. Jan 13 21:24:49.038807 systemd[1]: Stopped target basic.target - Basic System. Jan 13 21:24:49.040905 systemd[1]: Stopped target ignition-complete.target - Ignition Complete. Jan 13 21:24:49.042902 systemd[1]: Stopped target ignition-diskful.target - Ignition Boot Disk Setup. Jan 13 21:24:49.045052 systemd[1]: Stopped target initrd-root-device.target - Initrd Root Device. Jan 13 21:24:49.047137 systemd[1]: Stopped target remote-fs.target - Remote File Systems. Jan 13 21:24:49.049285 systemd[1]: Stopped target remote-fs-pre.target - Preparation for Remote File Systems. Jan 13 21:24:49.051595 systemd[1]: Stopped target sysinit.target - System Initialization. Jan 13 21:24:49.053643 systemd[1]: Stopped target local-fs.target - Local File Systems. Jan 13 21:24:49.055850 systemd[1]: Stopped target swap.target - Swaps. Jan 13 21:24:49.057653 systemd[1]: dracut-pre-mount.service: Deactivated successfully. Jan 13 21:24:49.057776 systemd[1]: Stopped dracut-pre-mount.service - dracut pre-mount hook. Jan 13 21:24:49.059916 systemd[1]: Stopped target cryptsetup.target - Local Encrypted Volumes. Jan 13 21:24:49.061605 systemd[1]: Stopped target cryptsetup-pre.target - Local Encrypted Volumes (Pre). Jan 13 21:24:49.063664 systemd[1]: clevis-luks-askpass.path: Deactivated successfully. Jan 13 21:24:49.063757 systemd[1]: Stopped clevis-luks-askpass.path - Forward Password Requests to Clevis Directory Watch. Jan 13 21:24:49.065927 systemd[1]: dracut-initqueue.service: Deactivated successfully. Jan 13 21:24:49.066039 systemd[1]: Stopped dracut-initqueue.service - dracut initqueue hook. 
Jan 13 21:24:49.068222 systemd[1]: ignition-fetch-offline.service: Deactivated successfully. Jan 13 21:24:49.068352 systemd[1]: Stopped ignition-fetch-offline.service - Ignition (fetch-offline). Jan 13 21:24:49.070391 systemd[1]: Stopped target paths.target - Path Units. Jan 13 21:24:49.078756 systemd[1]: systemd-ask-password-console.path: Deactivated successfully. Jan 13 21:24:49.082385 systemd[1]: Stopped systemd-ask-password-console.path - Dispatch Password Requests to Console Directory Watch. Jan 13 21:24:49.084431 systemd[1]: Stopped target slices.target - Slice Units. Jan 13 21:24:49.086446 systemd[1]: Stopped target sockets.target - Socket Units. Jan 13 21:24:49.088206 systemd[1]: iscsid.socket: Deactivated successfully. Jan 13 21:24:49.088295 systemd[1]: Closed iscsid.socket - Open-iSCSI iscsid Socket. Jan 13 21:24:49.090226 systemd[1]: iscsiuio.socket: Deactivated successfully. Jan 13 21:24:49.090326 systemd[1]: Closed iscsiuio.socket - Open-iSCSI iscsiuio Socket. Jan 13 21:24:49.092684 systemd[1]: initrd-setup-root-after-ignition.service: Deactivated successfully. Jan 13 21:24:49.092791 systemd[1]: Stopped initrd-setup-root-after-ignition.service - Root filesystem completion. Jan 13 21:24:49.094780 systemd[1]: ignition-files.service: Deactivated successfully. Jan 13 21:24:49.094882 systemd[1]: Stopped ignition-files.service - Ignition (files). Jan 13 21:24:49.109512 systemd[1]: Stopping ignition-mount.service - Ignition (mount)... Jan 13 21:24:49.111336 systemd[1]: Stopping sysroot-boot.service - /sysroot/boot... Jan 13 21:24:49.112481 systemd[1]: systemd-udev-trigger.service: Deactivated successfully. Jan 13 21:24:49.112633 systemd[1]: Stopped systemd-udev-trigger.service - Coldplug All udev Devices. Jan 13 21:24:49.114911 systemd[1]: dracut-pre-trigger.service: Deactivated successfully. Jan 13 21:24:49.115107 systemd[1]: Stopped dracut-pre-trigger.service - dracut pre-trigger hook. 
Jan 13 21:24:49.122788 systemd[1]: initrd-cleanup.service: Deactivated successfully. Jan 13 21:24:49.122908 systemd[1]: Finished initrd-cleanup.service - Cleaning Up and Shutting Down Daemons. Jan 13 21:24:49.127151 ignition[1015]: INFO : Ignition 2.19.0 Jan 13 21:24:49.127151 ignition[1015]: INFO : Stage: umount Jan 13 21:24:49.127151 ignition[1015]: INFO : no configs at "/usr/lib/ignition/base.d" Jan 13 21:24:49.127151 ignition[1015]: INFO : no config dir at "/usr/lib/ignition/base.platform.d/qemu" Jan 13 21:24:49.127151 ignition[1015]: INFO : umount: umount passed Jan 13 21:24:49.132732 ignition[1015]: INFO : Ignition finished successfully Jan 13 21:24:49.129516 systemd[1]: ignition-mount.service: Deactivated successfully. Jan 13 21:24:49.129637 systemd[1]: Stopped ignition-mount.service - Ignition (mount). Jan 13 21:24:49.130876 systemd[1]: Stopped target network.target - Network. Jan 13 21:24:49.133158 systemd[1]: ignition-disks.service: Deactivated successfully. Jan 13 21:24:49.133213 systemd[1]: Stopped ignition-disks.service - Ignition (disks). Jan 13 21:24:49.133707 systemd[1]: ignition-kargs.service: Deactivated successfully. Jan 13 21:24:49.133754 systemd[1]: Stopped ignition-kargs.service - Ignition (kargs). Jan 13 21:24:49.134039 systemd[1]: ignition-setup.service: Deactivated successfully. Jan 13 21:24:49.134080 systemd[1]: Stopped ignition-setup.service - Ignition (setup). Jan 13 21:24:49.134554 systemd[1]: ignition-setup-pre.service: Deactivated successfully. Jan 13 21:24:49.134595 systemd[1]: Stopped ignition-setup-pre.service - Ignition env setup. Jan 13 21:24:49.135022 systemd[1]: Stopping systemd-networkd.service - Network Configuration... Jan 13 21:24:49.142276 systemd[1]: Stopping systemd-resolved.service - Network Name Resolution... Jan 13 21:24:49.150364 systemd-networkd[776]: eth0: DHCPv6 lease lost Jan 13 21:24:49.151035 systemd[1]: systemd-resolved.service: Deactivated successfully. 
Jan 13 21:24:49.151157 systemd[1]: Stopped systemd-resolved.service - Network Name Resolution. Jan 13 21:24:49.153416 systemd[1]: systemd-networkd.service: Deactivated successfully. Jan 13 21:24:49.153552 systemd[1]: Stopped systemd-networkd.service - Network Configuration. Jan 13 21:24:49.155648 systemd[1]: systemd-networkd.socket: Deactivated successfully. Jan 13 21:24:49.155700 systemd[1]: Closed systemd-networkd.socket - Network Service Netlink Socket. Jan 13 21:24:49.165419 systemd[1]: Stopping network-cleanup.service - Network Cleanup... Jan 13 21:24:49.165888 systemd[1]: parse-ip-for-networkd.service: Deactivated successfully. Jan 13 21:24:49.165943 systemd[1]: Stopped parse-ip-for-networkd.service - Write systemd-networkd units from cmdline. Jan 13 21:24:49.166261 systemd[1]: systemd-sysctl.service: Deactivated successfully. Jan 13 21:24:49.166302 systemd[1]: Stopped systemd-sysctl.service - Apply Kernel Variables. Jan 13 21:24:49.166761 systemd[1]: systemd-modules-load.service: Deactivated successfully. Jan 13 21:24:49.166820 systemd[1]: Stopped systemd-modules-load.service - Load Kernel Modules. Jan 13 21:24:49.167092 systemd[1]: systemd-tmpfiles-setup.service: Deactivated successfully. Jan 13 21:24:49.167150 systemd[1]: Stopped systemd-tmpfiles-setup.service - Create System Files and Directories. Jan 13 21:24:49.167691 systemd[1]: Stopping systemd-udevd.service - Rule-based Manager for Device Events and Files... Jan 13 21:24:49.180851 systemd[1]: network-cleanup.service: Deactivated successfully. Jan 13 21:24:49.181015 systemd[1]: Stopped network-cleanup.service - Network Cleanup. Jan 13 21:24:49.196237 systemd[1]: sysroot-boot.mount: Deactivated successfully. Jan 13 21:24:49.196827 systemd[1]: systemd-udevd.service: Deactivated successfully. Jan 13 21:24:49.197004 systemd[1]: Stopped systemd-udevd.service - Rule-based Manager for Device Events and Files. Jan 13 21:24:49.199000 systemd[1]: systemd-udevd-control.socket: Deactivated successfully. 
Jan 13 21:24:49.199052 systemd[1]: Closed systemd-udevd-control.socket - udev Control Socket. Jan 13 21:24:49.201170 systemd[1]: systemd-udevd-kernel.socket: Deactivated successfully. Jan 13 21:24:49.201212 systemd[1]: Closed systemd-udevd-kernel.socket - udev Kernel Socket. Jan 13 21:24:49.203714 systemd[1]: dracut-pre-udev.service: Deactivated successfully. Jan 13 21:24:49.203765 systemd[1]: Stopped dracut-pre-udev.service - dracut pre-udev hook. Jan 13 21:24:49.206680 systemd[1]: dracut-cmdline.service: Deactivated successfully. Jan 13 21:24:49.206730 systemd[1]: Stopped dracut-cmdline.service - dracut cmdline hook. Jan 13 21:24:49.209117 systemd[1]: dracut-cmdline-ask.service: Deactivated successfully. Jan 13 21:24:49.209165 systemd[1]: Stopped dracut-cmdline-ask.service - dracut ask for additional cmdline parameters. Jan 13 21:24:49.222536 systemd[1]: Starting initrd-udevadm-cleanup-db.service - Cleanup udev Database... Jan 13 21:24:49.223772 systemd[1]: systemd-tmpfiles-setup-dev.service: Deactivated successfully. Jan 13 21:24:49.223834 systemd[1]: Stopped systemd-tmpfiles-setup-dev.service - Create Static Device Nodes in /dev. Jan 13 21:24:49.226342 systemd[1]: systemd-tmpfiles-setup-dev-early.service: Deactivated successfully. Jan 13 21:24:49.226394 systemd[1]: Stopped systemd-tmpfiles-setup-dev-early.service - Create Static Device Nodes in /dev gracefully. Jan 13 21:24:49.228779 systemd[1]: kmod-static-nodes.service: Deactivated successfully. Jan 13 21:24:49.228828 systemd[1]: Stopped kmod-static-nodes.service - Create List of Static Device Nodes. Jan 13 21:24:49.231386 systemd[1]: systemd-vconsole-setup.service: Deactivated successfully. Jan 13 21:24:49.231432 systemd[1]: Stopped systemd-vconsole-setup.service - Virtual Console Setup. Jan 13 21:24:49.234142 systemd[1]: initrd-udevadm-cleanup-db.service: Deactivated successfully. Jan 13 21:24:49.234250 systemd[1]: Finished initrd-udevadm-cleanup-db.service - Cleanup udev Database. 
Jan 13 21:24:49.376463 systemd[1]: sysroot-boot.service: Deactivated successfully. Jan 13 21:24:49.377574 systemd[1]: Stopped sysroot-boot.service - /sysroot/boot. Jan 13 21:24:49.380281 systemd[1]: Reached target initrd-switch-root.target - Switch Root. Jan 13 21:24:49.382584 systemd[1]: initrd-setup-root.service: Deactivated successfully. Jan 13 21:24:49.383722 systemd[1]: Stopped initrd-setup-root.service - Root filesystem setup. Jan 13 21:24:49.396574 systemd[1]: Starting initrd-switch-root.service - Switch Root... Jan 13 21:24:49.405601 systemd[1]: Switching root. Jan 13 21:24:49.437254 systemd-journald[191]: Journal stopped Jan 13 21:24:50.529401 systemd-journald[191]: Received SIGTERM from PID 1 (systemd). Jan 13 21:24:50.529482 kernel: SELinux: policy capability network_peer_controls=1 Jan 13 21:24:50.529500 kernel: SELinux: policy capability open_perms=1 Jan 13 21:24:50.529516 kernel: SELinux: policy capability extended_socket_class=1 Jan 13 21:24:50.529531 kernel: SELinux: policy capability always_check_network=0 Jan 13 21:24:50.529549 kernel: SELinux: policy capability cgroup_seclabel=1 Jan 13 21:24:50.529564 kernel: SELinux: policy capability nnp_nosuid_transition=1 Jan 13 21:24:50.529579 kernel: SELinux: policy capability genfs_seclabel_symlinks=0 Jan 13 21:24:50.529599 kernel: SELinux: policy capability ioctl_skip_cloexec=0 Jan 13 21:24:50.529614 kernel: audit: type=1403 audit(1736803489.779:2): auid=4294967295 ses=4294967295 lsm=selinux res=1 Jan 13 21:24:50.529636 systemd[1]: Successfully loaded SELinux policy in 50.337ms. Jan 13 21:24:50.529667 systemd[1]: Relabeled /dev, /dev/shm, /run, /sys/fs/cgroup in 14.318ms. 
Jan 13 21:24:50.529689 systemd[1]: systemd 255 running in system mode (+PAM +AUDIT +SELINUX -APPARMOR +IMA +SMACK +SECCOMP +GCRYPT -GNUTLS +OPENSSL -ACL +BLKID +CURL +ELFUTILS -FIDO2 +IDN2 -IDN +IPTC +KMOD +LIBCRYPTSETUP +LIBFDISK +PCRE2 -PWQUALITY -P11KIT -QRENCODE +TPM2 +BZIP2 +LZ4 +XZ +ZLIB +ZSTD -BPF_FRAMEWORK -XKBCOMMON +UTMP -SYSVINIT default-hierarchy=unified) Jan 13 21:24:50.529709 systemd[1]: Detected virtualization kvm. Jan 13 21:24:50.529725 systemd[1]: Detected architecture x86-64. Jan 13 21:24:50.529742 systemd[1]: Detected first boot. Jan 13 21:24:50.529758 systemd[1]: Initializing machine ID from VM UUID. Jan 13 21:24:50.531093 zram_generator::config[1060]: No configuration found. Jan 13 21:24:50.531121 systemd[1]: Populated /etc with preset unit settings. Jan 13 21:24:50.531137 systemd[1]: initrd-switch-root.service: Deactivated successfully. Jan 13 21:24:50.531153 systemd[1]: Stopped initrd-switch-root.service - Switch Root. Jan 13 21:24:50.531173 systemd[1]: systemd-journald.service: Scheduled restart job, restart counter is at 1. Jan 13 21:24:50.531191 systemd[1]: Created slice system-addon\x2dconfig.slice - Slice /system/addon-config. Jan 13 21:24:50.531207 systemd[1]: Created slice system-addon\x2drun.slice - Slice /system/addon-run. Jan 13 21:24:50.531223 systemd[1]: Created slice system-getty.slice - Slice /system/getty. Jan 13 21:24:50.531239 systemd[1]: Created slice system-modprobe.slice - Slice /system/modprobe. Jan 13 21:24:50.531258 systemd[1]: Created slice system-serial\x2dgetty.slice - Slice /system/serial-getty. Jan 13 21:24:50.531274 systemd[1]: Created slice system-system\x2dcloudinit.slice - Slice /system/system-cloudinit. Jan 13 21:24:50.531290 systemd[1]: Created slice system-systemd\x2dfsck.slice - Slice /system/systemd-fsck. Jan 13 21:24:50.531345 systemd[1]: Created slice user.slice - User and Session Slice. 
Jan 13 21:24:50.531366 systemd[1]: Started clevis-luks-askpass.path - Forward Password Requests to Clevis Directory Watch. Jan 13 21:24:50.531388 systemd[1]: Started systemd-ask-password-console.path - Dispatch Password Requests to Console Directory Watch. Jan 13 21:24:50.531404 systemd[1]: Started systemd-ask-password-wall.path - Forward Password Requests to Wall Directory Watch. Jan 13 21:24:50.531420 systemd[1]: Set up automount boot.automount - Boot partition Automount Point. Jan 13 21:24:50.531437 systemd[1]: Set up automount proc-sys-fs-binfmt_misc.automount - Arbitrary Executable File Formats File System Automount Point. Jan 13 21:24:50.531453 systemd[1]: Expecting device dev-disk-by\x2dlabel-OEM.device - /dev/disk/by-label/OEM... Jan 13 21:24:50.531470 systemd[1]: Expecting device dev-ttyS0.device - /dev/ttyS0... Jan 13 21:24:50.531486 systemd[1]: Reached target cryptsetup-pre.target - Local Encrypted Volumes (Pre). Jan 13 21:24:50.531502 systemd[1]: Stopped target initrd-switch-root.target - Switch Root. Jan 13 21:24:50.531521 systemd[1]: Stopped target initrd-fs.target - Initrd File Systems. Jan 13 21:24:50.531537 systemd[1]: Stopped target initrd-root-fs.target - Initrd Root File System. Jan 13 21:24:50.531553 systemd[1]: Reached target integritysetup.target - Local Integrity Protected Volumes. Jan 13 21:24:50.531570 systemd[1]: Reached target remote-cryptsetup.target - Remote Encrypted Volumes. Jan 13 21:24:50.531585 systemd[1]: Reached target remote-fs.target - Remote File Systems. Jan 13 21:24:50.531602 systemd[1]: Reached target slices.target - Slice Units. Jan 13 21:24:50.531618 systemd[1]: Reached target swap.target - Swaps. Jan 13 21:24:50.531634 systemd[1]: Reached target veritysetup.target - Local Verity Protected Volumes. Jan 13 21:24:50.531653 systemd[1]: Listening on systemd-coredump.socket - Process Core Dump Socket. Jan 13 21:24:50.531669 systemd[1]: Listening on systemd-networkd.socket - Network Service Netlink Socket. 
Jan 13 21:24:50.531685 systemd[1]: Listening on systemd-udevd-control.socket - udev Control Socket. Jan 13 21:24:50.531702 systemd[1]: Listening on systemd-udevd-kernel.socket - udev Kernel Socket. Jan 13 21:24:50.531718 systemd[1]: Listening on systemd-userdbd.socket - User Database Manager Socket. Jan 13 21:24:50.531737 systemd[1]: Mounting dev-hugepages.mount - Huge Pages File System... Jan 13 21:24:50.531753 systemd[1]: Mounting dev-mqueue.mount - POSIX Message Queue File System... Jan 13 21:24:50.531769 systemd[1]: Mounting media.mount - External Media Directory... Jan 13 21:24:50.531785 systemd[1]: proc-xen.mount - /proc/xen was skipped because of an unmet condition check (ConditionVirtualization=xen). Jan 13 21:24:50.531804 systemd[1]: Mounting sys-kernel-debug.mount - Kernel Debug File System... Jan 13 21:24:50.531820 systemd[1]: Mounting sys-kernel-tracing.mount - Kernel Trace File System... Jan 13 21:24:50.531836 systemd[1]: Mounting tmp.mount - Temporary Directory /tmp... Jan 13 21:24:50.531852 systemd[1]: var-lib-machines.mount - Virtual Machine and Container Storage (Compatibility) was skipped because of an unmet condition check (ConditionPathExists=/var/lib/machines.raw). Jan 13 21:24:50.531869 systemd[1]: Reached target machines.target - Containers. Jan 13 21:24:50.531885 systemd[1]: Starting flatcar-tmpfiles.service - Create missing system files... Jan 13 21:24:50.531901 systemd[1]: ignition-delete-config.service - Ignition (delete config) was skipped because no trigger condition checks were met. Jan 13 21:24:50.531918 systemd[1]: Starting kmod-static-nodes.service - Create List of Static Device Nodes... Jan 13 21:24:50.531936 systemd[1]: Starting modprobe@configfs.service - Load Kernel Module configfs... Jan 13 21:24:50.531953 systemd[1]: Starting modprobe@dm_mod.service - Load Kernel Module dm_mod... Jan 13 21:24:50.531969 systemd[1]: Starting modprobe@drm.service - Load Kernel Module drm... 
Jan 13 21:24:50.531985 systemd[1]: Starting modprobe@efi_pstore.service - Load Kernel Module efi_pstore... Jan 13 21:24:50.532000 systemd[1]: Starting modprobe@fuse.service - Load Kernel Module fuse... Jan 13 21:24:50.532017 systemd[1]: Starting modprobe@loop.service - Load Kernel Module loop... Jan 13 21:24:50.532033 systemd[1]: setup-nsswitch.service - Create /etc/nsswitch.conf was skipped because of an unmet condition check (ConditionPathExists=!/etc/nsswitch.conf). Jan 13 21:24:50.532049 systemd[1]: systemd-fsck-root.service: Deactivated successfully. Jan 13 21:24:50.532068 systemd[1]: Stopped systemd-fsck-root.service - File System Check on Root Device. Jan 13 21:24:50.532086 systemd[1]: systemd-fsck-usr.service: Deactivated successfully. Jan 13 21:24:50.532102 systemd[1]: Stopped systemd-fsck-usr.service. Jan 13 21:24:50.532118 kernel: loop: module loaded Jan 13 21:24:50.532133 kernel: fuse: init (API version 7.39) Jan 13 21:24:50.532148 systemd[1]: Starting systemd-journald.service - Journal Service... Jan 13 21:24:50.532164 systemd[1]: Starting systemd-modules-load.service - Load Kernel Modules... Jan 13 21:24:50.532186 systemd[1]: Starting systemd-network-generator.service - Generate network units from Kernel command line... Jan 13 21:24:50.532202 systemd[1]: Starting systemd-remount-fs.service - Remount Root and Kernel File Systems... Jan 13 21:24:50.532221 systemd[1]: Starting systemd-udev-trigger.service - Coldplug All udev Devices... Jan 13 21:24:50.532237 systemd[1]: verity-setup.service: Deactivated successfully. Jan 13 21:24:50.532254 systemd[1]: Stopped verity-setup.service. Jan 13 21:24:50.532270 systemd[1]: xenserver-pv-version.service - Set fake PV driver version for XenServer was skipped because of an unmet condition check (ConditionVirtualization=xen). Jan 13 21:24:50.532286 kernel: ACPI: bus type drm_connector registered Jan 13 21:24:50.532325 systemd[1]: Mounted dev-hugepages.mount - Huge Pages File System. 
Jan 13 21:24:50.532342 systemd[1]: Mounted dev-mqueue.mount - POSIX Message Queue File System. Jan 13 21:24:50.532359 systemd[1]: Mounted media.mount - External Media Directory. Jan 13 21:24:50.532378 systemd[1]: Mounted sys-kernel-debug.mount - Kernel Debug File System. Jan 13 21:24:50.532394 systemd[1]: Mounted sys-kernel-tracing.mount - Kernel Trace File System. Jan 13 21:24:50.533854 systemd-journald[1134]: Collecting audit messages is disabled. Jan 13 21:24:50.533892 systemd[1]: Mounted tmp.mount - Temporary Directory /tmp. Jan 13 21:24:50.533909 systemd[1]: Finished flatcar-tmpfiles.service - Create missing system files. Jan 13 21:24:50.533930 systemd[1]: Finished kmod-static-nodes.service - Create List of Static Device Nodes. Jan 13 21:24:50.533946 systemd-journald[1134]: Journal started Jan 13 21:24:50.533976 systemd-journald[1134]: Runtime Journal (/run/log/journal/8d200d217bc94985ac76d4e97939b6e9) is 6.0M, max 48.3M, 42.2M free. Jan 13 21:24:50.279987 systemd[1]: Queued start job for default target multi-user.target. Jan 13 21:24:50.298065 systemd[1]: Unnecessary job was removed for dev-vda6.device - /dev/vda6. Jan 13 21:24:50.298518 systemd[1]: systemd-journald.service: Deactivated successfully. Jan 13 21:24:50.536115 systemd[1]: modprobe@configfs.service: Deactivated successfully. Jan 13 21:24:50.537397 systemd[1]: Finished modprobe@configfs.service - Load Kernel Module configfs. Jan 13 21:24:50.539322 systemd[1]: Started systemd-journald.service - Journal Service. Jan 13 21:24:50.540130 systemd[1]: modprobe@dm_mod.service: Deactivated successfully. Jan 13 21:24:50.540331 systemd[1]: Finished modprobe@dm_mod.service - Load Kernel Module dm_mod. Jan 13 21:24:50.541794 systemd[1]: modprobe@drm.service: Deactivated successfully. Jan 13 21:24:50.541964 systemd[1]: Finished modprobe@drm.service - Load Kernel Module drm. Jan 13 21:24:50.543353 systemd[1]: modprobe@efi_pstore.service: Deactivated successfully. 
Jan 13 21:24:50.543526 systemd[1]: Finished modprobe@efi_pstore.service - Load Kernel Module efi_pstore. Jan 13 21:24:50.545098 systemd[1]: modprobe@fuse.service: Deactivated successfully. Jan 13 21:24:50.545272 systemd[1]: Finished modprobe@fuse.service - Load Kernel Module fuse. Jan 13 21:24:50.546797 systemd[1]: modprobe@loop.service: Deactivated successfully. Jan 13 21:24:50.546966 systemd[1]: Finished modprobe@loop.service - Load Kernel Module loop. Jan 13 21:24:50.548695 systemd[1]: Finished systemd-modules-load.service - Load Kernel Modules. Jan 13 21:24:50.550169 systemd[1]: Finished systemd-network-generator.service - Generate network units from Kernel command line. Jan 13 21:24:50.551795 systemd[1]: Finished systemd-remount-fs.service - Remount Root and Kernel File Systems. Jan 13 21:24:50.564653 systemd[1]: Reached target network-pre.target - Preparation for Network. Jan 13 21:24:50.578460 systemd[1]: Mounting sys-fs-fuse-connections.mount - FUSE Control File System... Jan 13 21:24:50.581002 systemd[1]: Mounting sys-kernel-config.mount - Kernel Configuration File System... Jan 13 21:24:50.582257 systemd[1]: remount-root.service - Remount Root File System was skipped because of an unmet condition check (ConditionPathIsReadWrite=!/). Jan 13 21:24:50.582309 systemd[1]: Reached target local-fs.target - Local File Systems. Jan 13 21:24:50.584670 systemd[1]: Listening on systemd-sysext.socket - System Extension Image Management (Varlink). Jan 13 21:24:50.587192 systemd[1]: Starting dracut-shutdown.service - Restore /run/initramfs on shutdown... Jan 13 21:24:50.592603 systemd[1]: Starting ldconfig.service - Rebuild Dynamic Linker Cache... Jan 13 21:24:50.594531 systemd[1]: systemd-binfmt.service - Set Up Additional Binary Formats was skipped because no trigger condition checks were met. Jan 13 21:24:50.596806 systemd[1]: Starting systemd-hwdb-update.service - Rebuild Hardware Database... 
Jan 13 21:24:50.599493 systemd[1]: Starting systemd-journal-flush.service - Flush Journal to Persistent Storage... Jan 13 21:24:50.601051 systemd[1]: systemd-pstore.service - Platform Persistent Storage Archival was skipped because of an unmet condition check (ConditionDirectoryNotEmpty=/sys/fs/pstore). Jan 13 21:24:50.604118 systemd[1]: Starting systemd-random-seed.service - Load/Save OS Random Seed... Jan 13 21:24:50.606523 systemd[1]: systemd-repart.service - Repartition Root Disk was skipped because no trigger condition checks were met. Jan 13 21:24:50.611788 systemd-journald[1134]: Time spent on flushing to /var/log/journal/8d200d217bc94985ac76d4e97939b6e9 is 25.181ms for 977 entries. Jan 13 21:24:50.611788 systemd-journald[1134]: System Journal (/var/log/journal/8d200d217bc94985ac76d4e97939b6e9) is 8.0M, max 195.6M, 187.6M free. Jan 13 21:24:50.652854 systemd-journald[1134]: Received client request to flush runtime journal. Jan 13 21:24:50.652912 kernel: loop0: detected capacity change from 0 to 140768 Jan 13 21:24:50.610493 systemd[1]: Starting systemd-sysctl.service - Apply Kernel Variables... Jan 13 21:24:50.617503 systemd[1]: Starting systemd-sysext.service - Merge System Extension Images into /usr/ and /opt/... Jan 13 21:24:50.623035 systemd[1]: Starting systemd-tmpfiles-setup-dev-early.service - Create Static Device Nodes in /dev gracefully... Jan 13 21:24:50.628745 systemd[1]: Finished systemd-udev-trigger.service - Coldplug All udev Devices. Jan 13 21:24:50.633668 systemd[1]: Mounted sys-fs-fuse-connections.mount - FUSE Control File System. Jan 13 21:24:50.635363 systemd[1]: Mounted sys-kernel-config.mount - Kernel Configuration File System. Jan 13 21:24:50.637233 systemd[1]: Finished dracut-shutdown.service - Restore /run/initramfs on shutdown. Jan 13 21:24:50.639646 systemd[1]: Finished systemd-random-seed.service - Load/Save OS Random Seed. Jan 13 21:24:50.646100 systemd[1]: Reached target first-boot-complete.target - First Boot Complete. 
Jan 13 21:24:50.657539 systemd[1]: Starting systemd-machine-id-commit.service - Commit a transient machine-id on disk... Jan 13 21:24:50.660489 systemd[1]: Starting systemd-udev-settle.service - Wait for udev To Complete Device Initialization... Jan 13 21:24:50.662424 systemd[1]: Finished systemd-journal-flush.service - Flush Journal to Persistent Storage. Jan 13 21:24:50.665448 systemd[1]: Finished systemd-sysctl.service - Apply Kernel Variables. Jan 13 21:24:50.672886 systemd-tmpfiles[1175]: ACLs are not supported, ignoring. Jan 13 21:24:50.674103 systemd-tmpfiles[1175]: ACLs are not supported, ignoring. Jan 13 21:24:50.674243 udevadm[1185]: systemd-udev-settle.service is deprecated. Please fix lvm2-activation-early.service, lvm2-activation.service not to pull it in. Jan 13 21:24:50.678398 kernel: squashfs: version 4.0 (2009/01/31) Phillip Lougher Jan 13 21:24:50.681165 systemd[1]: Finished systemd-tmpfiles-setup-dev-early.service - Create Static Device Nodes in /dev gracefully. Jan 13 21:24:50.690553 systemd[1]: Starting systemd-sysusers.service - Create System Users... Jan 13 21:24:50.695015 systemd[1]: etc-machine\x2did.mount: Deactivated successfully. Jan 13 21:24:50.696856 systemd[1]: Finished systemd-machine-id-commit.service - Commit a transient machine-id on disk. Jan 13 21:24:50.699336 kernel: loop1: detected capacity change from 0 to 205544 Jan 13 21:24:50.722066 systemd[1]: Finished systemd-sysusers.service - Create System Users. Jan 13 21:24:50.727344 kernel: loop2: detected capacity change from 0 to 142488 Jan 13 21:24:50.729623 systemd[1]: Starting systemd-tmpfiles-setup-dev.service - Create Static Device Nodes in /dev... Jan 13 21:24:50.751961 systemd-tmpfiles[1197]: ACLs are not supported, ignoring. Jan 13 21:24:50.751986 systemd-tmpfiles[1197]: ACLs are not supported, ignoring. Jan 13 21:24:50.759447 systemd[1]: Finished systemd-tmpfiles-setup-dev.service - Create Static Device Nodes in /dev. 
Jan 13 21:24:50.773361 kernel: loop3: detected capacity change from 0 to 140768 Jan 13 21:24:50.786344 kernel: loop4: detected capacity change from 0 to 205544 Jan 13 21:24:50.793330 kernel: loop5: detected capacity change from 0 to 142488 Jan 13 21:24:50.801849 (sd-merge)[1202]: Using extensions 'containerd-flatcar', 'docker-flatcar', 'kubernetes'. Jan 13 21:24:50.802476 (sd-merge)[1202]: Merged extensions into '/usr'. Jan 13 21:24:50.806848 systemd[1]: Reloading requested from client PID 1174 ('systemd-sysext') (unit systemd-sysext.service)... Jan 13 21:24:50.806863 systemd[1]: Reloading... Jan 13 21:24:50.864198 zram_generator::config[1227]: No configuration found. Jan 13 21:24:50.957062 ldconfig[1169]: /sbin/ldconfig: /lib/ld.so.conf is not an ELF file - it has the wrong magic bytes at the start. Jan 13 21:24:51.001206 systemd[1]: /usr/lib/systemd/system/docker.socket:6: ListenStream= references a path below legacy directory /var/run/, updating /var/run/docker.sock → /run/docker.sock; please update the unit file accordingly. Jan 13 21:24:51.070882 systemd[1]: Reloading finished in 263 ms. Jan 13 21:24:51.106100 systemd[1]: Finished ldconfig.service - Rebuild Dynamic Linker Cache. Jan 13 21:24:51.107732 systemd[1]: Finished systemd-sysext.service - Merge System Extension Images into /usr/ and /opt/. Jan 13 21:24:51.121500 systemd[1]: Starting ensure-sysext.service... Jan 13 21:24:51.123691 systemd[1]: Starting systemd-tmpfiles-setup.service - Create System Files and Directories... Jan 13 21:24:51.130194 systemd[1]: Reloading requested from client PID 1265 ('systemctl') (unit ensure-sysext.service)... Jan 13 21:24:51.130208 systemd[1]: Reloading... Jan 13 21:24:51.148478 systemd-tmpfiles[1266]: /usr/lib/tmpfiles.d/provision.conf:20: Duplicate line for path "/root", ignoring. Jan 13 21:24:51.148832 systemd-tmpfiles[1266]: /usr/lib/tmpfiles.d/systemd-flatcar.conf:6: Duplicate line for path "/var/log/journal", ignoring. 
Jan 13 21:24:51.149828 systemd-tmpfiles[1266]: /usr/lib/tmpfiles.d/systemd.conf:29: Duplicate line for path "/var/lib/systemd", ignoring.
Jan 13 21:24:51.150121 systemd-tmpfiles[1266]: ACLs are not supported, ignoring.
Jan 13 21:24:51.150199 systemd-tmpfiles[1266]: ACLs are not supported, ignoring.
Jan 13 21:24:51.154010 systemd-tmpfiles[1266]: Detected autofs mount point /boot during canonicalization of boot.
Jan 13 21:24:51.154024 systemd-tmpfiles[1266]: Skipping /boot
Jan 13 21:24:51.170807 systemd-tmpfiles[1266]: Detected autofs mount point /boot during canonicalization of boot.
Jan 13 21:24:51.170826 systemd-tmpfiles[1266]: Skipping /boot
Jan 13 21:24:51.194388 zram_generator::config[1296]: No configuration found.
Jan 13 21:24:51.308236 systemd[1]: /usr/lib/systemd/system/docker.socket:6: ListenStream= references a path below legacy directory /var/run/, updating /var/run/docker.sock → /run/docker.sock; please update the unit file accordingly.
Jan 13 21:24:51.360721 systemd[1]: Reloading finished in 230 ms.
Jan 13 21:24:51.380455 systemd[1]: Finished systemd-hwdb-update.service - Rebuild Hardware Database.
Jan 13 21:24:51.391801 systemd[1]: Finished systemd-tmpfiles-setup.service - Create System Files and Directories.
Jan 13 21:24:51.400618 systemd[1]: Starting audit-rules.service - Load Security Auditing Rules...
Jan 13 21:24:51.403563 systemd[1]: Starting clean-ca-certificates.service - Clean up broken links in /etc/ssl/certs...
Jan 13 21:24:51.406358 systemd[1]: Starting systemd-journal-catalog-update.service - Rebuild Journal Catalog...
Jan 13 21:24:51.410629 systemd[1]: Starting systemd-resolved.service - Network Name Resolution...
Jan 13 21:24:51.414465 systemd[1]: Starting systemd-udevd.service - Rule-based Manager for Device Events and Files...
Jan 13 21:24:51.417540 systemd[1]: Starting systemd-update-utmp.service - Record System Boot/Shutdown in UTMP...
Jan 13 21:24:51.421128 systemd[1]: proc-xen.mount - /proc/xen was skipped because of an unmet condition check (ConditionVirtualization=xen).
Jan 13 21:24:51.421816 systemd[1]: ignition-delete-config.service - Ignition (delete config) was skipped because no trigger condition checks were met.
Jan 13 21:24:51.428722 systemd[1]: Starting modprobe@dm_mod.service - Load Kernel Module dm_mod...
Jan 13 21:24:51.432632 systemd[1]: Starting modprobe@efi_pstore.service - Load Kernel Module efi_pstore...
Jan 13 21:24:51.436139 systemd[1]: Starting modprobe@loop.service - Load Kernel Module loop...
Jan 13 21:24:51.438514 systemd[1]: systemd-binfmt.service - Set Up Additional Binary Formats was skipped because no trigger condition checks were met.
Jan 13 21:24:51.442166 systemd[1]: Starting systemd-userdbd.service - User Database Manager...
Jan 13 21:24:51.443291 systemd[1]: xenserver-pv-version.service - Set fake PV driver version for XenServer was skipped because of an unmet condition check (ConditionVirtualization=xen).
Jan 13 21:24:51.444473 systemd[1]: modprobe@dm_mod.service: Deactivated successfully.
Jan 13 21:24:51.444691 systemd[1]: Finished modprobe@dm_mod.service - Load Kernel Module dm_mod.
Jan 13 21:24:51.449887 systemd[1]: modprobe@loop.service: Deactivated successfully.
Jan 13 21:24:51.450990 systemd[1]: Finished modprobe@loop.service - Load Kernel Module loop.
Jan 13 21:24:51.452854 systemd[1]: modprobe@efi_pstore.service: Deactivated successfully.
Jan 13 21:24:51.453172 systemd[1]: Finished modprobe@efi_pstore.service - Load Kernel Module efi_pstore.
Jan 13 21:24:51.454808 systemd-udevd[1337]: Using default interface naming scheme 'v255'.
Jan 13 21:24:51.459854 systemd[1]: Finished systemd-journal-catalog-update.service - Rebuild Journal Catalog.
Jan 13 21:24:51.461958 systemd[1]: Finished systemd-update-utmp.service - Record System Boot/Shutdown in UTMP.
Jan 13 21:24:51.466555 systemd[1]: proc-xen.mount - /proc/xen was skipped because of an unmet condition check (ConditionVirtualization=xen).
Jan 13 21:24:51.466744 systemd[1]: ignition-delete-config.service - Ignition (delete config) was skipped because no trigger condition checks were met.
Jan 13 21:24:51.476115 augenrules[1363]: No rules
Jan 13 21:24:51.476673 systemd[1]: Starting modprobe@dm_mod.service - Load Kernel Module dm_mod...
Jan 13 21:24:51.478960 systemd[1]: Starting modprobe@efi_pstore.service - Load Kernel Module efi_pstore...
Jan 13 21:24:51.481729 systemd[1]: Starting modprobe@loop.service - Load Kernel Module loop...
Jan 13 21:24:51.485436 systemd[1]: systemd-binfmt.service - Set Up Additional Binary Formats was skipped because no trigger condition checks were met.
Jan 13 21:24:51.486717 systemd[1]: Starting systemd-update-done.service - Update is Completed...
Jan 13 21:24:51.490350 systemd[1]: xenserver-pv-version.service - Set fake PV driver version for XenServer was skipped because of an unmet condition check (ConditionVirtualization=xen).
Jan 13 21:24:51.491059 systemd[1]: Started systemd-udevd.service - Rule-based Manager for Device Events and Files.
Jan 13 21:24:51.493121 systemd[1]: Finished audit-rules.service - Load Security Auditing Rules.
Jan 13 21:24:51.494729 systemd[1]: modprobe@dm_mod.service: Deactivated successfully.
Jan 13 21:24:51.494892 systemd[1]: Finished modprobe@dm_mod.service - Load Kernel Module dm_mod.
Jan 13 21:24:51.496644 systemd[1]: modprobe@efi_pstore.service: Deactivated successfully.
Jan 13 21:24:51.496823 systemd[1]: Finished modprobe@efi_pstore.service - Load Kernel Module efi_pstore.
Jan 13 21:24:51.498471 systemd[1]: Started systemd-userdbd.service - User Database Manager.
Jan 13 21:24:51.503931 systemd[1]: modprobe@loop.service: Deactivated successfully.
Jan 13 21:24:51.504105 systemd[1]: Finished modprobe@loop.service - Load Kernel Module loop.
Jan 13 21:24:51.513812 systemd[1]: Finished clean-ca-certificates.service - Clean up broken links in /etc/ssl/certs.
Jan 13 21:24:51.532387 systemd[1]: Finished ensure-sysext.service.
Jan 13 21:24:51.534554 systemd[1]: Finished systemd-update-done.service - Update is Completed.
Jan 13 21:24:51.541538 systemd[1]: proc-xen.mount - /proc/xen was skipped because of an unmet condition check (ConditionVirtualization=xen).
Jan 13 21:24:51.541678 systemd[1]: ignition-delete-config.service - Ignition (delete config) was skipped because no trigger condition checks were met.
Jan 13 21:24:51.547497 systemd[1]: Starting modprobe@dm_mod.service - Load Kernel Module dm_mod...
Jan 13 21:24:51.550481 systemd[1]: Starting modprobe@drm.service - Load Kernel Module drm...
Jan 13 21:24:51.554369 kernel: BTRFS warning: duplicate device /dev/vda3 devid 1 generation 37 scanned by (udev-worker) (1372)
Jan 13 21:24:51.556611 systemd[1]: Starting modprobe@efi_pstore.service - Load Kernel Module efi_pstore...
Jan 13 21:24:51.558635 systemd[1]: systemd-binfmt.service - Set Up Additional Binary Formats was skipped because no trigger condition checks were met.
Jan 13 21:24:51.562143 systemd[1]: Starting systemd-networkd.service - Network Configuration...
Jan 13 21:24:51.567472 systemd[1]: Starting systemd-timesyncd.service - Network Time Synchronization...
Jan 13 21:24:51.569439 systemd[1]: update-ca-certificates.service - Update CA bundle at /etc/ssl/certs/ca-certificates.crt was skipped because of an unmet condition check (ConditionPathIsSymbolicLink=!/etc/ssl/certs/ca-certificates.crt).
Jan 13 21:24:51.569466 systemd[1]: xenserver-pv-version.service - Set fake PV driver version for XenServer was skipped because of an unmet condition check (ConditionVirtualization=xen).
Jan 13 21:24:51.569962 systemd[1]: modprobe@dm_mod.service: Deactivated successfully.
Jan 13 21:24:51.570184 systemd[1]: Finished modprobe@dm_mod.service - Load Kernel Module dm_mod.
Jan 13 21:24:51.571800 systemd[1]: modprobe@drm.service: Deactivated successfully.
Jan 13 21:24:51.572478 systemd[1]: Finished modprobe@drm.service - Load Kernel Module drm.
Jan 13 21:24:51.575772 systemd[1]: modprobe@efi_pstore.service: Deactivated successfully.
Jan 13 21:24:51.575945 systemd[1]: Finished modprobe@efi_pstore.service - Load Kernel Module efi_pstore.
Jan 13 21:24:51.592335 systemd[1]: Condition check resulted in dev-ttyS0.device - /dev/ttyS0 being skipped.
Jan 13 21:24:51.594695 systemd[1]: systemd-pstore.service - Platform Persistent Storage Archival was skipped because of an unmet condition check (ConditionDirectoryNotEmpty=/sys/fs/pstore).
Jan 13 21:24:51.594748 systemd[1]: systemd-repart.service - Repartition Root Disk was skipped because no trigger condition checks were met.
Jan 13 21:24:51.598568 systemd-resolved[1336]: Positive Trust Anchors:
Jan 13 21:24:51.598584 systemd-resolved[1336]: . IN DS 20326 8 2 e06d44b80b8f1d39a95c0b0d7c65d08458e880409bbc683457104237c7f8ec8d
Jan 13 21:24:51.598616 systemd-resolved[1336]: Negative trust anchors: home.arpa 10.in-addr.arpa 16.172.in-addr.arpa 17.172.in-addr.arpa 18.172.in-addr.arpa 19.172.in-addr.arpa 20.172.in-addr.arpa 21.172.in-addr.arpa 22.172.in-addr.arpa 23.172.in-addr.arpa 24.172.in-addr.arpa 25.172.in-addr.arpa 26.172.in-addr.arpa 27.172.in-addr.arpa 28.172.in-addr.arpa 29.172.in-addr.arpa 30.172.in-addr.arpa 31.172.in-addr.arpa 170.0.0.192.in-addr.arpa 171.0.0.192.in-addr.arpa 168.192.in-addr.arpa d.f.ip6.arpa ipv4only.arpa resolver.arpa corp home internal intranet lan local private test
Jan 13 21:24:51.603556 systemd-resolved[1336]: Defaulting to hostname 'linux'.
Jan 13 21:24:51.606116 systemd[1]: Started systemd-resolved.service - Network Name Resolution.
Jan 13 21:24:51.607531 systemd[1]: Reached target nss-lookup.target - Host and Network Name Lookups.
Jan 13 21:24:51.621152 systemd[1]: Found device dev-disk-by\x2dlabel-OEM.device - /dev/disk/by-label/OEM.
Jan 13 21:24:51.627483 systemd[1]: Starting systemd-fsck@dev-disk-by\x2dlabel-OEM.service - File System Check on /dev/disk/by-label/OEM...
Jan 13 21:24:51.630572 kernel: input: Power Button as /devices/LNXSYSTM:00/LNXPWRBN:00/input/input3
Jan 13 21:24:51.636461 kernel: ACPI: button: Power Button [PWRF]
Jan 13 21:24:51.648534 systemd[1]: Finished systemd-fsck@dev-disk-by\x2dlabel-OEM.service - File System Check on /dev/disk/by-label/OEM.
Jan 13 21:24:51.654024 systemd-networkd[1406]: lo: Link UP
Jan 13 21:24:51.654032 systemd-networkd[1406]: lo: Gained carrier
Jan 13 21:24:51.657000 systemd-networkd[1406]: Enumeration completed
Jan 13 21:24:51.657078 systemd[1]: Started systemd-networkd.service - Network Configuration.
Jan 13 21:24:51.658304 systemd[1]: Reached target network.target - Network.
Jan 13 21:24:51.661554 systemd-networkd[1406]: eth0: found matching network '/usr/lib/systemd/network/zz-default.network', based on potentially unpredictable interface name.
Jan 13 21:24:51.661566 systemd-networkd[1406]: eth0: Configuring with /usr/lib/systemd/network/zz-default.network.
Jan 13 21:24:51.667037 kernel: i801_smbus 0000:00:1f.3: Enabling SMBus device
Jan 13 21:24:51.674657 kernel: i801_smbus 0000:00:1f.3: SMBus using PCI interrupt
Jan 13 21:24:51.674843 kernel: i2c i2c-0: 1/1 memory slots populated (from DMI)
Jan 13 21:24:51.675026 kernel: i2c i2c-0: Memory type 0x07 not supported yet, not instantiating SPD
Jan 13 21:24:51.662391 systemd-networkd[1406]: eth0: Link UP
Jan 13 21:24:51.662399 systemd-networkd[1406]: eth0: Gained carrier
Jan 13 21:24:51.662417 systemd-networkd[1406]: eth0: found matching network '/usr/lib/systemd/network/zz-default.network', based on potentially unpredictable interface name.
Jan 13 21:24:51.673481 systemd[1]: Starting systemd-networkd-wait-online.service - Wait for Network to be Configured...
Jan 13 21:24:51.677366 systemd-networkd[1406]: eth0: DHCPv4 address 10.0.0.88/16, gateway 10.0.0.1 acquired from 10.0.0.1
Jan 13 21:24:51.678022 systemd-timesyncd[1408]: Network configuration changed, trying to establish connection.
Jan 13 21:24:51.679084 systemd[1]: Started systemd-timesyncd.service - Network Time Synchronization.
Jan 13 21:24:52.550744 systemd-timesyncd[1408]: Contacted time server 10.0.0.1:123 (10.0.0.1).
Jan 13 21:24:52.550781 systemd-timesyncd[1408]: Initial clock synchronization to Mon 2025-01-13 21:24:52.550668 UTC.
Jan 13 21:24:52.551923 systemd-resolved[1336]: Clock change detected. Flushing caches.
Jan 13 21:24:52.556951 kernel: input: ImExPS/2 Generic Explorer Mouse as /devices/platform/i8042/serio1/input/input4
Jan 13 21:24:52.558098 systemd[1]: Reached target time-set.target - System Time Set.
Jan 13 21:24:52.647929 kernel: mousedev: PS/2 mouse device common for all mice
Jan 13 21:24:52.648311 systemd[1]: Starting systemd-vconsole-setup.service - Virtual Console Setup...
Jan 13 21:24:52.652104 systemd[1]: systemd-vconsole-setup.service: Deactivated successfully.
Jan 13 21:24:52.652322 systemd[1]: Stopped systemd-vconsole-setup.service - Virtual Console Setup.
Jan 13 21:24:52.657953 systemd[1]: Starting systemd-vconsole-setup.service - Virtual Console Setup...
Jan 13 21:24:52.662122 kernel: kvm_amd: TSC scaling supported
Jan 13 21:24:52.662157 kernel: kvm_amd: Nested Virtualization enabled
Jan 13 21:24:52.662188 kernel: kvm_amd: Nested Paging enabled
Jan 13 21:24:52.662201 kernel: kvm_amd: LBR virtualization supported
Jan 13 21:24:52.663267 kernel: kvm_amd: Virtual VMLOAD VMSAVE supported
Jan 13 21:24:52.663289 kernel: kvm_amd: Virtual GIF supported
Jan 13 21:24:52.684929 kernel: EDAC MC: Ver: 3.0.0
Jan 13 21:24:52.714942 systemd[1]: Finished systemd-vconsole-setup.service - Virtual Console Setup.
Jan 13 21:24:52.719393 systemd[1]: Finished systemd-udev-settle.service - Wait for udev To Complete Device Initialization.
Jan 13 21:24:52.734142 systemd[1]: Starting lvm2-activation-early.service - Activation of LVM2 logical volumes...
Jan 13 21:24:52.743711 lvm[1437]: WARNING: Failed to connect to lvmetad. Falling back to device scanning.
Jan 13 21:24:52.776385 systemd[1]: Finished lvm2-activation-early.service - Activation of LVM2 logical volumes.
Jan 13 21:24:52.778065 systemd[1]: Reached target cryptsetup.target - Local Encrypted Volumes.
Jan 13 21:24:52.779261 systemd[1]: Reached target sysinit.target - System Initialization.
Jan 13 21:24:52.780495 systemd[1]: Started motdgen.path - Watch for update engine configuration changes.
Jan 13 21:24:52.781792 systemd[1]: Started user-cloudinit@var-lib-flatcar\x2dinstall-user_data.path - Watch for a cloud-config at /var/lib/flatcar-install/user_data.
Jan 13 21:24:52.783316 systemd[1]: Started logrotate.timer - Daily rotation of log files.
Jan 13 21:24:52.784678 systemd[1]: Started mdadm.timer - Weekly check for MD array's redundancy information..
Jan 13 21:24:52.786054 systemd[1]: Started systemd-tmpfiles-clean.timer - Daily Cleanup of Temporary Directories.
Jan 13 21:24:52.787375 systemd[1]: update-engine-stub.timer - Update Engine Stub Timer was skipped because of an unmet condition check (ConditionPathExists=/usr/.noupdate).
Jan 13 21:24:52.787425 systemd[1]: Reached target paths.target - Path Units.
Jan 13 21:24:52.788369 systemd[1]: Reached target timers.target - Timer Units.
Jan 13 21:24:52.790144 systemd[1]: Listening on dbus.socket - D-Bus System Message Bus Socket.
Jan 13 21:24:52.793352 systemd[1]: Starting docker.socket - Docker Socket for the API...
Jan 13 21:24:52.806085 systemd[1]: Listening on sshd.socket - OpenSSH Server Socket.
Jan 13 21:24:52.808521 systemd[1]: Starting lvm2-activation.service - Activation of LVM2 logical volumes...
Jan 13 21:24:52.810245 systemd[1]: Listening on docker.socket - Docker Socket for the API.
Jan 13 21:24:52.811543 systemd[1]: Reached target sockets.target - Socket Units.
Jan 13 21:24:52.812636 systemd[1]: Reached target basic.target - Basic System.
Jan 13 21:24:52.813725 systemd[1]: addon-config@oem.service - Configure Addon /oem was skipped because no trigger condition checks were met.
Jan 13 21:24:52.813761 systemd[1]: addon-run@oem.service - Run Addon /oem was skipped because no trigger condition checks were met.
Jan 13 21:24:52.814810 systemd[1]: Starting containerd.service - containerd container runtime...
Jan 13 21:24:52.817150 systemd[1]: Starting dbus.service - D-Bus System Message Bus...
Jan 13 21:24:52.820932 systemd[1]: Starting enable-oem-cloudinit.service - Enable cloudinit...
Jan 13 21:24:52.822570 lvm[1441]: WARNING: Failed to connect to lvmetad. Falling back to device scanning.
Jan 13 21:24:52.824129 systemd[1]: Starting extend-filesystems.service - Extend Filesystems...
Jan 13 21:24:52.825506 systemd[1]: flatcar-setup-environment.service - Modifies /etc/environment for CoreOS was skipped because of an unmet condition check (ConditionPathExists=/oem/bin/flatcar-setup-environment).
Jan 13 21:24:52.827080 systemd[1]: Starting motdgen.service - Generate /run/flatcar/motd...
Jan 13 21:24:52.829120 jq[1444]: false
Jan 13 21:24:52.831511 systemd[1]: Starting ssh-key-proc-cmdline.service - Install an ssh key from /proc/cmdline...
Jan 13 21:24:52.834660 systemd[1]: Starting sshd-keygen.service - Generate sshd host keys...
Jan 13 21:24:52.840159 systemd[1]: Starting systemd-logind.service - User Login Management...
Jan 13 21:24:52.841999 systemd[1]: tcsd.service - TCG Core Services Daemon was skipped because of an unmet condition check (ConditionPathExists=/dev/tpm0).
Jan 13 21:24:52.842602 systemd[1]: cgroup compatibility translation between legacy and unified hierarchy settings activated. See cgroup-compat debug messages for details.
Jan 13 21:24:52.843449 systemd[1]: Starting update-engine.service - Update Engine...
Jan 13 21:24:52.848310 systemd[1]: Starting update-ssh-keys-after-ignition.service - Run update-ssh-keys once after Ignition...
Jan 13 21:24:52.849539 extend-filesystems[1445]: Found loop3
Jan 13 21:24:52.850548 extend-filesystems[1445]: Found loop4
Jan 13 21:24:52.850548 extend-filesystems[1445]: Found loop5
Jan 13 21:24:52.850548 extend-filesystems[1445]: Found sr0
Jan 13 21:24:52.850548 extend-filesystems[1445]: Found vda
Jan 13 21:24:52.850548 extend-filesystems[1445]: Found vda1
Jan 13 21:24:52.850548 extend-filesystems[1445]: Found vda2
Jan 13 21:24:52.850548 extend-filesystems[1445]: Found vda3
Jan 13 21:24:52.850548 extend-filesystems[1445]: Found usr
Jan 13 21:24:52.850548 extend-filesystems[1445]: Found vda4
Jan 13 21:24:52.850548 extend-filesystems[1445]: Found vda6
Jan 13 21:24:52.850548 extend-filesystems[1445]: Found vda7
Jan 13 21:24:52.850548 extend-filesystems[1445]: Found vda9
Jan 13 21:24:52.850548 extend-filesystems[1445]: Checking size of /dev/vda9
Jan 13 21:24:52.856107 systemd[1]: Finished lvm2-activation.service - Activation of LVM2 logical volumes.
Jan 13 21:24:52.859891 jq[1453]: true
Jan 13 21:24:52.871360 systemd[1]: enable-oem-cloudinit.service: Skipped due to 'exec-condition'.
Jan 13 21:24:52.871629 systemd[1]: Condition check resulted in enable-oem-cloudinit.service - Enable cloudinit being skipped.
Jan 13 21:24:52.872046 systemd[1]: ssh-key-proc-cmdline.service: Deactivated successfully.
Jan 13 21:24:52.872287 systemd[1]: Finished ssh-key-proc-cmdline.service - Install an ssh key from /proc/cmdline.
Jan 13 21:24:52.874441 systemd[1]: motdgen.service: Deactivated successfully.
Jan 13 21:24:52.874721 systemd[1]: Finished motdgen.service - Generate /run/flatcar/motd.
Jan 13 21:24:52.878523 update_engine[1451]: I20250113 21:24:52.878420 1451 main.cc:92] Flatcar Update Engine starting
Jan 13 21:24:52.889178 extend-filesystems[1445]: Resized partition /dev/vda9
Jan 13 21:24:52.890571 dbus-daemon[1443]: [system] SELinux support is enabled
Jan 13 21:24:52.890986 systemd[1]: Started dbus.service - D-Bus System Message Bus.
Jan 13 21:24:52.895766 (ntainerd)[1474]: containerd.service: Referenced but unset environment variable evaluates to an empty string: TORCX_IMAGEDIR, TORCX_UNPACKDIR
Jan 13 21:24:52.901163 kernel: BTRFS warning: duplicate device /dev/vda3 devid 1 generation 37 scanned by (udev-worker) (1372)
Jan 13 21:24:52.901476 update_engine[1451]: I20250113 21:24:52.901428 1451 update_check_scheduler.cc:74] Next update check in 3m42s
Jan 13 21:24:52.901566 systemd[1]: system-cloudinit@usr-share-oem-cloud\x2dconfig.yml.service - Load cloud-config from /usr/share/oem/cloud-config.yml was skipped because of an unmet condition check (ConditionFileNotEmpty=/usr/share/oem/cloud-config.yml).
Jan 13 21:24:52.901599 systemd[1]: Reached target system-config.target - Load system-provided cloud configs.
Jan 13 21:24:52.905399 systemd[1]: user-cloudinit-proc-cmdline.service - Load cloud-config from url defined in /proc/cmdline was skipped because of an unmet condition check (ConditionKernelCommandLine=cloud-config-url).
Jan 13 21:24:52.905442 systemd[1]: Reached target user-config.target - Load user-provided cloud configs.
Jan 13 21:24:52.905996 extend-filesystems[1475]: resize2fs 1.47.1 (20-May-2024)
Jan 13 21:24:52.912235 systemd[1]: Started update-engine.service - Update Engine.
Jan 13 21:24:52.914583 jq[1463]: true
Jan 13 21:24:52.915037 kernel: EXT4-fs (vda9): resizing filesystem from 553472 to 1864699 blocks
Jan 13 21:24:52.935484 systemd[1]: Started locksmithd.service - Cluster reboot manager.
Jan 13 21:24:52.947916 kernel: EXT4-fs (vda9): resized filesystem to 1864699
Jan 13 21:24:52.980743 systemd-logind[1450]: Watching system buttons on /dev/input/event1 (Power Button)
Jan 13 21:24:52.980771 systemd-logind[1450]: Watching system buttons on /dev/input/event0 (AT Translated Set 2 keyboard)
Jan 13 21:24:52.981959 systemd-logind[1450]: New seat seat0.
Jan 13 21:24:52.982488 locksmithd[1479]: locksmithd starting currentOperation="UPDATE_STATUS_IDLE" strategy="reboot"
Jan 13 21:24:52.985290 systemd[1]: Started systemd-logind.service - User Login Management.
Jan 13 21:24:52.985494 extend-filesystems[1475]: Filesystem at /dev/vda9 is mounted on /; on-line resizing required
Jan 13 21:24:52.985494 extend-filesystems[1475]: old_desc_blocks = 1, new_desc_blocks = 1
Jan 13 21:24:52.985494 extend-filesystems[1475]: The filesystem on /dev/vda9 is now 1864699 (4k) blocks long.
Jan 13 21:24:52.992293 extend-filesystems[1445]: Resized filesystem in /dev/vda9
Jan 13 21:24:52.987941 systemd[1]: extend-filesystems.service: Deactivated successfully.
Jan 13 21:24:52.988162 systemd[1]: Finished extend-filesystems.service - Extend Filesystems.
Jan 13 21:24:52.995864 bash[1493]: Updated "/home/core/.ssh/authorized_keys"
Jan 13 21:24:52.997404 systemd[1]: Finished update-ssh-keys-after-ignition.service - Run update-ssh-keys once after Ignition.
Jan 13 21:24:52.999525 systemd[1]: sshkeys.service was skipped because no trigger condition checks were met.
Jan 13 21:24:53.100128 sshd_keygen[1467]: ssh-keygen: generating new host keys: RSA ECDSA ED25519
Jan 13 21:24:53.110533 containerd[1474]: time="2025-01-13T21:24:53.110423413Z" level=info msg="starting containerd" revision=174e0d1785eeda18dc2beba45e1d5a188771636b version=v1.7.21
Jan 13 21:24:53.127759 systemd[1]: Finished sshd-keygen.service - Generate sshd host keys.
Jan 13 21:24:53.135300 containerd[1474]: time="2025-01-13T21:24:53.135243239Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.aufs\"..." type=io.containerd.snapshotter.v1
Jan 13 21:24:53.137044 containerd[1474]: time="2025-01-13T21:24:53.137005874Z" level=info msg="skip loading plugin \"io.containerd.snapshotter.v1.aufs\"..." error="aufs is not supported (modprobe aufs failed: exit status 1 \"modprobe: FATAL: Module aufs not found in directory /lib/modules/6.6.71-flatcar\\n\"): skip plugin" type=io.containerd.snapshotter.v1
Jan 13 21:24:53.137044 containerd[1474]: time="2025-01-13T21:24:53.137032334Z" level=info msg="loading plugin \"io.containerd.event.v1.exchange\"..." type=io.containerd.event.v1
Jan 13 21:24:53.137044 containerd[1474]: time="2025-01-13T21:24:53.137047382Z" level=info msg="loading plugin \"io.containerd.internal.v1.opt\"..." type=io.containerd.internal.v1
Jan 13 21:24:53.137151 systemd[1]: Starting issuegen.service - Generate /run/issue...
Jan 13 21:24:53.138150 containerd[1474]: time="2025-01-13T21:24:53.137263447Z" level=info msg="loading plugin \"io.containerd.warning.v1.deprecations\"..." type=io.containerd.warning.v1
Jan 13 21:24:53.138150 containerd[1474]: time="2025-01-13T21:24:53.137284086Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.blockfile\"..." type=io.containerd.snapshotter.v1
Jan 13 21:24:53.138150 containerd[1474]: time="2025-01-13T21:24:53.137366470Z" level=info msg="skip loading plugin \"io.containerd.snapshotter.v1.blockfile\"..." error="no scratch file generator: skip plugin" type=io.containerd.snapshotter.v1
Jan 13 21:24:53.138150 containerd[1474]: time="2025-01-13T21:24:53.137384204Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.btrfs\"..." type=io.containerd.snapshotter.v1
Jan 13 21:24:53.138150 containerd[1474]: time="2025-01-13T21:24:53.137610508Z" level=info msg="skip loading plugin \"io.containerd.snapshotter.v1.btrfs\"..." error="path /var/lib/containerd/io.containerd.snapshotter.v1.btrfs (ext4) must be a btrfs filesystem to be used with the btrfs snapshotter: skip plugin" type=io.containerd.snapshotter.v1
Jan 13 21:24:53.138150 containerd[1474]: time="2025-01-13T21:24:53.137630425Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.devmapper\"..." type=io.containerd.snapshotter.v1
Jan 13 21:24:53.138150 containerd[1474]: time="2025-01-13T21:24:53.137646405Z" level=info msg="skip loading plugin \"io.containerd.snapshotter.v1.devmapper\"..." error="devmapper not configured: skip plugin" type=io.containerd.snapshotter.v1
Jan 13 21:24:53.138150 containerd[1474]: time="2025-01-13T21:24:53.137658869Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.native\"..." type=io.containerd.snapshotter.v1
Jan 13 21:24:53.138150 containerd[1474]: time="2025-01-13T21:24:53.137773554Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.overlayfs\"..." type=io.containerd.snapshotter.v1
Jan 13 21:24:53.138150 containerd[1474]: time="2025-01-13T21:24:53.138078686Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.zfs\"..." type=io.containerd.snapshotter.v1
Jan 13 21:24:53.138406 containerd[1474]: time="2025-01-13T21:24:53.138243956Z" level=info msg="skip loading plugin \"io.containerd.snapshotter.v1.zfs\"..." error="path /var/lib/containerd/io.containerd.snapshotter.v1.zfs must be a zfs filesystem to be used with the zfs snapshotter: skip plugin" type=io.containerd.snapshotter.v1
Jan 13 21:24:53.138406 containerd[1474]: time="2025-01-13T21:24:53.138260778Z" level=info msg="loading plugin \"io.containerd.content.v1.content\"..." type=io.containerd.content.v1
Jan 13 21:24:53.138406 containerd[1474]: time="2025-01-13T21:24:53.138369852Z" level=info msg="loading plugin \"io.containerd.metadata.v1.bolt\"..." type=io.containerd.metadata.v1
Jan 13 21:24:53.138557 containerd[1474]: time="2025-01-13T21:24:53.138427430Z" level=info msg="metadata content store policy set" policy=shared
Jan 13 21:24:53.144365 systemd[1]: issuegen.service: Deactivated successfully.
Jan 13 21:24:53.144592 systemd[1]: Finished issuegen.service - Generate /run/issue.
Jan 13 21:24:53.148297 systemd[1]: Starting systemd-user-sessions.service - Permit User Sessions...
Jan 13 21:24:53.198914 systemd[1]: Finished systemd-user-sessions.service - Permit User Sessions.
Jan 13 21:24:53.213357 systemd[1]: Started getty@tty1.service - Getty on tty1.
Jan 13 21:24:53.215933 systemd[1]: Started serial-getty@ttyS0.service - Serial Getty on ttyS0.
Jan 13 21:24:53.217271 systemd[1]: Reached target getty.target - Login Prompts.
Jan 13 21:24:53.235465 containerd[1474]: time="2025-01-13T21:24:53.235323064Z" level=info msg="loading plugin \"io.containerd.gc.v1.scheduler\"..." type=io.containerd.gc.v1
Jan 13 21:24:53.235465 containerd[1474]: time="2025-01-13T21:24:53.235414295Z" level=info msg="loading plugin \"io.containerd.differ.v1.walking\"..." type=io.containerd.differ.v1
Jan 13 21:24:53.235465 containerd[1474]: time="2025-01-13T21:24:53.235431197Z" level=info msg="loading plugin \"io.containerd.lease.v1.manager\"..." type=io.containerd.lease.v1
Jan 13 21:24:53.235465 containerd[1474]: time="2025-01-13T21:24:53.235446756Z" level=info msg="loading plugin \"io.containerd.streaming.v1.manager\"..." type=io.containerd.streaming.v1
Jan 13 21:24:53.235465 containerd[1474]: time="2025-01-13T21:24:53.235462155Z" level=info msg="loading plugin \"io.containerd.runtime.v1.linux\"..." type=io.containerd.runtime.v1
Jan 13 21:24:53.235669 containerd[1474]: time="2025-01-13T21:24:53.235646410Z" level=info msg="loading plugin \"io.containerd.monitor.v1.cgroups\"..." type=io.containerd.monitor.v1
Jan 13 21:24:53.235934 containerd[1474]: time="2025-01-13T21:24:53.235887042Z" level=info msg="loading plugin \"io.containerd.runtime.v2.task\"..." type=io.containerd.runtime.v2
Jan 13 21:24:53.236043 containerd[1474]: time="2025-01-13T21:24:53.236014721Z" level=info msg="loading plugin \"io.containerd.runtime.v2.shim\"..." type=io.containerd.runtime.v2
Jan 13 21:24:53.236043 containerd[1474]: time="2025-01-13T21:24:53.236032915Z" level=info msg="loading plugin \"io.containerd.sandbox.store.v1.local\"..." type=io.containerd.sandbox.store.v1
Jan 13 21:24:53.236100 containerd[1474]: time="2025-01-13T21:24:53.236044978Z" level=info msg="loading plugin \"io.containerd.sandbox.controller.v1.local\"..." type=io.containerd.sandbox.controller.v1
Jan 13 21:24:53.236100 containerd[1474]: time="2025-01-13T21:24:53.236057321Z" level=info msg="loading plugin \"io.containerd.service.v1.containers-service\"..." type=io.containerd.service.v1
Jan 13 21:24:53.236100 containerd[1474]: time="2025-01-13T21:24:53.236069464Z" level=info msg="loading plugin \"io.containerd.service.v1.content-service\"..." type=io.containerd.service.v1
Jan 13 21:24:53.236100 containerd[1474]: time="2025-01-13T21:24:53.236081015Z" level=info msg="loading plugin \"io.containerd.service.v1.diff-service\"..." type=io.containerd.service.v1
Jan 13 21:24:53.236100 containerd[1474]: time="2025-01-13T21:24:53.236094210Z" level=info msg="loading plugin \"io.containerd.service.v1.images-service\"..." type=io.containerd.service.v1
Jan 13 21:24:53.236206 containerd[1474]: time="2025-01-13T21:24:53.236107225Z" level=info msg="loading plugin \"io.containerd.service.v1.introspection-service\"..." type=io.containerd.service.v1
Jan 13 21:24:53.236206 containerd[1474]: time="2025-01-13T21:24:53.236121802Z" level=info msg="loading plugin \"io.containerd.service.v1.namespaces-service\"..." type=io.containerd.service.v1
Jan 13 21:24:53.236206 containerd[1474]: time="2025-01-13T21:24:53.236133033Z" level=info msg="loading plugin \"io.containerd.service.v1.snapshots-service\"..." type=io.containerd.service.v1
Jan 13 21:24:53.236206 containerd[1474]: time="2025-01-13T21:24:53.236144284Z" level=info msg="loading plugin \"io.containerd.service.v1.tasks-service\"..." type=io.containerd.service.v1
Jan 13 21:24:53.236206 containerd[1474]: time="2025-01-13T21:24:53.236162588Z" level=info msg="loading plugin \"io.containerd.grpc.v1.containers\"..." type=io.containerd.grpc.v1
Jan 13 21:24:53.236206 containerd[1474]: time="2025-01-13T21:24:53.236175352Z" level=info msg="loading plugin \"io.containerd.grpc.v1.content\"..." type=io.containerd.grpc.v1
Jan 13 21:24:53.236206 containerd[1474]: time="2025-01-13T21:24:53.236186563Z" level=info msg="loading plugin \"io.containerd.grpc.v1.diff\"..." type=io.containerd.grpc.v1
Jan 13 21:24:53.236206 containerd[1474]: time="2025-01-13T21:24:53.236197664Z" level=info msg="loading plugin \"io.containerd.grpc.v1.events\"..." type=io.containerd.grpc.v1
Jan 13 21:24:53.236206 containerd[1474]: time="2025-01-13T21:24:53.236209056Z" level=info msg="loading plugin \"io.containerd.grpc.v1.images\"..." type=io.containerd.grpc.v1
Jan 13 21:24:53.236376 containerd[1474]: time="2025-01-13T21:24:53.236221779Z" level=info msg="loading plugin \"io.containerd.grpc.v1.introspection\"..." type=io.containerd.grpc.v1
Jan 13 21:24:53.236376 containerd[1474]: time="2025-01-13T21:24:53.236232940Z" level=info msg="loading plugin \"io.containerd.grpc.v1.leases\"..." type=io.containerd.grpc.v1
Jan 13 21:24:53.236376 containerd[1474]: time="2025-01-13T21:24:53.236245023Z" level=info msg="loading plugin \"io.containerd.grpc.v1.namespaces\"..." type=io.containerd.grpc.v1
Jan 13 21:24:53.236376 containerd[1474]: time="2025-01-13T21:24:53.236256615Z" level=info msg="loading plugin \"io.containerd.grpc.v1.sandbox-controllers\"..." type=io.containerd.grpc.v1
Jan 13 21:24:53.236376 containerd[1474]: time="2025-01-13T21:24:53.236269930Z" level=info msg="loading plugin \"io.containerd.grpc.v1.sandboxes\"..." type=io.containerd.grpc.v1
Jan 13 21:24:53.236376 containerd[1474]: time="2025-01-13T21:24:53.236281091Z" level=info msg="loading plugin \"io.containerd.grpc.v1.snapshots\"..." type=io.containerd.grpc.v1
Jan 13 21:24:53.236376 containerd[1474]: time="2025-01-13T21:24:53.236292182Z" level=info msg="loading plugin \"io.containerd.grpc.v1.streaming\"..." type=io.containerd.grpc.v1
Jan 13 21:24:53.236376 containerd[1474]: time="2025-01-13T21:24:53.236303843Z" level=info msg="loading plugin \"io.containerd.grpc.v1.tasks\"..." type=io.containerd.grpc.v1
Jan 13 21:24:53.236376 containerd[1474]: time="2025-01-13T21:24:53.236322629Z" level=info msg="loading plugin \"io.containerd.transfer.v1.local\"..." type=io.containerd.transfer.v1
Jan 13 21:24:53.236376 containerd[1474]: time="2025-01-13T21:24:53.236341414Z" level=info msg="loading plugin \"io.containerd.grpc.v1.transfer\"..." type=io.containerd.grpc.v1
Jan 13 21:24:53.236376 containerd[1474]: time="2025-01-13T21:24:53.236351944Z" level=info msg="loading plugin \"io.containerd.grpc.v1.version\"..." type=io.containerd.grpc.v1
Jan 13 21:24:53.236376 containerd[1474]: time="2025-01-13T21:24:53.236362604Z" level=info msg="loading plugin \"io.containerd.internal.v1.restart\"..." type=io.containerd.internal.v1
Jan 13 21:24:53.236607 containerd[1474]: time="2025-01-13T21:24:53.236409311Z" level=info msg="loading plugin \"io.containerd.tracing.processor.v1.otlp\"..." type=io.containerd.tracing.processor.v1
Jan 13 21:24:53.236607 containerd[1474]: time="2025-01-13T21:24:53.236425401Z" level=info msg="skip loading plugin \"io.containerd.tracing.processor.v1.otlp\"..." error="skip plugin: tracing endpoint not configured" type=io.containerd.tracing.processor.v1
Jan 13 21:24:53.236607 containerd[1474]: time="2025-01-13T21:24:53.236435631Z" level=info msg="loading plugin \"io.containerd.internal.v1.tracing\"..." type=io.containerd.internal.v1
Jan 13 21:24:53.236607 containerd[1474]: time="2025-01-13T21:24:53.236446431Z" level=info msg="skip loading plugin \"io.containerd.internal.v1.tracing\"..." error="skip plugin: tracing endpoint not configured" type=io.containerd.internal.v1
Jan 13 21:24:53.236607 containerd[1474]: time="2025-01-13T21:24:53.236455889Z" level=info msg="loading plugin \"io.containerd.grpc.v1.healthcheck\"..." type=io.containerd.grpc.v1
Jan 13 21:24:53.236607 containerd[1474]: time="2025-01-13T21:24:53.236467430Z" level=info msg="loading plugin \"io.containerd.nri.v1.nri\"..." type=io.containerd.nri.v1
Jan 13 21:24:53.236607 containerd[1474]: time="2025-01-13T21:24:53.236477279Z" level=info msg="NRI interface is disabled by configuration."
Jan 13 21:24:53.236607 containerd[1474]: time="2025-01-13T21:24:53.236487919Z" level=info msg="loading plugin \"io.containerd.grpc.v1.cri\"...
type=io.containerd.grpc.v1 Jan 13 21:24:53.236783 containerd[1474]: time="2025-01-13T21:24:53.236714804Z" level=info msg="Start cri plugin with config {PluginConfig:{ContainerdConfig:{Snapshotter:overlayfs DefaultRuntimeName:runc DefaultRuntime:{Type: Path: Engine: PodAnnotations:[] ContainerAnnotations:[] Root: Options:map[] PrivilegedWithoutHostDevices:false PrivilegedWithoutHostDevicesAllDevicesAllowed:false BaseRuntimeSpec: NetworkPluginConfDir: NetworkPluginMaxConfNum:0 Snapshotter: SandboxMode:} UntrustedWorkloadRuntime:{Type: Path: Engine: PodAnnotations:[] ContainerAnnotations:[] Root: Options:map[] PrivilegedWithoutHostDevices:false PrivilegedWithoutHostDevicesAllDevicesAllowed:false BaseRuntimeSpec: NetworkPluginConfDir: NetworkPluginMaxConfNum:0 Snapshotter: SandboxMode:} Runtimes:map[runc:{Type:io.containerd.runc.v2 Path: Engine: PodAnnotations:[] ContainerAnnotations:[] Root: Options:map[SystemdCgroup:true] PrivilegedWithoutHostDevices:false PrivilegedWithoutHostDevicesAllDevicesAllowed:false BaseRuntimeSpec: NetworkPluginConfDir: NetworkPluginMaxConfNum:0 Snapshotter: SandboxMode:podsandbox}] NoPivot:false DisableSnapshotAnnotations:true DiscardUnpackedLayers:false IgnoreBlockIONotEnabledErrors:false IgnoreRdtNotEnabledErrors:false} CniConfig:{NetworkPluginBinDir:/opt/cni/bin NetworkPluginConfDir:/etc/cni/net.d NetworkPluginMaxConfNum:1 NetworkPluginSetupSerially:false NetworkPluginConfTemplate: IPPreference:} Registry:{ConfigPath: Mirrors:map[] Configs:map[] Auths:map[] Headers:map[]} ImageDecryption:{KeyModel:node} DisableTCPService:true StreamServerAddress:127.0.0.1 StreamServerPort:0 StreamIdleTimeout:4h0m0s EnableSelinux:true SelinuxCategoryRange:1024 SandboxImage:registry.k8s.io/pause:3.8 StatsCollectPeriod:10 SystemdCgroup:false EnableTLSStreaming:false X509KeyPairStreaming:{TLSCertFile: TLSKeyFile:} MaxContainerLogLineSize:16384 DisableCgroup:false DisableApparmor:false RestrictOOMScoreAdj:false MaxConcurrentDownloads:3 DisableProcMount:false 
UnsetSeccompProfile: TolerateMissingHugetlbController:true DisableHugetlbController:true DeviceOwnershipFromSecurityContext:false IgnoreImageDefinedVolumes:false NetNSMountsUnderStateDir:false EnableUnprivilegedPorts:false EnableUnprivilegedICMP:false EnableCDI:false CDISpecDirs:[/etc/cdi /var/run/cdi] ImagePullProgressTimeout:5m0s DrainExecSyncIOTimeout:0s ImagePullWithSyncFs:false IgnoreDeprecationWarnings:[]} ContainerdRootDir:/var/lib/containerd ContainerdEndpoint:/run/containerd/containerd.sock RootDir:/var/lib/containerd/io.containerd.grpc.v1.cri StateDir:/run/containerd/io.containerd.grpc.v1.cri}" Jan 13 21:24:53.236783 containerd[1474]: time="2025-01-13T21:24:53.236774807Z" level=info msg="Connect containerd service" Jan 13 21:24:53.237002 containerd[1474]: time="2025-01-13T21:24:53.236803751Z" level=info msg="using legacy CRI server" Jan 13 21:24:53.237002 containerd[1474]: time="2025-01-13T21:24:53.236810203Z" level=info msg="using experimental NRI integration - disable nri plugin to prevent this" Jan 13 21:24:53.237002 containerd[1474]: time="2025-01-13T21:24:53.236895503Z" level=info msg="Get image filesystem path \"/var/lib/containerd/io.containerd.snapshotter.v1.overlayfs\"" Jan 13 21:24:53.237550 containerd[1474]: time="2025-01-13T21:24:53.237511057Z" level=error msg="failed to load cni during init, please check CRI plugin status before setting up network for pods" error="cni config load failed: no network config found in /etc/cni/net.d: cni plugin not initialized: failed to load cni config" Jan 13 21:24:53.237692 containerd[1474]: time="2025-01-13T21:24:53.237632846Z" level=info msg="Start subscribing containerd event" Jan 13 21:24:53.237762 containerd[1474]: time="2025-01-13T21:24:53.237710291Z" level=info msg="Start recovering state" Jan 13 21:24:53.237870 containerd[1474]: time="2025-01-13T21:24:53.237851025Z" level=info msg="Start event monitor" Jan 13 21:24:53.237870 containerd[1474]: time="2025-01-13T21:24:53.237854982Z" level=info 
msg=serving... address=/run/containerd/containerd.sock.ttrpc Jan 13 21:24:53.237956 containerd[1474]: time="2025-01-13T21:24:53.237886842Z" level=info msg="Start snapshots syncer" Jan 13 21:24:53.237956 containerd[1474]: time="2025-01-13T21:24:53.237914714Z" level=info msg="Start cni network conf syncer for default" Jan 13 21:24:53.237956 containerd[1474]: time="2025-01-13T21:24:53.237925224Z" level=info msg="Start streaming server" Jan 13 21:24:53.238044 containerd[1474]: time="2025-01-13T21:24:53.237957234Z" level=info msg=serving... address=/run/containerd/containerd.sock Jan 13 21:24:53.238193 systemd[1]: Started containerd.service - containerd container runtime. Jan 13 21:24:53.238402 containerd[1474]: time="2025-01-13T21:24:53.238048746Z" level=info msg="containerd successfully booted in 0.129074s" Jan 13 21:24:53.533210 systemd[1]: Created slice system-sshd.slice - Slice /system/sshd. Jan 13 21:24:53.536018 systemd[1]: Started sshd@0-10.0.0.88:22-10.0.0.1:46850.service - OpenSSH per-connection server daemon (10.0.0.1:46850). Jan 13 21:24:53.583595 sshd[1528]: Accepted publickey for core from 10.0.0.1 port 46850 ssh2: RSA SHA256:PBaQDD+CxAT8qBP6F3GfAXEs6QYYpXiSdS98dxxqdPI Jan 13 21:24:53.585710 sshd[1528]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Jan 13 21:24:53.594462 systemd-logind[1450]: New session 1 of user core. Jan 13 21:24:53.595827 systemd[1]: Created slice user-500.slice - User Slice of UID 500. Jan 13 21:24:53.605106 systemd[1]: Starting user-runtime-dir@500.service - User Runtime Directory /run/user/500... Jan 13 21:24:53.671112 systemd[1]: Finished user-runtime-dir@500.service - User Runtime Directory /run/user/500. Jan 13 21:24:53.683174 systemd[1]: Starting user@500.service - User Manager for UID 500... 
Jan 13 21:24:53.720105 (systemd)[1532]: pam_unix(systemd-user:session): session opened for user core(uid=500) by (uid=0) Jan 13 21:24:53.834203 systemd[1532]: Queued start job for default target default.target. Jan 13 21:24:53.844249 systemd[1532]: Created slice app.slice - User Application Slice. Jan 13 21:24:53.844278 systemd[1532]: Reached target paths.target - Paths. Jan 13 21:24:53.844292 systemd[1532]: Reached target timers.target - Timers. Jan 13 21:24:53.845863 systemd[1532]: Starting dbus.socket - D-Bus User Message Bus Socket... Jan 13 21:24:53.856932 systemd[1532]: Listening on dbus.socket - D-Bus User Message Bus Socket. Jan 13 21:24:53.857085 systemd[1532]: Reached target sockets.target - Sockets. Jan 13 21:24:53.857108 systemd[1532]: Reached target basic.target - Basic System. Jan 13 21:24:53.857154 systemd[1532]: Reached target default.target - Main User Target. Jan 13 21:24:53.857197 systemd[1532]: Startup finished in 129ms. Jan 13 21:24:53.857516 systemd[1]: Started user@500.service - User Manager for UID 500. Jan 13 21:24:53.877692 systemd[1]: Started session-1.scope - Session 1 of User core. Jan 13 21:24:53.942761 systemd[1]: Started sshd@1-10.0.0.88:22-10.0.0.1:46864.service - OpenSSH per-connection server daemon (10.0.0.1:46864). Jan 13 21:24:53.979405 sshd[1543]: Accepted publickey for core from 10.0.0.1 port 46864 ssh2: RSA SHA256:PBaQDD+CxAT8qBP6F3GfAXEs6QYYpXiSdS98dxxqdPI Jan 13 21:24:53.981005 sshd[1543]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Jan 13 21:24:53.985651 systemd-logind[1450]: New session 2 of user core. Jan 13 21:24:53.999167 systemd[1]: Started session-2.scope - Session 2 of User core. Jan 13 21:24:54.055721 sshd[1543]: pam_unix(sshd:session): session closed for user core Jan 13 21:24:54.062772 systemd[1]: sshd@1-10.0.0.88:22-10.0.0.1:46864.service: Deactivated successfully. Jan 13 21:24:54.064413 systemd[1]: session-2.scope: Deactivated successfully. 
Jan 13 21:24:54.066060 systemd-logind[1450]: Session 2 logged out. Waiting for processes to exit. Jan 13 21:24:54.077340 systemd[1]: Started sshd@2-10.0.0.88:22-10.0.0.1:46874.service - OpenSSH per-connection server daemon (10.0.0.1:46874). Jan 13 21:24:54.080030 systemd-logind[1450]: Removed session 2. Jan 13 21:24:54.111014 sshd[1550]: Accepted publickey for core from 10.0.0.1 port 46874 ssh2: RSA SHA256:PBaQDD+CxAT8qBP6F3GfAXEs6QYYpXiSdS98dxxqdPI Jan 13 21:24:54.112754 sshd[1550]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Jan 13 21:24:54.116840 systemd-logind[1450]: New session 3 of user core. Jan 13 21:24:54.126046 systemd[1]: Started session-3.scope - Session 3 of User core. Jan 13 21:24:54.182005 sshd[1550]: pam_unix(sshd:session): session closed for user core Jan 13 21:24:54.186230 systemd[1]: sshd@2-10.0.0.88:22-10.0.0.1:46874.service: Deactivated successfully. Jan 13 21:24:54.187994 systemd[1]: session-3.scope: Deactivated successfully. Jan 13 21:24:54.188613 systemd-logind[1450]: Session 3 logged out. Waiting for processes to exit. Jan 13 21:24:54.189404 systemd-logind[1450]: Removed session 3. Jan 13 21:24:54.402158 systemd-networkd[1406]: eth0: Gained IPv6LL Jan 13 21:24:54.406013 systemd[1]: Finished systemd-networkd-wait-online.service - Wait for Network to be Configured. Jan 13 21:24:54.407995 systemd[1]: Reached target network-online.target - Network is Online. Jan 13 21:24:54.423149 systemd[1]: Starting coreos-metadata.service - QEMU metadata agent... Jan 13 21:24:54.426194 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent... Jan 13 21:24:54.428935 systemd[1]: Starting nvidia.service - NVIDIA Configure Service... Jan 13 21:24:54.452592 systemd[1]: coreos-metadata.service: Deactivated successfully. Jan 13 21:24:54.452914 systemd[1]: Finished coreos-metadata.service - QEMU metadata agent. Jan 13 21:24:54.454767 systemd[1]: Finished nvidia.service - NVIDIA Configure Service. 
Jan 13 21:24:54.457251 systemd[1]: packet-phone-home.service - Report Success to Packet was skipped because no trigger condition checks were met. Jan 13 21:24:55.109161 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent. Jan 13 21:24:55.110930 systemd[1]: Reached target multi-user.target - Multi-User System. Jan 13 21:24:55.112988 systemd[1]: Startup finished in 722ms (kernel) + 5.070s (initrd) + 4.510s (userspace) = 10.303s. Jan 13 21:24:55.115633 (kubelet)[1578]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS, KUBELET_KUBEADM_ARGS Jan 13 21:24:55.505432 kubelet[1578]: E0113 21:24:55.505258 1578 run.go:72] "command failed" err="failed to load kubelet config file, path: /var/lib/kubelet/config.yaml, error: failed to load Kubelet config file /var/lib/kubelet/config.yaml, error failed to read kubelet config file \"/var/lib/kubelet/config.yaml\", error: open /var/lib/kubelet/config.yaml: no such file or directory" Jan 13 21:24:55.509068 systemd[1]: kubelet.service: Main process exited, code=exited, status=1/FAILURE Jan 13 21:24:55.509278 systemd[1]: kubelet.service: Failed with result 'exit-code'. Jan 13 21:25:04.192450 systemd[1]: Started sshd@3-10.0.0.88:22-10.0.0.1:48304.service - OpenSSH per-connection server daemon (10.0.0.1:48304). Jan 13 21:25:04.231843 sshd[1591]: Accepted publickey for core from 10.0.0.1 port 48304 ssh2: RSA SHA256:PBaQDD+CxAT8qBP6F3GfAXEs6QYYpXiSdS98dxxqdPI Jan 13 21:25:04.233475 sshd[1591]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Jan 13 21:25:04.237238 systemd-logind[1450]: New session 4 of user core. Jan 13 21:25:04.247173 systemd[1]: Started session-4.scope - Session 4 of User core. Jan 13 21:25:04.302045 sshd[1591]: pam_unix(sshd:session): session closed for user core Jan 13 21:25:04.312456 systemd[1]: sshd@3-10.0.0.88:22-10.0.0.1:48304.service: Deactivated successfully. 
Jan 13 21:25:04.314203 systemd[1]: session-4.scope: Deactivated successfully. Jan 13 21:25:04.315650 systemd-logind[1450]: Session 4 logged out. Waiting for processes to exit. Jan 13 21:25:04.328141 systemd[1]: Started sshd@4-10.0.0.88:22-10.0.0.1:48306.service - OpenSSH per-connection server daemon (10.0.0.1:48306). Jan 13 21:25:04.329093 systemd-logind[1450]: Removed session 4. Jan 13 21:25:04.359599 sshd[1598]: Accepted publickey for core from 10.0.0.1 port 48306 ssh2: RSA SHA256:PBaQDD+CxAT8qBP6F3GfAXEs6QYYpXiSdS98dxxqdPI Jan 13 21:25:04.361502 sshd[1598]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Jan 13 21:25:04.366104 systemd-logind[1450]: New session 5 of user core. Jan 13 21:25:04.376049 systemd[1]: Started session-5.scope - Session 5 of User core. Jan 13 21:25:04.427645 sshd[1598]: pam_unix(sshd:session): session closed for user core Jan 13 21:25:04.441337 systemd[1]: sshd@4-10.0.0.88:22-10.0.0.1:48306.service: Deactivated successfully. Jan 13 21:25:04.443778 systemd[1]: session-5.scope: Deactivated successfully. Jan 13 21:25:04.445783 systemd-logind[1450]: Session 5 logged out. Waiting for processes to exit. Jan 13 21:25:04.456316 systemd[1]: Started sshd@5-10.0.0.88:22-10.0.0.1:48322.service - OpenSSH per-connection server daemon (10.0.0.1:48322). Jan 13 21:25:04.457611 systemd-logind[1450]: Removed session 5. Jan 13 21:25:04.493159 sshd[1605]: Accepted publickey for core from 10.0.0.1 port 48322 ssh2: RSA SHA256:PBaQDD+CxAT8qBP6F3GfAXEs6QYYpXiSdS98dxxqdPI Jan 13 21:25:04.495298 sshd[1605]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Jan 13 21:25:04.499678 systemd-logind[1450]: New session 6 of user core. Jan 13 21:25:04.508060 systemd[1]: Started session-6.scope - Session 6 of User core. Jan 13 21:25:04.562834 sshd[1605]: pam_unix(sshd:session): session closed for user core Jan 13 21:25:04.571434 systemd[1]: sshd@5-10.0.0.88:22-10.0.0.1:48322.service: Deactivated successfully. 
Jan 13 21:25:04.573362 systemd[1]: session-6.scope: Deactivated successfully. Jan 13 21:25:04.574996 systemd-logind[1450]: Session 6 logged out. Waiting for processes to exit. Jan 13 21:25:04.585179 systemd[1]: Started sshd@6-10.0.0.88:22-10.0.0.1:48330.service - OpenSSH per-connection server daemon (10.0.0.1:48330). Jan 13 21:25:04.586524 systemd-logind[1450]: Removed session 6. Jan 13 21:25:04.619322 sshd[1612]: Accepted publickey for core from 10.0.0.1 port 48330 ssh2: RSA SHA256:PBaQDD+CxAT8qBP6F3GfAXEs6QYYpXiSdS98dxxqdPI Jan 13 21:25:04.621181 sshd[1612]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Jan 13 21:25:04.625229 systemd-logind[1450]: New session 7 of user core. Jan 13 21:25:04.635052 systemd[1]: Started session-7.scope - Session 7 of User core. Jan 13 21:25:04.693630 sudo[1615]: core : PWD=/home/core ; USER=root ; COMMAND=/usr/sbin/setenforce 1 Jan 13 21:25:04.693986 sudo[1615]: pam_unix(sudo:session): session opened for user root(uid=0) by core(uid=500) Jan 13 21:25:05.020178 sudo[1615]: pam_unix(sudo:session): session closed for user root Jan 13 21:25:05.022345 sshd[1612]: pam_unix(sshd:session): session closed for user core Jan 13 21:25:05.038958 systemd[1]: sshd@6-10.0.0.88:22-10.0.0.1:48330.service: Deactivated successfully. Jan 13 21:25:05.040833 systemd[1]: session-7.scope: Deactivated successfully. Jan 13 21:25:05.042720 systemd-logind[1450]: Session 7 logged out. Waiting for processes to exit. Jan 13 21:25:05.052174 systemd[1]: Started sshd@7-10.0.0.88:22-10.0.0.1:36116.service - OpenSSH per-connection server daemon (10.0.0.1:36116). Jan 13 21:25:05.053109 systemd-logind[1450]: Removed session 7. 
Jan 13 21:25:05.083487 sshd[1620]: Accepted publickey for core from 10.0.0.1 port 36116 ssh2: RSA SHA256:PBaQDD+CxAT8qBP6F3GfAXEs6QYYpXiSdS98dxxqdPI Jan 13 21:25:05.085045 sshd[1620]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Jan 13 21:25:05.089148 systemd-logind[1450]: New session 8 of user core. Jan 13 21:25:05.110088 systemd[1]: Started session-8.scope - Session 8 of User core. Jan 13 21:25:05.164426 sudo[1624]: core : PWD=/home/core ; USER=root ; COMMAND=/usr/bin/rm -rf /etc/audit/rules.d/80-selinux.rules /etc/audit/rules.d/99-default.rules Jan 13 21:25:05.164780 sudo[1624]: pam_unix(sudo:session): session opened for user root(uid=0) by core(uid=500) Jan 13 21:25:05.168474 sudo[1624]: pam_unix(sudo:session): session closed for user root Jan 13 21:25:05.174707 sudo[1623]: core : PWD=/home/core ; USER=root ; COMMAND=/usr/bin/systemctl restart audit-rules Jan 13 21:25:05.175089 sudo[1623]: pam_unix(sudo:session): session opened for user root(uid=0) by core(uid=500) Jan 13 21:25:05.200128 systemd[1]: Stopping audit-rules.service - Load Security Auditing Rules... Jan 13 21:25:05.201897 auditctl[1627]: No rules Jan 13 21:25:05.203279 systemd[1]: audit-rules.service: Deactivated successfully. Jan 13 21:25:05.203559 systemd[1]: Stopped audit-rules.service - Load Security Auditing Rules. Jan 13 21:25:05.205410 systemd[1]: Starting audit-rules.service - Load Security Auditing Rules... Jan 13 21:25:05.235492 augenrules[1645]: No rules Jan 13 21:25:05.237282 systemd[1]: Finished audit-rules.service - Load Security Auditing Rules. Jan 13 21:25:05.238596 sudo[1623]: pam_unix(sudo:session): session closed for user root Jan 13 21:25:05.240593 sshd[1620]: pam_unix(sshd:session): session closed for user core Jan 13 21:25:05.247415 systemd[1]: sshd@7-10.0.0.88:22-10.0.0.1:36116.service: Deactivated successfully. Jan 13 21:25:05.249324 systemd[1]: session-8.scope: Deactivated successfully. 
Jan 13 21:25:05.250842 systemd-logind[1450]: Session 8 logged out. Waiting for processes to exit. Jan 13 21:25:05.261152 systemd[1]: Started sshd@8-10.0.0.88:22-10.0.0.1:36118.service - OpenSSH per-connection server daemon (10.0.0.1:36118). Jan 13 21:25:05.262168 systemd-logind[1450]: Removed session 8. Jan 13 21:25:05.292496 sshd[1653]: Accepted publickey for core from 10.0.0.1 port 36118 ssh2: RSA SHA256:PBaQDD+CxAT8qBP6F3GfAXEs6QYYpXiSdS98dxxqdPI Jan 13 21:25:05.294087 sshd[1653]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Jan 13 21:25:05.297974 systemd-logind[1450]: New session 9 of user core. Jan 13 21:25:05.305063 systemd[1]: Started session-9.scope - Session 9 of User core. Jan 13 21:25:05.357783 sudo[1656]: core : PWD=/home/core ; USER=root ; COMMAND=/home/core/install.sh Jan 13 21:25:05.358134 sudo[1656]: pam_unix(sudo:session): session opened for user root(uid=0) by core(uid=500) Jan 13 21:25:05.384173 systemd[1]: Starting coreos-metadata.service - QEMU metadata agent... Jan 13 21:25:05.403465 systemd[1]: coreos-metadata.service: Deactivated successfully. Jan 13 21:25:05.403726 systemd[1]: Finished coreos-metadata.service - QEMU metadata agent. Jan 13 21:25:05.589215 systemd[1]: kubelet.service: Scheduled restart job, restart counter is at 1. Jan 13 21:25:05.607164 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent... Jan 13 21:25:05.794169 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent. 
Jan 13 21:25:05.794413 (kubelet)[1695]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS, KUBELET_KUBEADM_ARGS Jan 13 21:25:05.846637 kubelet[1695]: E0113 21:25:05.846480 1695 run.go:72] "command failed" err="failed to load kubelet config file, path: /var/lib/kubelet/config.yaml, error: failed to load Kubelet config file /var/lib/kubelet/config.yaml, error failed to read kubelet config file \"/var/lib/kubelet/config.yaml\", error: open /var/lib/kubelet/config.yaml: no such file or directory" Jan 13 21:25:05.851145 systemd[1]: kubelet.service: Main process exited, code=exited, status=1/FAILURE Jan 13 21:25:05.851422 systemd[1]: kubelet.service: Failed with result 'exit-code'. Jan 13 21:25:05.880154 systemd[1]: Stopped kubelet.service - kubelet: The Kubernetes Node Agent. Jan 13 21:25:05.894271 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent... Jan 13 21:25:05.920566 systemd[1]: Reloading requested from client PID 1712 ('systemctl') (unit session-9.scope)... Jan 13 21:25:05.920583 systemd[1]: Reloading... Jan 13 21:25:06.008657 zram_generator::config[1753]: No configuration found. Jan 13 21:25:06.821010 systemd[1]: /usr/lib/systemd/system/docker.socket:6: ListenStream= references a path below legacy directory /var/run/, updating /var/run/docker.sock → /run/docker.sock; please update the unit file accordingly. Jan 13 21:25:06.896971 systemd[1]: Reloading finished in 975 ms. Jan 13 21:25:06.949186 systemd[1]: kubelet.service: Control process exited, code=killed, status=15/TERM Jan 13 21:25:06.949305 systemd[1]: kubelet.service: Failed with result 'signal'. Jan 13 21:25:06.949598 systemd[1]: Stopped kubelet.service - kubelet: The Kubernetes Node Agent. Jan 13 21:25:06.952322 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent... Jan 13 21:25:07.104349 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent. 
Jan 13 21:25:07.109302 (kubelet)[1799]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS Jan 13 21:25:07.147331 kubelet[1799]: Flag --container-runtime-endpoint has been deprecated, This parameter should be set via the config file specified by the Kubelet's --config flag. See https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/ for more information. Jan 13 21:25:07.147331 kubelet[1799]: Flag --pod-infra-container-image has been deprecated, will be removed in a future release. Image garbage collector will get sandbox image information from CRI. Jan 13 21:25:07.147331 kubelet[1799]: Flag --volume-plugin-dir has been deprecated, This parameter should be set via the config file specified by the Kubelet's --config flag. See https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/ for more information. Jan 13 21:25:07.147693 kubelet[1799]: I0113 21:25:07.147417 1799 server.go:206] "--pod-infra-container-image will not be pruned by the image garbage collector in kubelet and should also be set in the remote runtime" Jan 13 21:25:07.601736 kubelet[1799]: I0113 21:25:07.601702 1799 server.go:486] "Kubelet version" kubeletVersion="v1.31.0" Jan 13 21:25:07.601736 kubelet[1799]: I0113 21:25:07.601731 1799 server.go:488] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK="" Jan 13 21:25:07.601996 kubelet[1799]: I0113 21:25:07.601979 1799 server.go:929] "Client rotation is on, will bootstrap in background" Jan 13 21:25:07.619990 kubelet[1799]: I0113 21:25:07.619940 1799 dynamic_cafile_content.go:160] "Starting controller" name="client-ca-bundle::/etc/kubernetes/pki/ca.crt" Jan 13 21:25:07.625172 kubelet[1799]: E0113 21:25:07.625123 1799 log.go:32] "RuntimeConfig from runtime service failed" err="rpc error: code = Unimplemented desc = unknown method RuntimeConfig for service runtime.v1.RuntimeService" Jan 13 21:25:07.625172 kubelet[1799]: I0113 21:25:07.625169 1799 server.go:1403] "CRI 
implementation should be updated to support RuntimeConfig when KubeletCgroupDriverFromCRI feature gate has been enabled. Falling back to using cgroupDriver from kubelet config." Jan 13 21:25:07.632801 kubelet[1799]: I0113 21:25:07.632780 1799 server.go:744] "--cgroups-per-qos enabled, but --cgroup-root was not specified. defaulting to /" Jan 13 21:25:07.633913 kubelet[1799]: I0113 21:25:07.633874 1799 swap_util.go:113] "Swap is on" /proc/swaps contents="Filename\t\t\t\tType\t\tSize\t\tUsed\t\tPriority" Jan 13 21:25:07.634104 kubelet[1799]: I0113 21:25:07.634064 1799 container_manager_linux.go:264] "Container manager verified user specified cgroup-root exists" cgroupRoot=[] Jan 13 21:25:07.634304 kubelet[1799]: I0113 21:25:07.634093 1799 container_manager_linux.go:269] "Creating Container Manager object based on Node Config" nodeConfig={"NodeName":"10.0.0.88","RuntimeCgroupsName":"","SystemCgroupsName":"","KubeletCgroupsName":"","KubeletOOMScoreAdj":-999,"ContainerRuntime":"","CgroupsPerQOS":true,"CgroupRoot":"/","CgroupDriver":"systemd","KubeletRootDir":"/var/lib/kubelet","ProtectKernelDefaults":false,"KubeReservedCgroupName":"","SystemReservedCgroupName":"","ReservedSystemCPUs":{},"EnforceNodeAllocatable":{"pods":{}},"KubeReserved":null,"SystemReserved":null,"HardEvictionThresholds":[{"Signal":"memory.available","Operator":"LessThan","Value":{"Quantity":"100Mi","Percentage":0},"GracePeriod":0,"MinReclaim":null},{"Signal":"nodefs.available","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.1},"GracePeriod":0,"MinReclaim":null},{"Signal":"nodefs.inodesFree","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.05},"GracePeriod":0,"MinReclaim":null},{"Signal":"imagefs.available","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.15},"GracePeriod":0,"MinReclaim":null},{"Signal":"imagefs.inodesFree","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.05},"GracePeriod":0,"MinReclaim":null}],"QOSReserved":{},"CPUManagerPolicy":"non
e","CPUManagerPolicyOptions":null,"TopologyManagerScope":"container","CPUManagerReconcilePeriod":10000000000,"ExperimentalMemoryManagerPolicy":"None","ExperimentalMemoryManagerReservedMemory":null,"PodPidsLimit":-1,"EnforceCPULimits":true,"CPUCFSQuotaPeriod":100000000,"TopologyManagerPolicy":"none","TopologyManagerPolicyOptions":null,"CgroupVersion":2} Jan 13 21:25:07.634399 kubelet[1799]: I0113 21:25:07.634314 1799 topology_manager.go:138] "Creating topology manager with none policy" Jan 13 21:25:07.634399 kubelet[1799]: I0113 21:25:07.634325 1799 container_manager_linux.go:300] "Creating device plugin manager" Jan 13 21:25:07.634496 kubelet[1799]: I0113 21:25:07.634462 1799 state_mem.go:36] "Initialized new in-memory state store" Jan 13 21:25:07.635831 kubelet[1799]: I0113 21:25:07.635802 1799 kubelet.go:408] "Attempting to sync node with API server" Jan 13 21:25:07.635831 kubelet[1799]: I0113 21:25:07.635824 1799 kubelet.go:303] "Adding static pod path" path="/etc/kubernetes/manifests" Jan 13 21:25:07.637964 kubelet[1799]: I0113 21:25:07.637507 1799 kubelet.go:314] "Adding apiserver pod source" Jan 13 21:25:07.637964 kubelet[1799]: I0113 21:25:07.637540 1799 apiserver.go:42] "Waiting for node sync before watching apiserver pods" Jan 13 21:25:07.637964 kubelet[1799]: E0113 21:25:07.637743 1799 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Jan 13 21:25:07.637964 kubelet[1799]: E0113 21:25:07.637786 1799 file.go:98] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Jan 13 21:25:07.642711 kubelet[1799]: I0113 21:25:07.642683 1799 kuberuntime_manager.go:262] "Container runtime initialized" containerRuntime="containerd" version="v1.7.21" apiVersion="v1" Jan 13 21:25:07.643061 kubelet[1799]: W0113 21:25:07.642982 1799 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Node: nodes "10.0.0.88" is forbidden: User 
"system:anonymous" cannot list resource "nodes" in API group "" at the cluster scope Jan 13 21:25:07.643061 kubelet[1799]: E0113 21:25:07.643022 1799 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Node: failed to list *v1.Node: nodes \"10.0.0.88\" is forbidden: User \"system:anonymous\" cannot list resource \"nodes\" in API group \"\" at the cluster scope" logger="UnhandledError" Jan 13 21:25:07.643136 kubelet[1799]: W0113 21:25:07.643116 1799 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Service: services is forbidden: User "system:anonymous" cannot list resource "services" in API group "" at the cluster scope Jan 13 21:25:07.643161 kubelet[1799]: E0113 21:25:07.643142 1799 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Service: failed to list *v1.Service: services is forbidden: User \"system:anonymous\" cannot list resource \"services\" in API group \"\" at the cluster scope" logger="UnhandledError" Jan 13 21:25:07.644046 kubelet[1799]: I0113 21:25:07.644028 1799 kubelet.go:837] "Not starting ClusterTrustBundle informer because we are in static kubelet mode" Jan 13 21:25:07.644488 kubelet[1799]: W0113 21:25:07.644457 1799 probe.go:272] Flexvolume plugin directory at /opt/libexec/kubernetes/kubelet-plugins/volume/exec/ does not exist. Recreating. 
Jan 13 21:25:07.645069 kubelet[1799]: I0113 21:25:07.645046 1799 server.go:1269] "Started kubelet" Jan 13 21:25:07.646387 kubelet[1799]: I0113 21:25:07.646188 1799 fs_resource_analyzer.go:67] "Starting FS ResourceAnalyzer" Jan 13 21:25:07.646387 kubelet[1799]: I0113 21:25:07.646298 1799 server.go:163] "Starting to listen" address="0.0.0.0" port=10250 Jan 13 21:25:07.647223 kubelet[1799]: I0113 21:25:07.647193 1799 server.go:460] "Adding debug handlers to kubelet server" Jan 13 21:25:07.647795 kubelet[1799]: I0113 21:25:07.647421 1799 ratelimit.go:55] "Setting rate limiting for endpoint" service="podresources" qps=100 burstTokens=10 Jan 13 21:25:07.647795 kubelet[1799]: I0113 21:25:07.647668 1799 server.go:236] "Starting to serve the podresources API" endpoint="unix:/var/lib/kubelet/pod-resources/kubelet.sock" Jan 13 21:25:07.648079 kubelet[1799]: I0113 21:25:07.648060 1799 dynamic_serving_content.go:135] "Starting controller" name="kubelet-server-cert-files::/var/lib/kubelet/pki/kubelet.crt::/var/lib/kubelet/pki/kubelet.key" Jan 13 21:25:07.649525 kubelet[1799]: E0113 21:25:07.649499 1799 kubelet_node_status.go:453] "Error getting the current node from lister" err="node \"10.0.0.88\" not found" Jan 13 21:25:07.649570 kubelet[1799]: I0113 21:25:07.649534 1799 volume_manager.go:289] "Starting Kubelet Volume Manager" Jan 13 21:25:07.649703 kubelet[1799]: I0113 21:25:07.649683 1799 desired_state_of_world_populator.go:146] "Desired state populator starts to run" Jan 13 21:25:07.649758 kubelet[1799]: I0113 21:25:07.649741 1799 reconciler.go:26] "Reconciler: start to sync state" Jan 13 21:25:07.650367 kubelet[1799]: I0113 21:25:07.650339 1799 factory.go:221] Registration of the systemd container factory successfully Jan 13 21:25:07.650445 kubelet[1799]: E0113 21:25:07.650427 1799 kubelet.go:1478] "Image garbage collection failed once. 
Stats initialization may not have completed yet" err="invalid capacity 0 on image filesystem" Jan 13 21:25:07.650445 kubelet[1799]: I0113 21:25:07.650432 1799 factory.go:219] Registration of the crio container factory failed: Get "http://%2Fvar%2Frun%2Fcrio%2Fcrio.sock/info": dial unix /var/run/crio/crio.sock: connect: no such file or directory Jan 13 21:25:07.651343 kubelet[1799]: I0113 21:25:07.651327 1799 factory.go:221] Registration of the containerd container factory successfully Jan 13 21:25:07.663978 kubelet[1799]: W0113 21:25:07.662278 1799 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.CSIDriver: csidrivers.storage.k8s.io is forbidden: User "system:anonymous" cannot list resource "csidrivers" in API group "storage.k8s.io" at the cluster scope Jan 13 21:25:07.663978 kubelet[1799]: E0113 21:25:07.662305 1799 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.CSIDriver: failed to list *v1.CSIDriver: csidrivers.storage.k8s.io is forbidden: User \"system:anonymous\" cannot list resource \"csidrivers\" in API group \"storage.k8s.io\" at the cluster scope" logger="UnhandledError" Jan 13 21:25:07.663978 kubelet[1799]: E0113 21:25:07.662345 1799 controller.go:145] "Failed to ensure lease exists, will retry" err="leases.coordination.k8s.io \"10.0.0.88\" is forbidden: User \"system:anonymous\" cannot get resource \"leases\" in API group \"coordination.k8s.io\" in the namespace \"kube-node-lease\"" interval="200ms" Jan 13 21:25:07.663978 kubelet[1799]: I0113 21:25:07.663202 1799 cpu_manager.go:214] "Starting CPU manager" policy="none" Jan 13 21:25:07.663978 kubelet[1799]: I0113 21:25:07.663211 1799 cpu_manager.go:215] "Reconciling" reconcilePeriod="10s" Jan 13 21:25:07.663978 kubelet[1799]: I0113 21:25:07.663226 1799 state_mem.go:36] "Initialized new in-memory state store" Jan 13 21:25:07.664226 kubelet[1799]: E0113 21:25:07.662248 1799 event.go:359] "Server rejected event (will not 
retry!)" err="events is forbidden: User \"system:anonymous\" cannot create resource \"events\" in API group \"\" in the namespace \"default\"" event="&Event{ObjectMeta:{10.0.0.88.181a5d9695d58cd2 default 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Node,Namespace:,Name:10.0.0.88,UID:10.0.0.88,APIVersion:,ResourceVersion:,FieldPath:,},Reason:Starting,Message:Starting kubelet.,Source:EventSource{Component:kubelet,Host:10.0.0.88,},FirstTimestamp:2025-01-13 21:25:07.64502549 +0000 UTC m=+0.530582783,LastTimestamp:2025-01-13 21:25:07.64502549 +0000 UTC m=+0.530582783,Count:1,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:10.0.0.88,}" Jan 13 21:25:07.665311 kubelet[1799]: E0113 21:25:07.665198 1799 event.go:359] "Server rejected event (will not retry!)" err="events is forbidden: User \"system:anonymous\" cannot create resource \"events\" in API group \"\" in the namespace \"default\"" event="&Event{ObjectMeta:{10.0.0.88.181a5d969627d502 default 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Node,Namespace:,Name:10.0.0.88,UID:10.0.0.88,APIVersion:,ResourceVersion:,FieldPath:,},Reason:InvalidDiskCapacity,Message:invalid capacity 0 on image filesystem,Source:EventSource{Component:kubelet,Host:10.0.0.88,},FirstTimestamp:2025-01-13 21:25:07.650417922 +0000 UTC m=+0.535975215,LastTimestamp:2025-01-13 21:25:07.650417922 +0000 UTC m=+0.535975215,Count:1,Type:Warning,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:10.0.0.88,}" Jan 13 21:25:07.750559 kubelet[1799]: E0113 21:25:07.750507 1799 kubelet_node_status.go:453] "Error getting the current node from lister" err="node \"10.0.0.88\" not found" Jan 13 21:25:07.850934 kubelet[1799]: E0113 21:25:07.850874 1799 kubelet_node_status.go:453] "Error getting the current node from lister" err="node 
\"10.0.0.88\" not found" Jan 13 21:25:07.869868 kubelet[1799]: E0113 21:25:07.869745 1799 nodelease.go:49] "Failed to get node when trying to set owner ref to the node lease" err="nodes \"10.0.0.88\" not found" node="10.0.0.88" Jan 13 21:25:07.951064 kubelet[1799]: E0113 21:25:07.950995 1799 kubelet_node_status.go:453] "Error getting the current node from lister" err="node \"10.0.0.88\" not found" Jan 13 21:25:08.018616 kubelet[1799]: I0113 21:25:08.018576 1799 policy_none.go:49] "None policy: Start" Jan 13 21:25:08.019444 kubelet[1799]: I0113 21:25:08.019415 1799 memory_manager.go:170] "Starting memorymanager" policy="None" Jan 13 21:25:08.019493 kubelet[1799]: I0113 21:25:08.019453 1799 state_mem.go:35] "Initializing new in-memory state store" Jan 13 21:25:08.029077 systemd[1]: Created slice kubepods.slice - libcontainer container kubepods.slice. Jan 13 21:25:08.044306 systemd[1]: Created slice kubepods-burstable.slice - libcontainer container kubepods-burstable.slice. Jan 13 21:25:08.047838 systemd[1]: Created slice kubepods-besteffort.slice - libcontainer container kubepods-besteffort.slice. Jan 13 21:25:08.048365 kubelet[1799]: I0113 21:25:08.048325 1799 kubelet_network_linux.go:50] "Initialized iptables rules." protocol="IPv4" Jan 13 21:25:08.049507 kubelet[1799]: I0113 21:25:08.049481 1799 kubelet_network_linux.go:50] "Initialized iptables rules." 
protocol="IPv6" Jan 13 21:25:08.049554 kubelet[1799]: I0113 21:25:08.049521 1799 status_manager.go:217] "Starting to sync pod status with apiserver" Jan 13 21:25:08.049554 kubelet[1799]: I0113 21:25:08.049541 1799 kubelet.go:2321] "Starting kubelet main sync loop" Jan 13 21:25:08.050034 kubelet[1799]: E0113 21:25:08.049648 1799 kubelet.go:2345] "Skipping pod synchronization" err="[container runtime status check may not have completed yet, PLEG is not healthy: pleg has yet to be successful]" Jan 13 21:25:08.051114 kubelet[1799]: E0113 21:25:08.051097 1799 kubelet_node_status.go:453] "Error getting the current node from lister" err="node \"10.0.0.88\" not found" Jan 13 21:25:08.053164 kubelet[1799]: I0113 21:25:08.053011 1799 manager.go:510] "Failed to read data from checkpoint" checkpoint="kubelet_internal_checkpoint" err="checkpoint is not found" Jan 13 21:25:08.053257 kubelet[1799]: I0113 21:25:08.053200 1799 eviction_manager.go:189] "Eviction manager: starting control loop" Jan 13 21:25:08.053257 kubelet[1799]: I0113 21:25:08.053210 1799 container_log_manager.go:189] "Initializing container log rotate workers" workers=1 monitorPeriod="10s" Jan 13 21:25:08.053745 kubelet[1799]: I0113 21:25:08.053427 1799 plugin_manager.go:118] "Starting Kubelet Plugin Manager" Jan 13 21:25:08.054733 kubelet[1799]: E0113 21:25:08.054705 1799 eviction_manager.go:285] "Eviction manager: failed to get summary stats" err="failed to get node info: node \"10.0.0.88\" not found" Jan 13 21:25:08.154169 kubelet[1799]: I0113 21:25:08.154042 1799 kubelet_node_status.go:72] "Attempting to register node" node="10.0.0.88" Jan 13 21:25:08.157449 kubelet[1799]: I0113 21:25:08.157424 1799 kubelet_node_status.go:75] "Successfully registered node" node="10.0.0.88" Jan 13 21:25:08.157449 kubelet[1799]: E0113 21:25:08.157450 1799 kubelet_node_status.go:535] "Error updating node status, will retry" err="error getting node \"10.0.0.88\": node \"10.0.0.88\" not found" Jan 13 21:25:08.167940 kubelet[1799]: 
E0113 21:25:08.167914 1799 kubelet_node_status.go:453] "Error getting the current node from lister" err="node \"10.0.0.88\" not found" Jan 13 21:25:08.269082 kubelet[1799]: E0113 21:25:08.269022 1799 kubelet_node_status.go:453] "Error getting the current node from lister" err="node \"10.0.0.88\" not found" Jan 13 21:25:08.369739 kubelet[1799]: E0113 21:25:08.369693 1799 kubelet_node_status.go:453] "Error getting the current node from lister" err="node \"10.0.0.88\" not found" Jan 13 21:25:08.470412 kubelet[1799]: E0113 21:25:08.470307 1799 kubelet_node_status.go:453] "Error getting the current node from lister" err="node \"10.0.0.88\" not found" Jan 13 21:25:08.571165 kubelet[1799]: E0113 21:25:08.571103 1799 kubelet_node_status.go:453] "Error getting the current node from lister" err="node \"10.0.0.88\" not found" Jan 13 21:25:08.604361 kubelet[1799]: I0113 21:25:08.604301 1799 transport.go:147] "Certificate rotation detected, shutting down client connections to start using new credentials" Jan 13 21:25:08.604573 kubelet[1799]: W0113 21:25:08.604541 1799 reflector.go:484] k8s.io/client-go/informers/factory.go:160: watch of *v1.RuntimeClass ended with: very short watch: k8s.io/client-go/informers/factory.go:160: Unexpected watch close - watch lasted less than a second and no items received Jan 13 21:25:08.604670 kubelet[1799]: W0113 21:25:08.604541 1799 reflector.go:484] k8s.io/client-go/informers/factory.go:160: watch of *v1.CSIDriver ended with: very short watch: k8s.io/client-go/informers/factory.go:160: Unexpected watch close - watch lasted less than a second and no items received Jan 13 21:25:08.627050 sudo[1656]: pam_unix(sudo:session): session closed for user root Jan 13 21:25:08.628938 sshd[1653]: pam_unix(sshd:session): session closed for user core Jan 13 21:25:08.633019 systemd[1]: sshd@8-10.0.0.88:22-10.0.0.1:36118.service: Deactivated successfully. Jan 13 21:25:08.634937 systemd[1]: session-9.scope: Deactivated successfully. 
Jan 13 21:25:08.635659 systemd-logind[1450]: Session 9 logged out. Waiting for processes to exit. Jan 13 21:25:08.636670 systemd-logind[1450]: Removed session 9. Jan 13 21:25:08.638712 kubelet[1799]: E0113 21:25:08.638663 1799 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Jan 13 21:25:08.671642 kubelet[1799]: E0113 21:25:08.671591 1799 kubelet_node_status.go:453] "Error getting the current node from lister" err="node \"10.0.0.88\" not found" Jan 13 21:25:08.772136 kubelet[1799]: E0113 21:25:08.771986 1799 kubelet_node_status.go:453] "Error getting the current node from lister" err="node \"10.0.0.88\" not found" Jan 13 21:25:08.872804 kubelet[1799]: E0113 21:25:08.872738 1799 kubelet_node_status.go:453] "Error getting the current node from lister" err="node \"10.0.0.88\" not found" Jan 13 21:25:08.973597 kubelet[1799]: E0113 21:25:08.973528 1799 kubelet_node_status.go:453] "Error getting the current node from lister" err="node \"10.0.0.88\" not found" Jan 13 21:25:09.074325 kubelet[1799]: I0113 21:25:09.074214 1799 kuberuntime_manager.go:1633] "Updating runtime config through cri with podcidr" CIDR="192.168.1.0/24" Jan 13 21:25:09.074503 containerd[1474]: time="2025-01-13T21:25:09.074460030Z" level=info msg="No cni config template is specified, wait for other system components to drop the config." 
Jan 13 21:25:09.074854 kubelet[1799]: I0113 21:25:09.074649 1799 kubelet_network.go:61] "Updating Pod CIDR" originalPodCIDR="" newPodCIDR="192.168.1.0/24" Jan 13 21:25:09.639394 kubelet[1799]: E0113 21:25:09.639337 1799 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Jan 13 21:25:09.640574 kubelet[1799]: I0113 21:25:09.640554 1799 apiserver.go:52] "Watching apiserver" Jan 13 21:25:09.650061 kubelet[1799]: I0113 21:25:09.650038 1799 desired_state_of_world_populator.go:154] "Finished populating initial desired state of world" Jan 13 21:25:09.650529 systemd[1]: Created slice kubepods-besteffort-pod01a796f3_3716_4366_8b8b_29d78f7c6e0a.slice - libcontainer container kubepods-besteffort-pod01a796f3_3716_4366_8b8b_29d78f7c6e0a.slice. Jan 13 21:25:09.660470 kubelet[1799]: I0113 21:25:09.660432 1799 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"hubble-tls\" (UniqueName: \"kubernetes.io/projected/a2f83576-9690-40b2-bacf-895f61519e6a-hubble-tls\") pod \"cilium-xvdks\" (UID: \"a2f83576-9690-40b2-bacf-895f61519e6a\") " pod="kube-system/cilium-xvdks" Jan 13 21:25:09.660470 kubelet[1799]: I0113 21:25:09.660464 1799 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"xtables-lock\" (UniqueName: \"kubernetes.io/host-path/01a796f3-3716-4366-8b8b-29d78f7c6e0a-xtables-lock\") pod \"kube-proxy-rxq8f\" (UID: \"01a796f3-3716-4366-8b8b-29d78f7c6e0a\") " pod="kube-system/kube-proxy-rxq8f" Jan 13 21:25:09.660619 kubelet[1799]: I0113 21:25:09.660481 1799 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cilium-run\" (UniqueName: \"kubernetes.io/host-path/a2f83576-9690-40b2-bacf-895f61519e6a-cilium-run\") pod \"cilium-xvdks\" (UID: \"a2f83576-9690-40b2-bacf-895f61519e6a\") " pod="kube-system/cilium-xvdks" Jan 13 21:25:09.660619 kubelet[1799]: I0113 21:25:09.660497 1799 
reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"bpf-maps\" (UniqueName: \"kubernetes.io/host-path/a2f83576-9690-40b2-bacf-895f61519e6a-bpf-maps\") pod \"cilium-xvdks\" (UID: \"a2f83576-9690-40b2-bacf-895f61519e6a\") " pod="kube-system/cilium-xvdks" Jan 13 21:25:09.660619 kubelet[1799]: I0113 21:25:09.660513 1799 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"hostproc\" (UniqueName: \"kubernetes.io/host-path/a2f83576-9690-40b2-bacf-895f61519e6a-hostproc\") pod \"cilium-xvdks\" (UID: \"a2f83576-9690-40b2-bacf-895f61519e6a\") " pod="kube-system/cilium-xvdks" Jan 13 21:25:09.660619 kubelet[1799]: I0113 21:25:09.660527 1799 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cilium-config-path\" (UniqueName: \"kubernetes.io/configmap/a2f83576-9690-40b2-bacf-895f61519e6a-cilium-config-path\") pod \"cilium-xvdks\" (UID: \"a2f83576-9690-40b2-bacf-895f61519e6a\") " pod="kube-system/cilium-xvdks" Jan 13 21:25:09.660619 kubelet[1799]: I0113 21:25:09.660551 1799 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cilium-cgroup\" (UniqueName: \"kubernetes.io/host-path/a2f83576-9690-40b2-bacf-895f61519e6a-cilium-cgroup\") pod \"cilium-xvdks\" (UID: \"a2f83576-9690-40b2-bacf-895f61519e6a\") " pod="kube-system/cilium-xvdks" Jan 13 21:25:09.660619 kubelet[1799]: I0113 21:25:09.660578 1799 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"clustermesh-secrets\" (UniqueName: \"kubernetes.io/secret/a2f83576-9690-40b2-bacf-895f61519e6a-clustermesh-secrets\") pod \"cilium-xvdks\" (UID: \"a2f83576-9690-40b2-bacf-895f61519e6a\") " pod="kube-system/cilium-xvdks" Jan 13 21:25:09.660753 kubelet[1799]: I0113 21:25:09.660597 1799 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-9bf9m\" (UniqueName: 
\"kubernetes.io/projected/a2f83576-9690-40b2-bacf-895f61519e6a-kube-api-access-9bf9m\") pod \"cilium-xvdks\" (UID: \"a2f83576-9690-40b2-bacf-895f61519e6a\") " pod="kube-system/cilium-xvdks" Jan 13 21:25:09.660753 kubelet[1799]: I0113 21:25:09.660653 1799 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-jc9fc\" (UniqueName: \"kubernetes.io/projected/01a796f3-3716-4366-8b8b-29d78f7c6e0a-kube-api-access-jc9fc\") pod \"kube-proxy-rxq8f\" (UID: \"01a796f3-3716-4366-8b8b-29d78f7c6e0a\") " pod="kube-system/kube-proxy-rxq8f" Jan 13 21:25:09.660753 kubelet[1799]: I0113 21:25:09.660675 1799 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cni-path\" (UniqueName: \"kubernetes.io/host-path/a2f83576-9690-40b2-bacf-895f61519e6a-cni-path\") pod \"cilium-xvdks\" (UID: \"a2f83576-9690-40b2-bacf-895f61519e6a\") " pod="kube-system/cilium-xvdks" Jan 13 21:25:09.660753 kubelet[1799]: I0113 21:25:09.660711 1799 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"etc-cni-netd\" (UniqueName: \"kubernetes.io/host-path/a2f83576-9690-40b2-bacf-895f61519e6a-etc-cni-netd\") pod \"cilium-xvdks\" (UID: \"a2f83576-9690-40b2-bacf-895f61519e6a\") " pod="kube-system/cilium-xvdks" Jan 13 21:25:09.660753 kubelet[1799]: I0113 21:25:09.660749 1799 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"xtables-lock\" (UniqueName: \"kubernetes.io/host-path/a2f83576-9690-40b2-bacf-895f61519e6a-xtables-lock\") pod \"cilium-xvdks\" (UID: \"a2f83576-9690-40b2-bacf-895f61519e6a\") " pod="kube-system/cilium-xvdks" Jan 13 21:25:09.660867 kubelet[1799]: I0113 21:25:09.660767 1799 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/01a796f3-3716-4366-8b8b-29d78f7c6e0a-lib-modules\") pod \"kube-proxy-rxq8f\" (UID: 
\"01a796f3-3716-4366-8b8b-29d78f7c6e0a\") " pod="kube-system/kube-proxy-rxq8f" Jan 13 21:25:09.660867 kubelet[1799]: I0113 21:25:09.660788 1799 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/a2f83576-9690-40b2-bacf-895f61519e6a-lib-modules\") pod \"cilium-xvdks\" (UID: \"a2f83576-9690-40b2-bacf-895f61519e6a\") " pod="kube-system/cilium-xvdks" Jan 13 21:25:09.660867 kubelet[1799]: I0113 21:25:09.660806 1799 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-proc-sys-net\" (UniqueName: \"kubernetes.io/host-path/a2f83576-9690-40b2-bacf-895f61519e6a-host-proc-sys-net\") pod \"cilium-xvdks\" (UID: \"a2f83576-9690-40b2-bacf-895f61519e6a\") " pod="kube-system/cilium-xvdks" Jan 13 21:25:09.660867 kubelet[1799]: I0113 21:25:09.660826 1799 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-proc-sys-kernel\" (UniqueName: \"kubernetes.io/host-path/a2f83576-9690-40b2-bacf-895f61519e6a-host-proc-sys-kernel\") pod \"cilium-xvdks\" (UID: \"a2f83576-9690-40b2-bacf-895f61519e6a\") " pod="kube-system/cilium-xvdks" Jan 13 21:25:09.660867 kubelet[1799]: I0113 21:25:09.660845 1799 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-proxy\" (UniqueName: \"kubernetes.io/configmap/01a796f3-3716-4366-8b8b-29d78f7c6e0a-kube-proxy\") pod \"kube-proxy-rxq8f\" (UID: \"01a796f3-3716-4366-8b8b-29d78f7c6e0a\") " pod="kube-system/kube-proxy-rxq8f" Jan 13 21:25:09.669517 systemd[1]: Created slice kubepods-burstable-poda2f83576_9690_40b2_bacf_895f61519e6a.slice - libcontainer container kubepods-burstable-poda2f83576_9690_40b2_bacf_895f61519e6a.slice. 
Jan 13 21:25:09.968855 kubelet[1799]: E0113 21:25:09.968720 1799 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jan 13 21:25:09.969641 containerd[1474]: time="2025-01-13T21:25:09.969602682Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-proxy-rxq8f,Uid:01a796f3-3716-4366-8b8b-29d78f7c6e0a,Namespace:kube-system,Attempt:0,}" Jan 13 21:25:09.982025 kubelet[1799]: E0113 21:25:09.981998 1799 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jan 13 21:25:09.982666 containerd[1474]: time="2025-01-13T21:25:09.982353163Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:cilium-xvdks,Uid:a2f83576-9690-40b2-bacf-895f61519e6a,Namespace:kube-system,Attempt:0,}" Jan 13 21:25:10.581357 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount1278194866.mount: Deactivated successfully. 
Jan 13 21:25:10.588893 containerd[1474]: time="2025-01-13T21:25:10.588851785Z" level=info msg="ImageCreate event name:\"registry.k8s.io/pause:3.8\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"} labels:{key:\"io.cri-containerd.pinned\" value:\"pinned\"}" Jan 13 21:25:10.590074 containerd[1474]: time="2025-01-13T21:25:10.590001501Z" level=info msg="ImageUpdate event name:\"registry.k8s.io/pause:3.8\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"} labels:{key:\"io.cri-containerd.pinned\" value:\"pinned\"}" Jan 13 21:25:10.590842 containerd[1474]: time="2025-01-13T21:25:10.590812622Z" level=info msg="stop pulling image registry.k8s.io/pause:3.8: active requests=0, bytes read=312056" Jan 13 21:25:10.593560 containerd[1474]: time="2025-01-13T21:25:10.593525940Z" level=info msg="stop pulling image registry.k8s.io/pause:3.8: active requests=0, bytes read=0" Jan 13 21:25:10.594705 containerd[1474]: time="2025-01-13T21:25:10.594670808Z" level=info msg="ImageCreate event name:\"sha256:4873874c08efc72e9729683a83ffbb7502ee729e9a5ac097723806ea7fa13517\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"} labels:{key:\"io.cri-containerd.pinned\" value:\"pinned\"}" Jan 13 21:25:10.597379 containerd[1474]: time="2025-01-13T21:25:10.597351445Z" level=info msg="ImageCreate event name:\"registry.k8s.io/pause@sha256:9001185023633d17a2f98ff69b6ff2615b8ea02a825adffa40422f51dfdcde9d\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"} labels:{key:\"io.cri-containerd.pinned\" value:\"pinned\"}" Jan 13 21:25:10.598361 containerd[1474]: time="2025-01-13T21:25:10.598326002Z" level=info msg="Pulled image \"registry.k8s.io/pause:3.8\" with image id \"sha256:4873874c08efc72e9729683a83ffbb7502ee729e9a5ac097723806ea7fa13517\", repo tag \"registry.k8s.io/pause:3.8\", repo digest \"registry.k8s.io/pause@sha256:9001185023633d17a2f98ff69b6ff2615b8ea02a825adffa40422f51dfdcde9d\", size \"311286\" in 628.630596ms" Jan 13 21:25:10.600801 containerd[1474]: 
time="2025-01-13T21:25:10.600764906Z" level=info msg="Pulled image \"registry.k8s.io/pause:3.8\" with image id \"sha256:4873874c08efc72e9729683a83ffbb7502ee729e9a5ac097723806ea7fa13517\", repo tag \"registry.k8s.io/pause:3.8\", repo digest \"registry.k8s.io/pause@sha256:9001185023633d17a2f98ff69b6ff2615b8ea02a825adffa40422f51dfdcde9d\", size \"311286\" in 618.287399ms" Jan 13 21:25:10.639498 kubelet[1799]: E0113 21:25:10.639436 1799 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Jan 13 21:25:10.696720 containerd[1474]: time="2025-01-13T21:25:10.696546341Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Jan 13 21:25:10.696720 containerd[1474]: time="2025-01-13T21:25:10.696627443Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Jan 13 21:25:10.696720 containerd[1474]: time="2025-01-13T21:25:10.696655215Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Jan 13 21:25:10.697597 containerd[1474]: time="2025-01-13T21:25:10.696760392Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Jan 13 21:25:10.697597 containerd[1474]: time="2025-01-13T21:25:10.696726168Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Jan 13 21:25:10.697597 containerd[1474]: time="2025-01-13T21:25:10.696768357Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Jan 13 21:25:10.697597 containerd[1474]: time="2025-01-13T21:25:10.696781752Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." 
runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Jan 13 21:25:10.697597 containerd[1474]: time="2025-01-13T21:25:10.696849419Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Jan 13 21:25:10.759049 systemd[1]: Started cri-containerd-56fe0ce02cc88e5eb5a49dffa080c101e96f14f1c56ad515071b87974e8d4794.scope - libcontainer container 56fe0ce02cc88e5eb5a49dffa080c101e96f14f1c56ad515071b87974e8d4794. Jan 13 21:25:10.760999 systemd[1]: Started cri-containerd-f4b54a766aa0cfc16ca36197fd4b860c0db95fc6c45e23acf4b085474f97531a.scope - libcontainer container f4b54a766aa0cfc16ca36197fd4b860c0db95fc6c45e23acf4b085474f97531a. Jan 13 21:25:10.786484 containerd[1474]: time="2025-01-13T21:25:10.786398846Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:cilium-xvdks,Uid:a2f83576-9690-40b2-bacf-895f61519e6a,Namespace:kube-system,Attempt:0,} returns sandbox id \"56fe0ce02cc88e5eb5a49dffa080c101e96f14f1c56ad515071b87974e8d4794\"" Jan 13 21:25:10.787773 kubelet[1799]: E0113 21:25:10.787750 1799 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jan 13 21:25:10.789344 containerd[1474]: time="2025-01-13T21:25:10.789317229Z" level=info msg="PullImage \"quay.io/cilium/cilium:v1.12.5@sha256:06ce2b0a0a472e73334a7504ee5c5d8b2e2d7b72ef728ad94e564740dd505be5\"" Jan 13 21:25:10.789687 containerd[1474]: time="2025-01-13T21:25:10.789506274Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-proxy-rxq8f,Uid:01a796f3-3716-4366-8b8b-29d78f7c6e0a,Namespace:kube-system,Attempt:0,} returns sandbox id \"f4b54a766aa0cfc16ca36197fd4b860c0db95fc6c45e23acf4b085474f97531a\"" Jan 13 21:25:10.790034 kubelet[1799]: E0113 21:25:10.790010 1799 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 
1.1.1.1 1.0.0.1 8.8.8.8" Jan 13 21:25:11.639982 kubelet[1799]: E0113 21:25:11.639934 1799 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Jan 13 21:25:12.640911 kubelet[1799]: E0113 21:25:12.640851 1799 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Jan 13 21:25:13.642022 kubelet[1799]: E0113 21:25:13.641980 1799 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Jan 13 21:25:14.643011 kubelet[1799]: E0113 21:25:14.642958 1799 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Jan 13 21:25:14.864322 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount924341872.mount: Deactivated successfully. Jan 13 21:25:15.644047 kubelet[1799]: E0113 21:25:15.643988 1799 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Jan 13 21:25:16.644794 kubelet[1799]: E0113 21:25:16.644744 1799 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Jan 13 21:25:17.301828 containerd[1474]: time="2025-01-13T21:25:17.301711272Z" level=info msg="ImageCreate event name:\"quay.io/cilium/cilium@sha256:06ce2b0a0a472e73334a7504ee5c5d8b2e2d7b72ef728ad94e564740dd505be5\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jan 13 21:25:17.304776 containerd[1474]: time="2025-01-13T21:25:17.304686242Z" level=info msg="stop pulling image quay.io/cilium/cilium@sha256:06ce2b0a0a472e73334a7504ee5c5d8b2e2d7b72ef728ad94e564740dd505be5: active requests=0, bytes read=166734711" Jan 13 21:25:17.305882 containerd[1474]: time="2025-01-13T21:25:17.305848050Z" level=info msg="ImageCreate event name:\"sha256:3e35b3e9f295e7748482d40ed499b0ff7961f1f128d479d8e6682b3245bba69b\" labels:{key:\"io.cri-containerd.image\" 
value:\"managed\"}" Jan 13 21:25:17.307315 containerd[1474]: time="2025-01-13T21:25:17.307277231Z" level=info msg="Pulled image \"quay.io/cilium/cilium:v1.12.5@sha256:06ce2b0a0a472e73334a7504ee5c5d8b2e2d7b72ef728ad94e564740dd505be5\" with image id \"sha256:3e35b3e9f295e7748482d40ed499b0ff7961f1f128d479d8e6682b3245bba69b\", repo tag \"\", repo digest \"quay.io/cilium/cilium@sha256:06ce2b0a0a472e73334a7504ee5c5d8b2e2d7b72ef728ad94e564740dd505be5\", size \"166719855\" in 6.517729529s" Jan 13 21:25:17.307362 containerd[1474]: time="2025-01-13T21:25:17.307317386Z" level=info msg="PullImage \"quay.io/cilium/cilium:v1.12.5@sha256:06ce2b0a0a472e73334a7504ee5c5d8b2e2d7b72ef728ad94e564740dd505be5\" returns image reference \"sha256:3e35b3e9f295e7748482d40ed499b0ff7961f1f128d479d8e6682b3245bba69b\"" Jan 13 21:25:17.308536 containerd[1474]: time="2025-01-13T21:25:17.308514271Z" level=info msg="PullImage \"registry.k8s.io/kube-proxy:v1.31.4\"" Jan 13 21:25:17.309841 containerd[1474]: time="2025-01-13T21:25:17.309818306Z" level=info msg="CreateContainer within sandbox \"56fe0ce02cc88e5eb5a49dffa080c101e96f14f1c56ad515071b87974e8d4794\" for container &ContainerMetadata{Name:mount-cgroup,Attempt:0,}" Jan 13 21:25:17.325398 containerd[1474]: time="2025-01-13T21:25:17.325349943Z" level=info msg="CreateContainer within sandbox \"56fe0ce02cc88e5eb5a49dffa080c101e96f14f1c56ad515071b87974e8d4794\" for &ContainerMetadata{Name:mount-cgroup,Attempt:0,} returns container id \"7035fe7b1a914cb1fbbb4e1093e723e30bbf5d13c4340003a51fccad1184d923\"" Jan 13 21:25:17.325941 containerd[1474]: time="2025-01-13T21:25:17.325913560Z" level=info msg="StartContainer for \"7035fe7b1a914cb1fbbb4e1093e723e30bbf5d13c4340003a51fccad1184d923\"" Jan 13 21:25:17.366051 systemd[1]: Started cri-containerd-7035fe7b1a914cb1fbbb4e1093e723e30bbf5d13c4340003a51fccad1184d923.scope - libcontainer container 7035fe7b1a914cb1fbbb4e1093e723e30bbf5d13c4340003a51fccad1184d923. 
Jan 13 21:25:17.393834 containerd[1474]: time="2025-01-13T21:25:17.393776908Z" level=info msg="StartContainer for \"7035fe7b1a914cb1fbbb4e1093e723e30bbf5d13c4340003a51fccad1184d923\" returns successfully"
Jan 13 21:25:17.403006 systemd[1]: cri-containerd-7035fe7b1a914cb1fbbb4e1093e723e30bbf5d13c4340003a51fccad1184d923.scope: Deactivated successfully.
Jan 13 21:25:17.645335 kubelet[1799]: E0113 21:25:17.645274 1799 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests"
Jan 13 21:25:17.826886 containerd[1474]: time="2025-01-13T21:25:17.826827874Z" level=info msg="shim disconnected" id=7035fe7b1a914cb1fbbb4e1093e723e30bbf5d13c4340003a51fccad1184d923 namespace=k8s.io
Jan 13 21:25:17.826886 containerd[1474]: time="2025-01-13T21:25:17.826879981Z" level=warning msg="cleaning up after shim disconnected" id=7035fe7b1a914cb1fbbb4e1093e723e30bbf5d13c4340003a51fccad1184d923 namespace=k8s.io
Jan 13 21:25:17.826886 containerd[1474]: time="2025-01-13T21:25:17.826888187Z" level=info msg="cleaning up dead shim" namespace=k8s.io
Jan 13 21:25:18.067422 kubelet[1799]: E0113 21:25:18.067287 1799 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Jan 13 21:25:18.068837 containerd[1474]: time="2025-01-13T21:25:18.068798405Z" level=info msg="CreateContainer within sandbox \"56fe0ce02cc88e5eb5a49dffa080c101e96f14f1c56ad515071b87974e8d4794\" for container &ContainerMetadata{Name:apply-sysctl-overwrites,Attempt:0,}"
Jan 13 21:25:18.258661 containerd[1474]: time="2025-01-13T21:25:18.258602004Z" level=info msg="CreateContainer within sandbox \"56fe0ce02cc88e5eb5a49dffa080c101e96f14f1c56ad515071b87974e8d4794\" for &ContainerMetadata{Name:apply-sysctl-overwrites,Attempt:0,} returns container id \"27e0e528b460e220ac20147803aafa2e5841de51ab0cfaae9f8c0f58dab1fbcc\""
Jan 13 21:25:18.259261 containerd[1474]: time="2025-01-13T21:25:18.259232156Z" level=info msg="StartContainer for \"27e0e528b460e220ac20147803aafa2e5841de51ab0cfaae9f8c0f58dab1fbcc\""
Jan 13 21:25:18.284030 systemd[1]: Started cri-containerd-27e0e528b460e220ac20147803aafa2e5841de51ab0cfaae9f8c0f58dab1fbcc.scope - libcontainer container 27e0e528b460e220ac20147803aafa2e5841de51ab0cfaae9f8c0f58dab1fbcc.
Jan 13 21:25:18.307147 containerd[1474]: time="2025-01-13T21:25:18.307095968Z" level=info msg="StartContainer for \"27e0e528b460e220ac20147803aafa2e5841de51ab0cfaae9f8c0f58dab1fbcc\" returns successfully"
Jan 13 21:25:18.321420 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-7035fe7b1a914cb1fbbb4e1093e723e30bbf5d13c4340003a51fccad1184d923-rootfs.mount: Deactivated successfully.
Jan 13 21:25:18.322766 systemd[1]: cri-containerd-27e0e528b460e220ac20147803aafa2e5841de51ab0cfaae9f8c0f58dab1fbcc.scope: Deactivated successfully.
Jan 13 21:25:18.324700 systemd[1]: systemd-sysctl.service: Deactivated successfully.
Jan 13 21:25:18.324872 systemd[1]: Stopped systemd-sysctl.service - Apply Kernel Variables.
Jan 13 21:25:18.324989 systemd[1]: Stopping systemd-sysctl.service - Apply Kernel Variables...
Jan 13 21:25:18.332630 systemd[1]: Starting systemd-sysctl.service - Apply Kernel Variables...
Jan 13 21:25:18.341745 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-27e0e528b460e220ac20147803aafa2e5841de51ab0cfaae9f8c0f58dab1fbcc-rootfs.mount: Deactivated successfully.
Jan 13 21:25:18.352865 containerd[1474]: time="2025-01-13T21:25:18.352805349Z" level=info msg="shim disconnected" id=27e0e528b460e220ac20147803aafa2e5841de51ab0cfaae9f8c0f58dab1fbcc namespace=k8s.io
Jan 13 21:25:18.352865 containerd[1474]: time="2025-01-13T21:25:18.352853680Z" level=warning msg="cleaning up after shim disconnected" id=27e0e528b460e220ac20147803aafa2e5841de51ab0cfaae9f8c0f58dab1fbcc namespace=k8s.io
Jan 13 21:25:18.352865 containerd[1474]: time="2025-01-13T21:25:18.352861955Z" level=info msg="cleaning up dead shim" namespace=k8s.io
Jan 13 21:25:18.356076 systemd[1]: Finished systemd-sysctl.service - Apply Kernel Variables.
Jan 13 21:25:18.646103 kubelet[1799]: E0113 21:25:18.646043 1799 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests"
Jan 13 21:25:19.023307 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount3579420054.mount: Deactivated successfully.
Jan 13 21:25:19.069862 kubelet[1799]: E0113 21:25:19.069825 1799 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Jan 13 21:25:19.071440 containerd[1474]: time="2025-01-13T21:25:19.071403798Z" level=info msg="CreateContainer within sandbox \"56fe0ce02cc88e5eb5a49dffa080c101e96f14f1c56ad515071b87974e8d4794\" for container &ContainerMetadata{Name:mount-bpf-fs,Attempt:0,}"
Jan 13 21:25:19.171629 containerd[1474]: time="2025-01-13T21:25:19.171578581Z" level=info msg="CreateContainer within sandbox \"56fe0ce02cc88e5eb5a49dffa080c101e96f14f1c56ad515071b87974e8d4794\" for &ContainerMetadata{Name:mount-bpf-fs,Attempt:0,} returns container id \"42cf9c2cf7c323ad619db58b0c7db37e901ba1a261943e8bf2541069a4e4da20\""
Jan 13 21:25:19.172187 containerd[1474]: time="2025-01-13T21:25:19.172140645Z" level=info msg="StartContainer for \"42cf9c2cf7c323ad619db58b0c7db37e901ba1a261943e8bf2541069a4e4da20\""
Jan 13 21:25:19.206083 systemd[1]: Started cri-containerd-42cf9c2cf7c323ad619db58b0c7db37e901ba1a261943e8bf2541069a4e4da20.scope - libcontainer container 42cf9c2cf7c323ad619db58b0c7db37e901ba1a261943e8bf2541069a4e4da20.
Jan 13 21:25:19.238408 systemd[1]: cri-containerd-42cf9c2cf7c323ad619db58b0c7db37e901ba1a261943e8bf2541069a4e4da20.scope: Deactivated successfully.
Jan 13 21:25:19.312318 containerd[1474]: time="2025-01-13T21:25:19.312206759Z" level=info msg="StartContainer for \"42cf9c2cf7c323ad619db58b0c7db37e901ba1a261943e8bf2541069a4e4da20\" returns successfully"
Jan 13 21:25:19.331647 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-42cf9c2cf7c323ad619db58b0c7db37e901ba1a261943e8bf2541069a4e4da20-rootfs.mount: Deactivated successfully.
Jan 13 21:25:19.646553 kubelet[1799]: E0113 21:25:19.646503 1799 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests"
Jan 13 21:25:20.018944 containerd[1474]: time="2025-01-13T21:25:20.018801567Z" level=info msg="shim disconnected" id=42cf9c2cf7c323ad619db58b0c7db37e901ba1a261943e8bf2541069a4e4da20 namespace=k8s.io
Jan 13 21:25:20.018944 containerd[1474]: time="2025-01-13T21:25:20.018852372Z" level=warning msg="cleaning up after shim disconnected" id=42cf9c2cf7c323ad619db58b0c7db37e901ba1a261943e8bf2541069a4e4da20 namespace=k8s.io
Jan 13 21:25:20.018944 containerd[1474]: time="2025-01-13T21:25:20.018860798Z" level=info msg="cleaning up dead shim" namespace=k8s.io
Jan 13 21:25:20.025706 containerd[1474]: time="2025-01-13T21:25:20.025666731Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-proxy:v1.31.4\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Jan 13 21:25:20.027273 containerd[1474]: time="2025-01-13T21:25:20.027140886Z" level=info msg="stop pulling image registry.k8s.io/kube-proxy:v1.31.4: active requests=0, bytes read=30230243"
Jan 13 21:25:20.028733 containerd[1474]: time="2025-01-13T21:25:20.028711221Z" level=info msg="ImageCreate event name:\"sha256:ebf80573666f86f115452db568feb34f6f771c3bdc7bfed14b9577f992cfa300\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Jan 13 21:25:20.031388 containerd[1474]: time="2025-01-13T21:25:20.031359147Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-proxy@sha256:1739b3febca392035bf6edfe31efdfa55226be7b57389b2001ae357f7dcb99cf\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Jan 13 21:25:20.032094 containerd[1474]: time="2025-01-13T21:25:20.032063528Z" level=info msg="Pulled image \"registry.k8s.io/kube-proxy:v1.31.4\" with image id \"sha256:ebf80573666f86f115452db568feb34f6f771c3bdc7bfed14b9577f992cfa300\", repo tag \"registry.k8s.io/kube-proxy:v1.31.4\", repo digest \"registry.k8s.io/kube-proxy@sha256:1739b3febca392035bf6edfe31efdfa55226be7b57389b2001ae357f7dcb99cf\", size \"30229262\" in 2.723516997s"
Jan 13 21:25:20.032177 containerd[1474]: time="2025-01-13T21:25:20.032098353Z" level=info msg="PullImage \"registry.k8s.io/kube-proxy:v1.31.4\" returns image reference \"sha256:ebf80573666f86f115452db568feb34f6f771c3bdc7bfed14b9577f992cfa300\""
Jan 13 21:25:20.033812 containerd[1474]: time="2025-01-13T21:25:20.033789354Z" level=info msg="CreateContainer within sandbox \"f4b54a766aa0cfc16ca36197fd4b860c0db95fc6c45e23acf4b085474f97531a\" for container &ContainerMetadata{Name:kube-proxy,Attempt:0,}"
Jan 13 21:25:20.049719 containerd[1474]: time="2025-01-13T21:25:20.049675837Z" level=info msg="CreateContainer within sandbox \"f4b54a766aa0cfc16ca36197fd4b860c0db95fc6c45e23acf4b085474f97531a\" for &ContainerMetadata{Name:kube-proxy,Attempt:0,} returns container id \"fcb7946446f875d70a548b4f291c76ca418f599b3c160bf7f3b7d4b7c288a670\""
Jan 13 21:25:20.050287 containerd[1474]: time="2025-01-13T21:25:20.050244724Z" level=info msg="StartContainer for \"fcb7946446f875d70a548b4f291c76ca418f599b3c160bf7f3b7d4b7c288a670\""
Jan 13 21:25:20.074274 kubelet[1799]: E0113 21:25:20.074236 1799 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Jan 13 21:25:20.076335 containerd[1474]: time="2025-01-13T21:25:20.076290629Z" level=info msg="CreateContainer within sandbox \"56fe0ce02cc88e5eb5a49dffa080c101e96f14f1c56ad515071b87974e8d4794\" for container &ContainerMetadata{Name:clean-cilium-state,Attempt:0,}"
Jan 13 21:25:20.077358 systemd[1]: Started cri-containerd-fcb7946446f875d70a548b4f291c76ca418f599b3c160bf7f3b7d4b7c288a670.scope - libcontainer container fcb7946446f875d70a548b4f291c76ca418f599b3c160bf7f3b7d4b7c288a670.
Jan 13 21:25:20.093008 containerd[1474]: time="2025-01-13T21:25:20.092962715Z" level=info msg="CreateContainer within sandbox \"56fe0ce02cc88e5eb5a49dffa080c101e96f14f1c56ad515071b87974e8d4794\" for &ContainerMetadata{Name:clean-cilium-state,Attempt:0,} returns container id \"ffb2f7c10884950bdb2aa3b31a3d16b86105869696c1643406527bdb9ae58b83\""
Jan 13 21:25:20.093567 containerd[1474]: time="2025-01-13T21:25:20.093542432Z" level=info msg="StartContainer for \"ffb2f7c10884950bdb2aa3b31a3d16b86105869696c1643406527bdb9ae58b83\""
Jan 13 21:25:20.110139 containerd[1474]: time="2025-01-13T21:25:20.110082361Z" level=info msg="StartContainer for \"fcb7946446f875d70a548b4f291c76ca418f599b3c160bf7f3b7d4b7c288a670\" returns successfully"
Jan 13 21:25:20.121323 systemd[1]: Started cri-containerd-ffb2f7c10884950bdb2aa3b31a3d16b86105869696c1643406527bdb9ae58b83.scope - libcontainer container ffb2f7c10884950bdb2aa3b31a3d16b86105869696c1643406527bdb9ae58b83.
Jan 13 21:25:20.145489 systemd[1]: cri-containerd-ffb2f7c10884950bdb2aa3b31a3d16b86105869696c1643406527bdb9ae58b83.scope: Deactivated successfully.
Jan 13 21:25:20.148687 containerd[1474]: time="2025-01-13T21:25:20.148588023Z" level=info msg="StartContainer for \"ffb2f7c10884950bdb2aa3b31a3d16b86105869696c1643406527bdb9ae58b83\" returns successfully"
Jan 13 21:25:20.224873 containerd[1474]: time="2025-01-13T21:25:20.224798995Z" level=info msg="shim disconnected" id=ffb2f7c10884950bdb2aa3b31a3d16b86105869696c1643406527bdb9ae58b83 namespace=k8s.io
Jan 13 21:25:20.224873 containerd[1474]: time="2025-01-13T21:25:20.224858908Z" level=warning msg="cleaning up after shim disconnected" id=ffb2f7c10884950bdb2aa3b31a3d16b86105869696c1643406527bdb9ae58b83 namespace=k8s.io
Jan 13 21:25:20.224873 containerd[1474]: time="2025-01-13T21:25:20.224867444Z" level=info msg="cleaning up dead shim" namespace=k8s.io
Jan 13 21:25:20.647586 kubelet[1799]: E0113 21:25:20.647528 1799 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests"
Jan 13 21:25:21.079797 kubelet[1799]: E0113 21:25:21.079533 1799 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Jan 13 21:25:21.081139 kubelet[1799]: E0113 21:25:21.081107 1799 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Jan 13 21:25:21.081668 containerd[1474]: time="2025-01-13T21:25:21.081636602Z" level=info msg="CreateContainer within sandbox \"56fe0ce02cc88e5eb5a49dffa080c101e96f14f1c56ad515071b87974e8d4794\" for container &ContainerMetadata{Name:cilium-agent,Attempt:0,}"
Jan 13 21:25:21.101277 containerd[1474]: time="2025-01-13T21:25:21.101215890Z" level=info msg="CreateContainer within sandbox \"56fe0ce02cc88e5eb5a49dffa080c101e96f14f1c56ad515071b87974e8d4794\" for &ContainerMetadata{Name:cilium-agent,Attempt:0,} returns container id \"920a0efcc61d7d35f07df5ae70870192ce23ebd46e6d161f1ac2390f15f87c00\""
Jan 13 21:25:21.101721 containerd[1474]: time="2025-01-13T21:25:21.101682866Z" level=info msg="StartContainer for \"920a0efcc61d7d35f07df5ae70870192ce23ebd46e6d161f1ac2390f15f87c00\""
Jan 13 21:25:21.102443 kubelet[1799]: I0113 21:25:21.102363 1799 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/kube-proxy-rxq8f" podStartSLOduration=3.860309692 podStartE2EDuration="13.10234616s" podCreationTimestamp="2025-01-13 21:25:08 +0000 UTC" firstStartedPulling="2025-01-13 21:25:10.790682399 +0000 UTC m=+3.676239692" lastFinishedPulling="2025-01-13 21:25:20.032718867 +0000 UTC m=+12.918276160" observedRunningTime="2025-01-13 21:25:21.102153348 +0000 UTC m=+13.987710651" watchObservedRunningTime="2025-01-13 21:25:21.10234616 +0000 UTC m=+13.987903453"
Jan 13 21:25:21.137045 systemd[1]: Started cri-containerd-920a0efcc61d7d35f07df5ae70870192ce23ebd46e6d161f1ac2390f15f87c00.scope - libcontainer container 920a0efcc61d7d35f07df5ae70870192ce23ebd46e6d161f1ac2390f15f87c00.
Jan 13 21:25:21.167650 containerd[1474]: time="2025-01-13T21:25:21.167600855Z" level=info msg="StartContainer for \"920a0efcc61d7d35f07df5ae70870192ce23ebd46e6d161f1ac2390f15f87c00\" returns successfully"
Jan 13 21:25:21.281621 kubelet[1799]: I0113 21:25:21.281588 1799 kubelet_node_status.go:488] "Fast updating node status as it just became ready"
Jan 13 21:25:21.631930 kernel: Initializing XFRM netlink socket
Jan 13 21:25:21.648669 kubelet[1799]: E0113 21:25:21.648625 1799 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests"
Jan 13 21:25:22.086010 kubelet[1799]: E0113 21:25:22.085870 1799 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Jan 13 21:25:22.086138 kubelet[1799]: E0113 21:25:22.086039 1799 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Jan 13 21:25:22.649712 kubelet[1799]: E0113 21:25:22.649628 1799 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests"
Jan 13 21:25:23.087442 kubelet[1799]: E0113 21:25:23.087310 1799 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Jan 13 21:25:23.324415 systemd-networkd[1406]: cilium_host: Link UP
Jan 13 21:25:23.324572 systemd-networkd[1406]: cilium_net: Link UP
Jan 13 21:25:23.325330 systemd-networkd[1406]: cilium_net: Gained carrier
Jan 13 21:25:23.325518 systemd-networkd[1406]: cilium_host: Gained carrier
Jan 13 21:25:23.325662 systemd-networkd[1406]: cilium_net: Gained IPv6LL
Jan 13 21:25:23.325826 systemd-networkd[1406]: cilium_host: Gained IPv6LL
Jan 13 21:25:23.427467 systemd-networkd[1406]: cilium_vxlan: Link UP
Jan 13 21:25:23.427477 systemd-networkd[1406]: cilium_vxlan: Gained carrier
Jan 13 21:25:23.643940 kernel: NET: Registered PF_ALG protocol family
Jan 13 21:25:23.650195 kubelet[1799]: E0113 21:25:23.650159 1799 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests"
Jan 13 21:25:24.089117 kubelet[1799]: E0113 21:25:24.089071 1799 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Jan 13 21:25:24.260983 systemd-networkd[1406]: lxc_health: Link UP
Jan 13 21:25:24.278032 systemd-networkd[1406]: lxc_health: Gained carrier
Jan 13 21:25:24.610103 systemd-networkd[1406]: cilium_vxlan: Gained IPv6LL
Jan 13 21:25:24.650749 kubelet[1799]: E0113 21:25:24.650678 1799 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests"
Jan 13 21:25:24.961816 kubelet[1799]: I0113 21:25:24.961662 1799 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/cilium-xvdks" podStartSLOduration=10.442114256 podStartE2EDuration="16.961641847s" podCreationTimestamp="2025-01-13 21:25:08 +0000 UTC" firstStartedPulling="2025-01-13 21:25:10.788839142 +0000 UTC m=+3.674396436" lastFinishedPulling="2025-01-13 21:25:17.308366734 +0000 UTC m=+10.193924027" observedRunningTime="2025-01-13 21:25:22.103673987 +0000 UTC m=+14.989231280" watchObservedRunningTime="2025-01-13 21:25:24.961641847 +0000 UTC m=+17.847199140"
Jan 13 21:25:24.967805 systemd[1]: Created slice kubepods-besteffort-podc94d0b2a_2c92_4ec9_94b7_988d7551969d.slice - libcontainer container kubepods-besteffort-podc94d0b2a_2c92_4ec9_94b7_988d7551969d.slice.
Jan 13 21:25:25.058954 kubelet[1799]: I0113 21:25:25.058918 1799 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-wnkmz\" (UniqueName: \"kubernetes.io/projected/c94d0b2a-2c92-4ec9-94b7-988d7551969d-kube-api-access-wnkmz\") pod \"nginx-deployment-8587fbcb89-4kl5c\" (UID: \"c94d0b2a-2c92-4ec9-94b7-988d7551969d\") " pod="default/nginx-deployment-8587fbcb89-4kl5c"
Jan 13 21:25:25.571482 containerd[1474]: time="2025-01-13T21:25:25.571437292Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:nginx-deployment-8587fbcb89-4kl5c,Uid:c94d0b2a-2c92-4ec9-94b7-988d7551969d,Namespace:default,Attempt:0,}"
Jan 13 21:25:25.635016 systemd-networkd[1406]: lxc_health: Gained IPv6LL
Jan 13 21:25:25.651242 kubelet[1799]: E0113 21:25:25.651211 1799 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests"
Jan 13 21:25:25.984184 kubelet[1799]: E0113 21:25:25.984144 1799 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Jan 13 21:25:26.273934 systemd-networkd[1406]: lxcb9e67eb6ac08: Link UP
Jan 13 21:25:26.287038 kernel: eth0: renamed from tmp26c40
Jan 13 21:25:26.293481 systemd-networkd[1406]: lxcb9e67eb6ac08: Gained carrier
Jan 13 21:25:26.652409 kubelet[1799]: E0113 21:25:26.652356 1799 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests"
Jan 13 21:25:27.638170 kubelet[1799]: E0113 21:25:27.638114 1799 file.go:104] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests"
Jan 13 21:25:27.652746 kubelet[1799]: E0113 21:25:27.652676 1799 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests"
Jan 13 21:25:28.258292 systemd-networkd[1406]: lxcb9e67eb6ac08: Gained IPv6LL
Jan 13 21:25:28.652980 kubelet[1799]: E0113 21:25:28.652932 1799 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests"
Jan 13 21:25:28.962722 kubelet[1799]: I0113 21:25:28.962582 1799 prober_manager.go:312] "Failed to trigger a manual run" probe="Readiness"
Jan 13 21:25:28.963254 kubelet[1799]: E0113 21:25:28.963192 1799 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Jan 13 21:25:29.055002 containerd[1474]: time="2025-01-13T21:25:29.054387209Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1
Jan 13 21:25:29.055002 containerd[1474]: time="2025-01-13T21:25:29.054978991Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1
Jan 13 21:25:29.055002 containerd[1474]: time="2025-01-13T21:25:29.054991726Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
Jan 13 21:25:29.055503 containerd[1474]: time="2025-01-13T21:25:29.055073783Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
Jan 13 21:25:29.076025 systemd[1]: Started cri-containerd-26c4086853005fda4815121a788a0ec91326ccf56a622909f767eee17617c8fa.scope - libcontainer container 26c4086853005fda4815121a788a0ec91326ccf56a622909f767eee17617c8fa.
Jan 13 21:25:29.086473 systemd-resolved[1336]: Failed to determine the local hostname and LLMNR/mDNS names, ignoring: No such device or address
Jan 13 21:25:29.097231 kubelet[1799]: E0113 21:25:29.097204 1799 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Jan 13 21:25:29.109506 containerd[1474]: time="2025-01-13T21:25:29.109456453Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:nginx-deployment-8587fbcb89-4kl5c,Uid:c94d0b2a-2c92-4ec9-94b7-988d7551969d,Namespace:default,Attempt:0,} returns sandbox id \"26c4086853005fda4815121a788a0ec91326ccf56a622909f767eee17617c8fa\""
Jan 13 21:25:29.110840 containerd[1474]: time="2025-01-13T21:25:29.110792829Z" level=info msg="PullImage \"ghcr.io/flatcar/nginx:latest\""
Jan 13 21:25:29.653744 kubelet[1799]: E0113 21:25:29.653677 1799 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests"
Jan 13 21:25:30.654196 kubelet[1799]: E0113 21:25:30.654109 1799 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests"
Jan 13 21:25:31.654742 kubelet[1799]: E0113 21:25:31.654696 1799 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests"
Jan 13 21:25:32.236943 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount3214652126.mount: Deactivated successfully.
Jan 13 21:25:32.655337 kubelet[1799]: E0113 21:25:32.655300 1799 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests"
Jan 13 21:25:33.656023 kubelet[1799]: E0113 21:25:33.655942 1799 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests"
Jan 13 21:25:33.743060 containerd[1474]: time="2025-01-13T21:25:33.742986538Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/nginx:latest\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Jan 13 21:25:33.744120 containerd[1474]: time="2025-01-13T21:25:33.744045595Z" level=info msg="stop pulling image ghcr.io/flatcar/nginx:latest: active requests=0, bytes read=71036018"
Jan 13 21:25:33.745606 containerd[1474]: time="2025-01-13T21:25:33.745470989Z" level=info msg="ImageCreate event name:\"sha256:29ef6eaebfc53650f3a4609edbf9d35e866f56b2c5e01d32d93439031b300f0b\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Jan 13 21:25:33.748146 containerd[1474]: time="2025-01-13T21:25:33.748066773Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/nginx@sha256:eca1d1ff18c7af45f86b7e0b572090f563a676ddca3da2ecff678390366335ad\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Jan 13 21:25:33.749422 containerd[1474]: time="2025-01-13T21:25:33.749300812Z" level=info msg="Pulled image \"ghcr.io/flatcar/nginx:latest\" with image id \"sha256:29ef6eaebfc53650f3a4609edbf9d35e866f56b2c5e01d32d93439031b300f0b\", repo tag \"ghcr.io/flatcar/nginx:latest\", repo digest \"ghcr.io/flatcar/nginx@sha256:eca1d1ff18c7af45f86b7e0b572090f563a676ddca3da2ecff678390366335ad\", size \"71035896\" in 4.638464691s"
Jan 13 21:25:33.749422 containerd[1474]: time="2025-01-13T21:25:33.749359034Z" level=info msg="PullImage \"ghcr.io/flatcar/nginx:latest\" returns image reference \"sha256:29ef6eaebfc53650f3a4609edbf9d35e866f56b2c5e01d32d93439031b300f0b\""
Jan 13 21:25:33.751840 containerd[1474]: time="2025-01-13T21:25:33.751788490Z" level=info msg="CreateContainer within sandbox \"26c4086853005fda4815121a788a0ec91326ccf56a622909f767eee17617c8fa\" for container &ContainerMetadata{Name:nginx,Attempt:0,}"
Jan 13 21:25:33.770672 containerd[1474]: time="2025-01-13T21:25:33.770604380Z" level=info msg="CreateContainer within sandbox \"26c4086853005fda4815121a788a0ec91326ccf56a622909f767eee17617c8fa\" for &ContainerMetadata{Name:nginx,Attempt:0,} returns container id \"e6413c838d7c84a12841f0f4a399eba2e8ee8e6c1b23f174a8bc8195c25797c8\""
Jan 13 21:25:33.771337 containerd[1474]: time="2025-01-13T21:25:33.771290426Z" level=info msg="StartContainer for \"e6413c838d7c84a12841f0f4a399eba2e8ee8e6c1b23f174a8bc8195c25797c8\""
Jan 13 21:25:33.803112 systemd[1]: Started cri-containerd-e6413c838d7c84a12841f0f4a399eba2e8ee8e6c1b23f174a8bc8195c25797c8.scope - libcontainer container e6413c838d7c84a12841f0f4a399eba2e8ee8e6c1b23f174a8bc8195c25797c8.
Jan 13 21:25:33.899823 containerd[1474]: time="2025-01-13T21:25:33.899748911Z" level=info msg="StartContainer for \"e6413c838d7c84a12841f0f4a399eba2e8ee8e6c1b23f174a8bc8195c25797c8\" returns successfully"
Jan 13 21:25:34.206105 kubelet[1799]: I0113 21:25:34.206033 1799 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="default/nginx-deployment-8587fbcb89-4kl5c" podStartSLOduration=5.56610115 podStartE2EDuration="10.206016055s" podCreationTimestamp="2025-01-13 21:25:24 +0000 UTC" firstStartedPulling="2025-01-13 21:25:29.110535907 +0000 UTC m=+21.996093200" lastFinishedPulling="2025-01-13 21:25:33.750450812 +0000 UTC m=+26.636008105" observedRunningTime="2025-01-13 21:25:34.205802279 +0000 UTC m=+27.091359572" watchObservedRunningTime="2025-01-13 21:25:34.206016055 +0000 UTC m=+27.091573349"
Jan 13 21:25:34.656810 kubelet[1799]: E0113 21:25:34.656727 1799 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests"
Jan 13 21:25:35.657512 kubelet[1799]: E0113 21:25:35.657460 1799 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests"
Jan 13 21:25:36.657947 kubelet[1799]: E0113 21:25:36.657872 1799 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests"
Jan 13 21:25:37.658531 kubelet[1799]: E0113 21:25:37.658489 1799 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests"
Jan 13 21:25:37.814350 update_engine[1451]: I20250113 21:25:37.814246 1451 update_attempter.cc:509] Updating boot flags...
Jan 13 21:25:37.859702 systemd[1]: Created slice kubepods-besteffort-pod9f996b92_1c11_4e08_aeff_d63444629f50.slice - libcontainer container kubepods-besteffort-pod9f996b92_1c11_4e08_aeff_d63444629f50.slice.
Jan 13 21:25:37.893089 kernel: BTRFS warning: duplicate device /dev/vda3 devid 1 generation 37 scanned by (udev-worker) (3013)
Jan 13 21:25:37.947960 kubelet[1799]: I0113 21:25:37.947765 1799 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"data\" (UniqueName: \"kubernetes.io/empty-dir/9f996b92-1c11-4e08-aeff-d63444629f50-data\") pod \"nfs-server-provisioner-0\" (UID: \"9f996b92-1c11-4e08-aeff-d63444629f50\") " pod="default/nfs-server-provisioner-0"
Jan 13 21:25:37.947960 kubelet[1799]: I0113 21:25:37.947823 1799 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-shtbd\" (UniqueName: \"kubernetes.io/projected/9f996b92-1c11-4e08-aeff-d63444629f50-kube-api-access-shtbd\") pod \"nfs-server-provisioner-0\" (UID: \"9f996b92-1c11-4e08-aeff-d63444629f50\") " pod="default/nfs-server-provisioner-0"
Jan 13 21:25:38.162982 containerd[1474]: time="2025-01-13T21:25:38.162924379Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:nfs-server-provisioner-0,Uid:9f996b92-1c11-4e08-aeff-d63444629f50,Namespace:default,Attempt:0,}"
Jan 13 21:25:38.447258 systemd-networkd[1406]: lxc92c364790dbb: Link UP
Jan 13 21:25:38.457941 kernel: eth0: renamed from tmp49abf
Jan 13 21:25:38.466256 systemd-networkd[1406]: lxc92c364790dbb: Gained carrier
Jan 13 21:25:38.659538 kubelet[1799]: E0113 21:25:38.659461 1799 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests"
Jan 13 21:25:39.660273 kubelet[1799]: E0113 21:25:39.660217 1799 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests"
Jan 13 21:25:39.842078 systemd-networkd[1406]: lxc92c364790dbb: Gained IPv6LL
Jan 13 21:25:40.383370 containerd[1474]: time="2025-01-13T21:25:40.383029218Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1
Jan 13 21:25:40.383370 containerd[1474]: time="2025-01-13T21:25:40.383152872Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1
Jan 13 21:25:40.383370 containerd[1474]: time="2025-01-13T21:25:40.383183500Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
Jan 13 21:25:40.383370 containerd[1474]: time="2025-01-13T21:25:40.383344235Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
Jan 13 21:25:40.404042 systemd[1]: Started cri-containerd-49abf8484f11e49bc1dec1eb0916a75045eaaaef66317bd4e1c599b7d0ab6870.scope - libcontainer container 49abf8484f11e49bc1dec1eb0916a75045eaaaef66317bd4e1c599b7d0ab6870.
Jan 13 21:25:40.414679 systemd-resolved[1336]: Failed to determine the local hostname and LLMNR/mDNS names, ignoring: No such device or address
Jan 13 21:25:40.437223 containerd[1474]: time="2025-01-13T21:25:40.437175236Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:nfs-server-provisioner-0,Uid:9f996b92-1c11-4e08-aeff-d63444629f50,Namespace:default,Attempt:0,} returns sandbox id \"49abf8484f11e49bc1dec1eb0916a75045eaaaef66317bd4e1c599b7d0ab6870\""
Jan 13 21:25:40.438667 containerd[1474]: time="2025-01-13T21:25:40.438627457Z" level=info msg="PullImage \"registry.k8s.io/sig-storage/nfs-provisioner:v4.0.8\""
Jan 13 21:25:40.660761 kubelet[1799]: E0113 21:25:40.660630 1799 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests"
Jan 13 21:25:41.661446 kubelet[1799]: E0113 21:25:41.661384 1799 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests"
Jan 13 21:25:42.662114 kubelet[1799]: E0113 21:25:42.662031 1799 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests"
Jan 13 21:25:43.662775 kubelet[1799]: E0113 21:25:43.662710 1799 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests"
Jan 13 21:25:44.663923 kubelet[1799]: E0113 21:25:44.663853 1799 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests"
Jan 13 21:25:45.382483 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount1663341393.mount: Deactivated successfully.
Jan 13 21:25:45.664132 kubelet[1799]: E0113 21:25:45.663974 1799 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests"
Jan 13 21:25:46.664542 kubelet[1799]: E0113 21:25:46.664470 1799 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests"
Jan 13 21:25:47.638182 kubelet[1799]: E0113 21:25:47.638105 1799 file.go:104] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests"
Jan 13 21:25:47.665748 kubelet[1799]: E0113 21:25:47.665671 1799 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests"
Jan 13 21:25:48.666583 kubelet[1799]: E0113 21:25:48.666530 1799 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests"
Jan 13 21:25:48.874706 containerd[1474]: time="2025-01-13T21:25:48.874650785Z" level=info msg="ImageCreate event name:\"registry.k8s.io/sig-storage/nfs-provisioner:v4.0.8\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Jan 13 21:25:48.876177 containerd[1474]: time="2025-01-13T21:25:48.876143460Z" level=info msg="stop pulling image registry.k8s.io/sig-storage/nfs-provisioner:v4.0.8: active requests=0, bytes read=91039406"
Jan 13 21:25:48.877599 containerd[1474]: time="2025-01-13T21:25:48.877530377Z" level=info msg="ImageCreate event name:\"sha256:fd0b16f70b66b72bcb2f91d556fa33eba02729c44ffc5f2c16130e7f9fbed3c4\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Jan 13 21:25:48.880106 containerd[1474]: time="2025-01-13T21:25:48.880068525Z" level=info msg="ImageCreate event name:\"registry.k8s.io/sig-storage/nfs-provisioner@sha256:c825f3d5e28bde099bd7a3daace28772d412c9157ad47fa752a9ad0baafc118d\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Jan 13 21:25:48.880967 containerd[1474]: time="2025-01-13T21:25:48.880938315Z" level=info msg="Pulled image \"registry.k8s.io/sig-storage/nfs-provisioner:v4.0.8\" with image id \"sha256:fd0b16f70b66b72bcb2f91d556fa33eba02729c44ffc5f2c16130e7f9fbed3c4\", repo tag \"registry.k8s.io/sig-storage/nfs-provisioner:v4.0.8\", repo digest \"registry.k8s.io/sig-storage/nfs-provisioner@sha256:c825f3d5e28bde099bd7a3daace28772d412c9157ad47fa752a9ad0baafc118d\", size \"91036984\" in 8.442281623s"
Jan 13 21:25:48.881010 containerd[1474]: time="2025-01-13T21:25:48.880969224Z" level=info msg="PullImage \"registry.k8s.io/sig-storage/nfs-provisioner:v4.0.8\" returns image reference \"sha256:fd0b16f70b66b72bcb2f91d556fa33eba02729c44ffc5f2c16130e7f9fbed3c4\""
Jan 13 21:25:48.883172 containerd[1474]: time="2025-01-13T21:25:48.883150078Z" level=info msg="CreateContainer within sandbox \"49abf8484f11e49bc1dec1eb0916a75045eaaaef66317bd4e1c599b7d0ab6870\" for container &ContainerMetadata{Name:nfs-server-provisioner,Attempt:0,}"
Jan 13 21:25:48.900227 containerd[1474]: time="2025-01-13T21:25:48.900190742Z" level=info msg="CreateContainer within sandbox \"49abf8484f11e49bc1dec1eb0916a75045eaaaef66317bd4e1c599b7d0ab6870\" for &ContainerMetadata{Name:nfs-server-provisioner,Attempt:0,} returns container id \"a6c5a69e3bcf1a7c8f58e54434e31a03b8d1ab3480cd3f2c9523139243cfbc86\""
Jan 13 21:25:48.900646 containerd[1474]: time="2025-01-13T21:25:48.900613909Z" level=info msg="StartContainer for \"a6c5a69e3bcf1a7c8f58e54434e31a03b8d1ab3480cd3f2c9523139243cfbc86\""
Jan 13 21:25:48.989221 systemd[1]: Started cri-containerd-a6c5a69e3bcf1a7c8f58e54434e31a03b8d1ab3480cd3f2c9523139243cfbc86.scope - libcontainer container a6c5a69e3bcf1a7c8f58e54434e31a03b8d1ab3480cd3f2c9523139243cfbc86.
Jan 13 21:25:49.297311 containerd[1474]: time="2025-01-13T21:25:49.297178396Z" level=info msg="StartContainer for \"a6c5a69e3bcf1a7c8f58e54434e31a03b8d1ab3480cd3f2c9523139243cfbc86\" returns successfully"
Jan 13 21:25:49.310034 kubelet[1799]: I0113 21:25:49.309960 1799 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="default/nfs-server-provisioner-0" podStartSLOduration=3.866501032 podStartE2EDuration="12.309941002s" podCreationTimestamp="2025-01-13 21:25:37 +0000 UTC" firstStartedPulling="2025-01-13 21:25:40.438415315 +0000 UTC m=+33.323972608" lastFinishedPulling="2025-01-13 21:25:48.881855285 +0000 UTC m=+41.767412578" observedRunningTime="2025-01-13 21:25:49.309680802 +0000 UTC m=+42.195238095" watchObservedRunningTime="2025-01-13 21:25:49.309941002 +0000 UTC m=+42.195498295"
Jan 13 21:25:49.667627 kubelet[1799]: E0113 21:25:49.667543 1799 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests"
Jan 13 21:25:50.668532 kubelet[1799]: E0113 21:25:50.668479 1799 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests"
Jan 13 21:25:51.669475 kubelet[1799]: E0113 21:25:51.669399 1799 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests"
Jan 13 21:25:52.669629 kubelet[1799]: E0113 21:25:52.669560 1799 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests"
Jan 13 21:25:53.670137 kubelet[1799]: E0113 21:25:53.670080 1799 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests"
Jan 13 21:25:54.670433 kubelet[1799]: E0113 21:25:54.670358 1799 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests"
Jan 13 21:25:55.671437 kubelet[1799]: E0113 21:25:55.671376 1799 file_linux.go:61] "Unable to read
config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Jan 13 21:25:56.672570 kubelet[1799]: E0113 21:25:56.672523 1799 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Jan 13 21:25:57.673211 kubelet[1799]: E0113 21:25:57.673144 1799 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Jan 13 21:25:58.673971 kubelet[1799]: E0113 21:25:58.673891 1799 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Jan 13 21:25:59.674961 kubelet[1799]: E0113 21:25:59.674911 1799 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Jan 13 21:26:00.213316 systemd[1]: Created slice kubepods-besteffort-podcf98d039_fa78_46be_b74d_facdadf8b67e.slice - libcontainer container kubepods-besteffort-podcf98d039_fa78_46be_b74d_facdadf8b67e.slice. Jan 13 21:26:00.343492 kubelet[1799]: I0113 21:26:00.343433 1799 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"pvc-992d0b89-90a7-4265-8bf7-f93bf4e0297d\" (UniqueName: \"kubernetes.io/nfs/cf98d039-fa78-46be-b74d-facdadf8b67e-pvc-992d0b89-90a7-4265-8bf7-f93bf4e0297d\") pod \"test-pod-1\" (UID: \"cf98d039-fa78-46be-b74d-facdadf8b67e\") " pod="default/test-pod-1" Jan 13 21:26:00.343492 kubelet[1799]: I0113 21:26:00.343480 1799 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-2thwb\" (UniqueName: \"kubernetes.io/projected/cf98d039-fa78-46be-b74d-facdadf8b67e-kube-api-access-2thwb\") pod \"test-pod-1\" (UID: \"cf98d039-fa78-46be-b74d-facdadf8b67e\") " pod="default/test-pod-1" Jan 13 21:26:00.478939 kernel: FS-Cache: Loaded Jan 13 21:26:00.553544 kernel: RPC: Registered named UNIX socket transport module. 
Jan 13 21:26:00.553711 kernel: RPC: Registered udp transport module. Jan 13 21:26:00.553734 kernel: RPC: Registered tcp transport module. Jan 13 21:26:00.553754 kernel: RPC: Registered tcp-with-tls transport module. Jan 13 21:26:00.554262 kernel: RPC: Registered tcp NFSv4.1 backchannel transport module. Jan 13 21:26:00.675668 kubelet[1799]: E0113 21:26:00.675569 1799 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Jan 13 21:26:00.919420 kernel: NFS: Registering the id_resolver key type Jan 13 21:26:00.919561 kernel: Key type id_resolver registered Jan 13 21:26:00.919588 kernel: Key type id_legacy registered Jan 13 21:26:00.994864 nfsidmap[3206]: nss_getpwnam: name 'root@nfs-server-provisioner.default.svc.cluster.local' does not map into domain 'localdomain' Jan 13 21:26:01.001388 nfsidmap[3209]: nss_name_to_gid: name 'root@nfs-server-provisioner.default.svc.cluster.local' does not map into domain 'localdomain' Jan 13 21:26:01.117371 containerd[1474]: time="2025-01-13T21:26:01.117322459Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:test-pod-1,Uid:cf98d039-fa78-46be-b74d-facdadf8b67e,Namespace:default,Attempt:0,}" Jan 13 21:26:01.221869 systemd-networkd[1406]: lxc115773333070: Link UP Jan 13 21:26:01.230927 kernel: eth0: renamed from tmp0a3c1 Jan 13 21:26:01.236556 systemd-networkd[1406]: lxc115773333070: Gained carrier Jan 13 21:26:01.430619 containerd[1474]: time="2025-01-13T21:26:01.430437367Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Jan 13 21:26:01.430756 containerd[1474]: time="2025-01-13T21:26:01.430597038Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Jan 13 21:26:01.430756 containerd[1474]: time="2025-01-13T21:26:01.430632755Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." 
runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Jan 13 21:26:01.430862 containerd[1474]: time="2025-01-13T21:26:01.430731701Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Jan 13 21:26:01.449068 systemd[1]: Started cri-containerd-0a3c187a76d0dd6f33c0ff433724c2d1fce1b42bb2830192322e27b6f344a744.scope - libcontainer container 0a3c187a76d0dd6f33c0ff433724c2d1fce1b42bb2830192322e27b6f344a744. Jan 13 21:26:01.461622 systemd-resolved[1336]: Failed to determine the local hostname and LLMNR/mDNS names, ignoring: No such device or address Jan 13 21:26:01.486474 containerd[1474]: time="2025-01-13T21:26:01.486302248Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:test-pod-1,Uid:cf98d039-fa78-46be-b74d-facdadf8b67e,Namespace:default,Attempt:0,} returns sandbox id \"0a3c187a76d0dd6f33c0ff433724c2d1fce1b42bb2830192322e27b6f344a744\"" Jan 13 21:26:01.488329 containerd[1474]: time="2025-01-13T21:26:01.488287091Z" level=info msg="PullImage \"ghcr.io/flatcar/nginx:latest\"" Jan 13 21:26:01.676009 kubelet[1799]: E0113 21:26:01.675940 1799 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Jan 13 21:26:01.854609 containerd[1474]: time="2025-01-13T21:26:01.854570068Z" level=info msg="ImageUpdate event name:\"ghcr.io/flatcar/nginx:latest\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jan 13 21:26:01.855430 containerd[1474]: time="2025-01-13T21:26:01.855387054Z" level=info msg="stop pulling image ghcr.io/flatcar/nginx:latest: active requests=0, bytes read=61" Jan 13 21:26:01.858002 containerd[1474]: time="2025-01-13T21:26:01.857969990Z" level=info msg="Pulled image \"ghcr.io/flatcar/nginx:latest\" with image id \"sha256:29ef6eaebfc53650f3a4609edbf9d35e866f56b2c5e01d32d93439031b300f0b\", repo tag \"ghcr.io/flatcar/nginx:latest\", repo digest 
\"ghcr.io/flatcar/nginx@sha256:eca1d1ff18c7af45f86b7e0b572090f563a676ddca3da2ecff678390366335ad\", size \"71035896\" in 369.638776ms" Jan 13 21:26:01.859530 containerd[1474]: time="2025-01-13T21:26:01.858034381Z" level=info msg="PullImage \"ghcr.io/flatcar/nginx:latest\" returns image reference \"sha256:29ef6eaebfc53650f3a4609edbf9d35e866f56b2c5e01d32d93439031b300f0b\"" Jan 13 21:26:01.861168 containerd[1474]: time="2025-01-13T21:26:01.861141943Z" level=info msg="CreateContainer within sandbox \"0a3c187a76d0dd6f33c0ff433724c2d1fce1b42bb2830192322e27b6f344a744\" for container &ContainerMetadata{Name:test,Attempt:0,}" Jan 13 21:26:01.877443 containerd[1474]: time="2025-01-13T21:26:01.877405180Z" level=info msg="CreateContainer within sandbox \"0a3c187a76d0dd6f33c0ff433724c2d1fce1b42bb2830192322e27b6f344a744\" for &ContainerMetadata{Name:test,Attempt:0,} returns container id \"ecbe998e3e0604edba3be23edae1aeac6e1bab965bd01a576735f07e1b0fdc44\"" Jan 13 21:26:01.877839 containerd[1474]: time="2025-01-13T21:26:01.877815552Z" level=info msg="StartContainer for \"ecbe998e3e0604edba3be23edae1aeac6e1bab965bd01a576735f07e1b0fdc44\"" Jan 13 21:26:01.910077 systemd[1]: Started cri-containerd-ecbe998e3e0604edba3be23edae1aeac6e1bab965bd01a576735f07e1b0fdc44.scope - libcontainer container ecbe998e3e0604edba3be23edae1aeac6e1bab965bd01a576735f07e1b0fdc44. 
Jan 13 21:26:01.935036 containerd[1474]: time="2025-01-13T21:26:01.934999533Z" level=info msg="StartContainer for \"ecbe998e3e0604edba3be23edae1aeac6e1bab965bd01a576735f07e1b0fdc44\" returns successfully" Jan 13 21:26:02.331281 kubelet[1799]: I0113 21:26:02.331135 1799 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="default/test-pod-1" podStartSLOduration=23.958910867 podStartE2EDuration="24.331119805s" podCreationTimestamp="2025-01-13 21:25:38 +0000 UTC" firstStartedPulling="2025-01-13 21:26:01.487786029 +0000 UTC m=+54.373343322" lastFinishedPulling="2025-01-13 21:26:01.859994957 +0000 UTC m=+54.745552260" observedRunningTime="2025-01-13 21:26:02.33073418 +0000 UTC m=+55.216291493" watchObservedRunningTime="2025-01-13 21:26:02.331119805 +0000 UTC m=+55.216677099" Jan 13 21:26:02.676611 kubelet[1799]: E0113 21:26:02.676550 1799 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Jan 13 21:26:02.882093 systemd-networkd[1406]: lxc115773333070: Gained IPv6LL Jan 13 21:26:03.677445 kubelet[1799]: E0113 21:26:03.677381 1799 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Jan 13 21:26:04.678249 kubelet[1799]: E0113 21:26:04.678187 1799 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Jan 13 21:26:05.679335 kubelet[1799]: E0113 21:26:05.679259 1799 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Jan 13 21:26:06.137551 containerd[1474]: time="2025-01-13T21:26:06.137486446Z" level=error msg="failed to reload cni configuration after receiving fs change event(REMOVE \"/etc/cni/net.d/05-cilium.conf\")" error="cni config load failed: no network config found in /etc/cni/net.d: cni plugin not initialized: failed to load cni config" Jan 13 21:26:06.145049 containerd[1474]: 
time="2025-01-13T21:26:06.145014710Z" level=info msg="StopContainer for \"920a0efcc61d7d35f07df5ae70870192ce23ebd46e6d161f1ac2390f15f87c00\" with timeout 2 (s)" Jan 13 21:26:06.145239 containerd[1474]: time="2025-01-13T21:26:06.145221077Z" level=info msg="Stop container \"920a0efcc61d7d35f07df5ae70870192ce23ebd46e6d161f1ac2390f15f87c00\" with signal terminated" Jan 13 21:26:06.152552 systemd-networkd[1406]: lxc_health: Link DOWN Jan 13 21:26:06.152570 systemd-networkd[1406]: lxc_health: Lost carrier Jan 13 21:26:06.181381 systemd[1]: cri-containerd-920a0efcc61d7d35f07df5ae70870192ce23ebd46e6d161f1ac2390f15f87c00.scope: Deactivated successfully. Jan 13 21:26:06.181824 systemd[1]: cri-containerd-920a0efcc61d7d35f07df5ae70870192ce23ebd46e6d161f1ac2390f15f87c00.scope: Consumed 7.052s CPU time. Jan 13 21:26:06.202236 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-920a0efcc61d7d35f07df5ae70870192ce23ebd46e6d161f1ac2390f15f87c00-rootfs.mount: Deactivated successfully. Jan 13 21:26:06.213133 containerd[1474]: time="2025-01-13T21:26:06.213058389Z" level=info msg="shim disconnected" id=920a0efcc61d7d35f07df5ae70870192ce23ebd46e6d161f1ac2390f15f87c00 namespace=k8s.io Jan 13 21:26:06.213133 containerd[1474]: time="2025-01-13T21:26:06.213125546Z" level=warning msg="cleaning up after shim disconnected" id=920a0efcc61d7d35f07df5ae70870192ce23ebd46e6d161f1ac2390f15f87c00 namespace=k8s.io Jan 13 21:26:06.213133 containerd[1474]: time="2025-01-13T21:26:06.213137428Z" level=info msg="cleaning up dead shim" namespace=k8s.io Jan 13 21:26:06.234475 containerd[1474]: time="2025-01-13T21:26:06.234418992Z" level=info msg="StopContainer for \"920a0efcc61d7d35f07df5ae70870192ce23ebd46e6d161f1ac2390f15f87c00\" returns successfully" Jan 13 21:26:06.235219 containerd[1474]: time="2025-01-13T21:26:06.235179601Z" level=info msg="StopPodSandbox for \"56fe0ce02cc88e5eb5a49dffa080c101e96f14f1c56ad515071b87974e8d4794\"" Jan 13 21:26:06.235219 containerd[1474]: 
time="2025-01-13T21:26:06.235226529Z" level=info msg="Container to stop \"7035fe7b1a914cb1fbbb4e1093e723e30bbf5d13c4340003a51fccad1184d923\" must be in running or unknown state, current state \"CONTAINER_EXITED\"" Jan 13 21:26:06.235399 containerd[1474]: time="2025-01-13T21:26:06.235239904Z" level=info msg="Container to stop \"27e0e528b460e220ac20147803aafa2e5841de51ab0cfaae9f8c0f58dab1fbcc\" must be in running or unknown state, current state \"CONTAINER_EXITED\"" Jan 13 21:26:06.235399 containerd[1474]: time="2025-01-13T21:26:06.235251045Z" level=info msg="Container to stop \"ffb2f7c10884950bdb2aa3b31a3d16b86105869696c1643406527bdb9ae58b83\" must be in running or unknown state, current state \"CONTAINER_EXITED\"" Jan 13 21:26:06.235399 containerd[1474]: time="2025-01-13T21:26:06.235261134Z" level=info msg="Container to stop \"920a0efcc61d7d35f07df5ae70870192ce23ebd46e6d161f1ac2390f15f87c00\" must be in running or unknown state, current state \"CONTAINER_EXITED\"" Jan 13 21:26:06.235399 containerd[1474]: time="2025-01-13T21:26:06.235271133Z" level=info msg="Container to stop \"42cf9c2cf7c323ad619db58b0c7db37e901ba1a261943e8bf2541069a4e4da20\" must be in running or unknown state, current state \"CONTAINER_EXITED\"" Jan 13 21:26:06.237586 systemd[1]: run-containerd-io.containerd.grpc.v1.cri-sandboxes-56fe0ce02cc88e5eb5a49dffa080c101e96f14f1c56ad515071b87974e8d4794-shm.mount: Deactivated successfully. Jan 13 21:26:06.242353 systemd[1]: cri-containerd-56fe0ce02cc88e5eb5a49dffa080c101e96f14f1c56ad515071b87974e8d4794.scope: Deactivated successfully. Jan 13 21:26:06.261291 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-56fe0ce02cc88e5eb5a49dffa080c101e96f14f1c56ad515071b87974e8d4794-rootfs.mount: Deactivated successfully. 
Jan 13 21:26:06.266150 containerd[1474]: time="2025-01-13T21:26:06.266051452Z" level=info msg="shim disconnected" id=56fe0ce02cc88e5eb5a49dffa080c101e96f14f1c56ad515071b87974e8d4794 namespace=k8s.io Jan 13 21:26:06.266150 containerd[1474]: time="2025-01-13T21:26:06.266119270Z" level=warning msg="cleaning up after shim disconnected" id=56fe0ce02cc88e5eb5a49dffa080c101e96f14f1c56ad515071b87974e8d4794 namespace=k8s.io Jan 13 21:26:06.266150 containerd[1474]: time="2025-01-13T21:26:06.266130651Z" level=info msg="cleaning up dead shim" namespace=k8s.io Jan 13 21:26:06.280982 containerd[1474]: time="2025-01-13T21:26:06.280860476Z" level=info msg="TearDown network for sandbox \"56fe0ce02cc88e5eb5a49dffa080c101e96f14f1c56ad515071b87974e8d4794\" successfully" Jan 13 21:26:06.280982 containerd[1474]: time="2025-01-13T21:26:06.280912814Z" level=info msg="StopPodSandbox for \"56fe0ce02cc88e5eb5a49dffa080c101e96f14f1c56ad515071b87974e8d4794\" returns successfully" Jan 13 21:26:06.331627 kubelet[1799]: I0113 21:26:06.331588 1799 scope.go:117] "RemoveContainer" containerID="920a0efcc61d7d35f07df5ae70870192ce23ebd46e6d161f1ac2390f15f87c00" Jan 13 21:26:06.332987 containerd[1474]: time="2025-01-13T21:26:06.332944660Z" level=info msg="RemoveContainer for \"920a0efcc61d7d35f07df5ae70870192ce23ebd46e6d161f1ac2390f15f87c00\"" Jan 13 21:26:06.338016 containerd[1474]: time="2025-01-13T21:26:06.337973367Z" level=info msg="RemoveContainer for \"920a0efcc61d7d35f07df5ae70870192ce23ebd46e6d161f1ac2390f15f87c00\" returns successfully" Jan 13 21:26:06.338283 kubelet[1799]: I0113 21:26:06.338245 1799 scope.go:117] "RemoveContainer" containerID="ffb2f7c10884950bdb2aa3b31a3d16b86105869696c1643406527bdb9ae58b83" Jan 13 21:26:06.339321 containerd[1474]: time="2025-01-13T21:26:06.339296213Z" level=info msg="RemoveContainer for \"ffb2f7c10884950bdb2aa3b31a3d16b86105869696c1643406527bdb9ae58b83\"" Jan 13 21:26:06.342697 containerd[1474]: time="2025-01-13T21:26:06.342650144Z" level=info 
msg="RemoveContainer for \"ffb2f7c10884950bdb2aa3b31a3d16b86105869696c1643406527bdb9ae58b83\" returns successfully" Jan 13 21:26:06.342846 kubelet[1799]: I0113 21:26:06.342817 1799 scope.go:117] "RemoveContainer" containerID="42cf9c2cf7c323ad619db58b0c7db37e901ba1a261943e8bf2541069a4e4da20" Jan 13 21:26:06.343855 containerd[1474]: time="2025-01-13T21:26:06.343827456Z" level=info msg="RemoveContainer for \"42cf9c2cf7c323ad619db58b0c7db37e901ba1a261943e8bf2541069a4e4da20\"" Jan 13 21:26:06.347188 containerd[1474]: time="2025-01-13T21:26:06.347150450Z" level=info msg="RemoveContainer for \"42cf9c2cf7c323ad619db58b0c7db37e901ba1a261943e8bf2541069a4e4da20\" returns successfully" Jan 13 21:26:06.347349 kubelet[1799]: I0113 21:26:06.347311 1799 scope.go:117] "RemoveContainer" containerID="27e0e528b460e220ac20147803aafa2e5841de51ab0cfaae9f8c0f58dab1fbcc" Jan 13 21:26:06.348402 containerd[1474]: time="2025-01-13T21:26:06.348231740Z" level=info msg="RemoveContainer for \"27e0e528b460e220ac20147803aafa2e5841de51ab0cfaae9f8c0f58dab1fbcc\"" Jan 13 21:26:06.351769 containerd[1474]: time="2025-01-13T21:26:06.351725455Z" level=info msg="RemoveContainer for \"27e0e528b460e220ac20147803aafa2e5841de51ab0cfaae9f8c0f58dab1fbcc\" returns successfully" Jan 13 21:26:06.351966 kubelet[1799]: I0113 21:26:06.351933 1799 scope.go:117] "RemoveContainer" containerID="7035fe7b1a914cb1fbbb4e1093e723e30bbf5d13c4340003a51fccad1184d923" Jan 13 21:26:06.352855 containerd[1474]: time="2025-01-13T21:26:06.352831352Z" level=info msg="RemoveContainer for \"7035fe7b1a914cb1fbbb4e1093e723e30bbf5d13c4340003a51fccad1184d923\"" Jan 13 21:26:06.355946 containerd[1474]: time="2025-01-13T21:26:06.355892233Z" level=info msg="RemoveContainer for \"7035fe7b1a914cb1fbbb4e1093e723e30bbf5d13c4340003a51fccad1184d923\" returns successfully" Jan 13 21:26:06.356076 kubelet[1799]: I0113 21:26:06.356054 1799 scope.go:117] "RemoveContainer" containerID="920a0efcc61d7d35f07df5ae70870192ce23ebd46e6d161f1ac2390f15f87c00" Jan 13 
21:26:06.356268 containerd[1474]: time="2025-01-13T21:26:06.356213036Z" level=error msg="ContainerStatus for \"920a0efcc61d7d35f07df5ae70870192ce23ebd46e6d161f1ac2390f15f87c00\" failed" error="rpc error: code = NotFound desc = an error occurred when try to find container \"920a0efcc61d7d35f07df5ae70870192ce23ebd46e6d161f1ac2390f15f87c00\": not found" Jan 13 21:26:06.356343 kubelet[1799]: E0113 21:26:06.356308 1799 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = an error occurred when try to find container \"920a0efcc61d7d35f07df5ae70870192ce23ebd46e6d161f1ac2390f15f87c00\": not found" containerID="920a0efcc61d7d35f07df5ae70870192ce23ebd46e6d161f1ac2390f15f87c00" Jan 13 21:26:06.356412 kubelet[1799]: I0113 21:26:06.356335 1799 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"containerd","ID":"920a0efcc61d7d35f07df5ae70870192ce23ebd46e6d161f1ac2390f15f87c00"} err="failed to get container status \"920a0efcc61d7d35f07df5ae70870192ce23ebd46e6d161f1ac2390f15f87c00\": rpc error: code = NotFound desc = an error occurred when try to find container \"920a0efcc61d7d35f07df5ae70870192ce23ebd46e6d161f1ac2390f15f87c00\": not found" Jan 13 21:26:06.356412 kubelet[1799]: I0113 21:26:06.356405 1799 scope.go:117] "RemoveContainer" containerID="ffb2f7c10884950bdb2aa3b31a3d16b86105869696c1643406527bdb9ae58b83" Jan 13 21:26:06.356590 containerd[1474]: time="2025-01-13T21:26:06.356556311Z" level=error msg="ContainerStatus for \"ffb2f7c10884950bdb2aa3b31a3d16b86105869696c1643406527bdb9ae58b83\" failed" error="rpc error: code = NotFound desc = an error occurred when try to find container \"ffb2f7c10884950bdb2aa3b31a3d16b86105869696c1643406527bdb9ae58b83\": not found" Jan 13 21:26:06.356751 kubelet[1799]: E0113 21:26:06.356719 1799 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = an error occurred when try to find container 
\"ffb2f7c10884950bdb2aa3b31a3d16b86105869696c1643406527bdb9ae58b83\": not found" containerID="ffb2f7c10884950bdb2aa3b31a3d16b86105869696c1643406527bdb9ae58b83" Jan 13 21:26:06.356800 kubelet[1799]: I0113 21:26:06.356754 1799 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"containerd","ID":"ffb2f7c10884950bdb2aa3b31a3d16b86105869696c1643406527bdb9ae58b83"} err="failed to get container status \"ffb2f7c10884950bdb2aa3b31a3d16b86105869696c1643406527bdb9ae58b83\": rpc error: code = NotFound desc = an error occurred when try to find container \"ffb2f7c10884950bdb2aa3b31a3d16b86105869696c1643406527bdb9ae58b83\": not found" Jan 13 21:26:06.356800 kubelet[1799]: I0113 21:26:06.356780 1799 scope.go:117] "RemoveContainer" containerID="42cf9c2cf7c323ad619db58b0c7db37e901ba1a261943e8bf2541069a4e4da20" Jan 13 21:26:06.356971 containerd[1474]: time="2025-01-13T21:26:06.356939211Z" level=error msg="ContainerStatus for \"42cf9c2cf7c323ad619db58b0c7db37e901ba1a261943e8bf2541069a4e4da20\" failed" error="rpc error: code = NotFound desc = an error occurred when try to find container \"42cf9c2cf7c323ad619db58b0c7db37e901ba1a261943e8bf2541069a4e4da20\": not found" Jan 13 21:26:06.357089 kubelet[1799]: E0113 21:26:06.357059 1799 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = an error occurred when try to find container \"42cf9c2cf7c323ad619db58b0c7db37e901ba1a261943e8bf2541069a4e4da20\": not found" containerID="42cf9c2cf7c323ad619db58b0c7db37e901ba1a261943e8bf2541069a4e4da20" Jan 13 21:26:06.357089 kubelet[1799]: I0113 21:26:06.357082 1799 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"containerd","ID":"42cf9c2cf7c323ad619db58b0c7db37e901ba1a261943e8bf2541069a4e4da20"} err="failed to get container status \"42cf9c2cf7c323ad619db58b0c7db37e901ba1a261943e8bf2541069a4e4da20\": rpc error: code = NotFound desc = an error occurred when try to find container 
\"42cf9c2cf7c323ad619db58b0c7db37e901ba1a261943e8bf2541069a4e4da20\": not found" Jan 13 21:26:06.357169 kubelet[1799]: I0113 21:26:06.357097 1799 scope.go:117] "RemoveContainer" containerID="27e0e528b460e220ac20147803aafa2e5841de51ab0cfaae9f8c0f58dab1fbcc" Jan 13 21:26:06.357285 containerd[1474]: time="2025-01-13T21:26:06.357252770Z" level=error msg="ContainerStatus for \"27e0e528b460e220ac20147803aafa2e5841de51ab0cfaae9f8c0f58dab1fbcc\" failed" error="rpc error: code = NotFound desc = an error occurred when try to find container \"27e0e528b460e220ac20147803aafa2e5841de51ab0cfaae9f8c0f58dab1fbcc\": not found" Jan 13 21:26:06.357376 kubelet[1799]: E0113 21:26:06.357350 1799 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = an error occurred when try to find container \"27e0e528b460e220ac20147803aafa2e5841de51ab0cfaae9f8c0f58dab1fbcc\": not found" containerID="27e0e528b460e220ac20147803aafa2e5841de51ab0cfaae9f8c0f58dab1fbcc" Jan 13 21:26:06.357412 kubelet[1799]: I0113 21:26:06.357374 1799 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"containerd","ID":"27e0e528b460e220ac20147803aafa2e5841de51ab0cfaae9f8c0f58dab1fbcc"} err="failed to get container status \"27e0e528b460e220ac20147803aafa2e5841de51ab0cfaae9f8c0f58dab1fbcc\": rpc error: code = NotFound desc = an error occurred when try to find container \"27e0e528b460e220ac20147803aafa2e5841de51ab0cfaae9f8c0f58dab1fbcc\": not found" Jan 13 21:26:06.357412 kubelet[1799]: I0113 21:26:06.357388 1799 scope.go:117] "RemoveContainer" containerID="7035fe7b1a914cb1fbbb4e1093e723e30bbf5d13c4340003a51fccad1184d923" Jan 13 21:26:06.357544 containerd[1474]: time="2025-01-13T21:26:06.357513059Z" level=error msg="ContainerStatus for \"7035fe7b1a914cb1fbbb4e1093e723e30bbf5d13c4340003a51fccad1184d923\" failed" error="rpc error: code = NotFound desc = an error occurred when try to find container 
\"7035fe7b1a914cb1fbbb4e1093e723e30bbf5d13c4340003a51fccad1184d923\": not found" Jan 13 21:26:06.357630 kubelet[1799]: E0113 21:26:06.357607 1799 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = an error occurred when try to find container \"7035fe7b1a914cb1fbbb4e1093e723e30bbf5d13c4340003a51fccad1184d923\": not found" containerID="7035fe7b1a914cb1fbbb4e1093e723e30bbf5d13c4340003a51fccad1184d923" Jan 13 21:26:06.357630 kubelet[1799]: I0113 21:26:06.357625 1799 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"containerd","ID":"7035fe7b1a914cb1fbbb4e1093e723e30bbf5d13c4340003a51fccad1184d923"} err="failed to get container status \"7035fe7b1a914cb1fbbb4e1093e723e30bbf5d13c4340003a51fccad1184d923\": rpc error: code = NotFound desc = an error occurred when try to find container \"7035fe7b1a914cb1fbbb4e1093e723e30bbf5d13c4340003a51fccad1184d923\": not found" Jan 13 21:26:06.482208 kubelet[1799]: I0113 21:26:06.482054 1799 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"hostproc\" (UniqueName: \"kubernetes.io/host-path/a2f83576-9690-40b2-bacf-895f61519e6a-hostproc\") pod \"a2f83576-9690-40b2-bacf-895f61519e6a\" (UID: \"a2f83576-9690-40b2-bacf-895f61519e6a\") " Jan 13 21:26:06.482208 kubelet[1799]: I0113 21:26:06.482129 1799 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"cilium-config-path\" (UniqueName: \"kubernetes.io/configmap/a2f83576-9690-40b2-bacf-895f61519e6a-cilium-config-path\") pod \"a2f83576-9690-40b2-bacf-895f61519e6a\" (UID: \"a2f83576-9690-40b2-bacf-895f61519e6a\") " Jan 13 21:26:06.482208 kubelet[1799]: I0113 21:26:06.482153 1799 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"cni-path\" (UniqueName: \"kubernetes.io/host-path/a2f83576-9690-40b2-bacf-895f61519e6a-cni-path\") pod \"a2f83576-9690-40b2-bacf-895f61519e6a\" (UID: \"a2f83576-9690-40b2-bacf-895f61519e6a\") " Jan 13 
21:26:06.482208 kubelet[1799]: I0113 21:26:06.482171 1799 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"xtables-lock\" (UniqueName: \"kubernetes.io/host-path/a2f83576-9690-40b2-bacf-895f61519e6a-xtables-lock\") pod \"a2f83576-9690-40b2-bacf-895f61519e6a\" (UID: \"a2f83576-9690-40b2-bacf-895f61519e6a\") " Jan 13 21:26:06.482208 kubelet[1799]: I0113 21:26:06.482190 1799 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"etc-cni-netd\" (UniqueName: \"kubernetes.io/host-path/a2f83576-9690-40b2-bacf-895f61519e6a-etc-cni-netd\") pod \"a2f83576-9690-40b2-bacf-895f61519e6a\" (UID: \"a2f83576-9690-40b2-bacf-895f61519e6a\") " Jan 13 21:26:06.482208 kubelet[1799]: I0113 21:26:06.482207 1799 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/a2f83576-9690-40b2-bacf-895f61519e6a-lib-modules\") pod \"a2f83576-9690-40b2-bacf-895f61519e6a\" (UID: \"a2f83576-9690-40b2-bacf-895f61519e6a\") " Jan 13 21:26:06.482457 kubelet[1799]: I0113 21:26:06.482229 1799 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-9bf9m\" (UniqueName: \"kubernetes.io/projected/a2f83576-9690-40b2-bacf-895f61519e6a-kube-api-access-9bf9m\") pod \"a2f83576-9690-40b2-bacf-895f61519e6a\" (UID: \"a2f83576-9690-40b2-bacf-895f61519e6a\") " Jan 13 21:26:06.482457 kubelet[1799]: I0113 21:26:06.482215 1799 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/a2f83576-9690-40b2-bacf-895f61519e6a-hostproc" (OuterVolumeSpecName: "hostproc") pod "a2f83576-9690-40b2-bacf-895f61519e6a" (UID: "a2f83576-9690-40b2-bacf-895f61519e6a"). InnerVolumeSpecName "hostproc". 
PluginName "kubernetes.io/host-path", VolumeGidValue "" Jan 13 21:26:06.482457 kubelet[1799]: I0113 21:26:06.482253 1799 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"bpf-maps\" (UniqueName: \"kubernetes.io/host-path/a2f83576-9690-40b2-bacf-895f61519e6a-bpf-maps\") pod \"a2f83576-9690-40b2-bacf-895f61519e6a\" (UID: \"a2f83576-9690-40b2-bacf-895f61519e6a\") " Jan 13 21:26:06.482457 kubelet[1799]: I0113 21:26:06.482290 1799 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/a2f83576-9690-40b2-bacf-895f61519e6a-bpf-maps" (OuterVolumeSpecName: "bpf-maps") pod "a2f83576-9690-40b2-bacf-895f61519e6a" (UID: "a2f83576-9690-40b2-bacf-895f61519e6a"). InnerVolumeSpecName "bpf-maps". PluginName "kubernetes.io/host-path", VolumeGidValue "" Jan 13 21:26:06.482457 kubelet[1799]: I0113 21:26:06.482332 1799 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/a2f83576-9690-40b2-bacf-895f61519e6a-cni-path" (OuterVolumeSpecName: "cni-path") pod "a2f83576-9690-40b2-bacf-895f61519e6a" (UID: "a2f83576-9690-40b2-bacf-895f61519e6a"). InnerVolumeSpecName "cni-path". PluginName "kubernetes.io/host-path", VolumeGidValue "" Jan 13 21:26:06.482578 kubelet[1799]: I0113 21:26:06.482338 1799 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"clustermesh-secrets\" (UniqueName: \"kubernetes.io/secret/a2f83576-9690-40b2-bacf-895f61519e6a-clustermesh-secrets\") pod \"a2f83576-9690-40b2-bacf-895f61519e6a\" (UID: \"a2f83576-9690-40b2-bacf-895f61519e6a\") " Jan 13 21:26:06.482578 kubelet[1799]: I0113 21:26:06.482354 1799 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/a2f83576-9690-40b2-bacf-895f61519e6a-xtables-lock" (OuterVolumeSpecName: "xtables-lock") pod "a2f83576-9690-40b2-bacf-895f61519e6a" (UID: "a2f83576-9690-40b2-bacf-895f61519e6a"). InnerVolumeSpecName "xtables-lock". 
PluginName "kubernetes.io/host-path", VolumeGidValue "" Jan 13 21:26:06.482578 kubelet[1799]: I0113 21:26:06.482364 1799 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"host-proc-sys-kernel\" (UniqueName: \"kubernetes.io/host-path/a2f83576-9690-40b2-bacf-895f61519e6a-host-proc-sys-kernel\") pod \"a2f83576-9690-40b2-bacf-895f61519e6a\" (UID: \"a2f83576-9690-40b2-bacf-895f61519e6a\") " Jan 13 21:26:06.482578 kubelet[1799]: I0113 21:26:06.482375 1799 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/a2f83576-9690-40b2-bacf-895f61519e6a-etc-cni-netd" (OuterVolumeSpecName: "etc-cni-netd") pod "a2f83576-9690-40b2-bacf-895f61519e6a" (UID: "a2f83576-9690-40b2-bacf-895f61519e6a"). InnerVolumeSpecName "etc-cni-netd". PluginName "kubernetes.io/host-path", VolumeGidValue "" Jan 13 21:26:06.482578 kubelet[1799]: I0113 21:26:06.482392 1799 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"hubble-tls\" (UniqueName: \"kubernetes.io/projected/a2f83576-9690-40b2-bacf-895f61519e6a-hubble-tls\") pod \"a2f83576-9690-40b2-bacf-895f61519e6a\" (UID: \"a2f83576-9690-40b2-bacf-895f61519e6a\") " Jan 13 21:26:06.482713 kubelet[1799]: I0113 21:26:06.482412 1799 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"cilium-run\" (UniqueName: \"kubernetes.io/host-path/a2f83576-9690-40b2-bacf-895f61519e6a-cilium-run\") pod \"a2f83576-9690-40b2-bacf-895f61519e6a\" (UID: \"a2f83576-9690-40b2-bacf-895f61519e6a\") " Jan 13 21:26:06.482713 kubelet[1799]: I0113 21:26:06.482431 1799 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"cilium-cgroup\" (UniqueName: \"kubernetes.io/host-path/a2f83576-9690-40b2-bacf-895f61519e6a-cilium-cgroup\") pod \"a2f83576-9690-40b2-bacf-895f61519e6a\" (UID: \"a2f83576-9690-40b2-bacf-895f61519e6a\") " Jan 13 21:26:06.482713 kubelet[1799]: I0113 21:26:06.482451 1799 reconciler_common.go:159] 
"operationExecutor.UnmountVolume started for volume \"host-proc-sys-net\" (UniqueName: \"kubernetes.io/host-path/a2f83576-9690-40b2-bacf-895f61519e6a-host-proc-sys-net\") pod \"a2f83576-9690-40b2-bacf-895f61519e6a\" (UID: \"a2f83576-9690-40b2-bacf-895f61519e6a\") " Jan 13 21:26:06.482713 kubelet[1799]: I0113 21:26:06.482495 1799 reconciler_common.go:288] "Volume detached for volume \"bpf-maps\" (UniqueName: \"kubernetes.io/host-path/a2f83576-9690-40b2-bacf-895f61519e6a-bpf-maps\") on node \"10.0.0.88\" DevicePath \"\"" Jan 13 21:26:06.482713 kubelet[1799]: I0113 21:26:06.482508 1799 reconciler_common.go:288] "Volume detached for volume \"cni-path\" (UniqueName: \"kubernetes.io/host-path/a2f83576-9690-40b2-bacf-895f61519e6a-cni-path\") on node \"10.0.0.88\" DevicePath \"\"" Jan 13 21:26:06.482713 kubelet[1799]: I0113 21:26:06.482518 1799 reconciler_common.go:288] "Volume detached for volume \"xtables-lock\" (UniqueName: \"kubernetes.io/host-path/a2f83576-9690-40b2-bacf-895f61519e6a-xtables-lock\") on node \"10.0.0.88\" DevicePath \"\"" Jan 13 21:26:06.482713 kubelet[1799]: I0113 21:26:06.482531 1799 reconciler_common.go:288] "Volume detached for volume \"etc-cni-netd\" (UniqueName: \"kubernetes.io/host-path/a2f83576-9690-40b2-bacf-895f61519e6a-etc-cni-netd\") on node \"10.0.0.88\" DevicePath \"\"" Jan 13 21:26:06.482869 kubelet[1799]: I0113 21:26:06.482541 1799 reconciler_common.go:288] "Volume detached for volume \"hostproc\" (UniqueName: \"kubernetes.io/host-path/a2f83576-9690-40b2-bacf-895f61519e6a-hostproc\") on node \"10.0.0.88\" DevicePath \"\"" Jan 13 21:26:06.484949 kubelet[1799]: I0113 21:26:06.482393 1799 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/a2f83576-9690-40b2-bacf-895f61519e6a-lib-modules" (OuterVolumeSpecName: "lib-modules") pod "a2f83576-9690-40b2-bacf-895f61519e6a" (UID: "a2f83576-9690-40b2-bacf-895f61519e6a"). InnerVolumeSpecName "lib-modules". 
PluginName "kubernetes.io/host-path", VolumeGidValue "" Jan 13 21:26:06.485390 kubelet[1799]: I0113 21:26:06.482567 1799 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/a2f83576-9690-40b2-bacf-895f61519e6a-host-proc-sys-net" (OuterVolumeSpecName: "host-proc-sys-net") pod "a2f83576-9690-40b2-bacf-895f61519e6a" (UID: "a2f83576-9690-40b2-bacf-895f61519e6a"). InnerVolumeSpecName "host-proc-sys-net". PluginName "kubernetes.io/host-path", VolumeGidValue "" Jan 13 21:26:06.485390 kubelet[1799]: I0113 21:26:06.485148 1799 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/a2f83576-9690-40b2-bacf-895f61519e6a-host-proc-sys-kernel" (OuterVolumeSpecName: "host-proc-sys-kernel") pod "a2f83576-9690-40b2-bacf-895f61519e6a" (UID: "a2f83576-9690-40b2-bacf-895f61519e6a"). InnerVolumeSpecName "host-proc-sys-kernel". PluginName "kubernetes.io/host-path", VolumeGidValue "" Jan 13 21:26:06.487235 kubelet[1799]: I0113 21:26:06.487186 1799 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/a2f83576-9690-40b2-bacf-895f61519e6a-cilium-config-path" (OuterVolumeSpecName: "cilium-config-path") pod "a2f83576-9690-40b2-bacf-895f61519e6a" (UID: "a2f83576-9690-40b2-bacf-895f61519e6a"). InnerVolumeSpecName "cilium-config-path". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 13 21:26:06.487298 kubelet[1799]: I0113 21:26:06.487278 1799 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/a2f83576-9690-40b2-bacf-895f61519e6a-cilium-run" (OuterVolumeSpecName: "cilium-run") pod "a2f83576-9690-40b2-bacf-895f61519e6a" (UID: "a2f83576-9690-40b2-bacf-895f61519e6a"). InnerVolumeSpecName "cilium-run". 
PluginName "kubernetes.io/host-path", VolumeGidValue "" Jan 13 21:26:06.487333 kubelet[1799]: I0113 21:26:06.487307 1799 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/a2f83576-9690-40b2-bacf-895f61519e6a-cilium-cgroup" (OuterVolumeSpecName: "cilium-cgroup") pod "a2f83576-9690-40b2-bacf-895f61519e6a" (UID: "a2f83576-9690-40b2-bacf-895f61519e6a"). InnerVolumeSpecName "cilium-cgroup". PluginName "kubernetes.io/host-path", VolumeGidValue "" Jan 13 21:26:06.487333 kubelet[1799]: I0113 21:26:06.487308 1799 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/a2f83576-9690-40b2-bacf-895f61519e6a-kube-api-access-9bf9m" (OuterVolumeSpecName: "kube-api-access-9bf9m") pod "a2f83576-9690-40b2-bacf-895f61519e6a" (UID: "a2f83576-9690-40b2-bacf-895f61519e6a"). InnerVolumeSpecName "kube-api-access-9bf9m". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 13 21:26:06.487582 systemd[1]: var-lib-kubelet-pods-a2f83576\x2d9690\x2d40b2\x2dbacf\x2d895f61519e6a-volumes-kubernetes.io\x7eprojected-kube\x2dapi\x2daccess\x2d9bf9m.mount: Deactivated successfully. Jan 13 21:26:06.488555 kubelet[1799]: I0113 21:26:06.488535 1799 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/a2f83576-9690-40b2-bacf-895f61519e6a-clustermesh-secrets" (OuterVolumeSpecName: "clustermesh-secrets") pod "a2f83576-9690-40b2-bacf-895f61519e6a" (UID: "a2f83576-9690-40b2-bacf-895f61519e6a"). InnerVolumeSpecName "clustermesh-secrets". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 13 21:26:06.488633 kubelet[1799]: I0113 21:26:06.488599 1799 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/a2f83576-9690-40b2-bacf-895f61519e6a-hubble-tls" (OuterVolumeSpecName: "hubble-tls") pod "a2f83576-9690-40b2-bacf-895f61519e6a" (UID: "a2f83576-9690-40b2-bacf-895f61519e6a"). InnerVolumeSpecName "hubble-tls". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 13 21:26:06.582882 kubelet[1799]: I0113 21:26:06.582798 1799 reconciler_common.go:288] "Volume detached for volume \"kube-api-access-9bf9m\" (UniqueName: \"kubernetes.io/projected/a2f83576-9690-40b2-bacf-895f61519e6a-kube-api-access-9bf9m\") on node \"10.0.0.88\" DevicePath \"\"" Jan 13 21:26:06.582882 kubelet[1799]: I0113 21:26:06.582842 1799 reconciler_common.go:288] "Volume detached for volume \"host-proc-sys-kernel\" (UniqueName: \"kubernetes.io/host-path/a2f83576-9690-40b2-bacf-895f61519e6a-host-proc-sys-kernel\") on node \"10.0.0.88\" DevicePath \"\"" Jan 13 21:26:06.582882 kubelet[1799]: I0113 21:26:06.582851 1799 reconciler_common.go:288] "Volume detached for volume \"clustermesh-secrets\" (UniqueName: \"kubernetes.io/secret/a2f83576-9690-40b2-bacf-895f61519e6a-clustermesh-secrets\") on node \"10.0.0.88\" DevicePath \"\"" Jan 13 21:26:06.582882 kubelet[1799]: I0113 21:26:06.582859 1799 reconciler_common.go:288] "Volume detached for volume \"cilium-cgroup\" (UniqueName: \"kubernetes.io/host-path/a2f83576-9690-40b2-bacf-895f61519e6a-cilium-cgroup\") on node \"10.0.0.88\" DevicePath \"\"" Jan 13 21:26:06.582882 kubelet[1799]: I0113 21:26:06.582868 1799 reconciler_common.go:288] "Volume detached for volume \"host-proc-sys-net\" (UniqueName: \"kubernetes.io/host-path/a2f83576-9690-40b2-bacf-895f61519e6a-host-proc-sys-net\") on node \"10.0.0.88\" DevicePath \"\"" Jan 13 21:26:06.582882 kubelet[1799]: I0113 21:26:06.582875 1799 reconciler_common.go:288] "Volume detached for volume \"hubble-tls\" (UniqueName: \"kubernetes.io/projected/a2f83576-9690-40b2-bacf-895f61519e6a-hubble-tls\") on node \"10.0.0.88\" DevicePath \"\"" Jan 13 21:26:06.582882 kubelet[1799]: I0113 21:26:06.582883 1799 reconciler_common.go:288] "Volume detached for volume \"cilium-run\" (UniqueName: \"kubernetes.io/host-path/a2f83576-9690-40b2-bacf-895f61519e6a-cilium-run\") on node \"10.0.0.88\" DevicePath \"\"" Jan 13 21:26:06.582882 
kubelet[1799]: I0113 21:26:06.582890 1799 reconciler_common.go:288] "Volume detached for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/a2f83576-9690-40b2-bacf-895f61519e6a-lib-modules\") on node \"10.0.0.88\" DevicePath \"\"" Jan 13 21:26:06.583302 kubelet[1799]: I0113 21:26:06.582920 1799 reconciler_common.go:288] "Volume detached for volume \"cilium-config-path\" (UniqueName: \"kubernetes.io/configmap/a2f83576-9690-40b2-bacf-895f61519e6a-cilium-config-path\") on node \"10.0.0.88\" DevicePath \"\"" Jan 13 21:26:06.638671 systemd[1]: Removed slice kubepods-burstable-poda2f83576_9690_40b2_bacf_895f61519e6a.slice - libcontainer container kubepods-burstable-poda2f83576_9690_40b2_bacf_895f61519e6a.slice. Jan 13 21:26:06.638777 systemd[1]: kubepods-burstable-poda2f83576_9690_40b2_bacf_895f61519e6a.slice: Consumed 7.146s CPU time. Jan 13 21:26:06.679668 kubelet[1799]: E0113 21:26:06.679588 1799 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Jan 13 21:26:07.122846 systemd[1]: var-lib-kubelet-pods-a2f83576\x2d9690\x2d40b2\x2dbacf\x2d895f61519e6a-volumes-kubernetes.io\x7eprojected-hubble\x2dtls.mount: Deactivated successfully. Jan 13 21:26:07.122986 systemd[1]: var-lib-kubelet-pods-a2f83576\x2d9690\x2d40b2\x2dbacf\x2d895f61519e6a-volumes-kubernetes.io\x7esecret-clustermesh\x2dsecrets.mount: Deactivated successfully. 
Jan 13 21:26:07.637821 kubelet[1799]: E0113 21:26:07.637720 1799 file.go:104] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Jan 13 21:26:07.652138 containerd[1474]: time="2025-01-13T21:26:07.652105026Z" level=info msg="StopPodSandbox for \"56fe0ce02cc88e5eb5a49dffa080c101e96f14f1c56ad515071b87974e8d4794\"" Jan 13 21:26:07.652487 containerd[1474]: time="2025-01-13T21:26:07.652183573Z" level=info msg="TearDown network for sandbox \"56fe0ce02cc88e5eb5a49dffa080c101e96f14f1c56ad515071b87974e8d4794\" successfully" Jan 13 21:26:07.652487 containerd[1474]: time="2025-01-13T21:26:07.652193973Z" level=info msg="StopPodSandbox for \"56fe0ce02cc88e5eb5a49dffa080c101e96f14f1c56ad515071b87974e8d4794\" returns successfully" Jan 13 21:26:07.652487 containerd[1474]: time="2025-01-13T21:26:07.652469100Z" level=info msg="RemovePodSandbox for \"56fe0ce02cc88e5eb5a49dffa080c101e96f14f1c56ad515071b87974e8d4794\"" Jan 13 21:26:07.652487 containerd[1474]: time="2025-01-13T21:26:07.652485971Z" level=info msg="Forcibly stopping sandbox \"56fe0ce02cc88e5eb5a49dffa080c101e96f14f1c56ad515071b87974e8d4794\"" Jan 13 21:26:07.652572 containerd[1474]: time="2025-01-13T21:26:07.652537097Z" level=info msg="TearDown network for sandbox \"56fe0ce02cc88e5eb5a49dffa080c101e96f14f1c56ad515071b87974e8d4794\" successfully" Jan 13 21:26:07.679781 kubelet[1799]: E0113 21:26:07.679743 1799 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Jan 13 21:26:07.757991 containerd[1474]: time="2025-01-13T21:26:07.757931178Z" level=warning msg="Failed to get podSandbox status for container event for sandboxID \"56fe0ce02cc88e5eb5a49dffa080c101e96f14f1c56ad515071b87974e8d4794\": an error occurred when try to find sandbox: not found. Sending the event with nil podSandboxStatus." 
Jan 13 21:26:07.758105 containerd[1474]: time="2025-01-13T21:26:07.758002422Z" level=info msg="RemovePodSandbox \"56fe0ce02cc88e5eb5a49dffa080c101e96f14f1c56ad515071b87974e8d4794\" returns successfully" Jan 13 21:26:08.053193 kubelet[1799]: I0113 21:26:08.053040 1799 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="a2f83576-9690-40b2-bacf-895f61519e6a" path="/var/lib/kubelet/pods/a2f83576-9690-40b2-bacf-895f61519e6a/volumes" Jan 13 21:26:08.066932 kubelet[1799]: E0113 21:26:08.066875 1799 kubelet.go:2901] "Container runtime network not ready" networkReady="NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" Jan 13 21:26:08.680721 kubelet[1799]: E0113 21:26:08.680663 1799 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Jan 13 21:26:08.968801 kubelet[1799]: E0113 21:26:08.968666 1799 cpu_manager.go:395] "RemoveStaleState: removing container" podUID="a2f83576-9690-40b2-bacf-895f61519e6a" containerName="apply-sysctl-overwrites" Jan 13 21:26:08.968801 kubelet[1799]: E0113 21:26:08.968697 1799 cpu_manager.go:395] "RemoveStaleState: removing container" podUID="a2f83576-9690-40b2-bacf-895f61519e6a" containerName="clean-cilium-state" Jan 13 21:26:08.968801 kubelet[1799]: E0113 21:26:08.968703 1799 cpu_manager.go:395] "RemoveStaleState: removing container" podUID="a2f83576-9690-40b2-bacf-895f61519e6a" containerName="cilium-agent" Jan 13 21:26:08.968801 kubelet[1799]: E0113 21:26:08.968709 1799 cpu_manager.go:395] "RemoveStaleState: removing container" podUID="a2f83576-9690-40b2-bacf-895f61519e6a" containerName="mount-cgroup" Jan 13 21:26:08.968801 kubelet[1799]: E0113 21:26:08.968715 1799 cpu_manager.go:395] "RemoveStaleState: removing container" podUID="a2f83576-9690-40b2-bacf-895f61519e6a" containerName="mount-bpf-fs" Jan 13 21:26:08.968801 kubelet[1799]: I0113 21:26:08.968733 1799 memory_manager.go:354] "RemoveStaleState 
removing state" podUID="a2f83576-9690-40b2-bacf-895f61519e6a" containerName="cilium-agent" Jan 13 21:26:08.974690 systemd[1]: Created slice kubepods-burstable-pod75bdd03a_e544_4853_8721_7b350ddc078d.slice - libcontainer container kubepods-burstable-pod75bdd03a_e544_4853_8721_7b350ddc078d.slice. Jan 13 21:26:09.001713 systemd[1]: Created slice kubepods-besteffort-pod37336538_ae43_4ad1_a3ed_6a6b28c446b9.slice - libcontainer container kubepods-besteffort-pod37336538_ae43_4ad1_a3ed_6a6b28c446b9.slice. Jan 13 21:26:09.095887 kubelet[1799]: I0113 21:26:09.095836 1799 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cilium-config-path\" (UniqueName: \"kubernetes.io/configmap/37336538-ae43-4ad1-a3ed-6a6b28c446b9-cilium-config-path\") pod \"cilium-operator-5d85765b45-tnvtl\" (UID: \"37336538-ae43-4ad1-a3ed-6a6b28c446b9\") " pod="kube-system/cilium-operator-5d85765b45-tnvtl" Jan 13 21:26:09.095887 kubelet[1799]: I0113 21:26:09.095881 1799 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"etc-cni-netd\" (UniqueName: \"kubernetes.io/host-path/75bdd03a-e544-4853-8721-7b350ddc078d-etc-cni-netd\") pod \"cilium-h5rrb\" (UID: \"75bdd03a-e544-4853-8721-7b350ddc078d\") " pod="kube-system/cilium-h5rrb" Jan 13 21:26:09.096122 kubelet[1799]: I0113 21:26:09.095931 1799 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"bpf-maps\" (UniqueName: \"kubernetes.io/host-path/75bdd03a-e544-4853-8721-7b350ddc078d-bpf-maps\") pod \"cilium-h5rrb\" (UID: \"75bdd03a-e544-4853-8721-7b350ddc078d\") " pod="kube-system/cilium-h5rrb" Jan 13 21:26:09.096122 kubelet[1799]: I0113 21:26:09.095983 1799 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"hostproc\" (UniqueName: \"kubernetes.io/host-path/75bdd03a-e544-4853-8721-7b350ddc078d-hostproc\") pod \"cilium-h5rrb\" (UID: \"75bdd03a-e544-4853-8721-7b350ddc078d\") " 
pod="kube-system/cilium-h5rrb" Jan 13 21:26:09.096122 kubelet[1799]: I0113 21:26:09.096019 1799 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"xtables-lock\" (UniqueName: \"kubernetes.io/host-path/75bdd03a-e544-4853-8721-7b350ddc078d-xtables-lock\") pod \"cilium-h5rrb\" (UID: \"75bdd03a-e544-4853-8721-7b350ddc078d\") " pod="kube-system/cilium-h5rrb" Jan 13 21:26:09.096122 kubelet[1799]: I0113 21:26:09.096037 1799 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cilium-config-path\" (UniqueName: \"kubernetes.io/configmap/75bdd03a-e544-4853-8721-7b350ddc078d-cilium-config-path\") pod \"cilium-h5rrb\" (UID: \"75bdd03a-e544-4853-8721-7b350ddc078d\") " pod="kube-system/cilium-h5rrb" Jan 13 21:26:09.096122 kubelet[1799]: I0113 21:26:09.096051 1799 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cilium-ipsec-secrets\" (UniqueName: \"kubernetes.io/secret/75bdd03a-e544-4853-8721-7b350ddc078d-cilium-ipsec-secrets\") pod \"cilium-h5rrb\" (UID: \"75bdd03a-e544-4853-8721-7b350ddc078d\") " pod="kube-system/cilium-h5rrb" Jan 13 21:26:09.096122 kubelet[1799]: I0113 21:26:09.096096 1799 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-9lxw8\" (UniqueName: \"kubernetes.io/projected/75bdd03a-e544-4853-8721-7b350ddc078d-kube-api-access-9lxw8\") pod \"cilium-h5rrb\" (UID: \"75bdd03a-e544-4853-8721-7b350ddc078d\") " pod="kube-system/cilium-h5rrb" Jan 13 21:26:09.096271 kubelet[1799]: I0113 21:26:09.096120 1799 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-mvxx8\" (UniqueName: \"kubernetes.io/projected/37336538-ae43-4ad1-a3ed-6a6b28c446b9-kube-api-access-mvxx8\") pod \"cilium-operator-5d85765b45-tnvtl\" (UID: \"37336538-ae43-4ad1-a3ed-6a6b28c446b9\") " pod="kube-system/cilium-operator-5d85765b45-tnvtl" Jan 13 
21:26:09.096271 kubelet[1799]: I0113 21:26:09.096136 1799 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cni-path\" (UniqueName: \"kubernetes.io/host-path/75bdd03a-e544-4853-8721-7b350ddc078d-cni-path\") pod \"cilium-h5rrb\" (UID: \"75bdd03a-e544-4853-8721-7b350ddc078d\") " pod="kube-system/cilium-h5rrb" Jan 13 21:26:09.096271 kubelet[1799]: I0113 21:26:09.096166 1799 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"clustermesh-secrets\" (UniqueName: \"kubernetes.io/secret/75bdd03a-e544-4853-8721-7b350ddc078d-clustermesh-secrets\") pod \"cilium-h5rrb\" (UID: \"75bdd03a-e544-4853-8721-7b350ddc078d\") " pod="kube-system/cilium-h5rrb" Jan 13 21:26:09.096271 kubelet[1799]: I0113 21:26:09.096212 1799 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-proc-sys-net\" (UniqueName: \"kubernetes.io/host-path/75bdd03a-e544-4853-8721-7b350ddc078d-host-proc-sys-net\") pod \"cilium-h5rrb\" (UID: \"75bdd03a-e544-4853-8721-7b350ddc078d\") " pod="kube-system/cilium-h5rrb" Jan 13 21:26:09.096271 kubelet[1799]: I0113 21:26:09.096232 1799 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-proc-sys-kernel\" (UniqueName: \"kubernetes.io/host-path/75bdd03a-e544-4853-8721-7b350ddc078d-host-proc-sys-kernel\") pod \"cilium-h5rrb\" (UID: \"75bdd03a-e544-4853-8721-7b350ddc078d\") " pod="kube-system/cilium-h5rrb" Jan 13 21:26:09.096427 kubelet[1799]: I0113 21:26:09.096246 1799 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"hubble-tls\" (UniqueName: \"kubernetes.io/projected/75bdd03a-e544-4853-8721-7b350ddc078d-hubble-tls\") pod \"cilium-h5rrb\" (UID: \"75bdd03a-e544-4853-8721-7b350ddc078d\") " pod="kube-system/cilium-h5rrb" Jan 13 21:26:09.096427 kubelet[1799]: I0113 21:26:09.096259 1799 reconciler_common.go:245] 
"operationExecutor.VerifyControllerAttachedVolume started for volume \"cilium-run\" (UniqueName: \"kubernetes.io/host-path/75bdd03a-e544-4853-8721-7b350ddc078d-cilium-run\") pod \"cilium-h5rrb\" (UID: \"75bdd03a-e544-4853-8721-7b350ddc078d\") " pod="kube-system/cilium-h5rrb" Jan 13 21:26:09.096427 kubelet[1799]: I0113 21:26:09.096271 1799 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cilium-cgroup\" (UniqueName: \"kubernetes.io/host-path/75bdd03a-e544-4853-8721-7b350ddc078d-cilium-cgroup\") pod \"cilium-h5rrb\" (UID: \"75bdd03a-e544-4853-8721-7b350ddc078d\") " pod="kube-system/cilium-h5rrb" Jan 13 21:26:09.096427 kubelet[1799]: I0113 21:26:09.096303 1799 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/75bdd03a-e544-4853-8721-7b350ddc078d-lib-modules\") pod \"cilium-h5rrb\" (UID: \"75bdd03a-e544-4853-8721-7b350ddc078d\") " pod="kube-system/cilium-h5rrb" Jan 13 21:26:09.299531 kubelet[1799]: E0113 21:26:09.299332 1799 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jan 13 21:26:09.300006 containerd[1474]: time="2025-01-13T21:26:09.299944417Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:cilium-h5rrb,Uid:75bdd03a-e544-4853-8721-7b350ddc078d,Namespace:kube-system,Attempt:0,}" Jan 13 21:26:09.304491 kubelet[1799]: E0113 21:26:09.304451 1799 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jan 13 21:26:09.305041 containerd[1474]: time="2025-01-13T21:26:09.305001615Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:cilium-operator-5d85765b45-tnvtl,Uid:37336538-ae43-4ad1-a3ed-6a6b28c446b9,Namespace:kube-system,Attempt:0,}" Jan 13 21:26:09.331782 
containerd[1474]: time="2025-01-13T21:26:09.331604420Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Jan 13 21:26:09.331782 containerd[1474]: time="2025-01-13T21:26:09.331730437Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Jan 13 21:26:09.332025 containerd[1474]: time="2025-01-13T21:26:09.331788005Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Jan 13 21:26:09.332025 containerd[1474]: time="2025-01-13T21:26:09.331928529Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Jan 13 21:26:09.332884 containerd[1474]: time="2025-01-13T21:26:09.332414031Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Jan 13 21:26:09.332884 containerd[1474]: time="2025-01-13T21:26:09.332476858Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Jan 13 21:26:09.332884 containerd[1474]: time="2025-01-13T21:26:09.332542351Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Jan 13 21:26:09.333099 containerd[1474]: time="2025-01-13T21:26:09.332815484Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Jan 13 21:26:09.357137 systemd[1]: Started cri-containerd-e978f4e183b9baadfc0cb93bd0e02d7fd6c25c61bd0336fd71d6ae0abdc194fc.scope - libcontainer container e978f4e183b9baadfc0cb93bd0e02d7fd6c25c61bd0336fd71d6ae0abdc194fc. 
Jan 13 21:26:09.362212 systemd[1]: Started cri-containerd-bcaa996587b69b9623c7f5022d0adc22efac0a6bdd95558c74fc3b428d1dcacb.scope - libcontainer container bcaa996587b69b9623c7f5022d0adc22efac0a6bdd95558c74fc3b428d1dcacb. Jan 13 21:26:09.385367 containerd[1474]: time="2025-01-13T21:26:09.385329085Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:cilium-h5rrb,Uid:75bdd03a-e544-4853-8721-7b350ddc078d,Namespace:kube-system,Attempt:0,} returns sandbox id \"bcaa996587b69b9623c7f5022d0adc22efac0a6bdd95558c74fc3b428d1dcacb\"" Jan 13 21:26:09.386959 kubelet[1799]: E0113 21:26:09.386445 1799 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jan 13 21:26:09.390552 containerd[1474]: time="2025-01-13T21:26:09.390497402Z" level=info msg="CreateContainer within sandbox \"bcaa996587b69b9623c7f5022d0adc22efac0a6bdd95558c74fc3b428d1dcacb\" for container &ContainerMetadata{Name:mount-cgroup,Attempt:0,}" Jan 13 21:26:09.400264 containerd[1474]: time="2025-01-13T21:26:09.400207730Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:cilium-operator-5d85765b45-tnvtl,Uid:37336538-ae43-4ad1-a3ed-6a6b28c446b9,Namespace:kube-system,Attempt:0,} returns sandbox id \"e978f4e183b9baadfc0cb93bd0e02d7fd6c25c61bd0336fd71d6ae0abdc194fc\"" Jan 13 21:26:09.400969 kubelet[1799]: E0113 21:26:09.400939 1799 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jan 13 21:26:09.402023 containerd[1474]: time="2025-01-13T21:26:09.401982884Z" level=info msg="PullImage \"quay.io/cilium/operator-generic:v1.12.5@sha256:b296eb7f0f7656a5cc19724f40a8a7121b7fd725278b7d61dc91fe0b7ffd7c0e\"" Jan 13 21:26:09.411596 containerd[1474]: time="2025-01-13T21:26:09.411536868Z" level=info msg="CreateContainer within sandbox 
\"bcaa996587b69b9623c7f5022d0adc22efac0a6bdd95558c74fc3b428d1dcacb\" for &ContainerMetadata{Name:mount-cgroup,Attempt:0,} returns container id \"8937d3b388ccbec46549f2818121f5d01263cf155824afd0b1bc4338b498895f\"" Jan 13 21:26:09.412273 containerd[1474]: time="2025-01-13T21:26:09.411990180Z" level=info msg="StartContainer for \"8937d3b388ccbec46549f2818121f5d01263cf155824afd0b1bc4338b498895f\"" Jan 13 21:26:09.440034 systemd[1]: Started cri-containerd-8937d3b388ccbec46549f2818121f5d01263cf155824afd0b1bc4338b498895f.scope - libcontainer container 8937d3b388ccbec46549f2818121f5d01263cf155824afd0b1bc4338b498895f. Jan 13 21:26:09.464881 containerd[1474]: time="2025-01-13T21:26:09.464824983Z" level=info msg="StartContainer for \"8937d3b388ccbec46549f2818121f5d01263cf155824afd0b1bc4338b498895f\" returns successfully" Jan 13 21:26:09.474567 systemd[1]: cri-containerd-8937d3b388ccbec46549f2818121f5d01263cf155824afd0b1bc4338b498895f.scope: Deactivated successfully. Jan 13 21:26:09.519194 containerd[1474]: time="2025-01-13T21:26:09.519098779Z" level=info msg="shim disconnected" id=8937d3b388ccbec46549f2818121f5d01263cf155824afd0b1bc4338b498895f namespace=k8s.io Jan 13 21:26:09.519194 containerd[1474]: time="2025-01-13T21:26:09.519163811Z" level=warning msg="cleaning up after shim disconnected" id=8937d3b388ccbec46549f2818121f5d01263cf155824afd0b1bc4338b498895f namespace=k8s.io Jan 13 21:26:09.519194 containerd[1474]: time="2025-01-13T21:26:09.519174161Z" level=info msg="cleaning up dead shim" namespace=k8s.io Jan 13 21:26:09.681172 kubelet[1799]: E0113 21:26:09.681087 1799 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Jan 13 21:26:09.911447 kubelet[1799]: I0113 21:26:09.911400 1799 setters.go:600] "Node became not ready" node="10.0.0.88" 
condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-01-13T21:26:09Z","lastTransitionTime":"2025-01-13T21:26:09Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized"} Jan 13 21:26:10.341104 kubelet[1799]: E0113 21:26:10.341074 1799 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jan 13 21:26:10.342787 containerd[1474]: time="2025-01-13T21:26:10.342744449Z" level=info msg="CreateContainer within sandbox \"bcaa996587b69b9623c7f5022d0adc22efac0a6bdd95558c74fc3b428d1dcacb\" for container &ContainerMetadata{Name:apply-sysctl-overwrites,Attempt:0,}" Jan 13 21:26:10.584624 containerd[1474]: time="2025-01-13T21:26:10.584567714Z" level=info msg="CreateContainer within sandbox \"bcaa996587b69b9623c7f5022d0adc22efac0a6bdd95558c74fc3b428d1dcacb\" for &ContainerMetadata{Name:apply-sysctl-overwrites,Attempt:0,} returns container id \"96e28de4fc73670c00ebd9f815fa854ea61060b3378e5506f98c38722d967737\"" Jan 13 21:26:10.585131 containerd[1474]: time="2025-01-13T21:26:10.585087651Z" level=info msg="StartContainer for \"96e28de4fc73670c00ebd9f815fa854ea61060b3378e5506f98c38722d967737\"" Jan 13 21:26:10.618043 systemd[1]: Started cri-containerd-96e28de4fc73670c00ebd9f815fa854ea61060b3378e5506f98c38722d967737.scope - libcontainer container 96e28de4fc73670c00ebd9f815fa854ea61060b3378e5506f98c38722d967737. Jan 13 21:26:10.650293 containerd[1474]: time="2025-01-13T21:26:10.650240980Z" level=info msg="StartContainer for \"96e28de4fc73670c00ebd9f815fa854ea61060b3378e5506f98c38722d967737\" returns successfully" Jan 13 21:26:10.655803 systemd[1]: cri-containerd-96e28de4fc73670c00ebd9f815fa854ea61060b3378e5506f98c38722d967737.scope: Deactivated successfully. 
Jan 13 21:26:10.681836 kubelet[1799]: E0113 21:26:10.681782 1799 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Jan 13 21:26:10.682352 containerd[1474]: time="2025-01-13T21:26:10.682248630Z" level=info msg="shim disconnected" id=96e28de4fc73670c00ebd9f815fa854ea61060b3378e5506f98c38722d967737 namespace=k8s.io Jan 13 21:26:10.682352 containerd[1474]: time="2025-01-13T21:26:10.682304064Z" level=warning msg="cleaning up after shim disconnected" id=96e28de4fc73670c00ebd9f815fa854ea61060b3378e5506f98c38722d967737 namespace=k8s.io Jan 13 21:26:10.682352 containerd[1474]: time="2025-01-13T21:26:10.682312630Z" level=info msg="cleaning up dead shim" namespace=k8s.io Jan 13 21:26:11.202809 systemd[1]: run-containerd-runc-k8s.io-96e28de4fc73670c00ebd9f815fa854ea61060b3378e5506f98c38722d967737-runc.hwYf47.mount: Deactivated successfully. Jan 13 21:26:11.202932 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-96e28de4fc73670c00ebd9f815fa854ea61060b3378e5506f98c38722d967737-rootfs.mount: Deactivated successfully. 
Jan 13 21:26:11.348291 kubelet[1799]: E0113 21:26:11.346276 1799 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Jan 13 21:26:11.350523 containerd[1474]: time="2025-01-13T21:26:11.350469977Z" level=info msg="CreateContainer within sandbox \"bcaa996587b69b9623c7f5022d0adc22efac0a6bdd95558c74fc3b428d1dcacb\" for container &ContainerMetadata{Name:mount-bpf-fs,Attempt:0,}"
Jan 13 21:26:11.369307 containerd[1474]: time="2025-01-13T21:26:11.369247739Z" level=info msg="CreateContainer within sandbox \"bcaa996587b69b9623c7f5022d0adc22efac0a6bdd95558c74fc3b428d1dcacb\" for &ContainerMetadata{Name:mount-bpf-fs,Attempt:0,} returns container id \"e8911ab2039311e753e63e5b4ae2277cfd92bd029ee5c3bb0760c4d934b117ee\""
Jan 13 21:26:11.369916 containerd[1474]: time="2025-01-13T21:26:11.369873624Z" level=info msg="StartContainer for \"e8911ab2039311e753e63e5b4ae2277cfd92bd029ee5c3bb0760c4d934b117ee\""
Jan 13 21:26:11.404087 systemd[1]: Started cri-containerd-e8911ab2039311e753e63e5b4ae2277cfd92bd029ee5c3bb0760c4d934b117ee.scope - libcontainer container e8911ab2039311e753e63e5b4ae2277cfd92bd029ee5c3bb0760c4d934b117ee.
Jan 13 21:26:11.434363 containerd[1474]: time="2025-01-13T21:26:11.434320175Z" level=info msg="StartContainer for \"e8911ab2039311e753e63e5b4ae2277cfd92bd029ee5c3bb0760c4d934b117ee\" returns successfully"
Jan 13 21:26:11.434484 systemd[1]: cri-containerd-e8911ab2039311e753e63e5b4ae2277cfd92bd029ee5c3bb0760c4d934b117ee.scope: Deactivated successfully.
Jan 13 21:26:11.461361 containerd[1474]: time="2025-01-13T21:26:11.461214579Z" level=info msg="shim disconnected" id=e8911ab2039311e753e63e5b4ae2277cfd92bd029ee5c3bb0760c4d934b117ee namespace=k8s.io
Jan 13 21:26:11.461361 containerd[1474]: time="2025-01-13T21:26:11.461273870Z" level=warning msg="cleaning up after shim disconnected" id=e8911ab2039311e753e63e5b4ae2277cfd92bd029ee5c3bb0760c4d934b117ee namespace=k8s.io
Jan 13 21:26:11.461361 containerd[1474]: time="2025-01-13T21:26:11.461284590Z" level=info msg="cleaning up dead shim" namespace=k8s.io
Jan 13 21:26:11.682509 kubelet[1799]: E0113 21:26:11.682446 1799 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests"
Jan 13 21:26:12.203141 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-e8911ab2039311e753e63e5b4ae2277cfd92bd029ee5c3bb0760c4d934b117ee-rootfs.mount: Deactivated successfully.
Jan 13 21:26:12.349219 kubelet[1799]: E0113 21:26:12.349181 1799 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Jan 13 21:26:12.351044 containerd[1474]: time="2025-01-13T21:26:12.350997969Z" level=info msg="CreateContainer within sandbox \"bcaa996587b69b9623c7f5022d0adc22efac0a6bdd95558c74fc3b428d1dcacb\" for container &ContainerMetadata{Name:clean-cilium-state,Attempt:0,}"
Jan 13 21:26:12.363710 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount1987520456.mount: Deactivated successfully.
Jan 13 21:26:12.368936 containerd[1474]: time="2025-01-13T21:26:12.368866441Z" level=info msg="CreateContainer within sandbox \"bcaa996587b69b9623c7f5022d0adc22efac0a6bdd95558c74fc3b428d1dcacb\" for &ContainerMetadata{Name:clean-cilium-state,Attempt:0,} returns container id \"7ce00905a0bf2b44151c1230571a6dc1c43fa0916814f3bca4e338c41308dd64\""
Jan 13 21:26:12.369358 containerd[1474]: time="2025-01-13T21:26:12.369326344Z" level=info msg="StartContainer for \"7ce00905a0bf2b44151c1230571a6dc1c43fa0916814f3bca4e338c41308dd64\""
Jan 13 21:26:12.404064 systemd[1]: Started cri-containerd-7ce00905a0bf2b44151c1230571a6dc1c43fa0916814f3bca4e338c41308dd64.scope - libcontainer container 7ce00905a0bf2b44151c1230571a6dc1c43fa0916814f3bca4e338c41308dd64.
Jan 13 21:26:12.428840 systemd[1]: cri-containerd-7ce00905a0bf2b44151c1230571a6dc1c43fa0916814f3bca4e338c41308dd64.scope: Deactivated successfully.
Jan 13 21:26:12.430781 containerd[1474]: time="2025-01-13T21:26:12.430733298Z" level=info msg="StartContainer for \"7ce00905a0bf2b44151c1230571a6dc1c43fa0916814f3bca4e338c41308dd64\" returns successfully"
Jan 13 21:26:12.459093 containerd[1474]: time="2025-01-13T21:26:12.458954588Z" level=info msg="shim disconnected" id=7ce00905a0bf2b44151c1230571a6dc1c43fa0916814f3bca4e338c41308dd64 namespace=k8s.io
Jan 13 21:26:12.459093 containerd[1474]: time="2025-01-13T21:26:12.459006435Z" level=warning msg="cleaning up after shim disconnected" id=7ce00905a0bf2b44151c1230571a6dc1c43fa0916814f3bca4e338c41308dd64 namespace=k8s.io
Jan 13 21:26:12.459093 containerd[1474]: time="2025-01-13T21:26:12.459014602Z" level=info msg="cleaning up dead shim" namespace=k8s.io
Jan 13 21:26:12.683091 kubelet[1799]: E0113 21:26:12.683016 1799 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests"
Jan 13 21:26:13.068159 kubelet[1799]: E0113 21:26:13.068102 1799 kubelet.go:2901] "Container runtime network not ready" networkReady="NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized"
Jan 13 21:26:13.202702 systemd[1]: run-containerd-runc-k8s.io-7ce00905a0bf2b44151c1230571a6dc1c43fa0916814f3bca4e338c41308dd64-runc.STwvvQ.mount: Deactivated successfully.
Jan 13 21:26:13.202803 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-7ce00905a0bf2b44151c1230571a6dc1c43fa0916814f3bca4e338c41308dd64-rootfs.mount: Deactivated successfully.
Jan 13 21:26:13.352645 kubelet[1799]: E0113 21:26:13.352610 1799 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Jan 13 21:26:13.354305 containerd[1474]: time="2025-01-13T21:26:13.354270581Z" level=info msg="CreateContainer within sandbox \"bcaa996587b69b9623c7f5022d0adc22efac0a6bdd95558c74fc3b428d1dcacb\" for container &ContainerMetadata{Name:cilium-agent,Attempt:0,}"
Jan 13 21:26:13.372089 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount490235085.mount: Deactivated successfully.
Jan 13 21:26:13.374250 containerd[1474]: time="2025-01-13T21:26:13.374210500Z" level=info msg="CreateContainer within sandbox \"bcaa996587b69b9623c7f5022d0adc22efac0a6bdd95558c74fc3b428d1dcacb\" for &ContainerMetadata{Name:cilium-agent,Attempt:0,} returns container id \"b9255c2e45e3af2769dbc2d0b289c9b980186f6b6d44784ccae6e929df9d17c9\""
Jan 13 21:26:13.374807 containerd[1474]: time="2025-01-13T21:26:13.374715067Z" level=info msg="StartContainer for \"b9255c2e45e3af2769dbc2d0b289c9b980186f6b6d44784ccae6e929df9d17c9\""
Jan 13 21:26:13.402091 systemd[1]: Started cri-containerd-b9255c2e45e3af2769dbc2d0b289c9b980186f6b6d44784ccae6e929df9d17c9.scope - libcontainer container b9255c2e45e3af2769dbc2d0b289c9b980186f6b6d44784ccae6e929df9d17c9.
Jan 13 21:26:13.433043 containerd[1474]: time="2025-01-13T21:26:13.431953414Z" level=info msg="StartContainer for \"b9255c2e45e3af2769dbc2d0b289c9b980186f6b6d44784ccae6e929df9d17c9\" returns successfully"
Jan 13 21:26:13.683292 kubelet[1799]: E0113 21:26:13.683174 1799 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests"
Jan 13 21:26:13.856943 kernel: alg: No test for seqiv(rfc4106(gcm(aes))) (seqiv(rfc4106-gcm-aesni))
Jan 13 21:26:14.357015 kubelet[1799]: E0113 21:26:14.356984 1799 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Jan 13 21:26:14.372795 kubelet[1799]: I0113 21:26:14.372688 1799 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/cilium-h5rrb" podStartSLOduration=6.372666441 podStartE2EDuration="6.372666441s" podCreationTimestamp="2025-01-13 21:26:08 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-01-13 21:26:14.372193202 +0000 UTC m=+67.257750495" watchObservedRunningTime="2025-01-13 21:26:14.372666441 +0000 UTC m=+67.258223734"
Jan 13 21:26:14.684523 kubelet[1799]: E0113 21:26:14.684316 1799 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests"
Jan 13 21:26:15.358642 kubelet[1799]: E0113 21:26:15.358603 1799 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Jan 13 21:26:15.684867 kubelet[1799]: E0113 21:26:15.684723 1799 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests"
Jan 13 21:26:16.685086 kubelet[1799]: E0113 21:26:16.685045 1799 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests"
Jan 13 21:26:17.052788 systemd-networkd[1406]: lxc_health: Link UP
Jan 13 21:26:17.063825 systemd-networkd[1406]: lxc_health: Gained carrier
Jan 13 21:26:17.301161 kubelet[1799]: E0113 21:26:17.301110 1799 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Jan 13 21:26:17.364022 kubelet[1799]: E0113 21:26:17.363989 1799 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Jan 13 21:26:17.685674 kubelet[1799]: E0113 21:26:17.685516 1799 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests"
Jan 13 21:26:18.365160 kubelet[1799]: E0113 21:26:18.365121 1799 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Jan 13 21:26:18.685804 kubelet[1799]: E0113 21:26:18.685677 1799 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests"
Jan 13 21:26:18.818090 systemd-networkd[1406]: lxc_health: Gained IPv6LL
Jan 13 21:26:19.650759 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount1818946731.mount: Deactivated successfully.
Jan 13 21:26:19.686801 kubelet[1799]: E0113 21:26:19.686745 1799 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests"
Jan 13 21:26:20.081535 containerd[1474]: time="2025-01-13T21:26:20.081353692Z" level=info msg="ImageCreate event name:\"quay.io/cilium/operator-generic@sha256:b296eb7f0f7656a5cc19724f40a8a7121b7fd725278b7d61dc91fe0b7ffd7c0e\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Jan 13 21:26:20.082680 containerd[1474]: time="2025-01-13T21:26:20.082603577Z" level=info msg="stop pulling image quay.io/cilium/operator-generic@sha256:b296eb7f0f7656a5cc19724f40a8a7121b7fd725278b7d61dc91fe0b7ffd7c0e: active requests=0, bytes read=18907177"
Jan 13 21:26:20.084286 containerd[1474]: time="2025-01-13T21:26:20.084246942Z" level=info msg="ImageCreate event name:\"sha256:ed355de9f59fe391dbe53f3c7c7a60baab3c3a9b7549aa54d10b87fff7dacf7c\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Jan 13 21:26:20.085583 containerd[1474]: time="2025-01-13T21:26:20.085409132Z" level=info msg="Pulled image \"quay.io/cilium/operator-generic:v1.12.5@sha256:b296eb7f0f7656a5cc19724f40a8a7121b7fd725278b7d61dc91fe0b7ffd7c0e\" with image id \"sha256:ed355de9f59fe391dbe53f3c7c7a60baab3c3a9b7549aa54d10b87fff7dacf7c\", repo tag \"\", repo digest \"quay.io/cilium/operator-generic@sha256:b296eb7f0f7656a5cc19724f40a8a7121b7fd725278b7d61dc91fe0b7ffd7c0e\", size \"18897442\" in 10.683383729s"
Jan 13 21:26:20.085583 containerd[1474]: time="2025-01-13T21:26:20.085453405Z" level=info msg="PullImage \"quay.io/cilium/operator-generic:v1.12.5@sha256:b296eb7f0f7656a5cc19724f40a8a7121b7fd725278b7d61dc91fe0b7ffd7c0e\" returns image reference \"sha256:ed355de9f59fe391dbe53f3c7c7a60baab3c3a9b7549aa54d10b87fff7dacf7c\""
Jan 13 21:26:20.087942 containerd[1474]: time="2025-01-13T21:26:20.087887524Z" level=info msg="CreateContainer within sandbox \"e978f4e183b9baadfc0cb93bd0e02d7fd6c25c61bd0336fd71d6ae0abdc194fc\" for container &ContainerMetadata{Name:cilium-operator,Attempt:0,}"
Jan 13 21:26:20.110654 containerd[1474]: time="2025-01-13T21:26:20.110580162Z" level=info msg="CreateContainer within sandbox \"e978f4e183b9baadfc0cb93bd0e02d7fd6c25c61bd0336fd71d6ae0abdc194fc\" for &ContainerMetadata{Name:cilium-operator,Attempt:0,} returns container id \"8bc6365f13e6887c39a55ad6b0b0f4387d16deeec319348eeaf5d1940b076e4c\""
Jan 13 21:26:20.112291 containerd[1474]: time="2025-01-13T21:26:20.111334407Z" level=info msg="StartContainer for \"8bc6365f13e6887c39a55ad6b0b0f4387d16deeec319348eeaf5d1940b076e4c\""
Jan 13 21:26:20.146125 systemd[1]: Started cri-containerd-8bc6365f13e6887c39a55ad6b0b0f4387d16deeec319348eeaf5d1940b076e4c.scope - libcontainer container 8bc6365f13e6887c39a55ad6b0b0f4387d16deeec319348eeaf5d1940b076e4c.
Jan 13 21:26:20.195110 containerd[1474]: time="2025-01-13T21:26:20.195049279Z" level=info msg="StartContainer for \"8bc6365f13e6887c39a55ad6b0b0f4387d16deeec319348eeaf5d1940b076e4c\" returns successfully"
Jan 13 21:26:20.369720 kubelet[1799]: E0113 21:26:20.369689 1799 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Jan 13 21:26:20.380215 kubelet[1799]: I0113 21:26:20.379944 1799 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/cilium-operator-5d85765b45-tnvtl" podStartSLOduration=1.6948274190000001 podStartE2EDuration="12.379932759s" podCreationTimestamp="2025-01-13 21:26:08 +0000 UTC" firstStartedPulling="2025-01-13 21:26:09.401433703 +0000 UTC m=+62.286991006" lastFinishedPulling="2025-01-13 21:26:20.086539053 +0000 UTC m=+72.972096346" observedRunningTime="2025-01-13 21:26:20.379857308 +0000 UTC m=+73.265414601" watchObservedRunningTime="2025-01-13 21:26:20.379932759 +0000 UTC m=+73.265490052"
Jan 13 21:26:20.687772 kubelet[1799]: E0113 21:26:20.687639 1799 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests"
Jan 13 21:26:21.371697 kubelet[1799]: E0113 21:26:21.371651 1799 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Jan 13 21:26:21.688102 kubelet[1799]: E0113 21:26:21.687964 1799 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests"
Jan 13 21:26:21.972100 systemd[1]: run-containerd-runc-k8s.io-b9255c2e45e3af2769dbc2d0b289c9b980186f6b6d44784ccae6e929df9d17c9-runc.Bj57rj.mount: Deactivated successfully.
Jan 13 21:26:22.688849 kubelet[1799]: E0113 21:26:22.688775 1799 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests"
Jan 13 21:26:23.689440 kubelet[1799]: E0113 21:26:23.689365 1799 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests"
Jan 13 21:26:24.690318 kubelet[1799]: E0113 21:26:24.690235 1799 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests"
Jan 13 21:26:25.691005 kubelet[1799]: E0113 21:26:25.691005 1799 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests"