Jan 13 21:29:30.905752 kernel: Linux version 6.6.71-flatcar (build@pony-truck.infra.kinvolk.io) (x86_64-cros-linux-gnu-gcc (Gentoo Hardened 13.3.1_p20240614 p17) 13.3.1 20240614, GNU ld (Gentoo 2.42 p3) 2.42.0) #1 SMP PREEMPT_DYNAMIC Mon Jan 13 19:40:50 -00 2025
Jan 13 21:29:30.905781 kernel: Command line: BOOT_IMAGE=/flatcar/vmlinuz-a mount.usr=/dev/mapper/usr verity.usr=PARTUUID=7130c94a-213a-4e5a-8e26-6cce9662f132 rootflags=rw mount.usrflags=ro consoleblank=0 root=LABEL=ROOT console=ttyS0,115200 flatcar.first_boot=detected verity.usrhash=8945029ddd0f3864592f8746dde99cfcba228e0d3cb946f5938103dbe8733507
Jan 13 21:29:30.905796 kernel: BIOS-provided physical RAM map:
Jan 13 21:29:30.905805 kernel: BIOS-e820: [mem 0x0000000000000000-0x000000000009ffff] usable
Jan 13 21:29:30.905813 kernel: BIOS-e820: [mem 0x0000000000100000-0x00000000007fffff] usable
Jan 13 21:29:30.905821 kernel: BIOS-e820: [mem 0x0000000000800000-0x0000000000807fff] ACPI NVS
Jan 13 21:29:30.905831 kernel: BIOS-e820: [mem 0x0000000000808000-0x000000000080afff] usable
Jan 13 21:29:30.905839 kernel: BIOS-e820: [mem 0x000000000080b000-0x000000000080bfff] ACPI NVS
Jan 13 21:29:30.905848 kernel: BIOS-e820: [mem 0x000000000080c000-0x000000000080ffff] usable
Jan 13 21:29:30.905856 kernel: BIOS-e820: [mem 0x0000000000810000-0x00000000008fffff] ACPI NVS
Jan 13 21:29:30.905869 kernel: BIOS-e820: [mem 0x0000000000900000-0x000000009c8eefff] usable
Jan 13 21:29:30.905877 kernel: BIOS-e820: [mem 0x000000009c8ef000-0x000000009c9eefff] reserved
Jan 13 21:29:30.905885 kernel: BIOS-e820: [mem 0x000000009c9ef000-0x000000009caeefff] type 20
Jan 13 21:29:30.905895 kernel: BIOS-e820: [mem 0x000000009caef000-0x000000009cb6efff] reserved
Jan 13 21:29:30.905906 kernel: BIOS-e820: [mem 0x000000009cb6f000-0x000000009cb7efff] ACPI data
Jan 13 21:29:30.905916 kernel: BIOS-e820: [mem 0x000000009cb7f000-0x000000009cbfefff] ACPI NVS
Jan 13 21:29:30.905930 kernel: BIOS-e820: [mem 0x000000009cbff000-0x000000009cf3ffff] usable
Jan 13 21:29:30.905963 kernel: BIOS-e820: [mem 0x000000009cf40000-0x000000009cf5ffff] reserved
Jan 13 21:29:30.905974 kernel: BIOS-e820: [mem 0x000000009cf60000-0x000000009cffffff] ACPI NVS
Jan 13 21:29:30.905984 kernel: BIOS-e820: [mem 0x00000000b0000000-0x00000000bfffffff] reserved
Jan 13 21:29:30.905993 kernel: NX (Execute Disable) protection: active
Jan 13 21:29:30.906003 kernel: APIC: Static calls initialized
Jan 13 21:29:30.906013 kernel: efi: EFI v2.7 by EDK II
Jan 13 21:29:30.906023 kernel: efi: SMBIOS=0x9c9ab000 ACPI=0x9cb7e000 ACPI 2.0=0x9cb7e014 MEMATTR=0x9b674118
Jan 13 21:29:30.906032 kernel: SMBIOS 2.8 present.
Jan 13 21:29:30.906042 kernel: DMI: QEMU Standard PC (Q35 + ICH9, 2009), BIOS 0.0.0 02/06/2015
Jan 13 21:29:30.906052 kernel: Hypervisor detected: KVM
Jan 13 21:29:30.906065 kernel: kvm-clock: Using msrs 4b564d01 and 4b564d00
Jan 13 21:29:30.906075 kernel: kvm-clock: using sched offset of 3986931904 cycles
Jan 13 21:29:30.906085 kernel: clocksource: kvm-clock: mask: 0xffffffffffffffff max_cycles: 0x1cd42e4dffb, max_idle_ns: 881590591483 ns
Jan 13 21:29:30.906095 kernel: tsc: Detected 2794.748 MHz processor
Jan 13 21:29:30.906106 kernel: e820: update [mem 0x00000000-0x00000fff] usable ==> reserved
Jan 13 21:29:30.906116 kernel: e820: remove [mem 0x000a0000-0x000fffff] usable
Jan 13 21:29:30.906126 kernel: last_pfn = 0x9cf40 max_arch_pfn = 0x400000000
Jan 13 21:29:30.906136 kernel: MTRR map: 4 entries (2 fixed + 2 variable; max 18), built from 8 variable MTRRs
Jan 13 21:29:30.906147 kernel: x86/PAT: Configuration [0-7]: WB WC UC- UC WB WP UC- WT
Jan 13 21:29:30.906160 kernel: Using GB pages for direct mapping
Jan 13 21:29:30.906170 kernel: Secure boot disabled
Jan 13 21:29:30.906180 kernel: ACPI: Early table checksum verification disabled
Jan 13 21:29:30.906191 kernel: ACPI: RSDP 0x000000009CB7E014 000024 (v02 BOCHS )
Jan 13 21:29:30.906206 kernel: ACPI: XSDT 0x000000009CB7D0E8 000054 (v01 BOCHS BXPC 00000001 01000013)
Jan 13 21:29:30.906216 kernel: ACPI: FACP 0x000000009CB79000 0000F4 (v03 BOCHS BXPC 00000001 BXPC 00000001)
Jan 13 21:29:30.906227 kernel: ACPI: DSDT 0x000000009CB7A000 0021A8 (v01 BOCHS BXPC 00000001 BXPC 00000001)
Jan 13 21:29:30.906240 kernel: ACPI: FACS 0x000000009CBDD000 000040
Jan 13 21:29:30.906251 kernel: ACPI: APIC 0x000000009CB78000 000090 (v01 BOCHS BXPC 00000001 BXPC 00000001)
Jan 13 21:29:30.906262 kernel: ACPI: HPET 0x000000009CB77000 000038 (v01 BOCHS BXPC 00000001 BXPC 00000001)
Jan 13 21:29:30.906272 kernel: ACPI: MCFG 0x000000009CB76000 00003C (v01 BOCHS BXPC 00000001 BXPC 00000001)
Jan 13 21:29:30.906283 kernel: ACPI: WAET 0x000000009CB75000 000028 (v01 BOCHS BXPC 00000001 BXPC 00000001)
Jan 13 21:29:30.906294 kernel: ACPI: BGRT 0x000000009CB74000 000038 (v01 INTEL EDK2 00000002 01000013)
Jan 13 21:29:30.906304 kernel: ACPI: Reserving FACP table memory at [mem 0x9cb79000-0x9cb790f3]
Jan 13 21:29:30.906318 kernel: ACPI: Reserving DSDT table memory at [mem 0x9cb7a000-0x9cb7c1a7]
Jan 13 21:29:30.906340 kernel: ACPI: Reserving FACS table memory at [mem 0x9cbdd000-0x9cbdd03f]
Jan 13 21:29:30.906350 kernel: ACPI: Reserving APIC table memory at [mem 0x9cb78000-0x9cb7808f]
Jan 13 21:29:30.906360 kernel: ACPI: Reserving HPET table memory at [mem 0x9cb77000-0x9cb77037]
Jan 13 21:29:30.906371 kernel: ACPI: Reserving MCFG table memory at [mem 0x9cb76000-0x9cb7603b]
Jan 13 21:29:30.906382 kernel: ACPI: Reserving WAET table memory at [mem 0x9cb75000-0x9cb75027]
Jan 13 21:29:30.906392 kernel: ACPI: Reserving BGRT table memory at [mem 0x9cb74000-0x9cb74037]
Jan 13 21:29:30.906402 kernel: No NUMA configuration found
Jan 13 21:29:30.906415 kernel: Faking a node at [mem 0x0000000000000000-0x000000009cf3ffff]
Jan 13 21:29:30.906431 kernel: NODE_DATA(0) allocated [mem 0x9cea6000-0x9ceabfff]
Jan 13 21:29:30.906444 kernel: Zone ranges:
Jan 13 21:29:30.906454 kernel: DMA [mem 0x0000000000001000-0x0000000000ffffff]
Jan 13 21:29:30.906465 kernel: DMA32 [mem 0x0000000001000000-0x000000009cf3ffff]
Jan 13 21:29:30.906476 kernel: Normal empty
Jan 13 21:29:30.906486 kernel: Movable zone start for each node
Jan 13 21:29:30.906497 kernel: Early memory node ranges
Jan 13 21:29:30.906508 kernel: node 0: [mem 0x0000000000001000-0x000000000009ffff]
Jan 13 21:29:30.906519 kernel: node 0: [mem 0x0000000000100000-0x00000000007fffff]
Jan 13 21:29:30.906529 kernel: node 0: [mem 0x0000000000808000-0x000000000080afff]
Jan 13 21:29:30.906543 kernel: node 0: [mem 0x000000000080c000-0x000000000080ffff]
Jan 13 21:29:30.906554 kernel: node 0: [mem 0x0000000000900000-0x000000009c8eefff]
Jan 13 21:29:30.906565 kernel: node 0: [mem 0x000000009cbff000-0x000000009cf3ffff]
Jan 13 21:29:30.906575 kernel: Initmem setup node 0 [mem 0x0000000000001000-0x000000009cf3ffff]
Jan 13 21:29:30.906586 kernel: On node 0, zone DMA: 1 pages in unavailable ranges
Jan 13 21:29:30.906597 kernel: On node 0, zone DMA: 96 pages in unavailable ranges
Jan 13 21:29:30.906607 kernel: On node 0, zone DMA: 8 pages in unavailable ranges
Jan 13 21:29:30.906617 kernel: On node 0, zone DMA: 1 pages in unavailable ranges
Jan 13 21:29:30.906627 kernel: On node 0, zone DMA: 240 pages in unavailable ranges
Jan 13 21:29:30.906642 kernel: On node 0, zone DMA32: 784 pages in unavailable ranges
Jan 13 21:29:30.906653 kernel: On node 0, zone DMA32: 12480 pages in unavailable ranges
Jan 13 21:29:30.906663 kernel: ACPI: PM-Timer IO Port: 0x608
Jan 13 21:29:30.906674 kernel: ACPI: LAPIC_NMI (acpi_id[0xff] dfl dfl lint[0x1])
Jan 13 21:29:30.906685 kernel: IOAPIC[0]: apic_id 0, version 17, address 0xfec00000, GSI 0-23
Jan 13 21:29:30.906695 kernel: ACPI: INT_SRC_OVR (bus 0 bus_irq 0 global_irq 2 dfl dfl)
Jan 13 21:29:30.906705 kernel: ACPI: INT_SRC_OVR (bus 0 bus_irq 5 global_irq 5 high level)
Jan 13 21:29:30.906717 kernel: ACPI: INT_SRC_OVR (bus 0 bus_irq 9 global_irq 9 high level)
Jan 13 21:29:30.906727 kernel: ACPI: INT_SRC_OVR (bus 0 bus_irq 10 global_irq 10 high level)
Jan 13 21:29:30.906737 kernel: ACPI: INT_SRC_OVR (bus 0 bus_irq 11 global_irq 11 high level)
Jan 13 21:29:30.906752 kernel: ACPI: Using ACPI (MADT) for SMP configuration information
Jan 13 21:29:30.906763 kernel: ACPI: HPET id: 0x8086a201 base: 0xfed00000
Jan 13 21:29:30.906773 kernel: TSC deadline timer available
Jan 13 21:29:30.906783 kernel: smpboot: Allowing 4 CPUs, 0 hotplug CPUs
Jan 13 21:29:30.906794 kernel: kvm-guest: APIC: eoi() replaced with kvm_guest_apic_eoi_write()
Jan 13 21:29:30.906805 kernel: kvm-guest: KVM setup pv remote TLB flush
Jan 13 21:29:30.906815 kernel: kvm-guest: setup PV sched yield
Jan 13 21:29:30.906825 kernel: [mem 0xc0000000-0xffffffff] available for PCI devices
Jan 13 21:29:30.906836 kernel: Booting paravirtualized kernel on KVM
Jan 13 21:29:30.906850 kernel: clocksource: refined-jiffies: mask: 0xffffffff max_cycles: 0xffffffff, max_idle_ns: 1910969940391419 ns
Jan 13 21:29:30.906860 kernel: setup_percpu: NR_CPUS:512 nr_cpumask_bits:4 nr_cpu_ids:4 nr_node_ids:1
Jan 13 21:29:30.906871 kernel: percpu: Embedded 58 pages/cpu s197032 r8192 d32344 u524288
Jan 13 21:29:30.906882 kernel: pcpu-alloc: s197032 r8192 d32344 u524288 alloc=1*2097152
Jan 13 21:29:30.906892 kernel: pcpu-alloc: [0] 0 1 2 3
Jan 13 21:29:30.906903 kernel: kvm-guest: PV spinlocks enabled
Jan 13 21:29:30.906914 kernel: PV qspinlock hash table entries: 256 (order: 0, 4096 bytes, linear)
Jan 13 21:29:30.906926 kernel: Kernel command line: rootflags=rw mount.usrflags=ro BOOT_IMAGE=/flatcar/vmlinuz-a mount.usr=/dev/mapper/usr verity.usr=PARTUUID=7130c94a-213a-4e5a-8e26-6cce9662f132 rootflags=rw mount.usrflags=ro consoleblank=0 root=LABEL=ROOT console=ttyS0,115200 flatcar.first_boot=detected verity.usrhash=8945029ddd0f3864592f8746dde99cfcba228e0d3cb946f5938103dbe8733507
Jan 13 21:29:30.906955 kernel: Unknown kernel command line parameters "BOOT_IMAGE=/flatcar/vmlinuz-a", will be passed to user space.
Jan 13 21:29:30.906966 kernel: random: crng init done
Jan 13 21:29:30.906976 kernel: Dentry cache hash table entries: 524288 (order: 10, 4194304 bytes, linear)
Jan 13 21:29:30.906987 kernel: Inode-cache hash table entries: 262144 (order: 9, 2097152 bytes, linear)
Jan 13 21:29:30.906997 kernel: Fallback order for Node 0: 0
Jan 13 21:29:30.907008 kernel: Built 1 zonelists, mobility grouping on. Total pages: 629759
Jan 13 21:29:30.907018 kernel: Policy zone: DMA32
Jan 13 21:29:30.907029 kernel: mem auto-init: stack:off, heap alloc:off, heap free:off
Jan 13 21:29:30.907040 kernel: Memory: 2395616K/2567000K available (12288K kernel code, 2299K rwdata, 22728K rodata, 42844K init, 2348K bss, 171124K reserved, 0K cma-reserved)
Jan 13 21:29:30.907055 kernel: SLUB: HWalign=64, Order=0-3, MinObjects=0, CPUs=4, Nodes=1
Jan 13 21:29:30.907065 kernel: ftrace: allocating 37918 entries in 149 pages
Jan 13 21:29:30.907076 kernel: ftrace: allocated 149 pages with 4 groups
Jan 13 21:29:30.907087 kernel: Dynamic Preempt: voluntary
Jan 13 21:29:30.907107 kernel: rcu: Preemptible hierarchical RCU implementation.
Jan 13 21:29:30.907122 kernel: rcu: RCU event tracing is enabled.
Jan 13 21:29:30.907134 kernel: rcu: RCU restricting CPUs from NR_CPUS=512 to nr_cpu_ids=4.
Jan 13 21:29:30.907145 kernel: Trampoline variant of Tasks RCU enabled.
Jan 13 21:29:30.907156 kernel: Rude variant of Tasks RCU enabled.
Jan 13 21:29:30.907167 kernel: Tracing variant of Tasks RCU enabled.
Jan 13 21:29:30.907178 kernel: rcu: RCU calculated value of scheduler-enlistment delay is 100 jiffies.
Jan 13 21:29:30.907189 kernel: rcu: Adjusting geometry for rcu_fanout_leaf=16, nr_cpu_ids=4
Jan 13 21:29:30.907204 kernel: NR_IRQS: 33024, nr_irqs: 456, preallocated irqs: 16
Jan 13 21:29:30.907215 kernel: rcu: srcu_init: Setting srcu_struct sizes based on contention.
Jan 13 21:29:30.907227 kernel: Console: colour dummy device 80x25
Jan 13 21:29:30.907238 kernel: printk: console [ttyS0] enabled
Jan 13 21:29:30.907249 kernel: ACPI: Core revision 20230628
Jan 13 21:29:30.907263 kernel: clocksource: hpet: mask: 0xffffffff max_cycles: 0xffffffff, max_idle_ns: 19112604467 ns
Jan 13 21:29:30.907275 kernel: APIC: Switch to symmetric I/O mode setup
Jan 13 21:29:30.907285 kernel: x2apic enabled
Jan 13 21:29:30.907297 kernel: APIC: Switched APIC routing to: physical x2apic
Jan 13 21:29:30.907308 kernel: kvm-guest: APIC: send_IPI_mask() replaced with kvm_send_ipi_mask()
Jan 13 21:29:30.907320 kernel: kvm-guest: APIC: send_IPI_mask_allbutself() replaced with kvm_send_ipi_mask_allbutself()
Jan 13 21:29:30.907343 kernel: kvm-guest: setup PV IPIs
Jan 13 21:29:30.907354 kernel: ..TIMER: vector=0x30 apic1=0 pin1=2 apic2=-1 pin2=-1
Jan 13 21:29:30.907365 kernel: tsc: Marking TSC unstable due to TSCs unsynchronized
Jan 13 21:29:30.907380 kernel: Calibrating delay loop (skipped) preset value.. 5589.49 BogoMIPS (lpj=2794748)
Jan 13 21:29:30.907391 kernel: x86/cpu: User Mode Instruction Prevention (UMIP) activated
Jan 13 21:29:30.907403 kernel: Last level iTLB entries: 4KB 512, 2MB 255, 4MB 127
Jan 13 21:29:30.907414 kernel: Last level dTLB entries: 4KB 512, 2MB 255, 4MB 127, 1GB 0
Jan 13 21:29:30.907425 kernel: Spectre V1 : Mitigation: usercopy/swapgs barriers and __user pointer sanitization
Jan 13 21:29:30.907436 kernel: Spectre V2 : Mitigation: Retpolines
Jan 13 21:29:30.907448 kernel: Spectre V2 : Spectre v2 / SpectreRSB mitigation: Filling RSB on context switch
Jan 13 21:29:30.907460 kernel: Spectre V2 : Spectre v2 / SpectreRSB : Filling RSB on VMEXIT
Jan 13 21:29:30.907470 kernel: Spectre V2 : Enabling Speculation Barrier for firmware calls
Jan 13 21:29:30.907485 kernel: RETBleed: Mitigation: untrained return thunk
Jan 13 21:29:30.907497 kernel: Spectre V2 : mitigation: Enabling conditional Indirect Branch Prediction Barrier
Jan 13 21:29:30.907508 kernel: Speculative Store Bypass: Mitigation: Speculative Store Bypass disabled via prctl
Jan 13 21:29:30.907519 kernel: Speculative Return Stack Overflow: IBPB-extending microcode not applied!
Jan 13 21:29:30.907532 kernel: Speculative Return Stack Overflow: WARNING: See https://kernel.org/doc/html/latest/admin-guide/hw-vuln/srso.html for mitigation options.
Jan 13 21:29:30.907543 kernel: Speculative Return Stack Overflow: Vulnerable: Safe RET, no microcode
Jan 13 21:29:30.907554 kernel: x86/fpu: Supporting XSAVE feature 0x001: 'x87 floating point registers'
Jan 13 21:29:30.907565 kernel: x86/fpu: Supporting XSAVE feature 0x002: 'SSE registers'
Jan 13 21:29:30.907580 kernel: x86/fpu: Supporting XSAVE feature 0x004: 'AVX registers'
Jan 13 21:29:30.907590 kernel: x86/fpu: xstate_offset[2]: 576, xstate_sizes[2]: 256
Jan 13 21:29:30.907601 kernel: x86/fpu: Enabled xstate features 0x7, context size is 832 bytes, using 'compacted' format.
Jan 13 21:29:30.907613 kernel: Freeing SMP alternatives memory: 32K
Jan 13 21:29:30.907625 kernel: pid_max: default: 32768 minimum: 301
Jan 13 21:29:30.907635 kernel: LSM: initializing lsm=lockdown,capability,landlock,selinux,integrity
Jan 13 21:29:30.907647 kernel: landlock: Up and running.
Jan 13 21:29:30.907658 kernel: SELinux: Initializing.
Jan 13 21:29:30.907670 kernel: Mount-cache hash table entries: 8192 (order: 4, 65536 bytes, linear)
Jan 13 21:29:30.907684 kernel: Mountpoint-cache hash table entries: 8192 (order: 4, 65536 bytes, linear)
Jan 13 21:29:30.907695 kernel: smpboot: CPU0: AMD EPYC 7402P 24-Core Processor (family: 0x17, model: 0x31, stepping: 0x0)
Jan 13 21:29:30.907707 kernel: RCU Tasks: Setting shift to 2 and lim to 1 rcu_task_cb_adjust=1 rcu_task_cpu_ids=4.
Jan 13 21:29:30.907718 kernel: RCU Tasks Rude: Setting shift to 2 and lim to 1 rcu_task_cb_adjust=1 rcu_task_cpu_ids=4.
Jan 13 21:29:30.907729 kernel: RCU Tasks Trace: Setting shift to 2 and lim to 1 rcu_task_cb_adjust=1 rcu_task_cpu_ids=4.
Jan 13 21:29:30.907740 kernel: Performance Events: Fam17h+ core perfctr, AMD PMU driver.
Jan 13 21:29:30.907750 kernel: ... version: 0
Jan 13 21:29:30.907760 kernel: ... bit width: 48
Jan 13 21:29:30.907771 kernel: ... generic registers: 6
Jan 13 21:29:30.907784 kernel: ... value mask: 0000ffffffffffff
Jan 13 21:29:30.907795 kernel: ... max period: 00007fffffffffff
Jan 13 21:29:30.907806 kernel: ... fixed-purpose events: 0
Jan 13 21:29:30.907818 kernel: ... event mask: 000000000000003f
Jan 13 21:29:30.907828 kernel: signal: max sigframe size: 1776
Jan 13 21:29:30.907840 kernel: rcu: Hierarchical SRCU implementation.
Jan 13 21:29:30.907851 kernel: rcu: Max phase no-delay instances is 400.
Jan 13 21:29:30.907861 kernel: smp: Bringing up secondary CPUs ...
Jan 13 21:29:30.907873 kernel: smpboot: x86: Booting SMP configuration:
Jan 13 21:29:30.907887 kernel: .... node #0, CPUs: #1 #2 #3
Jan 13 21:29:30.907898 kernel: smp: Brought up 1 node, 4 CPUs
Jan 13 21:29:30.907910 kernel: smpboot: Max logical packages: 1
Jan 13 21:29:30.907921 kernel: smpboot: Total of 4 processors activated (22357.98 BogoMIPS)
Jan 13 21:29:30.907932 kernel: devtmpfs: initialized
Jan 13 21:29:30.907958 kernel: x86/mm: Memory block size: 128MB
Jan 13 21:29:30.907969 kernel: ACPI: PM: Registering ACPI NVS region [mem 0x00800000-0x00807fff] (32768 bytes)
Jan 13 21:29:30.907981 kernel: ACPI: PM: Registering ACPI NVS region [mem 0x0080b000-0x0080bfff] (4096 bytes)
Jan 13 21:29:30.907991 kernel: ACPI: PM: Registering ACPI NVS region [mem 0x00810000-0x008fffff] (983040 bytes)
Jan 13 21:29:30.908007 kernel: ACPI: PM: Registering ACPI NVS region [mem 0x9cb7f000-0x9cbfefff] (524288 bytes)
Jan 13 21:29:30.908018 kernel: ACPI: PM: Registering ACPI NVS region [mem 0x9cf60000-0x9cffffff] (655360 bytes)
Jan 13 21:29:30.908029 kernel: clocksource: jiffies: mask: 0xffffffff max_cycles: 0xffffffff, max_idle_ns: 1911260446275000 ns
Jan 13 21:29:30.908041 kernel: futex hash table entries: 1024 (order: 4, 65536 bytes, linear)
Jan 13 21:29:30.908052 kernel: pinctrl core: initialized pinctrl subsystem
Jan 13 21:29:30.908063 kernel: NET: Registered PF_NETLINK/PF_ROUTE protocol family
Jan 13 21:29:30.908074 kernel: audit: initializing netlink subsys (disabled)
Jan 13 21:29:30.908086 kernel: audit: type=2000 audit(1736803770.141:1): state=initialized audit_enabled=0 res=1
Jan 13 21:29:30.908097 kernel: thermal_sys: Registered thermal governor 'step_wise'
Jan 13 21:29:30.908111 kernel: thermal_sys: Registered thermal governor 'user_space'
Jan 13 21:29:30.908123 kernel: cpuidle: using governor menu
Jan 13 21:29:30.908135 kernel: acpiphp: ACPI Hot Plug PCI Controller Driver version: 0.5
Jan 13 21:29:30.908146 kernel: dca service started, version 1.12.1
Jan 13 21:29:30.908158 kernel: PCI: MMCONFIG for domain 0000 [bus 00-ff] at [mem 0xb0000000-0xbfffffff] (base 0xb0000000)
Jan 13 21:29:30.908169 kernel: PCI: MMCONFIG at [mem 0xb0000000-0xbfffffff] reserved as E820 entry
Jan 13 21:29:30.908180 kernel: PCI: Using configuration type 1 for base access
Jan 13 21:29:30.908191 kernel: kprobes: kprobe jump-optimization is enabled. All kprobes are optimized if possible.
Jan 13 21:29:30.908202 kernel: HugeTLB: registered 1.00 GiB page size, pre-allocated 0 pages
Jan 13 21:29:30.908218 kernel: HugeTLB: 16380 KiB vmemmap can be freed for a 1.00 GiB page
Jan 13 21:29:30.908229 kernel: HugeTLB: registered 2.00 MiB page size, pre-allocated 0 pages
Jan 13 21:29:30.908240 kernel: HugeTLB: 28 KiB vmemmap can be freed for a 2.00 MiB page
Jan 13 21:29:30.908252 kernel: ACPI: Added _OSI(Module Device)
Jan 13 21:29:30.908263 kernel: ACPI: Added _OSI(Processor Device)
Jan 13 21:29:30.908274 kernel: ACPI: Added _OSI(3.0 _SCP Extensions)
Jan 13 21:29:30.908285 kernel: ACPI: Added _OSI(Processor Aggregator Device)
Jan 13 21:29:30.908297 kernel: ACPI: 1 ACPI AML tables successfully acquired and loaded
Jan 13 21:29:30.908308 kernel: ACPI: _OSC evaluation for CPUs failed, trying _PDC
Jan 13 21:29:30.908334 kernel: ACPI: Interpreter enabled
Jan 13 21:29:30.908345 kernel: ACPI: PM: (supports S0 S3 S5)
Jan 13 21:29:30.908357 kernel: ACPI: Using IOAPIC for interrupt routing
Jan 13 21:29:30.908368 kernel: PCI: Using host bridge windows from ACPI; if necessary, use "pci=nocrs" and report a bug
Jan 13 21:29:30.908379 kernel: PCI: Using E820 reservations for host bridge windows
Jan 13 21:29:30.908390 kernel: ACPI: Enabled 2 GPEs in block 00 to 3F
Jan 13 21:29:30.908402 kernel: ACPI: PCI Root Bridge [PCI0] (domain 0000 [bus 00-ff])
Jan 13 21:29:30.908630 kernel: acpi PNP0A08:00: _OSC: OS supports [ExtendedConfig ASPM ClockPM Segments MSI HPX-Type3]
Jan 13 21:29:30.908813 kernel: acpi PNP0A08:00: _OSC: platform does not support [PCIeHotplug LTR]
Jan 13 21:29:30.908995 kernel: acpi PNP0A08:00: _OSC: OS now controls [PME AER PCIeCapability]
Jan 13 21:29:30.909013 kernel: PCI host bridge to bus 0000:00
Jan 13 21:29:30.909179 kernel: pci_bus 0000:00: root bus resource [io 0x0000-0x0cf7 window]
Jan 13 21:29:30.909340 kernel: pci_bus 0000:00: root bus resource [io 0x0d00-0xffff window]
Jan 13 21:29:30.909491 kernel: pci_bus 0000:00: root bus resource [mem 0x000a0000-0x000bffff window]
Jan 13 21:29:30.909637 kernel: pci_bus 0000:00: root bus resource [mem 0x9d000000-0xafffffff window]
Jan 13 21:29:30.909792 kernel: pci_bus 0000:00: root bus resource [mem 0xc0000000-0xfebfffff window]
Jan 13 21:29:30.909991 kernel: pci_bus 0000:00: root bus resource [mem 0x800000000-0xfffffffff window]
Jan 13 21:29:30.910147 kernel: pci_bus 0000:00: root bus resource [bus 00-ff]
Jan 13 21:29:30.910340 kernel: pci 0000:00:00.0: [8086:29c0] type 00 class 0x060000
Jan 13 21:29:30.910547 kernel: pci 0000:00:01.0: [1234:1111] type 00 class 0x030000
Jan 13 21:29:30.910710 kernel: pci 0000:00:01.0: reg 0x10: [mem 0xc0000000-0xc0ffffff pref]
Jan 13 21:29:30.910877 kernel: pci 0000:00:01.0: reg 0x18: [mem 0xc1044000-0xc1044fff]
Jan 13 21:29:30.911058 kernel: pci 0000:00:01.0: reg 0x30: [mem 0xffff0000-0xffffffff pref]
Jan 13 21:29:30.911223 kernel: pci 0000:00:01.0: BAR 0: assigned to efifb
Jan 13 21:29:30.911397 kernel: pci 0000:00:01.0: Video device with shadowed ROM at [mem 0x000c0000-0x000dffff]
Jan 13 21:29:30.911572 kernel: pci 0000:00:02.0: [1af4:1005] type 00 class 0x00ff00
Jan 13 21:29:30.911736 kernel: pci 0000:00:02.0: reg 0x10: [io 0x6100-0x611f]
Jan 13 21:29:30.911902 kernel: pci 0000:00:02.0: reg 0x14: [mem 0xc1043000-0xc1043fff]
Jan 13 21:29:30.912093 kernel: pci 0000:00:02.0: reg 0x20: [mem 0x800000000-0x800003fff 64bit pref]
Jan 13 21:29:30.912278 kernel: pci 0000:00:03.0: [1af4:1001] type 00 class 0x010000
Jan 13 21:29:30.912461 kernel: pci 0000:00:03.0: reg 0x10: [io 0x6000-0x607f]
Jan 13 21:29:30.912627 kernel: pci 0000:00:03.0: reg 0x14: [mem 0xc1042000-0xc1042fff]
Jan 13 21:29:30.912793 kernel: pci 0000:00:03.0: reg 0x20: [mem 0x800004000-0x800007fff 64bit pref]
Jan 13 21:29:30.912994 kernel: pci 0000:00:04.0: [1af4:1000] type 00 class 0x020000
Jan 13 21:29:30.913161 kernel: pci 0000:00:04.0: reg 0x10: [io 0x60e0-0x60ff]
Jan 13 21:29:30.913338 kernel: pci 0000:00:04.0: reg 0x14: [mem 0xc1041000-0xc1041fff]
Jan 13 21:29:30.913506 kernel: pci 0000:00:04.0: reg 0x20: [mem 0x800008000-0x80000bfff 64bit pref]
Jan 13 21:29:30.913665 kernel: pci 0000:00:04.0: reg 0x30: [mem 0xfffc0000-0xffffffff pref]
Jan 13 21:29:30.913839 kernel: pci 0000:00:1f.0: [8086:2918] type 00 class 0x060100
Jan 13 21:29:30.914055 kernel: pci 0000:00:1f.0: quirk: [io 0x0600-0x067f] claimed by ICH6 ACPI/GPIO/TCO
Jan 13 21:29:30.914227 kernel: pci 0000:00:1f.2: [8086:2922] type 00 class 0x010601
Jan 13 21:29:30.914395 kernel: pci 0000:00:1f.2: reg 0x20: [io 0x60c0-0x60df]
Jan 13 21:29:30.914554 kernel: pci 0000:00:1f.2: reg 0x24: [mem 0xc1040000-0xc1040fff]
Jan 13 21:29:30.914728 kernel: pci 0000:00:1f.3: [8086:2930] type 00 class 0x0c0500
Jan 13 21:29:30.914887 kernel: pci 0000:00:1f.3: reg 0x20: [io 0x6080-0x60bf]
Jan 13 21:29:30.914904 kernel: ACPI: PCI: Interrupt link LNKA configured for IRQ 10
Jan 13 21:29:30.914916 kernel: ACPI: PCI: Interrupt link LNKB configured for IRQ 10
Jan 13 21:29:30.914927 kernel: ACPI: PCI: Interrupt link LNKC configured for IRQ 11
Jan 13 21:29:30.914954 kernel: ACPI: PCI: Interrupt link LNKD configured for IRQ 11
Jan 13 21:29:30.914971 kernel: ACPI: PCI: Interrupt link LNKE configured for IRQ 10
Jan 13 21:29:30.914983 kernel: ACPI: PCI: Interrupt link LNKF configured for IRQ 10
Jan 13 21:29:30.914994 kernel: ACPI: PCI: Interrupt link LNKG configured for IRQ 11
Jan 13 21:29:30.915005 kernel: ACPI: PCI: Interrupt link LNKH configured for IRQ 11
Jan 13 21:29:30.915016 kernel: ACPI: PCI: Interrupt link GSIA configured for IRQ 16
Jan 13 21:29:30.915027 kernel: ACPI: PCI: Interrupt link GSIB configured for IRQ 17
Jan 13 21:29:30.915038 kernel: ACPI: PCI: Interrupt link GSIC configured for IRQ 18
Jan 13 21:29:30.915048 kernel: ACPI: PCI: Interrupt link GSID configured for IRQ 19
Jan 13 21:29:30.915059 kernel: ACPI: PCI: Interrupt link GSIE configured for IRQ 20
Jan 13 21:29:30.915075 kernel: ACPI: PCI: Interrupt link GSIF configured for IRQ 21
Jan 13 21:29:30.915086 kernel: ACPI: PCI: Interrupt link GSIG configured for IRQ 22
Jan 13 21:29:30.915096 kernel: ACPI: PCI: Interrupt link GSIH configured for IRQ 23
Jan 13 21:29:30.915107 kernel: iommu: Default domain type: Translated
Jan 13 21:29:30.915119 kernel: iommu: DMA domain TLB invalidation policy: lazy mode
Jan 13 21:29:30.915130 kernel: efivars: Registered efivars operations
Jan 13 21:29:30.915141 kernel: PCI: Using ACPI for IRQ routing
Jan 13 21:29:30.915152 kernel: PCI: pci_cache_line_size set to 64 bytes
Jan 13 21:29:30.915163 kernel: e820: reserve RAM buffer [mem 0x0080b000-0x008fffff]
Jan 13 21:29:30.915178 kernel: e820: reserve RAM buffer [mem 0x00810000-0x008fffff]
Jan 13 21:29:30.915189 kernel: e820: reserve RAM buffer [mem 0x9c8ef000-0x9fffffff]
Jan 13 21:29:30.915199 kernel: e820: reserve RAM buffer [mem 0x9cf40000-0x9fffffff]
Jan 13 21:29:30.915377 kernel: pci 0000:00:01.0: vgaarb: setting as boot VGA device
Jan 13 21:29:30.915541 kernel: pci 0000:00:01.0: vgaarb: bridge control possible
Jan 13 21:29:30.915710 kernel: pci 0000:00:01.0: vgaarb: VGA device added: decodes=io+mem,owns=io+mem,locks=none
Jan 13 21:29:30.915726 kernel: vgaarb: loaded
Jan 13 21:29:30.915737 kernel: hpet0: at MMIO 0xfed00000, IRQs 2, 8, 0
Jan 13 21:29:30.915747 kernel: hpet0: 3 comparators, 64-bit 100.000000 MHz counter
Jan 13 21:29:30.915762 kernel: clocksource: Switched to clocksource kvm-clock
Jan 13 21:29:30.915772 kernel: VFS: Disk quotas dquot_6.6.0
Jan 13 21:29:30.915783 kernel: VFS: Dquot-cache hash table entries: 512 (order 0, 4096 bytes)
Jan 13 21:29:30.915794 kernel: pnp: PnP ACPI init
Jan 13 21:29:30.916042 kernel: system 00:05: [mem 0xb0000000-0xbfffffff window] has been reserved
Jan 13 21:29:30.916061 kernel: pnp: PnP ACPI: found 6 devices
Jan 13 21:29:30.916072 kernel: clocksource: acpi_pm: mask: 0xffffff max_cycles: 0xffffff, max_idle_ns: 2085701024 ns
Jan 13 21:29:30.916083 kernel: NET: Registered PF_INET protocol family
Jan 13 21:29:30.916099 kernel: IP idents hash table entries: 65536 (order: 7, 524288 bytes, linear)
Jan 13 21:29:30.916109 kernel: tcp_listen_portaddr_hash hash table entries: 2048 (order: 3, 32768 bytes, linear)
Jan 13 21:29:30.916120 kernel: Table-perturb hash table entries: 65536 (order: 6, 262144 bytes, linear)
Jan 13 21:29:30.916130 kernel: TCP established hash table entries: 32768 (order: 6, 262144 bytes, linear)
Jan 13 21:29:30.916141 kernel: TCP bind hash table entries: 32768 (order: 8, 1048576 bytes, linear)
Jan 13 21:29:30.916151 kernel: TCP: Hash tables configured (established 32768 bind 32768)
Jan 13 21:29:30.916162 kernel: UDP hash table entries: 2048 (order: 4, 65536 bytes, linear)
Jan 13 21:29:30.916172 kernel: UDP-Lite hash table entries: 2048 (order: 4, 65536 bytes, linear)
Jan 13 21:29:30.916183 kernel: NET: Registered PF_UNIX/PF_LOCAL protocol family
Jan 13 21:29:30.916198 kernel: NET: Registered PF_XDP protocol family
Jan 13 21:29:30.916377 kernel: pci 0000:00:04.0: can't claim BAR 6 [mem 0xfffc0000-0xffffffff pref]: no compatible bridge window
Jan 13 21:29:30.916544 kernel: pci 0000:00:04.0: BAR 6: assigned [mem 0x9d000000-0x9d03ffff pref]
Jan 13 21:29:30.916697 kernel: pci_bus 0000:00: resource 4 [io 0x0000-0x0cf7 window]
Jan 13 21:29:30.916846 kernel: pci_bus 0000:00: resource 5 [io 0x0d00-0xffff window]
Jan 13 21:29:30.917024 kernel: pci_bus 0000:00: resource 6 [mem 0x000a0000-0x000bffff window]
Jan 13 21:29:30.917175 kernel: pci_bus 0000:00: resource 7 [mem 0x9d000000-0xafffffff window]
Jan 13 21:29:30.917334 kernel: pci_bus 0000:00: resource 8 [mem 0xc0000000-0xfebfffff window]
Jan 13 21:29:30.917491 kernel: pci_bus 0000:00: resource 9 [mem 0x800000000-0xfffffffff window]
Jan 13 21:29:30.917508 kernel: PCI: CLS 0 bytes, default 64
Jan 13 21:29:30.917519 kernel: Initialise system trusted keyrings
Jan 13 21:29:30.917530 kernel: workingset: timestamp_bits=39 max_order=20 bucket_order=0
Jan 13 21:29:30.917541 kernel: Key type asymmetric registered
Jan 13 21:29:30.917553 kernel: Asymmetric key parser 'x509' registered
Jan 13 21:29:30.917564 kernel: Block layer SCSI generic (bsg) driver version 0.4 loaded (major 251)
Jan 13 21:29:30.917575 kernel: io scheduler mq-deadline registered
Jan 13 21:29:30.917585 kernel: io scheduler kyber registered
Jan 13 21:29:30.917600 kernel: io scheduler bfq registered
Jan 13 21:29:30.917611 kernel: ioatdma: Intel(R) QuickData Technology Driver 5.00
Jan 13 21:29:30.917623 kernel: ACPI: \_SB_.GSIG: Enabled at IRQ 22
Jan 13 21:29:30.917634 kernel: ACPI: \_SB_.GSIH: Enabled at IRQ 23
Jan 13 21:29:30.917645 kernel: ACPI: \_SB_.GSIE: Enabled at IRQ 20
Jan 13 21:29:30.917655 kernel: Serial: 8250/16550 driver, 4 ports, IRQ sharing enabled
Jan 13 21:29:30.917666 kernel: 00:03: ttyS0 at I/O 0x3f8 (irq = 4, base_baud = 115200) is a 16550A
Jan 13 21:29:30.917677 kernel: i8042: PNP: PS/2 Controller [PNP0303:KBD,PNP0f13:MOU] at 0x60,0x64 irq 1,12
Jan 13 21:29:30.917687 kernel: serio: i8042 KBD port at 0x60,0x64 irq 1
Jan 13 21:29:30.917701 kernel: serio: i8042 AUX port at 0x60,0x64 irq 12
Jan 13 21:29:30.917875 kernel: rtc_cmos 00:04: RTC can wake from S4
Jan 13 21:29:30.918069 kernel: rtc_cmos 00:04: registered as rtc0
Jan 13 21:29:30.918086 kernel: input: AT Translated Set 2 keyboard as /devices/platform/i8042/serio0/input/input0
Jan 13 21:29:30.918232 kernel: rtc_cmos 00:04: setting system clock to 2025-01-13T21:29:30 UTC (1736803770)
Jan 13 21:29:30.918395 kernel: rtc_cmos 00:04: alarms up to one day, y3k, 242 bytes nvram, hpet irqs
Jan 13 21:29:30.918412 kernel: amd_pstate: the _CPC object is not present in SBIOS or ACPI disabled
Jan 13 21:29:30.918428 kernel: efifb: probing for efifb
Jan 13 21:29:30.918438 kernel: efifb: framebuffer at 0xc0000000, using 1408k, total 1408k
Jan 13 21:29:30.918448 kernel: efifb: mode is 800x600x24, linelength=2400, pages=1
Jan 13 21:29:30.918458 kernel: efifb: scrolling: redraw
Jan 13 21:29:30.918469 kernel: efifb: Truecolor: size=0:8:8:8, shift=0:16:8:0
Jan 13 21:29:30.918480 kernel: Console: switching to colour frame buffer device 100x37
Jan 13 21:29:30.918515 kernel: fb0: EFI VGA frame buffer device
Jan 13 21:29:30.918529 kernel: pstore: Using crash dump compression: deflate
Jan 13 21:29:30.918540 kernel: pstore: Registered efi_pstore as persistent store backend
Jan 13 21:29:30.918553 kernel: NET: Registered PF_INET6 protocol family
Jan 13 21:29:30.918564 kernel: Segment Routing with IPv6
Jan 13 21:29:30.918575 kernel: In-situ OAM (IOAM) with IPv6
Jan 13 21:29:30.918586 kernel: NET: Registered PF_PACKET protocol family
Jan 13 21:29:30.918598 kernel: Key type dns_resolver registered
Jan 13 21:29:30.918609 kernel: IPI shorthand broadcast: enabled
Jan 13 21:29:30.918621 kernel: sched_clock: Marking stable (841002547, 114801145)->(969660609, -13856917)
Jan 13 21:29:30.918632 kernel: registered taskstats version 1
Jan 13 21:29:30.918643 kernel: Loading compiled-in X.509 certificates
Jan 13 21:29:30.918655 kernel: Loaded X.509 cert 'Kinvolk GmbH: Module signing key for 6.6.71-flatcar: e8ca4908f7ff887d90a0430272c92dde55624447'
Jan 13 21:29:30.918670 kernel: Key type .fscrypt registered
Jan 13 21:29:30.918681 kernel: Key type fscrypt-provisioning registered
Jan 13 21:29:30.918693 kernel: ima: No TPM chip found, activating TPM-bypass!
Jan 13 21:29:30.918705 kernel: ima: Allocated hash algorithm: sha1 Jan 13 21:29:30.918715 kernel: ima: No architecture policies found Jan 13 21:29:30.918727 kernel: clk: Disabling unused clocks Jan 13 21:29:30.918739 kernel: Freeing unused kernel image (initmem) memory: 42844K Jan 13 21:29:30.918751 kernel: Write protecting the kernel read-only data: 36864k Jan 13 21:29:30.918765 kernel: Freeing unused kernel image (rodata/data gap) memory: 1848K Jan 13 21:29:30.918776 kernel: Run /init as init process Jan 13 21:29:30.918787 kernel: with arguments: Jan 13 21:29:30.918798 kernel: /init Jan 13 21:29:30.918809 kernel: with environment: Jan 13 21:29:30.918820 kernel: HOME=/ Jan 13 21:29:30.918830 kernel: TERM=linux Jan 13 21:29:30.918841 kernel: BOOT_IMAGE=/flatcar/vmlinuz-a Jan 13 21:29:30.918855 systemd[1]: systemd 255 running in system mode (+PAM +AUDIT +SELINUX -APPARMOR +IMA +SMACK +SECCOMP +GCRYPT -GNUTLS +OPENSSL -ACL +BLKID +CURL +ELFUTILS -FIDO2 +IDN2 -IDN +IPTC +KMOD +LIBCRYPTSETUP +LIBFDISK +PCRE2 -PWQUALITY -P11KIT -QRENCODE +TPM2 +BZIP2 +LZ4 +XZ +ZLIB +ZSTD -BPF_FRAMEWORK -XKBCOMMON +UTMP -SYSVINIT default-hierarchy=unified) Jan 13 21:29:30.918872 systemd[1]: Detected virtualization kvm. Jan 13 21:29:30.918884 systemd[1]: Detected architecture x86-64. Jan 13 21:29:30.918895 systemd[1]: Running in initrd. Jan 13 21:29:30.918912 systemd[1]: No hostname configured, using default hostname. Jan 13 21:29:30.918927 systemd[1]: Hostname set to . Jan 13 21:29:30.918940 systemd[1]: Initializing machine ID from VM UUID. Jan 13 21:29:30.918968 systemd[1]: Queued start job for default target initrd.target. Jan 13 21:29:30.918981 systemd[1]: Started clevis-luks-askpass.path - Forward Password Requests to Clevis Directory Watch. Jan 13 21:29:30.918994 systemd[1]: Started systemd-ask-password-console.path - Dispatch Password Requests to Console Directory Watch. 
Jan 13 21:29:30.919007 systemd[1]: Expecting device dev-disk-by\x2dlabel-EFI\x2dSYSTEM.device - /dev/disk/by-label/EFI-SYSTEM... Jan 13 21:29:30.919020 systemd[1]: Expecting device dev-disk-by\x2dlabel-OEM.device - /dev/disk/by-label/OEM... Jan 13 21:29:30.919032 systemd[1]: Expecting device dev-disk-by\x2dlabel-ROOT.device - /dev/disk/by-label/ROOT... Jan 13 21:29:30.919048 systemd[1]: Expecting device dev-disk-by\x2dpartlabel-USR\x2dA.device - /dev/disk/by-partlabel/USR-A... Jan 13 21:29:30.919063 systemd[1]: Expecting device dev-disk-by\x2dpartuuid-7130c94a\x2d213a\x2d4e5a\x2d8e26\x2d6cce9662f132.device - /dev/disk/by-partuuid/7130c94a-213a-4e5a-8e26-6cce9662f132... Jan 13 21:29:30.919076 systemd[1]: Expecting device dev-mapper-usr.device - /dev/mapper/usr... Jan 13 21:29:30.919088 systemd[1]: Reached target cryptsetup-pre.target - Local Encrypted Volumes (Pre). Jan 13 21:29:30.919099 systemd[1]: Reached target cryptsetup.target - Local Encrypted Volumes. Jan 13 21:29:30.919111 systemd[1]: Reached target paths.target - Path Units. Jan 13 21:29:30.919123 systemd[1]: Reached target slices.target - Slice Units. Jan 13 21:29:30.919139 systemd[1]: Reached target swap.target - Swaps. Jan 13 21:29:30.919151 systemd[1]: Reached target timers.target - Timer Units. Jan 13 21:29:30.919162 systemd[1]: Listening on iscsid.socket - Open-iSCSI iscsid Socket. Jan 13 21:29:30.919173 systemd[1]: Listening on iscsiuio.socket - Open-iSCSI iscsiuio Socket. Jan 13 21:29:30.919185 systemd[1]: Listening on systemd-journald-dev-log.socket - Journal Socket (/dev/log). Jan 13 21:29:30.919197 systemd[1]: Listening on systemd-journald.socket - Journal Socket. Jan 13 21:29:30.919209 systemd[1]: Listening on systemd-networkd.socket - Network Service Netlink Socket. Jan 13 21:29:30.919222 systemd[1]: Listening on systemd-udevd-control.socket - udev Control Socket. Jan 13 21:29:30.919239 systemd[1]: Listening on systemd-udevd-kernel.socket - udev Kernel Socket. 
Jan 13 21:29:30.919250 systemd[1]: Reached target sockets.target - Socket Units. Jan 13 21:29:30.919262 systemd[1]: Starting ignition-setup-pre.service - Ignition env setup... Jan 13 21:29:30.919274 systemd[1]: Starting kmod-static-nodes.service - Create List of Static Device Nodes... Jan 13 21:29:30.919286 systemd[1]: Finished network-cleanup.service - Network Cleanup. Jan 13 21:29:30.919299 systemd[1]: Starting systemd-fsck-usr.service... Jan 13 21:29:30.919311 systemd[1]: Starting systemd-journald.service - Journal Service... Jan 13 21:29:30.919333 systemd[1]: Starting systemd-modules-load.service - Load Kernel Modules... Jan 13 21:29:30.919345 systemd[1]: Starting systemd-vconsole-setup.service - Virtual Console Setup... Jan 13 21:29:30.919362 systemd[1]: Finished ignition-setup-pre.service - Ignition env setup. Jan 13 21:29:30.919375 systemd[1]: Finished kmod-static-nodes.service - Create List of Static Device Nodes. Jan 13 21:29:30.919386 systemd[1]: Finished systemd-fsck-usr.service. Jan 13 21:29:30.919422 systemd-journald[193]: Collecting audit messages is disabled. Jan 13 21:29:30.919454 systemd[1]: Starting systemd-tmpfiles-setup-dev-early.service - Create Static Device Nodes in /dev gracefully... Jan 13 21:29:30.919467 systemd[1]: Finished systemd-vconsole-setup.service - Virtual Console Setup. Jan 13 21:29:30.919479 systemd[1]: Starting dracut-cmdline-ask.service - dracut ask for additional cmdline parameters... Jan 13 21:29:30.919491 systemd[1]: Finished systemd-tmpfiles-setup-dev-early.service - Create Static Device Nodes in /dev gracefully. Jan 13 21:29:30.919507 systemd[1]: Starting systemd-tmpfiles-setup-dev.service - Create Static Device Nodes in /dev... Jan 13 21:29:30.919520 systemd-journald[193]: Journal started Jan 13 21:29:30.919545 systemd-journald[193]: Runtime Journal (/run/log/journal/517c5bdbcaaa493ca6cbdbce93536e03) is 6.0M, max 48.3M, 42.2M free. 
Jan 13 21:29:30.900578 systemd-modules-load[194]: Inserted module 'overlay' Jan 13 21:29:30.921964 systemd[1]: Started systemd-journald.service - Journal Service. Jan 13 21:29:30.925306 systemd[1]: Starting systemd-tmpfiles-setup.service - Create System Files and Directories... Jan 13 21:29:30.933130 kernel: bridge: filtering via arp/ip/ip6tables is no longer available by default. Update your scripts to load br_netfilter if you need this. Jan 13 21:29:30.935451 kernel: Bridge firewalling registered Jan 13 21:29:30.934591 systemd-modules-load[194]: Inserted module 'br_netfilter' Jan 13 21:29:30.935769 systemd[1]: Finished systemd-modules-load.service - Load Kernel Modules. Jan 13 21:29:30.936492 systemd[1]: Finished systemd-tmpfiles-setup-dev.service - Create Static Device Nodes in /dev. Jan 13 21:29:30.941357 systemd[1]: Starting systemd-sysctl.service - Apply Kernel Variables... Jan 13 21:29:30.942113 systemd[1]: Finished systemd-tmpfiles-setup.service - Create System Files and Directories. Jan 13 21:29:30.951605 systemd[1]: Finished dracut-cmdline-ask.service - dracut ask for additional cmdline parameters. Jan 13 21:29:30.966102 systemd[1]: Starting dracut-cmdline.service - dracut cmdline hook... Jan 13 21:29:30.967544 systemd[1]: Finished systemd-sysctl.service - Apply Kernel Variables. Jan 13 21:29:30.971576 systemd[1]: Starting systemd-resolved.service - Network Name Resolution... 
Jan 13 21:29:30.977803 dracut-cmdline[226]: dracut-dracut-053 Jan 13 21:29:30.986019 dracut-cmdline[226]: Using kernel command line parameters: rd.driver.pre=btrfs rootflags=rw mount.usrflags=ro BOOT_IMAGE=/flatcar/vmlinuz-a mount.usr=/dev/mapper/usr verity.usr=PARTUUID=7130c94a-213a-4e5a-8e26-6cce9662f132 rootflags=rw mount.usrflags=ro consoleblank=0 root=LABEL=ROOT console=ttyS0,115200 flatcar.first_boot=detected verity.usrhash=8945029ddd0f3864592f8746dde99cfcba228e0d3cb946f5938103dbe8733507 Jan 13 21:29:31.026588 systemd-resolved[234]: Positive Trust Anchors: Jan 13 21:29:31.026608 systemd-resolved[234]: . IN DS 20326 8 2 e06d44b80b8f1d39a95c0b0d7c65d08458e880409bbc683457104237c7f8ec8d Jan 13 21:29:31.026647 systemd-resolved[234]: Negative trust anchors: home.arpa 10.in-addr.arpa 16.172.in-addr.arpa 17.172.in-addr.arpa 18.172.in-addr.arpa 19.172.in-addr.arpa 20.172.in-addr.arpa 21.172.in-addr.arpa 22.172.in-addr.arpa 23.172.in-addr.arpa 24.172.in-addr.arpa 25.172.in-addr.arpa 26.172.in-addr.arpa 27.172.in-addr.arpa 28.172.in-addr.arpa 29.172.in-addr.arpa 30.172.in-addr.arpa 31.172.in-addr.arpa 170.0.0.192.in-addr.arpa 171.0.0.192.in-addr.arpa 168.192.in-addr.arpa d.f.ip6.arpa ipv4only.arpa resolver.arpa corp home internal intranet lan local private test Jan 13 21:29:31.029724 systemd-resolved[234]: Defaulting to hostname 'linux'. Jan 13 21:29:31.030920 systemd[1]: Started systemd-resolved.service - Network Name Resolution. Jan 13 21:29:31.037310 systemd[1]: Reached target nss-lookup.target - Host and Network Name Lookups. Jan 13 21:29:31.080971 kernel: SCSI subsystem initialized Jan 13 21:29:31.091968 kernel: Loading iSCSI transport class v2.0-870. Jan 13 21:29:31.102979 kernel: iscsi: registered transport (tcp) Jan 13 21:29:31.123978 kernel: iscsi: registered transport (qla4xxx) Jan 13 21:29:31.124012 kernel: QLogic iSCSI HBA Driver Jan 13 21:29:31.173026 systemd[1]: Finished dracut-cmdline.service - dracut cmdline hook. 
Jan 13 21:29:31.187097 systemd[1]: Starting dracut-pre-udev.service - dracut pre-udev hook... Jan 13 21:29:31.210982 kernel: device-mapper: core: CONFIG_IMA_DISABLE_HTABLE is disabled. Duplicate IMA measurements will not be recorded in the IMA log. Jan 13 21:29:31.211029 kernel: device-mapper: uevent: version 1.0.3 Jan 13 21:29:31.211043 kernel: device-mapper: ioctl: 4.48.0-ioctl (2023-03-01) initialised: dm-devel@redhat.com Jan 13 21:29:31.252979 kernel: raid6: avx2x4 gen() 30392 MB/s Jan 13 21:29:31.269970 kernel: raid6: avx2x2 gen() 30101 MB/s Jan 13 21:29:31.287294 kernel: raid6: avx2x1 gen() 24774 MB/s Jan 13 21:29:31.287342 kernel: raid6: using algorithm avx2x4 gen() 30392 MB/s Jan 13 21:29:31.305149 kernel: raid6: .... xor() 6906 MB/s, rmw enabled Jan 13 21:29:31.305224 kernel: raid6: using avx2x2 recovery algorithm Jan 13 21:29:31.324969 kernel: xor: automatically using best checksumming function avx Jan 13 21:29:31.476974 kernel: Btrfs loaded, zoned=no, fsverity=no Jan 13 21:29:31.489084 systemd[1]: Finished dracut-pre-udev.service - dracut pre-udev hook. Jan 13 21:29:31.503189 systemd[1]: Starting systemd-udevd.service - Rule-based Manager for Device Events and Files... Jan 13 21:29:31.514213 systemd-udevd[414]: Using default interface naming scheme 'v255'. Jan 13 21:29:31.518685 systemd[1]: Started systemd-udevd.service - Rule-based Manager for Device Events and Files. Jan 13 21:29:31.520450 systemd[1]: Starting dracut-pre-trigger.service - dracut pre-trigger hook... Jan 13 21:29:31.535873 dracut-pre-trigger[417]: rd.md=0: removing MD RAID activation Jan 13 21:29:31.566670 systemd[1]: Finished dracut-pre-trigger.service - dracut pre-trigger hook. Jan 13 21:29:31.585095 systemd[1]: Starting systemd-udev-trigger.service - Coldplug All udev Devices... Jan 13 21:29:31.647172 systemd[1]: Finished systemd-udev-trigger.service - Coldplug All udev Devices. Jan 13 21:29:31.659127 systemd[1]: Starting dracut-initqueue.service - dracut initqueue hook... 
Jan 13 21:29:31.673613 systemd[1]: Finished dracut-initqueue.service - dracut initqueue hook. Jan 13 21:29:31.676880 systemd[1]: Reached target remote-fs-pre.target - Preparation for Remote File Systems. Jan 13 21:29:31.680348 systemd[1]: Reached target remote-cryptsetup.target - Remote Encrypted Volumes. Jan 13 21:29:31.684459 systemd[1]: Reached target remote-fs.target - Remote File Systems. Jan 13 21:29:31.687053 kernel: virtio_blk virtio1: 4/0/0 default/read/poll queues Jan 13 21:29:31.702058 kernel: virtio_blk virtio1: [vda] 19775488 512-byte logical blocks (10.1 GB/9.43 GiB) Jan 13 21:29:31.702213 kernel: GPT:Primary header thinks Alt. header is not at the end of the disk. Jan 13 21:29:31.702226 kernel: GPT:9289727 != 19775487 Jan 13 21:29:31.702236 kernel: GPT:Alternate GPT header not at the end of the disk. Jan 13 21:29:31.702246 kernel: GPT:9289727 != 19775487 Jan 13 21:29:31.702256 kernel: GPT: Use GNU Parted to correct GPT errors. Jan 13 21:29:31.702266 kernel: vda: vda1 vda2 vda3 vda4 vda6 vda7 vda9 Jan 13 21:29:31.696368 systemd[1]: Starting dracut-pre-mount.service - dracut pre-mount hook... Jan 13 21:29:31.706570 systemd[1]: Finished dracut-pre-mount.service - dracut pre-mount hook. Jan 13 21:29:31.710972 kernel: cryptd: max_cpu_qlen set to 1000 Jan 13 21:29:31.711000 kernel: libata version 3.00 loaded. Jan 13 21:29:31.713837 systemd[1]: dracut-cmdline-ask.service: Deactivated successfully. Jan 13 21:29:31.713988 systemd[1]: Stopped dracut-cmdline-ask.service - dracut ask for additional cmdline parameters. Jan 13 21:29:31.717163 systemd[1]: Stopping dracut-cmdline-ask.service - dracut ask for additional cmdline parameters... Jan 13 21:29:31.718312 systemd[1]: systemd-vconsole-setup.service: Deactivated successfully. Jan 13 21:29:31.718466 systemd[1]: Stopped systemd-vconsole-setup.service - Virtual Console Setup. Jan 13 21:29:31.721219 systemd[1]: Stopping systemd-vconsole-setup.service - Virtual Console Setup... 
Jan 13 21:29:31.731971 kernel: ahci 0000:00:1f.2: version 3.0 Jan 13 21:29:31.760144 kernel: ACPI: \_SB_.GSIA: Enabled at IRQ 16 Jan 13 21:29:31.760170 kernel: BTRFS: device fsid b8e2d3c5-4bed-4339-bed5-268c66823686 devid 1 transid 37 /dev/vda3 scanned by (udev-worker) (472) Jan 13 21:29:31.760181 kernel: ahci 0000:00:1f.2: AHCI 0001.0000 32 slots 6 ports 1.5 Gbps 0x3f impl SATA mode Jan 13 21:29:31.760349 kernel: ahci 0000:00:1f.2: flags: 64bit ncq only Jan 13 21:29:31.760490 kernel: BTRFS: device label OEM devid 1 transid 12 /dev/vda6 scanned by (udev-worker) (474) Jan 13 21:29:31.760502 kernel: scsi host0: ahci Jan 13 21:29:31.760654 kernel: AVX2 version of gcm_enc/dec engaged. Jan 13 21:29:31.760666 kernel: AES CTR mode by8 optimization enabled Jan 13 21:29:31.760679 kernel: scsi host1: ahci Jan 13 21:29:31.760833 kernel: scsi host2: ahci Jan 13 21:29:31.760993 kernel: scsi host3: ahci Jan 13 21:29:31.761134 kernel: scsi host4: ahci Jan 13 21:29:31.761274 kernel: scsi host5: ahci Jan 13 21:29:31.761427 kernel: ata1: SATA max UDMA/133 abar m4096@0xc1040000 port 0xc1040100 irq 34 Jan 13 21:29:31.761438 kernel: ata2: SATA max UDMA/133 abar m4096@0xc1040000 port 0xc1040180 irq 34 Jan 13 21:29:31.761453 kernel: ata3: SATA max UDMA/133 abar m4096@0xc1040000 port 0xc1040200 irq 34 Jan 13 21:29:31.761463 kernel: ata4: SATA max UDMA/133 abar m4096@0xc1040000 port 0xc1040280 irq 34 Jan 13 21:29:31.761474 kernel: ata5: SATA max UDMA/133 abar m4096@0xc1040000 port 0xc1040300 irq 34 Jan 13 21:29:31.761484 kernel: ata6: SATA max UDMA/133 abar m4096@0xc1040000 port 0xc1040380 irq 34 Jan 13 21:29:31.733207 systemd[1]: Starting systemd-vconsole-setup.service - Virtual Console Setup... Jan 13 21:29:31.753136 systemd[1]: Finished systemd-vconsole-setup.service - Virtual Console Setup. Jan 13 21:29:31.767357 systemd[1]: Found device dev-disk-by\x2dlabel-ROOT.device - /dev/disk/by-label/ROOT. 
Jan 13 21:29:31.773885 systemd[1]: Found device dev-disk-by\x2dpartlabel-USR\x2dA.device - /dev/disk/by-partlabel/USR-A. Jan 13 21:29:31.774073 systemd[1]: Found device dev-disk-by\x2dpartuuid-7130c94a\x2d213a\x2d4e5a\x2d8e26\x2d6cce9662f132.device - /dev/disk/by-partuuid/7130c94a-213a-4e5a-8e26-6cce9662f132. Jan 13 21:29:31.779643 systemd[1]: Found device dev-disk-by\x2dlabel-OEM.device - /dev/disk/by-label/OEM. Jan 13 21:29:31.784715 systemd[1]: Found device dev-disk-by\x2dlabel-EFI\x2dSYSTEM.device - /dev/disk/by-label/EFI-SYSTEM. Jan 13 21:29:31.797066 systemd[1]: Starting disk-uuid.service - Generate new UUID for disk GPT if necessary... Jan 13 21:29:31.798211 systemd[1]: systemd-vconsole-setup.service: Deactivated successfully. Jan 13 21:29:31.798264 systemd[1]: Stopped systemd-vconsole-setup.service - Virtual Console Setup. Jan 13 21:29:31.798512 systemd[1]: Stopping systemd-vconsole-setup.service - Virtual Console Setup... Jan 13 21:29:31.799490 systemd[1]: Starting systemd-vconsole-setup.service - Virtual Console Setup... Jan 13 21:29:31.808674 disk-uuid[560]: Primary Header is updated. Jan 13 21:29:31.808674 disk-uuid[560]: Secondary Entries is updated. Jan 13 21:29:31.808674 disk-uuid[560]: Secondary Header is updated. Jan 13 21:29:31.812980 kernel: vda: vda1 vda2 vda3 vda4 vda6 vda7 vda9 Jan 13 21:29:31.816964 kernel: vda: vda1 vda2 vda3 vda4 vda6 vda7 vda9 Jan 13 21:29:31.818092 systemd[1]: Finished systemd-vconsole-setup.service - Virtual Console Setup. Jan 13 21:29:31.829063 systemd[1]: Starting dracut-cmdline-ask.service - dracut ask for additional cmdline parameters... Jan 13 21:29:31.852905 systemd[1]: Finished dracut-cmdline-ask.service - dracut ask for additional cmdline parameters. 
Jan 13 21:29:32.072964 kernel: ata1: SATA link down (SStatus 0 SControl 300) Jan 13 21:29:32.073029 kernel: ata2: SATA link down (SStatus 0 SControl 300) Jan 13 21:29:32.073957 kernel: ata3: SATA link up 1.5 Gbps (SStatus 113 SControl 300) Jan 13 21:29:32.074964 kernel: ata5: SATA link down (SStatus 0 SControl 300) Jan 13 21:29:32.074977 kernel: ata4: SATA link down (SStatus 0 SControl 300) Jan 13 21:29:32.075962 kernel: ata3.00: ATAPI: QEMU DVD-ROM, 2.5+, max UDMA/100 Jan 13 21:29:32.076967 kernel: ata3.00: applying bridge limits Jan 13 21:29:32.076980 kernel: ata6: SATA link down (SStatus 0 SControl 300) Jan 13 21:29:32.077965 kernel: ata3.00: configured for UDMA/100 Jan 13 21:29:32.079967 kernel: scsi 2:0:0:0: CD-ROM QEMU QEMU DVD-ROM 2.5+ PQ: 0 ANSI: 5 Jan 13 21:29:32.123483 kernel: sr 2:0:0:0: [sr0] scsi3-mmc drive: 4x/4x cd/rw xa/form2 tray Jan 13 21:29:32.136538 kernel: cdrom: Uniform CD-ROM driver Revision: 3.20 Jan 13 21:29:32.136559 kernel: sr 2:0:0:0: Attached scsi CD-ROM sr0 Jan 13 21:29:32.817979 kernel: vda: vda1 vda2 vda3 vda4 vda6 vda7 vda9 Jan 13 21:29:32.818319 disk-uuid[562]: The operation has completed successfully. Jan 13 21:29:32.847772 systemd[1]: disk-uuid.service: Deactivated successfully. Jan 13 21:29:32.847930 systemd[1]: Finished disk-uuid.service - Generate new UUID for disk GPT if necessary. Jan 13 21:29:32.881162 systemd[1]: Starting verity-setup.service - Verity Setup for /dev/mapper/usr... Jan 13 21:29:32.886418 sh[600]: Success Jan 13 21:29:32.897970 kernel: device-mapper: verity: sha256 using implementation "sha256-ni" Jan 13 21:29:32.928829 systemd[1]: Found device dev-mapper-usr.device - /dev/mapper/usr. Jan 13 21:29:32.942359 systemd[1]: Mounting sysusr-usr.mount - /sysusr/usr... Jan 13 21:29:32.947801 systemd[1]: Finished verity-setup.service - Verity Setup for /dev/mapper/usr. 
Jan 13 21:29:32.957920 kernel: BTRFS info (device dm-0): first mount of filesystem b8e2d3c5-4bed-4339-bed5-268c66823686 Jan 13 21:29:32.957967 kernel: BTRFS info (device dm-0): using crc32c (crc32c-intel) checksum algorithm Jan 13 21:29:32.957983 kernel: BTRFS warning (device dm-0): 'nologreplay' is deprecated, use 'rescue=nologreplay' instead Jan 13 21:29:32.957998 kernel: BTRFS info (device dm-0): disabling log replay at mount time Jan 13 21:29:32.958647 kernel: BTRFS info (device dm-0): using free space tree Jan 13 21:29:32.963325 systemd[1]: Mounted sysusr-usr.mount - /sysusr/usr. Jan 13 21:29:32.966039 systemd[1]: afterburn-network-kargs.service - Afterburn Initrd Setup Network Kernel Arguments was skipped because no trigger condition checks were met. Jan 13 21:29:32.976090 systemd[1]: Starting ignition-setup.service - Ignition (setup)... Jan 13 21:29:32.979239 systemd[1]: Starting parse-ip-for-networkd.service - Write systemd-networkd units from cmdline... Jan 13 21:29:32.987316 kernel: BTRFS info (device vda6): first mount of filesystem 70d8a0b5-70da-4efb-a618-d15543718b1e Jan 13 21:29:32.987350 kernel: BTRFS info (device vda6): using crc32c (crc32c-intel) checksum algorithm Jan 13 21:29:32.987361 kernel: BTRFS info (device vda6): using free space tree Jan 13 21:29:32.990974 kernel: BTRFS info (device vda6): auto enabling async discard Jan 13 21:29:32.999209 systemd[1]: mnt-oem.mount: Deactivated successfully. Jan 13 21:29:33.001969 kernel: BTRFS info (device vda6): last unmount of filesystem 70d8a0b5-70da-4efb-a618-d15543718b1e Jan 13 21:29:33.012102 systemd[1]: Finished ignition-setup.service - Ignition (setup). Jan 13 21:29:33.020096 systemd[1]: Starting ignition-fetch-offline.service - Ignition (fetch-offline)... 
Jan 13 21:29:33.074158 ignition[700]: Ignition 2.19.0 Jan 13 21:29:33.074172 ignition[700]: Stage: fetch-offline Jan 13 21:29:33.074217 ignition[700]: no configs at "/usr/lib/ignition/base.d" Jan 13 21:29:33.074230 ignition[700]: no config dir at "/usr/lib/ignition/base.platform.d/qemu" Jan 13 21:29:33.074360 ignition[700]: parsed url from cmdline: "" Jan 13 21:29:33.074365 ignition[700]: no config URL provided Jan 13 21:29:33.074371 ignition[700]: reading system config file "/usr/lib/ignition/user.ign" Jan 13 21:29:33.074382 ignition[700]: no config at "/usr/lib/ignition/user.ign" Jan 13 21:29:33.074427 ignition[700]: op(1): [started] loading QEMU firmware config module Jan 13 21:29:33.074435 ignition[700]: op(1): executing: "modprobe" "qemu_fw_cfg" Jan 13 21:29:33.085834 ignition[700]: op(1): [finished] loading QEMU firmware config module Jan 13 21:29:33.087550 ignition[700]: parsing config with SHA512: 12939e5ad5145810618f0d7e2b933df74488c3155a13b0e31882bcb1e8eebdd01af28b586c984756cec168ae024e3fda037e8a36faa4fc8c625b510da65d45e9 Jan 13 21:29:33.090531 unknown[700]: fetched base config from "system" Jan 13 21:29:33.090726 unknown[700]: fetched user config from "qemu" Jan 13 21:29:33.091032 ignition[700]: fetch-offline: fetch-offline passed Jan 13 21:29:33.091126 ignition[700]: Ignition finished successfully Jan 13 21:29:33.096628 systemd[1]: Finished ignition-fetch-offline.service - Ignition (fetch-offline). Jan 13 21:29:33.101578 systemd[1]: Finished parse-ip-for-networkd.service - Write systemd-networkd units from cmdline. Jan 13 21:29:33.112075 systemd[1]: Starting systemd-networkd.service - Network Configuration... Jan 13 21:29:33.140404 systemd-networkd[791]: lo: Link UP Jan 13 21:29:33.140417 systemd-networkd[791]: lo: Gained carrier Jan 13 21:29:33.142368 systemd-networkd[791]: Enumeration completed Jan 13 21:29:33.142475 systemd[1]: Started systemd-networkd.service - Network Configuration. 
Jan 13 21:29:33.142860 systemd-networkd[791]: eth0: found matching network '/usr/lib/systemd/network/zz-default.network', based on potentially unpredictable interface name. Jan 13 21:29:33.142866 systemd-networkd[791]: eth0: Configuring with /usr/lib/systemd/network/zz-default.network. Jan 13 21:29:33.144334 systemd-networkd[791]: eth0: Link UP Jan 13 21:29:33.144338 systemd-networkd[791]: eth0: Gained carrier Jan 13 21:29:33.144346 systemd-networkd[791]: eth0: found matching network '/usr/lib/systemd/network/zz-default.network', based on potentially unpredictable interface name. Jan 13 21:29:33.144901 systemd[1]: Reached target network.target - Network. Jan 13 21:29:33.147297 systemd[1]: ignition-fetch.service - Ignition (fetch) was skipped because of an unmet condition check (ConditionPathExists=!/run/ignition.json). Jan 13 21:29:33.157094 systemd[1]: Starting ignition-kargs.service - Ignition (kargs)... Jan 13 21:29:33.161992 systemd-networkd[791]: eth0: DHCPv4 address 10.0.0.125/16, gateway 10.0.0.1 acquired from 10.0.0.1 Jan 13 21:29:33.171624 ignition[793]: Ignition 2.19.0 Jan 13 21:29:33.171636 ignition[793]: Stage: kargs Jan 13 21:29:33.171787 ignition[793]: no configs at "/usr/lib/ignition/base.d" Jan 13 21:29:33.175585 systemd[1]: Finished ignition-kargs.service - Ignition (kargs). Jan 13 21:29:33.171799 ignition[793]: no config dir at "/usr/lib/ignition/base.platform.d/qemu" Jan 13 21:29:33.172422 ignition[793]: kargs: kargs passed Jan 13 21:29:33.172462 ignition[793]: Ignition finished successfully Jan 13 21:29:33.190092 systemd[1]: Starting ignition-disks.service - Ignition (disks)... Jan 13 21:29:33.203982 ignition[802]: Ignition 2.19.0 Jan 13 21:29:33.203993 ignition[802]: Stage: disks Jan 13 21:29:33.204149 ignition[802]: no configs at "/usr/lib/ignition/base.d" Jan 13 21:29:33.204160 ignition[802]: no config dir at "/usr/lib/ignition/base.platform.d/qemu" Jan 13 21:29:33.206889 systemd[1]: Finished ignition-disks.service - Ignition (disks). 
Jan 13 21:29:33.204830 ignition[802]: disks: disks passed Jan 13 21:29:33.208685 systemd[1]: Reached target initrd-root-device.target - Initrd Root Device. Jan 13 21:29:33.204868 ignition[802]: Ignition finished successfully Jan 13 21:29:33.210630 systemd[1]: Reached target local-fs-pre.target - Preparation for Local File Systems. Jan 13 21:29:33.212508 systemd[1]: Reached target local-fs.target - Local File Systems. Jan 13 21:29:33.214598 systemd[1]: Reached target sysinit.target - System Initialization. Jan 13 21:29:33.215641 systemd[1]: Reached target basic.target - Basic System. Jan 13 21:29:33.227164 systemd[1]: Starting systemd-fsck-root.service - File System Check on /dev/disk/by-label/ROOT... Jan 13 21:29:33.240497 systemd-fsck[812]: ROOT: clean, 14/553520 files, 52654/553472 blocks Jan 13 21:29:33.324883 systemd[1]: Finished systemd-fsck-root.service - File System Check on /dev/disk/by-label/ROOT. Jan 13 21:29:33.331058 systemd[1]: Mounting sysroot.mount - /sysroot... Jan 13 21:29:33.422803 systemd[1]: Mounted sysroot.mount - /sysroot. Jan 13 21:29:33.424629 kernel: EXT4-fs (vda9): mounted filesystem 39899d4c-a8b1-4feb-9875-e812cc535888 r/w with ordered data mode. Quota mode: none. Jan 13 21:29:33.424283 systemd[1]: Reached target initrd-root-fs.target - Initrd Root File System. Jan 13 21:29:33.437063 systemd[1]: Mounting sysroot-oem.mount - /sysroot/oem... Jan 13 21:29:33.438842 systemd[1]: Mounting sysroot-usr.mount - /sysroot/usr... Jan 13 21:29:33.440216 systemd[1]: flatcar-metadata-hostname.service - Flatcar Metadata Hostname Agent was skipped because no trigger condition checks were met. Jan 13 21:29:33.440254 systemd[1]: ignition-remount-sysroot.service - Remount /sysroot read-write for Ignition was skipped because of an unmet condition check (ConditionPathIsReadWrite=!/sysroot). 
Jan 13 21:29:33.452548 kernel: BTRFS: device label OEM devid 1 transid 13 /dev/vda6 scanned by mount (821) Jan 13 21:29:33.452571 kernel: BTRFS info (device vda6): first mount of filesystem 70d8a0b5-70da-4efb-a618-d15543718b1e Jan 13 21:29:33.452582 kernel: BTRFS info (device vda6): using crc32c (crc32c-intel) checksum algorithm Jan 13 21:29:33.452592 kernel: BTRFS info (device vda6): using free space tree Jan 13 21:29:33.452603 kernel: BTRFS info (device vda6): auto enabling async discard Jan 13 21:29:33.440285 systemd[1]: Reached target ignition-diskful.target - Ignition Boot Disk Setup. Jan 13 21:29:33.447346 systemd[1]: Mounted sysroot-usr.mount - /sysroot/usr. Jan 13 21:29:33.453811 systemd[1]: Mounted sysroot-oem.mount - /sysroot/oem. Jan 13 21:29:33.456650 systemd[1]: Starting initrd-setup-root.service - Root filesystem setup... Jan 13 21:29:33.491156 initrd-setup-root[846]: cut: /sysroot/etc/passwd: No such file or directory Jan 13 21:29:33.496282 initrd-setup-root[853]: cut: /sysroot/etc/group: No such file or directory Jan 13 21:29:33.501104 initrd-setup-root[860]: cut: /sysroot/etc/shadow: No such file or directory Jan 13 21:29:33.505052 initrd-setup-root[867]: cut: /sysroot/etc/gshadow: No such file or directory Jan 13 21:29:33.597336 systemd[1]: Finished initrd-setup-root.service - Root filesystem setup. Jan 13 21:29:33.619029 systemd[1]: Starting ignition-mount.service - Ignition (mount)... Jan 13 21:29:33.622566 systemd[1]: Starting sysroot-boot.service - /sysroot/boot... Jan 13 21:29:33.627965 kernel: BTRFS info (device vda6): last unmount of filesystem 70d8a0b5-70da-4efb-a618-d15543718b1e Jan 13 21:29:33.647576 systemd[1]: Finished sysroot-boot.service - /sysroot/boot. 
Jan 13 21:29:33.687265 ignition[939]: INFO : Ignition 2.19.0 Jan 13 21:29:33.687265 ignition[939]: INFO : Stage: mount Jan 13 21:29:33.689349 ignition[939]: INFO : no configs at "/usr/lib/ignition/base.d" Jan 13 21:29:33.689349 ignition[939]: INFO : no config dir at "/usr/lib/ignition/base.platform.d/qemu" Jan 13 21:29:33.692380 ignition[939]: INFO : mount: mount passed Jan 13 21:29:33.693289 ignition[939]: INFO : Ignition finished successfully Jan 13 21:29:33.696291 systemd[1]: Finished ignition-mount.service - Ignition (mount). Jan 13 21:29:33.706148 systemd[1]: Starting ignition-files.service - Ignition (files)... Jan 13 21:29:33.956388 systemd[1]: sysroot-oem.mount: Deactivated successfully. Jan 13 21:29:34.077114 systemd[1]: Mounting sysroot-oem.mount - /sysroot/oem... Jan 13 21:29:34.085968 kernel: BTRFS: device label OEM devid 1 transid 14 /dev/vda6 scanned by mount (948) Jan 13 21:29:34.085996 kernel: BTRFS info (device vda6): first mount of filesystem 70d8a0b5-70da-4efb-a618-d15543718b1e Jan 13 21:29:34.087525 kernel: BTRFS info (device vda6): using crc32c (crc32c-intel) checksum algorithm Jan 13 21:29:34.087552 kernel: BTRFS info (device vda6): using free space tree Jan 13 21:29:34.090972 kernel: BTRFS info (device vda6): auto enabling async discard Jan 13 21:29:34.092286 systemd[1]: Mounted sysroot-oem.mount - /sysroot/oem. 
Jan 13 21:29:34.117780 ignition[965]: INFO : Ignition 2.19.0 Jan 13 21:29:34.117780 ignition[965]: INFO : Stage: files Jan 13 21:29:34.129400 ignition[965]: INFO : no configs at "/usr/lib/ignition/base.d" Jan 13 21:29:34.129400 ignition[965]: INFO : no config dir at "/usr/lib/ignition/base.platform.d/qemu" Jan 13 21:29:34.129400 ignition[965]: DEBUG : files: compiled without relabeling support, skipping Jan 13 21:29:34.129400 ignition[965]: INFO : files: ensureUsers: op(1): [started] creating or modifying user "core" Jan 13 21:29:34.129400 ignition[965]: DEBUG : files: ensureUsers: op(1): executing: "usermod" "--root" "/sysroot" "core" Jan 13 21:29:34.135883 ignition[965]: INFO : files: ensureUsers: op(1): [finished] creating or modifying user "core" Jan 13 21:29:34.137429 ignition[965]: INFO : files: ensureUsers: op(2): [started] adding ssh keys to user "core" Jan 13 21:29:34.139236 unknown[965]: wrote ssh authorized keys file for user: core Jan 13 21:29:34.140389 ignition[965]: INFO : files: ensureUsers: op(2): [finished] adding ssh keys to user "core" Jan 13 21:29:34.142635 ignition[965]: INFO : files: createFilesystemsFiles: createFiles: op(3): [started] writing file "/sysroot/home/core/install.sh" Jan 13 21:29:34.149881 ignition[965]: INFO : files: createFilesystemsFiles: createFiles: op(3): [finished] writing file "/sysroot/home/core/install.sh" Jan 13 21:29:34.152085 ignition[965]: INFO : files: createFilesystemsFiles: createFiles: op(4): [started] writing file "/sysroot/etc/flatcar/update.conf" Jan 13 21:29:34.154311 ignition[965]: INFO : files: createFilesystemsFiles: createFiles: op(4): [finished] writing file "/sysroot/etc/flatcar/update.conf" Jan 13 21:29:34.154311 ignition[965]: INFO : files: createFilesystemsFiles: createFiles: op(5): [started] writing link "/sysroot/etc/extensions/kubernetes.raw" -> "/opt/extensions/kubernetes/kubernetes-v1.30.1-x86-64.raw" Jan 13 21:29:34.154311 ignition[965]: INFO : files: createFilesystemsFiles: createFiles: 
op(5): [finished] writing link "/sysroot/etc/extensions/kubernetes.raw" -> "/opt/extensions/kubernetes/kubernetes-v1.30.1-x86-64.raw" Jan 13 21:29:34.154311 ignition[965]: INFO : files: createFilesystemsFiles: createFiles: op(6): [started] writing file "/sysroot/opt/extensions/kubernetes/kubernetes-v1.30.1-x86-64.raw" Jan 13 21:29:34.154311 ignition[965]: INFO : files: createFilesystemsFiles: createFiles: op(6): GET https://github.com/flatcar/sysext-bakery/releases/download/latest/kubernetes-v1.30.1-x86-64.raw: attempt #1 Jan 13 21:29:34.418089 systemd-networkd[791]: eth0: Gained IPv6LL Jan 13 21:29:34.493765 ignition[965]: INFO : files: createFilesystemsFiles: createFiles: op(6): GET result: OK Jan 13 21:29:34.839979 ignition[965]: INFO : files: createFilesystemsFiles: createFiles: op(6): [finished] writing file "/sysroot/opt/extensions/kubernetes/kubernetes-v1.30.1-x86-64.raw" Jan 13 21:29:34.839979 ignition[965]: INFO : files: op(7): [started] processing unit "coreos-metadata.service" Jan 13 21:29:34.843807 ignition[965]: INFO : files: op(7): op(8): [started] writing unit "coreos-metadata.service" at "/sysroot/etc/systemd/system/coreos-metadata.service" Jan 13 21:29:34.846007 ignition[965]: INFO : files: op(7): op(8): [finished] writing unit "coreos-metadata.service" at "/sysroot/etc/systemd/system/coreos-metadata.service" Jan 13 21:29:34.846007 ignition[965]: INFO : files: op(7): [finished] processing unit "coreos-metadata.service" Jan 13 21:29:34.846007 ignition[965]: INFO : files: op(9): [started] setting preset to disabled for "coreos-metadata.service" Jan 13 21:29:34.871505 ignition[965]: INFO : files: op(9): op(a): [started] removing enablement symlink(s) for "coreos-metadata.service" Jan 13 21:29:34.876419 ignition[965]: INFO : files: op(9): op(a): [finished] removing enablement symlink(s) for "coreos-metadata.service" Jan 13 21:29:34.878054 ignition[965]: INFO : files: op(9): [finished] setting preset to disabled for "coreos-metadata.service" Jan 13 
21:29:34.878054 ignition[965]: INFO : files: createResultFile: createFiles: op(b): [started] writing file "/sysroot/etc/.ignition-result.json" Jan 13 21:29:34.878054 ignition[965]: INFO : files: createResultFile: createFiles: op(b): [finished] writing file "/sysroot/etc/.ignition-result.json" Jan 13 21:29:34.878054 ignition[965]: INFO : files: files passed Jan 13 21:29:34.878054 ignition[965]: INFO : Ignition finished successfully Jan 13 21:29:34.887200 systemd[1]: Finished ignition-files.service - Ignition (files). Jan 13 21:29:34.900098 systemd[1]: Starting ignition-quench.service - Ignition (record completion)... Jan 13 21:29:34.902642 systemd[1]: Starting initrd-setup-root-after-ignition.service - Root filesystem completion... Jan 13 21:29:34.905170 systemd[1]: ignition-quench.service: Deactivated successfully. Jan 13 21:29:34.905336 systemd[1]: Finished ignition-quench.service - Ignition (record completion). Jan 13 21:29:34.928409 initrd-setup-root-after-ignition[993]: grep: /sysroot/oem/oem-release: No such file or directory Jan 13 21:29:34.932457 initrd-setup-root-after-ignition[995]: grep: /sysroot/etc/flatcar/enabled-sysext.conf: No such file or directory Jan 13 21:29:34.932457 initrd-setup-root-after-ignition[995]: grep: /sysroot/usr/share/flatcar/enabled-sysext.conf: No such file or directory Jan 13 21:29:34.935961 initrd-setup-root-after-ignition[999]: grep: /sysroot/etc/flatcar/enabled-sysext.conf: No such file or directory Jan 13 21:29:34.939114 systemd[1]: Finished initrd-setup-root-after-ignition.service - Root filesystem completion. Jan 13 21:29:34.941857 systemd[1]: Reached target ignition-complete.target - Ignition Complete. Jan 13 21:29:34.957240 systemd[1]: Starting initrd-parse-etc.service - Mountpoints Configured in the Real Root... Jan 13 21:29:34.985025 systemd[1]: initrd-parse-etc.service: Deactivated successfully. Jan 13 21:29:34.985174 systemd[1]: Finished initrd-parse-etc.service - Mountpoints Configured in the Real Root. 
Jan 13 21:29:34.987805 systemd[1]: Reached target initrd-fs.target - Initrd File Systems. Jan 13 21:29:34.989048 systemd[1]: Reached target initrd.target - Initrd Default Target. Jan 13 21:29:34.991405 systemd[1]: dracut-mount.service - dracut mount hook was skipped because no trigger condition checks were met. Jan 13 21:29:35.002174 systemd[1]: Starting dracut-pre-pivot.service - dracut pre-pivot and cleanup hook... Jan 13 21:29:35.017102 systemd[1]: Finished dracut-pre-pivot.service - dracut pre-pivot and cleanup hook. Jan 13 21:29:35.024123 systemd[1]: Starting initrd-cleanup.service - Cleaning Up and Shutting Down Daemons... Jan 13 21:29:35.034000 systemd[1]: Stopped target nss-lookup.target - Host and Network Name Lookups. Jan 13 21:29:35.035415 systemd[1]: Stopped target remote-cryptsetup.target - Remote Encrypted Volumes. Jan 13 21:29:35.037799 systemd[1]: Stopped target timers.target - Timer Units. Jan 13 21:29:35.039807 systemd[1]: dracut-pre-pivot.service: Deactivated successfully. Jan 13 21:29:35.039937 systemd[1]: Stopped dracut-pre-pivot.service - dracut pre-pivot and cleanup hook. Jan 13 21:29:35.042233 systemd[1]: Stopped target initrd.target - Initrd Default Target. Jan 13 21:29:35.043777 systemd[1]: Stopped target basic.target - Basic System. Jan 13 21:29:35.046311 systemd[1]: Stopped target ignition-complete.target - Ignition Complete. Jan 13 21:29:35.048340 systemd[1]: Stopped target ignition-diskful.target - Ignition Boot Disk Setup. Jan 13 21:29:35.050351 systemd[1]: Stopped target initrd-root-device.target - Initrd Root Device. Jan 13 21:29:35.052503 systemd[1]: Stopped target remote-fs.target - Remote File Systems. Jan 13 21:29:35.054625 systemd[1]: Stopped target remote-fs-pre.target - Preparation for Remote File Systems. Jan 13 21:29:35.056966 systemd[1]: Stopped target sysinit.target - System Initialization. Jan 13 21:29:35.059014 systemd[1]: Stopped target local-fs.target - Local File Systems. 
Jan 13 21:29:35.061178 systemd[1]: Stopped target swap.target - Swaps. Jan 13 21:29:35.062987 systemd[1]: dracut-pre-mount.service: Deactivated successfully. Jan 13 21:29:35.063093 systemd[1]: Stopped dracut-pre-mount.service - dracut pre-mount hook. Jan 13 21:29:35.065433 systemd[1]: Stopped target cryptsetup.target - Local Encrypted Volumes. Jan 13 21:29:35.066898 systemd[1]: Stopped target cryptsetup-pre.target - Local Encrypted Volumes (Pre). Jan 13 21:29:35.069014 systemd[1]: clevis-luks-askpass.path: Deactivated successfully. Jan 13 21:29:35.069146 systemd[1]: Stopped clevis-luks-askpass.path - Forward Password Requests to Clevis Directory Watch. Jan 13 21:29:35.071273 systemd[1]: dracut-initqueue.service: Deactivated successfully. Jan 13 21:29:35.071376 systemd[1]: Stopped dracut-initqueue.service - dracut initqueue hook. Jan 13 21:29:35.073796 systemd[1]: ignition-fetch-offline.service: Deactivated successfully. Jan 13 21:29:35.073916 systemd[1]: Stopped ignition-fetch-offline.service - Ignition (fetch-offline). Jan 13 21:29:35.075775 systemd[1]: Stopped target paths.target - Path Units. Jan 13 21:29:35.077514 systemd[1]: systemd-ask-password-console.path: Deactivated successfully. Jan 13 21:29:35.081019 systemd[1]: Stopped systemd-ask-password-console.path - Dispatch Password Requests to Console Directory Watch. Jan 13 21:29:35.083200 systemd[1]: Stopped target slices.target - Slice Units. Jan 13 21:29:35.085183 systemd[1]: Stopped target sockets.target - Socket Units. Jan 13 21:29:35.086973 systemd[1]: iscsid.socket: Deactivated successfully. Jan 13 21:29:35.087062 systemd[1]: Closed iscsid.socket - Open-iSCSI iscsid Socket. Jan 13 21:29:35.089010 systemd[1]: iscsiuio.socket: Deactivated successfully. Jan 13 21:29:35.089093 systemd[1]: Closed iscsiuio.socket - Open-iSCSI iscsiuio Socket. Jan 13 21:29:35.091452 systemd[1]: initrd-setup-root-after-ignition.service: Deactivated successfully. 
Jan 13 21:29:35.091556 systemd[1]: Stopped initrd-setup-root-after-ignition.service - Root filesystem completion. Jan 13 21:29:35.093809 systemd[1]: ignition-files.service: Deactivated successfully. Jan 13 21:29:35.093960 systemd[1]: Stopped ignition-files.service - Ignition (files). Jan 13 21:29:35.106096 systemd[1]: Stopping ignition-mount.service - Ignition (mount)... Jan 13 21:29:35.107710 systemd[1]: Stopping sysroot-boot.service - /sysroot/boot... Jan 13 21:29:35.109139 systemd[1]: systemd-udev-trigger.service: Deactivated successfully. Jan 13 21:29:35.109266 systemd[1]: Stopped systemd-udev-trigger.service - Coldplug All udev Devices. Jan 13 21:29:35.111898 systemd[1]: dracut-pre-trigger.service: Deactivated successfully. Jan 13 21:29:35.112092 systemd[1]: Stopped dracut-pre-trigger.service - dracut pre-trigger hook. Jan 13 21:29:35.118855 systemd[1]: initrd-cleanup.service: Deactivated successfully. Jan 13 21:29:35.119799 systemd[1]: Finished initrd-cleanup.service - Cleaning Up and Shutting Down Daemons. Jan 13 21:29:35.122777 ignition[1020]: INFO : Ignition 2.19.0 Jan 13 21:29:35.122777 ignition[1020]: INFO : Stage: umount Jan 13 21:29:35.122777 ignition[1020]: INFO : no configs at "/usr/lib/ignition/base.d" Jan 13 21:29:35.122777 ignition[1020]: INFO : no config dir at "/usr/lib/ignition/base.platform.d/qemu" Jan 13 21:29:35.122777 ignition[1020]: INFO : umount: umount passed Jan 13 21:29:35.122777 ignition[1020]: INFO : Ignition finished successfully Jan 13 21:29:35.123435 systemd[1]: ignition-mount.service: Deactivated successfully. Jan 13 21:29:35.123572 systemd[1]: Stopped ignition-mount.service - Ignition (mount). Jan 13 21:29:35.125366 systemd[1]: Stopped target network.target - Network. Jan 13 21:29:35.126421 systemd[1]: ignition-disks.service: Deactivated successfully. Jan 13 21:29:35.126483 systemd[1]: Stopped ignition-disks.service - Ignition (disks). Jan 13 21:29:35.128747 systemd[1]: ignition-kargs.service: Deactivated successfully. 
Jan 13 21:29:35.128797 systemd[1]: Stopped ignition-kargs.service - Ignition (kargs). Jan 13 21:29:35.131145 systemd[1]: ignition-setup.service: Deactivated successfully. Jan 13 21:29:35.131197 systemd[1]: Stopped ignition-setup.service - Ignition (setup). Jan 13 21:29:35.133184 systemd[1]: ignition-setup-pre.service: Deactivated successfully. Jan 13 21:29:35.133252 systemd[1]: Stopped ignition-setup-pre.service - Ignition env setup. Jan 13 21:29:35.135383 systemd[1]: Stopping systemd-networkd.service - Network Configuration... Jan 13 21:29:35.137562 systemd[1]: Stopping systemd-resolved.service - Network Name Resolution... Jan 13 21:29:35.140551 systemd[1]: sysroot-boot.mount: Deactivated successfully. Jan 13 21:29:35.146955 systemd[1]: systemd-resolved.service: Deactivated successfully. Jan 13 21:29:35.147084 systemd[1]: Stopped systemd-resolved.service - Network Name Resolution. Jan 13 21:29:35.150298 systemd[1]: systemd-tmpfiles-setup.service: Deactivated successfully. Jan 13 21:29:35.150374 systemd[1]: Stopped systemd-tmpfiles-setup.service - Create System Files and Directories. Jan 13 21:29:35.150981 systemd-networkd[791]: eth0: DHCPv6 lease lost Jan 13 21:29:35.156356 systemd[1]: systemd-networkd.service: Deactivated successfully. Jan 13 21:29:35.156477 systemd[1]: Stopped systemd-networkd.service - Network Configuration. Jan 13 21:29:35.158637 systemd[1]: systemd-networkd.socket: Deactivated successfully. Jan 13 21:29:35.158677 systemd[1]: Closed systemd-networkd.socket - Network Service Netlink Socket. Jan 13 21:29:35.172031 systemd[1]: Stopping network-cleanup.service - Network Cleanup... Jan 13 21:29:35.172997 systemd[1]: parse-ip-for-networkd.service: Deactivated successfully. Jan 13 21:29:35.173057 systemd[1]: Stopped parse-ip-for-networkd.service - Write systemd-networkd units from cmdline. Jan 13 21:29:35.175263 systemd[1]: systemd-sysctl.service: Deactivated successfully. 
Jan 13 21:29:35.175315 systemd[1]: Stopped systemd-sysctl.service - Apply Kernel Variables. Jan 13 21:29:35.177527 systemd[1]: systemd-modules-load.service: Deactivated successfully. Jan 13 21:29:35.177577 systemd[1]: Stopped systemd-modules-load.service - Load Kernel Modules. Jan 13 21:29:35.180018 systemd[1]: Stopping systemd-udevd.service - Rule-based Manager for Device Events and Files... Jan 13 21:29:35.191521 systemd[1]: network-cleanup.service: Deactivated successfully. Jan 13 21:29:35.191653 systemd[1]: Stopped network-cleanup.service - Network Cleanup. Jan 13 21:29:35.204893 systemd[1]: systemd-udevd.service: Deactivated successfully. Jan 13 21:29:35.206002 systemd[1]: Stopped systemd-udevd.service - Rule-based Manager for Device Events and Files. Jan 13 21:29:35.208758 systemd[1]: systemd-udevd-control.socket: Deactivated successfully. Jan 13 21:29:35.208817 systemd[1]: Closed systemd-udevd-control.socket - udev Control Socket. Jan 13 21:29:35.212009 systemd[1]: systemd-udevd-kernel.socket: Deactivated successfully. Jan 13 21:29:35.212059 systemd[1]: Closed systemd-udevd-kernel.socket - udev Kernel Socket. Jan 13 21:29:35.215077 systemd[1]: dracut-pre-udev.service: Deactivated successfully. Jan 13 21:29:35.215140 systemd[1]: Stopped dracut-pre-udev.service - dracut pre-udev hook. Jan 13 21:29:35.218312 systemd[1]: dracut-cmdline.service: Deactivated successfully. Jan 13 21:29:35.218362 systemd[1]: Stopped dracut-cmdline.service - dracut cmdline hook. Jan 13 21:29:35.221310 systemd[1]: dracut-cmdline-ask.service: Deactivated successfully. Jan 13 21:29:35.221366 systemd[1]: Stopped dracut-cmdline-ask.service - dracut ask for additional cmdline parameters. Jan 13 21:29:35.235083 systemd[1]: Starting initrd-udevadm-cleanup-db.service - Cleanup udev Database... Jan 13 21:29:35.237345 systemd[1]: systemd-tmpfiles-setup-dev.service: Deactivated successfully. 
Jan 13 21:29:35.237417 systemd[1]: Stopped systemd-tmpfiles-setup-dev.service - Create Static Device Nodes in /dev. Jan 13 21:29:35.240914 systemd[1]: systemd-tmpfiles-setup-dev-early.service: Deactivated successfully. Jan 13 21:29:35.242045 systemd[1]: Stopped systemd-tmpfiles-setup-dev-early.service - Create Static Device Nodes in /dev gracefully. Jan 13 21:29:35.244717 systemd[1]: kmod-static-nodes.service: Deactivated successfully. Jan 13 21:29:35.244773 systemd[1]: Stopped kmod-static-nodes.service - Create List of Static Device Nodes. Jan 13 21:29:35.246111 systemd[1]: systemd-vconsole-setup.service: Deactivated successfully. Jan 13 21:29:35.246157 systemd[1]: Stopped systemd-vconsole-setup.service - Virtual Console Setup. Jan 13 21:29:35.251543 systemd[1]: initrd-udevadm-cleanup-db.service: Deactivated successfully. Jan 13 21:29:35.252667 systemd[1]: Finished initrd-udevadm-cleanup-db.service - Cleanup udev Database. Jan 13 21:29:35.305832 systemd[1]: sysroot-boot.service: Deactivated successfully. Jan 13 21:29:35.305994 systemd[1]: Stopped sysroot-boot.service - /sysroot/boot. Jan 13 21:29:35.308065 systemd[1]: Reached target initrd-switch-root.target - Switch Root. Jan 13 21:29:35.308747 systemd[1]: initrd-setup-root.service: Deactivated successfully. Jan 13 21:29:35.308802 systemd[1]: Stopped initrd-setup-root.service - Root filesystem setup. Jan 13 21:29:35.325126 systemd[1]: Starting initrd-switch-root.service - Switch Root... Jan 13 21:29:35.333358 systemd[1]: Switching root. Jan 13 21:29:35.359832 systemd-journald[193]: Journal stopped Jan 13 21:29:36.966775 systemd-journald[193]: Received SIGTERM from PID 1 (systemd). 
Jan 13 21:29:36.966851 kernel: SELinux: policy capability network_peer_controls=1 Jan 13 21:29:36.966867 kernel: SELinux: policy capability open_perms=1 Jan 13 21:29:36.966885 kernel: SELinux: policy capability extended_socket_class=1 Jan 13 21:29:36.966903 kernel: SELinux: policy capability always_check_network=0 Jan 13 21:29:36.966917 kernel: SELinux: policy capability cgroup_seclabel=1 Jan 13 21:29:36.966930 kernel: SELinux: policy capability nnp_nosuid_transition=1 Jan 13 21:29:36.967064 kernel: SELinux: policy capability genfs_seclabel_symlinks=0 Jan 13 21:29:36.967082 kernel: SELinux: policy capability ioctl_skip_cloexec=0 Jan 13 21:29:36.967096 kernel: audit: type=1403 audit(1736803775.732:2): auid=4294967295 ses=4294967295 lsm=selinux res=1 Jan 13 21:29:36.967110 systemd[1]: Successfully loaded SELinux policy in 41.060ms. Jan 13 21:29:36.967132 systemd[1]: Relabeled /dev, /dev/shm, /run, /sys/fs/cgroup in 12.777ms. Jan 13 21:29:36.967148 systemd[1]: systemd 255 running in system mode (+PAM +AUDIT +SELINUX -APPARMOR +IMA +SMACK +SECCOMP +GCRYPT -GNUTLS +OPENSSL -ACL +BLKID +CURL +ELFUTILS -FIDO2 +IDN2 -IDN +IPTC +KMOD +LIBCRYPTSETUP +LIBFDISK +PCRE2 -PWQUALITY -P11KIT -QRENCODE +TPM2 +BZIP2 +LZ4 +XZ +ZLIB +ZSTD -BPF_FRAMEWORK -XKBCOMMON +UTMP -SYSVINIT default-hierarchy=unified) Jan 13 21:29:36.967175 systemd[1]: Detected virtualization kvm. Jan 13 21:29:36.967193 systemd[1]: Detected architecture x86-64. Jan 13 21:29:36.967209 systemd[1]: Detected first boot. Jan 13 21:29:36.967223 systemd[1]: Initializing machine ID from VM UUID. Jan 13 21:29:36.967237 zram_generator::config[1065]: No configuration found. Jan 13 21:29:36.967253 systemd[1]: Populated /etc with preset unit settings. Jan 13 21:29:36.967267 systemd[1]: initrd-switch-root.service: Deactivated successfully. Jan 13 21:29:36.967282 systemd[1]: Stopped initrd-switch-root.service - Switch Root. 
Jan 13 21:29:36.967297 systemd[1]: systemd-journald.service: Scheduled restart job, restart counter is at 1. Jan 13 21:29:36.967315 systemd[1]: Created slice system-addon\x2dconfig.slice - Slice /system/addon-config. Jan 13 21:29:36.967329 systemd[1]: Created slice system-addon\x2drun.slice - Slice /system/addon-run. Jan 13 21:29:36.967343 systemd[1]: Created slice system-getty.slice - Slice /system/getty. Jan 13 21:29:36.967358 systemd[1]: Created slice system-modprobe.slice - Slice /system/modprobe. Jan 13 21:29:36.967372 systemd[1]: Created slice system-serial\x2dgetty.slice - Slice /system/serial-getty. Jan 13 21:29:36.967387 systemd[1]: Created slice system-system\x2dcloudinit.slice - Slice /system/system-cloudinit. Jan 13 21:29:36.967401 systemd[1]: Created slice system-systemd\x2dfsck.slice - Slice /system/systemd-fsck. Jan 13 21:29:36.967420 systemd[1]: Created slice user.slice - User and Session Slice. Jan 13 21:29:36.967437 systemd[1]: Started clevis-luks-askpass.path - Forward Password Requests to Clevis Directory Watch. Jan 13 21:29:36.967451 systemd[1]: Started systemd-ask-password-console.path - Dispatch Password Requests to Console Directory Watch. Jan 13 21:29:36.967465 systemd[1]: Started systemd-ask-password-wall.path - Forward Password Requests to Wall Directory Watch. Jan 13 21:29:36.967479 systemd[1]: Set up automount boot.automount - Boot partition Automount Point. Jan 13 21:29:36.967493 systemd[1]: Set up automount proc-sys-fs-binfmt_misc.automount - Arbitrary Executable File Formats File System Automount Point. Jan 13 21:29:36.967508 systemd[1]: Expecting device dev-disk-by\x2dlabel-OEM.device - /dev/disk/by-label/OEM... Jan 13 21:29:36.967524 systemd[1]: Expecting device dev-ttyS0.device - /dev/ttyS0... Jan 13 21:29:36.967538 systemd[1]: Reached target cryptsetup-pre.target - Local Encrypted Volumes (Pre). Jan 13 21:29:36.967553 systemd[1]: Stopped target initrd-switch-root.target - Switch Root. 
Jan 13 21:29:36.967569 systemd[1]: Stopped target initrd-fs.target - Initrd File Systems. Jan 13 21:29:36.967584 systemd[1]: Stopped target initrd-root-fs.target - Initrd Root File System. Jan 13 21:29:36.967598 systemd[1]: Reached target integritysetup.target - Local Integrity Protected Volumes. Jan 13 21:29:36.967612 systemd[1]: Reached target remote-cryptsetup.target - Remote Encrypted Volumes. Jan 13 21:29:36.967627 systemd[1]: Reached target remote-fs.target - Remote File Systems. Jan 13 21:29:36.967641 systemd[1]: Reached target slices.target - Slice Units. Jan 13 21:29:36.967660 systemd[1]: Reached target swap.target - Swaps. Jan 13 21:29:36.967674 systemd[1]: Reached target veritysetup.target - Local Verity Protected Volumes. Jan 13 21:29:36.967690 systemd[1]: Listening on systemd-coredump.socket - Process Core Dump Socket. Jan 13 21:29:36.967704 systemd[1]: Listening on systemd-networkd.socket - Network Service Netlink Socket. Jan 13 21:29:36.967719 systemd[1]: Listening on systemd-udevd-control.socket - udev Control Socket. Jan 13 21:29:36.967733 systemd[1]: Listening on systemd-udevd-kernel.socket - udev Kernel Socket. Jan 13 21:29:36.967747 systemd[1]: Listening on systemd-userdbd.socket - User Database Manager Socket. Jan 13 21:29:36.967762 systemd[1]: Mounting dev-hugepages.mount - Huge Pages File System... Jan 13 21:29:36.967776 systemd[1]: Mounting dev-mqueue.mount - POSIX Message Queue File System... Jan 13 21:29:36.967790 systemd[1]: Mounting media.mount - External Media Directory... Jan 13 21:29:36.967804 systemd[1]: proc-xen.mount - /proc/xen was skipped because of an unmet condition check (ConditionVirtualization=xen). Jan 13 21:29:36.967821 systemd[1]: Mounting sys-kernel-debug.mount - Kernel Debug File System... Jan 13 21:29:36.967837 systemd[1]: Mounting sys-kernel-tracing.mount - Kernel Trace File System... Jan 13 21:29:36.967852 systemd[1]: Mounting tmp.mount - Temporary Directory /tmp... 
Jan 13 21:29:36.967869 systemd[1]: var-lib-machines.mount - Virtual Machine and Container Storage (Compatibility) was skipped because of an unmet condition check (ConditionPathExists=/var/lib/machines.raw). Jan 13 21:29:36.967885 systemd[1]: Reached target machines.target - Containers. Jan 13 21:29:36.967900 systemd[1]: Starting flatcar-tmpfiles.service - Create missing system files... Jan 13 21:29:36.967914 systemd[1]: ignition-delete-config.service - Ignition (delete config) was skipped because no trigger condition checks were met. Jan 13 21:29:36.967928 systemd[1]: Starting kmod-static-nodes.service - Create List of Static Device Nodes... Jan 13 21:29:36.967956 systemd[1]: Starting modprobe@configfs.service - Load Kernel Module configfs... Jan 13 21:29:36.967973 systemd[1]: Starting modprobe@dm_mod.service - Load Kernel Module dm_mod... Jan 13 21:29:36.967987 systemd[1]: Starting modprobe@drm.service - Load Kernel Module drm... Jan 13 21:29:36.968002 systemd[1]: Starting modprobe@efi_pstore.service - Load Kernel Module efi_pstore... Jan 13 21:29:36.968016 systemd[1]: Starting modprobe@fuse.service - Load Kernel Module fuse... Jan 13 21:29:36.968030 systemd[1]: Starting modprobe@loop.service - Load Kernel Module loop... Jan 13 21:29:36.968044 systemd[1]: setup-nsswitch.service - Create /etc/nsswitch.conf was skipped because of an unmet condition check (ConditionPathExists=!/etc/nsswitch.conf). Jan 13 21:29:36.968059 systemd[1]: systemd-fsck-root.service: Deactivated successfully. Jan 13 21:29:36.968073 systemd[1]: Stopped systemd-fsck-root.service - File System Check on Root Device. Jan 13 21:29:36.968090 systemd[1]: systemd-fsck-usr.service: Deactivated successfully. Jan 13 21:29:36.968104 systemd[1]: Stopped systemd-fsck-usr.service. Jan 13 21:29:36.968118 systemd[1]: Starting systemd-journald.service - Journal Service... Jan 13 21:29:36.968133 systemd[1]: Starting systemd-modules-load.service - Load Kernel Modules... 
Jan 13 21:29:36.968146 kernel: loop: module loaded Jan 13 21:29:36.968161 kernel: fuse: init (API version 7.39) Jan 13 21:29:36.968182 systemd[1]: Starting systemd-network-generator.service - Generate network units from Kernel command line... Jan 13 21:29:36.968214 systemd-journald[1128]: Collecting audit messages is disabled. Jan 13 21:29:36.968242 systemd-journald[1128]: Journal started Jan 13 21:29:36.968267 systemd-journald[1128]: Runtime Journal (/run/log/journal/517c5bdbcaaa493ca6cbdbce93536e03) is 6.0M, max 48.3M, 42.2M free. Jan 13 21:29:36.249282 systemd[1]: Queued start job for default target multi-user.target. Jan 13 21:29:36.265747 systemd[1]: Unnecessary job was removed for dev-vda6.device - /dev/vda6. Jan 13 21:29:36.266266 systemd[1]: systemd-journald.service: Deactivated successfully. Jan 13 21:29:36.971959 systemd[1]: Starting systemd-remount-fs.service - Remount Root and Kernel File Systems... Jan 13 21:29:36.975957 systemd[1]: Starting systemd-udev-trigger.service - Coldplug All udev Devices... Jan 13 21:29:36.978556 systemd[1]: verity-setup.service: Deactivated successfully. Jan 13 21:29:36.978587 systemd[1]: Stopped verity-setup.service. Jan 13 21:29:36.981954 systemd[1]: xenserver-pv-version.service - Set fake PV driver version for XenServer was skipped because of an unmet condition check (ConditionVirtualization=xen). Jan 13 21:29:36.984962 systemd[1]: Started systemd-journald.service - Journal Service. Jan 13 21:29:36.986116 systemd[1]: Mounted dev-hugepages.mount - Huge Pages File System. Jan 13 21:29:36.987301 systemd[1]: Mounted dev-mqueue.mount - POSIX Message Queue File System. Jan 13 21:29:36.988513 systemd[1]: Mounted media.mount - External Media Directory. Jan 13 21:29:36.989606 systemd[1]: Mounted sys-kernel-debug.mount - Kernel Debug File System. Jan 13 21:29:36.990812 systemd[1]: Mounted sys-kernel-tracing.mount - Kernel Trace File System. Jan 13 21:29:36.992149 systemd[1]: Mounted tmp.mount - Temporary Directory /tmp. 
Jan 13 21:29:36.993469 systemd[1]: Finished kmod-static-nodes.service - Create List of Static Device Nodes. Jan 13 21:29:36.995100 systemd[1]: modprobe@configfs.service: Deactivated successfully. Jan 13 21:29:36.995326 systemd[1]: Finished modprobe@configfs.service - Load Kernel Module configfs. Jan 13 21:29:36.996825 systemd[1]: modprobe@dm_mod.service: Deactivated successfully. Jan 13 21:29:36.997044 systemd[1]: Finished modprobe@dm_mod.service - Load Kernel Module dm_mod. Jan 13 21:29:36.998492 systemd[1]: modprobe@efi_pstore.service: Deactivated successfully. Jan 13 21:29:36.998692 systemd[1]: Finished modprobe@efi_pstore.service - Load Kernel Module efi_pstore. Jan 13 21:29:37.000249 systemd[1]: modprobe@fuse.service: Deactivated successfully. Jan 13 21:29:37.000456 systemd[1]: Finished modprobe@fuse.service - Load Kernel Module fuse. Jan 13 21:29:37.001861 systemd[1]: modprobe@loop.service: Deactivated successfully. Jan 13 21:29:37.002077 systemd[1]: Finished modprobe@loop.service - Load Kernel Module loop. Jan 13 21:29:37.003485 systemd[1]: Finished systemd-modules-load.service - Load Kernel Modules. Jan 13 21:29:37.004891 systemd[1]: Finished systemd-network-generator.service - Generate network units from Kernel command line. Jan 13 21:29:37.006459 systemd[1]: Finished systemd-remount-fs.service - Remount Root and Kernel File Systems. Jan 13 21:29:37.010963 kernel: ACPI: bus type drm_connector registered Jan 13 21:29:37.011233 systemd[1]: modprobe@drm.service: Deactivated successfully. Jan 13 21:29:37.011433 systemd[1]: Finished modprobe@drm.service - Load Kernel Module drm. Jan 13 21:29:37.022913 systemd[1]: Reached target network-pre.target - Preparation for Network. Jan 13 21:29:37.029064 systemd[1]: Mounting sys-fs-fuse-connections.mount - FUSE Control File System... Jan 13 21:29:37.031451 systemd[1]: Mounting sys-kernel-config.mount - Kernel Configuration File System... 
Jan 13 21:29:37.032575 systemd[1]: remount-root.service - Remount Root File System was skipped because of an unmet condition check (ConditionPathIsReadWrite=!/). Jan 13 21:29:37.032613 systemd[1]: Reached target local-fs.target - Local File Systems. Jan 13 21:29:37.034609 systemd[1]: Listening on systemd-sysext.socket - System Extension Image Management (Varlink). Jan 13 21:29:37.038078 systemd[1]: Starting dracut-shutdown.service - Restore /run/initramfs on shutdown... Jan 13 21:29:37.040755 systemd[1]: Starting ldconfig.service - Rebuild Dynamic Linker Cache... Jan 13 21:29:37.042124 systemd[1]: systemd-binfmt.service - Set Up Additional Binary Formats was skipped because no trigger condition checks were met. Jan 13 21:29:37.087114 systemd[1]: Starting systemd-hwdb-update.service - Rebuild Hardware Database... Jan 13 21:29:37.089777 systemd[1]: Starting systemd-journal-flush.service - Flush Journal to Persistent Storage... Jan 13 21:29:37.091664 systemd[1]: systemd-pstore.service - Platform Persistent Storage Archival was skipped because of an unmet condition check (ConditionDirectoryNotEmpty=/sys/fs/pstore). Jan 13 21:29:37.093874 systemd[1]: Starting systemd-random-seed.service - Load/Save OS Random Seed... Jan 13 21:29:37.095363 systemd[1]: systemd-repart.service - Repartition Root Disk was skipped because no trigger condition checks were met. Jan 13 21:29:37.097497 systemd[1]: Starting systemd-sysctl.service - Apply Kernel Variables... Jan 13 21:29:37.108560 systemd-journald[1128]: Time spent on flushing to /var/log/journal/517c5bdbcaaa493ca6cbdbce93536e03 is 16.316ms for 978 entries. Jan 13 21:29:37.108560 systemd-journald[1128]: System Journal (/var/log/journal/517c5bdbcaaa493ca6cbdbce93536e03) is 8.0M, max 195.6M, 187.6M free. Jan 13 21:29:37.334629 systemd-journald[1128]: Received client request to flush runtime journal. 
Jan 13 21:29:37.334676 kernel: loop0: detected capacity change from 0 to 142488 Jan 13 21:29:37.334691 kernel: squashfs: version 4.0 (2009/01/31) Phillip Lougher Jan 13 21:29:37.334704 kernel: loop1: detected capacity change from 0 to 210664 Jan 13 21:29:37.105829 systemd[1]: Starting systemd-sysext.service - Merge System Extension Images into /usr/ and /opt/... Jan 13 21:29:37.108855 systemd[1]: Starting systemd-tmpfiles-setup-dev-early.service - Create Static Device Nodes in /dev gracefully... Jan 13 21:29:37.113492 systemd[1]: Finished systemd-udev-trigger.service - Coldplug All udev Devices. Jan 13 21:29:37.115377 systemd[1]: Mounted sys-fs-fuse-connections.mount - FUSE Control File System. Jan 13 21:29:37.117011 systemd[1]: Mounted sys-kernel-config.mount - Kernel Configuration File System. Jan 13 21:29:37.120616 systemd[1]: Finished dracut-shutdown.service - Restore /run/initramfs on shutdown. Jan 13 21:29:37.135140 systemd[1]: Starting systemd-udev-settle.service - Wait for udev To Complete Device Initialization... Jan 13 21:29:37.144131 udevadm[1179]: systemd-udev-settle.service is deprecated. Please fix lvm2-activation.service, lvm2-activation-early.service not to pull it in. Jan 13 21:29:37.284898 systemd[1]: Finished systemd-sysctl.service - Apply Kernel Variables. Jan 13 21:29:37.288574 systemd-tmpfiles[1166]: ACLs are not supported, ignoring. Jan 13 21:29:37.288589 systemd-tmpfiles[1166]: ACLs are not supported, ignoring. Jan 13 21:29:37.294436 systemd[1]: Finished systemd-tmpfiles-setup-dev-early.service - Create Static Device Nodes in /dev gracefully. Jan 13 21:29:37.317030 systemd[1]: Finished systemd-random-seed.service - Load/Save OS Random Seed. Jan 13 21:29:37.319649 systemd[1]: Reached target first-boot-complete.target - First Boot Complete. Jan 13 21:29:37.333188 systemd[1]: Starting systemd-machine-id-commit.service - Commit a transient machine-id on disk... 
Jan 13 21:29:37.336573 systemd[1]: Finished systemd-journal-flush.service - Flush Journal to Persistent Storage. Jan 13 21:29:37.492975 kernel: loop2: detected capacity change from 0 to 140768 Jan 13 21:29:37.518788 systemd[1]: Finished flatcar-tmpfiles.service - Create missing system files. Jan 13 21:29:37.526963 kernel: loop3: detected capacity change from 0 to 142488 Jan 13 21:29:37.529708 systemd[1]: Starting systemd-sysusers.service - Create System Users... Jan 13 21:29:37.542972 kernel: loop4: detected capacity change from 0 to 210664 Jan 13 21:29:37.550970 kernel: loop5: detected capacity change from 0 to 140768 Jan 13 21:29:37.556964 systemd[1]: Finished systemd-sysusers.service - Create System Users. Jan 13 21:29:37.564185 systemd[1]: Starting systemd-tmpfiles-setup-dev.service - Create Static Device Nodes in /dev... Jan 13 21:29:37.566138 (sd-merge)[1200]: Using extensions 'containerd-flatcar', 'docker-flatcar', 'kubernetes'. Jan 13 21:29:37.566716 (sd-merge)[1200]: Merged extensions into '/usr'. Jan 13 21:29:37.570515 systemd[1]: Reloading requested from client PID 1164 ('systemd-sysext') (unit systemd-sysext.service)... Jan 13 21:29:37.570530 systemd[1]: Reloading... Jan 13 21:29:37.585894 systemd-tmpfiles[1203]: ACLs are not supported, ignoring. Jan 13 21:29:37.585916 systemd-tmpfiles[1203]: ACLs are not supported, ignoring. Jan 13 21:29:37.627352 zram_generator::config[1232]: No configuration found. Jan 13 21:29:37.792567 ldconfig[1159]: /sbin/ldconfig: /lib/ld.so.conf is not an ELF file - it has the wrong magic bytes at the start. Jan 13 21:29:37.845468 systemd[1]: /usr/lib/systemd/system/docker.socket:6: ListenStream= references a path below legacy directory /var/run/, updating /var/run/docker.sock → /run/docker.sock; please update the unit file accordingly. Jan 13 21:29:37.894552 systemd[1]: Reloading finished in 323 ms. Jan 13 21:29:37.929612 systemd[1]: Finished systemd-tmpfiles-setup-dev.service - Create Static Device Nodes in /dev. 
Jan 13 21:29:37.931344 systemd[1]: Finished systemd-sysext.service - Merge System Extension Images into /usr/ and /opt/. Jan 13 21:29:37.952101 systemd[1]: Starting ensure-sysext.service... Jan 13 21:29:37.964337 systemd[1]: Starting systemd-tmpfiles-setup.service - Create System Files and Directories... Jan 13 21:29:37.969135 systemd[1]: Reloading requested from client PID 1269 ('systemctl') (unit ensure-sysext.service)... Jan 13 21:29:37.969165 systemd[1]: Reloading... Jan 13 21:29:37.999503 systemd-tmpfiles[1270]: /usr/lib/tmpfiles.d/provision.conf:20: Duplicate line for path "/root", ignoring. Jan 13 21:29:37.999811 systemd-tmpfiles[1270]: /usr/lib/tmpfiles.d/systemd-flatcar.conf:6: Duplicate line for path "/var/log/journal", ignoring. Jan 13 21:29:38.000714 systemd-tmpfiles[1270]: /usr/lib/tmpfiles.d/systemd.conf:29: Duplicate line for path "/var/lib/systemd", ignoring. Jan 13 21:29:38.001011 systemd-tmpfiles[1270]: ACLs are not supported, ignoring. Jan 13 21:29:38.001087 systemd-tmpfiles[1270]: ACLs are not supported, ignoring. Jan 13 21:29:38.004490 systemd-tmpfiles[1270]: Detected autofs mount point /boot during canonicalization of boot. Jan 13 21:29:38.004501 systemd-tmpfiles[1270]: Skipping /boot Jan 13 21:29:38.021237 zram_generator::config[1296]: No configuration found. Jan 13 21:29:38.026242 systemd-tmpfiles[1270]: Detected autofs mount point /boot during canonicalization of boot. Jan 13 21:29:38.026257 systemd-tmpfiles[1270]: Skipping /boot Jan 13 21:29:38.143999 systemd[1]: /usr/lib/systemd/system/docker.socket:6: ListenStream= references a path below legacy directory /var/run/, updating /var/run/docker.sock → /run/docker.sock; please update the unit file accordingly. Jan 13 21:29:38.194629 systemd[1]: Reloading finished in 225 ms. Jan 13 21:29:38.216474 systemd[1]: etc-machine\x2did.mount: Deactivated successfully. Jan 13 21:29:38.218086 systemd[1]: Finished ldconfig.service - Rebuild Dynamic Linker Cache. 
Jan 13 21:29:38.219807 systemd[1]: Finished systemd-machine-id-commit.service - Commit a transient machine-id on disk.
Jan 13 21:29:38.231595 systemd[1]: Finished systemd-tmpfiles-setup.service - Create System Files and Directories.
Jan 13 21:29:38.242295 systemd[1]: Starting audit-rules.service - Load Security Auditing Rules...
Jan 13 21:29:38.246208 systemd[1]: Starting clean-ca-certificates.service - Clean up broken links in /etc/ssl/certs...
Jan 13 21:29:38.249127 systemd[1]: Starting systemd-journal-catalog-update.service - Rebuild Journal Catalog...
Jan 13 21:29:38.252904 systemd[1]: Starting systemd-resolved.service - Network Name Resolution...
Jan 13 21:29:38.255201 systemd[1]: Starting systemd-update-utmp.service - Record System Boot/Shutdown in UTMP...
Jan 13 21:29:38.258803 systemd[1]: proc-xen.mount - /proc/xen was skipped because of an unmet condition check (ConditionVirtualization=xen).
Jan 13 21:29:38.259398 systemd[1]: ignition-delete-config.service - Ignition (delete config) was skipped because no trigger condition checks were met.
Jan 13 21:29:38.260905 systemd[1]: Starting modprobe@dm_mod.service - Load Kernel Module dm_mod...
Jan 13 21:29:38.263850 systemd[1]: Starting modprobe@efi_pstore.service - Load Kernel Module efi_pstore...
Jan 13 21:29:38.266260 systemd[1]: Starting modprobe@loop.service - Load Kernel Module loop...
Jan 13 21:29:38.267646 systemd[1]: systemd-binfmt.service - Set Up Additional Binary Formats was skipped because no trigger condition checks were met.
Jan 13 21:29:38.267825 systemd[1]: xenserver-pv-version.service - Set fake PV driver version for XenServer was skipped because of an unmet condition check (ConditionVirtualization=xen).
Jan 13 21:29:38.269327 systemd[1]: modprobe@dm_mod.service: Deactivated successfully.
Jan 13 21:29:38.269524 systemd[1]: Finished modprobe@dm_mod.service - Load Kernel Module dm_mod.
Jan 13 21:29:38.275165 systemd[1]: Starting systemd-userdbd.service - User Database Manager...
Jan 13 21:29:38.276690 systemd[1]: modprobe@efi_pstore.service: Deactivated successfully.
Jan 13 21:29:38.276868 systemd[1]: Finished modprobe@efi_pstore.service - Load Kernel Module efi_pstore.
Jan 13 21:29:38.278667 systemd[1]: modprobe@loop.service: Deactivated successfully.
Jan 13 21:29:38.278831 systemd[1]: Finished modprobe@loop.service - Load Kernel Module loop.
Jan 13 21:29:38.284418 systemd[1]: proc-xen.mount - /proc/xen was skipped because of an unmet condition check (ConditionVirtualization=xen).
Jan 13 21:29:38.284667 systemd[1]: ignition-delete-config.service - Ignition (delete config) was skipped because no trigger condition checks were met.
Jan 13 21:29:38.287096 systemd[1]: Starting modprobe@dm_mod.service - Load Kernel Module dm_mod...
Jan 13 21:29:38.292045 systemd[1]: Starting modprobe@efi_pstore.service - Load Kernel Module efi_pstore...
Jan 13 21:29:38.297916 systemd[1]: Starting modprobe@loop.service - Load Kernel Module loop...
Jan 13 21:29:38.299230 systemd[1]: systemd-binfmt.service - Set Up Additional Binary Formats was skipped because no trigger condition checks were met.
Jan 13 21:29:38.299364 systemd[1]: xenserver-pv-version.service - Set fake PV driver version for XenServer was skipped because of an unmet condition check (ConditionVirtualization=xen).
Jan 13 21:29:38.300327 systemd[1]: modprobe@efi_pstore.service: Deactivated successfully.
Jan 13 21:29:38.300518 systemd[1]: Finished modprobe@efi_pstore.service - Load Kernel Module efi_pstore.
Jan 13 21:29:38.303592 systemd[1]: modprobe@dm_mod.service: Deactivated successfully.
Jan 13 21:29:38.304799 systemd[1]: Finished modprobe@dm_mod.service - Load Kernel Module dm_mod.
Jan 13 21:29:38.315577 systemd[1]: modprobe@loop.service: Deactivated successfully.
Jan 13 21:29:38.315781 systemd[1]: Finished modprobe@loop.service - Load Kernel Module loop.
Jan 13 21:29:38.319060 systemd[1]: proc-xen.mount - /proc/xen was skipped because of an unmet condition check (ConditionVirtualization=xen).
Jan 13 21:29:38.319383 systemd[1]: ignition-delete-config.service - Ignition (delete config) was skipped because no trigger condition checks were met.
Jan 13 21:29:38.329221 systemd[1]: Starting modprobe@dm_mod.service - Load Kernel Module dm_mod...
Jan 13 21:29:38.331762 systemd[1]: Starting modprobe@drm.service - Load Kernel Module drm...
Jan 13 21:29:38.334154 systemd[1]: Starting modprobe@efi_pstore.service - Load Kernel Module efi_pstore...
Jan 13 21:29:38.338463 systemd[1]: systemd-binfmt.service - Set Up Additional Binary Formats was skipped because no trigger condition checks were met.
Jan 13 21:29:38.338549 systemd[1]: xenserver-pv-version.service - Set fake PV driver version for XenServer was skipped because of an unmet condition check (ConditionVirtualization=xen).
Jan 13 21:29:38.339174 systemd[1]: Finished ensure-sysext.service.
Jan 13 21:29:38.340394 systemd[1]: Finished systemd-journal-catalog-update.service - Rebuild Journal Catalog.
Jan 13 21:29:38.342175 systemd[1]: Finished systemd-update-utmp.service - Record System Boot/Shutdown in UTMP.
Jan 13 21:29:38.345764 systemd[1]: modprobe@dm_mod.service: Deactivated successfully.
Jan 13 21:29:38.345934 systemd[1]: Finished modprobe@dm_mod.service - Load Kernel Module dm_mod.
Jan 13 21:29:38.346251 augenrules[1374]: No rules
Jan 13 21:29:38.347781 systemd[1]: Finished audit-rules.service - Load Security Auditing Rules.
Jan 13 21:29:38.349172 systemd[1]: Started systemd-userdbd.service - User Database Manager.
Jan 13 21:29:38.350707 systemd[1]: Finished systemd-hwdb-update.service - Rebuild Hardware Database.
Jan 13 21:29:38.352297 systemd[1]: modprobe@drm.service: Deactivated successfully.
Jan 13 21:29:38.352476 systemd[1]: Finished modprobe@drm.service - Load Kernel Module drm.
Jan 13 21:29:38.355709 systemd[1]: modprobe@efi_pstore.service: Deactivated successfully.
Jan 13 21:29:38.355875 systemd[1]: Finished modprobe@efi_pstore.service - Load Kernel Module efi_pstore.
Jan 13 21:29:38.365555 systemd[1]: systemd-pstore.service - Platform Persistent Storage Archival was skipped because of an unmet condition check (ConditionDirectoryNotEmpty=/sys/fs/pstore).
Jan 13 21:29:38.365657 systemd[1]: systemd-repart.service - Repartition Root Disk was skipped because no trigger condition checks were met.
Jan 13 21:29:38.377284 systemd[1]: Starting systemd-timesyncd.service - Network Time Synchronization...
Jan 13 21:29:38.380040 systemd[1]: Starting systemd-udevd.service - Rule-based Manager for Device Events and Files...
Jan 13 21:29:38.382485 systemd[1]: Starting systemd-update-done.service - Update is Completed...
Jan 13 21:29:38.383973 systemd[1]: Finished clean-ca-certificates.service - Clean up broken links in /etc/ssl/certs.
Jan 13 21:29:38.386397 systemd[1]: update-ca-certificates.service - Update CA bundle at /etc/ssl/certs/ca-certificates.crt was skipped because of an unmet condition check (ConditionPathIsSymbolicLink=!/etc/ssl/certs/ca-certificates.crt).
Jan 13 21:29:38.403073 systemd[1]: Finished systemd-update-done.service - Update is Completed.
Jan 13 21:29:38.413313 systemd-udevd[1393]: Using default interface naming scheme 'v255'.
Jan 13 21:29:38.419381 systemd-resolved[1342]: Positive Trust Anchors:
Jan 13 21:29:38.419401 systemd-resolved[1342]: . IN DS 20326 8 2 e06d44b80b8f1d39a95c0b0d7c65d08458e880409bbc683457104237c7f8ec8d
Jan 13 21:29:38.419445 systemd-resolved[1342]: Negative trust anchors: home.arpa 10.in-addr.arpa 16.172.in-addr.arpa 17.172.in-addr.arpa 18.172.in-addr.arpa 19.172.in-addr.arpa 20.172.in-addr.arpa 21.172.in-addr.arpa 22.172.in-addr.arpa 23.172.in-addr.arpa 24.172.in-addr.arpa 25.172.in-addr.arpa 26.172.in-addr.arpa 27.172.in-addr.arpa 28.172.in-addr.arpa 29.172.in-addr.arpa 30.172.in-addr.arpa 31.172.in-addr.arpa 170.0.0.192.in-addr.arpa 171.0.0.192.in-addr.arpa 168.192.in-addr.arpa d.f.ip6.arpa ipv4only.arpa resolver.arpa corp home internal intranet lan local private test
Jan 13 21:29:38.424583 systemd-resolved[1342]: Defaulting to hostname 'linux'.
Jan 13 21:29:38.426692 systemd[1]: Started systemd-resolved.service - Network Name Resolution.
Jan 13 21:29:38.428155 systemd[1]: Reached target nss-lookup.target - Host and Network Name Lookups.
Jan 13 21:29:38.435343 systemd[1]: Started systemd-udevd.service - Rule-based Manager for Device Events and Files.
Jan 13 21:29:38.445877 systemd[1]: Starting systemd-networkd.service - Network Configuration...
Jan 13 21:29:38.461171 systemd[1]: Started systemd-timesyncd.service - Network Time Synchronization.
Jan 13 21:29:38.464602 systemd[1]: Reached target time-set.target - System Time Set.
Jan 13 21:29:38.471502 systemd[1]: Condition check resulted in dev-ttyS0.device - /dev/ttyS0 being skipped.
Jan 13 21:29:38.493974 kernel: BTRFS warning: duplicate device /dev/vda3 devid 1 generation 37 scanned by (udev-worker) (1401)
Jan 13 21:29:38.529637 systemd-networkd[1403]: lo: Link UP
Jan 13 21:29:38.529654 systemd-networkd[1403]: lo: Gained carrier
Jan 13 21:29:38.531650 systemd-networkd[1403]: Enumeration completed
Jan 13 21:29:38.532284 systemd-networkd[1403]: eth0: found matching network '/usr/lib/systemd/network/zz-default.network', based on potentially unpredictable interface name.
Jan 13 21:29:38.532289 systemd-networkd[1403]: eth0: Configuring with /usr/lib/systemd/network/zz-default.network.
Jan 13 21:29:38.533116 systemd-networkd[1403]: eth0: Link UP
Jan 13 21:29:38.533136 systemd-networkd[1403]: eth0: Gained carrier
Jan 13 21:29:38.533151 systemd-networkd[1403]: eth0: found matching network '/usr/lib/systemd/network/zz-default.network', based on potentially unpredictable interface name.
Jan 13 21:29:38.534701 systemd[1]: Found device dev-disk-by\x2dlabel-OEM.device - /dev/disk/by-label/OEM.
Jan 13 21:29:38.536706 systemd[1]: Started systemd-networkd.service - Network Configuration.
Jan 13 21:29:38.538072 systemd[1]: Reached target network.target - Network.
Jan 13 21:29:38.545021 systemd-networkd[1403]: eth0: DHCPv4 address 10.0.0.125/16, gateway 10.0.0.1 acquired from 10.0.0.1
Jan 13 21:29:38.545203 systemd[1]: Starting systemd-fsck@dev-disk-by\x2dlabel-OEM.service - File System Check on /dev/disk/by-label/OEM...
Jan 13 21:29:38.549420 systemd[1]: Starting systemd-networkd-wait-online.service - Wait for Network to be Configured...
Jan 13 21:29:38.550667 systemd-networkd[1403]: eth0: found matching network '/usr/lib/systemd/network/zz-default.network', based on potentially unpredictable interface name.
Jan 13 21:29:38.551279 systemd-timesyncd[1392]: Network configuration changed, trying to establish connection.
Jan 13 21:29:39.026116 systemd-resolved[1342]: Clock change detected. Flushing caches.
Jan 13 21:29:39.026170 systemd-timesyncd[1392]: Contacted time server 10.0.0.1:123 (10.0.0.1).
Jan 13 21:29:39.026257 systemd-timesyncd[1392]: Initial clock synchronization to Mon 2025-01-13 21:29:39.026003 UTC.
Jan 13 21:29:39.028689 kernel: input: Power Button as /devices/LNXSYSTM:00/LNXPWRBN:00/input/input3
Jan 13 21:29:39.033687 kernel: ACPI: button: Power Button [PWRF]
Jan 13 21:29:39.038600 systemd[1]: Finished systemd-fsck@dev-disk-by\x2dlabel-OEM.service - File System Check on /dev/disk/by-label/OEM.
Jan 13 21:29:39.059717 kernel: input: ImExPS/2 Generic Explorer Mouse as /devices/platform/i8042/serio1/input/input4
Jan 13 21:29:39.091036 systemd[1]: Starting systemd-vconsole-setup.service - Virtual Console Setup...
Jan 13 21:29:39.094248 kernel: i801_smbus 0000:00:1f.3: Enabling SMBus device
Jan 13 21:29:39.097260 kernel: i801_smbus 0000:00:1f.3: SMBus using PCI interrupt
Jan 13 21:29:39.097464 kernel: i2c i2c-0: 1/1 memory slots populated (from DMI)
Jan 13 21:29:39.098495 kernel: i2c i2c-0: Memory type 0x07 not supported yet, not instantiating SPD
Jan 13 21:29:39.103954 systemd[1]: systemd-vconsole-setup.service: Deactivated successfully.
Jan 13 21:29:39.104384 systemd[1]: Stopped systemd-vconsole-setup.service - Virtual Console Setup.
Jan 13 21:29:39.109688 kernel: mousedev: PS/2 mouse device common for all mice
Jan 13 21:29:39.163045 systemd[1]: Starting systemd-vconsole-setup.service - Virtual Console Setup...
Jan 13 21:29:39.173989 kernel: kvm_amd: TSC scaling supported
Jan 13 21:29:39.174073 kernel: kvm_amd: Nested Virtualization enabled
Jan 13 21:29:39.174092 kernel: kvm_amd: Nested Paging enabled
Jan 13 21:29:39.174119 kernel: kvm_amd: LBR virtualization supported
Jan 13 21:29:39.175205 kernel: kvm_amd: Virtual VMLOAD VMSAVE supported
Jan 13 21:29:39.175235 kernel: kvm_amd: Virtual GIF supported
Jan 13 21:29:39.196784 kernel: EDAC MC: Ver: 3.0.0
Jan 13 21:29:39.223898 systemd[1]: Finished systemd-vconsole-setup.service - Virtual Console Setup.
Jan 13 21:29:39.242935 systemd[1]: Finished systemd-udev-settle.service - Wait for udev To Complete Device Initialization.
Jan 13 21:29:39.256904 systemd[1]: Starting lvm2-activation-early.service - Activation of LVM2 logical volumes...
Jan 13 21:29:39.264834 lvm[1445]: WARNING: Failed to connect to lvmetad. Falling back to device scanning.
Jan 13 21:29:39.298501 systemd[1]: Finished lvm2-activation-early.service - Activation of LVM2 logical volumes.
Jan 13 21:29:39.300078 systemd[1]: Reached target cryptsetup.target - Local Encrypted Volumes.
Jan 13 21:29:39.301258 systemd[1]: Reached target sysinit.target - System Initialization.
Jan 13 21:29:39.302485 systemd[1]: Started motdgen.path - Watch for update engine configuration changes.
Jan 13 21:29:39.303816 systemd[1]: Started user-cloudinit@var-lib-flatcar\x2dinstall-user_data.path - Watch for a cloud-config at /var/lib/flatcar-install/user_data.
Jan 13 21:29:39.305357 systemd[1]: Started logrotate.timer - Daily rotation of log files.
Jan 13 21:29:39.306615 systemd[1]: Started mdadm.timer - Weekly check for MD array's redundancy information..
Jan 13 21:29:39.307969 systemd[1]: Started systemd-tmpfiles-clean.timer - Daily Cleanup of Temporary Directories.
Jan 13 21:29:39.309243 systemd[1]: update-engine-stub.timer - Update Engine Stub Timer was skipped because of an unmet condition check (ConditionPathExists=/usr/.noupdate).
Jan 13 21:29:39.309266 systemd[1]: Reached target paths.target - Path Units.
Jan 13 21:29:39.310235 systemd[1]: Reached target timers.target - Timer Units.
Jan 13 21:29:39.311900 systemd[1]: Listening on dbus.socket - D-Bus System Message Bus Socket.
Jan 13 21:29:39.315085 systemd[1]: Starting docker.socket - Docker Socket for the API...
Jan 13 21:29:39.324636 systemd[1]: Listening on sshd.socket - OpenSSH Server Socket.
Jan 13 21:29:39.327168 systemd[1]: Starting lvm2-activation.service - Activation of LVM2 logical volumes...
Jan 13 21:29:39.328783 systemd[1]: Listening on docker.socket - Docker Socket for the API.
Jan 13 21:29:39.330012 systemd[1]: Reached target sockets.target - Socket Units.
Jan 13 21:29:39.331013 systemd[1]: Reached target basic.target - Basic System.
Jan 13 21:29:39.332050 systemd[1]: addon-config@oem.service - Configure Addon /oem was skipped because no trigger condition checks were met.
Jan 13 21:29:39.332085 systemd[1]: addon-run@oem.service - Run Addon /oem was skipped because no trigger condition checks were met.
Jan 13 21:29:39.333143 systemd[1]: Starting containerd.service - containerd container runtime...
Jan 13 21:29:39.335854 systemd[1]: Starting dbus.service - D-Bus System Message Bus...
Jan 13 21:29:39.338784 lvm[1450]: WARNING: Failed to connect to lvmetad. Falling back to device scanning.
Jan 13 21:29:39.340541 systemd[1]: Starting enable-oem-cloudinit.service - Enable cloudinit...
Jan 13 21:29:39.343284 systemd[1]: Starting extend-filesystems.service - Extend Filesystems...
Jan 13 21:29:39.344422 systemd[1]: flatcar-setup-environment.service - Modifies /etc/environment for CoreOS was skipped because of an unmet condition check (ConditionPathExists=/oem/bin/flatcar-setup-environment).
Jan 13 21:29:39.348828 systemd[1]: Starting motdgen.service - Generate /run/flatcar/motd...
Jan 13 21:29:39.351989 systemd[1]: Starting ssh-key-proc-cmdline.service - Install an ssh key from /proc/cmdline...
Jan 13 21:29:39.352684 jq[1453]: false
Jan 13 21:29:39.356408 systemd[1]: Starting sshd-keygen.service - Generate sshd host keys...
Jan 13 21:29:39.362367 extend-filesystems[1454]: Found loop3
Jan 13 21:29:39.365981 extend-filesystems[1454]: Found loop4
Jan 13 21:29:39.365981 extend-filesystems[1454]: Found loop5
Jan 13 21:29:39.365981 extend-filesystems[1454]: Found sr0
Jan 13 21:29:39.365981 extend-filesystems[1454]: Found vda
Jan 13 21:29:39.365981 extend-filesystems[1454]: Found vda1
Jan 13 21:29:39.365981 extend-filesystems[1454]: Found vda2
Jan 13 21:29:39.365981 extend-filesystems[1454]: Found vda3
Jan 13 21:29:39.365981 extend-filesystems[1454]: Found usr
Jan 13 21:29:39.365981 extend-filesystems[1454]: Found vda4
Jan 13 21:29:39.365981 extend-filesystems[1454]: Found vda6
Jan 13 21:29:39.365981 extend-filesystems[1454]: Found vda7
Jan 13 21:29:39.365981 extend-filesystems[1454]: Found vda9
Jan 13 21:29:39.365981 extend-filesystems[1454]: Checking size of /dev/vda9
Jan 13 21:29:39.391168 kernel: EXT4-fs (vda9): resizing filesystem from 553472 to 1864699 blocks
Jan 13 21:29:39.363912 systemd[1]: Starting systemd-logind.service - User Login Management...
Jan 13 21:29:39.391315 extend-filesystems[1454]: Resized partition /dev/vda9
Jan 13 21:29:39.393777 kernel: BTRFS warning: duplicate device /dev/vda3 devid 1 generation 37 scanned by (udev-worker) (1410)
Jan 13 21:29:39.378054 dbus-daemon[1452]: [system] SELinux support is enabled
Jan 13 21:29:39.367538 systemd[1]: tcsd.service - TCG Core Services Daemon was skipped because of an unmet condition check (ConditionPathExists=/dev/tpm0).
Jan 13 21:29:39.394129 extend-filesystems[1472]: resize2fs 1.47.1 (20-May-2024)
Jan 13 21:29:39.368021 systemd[1]: cgroup compatibility translation between legacy and unified hierarchy settings activated. See cgroup-compat debug messages for details.
Jan 13 21:29:39.395910 jq[1471]: true
Jan 13 21:29:39.368884 systemd[1]: Starting update-engine.service - Update Engine...
Jan 13 21:29:39.379808 systemd[1]: Starting update-ssh-keys-after-ignition.service - Run update-ssh-keys once after Ignition...
Jan 13 21:29:39.382335 systemd[1]: Started dbus.service - D-Bus System Message Bus.
Jan 13 21:29:39.387705 systemd[1]: Finished lvm2-activation.service - Activation of LVM2 logical volumes.
Jan 13 21:29:39.405531 systemd[1]: enable-oem-cloudinit.service: Skipped due to 'exec-condition'.
Jan 13 21:29:39.406750 systemd[1]: Condition check resulted in enable-oem-cloudinit.service - Enable cloudinit being skipped.
Jan 13 21:29:39.407265 systemd[1]: motdgen.service: Deactivated successfully.
Jan 13 21:29:39.407492 systemd[1]: Finished motdgen.service - Generate /run/flatcar/motd.
Jan 13 21:29:39.409701 update_engine[1465]: I20250113 21:29:39.409611 1465 main.cc:92] Flatcar Update Engine starting
Jan 13 21:29:39.410224 systemd[1]: ssh-key-proc-cmdline.service: Deactivated successfully.
Jan 13 21:29:39.410417 systemd[1]: Finished ssh-key-proc-cmdline.service - Install an ssh key from /proc/cmdline.
Jan 13 21:29:39.411868 update_engine[1465]: I20250113 21:29:39.410820 1465 update_check_scheduler.cc:74] Next update check in 6m2s
Jan 13 21:29:39.420690 kernel: EXT4-fs (vda9): resized filesystem to 1864699
Jan 13 21:29:39.429415 (ntainerd)[1478]: containerd.service: Referenced but unset environment variable evaluates to an empty string: TORCX_IMAGEDIR, TORCX_UNPACKDIR
Jan 13 21:29:39.441769 jq[1477]: true
Jan 13 21:29:39.445693 extend-filesystems[1472]: Filesystem at /dev/vda9 is mounted on /; on-line resizing required
Jan 13 21:29:39.445693 extend-filesystems[1472]: old_desc_blocks = 1, new_desc_blocks = 1
Jan 13 21:29:39.445693 extend-filesystems[1472]: The filesystem on /dev/vda9 is now 1864699 (4k) blocks long.
Jan 13 21:29:39.450792 extend-filesystems[1454]: Resized filesystem in /dev/vda9
Jan 13 21:29:39.447555 systemd[1]: extend-filesystems.service: Deactivated successfully.
Jan 13 21:29:39.447824 systemd[1]: Finished extend-filesystems.service - Extend Filesystems.
Jan 13 21:29:39.459622 systemd[1]: Started update-engine.service - Update Engine.
Jan 13 21:29:39.461100 systemd-logind[1461]: Watching system buttons on /dev/input/event1 (Power Button)
Jan 13 21:29:39.461121 systemd-logind[1461]: Watching system buttons on /dev/input/event0 (AT Translated Set 2 keyboard)
Jan 13 21:29:39.461184 systemd[1]: system-cloudinit@usr-share-oem-cloud\x2dconfig.yml.service - Load cloud-config from /usr/share/oem/cloud-config.yml was skipped because of an unmet condition check (ConditionFileNotEmpty=/usr/share/oem/cloud-config.yml).
Jan 13 21:29:39.461209 systemd[1]: Reached target system-config.target - Load system-provided cloud configs.
Jan 13 21:29:39.462861 systemd-logind[1461]: New seat seat0.
Jan 13 21:29:39.464129 systemd[1]: user-cloudinit-proc-cmdline.service - Load cloud-config from url defined in /proc/cmdline was skipped because of an unmet condition check (ConditionKernelCommandLine=cloud-config-url).
Jan 13 21:29:39.464148 systemd[1]: Reached target user-config.target - Load user-provided cloud configs.
Jan 13 21:29:39.472939 systemd[1]: Started locksmithd.service - Cluster reboot manager.
Jan 13 21:29:39.474404 systemd[1]: Started systemd-logind.service - User Login Management.
Jan 13 21:29:39.487706 bash[1503]: Updated "/home/core/.ssh/authorized_keys"
Jan 13 21:29:39.490243 systemd[1]: Finished update-ssh-keys-after-ignition.service - Run update-ssh-keys once after Ignition.
Jan 13 21:29:39.492967 systemd[1]: sshkeys.service was skipped because no trigger condition checks were met.
Jan 13 21:29:39.497019 locksmithd[1504]: locksmithd starting currentOperation="UPDATE_STATUS_IDLE" strategy="reboot"
Jan 13 21:29:39.540185 sshd_keygen[1476]: ssh-keygen: generating new host keys: RSA ECDSA ED25519
Jan 13 21:29:39.569683 systemd[1]: Finished sshd-keygen.service - Generate sshd host keys.
Jan 13 21:29:39.579905 systemd[1]: Starting issuegen.service - Generate /run/issue...
Jan 13 21:29:39.587157 systemd[1]: issuegen.service: Deactivated successfully.
Jan 13 21:29:39.587466 systemd[1]: Finished issuegen.service - Generate /run/issue.
Jan 13 21:29:39.591872 systemd[1]: Starting systemd-user-sessions.service - Permit User Sessions...
Jan 13 21:29:39.607992 systemd[1]: Finished systemd-user-sessions.service - Permit User Sessions.
Jan 13 21:29:39.617133 systemd[1]: Started getty@tty1.service - Getty on tty1.
Jan 13 21:29:39.619786 systemd[1]: Started serial-getty@ttyS0.service - Serial Getty on ttyS0.
Jan 13 21:29:39.621099 systemd[1]: Reached target getty.target - Login Prompts.
Jan 13 21:29:39.634078 containerd[1478]: time="2025-01-13T21:29:39.633980496Z" level=info msg="starting containerd" revision=174e0d1785eeda18dc2beba45e1d5a188771636b version=v1.7.21
Jan 13 21:29:39.655766 containerd[1478]: time="2025-01-13T21:29:39.655711279Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.aufs\"..." type=io.containerd.snapshotter.v1
Jan 13 21:29:39.657444 containerd[1478]: time="2025-01-13T21:29:39.657400728Z" level=info msg="skip loading plugin \"io.containerd.snapshotter.v1.aufs\"..." error="aufs is not supported (modprobe aufs failed: exit status 1 \"modprobe: FATAL: Module aufs not found in directory /lib/modules/6.6.71-flatcar\\n\"): skip plugin" type=io.containerd.snapshotter.v1
Jan 13 21:29:39.657444 containerd[1478]: time="2025-01-13T21:29:39.657429141Z" level=info msg="loading plugin \"io.containerd.event.v1.exchange\"..." type=io.containerd.event.v1
Jan 13 21:29:39.657444 containerd[1478]: time="2025-01-13T21:29:39.657443548Z" level=info msg="loading plugin \"io.containerd.internal.v1.opt\"..." type=io.containerd.internal.v1
Jan 13 21:29:39.657645 containerd[1478]: time="2025-01-13T21:29:39.657617384Z" level=info msg="loading plugin \"io.containerd.warning.v1.deprecations\"..." type=io.containerd.warning.v1
Jan 13 21:29:39.657683 containerd[1478]: time="2025-01-13T21:29:39.657644084Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.blockfile\"..." type=io.containerd.snapshotter.v1
Jan 13 21:29:39.657772 containerd[1478]: time="2025-01-13T21:29:39.657742879Z" level=info msg="skip loading plugin \"io.containerd.snapshotter.v1.blockfile\"..." error="no scratch file generator: skip plugin" type=io.containerd.snapshotter.v1
Jan 13 21:29:39.657772 containerd[1478]: time="2025-01-13T21:29:39.657764981Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.btrfs\"..." type=io.containerd.snapshotter.v1
Jan 13 21:29:39.657990 containerd[1478]: time="2025-01-13T21:29:39.657963954Z" level=info msg="skip loading plugin \"io.containerd.snapshotter.v1.btrfs\"..." error="path /var/lib/containerd/io.containerd.snapshotter.v1.btrfs (ext4) must be a btrfs filesystem to be used with the btrfs snapshotter: skip plugin" type=io.containerd.snapshotter.v1
Jan 13 21:29:39.657990 containerd[1478]: time="2025-01-13T21:29:39.657984593Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.devmapper\"..." type=io.containerd.snapshotter.v1
Jan 13 21:29:39.658038 containerd[1478]: time="2025-01-13T21:29:39.657997847Z" level=info msg="skip loading plugin \"io.containerd.snapshotter.v1.devmapper\"..." error="devmapper not configured: skip plugin" type=io.containerd.snapshotter.v1
Jan 13 21:29:39.658038 containerd[1478]: time="2025-01-13T21:29:39.658008067Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.native\"..." type=io.containerd.snapshotter.v1
Jan 13 21:29:39.658113 containerd[1478]: time="2025-01-13T21:29:39.658097174Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.overlayfs\"..." type=io.containerd.snapshotter.v1
Jan 13 21:29:39.658353 containerd[1478]: time="2025-01-13T21:29:39.658327055Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.zfs\"..." type=io.containerd.snapshotter.v1
Jan 13 21:29:39.658474 containerd[1478]: time="2025-01-13T21:29:39.658451508Z" level=info msg="skip loading plugin \"io.containerd.snapshotter.v1.zfs\"..." error="path /var/lib/containerd/io.containerd.snapshotter.v1.zfs must be a zfs filesystem to be used with the zfs snapshotter: skip plugin" type=io.containerd.snapshotter.v1
Jan 13 21:29:39.658474 containerd[1478]: time="2025-01-13T21:29:39.658467669Z" level=info msg="loading plugin \"io.containerd.content.v1.content\"..." type=io.containerd.content.v1
Jan 13 21:29:39.658584 containerd[1478]: time="2025-01-13T21:29:39.658562026Z" level=info msg="loading plugin \"io.containerd.metadata.v1.bolt\"..." type=io.containerd.metadata.v1
Jan 13 21:29:39.658635 containerd[1478]: time="2025-01-13T21:29:39.658618962Z" level=info msg="metadata content store policy set" policy=shared
Jan 13 21:29:39.665262 containerd[1478]: time="2025-01-13T21:29:39.665222586Z" level=info msg="loading plugin \"io.containerd.gc.v1.scheduler\"..." type=io.containerd.gc.v1
Jan 13 21:29:39.665316 containerd[1478]: time="2025-01-13T21:29:39.665281507Z" level=info msg="loading plugin \"io.containerd.differ.v1.walking\"..." type=io.containerd.differ.v1
Jan 13 21:29:39.665316 containerd[1478]: time="2025-01-13T21:29:39.665301665Z" level=info msg="loading plugin \"io.containerd.lease.v1.manager\"..." type=io.containerd.lease.v1
Jan 13 21:29:39.665366 containerd[1478]: time="2025-01-13T21:29:39.665317034Z" level=info msg="loading plugin \"io.containerd.streaming.v1.manager\"..." type=io.containerd.streaming.v1
Jan 13 21:29:39.665366 containerd[1478]: time="2025-01-13T21:29:39.665344134Z" level=info msg="loading plugin \"io.containerd.runtime.v1.linux\"..." type=io.containerd.runtime.v1
Jan 13 21:29:39.665511 containerd[1478]: time="2025-01-13T21:29:39.665478727Z" level=info msg="loading plugin \"io.containerd.monitor.v1.cgroups\"..." type=io.containerd.monitor.v1
Jan 13 21:29:39.665785 containerd[1478]: time="2025-01-13T21:29:39.665754594Z" level=info msg="loading plugin \"io.containerd.runtime.v2.task\"..." type=io.containerd.runtime.v2
Jan 13 21:29:39.665947 containerd[1478]: time="2025-01-13T21:29:39.665931245Z" level=info msg="loading plugin \"io.containerd.runtime.v2.shim\"..." type=io.containerd.runtime.v2
Jan 13 21:29:39.665973 containerd[1478]: time="2025-01-13T21:29:39.665951183Z" level=info msg="loading plugin \"io.containerd.sandbox.store.v1.local\"..." type=io.containerd.sandbox.store.v1
Jan 13 21:29:39.665973 containerd[1478]: time="2025-01-13T21:29:39.665963616Z" level=info msg="loading plugin \"io.containerd.sandbox.controller.v1.local\"..." type=io.containerd.sandbox.controller.v1
Jan 13 21:29:39.666018 containerd[1478]: time="2025-01-13T21:29:39.665977602Z" level=info msg="loading plugin \"io.containerd.service.v1.containers-service\"..." type=io.containerd.service.v1
Jan 13 21:29:39.666018 containerd[1478]: time="2025-01-13T21:29:39.665990476Z" level=info msg="loading plugin \"io.containerd.service.v1.content-service\"..." type=io.containerd.service.v1
Jan 13 21:29:39.666018 containerd[1478]: time="2025-01-13T21:29:39.666002589Z" level=info msg="loading plugin \"io.containerd.service.v1.diff-service\"..." type=io.containerd.service.v1
Jan 13 21:29:39.666119 containerd[1478]: time="2025-01-13T21:29:39.666016776Z" level=info msg="loading plugin \"io.containerd.service.v1.images-service\"..." type=io.containerd.service.v1
Jan 13 21:29:39.666119 containerd[1478]: time="2025-01-13T21:29:39.666030361Z" level=info msg="loading plugin \"io.containerd.service.v1.introspection-service\"..." type=io.containerd.service.v1
Jan 13 21:29:39.666119 containerd[1478]: time="2025-01-13T21:29:39.666042554Z" level=info msg="loading plugin \"io.containerd.service.v1.namespaces-service\"..." type=io.containerd.service.v1
Jan 13 21:29:39.666119 containerd[1478]: time="2025-01-13T21:29:39.666058504Z" level=info msg="loading plugin \"io.containerd.service.v1.snapshots-service\"..." type=io.containerd.service.v1
Jan 13 21:29:39.666119 containerd[1478]: time="2025-01-13T21:29:39.666070176Z" level=info msg="loading plugin \"io.containerd.service.v1.tasks-service\"..." type=io.containerd.service.v1
Jan 13 21:29:39.666119 containerd[1478]: time="2025-01-13T21:29:39.666089733Z" level=info msg="loading plugin \"io.containerd.grpc.v1.containers\"..." type=io.containerd.grpc.v1
Jan 13 21:29:39.666119 containerd[1478]: time="2025-01-13T21:29:39.666101895Z" level=info msg="loading plugin \"io.containerd.grpc.v1.content\"..." type=io.containerd.grpc.v1
Jan 13 21:29:39.666119 containerd[1478]: time="2025-01-13T21:29:39.666114559Z" level=info msg="loading plugin \"io.containerd.grpc.v1.diff\"..." type=io.containerd.grpc.v1
Jan 13 21:29:39.666254 containerd[1478]: time="2025-01-13T21:29:39.666126492Z" level=info msg="loading plugin \"io.containerd.grpc.v1.events\"..." type=io.containerd.grpc.v1
Jan 13 21:29:39.666254 containerd[1478]: time="2025-01-13T21:29:39.666139716Z" level=info msg="loading plugin \"io.containerd.grpc.v1.images\"..." type=io.containerd.grpc.v1
Jan 13 21:29:39.666254 containerd[1478]: time="2025-01-13T21:29:39.666151859Z" level=info msg="loading plugin \"io.containerd.grpc.v1.introspection\"..." type=io.containerd.grpc.v1
Jan 13 21:29:39.666254 containerd[1478]: time="2025-01-13T21:29:39.666163120Z" level=info msg="loading plugin \"io.containerd.grpc.v1.leases\"..." type=io.containerd.grpc.v1
Jan 13 21:29:39.666254 containerd[1478]: time="2025-01-13T21:29:39.666175143Z" level=info msg="loading plugin \"io.containerd.grpc.v1.namespaces\"..." type=io.containerd.grpc.v1
Jan 13 21:29:39.666254 containerd[1478]: time="2025-01-13T21:29:39.666187536Z" level=info msg="loading plugin \"io.containerd.grpc.v1.sandbox-controllers\"..." type=io.containerd.grpc.v1
Jan 13 21:29:39.666254 containerd[1478]: time="2025-01-13T21:29:39.666201522Z" level=info msg="loading plugin \"io.containerd.grpc.v1.sandboxes\"..." type=io.containerd.grpc.v1
Jan 13 21:29:39.666254 containerd[1478]: time="2025-01-13T21:29:39.666212393Z" level=info msg="loading plugin \"io.containerd.grpc.v1.snapshots\"..." type=io.containerd.grpc.v1
Jan 13 21:29:39.666254 containerd[1478]: time="2025-01-13T21:29:39.666223554Z" level=info msg="loading plugin \"io.containerd.grpc.v1.streaming\"..." type=io.containerd.grpc.v1
Jan 13 21:29:39.666254 containerd[1478]: time="2025-01-13T21:29:39.666235466Z" level=info msg="loading plugin \"io.containerd.grpc.v1.tasks\"..." type=io.containerd.grpc.v1
Jan 13 21:29:39.666254 containerd[1478]: time="2025-01-13T21:29:39.666254722Z" level=info msg="loading plugin \"io.containerd.transfer.v1.local\"..." type=io.containerd.transfer.v1
Jan 13 21:29:39.666434 containerd[1478]: time="2025-01-13T21:29:39.666274429Z" level=info msg="loading plugin \"io.containerd.grpc.v1.transfer\"..." type=io.containerd.grpc.v1
Jan 13 21:29:39.666434 containerd[1478]: time="2025-01-13T21:29:39.666285991Z" level=info msg="loading plugin \"io.containerd.grpc.v1.version\"..." type=io.containerd.grpc.v1
Jan 13 21:29:39.666434 containerd[1478]: time="2025-01-13T21:29:39.666299316Z" level=info msg="loading plugin \"io.containerd.internal.v1.restart\"..." type=io.containerd.internal.v1
Jan 13 21:29:39.666577 containerd[1478]: time="2025-01-13T21:29:39.666488240Z" level=info msg="loading plugin \"io.containerd.tracing.processor.v1.otlp\"..." type=io.containerd.tracing.processor.v1
Jan 13 21:29:39.666616 containerd[1478]: time="2025-01-13T21:29:39.666592165Z" level=info msg="skip loading plugin \"io.containerd.tracing.processor.v1.otlp\"..." error="skip plugin: tracing endpoint not configured" type=io.containerd.tracing.processor.v1
Jan 13 21:29:39.666636 containerd[1478]: time="2025-01-13T21:29:39.666616731Z" level=info msg="loading plugin \"io.containerd.internal.v1.tracing\"..." type=io.containerd.internal.v1
Jan 13 21:29:39.666654 containerd[1478]: time="2025-01-13T21:29:39.666637400Z" level=info msg="skip loading plugin \"io.containerd.internal.v1.tracing\"..." error="skip plugin: tracing endpoint not configured" type=io.containerd.internal.v1
Jan 13 21:29:39.666687 containerd[1478]: time="2025-01-13T21:29:39.666651857Z" level=info msg="loading plugin \"io.containerd.grpc.v1.healthcheck\"..." type=io.containerd.grpc.v1
Jan 13 21:29:39.666714 containerd[1478]: time="2025-01-13T21:29:39.666684418Z" level=info msg="loading plugin \"io.containerd.nri.v1.nri\"..." type=io.containerd.nri.v1
Jan 13 21:29:39.666714 containerd[1478]: time="2025-01-13T21:29:39.666701790Z" level=info msg="NRI interface is disabled by configuration."
Jan 13 21:29:39.666753 containerd[1478]: time="2025-01-13T21:29:39.666717400Z" level=info msg="loading plugin \"io.containerd.grpc.v1.cri\"..." type=io.containerd.grpc.v1
Jan 13 21:29:39.667087 containerd[1478]: time="2025-01-13T21:29:39.667016891Z" level=info msg="Start cri plugin with config {PluginConfig:{ContainerdConfig:{Snapshotter:overlayfs DefaultRuntimeName:runc DefaultRuntime:{Type: Path: Engine: PodAnnotations:[] ContainerAnnotations:[] Root: Options:map[] PrivilegedWithoutHostDevices:false PrivilegedWithoutHostDevicesAllDevicesAllowed:false BaseRuntimeSpec: NetworkPluginConfDir: NetworkPluginMaxConfNum:0 Snapshotter: SandboxMode:} UntrustedWorkloadRuntime:{Type: Path: Engine: PodAnnotations:[] ContainerAnnotations:[] Root: Options:map[] PrivilegedWithoutHostDevices:false PrivilegedWithoutHostDevicesAllDevicesAllowed:false BaseRuntimeSpec: NetworkPluginConfDir: NetworkPluginMaxConfNum:0 Snapshotter: SandboxMode:} Runtimes:map[runc:{Type:io.containerd.runc.v2 Path: Engine: PodAnnotations:[] ContainerAnnotations:[] Root: Options:map[SystemdCgroup:true] PrivilegedWithoutHostDevices:false PrivilegedWithoutHostDevicesAllDevicesAllowed:false BaseRuntimeSpec: NetworkPluginConfDir: NetworkPluginMaxConfNum:0 Snapshotter: SandboxMode:podsandbox}] NoPivot:false DisableSnapshotAnnotations:true DiscardUnpackedLayers:false IgnoreBlockIONotEnabledErrors:false IgnoreRdtNotEnabledErrors:false} CniConfig:{NetworkPluginBinDir:/opt/cni/bin NetworkPluginConfDir:/etc/cni/net.d NetworkPluginMaxConfNum:1 NetworkPluginSetupSerially:false NetworkPluginConfTemplate: IPPreference:} Registry:{ConfigPath: Mirrors:map[] Configs:map[] Auths:map[] Headers:map[]} ImageDecryption:{KeyModel:node} DisableTCPService:true StreamServerAddress:127.0.0.1 StreamServerPort:0 StreamIdleTimeout:4h0m0s EnableSelinux:true SelinuxCategoryRange:1024 SandboxImage:registry.k8s.io/pause:3.8 StatsCollectPeriod:10 SystemdCgroup:false EnableTLSStreaming:false X509KeyPairStreaming:{TLSCertFile: TLSKeyFile:} MaxContainerLogLineSize:16384 DisableCgroup:false DisableApparmor:false RestrictOOMScoreAdj:false MaxConcurrentDownloads:3 DisableProcMount:false UnsetSeccompProfile: TolerateMissingHugetlbController:true DisableHugetlbController:true DeviceOwnershipFromSecurityContext:false IgnoreImageDefinedVolumes:false NetNSMountsUnderStateDir:false EnableUnprivilegedPorts:false EnableUnprivilegedICMP:false EnableCDI:false CDISpecDirs:[/etc/cdi /var/run/cdi] ImagePullProgressTimeout:5m0s DrainExecSyncIOTimeout:0s ImagePullWithSyncFs:false IgnoreDeprecationWarnings:[]} ContainerdRootDir:/var/lib/containerd ContainerdEndpoint:/run/containerd/containerd.sock RootDir:/var/lib/containerd/io.containerd.grpc.v1.cri StateDir:/run/containerd/io.containerd.grpc.v1.cri}"
Jan 13 21:29:39.667255 containerd[1478]: time="2025-01-13T21:29:39.667090600Z" level=info msg="Connect containerd service"
Jan 13 21:29:39.667255 containerd[1478]: time="2025-01-13T21:29:39.667154219Z" level=info msg="using legacy CRI server"
Jan 13 21:29:39.667255 containerd[1478]: time="2025-01-13T21:29:39.667163586Z" level=info msg="using experimental NRI integration - disable nri plugin to prevent this"
Jan 13 21:29:39.667449 containerd[1478]:
time="2025-01-13T21:29:39.667426469Z" level=info msg="Get image filesystem path \"/var/lib/containerd/io.containerd.snapshotter.v1.overlayfs\"" Jan 13 21:29:39.668113 containerd[1478]: time="2025-01-13T21:29:39.668086648Z" level=error msg="failed to load cni during init, please check CRI plugin status before setting up network for pods" error="cni config load failed: no network config found in /etc/cni/net.d: cni plugin not initialized: failed to load cni config" Jan 13 21:29:39.668254 containerd[1478]: time="2025-01-13T21:29:39.668220979Z" level=info msg="Start subscribing containerd event" Jan 13 21:29:39.668276 containerd[1478]: time="2025-01-13T21:29:39.668262177Z" level=info msg="Start recovering state" Jan 13 21:29:39.668413 containerd[1478]: time="2025-01-13T21:29:39.668393513Z" level=info msg="Start event monitor" Jan 13 21:29:39.668437 containerd[1478]: time="2025-01-13T21:29:39.668397931Z" level=info msg=serving... address=/run/containerd/containerd.sock.ttrpc Jan 13 21:29:39.668472 containerd[1478]: time="2025-01-13T21:29:39.668416606Z" level=info msg="Start snapshots syncer" Jan 13 21:29:39.668493 containerd[1478]: time="2025-01-13T21:29:39.668480847Z" level=info msg="Start cni network conf syncer for default" Jan 13 21:29:39.668513 containerd[1478]: time="2025-01-13T21:29:39.668493521Z" level=info msg="Start streaming server" Jan 13 21:29:39.668537 containerd[1478]: time="2025-01-13T21:29:39.668461060Z" level=info msg=serving... address=/run/containerd/containerd.sock Jan 13 21:29:39.668650 systemd[1]: Started containerd.service - containerd container runtime. Jan 13 21:29:39.669841 containerd[1478]: time="2025-01-13T21:29:39.668789506Z" level=info msg="containerd successfully booted in 0.035943s" Jan 13 21:29:40.651885 systemd-networkd[1403]: eth0: Gained IPv6LL Jan 13 21:29:40.654790 systemd[1]: Finished systemd-networkd-wait-online.service - Wait for Network to be Configured. 
Jan 13 21:29:40.656588 systemd[1]: Reached target network-online.target - Network is Online. Jan 13 21:29:40.669873 systemd[1]: Starting coreos-metadata.service - QEMU metadata agent... Jan 13 21:29:40.672283 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent... Jan 13 21:29:40.674380 systemd[1]: Starting nvidia.service - NVIDIA Configure Service... Jan 13 21:29:40.693377 systemd[1]: coreos-metadata.service: Deactivated successfully. Jan 13 21:29:40.693605 systemd[1]: Finished coreos-metadata.service - QEMU metadata agent. Jan 13 21:29:40.695373 systemd[1]: packet-phone-home.service - Report Success to Packet was skipped because no trigger condition checks were met. Jan 13 21:29:40.698334 systemd[1]: Finished nvidia.service - NVIDIA Configure Service. Jan 13 21:29:41.309549 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent. Jan 13 21:29:41.311312 systemd[1]: Reached target multi-user.target - Multi-User System. Jan 13 21:29:41.312883 systemd[1]: Startup finished in 979ms (kernel) + 5.022s (initrd) + 5.145s (userspace) = 11.148s. Jan 13 21:29:41.314957 (kubelet)[1558]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS, KUBELET_KUBEADM_ARGS Jan 13 21:29:41.765194 kubelet[1558]: E0113 21:29:41.765069 1558 run.go:74] "command failed" err="failed to load kubelet config file, path: /var/lib/kubelet/config.yaml, error: failed to load Kubelet config file /var/lib/kubelet/config.yaml, error failed to read kubelet config file \"/var/lib/kubelet/config.yaml\", error: open /var/lib/kubelet/config.yaml: no such file or directory" Jan 13 21:29:41.769560 systemd[1]: kubelet.service: Main process exited, code=exited, status=1/FAILURE Jan 13 21:29:41.769822 systemd[1]: kubelet.service: Failed with result 'exit-code'. Jan 13 21:29:49.103639 systemd[1]: Created slice system-sshd.slice - Slice /system/sshd. 
Jan 13 21:29:49.105109 systemd[1]: Started sshd@0-10.0.0.125:22-10.0.0.1:35914.service - OpenSSH per-connection server daemon (10.0.0.1:35914). Jan 13 21:29:49.153976 sshd[1572]: Accepted publickey for core from 10.0.0.1 port 35914 ssh2: RSA SHA256:PBaQDD+CxAT8qBP6F3GfAXEs6QYYpXiSdS98dxxqdPI Jan 13 21:29:49.156307 sshd[1572]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Jan 13 21:29:49.164891 systemd[1]: Created slice user-500.slice - User Slice of UID 500. Jan 13 21:29:49.175009 systemd[1]: Starting user-runtime-dir@500.service - User Runtime Directory /run/user/500... Jan 13 21:29:49.177061 systemd-logind[1461]: New session 1 of user core. Jan 13 21:29:49.188044 systemd[1]: Finished user-runtime-dir@500.service - User Runtime Directory /run/user/500. Jan 13 21:29:49.191092 systemd[1]: Starting user@500.service - User Manager for UID 500... Jan 13 21:29:49.200974 (systemd)[1576]: pam_unix(systemd-user:session): session opened for user core(uid=500) by (uid=0) Jan 13 21:29:49.327181 systemd[1576]: Queued start job for default target default.target. Jan 13 21:29:49.338991 systemd[1576]: Created slice app.slice - User Application Slice. Jan 13 21:29:49.339018 systemd[1576]: Reached target paths.target - Paths. Jan 13 21:29:49.339032 systemd[1576]: Reached target timers.target - Timers. Jan 13 21:29:49.340609 systemd[1576]: Starting dbus.socket - D-Bus User Message Bus Socket... Jan 13 21:29:49.351903 systemd[1576]: Listening on dbus.socket - D-Bus User Message Bus Socket. Jan 13 21:29:49.352090 systemd[1576]: Reached target sockets.target - Sockets. Jan 13 21:29:49.352118 systemd[1576]: Reached target basic.target - Basic System. Jan 13 21:29:49.352181 systemd[1576]: Reached target default.target - Main User Target. Jan 13 21:29:49.352230 systemd[1576]: Startup finished in 143ms. Jan 13 21:29:49.352485 systemd[1]: Started user@500.service - User Manager for UID 500. 
Jan 13 21:29:49.354009 systemd[1]: Started session-1.scope - Session 1 of User core. Jan 13 21:29:49.415416 systemd[1]: Started sshd@1-10.0.0.125:22-10.0.0.1:35924.service - OpenSSH per-connection server daemon (10.0.0.1:35924). Jan 13 21:29:49.452879 sshd[1587]: Accepted publickey for core from 10.0.0.1 port 35924 ssh2: RSA SHA256:PBaQDD+CxAT8qBP6F3GfAXEs6QYYpXiSdS98dxxqdPI Jan 13 21:29:49.454472 sshd[1587]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Jan 13 21:29:49.458568 systemd-logind[1461]: New session 2 of user core. Jan 13 21:29:49.470786 systemd[1]: Started session-2.scope - Session 2 of User core. Jan 13 21:29:49.525278 sshd[1587]: pam_unix(sshd:session): session closed for user core Jan 13 21:29:49.540875 systemd[1]: sshd@1-10.0.0.125:22-10.0.0.1:35924.service: Deactivated successfully. Jan 13 21:29:49.542721 systemd[1]: session-2.scope: Deactivated successfully. Jan 13 21:29:49.544370 systemd-logind[1461]: Session 2 logged out. Waiting for processes to exit. Jan 13 21:29:49.552091 systemd[1]: Started sshd@2-10.0.0.125:22-10.0.0.1:35932.service - OpenSSH per-connection server daemon (10.0.0.1:35932). Jan 13 21:29:49.553314 systemd-logind[1461]: Removed session 2. Jan 13 21:29:49.585697 sshd[1594]: Accepted publickey for core from 10.0.0.1 port 35932 ssh2: RSA SHA256:PBaQDD+CxAT8qBP6F3GfAXEs6QYYpXiSdS98dxxqdPI Jan 13 21:29:49.587500 sshd[1594]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Jan 13 21:29:49.591916 systemd-logind[1461]: New session 3 of user core. Jan 13 21:29:49.601830 systemd[1]: Started session-3.scope - Session 3 of User core. Jan 13 21:29:49.653050 sshd[1594]: pam_unix(sshd:session): session closed for user core Jan 13 21:29:49.663839 systemd[1]: sshd@2-10.0.0.125:22-10.0.0.1:35932.service: Deactivated successfully. Jan 13 21:29:49.665513 systemd[1]: session-3.scope: Deactivated successfully. Jan 13 21:29:49.667090 systemd-logind[1461]: Session 3 logged out. 
Waiting for processes to exit. Jan 13 21:29:49.673932 systemd[1]: Started sshd@3-10.0.0.125:22-10.0.0.1:35936.service - OpenSSH per-connection server daemon (10.0.0.1:35936). Jan 13 21:29:49.674880 systemd-logind[1461]: Removed session 3. Jan 13 21:29:49.706771 sshd[1601]: Accepted publickey for core from 10.0.0.1 port 35936 ssh2: RSA SHA256:PBaQDD+CxAT8qBP6F3GfAXEs6QYYpXiSdS98dxxqdPI Jan 13 21:29:49.708504 sshd[1601]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Jan 13 21:29:49.712431 systemd-logind[1461]: New session 4 of user core. Jan 13 21:29:49.726938 systemd[1]: Started session-4.scope - Session 4 of User core. Jan 13 21:29:49.783614 sshd[1601]: pam_unix(sshd:session): session closed for user core Jan 13 21:29:49.795600 systemd[1]: sshd@3-10.0.0.125:22-10.0.0.1:35936.service: Deactivated successfully. Jan 13 21:29:49.797379 systemd[1]: session-4.scope: Deactivated successfully. Jan 13 21:29:49.798742 systemd-logind[1461]: Session 4 logged out. Waiting for processes to exit. Jan 13 21:29:49.806008 systemd[1]: Started sshd@4-10.0.0.125:22-10.0.0.1:35944.service - OpenSSH per-connection server daemon (10.0.0.1:35944). Jan 13 21:29:49.807098 systemd-logind[1461]: Removed session 4. Jan 13 21:29:49.840916 sshd[1608]: Accepted publickey for core from 10.0.0.1 port 35944 ssh2: RSA SHA256:PBaQDD+CxAT8qBP6F3GfAXEs6QYYpXiSdS98dxxqdPI Jan 13 21:29:49.842760 sshd[1608]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Jan 13 21:29:49.846853 systemd-logind[1461]: New session 5 of user core. Jan 13 21:29:49.855807 systemd[1]: Started session-5.scope - Session 5 of User core. 
Jan 13 21:29:49.913185 sudo[1611]: core : PWD=/home/core ; USER=root ; COMMAND=/usr/sbin/setenforce 1 Jan 13 21:29:49.913593 sudo[1611]: pam_unix(sudo:session): session opened for user root(uid=0) by core(uid=500) Jan 13 21:29:49.931149 sudo[1611]: pam_unix(sudo:session): session closed for user root Jan 13 21:29:49.933343 sshd[1608]: pam_unix(sshd:session): session closed for user core Jan 13 21:29:49.946576 systemd[1]: sshd@4-10.0.0.125:22-10.0.0.1:35944.service: Deactivated successfully. Jan 13 21:29:49.948205 systemd[1]: session-5.scope: Deactivated successfully. Jan 13 21:29:49.949579 systemd-logind[1461]: Session 5 logged out. Waiting for processes to exit. Jan 13 21:29:49.950890 systemd[1]: Started sshd@5-10.0.0.125:22-10.0.0.1:35946.service - OpenSSH per-connection server daemon (10.0.0.1:35946). Jan 13 21:29:49.951692 systemd-logind[1461]: Removed session 5. Jan 13 21:29:49.989231 sshd[1616]: Accepted publickey for core from 10.0.0.1 port 35946 ssh2: RSA SHA256:PBaQDD+CxAT8qBP6F3GfAXEs6QYYpXiSdS98dxxqdPI Jan 13 21:29:49.990824 sshd[1616]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Jan 13 21:29:49.994385 systemd-logind[1461]: New session 6 of user core. Jan 13 21:29:50.003813 systemd[1]: Started session-6.scope - Session 6 of User core. Jan 13 21:29:50.058221 sudo[1620]: core : PWD=/home/core ; USER=root ; COMMAND=/usr/bin/rm -rf /etc/audit/rules.d/80-selinux.rules /etc/audit/rules.d/99-default.rules Jan 13 21:29:50.058554 sudo[1620]: pam_unix(sudo:session): session opened for user root(uid=0) by core(uid=500) Jan 13 21:29:50.062326 sudo[1620]: pam_unix(sudo:session): session closed for user root Jan 13 21:29:50.067995 sudo[1619]: core : PWD=/home/core ; USER=root ; COMMAND=/usr/bin/systemctl restart audit-rules Jan 13 21:29:50.068318 sudo[1619]: pam_unix(sudo:session): session opened for user root(uid=0) by core(uid=500) Jan 13 21:29:50.085875 systemd[1]: Stopping audit-rules.service - Load Security Auditing Rules... 
Jan 13 21:29:50.087484 auditctl[1623]: No rules Jan 13 21:29:50.088738 systemd[1]: audit-rules.service: Deactivated successfully. Jan 13 21:29:50.088975 systemd[1]: Stopped audit-rules.service - Load Security Auditing Rules. Jan 13 21:29:50.090538 systemd[1]: Starting audit-rules.service - Load Security Auditing Rules... Jan 13 21:29:50.118499 augenrules[1641]: No rules Jan 13 21:29:50.120134 systemd[1]: Finished audit-rules.service - Load Security Auditing Rules. Jan 13 21:29:50.121268 sudo[1619]: pam_unix(sudo:session): session closed for user root Jan 13 21:29:50.123052 sshd[1616]: pam_unix(sshd:session): session closed for user core Jan 13 21:29:50.138264 systemd[1]: sshd@5-10.0.0.125:22-10.0.0.1:35946.service: Deactivated successfully. Jan 13 21:29:50.139838 systemd[1]: session-6.scope: Deactivated successfully. Jan 13 21:29:50.141255 systemd-logind[1461]: Session 6 logged out. Waiting for processes to exit. Jan 13 21:29:50.148922 systemd[1]: Started sshd@6-10.0.0.125:22-10.0.0.1:35960.service - OpenSSH per-connection server daemon (10.0.0.1:35960). Jan 13 21:29:50.149664 systemd-logind[1461]: Removed session 6. Jan 13 21:29:50.179756 sshd[1649]: Accepted publickey for core from 10.0.0.1 port 35960 ssh2: RSA SHA256:PBaQDD+CxAT8qBP6F3GfAXEs6QYYpXiSdS98dxxqdPI Jan 13 21:29:50.181043 sshd[1649]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Jan 13 21:29:50.184340 systemd-logind[1461]: New session 7 of user core. Jan 13 21:29:50.190775 systemd[1]: Started session-7.scope - Session 7 of User core. Jan 13 21:29:50.243411 sudo[1652]: core : PWD=/home/core ; USER=root ; COMMAND=/home/core/install.sh Jan 13 21:29:50.243757 sudo[1652]: pam_unix(sudo:session): session opened for user root(uid=0) by core(uid=500) Jan 13 21:29:50.263921 systemd[1]: Starting coreos-metadata.service - QEMU metadata agent... Jan 13 21:29:50.283019 systemd[1]: coreos-metadata.service: Deactivated successfully. 
Jan 13 21:29:50.283237 systemd[1]: Finished coreos-metadata.service - QEMU metadata agent. Jan 13 21:29:50.881217 systemd[1]: Stopped kubelet.service - kubelet: The Kubernetes Node Agent. Jan 13 21:29:50.892854 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent... Jan 13 21:29:50.910443 systemd[1]: Reloading requested from client PID 1699 ('systemctl') (unit session-7.scope)... Jan 13 21:29:50.910458 systemd[1]: Reloading... Jan 13 21:29:51.006701 zram_generator::config[1740]: No configuration found. Jan 13 21:29:51.802829 systemd[1]: /usr/lib/systemd/system/docker.socket:6: ListenStream= references a path below legacy directory /var/run/, updating /var/run/docker.sock → /run/docker.sock; please update the unit file accordingly. Jan 13 21:29:51.901235 systemd[1]: Reloading finished in 990 ms. Jan 13 21:29:51.959632 systemd[1]: kubelet.service: Control process exited, code=killed, status=15/TERM Jan 13 21:29:51.959763 systemd[1]: kubelet.service: Failed with result 'signal'. Jan 13 21:29:51.960087 systemd[1]: Stopped kubelet.service - kubelet: The Kubernetes Node Agent. Jan 13 21:29:51.961893 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent... Jan 13 21:29:52.111265 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent. Jan 13 21:29:52.116115 (kubelet)[1786]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS Jan 13 21:29:52.155437 kubelet[1786]: Flag --container-runtime-endpoint has been deprecated, This parameter should be set via the config file specified by the Kubelet's --config flag. See https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/ for more information. Jan 13 21:29:52.155437 kubelet[1786]: Flag --pod-infra-container-image has been deprecated, will be removed in a future release. Image garbage collector will get sandbox image information from CRI. 
Jan 13 21:29:52.155437 kubelet[1786]: Flag --volume-plugin-dir has been deprecated, This parameter should be set via the config file specified by the Kubelet's --config flag. See https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/ for more information. Jan 13 21:29:52.156415 kubelet[1786]: I0113 21:29:52.156378 1786 server.go:205] "--pod-infra-container-image will not be pruned by the image garbage collector in kubelet and should also be set in the remote runtime" Jan 13 21:29:52.748757 kubelet[1786]: I0113 21:29:52.748316 1786 server.go:484] "Kubelet version" kubeletVersion="v1.30.1" Jan 13 21:29:52.748757 kubelet[1786]: I0113 21:29:52.748362 1786 server.go:486] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK="" Jan 13 21:29:52.749218 kubelet[1786]: I0113 21:29:52.749191 1786 server.go:927] "Client rotation is on, will bootstrap in background" Jan 13 21:29:52.765242 kubelet[1786]: I0113 21:29:52.765023 1786 dynamic_cafile_content.go:157] "Starting controller" name="client-ca-bundle::/etc/kubernetes/pki/ca.crt" Jan 13 21:29:52.779402 kubelet[1786]: I0113 21:29:52.779356 1786 server.go:742] "--cgroups-per-qos enabled, but --cgroup-root was not specified. 
defaulting to /" Jan 13 21:29:52.781239 kubelet[1786]: I0113 21:29:52.781197 1786 container_manager_linux.go:265] "Container manager verified user specified cgroup-root exists" cgroupRoot=[] Jan 13 21:29:52.781411 kubelet[1786]: I0113 21:29:52.781233 1786 container_manager_linux.go:270] "Creating Container Manager object based on Node Config" nodeConfig={"NodeName":"10.0.0.125","RuntimeCgroupsName":"","SystemCgroupsName":"","KubeletCgroupsName":"","KubeletOOMScoreAdj":-999,"ContainerRuntime":"","CgroupsPerQOS":true,"CgroupRoot":"/","CgroupDriver":"systemd","KubeletRootDir":"/var/lib/kubelet","ProtectKernelDefaults":false,"KubeReservedCgroupName":"","SystemReservedCgroupName":"","ReservedSystemCPUs":{},"EnforceNodeAllocatable":{"pods":{}},"KubeReserved":null,"SystemReserved":null,"HardEvictionThresholds":[{"Signal":"memory.available","Operator":"LessThan","Value":{"Quantity":"100Mi","Percentage":0},"GracePeriod":0,"MinReclaim":null},{"Signal":"nodefs.available","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.1},"GracePeriod":0,"MinReclaim":null},{"Signal":"nodefs.inodesFree","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.05},"GracePeriod":0,"MinReclaim":null},{"Signal":"imagefs.available","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.15},"GracePeriod":0,"MinReclaim":null},{"Signal":"imagefs.inodesFree","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.05},"GracePeriod":0,"MinReclaim":null}],"QOSReserved":{},"CPUManagerPolicy":"none","CPUManagerPolicyOptions":null,"TopologyManagerScope":"container","CPUManagerReconcilePeriod":10000000000,"ExperimentalMemoryManagerPolicy":"None","ExperimentalMemoryManagerReservedMemory":null,"PodPidsLimit":-1,"EnforceCPULimits":true,"CPUCFSQuotaPeriod":100000000,"TopologyManagerPolicy":"none","TopologyManagerPolicyOptions":null} Jan 13 21:29:52.781516 kubelet[1786]: I0113 21:29:52.781424 1786 topology_manager.go:138] "Creating topology manager with none policy" Jan 13 
21:29:52.781516 kubelet[1786]: I0113 21:29:52.781433 1786 container_manager_linux.go:301] "Creating device plugin manager" Jan 13 21:29:52.781604 kubelet[1786]: I0113 21:29:52.781582 1786 state_mem.go:36] "Initialized new in-memory state store" Jan 13 21:29:52.782281 kubelet[1786]: I0113 21:29:52.782254 1786 kubelet.go:400] "Attempting to sync node with API server" Jan 13 21:29:52.782281 kubelet[1786]: I0113 21:29:52.782277 1786 kubelet.go:301] "Adding static pod path" path="/etc/kubernetes/manifests" Jan 13 21:29:52.782338 kubelet[1786]: I0113 21:29:52.782304 1786 kubelet.go:312] "Adding apiserver pod source" Jan 13 21:29:52.782338 kubelet[1786]: I0113 21:29:52.782327 1786 apiserver.go:42] "Waiting for node sync before watching apiserver pods" Jan 13 21:29:52.782596 kubelet[1786]: E0113 21:29:52.782440 1786 file.go:98] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Jan 13 21:29:52.782596 kubelet[1786]: E0113 21:29:52.782513 1786 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Jan 13 21:29:52.787306 kubelet[1786]: I0113 21:29:52.787270 1786 kuberuntime_manager.go:261] "Container runtime initialized" containerRuntime="containerd" version="v1.7.21" apiVersion="v1" Jan 13 21:29:52.788532 kubelet[1786]: W0113 21:29:52.788484 1786 reflector.go:547] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Node: nodes "10.0.0.125" is forbidden: User "system:anonymous" cannot list resource "nodes" in API group "" at the cluster scope Jan 13 21:29:52.788532 kubelet[1786]: E0113 21:29:52.788529 1786 reflector.go:150] k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Node: failed to list *v1.Node: nodes "10.0.0.125" is forbidden: User "system:anonymous" cannot list resource "nodes" in API group "" at the cluster scope Jan 13 21:29:52.788777 kubelet[1786]: W0113 21:29:52.788714 1786 reflector.go:547] k8s.io/client-go/informers/factory.go:160: 
failed to list *v1.Service: services is forbidden: User "system:anonymous" cannot list resource "services" in API group "" at the cluster scope Jan 13 21:29:52.788777 kubelet[1786]: E0113 21:29:52.788733 1786 reflector.go:150] k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Service: failed to list *v1.Service: services is forbidden: User "system:anonymous" cannot list resource "services" in API group "" at the cluster scope Jan 13 21:29:52.788888 kubelet[1786]: I0113 21:29:52.788870 1786 kubelet.go:815] "Not starting ClusterTrustBundle informer because we are in static kubelet mode" Jan 13 21:29:52.788963 kubelet[1786]: W0113 21:29:52.788940 1786 probe.go:272] Flexvolume plugin directory at /opt/libexec/kubernetes/kubelet-plugins/volume/exec/ does not exist. Recreating. Jan 13 21:29:52.789706 kubelet[1786]: I0113 21:29:52.789688 1786 server.go:1264] "Started kubelet" Jan 13 21:29:52.790200 kubelet[1786]: I0113 21:29:52.790153 1786 ratelimit.go:55] "Setting rate limiting for endpoint" service="podresources" qps=100 burstTokens=10 Jan 13 21:29:52.791275 kubelet[1786]: I0113 21:29:52.791022 1786 server.go:227] "Starting to serve the podresources API" endpoint="unix:/var/lib/kubelet/pod-resources/kubelet.sock" Jan 13 21:29:52.791275 kubelet[1786]: I0113 21:29:52.791080 1786 server.go:163] "Starting to listen" address="0.0.0.0" port=10250 Jan 13 21:29:52.791275 kubelet[1786]: I0113 21:29:52.791242 1786 fs_resource_analyzer.go:67] "Starting FS ResourceAnalyzer" Jan 13 21:29:52.792116 kubelet[1786]: I0113 21:29:52.792078 1786 server.go:455] "Adding debug handlers to kubelet server" Jan 13 21:29:52.799694 kubelet[1786]: E0113 21:29:52.798388 1786 kubelet_node_status.go:462] "Error getting the current node from lister" err="node \"10.0.0.125\" not found" Jan 13 21:29:52.799694 kubelet[1786]: I0113 21:29:52.798545 1786 volume_manager.go:291] "Starting Kubelet Volume Manager" Jan 13 21:29:52.799694 kubelet[1786]: I0113 21:29:52.798814 1786 
desired_state_of_world_populator.go:149] "Desired state populator starts to run" Jan 13 21:29:52.799694 kubelet[1786]: I0113 21:29:52.798946 1786 reconciler.go:26] "Reconciler: start to sync state" Jan 13 21:29:52.799694 kubelet[1786]: E0113 21:29:52.799240 1786 event.go:359] "Server rejected event (will not retry!)" err="events is forbidden: User \"system:anonymous\" cannot create resource \"events\" in API group \"\" in the namespace \"default\"" event="&Event{ObjectMeta:{10.0.0.125.181a5dd8f9c7410e default 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Node,Namespace:,Name:10.0.0.125,UID:10.0.0.125,APIVersion:,ResourceVersion:,FieldPath:,},Reason:Starting,Message:Starting kubelet.,Source:EventSource{Component:kubelet,Host:10.0.0.125,},FirstTimestamp:2025-01-13 21:29:52.789651726 +0000 UTC m=+0.669600802,LastTimestamp:2025-01-13 21:29:52.789651726 +0000 UTC m=+0.669600802,Count:1,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:10.0.0.125,}" Jan 13 21:29:52.800975 kubelet[1786]: I0113 21:29:52.800944 1786 factory.go:221] Registration of the systemd container factory successfully Jan 13 21:29:52.801096 kubelet[1786]: I0113 21:29:52.801065 1786 factory.go:219] Registration of the crio container factory failed: Get "http://%2Fvar%2Frun%2Fcrio%2Fcrio.sock/info": dial unix /var/run/crio/crio.sock: connect: no such file or directory Jan 13 21:29:52.802688 kubelet[1786]: I0113 21:29:52.802634 1786 factory.go:221] Registration of the containerd container factory successfully Jan 13 21:29:52.806686 kubelet[1786]: E0113 21:29:52.805529 1786 controller.go:145] "Failed to ensure lease exists, will retry" err="leases.coordination.k8s.io \"10.0.0.125\" is forbidden: User \"system:anonymous\" cannot get resource \"leases\" in API group \"coordination.k8s.io\" in the namespace \"kube-node-lease\"" interval="200ms" Jan 13 21:29:52.806686 kubelet[1786]: W0113 
21:29:52.805630 1786 reflector.go:547] k8s.io/client-go/informers/factory.go:160: failed to list *v1.CSIDriver: csidrivers.storage.k8s.io is forbidden: User "system:anonymous" cannot list resource "csidrivers" in API group "storage.k8s.io" at the cluster scope Jan 13 21:29:52.806686 kubelet[1786]: E0113 21:29:52.805657 1786 reflector.go:150] k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.CSIDriver: failed to list *v1.CSIDriver: csidrivers.storage.k8s.io is forbidden: User "system:anonymous" cannot list resource "csidrivers" in API group "storage.k8s.io" at the cluster scope Jan 13 21:29:52.807825 kubelet[1786]: E0113 21:29:52.807785 1786 kubelet.go:1467] "Image garbage collection failed once. Stats initialization may not have completed yet" err="invalid capacity 0 on image filesystem" Jan 13 21:29:52.808919 kubelet[1786]: E0113 21:29:52.808780 1786 event.go:359] "Server rejected event (will not retry!)" err="events is forbidden: User \"system:anonymous\" cannot create resource \"events\" in API group \"\" in the namespace \"default\"" event="&Event{ObjectMeta:{10.0.0.125.181a5dd8fadb9e06 default 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Node,Namespace:,Name:10.0.0.125,UID:10.0.0.125,APIVersion:,ResourceVersion:,FieldPath:,},Reason:InvalidDiskCapacity,Message:invalid capacity 0 on image filesystem,Source:EventSource{Component:kubelet,Host:10.0.0.125,},FirstTimestamp:2025-01-13 21:29:52.807763462 +0000 UTC m=+0.687712558,LastTimestamp:2025-01-13 21:29:52.807763462 +0000 UTC m=+0.687712558,Count:1,Type:Warning,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:10.0.0.125,}" Jan 13 21:29:52.817520 kubelet[1786]: I0113 21:29:52.817281 1786 cpu_manager.go:214] "Starting CPU manager" policy="none" Jan 13 21:29:52.817520 kubelet[1786]: I0113 21:29:52.817302 1786 cpu_manager.go:215] "Reconciling" reconcilePeriod="10s" Jan 13 21:29:52.817520 
kubelet[1786]: I0113 21:29:52.817327 1786 state_mem.go:36] "Initialized new in-memory state store" Jan 13 21:29:52.822111 kubelet[1786]: E0113 21:29:52.822003 1786 event.go:359] "Server rejected event (will not retry!)" err="events is forbidden: User \"system:anonymous\" cannot create resource \"events\" in API group \"\" in the namespace \"default\"" event="&Event{ObjectMeta:{10.0.0.125.181a5dd8fb5d273c default 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Node,Namespace:,Name:10.0.0.125,UID:10.0.0.125,APIVersion:,ResourceVersion:,FieldPath:,},Reason:NodeHasSufficientMemory,Message:Node 10.0.0.125 status is now: NodeHasSufficientMemory,Source:EventSource{Component:kubelet,Host:10.0.0.125,},FirstTimestamp:2025-01-13 21:29:52.816252732 +0000 UTC m=+0.696201808,LastTimestamp:2025-01-13 21:29:52.816252732 +0000 UTC m=+0.696201808,Count:1,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:10.0.0.125,}" Jan 13 21:29:52.826110 kubelet[1786]: E0113 21:29:52.826019 1786 event.go:359] "Server rejected event (will not retry!)" err="events is forbidden: User \"system:anonymous\" cannot create resource \"events\" in API group \"\" in the namespace \"default\"" event="&Event{ObjectMeta:{10.0.0.125.181a5dd8fb5d3ab0 default 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Node,Namespace:,Name:10.0.0.125,UID:10.0.0.125,APIVersion:,ResourceVersion:,FieldPath:,},Reason:NodeHasNoDiskPressure,Message:Node 10.0.0.125 status is now: NodeHasNoDiskPressure,Source:EventSource{Component:kubelet,Host:10.0.0.125,},FirstTimestamp:2025-01-13 21:29:52.816257712 +0000 UTC m=+0.696206788,LastTimestamp:2025-01-13 21:29:52.816257712 +0000 UTC m=+0.696206788,Count:1,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:10.0.0.125,}" Jan 13 21:29:52.829751 
kubelet[1786]: E0113 21:29:52.829643 1786 event.go:359] "Server rejected event (will not retry!)" err="events is forbidden: User \"system:anonymous\" cannot create resource \"events\" in API group \"\" in the namespace \"default\"" event="&Event{ObjectMeta:{10.0.0.125.181a5dd8fb5d4b8e default 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Node,Namespace:,Name:10.0.0.125,UID:10.0.0.125,APIVersion:,ResourceVersion:,FieldPath:,},Reason:NodeHasSufficientPID,Message:Node 10.0.0.125 status is now: NodeHasSufficientPID,Source:EventSource{Component:kubelet,Host:10.0.0.125,},FirstTimestamp:2025-01-13 21:29:52.81626203 +0000 UTC m=+0.696211106,LastTimestamp:2025-01-13 21:29:52.81626203 +0000 UTC m=+0.696211106,Count:1,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:10.0.0.125,}" Jan 13 21:29:52.900069 kubelet[1786]: I0113 21:29:52.900023 1786 kubelet_node_status.go:73] "Attempting to register node" node="10.0.0.125" Jan 13 21:29:52.904103 kubelet[1786]: E0113 21:29:52.904049 1786 kubelet_node_status.go:96] "Unable to register node with API server" err="nodes is forbidden: User \"system:anonymous\" cannot create resource \"nodes\" in API group \"\" at the cluster scope" node="10.0.0.125" Jan 13 21:29:52.904227 kubelet[1786]: E0113 21:29:52.904086 1786 event.go:359] "Server rejected event (will not retry!)" err="events \"10.0.0.125.181a5dd8fb5d273c\" is forbidden: User \"system:anonymous\" cannot patch resource \"events\" in API group \"\" in the namespace \"default\"" event="&Event{ObjectMeta:{10.0.0.125.181a5dd8fb5d273c default 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Node,Namespace:,Name:10.0.0.125,UID:10.0.0.125,APIVersion:,ResourceVersion:,FieldPath:,},Reason:NodeHasSufficientMemory,Message:Node 10.0.0.125 status is now: 
NodeHasSufficientMemory,Source:EventSource{Component:kubelet,Host:10.0.0.125,},FirstTimestamp:2025-01-13 21:29:52.816252732 +0000 UTC m=+0.696201808,LastTimestamp:2025-01-13 21:29:52.899975613 +0000 UTC m=+0.779924689,Count:2,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:10.0.0.125,}" Jan 13 21:29:52.907138 kubelet[1786]: E0113 21:29:52.907059 1786 event.go:359] "Server rejected event (will not retry!)" err="events \"10.0.0.125.181a5dd8fb5d3ab0\" is forbidden: User \"system:anonymous\" cannot patch resource \"events\" in API group \"\" in the namespace \"default\"" event="&Event{ObjectMeta:{10.0.0.125.181a5dd8fb5d3ab0 default 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Node,Namespace:,Name:10.0.0.125,UID:10.0.0.125,APIVersion:,ResourceVersion:,FieldPath:,},Reason:NodeHasNoDiskPressure,Message:Node 10.0.0.125 status is now: NodeHasNoDiskPressure,Source:EventSource{Component:kubelet,Host:10.0.0.125,},FirstTimestamp:2025-01-13 21:29:52.816257712 +0000 UTC m=+0.696206788,LastTimestamp:2025-01-13 21:29:52.899984289 +0000 UTC m=+0.779933365,Count:2,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:10.0.0.125,}" Jan 13 21:29:52.910585 kubelet[1786]: E0113 21:29:52.910463 1786 event.go:359] "Server rejected event (will not retry!)" err="events \"10.0.0.125.181a5dd8fb5d4b8e\" is forbidden: User \"system:anonymous\" cannot patch resource \"events\" in API group \"\" in the namespace \"default\"" event="&Event{ObjectMeta:{10.0.0.125.181a5dd8fb5d4b8e default 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Node,Namespace:,Name:10.0.0.125,UID:10.0.0.125,APIVersion:,ResourceVersion:,FieldPath:,},Reason:NodeHasSufficientPID,Message:Node 10.0.0.125 status is now: 
NodeHasSufficientPID,Source:EventSource{Component:kubelet,Host:10.0.0.125,},FirstTimestamp:2025-01-13 21:29:52.81626203 +0000 UTC m=+0.696211106,LastTimestamp:2025-01-13 21:29:52.899986653 +0000 UTC m=+0.779935729,Count:2,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:10.0.0.125,}" Jan 13 21:29:53.036980 kubelet[1786]: E0113 21:29:53.036818 1786 nodelease.go:49] "Failed to get node when trying to set owner ref to the node lease" err="nodes \"10.0.0.125\" not found" node="10.0.0.125" Jan 13 21:29:53.105246 kubelet[1786]: I0113 21:29:53.105214 1786 kubelet_node_status.go:73] "Attempting to register node" node="10.0.0.125" Jan 13 21:29:53.330349 kubelet[1786]: I0113 21:29:53.330292 1786 policy_none.go:49] "None policy: Start" Jan 13 21:29:53.330951 kubelet[1786]: I0113 21:29:53.330920 1786 kubelet_node_status.go:76] "Successfully registered node" node="10.0.0.125" Jan 13 21:29:53.331505 kubelet[1786]: I0113 21:29:53.331252 1786 memory_manager.go:170] "Starting memorymanager" policy="None" Jan 13 21:29:53.331505 kubelet[1786]: I0113 21:29:53.331327 1786 state_mem.go:35] "Initializing new in-memory state store" Jan 13 21:29:53.332806 kubelet[1786]: I0113 21:29:53.332778 1786 kuberuntime_manager.go:1523] "Updating runtime config through cri with podcidr" CIDR="192.168.1.0/24" Jan 13 21:29:53.333153 containerd[1478]: time="2025-01-13T21:29:53.333100817Z" level=info msg="No cni config template is specified, wait for other system components to drop the config." Jan 13 21:29:53.333531 kubelet[1786]: I0113 21:29:53.333306 1786 kubelet_network.go:61] "Updating Pod CIDR" originalPodCIDR="" newPodCIDR="192.168.1.0/24" Jan 13 21:29:53.339455 systemd[1]: Created slice kubepods.slice - libcontainer container kubepods.slice. 
Jan 13 21:29:53.341740 sudo[1652]: pam_unix(sudo:session): session closed for user root Jan 13 21:29:53.343573 sshd[1649]: pam_unix(sshd:session): session closed for user core Jan 13 21:29:53.346982 kubelet[1786]: E0113 21:29:53.346947 1786 kubelet_node_status.go:462] "Error getting the current node from lister" err="node \"10.0.0.125\" not found" Jan 13 21:29:53.347619 systemd[1]: sshd@6-10.0.0.125:22-10.0.0.1:35960.service: Deactivated successfully. Jan 13 21:29:53.350112 systemd[1]: session-7.scope: Deactivated successfully. Jan 13 21:29:53.351108 systemd-logind[1461]: Session 7 logged out. Waiting for processes to exit. Jan 13 21:29:53.354774 systemd[1]: Created slice kubepods-burstable.slice - libcontainer container kubepods-burstable.slice. Jan 13 21:29:53.355435 systemd-logind[1461]: Removed session 7. Jan 13 21:29:53.358994 systemd[1]: Created slice kubepods-besteffort.slice - libcontainer container kubepods-besteffort.slice. Jan 13 21:29:53.360868 kubelet[1786]: I0113 21:29:53.360824 1786 kubelet_network_linux.go:50] "Initialized iptables rules." protocol="IPv4" Jan 13 21:29:53.362356 kubelet[1786]: I0113 21:29:53.362325 1786 kubelet_network_linux.go:50] "Initialized iptables rules." 
protocol="IPv6" Jan 13 21:29:53.362356 kubelet[1786]: I0113 21:29:53.362355 1786 status_manager.go:217] "Starting to sync pod status with apiserver" Jan 13 21:29:53.362453 kubelet[1786]: I0113 21:29:53.362377 1786 kubelet.go:2337] "Starting kubelet main sync loop" Jan 13 21:29:53.362453 kubelet[1786]: E0113 21:29:53.362431 1786 kubelet.go:2361] "Skipping pod synchronization" err="[container runtime status check may not have completed yet, PLEG is not healthy: pleg has yet to be successful]" Jan 13 21:29:53.366852 kubelet[1786]: I0113 21:29:53.366812 1786 manager.go:479] "Failed to read data from checkpoint" checkpoint="kubelet_internal_checkpoint" err="checkpoint is not found" Jan 13 21:29:53.367223 kubelet[1786]: I0113 21:29:53.367050 1786 container_log_manager.go:186] "Initializing container log rotate workers" workers=1 monitorPeriod="10s" Jan 13 21:29:53.367264 kubelet[1786]: I0113 21:29:53.367233 1786 plugin_manager.go:118] "Starting Kubelet Plugin Manager" Jan 13 21:29:53.368385 kubelet[1786]: E0113 21:29:53.368356 1786 eviction_manager.go:282] "Eviction manager: failed to get summary stats" err="failed to get node info: node \"10.0.0.125\" not found" Jan 13 21:29:53.447703 kubelet[1786]: E0113 21:29:53.447608 1786 kubelet_node_status.go:462] "Error getting the current node from lister" err="node \"10.0.0.125\" not found" Jan 13 21:29:53.548592 kubelet[1786]: E0113 21:29:53.548535 1786 kubelet_node_status.go:462] "Error getting the current node from lister" err="node \"10.0.0.125\" not found" Jan 13 21:29:53.649335 kubelet[1786]: E0113 21:29:53.649199 1786 kubelet_node_status.go:462] "Error getting the current node from lister" err="node \"10.0.0.125\" not found" Jan 13 21:29:53.751055 kubelet[1786]: I0113 21:29:53.751006 1786 transport.go:147] "Certificate rotation detected, shutting down client connections to start using new credentials" Jan 13 21:29:53.751202 kubelet[1786]: W0113 21:29:53.751176 1786 reflector.go:470] 
k8s.io/client-go/informers/factory.go:160: watch of *v1.Node ended with: very short watch: k8s.io/client-go/informers/factory.go:160: Unexpected watch close - watch lasted less than a second and no items received Jan 13 21:29:53.751202 kubelet[1786]: W0113 21:29:53.751194 1786 reflector.go:470] k8s.io/client-go/informers/factory.go:160: watch of *v1.RuntimeClass ended with: very short watch: k8s.io/client-go/informers/factory.go:160: Unexpected watch close - watch lasted less than a second and no items received Jan 13 21:29:53.782698 kubelet[1786]: E0113 21:29:53.782613 1786 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Jan 13 21:29:53.782698 kubelet[1786]: I0113 21:29:53.782660 1786 apiserver.go:52] "Watching apiserver" Jan 13 21:29:53.785879 kubelet[1786]: I0113 21:29:53.785845 1786 topology_manager.go:215] "Topology Admit Handler" podUID="866f611f-1768-48eb-a581-aac909eeb174" podNamespace="kube-system" podName="cilium-jrg78" Jan 13 21:29:53.786010 kubelet[1786]: I0113 21:29:53.785990 1786 topology_manager.go:215] "Topology Admit Handler" podUID="4df07308-3169-4c77-8f1e-cfb23c8fd1fd" podNamespace="kube-system" podName="kube-proxy-27h4h" Jan 13 21:29:53.796251 systemd[1]: Created slice kubepods-burstable-pod866f611f_1768_48eb_a581_aac909eeb174.slice - libcontainer container kubepods-burstable-pod866f611f_1768_48eb_a581_aac909eeb174.slice. 
Jan 13 21:29:53.799703 kubelet[1786]: I0113 21:29:53.799659 1786 desired_state_of_world_populator.go:157] "Finished populating initial desired state of world" Jan 13 21:29:53.805760 kubelet[1786]: I0113 21:29:53.805719 1786 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"hostproc\" (UniqueName: \"kubernetes.io/host-path/866f611f-1768-48eb-a581-aac909eeb174-hostproc\") pod \"cilium-jrg78\" (UID: \"866f611f-1768-48eb-a581-aac909eeb174\") " pod="kube-system/cilium-jrg78" Jan 13 21:29:53.805799 kubelet[1786]: I0113 21:29:53.805762 1786 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cilium-cgroup\" (UniqueName: \"kubernetes.io/host-path/866f611f-1768-48eb-a581-aac909eeb174-cilium-cgroup\") pod \"cilium-jrg78\" (UID: \"866f611f-1768-48eb-a581-aac909eeb174\") " pod="kube-system/cilium-jrg78" Jan 13 21:29:53.805799 kubelet[1786]: I0113 21:29:53.805782 1786 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-proc-sys-net\" (UniqueName: \"kubernetes.io/host-path/866f611f-1768-48eb-a581-aac909eeb174-host-proc-sys-net\") pod \"cilium-jrg78\" (UID: \"866f611f-1768-48eb-a581-aac909eeb174\") " pod="kube-system/cilium-jrg78" Jan 13 21:29:53.805867 kubelet[1786]: I0113 21:29:53.805799 1786 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-proc-sys-kernel\" (UniqueName: \"kubernetes.io/host-path/866f611f-1768-48eb-a581-aac909eeb174-host-proc-sys-kernel\") pod \"cilium-jrg78\" (UID: \"866f611f-1768-48eb-a581-aac909eeb174\") " pod="kube-system/cilium-jrg78" Jan 13 21:29:53.805867 kubelet[1786]: I0113 21:29:53.805850 1786 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"hubble-tls\" (UniqueName: \"kubernetes.io/projected/866f611f-1768-48eb-a581-aac909eeb174-hubble-tls\") pod \"cilium-jrg78\" (UID: \"866f611f-1768-48eb-a581-aac909eeb174\") " 
pod="kube-system/cilium-jrg78" Jan 13 21:29:53.805934 kubelet[1786]: I0113 21:29:53.805915 1786 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-hg8c4\" (UniqueName: \"kubernetes.io/projected/4df07308-3169-4c77-8f1e-cfb23c8fd1fd-kube-api-access-hg8c4\") pod \"kube-proxy-27h4h\" (UID: \"4df07308-3169-4c77-8f1e-cfb23c8fd1fd\") " pod="kube-system/kube-proxy-27h4h" Jan 13 21:29:53.805961 kubelet[1786]: I0113 21:29:53.805950 1786 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cilium-run\" (UniqueName: \"kubernetes.io/host-path/866f611f-1768-48eb-a581-aac909eeb174-cilium-run\") pod \"cilium-jrg78\" (UID: \"866f611f-1768-48eb-a581-aac909eeb174\") " pod="kube-system/cilium-jrg78" Jan 13 21:29:53.806000 kubelet[1786]: I0113 21:29:53.805971 1786 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cni-path\" (UniqueName: \"kubernetes.io/host-path/866f611f-1768-48eb-a581-aac909eeb174-cni-path\") pod \"cilium-jrg78\" (UID: \"866f611f-1768-48eb-a581-aac909eeb174\") " pod="kube-system/cilium-jrg78" Jan 13 21:29:53.806000 kubelet[1786]: I0113 21:29:53.805992 1786 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"etc-cni-netd\" (UniqueName: \"kubernetes.io/host-path/866f611f-1768-48eb-a581-aac909eeb174-etc-cni-netd\") pod \"cilium-jrg78\" (UID: \"866f611f-1768-48eb-a581-aac909eeb174\") " pod="kube-system/cilium-jrg78" Jan 13 21:29:53.806054 kubelet[1786]: I0113 21:29:53.806013 1786 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cilium-config-path\" (UniqueName: \"kubernetes.io/configmap/866f611f-1768-48eb-a581-aac909eeb174-cilium-config-path\") pod \"cilium-jrg78\" (UID: \"866f611f-1768-48eb-a581-aac909eeb174\") " pod="kube-system/cilium-jrg78" Jan 13 21:29:53.806054 kubelet[1786]: I0113 21:29:53.806032 1786 
reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/866f611f-1768-48eb-a581-aac909eeb174-lib-modules\") pod \"cilium-jrg78\" (UID: \"866f611f-1768-48eb-a581-aac909eeb174\") " pod="kube-system/cilium-jrg78" Jan 13 21:29:53.806106 kubelet[1786]: I0113 21:29:53.806058 1786 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"xtables-lock\" (UniqueName: \"kubernetes.io/host-path/866f611f-1768-48eb-a581-aac909eeb174-xtables-lock\") pod \"cilium-jrg78\" (UID: \"866f611f-1768-48eb-a581-aac909eeb174\") " pod="kube-system/cilium-jrg78" Jan 13 21:29:53.806106 kubelet[1786]: I0113 21:29:53.806080 1786 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-nbrdc\" (UniqueName: \"kubernetes.io/projected/866f611f-1768-48eb-a581-aac909eeb174-kube-api-access-nbrdc\") pod \"cilium-jrg78\" (UID: \"866f611f-1768-48eb-a581-aac909eeb174\") " pod="kube-system/cilium-jrg78" Jan 13 21:29:53.806106 kubelet[1786]: I0113 21:29:53.806100 1786 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"xtables-lock\" (UniqueName: \"kubernetes.io/host-path/4df07308-3169-4c77-8f1e-cfb23c8fd1fd-xtables-lock\") pod \"kube-proxy-27h4h\" (UID: \"4df07308-3169-4c77-8f1e-cfb23c8fd1fd\") " pod="kube-system/kube-proxy-27h4h" Jan 13 21:29:53.806187 kubelet[1786]: I0113 21:29:53.806120 1786 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"bpf-maps\" (UniqueName: \"kubernetes.io/host-path/866f611f-1768-48eb-a581-aac909eeb174-bpf-maps\") pod \"cilium-jrg78\" (UID: \"866f611f-1768-48eb-a581-aac909eeb174\") " pod="kube-system/cilium-jrg78" Jan 13 21:29:53.806187 kubelet[1786]: I0113 21:29:53.806149 1786 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"clustermesh-secrets\" 
(UniqueName: \"kubernetes.io/secret/866f611f-1768-48eb-a581-aac909eeb174-clustermesh-secrets\") pod \"cilium-jrg78\" (UID: \"866f611f-1768-48eb-a581-aac909eeb174\") " pod="kube-system/cilium-jrg78" Jan 13 21:29:53.806187 kubelet[1786]: I0113 21:29:53.806184 1786 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-proxy\" (UniqueName: \"kubernetes.io/configmap/4df07308-3169-4c77-8f1e-cfb23c8fd1fd-kube-proxy\") pod \"kube-proxy-27h4h\" (UID: \"4df07308-3169-4c77-8f1e-cfb23c8fd1fd\") " pod="kube-system/kube-proxy-27h4h" Jan 13 21:29:53.806276 kubelet[1786]: I0113 21:29:53.806204 1786 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/4df07308-3169-4c77-8f1e-cfb23c8fd1fd-lib-modules\") pod \"kube-proxy-27h4h\" (UID: \"4df07308-3169-4c77-8f1e-cfb23c8fd1fd\") " pod="kube-system/kube-proxy-27h4h" Jan 13 21:29:53.820323 systemd[1]: Created slice kubepods-besteffort-pod4df07308_3169_4c77_8f1e_cfb23c8fd1fd.slice - libcontainer container kubepods-besteffort-pod4df07308_3169_4c77_8f1e_cfb23c8fd1fd.slice. 
Jan 13 21:29:54.118879 kubelet[1786]: E0113 21:29:54.118828 1786 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jan 13 21:29:54.119586 containerd[1478]: time="2025-01-13T21:29:54.119547696Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:cilium-jrg78,Uid:866f611f-1768-48eb-a581-aac909eeb174,Namespace:kube-system,Attempt:0,}" Jan 13 21:29:54.132871 kubelet[1786]: E0113 21:29:54.132755 1786 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jan 13 21:29:54.133390 containerd[1478]: time="2025-01-13T21:29:54.133346143Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-proxy-27h4h,Uid:4df07308-3169-4c77-8f1e-cfb23c8fd1fd,Namespace:kube-system,Attempt:0,}" Jan 13 21:29:54.694284 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount1667632419.mount: Deactivated successfully. 
Jan 13 21:29:54.702823 containerd[1478]: time="2025-01-13T21:29:54.702773131Z" level=info msg="ImageCreate event name:\"registry.k8s.io/pause:3.8\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"} labels:{key:\"io.cri-containerd.pinned\" value:\"pinned\"}" Jan 13 21:29:54.703841 containerd[1478]: time="2025-01-13T21:29:54.703791561Z" level=info msg="ImageUpdate event name:\"registry.k8s.io/pause:3.8\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"} labels:{key:\"io.cri-containerd.pinned\" value:\"pinned\"}" Jan 13 21:29:54.704663 containerd[1478]: time="2025-01-13T21:29:54.704615056Z" level=info msg="stop pulling image registry.k8s.io/pause:3.8: active requests=0, bytes read=312056" Jan 13 21:29:54.705417 containerd[1478]: time="2025-01-13T21:29:54.705367497Z" level=info msg="stop pulling image registry.k8s.io/pause:3.8: active requests=0, bytes read=0" Jan 13 21:29:54.706430 containerd[1478]: time="2025-01-13T21:29:54.706386959Z" level=info msg="ImageCreate event name:\"sha256:4873874c08efc72e9729683a83ffbb7502ee729e9a5ac097723806ea7fa13517\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"} labels:{key:\"io.cri-containerd.pinned\" value:\"pinned\"}" Jan 13 21:29:54.712073 containerd[1478]: time="2025-01-13T21:29:54.712021726Z" level=info msg="ImageCreate event name:\"registry.k8s.io/pause@sha256:9001185023633d17a2f98ff69b6ff2615b8ea02a825adffa40422f51dfdcde9d\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"} labels:{key:\"io.cri-containerd.pinned\" value:\"pinned\"}" Jan 13 21:29:54.713223 containerd[1478]: time="2025-01-13T21:29:54.713187442Z" level=info msg="Pulled image \"registry.k8s.io/pause:3.8\" with image id \"sha256:4873874c08efc72e9729683a83ffbb7502ee729e9a5ac097723806ea7fa13517\", repo tag \"registry.k8s.io/pause:3.8\", repo digest \"registry.k8s.io/pause@sha256:9001185023633d17a2f98ff69b6ff2615b8ea02a825adffa40422f51dfdcde9d\", size \"311286\" in 593.556902ms" Jan 13 21:29:54.715990 containerd[1478]: 
time="2025-01-13T21:29:54.715955904Z" level=info msg="Pulled image \"registry.k8s.io/pause:3.8\" with image id \"sha256:4873874c08efc72e9729683a83ffbb7502ee729e9a5ac097723806ea7fa13517\", repo tag \"registry.k8s.io/pause:3.8\", repo digest \"registry.k8s.io/pause@sha256:9001185023633d17a2f98ff69b6ff2615b8ea02a825adffa40422f51dfdcde9d\", size \"311286\" in 582.334745ms" Jan 13 21:29:54.783792 kubelet[1786]: E0113 21:29:54.783743 1786 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Jan 13 21:29:54.812903 containerd[1478]: time="2025-01-13T21:29:54.812809088Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Jan 13 21:29:54.812903 containerd[1478]: time="2025-01-13T21:29:54.812870413Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Jan 13 21:29:54.812903 containerd[1478]: time="2025-01-13T21:29:54.812886814Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Jan 13 21:29:54.813061 containerd[1478]: time="2025-01-13T21:29:54.812991771Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Jan 13 21:29:54.815832 containerd[1478]: time="2025-01-13T21:29:54.815715789Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Jan 13 21:29:54.815832 containerd[1478]: time="2025-01-13T21:29:54.815777395Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Jan 13 21:29:54.815832 containerd[1478]: time="2025-01-13T21:29:54.815792974Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." 
runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Jan 13 21:29:54.816052 containerd[1478]: time="2025-01-13T21:29:54.815876701Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Jan 13 21:29:54.880829 systemd[1]: Started cri-containerd-20cf6b59af312858d6be429d4878511b49ef0d04d855c4e90277a39867d2d7d8.scope - libcontainer container 20cf6b59af312858d6be429d4878511b49ef0d04d855c4e90277a39867d2d7d8. Jan 13 21:29:54.883049 systemd[1]: Started cri-containerd-2141167472ae1ce2140bcf818aef654c964a0bae632b96434e03de16f5391517.scope - libcontainer container 2141167472ae1ce2140bcf818aef654c964a0bae632b96434e03de16f5391517. Jan 13 21:29:54.906515 containerd[1478]: time="2025-01-13T21:29:54.906419251Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-proxy-27h4h,Uid:4df07308-3169-4c77-8f1e-cfb23c8fd1fd,Namespace:kube-system,Attempt:0,} returns sandbox id \"20cf6b59af312858d6be429d4878511b49ef0d04d855c4e90277a39867d2d7d8\"" Jan 13 21:29:54.907598 kubelet[1786]: E0113 21:29:54.907569 1786 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jan 13 21:29:54.909308 containerd[1478]: time="2025-01-13T21:29:54.909280166Z" level=info msg="PullImage \"registry.k8s.io/kube-proxy:v1.30.8\"" Jan 13 21:29:54.911237 containerd[1478]: time="2025-01-13T21:29:54.911209424Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:cilium-jrg78,Uid:866f611f-1768-48eb-a581-aac909eeb174,Namespace:kube-system,Attempt:0,} returns sandbox id \"2141167472ae1ce2140bcf818aef654c964a0bae632b96434e03de16f5391517\"" Jan 13 21:29:54.912014 kubelet[1786]: E0113 21:29:54.911877 1786 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jan 13 21:29:55.784150 kubelet[1786]: E0113 
21:29:55.784088 1786 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Jan 13 21:29:55.959001 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount830837857.mount: Deactivated successfully. Jan 13 21:29:56.254755 containerd[1478]: time="2025-01-13T21:29:56.254698255Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-proxy:v1.30.8\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jan 13 21:29:56.297273 containerd[1478]: time="2025-01-13T21:29:56.297242811Z" level=info msg="stop pulling image registry.k8s.io/kube-proxy:v1.30.8: active requests=0, bytes read=29057470" Jan 13 21:29:56.342422 containerd[1478]: time="2025-01-13T21:29:56.342359211Z" level=info msg="ImageCreate event name:\"sha256:ce61fda67eb41cf09d2b984e7979e289b5042e3983ddfc67be678425632cc0d2\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jan 13 21:29:56.374624 containerd[1478]: time="2025-01-13T21:29:56.374573153Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-proxy@sha256:f6d6be9417e22af78905000ac4fd134896bacd2188ea63c7cac8edd7a5d7e9b5\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jan 13 21:29:56.375139 containerd[1478]: time="2025-01-13T21:29:56.375108217Z" level=info msg="Pulled image \"registry.k8s.io/kube-proxy:v1.30.8\" with image id \"sha256:ce61fda67eb41cf09d2b984e7979e289b5042e3983ddfc67be678425632cc0d2\", repo tag \"registry.k8s.io/kube-proxy:v1.30.8\", repo digest \"registry.k8s.io/kube-proxy@sha256:f6d6be9417e22af78905000ac4fd134896bacd2188ea63c7cac8edd7a5d7e9b5\", size \"29056489\" in 1.465792594s" Jan 13 21:29:56.375178 containerd[1478]: time="2025-01-13T21:29:56.375139165Z" level=info msg="PullImage \"registry.k8s.io/kube-proxy:v1.30.8\" returns image reference \"sha256:ce61fda67eb41cf09d2b984e7979e289b5042e3983ddfc67be678425632cc0d2\"" Jan 13 21:29:56.376359 containerd[1478]: time="2025-01-13T21:29:56.376323636Z" level=info msg="PullImage 
\"quay.io/cilium/cilium:v1.12.5@sha256:06ce2b0a0a472e73334a7504ee5c5d8b2e2d7b72ef728ad94e564740dd505be5\"" Jan 13 21:29:56.377774 containerd[1478]: time="2025-01-13T21:29:56.377746755Z" level=info msg="CreateContainer within sandbox \"20cf6b59af312858d6be429d4878511b49ef0d04d855c4e90277a39867d2d7d8\" for container &ContainerMetadata{Name:kube-proxy,Attempt:0,}" Jan 13 21:29:56.697093 containerd[1478]: time="2025-01-13T21:29:56.697043029Z" level=info msg="CreateContainer within sandbox \"20cf6b59af312858d6be429d4878511b49ef0d04d855c4e90277a39867d2d7d8\" for &ContainerMetadata{Name:kube-proxy,Attempt:0,} returns container id \"a0843f2e5b777f752e51fcdf05ebd6291c07550bae2d464c385459647e4dc7bf\"" Jan 13 21:29:56.697772 containerd[1478]: time="2025-01-13T21:29:56.697728124Z" level=info msg="StartContainer for \"a0843f2e5b777f752e51fcdf05ebd6291c07550bae2d464c385459647e4dc7bf\"" Jan 13 21:29:56.726795 systemd[1]: Started cri-containerd-a0843f2e5b777f752e51fcdf05ebd6291c07550bae2d464c385459647e4dc7bf.scope - libcontainer container a0843f2e5b777f752e51fcdf05ebd6291c07550bae2d464c385459647e4dc7bf. Jan 13 21:29:56.754976 containerd[1478]: time="2025-01-13T21:29:56.754923005Z" level=info msg="StartContainer for \"a0843f2e5b777f752e51fcdf05ebd6291c07550bae2d464c385459647e4dc7bf\" returns successfully" Jan 13 21:29:56.784934 kubelet[1786]: E0113 21:29:56.784224 1786 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Jan 13 21:29:56.959144 systemd[1]: run-containerd-runc-k8s.io-a0843f2e5b777f752e51fcdf05ebd6291c07550bae2d464c385459647e4dc7bf-runc.hBhjJP.mount: Deactivated successfully. 
Jan 13 21:29:57.371858 kubelet[1786]: E0113 21:29:57.371828 1786 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jan 13 21:29:57.381454 kubelet[1786]: I0113 21:29:57.381387 1786 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/kube-proxy-27h4h" podStartSLOduration=2.914018419 podStartE2EDuration="4.381366821s" podCreationTimestamp="2025-01-13 21:29:53 +0000 UTC" firstStartedPulling="2025-01-13 21:29:54.908771582 +0000 UTC m=+2.788720668" lastFinishedPulling="2025-01-13 21:29:56.376119994 +0000 UTC m=+4.256069070" observedRunningTime="2025-01-13 21:29:57.381240444 +0000 UTC m=+5.261189510" watchObservedRunningTime="2025-01-13 21:29:57.381366821 +0000 UTC m=+5.261315897" Jan 13 21:29:57.785147 kubelet[1786]: E0113 21:29:57.785031 1786 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Jan 13 21:29:58.372690 kubelet[1786]: E0113 21:29:58.372648 1786 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jan 13 21:29:58.786356 kubelet[1786]: E0113 21:29:58.786190 1786 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Jan 13 21:29:59.786822 kubelet[1786]: E0113 21:29:59.786782 1786 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Jan 13 21:29:59.925328 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount3781834371.mount: Deactivated successfully. 
Jan 13 21:30:00.787123 kubelet[1786]: E0113 21:30:00.787058 1786 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Jan 13 21:30:01.787584 kubelet[1786]: E0113 21:30:01.787512 1786 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Jan 13 21:30:02.787789 kubelet[1786]: E0113 21:30:02.787724 1786 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Jan 13 21:30:03.346200 containerd[1478]: time="2025-01-13T21:30:03.346130943Z" level=info msg="ImageCreate event name:\"quay.io/cilium/cilium@sha256:06ce2b0a0a472e73334a7504ee5c5d8b2e2d7b72ef728ad94e564740dd505be5\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jan 13 21:30:03.346867 containerd[1478]: time="2025-01-13T21:30:03.346830014Z" level=info msg="stop pulling image quay.io/cilium/cilium@sha256:06ce2b0a0a472e73334a7504ee5c5d8b2e2d7b72ef728ad94e564740dd505be5: active requests=0, bytes read=166734075" Jan 13 21:30:03.347793 containerd[1478]: time="2025-01-13T21:30:03.347767221Z" level=info msg="ImageCreate event name:\"sha256:3e35b3e9f295e7748482d40ed499b0ff7961f1f128d479d8e6682b3245bba69b\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jan 13 21:30:03.349235 containerd[1478]: time="2025-01-13T21:30:03.349173979Z" level=info msg="Pulled image \"quay.io/cilium/cilium:v1.12.5@sha256:06ce2b0a0a472e73334a7504ee5c5d8b2e2d7b72ef728ad94e564740dd505be5\" with image id \"sha256:3e35b3e9f295e7748482d40ed499b0ff7961f1f128d479d8e6682b3245bba69b\", repo tag \"\", repo digest \"quay.io/cilium/cilium@sha256:06ce2b0a0a472e73334a7504ee5c5d8b2e2d7b72ef728ad94e564740dd505be5\", size \"166719855\" in 6.9728062s" Jan 13 21:30:03.349235 containerd[1478]: time="2025-01-13T21:30:03.349225356Z" level=info msg="PullImage \"quay.io/cilium/cilium:v1.12.5@sha256:06ce2b0a0a472e73334a7504ee5c5d8b2e2d7b72ef728ad94e564740dd505be5\" 
returns image reference \"sha256:3e35b3e9f295e7748482d40ed499b0ff7961f1f128d479d8e6682b3245bba69b\"" Jan 13 21:30:03.351229 containerd[1478]: time="2025-01-13T21:30:03.351187115Z" level=info msg="CreateContainer within sandbox \"2141167472ae1ce2140bcf818aef654c964a0bae632b96434e03de16f5391517\" for container &ContainerMetadata{Name:mount-cgroup,Attempt:0,}" Jan 13 21:30:03.362982 containerd[1478]: time="2025-01-13T21:30:03.362932001Z" level=info msg="CreateContainer within sandbox \"2141167472ae1ce2140bcf818aef654c964a0bae632b96434e03de16f5391517\" for &ContainerMetadata{Name:mount-cgroup,Attempt:0,} returns container id \"a8771adb4b06331a092469ee173caeef6be1c0c792fdb57059714e3ae390a2b1\"" Jan 13 21:30:03.363447 containerd[1478]: time="2025-01-13T21:30:03.363421208Z" level=info msg="StartContainer for \"a8771adb4b06331a092469ee173caeef6be1c0c792fdb57059714e3ae390a2b1\"" Jan 13 21:30:03.387460 systemd[1]: run-containerd-runc-k8s.io-a8771adb4b06331a092469ee173caeef6be1c0c792fdb57059714e3ae390a2b1-runc.1X8FWX.mount: Deactivated successfully. Jan 13 21:30:03.397816 systemd[1]: Started cri-containerd-a8771adb4b06331a092469ee173caeef6be1c0c792fdb57059714e3ae390a2b1.scope - libcontainer container a8771adb4b06331a092469ee173caeef6be1c0c792fdb57059714e3ae390a2b1. Jan 13 21:30:03.435640 systemd[1]: cri-containerd-a8771adb4b06331a092469ee173caeef6be1c0c792fdb57059714e3ae390a2b1.scope: Deactivated successfully. 
Jan 13 21:30:03.470337 containerd[1478]: time="2025-01-13T21:30:03.470253937Z" level=info msg="StartContainer for \"a8771adb4b06331a092469ee173caeef6be1c0c792fdb57059714e3ae390a2b1\" returns successfully" Jan 13 21:30:03.787971 kubelet[1786]: E0113 21:30:03.787823 1786 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Jan 13 21:30:04.359905 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-a8771adb4b06331a092469ee173caeef6be1c0c792fdb57059714e3ae390a2b1-rootfs.mount: Deactivated successfully. Jan 13 21:30:04.383980 kubelet[1786]: E0113 21:30:04.383955 1786 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jan 13 21:30:04.788945 kubelet[1786]: E0113 21:30:04.788813 1786 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Jan 13 21:30:04.851955 containerd[1478]: time="2025-01-13T21:30:04.851892453Z" level=info msg="shim disconnected" id=a8771adb4b06331a092469ee173caeef6be1c0c792fdb57059714e3ae390a2b1 namespace=k8s.io Jan 13 21:30:04.851955 containerd[1478]: time="2025-01-13T21:30:04.851945031Z" level=warning msg="cleaning up after shim disconnected" id=a8771adb4b06331a092469ee173caeef6be1c0c792fdb57059714e3ae390a2b1 namespace=k8s.io Jan 13 21:30:04.851955 containerd[1478]: time="2025-01-13T21:30:04.851955221Z" level=info msg="cleaning up dead shim" namespace=k8s.io Jan 13 21:30:05.386335 kubelet[1786]: E0113 21:30:05.386306 1786 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jan 13 21:30:05.387853 containerd[1478]: time="2025-01-13T21:30:05.387815951Z" level=info msg="CreateContainer within sandbox \"2141167472ae1ce2140bcf818aef654c964a0bae632b96434e03de16f5391517\" for container 
&ContainerMetadata{Name:apply-sysctl-overwrites,Attempt:0,}" Jan 13 21:30:05.789387 kubelet[1786]: E0113 21:30:05.789258 1786 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Jan 13 21:30:06.345221 containerd[1478]: time="2025-01-13T21:30:06.345164641Z" level=info msg="CreateContainer within sandbox \"2141167472ae1ce2140bcf818aef654c964a0bae632b96434e03de16f5391517\" for &ContainerMetadata{Name:apply-sysctl-overwrites,Attempt:0,} returns container id \"83d87d9f49e58f991c413ab4f64a8d26a41e772d37f4f487d9b2742adafd4265\"" Jan 13 21:30:06.345769 containerd[1478]: time="2025-01-13T21:30:06.345709112Z" level=info msg="StartContainer for \"83d87d9f49e58f991c413ab4f64a8d26a41e772d37f4f487d9b2742adafd4265\"" Jan 13 21:30:06.367149 systemd[1]: run-containerd-runc-k8s.io-83d87d9f49e58f991c413ab4f64a8d26a41e772d37f4f487d9b2742adafd4265-runc.wkn6ke.mount: Deactivated successfully. Jan 13 21:30:06.379810 systemd[1]: Started cri-containerd-83d87d9f49e58f991c413ab4f64a8d26a41e772d37f4f487d9b2742adafd4265.scope - libcontainer container 83d87d9f49e58f991c413ab4f64a8d26a41e772d37f4f487d9b2742adafd4265. Jan 13 21:30:06.414009 systemd[1]: systemd-sysctl.service: Deactivated successfully. Jan 13 21:30:06.414253 systemd[1]: Stopped systemd-sysctl.service - Apply Kernel Variables. Jan 13 21:30:06.414315 systemd[1]: Stopping systemd-sysctl.service - Apply Kernel Variables... Jan 13 21:30:06.421054 systemd[1]: Starting systemd-sysctl.service - Apply Kernel Variables... Jan 13 21:30:06.421274 systemd[1]: cri-containerd-83d87d9f49e58f991c413ab4f64a8d26a41e772d37f4f487d9b2742adafd4265.scope: Deactivated successfully. Jan 13 21:30:06.434544 systemd[1]: Finished systemd-sysctl.service - Apply Kernel Variables. 
Jan 13 21:30:06.490455 containerd[1478]: time="2025-01-13T21:30:06.490384260Z" level=info msg="StartContainer for \"83d87d9f49e58f991c413ab4f64a8d26a41e772d37f4f487d9b2742adafd4265\" returns successfully" Jan 13 21:30:06.718772 containerd[1478]: time="2025-01-13T21:30:06.718609654Z" level=info msg="shim disconnected" id=83d87d9f49e58f991c413ab4f64a8d26a41e772d37f4f487d9b2742adafd4265 namespace=k8s.io Jan 13 21:30:06.718772 containerd[1478]: time="2025-01-13T21:30:06.718653125Z" level=warning msg="cleaning up after shim disconnected" id=83d87d9f49e58f991c413ab4f64a8d26a41e772d37f4f487d9b2742adafd4265 namespace=k8s.io Jan 13 21:30:06.718772 containerd[1478]: time="2025-01-13T21:30:06.718664597Z" level=info msg="cleaning up dead shim" namespace=k8s.io Jan 13 21:30:06.790334 kubelet[1786]: E0113 21:30:06.790288 1786 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Jan 13 21:30:07.111072 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-83d87d9f49e58f991c413ab4f64a8d26a41e772d37f4f487d9b2742adafd4265-rootfs.mount: Deactivated successfully. 
Jan 13 21:30:07.394872 kubelet[1786]: E0113 21:30:07.394601 1786 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jan 13 21:30:07.396639 containerd[1478]: time="2025-01-13T21:30:07.396582866Z" level=info msg="CreateContainer within sandbox \"2141167472ae1ce2140bcf818aef654c964a0bae632b96434e03de16f5391517\" for container &ContainerMetadata{Name:mount-bpf-fs,Attempt:0,}" Jan 13 21:30:07.496959 containerd[1478]: time="2025-01-13T21:30:07.496903342Z" level=info msg="CreateContainer within sandbox \"2141167472ae1ce2140bcf818aef654c964a0bae632b96434e03de16f5391517\" for &ContainerMetadata{Name:mount-bpf-fs,Attempt:0,} returns container id \"7d7e38eab70d420f3d0e8a757f52cf3b185858fcec72adc9ed9d26715f1e4d28\"" Jan 13 21:30:07.497496 containerd[1478]: time="2025-01-13T21:30:07.497464835Z" level=info msg="StartContainer for \"7d7e38eab70d420f3d0e8a757f52cf3b185858fcec72adc9ed9d26715f1e4d28\"" Jan 13 21:30:07.534821 systemd[1]: Started cri-containerd-7d7e38eab70d420f3d0e8a757f52cf3b185858fcec72adc9ed9d26715f1e4d28.scope - libcontainer container 7d7e38eab70d420f3d0e8a757f52cf3b185858fcec72adc9ed9d26715f1e4d28. Jan 13 21:30:07.565362 systemd[1]: cri-containerd-7d7e38eab70d420f3d0e8a757f52cf3b185858fcec72adc9ed9d26715f1e4d28.scope: Deactivated successfully. 
Jan 13 21:30:07.566493 containerd[1478]: time="2025-01-13T21:30:07.566442944Z" level=info msg="StartContainer for \"7d7e38eab70d420f3d0e8a757f52cf3b185858fcec72adc9ed9d26715f1e4d28\" returns successfully" Jan 13 21:30:07.589941 containerd[1478]: time="2025-01-13T21:30:07.589853026Z" level=info msg="shim disconnected" id=7d7e38eab70d420f3d0e8a757f52cf3b185858fcec72adc9ed9d26715f1e4d28 namespace=k8s.io Jan 13 21:30:07.589941 containerd[1478]: time="2025-01-13T21:30:07.589932525Z" level=warning msg="cleaning up after shim disconnected" id=7d7e38eab70d420f3d0e8a757f52cf3b185858fcec72adc9ed9d26715f1e4d28 namespace=k8s.io Jan 13 21:30:07.589941 containerd[1478]: time="2025-01-13T21:30:07.589944408Z" level=info msg="cleaning up dead shim" namespace=k8s.io Jan 13 21:30:07.790560 kubelet[1786]: E0113 21:30:07.790413 1786 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Jan 13 21:30:08.111045 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-7d7e38eab70d420f3d0e8a757f52cf3b185858fcec72adc9ed9d26715f1e4d28-rootfs.mount: Deactivated successfully. 
Jan 13 21:30:08.398034 kubelet[1786]: E0113 21:30:08.397909 1786 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jan 13 21:30:08.399773 containerd[1478]: time="2025-01-13T21:30:08.399737739Z" level=info msg="CreateContainer within sandbox \"2141167472ae1ce2140bcf818aef654c964a0bae632b96434e03de16f5391517\" for container &ContainerMetadata{Name:clean-cilium-state,Attempt:0,}" Jan 13 21:30:08.415419 containerd[1478]: time="2025-01-13T21:30:08.415377509Z" level=info msg="CreateContainer within sandbox \"2141167472ae1ce2140bcf818aef654c964a0bae632b96434e03de16f5391517\" for &ContainerMetadata{Name:clean-cilium-state,Attempt:0,} returns container id \"cf51cea0862d44e8ca9cd52a661dc26d79fb5877b1a6b8154438bdebebaeb05e\"" Jan 13 21:30:08.415894 containerd[1478]: time="2025-01-13T21:30:08.415868230Z" level=info msg="StartContainer for \"cf51cea0862d44e8ca9cd52a661dc26d79fb5877b1a6b8154438bdebebaeb05e\"" Jan 13 21:30:08.443793 systemd[1]: Started cri-containerd-cf51cea0862d44e8ca9cd52a661dc26d79fb5877b1a6b8154438bdebebaeb05e.scope - libcontainer container cf51cea0862d44e8ca9cd52a661dc26d79fb5877b1a6b8154438bdebebaeb05e. Jan 13 21:30:08.465791 systemd[1]: cri-containerd-cf51cea0862d44e8ca9cd52a661dc26d79fb5877b1a6b8154438bdebebaeb05e.scope: Deactivated successfully. 
Jan 13 21:30:08.468470 containerd[1478]: time="2025-01-13T21:30:08.468420925Z" level=info msg="StartContainer for \"cf51cea0862d44e8ca9cd52a661dc26d79fb5877b1a6b8154438bdebebaeb05e\" returns successfully" Jan 13 21:30:08.490754 containerd[1478]: time="2025-01-13T21:30:08.490658147Z" level=info msg="shim disconnected" id=cf51cea0862d44e8ca9cd52a661dc26d79fb5877b1a6b8154438bdebebaeb05e namespace=k8s.io Jan 13 21:30:08.490754 containerd[1478]: time="2025-01-13T21:30:08.490744610Z" level=warning msg="cleaning up after shim disconnected" id=cf51cea0862d44e8ca9cd52a661dc26d79fb5877b1a6b8154438bdebebaeb05e namespace=k8s.io Jan 13 21:30:08.490754 containerd[1478]: time="2025-01-13T21:30:08.490756051Z" level=info msg="cleaning up dead shim" namespace=k8s.io Jan 13 21:30:08.791594 kubelet[1786]: E0113 21:30:08.791449 1786 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Jan 13 21:30:09.111197 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-cf51cea0862d44e8ca9cd52a661dc26d79fb5877b1a6b8154438bdebebaeb05e-rootfs.mount: Deactivated successfully. 
Jan 13 21:30:09.402112 kubelet[1786]: E0113 21:30:09.401979 1786 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jan 13 21:30:09.403616 containerd[1478]: time="2025-01-13T21:30:09.403576976Z" level=info msg="CreateContainer within sandbox \"2141167472ae1ce2140bcf818aef654c964a0bae632b96434e03de16f5391517\" for container &ContainerMetadata{Name:cilium-agent,Attempt:0,}" Jan 13 21:30:09.792226 kubelet[1786]: E0113 21:30:09.792108 1786 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Jan 13 21:30:09.932081 containerd[1478]: time="2025-01-13T21:30:09.931995249Z" level=info msg="CreateContainer within sandbox \"2141167472ae1ce2140bcf818aef654c964a0bae632b96434e03de16f5391517\" for &ContainerMetadata{Name:cilium-agent,Attempt:0,} returns container id \"1f6ecf66064c5e7e6fa9cd38c804f619a8c33aa7d7624c42e8a7330d52a3f39a\"" Jan 13 21:30:09.932492 containerd[1478]: time="2025-01-13T21:30:09.932471202Z" level=info msg="StartContainer for \"1f6ecf66064c5e7e6fa9cd38c804f619a8c33aa7d7624c42e8a7330d52a3f39a\"" Jan 13 21:30:09.961858 systemd[1]: Started cri-containerd-1f6ecf66064c5e7e6fa9cd38c804f619a8c33aa7d7624c42e8a7330d52a3f39a.scope - libcontainer container 1f6ecf66064c5e7e6fa9cd38c804f619a8c33aa7d7624c42e8a7330d52a3f39a. 
Jan 13 21:30:09.988843 containerd[1478]: time="2025-01-13T21:30:09.988801070Z" level=info msg="StartContainer for \"1f6ecf66064c5e7e6fa9cd38c804f619a8c33aa7d7624c42e8a7330d52a3f39a\" returns successfully" Jan 13 21:30:10.157360 kubelet[1786]: I0113 21:30:10.157271 1786 kubelet_node_status.go:497] "Fast updating node status as it just became ready" Jan 13 21:30:10.406483 kubelet[1786]: E0113 21:30:10.406441 1786 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jan 13 21:30:10.420065 kubelet[1786]: I0113 21:30:10.419939 1786 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/cilium-jrg78" podStartSLOduration=8.982424096 podStartE2EDuration="17.419921345s" podCreationTimestamp="2025-01-13 21:29:53 +0000 UTC" firstStartedPulling="2025-01-13 21:29:54.912518539 +0000 UTC m=+2.792467615" lastFinishedPulling="2025-01-13 21:30:03.350015788 +0000 UTC m=+11.229964864" observedRunningTime="2025-01-13 21:30:10.41979634 +0000 UTC m=+18.299745426" watchObservedRunningTime="2025-01-13 21:30:10.419921345 +0000 UTC m=+18.299870421" Jan 13 21:30:10.474696 kernel: Initializing XFRM netlink socket Jan 13 21:30:10.793244 kubelet[1786]: E0113 21:30:10.793084 1786 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Jan 13 21:30:11.407270 kubelet[1786]: E0113 21:30:11.407224 1786 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jan 13 21:30:11.793610 kubelet[1786]: E0113 21:30:11.793451 1786 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Jan 13 21:30:12.174095 systemd-networkd[1403]: cilium_host: Link UP Jan 13 21:30:12.174297 systemd-networkd[1403]: cilium_net: Link UP Jan 13 
21:30:12.174522 systemd-networkd[1403]: cilium_net: Gained carrier Jan 13 21:30:12.174760 systemd-networkd[1403]: cilium_host: Gained carrier Jan 13 21:30:12.273907 systemd-networkd[1403]: cilium_vxlan: Link UP Jan 13 21:30:12.273923 systemd-networkd[1403]: cilium_vxlan: Gained carrier Jan 13 21:30:12.409417 kubelet[1786]: E0113 21:30:12.409365 1786 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jan 13 21:30:12.481708 kernel: NET: Registered PF_ALG protocol family Jan 13 21:30:12.635911 systemd-networkd[1403]: cilium_host: Gained IPv6LL Jan 13 21:30:12.782602 kubelet[1786]: E0113 21:30:12.782541 1786 file.go:104] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Jan 13 21:30:12.794571 kubelet[1786]: E0113 21:30:12.794508 1786 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Jan 13 21:30:12.843881 systemd-networkd[1403]: cilium_net: Gained IPv6LL Jan 13 21:30:13.193967 systemd-networkd[1403]: lxc_health: Link UP Jan 13 21:30:13.203549 systemd-networkd[1403]: lxc_health: Gained carrier Jan 13 21:30:13.422802 systemd-networkd[1403]: cilium_vxlan: Gained IPv6LL Jan 13 21:30:13.795336 kubelet[1786]: E0113 21:30:13.795288 1786 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Jan 13 21:30:14.095517 kubelet[1786]: E0113 21:30:14.095478 1786 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jan 13 21:30:14.415252 kubelet[1786]: E0113 21:30:14.414983 1786 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jan 13 21:30:14.796484 
kubelet[1786]: E0113 21:30:14.796330 1786 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Jan 13 21:30:15.083887 systemd-networkd[1403]: lxc_health: Gained IPv6LL Jan 13 21:30:15.413532 kubelet[1786]: E0113 21:30:15.413407 1786 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jan 13 21:30:15.513685 kubelet[1786]: I0113 21:30:15.513635 1786 topology_manager.go:215] "Topology Admit Handler" podUID="c3802ea8-e746-4b4a-bc8a-86816249d92c" podNamespace="default" podName="nginx-deployment-85f456d6dd-6chzw" Jan 13 21:30:15.519250 systemd[1]: Created slice kubepods-besteffort-podc3802ea8_e746_4b4a_bc8a_86816249d92c.slice - libcontainer container kubepods-besteffort-podc3802ea8_e746_4b4a_bc8a_86816249d92c.slice. Jan 13 21:30:15.541475 kubelet[1786]: I0113 21:30:15.541420 1786 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-lzr45\" (UniqueName: \"kubernetes.io/projected/c3802ea8-e746-4b4a-bc8a-86816249d92c-kube-api-access-lzr45\") pod \"nginx-deployment-85f456d6dd-6chzw\" (UID: \"c3802ea8-e746-4b4a-bc8a-86816249d92c\") " pod="default/nginx-deployment-85f456d6dd-6chzw" Jan 13 21:30:15.797499 kubelet[1786]: E0113 21:30:15.797337 1786 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Jan 13 21:30:15.822241 containerd[1478]: time="2025-01-13T21:30:15.822184750Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:nginx-deployment-85f456d6dd-6chzw,Uid:c3802ea8-e746-4b4a-bc8a-86816249d92c,Namespace:default,Attempt:0,}" Jan 13 21:30:16.267689 systemd-networkd[1403]: lxcc4e4aad896f8: Link UP Jan 13 21:30:16.275712 kernel: eth0: renamed from tmp2a72d Jan 13 21:30:16.283891 systemd-networkd[1403]: lxcc4e4aad896f8: Gained carrier Jan 13 21:30:16.797752 
kubelet[1786]: E0113 21:30:16.797661 1786 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Jan 13 21:30:17.452781 systemd-networkd[1403]: lxcc4e4aad896f8: Gained IPv6LL Jan 13 21:30:17.475157 containerd[1478]: time="2025-01-13T21:30:17.475018412Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Jan 13 21:30:17.475157 containerd[1478]: time="2025-01-13T21:30:17.475104697Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Jan 13 21:30:17.475157 containerd[1478]: time="2025-01-13T21:30:17.475124345Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Jan 13 21:30:17.475646 containerd[1478]: time="2025-01-13T21:30:17.475237912Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Jan 13 21:30:17.505882 systemd[1]: Started cri-containerd-2a72d0951d2fd0d34d39df23054f3405243ab555f4f7e7517915d41b884cf01e.scope - libcontainer container 2a72d0951d2fd0d34d39df23054f3405243ab555f4f7e7517915d41b884cf01e. 
Jan 13 21:30:17.519083 systemd-resolved[1342]: Failed to determine the local hostname and LLMNR/mDNS names, ignoring: No such device or address Jan 13 21:30:17.543852 containerd[1478]: time="2025-01-13T21:30:17.543794865Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:nginx-deployment-85f456d6dd-6chzw,Uid:c3802ea8-e746-4b4a-bc8a-86816249d92c,Namespace:default,Attempt:0,} returns sandbox id \"2a72d0951d2fd0d34d39df23054f3405243ab555f4f7e7517915d41b884cf01e\"" Jan 13 21:30:17.545409 containerd[1478]: time="2025-01-13T21:30:17.545351941Z" level=info msg="PullImage \"ghcr.io/flatcar/nginx:latest\"" Jan 13 21:30:17.798433 kubelet[1786]: E0113 21:30:17.798272 1786 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Jan 13 21:30:18.799164 kubelet[1786]: E0113 21:30:18.799105 1786 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Jan 13 21:30:19.799408 kubelet[1786]: E0113 21:30:19.799345 1786 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Jan 13 21:30:20.799758 kubelet[1786]: E0113 21:30:20.799721 1786 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Jan 13 21:30:21.253286 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount3336555606.mount: Deactivated successfully. 
Jan 13 21:30:21.800461 kubelet[1786]: E0113 21:30:21.800378 1786 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Jan 13 21:30:22.800708 kubelet[1786]: E0113 21:30:22.800639 1786 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Jan 13 21:30:22.917890 containerd[1478]: time="2025-01-13T21:30:22.917808115Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/nginx:latest\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jan 13 21:30:22.918562 containerd[1478]: time="2025-01-13T21:30:22.918513425Z" level=info msg="stop pulling image ghcr.io/flatcar/nginx:latest: active requests=0, bytes read=71036018" Jan 13 21:30:22.919632 containerd[1478]: time="2025-01-13T21:30:22.919575796Z" level=info msg="ImageCreate event name:\"sha256:29ef6eaebfc53650f3a4609edbf9d35e866f56b2c5e01d32d93439031b300f0b\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jan 13 21:30:22.921984 containerd[1478]: time="2025-01-13T21:30:22.921946894Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/nginx@sha256:eca1d1ff18c7af45f86b7e0b572090f563a676ddca3da2ecff678390366335ad\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jan 13 21:30:22.923034 containerd[1478]: time="2025-01-13T21:30:22.922988625Z" level=info msg="Pulled image \"ghcr.io/flatcar/nginx:latest\" with image id \"sha256:29ef6eaebfc53650f3a4609edbf9d35e866f56b2c5e01d32d93439031b300f0b\", repo tag \"ghcr.io/flatcar/nginx:latest\", repo digest \"ghcr.io/flatcar/nginx@sha256:eca1d1ff18c7af45f86b7e0b572090f563a676ddca3da2ecff678390366335ad\", size \"71035896\" in 5.377593021s" Jan 13 21:30:22.923069 containerd[1478]: time="2025-01-13T21:30:22.923033450Z" level=info msg="PullImage \"ghcr.io/flatcar/nginx:latest\" returns image reference \"sha256:29ef6eaebfc53650f3a4609edbf9d35e866f56b2c5e01d32d93439031b300f0b\"" Jan 13 21:30:22.924960 containerd[1478]: 
time="2025-01-13T21:30:22.924933783Z" level=info msg="CreateContainer within sandbox \"2a72d0951d2fd0d34d39df23054f3405243ab555f4f7e7517915d41b884cf01e\" for container &ContainerMetadata{Name:nginx,Attempt:0,}" Jan 13 21:30:22.937240 containerd[1478]: time="2025-01-13T21:30:22.937203484Z" level=info msg="CreateContainer within sandbox \"2a72d0951d2fd0d34d39df23054f3405243ab555f4f7e7517915d41b884cf01e\" for &ContainerMetadata{Name:nginx,Attempt:0,} returns container id \"d1d52720a90272cceeae12530e19497afd4c30c930a77ebf7e520be04d04cfc3\"" Jan 13 21:30:22.937713 containerd[1478]: time="2025-01-13T21:30:22.937659832Z" level=info msg="StartContainer for \"d1d52720a90272cceeae12530e19497afd4c30c930a77ebf7e520be04d04cfc3\"" Jan 13 21:30:22.973965 systemd[1]: Started cri-containerd-d1d52720a90272cceeae12530e19497afd4c30c930a77ebf7e520be04d04cfc3.scope - libcontainer container d1d52720a90272cceeae12530e19497afd4c30c930a77ebf7e520be04d04cfc3. Jan 13 21:30:23.002411 containerd[1478]: time="2025-01-13T21:30:23.002359976Z" level=info msg="StartContainer for \"d1d52720a90272cceeae12530e19497afd4c30c930a77ebf7e520be04d04cfc3\" returns successfully" Jan 13 21:30:23.494786 kubelet[1786]: I0113 21:30:23.494702 1786 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="default/nginx-deployment-85f456d6dd-6chzw" podStartSLOduration=3.115915596 podStartE2EDuration="8.494642155s" podCreationTimestamp="2025-01-13 21:30:15 +0000 UTC" firstStartedPulling="2025-01-13 21:30:17.545099679 +0000 UTC m=+25.425048755" lastFinishedPulling="2025-01-13 21:30:22.923826237 +0000 UTC m=+30.803775314" observedRunningTime="2025-01-13 21:30:23.494531654 +0000 UTC m=+31.374480720" watchObservedRunningTime="2025-01-13 21:30:23.494642155 +0000 UTC m=+31.374591231" Jan 13 21:30:23.801654 kubelet[1786]: E0113 21:30:23.801466 1786 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Jan 13 21:30:24.166893 update_engine[1465]: 
I20250113 21:30:24.166812 1465 update_attempter.cc:509] Updating boot flags... Jan 13 21:30:24.201730 kernel: BTRFS warning: duplicate device /dev/vda3 devid 1 generation 37 scanned by (udev-worker) (3002) Jan 13 21:30:24.239855 kernel: BTRFS warning: duplicate device /dev/vda3 devid 1 generation 37 scanned by (udev-worker) (3000) Jan 13 21:30:24.273593 kernel: BTRFS warning: duplicate device /dev/vda3 devid 1 generation 37 scanned by (udev-worker) (3000) Jan 13 21:30:24.802043 kubelet[1786]: E0113 21:30:24.801985 1786 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Jan 13 21:30:25.802905 kubelet[1786]: E0113 21:30:25.802846 1786 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Jan 13 21:30:26.803793 kubelet[1786]: E0113 21:30:26.803743 1786 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Jan 13 21:30:27.307179 kubelet[1786]: I0113 21:30:27.307131 1786 topology_manager.go:215] "Topology Admit Handler" podUID="0c12e671-3e93-4e38-80af-98206e57aa5a" podNamespace="default" podName="nfs-server-provisioner-0" Jan 13 21:30:27.313911 systemd[1]: Created slice kubepods-besteffort-pod0c12e671_3e93_4e38_80af_98206e57aa5a.slice - libcontainer container kubepods-besteffort-pod0c12e671_3e93_4e38_80af_98206e57aa5a.slice. 
Jan 13 21:30:27.411724 kubelet[1786]: I0113 21:30:27.411651 1786 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-bwzh7\" (UniqueName: \"kubernetes.io/projected/0c12e671-3e93-4e38-80af-98206e57aa5a-kube-api-access-bwzh7\") pod \"nfs-server-provisioner-0\" (UID: \"0c12e671-3e93-4e38-80af-98206e57aa5a\") " pod="default/nfs-server-provisioner-0" Jan 13 21:30:27.411875 kubelet[1786]: I0113 21:30:27.411731 1786 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"data\" (UniqueName: \"kubernetes.io/empty-dir/0c12e671-3e93-4e38-80af-98206e57aa5a-data\") pod \"nfs-server-provisioner-0\" (UID: \"0c12e671-3e93-4e38-80af-98206e57aa5a\") " pod="default/nfs-server-provisioner-0" Jan 13 21:30:27.617288 containerd[1478]: time="2025-01-13T21:30:27.617082852Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:nfs-server-provisioner-0,Uid:0c12e671-3e93-4e38-80af-98206e57aa5a,Namespace:default,Attempt:0,}" Jan 13 21:30:27.804415 kubelet[1786]: E0113 21:30:27.804352 1786 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Jan 13 21:30:28.016804 systemd-networkd[1403]: lxc26f80e10aafa: Link UP Jan 13 21:30:28.022690 kernel: eth0: renamed from tmp7413c Jan 13 21:30:28.028355 systemd-networkd[1403]: lxc26f80e10aafa: Gained carrier Jan 13 21:30:28.267463 containerd[1478]: time="2025-01-13T21:30:28.267278974Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Jan 13 21:30:28.267463 containerd[1478]: time="2025-01-13T21:30:28.267337595Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Jan 13 21:30:28.267463 containerd[1478]: time="2025-01-13T21:30:28.267349768Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." 
runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Jan 13 21:30:28.267616 containerd[1478]: time="2025-01-13T21:30:28.267427705Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Jan 13 21:30:28.295807 systemd[1]: Started cri-containerd-7413c0d64586e1740f826e73b4aa2e2148e831ee0af42c5969e14784ab6c8b4d.scope - libcontainer container 7413c0d64586e1740f826e73b4aa2e2148e831ee0af42c5969e14784ab6c8b4d. Jan 13 21:30:28.306007 systemd-resolved[1342]: Failed to determine the local hostname and LLMNR/mDNS names, ignoring: No such device or address Jan 13 21:30:28.326577 containerd[1478]: time="2025-01-13T21:30:28.326541744Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:nfs-server-provisioner-0,Uid:0c12e671-3e93-4e38-80af-98206e57aa5a,Namespace:default,Attempt:0,} returns sandbox id \"7413c0d64586e1740f826e73b4aa2e2148e831ee0af42c5969e14784ab6c8b4d\"" Jan 13 21:30:28.328072 containerd[1478]: time="2025-01-13T21:30:28.328045040Z" level=info msg="PullImage \"registry.k8s.io/sig-storage/nfs-provisioner:v4.0.8\"" Jan 13 21:30:28.804813 kubelet[1786]: E0113 21:30:28.804740 1786 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Jan 13 21:30:29.291842 systemd-networkd[1403]: lxc26f80e10aafa: Gained IPv6LL Jan 13 21:30:29.805275 kubelet[1786]: E0113 21:30:29.805232 1786 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Jan 13 21:30:30.259783 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount3235523971.mount: Deactivated successfully. 
Jan 13 21:30:30.805936 kubelet[1786]: E0113 21:30:30.805892 1786 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Jan 13 21:30:31.807070 kubelet[1786]: E0113 21:30:31.806994 1786 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Jan 13 21:30:32.609066 containerd[1478]: time="2025-01-13T21:30:32.608989123Z" level=info msg="ImageCreate event name:\"registry.k8s.io/sig-storage/nfs-provisioner:v4.0.8\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jan 13 21:30:32.610436 containerd[1478]: time="2025-01-13T21:30:32.610398214Z" level=info msg="stop pulling image registry.k8s.io/sig-storage/nfs-provisioner:v4.0.8: active requests=0, bytes read=91039406" Jan 13 21:30:32.612141 containerd[1478]: time="2025-01-13T21:30:32.612099648Z" level=info msg="ImageCreate event name:\"sha256:fd0b16f70b66b72bcb2f91d556fa33eba02729c44ffc5f2c16130e7f9fbed3c4\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jan 13 21:30:32.614852 containerd[1478]: time="2025-01-13T21:30:32.614811962Z" level=info msg="ImageCreate event name:\"registry.k8s.io/sig-storage/nfs-provisioner@sha256:c825f3d5e28bde099bd7a3daace28772d412c9157ad47fa752a9ad0baafc118d\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jan 13 21:30:32.615697 containerd[1478]: time="2025-01-13T21:30:32.615633514Z" level=info msg="Pulled image \"registry.k8s.io/sig-storage/nfs-provisioner:v4.0.8\" with image id \"sha256:fd0b16f70b66b72bcb2f91d556fa33eba02729c44ffc5f2c16130e7f9fbed3c4\", repo tag \"registry.k8s.io/sig-storage/nfs-provisioner:v4.0.8\", repo digest \"registry.k8s.io/sig-storage/nfs-provisioner@sha256:c825f3d5e28bde099bd7a3daace28772d412c9157ad47fa752a9ad0baafc118d\", size \"91036984\" in 4.287549971s" Jan 13 21:30:32.615697 containerd[1478]: time="2025-01-13T21:30:32.615685342Z" level=info msg="PullImage \"registry.k8s.io/sig-storage/nfs-provisioner:v4.0.8\" 
returns image reference \"sha256:fd0b16f70b66b72bcb2f91d556fa33eba02729c44ffc5f2c16130e7f9fbed3c4\"" Jan 13 21:30:32.617732 containerd[1478]: time="2025-01-13T21:30:32.617699697Z" level=info msg="CreateContainer within sandbox \"7413c0d64586e1740f826e73b4aa2e2148e831ee0af42c5969e14784ab6c8b4d\" for container &ContainerMetadata{Name:nfs-server-provisioner,Attempt:0,}" Jan 13 21:30:32.630187 containerd[1478]: time="2025-01-13T21:30:32.630141090Z" level=info msg="CreateContainer within sandbox \"7413c0d64586e1740f826e73b4aa2e2148e831ee0af42c5969e14784ab6c8b4d\" for &ContainerMetadata{Name:nfs-server-provisioner,Attempt:0,} returns container id \"95225c577e8e5eb40ada296b88d980a1f145c3f2b0c48f505d010f67fc86a287\"" Jan 13 21:30:32.630569 containerd[1478]: time="2025-01-13T21:30:32.630533821Z" level=info msg="StartContainer for \"95225c577e8e5eb40ada296b88d980a1f145c3f2b0c48f505d010f67fc86a287\"" Jan 13 21:30:32.696081 systemd[1]: run-containerd-runc-k8s.io-95225c577e8e5eb40ada296b88d980a1f145c3f2b0c48f505d010f67fc86a287-runc.GujUZl.mount: Deactivated successfully. Jan 13 21:30:32.712872 systemd[1]: Started cri-containerd-95225c577e8e5eb40ada296b88d980a1f145c3f2b0c48f505d010f67fc86a287.scope - libcontainer container 95225c577e8e5eb40ada296b88d980a1f145c3f2b0c48f505d010f67fc86a287. 
Jan 13 21:30:32.782794 kubelet[1786]: E0113 21:30:32.782732 1786 file.go:104] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Jan 13 21:30:32.807152 kubelet[1786]: E0113 21:30:32.807106 1786 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Jan 13 21:30:32.937405 containerd[1478]: time="2025-01-13T21:30:32.937267241Z" level=info msg="StartContainer for \"95225c577e8e5eb40ada296b88d980a1f145c3f2b0c48f505d010f67fc86a287\" returns successfully" Jan 13 21:30:33.457735 kubelet[1786]: I0113 21:30:33.457653 1786 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="default/nfs-server-provisioner-0" podStartSLOduration=2.168965362 podStartE2EDuration="6.457637023s" podCreationTimestamp="2025-01-13 21:30:27 +0000 UTC" firstStartedPulling="2025-01-13 21:30:28.327803393 +0000 UTC m=+36.207752469" lastFinishedPulling="2025-01-13 21:30:32.616475054 +0000 UTC m=+40.496424130" observedRunningTime="2025-01-13 21:30:33.45759363 +0000 UTC m=+41.337542706" watchObservedRunningTime="2025-01-13 21:30:33.457637023 +0000 UTC m=+41.337586089" Jan 13 21:30:33.808149 kubelet[1786]: E0113 21:30:33.808111 1786 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Jan 13 21:30:34.808880 kubelet[1786]: E0113 21:30:34.808824 1786 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Jan 13 21:30:35.809210 kubelet[1786]: E0113 21:30:35.809152 1786 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Jan 13 21:30:36.810212 kubelet[1786]: E0113 21:30:36.810161 1786 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Jan 13 21:30:37.810786 kubelet[1786]: E0113 21:30:37.810733 1786 file_linux.go:61] "Unable to read config 
path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Jan 13 21:30:38.811545 kubelet[1786]: E0113 21:30:38.811485 1786 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Jan 13 21:30:39.812229 kubelet[1786]: E0113 21:30:39.812159 1786 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Jan 13 21:30:40.813316 kubelet[1786]: E0113 21:30:40.813263 1786 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Jan 13 21:30:41.814452 kubelet[1786]: E0113 21:30:41.814387 1786 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Jan 13 21:30:42.815328 kubelet[1786]: E0113 21:30:42.815277 1786 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Jan 13 21:30:42.884918 kubelet[1786]: I0113 21:30:42.884861 1786 topology_manager.go:215] "Topology Admit Handler" podUID="6cf4094f-9e42-4c26-9a75-b2d7b8533ed1" podNamespace="default" podName="test-pod-1" Jan 13 21:30:42.890783 systemd[1]: Created slice kubepods-besteffort-pod6cf4094f_9e42_4c26_9a75_b2d7b8533ed1.slice - libcontainer container kubepods-besteffort-pod6cf4094f_9e42_4c26_9a75_b2d7b8533ed1.slice. 
Jan 13 21:30:42.898231 kubelet[1786]: I0113 21:30:42.898192 1786 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-qkz8t\" (UniqueName: \"kubernetes.io/projected/6cf4094f-9e42-4c26-9a75-b2d7b8533ed1-kube-api-access-qkz8t\") pod \"test-pod-1\" (UID: \"6cf4094f-9e42-4c26-9a75-b2d7b8533ed1\") " pod="default/test-pod-1" Jan 13 21:30:42.898231 kubelet[1786]: I0113 21:30:42.898225 1786 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"pvc-12a00158-a504-4086-89ec-4ea9b5b048ac\" (UniqueName: \"kubernetes.io/nfs/6cf4094f-9e42-4c26-9a75-b2d7b8533ed1-pvc-12a00158-a504-4086-89ec-4ea9b5b048ac\") pod \"test-pod-1\" (UID: \"6cf4094f-9e42-4c26-9a75-b2d7b8533ed1\") " pod="default/test-pod-1" Jan 13 21:30:43.028698 kernel: FS-Cache: Loaded Jan 13 21:30:43.096919 kernel: RPC: Registered named UNIX socket transport module. Jan 13 21:30:43.097032 kernel: RPC: Registered udp transport module. Jan 13 21:30:43.097055 kernel: RPC: Registered tcp transport module. Jan 13 21:30:43.097075 kernel: RPC: Registered tcp-with-tls transport module. Jan 13 21:30:43.097880 kernel: RPC: Registered tcp NFSv4.1 backchannel transport module. 
Jan 13 21:30:43.355780 kernel: NFS: Registering the id_resolver key type Jan 13 21:30:43.355914 kernel: Key type id_resolver registered Jan 13 21:30:43.355935 kernel: Key type id_legacy registered Jan 13 21:30:43.385727 nfsidmap[3205]: nss_getpwnam: name 'root@nfs-server-provisioner.default.svc.cluster.local' does not map into domain 'localdomain' Jan 13 21:30:43.390797 nfsidmap[3208]: nss_name_to_gid: name 'root@nfs-server-provisioner.default.svc.cluster.local' does not map into domain 'localdomain' Jan 13 21:30:43.493537 containerd[1478]: time="2025-01-13T21:30:43.493490459Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:test-pod-1,Uid:6cf4094f-9e42-4c26-9a75-b2d7b8533ed1,Namespace:default,Attempt:0,}" Jan 13 21:30:43.521638 systemd-networkd[1403]: lxc3e9c14260bc0: Link UP Jan 13 21:30:43.532702 kernel: eth0: renamed from tmp0e05b Jan 13 21:30:43.540431 systemd-networkd[1403]: lxc3e9c14260bc0: Gained carrier Jan 13 21:30:43.762765 containerd[1478]: time="2025-01-13T21:30:43.762591904Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Jan 13 21:30:43.762765 containerd[1478]: time="2025-01-13T21:30:43.762640695Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Jan 13 21:30:43.762765 containerd[1478]: time="2025-01-13T21:30:43.762650804Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Jan 13 21:30:43.762996 containerd[1478]: time="2025-01-13T21:30:43.762768255Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Jan 13 21:30:43.781805 systemd[1]: Started cri-containerd-0e05b4dc7b1d58f55cbcf07607288adf94b4948882d0016e76a2bf61616d75d5.scope - libcontainer container 0e05b4dc7b1d58f55cbcf07607288adf94b4948882d0016e76a2bf61616d75d5. 
Jan 13 21:30:43.792467 systemd-resolved[1342]: Failed to determine the local hostname and LLMNR/mDNS names, ignoring: No such device or address Jan 13 21:30:43.814303 containerd[1478]: time="2025-01-13T21:30:43.814253546Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:test-pod-1,Uid:6cf4094f-9e42-4c26-9a75-b2d7b8533ed1,Namespace:default,Attempt:0,} returns sandbox id \"0e05b4dc7b1d58f55cbcf07607288adf94b4948882d0016e76a2bf61616d75d5\"" Jan 13 21:30:43.815465 kubelet[1786]: E0113 21:30:43.815419 1786 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Jan 13 21:30:43.815828 containerd[1478]: time="2025-01-13T21:30:43.815624496Z" level=info msg="PullImage \"ghcr.io/flatcar/nginx:latest\"" Jan 13 21:30:44.276517 containerd[1478]: time="2025-01-13T21:30:44.276433748Z" level=info msg="ImageUpdate event name:\"ghcr.io/flatcar/nginx:latest\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jan 13 21:30:44.277261 containerd[1478]: time="2025-01-13T21:30:44.277213105Z" level=info msg="stop pulling image ghcr.io/flatcar/nginx:latest: active requests=0, bytes read=61" Jan 13 21:30:44.280303 containerd[1478]: time="2025-01-13T21:30:44.280245290Z" level=info msg="Pulled image \"ghcr.io/flatcar/nginx:latest\" with image id \"sha256:29ef6eaebfc53650f3a4609edbf9d35e866f56b2c5e01d32d93439031b300f0b\", repo tag \"ghcr.io/flatcar/nginx:latest\", repo digest \"ghcr.io/flatcar/nginx@sha256:eca1d1ff18c7af45f86b7e0b572090f563a676ddca3da2ecff678390366335ad\", size \"71035896\" in 464.589976ms" Jan 13 21:30:44.280303 containerd[1478]: time="2025-01-13T21:30:44.280289864Z" level=info msg="PullImage \"ghcr.io/flatcar/nginx:latest\" returns image reference \"sha256:29ef6eaebfc53650f3a4609edbf9d35e866f56b2c5e01d32d93439031b300f0b\"" Jan 13 21:30:44.282046 containerd[1478]: time="2025-01-13T21:30:44.282017244Z" level=info msg="CreateContainer within sandbox 
\"0e05b4dc7b1d58f55cbcf07607288adf94b4948882d0016e76a2bf61616d75d5\" for container &ContainerMetadata{Name:test,Attempt:0,}" Jan 13 21:30:44.298044 containerd[1478]: time="2025-01-13T21:30:44.297992464Z" level=info msg="CreateContainer within sandbox \"0e05b4dc7b1d58f55cbcf07607288adf94b4948882d0016e76a2bf61616d75d5\" for &ContainerMetadata{Name:test,Attempt:0,} returns container id \"40729137747314e6364a45c283bf977a7d79de0fbbf0a189ab3b76e5f90a3277\"" Jan 13 21:30:44.298728 containerd[1478]: time="2025-01-13T21:30:44.298467388Z" level=info msg="StartContainer for \"40729137747314e6364a45c283bf977a7d79de0fbbf0a189ab3b76e5f90a3277\"" Jan 13 21:30:44.330949 systemd[1]: Started cri-containerd-40729137747314e6364a45c283bf977a7d79de0fbbf0a189ab3b76e5f90a3277.scope - libcontainer container 40729137747314e6364a45c283bf977a7d79de0fbbf0a189ab3b76e5f90a3277. Jan 13 21:30:44.355535 containerd[1478]: time="2025-01-13T21:30:44.355480739Z" level=info msg="StartContainer for \"40729137747314e6364a45c283bf977a7d79de0fbbf0a189ab3b76e5f90a3277\" returns successfully" Jan 13 21:30:44.471563 kubelet[1786]: I0113 21:30:44.471494 1786 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="default/test-pod-1" podStartSLOduration=17.005931824 podStartE2EDuration="17.471477959s" podCreationTimestamp="2025-01-13 21:30:27 +0000 UTC" firstStartedPulling="2025-01-13 21:30:43.815395375 +0000 UTC m=+51.695344452" lastFinishedPulling="2025-01-13 21:30:44.280941511 +0000 UTC m=+52.160890587" observedRunningTime="2025-01-13 21:30:44.471114103 +0000 UTC m=+52.351063199" watchObservedRunningTime="2025-01-13 21:30:44.471477959 +0000 UTC m=+52.351427035" Jan 13 21:30:44.816573 kubelet[1786]: E0113 21:30:44.816503 1786 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Jan 13 21:30:45.355856 systemd-networkd[1403]: lxc3e9c14260bc0: Gained IPv6LL Jan 13 21:30:45.817554 kubelet[1786]: E0113 21:30:45.817487 1786 
file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Jan 13 21:30:46.818515 kubelet[1786]: E0113 21:30:46.818434 1786 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Jan 13 21:30:47.819462 kubelet[1786]: E0113 21:30:47.819397 1786 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Jan 13 21:30:48.820547 kubelet[1786]: E0113 21:30:48.820474 1786 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Jan 13 21:30:49.821624 kubelet[1786]: E0113 21:30:49.821543 1786 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Jan 13 21:30:49.907601 containerd[1478]: time="2025-01-13T21:30:49.907551802Z" level=error msg="failed to reload cni configuration after receiving fs change event(REMOVE \"/etc/cni/net.d/05-cilium.conf\")" error="cni config load failed: no network config found in /etc/cni/net.d: cni plugin not initialized: failed to load cni config" Jan 13 21:30:49.915174 containerd[1478]: time="2025-01-13T21:30:49.915128295Z" level=info msg="StopContainer for \"1f6ecf66064c5e7e6fa9cd38c804f619a8c33aa7d7624c42e8a7330d52a3f39a\" with timeout 2 (s)" Jan 13 21:30:49.915350 containerd[1478]: time="2025-01-13T21:30:49.915330245Z" level=info msg="Stop container \"1f6ecf66064c5e7e6fa9cd38c804f619a8c33aa7d7624c42e8a7330d52a3f39a\" with signal terminated" Jan 13 21:30:49.921935 systemd-networkd[1403]: lxc_health: Link DOWN Jan 13 21:30:49.921942 systemd-networkd[1403]: lxc_health: Lost carrier Jan 13 21:30:49.952566 systemd[1]: cri-containerd-1f6ecf66064c5e7e6fa9cd38c804f619a8c33aa7d7624c42e8a7330d52a3f39a.scope: Deactivated successfully. 
Jan 13 21:30:49.953211 systemd[1]: cri-containerd-1f6ecf66064c5e7e6fa9cd38c804f619a8c33aa7d7624c42e8a7330d52a3f39a.scope: Consumed 6.867s CPU time. Jan 13 21:30:49.973260 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-1f6ecf66064c5e7e6fa9cd38c804f619a8c33aa7d7624c42e8a7330d52a3f39a-rootfs.mount: Deactivated successfully. Jan 13 21:30:49.982099 containerd[1478]: time="2025-01-13T21:30:49.982035456Z" level=info msg="shim disconnected" id=1f6ecf66064c5e7e6fa9cd38c804f619a8c33aa7d7624c42e8a7330d52a3f39a namespace=k8s.io Jan 13 21:30:49.982099 containerd[1478]: time="2025-01-13T21:30:49.982096812Z" level=warning msg="cleaning up after shim disconnected" id=1f6ecf66064c5e7e6fa9cd38c804f619a8c33aa7d7624c42e8a7330d52a3f39a namespace=k8s.io Jan 13 21:30:49.982099 containerd[1478]: time="2025-01-13T21:30:49.982106400Z" level=info msg="cleaning up dead shim" namespace=k8s.io Jan 13 21:30:49.999152 containerd[1478]: time="2025-01-13T21:30:49.999091781Z" level=info msg="StopContainer for \"1f6ecf66064c5e7e6fa9cd38c804f619a8c33aa7d7624c42e8a7330d52a3f39a\" returns successfully" Jan 13 21:30:49.999825 containerd[1478]: time="2025-01-13T21:30:49.999789523Z" level=info msg="StopPodSandbox for \"2141167472ae1ce2140bcf818aef654c964a0bae632b96434e03de16f5391517\"" Jan 13 21:30:49.999883 containerd[1478]: time="2025-01-13T21:30:49.999848343Z" level=info msg="Container to stop \"a8771adb4b06331a092469ee173caeef6be1c0c792fdb57059714e3ae390a2b1\" must be in running or unknown state, current state \"CONTAINER_EXITED\"" Jan 13 21:30:49.999883 containerd[1478]: time="2025-01-13T21:30:49.999864032Z" level=info msg="Container to stop \"7d7e38eab70d420f3d0e8a757f52cf3b185858fcec72adc9ed9d26715f1e4d28\" must be in running or unknown state, current state \"CONTAINER_EXITED\"" Jan 13 21:30:49.999883 containerd[1478]: time="2025-01-13T21:30:49.999875885Z" level=info msg="Container to stop \"cf51cea0862d44e8ca9cd52a661dc26d79fb5877b1a6b8154438bdebebaeb05e\" must be in running or 
unknown state, current state \"CONTAINER_EXITED\"" Jan 13 21:30:50.000029 containerd[1478]: time="2025-01-13T21:30:49.999889731Z" level=info msg="Container to stop \"1f6ecf66064c5e7e6fa9cd38c804f619a8c33aa7d7624c42e8a7330d52a3f39a\" must be in running or unknown state, current state \"CONTAINER_EXITED\"" Jan 13 21:30:50.000029 containerd[1478]: time="2025-01-13T21:30:49.999900020Z" level=info msg="Container to stop \"83d87d9f49e58f991c413ab4f64a8d26a41e772d37f4f487d9b2742adafd4265\" must be in running or unknown state, current state \"CONTAINER_EXITED\"" Jan 13 21:30:50.002239 systemd[1]: run-containerd-io.containerd.grpc.v1.cri-sandboxes-2141167472ae1ce2140bcf818aef654c964a0bae632b96434e03de16f5391517-shm.mount: Deactivated successfully. Jan 13 21:30:50.006329 systemd[1]: cri-containerd-2141167472ae1ce2140bcf818aef654c964a0bae632b96434e03de16f5391517.scope: Deactivated successfully. Jan 13 21:30:50.026295 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-2141167472ae1ce2140bcf818aef654c964a0bae632b96434e03de16f5391517-rootfs.mount: Deactivated successfully. 
Jan 13 21:30:50.030517 containerd[1478]: time="2025-01-13T21:30:50.030438081Z" level=info msg="shim disconnected" id=2141167472ae1ce2140bcf818aef654c964a0bae632b96434e03de16f5391517 namespace=k8s.io Jan 13 21:30:50.030517 containerd[1478]: time="2025-01-13T21:30:50.030492043Z" level=warning msg="cleaning up after shim disconnected" id=2141167472ae1ce2140bcf818aef654c964a0bae632b96434e03de16f5391517 namespace=k8s.io Jan 13 21:30:50.030517 containerd[1478]: time="2025-01-13T21:30:50.030501540Z" level=info msg="cleaning up dead shim" namespace=k8s.io Jan 13 21:30:50.044193 containerd[1478]: time="2025-01-13T21:30:50.044082859Z" level=info msg="TearDown network for sandbox \"2141167472ae1ce2140bcf818aef654c964a0bae632b96434e03de16f5391517\" successfully" Jan 13 21:30:50.044193 containerd[1478]: time="2025-01-13T21:30:50.044121872Z" level=info msg="StopPodSandbox for \"2141167472ae1ce2140bcf818aef654c964a0bae632b96434e03de16f5391517\" returns successfully" Jan 13 21:30:50.233212 kubelet[1786]: I0113 21:30:50.233062 1786 reconciler_common.go:161] "operationExecutor.UnmountVolume started for volume \"cilium-cgroup\" (UniqueName: \"kubernetes.io/host-path/866f611f-1768-48eb-a581-aac909eeb174-cilium-cgroup\") pod \"866f611f-1768-48eb-a581-aac909eeb174\" (UID: \"866f611f-1768-48eb-a581-aac909eeb174\") " Jan 13 21:30:50.233212 kubelet[1786]: I0113 21:30:50.233115 1786 reconciler_common.go:161] "operationExecutor.UnmountVolume started for volume \"clustermesh-secrets\" (UniqueName: \"kubernetes.io/secret/866f611f-1768-48eb-a581-aac909eeb174-clustermesh-secrets\") pod \"866f611f-1768-48eb-a581-aac909eeb174\" (UID: \"866f611f-1768-48eb-a581-aac909eeb174\") " Jan 13 21:30:50.233212 kubelet[1786]: I0113 21:30:50.233136 1786 reconciler_common.go:161] "operationExecutor.UnmountVolume started for volume \"hostproc\" (UniqueName: \"kubernetes.io/host-path/866f611f-1768-48eb-a581-aac909eeb174-hostproc\") pod \"866f611f-1768-48eb-a581-aac909eeb174\" (UID: 
\"866f611f-1768-48eb-a581-aac909eeb174\") " Jan 13 21:30:50.233212 kubelet[1786]: I0113 21:30:50.233154 1786 reconciler_common.go:161] "operationExecutor.UnmountVolume started for volume \"etc-cni-netd\" (UniqueName: \"kubernetes.io/host-path/866f611f-1768-48eb-a581-aac909eeb174-etc-cni-netd\") pod \"866f611f-1768-48eb-a581-aac909eeb174\" (UID: \"866f611f-1768-48eb-a581-aac909eeb174\") " Jan 13 21:30:50.233212 kubelet[1786]: I0113 21:30:50.233175 1786 reconciler_common.go:161] "operationExecutor.UnmountVolume started for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/866f611f-1768-48eb-a581-aac909eeb174-lib-modules\") pod \"866f611f-1768-48eb-a581-aac909eeb174\" (UID: \"866f611f-1768-48eb-a581-aac909eeb174\") " Jan 13 21:30:50.233212 kubelet[1786]: I0113 21:30:50.233179 1786 operation_generator.go:887] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/866f611f-1768-48eb-a581-aac909eeb174-cilium-cgroup" (OuterVolumeSpecName: "cilium-cgroup") pod "866f611f-1768-48eb-a581-aac909eeb174" (UID: "866f611f-1768-48eb-a581-aac909eeb174"). InnerVolumeSpecName "cilium-cgroup". 
PluginName "kubernetes.io/host-path", VolumeGidValue "" Jan 13 21:30:50.233477 kubelet[1786]: I0113 21:30:50.233195 1786 reconciler_common.go:161] "operationExecutor.UnmountVolume started for volume \"kube-api-access-nbrdc\" (UniqueName: \"kubernetes.io/projected/866f611f-1768-48eb-a581-aac909eeb174-kube-api-access-nbrdc\") pod \"866f611f-1768-48eb-a581-aac909eeb174\" (UID: \"866f611f-1768-48eb-a581-aac909eeb174\") " Jan 13 21:30:50.233477 kubelet[1786]: I0113 21:30:50.233212 1786 reconciler_common.go:161] "operationExecutor.UnmountVolume started for volume \"bpf-maps\" (UniqueName: \"kubernetes.io/host-path/866f611f-1768-48eb-a581-aac909eeb174-bpf-maps\") pod \"866f611f-1768-48eb-a581-aac909eeb174\" (UID: \"866f611f-1768-48eb-a581-aac909eeb174\") " Jan 13 21:30:50.233477 kubelet[1786]: I0113 21:30:50.233217 1786 operation_generator.go:887] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/866f611f-1768-48eb-a581-aac909eeb174-etc-cni-netd" (OuterVolumeSpecName: "etc-cni-netd") pod "866f611f-1768-48eb-a581-aac909eeb174" (UID: "866f611f-1768-48eb-a581-aac909eeb174"). InnerVolumeSpecName "etc-cni-netd". 
PluginName "kubernetes.io/host-path", VolumeGidValue "" Jan 13 21:30:50.233477 kubelet[1786]: I0113 21:30:50.233233 1786 reconciler_common.go:161] "operationExecutor.UnmountVolume started for volume \"hubble-tls\" (UniqueName: \"kubernetes.io/projected/866f611f-1768-48eb-a581-aac909eeb174-hubble-tls\") pod \"866f611f-1768-48eb-a581-aac909eeb174\" (UID: \"866f611f-1768-48eb-a581-aac909eeb174\") " Jan 13 21:30:50.233477 kubelet[1786]: I0113 21:30:50.233254 1786 reconciler_common.go:161] "operationExecutor.UnmountVolume started for volume \"host-proc-sys-kernel\" (UniqueName: \"kubernetes.io/host-path/866f611f-1768-48eb-a581-aac909eeb174-host-proc-sys-kernel\") pod \"866f611f-1768-48eb-a581-aac909eeb174\" (UID: \"866f611f-1768-48eb-a581-aac909eeb174\") " Jan 13 21:30:50.233477 kubelet[1786]: I0113 21:30:50.233270 1786 reconciler_common.go:161] "operationExecutor.UnmountVolume started for volume \"host-proc-sys-net\" (UniqueName: \"kubernetes.io/host-path/866f611f-1768-48eb-a581-aac909eeb174-host-proc-sys-net\") pod \"866f611f-1768-48eb-a581-aac909eeb174\" (UID: \"866f611f-1768-48eb-a581-aac909eeb174\") " Jan 13 21:30:50.233636 kubelet[1786]: I0113 21:30:50.233288 1786 reconciler_common.go:161] "operationExecutor.UnmountVolume started for volume \"cilium-run\" (UniqueName: \"kubernetes.io/host-path/866f611f-1768-48eb-a581-aac909eeb174-cilium-run\") pod \"866f611f-1768-48eb-a581-aac909eeb174\" (UID: \"866f611f-1768-48eb-a581-aac909eeb174\") " Jan 13 21:30:50.233636 kubelet[1786]: I0113 21:30:50.233305 1786 reconciler_common.go:161] "operationExecutor.UnmountVolume started for volume \"cni-path\" (UniqueName: \"kubernetes.io/host-path/866f611f-1768-48eb-a581-aac909eeb174-cni-path\") pod \"866f611f-1768-48eb-a581-aac909eeb174\" (UID: \"866f611f-1768-48eb-a581-aac909eeb174\") " Jan 13 21:30:50.233636 kubelet[1786]: I0113 21:30:50.233327 1786 reconciler_common.go:161] "operationExecutor.UnmountVolume started for volume \"cilium-config-path\" (UniqueName: 
\"kubernetes.io/configmap/866f611f-1768-48eb-a581-aac909eeb174-cilium-config-path\") pod \"866f611f-1768-48eb-a581-aac909eeb174\" (UID: \"866f611f-1768-48eb-a581-aac909eeb174\") " Jan 13 21:30:50.233636 kubelet[1786]: I0113 21:30:50.233345 1786 reconciler_common.go:161] "operationExecutor.UnmountVolume started for volume \"xtables-lock\" (UniqueName: \"kubernetes.io/host-path/866f611f-1768-48eb-a581-aac909eeb174-xtables-lock\") pod \"866f611f-1768-48eb-a581-aac909eeb174\" (UID: \"866f611f-1768-48eb-a581-aac909eeb174\") " Jan 13 21:30:50.233636 kubelet[1786]: I0113 21:30:50.233378 1786 reconciler_common.go:289] "Volume detached for volume \"cilium-cgroup\" (UniqueName: \"kubernetes.io/host-path/866f611f-1768-48eb-a581-aac909eeb174-cilium-cgroup\") on node \"10.0.0.125\" DevicePath \"\"" Jan 13 21:30:50.236688 kubelet[1786]: I0113 21:30:50.233235 1786 operation_generator.go:887] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/866f611f-1768-48eb-a581-aac909eeb174-hostproc" (OuterVolumeSpecName: "hostproc") pod "866f611f-1768-48eb-a581-aac909eeb174" (UID: "866f611f-1768-48eb-a581-aac909eeb174"). InnerVolumeSpecName "hostproc". PluginName "kubernetes.io/host-path", VolumeGidValue "" Jan 13 21:30:50.236688 kubelet[1786]: I0113 21:30:50.233407 1786 operation_generator.go:887] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/866f611f-1768-48eb-a581-aac909eeb174-xtables-lock" (OuterVolumeSpecName: "xtables-lock") pod "866f611f-1768-48eb-a581-aac909eeb174" (UID: "866f611f-1768-48eb-a581-aac909eeb174"). InnerVolumeSpecName "xtables-lock". PluginName "kubernetes.io/host-path", VolumeGidValue "" Jan 13 21:30:50.236688 kubelet[1786]: I0113 21:30:50.233422 1786 operation_generator.go:887] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/866f611f-1768-48eb-a581-aac909eeb174-bpf-maps" (OuterVolumeSpecName: "bpf-maps") pod "866f611f-1768-48eb-a581-aac909eeb174" (UID: "866f611f-1768-48eb-a581-aac909eeb174"). 
InnerVolumeSpecName "bpf-maps". PluginName "kubernetes.io/host-path", VolumeGidValue "" Jan 13 21:30:50.236688 kubelet[1786]: I0113 21:30:50.233943 1786 operation_generator.go:887] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/866f611f-1768-48eb-a581-aac909eeb174-host-proc-sys-net" (OuterVolumeSpecName: "host-proc-sys-net") pod "866f611f-1768-48eb-a581-aac909eeb174" (UID: "866f611f-1768-48eb-a581-aac909eeb174"). InnerVolumeSpecName "host-proc-sys-net". PluginName "kubernetes.io/host-path", VolumeGidValue "" Jan 13 21:30:50.236688 kubelet[1786]: I0113 21:30:50.234029 1786 operation_generator.go:887] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/866f611f-1768-48eb-a581-aac909eeb174-host-proc-sys-kernel" (OuterVolumeSpecName: "host-proc-sys-kernel") pod "866f611f-1768-48eb-a581-aac909eeb174" (UID: "866f611f-1768-48eb-a581-aac909eeb174"). InnerVolumeSpecName "host-proc-sys-kernel". PluginName "kubernetes.io/host-path", VolumeGidValue "" Jan 13 21:30:50.237001 kubelet[1786]: I0113 21:30:50.234049 1786 operation_generator.go:887] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/866f611f-1768-48eb-a581-aac909eeb174-cni-path" (OuterVolumeSpecName: "cni-path") pod "866f611f-1768-48eb-a581-aac909eeb174" (UID: "866f611f-1768-48eb-a581-aac909eeb174"). InnerVolumeSpecName "cni-path". PluginName "kubernetes.io/host-path", VolumeGidValue "" Jan 13 21:30:50.237001 kubelet[1786]: I0113 21:30:50.234066 1786 operation_generator.go:887] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/866f611f-1768-48eb-a581-aac909eeb174-cilium-run" (OuterVolumeSpecName: "cilium-run") pod "866f611f-1768-48eb-a581-aac909eeb174" (UID: "866f611f-1768-48eb-a581-aac909eeb174"). InnerVolumeSpecName "cilium-run". 
PluginName "kubernetes.io/host-path", VolumeGidValue "" Jan 13 21:30:50.237001 kubelet[1786]: I0113 21:30:50.236183 1786 operation_generator.go:887] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/866f611f-1768-48eb-a581-aac909eeb174-kube-api-access-nbrdc" (OuterVolumeSpecName: "kube-api-access-nbrdc") pod "866f611f-1768-48eb-a581-aac909eeb174" (UID: "866f611f-1768-48eb-a581-aac909eeb174"). InnerVolumeSpecName "kube-api-access-nbrdc". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 13 21:30:50.237001 kubelet[1786]: I0113 21:30:50.236225 1786 operation_generator.go:887] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/866f611f-1768-48eb-a581-aac909eeb174-lib-modules" (OuterVolumeSpecName: "lib-modules") pod "866f611f-1768-48eb-a581-aac909eeb174" (UID: "866f611f-1768-48eb-a581-aac909eeb174"). InnerVolumeSpecName "lib-modules". PluginName "kubernetes.io/host-path", VolumeGidValue "" Jan 13 21:30:50.237001 kubelet[1786]: I0113 21:30:50.236538 1786 operation_generator.go:887] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/866f611f-1768-48eb-a581-aac909eeb174-hubble-tls" (OuterVolumeSpecName: "hubble-tls") pod "866f611f-1768-48eb-a581-aac909eeb174" (UID: "866f611f-1768-48eb-a581-aac909eeb174"). InnerVolumeSpecName "hubble-tls". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 13 21:30:50.237711 kubelet[1786]: I0113 21:30:50.237656 1786 operation_generator.go:887] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/866f611f-1768-48eb-a581-aac909eeb174-clustermesh-secrets" (OuterVolumeSpecName: "clustermesh-secrets") pod "866f611f-1768-48eb-a581-aac909eeb174" (UID: "866f611f-1768-48eb-a581-aac909eeb174"). InnerVolumeSpecName "clustermesh-secrets". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 13 21:30:50.238330 systemd[1]: var-lib-kubelet-pods-866f611f\x2d1768\x2d48eb\x2da581\x2daac909eeb174-volumes-kubernetes.io\x7eprojected-kube\x2dapi\x2daccess\x2dnbrdc.mount: Deactivated successfully. Jan 13 21:30:50.238455 systemd[1]: var-lib-kubelet-pods-866f611f\x2d1768\x2d48eb\x2da581\x2daac909eeb174-volumes-kubernetes.io\x7eprojected-hubble\x2dtls.mount: Deactivated successfully. Jan 13 21:30:50.238516 kubelet[1786]: I0113 21:30:50.238460 1786 operation_generator.go:887] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/866f611f-1768-48eb-a581-aac909eeb174-cilium-config-path" (OuterVolumeSpecName: "cilium-config-path") pod "866f611f-1768-48eb-a581-aac909eeb174" (UID: "866f611f-1768-48eb-a581-aac909eeb174"). InnerVolumeSpecName "cilium-config-path". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 13 21:30:50.334446 kubelet[1786]: I0113 21:30:50.334393 1786 reconciler_common.go:289] "Volume detached for volume \"clustermesh-secrets\" (UniqueName: \"kubernetes.io/secret/866f611f-1768-48eb-a581-aac909eeb174-clustermesh-secrets\") on node \"10.0.0.125\" DevicePath \"\"" Jan 13 21:30:50.334446 kubelet[1786]: I0113 21:30:50.334436 1786 reconciler_common.go:289] "Volume detached for volume \"hostproc\" (UniqueName: \"kubernetes.io/host-path/866f611f-1768-48eb-a581-aac909eeb174-hostproc\") on node \"10.0.0.125\" DevicePath \"\"" Jan 13 21:30:50.334446 kubelet[1786]: I0113 21:30:50.334445 1786 reconciler_common.go:289] "Volume detached for volume \"etc-cni-netd\" (UniqueName: \"kubernetes.io/host-path/866f611f-1768-48eb-a581-aac909eeb174-etc-cni-netd\") on node \"10.0.0.125\" DevicePath \"\"" Jan 13 21:30:50.334446 kubelet[1786]: I0113 21:30:50.334453 1786 reconciler_common.go:289] "Volume detached for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/866f611f-1768-48eb-a581-aac909eeb174-lib-modules\") on node \"10.0.0.125\" DevicePath \"\"" Jan 13 21:30:50.334446 
kubelet[1786]: I0113 21:30:50.334462 1786 reconciler_common.go:289] "Volume detached for volume \"bpf-maps\" (UniqueName: \"kubernetes.io/host-path/866f611f-1768-48eb-a581-aac909eeb174-bpf-maps\") on node \"10.0.0.125\" DevicePath \"\"" Jan 13 21:30:50.334759 kubelet[1786]: I0113 21:30:50.334469 1786 reconciler_common.go:289] "Volume detached for volume \"hubble-tls\" (UniqueName: \"kubernetes.io/projected/866f611f-1768-48eb-a581-aac909eeb174-hubble-tls\") on node \"10.0.0.125\" DevicePath \"\"" Jan 13 21:30:50.334759 kubelet[1786]: I0113 21:30:50.334478 1786 reconciler_common.go:289] "Volume detached for volume \"host-proc-sys-kernel\" (UniqueName: \"kubernetes.io/host-path/866f611f-1768-48eb-a581-aac909eeb174-host-proc-sys-kernel\") on node \"10.0.0.125\" DevicePath \"\"" Jan 13 21:30:50.334759 kubelet[1786]: I0113 21:30:50.334487 1786 reconciler_common.go:289] "Volume detached for volume \"host-proc-sys-net\" (UniqueName: \"kubernetes.io/host-path/866f611f-1768-48eb-a581-aac909eeb174-host-proc-sys-net\") on node \"10.0.0.125\" DevicePath \"\"" Jan 13 21:30:50.334759 kubelet[1786]: I0113 21:30:50.334494 1786 reconciler_common.go:289] "Volume detached for volume \"cilium-run\" (UniqueName: \"kubernetes.io/host-path/866f611f-1768-48eb-a581-aac909eeb174-cilium-run\") on node \"10.0.0.125\" DevicePath \"\"" Jan 13 21:30:50.334759 kubelet[1786]: I0113 21:30:50.334502 1786 reconciler_common.go:289] "Volume detached for volume \"cni-path\" (UniqueName: \"kubernetes.io/host-path/866f611f-1768-48eb-a581-aac909eeb174-cni-path\") on node \"10.0.0.125\" DevicePath \"\"" Jan 13 21:30:50.334759 kubelet[1786]: I0113 21:30:50.334509 1786 reconciler_common.go:289] "Volume detached for volume \"cilium-config-path\" (UniqueName: \"kubernetes.io/configmap/866f611f-1768-48eb-a581-aac909eeb174-cilium-config-path\") on node \"10.0.0.125\" DevicePath \"\"" Jan 13 21:30:50.334759 kubelet[1786]: I0113 21:30:50.334516 1786 reconciler_common.go:289] "Volume detached for volume 
\"kube-api-access-nbrdc\" (UniqueName: \"kubernetes.io/projected/866f611f-1768-48eb-a581-aac909eeb174-kube-api-access-nbrdc\") on node \"10.0.0.125\" DevicePath \"\"" Jan 13 21:30:50.334759 kubelet[1786]: I0113 21:30:50.334523 1786 reconciler_common.go:289] "Volume detached for volume \"xtables-lock\" (UniqueName: \"kubernetes.io/host-path/866f611f-1768-48eb-a581-aac909eeb174-xtables-lock\") on node \"10.0.0.125\" DevicePath \"\"" Jan 13 21:30:50.478324 kubelet[1786]: I0113 21:30:50.478284 1786 scope.go:117] "RemoveContainer" containerID="1f6ecf66064c5e7e6fa9cd38c804f619a8c33aa7d7624c42e8a7330d52a3f39a" Jan 13 21:30:50.479812 containerd[1478]: time="2025-01-13T21:30:50.479782632Z" level=info msg="RemoveContainer for \"1f6ecf66064c5e7e6fa9cd38c804f619a8c33aa7d7624c42e8a7330d52a3f39a\"" Jan 13 21:30:50.483482 containerd[1478]: time="2025-01-13T21:30:50.483389593Z" level=info msg="RemoveContainer for \"1f6ecf66064c5e7e6fa9cd38c804f619a8c33aa7d7624c42e8a7330d52a3f39a\" returns successfully" Jan 13 21:30:50.483600 kubelet[1786]: I0113 21:30:50.483568 1786 scope.go:117] "RemoveContainer" containerID="cf51cea0862d44e8ca9cd52a661dc26d79fb5877b1a6b8154438bdebebaeb05e" Jan 13 21:30:50.484853 systemd[1]: Removed slice kubepods-burstable-pod866f611f_1768_48eb_a581_aac909eeb174.slice - libcontainer container kubepods-burstable-pod866f611f_1768_48eb_a581_aac909eeb174.slice. Jan 13 21:30:50.485066 containerd[1478]: time="2025-01-13T21:30:50.484919608Z" level=info msg="RemoveContainer for \"cf51cea0862d44e8ca9cd52a661dc26d79fb5877b1a6b8154438bdebebaeb05e\"" Jan 13 21:30:50.485238 systemd[1]: kubepods-burstable-pod866f611f_1768_48eb_a581_aac909eeb174.slice: Consumed 6.962s CPU time. 
Jan 13 21:30:50.488314 containerd[1478]: time="2025-01-13T21:30:50.488283341Z" level=info msg="RemoveContainer for \"cf51cea0862d44e8ca9cd52a661dc26d79fb5877b1a6b8154438bdebebaeb05e\" returns successfully" Jan 13 21:30:50.488474 kubelet[1786]: I0113 21:30:50.488448 1786 scope.go:117] "RemoveContainer" containerID="7d7e38eab70d420f3d0e8a757f52cf3b185858fcec72adc9ed9d26715f1e4d28" Jan 13 21:30:50.489522 containerd[1478]: time="2025-01-13T21:30:50.489480922Z" level=info msg="RemoveContainer for \"7d7e38eab70d420f3d0e8a757f52cf3b185858fcec72adc9ed9d26715f1e4d28\"" Jan 13 21:30:50.492790 containerd[1478]: time="2025-01-13T21:30:50.492753213Z" level=info msg="RemoveContainer for \"7d7e38eab70d420f3d0e8a757f52cf3b185858fcec72adc9ed9d26715f1e4d28\" returns successfully" Jan 13 21:30:50.492963 kubelet[1786]: I0113 21:30:50.492929 1786 scope.go:117] "RemoveContainer" containerID="83d87d9f49e58f991c413ab4f64a8d26a41e772d37f4f487d9b2742adafd4265" Jan 13 21:30:50.493931 containerd[1478]: time="2025-01-13T21:30:50.493891462Z" level=info msg="RemoveContainer for \"83d87d9f49e58f991c413ab4f64a8d26a41e772d37f4f487d9b2742adafd4265\"" Jan 13 21:30:50.497382 containerd[1478]: time="2025-01-13T21:30:50.497339584Z" level=info msg="RemoveContainer for \"83d87d9f49e58f991c413ab4f64a8d26a41e772d37f4f487d9b2742adafd4265\" returns successfully" Jan 13 21:30:50.497537 kubelet[1786]: I0113 21:30:50.497511 1786 scope.go:117] "RemoveContainer" containerID="a8771adb4b06331a092469ee173caeef6be1c0c792fdb57059714e3ae390a2b1" Jan 13 21:30:50.498468 containerd[1478]: time="2025-01-13T21:30:50.498432076Z" level=info msg="RemoveContainer for \"a8771adb4b06331a092469ee173caeef6be1c0c792fdb57059714e3ae390a2b1\"" Jan 13 21:30:50.501566 containerd[1478]: time="2025-01-13T21:30:50.501531363Z" level=info msg="RemoveContainer for \"a8771adb4b06331a092469ee173caeef6be1c0c792fdb57059714e3ae390a2b1\" returns successfully" Jan 13 21:30:50.501772 kubelet[1786]: I0113 21:30:50.501738 1786 scope.go:117] 
"RemoveContainer" containerID="1f6ecf66064c5e7e6fa9cd38c804f619a8c33aa7d7624c42e8a7330d52a3f39a" Jan 13 21:30:50.501974 containerd[1478]: time="2025-01-13T21:30:50.501940582Z" level=error msg="ContainerStatus for \"1f6ecf66064c5e7e6fa9cd38c804f619a8c33aa7d7624c42e8a7330d52a3f39a\" failed" error="rpc error: code = NotFound desc = an error occurred when try to find container \"1f6ecf66064c5e7e6fa9cd38c804f619a8c33aa7d7624c42e8a7330d52a3f39a\": not found" Jan 13 21:30:50.502153 kubelet[1786]: E0113 21:30:50.502129 1786 remote_runtime.go:432] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = an error occurred when try to find container \"1f6ecf66064c5e7e6fa9cd38c804f619a8c33aa7d7624c42e8a7330d52a3f39a\": not found" containerID="1f6ecf66064c5e7e6fa9cd38c804f619a8c33aa7d7624c42e8a7330d52a3f39a" Jan 13 21:30:50.502248 kubelet[1786]: I0113 21:30:50.502158 1786 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"containerd","ID":"1f6ecf66064c5e7e6fa9cd38c804f619a8c33aa7d7624c42e8a7330d52a3f39a"} err="failed to get container status \"1f6ecf66064c5e7e6fa9cd38c804f619a8c33aa7d7624c42e8a7330d52a3f39a\": rpc error: code = NotFound desc = an error occurred when try to find container \"1f6ecf66064c5e7e6fa9cd38c804f619a8c33aa7d7624c42e8a7330d52a3f39a\": not found" Jan 13 21:30:50.502248 kubelet[1786]: I0113 21:30:50.502247 1786 scope.go:117] "RemoveContainer" containerID="cf51cea0862d44e8ca9cd52a661dc26d79fb5877b1a6b8154438bdebebaeb05e" Jan 13 21:30:50.502456 containerd[1478]: time="2025-01-13T21:30:50.502417278Z" level=error msg="ContainerStatus for \"cf51cea0862d44e8ca9cd52a661dc26d79fb5877b1a6b8154438bdebebaeb05e\" failed" error="rpc error: code = NotFound desc = an error occurred when try to find container \"cf51cea0862d44e8ca9cd52a661dc26d79fb5877b1a6b8154438bdebebaeb05e\": not found" Jan 13 21:30:50.502559 kubelet[1786]: E0113 21:30:50.502534 1786 remote_runtime.go:432] "ContainerStatus from runtime service 
failed" err="rpc error: code = NotFound desc = an error occurred when try to find container \"cf51cea0862d44e8ca9cd52a661dc26d79fb5877b1a6b8154438bdebebaeb05e\": not found" containerID="cf51cea0862d44e8ca9cd52a661dc26d79fb5877b1a6b8154438bdebebaeb05e" Jan 13 21:30:50.502600 kubelet[1786]: I0113 21:30:50.502559 1786 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"containerd","ID":"cf51cea0862d44e8ca9cd52a661dc26d79fb5877b1a6b8154438bdebebaeb05e"} err="failed to get container status \"cf51cea0862d44e8ca9cd52a661dc26d79fb5877b1a6b8154438bdebebaeb05e\": rpc error: code = NotFound desc = an error occurred when try to find container \"cf51cea0862d44e8ca9cd52a661dc26d79fb5877b1a6b8154438bdebebaeb05e\": not found" Jan 13 21:30:50.502600 kubelet[1786]: I0113 21:30:50.502573 1786 scope.go:117] "RemoveContainer" containerID="7d7e38eab70d420f3d0e8a757f52cf3b185858fcec72adc9ed9d26715f1e4d28" Jan 13 21:30:50.502764 containerd[1478]: time="2025-01-13T21:30:50.502739814Z" level=error msg="ContainerStatus for \"7d7e38eab70d420f3d0e8a757f52cf3b185858fcec72adc9ed9d26715f1e4d28\" failed" error="rpc error: code = NotFound desc = an error occurred when try to find container \"7d7e38eab70d420f3d0e8a757f52cf3b185858fcec72adc9ed9d26715f1e4d28\": not found" Jan 13 21:30:50.502881 kubelet[1786]: E0113 21:30:50.502856 1786 remote_runtime.go:432] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = an error occurred when try to find container \"7d7e38eab70d420f3d0e8a757f52cf3b185858fcec72adc9ed9d26715f1e4d28\": not found" containerID="7d7e38eab70d420f3d0e8a757f52cf3b185858fcec72adc9ed9d26715f1e4d28" Jan 13 21:30:50.502915 kubelet[1786]: I0113 21:30:50.502890 1786 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"containerd","ID":"7d7e38eab70d420f3d0e8a757f52cf3b185858fcec72adc9ed9d26715f1e4d28"} err="failed to get container status 
\"7d7e38eab70d420f3d0e8a757f52cf3b185858fcec72adc9ed9d26715f1e4d28\": rpc error: code = NotFound desc = an error occurred when try to find container \"7d7e38eab70d420f3d0e8a757f52cf3b185858fcec72adc9ed9d26715f1e4d28\": not found" Jan 13 21:30:50.502944 kubelet[1786]: I0113 21:30:50.502913 1786 scope.go:117] "RemoveContainer" containerID="83d87d9f49e58f991c413ab4f64a8d26a41e772d37f4f487d9b2742adafd4265" Jan 13 21:30:50.503139 containerd[1478]: time="2025-01-13T21:30:50.503089220Z" level=error msg="ContainerStatus for \"83d87d9f49e58f991c413ab4f64a8d26a41e772d37f4f487d9b2742adafd4265\" failed" error="rpc error: code = NotFound desc = an error occurred when try to find container \"83d87d9f49e58f991c413ab4f64a8d26a41e772d37f4f487d9b2742adafd4265\": not found" Jan 13 21:30:50.503240 kubelet[1786]: E0113 21:30:50.503216 1786 remote_runtime.go:432] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = an error occurred when try to find container \"83d87d9f49e58f991c413ab4f64a8d26a41e772d37f4f487d9b2742adafd4265\": not found" containerID="83d87d9f49e58f991c413ab4f64a8d26a41e772d37f4f487d9b2742adafd4265" Jan 13 21:30:50.503282 kubelet[1786]: I0113 21:30:50.503240 1786 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"containerd","ID":"83d87d9f49e58f991c413ab4f64a8d26a41e772d37f4f487d9b2742adafd4265"} err="failed to get container status \"83d87d9f49e58f991c413ab4f64a8d26a41e772d37f4f487d9b2742adafd4265\": rpc error: code = NotFound desc = an error occurred when try to find container \"83d87d9f49e58f991c413ab4f64a8d26a41e772d37f4f487d9b2742adafd4265\": not found" Jan 13 21:30:50.503282 kubelet[1786]: I0113 21:30:50.503253 1786 scope.go:117] "RemoveContainer" containerID="a8771adb4b06331a092469ee173caeef6be1c0c792fdb57059714e3ae390a2b1" Jan 13 21:30:50.503436 containerd[1478]: time="2025-01-13T21:30:50.503408321Z" level=error msg="ContainerStatus for 
\"a8771adb4b06331a092469ee173caeef6be1c0c792fdb57059714e3ae390a2b1\" failed" error="rpc error: code = NotFound desc = an error occurred when try to find container \"a8771adb4b06331a092469ee173caeef6be1c0c792fdb57059714e3ae390a2b1\": not found" Jan 13 21:30:50.503553 kubelet[1786]: E0113 21:30:50.503532 1786 remote_runtime.go:432] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = an error occurred when try to find container \"a8771adb4b06331a092469ee173caeef6be1c0c792fdb57059714e3ae390a2b1\": not found" containerID="a8771adb4b06331a092469ee173caeef6be1c0c792fdb57059714e3ae390a2b1" Jan 13 21:30:50.503699 kubelet[1786]: I0113 21:30:50.503554 1786 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"containerd","ID":"a8771adb4b06331a092469ee173caeef6be1c0c792fdb57059714e3ae390a2b1"} err="failed to get container status \"a8771adb4b06331a092469ee173caeef6be1c0c792fdb57059714e3ae390a2b1\": rpc error: code = NotFound desc = an error occurred when try to find container \"a8771adb4b06331a092469ee173caeef6be1c0c792fdb57059714e3ae390a2b1\": not found" Jan 13 21:30:50.822192 kubelet[1786]: E0113 21:30:50.822143 1786 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Jan 13 21:30:50.894503 systemd[1]: var-lib-kubelet-pods-866f611f\x2d1768\x2d48eb\x2da581\x2daac909eeb174-volumes-kubernetes.io\x7esecret-clustermesh\x2dsecrets.mount: Deactivated successfully. 
Jan 13 21:30:51.366042 kubelet[1786]: I0113 21:30:51.366000 1786 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="866f611f-1768-48eb-a581-aac909eeb174" path="/var/lib/kubelet/pods/866f611f-1768-48eb-a581-aac909eeb174/volumes" Jan 13 21:30:51.822697 kubelet[1786]: E0113 21:30:51.822631 1786 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Jan 13 21:30:52.393828 kubelet[1786]: I0113 21:30:52.393781 1786 topology_manager.go:215] "Topology Admit Handler" podUID="328fa8bc-275d-4797-ad3d-fda38608cea3" podNamespace="kube-system" podName="cilium-operator-599987898-7hcx5" Jan 13 21:30:52.393828 kubelet[1786]: E0113 21:30:52.393837 1786 cpu_manager.go:395] "RemoveStaleState: removing container" podUID="866f611f-1768-48eb-a581-aac909eeb174" containerName="apply-sysctl-overwrites" Jan 13 21:30:52.393828 kubelet[1786]: E0113 21:30:52.393846 1786 cpu_manager.go:395] "RemoveStaleState: removing container" podUID="866f611f-1768-48eb-a581-aac909eeb174" containerName="clean-cilium-state" Jan 13 21:30:52.394047 kubelet[1786]: E0113 21:30:52.393853 1786 cpu_manager.go:395] "RemoveStaleState: removing container" podUID="866f611f-1768-48eb-a581-aac909eeb174" containerName="mount-bpf-fs" Jan 13 21:30:52.394047 kubelet[1786]: E0113 21:30:52.393860 1786 cpu_manager.go:395] "RemoveStaleState: removing container" podUID="866f611f-1768-48eb-a581-aac909eeb174" containerName="cilium-agent" Jan 13 21:30:52.394047 kubelet[1786]: E0113 21:30:52.393866 1786 cpu_manager.go:395] "RemoveStaleState: removing container" podUID="866f611f-1768-48eb-a581-aac909eeb174" containerName="mount-cgroup" Jan 13 21:30:52.394047 kubelet[1786]: I0113 21:30:52.393883 1786 memory_manager.go:354] "RemoveStaleState removing state" podUID="866f611f-1768-48eb-a581-aac909eeb174" containerName="cilium-agent" Jan 13 21:30:52.398735 kubelet[1786]: I0113 21:30:52.398699 1786 topology_manager.go:215] "Topology Admit Handler" 
podUID="d0a42954-515c-4d5a-8ec5-560479566bbf" podNamespace="kube-system" podName="cilium-5p4wb" Jan 13 21:30:52.399570 systemd[1]: Created slice kubepods-besteffort-pod328fa8bc_275d_4797_ad3d_fda38608cea3.slice - libcontainer container kubepods-besteffort-pod328fa8bc_275d_4797_ad3d_fda38608cea3.slice. Jan 13 21:30:52.406740 kubelet[1786]: W0113 21:30:52.406655 1786 reflector.go:547] object-"kube-system"/"cilium-ipsec-keys": failed to list *v1.Secret: secrets "cilium-ipsec-keys" is forbidden: User "system:node:10.0.0.125" cannot list resource "secrets" in API group "" in the namespace "kube-system": no relationship found between node '10.0.0.125' and this object Jan 13 21:30:52.406740 kubelet[1786]: E0113 21:30:52.406729 1786 reflector.go:150] object-"kube-system"/"cilium-ipsec-keys": Failed to watch *v1.Secret: failed to list *v1.Secret: secrets "cilium-ipsec-keys" is forbidden: User "system:node:10.0.0.125" cannot list resource "secrets" in API group "" in the namespace "kube-system": no relationship found between node '10.0.0.125' and this object Jan 13 21:30:52.408014 systemd[1]: Created slice kubepods-burstable-podd0a42954_515c_4d5a_8ec5_560479566bbf.slice - libcontainer container kubepods-burstable-podd0a42954_515c_4d5a_8ec5_560479566bbf.slice. 
Jan 13 21:30:52.408375 kubelet[1786]: W0113 21:30:52.408351 1786 reflector.go:547] object-"kube-system"/"hubble-server-certs": failed to list *v1.Secret: secrets "hubble-server-certs" is forbidden: User "system:node:10.0.0.125" cannot list resource "secrets" in API group "" in the namespace "kube-system": no relationship found between node '10.0.0.125' and this object Jan 13 21:30:52.408429 kubelet[1786]: E0113 21:30:52.408382 1786 reflector.go:150] object-"kube-system"/"hubble-server-certs": Failed to watch *v1.Secret: failed to list *v1.Secret: secrets "hubble-server-certs" is forbidden: User "system:node:10.0.0.125" cannot list resource "secrets" in API group "" in the namespace "kube-system": no relationship found between node '10.0.0.125' and this object Jan 13 21:30:52.446948 kubelet[1786]: I0113 21:30:52.446902 1786 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"xtables-lock\" (UniqueName: \"kubernetes.io/host-path/d0a42954-515c-4d5a-8ec5-560479566bbf-xtables-lock\") pod \"cilium-5p4wb\" (UID: \"d0a42954-515c-4d5a-8ec5-560479566bbf\") " pod="kube-system/cilium-5p4wb" Jan 13 21:30:52.446948 kubelet[1786]: I0113 21:30:52.446945 1786 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"clustermesh-secrets\" (UniqueName: \"kubernetes.io/secret/d0a42954-515c-4d5a-8ec5-560479566bbf-clustermesh-secrets\") pod \"cilium-5p4wb\" (UID: \"d0a42954-515c-4d5a-8ec5-560479566bbf\") " pod="kube-system/cilium-5p4wb" Jan 13 21:30:52.446948 kubelet[1786]: I0113 21:30:52.446961 1786 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"hostproc\" (UniqueName: \"kubernetes.io/host-path/d0a42954-515c-4d5a-8ec5-560479566bbf-hostproc\") pod \"cilium-5p4wb\" (UID: \"d0a42954-515c-4d5a-8ec5-560479566bbf\") " pod="kube-system/cilium-5p4wb" Jan 13 21:30:52.447141 kubelet[1786]: I0113 21:30:52.446976 1786 reconciler_common.go:247] 
"operationExecutor.VerifyControllerAttachedVolume started for volume \"cilium-config-path\" (UniqueName: \"kubernetes.io/configmap/d0a42954-515c-4d5a-8ec5-560479566bbf-cilium-config-path\") pod \"cilium-5p4wb\" (UID: \"d0a42954-515c-4d5a-8ec5-560479566bbf\") " pod="kube-system/cilium-5p4wb" Jan 13 21:30:52.447141 kubelet[1786]: I0113 21:30:52.446991 1786 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cilium-ipsec-secrets\" (UniqueName: \"kubernetes.io/secret/d0a42954-515c-4d5a-8ec5-560479566bbf-cilium-ipsec-secrets\") pod \"cilium-5p4wb\" (UID: \"d0a42954-515c-4d5a-8ec5-560479566bbf\") " pod="kube-system/cilium-5p4wb" Jan 13 21:30:52.447141 kubelet[1786]: I0113 21:30:52.447006 1786 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cilium-run\" (UniqueName: \"kubernetes.io/host-path/d0a42954-515c-4d5a-8ec5-560479566bbf-cilium-run\") pod \"cilium-5p4wb\" (UID: \"d0a42954-515c-4d5a-8ec5-560479566bbf\") " pod="kube-system/cilium-5p4wb" Jan 13 21:30:52.447141 kubelet[1786]: I0113 21:30:52.447019 1786 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"bpf-maps\" (UniqueName: \"kubernetes.io/host-path/d0a42954-515c-4d5a-8ec5-560479566bbf-bpf-maps\") pod \"cilium-5p4wb\" (UID: \"d0a42954-515c-4d5a-8ec5-560479566bbf\") " pod="kube-system/cilium-5p4wb" Jan 13 21:30:52.447141 kubelet[1786]: I0113 21:30:52.447072 1786 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-proc-sys-kernel\" (UniqueName: \"kubernetes.io/host-path/d0a42954-515c-4d5a-8ec5-560479566bbf-host-proc-sys-kernel\") pod \"cilium-5p4wb\" (UID: \"d0a42954-515c-4d5a-8ec5-560479566bbf\") " pod="kube-system/cilium-5p4wb" Jan 13 21:30:52.447141 kubelet[1786]: I0113 21:30:52.447088 1786 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"hubble-tls\" (UniqueName: 
\"kubernetes.io/projected/d0a42954-515c-4d5a-8ec5-560479566bbf-hubble-tls\") pod \"cilium-5p4wb\" (UID: \"d0a42954-515c-4d5a-8ec5-560479566bbf\") " pod="kube-system/cilium-5p4wb" Jan 13 21:30:52.447276 kubelet[1786]: I0113 21:30:52.447137 1786 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cilium-config-path\" (UniqueName: \"kubernetes.io/configmap/328fa8bc-275d-4797-ad3d-fda38608cea3-cilium-config-path\") pod \"cilium-operator-599987898-7hcx5\" (UID: \"328fa8bc-275d-4797-ad3d-fda38608cea3\") " pod="kube-system/cilium-operator-599987898-7hcx5" Jan 13 21:30:52.447276 kubelet[1786]: I0113 21:30:52.447173 1786 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-72vvh\" (UniqueName: \"kubernetes.io/projected/328fa8bc-275d-4797-ad3d-fda38608cea3-kube-api-access-72vvh\") pod \"cilium-operator-599987898-7hcx5\" (UID: \"328fa8bc-275d-4797-ad3d-fda38608cea3\") " pod="kube-system/cilium-operator-599987898-7hcx5" Jan 13 21:30:52.447276 kubelet[1786]: I0113 21:30:52.447190 1786 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cilium-cgroup\" (UniqueName: \"kubernetes.io/host-path/d0a42954-515c-4d5a-8ec5-560479566bbf-cilium-cgroup\") pod \"cilium-5p4wb\" (UID: \"d0a42954-515c-4d5a-8ec5-560479566bbf\") " pod="kube-system/cilium-5p4wb" Jan 13 21:30:52.447276 kubelet[1786]: I0113 21:30:52.447205 1786 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/d0a42954-515c-4d5a-8ec5-560479566bbf-lib-modules\") pod \"cilium-5p4wb\" (UID: \"d0a42954-515c-4d5a-8ec5-560479566bbf\") " pod="kube-system/cilium-5p4wb" Jan 13 21:30:52.447276 kubelet[1786]: I0113 21:30:52.447236 1786 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-pvmh4\" (UniqueName: 
\"kubernetes.io/projected/d0a42954-515c-4d5a-8ec5-560479566bbf-kube-api-access-pvmh4\") pod \"cilium-5p4wb\" (UID: \"d0a42954-515c-4d5a-8ec5-560479566bbf\") " pod="kube-system/cilium-5p4wb" Jan 13 21:30:52.447391 kubelet[1786]: I0113 21:30:52.447273 1786 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cni-path\" (UniqueName: \"kubernetes.io/host-path/d0a42954-515c-4d5a-8ec5-560479566bbf-cni-path\") pod \"cilium-5p4wb\" (UID: \"d0a42954-515c-4d5a-8ec5-560479566bbf\") " pod="kube-system/cilium-5p4wb" Jan 13 21:30:52.447391 kubelet[1786]: I0113 21:30:52.447292 1786 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"etc-cni-netd\" (UniqueName: \"kubernetes.io/host-path/d0a42954-515c-4d5a-8ec5-560479566bbf-etc-cni-netd\") pod \"cilium-5p4wb\" (UID: \"d0a42954-515c-4d5a-8ec5-560479566bbf\") " pod="kube-system/cilium-5p4wb" Jan 13 21:30:52.447391 kubelet[1786]: I0113 21:30:52.447307 1786 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-proc-sys-net\" (UniqueName: \"kubernetes.io/host-path/d0a42954-515c-4d5a-8ec5-560479566bbf-host-proc-sys-net\") pod \"cilium-5p4wb\" (UID: \"d0a42954-515c-4d5a-8ec5-560479566bbf\") " pod="kube-system/cilium-5p4wb" Jan 13 21:30:52.702789 kubelet[1786]: E0113 21:30:52.702609 1786 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jan 13 21:30:52.703417 containerd[1478]: time="2025-01-13T21:30:52.703209780Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:cilium-operator-599987898-7hcx5,Uid:328fa8bc-275d-4797-ad3d-fda38608cea3,Namespace:kube-system,Attempt:0,}" Jan 13 21:30:52.723731 containerd[1478]: time="2025-01-13T21:30:52.723594265Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." 
runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Jan 13 21:30:52.723731 containerd[1478]: time="2025-01-13T21:30:52.723655810Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Jan 13 21:30:52.723731 containerd[1478]: time="2025-01-13T21:30:52.723708119Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Jan 13 21:30:52.723887 containerd[1478]: time="2025-01-13T21:30:52.723812425Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Jan 13 21:30:52.739795 systemd[1]: Started cri-containerd-7e306691c4964be6d0522f4305a1994aa6b795b99149f2da80e2a06fa149e128.scope - libcontainer container 7e306691c4964be6d0522f4305a1994aa6b795b99149f2da80e2a06fa149e128. Jan 13 21:30:52.777494 containerd[1478]: time="2025-01-13T21:30:52.777421663Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:cilium-operator-599987898-7hcx5,Uid:328fa8bc-275d-4797-ad3d-fda38608cea3,Namespace:kube-system,Attempt:0,} returns sandbox id \"7e306691c4964be6d0522f4305a1994aa6b795b99149f2da80e2a06fa149e128\"" Jan 13 21:30:52.778166 kubelet[1786]: E0113 21:30:52.778142 1786 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jan 13 21:30:52.779172 containerd[1478]: time="2025-01-13T21:30:52.779143509Z" level=info msg="PullImage \"quay.io/cilium/operator-generic:v1.12.5@sha256:b296eb7f0f7656a5cc19724f40a8a7121b7fd725278b7d61dc91fe0b7ffd7c0e\"" Jan 13 21:30:52.782389 kubelet[1786]: E0113 21:30:52.782342 1786 file.go:104] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Jan 13 21:30:52.802530 containerd[1478]: time="2025-01-13T21:30:52.802489449Z" level=info msg="StopPodSandbox for 
\"2141167472ae1ce2140bcf818aef654c964a0bae632b96434e03de16f5391517\"" Jan 13 21:30:52.802594 containerd[1478]: time="2025-01-13T21:30:52.802575340Z" level=info msg="TearDown network for sandbox \"2141167472ae1ce2140bcf818aef654c964a0bae632b96434e03de16f5391517\" successfully" Jan 13 21:30:52.802594 containerd[1478]: time="2025-01-13T21:30:52.802586892Z" level=info msg="StopPodSandbox for \"2141167472ae1ce2140bcf818aef654c964a0bae632b96434e03de16f5391517\" returns successfully" Jan 13 21:30:52.802969 containerd[1478]: time="2025-01-13T21:30:52.802926962Z" level=info msg="RemovePodSandbox for \"2141167472ae1ce2140bcf818aef654c964a0bae632b96434e03de16f5391517\"" Jan 13 21:30:52.802969 containerd[1478]: time="2025-01-13T21:30:52.802953141Z" level=info msg="Forcibly stopping sandbox \"2141167472ae1ce2140bcf818aef654c964a0bae632b96434e03de16f5391517\"" Jan 13 21:30:52.803033 containerd[1478]: time="2025-01-13T21:30:52.802998436Z" level=info msg="TearDown network for sandbox \"2141167472ae1ce2140bcf818aef654c964a0bae632b96434e03de16f5391517\" successfully" Jan 13 21:30:52.806192 containerd[1478]: time="2025-01-13T21:30:52.806161359Z" level=warning msg="Failed to get podSandbox status for container event for sandboxID \"2141167472ae1ce2140bcf818aef654c964a0bae632b96434e03de16f5391517\": an error occurred when try to find sandbox: not found. Sending the event with nil podSandboxStatus." 
Jan 13 21:30:52.806243 containerd[1478]: time="2025-01-13T21:30:52.806200663Z" level=info msg="RemovePodSandbox \"2141167472ae1ce2140bcf818aef654c964a0bae632b96434e03de16f5391517\" returns successfully" Jan 13 21:30:52.823151 kubelet[1786]: E0113 21:30:52.823121 1786 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Jan 13 21:30:53.380179 kubelet[1786]: E0113 21:30:53.380140 1786 kubelet.go:2900] "Container runtime network not ready" networkReady="NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" Jan 13 21:30:53.548367 kubelet[1786]: E0113 21:30:53.548318 1786 projected.go:269] Couldn't get secret kube-system/hubble-server-certs: failed to sync secret cache: timed out waiting for the condition Jan 13 21:30:53.548367 kubelet[1786]: E0113 21:30:53.548351 1786 projected.go:200] Error preparing data for projected volume hubble-tls for pod kube-system/cilium-5p4wb: failed to sync secret cache: timed out waiting for the condition Jan 13 21:30:53.548539 kubelet[1786]: E0113 21:30:53.548425 1786 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/projected/d0a42954-515c-4d5a-8ec5-560479566bbf-hubble-tls podName:d0a42954-515c-4d5a-8ec5-560479566bbf nodeName:}" failed. No retries permitted until 2025-01-13 21:30:54.04840464 +0000 UTC m=+61.928353716 (durationBeforeRetry 500ms). 
Error: MountVolume.SetUp failed for volume "hubble-tls" (UniqueName: "kubernetes.io/projected/d0a42954-515c-4d5a-8ec5-560479566bbf-hubble-tls") pod "cilium-5p4wb" (UID: "d0a42954-515c-4d5a-8ec5-560479566bbf") : failed to sync secret cache: timed out waiting for the condition Jan 13 21:30:53.823741 kubelet[1786]: E0113 21:30:53.823692 1786 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Jan 13 21:30:54.219900 kubelet[1786]: E0113 21:30:54.219755 1786 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jan 13 21:30:54.220317 containerd[1478]: time="2025-01-13T21:30:54.220270979Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:cilium-5p4wb,Uid:d0a42954-515c-4d5a-8ec5-560479566bbf,Namespace:kube-system,Attempt:0,}" Jan 13 21:30:54.242595 containerd[1478]: time="2025-01-13T21:30:54.242508987Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Jan 13 21:30:54.242595 containerd[1478]: time="2025-01-13T21:30:54.242578788Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Jan 13 21:30:54.242778 containerd[1478]: time="2025-01-13T21:30:54.242648108Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Jan 13 21:30:54.242866 containerd[1478]: time="2025-01-13T21:30:54.242832936Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Jan 13 21:30:54.263818 systemd[1]: Started cri-containerd-4c6c86fbf0357d31d83a51201541d36eafb647992284cd4d879e4727991c5523.scope - libcontainer container 4c6c86fbf0357d31d83a51201541d36eafb647992284cd4d879e4727991c5523. 
Jan 13 21:30:54.283221 containerd[1478]: time="2025-01-13T21:30:54.283172310Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:cilium-5p4wb,Uid:d0a42954-515c-4d5a-8ec5-560479566bbf,Namespace:kube-system,Attempt:0,} returns sandbox id \"4c6c86fbf0357d31d83a51201541d36eafb647992284cd4d879e4727991c5523\"" Jan 13 21:30:54.283947 kubelet[1786]: E0113 21:30:54.283749 1786 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jan 13 21:30:54.285652 containerd[1478]: time="2025-01-13T21:30:54.285605682Z" level=info msg="CreateContainer within sandbox \"4c6c86fbf0357d31d83a51201541d36eafb647992284cd4d879e4727991c5523\" for container &ContainerMetadata{Name:mount-cgroup,Attempt:0,}" Jan 13 21:30:54.298967 containerd[1478]: time="2025-01-13T21:30:54.298933420Z" level=info msg="CreateContainer within sandbox \"4c6c86fbf0357d31d83a51201541d36eafb647992284cd4d879e4727991c5523\" for &ContainerMetadata{Name:mount-cgroup,Attempt:0,} returns container id \"2c836a06f837a69a8dc67d804858abfac6b1754d8792fd38a7664a3a58d1638f\"" Jan 13 21:30:54.299374 containerd[1478]: time="2025-01-13T21:30:54.299346335Z" level=info msg="StartContainer for \"2c836a06f837a69a8dc67d804858abfac6b1754d8792fd38a7664a3a58d1638f\"" Jan 13 21:30:54.328840 systemd[1]: Started cri-containerd-2c836a06f837a69a8dc67d804858abfac6b1754d8792fd38a7664a3a58d1638f.scope - libcontainer container 2c836a06f837a69a8dc67d804858abfac6b1754d8792fd38a7664a3a58d1638f. Jan 13 21:30:54.353992 containerd[1478]: time="2025-01-13T21:30:54.353939959Z" level=info msg="StartContainer for \"2c836a06f837a69a8dc67d804858abfac6b1754d8792fd38a7664a3a58d1638f\" returns successfully" Jan 13 21:30:54.362972 systemd[1]: cri-containerd-2c836a06f837a69a8dc67d804858abfac6b1754d8792fd38a7664a3a58d1638f.scope: Deactivated successfully. 
Jan 13 21:30:54.398951 containerd[1478]: time="2025-01-13T21:30:54.398888402Z" level=info msg="shim disconnected" id=2c836a06f837a69a8dc67d804858abfac6b1754d8792fd38a7664a3a58d1638f namespace=k8s.io Jan 13 21:30:54.398951 containerd[1478]: time="2025-01-13T21:30:54.398945980Z" level=warning msg="cleaning up after shim disconnected" id=2c836a06f837a69a8dc67d804858abfac6b1754d8792fd38a7664a3a58d1638f namespace=k8s.io Jan 13 21:30:54.398951 containerd[1478]: time="2025-01-13T21:30:54.398956971Z" level=info msg="cleaning up dead shim" namespace=k8s.io Jan 13 21:30:54.488174 kubelet[1786]: E0113 21:30:54.488060 1786 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jan 13 21:30:54.490082 containerd[1478]: time="2025-01-13T21:30:54.490051597Z" level=info msg="CreateContainer within sandbox \"4c6c86fbf0357d31d83a51201541d36eafb647992284cd4d879e4727991c5523\" for container &ContainerMetadata{Name:apply-sysctl-overwrites,Attempt:0,}" Jan 13 21:30:54.504834 containerd[1478]: time="2025-01-13T21:30:54.504779226Z" level=info msg="CreateContainer within sandbox \"4c6c86fbf0357d31d83a51201541d36eafb647992284cd4d879e4727991c5523\" for &ContainerMetadata{Name:apply-sysctl-overwrites,Attempt:0,} returns container id \"f17013e6a7c22bb41128166434c6295988d7cb5072db64a84eded7c74e9e93d5\"" Jan 13 21:30:54.505372 containerd[1478]: time="2025-01-13T21:30:54.505321575Z" level=info msg="StartContainer for \"f17013e6a7c22bb41128166434c6295988d7cb5072db64a84eded7c74e9e93d5\"" Jan 13 21:30:54.531810 systemd[1]: Started cri-containerd-f17013e6a7c22bb41128166434c6295988d7cb5072db64a84eded7c74e9e93d5.scope - libcontainer container f17013e6a7c22bb41128166434c6295988d7cb5072db64a84eded7c74e9e93d5. 
Jan 13 21:30:54.558956 containerd[1478]: time="2025-01-13T21:30:54.558905882Z" level=info msg="StartContainer for \"f17013e6a7c22bb41128166434c6295988d7cb5072db64a84eded7c74e9e93d5\" returns successfully"
Jan 13 21:30:54.563800 systemd[1]: cri-containerd-f17013e6a7c22bb41128166434c6295988d7cb5072db64a84eded7c74e9e93d5.scope: Deactivated successfully.
Jan 13 21:30:54.581861 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-f17013e6a7c22bb41128166434c6295988d7cb5072db64a84eded7c74e9e93d5-rootfs.mount: Deactivated successfully.
Jan 13 21:30:54.586892 containerd[1478]: time="2025-01-13T21:30:54.586820114Z" level=info msg="shim disconnected" id=f17013e6a7c22bb41128166434c6295988d7cb5072db64a84eded7c74e9e93d5 namespace=k8s.io
Jan 13 21:30:54.587007 containerd[1478]: time="2025-01-13T21:30:54.586888793Z" level=warning msg="cleaning up after shim disconnected" id=f17013e6a7c22bb41128166434c6295988d7cb5072db64a84eded7c74e9e93d5 namespace=k8s.io
Jan 13 21:30:54.587007 containerd[1478]: time="2025-01-13T21:30:54.586902739Z" level=info msg="cleaning up dead shim" namespace=k8s.io
Jan 13 21:30:54.646747 kubelet[1786]: I0113 21:30:54.646686 1786 setters.go:580] "Node became not ready" node="10.0.0.125" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-01-13T21:30:54Z","lastTransitionTime":"2025-01-13T21:30:54Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized"}
Jan 13 21:30:54.824592 kubelet[1786]: E0113 21:30:54.824563 1786 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests"
Jan 13 21:30:55.491724 kubelet[1786]: E0113 21:30:55.491688 1786 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Jan 13 21:30:55.493455 containerd[1478]: time="2025-01-13T21:30:55.493416751Z" level=info msg="CreateContainer within sandbox \"4c6c86fbf0357d31d83a51201541d36eafb647992284cd4d879e4727991c5523\" for container &ContainerMetadata{Name:mount-bpf-fs,Attempt:0,}"
Jan 13 21:30:55.510046 containerd[1478]: time="2025-01-13T21:30:55.510009640Z" level=info msg="CreateContainer within sandbox \"4c6c86fbf0357d31d83a51201541d36eafb647992284cd4d879e4727991c5523\" for &ContainerMetadata{Name:mount-bpf-fs,Attempt:0,} returns container id \"6561b5f8f212f20915e32380dd92cc0b7e5dd5992b414737bd9006208e4f1f9d\""
Jan 13 21:30:55.510518 containerd[1478]: time="2025-01-13T21:30:55.510485564Z" level=info msg="StartContainer for \"6561b5f8f212f20915e32380dd92cc0b7e5dd5992b414737bd9006208e4f1f9d\""
Jan 13 21:30:55.541803 systemd[1]: Started cri-containerd-6561b5f8f212f20915e32380dd92cc0b7e5dd5992b414737bd9006208e4f1f9d.scope - libcontainer container 6561b5f8f212f20915e32380dd92cc0b7e5dd5992b414737bd9006208e4f1f9d.
Jan 13 21:30:55.569274 containerd[1478]: time="2025-01-13T21:30:55.569233479Z" level=info msg="StartContainer for \"6561b5f8f212f20915e32380dd92cc0b7e5dd5992b414737bd9006208e4f1f9d\" returns successfully"
Jan 13 21:30:55.569735 systemd[1]: cri-containerd-6561b5f8f212f20915e32380dd92cc0b7e5dd5992b414737bd9006208e4f1f9d.scope: Deactivated successfully.
Jan 13 21:30:55.590600 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-6561b5f8f212f20915e32380dd92cc0b7e5dd5992b414737bd9006208e4f1f9d-rootfs.mount: Deactivated successfully.
Jan 13 21:30:55.593706 containerd[1478]: time="2025-01-13T21:30:55.593629657Z" level=info msg="shim disconnected" id=6561b5f8f212f20915e32380dd92cc0b7e5dd5992b414737bd9006208e4f1f9d namespace=k8s.io
Jan 13 21:30:55.593706 containerd[1478]: time="2025-01-13T21:30:55.593700299Z" level=warning msg="cleaning up after shim disconnected" id=6561b5f8f212f20915e32380dd92cc0b7e5dd5992b414737bd9006208e4f1f9d namespace=k8s.io
Jan 13 21:30:55.593706 containerd[1478]: time="2025-01-13T21:30:55.593711540Z" level=info msg="cleaning up dead shim" namespace=k8s.io
Jan 13 21:30:55.824939 kubelet[1786]: E0113 21:30:55.824879 1786 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests"
Jan 13 21:30:56.494772 kubelet[1786]: E0113 21:30:56.494743 1786 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Jan 13 21:30:56.499242 containerd[1478]: time="2025-01-13T21:30:56.499200832Z" level=info msg="CreateContainer within sandbox \"4c6c86fbf0357d31d83a51201541d36eafb647992284cd4d879e4727991c5523\" for container &ContainerMetadata{Name:clean-cilium-state,Attempt:0,}"
Jan 13 21:30:56.511884 containerd[1478]: time="2025-01-13T21:30:56.511846484Z" level=info msg="CreateContainer within sandbox \"4c6c86fbf0357d31d83a51201541d36eafb647992284cd4d879e4727991c5523\" for &ContainerMetadata{Name:clean-cilium-state,Attempt:0,} returns container id \"b357c9d9f1357ded260b5c971318ea1c86996017ba66dedf3cb1f06bffe94626\""
Jan 13 21:30:56.512339 containerd[1478]: time="2025-01-13T21:30:56.512313582Z" level=info msg="StartContainer for \"b357c9d9f1357ded260b5c971318ea1c86996017ba66dedf3cb1f06bffe94626\""
Jan 13 21:30:56.541821 systemd[1]: Started cri-containerd-b357c9d9f1357ded260b5c971318ea1c86996017ba66dedf3cb1f06bffe94626.scope - libcontainer container b357c9d9f1357ded260b5c971318ea1c86996017ba66dedf3cb1f06bffe94626.
Jan 13 21:30:56.567328 systemd[1]: cri-containerd-b357c9d9f1357ded260b5c971318ea1c86996017ba66dedf3cb1f06bffe94626.scope: Deactivated successfully.
Jan 13 21:30:56.569355 containerd[1478]: time="2025-01-13T21:30:56.569320055Z" level=info msg="StartContainer for \"b357c9d9f1357ded260b5c971318ea1c86996017ba66dedf3cb1f06bffe94626\" returns successfully"
Jan 13 21:30:56.587783 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-b357c9d9f1357ded260b5c971318ea1c86996017ba66dedf3cb1f06bffe94626-rootfs.mount: Deactivated successfully.
Jan 13 21:30:56.592718 containerd[1478]: time="2025-01-13T21:30:56.592612084Z" level=info msg="shim disconnected" id=b357c9d9f1357ded260b5c971318ea1c86996017ba66dedf3cb1f06bffe94626 namespace=k8s.io
Jan 13 21:30:56.592718 containerd[1478]: time="2025-01-13T21:30:56.592698566Z" level=warning msg="cleaning up after shim disconnected" id=b357c9d9f1357ded260b5c971318ea1c86996017ba66dedf3cb1f06bffe94626 namespace=k8s.io
Jan 13 21:30:56.592718 containerd[1478]: time="2025-01-13T21:30:56.592710348Z" level=info msg="cleaning up dead shim" namespace=k8s.io
Jan 13 21:30:56.826050 kubelet[1786]: E0113 21:30:56.826003 1786 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests"
Jan 13 21:30:57.498941 kubelet[1786]: E0113 21:30:57.498900 1786 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Jan 13 21:30:57.500898 containerd[1478]: time="2025-01-13T21:30:57.500852113Z" level=info msg="CreateContainer within sandbox \"4c6c86fbf0357d31d83a51201541d36eafb647992284cd4d879e4727991c5523\" for container &ContainerMetadata{Name:cilium-agent,Attempt:0,}"
Jan 13 21:30:57.516865 containerd[1478]: time="2025-01-13T21:30:57.516795126Z" level=info msg="CreateContainer within sandbox \"4c6c86fbf0357d31d83a51201541d36eafb647992284cd4d879e4727991c5523\" for &ContainerMetadata{Name:cilium-agent,Attempt:0,} returns container id \"48871250c77c2bd384598036fcc563dd59e365e14d0719d02dbf0a9d3497db25\""
Jan 13 21:30:57.517520 containerd[1478]: time="2025-01-13T21:30:57.517481985Z" level=info msg="StartContainer for \"48871250c77c2bd384598036fcc563dd59e365e14d0719d02dbf0a9d3497db25\""
Jan 13 21:30:57.543799 systemd[1]: Started cri-containerd-48871250c77c2bd384598036fcc563dd59e365e14d0719d02dbf0a9d3497db25.scope - libcontainer container 48871250c77c2bd384598036fcc563dd59e365e14d0719d02dbf0a9d3497db25.
Jan 13 21:30:57.573323 containerd[1478]: time="2025-01-13T21:30:57.573273386Z" level=info msg="StartContainer for \"48871250c77c2bd384598036fcc563dd59e365e14d0719d02dbf0a9d3497db25\" returns successfully"
Jan 13 21:30:57.826314 kubelet[1786]: E0113 21:30:57.826280 1786 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests"
Jan 13 21:30:57.977718 kernel: alg: No test for seqiv(rfc4106(gcm(aes))) (seqiv(rfc4106-gcm-aesni))
Jan 13 21:30:58.502422 kubelet[1786]: E0113 21:30:58.502383 1786 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Jan 13 21:30:58.514870 kubelet[1786]: I0113 21:30:58.514823 1786 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/cilium-5p4wb" podStartSLOduration=6.514813423 podStartE2EDuration="6.514813423s" podCreationTimestamp="2025-01-13 21:30:52 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-01-13 21:30:58.51443905 +0000 UTC m=+66.394388146" watchObservedRunningTime="2025-01-13 21:30:58.514813423 +0000 UTC m=+66.394762499"
Jan 13 21:30:58.827375 kubelet[1786]: E0113 21:30:58.827337 1786 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests"
Jan 13 21:30:59.828142 kubelet[1786]: E0113 21:30:59.828082 1786 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests"
Jan 13 21:31:00.221563 kubelet[1786]: E0113 21:31:00.221428 1786 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Jan 13 21:31:00.828616 kubelet[1786]: E0113 21:31:00.828554 1786 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests"
Jan 13 21:31:01.004361 systemd-networkd[1403]: lxc_health: Link UP
Jan 13 21:31:01.016713 systemd-networkd[1403]: lxc_health: Gained carrier
Jan 13 21:31:01.828971 kubelet[1786]: E0113 21:31:01.828926 1786 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests"
Jan 13 21:31:02.221905 kubelet[1786]: E0113 21:31:02.221790 1786 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Jan 13 21:31:02.318740 systemd-networkd[1403]: lxc_health: Gained IPv6LL
Jan 13 21:31:02.510054 kubelet[1786]: E0113 21:31:02.509929 1786 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Jan 13 21:31:02.829588 kubelet[1786]: E0113 21:31:02.829531 1786 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests"
Jan 13 21:31:03.511726 kubelet[1786]: E0113 21:31:03.511689 1786 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Jan 13 21:31:03.829982 kubelet[1786]: E0113 21:31:03.829937 1786 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests"
Jan 13 21:31:04.830552 kubelet[1786]: E0113 21:31:04.830494 1786 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests"
Jan 13 21:31:05.277116 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount4206304436.mount: Deactivated successfully.
Jan 13 21:31:05.831168 kubelet[1786]: E0113 21:31:05.831112 1786 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests"
Jan 13 21:31:06.831553 kubelet[1786]: E0113 21:31:06.831492 1786 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests"
Jan 13 21:31:07.831976 kubelet[1786]: E0113 21:31:07.831862 1786 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests"
Jan 13 21:31:08.832462 kubelet[1786]: E0113 21:31:08.832409 1786 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests"
Jan 13 21:31:09.833576 kubelet[1786]: E0113 21:31:09.833514 1786 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests"
Jan 13 21:31:10.701164 containerd[1478]: time="2025-01-13T21:31:10.701106772Z" level=info msg="ImageCreate event name:\"quay.io/cilium/operator-generic@sha256:b296eb7f0f7656a5cc19724f40a8a7121b7fd725278b7d61dc91fe0b7ffd7c0e\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Jan 13 21:31:10.701783 containerd[1478]: time="2025-01-13T21:31:10.701751281Z" level=info msg="stop pulling image quay.io/cilium/operator-generic@sha256:b296eb7f0f7656a5cc19724f40a8a7121b7fd725278b7d61dc91fe0b7ffd7c0e: active requests=0, bytes read=18906649"
Jan 13 21:31:10.702923 containerd[1478]: time="2025-01-13T21:31:10.702870401Z" level=info msg="ImageCreate event name:\"sha256:ed355de9f59fe391dbe53f3c7c7a60baab3c3a9b7549aa54d10b87fff7dacf7c\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Jan 13 21:31:10.704110 containerd[1478]: time="2025-01-13T21:31:10.704073899Z" level=info msg="Pulled image \"quay.io/cilium/operator-generic:v1.12.5@sha256:b296eb7f0f7656a5cc19724f40a8a7121b7fd725278b7d61dc91fe0b7ffd7c0e\" with image id \"sha256:ed355de9f59fe391dbe53f3c7c7a60baab3c3a9b7549aa54d10b87fff7dacf7c\", repo tag \"\", repo digest \"quay.io/cilium/operator-generic@sha256:b296eb7f0f7656a5cc19724f40a8a7121b7fd725278b7d61dc91fe0b7ffd7c0e\", size \"18897442\" in 17.924891667s"
Jan 13 21:31:10.704168 containerd[1478]: time="2025-01-13T21:31:10.704108544Z" level=info msg="PullImage \"quay.io/cilium/operator-generic:v1.12.5@sha256:b296eb7f0f7656a5cc19724f40a8a7121b7fd725278b7d61dc91fe0b7ffd7c0e\" returns image reference \"sha256:ed355de9f59fe391dbe53f3c7c7a60baab3c3a9b7549aa54d10b87fff7dacf7c\""
Jan 13 21:31:10.706053 containerd[1478]: time="2025-01-13T21:31:10.706025963Z" level=info msg="CreateContainer within sandbox \"7e306691c4964be6d0522f4305a1994aa6b795b99149f2da80e2a06fa149e128\" for container &ContainerMetadata{Name:cilium-operator,Attempt:0,}"
Jan 13 21:31:10.717993 containerd[1478]: time="2025-01-13T21:31:10.717944709Z" level=info msg="CreateContainer within sandbox \"7e306691c4964be6d0522f4305a1994aa6b795b99149f2da80e2a06fa149e128\" for &ContainerMetadata{Name:cilium-operator,Attempt:0,} returns container id \"fcd99e3b9003b59aed3a5a927280c708eb2a48456b27508bd9443d5d772cb3bc\""
Jan 13 21:31:10.718434 containerd[1478]: time="2025-01-13T21:31:10.718404351Z" level=info msg="StartContainer for \"fcd99e3b9003b59aed3a5a927280c708eb2a48456b27508bd9443d5d772cb3bc\""
Jan 13 21:31:10.742770 systemd[1]: run-containerd-runc-k8s.io-fcd99e3b9003b59aed3a5a927280c708eb2a48456b27508bd9443d5d772cb3bc-runc.A4opck.mount: Deactivated successfully.
Jan 13 21:31:10.751796 systemd[1]: Started cri-containerd-fcd99e3b9003b59aed3a5a927280c708eb2a48456b27508bd9443d5d772cb3bc.scope - libcontainer container fcd99e3b9003b59aed3a5a927280c708eb2a48456b27508bd9443d5d772cb3bc.
Jan 13 21:31:10.776457 containerd[1478]: time="2025-01-13T21:31:10.776399561Z" level=info msg="StartContainer for \"fcd99e3b9003b59aed3a5a927280c708eb2a48456b27508bd9443d5d772cb3bc\" returns successfully"
Jan 13 21:31:10.834120 kubelet[1786]: E0113 21:31:10.833918 1786 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests"
Jan 13 21:31:11.363727 kubelet[1786]: E0113 21:31:11.363627 1786 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Jan 13 21:31:11.525693 kubelet[1786]: E0113 21:31:11.525638 1786 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Jan 13 21:31:11.534016 kubelet[1786]: I0113 21:31:11.533952 1786 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/cilium-operator-599987898-7hcx5" podStartSLOduration=1.608000321 podStartE2EDuration="19.533935828s" podCreationTimestamp="2025-01-13 21:30:52 +0000 UTC" firstStartedPulling="2025-01-13 21:30:52.778781068 +0000 UTC m=+60.658730134" lastFinishedPulling="2025-01-13 21:31:10.704716565 +0000 UTC m=+78.584665641" observedRunningTime="2025-01-13 21:31:11.53379813 +0000 UTC m=+79.413747226" watchObservedRunningTime="2025-01-13 21:31:11.533935828 +0000 UTC m=+79.413884904"
Jan 13 21:31:11.834345 kubelet[1786]: E0113 21:31:11.834292 1786 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests"
Jan 13 21:31:12.527251 kubelet[1786]: E0113 21:31:12.527220 1786 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Jan 13 21:31:12.783157 kubelet[1786]: E0113 21:31:12.783015 1786 file.go:104] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests"
Jan 13 21:31:12.835439 kubelet[1786]: E0113 21:31:12.835387 1786 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests"
Jan 13 21:31:13.836180 kubelet[1786]: E0113 21:31:13.836134 1786 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests"
Jan 13 21:31:14.836367 kubelet[1786]: E0113 21:31:14.836294 1786 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests"