Jan 17 12:23:48.877945 kernel: Linux version 6.6.71-flatcar (build@pony-truck.infra.kinvolk.io) (x86_64-cros-linux-gnu-gcc (Gentoo Hardened 13.3.1_p20240614 p17) 13.3.1 20240614, GNU ld (Gentoo 2.42 p3) 2.42.0) #1 SMP PREEMPT_DYNAMIC Fri Jan 17 10:39:07 -00 2025 Jan 17 12:23:48.877973 kernel: Command line: BOOT_IMAGE=/flatcar/vmlinuz-a mount.usr=/dev/mapper/usr verity.usr=PARTUUID=7130c94a-213a-4e5a-8e26-6cce9662f132 rootflags=rw mount.usrflags=ro consoleblank=0 root=LABEL=ROOT console=ttyS0,115200 flatcar.first_boot=detected verity.usrhash=bf1e0d81a0170850ab02d370c1a7c7a3f5983c980b3730f748240a3bda2dbb2e Jan 17 12:23:48.877988 kernel: BIOS-provided physical RAM map: Jan 17 12:23:48.877997 kernel: BIOS-e820: [mem 0x0000000000000000-0x000000000009ffff] usable Jan 17 12:23:48.878005 kernel: BIOS-e820: [mem 0x0000000000100000-0x00000000007fffff] usable Jan 17 12:23:48.878013 kernel: BIOS-e820: [mem 0x0000000000800000-0x0000000000807fff] ACPI NVS Jan 17 12:23:48.878023 kernel: BIOS-e820: [mem 0x0000000000808000-0x000000000080afff] usable Jan 17 12:23:48.878031 kernel: BIOS-e820: [mem 0x000000000080b000-0x000000000080bfff] ACPI NVS Jan 17 12:23:48.878040 kernel: BIOS-e820: [mem 0x000000000080c000-0x000000000080ffff] usable Jan 17 12:23:48.878048 kernel: BIOS-e820: [mem 0x0000000000810000-0x00000000008fffff] ACPI NVS Jan 17 12:23:48.878060 kernel: BIOS-e820: [mem 0x0000000000900000-0x000000009c8eefff] usable Jan 17 12:23:48.878069 kernel: BIOS-e820: [mem 0x000000009c8ef000-0x000000009c9eefff] reserved Jan 17 12:23:48.878077 kernel: BIOS-e820: [mem 0x000000009c9ef000-0x000000009caeefff] type 20 Jan 17 12:23:48.878086 kernel: BIOS-e820: [mem 0x000000009caef000-0x000000009cb6efff] reserved Jan 17 12:23:48.878097 kernel: BIOS-e820: [mem 0x000000009cb6f000-0x000000009cb7efff] ACPI data Jan 17 12:23:48.878106 kernel: BIOS-e820: [mem 0x000000009cb7f000-0x000000009cbfefff] ACPI NVS Jan 17 12:23:48.878118 kernel: BIOS-e820: [mem 0x000000009cbff000-0x000000009cf3ffff] usable Jan 17 12:23:48.878128 kernel: BIOS-e820: [mem 0x000000009cf40000-0x000000009cf5ffff] reserved Jan 17 12:23:48.878137 kernel: BIOS-e820: [mem 0x000000009cf60000-0x000000009cffffff] ACPI NVS Jan 17 12:23:48.878145 kernel: BIOS-e820: [mem 0x00000000b0000000-0x00000000bfffffff] reserved Jan 17 12:23:48.878154 kernel: NX (Execute Disable) protection: active Jan 17 12:23:48.878168 kernel: APIC: Static calls initialized Jan 17 12:23:48.878191 kernel: efi: EFI v2.7 by EDK II Jan 17 12:23:48.878212 kernel: efi: SMBIOS=0x9c9ab000 ACPI=0x9cb7e000 ACPI 2.0=0x9cb7e014 MEMATTR=0x9b674118 Jan 17 12:23:48.878236 kernel: SMBIOS 2.8 present. 
Jan 17 12:23:48.878254 kernel: DMI: QEMU Standard PC (Q35 + ICH9, 2009), BIOS 0.0.0 02/06/2015 Jan 17 12:23:48.878271 kernel: Hypervisor detected: KVM Jan 17 12:23:48.878301 kernel: kvm-clock: Using msrs 4b564d01 and 4b564d00 Jan 17 12:23:48.878310 kernel: kvm-clock: using sched offset of 3956153233 cycles Jan 17 12:23:48.878320 kernel: clocksource: kvm-clock: mask: 0xffffffffffffffff max_cycles: 0x1cd42e4dffb, max_idle_ns: 881590591483 ns Jan 17 12:23:48.878330 kernel: tsc: Detected 2794.748 MHz processor Jan 17 12:23:48.878340 kernel: e820: update [mem 0x00000000-0x00000fff] usable ==> reserved Jan 17 12:23:48.878350 kernel: e820: remove [mem 0x000a0000-0x000fffff] usable Jan 17 12:23:48.878359 kernel: last_pfn = 0x9cf40 max_arch_pfn = 0x400000000 Jan 17 12:23:48.878369 kernel: MTRR map: 4 entries (2 fixed + 2 variable; max 18), built from 8 variable MTRRs Jan 17 12:23:48.878378 kernel: x86/PAT: Configuration [0-7]: WB WC UC- UC WB WP UC- WT Jan 17 12:23:48.878391 kernel: Using GB pages for direct mapping Jan 17 12:23:48.878400 kernel: Secure boot disabled Jan 17 12:23:48.878410 kernel: ACPI: Early table checksum verification disabled Jan 17 12:23:48.878419 kernel: ACPI: RSDP 0x000000009CB7E014 000024 (v02 BOCHS ) Jan 17 12:23:48.878434 kernel: ACPI: XSDT 0x000000009CB7D0E8 000054 (v01 BOCHS BXPC 00000001 01000013) Jan 17 12:23:48.878444 kernel: ACPI: FACP 0x000000009CB79000 0000F4 (v03 BOCHS BXPC 00000001 BXPC 00000001) Jan 17 12:23:48.878454 kernel: ACPI: DSDT 0x000000009CB7A000 0021A8 (v01 BOCHS BXPC 00000001 BXPC 00000001) Jan 17 12:23:48.878467 kernel: ACPI: FACS 0x000000009CBDD000 000040 Jan 17 12:23:48.878477 kernel: ACPI: APIC 0x000000009CB78000 000090 (v01 BOCHS BXPC 00000001 BXPC 00000001) Jan 17 12:23:48.878487 kernel: ACPI: HPET 0x000000009CB77000 000038 (v01 BOCHS BXPC 00000001 BXPC 00000001) Jan 17 12:23:48.878497 kernel: ACPI: MCFG 0x000000009CB76000 00003C (v01 BOCHS BXPC 00000001 BXPC 00000001) Jan 17 12:23:48.878506 kernel: ACPI: WAET 0x000000009CB75000 000028 (v01 BOCHS BXPC 00000001 BXPC 00000001) Jan 17 12:23:48.878526 kernel: ACPI: BGRT 0x000000009CB74000 000038 (v01 INTEL EDK2 00000002 01000013) Jan 17 12:23:48.878537 kernel: ACPI: Reserving FACP table memory at [mem 0x9cb79000-0x9cb790f3] Jan 17 12:23:48.878551 kernel: ACPI: Reserving DSDT table memory at [mem 0x9cb7a000-0x9cb7c1a7] Jan 17 12:23:48.878561 kernel: ACPI: Reserving FACS table memory at [mem 0x9cbdd000-0x9cbdd03f] Jan 17 12:23:48.878570 kernel: ACPI: Reserving APIC table memory at [mem 0x9cb78000-0x9cb7808f] Jan 17 12:23:48.878580 kernel: ACPI: Reserving HPET table memory at [mem 0x9cb77000-0x9cb77037] Jan 17 12:23:48.878590 kernel: ACPI: Reserving MCFG table memory at [mem 0x9cb76000-0x9cb7603b] Jan 17 12:23:48.878601 kernel: ACPI: Reserving WAET table memory at [mem 0x9cb75000-0x9cb75027] Jan 17 12:23:48.878612 kernel: ACPI: Reserving BGRT table memory at [mem 0x9cb74000-0x9cb74037] Jan 17 12:23:48.878624 kernel: No NUMA configuration found Jan 17 12:23:48.878635 kernel: Faking a node at [mem 0x0000000000000000-0x000000009cf3ffff] Jan 17 12:23:48.878647 kernel: NODE_DATA(0) allocated [mem 0x9cea6000-0x9ceabfff] Jan 17 12:23:48.878657 kernel: Zone ranges: Jan 17 12:23:48.878667 kernel: DMA [mem 0x0000000000001000-0x0000000000ffffff] Jan 17 12:23:48.878677 kernel: DMA32 [mem 0x0000000001000000-0x000000009cf3ffff] Jan 17 12:23:48.878686 kernel: Normal empty Jan 17 12:23:48.878696 kernel: Movable zone start for each node Jan 17 12:23:48.878719 kernel: Early memory node ranges Jan 17 12:23:48.878729 
kernel: node 0: [mem 0x0000000000001000-0x000000000009ffff] Jan 17 12:23:48.878739 kernel: node 0: [mem 0x0000000000100000-0x00000000007fffff] Jan 17 12:23:48.878748 kernel: node 0: [mem 0x0000000000808000-0x000000000080afff] Jan 17 12:23:48.878761 kernel: node 0: [mem 0x000000000080c000-0x000000000080ffff] Jan 17 12:23:48.878770 kernel: node 0: [mem 0x0000000000900000-0x000000009c8eefff] Jan 17 12:23:48.878780 kernel: node 0: [mem 0x000000009cbff000-0x000000009cf3ffff] Jan 17 12:23:48.878789 kernel: Initmem setup node 0 [mem 0x0000000000001000-0x000000009cf3ffff] Jan 17 12:23:48.878799 kernel: On node 0, zone DMA: 1 pages in unavailable ranges Jan 17 12:23:48.878809 kernel: On node 0, zone DMA: 96 pages in unavailable ranges Jan 17 12:23:48.878819 kernel: On node 0, zone DMA: 8 pages in unavailable ranges Jan 17 12:23:48.878829 kernel: On node 0, zone DMA: 1 pages in unavailable ranges Jan 17 12:23:48.878839 kernel: On node 0, zone DMA: 240 pages in unavailable ranges Jan 17 12:23:48.878852 kernel: On node 0, zone DMA32: 784 pages in unavailable ranges Jan 17 12:23:48.878862 kernel: On node 0, zone DMA32: 12480 pages in unavailable ranges Jan 17 12:23:48.878871 kernel: ACPI: PM-Timer IO Port: 0x608 Jan 17 12:23:48.878881 kernel: ACPI: LAPIC_NMI (acpi_id[0xff] dfl dfl lint[0x1]) Jan 17 12:23:48.878891 kernel: IOAPIC[0]: apic_id 0, version 17, address 0xfec00000, GSI 0-23 Jan 17 12:23:48.878901 kernel: ACPI: INT_SRC_OVR (bus 0 bus_irq 0 global_irq 2 dfl dfl) Jan 17 12:23:48.878911 kernel: ACPI: INT_SRC_OVR (bus 0 bus_irq 5 global_irq 5 high level) Jan 17 12:23:48.878921 kernel: ACPI: INT_SRC_OVR (bus 0 bus_irq 9 global_irq 9 high level) Jan 17 12:23:48.878931 kernel: ACPI: INT_SRC_OVR (bus 0 bus_irq 10 global_irq 10 high level) Jan 17 12:23:48.878953 kernel: ACPI: INT_SRC_OVR (bus 0 bus_irq 11 global_irq 11 high level) Jan 17 12:23:48.878963 kernel: ACPI: Using ACPI (MADT) for SMP configuration information Jan 17 12:23:48.878973 kernel: ACPI: HPET id: 0x8086a201 base: 0xfed00000 Jan 17 12:23:48.878983 kernel: TSC deadline timer available Jan 17 12:23:48.878992 kernel: smpboot: Allowing 4 CPUs, 0 hotplug CPUs Jan 17 12:23:48.879002 kernel: kvm-guest: APIC: eoi() replaced with kvm_guest_apic_eoi_write() Jan 17 12:23:48.879012 kernel: kvm-guest: KVM setup pv remote TLB flush Jan 17 12:23:48.879022 kernel: kvm-guest: setup PV sched yield Jan 17 12:23:48.879032 kernel: [mem 0xc0000000-0xffffffff] available for PCI devices Jan 17 12:23:48.879042 kernel: Booting paravirtualized kernel on KVM Jan 17 12:23:48.879055 kernel: clocksource: refined-jiffies: mask: 0xffffffff max_cycles: 0xffffffff, max_idle_ns: 1910969940391419 ns Jan 17 12:23:48.879065 kernel: setup_percpu: NR_CPUS:512 nr_cpumask_bits:4 nr_cpu_ids:4 nr_node_ids:1 Jan 17 12:23:48.879075 kernel: percpu: Embedded 58 pages/cpu s197032 r8192 d32344 u524288 Jan 17 12:23:48.879085 kernel: pcpu-alloc: s197032 r8192 d32344 u524288 alloc=1*2097152 Jan 17 12:23:48.879094 kernel: pcpu-alloc: [0] 0 1 2 3 Jan 17 12:23:48.879103 kernel: kvm-guest: PV spinlocks enabled Jan 17 12:23:48.879113 kernel: PV qspinlock hash table entries: 256 (order: 0, 4096 bytes, linear) Jan 17 12:23:48.879124 kernel: Kernel command line: rootflags=rw mount.usrflags=ro BOOT_IMAGE=/flatcar/vmlinuz-a mount.usr=/dev/mapper/usr verity.usr=PARTUUID=7130c94a-213a-4e5a-8e26-6cce9662f132 rootflags=rw mount.usrflags=ro consoleblank=0 root=LABEL=ROOT console=ttyS0,115200 flatcar.first_boot=detected verity.usrhash=bf1e0d81a0170850ab02d370c1a7c7a3f5983c980b3730f748240a3bda2dbb2e Jan 17 
12:23:48.879138 kernel: Unknown kernel command line parameters "BOOT_IMAGE=/flatcar/vmlinuz-a", will be passed to user space. Jan 17 12:23:48.879148 kernel: random: crng init done Jan 17 12:23:48.879157 kernel: Dentry cache hash table entries: 524288 (order: 10, 4194304 bytes, linear) Jan 17 12:23:48.879167 kernel: Inode-cache hash table entries: 262144 (order: 9, 2097152 bytes, linear) Jan 17 12:23:48.879177 kernel: Fallback order for Node 0: 0 Jan 17 12:23:48.879187 kernel: Built 1 zonelists, mobility grouping on. Total pages: 629759 Jan 17 12:23:48.879197 kernel: Policy zone: DMA32 Jan 17 12:23:48.879207 kernel: mem auto-init: stack:off, heap alloc:off, heap free:off Jan 17 12:23:48.879217 kernel: Memory: 2395616K/2567000K available (12288K kernel code, 2299K rwdata, 22728K rodata, 42848K init, 2344K bss, 171124K reserved, 0K cma-reserved) Jan 17 12:23:48.879230 kernel: SLUB: HWalign=64, Order=0-3, MinObjects=0, CPUs=4, Nodes=1 Jan 17 12:23:48.879240 kernel: ftrace: allocating 37918 entries in 149 pages Jan 17 12:23:48.879250 kernel: ftrace: allocated 149 pages with 4 groups Jan 17 12:23:48.879260 kernel: Dynamic Preempt: voluntary Jan 17 12:23:48.879280 kernel: rcu: Preemptible hierarchical RCU implementation. Jan 17 12:23:48.879294 kernel: rcu: RCU event tracing is enabled. Jan 17 12:23:48.879316 kernel: rcu: RCU restricting CPUs from NR_CPUS=512 to nr_cpu_ids=4. Jan 17 12:23:48.879326 kernel: Trampoline variant of Tasks RCU enabled. Jan 17 12:23:48.879337 kernel: Rude variant of Tasks RCU enabled. Jan 17 12:23:48.879347 kernel: Tracing variant of Tasks RCU enabled. Jan 17 12:23:48.879358 kernel: rcu: RCU calculated value of scheduler-enlistment delay is 100 jiffies. Jan 17 12:23:48.879368 kernel: rcu: Adjusting geometry for rcu_fanout_leaf=16, nr_cpu_ids=4 Jan 17 12:23:48.879383 kernel: NR_IRQS: 33024, nr_irqs: 456, preallocated irqs: 16 Jan 17 12:23:48.879393 kernel: rcu: srcu_init: Setting srcu_struct sizes based on contention. Jan 17 12:23:48.879404 kernel: Console: colour dummy device 80x25 Jan 17 12:23:48.879414 kernel: printk: console [ttyS0] enabled Jan 17 12:23:48.879424 kernel: ACPI: Core revision 20230628 Jan 17 12:23:48.879438 kernel: clocksource: hpet: mask: 0xffffffff max_cycles: 0xffffffff, max_idle_ns: 19112604467 ns Jan 17 12:23:48.879449 kernel: APIC: Switch to symmetric I/O mode setup Jan 17 12:23:48.879459 kernel: x2apic enabled Jan 17 12:23:48.879470 kernel: APIC: Switched APIC routing to: physical x2apic Jan 17 12:23:48.879480 kernel: kvm-guest: APIC: send_IPI_mask() replaced with kvm_send_ipi_mask() Jan 17 12:23:48.879491 kernel: kvm-guest: APIC: send_IPI_mask_allbutself() replaced with kvm_send_ipi_mask_allbutself() Jan 17 12:23:48.879501 kernel: kvm-guest: setup PV IPIs Jan 17 12:23:48.879512 kernel: ..TIMER: vector=0x30 apic1=0 pin1=2 apic2=-1 pin2=-1 Jan 17 12:23:48.879522 kernel: tsc: Marking TSC unstable due to TSCs unsynchronized Jan 17 12:23:48.879536 kernel: Calibrating delay loop (skipped) preset value.. 
5589.49 BogoMIPS (lpj=2794748) Jan 17 12:23:48.879546 kernel: x86/cpu: User Mode Instruction Prevention (UMIP) activated Jan 17 12:23:48.879557 kernel: Last level iTLB entries: 4KB 512, 2MB 255, 4MB 127 Jan 17 12:23:48.879568 kernel: Last level dTLB entries: 4KB 512, 2MB 255, 4MB 127, 1GB 0 Jan 17 12:23:48.879578 kernel: Spectre V1 : Mitigation: usercopy/swapgs barriers and __user pointer sanitization Jan 17 12:23:48.879589 kernel: Spectre V2 : Mitigation: Retpolines Jan 17 12:23:48.879600 kernel: Spectre V2 : Spectre v2 / SpectreRSB mitigation: Filling RSB on context switch Jan 17 12:23:48.879610 kernel: Spectre V2 : Spectre v2 / SpectreRSB : Filling RSB on VMEXIT Jan 17 12:23:48.879622 kernel: Spectre V2 : Enabling Speculation Barrier for firmware calls Jan 17 12:23:48.879637 kernel: RETBleed: Mitigation: untrained return thunk Jan 17 12:23:48.879650 kernel: Spectre V2 : mitigation: Enabling conditional Indirect Branch Prediction Barrier Jan 17 12:23:48.879661 kernel: Speculative Store Bypass: Mitigation: Speculative Store Bypass disabled via prctl Jan 17 12:23:48.879672 kernel: Speculative Return Stack Overflow: IBPB-extending microcode not applied! Jan 17 12:23:48.879683 kernel: Speculative Return Stack Overflow: WARNING: See https://kernel.org/doc/html/latest/admin-guide/hw-vuln/srso.html for mitigation options. Jan 17 12:23:48.879694 kernel: Speculative Return Stack Overflow: Vulnerable: Safe RET, no microcode Jan 17 12:23:48.879762 kernel: x86/fpu: Supporting XSAVE feature 0x001: 'x87 floating point registers' Jan 17 12:23:48.879773 kernel: x86/fpu: Supporting XSAVE feature 0x002: 'SSE registers' Jan 17 12:23:48.879787 kernel: x86/fpu: Supporting XSAVE feature 0x004: 'AVX registers' Jan 17 12:23:48.879798 kernel: x86/fpu: xstate_offset[2]: 576, xstate_sizes[2]: 256 Jan 17 12:23:48.879808 kernel: x86/fpu: Enabled xstate features 0x7, context size is 832 bytes, using 'compacted' format. Jan 17 12:23:48.879819 kernel: Freeing SMP alternatives memory: 32K Jan 17 12:23:48.879829 kernel: pid_max: default: 32768 minimum: 301 Jan 17 12:23:48.879840 kernel: LSM: initializing lsm=lockdown,capability,landlock,selinux,integrity Jan 17 12:23:48.879850 kernel: landlock: Up and running. Jan 17 12:23:48.879860 kernel: SELinux: Initializing. Jan 17 12:23:48.879871 kernel: Mount-cache hash table entries: 8192 (order: 4, 65536 bytes, linear) Jan 17 12:23:48.879885 kernel: Mountpoint-cache hash table entries: 8192 (order: 4, 65536 bytes, linear) Jan 17 12:23:48.879895 kernel: smpboot: CPU0: AMD EPYC 7402P 24-Core Processor (family: 0x17, model: 0x31, stepping: 0x0) Jan 17 12:23:48.879906 kernel: RCU Tasks: Setting shift to 2 and lim to 1 rcu_task_cb_adjust=1 rcu_task_cpu_ids=4. Jan 17 12:23:48.879916 kernel: RCU Tasks Rude: Setting shift to 2 and lim to 1 rcu_task_cb_adjust=1 rcu_task_cpu_ids=4. Jan 17 12:23:48.879927 kernel: RCU Tasks Trace: Setting shift to 2 and lim to 1 rcu_task_cb_adjust=1 rcu_task_cpu_ids=4. Jan 17 12:23:48.879945 kernel: Performance Events: Fam17h+ core perfctr, AMD PMU driver. Jan 17 12:23:48.879955 kernel: ... version: 0 Jan 17 12:23:48.879966 kernel: ... bit width: 48 Jan 17 12:23:48.879976 kernel: ... generic registers: 6 Jan 17 12:23:48.879990 kernel: ... value mask: 0000ffffffffffff Jan 17 12:23:48.880000 kernel: ... max period: 00007fffffffffff Jan 17 12:23:48.880010 kernel: ... fixed-purpose events: 0 Jan 17 12:23:48.880021 kernel: ... 
event mask: 000000000000003f Jan 17 12:23:48.880031 kernel: signal: max sigframe size: 1776 Jan 17 12:23:48.880041 kernel: rcu: Hierarchical SRCU implementation. Jan 17 12:23:48.880052 kernel: rcu: Max phase no-delay instances is 400. Jan 17 12:23:48.880063 kernel: smp: Bringing up secondary CPUs ... Jan 17 12:23:48.880073 kernel: smpboot: x86: Booting SMP configuration: Jan 17 12:23:48.880087 kernel: .... node #0, CPUs: #1 #2 #3 Jan 17 12:23:48.880097 kernel: smp: Brought up 1 node, 4 CPUs Jan 17 12:23:48.880108 kernel: smpboot: Max logical packages: 1 Jan 17 12:23:48.880118 kernel: smpboot: Total of 4 processors activated (22357.98 BogoMIPS) Jan 17 12:23:48.880128 kernel: devtmpfs: initialized Jan 17 12:23:48.880139 kernel: x86/mm: Memory block size: 128MB Jan 17 12:23:48.880149 kernel: ACPI: PM: Registering ACPI NVS region [mem 0x00800000-0x00807fff] (32768 bytes) Jan 17 12:23:48.880160 kernel: ACPI: PM: Registering ACPI NVS region [mem 0x0080b000-0x0080bfff] (4096 bytes) Jan 17 12:23:48.880170 kernel: ACPI: PM: Registering ACPI NVS region [mem 0x00810000-0x008fffff] (983040 bytes) Jan 17 12:23:48.880183 kernel: ACPI: PM: Registering ACPI NVS region [mem 0x9cb7f000-0x9cbfefff] (524288 bytes) Jan 17 12:23:48.880193 kernel: ACPI: PM: Registering ACPI NVS region [mem 0x9cf60000-0x9cffffff] (655360 bytes) Jan 17 12:23:48.880204 kernel: clocksource: jiffies: mask: 0xffffffff max_cycles: 0xffffffff, max_idle_ns: 1911260446275000 ns Jan 17 12:23:48.880214 kernel: futex hash table entries: 1024 (order: 4, 65536 bytes, linear) Jan 17 12:23:48.880225 kernel: pinctrl core: initialized pinctrl subsystem Jan 17 12:23:48.880235 kernel: NET: Registered PF_NETLINK/PF_ROUTE protocol family Jan 17 12:23:48.880246 kernel: audit: initializing netlink subsys (disabled) Jan 17 12:23:48.880257 kernel: audit: type=2000 audit(1737116628.359:1): state=initialized audit_enabled=0 res=1 Jan 17 12:23:48.880267 kernel: thermal_sys: Registered thermal governor 'step_wise' Jan 17 12:23:48.880281 kernel: thermal_sys: Registered thermal governor 'user_space' Jan 17 12:23:48.880291 kernel: cpuidle: using governor menu Jan 17 12:23:48.880301 kernel: acpiphp: ACPI Hot Plug PCI Controller Driver version: 0.5 Jan 17 12:23:48.880312 kernel: dca service started, version 1.12.1 Jan 17 12:23:48.880322 kernel: PCI: MMCONFIG for domain 0000 [bus 00-ff] at [mem 0xb0000000-0xbfffffff] (base 0xb0000000) Jan 17 12:23:48.880333 kernel: PCI: MMCONFIG at [mem 0xb0000000-0xbfffffff] reserved as E820 entry Jan 17 12:23:48.880343 kernel: PCI: Using configuration type 1 for base access Jan 17 12:23:48.880354 kernel: kprobes: kprobe jump-optimization is enabled. All kprobes are optimized if possible. 
Jan 17 12:23:48.880365 kernel: HugeTLB: registered 1.00 GiB page size, pre-allocated 0 pages Jan 17 12:23:48.880378 kernel: HugeTLB: 16380 KiB vmemmap can be freed for a 1.00 GiB page Jan 17 12:23:48.880388 kernel: HugeTLB: registered 2.00 MiB page size, pre-allocated 0 pages Jan 17 12:23:48.880399 kernel: HugeTLB: 28 KiB vmemmap can be freed for a 2.00 MiB page Jan 17 12:23:48.880409 kernel: ACPI: Added _OSI(Module Device) Jan 17 12:23:48.880420 kernel: ACPI: Added _OSI(Processor Device) Jan 17 12:23:48.880430 kernel: ACPI: Added _OSI(3.0 _SCP Extensions) Jan 17 12:23:48.880440 kernel: ACPI: Added _OSI(Processor Aggregator Device) Jan 17 12:23:48.880451 kernel: ACPI: 1 ACPI AML tables successfully acquired and loaded Jan 17 12:23:48.880461 kernel: ACPI: _OSC evaluation for CPUs failed, trying _PDC Jan 17 12:23:48.880484 kernel: ACPI: Interpreter enabled Jan 17 12:23:48.880495 kernel: ACPI: PM: (supports S0 S3 S5) Jan 17 12:23:48.880506 kernel: ACPI: Using IOAPIC for interrupt routing Jan 17 12:23:48.880516 kernel: PCI: Using host bridge windows from ACPI; if necessary, use "pci=nocrs" and report a bug Jan 17 12:23:48.880527 kernel: PCI: Using E820 reservations for host bridge windows Jan 17 12:23:48.880537 kernel: ACPI: Enabled 2 GPEs in block 00 to 3F Jan 17 12:23:48.880548 kernel: ACPI: PCI Root Bridge [PCI0] (domain 0000 [bus 00-ff]) Jan 17 12:23:48.880828 kernel: acpi PNP0A08:00: _OSC: OS supports [ExtendedConfig ASPM ClockPM Segments MSI HPX-Type3] Jan 17 12:23:48.881019 kernel: acpi PNP0A08:00: _OSC: platform does not support [PCIeHotplug LTR] Jan 17 12:23:48.881170 kernel: acpi PNP0A08:00: _OSC: OS now controls [PME AER PCIeCapability] Jan 17 12:23:48.881185 kernel: PCI host bridge to bus 0000:00 Jan 17 12:23:48.881337 kernel: pci_bus 0000:00: root bus resource [io 0x0000-0x0cf7 window] Jan 17 12:23:48.881475 kernel: pci_bus 0000:00: root bus resource [io 0x0d00-0xffff window] Jan 17 12:23:48.881614 kernel: pci_bus 0000:00: root bus resource [mem 0x000a0000-0x000bffff window] Jan 17 12:23:48.881807 kernel: pci_bus 0000:00: root bus resource [mem 0x9d000000-0xafffffff window] Jan 17 12:23:48.881961 kernel: pci_bus 0000:00: root bus resource [mem 0xc0000000-0xfebfffff window] Jan 17 12:23:48.882099 kernel: pci_bus 0000:00: root bus resource [mem 0x800000000-0xfffffffff window] Jan 17 12:23:48.882238 kernel: pci_bus 0000:00: root bus resource [bus 00-ff] Jan 17 12:23:48.882415 kernel: pci 0000:00:00.0: [8086:29c0] type 00 class 0x060000 Jan 17 12:23:48.882581 kernel: pci 0000:00:01.0: [1234:1111] type 00 class 0x030000 Jan 17 12:23:48.882779 kernel: pci 0000:00:01.0: reg 0x10: [mem 0xc0000000-0xc0ffffff pref] Jan 17 12:23:48.882951 kernel: pci 0000:00:01.0: reg 0x18: [mem 0xc1044000-0xc1044fff] Jan 17 12:23:48.883108 kernel: pci 0000:00:01.0: reg 0x30: [mem 0xffff0000-0xffffffff pref] Jan 17 12:23:48.883263 kernel: pci 0000:00:01.0: BAR 0: assigned to efifb Jan 17 12:23:48.883418 kernel: pci 0000:00:01.0: Video device with shadowed ROM at [mem 0x000c0000-0x000dffff] Jan 17 12:23:48.883588 kernel: pci 0000:00:02.0: [1af4:1005] type 00 class 0x00ff00 Jan 17 12:23:48.883764 kernel: pci 0000:00:02.0: reg 0x10: [io 0x6100-0x611f] Jan 17 12:23:48.883901 kernel: pci 0000:00:02.0: reg 0x14: [mem 0xc1043000-0xc1043fff] Jan 17 12:23:48.884038 kernel: pci 0000:00:02.0: reg 0x20: [mem 0x800000000-0x800003fff 64bit pref] Jan 17 12:23:48.884173 kernel: pci 0000:00:03.0: [1af4:1001] type 00 class 0x010000 Jan 17 12:23:48.884293 kernel: pci 0000:00:03.0: reg 0x10: [io 0x6000-0x607f] Jan 17 
12:23:48.884411 kernel: pci 0000:00:03.0: reg 0x14: [mem 0xc1042000-0xc1042fff] Jan 17 12:23:48.884528 kernel: pci 0000:00:03.0: reg 0x20: [mem 0x800004000-0x800007fff 64bit pref] Jan 17 12:23:48.884655 kernel: pci 0000:00:04.0: [1af4:1000] type 00 class 0x020000 Jan 17 12:23:48.884806 kernel: pci 0000:00:04.0: reg 0x10: [io 0x60e0-0x60ff] Jan 17 12:23:48.884928 kernel: pci 0000:00:04.0: reg 0x14: [mem 0xc1041000-0xc1041fff] Jan 17 12:23:48.885057 kernel: pci 0000:00:04.0: reg 0x20: [mem 0x800008000-0x80000bfff 64bit pref] Jan 17 12:23:48.885175 kernel: pci 0000:00:04.0: reg 0x30: [mem 0xfffc0000-0xffffffff pref] Jan 17 12:23:48.885302 kernel: pci 0000:00:1f.0: [8086:2918] type 00 class 0x060100 Jan 17 12:23:48.885421 kernel: pci 0000:00:1f.0: quirk: [io 0x0600-0x067f] claimed by ICH6 ACPI/GPIO/TCO Jan 17 12:23:48.885547 kernel: pci 0000:00:1f.2: [8086:2922] type 00 class 0x010601 Jan 17 12:23:48.885675 kernel: pci 0000:00:1f.2: reg 0x20: [io 0x60c0-0x60df] Jan 17 12:23:48.885843 kernel: pci 0000:00:1f.2: reg 0x24: [mem 0xc1040000-0xc1040fff] Jan 17 12:23:48.885980 kernel: pci 0000:00:1f.3: [8086:2930] type 00 class 0x0c0500 Jan 17 12:23:48.886098 kernel: pci 0000:00:1f.3: reg 0x20: [io 0x6080-0x60bf] Jan 17 12:23:48.886108 kernel: ACPI: PCI: Interrupt link LNKA configured for IRQ 10 Jan 17 12:23:48.886116 kernel: ACPI: PCI: Interrupt link LNKB configured for IRQ 10 Jan 17 12:23:48.886123 kernel: ACPI: PCI: Interrupt link LNKC configured for IRQ 11 Jan 17 12:23:48.886131 kernel: ACPI: PCI: Interrupt link LNKD configured for IRQ 11 Jan 17 12:23:48.886142 kernel: ACPI: PCI: Interrupt link LNKE configured for IRQ 10 Jan 17 12:23:48.886150 kernel: ACPI: PCI: Interrupt link LNKF configured for IRQ 10 Jan 17 12:23:48.886157 kernel: ACPI: PCI: Interrupt link LNKG configured for IRQ 11 Jan 17 12:23:48.886165 kernel: ACPI: PCI: Interrupt link LNKH configured for IRQ 11 Jan 17 12:23:48.886172 kernel: ACPI: PCI: Interrupt link GSIA configured for IRQ 16 Jan 17 12:23:48.886180 kernel: ACPI: PCI: Interrupt link GSIB configured for IRQ 17 Jan 17 12:23:48.886187 kernel: ACPI: PCI: Interrupt link GSIC configured for IRQ 18 Jan 17 12:23:48.886195 kernel: ACPI: PCI: Interrupt link GSID configured for IRQ 19 Jan 17 12:23:48.886202 kernel: ACPI: PCI: Interrupt link GSIE configured for IRQ 20 Jan 17 12:23:48.886212 kernel: ACPI: PCI: Interrupt link GSIF configured for IRQ 21 Jan 17 12:23:48.886220 kernel: ACPI: PCI: Interrupt link GSIG configured for IRQ 22 Jan 17 12:23:48.886227 kernel: ACPI: PCI: Interrupt link GSIH configured for IRQ 23 Jan 17 12:23:48.886235 kernel: iommu: Default domain type: Translated Jan 17 12:23:48.886242 kernel: iommu: DMA domain TLB invalidation policy: lazy mode Jan 17 12:23:48.886250 kernel: efivars: Registered efivars operations Jan 17 12:23:48.886257 kernel: PCI: Using ACPI for IRQ routing Jan 17 12:23:48.886265 kernel: PCI: pci_cache_line_size set to 64 bytes Jan 17 12:23:48.886272 kernel: e820: reserve RAM buffer [mem 0x0080b000-0x008fffff] Jan 17 12:23:48.886282 kernel: e820: reserve RAM buffer [mem 0x00810000-0x008fffff] Jan 17 12:23:48.886289 kernel: e820: reserve RAM buffer [mem 0x9c8ef000-0x9fffffff] Jan 17 12:23:48.886297 kernel: e820: reserve RAM buffer [mem 0x9cf40000-0x9fffffff] Jan 17 12:23:48.886413 kernel: pci 0000:00:01.0: vgaarb: setting as boot VGA device Jan 17 12:23:48.886528 kernel: pci 0000:00:01.0: vgaarb: bridge control possible Jan 17 12:23:48.886644 kernel: pci 0000:00:01.0: vgaarb: VGA device added: decodes=io+mem,owns=io+mem,locks=none Jan 17 
12:23:48.886654 kernel: vgaarb: loaded Jan 17 12:23:48.886662 kernel: hpet0: at MMIO 0xfed00000, IRQs 2, 8, 0 Jan 17 12:23:48.886669 kernel: hpet0: 3 comparators, 64-bit 100.000000 MHz counter Jan 17 12:23:48.886680 kernel: clocksource: Switched to clocksource kvm-clock Jan 17 12:23:48.886688 kernel: VFS: Disk quotas dquot_6.6.0 Jan 17 12:23:48.886696 kernel: VFS: Dquot-cache hash table entries: 512 (order 0, 4096 bytes) Jan 17 12:23:48.886719 kernel: pnp: PnP ACPI init Jan 17 12:23:48.886853 kernel: system 00:05: [mem 0xb0000000-0xbfffffff window] has been reserved Jan 17 12:23:48.886865 kernel: pnp: PnP ACPI: found 6 devices Jan 17 12:23:48.886873 kernel: clocksource: acpi_pm: mask: 0xffffff max_cycles: 0xffffff, max_idle_ns: 2085701024 ns Jan 17 12:23:48.886881 kernel: NET: Registered PF_INET protocol family Jan 17 12:23:48.886892 kernel: IP idents hash table entries: 65536 (order: 7, 524288 bytes, linear) Jan 17 12:23:48.886900 kernel: tcp_listen_portaddr_hash hash table entries: 2048 (order: 3, 32768 bytes, linear) Jan 17 12:23:48.886908 kernel: Table-perturb hash table entries: 65536 (order: 6, 262144 bytes, linear) Jan 17 12:23:48.886915 kernel: TCP established hash table entries: 32768 (order: 6, 262144 bytes, linear) Jan 17 12:23:48.886923 kernel: TCP bind hash table entries: 32768 (order: 8, 1048576 bytes, linear) Jan 17 12:23:48.886931 kernel: TCP: Hash tables configured (established 32768 bind 32768) Jan 17 12:23:48.886947 kernel: UDP hash table entries: 2048 (order: 4, 65536 bytes, linear) Jan 17 12:23:48.886954 kernel: UDP-Lite hash table entries: 2048 (order: 4, 65536 bytes, linear) Jan 17 12:23:48.886964 kernel: NET: Registered PF_UNIX/PF_LOCAL protocol family Jan 17 12:23:48.886972 kernel: NET: Registered PF_XDP protocol family Jan 17 12:23:48.887093 kernel: pci 0000:00:04.0: can't claim BAR 6 [mem 0xfffc0000-0xffffffff pref]: no compatible bridge window Jan 17 12:23:48.887213 kernel: pci 0000:00:04.0: BAR 6: assigned [mem 0x9d000000-0x9d03ffff pref] Jan 17 12:23:48.887323 kernel: pci_bus 0000:00: resource 4 [io 0x0000-0x0cf7 window] Jan 17 12:23:48.887431 kernel: pci_bus 0000:00: resource 5 [io 0x0d00-0xffff window] Jan 17 12:23:48.887538 kernel: pci_bus 0000:00: resource 6 [mem 0x000a0000-0x000bffff window] Jan 17 12:23:48.887646 kernel: pci_bus 0000:00: resource 7 [mem 0x9d000000-0xafffffff window] Jan 17 12:23:48.887861 kernel: pci_bus 0000:00: resource 8 [mem 0xc0000000-0xfebfffff window] Jan 17 12:23:48.888012 kernel: pci_bus 0000:00: resource 9 [mem 0x800000000-0xfffffffff window] Jan 17 12:23:48.888029 kernel: PCI: CLS 0 bytes, default 64 Jan 17 12:23:48.888040 kernel: Initialise system trusted keyrings Jan 17 12:23:48.888051 kernel: workingset: timestamp_bits=39 max_order=20 bucket_order=0 Jan 17 12:23:48.888061 kernel: Key type asymmetric registered Jan 17 12:23:48.888071 kernel: Asymmetric key parser 'x509' registered Jan 17 12:23:48.888082 kernel: Block layer SCSI generic (bsg) driver version 0.4 loaded (major 251) Jan 17 12:23:48.888093 kernel: io scheduler mq-deadline registered Jan 17 12:23:48.888108 kernel: io scheduler kyber registered Jan 17 12:23:48.888118 kernel: io scheduler bfq registered Jan 17 12:23:48.888128 kernel: ioatdma: Intel(R) QuickData Technology Driver 5.00 Jan 17 12:23:48.888139 kernel: ACPI: \_SB_.GSIG: Enabled at IRQ 22 Jan 17 12:23:48.888150 kernel: ACPI: \_SB_.GSIH: Enabled at IRQ 23 Jan 17 12:23:48.888160 kernel: ACPI: \_SB_.GSIE: Enabled at IRQ 20 Jan 17 12:23:48.888171 kernel: Serial: 8250/16550 driver, 4 ports, IRQ sharing enabled 
Jan 17 12:23:48.888181 kernel: 00:03: ttyS0 at I/O 0x3f8 (irq = 4, base_baud = 115200) is a 16550A Jan 17 12:23:48.888192 kernel: i8042: PNP: PS/2 Controller [PNP0303:KBD,PNP0f13:MOU] at 0x60,0x64 irq 1,12 Jan 17 12:23:48.888205 kernel: serio: i8042 KBD port at 0x60,0x64 irq 1 Jan 17 12:23:48.888215 kernel: serio: i8042 AUX port at 0x60,0x64 irq 12 Jan 17 12:23:48.888367 kernel: rtc_cmos 00:04: RTC can wake from S4 Jan 17 12:23:48.888382 kernel: input: AT Translated Set 2 keyboard as /devices/platform/i8042/serio0/input/input0 Jan 17 12:23:48.888514 kernel: rtc_cmos 00:04: registered as rtc0 Jan 17 12:23:48.888650 kernel: rtc_cmos 00:04: setting system clock to 2025-01-17T12:23:48 UTC (1737116628) Jan 17 12:23:48.888818 kernel: rtc_cmos 00:04: alarms up to one day, y3k, 242 bytes nvram, hpet irqs Jan 17 12:23:48.888834 kernel: amd_pstate: the _CPC object is not present in SBIOS or ACPI disabled Jan 17 12:23:48.888850 kernel: efifb: probing for efifb Jan 17 12:23:48.888861 kernel: efifb: framebuffer at 0xc0000000, using 1408k, total 1408k Jan 17 12:23:48.888871 kernel: efifb: mode is 800x600x24, linelength=2400, pages=1 Jan 17 12:23:48.888882 kernel: efifb: scrolling: redraw Jan 17 12:23:48.888892 kernel: efifb: Truecolor: size=0:8:8:8, shift=0:16:8:0 Jan 17 12:23:48.888903 kernel: Console: switching to colour frame buffer device 100x37 Jan 17 12:23:48.888945 kernel: fb0: EFI VGA frame buffer device Jan 17 12:23:48.888959 kernel: pstore: Using crash dump compression: deflate Jan 17 12:23:48.888970 kernel: pstore: Registered efi_pstore as persistent store backend Jan 17 12:23:48.888984 kernel: NET: Registered PF_INET6 protocol family Jan 17 12:23:48.888995 kernel: Segment Routing with IPv6 Jan 17 12:23:48.889006 kernel: In-situ OAM (IOAM) with IPv6 Jan 17 12:23:48.889016 kernel: NET: Registered PF_PACKET protocol family Jan 17 12:23:48.889027 kernel: Key type dns_resolver registered Jan 17 12:23:48.889038 kernel: IPI shorthand broadcast: enabled Jan 17 12:23:48.889049 kernel: sched_clock: Marking stable (566002691, 116616028)->(728984075, -46365356) Jan 17 12:23:48.889060 kernel: registered taskstats version 1 Jan 17 12:23:48.889071 kernel: Loading compiled-in X.509 certificates Jan 17 12:23:48.889086 kernel: Loaded X.509 cert 'Kinvolk GmbH: Module signing key for 6.6.71-flatcar: 6baa290b0089ed5c4c5f7248306af816ac8c7f80' Jan 17 12:23:48.889097 kernel: Key type .fscrypt registered Jan 17 12:23:48.889108 kernel: Key type fscrypt-provisioning registered Jan 17 12:23:48.889122 kernel: ima: No TPM chip found, activating TPM-bypass! 
Jan 17 12:23:48.889134 kernel: ima: Allocated hash algorithm: sha1 Jan 17 12:23:48.889144 kernel: ima: No architecture policies found Jan 17 12:23:48.889155 kernel: clk: Disabling unused clocks Jan 17 12:23:48.889167 kernel: Freeing unused kernel image (initmem) memory: 42848K Jan 17 12:23:48.889178 kernel: Write protecting the kernel read-only data: 36864k Jan 17 12:23:48.889191 kernel: Freeing unused kernel image (rodata/data gap) memory: 1848K Jan 17 12:23:48.889203 kernel: Run /init as init process Jan 17 12:23:48.889214 kernel: with arguments: Jan 17 12:23:48.889225 kernel: /init Jan 17 12:23:48.889236 kernel: with environment: Jan 17 12:23:48.889246 kernel: HOME=/ Jan 17 12:23:48.889257 kernel: TERM=linux Jan 17 12:23:48.889268 kernel: BOOT_IMAGE=/flatcar/vmlinuz-a Jan 17 12:23:48.889282 systemd[1]: systemd 255 running in system mode (+PAM +AUDIT +SELINUX -APPARMOR +IMA +SMACK +SECCOMP +GCRYPT -GNUTLS +OPENSSL -ACL +BLKID +CURL +ELFUTILS -FIDO2 +IDN2 -IDN +IPTC +KMOD +LIBCRYPTSETUP +LIBFDISK +PCRE2 -PWQUALITY -P11KIT -QRENCODE +TPM2 +BZIP2 +LZ4 +XZ +ZLIB +ZSTD -BPF_FRAMEWORK -XKBCOMMON +UTMP -SYSVINIT default-hierarchy=unified) Jan 17 12:23:48.889300 systemd[1]: Detected virtualization kvm. Jan 17 12:23:48.889312 systemd[1]: Detected architecture x86-64. Jan 17 12:23:48.889323 systemd[1]: Running in initrd. Jan 17 12:23:48.889338 systemd[1]: No hostname configured, using default hostname. Jan 17 12:23:48.889352 systemd[1]: Hostname set to . Jan 17 12:23:48.889364 systemd[1]: Initializing machine ID from VM UUID. Jan 17 12:23:48.889375 systemd[1]: Queued start job for default target initrd.target. Jan 17 12:23:48.889387 systemd[1]: Started clevis-luks-askpass.path - Forward Password Requests to Clevis Directory Watch. Jan 17 12:23:48.889399 systemd[1]: Started systemd-ask-password-console.path - Dispatch Password Requests to Console Directory Watch. Jan 17 12:23:48.889412 systemd[1]: Expecting device dev-disk-by\x2dlabel-EFI\x2dSYSTEM.device - /dev/disk/by-label/EFI-SYSTEM... Jan 17 12:23:48.889424 systemd[1]: Expecting device dev-disk-by\x2dlabel-OEM.device - /dev/disk/by-label/OEM... Jan 17 12:23:48.889436 systemd[1]: Expecting device dev-disk-by\x2dlabel-ROOT.device - /dev/disk/by-label/ROOT... Jan 17 12:23:48.889451 systemd[1]: Expecting device dev-disk-by\x2dpartlabel-USR\x2dA.device - /dev/disk/by-partlabel/USR-A... Jan 17 12:23:48.889482 systemd[1]: Expecting device dev-disk-by\x2dpartuuid-7130c94a\x2d213a\x2d4e5a\x2d8e26\x2d6cce9662f132.device - /dev/disk/by-partuuid/7130c94a-213a-4e5a-8e26-6cce9662f132... Jan 17 12:23:48.889494 systemd[1]: Expecting device dev-mapper-usr.device - /dev/mapper/usr... Jan 17 12:23:48.889510 systemd[1]: Reached target cryptsetup-pre.target - Local Encrypted Volumes (Pre). Jan 17 12:23:48.889522 systemd[1]: Reached target cryptsetup.target - Local Encrypted Volumes. Jan 17 12:23:48.889534 systemd[1]: Reached target paths.target - Path Units. Jan 17 12:23:48.889546 systemd[1]: Reached target slices.target - Slice Units. Jan 17 12:23:48.889561 systemd[1]: Reached target swap.target - Swaps. Jan 17 12:23:48.889573 systemd[1]: Reached target timers.target - Timer Units. Jan 17 12:23:48.889585 systemd[1]: Listening on iscsid.socket - Open-iSCSI iscsid Socket. Jan 17 12:23:48.889596 systemd[1]: Listening on iscsiuio.socket - Open-iSCSI iscsiuio Socket. Jan 17 12:23:48.889608 systemd[1]: Listening on systemd-journald-dev-log.socket - Journal Socket (/dev/log). Jan 17 12:23:48.889620 systemd[1]: Listening on systemd-journald.socket - Journal Socket. 
Jan 17 12:23:48.889631 systemd[1]: Listening on systemd-networkd.socket - Network Service Netlink Socket. Jan 17 12:23:48.889644 systemd[1]: Listening on systemd-udevd-control.socket - udev Control Socket. Jan 17 12:23:48.889660 systemd[1]: Listening on systemd-udevd-kernel.socket - udev Kernel Socket. Jan 17 12:23:48.889672 systemd[1]: Reached target sockets.target - Socket Units. Jan 17 12:23:48.889683 systemd[1]: Starting ignition-setup-pre.service - Ignition env setup... Jan 17 12:23:48.889694 systemd[1]: Starting kmod-static-nodes.service - Create List of Static Device Nodes... Jan 17 12:23:48.889747 systemd[1]: Finished network-cleanup.service - Network Cleanup. Jan 17 12:23:48.889760 systemd[1]: Starting systemd-fsck-usr.service... Jan 17 12:23:48.889771 systemd[1]: Starting systemd-journald.service - Journal Service... Jan 17 12:23:48.889783 systemd[1]: Starting systemd-modules-load.service - Load Kernel Modules... Jan 17 12:23:48.889795 systemd[1]: Starting systemd-vconsole-setup.service - Virtual Console Setup... Jan 17 12:23:48.889811 systemd[1]: Finished ignition-setup-pre.service - Ignition env setup. Jan 17 12:23:48.889823 systemd[1]: Finished kmod-static-nodes.service - Create List of Static Device Nodes. Jan 17 12:23:48.889834 systemd[1]: Finished systemd-fsck-usr.service. Jan 17 12:23:48.889870 systemd-journald[192]: Collecting audit messages is disabled. Jan 17 12:23:48.889902 systemd[1]: Starting systemd-tmpfiles-setup-dev-early.service - Create Static Device Nodes in /dev gracefully... Jan 17 12:23:48.889914 systemd[1]: Finished systemd-vconsole-setup.service - Virtual Console Setup. Jan 17 12:23:48.889926 systemd[1]: Starting dracut-cmdline-ask.service - dracut ask for additional cmdline parameters... Jan 17 12:23:48.889945 systemd-journald[192]: Journal started Jan 17 12:23:48.889973 systemd-journald[192]: Runtime Journal (/run/log/journal/3ffdad07a03944b28d4ba0ac1218a4dd) is 6.0M, max 48.3M, 42.2M free. Jan 17 12:23:48.879329 systemd-modules-load[193]: Inserted module 'overlay' Jan 17 12:23:48.892767 systemd[1]: Started systemd-journald.service - Journal Service. Jan 17 12:23:48.897833 systemd[1]: Starting systemd-tmpfiles-setup.service - Create System Files and Directories... Jan 17 12:23:48.898091 systemd[1]: Finished systemd-tmpfiles-setup-dev-early.service - Create Static Device Nodes in /dev gracefully. Jan 17 12:23:48.899433 systemd[1]: Starting systemd-tmpfiles-setup-dev.service - Create Static Device Nodes in /dev... Jan 17 12:23:48.914254 kernel: bridge: filtering via arp/ip/ip6tables is no longer available by default. Update your scripts to load br_netfilter if you need this. Jan 17 12:23:48.915001 systemd[1]: Finished dracut-cmdline-ask.service - dracut ask for additional cmdline parameters. Jan 17 12:23:48.919139 kernel: Bridge firewalling registered Jan 17 12:23:48.915390 systemd-modules-load[193]: Inserted module 'br_netfilter' Jan 17 12:23:48.917836 systemd[1]: Starting dracut-cmdline.service - dracut cmdline hook... Jan 17 12:23:48.919436 systemd[1]: Finished systemd-modules-load.service - Load Kernel Modules. Jan 17 12:23:48.920469 systemd[1]: Finished systemd-tmpfiles-setup.service - Create System Files and Directories. Jan 17 12:23:48.922928 systemd[1]: Finished systemd-tmpfiles-setup-dev.service - Create Static Device Nodes in /dev. Jan 17 12:23:48.927865 systemd[1]: Starting systemd-sysctl.service - Apply Kernel Variables... 
Jan 17 12:23:48.932799 dracut-cmdline[219]: dracut-dracut-053 Jan 17 12:23:48.935260 dracut-cmdline[219]: Using kernel command line parameters: rd.driver.pre=btrfs rootflags=rw mount.usrflags=ro BOOT_IMAGE=/flatcar/vmlinuz-a mount.usr=/dev/mapper/usr verity.usr=PARTUUID=7130c94a-213a-4e5a-8e26-6cce9662f132 rootflags=rw mount.usrflags=ro consoleblank=0 root=LABEL=ROOT console=ttyS0,115200 flatcar.first_boot=detected verity.usrhash=bf1e0d81a0170850ab02d370c1a7c7a3f5983c980b3730f748240a3bda2dbb2e Jan 17 12:23:48.938931 systemd[1]: Finished systemd-sysctl.service - Apply Kernel Variables. Jan 17 12:23:48.943537 systemd[1]: Starting systemd-resolved.service - Network Name Resolution... Jan 17 12:23:48.980529 systemd-resolved[247]: Positive Trust Anchors: Jan 17 12:23:48.980546 systemd-resolved[247]: . IN DS 20326 8 2 e06d44b80b8f1d39a95c0b0d7c65d08458e880409bbc683457104237c7f8ec8d Jan 17 12:23:48.980576 systemd-resolved[247]: Negative trust anchors: home.arpa 10.in-addr.arpa 16.172.in-addr.arpa 17.172.in-addr.arpa 18.172.in-addr.arpa 19.172.in-addr.arpa 20.172.in-addr.arpa 21.172.in-addr.arpa 22.172.in-addr.arpa 23.172.in-addr.arpa 24.172.in-addr.arpa 25.172.in-addr.arpa 26.172.in-addr.arpa 27.172.in-addr.arpa 28.172.in-addr.arpa 29.172.in-addr.arpa 30.172.in-addr.arpa 31.172.in-addr.arpa 170.0.0.192.in-addr.arpa 171.0.0.192.in-addr.arpa 168.192.in-addr.arpa d.f.ip6.arpa ipv4only.arpa resolver.arpa corp home internal intranet lan local private test Jan 17 12:23:48.983105 systemd-resolved[247]: Defaulting to hostname 'linux'. Jan 17 12:23:48.984133 systemd[1]: Started systemd-resolved.service - Network Name Resolution. Jan 17 12:23:48.991344 systemd[1]: Reached target nss-lookup.target - Host and Network Name Lookups. Jan 17 12:23:49.028732 kernel: SCSI subsystem initialized Jan 17 12:23:49.039731 kernel: Loading iSCSI transport class v2.0-870. Jan 17 12:23:49.049730 kernel: iscsi: registered transport (tcp) Jan 17 12:23:49.070842 kernel: iscsi: registered transport (qla4xxx) Jan 17 12:23:49.070916 kernel: QLogic iSCSI HBA Driver Jan 17 12:23:49.121772 systemd[1]: Finished dracut-cmdline.service - dracut cmdline hook. Jan 17 12:23:49.130821 systemd[1]: Starting dracut-pre-udev.service - dracut pre-udev hook... Jan 17 12:23:49.155554 kernel: device-mapper: core: CONFIG_IMA_DISABLE_HTABLE is disabled. Duplicate IMA measurements will not be recorded in the IMA log. Jan 17 12:23:49.155603 kernel: device-mapper: uevent: version 1.0.3 Jan 17 12:23:49.155622 kernel: device-mapper: ioctl: 4.48.0-ioctl (2023-03-01) initialised: dm-devel@redhat.com Jan 17 12:23:49.195728 kernel: raid6: avx2x4 gen() 30549 MB/s Jan 17 12:23:49.212721 kernel: raid6: avx2x2 gen() 31482 MB/s Jan 17 12:23:49.229802 kernel: raid6: avx2x1 gen() 26090 MB/s Jan 17 12:23:49.229816 kernel: raid6: using algorithm avx2x2 gen() 31482 MB/s Jan 17 12:23:49.247797 kernel: raid6: .... xor() 19932 MB/s, rmw enabled Jan 17 12:23:49.247818 kernel: raid6: using avx2x2 recovery algorithm Jan 17 12:23:49.267726 kernel: xor: automatically using best checksumming function avx Jan 17 12:23:49.420728 kernel: Btrfs loaded, zoned=no, fsverity=no Jan 17 12:23:49.434422 systemd[1]: Finished dracut-pre-udev.service - dracut pre-udev hook. Jan 17 12:23:49.447866 systemd[1]: Starting systemd-udevd.service - Rule-based Manager for Device Events and Files... Jan 17 12:23:49.460366 systemd-udevd[410]: Using default interface naming scheme 'v255'. Jan 17 12:23:49.465319 systemd[1]: Started systemd-udevd.service - Rule-based Manager for Device Events and Files. 
Jan 17 12:23:49.467338 systemd[1]: Starting dracut-pre-trigger.service - dracut pre-trigger hook... Jan 17 12:23:49.485813 dracut-pre-trigger[413]: rd.md=0: removing MD RAID activation Jan 17 12:23:49.518426 systemd[1]: Finished dracut-pre-trigger.service - dracut pre-trigger hook. Jan 17 12:23:49.541864 systemd[1]: Starting systemd-udev-trigger.service - Coldplug All udev Devices... Jan 17 12:23:49.605602 systemd[1]: Finished systemd-udev-trigger.service - Coldplug All udev Devices. Jan 17 12:23:49.616899 systemd[1]: Starting dracut-initqueue.service - dracut initqueue hook... Jan 17 12:23:49.637723 kernel: virtio_blk virtio1: 4/0/0 default/read/poll queues Jan 17 12:23:49.660459 kernel: virtio_blk virtio1: [vda] 19775488 512-byte logical blocks (10.1 GB/9.43 GiB) Jan 17 12:23:49.660629 kernel: cryptd: max_cpu_qlen set to 1000 Jan 17 12:23:49.660641 kernel: GPT:Primary header thinks Alt. header is not at the end of the disk. Jan 17 12:23:49.660652 kernel: GPT:9289727 != 19775487 Jan 17 12:23:49.660662 kernel: GPT:Alternate GPT header not at the end of the disk. Jan 17 12:23:49.660673 kernel: GPT:9289727 != 19775487 Jan 17 12:23:49.660683 kernel: GPT: Use GNU Parted to correct GPT errors. Jan 17 12:23:49.660693 kernel: vda: vda1 vda2 vda3 vda4 vda6 vda7 vda9 Jan 17 12:23:49.645363 systemd[1]: Finished dracut-initqueue.service - dracut initqueue hook. Jan 17 12:23:49.649627 systemd[1]: Reached target remote-fs-pre.target - Preparation for Remote File Systems. Jan 17 12:23:49.651152 systemd[1]: Reached target remote-cryptsetup.target - Remote Encrypted Volumes. Jan 17 12:23:49.655980 systemd[1]: Reached target remote-fs.target - Remote File Systems. Jan 17 12:23:49.669503 kernel: AVX2 version of gcm_enc/dec engaged. Jan 17 12:23:49.669519 kernel: AES CTR mode by8 optimization enabled Jan 17 12:23:49.664972 systemd[1]: Starting dracut-pre-mount.service - dracut pre-mount hook... Jan 17 12:23:49.683719 kernel: libata version 3.00 loaded. Jan 17 12:23:49.682567 systemd[1]: dracut-cmdline-ask.service: Deactivated successfully. Jan 17 12:23:49.683046 systemd[1]: Stopped dracut-cmdline-ask.service - dracut ask for additional cmdline parameters. Jan 17 12:23:49.686112 systemd[1]: Stopping dracut-cmdline-ask.service - dracut ask for additional cmdline parameters... Jan 17 12:23:49.690777 kernel: BTRFS: device label OEM devid 1 transid 12 /dev/vda6 scanned by (udev-worker) (453) Jan 17 12:23:49.686324 systemd[1]: systemd-vconsole-setup.service: Deactivated successfully. Jan 17 12:23:49.686488 systemd[1]: Stopped systemd-vconsole-setup.service - Virtual Console Setup. 
Jan 17 12:23:49.700466 kernel: BTRFS: device fsid e459b8ee-f1f7-4c3d-a087-3f1955f52c85 devid 1 transid 36 /dev/vda3 scanned by (udev-worker) (460) Jan 17 12:23:49.700482 kernel: ahci 0000:00:1f.2: version 3.0 Jan 17 12:23:49.714406 kernel: ACPI: \_SB_.GSIA: Enabled at IRQ 16 Jan 17 12:23:49.714421 kernel: ahci 0000:00:1f.2: AHCI 0001.0000 32 slots 6 ports 1.5 Gbps 0x3f impl SATA mode Jan 17 12:23:49.714571 kernel: ahci 0000:00:1f.2: flags: 64bit ncq only Jan 17 12:23:49.714724 kernel: scsi host0: ahci Jan 17 12:23:49.714882 kernel: scsi host1: ahci Jan 17 12:23:49.715037 kernel: scsi host2: ahci Jan 17 12:23:49.715218 kernel: scsi host3: ahci Jan 17 12:23:49.715373 kernel: scsi host4: ahci Jan 17 12:23:49.715514 kernel: scsi host5: ahci Jan 17 12:23:49.715663 kernel: ata1: SATA max UDMA/133 abar m4096@0xc1040000 port 0xc1040100 irq 34 Jan 17 12:23:49.715674 kernel: ata2: SATA max UDMA/133 abar m4096@0xc1040000 port 0xc1040180 irq 34 Jan 17 12:23:49.715685 kernel: ata3: SATA max UDMA/133 abar m4096@0xc1040000 port 0xc1040200 irq 34 Jan 17 12:23:49.715737 kernel: ata4: SATA max UDMA/133 abar m4096@0xc1040000 port 0xc1040280 irq 34 Jan 17 12:23:49.715749 kernel: ata5: SATA max UDMA/133 abar m4096@0xc1040000 port 0xc1040300 irq 34 Jan 17 12:23:49.715760 kernel: ata6: SATA max UDMA/133 abar m4096@0xc1040000 port 0xc1040380 irq 34 Jan 17 12:23:49.694617 systemd[1]: Stopping systemd-vconsole-setup.service - Virtual Console Setup... Jan 17 12:23:49.704986 systemd[1]: Starting systemd-vconsole-setup.service - Virtual Console Setup... Jan 17 12:23:49.707256 systemd[1]: Finished dracut-pre-mount.service - dracut pre-mount hook. Jan 17 12:23:49.727661 systemd[1]: Finished systemd-vconsole-setup.service - Virtual Console Setup. Jan 17 12:23:49.734539 systemd[1]: Found device dev-disk-by\x2dlabel-ROOT.device - /dev/disk/by-label/ROOT. Jan 17 12:23:49.740539 systemd[1]: Found device dev-disk-by\x2dlabel-EFI\x2dSYSTEM.device - /dev/disk/by-label/EFI-SYSTEM. Jan 17 12:23:49.746249 systemd[1]: Found device dev-disk-by\x2dlabel-OEM.device - /dev/disk/by-label/OEM. Jan 17 12:23:49.751197 systemd[1]: Found device dev-disk-by\x2dpartuuid-7130c94a\x2d213a\x2d4e5a\x2d8e26\x2d6cce9662f132.device - /dev/disk/by-partuuid/7130c94a-213a-4e5a-8e26-6cce9662f132. Jan 17 12:23:49.752464 systemd[1]: Found device dev-disk-by\x2dpartlabel-USR\x2dA.device - /dev/disk/by-partlabel/USR-A. Jan 17 12:23:49.764816 systemd[1]: Starting disk-uuid.service - Generate new UUID for disk GPT if necessary... Jan 17 12:23:49.766597 systemd[1]: Starting dracut-cmdline-ask.service - dracut ask for additional cmdline parameters... Jan 17 12:23:49.772250 disk-uuid[564]: Primary Header is updated. Jan 17 12:23:49.772250 disk-uuid[564]: Secondary Entries is updated. Jan 17 12:23:49.772250 disk-uuid[564]: Secondary Header is updated. Jan 17 12:23:49.775728 kernel: vda: vda1 vda2 vda3 vda4 vda6 vda7 vda9 Jan 17 12:23:49.779729 kernel: vda: vda1 vda2 vda3 vda4 vda6 vda7 vda9 Jan 17 12:23:49.784019 systemd[1]: Finished dracut-cmdline-ask.service - dracut ask for additional cmdline parameters. 
Jan 17 12:23:50.027891 kernel: ata3: SATA link up 1.5 Gbps (SStatus 113 SControl 300) Jan 17 12:23:50.027964 kernel: ata2: SATA link down (SStatus 0 SControl 300) Jan 17 12:23:50.027975 kernel: ata6: SATA link down (SStatus 0 SControl 300) Jan 17 12:23:50.029396 kernel: ata1: SATA link down (SStatus 0 SControl 300) Jan 17 12:23:50.029466 kernel: ata4: SATA link down (SStatus 0 SControl 300) Jan 17 12:23:50.030733 kernel: ata3.00: ATAPI: QEMU DVD-ROM, 2.5+, max UDMA/100 Jan 17 12:23:50.030750 kernel: ata3.00: applying bridge limits Jan 17 12:23:50.031727 kernel: ata5: SATA link down (SStatus 0 SControl 300) Jan 17 12:23:50.032725 kernel: ata3.00: configured for UDMA/100 Jan 17 12:23:50.034734 kernel: scsi 2:0:0:0: CD-ROM QEMU QEMU DVD-ROM 2.5+ PQ: 0 ANSI: 5 Jan 17 12:23:50.076738 kernel: sr 2:0:0:0: [sr0] scsi3-mmc drive: 4x/4x cd/rw xa/form2 tray Jan 17 12:23:50.098340 kernel: cdrom: Uniform CD-ROM driver Revision: 3.20 Jan 17 12:23:50.098353 kernel: sr 2:0:0:0: Attached scsi CD-ROM sr0 Jan 17 12:23:50.781428 disk-uuid[566]: The operation has completed successfully. Jan 17 12:23:50.782714 kernel: vda: vda1 vda2 vda3 vda4 vda6 vda7 vda9 Jan 17 12:23:50.807091 systemd[1]: disk-uuid.service: Deactivated successfully. Jan 17 12:23:50.807211 systemd[1]: Finished disk-uuid.service - Generate new UUID for disk GPT if necessary. Jan 17 12:23:50.829839 systemd[1]: Starting verity-setup.service - Verity Setup for /dev/mapper/usr... Jan 17 12:23:50.833032 sh[590]: Success Jan 17 12:23:50.845725 kernel: device-mapper: verity: sha256 using implementation "sha256-ni" Jan 17 12:23:50.878231 systemd[1]: Found device dev-mapper-usr.device - /dev/mapper/usr. Jan 17 12:23:50.892403 systemd[1]: Mounting sysusr-usr.mount - /sysusr/usr... Jan 17 12:23:50.897099 systemd[1]: Finished verity-setup.service - Verity Setup for /dev/mapper/usr. Jan 17 12:23:50.906487 kernel: BTRFS info (device dm-0): first mount of filesystem e459b8ee-f1f7-4c3d-a087-3f1955f52c85 Jan 17 12:23:50.906516 kernel: BTRFS info (device dm-0): using crc32c (crc32c-intel) checksum algorithm Jan 17 12:23:50.906527 kernel: BTRFS warning (device dm-0): 'nologreplay' is deprecated, use 'rescue=nologreplay' instead Jan 17 12:23:50.907577 kernel: BTRFS info (device dm-0): disabling log replay at mount time Jan 17 12:23:50.908334 kernel: BTRFS info (device dm-0): using free space tree Jan 17 12:23:50.912895 systemd[1]: Mounted sysusr-usr.mount - /sysusr/usr. Jan 17 12:23:50.913599 systemd[1]: afterburn-network-kargs.service - Afterburn Initrd Setup Network Kernel Arguments was skipped because no trigger condition checks were met. Jan 17 12:23:50.926831 systemd[1]: Starting ignition-setup.service - Ignition (setup)... Jan 17 12:23:50.928403 systemd[1]: Starting parse-ip-for-networkd.service - Write systemd-networkd units from cmdline... Jan 17 12:23:50.936923 kernel: BTRFS info (device vda6): first mount of filesystem a70a40d6-5ab2-4665-81b1-b8e9f58c5ff8 Jan 17 12:23:50.936953 kernel: BTRFS info (device vda6): using crc32c (crc32c-intel) checksum algorithm Jan 17 12:23:50.936967 kernel: BTRFS info (device vda6): using free space tree Jan 17 12:23:50.940834 kernel: BTRFS info (device vda6): auto enabling async discard Jan 17 12:23:50.949397 systemd[1]: mnt-oem.mount: Deactivated successfully. Jan 17 12:23:50.951146 kernel: BTRFS info (device vda6): last unmount of filesystem a70a40d6-5ab2-4665-81b1-b8e9f58c5ff8 Jan 17 12:23:50.996404 systemd[1]: Finished ignition-setup.service - Ignition (setup). 
Jan 17 12:23:51.000866 systemd[1]: Starting ignition-fetch-offline.service - Ignition (fetch-offline)... Jan 17 12:23:51.034667 systemd[1]: Finished parse-ip-for-networkd.service - Write systemd-networkd units from cmdline. Jan 17 12:23:51.049275 systemd[1]: Starting systemd-networkd.service - Network Configuration... Jan 17 12:23:51.052937 ignition[732]: Ignition 2.19.0 Jan 17 12:23:51.052948 ignition[732]: Stage: fetch-offline Jan 17 12:23:51.052985 ignition[732]: no configs at "/usr/lib/ignition/base.d" Jan 17 12:23:51.052996 ignition[732]: no config dir at "/usr/lib/ignition/base.platform.d/qemu" Jan 17 12:23:51.053086 ignition[732]: parsed url from cmdline: "" Jan 17 12:23:51.053090 ignition[732]: no config URL provided Jan 17 12:23:51.053095 ignition[732]: reading system config file "/usr/lib/ignition/user.ign" Jan 17 12:23:51.053104 ignition[732]: no config at "/usr/lib/ignition/user.ign" Jan 17 12:23:51.053131 ignition[732]: op(1): [started] loading QEMU firmware config module Jan 17 12:23:51.053136 ignition[732]: op(1): executing: "modprobe" "qemu_fw_cfg" Jan 17 12:23:51.060974 ignition[732]: op(1): [finished] loading QEMU firmware config module Jan 17 12:23:51.062270 ignition[732]: parsing config with SHA512: d6d9efeec47bff547959dd19f5955b87d544519f0ef35700f01e75396b98e515d74c6a2b5b70286f891de2ca6d2dd94ea6fe1cca58edf76a87ac9e4f98902c39 Jan 17 12:23:51.064635 unknown[732]: fetched base config from "system" Jan 17 12:23:51.064649 unknown[732]: fetched user config from "qemu" Jan 17 12:23:51.064912 ignition[732]: fetch-offline: fetch-offline passed Jan 17 12:23:51.067660 systemd[1]: Finished ignition-fetch-offline.service - Ignition (fetch-offline). Jan 17 12:23:51.064976 ignition[732]: Ignition finished successfully Jan 17 12:23:51.072834 systemd-networkd[775]: lo: Link UP Jan 17 12:23:51.072845 systemd-networkd[775]: lo: Gained carrier Jan 17 12:23:51.074384 systemd-networkd[775]: Enumeration completed Jan 17 12:23:51.074459 systemd[1]: Started systemd-networkd.service - Network Configuration. Jan 17 12:23:51.074784 systemd-networkd[775]: eth0: found matching network '/usr/lib/systemd/network/zz-default.network', based on potentially unpredictable interface name. Jan 17 12:23:51.074788 systemd-networkd[775]: eth0: Configuring with /usr/lib/systemd/network/zz-default.network. Jan 17 12:23:51.075586 systemd-networkd[775]: eth0: Link UP Jan 17 12:23:51.075590 systemd-networkd[775]: eth0: Gained carrier Jan 17 12:23:51.075596 systemd-networkd[775]: eth0: found matching network '/usr/lib/systemd/network/zz-default.network', based on potentially unpredictable interface name. Jan 17 12:23:51.076601 systemd[1]: Reached target network.target - Network. Jan 17 12:23:51.078642 systemd[1]: ignition-fetch.service - Ignition (fetch) was skipped because of an unmet condition check (ConditionPathExists=!/run/ignition.json). Jan 17 12:23:51.092869 systemd[1]: Starting ignition-kargs.service - Ignition (kargs)... 
Jan 17 12:23:51.094769 systemd-networkd[775]: eth0: DHCPv4 address 10.0.0.161/16, gateway 10.0.0.1 acquired from 10.0.0.1 Jan 17 12:23:51.108644 ignition[783]: Ignition 2.19.0 Jan 17 12:23:51.108656 ignition[783]: Stage: kargs Jan 17 12:23:51.108869 ignition[783]: no configs at "/usr/lib/ignition/base.d" Jan 17 12:23:51.108888 ignition[783]: no config dir at "/usr/lib/ignition/base.platform.d/qemu" Jan 17 12:23:51.109612 ignition[783]: kargs: kargs passed Jan 17 12:23:51.109661 ignition[783]: Ignition finished successfully Jan 17 12:23:51.116204 systemd[1]: Finished ignition-kargs.service - Ignition (kargs). Jan 17 12:23:51.125868 systemd[1]: Starting ignition-disks.service - Ignition (disks)... Jan 17 12:23:51.137062 ignition[793]: Ignition 2.19.0 Jan 17 12:23:51.137074 ignition[793]: Stage: disks Jan 17 12:23:51.137226 ignition[793]: no configs at "/usr/lib/ignition/base.d" Jan 17 12:23:51.137237 ignition[793]: no config dir at "/usr/lib/ignition/base.platform.d/qemu" Jan 17 12:23:51.141151 ignition[793]: disks: disks passed Jan 17 12:23:51.141917 ignition[793]: Ignition finished successfully Jan 17 12:23:51.145002 systemd[1]: Finished ignition-disks.service - Ignition (disks). Jan 17 12:23:51.145255 systemd[1]: Reached target initrd-root-device.target - Initrd Root Device. Jan 17 12:23:51.147313 systemd[1]: Reached target local-fs-pre.target - Preparation for Local File Systems. Jan 17 12:23:51.149795 systemd[1]: Reached target local-fs.target - Local File Systems. Jan 17 12:23:51.152513 systemd[1]: Reached target sysinit.target - System Initialization. Jan 17 12:23:51.154941 systemd[1]: Reached target basic.target - Basic System. Jan 17 12:23:51.173823 systemd[1]: Starting systemd-fsck-root.service - File System Check on /dev/disk/by-label/ROOT... Jan 17 12:23:51.189266 systemd-fsck[804]: ROOT: clean, 14/553520 files, 52654/553472 blocks Jan 17 12:23:51.195468 systemd[1]: Finished systemd-fsck-root.service - File System Check on /dev/disk/by-label/ROOT. Jan 17 12:23:51.210854 systemd[1]: Mounting sysroot.mount - /sysroot... Jan 17 12:23:51.296724 kernel: EXT4-fs (vda9): mounted filesystem 0ba4fe0e-76d7-406f-b570-4642d86198f6 r/w with ordered data mode. Quota mode: none. Jan 17 12:23:51.297163 systemd[1]: Mounted sysroot.mount - /sysroot. Jan 17 12:23:51.297772 systemd[1]: Reached target initrd-root-fs.target - Initrd Root File System. Jan 17 12:23:51.309774 systemd[1]: Mounting sysroot-oem.mount - /sysroot/oem... Jan 17 12:23:51.311934 systemd[1]: Mounting sysroot-usr.mount - /sysroot/usr... Jan 17 12:23:51.313248 systemd[1]: flatcar-metadata-hostname.service - Flatcar Metadata Hostname Agent was skipped because no trigger condition checks were met. Jan 17 12:23:51.318755 kernel: BTRFS: device label OEM devid 1 transid 13 /dev/vda6 scanned by mount (812) Jan 17 12:23:51.313284 systemd[1]: ignition-remount-sysroot.service - Remount /sysroot read-write for Ignition was skipped because of an unmet condition check (ConditionPathIsReadWrite=!/sysroot). Jan 17 12:23:51.313306 systemd[1]: Reached target ignition-diskful.target - Ignition Boot Disk Setup. 
Jan 17 12:23:51.327500 kernel: BTRFS info (device vda6): first mount of filesystem a70a40d6-5ab2-4665-81b1-b8e9f58c5ff8 Jan 17 12:23:51.327517 kernel: BTRFS info (device vda6): using crc32c (crc32c-intel) checksum algorithm Jan 17 12:23:51.327532 kernel: BTRFS info (device vda6): using free space tree Jan 17 12:23:51.327543 kernel: BTRFS info (device vda6): auto enabling async discard Jan 17 12:23:51.320211 systemd[1]: Mounted sysroot-usr.mount - /sysroot/usr. Jan 17 12:23:51.322562 systemd[1]: Starting initrd-setup-root.service - Root filesystem setup... Jan 17 12:23:51.329288 systemd[1]: Mounted sysroot-oem.mount - /sysroot/oem. Jan 17 12:23:51.362448 initrd-setup-root[836]: cut: /sysroot/etc/passwd: No such file or directory Jan 17 12:23:51.367913 initrd-setup-root[843]: cut: /sysroot/etc/group: No such file or directory Jan 17 12:23:51.373026 initrd-setup-root[850]: cut: /sysroot/etc/shadow: No such file or directory Jan 17 12:23:51.377745 initrd-setup-root[857]: cut: /sysroot/etc/gshadow: No such file or directory Jan 17 12:23:51.461457 systemd[1]: Finished initrd-setup-root.service - Root filesystem setup. Jan 17 12:23:51.468883 systemd[1]: Starting ignition-mount.service - Ignition (mount)... Jan 17 12:23:51.472274 systemd[1]: Starting sysroot-boot.service - /sysroot/boot... Jan 17 12:23:51.477726 kernel: BTRFS info (device vda6): last unmount of filesystem a70a40d6-5ab2-4665-81b1-b8e9f58c5ff8 Jan 17 12:23:51.496045 systemd[1]: Finished sysroot-boot.service - /sysroot/boot. Jan 17 12:23:51.538938 ignition[929]: INFO : Ignition 2.19.0 Jan 17 12:23:51.538938 ignition[929]: INFO : Stage: mount Jan 17 12:23:51.540648 ignition[929]: INFO : no configs at "/usr/lib/ignition/base.d" Jan 17 12:23:51.540648 ignition[929]: INFO : no config dir at "/usr/lib/ignition/base.platform.d/qemu" Jan 17 12:23:51.540648 ignition[929]: INFO : mount: mount passed Jan 17 12:23:51.540648 ignition[929]: INFO : Ignition finished successfully Jan 17 12:23:51.546487 systemd[1]: Finished ignition-mount.service - Ignition (mount). Jan 17 12:23:51.558807 systemd[1]: Starting ignition-files.service - Ignition (files)... Jan 17 12:23:51.905895 systemd[1]: sysroot-oem.mount: Deactivated successfully. Jan 17 12:23:51.912998 systemd[1]: Mounting sysroot-oem.mount - /sysroot/oem... Jan 17 12:23:51.918726 kernel: BTRFS: device label OEM devid 1 transid 14 /dev/vda6 scanned by mount (938) Jan 17 12:23:51.920995 kernel: BTRFS info (device vda6): first mount of filesystem a70a40d6-5ab2-4665-81b1-b8e9f58c5ff8 Jan 17 12:23:51.921020 kernel: BTRFS info (device vda6): using crc32c (crc32c-intel) checksum algorithm Jan 17 12:23:51.921031 kernel: BTRFS info (device vda6): using free space tree Jan 17 12:23:51.924719 kernel: BTRFS info (device vda6): auto enabling async discard Jan 17 12:23:51.925509 systemd[1]: Mounted sysroot-oem.mount - /sysroot/oem. 
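The BTRFS info lines above record the options the kernel applied when mounting the OEM partition (crc32c checksums, free space tree, async discard). A small sketch, assuming a Linux host and the /dev/vda6 device name taken from this log, reads the same information back from /proc/self/mounts:

# Illustrative sketch: report the mount options the kernel exposes for a device,
# mirroring the BTRFS info lines above. /dev/vda6 is the device from this log.
def mount_options(device="/dev/vda6"):
    result = {}
    with open("/proc/self/mounts") as mounts:
        for line in mounts:
            dev, mountpoint, fstype, options, *_ = line.split()
            if dev == device:
                result[mountpoint] = (fstype, options.split(","))
    return result

if __name__ == "__main__":
    for mountpoint, (fstype, options) in mount_options().items():
        print(mountpoint, fstype, ",".join(options))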
Jan 17 12:23:51.951246 ignition[955]: INFO : Ignition 2.19.0 Jan 17 12:23:51.951246 ignition[955]: INFO : Stage: files Jan 17 12:23:51.952922 ignition[955]: INFO : no configs at "/usr/lib/ignition/base.d" Jan 17 12:23:51.952922 ignition[955]: INFO : no config dir at "/usr/lib/ignition/base.platform.d/qemu" Jan 17 12:23:51.955443 ignition[955]: DEBUG : files: compiled without relabeling support, skipping Jan 17 12:23:51.956741 ignition[955]: INFO : files: ensureUsers: op(1): [started] creating or modifying user "core" Jan 17 12:23:51.956741 ignition[955]: DEBUG : files: ensureUsers: op(1): executing: "usermod" "--root" "/sysroot" "core" Jan 17 12:23:51.960353 ignition[955]: INFO : files: ensureUsers: op(1): [finished] creating or modifying user "core" Jan 17 12:23:51.961820 ignition[955]: INFO : files: ensureUsers: op(2): [started] adding ssh keys to user "core" Jan 17 12:23:51.963489 unknown[955]: wrote ssh authorized keys file for user: core Jan 17 12:23:51.964564 ignition[955]: INFO : files: ensureUsers: op(2): [finished] adding ssh keys to user "core" Jan 17 12:23:51.966782 ignition[955]: INFO : files: createFilesystemsFiles: createFiles: op(3): [started] writing file "/sysroot/home/core/install.sh" Jan 17 12:23:51.968596 ignition[955]: INFO : files: createFilesystemsFiles: createFiles: op(3): [finished] writing file "/sysroot/home/core/install.sh" Jan 17 12:23:51.970427 ignition[955]: INFO : files: createFilesystemsFiles: createFiles: op(4): [started] writing file "/sysroot/etc/flatcar/update.conf" Jan 17 12:23:51.972253 ignition[955]: INFO : files: createFilesystemsFiles: createFiles: op(4): [finished] writing file "/sysroot/etc/flatcar/update.conf" Jan 17 12:23:51.973991 ignition[955]: INFO : files: createFilesystemsFiles: createFiles: op(5): [started] writing link "/sysroot/etc/extensions/kubernetes.raw" -> "/opt/extensions/kubernetes/kubernetes-v1.31.0-x86-64.raw" Jan 17 12:23:51.976504 ignition[955]: INFO : files: createFilesystemsFiles: createFiles: op(5): [finished] writing link "/sysroot/etc/extensions/kubernetes.raw" -> "/opt/extensions/kubernetes/kubernetes-v1.31.0-x86-64.raw" Jan 17 12:23:51.978922 ignition[955]: INFO : files: createFilesystemsFiles: createFiles: op(6): [started] writing file "/sysroot/opt/extensions/kubernetes/kubernetes-v1.31.0-x86-64.raw" Jan 17 12:23:51.981015 ignition[955]: INFO : files: createFilesystemsFiles: createFiles: op(6): GET https://github.com/flatcar/sysext-bakery/releases/download/latest/kubernetes-v1.31.0-x86-64.raw: attempt #1 Jan 17 12:23:52.329018 ignition[955]: INFO : files: createFilesystemsFiles: createFiles: op(6): GET result: OK Jan 17 12:23:52.699077 ignition[955]: INFO : files: createFilesystemsFiles: createFiles: op(6): [finished] writing file "/sysroot/opt/extensions/kubernetes/kubernetes-v1.31.0-x86-64.raw" Jan 17 12:23:52.699077 ignition[955]: INFO : files: op(7): [started] processing unit "coreos-metadata.service" Jan 17 12:23:52.702673 ignition[955]: INFO : files: op(7): op(8): [started] writing unit "coreos-metadata.service" at "/sysroot/etc/systemd/system/coreos-metadata.service" Jan 17 12:23:52.705070 ignition[955]: INFO : files: op(7): op(8): [finished] writing unit "coreos-metadata.service" at "/sysroot/etc/systemd/system/coreos-metadata.service" Jan 17 12:23:52.705070 ignition[955]: INFO : files: op(7): [finished] processing unit "coreos-metadata.service" Jan 17 12:23:52.705070 ignition[955]: INFO : files: op(9): [started] setting preset to disabled for "coreos-metadata.service" Jan 17 12:23:52.729030 ignition[955]: 
INFO : files: op(9): op(a): [started] removing enablement symlink(s) for "coreos-metadata.service" Jan 17 12:23:52.734106 ignition[955]: INFO : files: op(9): op(a): [finished] removing enablement symlink(s) for "coreos-metadata.service" Jan 17 12:23:52.735650 ignition[955]: INFO : files: op(9): [finished] setting preset to disabled for "coreos-metadata.service" Jan 17 12:23:52.735650 ignition[955]: INFO : files: createResultFile: createFiles: op(b): [started] writing file "/sysroot/etc/.ignition-result.json" Jan 17 12:23:52.735650 ignition[955]: INFO : files: createResultFile: createFiles: op(b): [finished] writing file "/sysroot/etc/.ignition-result.json" Jan 17 12:23:52.735650 ignition[955]: INFO : files: files passed Jan 17 12:23:52.735650 ignition[955]: INFO : Ignition finished successfully Jan 17 12:23:52.744143 systemd[1]: Finished ignition-files.service - Ignition (files). Jan 17 12:23:52.751953 systemd[1]: Starting ignition-quench.service - Ignition (record completion)... Jan 17 12:23:52.754946 systemd[1]: Starting initrd-setup-root-after-ignition.service - Root filesystem completion... Jan 17 12:23:52.757616 systemd[1]: ignition-quench.service: Deactivated successfully. Jan 17 12:23:52.757799 systemd-networkd[775]: eth0: Gained IPv6LL Jan 17 12:23:52.758822 systemd[1]: Finished ignition-quench.service - Ignition (record completion). Jan 17 12:23:52.764300 initrd-setup-root-after-ignition[983]: grep: /sysroot/oem/oem-release: No such file or directory Jan 17 12:23:52.768406 initrd-setup-root-after-ignition[985]: grep: /sysroot/etc/flatcar/enabled-sysext.conf: No such file or directory Jan 17 12:23:52.768406 initrd-setup-root-after-ignition[985]: grep: /sysroot/usr/share/flatcar/enabled-sysext.conf: No such file or directory Jan 17 12:23:52.771742 initrd-setup-root-after-ignition[989]: grep: /sysroot/etc/flatcar/enabled-sysext.conf: No such file or directory Jan 17 12:23:52.775145 systemd[1]: Finished initrd-setup-root-after-ignition.service - Root filesystem completion. Jan 17 12:23:52.777820 systemd[1]: Reached target ignition-complete.target - Ignition Complete. Jan 17 12:23:52.788828 systemd[1]: Starting initrd-parse-etc.service - Mountpoints Configured in the Real Root... Jan 17 12:23:52.813618 systemd[1]: initrd-parse-etc.service: Deactivated successfully. Jan 17 12:23:52.814660 systemd[1]: Finished initrd-parse-etc.service - Mountpoints Configured in the Real Root. Jan 17 12:23:52.817275 systemd[1]: Reached target initrd-fs.target - Initrd File Systems. Jan 17 12:23:52.819325 systemd[1]: Reached target initrd.target - Initrd Default Target. Jan 17 12:23:52.821356 systemd[1]: dracut-mount.service - dracut mount hook was skipped because no trigger condition checks were met. Jan 17 12:23:52.836893 systemd[1]: Starting dracut-pre-pivot.service - dracut pre-pivot and cleanup hook... Jan 17 12:23:52.852330 systemd[1]: Finished dracut-pre-pivot.service - dracut pre-pivot and cleanup hook. Jan 17 12:23:52.856895 systemd[1]: Starting initrd-cleanup.service - Cleaning Up and Shutting Down Daemons... Jan 17 12:23:52.866933 systemd[1]: Stopped target nss-lookup.target - Host and Network Name Lookups. Jan 17 12:23:52.869268 systemd[1]: Stopped target remote-cryptsetup.target - Remote Encrypted Volumes. Jan 17 12:23:52.871639 systemd[1]: Stopped target timers.target - Timer Units. Jan 17 12:23:52.873486 systemd[1]: dracut-pre-pivot.service: Deactivated successfully. Jan 17 12:23:52.874495 systemd[1]: Stopped dracut-pre-pivot.service - dracut pre-pivot and cleanup hook. 
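The files stage above carries out what the provided config asked for: an SSH key for the core user, two small files, a sysext symlink under /etc/extensions, the kubernetes sysext image download, and coreos-metadata.service left disabled. As a rough sketch of what such a config could look like (the spec version, key, and file contents are placeholders, not the actual config this VM received), the following Python builds an Ignition-style JSON document with those pieces:

# Illustrative sketch, not taken from this host: assemble an Ignition-style
# config resembling the operations logged in the "files" stage above.
import json

config = {
    "ignition": {"version": "3.4.0"},  # placeholder spec version
    "passwd": {
        "users": [
            {"name": "core", "sshAuthorizedKeys": ["ssh-ed25519 AAAA... user@host"]}
        ]
    },
    "storage": {
        "files": [
            {"path": "/home/core/install.sh", "mode": 0o755,
             "contents": {"source": "data:,echo%20hello"}},
            {"path": "/etc/flatcar/update.conf", "mode": 0o644,
             "contents": {"source": "data:,REBOOT_STRATEGY%3Doff"}},
        ],
        "links": [
            {"path": "/etc/extensions/kubernetes.raw",
             "target": "/opt/extensions/kubernetes/kubernetes-v1.31.0-x86-64.raw"}
        ],
    },
    "systemd": {
        "units": [{"name": "coreos-metadata.service", "enabled": False}]
    },
}

print(json.dumps(config, indent=2))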
Jan 17 12:23:52.877074 systemd[1]: Stopped target initrd.target - Initrd Default Target. Jan 17 12:23:52.879145 systemd[1]: Stopped target basic.target - Basic System. Jan 17 12:23:52.881000 systemd[1]: Stopped target ignition-complete.target - Ignition Complete. Jan 17 12:23:52.883201 systemd[1]: Stopped target ignition-diskful.target - Ignition Boot Disk Setup. Jan 17 12:23:52.885519 systemd[1]: Stopped target initrd-root-device.target - Initrd Root Device. Jan 17 12:23:52.887776 systemd[1]: Stopped target remote-fs.target - Remote File Systems. Jan 17 12:23:52.889873 systemd[1]: Stopped target remote-fs-pre.target - Preparation for Remote File Systems. Jan 17 12:23:52.892357 systemd[1]: Stopped target sysinit.target - System Initialization. Jan 17 12:23:52.894445 systemd[1]: Stopped target local-fs.target - Local File Systems. Jan 17 12:23:52.896503 systemd[1]: Stopped target swap.target - Swaps. Jan 17 12:23:52.898151 systemd[1]: dracut-pre-mount.service: Deactivated successfully. Jan 17 12:23:52.899157 systemd[1]: Stopped dracut-pre-mount.service - dracut pre-mount hook. Jan 17 12:23:52.901413 systemd[1]: Stopped target cryptsetup.target - Local Encrypted Volumes. Jan 17 12:23:52.903601 systemd[1]: Stopped target cryptsetup-pre.target - Local Encrypted Volumes (Pre). Jan 17 12:23:52.905980 systemd[1]: clevis-luks-askpass.path: Deactivated successfully. Jan 17 12:23:52.906965 systemd[1]: Stopped clevis-luks-askpass.path - Forward Password Requests to Clevis Directory Watch. Jan 17 12:23:52.909558 systemd[1]: dracut-initqueue.service: Deactivated successfully. Jan 17 12:23:52.910562 systemd[1]: Stopped dracut-initqueue.service - dracut initqueue hook. Jan 17 12:23:52.912812 systemd[1]: ignition-fetch-offline.service: Deactivated successfully. Jan 17 12:23:52.913905 systemd[1]: Stopped ignition-fetch-offline.service - Ignition (fetch-offline). Jan 17 12:23:52.916295 systemd[1]: Stopped target paths.target - Path Units. Jan 17 12:23:52.918088 systemd[1]: systemd-ask-password-console.path: Deactivated successfully. Jan 17 12:23:52.921764 systemd[1]: Stopped systemd-ask-password-console.path - Dispatch Password Requests to Console Directory Watch. Jan 17 12:23:52.924518 systemd[1]: Stopped target slices.target - Slice Units. Jan 17 12:23:52.926368 systemd[1]: Stopped target sockets.target - Socket Units. Jan 17 12:23:52.928261 systemd[1]: iscsid.socket: Deactivated successfully. Jan 17 12:23:52.929133 systemd[1]: Closed iscsid.socket - Open-iSCSI iscsid Socket. Jan 17 12:23:52.931123 systemd[1]: iscsiuio.socket: Deactivated successfully. Jan 17 12:23:52.932024 systemd[1]: Closed iscsiuio.socket - Open-iSCSI iscsiuio Socket. Jan 17 12:23:52.934081 systemd[1]: initrd-setup-root-after-ignition.service: Deactivated successfully. Jan 17 12:23:52.935260 systemd[1]: Stopped initrd-setup-root-after-ignition.service - Root filesystem completion. Jan 17 12:23:52.937823 systemd[1]: ignition-files.service: Deactivated successfully. Jan 17 12:23:52.938812 systemd[1]: Stopped ignition-files.service - Ignition (files). Jan 17 12:23:52.952845 systemd[1]: Stopping ignition-mount.service - Ignition (mount)... Jan 17 12:23:52.954738 systemd[1]: kmod-static-nodes.service: Deactivated successfully. Jan 17 12:23:52.954858 systemd[1]: Stopped kmod-static-nodes.service - Create List of Static Device Nodes. Jan 17 12:23:52.958872 systemd[1]: Stopping sysroot-boot.service - /sysroot/boot... Jan 17 12:23:52.960629 systemd[1]: systemd-udev-trigger.service: Deactivated successfully. 
Jan 17 12:23:52.960754 systemd[1]: Stopped systemd-udev-trigger.service - Coldplug All udev Devices. Jan 17 12:23:52.964134 systemd[1]: dracut-pre-trigger.service: Deactivated successfully. Jan 17 12:23:52.965200 systemd[1]: Stopped dracut-pre-trigger.service - dracut pre-trigger hook. Jan 17 12:23:52.967050 ignition[1009]: INFO : Ignition 2.19.0 Jan 17 12:23:52.967050 ignition[1009]: INFO : Stage: umount Jan 17 12:23:52.967050 ignition[1009]: INFO : no configs at "/usr/lib/ignition/base.d" Jan 17 12:23:52.967050 ignition[1009]: INFO : no config dir at "/usr/lib/ignition/base.platform.d/qemu" Jan 17 12:23:52.967050 ignition[1009]: INFO : umount: umount passed Jan 17 12:23:52.967050 ignition[1009]: INFO : Ignition finished successfully Jan 17 12:23:52.975323 systemd[1]: ignition-mount.service: Deactivated successfully. Jan 17 12:23:52.979217 systemd[1]: Stopped ignition-mount.service - Ignition (mount). Jan 17 12:23:52.983911 systemd[1]: initrd-cleanup.service: Deactivated successfully. Jan 17 12:23:52.984922 systemd[1]: Finished initrd-cleanup.service - Cleaning Up and Shutting Down Daemons. Jan 17 12:23:52.988119 systemd[1]: Stopped target network.target - Network. Jan 17 12:23:52.989977 systemd[1]: ignition-disks.service: Deactivated successfully. Jan 17 12:23:52.990948 systemd[1]: Stopped ignition-disks.service - Ignition (disks). Jan 17 12:23:52.992968 systemd[1]: ignition-kargs.service: Deactivated successfully. Jan 17 12:23:52.993023 systemd[1]: Stopped ignition-kargs.service - Ignition (kargs). Jan 17 12:23:52.995923 systemd[1]: ignition-setup.service: Deactivated successfully. Jan 17 12:23:52.996994 systemd[1]: Stopped ignition-setup.service - Ignition (setup). Jan 17 12:23:52.999092 systemd[1]: ignition-setup-pre.service: Deactivated successfully. Jan 17 12:23:53.000091 systemd[1]: Stopped ignition-setup-pre.service - Ignition env setup. Jan 17 12:23:53.002369 systemd[1]: Stopping systemd-networkd.service - Network Configuration... Jan 17 12:23:53.004790 systemd[1]: Stopping systemd-resolved.service - Network Name Resolution... Jan 17 12:23:53.007941 systemd[1]: sysroot-boot.mount: Deactivated successfully. Jan 17 12:23:53.008739 systemd-networkd[775]: eth0: DHCPv6 lease lost Jan 17 12:23:53.010016 systemd[1]: systemd-networkd.service: Deactivated successfully. Jan 17 12:23:53.011064 systemd[1]: Stopped systemd-networkd.service - Network Configuration. Jan 17 12:23:53.013520 systemd[1]: systemd-resolved.service: Deactivated successfully. Jan 17 12:23:53.014677 systemd[1]: Stopped systemd-resolved.service - Network Name Resolution. Jan 17 12:23:53.019122 systemd[1]: systemd-networkd.socket: Deactivated successfully. Jan 17 12:23:53.020128 systemd[1]: Closed systemd-networkd.socket - Network Service Netlink Socket. Jan 17 12:23:53.029845 systemd[1]: Stopping network-cleanup.service - Network Cleanup... Jan 17 12:23:53.030798 systemd[1]: parse-ip-for-networkd.service: Deactivated successfully. Jan 17 12:23:53.030871 systemd[1]: Stopped parse-ip-for-networkd.service - Write systemd-networkd units from cmdline. Jan 17 12:23:53.034291 systemd[1]: systemd-sysctl.service: Deactivated successfully. Jan 17 12:23:53.036529 systemd[1]: Stopped systemd-sysctl.service - Apply Kernel Variables. Jan 17 12:23:53.038620 systemd[1]: systemd-modules-load.service: Deactivated successfully. Jan 17 12:23:53.038672 systemd[1]: Stopped systemd-modules-load.service - Load Kernel Modules. Jan 17 12:23:53.041778 systemd[1]: systemd-tmpfiles-setup.service: Deactivated successfully. 
Jan 17 12:23:53.041841 systemd[1]: Stopped systemd-tmpfiles-setup.service - Create System Files and Directories. Jan 17 12:23:53.045363 systemd[1]: Stopping systemd-udevd.service - Rule-based Manager for Device Events and Files... Jan 17 12:23:53.055650 systemd[1]: network-cleanup.service: Deactivated successfully. Jan 17 12:23:53.055817 systemd[1]: Stopped network-cleanup.service - Network Cleanup. Jan 17 12:23:53.061497 systemd[1]: systemd-udevd.service: Deactivated successfully. Jan 17 12:23:53.061677 systemd[1]: Stopped systemd-udevd.service - Rule-based Manager for Device Events and Files. Jan 17 12:23:53.062835 systemd[1]: systemd-udevd-control.socket: Deactivated successfully. Jan 17 12:23:53.062886 systemd[1]: Closed systemd-udevd-control.socket - udev Control Socket. Jan 17 12:23:53.064899 systemd[1]: systemd-udevd-kernel.socket: Deactivated successfully. Jan 17 12:23:53.064937 systemd[1]: Closed systemd-udevd-kernel.socket - udev Kernel Socket. Jan 17 12:23:53.066820 systemd[1]: dracut-pre-udev.service: Deactivated successfully. Jan 17 12:23:53.066877 systemd[1]: Stopped dracut-pre-udev.service - dracut pre-udev hook. Jan 17 12:23:53.070734 systemd[1]: dracut-cmdline.service: Deactivated successfully. Jan 17 12:23:53.070782 systemd[1]: Stopped dracut-cmdline.service - dracut cmdline hook. Jan 17 12:23:53.074545 systemd[1]: dracut-cmdline-ask.service: Deactivated successfully. Jan 17 12:23:53.074593 systemd[1]: Stopped dracut-cmdline-ask.service - dracut ask for additional cmdline parameters. Jan 17 12:23:53.093872 systemd[1]: Starting initrd-udevadm-cleanup-db.service - Cleanup udev Database... Jan 17 12:23:53.093938 systemd[1]: systemd-tmpfiles-setup-dev.service: Deactivated successfully. Jan 17 12:23:53.093989 systemd[1]: Stopped systemd-tmpfiles-setup-dev.service - Create Static Device Nodes in /dev. Jan 17 12:23:53.097266 systemd[1]: systemd-vconsole-setup.service: Deactivated successfully. Jan 17 12:23:53.097313 systemd[1]: Stopped systemd-vconsole-setup.service - Virtual Console Setup. Jan 17 12:23:53.109548 systemd[1]: initrd-udevadm-cleanup-db.service: Deactivated successfully. Jan 17 12:23:53.109664 systemd[1]: Finished initrd-udevadm-cleanup-db.service - Cleanup udev Database. Jan 17 12:23:53.173431 systemd[1]: sysroot-boot.service: Deactivated successfully. Jan 17 12:23:53.173558 systemd[1]: Stopped sysroot-boot.service - /sysroot/boot. Jan 17 12:23:53.175530 systemd[1]: Reached target initrd-switch-root.target - Switch Root. Jan 17 12:23:53.177314 systemd[1]: initrd-setup-root.service: Deactivated successfully. Jan 17 12:23:53.177365 systemd[1]: Stopped initrd-setup-root.service - Root filesystem setup. Jan 17 12:23:53.191842 systemd[1]: Starting initrd-switch-root.service - Switch Root... Jan 17 12:23:53.198742 systemd[1]: Switching root. Jan 17 12:23:53.228479 systemd-journald[192]: Journal stopped Jan 17 12:23:54.290980 systemd-journald[192]: Received SIGTERM from PID 1 (systemd). 
Jan 17 12:23:54.291052 kernel: SELinux: policy capability network_peer_controls=1 Jan 17 12:23:54.291066 kernel: SELinux: policy capability open_perms=1 Jan 17 12:23:54.291077 kernel: SELinux: policy capability extended_socket_class=1 Jan 17 12:23:54.291088 kernel: SELinux: policy capability always_check_network=0 Jan 17 12:23:54.291099 kernel: SELinux: policy capability cgroup_seclabel=1 Jan 17 12:23:54.291110 kernel: SELinux: policy capability nnp_nosuid_transition=1 Jan 17 12:23:54.291126 kernel: SELinux: policy capability genfs_seclabel_symlinks=0 Jan 17 12:23:54.291137 kernel: SELinux: policy capability ioctl_skip_cloexec=0 Jan 17 12:23:54.291151 kernel: audit: type=1403 audit(1737116633.567:2): auid=4294967295 ses=4294967295 lsm=selinux res=1 Jan 17 12:23:54.291168 systemd[1]: Successfully loaded SELinux policy in 43.451ms. Jan 17 12:23:54.291188 systemd[1]: Relabeled /dev, /dev/shm, /run, /sys/fs/cgroup in 12.238ms. Jan 17 12:23:54.291205 systemd[1]: systemd 255 running in system mode (+PAM +AUDIT +SELINUX -APPARMOR +IMA +SMACK +SECCOMP +GCRYPT -GNUTLS +OPENSSL -ACL +BLKID +CURL +ELFUTILS -FIDO2 +IDN2 -IDN +IPTC +KMOD +LIBCRYPTSETUP +LIBFDISK +PCRE2 -PWQUALITY -P11KIT -QRENCODE +TPM2 +BZIP2 +LZ4 +XZ +ZLIB +ZSTD -BPF_FRAMEWORK -XKBCOMMON +UTMP -SYSVINIT default-hierarchy=unified) Jan 17 12:23:54.291218 systemd[1]: Detected virtualization kvm. Jan 17 12:23:54.291230 systemd[1]: Detected architecture x86-64. Jan 17 12:23:54.291241 systemd[1]: Detected first boot. Jan 17 12:23:54.291255 systemd[1]: Initializing machine ID from VM UUID. Jan 17 12:23:54.291267 zram_generator::config[1052]: No configuration found. Jan 17 12:23:54.291282 systemd[1]: Populated /etc with preset unit settings. Jan 17 12:23:54.291294 systemd[1]: initrd-switch-root.service: Deactivated successfully. Jan 17 12:23:54.291306 systemd[1]: Stopped initrd-switch-root.service - Switch Root. Jan 17 12:23:54.291321 systemd[1]: systemd-journald.service: Scheduled restart job, restart counter is at 1. Jan 17 12:23:54.291333 systemd[1]: Created slice system-addon\x2dconfig.slice - Slice /system/addon-config. Jan 17 12:23:54.291347 systemd[1]: Created slice system-addon\x2drun.slice - Slice /system/addon-run. Jan 17 12:23:54.291359 systemd[1]: Created slice system-getty.slice - Slice /system/getty. Jan 17 12:23:54.291371 systemd[1]: Created slice system-modprobe.slice - Slice /system/modprobe. Jan 17 12:23:54.291384 systemd[1]: Created slice system-serial\x2dgetty.slice - Slice /system/serial-getty. Jan 17 12:23:54.291396 systemd[1]: Created slice system-system\x2dcloudinit.slice - Slice /system/system-cloudinit. Jan 17 12:23:54.291408 systemd[1]: Created slice system-systemd\x2dfsck.slice - Slice /system/systemd-fsck. Jan 17 12:23:54.291419 systemd[1]: Created slice user.slice - User and Session Slice. Jan 17 12:23:54.291431 systemd[1]: Started clevis-luks-askpass.path - Forward Password Requests to Clevis Directory Watch. Jan 17 12:23:54.291443 systemd[1]: Started systemd-ask-password-console.path - Dispatch Password Requests to Console Directory Watch. Jan 17 12:23:54.291458 systemd[1]: Started systemd-ask-password-wall.path - Forward Password Requests to Wall Directory Watch. Jan 17 12:23:54.291469 systemd[1]: Set up automount boot.automount - Boot partition Automount Point. Jan 17 12:23:54.291487 systemd[1]: Set up automount proc-sys-fs-binfmt_misc.automount - Arbitrary Executable File Formats File System Automount Point. 
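Among the lines above, "Initializing machine ID from VM UUID" means systemd derived /etc/machine-id from the hypervisor-provided SMBIOS product UUID rather than generating a random one. A small check of that relationship is sketched below; the sysfs path and the dash-stripping normalization are stated as assumptions to verify, not facts recorded in this log.

# Illustrative check: on a VM where systemd logs "Initializing machine ID from
# VM UUID", /etc/machine-id is expected to match the DMI product UUID in
# lower-case hex with the dashes removed (assumption, not from the log).
def read(path):
    with open(path) as f:
        return f.read().strip()

if __name__ == "__main__":
    product_uuid = read("/sys/class/dmi/id/product_uuid").lower().replace("-", "")
    machine_id = read("/etc/machine-id")
    print("product_uuid:", product_uuid)
    print("machine_id:  ", machine_id)
    print("match" if product_uuid == machine_id else "differ")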
Jan 17 12:23:54.291499 systemd[1]: Expecting device dev-disk-by\x2dlabel-OEM.device - /dev/disk/by-label/OEM... Jan 17 12:23:54.291511 systemd[1]: Expecting device dev-ttyS0.device - /dev/ttyS0... Jan 17 12:23:54.291528 systemd[1]: Reached target cryptsetup-pre.target - Local Encrypted Volumes (Pre). Jan 17 12:23:54.291541 systemd[1]: Stopped target initrd-switch-root.target - Switch Root. Jan 17 12:23:54.291552 systemd[1]: Stopped target initrd-fs.target - Initrd File Systems. Jan 17 12:23:54.291567 systemd[1]: Stopped target initrd-root-fs.target - Initrd Root File System. Jan 17 12:23:54.291579 systemd[1]: Reached target integritysetup.target - Local Integrity Protected Volumes. Jan 17 12:23:54.291591 systemd[1]: Reached target remote-cryptsetup.target - Remote Encrypted Volumes. Jan 17 12:23:54.291603 systemd[1]: Reached target remote-fs.target - Remote File Systems. Jan 17 12:23:54.291614 systemd[1]: Reached target slices.target - Slice Units. Jan 17 12:23:54.291626 systemd[1]: Reached target swap.target - Swaps. Jan 17 12:23:54.291638 systemd[1]: Reached target veritysetup.target - Local Verity Protected Volumes. Jan 17 12:23:54.291650 systemd[1]: Listening on systemd-coredump.socket - Process Core Dump Socket. Jan 17 12:23:54.291665 systemd[1]: Listening on systemd-networkd.socket - Network Service Netlink Socket. Jan 17 12:23:54.291677 systemd[1]: Listening on systemd-udevd-control.socket - udev Control Socket. Jan 17 12:23:54.291688 systemd[1]: Listening on systemd-udevd-kernel.socket - udev Kernel Socket. Jan 17 12:23:54.291713 systemd[1]: Listening on systemd-userdbd.socket - User Database Manager Socket. Jan 17 12:23:54.291726 systemd[1]: Mounting dev-hugepages.mount - Huge Pages File System... Jan 17 12:23:54.291738 systemd[1]: Mounting dev-mqueue.mount - POSIX Message Queue File System... Jan 17 12:23:54.291749 systemd[1]: Mounting media.mount - External Media Directory... Jan 17 12:23:54.291761 systemd[1]: proc-xen.mount - /proc/xen was skipped because of an unmet condition check (ConditionVirtualization=xen). Jan 17 12:23:54.291773 systemd[1]: Mounting sys-kernel-debug.mount - Kernel Debug File System... Jan 17 12:23:54.291787 systemd[1]: Mounting sys-kernel-tracing.mount - Kernel Trace File System... Jan 17 12:23:54.291807 systemd[1]: Mounting tmp.mount - Temporary Directory /tmp... Jan 17 12:23:54.291819 systemd[1]: var-lib-machines.mount - Virtual Machine and Container Storage (Compatibility) was skipped because of an unmet condition check (ConditionPathExists=/var/lib/machines.raw). Jan 17 12:23:54.291831 systemd[1]: Reached target machines.target - Containers. Jan 17 12:23:54.291843 systemd[1]: Starting flatcar-tmpfiles.service - Create missing system files... Jan 17 12:23:54.291855 systemd[1]: ignition-delete-config.service - Ignition (delete config) was skipped because no trigger condition checks were met. Jan 17 12:23:54.291867 systemd[1]: Starting kmod-static-nodes.service - Create List of Static Device Nodes... Jan 17 12:23:54.291880 systemd[1]: Starting modprobe@configfs.service - Load Kernel Module configfs... Jan 17 12:23:54.291891 systemd[1]: Starting modprobe@dm_mod.service - Load Kernel Module dm_mod... Jan 17 12:23:54.291906 systemd[1]: Starting modprobe@drm.service - Load Kernel Module drm... Jan 17 12:23:54.291918 systemd[1]: Starting modprobe@efi_pstore.service - Load Kernel Module efi_pstore... Jan 17 12:23:54.291930 systemd[1]: Starting modprobe@fuse.service - Load Kernel Module fuse... 
Jan 17 12:23:54.291942 systemd[1]: Starting modprobe@loop.service - Load Kernel Module loop... Jan 17 12:23:54.291954 systemd[1]: setup-nsswitch.service - Create /etc/nsswitch.conf was skipped because of an unmet condition check (ConditionPathExists=!/etc/nsswitch.conf). Jan 17 12:23:54.291966 systemd[1]: systemd-fsck-root.service: Deactivated successfully. Jan 17 12:23:54.291978 systemd[1]: Stopped systemd-fsck-root.service - File System Check on Root Device. Jan 17 12:23:54.291990 systemd[1]: systemd-fsck-usr.service: Deactivated successfully. Jan 17 12:23:54.292004 systemd[1]: Stopped systemd-fsck-usr.service. Jan 17 12:23:54.292015 kernel: fuse: init (API version 7.39) Jan 17 12:23:54.292026 kernel: loop: module loaded Jan 17 12:23:54.292038 systemd[1]: Starting systemd-journald.service - Journal Service... Jan 17 12:23:54.292050 systemd[1]: Starting systemd-modules-load.service - Load Kernel Modules... Jan 17 12:23:54.292063 systemd[1]: Starting systemd-network-generator.service - Generate network units from Kernel command line... Jan 17 12:23:54.292075 kernel: ACPI: bus type drm_connector registered Jan 17 12:23:54.292086 systemd[1]: Starting systemd-remount-fs.service - Remount Root and Kernel File Systems... Jan 17 12:23:54.292115 systemd-journald[1129]: Collecting audit messages is disabled. Jan 17 12:23:54.292139 systemd-journald[1129]: Journal started Jan 17 12:23:54.292161 systemd-journald[1129]: Runtime Journal (/run/log/journal/3ffdad07a03944b28d4ba0ac1218a4dd) is 6.0M, max 48.3M, 42.2M free. Jan 17 12:23:54.074355 systemd[1]: Queued start job for default target multi-user.target. Jan 17 12:23:54.093196 systemd[1]: Unnecessary job was removed for dev-vda6.device - /dev/vda6. Jan 17 12:23:54.093659 systemd[1]: systemd-journald.service: Deactivated successfully. Jan 17 12:23:54.293718 systemd[1]: Starting systemd-udev-trigger.service - Coldplug All udev Devices... Jan 17 12:23:54.296917 systemd[1]: verity-setup.service: Deactivated successfully. Jan 17 12:23:54.296981 systemd[1]: Stopped verity-setup.service. Jan 17 12:23:54.299725 systemd[1]: xenserver-pv-version.service - Set fake PV driver version for XenServer was skipped because of an unmet condition check (ConditionVirtualization=xen). Jan 17 12:23:54.302833 systemd[1]: Started systemd-journald.service - Journal Service. Jan 17 12:23:54.304114 systemd[1]: Mounted dev-hugepages.mount - Huge Pages File System. Jan 17 12:23:54.305300 systemd[1]: Mounted dev-mqueue.mount - POSIX Message Queue File System. Jan 17 12:23:54.306543 systemd[1]: Mounted media.mount - External Media Directory. Jan 17 12:23:54.307660 systemd[1]: Mounted sys-kernel-debug.mount - Kernel Debug File System. Jan 17 12:23:54.308939 systemd[1]: Mounted sys-kernel-tracing.mount - Kernel Trace File System. Jan 17 12:23:54.310207 systemd[1]: Mounted tmp.mount - Temporary Directory /tmp. Jan 17 12:23:54.311491 systemd[1]: Finished flatcar-tmpfiles.service - Create missing system files. Jan 17 12:23:54.312956 systemd[1]: Finished kmod-static-nodes.service - Create List of Static Device Nodes. Jan 17 12:23:54.314496 systemd[1]: modprobe@configfs.service: Deactivated successfully. Jan 17 12:23:54.314664 systemd[1]: Finished modprobe@configfs.service - Load Kernel Module configfs. Jan 17 12:23:54.316152 systemd[1]: modprobe@dm_mod.service: Deactivated successfully. Jan 17 12:23:54.316320 systemd[1]: Finished modprobe@dm_mod.service - Load Kernel Module dm_mod. Jan 17 12:23:54.317782 systemd[1]: modprobe@drm.service: Deactivated successfully. 
Jan 17 12:23:54.317974 systemd[1]: Finished modprobe@drm.service - Load Kernel Module drm. Jan 17 12:23:54.319517 systemd[1]: modprobe@efi_pstore.service: Deactivated successfully. Jan 17 12:23:54.319680 systemd[1]: Finished modprobe@efi_pstore.service - Load Kernel Module efi_pstore. Jan 17 12:23:54.321334 systemd[1]: modprobe@fuse.service: Deactivated successfully. Jan 17 12:23:54.321498 systemd[1]: Finished modprobe@fuse.service - Load Kernel Module fuse. Jan 17 12:23:54.323062 systemd[1]: modprobe@loop.service: Deactivated successfully. Jan 17 12:23:54.323241 systemd[1]: Finished modprobe@loop.service - Load Kernel Module loop. Jan 17 12:23:54.324772 systemd[1]: Finished systemd-modules-load.service - Load Kernel Modules. Jan 17 12:23:54.326333 systemd[1]: Finished systemd-network-generator.service - Generate network units from Kernel command line. Jan 17 12:23:54.328051 systemd[1]: Finished systemd-remount-fs.service - Remount Root and Kernel File Systems. Jan 17 12:23:54.343509 systemd[1]: Reached target network-pre.target - Preparation for Network. Jan 17 12:23:54.350908 systemd[1]: Mounting sys-fs-fuse-connections.mount - FUSE Control File System... Jan 17 12:23:54.356823 systemd[1]: Mounting sys-kernel-config.mount - Kernel Configuration File System... Jan 17 12:23:54.358022 systemd[1]: remount-root.service - Remount Root File System was skipped because of an unmet condition check (ConditionPathIsReadWrite=!/). Jan 17 12:23:54.358061 systemd[1]: Reached target local-fs.target - Local File Systems. Jan 17 12:23:54.360176 systemd[1]: Listening on systemd-sysext.socket - System Extension Image Management (Varlink). Jan 17 12:23:54.362695 systemd[1]: Starting dracut-shutdown.service - Restore /run/initramfs on shutdown... Jan 17 12:23:54.365127 systemd[1]: Starting ldconfig.service - Rebuild Dynamic Linker Cache... Jan 17 12:23:54.366474 systemd[1]: systemd-binfmt.service - Set Up Additional Binary Formats was skipped because no trigger condition checks were met. Jan 17 12:23:54.371909 systemd[1]: Starting systemd-hwdb-update.service - Rebuild Hardware Database... Jan 17 12:23:54.375169 systemd[1]: Starting systemd-journal-flush.service - Flush Journal to Persistent Storage... Jan 17 12:23:54.376397 systemd[1]: systemd-pstore.service - Platform Persistent Storage Archival was skipped because of an unmet condition check (ConditionDirectoryNotEmpty=/sys/fs/pstore). Jan 17 12:23:54.382590 systemd[1]: Starting systemd-random-seed.service - Load/Save OS Random Seed... Jan 17 12:23:54.383745 systemd[1]: systemd-repart.service - Repartition Root Disk was skipped because no trigger condition checks were met. Jan 17 12:23:54.386095 systemd[1]: Starting systemd-sysctl.service - Apply Kernel Variables... Jan 17 12:23:54.391443 systemd[1]: Starting systemd-sysext.service - Merge System Extension Images into /usr/ and /opt/... Jan 17 12:23:54.394932 systemd[1]: Starting systemd-sysusers.service - Create System Users... Jan 17 12:23:54.399970 systemd-journald[1129]: Time spent on flushing to /var/log/journal/3ffdad07a03944b28d4ba0ac1218a4dd is 14.975ms for 974 entries. Jan 17 12:23:54.399970 systemd-journald[1129]: System Journal (/var/log/journal/3ffdad07a03944b28d4ba0ac1218a4dd) is 8.0M, max 195.6M, 187.6M free. Jan 17 12:23:54.503202 systemd-journald[1129]: Received client request to flush runtime journal. 
Jan 17 12:23:54.503245 kernel: loop0: detected capacity change from 0 to 142488 Jan 17 12:23:54.503283 kernel: squashfs: version 4.0 (2009/01/31) Phillip Lougher Jan 17 12:23:54.397863 systemd[1]: Mounted sys-fs-fuse-connections.mount - FUSE Control File System. Jan 17 12:23:54.402355 systemd[1]: Mounted sys-kernel-config.mount - Kernel Configuration File System. Jan 17 12:23:54.404049 systemd[1]: Finished dracut-shutdown.service - Restore /run/initramfs on shutdown. Jan 17 12:23:54.419007 systemd[1]: Finished systemd-udev-trigger.service - Coldplug All udev Devices. Jan 17 12:23:54.429412 systemd[1]: Starting systemd-udev-settle.service - Wait for udev To Complete Device Initialization... Jan 17 12:23:54.444992 udevadm[1174]: systemd-udev-settle.service is deprecated. Please fix lvm2-activation-early.service, lvm2-activation.service not to pull it in. Jan 17 12:23:54.462846 systemd[1]: Finished systemd-sysctl.service - Apply Kernel Variables. Jan 17 12:23:54.476800 systemd[1]: Finished systemd-sysusers.service - Create System Users. Jan 17 12:23:54.487968 systemd[1]: Starting systemd-tmpfiles-setup-dev.service - Create Static Device Nodes in /dev... Jan 17 12:23:54.489528 systemd[1]: Finished systemd-random-seed.service - Load/Save OS Random Seed. Jan 17 12:23:54.491743 systemd[1]: Reached target first-boot-complete.target - First Boot Complete. Jan 17 12:23:54.496167 systemd[1]: Starting systemd-machine-id-commit.service - Commit a transient machine-id on disk... Jan 17 12:23:54.505339 systemd[1]: Finished systemd-journal-flush.service - Flush Journal to Persistent Storage. Jan 17 12:23:54.511068 kernel: loop1: detected capacity change from 0 to 140768 Jan 17 12:23:54.517369 systemd-tmpfiles[1178]: ACLs are not supported, ignoring. Jan 17 12:23:54.517395 systemd-tmpfiles[1178]: ACLs are not supported, ignoring. Jan 17 12:23:54.525301 systemd[1]: Finished systemd-tmpfiles-setup-dev.service - Create Static Device Nodes in /dev. Jan 17 12:23:54.532123 systemd[1]: etc-machine\x2did.mount: Deactivated successfully. Jan 17 12:23:54.534080 systemd[1]: Finished systemd-machine-id-commit.service - Commit a transient machine-id on disk. Jan 17 12:23:54.551733 kernel: loop2: detected capacity change from 0 to 205544 Jan 17 12:23:54.581733 kernel: loop3: detected capacity change from 0 to 142488 Jan 17 12:23:54.592739 kernel: loop4: detected capacity change from 0 to 140768 Jan 17 12:23:54.602908 kernel: loop5: detected capacity change from 0 to 205544 Jan 17 12:23:54.609747 (sd-merge)[1191]: Using extensions 'containerd-flatcar', 'docker-flatcar', 'kubernetes'. Jan 17 12:23:54.610525 (sd-merge)[1191]: Merged extensions into '/usr'. Jan 17 12:23:54.614836 systemd[1]: Reloading requested from client PID 1166 ('systemd-sysext') (unit systemd-sysext.service)... Jan 17 12:23:54.614852 systemd[1]: Reloading... Jan 17 12:23:54.665065 zram_generator::config[1216]: No configuration found. Jan 17 12:23:54.754479 ldconfig[1161]: /sbin/ldconfig: /lib/ld.so.conf is not an ELF file - it has the wrong magic bytes at the start. Jan 17 12:23:54.797745 systemd[1]: /usr/lib/systemd/system/docker.socket:6: ListenStream= references a path below legacy directory /var/run/, updating /var/run/docker.sock → /run/docker.sock; please update the unit file accordingly. Jan 17 12:23:54.845954 systemd[1]: Reloading finished in 230 ms. Jan 17 12:23:54.876645 systemd[1]: Finished ldconfig.service - Rebuild Dynamic Linker Cache. 
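The (sd-merge) lines above show systemd-sysext discovering the containerd-flatcar, docker-flatcar, and kubernetes extension images and merging them into /usr, after which the service manager reloads. A minimal sketch that lists what would be considered, assuming the commonly documented search directories (/etc/extensions, /run/extensions, /var/lib/extensions):

# Illustrative sketch: list candidate system extension images, similar to what
# systemd-sysext considers before "Merged extensions into '/usr'" above.
import os

SEARCH_DIRS = ["/etc/extensions", "/run/extensions", "/var/lib/extensions"]

def list_sysexts():
    found = []
    for directory in SEARCH_DIRS:
        if os.path.isdir(directory):
            for entry in sorted(os.listdir(directory)):
                found.append(os.path.join(directory, entry))
    return found

if __name__ == "__main__":
    for path in list_sysexts():
        print(path)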
Jan 17 12:23:54.878368 systemd[1]: Finished systemd-sysext.service - Merge System Extension Images into /usr/ and /opt/. Jan 17 12:23:54.895845 systemd[1]: Starting ensure-sysext.service... Jan 17 12:23:54.897668 systemd[1]: Starting systemd-tmpfiles-setup.service - Create System Files and Directories... Jan 17 12:23:54.908137 systemd[1]: Reloading requested from client PID 1254 ('systemctl') (unit ensure-sysext.service)... Jan 17 12:23:54.908152 systemd[1]: Reloading... Jan 17 12:23:54.922801 systemd-tmpfiles[1255]: /usr/lib/tmpfiles.d/provision.conf:20: Duplicate line for path "/root", ignoring. Jan 17 12:23:54.923163 systemd-tmpfiles[1255]: /usr/lib/tmpfiles.d/systemd-flatcar.conf:6: Duplicate line for path "/var/log/journal", ignoring. Jan 17 12:23:54.924142 systemd-tmpfiles[1255]: /usr/lib/tmpfiles.d/systemd.conf:29: Duplicate line for path "/var/lib/systemd", ignoring. Jan 17 12:23:54.924430 systemd-tmpfiles[1255]: ACLs are not supported, ignoring. Jan 17 12:23:54.924512 systemd-tmpfiles[1255]: ACLs are not supported, ignoring. Jan 17 12:23:54.927882 systemd-tmpfiles[1255]: Detected autofs mount point /boot during canonicalization of boot. Jan 17 12:23:54.927896 systemd-tmpfiles[1255]: Skipping /boot Jan 17 12:23:54.939132 systemd-tmpfiles[1255]: Detected autofs mount point /boot during canonicalization of boot. Jan 17 12:23:54.939271 systemd-tmpfiles[1255]: Skipping /boot Jan 17 12:23:54.987767 zram_generator::config[1288]: No configuration found. Jan 17 12:23:55.076483 systemd[1]: /usr/lib/systemd/system/docker.socket:6: ListenStream= references a path below legacy directory /var/run/, updating /var/run/docker.sock → /run/docker.sock; please update the unit file accordingly. Jan 17 12:23:55.125501 systemd[1]: Reloading finished in 216 ms. Jan 17 12:23:55.151435 systemd[1]: Finished systemd-hwdb-update.service - Rebuild Hardware Database. Jan 17 12:23:55.169374 systemd[1]: Finished systemd-tmpfiles-setup.service - Create System Files and Directories. Jan 17 12:23:55.178240 systemd[1]: Starting audit-rules.service - Load Security Auditing Rules... Jan 17 12:23:55.180713 systemd[1]: Starting clean-ca-certificates.service - Clean up broken links in /etc/ssl/certs... Jan 17 12:23:55.183139 systemd[1]: Starting systemd-journal-catalog-update.service - Rebuild Journal Catalog... Jan 17 12:23:55.188283 systemd[1]: Starting systemd-resolved.service - Network Name Resolution... Jan 17 12:23:55.193460 systemd[1]: Starting systemd-udevd.service - Rule-based Manager for Device Events and Files... Jan 17 12:23:55.196012 systemd[1]: Starting systemd-update-utmp.service - Record System Boot/Shutdown in UTMP... Jan 17 12:23:55.199774 systemd[1]: proc-xen.mount - /proc/xen was skipped because of an unmet condition check (ConditionVirtualization=xen). Jan 17 12:23:55.200021 systemd[1]: ignition-delete-config.service - Ignition (delete config) was skipped because no trigger condition checks were met. Jan 17 12:23:55.202505 systemd[1]: Starting modprobe@dm_mod.service - Load Kernel Module dm_mod... Jan 17 12:23:55.210935 systemd[1]: Starting modprobe@efi_pstore.service - Load Kernel Module efi_pstore... Jan 17 12:23:55.215104 systemd[1]: Starting modprobe@loop.service - Load Kernel Module loop... Jan 17 12:23:55.216353 systemd[1]: systemd-binfmt.service - Set Up Additional Binary Formats was skipped because no trigger condition checks were met. Jan 17 12:23:55.219862 systemd[1]: Starting systemd-userdbd.service - User Database Manager... 
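The systemd-tmpfiles warnings above ("Duplicate line for path ..., ignoring") come from the same path being declared in more than one tmpfiles.d fragment. A rough helper for spotting such duplicates is sketched below, assuming the standard fragment directories and deliberately ignoring systemd's precedence rules, so it over-reports compared to the real tool:

# Illustrative helper: report paths declared in more than one tmpfiles.d
# fragment, which is what triggers the "Duplicate line for path" warnings above.
import collections
import glob
import os

FRAGMENT_DIRS = ["/etc/tmpfiles.d", "/run/tmpfiles.d", "/usr/lib/tmpfiles.d"]

def duplicate_paths():
    seen = collections.defaultdict(set)
    for directory in FRAGMENT_DIRS:
        for fragment in sorted(glob.glob(os.path.join(directory, "*.conf"))):
            with open(fragment) as f:
                for line in f:
                    line = line.strip()
                    if not line or line.startswith("#"):
                        continue
                    fields = line.split()
                    if len(fields) >= 2:
                        seen[fields[1]].add(fragment)
    return {path: sorted(frags) for path, frags in seen.items() if len(frags) > 1}

if __name__ == "__main__":
    for path, fragments in sorted(duplicate_paths().items()):
        print(path, "->", ", ".join(fragments))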
Jan 17 12:23:55.221615 systemd[1]: xenserver-pv-version.service - Set fake PV driver version for XenServer was skipped because of an unmet condition check (ConditionVirtualization=xen). Jan 17 12:23:55.222966 systemd[1]: Finished systemd-journal-catalog-update.service - Rebuild Journal Catalog. Jan 17 12:23:55.235019 systemd[1]: modprobe@dm_mod.service: Deactivated successfully. Jan 17 12:23:55.235187 systemd[1]: Finished modprobe@dm_mod.service - Load Kernel Module dm_mod. Jan 17 12:23:55.235777 augenrules[1346]: No rules Jan 17 12:23:55.237475 systemd[1]: Finished audit-rules.service - Load Security Auditing Rules. Jan 17 12:23:55.237745 systemd-udevd[1327]: Using default interface naming scheme 'v255'. Jan 17 12:23:55.239781 systemd[1]: modprobe@efi_pstore.service: Deactivated successfully. Jan 17 12:23:55.240752 systemd[1]: Finished modprobe@efi_pstore.service - Load Kernel Module efi_pstore. Jan 17 12:23:55.242938 systemd[1]: modprobe@loop.service: Deactivated successfully. Jan 17 12:23:55.243173 systemd[1]: Finished modprobe@loop.service - Load Kernel Module loop. Jan 17 12:23:55.250315 systemd[1]: systemd-pstore.service - Platform Persistent Storage Archival was skipped because of an unmet condition check (ConditionDirectoryNotEmpty=/sys/fs/pstore). Jan 17 12:23:55.250564 systemd[1]: systemd-repart.service - Repartition Root Disk was skipped because no trigger condition checks were met. Jan 17 12:23:55.256446 systemd[1]: Starting systemd-update-done.service - Update is Completed... Jan 17 12:23:55.260007 systemd[1]: Started systemd-udevd.service - Rule-based Manager for Device Events and Files. Jan 17 12:23:55.262757 systemd[1]: proc-xen.mount - /proc/xen was skipped because of an unmet condition check (ConditionVirtualization=xen). Jan 17 12:23:55.262969 systemd[1]: ignition-delete-config.service - Ignition (delete config) was skipped because no trigger condition checks were met. Jan 17 12:23:55.268098 systemd[1]: Starting modprobe@dm_mod.service - Load Kernel Module dm_mod... Jan 17 12:23:55.271663 systemd[1]: Starting modprobe@efi_pstore.service - Load Kernel Module efi_pstore... Jan 17 12:23:55.274871 systemd[1]: Starting modprobe@loop.service - Load Kernel Module loop... Jan 17 12:23:55.276922 systemd[1]: systemd-binfmt.service - Set Up Additional Binary Formats was skipped because no trigger condition checks were met. Jan 17 12:23:55.280928 systemd[1]: Starting systemd-networkd.service - Network Configuration... Jan 17 12:23:55.282757 systemd[1]: xenserver-pv-version.service - Set fake PV driver version for XenServer was skipped because of an unmet condition check (ConditionVirtualization=xen). Jan 17 12:23:55.284890 systemd[1]: Finished clean-ca-certificates.service - Clean up broken links in /etc/ssl/certs. Jan 17 12:23:55.286666 systemd[1]: Finished systemd-update-utmp.service - Record System Boot/Shutdown in UTMP. Jan 17 12:23:55.290007 systemd[1]: modprobe@dm_mod.service: Deactivated successfully. Jan 17 12:23:55.290171 systemd[1]: Finished modprobe@dm_mod.service - Load Kernel Module dm_mod. Jan 17 12:23:55.293311 systemd[1]: modprobe@efi_pstore.service: Deactivated successfully. Jan 17 12:23:55.293477 systemd[1]: Finished modprobe@efi_pstore.service - Load Kernel Module efi_pstore. Jan 17 12:23:55.297305 systemd[1]: modprobe@loop.service: Deactivated successfully. Jan 17 12:23:55.297466 systemd[1]: Finished modprobe@loop.service - Load Kernel Module loop. Jan 17 12:23:55.306946 systemd[1]: Finished systemd-update-done.service - Update is Completed. 
Jan 17 12:23:55.313500 systemd[1]: Started systemd-userdbd.service - User Database Manager. Jan 17 12:23:55.320789 kernel: BTRFS warning: duplicate device /dev/vda3 devid 1 generation 36 scanned by (udev-worker) (1359) Jan 17 12:23:55.331327 systemd[1]: Condition check resulted in dev-ttyS0.device - /dev/ttyS0 being skipped. Jan 17 12:23:55.347499 systemd[1]: Finished ensure-sysext.service. Jan 17 12:23:55.356641 systemd[1]: proc-xen.mount - /proc/xen was skipped because of an unmet condition check (ConditionVirtualization=xen). Jan 17 12:23:55.356836 systemd[1]: ignition-delete-config.service - Ignition (delete config) was skipped because no trigger condition checks were met. Jan 17 12:23:55.363886 systemd[1]: Starting modprobe@dm_mod.service - Load Kernel Module dm_mod... Jan 17 12:23:55.370150 systemd[1]: Starting modprobe@drm.service - Load Kernel Module drm... Jan 17 12:23:55.376037 systemd[1]: Starting modprobe@efi_pstore.service - Load Kernel Module efi_pstore... Jan 17 12:23:55.380359 systemd[1]: Starting modprobe@loop.service - Load Kernel Module loop... Jan 17 12:23:55.381521 systemd[1]: systemd-binfmt.service - Set Up Additional Binary Formats was skipped because no trigger condition checks were met. Jan 17 12:23:55.383990 systemd[1]: Starting systemd-timesyncd.service - Network Time Synchronization... Jan 17 12:23:55.385139 systemd[1]: update-ca-certificates.service - Update CA bundle at /etc/ssl/certs/ca-certificates.crt was skipped because of an unmet condition check (ConditionPathIsSymbolicLink=!/etc/ssl/certs/ca-certificates.crt). Jan 17 12:23:55.385160 systemd[1]: xenserver-pv-version.service - Set fake PV driver version for XenServer was skipped because of an unmet condition check (ConditionVirtualization=xen). Jan 17 12:23:55.385629 systemd[1]: modprobe@dm_mod.service: Deactivated successfully. Jan 17 12:23:55.385832 systemd[1]: Finished modprobe@dm_mod.service - Load Kernel Module dm_mod. Jan 17 12:23:55.388090 systemd[1]: modprobe@drm.service: Deactivated successfully. Jan 17 12:23:55.388251 systemd[1]: Finished modprobe@drm.service - Load Kernel Module drm. Jan 17 12:23:55.390114 systemd[1]: modprobe@efi_pstore.service: Deactivated successfully. Jan 17 12:23:55.390285 systemd[1]: Finished modprobe@efi_pstore.service - Load Kernel Module efi_pstore. Jan 17 12:23:55.393496 systemd[1]: Found device dev-disk-by\x2dlabel-OEM.device - /dev/disk/by-label/OEM. Jan 17 12:23:55.396749 kernel: input: Power Button as /devices/LNXSYSTM:00/LNXPWRBN:00/input/input2 Jan 17 12:23:55.398615 systemd[1]: modprobe@loop.service: Deactivated successfully. Jan 17 12:23:55.398821 systemd[1]: Finished modprobe@loop.service - Load Kernel Module loop. Jan 17 12:23:55.402930 kernel: ACPI: button: Power Button [PWRF] Jan 17 12:23:55.408541 systemd[1]: Starting systemd-fsck@dev-disk-by\x2dlabel-OEM.service - File System Check on /dev/disk/by-label/OEM... Jan 17 12:23:55.409755 kernel: input: ImExPS/2 Generic Explorer Mouse as /devices/platform/i8042/serio1/input/input3 Jan 17 12:23:55.410873 systemd[1]: systemd-pstore.service - Platform Persistent Storage Archival was skipped because of an unmet condition check (ConditionDirectoryNotEmpty=/sys/fs/pstore). Jan 17 12:23:55.410999 systemd[1]: systemd-repart.service - Repartition Root Disk was skipped because no trigger condition checks were met. 
Jan 17 12:23:55.426589 kernel: i801_smbus 0000:00:1f.3: Enabling SMBus device Jan 17 12:23:55.427466 kernel: i801_smbus 0000:00:1f.3: SMBus using PCI interrupt Jan 17 12:23:55.427639 kernel: i2c i2c-0: 1/1 memory slots populated (from DMI) Jan 17 12:23:55.427842 kernel: i2c i2c-0: Memory type 0x07 not supported yet, not instantiating SPD Jan 17 12:23:55.444802 systemd[1]: Finished systemd-fsck@dev-disk-by\x2dlabel-OEM.service - File System Check on /dev/disk/by-label/OEM. Jan 17 12:23:55.447284 systemd-networkd[1375]: lo: Link UP Jan 17 12:23:55.447296 systemd-networkd[1375]: lo: Gained carrier Jan 17 12:23:55.448885 systemd-networkd[1375]: Enumeration completed Jan 17 12:23:55.449270 systemd-networkd[1375]: eth0: found matching network '/usr/lib/systemd/network/zz-default.network', based on potentially unpredictable interface name. Jan 17 12:23:55.449282 systemd-networkd[1375]: eth0: Configuring with /usr/lib/systemd/network/zz-default.network. Jan 17 12:23:55.449920 systemd[1]: Started systemd-networkd.service - Network Configuration. Jan 17 12:23:55.458582 systemd-networkd[1375]: eth0: Link UP Jan 17 12:23:55.458595 systemd-networkd[1375]: eth0: Gained carrier Jan 17 12:23:55.458606 systemd-networkd[1375]: eth0: found matching network '/usr/lib/systemd/network/zz-default.network', based on potentially unpredictable interface name. Jan 17 12:23:55.464998 systemd-resolved[1325]: Positive Trust Anchors: Jan 17 12:23:55.465018 systemd-resolved[1325]: . IN DS 20326 8 2 e06d44b80b8f1d39a95c0b0d7c65d08458e880409bbc683457104237c7f8ec8d Jan 17 12:23:55.465058 systemd-resolved[1325]: Negative trust anchors: home.arpa 10.in-addr.arpa 16.172.in-addr.arpa 17.172.in-addr.arpa 18.172.in-addr.arpa 19.172.in-addr.arpa 20.172.in-addr.arpa 21.172.in-addr.arpa 22.172.in-addr.arpa 23.172.in-addr.arpa 24.172.in-addr.arpa 25.172.in-addr.arpa 26.172.in-addr.arpa 27.172.in-addr.arpa 28.172.in-addr.arpa 29.172.in-addr.arpa 30.172.in-addr.arpa 31.172.in-addr.arpa 170.0.0.192.in-addr.arpa 171.0.0.192.in-addr.arpa 168.192.in-addr.arpa d.f.ip6.arpa ipv4only.arpa resolver.arpa corp home internal intranet lan local private test Jan 17 12:23:55.466932 systemd[1]: Starting systemd-networkd-wait-online.service - Wait for Network to be Configured... Jan 17 12:23:55.471081 systemd-resolved[1325]: Defaulting to hostname 'linux'. Jan 17 12:23:55.475986 systemd[1]: Started systemd-resolved.service - Network Name Resolution. Jan 17 12:23:55.478140 systemd[1]: Reached target network.target - Network. Jan 17 12:23:55.479146 systemd[1]: Reached target nss-lookup.target - Host and Network Name Lookups. Jan 17 12:23:55.486265 systemd[1]: Starting systemd-vconsole-setup.service - Virtual Console Setup... Jan 17 12:23:55.493489 systemd[1]: systemd-vconsole-setup.service: Deactivated successfully. Jan 17 12:23:55.493723 systemd[1]: Stopped systemd-vconsole-setup.service - Virtual Console Setup. Jan 17 12:23:55.500872 systemd[1]: Starting systemd-vconsole-setup.service - Virtual Console Setup... Jan 17 12:23:55.510842 systemd-networkd[1375]: eth0: DHCPv4 address 10.0.0.161/16, gateway 10.0.0.1 acquired from 10.0.0.1 Jan 17 12:23:55.532620 kernel: mousedev: PS/2 mouse device common for all mice Jan 17 12:23:55.535029 systemd[1]: Started systemd-timesyncd.service - Network Time Synchronization. Jan 17 12:23:55.535193 systemd[1]: Reached target time-set.target - System Time Set. Jan 17 12:23:55.990611 systemd-resolved[1325]: Clock change detected. Flushing caches. 
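After the second enumeration above, eth0 again obtains 10.0.0.161/16 from the DHCP server at 10.0.0.1. To inspect that state interactively one can ask systemd-networkd itself; below is a thin Python wrapper around the standard networkctl CLI (the interface name is taken from this log and may differ elsewhere):

# Illustrative sketch: show systemd-networkd's view of the interface that
# acquired the DHCPv4 lease logged above.
import subprocess

def show_link(ifname="eth0"):
    return subprocess.run(
        ["networkctl", "status", ifname, "--no-pager"],
        check=True, capture_output=True, text=True,
    ).stdout

if __name__ == "__main__":
    print(show_link())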
Jan 17 12:23:55.990738 systemd-timesyncd[1404]: Contacted time server 10.0.0.1:123 (10.0.0.1). Jan 17 12:23:55.990797 systemd-timesyncd[1404]: Initial clock synchronization to Fri 2025-01-17 12:23:55.990558 UTC. Jan 17 12:23:55.999738 kernel: kvm_amd: TSC scaling supported Jan 17 12:23:55.999770 kernel: kvm_amd: Nested Virtualization enabled Jan 17 12:23:55.999784 kernel: kvm_amd: Nested Paging enabled Jan 17 12:23:55.999796 kernel: kvm_amd: LBR virtualization supported Jan 17 12:23:56.000138 kernel: kvm_amd: Virtual VMLOAD VMSAVE supported Jan 17 12:23:56.001356 kernel: kvm_amd: Virtual GIF supported Jan 17 12:23:56.022052 kernel: EDAC MC: Ver: 3.0.0 Jan 17 12:23:56.030563 systemd[1]: Finished systemd-vconsole-setup.service - Virtual Console Setup. Jan 17 12:23:56.070495 systemd[1]: Finished systemd-udev-settle.service - Wait for udev To Complete Device Initialization. Jan 17 12:23:56.092345 systemd[1]: Starting lvm2-activation-early.service - Activation of LVM2 logical volumes... Jan 17 12:23:56.100709 lvm[1430]: WARNING: Failed to connect to lvmetad. Falling back to device scanning. Jan 17 12:23:56.128992 systemd[1]: Finished lvm2-activation-early.service - Activation of LVM2 logical volumes. Jan 17 12:23:56.131330 systemd[1]: Reached target cryptsetup.target - Local Encrypted Volumes. Jan 17 12:23:56.132474 systemd[1]: Reached target sysinit.target - System Initialization. Jan 17 12:23:56.133646 systemd[1]: Started motdgen.path - Watch for update engine configuration changes. Jan 17 12:23:56.134898 systemd[1]: Started user-cloudinit@var-lib-flatcar\x2dinstall-user_data.path - Watch for a cloud-config at /var/lib/flatcar-install/user_data. Jan 17 12:23:56.136339 systemd[1]: Started logrotate.timer - Daily rotation of log files. Jan 17 12:23:56.137501 systemd[1]: Started mdadm.timer - Weekly check for MD array's redundancy information.. Jan 17 12:23:56.138743 systemd[1]: Started systemd-tmpfiles-clean.timer - Daily Cleanup of Temporary Directories. Jan 17 12:23:56.139974 systemd[1]: update-engine-stub.timer - Update Engine Stub Timer was skipped because of an unmet condition check (ConditionPathExists=/usr/.noupdate). Jan 17 12:23:56.140000 systemd[1]: Reached target paths.target - Path Units. Jan 17 12:23:56.140910 systemd[1]: Reached target timers.target - Timer Units. Jan 17 12:23:56.142586 systemd[1]: Listening on dbus.socket - D-Bus System Message Bus Socket. Jan 17 12:23:56.145360 systemd[1]: Starting docker.socket - Docker Socket for the API... Jan 17 12:23:56.154609 systemd[1]: Listening on sshd.socket - OpenSSH Server Socket. Jan 17 12:23:56.156911 systemd[1]: Starting lvm2-activation.service - Activation of LVM2 logical volumes... Jan 17 12:23:56.158513 systemd[1]: Listening on docker.socket - Docker Socket for the API. Jan 17 12:23:56.159650 systemd[1]: Reached target sockets.target - Socket Units. Jan 17 12:23:56.160607 systemd[1]: Reached target basic.target - Basic System. Jan 17 12:23:56.161570 systemd[1]: addon-config@oem.service - Configure Addon /oem was skipped because no trigger condition checks were met. Jan 17 12:23:56.161594 systemd[1]: addon-run@oem.service - Run Addon /oem was skipped because no trigger condition checks were met. Jan 17 12:23:56.162513 systemd[1]: Starting containerd.service - containerd container runtime... Jan 17 12:23:56.164518 systemd[1]: Starting dbus.service - D-Bus System Message Bus... Jan 17 12:23:56.169088 lvm[1434]: WARNING: Failed to connect to lvmetad. Falling back to device scanning. 
Jan 17 12:23:56.169587 systemd[1]: Starting enable-oem-cloudinit.service - Enable cloudinit... Jan 17 12:23:56.176063 systemd[1]: Starting extend-filesystems.service - Extend Filesystems... Jan 17 12:23:56.177713 systemd[1]: flatcar-setup-environment.service - Modifies /etc/environment for CoreOS was skipped because of an unmet condition check (ConditionPathExists=/oem/bin/flatcar-setup-environment). Jan 17 12:23:56.181034 jq[1437]: false Jan 17 12:23:56.181164 systemd[1]: Starting motdgen.service - Generate /run/flatcar/motd... Jan 17 12:23:56.182262 systemd[1]: Starting ssh-key-proc-cmdline.service - Install an ssh key from /proc/cmdline... Jan 17 12:23:56.183611 systemd[1]: Starting sshd-keygen.service - Generate sshd host keys... Jan 17 12:23:56.192311 systemd[1]: Starting systemd-logind.service - User Login Management... Jan 17 12:23:56.195422 systemd[1]: tcsd.service - TCG Core Services Daemon was skipped because of an unmet condition check (ConditionPathExists=/dev/tpm0). Jan 17 12:23:56.195968 systemd[1]: cgroup compatibility translation between legacy and unified hierarchy settings activated. See cgroup-compat debug messages for details. Jan 17 12:23:56.198451 systemd[1]: Starting update-engine.service - Update Engine... Jan 17 12:23:56.201757 dbus-daemon[1436]: [system] SELinux support is enabled Jan 17 12:23:56.202165 systemd[1]: Starting update-ssh-keys-after-ignition.service - Run update-ssh-keys once after Ignition... Jan 17 12:23:56.204030 systemd[1]: Started dbus.service - D-Bus System Message Bus. Jan 17 12:23:56.208937 extend-filesystems[1438]: Found loop3 Jan 17 12:23:56.209986 extend-filesystems[1438]: Found loop4 Jan 17 12:23:56.209986 extend-filesystems[1438]: Found loop5 Jan 17 12:23:56.209986 extend-filesystems[1438]: Found sr0 Jan 17 12:23:56.209986 extend-filesystems[1438]: Found vda Jan 17 12:23:56.209986 extend-filesystems[1438]: Found vda1 Jan 17 12:23:56.209986 extend-filesystems[1438]: Found vda2 Jan 17 12:23:56.209986 extend-filesystems[1438]: Found vda3 Jan 17 12:23:56.209986 extend-filesystems[1438]: Found usr Jan 17 12:23:56.209986 extend-filesystems[1438]: Found vda4 Jan 17 12:23:56.209986 extend-filesystems[1438]: Found vda6 Jan 17 12:23:56.209986 extend-filesystems[1438]: Found vda7 Jan 17 12:23:56.209986 extend-filesystems[1438]: Found vda9 Jan 17 12:23:56.209986 extend-filesystems[1438]: Checking size of /dev/vda9 Jan 17 12:23:56.210800 systemd[1]: Finished lvm2-activation.service - Activation of LVM2 logical volumes. Jan 17 12:23:56.236183 update_engine[1448]: I20250117 12:23:56.220299 1448 main.cc:92] Flatcar Update Engine starting Jan 17 12:23:56.236183 update_engine[1448]: I20250117 12:23:56.221561 1448 update_check_scheduler.cc:74] Next update check in 9m43s Jan 17 12:23:56.220430 systemd[1]: enable-oem-cloudinit.service: Skipped due to 'exec-condition'. Jan 17 12:23:56.236443 jq[1450]: true Jan 17 12:23:56.220684 systemd[1]: Condition check resulted in enable-oem-cloudinit.service - Enable cloudinit being skipped. Jan 17 12:23:56.221009 systemd[1]: motdgen.service: Deactivated successfully. Jan 17 12:23:56.221292 systemd[1]: Finished motdgen.service - Generate /run/flatcar/motd. Jan 17 12:23:56.222816 systemd[1]: ssh-key-proc-cmdline.service: Deactivated successfully. Jan 17 12:23:56.223012 systemd[1]: Finished ssh-key-proc-cmdline.service - Install an ssh key from /proc/cmdline. 
Jan 17 12:23:56.238053 jq[1457]: true Jan 17 12:23:56.242003 (ntainerd)[1459]: containerd.service: Referenced but unset environment variable evaluates to an empty string: TORCX_IMAGEDIR, TORCX_UNPACKDIR Jan 17 12:23:56.245415 systemd[1]: system-cloudinit@usr-share-oem-cloud\x2dconfig.yml.service - Load cloud-config from /usr/share/oem/cloud-config.yml was skipped because of an unmet condition check (ConditionFileNotEmpty=/usr/share/oem/cloud-config.yml). Jan 17 12:23:56.245449 systemd[1]: Reached target system-config.target - Load system-provided cloud configs. Jan 17 12:23:56.249265 extend-filesystems[1438]: Resized partition /dev/vda9 Jan 17 12:23:56.248472 systemd[1]: user-cloudinit-proc-cmdline.service - Load cloud-config from url defined in /proc/cmdline was skipped because of an unmet condition check (ConditionKernelCommandLine=cloud-config-url). Jan 17 12:23:56.248491 systemd[1]: Reached target user-config.target - Load user-provided cloud configs. Jan 17 12:23:56.250298 systemd[1]: Started update-engine.service - Update Engine. Jan 17 12:23:56.256433 systemd-logind[1443]: Watching system buttons on /dev/input/event1 (Power Button) Jan 17 12:23:56.256463 systemd-logind[1443]: Watching system buttons on /dev/input/event0 (AT Translated Set 2 keyboard) Jan 17 12:23:56.258173 extend-filesystems[1470]: resize2fs 1.47.1 (20-May-2024) Jan 17 12:23:56.271637 kernel: BTRFS warning: duplicate device /dev/vda3 devid 1 generation 36 scanned by (udev-worker) (1360) Jan 17 12:23:56.258302 systemd[1]: Started locksmithd.service - Cluster reboot manager. Jan 17 12:23:56.261037 systemd-logind[1443]: New seat seat0. Jan 17 12:23:56.274624 systemd[1]: Started systemd-logind.service - User Login Management. Jan 17 12:23:56.300872 systemd[1]: Created slice system-sshd.slice - Slice /system/sshd. Jan 17 12:23:56.336787 locksmithd[1471]: locksmithd starting currentOperation="UPDATE_STATUS_IDLE" strategy="reboot" Jan 17 12:23:56.358364 sshd_keygen[1456]: ssh-keygen: generating new host keys: RSA ECDSA ED25519 Jan 17 12:23:56.385449 systemd[1]: Finished sshd-keygen.service - Generate sshd host keys. Jan 17 12:23:56.387077 kernel: EXT4-fs (vda9): resizing filesystem from 553472 to 1864699 blocks Jan 17 12:23:56.395349 systemd[1]: Starting issuegen.service - Generate /run/issue... Jan 17 12:23:56.397663 systemd[1]: Started sshd@0-10.0.0.161:22-10.0.0.1:44532.service - OpenSSH per-connection server daemon (10.0.0.1:44532). Jan 17 12:23:56.402966 systemd[1]: issuegen.service: Deactivated successfully. Jan 17 12:23:56.403279 systemd[1]: Finished issuegen.service - Generate /run/issue. Jan 17 12:23:56.407506 systemd[1]: Starting systemd-user-sessions.service - Permit User Sessions... Jan 17 12:23:56.482486 systemd[1]: Finished systemd-user-sessions.service - Permit User Sessions. Jan 17 12:23:56.491380 systemd[1]: Started getty@tty1.service - Getty on tty1. Jan 17 12:23:56.494328 systemd[1]: Started serial-getty@ttyS0.service - Serial Getty on ttyS0. Jan 17 12:23:56.495898 systemd[1]: Reached target getty.target - Login Prompts. Jan 17 12:23:56.531070 kernel: EXT4-fs (vda9): resized filesystem to 1864699 Jan 17 12:23:56.558136 extend-filesystems[1470]: Filesystem at /dev/vda9 is mounted on /; on-line resizing required Jan 17 12:23:56.558136 extend-filesystems[1470]: old_desc_blocks = 1, new_desc_blocks = 1 Jan 17 12:23:56.558136 extend-filesystems[1470]: The filesystem on /dev/vda9 is now 1864699 (4k) blocks long. 
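Annotation (not part of the log): the extend-filesystems messages above record an on-line ext4 resize of /dev/vda9 from 553472 to 1864699 blocks of 4 KiB. A quick sketch of the arithmetic behind those figures; block counts and block size are taken from the log, and no device is touched.

```python
# Sanity check of the resize reported by resize2fs above.

BLOCK_SIZE = 4096          # "(4k) blocks" per the EXT4 messages
OLD_BLOCKS = 553_472       # size before extend-filesystems ran
NEW_BLOCKS = 1_864_699     # size after the on-line resize

def gib(blocks: int) -> float:
    """Convert a count of 4 KiB blocks to GiB."""
    return blocks * BLOCK_SIZE / 2**30

print(f"before: {gib(OLD_BLOCKS):.2f} GiB")                 # ~2.11 GiB
print(f"after:  {gib(NEW_BLOCKS):.2f} GiB")                 # ~7.11 GiB
print(f"growth: {gib(NEW_BLOCKS - OLD_BLOCKS):.2f} GiB")    # ~5.00 GiB
```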
Jan 17 12:23:56.565663 extend-filesystems[1438]: Resized filesystem in /dev/vda9 Jan 17 12:23:56.566664 sshd[1501]: Connection closed by authenticating user core 10.0.0.1 port 44532 [preauth] Jan 17 12:23:56.566770 bash[1487]: Updated "/home/core/.ssh/authorized_keys" Jan 17 12:23:56.560151 systemd[1]: extend-filesystems.service: Deactivated successfully. Jan 17 12:23:56.560398 systemd[1]: Finished extend-filesystems.service - Extend Filesystems. Jan 17 12:23:56.562793 systemd[1]: Finished update-ssh-keys-after-ignition.service - Run update-ssh-keys once after Ignition. Jan 17 12:23:56.566400 systemd[1]: sshd@0-10.0.0.161:22-10.0.0.1:44532.service: Deactivated successfully. Jan 17 12:23:56.570796 systemd[1]: sshkeys.service was skipped because no trigger condition checks were met. Jan 17 12:23:56.630160 containerd[1459]: time="2025-01-17T12:23:56.629918094Z" level=info msg="starting containerd" revision=174e0d1785eeda18dc2beba45e1d5a188771636b version=v1.7.21 Jan 17 12:23:56.654546 containerd[1459]: time="2025-01-17T12:23:56.654471039Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.aufs\"..." type=io.containerd.snapshotter.v1 Jan 17 12:23:56.656526 containerd[1459]: time="2025-01-17T12:23:56.656483894Z" level=info msg="skip loading plugin \"io.containerd.snapshotter.v1.aufs\"..." error="aufs is not supported (modprobe aufs failed: exit status 1 \"modprobe: FATAL: Module aufs not found in directory /lib/modules/6.6.71-flatcar\\n\"): skip plugin" type=io.containerd.snapshotter.v1 Jan 17 12:23:56.656526 containerd[1459]: time="2025-01-17T12:23:56.656511586Z" level=info msg="loading plugin \"io.containerd.event.v1.exchange\"..." type=io.containerd.event.v1 Jan 17 12:23:56.656526 containerd[1459]: time="2025-01-17T12:23:56.656526454Z" level=info msg="loading plugin \"io.containerd.internal.v1.opt\"..." type=io.containerd.internal.v1 Jan 17 12:23:56.656766 containerd[1459]: time="2025-01-17T12:23:56.656727571Z" level=info msg="loading plugin \"io.containerd.warning.v1.deprecations\"..." type=io.containerd.warning.v1 Jan 17 12:23:56.656766 containerd[1459]: time="2025-01-17T12:23:56.656747388Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.blockfile\"..." type=io.containerd.snapshotter.v1 Jan 17 12:23:56.656825 containerd[1459]: time="2025-01-17T12:23:56.656810417Z" level=info msg="skip loading plugin \"io.containerd.snapshotter.v1.blockfile\"..." error="no scratch file generator: skip plugin" type=io.containerd.snapshotter.v1 Jan 17 12:23:56.656825 containerd[1459]: time="2025-01-17T12:23:56.656821948Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.btrfs\"..." type=io.containerd.snapshotter.v1 Jan 17 12:23:56.657101 containerd[1459]: time="2025-01-17T12:23:56.657068150Z" level=info msg="skip loading plugin \"io.containerd.snapshotter.v1.btrfs\"..." error="path /var/lib/containerd/io.containerd.snapshotter.v1.btrfs (ext4) must be a btrfs filesystem to be used with the btrfs snapshotter: skip plugin" type=io.containerd.snapshotter.v1 Jan 17 12:23:56.657101 containerd[1459]: time="2025-01-17T12:23:56.657090843Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.devmapper\"..." type=io.containerd.snapshotter.v1 Jan 17 12:23:56.657159 containerd[1459]: time="2025-01-17T12:23:56.657103206Z" level=info msg="skip loading plugin \"io.containerd.snapshotter.v1.devmapper\"..." 
error="devmapper not configured: skip plugin" type=io.containerd.snapshotter.v1 Jan 17 12:23:56.657159 containerd[1459]: time="2025-01-17T12:23:56.657114256Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.native\"..." type=io.containerd.snapshotter.v1 Jan 17 12:23:56.657241 containerd[1459]: time="2025-01-17T12:23:56.657217149Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.overlayfs\"..." type=io.containerd.snapshotter.v1 Jan 17 12:23:56.657482 containerd[1459]: time="2025-01-17T12:23:56.657448513Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.zfs\"..." type=io.containerd.snapshotter.v1 Jan 17 12:23:56.657595 containerd[1459]: time="2025-01-17T12:23:56.657572275Z" level=info msg="skip loading plugin \"io.containerd.snapshotter.v1.zfs\"..." error="path /var/lib/containerd/io.containerd.snapshotter.v1.zfs must be a zfs filesystem to be used with the zfs snapshotter: skip plugin" type=io.containerd.snapshotter.v1 Jan 17 12:23:56.657595 containerd[1459]: time="2025-01-17T12:23:56.657589087Z" level=info msg="loading plugin \"io.containerd.content.v1.content\"..." type=io.containerd.content.v1 Jan 17 12:23:56.657712 containerd[1459]: time="2025-01-17T12:23:56.657688313Z" level=info msg="loading plugin \"io.containerd.metadata.v1.bolt\"..." type=io.containerd.metadata.v1 Jan 17 12:23:56.657773 containerd[1459]: time="2025-01-17T12:23:56.657754056Z" level=info msg="metadata content store policy set" policy=shared Jan 17 12:23:56.662950 containerd[1459]: time="2025-01-17T12:23:56.662918401Z" level=info msg="loading plugin \"io.containerd.gc.v1.scheduler\"..." type=io.containerd.gc.v1 Jan 17 12:23:56.662989 containerd[1459]: time="2025-01-17T12:23:56.662965449Z" level=info msg="loading plugin \"io.containerd.differ.v1.walking\"..." type=io.containerd.differ.v1 Jan 17 12:23:56.662989 containerd[1459]: time="2025-01-17T12:23:56.662981920Z" level=info msg="loading plugin \"io.containerd.lease.v1.manager\"..." type=io.containerd.lease.v1 Jan 17 12:23:56.663037 containerd[1459]: time="2025-01-17T12:23:56.662998101Z" level=info msg="loading plugin \"io.containerd.streaming.v1.manager\"..." type=io.containerd.streaming.v1 Jan 17 12:23:56.663037 containerd[1459]: time="2025-01-17T12:23:56.663013850Z" level=info msg="loading plugin \"io.containerd.runtime.v1.linux\"..." type=io.containerd.runtime.v1 Jan 17 12:23:56.663209 containerd[1459]: time="2025-01-17T12:23:56.663178750Z" level=info msg="loading plugin \"io.containerd.monitor.v1.cgroups\"..." type=io.containerd.monitor.v1 Jan 17 12:23:56.663434 containerd[1459]: time="2025-01-17T12:23:56.663406076Z" level=info msg="loading plugin \"io.containerd.runtime.v2.task\"..." type=io.containerd.runtime.v2 Jan 17 12:23:56.663536 containerd[1459]: time="2025-01-17T12:23:56.663509961Z" level=info msg="loading plugin \"io.containerd.runtime.v2.shim\"..." type=io.containerd.runtime.v2 Jan 17 12:23:56.663536 containerd[1459]: time="2025-01-17T12:23:56.663528576Z" level=info msg="loading plugin \"io.containerd.sandbox.store.v1.local\"..." type=io.containerd.sandbox.store.v1 Jan 17 12:23:56.663575 containerd[1459]: time="2025-01-17T12:23:56.663540849Z" level=info msg="loading plugin \"io.containerd.sandbox.controller.v1.local\"..." type=io.containerd.sandbox.controller.v1 Jan 17 12:23:56.663575 containerd[1459]: time="2025-01-17T12:23:56.663553773Z" level=info msg="loading plugin \"io.containerd.service.v1.containers-service\"..." 
type=io.containerd.service.v1 Jan 17 12:23:56.663575 containerd[1459]: time="2025-01-17T12:23:56.663566096Z" level=info msg="loading plugin \"io.containerd.service.v1.content-service\"..." type=io.containerd.service.v1 Jan 17 12:23:56.663635 containerd[1459]: time="2025-01-17T12:23:56.663578108Z" level=info msg="loading plugin \"io.containerd.service.v1.diff-service\"..." type=io.containerd.service.v1 Jan 17 12:23:56.663635 containerd[1459]: time="2025-01-17T12:23:56.663591233Z" level=info msg="loading plugin \"io.containerd.service.v1.images-service\"..." type=io.containerd.service.v1 Jan 17 12:23:56.663635 containerd[1459]: time="2025-01-17T12:23:56.663609207Z" level=info msg="loading plugin \"io.containerd.service.v1.introspection-service\"..." type=io.containerd.service.v1 Jan 17 12:23:56.663635 containerd[1459]: time="2025-01-17T12:23:56.663622762Z" level=info msg="loading plugin \"io.containerd.service.v1.namespaces-service\"..." type=io.containerd.service.v1 Jan 17 12:23:56.663635 containerd[1459]: time="2025-01-17T12:23:56.663634244Z" level=info msg="loading plugin \"io.containerd.service.v1.snapshots-service\"..." type=io.containerd.service.v1 Jan 17 12:23:56.663722 containerd[1459]: time="2025-01-17T12:23:56.663646867Z" level=info msg="loading plugin \"io.containerd.service.v1.tasks-service\"..." type=io.containerd.service.v1 Jan 17 12:23:56.663722 containerd[1459]: time="2025-01-17T12:23:56.663666524Z" level=info msg="loading plugin \"io.containerd.grpc.v1.containers\"..." type=io.containerd.grpc.v1 Jan 17 12:23:56.663722 containerd[1459]: time="2025-01-17T12:23:56.663678817Z" level=info msg="loading plugin \"io.containerd.grpc.v1.content\"..." type=io.containerd.grpc.v1 Jan 17 12:23:56.663722 containerd[1459]: time="2025-01-17T12:23:56.663690529Z" level=info msg="loading plugin \"io.containerd.grpc.v1.diff\"..." type=io.containerd.grpc.v1 Jan 17 12:23:56.663722 containerd[1459]: time="2025-01-17T12:23:56.663701911Z" level=info msg="loading plugin \"io.containerd.grpc.v1.events\"..." type=io.containerd.grpc.v1 Jan 17 12:23:56.663722 containerd[1459]: time="2025-01-17T12:23:56.663714154Z" level=info msg="loading plugin \"io.containerd.grpc.v1.images\"..." type=io.containerd.grpc.v1 Jan 17 12:23:56.663832 containerd[1459]: time="2025-01-17T12:23:56.663727599Z" level=info msg="loading plugin \"io.containerd.grpc.v1.introspection\"..." type=io.containerd.grpc.v1 Jan 17 12:23:56.663832 containerd[1459]: time="2025-01-17T12:23:56.663739882Z" level=info msg="loading plugin \"io.containerd.grpc.v1.leases\"..." type=io.containerd.grpc.v1 Jan 17 12:23:56.663832 containerd[1459]: time="2025-01-17T12:23:56.663753297Z" level=info msg="loading plugin \"io.containerd.grpc.v1.namespaces\"..." type=io.containerd.grpc.v1 Jan 17 12:23:56.663832 containerd[1459]: time="2025-01-17T12:23:56.663766572Z" level=info msg="loading plugin \"io.containerd.grpc.v1.sandbox-controllers\"..." type=io.containerd.grpc.v1 Jan 17 12:23:56.663832 containerd[1459]: time="2025-01-17T12:23:56.663780608Z" level=info msg="loading plugin \"io.containerd.grpc.v1.sandboxes\"..." type=io.containerd.grpc.v1 Jan 17 12:23:56.663832 containerd[1459]: time="2025-01-17T12:23:56.663791739Z" level=info msg="loading plugin \"io.containerd.grpc.v1.snapshots\"..." type=io.containerd.grpc.v1 Jan 17 12:23:56.663832 containerd[1459]: time="2025-01-17T12:23:56.663802549Z" level=info msg="loading plugin \"io.containerd.grpc.v1.streaming\"..." 
type=io.containerd.grpc.v1 Jan 17 12:23:56.663832 containerd[1459]: time="2025-01-17T12:23:56.663813650Z" level=info msg="loading plugin \"io.containerd.grpc.v1.tasks\"..." type=io.containerd.grpc.v1 Jan 17 12:23:56.663832 containerd[1459]: time="2025-01-17T12:23:56.663829169Z" level=info msg="loading plugin \"io.containerd.transfer.v1.local\"..." type=io.containerd.transfer.v1 Jan 17 12:23:56.663985 containerd[1459]: time="2025-01-17T12:23:56.663847934Z" level=info msg="loading plugin \"io.containerd.grpc.v1.transfer\"..." type=io.containerd.grpc.v1 Jan 17 12:23:56.663985 containerd[1459]: time="2025-01-17T12:23:56.663859236Z" level=info msg="loading plugin \"io.containerd.grpc.v1.version\"..." type=io.containerd.grpc.v1 Jan 17 12:23:56.663985 containerd[1459]: time="2025-01-17T12:23:56.663870797Z" level=info msg="loading plugin \"io.containerd.internal.v1.restart\"..." type=io.containerd.internal.v1 Jan 17 12:23:56.663985 containerd[1459]: time="2025-01-17T12:23:56.663927704Z" level=info msg="loading plugin \"io.containerd.tracing.processor.v1.otlp\"..." type=io.containerd.tracing.processor.v1 Jan 17 12:23:56.663985 containerd[1459]: time="2025-01-17T12:23:56.663943293Z" level=info msg="skip loading plugin \"io.containerd.tracing.processor.v1.otlp\"..." error="skip plugin: tracing endpoint not configured" type=io.containerd.tracing.processor.v1 Jan 17 12:23:56.663985 containerd[1459]: time="2025-01-17T12:23:56.663953593Z" level=info msg="loading plugin \"io.containerd.internal.v1.tracing\"..." type=io.containerd.internal.v1 Jan 17 12:23:56.663985 containerd[1459]: time="2025-01-17T12:23:56.663965224Z" level=info msg="skip loading plugin \"io.containerd.internal.v1.tracing\"..." error="skip plugin: tracing endpoint not configured" type=io.containerd.internal.v1 Jan 17 12:23:56.663985 containerd[1459]: time="2025-01-17T12:23:56.663974672Z" level=info msg="loading plugin \"io.containerd.grpc.v1.healthcheck\"..." type=io.containerd.grpc.v1 Jan 17 12:23:56.663985 containerd[1459]: time="2025-01-17T12:23:56.663985583Z" level=info msg="loading plugin \"io.containerd.nri.v1.nri\"..." type=io.containerd.nri.v1 Jan 17 12:23:56.664172 containerd[1459]: time="2025-01-17T12:23:56.663996142Z" level=info msg="NRI interface is disabled by configuration." Jan 17 12:23:56.664172 containerd[1459]: time="2025-01-17T12:23:56.664006021Z" level=info msg="loading plugin \"io.containerd.grpc.v1.cri\"..." 
type=io.containerd.grpc.v1 Jan 17 12:23:56.664368 containerd[1459]: time="2025-01-17T12:23:56.664288461Z" level=info msg="Start cri plugin with config {PluginConfig:{ContainerdConfig:{Snapshotter:overlayfs DefaultRuntimeName:runc DefaultRuntime:{Type: Path: Engine: PodAnnotations:[] ContainerAnnotations:[] Root: Options:map[] PrivilegedWithoutHostDevices:false PrivilegedWithoutHostDevicesAllDevicesAllowed:false BaseRuntimeSpec: NetworkPluginConfDir: NetworkPluginMaxConfNum:0 Snapshotter: SandboxMode:} UntrustedWorkloadRuntime:{Type: Path: Engine: PodAnnotations:[] ContainerAnnotations:[] Root: Options:map[] PrivilegedWithoutHostDevices:false PrivilegedWithoutHostDevicesAllDevicesAllowed:false BaseRuntimeSpec: NetworkPluginConfDir: NetworkPluginMaxConfNum:0 Snapshotter: SandboxMode:} Runtimes:map[runc:{Type:io.containerd.runc.v2 Path: Engine: PodAnnotations:[] ContainerAnnotations:[] Root: Options:map[SystemdCgroup:true] PrivilegedWithoutHostDevices:false PrivilegedWithoutHostDevicesAllDevicesAllowed:false BaseRuntimeSpec: NetworkPluginConfDir: NetworkPluginMaxConfNum:0 Snapshotter: SandboxMode:podsandbox}] NoPivot:false DisableSnapshotAnnotations:true DiscardUnpackedLayers:false IgnoreBlockIONotEnabledErrors:false IgnoreRdtNotEnabledErrors:false} CniConfig:{NetworkPluginBinDir:/opt/cni/bin NetworkPluginConfDir:/etc/cni/net.d NetworkPluginMaxConfNum:1 NetworkPluginSetupSerially:false NetworkPluginConfTemplate: IPPreference:} Registry:{ConfigPath: Mirrors:map[] Configs:map[] Auths:map[] Headers:map[]} ImageDecryption:{KeyModel:node} DisableTCPService:true StreamServerAddress:127.0.0.1 StreamServerPort:0 StreamIdleTimeout:4h0m0s EnableSelinux:true SelinuxCategoryRange:1024 SandboxImage:registry.k8s.io/pause:3.8 StatsCollectPeriod:10 SystemdCgroup:false EnableTLSStreaming:false X509KeyPairStreaming:{TLSCertFile: TLSKeyFile:} MaxContainerLogLineSize:16384 DisableCgroup:false DisableApparmor:false RestrictOOMScoreAdj:false MaxConcurrentDownloads:3 DisableProcMount:false UnsetSeccompProfile: TolerateMissingHugetlbController:true DisableHugetlbController:true DeviceOwnershipFromSecurityContext:false IgnoreImageDefinedVolumes:false NetNSMountsUnderStateDir:false EnableUnprivilegedPorts:false EnableUnprivilegedICMP:false EnableCDI:false CDISpecDirs:[/etc/cdi /var/run/cdi] ImagePullProgressTimeout:5m0s DrainExecSyncIOTimeout:0s ImagePullWithSyncFs:false IgnoreDeprecationWarnings:[]} ContainerdRootDir:/var/lib/containerd ContainerdEndpoint:/run/containerd/containerd.sock RootDir:/var/lib/containerd/io.containerd.grpc.v1.cri StateDir:/run/containerd/io.containerd.grpc.v1.cri}" Jan 17 12:23:56.664368 containerd[1459]: time="2025-01-17T12:23:56.664347050Z" level=info msg="Connect containerd service" Jan 17 12:23:56.664368 containerd[1459]: time="2025-01-17T12:23:56.664376315Z" level=info msg="using legacy CRI server" Jan 17 12:23:56.664569 containerd[1459]: time="2025-01-17T12:23:56.664382868Z" level=info msg="using experimental NRI integration - disable nri plugin to prevent this" Jan 17 12:23:56.664569 containerd[1459]: time="2025-01-17T12:23:56.664465513Z" level=info msg="Get image filesystem path \"/var/lib/containerd/io.containerd.snapshotter.v1.overlayfs\"" Jan 17 12:23:56.665109 containerd[1459]: time="2025-01-17T12:23:56.665075927Z" level=error msg="failed to load cni during init, please check CRI plugin status before setting up network for pods" error="cni config load failed: no network config found in /etc/cni/net.d: cni plugin not initialized: failed to load cni config" Jan 17 12:23:56.665252 
containerd[1459]: time="2025-01-17T12:23:56.665210740Z" level=info msg="Start subscribing containerd event" Jan 17 12:23:56.665276 containerd[1459]: time="2025-01-17T12:23:56.665260634Z" level=info msg="Start recovering state" Jan 17 12:23:56.665398 containerd[1459]: time="2025-01-17T12:23:56.665375760Z" level=info msg=serving... address=/run/containerd/containerd.sock.ttrpc Jan 17 12:23:56.665421 containerd[1459]: time="2025-01-17T12:23:56.665379717Z" level=info msg="Start event monitor" Jan 17 12:23:56.665441 containerd[1459]: time="2025-01-17T12:23:56.665424120Z" level=info msg="Start snapshots syncer" Jan 17 12:23:56.665441 containerd[1459]: time="2025-01-17T12:23:56.665432526Z" level=info msg=serving... address=/run/containerd/containerd.sock Jan 17 12:23:56.665476 containerd[1459]: time="2025-01-17T12:23:56.665436373Z" level=info msg="Start cni network conf syncer for default" Jan 17 12:23:56.665476 containerd[1459]: time="2025-01-17T12:23:56.665452894Z" level=info msg="Start streaming server" Jan 17 12:23:56.665598 systemd[1]: Started containerd.service - containerd container runtime. Jan 17 12:23:56.667078 containerd[1459]: time="2025-01-17T12:23:56.667032346Z" level=info msg="containerd successfully booted in 0.038835s" Jan 17 12:23:57.626239 systemd-networkd[1375]: eth0: Gained IPv6LL Jan 17 12:23:57.629479 systemd[1]: Finished systemd-networkd-wait-online.service - Wait for Network to be Configured. Jan 17 12:23:57.631295 systemd[1]: Reached target network-online.target - Network is Online. Jan 17 12:23:57.642281 systemd[1]: Starting coreos-metadata.service - QEMU metadata agent... Jan 17 12:23:57.644678 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent... Jan 17 12:23:57.646793 systemd[1]: Starting nvidia.service - NVIDIA Configure Service... Jan 17 12:23:57.664708 systemd[1]: coreos-metadata.service: Deactivated successfully. Jan 17 12:23:57.665090 systemd[1]: Finished coreos-metadata.service - QEMU metadata agent. Jan 17 12:23:57.666904 systemd[1]: Finished nvidia.service - NVIDIA Configure Service. Jan 17 12:23:57.669162 systemd[1]: packet-phone-home.service - Report Success to Packet was skipped because no trigger condition checks were met. Jan 17 12:23:58.270657 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent. Jan 17 12:23:58.272687 systemd[1]: Reached target multi-user.target - Multi-User System. Jan 17 12:23:58.277062 (kubelet)[1547]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS, KUBELET_KUBEADM_ARGS Jan 17 12:23:58.277099 systemd[1]: Startup finished in 695ms (kernel) + 4.865s (initrd) + 4.297s (userspace) = 9.858s. Jan 17 12:23:58.676239 kubelet[1547]: E0117 12:23:58.676127 1547 run.go:72] "command failed" err="failed to load kubelet config file, path: /var/lib/kubelet/config.yaml, error: failed to load Kubelet config file /var/lib/kubelet/config.yaml, error failed to read kubelet config file \"/var/lib/kubelet/config.yaml\", error: open /var/lib/kubelet/config.yaml: no such file or directory" Jan 17 12:23:58.680194 systemd[1]: kubelet.service: Main process exited, code=exited, status=1/FAILURE Jan 17 12:23:58.680408 systemd[1]: kubelet.service: Failed with result 'exit-code'. Jan 17 12:24:06.579697 systemd[1]: Started sshd@1-10.0.0.161:22-10.0.0.1:47464.service - OpenSSH per-connection server daemon (10.0.0.1:47464). 
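Annotation (not part of the log): systemd's startup summary above ("695ms (kernel) + 4.865s (initrd) + 4.297s (userspace) = 9.858s") can be cross-checked directly; the re-added total comes out about a millisecond short, presumably because each phase is rounded for display. A sketch using only the numbers in the log:

```python
# Re-adding the per-phase startup times systemd prints above.

phases = {"kernel": 0.695, "initrd": 4.865, "userspace": 4.297}  # seconds, from the log

total = sum(phases.values())
print(f"sum of displayed phases: {total:.3f} s   (log reports 9.858 s)")
```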
Jan 17 12:24:06.620508 sshd[1560]: Accepted publickey for core from 10.0.0.1 port 47464 ssh2: RSA SHA256:+651OWiRclMhzyyc0NpHiLMoB9ynZD20kdanNwVyLtE Jan 17 12:24:06.622597 sshd[1560]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Jan 17 12:24:06.632347 systemd-logind[1443]: New session 1 of user core. Jan 17 12:24:06.633944 systemd[1]: Created slice user-500.slice - User Slice of UID 500. Jan 17 12:24:06.658299 systemd[1]: Starting user-runtime-dir@500.service - User Runtime Directory /run/user/500... Jan 17 12:24:06.672163 systemd[1]: Finished user-runtime-dir@500.service - User Runtime Directory /run/user/500. Jan 17 12:24:06.675111 systemd[1]: Starting user@500.service - User Manager for UID 500... Jan 17 12:24:06.684757 (systemd)[1564]: pam_unix(systemd-user:session): session opened for user core(uid=500) by (uid=0) Jan 17 12:24:06.788182 systemd[1564]: Queued start job for default target default.target. Jan 17 12:24:06.798293 systemd[1564]: Created slice app.slice - User Application Slice. Jan 17 12:24:06.798318 systemd[1564]: Reached target paths.target - Paths. Jan 17 12:24:06.798333 systemd[1564]: Reached target timers.target - Timers. Jan 17 12:24:06.799916 systemd[1564]: Starting dbus.socket - D-Bus User Message Bus Socket... Jan 17 12:24:06.812016 systemd[1564]: Listening on dbus.socket - D-Bus User Message Bus Socket. Jan 17 12:24:06.812156 systemd[1564]: Reached target sockets.target - Sockets. Jan 17 12:24:06.812175 systemd[1564]: Reached target basic.target - Basic System. Jan 17 12:24:06.812215 systemd[1564]: Reached target default.target - Main User Target. Jan 17 12:24:06.812248 systemd[1564]: Startup finished in 119ms. Jan 17 12:24:06.812437 systemd[1]: Started user@500.service - User Manager for UID 500. Jan 17 12:24:06.814441 systemd[1]: Started session-1.scope - Session 1 of User core. Jan 17 12:24:06.880693 systemd[1]: Started sshd@2-10.0.0.161:22-10.0.0.1:47474.service - OpenSSH per-connection server daemon (10.0.0.1:47474). Jan 17 12:24:06.923452 sshd[1575]: Accepted publickey for core from 10.0.0.1 port 47474 ssh2: RSA SHA256:+651OWiRclMhzyyc0NpHiLMoB9ynZD20kdanNwVyLtE Jan 17 12:24:06.925345 sshd[1575]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Jan 17 12:24:06.930050 systemd-logind[1443]: New session 2 of user core. Jan 17 12:24:06.940188 systemd[1]: Started session-2.scope - Session 2 of User core. Jan 17 12:24:06.995531 sshd[1575]: pam_unix(sshd:session): session closed for user core Jan 17 12:24:07.003942 systemd[1]: sshd@2-10.0.0.161:22-10.0.0.1:47474.service: Deactivated successfully. Jan 17 12:24:07.005752 systemd[1]: session-2.scope: Deactivated successfully. Jan 17 12:24:07.007396 systemd-logind[1443]: Session 2 logged out. Waiting for processes to exit. Jan 17 12:24:07.008783 systemd[1]: Started sshd@3-10.0.0.161:22-10.0.0.1:47486.service - OpenSSH per-connection server daemon (10.0.0.1:47486). Jan 17 12:24:07.009571 systemd-logind[1443]: Removed session 2. Jan 17 12:24:07.051117 sshd[1582]: Accepted publickey for core from 10.0.0.1 port 47486 ssh2: RSA SHA256:+651OWiRclMhzyyc0NpHiLMoB9ynZD20kdanNwVyLtE Jan 17 12:24:07.052574 sshd[1582]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Jan 17 12:24:07.056877 systemd-logind[1443]: New session 3 of user core. Jan 17 12:24:07.069151 systemd[1]: Started session-3.scope - Session 3 of User core. 
Jan 17 12:24:07.119611 sshd[1582]: pam_unix(sshd:session): session closed for user core Jan 17 12:24:07.134714 systemd[1]: sshd@3-10.0.0.161:22-10.0.0.1:47486.service: Deactivated successfully. Jan 17 12:24:07.136667 systemd[1]: session-3.scope: Deactivated successfully. Jan 17 12:24:07.138241 systemd-logind[1443]: Session 3 logged out. Waiting for processes to exit. Jan 17 12:24:07.139517 systemd[1]: Started sshd@4-10.0.0.161:22-10.0.0.1:47498.service - OpenSSH per-connection server daemon (10.0.0.1:47498). Jan 17 12:24:07.140302 systemd-logind[1443]: Removed session 3. Jan 17 12:24:07.180283 sshd[1589]: Accepted publickey for core from 10.0.0.1 port 47498 ssh2: RSA SHA256:+651OWiRclMhzyyc0NpHiLMoB9ynZD20kdanNwVyLtE Jan 17 12:24:07.181752 sshd[1589]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Jan 17 12:24:07.185562 systemd-logind[1443]: New session 4 of user core. Jan 17 12:24:07.195131 systemd[1]: Started session-4.scope - Session 4 of User core. Jan 17 12:24:07.249326 sshd[1589]: pam_unix(sshd:session): session closed for user core Jan 17 12:24:07.260637 systemd[1]: sshd@4-10.0.0.161:22-10.0.0.1:47498.service: Deactivated successfully. Jan 17 12:24:07.262285 systemd[1]: session-4.scope: Deactivated successfully. Jan 17 12:24:07.263826 systemd-logind[1443]: Session 4 logged out. Waiting for processes to exit. Jan 17 12:24:07.276339 systemd[1]: Started sshd@5-10.0.0.161:22-10.0.0.1:47514.service - OpenSSH per-connection server daemon (10.0.0.1:47514). Jan 17 12:24:07.277164 systemd-logind[1443]: Removed session 4. Jan 17 12:24:07.308955 sshd[1596]: Accepted publickey for core from 10.0.0.1 port 47514 ssh2: RSA SHA256:+651OWiRclMhzyyc0NpHiLMoB9ynZD20kdanNwVyLtE Jan 17 12:24:07.310420 sshd[1596]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Jan 17 12:24:07.313938 systemd-logind[1443]: New session 5 of user core. Jan 17 12:24:07.324138 systemd[1]: Started session-5.scope - Session 5 of User core. Jan 17 12:24:07.381652 sudo[1600]: core : PWD=/home/core ; USER=root ; COMMAND=/usr/sbin/setenforce 1 Jan 17 12:24:07.381997 sudo[1600]: pam_unix(sudo:session): session opened for user root(uid=0) by core(uid=500) Jan 17 12:24:07.401843 sudo[1600]: pam_unix(sudo:session): session closed for user root Jan 17 12:24:07.403410 sshd[1596]: pam_unix(sshd:session): session closed for user core Jan 17 12:24:07.414701 systemd[1]: sshd@5-10.0.0.161:22-10.0.0.1:47514.service: Deactivated successfully. Jan 17 12:24:07.416378 systemd[1]: session-5.scope: Deactivated successfully. Jan 17 12:24:07.417970 systemd-logind[1443]: Session 5 logged out. Waiting for processes to exit. Jan 17 12:24:07.427322 systemd[1]: Started sshd@6-10.0.0.161:22-10.0.0.1:36936.service - OpenSSH per-connection server daemon (10.0.0.1:36936). Jan 17 12:24:07.428147 systemd-logind[1443]: Removed session 5. Jan 17 12:24:07.459833 sshd[1605]: Accepted publickey for core from 10.0.0.1 port 36936 ssh2: RSA SHA256:+651OWiRclMhzyyc0NpHiLMoB9ynZD20kdanNwVyLtE Jan 17 12:24:07.461214 sshd[1605]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Jan 17 12:24:07.464698 systemd-logind[1443]: New session 6 of user core. Jan 17 12:24:07.476132 systemd[1]: Started session-6.scope - Session 6 of User core. 
Jan 17 12:24:07.529256 sudo[1609]: core : PWD=/home/core ; USER=root ; COMMAND=/usr/bin/rm -rf /etc/audit/rules.d/80-selinux.rules /etc/audit/rules.d/99-default.rules Jan 17 12:24:07.529578 sudo[1609]: pam_unix(sudo:session): session opened for user root(uid=0) by core(uid=500) Jan 17 12:24:07.532854 sudo[1609]: pam_unix(sudo:session): session closed for user root Jan 17 12:24:07.539148 sudo[1608]: core : PWD=/home/core ; USER=root ; COMMAND=/usr/bin/systemctl restart audit-rules Jan 17 12:24:07.539480 sudo[1608]: pam_unix(sudo:session): session opened for user root(uid=0) by core(uid=500) Jan 17 12:24:07.562220 systemd[1]: Stopping audit-rules.service - Load Security Auditing Rules... Jan 17 12:24:07.563819 auditctl[1612]: No rules Jan 17 12:24:07.565247 systemd[1]: audit-rules.service: Deactivated successfully. Jan 17 12:24:07.565508 systemd[1]: Stopped audit-rules.service - Load Security Auditing Rules. Jan 17 12:24:07.567298 systemd[1]: Starting audit-rules.service - Load Security Auditing Rules... Jan 17 12:24:07.596293 augenrules[1630]: No rules Jan 17 12:24:07.598122 systemd[1]: Finished audit-rules.service - Load Security Auditing Rules. Jan 17 12:24:07.599418 sudo[1608]: pam_unix(sudo:session): session closed for user root Jan 17 12:24:07.601140 sshd[1605]: pam_unix(sshd:session): session closed for user core Jan 17 12:24:07.610570 systemd[1]: sshd@6-10.0.0.161:22-10.0.0.1:36936.service: Deactivated successfully. Jan 17 12:24:07.612281 systemd[1]: session-6.scope: Deactivated successfully. Jan 17 12:24:07.613829 systemd-logind[1443]: Session 6 logged out. Waiting for processes to exit. Jan 17 12:24:07.614937 systemd[1]: Started sshd@7-10.0.0.161:22-10.0.0.1:36938.service - OpenSSH per-connection server daemon (10.0.0.1:36938). Jan 17 12:24:07.615586 systemd-logind[1443]: Removed session 6. Jan 17 12:24:07.652017 sshd[1638]: Accepted publickey for core from 10.0.0.1 port 36938 ssh2: RSA SHA256:+651OWiRclMhzyyc0NpHiLMoB9ynZD20kdanNwVyLtE Jan 17 12:24:07.653496 sshd[1638]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Jan 17 12:24:07.656921 systemd-logind[1443]: New session 7 of user core. Jan 17 12:24:07.666135 systemd[1]: Started session-7.scope - Session 7 of User core. Jan 17 12:24:07.717850 sudo[1641]: core : PWD=/home/core ; USER=root ; COMMAND=/home/core/install.sh Jan 17 12:24:07.718230 sudo[1641]: pam_unix(sudo:session): session opened for user root(uid=0) by core(uid=500) Jan 17 12:24:07.746299 systemd[1]: Starting coreos-metadata.service - QEMU metadata agent... Jan 17 12:24:07.764517 systemd[1]: coreos-metadata.service: Deactivated successfully. Jan 17 12:24:07.764773 systemd[1]: Finished coreos-metadata.service - QEMU metadata agent. Jan 17 12:24:08.717096 systemd[1]: kubelet.service: Scheduled restart job, restart counter is at 1. Jan 17 12:24:08.731234 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent... Jan 17 12:24:08.744610 systemd[1]: kubelet.service: Control process exited, code=killed, status=15/TERM Jan 17 12:24:08.744728 systemd[1]: kubelet.service: Failed with result 'signal'. Jan 17 12:24:08.745056 systemd[1]: Stopped kubelet.service - kubelet: The Kubernetes Node Agent. Jan 17 12:24:08.758319 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent... Jan 17 12:24:08.781073 systemd[1]: Reloading requested from client PID 1686 ('systemctl') (unit session-7.scope)... Jan 17 12:24:08.781090 systemd[1]: Reloading... Jan 17 12:24:08.902078 zram_generator::config[1730]: No configuration found. 
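Annotation (not part of the log): kubelet.service failed earlier at 12:23:58.680408 ("Failed with result 'exit-code'") and the restart job above is scheduled at 12:24:08.717096. The gap between the two journal timestamps is roughly 10 s, which would be consistent with a ~10-second restart delay in the unit file; the actual RestartSec= value is an assumption, not something this log states.

```python
# Time between the kubelet failure and the scheduled restart, using the
# two journal timestamps copied from the log.

from datetime import datetime

FMT = "%H:%M:%S.%f"
failed    = datetime.strptime("12:23:58.680408", FMT)   # "Failed with result 'exit-code'"
scheduled = datetime.strptime("12:24:08.717096", FMT)   # "Scheduled restart job"

delay = (scheduled - failed).total_seconds()
print(f"restart scheduled {delay:.3f} s after the failure")   # ~10.037 s
```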
Jan 17 12:24:09.964680 systemd[1]: /usr/lib/systemd/system/docker.socket:6: ListenStream= references a path below legacy directory /var/run/, updating /var/run/docker.sock → /run/docker.sock; please update the unit file accordingly. Jan 17 12:24:10.037613 systemd[1]: Reloading finished in 1256 ms. Jan 17 12:24:10.088073 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent. Jan 17 12:24:10.091046 systemd[1]: Stopping kubelet.service - kubelet: The Kubernetes Node Agent... Jan 17 12:24:10.093306 systemd[1]: kubelet.service: Deactivated successfully. Jan 17 12:24:10.093551 systemd[1]: Stopped kubelet.service - kubelet: The Kubernetes Node Agent. Jan 17 12:24:10.106207 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent... Jan 17 12:24:10.247622 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent. Jan 17 12:24:10.253117 (kubelet)[1774]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS Jan 17 12:24:10.287079 kubelet[1774]: Flag --container-runtime-endpoint has been deprecated, This parameter should be set via the config file specified by the Kubelet's --config flag. See https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/ for more information. Jan 17 12:24:10.287079 kubelet[1774]: Flag --pod-infra-container-image has been deprecated, will be removed in a future release. Image garbage collector will get sandbox image information from CRI. Jan 17 12:24:10.287079 kubelet[1774]: Flag --volume-plugin-dir has been deprecated, This parameter should be set via the config file specified by the Kubelet's --config flag. See https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/ for more information. Jan 17 12:24:10.287480 kubelet[1774]: I0117 12:24:10.287130 1774 server.go:206] "--pod-infra-container-image will not be pruned by the image garbage collector in kubelet and should also be set in the remote runtime" Jan 17 12:24:10.528594 kubelet[1774]: I0117 12:24:10.528487 1774 server.go:486] "Kubelet version" kubeletVersion="v1.31.0" Jan 17 12:24:10.528594 kubelet[1774]: I0117 12:24:10.528520 1774 server.go:488] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK="" Jan 17 12:24:10.528794 kubelet[1774]: I0117 12:24:10.528771 1774 server.go:929] "Client rotation is on, will bootstrap in background" Jan 17 12:24:10.554388 kubelet[1774]: I0117 12:24:10.554348 1774 dynamic_cafile_content.go:160] "Starting controller" name="client-ca-bundle::/etc/kubernetes/pki/ca.crt" Jan 17 12:24:10.560484 kubelet[1774]: E0117 12:24:10.560455 1774 log.go:32] "RuntimeConfig from runtime service failed" err="rpc error: code = Unimplemented desc = unknown method RuntimeConfig for service runtime.v1.RuntimeService" Jan 17 12:24:10.560522 kubelet[1774]: I0117 12:24:10.560484 1774 server.go:1403] "CRI implementation should be updated to support RuntimeConfig when KubeletCgroupDriverFromCRI feature gate has been enabled. Falling back to using cgroupDriver from kubelet config." Jan 17 12:24:10.567830 kubelet[1774]: I0117 12:24:10.567787 1774 server.go:744] "--cgroups-per-qos enabled, but --cgroup-root was not specified. 
defaulting to /" Jan 17 12:24:10.568777 kubelet[1774]: I0117 12:24:10.568741 1774 swap_util.go:113] "Swap is on" /proc/swaps contents="Filename\t\t\t\tType\t\tSize\t\tUsed\t\tPriority" Jan 17 12:24:10.568945 kubelet[1774]: I0117 12:24:10.568905 1774 container_manager_linux.go:264] "Container manager verified user specified cgroup-root exists" cgroupRoot=[] Jan 17 12:24:10.569162 kubelet[1774]: I0117 12:24:10.568932 1774 container_manager_linux.go:269] "Creating Container Manager object based on Node Config" nodeConfig={"NodeName":"10.0.0.161","RuntimeCgroupsName":"","SystemCgroupsName":"","KubeletCgroupsName":"","KubeletOOMScoreAdj":-999,"ContainerRuntime":"","CgroupsPerQOS":true,"CgroupRoot":"/","CgroupDriver":"systemd","KubeletRootDir":"/var/lib/kubelet","ProtectKernelDefaults":false,"KubeReservedCgroupName":"","SystemReservedCgroupName":"","ReservedSystemCPUs":{},"EnforceNodeAllocatable":{"pods":{}},"KubeReserved":null,"SystemReserved":null,"HardEvictionThresholds":[{"Signal":"imagefs.available","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.15},"GracePeriod":0,"MinReclaim":null},{"Signal":"imagefs.inodesFree","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.05},"GracePeriod":0,"MinReclaim":null},{"Signal":"memory.available","Operator":"LessThan","Value":{"Quantity":"100Mi","Percentage":0},"GracePeriod":0,"MinReclaim":null},{"Signal":"nodefs.available","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.1},"GracePeriod":0,"MinReclaim":null},{"Signal":"nodefs.inodesFree","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.05},"GracePeriod":0,"MinReclaim":null}],"QOSReserved":{},"CPUManagerPolicy":"none","CPUManagerPolicyOptions":null,"TopologyManagerScope":"container","CPUManagerReconcilePeriod":10000000000,"ExperimentalMemoryManagerPolicy":"None","ExperimentalMemoryManagerReservedMemory":null,"PodPidsLimit":-1,"EnforceCPULimits":true,"CPUCFSQuotaPeriod":100000000,"TopologyManagerPolicy":"none","TopologyManagerPolicyOptions":null,"CgroupVersion":2} Jan 17 12:24:10.569162 kubelet[1774]: I0117 12:24:10.569152 1774 topology_manager.go:138] "Creating topology manager with none policy" Jan 17 12:24:10.569162 kubelet[1774]: I0117 12:24:10.569161 1774 container_manager_linux.go:300] "Creating device plugin manager" Jan 17 12:24:10.569322 kubelet[1774]: I0117 12:24:10.569275 1774 state_mem.go:36] "Initialized new in-memory state store" Jan 17 12:24:10.570670 kubelet[1774]: I0117 12:24:10.570633 1774 kubelet.go:408] "Attempting to sync node with API server" Jan 17 12:24:10.570670 kubelet[1774]: I0117 12:24:10.570662 1774 kubelet.go:303] "Adding static pod path" path="/etc/kubernetes/manifests" Jan 17 12:24:10.570763 kubelet[1774]: I0117 12:24:10.570712 1774 kubelet.go:314] "Adding apiserver pod source" Jan 17 12:24:10.570763 kubelet[1774]: I0117 12:24:10.570727 1774 apiserver.go:42] "Waiting for node sync before watching apiserver pods" Jan 17 12:24:10.570811 kubelet[1774]: E0117 12:24:10.570769 1774 file.go:98] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Jan 17 12:24:10.570836 kubelet[1774]: E0117 12:24:10.570816 1774 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Jan 17 12:24:10.575710 kubelet[1774]: I0117 12:24:10.575668 1774 kuberuntime_manager.go:262] "Container runtime initialized" containerRuntime="containerd" version="v1.7.21" apiVersion="v1" Jan 17 12:24:10.576616 kubelet[1774]: W0117 12:24:10.576588 1774 
reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Node: nodes "10.0.0.161" is forbidden: User "system:anonymous" cannot list resource "nodes" in API group "" at the cluster scope Jan 17 12:24:10.576616 kubelet[1774]: E0117 12:24:10.576623 1774 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Node: failed to list *v1.Node: nodes \"10.0.0.161\" is forbidden: User \"system:anonymous\" cannot list resource \"nodes\" in API group \"\" at the cluster scope" logger="UnhandledError" Jan 17 12:24:10.576616 kubelet[1774]: W0117 12:24:10.576627 1774 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Service: services is forbidden: User "system:anonymous" cannot list resource "services" in API group "" at the cluster scope Jan 17 12:24:10.576906 kubelet[1774]: E0117 12:24:10.576657 1774 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Service: failed to list *v1.Service: services is forbidden: User \"system:anonymous\" cannot list resource \"services\" in API group \"\" at the cluster scope" logger="UnhandledError" Jan 17 12:24:10.577263 kubelet[1774]: I0117 12:24:10.577242 1774 kubelet.go:837] "Not starting ClusterTrustBundle informer because we are in static kubelet mode" Jan 17 12:24:10.577733 kubelet[1774]: W0117 12:24:10.577711 1774 probe.go:272] Flexvolume plugin directory at /opt/libexec/kubernetes/kubelet-plugins/volume/exec/ does not exist. Recreating. Jan 17 12:24:10.578394 kubelet[1774]: I0117 12:24:10.578353 1774 server.go:1269] "Started kubelet" Jan 17 12:24:10.579475 kubelet[1774]: I0117 12:24:10.579064 1774 server.go:163] "Starting to listen" address="0.0.0.0" port=10250 Jan 17 12:24:10.579475 kubelet[1774]: I0117 12:24:10.578757 1774 ratelimit.go:55] "Setting rate limiting for endpoint" service="podresources" qps=100 burstTokens=10 Jan 17 12:24:10.579475 kubelet[1774]: I0117 12:24:10.579473 1774 server.go:236] "Starting to serve the podresources API" endpoint="unix:/var/lib/kubelet/pod-resources/kubelet.sock" Jan 17 12:24:10.580239 kubelet[1774]: I0117 12:24:10.579823 1774 fs_resource_analyzer.go:67] "Starting FS ResourceAnalyzer" Jan 17 12:24:10.580239 kubelet[1774]: I0117 12:24:10.579994 1774 server.go:460] "Adding debug handlers to kubelet server" Jan 17 12:24:10.580767 kubelet[1774]: I0117 12:24:10.580741 1774 dynamic_serving_content.go:135] "Starting controller" name="kubelet-server-cert-files::/var/lib/kubelet/pki/kubelet.crt::/var/lib/kubelet/pki/kubelet.key" Jan 17 12:24:10.582495 kubelet[1774]: I0117 12:24:10.581651 1774 volume_manager.go:289] "Starting Kubelet Volume Manager" Jan 17 12:24:10.582495 kubelet[1774]: I0117 12:24:10.581748 1774 desired_state_of_world_populator.go:146] "Desired state populator starts to run" Jan 17 12:24:10.582495 kubelet[1774]: I0117 12:24:10.581791 1774 reconciler.go:26] "Reconciler: start to sync state" Jan 17 12:24:10.582495 kubelet[1774]: E0117 12:24:10.582121 1774 kubelet.go:1478] "Image garbage collection failed once. 
Stats initialization may not have completed yet" err="invalid capacity 0 on image filesystem" Jan 17 12:24:10.582495 kubelet[1774]: E0117 12:24:10.582442 1774 kubelet_node_status.go:453] "Error getting the current node from lister" err="node \"10.0.0.161\" not found" Jan 17 12:24:10.584314 kubelet[1774]: I0117 12:24:10.584275 1774 factory.go:221] Registration of the containerd container factory successfully Jan 17 12:24:10.584314 kubelet[1774]: I0117 12:24:10.584297 1774 factory.go:221] Registration of the systemd container factory successfully Jan 17 12:24:10.584406 kubelet[1774]: I0117 12:24:10.584378 1774 factory.go:219] Registration of the crio container factory failed: Get "http://%2Fvar%2Frun%2Fcrio%2Fcrio.sock/info": dial unix /var/run/crio/crio.sock: connect: no such file or directory Jan 17 12:24:10.594986 kubelet[1774]: E0117 12:24:10.594631 1774 controller.go:145] "Failed to ensure lease exists, will retry" err="leases.coordination.k8s.io \"10.0.0.161\" is forbidden: User \"system:anonymous\" cannot get resource \"leases\" in API group \"coordination.k8s.io\" in the namespace \"kube-node-lease\"" interval="200ms" Jan 17 12:24:10.594986 kubelet[1774]: W0117 12:24:10.594917 1774 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.CSIDriver: csidrivers.storage.k8s.io is forbidden: User "system:anonymous" cannot list resource "csidrivers" in API group "storage.k8s.io" at the cluster scope Jan 17 12:24:10.594986 kubelet[1774]: E0117 12:24:10.594939 1774 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.CSIDriver: failed to list *v1.CSIDriver: csidrivers.storage.k8s.io is forbidden: User \"system:anonymous\" cannot list resource \"csidrivers\" in API group \"storage.k8s.io\" at the cluster scope" logger="UnhandledError" Jan 17 12:24:10.597507 kubelet[1774]: E0117 12:24:10.594085 1774 event.go:359] "Server rejected event (will not retry!)" err="events is forbidden: User \"system:anonymous\" cannot create resource \"events\" in API group \"\" in the namespace \"default\"" event="&Event{ObjectMeta:{10.0.0.161.181b7a63db233628 default 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Node,Namespace:,Name:10.0.0.161,UID:10.0.0.161,APIVersion:,ResourceVersion:,FieldPath:,},Reason:Starting,Message:Starting kubelet.,Source:EventSource{Component:kubelet,Host:10.0.0.161,},FirstTimestamp:2025-01-17 12:24:10.57832708 +0000 UTC m=+0.321549411,LastTimestamp:2025-01-17 12:24:10.57832708 +0000 UTC m=+0.321549411,Count:1,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:10.0.0.161,}" Jan 17 12:24:10.598467 kubelet[1774]: I0117 12:24:10.598283 1774 cpu_manager.go:214] "Starting CPU manager" policy="none" Jan 17 12:24:10.598467 kubelet[1774]: I0117 12:24:10.598309 1774 cpu_manager.go:215] "Reconciling" reconcilePeriod="10s" Jan 17 12:24:10.598467 kubelet[1774]: I0117 12:24:10.598328 1774 state_mem.go:36] "Initialized new in-memory state store" Jan 17 12:24:10.598467 kubelet[1774]: E0117 12:24:10.598354 1774 event.go:359] "Server rejected event (will not retry!)" err="events is forbidden: User \"system:anonymous\" cannot create resource \"events\" in API group \"\" in the namespace \"default\"" event="&Event{ObjectMeta:{10.0.0.161.181b7a63db5cf332 default 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] 
[]},InvolvedObject:ObjectReference{Kind:Node,Namespace:,Name:10.0.0.161,UID:10.0.0.161,APIVersion:,ResourceVersion:,FieldPath:,},Reason:InvalidDiskCapacity,Message:invalid capacity 0 on image filesystem,Source:EventSource{Component:kubelet,Host:10.0.0.161,},FirstTimestamp:2025-01-17 12:24:10.582111026 +0000 UTC m=+0.325333357,LastTimestamp:2025-01-17 12:24:10.582111026 +0000 UTC m=+0.325333357,Count:1,Type:Warning,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:10.0.0.161,}" Jan 17 12:24:10.605210 kubelet[1774]: E0117 12:24:10.604992 1774 event.go:359] "Server rejected event (will not retry!)" err="events is forbidden: User \"system:anonymous\" cannot create resource \"events\" in API group \"\" in the namespace \"default\"" event="&Event{ObjectMeta:{10.0.0.161.181b7a63dc4833c7 default 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Node,Namespace:,Name:10.0.0.161,UID:10.0.0.161,APIVersion:,ResourceVersion:,FieldPath:,},Reason:NodeHasSufficientMemory,Message:Node 10.0.0.161 status is now: NodeHasSufficientMemory,Source:EventSource{Component:kubelet,Host:10.0.0.161,},FirstTimestamp:2025-01-17 12:24:10.597528519 +0000 UTC m=+0.340750850,LastTimestamp:2025-01-17 12:24:10.597528519 +0000 UTC m=+0.340750850,Count:1,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:10.0.0.161,}" Jan 17 12:24:10.608166 kubelet[1774]: E0117 12:24:10.608104 1774 event.go:359] "Server rejected event (will not retry!)" err="events is forbidden: User \"system:anonymous\" cannot create resource \"events\" in API group \"\" in the namespace \"default\"" event="&Event{ObjectMeta:{10.0.0.161.181b7a63dc4866b2 default 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Node,Namespace:,Name:10.0.0.161,UID:10.0.0.161,APIVersion:,ResourceVersion:,FieldPath:,},Reason:NodeHasNoDiskPressure,Message:Node 10.0.0.161 status is now: NodeHasNoDiskPressure,Source:EventSource{Component:kubelet,Host:10.0.0.161,},FirstTimestamp:2025-01-17 12:24:10.597541554 +0000 UTC m=+0.340763885,LastTimestamp:2025-01-17 12:24:10.597541554 +0000 UTC m=+0.340763885,Count:1,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:10.0.0.161,}" Jan 17 12:24:10.611673 kubelet[1774]: E0117 12:24:10.611514 1774 event.go:359] "Server rejected event (will not retry!)" err="events is forbidden: User \"system:anonymous\" cannot create resource \"events\" in API group \"\" in the namespace \"default\"" event="&Event{ObjectMeta:{10.0.0.161.181b7a63dc4874fb default 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Node,Namespace:,Name:10.0.0.161,UID:10.0.0.161,APIVersion:,ResourceVersion:,FieldPath:,},Reason:NodeHasSufficientPID,Message:Node 10.0.0.161 status is now: NodeHasSufficientPID,Source:EventSource{Component:kubelet,Host:10.0.0.161,},FirstTimestamp:2025-01-17 12:24:10.597545211 +0000 UTC m=+0.340767542,LastTimestamp:2025-01-17 12:24:10.597545211 +0000 UTC m=+0.340767542,Count:1,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:10.0.0.161,}" Jan 17 12:24:10.683331 kubelet[1774]: E0117 12:24:10.683280 1774 kubelet_node_status.go:453] "Error getting the current node from lister" err="node \"10.0.0.161\" not found" Jan 17 12:24:10.784359 
kubelet[1774]: E0117 12:24:10.784232 1774 kubelet_node_status.go:453] "Error getting the current node from lister" err="node \"10.0.0.161\" not found" Jan 17 12:24:10.799273 kubelet[1774]: E0117 12:24:10.799195 1774 controller.go:145] "Failed to ensure lease exists, will retry" err="leases.coordination.k8s.io \"10.0.0.161\" is forbidden: User \"system:anonymous\" cannot get resource \"leases\" in API group \"coordination.k8s.io\" in the namespace \"kube-node-lease\"" interval="400ms" Jan 17 12:24:10.885326 kubelet[1774]: E0117 12:24:10.885304 1774 kubelet_node_status.go:453] "Error getting the current node from lister" err="node \"10.0.0.161\" not found" Jan 17 12:24:10.985933 kubelet[1774]: E0117 12:24:10.985882 1774 kubelet_node_status.go:453] "Error getting the current node from lister" err="node \"10.0.0.161\" not found" Jan 17 12:24:11.086369 kubelet[1774]: E0117 12:24:11.086230 1774 kubelet_node_status.go:453] "Error getting the current node from lister" err="node \"10.0.0.161\" not found" Jan 17 12:24:11.183423 kubelet[1774]: I0117 12:24:11.183365 1774 kubelet_network_linux.go:50] "Initialized iptables rules." protocol="IPv4" Jan 17 12:24:11.184774 kubelet[1774]: I0117 12:24:11.184739 1774 kubelet_network_linux.go:50] "Initialized iptables rules." protocol="IPv6" Jan 17 12:24:11.184846 kubelet[1774]: I0117 12:24:11.184797 1774 status_manager.go:217] "Starting to sync pod status with apiserver" Jan 17 12:24:11.184846 kubelet[1774]: I0117 12:24:11.184825 1774 kubelet.go:2321] "Starting kubelet main sync loop" Jan 17 12:24:11.185281 kubelet[1774]: E0117 12:24:11.184963 1774 kubelet.go:2345] "Skipping pod synchronization" err="[container runtime status check may not have completed yet, PLEG is not healthy: pleg has yet to be successful]" Jan 17 12:24:11.186350 kubelet[1774]: E0117 12:24:11.186319 1774 kubelet_node_status.go:453] "Error getting the current node from lister" err="node \"10.0.0.161\" not found" Jan 17 12:24:11.188148 kubelet[1774]: I0117 12:24:11.188119 1774 policy_none.go:49] "None policy: Start" Jan 17 12:24:11.188827 kubelet[1774]: I0117 12:24:11.188770 1774 memory_manager.go:170] "Starting memorymanager" policy="None" Jan 17 12:24:11.188883 kubelet[1774]: I0117 12:24:11.188854 1774 state_mem.go:35] "Initializing new in-memory state store" Jan 17 12:24:11.203644 kubelet[1774]: E0117 12:24:11.203586 1774 nodelease.go:49] "Failed to get node when trying to set owner ref to the node lease" err="nodes \"10.0.0.161\" not found" node="10.0.0.161" Jan 17 12:24:11.285090 kubelet[1774]: E0117 12:24:11.285043 1774 kubelet.go:2345] "Skipping pod synchronization" err="container runtime status check may not have completed yet" Jan 17 12:24:11.287248 kubelet[1774]: E0117 12:24:11.287218 1774 kubelet_node_status.go:453] "Error getting the current node from lister" err="node \"10.0.0.161\" not found" Jan 17 12:24:11.301886 systemd[1]: Created slice kubepods.slice - libcontainer container kubepods.slice. Jan 17 12:24:11.315882 systemd[1]: Created slice kubepods-burstable.slice - libcontainer container kubepods-burstable.slice. Jan 17 12:24:11.318691 systemd[1]: Created slice kubepods-besteffort.slice - libcontainer container kubepods-besteffort.slice. 
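Annotation (not part of the log): the "Failed to ensure lease exists, will retry" messages show the retry interval growing from 200ms to 400ms. A minimal sketch of a doubling backoff that matches those two observed data points; the later intervals and the 5 s cap used here are illustrative assumptions, not something this log confirms.

```python
# Doubling backoff consistent with the two retry intervals seen above
# (200 ms, then 400 ms).  Later values and the cap are assumptions.

def doubling_backoff(initial: float, factor: float = 2.0, cap: float = 5.0):
    """Yield retry intervals: initial, initial*factor, ... bounded by cap."""
    delay = initial
    while True:
        yield min(delay, cap)
        delay *= factor

gen = doubling_backoff(0.2)
print([next(gen) for _ in range(5)])   # [0.2, 0.4, 0.8, 1.6, 3.2]
```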
Jan 17 12:24:11.335143 kubelet[1774]: I0117 12:24:11.334979 1774 manager.go:510] "Failed to read data from checkpoint" checkpoint="kubelet_internal_checkpoint" err="checkpoint is not found" Jan 17 12:24:11.335283 kubelet[1774]: I0117 12:24:11.335264 1774 eviction_manager.go:189] "Eviction manager: starting control loop" Jan 17 12:24:11.335328 kubelet[1774]: I0117 12:24:11.335280 1774 container_log_manager.go:189] "Initializing container log rotate workers" workers=1 monitorPeriod="10s" Jan 17 12:24:11.335527 kubelet[1774]: I0117 12:24:11.335503 1774 plugin_manager.go:118] "Starting Kubelet Plugin Manager" Jan 17 12:24:11.336554 kubelet[1774]: E0117 12:24:11.336491 1774 eviction_manager.go:285] "Eviction manager: failed to get summary stats" err="failed to get node info: node \"10.0.0.161\" not found" Jan 17 12:24:11.436129 kubelet[1774]: I0117 12:24:11.436089 1774 kubelet_node_status.go:72] "Attempting to register node" node="10.0.0.161" Jan 17 12:24:11.489262 kubelet[1774]: I0117 12:24:11.489229 1774 kubelet_node_status.go:75] "Successfully registered node" node="10.0.0.161" Jan 17 12:24:11.489262 kubelet[1774]: E0117 12:24:11.489256 1774 kubelet_node_status.go:535] "Error updating node status, will retry" err="error getting node \"10.0.0.161\": node \"10.0.0.161\" not found" Jan 17 12:24:11.530355 kubelet[1774]: I0117 12:24:11.530329 1774 transport.go:147] "Certificate rotation detected, shutting down client connections to start using new credentials" Jan 17 12:24:11.530479 kubelet[1774]: W0117 12:24:11.530444 1774 reflector.go:484] k8s.io/client-go/informers/factory.go:160: watch of *v1.RuntimeClass ended with: very short watch: k8s.io/client-go/informers/factory.go:160: Unexpected watch close - watch lasted less than a second and no items received Jan 17 12:24:11.530608 kubelet[1774]: E0117 12:24:11.530500 1774 event.go:368] "Unable to write event (may retry after sleeping)" err="Post \"https://10.0.0.160:6443/api/v1/namespaces/default/events\": read tcp 10.0.0.161:43414->10.0.0.160:6443: use of closed network connection" event="&Event{ObjectMeta:{10.0.0.161.181b7a63dc4833c7 default 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Node,Namespace:,Name:10.0.0.161,UID:10.0.0.161,APIVersion:,ResourceVersion:,FieldPath:,},Reason:NodeHasSufficientMemory,Message:Node 10.0.0.161 status is now: NodeHasSufficientMemory,Source:EventSource{Component:kubelet,Host:10.0.0.161,},FirstTimestamp:2025-01-17 12:24:10.597528519 +0000 UTC m=+0.340750850,LastTimestamp:2025-01-17 12:24:11.436014339 +0000 UTC m=+1.179236670,Count:2,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:10.0.0.161,}" Jan 17 12:24:11.571716 kubelet[1774]: E0117 12:24:11.571684 1774 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Jan 17 12:24:11.861277 sudo[1641]: pam_unix(sudo:session): session closed for user root Jan 17 12:24:11.863058 sshd[1638]: pam_unix(sshd:session): session closed for user core Jan 17 12:24:11.867170 systemd[1]: sshd@7-10.0.0.161:22-10.0.0.1:36938.service: Deactivated successfully. Jan 17 12:24:11.869149 systemd[1]: session-7.scope: Deactivated successfully. Jan 17 12:24:11.869845 systemd-logind[1443]: Session 7 logged out. Waiting for processes to exit. Jan 17 12:24:11.870780 systemd-logind[1443]: Removed session 7. 
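Annotation (not part of the log): the kubelet events quoted above carry monotonic offsets ("m=+...") alongside the wall-clock timestamps. Subtracting the offset on the "Starting kubelet." event from the offset re-emitted when the node registered gives how long the kubelet ran before it could register the node; both offsets below are copied from the log.

```python
# Elapsed time from kubelet start to node registration, computed from the
# monotonic offsets embedded in the rejected events above.

STARTED_KUBELET   = 0.321549411   # m=+ offset on the "Starting kubelet." event
NODE_REGISTRATION = 1.179236670   # m=+ offset on the Count:2 event at registration

elapsed = NODE_REGISTRATION - STARTED_KUBELET
print(f"kubelet start -> node registration: {elapsed:.3f} s")   # ~0.858 s
```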
Jan 17 12:24:12.533237 kubelet[1774]: I0117 12:24:12.533206 1774 kuberuntime_manager.go:1633] "Updating runtime config through cri with podcidr" CIDR="192.168.1.0/24" Jan 17 12:24:12.533762 containerd[1459]: time="2025-01-17T12:24:12.533607198Z" level=info msg="No cni config template is specified, wait for other system components to drop the config." Jan 17 12:24:12.534094 kubelet[1774]: I0117 12:24:12.533829 1774 kubelet_network.go:61] "Updating Pod CIDR" originalPodCIDR="" newPodCIDR="192.168.1.0/24" Jan 17 12:24:12.572348 kubelet[1774]: E0117 12:24:12.572316 1774 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Jan 17 12:24:12.572348 kubelet[1774]: I0117 12:24:12.572335 1774 apiserver.go:52] "Watching apiserver" Jan 17 12:24:12.580549 systemd[1]: Created slice kubepods-besteffort-pod1a6a19d4_a208_49e4_8105_7bb6cd2ac189.slice - libcontainer container kubepods-besteffort-pod1a6a19d4_a208_49e4_8105_7bb6cd2ac189.slice. Jan 17 12:24:12.582169 kubelet[1774]: I0117 12:24:12.582147 1774 desired_state_of_world_populator.go:154] "Finished populating initial desired state of world" Jan 17 12:24:12.591709 systemd[1]: Created slice kubepods-burstable-podbd66da01_525c_4756_a809_7ce5d81058f3.slice - libcontainer container kubepods-burstable-podbd66da01_525c_4756_a809_7ce5d81058f3.slice. Jan 17 12:24:12.592013 kubelet[1774]: I0117 12:24:12.591892 1774 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cilium-run\" (UniqueName: \"kubernetes.io/host-path/bd66da01-525c-4756-a809-7ce5d81058f3-cilium-run\") pod \"cilium-ckqpx\" (UID: \"bd66da01-525c-4756-a809-7ce5d81058f3\") " pod="kube-system/cilium-ckqpx" Jan 17 12:24:12.592013 kubelet[1774]: I0117 12:24:12.591929 1774 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"hostproc\" (UniqueName: \"kubernetes.io/host-path/bd66da01-525c-4756-a809-7ce5d81058f3-hostproc\") pod \"cilium-ckqpx\" (UID: \"bd66da01-525c-4756-a809-7ce5d81058f3\") " pod="kube-system/cilium-ckqpx" Jan 17 12:24:12.592013 kubelet[1774]: I0117 12:24:12.591953 1774 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cilium-cgroup\" (UniqueName: \"kubernetes.io/host-path/bd66da01-525c-4756-a809-7ce5d81058f3-cilium-cgroup\") pod \"cilium-ckqpx\" (UID: \"bd66da01-525c-4756-a809-7ce5d81058f3\") " pod="kube-system/cilium-ckqpx" Jan 17 12:24:12.592013 kubelet[1774]: I0117 12:24:12.591972 1774 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cni-path\" (UniqueName: \"kubernetes.io/host-path/bd66da01-525c-4756-a809-7ce5d81058f3-cni-path\") pod \"cilium-ckqpx\" (UID: \"bd66da01-525c-4756-a809-7ce5d81058f3\") " pod="kube-system/cilium-ckqpx" Jan 17 12:24:12.592013 kubelet[1774]: I0117 12:24:12.591990 1774 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"bpf-maps\" (UniqueName: \"kubernetes.io/host-path/bd66da01-525c-4756-a809-7ce5d81058f3-bpf-maps\") pod \"cilium-ckqpx\" (UID: \"bd66da01-525c-4756-a809-7ce5d81058f3\") " pod="kube-system/cilium-ckqpx" Jan 17 12:24:12.592013 kubelet[1774]: I0117 12:24:12.592006 1774 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"etc-cni-netd\" (UniqueName: \"kubernetes.io/host-path/bd66da01-525c-4756-a809-7ce5d81058f3-etc-cni-netd\") pod \"cilium-ckqpx\" (UID: \"bd66da01-525c-4756-a809-7ce5d81058f3\") " 
pod="kube-system/cilium-ckqpx" Jan 17 12:24:12.592259 kubelet[1774]: I0117 12:24:12.592044 1774 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"xtables-lock\" (UniqueName: \"kubernetes.io/host-path/bd66da01-525c-4756-a809-7ce5d81058f3-xtables-lock\") pod \"cilium-ckqpx\" (UID: \"bd66da01-525c-4756-a809-7ce5d81058f3\") " pod="kube-system/cilium-ckqpx" Jan 17 12:24:12.592259 kubelet[1774]: I0117 12:24:12.592066 1774 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-w9rbp\" (UniqueName: \"kubernetes.io/projected/bd66da01-525c-4756-a809-7ce5d81058f3-kube-api-access-w9rbp\") pod \"cilium-ckqpx\" (UID: \"bd66da01-525c-4756-a809-7ce5d81058f3\") " pod="kube-system/cilium-ckqpx" Jan 17 12:24:12.592259 kubelet[1774]: I0117 12:24:12.592088 1774 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-proxy\" (UniqueName: \"kubernetes.io/configmap/1a6a19d4-a208-49e4-8105-7bb6cd2ac189-kube-proxy\") pod \"kube-proxy-zthtp\" (UID: \"1a6a19d4-a208-49e4-8105-7bb6cd2ac189\") " pod="kube-system/kube-proxy-zthtp" Jan 17 12:24:12.592259 kubelet[1774]: I0117 12:24:12.592107 1774 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cilium-config-path\" (UniqueName: \"kubernetes.io/configmap/bd66da01-525c-4756-a809-7ce5d81058f3-cilium-config-path\") pod \"cilium-ckqpx\" (UID: \"bd66da01-525c-4756-a809-7ce5d81058f3\") " pod="kube-system/cilium-ckqpx" Jan 17 12:24:12.592259 kubelet[1774]: I0117 12:24:12.592126 1774 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/1a6a19d4-a208-49e4-8105-7bb6cd2ac189-lib-modules\") pod \"kube-proxy-zthtp\" (UID: \"1a6a19d4-a208-49e4-8105-7bb6cd2ac189\") " pod="kube-system/kube-proxy-zthtp" Jan 17 12:24:12.592259 kubelet[1774]: I0117 12:24:12.592143 1774 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/bd66da01-525c-4756-a809-7ce5d81058f3-lib-modules\") pod \"cilium-ckqpx\" (UID: \"bd66da01-525c-4756-a809-7ce5d81058f3\") " pod="kube-system/cilium-ckqpx" Jan 17 12:24:12.592431 kubelet[1774]: I0117 12:24:12.592162 1774 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"clustermesh-secrets\" (UniqueName: \"kubernetes.io/secret/bd66da01-525c-4756-a809-7ce5d81058f3-clustermesh-secrets\") pod \"cilium-ckqpx\" (UID: \"bd66da01-525c-4756-a809-7ce5d81058f3\") " pod="kube-system/cilium-ckqpx" Jan 17 12:24:12.592431 kubelet[1774]: I0117 12:24:12.592181 1774 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-proc-sys-net\" (UniqueName: \"kubernetes.io/host-path/bd66da01-525c-4756-a809-7ce5d81058f3-host-proc-sys-net\") pod \"cilium-ckqpx\" (UID: \"bd66da01-525c-4756-a809-7ce5d81058f3\") " pod="kube-system/cilium-ckqpx" Jan 17 12:24:12.592431 kubelet[1774]: I0117 12:24:12.592200 1774 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-proc-sys-kernel\" (UniqueName: \"kubernetes.io/host-path/bd66da01-525c-4756-a809-7ce5d81058f3-host-proc-sys-kernel\") pod \"cilium-ckqpx\" (UID: \"bd66da01-525c-4756-a809-7ce5d81058f3\") " pod="kube-system/cilium-ckqpx" Jan 17 12:24:12.592431 kubelet[1774]: I0117 12:24:12.592220 1774 reconciler_common.go:245] 
"operationExecutor.VerifyControllerAttachedVolume started for volume \"hubble-tls\" (UniqueName: \"kubernetes.io/projected/bd66da01-525c-4756-a809-7ce5d81058f3-hubble-tls\") pod \"cilium-ckqpx\" (UID: \"bd66da01-525c-4756-a809-7ce5d81058f3\") " pod="kube-system/cilium-ckqpx" Jan 17 12:24:12.592431 kubelet[1774]: I0117 12:24:12.592239 1774 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"xtables-lock\" (UniqueName: \"kubernetes.io/host-path/1a6a19d4-a208-49e4-8105-7bb6cd2ac189-xtables-lock\") pod \"kube-proxy-zthtp\" (UID: \"1a6a19d4-a208-49e4-8105-7bb6cd2ac189\") " pod="kube-system/kube-proxy-zthtp" Jan 17 12:24:12.592580 kubelet[1774]: I0117 12:24:12.592259 1774 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-ns9pw\" (UniqueName: \"kubernetes.io/projected/1a6a19d4-a208-49e4-8105-7bb6cd2ac189-kube-api-access-ns9pw\") pod \"kube-proxy-zthtp\" (UID: \"1a6a19d4-a208-49e4-8105-7bb6cd2ac189\") " pod="kube-system/kube-proxy-zthtp" Jan 17 12:24:12.890033 kubelet[1774]: E0117 12:24:12.889915 1774 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jan 17 12:24:12.890567 containerd[1459]: time="2025-01-17T12:24:12.890521109Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-proxy-zthtp,Uid:1a6a19d4-a208-49e4-8105-7bb6cd2ac189,Namespace:kube-system,Attempt:0,}" Jan 17 12:24:12.903896 kubelet[1774]: E0117 12:24:12.903867 1774 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jan 17 12:24:12.904320 containerd[1459]: time="2025-01-17T12:24:12.904295962Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:cilium-ckqpx,Uid:bd66da01-525c-4756-a809-7ce5d81058f3,Namespace:kube-system,Attempt:0,}" Jan 17 12:24:13.572949 kubelet[1774]: E0117 12:24:13.572873 1774 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Jan 17 12:24:14.366194 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount960783747.mount: Deactivated successfully. 
Jan 17 12:24:14.388825 containerd[1459]: time="2025-01-17T12:24:14.388776352Z" level=info msg="ImageCreate event name:\"registry.k8s.io/pause:3.8\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"} labels:{key:\"io.cri-containerd.pinned\" value:\"pinned\"}" Jan 17 12:24:14.389760 containerd[1459]: time="2025-01-17T12:24:14.389724831Z" level=info msg="ImageUpdate event name:\"registry.k8s.io/pause:3.8\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"} labels:{key:\"io.cri-containerd.pinned\" value:\"pinned\"}" Jan 17 12:24:14.390511 containerd[1459]: time="2025-01-17T12:24:14.390454229Z" level=info msg="stop pulling image registry.k8s.io/pause:3.8: active requests=0, bytes read=0" Jan 17 12:24:14.391431 containerd[1459]: time="2025-01-17T12:24:14.391398119Z" level=info msg="stop pulling image registry.k8s.io/pause:3.8: active requests=0, bytes read=312056" Jan 17 12:24:14.392340 containerd[1459]: time="2025-01-17T12:24:14.392300050Z" level=info msg="ImageCreate event name:\"sha256:4873874c08efc72e9729683a83ffbb7502ee729e9a5ac097723806ea7fa13517\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"} labels:{key:\"io.cri-containerd.pinned\" value:\"pinned\"}" Jan 17 12:24:14.396051 containerd[1459]: time="2025-01-17T12:24:14.396004447Z" level=info msg="ImageCreate event name:\"registry.k8s.io/pause@sha256:9001185023633d17a2f98ff69b6ff2615b8ea02a825adffa40422f51dfdcde9d\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"} labels:{key:\"io.cri-containerd.pinned\" value:\"pinned\"}" Jan 17 12:24:14.397017 containerd[1459]: time="2025-01-17T12:24:14.396990066Z" level=info msg="Pulled image \"registry.k8s.io/pause:3.8\" with image id \"sha256:4873874c08efc72e9729683a83ffbb7502ee729e9a5ac097723806ea7fa13517\", repo tag \"registry.k8s.io/pause:3.8\", repo digest \"registry.k8s.io/pause@sha256:9001185023633d17a2f98ff69b6ff2615b8ea02a825adffa40422f51dfdcde9d\", size \"311286\" in 1.492646374s" Jan 17 12:24:14.397885 containerd[1459]: time="2025-01-17T12:24:14.397857994Z" level=info msg="Pulled image \"registry.k8s.io/pause:3.8\" with image id \"sha256:4873874c08efc72e9729683a83ffbb7502ee729e9a5ac097723806ea7fa13517\", repo tag \"registry.k8s.io/pause:3.8\", repo digest \"registry.k8s.io/pause@sha256:9001185023633d17a2f98ff69b6ff2615b8ea02a825adffa40422f51dfdcde9d\", size \"311286\" in 1.507237287s" Jan 17 12:24:14.502349 containerd[1459]: time="2025-01-17T12:24:14.502234497Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Jan 17 12:24:14.502647 containerd[1459]: time="2025-01-17T12:24:14.502435524Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Jan 17 12:24:14.502647 containerd[1459]: time="2025-01-17T12:24:14.502485047Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Jan 17 12:24:14.502647 containerd[1459]: time="2025-01-17T12:24:14.502495186Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Jan 17 12:24:14.502647 containerd[1459]: time="2025-01-17T12:24:14.502573242Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Jan 17 12:24:14.503368 containerd[1459]: time="2025-01-17T12:24:14.502945550Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." 
runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Jan 17 12:24:14.503368 containerd[1459]: time="2025-01-17T12:24:14.502964596Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Jan 17 12:24:14.503368 containerd[1459]: time="2025-01-17T12:24:14.503078580Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Jan 17 12:24:14.562171 systemd[1]: Started cri-containerd-3f5f363ede96ce713213ee302b158c570a18c7752266dbcab387781133960950.scope - libcontainer container 3f5f363ede96ce713213ee302b158c570a18c7752266dbcab387781133960950. Jan 17 12:24:14.564248 systemd[1]: Started cri-containerd-b192e7d5151bdcc5af319204fbdf0c240c875f7c1e13e8deb8f6a6f7f9b503c8.scope - libcontainer container b192e7d5151bdcc5af319204fbdf0c240c875f7c1e13e8deb8f6a6f7f9b503c8. Jan 17 12:24:14.574093 kubelet[1774]: E0117 12:24:14.573994 1774 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Jan 17 12:24:14.585347 containerd[1459]: time="2025-01-17T12:24:14.585297149Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:cilium-ckqpx,Uid:bd66da01-525c-4756-a809-7ce5d81058f3,Namespace:kube-system,Attempt:0,} returns sandbox id \"3f5f363ede96ce713213ee302b158c570a18c7752266dbcab387781133960950\"" Jan 17 12:24:14.586493 kubelet[1774]: E0117 12:24:14.586466 1774 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jan 17 12:24:14.587734 containerd[1459]: time="2025-01-17T12:24:14.587504368Z" level=info msg="PullImage \"quay.io/cilium/cilium:v1.12.5@sha256:06ce2b0a0a472e73334a7504ee5c5d8b2e2d7b72ef728ad94e564740dd505be5\"" Jan 17 12:24:14.589536 containerd[1459]: time="2025-01-17T12:24:14.589512835Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-proxy-zthtp,Uid:1a6a19d4-a208-49e4-8105-7bb6cd2ac189,Namespace:kube-system,Attempt:0,} returns sandbox id \"b192e7d5151bdcc5af319204fbdf0c240c875f7c1e13e8deb8f6a6f7f9b503c8\"" Jan 17 12:24:14.590241 kubelet[1774]: E0117 12:24:14.590220 1774 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jan 17 12:24:15.574869 kubelet[1774]: E0117 12:24:15.574833 1774 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Jan 17 12:24:16.575432 kubelet[1774]: E0117 12:24:16.575367 1774 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Jan 17 12:24:17.576102 kubelet[1774]: E0117 12:24:17.576017 1774 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Jan 17 12:24:18.576513 kubelet[1774]: E0117 12:24:18.576460 1774 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Jan 17 12:24:19.576965 kubelet[1774]: E0117 12:24:19.576919 1774 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Jan 17 12:24:20.387391 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount3051513608.mount: Deactivated successfully. 
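
At this point containerd has pulled pause:3.8 and started one sandbox per pod as a cri-containerd-<id>.scope unit (3f5f36... for cilium-ckqpx, b192e7... for kube-proxy-zthtp). A sketch of listing those sandboxes over the CRI API; it assumes the default containerd socket path and the k8s.io/cri-api v1 bindings:

```go
package main

import (
	"context"
	"fmt"
	"log"
	"time"

	"google.golang.org/grpc"
	"google.golang.org/grpc/credentials/insecure"
	runtimeapi "k8s.io/cri-api/pkg/apis/runtime/v1"
)

func main() {
	// Default containerd socket path assumed; its CRI plugin serves on the same socket.
	conn, err := grpc.Dial("unix:///run/containerd/containerd.sock",
		grpc.WithTransportCredentials(insecure.NewCredentials()))
	if err != nil {
		log.Fatal(err)
	}
	defer conn.Close()

	ctx, cancel := context.WithTimeout(context.Background(), 5*time.Second)
	defer cancel()

	rt := runtimeapi.NewRuntimeServiceClient(conn)
	resp, err := rt.ListPodSandbox(ctx, &runtimeapi.ListPodSandboxRequest{})
	if err != nil {
		log.Fatal(err)
	}
	for _, sb := range resp.Items {
		// Sandbox IDs here should match the cri-containerd-<id>.scope units above.
		fmt.Printf("%.13s  %s/%s  %v\n", sb.Id, sb.Metadata.Namespace, sb.Metadata.Name, sb.State)
	}
}
```
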
Jan 17 12:24:20.577268 kubelet[1774]: E0117 12:24:20.577229 1774 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Jan 17 12:24:21.577489 kubelet[1774]: E0117 12:24:21.577451 1774 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Jan 17 12:24:22.577870 kubelet[1774]: E0117 12:24:22.577821 1774 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Jan 17 12:24:23.578923 kubelet[1774]: E0117 12:24:23.578874 1774 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Jan 17 12:24:24.579074 kubelet[1774]: E0117 12:24:24.579012 1774 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Jan 17 12:24:25.286458 containerd[1459]: time="2025-01-17T12:24:25.286409755Z" level=info msg="ImageCreate event name:\"quay.io/cilium/cilium@sha256:06ce2b0a0a472e73334a7504ee5c5d8b2e2d7b72ef728ad94e564740dd505be5\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jan 17 12:24:25.287182 containerd[1459]: time="2025-01-17T12:24:25.287137399Z" level=info msg="stop pulling image quay.io/cilium/cilium@sha256:06ce2b0a0a472e73334a7504ee5c5d8b2e2d7b72ef728ad94e564740dd505be5: active requests=0, bytes read=166735291" Jan 17 12:24:25.288383 containerd[1459]: time="2025-01-17T12:24:25.288353380Z" level=info msg="ImageCreate event name:\"sha256:3e35b3e9f295e7748482d40ed499b0ff7961f1f128d479d8e6682b3245bba69b\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jan 17 12:24:25.289772 containerd[1459]: time="2025-01-17T12:24:25.289741182Z" level=info msg="Pulled image \"quay.io/cilium/cilium:v1.12.5@sha256:06ce2b0a0a472e73334a7504ee5c5d8b2e2d7b72ef728ad94e564740dd505be5\" with image id \"sha256:3e35b3e9f295e7748482d40ed499b0ff7961f1f128d479d8e6682b3245bba69b\", repo tag \"\", repo digest \"quay.io/cilium/cilium@sha256:06ce2b0a0a472e73334a7504ee5c5d8b2e2d7b72ef728ad94e564740dd505be5\", size \"166719855\" in 10.702194044s" Jan 17 12:24:25.289828 containerd[1459]: time="2025-01-17T12:24:25.289771639Z" level=info msg="PullImage \"quay.io/cilium/cilium:v1.12.5@sha256:06ce2b0a0a472e73334a7504ee5c5d8b2e2d7b72ef728ad94e564740dd505be5\" returns image reference \"sha256:3e35b3e9f295e7748482d40ed499b0ff7961f1f128d479d8e6682b3245bba69b\"" Jan 17 12:24:25.290663 containerd[1459]: time="2025-01-17T12:24:25.290536173Z" level=info msg="PullImage \"registry.k8s.io/kube-proxy:v1.31.5\"" Jan 17 12:24:25.291721 containerd[1459]: time="2025-01-17T12:24:25.291682503Z" level=info msg="CreateContainer within sandbox \"3f5f363ede96ce713213ee302b158c570a18c7752266dbcab387781133960950\" for container &ContainerMetadata{Name:mount-cgroup,Attempt:0,}" Jan 17 12:24:25.305824 containerd[1459]: time="2025-01-17T12:24:25.305797233Z" level=info msg="CreateContainer within sandbox \"3f5f363ede96ce713213ee302b158c570a18c7752266dbcab387781133960950\" for &ContainerMetadata{Name:mount-cgroup,Attempt:0,} returns container id \"df67182abd63df77448e06972fa8b7ea791f793533d780259b1317e7a8bbc488\"" Jan 17 12:24:25.306324 containerd[1459]: time="2025-01-17T12:24:25.306287933Z" level=info msg="StartContainer for \"df67182abd63df77448e06972fa8b7ea791f793533d780259b1317e7a8bbc488\"" Jan 17 12:24:25.333146 systemd[1]: Started cri-containerd-df67182abd63df77448e06972fa8b7ea791f793533d780259b1317e7a8bbc488.scope - libcontainer container 
df67182abd63df77448e06972fa8b7ea791f793533d780259b1317e7a8bbc488. Jan 17 12:24:25.358690 containerd[1459]: time="2025-01-17T12:24:25.358640593Z" level=info msg="StartContainer for \"df67182abd63df77448e06972fa8b7ea791f793533d780259b1317e7a8bbc488\" returns successfully" Jan 17 12:24:25.367850 systemd[1]: cri-containerd-df67182abd63df77448e06972fa8b7ea791f793533d780259b1317e7a8bbc488.scope: Deactivated successfully. Jan 17 12:24:25.580069 kubelet[1774]: E0117 12:24:25.579955 1774 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Jan 17 12:24:25.952734 containerd[1459]: time="2025-01-17T12:24:25.952685309Z" level=info msg="shim disconnected" id=df67182abd63df77448e06972fa8b7ea791f793533d780259b1317e7a8bbc488 namespace=k8s.io Jan 17 12:24:25.952827 containerd[1459]: time="2025-01-17T12:24:25.952736235Z" level=warning msg="cleaning up after shim disconnected" id=df67182abd63df77448e06972fa8b7ea791f793533d780259b1317e7a8bbc488 namespace=k8s.io Jan 17 12:24:25.952827 containerd[1459]: time="2025-01-17T12:24:25.952747406Z" level=info msg="cleaning up dead shim" namespace=k8s.io Jan 17 12:24:26.212055 kubelet[1774]: E0117 12:24:26.211932 1774 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jan 17 12:24:26.213355 containerd[1459]: time="2025-01-17T12:24:26.213323359Z" level=info msg="CreateContainer within sandbox \"3f5f363ede96ce713213ee302b158c570a18c7752266dbcab387781133960950\" for container &ContainerMetadata{Name:apply-sysctl-overwrites,Attempt:0,}" Jan 17 12:24:26.226395 containerd[1459]: time="2025-01-17T12:24:26.226352964Z" level=info msg="CreateContainer within sandbox \"3f5f363ede96ce713213ee302b158c570a18c7752266dbcab387781133960950\" for &ContainerMetadata{Name:apply-sysctl-overwrites,Attempt:0,} returns container id \"b32a6cb6854f417133f2f8febceb7e48102c25f956e3c46531978fadebf86477\"" Jan 17 12:24:26.226820 containerd[1459]: time="2025-01-17T12:24:26.226757052Z" level=info msg="StartContainer for \"b32a6cb6854f417133f2f8febceb7e48102c25f956e3c46531978fadebf86477\"" Jan 17 12:24:26.255232 systemd[1]: Started cri-containerd-b32a6cb6854f417133f2f8febceb7e48102c25f956e3c46531978fadebf86477.scope - libcontainer container b32a6cb6854f417133f2f8febceb7e48102c25f956e3c46531978fadebf86477. Jan 17 12:24:26.278885 containerd[1459]: time="2025-01-17T12:24:26.278846818Z" level=info msg="StartContainer for \"b32a6cb6854f417133f2f8febceb7e48102c25f956e3c46531978fadebf86477\" returns successfully" Jan 17 12:24:26.289013 systemd[1]: systemd-sysctl.service: Deactivated successfully. Jan 17 12:24:26.289344 systemd[1]: Stopped systemd-sysctl.service - Apply Kernel Variables. Jan 17 12:24:26.289423 systemd[1]: Stopping systemd-sysctl.service - Apply Kernel Variables... Jan 17 12:24:26.297389 systemd[1]: Starting systemd-sysctl.service - Apply Kernel Variables... Jan 17 12:24:26.297595 systemd[1]: cri-containerd-b32a6cb6854f417133f2f8febceb7e48102c25f956e3c46531978fadebf86477.scope: Deactivated successfully. Jan 17 12:24:26.304973 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-df67182abd63df77448e06972fa8b7ea791f793533d780259b1317e7a8bbc488-rootfs.mount: Deactivated successfully. Jan 17 12:24:26.309422 systemd[1]: Finished systemd-sysctl.service - Apply Kernel Variables. 
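
mount-cgroup and apply-sysctl-overwrites are cilium's init containers: each runs briefly, its scope is deactivated, and the shim is torn down, so the "shim disconnected" and "cleaning up dead shim" messages are expected here, and systemd-sysctl is restarted after the sysctl overwrite. A sketch (same placeholder kubeconfig) that prints where each init container of cilium-ckqpx currently stands:

```go
package main

import (
	"context"
	"fmt"
	"log"

	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
	"k8s.io/client-go/kubernetes"
	"k8s.io/client-go/tools/clientcmd"
)

func main() {
	cfg, err := clientcmd.BuildConfigFromFlags("", "/path/to/kubeconfig") // placeholder path
	if err != nil {
		log.Fatal(err)
	}
	cs := kubernetes.NewForConfigOrDie(cfg)

	pod, err := cs.CoreV1().Pods("kube-system").Get(context.Background(), "cilium-ckqpx", metav1.GetOptions{})
	if err != nil {
		log.Fatal(err)
	}
	for _, st := range pod.Status.InitContainerStatuses {
		switch {
		case st.State.Terminated != nil:
			fmt.Printf("%-25s exited %d\n", st.Name, st.State.Terminated.ExitCode)
		case st.State.Running != nil:
			fmt.Printf("%-25s running\n", st.Name)
		default:
			fmt.Printf("%-25s waiting\n", st.Name)
		}
	}
}
```
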
Jan 17 12:24:26.314831 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-b32a6cb6854f417133f2f8febceb7e48102c25f956e3c46531978fadebf86477-rootfs.mount: Deactivated successfully. Jan 17 12:24:26.334201 containerd[1459]: time="2025-01-17T12:24:26.334145153Z" level=info msg="shim disconnected" id=b32a6cb6854f417133f2f8febceb7e48102c25f956e3c46531978fadebf86477 namespace=k8s.io Jan 17 12:24:26.334507 containerd[1459]: time="2025-01-17T12:24:26.334200967Z" level=warning msg="cleaning up after shim disconnected" id=b32a6cb6854f417133f2f8febceb7e48102c25f956e3c46531978fadebf86477 namespace=k8s.io Jan 17 12:24:26.334507 containerd[1459]: time="2025-01-17T12:24:26.334208952Z" level=info msg="cleaning up dead shim" namespace=k8s.io Jan 17 12:24:26.580274 kubelet[1774]: E0117 12:24:26.580169 1774 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Jan 17 12:24:26.734874 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount1131406205.mount: Deactivated successfully. Jan 17 12:24:27.007952 containerd[1459]: time="2025-01-17T12:24:27.007907237Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-proxy:v1.31.5\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jan 17 12:24:27.008684 containerd[1459]: time="2025-01-17T12:24:27.008631716Z" level=info msg="stop pulling image registry.k8s.io/kube-proxy:v1.31.5: active requests=0, bytes read=30231128" Jan 17 12:24:27.009703 containerd[1459]: time="2025-01-17T12:24:27.009645417Z" level=info msg="ImageCreate event name:\"sha256:34018aef09a62f8b40bdd1d2e1bf6c48f359cab492d51059a09e20745ab02ce2\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jan 17 12:24:27.011625 containerd[1459]: time="2025-01-17T12:24:27.011590324Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-proxy@sha256:c00685cc45c1fb539c5bbd8d24d2577f96e9399efac1670f688f654b30f8c64c\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jan 17 12:24:27.012249 containerd[1459]: time="2025-01-17T12:24:27.012199176Z" level=info msg="Pulled image \"registry.k8s.io/kube-proxy:v1.31.5\" with image id \"sha256:34018aef09a62f8b40bdd1d2e1bf6c48f359cab492d51059a09e20745ab02ce2\", repo tag \"registry.k8s.io/kube-proxy:v1.31.5\", repo digest \"registry.k8s.io/kube-proxy@sha256:c00685cc45c1fb539c5bbd8d24d2577f96e9399efac1670f688f654b30f8c64c\", size \"30230147\" in 1.721637375s" Jan 17 12:24:27.012249 containerd[1459]: time="2025-01-17T12:24:27.012238991Z" level=info msg="PullImage \"registry.k8s.io/kube-proxy:v1.31.5\" returns image reference \"sha256:34018aef09a62f8b40bdd1d2e1bf6c48f359cab492d51059a09e20745ab02ce2\"" Jan 17 12:24:27.014168 containerd[1459]: time="2025-01-17T12:24:27.014133774Z" level=info msg="CreateContainer within sandbox \"b192e7d5151bdcc5af319204fbdf0c240c875f7c1e13e8deb8f6a6f7f9b503c8\" for container &ContainerMetadata{Name:kube-proxy,Attempt:0,}" Jan 17 12:24:27.030755 containerd[1459]: time="2025-01-17T12:24:27.030711062Z" level=info msg="CreateContainer within sandbox \"b192e7d5151bdcc5af319204fbdf0c240c875f7c1e13e8deb8f6a6f7f9b503c8\" for &ContainerMetadata{Name:kube-proxy,Attempt:0,} returns container id \"14d340cf802a27514283a2893edaeda10aef07599682433e116316c512e223cb\"" Jan 17 12:24:27.031218 containerd[1459]: time="2025-01-17T12:24:27.031184811Z" level=info msg="StartContainer for \"14d340cf802a27514283a2893edaeda10aef07599682433e116316c512e223cb\"" Jan 17 12:24:27.059164 systemd[1]: Started 
cri-containerd-14d340cf802a27514283a2893edaeda10aef07599682433e116316c512e223cb.scope - libcontainer container 14d340cf802a27514283a2893edaeda10aef07599682433e116316c512e223cb. Jan 17 12:24:27.087677 containerd[1459]: time="2025-01-17T12:24:27.087634304Z" level=info msg="StartContainer for \"14d340cf802a27514283a2893edaeda10aef07599682433e116316c512e223cb\" returns successfully" Jan 17 12:24:27.215080 kubelet[1774]: E0117 12:24:27.215048 1774 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jan 17 12:24:27.216845 containerd[1459]: time="2025-01-17T12:24:27.216797190Z" level=info msg="CreateContainer within sandbox \"3f5f363ede96ce713213ee302b158c570a18c7752266dbcab387781133960950\" for container &ContainerMetadata{Name:mount-bpf-fs,Attempt:0,}" Jan 17 12:24:27.217135 kubelet[1774]: E0117 12:24:27.217113 1774 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jan 17 12:24:27.235497 containerd[1459]: time="2025-01-17T12:24:27.235445893Z" level=info msg="CreateContainer within sandbox \"3f5f363ede96ce713213ee302b158c570a18c7752266dbcab387781133960950\" for &ContainerMetadata{Name:mount-bpf-fs,Attempt:0,} returns container id \"bc9c35558270be193a276e0d676955dbf41cdd47c5119a4393e0c20fb6b5f5d3\"" Jan 17 12:24:27.235979 containerd[1459]: time="2025-01-17T12:24:27.235927476Z" level=info msg="StartContainer for \"bc9c35558270be193a276e0d676955dbf41cdd47c5119a4393e0c20fb6b5f5d3\"" Jan 17 12:24:27.237912 kubelet[1774]: I0117 12:24:27.237861 1774 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/kube-proxy-zthtp" podStartSLOduration=3.815714062 podStartE2EDuration="16.237845874s" podCreationTimestamp="2025-01-17 12:24:11 +0000 UTC" firstStartedPulling="2025-01-17 12:24:14.590734806 +0000 UTC m=+4.333957137" lastFinishedPulling="2025-01-17 12:24:27.012866628 +0000 UTC m=+16.756088949" observedRunningTime="2025-01-17 12:24:27.237751237 +0000 UTC m=+16.980973578" watchObservedRunningTime="2025-01-17 12:24:27.237845874 +0000 UTC m=+16.981068195" Jan 17 12:24:27.263156 systemd[1]: Started cri-containerd-bc9c35558270be193a276e0d676955dbf41cdd47c5119a4393e0c20fb6b5f5d3.scope - libcontainer container bc9c35558270be193a276e0d676955dbf41cdd47c5119a4393e0c20fb6b5f5d3. Jan 17 12:24:27.290919 containerd[1459]: time="2025-01-17T12:24:27.290863160Z" level=info msg="StartContainer for \"bc9c35558270be193a276e0d676955dbf41cdd47c5119a4393e0c20fb6b5f5d3\" returns successfully" Jan 17 12:24:27.292147 systemd[1]: cri-containerd-bc9c35558270be193a276e0d676955dbf41cdd47c5119a4393e0c20fb6b5f5d3.scope: Deactivated successfully. Jan 17 12:24:27.312439 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-bc9c35558270be193a276e0d676955dbf41cdd47c5119a4393e0c20fb6b5f5d3-rootfs.mount: Deactivated successfully. 
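
The pod_startup_latency_tracker entry for kube-proxy-zthtp reports podStartSLOduration=3.82s against podStartE2EDuration=16.24s; the difference is the image pull window (firstStartedPulling to lastFinishedPulling, about 12.42s), which the SLO figure excludes. The arithmetic can be checked directly from the timestamps in that entry:

```go
package main

import (
	"fmt"
	"time"
)

func main() {
	const layout = "2006-01-02 15:04:05.999999999 -0700 MST"
	mustParse := func(s string) time.Time {
		t, err := time.Parse(layout, s)
		if err != nil {
			panic(err)
		}
		return t
	}

	// Timestamps copied from the kube-proxy-zthtp tracker entry (monotonic suffixes dropped).
	created := mustParse("2025-01-17 12:24:11 +0000 UTC")
	firstPull := mustParse("2025-01-17 12:24:14.590734806 +0000 UTC")
	lastPull := mustParse("2025-01-17 12:24:27.012866628 +0000 UTC")
	observed := mustParse("2025-01-17 12:24:27.237845874 +0000 UTC")

	e2e := observed.Sub(created)    // podStartE2EDuration, ~16.237845874s
	pull := lastPull.Sub(firstPull) // image pull window,   ~12.422131822s
	fmt.Println("E2E:", e2e)
	fmt.Println("pull:", pull)
	fmt.Println("E2E - pull:", e2e-pull) // ~3.8157s, the reported podStartSLOduration
}
```
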
Jan 17 12:24:27.580557 kubelet[1774]: E0117 12:24:27.580453 1774 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Jan 17 12:24:27.658502 containerd[1459]: time="2025-01-17T12:24:27.658435490Z" level=info msg="shim disconnected" id=bc9c35558270be193a276e0d676955dbf41cdd47c5119a4393e0c20fb6b5f5d3 namespace=k8s.io Jan 17 12:24:27.658502 containerd[1459]: time="2025-01-17T12:24:27.658489080Z" level=warning msg="cleaning up after shim disconnected" id=bc9c35558270be193a276e0d676955dbf41cdd47c5119a4393e0c20fb6b5f5d3 namespace=k8s.io Jan 17 12:24:27.658502 containerd[1459]: time="2025-01-17T12:24:27.658497837Z" level=info msg="cleaning up dead shim" namespace=k8s.io Jan 17 12:24:28.219868 kubelet[1774]: E0117 12:24:28.219839 1774 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jan 17 12:24:28.220008 kubelet[1774]: E0117 12:24:28.219971 1774 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jan 17 12:24:28.221440 containerd[1459]: time="2025-01-17T12:24:28.221404518Z" level=info msg="CreateContainer within sandbox \"3f5f363ede96ce713213ee302b158c570a18c7752266dbcab387781133960950\" for container &ContainerMetadata{Name:clean-cilium-state,Attempt:0,}" Jan 17 12:24:28.238680 containerd[1459]: time="2025-01-17T12:24:28.238624090Z" level=info msg="CreateContainer within sandbox \"3f5f363ede96ce713213ee302b158c570a18c7752266dbcab387781133960950\" for &ContainerMetadata{Name:clean-cilium-state,Attempt:0,} returns container id \"92fbbb175370b44487f8c192bb8c0ee128ba0907dc90930670f1559398e37ea0\"" Jan 17 12:24:28.239267 containerd[1459]: time="2025-01-17T12:24:28.239231069Z" level=info msg="StartContainer for \"92fbbb175370b44487f8c192bb8c0ee128ba0907dc90930670f1559398e37ea0\"" Jan 17 12:24:28.267157 systemd[1]: Started cri-containerd-92fbbb175370b44487f8c192bb8c0ee128ba0907dc90930670f1559398e37ea0.scope - libcontainer container 92fbbb175370b44487f8c192bb8c0ee128ba0907dc90930670f1559398e37ea0. Jan 17 12:24:28.289691 systemd[1]: cri-containerd-92fbbb175370b44487f8c192bb8c0ee128ba0907dc90930670f1559398e37ea0.scope: Deactivated successfully. Jan 17 12:24:28.291403 containerd[1459]: time="2025-01-17T12:24:28.291340412Z" level=info msg="StartContainer for \"92fbbb175370b44487f8c192bb8c0ee128ba0907dc90930670f1559398e37ea0\" returns successfully" Jan 17 12:24:28.307902 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-92fbbb175370b44487f8c192bb8c0ee128ba0907dc90930670f1559398e37ea0-rootfs.mount: Deactivated successfully. 
Jan 17 12:24:28.312720 containerd[1459]: time="2025-01-17T12:24:28.312676515Z" level=info msg="shim disconnected" id=92fbbb175370b44487f8c192bb8c0ee128ba0907dc90930670f1559398e37ea0 namespace=k8s.io Jan 17 12:24:28.312720 containerd[1459]: time="2025-01-17T12:24:28.312719436Z" level=warning msg="cleaning up after shim disconnected" id=92fbbb175370b44487f8c192bb8c0ee128ba0907dc90930670f1559398e37ea0 namespace=k8s.io Jan 17 12:24:28.312849 containerd[1459]: time="2025-01-17T12:24:28.312728092Z" level=info msg="cleaning up dead shim" namespace=k8s.io Jan 17 12:24:28.581114 kubelet[1774]: E0117 12:24:28.580987 1774 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Jan 17 12:24:29.222610 kubelet[1774]: E0117 12:24:29.222582 1774 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jan 17 12:24:29.224062 containerd[1459]: time="2025-01-17T12:24:29.224010763Z" level=info msg="CreateContainer within sandbox \"3f5f363ede96ce713213ee302b158c570a18c7752266dbcab387781133960950\" for container &ContainerMetadata{Name:cilium-agent,Attempt:0,}" Jan 17 12:24:29.239242 containerd[1459]: time="2025-01-17T12:24:29.239184927Z" level=info msg="CreateContainer within sandbox \"3f5f363ede96ce713213ee302b158c570a18c7752266dbcab387781133960950\" for &ContainerMetadata{Name:cilium-agent,Attempt:0,} returns container id \"1413578d79d40623c434dcc175fa89eb80fb953f21804c06a1a8388d4f72a2a8\"" Jan 17 12:24:29.239677 containerd[1459]: time="2025-01-17T12:24:29.239654922Z" level=info msg="StartContainer for \"1413578d79d40623c434dcc175fa89eb80fb953f21804c06a1a8388d4f72a2a8\"" Jan 17 12:24:29.267187 systemd[1]: Started cri-containerd-1413578d79d40623c434dcc175fa89eb80fb953f21804c06a1a8388d4f72a2a8.scope - libcontainer container 1413578d79d40623c434dcc175fa89eb80fb953f21804c06a1a8388d4f72a2a8. Jan 17 12:24:29.294682 containerd[1459]: time="2025-01-17T12:24:29.294636851Z" level=info msg="StartContainer for \"1413578d79d40623c434dcc175fa89eb80fb953f21804c06a1a8388d4f72a2a8\" returns successfully" Jan 17 12:24:29.313326 systemd[1]: run-containerd-runc-k8s.io-1413578d79d40623c434dcc175fa89eb80fb953f21804c06a1a8388d4f72a2a8-runc.EvPqi8.mount: Deactivated successfully. 
Jan 17 12:24:29.420485 kubelet[1774]: I0117 12:24:29.420442 1774 kubelet_node_status.go:488] "Fast updating node status as it just became ready" Jan 17 12:24:29.581641 kubelet[1774]: E0117 12:24:29.581521 1774 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Jan 17 12:24:29.750055 kernel: Initializing XFRM netlink socket Jan 17 12:24:30.226206 kubelet[1774]: E0117 12:24:30.226179 1774 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jan 17 12:24:30.240081 kubelet[1774]: I0117 12:24:30.240032 1774 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/cilium-ckqpx" podStartSLOduration=8.536663752 podStartE2EDuration="19.24000055s" podCreationTimestamp="2025-01-17 12:24:11 +0000 UTC" firstStartedPulling="2025-01-17 12:24:14.587112593 +0000 UTC m=+4.330334914" lastFinishedPulling="2025-01-17 12:24:25.29044938 +0000 UTC m=+15.033671712" observedRunningTime="2025-01-17 12:24:30.239894396 +0000 UTC m=+19.983116727" watchObservedRunningTime="2025-01-17 12:24:30.24000055 +0000 UTC m=+19.983222881" Jan 17 12:24:30.571696 kubelet[1774]: E0117 12:24:30.571569 1774 file.go:104] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Jan 17 12:24:30.582177 kubelet[1774]: E0117 12:24:30.582154 1774 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Jan 17 12:24:31.227782 kubelet[1774]: E0117 12:24:31.227743 1774 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jan 17 12:24:31.430837 systemd-networkd[1375]: cilium_host: Link UP Jan 17 12:24:31.431230 systemd-networkd[1375]: cilium_net: Link UP Jan 17 12:24:31.431516 systemd-networkd[1375]: cilium_net: Gained carrier Jan 17 12:24:31.431895 systemd-networkd[1375]: cilium_host: Gained carrier Jan 17 12:24:31.524212 systemd-networkd[1375]: cilium_vxlan: Link UP Jan 17 12:24:31.524224 systemd-networkd[1375]: cilium_vxlan: Gained carrier Jan 17 12:24:31.582844 kubelet[1774]: E0117 12:24:31.582809 1774 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Jan 17 12:24:31.715053 kernel: NET: Registered PF_ALG protocol family Jan 17 12:24:31.850321 systemd-networkd[1375]: cilium_host: Gained IPv6LL Jan 17 12:24:32.058194 systemd-networkd[1375]: cilium_net: Gained IPv6LL Jan 17 12:24:32.229904 kubelet[1774]: E0117 12:24:32.229847 1774 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jan 17 12:24:32.322185 systemd-networkd[1375]: lxc_health: Link UP Jan 17 12:24:32.324663 systemd-networkd[1375]: lxc_health: Gained carrier Jan 17 12:24:32.583526 kubelet[1774]: E0117 12:24:32.583375 1774 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Jan 17 12:24:33.069500 systemd[1]: Created slice kubepods-besteffort-pod56e49707_58f1_47f4_bc22_f6a01116d4c5.slice - libcontainer container kubepods-besteffort-pod56e49707_58f1_47f4_bc22_f6a01116d4c5.slice. 
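
With the cilium-agent container up, the node flips to Ready and systemd-networkd reports the cilium_host, cilium_net and cilium_vxlan devices plus the lxc_health veth gaining carrier; the XFRM and PF_ALG kernel messages come from datapath features cilium enables. A sketch, assuming the github.com/vishvananda/netlink package and that it is run on the node itself, that lists those interfaces:

```go
package main

import (
	"fmt"
	"log"
	"strings"

	"github.com/vishvananda/netlink"
)

func main() {
	links, err := netlink.LinkList()
	if err != nil {
		log.Fatal(err)
	}
	for _, l := range links {
		name := l.Attrs().Name
		// cilium_host / cilium_net / cilium_vxlan plus the per-endpoint lxc* veths.
		if strings.HasPrefix(name, "cilium") || strings.HasPrefix(name, "lxc") {
			fmt.Printf("%-20s type=%-8s mtu=%d state=%v\n",
				name, l.Type(), l.Attrs().MTU, l.Attrs().OperState)
		}
	}
}
```
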
Jan 17 12:24:33.114297 kubelet[1774]: I0117 12:24:33.114227 1774 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-6c9g6\" (UniqueName: \"kubernetes.io/projected/56e49707-58f1-47f4-bc22-f6a01116d4c5-kube-api-access-6c9g6\") pod \"nginx-deployment-8587fbcb89-2mmbj\" (UID: \"56e49707-58f1-47f4-bc22-f6a01116d4c5\") " pod="default/nginx-deployment-8587fbcb89-2mmbj" Jan 17 12:24:33.231887 kubelet[1774]: E0117 12:24:33.231785 1774 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jan 17 12:24:33.373011 containerd[1459]: time="2025-01-17T12:24:33.372879299Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:nginx-deployment-8587fbcb89-2mmbj,Uid:56e49707-58f1-47f4-bc22-f6a01116d4c5,Namespace:default,Attempt:0,}" Jan 17 12:24:33.403170 systemd-networkd[1375]: cilium_vxlan: Gained IPv6LL Jan 17 12:24:33.403854 systemd-networkd[1375]: lxc8ad91f593ebf: Link UP Jan 17 12:24:33.414390 kernel: eth0: renamed from tmpc54bc Jan 17 12:24:33.417544 systemd-networkd[1375]: lxc8ad91f593ebf: Gained carrier Jan 17 12:24:33.583824 kubelet[1774]: E0117 12:24:33.583759 1774 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Jan 17 12:24:33.978188 systemd-networkd[1375]: lxc_health: Gained IPv6LL Jan 17 12:24:34.233649 kubelet[1774]: E0117 12:24:34.233533 1774 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jan 17 12:24:34.584869 kubelet[1774]: E0117 12:24:34.584720 1774 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Jan 17 12:24:35.130186 systemd-networkd[1375]: lxc8ad91f593ebf: Gained IPv6LL Jan 17 12:24:35.234709 kubelet[1774]: E0117 12:24:35.234669 1774 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jan 17 12:24:35.585616 kubelet[1774]: E0117 12:24:35.585532 1774 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Jan 17 12:24:36.576289 containerd[1459]: time="2025-01-17T12:24:36.576148852Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Jan 17 12:24:36.576878 containerd[1459]: time="2025-01-17T12:24:36.576785616Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Jan 17 12:24:36.576878 containerd[1459]: time="2025-01-17T12:24:36.576854077Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Jan 17 12:24:36.577064 containerd[1459]: time="2025-01-17T12:24:36.576986539Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." 
runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Jan 17 12:24:36.585871 kubelet[1774]: E0117 12:24:36.585835 1774 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Jan 17 12:24:36.601150 systemd[1]: Started cri-containerd-c54bc6c27fe0873ebf715124bf368fa5e9ac1686bafbea6d9d1837999d71394a.scope - libcontainer container c54bc6c27fe0873ebf715124bf368fa5e9ac1686bafbea6d9d1837999d71394a. Jan 17 12:24:36.612134 systemd-resolved[1325]: Failed to determine the local hostname and LLMNR/mDNS names, ignoring: No such device or address Jan 17 12:24:36.635376 containerd[1459]: time="2025-01-17T12:24:36.635281330Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:nginx-deployment-8587fbcb89-2mmbj,Uid:56e49707-58f1-47f4-bc22-f6a01116d4c5,Namespace:default,Attempt:0,} returns sandbox id \"c54bc6c27fe0873ebf715124bf368fa5e9ac1686bafbea6d9d1837999d71394a\"" Jan 17 12:24:36.636936 containerd[1459]: time="2025-01-17T12:24:36.636892242Z" level=info msg="PullImage \"ghcr.io/flatcar/nginx:latest\"" Jan 17 12:24:37.586188 kubelet[1774]: E0117 12:24:37.586140 1774 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Jan 17 12:24:38.589400 kubelet[1774]: E0117 12:24:38.589257 1774 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Jan 17 12:24:39.442735 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount3350649283.mount: Deactivated successfully. Jan 17 12:24:39.590199 kubelet[1774]: E0117 12:24:39.590151 1774 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Jan 17 12:24:40.591172 kubelet[1774]: E0117 12:24:40.591115 1774 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Jan 17 12:24:40.677871 containerd[1459]: time="2025-01-17T12:24:40.677824553Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/nginx:latest\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jan 17 12:24:40.678552 containerd[1459]: time="2025-01-17T12:24:40.678497120Z" level=info msg="stop pulling image ghcr.io/flatcar/nginx:latest: active requests=0, bytes read=71036018" Jan 17 12:24:40.679576 containerd[1459]: time="2025-01-17T12:24:40.679545501Z" level=info msg="ImageCreate event name:\"sha256:29ef6eaebfc53650f3a4609edbf9d35e866f56b2c5e01d32d93439031b300f0b\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jan 17 12:24:40.684962 containerd[1459]: time="2025-01-17T12:24:40.682357653Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/nginx@sha256:eca1d1ff18c7af45f86b7e0b572090f563a676ddca3da2ecff678390366335ad\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jan 17 12:24:40.684962 containerd[1459]: time="2025-01-17T12:24:40.683557142Z" level=info msg="Pulled image \"ghcr.io/flatcar/nginx:latest\" with image id \"sha256:29ef6eaebfc53650f3a4609edbf9d35e866f56b2c5e01d32d93439031b300f0b\", repo tag \"ghcr.io/flatcar/nginx:latest\", repo digest \"ghcr.io/flatcar/nginx@sha256:eca1d1ff18c7af45f86b7e0b572090f563a676ddca3da2ecff678390366335ad\", size \"71035896\" in 4.046613703s" Jan 17 12:24:40.684962 containerd[1459]: time="2025-01-17T12:24:40.683586558Z" level=info msg="PullImage \"ghcr.io/flatcar/nginx:latest\" returns image reference \"sha256:29ef6eaebfc53650f3a4609edbf9d35e866f56b2c5e01d32d93439031b300f0b\"" Jan 17 12:24:40.686690 containerd[1459]: 
time="2025-01-17T12:24:40.686657361Z" level=info msg="CreateContainer within sandbox \"c54bc6c27fe0873ebf715124bf368fa5e9ac1686bafbea6d9d1837999d71394a\" for container &ContainerMetadata{Name:nginx,Attempt:0,}" Jan 17 12:24:40.698741 containerd[1459]: time="2025-01-17T12:24:40.698703063Z" level=info msg="CreateContainer within sandbox \"c54bc6c27fe0873ebf715124bf368fa5e9ac1686bafbea6d9d1837999d71394a\" for &ContainerMetadata{Name:nginx,Attempt:0,} returns container id \"a35d48dc1c26b609c057c863378c00c467f4db7495100e5d47f024495c04d399\"" Jan 17 12:24:40.699109 containerd[1459]: time="2025-01-17T12:24:40.699074087Z" level=info msg="StartContainer for \"a35d48dc1c26b609c057c863378c00c467f4db7495100e5d47f024495c04d399\"" Jan 17 12:24:40.729156 systemd[1]: Started cri-containerd-a35d48dc1c26b609c057c863378c00c467f4db7495100e5d47f024495c04d399.scope - libcontainer container a35d48dc1c26b609c057c863378c00c467f4db7495100e5d47f024495c04d399. Jan 17 12:24:40.752505 containerd[1459]: time="2025-01-17T12:24:40.752455710Z" level=info msg="StartContainer for \"a35d48dc1c26b609c057c863378c00c467f4db7495100e5d47f024495c04d399\" returns successfully" Jan 17 12:24:41.253041 kubelet[1774]: I0117 12:24:41.252964 1774 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="default/nginx-deployment-8587fbcb89-2mmbj" podStartSLOduration=4.203967295 podStartE2EDuration="8.252950441s" podCreationTimestamp="2025-01-17 12:24:33 +0000 UTC" firstStartedPulling="2025-01-17 12:24:36.636573204 +0000 UTC m=+26.379795535" lastFinishedPulling="2025-01-17 12:24:40.68555636 +0000 UTC m=+30.428778681" observedRunningTime="2025-01-17 12:24:41.252460531 +0000 UTC m=+30.995682862" watchObservedRunningTime="2025-01-17 12:24:41.252950441 +0000 UTC m=+30.996172772" Jan 17 12:24:41.419840 update_engine[1448]: I20250117 12:24:41.419768 1448 update_attempter.cc:509] Updating boot flags... Jan 17 12:24:41.443056 kernel: BTRFS warning: duplicate device /dev/vda3 devid 1 generation 36 scanned by (udev-worker) (2972) Jan 17 12:24:41.467040 kernel: BTRFS warning: duplicate device /dev/vda3 devid 1 generation 36 scanned by (udev-worker) (2972) Jan 17 12:24:41.496052 kernel: BTRFS warning: duplicate device /dev/vda3 devid 1 generation 36 scanned by (udev-worker) (2972) Jan 17 12:24:41.591784 kubelet[1774]: E0117 12:24:41.591670 1774 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Jan 17 12:24:42.592036 kubelet[1774]: E0117 12:24:42.591989 1774 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Jan 17 12:24:43.592843 kubelet[1774]: E0117 12:24:43.592796 1774 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Jan 17 12:24:44.593600 kubelet[1774]: E0117 12:24:44.593545 1774 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Jan 17 12:24:45.242369 systemd[1]: Created slice kubepods-besteffort-pod5cc522c6_4ef6_490a_ab69_73df4d039f93.slice - libcontainer container kubepods-besteffort-pod5cc522c6_4ef6_490a_ab69_73df4d039f93.slice. 
Jan 17 12:24:45.275131 kubelet[1774]: I0117 12:24:45.275092 1774 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"data\" (UniqueName: \"kubernetes.io/empty-dir/5cc522c6-4ef6-490a-ab69-73df4d039f93-data\") pod \"nfs-server-provisioner-0\" (UID: \"5cc522c6-4ef6-490a-ab69-73df4d039f93\") " pod="default/nfs-server-provisioner-0" Jan 17 12:24:45.275131 kubelet[1774]: I0117 12:24:45.275132 1774 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-hw7sr\" (UniqueName: \"kubernetes.io/projected/5cc522c6-4ef6-490a-ab69-73df4d039f93-kube-api-access-hw7sr\") pod \"nfs-server-provisioner-0\" (UID: \"5cc522c6-4ef6-490a-ab69-73df4d039f93\") " pod="default/nfs-server-provisioner-0" Jan 17 12:24:45.545215 containerd[1459]: time="2025-01-17T12:24:45.545094226Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:nfs-server-provisioner-0,Uid:5cc522c6-4ef6-490a-ab69-73df4d039f93,Namespace:default,Attempt:0,}" Jan 17 12:24:45.593948 kubelet[1774]: E0117 12:24:45.593920 1774 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Jan 17 12:24:45.714242 systemd-networkd[1375]: lxc7359b9b54a93: Link UP Jan 17 12:24:45.722063 kernel: eth0: renamed from tmp4feb6 Jan 17 12:24:45.728983 systemd-networkd[1375]: lxc7359b9b54a93: Gained carrier Jan 17 12:24:45.915677 containerd[1459]: time="2025-01-17T12:24:45.915038605Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Jan 17 12:24:45.915677 containerd[1459]: time="2025-01-17T12:24:45.915413375Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Jan 17 12:24:45.915677 containerd[1459]: time="2025-01-17T12:24:45.915432822Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Jan 17 12:24:45.915677 containerd[1459]: time="2025-01-17T12:24:45.915584519Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Jan 17 12:24:45.938152 systemd[1]: Started cri-containerd-4feb63784de17b210aba7447bc6e27b7b20ffcaeb9d589e7c1a5e37888f658af.scope - libcontainer container 4feb63784de17b210aba7447bc6e27b7b20ffcaeb9d589e7c1a5e37888f658af. 
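
nfs-server-provisioner-0 is scheduled with an emptyDir "data" volume and a projected service-account token, gets its lxc7359b9b54a93 veth, and its sandbox scope starts; the image pull and StartContainer follow over the next entries. A sketch (same placeholder kubeconfig) that watches the pod until it reports Running, mirroring that sequence:

```go
package main

import (
	"context"
	"fmt"
	"log"

	corev1 "k8s.io/api/core/v1"
	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
	"k8s.io/client-go/kubernetes"
	"k8s.io/client-go/tools/clientcmd"
)

func main() {
	cfg, err := clientcmd.BuildConfigFromFlags("", "/path/to/kubeconfig") // placeholder path
	if err != nil {
		log.Fatal(err)
	}
	cs := kubernetes.NewForConfigOrDie(cfg)

	w, err := cs.CoreV1().Pods("default").Watch(context.Background(), metav1.ListOptions{
		FieldSelector: "metadata.name=nfs-server-provisioner-0",
	})
	if err != nil {
		log.Fatal(err)
	}
	defer w.Stop()

	for ev := range w.ResultChan() {
		pod, ok := ev.Object.(*corev1.Pod)
		if !ok {
			continue
		}
		fmt.Printf("%s: phase=%s\n", ev.Type, pod.Status.Phase)
		if pod.Status.Phase == corev1.PodRunning {
			return
		}
	}
}
```
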
Jan 17 12:24:45.949513 systemd-resolved[1325]: Failed to determine the local hostname and LLMNR/mDNS names, ignoring: No such device or address Jan 17 12:24:45.971266 containerd[1459]: time="2025-01-17T12:24:45.971234980Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:nfs-server-provisioner-0,Uid:5cc522c6-4ef6-490a-ab69-73df4d039f93,Namespace:default,Attempt:0,} returns sandbox id \"4feb63784de17b210aba7447bc6e27b7b20ffcaeb9d589e7c1a5e37888f658af\"" Jan 17 12:24:45.972368 containerd[1459]: time="2025-01-17T12:24:45.972344990Z" level=info msg="PullImage \"registry.k8s.io/sig-storage/nfs-provisioner:v4.0.8\"" Jan 17 12:24:46.594625 kubelet[1774]: E0117 12:24:46.594496 1774 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Jan 17 12:24:47.162515 systemd-networkd[1375]: lxc7359b9b54a93: Gained IPv6LL Jan 17 12:24:47.595086 kubelet[1774]: E0117 12:24:47.595033 1774 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Jan 17 12:24:47.736540 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount3475727930.mount: Deactivated successfully. Jan 17 12:24:48.596095 kubelet[1774]: E0117 12:24:48.596048 1774 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Jan 17 12:24:49.597011 kubelet[1774]: E0117 12:24:49.596961 1774 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Jan 17 12:24:49.912249 containerd[1459]: time="2025-01-17T12:24:49.912094571Z" level=info msg="ImageCreate event name:\"registry.k8s.io/sig-storage/nfs-provisioner:v4.0.8\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jan 17 12:24:49.912849 containerd[1459]: time="2025-01-17T12:24:49.912788992Z" level=info msg="stop pulling image registry.k8s.io/sig-storage/nfs-provisioner:v4.0.8: active requests=0, bytes read=91039406" Jan 17 12:24:49.914707 containerd[1459]: time="2025-01-17T12:24:49.914658443Z" level=info msg="ImageCreate event name:\"sha256:fd0b16f70b66b72bcb2f91d556fa33eba02729c44ffc5f2c16130e7f9fbed3c4\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jan 17 12:24:49.917304 containerd[1459]: time="2025-01-17T12:24:49.917262994Z" level=info msg="ImageCreate event name:\"registry.k8s.io/sig-storage/nfs-provisioner@sha256:c825f3d5e28bde099bd7a3daace28772d412c9157ad47fa752a9ad0baafc118d\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jan 17 12:24:49.918294 containerd[1459]: time="2025-01-17T12:24:49.918247272Z" level=info msg="Pulled image \"registry.k8s.io/sig-storage/nfs-provisioner:v4.0.8\" with image id \"sha256:fd0b16f70b66b72bcb2f91d556fa33eba02729c44ffc5f2c16130e7f9fbed3c4\", repo tag \"registry.k8s.io/sig-storage/nfs-provisioner:v4.0.8\", repo digest \"registry.k8s.io/sig-storage/nfs-provisioner@sha256:c825f3d5e28bde099bd7a3daace28772d412c9157ad47fa752a9ad0baafc118d\", size \"91036984\" in 3.945875311s" Jan 17 12:24:49.918294 containerd[1459]: time="2025-01-17T12:24:49.918289843Z" level=info msg="PullImage \"registry.k8s.io/sig-storage/nfs-provisioner:v4.0.8\" returns image reference \"sha256:fd0b16f70b66b72bcb2f91d556fa33eba02729c44ffc5f2c16130e7f9fbed3c4\"" Jan 17 12:24:49.920520 containerd[1459]: time="2025-01-17T12:24:49.920478337Z" level=info msg="CreateContainer within sandbox \"4feb63784de17b210aba7447bc6e27b7b20ffcaeb9d589e7c1a5e37888f658af\" for container 
&ContainerMetadata{Name:nfs-server-provisioner,Attempt:0,}" Jan 17 12:24:49.933942 containerd[1459]: time="2025-01-17T12:24:49.933899879Z" level=info msg="CreateContainer within sandbox \"4feb63784de17b210aba7447bc6e27b7b20ffcaeb9d589e7c1a5e37888f658af\" for &ContainerMetadata{Name:nfs-server-provisioner,Attempt:0,} returns container id \"934bda9aae69c3588d6d6402bb0e8b9d375746c328c21a76bc7f6561fcc2d7ec\"" Jan 17 12:24:49.934257 containerd[1459]: time="2025-01-17T12:24:49.934219613Z" level=info msg="StartContainer for \"934bda9aae69c3588d6d6402bb0e8b9d375746c328c21a76bc7f6561fcc2d7ec\"" Jan 17 12:24:49.992186 systemd[1]: run-containerd-runc-k8s.io-934bda9aae69c3588d6d6402bb0e8b9d375746c328c21a76bc7f6561fcc2d7ec-runc.BBqZi0.mount: Deactivated successfully. Jan 17 12:24:50.001148 systemd[1]: Started cri-containerd-934bda9aae69c3588d6d6402bb0e8b9d375746c328c21a76bc7f6561fcc2d7ec.scope - libcontainer container 934bda9aae69c3588d6d6402bb0e8b9d375746c328c21a76bc7f6561fcc2d7ec. Jan 17 12:24:50.026674 containerd[1459]: time="2025-01-17T12:24:50.026629887Z" level=info msg="StartContainer for \"934bda9aae69c3588d6d6402bb0e8b9d375746c328c21a76bc7f6561fcc2d7ec\" returns successfully" Jan 17 12:24:50.273440 kubelet[1774]: I0117 12:24:50.273383 1774 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="default/nfs-server-provisioner-0" podStartSLOduration=1.326324975 podStartE2EDuration="5.273369876s" podCreationTimestamp="2025-01-17 12:24:45 +0000 UTC" firstStartedPulling="2025-01-17 12:24:45.972117319 +0000 UTC m=+35.715339651" lastFinishedPulling="2025-01-17 12:24:49.919162221 +0000 UTC m=+39.662384552" observedRunningTime="2025-01-17 12:24:50.272841949 +0000 UTC m=+40.016064280" watchObservedRunningTime="2025-01-17 12:24:50.273369876 +0000 UTC m=+40.016592207" Jan 17 12:24:50.570978 kubelet[1774]: E0117 12:24:50.570818 1774 file.go:104] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Jan 17 12:24:50.597528 kubelet[1774]: E0117 12:24:50.597478 1774 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Jan 17 12:24:51.598134 kubelet[1774]: E0117 12:24:51.598097 1774 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Jan 17 12:24:52.598493 kubelet[1774]: E0117 12:24:52.598446 1774 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Jan 17 12:24:53.598601 kubelet[1774]: E0117 12:24:53.598561 1774 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Jan 17 12:24:54.599702 kubelet[1774]: E0117 12:24:54.599646 1774 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Jan 17 12:24:55.600223 kubelet[1774]: E0117 12:24:55.600157 1774 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Jan 17 12:24:56.600465 kubelet[1774]: E0117 12:24:56.600407 1774 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Jan 17 12:24:57.601001 kubelet[1774]: E0117 12:24:57.600943 1774 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Jan 17 12:24:58.601274 kubelet[1774]: E0117 12:24:58.601212 1774 file_linux.go:61] "Unable to read config path" err="path does not 
exist, ignoring" path="/etc/kubernetes/manifests" Jan 17 12:24:59.259094 systemd[1]: Created slice kubepods-besteffort-pod02b62641_6865_46ce_acc7_822c0e3d00e2.slice - libcontainer container kubepods-besteffort-pod02b62641_6865_46ce_acc7_822c0e3d00e2.slice. Jan 17 12:24:59.444363 kubelet[1774]: I0117 12:24:59.444323 1774 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-89pnc\" (UniqueName: \"kubernetes.io/projected/02b62641-6865-46ce-acc7-822c0e3d00e2-kube-api-access-89pnc\") pod \"test-pod-1\" (UID: \"02b62641-6865-46ce-acc7-822c0e3d00e2\") " pod="default/test-pod-1" Jan 17 12:24:59.444363 kubelet[1774]: I0117 12:24:59.444365 1774 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"pvc-4c40b2a1-d30c-4b26-9753-b801dced10f8\" (UniqueName: \"kubernetes.io/nfs/02b62641-6865-46ce-acc7-822c0e3d00e2-pvc-4c40b2a1-d30c-4b26-9753-b801dced10f8\") pod \"test-pod-1\" (UID: \"02b62641-6865-46ce-acc7-822c0e3d00e2\") " pod="default/test-pod-1" Jan 17 12:24:59.573052 kernel: FS-Cache: Loaded Jan 17 12:24:59.601695 kubelet[1774]: E0117 12:24:59.601635 1774 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Jan 17 12:24:59.642330 kernel: RPC: Registered named UNIX socket transport module. Jan 17 12:24:59.642478 kernel: RPC: Registered udp transport module. Jan 17 12:24:59.642516 kernel: RPC: Registered tcp transport module. Jan 17 12:24:59.642539 kernel: RPC: Registered tcp-with-tls transport module. Jan 17 12:24:59.643728 kernel: RPC: Registered tcp NFSv4.1 backchannel transport module. Jan 17 12:24:59.903443 kernel: NFS: Registering the id_resolver key type Jan 17 12:24:59.903600 kernel: Key type id_resolver registered Jan 17 12:24:59.903620 kernel: Key type id_legacy registered Jan 17 12:24:59.928919 nfsidmap[3172]: nss_getpwnam: name 'root@nfs-server-provisioner.default.svc.cluster.local' does not map into domain 'localdomain' Jan 17 12:24:59.933389 nfsidmap[3175]: nss_name_to_gid: name 'root@nfs-server-provisioner.default.svc.cluster.local' does not map into domain 'localdomain' Jan 17 12:25:00.163109 containerd[1459]: time="2025-01-17T12:25:00.162998838Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:test-pod-1,Uid:02b62641-6865-46ce-acc7-822c0e3d00e2,Namespace:default,Attempt:0,}" Jan 17 12:25:00.190580 systemd-networkd[1375]: lxcecb21864eaab: Link UP Jan 17 12:25:00.205052 kernel: eth0: renamed from tmp173db Jan 17 12:25:00.210490 systemd-networkd[1375]: lxcecb21864eaab: Gained carrier Jan 17 12:25:00.410540 containerd[1459]: time="2025-01-17T12:25:00.410259537Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Jan 17 12:25:00.410540 containerd[1459]: time="2025-01-17T12:25:00.410338786Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Jan 17 12:25:00.410540 containerd[1459]: time="2025-01-17T12:25:00.410359375Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Jan 17 12:25:00.410540 containerd[1459]: time="2025-01-17T12:25:00.410457219Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." 
runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Jan 17 12:25:00.432177 systemd[1]: Started cri-containerd-173dbb9da8ef1cca3125b167b4ee0195295dd8031ca66f3a26299e8a563af093.scope - libcontainer container 173dbb9da8ef1cca3125b167b4ee0195295dd8031ca66f3a26299e8a563af093. Jan 17 12:25:00.442711 systemd-resolved[1325]: Failed to determine the local hostname and LLMNR/mDNS names, ignoring: No such device or address Jan 17 12:25:00.465357 containerd[1459]: time="2025-01-17T12:25:00.465312167Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:test-pod-1,Uid:02b62641-6865-46ce-acc7-822c0e3d00e2,Namespace:default,Attempt:0,} returns sandbox id \"173dbb9da8ef1cca3125b167b4ee0195295dd8031ca66f3a26299e8a563af093\"" Jan 17 12:25:00.466870 containerd[1459]: time="2025-01-17T12:25:00.466784518Z" level=info msg="PullImage \"ghcr.io/flatcar/nginx:latest\"" Jan 17 12:25:00.603125 kubelet[1774]: E0117 12:25:00.603084 1774 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Jan 17 12:25:00.930469 containerd[1459]: time="2025-01-17T12:25:00.930420661Z" level=info msg="ImageUpdate event name:\"ghcr.io/flatcar/nginx:latest\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jan 17 12:25:00.931195 containerd[1459]: time="2025-01-17T12:25:00.931111420Z" level=info msg="stop pulling image ghcr.io/flatcar/nginx:latest: active requests=0, bytes read=61" Jan 17 12:25:00.933616 containerd[1459]: time="2025-01-17T12:25:00.933572152Z" level=info msg="Pulled image \"ghcr.io/flatcar/nginx:latest\" with image id \"sha256:29ef6eaebfc53650f3a4609edbf9d35e866f56b2c5e01d32d93439031b300f0b\", repo tag \"ghcr.io/flatcar/nginx:latest\", repo digest \"ghcr.io/flatcar/nginx@sha256:eca1d1ff18c7af45f86b7e0b572090f563a676ddca3da2ecff678390366335ad\", size \"71035896\" in 466.69507ms" Jan 17 12:25:00.933616 containerd[1459]: time="2025-01-17T12:25:00.933602459Z" level=info msg="PullImage \"ghcr.io/flatcar/nginx:latest\" returns image reference \"sha256:29ef6eaebfc53650f3a4609edbf9d35e866f56b2c5e01d32d93439031b300f0b\"" Jan 17 12:25:00.935381 containerd[1459]: time="2025-01-17T12:25:00.935339028Z" level=info msg="CreateContainer within sandbox \"173dbb9da8ef1cca3125b167b4ee0195295dd8031ca66f3a26299e8a563af093\" for container &ContainerMetadata{Name:test,Attempt:0,}" Jan 17 12:25:00.952805 containerd[1459]: time="2025-01-17T12:25:00.952750035Z" level=info msg="CreateContainer within sandbox \"173dbb9da8ef1cca3125b167b4ee0195295dd8031ca66f3a26299e8a563af093\" for &ContainerMetadata{Name:test,Attempt:0,} returns container id \"2baa7b3cb0bee608301bc7c1b31677ec41a8bfefbeedeb2d88bcb0f5c1ba869b\"" Jan 17 12:25:00.953552 containerd[1459]: time="2025-01-17T12:25:00.953514794Z" level=info msg="StartContainer for \"2baa7b3cb0bee608301bc7c1b31677ec41a8bfefbeedeb2d88bcb0f5c1ba869b\"" Jan 17 12:25:00.987156 systemd[1]: Started cri-containerd-2baa7b3cb0bee608301bc7c1b31677ec41a8bfefbeedeb2d88bcb0f5c1ba869b.scope - libcontainer container 2baa7b3cb0bee608301bc7c1b31677ec41a8bfefbeedeb2d88bcb0f5c1ba869b. Jan 17 12:25:01.013158 containerd[1459]: time="2025-01-17T12:25:01.013065836Z" level=info msg="StartContainer for \"2baa7b3cb0bee608301bc7c1b31677ec41a8bfefbeedeb2d88bcb0f5c1ba869b\" returns successfully" Jan 17 12:25:01.555413 systemd[1]: run-containerd-runc-k8s.io-2baa7b3cb0bee608301bc7c1b31677ec41a8bfefbeedeb2d88bcb0f5c1ba869b-runc.9THE3Y.mount: Deactivated successfully. 
Jan 17 12:25:01.603899 kubelet[1774]: E0117 12:25:01.603871 1774 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Jan 17 12:25:01.690282 systemd-networkd[1375]: lxcecb21864eaab: Gained IPv6LL Jan 17 12:25:02.604839 kubelet[1774]: E0117 12:25:02.604771 1774 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Jan 17 12:25:03.605581 kubelet[1774]: E0117 12:25:03.605507 1774 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Jan 17 12:25:04.606441 kubelet[1774]: E0117 12:25:04.606371 1774 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Jan 17 12:25:05.606840 kubelet[1774]: E0117 12:25:05.606762 1774 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Jan 17 12:25:06.607473 kubelet[1774]: E0117 12:25:06.607404 1774 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Jan 17 12:25:07.608208 kubelet[1774]: E0117 12:25:07.608121 1774 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Jan 17 12:25:07.751909 kubelet[1774]: I0117 12:25:07.751827 1774 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="default/test-pod-1" podStartSLOduration=22.284116424 podStartE2EDuration="22.751806176s" podCreationTimestamp="2025-01-17 12:24:45 +0000 UTC" firstStartedPulling="2025-01-17 12:25:00.466508689 +0000 UTC m=+50.209731020" lastFinishedPulling="2025-01-17 12:25:00.934198441 +0000 UTC m=+50.677420772" observedRunningTime="2025-01-17 12:25:01.289426016 +0000 UTC m=+51.032648347" watchObservedRunningTime="2025-01-17 12:25:07.751806176 +0000 UTC m=+57.495028507" Jan 17 12:25:07.784116 containerd[1459]: time="2025-01-17T12:25:07.784044522Z" level=error msg="failed to reload cni configuration after receiving fs change event(REMOVE \"/etc/cni/net.d/05-cilium.conf\")" error="cni config load failed: no network config found in /etc/cni/net.d: cni plugin not initialized: failed to load cni config" Jan 17 12:25:07.791318 containerd[1459]: time="2025-01-17T12:25:07.791283418Z" level=info msg="StopContainer for \"1413578d79d40623c434dcc175fa89eb80fb953f21804c06a1a8388d4f72a2a8\" with timeout 2 (s)" Jan 17 12:25:07.791545 containerd[1459]: time="2025-01-17T12:25:07.791518099Z" level=info msg="Stop container \"1413578d79d40623c434dcc175fa89eb80fb953f21804c06a1a8388d4f72a2a8\" with signal terminated" Jan 17 12:25:07.799364 systemd-networkd[1375]: lxc_health: Link DOWN Jan 17 12:25:07.799379 systemd-networkd[1375]: lxc_health: Lost carrier Jan 17 12:25:07.834646 systemd[1]: cri-containerd-1413578d79d40623c434dcc175fa89eb80fb953f21804c06a1a8388d4f72a2a8.scope: Deactivated successfully. Jan 17 12:25:07.835081 systemd[1]: cri-containerd-1413578d79d40623c434dcc175fa89eb80fb953f21804c06a1a8388d4f72a2a8.scope: Consumed 6.589s CPU time. Jan 17 12:25:07.858582 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-1413578d79d40623c434dcc175fa89eb80fb953f21804c06a1a8388d4f72a2a8-rootfs.mount: Deactivated successfully. 
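Aside on the two "Observed pod startup duration" entries above: they are internally consistent if podStartE2EDuration is read as the gap between podCreationTimestamp and the observed running time, and podStartSLOduration as that same gap minus the image-pull window (firstStartedPulling to lastFinishedPulling). A minimal sketch, recomputing the nfs-server-provisioner-0 numbers from the timestamps printed in the log (timestamps truncated to microseconds, so the last digits differ slightly):

    from datetime import datetime

    fmt = "%Y-%m-%d %H:%M:%S.%f"
    created   = datetime.strptime("2025-01-17 12:24:45.000000", fmt)  # podCreationTimestamp
    pull_from = datetime.strptime("2025-01-17 12:24:45.972117", fmt)  # firstStartedPulling
    pull_to   = datetime.strptime("2025-01-17 12:24:49.919162", fmt)  # lastFinishedPulling
    running   = datetime.strptime("2025-01-17 12:24:50.273369", fmt)  # observedRunningTime

    e2e = (running - created).total_seconds()           # ~5.273s, matches podStartE2EDuration
    slo = e2e - (pull_to - pull_from).total_seconds()    # ~1.326s, matches podStartSLOduration
    print(e2e, slo)

The same relation holds for the default/test-pod-1 entry above: 22.752s end to end minus a ~0.468s pull window gives the reported 22.284s.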
Jan 17 12:25:07.871179 containerd[1459]: time="2025-01-17T12:25:07.871098428Z" level=info msg="shim disconnected" id=1413578d79d40623c434dcc175fa89eb80fb953f21804c06a1a8388d4f72a2a8 namespace=k8s.io Jan 17 12:25:07.871179 containerd[1459]: time="2025-01-17T12:25:07.871160875Z" level=warning msg="cleaning up after shim disconnected" id=1413578d79d40623c434dcc175fa89eb80fb953f21804c06a1a8388d4f72a2a8 namespace=k8s.io Jan 17 12:25:07.871179 containerd[1459]: time="2025-01-17T12:25:07.871169942Z" level=info msg="cleaning up dead shim" namespace=k8s.io Jan 17 12:25:07.888487 containerd[1459]: time="2025-01-17T12:25:07.888431387Z" level=info msg="StopContainer for \"1413578d79d40623c434dcc175fa89eb80fb953f21804c06a1a8388d4f72a2a8\" returns successfully" Jan 17 12:25:07.889196 containerd[1459]: time="2025-01-17T12:25:07.889170527Z" level=info msg="StopPodSandbox for \"3f5f363ede96ce713213ee302b158c570a18c7752266dbcab387781133960950\"" Jan 17 12:25:07.889264 containerd[1459]: time="2025-01-17T12:25:07.889207867Z" level=info msg="Container to stop \"92fbbb175370b44487f8c192bb8c0ee128ba0907dc90930670f1559398e37ea0\" must be in running or unknown state, current state \"CONTAINER_EXITED\"" Jan 17 12:25:07.889264 containerd[1459]: time="2025-01-17T12:25:07.889222494Z" level=info msg="Container to stop \"1413578d79d40623c434dcc175fa89eb80fb953f21804c06a1a8388d4f72a2a8\" must be in running or unknown state, current state \"CONTAINER_EXITED\"" Jan 17 12:25:07.889264 containerd[1459]: time="2025-01-17T12:25:07.889234446Z" level=info msg="Container to stop \"b32a6cb6854f417133f2f8febceb7e48102c25f956e3c46531978fadebf86477\" must be in running or unknown state, current state \"CONTAINER_EXITED\"" Jan 17 12:25:07.889264 containerd[1459]: time="2025-01-17T12:25:07.889245898Z" level=info msg="Container to stop \"bc9c35558270be193a276e0d676955dbf41cdd47c5119a4393e0c20fb6b5f5d3\" must be in running or unknown state, current state \"CONTAINER_EXITED\"" Jan 17 12:25:07.889264 containerd[1459]: time="2025-01-17T12:25:07.889257620Z" level=info msg="Container to stop \"df67182abd63df77448e06972fa8b7ea791f793533d780259b1317e7a8bbc488\" must be in running or unknown state, current state \"CONTAINER_EXITED\"" Jan 17 12:25:07.891392 systemd[1]: run-containerd-io.containerd.grpc.v1.cri-sandboxes-3f5f363ede96ce713213ee302b158c570a18c7752266dbcab387781133960950-shm.mount: Deactivated successfully. Jan 17 12:25:07.895837 systemd[1]: cri-containerd-3f5f363ede96ce713213ee302b158c570a18c7752266dbcab387781133960950.scope: Deactivated successfully. Jan 17 12:25:07.915277 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-3f5f363ede96ce713213ee302b158c570a18c7752266dbcab387781133960950-rootfs.mount: Deactivated successfully. 
Jan 17 12:25:07.919931 containerd[1459]: time="2025-01-17T12:25:07.919831386Z" level=info msg="shim disconnected" id=3f5f363ede96ce713213ee302b158c570a18c7752266dbcab387781133960950 namespace=k8s.io Jan 17 12:25:07.919931 containerd[1459]: time="2025-01-17T12:25:07.919905054Z" level=warning msg="cleaning up after shim disconnected" id=3f5f363ede96ce713213ee302b158c570a18c7752266dbcab387781133960950 namespace=k8s.io Jan 17 12:25:07.919931 containerd[1459]: time="2025-01-17T12:25:07.919915905Z" level=info msg="cleaning up dead shim" namespace=k8s.io Jan 17 12:25:07.935625 containerd[1459]: time="2025-01-17T12:25:07.935568326Z" level=info msg="TearDown network for sandbox \"3f5f363ede96ce713213ee302b158c570a18c7752266dbcab387781133960950\" successfully" Jan 17 12:25:07.935625 containerd[1459]: time="2025-01-17T12:25:07.935609753Z" level=info msg="StopPodSandbox for \"3f5f363ede96ce713213ee302b158c570a18c7752266dbcab387781133960950\" returns successfully" Jan 17 12:25:08.094577 kubelet[1774]: I0117 12:25:08.094532 1774 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"cilium-run\" (UniqueName: \"kubernetes.io/host-path/bd66da01-525c-4756-a809-7ce5d81058f3-cilium-run\") pod \"bd66da01-525c-4756-a809-7ce5d81058f3\" (UID: \"bd66da01-525c-4756-a809-7ce5d81058f3\") " Jan 17 12:25:08.094577 kubelet[1774]: I0117 12:25:08.094576 1774 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"cilium-cgroup\" (UniqueName: \"kubernetes.io/host-path/bd66da01-525c-4756-a809-7ce5d81058f3-cilium-cgroup\") pod \"bd66da01-525c-4756-a809-7ce5d81058f3\" (UID: \"bd66da01-525c-4756-a809-7ce5d81058f3\") " Jan 17 12:25:08.094767 kubelet[1774]: I0117 12:25:08.094606 1774 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"cilium-config-path\" (UniqueName: \"kubernetes.io/configmap/bd66da01-525c-4756-a809-7ce5d81058f3-cilium-config-path\") pod \"bd66da01-525c-4756-a809-7ce5d81058f3\" (UID: \"bd66da01-525c-4756-a809-7ce5d81058f3\") " Jan 17 12:25:08.094767 kubelet[1774]: I0117 12:25:08.094628 1774 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"clustermesh-secrets\" (UniqueName: \"kubernetes.io/secret/bd66da01-525c-4756-a809-7ce5d81058f3-clustermesh-secrets\") pod \"bd66da01-525c-4756-a809-7ce5d81058f3\" (UID: \"bd66da01-525c-4756-a809-7ce5d81058f3\") " Jan 17 12:25:08.094767 kubelet[1774]: I0117 12:25:08.094624 1774 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/bd66da01-525c-4756-a809-7ce5d81058f3-cilium-run" (OuterVolumeSpecName: "cilium-run") pod "bd66da01-525c-4756-a809-7ce5d81058f3" (UID: "bd66da01-525c-4756-a809-7ce5d81058f3"). InnerVolumeSpecName "cilium-run". PluginName "kubernetes.io/host-path", VolumeGidValue "" Jan 17 12:25:08.094767 kubelet[1774]: I0117 12:25:08.094644 1774 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"etc-cni-netd\" (UniqueName: \"kubernetes.io/host-path/bd66da01-525c-4756-a809-7ce5d81058f3-etc-cni-netd\") pod \"bd66da01-525c-4756-a809-7ce5d81058f3\" (UID: \"bd66da01-525c-4756-a809-7ce5d81058f3\") " Jan 17 12:25:08.094767 kubelet[1774]: I0117 12:25:08.094694 1774 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/bd66da01-525c-4756-a809-7ce5d81058f3-cilium-cgroup" (OuterVolumeSpecName: "cilium-cgroup") pod "bd66da01-525c-4756-a809-7ce5d81058f3" (UID: "bd66da01-525c-4756-a809-7ce5d81058f3"). InnerVolumeSpecName "cilium-cgroup". 
PluginName "kubernetes.io/host-path", VolumeGidValue "" Jan 17 12:25:08.094900 kubelet[1774]: I0117 12:25:08.094715 1774 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"hostproc\" (UniqueName: \"kubernetes.io/host-path/bd66da01-525c-4756-a809-7ce5d81058f3-hostproc\") pod \"bd66da01-525c-4756-a809-7ce5d81058f3\" (UID: \"bd66da01-525c-4756-a809-7ce5d81058f3\") " Jan 17 12:25:08.094900 kubelet[1774]: I0117 12:25:08.094737 1774 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"xtables-lock\" (UniqueName: \"kubernetes.io/host-path/bd66da01-525c-4756-a809-7ce5d81058f3-xtables-lock\") pod \"bd66da01-525c-4756-a809-7ce5d81058f3\" (UID: \"bd66da01-525c-4756-a809-7ce5d81058f3\") " Jan 17 12:25:08.094900 kubelet[1774]: I0117 12:25:08.094764 1774 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-w9rbp\" (UniqueName: \"kubernetes.io/projected/bd66da01-525c-4756-a809-7ce5d81058f3-kube-api-access-w9rbp\") pod \"bd66da01-525c-4756-a809-7ce5d81058f3\" (UID: \"bd66da01-525c-4756-a809-7ce5d81058f3\") " Jan 17 12:25:08.094900 kubelet[1774]: I0117 12:25:08.094786 1774 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"host-proc-sys-net\" (UniqueName: \"kubernetes.io/host-path/bd66da01-525c-4756-a809-7ce5d81058f3-host-proc-sys-net\") pod \"bd66da01-525c-4756-a809-7ce5d81058f3\" (UID: \"bd66da01-525c-4756-a809-7ce5d81058f3\") " Jan 17 12:25:08.094900 kubelet[1774]: I0117 12:25:08.094803 1774 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"bpf-maps\" (UniqueName: \"kubernetes.io/host-path/bd66da01-525c-4756-a809-7ce5d81058f3-bpf-maps\") pod \"bd66da01-525c-4756-a809-7ce5d81058f3\" (UID: \"bd66da01-525c-4756-a809-7ce5d81058f3\") " Jan 17 12:25:08.094900 kubelet[1774]: I0117 12:25:08.094821 1774 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/bd66da01-525c-4756-a809-7ce5d81058f3-lib-modules\") pod \"bd66da01-525c-4756-a809-7ce5d81058f3\" (UID: \"bd66da01-525c-4756-a809-7ce5d81058f3\") " Jan 17 12:25:08.095066 kubelet[1774]: I0117 12:25:08.094838 1774 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"hubble-tls\" (UniqueName: \"kubernetes.io/projected/bd66da01-525c-4756-a809-7ce5d81058f3-hubble-tls\") pod \"bd66da01-525c-4756-a809-7ce5d81058f3\" (UID: \"bd66da01-525c-4756-a809-7ce5d81058f3\") " Jan 17 12:25:08.095066 kubelet[1774]: I0117 12:25:08.094856 1774 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"cni-path\" (UniqueName: \"kubernetes.io/host-path/bd66da01-525c-4756-a809-7ce5d81058f3-cni-path\") pod \"bd66da01-525c-4756-a809-7ce5d81058f3\" (UID: \"bd66da01-525c-4756-a809-7ce5d81058f3\") " Jan 17 12:25:08.095066 kubelet[1774]: I0117 12:25:08.094872 1774 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"host-proc-sys-kernel\" (UniqueName: \"kubernetes.io/host-path/bd66da01-525c-4756-a809-7ce5d81058f3-host-proc-sys-kernel\") pod \"bd66da01-525c-4756-a809-7ce5d81058f3\" (UID: \"bd66da01-525c-4756-a809-7ce5d81058f3\") " Jan 17 12:25:08.095066 kubelet[1774]: I0117 12:25:08.094919 1774 reconciler_common.go:288] "Volume detached for volume \"cilium-run\" (UniqueName: \"kubernetes.io/host-path/bd66da01-525c-4756-a809-7ce5d81058f3-cilium-run\") on node \"10.0.0.161\" DevicePath \"\"" Jan 17 12:25:08.095066 kubelet[1774]: I0117 12:25:08.094928 1774 reconciler_common.go:288] 
"Volume detached for volume \"cilium-cgroup\" (UniqueName: \"kubernetes.io/host-path/bd66da01-525c-4756-a809-7ce5d81058f3-cilium-cgroup\") on node \"10.0.0.161\" DevicePath \"\"" Jan 17 12:25:08.095066 kubelet[1774]: I0117 12:25:08.094950 1774 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/bd66da01-525c-4756-a809-7ce5d81058f3-host-proc-sys-kernel" (OuterVolumeSpecName: "host-proc-sys-kernel") pod "bd66da01-525c-4756-a809-7ce5d81058f3" (UID: "bd66da01-525c-4756-a809-7ce5d81058f3"). InnerVolumeSpecName "host-proc-sys-kernel". PluginName "kubernetes.io/host-path", VolumeGidValue "" Jan 17 12:25:08.095208 kubelet[1774]: I0117 12:25:08.094970 1774 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/bd66da01-525c-4756-a809-7ce5d81058f3-hostproc" (OuterVolumeSpecName: "hostproc") pod "bd66da01-525c-4756-a809-7ce5d81058f3" (UID: "bd66da01-525c-4756-a809-7ce5d81058f3"). InnerVolumeSpecName "hostproc". PluginName "kubernetes.io/host-path", VolumeGidValue "" Jan 17 12:25:08.095208 kubelet[1774]: I0117 12:25:08.094985 1774 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/bd66da01-525c-4756-a809-7ce5d81058f3-xtables-lock" (OuterVolumeSpecName: "xtables-lock") pod "bd66da01-525c-4756-a809-7ce5d81058f3" (UID: "bd66da01-525c-4756-a809-7ce5d81058f3"). InnerVolumeSpecName "xtables-lock". PluginName "kubernetes.io/host-path", VolumeGidValue "" Jan 17 12:25:08.097175 kubelet[1774]: I0117 12:25:08.095296 1774 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/bd66da01-525c-4756-a809-7ce5d81058f3-etc-cni-netd" (OuterVolumeSpecName: "etc-cni-netd") pod "bd66da01-525c-4756-a809-7ce5d81058f3" (UID: "bd66da01-525c-4756-a809-7ce5d81058f3"). InnerVolumeSpecName "etc-cni-netd". PluginName "kubernetes.io/host-path", VolumeGidValue "" Jan 17 12:25:08.097175 kubelet[1774]: I0117 12:25:08.095346 1774 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/bd66da01-525c-4756-a809-7ce5d81058f3-lib-modules" (OuterVolumeSpecName: "lib-modules") pod "bd66da01-525c-4756-a809-7ce5d81058f3" (UID: "bd66da01-525c-4756-a809-7ce5d81058f3"). InnerVolumeSpecName "lib-modules". PluginName "kubernetes.io/host-path", VolumeGidValue "" Jan 17 12:25:08.097175 kubelet[1774]: I0117 12:25:08.095362 1774 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/bd66da01-525c-4756-a809-7ce5d81058f3-host-proc-sys-net" (OuterVolumeSpecName: "host-proc-sys-net") pod "bd66da01-525c-4756-a809-7ce5d81058f3" (UID: "bd66da01-525c-4756-a809-7ce5d81058f3"). InnerVolumeSpecName "host-proc-sys-net". PluginName "kubernetes.io/host-path", VolumeGidValue "" Jan 17 12:25:08.097175 kubelet[1774]: I0117 12:25:08.095377 1774 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/bd66da01-525c-4756-a809-7ce5d81058f3-bpf-maps" (OuterVolumeSpecName: "bpf-maps") pod "bd66da01-525c-4756-a809-7ce5d81058f3" (UID: "bd66da01-525c-4756-a809-7ce5d81058f3"). InnerVolumeSpecName "bpf-maps". 
PluginName "kubernetes.io/host-path", VolumeGidValue "" Jan 17 12:25:08.097912 kubelet[1774]: I0117 12:25:08.097882 1774 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/bd66da01-525c-4756-a809-7ce5d81058f3-clustermesh-secrets" (OuterVolumeSpecName: "clustermesh-secrets") pod "bd66da01-525c-4756-a809-7ce5d81058f3" (UID: "bd66da01-525c-4756-a809-7ce5d81058f3"). InnerVolumeSpecName "clustermesh-secrets". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 17 12:25:08.098039 kubelet[1774]: I0117 12:25:08.097993 1774 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/bd66da01-525c-4756-a809-7ce5d81058f3-cni-path" (OuterVolumeSpecName: "cni-path") pod "bd66da01-525c-4756-a809-7ce5d81058f3" (UID: "bd66da01-525c-4756-a809-7ce5d81058f3"). InnerVolumeSpecName "cni-path". PluginName "kubernetes.io/host-path", VolumeGidValue "" Jan 17 12:25:08.098122 kubelet[1774]: I0117 12:25:08.098088 1774 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/bd66da01-525c-4756-a809-7ce5d81058f3-cilium-config-path" (OuterVolumeSpecName: "cilium-config-path") pod "bd66da01-525c-4756-a809-7ce5d81058f3" (UID: "bd66da01-525c-4756-a809-7ce5d81058f3"). InnerVolumeSpecName "cilium-config-path". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 17 12:25:08.098850 systemd[1]: var-lib-kubelet-pods-bd66da01\x2d525c\x2d4756\x2da809\x2d7ce5d81058f3-volumes-kubernetes.io\x7esecret-clustermesh\x2dsecrets.mount: Deactivated successfully. Jan 17 12:25:08.099150 kubelet[1774]: I0117 12:25:08.099124 1774 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/bd66da01-525c-4756-a809-7ce5d81058f3-kube-api-access-w9rbp" (OuterVolumeSpecName: "kube-api-access-w9rbp") pod "bd66da01-525c-4756-a809-7ce5d81058f3" (UID: "bd66da01-525c-4756-a809-7ce5d81058f3"). InnerVolumeSpecName "kube-api-access-w9rbp". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 17 12:25:08.099232 kubelet[1774]: I0117 12:25:08.099215 1774 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/bd66da01-525c-4756-a809-7ce5d81058f3-hubble-tls" (OuterVolumeSpecName: "hubble-tls") pod "bd66da01-525c-4756-a809-7ce5d81058f3" (UID: "bd66da01-525c-4756-a809-7ce5d81058f3"). InnerVolumeSpecName "hubble-tls". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 17 12:25:08.195769 kubelet[1774]: I0117 12:25:08.195672 1774 reconciler_common.go:288] "Volume detached for volume \"cilium-config-path\" (UniqueName: \"kubernetes.io/configmap/bd66da01-525c-4756-a809-7ce5d81058f3-cilium-config-path\") on node \"10.0.0.161\" DevicePath \"\"" Jan 17 12:25:08.195769 kubelet[1774]: I0117 12:25:08.195704 1774 reconciler_common.go:288] "Volume detached for volume \"host-proc-sys-net\" (UniqueName: \"kubernetes.io/host-path/bd66da01-525c-4756-a809-7ce5d81058f3-host-proc-sys-net\") on node \"10.0.0.161\" DevicePath \"\"" Jan 17 12:25:08.195769 kubelet[1774]: I0117 12:25:08.195713 1774 reconciler_common.go:288] "Volume detached for volume \"clustermesh-secrets\" (UniqueName: \"kubernetes.io/secret/bd66da01-525c-4756-a809-7ce5d81058f3-clustermesh-secrets\") on node \"10.0.0.161\" DevicePath \"\"" Jan 17 12:25:08.195769 kubelet[1774]: I0117 12:25:08.195722 1774 reconciler_common.go:288] "Volume detached for volume \"etc-cni-netd\" (UniqueName: \"kubernetes.io/host-path/bd66da01-525c-4756-a809-7ce5d81058f3-etc-cni-netd\") on node \"10.0.0.161\" DevicePath \"\"" Jan 17 12:25:08.195769 kubelet[1774]: I0117 12:25:08.195730 1774 reconciler_common.go:288] "Volume detached for volume \"hostproc\" (UniqueName: \"kubernetes.io/host-path/bd66da01-525c-4756-a809-7ce5d81058f3-hostproc\") on node \"10.0.0.161\" DevicePath \"\"" Jan 17 12:25:08.195769 kubelet[1774]: I0117 12:25:08.195738 1774 reconciler_common.go:288] "Volume detached for volume \"xtables-lock\" (UniqueName: \"kubernetes.io/host-path/bd66da01-525c-4756-a809-7ce5d81058f3-xtables-lock\") on node \"10.0.0.161\" DevicePath \"\"" Jan 17 12:25:08.195769 kubelet[1774]: I0117 12:25:08.195746 1774 reconciler_common.go:288] "Volume detached for volume \"kube-api-access-w9rbp\" (UniqueName: \"kubernetes.io/projected/bd66da01-525c-4756-a809-7ce5d81058f3-kube-api-access-w9rbp\") on node \"10.0.0.161\" DevicePath \"\"" Jan 17 12:25:08.195769 kubelet[1774]: I0117 12:25:08.195755 1774 reconciler_common.go:288] "Volume detached for volume \"bpf-maps\" (UniqueName: \"kubernetes.io/host-path/bd66da01-525c-4756-a809-7ce5d81058f3-bpf-maps\") on node \"10.0.0.161\" DevicePath \"\"" Jan 17 12:25:08.196059 kubelet[1774]: I0117 12:25:08.195762 1774 reconciler_common.go:288] "Volume detached for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/bd66da01-525c-4756-a809-7ce5d81058f3-lib-modules\") on node \"10.0.0.161\" DevicePath \"\"" Jan 17 12:25:08.196059 kubelet[1774]: I0117 12:25:08.195770 1774 reconciler_common.go:288] "Volume detached for volume \"hubble-tls\" (UniqueName: \"kubernetes.io/projected/bd66da01-525c-4756-a809-7ce5d81058f3-hubble-tls\") on node \"10.0.0.161\" DevicePath \"\"" Jan 17 12:25:08.196059 kubelet[1774]: I0117 12:25:08.195778 1774 reconciler_common.go:288] "Volume detached for volume \"cni-path\" (UniqueName: \"kubernetes.io/host-path/bd66da01-525c-4756-a809-7ce5d81058f3-cni-path\") on node \"10.0.0.161\" DevicePath \"\"" Jan 17 12:25:08.196059 kubelet[1774]: I0117 12:25:08.195785 1774 reconciler_common.go:288] "Volume detached for volume \"host-proc-sys-kernel\" (UniqueName: \"kubernetes.io/host-path/bd66da01-525c-4756-a809-7ce5d81058f3-host-proc-sys-kernel\") on node \"10.0.0.161\" DevicePath \"\"" Jan 17 12:25:08.294316 kubelet[1774]: I0117 12:25:08.294275 1774 scope.go:117] "RemoveContainer" containerID="1413578d79d40623c434dcc175fa89eb80fb953f21804c06a1a8388d4f72a2a8" Jan 17 12:25:08.295411 containerd[1459]: 
time="2025-01-17T12:25:08.295369549Z" level=info msg="RemoveContainer for \"1413578d79d40623c434dcc175fa89eb80fb953f21804c06a1a8388d4f72a2a8\"" Jan 17 12:25:08.299681 systemd[1]: Removed slice kubepods-burstable-podbd66da01_525c_4756_a809_7ce5d81058f3.slice - libcontainer container kubepods-burstable-podbd66da01_525c_4756_a809_7ce5d81058f3.slice. Jan 17 12:25:08.299765 systemd[1]: kubepods-burstable-podbd66da01_525c_4756_a809_7ce5d81058f3.slice: Consumed 6.680s CPU time. Jan 17 12:25:08.357899 containerd[1459]: time="2025-01-17T12:25:08.357830454Z" level=info msg="RemoveContainer for \"1413578d79d40623c434dcc175fa89eb80fb953f21804c06a1a8388d4f72a2a8\" returns successfully" Jan 17 12:25:08.358165 kubelet[1774]: I0117 12:25:08.358124 1774 scope.go:117] "RemoveContainer" containerID="92fbbb175370b44487f8c192bb8c0ee128ba0907dc90930670f1559398e37ea0" Jan 17 12:25:08.359277 containerd[1459]: time="2025-01-17T12:25:08.359238781Z" level=info msg="RemoveContainer for \"92fbbb175370b44487f8c192bb8c0ee128ba0907dc90930670f1559398e37ea0\"" Jan 17 12:25:08.380484 containerd[1459]: time="2025-01-17T12:25:08.380417713Z" level=info msg="RemoveContainer for \"92fbbb175370b44487f8c192bb8c0ee128ba0907dc90930670f1559398e37ea0\" returns successfully" Jan 17 12:25:08.380752 kubelet[1774]: I0117 12:25:08.380716 1774 scope.go:117] "RemoveContainer" containerID="bc9c35558270be193a276e0d676955dbf41cdd47c5119a4393e0c20fb6b5f5d3" Jan 17 12:25:08.382033 containerd[1459]: time="2025-01-17T12:25:08.381980600Z" level=info msg="RemoveContainer for \"bc9c35558270be193a276e0d676955dbf41cdd47c5119a4393e0c20fb6b5f5d3\"" Jan 17 12:25:08.385591 containerd[1459]: time="2025-01-17T12:25:08.385544597Z" level=info msg="RemoveContainer for \"bc9c35558270be193a276e0d676955dbf41cdd47c5119a4393e0c20fb6b5f5d3\" returns successfully" Jan 17 12:25:08.385810 kubelet[1774]: I0117 12:25:08.385771 1774 scope.go:117] "RemoveContainer" containerID="b32a6cb6854f417133f2f8febceb7e48102c25f956e3c46531978fadebf86477" Jan 17 12:25:08.386858 containerd[1459]: time="2025-01-17T12:25:08.386822288Z" level=info msg="RemoveContainer for \"b32a6cb6854f417133f2f8febceb7e48102c25f956e3c46531978fadebf86477\"" Jan 17 12:25:08.390549 containerd[1459]: time="2025-01-17T12:25:08.390512644Z" level=info msg="RemoveContainer for \"b32a6cb6854f417133f2f8febceb7e48102c25f956e3c46531978fadebf86477\" returns successfully" Jan 17 12:25:08.390721 kubelet[1774]: I0117 12:25:08.390694 1774 scope.go:117] "RemoveContainer" containerID="df67182abd63df77448e06972fa8b7ea791f793533d780259b1317e7a8bbc488" Jan 17 12:25:08.391986 containerd[1459]: time="2025-01-17T12:25:08.391958431Z" level=info msg="RemoveContainer for \"df67182abd63df77448e06972fa8b7ea791f793533d780259b1317e7a8bbc488\"" Jan 17 12:25:08.395359 containerd[1459]: time="2025-01-17T12:25:08.395323215Z" level=info msg="RemoveContainer for \"df67182abd63df77448e06972fa8b7ea791f793533d780259b1317e7a8bbc488\" returns successfully" Jan 17 12:25:08.395570 kubelet[1774]: I0117 12:25:08.395541 1774 scope.go:117] "RemoveContainer" containerID="1413578d79d40623c434dcc175fa89eb80fb953f21804c06a1a8388d4f72a2a8" Jan 17 12:25:08.395752 containerd[1459]: time="2025-01-17T12:25:08.395722215Z" level=error msg="ContainerStatus for \"1413578d79d40623c434dcc175fa89eb80fb953f21804c06a1a8388d4f72a2a8\" failed" error="rpc error: code = NotFound desc = an error occurred when try to find container \"1413578d79d40623c434dcc175fa89eb80fb953f21804c06a1a8388d4f72a2a8\": not found" Jan 17 12:25:08.395851 kubelet[1774]: E0117 12:25:08.395829 1774 
log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = an error occurred when try to find container \"1413578d79d40623c434dcc175fa89eb80fb953f21804c06a1a8388d4f72a2a8\": not found" containerID="1413578d79d40623c434dcc175fa89eb80fb953f21804c06a1a8388d4f72a2a8" Jan 17 12:25:08.396032 kubelet[1774]: I0117 12:25:08.395856 1774 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"containerd","ID":"1413578d79d40623c434dcc175fa89eb80fb953f21804c06a1a8388d4f72a2a8"} err="failed to get container status \"1413578d79d40623c434dcc175fa89eb80fb953f21804c06a1a8388d4f72a2a8\": rpc error: code = NotFound desc = an error occurred when try to find container \"1413578d79d40623c434dcc175fa89eb80fb953f21804c06a1a8388d4f72a2a8\": not found" Jan 17 12:25:08.396032 kubelet[1774]: I0117 12:25:08.396015 1774 scope.go:117] "RemoveContainer" containerID="92fbbb175370b44487f8c192bb8c0ee128ba0907dc90930670f1559398e37ea0" Jan 17 12:25:08.396180 containerd[1459]: time="2025-01-17T12:25:08.396155660Z" level=error msg="ContainerStatus for \"92fbbb175370b44487f8c192bb8c0ee128ba0907dc90930670f1559398e37ea0\" failed" error="rpc error: code = NotFound desc = an error occurred when try to find container \"92fbbb175370b44487f8c192bb8c0ee128ba0907dc90930670f1559398e37ea0\": not found" Jan 17 12:25:08.396301 kubelet[1774]: E0117 12:25:08.396270 1774 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = an error occurred when try to find container \"92fbbb175370b44487f8c192bb8c0ee128ba0907dc90930670f1559398e37ea0\": not found" containerID="92fbbb175370b44487f8c192bb8c0ee128ba0907dc90930670f1559398e37ea0" Jan 17 12:25:08.396345 kubelet[1774]: I0117 12:25:08.396297 1774 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"containerd","ID":"92fbbb175370b44487f8c192bb8c0ee128ba0907dc90930670f1559398e37ea0"} err="failed to get container status \"92fbbb175370b44487f8c192bb8c0ee128ba0907dc90930670f1559398e37ea0\": rpc error: code = NotFound desc = an error occurred when try to find container \"92fbbb175370b44487f8c192bb8c0ee128ba0907dc90930670f1559398e37ea0\": not found" Jan 17 12:25:08.396345 kubelet[1774]: I0117 12:25:08.396317 1774 scope.go:117] "RemoveContainer" containerID="bc9c35558270be193a276e0d676955dbf41cdd47c5119a4393e0c20fb6b5f5d3" Jan 17 12:25:08.396523 containerd[1459]: time="2025-01-17T12:25:08.396486461Z" level=error msg="ContainerStatus for \"bc9c35558270be193a276e0d676955dbf41cdd47c5119a4393e0c20fb6b5f5d3\" failed" error="rpc error: code = NotFound desc = an error occurred when try to find container \"bc9c35558270be193a276e0d676955dbf41cdd47c5119a4393e0c20fb6b5f5d3\": not found" Jan 17 12:25:08.396631 kubelet[1774]: E0117 12:25:08.396613 1774 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = an error occurred when try to find container \"bc9c35558270be193a276e0d676955dbf41cdd47c5119a4393e0c20fb6b5f5d3\": not found" containerID="bc9c35558270be193a276e0d676955dbf41cdd47c5119a4393e0c20fb6b5f5d3" Jan 17 12:25:08.396668 kubelet[1774]: I0117 12:25:08.396629 1774 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"containerd","ID":"bc9c35558270be193a276e0d676955dbf41cdd47c5119a4393e0c20fb6b5f5d3"} err="failed to get container status \"bc9c35558270be193a276e0d676955dbf41cdd47c5119a4393e0c20fb6b5f5d3\": rpc error: code = NotFound desc = an error occurred when try to find container 
\"bc9c35558270be193a276e0d676955dbf41cdd47c5119a4393e0c20fb6b5f5d3\": not found" Jan 17 12:25:08.396668 kubelet[1774]: I0117 12:25:08.396653 1774 scope.go:117] "RemoveContainer" containerID="b32a6cb6854f417133f2f8febceb7e48102c25f956e3c46531978fadebf86477" Jan 17 12:25:08.396855 containerd[1459]: time="2025-01-17T12:25:08.396824396Z" level=error msg="ContainerStatus for \"b32a6cb6854f417133f2f8febceb7e48102c25f956e3c46531978fadebf86477\" failed" error="rpc error: code = NotFound desc = an error occurred when try to find container \"b32a6cb6854f417133f2f8febceb7e48102c25f956e3c46531978fadebf86477\": not found" Jan 17 12:25:08.396977 kubelet[1774]: E0117 12:25:08.396954 1774 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = an error occurred when try to find container \"b32a6cb6854f417133f2f8febceb7e48102c25f956e3c46531978fadebf86477\": not found" containerID="b32a6cb6854f417133f2f8febceb7e48102c25f956e3c46531978fadebf86477" Jan 17 12:25:08.397041 kubelet[1774]: I0117 12:25:08.396980 1774 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"containerd","ID":"b32a6cb6854f417133f2f8febceb7e48102c25f956e3c46531978fadebf86477"} err="failed to get container status \"b32a6cb6854f417133f2f8febceb7e48102c25f956e3c46531978fadebf86477\": rpc error: code = NotFound desc = an error occurred when try to find container \"b32a6cb6854f417133f2f8febceb7e48102c25f956e3c46531978fadebf86477\": not found" Jan 17 12:25:08.397041 kubelet[1774]: I0117 12:25:08.396996 1774 scope.go:117] "RemoveContainer" containerID="df67182abd63df77448e06972fa8b7ea791f793533d780259b1317e7a8bbc488" Jan 17 12:25:08.397214 containerd[1459]: time="2025-01-17T12:25:08.397180285Z" level=error msg="ContainerStatus for \"df67182abd63df77448e06972fa8b7ea791f793533d780259b1317e7a8bbc488\" failed" error="rpc error: code = NotFound desc = an error occurred when try to find container \"df67182abd63df77448e06972fa8b7ea791f793533d780259b1317e7a8bbc488\": not found" Jan 17 12:25:08.397310 kubelet[1774]: E0117 12:25:08.397287 1774 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = an error occurred when try to find container \"df67182abd63df77448e06972fa8b7ea791f793533d780259b1317e7a8bbc488\": not found" containerID="df67182abd63df77448e06972fa8b7ea791f793533d780259b1317e7a8bbc488" Jan 17 12:25:08.397359 kubelet[1774]: I0117 12:25:08.397312 1774 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"containerd","ID":"df67182abd63df77448e06972fa8b7ea791f793533d780259b1317e7a8bbc488"} err="failed to get container status \"df67182abd63df77448e06972fa8b7ea791f793533d780259b1317e7a8bbc488\": rpc error: code = NotFound desc = an error occurred when try to find container \"df67182abd63df77448e06972fa8b7ea791f793533d780259b1317e7a8bbc488\": not found" Jan 17 12:25:08.608962 kubelet[1774]: E0117 12:25:08.608897 1774 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Jan 17 12:25:08.765936 systemd[1]: var-lib-kubelet-pods-bd66da01\x2d525c\x2d4756\x2da809\x2d7ce5d81058f3-volumes-kubernetes.io\x7eprojected-kube\x2dapi\x2daccess\x2dw9rbp.mount: Deactivated successfully. Jan 17 12:25:08.766082 systemd[1]: var-lib-kubelet-pods-bd66da01\x2d525c\x2d4756\x2da809\x2d7ce5d81058f3-volumes-kubernetes.io\x7eprojected-hubble\x2dtls.mount: Deactivated successfully. 
Jan 17 12:25:09.187910 kubelet[1774]: I0117 12:25:09.187852 1774 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="bd66da01-525c-4756-a809-7ce5d81058f3" path="/var/lib/kubelet/pods/bd66da01-525c-4756-a809-7ce5d81058f3/volumes" Jan 17 12:25:09.609644 kubelet[1774]: E0117 12:25:09.609588 1774 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Jan 17 12:25:10.424635 kubelet[1774]: E0117 12:25:10.424587 1774 cpu_manager.go:395] "RemoveStaleState: removing container" podUID="bd66da01-525c-4756-a809-7ce5d81058f3" containerName="mount-cgroup" Jan 17 12:25:10.424635 kubelet[1774]: E0117 12:25:10.424616 1774 cpu_manager.go:395] "RemoveStaleState: removing container" podUID="bd66da01-525c-4756-a809-7ce5d81058f3" containerName="mount-bpf-fs" Jan 17 12:25:10.424635 kubelet[1774]: E0117 12:25:10.424622 1774 cpu_manager.go:395] "RemoveStaleState: removing container" podUID="bd66da01-525c-4756-a809-7ce5d81058f3" containerName="cilium-agent" Jan 17 12:25:10.424635 kubelet[1774]: E0117 12:25:10.424628 1774 cpu_manager.go:395] "RemoveStaleState: removing container" podUID="bd66da01-525c-4756-a809-7ce5d81058f3" containerName="apply-sysctl-overwrites" Jan 17 12:25:10.424635 kubelet[1774]: E0117 12:25:10.424635 1774 cpu_manager.go:395] "RemoveStaleState: removing container" podUID="bd66da01-525c-4756-a809-7ce5d81058f3" containerName="clean-cilium-state" Jan 17 12:25:10.424635 kubelet[1774]: I0117 12:25:10.424651 1774 memory_manager.go:354] "RemoveStaleState removing state" podUID="bd66da01-525c-4756-a809-7ce5d81058f3" containerName="cilium-agent" Jan 17 12:25:10.430765 systemd[1]: Created slice kubepods-besteffort-pod2efdb8c2_93ca_4998_9c88_b5d2e48dec65.slice - libcontainer container kubepods-besteffort-pod2efdb8c2_93ca_4998_9c88_b5d2e48dec65.slice. Jan 17 12:25:10.447640 systemd[1]: Created slice kubepods-burstable-podcaecb030_f084_47dc_bcfa_eb930051b8e9.slice - libcontainer container kubepods-burstable-podcaecb030_f084_47dc_bcfa_eb930051b8e9.slice. 
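Likewise, the "Created slice" entries here and earlier in the log encode the pod QoS class and UID in the cgroup slice name, with the dashes of the UID replaced by underscores. A small illustrative helper that reproduces the names printed above (qos is "besteffort" or "burstable" as shown by systemd; this is an illustration of the naming pattern, not kubelet's actual implementation):

    def kubepods_slice(qos: str, pod_uid: str) -> str:
        # e.g. caecb030-f084-47dc-bcfa-eb930051b8e9 ->
        #      kubepods-burstable-podcaecb030_f084_47dc_bcfa_eb930051b8e9.slice
        return "kubepods-{}-pod{}.slice".format(qos, pod_uid.replace("-", "_"))

    print(kubepods_slice("burstable", "caecb030-f084-47dc-bcfa-eb930051b8e9"))
    print(kubepods_slice("besteffort", "2efdb8c2-93ca-4998-9c88-b5d2e48dec65"))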
Jan 17 12:25:10.571816 kubelet[1774]: E0117 12:25:10.571770 1774 file.go:104] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Jan 17 12:25:10.582715 containerd[1459]: time="2025-01-17T12:25:10.582684336Z" level=info msg="StopPodSandbox for \"3f5f363ede96ce713213ee302b158c570a18c7752266dbcab387781133960950\"" Jan 17 12:25:10.583130 containerd[1459]: time="2025-01-17T12:25:10.582760709Z" level=info msg="TearDown network for sandbox \"3f5f363ede96ce713213ee302b158c570a18c7752266dbcab387781133960950\" successfully" Jan 17 12:25:10.583130 containerd[1459]: time="2025-01-17T12:25:10.582771800Z" level=info msg="StopPodSandbox for \"3f5f363ede96ce713213ee302b158c570a18c7752266dbcab387781133960950\" returns successfully" Jan 17 12:25:10.583130 containerd[1459]: time="2025-01-17T12:25:10.583059340Z" level=info msg="RemovePodSandbox for \"3f5f363ede96ce713213ee302b158c570a18c7752266dbcab387781133960950\"" Jan 17 12:25:10.583130 containerd[1459]: time="2025-01-17T12:25:10.583077374Z" level=info msg="Forcibly stopping sandbox \"3f5f363ede96ce713213ee302b158c570a18c7752266dbcab387781133960950\"" Jan 17 12:25:10.583130 containerd[1459]: time="2025-01-17T12:25:10.583116698Z" level=info msg="TearDown network for sandbox \"3f5f363ede96ce713213ee302b158c570a18c7752266dbcab387781133960950\" successfully" Jan 17 12:25:10.586136 containerd[1459]: time="2025-01-17T12:25:10.586101276Z" level=warning msg="Failed to get podSandbox status for container event for sandboxID \"3f5f363ede96ce713213ee302b158c570a18c7752266dbcab387781133960950\": an error occurred when try to find sandbox: not found. Sending the event with nil podSandboxStatus." Jan 17 12:25:10.586211 containerd[1459]: time="2025-01-17T12:25:10.586140119Z" level=info msg="RemovePodSandbox \"3f5f363ede96ce713213ee302b158c570a18c7752266dbcab387781133960950\" returns successfully" Jan 17 12:25:10.610680 kubelet[1774]: E0117 12:25:10.610642 1774 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Jan 17 12:25:10.611916 kubelet[1774]: I0117 12:25:10.611889 1774 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/caecb030-f084-47dc-bcfa-eb930051b8e9-lib-modules\") pod \"cilium-7zpjg\" (UID: \"caecb030-f084-47dc-bcfa-eb930051b8e9\") " pod="kube-system/cilium-7zpjg" Jan 17 12:25:10.611983 kubelet[1774]: I0117 12:25:10.611918 1774 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"hubble-tls\" (UniqueName: \"kubernetes.io/projected/caecb030-f084-47dc-bcfa-eb930051b8e9-hubble-tls\") pod \"cilium-7zpjg\" (UID: \"caecb030-f084-47dc-bcfa-eb930051b8e9\") " pod="kube-system/cilium-7zpjg" Jan 17 12:25:10.611983 kubelet[1774]: I0117 12:25:10.611942 1774 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"bpf-maps\" (UniqueName: \"kubernetes.io/host-path/caecb030-f084-47dc-bcfa-eb930051b8e9-bpf-maps\") pod \"cilium-7zpjg\" (UID: \"caecb030-f084-47dc-bcfa-eb930051b8e9\") " pod="kube-system/cilium-7zpjg" Jan 17 12:25:10.611983 kubelet[1774]: I0117 12:25:10.611963 1774 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cni-path\" (UniqueName: \"kubernetes.io/host-path/caecb030-f084-47dc-bcfa-eb930051b8e9-cni-path\") pod \"cilium-7zpjg\" (UID: \"caecb030-f084-47dc-bcfa-eb930051b8e9\") " pod="kube-system/cilium-7zpjg" Jan 17 
12:25:10.612079 kubelet[1774]: I0117 12:25:10.611983 1774 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"etc-cni-netd\" (UniqueName: \"kubernetes.io/host-path/caecb030-f084-47dc-bcfa-eb930051b8e9-etc-cni-netd\") pod \"cilium-7zpjg\" (UID: \"caecb030-f084-47dc-bcfa-eb930051b8e9\") " pod="kube-system/cilium-7zpjg" Jan 17 12:25:10.612079 kubelet[1774]: I0117 12:25:10.612011 1774 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"clustermesh-secrets\" (UniqueName: \"kubernetes.io/secret/caecb030-f084-47dc-bcfa-eb930051b8e9-clustermesh-secrets\") pod \"cilium-7zpjg\" (UID: \"caecb030-f084-47dc-bcfa-eb930051b8e9\") " pod="kube-system/cilium-7zpjg" Jan 17 12:25:10.612079 kubelet[1774]: I0117 12:25:10.612062 1774 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cilium-run\" (UniqueName: \"kubernetes.io/host-path/caecb030-f084-47dc-bcfa-eb930051b8e9-cilium-run\") pod \"cilium-7zpjg\" (UID: \"caecb030-f084-47dc-bcfa-eb930051b8e9\") " pod="kube-system/cilium-7zpjg" Jan 17 12:25:10.612180 kubelet[1774]: I0117 12:25:10.612091 1774 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"hostproc\" (UniqueName: \"kubernetes.io/host-path/caecb030-f084-47dc-bcfa-eb930051b8e9-hostproc\") pod \"cilium-7zpjg\" (UID: \"caecb030-f084-47dc-bcfa-eb930051b8e9\") " pod="kube-system/cilium-7zpjg" Jan 17 12:25:10.612180 kubelet[1774]: I0117 12:25:10.612119 1774 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cilium-ipsec-secrets\" (UniqueName: \"kubernetes.io/secret/caecb030-f084-47dc-bcfa-eb930051b8e9-cilium-ipsec-secrets\") pod \"cilium-7zpjg\" (UID: \"caecb030-f084-47dc-bcfa-eb930051b8e9\") " pod="kube-system/cilium-7zpjg" Jan 17 12:25:10.612180 kubelet[1774]: I0117 12:25:10.612140 1774 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-proc-sys-net\" (UniqueName: \"kubernetes.io/host-path/caecb030-f084-47dc-bcfa-eb930051b8e9-host-proc-sys-net\") pod \"cilium-7zpjg\" (UID: \"caecb030-f084-47dc-bcfa-eb930051b8e9\") " pod="kube-system/cilium-7zpjg" Jan 17 12:25:10.612180 kubelet[1774]: I0117 12:25:10.612155 1774 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-4m6fj\" (UniqueName: \"kubernetes.io/projected/2efdb8c2-93ca-4998-9c88-b5d2e48dec65-kube-api-access-4m6fj\") pod \"cilium-operator-5d85765b45-m4h9n\" (UID: \"2efdb8c2-93ca-4998-9c88-b5d2e48dec65\") " pod="kube-system/cilium-operator-5d85765b45-m4h9n" Jan 17 12:25:10.612180 kubelet[1774]: I0117 12:25:10.612170 1774 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"xtables-lock\" (UniqueName: \"kubernetes.io/host-path/caecb030-f084-47dc-bcfa-eb930051b8e9-xtables-lock\") pod \"cilium-7zpjg\" (UID: \"caecb030-f084-47dc-bcfa-eb930051b8e9\") " pod="kube-system/cilium-7zpjg" Jan 17 12:25:10.612310 kubelet[1774]: I0117 12:25:10.612186 1774 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cilium-config-path\" (UniqueName: \"kubernetes.io/configmap/caecb030-f084-47dc-bcfa-eb930051b8e9-cilium-config-path\") pod \"cilium-7zpjg\" (UID: \"caecb030-f084-47dc-bcfa-eb930051b8e9\") " pod="kube-system/cilium-7zpjg" Jan 17 12:25:10.612310 kubelet[1774]: I0117 12:25:10.612201 1774 reconciler_common.go:245] 
"operationExecutor.VerifyControllerAttachedVolume started for volume \"host-proc-sys-kernel\" (UniqueName: \"kubernetes.io/host-path/caecb030-f084-47dc-bcfa-eb930051b8e9-host-proc-sys-kernel\") pod \"cilium-7zpjg\" (UID: \"caecb030-f084-47dc-bcfa-eb930051b8e9\") " pod="kube-system/cilium-7zpjg" Jan 17 12:25:10.612310 kubelet[1774]: I0117 12:25:10.612216 1774 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-zhvn4\" (UniqueName: \"kubernetes.io/projected/caecb030-f084-47dc-bcfa-eb930051b8e9-kube-api-access-zhvn4\") pod \"cilium-7zpjg\" (UID: \"caecb030-f084-47dc-bcfa-eb930051b8e9\") " pod="kube-system/cilium-7zpjg" Jan 17 12:25:10.612310 kubelet[1774]: I0117 12:25:10.612230 1774 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cilium-config-path\" (UniqueName: \"kubernetes.io/configmap/2efdb8c2-93ca-4998-9c88-b5d2e48dec65-cilium-config-path\") pod \"cilium-operator-5d85765b45-m4h9n\" (UID: \"2efdb8c2-93ca-4998-9c88-b5d2e48dec65\") " pod="kube-system/cilium-operator-5d85765b45-m4h9n" Jan 17 12:25:10.612310 kubelet[1774]: I0117 12:25:10.612245 1774 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cilium-cgroup\" (UniqueName: \"kubernetes.io/host-path/caecb030-f084-47dc-bcfa-eb930051b8e9-cilium-cgroup\") pod \"cilium-7zpjg\" (UID: \"caecb030-f084-47dc-bcfa-eb930051b8e9\") " pod="kube-system/cilium-7zpjg" Jan 17 12:25:10.763934 kubelet[1774]: E0117 12:25:10.763896 1774 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jan 17 12:25:10.764446 containerd[1459]: time="2025-01-17T12:25:10.764400504Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:cilium-7zpjg,Uid:caecb030-f084-47dc-bcfa-eb930051b8e9,Namespace:kube-system,Attempt:0,}" Jan 17 12:25:10.784334 containerd[1459]: time="2025-01-17T12:25:10.784257522Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Jan 17 12:25:10.784444 containerd[1459]: time="2025-01-17T12:25:10.784318136Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Jan 17 12:25:10.784444 containerd[1459]: time="2025-01-17T12:25:10.784360576Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Jan 17 12:25:10.784552 containerd[1459]: time="2025-01-17T12:25:10.784458880Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Jan 17 12:25:10.804165 systemd[1]: Started cri-containerd-64e73c104bb8e8e6f78985ff561b5d65813b4f5bb48a1c701d3b367c1fc2e80b.scope - libcontainer container 64e73c104bb8e8e6f78985ff561b5d65813b4f5bb48a1c701d3b367c1fc2e80b. 
Jan 17 12:25:10.824085 containerd[1459]: time="2025-01-17T12:25:10.824041741Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:cilium-7zpjg,Uid:caecb030-f084-47dc-bcfa-eb930051b8e9,Namespace:kube-system,Attempt:0,} returns sandbox id \"64e73c104bb8e8e6f78985ff561b5d65813b4f5bb48a1c701d3b367c1fc2e80b\""
Jan 17 12:25:10.824791 kubelet[1774]: E0117 12:25:10.824747 1774 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Jan 17 12:25:10.826209 containerd[1459]: time="2025-01-17T12:25:10.826175429Z" level=info msg="CreateContainer within sandbox \"64e73c104bb8e8e6f78985ff561b5d65813b4f5bb48a1c701d3b367c1fc2e80b\" for container &ContainerMetadata{Name:mount-cgroup,Attempt:0,}"
Jan 17 12:25:10.840574 containerd[1459]: time="2025-01-17T12:25:10.840521122Z" level=info msg="CreateContainer within sandbox \"64e73c104bb8e8e6f78985ff561b5d65813b4f5bb48a1c701d3b367c1fc2e80b\" for &ContainerMetadata{Name:mount-cgroup,Attempt:0,} returns container id \"15105350afb4c75b7825e39e58f97440ec31d20a5df31bfaba9bd3a462b984f5\""
Jan 17 12:25:10.841017 containerd[1459]: time="2025-01-17T12:25:10.840982179Z" level=info msg="StartContainer for \"15105350afb4c75b7825e39e58f97440ec31d20a5df31bfaba9bd3a462b984f5\""
Jan 17 12:25:10.868226 systemd[1]: Started cri-containerd-15105350afb4c75b7825e39e58f97440ec31d20a5df31bfaba9bd3a462b984f5.scope - libcontainer container 15105350afb4c75b7825e39e58f97440ec31d20a5df31bfaba9bd3a462b984f5.
Jan 17 12:25:10.892174 containerd[1459]: time="2025-01-17T12:25:10.892135869Z" level=info msg="StartContainer for \"15105350afb4c75b7825e39e58f97440ec31d20a5df31bfaba9bd3a462b984f5\" returns successfully"
Jan 17 12:25:10.900216 systemd[1]: cri-containerd-15105350afb4c75b7825e39e58f97440ec31d20a5df31bfaba9bd3a462b984f5.scope: Deactivated successfully.
Jan 17 12:25:10.931534 containerd[1459]: time="2025-01-17T12:25:10.931465865Z" level=info msg="shim disconnected" id=15105350afb4c75b7825e39e58f97440ec31d20a5df31bfaba9bd3a462b984f5 namespace=k8s.io
Jan 17 12:25:10.931534 containerd[1459]: time="2025-01-17T12:25:10.931523463Z" level=warning msg="cleaning up after shim disconnected" id=15105350afb4c75b7825e39e58f97440ec31d20a5df31bfaba9bd3a462b984f5 namespace=k8s.io
Jan 17 12:25:10.931534 containerd[1459]: time="2025-01-17T12:25:10.931536417Z" level=info msg="cleaning up dead shim" namespace=k8s.io
Jan 17 12:25:11.033723 kubelet[1774]: E0117 12:25:11.033575 1774 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Jan 17 12:25:11.034141 containerd[1459]: time="2025-01-17T12:25:11.034098771Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:cilium-operator-5d85765b45-m4h9n,Uid:2efdb8c2-93ca-4998-9c88-b5d2e48dec65,Namespace:kube-system,Attempt:0,}"
Jan 17 12:25:11.054501 containerd[1459]: time="2025-01-17T12:25:11.054417033Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1
Jan 17 12:25:11.054501 containerd[1459]: time="2025-01-17T12:25:11.054479100Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1
Jan 17 12:25:11.054578 containerd[1459]: time="2025-01-17T12:25:11.054492355Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
Jan 17 12:25:11.054603 containerd[1459]: time="2025-01-17T12:25:11.054573196Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
Jan 17 12:25:11.076147 systemd[1]: Started cri-containerd-826bbdbed7d18d1494b3c2a50b86cde78b110e5b5abf1a56d75ea374fb9573ab.scope - libcontainer container 826bbdbed7d18d1494b3c2a50b86cde78b110e5b5abf1a56d75ea374fb9573ab.
Jan 17 12:25:11.107653 containerd[1459]: time="2025-01-17T12:25:11.107606296Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:cilium-operator-5d85765b45-m4h9n,Uid:2efdb8c2-93ca-4998-9c88-b5d2e48dec65,Namespace:kube-system,Attempt:0,} returns sandbox id \"826bbdbed7d18d1494b3c2a50b86cde78b110e5b5abf1a56d75ea374fb9573ab\""
Jan 17 12:25:11.108199 kubelet[1774]: E0117 12:25:11.108178 1774 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Jan 17 12:25:11.108992 containerd[1459]: time="2025-01-17T12:25:11.108958896Z" level=info msg="PullImage \"quay.io/cilium/operator-generic:v1.12.5@sha256:b296eb7f0f7656a5cc19724f40a8a7121b7fd725278b7d61dc91fe0b7ffd7c0e\""
Jan 17 12:25:11.302626 kubelet[1774]: E0117 12:25:11.302495 1774 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Jan 17 12:25:11.304174 containerd[1459]: time="2025-01-17T12:25:11.304134448Z" level=info msg="CreateContainer within sandbox \"64e73c104bb8e8e6f78985ff561b5d65813b4f5bb48a1c701d3b367c1fc2e80b\" for container &ContainerMetadata{Name:apply-sysctl-overwrites,Attempt:0,}"
Jan 17 12:25:11.346993 kubelet[1774]: E0117 12:25:11.346958 1774 kubelet.go:2901] "Container runtime network not ready" networkReady="NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized"
Jan 17 12:25:11.524418 containerd[1459]: time="2025-01-17T12:25:11.524363980Z" level=info msg="CreateContainer within sandbox \"64e73c104bb8e8e6f78985ff561b5d65813b4f5bb48a1c701d3b367c1fc2e80b\" for &ContainerMetadata{Name:apply-sysctl-overwrites,Attempt:0,} returns container id \"e922fb9a5ae261914ba0eba8860ce0d92f20a6dc5769a6da6daee21ddfb5abdd\""
Jan 17 12:25:11.524912 containerd[1459]: time="2025-01-17T12:25:11.524883807Z" level=info msg="StartContainer for \"e922fb9a5ae261914ba0eba8860ce0d92f20a6dc5769a6da6daee21ddfb5abdd\""
Jan 17 12:25:11.551141 systemd[1]: Started cri-containerd-e922fb9a5ae261914ba0eba8860ce0d92f20a6dc5769a6da6daee21ddfb5abdd.scope - libcontainer container e922fb9a5ae261914ba0eba8860ce0d92f20a6dc5769a6da6daee21ddfb5abdd.
Jan 17 12:25:11.580301 systemd[1]: cri-containerd-e922fb9a5ae261914ba0eba8860ce0d92f20a6dc5769a6da6daee21ddfb5abdd.scope: Deactivated successfully.
Jan 17 12:25:11.611351 kubelet[1774]: E0117 12:25:11.611318 1774 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests"
Jan 17 12:25:11.612933 containerd[1459]: time="2025-01-17T12:25:11.612902161Z" level=info msg="StartContainer for \"e922fb9a5ae261914ba0eba8860ce0d92f20a6dc5769a6da6daee21ddfb5abdd\" returns successfully"
Jan 17 12:25:11.647945 containerd[1459]: time="2025-01-17T12:25:11.647882897Z" level=info msg="shim disconnected" id=e922fb9a5ae261914ba0eba8860ce0d92f20a6dc5769a6da6daee21ddfb5abdd namespace=k8s.io
Jan 17 12:25:11.647945 containerd[1459]: time="2025-01-17T12:25:11.647939112Z" level=warning msg="cleaning up after shim disconnected" id=e922fb9a5ae261914ba0eba8860ce0d92f20a6dc5769a6da6daee21ddfb5abdd namespace=k8s.io
Jan 17 12:25:11.647945 containerd[1459]: time="2025-01-17T12:25:11.647950754Z" level=info msg="cleaning up dead shim" namespace=k8s.io
Jan 17 12:25:12.305965 kubelet[1774]: E0117 12:25:12.305931 1774 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Jan 17 12:25:12.307857 containerd[1459]: time="2025-01-17T12:25:12.307817868Z" level=info msg="CreateContainer within sandbox \"64e73c104bb8e8e6f78985ff561b5d65813b4f5bb48a1c701d3b367c1fc2e80b\" for container &ContainerMetadata{Name:mount-bpf-fs,Attempt:0,}"
Jan 17 12:25:12.321694 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount604430388.mount: Deactivated successfully.
Jan 17 12:25:12.323015 containerd[1459]: time="2025-01-17T12:25:12.322971844Z" level=info msg="CreateContainer within sandbox \"64e73c104bb8e8e6f78985ff561b5d65813b4f5bb48a1c701d3b367c1fc2e80b\" for &ContainerMetadata{Name:mount-bpf-fs,Attempt:0,} returns container id \"8ae35d5ce626bc7ca80d4507e60fcdd0972ab988785cce0adfbcd22659ae3609\""
Jan 17 12:25:12.323485 containerd[1459]: time="2025-01-17T12:25:12.323433601Z" level=info msg="StartContainer for \"8ae35d5ce626bc7ca80d4507e60fcdd0972ab988785cce0adfbcd22659ae3609\""
Jan 17 12:25:12.360196 systemd[1]: Started cri-containerd-8ae35d5ce626bc7ca80d4507e60fcdd0972ab988785cce0adfbcd22659ae3609.scope - libcontainer container 8ae35d5ce626bc7ca80d4507e60fcdd0972ab988785cce0adfbcd22659ae3609.
Jan 17 12:25:12.388157 containerd[1459]: time="2025-01-17T12:25:12.388108780Z" level=info msg="StartContainer for \"8ae35d5ce626bc7ca80d4507e60fcdd0972ab988785cce0adfbcd22659ae3609\" returns successfully"
Jan 17 12:25:12.389664 systemd[1]: cri-containerd-8ae35d5ce626bc7ca80d4507e60fcdd0972ab988785cce0adfbcd22659ae3609.scope: Deactivated successfully.
Jan 17 12:25:12.414336 containerd[1459]: time="2025-01-17T12:25:12.414278068Z" level=info msg="shim disconnected" id=8ae35d5ce626bc7ca80d4507e60fcdd0972ab988785cce0adfbcd22659ae3609 namespace=k8s.io
Jan 17 12:25:12.414336 containerd[1459]: time="2025-01-17T12:25:12.414326318Z" level=warning msg="cleaning up after shim disconnected" id=8ae35d5ce626bc7ca80d4507e60fcdd0972ab988785cce0adfbcd22659ae3609 namespace=k8s.io
Jan 17 12:25:12.414336 containerd[1459]: time="2025-01-17T12:25:12.414335556Z" level=info msg="cleaning up dead shim" namespace=k8s.io
Jan 17 12:25:12.612352 kubelet[1774]: E0117 12:25:12.612226 1774 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests"
Jan 17 12:25:12.722707 systemd[1]: run-containerd-runc-k8s.io-8ae35d5ce626bc7ca80d4507e60fcdd0972ab988785cce0adfbcd22659ae3609-runc.nZUmNi.mount: Deactivated successfully.
Jan 17 12:25:12.722830 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-8ae35d5ce626bc7ca80d4507e60fcdd0972ab988785cce0adfbcd22659ae3609-rootfs.mount: Deactivated successfully.
Jan 17 12:25:13.048085 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount2158560010.mount: Deactivated successfully.
Jan 17 12:25:13.309381 kubelet[1774]: E0117 12:25:13.309256 1774 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Jan 17 12:25:13.310937 containerd[1459]: time="2025-01-17T12:25:13.310891456Z" level=info msg="CreateContainer within sandbox \"64e73c104bb8e8e6f78985ff561b5d65813b4f5bb48a1c701d3b367c1fc2e80b\" for container &ContainerMetadata{Name:clean-cilium-state,Attempt:0,}"
Jan 17 12:25:13.328923 containerd[1459]: time="2025-01-17T12:25:13.328887016Z" level=info msg="CreateContainer within sandbox \"64e73c104bb8e8e6f78985ff561b5d65813b4f5bb48a1c701d3b367c1fc2e80b\" for &ContainerMetadata{Name:clean-cilium-state,Attempt:0,} returns container id \"0c7db2a6ea6c0dcbe0221cd4dd15ef2058e87da986e6825f1f7cab34768a2258\""
Jan 17 12:25:13.329321 containerd[1459]: time="2025-01-17T12:25:13.329281548Z" level=info msg="StartContainer for \"0c7db2a6ea6c0dcbe0221cd4dd15ef2058e87da986e6825f1f7cab34768a2258\""
Jan 17 12:25:13.357155 systemd[1]: Started cri-containerd-0c7db2a6ea6c0dcbe0221cd4dd15ef2058e87da986e6825f1f7cab34768a2258.scope - libcontainer container 0c7db2a6ea6c0dcbe0221cd4dd15ef2058e87da986e6825f1f7cab34768a2258.
Jan 17 12:25:13.379238 systemd[1]: cri-containerd-0c7db2a6ea6c0dcbe0221cd4dd15ef2058e87da986e6825f1f7cab34768a2258.scope: Deactivated successfully.
Jan 17 12:25:13.382712 containerd[1459]: time="2025-01-17T12:25:13.382666597Z" level=info msg="StartContainer for \"0c7db2a6ea6c0dcbe0221cd4dd15ef2058e87da986e6825f1f7cab34768a2258\" returns successfully"
Jan 17 12:25:13.405974 containerd[1459]: time="2025-01-17T12:25:13.405904544Z" level=info msg="shim disconnected" id=0c7db2a6ea6c0dcbe0221cd4dd15ef2058e87da986e6825f1f7cab34768a2258 namespace=k8s.io
Jan 17 12:25:13.405974 containerd[1459]: time="2025-01-17T12:25:13.405967853Z" level=warning msg="cleaning up after shim disconnected" id=0c7db2a6ea6c0dcbe0221cd4dd15ef2058e87da986e6825f1f7cab34768a2258 namespace=k8s.io
Jan 17 12:25:13.405974 containerd[1459]: time="2025-01-17T12:25:13.405976198Z" level=info msg="cleaning up dead shim" namespace=k8s.io
Jan 17 12:25:13.613356 kubelet[1774]: E0117 12:25:13.613213 1774 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests"
Jan 17 12:25:13.695234 kubelet[1774]: I0117 12:25:13.695175 1774 setters.go:600] "Node became not ready" node="10.0.0.161" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-01-17T12:25:13Z","lastTransitionTime":"2025-01-17T12:25:13Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized"}
Jan 17 12:25:14.312674 kubelet[1774]: E0117 12:25:14.312625 1774 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Jan 17 12:25:14.314359 containerd[1459]: time="2025-01-17T12:25:14.314322843Z" level=info msg="CreateContainer within sandbox \"64e73c104bb8e8e6f78985ff561b5d65813b4f5bb48a1c701d3b367c1fc2e80b\" for container &ContainerMetadata{Name:cilium-agent,Attempt:0,}"
Jan 17 12:25:14.474097 containerd[1459]: time="2025-01-17T12:25:14.474052374Z" level=info msg="CreateContainer within sandbox \"64e73c104bb8e8e6f78985ff561b5d65813b4f5bb48a1c701d3b367c1fc2e80b\" for &ContainerMetadata{Name:cilium-agent,Attempt:0,} returns container id \"197eafd93811c954af0899033582135b03643b7ce3df20bdea871dc5803e0b19\""
Jan 17 12:25:14.474552 containerd[1459]: time="2025-01-17T12:25:14.474527666Z" level=info msg="StartContainer for \"197eafd93811c954af0899033582135b03643b7ce3df20bdea871dc5803e0b19\""
Jan 17 12:25:14.484652 containerd[1459]: time="2025-01-17T12:25:14.484602919Z" level=info msg="ImageCreate event name:\"quay.io/cilium/operator-generic@sha256:b296eb7f0f7656a5cc19724f40a8a7121b7fd725278b7d61dc91fe0b7ffd7c0e\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Jan 17 12:25:14.487488 containerd[1459]: time="2025-01-17T12:25:14.486397338Z" level=info msg="stop pulling image quay.io/cilium/operator-generic@sha256:b296eb7f0f7656a5cc19724f40a8a7121b7fd725278b7d61dc91fe0b7ffd7c0e: active requests=0, bytes read=18907265"
Jan 17 12:25:14.489620 containerd[1459]: time="2025-01-17T12:25:14.489579034Z" level=info msg="ImageCreate event name:\"sha256:ed355de9f59fe391dbe53f3c7c7a60baab3c3a9b7549aa54d10b87fff7dacf7c\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Jan 17 12:25:14.493135 containerd[1459]: time="2025-01-17T12:25:14.493013774Z" level=info msg="Pulled image \"quay.io/cilium/operator-generic:v1.12.5@sha256:b296eb7f0f7656a5cc19724f40a8a7121b7fd725278b7d61dc91fe0b7ffd7c0e\" with image id \"sha256:ed355de9f59fe391dbe53f3c7c7a60baab3c3a9b7549aa54d10b87fff7dacf7c\", repo tag \"\", repo digest \"quay.io/cilium/operator-generic@sha256:b296eb7f0f7656a5cc19724f40a8a7121b7fd725278b7d61dc91fe0b7ffd7c0e\", size \"18897442\" in 3.38397096s"
Jan 17 12:25:14.493755 containerd[1459]: time="2025-01-17T12:25:14.493130223Z" level=info msg="PullImage \"quay.io/cilium/operator-generic:v1.12.5@sha256:b296eb7f0f7656a5cc19724f40a8a7121b7fd725278b7d61dc91fe0b7ffd7c0e\" returns image reference \"sha256:ed355de9f59fe391dbe53f3c7c7a60baab3c3a9b7549aa54d10b87fff7dacf7c\""
Jan 17 12:25:14.497781 containerd[1459]: time="2025-01-17T12:25:14.495286663Z" level=info msg="CreateContainer within sandbox \"826bbdbed7d18d1494b3c2a50b86cde78b110e5b5abf1a56d75ea374fb9573ab\" for container &ContainerMetadata{Name:cilium-operator,Attempt:0,}"
Jan 17 12:25:14.511176 systemd[1]: Started cri-containerd-197eafd93811c954af0899033582135b03643b7ce3df20bdea871dc5803e0b19.scope - libcontainer container 197eafd93811c954af0899033582135b03643b7ce3df20bdea871dc5803e0b19.
Jan 17 12:25:14.554469 containerd[1459]: time="2025-01-17T12:25:14.554350097Z" level=info msg="StartContainer for \"197eafd93811c954af0899033582135b03643b7ce3df20bdea871dc5803e0b19\" returns successfully"
Jan 17 12:25:14.565618 containerd[1459]: time="2025-01-17T12:25:14.565330059Z" level=info msg="CreateContainer within sandbox \"826bbdbed7d18d1494b3c2a50b86cde78b110e5b5abf1a56d75ea374fb9573ab\" for &ContainerMetadata{Name:cilium-operator,Attempt:0,} returns container id \"f66e67a16c326ffb705e9b35d313f25f02d1891dfddb3b573f8bdd6c48943bc5\""
Jan 17 12:25:14.567280 containerd[1459]: time="2025-01-17T12:25:14.566178532Z" level=info msg="StartContainer for \"f66e67a16c326ffb705e9b35d313f25f02d1891dfddb3b573f8bdd6c48943bc5\""
Jan 17 12:25:14.596313 systemd[1]: Started cri-containerd-f66e67a16c326ffb705e9b35d313f25f02d1891dfddb3b573f8bdd6c48943bc5.scope - libcontainer container f66e67a16c326ffb705e9b35d313f25f02d1891dfddb3b573f8bdd6c48943bc5.
Jan 17 12:25:14.615096 kubelet[1774]: E0117 12:25:14.614749 1774 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests"
Jan 17 12:25:14.674304 containerd[1459]: time="2025-01-17T12:25:14.674225760Z" level=info msg="StartContainer for \"f66e67a16c326ffb705e9b35d313f25f02d1891dfddb3b573f8bdd6c48943bc5\" returns successfully"
Jan 17 12:25:14.960058 kernel: alg: No test for seqiv(rfc4106(gcm(aes))) (seqiv(rfc4106-gcm-aesni))
Jan 17 12:25:15.315625 kubelet[1774]: E0117 12:25:15.315509 1774 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Jan 17 12:25:15.318576 kubelet[1774]: E0117 12:25:15.318547 1774 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Jan 17 12:25:15.323911 kubelet[1774]: I0117 12:25:15.323850 1774 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/cilium-operator-5d85765b45-m4h9n" podStartSLOduration=1.938720622 podStartE2EDuration="5.323828767s" podCreationTimestamp="2025-01-17 12:25:10 +0000 UTC" firstStartedPulling="2025-01-17 12:25:11.108669464 +0000 UTC m=+60.851891795" lastFinishedPulling="2025-01-17 12:25:14.493777619 +0000 UTC m=+64.236999940" observedRunningTime="2025-01-17 12:25:15.323576514 +0000 UTC m=+65.066798845" watchObservedRunningTime="2025-01-17 12:25:15.323828767 +0000 UTC m=+65.067051108"
Jan 17 12:25:15.615378 kubelet[1774]: E0117 12:25:15.615245 1774 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests"
Jan 17 12:25:16.319661 kubelet[1774]: E0117 12:25:16.319634 1774 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Jan 17 12:25:16.616495 kubelet[1774]: E0117 12:25:16.616372 1774 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests"
Jan 17 12:25:16.765487 kubelet[1774]: E0117 12:25:16.765453 1774 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Jan 17 12:25:17.617131 kubelet[1774]: E0117 12:25:17.617074 1774 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests"
Jan 17 12:25:17.981440 systemd-networkd[1375]: lxc_health: Link UP
Jan 17 12:25:17.990766 systemd-networkd[1375]: lxc_health: Gained carrier
Jan 17 12:25:18.618202 kubelet[1774]: E0117 12:25:18.618146 1774 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests"
Jan 17 12:25:18.766352 kubelet[1774]: E0117 12:25:18.766301 1774 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Jan 17 12:25:18.784690 kubelet[1774]: I0117 12:25:18.784607 1774 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/cilium-7zpjg" podStartSLOduration=8.78459295 podStartE2EDuration="8.78459295s" podCreationTimestamp="2025-01-17 12:25:10 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-01-17 12:25:15.336518097 +0000 UTC m=+65.079740428" watchObservedRunningTime="2025-01-17 12:25:18.78459295 +0000 UTC m=+68.527815281"
Jan 17 12:25:19.054044 systemd[1]: run-containerd-runc-k8s.io-197eafd93811c954af0899033582135b03643b7ce3df20bdea871dc5803e0b19-runc.c5NJFV.mount: Deactivated successfully.
Jan 17 12:25:19.325813 kubelet[1774]: E0117 12:25:19.325539 1774 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Jan 17 12:25:19.354219 systemd-networkd[1375]: lxc_health: Gained IPv6LL
Jan 17 12:25:19.619285 kubelet[1774]: E0117 12:25:19.619157 1774 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests"
Jan 17 12:25:20.326648 kubelet[1774]: E0117 12:25:20.326612 1774 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Jan 17 12:25:20.619549 kubelet[1774]: E0117 12:25:20.619448 1774 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests"
Jan 17 12:25:21.620429 kubelet[1774]: E0117 12:25:21.620387 1774 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests"
Jan 17 12:25:22.621125 kubelet[1774]: E0117 12:25:22.621078 1774 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests"
Jan 17 12:25:23.621490 kubelet[1774]: E0117 12:25:23.621428 1774 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests"
Jan 17 12:25:24.622047 kubelet[1774]: E0117 12:25:24.621991 1774 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests"
Jan 17 12:25:25.623085 kubelet[1774]: E0117 12:25:25.623015 1774 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests"