Jan 30 13:51:28.879862 kernel: Linux version 6.6.74-flatcar (build@pony-truck.infra.kinvolk.io) (x86_64-cros-linux-gnu-gcc (Gentoo Hardened 13.3.1_p20240614 p17) 13.3.1 20240614, GNU ld (Gentoo 2.42 p3) 2.42.0) #1 SMP PREEMPT_DYNAMIC Wed Jan 29 10:09:32 -00 2025 Jan 30 13:51:28.879883 kernel: Command line: BOOT_IMAGE=/flatcar/vmlinuz-a mount.usr=/dev/mapper/usr verity.usr=PARTUUID=7130c94a-213a-4e5a-8e26-6cce9662f132 rootflags=rw mount.usrflags=ro consoleblank=0 root=LABEL=ROOT console=ttyS0,115200 flatcar.first_boot=detected verity.usrhash=befc9792b021bef43c896e00e1d5172b6224dbafc9b6c92b267e5e544378e681 Jan 30 13:51:28.879894 kernel: BIOS-provided physical RAM map: Jan 30 13:51:28.879901 kernel: BIOS-e820: [mem 0x0000000000000000-0x000000000009fbff] usable Jan 30 13:51:28.879907 kernel: BIOS-e820: [mem 0x000000000009fc00-0x000000000009ffff] reserved Jan 30 13:51:28.879913 kernel: BIOS-e820: [mem 0x00000000000f0000-0x00000000000fffff] reserved Jan 30 13:51:28.879920 kernel: BIOS-e820: [mem 0x0000000000100000-0x000000009cfdbfff] usable Jan 30 13:51:28.879927 kernel: BIOS-e820: [mem 0x000000009cfdc000-0x000000009cffffff] reserved Jan 30 13:51:28.879933 kernel: BIOS-e820: [mem 0x00000000b0000000-0x00000000bfffffff] reserved Jan 30 13:51:28.879941 kernel: BIOS-e820: [mem 0x00000000fed1c000-0x00000000fed1ffff] reserved Jan 30 13:51:28.879947 kernel: BIOS-e820: [mem 0x00000000feffc000-0x00000000feffffff] reserved Jan 30 13:51:28.879954 kernel: BIOS-e820: [mem 0x00000000fffc0000-0x00000000ffffffff] reserved Jan 30 13:51:28.879960 kernel: BIOS-e820: [mem 0x000000fd00000000-0x000000ffffffffff] reserved Jan 30 13:51:28.879966 kernel: NX (Execute Disable) protection: active Jan 30 13:51:28.879974 kernel: APIC: Static calls initialized Jan 30 13:51:28.879983 kernel: SMBIOS 2.8 present. 
Jan 30 13:51:28.879990 kernel: DMI: QEMU Standard PC (Q35 + ICH9, 2009), BIOS 1.16.2-debian-1.16.2-1 04/01/2014 Jan 30 13:51:28.879997 kernel: Hypervisor detected: KVM Jan 30 13:51:28.880003 kernel: kvm-clock: Using msrs 4b564d01 and 4b564d00 Jan 30 13:51:28.880010 kernel: kvm-clock: using sched offset of 2253877325 cycles Jan 30 13:51:28.880017 kernel: clocksource: kvm-clock: mask: 0xffffffffffffffff max_cycles: 0x1cd42e4dffb, max_idle_ns: 881590591483 ns Jan 30 13:51:28.880116 kernel: tsc: Detected 2794.748 MHz processor Jan 30 13:51:28.880124 kernel: e820: update [mem 0x00000000-0x00000fff] usable ==> reserved Jan 30 13:51:28.880131 kernel: e820: remove [mem 0x000a0000-0x000fffff] usable Jan 30 13:51:28.880138 kernel: last_pfn = 0x9cfdc max_arch_pfn = 0x400000000 Jan 30 13:51:28.880147 kernel: MTRR map: 4 entries (3 fixed + 1 variable; max 19), built from 8 variable MTRRs Jan 30 13:51:28.880155 kernel: x86/PAT: Configuration [0-7]: WB WC UC- UC WB WP UC- WT Jan 30 13:51:28.880161 kernel: Using GB pages for direct mapping Jan 30 13:51:28.880168 kernel: ACPI: Early table checksum verification disabled Jan 30 13:51:28.880175 kernel: ACPI: RSDP 0x00000000000F59D0 000014 (v00 BOCHS ) Jan 30 13:51:28.880182 kernel: ACPI: RSDT 0x000000009CFE2408 000038 (v01 BOCHS BXPC 00000001 BXPC 00000001) Jan 30 13:51:28.880189 kernel: ACPI: FACP 0x000000009CFE21E8 0000F4 (v03 BOCHS BXPC 00000001 BXPC 00000001) Jan 30 13:51:28.880196 kernel: ACPI: DSDT 0x000000009CFE0040 0021A8 (v01 BOCHS BXPC 00000001 BXPC 00000001) Jan 30 13:51:28.880205 kernel: ACPI: FACS 0x000000009CFE0000 000040 Jan 30 13:51:28.880212 kernel: ACPI: APIC 0x000000009CFE22DC 000090 (v01 BOCHS BXPC 00000001 BXPC 00000001) Jan 30 13:51:28.880219 kernel: ACPI: HPET 0x000000009CFE236C 000038 (v01 BOCHS BXPC 00000001 BXPC 00000001) Jan 30 13:51:28.880226 kernel: ACPI: MCFG 0x000000009CFE23A4 00003C (v01 BOCHS BXPC 00000001 BXPC 00000001) Jan 30 13:51:28.880233 kernel: ACPI: WAET 0x000000009CFE23E0 000028 (v01 BOCHS BXPC 00000001 BXPC 00000001) Jan 30 13:51:28.880240 kernel: ACPI: Reserving FACP table memory at [mem 0x9cfe21e8-0x9cfe22db] Jan 30 13:51:28.880247 kernel: ACPI: Reserving DSDT table memory at [mem 0x9cfe0040-0x9cfe21e7] Jan 30 13:51:28.880258 kernel: ACPI: Reserving FACS table memory at [mem 0x9cfe0000-0x9cfe003f] Jan 30 13:51:28.880267 kernel: ACPI: Reserving APIC table memory at [mem 0x9cfe22dc-0x9cfe236b] Jan 30 13:51:28.880274 kernel: ACPI: Reserving HPET table memory at [mem 0x9cfe236c-0x9cfe23a3] Jan 30 13:51:28.880281 kernel: ACPI: Reserving MCFG table memory at [mem 0x9cfe23a4-0x9cfe23df] Jan 30 13:51:28.880288 kernel: ACPI: Reserving WAET table memory at [mem 0x9cfe23e0-0x9cfe2407] Jan 30 13:51:28.880295 kernel: No NUMA configuration found Jan 30 13:51:28.880303 kernel: Faking a node at [mem 0x0000000000000000-0x000000009cfdbfff] Jan 30 13:51:28.880311 kernel: NODE_DATA(0) allocated [mem 0x9cfd6000-0x9cfdbfff] Jan 30 13:51:28.880323 kernel: Zone ranges: Jan 30 13:51:28.880333 kernel: DMA [mem 0x0000000000001000-0x0000000000ffffff] Jan 30 13:51:28.880342 kernel: DMA32 [mem 0x0000000001000000-0x000000009cfdbfff] Jan 30 13:51:28.880351 kernel: Normal empty Jan 30 13:51:28.880360 kernel: Movable zone start for each node Jan 30 13:51:28.880369 kernel: Early memory node ranges Jan 30 13:51:28.880378 kernel: node 0: [mem 0x0000000000001000-0x000000000009efff] Jan 30 13:51:28.880387 kernel: node 0: [mem 0x0000000000100000-0x000000009cfdbfff] Jan 30 13:51:28.880395 kernel: Initmem setup node 0 [mem 
0x0000000000001000-0x000000009cfdbfff] Jan 30 13:51:28.880405 kernel: On node 0, zone DMA: 1 pages in unavailable ranges Jan 30 13:51:28.880412 kernel: On node 0, zone DMA: 97 pages in unavailable ranges Jan 30 13:51:28.880420 kernel: On node 0, zone DMA32: 12324 pages in unavailable ranges Jan 30 13:51:28.880427 kernel: ACPI: PM-Timer IO Port: 0x608 Jan 30 13:51:28.880434 kernel: ACPI: LAPIC_NMI (acpi_id[0xff] dfl dfl lint[0x1]) Jan 30 13:51:28.880441 kernel: IOAPIC[0]: apic_id 0, version 17, address 0xfec00000, GSI 0-23 Jan 30 13:51:28.880448 kernel: ACPI: INT_SRC_OVR (bus 0 bus_irq 0 global_irq 2 dfl dfl) Jan 30 13:51:28.880455 kernel: ACPI: INT_SRC_OVR (bus 0 bus_irq 5 global_irq 5 high level) Jan 30 13:51:28.880463 kernel: ACPI: INT_SRC_OVR (bus 0 bus_irq 9 global_irq 9 high level) Jan 30 13:51:28.880472 kernel: ACPI: INT_SRC_OVR (bus 0 bus_irq 10 global_irq 10 high level) Jan 30 13:51:28.880479 kernel: ACPI: INT_SRC_OVR (bus 0 bus_irq 11 global_irq 11 high level) Jan 30 13:51:28.880487 kernel: ACPI: Using ACPI (MADT) for SMP configuration information Jan 30 13:51:28.880494 kernel: ACPI: HPET id: 0x8086a201 base: 0xfed00000 Jan 30 13:51:28.880501 kernel: TSC deadline timer available Jan 30 13:51:28.880508 kernel: smpboot: Allowing 4 CPUs, 0 hotplug CPUs Jan 30 13:51:28.880516 kernel: kvm-guest: APIC: eoi() replaced with kvm_guest_apic_eoi_write() Jan 30 13:51:28.880523 kernel: kvm-guest: KVM setup pv remote TLB flush Jan 30 13:51:28.880530 kernel: kvm-guest: setup PV sched yield Jan 30 13:51:28.880537 kernel: [mem 0xc0000000-0xfed1bfff] available for PCI devices Jan 30 13:51:28.880547 kernel: Booting paravirtualized kernel on KVM Jan 30 13:51:28.880554 kernel: clocksource: refined-jiffies: mask: 0xffffffff max_cycles: 0xffffffff, max_idle_ns: 1910969940391419 ns Jan 30 13:51:28.880562 kernel: setup_percpu: NR_CPUS:512 nr_cpumask_bits:4 nr_cpu_ids:4 nr_node_ids:1 Jan 30 13:51:28.880569 kernel: percpu: Embedded 58 pages/cpu s197032 r8192 d32344 u524288 Jan 30 13:51:28.880576 kernel: pcpu-alloc: s197032 r8192 d32344 u524288 alloc=1*2097152 Jan 30 13:51:28.880583 kernel: pcpu-alloc: [0] 0 1 2 3 Jan 30 13:51:28.880590 kernel: kvm-guest: PV spinlocks enabled Jan 30 13:51:28.880597 kernel: PV qspinlock hash table entries: 256 (order: 0, 4096 bytes, linear) Jan 30 13:51:28.880606 kernel: Kernel command line: rootflags=rw mount.usrflags=ro BOOT_IMAGE=/flatcar/vmlinuz-a mount.usr=/dev/mapper/usr verity.usr=PARTUUID=7130c94a-213a-4e5a-8e26-6cce9662f132 rootflags=rw mount.usrflags=ro consoleblank=0 root=LABEL=ROOT console=ttyS0,115200 flatcar.first_boot=detected verity.usrhash=befc9792b021bef43c896e00e1d5172b6224dbafc9b6c92b267e5e544378e681 Jan 30 13:51:28.880616 kernel: Unknown kernel command line parameters "BOOT_IMAGE=/flatcar/vmlinuz-a", will be passed to user space. Jan 30 13:51:28.880623 kernel: random: crng init done Jan 30 13:51:28.880630 kernel: Dentry cache hash table entries: 524288 (order: 10, 4194304 bytes, linear) Jan 30 13:51:28.880637 kernel: Inode-cache hash table entries: 262144 (order: 9, 2097152 bytes, linear) Jan 30 13:51:28.880644 kernel: Fallback order for Node 0: 0 Jan 30 13:51:28.880651 kernel: Built 1 zonelists, mobility grouping on. 
Total pages: 632732 Jan 30 13:51:28.880658 kernel: Policy zone: DMA32 Jan 30 13:51:28.880666 kernel: mem auto-init: stack:off, heap alloc:off, heap free:off Jan 30 13:51:28.880676 kernel: Memory: 2434588K/2571752K available (12288K kernel code, 2301K rwdata, 22728K rodata, 42844K init, 2348K bss, 136904K reserved, 0K cma-reserved) Jan 30 13:51:28.880683 kernel: SLUB: HWalign=64, Order=0-3, MinObjects=0, CPUs=4, Nodes=1 Jan 30 13:51:28.880690 kernel: ftrace: allocating 37921 entries in 149 pages Jan 30 13:51:28.880697 kernel: ftrace: allocated 149 pages with 4 groups Jan 30 13:51:28.880704 kernel: Dynamic Preempt: voluntary Jan 30 13:51:28.880711 kernel: rcu: Preemptible hierarchical RCU implementation. Jan 30 13:51:28.880719 kernel: rcu: RCU event tracing is enabled. Jan 30 13:51:28.880726 kernel: rcu: RCU restricting CPUs from NR_CPUS=512 to nr_cpu_ids=4. Jan 30 13:51:28.880734 kernel: Trampoline variant of Tasks RCU enabled. Jan 30 13:51:28.880743 kernel: Rude variant of Tasks RCU enabled. Jan 30 13:51:28.880750 kernel: Tracing variant of Tasks RCU enabled. Jan 30 13:51:28.880758 kernel: rcu: RCU calculated value of scheduler-enlistment delay is 100 jiffies. Jan 30 13:51:28.880765 kernel: rcu: Adjusting geometry for rcu_fanout_leaf=16, nr_cpu_ids=4 Jan 30 13:51:28.880772 kernel: NR_IRQS: 33024, nr_irqs: 456, preallocated irqs: 16 Jan 30 13:51:28.880779 kernel: rcu: srcu_init: Setting srcu_struct sizes based on contention. Jan 30 13:51:28.880786 kernel: Console: colour VGA+ 80x25 Jan 30 13:51:28.880793 kernel: printk: console [ttyS0] enabled Jan 30 13:51:28.880800 kernel: ACPI: Core revision 20230628 Jan 30 13:51:28.880810 kernel: clocksource: hpet: mask: 0xffffffff max_cycles: 0xffffffff, max_idle_ns: 19112604467 ns Jan 30 13:51:28.880819 kernel: APIC: Switch to symmetric I/O mode setup Jan 30 13:51:28.880827 kernel: x2apic enabled Jan 30 13:51:28.880836 kernel: APIC: Switched APIC routing to: physical x2apic Jan 30 13:51:28.880844 kernel: kvm-guest: APIC: send_IPI_mask() replaced with kvm_send_ipi_mask() Jan 30 13:51:28.880852 kernel: kvm-guest: APIC: send_IPI_mask_allbutself() replaced with kvm_send_ipi_mask_allbutself() Jan 30 13:51:28.880859 kernel: kvm-guest: setup PV IPIs Jan 30 13:51:28.880876 kernel: ..TIMER: vector=0x30 apic1=0 pin1=2 apic2=-1 pin2=-1 Jan 30 13:51:28.880884 kernel: tsc: Marking TSC unstable due to TSCs unsynchronized Jan 30 13:51:28.880892 kernel: Calibrating delay loop (skipped) preset value.. 
5589.49 BogoMIPS (lpj=2794748) Jan 30 13:51:28.880899 kernel: x86/cpu: User Mode Instruction Prevention (UMIP) activated Jan 30 13:51:28.880906 kernel: Last level iTLB entries: 4KB 512, 2MB 255, 4MB 127 Jan 30 13:51:28.880916 kernel: Last level dTLB entries: 4KB 512, 2MB 255, 4MB 127, 1GB 0 Jan 30 13:51:28.880924 kernel: Spectre V1 : Mitigation: usercopy/swapgs barriers and __user pointer sanitization Jan 30 13:51:28.880931 kernel: Spectre V2 : Mitigation: Retpolines Jan 30 13:51:28.880939 kernel: Spectre V2 : Spectre v2 / SpectreRSB mitigation: Filling RSB on context switch Jan 30 13:51:28.880948 kernel: Spectre V2 : Spectre v2 / SpectreRSB : Filling RSB on VMEXIT Jan 30 13:51:28.880956 kernel: Spectre V2 : Enabling Speculation Barrier for firmware calls Jan 30 13:51:28.880963 kernel: RETBleed: Mitigation: untrained return thunk Jan 30 13:51:28.880971 kernel: Spectre V2 : mitigation: Enabling conditional Indirect Branch Prediction Barrier Jan 30 13:51:28.880979 kernel: Speculative Store Bypass: Mitigation: Speculative Store Bypass disabled via prctl Jan 30 13:51:28.880986 kernel: Speculative Return Stack Overflow: IBPB-extending microcode not applied! Jan 30 13:51:28.880994 kernel: Speculative Return Stack Overflow: WARNING: See https://kernel.org/doc/html/latest/admin-guide/hw-vuln/srso.html for mitigation options. Jan 30 13:51:28.881002 kernel: Speculative Return Stack Overflow: Vulnerable: Safe RET, no microcode Jan 30 13:51:28.881009 kernel: x86/fpu: Supporting XSAVE feature 0x001: 'x87 floating point registers' Jan 30 13:51:28.881019 kernel: x86/fpu: Supporting XSAVE feature 0x002: 'SSE registers' Jan 30 13:51:28.881038 kernel: x86/fpu: Supporting XSAVE feature 0x004: 'AVX registers' Jan 30 13:51:28.881045 kernel: x86/fpu: xstate_offset[2]: 576, xstate_sizes[2]: 256 Jan 30 13:51:28.881053 kernel: x86/fpu: Enabled xstate features 0x7, context size is 832 bytes, using 'compacted' format. Jan 30 13:51:28.881060 kernel: Freeing SMP alternatives memory: 32K Jan 30 13:51:28.881068 kernel: pid_max: default: 32768 minimum: 301 Jan 30 13:51:28.881082 kernel: LSM: initializing lsm=lockdown,capability,landlock,selinux,integrity Jan 30 13:51:28.881090 kernel: landlock: Up and running. Jan 30 13:51:28.881097 kernel: SELinux: Initializing. Jan 30 13:51:28.881108 kernel: Mount-cache hash table entries: 8192 (order: 4, 65536 bytes, linear) Jan 30 13:51:28.881115 kernel: Mountpoint-cache hash table entries: 8192 (order: 4, 65536 bytes, linear) Jan 30 13:51:28.881123 kernel: smpboot: CPU0: AMD EPYC 7402P 24-Core Processor (family: 0x17, model: 0x31, stepping: 0x0) Jan 30 13:51:28.881131 kernel: RCU Tasks: Setting shift to 2 and lim to 1 rcu_task_cb_adjust=1 rcu_task_cpu_ids=4. Jan 30 13:51:28.881138 kernel: RCU Tasks Rude: Setting shift to 2 and lim to 1 rcu_task_cb_adjust=1 rcu_task_cpu_ids=4. Jan 30 13:51:28.881146 kernel: RCU Tasks Trace: Setting shift to 2 and lim to 1 rcu_task_cb_adjust=1 rcu_task_cpu_ids=4. Jan 30 13:51:28.881154 kernel: Performance Events: Fam17h+ core perfctr, AMD PMU driver. Jan 30 13:51:28.881161 kernel: ... version: 0 Jan 30 13:51:28.881171 kernel: ... bit width: 48 Jan 30 13:51:28.881179 kernel: ... generic registers: 6 Jan 30 13:51:28.881186 kernel: ... value mask: 0000ffffffffffff Jan 30 13:51:28.881193 kernel: ... max period: 00007fffffffffff Jan 30 13:51:28.881201 kernel: ... fixed-purpose events: 0 Jan 30 13:51:28.881208 kernel: ... 
event mask: 000000000000003f Jan 30 13:51:28.881216 kernel: signal: max sigframe size: 1776 Jan 30 13:51:28.881223 kernel: rcu: Hierarchical SRCU implementation. Jan 30 13:51:28.881231 kernel: rcu: Max phase no-delay instances is 400. Jan 30 13:51:28.881238 kernel: smp: Bringing up secondary CPUs ... Jan 30 13:51:28.881248 kernel: smpboot: x86: Booting SMP configuration: Jan 30 13:51:28.881255 kernel: .... node #0, CPUs: #1 #2 #3 Jan 30 13:51:28.881263 kernel: smp: Brought up 1 node, 4 CPUs Jan 30 13:51:28.881270 kernel: smpboot: Max logical packages: 1 Jan 30 13:51:28.881278 kernel: smpboot: Total of 4 processors activated (22357.98 BogoMIPS) Jan 30 13:51:28.881285 kernel: devtmpfs: initialized Jan 30 13:51:28.881292 kernel: x86/mm: Memory block size: 128MB Jan 30 13:51:28.881300 kernel: clocksource: jiffies: mask: 0xffffffff max_cycles: 0xffffffff, max_idle_ns: 1911260446275000 ns Jan 30 13:51:28.881308 kernel: futex hash table entries: 1024 (order: 4, 65536 bytes, linear) Jan 30 13:51:28.881317 kernel: pinctrl core: initialized pinctrl subsystem Jan 30 13:51:28.881325 kernel: NET: Registered PF_NETLINK/PF_ROUTE protocol family Jan 30 13:51:28.881332 kernel: audit: initializing netlink subsys (disabled) Jan 30 13:51:28.881340 kernel: audit: type=2000 audit(1738245088.729:1): state=initialized audit_enabled=0 res=1 Jan 30 13:51:28.881347 kernel: thermal_sys: Registered thermal governor 'step_wise' Jan 30 13:51:28.881355 kernel: thermal_sys: Registered thermal governor 'user_space' Jan 30 13:51:28.881362 kernel: cpuidle: using governor menu Jan 30 13:51:28.881370 kernel: acpiphp: ACPI Hot Plug PCI Controller Driver version: 0.5 Jan 30 13:51:28.881377 kernel: dca service started, version 1.12.1 Jan 30 13:51:28.881387 kernel: PCI: MMCONFIG for domain 0000 [bus 00-ff] at [mem 0xb0000000-0xbfffffff] (base 0xb0000000) Jan 30 13:51:28.881395 kernel: PCI: MMCONFIG at [mem 0xb0000000-0xbfffffff] reserved as E820 entry Jan 30 13:51:28.881402 kernel: PCI: Using configuration type 1 for base access Jan 30 13:51:28.881410 kernel: kprobes: kprobe jump-optimization is enabled. All kprobes are optimized if possible. 
Jan 30 13:51:28.881417 kernel: HugeTLB: registered 1.00 GiB page size, pre-allocated 0 pages Jan 30 13:51:28.881425 kernel: HugeTLB: 16380 KiB vmemmap can be freed for a 1.00 GiB page Jan 30 13:51:28.881432 kernel: HugeTLB: registered 2.00 MiB page size, pre-allocated 0 pages Jan 30 13:51:28.881440 kernel: HugeTLB: 28 KiB vmemmap can be freed for a 2.00 MiB page Jan 30 13:51:28.881447 kernel: ACPI: Added _OSI(Module Device) Jan 30 13:51:28.881457 kernel: ACPI: Added _OSI(Processor Device) Jan 30 13:51:28.881464 kernel: ACPI: Added _OSI(3.0 _SCP Extensions) Jan 30 13:51:28.881472 kernel: ACPI: Added _OSI(Processor Aggregator Device) Jan 30 13:51:28.881479 kernel: ACPI: 1 ACPI AML tables successfully acquired and loaded Jan 30 13:51:28.881487 kernel: ACPI: _OSC evaluation for CPUs failed, trying _PDC Jan 30 13:51:28.881494 kernel: ACPI: Interpreter enabled Jan 30 13:51:28.881501 kernel: ACPI: PM: (supports S0 S3 S5) Jan 30 13:51:28.881509 kernel: ACPI: Using IOAPIC for interrupt routing Jan 30 13:51:28.881516 kernel: PCI: Using host bridge windows from ACPI; if necessary, use "pci=nocrs" and report a bug Jan 30 13:51:28.881526 kernel: PCI: Using E820 reservations for host bridge windows Jan 30 13:51:28.881534 kernel: ACPI: Enabled 2 GPEs in block 00 to 3F Jan 30 13:51:28.881541 kernel: ACPI: PCI Root Bridge [PCI0] (domain 0000 [bus 00-ff]) Jan 30 13:51:28.881715 kernel: acpi PNP0A08:00: _OSC: OS supports [ExtendedConfig ASPM ClockPM Segments MSI HPX-Type3] Jan 30 13:51:28.881843 kernel: acpi PNP0A08:00: _OSC: platform does not support [PCIeHotplug LTR] Jan 30 13:51:28.881977 kernel: acpi PNP0A08:00: _OSC: OS now controls [PME AER PCIeCapability] Jan 30 13:51:28.881988 kernel: PCI host bridge to bus 0000:00 Jan 30 13:51:28.882140 kernel: pci_bus 0000:00: root bus resource [io 0x0000-0x0cf7 window] Jan 30 13:51:28.882252 kernel: pci_bus 0000:00: root bus resource [io 0x0d00-0xffff window] Jan 30 13:51:28.882363 kernel: pci_bus 0000:00: root bus resource [mem 0x000a0000-0x000bffff window] Jan 30 13:51:28.882474 kernel: pci_bus 0000:00: root bus resource [mem 0x9d000000-0xafffffff window] Jan 30 13:51:28.882595 kernel: pci_bus 0000:00: root bus resource [mem 0xc0000000-0xfebfffff window] Jan 30 13:51:28.882719 kernel: pci_bus 0000:00: root bus resource [mem 0x100000000-0x8ffffffff window] Jan 30 13:51:28.882832 kernel: pci_bus 0000:00: root bus resource [bus 00-ff] Jan 30 13:51:28.882981 kernel: pci 0000:00:00.0: [8086:29c0] type 00 class 0x060000 Jan 30 13:51:28.883171 kernel: pci 0000:00:01.0: [1234:1111] type 00 class 0x030000 Jan 30 13:51:28.883327 kernel: pci 0000:00:01.0: reg 0x10: [mem 0xfd000000-0xfdffffff pref] Jan 30 13:51:28.883467 kernel: pci 0000:00:01.0: reg 0x18: [mem 0xfebd0000-0xfebd0fff] Jan 30 13:51:28.883667 kernel: pci 0000:00:01.0: reg 0x30: [mem 0xfebc0000-0xfebcffff pref] Jan 30 13:51:28.883812 kernel: pci 0000:00:01.0: Video device with shadowed ROM at [mem 0x000c0000-0x000dffff] Jan 30 13:51:28.883948 kernel: pci 0000:00:02.0: [1af4:1005] type 00 class 0x00ff00 Jan 30 13:51:28.884128 kernel: pci 0000:00:02.0: reg 0x10: [io 0xc0c0-0xc0df] Jan 30 13:51:28.884267 kernel: pci 0000:00:02.0: reg 0x14: [mem 0xfebd1000-0xfebd1fff] Jan 30 13:51:28.884390 kernel: pci 0000:00:02.0: reg 0x20: [mem 0xfe000000-0xfe003fff 64bit pref] Jan 30 13:51:28.884520 kernel: pci 0000:00:03.0: [1af4:1001] type 00 class 0x010000 Jan 30 13:51:28.884650 kernel: pci 0000:00:03.0: reg 0x10: [io 0xc000-0xc07f] Jan 30 13:51:28.884774 kernel: pci 0000:00:03.0: reg 0x14: [mem 0xfebd2000-0xfebd2fff] Jan 30 
13:51:28.884901 kernel: pci 0000:00:03.0: reg 0x20: [mem 0xfe004000-0xfe007fff 64bit pref] Jan 30 13:51:28.885172 kernel: pci 0000:00:04.0: [1af4:1000] type 00 class 0x020000 Jan 30 13:51:28.885404 kernel: pci 0000:00:04.0: reg 0x10: [io 0xc0e0-0xc0ff] Jan 30 13:51:28.885574 kernel: pci 0000:00:04.0: reg 0x14: [mem 0xfebd3000-0xfebd3fff] Jan 30 13:51:28.885755 kernel: pci 0000:00:04.0: reg 0x20: [mem 0xfe008000-0xfe00bfff 64bit pref] Jan 30 13:51:28.885907 kernel: pci 0000:00:04.0: reg 0x30: [mem 0xfeb80000-0xfebbffff pref] Jan 30 13:51:28.886094 kernel: pci 0000:00:1f.0: [8086:2918] type 00 class 0x060100 Jan 30 13:51:28.886257 kernel: pci 0000:00:1f.0: quirk: [io 0x0600-0x067f] claimed by ICH6 ACPI/GPIO/TCO Jan 30 13:51:28.886418 kernel: pci 0000:00:1f.2: [8086:2922] type 00 class 0x010601 Jan 30 13:51:28.886569 kernel: pci 0000:00:1f.2: reg 0x20: [io 0xc100-0xc11f] Jan 30 13:51:28.886717 kernel: pci 0000:00:1f.2: reg 0x24: [mem 0xfebd4000-0xfebd4fff] Jan 30 13:51:28.886988 kernel: pci 0000:00:1f.3: [8086:2930] type 00 class 0x0c0500 Jan 30 13:51:28.887179 kernel: pci 0000:00:1f.3: reg 0x20: [io 0x0700-0x073f] Jan 30 13:51:28.887195 kernel: ACPI: PCI: Interrupt link LNKA configured for IRQ 10 Jan 30 13:51:28.887211 kernel: ACPI: PCI: Interrupt link LNKB configured for IRQ 10 Jan 30 13:51:28.887222 kernel: ACPI: PCI: Interrupt link LNKC configured for IRQ 11 Jan 30 13:51:28.887232 kernel: ACPI: PCI: Interrupt link LNKD configured for IRQ 11 Jan 30 13:51:28.887243 kernel: ACPI: PCI: Interrupt link LNKE configured for IRQ 10 Jan 30 13:51:28.887253 kernel: ACPI: PCI: Interrupt link LNKF configured for IRQ 10 Jan 30 13:51:28.887263 kernel: ACPI: PCI: Interrupt link LNKG configured for IRQ 11 Jan 30 13:51:28.887273 kernel: ACPI: PCI: Interrupt link LNKH configured for IRQ 11 Jan 30 13:51:28.887283 kernel: ACPI: PCI: Interrupt link GSIA configured for IRQ 16 Jan 30 13:51:28.887294 kernel: ACPI: PCI: Interrupt link GSIB configured for IRQ 17 Jan 30 13:51:28.887307 kernel: ACPI: PCI: Interrupt link GSIC configured for IRQ 18 Jan 30 13:51:28.887318 kernel: ACPI: PCI: Interrupt link GSID configured for IRQ 19 Jan 30 13:51:28.887328 kernel: ACPI: PCI: Interrupt link GSIE configured for IRQ 20 Jan 30 13:51:28.887339 kernel: ACPI: PCI: Interrupt link GSIF configured for IRQ 21 Jan 30 13:51:28.887349 kernel: ACPI: PCI: Interrupt link GSIG configured for IRQ 22 Jan 30 13:51:28.887360 kernel: ACPI: PCI: Interrupt link GSIH configured for IRQ 23 Jan 30 13:51:28.887370 kernel: iommu: Default domain type: Translated Jan 30 13:51:28.887381 kernel: iommu: DMA domain TLB invalidation policy: lazy mode Jan 30 13:51:28.887391 kernel: PCI: Using ACPI for IRQ routing Jan 30 13:51:28.887405 kernel: PCI: pci_cache_line_size set to 64 bytes Jan 30 13:51:28.887415 kernel: e820: reserve RAM buffer [mem 0x0009fc00-0x0009ffff] Jan 30 13:51:28.887425 kernel: e820: reserve RAM buffer [mem 0x9cfdc000-0x9fffffff] Jan 30 13:51:28.887583 kernel: pci 0000:00:01.0: vgaarb: setting as boot VGA device Jan 30 13:51:28.887723 kernel: pci 0000:00:01.0: vgaarb: bridge control possible Jan 30 13:51:28.887885 kernel: pci 0000:00:01.0: vgaarb: VGA device added: decodes=io+mem,owns=io+mem,locks=none Jan 30 13:51:28.887898 kernel: vgaarb: loaded Jan 30 13:51:28.887907 kernel: hpet0: at MMIO 0xfed00000, IRQs 2, 8, 0 Jan 30 13:51:28.887927 kernel: hpet0: 3 comparators, 64-bit 100.000000 MHz counter Jan 30 13:51:28.887938 kernel: clocksource: Switched to clocksource kvm-clock Jan 30 13:51:28.887948 kernel: VFS: Disk quotas dquot_6.6.0 Jan 30 
13:51:28.887959 kernel: VFS: Dquot-cache hash table entries: 512 (order 0, 4096 bytes) Jan 30 13:51:28.887969 kernel: pnp: PnP ACPI init Jan 30 13:51:28.888223 kernel: system 00:05: [mem 0xb0000000-0xbfffffff window] has been reserved Jan 30 13:51:28.888240 kernel: pnp: PnP ACPI: found 6 devices Jan 30 13:51:28.888250 kernel: clocksource: acpi_pm: mask: 0xffffff max_cycles: 0xffffff, max_idle_ns: 2085701024 ns Jan 30 13:51:28.888263 kernel: NET: Registered PF_INET protocol family Jan 30 13:51:28.888270 kernel: IP idents hash table entries: 65536 (order: 7, 524288 bytes, linear) Jan 30 13:51:28.888278 kernel: tcp_listen_portaddr_hash hash table entries: 2048 (order: 3, 32768 bytes, linear) Jan 30 13:51:28.888286 kernel: Table-perturb hash table entries: 65536 (order: 6, 262144 bytes, linear) Jan 30 13:51:28.888294 kernel: TCP established hash table entries: 32768 (order: 6, 262144 bytes, linear) Jan 30 13:51:28.888302 kernel: TCP bind hash table entries: 32768 (order: 8, 1048576 bytes, linear) Jan 30 13:51:28.888309 kernel: TCP: Hash tables configured (established 32768 bind 32768) Jan 30 13:51:28.888317 kernel: UDP hash table entries: 2048 (order: 4, 65536 bytes, linear) Jan 30 13:51:28.888325 kernel: UDP-Lite hash table entries: 2048 (order: 4, 65536 bytes, linear) Jan 30 13:51:28.888335 kernel: NET: Registered PF_UNIX/PF_LOCAL protocol family Jan 30 13:51:28.888343 kernel: NET: Registered PF_XDP protocol family Jan 30 13:51:28.888457 kernel: pci_bus 0000:00: resource 4 [io 0x0000-0x0cf7 window] Jan 30 13:51:28.888582 kernel: pci_bus 0000:00: resource 5 [io 0x0d00-0xffff window] Jan 30 13:51:28.888707 kernel: pci_bus 0000:00: resource 6 [mem 0x000a0000-0x000bffff window] Jan 30 13:51:28.888817 kernel: pci_bus 0000:00: resource 7 [mem 0x9d000000-0xafffffff window] Jan 30 13:51:28.888924 kernel: pci_bus 0000:00: resource 8 [mem 0xc0000000-0xfebfffff window] Jan 30 13:51:28.889048 kernel: pci_bus 0000:00: resource 9 [mem 0x100000000-0x8ffffffff window] Jan 30 13:51:28.889063 kernel: PCI: CLS 0 bytes, default 64 Jan 30 13:51:28.889071 kernel: Initialise system trusted keyrings Jan 30 13:51:28.889088 kernel: workingset: timestamp_bits=39 max_order=20 bucket_order=0 Jan 30 13:51:28.889096 kernel: Key type asymmetric registered Jan 30 13:51:28.889103 kernel: Asymmetric key parser 'x509' registered Jan 30 13:51:28.889111 kernel: Block layer SCSI generic (bsg) driver version 0.4 loaded (major 251) Jan 30 13:51:28.889119 kernel: io scheduler mq-deadline registered Jan 30 13:51:28.889126 kernel: io scheduler kyber registered Jan 30 13:51:28.889134 kernel: io scheduler bfq registered Jan 30 13:51:28.889141 kernel: ioatdma: Intel(R) QuickData Technology Driver 5.00 Jan 30 13:51:28.889152 kernel: ACPI: \_SB_.GSIG: Enabled at IRQ 22 Jan 30 13:51:28.889160 kernel: ACPI: \_SB_.GSIH: Enabled at IRQ 23 Jan 30 13:51:28.889168 kernel: ACPI: \_SB_.GSIE: Enabled at IRQ 20 Jan 30 13:51:28.889175 kernel: Serial: 8250/16550 driver, 4 ports, IRQ sharing enabled Jan 30 13:51:28.889183 kernel: 00:03: ttyS0 at I/O 0x3f8 (irq = 4, base_baud = 115200) is a 16550A Jan 30 13:51:28.889191 kernel: i8042: PNP: PS/2 Controller [PNP0303:KBD,PNP0f13:MOU] at 0x60,0x64 irq 1,12 Jan 30 13:51:28.889198 kernel: serio: i8042 KBD port at 0x60,0x64 irq 1 Jan 30 13:51:28.889206 kernel: serio: i8042 AUX port at 0x60,0x64 irq 12 Jan 30 13:51:28.889331 kernel: rtc_cmos 00:04: RTC can wake from S4 Jan 30 13:51:28.889346 kernel: input: AT Translated Set 2 keyboard as /devices/platform/i8042/serio0/input/input0 Jan 30 13:51:28.889458 kernel: 
rtc_cmos 00:04: registered as rtc0 Jan 30 13:51:28.889570 kernel: rtc_cmos 00:04: setting system clock to 2025-01-30T13:51:28 UTC (1738245088) Jan 30 13:51:28.889681 kernel: rtc_cmos 00:04: alarms up to one day, y3k, 242 bytes nvram, hpet irqs Jan 30 13:51:28.889691 kernel: amd_pstate: the _CPC object is not present in SBIOS or ACPI disabled Jan 30 13:51:28.889699 kernel: NET: Registered PF_INET6 protocol family Jan 30 13:51:28.889706 kernel: Segment Routing with IPv6 Jan 30 13:51:28.889717 kernel: In-situ OAM (IOAM) with IPv6 Jan 30 13:51:28.889725 kernel: NET: Registered PF_PACKET protocol family Jan 30 13:51:28.889733 kernel: Key type dns_resolver registered Jan 30 13:51:28.889740 kernel: IPI shorthand broadcast: enabled Jan 30 13:51:28.889748 kernel: sched_clock: Marking stable (632001832, 120834249)->(772553985, -19717904) Jan 30 13:51:28.889756 kernel: registered taskstats version 1 Jan 30 13:51:28.889763 kernel: Loading compiled-in X.509 certificates Jan 30 13:51:28.889771 kernel: Loaded X.509 cert 'Kinvolk GmbH: Module signing key for 6.6.74-flatcar: 1efdcbe72fc44d29e4e6411cf9a3e64046be4375' Jan 30 13:51:28.889779 kernel: Key type .fscrypt registered Jan 30 13:51:28.889786 kernel: Key type fscrypt-provisioning registered Jan 30 13:51:28.889797 kernel: ima: No TPM chip found, activating TPM-bypass! Jan 30 13:51:28.889804 kernel: ima: Allocated hash algorithm: sha1 Jan 30 13:51:28.889812 kernel: ima: No architecture policies found Jan 30 13:51:28.889819 kernel: clk: Disabling unused clocks Jan 30 13:51:28.889827 kernel: Freeing unused kernel image (initmem) memory: 42844K Jan 30 13:51:28.889835 kernel: Write protecting the kernel read-only data: 36864k Jan 30 13:51:28.889843 kernel: Freeing unused kernel image (rodata/data gap) memory: 1848K Jan 30 13:51:28.889850 kernel: Run /init as init process Jan 30 13:51:28.889860 kernel: with arguments: Jan 30 13:51:28.889868 kernel: /init Jan 30 13:51:28.889875 kernel: with environment: Jan 30 13:51:28.889883 kernel: HOME=/ Jan 30 13:51:28.889890 kernel: TERM=linux Jan 30 13:51:28.889897 kernel: BOOT_IMAGE=/flatcar/vmlinuz-a Jan 30 13:51:28.889907 systemd[1]: systemd 255 running in system mode (+PAM +AUDIT +SELINUX -APPARMOR +IMA +SMACK +SECCOMP +GCRYPT -GNUTLS +OPENSSL -ACL +BLKID +CURL +ELFUTILS -FIDO2 +IDN2 -IDN +IPTC +KMOD +LIBCRYPTSETUP +LIBFDISK +PCRE2 -PWQUALITY -P11KIT -QRENCODE +TPM2 +BZIP2 +LZ4 +XZ +ZLIB +ZSTD -BPF_FRAMEWORK -XKBCOMMON +UTMP -SYSVINIT default-hierarchy=unified) Jan 30 13:51:28.889917 systemd[1]: Detected virtualization kvm. Jan 30 13:51:28.889928 systemd[1]: Detected architecture x86-64. Jan 30 13:51:28.889936 systemd[1]: Running in initrd. Jan 30 13:51:28.889944 systemd[1]: No hostname configured, using default hostname. Jan 30 13:51:28.889952 systemd[1]: Hostname set to . Jan 30 13:51:28.889961 systemd[1]: Initializing machine ID from VM UUID. Jan 30 13:51:28.889969 systemd[1]: Queued start job for default target initrd.target. Jan 30 13:51:28.889977 systemd[1]: Started clevis-luks-askpass.path - Forward Password Requests to Clevis Directory Watch. Jan 30 13:51:28.889985 systemd[1]: Started systemd-ask-password-console.path - Dispatch Password Requests to Console Directory Watch. Jan 30 13:51:28.889997 systemd[1]: Expecting device dev-disk-by\x2dlabel-EFI\x2dSYSTEM.device - /dev/disk/by-label/EFI-SYSTEM... Jan 30 13:51:28.890018 systemd[1]: Expecting device dev-disk-by\x2dlabel-OEM.device - /dev/disk/by-label/OEM... 
Jan 30 13:51:28.890094 systemd[1]: Expecting device dev-disk-by\x2dlabel-ROOT.device - /dev/disk/by-label/ROOT... Jan 30 13:51:28.890103 systemd[1]: Expecting device dev-disk-by\x2dpartlabel-USR\x2dA.device - /dev/disk/by-partlabel/USR-A... Jan 30 13:51:28.890128 systemd[1]: Expecting device dev-disk-by\x2dpartuuid-7130c94a\x2d213a\x2d4e5a\x2d8e26\x2d6cce9662f132.device - /dev/disk/by-partuuid/7130c94a-213a-4e5a-8e26-6cce9662f132... Jan 30 13:51:28.890140 systemd[1]: Expecting device dev-mapper-usr.device - /dev/mapper/usr... Jan 30 13:51:28.890149 systemd[1]: Reached target cryptsetup-pre.target - Local Encrypted Volumes (Pre). Jan 30 13:51:28.890157 systemd[1]: Reached target cryptsetup.target - Local Encrypted Volumes. Jan 30 13:51:28.890166 systemd[1]: Reached target paths.target - Path Units. Jan 30 13:51:28.890174 systemd[1]: Reached target slices.target - Slice Units. Jan 30 13:51:28.890182 systemd[1]: Reached target swap.target - Swaps. Jan 30 13:51:28.890191 systemd[1]: Reached target timers.target - Timer Units. Jan 30 13:51:28.890199 systemd[1]: Listening on iscsid.socket - Open-iSCSI iscsid Socket. Jan 30 13:51:28.890210 systemd[1]: Listening on iscsiuio.socket - Open-iSCSI iscsiuio Socket. Jan 30 13:51:28.890218 systemd[1]: Listening on systemd-journald-dev-log.socket - Journal Socket (/dev/log). Jan 30 13:51:28.890227 systemd[1]: Listening on systemd-journald.socket - Journal Socket. Jan 30 13:51:28.890235 systemd[1]: Listening on systemd-networkd.socket - Network Service Netlink Socket. Jan 30 13:51:28.890244 systemd[1]: Listening on systemd-udevd-control.socket - udev Control Socket. Jan 30 13:51:28.890252 systemd[1]: Listening on systemd-udevd-kernel.socket - udev Kernel Socket. Jan 30 13:51:28.890261 systemd[1]: Reached target sockets.target - Socket Units. Jan 30 13:51:28.890269 systemd[1]: Starting ignition-setup-pre.service - Ignition env setup... Jan 30 13:51:28.890278 systemd[1]: Starting kmod-static-nodes.service - Create List of Static Device Nodes... Jan 30 13:51:28.890289 systemd[1]: Finished network-cleanup.service - Network Cleanup. Jan 30 13:51:28.890297 systemd[1]: Starting systemd-fsck-usr.service... Jan 30 13:51:28.890305 systemd[1]: Starting systemd-journald.service - Journal Service... Jan 30 13:51:28.890313 systemd[1]: Starting systemd-modules-load.service - Load Kernel Modules... Jan 30 13:51:28.890322 systemd[1]: Starting systemd-vconsole-setup.service - Virtual Console Setup... Jan 30 13:51:28.890330 systemd[1]: Finished ignition-setup-pre.service - Ignition env setup. Jan 30 13:51:28.890339 systemd[1]: Finished kmod-static-nodes.service - Create List of Static Device Nodes. Jan 30 13:51:28.890347 systemd[1]: Finished systemd-fsck-usr.service. Jan 30 13:51:28.890383 systemd-journald[193]: Collecting audit messages is disabled. Jan 30 13:51:28.890406 systemd[1]: Starting systemd-tmpfiles-setup-dev-early.service - Create Static Device Nodes in /dev gracefully... Jan 30 13:51:28.890418 systemd-journald[193]: Journal started Jan 30 13:51:28.890438 systemd-journald[193]: Runtime Journal (/run/log/journal/798393d0b6834bd9b57699900abec581) is 6.0M, max 48.4M, 42.3M free. Jan 30 13:51:28.885651 systemd-modules-load[194]: Inserted module 'overlay' Jan 30 13:51:28.919574 systemd[1]: Started systemd-journald.service - Journal Service. Jan 30 13:51:28.919607 kernel: bridge: filtering via arp/ip/ip6tables is no longer available by default. Update your scripts to load br_netfilter if you need this. 
Jan 30 13:51:28.919620 kernel: Bridge firewalling registered Jan 30 13:51:28.912677 systemd-modules-load[194]: Inserted module 'br_netfilter' Jan 30 13:51:28.922062 systemd[1]: Finished systemd-modules-load.service - Load Kernel Modules. Jan 30 13:51:28.924113 systemd[1]: Finished systemd-vconsole-setup.service - Virtual Console Setup. Jan 30 13:51:28.939536 systemd[1]: Starting dracut-cmdline-ask.service - dracut ask for additional cmdline parameters... Jan 30 13:51:28.942503 systemd[1]: Starting systemd-sysctl.service - Apply Kernel Variables... Jan 30 13:51:28.945107 systemd[1]: Starting systemd-tmpfiles-setup.service - Create System Files and Directories... Jan 30 13:51:28.947167 systemd[1]: Finished systemd-tmpfiles-setup-dev-early.service - Create Static Device Nodes in /dev gracefully. Jan 30 13:51:28.951357 systemd[1]: Starting systemd-tmpfiles-setup-dev.service - Create Static Device Nodes in /dev... Jan 30 13:51:28.956865 systemd[1]: Finished systemd-sysctl.service - Apply Kernel Variables. Jan 30 13:51:28.960509 systemd[1]: Finished dracut-cmdline-ask.service - dracut ask for additional cmdline parameters. Jan 30 13:51:28.961612 systemd[1]: Starting dracut-cmdline.service - dracut cmdline hook... Jan 30 13:51:28.964217 systemd[1]: Finished systemd-tmpfiles-setup-dev.service - Create Static Device Nodes in /dev. Jan 30 13:51:28.971474 systemd[1]: Finished systemd-tmpfiles-setup.service - Create System Files and Directories. Jan 30 13:51:28.985362 systemd[1]: Starting systemd-resolved.service - Network Name Resolution... Jan 30 13:51:28.988367 dracut-cmdline[225]: dracut-dracut-053 Jan 30 13:51:28.990256 dracut-cmdline[225]: Using kernel command line parameters: rd.driver.pre=btrfs rootflags=rw mount.usrflags=ro BOOT_IMAGE=/flatcar/vmlinuz-a mount.usr=/dev/mapper/usr verity.usr=PARTUUID=7130c94a-213a-4e5a-8e26-6cce9662f132 rootflags=rw mount.usrflags=ro consoleblank=0 root=LABEL=ROOT console=ttyS0,115200 flatcar.first_boot=detected verity.usrhash=befc9792b021bef43c896e00e1d5172b6224dbafc9b6c92b267e5e544378e681 Jan 30 13:51:29.021986 systemd-resolved[230]: Positive Trust Anchors: Jan 30 13:51:29.022011 systemd-resolved[230]: . IN DS 20326 8 2 e06d44b80b8f1d39a95c0b0d7c65d08458e880409bbc683457104237c7f8ec8d Jan 30 13:51:29.022066 systemd-resolved[230]: Negative trust anchors: home.arpa 10.in-addr.arpa 16.172.in-addr.arpa 17.172.in-addr.arpa 18.172.in-addr.arpa 19.172.in-addr.arpa 20.172.in-addr.arpa 21.172.in-addr.arpa 22.172.in-addr.arpa 23.172.in-addr.arpa 24.172.in-addr.arpa 25.172.in-addr.arpa 26.172.in-addr.arpa 27.172.in-addr.arpa 28.172.in-addr.arpa 29.172.in-addr.arpa 30.172.in-addr.arpa 31.172.in-addr.arpa 170.0.0.192.in-addr.arpa 171.0.0.192.in-addr.arpa 168.192.in-addr.arpa d.f.ip6.arpa ipv4only.arpa resolver.arpa corp home internal intranet lan local private test Jan 30 13:51:29.024918 systemd-resolved[230]: Defaulting to hostname 'linux'. Jan 30 13:51:29.026087 systemd[1]: Started systemd-resolved.service - Network Name Resolution. Jan 30 13:51:29.032194 systemd[1]: Reached target nss-lookup.target - Host and Network Name Lookups. Jan 30 13:51:29.097080 kernel: SCSI subsystem initialized Jan 30 13:51:29.108063 kernel: Loading iSCSI transport class v2.0-870. Jan 30 13:51:29.118057 kernel: iscsi: registered transport (tcp) Jan 30 13:51:29.148247 kernel: iscsi: registered transport (qla4xxx) Jan 30 13:51:29.148336 kernel: QLogic iSCSI HBA Driver Jan 30 13:51:29.195097 systemd[1]: Finished dracut-cmdline.service - dracut cmdline hook. 
Jan 30 13:51:29.207210 systemd[1]: Starting dracut-pre-udev.service - dracut pre-udev hook... Jan 30 13:51:29.231345 kernel: device-mapper: core: CONFIG_IMA_DISABLE_HTABLE is disabled. Duplicate IMA measurements will not be recorded in the IMA log. Jan 30 13:51:29.231417 kernel: device-mapper: uevent: version 1.0.3 Jan 30 13:51:29.231432 kernel: device-mapper: ioctl: 4.48.0-ioctl (2023-03-01) initialised: dm-devel@redhat.com Jan 30 13:51:29.275060 kernel: raid6: avx2x4 gen() 20367 MB/s Jan 30 13:51:29.292080 kernel: raid6: avx2x2 gen() 21043 MB/s Jan 30 13:51:29.309381 kernel: raid6: avx2x1 gen() 17721 MB/s Jan 30 13:51:29.309453 kernel: raid6: using algorithm avx2x2 gen() 21043 MB/s Jan 30 13:51:29.327368 kernel: raid6: .... xor() 13935 MB/s, rmw enabled Jan 30 13:51:29.327451 kernel: raid6: using avx2x2 recovery algorithm Jan 30 13:51:29.354092 kernel: xor: automatically using best checksumming function avx Jan 30 13:51:29.557071 kernel: Btrfs loaded, zoned=no, fsverity=no Jan 30 13:51:29.572867 systemd[1]: Finished dracut-pre-udev.service - dracut pre-udev hook. Jan 30 13:51:29.588207 systemd[1]: Starting systemd-udevd.service - Rule-based Manager for Device Events and Files... Jan 30 13:51:29.604590 systemd-udevd[413]: Using default interface naming scheme 'v255'. Jan 30 13:51:29.610256 systemd[1]: Started systemd-udevd.service - Rule-based Manager for Device Events and Files. Jan 30 13:51:29.617273 systemd[1]: Starting dracut-pre-trigger.service - dracut pre-trigger hook... Jan 30 13:51:29.633261 dracut-pre-trigger[417]: rd.md=0: removing MD RAID activation Jan 30 13:51:29.672875 systemd[1]: Finished dracut-pre-trigger.service - dracut pre-trigger hook. Jan 30 13:51:29.686328 systemd[1]: Starting systemd-udev-trigger.service - Coldplug All udev Devices... Jan 30 13:51:29.759480 systemd[1]: Finished systemd-udev-trigger.service - Coldplug All udev Devices. Jan 30 13:51:29.768266 systemd[1]: Starting dracut-initqueue.service - dracut initqueue hook... Jan 30 13:51:29.783113 systemd[1]: Finished dracut-initqueue.service - dracut initqueue hook. Jan 30 13:51:29.787084 systemd[1]: Reached target remote-fs-pre.target - Preparation for Remote File Systems. Jan 30 13:51:29.789943 systemd[1]: Reached target remote-cryptsetup.target - Remote Encrypted Volumes. Jan 30 13:51:29.792747 systemd[1]: Reached target remote-fs.target - Remote File Systems. Jan 30 13:51:29.796101 kernel: cryptd: max_cpu_qlen set to 1000 Jan 30 13:51:29.798179 kernel: virtio_blk virtio1: 4/0/0 default/read/poll queues Jan 30 13:51:29.828979 kernel: virtio_blk virtio1: [vda] 19775488 512-byte logical blocks (10.1 GB/9.43 GiB) Jan 30 13:51:29.829215 kernel: AVX2 version of gcm_enc/dec engaged. Jan 30 13:51:29.829232 kernel: AES CTR mode by8 optimization enabled Jan 30 13:51:29.829247 kernel: libata version 3.00 loaded. Jan 30 13:51:29.829272 kernel: GPT:Primary header thinks Alt. header is not at the end of the disk. Jan 30 13:51:29.829287 kernel: GPT:9289727 != 19775487 Jan 30 13:51:29.829301 kernel: GPT:Alternate GPT header not at the end of the disk. Jan 30 13:51:29.829315 kernel: GPT:9289727 != 19775487 Jan 30 13:51:29.829329 kernel: GPT: Use GNU Parted to correct GPT errors. Jan 30 13:51:29.829342 kernel: vda: vda1 vda2 vda3 vda4 vda6 vda7 vda9 Jan 30 13:51:29.804613 systemd[1]: Starting dracut-pre-mount.service - dracut pre-mount hook... Jan 30 13:51:29.816528 systemd[1]: Finished dracut-pre-mount.service - dracut pre-mount hook. 
Jan 30 13:51:29.835280 kernel: ahci 0000:00:1f.2: version 3.0 Jan 30 13:51:29.868124 kernel: ACPI: \_SB_.GSIA: Enabled at IRQ 16 Jan 30 13:51:29.868143 kernel: ahci 0000:00:1f.2: AHCI 0001.0000 32 slots 6 ports 1.5 Gbps 0x3f impl SATA mode Jan 30 13:51:29.868298 kernel: ahci 0000:00:1f.2: flags: 64bit ncq only Jan 30 13:51:29.868446 kernel: scsi host0: ahci Jan 30 13:51:29.868623 kernel: scsi host1: ahci Jan 30 13:51:29.868773 kernel: scsi host2: ahci Jan 30 13:51:29.868922 kernel: scsi host3: ahci Jan 30 13:51:29.869097 kernel: scsi host4: ahci Jan 30 13:51:29.869247 kernel: BTRFS: device label OEM devid 1 transid 12 /dev/vda6 scanned by (udev-worker) (463) Jan 30 13:51:29.869263 kernel: BTRFS: device fsid 64bb5b5a-85cc-41cc-a02b-2cfaa3e93b0a devid 1 transid 38 /dev/vda3 scanned by (udev-worker) (461) Jan 30 13:51:29.869274 kernel: scsi host5: ahci Jan 30 13:51:29.869424 kernel: ata1: SATA max UDMA/133 abar m4096@0xfebd4000 port 0xfebd4100 irq 34 Jan 30 13:51:29.869435 kernel: ata2: SATA max UDMA/133 abar m4096@0xfebd4000 port 0xfebd4180 irq 34 Jan 30 13:51:29.869445 kernel: ata3: SATA max UDMA/133 abar m4096@0xfebd4000 port 0xfebd4200 irq 34 Jan 30 13:51:29.869456 kernel: ata4: SATA max UDMA/133 abar m4096@0xfebd4000 port 0xfebd4280 irq 34 Jan 30 13:51:29.869466 kernel: ata5: SATA max UDMA/133 abar m4096@0xfebd4000 port 0xfebd4300 irq 34 Jan 30 13:51:29.869476 kernel: ata6: SATA max UDMA/133 abar m4096@0xfebd4000 port 0xfebd4380 irq 34 Jan 30 13:51:29.851857 systemd[1]: dracut-cmdline-ask.service: Deactivated successfully. Jan 30 13:51:29.852010 systemd[1]: Stopped dracut-cmdline-ask.service - dracut ask for additional cmdline parameters. Jan 30 13:51:29.854138 systemd[1]: Stopping dracut-cmdline-ask.service - dracut ask for additional cmdline parameters... Jan 30 13:51:29.855816 systemd[1]: systemd-vconsole-setup.service: Deactivated successfully. Jan 30 13:51:29.855951 systemd[1]: Stopped systemd-vconsole-setup.service - Virtual Console Setup. Jan 30 13:51:29.858711 systemd[1]: Stopping systemd-vconsole-setup.service - Virtual Console Setup... Jan 30 13:51:29.873799 systemd[1]: Starting systemd-vconsole-setup.service - Virtual Console Setup... Jan 30 13:51:29.891324 systemd[1]: Found device dev-disk-by\x2dlabel-EFI\x2dSYSTEM.device - /dev/disk/by-label/EFI-SYSTEM. Jan 30 13:51:29.930275 systemd[1]: Found device dev-disk-by\x2dlabel-ROOT.device - /dev/disk/by-label/ROOT. Jan 30 13:51:29.932183 systemd[1]: Finished systemd-vconsole-setup.service - Virtual Console Setup. Jan 30 13:51:29.947642 systemd[1]: Found device dev-disk-by\x2dpartlabel-USR\x2dA.device - /dev/disk/by-partlabel/USR-A. Jan 30 13:51:29.947745 systemd[1]: Found device dev-disk-by\x2dpartuuid-7130c94a\x2d213a\x2d4e5a\x2d8e26\x2d6cce9662f132.device - /dev/disk/by-partuuid/7130c94a-213a-4e5a-8e26-6cce9662f132. Jan 30 13:51:29.957077 systemd[1]: Found device dev-disk-by\x2dlabel-OEM.device - /dev/disk/by-label/OEM. Jan 30 13:51:29.971340 systemd[1]: Starting disk-uuid.service - Generate new UUID for disk GPT if necessary... Jan 30 13:51:29.972460 systemd[1]: Starting dracut-cmdline-ask.service - dracut ask for additional cmdline parameters... Jan 30 13:51:29.987762 disk-uuid[557]: Primary Header is updated. Jan 30 13:51:29.987762 disk-uuid[557]: Secondary Entries is updated. Jan 30 13:51:29.987762 disk-uuid[557]: Secondary Header is updated. 
Jan 30 13:51:29.992061 kernel: vda: vda1 vda2 vda3 vda4 vda6 vda7 vda9 Jan 30 13:51:29.996100 kernel: vda: vda1 vda2 vda3 vda4 vda6 vda7 vda9 Jan 30 13:51:29.996749 systemd[1]: Finished dracut-cmdline-ask.service - dracut ask for additional cmdline parameters. Jan 30 13:51:30.175099 kernel: ata6: SATA link down (SStatus 0 SControl 300) Jan 30 13:51:30.175196 kernel: ata5: SATA link down (SStatus 0 SControl 300) Jan 30 13:51:30.176061 kernel: ata4: SATA link down (SStatus 0 SControl 300) Jan 30 13:51:30.177067 kernel: ata3: SATA link up 1.5 Gbps (SStatus 113 SControl 300) Jan 30 13:51:30.178071 kernel: ata2: SATA link down (SStatus 0 SControl 300) Jan 30 13:51:30.179081 kernel: ata3.00: ATAPI: QEMU DVD-ROM, 2.5+, max UDMA/100 Jan 30 13:51:30.180320 kernel: ata3.00: applying bridge limits Jan 30 13:51:30.180343 kernel: ata3.00: configured for UDMA/100 Jan 30 13:51:30.185063 kernel: ata1: SATA link down (SStatus 0 SControl 300) Jan 30 13:51:30.185086 kernel: scsi 2:0:0:0: CD-ROM QEMU QEMU DVD-ROM 2.5+ PQ: 0 ANSI: 5 Jan 30 13:51:30.230085 kernel: sr 2:0:0:0: [sr0] scsi3-mmc drive: 4x/4x cd/rw xa/form2 tray Jan 30 13:51:30.244340 kernel: cdrom: Uniform CD-ROM driver Revision: 3.20 Jan 30 13:51:30.244367 kernel: sr 2:0:0:0: Attached scsi CD-ROM sr0 Jan 30 13:51:30.998007 disk-uuid[561]: The operation has completed successfully. Jan 30 13:51:30.999394 kernel: vda: vda1 vda2 vda3 vda4 vda6 vda7 vda9 Jan 30 13:51:31.028200 systemd[1]: disk-uuid.service: Deactivated successfully. Jan 30 13:51:31.028325 systemd[1]: Finished disk-uuid.service - Generate new UUID for disk GPT if necessary. Jan 30 13:51:31.060448 systemd[1]: Starting verity-setup.service - Verity Setup for /dev/mapper/usr... Jan 30 13:51:31.064565 sh[590]: Success Jan 30 13:51:31.080136 kernel: device-mapper: verity: sha256 using implementation "sha256-ni" Jan 30 13:51:31.126868 systemd[1]: Found device dev-mapper-usr.device - /dev/mapper/usr. Jan 30 13:51:31.144267 systemd[1]: Mounting sysusr-usr.mount - /sysusr/usr... Jan 30 13:51:31.148225 systemd[1]: Finished verity-setup.service - Verity Setup for /dev/mapper/usr. Jan 30 13:51:31.163268 kernel: BTRFS info (device dm-0): first mount of filesystem 64bb5b5a-85cc-41cc-a02b-2cfaa3e93b0a Jan 30 13:51:31.163334 kernel: BTRFS info (device dm-0): using crc32c (crc32c-intel) checksum algorithm Jan 30 13:51:31.163349 kernel: BTRFS warning (device dm-0): 'nologreplay' is deprecated, use 'rescue=nologreplay' instead Jan 30 13:51:31.164281 kernel: BTRFS info (device dm-0): disabling log replay at mount time Jan 30 13:51:31.166092 kernel: BTRFS info (device dm-0): using free space tree Jan 30 13:51:31.171168 systemd[1]: Mounted sysusr-usr.mount - /sysusr/usr. Jan 30 13:51:31.172069 systemd[1]: afterburn-network-kargs.service - Afterburn Initrd Setup Network Kernel Arguments was skipped because no trigger condition checks were met. Jan 30 13:51:31.186363 systemd[1]: Starting ignition-setup.service - Ignition (setup)... Jan 30 13:51:31.189265 systemd[1]: Starting parse-ip-for-networkd.service - Write systemd-networkd units from cmdline... Jan 30 13:51:31.199784 kernel: BTRFS info (device vda6): first mount of filesystem aa75aabd-8755-4402-b4b6-23093345fe03 Jan 30 13:51:31.199858 kernel: BTRFS info (device vda6): using crc32c (crc32c-intel) checksum algorithm Jan 30 13:51:31.199874 kernel: BTRFS info (device vda6): using free space tree Jan 30 13:51:31.203046 kernel: BTRFS info (device vda6): auto enabling async discard Jan 30 13:51:31.213213 systemd[1]: mnt-oem.mount: Deactivated successfully. 
Jan 30 13:51:31.215724 kernel: BTRFS info (device vda6): last unmount of filesystem aa75aabd-8755-4402-b4b6-23093345fe03 Jan 30 13:51:31.226712 systemd[1]: Finished ignition-setup.service - Ignition (setup). Jan 30 13:51:31.232241 systemd[1]: Starting ignition-fetch-offline.service - Ignition (fetch-offline)... Jan 30 13:51:31.285878 ignition[678]: Ignition 2.19.0 Jan 30 13:51:31.285890 ignition[678]: Stage: fetch-offline Jan 30 13:51:31.285928 ignition[678]: no configs at "/usr/lib/ignition/base.d" Jan 30 13:51:31.285938 ignition[678]: no config dir at "/usr/lib/ignition/base.platform.d/qemu" Jan 30 13:51:31.286066 ignition[678]: parsed url from cmdline: "" Jan 30 13:51:31.286070 ignition[678]: no config URL provided Jan 30 13:51:31.286076 ignition[678]: reading system config file "/usr/lib/ignition/user.ign" Jan 30 13:51:31.286086 ignition[678]: no config at "/usr/lib/ignition/user.ign" Jan 30 13:51:31.286114 ignition[678]: op(1): [started] loading QEMU firmware config module Jan 30 13:51:31.286120 ignition[678]: op(1): executing: "modprobe" "qemu_fw_cfg" Jan 30 13:51:31.294517 ignition[678]: op(1): [finished] loading QEMU firmware config module Jan 30 13:51:31.296506 ignition[678]: parsing config with SHA512: 385fc2d43e9e4ba508917ec7991ba9e4fa489fc95ffc557c952b9e570b2020f1d514432b9cccafada659ece0e9933da47cae0bfac8e40791c5e277c6f6b04e74 Jan 30 13:51:31.299214 unknown[678]: fetched base config from "system" Jan 30 13:51:31.299228 unknown[678]: fetched user config from "qemu" Jan 30 13:51:31.299534 ignition[678]: fetch-offline: fetch-offline passed Jan 30 13:51:31.299608 ignition[678]: Ignition finished successfully Jan 30 13:51:31.305144 systemd[1]: Finished ignition-fetch-offline.service - Ignition (fetch-offline). Jan 30 13:51:31.325743 systemd[1]: Finished parse-ip-for-networkd.service - Write systemd-networkd units from cmdline. Jan 30 13:51:31.338317 systemd[1]: Starting systemd-networkd.service - Network Configuration... Jan 30 13:51:31.360634 systemd-networkd[779]: lo: Link UP Jan 30 13:51:31.360645 systemd-networkd[779]: lo: Gained carrier Jan 30 13:51:31.362336 systemd-networkd[779]: Enumeration completed Jan 30 13:51:31.362732 systemd-networkd[779]: eth0: found matching network '/usr/lib/systemd/network/zz-default.network', based on potentially unpredictable interface name. Jan 30 13:51:31.362736 systemd-networkd[779]: eth0: Configuring with /usr/lib/systemd/network/zz-default.network. Jan 30 13:51:31.363580 systemd[1]: Started systemd-networkd.service - Network Configuration. Jan 30 13:51:31.366397 systemd-networkd[779]: eth0: Link UP Jan 30 13:51:31.366402 systemd-networkd[779]: eth0: Gained carrier Jan 30 13:51:31.366415 systemd-networkd[779]: eth0: found matching network '/usr/lib/systemd/network/zz-default.network', based on potentially unpredictable interface name. Jan 30 13:51:31.369845 systemd[1]: Reached target network.target - Network. Jan 30 13:51:31.372470 systemd[1]: ignition-fetch.service - Ignition (fetch) was skipped because of an unmet condition check (ConditionPathExists=!/run/ignition.json). Jan 30 13:51:31.382083 systemd-networkd[779]: eth0: DHCPv4 address 10.0.0.127/16, gateway 10.0.0.1 acquired from 10.0.0.1 Jan 30 13:51:31.385304 systemd[1]: Starting ignition-kargs.service - Ignition (kargs)... 
Jan 30 13:51:31.399374 ignition[781]: Ignition 2.19.0 Jan 30 13:51:31.399391 ignition[781]: Stage: kargs Jan 30 13:51:31.399625 ignition[781]: no configs at "/usr/lib/ignition/base.d" Jan 30 13:51:31.399641 ignition[781]: no config dir at "/usr/lib/ignition/base.platform.d/qemu" Jan 30 13:51:31.403608 ignition[781]: kargs: kargs passed Jan 30 13:51:31.403671 ignition[781]: Ignition finished successfully Jan 30 13:51:31.408671 systemd[1]: Finished ignition-kargs.service - Ignition (kargs). Jan 30 13:51:31.421353 systemd[1]: Starting ignition-disks.service - Ignition (disks)... Jan 30 13:51:31.435785 ignition[791]: Ignition 2.19.0 Jan 30 13:51:31.435800 ignition[791]: Stage: disks Jan 30 13:51:31.436043 ignition[791]: no configs at "/usr/lib/ignition/base.d" Jan 30 13:51:31.436061 ignition[791]: no config dir at "/usr/lib/ignition/base.platform.d/qemu" Jan 30 13:51:31.436920 ignition[791]: disks: disks passed Jan 30 13:51:31.436985 ignition[791]: Ignition finished successfully Jan 30 13:51:31.442735 systemd[1]: Finished ignition-disks.service - Ignition (disks). Jan 30 13:51:31.443149 systemd[1]: Reached target initrd-root-device.target - Initrd Root Device. Jan 30 13:51:31.445913 systemd[1]: Reached target local-fs-pre.target - Preparation for Local File Systems. Jan 30 13:51:31.446374 systemd[1]: Reached target local-fs.target - Local File Systems. Jan 30 13:51:31.446761 systemd[1]: Reached target sysinit.target - System Initialization. Jan 30 13:51:31.447349 systemd[1]: Reached target basic.target - Basic System. Jan 30 13:51:31.462210 systemd[1]: Starting systemd-fsck-root.service - File System Check on /dev/disk/by-label/ROOT... Jan 30 13:51:31.478455 systemd-fsck[801]: ROOT: clean, 14/553520 files, 52654/553472 blocks Jan 30 13:51:31.486504 systemd[1]: Finished systemd-fsck-root.service - File System Check on /dev/disk/by-label/ROOT. Jan 30 13:51:31.494212 systemd[1]: Mounting sysroot.mount - /sysroot... Jan 30 13:51:31.587055 kernel: EXT4-fs (vda9): mounted filesystem 9f41abed-fd12-4e57-bcd4-5c0ef7f8a1bf r/w with ordered data mode. Quota mode: none. Jan 30 13:51:31.587465 systemd[1]: Mounted sysroot.mount - /sysroot. Jan 30 13:51:31.588330 systemd[1]: Reached target initrd-root-fs.target - Initrd Root File System. Jan 30 13:51:31.607241 systemd[1]: Mounting sysroot-oem.mount - /sysroot/oem... Jan 30 13:51:31.609579 systemd[1]: Mounting sysroot-usr.mount - /sysroot/usr... Jan 30 13:51:31.609895 systemd[1]: flatcar-metadata-hostname.service - Flatcar Metadata Hostname Agent was skipped because no trigger condition checks were met. Jan 30 13:51:31.609936 systemd[1]: ignition-remount-sysroot.service - Remount /sysroot read-write for Ignition was skipped because of an unmet condition check (ConditionPathIsReadWrite=!/sysroot). Jan 30 13:51:31.619666 kernel: BTRFS: device label OEM devid 1 transid 13 /dev/vda6 scanned by mount (809) Jan 30 13:51:31.619697 kernel: BTRFS info (device vda6): first mount of filesystem aa75aabd-8755-4402-b4b6-23093345fe03 Jan 30 13:51:31.609958 systemd[1]: Reached target ignition-diskful.target - Ignition Boot Disk Setup. Jan 30 13:51:31.623809 kernel: BTRFS info (device vda6): using crc32c (crc32c-intel) checksum algorithm Jan 30 13:51:31.623835 kernel: BTRFS info (device vda6): using free space tree Jan 30 13:51:31.625119 kernel: BTRFS info (device vda6): auto enabling async discard Jan 30 13:51:31.627457 systemd[1]: Mounted sysroot-oem.mount - /sysroot/oem. Jan 30 13:51:31.632261 systemd[1]: Mounted sysroot-usr.mount - /sysroot/usr. 
Jan 30 13:51:31.648315 systemd[1]: Starting initrd-setup-root.service - Root filesystem setup... Jan 30 13:51:31.683797 initrd-setup-root[833]: cut: /sysroot/etc/passwd: No such file or directory Jan 30 13:51:31.689750 initrd-setup-root[840]: cut: /sysroot/etc/group: No such file or directory Jan 30 13:51:31.695839 initrd-setup-root[847]: cut: /sysroot/etc/shadow: No such file or directory Jan 30 13:51:31.701043 initrd-setup-root[854]: cut: /sysroot/etc/gshadow: No such file or directory Jan 30 13:51:31.793814 systemd[1]: Finished initrd-setup-root.service - Root filesystem setup. Jan 30 13:51:31.805336 systemd[1]: Starting ignition-mount.service - Ignition (mount)... Jan 30 13:51:31.808306 systemd[1]: Starting sysroot-boot.service - /sysroot/boot... Jan 30 13:51:31.818063 kernel: BTRFS info (device vda6): last unmount of filesystem aa75aabd-8755-4402-b4b6-23093345fe03 Jan 30 13:51:31.835133 systemd[1]: Finished sysroot-boot.service - /sysroot/boot. Jan 30 13:51:31.842127 ignition[923]: INFO : Ignition 2.19.0 Jan 30 13:51:31.842127 ignition[923]: INFO : Stage: mount Jan 30 13:51:31.843974 ignition[923]: INFO : no configs at "/usr/lib/ignition/base.d" Jan 30 13:51:31.843974 ignition[923]: INFO : no config dir at "/usr/lib/ignition/base.platform.d/qemu" Jan 30 13:51:31.843974 ignition[923]: INFO : mount: mount passed Jan 30 13:51:31.843974 ignition[923]: INFO : Ignition finished successfully Jan 30 13:51:31.845783 systemd[1]: Finished ignition-mount.service - Ignition (mount). Jan 30 13:51:31.852187 systemd[1]: Starting ignition-files.service - Ignition (files)... Jan 30 13:51:32.161891 systemd[1]: sysroot-oem.mount: Deactivated successfully. Jan 30 13:51:32.176259 systemd[1]: Mounting sysroot-oem.mount - /sysroot/oem... Jan 30 13:51:32.184057 kernel: BTRFS: device label OEM devid 1 transid 14 /dev/vda6 scanned by mount (935) Jan 30 13:51:32.186267 kernel: BTRFS info (device vda6): first mount of filesystem aa75aabd-8755-4402-b4b6-23093345fe03 Jan 30 13:51:32.186304 kernel: BTRFS info (device vda6): using crc32c (crc32c-intel) checksum algorithm Jan 30 13:51:32.186320 kernel: BTRFS info (device vda6): using free space tree Jan 30 13:51:32.189047 kernel: BTRFS info (device vda6): auto enabling async discard Jan 30 13:51:32.190764 systemd[1]: Mounted sysroot-oem.mount - /sysroot/oem. 
Jan 30 13:51:32.225786 ignition[953]: INFO : Ignition 2.19.0 Jan 30 13:51:32.225786 ignition[953]: INFO : Stage: files Jan 30 13:51:32.227891 ignition[953]: INFO : no configs at "/usr/lib/ignition/base.d" Jan 30 13:51:32.227891 ignition[953]: INFO : no config dir at "/usr/lib/ignition/base.platform.d/qemu" Jan 30 13:51:32.227891 ignition[953]: DEBUG : files: compiled without relabeling support, skipping Jan 30 13:51:32.227891 ignition[953]: INFO : files: ensureUsers: op(1): [started] creating or modifying user "core" Jan 30 13:51:32.227891 ignition[953]: DEBUG : files: ensureUsers: op(1): executing: "usermod" "--root" "/sysroot" "core" Jan 30 13:51:32.234783 ignition[953]: INFO : files: ensureUsers: op(1): [finished] creating or modifying user "core" Jan 30 13:51:32.234783 ignition[953]: INFO : files: ensureUsers: op(2): [started] adding ssh keys to user "core" Jan 30 13:51:32.234783 ignition[953]: INFO : files: ensureUsers: op(2): [finished] adding ssh keys to user "core" Jan 30 13:51:32.234783 ignition[953]: INFO : files: createFilesystemsFiles: createFiles: op(3): [started] writing file "/sysroot/home/core/install.sh" Jan 30 13:51:32.234783 ignition[953]: INFO : files: createFilesystemsFiles: createFiles: op(3): [finished] writing file "/sysroot/home/core/install.sh" Jan 30 13:51:32.234783 ignition[953]: INFO : files: createFilesystemsFiles: createFiles: op(4): [started] writing file "/sysroot/etc/flatcar/update.conf" Jan 30 13:51:32.234783 ignition[953]: INFO : files: createFilesystemsFiles: createFiles: op(4): [finished] writing file "/sysroot/etc/flatcar/update.conf" Jan 30 13:51:32.234783 ignition[953]: INFO : files: createFilesystemsFiles: createFiles: op(5): [started] writing link "/sysroot/etc/extensions/kubernetes.raw" -> "/opt/extensions/kubernetes/kubernetes-v1.32.0-x86-64.raw" Jan 30 13:51:32.234783 ignition[953]: INFO : files: createFilesystemsFiles: createFiles: op(5): [finished] writing link "/sysroot/etc/extensions/kubernetes.raw" -> "/opt/extensions/kubernetes/kubernetes-v1.32.0-x86-64.raw" Jan 30 13:51:32.234783 ignition[953]: INFO : files: createFilesystemsFiles: createFiles: op(6): [started] writing file "/sysroot/opt/extensions/kubernetes/kubernetes-v1.32.0-x86-64.raw" Jan 30 13:51:32.234783 ignition[953]: INFO : files: createFilesystemsFiles: createFiles: op(6): GET https://github.com/flatcar/sysext-bakery/releases/download/latest/kubernetes-v1.32.0-x86-64.raw: attempt #1 Jan 30 13:51:32.229950 unknown[953]: wrote ssh authorized keys file for user: core Jan 30 13:51:32.399015 ignition[953]: INFO : files: createFilesystemsFiles: createFiles: op(6): GET result: OK Jan 30 13:51:32.806270 ignition[953]: INFO : files: createFilesystemsFiles: createFiles: op(6): [finished] writing file "/sysroot/opt/extensions/kubernetes/kubernetes-v1.32.0-x86-64.raw" Jan 30 13:51:32.806270 ignition[953]: INFO : files: op(7): [started] processing unit "coreos-metadata.service" Jan 30 13:51:32.810640 ignition[953]: INFO : files: op(7): op(8): [started] writing unit "coreos-metadata.service" at "/sysroot/etc/systemd/system/coreos-metadata.service" Jan 30 13:51:32.813217 ignition[953]: INFO : files: op(7): op(8): [finished] writing unit "coreos-metadata.service" at "/sysroot/etc/systemd/system/coreos-metadata.service" Jan 30 13:51:32.813217 ignition[953]: INFO : files: op(7): [finished] processing unit "coreos-metadata.service" Jan 30 13:51:32.813217 ignition[953]: INFO : files: op(9): [started] setting preset to disabled for "coreos-metadata.service" Jan 30 13:51:32.840642 ignition[953]: 
INFO : files: op(9): op(a): [started] removing enablement symlink(s) for "coreos-metadata.service" Jan 30 13:51:32.846048 ignition[953]: INFO : files: op(9): op(a): [finished] removing enablement symlink(s) for "coreos-metadata.service" Jan 30 13:51:32.847778 ignition[953]: INFO : files: op(9): [finished] setting preset to disabled for "coreos-metadata.service" Jan 30 13:51:32.847778 ignition[953]: INFO : files: createResultFile: createFiles: op(b): [started] writing file "/sysroot/etc/.ignition-result.json" Jan 30 13:51:32.847778 ignition[953]: INFO : files: createResultFile: createFiles: op(b): [finished] writing file "/sysroot/etc/.ignition-result.json" Jan 30 13:51:32.847778 ignition[953]: INFO : files: files passed Jan 30 13:51:32.847778 ignition[953]: INFO : Ignition finished successfully Jan 30 13:51:32.848940 systemd[1]: Finished ignition-files.service - Ignition (files). Jan 30 13:51:32.859167 systemd[1]: Starting ignition-quench.service - Ignition (record completion)... Jan 30 13:51:32.861254 systemd[1]: Starting initrd-setup-root-after-ignition.service - Root filesystem completion... Jan 30 13:51:32.864577 systemd[1]: ignition-quench.service: Deactivated successfully. Jan 30 13:51:32.864687 systemd[1]: Finished ignition-quench.service - Ignition (record completion). Jan 30 13:51:32.872841 initrd-setup-root-after-ignition[981]: grep: /sysroot/oem/oem-release: No such file or directory Jan 30 13:51:32.875678 initrd-setup-root-after-ignition[983]: grep: /sysroot/etc/flatcar/enabled-sysext.conf: No such file or directory Jan 30 13:51:32.875678 initrd-setup-root-after-ignition[983]: grep: /sysroot/usr/share/flatcar/enabled-sysext.conf: No such file or directory Jan 30 13:51:32.879182 initrd-setup-root-after-ignition[987]: grep: /sysroot/etc/flatcar/enabled-sysext.conf: No such file or directory Jan 30 13:51:32.882556 systemd[1]: Finished initrd-setup-root-after-ignition.service - Root filesystem completion. Jan 30 13:51:32.884163 systemd[1]: Reached target ignition-complete.target - Ignition Complete. Jan 30 13:51:32.895222 systemd[1]: Starting initrd-parse-etc.service - Mountpoints Configured in the Real Root... Jan 30 13:51:32.919973 systemd[1]: initrd-parse-etc.service: Deactivated successfully. Jan 30 13:51:32.920142 systemd[1]: Finished initrd-parse-etc.service - Mountpoints Configured in the Real Root. Jan 30 13:51:32.923245 systemd[1]: Reached target initrd-fs.target - Initrd File Systems. Jan 30 13:51:32.926097 systemd[1]: Reached target initrd.target - Initrd Default Target. Jan 30 13:51:32.927518 systemd[1]: dracut-mount.service - dracut mount hook was skipped because no trigger condition checks were met. Jan 30 13:51:32.938163 systemd[1]: Starting dracut-pre-pivot.service - dracut pre-pivot and cleanup hook... Jan 30 13:51:32.953512 systemd[1]: Finished dracut-pre-pivot.service - dracut pre-pivot and cleanup hook. Jan 30 13:51:32.966315 systemd[1]: Starting initrd-cleanup.service - Cleaning Up and Shutting Down Daemons... Jan 30 13:51:32.977897 systemd[1]: Stopped target nss-lookup.target - Host and Network Name Lookups. Jan 30 13:51:32.979343 systemd[1]: Stopped target remote-cryptsetup.target - Remote Encrypted Volumes. Jan 30 13:51:32.981722 systemd[1]: Stopped target timers.target - Timer Units. Jan 30 13:51:32.983877 systemd[1]: dracut-pre-pivot.service: Deactivated successfully. Jan 30 13:51:32.984105 systemd[1]: Stopped dracut-pre-pivot.service - dracut pre-pivot and cleanup hook. 
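Editor's note: the files stage above creates the "core" user, installs its SSH keys, writes /home/core/install.sh and /etc/flatcar/update.conf, links and downloads the kubernetes sysext image, and disables the preset for coreos-metadata.service. The configuration that drove these operations is not shown in the log; the following is only a hypothetical reconstruction in the Ignition spec-3 schema, with placeholder file contents and SSH key, that would produce operations of this shape. Only the paths and the sysext-bakery URL come from the log itself.

```python
import json

# Hypothetical Ignition (spec 3.x) config approximating the logged files stage.
# Contents, mode values and the SSH key are placeholders, not recovered data.
config = {
    "ignition": {"version": "3.4.0"},
    "passwd": {
        "users": [
            {"name": "core",
             "sshAuthorizedKeys": ["ssh-ed25519 AAAA... placeholder"]}
        ]
    },
    "storage": {
        "files": [
            {"path": "/home/core/install.sh", "mode": 0o755,
             "contents": {"source": "data:,%23!%2Fbin%2Fbash%0A"}},
            {"path": "/etc/flatcar/update.conf",
             "contents": {"source": "data:,GROUP%3Dstable%0A"}},
            {"path": "/opt/extensions/kubernetes/kubernetes-v1.32.0-x86-64.raw",
             "contents": {"source": "https://github.com/flatcar/sysext-bakery/releases/download/latest/kubernetes-v1.32.0-x86-64.raw"}},
        ],
        "links": [
            {"path": "/etc/extensions/kubernetes.raw", "hard": False,
             "target": "/opt/extensions/kubernetes/kubernetes-v1.32.0-x86-64.raw"},
        ],
    },
    "systemd": {
        "units": [
            # A unit written to /etc/systemd/system with enabled=False matches the
            # "writing unit" plus "setting preset to disabled" operations above.
            {"name": "coreos-metadata.service", "enabled": False,
             "contents": "[Service]\nExecStart=/usr/bin/true\n"}
        ]
    },
}

print(json.dumps(config, indent=2))
```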
Jan 30 13:51:32.986702 systemd[1]: Stopped target initrd.target - Initrd Default Target. Jan 30 13:51:32.988322 systemd[1]: Stopped target basic.target - Basic System. Jan 30 13:51:32.990364 systemd[1]: Stopped target ignition-complete.target - Ignition Complete. Jan 30 13:51:32.992459 systemd[1]: Stopped target ignition-diskful.target - Ignition Boot Disk Setup. Jan 30 13:51:32.994490 systemd[1]: Stopped target initrd-root-device.target - Initrd Root Device. Jan 30 13:51:32.996750 systemd[1]: Stopped target remote-fs.target - Remote File Systems. Jan 30 13:51:32.998908 systemd[1]: Stopped target remote-fs-pre.target - Preparation for Remote File Systems. Jan 30 13:51:33.001518 systemd[1]: Stopped target sysinit.target - System Initialization. Jan 30 13:51:33.003776 systemd[1]: Stopped target local-fs.target - Local File Systems. Jan 30 13:51:33.006223 systemd[1]: Stopped target swap.target - Swaps. Jan 30 13:51:33.008125 systemd[1]: dracut-pre-mount.service: Deactivated successfully. Jan 30 13:51:33.008305 systemd[1]: Stopped dracut-pre-mount.service - dracut pre-mount hook. Jan 30 13:51:33.010948 systemd[1]: Stopped target cryptsetup.target - Local Encrypted Volumes. Jan 30 13:51:33.012535 systemd[1]: Stopped target cryptsetup-pre.target - Local Encrypted Volumes (Pre). Jan 30 13:51:33.014721 systemd[1]: clevis-luks-askpass.path: Deactivated successfully. Jan 30 13:51:33.014907 systemd[1]: Stopped clevis-luks-askpass.path - Forward Password Requests to Clevis Directory Watch. Jan 30 13:51:33.016980 systemd[1]: dracut-initqueue.service: Deactivated successfully. Jan 30 13:51:33.017168 systemd[1]: Stopped dracut-initqueue.service - dracut initqueue hook. Jan 30 13:51:33.019648 systemd[1]: ignition-fetch-offline.service: Deactivated successfully. Jan 30 13:51:33.019803 systemd[1]: Stopped ignition-fetch-offline.service - Ignition (fetch-offline). Jan 30 13:51:33.021904 systemd[1]: Stopped target paths.target - Path Units. Jan 30 13:51:33.023769 systemd[1]: systemd-ask-password-console.path: Deactivated successfully. Jan 30 13:51:33.027104 systemd[1]: Stopped systemd-ask-password-console.path - Dispatch Password Requests to Console Directory Watch. Jan 30 13:51:33.028688 systemd[1]: Stopped target slices.target - Slice Units. Jan 30 13:51:33.030752 systemd[1]: Stopped target sockets.target - Socket Units. Jan 30 13:51:33.033007 systemd[1]: iscsid.socket: Deactivated successfully. Jan 30 13:51:33.033160 systemd[1]: Closed iscsid.socket - Open-iSCSI iscsid Socket. Jan 30 13:51:33.035097 systemd[1]: iscsiuio.socket: Deactivated successfully. Jan 30 13:51:33.035229 systemd[1]: Closed iscsiuio.socket - Open-iSCSI iscsiuio Socket. Jan 30 13:51:33.037353 systemd[1]: initrd-setup-root-after-ignition.service: Deactivated successfully. Jan 30 13:51:33.037527 systemd[1]: Stopped initrd-setup-root-after-ignition.service - Root filesystem completion. Jan 30 13:51:33.040070 systemd[1]: ignition-files.service: Deactivated successfully. Jan 30 13:51:33.040226 systemd[1]: Stopped ignition-files.service - Ignition (files). Jan 30 13:51:33.051197 systemd[1]: Stopping ignition-mount.service - Ignition (mount)... Jan 30 13:51:33.053005 systemd[1]: Stopping sysroot-boot.service - /sysroot/boot... Jan 30 13:51:33.054598 systemd[1]: systemd-udev-trigger.service: Deactivated successfully. Jan 30 13:51:33.054747 systemd[1]: Stopped systemd-udev-trigger.service - Coldplug All udev Devices. Jan 30 13:51:33.057642 systemd[1]: dracut-pre-trigger.service: Deactivated successfully. 
Jan 30 13:51:33.057879 systemd[1]: Stopped dracut-pre-trigger.service - dracut pre-trigger hook. Jan 30 13:51:33.064806 ignition[1008]: INFO : Ignition 2.19.0 Jan 30 13:51:33.064806 ignition[1008]: INFO : Stage: umount Jan 30 13:51:33.064806 ignition[1008]: INFO : no configs at "/usr/lib/ignition/base.d" Jan 30 13:51:33.064806 ignition[1008]: INFO : no config dir at "/usr/lib/ignition/base.platform.d/qemu" Jan 30 13:51:33.071969 ignition[1008]: INFO : umount: umount passed Jan 30 13:51:33.071969 ignition[1008]: INFO : Ignition finished successfully Jan 30 13:51:33.065462 systemd[1]: initrd-cleanup.service: Deactivated successfully. Jan 30 13:51:33.065577 systemd[1]: Finished initrd-cleanup.service - Cleaning Up and Shutting Down Daemons. Jan 30 13:51:33.075410 systemd[1]: ignition-mount.service: Deactivated successfully. Jan 30 13:51:33.078330 systemd[1]: Stopped ignition-mount.service - Ignition (mount). Jan 30 13:51:33.082898 systemd[1]: sysroot-boot.mount: Deactivated successfully. Jan 30 13:51:33.085548 systemd[1]: Stopped target network.target - Network. Jan 30 13:51:33.088053 systemd[1]: ignition-disks.service: Deactivated successfully. Jan 30 13:51:33.089344 systemd[1]: Stopped ignition-disks.service - Ignition (disks). Jan 30 13:51:33.092141 systemd[1]: ignition-kargs.service: Deactivated successfully. Jan 30 13:51:33.092216 systemd[1]: Stopped ignition-kargs.service - Ignition (kargs). Jan 30 13:51:33.096064 systemd[1]: ignition-setup.service: Deactivated successfully. Jan 30 13:51:33.096139 systemd[1]: Stopped ignition-setup.service - Ignition (setup). Jan 30 13:51:33.100143 systemd[1]: ignition-setup-pre.service: Deactivated successfully. Jan 30 13:51:33.101449 systemd[1]: Stopped ignition-setup-pre.service - Ignition env setup. Jan 30 13:51:33.104717 systemd[1]: Stopping systemd-networkd.service - Network Configuration... Jan 30 13:51:33.107865 systemd[1]: Stopping systemd-resolved.service - Network Name Resolution... Jan 30 13:51:33.112097 systemd-networkd[779]: eth0: DHCPv6 lease lost Jan 30 13:51:33.114366 systemd[1]: systemd-resolved.service: Deactivated successfully. Jan 30 13:51:33.114527 systemd[1]: Stopped systemd-resolved.service - Network Name Resolution. Jan 30 13:51:33.116794 systemd[1]: systemd-networkd.service: Deactivated successfully. Jan 30 13:51:33.116931 systemd[1]: Stopped systemd-networkd.service - Network Configuration. Jan 30 13:51:33.122126 systemd[1]: systemd-networkd.socket: Deactivated successfully. Jan 30 13:51:33.122191 systemd[1]: Closed systemd-networkd.socket - Network Service Netlink Socket. Jan 30 13:51:33.132141 systemd[1]: Stopping network-cleanup.service - Network Cleanup... Jan 30 13:51:33.132221 systemd[1]: parse-ip-for-networkd.service: Deactivated successfully. Jan 30 13:51:33.132275 systemd[1]: Stopped parse-ip-for-networkd.service - Write systemd-networkd units from cmdline. Jan 30 13:51:33.136257 systemd[1]: systemd-sysctl.service: Deactivated successfully. Jan 30 13:51:33.136312 systemd[1]: Stopped systemd-sysctl.service - Apply Kernel Variables. Jan 30 13:51:33.137593 systemd[1]: systemd-modules-load.service: Deactivated successfully. Jan 30 13:51:33.137645 systemd[1]: Stopped systemd-modules-load.service - Load Kernel Modules. Jan 30 13:51:33.138050 systemd[1]: systemd-tmpfiles-setup.service: Deactivated successfully. Jan 30 13:51:33.138100 systemd[1]: Stopped systemd-tmpfiles-setup.service - Create System Files and Directories. 
Jan 30 13:51:33.144727 systemd[1]: Stopping systemd-udevd.service - Rule-based Manager for Device Events and Files... Jan 30 13:51:33.156233 systemd[1]: network-cleanup.service: Deactivated successfully. Jan 30 13:51:33.156381 systemd[1]: Stopped network-cleanup.service - Network Cleanup. Jan 30 13:51:33.159268 systemd[1]: systemd-udevd.service: Deactivated successfully. Jan 30 13:51:33.159439 systemd[1]: Stopped systemd-udevd.service - Rule-based Manager for Device Events and Files. Jan 30 13:51:33.162861 systemd[1]: systemd-udevd-control.socket: Deactivated successfully. Jan 30 13:51:33.162928 systemd[1]: Closed systemd-udevd-control.socket - udev Control Socket. Jan 30 13:51:33.164421 systemd[1]: systemd-udevd-kernel.socket: Deactivated successfully. Jan 30 13:51:33.164462 systemd[1]: Closed systemd-udevd-kernel.socket - udev Kernel Socket. Jan 30 13:51:33.165885 systemd[1]: dracut-pre-udev.service: Deactivated successfully. Jan 30 13:51:33.165936 systemd[1]: Stopped dracut-pre-udev.service - dracut pre-udev hook. Jan 30 13:51:33.167441 systemd[1]: dracut-cmdline.service: Deactivated successfully. Jan 30 13:51:33.167490 systemd[1]: Stopped dracut-cmdline.service - dracut cmdline hook. Jan 30 13:51:33.170402 systemd[1]: dracut-cmdline-ask.service: Deactivated successfully. Jan 30 13:51:33.170451 systemd[1]: Stopped dracut-cmdline-ask.service - dracut ask for additional cmdline parameters. Jan 30 13:51:33.183170 systemd[1]: Starting initrd-udevadm-cleanup-db.service - Cleanup udev Database... Jan 30 13:51:33.185537 systemd[1]: systemd-tmpfiles-setup-dev.service: Deactivated successfully. Jan 30 13:51:33.185596 systemd[1]: Stopped systemd-tmpfiles-setup-dev.service - Create Static Device Nodes in /dev. Jan 30 13:51:33.188798 systemd[1]: systemd-tmpfiles-setup-dev-early.service: Deactivated successfully. Jan 30 13:51:33.188847 systemd[1]: Stopped systemd-tmpfiles-setup-dev-early.service - Create Static Device Nodes in /dev gracefully. Jan 30 13:51:33.191813 systemd[1]: kmod-static-nodes.service: Deactivated successfully. Jan 30 13:51:33.191861 systemd[1]: Stopped kmod-static-nodes.service - Create List of Static Device Nodes. Jan 30 13:51:33.193505 systemd[1]: systemd-vconsole-setup.service: Deactivated successfully. Jan 30 13:51:33.193558 systemd[1]: Stopped systemd-vconsole-setup.service - Virtual Console Setup. Jan 30 13:51:33.196741 systemd[1]: initrd-udevadm-cleanup-db.service: Deactivated successfully. Jan 30 13:51:33.196853 systemd[1]: Finished initrd-udevadm-cleanup-db.service - Cleanup udev Database. Jan 30 13:51:33.233190 systemd[1]: sysroot-boot.service: Deactivated successfully. Jan 30 13:51:33.234531 systemd[1]: Stopped sysroot-boot.service - /sysroot/boot. Jan 30 13:51:33.237738 systemd[1]: Reached target initrd-switch-root.target - Switch Root. Jan 30 13:51:33.240461 systemd[1]: initrd-setup-root.service: Deactivated successfully. Jan 30 13:51:33.241760 systemd[1]: Stopped initrd-setup-root.service - Root filesystem setup. Jan 30 13:51:33.254247 systemd[1]: Starting initrd-switch-root.service - Switch Root... Jan 30 13:51:33.264631 systemd[1]: Switching root. Jan 30 13:51:33.300819 systemd-journald[193]: Journal stopped Jan 30 13:51:34.489513 systemd-journald[193]: Received SIGTERM from PID 1 (systemd). 
Jan 30 13:51:34.489584 kernel: SELinux: policy capability network_peer_controls=1 Jan 30 13:51:34.489601 kernel: SELinux: policy capability open_perms=1 Jan 30 13:51:34.489618 kernel: SELinux: policy capability extended_socket_class=1 Jan 30 13:51:34.489629 kernel: SELinux: policy capability always_check_network=0 Jan 30 13:51:34.489640 kernel: SELinux: policy capability cgroup_seclabel=1 Jan 30 13:51:34.489652 kernel: SELinux: policy capability nnp_nosuid_transition=1 Jan 30 13:51:34.489662 kernel: SELinux: policy capability genfs_seclabel_symlinks=0 Jan 30 13:51:34.489676 kernel: SELinux: policy capability ioctl_skip_cloexec=0 Jan 30 13:51:34.489687 kernel: audit: type=1403 audit(1738245093.716:2): auid=4294967295 ses=4294967295 lsm=selinux res=1 Jan 30 13:51:34.489704 systemd[1]: Successfully loaded SELinux policy in 42.391ms. Jan 30 13:51:34.489725 systemd[1]: Relabeled /dev, /dev/shm, /run, /sys/fs/cgroup in 13.979ms. Jan 30 13:51:34.489738 systemd[1]: systemd 255 running in system mode (+PAM +AUDIT +SELINUX -APPARMOR +IMA +SMACK +SECCOMP +GCRYPT -GNUTLS +OPENSSL -ACL +BLKID +CURL +ELFUTILS -FIDO2 +IDN2 -IDN +IPTC +KMOD +LIBCRYPTSETUP +LIBFDISK +PCRE2 -PWQUALITY -P11KIT -QRENCODE +TPM2 +BZIP2 +LZ4 +XZ +ZLIB +ZSTD -BPF_FRAMEWORK -XKBCOMMON +UTMP -SYSVINIT default-hierarchy=unified) Jan 30 13:51:34.489750 systemd[1]: Detected virtualization kvm. Jan 30 13:51:34.489762 systemd[1]: Detected architecture x86-64. Jan 30 13:51:34.489774 systemd[1]: Detected first boot. Jan 30 13:51:34.489786 systemd[1]: Initializing machine ID from VM UUID. Jan 30 13:51:34.489800 zram_generator::config[1052]: No configuration found. Jan 30 13:51:34.489812 systemd[1]: Populated /etc with preset unit settings. Jan 30 13:51:34.489824 systemd[1]: initrd-switch-root.service: Deactivated successfully. Jan 30 13:51:34.489838 systemd[1]: Stopped initrd-switch-root.service - Switch Root. Jan 30 13:51:34.489850 systemd[1]: systemd-journald.service: Scheduled restart job, restart counter is at 1. Jan 30 13:51:34.489866 systemd[1]: Created slice system-addon\x2dconfig.slice - Slice /system/addon-config. Jan 30 13:51:34.489878 systemd[1]: Created slice system-addon\x2drun.slice - Slice /system/addon-run. Jan 30 13:51:34.489890 systemd[1]: Created slice system-getty.slice - Slice /system/getty. Jan 30 13:51:34.489902 systemd[1]: Created slice system-modprobe.slice - Slice /system/modprobe. Jan 30 13:51:34.489923 systemd[1]: Created slice system-serial\x2dgetty.slice - Slice /system/serial-getty. Jan 30 13:51:34.489935 systemd[1]: Created slice system-system\x2dcloudinit.slice - Slice /system/system-cloudinit. Jan 30 13:51:34.489947 systemd[1]: Created slice system-systemd\x2dfsck.slice - Slice /system/systemd-fsck. Jan 30 13:51:34.489959 systemd[1]: Created slice user.slice - User and Session Slice. Jan 30 13:51:34.489973 systemd[1]: Started clevis-luks-askpass.path - Forward Password Requests to Clevis Directory Watch. Jan 30 13:51:34.489985 systemd[1]: Started systemd-ask-password-console.path - Dispatch Password Requests to Console Directory Watch. Jan 30 13:51:34.489997 systemd[1]: Started systemd-ask-password-wall.path - Forward Password Requests to Wall Directory Watch. Jan 30 13:51:34.490009 systemd[1]: Set up automount boot.automount - Boot partition Automount Point. Jan 30 13:51:34.490135 systemd[1]: Set up automount proc-sys-fs-binfmt_misc.automount - Arbitrary Executable File Formats File System Automount Point. 
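Editor's note: systemd reports a first boot and "Initializing machine ID from VM UUID". As a rough, hedged check — assuming the VM UUID in question is the one exposed at the conventional DMI sysfs path, which the log does not state — one can compare the committed machine ID against it after boot:

```python
from pathlib import Path

# Illustration only; reading product_uuid typically requires root.
def read(path: str) -> str:
    try:
        return Path(path).read_text().strip().lower()
    except OSError:
        return "<unavailable>"

machine_id = read("/etc/machine-id")              # 32 hex chars, no dashes
vm_uuid = read("/sys/class/dmi/id/product_uuid")  # dashed UUID form
print("machine-id :", machine_id)
print("vm uuid    :", vm_uuid)
print("match      :", machine_id == vm_uuid.replace("-", ""))
```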
Jan 30 13:51:34.490152 systemd[1]: Expecting device dev-disk-by\x2dlabel-OEM.device - /dev/disk/by-label/OEM... Jan 30 13:51:34.490164 systemd[1]: Expecting device dev-ttyS0.device - /dev/ttyS0... Jan 30 13:51:34.490184 systemd[1]: Reached target cryptsetup-pre.target - Local Encrypted Volumes (Pre). Jan 30 13:51:34.490197 systemd[1]: Stopped target initrd-switch-root.target - Switch Root. Jan 30 13:51:34.490208 systemd[1]: Stopped target initrd-fs.target - Initrd File Systems. Jan 30 13:51:34.490224 systemd[1]: Stopped target initrd-root-fs.target - Initrd Root File System. Jan 30 13:51:34.490236 systemd[1]: Reached target integritysetup.target - Local Integrity Protected Volumes. Jan 30 13:51:34.490247 systemd[1]: Reached target remote-cryptsetup.target - Remote Encrypted Volumes. Jan 30 13:51:34.490265 systemd[1]: Reached target remote-fs.target - Remote File Systems. Jan 30 13:51:34.490277 systemd[1]: Reached target slices.target - Slice Units. Jan 30 13:51:34.490289 systemd[1]: Reached target swap.target - Swaps. Jan 30 13:51:34.490300 systemd[1]: Reached target veritysetup.target - Local Verity Protected Volumes. Jan 30 13:51:34.490314 systemd[1]: Listening on systemd-coredump.socket - Process Core Dump Socket. Jan 30 13:51:34.490326 systemd[1]: Listening on systemd-networkd.socket - Network Service Netlink Socket. Jan 30 13:51:34.490338 systemd[1]: Listening on systemd-udevd-control.socket - udev Control Socket. Jan 30 13:51:34.490350 systemd[1]: Listening on systemd-udevd-kernel.socket - udev Kernel Socket. Jan 30 13:51:34.490362 systemd[1]: Listening on systemd-userdbd.socket - User Database Manager Socket. Jan 30 13:51:34.490373 systemd[1]: Mounting dev-hugepages.mount - Huge Pages File System... Jan 30 13:51:34.490387 systemd[1]: Mounting dev-mqueue.mount - POSIX Message Queue File System... Jan 30 13:51:34.490405 systemd[1]: Mounting media.mount - External Media Directory... Jan 30 13:51:34.490417 systemd[1]: proc-xen.mount - /proc/xen was skipped because of an unmet condition check (ConditionVirtualization=xen). Jan 30 13:51:34.490433 systemd[1]: Mounting sys-kernel-debug.mount - Kernel Debug File System... Jan 30 13:51:34.490445 systemd[1]: Mounting sys-kernel-tracing.mount - Kernel Trace File System... Jan 30 13:51:34.490457 systemd[1]: Mounting tmp.mount - Temporary Directory /tmp... Jan 30 13:51:34.490469 systemd[1]: var-lib-machines.mount - Virtual Machine and Container Storage (Compatibility) was skipped because of an unmet condition check (ConditionPathExists=/var/lib/machines.raw). Jan 30 13:51:34.490481 systemd[1]: Reached target machines.target - Containers. Jan 30 13:51:34.490493 systemd[1]: Starting flatcar-tmpfiles.service - Create missing system files... Jan 30 13:51:34.490511 systemd[1]: ignition-delete-config.service - Ignition (delete config) was skipped because no trigger condition checks were met. Jan 30 13:51:34.490526 systemd[1]: Starting kmod-static-nodes.service - Create List of Static Device Nodes... Jan 30 13:51:34.490542 systemd[1]: Starting modprobe@configfs.service - Load Kernel Module configfs... Jan 30 13:51:34.490561 systemd[1]: Starting modprobe@dm_mod.service - Load Kernel Module dm_mod... Jan 30 13:51:34.490577 systemd[1]: Starting modprobe@drm.service - Load Kernel Module drm... Jan 30 13:51:34.490591 systemd[1]: Starting modprobe@efi_pstore.service - Load Kernel Module efi_pstore... Jan 30 13:51:34.490605 systemd[1]: Starting modprobe@fuse.service - Load Kernel Module fuse... 
Jan 30 13:51:34.490619 systemd[1]: Starting modprobe@loop.service - Load Kernel Module loop... Jan 30 13:51:34.490632 systemd[1]: setup-nsswitch.service - Create /etc/nsswitch.conf was skipped because of an unmet condition check (ConditionPathExists=!/etc/nsswitch.conf). Jan 30 13:51:34.490647 systemd[1]: systemd-fsck-root.service: Deactivated successfully. Jan 30 13:51:34.490660 systemd[1]: Stopped systemd-fsck-root.service - File System Check on Root Device. Jan 30 13:51:34.490674 systemd[1]: systemd-fsck-usr.service: Deactivated successfully. Jan 30 13:51:34.490686 systemd[1]: Stopped systemd-fsck-usr.service. Jan 30 13:51:34.490699 kernel: loop: module loaded Jan 30 13:51:34.490711 systemd[1]: Starting systemd-journald.service - Journal Service... Jan 30 13:51:34.490722 kernel: fuse: init (API version 7.39) Jan 30 13:51:34.490734 systemd[1]: Starting systemd-modules-load.service - Load Kernel Modules... Jan 30 13:51:34.490746 systemd[1]: Starting systemd-network-generator.service - Generate network units from Kernel command line... Jan 30 13:51:34.490758 systemd[1]: Starting systemd-remount-fs.service - Remount Root and Kernel File Systems... Jan 30 13:51:34.490770 systemd[1]: Starting systemd-udev-trigger.service - Coldplug All udev Devices... Jan 30 13:51:34.490784 systemd[1]: verity-setup.service: Deactivated successfully. Jan 30 13:51:34.490795 systemd[1]: Stopped verity-setup.service. Jan 30 13:51:34.490807 systemd[1]: xenserver-pv-version.service - Set fake PV driver version for XenServer was skipped because of an unmet condition check (ConditionVirtualization=xen). Jan 30 13:51:34.490838 systemd-journald[1122]: Collecting audit messages is disabled. Jan 30 13:51:34.490859 kernel: ACPI: bus type drm_connector registered Jan 30 13:51:34.490871 systemd[1]: Mounted dev-hugepages.mount - Huge Pages File System. Jan 30 13:51:34.490883 systemd[1]: Mounted dev-mqueue.mount - POSIX Message Queue File System. Jan 30 13:51:34.490895 systemd-journald[1122]: Journal started Jan 30 13:51:34.490927 systemd-journald[1122]: Runtime Journal (/run/log/journal/798393d0b6834bd9b57699900abec581) is 6.0M, max 48.4M, 42.3M free. Jan 30 13:51:34.244474 systemd[1]: Queued start job for default target multi-user.target. Jan 30 13:51:34.261142 systemd[1]: Unnecessary job was removed for dev-vda6.device - /dev/vda6. Jan 30 13:51:34.261642 systemd[1]: systemd-journald.service: Deactivated successfully. Jan 30 13:51:34.494420 systemd[1]: Started systemd-journald.service - Journal Service. Jan 30 13:51:34.495188 systemd[1]: Mounted media.mount - External Media Directory. Jan 30 13:51:34.496412 systemd[1]: Mounted sys-kernel-debug.mount - Kernel Debug File System. Jan 30 13:51:34.497653 systemd[1]: Mounted sys-kernel-tracing.mount - Kernel Trace File System. Jan 30 13:51:34.498943 systemd[1]: Mounted tmp.mount - Temporary Directory /tmp. Jan 30 13:51:34.500244 systemd[1]: Finished flatcar-tmpfiles.service - Create missing system files. Jan 30 13:51:34.501742 systemd[1]: Finished kmod-static-nodes.service - Create List of Static Device Nodes. Jan 30 13:51:34.503336 systemd[1]: modprobe@configfs.service: Deactivated successfully. Jan 30 13:51:34.503509 systemd[1]: Finished modprobe@configfs.service - Load Kernel Module configfs. Jan 30 13:51:34.505056 systemd[1]: modprobe@dm_mod.service: Deactivated successfully. Jan 30 13:51:34.505235 systemd[1]: Finished modprobe@dm_mod.service - Load Kernel Module dm_mod. Jan 30 13:51:34.506786 systemd[1]: modprobe@drm.service: Deactivated successfully. 
Jan 30 13:51:34.506980 systemd[1]: Finished modprobe@drm.service - Load Kernel Module drm. Jan 30 13:51:34.508413 systemd[1]: modprobe@efi_pstore.service: Deactivated successfully. Jan 30 13:51:34.508581 systemd[1]: Finished modprobe@efi_pstore.service - Load Kernel Module efi_pstore. Jan 30 13:51:34.510330 systemd[1]: modprobe@fuse.service: Deactivated successfully. Jan 30 13:51:34.510547 systemd[1]: Finished modprobe@fuse.service - Load Kernel Module fuse. Jan 30 13:51:34.512216 systemd[1]: modprobe@loop.service: Deactivated successfully. Jan 30 13:51:34.512386 systemd[1]: Finished modprobe@loop.service - Load Kernel Module loop. Jan 30 13:51:34.514083 systemd[1]: Finished systemd-modules-load.service - Load Kernel Modules. Jan 30 13:51:34.515512 systemd[1]: Finished systemd-network-generator.service - Generate network units from Kernel command line. Jan 30 13:51:34.517067 systemd[1]: Finished systemd-remount-fs.service - Remount Root and Kernel File Systems. Jan 30 13:51:34.530616 systemd[1]: Reached target network-pre.target - Preparation for Network. Jan 30 13:51:34.542177 systemd[1]: Mounting sys-fs-fuse-connections.mount - FUSE Control File System... Jan 30 13:51:34.544744 systemd[1]: Mounting sys-kernel-config.mount - Kernel Configuration File System... Jan 30 13:51:34.545970 systemd[1]: remount-root.service - Remount Root File System was skipped because of an unmet condition check (ConditionPathIsReadWrite=!/). Jan 30 13:51:34.546003 systemd[1]: Reached target local-fs.target - Local File Systems. Jan 30 13:51:34.548316 systemd[1]: Listening on systemd-sysext.socket - System Extension Image Management (Varlink). Jan 30 13:51:34.550741 systemd[1]: Starting dracut-shutdown.service - Restore /run/initramfs on shutdown... Jan 30 13:51:34.553741 systemd[1]: Starting ldconfig.service - Rebuild Dynamic Linker Cache... Jan 30 13:51:34.554945 systemd[1]: systemd-binfmt.service - Set Up Additional Binary Formats was skipped because no trigger condition checks were met. Jan 30 13:51:34.557018 systemd[1]: Starting systemd-hwdb-update.service - Rebuild Hardware Database... Jan 30 13:51:34.560579 systemd[1]: Starting systemd-journal-flush.service - Flush Journal to Persistent Storage... Jan 30 13:51:34.562404 systemd[1]: systemd-pstore.service - Platform Persistent Storage Archival was skipped because of an unmet condition check (ConditionDirectoryNotEmpty=/sys/fs/pstore). Jan 30 13:51:34.565471 systemd[1]: Starting systemd-random-seed.service - Load/Save OS Random Seed... Jan 30 13:51:34.571000 systemd-journald[1122]: Time spent on flushing to /var/log/journal/798393d0b6834bd9b57699900abec581 is 22.933ms for 931 entries. Jan 30 13:51:34.571000 systemd-journald[1122]: System Journal (/var/log/journal/798393d0b6834bd9b57699900abec581) is 8.0M, max 195.6M, 187.6M free. Jan 30 13:51:34.611325 systemd-journald[1122]: Received client request to flush runtime journal. Jan 30 13:51:34.611370 kernel: loop0: detected capacity change from 0 to 142488 Jan 30 13:51:34.568167 systemd[1]: systemd-repart.service - Repartition Root Disk was skipped because no trigger condition checks were met. Jan 30 13:51:34.569658 systemd[1]: Starting systemd-sysctl.service - Apply Kernel Variables... Jan 30 13:51:34.579253 systemd[1]: Starting systemd-sysext.service - Merge System Extension Images into /usr/ and /opt/... Jan 30 13:51:34.584374 systemd[1]: Starting systemd-tmpfiles-setup-dev-early.service - Create Static Device Nodes in /dev gracefully... 
Jan 30 13:51:34.589438 systemd[1]: Mounted sys-fs-fuse-connections.mount - FUSE Control File System. Jan 30 13:51:34.592297 systemd[1]: Mounted sys-kernel-config.mount - Kernel Configuration File System. Jan 30 13:51:34.595332 systemd[1]: Finished systemd-udev-trigger.service - Coldplug All udev Devices. Jan 30 13:51:34.598254 systemd[1]: Finished dracut-shutdown.service - Restore /run/initramfs on shutdown. Jan 30 13:51:34.616349 systemd[1]: Starting systemd-udev-settle.service - Wait for udev To Complete Device Initialization... Jan 30 13:51:34.618471 systemd[1]: Finished systemd-journal-flush.service - Flush Journal to Persistent Storage. Jan 30 13:51:34.623939 kernel: squashfs: version 4.0 (2009/01/31) Phillip Lougher Jan 30 13:51:34.620526 systemd[1]: Finished systemd-random-seed.service - Load/Save OS Random Seed. Jan 30 13:51:34.626704 systemd[1]: Finished systemd-sysctl.service - Apply Kernel Variables. Jan 30 13:51:34.630113 systemd[1]: Reached target first-boot-complete.target - First Boot Complete. Jan 30 13:51:34.637620 systemd[1]: Starting systemd-machine-id-commit.service - Commit a transient machine-id on disk... Jan 30 13:51:34.652058 kernel: loop1: detected capacity change from 0 to 218376 Jan 30 13:51:34.652040 udevadm[1178]: systemd-udev-settle.service is deprecated. Please fix lvm2-activation.service, lvm2-activation-early.service not to pull it in. Jan 30 13:51:34.654645 systemd-tmpfiles[1167]: ACLs are not supported, ignoring. Jan 30 13:51:34.654666 systemd-tmpfiles[1167]: ACLs are not supported, ignoring. Jan 30 13:51:34.663175 systemd[1]: Finished systemd-tmpfiles-setup-dev-early.service - Create Static Device Nodes in /dev gracefully. Jan 30 13:51:34.673181 systemd[1]: Starting systemd-sysusers.service - Create System Users... Jan 30 13:51:34.675498 systemd[1]: etc-machine\x2did.mount: Deactivated successfully. Jan 30 13:51:34.676235 systemd[1]: Finished systemd-machine-id-commit.service - Commit a transient machine-id on disk. Jan 30 13:51:34.706044 kernel: loop2: detected capacity change from 0 to 140768 Jan 30 13:51:34.713846 systemd[1]: Finished systemd-sysusers.service - Create System Users. Jan 30 13:51:34.730281 systemd[1]: Starting systemd-tmpfiles-setup-dev.service - Create Static Device Nodes in /dev... Jan 30 13:51:34.753054 kernel: loop3: detected capacity change from 0 to 142488 Jan 30 13:51:34.753859 systemd-tmpfiles[1190]: ACLs are not supported, ignoring. Jan 30 13:51:34.753887 systemd-tmpfiles[1190]: ACLs are not supported, ignoring. Jan 30 13:51:34.760816 systemd[1]: Finished systemd-tmpfiles-setup-dev.service - Create Static Device Nodes in /dev. Jan 30 13:51:34.768049 kernel: loop4: detected capacity change from 0 to 218376 Jan 30 13:51:34.777050 kernel: loop5: detected capacity change from 0 to 140768 Jan 30 13:51:34.785162 (sd-merge)[1193]: Using extensions 'containerd-flatcar', 'docker-flatcar', 'kubernetes'. Jan 30 13:51:34.785758 (sd-merge)[1193]: Merged extensions into '/usr'. Jan 30 13:51:34.789998 systemd[1]: Reloading requested from client PID 1166 ('systemd-sysext') (unit systemd-sysext.service)... Jan 30 13:51:34.790017 systemd[1]: Reloading... Jan 30 13:51:34.854151 zram_generator::config[1219]: No configuration found. Jan 30 13:51:34.947939 ldconfig[1161]: /sbin/ldconfig: /lib/ld.so.conf is not an ELF file - it has the wrong magic bytes at the start. 
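Editor's note: the (sd-merge) lines below show systemd-sysext merging the containerd-flatcar, docker-flatcar and kubernetes extensions into /usr — the kubernetes.raw link was written by Ignition earlier in this log. A small sketch, assuming the commonly documented sysext search directories (see systemd-sysext(8) for the authoritative list on a given version), shows where such images are picked up:

```python
from pathlib import Path

# Directories commonly searched by systemd-sysext for extension images.
SEARCH_DIRS = ["/etc/extensions", "/run/extensions", "/var/lib/extensions"]

def list_sysext_images():
    images = []
    for d in SEARCH_DIRS:
        p = Path(d)
        if p.is_dir():
            images += sorted(x for x in p.iterdir()
                             if x.suffix == ".raw" or x.is_dir())
    return images

for img in list_sysext_images():
    print(img)
```

On the running system, `systemd-sysext status` reports the same merge result that the journal records here.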
Jan 30 13:51:34.982909 systemd[1]: /usr/lib/systemd/system/docker.socket:6: ListenStream= references a path below legacy directory /var/run/, updating /var/run/docker.sock → /run/docker.sock; please update the unit file accordingly. Jan 30 13:51:35.032005 systemd[1]: Reloading finished in 241 ms. Jan 30 13:51:35.064124 systemd[1]: Finished ldconfig.service - Rebuild Dynamic Linker Cache. Jan 30 13:51:35.065722 systemd[1]: Finished systemd-sysext.service - Merge System Extension Images into /usr/ and /opt/. Jan 30 13:51:35.083276 systemd[1]: Starting ensure-sysext.service... Jan 30 13:51:35.104691 systemd[1]: Starting systemd-tmpfiles-setup.service - Create System Files and Directories... Jan 30 13:51:35.109566 systemd[1]: Reloading requested from client PID 1257 ('systemctl') (unit ensure-sysext.service)... Jan 30 13:51:35.109581 systemd[1]: Reloading... Jan 30 13:51:35.141866 systemd-tmpfiles[1258]: /usr/lib/tmpfiles.d/provision.conf:20: Duplicate line for path "/root", ignoring. Jan 30 13:51:35.142644 systemd-tmpfiles[1258]: /usr/lib/tmpfiles.d/systemd-flatcar.conf:6: Duplicate line for path "/var/log/journal", ignoring. Jan 30 13:51:35.143798 systemd-tmpfiles[1258]: /usr/lib/tmpfiles.d/systemd.conf:29: Duplicate line for path "/var/lib/systemd", ignoring. Jan 30 13:51:35.144114 systemd-tmpfiles[1258]: ACLs are not supported, ignoring. Jan 30 13:51:35.144192 systemd-tmpfiles[1258]: ACLs are not supported, ignoring. Jan 30 13:51:35.147953 systemd-tmpfiles[1258]: Detected autofs mount point /boot during canonicalization of boot. Jan 30 13:51:35.148041 systemd-tmpfiles[1258]: Skipping /boot Jan 30 13:51:35.162466 systemd-tmpfiles[1258]: Detected autofs mount point /boot during canonicalization of boot. Jan 30 13:51:35.162587 systemd-tmpfiles[1258]: Skipping /boot Jan 30 13:51:35.165065 zram_generator::config[1284]: No configuration found. Jan 30 13:51:35.276638 systemd[1]: /usr/lib/systemd/system/docker.socket:6: ListenStream= references a path below legacy directory /var/run/, updating /var/run/docker.sock → /run/docker.sock; please update the unit file accordingly. Jan 30 13:51:35.325855 systemd[1]: Reloading finished in 215 ms. Jan 30 13:51:35.348107 systemd[1]: Finished systemd-tmpfiles-setup.service - Create System Files and Directories. Jan 30 13:51:35.367924 systemd[1]: Starting audit-rules.service - Load Security Auditing Rules... Jan 30 13:51:35.370790 systemd[1]: Starting clean-ca-certificates.service - Clean up broken links in /etc/ssl/certs... Jan 30 13:51:35.373191 systemd[1]: Starting systemd-journal-catalog-update.service - Rebuild Journal Catalog... Jan 30 13:51:35.377892 systemd[1]: Starting systemd-resolved.service - Network Name Resolution... Jan 30 13:51:35.380090 systemd[1]: Starting systemd-update-utmp.service - Record System Boot/Shutdown in UTMP... Jan 30 13:51:35.385444 systemd[1]: proc-xen.mount - /proc/xen was skipped because of an unmet condition check (ConditionVirtualization=xen). Jan 30 13:51:35.385611 systemd[1]: ignition-delete-config.service - Ignition (delete config) was skipped because no trigger condition checks were met. Jan 30 13:51:35.396618 systemd[1]: Starting modprobe@dm_mod.service - Load Kernel Module dm_mod... Jan 30 13:51:35.402408 systemd[1]: Starting modprobe@efi_pstore.service - Load Kernel Module efi_pstore... Jan 30 13:51:35.407554 systemd[1]: Starting modprobe@loop.service - Load Kernel Module loop... 
Jan 30 13:51:35.410220 systemd[1]: systemd-binfmt.service - Set Up Additional Binary Formats was skipped because no trigger condition checks were met. Jan 30 13:51:35.412748 systemd[1]: Starting systemd-userdbd.service - User Database Manager... Jan 30 13:51:35.414316 systemd[1]: xenserver-pv-version.service - Set fake PV driver version for XenServer was skipped because of an unmet condition check (ConditionVirtualization=xen). Jan 30 13:51:35.415980 systemd[1]: modprobe@dm_mod.service: Deactivated successfully. Jan 30 13:51:35.416392 systemd[1]: Finished modprobe@dm_mod.service - Load Kernel Module dm_mod. Jan 30 13:51:35.421797 systemd[1]: modprobe@efi_pstore.service: Deactivated successfully. Jan 30 13:51:35.422357 systemd[1]: Finished modprobe@efi_pstore.service - Load Kernel Module efi_pstore. Jan 30 13:51:35.425803 systemd[1]: modprobe@loop.service: Deactivated successfully. Jan 30 13:51:35.426477 systemd[1]: Finished modprobe@loop.service - Load Kernel Module loop. Jan 30 13:51:35.434262 systemd[1]: Finished systemd-hwdb-update.service - Rebuild Hardware Database. Jan 30 13:51:35.437291 systemd[1]: Finished systemd-journal-catalog-update.service - Rebuild Journal Catalog. Jan 30 13:51:35.443000 systemd[1]: Finished systemd-update-utmp.service - Record System Boot/Shutdown in UTMP. Jan 30 13:51:35.446449 augenrules[1351]: No rules Jan 30 13:51:35.448446 systemd[1]: Finished audit-rules.service - Load Security Auditing Rules. Jan 30 13:51:35.452414 systemd[1]: proc-xen.mount - /proc/xen was skipped because of an unmet condition check (ConditionVirtualization=xen). Jan 30 13:51:35.452764 systemd[1]: ignition-delete-config.service - Ignition (delete config) was skipped because no trigger condition checks were met. Jan 30 13:51:35.460262 systemd[1]: Starting modprobe@dm_mod.service - Load Kernel Module dm_mod... Jan 30 13:51:35.462906 systemd[1]: Starting modprobe@efi_pstore.service - Load Kernel Module efi_pstore... Jan 30 13:51:35.466284 systemd[1]: Starting modprobe@loop.service - Load Kernel Module loop... Jan 30 13:51:35.467687 systemd[1]: systemd-binfmt.service - Set Up Additional Binary Formats was skipped because no trigger condition checks were met. Jan 30 13:51:35.469865 systemd[1]: Starting systemd-udevd.service - Rule-based Manager for Device Events and Files... Jan 30 13:51:35.474478 systemd[1]: Starting systemd-update-done.service - Update is Completed... Jan 30 13:51:35.476003 systemd[1]: xenserver-pv-version.service - Set fake PV driver version for XenServer was skipped because of an unmet condition check (ConditionVirtualization=xen). Jan 30 13:51:35.477365 systemd[1]: Started systemd-userdbd.service - User Database Manager. Jan 30 13:51:35.479710 systemd[1]: Finished clean-ca-certificates.service - Clean up broken links in /etc/ssl/certs. Jan 30 13:51:35.481850 systemd[1]: modprobe@dm_mod.service: Deactivated successfully. Jan 30 13:51:35.482086 systemd[1]: Finished modprobe@dm_mod.service - Load Kernel Module dm_mod. Jan 30 13:51:35.483964 systemd[1]: modprobe@efi_pstore.service: Deactivated successfully. Jan 30 13:51:35.484151 systemd[1]: Finished modprobe@efi_pstore.service - Load Kernel Module efi_pstore. Jan 30 13:51:35.486015 systemd[1]: modprobe@loop.service: Deactivated successfully. Jan 30 13:51:35.486238 systemd[1]: Finished modprobe@loop.service - Load Kernel Module loop. Jan 30 13:51:35.490674 systemd[1]: Finished systemd-update-done.service - Update is Completed. Jan 30 13:51:35.499336 systemd[1]: Finished ensure-sysext.service. 
Jan 30 13:51:35.509256 systemd-udevd[1363]: Using default interface naming scheme 'v255'. Jan 30 13:51:35.509822 systemd[1]: proc-xen.mount - /proc/xen was skipped because of an unmet condition check (ConditionVirtualization=xen). Jan 30 13:51:35.510078 systemd[1]: ignition-delete-config.service - Ignition (delete config) was skipped because no trigger condition checks were met. Jan 30 13:51:35.517242 systemd[1]: Starting modprobe@dm_mod.service - Load Kernel Module dm_mod... Jan 30 13:51:35.519512 systemd[1]: Starting modprobe@drm.service - Load Kernel Module drm... Jan 30 13:51:35.522521 systemd[1]: Starting modprobe@efi_pstore.service - Load Kernel Module efi_pstore... Jan 30 13:51:35.527916 systemd[1]: Starting modprobe@loop.service - Load Kernel Module loop... Jan 30 13:51:35.529218 systemd[1]: systemd-binfmt.service - Set Up Additional Binary Formats was skipped because no trigger condition checks were met. Jan 30 13:51:35.531268 systemd[1]: Starting systemd-timesyncd.service - Network Time Synchronization... Jan 30 13:51:35.532595 systemd[1]: update-ca-certificates.service - Update CA bundle at /etc/ssl/certs/ca-certificates.crt was skipped because of an unmet condition check (ConditionPathIsSymbolicLink=!/etc/ssl/certs/ca-certificates.crt). Jan 30 13:51:35.532623 systemd[1]: xenserver-pv-version.service - Set fake PV driver version for XenServer was skipped because of an unmet condition check (ConditionVirtualization=xen). Jan 30 13:51:35.533373 systemd[1]: modprobe@dm_mod.service: Deactivated successfully. Jan 30 13:51:35.533550 systemd[1]: Finished modprobe@dm_mod.service - Load Kernel Module dm_mod. Jan 30 13:51:35.535243 systemd[1]: modprobe@drm.service: Deactivated successfully. Jan 30 13:51:35.535426 systemd[1]: Finished modprobe@drm.service - Load Kernel Module drm. Jan 30 13:51:35.536944 systemd[1]: modprobe@efi_pstore.service: Deactivated successfully. Jan 30 13:51:35.537137 systemd[1]: Finished modprobe@efi_pstore.service - Load Kernel Module efi_pstore. Jan 30 13:51:35.538862 systemd[1]: modprobe@loop.service: Deactivated successfully. Jan 30 13:51:35.539261 systemd-resolved[1326]: Positive Trust Anchors: Jan 30 13:51:35.539311 systemd[1]: Finished modprobe@loop.service - Load Kernel Module loop. Jan 30 13:51:35.539319 systemd-resolved[1326]: . IN DS 20326 8 2 e06d44b80b8f1d39a95c0b0d7c65d08458e880409bbc683457104237c7f8ec8d Jan 30 13:51:35.539711 systemd-resolved[1326]: Negative trust anchors: home.arpa 10.in-addr.arpa 16.172.in-addr.arpa 17.172.in-addr.arpa 18.172.in-addr.arpa 19.172.in-addr.arpa 20.172.in-addr.arpa 21.172.in-addr.arpa 22.172.in-addr.arpa 23.172.in-addr.arpa 24.172.in-addr.arpa 25.172.in-addr.arpa 26.172.in-addr.arpa 27.172.in-addr.arpa 28.172.in-addr.arpa 29.172.in-addr.arpa 30.172.in-addr.arpa 31.172.in-addr.arpa 170.0.0.192.in-addr.arpa 171.0.0.192.in-addr.arpa 168.192.in-addr.arpa d.f.ip6.arpa ipv4only.arpa resolver.arpa corp home internal intranet lan local private test Jan 30 13:51:35.541693 systemd[1]: Started systemd-udevd.service - Rule-based Manager for Device Events and Files. Jan 30 13:51:35.543496 systemd-resolved[1326]: Defaulting to hostname 'linux'. Jan 30 13:51:35.545519 systemd[1]: Started systemd-resolved.service - Network Name Resolution. Jan 30 13:51:35.547434 systemd[1]: Reached target nss-lookup.target - Host and Network Name Lookups. Jan 30 13:51:35.557243 systemd[1]: Starting systemd-networkd.service - Network Configuration... 
Jan 30 13:51:35.559018 systemd[1]: systemd-pstore.service - Platform Persistent Storage Archival was skipped because of an unmet condition check (ConditionDirectoryNotEmpty=/sys/fs/pstore). Jan 30 13:51:35.559103 systemd[1]: systemd-repart.service - Repartition Root Disk was skipped because no trigger condition checks were met. Jan 30 13:51:35.578500 systemd[1]: Condition check resulted in dev-ttyS0.device - /dev/ttyS0 being skipped. Jan 30 13:51:35.610057 kernel: BTRFS warning: duplicate device /dev/vda3 devid 1 generation 38 scanned by (udev-worker) (1392) Jan 30 13:51:35.637071 kernel: input: Power Button as /devices/LNXSYSTM:00/LNXPWRBN:00/input/input2 Jan 30 13:51:35.650352 systemd-networkd[1391]: lo: Link UP Jan 30 13:51:35.650364 systemd-networkd[1391]: lo: Gained carrier Jan 30 13:51:35.652607 systemd-networkd[1391]: Enumeration completed Jan 30 13:51:35.654373 systemd-networkd[1391]: eth0: found matching network '/usr/lib/systemd/network/zz-default.network', based on potentially unpredictable interface name. Jan 30 13:51:35.654383 systemd-networkd[1391]: eth0: Configuring with /usr/lib/systemd/network/zz-default.network. Jan 30 13:51:35.654744 systemd[1]: Started systemd-networkd.service - Network Configuration. Jan 30 13:51:35.655174 systemd-networkd[1391]: eth0: found matching network '/usr/lib/systemd/network/zz-default.network', based on potentially unpredictable interface name. Jan 30 13:51:35.655208 systemd-networkd[1391]: eth0: Link UP Jan 30 13:51:35.655211 systemd-networkd[1391]: eth0: Gained carrier Jan 30 13:51:35.655221 systemd-networkd[1391]: eth0: found matching network '/usr/lib/systemd/network/zz-default.network', based on potentially unpredictable interface name. Jan 30 13:51:35.668582 systemd[1]: Started systemd-timesyncd.service - Network Time Synchronization. Jan 30 13:51:35.671068 kernel: ACPI: button: Power Button [PWRF] Jan 30 13:51:35.675941 systemd[1]: Found device dev-disk-by\x2dlabel-OEM.device - /dev/disk/by-label/OEM. Jan 30 13:51:35.677832 systemd[1]: Reached target network.target - Network. Jan 30 13:51:35.679000 systemd[1]: Reached target time-set.target - System Time Set. Jan 30 13:51:35.682715 kernel: i801_smbus 0000:00:1f.3: SMBus using PCI interrupt Jan 30 13:51:35.683525 kernel: i2c i2c-0: 1/1 memory slots populated (from DMI) Jan 30 13:51:35.683720 kernel: i2c i2c-0: Memory type 0x07 not supported yet, not instantiating SPD Jan 30 13:51:35.685089 systemd-networkd[1391]: eth0: DHCPv4 address 10.0.0.127/16, gateway 10.0.0.1 acquired from 10.0.0.1 Jan 30 13:51:35.685958 systemd-timesyncd[1378]: Network configuration changed, trying to establish connection. Jan 30 13:51:36.560545 kernel: input: ImExPS/2 Generic Explorer Mouse as /devices/platform/i8042/serio1/input/input3 Jan 30 13:51:36.556399 systemd-timesyncd[1378]: Contacted time server 10.0.0.1:123 (10.0.0.1). Jan 30 13:51:36.556440 systemd-timesyncd[1378]: Initial clock synchronization to Thu 2025-01-30 13:51:36.556307 UTC. Jan 30 13:51:36.556560 systemd-resolved[1326]: Clock change detected. Flushing caches. Jan 30 13:51:36.560860 systemd[1]: Starting systemd-fsck@dev-disk-by\x2dlabel-OEM.service - File System Check on /dev/disk/by-label/OEM... Jan 30 13:51:36.564515 systemd[1]: Starting systemd-networkd-wait-online.service - Wait for Network to be Configured... Jan 30 13:51:36.604302 kernel: mousedev: PS/2 mouse device common for all mice Jan 30 13:51:36.609523 systemd[1]: Starting systemd-vconsole-setup.service - Virtual Console Setup... 
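Editor's note: the timestamps above run out of order because systemd-timesyncd steps the clock on its first synchronization ("Clock change detected. Flushing caches."). Both the last unsynchronized timestamp and the synchronization target appear in the log, so the size of the step can be recovered; using the "trying to establish connection" entry as the "before" value is an approximation:

```python
from datetime import datetime

# Values taken directly from the journal lines above.
before = datetime.fromisoformat("2025-01-30 13:51:35.685958")  # last pre-sync entry
after  = datetime.fromisoformat("2025-01-30 13:51:36.556307")  # "Initial clock synchronization to ..."
print("clock stepped forward by ~%.3f s" % (after - before).total_seconds())
# clock stepped forward by ~0.870 s
```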
Jan 30 13:51:36.611034 systemd[1]: Finished systemd-fsck@dev-disk-by\x2dlabel-OEM.service - File System Check on /dev/disk/by-label/OEM. Jan 30 13:51:36.705914 systemd[1]: Finished systemd-vconsole-setup.service - Virtual Console Setup. Jan 30 13:51:36.716345 kernel: kvm_amd: TSC scaling supported Jan 30 13:51:36.716392 kernel: kvm_amd: Nested Virtualization enabled Jan 30 13:51:36.716406 kernel: kvm_amd: Nested Paging enabled Jan 30 13:51:36.716421 kernel: kvm_amd: LBR virtualization supported Jan 30 13:51:36.719300 kernel: kvm_amd: Virtual VMLOAD VMSAVE supported Jan 30 13:51:36.719392 kernel: kvm_amd: Virtual GIF supported Jan 30 13:51:36.736284 kernel: EDAC MC: Ver: 3.0.0 Jan 30 13:51:36.774442 systemd[1]: Finished systemd-udev-settle.service - Wait for udev To Complete Device Initialization. Jan 30 13:51:36.807654 systemd[1]: Starting lvm2-activation-early.service - Activation of LVM2 logical volumes... Jan 30 13:51:36.815850 lvm[1428]: WARNING: Failed to connect to lvmetad. Falling back to device scanning. Jan 30 13:51:36.848719 systemd[1]: Finished lvm2-activation-early.service - Activation of LVM2 logical volumes. Jan 30 13:51:36.850521 systemd[1]: Reached target cryptsetup.target - Local Encrypted Volumes. Jan 30 13:51:36.851879 systemd[1]: Reached target sysinit.target - System Initialization. Jan 30 13:51:36.853324 systemd[1]: Started motdgen.path - Watch for update engine configuration changes. Jan 30 13:51:36.854848 systemd[1]: Started user-cloudinit@var-lib-flatcar\x2dinstall-user_data.path - Watch for a cloud-config at /var/lib/flatcar-install/user_data. Jan 30 13:51:36.856811 systemd[1]: Started logrotate.timer - Daily rotation of log files. Jan 30 13:51:36.858264 systemd[1]: Started mdadm.timer - Weekly check for MD array's redundancy information.. Jan 30 13:51:36.859814 systemd[1]: Started systemd-tmpfiles-clean.timer - Daily Cleanup of Temporary Directories. Jan 30 13:51:36.890740 systemd[1]: update-engine-stub.timer - Update Engine Stub Timer was skipped because of an unmet condition check (ConditionPathExists=/usr/.noupdate). Jan 30 13:51:36.890786 systemd[1]: Reached target paths.target - Path Units. Jan 30 13:51:36.891787 systemd[1]: Reached target timers.target - Timer Units. Jan 30 13:51:36.893685 systemd[1]: Listening on dbus.socket - D-Bus System Message Bus Socket. Jan 30 13:51:36.896762 systemd[1]: Starting docker.socket - Docker Socket for the API... Jan 30 13:51:36.907134 systemd[1]: Listening on sshd.socket - OpenSSH Server Socket. Jan 30 13:51:36.909896 systemd[1]: Starting lvm2-activation.service - Activation of LVM2 logical volumes... Jan 30 13:51:36.911716 systemd[1]: Listening on docker.socket - Docker Socket for the API. Jan 30 13:51:36.913113 systemd[1]: Reached target sockets.target - Socket Units. Jan 30 13:51:36.914264 systemd[1]: Reached target basic.target - Basic System. Jan 30 13:51:36.915470 systemd[1]: addon-config@oem.service - Configure Addon /oem was skipped because no trigger condition checks were met. Jan 30 13:51:36.915499 systemd[1]: addon-run@oem.service - Run Addon /oem was skipped because no trigger condition checks were met. Jan 30 13:51:36.916570 systemd[1]: Starting containerd.service - containerd container runtime... Jan 30 13:51:36.918671 systemd[1]: Starting dbus.service - D-Bus System Message Bus... Jan 30 13:51:36.921314 lvm[1432]: WARNING: Failed to connect to lvmetad. Falling back to device scanning. Jan 30 13:51:36.923320 systemd[1]: Starting enable-oem-cloudinit.service - Enable cloudinit... 
Jan 30 13:51:36.926528 systemd[1]: Starting extend-filesystems.service - Extend Filesystems... Jan 30 13:51:36.927577 systemd[1]: flatcar-setup-environment.service - Modifies /etc/environment for CoreOS was skipped because of an unmet condition check (ConditionPathExists=/oem/bin/flatcar-setup-environment). Jan 30 13:51:36.930328 jq[1435]: false Jan 30 13:51:36.931371 systemd[1]: Starting motdgen.service - Generate /run/flatcar/motd... Jan 30 13:51:36.932872 systemd[1]: Starting ssh-key-proc-cmdline.service - Install an ssh key from /proc/cmdline... Jan 30 13:51:36.935499 systemd[1]: Starting sshd-keygen.service - Generate sshd host keys... Jan 30 13:51:36.941392 systemd[1]: Starting systemd-logind.service - User Login Management... Jan 30 13:51:36.949724 systemd[1]: tcsd.service - TCG Core Services Daemon was skipped because of an unmet condition check (ConditionPathExists=/dev/tpm0). Jan 30 13:51:36.950338 systemd[1]: cgroup compatibility translation between legacy and unified hierarchy settings activated. See cgroup-compat debug messages for details. Jan 30 13:51:36.951498 systemd[1]: Starting update-engine.service - Update Engine... Jan 30 13:51:36.952555 systemd[1]: Starting update-ssh-keys-after-ignition.service - Run update-ssh-keys once after Ignition... Jan 30 13:51:36.957105 dbus-daemon[1434]: [system] SELinux support is enabled Jan 30 13:51:36.957344 systemd[1]: Started dbus.service - D-Bus System Message Bus. Jan 30 13:51:36.961551 systemd[1]: Finished lvm2-activation.service - Activation of LVM2 logical volumes. Jan 30 13:51:36.962621 jq[1450]: true Jan 30 13:51:36.963852 systemd[1]: enable-oem-cloudinit.service: Skipped due to 'exec-condition'. Jan 30 13:51:36.964068 systemd[1]: Condition check resulted in enable-oem-cloudinit.service - Enable cloudinit being skipped. Jan 30 13:51:36.964466 systemd[1]: motdgen.service: Deactivated successfully. Jan 30 13:51:36.964687 systemd[1]: Finished motdgen.service - Generate /run/flatcar/motd. Jan 30 13:51:36.966126 systemd[1]: ssh-key-proc-cmdline.service: Deactivated successfully. Jan 30 13:51:36.966369 systemd[1]: Finished ssh-key-proc-cmdline.service - Install an ssh key from /proc/cmdline. Jan 30 13:51:36.975974 extend-filesystems[1436]: Found loop3 Jan 30 13:51:36.976898 extend-filesystems[1436]: Found loop4 Jan 30 13:51:36.976898 extend-filesystems[1436]: Found loop5 Jan 30 13:51:36.976898 extend-filesystems[1436]: Found sr0 Jan 30 13:51:36.976898 extend-filesystems[1436]: Found vda Jan 30 13:51:36.976898 extend-filesystems[1436]: Found vda1 Jan 30 13:51:36.976898 extend-filesystems[1436]: Found vda2 Jan 30 13:51:36.976898 extend-filesystems[1436]: Found vda3 Jan 30 13:51:36.976898 extend-filesystems[1436]: Found usr Jan 30 13:51:36.976898 extend-filesystems[1436]: Found vda4 Jan 30 13:51:36.984176 extend-filesystems[1436]: Found vda6 Jan 30 13:51:36.984176 extend-filesystems[1436]: Found vda7 Jan 30 13:51:36.980836 systemd[1]: system-cloudinit@usr-share-oem-cloud\x2dconfig.yml.service - Load cloud-config from /usr/share/oem/cloud-config.yml was skipped because of an unmet condition check (ConditionFileNotEmpty=/usr/share/oem/cloud-config.yml). Jan 30 13:51:36.980883 systemd[1]: Reached target system-config.target - Load system-provided cloud configs. Jan 30 13:51:36.983451 systemd[1]: user-cloudinit-proc-cmdline.service - Load cloud-config from url defined in /proc/cmdline was skipped because of an unmet condition check (ConditionKernelCommandLine=cloud-config-url). 
Jan 30 13:51:36.983472 systemd[1]: Reached target user-config.target - Load user-provided cloud configs. Jan 30 13:51:36.990239 jq[1452]: true Jan 30 13:51:36.999310 update_engine[1447]: I20250130 13:51:36.997905 1447 main.cc:92] Flatcar Update Engine starting Jan 30 13:51:36.999310 update_engine[1447]: I20250130 13:51:36.999147 1447 update_check_scheduler.cc:74] Next update check in 6m31s Jan 30 13:51:36.999097 systemd[1]: Started update-engine.service - Update Engine. Jan 30 13:51:37.001309 extend-filesystems[1436]: Found vda9 Jan 30 13:51:37.001309 extend-filesystems[1436]: Checking size of /dev/vda9 Jan 30 13:51:37.004814 systemd[1]: Started locksmithd.service - Cluster reboot manager. Jan 30 13:51:37.008105 (ntainerd)[1455]: containerd.service: Referenced but unset environment variable evaluates to an empty string: TORCX_IMAGEDIR, TORCX_UNPACKDIR Jan 30 13:51:37.028273 extend-filesystems[1436]: Resized partition /dev/vda9 Jan 30 13:51:37.031746 extend-filesystems[1482]: resize2fs 1.47.1 (20-May-2024) Jan 30 13:51:37.036301 kernel: BTRFS warning: duplicate device /dev/vda3 devid 1 generation 38 scanned by (udev-worker) (1395) Jan 30 13:51:37.036273 systemd-logind[1440]: Watching system buttons on /dev/input/event1 (Power Button) Jan 30 13:51:37.036298 systemd-logind[1440]: Watching system buttons on /dev/input/event0 (AT Translated Set 2 keyboard) Jan 30 13:51:37.038652 systemd-logind[1440]: New seat seat0. Jan 30 13:51:37.041610 locksmithd[1464]: locksmithd starting currentOperation="UPDATE_STATUS_IDLE" strategy="reboot" Jan 30 13:51:37.048424 systemd[1]: Started systemd-logind.service - User Login Management. Jan 30 13:51:37.125749 sshd_keygen[1453]: ssh-keygen: generating new host keys: RSA ECDSA ED25519 Jan 30 13:51:37.138260 kernel: EXT4-fs (vda9): resizing filesystem from 553472 to 1864699 blocks Jan 30 13:51:37.150068 systemd[1]: Finished sshd-keygen.service - Generate sshd host keys. Jan 30 13:51:37.162629 systemd[1]: Starting issuegen.service - Generate /run/issue... Jan 30 13:51:37.171170 systemd[1]: issuegen.service: Deactivated successfully. Jan 30 13:51:37.171471 systemd[1]: Finished issuegen.service - Generate /run/issue. Jan 30 13:51:37.175919 systemd[1]: Starting systemd-user-sessions.service - Permit User Sessions... Jan 30 13:51:37.249497 systemd[1]: Finished systemd-user-sessions.service - Permit User Sessions. Jan 30 13:51:37.261477 systemd[1]: Started getty@tty1.service - Getty on tty1. Jan 30 13:51:37.263752 systemd[1]: Started serial-getty@ttyS0.service - Serial Getty on ttyS0. Jan 30 13:51:37.265092 systemd[1]: Reached target getty.target - Login Prompts. Jan 30 13:51:37.357962 systemd[1]: Created slice system-sshd.slice - Slice /system/sshd. Jan 30 13:51:37.365598 systemd[1]: Started sshd@0-10.0.0.127:22-10.0.0.1:47446.service - OpenSSH per-connection server daemon (10.0.0.1:47446). Jan 30 13:51:37.513268 kernel: EXT4-fs (vda9): resized filesystem to 1864699 Jan 30 13:51:37.790630 sshd[1507]: Connection closed by authenticating user core 10.0.0.1 port 47446 [preauth] Jan 30 13:51:37.644871 systemd[1]: sshd@0-10.0.0.127:22-10.0.0.1:47446.service: Deactivated successfully. Jan 30 13:51:37.725407 systemd-networkd[1391]: eth0: Gained IPv6LL Jan 30 13:51:37.791590 extend-filesystems[1482]: Filesystem at /dev/vda9 is mounted on /; on-line resizing required Jan 30 13:51:37.791590 extend-filesystems[1482]: old_desc_blocks = 1, new_desc_blocks = 1 Jan 30 13:51:37.791590 extend-filesystems[1482]: The filesystem on /dev/vda9 is now 1864699 (4k) blocks long. 
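Editor's note: the resize reported above by the kernel and by extend-filesystems is given in 4 KiB blocks; converting to bytes shows the root filesystem growing from roughly 2.1 GiB to about 7.1 GiB to fill /dev/vda9:

```python
# Block counts taken from the "EXT4-fs (vda9)" and extend-filesystems lines above.
BLOCK = 4096
old_blocks, new_blocks = 553472, 1864699

def gib(blocks: int) -> float:
    return blocks * BLOCK / 2**30

print(f"before: {gib(old_blocks):.2f} GiB")   # before: 2.11 GiB
print(f"after:  {gib(new_blocks):.2f} GiB")   # after:  7.11 GiB
```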
Jan 30 13:51:37.728330 systemd[1]: Finished systemd-networkd-wait-online.service - Wait for Network to be Configured. Jan 30 13:51:37.796857 containerd[1455]: time="2025-01-30T13:51:37.791622621Z" level=info msg="starting containerd" revision=174e0d1785eeda18dc2beba45e1d5a188771636b version=v1.7.21 Jan 30 13:51:37.798477 extend-filesystems[1436]: Resized filesystem in /dev/vda9 Jan 30 13:51:37.753333 systemd[1]: Reached target network-online.target - Network is Online. Jan 30 13:51:37.769577 systemd[1]: Starting coreos-metadata.service - QEMU metadata agent... Jan 30 13:51:37.798528 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent... Jan 30 13:51:37.804930 systemd[1]: Starting nvidia.service - NVIDIA Configure Service... Jan 30 13:51:37.807119 systemd[1]: extend-filesystems.service: Deactivated successfully. Jan 30 13:51:37.807415 systemd[1]: Finished extend-filesystems.service - Extend Filesystems. Jan 30 13:51:37.822157 containerd[1455]: time="2025-01-30T13:51:37.822100358Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.aufs\"..." type=io.containerd.snapshotter.v1 Jan 30 13:51:37.823953 containerd[1455]: time="2025-01-30T13:51:37.823910963Z" level=info msg="skip loading plugin \"io.containerd.snapshotter.v1.aufs\"..." error="aufs is not supported (modprobe aufs failed: exit status 1 \"modprobe: FATAL: Module aufs not found in directory /lib/modules/6.6.74-flatcar\\n\"): skip plugin" type=io.containerd.snapshotter.v1 Jan 30 13:51:37.823989 containerd[1455]: time="2025-01-30T13:51:37.823951279Z" level=info msg="loading plugin \"io.containerd.event.v1.exchange\"..." type=io.containerd.event.v1 Jan 30 13:51:37.823989 containerd[1455]: time="2025-01-30T13:51:37.823973080Z" level=info msg="loading plugin \"io.containerd.internal.v1.opt\"..." type=io.containerd.internal.v1 Jan 30 13:51:37.824243 containerd[1455]: time="2025-01-30T13:51:37.824193142Z" level=info msg="loading plugin \"io.containerd.warning.v1.deprecations\"..." type=io.containerd.warning.v1 Jan 30 13:51:37.824278 containerd[1455]: time="2025-01-30T13:51:37.824244068Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.blockfile\"..." type=io.containerd.snapshotter.v1 Jan 30 13:51:37.824881 containerd[1455]: time="2025-01-30T13:51:37.824351189Z" level=info msg="skip loading plugin \"io.containerd.snapshotter.v1.blockfile\"..." error="no scratch file generator: skip plugin" type=io.containerd.snapshotter.v1 Jan 30 13:51:37.824881 containerd[1455]: time="2025-01-30T13:51:37.824374763Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.btrfs\"..." type=io.containerd.snapshotter.v1 Jan 30 13:51:37.824881 containerd[1455]: time="2025-01-30T13:51:37.824605596Z" level=info msg="skip loading plugin \"io.containerd.snapshotter.v1.btrfs\"..." error="path /var/lib/containerd/io.containerd.snapshotter.v1.btrfs (ext4) must be a btrfs filesystem to be used with the btrfs snapshotter: skip plugin" type=io.containerd.snapshotter.v1 Jan 30 13:51:37.824881 containerd[1455]: time="2025-01-30T13:51:37.824624632Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.devmapper\"..." type=io.containerd.snapshotter.v1 Jan 30 13:51:37.824881 containerd[1455]: time="2025-01-30T13:51:37.824647314Z" level=info msg="skip loading plugin \"io.containerd.snapshotter.v1.devmapper\"..." 
error="devmapper not configured: skip plugin" type=io.containerd.snapshotter.v1 Jan 30 13:51:37.824881 containerd[1455]: time="2025-01-30T13:51:37.824661581Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.native\"..." type=io.containerd.snapshotter.v1 Jan 30 13:51:37.824881 containerd[1455]: time="2025-01-30T13:51:37.824772960Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.overlayfs\"..." type=io.containerd.snapshotter.v1 Jan 30 13:51:37.825305 containerd[1455]: time="2025-01-30T13:51:37.825066791Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.zfs\"..." type=io.containerd.snapshotter.v1 Jan 30 13:51:37.825305 containerd[1455]: time="2025-01-30T13:51:37.825218636Z" level=info msg="skip loading plugin \"io.containerd.snapshotter.v1.zfs\"..." error="path /var/lib/containerd/io.containerd.snapshotter.v1.zfs must be a zfs filesystem to be used with the zfs snapshotter: skip plugin" type=io.containerd.snapshotter.v1 Jan 30 13:51:37.825305 containerd[1455]: time="2025-01-30T13:51:37.825254864Z" level=info msg="loading plugin \"io.containerd.content.v1.content\"..." type=io.containerd.content.v1 Jan 30 13:51:37.825402 containerd[1455]: time="2025-01-30T13:51:37.825369949Z" level=info msg="loading plugin \"io.containerd.metadata.v1.bolt\"..." type=io.containerd.metadata.v1 Jan 30 13:51:37.825476 containerd[1455]: time="2025-01-30T13:51:37.825446753Z" level=info msg="metadata content store policy set" policy=shared Jan 30 13:51:37.829082 systemd[1]: coreos-metadata.service: Deactivated successfully. Jan 30 13:51:37.829342 systemd[1]: Finished coreos-metadata.service - QEMU metadata agent. Jan 30 13:51:37.838711 systemd[1]: packet-phone-home.service - Report Success to Packet was skipped because no trigger condition checks were met. Jan 30 13:51:37.887753 bash[1485]: Updated "/home/core/.ssh/authorized_keys" Jan 30 13:51:37.889967 systemd[1]: Finished update-ssh-keys-after-ignition.service - Run update-ssh-keys once after Ignition. Jan 30 13:51:37.892210 systemd[1]: Finished nvidia.service - NVIDIA Configure Service. Jan 30 13:51:37.895190 systemd[1]: sshkeys.service was skipped because no trigger condition checks were met. Jan 30 13:51:37.903143 containerd[1455]: time="2025-01-30T13:51:37.903083220Z" level=info msg="loading plugin \"io.containerd.gc.v1.scheduler\"..." type=io.containerd.gc.v1 Jan 30 13:51:37.903246 containerd[1455]: time="2025-01-30T13:51:37.903176024Z" level=info msg="loading plugin \"io.containerd.differ.v1.walking\"..." type=io.containerd.differ.v1 Jan 30 13:51:37.903246 containerd[1455]: time="2025-01-30T13:51:37.903203084Z" level=info msg="loading plugin \"io.containerd.lease.v1.manager\"..." type=io.containerd.lease.v1 Jan 30 13:51:37.903320 containerd[1455]: time="2025-01-30T13:51:37.903248099Z" level=info msg="loading plugin \"io.containerd.streaming.v1.manager\"..." type=io.containerd.streaming.v1 Jan 30 13:51:37.903320 containerd[1455]: time="2025-01-30T13:51:37.903269739Z" level=info msg="loading plugin \"io.containerd.runtime.v1.linux\"..." type=io.containerd.runtime.v1 Jan 30 13:51:37.903498 containerd[1455]: time="2025-01-30T13:51:37.903462491Z" level=info msg="loading plugin \"io.containerd.monitor.v1.cgroups\"..." type=io.containerd.monitor.v1 Jan 30 13:51:37.903918 containerd[1455]: time="2025-01-30T13:51:37.903859746Z" level=info msg="loading plugin \"io.containerd.runtime.v2.task\"..." 
type=io.containerd.runtime.v2 Jan 30 13:51:37.904217 containerd[1455]: time="2025-01-30T13:51:37.904180257Z" level=info msg="loading plugin \"io.containerd.runtime.v2.shim\"..." type=io.containerd.runtime.v2 Jan 30 13:51:37.904261 containerd[1455]: time="2025-01-30T13:51:37.904211796Z" level=info msg="loading plugin \"io.containerd.sandbox.store.v1.local\"..." type=io.containerd.sandbox.store.v1 Jan 30 13:51:37.904261 containerd[1455]: time="2025-01-30T13:51:37.904252803Z" level=info msg="loading plugin \"io.containerd.sandbox.controller.v1.local\"..." type=io.containerd.sandbox.controller.v1 Jan 30 13:51:37.904301 containerd[1455]: time="2025-01-30T13:51:37.904288810Z" level=info msg="loading plugin \"io.containerd.service.v1.containers-service\"..." type=io.containerd.service.v1 Jan 30 13:51:37.904328 containerd[1455]: time="2025-01-30T13:51:37.904307035Z" level=info msg="loading plugin \"io.containerd.service.v1.content-service\"..." type=io.containerd.service.v1 Jan 30 13:51:37.904328 containerd[1455]: time="2025-01-30T13:51:37.904323455Z" level=info msg="loading plugin \"io.containerd.service.v1.diff-service\"..." type=io.containerd.service.v1 Jan 30 13:51:37.904388 containerd[1455]: time="2025-01-30T13:51:37.904341329Z" level=info msg="loading plugin \"io.containerd.service.v1.images-service\"..." type=io.containerd.service.v1 Jan 30 13:51:37.904388 containerd[1455]: time="2025-01-30T13:51:37.904367197Z" level=info msg="loading plugin \"io.containerd.service.v1.introspection-service\"..." type=io.containerd.service.v1 Jan 30 13:51:37.904388 containerd[1455]: time="2025-01-30T13:51:37.904383769Z" level=info msg="loading plugin \"io.containerd.service.v1.namespaces-service\"..." type=io.containerd.service.v1 Jan 30 13:51:37.904454 containerd[1455]: time="2025-01-30T13:51:37.904400660Z" level=info msg="loading plugin \"io.containerd.service.v1.snapshots-service\"..." type=io.containerd.service.v1 Jan 30 13:51:37.904454 containerd[1455]: time="2025-01-30T13:51:37.904418123Z" level=info msg="loading plugin \"io.containerd.service.v1.tasks-service\"..." type=io.containerd.service.v1 Jan 30 13:51:37.904454 containerd[1455]: time="2025-01-30T13:51:37.904446376Z" level=info msg="loading plugin \"io.containerd.grpc.v1.containers\"..." type=io.containerd.grpc.v1 Jan 30 13:51:37.904516 containerd[1455]: time="2025-01-30T13:51:37.904465572Z" level=info msg="loading plugin \"io.containerd.grpc.v1.content\"..." type=io.containerd.grpc.v1 Jan 30 13:51:37.904516 containerd[1455]: time="2025-01-30T13:51:37.904484598Z" level=info msg="loading plugin \"io.containerd.grpc.v1.diff\"..." type=io.containerd.grpc.v1 Jan 30 13:51:37.904516 containerd[1455]: time="2025-01-30T13:51:37.904501970Z" level=info msg="loading plugin \"io.containerd.grpc.v1.events\"..." type=io.containerd.grpc.v1 Jan 30 13:51:37.904580 containerd[1455]: time="2025-01-30T13:51:37.904519503Z" level=info msg="loading plugin \"io.containerd.grpc.v1.images\"..." type=io.containerd.grpc.v1 Jan 30 13:51:37.904580 containerd[1455]: time="2025-01-30T13:51:37.904538709Z" level=info msg="loading plugin \"io.containerd.grpc.v1.introspection\"..." type=io.containerd.grpc.v1 Jan 30 13:51:37.904580 containerd[1455]: time="2025-01-30T13:51:37.904554940Z" level=info msg="loading plugin \"io.containerd.grpc.v1.leases\"..." type=io.containerd.grpc.v1 Jan 30 13:51:37.904580 containerd[1455]: time="2025-01-30T13:51:37.904572222Z" level=info msg="loading plugin \"io.containerd.grpc.v1.namespaces\"..." 
type=io.containerd.grpc.v1 Jan 30 13:51:37.904647 containerd[1455]: time="2025-01-30T13:51:37.904589905Z" level=info msg="loading plugin \"io.containerd.grpc.v1.sandbox-controllers\"..." type=io.containerd.grpc.v1 Jan 30 13:51:37.904647 containerd[1455]: time="2025-01-30T13:51:37.904612698Z" level=info msg="loading plugin \"io.containerd.grpc.v1.sandboxes\"..." type=io.containerd.grpc.v1 Jan 30 13:51:37.904647 containerd[1455]: time="2025-01-30T13:51:37.904628427Z" level=info msg="loading plugin \"io.containerd.grpc.v1.snapshots\"..." type=io.containerd.grpc.v1 Jan 30 13:51:37.904703 containerd[1455]: time="2025-01-30T13:51:37.904650889Z" level=info msg="loading plugin \"io.containerd.grpc.v1.streaming\"..." type=io.containerd.grpc.v1 Jan 30 13:51:37.904703 containerd[1455]: time="2025-01-30T13:51:37.904669935Z" level=info msg="loading plugin \"io.containerd.grpc.v1.tasks\"..." type=io.containerd.grpc.v1 Jan 30 13:51:37.904703 containerd[1455]: time="2025-01-30T13:51:37.904690073Z" level=info msg="loading plugin \"io.containerd.transfer.v1.local\"..." type=io.containerd.transfer.v1 Jan 30 13:51:37.904764 containerd[1455]: time="2025-01-30T13:51:37.904718316Z" level=info msg="loading plugin \"io.containerd.grpc.v1.transfer\"..." type=io.containerd.grpc.v1 Jan 30 13:51:37.904764 containerd[1455]: time="2025-01-30T13:51:37.904737642Z" level=info msg="loading plugin \"io.containerd.grpc.v1.version\"..." type=io.containerd.grpc.v1 Jan 30 13:51:37.904764 containerd[1455]: time="2025-01-30T13:51:37.904755135Z" level=info msg="loading plugin \"io.containerd.internal.v1.restart\"..." type=io.containerd.internal.v1 Jan 30 13:51:37.905324 containerd[1455]: time="2025-01-30T13:51:37.905277174Z" level=info msg="loading plugin \"io.containerd.tracing.processor.v1.otlp\"..." type=io.containerd.tracing.processor.v1 Jan 30 13:51:37.905713 containerd[1455]: time="2025-01-30T13:51:37.905347566Z" level=info msg="skip loading plugin \"io.containerd.tracing.processor.v1.otlp\"..." error="skip plugin: tracing endpoint not configured" type=io.containerd.tracing.processor.v1 Jan 30 13:51:37.905713 containerd[1455]: time="2025-01-30T13:51:37.905383363Z" level=info msg="loading plugin \"io.containerd.internal.v1.tracing\"..." type=io.containerd.internal.v1 Jan 30 13:51:37.905713 containerd[1455]: time="2025-01-30T13:51:37.905417357Z" level=info msg="skip loading plugin \"io.containerd.internal.v1.tracing\"..." error="skip plugin: tracing endpoint not configured" type=io.containerd.internal.v1 Jan 30 13:51:37.905713 containerd[1455]: time="2025-01-30T13:51:37.905434860Z" level=info msg="loading plugin \"io.containerd.grpc.v1.healthcheck\"..." type=io.containerd.grpc.v1 Jan 30 13:51:37.905713 containerd[1455]: time="2025-01-30T13:51:37.905460598Z" level=info msg="loading plugin \"io.containerd.nri.v1.nri\"..." type=io.containerd.nri.v1 Jan 30 13:51:37.905713 containerd[1455]: time="2025-01-30T13:51:37.905497207Z" level=info msg="NRI interface is disabled by configuration." Jan 30 13:51:37.905713 containerd[1455]: time="2025-01-30T13:51:37.905515180Z" level=info msg="loading plugin \"io.containerd.grpc.v1.cri\"..." 
type=io.containerd.grpc.v1 Jan 30 13:51:37.906653 containerd[1455]: time="2025-01-30T13:51:37.906508714Z" level=info msg="Start cri plugin with config {PluginConfig:{ContainerdConfig:{Snapshotter:overlayfs DefaultRuntimeName:runc DefaultRuntime:{Type: Path: Engine: PodAnnotations:[] ContainerAnnotations:[] Root: Options:map[] PrivilegedWithoutHostDevices:false PrivilegedWithoutHostDevicesAllDevicesAllowed:false BaseRuntimeSpec: NetworkPluginConfDir: NetworkPluginMaxConfNum:0 Snapshotter: SandboxMode:} UntrustedWorkloadRuntime:{Type: Path: Engine: PodAnnotations:[] ContainerAnnotations:[] Root: Options:map[] PrivilegedWithoutHostDevices:false PrivilegedWithoutHostDevicesAllDevicesAllowed:false BaseRuntimeSpec: NetworkPluginConfDir: NetworkPluginMaxConfNum:0 Snapshotter: SandboxMode:} Runtimes:map[runc:{Type:io.containerd.runc.v2 Path: Engine: PodAnnotations:[] ContainerAnnotations:[] Root: Options:map[SystemdCgroup:true] PrivilegedWithoutHostDevices:false PrivilegedWithoutHostDevicesAllDevicesAllowed:false BaseRuntimeSpec: NetworkPluginConfDir: NetworkPluginMaxConfNum:0 Snapshotter: SandboxMode:podsandbox}] NoPivot:false DisableSnapshotAnnotations:true DiscardUnpackedLayers:false IgnoreBlockIONotEnabledErrors:false IgnoreRdtNotEnabledErrors:false} CniConfig:{NetworkPluginBinDir:/opt/cni/bin NetworkPluginConfDir:/etc/cni/net.d NetworkPluginMaxConfNum:1 NetworkPluginSetupSerially:false NetworkPluginConfTemplate: IPPreference:} Registry:{ConfigPath: Mirrors:map[] Configs:map[] Auths:map[] Headers:map[]} ImageDecryption:{KeyModel:node} DisableTCPService:true StreamServerAddress:127.0.0.1 StreamServerPort:0 StreamIdleTimeout:4h0m0s EnableSelinux:true SelinuxCategoryRange:1024 SandboxImage:registry.k8s.io/pause:3.8 StatsCollectPeriod:10 SystemdCgroup:false EnableTLSStreaming:false X509KeyPairStreaming:{TLSCertFile: TLSKeyFile:} MaxContainerLogLineSize:16384 DisableCgroup:false DisableApparmor:false RestrictOOMScoreAdj:false MaxConcurrentDownloads:3 DisableProcMount:false UnsetSeccompProfile: TolerateMissingHugetlbController:true DisableHugetlbController:true DeviceOwnershipFromSecurityContext:false IgnoreImageDefinedVolumes:false NetNSMountsUnderStateDir:false EnableUnprivilegedPorts:false EnableUnprivilegedICMP:false EnableCDI:false CDISpecDirs:[/etc/cdi /var/run/cdi] ImagePullProgressTimeout:5m0s DrainExecSyncIOTimeout:0s ImagePullWithSyncFs:false IgnoreDeprecationWarnings:[]} ContainerdRootDir:/var/lib/containerd ContainerdEndpoint:/run/containerd/containerd.sock RootDir:/var/lib/containerd/io.containerd.grpc.v1.cri StateDir:/run/containerd/io.containerd.grpc.v1.cri}" Jan 30 13:51:37.906653 containerd[1455]: time="2025-01-30T13:51:37.906615764Z" level=info msg="Connect containerd service" Jan 30 13:51:37.906956 containerd[1455]: time="2025-01-30T13:51:37.906688571Z" level=info msg="using legacy CRI server" Jan 30 13:51:37.906956 containerd[1455]: time="2025-01-30T13:51:37.906718527Z" level=info msg="using experimental NRI integration - disable nri plugin to prevent this" Jan 30 13:51:37.907007 containerd[1455]: time="2025-01-30T13:51:37.906970219Z" level=info msg="Get image filesystem path \"/var/lib/containerd/io.containerd.snapshotter.v1.overlayfs\"" Jan 30 13:51:37.907818 containerd[1455]: time="2025-01-30T13:51:37.907769819Z" level=error msg="failed to load cni during init, please check CRI plugin status before setting up network for pods" error="cni config load failed: no network config found in /etc/cni/net.d: cni plugin not initialized: failed to load cni config" Jan 30 13:51:37.907986 
containerd[1455]: time="2025-01-30T13:51:37.907927484Z" level=info msg="Start subscribing containerd event" Jan 30 13:51:37.908051 containerd[1455]: time="2025-01-30T13:51:37.907989721Z" level=info msg="Start recovering state" Jan 30 13:51:37.908117 containerd[1455]: time="2025-01-30T13:51:37.908097453Z" level=info msg="Start event monitor" Jan 30 13:51:37.908139 containerd[1455]: time="2025-01-30T13:51:37.908118753Z" level=info msg="Start snapshots syncer" Jan 30 13:51:37.908139 containerd[1455]: time="2025-01-30T13:51:37.908129433Z" level=info msg="Start cni network conf syncer for default" Jan 30 13:51:37.908176 containerd[1455]: time="2025-01-30T13:51:37.908148378Z" level=info msg="Start streaming server" Jan 30 13:51:37.908243 containerd[1455]: time="2025-01-30T13:51:37.908203342Z" level=info msg=serving... address=/run/containerd/containerd.sock.ttrpc Jan 30 13:51:37.908313 containerd[1455]: time="2025-01-30T13:51:37.908288591Z" level=info msg=serving... address=/run/containerd/containerd.sock Jan 30 13:51:37.908389 containerd[1455]: time="2025-01-30T13:51:37.908361528Z" level=info msg="containerd successfully booted in 0.200481s" Jan 30 13:51:37.908474 systemd[1]: Started containerd.service - containerd container runtime. Jan 30 13:51:38.609408 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent. Jan 30 13:51:38.611127 systemd[1]: Reached target multi-user.target - Multi-User System. Jan 30 13:51:38.613368 systemd[1]: Startup finished in 760ms (kernel) + 5.015s (initrd) + 4.069s (userspace) = 9.845s. Jan 30 13:51:38.633756 (kubelet)[1546]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS, KUBELET_KUBEADM_ARGS Jan 30 13:51:39.031344 kubelet[1546]: E0130 13:51:39.031152 1546 run.go:72] "command failed" err="failed to load kubelet config file, path: /var/lib/kubelet/config.yaml, error: failed to load Kubelet config file /var/lib/kubelet/config.yaml, error failed to read kubelet config file \"/var/lib/kubelet/config.yaml\", error: open /var/lib/kubelet/config.yaml: no such file or directory" Jan 30 13:51:39.034940 systemd[1]: kubelet.service: Main process exited, code=exited, status=1/FAILURE Jan 30 13:51:39.035147 systemd[1]: kubelet.service: Failed with result 'exit-code'. Jan 30 13:51:47.654125 systemd[1]: Started sshd@1-10.0.0.127:22-10.0.0.1:34608.service - OpenSSH per-connection server daemon (10.0.0.1:34608). Jan 30 13:51:47.688696 sshd[1559]: Accepted publickey for core from 10.0.0.1 port 34608 ssh2: RSA SHA256:5CVmNz7KcUi5XiFI6hIHcAt9PUhPYHR+qHQIWL4Xluc Jan 30 13:51:47.690602 sshd[1559]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Jan 30 13:51:47.699427 systemd-logind[1440]: New session 1 of user core. Jan 30 13:51:47.700913 systemd[1]: Created slice user-500.slice - User Slice of UID 500. Jan 30 13:51:47.715480 systemd[1]: Starting user-runtime-dir@500.service - User Runtime Directory /run/user/500... Jan 30 13:51:47.727368 systemd[1]: Finished user-runtime-dir@500.service - User Runtime Directory /run/user/500. Jan 30 13:51:47.730240 systemd[1]: Starting user@500.service - User Manager for UID 500... Jan 30 13:51:47.738615 (systemd)[1563]: pam_unix(systemd-user:session): session opened for user core(uid=500) by (uid=0) Jan 30 13:51:47.838613 systemd[1563]: Queued start job for default target default.target. Jan 30 13:51:47.850464 systemd[1563]: Created slice app.slice - User Application Slice. 
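The kubelet start above exits with status 1 because /var/lib/kubelet/config.yaml does not exist yet; systemd schedules a restart further down, and the second start at 13:51:50 succeeds. A minimal sketch in plain Python (not Flatcar or kubeadm tooling) of the same precondition, handy when reproducing or checking this state by hand:

    # Check the precondition whose absence makes kubelet exit above.
    # The path is the one quoted in the kubelet error message; the file is
    # normally written when the node is configured (e.g. by kubeadm init/join).
    from pathlib import Path
    import sys

    config = Path("/var/lib/kubelet/config.yaml")

    if not config.is_file():
        sys.exit(f"kubelet config missing: {config}: no such file or directory")
    print(f"kubelet config present: {config} ({config.stat().st_size} bytes)")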
Jan 30 13:51:47.850488 systemd[1563]: Reached target paths.target - Paths. Jan 30 13:51:47.850501 systemd[1563]: Reached target timers.target - Timers. Jan 30 13:51:47.851973 systemd[1563]: Starting dbus.socket - D-Bus User Message Bus Socket... Jan 30 13:51:47.863056 systemd[1563]: Listening on dbus.socket - D-Bus User Message Bus Socket. Jan 30 13:51:47.863295 systemd[1563]: Reached target sockets.target - Sockets. Jan 30 13:51:47.863323 systemd[1563]: Reached target basic.target - Basic System. Jan 30 13:51:47.863394 systemd[1563]: Reached target default.target - Main User Target. Jan 30 13:51:47.863446 systemd[1563]: Startup finished in 118ms. Jan 30 13:51:47.863527 systemd[1]: Started user@500.service - User Manager for UID 500. Jan 30 13:51:47.865182 systemd[1]: Started session-1.scope - Session 1 of User core. Jan 30 13:51:47.930300 systemd[1]: Started sshd@2-10.0.0.127:22-10.0.0.1:34610.service - OpenSSH per-connection server daemon (10.0.0.1:34610). Jan 30 13:51:47.965034 sshd[1574]: Accepted publickey for core from 10.0.0.1 port 34610 ssh2: RSA SHA256:5CVmNz7KcUi5XiFI6hIHcAt9PUhPYHR+qHQIWL4Xluc Jan 30 13:51:47.966419 sshd[1574]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Jan 30 13:51:47.970995 systemd-logind[1440]: New session 2 of user core. Jan 30 13:51:47.980386 systemd[1]: Started session-2.scope - Session 2 of User core. Jan 30 13:51:48.034662 sshd[1574]: pam_unix(sshd:session): session closed for user core Jan 30 13:51:48.048751 systemd[1]: sshd@2-10.0.0.127:22-10.0.0.1:34610.service: Deactivated successfully. Jan 30 13:51:48.050438 systemd[1]: session-2.scope: Deactivated successfully. Jan 30 13:51:48.051744 systemd-logind[1440]: Session 2 logged out. Waiting for processes to exit. Jan 30 13:51:48.053088 systemd[1]: Started sshd@3-10.0.0.127:22-10.0.0.1:34624.service - OpenSSH per-connection server daemon (10.0.0.1:34624). Jan 30 13:51:48.053980 systemd-logind[1440]: Removed session 2. Jan 30 13:51:48.087388 sshd[1581]: Accepted publickey for core from 10.0.0.1 port 34624 ssh2: RSA SHA256:5CVmNz7KcUi5XiFI6hIHcAt9PUhPYHR+qHQIWL4Xluc Jan 30 13:51:48.088757 sshd[1581]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Jan 30 13:51:48.092992 systemd-logind[1440]: New session 3 of user core. Jan 30 13:51:48.109345 systemd[1]: Started session-3.scope - Session 3 of User core. Jan 30 13:51:48.158217 sshd[1581]: pam_unix(sshd:session): session closed for user core Jan 30 13:51:48.182111 systemd[1]: sshd@3-10.0.0.127:22-10.0.0.1:34624.service: Deactivated successfully. Jan 30 13:51:48.184043 systemd[1]: session-3.scope: Deactivated successfully. Jan 30 13:51:48.185366 systemd-logind[1440]: Session 3 logged out. Waiting for processes to exit. Jan 30 13:51:48.195450 systemd[1]: Started sshd@4-10.0.0.127:22-10.0.0.1:34626.service - OpenSSH per-connection server daemon (10.0.0.1:34626). Jan 30 13:51:48.196400 systemd-logind[1440]: Removed session 3. Jan 30 13:51:48.224653 sshd[1588]: Accepted publickey for core from 10.0.0.1 port 34626 ssh2: RSA SHA256:5CVmNz7KcUi5XiFI6hIHcAt9PUhPYHR+qHQIWL4Xluc Jan 30 13:51:48.226039 sshd[1588]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Jan 30 13:51:48.229812 systemd-logind[1440]: New session 4 of user core. Jan 30 13:51:48.239337 systemd[1]: Started session-4.scope - Session 4 of User core. 
Jan 30 13:51:48.293092 sshd[1588]: pam_unix(sshd:session): session closed for user core Jan 30 13:51:48.309024 systemd[1]: sshd@4-10.0.0.127:22-10.0.0.1:34626.service: Deactivated successfully. Jan 30 13:51:48.310851 systemd[1]: session-4.scope: Deactivated successfully. Jan 30 13:51:48.312497 systemd-logind[1440]: Session 4 logged out. Waiting for processes to exit. Jan 30 13:51:48.323498 systemd[1]: Started sshd@5-10.0.0.127:22-10.0.0.1:34642.service - OpenSSH per-connection server daemon (10.0.0.1:34642). Jan 30 13:51:48.324571 systemd-logind[1440]: Removed session 4. Jan 30 13:51:48.352029 sshd[1595]: Accepted publickey for core from 10.0.0.1 port 34642 ssh2: RSA SHA256:5CVmNz7KcUi5XiFI6hIHcAt9PUhPYHR+qHQIWL4Xluc Jan 30 13:51:48.353341 sshd[1595]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Jan 30 13:51:48.357144 systemd-logind[1440]: New session 5 of user core. Jan 30 13:51:48.363369 systemd[1]: Started session-5.scope - Session 5 of User core. Jan 30 13:51:48.493515 sudo[1598]: core : PWD=/home/core ; USER=root ; COMMAND=/usr/sbin/setenforce 1 Jan 30 13:51:48.493867 sudo[1598]: pam_unix(sudo:session): session opened for user root(uid=0) by core(uid=500) Jan 30 13:51:48.512180 sudo[1598]: pam_unix(sudo:session): session closed for user root Jan 30 13:51:48.514129 sshd[1595]: pam_unix(sshd:session): session closed for user core Jan 30 13:51:48.534754 systemd[1]: sshd@5-10.0.0.127:22-10.0.0.1:34642.service: Deactivated successfully. Jan 30 13:51:48.536260 systemd[1]: session-5.scope: Deactivated successfully. Jan 30 13:51:48.537478 systemd-logind[1440]: Session 5 logged out. Waiting for processes to exit. Jan 30 13:51:48.538851 systemd[1]: Started sshd@6-10.0.0.127:22-10.0.0.1:34650.service - OpenSSH per-connection server daemon (10.0.0.1:34650). Jan 30 13:51:48.539495 systemd-logind[1440]: Removed session 5. Jan 30 13:51:48.592461 sshd[1603]: Accepted publickey for core from 10.0.0.1 port 34650 ssh2: RSA SHA256:5CVmNz7KcUi5XiFI6hIHcAt9PUhPYHR+qHQIWL4Xluc Jan 30 13:51:48.594006 sshd[1603]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Jan 30 13:51:48.597409 systemd-logind[1440]: New session 6 of user core. Jan 30 13:51:48.611332 systemd[1]: Started session-6.scope - Session 6 of User core. Jan 30 13:51:48.663651 sudo[1607]: core : PWD=/home/core ; USER=root ; COMMAND=/usr/bin/rm -rf /etc/audit/rules.d/80-selinux.rules /etc/audit/rules.d/99-default.rules Jan 30 13:51:48.663978 sudo[1607]: pam_unix(sudo:session): session opened for user root(uid=0) by core(uid=500) Jan 30 13:51:48.667103 sudo[1607]: pam_unix(sudo:session): session closed for user root Jan 30 13:51:48.672483 sudo[1606]: core : PWD=/home/core ; USER=root ; COMMAND=/usr/bin/systemctl restart audit-rules Jan 30 13:51:48.672805 sudo[1606]: pam_unix(sudo:session): session opened for user root(uid=0) by core(uid=500) Jan 30 13:51:48.690433 systemd[1]: Stopping audit-rules.service - Load Security Auditing Rules... Jan 30 13:51:48.691949 auditctl[1610]: No rules Jan 30 13:51:48.693144 systemd[1]: audit-rules.service: Deactivated successfully. Jan 30 13:51:48.693427 systemd[1]: Stopped audit-rules.service - Load Security Auditing Rules. Jan 30 13:51:48.695155 systemd[1]: Starting audit-rules.service - Load Security Auditing Rules... Jan 30 13:51:48.730595 augenrules[1628]: No rules Jan 30 13:51:48.732535 systemd[1]: Finished audit-rules.service - Load Security Auditing Rules. 
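The stretch above is a burst of short SSH sessions from 10.0.0.1: for each one, sshd accepts the core user's public key, pam_unix opens and closes a session, and systemd tears the per-connection service down again. A small Python sketch for pairing those open/close events and measuring session length from journal lines of this shape; the two sample lines are copied verbatim from the log above, and the year is an assumption since journal timestamps do not carry one:

    # Pair pam_unix(sshd:session) open/close events and print session durations.
    import re
    from datetime import datetime

    LINE = re.compile(
        r"^(?P<ts>\w{3} \d+ [\d:.]+) sshd\[(?P<pid>\d+)\]: "
        r"pam_unix\(sshd:session\): session (?P<event>opened|closed) for user core"
    )

    def parse_ts(ts: str) -> datetime:
        return datetime.strptime(f"2025 {ts}", "%Y %b %d %H:%M:%S.%f")

    sample = [
        "Jan 30 13:51:47.966419 sshd[1574]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)",
        "Jan 30 13:51:48.034662 sshd[1574]: pam_unix(sshd:session): session closed for user core",
    ]

    opened = {}
    for line in sample:
        m = LINE.match(line)
        if not m:
            continue
        if m["event"] == "opened":
            opened[m["pid"]] = parse_ts(m["ts"])
        elif m["pid"] in opened:
            duration = parse_ts(m["ts"]) - opened.pop(m["pid"])
            print(f"sshd[{m['pid']}]: session lasted {duration.total_seconds():.3f}s")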
Jan 30 13:51:48.733751 sudo[1606]: pam_unix(sudo:session): session closed for user root Jan 30 13:51:48.735540 sshd[1603]: pam_unix(sshd:session): session closed for user core Jan 30 13:51:48.745657 systemd[1]: sshd@6-10.0.0.127:22-10.0.0.1:34650.service: Deactivated successfully. Jan 30 13:51:48.747216 systemd[1]: session-6.scope: Deactivated successfully. Jan 30 13:51:48.748493 systemd-logind[1440]: Session 6 logged out. Waiting for processes to exit. Jan 30 13:51:48.749613 systemd[1]: Started sshd@7-10.0.0.127:22-10.0.0.1:34662.service - OpenSSH per-connection server daemon (10.0.0.1:34662). Jan 30 13:51:48.750402 systemd-logind[1440]: Removed session 6. Jan 30 13:51:48.783612 sshd[1636]: Accepted publickey for core from 10.0.0.1 port 34662 ssh2: RSA SHA256:5CVmNz7KcUi5XiFI6hIHcAt9PUhPYHR+qHQIWL4Xluc Jan 30 13:51:48.784959 sshd[1636]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Jan 30 13:51:48.788484 systemd-logind[1440]: New session 7 of user core. Jan 30 13:51:48.802338 systemd[1]: Started session-7.scope - Session 7 of User core. Jan 30 13:51:48.855172 sudo[1639]: core : PWD=/home/core ; USER=root ; COMMAND=/home/core/install.sh Jan 30 13:51:48.855520 sudo[1639]: pam_unix(sudo:session): session opened for user root(uid=0) by core(uid=500) Jan 30 13:51:48.884840 systemd[1]: Starting coreos-metadata.service - QEMU metadata agent... Jan 30 13:51:48.904414 systemd[1]: coreos-metadata.service: Deactivated successfully. Jan 30 13:51:48.904762 systemd[1]: Finished coreos-metadata.service - QEMU metadata agent. Jan 30 13:51:49.276375 systemd[1]: kubelet.service: Scheduled restart job, restart counter is at 1. Jan 30 13:51:49.286453 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent... Jan 30 13:51:49.368713 systemd[1]: kubelet.service: Control process exited, code=killed, status=15/TERM Jan 30 13:51:49.368839 systemd[1]: kubelet.service: Failed with result 'signal'. Jan 30 13:51:49.369173 systemd[1]: Stopped kubelet.service - kubelet: The Kubernetes Node Agent. Jan 30 13:51:49.378492 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent... Jan 30 13:51:49.405966 systemd[1]: Reloading requested from client PID 1684 ('systemctl') (unit session-7.scope)... Jan 30 13:51:49.405985 systemd[1]: Reloading... Jan 30 13:51:49.483289 zram_generator::config[1722]: No configuration found. Jan 30 13:51:50.415468 systemd[1]: /usr/lib/systemd/system/docker.socket:6: ListenStream= references a path below legacy directory /var/run/, updating /var/run/docker.sock → /run/docker.sock; please update the unit file accordingly. Jan 30 13:51:50.491994 systemd[1]: Reloading finished in 1085 ms. Jan 30 13:51:50.541723 systemd[1]: Stopping kubelet.service - kubelet: The Kubernetes Node Agent... Jan 30 13:51:50.544877 systemd[1]: kubelet.service: Deactivated successfully. Jan 30 13:51:50.545115 systemd[1]: Stopped kubelet.service - kubelet: The Kubernetes Node Agent. Jan 30 13:51:50.555565 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent... Jan 30 13:51:50.709519 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent. Jan 30 13:51:50.714779 (kubelet)[1772]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS Jan 30 13:51:50.754362 kubelet[1772]: Flag --container-runtime-endpoint has been deprecated, This parameter should be set via the config file specified by the Kubelet's --config flag. 
See https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/ for more information. Jan 30 13:51:50.754362 kubelet[1772]: Flag --pod-infra-container-image has been deprecated, will be removed in 1.35. Image garbage collector will get sandbox image information from CRI. Jan 30 13:51:50.754362 kubelet[1772]: Flag --volume-plugin-dir has been deprecated, This parameter should be set via the config file specified by the Kubelet's --config flag. See https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/ for more information. Jan 30 13:51:50.754770 kubelet[1772]: I0130 13:51:50.754487 1772 server.go:215] "--pod-infra-container-image will not be pruned by the image garbage collector in kubelet and should also be set in the remote runtime" Jan 30 13:51:51.123059 kubelet[1772]: I0130 13:51:51.122937 1772 server.go:520] "Kubelet version" kubeletVersion="v1.32.0" Jan 30 13:51:51.123059 kubelet[1772]: I0130 13:51:51.122966 1772 server.go:522] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK="" Jan 30 13:51:51.123286 kubelet[1772]: I0130 13:51:51.123255 1772 server.go:954] "Client rotation is on, will bootstrap in background" Jan 30 13:51:51.148091 kubelet[1772]: I0130 13:51:51.148052 1772 dynamic_cafile_content.go:161] "Starting controller" name="client-ca-bundle::/etc/kubernetes/pki/ca.crt" Jan 30 13:51:51.155724 kubelet[1772]: E0130 13:51:51.155667 1772 log.go:32] "RuntimeConfig from runtime service failed" err="rpc error: code = Unimplemented desc = unknown method RuntimeConfig for service runtime.v1.RuntimeService" Jan 30 13:51:51.155724 kubelet[1772]: I0130 13:51:51.155720 1772 server.go:1421] "CRI implementation should be updated to support RuntimeConfig when KubeletCgroupDriverFromCRI feature gate has been enabled. Falling back to using cgroupDriver from kubelet config." Jan 30 13:51:51.162716 kubelet[1772]: I0130 13:51:51.162677 1772 server.go:772] "--cgroups-per-qos enabled, but --cgroup-root was not specified. 
defaulting to /" Jan 30 13:51:51.163470 kubelet[1772]: I0130 13:51:51.163412 1772 container_manager_linux.go:268] "Container manager verified user specified cgroup-root exists" cgroupRoot=[] Jan 30 13:51:51.163653 kubelet[1772]: I0130 13:51:51.163456 1772 container_manager_linux.go:273] "Creating Container Manager object based on Node Config" nodeConfig={"NodeName":"10.0.0.127","RuntimeCgroupsName":"","SystemCgroupsName":"","KubeletCgroupsName":"","KubeletOOMScoreAdj":-999,"ContainerRuntime":"","CgroupsPerQOS":true,"CgroupRoot":"/","CgroupDriver":"systemd","KubeletRootDir":"/var/lib/kubelet","ProtectKernelDefaults":false,"KubeReservedCgroupName":"","SystemReservedCgroupName":"","ReservedSystemCPUs":{},"EnforceNodeAllocatable":{"pods":{}},"KubeReserved":null,"SystemReserved":null,"HardEvictionThresholds":[{"Signal":"memory.available","Operator":"LessThan","Value":{"Quantity":"100Mi","Percentage":0},"GracePeriod":0,"MinReclaim":null},{"Signal":"nodefs.available","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.1},"GracePeriod":0,"MinReclaim":null},{"Signal":"nodefs.inodesFree","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.05},"GracePeriod":0,"MinReclaim":null},{"Signal":"imagefs.available","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.15},"GracePeriod":0,"MinReclaim":null},{"Signal":"imagefs.inodesFree","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.05},"GracePeriod":0,"MinReclaim":null}],"QOSReserved":{},"CPUManagerPolicy":"none","CPUManagerPolicyOptions":null,"TopologyManagerScope":"container","CPUManagerReconcilePeriod":10000000000,"ExperimentalMemoryManagerPolicy":"None","ExperimentalMemoryManagerReservedMemory":null,"PodPidsLimit":-1,"EnforceCPULimits":true,"CPUCFSQuotaPeriod":100000000,"TopologyManagerPolicy":"none","TopologyManagerPolicyOptions":null,"CgroupVersion":2} Jan 30 13:51:51.163653 kubelet[1772]: I0130 13:51:51.163651 1772 topology_manager.go:138] "Creating topology manager with none policy" Jan 30 13:51:51.163797 kubelet[1772]: I0130 13:51:51.163661 1772 container_manager_linux.go:304] "Creating device plugin manager" Jan 30 13:51:51.163826 kubelet[1772]: I0130 13:51:51.163813 1772 state_mem.go:36] "Initialized new in-memory state store" Jan 30 13:51:51.166947 kubelet[1772]: I0130 13:51:51.166910 1772 kubelet.go:446] "Attempting to sync node with API server" Jan 30 13:51:51.166947 kubelet[1772]: I0130 13:51:51.166927 1772 kubelet.go:341] "Adding static pod path" path="/etc/kubernetes/manifests" Jan 30 13:51:51.166947 kubelet[1772]: I0130 13:51:51.166943 1772 kubelet.go:352] "Adding apiserver pod source" Jan 30 13:51:51.166947 kubelet[1772]: I0130 13:51:51.166953 1772 apiserver.go:42] "Waiting for node sync before watching apiserver pods" Jan 30 13:51:51.167116 kubelet[1772]: E0130 13:51:51.167077 1772 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Jan 30 13:51:51.167147 kubelet[1772]: E0130 13:51:51.167119 1772 file.go:98] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Jan 30 13:51:51.169280 kubelet[1772]: I0130 13:51:51.169256 1772 kuberuntime_manager.go:269] "Container runtime initialized" containerRuntime="containerd" version="v1.7.21" apiVersion="v1" Jan 30 13:51:51.169686 kubelet[1772]: I0130 13:51:51.169659 1772 kubelet.go:890] "Not starting ClusterTrustBundle informer because we are in static kubelet mode" Jan 30 13:51:51.171793 kubelet[1772]: W0130 13:51:51.171761 1772 probe.go:272] 
Flexvolume plugin directory at /opt/libexec/kubernetes/kubelet-plugins/volume/exec/ does not exist. Recreating. Jan 30 13:51:51.171871 kubelet[1772]: W0130 13:51:51.171849 1772 reflector.go:569] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Node: nodes "10.0.0.127" is forbidden: User "system:anonymous" cannot list resource "nodes" in API group "" at the cluster scope Jan 30 13:51:51.171906 kubelet[1772]: E0130 13:51:51.171891 1772 reflector.go:166] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Node: failed to list *v1.Node: nodes \"10.0.0.127\" is forbidden: User \"system:anonymous\" cannot list resource \"nodes\" in API group \"\" at the cluster scope" logger="UnhandledError" Jan 30 13:51:51.171938 kubelet[1772]: W0130 13:51:51.171853 1772 reflector.go:569] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Service: services is forbidden: User "system:anonymous" cannot list resource "services" in API group "" at the cluster scope Jan 30 13:51:51.171938 kubelet[1772]: E0130 13:51:51.171923 1772 reflector.go:166] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Service: failed to list *v1.Service: services is forbidden: User \"system:anonymous\" cannot list resource \"services\" in API group \"\" at the cluster scope" logger="UnhandledError" Jan 30 13:51:51.173943 kubelet[1772]: I0130 13:51:51.173915 1772 watchdog_linux.go:99] "Systemd watchdog is not enabled" Jan 30 13:51:51.174012 kubelet[1772]: I0130 13:51:51.173956 1772 server.go:1287] "Started kubelet" Jan 30 13:51:51.175159 kubelet[1772]: I0130 13:51:51.174859 1772 ratelimit.go:55] "Setting rate limiting for endpoint" service="podresources" qps=100 burstTokens=10 Jan 30 13:51:51.178448 kubelet[1772]: I0130 13:51:51.178417 1772 server.go:243] "Starting to serve the podresources API" endpoint="unix:/var/lib/kubelet/pod-resources/kubelet.sock" Jan 30 13:51:51.178605 kubelet[1772]: I0130 13:51:51.178582 1772 server.go:169] "Starting to listen" address="0.0.0.0" port=10250 Jan 30 13:51:51.181788 kubelet[1772]: I0130 13:51:51.181161 1772 fs_resource_analyzer.go:67] "Starting FS ResourceAnalyzer" Jan 30 13:51:51.181788 kubelet[1772]: I0130 13:51:51.181506 1772 dynamic_serving_content.go:135] "Starting controller" name="kubelet-server-cert-files::/var/lib/kubelet/pki/kubelet.crt::/var/lib/kubelet/pki/kubelet.key" Jan 30 13:51:51.182343 kubelet[1772]: E0130 13:51:51.182326 1772 kubelet.go:1561] "Image garbage collection failed once. 
Stats initialization may not have completed yet" err="invalid capacity 0 on image filesystem" Jan 30 13:51:51.183363 kubelet[1772]: I0130 13:51:51.182537 1772 volume_manager.go:297] "Starting Kubelet Volume Manager" Jan 30 13:51:51.183363 kubelet[1772]: E0130 13:51:51.182720 1772 kubelet_node_status.go:467] "Error getting the current node from lister" err="node \"10.0.0.127\" not found" Jan 30 13:51:51.183363 kubelet[1772]: I0130 13:51:51.183290 1772 reconciler.go:26] "Reconciler: start to sync state" Jan 30 13:51:51.183363 kubelet[1772]: I0130 13:51:51.183335 1772 desired_state_of_world_populator.go:149] "Desired state populator starts to run" Jan 30 13:51:51.183611 kubelet[1772]: I0130 13:51:51.183575 1772 factory.go:221] Registration of the systemd container factory successfully Jan 30 13:51:51.184704 kubelet[1772]: I0130 13:51:51.184031 1772 server.go:490] "Adding debug handlers to kubelet server" Jan 30 13:51:51.185285 kubelet[1772]: I0130 13:51:51.184768 1772 factory.go:219] Registration of the crio container factory failed: Get "http://%2Fvar%2Frun%2Fcrio%2Fcrio.sock/info": dial unix /var/run/crio/crio.sock: connect: no such file or directory Jan 30 13:51:51.185918 kubelet[1772]: E0130 13:51:51.185851 1772 nodelease.go:49] "Failed to get node when trying to set owner ref to the node lease" err="nodes \"10.0.0.127\" not found" node="10.0.0.127" Jan 30 13:51:51.186856 kubelet[1772]: I0130 13:51:51.186836 1772 factory.go:221] Registration of the containerd container factory successfully Jan 30 13:51:51.198043 kubelet[1772]: I0130 13:51:51.198009 1772 cpu_manager.go:221] "Starting CPU manager" policy="none" Jan 30 13:51:51.198085 kubelet[1772]: I0130 13:51:51.198045 1772 cpu_manager.go:222] "Reconciling" reconcilePeriod="10s" Jan 30 13:51:51.198085 kubelet[1772]: I0130 13:51:51.198064 1772 state_mem.go:36] "Initialized new in-memory state store" Jan 30 13:51:51.283789 kubelet[1772]: E0130 13:51:51.283716 1772 kubelet_node_status.go:467] "Error getting the current node from lister" err="node \"10.0.0.127\" not found" Jan 30 13:51:51.384210 kubelet[1772]: E0130 13:51:51.384046 1772 kubelet_node_status.go:467] "Error getting the current node from lister" err="node \"10.0.0.127\" not found" Jan 30 13:51:51.484544 kubelet[1772]: E0130 13:51:51.484472 1772 kubelet_node_status.go:467] "Error getting the current node from lister" err="node \"10.0.0.127\" not found" Jan 30 13:51:51.533405 kubelet[1772]: E0130 13:51:51.533358 1772 csi_plugin.go:308] Failed to initialize CSINode: error updating CSINode annotation: timed out waiting for the condition; caused by: nodes "10.0.0.127" not found Jan 30 13:51:51.585051 kubelet[1772]: E0130 13:51:51.584993 1772 kubelet_node_status.go:467] "Error getting the current node from lister" err="node \"10.0.0.127\" not found" Jan 30 13:51:51.685264 kubelet[1772]: E0130 13:51:51.685137 1772 kubelet_node_status.go:467] "Error getting the current node from lister" err="node \"10.0.0.127\" not found" Jan 30 13:51:51.785714 kubelet[1772]: E0130 13:51:51.785655 1772 kubelet_node_status.go:467] "Error getting the current node from lister" err="node \"10.0.0.127\" not found" Jan 30 13:51:51.843304 kubelet[1772]: I0130 13:51:51.843259 1772 policy_none.go:49] "None policy: Start" Jan 30 13:51:51.843304 kubelet[1772]: I0130 13:51:51.843298 1772 memory_manager.go:186] "Starting memorymanager" policy="None" Jan 30 13:51:51.843304 kubelet[1772]: I0130 13:51:51.843313 1772 state_mem.go:35] "Initializing new in-memory state store" Jan 30 13:51:51.852088 systemd[1]: Created slice 
kubepods.slice - libcontainer container kubepods.slice. Jan 30 13:51:51.866852 systemd[1]: Created slice kubepods-burstable.slice - libcontainer container kubepods-burstable.slice. Jan 30 13:51:51.870152 kubelet[1772]: I0130 13:51:51.870096 1772 kubelet_network_linux.go:50] "Initialized iptables rules." protocol="IPv4" Jan 30 13:51:51.870472 systemd[1]: Created slice kubepods-besteffort.slice - libcontainer container kubepods-besteffort.slice. Jan 30 13:51:51.871632 kubelet[1772]: I0130 13:51:51.871582 1772 kubelet_network_linux.go:50] "Initialized iptables rules." protocol="IPv6" Jan 30 13:51:51.871632 kubelet[1772]: I0130 13:51:51.871618 1772 status_manager.go:227] "Starting to sync pod status with apiserver" Jan 30 13:51:51.871726 kubelet[1772]: I0130 13:51:51.871643 1772 watchdog_linux.go:127] "Systemd watchdog is not enabled or the interval is invalid, so health checking will not be started." Jan 30 13:51:51.871726 kubelet[1772]: I0130 13:51:51.871661 1772 kubelet.go:2388] "Starting kubelet main sync loop" Jan 30 13:51:51.871940 kubelet[1772]: E0130 13:51:51.871794 1772 kubelet.go:2412] "Skipping pod synchronization" err="[container runtime status check may not have completed yet, PLEG is not healthy: pleg has yet to be successful]" Jan 30 13:51:51.881551 kubelet[1772]: I0130 13:51:51.881383 1772 manager.go:519] "Failed to read data from checkpoint" checkpoint="kubelet_internal_checkpoint" err="checkpoint is not found" Jan 30 13:51:51.881697 kubelet[1772]: I0130 13:51:51.881635 1772 eviction_manager.go:189] "Eviction manager: starting control loop" Jan 30 13:51:51.882277 kubelet[1772]: I0130 13:51:51.881927 1772 container_log_manager.go:189] "Initializing container log rotate workers" workers=1 monitorPeriod="10s" Jan 30 13:51:51.882277 kubelet[1772]: I0130 13:51:51.882146 1772 plugin_manager.go:118] "Starting Kubelet Plugin Manager" Jan 30 13:51:51.883348 kubelet[1772]: E0130 13:51:51.883313 1772 eviction_manager.go:267] "eviction manager: failed to check if we have separate container filesystem. Ignoring." 
err="no imagefs label for configured runtime" Jan 30 13:51:51.883348 kubelet[1772]: E0130 13:51:51.883349 1772 eviction_manager.go:292] "Eviction manager: failed to get summary stats" err="failed to get node info: node \"10.0.0.127\" not found" Jan 30 13:51:51.943550 kubelet[1772]: E0130 13:51:51.943404 1772 csi_plugin.go:308] Failed to initialize CSINode: error updating CSINode annotation: timed out waiting for the condition; caused by: nodes "10.0.0.127" not found Jan 30 13:51:51.983855 kubelet[1772]: I0130 13:51:51.983816 1772 kubelet_node_status.go:76] "Attempting to register node" node="10.0.0.127" Jan 30 13:51:51.989496 kubelet[1772]: I0130 13:51:51.989444 1772 kubelet_node_status.go:79] "Successfully registered node" node="10.0.0.127" Jan 30 13:51:51.989496 kubelet[1772]: E0130 13:51:51.989479 1772 kubelet_node_status.go:549] "Error updating node status, will retry" err="error getting node \"10.0.0.127\": node \"10.0.0.127\" not found" Jan 30 13:51:51.994929 kubelet[1772]: E0130 13:51:51.994878 1772 kubelet_node_status.go:467] "Error getting the current node from lister" err="node \"10.0.0.127\" not found" Jan 30 13:51:52.095245 kubelet[1772]: E0130 13:51:52.095170 1772 kubelet_node_status.go:467] "Error getting the current node from lister" err="node \"10.0.0.127\" not found" Jan 30 13:51:52.127480 kubelet[1772]: I0130 13:51:52.127429 1772 transport.go:147] "Certificate rotation detected, shutting down client connections to start using new credentials" Jan 30 13:51:52.127756 kubelet[1772]: W0130 13:51:52.127667 1772 reflector.go:492] k8s.io/client-go/informers/factory.go:160: watch of *v1.CSIDriver ended with: very short watch: k8s.io/client-go/informers/factory.go:160: Unexpected watch close - watch lasted less than a second and no items received Jan 30 13:51:52.127756 kubelet[1772]: W0130 13:51:52.127705 1772 reflector.go:492] k8s.io/client-go/informers/factory.go:160: watch of *v1.Service ended with: very short watch: k8s.io/client-go/informers/factory.go:160: Unexpected watch close - watch lasted less than a second and no items received Jan 30 13:51:52.127756 kubelet[1772]: W0130 13:51:52.127726 1772 reflector.go:492] k8s.io/client-go/informers/factory.go:160: watch of *v1.RuntimeClass ended with: very short watch: k8s.io/client-go/informers/factory.go:160: Unexpected watch close - watch lasted less than a second and no items received Jan 30 13:51:52.167969 kubelet[1772]: E0130 13:51:52.167917 1772 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Jan 30 13:51:52.196202 kubelet[1772]: E0130 13:51:52.196069 1772 kubelet_node_status.go:467] "Error getting the current node from lister" err="node \"10.0.0.127\" not found" Jan 30 13:51:52.296784 kubelet[1772]: E0130 13:51:52.296709 1772 kubelet_node_status.go:467] "Error getting the current node from lister" err="node \"10.0.0.127\" not found" Jan 30 13:51:52.356414 sudo[1639]: pam_unix(sudo:session): session closed for user root Jan 30 13:51:52.358408 sshd[1636]: pam_unix(sshd:session): session closed for user core Jan 30 13:51:52.363167 systemd[1]: sshd@7-10.0.0.127:22-10.0.0.1:34662.service: Deactivated successfully. Jan 30 13:51:52.365740 systemd[1]: session-7.scope: Deactivated successfully. Jan 30 13:51:52.366550 systemd-logind[1440]: Session 7 logged out. Waiting for processes to exit. Jan 30 13:51:52.367552 systemd-logind[1440]: Removed session 7. 
Jan 30 13:51:52.398464 kubelet[1772]: I0130 13:51:52.398430 1772 kuberuntime_manager.go:1702] "Updating runtime config through cri with podcidr" CIDR="192.168.1.0/24" Jan 30 13:51:52.398767 containerd[1455]: time="2025-01-30T13:51:52.398727427Z" level=info msg="No cni config template is specified, wait for other system components to drop the config." Jan 30 13:51:52.399121 kubelet[1772]: I0130 13:51:52.398952 1772 kubelet_network.go:61] "Updating Pod CIDR" originalPodCIDR="" newPodCIDR="192.168.1.0/24" Jan 30 13:51:53.168210 kubelet[1772]: E0130 13:51:53.168162 1772 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Jan 30 13:51:53.168210 kubelet[1772]: I0130 13:51:53.168200 1772 apiserver.go:52] "Watching apiserver" Jan 30 13:51:53.180486 systemd[1]: Created slice kubepods-burstable-pod4a521709_11a9_4f29_9e62_40df10f8b4c0.slice - libcontainer container kubepods-burstable-pod4a521709_11a9_4f29_9e62_40df10f8b4c0.slice. Jan 30 13:51:53.184575 kubelet[1772]: I0130 13:51:53.184522 1772 desired_state_of_world_populator.go:157] "Finished populating initial desired state of world" Jan 30 13:51:53.194052 kubelet[1772]: I0130 13:51:53.193998 1772 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"bpf-maps\" (UniqueName: \"kubernetes.io/host-path/4a521709-11a9-4f29-9e62-40df10f8b4c0-bpf-maps\") pod \"cilium-2lx82\" (UID: \"4a521709-11a9-4f29-9e62-40df10f8b4c0\") " pod="kube-system/cilium-2lx82" Jan 30 13:51:53.194052 kubelet[1772]: I0130 13:51:53.194034 1772 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cni-path\" (UniqueName: \"kubernetes.io/host-path/4a521709-11a9-4f29-9e62-40df10f8b4c0-cni-path\") pod \"cilium-2lx82\" (UID: \"4a521709-11a9-4f29-9e62-40df10f8b4c0\") " pod="kube-system/cilium-2lx82" Jan 30 13:51:53.194243 kubelet[1772]: I0130 13:51:53.194056 1772 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-58kcj\" (UniqueName: \"kubernetes.io/projected/4a521709-11a9-4f29-9e62-40df10f8b4c0-kube-api-access-58kcj\") pod \"cilium-2lx82\" (UID: \"4a521709-11a9-4f29-9e62-40df10f8b4c0\") " pod="kube-system/cilium-2lx82" Jan 30 13:51:53.194243 kubelet[1772]: I0130 13:51:53.194079 1772 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-proxy\" (UniqueName: \"kubernetes.io/configmap/b72844ab-37d4-4b72-96db-e09628e0ccb7-kube-proxy\") pod \"kube-proxy-twvxr\" (UID: \"b72844ab-37d4-4b72-96db-e09628e0ccb7\") " pod="kube-system/kube-proxy-twvxr" Jan 30 13:51:53.194243 kubelet[1772]: I0130 13:51:53.194098 1772 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"xtables-lock\" (UniqueName: \"kubernetes.io/host-path/b72844ab-37d4-4b72-96db-e09628e0ccb7-xtables-lock\") pod \"kube-proxy-twvxr\" (UID: \"b72844ab-37d4-4b72-96db-e09628e0ccb7\") " pod="kube-system/kube-proxy-twvxr" Jan 30 13:51:53.194243 kubelet[1772]: I0130 13:51:53.194116 1772 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"hostproc\" (UniqueName: \"kubernetes.io/host-path/4a521709-11a9-4f29-9e62-40df10f8b4c0-hostproc\") pod \"cilium-2lx82\" (UID: \"4a521709-11a9-4f29-9e62-40df10f8b4c0\") " pod="kube-system/cilium-2lx82" Jan 30 13:51:53.194243 kubelet[1772]: I0130 13:51:53.194133 1772 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume 
\"clustermesh-secrets\" (UniqueName: \"kubernetes.io/secret/4a521709-11a9-4f29-9e62-40df10f8b4c0-clustermesh-secrets\") pod \"cilium-2lx82\" (UID: \"4a521709-11a9-4f29-9e62-40df10f8b4c0\") " pod="kube-system/cilium-2lx82" Jan 30 13:51:53.194420 kubelet[1772]: I0130 13:51:53.194155 1772 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cilium-config-path\" (UniqueName: \"kubernetes.io/configmap/4a521709-11a9-4f29-9e62-40df10f8b4c0-cilium-config-path\") pod \"cilium-2lx82\" (UID: \"4a521709-11a9-4f29-9e62-40df10f8b4c0\") " pod="kube-system/cilium-2lx82" Jan 30 13:51:53.194420 kubelet[1772]: I0130 13:51:53.194174 1772 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/b72844ab-37d4-4b72-96db-e09628e0ccb7-lib-modules\") pod \"kube-proxy-twvxr\" (UID: \"b72844ab-37d4-4b72-96db-e09628e0ccb7\") " pod="kube-system/kube-proxy-twvxr" Jan 30 13:51:53.194420 kubelet[1772]: I0130 13:51:53.194191 1772 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cilium-run\" (UniqueName: \"kubernetes.io/host-path/4a521709-11a9-4f29-9e62-40df10f8b4c0-cilium-run\") pod \"cilium-2lx82\" (UID: \"4a521709-11a9-4f29-9e62-40df10f8b4c0\") " pod="kube-system/cilium-2lx82" Jan 30 13:51:53.194420 kubelet[1772]: I0130 13:51:53.194210 1772 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"etc-cni-netd\" (UniqueName: \"kubernetes.io/host-path/4a521709-11a9-4f29-9e62-40df10f8b4c0-etc-cni-netd\") pod \"cilium-2lx82\" (UID: \"4a521709-11a9-4f29-9e62-40df10f8b4c0\") " pod="kube-system/cilium-2lx82" Jan 30 13:51:53.194420 kubelet[1772]: I0130 13:51:53.194247 1772 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/4a521709-11a9-4f29-9e62-40df10f8b4c0-lib-modules\") pod \"cilium-2lx82\" (UID: \"4a521709-11a9-4f29-9e62-40df10f8b4c0\") " pod="kube-system/cilium-2lx82" Jan 30 13:51:53.194420 kubelet[1772]: I0130 13:51:53.194268 1772 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"xtables-lock\" (UniqueName: \"kubernetes.io/host-path/4a521709-11a9-4f29-9e62-40df10f8b4c0-xtables-lock\") pod \"cilium-2lx82\" (UID: \"4a521709-11a9-4f29-9e62-40df10f8b4c0\") " pod="kube-system/cilium-2lx82" Jan 30 13:51:53.194621 kubelet[1772]: I0130 13:51:53.194290 1772 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-proc-sys-net\" (UniqueName: \"kubernetes.io/host-path/4a521709-11a9-4f29-9e62-40df10f8b4c0-host-proc-sys-net\") pod \"cilium-2lx82\" (UID: \"4a521709-11a9-4f29-9e62-40df10f8b4c0\") " pod="kube-system/cilium-2lx82" Jan 30 13:51:53.194621 kubelet[1772]: I0130 13:51:53.194308 1772 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cilium-cgroup\" (UniqueName: \"kubernetes.io/host-path/4a521709-11a9-4f29-9e62-40df10f8b4c0-cilium-cgroup\") pod \"cilium-2lx82\" (UID: \"4a521709-11a9-4f29-9e62-40df10f8b4c0\") " pod="kube-system/cilium-2lx82" Jan 30 13:51:53.194621 kubelet[1772]: I0130 13:51:53.194329 1772 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-proc-sys-kernel\" (UniqueName: \"kubernetes.io/host-path/4a521709-11a9-4f29-9e62-40df10f8b4c0-host-proc-sys-kernel\") pod \"cilium-2lx82\" (UID: 
\"4a521709-11a9-4f29-9e62-40df10f8b4c0\") " pod="kube-system/cilium-2lx82" Jan 30 13:51:53.194621 kubelet[1772]: I0130 13:51:53.194349 1772 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"hubble-tls\" (UniqueName: \"kubernetes.io/projected/4a521709-11a9-4f29-9e62-40df10f8b4c0-hubble-tls\") pod \"cilium-2lx82\" (UID: \"4a521709-11a9-4f29-9e62-40df10f8b4c0\") " pod="kube-system/cilium-2lx82" Jan 30 13:51:53.194621 kubelet[1772]: I0130 13:51:53.194371 1772 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-mld2t\" (UniqueName: \"kubernetes.io/projected/b72844ab-37d4-4b72-96db-e09628e0ccb7-kube-api-access-mld2t\") pod \"kube-proxy-twvxr\" (UID: \"b72844ab-37d4-4b72-96db-e09628e0ccb7\") " pod="kube-system/kube-proxy-twvxr" Jan 30 13:51:53.196966 systemd[1]: Created slice kubepods-besteffort-podb72844ab_37d4_4b72_96db_e09628e0ccb7.slice - libcontainer container kubepods-besteffort-podb72844ab_37d4_4b72_96db_e09628e0ccb7.slice. Jan 30 13:51:53.495575 kubelet[1772]: E0130 13:51:53.495408 1772 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jan 30 13:51:53.496376 containerd[1455]: time="2025-01-30T13:51:53.496333480Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:cilium-2lx82,Uid:4a521709-11a9-4f29-9e62-40df10f8b4c0,Namespace:kube-system,Attempt:0,}" Jan 30 13:51:53.509266 kubelet[1772]: E0130 13:51:53.509205 1772 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jan 30 13:51:53.509788 containerd[1455]: time="2025-01-30T13:51:53.509748367Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-proxy-twvxr,Uid:b72844ab-37d4-4b72-96db-e09628e0ccb7,Namespace:kube-system,Attempt:0,}" Jan 30 13:51:54.168463 kubelet[1772]: E0130 13:51:54.168395 1772 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Jan 30 13:51:54.459808 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount3089894402.mount: Deactivated successfully. 
Jan 30 13:51:54.468573 containerd[1455]: time="2025-01-30T13:51:54.468534083Z" level=info msg="ImageCreate event name:\"registry.k8s.io/pause:3.8\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"} labels:{key:\"io.cri-containerd.pinned\" value:\"pinned\"}" Jan 30 13:51:54.471058 containerd[1455]: time="2025-01-30T13:51:54.470842171Z" level=info msg="stop pulling image registry.k8s.io/pause:3.8: active requests=0, bytes read=0" Jan 30 13:51:54.471586 containerd[1455]: time="2025-01-30T13:51:54.471551351Z" level=info msg="ImageUpdate event name:\"registry.k8s.io/pause:3.8\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"} labels:{key:\"io.cri-containerd.pinned\" value:\"pinned\"}" Jan 30 13:51:54.471988 containerd[1455]: time="2025-01-30T13:51:54.471955499Z" level=info msg="stop pulling image registry.k8s.io/pause:3.8: active requests=0, bytes read=312056" Jan 30 13:51:54.473157 containerd[1455]: time="2025-01-30T13:51:54.473123199Z" level=info msg="ImageCreate event name:\"sha256:4873874c08efc72e9729683a83ffbb7502ee729e9a5ac097723806ea7fa13517\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"} labels:{key:\"io.cri-containerd.pinned\" value:\"pinned\"}" Jan 30 13:51:54.475838 containerd[1455]: time="2025-01-30T13:51:54.475804548Z" level=info msg="ImageCreate event name:\"registry.k8s.io/pause@sha256:9001185023633d17a2f98ff69b6ff2615b8ea02a825adffa40422f51dfdcde9d\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"} labels:{key:\"io.cri-containerd.pinned\" value:\"pinned\"}" Jan 30 13:51:54.476698 containerd[1455]: time="2025-01-30T13:51:54.476670331Z" level=info msg="Pulled image \"registry.k8s.io/pause:3.8\" with image id \"sha256:4873874c08efc72e9729683a83ffbb7502ee729e9a5ac097723806ea7fa13517\", repo tag \"registry.k8s.io/pause:3.8\", repo digest \"registry.k8s.io/pause@sha256:9001185023633d17a2f98ff69b6ff2615b8ea02a825adffa40422f51dfdcde9d\", size \"311286\" in 966.822127ms" Jan 30 13:51:54.478843 containerd[1455]: time="2025-01-30T13:51:54.478812929Z" level=info msg="Pulled image \"registry.k8s.io/pause:3.8\" with image id \"sha256:4873874c08efc72e9729683a83ffbb7502ee729e9a5ac097723806ea7fa13517\", repo tag \"registry.k8s.io/pause:3.8\", repo digest \"registry.k8s.io/pause@sha256:9001185023633d17a2f98ff69b6ff2615b8ea02a825adffa40422f51dfdcde9d\", size \"311286\" in 982.390823ms" Jan 30 13:51:54.588750 containerd[1455]: time="2025-01-30T13:51:54.588643932Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Jan 30 13:51:54.588750 containerd[1455]: time="2025-01-30T13:51:54.588646236Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Jan 30 13:51:54.588750 containerd[1455]: time="2025-01-30T13:51:54.588713282Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Jan 30 13:51:54.588750 containerd[1455]: time="2025-01-30T13:51:54.588727939Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Jan 30 13:51:54.589196 containerd[1455]: time="2025-01-30T13:51:54.588863994Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Jan 30 13:51:54.589196 containerd[1455]: time="2025-01-30T13:51:54.588775569Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." 
runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Jan 30 13:51:54.589196 containerd[1455]: time="2025-01-30T13:51:54.588803261Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Jan 30 13:51:54.589196 containerd[1455]: time="2025-01-30T13:51:54.588883561Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Jan 30 13:51:54.654377 systemd[1]: Started cri-containerd-a5b7690cb8cbd54f7b011cdd3ce5bfe81f7b90c95fa0191ce270bddcf58c4640.scope - libcontainer container a5b7690cb8cbd54f7b011cdd3ce5bfe81f7b90c95fa0191ce270bddcf58c4640. Jan 30 13:51:54.656666 systemd[1]: Started cri-containerd-d5ad9ebc507a53f97fdec60d9dcbf4872d54ade44887eebbffa10833440f7000.scope - libcontainer container d5ad9ebc507a53f97fdec60d9dcbf4872d54ade44887eebbffa10833440f7000. Jan 30 13:51:54.678090 containerd[1455]: time="2025-01-30T13:51:54.678004635Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-proxy-twvxr,Uid:b72844ab-37d4-4b72-96db-e09628e0ccb7,Namespace:kube-system,Attempt:0,} returns sandbox id \"a5b7690cb8cbd54f7b011cdd3ce5bfe81f7b90c95fa0191ce270bddcf58c4640\"" Jan 30 13:51:54.679053 kubelet[1772]: E0130 13:51:54.679008 1772 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jan 30 13:51:54.680352 containerd[1455]: time="2025-01-30T13:51:54.680264263Z" level=info msg="PullImage \"registry.k8s.io/kube-proxy:v1.32.1\"" Jan 30 13:51:54.681940 containerd[1455]: time="2025-01-30T13:51:54.681905340Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:cilium-2lx82,Uid:4a521709-11a9-4f29-9e62-40df10f8b4c0,Namespace:kube-system,Attempt:0,} returns sandbox id \"d5ad9ebc507a53f97fdec60d9dcbf4872d54ade44887eebbffa10833440f7000\"" Jan 30 13:51:54.682760 kubelet[1772]: E0130 13:51:54.682727 1772 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jan 30 13:51:55.169031 kubelet[1772]: E0130 13:51:55.168995 1772 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Jan 30 13:51:55.726459 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount903512647.mount: Deactivated successfully. 
Jan 30 13:51:56.169297 kubelet[1772]: E0130 13:51:56.169158 1772 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Jan 30 13:51:57.055561 containerd[1455]: time="2025-01-30T13:51:57.055495499Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-proxy:v1.32.1\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jan 30 13:51:57.056487 containerd[1455]: time="2025-01-30T13:51:57.056428869Z" level=info msg="stop pulling image registry.k8s.io/kube-proxy:v1.32.1: active requests=0, bytes read=30909466" Jan 30 13:51:57.057781 containerd[1455]: time="2025-01-30T13:51:57.057744366Z" level=info msg="ImageCreate event name:\"sha256:e29f9c7391fd92d96bc72026fc755b0f9589536e36ecd7102161f1ded087897a\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jan 30 13:51:57.096046 containerd[1455]: time="2025-01-30T13:51:57.095993006Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-proxy@sha256:0244651801747edf2368222f93a7d17cba6e668a890db72532d6b67a7e06dca5\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jan 30 13:51:57.096639 containerd[1455]: time="2025-01-30T13:51:57.096597680Z" level=info msg="Pulled image \"registry.k8s.io/kube-proxy:v1.32.1\" with image id \"sha256:e29f9c7391fd92d96bc72026fc755b0f9589536e36ecd7102161f1ded087897a\", repo tag \"registry.k8s.io/kube-proxy:v1.32.1\", repo digest \"registry.k8s.io/kube-proxy@sha256:0244651801747edf2368222f93a7d17cba6e668a890db72532d6b67a7e06dca5\", size \"30908485\" in 2.416293653s" Jan 30 13:51:57.096728 containerd[1455]: time="2025-01-30T13:51:57.096635842Z" level=info msg="PullImage \"registry.k8s.io/kube-proxy:v1.32.1\" returns image reference \"sha256:e29f9c7391fd92d96bc72026fc755b0f9589536e36ecd7102161f1ded087897a\"" Jan 30 13:51:57.097464 containerd[1455]: time="2025-01-30T13:51:57.097424671Z" level=info msg="PullImage \"quay.io/cilium/cilium:v1.12.5@sha256:06ce2b0a0a472e73334a7504ee5c5d8b2e2d7b72ef728ad94e564740dd505be5\"" Jan 30 13:51:57.098642 containerd[1455]: time="2025-01-30T13:51:57.098610735Z" level=info msg="CreateContainer within sandbox \"a5b7690cb8cbd54f7b011cdd3ce5bfe81f7b90c95fa0191ce270bddcf58c4640\" for container &ContainerMetadata{Name:kube-proxy,Attempt:0,}" Jan 30 13:51:57.128715 containerd[1455]: time="2025-01-30T13:51:57.128660570Z" level=info msg="CreateContainer within sandbox \"a5b7690cb8cbd54f7b011cdd3ce5bfe81f7b90c95fa0191ce270bddcf58c4640\" for &ContainerMetadata{Name:kube-proxy,Attempt:0,} returns container id \"6e59b8459e211e5bc9405e43da538c0a67995eac3da99b857b0328c16f2e5925\"" Jan 30 13:51:57.129425 containerd[1455]: time="2025-01-30T13:51:57.129395929Z" level=info msg="StartContainer for \"6e59b8459e211e5bc9405e43da538c0a67995eac3da99b857b0328c16f2e5925\"" Jan 30 13:51:57.160648 systemd[1]: Started cri-containerd-6e59b8459e211e5bc9405e43da538c0a67995eac3da99b857b0328c16f2e5925.scope - libcontainer container 6e59b8459e211e5bc9405e43da538c0a67995eac3da99b857b0328c16f2e5925. 
Jan 30 13:51:57.169874 kubelet[1772]: E0130 13:51:57.169835 1772 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Jan 30 13:51:57.189362 containerd[1455]: time="2025-01-30T13:51:57.189308646Z" level=info msg="StartContainer for \"6e59b8459e211e5bc9405e43da538c0a67995eac3da99b857b0328c16f2e5925\" returns successfully" Jan 30 13:51:57.884547 kubelet[1772]: E0130 13:51:57.884487 1772 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jan 30 13:51:58.002251 kubelet[1772]: I0130 13:51:58.002177 1772 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/kube-proxy-twvxr" podStartSLOduration=4.584623369 podStartE2EDuration="7.002161135s" podCreationTimestamp="2025-01-30 13:51:51 +0000 UTC" firstStartedPulling="2025-01-30 13:51:54.679787509 +0000 UTC m=+3.960764198" lastFinishedPulling="2025-01-30 13:51:57.097325275 +0000 UTC m=+6.378301964" observedRunningTime="2025-01-30 13:51:58.002118565 +0000 UTC m=+7.283095254" watchObservedRunningTime="2025-01-30 13:51:58.002161135 +0000 UTC m=+7.283137824" Jan 30 13:51:58.171073 kubelet[1772]: E0130 13:51:58.170908 1772 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Jan 30 13:51:58.886241 kubelet[1772]: E0130 13:51:58.886019 1772 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jan 30 13:51:59.172163 kubelet[1772]: E0130 13:51:59.172043 1772 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Jan 30 13:52:00.172725 kubelet[1772]: E0130 13:52:00.172663 1772 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Jan 30 13:52:01.173239 kubelet[1772]: E0130 13:52:01.173180 1772 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Jan 30 13:52:01.785638 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount2191851192.mount: Deactivated successfully. 
Jan 30 13:52:02.174293 kubelet[1772]: E0130 13:52:02.174100 1772 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Jan 30 13:52:03.174481 kubelet[1772]: E0130 13:52:03.174428 1772 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Jan 30 13:52:04.174850 kubelet[1772]: E0130 13:52:04.174810 1772 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Jan 30 13:52:04.693826 containerd[1455]: time="2025-01-30T13:52:04.693770431Z" level=info msg="ImageCreate event name:\"quay.io/cilium/cilium@sha256:06ce2b0a0a472e73334a7504ee5c5d8b2e2d7b72ef728ad94e564740dd505be5\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jan 30 13:52:04.694670 containerd[1455]: time="2025-01-30T13:52:04.694615466Z" level=info msg="stop pulling image quay.io/cilium/cilium@sha256:06ce2b0a0a472e73334a7504ee5c5d8b2e2d7b72ef728ad94e564740dd505be5: active requests=0, bytes read=166730503" Jan 30 13:52:04.695933 containerd[1455]: time="2025-01-30T13:52:04.695896829Z" level=info msg="ImageCreate event name:\"sha256:3e35b3e9f295e7748482d40ed499b0ff7961f1f128d479d8e6682b3245bba69b\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jan 30 13:52:04.697619 containerd[1455]: time="2025-01-30T13:52:04.697579975Z" level=info msg="Pulled image \"quay.io/cilium/cilium:v1.12.5@sha256:06ce2b0a0a472e73334a7504ee5c5d8b2e2d7b72ef728ad94e564740dd505be5\" with image id \"sha256:3e35b3e9f295e7748482d40ed499b0ff7961f1f128d479d8e6682b3245bba69b\", repo tag \"\", repo digest \"quay.io/cilium/cilium@sha256:06ce2b0a0a472e73334a7504ee5c5d8b2e2d7b72ef728ad94e564740dd505be5\", size \"166719855\" in 7.60011601s" Jan 30 13:52:04.697661 containerd[1455]: time="2025-01-30T13:52:04.697620621Z" level=info msg="PullImage \"quay.io/cilium/cilium:v1.12.5@sha256:06ce2b0a0a472e73334a7504ee5c5d8b2e2d7b72ef728ad94e564740dd505be5\" returns image reference \"sha256:3e35b3e9f295e7748482d40ed499b0ff7961f1f128d479d8e6682b3245bba69b\"" Jan 30 13:52:04.699546 containerd[1455]: time="2025-01-30T13:52:04.699515355Z" level=info msg="CreateContainer within sandbox \"d5ad9ebc507a53f97fdec60d9dcbf4872d54ade44887eebbffa10833440f7000\" for container &ContainerMetadata{Name:mount-cgroup,Attempt:0,}" Jan 30 13:52:04.713198 containerd[1455]: time="2025-01-30T13:52:04.713022616Z" level=info msg="CreateContainer within sandbox \"d5ad9ebc507a53f97fdec60d9dcbf4872d54ade44887eebbffa10833440f7000\" for &ContainerMetadata{Name:mount-cgroup,Attempt:0,} returns container id \"7fe27c347dab59718078eb835bcd0857524d89ebe49b2870e2d28c380e0d5313\"" Jan 30 13:52:04.713662 containerd[1455]: time="2025-01-30T13:52:04.713637469Z" level=info msg="StartContainer for \"7fe27c347dab59718078eb835bcd0857524d89ebe49b2870e2d28c380e0d5313\"" Jan 30 13:52:04.742398 systemd[1]: Started cri-containerd-7fe27c347dab59718078eb835bcd0857524d89ebe49b2870e2d28c380e0d5313.scope - libcontainer container 7fe27c347dab59718078eb835bcd0857524d89ebe49b2870e2d28c380e0d5313. Jan 30 13:52:04.768983 containerd[1455]: time="2025-01-30T13:52:04.768924973Z" level=info msg="StartContainer for \"7fe27c347dab59718078eb835bcd0857524d89ebe49b2870e2d28c380e0d5313\" returns successfully" Jan 30 13:52:04.779674 systemd[1]: cri-containerd-7fe27c347dab59718078eb835bcd0857524d89ebe49b2870e2d28c380e0d5313.scope: Deactivated successfully. 
Jan 30 13:52:04.896947 kubelet[1772]: E0130 13:52:04.896915 1772 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jan 30 13:52:05.175723 kubelet[1772]: E0130 13:52:05.175674 1772 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Jan 30 13:52:05.640401 containerd[1455]: time="2025-01-30T13:52:05.640205605Z" level=info msg="shim disconnected" id=7fe27c347dab59718078eb835bcd0857524d89ebe49b2870e2d28c380e0d5313 namespace=k8s.io Jan 30 13:52:05.640401 containerd[1455]: time="2025-01-30T13:52:05.640306073Z" level=warning msg="cleaning up after shim disconnected" id=7fe27c347dab59718078eb835bcd0857524d89ebe49b2870e2d28c380e0d5313 namespace=k8s.io Jan 30 13:52:05.640401 containerd[1455]: time="2025-01-30T13:52:05.640318597Z" level=info msg="cleaning up dead shim" namespace=k8s.io Jan 30 13:52:05.707607 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-7fe27c347dab59718078eb835bcd0857524d89ebe49b2870e2d28c380e0d5313-rootfs.mount: Deactivated successfully. Jan 30 13:52:05.900303 kubelet[1772]: E0130 13:52:05.899956 1772 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jan 30 13:52:05.901802 containerd[1455]: time="2025-01-30T13:52:05.901753090Z" level=info msg="CreateContainer within sandbox \"d5ad9ebc507a53f97fdec60d9dcbf4872d54ade44887eebbffa10833440f7000\" for container &ContainerMetadata{Name:apply-sysctl-overwrites,Attempt:0,}" Jan 30 13:52:05.997094 containerd[1455]: time="2025-01-30T13:52:05.997040197Z" level=info msg="CreateContainer within sandbox \"d5ad9ebc507a53f97fdec60d9dcbf4872d54ade44887eebbffa10833440f7000\" for &ContainerMetadata{Name:apply-sysctl-overwrites,Attempt:0,} returns container id \"bd7c9778220f6a472cb39bc0aae27d6f9aa558130cb7c3d26683969f0288b01a\"" Jan 30 13:52:05.997732 containerd[1455]: time="2025-01-30T13:52:05.997699473Z" level=info msg="StartContainer for \"bd7c9778220f6a472cb39bc0aae27d6f9aa558130cb7c3d26683969f0288b01a\"" Jan 30 13:52:06.030445 systemd[1]: Started cri-containerd-bd7c9778220f6a472cb39bc0aae27d6f9aa558130cb7c3d26683969f0288b01a.scope - libcontainer container bd7c9778220f6a472cb39bc0aae27d6f9aa558130cb7c3d26683969f0288b01a. Jan 30 13:52:06.059377 containerd[1455]: time="2025-01-30T13:52:06.059333038Z" level=info msg="StartContainer for \"bd7c9778220f6a472cb39bc0aae27d6f9aa558130cb7c3d26683969f0288b01a\" returns successfully" Jan 30 13:52:06.072814 systemd[1]: systemd-sysctl.service: Deactivated successfully. Jan 30 13:52:06.073128 systemd[1]: Stopped systemd-sysctl.service - Apply Kernel Variables. Jan 30 13:52:06.073245 systemd[1]: Stopping systemd-sysctl.service - Apply Kernel Variables... Jan 30 13:52:06.078720 systemd[1]: Starting systemd-sysctl.service - Apply Kernel Variables... Jan 30 13:52:06.079008 systemd[1]: cri-containerd-bd7c9778220f6a472cb39bc0aae27d6f9aa558130cb7c3d26683969f0288b01a.scope: Deactivated successfully. Jan 30 13:52:06.102174 systemd[1]: Finished systemd-sysctl.service - Apply Kernel Variables. 
Jan 30 13:52:06.176739 kubelet[1772]: E0130 13:52:06.176680 1772 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Jan 30 13:52:06.372696 containerd[1455]: time="2025-01-30T13:52:06.372626135Z" level=info msg="shim disconnected" id=bd7c9778220f6a472cb39bc0aae27d6f9aa558130cb7c3d26683969f0288b01a namespace=k8s.io Jan 30 13:52:06.372696 containerd[1455]: time="2025-01-30T13:52:06.372690405Z" level=warning msg="cleaning up after shim disconnected" id=bd7c9778220f6a472cb39bc0aae27d6f9aa558130cb7c3d26683969f0288b01a namespace=k8s.io Jan 30 13:52:06.372696 containerd[1455]: time="2025-01-30T13:52:06.372702237Z" level=info msg="cleaning up dead shim" namespace=k8s.io Jan 30 13:52:06.708060 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-bd7c9778220f6a472cb39bc0aae27d6f9aa558130cb7c3d26683969f0288b01a-rootfs.mount: Deactivated successfully. Jan 30 13:52:06.902851 kubelet[1772]: E0130 13:52:06.902819 1772 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jan 30 13:52:06.904495 containerd[1455]: time="2025-01-30T13:52:06.904455855Z" level=info msg="CreateContainer within sandbox \"d5ad9ebc507a53f97fdec60d9dcbf4872d54ade44887eebbffa10833440f7000\" for container &ContainerMetadata{Name:mount-bpf-fs,Attempt:0,}" Jan 30 13:52:06.923947 containerd[1455]: time="2025-01-30T13:52:06.923888679Z" level=info msg="CreateContainer within sandbox \"d5ad9ebc507a53f97fdec60d9dcbf4872d54ade44887eebbffa10833440f7000\" for &ContainerMetadata{Name:mount-bpf-fs,Attempt:0,} returns container id \"8d9e882b8bfd999e9bd3b2d7f960b21b4475498c8c9c59366d592feab71e0447\"" Jan 30 13:52:06.924520 containerd[1455]: time="2025-01-30T13:52:06.924464128Z" level=info msg="StartContainer for \"8d9e882b8bfd999e9bd3b2d7f960b21b4475498c8c9c59366d592feab71e0447\"" Jan 30 13:52:06.956400 systemd[1]: Started cri-containerd-8d9e882b8bfd999e9bd3b2d7f960b21b4475498c8c9c59366d592feab71e0447.scope - libcontainer container 8d9e882b8bfd999e9bd3b2d7f960b21b4475498c8c9c59366d592feab71e0447. Jan 30 13:52:06.986505 containerd[1455]: time="2025-01-30T13:52:06.985758557Z" level=info msg="StartContainer for \"8d9e882b8bfd999e9bd3b2d7f960b21b4475498c8c9c59366d592feab71e0447\" returns successfully" Jan 30 13:52:06.985979 systemd[1]: cri-containerd-8d9e882b8bfd999e9bd3b2d7f960b21b4475498c8c9c59366d592feab71e0447.scope: Deactivated successfully. Jan 30 13:52:07.013771 containerd[1455]: time="2025-01-30T13:52:07.013703504Z" level=info msg="shim disconnected" id=8d9e882b8bfd999e9bd3b2d7f960b21b4475498c8c9c59366d592feab71e0447 namespace=k8s.io Jan 30 13:52:07.013771 containerd[1455]: time="2025-01-30T13:52:07.013760030Z" level=warning msg="cleaning up after shim disconnected" id=8d9e882b8bfd999e9bd3b2d7f960b21b4475498c8c9c59366d592feab71e0447 namespace=k8s.io Jan 30 13:52:07.013771 containerd[1455]: time="2025-01-30T13:52:07.013770359Z" level=info msg="cleaning up dead shim" namespace=k8s.io Jan 30 13:52:07.176898 kubelet[1772]: E0130 13:52:07.176839 1772 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Jan 30 13:52:07.707799 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-8d9e882b8bfd999e9bd3b2d7f960b21b4475498c8c9c59366d592feab71e0447-rootfs.mount: Deactivated successfully. 
Jan 30 13:52:07.905798 kubelet[1772]: E0130 13:52:07.905765 1772 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jan 30 13:52:07.907556 containerd[1455]: time="2025-01-30T13:52:07.907504279Z" level=info msg="CreateContainer within sandbox \"d5ad9ebc507a53f97fdec60d9dcbf4872d54ade44887eebbffa10833440f7000\" for container &ContainerMetadata{Name:clean-cilium-state,Attempt:0,}" Jan 30 13:52:08.068849 containerd[1455]: time="2025-01-30T13:52:08.068662348Z" level=info msg="CreateContainer within sandbox \"d5ad9ebc507a53f97fdec60d9dcbf4872d54ade44887eebbffa10833440f7000\" for &ContainerMetadata{Name:clean-cilium-state,Attempt:0,} returns container id \"28c3b2688f5f22e2509d7848782789e7f38a402bc8d228dcdeb71b82892869b3\"" Jan 30 13:52:08.069418 containerd[1455]: time="2025-01-30T13:52:08.069387067Z" level=info msg="StartContainer for \"28c3b2688f5f22e2509d7848782789e7f38a402bc8d228dcdeb71b82892869b3\"" Jan 30 13:52:08.106447 systemd[1]: Started cri-containerd-28c3b2688f5f22e2509d7848782789e7f38a402bc8d228dcdeb71b82892869b3.scope - libcontainer container 28c3b2688f5f22e2509d7848782789e7f38a402bc8d228dcdeb71b82892869b3. Jan 30 13:52:08.131183 systemd[1]: cri-containerd-28c3b2688f5f22e2509d7848782789e7f38a402bc8d228dcdeb71b82892869b3.scope: Deactivated successfully. Jan 30 13:52:08.134103 containerd[1455]: time="2025-01-30T13:52:08.134061054Z" level=info msg="StartContainer for \"28c3b2688f5f22e2509d7848782789e7f38a402bc8d228dcdeb71b82892869b3\" returns successfully" Jan 30 13:52:08.159475 containerd[1455]: time="2025-01-30T13:52:08.159413549Z" level=info msg="shim disconnected" id=28c3b2688f5f22e2509d7848782789e7f38a402bc8d228dcdeb71b82892869b3 namespace=k8s.io Jan 30 13:52:08.159475 containerd[1455]: time="2025-01-30T13:52:08.159466649Z" level=warning msg="cleaning up after shim disconnected" id=28c3b2688f5f22e2509d7848782789e7f38a402bc8d228dcdeb71b82892869b3 namespace=k8s.io Jan 30 13:52:08.159475 containerd[1455]: time="2025-01-30T13:52:08.159475535Z" level=info msg="cleaning up dead shim" namespace=k8s.io Jan 30 13:52:08.177057 kubelet[1772]: E0130 13:52:08.176995 1772 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Jan 30 13:52:08.708393 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-28c3b2688f5f22e2509d7848782789e7f38a402bc8d228dcdeb71b82892869b3-rootfs.mount: Deactivated successfully. 
Jan 30 13:52:08.909573 kubelet[1772]: E0130 13:52:08.909544 1772 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jan 30 13:52:08.911254 containerd[1455]: time="2025-01-30T13:52:08.911203854Z" level=info msg="CreateContainer within sandbox \"d5ad9ebc507a53f97fdec60d9dcbf4872d54ade44887eebbffa10833440f7000\" for container &ContainerMetadata{Name:cilium-agent,Attempt:0,}" Jan 30 13:52:09.080640 containerd[1455]: time="2025-01-30T13:52:09.080495753Z" level=info msg="CreateContainer within sandbox \"d5ad9ebc507a53f97fdec60d9dcbf4872d54ade44887eebbffa10833440f7000\" for &ContainerMetadata{Name:cilium-agent,Attempt:0,} returns container id \"ec7f383f2a1e38c4e4d57c1871f836d7721a2eb737d20a2e9386463dae10573e\"" Jan 30 13:52:09.081198 containerd[1455]: time="2025-01-30T13:52:09.081144337Z" level=info msg="StartContainer for \"ec7f383f2a1e38c4e4d57c1871f836d7721a2eb737d20a2e9386463dae10573e\"" Jan 30 13:52:09.120425 systemd[1]: Started cri-containerd-ec7f383f2a1e38c4e4d57c1871f836d7721a2eb737d20a2e9386463dae10573e.scope - libcontainer container ec7f383f2a1e38c4e4d57c1871f836d7721a2eb737d20a2e9386463dae10573e. Jan 30 13:52:09.177562 kubelet[1772]: E0130 13:52:09.177497 1772 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Jan 30 13:52:09.298113 containerd[1455]: time="2025-01-30T13:52:09.298047611Z" level=info msg="StartContainer for \"ec7f383f2a1e38c4e4d57c1871f836d7721a2eb737d20a2e9386463dae10573e\" returns successfully" Jan 30 13:52:09.426761 kubelet[1772]: I0130 13:52:09.426716 1772 kubelet_node_status.go:502] "Fast updating node status as it just became ready" Jan 30 13:52:09.622257 kernel: Initializing XFRM netlink socket Jan 30 13:52:09.914523 kubelet[1772]: E0130 13:52:09.914501 1772 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jan 30 13:52:09.946843 kubelet[1772]: I0130 13:52:09.946793 1772 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/cilium-2lx82" podStartSLOduration=8.931680137 podStartE2EDuration="18.946779293s" podCreationTimestamp="2025-01-30 13:51:51 +0000 UTC" firstStartedPulling="2025-01-30 13:51:54.683164732 +0000 UTC m=+3.964141421" lastFinishedPulling="2025-01-30 13:52:04.698263888 +0000 UTC m=+13.979240577" observedRunningTime="2025-01-30 13:52:09.946633761 +0000 UTC m=+19.227610460" watchObservedRunningTime="2025-01-30 13:52:09.946779293 +0000 UTC m=+19.227755982" Jan 30 13:52:10.177956 kubelet[1772]: E0130 13:52:10.177837 1772 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Jan 30 13:52:10.916733 kubelet[1772]: E0130 13:52:10.916688 1772 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jan 30 13:52:11.167420 kubelet[1772]: E0130 13:52:11.167292 1772 file.go:104] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Jan 30 13:52:11.178758 kubelet[1772]: E0130 13:52:11.178719 1772 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Jan 30 13:52:11.314781 systemd-networkd[1391]: cilium_host: Link UP Jan 30 13:52:11.314943 
systemd-networkd[1391]: cilium_net: Link UP Jan 30 13:52:11.314947 systemd-networkd[1391]: cilium_net: Gained carrier Jan 30 13:52:11.316322 systemd-networkd[1391]: cilium_host: Gained carrier Jan 30 13:52:11.416479 systemd-networkd[1391]: cilium_vxlan: Link UP Jan 30 13:52:11.416490 systemd-networkd[1391]: cilium_vxlan: Gained carrier Jan 30 13:52:11.619308 kernel: NET: Registered PF_ALG protocol family Jan 30 13:52:11.918299 kubelet[1772]: E0130 13:52:11.918258 1772 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jan 30 13:52:12.094421 systemd-networkd[1391]: cilium_host: Gained IPv6LL Jan 30 13:52:12.157349 systemd-networkd[1391]: cilium_net: Gained IPv6LL Jan 30 13:52:12.179258 kubelet[1772]: E0130 13:52:12.179091 1772 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Jan 30 13:52:12.252568 systemd-networkd[1391]: lxc_health: Link UP Jan 30 13:52:12.256403 systemd-networkd[1391]: lxc_health: Gained carrier Jan 30 13:52:12.736419 systemd-networkd[1391]: cilium_vxlan: Gained IPv6LL Jan 30 13:52:13.180258 kubelet[1772]: E0130 13:52:13.180168 1772 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Jan 30 13:52:13.309387 systemd-networkd[1391]: lxc_health: Gained IPv6LL Jan 30 13:52:13.497092 kubelet[1772]: E0130 13:52:13.496963 1772 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jan 30 13:52:13.699307 systemd[1]: Created slice kubepods-besteffort-pod5a95016d_3445_458a_ab7c_bf6bf4c9b4b2.slice - libcontainer container kubepods-besteffort-pod5a95016d_3445_458a_ab7c_bf6bf4c9b4b2.slice. 
Jan 30 13:52:13.731500 kubelet[1772]: I0130 13:52:13.731441 1772 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-n6664\" (UniqueName: \"kubernetes.io/projected/5a95016d-3445-458a-ab7c-bf6bf4c9b4b2-kube-api-access-n6664\") pod \"nginx-deployment-7fcdb87857-8vvwk\" (UID: \"5a95016d-3445-458a-ab7c-bf6bf4c9b4b2\") " pod="default/nginx-deployment-7fcdb87857-8vvwk" Jan 30 13:52:13.921868 kubelet[1772]: E0130 13:52:13.921810 1772 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jan 30 13:52:14.004141 containerd[1455]: time="2025-01-30T13:52:14.004092801Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:nginx-deployment-7fcdb87857-8vvwk,Uid:5a95016d-3445-458a-ab7c-bf6bf4c9b4b2,Namespace:default,Attempt:0,}" Jan 30 13:52:14.047243 systemd-networkd[1391]: lxcd46e3d363bda: Link UP Jan 30 13:52:14.057256 kernel: eth0: renamed from tmp28343 Jan 30 13:52:14.064605 systemd-networkd[1391]: lxcd46e3d363bda: Gained carrier Jan 30 13:52:14.180786 kubelet[1772]: E0130 13:52:14.180628 1772 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Jan 30 13:52:14.923531 kubelet[1772]: E0130 13:52:14.923494 1772 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jan 30 13:52:15.181271 kubelet[1772]: E0130 13:52:15.181038 1772 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Jan 30 13:52:15.616338 systemd-networkd[1391]: lxcd46e3d363bda: Gained IPv6LL Jan 30 13:52:16.181637 kubelet[1772]: E0130 13:52:16.181588 1772 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Jan 30 13:52:17.015660 containerd[1455]: time="2025-01-30T13:52:17.015026411Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Jan 30 13:52:17.015660 containerd[1455]: time="2025-01-30T13:52:17.015625195Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Jan 30 13:52:17.015660 containerd[1455]: time="2025-01-30T13:52:17.015639091Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Jan 30 13:52:17.016070 containerd[1455]: time="2025-01-30T13:52:17.015710959Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Jan 30 13:52:17.042385 systemd[1]: Started cri-containerd-283433b0e56f9e90f0184449b151407d06cdff5d99cf1efabddf298d97c8db13.scope - libcontainer container 283433b0e56f9e90f0184449b151407d06cdff5d99cf1efabddf298d97c8db13. 
Jan 30 13:52:17.053858 systemd-resolved[1326]: Failed to determine the local hostname and LLMNR/mDNS names, ignoring: No such device or address Jan 30 13:52:17.076793 containerd[1455]: time="2025-01-30T13:52:17.076759432Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:nginx-deployment-7fcdb87857-8vvwk,Uid:5a95016d-3445-458a-ab7c-bf6bf4c9b4b2,Namespace:default,Attempt:0,} returns sandbox id \"283433b0e56f9e90f0184449b151407d06cdff5d99cf1efabddf298d97c8db13\"" Jan 30 13:52:17.077845 containerd[1455]: time="2025-01-30T13:52:17.077810059Z" level=info msg="PullImage \"ghcr.io/flatcar/nginx:latest\"" Jan 30 13:52:17.182608 kubelet[1772]: E0130 13:52:17.182571 1772 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Jan 30 13:52:18.183238 kubelet[1772]: E0130 13:52:18.183122 1772 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Jan 30 13:52:19.183825 kubelet[1772]: E0130 13:52:19.183762 1772 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Jan 30 13:52:20.184136 kubelet[1772]: E0130 13:52:20.184086 1772 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Jan 30 13:52:20.275638 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount583933456.mount: Deactivated successfully. Jan 30 13:52:21.184457 kubelet[1772]: E0130 13:52:21.184409 1772 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Jan 30 13:52:21.373966 containerd[1455]: time="2025-01-30T13:52:21.373904669Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/nginx:latest\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jan 30 13:52:21.374676 containerd[1455]: time="2025-01-30T13:52:21.374607264Z" level=info msg="stop pulling image ghcr.io/flatcar/nginx:latest: active requests=0, bytes read=71015561" Jan 30 13:52:21.375789 containerd[1455]: time="2025-01-30T13:52:21.375741292Z" level=info msg="ImageCreate event name:\"sha256:0dcfd986e814f68db775fba6b61fbaec3761562dc2ae3043d38dbff123e1bb1e\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jan 30 13:52:21.378239 containerd[1455]: time="2025-01-30T13:52:21.378186393Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/nginx@sha256:2ffeb5a7ca68f2017f0bc48251750a6e40fcd3c341b94a22fc7812dcabbb84db\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jan 30 13:52:21.379389 containerd[1455]: time="2025-01-30T13:52:21.379345708Z" level=info msg="Pulled image \"ghcr.io/flatcar/nginx:latest\" with image id \"sha256:0dcfd986e814f68db775fba6b61fbaec3761562dc2ae3043d38dbff123e1bb1e\", repo tag \"ghcr.io/flatcar/nginx:latest\", repo digest \"ghcr.io/flatcar/nginx@sha256:2ffeb5a7ca68f2017f0bc48251750a6e40fcd3c341b94a22fc7812dcabbb84db\", size \"71015439\" in 4.301497707s" Jan 30 13:52:21.379444 containerd[1455]: time="2025-01-30T13:52:21.379389592Z" level=info msg="PullImage \"ghcr.io/flatcar/nginx:latest\" returns image reference \"sha256:0dcfd986e814f68db775fba6b61fbaec3761562dc2ae3043d38dbff123e1bb1e\"" Jan 30 13:52:21.381389 containerd[1455]: time="2025-01-30T13:52:21.381359839Z" level=info msg="CreateContainer within sandbox \"283433b0e56f9e90f0184449b151407d06cdff5d99cf1efabddf298d97c8db13\" for container &ContainerMetadata{Name:nginx,Attempt:0,}" Jan 30 13:52:21.395295 containerd[1455]: 
time="2025-01-30T13:52:21.395246987Z" level=info msg="CreateContainer within sandbox \"283433b0e56f9e90f0184449b151407d06cdff5d99cf1efabddf298d97c8db13\" for &ContainerMetadata{Name:nginx,Attempt:0,} returns container id \"3c1852e8992ab372436d13087b1e1bb31701005466434254db09ca67c7138341\"" Jan 30 13:52:21.395709 containerd[1455]: time="2025-01-30T13:52:21.395678758Z" level=info msg="StartContainer for \"3c1852e8992ab372436d13087b1e1bb31701005466434254db09ca67c7138341\"" Jan 30 13:52:21.431423 systemd[1]: Started cri-containerd-3c1852e8992ab372436d13087b1e1bb31701005466434254db09ca67c7138341.scope - libcontainer container 3c1852e8992ab372436d13087b1e1bb31701005466434254db09ca67c7138341. Jan 30 13:52:21.551215 containerd[1455]: time="2025-01-30T13:52:21.551050669Z" level=info msg="StartContainer for \"3c1852e8992ab372436d13087b1e1bb31701005466434254db09ca67c7138341\" returns successfully" Jan 30 13:52:21.769477 update_engine[1447]: I20250130 13:52:21.769408 1447 update_attempter.cc:509] Updating boot flags... Jan 30 13:52:21.846280 kernel: BTRFS warning: duplicate device /dev/vda3 devid 1 generation 38 scanned by (udev-worker) (2972) Jan 30 13:52:21.888333 kernel: BTRFS warning: duplicate device /dev/vda3 devid 1 generation 38 scanned by (udev-worker) (2973) Jan 30 13:52:21.937501 kernel: BTRFS warning: duplicate device /dev/vda3 devid 1 generation 38 scanned by (udev-worker) (2973) Jan 30 13:52:22.055809 kubelet[1772]: I0130 13:52:22.055722 1772 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="default/nginx-deployment-7fcdb87857-8vvwk" podStartSLOduration=4.75304106 podStartE2EDuration="9.055702914s" podCreationTimestamp="2025-01-30 13:52:13 +0000 UTC" firstStartedPulling="2025-01-30 13:52:17.077548328 +0000 UTC m=+26.358525017" lastFinishedPulling="2025-01-30 13:52:21.380210182 +0000 UTC m=+30.661186871" observedRunningTime="2025-01-30 13:52:22.055524134 +0000 UTC m=+31.336500823" watchObservedRunningTime="2025-01-30 13:52:22.055702914 +0000 UTC m=+31.336679603" Jan 30 13:52:22.185025 kubelet[1772]: E0130 13:52:22.184960 1772 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Jan 30 13:52:23.185804 kubelet[1772]: E0130 13:52:23.185731 1772 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Jan 30 13:52:24.186302 kubelet[1772]: E0130 13:52:24.186212 1772 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Jan 30 13:52:25.186835 kubelet[1772]: E0130 13:52:25.186786 1772 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Jan 30 13:52:25.397541 systemd[1]: Created slice kubepods-besteffort-pod0aebe73a_21a3_440d_acfa_4fdbc6d5af9a.slice - libcontainer container kubepods-besteffort-pod0aebe73a_21a3_440d_acfa_4fdbc6d5af9a.slice. 
Jan 30 13:52:25.498645 kubelet[1772]: I0130 13:52:25.498511 1772 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"data\" (UniqueName: \"kubernetes.io/empty-dir/0aebe73a-21a3-440d-acfa-4fdbc6d5af9a-data\") pod \"nfs-server-provisioner-0\" (UID: \"0aebe73a-21a3-440d-acfa-4fdbc6d5af9a\") " pod="default/nfs-server-provisioner-0" Jan 30 13:52:25.498645 kubelet[1772]: I0130 13:52:25.498558 1772 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-mz649\" (UniqueName: \"kubernetes.io/projected/0aebe73a-21a3-440d-acfa-4fdbc6d5af9a-kube-api-access-mz649\") pod \"nfs-server-provisioner-0\" (UID: \"0aebe73a-21a3-440d-acfa-4fdbc6d5af9a\") " pod="default/nfs-server-provisioner-0" Jan 30 13:52:25.700748 containerd[1455]: time="2025-01-30T13:52:25.700697342Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:nfs-server-provisioner-0,Uid:0aebe73a-21a3-440d-acfa-4fdbc6d5af9a,Namespace:default,Attempt:0,}" Jan 30 13:52:25.741017 systemd-networkd[1391]: lxc94bca266f3a8: Link UP Jan 30 13:52:25.748249 kernel: eth0: renamed from tmpc08d8 Jan 30 13:52:25.756713 systemd-networkd[1391]: lxc94bca266f3a8: Gained carrier Jan 30 13:52:25.959034 containerd[1455]: time="2025-01-30T13:52:25.958932242Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Jan 30 13:52:25.959034 containerd[1455]: time="2025-01-30T13:52:25.959013607Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Jan 30 13:52:25.959205 containerd[1455]: time="2025-01-30T13:52:25.959042331Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Jan 30 13:52:25.959775 containerd[1455]: time="2025-01-30T13:52:25.959707121Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Jan 30 13:52:25.986371 systemd[1]: Started cri-containerd-c08d8e55e4969f42b6c44f3ee602d758430206449bc9b55f0550ba6c52c4166d.scope - libcontainer container c08d8e55e4969f42b6c44f3ee602d758430206449bc9b55f0550ba6c52c4166d. Jan 30 13:52:25.998046 systemd-resolved[1326]: Failed to determine the local hostname and LLMNR/mDNS names, ignoring: No such device or address Jan 30 13:52:26.021666 containerd[1455]: time="2025-01-30T13:52:26.021462392Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:nfs-server-provisioner-0,Uid:0aebe73a-21a3-440d-acfa-4fdbc6d5af9a,Namespace:default,Attempt:0,} returns sandbox id \"c08d8e55e4969f42b6c44f3ee602d758430206449bc9b55f0550ba6c52c4166d\"" Jan 30 13:52:26.024122 containerd[1455]: time="2025-01-30T13:52:26.024094800Z" level=info msg="PullImage \"registry.k8s.io/sig-storage/nfs-provisioner:v4.0.8\"" Jan 30 13:52:26.186987 kubelet[1772]: E0130 13:52:26.186920 1772 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Jan 30 13:52:26.608816 systemd[1]: run-containerd-runc-k8s.io-c08d8e55e4969f42b6c44f3ee602d758430206449bc9b55f0550ba6c52c4166d-runc.c1IRcS.mount: Deactivated successfully. 
Jan 30 13:52:26.877394 systemd-networkd[1391]: lxc94bca266f3a8: Gained IPv6LL Jan 30 13:52:27.187530 kubelet[1772]: E0130 13:52:27.187481 1772 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Jan 30 13:52:28.187680 kubelet[1772]: E0130 13:52:28.187620 1772 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Jan 30 13:52:29.188638 kubelet[1772]: E0130 13:52:29.188558 1772 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Jan 30 13:52:30.189563 kubelet[1772]: E0130 13:52:30.189522 1772 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Jan 30 13:52:30.399858 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount4137184204.mount: Deactivated successfully. Jan 30 13:52:31.167274 kubelet[1772]: E0130 13:52:31.167207 1772 file.go:104] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Jan 30 13:52:31.190610 kubelet[1772]: E0130 13:52:31.190582 1772 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Jan 30 13:52:32.191329 kubelet[1772]: E0130 13:52:32.190983 1772 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Jan 30 13:52:33.191781 kubelet[1772]: E0130 13:52:33.191721 1772 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Jan 30 13:52:34.192309 kubelet[1772]: E0130 13:52:34.192258 1772 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Jan 30 13:52:34.867213 containerd[1455]: time="2025-01-30T13:52:34.867136757Z" level=info msg="ImageCreate event name:\"registry.k8s.io/sig-storage/nfs-provisioner:v4.0.8\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jan 30 13:52:34.867923 containerd[1455]: time="2025-01-30T13:52:34.867839523Z" level=info msg="stop pulling image registry.k8s.io/sig-storage/nfs-provisioner:v4.0.8: active requests=0, bytes read=91039406" Jan 30 13:52:34.869136 containerd[1455]: time="2025-01-30T13:52:34.869071838Z" level=info msg="ImageCreate event name:\"sha256:fd0b16f70b66b72bcb2f91d556fa33eba02729c44ffc5f2c16130e7f9fbed3c4\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jan 30 13:52:34.871781 containerd[1455]: time="2025-01-30T13:52:34.871731337Z" level=info msg="ImageCreate event name:\"registry.k8s.io/sig-storage/nfs-provisioner@sha256:c825f3d5e28bde099bd7a3daace28772d412c9157ad47fa752a9ad0baafc118d\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jan 30 13:52:34.872731 containerd[1455]: time="2025-01-30T13:52:34.872687993Z" level=info msg="Pulled image \"registry.k8s.io/sig-storage/nfs-provisioner:v4.0.8\" with image id \"sha256:fd0b16f70b66b72bcb2f91d556fa33eba02729c44ffc5f2c16130e7f9fbed3c4\", repo tag \"registry.k8s.io/sig-storage/nfs-provisioner:v4.0.8\", repo digest \"registry.k8s.io/sig-storage/nfs-provisioner@sha256:c825f3d5e28bde099bd7a3daace28772d412c9157ad47fa752a9ad0baafc118d\", size \"91036984\" in 8.848558245s" Jan 30 13:52:34.872731 containerd[1455]: time="2025-01-30T13:52:34.872724892Z" level=info msg="PullImage \"registry.k8s.io/sig-storage/nfs-provisioner:v4.0.8\" returns image reference 
\"sha256:fd0b16f70b66b72bcb2f91d556fa33eba02729c44ffc5f2c16130e7f9fbed3c4\"" Jan 30 13:52:34.874851 containerd[1455]: time="2025-01-30T13:52:34.874811810Z" level=info msg="CreateContainer within sandbox \"c08d8e55e4969f42b6c44f3ee602d758430206449bc9b55f0550ba6c52c4166d\" for container &ContainerMetadata{Name:nfs-server-provisioner,Attempt:0,}" Jan 30 13:52:34.888600 containerd[1455]: time="2025-01-30T13:52:34.888549252Z" level=info msg="CreateContainer within sandbox \"c08d8e55e4969f42b6c44f3ee602d758430206449bc9b55f0550ba6c52c4166d\" for &ContainerMetadata{Name:nfs-server-provisioner,Attempt:0,} returns container id \"11247636cfa23f044b2dbe251a4dedef23f613f020d4c93064d8e4ee3e7711b0\"" Jan 30 13:52:34.889109 containerd[1455]: time="2025-01-30T13:52:34.889072329Z" level=info msg="StartContainer for \"11247636cfa23f044b2dbe251a4dedef23f613f020d4c93064d8e4ee3e7711b0\"" Jan 30 13:52:34.958401 systemd[1]: Started cri-containerd-11247636cfa23f044b2dbe251a4dedef23f613f020d4c93064d8e4ee3e7711b0.scope - libcontainer container 11247636cfa23f044b2dbe251a4dedef23f613f020d4c93064d8e4ee3e7711b0. Jan 30 13:52:35.192749 kubelet[1772]: E0130 13:52:35.192702 1772 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Jan 30 13:52:35.593841 containerd[1455]: time="2025-01-30T13:52:35.593704587Z" level=info msg="StartContainer for \"11247636cfa23f044b2dbe251a4dedef23f613f020d4c93064d8e4ee3e7711b0\" returns successfully" Jan 30 13:52:35.980181 kubelet[1772]: I0130 13:52:35.980119 1772 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="default/nfs-server-provisioner-0" podStartSLOduration=2.130323012 podStartE2EDuration="10.980103417s" podCreationTimestamp="2025-01-30 13:52:25 +0000 UTC" firstStartedPulling="2025-01-30 13:52:26.023843884 +0000 UTC m=+35.304820573" lastFinishedPulling="2025-01-30 13:52:34.873624299 +0000 UTC m=+44.154600978" observedRunningTime="2025-01-30 13:52:35.979652817 +0000 UTC m=+45.260629516" watchObservedRunningTime="2025-01-30 13:52:35.980103417 +0000 UTC m=+45.261080106" Jan 30 13:52:36.192893 kubelet[1772]: E0130 13:52:36.192838 1772 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Jan 30 13:52:37.193054 kubelet[1772]: E0130 13:52:37.193013 1772 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Jan 30 13:52:38.194063 kubelet[1772]: E0130 13:52:38.194012 1772 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Jan 30 13:52:39.195010 kubelet[1772]: E0130 13:52:39.194957 1772 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Jan 30 13:52:40.195410 kubelet[1772]: E0130 13:52:40.195355 1772 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Jan 30 13:52:41.195552 kubelet[1772]: E0130 13:52:41.195494 1772 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Jan 30 13:52:42.195821 kubelet[1772]: E0130 13:52:42.195761 1772 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Jan 30 13:52:43.196830 kubelet[1772]: E0130 13:52:43.196777 1772 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Jan 30 
13:52:44.197738 kubelet[1772]: E0130 13:52:44.197669 1772 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Jan 30 13:52:45.198873 kubelet[1772]: E0130 13:52:45.198818 1772 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Jan 30 13:52:45.307161 systemd[1]: Created slice kubepods-besteffort-pode5e345c5_67d4_42b9_9df3_1552102a9b42.slice - libcontainer container kubepods-besteffort-pode5e345c5_67d4_42b9_9df3_1552102a9b42.slice. Jan 30 13:52:45.408341 kubelet[1772]: I0130 13:52:45.408281 1772 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"pvc-26895829-4a25-4824-8f74-23b946952fdb\" (UniqueName: \"kubernetes.io/nfs/e5e345c5-67d4-42b9-9df3-1552102a9b42-pvc-26895829-4a25-4824-8f74-23b946952fdb\") pod \"test-pod-1\" (UID: \"e5e345c5-67d4-42b9-9df3-1552102a9b42\") " pod="default/test-pod-1" Jan 30 13:52:45.408341 kubelet[1772]: I0130 13:52:45.408332 1772 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-r7h6m\" (UniqueName: \"kubernetes.io/projected/e5e345c5-67d4-42b9-9df3-1552102a9b42-kube-api-access-r7h6m\") pod \"test-pod-1\" (UID: \"e5e345c5-67d4-42b9-9df3-1552102a9b42\") " pod="default/test-pod-1" Jan 30 13:52:45.536303 kernel: FS-Cache: Loaded Jan 30 13:52:45.603431 kernel: RPC: Registered named UNIX socket transport module. Jan 30 13:52:45.603563 kernel: RPC: Registered udp transport module. Jan 30 13:52:45.603592 kernel: RPC: Registered tcp transport module. Jan 30 13:52:45.604731 kernel: RPC: Registered tcp-with-tls transport module. Jan 30 13:52:45.604766 kernel: RPC: Registered tcp NFSv4.1 backchannel transport module. Jan 30 13:52:45.884614 kernel: NFS: Registering the id_resolver key type Jan 30 13:52:45.884765 kernel: Key type id_resolver registered Jan 30 13:52:45.884799 kernel: Key type id_legacy registered Jan 30 13:52:45.911639 nfsidmap[3175]: nss_getpwnam: name 'root@nfs-server-provisioner.default.svc.cluster.local' does not map into domain 'localdomain' Jan 30 13:52:45.916561 nfsidmap[3178]: nss_name_to_gid: name 'root@nfs-server-provisioner.default.svc.cluster.local' does not map into domain 'localdomain' Jan 30 13:52:46.199834 kubelet[1772]: E0130 13:52:46.199747 1772 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Jan 30 13:52:46.210937 containerd[1455]: time="2025-01-30T13:52:46.210875619Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:test-pod-1,Uid:e5e345c5-67d4-42b9-9df3-1552102a9b42,Namespace:default,Attempt:0,}" Jan 30 13:52:46.241455 systemd-networkd[1391]: lxc7900666ac471: Link UP Jan 30 13:52:46.257250 kernel: eth0: renamed from tmp8ec3b Jan 30 13:52:46.268291 systemd-networkd[1391]: lxc7900666ac471: Gained carrier Jan 30 13:52:46.489966 containerd[1455]: time="2025-01-30T13:52:46.489803132Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Jan 30 13:52:46.490542 containerd[1455]: time="2025-01-30T13:52:46.490462623Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Jan 30 13:52:46.490542 containerd[1455]: time="2025-01-30T13:52:46.490499472Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." 
runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Jan 30 13:52:46.490766 containerd[1455]: time="2025-01-30T13:52:46.490706501Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Jan 30 13:52:46.509350 systemd[1]: Started cri-containerd-8ec3b00a274cd2d4fe4431dc57ac1f43077267c9f78f1e48cbf9ec045a4a3f36.scope - libcontainer container 8ec3b00a274cd2d4fe4431dc57ac1f43077267c9f78f1e48cbf9ec045a4a3f36. Jan 30 13:52:46.522586 systemd-resolved[1326]: Failed to determine the local hostname and LLMNR/mDNS names, ignoring: No such device or address Jan 30 13:52:46.544747 containerd[1455]: time="2025-01-30T13:52:46.544707011Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:test-pod-1,Uid:e5e345c5-67d4-42b9-9df3-1552102a9b42,Namespace:default,Attempt:0,} returns sandbox id \"8ec3b00a274cd2d4fe4431dc57ac1f43077267c9f78f1e48cbf9ec045a4a3f36\"" Jan 30 13:52:46.545834 containerd[1455]: time="2025-01-30T13:52:46.545806539Z" level=info msg="PullImage \"ghcr.io/flatcar/nginx:latest\"" Jan 30 13:52:46.903682 containerd[1455]: time="2025-01-30T13:52:46.903538643Z" level=info msg="ImageUpdate event name:\"ghcr.io/flatcar/nginx:latest\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jan 30 13:52:46.904454 containerd[1455]: time="2025-01-30T13:52:46.904396736Z" level=info msg="stop pulling image ghcr.io/flatcar/nginx:latest: active requests=0, bytes read=61" Jan 30 13:52:46.906870 containerd[1455]: time="2025-01-30T13:52:46.906836334Z" level=info msg="Pulled image \"ghcr.io/flatcar/nginx:latest\" with image id \"sha256:0dcfd986e814f68db775fba6b61fbaec3761562dc2ae3043d38dbff123e1bb1e\", repo tag \"ghcr.io/flatcar/nginx:latest\", repo digest \"ghcr.io/flatcar/nginx@sha256:2ffeb5a7ca68f2017f0bc48251750a6e40fcd3c341b94a22fc7812dcabbb84db\", size \"71015439\" in 360.987626ms" Jan 30 13:52:46.906870 containerd[1455]: time="2025-01-30T13:52:46.906863626Z" level=info msg="PullImage \"ghcr.io/flatcar/nginx:latest\" returns image reference \"sha256:0dcfd986e814f68db775fba6b61fbaec3761562dc2ae3043d38dbff123e1bb1e\"" Jan 30 13:52:46.908648 containerd[1455]: time="2025-01-30T13:52:46.908606033Z" level=info msg="CreateContainer within sandbox \"8ec3b00a274cd2d4fe4431dc57ac1f43077267c9f78f1e48cbf9ec045a4a3f36\" for container &ContainerMetadata{Name:test,Attempt:0,}" Jan 30 13:52:46.922581 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount3487274710.mount: Deactivated successfully. Jan 30 13:52:46.925209 containerd[1455]: time="2025-01-30T13:52:46.925168482Z" level=info msg="CreateContainer within sandbox \"8ec3b00a274cd2d4fe4431dc57ac1f43077267c9f78f1e48cbf9ec045a4a3f36\" for &ContainerMetadata{Name:test,Attempt:0,} returns container id \"334c7a23dae5815284231e74922e00e8a37645e8b57cd3aa3e149c6e77d200f4\"" Jan 30 13:52:46.925871 containerd[1455]: time="2025-01-30T13:52:46.925829886Z" level=info msg="StartContainer for \"334c7a23dae5815284231e74922e00e8a37645e8b57cd3aa3e149c6e77d200f4\"" Jan 30 13:52:46.960357 systemd[1]: Started cri-containerd-334c7a23dae5815284231e74922e00e8a37645e8b57cd3aa3e149c6e77d200f4.scope - libcontainer container 334c7a23dae5815284231e74922e00e8a37645e8b57cd3aa3e149c6e77d200f4. 
Jan 30 13:52:46.989010 containerd[1455]: time="2025-01-30T13:52:46.988921062Z" level=info msg="StartContainer for \"334c7a23dae5815284231e74922e00e8a37645e8b57cd3aa3e149c6e77d200f4\" returns successfully" Jan 30 13:52:47.200481 kubelet[1772]: E0130 13:52:47.200442 1772 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Jan 30 13:52:47.518001 systemd[1]: run-containerd-runc-k8s.io-334c7a23dae5815284231e74922e00e8a37645e8b57cd3aa3e149c6e77d200f4-runc.J7rCnL.mount: Deactivated successfully. Jan 30 13:52:47.869553 systemd-networkd[1391]: lxc7900666ac471: Gained IPv6LL Jan 30 13:52:48.001006 kubelet[1772]: I0130 13:52:48.000957 1772 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="default/test-pod-1" podStartSLOduration=22.639026079 podStartE2EDuration="23.000942542s" podCreationTimestamp="2025-01-30 13:52:25 +0000 UTC" firstStartedPulling="2025-01-30 13:52:46.545522646 +0000 UTC m=+55.826499335" lastFinishedPulling="2025-01-30 13:52:46.907439108 +0000 UTC m=+56.188415798" observedRunningTime="2025-01-30 13:52:48.000617442 +0000 UTC m=+57.281594131" watchObservedRunningTime="2025-01-30 13:52:48.000942542 +0000 UTC m=+57.281919231" Jan 30 13:52:48.201337 kubelet[1772]: E0130 13:52:48.201294 1772 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Jan 30 13:52:49.201649 kubelet[1772]: E0130 13:52:49.201573 1772 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Jan 30 13:52:50.201854 kubelet[1772]: E0130 13:52:50.201776 1772 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Jan 30 13:52:51.168128 kubelet[1772]: E0130 13:52:51.168062 1772 file.go:104] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Jan 30 13:52:51.202150 kubelet[1772]: E0130 13:52:51.202120 1772 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Jan 30 13:52:52.202698 kubelet[1772]: E0130 13:52:52.202665 1772 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Jan 30 13:52:53.203046 kubelet[1772]: E0130 13:52:53.202988 1772 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Jan 30 13:52:53.892180 containerd[1455]: time="2025-01-30T13:52:53.892134563Z" level=error msg="failed to reload cni configuration after receiving fs change event(REMOVE \"/etc/cni/net.d/05-cilium.conf\")" error="cni config load failed: no network config found in /etc/cni/net.d: cni plugin not initialized: failed to load cni config" Jan 30 13:52:53.898990 containerd[1455]: time="2025-01-30T13:52:53.898964285Z" level=info msg="StopContainer for \"ec7f383f2a1e38c4e4d57c1871f836d7721a2eb737d20a2e9386463dae10573e\" with timeout 2 (s)" Jan 30 13:52:53.899215 containerd[1455]: time="2025-01-30T13:52:53.899177355Z" level=info msg="Stop container \"ec7f383f2a1e38c4e4d57c1871f836d7721a2eb737d20a2e9386463dae10573e\" with signal terminated" Jan 30 13:52:53.906107 systemd-networkd[1391]: lxc_health: Link DOWN Jan 30 13:52:53.906117 systemd-networkd[1391]: lxc_health: Lost carrier Jan 30 13:52:53.940849 systemd[1]: cri-containerd-ec7f383f2a1e38c4e4d57c1871f836d7721a2eb737d20a2e9386463dae10573e.scope: Deactivated successfully. 
Jan 30 13:52:53.941218 systemd[1]: cri-containerd-ec7f383f2a1e38c4e4d57c1871f836d7721a2eb737d20a2e9386463dae10573e.scope: Consumed 7.244s CPU time. Jan 30 13:52:53.961147 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-ec7f383f2a1e38c4e4d57c1871f836d7721a2eb737d20a2e9386463dae10573e-rootfs.mount: Deactivated successfully. Jan 30 13:52:54.165609 containerd[1455]: time="2025-01-30T13:52:54.165544232Z" level=info msg="shim disconnected" id=ec7f383f2a1e38c4e4d57c1871f836d7721a2eb737d20a2e9386463dae10573e namespace=k8s.io Jan 30 13:52:54.165609 containerd[1455]: time="2025-01-30T13:52:54.165605016Z" level=warning msg="cleaning up after shim disconnected" id=ec7f383f2a1e38c4e4d57c1871f836d7721a2eb737d20a2e9386463dae10573e namespace=k8s.io Jan 30 13:52:54.165831 containerd[1455]: time="2025-01-30T13:52:54.165614754Z" level=info msg="cleaning up dead shim" namespace=k8s.io Jan 30 13:52:54.199916 containerd[1455]: time="2025-01-30T13:52:54.199862771Z" level=info msg="StopContainer for \"ec7f383f2a1e38c4e4d57c1871f836d7721a2eb737d20a2e9386463dae10573e\" returns successfully" Jan 30 13:52:54.200526 containerd[1455]: time="2025-01-30T13:52:54.200481924Z" level=info msg="StopPodSandbox for \"d5ad9ebc507a53f97fdec60d9dcbf4872d54ade44887eebbffa10833440f7000\"" Jan 30 13:52:54.200526 containerd[1455]: time="2025-01-30T13:52:54.200520476Z" level=info msg="Container to stop \"bd7c9778220f6a472cb39bc0aae27d6f9aa558130cb7c3d26683969f0288b01a\" must be in running or unknown state, current state \"CONTAINER_EXITED\"" Jan 30 13:52:54.200624 containerd[1455]: time="2025-01-30T13:52:54.200532849Z" level=info msg="Container to stop \"28c3b2688f5f22e2509d7848782789e7f38a402bc8d228dcdeb71b82892869b3\" must be in running or unknown state, current state \"CONTAINER_EXITED\"" Jan 30 13:52:54.200624 containerd[1455]: time="2025-01-30T13:52:54.200541906Z" level=info msg="Container to stop \"7fe27c347dab59718078eb835bcd0857524d89ebe49b2870e2d28c380e0d5313\" must be in running or unknown state, current state \"CONTAINER_EXITED\"" Jan 30 13:52:54.200624 containerd[1455]: time="2025-01-30T13:52:54.200550783Z" level=info msg="Container to stop \"8d9e882b8bfd999e9bd3b2d7f960b21b4475498c8c9c59366d592feab71e0447\" must be in running or unknown state, current state \"CONTAINER_EXITED\"" Jan 30 13:52:54.200624 containerd[1455]: time="2025-01-30T13:52:54.200571682Z" level=info msg="Container to stop \"ec7f383f2a1e38c4e4d57c1871f836d7721a2eb737d20a2e9386463dae10573e\" must be in running or unknown state, current state \"CONTAINER_EXITED\"" Jan 30 13:52:54.202459 systemd[1]: run-containerd-io.containerd.grpc.v1.cri-sandboxes-d5ad9ebc507a53f97fdec60d9dcbf4872d54ade44887eebbffa10833440f7000-shm.mount: Deactivated successfully. Jan 30 13:52:54.203728 kubelet[1772]: E0130 13:52:54.203701 1772 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Jan 30 13:52:54.207473 systemd[1]: cri-containerd-d5ad9ebc507a53f97fdec60d9dcbf4872d54ade44887eebbffa10833440f7000.scope: Deactivated successfully. Jan 30 13:52:54.226053 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-d5ad9ebc507a53f97fdec60d9dcbf4872d54ade44887eebbffa10833440f7000-rootfs.mount: Deactivated successfully. 
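`StopContainer ... with timeout 2` followed by `Stop container ... with signal terminated` and, shortly after, the scope deactivation and rootfs unmount is the usual two-phase stop: SIGTERM first, SIGKILL only if the grace period expires, then the task is deleted so the shim can exit. A minimal sketch of that flow with the containerd Go client (illustrative only; just the two-second timeout comes from the log):

```go
// Sketch of the two-phase stop seen above: SIGTERM, wait up to the grace
// period, then SIGKILL. Assumes an existing containerd.Container handle.
package main

import (
	"context"
	"syscall"
	"time"

	"github.com/containerd/containerd"
)

func stopContainer(ctx context.Context, c containerd.Container, timeout time.Duration) error {
	task, err := c.Task(ctx, nil)
	if err != nil {
		return err
	}
	exitCh, err := task.Wait(ctx)
	if err != nil {
		return err
	}
	// "Stop container ... with signal terminated"
	if err := task.Kill(ctx, syscall.SIGTERM); err != nil {
		return err
	}
	select {
	case <-exitCh: // exited within the grace period
	case <-time.After(timeout): // grace period (2s in the log) expired
		if err := task.Kill(ctx, syscall.SIGKILL); err != nil {
			return err
		}
		<-exitCh
	}
	_, err = task.Delete(ctx) // lets the shim exit; systemd then drops the .scope unit
	return err
}
```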
Jan 30 13:52:54.232469 containerd[1455]: time="2025-01-30T13:52:54.232386729Z" level=info msg="shim disconnected" id=d5ad9ebc507a53f97fdec60d9dcbf4872d54ade44887eebbffa10833440f7000 namespace=k8s.io Jan 30 13:52:54.232469 containerd[1455]: time="2025-01-30T13:52:54.232467049Z" level=warning msg="cleaning up after shim disconnected" id=d5ad9ebc507a53f97fdec60d9dcbf4872d54ade44887eebbffa10833440f7000 namespace=k8s.io Jan 30 13:52:54.232469 containerd[1455]: time="2025-01-30T13:52:54.232476958Z" level=info msg="cleaning up dead shim" namespace=k8s.io Jan 30 13:52:54.247889 containerd[1455]: time="2025-01-30T13:52:54.247824670Z" level=info msg="TearDown network for sandbox \"d5ad9ebc507a53f97fdec60d9dcbf4872d54ade44887eebbffa10833440f7000\" successfully" Jan 30 13:52:54.247889 containerd[1455]: time="2025-01-30T13:52:54.247870075Z" level=info msg="StopPodSandbox for \"d5ad9ebc507a53f97fdec60d9dcbf4872d54ade44887eebbffa10833440f7000\" returns successfully" Jan 30 13:52:54.263332 kubelet[1772]: I0130 13:52:54.263276 1772 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"bpf-maps\" (UniqueName: \"kubernetes.io/host-path/4a521709-11a9-4f29-9e62-40df10f8b4c0-bpf-maps\") pod \"4a521709-11a9-4f29-9e62-40df10f8b4c0\" (UID: \"4a521709-11a9-4f29-9e62-40df10f8b4c0\") " Jan 30 13:52:54.263332 kubelet[1772]: I0130 13:52:54.263319 1772 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"cilium-run\" (UniqueName: \"kubernetes.io/host-path/4a521709-11a9-4f29-9e62-40df10f8b4c0-cilium-run\") pod \"4a521709-11a9-4f29-9e62-40df10f8b4c0\" (UID: \"4a521709-11a9-4f29-9e62-40df10f8b4c0\") " Jan 30 13:52:54.263514 kubelet[1772]: I0130 13:52:54.263349 1772 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"hostproc\" (UniqueName: \"kubernetes.io/host-path/4a521709-11a9-4f29-9e62-40df10f8b4c0-hostproc\") pod \"4a521709-11a9-4f29-9e62-40df10f8b4c0\" (UID: \"4a521709-11a9-4f29-9e62-40df10f8b4c0\") " Jan 30 13:52:54.263514 kubelet[1772]: I0130 13:52:54.263383 1772 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"cilium-config-path\" (UniqueName: \"kubernetes.io/configmap/4a521709-11a9-4f29-9e62-40df10f8b4c0-cilium-config-path\") pod \"4a521709-11a9-4f29-9e62-40df10f8b4c0\" (UID: \"4a521709-11a9-4f29-9e62-40df10f8b4c0\") " Jan 30 13:52:54.263514 kubelet[1772]: I0130 13:52:54.263410 1772 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"host-proc-sys-kernel\" (UniqueName: \"kubernetes.io/host-path/4a521709-11a9-4f29-9e62-40df10f8b4c0-host-proc-sys-kernel\") pod \"4a521709-11a9-4f29-9e62-40df10f8b4c0\" (UID: \"4a521709-11a9-4f29-9e62-40df10f8b4c0\") " Jan 30 13:52:54.263514 kubelet[1772]: I0130 13:52:54.263417 1772 operation_generator.go:780] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/4a521709-11a9-4f29-9e62-40df10f8b4c0-cilium-run" (OuterVolumeSpecName: "cilium-run") pod "4a521709-11a9-4f29-9e62-40df10f8b4c0" (UID: "4a521709-11a9-4f29-9e62-40df10f8b4c0"). InnerVolumeSpecName "cilium-run". PluginName "kubernetes.io/host-path", VolumeGIDValue "" Jan 30 13:52:54.263514 kubelet[1772]: I0130 13:52:54.263415 1772 operation_generator.go:780] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/4a521709-11a9-4f29-9e62-40df10f8b4c0-bpf-maps" (OuterVolumeSpecName: "bpf-maps") pod "4a521709-11a9-4f29-9e62-40df10f8b4c0" (UID: "4a521709-11a9-4f29-9e62-40df10f8b4c0"). InnerVolumeSpecName "bpf-maps". 
PluginName "kubernetes.io/host-path", VolumeGIDValue "" Jan 30 13:52:54.263514 kubelet[1772]: I0130 13:52:54.263435 1772 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"hubble-tls\" (UniqueName: \"kubernetes.io/projected/4a521709-11a9-4f29-9e62-40df10f8b4c0-hubble-tls\") pod \"4a521709-11a9-4f29-9e62-40df10f8b4c0\" (UID: \"4a521709-11a9-4f29-9e62-40df10f8b4c0\") " Jan 30 13:52:54.263674 kubelet[1772]: I0130 13:52:54.263509 1772 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"cni-path\" (UniqueName: \"kubernetes.io/host-path/4a521709-11a9-4f29-9e62-40df10f8b4c0-cni-path\") pod \"4a521709-11a9-4f29-9e62-40df10f8b4c0\" (UID: \"4a521709-11a9-4f29-9e62-40df10f8b4c0\") " Jan 30 13:52:54.263674 kubelet[1772]: I0130 13:52:54.263533 1772 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"xtables-lock\" (UniqueName: \"kubernetes.io/host-path/4a521709-11a9-4f29-9e62-40df10f8b4c0-xtables-lock\") pod \"4a521709-11a9-4f29-9e62-40df10f8b4c0\" (UID: \"4a521709-11a9-4f29-9e62-40df10f8b4c0\") " Jan 30 13:52:54.263674 kubelet[1772]: I0130 13:52:54.263565 1772 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"cilium-cgroup\" (UniqueName: \"kubernetes.io/host-path/4a521709-11a9-4f29-9e62-40df10f8b4c0-cilium-cgroup\") pod \"4a521709-11a9-4f29-9e62-40df10f8b4c0\" (UID: \"4a521709-11a9-4f29-9e62-40df10f8b4c0\") " Jan 30 13:52:54.263674 kubelet[1772]: I0130 13:52:54.263586 1772 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"etc-cni-netd\" (UniqueName: \"kubernetes.io/host-path/4a521709-11a9-4f29-9e62-40df10f8b4c0-etc-cni-netd\") pod \"4a521709-11a9-4f29-9e62-40df10f8b4c0\" (UID: \"4a521709-11a9-4f29-9e62-40df10f8b4c0\") " Jan 30 13:52:54.263674 kubelet[1772]: I0130 13:52:54.263615 1772 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/4a521709-11a9-4f29-9e62-40df10f8b4c0-lib-modules\") pod \"4a521709-11a9-4f29-9e62-40df10f8b4c0\" (UID: \"4a521709-11a9-4f29-9e62-40df10f8b4c0\") " Jan 30 13:52:54.263674 kubelet[1772]: I0130 13:52:54.263640 1772 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"kube-api-access-58kcj\" (UniqueName: \"kubernetes.io/projected/4a521709-11a9-4f29-9e62-40df10f8b4c0-kube-api-access-58kcj\") pod \"4a521709-11a9-4f29-9e62-40df10f8b4c0\" (UID: \"4a521709-11a9-4f29-9e62-40df10f8b4c0\") " Jan 30 13:52:54.263813 kubelet[1772]: I0130 13:52:54.263668 1772 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"clustermesh-secrets\" (UniqueName: \"kubernetes.io/secret/4a521709-11a9-4f29-9e62-40df10f8b4c0-clustermesh-secrets\") pod \"4a521709-11a9-4f29-9e62-40df10f8b4c0\" (UID: \"4a521709-11a9-4f29-9e62-40df10f8b4c0\") " Jan 30 13:52:54.263813 kubelet[1772]: I0130 13:52:54.263688 1772 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"host-proc-sys-net\" (UniqueName: \"kubernetes.io/host-path/4a521709-11a9-4f29-9e62-40df10f8b4c0-host-proc-sys-net\") pod \"4a521709-11a9-4f29-9e62-40df10f8b4c0\" (UID: \"4a521709-11a9-4f29-9e62-40df10f8b4c0\") " Jan 30 13:52:54.263813 kubelet[1772]: I0130 13:52:54.263726 1772 reconciler_common.go:299] "Volume detached for volume \"bpf-maps\" (UniqueName: \"kubernetes.io/host-path/4a521709-11a9-4f29-9e62-40df10f8b4c0-bpf-maps\") on node \"10.0.0.127\" DevicePath \"\"" Jan 30 13:52:54.263813 kubelet[1772]: I0130 13:52:54.263739 1772 
reconciler_common.go:299] "Volume detached for volume \"cilium-run\" (UniqueName: \"kubernetes.io/host-path/4a521709-11a9-4f29-9e62-40df10f8b4c0-cilium-run\") on node \"10.0.0.127\" DevicePath \"\"" Jan 30 13:52:54.263813 kubelet[1772]: I0130 13:52:54.263770 1772 operation_generator.go:780] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/4a521709-11a9-4f29-9e62-40df10f8b4c0-host-proc-sys-net" (OuterVolumeSpecName: "host-proc-sys-net") pod "4a521709-11a9-4f29-9e62-40df10f8b4c0" (UID: "4a521709-11a9-4f29-9e62-40df10f8b4c0"). InnerVolumeSpecName "host-proc-sys-net". PluginName "kubernetes.io/host-path", VolumeGIDValue "" Jan 30 13:52:54.263813 kubelet[1772]: I0130 13:52:54.263790 1772 operation_generator.go:780] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/4a521709-11a9-4f29-9e62-40df10f8b4c0-cni-path" (OuterVolumeSpecName: "cni-path") pod "4a521709-11a9-4f29-9e62-40df10f8b4c0" (UID: "4a521709-11a9-4f29-9e62-40df10f8b4c0"). InnerVolumeSpecName "cni-path". PluginName "kubernetes.io/host-path", VolumeGIDValue "" Jan 30 13:52:54.263949 kubelet[1772]: I0130 13:52:54.263808 1772 operation_generator.go:780] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/4a521709-11a9-4f29-9e62-40df10f8b4c0-xtables-lock" (OuterVolumeSpecName: "xtables-lock") pod "4a521709-11a9-4f29-9e62-40df10f8b4c0" (UID: "4a521709-11a9-4f29-9e62-40df10f8b4c0"). InnerVolumeSpecName "xtables-lock". PluginName "kubernetes.io/host-path", VolumeGIDValue "" Jan 30 13:52:54.263949 kubelet[1772]: I0130 13:52:54.263828 1772 operation_generator.go:780] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/4a521709-11a9-4f29-9e62-40df10f8b4c0-cilium-cgroup" (OuterVolumeSpecName: "cilium-cgroup") pod "4a521709-11a9-4f29-9e62-40df10f8b4c0" (UID: "4a521709-11a9-4f29-9e62-40df10f8b4c0"). InnerVolumeSpecName "cilium-cgroup". PluginName "kubernetes.io/host-path", VolumeGIDValue "" Jan 30 13:52:54.263949 kubelet[1772]: I0130 13:52:54.263847 1772 operation_generator.go:780] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/4a521709-11a9-4f29-9e62-40df10f8b4c0-etc-cni-netd" (OuterVolumeSpecName: "etc-cni-netd") pod "4a521709-11a9-4f29-9e62-40df10f8b4c0" (UID: "4a521709-11a9-4f29-9e62-40df10f8b4c0"). InnerVolumeSpecName "etc-cni-netd". PluginName "kubernetes.io/host-path", VolumeGIDValue "" Jan 30 13:52:54.263949 kubelet[1772]: I0130 13:52:54.263866 1772 operation_generator.go:780] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/4a521709-11a9-4f29-9e62-40df10f8b4c0-lib-modules" (OuterVolumeSpecName: "lib-modules") pod "4a521709-11a9-4f29-9e62-40df10f8b4c0" (UID: "4a521709-11a9-4f29-9e62-40df10f8b4c0"). InnerVolumeSpecName "lib-modules". PluginName "kubernetes.io/host-path", VolumeGIDValue "" Jan 30 13:52:54.264508 kubelet[1772]: I0130 13:52:54.264097 1772 operation_generator.go:780] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/4a521709-11a9-4f29-9e62-40df10f8b4c0-host-proc-sys-kernel" (OuterVolumeSpecName: "host-proc-sys-kernel") pod "4a521709-11a9-4f29-9e62-40df10f8b4c0" (UID: "4a521709-11a9-4f29-9e62-40df10f8b4c0"). InnerVolumeSpecName "host-proc-sys-kernel". 
PluginName "kubernetes.io/host-path", VolumeGIDValue "" Jan 30 13:52:54.264508 kubelet[1772]: I0130 13:52:54.264140 1772 operation_generator.go:780] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/4a521709-11a9-4f29-9e62-40df10f8b4c0-hostproc" (OuterVolumeSpecName: "hostproc") pod "4a521709-11a9-4f29-9e62-40df10f8b4c0" (UID: "4a521709-11a9-4f29-9e62-40df10f8b4c0"). InnerVolumeSpecName "hostproc". PluginName "kubernetes.io/host-path", VolumeGIDValue "" Jan 30 13:52:54.267042 kubelet[1772]: I0130 13:52:54.267007 1772 operation_generator.go:780] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/4a521709-11a9-4f29-9e62-40df10f8b4c0-kube-api-access-58kcj" (OuterVolumeSpecName: "kube-api-access-58kcj") pod "4a521709-11a9-4f29-9e62-40df10f8b4c0" (UID: "4a521709-11a9-4f29-9e62-40df10f8b4c0"). InnerVolumeSpecName "kube-api-access-58kcj". PluginName "kubernetes.io/projected", VolumeGIDValue "" Jan 30 13:52:54.267437 kubelet[1772]: I0130 13:52:54.267407 1772 operation_generator.go:780] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/4a521709-11a9-4f29-9e62-40df10f8b4c0-hubble-tls" (OuterVolumeSpecName: "hubble-tls") pod "4a521709-11a9-4f29-9e62-40df10f8b4c0" (UID: "4a521709-11a9-4f29-9e62-40df10f8b4c0"). InnerVolumeSpecName "hubble-tls". PluginName "kubernetes.io/projected", VolumeGIDValue "" Jan 30 13:52:54.268260 kubelet[1772]: I0130 13:52:54.267846 1772 operation_generator.go:780] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/4a521709-11a9-4f29-9e62-40df10f8b4c0-cilium-config-path" (OuterVolumeSpecName: "cilium-config-path") pod "4a521709-11a9-4f29-9e62-40df10f8b4c0" (UID: "4a521709-11a9-4f29-9e62-40df10f8b4c0"). InnerVolumeSpecName "cilium-config-path". PluginName "kubernetes.io/configmap", VolumeGIDValue "" Jan 30 13:52:54.268260 kubelet[1772]: I0130 13:52:54.267902 1772 operation_generator.go:780] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/4a521709-11a9-4f29-9e62-40df10f8b4c0-clustermesh-secrets" (OuterVolumeSpecName: "clustermesh-secrets") pod "4a521709-11a9-4f29-9e62-40df10f8b4c0" (UID: "4a521709-11a9-4f29-9e62-40df10f8b4c0"). InnerVolumeSpecName "clustermesh-secrets". PluginName "kubernetes.io/secret", VolumeGIDValue "" Jan 30 13:52:54.268569 systemd[1]: var-lib-kubelet-pods-4a521709\x2d11a9\x2d4f29\x2d9e62\x2d40df10f8b4c0-volumes-kubernetes.io\x7eprojected-kube\x2dapi\x2daccess\x2d58kcj.mount: Deactivated successfully. Jan 30 13:52:54.268697 systemd[1]: var-lib-kubelet-pods-4a521709\x2d11a9\x2d4f29\x2d9e62\x2d40df10f8b4c0-volumes-kubernetes.io\x7eprojected-hubble\x2dtls.mount: Deactivated successfully. 
Jan 30 13:52:54.363987 kubelet[1772]: I0130 13:52:54.363910 1772 reconciler_common.go:299] "Volume detached for volume \"etc-cni-netd\" (UniqueName: \"kubernetes.io/host-path/4a521709-11a9-4f29-9e62-40df10f8b4c0-etc-cni-netd\") on node \"10.0.0.127\" DevicePath \"\"" Jan 30 13:52:54.363987 kubelet[1772]: I0130 13:52:54.363965 1772 reconciler_common.go:299] "Volume detached for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/4a521709-11a9-4f29-9e62-40df10f8b4c0-lib-modules\") on node \"10.0.0.127\" DevicePath \"\"" Jan 30 13:52:54.363987 kubelet[1772]: I0130 13:52:54.363974 1772 reconciler_common.go:299] "Volume detached for volume \"kube-api-access-58kcj\" (UniqueName: \"kubernetes.io/projected/4a521709-11a9-4f29-9e62-40df10f8b4c0-kube-api-access-58kcj\") on node \"10.0.0.127\" DevicePath \"\"" Jan 30 13:52:54.363987 kubelet[1772]: I0130 13:52:54.363987 1772 reconciler_common.go:299] "Volume detached for volume \"clustermesh-secrets\" (UniqueName: \"kubernetes.io/secret/4a521709-11a9-4f29-9e62-40df10f8b4c0-clustermesh-secrets\") on node \"10.0.0.127\" DevicePath \"\"" Jan 30 13:52:54.363987 kubelet[1772]: I0130 13:52:54.363995 1772 reconciler_common.go:299] "Volume detached for volume \"host-proc-sys-net\" (UniqueName: \"kubernetes.io/host-path/4a521709-11a9-4f29-9e62-40df10f8b4c0-host-proc-sys-net\") on node \"10.0.0.127\" DevicePath \"\"" Jan 30 13:52:54.363987 kubelet[1772]: I0130 13:52:54.364003 1772 reconciler_common.go:299] "Volume detached for volume \"hostproc\" (UniqueName: \"kubernetes.io/host-path/4a521709-11a9-4f29-9e62-40df10f8b4c0-hostproc\") on node \"10.0.0.127\" DevicePath \"\"" Jan 30 13:52:54.363987 kubelet[1772]: I0130 13:52:54.364013 1772 reconciler_common.go:299] "Volume detached for volume \"cilium-config-path\" (UniqueName: \"kubernetes.io/configmap/4a521709-11a9-4f29-9e62-40df10f8b4c0-cilium-config-path\") on node \"10.0.0.127\" DevicePath \"\"" Jan 30 13:52:54.364325 kubelet[1772]: I0130 13:52:54.364021 1772 reconciler_common.go:299] "Volume detached for volume \"host-proc-sys-kernel\" (UniqueName: \"kubernetes.io/host-path/4a521709-11a9-4f29-9e62-40df10f8b4c0-host-proc-sys-kernel\") on node \"10.0.0.127\" DevicePath \"\"" Jan 30 13:52:54.364325 kubelet[1772]: I0130 13:52:54.364029 1772 reconciler_common.go:299] "Volume detached for volume \"hubble-tls\" (UniqueName: \"kubernetes.io/projected/4a521709-11a9-4f29-9e62-40df10f8b4c0-hubble-tls\") on node \"10.0.0.127\" DevicePath \"\"" Jan 30 13:52:54.364325 kubelet[1772]: I0130 13:52:54.364036 1772 reconciler_common.go:299] "Volume detached for volume \"cni-path\" (UniqueName: \"kubernetes.io/host-path/4a521709-11a9-4f29-9e62-40df10f8b4c0-cni-path\") on node \"10.0.0.127\" DevicePath \"\"" Jan 30 13:52:54.364325 kubelet[1772]: I0130 13:52:54.364043 1772 reconciler_common.go:299] "Volume detached for volume \"xtables-lock\" (UniqueName: \"kubernetes.io/host-path/4a521709-11a9-4f29-9e62-40df10f8b4c0-xtables-lock\") on node \"10.0.0.127\" DevicePath \"\"" Jan 30 13:52:54.364325 kubelet[1772]: I0130 13:52:54.364051 1772 reconciler_common.go:299] "Volume detached for volume \"cilium-cgroup\" (UniqueName: \"kubernetes.io/host-path/4a521709-11a9-4f29-9e62-40df10f8b4c0-cilium-cgroup\") on node \"10.0.0.127\" DevicePath \"\"" Jan 30 13:52:54.877165 systemd[1]: var-lib-kubelet-pods-4a521709\x2d11a9\x2d4f29\x2d9e62\x2d40df10f8b4c0-volumes-kubernetes.io\x7esecret-clustermesh\x2dsecrets.mount: Deactivated successfully. 
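The reconciler entries above unwind the old Cilium agent pod's volumes: plain hostPath mounts (bpf-maps, cilium-run, hostproc, cni-path, ...), the cilium-config-path ConfigMap, the clustermesh-secrets Secret, and two projected volumes (hubble-tls and the kube-api-access token), whose tmpfs mounts are the `...-volumes-kubernetes.io~projected-....mount` and `...~secret-....mount` units systemd deactivates. For reference, a hedged sketch of how two of those volume types are declared on the pod-spec side (volume names follow the log; every path and value below is an assumption based on typical manifests, not read from this host):

```go
// Illustrative pod-spec fragments only; not the actual manifests on this node.
package sketch

import corev1 "k8s.io/api/core/v1"

var hostPathDirOrCreate = corev1.HostPathDirectoryOrCreate

// A hostPath volume such as cni-path: nothing to unmount on teardown beyond
// the kubelet's own bookkeeping, which is why those entries finish immediately.
var cniPathVolume = corev1.Volume{
	Name: "cni-path",
	VolumeSource: corev1.VolumeSource{
		HostPath: &corev1.HostPathVolumeSource{
			Path: "/opt/cni/bin", // assumed; the log only shows the volume name
			Type: &hostPathDirOrCreate,
		},
	},
}

// kube-api-access-* is a projected volume combining the service account token
// and the cluster CA bundle; it is backed by a tmpfs mount like the one above.
var tokenExpiration int64 = 3607 // assumed default

var kubeAPIAccessVolume = corev1.Volume{
	Name: "kube-api-access-58kcj",
	VolumeSource: corev1.VolumeSource{
		Projected: &corev1.ProjectedVolumeSource{
			Sources: []corev1.VolumeProjection{
				{ServiceAccountToken: &corev1.ServiceAccountTokenProjection{
					Path:              "token",
					ExpirationSeconds: &tokenExpiration,
				}},
				{ConfigMap: &corev1.ConfigMapProjection{
					LocalObjectReference: corev1.LocalObjectReference{Name: "kube-root-ca.crt"},
					Items:                []corev1.KeyToPath{{Key: "ca.crt", Path: "ca.crt"}},
				}},
			},
		},
	},
}
```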
Jan 30 13:52:55.005820 kubelet[1772]: I0130 13:52:55.005790 1772 scope.go:117] "RemoveContainer" containerID="ec7f383f2a1e38c4e4d57c1871f836d7721a2eb737d20a2e9386463dae10573e" Jan 30 13:52:55.006875 containerd[1455]: time="2025-01-30T13:52:55.006848076Z" level=info msg="RemoveContainer for \"ec7f383f2a1e38c4e4d57c1871f836d7721a2eb737d20a2e9386463dae10573e\"" Jan 30 13:52:55.010601 containerd[1455]: time="2025-01-30T13:52:55.010577311Z" level=info msg="RemoveContainer for \"ec7f383f2a1e38c4e4d57c1871f836d7721a2eb737d20a2e9386463dae10573e\" returns successfully" Jan 30 13:52:55.010769 kubelet[1772]: I0130 13:52:55.010749 1772 scope.go:117] "RemoveContainer" containerID="28c3b2688f5f22e2509d7848782789e7f38a402bc8d228dcdeb71b82892869b3" Jan 30 13:52:55.011371 systemd[1]: Removed slice kubepods-burstable-pod4a521709_11a9_4f29_9e62_40df10f8b4c0.slice - libcontainer container kubepods-burstable-pod4a521709_11a9_4f29_9e62_40df10f8b4c0.slice. Jan 30 13:52:55.011604 containerd[1455]: time="2025-01-30T13:52:55.011478865Z" level=info msg="RemoveContainer for \"28c3b2688f5f22e2509d7848782789e7f38a402bc8d228dcdeb71b82892869b3\"" Jan 30 13:52:55.011725 systemd[1]: kubepods-burstable-pod4a521709_11a9_4f29_9e62_40df10f8b4c0.slice: Consumed 7.341s CPU time. Jan 30 13:52:55.014763 containerd[1455]: time="2025-01-30T13:52:55.014734269Z" level=info msg="RemoveContainer for \"28c3b2688f5f22e2509d7848782789e7f38a402bc8d228dcdeb71b82892869b3\" returns successfully" Jan 30 13:52:55.014872 kubelet[1772]: I0130 13:52:55.014849 1772 scope.go:117] "RemoveContainer" containerID="8d9e882b8bfd999e9bd3b2d7f960b21b4475498c8c9c59366d592feab71e0447" Jan 30 13:52:55.015689 containerd[1455]: time="2025-01-30T13:52:55.015660148Z" level=info msg="RemoveContainer for \"8d9e882b8bfd999e9bd3b2d7f960b21b4475498c8c9c59366d592feab71e0447\"" Jan 30 13:52:55.019023 containerd[1455]: time="2025-01-30T13:52:55.018993279Z" level=info msg="RemoveContainer for \"8d9e882b8bfd999e9bd3b2d7f960b21b4475498c8c9c59366d592feab71e0447\" returns successfully" Jan 30 13:52:55.019132 kubelet[1772]: I0130 13:52:55.019105 1772 scope.go:117] "RemoveContainer" containerID="bd7c9778220f6a472cb39bc0aae27d6f9aa558130cb7c3d26683969f0288b01a" Jan 30 13:52:55.019886 containerd[1455]: time="2025-01-30T13:52:55.019864977Z" level=info msg="RemoveContainer for \"bd7c9778220f6a472cb39bc0aae27d6f9aa558130cb7c3d26683969f0288b01a\"" Jan 30 13:52:55.022683 containerd[1455]: time="2025-01-30T13:52:55.022654185Z" level=info msg="RemoveContainer for \"bd7c9778220f6a472cb39bc0aae27d6f9aa558130cb7c3d26683969f0288b01a\" returns successfully" Jan 30 13:52:55.022786 kubelet[1772]: I0130 13:52:55.022768 1772 scope.go:117] "RemoveContainer" containerID="7fe27c347dab59718078eb835bcd0857524d89ebe49b2870e2d28c380e0d5313" Jan 30 13:52:55.023543 containerd[1455]: time="2025-01-30T13:52:55.023511867Z" level=info msg="RemoveContainer for \"7fe27c347dab59718078eb835bcd0857524d89ebe49b2870e2d28c380e0d5313\"" Jan 30 13:52:55.026617 containerd[1455]: time="2025-01-30T13:52:55.026590810Z" level=info msg="RemoveContainer for \"7fe27c347dab59718078eb835bcd0857524d89ebe49b2870e2d28c380e0d5313\" returns successfully" Jan 30 13:52:55.026742 kubelet[1772]: I0130 13:52:55.026721 1772 scope.go:117] "RemoveContainer" containerID="ec7f383f2a1e38c4e4d57c1871f836d7721a2eb737d20a2e9386463dae10573e" Jan 30 13:52:55.026903 containerd[1455]: time="2025-01-30T13:52:55.026877499Z" level=error msg="ContainerStatus for \"ec7f383f2a1e38c4e4d57c1871f836d7721a2eb737d20a2e9386463dae10573e\" failed" error="rpc error: 
code = NotFound desc = an error occurred when try to find container \"ec7f383f2a1e38c4e4d57c1871f836d7721a2eb737d20a2e9386463dae10573e\": not found" Jan 30 13:52:55.027018 kubelet[1772]: E0130 13:52:55.026995 1772 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = an error occurred when try to find container \"ec7f383f2a1e38c4e4d57c1871f836d7721a2eb737d20a2e9386463dae10573e\": not found" containerID="ec7f383f2a1e38c4e4d57c1871f836d7721a2eb737d20a2e9386463dae10573e" Jan 30 13:52:55.027056 kubelet[1772]: I0130 13:52:55.027016 1772 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"containerd","ID":"ec7f383f2a1e38c4e4d57c1871f836d7721a2eb737d20a2e9386463dae10573e"} err="failed to get container status \"ec7f383f2a1e38c4e4d57c1871f836d7721a2eb737d20a2e9386463dae10573e\": rpc error: code = NotFound desc = an error occurred when try to find container \"ec7f383f2a1e38c4e4d57c1871f836d7721a2eb737d20a2e9386463dae10573e\": not found" Jan 30 13:52:55.027091 kubelet[1772]: I0130 13:52:55.027057 1772 scope.go:117] "RemoveContainer" containerID="28c3b2688f5f22e2509d7848782789e7f38a402bc8d228dcdeb71b82892869b3" Jan 30 13:52:55.027251 containerd[1455]: time="2025-01-30T13:52:55.027209382Z" level=error msg="ContainerStatus for \"28c3b2688f5f22e2509d7848782789e7f38a402bc8d228dcdeb71b82892869b3\" failed" error="rpc error: code = NotFound desc = an error occurred when try to find container \"28c3b2688f5f22e2509d7848782789e7f38a402bc8d228dcdeb71b82892869b3\": not found" Jan 30 13:52:55.027354 kubelet[1772]: E0130 13:52:55.027339 1772 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = an error occurred when try to find container \"28c3b2688f5f22e2509d7848782789e7f38a402bc8d228dcdeb71b82892869b3\": not found" containerID="28c3b2688f5f22e2509d7848782789e7f38a402bc8d228dcdeb71b82892869b3" Jan 30 13:52:55.027388 kubelet[1772]: I0130 13:52:55.027356 1772 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"containerd","ID":"28c3b2688f5f22e2509d7848782789e7f38a402bc8d228dcdeb71b82892869b3"} err="failed to get container status \"28c3b2688f5f22e2509d7848782789e7f38a402bc8d228dcdeb71b82892869b3\": rpc error: code = NotFound desc = an error occurred when try to find container \"28c3b2688f5f22e2509d7848782789e7f38a402bc8d228dcdeb71b82892869b3\": not found" Jan 30 13:52:55.027388 kubelet[1772]: I0130 13:52:55.027368 1772 scope.go:117] "RemoveContainer" containerID="8d9e882b8bfd999e9bd3b2d7f960b21b4475498c8c9c59366d592feab71e0447" Jan 30 13:52:55.027542 containerd[1455]: time="2025-01-30T13:52:55.027506590Z" level=error msg="ContainerStatus for \"8d9e882b8bfd999e9bd3b2d7f960b21b4475498c8c9c59366d592feab71e0447\" failed" error="rpc error: code = NotFound desc = an error occurred when try to find container \"8d9e882b8bfd999e9bd3b2d7f960b21b4475498c8c9c59366d592feab71e0447\": not found" Jan 30 13:52:55.027649 kubelet[1772]: E0130 13:52:55.027635 1772 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = an error occurred when try to find container \"8d9e882b8bfd999e9bd3b2d7f960b21b4475498c8c9c59366d592feab71e0447\": not found" containerID="8d9e882b8bfd999e9bd3b2d7f960b21b4475498c8c9c59366d592feab71e0447" Jan 30 13:52:55.027682 kubelet[1772]: I0130 13:52:55.027650 1772 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"containerd","ID":"8d9e882b8bfd999e9bd3b2d7f960b21b4475498c8c9c59366d592feab71e0447"} 
err="failed to get container status \"8d9e882b8bfd999e9bd3b2d7f960b21b4475498c8c9c59366d592feab71e0447\": rpc error: code = NotFound desc = an error occurred when try to find container \"8d9e882b8bfd999e9bd3b2d7f960b21b4475498c8c9c59366d592feab71e0447\": not found" Jan 30 13:52:55.027682 kubelet[1772]: I0130 13:52:55.027660 1772 scope.go:117] "RemoveContainer" containerID="bd7c9778220f6a472cb39bc0aae27d6f9aa558130cb7c3d26683969f0288b01a" Jan 30 13:52:55.027822 containerd[1455]: time="2025-01-30T13:52:55.027794501Z" level=error msg="ContainerStatus for \"bd7c9778220f6a472cb39bc0aae27d6f9aa558130cb7c3d26683969f0288b01a\" failed" error="rpc error: code = NotFound desc = an error occurred when try to find container \"bd7c9778220f6a472cb39bc0aae27d6f9aa558130cb7c3d26683969f0288b01a\": not found" Jan 30 13:52:55.027921 kubelet[1772]: E0130 13:52:55.027905 1772 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = an error occurred when try to find container \"bd7c9778220f6a472cb39bc0aae27d6f9aa558130cb7c3d26683969f0288b01a\": not found" containerID="bd7c9778220f6a472cb39bc0aae27d6f9aa558130cb7c3d26683969f0288b01a" Jan 30 13:52:55.027958 kubelet[1772]: I0130 13:52:55.027924 1772 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"containerd","ID":"bd7c9778220f6a472cb39bc0aae27d6f9aa558130cb7c3d26683969f0288b01a"} err="failed to get container status \"bd7c9778220f6a472cb39bc0aae27d6f9aa558130cb7c3d26683969f0288b01a\": rpc error: code = NotFound desc = an error occurred when try to find container \"bd7c9778220f6a472cb39bc0aae27d6f9aa558130cb7c3d26683969f0288b01a\": not found" Jan 30 13:52:55.027958 kubelet[1772]: I0130 13:52:55.027937 1772 scope.go:117] "RemoveContainer" containerID="7fe27c347dab59718078eb835bcd0857524d89ebe49b2870e2d28c380e0d5313" Jan 30 13:52:55.028113 containerd[1455]: time="2025-01-30T13:52:55.028082092Z" level=error msg="ContainerStatus for \"7fe27c347dab59718078eb835bcd0857524d89ebe49b2870e2d28c380e0d5313\" failed" error="rpc error: code = NotFound desc = an error occurred when try to find container \"7fe27c347dab59718078eb835bcd0857524d89ebe49b2870e2d28c380e0d5313\": not found" Jan 30 13:52:55.028213 kubelet[1772]: E0130 13:52:55.028195 1772 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = an error occurred when try to find container \"7fe27c347dab59718078eb835bcd0857524d89ebe49b2870e2d28c380e0d5313\": not found" containerID="7fe27c347dab59718078eb835bcd0857524d89ebe49b2870e2d28c380e0d5313" Jan 30 13:52:55.028278 kubelet[1772]: I0130 13:52:55.028215 1772 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"containerd","ID":"7fe27c347dab59718078eb835bcd0857524d89ebe49b2870e2d28c380e0d5313"} err="failed to get container status \"7fe27c347dab59718078eb835bcd0857524d89ebe49b2870e2d28c380e0d5313\": rpc error: code = NotFound desc = an error occurred when try to find container \"7fe27c347dab59718078eb835bcd0857524d89ebe49b2870e2d28c380e0d5313\": not found" Jan 30 13:52:55.204328 kubelet[1772]: E0130 13:52:55.204285 1772 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Jan 30 13:52:55.874987 kubelet[1772]: I0130 13:52:55.874944 1772 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="4a521709-11a9-4f29-9e62-40df10f8b4c0" path="/var/lib/kubelet/pods/4a521709-11a9-4f29-9e62-40df10f8b4c0/volumes" Jan 30 13:52:55.969172 kubelet[1772]: I0130 
13:52:55.969124 1772 memory_manager.go:355] "RemoveStaleState removing state" podUID="4a521709-11a9-4f29-9e62-40df10f8b4c0" containerName="cilium-agent" Jan 30 13:52:55.979361 systemd[1]: Created slice kubepods-besteffort-pod8bd03808_836b_457d_a443_644a2e4c2266.slice - libcontainer container kubepods-besteffort-pod8bd03808_836b_457d_a443_644a2e4c2266.slice. Jan 30 13:52:55.987700 systemd[1]: Created slice kubepods-burstable-pod9cdd06e1_1d34_4016_8797_ab2353afcd1d.slice - libcontainer container kubepods-burstable-pod9cdd06e1_1d34_4016_8797_ab2353afcd1d.slice. Jan 30 13:52:56.074045 kubelet[1772]: I0130 13:52:56.073984 1772 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"clustermesh-secrets\" (UniqueName: \"kubernetes.io/secret/9cdd06e1-1d34-4016-8797-ab2353afcd1d-clustermesh-secrets\") pod \"cilium-q2ldx\" (UID: \"9cdd06e1-1d34-4016-8797-ab2353afcd1d\") " pod="kube-system/cilium-q2ldx" Jan 30 13:52:56.074045 kubelet[1772]: I0130 13:52:56.074029 1772 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-proc-sys-kernel\" (UniqueName: \"kubernetes.io/host-path/9cdd06e1-1d34-4016-8797-ab2353afcd1d-host-proc-sys-kernel\") pod \"cilium-q2ldx\" (UID: \"9cdd06e1-1d34-4016-8797-ab2353afcd1d\") " pod="kube-system/cilium-q2ldx" Jan 30 13:52:56.074045 kubelet[1772]: I0130 13:52:56.074047 1772 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"hubble-tls\" (UniqueName: \"kubernetes.io/projected/9cdd06e1-1d34-4016-8797-ab2353afcd1d-hubble-tls\") pod \"cilium-q2ldx\" (UID: \"9cdd06e1-1d34-4016-8797-ab2353afcd1d\") " pod="kube-system/cilium-q2ldx" Jan 30 13:52:56.074264 kubelet[1772]: I0130 13:52:56.074064 1772 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-6jggw\" (UniqueName: \"kubernetes.io/projected/8bd03808-836b-457d-a443-644a2e4c2266-kube-api-access-6jggw\") pod \"cilium-operator-6c4d7847fc-hrs9h\" (UID: \"8bd03808-836b-457d-a443-644a2e4c2266\") " pod="kube-system/cilium-operator-6c4d7847fc-hrs9h" Jan 30 13:52:56.074264 kubelet[1772]: I0130 13:52:56.074091 1772 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"hostproc\" (UniqueName: \"kubernetes.io/host-path/9cdd06e1-1d34-4016-8797-ab2353afcd1d-hostproc\") pod \"cilium-q2ldx\" (UID: \"9cdd06e1-1d34-4016-8797-ab2353afcd1d\") " pod="kube-system/cilium-q2ldx" Jan 30 13:52:56.074264 kubelet[1772]: I0130 13:52:56.074105 1772 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cni-path\" (UniqueName: \"kubernetes.io/host-path/9cdd06e1-1d34-4016-8797-ab2353afcd1d-cni-path\") pod \"cilium-q2ldx\" (UID: \"9cdd06e1-1d34-4016-8797-ab2353afcd1d\") " pod="kube-system/cilium-q2ldx" Jan 30 13:52:56.074264 kubelet[1772]: I0130 13:52:56.074120 1772 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cilium-ipsec-secrets\" (UniqueName: \"kubernetes.io/secret/9cdd06e1-1d34-4016-8797-ab2353afcd1d-cilium-ipsec-secrets\") pod \"cilium-q2ldx\" (UID: \"9cdd06e1-1d34-4016-8797-ab2353afcd1d\") " pod="kube-system/cilium-q2ldx" Jan 30 13:52:56.074264 kubelet[1772]: I0130 13:52:56.074147 1772 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-nw247\" (UniqueName: \"kubernetes.io/projected/9cdd06e1-1d34-4016-8797-ab2353afcd1d-kube-api-access-nw247\") pod \"cilium-q2ldx\" (UID: 
\"9cdd06e1-1d34-4016-8797-ab2353afcd1d\") " pod="kube-system/cilium-q2ldx" Jan 30 13:52:56.074381 kubelet[1772]: I0130 13:52:56.074161 1772 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cilium-cgroup\" (UniqueName: \"kubernetes.io/host-path/9cdd06e1-1d34-4016-8797-ab2353afcd1d-cilium-cgroup\") pod \"cilium-q2ldx\" (UID: \"9cdd06e1-1d34-4016-8797-ab2353afcd1d\") " pod="kube-system/cilium-q2ldx" Jan 30 13:52:56.074381 kubelet[1772]: I0130 13:52:56.074174 1772 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"etc-cni-netd\" (UniqueName: \"kubernetes.io/host-path/9cdd06e1-1d34-4016-8797-ab2353afcd1d-etc-cni-netd\") pod \"cilium-q2ldx\" (UID: \"9cdd06e1-1d34-4016-8797-ab2353afcd1d\") " pod="kube-system/cilium-q2ldx" Jan 30 13:52:56.074381 kubelet[1772]: I0130 13:52:56.074188 1772 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cilium-config-path\" (UniqueName: \"kubernetes.io/configmap/9cdd06e1-1d34-4016-8797-ab2353afcd1d-cilium-config-path\") pod \"cilium-q2ldx\" (UID: \"9cdd06e1-1d34-4016-8797-ab2353afcd1d\") " pod="kube-system/cilium-q2ldx" Jan 30 13:52:56.074381 kubelet[1772]: I0130 13:52:56.074203 1772 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-proc-sys-net\" (UniqueName: \"kubernetes.io/host-path/9cdd06e1-1d34-4016-8797-ab2353afcd1d-host-proc-sys-net\") pod \"cilium-q2ldx\" (UID: \"9cdd06e1-1d34-4016-8797-ab2353afcd1d\") " pod="kube-system/cilium-q2ldx" Jan 30 13:52:56.074381 kubelet[1772]: I0130 13:52:56.074252 1772 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cilium-run\" (UniqueName: \"kubernetes.io/host-path/9cdd06e1-1d34-4016-8797-ab2353afcd1d-cilium-run\") pod \"cilium-q2ldx\" (UID: \"9cdd06e1-1d34-4016-8797-ab2353afcd1d\") " pod="kube-system/cilium-q2ldx" Jan 30 13:52:56.074381 kubelet[1772]: I0130 13:52:56.074281 1772 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/9cdd06e1-1d34-4016-8797-ab2353afcd1d-lib-modules\") pod \"cilium-q2ldx\" (UID: \"9cdd06e1-1d34-4016-8797-ab2353afcd1d\") " pod="kube-system/cilium-q2ldx" Jan 30 13:52:56.074521 kubelet[1772]: I0130 13:52:56.074295 1772 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"xtables-lock\" (UniqueName: \"kubernetes.io/host-path/9cdd06e1-1d34-4016-8797-ab2353afcd1d-xtables-lock\") pod \"cilium-q2ldx\" (UID: \"9cdd06e1-1d34-4016-8797-ab2353afcd1d\") " pod="kube-system/cilium-q2ldx" Jan 30 13:52:56.074521 kubelet[1772]: I0130 13:52:56.074326 1772 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cilium-config-path\" (UniqueName: \"kubernetes.io/configmap/8bd03808-836b-457d-a443-644a2e4c2266-cilium-config-path\") pod \"cilium-operator-6c4d7847fc-hrs9h\" (UID: \"8bd03808-836b-457d-a443-644a2e4c2266\") " pod="kube-system/cilium-operator-6c4d7847fc-hrs9h" Jan 30 13:52:56.074521 kubelet[1772]: I0130 13:52:56.074349 1772 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"bpf-maps\" (UniqueName: \"kubernetes.io/host-path/9cdd06e1-1d34-4016-8797-ab2353afcd1d-bpf-maps\") pod \"cilium-q2ldx\" (UID: \"9cdd06e1-1d34-4016-8797-ab2353afcd1d\") " pod="kube-system/cilium-q2ldx" Jan 30 13:52:56.205320 kubelet[1772]: E0130 13:52:56.205275 1772 
file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Jan 30 13:52:56.283099 kubelet[1772]: E0130 13:52:56.283041 1772 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jan 30 13:52:56.283637 containerd[1455]: time="2025-01-30T13:52:56.283592493Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:cilium-operator-6c4d7847fc-hrs9h,Uid:8bd03808-836b-457d-a443-644a2e4c2266,Namespace:kube-system,Attempt:0,}" Jan 30 13:52:56.301456 kubelet[1772]: E0130 13:52:56.301415 1772 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jan 30 13:52:56.301944 containerd[1455]: time="2025-01-30T13:52:56.301910137Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:cilium-q2ldx,Uid:9cdd06e1-1d34-4016-8797-ab2353afcd1d,Namespace:kube-system,Attempt:0,}" Jan 30 13:52:56.304428 containerd[1455]: time="2025-01-30T13:52:56.304276601Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Jan 30 13:52:56.304428 containerd[1455]: time="2025-01-30T13:52:56.304401847Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Jan 30 13:52:56.304523 containerd[1455]: time="2025-01-30T13:52:56.304422866Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Jan 30 13:52:56.305283 containerd[1455]: time="2025-01-30T13:52:56.305194876Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Jan 30 13:52:56.323566 systemd[1]: Started cri-containerd-efa9d9304187dea7435e5ec55ab91cba5c0b464c8ca1ab7148de39a99aea300e.scope - libcontainer container efa9d9304187dea7435e5ec55ab91cba5c0b464c8ca1ab7148de39a99aea300e. Jan 30 13:52:56.327935 containerd[1455]: time="2025-01-30T13:52:56.327801504Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Jan 30 13:52:56.327990 containerd[1455]: time="2025-01-30T13:52:56.327895081Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Jan 30 13:52:56.327990 containerd[1455]: time="2025-01-30T13:52:56.327953601Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Jan 30 13:52:56.328280 containerd[1455]: time="2025-01-30T13:52:56.328171760Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Jan 30 13:52:56.350275 systemd[1]: Started cri-containerd-dd72362d41b05ccf5203d3297e31549945ac8e84c2c71525b516ffeca7dd402d.scope - libcontainer container dd72362d41b05ccf5203d3297e31549945ac8e84c2c71525b516ffeca7dd402d. 
Jan 30 13:52:56.360480 containerd[1455]: time="2025-01-30T13:52:56.360424756Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:cilium-operator-6c4d7847fc-hrs9h,Uid:8bd03808-836b-457d-a443-644a2e4c2266,Namespace:kube-system,Attempt:0,} returns sandbox id \"efa9d9304187dea7435e5ec55ab91cba5c0b464c8ca1ab7148de39a99aea300e\"" Jan 30 13:52:56.361162 kubelet[1772]: E0130 13:52:56.361134 1772 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jan 30 13:52:56.362353 containerd[1455]: time="2025-01-30T13:52:56.362325106Z" level=info msg="PullImage \"quay.io/cilium/operator-generic:v1.12.5@sha256:b296eb7f0f7656a5cc19724f40a8a7121b7fd725278b7d61dc91fe0b7ffd7c0e\"" Jan 30 13:52:56.377922 containerd[1455]: time="2025-01-30T13:52:56.377880452Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:cilium-q2ldx,Uid:9cdd06e1-1d34-4016-8797-ab2353afcd1d,Namespace:kube-system,Attempt:0,} returns sandbox id \"dd72362d41b05ccf5203d3297e31549945ac8e84c2c71525b516ffeca7dd402d\"" Jan 30 13:52:56.378527 kubelet[1772]: E0130 13:52:56.378508 1772 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jan 30 13:52:56.379978 containerd[1455]: time="2025-01-30T13:52:56.379945039Z" level=info msg="CreateContainer within sandbox \"dd72362d41b05ccf5203d3297e31549945ac8e84c2c71525b516ffeca7dd402d\" for container &ContainerMetadata{Name:mount-cgroup,Attempt:0,}" Jan 30 13:52:56.396571 containerd[1455]: time="2025-01-30T13:52:56.396396037Z" level=info msg="CreateContainer within sandbox \"dd72362d41b05ccf5203d3297e31549945ac8e84c2c71525b516ffeca7dd402d\" for &ContainerMetadata{Name:mount-cgroup,Attempt:0,} returns container id \"4e7fbd66145a8d08d95f7e24aa218224f072b3ea6d61c62c7cf1719f7d041824\"" Jan 30 13:52:56.397004 containerd[1455]: time="2025-01-30T13:52:56.396977839Z" level=info msg="StartContainer for \"4e7fbd66145a8d08d95f7e24aa218224f072b3ea6d61c62c7cf1719f7d041824\"" Jan 30 13:52:56.428426 systemd[1]: Started cri-containerd-4e7fbd66145a8d08d95f7e24aa218224f072b3ea6d61c62c7cf1719f7d041824.scope - libcontainer container 4e7fbd66145a8d08d95f7e24aa218224f072b3ea6d61c62c7cf1719f7d041824. Jan 30 13:52:56.456267 containerd[1455]: time="2025-01-30T13:52:56.455939619Z" level=info msg="StartContainer for \"4e7fbd66145a8d08d95f7e24aa218224f072b3ea6d61c62c7cf1719f7d041824\" returns successfully" Jan 30 13:52:56.465943 systemd[1]: cri-containerd-4e7fbd66145a8d08d95f7e24aa218224f072b3ea6d61c62c7cf1719f7d041824.scope: Deactivated successfully. 
Jan 30 13:52:56.506427 containerd[1455]: time="2025-01-30T13:52:56.506332997Z" level=info msg="shim disconnected" id=4e7fbd66145a8d08d95f7e24aa218224f072b3ea6d61c62c7cf1719f7d041824 namespace=k8s.io Jan 30 13:52:56.506427 containerd[1455]: time="2025-01-30T13:52:56.506406846Z" level=warning msg="cleaning up after shim disconnected" id=4e7fbd66145a8d08d95f7e24aa218224f072b3ea6d61c62c7cf1719f7d041824 namespace=k8s.io Jan 30 13:52:56.506427 containerd[1455]: time="2025-01-30T13:52:56.506418097Z" level=info msg="cleaning up dead shim" namespace=k8s.io Jan 30 13:52:56.519296 containerd[1455]: time="2025-01-30T13:52:56.519217937Z" level=warning msg="cleanup warnings time=\"2025-01-30T13:52:56Z\" level=warning msg=\"failed to remove runc container\" error=\"runc did not terminate successfully: exit status 255: \" runtime=io.containerd.runc.v2\n" namespace=k8s.io Jan 30 13:52:56.899014 kubelet[1772]: E0130 13:52:56.898875 1772 kubelet.go:3008] "Container runtime network not ready" networkReady="NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" Jan 30 13:52:57.011455 kubelet[1772]: E0130 13:52:57.011392 1772 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jan 30 13:52:57.013039 containerd[1455]: time="2025-01-30T13:52:57.013000093Z" level=info msg="CreateContainer within sandbox \"dd72362d41b05ccf5203d3297e31549945ac8e84c2c71525b516ffeca7dd402d\" for container &ContainerMetadata{Name:apply-sysctl-overwrites,Attempt:0,}" Jan 30 13:52:57.205603 kubelet[1772]: E0130 13:52:57.205568 1772 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Jan 30 13:52:58.206528 kubelet[1772]: E0130 13:52:58.206435 1772 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Jan 30 13:52:58.543261 containerd[1455]: time="2025-01-30T13:52:58.543067398Z" level=info msg="CreateContainer within sandbox \"dd72362d41b05ccf5203d3297e31549945ac8e84c2c71525b516ffeca7dd402d\" for &ContainerMetadata{Name:apply-sysctl-overwrites,Attempt:0,} returns container id \"bfc744eaa6516edf900ea8dc2ae4d0cdb1a50362ad1836c893a23fe8220ff7bb\"" Jan 30 13:52:58.543968 containerd[1455]: time="2025-01-30T13:52:58.543922243Z" level=info msg="StartContainer for \"bfc744eaa6516edf900ea8dc2ae4d0cdb1a50362ad1836c893a23fe8220ff7bb\"" Jan 30 13:52:58.576465 systemd[1]: Started cri-containerd-bfc744eaa6516edf900ea8dc2ae4d0cdb1a50362ad1836c893a23fe8220ff7bb.scope - libcontainer container bfc744eaa6516edf900ea8dc2ae4d0cdb1a50362ad1836c893a23fe8220ff7bb. Jan 30 13:52:58.609842 systemd[1]: cri-containerd-bfc744eaa6516edf900ea8dc2ae4d0cdb1a50362ad1836c893a23fe8220ff7bb.scope: Deactivated successfully. Jan 30 13:52:58.887877 containerd[1455]: time="2025-01-30T13:52:58.887709215Z" level=info msg="StartContainer for \"bfc744eaa6516edf900ea8dc2ae4d0cdb1a50362ad1836c893a23fe8220ff7bb\" returns successfully" Jan 30 13:52:58.908090 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-bfc744eaa6516edf900ea8dc2ae4d0cdb1a50362ad1836c893a23fe8220ff7bb-rootfs.mount: Deactivated successfully. 
Jan 30 13:52:58.934360 containerd[1455]: time="2025-01-30T13:52:58.934281120Z" level=info msg="shim disconnected" id=bfc744eaa6516edf900ea8dc2ae4d0cdb1a50362ad1836c893a23fe8220ff7bb namespace=k8s.io Jan 30 13:52:58.934360 containerd[1455]: time="2025-01-30T13:52:58.934347655Z" level=warning msg="cleaning up after shim disconnected" id=bfc744eaa6516edf900ea8dc2ae4d0cdb1a50362ad1836c893a23fe8220ff7bb namespace=k8s.io Jan 30 13:52:58.934360 containerd[1455]: time="2025-01-30T13:52:58.934363324Z" level=info msg="cleaning up dead shim" namespace=k8s.io Jan 30 13:52:59.021066 kubelet[1772]: E0130 13:52:59.021034 1772 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jan 30 13:52:59.023047 containerd[1455]: time="2025-01-30T13:52:59.022996078Z" level=info msg="CreateContainer within sandbox \"dd72362d41b05ccf5203d3297e31549945ac8e84c2c71525b516ffeca7dd402d\" for container &ContainerMetadata{Name:mount-bpf-fs,Attempt:0,}" Jan 30 13:52:59.041399 containerd[1455]: time="2025-01-30T13:52:59.041326487Z" level=info msg="CreateContainer within sandbox \"dd72362d41b05ccf5203d3297e31549945ac8e84c2c71525b516ffeca7dd402d\" for &ContainerMetadata{Name:mount-bpf-fs,Attempt:0,} returns container id \"79c19ad20f6db985cf5b67ad10bb75c30529a9fe5fa9e9364648d96760f10382\"" Jan 30 13:52:59.041955 containerd[1455]: time="2025-01-30T13:52:59.041920482Z" level=info msg="StartContainer for \"79c19ad20f6db985cf5b67ad10bb75c30529a9fe5fa9e9364648d96760f10382\"" Jan 30 13:52:59.073473 systemd[1]: Started cri-containerd-79c19ad20f6db985cf5b67ad10bb75c30529a9fe5fa9e9364648d96760f10382.scope - libcontainer container 79c19ad20f6db985cf5b67ad10bb75c30529a9fe5fa9e9364648d96760f10382. Jan 30 13:52:59.112123 containerd[1455]: time="2025-01-30T13:52:59.112075190Z" level=info msg="StartContainer for \"79c19ad20f6db985cf5b67ad10bb75c30529a9fe5fa9e9364648d96760f10382\" returns successfully" Jan 30 13:52:59.113824 systemd[1]: cri-containerd-79c19ad20f6db985cf5b67ad10bb75c30529a9fe5fa9e9364648d96760f10382.scope: Deactivated successfully. Jan 30 13:52:59.144309 containerd[1455]: time="2025-01-30T13:52:59.144112635Z" level=info msg="shim disconnected" id=79c19ad20f6db985cf5b67ad10bb75c30529a9fe5fa9e9364648d96760f10382 namespace=k8s.io Jan 30 13:52:59.144309 containerd[1455]: time="2025-01-30T13:52:59.144172108Z" level=warning msg="cleaning up after shim disconnected" id=79c19ad20f6db985cf5b67ad10bb75c30529a9fe5fa9e9364648d96760f10382 namespace=k8s.io Jan 30 13:52:59.144309 containerd[1455]: time="2025-01-30T13:52:59.144183158Z" level=info msg="cleaning up dead shim" namespace=k8s.io Jan 30 13:52:59.207330 kubelet[1772]: E0130 13:52:59.207267 1772 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Jan 30 13:52:59.783216 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount4228464652.mount: Deactivated successfully. 
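The `cni plugin not initialized` / `Container runtime network not ready` report above follows from the earlier fs-change event that removed `/etc/cni/net.d/05-cilium.conf`: containerd keeps the node network marked not-ready until the new Cilium agent writes a CNI config back into that directory. A stdlib-only sketch of the check this effectively boils down to (the directory path comes from the log; the accepted extensions are the conventional ones and are an assumption here):

```go
// Reports whether /etc/cni/net.d currently holds any CNI network config.
package main

import (
	"fmt"
	"os"
	"path/filepath"
)

func hasCNIConfig(dir string) (bool, error) {
	entries, err := os.ReadDir(dir)
	if err != nil {
		return false, err
	}
	for _, e := range entries {
		if e.IsDir() {
			continue
		}
		switch filepath.Ext(e.Name()) {
		case ".conf", ".conflist", ".json":
			return true, nil
		}
	}
	return false, nil
}

func main() {
	ok, err := hasCNIConfig("/etc/cni/net.d")
	fmt.Println(ok, err)
}
```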
Jan 30 13:53:00.025535 kubelet[1772]: E0130 13:53:00.025501 1772 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jan 30 13:53:00.027570 containerd[1455]: time="2025-01-30T13:53:00.027537234Z" level=info msg="CreateContainer within sandbox \"dd72362d41b05ccf5203d3297e31549945ac8e84c2c71525b516ffeca7dd402d\" for container &ContainerMetadata{Name:clean-cilium-state,Attempt:0,}" Jan 30 13:53:00.047176 containerd[1455]: time="2025-01-30T13:53:00.047034180Z" level=info msg="CreateContainer within sandbox \"dd72362d41b05ccf5203d3297e31549945ac8e84c2c71525b516ffeca7dd402d\" for &ContainerMetadata{Name:clean-cilium-state,Attempt:0,} returns container id \"6820be04764725c2a84a8a33ebc78d6b337913a865bb4ecd21227ed3951fea0b\"" Jan 30 13:53:00.047844 containerd[1455]: time="2025-01-30T13:53:00.047796842Z" level=info msg="StartContainer for \"6820be04764725c2a84a8a33ebc78d6b337913a865bb4ecd21227ed3951fea0b\"" Jan 30 13:53:00.066430 containerd[1455]: time="2025-01-30T13:53:00.066372439Z" level=info msg="ImageCreate event name:\"quay.io/cilium/operator-generic@sha256:b296eb7f0f7656a5cc19724f40a8a7121b7fd725278b7d61dc91fe0b7ffd7c0e\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jan 30 13:53:00.067561 containerd[1455]: time="2025-01-30T13:53:00.067518630Z" level=info msg="stop pulling image quay.io/cilium/operator-generic@sha256:b296eb7f0f7656a5cc19724f40a8a7121b7fd725278b7d61dc91fe0b7ffd7c0e: active requests=0, bytes read=18904197" Jan 30 13:53:00.068779 containerd[1455]: time="2025-01-30T13:53:00.068717922Z" level=info msg="ImageCreate event name:\"sha256:ed355de9f59fe391dbe53f3c7c7a60baab3c3a9b7549aa54d10b87fff7dacf7c\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jan 30 13:53:00.070284 containerd[1455]: time="2025-01-30T13:53:00.070162434Z" level=info msg="Pulled image \"quay.io/cilium/operator-generic:v1.12.5@sha256:b296eb7f0f7656a5cc19724f40a8a7121b7fd725278b7d61dc91fe0b7ffd7c0e\" with image id \"sha256:ed355de9f59fe391dbe53f3c7c7a60baab3c3a9b7549aa54d10b87fff7dacf7c\", repo tag \"\", repo digest \"quay.io/cilium/operator-generic@sha256:b296eb7f0f7656a5cc19724f40a8a7121b7fd725278b7d61dc91fe0b7ffd7c0e\", size \"18897442\" in 3.707799356s" Jan 30 13:53:00.070284 containerd[1455]: time="2025-01-30T13:53:00.070193943Z" level=info msg="PullImage \"quay.io/cilium/operator-generic:v1.12.5@sha256:b296eb7f0f7656a5cc19724f40a8a7121b7fd725278b7d61dc91fe0b7ffd7c0e\" returns image reference \"sha256:ed355de9f59fe391dbe53f3c7c7a60baab3c3a9b7549aa54d10b87fff7dacf7c\"" Jan 30 13:53:00.072618 containerd[1455]: time="2025-01-30T13:53:00.072552531Z" level=info msg="CreateContainer within sandbox \"efa9d9304187dea7435e5ec55ab91cba5c0b464c8ca1ab7148de39a99aea300e\" for container &ContainerMetadata{Name:cilium-operator,Attempt:0,}" Jan 30 13:53:00.077481 systemd[1]: Started cri-containerd-6820be04764725c2a84a8a33ebc78d6b337913a865bb4ecd21227ed3951fea0b.scope - libcontainer container 6820be04764725c2a84a8a33ebc78d6b337913a865bb4ecd21227ed3951fea0b. 
Jan 30 13:53:00.087332 containerd[1455]: time="2025-01-30T13:53:00.087165788Z" level=info msg="CreateContainer within sandbox \"efa9d9304187dea7435e5ec55ab91cba5c0b464c8ca1ab7148de39a99aea300e\" for &ContainerMetadata{Name:cilium-operator,Attempt:0,} returns container id \"13aa4a4d7229d38c457d5aee825143f0664402f2aa0d3cfb3fdb149840994149\"" Jan 30 13:53:00.087750 containerd[1455]: time="2025-01-30T13:53:00.087723145Z" level=info msg="StartContainer for \"13aa4a4d7229d38c457d5aee825143f0664402f2aa0d3cfb3fdb149840994149\"" Jan 30 13:53:00.106667 systemd[1]: cri-containerd-6820be04764725c2a84a8a33ebc78d6b337913a865bb4ecd21227ed3951fea0b.scope: Deactivated successfully. Jan 30 13:53:00.132423 systemd[1]: Started cri-containerd-13aa4a4d7229d38c457d5aee825143f0664402f2aa0d3cfb3fdb149840994149.scope - libcontainer container 13aa4a4d7229d38c457d5aee825143f0664402f2aa0d3cfb3fdb149840994149. Jan 30 13:53:00.207646 kubelet[1772]: E0130 13:53:00.207603 1772 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Jan 30 13:53:00.233833 containerd[1455]: time="2025-01-30T13:53:00.233789038Z" level=info msg="StartContainer for \"13aa4a4d7229d38c457d5aee825143f0664402f2aa0d3cfb3fdb149840994149\" returns successfully" Jan 30 13:53:00.233987 containerd[1455]: time="2025-01-30T13:53:00.233801882Z" level=info msg="StartContainer for \"6820be04764725c2a84a8a33ebc78d6b337913a865bb4ecd21227ed3951fea0b\" returns successfully" Jan 30 13:53:00.423507 containerd[1455]: time="2025-01-30T13:53:00.423445083Z" level=info msg="shim disconnected" id=6820be04764725c2a84a8a33ebc78d6b337913a865bb4ecd21227ed3951fea0b namespace=k8s.io Jan 30 13:53:00.423695 containerd[1455]: time="2025-01-30T13:53:00.423507009Z" level=warning msg="cleaning up after shim disconnected" id=6820be04764725c2a84a8a33ebc78d6b337913a865bb4ecd21227ed3951fea0b namespace=k8s.io Jan 30 13:53:00.423695 containerd[1455]: time="2025-01-30T13:53:00.423519412Z" level=info msg="cleaning up dead shim" namespace=k8s.io Jan 30 13:53:01.029145 kubelet[1772]: E0130 13:53:01.029107 1772 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jan 30 13:53:01.030627 kubelet[1772]: E0130 13:53:01.030582 1772 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jan 30 13:53:01.030858 containerd[1455]: time="2025-01-30T13:53:01.030818730Z" level=info msg="CreateContainer within sandbox \"dd72362d41b05ccf5203d3297e31549945ac8e84c2c71525b516ffeca7dd402d\" for container &ContainerMetadata{Name:cilium-agent,Attempt:0,}" Jan 30 13:53:01.048080 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount2036708431.mount: Deactivated successfully. 
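`mount-cgroup`, `apply-sysctl-overwrites`, `mount-bpf-fs`, and `clean-cilium-state` above are the Cilium pod's init containers: each one is created, runs to completion, has its scope deactivated and shim cleaned up, and only then is the next one created, ending with the CreateContainer request for the long-running `cilium-agent`. A hedged sketch of how that ordering is expressed on the pod-spec side (only the container names come from the log; the image tag is inferred from the v1.12.5 operator image and commands are omitted):

```go
package sketch

import corev1 "k8s.io/api/core/v1"

// Init containers run strictly in order, each to completion, before the
// kubelet creates the regular containers. Image tag assumed, not from the log.
var ciliumImage = "quay.io/cilium/cilium:v1.12.5"

var ciliumPodSpec = corev1.PodSpec{
	InitContainers: []corev1.Container{
		{Name: "mount-cgroup", Image: ciliumImage},
		{Name: "apply-sysctl-overwrites", Image: ciliumImage},
		{Name: "mount-bpf-fs", Image: ciliumImage},
		{Name: "clean-cilium-state", Image: ciliumImage},
	},
	Containers: []corev1.Container{
		{Name: "cilium-agent", Image: ciliumImage},
	},
}
```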
Jan 30 13:53:01.049972 containerd[1455]: time="2025-01-30T13:53:01.049920010Z" level=info msg="CreateContainer within sandbox \"dd72362d41b05ccf5203d3297e31549945ac8e84c2c71525b516ffeca7dd402d\" for &ContainerMetadata{Name:cilium-agent,Attempt:0,} returns container id \"839c0f56361075a2ced532274994e6450eaa948c2a4aabf0501ad922204aad8c\"" Jan 30 13:53:01.050533 containerd[1455]: time="2025-01-30T13:53:01.050389512Z" level=info msg="StartContainer for \"839c0f56361075a2ced532274994e6450eaa948c2a4aabf0501ad922204aad8c\"" Jan 30 13:53:01.052111 kubelet[1772]: I0130 13:53:01.052066 1772 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/cilium-operator-6c4d7847fc-hrs9h" podStartSLOduration=2.342582329 podStartE2EDuration="6.052048656s" podCreationTimestamp="2025-01-30 13:52:55 +0000 UTC" firstStartedPulling="2025-01-30 13:52:56.361936176 +0000 UTC m=+65.642912865" lastFinishedPulling="2025-01-30 13:53:00.071402503 +0000 UTC m=+69.352379192" observedRunningTime="2025-01-30 13:53:01.051842249 +0000 UTC m=+70.332818938" watchObservedRunningTime="2025-01-30 13:53:01.052048656 +0000 UTC m=+70.333025345" Jan 30 13:53:01.080381 systemd[1]: Started cri-containerd-839c0f56361075a2ced532274994e6450eaa948c2a4aabf0501ad922204aad8c.scope - libcontainer container 839c0f56361075a2ced532274994e6450eaa948c2a4aabf0501ad922204aad8c. Jan 30 13:53:01.111258 containerd[1455]: time="2025-01-30T13:53:01.111190990Z" level=info msg="StartContainer for \"839c0f56361075a2ced532274994e6450eaa948c2a4aabf0501ad922204aad8c\" returns successfully" Jan 30 13:53:01.208267 kubelet[1772]: E0130 13:53:01.208206 1772 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Jan 30 13:53:01.523267 kernel: alg: No test for seqiv(rfc4106(gcm(aes))) (seqiv(rfc4106-gcm-aesni)) Jan 30 13:53:02.034437 kubelet[1772]: E0130 13:53:02.034402 1772 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jan 30 13:53:02.034581 kubelet[1772]: E0130 13:53:02.034493 1772 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jan 30 13:53:02.049839 kubelet[1772]: I0130 13:53:02.049768 1772 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/cilium-q2ldx" podStartSLOduration=7.049748697 podStartE2EDuration="7.049748697s" podCreationTimestamp="2025-01-30 13:52:55 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-01-30 13:53:02.049067048 +0000 UTC m=+71.330043757" watchObservedRunningTime="2025-01-30 13:53:02.049748697 +0000 UTC m=+71.330725386" Jan 30 13:53:02.209908 kubelet[1772]: E0130 13:53:02.209823 1772 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Jan 30 13:53:03.036062 kubelet[1772]: E0130 13:53:03.036027 1772 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jan 30 13:53:03.211048 kubelet[1772]: E0130 13:53:03.210985 1772 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Jan 30 13:53:04.211655 kubelet[1772]: E0130 13:53:04.211616 1772 file_linux.go:61] 
"Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Jan 30 13:53:04.803462 systemd-networkd[1391]: lxc_health: Link UP Jan 30 13:53:04.814721 systemd-networkd[1391]: lxc_health: Gained carrier Jan 30 13:53:05.212834 kubelet[1772]: E0130 13:53:05.212680 1772 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Jan 30 13:53:06.109563 systemd-networkd[1391]: lxc_health: Gained IPv6LL Jan 30 13:53:06.213026 kubelet[1772]: E0130 13:53:06.212974 1772 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Jan 30 13:53:06.303120 kubelet[1772]: E0130 13:53:06.303086 1772 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jan 30 13:53:07.043035 kubelet[1772]: E0130 13:53:07.042995 1772 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jan 30 13:53:07.213831 kubelet[1772]: E0130 13:53:07.213773 1772 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Jan 30 13:53:08.044819 kubelet[1772]: E0130 13:53:08.044769 1772 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jan 30 13:53:08.214419 kubelet[1772]: E0130 13:53:08.214379 1772 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Jan 30 13:53:09.214810 kubelet[1772]: E0130 13:53:09.214743 1772 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Jan 30 13:53:10.215376 kubelet[1772]: E0130 13:53:10.215308 1772 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Jan 30 13:53:11.168065 kubelet[1772]: E0130 13:53:11.168022 1772 file.go:104] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Jan 30 13:53:11.215568 kubelet[1772]: E0130 13:53:11.215508 1772 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Jan 30 13:53:12.215703 kubelet[1772]: E0130 13:53:12.215645 1772 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests"