May 8 00:45:49.915545 kernel: Linux version 6.6.88-flatcar (build@pony-truck.infra.kinvolk.io) (x86_64-cros-linux-gnu-gcc (Gentoo Hardened 13.3.1_p20240614 p17) 13.3.1 20240614, GNU ld (Gentoo 2.42 p3) 2.42.0) #1 SMP PREEMPT_DYNAMIC Wed May 7 22:54:21 -00 2025 May 8 00:45:49.915572 kernel: Command line: BOOT_IMAGE=/flatcar/vmlinuz-a mount.usr=/dev/mapper/usr verity.usr=PARTUUID=7130c94a-213a-4e5a-8e26-6cce9662f132 rootflags=rw mount.usrflags=ro consoleblank=0 root=LABEL=ROOT console=ttyS0,115200 flatcar.first_boot=detected verity.usrhash=86cfbfcc89a9c46f6cbba5bdb3509d1ce1367f0c93b0b0e4c6bdcad1a2064c90 May 8 00:45:49.915587 kernel: BIOS-provided physical RAM map: May 8 00:45:49.915596 kernel: BIOS-e820: [mem 0x0000000000000000-0x000000000009ffff] usable May 8 00:45:49.915604 kernel: BIOS-e820: [mem 0x0000000000100000-0x00000000007fffff] usable May 8 00:45:49.915613 kernel: BIOS-e820: [mem 0x0000000000800000-0x0000000000807fff] ACPI NVS May 8 00:45:49.915623 kernel: BIOS-e820: [mem 0x0000000000808000-0x000000000080afff] usable May 8 00:45:49.915632 kernel: BIOS-e820: [mem 0x000000000080b000-0x000000000080bfff] ACPI NVS May 8 00:45:49.915641 kernel: BIOS-e820: [mem 0x000000000080c000-0x000000000080ffff] usable May 8 00:45:49.915650 kernel: BIOS-e820: [mem 0x0000000000810000-0x00000000008fffff] ACPI NVS May 8 00:45:49.915662 kernel: BIOS-e820: [mem 0x0000000000900000-0x000000009c8eefff] usable May 8 00:45:49.915670 kernel: BIOS-e820: [mem 0x000000009c8ef000-0x000000009c9eefff] reserved May 8 00:45:49.915679 kernel: BIOS-e820: [mem 0x000000009c9ef000-0x000000009caeefff] type 20 May 8 00:45:49.915689 kernel: BIOS-e820: [mem 0x000000009caef000-0x000000009cb6efff] reserved May 8 00:45:49.915699 kernel: BIOS-e820: [mem 0x000000009cb6f000-0x000000009cb7efff] ACPI data May 8 00:45:49.915709 kernel: BIOS-e820: [mem 0x000000009cb7f000-0x000000009cbfefff] ACPI NVS May 8 00:45:49.915721 kernel: BIOS-e820: [mem 0x000000009cbff000-0x000000009cf3ffff] usable May 8 00:45:49.915731 kernel: BIOS-e820: [mem 0x000000009cf40000-0x000000009cf5ffff] reserved May 8 00:45:49.915741 kernel: BIOS-e820: [mem 0x000000009cf60000-0x000000009cffffff] ACPI NVS May 8 00:45:49.915750 kernel: BIOS-e820: [mem 0x00000000b0000000-0x00000000bfffffff] reserved May 8 00:45:49.915760 kernel: NX (Execute Disable) protection: active May 8 00:45:49.915769 kernel: APIC: Static calls initialized May 8 00:45:49.915779 kernel: efi: EFI v2.7 by EDK II May 8 00:45:49.915789 kernel: efi: SMBIOS=0x9c9ab000 ACPI=0x9cb7e000 ACPI 2.0=0x9cb7e014 MEMATTR=0x9b675198 May 8 00:45:49.915798 kernel: SMBIOS 2.8 present. 
May 8 00:45:49.915807 kernel: DMI: QEMU Standard PC (Q35 + ICH9, 2009), BIOS 0.0.0 02/06/2015 May 8 00:45:49.915816 kernel: Hypervisor detected: KVM May 8 00:45:49.915829 kernel: kvm-clock: Using msrs 4b564d01 and 4b564d00 May 8 00:45:49.915838 kernel: kvm-clock: using sched offset of 4057903347 cycles May 8 00:45:49.915849 kernel: clocksource: kvm-clock: mask: 0xffffffffffffffff max_cycles: 0x1cd42e4dffb, max_idle_ns: 881590591483 ns May 8 00:45:49.915859 kernel: tsc: Detected 2794.748 MHz processor May 8 00:45:49.915869 kernel: e820: update [mem 0x00000000-0x00000fff] usable ==> reserved May 8 00:45:49.915880 kernel: e820: remove [mem 0x000a0000-0x000fffff] usable May 8 00:45:49.915890 kernel: last_pfn = 0x9cf40 max_arch_pfn = 0x400000000 May 8 00:45:49.915900 kernel: MTRR map: 4 entries (2 fixed + 2 variable; max 18), built from 8 variable MTRRs May 8 00:45:49.915910 kernel: x86/PAT: Configuration [0-7]: WB WC UC- UC WB WP UC- WT May 8 00:45:49.915922 kernel: Using GB pages for direct mapping May 8 00:45:49.915932 kernel: Secure boot disabled May 8 00:45:49.915942 kernel: ACPI: Early table checksum verification disabled May 8 00:45:49.915952 kernel: ACPI: RSDP 0x000000009CB7E014 000024 (v02 BOCHS ) May 8 00:45:49.915967 kernel: ACPI: XSDT 0x000000009CB7D0E8 000054 (v01 BOCHS BXPC 00000001 01000013) May 8 00:45:49.915977 kernel: ACPI: FACP 0x000000009CB79000 0000F4 (v03 BOCHS BXPC 00000001 BXPC 00000001) May 8 00:45:49.915988 kernel: ACPI: DSDT 0x000000009CB7A000 0021A8 (v01 BOCHS BXPC 00000001 BXPC 00000001) May 8 00:45:49.916001 kernel: ACPI: FACS 0x000000009CBDD000 000040 May 8 00:45:49.916011 kernel: ACPI: APIC 0x000000009CB78000 000090 (v01 BOCHS BXPC 00000001 BXPC 00000001) May 8 00:45:49.916022 kernel: ACPI: HPET 0x000000009CB77000 000038 (v01 BOCHS BXPC 00000001 BXPC 00000001) May 8 00:45:49.916032 kernel: ACPI: MCFG 0x000000009CB76000 00003C (v01 BOCHS BXPC 00000001 BXPC 00000001) May 8 00:45:49.916042 kernel: ACPI: WAET 0x000000009CB75000 000028 (v01 BOCHS BXPC 00000001 BXPC 00000001) May 8 00:45:49.916053 kernel: ACPI: BGRT 0x000000009CB74000 000038 (v01 INTEL EDK2 00000002 01000013) May 8 00:45:49.916063 kernel: ACPI: Reserving FACP table memory at [mem 0x9cb79000-0x9cb790f3] May 8 00:45:49.916076 kernel: ACPI: Reserving DSDT table memory at [mem 0x9cb7a000-0x9cb7c1a7] May 8 00:45:49.916087 kernel: ACPI: Reserving FACS table memory at [mem 0x9cbdd000-0x9cbdd03f] May 8 00:45:49.916097 kernel: ACPI: Reserving APIC table memory at [mem 0x9cb78000-0x9cb7808f] May 8 00:45:49.916107 kernel: ACPI: Reserving HPET table memory at [mem 0x9cb77000-0x9cb77037] May 8 00:45:49.916117 kernel: ACPI: Reserving MCFG table memory at [mem 0x9cb76000-0x9cb7603b] May 8 00:45:49.916128 kernel: ACPI: Reserving WAET table memory at [mem 0x9cb75000-0x9cb75027] May 8 00:45:49.916138 kernel: ACPI: Reserving BGRT table memory at [mem 0x9cb74000-0x9cb74037] May 8 00:45:49.916162 kernel: No NUMA configuration found May 8 00:45:49.916172 kernel: Faking a node at [mem 0x0000000000000000-0x000000009cf3ffff] May 8 00:45:49.916182 kernel: NODE_DATA(0) allocated [mem 0x9cea6000-0x9ceabfff] May 8 00:45:49.916196 kernel: Zone ranges: May 8 00:45:49.916207 kernel: DMA [mem 0x0000000000001000-0x0000000000ffffff] May 8 00:45:49.916217 kernel: DMA32 [mem 0x0000000001000000-0x000000009cf3ffff] May 8 00:45:49.916227 kernel: Normal empty May 8 00:45:49.916238 kernel: Movable zone start for each node May 8 00:45:49.916248 kernel: Early memory node ranges May 8 00:45:49.916259 kernel: node 0: [mem 
0x0000000000001000-0x000000000009ffff] May 8 00:45:49.916269 kernel: node 0: [mem 0x0000000000100000-0x00000000007fffff] May 8 00:45:49.916279 kernel: node 0: [mem 0x0000000000808000-0x000000000080afff] May 8 00:45:49.916293 kernel: node 0: [mem 0x000000000080c000-0x000000000080ffff] May 8 00:45:49.916303 kernel: node 0: [mem 0x0000000000900000-0x000000009c8eefff] May 8 00:45:49.916313 kernel: node 0: [mem 0x000000009cbff000-0x000000009cf3ffff] May 8 00:45:49.916324 kernel: Initmem setup node 0 [mem 0x0000000000001000-0x000000009cf3ffff] May 8 00:45:49.916334 kernel: On node 0, zone DMA: 1 pages in unavailable ranges May 8 00:45:49.916345 kernel: On node 0, zone DMA: 96 pages in unavailable ranges May 8 00:45:49.916355 kernel: On node 0, zone DMA: 8 pages in unavailable ranges May 8 00:45:49.916365 kernel: On node 0, zone DMA: 1 pages in unavailable ranges May 8 00:45:49.916376 kernel: On node 0, zone DMA: 240 pages in unavailable ranges May 8 00:45:49.916389 kernel: On node 0, zone DMA32: 784 pages in unavailable ranges May 8 00:45:49.916399 kernel: On node 0, zone DMA32: 12480 pages in unavailable ranges May 8 00:45:49.916410 kernel: ACPI: PM-Timer IO Port: 0x608 May 8 00:45:49.916420 kernel: ACPI: LAPIC_NMI (acpi_id[0xff] dfl dfl lint[0x1]) May 8 00:45:49.916431 kernel: IOAPIC[0]: apic_id 0, version 17, address 0xfec00000, GSI 0-23 May 8 00:45:49.916441 kernel: ACPI: INT_SRC_OVR (bus 0 bus_irq 0 global_irq 2 dfl dfl) May 8 00:45:49.916451 kernel: ACPI: INT_SRC_OVR (bus 0 bus_irq 5 global_irq 5 high level) May 8 00:45:49.916462 kernel: ACPI: INT_SRC_OVR (bus 0 bus_irq 9 global_irq 9 high level) May 8 00:45:49.916472 kernel: ACPI: INT_SRC_OVR (bus 0 bus_irq 10 global_irq 10 high level) May 8 00:45:49.916482 kernel: ACPI: INT_SRC_OVR (bus 0 bus_irq 11 global_irq 11 high level) May 8 00:45:49.916496 kernel: ACPI: Using ACPI (MADT) for SMP configuration information May 8 00:45:49.916513 kernel: ACPI: HPET id: 0x8086a201 base: 0xfed00000 May 8 00:45:49.916523 kernel: TSC deadline timer available May 8 00:45:49.916534 kernel: smpboot: Allowing 4 CPUs, 0 hotplug CPUs May 8 00:45:49.916544 kernel: kvm-guest: APIC: eoi() replaced with kvm_guest_apic_eoi_write() May 8 00:45:49.916554 kernel: kvm-guest: KVM setup pv remote TLB flush May 8 00:45:49.916564 kernel: kvm-guest: setup PV sched yield May 8 00:45:49.916575 kernel: [mem 0xc0000000-0xffffffff] available for PCI devices May 8 00:45:49.916585 kernel: Booting paravirtualized kernel on KVM May 8 00:45:49.916598 kernel: clocksource: refined-jiffies: mask: 0xffffffff max_cycles: 0xffffffff, max_idle_ns: 1910969940391419 ns May 8 00:45:49.916609 kernel: setup_percpu: NR_CPUS:512 nr_cpumask_bits:4 nr_cpu_ids:4 nr_node_ids:1 May 8 00:45:49.916620 kernel: percpu: Embedded 58 pages/cpu s197096 r8192 d32280 u524288 May 8 00:45:49.916630 kernel: pcpu-alloc: s197096 r8192 d32280 u524288 alloc=1*2097152 May 8 00:45:49.916640 kernel: pcpu-alloc: [0] 0 1 2 3 May 8 00:45:49.916650 kernel: kvm-guest: PV spinlocks enabled May 8 00:45:49.916661 kernel: PV qspinlock hash table entries: 256 (order: 0, 4096 bytes, linear) May 8 00:45:49.916673 kernel: Kernel command line: rootflags=rw mount.usrflags=ro BOOT_IMAGE=/flatcar/vmlinuz-a mount.usr=/dev/mapper/usr verity.usr=PARTUUID=7130c94a-213a-4e5a-8e26-6cce9662f132 rootflags=rw mount.usrflags=ro consoleblank=0 root=LABEL=ROOT console=ttyS0,115200 flatcar.first_boot=detected verity.usrhash=86cfbfcc89a9c46f6cbba5bdb3509d1ce1367f0c93b0b0e4c6bdcad1a2064c90 May 8 00:45:49.916687 kernel: Unknown kernel command line 
parameters "BOOT_IMAGE=/flatcar/vmlinuz-a", will be passed to user space. May 8 00:45:49.916698 kernel: random: crng init done May 8 00:45:49.916708 kernel: Dentry cache hash table entries: 524288 (order: 10, 4194304 bytes, linear) May 8 00:45:49.916718 kernel: Inode-cache hash table entries: 262144 (order: 9, 2097152 bytes, linear) May 8 00:45:49.916729 kernel: Fallback order for Node 0: 0 May 8 00:45:49.916739 kernel: Built 1 zonelists, mobility grouping on. Total pages: 629759 May 8 00:45:49.916749 kernel: Policy zone: DMA32 May 8 00:45:49.916760 kernel: mem auto-init: stack:off, heap alloc:off, heap free:off May 8 00:45:49.916771 kernel: Memory: 2400600K/2567000K available (12288K kernel code, 2295K rwdata, 22740K rodata, 42856K init, 2336K bss, 166140K reserved, 0K cma-reserved) May 8 00:45:49.916785 kernel: SLUB: HWalign=64, Order=0-3, MinObjects=0, CPUs=4, Nodes=1 May 8 00:45:49.916795 kernel: ftrace: allocating 37944 entries in 149 pages May 8 00:45:49.916805 kernel: ftrace: allocated 149 pages with 4 groups May 8 00:45:49.916815 kernel: Dynamic Preempt: voluntary May 8 00:45:49.916835 kernel: rcu: Preemptible hierarchical RCU implementation. May 8 00:45:49.916850 kernel: rcu: RCU event tracing is enabled. May 8 00:45:49.916861 kernel: rcu: RCU restricting CPUs from NR_CPUS=512 to nr_cpu_ids=4. May 8 00:45:49.916872 kernel: Trampoline variant of Tasks RCU enabled. May 8 00:45:49.916883 kernel: Rude variant of Tasks RCU enabled. May 8 00:45:49.916894 kernel: Tracing variant of Tasks RCU enabled. May 8 00:45:49.916905 kernel: rcu: RCU calculated value of scheduler-enlistment delay is 100 jiffies. May 8 00:45:49.916927 kernel: rcu: Adjusting geometry for rcu_fanout_leaf=16, nr_cpu_ids=4 May 8 00:45:49.916941 kernel: NR_IRQS: 33024, nr_irqs: 456, preallocated irqs: 16 May 8 00:45:49.916960 kernel: rcu: srcu_init: Setting srcu_struct sizes based on contention. May 8 00:45:49.916987 kernel: Console: colour dummy device 80x25 May 8 00:45:49.917006 kernel: printk: console [ttyS0] enabled May 8 00:45:49.917025 kernel: ACPI: Core revision 20230628 May 8 00:45:49.917055 kernel: clocksource: hpet: mask: 0xffffffff max_cycles: 0xffffffff, max_idle_ns: 19112604467 ns May 8 00:45:49.917088 kernel: APIC: Switch to symmetric I/O mode setup May 8 00:45:49.917100 kernel: x2apic enabled May 8 00:45:49.917126 kernel: APIC: Switched APIC routing to: physical x2apic May 8 00:45:49.917137 kernel: kvm-guest: APIC: send_IPI_mask() replaced with kvm_send_ipi_mask() May 8 00:45:49.917161 kernel: kvm-guest: APIC: send_IPI_mask_allbutself() replaced with kvm_send_ipi_mask_allbutself() May 8 00:45:49.917172 kernel: kvm-guest: setup PV IPIs May 8 00:45:49.917183 kernel: ..TIMER: vector=0x30 apic1=0 pin1=2 apic2=-1 pin2=-1 May 8 00:45:49.917194 kernel: tsc: Marking TSC unstable due to TSCs unsynchronized May 8 00:45:49.917208 kernel: Calibrating delay loop (skipped) preset value.. 
5589.49 BogoMIPS (lpj=2794748) May 8 00:45:49.917219 kernel: x86/cpu: User Mode Instruction Prevention (UMIP) activated May 8 00:45:49.917230 kernel: Last level iTLB entries: 4KB 512, 2MB 255, 4MB 127 May 8 00:45:49.917241 kernel: Last level dTLB entries: 4KB 512, 2MB 255, 4MB 127, 1GB 0 May 8 00:45:49.917252 kernel: Spectre V1 : Mitigation: usercopy/swapgs barriers and __user pointer sanitization May 8 00:45:49.917262 kernel: Spectre V2 : Mitigation: Retpolines May 8 00:45:49.917273 kernel: Spectre V2 : Spectre v2 / SpectreRSB mitigation: Filling RSB on context switch May 8 00:45:49.917284 kernel: Spectre V2 : Spectre v2 / SpectreRSB : Filling RSB on VMEXIT May 8 00:45:49.917296 kernel: Spectre V2 : Enabling Speculation Barrier for firmware calls May 8 00:45:49.917312 kernel: RETBleed: Mitigation: untrained return thunk May 8 00:45:49.917326 kernel: Spectre V2 : mitigation: Enabling conditional Indirect Branch Prediction Barrier May 8 00:45:49.917337 kernel: Speculative Store Bypass: Mitigation: Speculative Store Bypass disabled via prctl May 8 00:45:49.917349 kernel: Speculative Return Stack Overflow: IBPB-extending microcode not applied! May 8 00:45:49.917361 kernel: Speculative Return Stack Overflow: WARNING: See https://kernel.org/doc/html/latest/admin-guide/hw-vuln/srso.html for mitigation options. May 8 00:45:49.917372 kernel: Speculative Return Stack Overflow: Vulnerable: Safe RET, no microcode May 8 00:45:49.917383 kernel: x86/fpu: Supporting XSAVE feature 0x001: 'x87 floating point registers' May 8 00:45:49.917395 kernel: x86/fpu: Supporting XSAVE feature 0x002: 'SSE registers' May 8 00:45:49.917409 kernel: x86/fpu: Supporting XSAVE feature 0x004: 'AVX registers' May 8 00:45:49.917419 kernel: x86/fpu: xstate_offset[2]: 576, xstate_sizes[2]: 256 May 8 00:45:49.917431 kernel: x86/fpu: Enabled xstate features 0x7, context size is 832 bytes, using 'compacted' format. May 8 00:45:49.917443 kernel: Freeing SMP alternatives memory: 32K May 8 00:45:49.917454 kernel: pid_max: default: 32768 minimum: 301 May 8 00:45:49.917465 kernel: LSM: initializing lsm=lockdown,capability,landlock,selinux,integrity May 8 00:45:49.917477 kernel: landlock: Up and running. May 8 00:45:49.917487 kernel: SELinux: Initializing. May 8 00:45:49.917498 kernel: Mount-cache hash table entries: 8192 (order: 4, 65536 bytes, linear) May 8 00:45:49.917520 kernel: Mountpoint-cache hash table entries: 8192 (order: 4, 65536 bytes, linear) May 8 00:45:49.917531 kernel: smpboot: CPU0: AMD EPYC 7402P 24-Core Processor (family: 0x17, model: 0x31, stepping: 0x0) May 8 00:45:49.917542 kernel: RCU Tasks: Setting shift to 2 and lim to 1 rcu_task_cb_adjust=1 rcu_task_cpu_ids=4. May 8 00:45:49.917554 kernel: RCU Tasks Rude: Setting shift to 2 and lim to 1 rcu_task_cb_adjust=1 rcu_task_cpu_ids=4. May 8 00:45:49.917566 kernel: RCU Tasks Trace: Setting shift to 2 and lim to 1 rcu_task_cb_adjust=1 rcu_task_cpu_ids=4. May 8 00:45:49.917577 kernel: Performance Events: Fam17h+ core perfctr, AMD PMU driver. May 8 00:45:49.917588 kernel: ... version: 0 May 8 00:45:49.917600 kernel: ... bit width: 48 May 8 00:45:49.917610 kernel: ... generic registers: 6 May 8 00:45:49.917624 kernel: ... value mask: 0000ffffffffffff May 8 00:45:49.917636 kernel: ... max period: 00007fffffffffff May 8 00:45:49.917647 kernel: ... fixed-purpose events: 0 May 8 00:45:49.917658 kernel: ... event mask: 000000000000003f May 8 00:45:49.917669 kernel: signal: max sigframe size: 1776 May 8 00:45:49.917680 kernel: rcu: Hierarchical SRCU implementation. 
May 8 00:45:49.917691 kernel: rcu: Max phase no-delay instances is 400. May 8 00:45:49.917702 kernel: smp: Bringing up secondary CPUs ... May 8 00:45:49.917714 kernel: smpboot: x86: Booting SMP configuration: May 8 00:45:49.917728 kernel: .... node #0, CPUs: #1 #2 #3 May 8 00:45:49.917739 kernel: smp: Brought up 1 node, 4 CPUs May 8 00:45:49.917750 kernel: smpboot: Max logical packages: 1 May 8 00:45:49.917761 kernel: smpboot: Total of 4 processors activated (22357.98 BogoMIPS) May 8 00:45:49.917772 kernel: devtmpfs: initialized May 8 00:45:49.917783 kernel: x86/mm: Memory block size: 128MB May 8 00:45:49.917795 kernel: ACPI: PM: Registering ACPI NVS region [mem 0x00800000-0x00807fff] (32768 bytes) May 8 00:45:49.917805 kernel: ACPI: PM: Registering ACPI NVS region [mem 0x0080b000-0x0080bfff] (4096 bytes) May 8 00:45:49.917816 kernel: ACPI: PM: Registering ACPI NVS region [mem 0x00810000-0x008fffff] (983040 bytes) May 8 00:45:49.917830 kernel: ACPI: PM: Registering ACPI NVS region [mem 0x9cb7f000-0x9cbfefff] (524288 bytes) May 8 00:45:49.917842 kernel: ACPI: PM: Registering ACPI NVS region [mem 0x9cf60000-0x9cffffff] (655360 bytes) May 8 00:45:49.917853 kernel: clocksource: jiffies: mask: 0xffffffff max_cycles: 0xffffffff, max_idle_ns: 1911260446275000 ns May 8 00:45:49.917864 kernel: futex hash table entries: 1024 (order: 4, 65536 bytes, linear) May 8 00:45:49.917876 kernel: pinctrl core: initialized pinctrl subsystem May 8 00:45:49.917887 kernel: NET: Registered PF_NETLINK/PF_ROUTE protocol family May 8 00:45:49.917898 kernel: audit: initializing netlink subsys (disabled) May 8 00:45:49.917909 kernel: audit: type=2000 audit(1746665150.108:1): state=initialized audit_enabled=0 res=1 May 8 00:45:49.917920 kernel: thermal_sys: Registered thermal governor 'step_wise' May 8 00:45:49.917934 kernel: thermal_sys: Registered thermal governor 'user_space' May 8 00:45:49.917945 kernel: cpuidle: using governor menu May 8 00:45:49.917956 kernel: acpiphp: ACPI Hot Plug PCI Controller Driver version: 0.5 May 8 00:45:49.917968 kernel: dca service started, version 1.12.1 May 8 00:45:49.917978 kernel: PCI: MMCONFIG for domain 0000 [bus 00-ff] at [mem 0xb0000000-0xbfffffff] (base 0xb0000000) May 8 00:45:49.917988 kernel: PCI: MMCONFIG at [mem 0xb0000000-0xbfffffff] reserved as E820 entry May 8 00:45:49.917998 kernel: PCI: Using configuration type 1 for base access May 8 00:45:49.918008 kernel: kprobes: kprobe jump-optimization is enabled. All kprobes are optimized if possible. 
May 8 00:45:49.918019 kernel: HugeTLB: registered 1.00 GiB page size, pre-allocated 0 pages May 8 00:45:49.918032 kernel: HugeTLB: 16380 KiB vmemmap can be freed for a 1.00 GiB page May 8 00:45:49.918042 kernel: HugeTLB: registered 2.00 MiB page size, pre-allocated 0 pages May 8 00:45:49.918053 kernel: HugeTLB: 28 KiB vmemmap can be freed for a 2.00 MiB page May 8 00:45:49.918062 kernel: ACPI: Added _OSI(Module Device) May 8 00:45:49.918071 kernel: ACPI: Added _OSI(Processor Device) May 8 00:45:49.918080 kernel: ACPI: Added _OSI(3.0 _SCP Extensions) May 8 00:45:49.918090 kernel: ACPI: Added _OSI(Processor Aggregator Device) May 8 00:45:49.918099 kernel: ACPI: 1 ACPI AML tables successfully acquired and loaded May 8 00:45:49.918108 kernel: ACPI: _OSC evaluation for CPUs failed, trying _PDC May 8 00:45:49.918120 kernel: ACPI: Interpreter enabled May 8 00:45:49.918129 kernel: ACPI: PM: (supports S0 S3 S5) May 8 00:45:49.918138 kernel: ACPI: Using IOAPIC for interrupt routing May 8 00:45:49.918163 kernel: PCI: Using host bridge windows from ACPI; if necessary, use "pci=nocrs" and report a bug May 8 00:45:49.918174 kernel: PCI: Using E820 reservations for host bridge windows May 8 00:45:49.918185 kernel: ACPI: Enabled 2 GPEs in block 00 to 3F May 8 00:45:49.918196 kernel: ACPI: PCI Root Bridge [PCI0] (domain 0000 [bus 00-ff]) May 8 00:45:49.918492 kernel: acpi PNP0A08:00: _OSC: OS supports [ExtendedConfig ASPM ClockPM Segments MSI HPX-Type3] May 8 00:45:49.918695 kernel: acpi PNP0A08:00: _OSC: platform does not support [PCIeHotplug LTR] May 8 00:45:49.918904 kernel: acpi PNP0A08:00: _OSC: OS now controls [PME AER PCIeCapability] May 8 00:45:49.918919 kernel: PCI host bridge to bus 0000:00 May 8 00:45:49.919079 kernel: pci_bus 0000:00: root bus resource [io 0x0000-0x0cf7 window] May 8 00:45:49.919229 kernel: pci_bus 0000:00: root bus resource [io 0x0d00-0xffff window] May 8 00:45:49.919359 kernel: pci_bus 0000:00: root bus resource [mem 0x000a0000-0x000bffff window] May 8 00:45:49.919487 kernel: pci_bus 0000:00: root bus resource [mem 0x9d000000-0xafffffff window] May 8 00:45:49.919628 kernel: pci_bus 0000:00: root bus resource [mem 0xc0000000-0xfebfffff window] May 8 00:45:49.919759 kernel: pci_bus 0000:00: root bus resource [mem 0x800000000-0xfffffffff window] May 8 00:45:49.919892 kernel: pci_bus 0000:00: root bus resource [bus 00-ff] May 8 00:45:49.920093 kernel: pci 0000:00:00.0: [8086:29c0] type 00 class 0x060000 May 8 00:45:49.920277 kernel: pci 0000:00:01.0: [1234:1111] type 00 class 0x030000 May 8 00:45:49.920429 kernel: pci 0000:00:01.0: reg 0x10: [mem 0xc0000000-0xc0ffffff pref] May 8 00:45:49.920584 kernel: pci 0000:00:01.0: reg 0x18: [mem 0xc1044000-0xc1044fff] May 8 00:45:49.920725 kernel: pci 0000:00:01.0: reg 0x30: [mem 0xffff0000-0xffffffff pref] May 8 00:45:49.920869 kernel: pci 0000:00:01.0: BAR 0: assigned to efifb May 8 00:45:49.921050 kernel: pci 0000:00:01.0: Video device with shadowed ROM at [mem 0x000c0000-0x000dffff] May 8 00:45:49.921225 kernel: pci 0000:00:02.0: [1af4:1005] type 00 class 0x00ff00 May 8 00:45:49.921368 kernel: pci 0000:00:02.0: reg 0x10: [io 0x6100-0x611f] May 8 00:45:49.921517 kernel: pci 0000:00:02.0: reg 0x14: [mem 0xc1043000-0xc1043fff] May 8 00:45:49.921670 kernel: pci 0000:00:02.0: reg 0x20: [mem 0x800000000-0x800003fff 64bit pref] May 8 00:45:49.921835 kernel: pci 0000:00:03.0: [1af4:1001] type 00 class 0x010000 May 8 00:45:49.921989 kernel: pci 0000:00:03.0: reg 0x10: [io 0x6000-0x607f] May 8 00:45:49.922138 kernel: pci 0000:00:03.0: reg 0x14: 
[mem 0xc1042000-0xc1042fff] May 8 00:45:49.922305 kernel: pci 0000:00:03.0: reg 0x20: [mem 0x800004000-0x800007fff 64bit pref] May 8 00:45:49.922464 kernel: pci 0000:00:04.0: [1af4:1000] type 00 class 0x020000 May 8 00:45:49.922630 kernel: pci 0000:00:04.0: reg 0x10: [io 0x60e0-0x60ff] May 8 00:45:49.922779 kernel: pci 0000:00:04.0: reg 0x14: [mem 0xc1041000-0xc1041fff] May 8 00:45:49.922928 kernel: pci 0000:00:04.0: reg 0x20: [mem 0x800008000-0x80000bfff 64bit pref] May 8 00:45:49.923076 kernel: pci 0000:00:04.0: reg 0x30: [mem 0xfffc0000-0xffffffff pref] May 8 00:45:49.923302 kernel: pci 0000:00:1f.0: [8086:2918] type 00 class 0x060100 May 8 00:45:49.923460 kernel: pci 0000:00:1f.0: quirk: [io 0x0600-0x067f] claimed by ICH6 ACPI/GPIO/TCO May 8 00:45:49.923638 kernel: pci 0000:00:1f.2: [8086:2922] type 00 class 0x010601 May 8 00:45:49.923793 kernel: pci 0000:00:1f.2: reg 0x20: [io 0x60c0-0x60df] May 8 00:45:49.923942 kernel: pci 0000:00:1f.2: reg 0x24: [mem 0xc1040000-0xc1040fff] May 8 00:45:49.924097 kernel: pci 0000:00:1f.3: [8086:2930] type 00 class 0x0c0500 May 8 00:45:49.924263 kernel: pci 0000:00:1f.3: reg 0x20: [io 0x6080-0x60bf] May 8 00:45:49.924277 kernel: ACPI: PCI: Interrupt link LNKA configured for IRQ 10 May 8 00:45:49.924287 kernel: ACPI: PCI: Interrupt link LNKB configured for IRQ 10 May 8 00:45:49.924297 kernel: ACPI: PCI: Interrupt link LNKC configured for IRQ 11 May 8 00:45:49.924307 kernel: ACPI: PCI: Interrupt link LNKD configured for IRQ 11 May 8 00:45:49.924321 kernel: ACPI: PCI: Interrupt link LNKE configured for IRQ 10 May 8 00:45:49.924331 kernel: ACPI: PCI: Interrupt link LNKF configured for IRQ 10 May 8 00:45:49.924340 kernel: ACPI: PCI: Interrupt link LNKG configured for IRQ 11 May 8 00:45:49.924350 kernel: ACPI: PCI: Interrupt link LNKH configured for IRQ 11 May 8 00:45:49.924360 kernel: ACPI: PCI: Interrupt link GSIA configured for IRQ 16 May 8 00:45:49.924370 kernel: ACPI: PCI: Interrupt link GSIB configured for IRQ 17 May 8 00:45:49.924379 kernel: ACPI: PCI: Interrupt link GSIC configured for IRQ 18 May 8 00:45:49.924389 kernel: ACPI: PCI: Interrupt link GSID configured for IRQ 19 May 8 00:45:49.924398 kernel: ACPI: PCI: Interrupt link GSIE configured for IRQ 20 May 8 00:45:49.924411 kernel: ACPI: PCI: Interrupt link GSIF configured for IRQ 21 May 8 00:45:49.924420 kernel: ACPI: PCI: Interrupt link GSIG configured for IRQ 22 May 8 00:45:49.924430 kernel: ACPI: PCI: Interrupt link GSIH configured for IRQ 23 May 8 00:45:49.924440 kernel: iommu: Default domain type: Translated May 8 00:45:49.924450 kernel: iommu: DMA domain TLB invalidation policy: lazy mode May 8 00:45:49.924459 kernel: efivars: Registered efivars operations May 8 00:45:49.924470 kernel: PCI: Using ACPI for IRQ routing May 8 00:45:49.924480 kernel: PCI: pci_cache_line_size set to 64 bytes May 8 00:45:49.924490 kernel: e820: reserve RAM buffer [mem 0x0080b000-0x008fffff] May 8 00:45:49.924510 kernel: e820: reserve RAM buffer [mem 0x00810000-0x008fffff] May 8 00:45:49.924521 kernel: e820: reserve RAM buffer [mem 0x9c8ef000-0x9fffffff] May 8 00:45:49.924531 kernel: e820: reserve RAM buffer [mem 0x9cf40000-0x9fffffff] May 8 00:45:49.924681 kernel: pci 0000:00:01.0: vgaarb: setting as boot VGA device May 8 00:45:49.924835 kernel: pci 0000:00:01.0: vgaarb: bridge control possible May 8 00:45:49.924990 kernel: pci 0000:00:01.0: vgaarb: VGA device added: decodes=io+mem,owns=io+mem,locks=none May 8 00:45:49.925005 kernel: vgaarb: loaded May 8 00:45:49.925016 kernel: hpet0: at MMIO 0xfed00000, IRQs 2, 
8, 0 May 8 00:45:49.925026 kernel: hpet0: 3 comparators, 64-bit 100.000000 MHz counter May 8 00:45:49.925041 kernel: clocksource: Switched to clocksource kvm-clock May 8 00:45:49.925051 kernel: VFS: Disk quotas dquot_6.6.0 May 8 00:45:49.925062 kernel: VFS: Dquot-cache hash table entries: 512 (order 0, 4096 bytes) May 8 00:45:49.925072 kernel: pnp: PnP ACPI init May 8 00:45:49.925255 kernel: system 00:05: [mem 0xb0000000-0xbfffffff window] has been reserved May 8 00:45:49.925272 kernel: pnp: PnP ACPI: found 6 devices May 8 00:45:49.925282 kernel: clocksource: acpi_pm: mask: 0xffffff max_cycles: 0xffffff, max_idle_ns: 2085701024 ns May 8 00:45:49.925292 kernel: NET: Registered PF_INET protocol family May 8 00:45:49.925308 kernel: IP idents hash table entries: 65536 (order: 7, 524288 bytes, linear) May 8 00:45:49.925318 kernel: tcp_listen_portaddr_hash hash table entries: 2048 (order: 3, 32768 bytes, linear) May 8 00:45:49.925328 kernel: Table-perturb hash table entries: 65536 (order: 6, 262144 bytes, linear) May 8 00:45:49.925339 kernel: TCP established hash table entries: 32768 (order: 6, 262144 bytes, linear) May 8 00:45:49.925349 kernel: TCP bind hash table entries: 32768 (order: 8, 1048576 bytes, linear) May 8 00:45:49.925359 kernel: TCP: Hash tables configured (established 32768 bind 32768) May 8 00:45:49.925369 kernel: UDP hash table entries: 2048 (order: 4, 65536 bytes, linear) May 8 00:45:49.925380 kernel: UDP-Lite hash table entries: 2048 (order: 4, 65536 bytes, linear) May 8 00:45:49.925390 kernel: NET: Registered PF_UNIX/PF_LOCAL protocol family May 8 00:45:49.925403 kernel: NET: Registered PF_XDP protocol family May 8 00:45:49.925570 kernel: pci 0000:00:04.0: can't claim BAR 6 [mem 0xfffc0000-0xffffffff pref]: no compatible bridge window May 8 00:45:49.925727 kernel: pci 0000:00:04.0: BAR 6: assigned [mem 0x9d000000-0x9d03ffff pref] May 8 00:45:49.925872 kernel: pci_bus 0000:00: resource 4 [io 0x0000-0x0cf7 window] May 8 00:45:49.926016 kernel: pci_bus 0000:00: resource 5 [io 0x0d00-0xffff window] May 8 00:45:49.926228 kernel: pci_bus 0000:00: resource 6 [mem 0x000a0000-0x000bffff window] May 8 00:45:49.926372 kernel: pci_bus 0000:00: resource 7 [mem 0x9d000000-0xafffffff window] May 8 00:45:49.926526 kernel: pci_bus 0000:00: resource 8 [mem 0xc0000000-0xfebfffff window] May 8 00:45:49.926667 kernel: pci_bus 0000:00: resource 9 [mem 0x800000000-0xfffffffff window] May 8 00:45:49.926681 kernel: PCI: CLS 0 bytes, default 64 May 8 00:45:49.926691 kernel: Initialise system trusted keyrings May 8 00:45:49.926702 kernel: workingset: timestamp_bits=39 max_order=20 bucket_order=0 May 8 00:45:49.926712 kernel: Key type asymmetric registered May 8 00:45:49.926722 kernel: Asymmetric key parser 'x509' registered May 8 00:45:49.926732 kernel: Block layer SCSI generic (bsg) driver version 0.4 loaded (major 251) May 8 00:45:49.926742 kernel: io scheduler mq-deadline registered May 8 00:45:49.926757 kernel: io scheduler kyber registered May 8 00:45:49.926767 kernel: io scheduler bfq registered May 8 00:45:49.926777 kernel: ioatdma: Intel(R) QuickData Technology Driver 5.00 May 8 00:45:49.926788 kernel: ACPI: \_SB_.GSIG: Enabled at IRQ 22 May 8 00:45:49.926799 kernel: ACPI: \_SB_.GSIH: Enabled at IRQ 23 May 8 00:45:49.926809 kernel: ACPI: \_SB_.GSIE: Enabled at IRQ 20 May 8 00:45:49.926819 kernel: Serial: 8250/16550 driver, 4 ports, IRQ sharing enabled May 8 00:45:49.926829 kernel: 00:03: ttyS0 at I/O 0x3f8 (irq = 4, base_baud = 115200) is a 16550A May 8 00:45:49.926840 kernel: i8042: PNP: PS/2 
Controller [PNP0303:KBD,PNP0f13:MOU] at 0x60,0x64 irq 1,12 May 8 00:45:49.926853 kernel: serio: i8042 KBD port at 0x60,0x64 irq 1 May 8 00:45:49.926863 kernel: serio: i8042 AUX port at 0x60,0x64 irq 12 May 8 00:45:49.927018 kernel: rtc_cmos 00:04: RTC can wake from S4 May 8 00:45:49.927034 kernel: input: AT Translated Set 2 keyboard as /devices/platform/i8042/serio0/input/input0 May 8 00:45:49.927196 kernel: rtc_cmos 00:04: registered as rtc0 May 8 00:45:49.927342 kernel: rtc_cmos 00:04: setting system clock to 2025-05-08T00:45:49 UTC (1746665149) May 8 00:45:49.927485 kernel: rtc_cmos 00:04: alarms up to one day, y3k, 242 bytes nvram, hpet irqs May 8 00:45:49.927508 kernel: amd_pstate: the _CPC object is not present in SBIOS or ACPI disabled May 8 00:45:49.927524 kernel: efifb: probing for efifb May 8 00:45:49.927535 kernel: efifb: framebuffer at 0xc0000000, using 1408k, total 1408k May 8 00:45:49.927545 kernel: efifb: mode is 800x600x24, linelength=2400, pages=1 May 8 00:45:49.927555 kernel: efifb: scrolling: redraw May 8 00:45:49.927565 kernel: efifb: Truecolor: size=0:8:8:8, shift=0:16:8:0 May 8 00:45:49.927576 kernel: Console: switching to colour frame buffer device 100x37 May 8 00:45:49.927609 kernel: fb0: EFI VGA frame buffer device May 8 00:45:49.927622 kernel: pstore: Using crash dump compression: deflate May 8 00:45:49.927633 kernel: pstore: Registered efi_pstore as persistent store backend May 8 00:45:49.927646 kernel: NET: Registered PF_INET6 protocol family May 8 00:45:49.927656 kernel: Segment Routing with IPv6 May 8 00:45:49.927667 kernel: In-situ OAM (IOAM) with IPv6 May 8 00:45:49.927678 kernel: NET: Registered PF_PACKET protocol family May 8 00:45:49.927688 kernel: Key type dns_resolver registered May 8 00:45:49.927699 kernel: IPI shorthand broadcast: enabled May 8 00:45:49.927710 kernel: sched_clock: Marking stable (651003241, 135075282)->(803610393, -17531870) May 8 00:45:49.927721 kernel: registered taskstats version 1 May 8 00:45:49.927731 kernel: Loading compiled-in X.509 certificates May 8 00:45:49.927745 kernel: Loaded X.509 cert 'Kinvolk GmbH: Module signing key for 6.6.88-flatcar: 75e4e434c57439d3f2eaf7797bbbcdd698dafd0e' May 8 00:45:49.927756 kernel: Key type .fscrypt registered May 8 00:45:49.927766 kernel: Key type fscrypt-provisioning registered May 8 00:45:49.927777 kernel: ima: No TPM chip found, activating TPM-bypass! 
May 8 00:45:49.927788 kernel: ima: Allocated hash algorithm: sha1 May 8 00:45:49.927801 kernel: ima: No architecture policies found May 8 00:45:49.927812 kernel: clk: Disabling unused clocks May 8 00:45:49.927823 kernel: Freeing unused kernel image (initmem) memory: 42856K May 8 00:45:49.927833 kernel: Write protecting the kernel read-only data: 36864k May 8 00:45:49.927847 kernel: Freeing unused kernel image (rodata/data gap) memory: 1836K May 8 00:45:49.927858 kernel: Run /init as init process May 8 00:45:49.927868 kernel: with arguments: May 8 00:45:49.927879 kernel: /init May 8 00:45:49.927889 kernel: with environment: May 8 00:45:49.927899 kernel: HOME=/ May 8 00:45:49.927909 kernel: TERM=linux May 8 00:45:49.927920 kernel: BOOT_IMAGE=/flatcar/vmlinuz-a May 8 00:45:49.927933 systemd[1]: systemd 255 running in system mode (+PAM +AUDIT +SELINUX -APPARMOR +IMA +SMACK +SECCOMP +GCRYPT -GNUTLS +OPENSSL -ACL +BLKID +CURL +ELFUTILS -FIDO2 +IDN2 -IDN +IPTC +KMOD +LIBCRYPTSETUP +LIBFDISK +PCRE2 -PWQUALITY -P11KIT -QRENCODE +TPM2 +BZIP2 +LZ4 +XZ +ZLIB +ZSTD -BPF_FRAMEWORK -XKBCOMMON +UTMP -SYSVINIT default-hierarchy=unified) May 8 00:45:49.927950 systemd[1]: Detected virtualization kvm. May 8 00:45:49.927961 systemd[1]: Detected architecture x86-64. May 8 00:45:49.927972 systemd[1]: Running in initrd. May 8 00:45:49.927987 systemd[1]: No hostname configured, using default hostname. May 8 00:45:49.928001 systemd[1]: Hostname set to . May 8 00:45:49.928013 systemd[1]: Initializing machine ID from VM UUID. May 8 00:45:49.928024 systemd[1]: Queued start job for default target initrd.target. May 8 00:45:49.928035 systemd[1]: Started clevis-luks-askpass.path - Forward Password Requests to Clevis Directory Watch. May 8 00:45:49.928047 systemd[1]: Started systemd-ask-password-console.path - Dispatch Password Requests to Console Directory Watch. May 8 00:45:49.928059 systemd[1]: Expecting device dev-disk-by\x2dlabel-EFI\x2dSYSTEM.device - /dev/disk/by-label/EFI-SYSTEM... May 8 00:45:49.928070 systemd[1]: Expecting device dev-disk-by\x2dlabel-OEM.device - /dev/disk/by-label/OEM... May 8 00:45:49.928082 systemd[1]: Expecting device dev-disk-by\x2dlabel-ROOT.device - /dev/disk/by-label/ROOT... May 8 00:45:49.928097 systemd[1]: Expecting device dev-disk-by\x2dpartlabel-USR\x2dA.device - /dev/disk/by-partlabel/USR-A... May 8 00:45:49.928110 systemd[1]: Expecting device dev-disk-by\x2dpartuuid-7130c94a\x2d213a\x2d4e5a\x2d8e26\x2d6cce9662f132.device - /dev/disk/by-partuuid/7130c94a-213a-4e5a-8e26-6cce9662f132... May 8 00:45:49.928122 systemd[1]: Expecting device dev-mapper-usr.device - /dev/mapper/usr... May 8 00:45:49.928133 systemd[1]: Reached target cryptsetup-pre.target - Local Encrypted Volumes (Pre). May 8 00:45:49.928145 systemd[1]: Reached target cryptsetup.target - Local Encrypted Volumes. May 8 00:45:49.928247 systemd[1]: Reached target paths.target - Path Units. May 8 00:45:49.928259 systemd[1]: Reached target slices.target - Slice Units. May 8 00:45:49.928274 systemd[1]: Reached target swap.target - Swaps. May 8 00:45:49.928285 systemd[1]: Reached target timers.target - Timer Units. May 8 00:45:49.928299 systemd[1]: Listening on iscsid.socket - Open-iSCSI iscsid Socket. May 8 00:45:49.928310 systemd[1]: Listening on iscsiuio.socket - Open-iSCSI iscsiuio Socket. May 8 00:45:49.928324 systemd[1]: Listening on systemd-journald-dev-log.socket - Journal Socket (/dev/log). May 8 00:45:49.928337 systemd[1]: Listening on systemd-journald.socket - Journal Socket. 
May 8 00:45:49.928350 systemd[1]: Listening on systemd-networkd.socket - Network Service Netlink Socket. May 8 00:45:49.928361 systemd[1]: Listening on systemd-udevd-control.socket - udev Control Socket. May 8 00:45:49.928377 systemd[1]: Listening on systemd-udevd-kernel.socket - udev Kernel Socket. May 8 00:45:49.928388 systemd[1]: Reached target sockets.target - Socket Units. May 8 00:45:49.928399 systemd[1]: Starting ignition-setup-pre.service - Ignition env setup... May 8 00:45:49.928410 systemd[1]: Starting kmod-static-nodes.service - Create List of Static Device Nodes... May 8 00:45:49.928421 systemd[1]: Finished network-cleanup.service - Network Cleanup. May 8 00:45:49.928432 systemd[1]: Starting systemd-fsck-usr.service... May 8 00:45:49.928442 systemd[1]: Starting systemd-journald.service - Journal Service... May 8 00:45:49.928453 systemd[1]: Starting systemd-modules-load.service - Load Kernel Modules... May 8 00:45:49.928465 systemd[1]: Starting systemd-vconsole-setup.service - Virtual Console Setup... May 8 00:45:49.928480 systemd[1]: Finished ignition-setup-pre.service - Ignition env setup. May 8 00:45:49.928491 systemd[1]: Finished kmod-static-nodes.service - Create List of Static Device Nodes. May 8 00:45:49.928509 systemd[1]: Finished systemd-fsck-usr.service. May 8 00:45:49.928545 systemd-journald[193]: Collecting audit messages is disabled. May 8 00:45:49.928573 systemd[1]: Starting systemd-tmpfiles-setup-dev-early.service - Create Static Device Nodes in /dev gracefully... May 8 00:45:49.928585 systemd[1]: Finished systemd-vconsole-setup.service - Virtual Console Setup. May 8 00:45:49.928596 systemd-journald[193]: Journal started May 8 00:45:49.928622 systemd-journald[193]: Runtime Journal (/run/log/journal/e89afb47099d4f00beb3db3bb0388d76) is 6.0M, max 48.3M, 42.2M free. May 8 00:45:49.918882 systemd-modules-load[194]: Inserted module 'overlay' May 8 00:45:49.932196 systemd[1]: Starting dracut-cmdline-ask.service - dracut ask for additional cmdline parameters... May 8 00:45:49.934222 systemd[1]: Started systemd-journald.service - Journal Service. May 8 00:45:49.934978 systemd[1]: Finished systemd-tmpfiles-setup-dev-early.service - Create Static Device Nodes in /dev gracefully. May 8 00:45:49.946170 kernel: bridge: filtering via arp/ip/ip6tables is no longer available by default. Update your scripts to load br_netfilter if you need this. May 8 00:45:49.947981 systemd[1]: Starting systemd-tmpfiles-setup-dev.service - Create Static Device Nodes in /dev... May 8 00:45:49.950634 kernel: Bridge firewalling registered May 8 00:45:49.948255 systemd-modules-load[194]: Inserted module 'br_netfilter' May 8 00:45:49.951384 systemd[1]: Starting systemd-tmpfiles-setup.service - Create System Files and Directories... May 8 00:45:49.951862 systemd[1]: Finished systemd-modules-load.service - Load Kernel Modules. May 8 00:45:49.955774 systemd[1]: Starting systemd-sysctl.service - Apply Kernel Variables... May 8 00:45:49.972440 systemd[1]: Finished systemd-tmpfiles-setup-dev.service - Create Static Device Nodes in /dev. May 8 00:45:49.974204 systemd[1]: Finished dracut-cmdline-ask.service - dracut ask for additional cmdline parameters. May 8 00:45:49.975369 systemd[1]: Finished systemd-tmpfiles-setup.service - Create System Files and Directories. May 8 00:45:49.977361 systemd[1]: Starting dracut-cmdline.service - dracut cmdline hook... May 8 00:45:49.989446 systemd[1]: Finished systemd-sysctl.service - Apply Kernel Variables. 
May 8 00:45:49.992318 systemd[1]: Starting systemd-resolved.service - Network Name Resolution... May 8 00:45:49.998824 dracut-cmdline[226]: dracut-dracut-053 May 8 00:45:50.003270 dracut-cmdline[226]: Using kernel command line parameters: rd.driver.pre=btrfs rootflags=rw mount.usrflags=ro BOOT_IMAGE=/flatcar/vmlinuz-a mount.usr=/dev/mapper/usr verity.usr=PARTUUID=7130c94a-213a-4e5a-8e26-6cce9662f132 rootflags=rw mount.usrflags=ro consoleblank=0 root=LABEL=ROOT console=ttyS0,115200 flatcar.first_boot=detected verity.usrhash=86cfbfcc89a9c46f6cbba5bdb3509d1ce1367f0c93b0b0e4c6bdcad1a2064c90 May 8 00:45:50.028043 systemd-resolved[234]: Positive Trust Anchors: May 8 00:45:50.028065 systemd-resolved[234]: . IN DS 20326 8 2 e06d44b80b8f1d39a95c0b0d7c65d08458e880409bbc683457104237c7f8ec8d May 8 00:45:50.028106 systemd-resolved[234]: Negative trust anchors: home.arpa 10.in-addr.arpa 16.172.in-addr.arpa 17.172.in-addr.arpa 18.172.in-addr.arpa 19.172.in-addr.arpa 20.172.in-addr.arpa 21.172.in-addr.arpa 22.172.in-addr.arpa 23.172.in-addr.arpa 24.172.in-addr.arpa 25.172.in-addr.arpa 26.172.in-addr.arpa 27.172.in-addr.arpa 28.172.in-addr.arpa 29.172.in-addr.arpa 30.172.in-addr.arpa 31.172.in-addr.arpa 170.0.0.192.in-addr.arpa 171.0.0.192.in-addr.arpa 168.192.in-addr.arpa d.f.ip6.arpa ipv4only.arpa resolver.arpa corp home internal intranet lan local private test May 8 00:45:50.030733 systemd-resolved[234]: Defaulting to hostname 'linux'. May 8 00:45:50.031940 systemd[1]: Started systemd-resolved.service - Network Name Resolution. May 8 00:45:50.039887 systemd[1]: Reached target nss-lookup.target - Host and Network Name Lookups. May 8 00:45:50.117201 kernel: SCSI subsystem initialized May 8 00:45:50.126202 kernel: Loading iSCSI transport class v2.0-870. May 8 00:45:50.137185 kernel: iscsi: registered transport (tcp) May 8 00:45:50.159192 kernel: iscsi: registered transport (qla4xxx) May 8 00:45:50.159259 kernel: QLogic iSCSI HBA Driver May 8 00:45:50.217772 systemd[1]: Finished dracut-cmdline.service - dracut cmdline hook. May 8 00:45:50.228631 systemd[1]: Starting dracut-pre-udev.service - dracut pre-udev hook... May 8 00:45:50.258402 kernel: device-mapper: core: CONFIG_IMA_DISABLE_HTABLE is disabled. Duplicate IMA measurements will not be recorded in the IMA log. May 8 00:45:50.258484 kernel: device-mapper: uevent: version 1.0.3 May 8 00:45:50.259653 kernel: device-mapper: ioctl: 4.48.0-ioctl (2023-03-01) initialised: dm-devel@redhat.com May 8 00:45:50.303197 kernel: raid6: avx2x4 gen() 27808 MB/s May 8 00:45:50.320194 kernel: raid6: avx2x2 gen() 29772 MB/s May 8 00:45:50.337313 kernel: raid6: avx2x1 gen() 24788 MB/s May 8 00:45:50.337369 kernel: raid6: using algorithm avx2x2 gen() 29772 MB/s May 8 00:45:50.355284 kernel: raid6: .... xor() 19921 MB/s, rmw enabled May 8 00:45:50.355311 kernel: raid6: using avx2x2 recovery algorithm May 8 00:45:50.381200 kernel: xor: automatically using best checksumming function avx May 8 00:45:50.595208 kernel: Btrfs loaded, zoned=no, fsverity=no May 8 00:45:50.610465 systemd[1]: Finished dracut-pre-udev.service - dracut pre-udev hook. May 8 00:45:50.624422 systemd[1]: Starting systemd-udevd.service - Rule-based Manager for Device Events and Files... May 8 00:45:50.637536 systemd-udevd[413]: Using default interface naming scheme 'v255'. May 8 00:45:50.643330 systemd[1]: Started systemd-udevd.service - Rule-based Manager for Device Events and Files. May 8 00:45:50.655462 systemd[1]: Starting dracut-pre-trigger.service - dracut pre-trigger hook... 
May 8 00:45:50.673474 dracut-pre-trigger[422]: rd.md=0: removing MD RAID activation May 8 00:45:50.713909 systemd[1]: Finished dracut-pre-trigger.service - dracut pre-trigger hook. May 8 00:45:50.728556 systemd[1]: Starting systemd-udev-trigger.service - Coldplug All udev Devices... May 8 00:45:50.802069 systemd[1]: Finished systemd-udev-trigger.service - Coldplug All udev Devices. May 8 00:45:50.812429 systemd[1]: Starting dracut-initqueue.service - dracut initqueue hook... May 8 00:45:50.829470 systemd[1]: Finished dracut-initqueue.service - dracut initqueue hook. May 8 00:45:50.832987 systemd[1]: Reached target remote-fs-pre.target - Preparation for Remote File Systems. May 8 00:45:50.834366 systemd[1]: Reached target remote-cryptsetup.target - Remote Encrypted Volumes. May 8 00:45:50.835695 systemd[1]: Reached target remote-fs.target - Remote File Systems. May 8 00:45:50.845173 kernel: virtio_blk virtio1: 4/0/0 default/read/poll queues May 8 00:45:50.873224 kernel: cryptd: max_cpu_qlen set to 1000 May 8 00:45:50.873246 kernel: virtio_blk virtio1: [vda] 19775488 512-byte logical blocks (10.1 GB/9.43 GiB) May 8 00:45:50.873395 kernel: AVX2 version of gcm_enc/dec engaged. May 8 00:45:50.873407 kernel: AES CTR mode by8 optimization enabled May 8 00:45:50.873426 kernel: GPT:Primary header thinks Alt. header is not at the end of the disk. May 8 00:45:50.873437 kernel: GPT:9289727 != 19775487 May 8 00:45:50.873447 kernel: GPT:Alternate GPT header not at the end of the disk. May 8 00:45:50.873457 kernel: GPT:9289727 != 19775487 May 8 00:45:50.873467 kernel: GPT: Use GNU Parted to correct GPT errors. May 8 00:45:50.873476 kernel: vda: vda1 vda2 vda3 vda4 vda6 vda7 vda9 May 8 00:45:50.846425 systemd[1]: Starting dracut-pre-mount.service - dracut pre-mount hook... May 8 00:45:50.866318 systemd[1]: Finished dracut-pre-mount.service - dracut pre-mount hook. May 8 00:45:50.870876 systemd[1]: dracut-cmdline-ask.service: Deactivated successfully. May 8 00:45:50.871025 systemd[1]: Stopped dracut-cmdline-ask.service - dracut ask for additional cmdline parameters. May 8 00:45:50.880416 systemd[1]: Stopping dracut-cmdline-ask.service - dracut ask for additional cmdline parameters... May 8 00:45:50.881696 systemd[1]: systemd-vconsole-setup.service: Deactivated successfully. May 8 00:45:50.882022 systemd[1]: Stopped systemd-vconsole-setup.service - Virtual Console Setup. May 8 00:45:50.885811 systemd[1]: Stopping systemd-vconsole-setup.service - Virtual Console Setup... May 8 00:45:50.892216 kernel: libata version 3.00 loaded. May 8 00:45:50.897129 systemd[1]: Starting systemd-vconsole-setup.service - Virtual Console Setup... 
May 8 00:45:50.902699 kernel: ahci 0000:00:1f.2: version 3.0 May 8 00:45:50.936550 kernel: ACPI: \_SB_.GSIA: Enabled at IRQ 16 May 8 00:45:50.936575 kernel: ahci 0000:00:1f.2: AHCI 0001.0000 32 slots 6 ports 1.5 Gbps 0x3f impl SATA mode May 8 00:45:50.936734 kernel: BTRFS: device label OEM devid 1 transid 12 /dev/vda6 scanned by (udev-worker) (474) May 8 00:45:50.936746 kernel: ahci 0000:00:1f.2: flags: 64bit ncq only May 8 00:45:50.936887 kernel: BTRFS: device fsid 28014d97-e6d7-4db4-b1d9-76a980e09972 devid 1 transid 39 /dev/vda3 scanned by (udev-worker) (458) May 8 00:45:50.936898 kernel: scsi host0: ahci May 8 00:45:50.937054 kernel: scsi host1: ahci May 8 00:45:50.937216 kernel: scsi host2: ahci May 8 00:45:50.937372 kernel: scsi host3: ahci May 8 00:45:50.937528 kernel: scsi host4: ahci May 8 00:45:50.937672 kernel: scsi host5: ahci May 8 00:45:50.937816 kernel: ata1: SATA max UDMA/133 abar m4096@0xc1040000 port 0xc1040100 irq 34 May 8 00:45:50.937826 kernel: ata2: SATA max UDMA/133 abar m4096@0xc1040000 port 0xc1040180 irq 34 May 8 00:45:50.937837 kernel: ata3: SATA max UDMA/133 abar m4096@0xc1040000 port 0xc1040200 irq 34 May 8 00:45:50.937847 kernel: ata4: SATA max UDMA/133 abar m4096@0xc1040000 port 0xc1040280 irq 34 May 8 00:45:50.937861 kernel: ata5: SATA max UDMA/133 abar m4096@0xc1040000 port 0xc1040300 irq 34 May 8 00:45:50.937871 kernel: ata6: SATA max UDMA/133 abar m4096@0xc1040000 port 0xc1040380 irq 34 May 8 00:45:50.919885 systemd[1]: Found device dev-disk-by\x2dlabel-EFI\x2dSYSTEM.device - /dev/disk/by-label/EFI-SYSTEM. May 8 00:45:50.932983 systemd[1]: Found device dev-disk-by\x2dlabel-ROOT.device - /dev/disk/by-label/ROOT. May 8 00:45:50.947384 systemd[1]: Found device dev-disk-by\x2dlabel-OEM.device - /dev/disk/by-label/OEM. May 8 00:45:50.951913 systemd[1]: Found device dev-disk-by\x2dpartlabel-USR\x2dA.device - /dev/disk/by-partlabel/USR-A. May 8 00:45:50.952026 systemd[1]: Found device dev-disk-by\x2dpartuuid-7130c94a\x2d213a\x2d4e5a\x2d8e26\x2d6cce9662f132.device - /dev/disk/by-partuuid/7130c94a-213a-4e5a-8e26-6cce9662f132. May 8 00:45:50.972430 systemd[1]: Starting disk-uuid.service - Generate new UUID for disk GPT if necessary... May 8 00:45:50.972611 systemd[1]: systemd-vconsole-setup.service: Deactivated successfully. May 8 00:45:50.972694 systemd[1]: Stopped systemd-vconsole-setup.service - Virtual Console Setup. May 8 00:45:50.976749 systemd[1]: Stopping systemd-vconsole-setup.service - Virtual Console Setup... May 8 00:45:50.978815 systemd[1]: Starting systemd-vconsole-setup.service - Virtual Console Setup... May 8 00:45:50.999445 disk-uuid[566]: Primary Header is updated. May 8 00:45:50.999445 disk-uuid[566]: Secondary Entries is updated. May 8 00:45:50.999445 disk-uuid[566]: Secondary Header is updated. May 8 00:45:51.006311 kernel: vda: vda1 vda2 vda3 vda4 vda6 vda7 vda9 May 8 00:45:51.000297 systemd[1]: Finished systemd-vconsole-setup.service - Virtual Console Setup. May 8 00:45:51.001865 systemd[1]: Starting dracut-cmdline-ask.service - dracut ask for additional cmdline parameters... May 8 00:45:51.026765 systemd[1]: Finished dracut-cmdline-ask.service - dracut ask for additional cmdline parameters. 
May 8 00:45:51.250449 kernel: ata3: SATA link up 1.5 Gbps (SStatus 113 SControl 300) May 8 00:45:51.250558 kernel: ata5: SATA link down (SStatus 0 SControl 300) May 8 00:45:51.250592 kernel: ata6: SATA link down (SStatus 0 SControl 300) May 8 00:45:51.250605 kernel: ata1: SATA link down (SStatus 0 SControl 300) May 8 00:45:51.252191 kernel: ata3.00: ATAPI: QEMU DVD-ROM, 2.5+, max UDMA/100 May 8 00:45:51.252291 kernel: ata3.00: applying bridge limits May 8 00:45:51.253177 kernel: ata2: SATA link down (SStatus 0 SControl 300) May 8 00:45:51.254180 kernel: ata4: SATA link down (SStatus 0 SControl 300) May 8 00:45:51.255179 kernel: ata3.00: configured for UDMA/100 May 8 00:45:51.255194 kernel: scsi 2:0:0:0: CD-ROM QEMU QEMU DVD-ROM 2.5+ PQ: 0 ANSI: 5 May 8 00:45:51.299192 kernel: sr 2:0:0:0: [sr0] scsi3-mmc drive: 4x/4x cd/rw xa/form2 tray May 8 00:45:51.313223 kernel: cdrom: Uniform CD-ROM driver Revision: 3.20 May 8 00:45:51.313245 kernel: sr 2:0:0:0: Attached scsi CD-ROM sr0 May 8 00:45:52.015839 disk-uuid[572]: The operation has completed successfully. May 8 00:45:52.017512 kernel: vda: vda1 vda2 vda3 vda4 vda6 vda7 vda9 May 8 00:45:52.047266 systemd[1]: disk-uuid.service: Deactivated successfully. May 8 00:45:52.047416 systemd[1]: Finished disk-uuid.service - Generate new UUID for disk GPT if necessary. May 8 00:45:52.079400 systemd[1]: Starting verity-setup.service - Verity Setup for /dev/mapper/usr... May 8 00:45:52.083073 sh[599]: Success May 8 00:45:52.097182 kernel: device-mapper: verity: sha256 using implementation "sha256-ni" May 8 00:45:52.130961 systemd[1]: Found device dev-mapper-usr.device - /dev/mapper/usr. May 8 00:45:52.140966 systemd[1]: Mounting sysusr-usr.mount - /sysusr/usr... May 8 00:45:52.143703 systemd[1]: Finished verity-setup.service - Verity Setup for /dev/mapper/usr. May 8 00:45:52.155170 kernel: BTRFS info (device dm-0): first mount of filesystem 28014d97-e6d7-4db4-b1d9-76a980e09972 May 8 00:45:52.155224 kernel: BTRFS info (device dm-0): using crc32c (crc32c-intel) checksum algorithm May 8 00:45:52.155235 kernel: BTRFS warning (device dm-0): 'nologreplay' is deprecated, use 'rescue=nologreplay' instead May 8 00:45:52.156258 kernel: BTRFS info (device dm-0): disabling log replay at mount time May 8 00:45:52.157734 kernel: BTRFS info (device dm-0): using free space tree May 8 00:45:52.162329 systemd[1]: Mounted sysusr-usr.mount - /sysusr/usr. May 8 00:45:52.165030 systemd[1]: afterburn-network-kargs.service - Afterburn Initrd Setup Network Kernel Arguments was skipped because no trigger condition checks were met. May 8 00:45:52.178376 systemd[1]: Starting ignition-setup.service - Ignition (setup)... May 8 00:45:52.181091 systemd[1]: Starting parse-ip-for-networkd.service - Write systemd-networkd units from cmdline... May 8 00:45:52.189331 kernel: BTRFS info (device vda6): first mount of filesystem a884989d-7a9b-4fbd-878f-8ac586ff8595 May 8 00:45:52.189366 kernel: BTRFS info (device vda6): using crc32c (crc32c-intel) checksum algorithm May 8 00:45:52.189379 kernel: BTRFS info (device vda6): using free space tree May 8 00:45:52.193179 kernel: BTRFS info (device vda6): auto enabling async discard May 8 00:45:52.202469 systemd[1]: mnt-oem.mount: Deactivated successfully. May 8 00:45:52.204355 kernel: BTRFS info (device vda6): last unmount of filesystem a884989d-7a9b-4fbd-878f-8ac586ff8595 May 8 00:45:52.215071 systemd[1]: Finished ignition-setup.service - Ignition (setup). 
May 8 00:45:52.223321 systemd[1]: Starting ignition-fetch-offline.service - Ignition (fetch-offline)... May 8 00:45:52.278591 ignition[693]: Ignition 2.19.0 May 8 00:45:52.278606 ignition[693]: Stage: fetch-offline May 8 00:45:52.278644 ignition[693]: no configs at "/usr/lib/ignition/base.d" May 8 00:45:52.278654 ignition[693]: no config dir at "/usr/lib/ignition/base.platform.d/qemu" May 8 00:45:52.278765 ignition[693]: parsed url from cmdline: "" May 8 00:45:52.278769 ignition[693]: no config URL provided May 8 00:45:52.278776 ignition[693]: reading system config file "/usr/lib/ignition/user.ign" May 8 00:45:52.278788 ignition[693]: no config at "/usr/lib/ignition/user.ign" May 8 00:45:52.278819 ignition[693]: op(1): [started] loading QEMU firmware config module May 8 00:45:52.278826 ignition[693]: op(1): executing: "modprobe" "qemu_fw_cfg" May 8 00:45:52.286165 ignition[693]: op(1): [finished] loading QEMU firmware config module May 8 00:45:52.288691 ignition[693]: parsing config with SHA512: f8b00d1454c5b2392a1db52070063ab8c23949e606c351f562587316f8bb5a075b2c649b1542c5736062b00637ace1cbde5770193044aa5cb0b6e21b9a41c729 May 8 00:45:52.291370 unknown[693]: fetched base config from "system" May 8 00:45:52.291381 unknown[693]: fetched user config from "qemu" May 8 00:45:52.291621 ignition[693]: fetch-offline: fetch-offline passed May 8 00:45:52.291680 ignition[693]: Ignition finished successfully May 8 00:45:52.294575 systemd[1]: Finished ignition-fetch-offline.service - Ignition (fetch-offline). May 8 00:45:52.313040 systemd[1]: Finished parse-ip-for-networkd.service - Write systemd-networkd units from cmdline. May 8 00:45:52.325472 systemd[1]: Starting systemd-networkd.service - Network Configuration... May 8 00:45:52.347442 systemd-networkd[788]: lo: Link UP May 8 00:45:52.347460 systemd-networkd[788]: lo: Gained carrier May 8 00:45:52.349055 systemd-networkd[788]: Enumeration completed May 8 00:45:52.349439 systemd-networkd[788]: eth0: found matching network '/usr/lib/systemd/network/zz-default.network', based on potentially unpredictable interface name. May 8 00:45:52.349452 systemd-networkd[788]: eth0: Configuring with /usr/lib/systemd/network/zz-default.network. May 8 00:45:52.350400 systemd-networkd[788]: eth0: Link UP May 8 00:45:52.350403 systemd-networkd[788]: eth0: Gained carrier May 8 00:45:52.350410 systemd-networkd[788]: eth0: found matching network '/usr/lib/systemd/network/zz-default.network', based on potentially unpredictable interface name. May 8 00:45:52.350641 systemd[1]: Started systemd-networkd.service - Network Configuration. May 8 00:45:52.360101 systemd[1]: Reached target network.target - Network. May 8 00:45:52.362161 systemd[1]: ignition-fetch.service - Ignition (fetch) was skipped because of an unmet condition check (ConditionPathExists=!/run/ignition.json). May 8 00:45:52.370202 systemd-networkd[788]: eth0: DHCPv4 address 10.0.0.140/16, gateway 10.0.0.1 acquired from 10.0.0.1 May 8 00:45:52.373319 systemd[1]: Starting ignition-kargs.service - Ignition (kargs)... May 8 00:45:52.387845 ignition[790]: Ignition 2.19.0 May 8 00:45:52.387859 ignition[790]: Stage: kargs May 8 00:45:52.388055 ignition[790]: no configs at "/usr/lib/ignition/base.d" May 8 00:45:52.388069 ignition[790]: no config dir at "/usr/lib/ignition/base.platform.d/qemu" May 8 00:45:52.388924 ignition[790]: kargs: kargs passed May 8 00:45:52.388973 ignition[790]: Ignition finished successfully May 8 00:45:52.395893 systemd[1]: Finished ignition-kargs.service - Ignition (kargs). 
May 8 00:45:52.408490 systemd[1]: Starting ignition-disks.service - Ignition (disks)... May 8 00:45:52.421773 ignition[799]: Ignition 2.19.0 May 8 00:45:52.421785 ignition[799]: Stage: disks May 8 00:45:52.421977 ignition[799]: no configs at "/usr/lib/ignition/base.d" May 8 00:45:52.421995 ignition[799]: no config dir at "/usr/lib/ignition/base.platform.d/qemu" May 8 00:45:52.425827 ignition[799]: disks: disks passed May 8 00:45:52.426515 ignition[799]: Ignition finished successfully May 8 00:45:52.429712 systemd[1]: Finished ignition-disks.service - Ignition (disks). May 8 00:45:52.431985 systemd[1]: Reached target initrd-root-device.target - Initrd Root Device. May 8 00:45:52.434329 systemd[1]: Reached target local-fs-pre.target - Preparation for Local File Systems. May 8 00:45:52.436739 systemd[1]: Reached target local-fs.target - Local File Systems. May 8 00:45:52.438867 systemd[1]: Reached target sysinit.target - System Initialization. May 8 00:45:52.440917 systemd[1]: Reached target basic.target - Basic System. May 8 00:45:52.456280 systemd[1]: Starting systemd-fsck-root.service - File System Check on /dev/disk/by-label/ROOT... May 8 00:45:52.468707 systemd-resolved[234]: Detected conflict on linux IN A 10.0.0.140 May 8 00:45:52.468722 systemd-resolved[234]: Hostname conflict, changing published hostname from 'linux' to 'linux6'. May 8 00:45:52.470998 systemd-fsck[809]: ROOT: clean, 14/553520 files, 52654/553472 blocks May 8 00:45:52.478769 systemd[1]: Finished systemd-fsck-root.service - File System Check on /dev/disk/by-label/ROOT. May 8 00:45:52.496239 systemd[1]: Mounting sysroot.mount - /sysroot... May 8 00:45:52.627194 kernel: EXT4-fs (vda9): mounted filesystem 36960c89-ba45-4808-a41c-bf61ce9470a3 r/w with ordered data mode. Quota mode: none. May 8 00:45:52.628032 systemd[1]: Mounted sysroot.mount - /sysroot. May 8 00:45:52.628772 systemd[1]: Reached target initrd-root-fs.target - Initrd Root File System. May 8 00:45:52.639230 systemd[1]: Mounting sysroot-oem.mount - /sysroot/oem... May 8 00:45:52.658162 systemd[1]: Mounting sysroot-usr.mount - /sysroot/usr... May 8 00:45:52.660088 systemd[1]: flatcar-metadata-hostname.service - Flatcar Metadata Hostname Agent was skipped because no trigger condition checks were met. May 8 00:45:52.660140 systemd[1]: ignition-remount-sysroot.service - Remount /sysroot read-write for Ignition was skipped because of an unmet condition check (ConditionPathIsReadWrite=!/sysroot). May 8 00:45:52.660193 systemd[1]: Reached target ignition-diskful.target - Ignition Boot Disk Setup. May 8 00:45:52.670192 systemd[1]: Mounted sysroot-usr.mount - /sysroot/usr. May 8 00:45:52.675625 kernel: BTRFS: device label OEM devid 1 transid 13 /dev/vda6 scanned by mount (817) May 8 00:45:52.675653 kernel: BTRFS info (device vda6): first mount of filesystem a884989d-7a9b-4fbd-878f-8ac586ff8595 May 8 00:45:52.675668 kernel: BTRFS info (device vda6): using crc32c (crc32c-intel) checksum algorithm May 8 00:45:52.675681 kernel: BTRFS info (device vda6): using free space tree May 8 00:45:52.676688 systemd[1]: Starting initrd-setup-root.service - Root filesystem setup... May 8 00:45:52.678814 kernel: BTRFS info (device vda6): auto enabling async discard May 8 00:45:52.681091 systemd[1]: Mounted sysroot-oem.mount - /sysroot/oem. 
May 8 00:45:52.728674 initrd-setup-root[841]: cut: /sysroot/etc/passwd: No such file or directory May 8 00:45:52.734140 initrd-setup-root[848]: cut: /sysroot/etc/group: No such file or directory May 8 00:45:52.739923 initrd-setup-root[855]: cut: /sysroot/etc/shadow: No such file or directory May 8 00:45:52.744803 initrd-setup-root[862]: cut: /sysroot/etc/gshadow: No such file or directory May 8 00:45:52.844599 systemd[1]: Finished initrd-setup-root.service - Root filesystem setup. May 8 00:45:52.855251 systemd[1]: Starting ignition-mount.service - Ignition (mount)... May 8 00:45:52.858065 systemd[1]: Starting sysroot-boot.service - /sysroot/boot... May 8 00:45:52.867190 kernel: BTRFS info (device vda6): last unmount of filesystem a884989d-7a9b-4fbd-878f-8ac586ff8595 May 8 00:45:52.883570 systemd[1]: Finished sysroot-boot.service - /sysroot/boot. May 8 00:45:52.895039 ignition[932]: INFO : Ignition 2.19.0 May 8 00:45:52.895039 ignition[932]: INFO : Stage: mount May 8 00:45:52.897109 ignition[932]: INFO : no configs at "/usr/lib/ignition/base.d" May 8 00:45:52.897109 ignition[932]: INFO : no config dir at "/usr/lib/ignition/base.platform.d/qemu" May 8 00:45:52.897109 ignition[932]: INFO : mount: mount passed May 8 00:45:52.897109 ignition[932]: INFO : Ignition finished successfully May 8 00:45:52.899443 systemd[1]: Finished ignition-mount.service - Ignition (mount). May 8 00:45:52.909322 systemd[1]: Starting ignition-files.service - Ignition (files)... May 8 00:45:53.155029 systemd[1]: sysroot-oem.mount: Deactivated successfully. May 8 00:45:53.167386 systemd[1]: Mounting sysroot-oem.mount - /sysroot/oem... May 8 00:45:53.175172 kernel: BTRFS: device label OEM devid 1 transid 14 /dev/vda6 scanned by mount (944) May 8 00:45:53.175237 kernel: BTRFS info (device vda6): first mount of filesystem a884989d-7a9b-4fbd-878f-8ac586ff8595 May 8 00:45:53.177237 kernel: BTRFS info (device vda6): using crc32c (crc32c-intel) checksum algorithm May 8 00:45:53.177254 kernel: BTRFS info (device vda6): using free space tree May 8 00:45:53.181185 kernel: BTRFS info (device vda6): auto enabling async discard May 8 00:45:53.182764 systemd[1]: Mounted sysroot-oem.mount - /sysroot/oem. 
May 8 00:45:53.208952 ignition[962]: INFO : Ignition 2.19.0 May 8 00:45:53.208952 ignition[962]: INFO : Stage: files May 8 00:45:53.211044 ignition[962]: INFO : no configs at "/usr/lib/ignition/base.d" May 8 00:45:53.211044 ignition[962]: INFO : no config dir at "/usr/lib/ignition/base.platform.d/qemu" May 8 00:45:53.211044 ignition[962]: DEBUG : files: compiled without relabeling support, skipping May 8 00:45:53.215551 ignition[962]: INFO : files: ensureUsers: op(1): [started] creating or modifying user "core" May 8 00:45:53.215551 ignition[962]: DEBUG : files: ensureUsers: op(1): executing: "usermod" "--root" "/sysroot" "core" May 8 00:45:53.215551 ignition[962]: INFO : files: ensureUsers: op(1): [finished] creating or modifying user "core" May 8 00:45:53.215551 ignition[962]: INFO : files: ensureUsers: op(2): [started] adding ssh keys to user "core" May 8 00:45:53.215551 ignition[962]: INFO : files: ensureUsers: op(2): [finished] adding ssh keys to user "core" May 8 00:45:53.215005 unknown[962]: wrote ssh authorized keys file for user: core May 8 00:45:53.223986 ignition[962]: INFO : files: createFilesystemsFiles: createFiles: op(3): [started] writing file "/sysroot/home/core/install.sh" May 8 00:45:53.223986 ignition[962]: INFO : files: createFilesystemsFiles: createFiles: op(3): [finished] writing file "/sysroot/home/core/install.sh" May 8 00:45:53.223986 ignition[962]: INFO : files: createFilesystemsFiles: createFiles: op(4): [started] writing file "/sysroot/etc/flatcar/update.conf" May 8 00:45:53.223986 ignition[962]: INFO : files: createFilesystemsFiles: createFiles: op(4): [finished] writing file "/sysroot/etc/flatcar/update.conf" May 8 00:45:53.223986 ignition[962]: INFO : files: createFilesystemsFiles: createFiles: op(5): [started] writing link "/sysroot/etc/extensions/kubernetes.raw" -> "/opt/extensions/kubernetes/kubernetes-v1.32.0-x86-64.raw" May 8 00:45:53.223986 ignition[962]: INFO : files: createFilesystemsFiles: createFiles: op(5): [finished] writing link "/sysroot/etc/extensions/kubernetes.raw" -> "/opt/extensions/kubernetes/kubernetes-v1.32.0-x86-64.raw" May 8 00:45:53.223986 ignition[962]: INFO : files: createFilesystemsFiles: createFiles: op(6): [started] writing file "/sysroot/opt/extensions/kubernetes/kubernetes-v1.32.0-x86-64.raw" May 8 00:45:53.223986 ignition[962]: INFO : files: createFilesystemsFiles: createFiles: op(6): GET https://github.com/flatcar/sysext-bakery/releases/download/latest/kubernetes-v1.32.0-x86-64.raw: attempt #1 May 8 00:45:53.694188 ignition[962]: INFO : files: createFilesystemsFiles: createFiles: op(6): GET result: OK May 8 00:45:54.130659 systemd-networkd[788]: eth0: Gained IPv6LL May 8 00:45:54.132858 ignition[962]: INFO : files: createFilesystemsFiles: createFiles: op(6): [finished] writing file "/sysroot/opt/extensions/kubernetes/kubernetes-v1.32.0-x86-64.raw" May 8 00:45:54.132858 ignition[962]: INFO : files: op(7): [started] processing unit "coreos-metadata.service" May 8 00:45:54.132858 ignition[962]: INFO : files: op(7): op(8): [started] writing unit "coreos-metadata.service" at "/sysroot/etc/systemd/system/coreos-metadata.service" May 8 00:45:54.138387 ignition[962]: INFO : files: op(7): op(8): [finished] writing unit "coreos-metadata.service" at "/sysroot/etc/systemd/system/coreos-metadata.service" May 8 00:45:54.138387 ignition[962]: INFO : files: op(7): [finished] processing unit "coreos-metadata.service" May 8 00:45:54.138387 ignition[962]: INFO : files: op(9): [started] setting preset to disabled for "coreos-metadata.service" 
May 8 00:45:54.156326 ignition[962]: INFO : files: op(9): op(a): [started] removing enablement symlink(s) for "coreos-metadata.service" May 8 00:45:54.163045 ignition[962]: INFO : files: op(9): op(a): [finished] removing enablement symlink(s) for "coreos-metadata.service" May 8 00:45:54.164772 ignition[962]: INFO : files: op(9): [finished] setting preset to disabled for "coreos-metadata.service" May 8 00:45:54.164772 ignition[962]: INFO : files: createResultFile: createFiles: op(b): [started] writing file "/sysroot/etc/.ignition-result.json" May 8 00:45:54.164772 ignition[962]: INFO : files: createResultFile: createFiles: op(b): [finished] writing file "/sysroot/etc/.ignition-result.json" May 8 00:45:54.164772 ignition[962]: INFO : files: files passed May 8 00:45:54.164772 ignition[962]: INFO : Ignition finished successfully May 8 00:45:54.173451 systemd[1]: Finished ignition-files.service - Ignition (files). May 8 00:45:54.183398 systemd[1]: Starting ignition-quench.service - Ignition (record completion)... May 8 00:45:54.185869 systemd[1]: Starting initrd-setup-root-after-ignition.service - Root filesystem completion... May 8 00:45:54.188390 systemd[1]: ignition-quench.service: Deactivated successfully. May 8 00:45:54.188540 systemd[1]: Finished ignition-quench.service - Ignition (record completion). May 8 00:45:54.202020 initrd-setup-root-after-ignition[990]: grep: /sysroot/oem/oem-release: No such file or directory May 8 00:45:54.206333 initrd-setup-root-after-ignition[992]: grep: /sysroot/etc/flatcar/enabled-sysext.conf: No such file or directory May 8 00:45:54.206333 initrd-setup-root-after-ignition[992]: grep: /sysroot/usr/share/flatcar/enabled-sysext.conf: No such file or directory May 8 00:45:54.209476 initrd-setup-root-after-ignition[996]: grep: /sysroot/etc/flatcar/enabled-sysext.conf: No such file or directory May 8 00:45:54.213719 systemd[1]: Finished initrd-setup-root-after-ignition.service - Root filesystem completion. May 8 00:45:54.215279 systemd[1]: Reached target ignition-complete.target - Ignition Complete. May 8 00:45:54.225395 systemd[1]: Starting initrd-parse-etc.service - Mountpoints Configured in the Real Root... May 8 00:45:54.257497 systemd[1]: initrd-parse-etc.service: Deactivated successfully. May 8 00:45:54.257631 systemd[1]: Finished initrd-parse-etc.service - Mountpoints Configured in the Real Root. May 8 00:45:54.258831 systemd[1]: Reached target initrd-fs.target - Initrd File Systems. May 8 00:45:54.261017 systemd[1]: Reached target initrd.target - Initrd Default Target. May 8 00:45:54.263968 systemd[1]: dracut-mount.service - dracut mount hook was skipped because no trigger condition checks were met. May 8 00:45:54.264802 systemd[1]: Starting dracut-pre-pivot.service - dracut pre-pivot and cleanup hook... May 8 00:45:54.284006 systemd[1]: Finished dracut-pre-pivot.service - dracut pre-pivot and cleanup hook. May 8 00:45:54.295394 systemd[1]: Starting initrd-cleanup.service - Cleaning Up and Shutting Down Daemons... May 8 00:45:54.306979 systemd[1]: Stopped target nss-lookup.target - Host and Network Name Lookups. May 8 00:45:54.308289 systemd[1]: Stopped target remote-cryptsetup.target - Remote Encrypted Volumes. May 8 00:45:54.308575 systemd[1]: Stopped target timers.target - Timer Units. May 8 00:45:54.308904 systemd[1]: dracut-pre-pivot.service: Deactivated successfully. May 8 00:45:54.309020 systemd[1]: Stopped dracut-pre-pivot.service - dracut pre-pivot and cleanup hook. 
May 8 00:45:54.316466 systemd[1]: Stopped target initrd.target - Initrd Default Target. May 8 00:45:54.316601 systemd[1]: Stopped target basic.target - Basic System. May 8 00:45:54.318547 systemd[1]: Stopped target ignition-complete.target - Ignition Complete. May 8 00:45:54.318889 systemd[1]: Stopped target ignition-diskful.target - Ignition Boot Disk Setup. May 8 00:45:54.319421 systemd[1]: Stopped target initrd-root-device.target - Initrd Root Device. May 8 00:45:54.325521 systemd[1]: Stopped target remote-fs.target - Remote File Systems. May 8 00:45:54.326674 systemd[1]: Stopped target remote-fs-pre.target - Preparation for Remote File Systems. May 8 00:45:54.327053 systemd[1]: Stopped target sysinit.target - System Initialization. May 8 00:45:54.327582 systemd[1]: Stopped target local-fs.target - Local File Systems. May 8 00:45:54.327916 systemd[1]: Stopped target swap.target - Swaps. May 8 00:45:54.328402 systemd[1]: dracut-pre-mount.service: Deactivated successfully. May 8 00:45:54.328513 systemd[1]: Stopped dracut-pre-mount.service - dracut pre-mount hook. May 8 00:45:54.339067 systemd[1]: Stopped target cryptsetup.target - Local Encrypted Volumes. May 8 00:45:54.339268 systemd[1]: Stopped target cryptsetup-pre.target - Local Encrypted Volumes (Pre). May 8 00:45:54.339707 systemd[1]: clevis-luks-askpass.path: Deactivated successfully. May 8 00:45:54.339801 systemd[1]: Stopped clevis-luks-askpass.path - Forward Password Requests to Clevis Directory Watch. May 8 00:45:54.340045 systemd[1]: dracut-initqueue.service: Deactivated successfully. May 8 00:45:54.340172 systemd[1]: Stopped dracut-initqueue.service - dracut initqueue hook. May 8 00:45:54.340931 systemd[1]: ignition-fetch-offline.service: Deactivated successfully. May 8 00:45:54.341042 systemd[1]: Stopped ignition-fetch-offline.service - Ignition (fetch-offline). May 8 00:45:54.349772 systemd[1]: Stopped target paths.target - Path Units. May 8 00:45:54.350771 systemd[1]: systemd-ask-password-console.path: Deactivated successfully. May 8 00:45:54.356198 systemd[1]: Stopped systemd-ask-password-console.path - Dispatch Password Requests to Console Directory Watch. May 8 00:45:54.356344 systemd[1]: Stopped target slices.target - Slice Units. May 8 00:45:54.358972 systemd[1]: Stopped target sockets.target - Socket Units. May 8 00:45:54.361577 systemd[1]: iscsid.socket: Deactivated successfully. May 8 00:45:54.361666 systemd[1]: Closed iscsid.socket - Open-iSCSI iscsid Socket. May 8 00:45:54.362508 systemd[1]: iscsiuio.socket: Deactivated successfully. May 8 00:45:54.362593 systemd[1]: Closed iscsiuio.socket - Open-iSCSI iscsiuio Socket. May 8 00:45:54.364225 systemd[1]: initrd-setup-root-after-ignition.service: Deactivated successfully. May 8 00:45:54.364329 systemd[1]: Stopped initrd-setup-root-after-ignition.service - Root filesystem completion. May 8 00:45:54.364736 systemd[1]: ignition-files.service: Deactivated successfully. May 8 00:45:54.364837 systemd[1]: Stopped ignition-files.service - Ignition (files). May 8 00:45:54.384278 systemd[1]: Stopping ignition-mount.service - Ignition (mount)... May 8 00:45:54.385265 systemd[1]: kmod-static-nodes.service: Deactivated successfully. May 8 00:45:54.385379 systemd[1]: Stopped kmod-static-nodes.service - Create List of Static Device Nodes. May 8 00:45:54.388242 systemd[1]: Stopping sysroot-boot.service - /sysroot/boot... May 8 00:45:54.388598 systemd[1]: systemd-udev-trigger.service: Deactivated successfully. 
May 8 00:45:54.388705 systemd[1]: Stopped systemd-udev-trigger.service - Coldplug All udev Devices. May 8 00:45:54.390627 systemd[1]: dracut-pre-trigger.service: Deactivated successfully. May 8 00:45:54.390723 systemd[1]: Stopped dracut-pre-trigger.service - dracut pre-trigger hook. May 8 00:45:54.398627 systemd[1]: initrd-cleanup.service: Deactivated successfully. May 8 00:45:54.398736 systemd[1]: Finished initrd-cleanup.service - Cleaning Up and Shutting Down Daemons. May 8 00:45:54.411983 ignition[1016]: INFO : Ignition 2.19.0 May 8 00:45:54.411983 ignition[1016]: INFO : Stage: umount May 8 00:45:54.413911 ignition[1016]: INFO : no configs at "/usr/lib/ignition/base.d" May 8 00:45:54.413911 ignition[1016]: INFO : no config dir at "/usr/lib/ignition/base.platform.d/qemu" May 8 00:45:54.416328 ignition[1016]: INFO : umount: umount passed May 8 00:45:54.417212 ignition[1016]: INFO : Ignition finished successfully May 8 00:45:54.417137 systemd[1]: sysroot-boot.mount: Deactivated successfully. May 8 00:45:54.419267 systemd[1]: ignition-mount.service: Deactivated successfully. May 8 00:45:54.419383 systemd[1]: Stopped ignition-mount.service - Ignition (mount). May 8 00:45:54.421480 systemd[1]: Stopped target network.target - Network. May 8 00:45:54.423352 systemd[1]: ignition-disks.service: Deactivated successfully. May 8 00:45:54.423413 systemd[1]: Stopped ignition-disks.service - Ignition (disks). May 8 00:45:54.424456 systemd[1]: ignition-kargs.service: Deactivated successfully. May 8 00:45:54.424501 systemd[1]: Stopped ignition-kargs.service - Ignition (kargs). May 8 00:45:54.424587 systemd[1]: ignition-setup.service: Deactivated successfully. May 8 00:45:54.424627 systemd[1]: Stopped ignition-setup.service - Ignition (setup). May 8 00:45:54.425363 systemd[1]: ignition-setup-pre.service: Deactivated successfully. May 8 00:45:54.425413 systemd[1]: Stopped ignition-setup-pre.service - Ignition env setup. May 8 00:45:54.425823 systemd[1]: Stopping systemd-networkd.service - Network Configuration... May 8 00:45:54.426105 systemd[1]: Stopping systemd-resolved.service - Network Name Resolution... May 8 00:45:54.435208 systemd-networkd[788]: eth0: DHCPv6 lease lost May 8 00:45:54.436336 systemd[1]: systemd-resolved.service: Deactivated successfully. May 8 00:45:54.436499 systemd[1]: Stopped systemd-resolved.service - Network Name Resolution. May 8 00:45:54.438510 systemd[1]: systemd-networkd.service: Deactivated successfully. May 8 00:45:54.438641 systemd[1]: Stopped systemd-networkd.service - Network Configuration. May 8 00:45:54.439431 systemd[1]: systemd-networkd.socket: Deactivated successfully. May 8 00:45:54.439489 systemd[1]: Closed systemd-networkd.socket - Network Service Netlink Socket. May 8 00:45:54.445363 systemd[1]: Stopping network-cleanup.service - Network Cleanup... May 8 00:45:54.446827 systemd[1]: parse-ip-for-networkd.service: Deactivated successfully. May 8 00:45:54.446906 systemd[1]: Stopped parse-ip-for-networkd.service - Write systemd-networkd units from cmdline. May 8 00:45:54.449222 systemd[1]: systemd-sysctl.service: Deactivated successfully. May 8 00:45:54.449272 systemd[1]: Stopped systemd-sysctl.service - Apply Kernel Variables. May 8 00:45:54.451371 systemd[1]: systemd-modules-load.service: Deactivated successfully. May 8 00:45:54.451433 systemd[1]: Stopped systemd-modules-load.service - Load Kernel Modules. May 8 00:45:54.453759 systemd[1]: systemd-tmpfiles-setup.service: Deactivated successfully. 
May 8 00:45:54.453824 systemd[1]: Stopped systemd-tmpfiles-setup.service - Create System Files and Directories. May 8 00:45:54.456692 systemd[1]: Stopping systemd-udevd.service - Rule-based Manager for Device Events and Files... May 8 00:45:54.469110 systemd[1]: network-cleanup.service: Deactivated successfully. May 8 00:45:54.469257 systemd[1]: Stopped network-cleanup.service - Network Cleanup. May 8 00:45:54.471439 systemd[1]: systemd-udevd.service: Deactivated successfully. May 8 00:45:54.471618 systemd[1]: Stopped systemd-udevd.service - Rule-based Manager for Device Events and Files. May 8 00:45:54.475002 systemd[1]: systemd-udevd-control.socket: Deactivated successfully. May 8 00:45:54.475072 systemd[1]: Closed systemd-udevd-control.socket - udev Control Socket. May 8 00:45:54.477128 systemd[1]: systemd-udevd-kernel.socket: Deactivated successfully. May 8 00:45:54.477183 systemd[1]: Closed systemd-udevd-kernel.socket - udev Kernel Socket. May 8 00:45:54.479107 systemd[1]: dracut-pre-udev.service: Deactivated successfully. May 8 00:45:54.479170 systemd[1]: Stopped dracut-pre-udev.service - dracut pre-udev hook. May 8 00:45:54.481603 systemd[1]: dracut-cmdline.service: Deactivated successfully. May 8 00:45:54.481649 systemd[1]: Stopped dracut-cmdline.service - dracut cmdline hook. May 8 00:45:54.483676 systemd[1]: dracut-cmdline-ask.service: Deactivated successfully. May 8 00:45:54.483721 systemd[1]: Stopped dracut-cmdline-ask.service - dracut ask for additional cmdline parameters. May 8 00:45:54.498361 systemd[1]: Starting initrd-udevadm-cleanup-db.service - Cleanup udev Database... May 8 00:45:54.500690 systemd[1]: systemd-tmpfiles-setup-dev.service: Deactivated successfully. May 8 00:45:54.500761 systemd[1]: Stopped systemd-tmpfiles-setup-dev.service - Create Static Device Nodes in /dev. May 8 00:45:54.503143 systemd[1]: systemd-vconsole-setup.service: Deactivated successfully. May 8 00:45:54.503206 systemd[1]: Stopped systemd-vconsole-setup.service - Virtual Console Setup. May 8 00:45:54.506926 systemd[1]: initrd-udevadm-cleanup-db.service: Deactivated successfully. May 8 00:45:54.507049 systemd[1]: Finished initrd-udevadm-cleanup-db.service - Cleanup udev Database. May 8 00:45:54.724969 systemd[1]: sysroot-boot.service: Deactivated successfully. May 8 00:45:54.725123 systemd[1]: Stopped sysroot-boot.service - /sysroot/boot. May 8 00:45:54.727744 systemd[1]: Reached target initrd-switch-root.target - Switch Root. May 8 00:45:54.729029 systemd[1]: initrd-setup-root.service: Deactivated successfully. May 8 00:45:54.729094 systemd[1]: Stopped initrd-setup-root.service - Root filesystem setup. May 8 00:45:54.739294 systemd[1]: Starting initrd-switch-root.service - Switch Root... May 8 00:45:54.746784 systemd[1]: Switching root. May 8 00:45:54.779173 systemd-journald[193]: Received SIGTERM from PID 1 (systemd). 
May 8 00:45:54.779247 systemd-journald[193]: Journal stopped May 8 00:45:55.968893 kernel: SELinux: policy capability network_peer_controls=1 May 8 00:45:55.972173 kernel: SELinux: policy capability open_perms=1 May 8 00:45:55.972212 kernel: SELinux: policy capability extended_socket_class=1 May 8 00:45:55.972234 kernel: SELinux: policy capability always_check_network=0 May 8 00:45:55.972252 kernel: SELinux: policy capability cgroup_seclabel=1 May 8 00:45:55.972267 kernel: SELinux: policy capability nnp_nosuid_transition=1 May 8 00:45:55.972285 kernel: SELinux: policy capability genfs_seclabel_symlinks=0 May 8 00:45:55.972299 kernel: SELinux: policy capability ioctl_skip_cloexec=0 May 8 00:45:55.972312 kernel: audit: type=1403 audit(1746665155.187:2): auid=4294967295 ses=4294967295 lsm=selinux res=1 May 8 00:45:55.972328 systemd[1]: Successfully loaded SELinux policy in 44.639ms. May 8 00:45:55.972353 systemd[1]: Relabeled /dev, /dev/shm, /run, /sys/fs/cgroup in 16.251ms. May 8 00:45:55.972381 systemd[1]: systemd 255 running in system mode (+PAM +AUDIT +SELINUX -APPARMOR +IMA +SMACK +SECCOMP +GCRYPT -GNUTLS +OPENSSL -ACL +BLKID +CURL +ELFUTILS -FIDO2 +IDN2 -IDN +IPTC +KMOD +LIBCRYPTSETUP +LIBFDISK +PCRE2 -PWQUALITY -P11KIT -QRENCODE +TPM2 +BZIP2 +LZ4 +XZ +ZLIB +ZSTD -BPF_FRAMEWORK -XKBCOMMON +UTMP -SYSVINIT default-hierarchy=unified) May 8 00:45:55.972402 systemd[1]: Detected virtualization kvm. May 8 00:45:55.972416 systemd[1]: Detected architecture x86-64. May 8 00:45:55.972430 systemd[1]: Detected first boot. May 8 00:45:55.972451 systemd[1]: Initializing machine ID from VM UUID. May 8 00:45:55.972466 zram_generator::config[1060]: No configuration found. May 8 00:45:55.972483 systemd[1]: Populated /etc with preset unit settings. May 8 00:45:55.972497 systemd[1]: initrd-switch-root.service: Deactivated successfully. May 8 00:45:55.972514 systemd[1]: Stopped initrd-switch-root.service - Switch Root. May 8 00:45:55.972529 systemd[1]: systemd-journald.service: Scheduled restart job, restart counter is at 1. May 8 00:45:55.972546 systemd[1]: Created slice system-addon\x2dconfig.slice - Slice /system/addon-config. May 8 00:45:55.972565 systemd[1]: Created slice system-addon\x2drun.slice - Slice /system/addon-run. May 8 00:45:55.972580 systemd[1]: Created slice system-getty.slice - Slice /system/getty. May 8 00:45:55.972597 systemd[1]: Created slice system-modprobe.slice - Slice /system/modprobe. May 8 00:45:55.972609 systemd[1]: Created slice system-serial\x2dgetty.slice - Slice /system/serial-getty. May 8 00:45:55.972621 systemd[1]: Created slice system-system\x2dcloudinit.slice - Slice /system/system-cloudinit. May 8 00:45:55.972633 systemd[1]: Created slice system-systemd\x2dfsck.slice - Slice /system/systemd-fsck. May 8 00:45:55.972647 systemd[1]: Created slice user.slice - User and Session Slice. May 8 00:45:55.972658 systemd[1]: Started clevis-luks-askpass.path - Forward Password Requests to Clevis Directory Watch. May 8 00:45:55.972670 systemd[1]: Started systemd-ask-password-console.path - Dispatch Password Requests to Console Directory Watch. May 8 00:45:55.972682 systemd[1]: Started systemd-ask-password-wall.path - Forward Password Requests to Wall Directory Watch. May 8 00:45:55.972694 systemd[1]: Set up automount boot.automount - Boot partition Automount Point. May 8 00:45:55.972707 systemd[1]: Set up automount proc-sys-fs-binfmt_misc.automount - Arbitrary Executable File Formats File System Automount Point. 
May 8 00:45:55.972719 systemd[1]: Expecting device dev-disk-by\x2dlabel-OEM.device - /dev/disk/by-label/OEM... May 8 00:45:55.972730 systemd[1]: Expecting device dev-ttyS0.device - /dev/ttyS0... May 8 00:45:55.972742 systemd[1]: Reached target cryptsetup-pre.target - Local Encrypted Volumes (Pre). May 8 00:45:55.972756 systemd[1]: Stopped target initrd-switch-root.target - Switch Root. May 8 00:45:55.972768 systemd[1]: Stopped target initrd-fs.target - Initrd File Systems. May 8 00:45:55.972779 systemd[1]: Stopped target initrd-root-fs.target - Initrd Root File System. May 8 00:45:55.972791 systemd[1]: Reached target integritysetup.target - Local Integrity Protected Volumes. May 8 00:45:55.972803 systemd[1]: Reached target remote-cryptsetup.target - Remote Encrypted Volumes. May 8 00:45:55.972815 systemd[1]: Reached target remote-fs.target - Remote File Systems. May 8 00:45:55.972827 systemd[1]: Reached target slices.target - Slice Units. May 8 00:45:55.972841 systemd[1]: Reached target swap.target - Swaps. May 8 00:45:55.972852 systemd[1]: Reached target veritysetup.target - Local Verity Protected Volumes. May 8 00:45:55.972864 systemd[1]: Listening on systemd-coredump.socket - Process Core Dump Socket. May 8 00:45:55.972876 systemd[1]: Listening on systemd-networkd.socket - Network Service Netlink Socket. May 8 00:45:55.972888 systemd[1]: Listening on systemd-udevd-control.socket - udev Control Socket. May 8 00:45:55.972899 systemd[1]: Listening on systemd-udevd-kernel.socket - udev Kernel Socket. May 8 00:45:55.972912 systemd[1]: Listening on systemd-userdbd.socket - User Database Manager Socket. May 8 00:45:55.972923 systemd[1]: Mounting dev-hugepages.mount - Huge Pages File System... May 8 00:45:55.972935 systemd[1]: Mounting dev-mqueue.mount - POSIX Message Queue File System... May 8 00:45:55.972946 systemd[1]: Mounting media.mount - External Media Directory... May 8 00:45:55.972961 systemd[1]: proc-xen.mount - /proc/xen was skipped because of an unmet condition check (ConditionVirtualization=xen). May 8 00:45:55.972974 systemd[1]: Mounting sys-kernel-debug.mount - Kernel Debug File System... May 8 00:45:55.972986 systemd[1]: Mounting sys-kernel-tracing.mount - Kernel Trace File System... May 8 00:45:55.972997 systemd[1]: Mounting tmp.mount - Temporary Directory /tmp... May 8 00:45:55.973009 systemd[1]: var-lib-machines.mount - Virtual Machine and Container Storage (Compatibility) was skipped because of an unmet condition check (ConditionPathExists=/var/lib/machines.raw). May 8 00:45:55.973021 systemd[1]: Reached target machines.target - Containers. May 8 00:45:55.973033 systemd[1]: Starting flatcar-tmpfiles.service - Create missing system files... May 8 00:45:55.973045 systemd[1]: ignition-delete-config.service - Ignition (delete config) was skipped because no trigger condition checks were met. May 8 00:45:55.973059 systemd[1]: Starting kmod-static-nodes.service - Create List of Static Device Nodes... May 8 00:45:55.973071 systemd[1]: Starting modprobe@configfs.service - Load Kernel Module configfs... May 8 00:45:55.973082 systemd[1]: Starting modprobe@dm_mod.service - Load Kernel Module dm_mod... May 8 00:45:55.973094 systemd[1]: Starting modprobe@drm.service - Load Kernel Module drm... May 8 00:45:55.973106 systemd[1]: Starting modprobe@efi_pstore.service - Load Kernel Module efi_pstore... May 8 00:45:55.973117 systemd[1]: Starting modprobe@fuse.service - Load Kernel Module fuse... May 8 00:45:55.973129 systemd[1]: Starting modprobe@loop.service - Load Kernel Module loop... 
May 8 00:45:55.973141 systemd[1]: setup-nsswitch.service - Create /etc/nsswitch.conf was skipped because of an unmet condition check (ConditionPathExists=!/etc/nsswitch.conf). May 8 00:45:55.973196 systemd[1]: systemd-fsck-root.service: Deactivated successfully. May 8 00:45:55.973208 systemd[1]: Stopped systemd-fsck-root.service - File System Check on Root Device. May 8 00:45:55.973220 systemd[1]: systemd-fsck-usr.service: Deactivated successfully. May 8 00:45:55.973232 systemd[1]: Stopped systemd-fsck-usr.service. May 8 00:45:55.973243 systemd[1]: Starting systemd-journald.service - Journal Service... May 8 00:45:55.973255 systemd[1]: Starting systemd-modules-load.service - Load Kernel Modules... May 8 00:45:55.973271 kernel: fuse: init (API version 7.39) May 8 00:45:55.973283 kernel: loop: module loaded May 8 00:45:55.973320 systemd-journald[1123]: Collecting audit messages is disabled. May 8 00:45:55.973350 systemd[1]: Starting systemd-network-generator.service - Generate network units from Kernel command line... May 8 00:45:55.973376 systemd-journald[1123]: Journal started May 8 00:45:55.973402 systemd-journald[1123]: Runtime Journal (/run/log/journal/e89afb47099d4f00beb3db3bb0388d76) is 6.0M, max 48.3M, 42.2M free. May 8 00:45:55.707854 systemd[1]: Queued start job for default target multi-user.target. May 8 00:45:55.725862 systemd[1]: Unnecessary job was removed for dev-vda6.device - /dev/vda6. May 8 00:45:55.726353 systemd[1]: systemd-journald.service: Deactivated successfully. May 8 00:45:55.977379 systemd[1]: Starting systemd-remount-fs.service - Remount Root and Kernel File Systems... May 8 00:45:55.984173 systemd[1]: Starting systemd-udev-trigger.service - Coldplug All udev Devices... May 8 00:45:55.986286 systemd[1]: verity-setup.service: Deactivated successfully. May 8 00:45:55.986316 systemd[1]: Stopped verity-setup.service. May 8 00:45:55.989240 systemd[1]: xenserver-pv-version.service - Set fake PV driver version for XenServer was skipped because of an unmet condition check (ConditionVirtualization=xen). May 8 00:45:55.992165 systemd[1]: Started systemd-journald.service - Journal Service. May 8 00:45:55.993394 systemd[1]: Mounted dev-hugepages.mount - Huge Pages File System. May 8 00:45:55.995060 systemd[1]: Mounted dev-mqueue.mount - POSIX Message Queue File System. May 8 00:45:55.996298 systemd[1]: Mounted media.mount - External Media Directory. May 8 00:45:55.997389 systemd[1]: Mounted sys-kernel-debug.mount - Kernel Debug File System. May 8 00:45:55.998607 systemd[1]: Mounted sys-kernel-tracing.mount - Kernel Trace File System. May 8 00:45:55.999907 systemd[1]: Mounted tmp.mount - Temporary Directory /tmp. May 8 00:45:56.001253 systemd[1]: Finished kmod-static-nodes.service - Create List of Static Device Nodes. May 8 00:45:56.002861 systemd[1]: modprobe@configfs.service: Deactivated successfully. May 8 00:45:56.003033 systemd[1]: Finished modprobe@configfs.service - Load Kernel Module configfs. May 8 00:45:56.004511 systemd[1]: modprobe@dm_mod.service: Deactivated successfully. May 8 00:45:56.004673 systemd[1]: Finished modprobe@dm_mod.service - Load Kernel Module dm_mod. May 8 00:45:56.006143 systemd[1]: Finished flatcar-tmpfiles.service - Create missing system files. May 8 00:45:56.007675 systemd[1]: modprobe@efi_pstore.service: Deactivated successfully. May 8 00:45:56.007843 systemd[1]: Finished modprobe@efi_pstore.service - Load Kernel Module efi_pstore. May 8 00:45:56.009518 systemd[1]: modprobe@fuse.service: Deactivated successfully. 
May 8 00:45:56.009710 systemd[1]: Finished modprobe@fuse.service - Load Kernel Module fuse. May 8 00:45:56.011085 systemd[1]: modprobe@loop.service: Deactivated successfully. May 8 00:45:56.011376 systemd[1]: Finished modprobe@loop.service - Load Kernel Module loop. May 8 00:45:56.012733 systemd[1]: Finished systemd-modules-load.service - Load Kernel Modules. May 8 00:45:56.014391 systemd[1]: Finished systemd-network-generator.service - Generate network units from Kernel command line. May 8 00:45:56.015907 systemd[1]: Finished systemd-remount-fs.service - Remount Root and Kernel File Systems. May 8 00:45:56.027559 systemd[1]: Reached target network-pre.target - Preparation for Network. May 8 00:45:56.033175 kernel: ACPI: bus type drm_connector registered May 8 00:45:56.038472 systemd[1]: Mounting sys-fs-fuse-connections.mount - FUSE Control File System... May 8 00:45:56.041139 systemd[1]: Mounting sys-kernel-config.mount - Kernel Configuration File System... May 8 00:45:56.042342 systemd[1]: remount-root.service - Remount Root File System was skipped because of an unmet condition check (ConditionPathIsReadWrite=!/). May 8 00:45:56.042391 systemd[1]: Reached target local-fs.target - Local File Systems. May 8 00:45:56.044447 systemd[1]: Listening on systemd-sysext.socket - System Extension Image Management (Varlink). May 8 00:45:56.046960 systemd[1]: Starting dracut-shutdown.service - Restore /run/initramfs on shutdown... May 8 00:45:56.049217 systemd[1]: Starting ldconfig.service - Rebuild Dynamic Linker Cache... May 8 00:45:56.050403 systemd[1]: systemd-binfmt.service - Set Up Additional Binary Formats was skipped because no trigger condition checks were met. May 8 00:45:56.052665 systemd[1]: Starting systemd-hwdb-update.service - Rebuild Hardware Database... May 8 00:45:56.056490 systemd[1]: Starting systemd-journal-flush.service - Flush Journal to Persistent Storage... May 8 00:45:56.057806 systemd[1]: systemd-pstore.service - Platform Persistent Storage Archival was skipped because of an unmet condition check (ConditionDirectoryNotEmpty=/sys/fs/pstore). May 8 00:45:56.059453 systemd[1]: Starting systemd-random-seed.service - Load/Save OS Random Seed... May 8 00:45:56.060862 systemd[1]: systemd-repart.service - Repartition Root Disk was skipped because no trigger condition checks were met. May 8 00:45:56.062135 systemd[1]: Starting systemd-sysctl.service - Apply Kernel Variables... May 8 00:45:56.064917 systemd[1]: Starting systemd-sysext.service - Merge System Extension Images into /usr/ and /opt/... May 8 00:45:56.073351 systemd[1]: Starting systemd-sysusers.service - Create System Users... May 8 00:45:56.078723 systemd[1]: modprobe@drm.service: Deactivated successfully. May 8 00:45:56.078957 systemd[1]: Finished modprobe@drm.service - Load Kernel Module drm. May 8 00:45:56.083438 systemd-journald[1123]: Time spent on flushing to /var/log/journal/e89afb47099d4f00beb3db3bb0388d76 is 13.771ms for 979 entries. May 8 00:45:56.083438 systemd-journald[1123]: System Journal (/var/log/journal/e89afb47099d4f00beb3db3bb0388d76) is 8.0M, max 195.6M, 187.6M free. May 8 00:45:56.105037 systemd-journald[1123]: Received client request to flush runtime journal. May 8 00:45:56.105074 kernel: loop0: detected capacity change from 0 to 140768 May 8 00:45:56.080752 systemd[1]: Mounted sys-fs-fuse-connections.mount - FUSE Control File System. May 8 00:45:56.082582 systemd[1]: Mounted sys-kernel-config.mount - Kernel Configuration File System. 
May 8 00:45:56.086142 systemd[1]: Finished dracut-shutdown.service - Restore /run/initramfs on shutdown. May 8 00:45:56.088711 systemd[1]: Finished systemd-random-seed.service - Load/Save OS Random Seed. May 8 00:45:56.094629 systemd[1]: Finished systemd-udev-trigger.service - Coldplug All udev Devices. May 8 00:45:56.104695 systemd[1]: Reached target first-boot-complete.target - First Boot Complete. May 8 00:45:56.113344 systemd[1]: Starting systemd-machine-id-commit.service - Commit a transient machine-id on disk... May 8 00:45:56.118443 systemd[1]: Starting systemd-udev-settle.service - Wait for udev To Complete Device Initialization... May 8 00:45:56.121530 systemd[1]: Finished systemd-journal-flush.service - Flush Journal to Persistent Storage. May 8 00:45:56.123996 systemd[1]: Finished systemd-sysctl.service - Apply Kernel Variables. May 8 00:45:56.132081 kernel: squashfs: version 4.0 (2009/01/31) Phillip Lougher May 8 00:45:56.127295 systemd[1]: Finished systemd-sysusers.service - Create System Users. May 8 00:45:56.135593 systemd[1]: Starting systemd-tmpfiles-setup-dev.service - Create Static Device Nodes in /dev... May 8 00:45:56.139064 udevadm[1188]: systemd-udev-settle.service is deprecated. Please fix lvm2-activation-early.service, lvm2-activation.service not to pull it in. May 8 00:45:56.149673 systemd[1]: etc-machine\x2did.mount: Deactivated successfully. May 8 00:45:56.150523 systemd[1]: Finished systemd-machine-id-commit.service - Commit a transient machine-id on disk. May 8 00:45:56.160848 systemd-tmpfiles[1192]: ACLs are not supported, ignoring. May 8 00:45:56.161407 systemd-tmpfiles[1192]: ACLs are not supported, ignoring. May 8 00:45:56.168376 systemd[1]: Finished systemd-tmpfiles-setup-dev.service - Create Static Device Nodes in /dev. May 8 00:45:56.169243 kernel: loop1: detected capacity change from 0 to 142488 May 8 00:45:56.201720 kernel: loop2: detected capacity change from 0 to 218376 May 8 00:45:56.228181 kernel: loop3: detected capacity change from 0 to 140768 May 8 00:45:56.241210 kernel: loop4: detected capacity change from 0 to 142488 May 8 00:45:56.251231 kernel: loop5: detected capacity change from 0 to 218376 May 8 00:45:56.257106 (sd-merge)[1199]: Using extensions 'containerd-flatcar', 'docker-flatcar', 'kubernetes'. May 8 00:45:56.257915 (sd-merge)[1199]: Merged extensions into '/usr'. May 8 00:45:56.263621 systemd[1]: Reloading requested from client PID 1173 ('systemd-sysext') (unit systemd-sysext.service)... May 8 00:45:56.263640 systemd[1]: Reloading... May 8 00:45:56.322242 zram_generator::config[1228]: No configuration found. May 8 00:45:56.382390 ldconfig[1168]: /sbin/ldconfig: /lib/ld.so.conf is not an ELF file - it has the wrong magic bytes at the start. May 8 00:45:56.449837 systemd[1]: /usr/lib/systemd/system/docker.socket:6: ListenStream= references a path below legacy directory /var/run/, updating /var/run/docker.sock → /run/docker.sock; please update the unit file accordingly. May 8 00:45:56.499852 systemd[1]: Reloading finished in 235 ms. May 8 00:45:56.538303 systemd[1]: Finished ldconfig.service - Rebuild Dynamic Linker Cache. May 8 00:45:56.539882 systemd[1]: Finished systemd-sysext.service - Merge System Extension Images into /usr/ and /opt/. May 8 00:45:56.552359 systemd[1]: Starting ensure-sysext.service... May 8 00:45:56.554418 systemd[1]: Starting systemd-tmpfiles-setup.service - Create System Files and Directories... 
May 8 00:45:56.562240 systemd[1]: Reloading requested from client PID 1262 ('systemctl') (unit ensure-sysext.service)... May 8 00:45:56.562257 systemd[1]: Reloading... May 8 00:45:56.576413 systemd-tmpfiles[1263]: /usr/lib/tmpfiles.d/provision.conf:20: Duplicate line for path "/root", ignoring. May 8 00:45:56.576782 systemd-tmpfiles[1263]: /usr/lib/tmpfiles.d/systemd-flatcar.conf:6: Duplicate line for path "/var/log/journal", ignoring. May 8 00:45:56.577793 systemd-tmpfiles[1263]: /usr/lib/tmpfiles.d/systemd.conf:29: Duplicate line for path "/var/lib/systemd", ignoring. May 8 00:45:56.578089 systemd-tmpfiles[1263]: ACLs are not supported, ignoring. May 8 00:45:56.578189 systemd-tmpfiles[1263]: ACLs are not supported, ignoring. May 8 00:45:56.581634 systemd-tmpfiles[1263]: Detected autofs mount point /boot during canonicalization of boot. May 8 00:45:56.581648 systemd-tmpfiles[1263]: Skipping /boot May 8 00:45:56.595541 systemd-tmpfiles[1263]: Detected autofs mount point /boot during canonicalization of boot. May 8 00:45:56.595648 systemd-tmpfiles[1263]: Skipping /boot May 8 00:45:56.614184 zram_generator::config[1290]: No configuration found. May 8 00:45:56.739460 systemd[1]: /usr/lib/systemd/system/docker.socket:6: ListenStream= references a path below legacy directory /var/run/, updating /var/run/docker.sock → /run/docker.sock; please update the unit file accordingly. May 8 00:45:56.793669 systemd[1]: Reloading finished in 231 ms. May 8 00:45:56.811200 systemd[1]: Finished systemd-hwdb-update.service - Rebuild Hardware Database. May 8 00:45:56.829924 systemd[1]: Finished systemd-tmpfiles-setup.service - Create System Files and Directories. May 8 00:45:56.840596 systemd[1]: Starting audit-rules.service - Load Security Auditing Rules... May 8 00:45:56.843835 systemd[1]: Starting clean-ca-certificates.service - Clean up broken links in /etc/ssl/certs... May 8 00:45:56.846859 systemd[1]: Starting systemd-journal-catalog-update.service - Rebuild Journal Catalog... May 8 00:45:56.851633 systemd[1]: Starting systemd-resolved.service - Network Name Resolution... May 8 00:45:56.856605 systemd[1]: Starting systemd-udevd.service - Rule-based Manager for Device Events and Files... May 8 00:45:56.860291 systemd[1]: Starting systemd-update-utmp.service - Record System Boot/Shutdown in UTMP... May 8 00:45:56.864673 systemd[1]: proc-xen.mount - /proc/xen was skipped because of an unmet condition check (ConditionVirtualization=xen). May 8 00:45:56.864907 systemd[1]: ignition-delete-config.service - Ignition (delete config) was skipped because no trigger condition checks were met. May 8 00:45:56.867832 systemd[1]: Starting modprobe@dm_mod.service - Load Kernel Module dm_mod... May 8 00:45:56.873078 systemd[1]: Starting modprobe@efi_pstore.service - Load Kernel Module efi_pstore... May 8 00:45:56.876627 systemd[1]: Starting modprobe@loop.service - Load Kernel Module loop... May 8 00:45:56.879357 systemd[1]: systemd-binfmt.service - Set Up Additional Binary Formats was skipped because no trigger condition checks were met. May 8 00:45:56.887795 systemd[1]: Starting systemd-userdbd.service - User Database Manager... May 8 00:45:56.890201 systemd[1]: xenserver-pv-version.service - Set fake PV driver version for XenServer was skipped because of an unmet condition check (ConditionVirtualization=xen). May 8 00:45:56.891845 systemd[1]: Finished systemd-journal-catalog-update.service - Rebuild Journal Catalog. May 8 00:45:56.894379 systemd[1]: modprobe@dm_mod.service: Deactivated successfully. 
May 8 00:45:56.894682 systemd[1]: Finished modprobe@dm_mod.service - Load Kernel Module dm_mod. May 8 00:45:56.897868 systemd[1]: modprobe@efi_pstore.service: Deactivated successfully. May 8 00:45:56.898451 systemd[1]: Finished modprobe@efi_pstore.service - Load Kernel Module efi_pstore. May 8 00:45:56.900718 systemd[1]: modprobe@loop.service: Deactivated successfully. May 8 00:45:56.900920 systemd[1]: Finished modprobe@loop.service - Load Kernel Module loop. May 8 00:45:56.912124 systemd[1]: systemd-pstore.service - Platform Persistent Storage Archival was skipped because of an unmet condition check (ConditionDirectoryNotEmpty=/sys/fs/pstore). May 8 00:45:56.912446 systemd[1]: systemd-repart.service - Repartition Root Disk was skipped because no trigger condition checks were met. May 8 00:45:56.913855 augenrules[1356]: No rules May 8 00:45:56.917171 systemd-udevd[1334]: Using default interface naming scheme 'v255'. May 8 00:45:56.922681 systemd[1]: Starting systemd-update-done.service - Update is Completed... May 8 00:45:56.925226 systemd[1]: Finished audit-rules.service - Load Security Auditing Rules. May 8 00:45:56.927476 systemd[1]: Finished systemd-update-utmp.service - Record System Boot/Shutdown in UTMP. May 8 00:45:56.933942 systemd[1]: Started systemd-userdbd.service - User Database Manager. May 8 00:45:56.937706 systemd[1]: Finished systemd-update-done.service - Update is Completed. May 8 00:45:56.943368 systemd[1]: Started systemd-udevd.service - Rule-based Manager for Device Events and Files. May 8 00:45:56.945660 systemd[1]: Finished clean-ca-certificates.service - Clean up broken links in /etc/ssl/certs. May 8 00:45:56.952442 systemd[1]: proc-xen.mount - /proc/xen was skipped because of an unmet condition check (ConditionVirtualization=xen). May 8 00:45:56.952607 systemd[1]: ignition-delete-config.service - Ignition (delete config) was skipped because no trigger condition checks were met. May 8 00:45:56.962729 systemd[1]: Starting modprobe@dm_mod.service - Load Kernel Module dm_mod... May 8 00:45:56.966481 systemd[1]: Starting modprobe@efi_pstore.service - Load Kernel Module efi_pstore... May 8 00:45:56.968980 systemd[1]: Starting modprobe@loop.service - Load Kernel Module loop... May 8 00:45:56.970099 systemd[1]: systemd-binfmt.service - Set Up Additional Binary Formats was skipped because no trigger condition checks were met. May 8 00:45:56.975659 systemd[1]: Starting systemd-networkd.service - Network Configuration... May 8 00:45:56.976844 systemd[1]: update-ca-certificates.service - Update CA bundle at /etc/ssl/certs/ca-certificates.crt was skipped because of an unmet condition check (ConditionPathIsSymbolicLink=!/etc/ssl/certs/ca-certificates.crt). May 8 00:45:56.976973 systemd[1]: xenserver-pv-version.service - Set fake PV driver version for XenServer was skipped because of an unmet condition check (ConditionVirtualization=xen). May 8 00:45:56.978312 systemd[1]: modprobe@dm_mod.service: Deactivated successfully. May 8 00:45:56.978646 systemd[1]: Finished modprobe@dm_mod.service - Load Kernel Module dm_mod. May 8 00:45:56.986916 systemd[1]: modprobe@loop.service: Deactivated successfully. May 8 00:45:56.987715 systemd[1]: Finished modprobe@loop.service - Load Kernel Module loop. May 8 00:45:56.999518 systemd[1]: Condition check resulted in dev-ttyS0.device - /dev/ttyS0 being skipped. May 8 00:45:57.000501 systemd[1]: proc-xen.mount - /proc/xen was skipped because of an unmet condition check (ConditionVirtualization=xen). 
May 8 00:45:57.000714 systemd[1]: ignition-delete-config.service - Ignition (delete config) was skipped because no trigger condition checks were met. May 8 00:45:57.013454 systemd[1]: Starting modprobe@dm_mod.service - Load Kernel Module dm_mod... May 8 00:45:57.016754 systemd[1]: Starting modprobe@drm.service - Load Kernel Module drm... May 8 00:45:57.018008 systemd[1]: systemd-binfmt.service - Set Up Additional Binary Formats was skipped because no trigger condition checks were met. May 8 00:45:57.018085 systemd[1]: update-ca-certificates.service - Update CA bundle at /etc/ssl/certs/ca-certificates.crt was skipped because of an unmet condition check (ConditionPathIsSymbolicLink=!/etc/ssl/certs/ca-certificates.crt). May 8 00:45:57.018111 systemd[1]: xenserver-pv-version.service - Set fake PV driver version for XenServer was skipped because of an unmet condition check (ConditionVirtualization=xen). May 8 00:45:57.018734 systemd[1]: Finished ensure-sysext.service. May 8 00:45:57.021324 systemd[1]: modprobe@efi_pstore.service: Deactivated successfully. May 8 00:45:57.021560 systemd[1]: Finished modprobe@efi_pstore.service - Load Kernel Module efi_pstore. May 8 00:45:57.027697 systemd[1]: modprobe@dm_mod.service: Deactivated successfully. May 8 00:45:57.027976 systemd[1]: Finished modprobe@dm_mod.service - Load Kernel Module dm_mod. May 8 00:45:57.033033 systemd[1]: systemd-pstore.service - Platform Persistent Storage Archival was skipped because of an unmet condition check (ConditionDirectoryNotEmpty=/sys/fs/pstore). May 8 00:45:57.033089 systemd[1]: systemd-repart.service - Repartition Root Disk was skipped because no trigger condition checks were met. May 8 00:45:57.038929 systemd-resolved[1333]: Positive Trust Anchors: May 8 00:45:57.038949 systemd-resolved[1333]: . IN DS 20326 8 2 e06d44b80b8f1d39a95c0b0d7c65d08458e880409bbc683457104237c7f8ec8d May 8 00:45:57.038981 systemd-resolved[1333]: Negative trust anchors: home.arpa 10.in-addr.arpa 16.172.in-addr.arpa 17.172.in-addr.arpa 18.172.in-addr.arpa 19.172.in-addr.arpa 20.172.in-addr.arpa 21.172.in-addr.arpa 22.172.in-addr.arpa 23.172.in-addr.arpa 24.172.in-addr.arpa 25.172.in-addr.arpa 26.172.in-addr.arpa 27.172.in-addr.arpa 28.172.in-addr.arpa 29.172.in-addr.arpa 30.172.in-addr.arpa 31.172.in-addr.arpa 170.0.0.192.in-addr.arpa 171.0.0.192.in-addr.arpa 168.192.in-addr.arpa d.f.ip6.arpa ipv4only.arpa resolver.arpa corp home internal intranet lan local private test May 8 00:45:57.043581 systemd[1]: Starting systemd-timesyncd.service - Network Time Synchronization... May 8 00:45:57.045280 systemd[1]: modprobe@drm.service: Deactivated successfully. May 8 00:45:57.045486 systemd[1]: Finished modprobe@drm.service - Load Kernel Module drm. May 8 00:45:57.049572 systemd-resolved[1333]: Defaulting to hostname 'linux'. May 8 00:45:57.055298 kernel: BTRFS warning: duplicate device /dev/vda3 devid 1 generation 39 scanned by (udev-worker) (1390) May 8 00:45:57.060916 systemd[1]: Started systemd-resolved.service - Network Name Resolution. May 8 00:45:57.063995 systemd[1]: Reached target nss-lookup.target - Host and Network Name Lookups. May 8 00:45:57.079461 systemd-networkd[1391]: lo: Link UP May 8 00:45:57.079473 systemd-networkd[1391]: lo: Gained carrier May 8 00:45:57.081125 systemd-networkd[1391]: Enumeration completed May 8 00:45:57.082143 systemd[1]: Started systemd-networkd.service - Network Configuration. 
May 8 00:45:57.082974 systemd-networkd[1391]: eth0: found matching network '/usr/lib/systemd/network/zz-default.network', based on potentially unpredictable interface name. May 8 00:45:57.083390 systemd[1]: Reached target network.target - Network. May 8 00:45:57.083494 systemd-networkd[1391]: eth0: Configuring with /usr/lib/systemd/network/zz-default.network. May 8 00:45:57.085980 systemd-networkd[1391]: eth0: Link UP May 8 00:45:57.085987 systemd-networkd[1391]: eth0: Gained carrier May 8 00:45:57.086000 systemd-networkd[1391]: eth0: found matching network '/usr/lib/systemd/network/zz-default.network', based on potentially unpredictable interface name. May 8 00:45:57.091377 systemd[1]: Starting systemd-networkd-wait-online.service - Wait for Network to be Configured... May 8 00:45:57.101326 systemd-networkd[1391]: eth0: DHCPv4 address 10.0.0.140/16, gateway 10.0.0.1 acquired from 10.0.0.1 May 8 00:45:57.108178 kernel: input: Power Button as /devices/LNXSYSTM:00/LNXPWRBN:00/input/input3 May 8 00:45:57.110299 systemd[1]: Found device dev-disk-by\x2dlabel-OEM.device - /dev/disk/by-label/OEM. May 8 00:45:57.120096 systemd[1]: Starting systemd-fsck@dev-disk-by\x2dlabel-OEM.service - File System Check on /dev/disk/by-label/OEM... May 8 00:45:57.120268 kernel: i801_smbus 0000:00:1f.3: Enabling SMBus device May 8 00:45:58.393364 kernel: ACPI: button: Power Button [PWRF] May 8 00:45:58.393394 kernel: i801_smbus 0000:00:1f.3: SMBus using PCI interrupt May 8 00:45:58.394175 kernel: i2c i2c-0: 1/1 memory slots populated (from DMI) May 8 00:45:58.394361 kernel: i2c i2c-0: Memory type 0x07 not supported yet, not instantiating SPD May 8 00:45:57.125522 systemd[1]: Started systemd-timesyncd.service - Network Time Synchronization. May 8 00:45:57.128801 systemd[1]: Reached target time-set.target - System Time Set. May 8 00:45:58.392592 systemd-resolved[1333]: Clock change detected. Flushing caches. May 8 00:45:58.393382 systemd-timesyncd[1408]: Contacted time server 10.0.0.1:123 (10.0.0.1). May 8 00:45:58.393473 systemd-timesyncd[1408]: Initial clock synchronization to Thu 2025-05-08 00:45:58.392524 UTC. May 8 00:45:58.405097 systemd[1]: Finished systemd-fsck@dev-disk-by\x2dlabel-OEM.service - File System Check on /dev/disk/by-label/OEM. May 8 00:45:58.414517 kernel: input: ImExPS/2 Generic Explorer Mouse as /devices/platform/i8042/serio1/input/input4 May 8 00:45:58.454712 kernel: mousedev: PS/2 mouse device common for all mice May 8 00:45:58.487688 systemd[1]: Starting systemd-vconsole-setup.service - Virtual Console Setup... May 8 00:45:58.502358 systemd[1]: systemd-vconsole-setup.service: Deactivated successfully. May 8 00:45:58.502692 systemd[1]: Stopped systemd-vconsole-setup.service - Virtual Console Setup. May 8 00:45:58.514801 kernel: kvm_amd: TSC scaling supported May 8 00:45:58.514847 kernel: kvm_amd: Nested Virtualization enabled May 8 00:45:58.514862 kernel: kvm_amd: Nested Paging enabled May 8 00:45:58.514874 kernel: kvm_amd: LBR virtualization supported May 8 00:45:58.515884 kernel: kvm_amd: Virtual VMLOAD VMSAVE supported May 8 00:45:58.515918 kernel: kvm_amd: Virtual GIF supported May 8 00:45:58.526729 systemd[1]: Starting systemd-vconsole-setup.service - Virtual Console Setup... May 8 00:45:58.535429 kernel: EDAC MC: Ver: 3.0.0 May 8 00:45:58.564982 systemd[1]: Finished systemd-udev-settle.service - Wait for udev To Complete Device Initialization. May 8 00:45:58.576739 systemd[1]: Starting lvm2-activation-early.service - Activation of LVM2 logical volumes... 
May 8 00:45:58.581008 systemd[1]: Finished systemd-vconsole-setup.service - Virtual Console Setup. May 8 00:45:58.586277 lvm[1431]: WARNING: Failed to connect to lvmetad. Falling back to device scanning. May 8 00:45:58.620605 systemd[1]: Finished lvm2-activation-early.service - Activation of LVM2 logical volumes. May 8 00:45:58.622203 systemd[1]: Reached target cryptsetup.target - Local Encrypted Volumes. May 8 00:45:58.623373 systemd[1]: Reached target sysinit.target - System Initialization. May 8 00:45:58.624620 systemd[1]: Started motdgen.path - Watch for update engine configuration changes. May 8 00:45:58.625931 systemd[1]: Started user-cloudinit@var-lib-flatcar\x2dinstall-user_data.path - Watch for a cloud-config at /var/lib/flatcar-install/user_data. May 8 00:45:58.627488 systemd[1]: Started logrotate.timer - Daily rotation of log files. May 8 00:45:58.628728 systemd[1]: Started mdadm.timer - Weekly check for MD array's redundancy information.. May 8 00:45:58.630025 systemd[1]: Started systemd-tmpfiles-clean.timer - Daily Cleanup of Temporary Directories. May 8 00:45:58.631304 systemd[1]: update-engine-stub.timer - Update Engine Stub Timer was skipped because of an unmet condition check (ConditionPathExists=/usr/.noupdate). May 8 00:45:58.631330 systemd[1]: Reached target paths.target - Path Units. May 8 00:45:58.632482 systemd[1]: Reached target timers.target - Timer Units. May 8 00:45:58.634350 systemd[1]: Listening on dbus.socket - D-Bus System Message Bus Socket. May 8 00:45:58.637059 systemd[1]: Starting docker.socket - Docker Socket for the API... May 8 00:45:58.653295 systemd[1]: Listening on sshd.socket - OpenSSH Server Socket. May 8 00:45:58.655708 systemd[1]: Starting lvm2-activation.service - Activation of LVM2 logical volumes... May 8 00:45:58.657266 systemd[1]: Listening on docker.socket - Docker Socket for the API. May 8 00:45:58.658507 systemd[1]: Reached target sockets.target - Socket Units. May 8 00:45:58.659513 systemd[1]: Reached target basic.target - Basic System. May 8 00:45:58.660534 systemd[1]: addon-config@oem.service - Configure Addon /oem was skipped because no trigger condition checks were met. May 8 00:45:58.660561 systemd[1]: addon-run@oem.service - Run Addon /oem was skipped because no trigger condition checks were met. May 8 00:45:58.661539 systemd[1]: Starting containerd.service - containerd container runtime... May 8 00:45:58.663638 systemd[1]: Starting dbus.service - D-Bus System Message Bus... May 8 00:45:58.669301 lvm[1437]: WARNING: Failed to connect to lvmetad. Falling back to device scanning. May 8 00:45:58.668494 systemd[1]: Starting enable-oem-cloudinit.service - Enable cloudinit... May 8 00:45:58.671609 systemd[1]: Starting extend-filesystems.service - Extend Filesystems... May 8 00:45:58.672751 systemd[1]: flatcar-setup-environment.service - Modifies /etc/environment for CoreOS was skipped because of an unmet condition check (ConditionPathExists=/oem/bin/flatcar-setup-environment). May 8 00:45:58.674495 jq[1440]: false May 8 00:45:58.674876 systemd[1]: Starting motdgen.service - Generate /run/flatcar/motd... May 8 00:45:58.681595 systemd[1]: Starting ssh-key-proc-cmdline.service - Install an ssh key from /proc/cmdline... May 8 00:45:58.686808 systemd[1]: Starting sshd-keygen.service - Generate sshd host keys... May 8 00:45:58.692517 systemd[1]: Starting systemd-logind.service - User Login Management... 
May 8 00:45:58.694074 systemd[1]: tcsd.service - TCG Core Services Daemon was skipped because of an unmet condition check (ConditionPathExists=/dev/tpm0). May 8 00:45:58.694517 systemd[1]: cgroup compatibility translation between legacy and unified hierarchy settings activated. See cgroup-compat debug messages for details. May 8 00:45:58.697204 systemd[1]: Starting update-engine.service - Update Engine... May 8 00:45:58.699544 systemd[1]: Starting update-ssh-keys-after-ignition.service - Run update-ssh-keys once after Ignition... May 8 00:45:58.703497 systemd[1]: Finished lvm2-activation.service - Activation of LVM2 logical volumes. May 8 00:45:58.705548 systemd[1]: enable-oem-cloudinit.service: Skipped due to 'exec-condition'. May 8 00:45:58.705786 systemd[1]: Condition check resulted in enable-oem-cloudinit.service - Enable cloudinit being skipped. May 8 00:45:58.706104 systemd[1]: motdgen.service: Deactivated successfully. May 8 00:45:58.710631 jq[1453]: true May 8 00:45:58.706294 systemd[1]: Finished motdgen.service - Generate /run/flatcar/motd. May 8 00:45:58.708653 systemd[1]: ssh-key-proc-cmdline.service: Deactivated successfully. May 8 00:45:58.708925 systemd[1]: Finished ssh-key-proc-cmdline.service - Install an ssh key from /proc/cmdline. May 8 00:45:58.712145 dbus-daemon[1439]: [system] SELinux support is enabled May 8 00:45:58.712687 systemd[1]: Started dbus.service - D-Bus System Message Bus. May 8 00:45:58.715561 extend-filesystems[1441]: Found loop3 May 8 00:45:58.715561 extend-filesystems[1441]: Found loop4 May 8 00:45:58.715561 extend-filesystems[1441]: Found loop5 May 8 00:45:58.715561 extend-filesystems[1441]: Found sr0 May 8 00:45:58.715561 extend-filesystems[1441]: Found vda May 8 00:45:58.715561 extend-filesystems[1441]: Found vda1 May 8 00:45:58.715561 extend-filesystems[1441]: Found vda2 May 8 00:45:58.715561 extend-filesystems[1441]: Found vda3 May 8 00:45:58.715561 extend-filesystems[1441]: Found usr May 8 00:45:58.715561 extend-filesystems[1441]: Found vda4 May 8 00:45:58.715561 extend-filesystems[1441]: Found vda6 May 8 00:45:58.715561 extend-filesystems[1441]: Found vda7 May 8 00:45:58.715561 extend-filesystems[1441]: Found vda9 May 8 00:45:58.715561 extend-filesystems[1441]: Checking size of /dev/vda9 May 8 00:45:58.724558 (ntainerd)[1460]: containerd.service: Referenced but unset environment variable evaluates to an empty string: TORCX_IMAGEDIR, TORCX_UNPACKDIR May 8 00:45:58.739059 jq[1457]: true May 8 00:45:58.728817 systemd[1]: system-cloudinit@usr-share-oem-cloud\x2dconfig.yml.service - Load cloud-config from /usr/share/oem/cloud-config.yml was skipped because of an unmet condition check (ConditionFileNotEmpty=/usr/share/oem/cloud-config.yml). May 8 00:45:58.728860 systemd[1]: Reached target system-config.target - Load system-provided cloud configs. May 8 00:45:58.730597 systemd[1]: user-cloudinit-proc-cmdline.service - Load cloud-config from url defined in /proc/cmdline was skipped because of an unmet condition check (ConditionKernelCommandLine=cloud-config-url). May 8 00:45:58.730613 systemd[1]: Reached target user-config.target - Load user-provided cloud configs. May 8 00:45:58.744622 update_engine[1452]: I20250508 00:45:58.744511 1452 main.cc:92] Flatcar Update Engine starting May 8 00:45:58.747956 systemd[1]: Started update-engine.service - Update Engine. 
May 8 00:45:58.749332 update_engine[1452]: I20250508 00:45:58.748096 1452 update_check_scheduler.cc:74] Next update check in 4m57s May 8 00:45:58.751721 extend-filesystems[1441]: Resized partition /dev/vda9 May 8 00:45:58.757675 kernel: BTRFS warning: duplicate device /dev/vda3 devid 1 generation 39 scanned by (udev-worker) (1370) May 8 00:45:58.758136 extend-filesystems[1475]: resize2fs 1.47.1 (20-May-2024) May 8 00:45:58.758630 systemd[1]: Started locksmithd.service - Cluster reboot manager. May 8 00:45:58.778439 kernel: EXT4-fs (vda9): resizing filesystem from 553472 to 1864699 blocks May 8 00:45:58.792469 systemd-logind[1448]: Watching system buttons on /dev/input/event1 (Power Button) May 8 00:45:58.792497 systemd-logind[1448]: Watching system buttons on /dev/input/event0 (AT Translated Set 2 keyboard) May 8 00:45:58.792858 systemd-logind[1448]: New seat seat0. May 8 00:45:58.793651 systemd[1]: Started systemd-logind.service - User Login Management. May 8 00:45:58.811616 sshd_keygen[1465]: ssh-keygen: generating new host keys: RSA ECDSA ED25519 May 8 00:45:58.818459 kernel: EXT4-fs (vda9): resized filesystem to 1864699 May 8 00:45:58.819622 locksmithd[1474]: locksmithd starting currentOperation="UPDATE_STATUS_IDLE" strategy="reboot" May 8 00:45:58.837138 systemd[1]: Finished sshd-keygen.service - Generate sshd host keys. May 8 00:45:58.859875 systemd[1]: Starting issuegen.service - Generate /run/issue... May 8 00:45:59.274655 containerd[1460]: time="2025-05-08T00:45:59.274379757Z" level=info msg="starting containerd" revision=174e0d1785eeda18dc2beba45e1d5a188771636b version=v1.7.21 May 8 00:45:58.867183 systemd[1]: issuegen.service: Deactivated successfully. May 8 00:45:59.274988 extend-filesystems[1475]: Filesystem at /dev/vda9 is mounted on /; on-line resizing required May 8 00:45:59.274988 extend-filesystems[1475]: old_desc_blocks = 1, new_desc_blocks = 1 May 8 00:45:59.274988 extend-filesystems[1475]: The filesystem on /dev/vda9 is now 1864699 (4k) blocks long. May 8 00:45:58.867423 systemd[1]: Finished issuegen.service - Generate /run/issue. May 8 00:45:59.279340 extend-filesystems[1441]: Resized filesystem in /dev/vda9 May 8 00:45:58.870151 systemd[1]: Starting systemd-user-sessions.service - Permit User Sessions... May 8 00:45:58.885806 systemd[1]: Finished systemd-user-sessions.service - Permit User Sessions. May 8 00:45:58.898665 systemd[1]: Started getty@tty1.service - Getty on tty1. May 8 00:45:58.900737 systemd[1]: Started serial-getty@ttyS0.service - Serial Getty on ttyS0. May 8 00:45:58.901996 systemd[1]: Reached target getty.target - Login Prompts. May 8 00:45:59.281078 systemd[1]: extend-filesystems.service: Deactivated successfully. May 8 00:45:59.281335 systemd[1]: Finished extend-filesystems.service - Extend Filesystems. May 8 00:45:59.291805 bash[1489]: Updated "/home/core/.ssh/authorized_keys" May 8 00:45:59.294380 systemd[1]: Finished update-ssh-keys-after-ignition.service - Run update-ssh-keys once after Ignition. May 8 00:45:59.296765 systemd[1]: sshkeys.service was skipped because no trigger condition checks were met. May 8 00:45:59.297853 containerd[1460]: time="2025-05-08T00:45:59.297774759Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.aufs\"..." type=io.containerd.snapshotter.v1 May 8 00:45:59.299957 containerd[1460]: time="2025-05-08T00:45:59.299909543Z" level=info msg="skip loading plugin \"io.containerd.snapshotter.v1.aufs\"..." 
error="aufs is not supported (modprobe aufs failed: exit status 1 \"modprobe: FATAL: Module aufs not found in directory /lib/modules/6.6.88-flatcar\\n\"): skip plugin" type=io.containerd.snapshotter.v1 May 8 00:45:59.299957 containerd[1460]: time="2025-05-08T00:45:59.299938728Z" level=info msg="loading plugin \"io.containerd.event.v1.exchange\"..." type=io.containerd.event.v1 May 8 00:45:59.299957 containerd[1460]: time="2025-05-08T00:45:59.299954658Z" level=info msg="loading plugin \"io.containerd.internal.v1.opt\"..." type=io.containerd.internal.v1 May 8 00:45:59.300170 containerd[1460]: time="2025-05-08T00:45:59.300147490Z" level=info msg="loading plugin \"io.containerd.warning.v1.deprecations\"..." type=io.containerd.warning.v1 May 8 00:45:59.300199 containerd[1460]: time="2025-05-08T00:45:59.300167958Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.blockfile\"..." type=io.containerd.snapshotter.v1 May 8 00:45:59.300269 containerd[1460]: time="2025-05-08T00:45:59.300233721Z" level=info msg="skip loading plugin \"io.containerd.snapshotter.v1.blockfile\"..." error="no scratch file generator: skip plugin" type=io.containerd.snapshotter.v1 May 8 00:45:59.300302 containerd[1460]: time="2025-05-08T00:45:59.300268907Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.btrfs\"..." type=io.containerd.snapshotter.v1 May 8 00:45:59.300513 containerd[1460]: time="2025-05-08T00:45:59.300488790Z" level=info msg="skip loading plugin \"io.containerd.snapshotter.v1.btrfs\"..." error="path /var/lib/containerd/io.containerd.snapshotter.v1.btrfs (ext4) must be a btrfs filesystem to be used with the btrfs snapshotter: skip plugin" type=io.containerd.snapshotter.v1 May 8 00:45:59.300513 containerd[1460]: time="2025-05-08T00:45:59.300506744Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.devmapper\"..." type=io.containerd.snapshotter.v1 May 8 00:45:59.300588 containerd[1460]: time="2025-05-08T00:45:59.300519728Z" level=info msg="skip loading plugin \"io.containerd.snapshotter.v1.devmapper\"..." error="devmapper not configured: skip plugin" type=io.containerd.snapshotter.v1 May 8 00:45:59.300588 containerd[1460]: time="2025-05-08T00:45:59.300529797Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.native\"..." type=io.containerd.snapshotter.v1 May 8 00:45:59.300643 containerd[1460]: time="2025-05-08T00:45:59.300635936Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.overlayfs\"..." type=io.containerd.snapshotter.v1 May 8 00:45:59.300905 containerd[1460]: time="2025-05-08T00:45:59.300881366Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.zfs\"..." type=io.containerd.snapshotter.v1 May 8 00:45:59.301025 containerd[1460]: time="2025-05-08T00:45:59.301004337Z" level=info msg="skip loading plugin \"io.containerd.snapshotter.v1.zfs\"..." error="path /var/lib/containerd/io.containerd.snapshotter.v1.zfs must be a zfs filesystem to be used with the zfs snapshotter: skip plugin" type=io.containerd.snapshotter.v1 May 8 00:45:59.301025 containerd[1460]: time="2025-05-08T00:45:59.301020247Z" level=info msg="loading plugin \"io.containerd.content.v1.content\"..." type=io.containerd.content.v1 May 8 00:45:59.301148 containerd[1460]: time="2025-05-08T00:45:59.301127979Z" level=info msg="loading plugin \"io.containerd.metadata.v1.bolt\"..." 
type=io.containerd.metadata.v1 May 8 00:45:59.301200 containerd[1460]: time="2025-05-08T00:45:59.301186659Z" level=info msg="metadata content store policy set" policy=shared May 8 00:45:59.310215 containerd[1460]: time="2025-05-08T00:45:59.310160020Z" level=info msg="loading plugin \"io.containerd.gc.v1.scheduler\"..." type=io.containerd.gc.v1 May 8 00:45:59.310345 containerd[1460]: time="2025-05-08T00:45:59.310245140Z" level=info msg="loading plugin \"io.containerd.differ.v1.walking\"..." type=io.containerd.differ.v1 May 8 00:45:59.310345 containerd[1460]: time="2025-05-08T00:45:59.310268754Z" level=info msg="loading plugin \"io.containerd.lease.v1.manager\"..." type=io.containerd.lease.v1 May 8 00:45:59.310345 containerd[1460]: time="2025-05-08T00:45:59.310284033Z" level=info msg="loading plugin \"io.containerd.streaming.v1.manager\"..." type=io.containerd.streaming.v1 May 8 00:45:59.310345 containerd[1460]: time="2025-05-08T00:45:59.310298340Z" level=info msg="loading plugin \"io.containerd.runtime.v1.linux\"..." type=io.containerd.runtime.v1 May 8 00:45:59.310548 containerd[1460]: time="2025-05-08T00:45:59.310524394Z" level=info msg="loading plugin \"io.containerd.monitor.v1.cgroups\"..." type=io.containerd.monitor.v1 May 8 00:45:59.310885 containerd[1460]: time="2025-05-08T00:45:59.310847460Z" level=info msg="loading plugin \"io.containerd.runtime.v2.task\"..." type=io.containerd.runtime.v2 May 8 00:45:59.311050 containerd[1460]: time="2025-05-08T00:45:59.311023971Z" level=info msg="loading plugin \"io.containerd.runtime.v2.shim\"..." type=io.containerd.runtime.v2 May 8 00:45:59.311085 containerd[1460]: time="2025-05-08T00:45:59.311051983Z" level=info msg="loading plugin \"io.containerd.sandbox.store.v1.local\"..." type=io.containerd.sandbox.store.v1 May 8 00:45:59.311085 containerd[1460]: time="2025-05-08T00:45:59.311072362Z" level=info msg="loading plugin \"io.containerd.sandbox.controller.v1.local\"..." type=io.containerd.sandbox.controller.v1 May 8 00:45:59.311139 containerd[1460]: time="2025-05-08T00:45:59.311095034Z" level=info msg="loading plugin \"io.containerd.service.v1.containers-service\"..." type=io.containerd.service.v1 May 8 00:45:59.311139 containerd[1460]: time="2025-05-08T00:45:59.311114651Z" level=info msg="loading plugin \"io.containerd.service.v1.content-service\"..." type=io.containerd.service.v1 May 8 00:45:59.311139 containerd[1460]: time="2025-05-08T00:45:59.311133306Z" level=info msg="loading plugin \"io.containerd.service.v1.diff-service\"..." type=io.containerd.service.v1 May 8 00:45:59.311207 containerd[1460]: time="2025-05-08T00:45:59.311155287Z" level=info msg="loading plugin \"io.containerd.service.v1.images-service\"..." type=io.containerd.service.v1 May 8 00:45:59.311207 containerd[1460]: time="2025-05-08T00:45:59.311176737Z" level=info msg="loading plugin \"io.containerd.service.v1.introspection-service\"..." type=io.containerd.service.v1 May 8 00:45:59.311207 containerd[1460]: time="2025-05-08T00:45:59.311194491Z" level=info msg="loading plugin \"io.containerd.service.v1.namespaces-service\"..." type=io.containerd.service.v1 May 8 00:45:59.311280 containerd[1460]: time="2025-05-08T00:45:59.311210100Z" level=info msg="loading plugin \"io.containerd.service.v1.snapshots-service\"..." type=io.containerd.service.v1 May 8 00:45:59.311280 containerd[1460]: time="2025-05-08T00:45:59.311225098Z" level=info msg="loading plugin \"io.containerd.service.v1.tasks-service\"..." 
type=io.containerd.service.v1 May 8 00:45:59.311280 containerd[1460]: time="2025-05-08T00:45:59.311250626Z" level=info msg="loading plugin \"io.containerd.grpc.v1.containers\"..." type=io.containerd.grpc.v1 May 8 00:45:59.311280 containerd[1460]: time="2025-05-08T00:45:59.311269091Z" level=info msg="loading plugin \"io.containerd.grpc.v1.content\"..." type=io.containerd.grpc.v1 May 8 00:45:59.311471 containerd[1460]: time="2025-05-08T00:45:59.311286363Z" level=info msg="loading plugin \"io.containerd.grpc.v1.diff\"..." type=io.containerd.grpc.v1 May 8 00:45:59.311471 containerd[1460]: time="2025-05-08T00:45:59.311305599Z" level=info msg="loading plugin \"io.containerd.grpc.v1.events\"..." type=io.containerd.grpc.v1 May 8 00:45:59.311471 containerd[1460]: time="2025-05-08T00:45:59.311323813Z" level=info msg="loading plugin \"io.containerd.grpc.v1.images\"..." type=io.containerd.grpc.v1 May 8 00:45:59.311471 containerd[1460]: time="2025-05-08T00:45:59.311342929Z" level=info msg="loading plugin \"io.containerd.grpc.v1.introspection\"..." type=io.containerd.grpc.v1 May 8 00:45:59.311471 containerd[1460]: time="2025-05-08T00:45:59.311361724Z" level=info msg="loading plugin \"io.containerd.grpc.v1.leases\"..." type=io.containerd.grpc.v1 May 8 00:45:59.311471 containerd[1460]: time="2025-05-08T00:45:59.311382233Z" level=info msg="loading plugin \"io.containerd.grpc.v1.namespaces\"..." type=io.containerd.grpc.v1 May 8 00:45:59.311471 containerd[1460]: time="2025-05-08T00:45:59.311416908Z" level=info msg="loading plugin \"io.containerd.grpc.v1.sandbox-controllers\"..." type=io.containerd.grpc.v1 May 8 00:45:59.311471 containerd[1460]: time="2025-05-08T00:45:59.311446984Z" level=info msg="loading plugin \"io.containerd.grpc.v1.sandboxes\"..." type=io.containerd.grpc.v1 May 8 00:45:59.311471 containerd[1460]: time="2025-05-08T00:45:59.311465008Z" level=info msg="loading plugin \"io.containerd.grpc.v1.snapshots\"..." type=io.containerd.grpc.v1 May 8 00:45:59.311693 containerd[1460]: time="2025-05-08T00:45:59.311481910Z" level=info msg="loading plugin \"io.containerd.grpc.v1.streaming\"..." type=io.containerd.grpc.v1 May 8 00:45:59.311693 containerd[1460]: time="2025-05-08T00:45:59.311498341Z" level=info msg="loading plugin \"io.containerd.grpc.v1.tasks\"..." type=io.containerd.grpc.v1 May 8 00:45:59.311693 containerd[1460]: time="2025-05-08T00:45:59.311523328Z" level=info msg="loading plugin \"io.containerd.transfer.v1.local\"..." type=io.containerd.transfer.v1 May 8 00:45:59.311693 containerd[1460]: time="2025-05-08T00:45:59.311551480Z" level=info msg="loading plugin \"io.containerd.grpc.v1.transfer\"..." type=io.containerd.grpc.v1 May 8 00:45:59.311693 containerd[1460]: time="2025-05-08T00:45:59.311578491Z" level=info msg="loading plugin \"io.containerd.grpc.v1.version\"..." type=io.containerd.grpc.v1 May 8 00:45:59.311693 containerd[1460]: time="2025-05-08T00:45:59.311595303Z" level=info msg="loading plugin \"io.containerd.internal.v1.restart\"..." type=io.containerd.internal.v1 May 8 00:45:59.311693 containerd[1460]: time="2025-05-08T00:45:59.311664713Z" level=info msg="loading plugin \"io.containerd.tracing.processor.v1.otlp\"..." type=io.containerd.tracing.processor.v1 May 8 00:45:59.311855 containerd[1460]: time="2025-05-08T00:45:59.311691383Z" level=info msg="skip loading plugin \"io.containerd.tracing.processor.v1.otlp\"..." 
error="skip plugin: tracing endpoint not configured" type=io.containerd.tracing.processor.v1 May 8 00:45:59.311855 containerd[1460]: time="2025-05-08T00:45:59.311708425Z" level=info msg="loading plugin \"io.containerd.internal.v1.tracing\"..." type=io.containerd.internal.v1 May 8 00:45:59.311855 containerd[1460]: time="2025-05-08T00:45:59.311727641Z" level=info msg="skip loading plugin \"io.containerd.internal.v1.tracing\"..." error="skip plugin: tracing endpoint not configured" type=io.containerd.internal.v1 May 8 00:45:59.311855 containerd[1460]: time="2025-05-08T00:45:59.311742609Z" level=info msg="loading plugin \"io.containerd.grpc.v1.healthcheck\"..." type=io.containerd.grpc.v1 May 8 00:45:59.311855 containerd[1460]: time="2025-05-08T00:45:59.311761304Z" level=info msg="loading plugin \"io.containerd.nri.v1.nri\"..." type=io.containerd.nri.v1 May 8 00:45:59.311855 containerd[1460]: time="2025-05-08T00:45:59.311807110Z" level=info msg="NRI interface is disabled by configuration." May 8 00:45:59.311855 containerd[1460]: time="2025-05-08T00:45:59.311834712Z" level=info msg="loading plugin \"io.containerd.grpc.v1.cri\"..." type=io.containerd.grpc.v1 May 8 00:45:59.312270 containerd[1460]: time="2025-05-08T00:45:59.312190729Z" level=info msg="Start cri plugin with config {PluginConfig:{ContainerdConfig:{Snapshotter:overlayfs DefaultRuntimeName:runc DefaultRuntime:{Type: Path: Engine: PodAnnotations:[] ContainerAnnotations:[] Root: Options:map[] PrivilegedWithoutHostDevices:false PrivilegedWithoutHostDevicesAllDevicesAllowed:false BaseRuntimeSpec: NetworkPluginConfDir: NetworkPluginMaxConfNum:0 Snapshotter: SandboxMode:} UntrustedWorkloadRuntime:{Type: Path: Engine: PodAnnotations:[] ContainerAnnotations:[] Root: Options:map[] PrivilegedWithoutHostDevices:false PrivilegedWithoutHostDevicesAllDevicesAllowed:false BaseRuntimeSpec: NetworkPluginConfDir: NetworkPluginMaxConfNum:0 Snapshotter: SandboxMode:} Runtimes:map[runc:{Type:io.containerd.runc.v2 Path: Engine: PodAnnotations:[] ContainerAnnotations:[] Root: Options:map[SystemdCgroup:true] PrivilegedWithoutHostDevices:false PrivilegedWithoutHostDevicesAllDevicesAllowed:false BaseRuntimeSpec: NetworkPluginConfDir: NetworkPluginMaxConfNum:0 Snapshotter: SandboxMode:podsandbox}] NoPivot:false DisableSnapshotAnnotations:true DiscardUnpackedLayers:false IgnoreBlockIONotEnabledErrors:false IgnoreRdtNotEnabledErrors:false} CniConfig:{NetworkPluginBinDir:/opt/cni/bin NetworkPluginConfDir:/etc/cni/net.d NetworkPluginMaxConfNum:1 NetworkPluginSetupSerially:false NetworkPluginConfTemplate: IPPreference:} Registry:{ConfigPath: Mirrors:map[] Configs:map[] Auths:map[] Headers:map[]} ImageDecryption:{KeyModel:node} DisableTCPService:true StreamServerAddress:127.0.0.1 StreamServerPort:0 StreamIdleTimeout:4h0m0s EnableSelinux:true SelinuxCategoryRange:1024 SandboxImage:registry.k8s.io/pause:3.8 StatsCollectPeriod:10 SystemdCgroup:false EnableTLSStreaming:false X509KeyPairStreaming:{TLSCertFile: TLSKeyFile:} MaxContainerLogLineSize:16384 DisableCgroup:false DisableApparmor:false RestrictOOMScoreAdj:false MaxConcurrentDownloads:3 DisableProcMount:false UnsetSeccompProfile: TolerateMissingHugetlbController:true DisableHugetlbController:true DeviceOwnershipFromSecurityContext:false IgnoreImageDefinedVolumes:false NetNSMountsUnderStateDir:false EnableUnprivilegedPorts:false EnableUnprivilegedICMP:false EnableCDI:false CDISpecDirs:[/etc/cdi /var/run/cdi] ImagePullProgressTimeout:5m0s DrainExecSyncIOTimeout:0s ImagePullWithSyncFs:false 
IgnoreDeprecationWarnings:[]} ContainerdRootDir:/var/lib/containerd ContainerdEndpoint:/run/containerd/containerd.sock RootDir:/var/lib/containerd/io.containerd.grpc.v1.cri StateDir:/run/containerd/io.containerd.grpc.v1.cri}" May 8 00:45:59.312270 containerd[1460]: time="2025-05-08T00:45:59.312271992Z" level=info msg="Connect containerd service" May 8 00:45:59.312471 containerd[1460]: time="2025-05-08T00:45:59.312316054Z" level=info msg="using legacy CRI server" May 8 00:45:59.312471 containerd[1460]: time="2025-05-08T00:45:59.312324590Z" level=info msg="using experimental NRI integration - disable nri plugin to prevent this" May 8 00:45:59.312518 containerd[1460]: time="2025-05-08T00:45:59.312488267Z" level=info msg="Get image filesystem path \"/var/lib/containerd/io.containerd.snapshotter.v1.overlayfs\"" May 8 00:45:59.313256 containerd[1460]: time="2025-05-08T00:45:59.313224118Z" level=error msg="failed to load cni during init, please check CRI plugin status before setting up network for pods" error="cni config load failed: no network config found in /etc/cni/net.d: cni plugin not initialized: failed to load cni config" May 8 00:45:59.313465 containerd[1460]: time="2025-05-08T00:45:59.313383917Z" level=info msg="Start subscribing containerd event" May 8 00:45:59.313498 containerd[1460]: time="2025-05-08T00:45:59.313481891Z" level=info msg="Start recovering state" May 8 00:45:59.313606 containerd[1460]: time="2025-05-08T00:45:59.313587349Z" level=info msg="Start event monitor" May 8 00:45:59.313606 containerd[1460]: time="2025-05-08T00:45:59.313606515Z" level=info msg="Start snapshots syncer" May 8 00:45:59.313662 containerd[1460]: time="2025-05-08T00:45:59.313616353Z" level=info msg="Start cni network conf syncer for default" May 8 00:45:59.313662 containerd[1460]: time="2025-05-08T00:45:59.313628116Z" level=info msg="Start streaming server" May 8 00:45:59.313764 containerd[1460]: time="2025-05-08T00:45:59.313695071Z" level=info msg=serving... address=/run/containerd/containerd.sock.ttrpc May 8 00:45:59.313785 containerd[1460]: time="2025-05-08T00:45:59.313760404Z" level=info msg=serving... address=/run/containerd/containerd.sock May 8 00:45:59.313858 containerd[1460]: time="2025-05-08T00:45:59.313835474Z" level=info msg="containerd successfully booted in 0.300571s" May 8 00:45:59.313940 systemd[1]: Started containerd.service - containerd container runtime. May 8 00:45:59.937668 systemd-networkd[1391]: eth0: Gained IPv6LL May 8 00:45:59.941565 systemd[1]: Finished systemd-networkd-wait-online.service - Wait for Network to be Configured. May 8 00:45:59.943624 systemd[1]: Reached target network-online.target - Network is Online. May 8 00:45:59.953847 systemd[1]: Starting coreos-metadata.service - QEMU metadata agent... May 8 00:45:59.957127 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent... May 8 00:45:59.960021 systemd[1]: Starting nvidia.service - NVIDIA Configure Service... May 8 00:45:59.981050 systemd[1]: coreos-metadata.service: Deactivated successfully. May 8 00:45:59.981376 systemd[1]: Finished coreos-metadata.service - QEMU metadata agent. May 8 00:45:59.983682 systemd[1]: packet-phone-home.service - Report Success to Packet was skipped because no trigger condition checks were met. May 8 00:45:59.988576 systemd[1]: Finished nvidia.service - NVIDIA Configure Service. May 8 00:46:00.654252 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent. May 8 00:46:00.655967 systemd[1]: Reached target multi-user.target - Multi-User System. 
May 8 00:46:00.657478 systemd[1]: Startup finished in 792ms (kernel) + 5.465s (initrd) + 4.250s (userspace) = 10.508s. May 8 00:46:00.659586 (kubelet)[1545]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS, KUBELET_KUBEADM_ARGS May 8 00:46:01.069906 kubelet[1545]: E0508 00:46:01.069745 1545 run.go:72] "command failed" err="failed to load kubelet config file, path: /var/lib/kubelet/config.yaml, error: failed to load Kubelet config file /var/lib/kubelet/config.yaml, error failed to read kubelet config file \"/var/lib/kubelet/config.yaml\", error: open /var/lib/kubelet/config.yaml: no such file or directory" May 8 00:46:01.073939 systemd[1]: kubelet.service: Main process exited, code=exited, status=1/FAILURE May 8 00:46:01.074171 systemd[1]: kubelet.service: Failed with result 'exit-code'. May 8 00:46:04.088114 systemd[1]: Created slice system-sshd.slice - Slice /system/sshd. May 8 00:46:04.089340 systemd[1]: Started sshd@0-10.0.0.140:22-10.0.0.1:42840.service - OpenSSH per-connection server daemon (10.0.0.1:42840). May 8 00:46:04.139169 sshd[1558]: Accepted publickey for core from 10.0.0.1 port 42840 ssh2: RSA SHA256:ekllvhAjptCULlaFKPQUz58VR0uIuOgifX+67B4onhs May 8 00:46:04.141234 sshd[1558]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) May 8 00:46:04.151834 systemd-logind[1448]: New session 1 of user core. May 8 00:46:04.153491 systemd[1]: Created slice user-500.slice - User Slice of UID 500. May 8 00:46:04.172833 systemd[1]: Starting user-runtime-dir@500.service - User Runtime Directory /run/user/500... May 8 00:46:04.187176 systemd[1]: Finished user-runtime-dir@500.service - User Runtime Directory /run/user/500. May 8 00:46:04.190751 systemd[1]: Starting user@500.service - User Manager for UID 500... May 8 00:46:04.199371 (systemd)[1562]: pam_unix(systemd-user:session): session opened for user core(uid=500) by (uid=0) May 8 00:46:04.305986 systemd[1562]: Queued start job for default target default.target. May 8 00:46:04.318011 systemd[1562]: Created slice app.slice - User Application Slice. May 8 00:46:04.318043 systemd[1562]: Reached target paths.target - Paths. May 8 00:46:04.318058 systemd[1562]: Reached target timers.target - Timers. May 8 00:46:04.319890 systemd[1562]: Starting dbus.socket - D-Bus User Message Bus Socket... May 8 00:46:04.332269 systemd[1562]: Listening on dbus.socket - D-Bus User Message Bus Socket. May 8 00:46:04.332489 systemd[1562]: Reached target sockets.target - Sockets. May 8 00:46:04.332514 systemd[1562]: Reached target basic.target - Basic System. May 8 00:46:04.332566 systemd[1562]: Reached target default.target - Main User Target. May 8 00:46:04.332611 systemd[1562]: Startup finished in 126ms. May 8 00:46:04.333154 systemd[1]: Started user@500.service - User Manager for UID 500. May 8 00:46:04.334968 systemd[1]: Started session-1.scope - Session 1 of User core. May 8 00:46:04.400235 systemd[1]: Started sshd@1-10.0.0.140:22-10.0.0.1:42852.service - OpenSSH per-connection server daemon (10.0.0.1:42852). May 8 00:46:04.461521 sshd[1573]: Accepted publickey for core from 10.0.0.1 port 42852 ssh2: RSA SHA256:ekllvhAjptCULlaFKPQUz58VR0uIuOgifX+67B4onhs May 8 00:46:04.463078 sshd[1573]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) May 8 00:46:04.467073 systemd-logind[1448]: New session 2 of user core. May 8 00:46:04.477518 systemd[1]: Started session-2.scope - Session 2 of User core. 
May 8 00:46:04.532645 sshd[1573]: pam_unix(sshd:session): session closed for user core May 8 00:46:04.544242 systemd[1]: sshd@1-10.0.0.140:22-10.0.0.1:42852.service: Deactivated successfully. May 8 00:46:04.545821 systemd[1]: session-2.scope: Deactivated successfully. May 8 00:46:04.547471 systemd-logind[1448]: Session 2 logged out. Waiting for processes to exit. May 8 00:46:04.555729 systemd[1]: Started sshd@2-10.0.0.140:22-10.0.0.1:42854.service - OpenSSH per-connection server daemon (10.0.0.1:42854). May 8 00:46:04.557009 systemd-logind[1448]: Removed session 2. May 8 00:46:04.588617 sshd[1580]: Accepted publickey for core from 10.0.0.1 port 42854 ssh2: RSA SHA256:ekllvhAjptCULlaFKPQUz58VR0uIuOgifX+67B4onhs May 8 00:46:04.590072 sshd[1580]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) May 8 00:46:04.593715 systemd-logind[1448]: New session 3 of user core. May 8 00:46:04.603536 systemd[1]: Started session-3.scope - Session 3 of User core. May 8 00:46:04.654481 sshd[1580]: pam_unix(sshd:session): session closed for user core May 8 00:46:04.667519 systemd[1]: sshd@2-10.0.0.140:22-10.0.0.1:42854.service: Deactivated successfully. May 8 00:46:04.669273 systemd[1]: session-3.scope: Deactivated successfully. May 8 00:46:04.670960 systemd-logind[1448]: Session 3 logged out. Waiting for processes to exit. May 8 00:46:04.672308 systemd[1]: Started sshd@3-10.0.0.140:22-10.0.0.1:42860.service - OpenSSH per-connection server daemon (10.0.0.1:42860). May 8 00:46:04.673126 systemd-logind[1448]: Removed session 3. May 8 00:46:04.709524 sshd[1587]: Accepted publickey for core from 10.0.0.1 port 42860 ssh2: RSA SHA256:ekllvhAjptCULlaFKPQUz58VR0uIuOgifX+67B4onhs May 8 00:46:04.711113 sshd[1587]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) May 8 00:46:04.714921 systemd-logind[1448]: New session 4 of user core. May 8 00:46:04.728525 systemd[1]: Started session-4.scope - Session 4 of User core. May 8 00:46:04.785237 sshd[1587]: pam_unix(sshd:session): session closed for user core May 8 00:46:04.802695 systemd[1]: sshd@3-10.0.0.140:22-10.0.0.1:42860.service: Deactivated successfully. May 8 00:46:04.804387 systemd[1]: session-4.scope: Deactivated successfully. May 8 00:46:04.805996 systemd-logind[1448]: Session 4 logged out. Waiting for processes to exit. May 8 00:46:04.815642 systemd[1]: Started sshd@4-10.0.0.140:22-10.0.0.1:42874.service - OpenSSH per-connection server daemon (10.0.0.1:42874). May 8 00:46:04.816618 systemd-logind[1448]: Removed session 4. May 8 00:46:04.848823 sshd[1594]: Accepted publickey for core from 10.0.0.1 port 42874 ssh2: RSA SHA256:ekllvhAjptCULlaFKPQUz58VR0uIuOgifX+67B4onhs May 8 00:46:04.850494 sshd[1594]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) May 8 00:46:04.854194 systemd-logind[1448]: New session 5 of user core. May 8 00:46:04.862540 systemd[1]: Started session-5.scope - Session 5 of User core. May 8 00:46:04.921703 sudo[1597]: core : PWD=/home/core ; USER=root ; COMMAND=/usr/sbin/setenforce 1 May 8 00:46:04.922068 sudo[1597]: pam_unix(sudo:session): session opened for user root(uid=0) by core(uid=500) May 8 00:46:04.937991 sudo[1597]: pam_unix(sudo:session): session closed for user root May 8 00:46:04.940090 sshd[1594]: pam_unix(sshd:session): session closed for user core May 8 00:46:04.952074 systemd[1]: sshd@4-10.0.0.140:22-10.0.0.1:42874.service: Deactivated successfully. May 8 00:46:04.953859 systemd[1]: session-5.scope: Deactivated successfully. 
May 8 00:46:04.955187 systemd-logind[1448]: Session 5 logged out. Waiting for processes to exit. May 8 00:46:04.956589 systemd[1]: Started sshd@5-10.0.0.140:22-10.0.0.1:42880.service - OpenSSH per-connection server daemon (10.0.0.1:42880). May 8 00:46:04.957451 systemd-logind[1448]: Removed session 5. May 8 00:46:04.998496 sshd[1602]: Accepted publickey for core from 10.0.0.1 port 42880 ssh2: RSA SHA256:ekllvhAjptCULlaFKPQUz58VR0uIuOgifX+67B4onhs May 8 00:46:05.000282 sshd[1602]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) May 8 00:46:05.005067 systemd-logind[1448]: New session 6 of user core. May 8 00:46:05.017613 systemd[1]: Started session-6.scope - Session 6 of User core. May 8 00:46:05.074515 sudo[1606]: core : PWD=/home/core ; USER=root ; COMMAND=/usr/bin/rm -rf /etc/audit/rules.d/80-selinux.rules /etc/audit/rules.d/99-default.rules May 8 00:46:05.074972 sudo[1606]: pam_unix(sudo:session): session opened for user root(uid=0) by core(uid=500) May 8 00:46:05.079145 sudo[1606]: pam_unix(sudo:session): session closed for user root May 8 00:46:05.086470 sudo[1605]: core : PWD=/home/core ; USER=root ; COMMAND=/usr/bin/systemctl restart audit-rules May 8 00:46:05.086900 sudo[1605]: pam_unix(sudo:session): session opened for user root(uid=0) by core(uid=500) May 8 00:46:05.107711 systemd[1]: Stopping audit-rules.service - Load Security Auditing Rules... May 8 00:46:05.109770 auditctl[1609]: No rules May 8 00:46:05.111046 systemd[1]: audit-rules.service: Deactivated successfully. May 8 00:46:05.111300 systemd[1]: Stopped audit-rules.service - Load Security Auditing Rules. May 8 00:46:05.113035 systemd[1]: Starting audit-rules.service - Load Security Auditing Rules... May 8 00:46:05.146281 augenrules[1627]: No rules May 8 00:46:05.148076 systemd[1]: Finished audit-rules.service - Load Security Auditing Rules. May 8 00:46:05.149589 sudo[1605]: pam_unix(sudo:session): session closed for user root May 8 00:46:05.151588 sshd[1602]: pam_unix(sshd:session): session closed for user core May 8 00:46:05.161244 systemd[1]: sshd@5-10.0.0.140:22-10.0.0.1:42880.service: Deactivated successfully. May 8 00:46:05.162931 systemd[1]: session-6.scope: Deactivated successfully. May 8 00:46:05.164448 systemd-logind[1448]: Session 6 logged out. Waiting for processes to exit. May 8 00:46:05.173711 systemd[1]: Started sshd@6-10.0.0.140:22-10.0.0.1:42894.service - OpenSSH per-connection server daemon (10.0.0.1:42894). May 8 00:46:05.174733 systemd-logind[1448]: Removed session 6. May 8 00:46:05.206316 sshd[1635]: Accepted publickey for core from 10.0.0.1 port 42894 ssh2: RSA SHA256:ekllvhAjptCULlaFKPQUz58VR0uIuOgifX+67B4onhs May 8 00:46:05.208225 sshd[1635]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) May 8 00:46:05.213180 systemd-logind[1448]: New session 7 of user core. May 8 00:46:05.222619 systemd[1]: Started session-7.scope - Session 7 of User core. May 8 00:46:05.275863 sudo[1638]: core : PWD=/home/core ; USER=root ; COMMAND=/home/core/install.sh May 8 00:46:05.276197 sudo[1638]: pam_unix(sudo:session): session opened for user root(uid=0) by core(uid=500) May 8 00:46:05.297803 systemd[1]: Starting coreos-metadata.service - QEMU metadata agent... May 8 00:46:05.318673 systemd[1]: coreos-metadata.service: Deactivated successfully. May 8 00:46:05.318954 systemd[1]: Finished coreos-metadata.service - QEMU metadata agent. May 8 00:46:05.767627 systemd[1]: Stopped kubelet.service - kubelet: The Kubernetes Node Agent. 
May 8 00:46:05.778634 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent... May 8 00:46:05.803103 systemd[1]: Reloading requested from client PID 1681 ('systemctl') (unit session-7.scope)... May 8 00:46:05.803118 systemd[1]: Reloading... May 8 00:46:05.869479 zram_generator::config[1719]: No configuration found. May 8 00:46:06.554029 systemd[1]: /usr/lib/systemd/system/docker.socket:6: ListenStream= references a path below legacy directory /var/run/, updating /var/run/docker.sock → /run/docker.sock; please update the unit file accordingly. May 8 00:46:06.630766 systemd[1]: Reloading finished in 827 ms. May 8 00:46:06.681538 systemd[1]: Stopping kubelet.service - kubelet: The Kubernetes Node Agent... May 8 00:46:06.685940 systemd[1]: kubelet.service: Deactivated successfully. May 8 00:46:06.686202 systemd[1]: Stopped kubelet.service - kubelet: The Kubernetes Node Agent. May 8 00:46:06.687944 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent... May 8 00:46:06.843745 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent. May 8 00:46:06.859810 (kubelet)[1769]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS May 8 00:46:06.907359 kubelet[1769]: Flag --container-runtime-endpoint has been deprecated, This parameter should be set via the config file specified by the Kubelet's --config flag. See https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/ for more information. May 8 00:46:06.907359 kubelet[1769]: Flag --pod-infra-container-image has been deprecated, will be removed in 1.35. Image garbage collector will get sandbox image information from CRI. May 8 00:46:06.907359 kubelet[1769]: Flag --volume-plugin-dir has been deprecated, This parameter should be set via the config file specified by the Kubelet's --config flag. See https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/ for more information. May 8 00:46:06.907740 kubelet[1769]: I0508 00:46:06.907437 1769 server.go:215] "--pod-infra-container-image will not be pruned by the image garbage collector in kubelet and should also be set in the remote runtime" May 8 00:46:07.214870 kubelet[1769]: I0508 00:46:07.214762 1769 server.go:520] "Kubelet version" kubeletVersion="v1.32.0" May 8 00:46:07.214870 kubelet[1769]: I0508 00:46:07.214796 1769 server.go:522] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK="" May 8 00:46:07.215113 kubelet[1769]: I0508 00:46:07.215079 1769 server.go:954] "Client rotation is on, will bootstrap in background" May 8 00:46:07.241040 kubelet[1769]: I0508 00:46:07.240995 1769 dynamic_cafile_content.go:161] "Starting controller" name="client-ca-bundle::/etc/kubernetes/pki/ca.crt" May 8 00:46:07.250245 kubelet[1769]: E0508 00:46:07.250206 1769 log.go:32] "RuntimeConfig from runtime service failed" err="rpc error: code = Unimplemented desc = unknown method RuntimeConfig for service runtime.v1.RuntimeService" May 8 00:46:07.250245 kubelet[1769]: I0508 00:46:07.250242 1769 server.go:1421] "CRI implementation should be updated to support RuntimeConfig when KubeletCgroupDriverFromCRI feature gate has been enabled. Falling back to using cgroupDriver from kubelet config." May 8 00:46:07.255513 kubelet[1769]: I0508 00:46:07.255472 1769 server.go:772] "--cgroups-per-qos enabled, but --cgroup-root was not specified. 
defaulting to /" May 8 00:46:07.256989 kubelet[1769]: I0508 00:46:07.256946 1769 container_manager_linux.go:268] "Container manager verified user specified cgroup-root exists" cgroupRoot=[] May 8 00:46:07.257146 kubelet[1769]: I0508 00:46:07.256984 1769 container_manager_linux.go:273] "Creating Container Manager object based on Node Config" nodeConfig={"NodeName":"10.0.0.140","RuntimeCgroupsName":"","SystemCgroupsName":"","KubeletCgroupsName":"","KubeletOOMScoreAdj":-999,"ContainerRuntime":"","CgroupsPerQOS":true,"CgroupRoot":"/","CgroupDriver":"systemd","KubeletRootDir":"/var/lib/kubelet","ProtectKernelDefaults":false,"KubeReservedCgroupName":"","SystemReservedCgroupName":"","ReservedSystemCPUs":{},"EnforceNodeAllocatable":{"pods":{}},"KubeReserved":null,"SystemReserved":null,"HardEvictionThresholds":[{"Signal":"imagefs.inodesFree","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.05},"GracePeriod":0,"MinReclaim":null},{"Signal":"memory.available","Operator":"LessThan","Value":{"Quantity":"100Mi","Percentage":0},"GracePeriod":0,"MinReclaim":null},{"Signal":"nodefs.available","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.1},"GracePeriod":0,"MinReclaim":null},{"Signal":"nodefs.inodesFree","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.05},"GracePeriod":0,"MinReclaim":null},{"Signal":"imagefs.available","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.15},"GracePeriod":0,"MinReclaim":null}],"QOSReserved":{},"CPUManagerPolicy":"none","CPUManagerPolicyOptions":null,"TopologyManagerScope":"container","CPUManagerReconcilePeriod":10000000000,"ExperimentalMemoryManagerPolicy":"None","ExperimentalMemoryManagerReservedMemory":null,"PodPidsLimit":-1,"EnforceCPULimits":true,"CPUCFSQuotaPeriod":100000000,"TopologyManagerPolicy":"none","TopologyManagerPolicyOptions":null,"CgroupVersion":2} May 8 00:46:07.257257 kubelet[1769]: I0508 00:46:07.257148 1769 topology_manager.go:138] "Creating topology manager with none policy" May 8 00:46:07.257257 kubelet[1769]: I0508 00:46:07.257156 1769 container_manager_linux.go:304] "Creating device plugin manager" May 8 00:46:07.257307 kubelet[1769]: I0508 00:46:07.257277 1769 state_mem.go:36] "Initialized new in-memory state store" May 8 00:46:07.260544 kubelet[1769]: I0508 00:46:07.260505 1769 kubelet.go:446] "Attempting to sync node with API server" May 8 00:46:07.260544 kubelet[1769]: I0508 00:46:07.260523 1769 kubelet.go:341] "Adding static pod path" path="/etc/kubernetes/manifests" May 8 00:46:07.260544 kubelet[1769]: I0508 00:46:07.260539 1769 kubelet.go:352] "Adding apiserver pod source" May 8 00:46:07.260544 kubelet[1769]: I0508 00:46:07.260549 1769 apiserver.go:42] "Waiting for node sync before watching apiserver pods" May 8 00:46:07.260756 kubelet[1769]: E0508 00:46:07.260661 1769 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" May 8 00:46:07.260756 kubelet[1769]: E0508 00:46:07.260692 1769 file.go:98] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" May 8 00:46:07.263988 kubelet[1769]: I0508 00:46:07.263953 1769 kuberuntime_manager.go:269] "Container runtime initialized" containerRuntime="containerd" version="v1.7.21" apiVersion="v1" May 8 00:46:07.264316 kubelet[1769]: I0508 00:46:07.264302 1769 kubelet.go:890] "Not starting ClusterTrustBundle informer because we are in static kubelet mode" May 8 00:46:07.264831 kubelet[1769]: W0508 00:46:07.264805 1769 probe.go:272] Flexvolume plugin 
directory at /opt/libexec/kubernetes/kubelet-plugins/volume/exec/ does not exist. Recreating. May 8 00:46:07.265903 kubelet[1769]: W0508 00:46:07.265801 1769 reflector.go:569] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Node: nodes "10.0.0.140" is forbidden: User "system:anonymous" cannot list resource "nodes" in API group "" at the cluster scope May 8 00:46:07.265903 kubelet[1769]: W0508 00:46:07.265839 1769 reflector.go:569] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Service: services is forbidden: User "system:anonymous" cannot list resource "services" in API group "" at the cluster scope May 8 00:46:07.265903 kubelet[1769]: E0508 00:46:07.265868 1769 reflector.go:166] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Service: failed to list *v1.Service: services is forbidden: User \"system:anonymous\" cannot list resource \"services\" in API group \"\" at the cluster scope" logger="UnhandledError" May 8 00:46:07.265903 kubelet[1769]: E0508 00:46:07.265875 1769 reflector.go:166] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Node: failed to list *v1.Node: nodes \"10.0.0.140\" is forbidden: User \"system:anonymous\" cannot list resource \"nodes\" in API group \"\" at the cluster scope" logger="UnhandledError" May 8 00:46:07.266700 kubelet[1769]: I0508 00:46:07.266673 1769 watchdog_linux.go:99] "Systemd watchdog is not enabled" May 8 00:46:07.266735 kubelet[1769]: I0508 00:46:07.266710 1769 server.go:1287] "Started kubelet" May 8 00:46:07.267446 kubelet[1769]: I0508 00:46:07.267384 1769 server.go:169] "Starting to listen" address="0.0.0.0" port=10250 May 8 00:46:07.268751 kubelet[1769]: I0508 00:46:07.268224 1769 ratelimit.go:55] "Setting rate limiting for endpoint" service="podresources" qps=100 burstTokens=10 May 8 00:46:07.268751 kubelet[1769]: I0508 00:46:07.268439 1769 server.go:490] "Adding debug handlers to kubelet server" May 8 00:46:07.269039 kubelet[1769]: I0508 00:46:07.268725 1769 server.go:243] "Starting to serve the podresources API" endpoint="unix:/var/lib/kubelet/pod-resources/kubelet.sock" May 8 00:46:07.270175 kubelet[1769]: I0508 00:46:07.270136 1769 fs_resource_analyzer.go:67] "Starting FS ResourceAnalyzer" May 8 00:46:07.270504 kubelet[1769]: I0508 00:46:07.270456 1769 dynamic_serving_content.go:135] "Starting controller" name="kubelet-server-cert-files::/var/lib/kubelet/pki/kubelet.crt::/var/lib/kubelet/pki/kubelet.key" May 8 00:46:07.271682 kubelet[1769]: E0508 00:46:07.271642 1769 kubelet_node_status.go:467] "Error getting the current node from lister" err="node \"10.0.0.140\" not found" May 8 00:46:07.271730 kubelet[1769]: I0508 00:46:07.271687 1769 volume_manager.go:297] "Starting Kubelet Volume Manager" May 8 00:46:07.271961 kubelet[1769]: I0508 00:46:07.271938 1769 desired_state_of_world_populator.go:149] "Desired state populator starts to run" May 8 00:46:07.271995 kubelet[1769]: I0508 00:46:07.271989 1769 reconciler.go:26] "Reconciler: start to sync state" May 8 00:46:07.272530 kubelet[1769]: E0508 00:46:07.272428 1769 kubelet.go:1561] "Image garbage collection failed once. 
Stats initialization may not have completed yet" err="invalid capacity 0 on image filesystem" May 8 00:46:07.273634 kubelet[1769]: I0508 00:46:07.273454 1769 factory.go:221] Registration of the systemd container factory successfully May 8 00:46:07.273634 kubelet[1769]: I0508 00:46:07.273594 1769 factory.go:219] Registration of the crio container factory failed: Get "http://%2Fvar%2Frun%2Fcrio%2Fcrio.sock/info": dial unix /var/run/crio/crio.sock: connect: no such file or directory May 8 00:46:07.275692 kubelet[1769]: I0508 00:46:07.275347 1769 factory.go:221] Registration of the containerd container factory successfully May 8 00:46:07.280716 kubelet[1769]: E0508 00:46:07.280667 1769 controller.go:145] "Failed to ensure lease exists, will retry" err="leases.coordination.k8s.io \"10.0.0.140\" is forbidden: User \"system:anonymous\" cannot get resource \"leases\" in API group \"coordination.k8s.io\" in the namespace \"kube-node-lease\"" interval="200ms" May 8 00:46:07.281424 kubelet[1769]: W0508 00:46:07.281295 1769 reflector.go:569] k8s.io/client-go/informers/factory.go:160: failed to list *v1.CSIDriver: csidrivers.storage.k8s.io is forbidden: User "system:anonymous" cannot list resource "csidrivers" in API group "storage.k8s.io" at the cluster scope May 8 00:46:07.281424 kubelet[1769]: E0508 00:46:07.281324 1769 reflector.go:166] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.CSIDriver: failed to list *v1.CSIDriver: csidrivers.storage.k8s.io is forbidden: User \"system:anonymous\" cannot list resource \"csidrivers\" in API group \"storage.k8s.io\" at the cluster scope" logger="UnhandledError" May 8 00:46:07.282126 kubelet[1769]: E0508 00:46:07.280852 1769 event.go:359] "Server rejected event (will not retry!)" err="events is forbidden: User \"system:anonymous\" cannot create resource \"events\" in API group \"\" in the namespace \"default\"" event="&Event{ObjectMeta:{10.0.0.140.183d66b7247aaf56 default 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Node,Namespace:,Name:10.0.0.140,UID:10.0.0.140,APIVersion:,ResourceVersion:,FieldPath:,},Reason:Starting,Message:Starting kubelet.,Source:EventSource{Component:kubelet,Host:10.0.0.140,},FirstTimestamp:2025-05-08 00:46:07.266688854 +0000 UTC m=+0.401687632,LastTimestamp:2025-05-08 00:46:07.266688854 +0000 UTC m=+0.401687632,Count:1,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:10.0.0.140,}" May 8 00:46:07.282693 kubelet[1769]: E0508 00:46:07.282623 1769 event.go:359] "Server rejected event (will not retry!)" err="events is forbidden: User \"system:anonymous\" cannot create resource \"events\" in API group \"\" in the namespace \"default\"" event="&Event{ObjectMeta:{10.0.0.140.183d66b724d22069 default 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Node,Namespace:,Name:10.0.0.140,UID:10.0.0.140,APIVersion:,ResourceVersion:,FieldPath:,},Reason:InvalidDiskCapacity,Message:invalid capacity 0 on image filesystem,Source:EventSource{Component:kubelet,Host:10.0.0.140,},FirstTimestamp:2025-05-08 00:46:07.272419433 +0000 UTC m=+0.407418201,LastTimestamp:2025-05-08 00:46:07.272419433 +0000 UTC m=+0.407418201,Count:1,Type:Warning,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:10.0.0.140,}" May 8 00:46:07.287778 kubelet[1769]: I0508 00:46:07.287685 1769 cpu_manager.go:221] "Starting CPU 
manager" policy="none" May 8 00:46:07.287778 kubelet[1769]: I0508 00:46:07.287696 1769 cpu_manager.go:222] "Reconciling" reconcilePeriod="10s" May 8 00:46:07.287778 kubelet[1769]: I0508 00:46:07.287711 1769 state_mem.go:36] "Initialized new in-memory state store" May 8 00:46:07.291379 kubelet[1769]: E0508 00:46:07.291295 1769 event.go:359] "Server rejected event (will not retry!)" err="events is forbidden: User \"system:anonymous\" cannot create resource \"events\" in API group \"\" in the namespace \"default\"" event="&Event{ObjectMeta:{10.0.0.140.183d66b725b114c4 default 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Node,Namespace:,Name:10.0.0.140,UID:10.0.0.140,APIVersion:,ResourceVersion:,FieldPath:,},Reason:NodeHasSufficientMemory,Message:Node 10.0.0.140 status is now: NodeHasSufficientMemory,Source:EventSource{Component:kubelet,Host:10.0.0.140,},FirstTimestamp:2025-05-08 00:46:07.28703098 +0000 UTC m=+0.422029758,LastTimestamp:2025-05-08 00:46:07.28703098 +0000 UTC m=+0.422029758,Count:1,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:10.0.0.140,}" May 8 00:46:07.295223 kubelet[1769]: E0508 00:46:07.295102 1769 event.go:359] "Server rejected event (will not retry!)" err="events is forbidden: User \"system:anonymous\" cannot create resource \"events\" in API group \"\" in the namespace \"default\"" event="&Event{ObjectMeta:{10.0.0.140.183d66b725b12408 default 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Node,Namespace:,Name:10.0.0.140,UID:10.0.0.140,APIVersion:,ResourceVersion:,FieldPath:,},Reason:NodeHasNoDiskPressure,Message:Node 10.0.0.140 status is now: NodeHasNoDiskPressure,Source:EventSource{Component:kubelet,Host:10.0.0.140,},FirstTimestamp:2025-05-08 00:46:07.287034888 +0000 UTC m=+0.422033666,LastTimestamp:2025-05-08 00:46:07.287034888 +0000 UTC m=+0.422033666,Count:1,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:10.0.0.140,}" May 8 00:46:07.299001 kubelet[1769]: E0508 00:46:07.298890 1769 event.go:359] "Server rejected event (will not retry!)" err="events is forbidden: User \"system:anonymous\" cannot create resource \"events\" in API group \"\" in the namespace \"default\"" event="&Event{ObjectMeta:{10.0.0.140.183d66b725b12f07 default 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Node,Namespace:,Name:10.0.0.140,UID:10.0.0.140,APIVersion:,ResourceVersion:,FieldPath:,},Reason:NodeHasSufficientPID,Message:Node 10.0.0.140 status is now: NodeHasSufficientPID,Source:EventSource{Component:kubelet,Host:10.0.0.140,},FirstTimestamp:2025-05-08 00:46:07.287037703 +0000 UTC m=+0.422036481,LastTimestamp:2025-05-08 00:46:07.287037703 +0000 UTC m=+0.422036481,Count:1,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:10.0.0.140,}" May 8 00:46:07.372158 kubelet[1769]: E0508 00:46:07.372124 1769 kubelet_node_status.go:467] "Error getting the current node from lister" err="node \"10.0.0.140\" not found" May 8 00:46:07.472577 kubelet[1769]: E0508 00:46:07.472470 1769 kubelet_node_status.go:467] "Error getting the current node from lister" err="node \"10.0.0.140\" not found" May 8 00:46:07.572747 kubelet[1769]: E0508 00:46:07.572707 1769 kubelet_node_status.go:467] "Error getting the current node from lister" err="node 
\"10.0.0.140\" not found" May 8 00:46:07.673022 kubelet[1769]: E0508 00:46:07.672971 1769 kubelet_node_status.go:467] "Error getting the current node from lister" err="node \"10.0.0.140\" not found" May 8 00:46:07.721063 kubelet[1769]: E0508 00:46:07.721032 1769 nodelease.go:49] "Failed to get node when trying to set owner ref to the node lease" err="nodes \"10.0.0.140\" not found" node="10.0.0.140" May 8 00:46:07.761747 kubelet[1769]: I0508 00:46:07.761604 1769 policy_none.go:49] "None policy: Start" May 8 00:46:07.761747 kubelet[1769]: I0508 00:46:07.761674 1769 memory_manager.go:186] "Starting memorymanager" policy="None" May 8 00:46:07.761747 kubelet[1769]: I0508 00:46:07.761696 1769 state_mem.go:35] "Initializing new in-memory state store" May 8 00:46:07.769777 systemd[1]: Created slice kubepods.slice - libcontainer container kubepods.slice. May 8 00:46:07.773822 kubelet[1769]: E0508 00:46:07.773789 1769 kubelet_node_status.go:467] "Error getting the current node from lister" err="node \"10.0.0.140\" not found" May 8 00:46:07.778663 systemd[1]: Created slice kubepods-burstable.slice - libcontainer container kubepods-burstable.slice. May 8 00:46:07.781692 systemd[1]: Created slice kubepods-besteffort.slice - libcontainer container kubepods-besteffort.slice. May 8 00:46:07.783425 kubelet[1769]: I0508 00:46:07.783349 1769 kubelet_network_linux.go:50] "Initialized iptables rules." protocol="IPv4" May 8 00:46:07.784591 kubelet[1769]: I0508 00:46:07.784565 1769 kubelet_network_linux.go:50] "Initialized iptables rules." protocol="IPv6" May 8 00:46:07.784637 kubelet[1769]: I0508 00:46:07.784599 1769 status_manager.go:227] "Starting to sync pod status with apiserver" May 8 00:46:07.784637 kubelet[1769]: I0508 00:46:07.784619 1769 watchdog_linux.go:127] "Systemd watchdog is not enabled or the interval is invalid, so health checking will not be started." May 8 00:46:07.784637 kubelet[1769]: I0508 00:46:07.784629 1769 kubelet.go:2388] "Starting kubelet main sync loop" May 8 00:46:07.784790 kubelet[1769]: E0508 00:46:07.784754 1769 kubelet.go:2412] "Skipping pod synchronization" err="[container runtime status check may not have completed yet, PLEG is not healthy: pleg has yet to be successful]" May 8 00:46:07.789438 kubelet[1769]: I0508 00:46:07.789413 1769 manager.go:519] "Failed to read data from checkpoint" checkpoint="kubelet_internal_checkpoint" err="checkpoint is not found" May 8 00:46:07.789940 kubelet[1769]: I0508 00:46:07.789752 1769 eviction_manager.go:189] "Eviction manager: starting control loop" May 8 00:46:07.789940 kubelet[1769]: I0508 00:46:07.789769 1769 container_log_manager.go:189] "Initializing container log rotate workers" workers=1 monitorPeriod="10s" May 8 00:46:07.790036 kubelet[1769]: I0508 00:46:07.790008 1769 plugin_manager.go:118] "Starting Kubelet Plugin Manager" May 8 00:46:07.791126 kubelet[1769]: E0508 00:46:07.791102 1769 eviction_manager.go:267] "eviction manager: failed to check if we have separate container filesystem. Ignoring." 
err="no imagefs label for configured runtime" May 8 00:46:07.791234 kubelet[1769]: E0508 00:46:07.791141 1769 eviction_manager.go:292] "Eviction manager: failed to get summary stats" err="failed to get node info: node \"10.0.0.140\" not found" May 8 00:46:07.891425 kubelet[1769]: I0508 00:46:07.891370 1769 kubelet_node_status.go:76] "Attempting to register node" node="10.0.0.140" May 8 00:46:07.895756 kubelet[1769]: I0508 00:46:07.895727 1769 kubelet_node_status.go:79] "Successfully registered node" node="10.0.0.140" May 8 00:46:07.895879 kubelet[1769]: E0508 00:46:07.895761 1769 kubelet_node_status.go:549] "Error updating node status, will retry" err="error getting node \"10.0.0.140\": node \"10.0.0.140\" not found" May 8 00:46:07.900105 kubelet[1769]: E0508 00:46:07.900040 1769 kubelet_node_status.go:467] "Error getting the current node from lister" err="node \"10.0.0.140\" not found" May 8 00:46:08.001000 kubelet[1769]: E0508 00:46:08.000894 1769 kubelet_node_status.go:467] "Error getting the current node from lister" err="node \"10.0.0.140\" not found" May 8 00:46:08.101848 kubelet[1769]: E0508 00:46:08.101667 1769 kubelet_node_status.go:467] "Error getting the current node from lister" err="node \"10.0.0.140\" not found" May 8 00:46:08.202454 kubelet[1769]: E0508 00:46:08.202367 1769 kubelet_node_status.go:467] "Error getting the current node from lister" err="node \"10.0.0.140\" not found" May 8 00:46:08.217667 kubelet[1769]: I0508 00:46:08.217590 1769 transport.go:147] "Certificate rotation detected, shutting down client connections to start using new credentials" May 8 00:46:08.217813 kubelet[1769]: W0508 00:46:08.217797 1769 reflector.go:492] k8s.io/client-go/informers/factory.go:160: watch of *v1.RuntimeClass ended with: very short watch: k8s.io/client-go/informers/factory.go:160: Unexpected watch close - watch lasted less than a second and no items received May 8 00:46:08.261141 kubelet[1769]: E0508 00:46:08.261100 1769 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" May 8 00:46:08.302951 kubelet[1769]: E0508 00:46:08.302892 1769 kubelet_node_status.go:467] "Error getting the current node from lister" err="node \"10.0.0.140\" not found" May 8 00:46:08.370579 sudo[1638]: pam_unix(sudo:session): session closed for user root May 8 00:46:08.372621 sshd[1635]: pam_unix(sshd:session): session closed for user core May 8 00:46:08.376946 systemd[1]: sshd@6-10.0.0.140:22-10.0.0.1:42894.service: Deactivated successfully. May 8 00:46:08.379206 systemd[1]: session-7.scope: Deactivated successfully. May 8 00:46:08.379853 systemd-logind[1448]: Session 7 logged out. Waiting for processes to exit. May 8 00:46:08.380885 systemd-logind[1448]: Removed session 7. 
May 8 00:46:08.403675 kubelet[1769]: E0508 00:46:08.403633 1769 kubelet_node_status.go:467] "Error getting the current node from lister" err="node \"10.0.0.140\" not found" May 8 00:46:08.504436 kubelet[1769]: E0508 00:46:08.504363 1769 kubelet_node_status.go:467] "Error getting the current node from lister" err="node \"10.0.0.140\" not found" May 8 00:46:08.605107 kubelet[1769]: E0508 00:46:08.605027 1769 kubelet_node_status.go:467] "Error getting the current node from lister" err="node \"10.0.0.140\" not found" May 8 00:46:08.705748 kubelet[1769]: E0508 00:46:08.705607 1769 kubelet_node_status.go:467] "Error getting the current node from lister" err="node \"10.0.0.140\" not found" May 8 00:46:08.806475 kubelet[1769]: I0508 00:46:08.806443 1769 kuberuntime_manager.go:1702] "Updating runtime config through cri with podcidr" CIDR="192.168.1.0/24" May 8 00:46:08.806863 containerd[1460]: time="2025-05-08T00:46:08.806810522Z" level=info msg="No cni config template is specified, wait for other system components to drop the config." May 8 00:46:08.807241 kubelet[1769]: I0508 00:46:08.806989 1769 kubelet_network.go:61] "Updating Pod CIDR" originalPodCIDR="" newPodCIDR="192.168.1.0/24" May 8 00:46:09.261559 kubelet[1769]: I0508 00:46:09.261516 1769 apiserver.go:52] "Watching apiserver" May 8 00:46:09.261559 kubelet[1769]: E0508 00:46:09.261529 1769 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" May 8 00:46:09.270393 systemd[1]: Created slice kubepods-burstable-pod9bf27ce4_e87e_45ca_916f_5ead7169257b.slice - libcontainer container kubepods-burstable-pod9bf27ce4_e87e_45ca_916f_5ead7169257b.slice. May 8 00:46:09.275893 kubelet[1769]: I0508 00:46:09.275855 1769 desired_state_of_world_populator.go:157] "Finished populating initial desired state of world" May 8 00:46:09.281277 kubelet[1769]: I0508 00:46:09.281235 1769 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cilium-run\" (UniqueName: \"kubernetes.io/host-path/9bf27ce4-e87e-45ca-916f-5ead7169257b-cilium-run\") pod \"cilium-x2mhj\" (UID: \"9bf27ce4-e87e-45ca-916f-5ead7169257b\") " pod="kube-system/cilium-x2mhj" May 8 00:46:09.281277 kubelet[1769]: I0508 00:46:09.281270 1769 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"bpf-maps\" (UniqueName: \"kubernetes.io/host-path/9bf27ce4-e87e-45ca-916f-5ead7169257b-bpf-maps\") pod \"cilium-x2mhj\" (UID: \"9bf27ce4-e87e-45ca-916f-5ead7169257b\") " pod="kube-system/cilium-x2mhj" May 8 00:46:09.281277 kubelet[1769]: I0508 00:46:09.281294 1769 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cni-path\" (UniqueName: \"kubernetes.io/host-path/9bf27ce4-e87e-45ca-916f-5ead7169257b-cni-path\") pod \"cilium-x2mhj\" (UID: \"9bf27ce4-e87e-45ca-916f-5ead7169257b\") " pod="kube-system/cilium-x2mhj" May 8 00:46:09.281507 kubelet[1769]: I0508 00:46:09.281316 1769 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-79td8\" (UniqueName: \"kubernetes.io/projected/9bf27ce4-e87e-45ca-916f-5ead7169257b-kube-api-access-79td8\") pod \"cilium-x2mhj\" (UID: \"9bf27ce4-e87e-45ca-916f-5ead7169257b\") " pod="kube-system/cilium-x2mhj" May 8 00:46:09.281507 kubelet[1769]: I0508 00:46:09.281346 1769 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"etc-cni-netd\" (UniqueName: 
\"kubernetes.io/host-path/9bf27ce4-e87e-45ca-916f-5ead7169257b-etc-cni-netd\") pod \"cilium-x2mhj\" (UID: \"9bf27ce4-e87e-45ca-916f-5ead7169257b\") " pod="kube-system/cilium-x2mhj" May 8 00:46:09.281507 kubelet[1769]: I0508 00:46:09.281362 1769 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"clustermesh-secrets\" (UniqueName: \"kubernetes.io/secret/9bf27ce4-e87e-45ca-916f-5ead7169257b-clustermesh-secrets\") pod \"cilium-x2mhj\" (UID: \"9bf27ce4-e87e-45ca-916f-5ead7169257b\") " pod="kube-system/cilium-x2mhj" May 8 00:46:09.281507 kubelet[1769]: I0508 00:46:09.281382 1769 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-proc-sys-kernel\" (UniqueName: \"kubernetes.io/host-path/9bf27ce4-e87e-45ca-916f-5ead7169257b-host-proc-sys-kernel\") pod \"cilium-x2mhj\" (UID: \"9bf27ce4-e87e-45ca-916f-5ead7169257b\") " pod="kube-system/cilium-x2mhj" May 8 00:46:09.281507 kubelet[1769]: I0508 00:46:09.281418 1769 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"hubble-tls\" (UniqueName: \"kubernetes.io/projected/9bf27ce4-e87e-45ca-916f-5ead7169257b-hubble-tls\") pod \"cilium-x2mhj\" (UID: \"9bf27ce4-e87e-45ca-916f-5ead7169257b\") " pod="kube-system/cilium-x2mhj" May 8 00:46:09.281671 kubelet[1769]: I0508 00:46:09.281435 1769 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-gkczd\" (UniqueName: \"kubernetes.io/projected/b02b7e54-1024-4cef-821a-8b0032ced10f-kube-api-access-gkczd\") pod \"kube-proxy-5b4qz\" (UID: \"b02b7e54-1024-4cef-821a-8b0032ced10f\") " pod="kube-system/kube-proxy-5b4qz" May 8 00:46:09.281671 kubelet[1769]: I0508 00:46:09.281455 1769 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"hostproc\" (UniqueName: \"kubernetes.io/host-path/9bf27ce4-e87e-45ca-916f-5ead7169257b-hostproc\") pod \"cilium-x2mhj\" (UID: \"9bf27ce4-e87e-45ca-916f-5ead7169257b\") " pod="kube-system/cilium-x2mhj" May 8 00:46:09.281671 kubelet[1769]: I0508 00:46:09.281474 1769 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cilium-cgroup\" (UniqueName: \"kubernetes.io/host-path/9bf27ce4-e87e-45ca-916f-5ead7169257b-cilium-cgroup\") pod \"cilium-x2mhj\" (UID: \"9bf27ce4-e87e-45ca-916f-5ead7169257b\") " pod="kube-system/cilium-x2mhj" May 8 00:46:09.281671 kubelet[1769]: I0508 00:46:09.281489 1769 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/9bf27ce4-e87e-45ca-916f-5ead7169257b-lib-modules\") pod \"cilium-x2mhj\" (UID: \"9bf27ce4-e87e-45ca-916f-5ead7169257b\") " pod="kube-system/cilium-x2mhj" May 8 00:46:09.281671 kubelet[1769]: I0508 00:46:09.281503 1769 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cilium-config-path\" (UniqueName: \"kubernetes.io/configmap/9bf27ce4-e87e-45ca-916f-5ead7169257b-cilium-config-path\") pod \"cilium-x2mhj\" (UID: \"9bf27ce4-e87e-45ca-916f-5ead7169257b\") " pod="kube-system/cilium-x2mhj" May 8 00:46:09.281671 kubelet[1769]: I0508 00:46:09.281517 1769 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"xtables-lock\" (UniqueName: \"kubernetes.io/host-path/b02b7e54-1024-4cef-821a-8b0032ced10f-xtables-lock\") pod \"kube-proxy-5b4qz\" (UID: \"b02b7e54-1024-4cef-821a-8b0032ced10f\") " 
pod="kube-system/kube-proxy-5b4qz" May 8 00:46:09.281845 kubelet[1769]: I0508 00:46:09.281533 1769 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/b02b7e54-1024-4cef-821a-8b0032ced10f-lib-modules\") pod \"kube-proxy-5b4qz\" (UID: \"b02b7e54-1024-4cef-821a-8b0032ced10f\") " pod="kube-system/kube-proxy-5b4qz" May 8 00:46:09.281845 kubelet[1769]: I0508 00:46:09.281554 1769 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"xtables-lock\" (UniqueName: \"kubernetes.io/host-path/9bf27ce4-e87e-45ca-916f-5ead7169257b-xtables-lock\") pod \"cilium-x2mhj\" (UID: \"9bf27ce4-e87e-45ca-916f-5ead7169257b\") " pod="kube-system/cilium-x2mhj" May 8 00:46:09.281845 kubelet[1769]: I0508 00:46:09.281573 1769 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-proc-sys-net\" (UniqueName: \"kubernetes.io/host-path/9bf27ce4-e87e-45ca-916f-5ead7169257b-host-proc-sys-net\") pod \"cilium-x2mhj\" (UID: \"9bf27ce4-e87e-45ca-916f-5ead7169257b\") " pod="kube-system/cilium-x2mhj" May 8 00:46:09.281845 kubelet[1769]: I0508 00:46:09.281594 1769 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-proxy\" (UniqueName: \"kubernetes.io/configmap/b02b7e54-1024-4cef-821a-8b0032ced10f-kube-proxy\") pod \"kube-proxy-5b4qz\" (UID: \"b02b7e54-1024-4cef-821a-8b0032ced10f\") " pod="kube-system/kube-proxy-5b4qz" May 8 00:46:09.283542 systemd[1]: Created slice kubepods-besteffort-podb02b7e54_1024_4cef_821a_8b0032ced10f.slice - libcontainer container kubepods-besteffort-podb02b7e54_1024_4cef_821a_8b0032ced10f.slice. May 8 00:46:09.581959 kubelet[1769]: E0508 00:46:09.581827 1769 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" May 8 00:46:09.582708 containerd[1460]: time="2025-05-08T00:46:09.582657810Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:cilium-x2mhj,Uid:9bf27ce4-e87e-45ca-916f-5ead7169257b,Namespace:kube-system,Attempt:0,}" May 8 00:46:09.596327 kubelet[1769]: E0508 00:46:09.596280 1769 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" May 8 00:46:09.596769 containerd[1460]: time="2025-05-08T00:46:09.596736908Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-proxy-5b4qz,Uid:b02b7e54-1024-4cef-821a-8b0032ced10f,Namespace:kube-system,Attempt:0,}" May 8 00:46:10.210944 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount1018916296.mount: Deactivated successfully. 
May 8 00:46:10.220556 containerd[1460]: time="2025-05-08T00:46:10.220509768Z" level=info msg="ImageCreate event name:\"registry.k8s.io/pause:3.8\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"} labels:{key:\"io.cri-containerd.pinned\" value:\"pinned\"}" May 8 00:46:10.225222 containerd[1460]: time="2025-05-08T00:46:10.225182793Z" level=info msg="stop pulling image registry.k8s.io/pause:3.8: active requests=0, bytes read=312056" May 8 00:46:10.226311 containerd[1460]: time="2025-05-08T00:46:10.226253512Z" level=info msg="ImageUpdate event name:\"registry.k8s.io/pause:3.8\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"} labels:{key:\"io.cri-containerd.pinned\" value:\"pinned\"}" May 8 00:46:10.227424 containerd[1460]: time="2025-05-08T00:46:10.227376117Z" level=info msg="ImageCreate event name:\"sha256:4873874c08efc72e9729683a83ffbb7502ee729e9a5ac097723806ea7fa13517\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"} labels:{key:\"io.cri-containerd.pinned\" value:\"pinned\"}" May 8 00:46:10.228047 containerd[1460]: time="2025-05-08T00:46:10.227998344Z" level=info msg="stop pulling image registry.k8s.io/pause:3.8: active requests=0, bytes read=0" May 8 00:46:10.231878 containerd[1460]: time="2025-05-08T00:46:10.231847584Z" level=info msg="ImageCreate event name:\"registry.k8s.io/pause@sha256:9001185023633d17a2f98ff69b6ff2615b8ea02a825adffa40422f51dfdcde9d\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"} labels:{key:\"io.cri-containerd.pinned\" value:\"pinned\"}" May 8 00:46:10.233083 containerd[1460]: time="2025-05-08T00:46:10.233053086Z" level=info msg="Pulled image \"registry.k8s.io/pause:3.8\" with image id \"sha256:4873874c08efc72e9729683a83ffbb7502ee729e9a5ac097723806ea7fa13517\", repo tag \"registry.k8s.io/pause:3.8\", repo digest \"registry.k8s.io/pause@sha256:9001185023633d17a2f98ff69b6ff2615b8ea02a825adffa40422f51dfdcde9d\", size \"311286\" in 650.266915ms" May 8 00:46:10.234081 containerd[1460]: time="2025-05-08T00:46:10.234049414Z" level=info msg="Pulled image \"registry.k8s.io/pause:3.8\" with image id \"sha256:4873874c08efc72e9729683a83ffbb7502ee729e9a5ac097723806ea7fa13517\", repo tag \"registry.k8s.io/pause:3.8\", repo digest \"registry.k8s.io/pause@sha256:9001185023633d17a2f98ff69b6ff2615b8ea02a825adffa40422f51dfdcde9d\", size \"311286\" in 637.249027ms" May 8 00:46:10.261990 kubelet[1769]: E0508 00:46:10.261921 1769 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" May 8 00:46:10.334371 containerd[1460]: time="2025-05-08T00:46:10.334243530Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 May 8 00:46:10.334371 containerd[1460]: time="2025-05-08T00:46:10.334308432Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 May 8 00:46:10.335040 containerd[1460]: time="2025-05-08T00:46:10.334972457Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 May 8 00:46:10.335233 containerd[1460]: time="2025-05-08T00:46:10.335168796Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 May 8 00:46:10.336855 containerd[1460]: time="2025-05-08T00:46:10.336778465Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." 
runtime=io.containerd.runc.v2 type=io.containerd.event.v1 May 8 00:46:10.336855 containerd[1460]: time="2025-05-08T00:46:10.336834199Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 May 8 00:46:10.337053 containerd[1460]: time="2025-05-08T00:46:10.336964073Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 May 8 00:46:10.338090 containerd[1460]: time="2025-05-08T00:46:10.338026275Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 May 8 00:46:10.398952 systemd[1]: run-containerd-runc-k8s.io-10390812f22571681821afb0b46ca13a045594983d68e9e7af4313043e5fd803-runc.JkkRya.mount: Deactivated successfully. May 8 00:46:10.406547 systemd[1]: Started cri-containerd-10390812f22571681821afb0b46ca13a045594983d68e9e7af4313043e5fd803.scope - libcontainer container 10390812f22571681821afb0b46ca13a045594983d68e9e7af4313043e5fd803. May 8 00:46:10.408473 systemd[1]: Started cri-containerd-8831c64e30ff1c087dbf2f9b93d7d99ed0e099dd8ae674766a69e26339dc1186.scope - libcontainer container 8831c64e30ff1c087dbf2f9b93d7d99ed0e099dd8ae674766a69e26339dc1186. May 8 00:46:10.429522 containerd[1460]: time="2025-05-08T00:46:10.429465279Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:cilium-x2mhj,Uid:9bf27ce4-e87e-45ca-916f-5ead7169257b,Namespace:kube-system,Attempt:0,} returns sandbox id \"10390812f22571681821afb0b46ca13a045594983d68e9e7af4313043e5fd803\"" May 8 00:46:10.430667 kubelet[1769]: E0508 00:46:10.430645 1769 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" May 8 00:46:10.432060 containerd[1460]: time="2025-05-08T00:46:10.432022215Z" level=info msg="PullImage \"quay.io/cilium/cilium:v1.12.5@sha256:06ce2b0a0a472e73334a7504ee5c5d8b2e2d7b72ef728ad94e564740dd505be5\"" May 8 00:46:10.434735 containerd[1460]: time="2025-05-08T00:46:10.434661576Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-proxy-5b4qz,Uid:b02b7e54-1024-4cef-821a-8b0032ced10f,Namespace:kube-system,Attempt:0,} returns sandbox id \"8831c64e30ff1c087dbf2f9b93d7d99ed0e099dd8ae674766a69e26339dc1186\"" May 8 00:46:10.435507 kubelet[1769]: E0508 00:46:10.435464 1769 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" May 8 00:46:11.263015 kubelet[1769]: E0508 00:46:11.262961 1769 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" May 8 00:46:12.264003 kubelet[1769]: E0508 00:46:12.263951 1769 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" May 8 00:46:13.264119 kubelet[1769]: E0508 00:46:13.264077 1769 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" May 8 00:46:14.060277 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount1729075858.mount: Deactivated successfully. 
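
The recurring `Nameserver limits exceeded` warnings mean the node's resolv.conf lists more servers than the three the resolver (and hence the kubelet) will use, so the list is truncated to `1.1.1.1 1.0.0.1 8.8.8.8`. A self-contained sketch of that truncation (illustrative only; the extra fourth server in the example input is an assumption):

```go
package main

import (
	"bufio"
	"fmt"
	"strings"
)

const maxNameservers = 3 // resolver limit that triggers the kubelet warning above

// applyNameserverLimit extracts nameserver entries and keeps only the first three.
func applyNameserverLimit(resolvConf string) []string {
	var servers []string
	sc := bufio.NewScanner(strings.NewReader(resolvConf))
	for sc.Scan() {
		fields := strings.Fields(sc.Text())
		if len(fields) >= 2 && fields[0] == "nameserver" {
			servers = append(servers, fields[1])
		}
	}
	if len(servers) > maxNameservers {
		fmt.Println("Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is:",
			strings.Join(servers[:maxNameservers], " "))
		servers = servers[:maxNameservers]
	}
	return servers
}

func main() {
	conf := "nameserver 1.1.1.1\nnameserver 1.0.0.1\nnameserver 8.8.8.8\nnameserver 9.9.9.9\n"
	fmt.Println(applyNameserverLimit(conf))
}
```
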
May 8 00:46:14.265527 kubelet[1769]: E0508 00:46:14.265475 1769 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" May 8 00:46:15.266205 kubelet[1769]: E0508 00:46:15.266154 1769 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" May 8 00:46:16.267036 kubelet[1769]: E0508 00:46:16.266993 1769 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" May 8 00:46:17.267387 kubelet[1769]: E0508 00:46:17.267343 1769 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" May 8 00:46:18.267549 kubelet[1769]: E0508 00:46:18.267495 1769 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" May 8 00:46:18.395421 containerd[1460]: time="2025-05-08T00:46:18.395343089Z" level=info msg="ImageCreate event name:\"quay.io/cilium/cilium@sha256:06ce2b0a0a472e73334a7504ee5c5d8b2e2d7b72ef728ad94e564740dd505be5\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" May 8 00:46:18.398691 containerd[1460]: time="2025-05-08T00:46:18.398650753Z" level=info msg="stop pulling image quay.io/cilium/cilium@sha256:06ce2b0a0a472e73334a7504ee5c5d8b2e2d7b72ef728ad94e564740dd505be5: active requests=0, bytes read=166730503" May 8 00:46:18.422171 containerd[1460]: time="2025-05-08T00:46:18.422135263Z" level=info msg="ImageCreate event name:\"sha256:3e35b3e9f295e7748482d40ed499b0ff7961f1f128d479d8e6682b3245bba69b\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" May 8 00:46:18.423545 containerd[1460]: time="2025-05-08T00:46:18.423518788Z" level=info msg="Pulled image \"quay.io/cilium/cilium:v1.12.5@sha256:06ce2b0a0a472e73334a7504ee5c5d8b2e2d7b72ef728ad94e564740dd505be5\" with image id \"sha256:3e35b3e9f295e7748482d40ed499b0ff7961f1f128d479d8e6682b3245bba69b\", repo tag \"\", repo digest \"quay.io/cilium/cilium@sha256:06ce2b0a0a472e73334a7504ee5c5d8b2e2d7b72ef728ad94e564740dd505be5\", size \"166719855\" in 7.991459343s" May 8 00:46:18.423619 containerd[1460]: time="2025-05-08T00:46:18.423546420Z" level=info msg="PullImage \"quay.io/cilium/cilium:v1.12.5@sha256:06ce2b0a0a472e73334a7504ee5c5d8b2e2d7b72ef728ad94e564740dd505be5\" returns image reference \"sha256:3e35b3e9f295e7748482d40ed499b0ff7961f1f128d479d8e6682b3245bba69b\"" May 8 00:46:18.424931 containerd[1460]: time="2025-05-08T00:46:18.424886804Z" level=info msg="PullImage \"registry.k8s.io/kube-proxy:v1.32.4\"" May 8 00:46:18.425985 containerd[1460]: time="2025-05-08T00:46:18.425958044Z" level=info msg="CreateContainer within sandbox \"10390812f22571681821afb0b46ca13a045594983d68e9e7af4313043e5fd803\" for container &ContainerMetadata{Name:mount-cgroup,Attempt:0,}" May 8 00:46:18.533251 containerd[1460]: time="2025-05-08T00:46:18.533154634Z" level=info msg="CreateContainer within sandbox \"10390812f22571681821afb0b46ca13a045594983d68e9e7af4313043e5fd803\" for &ContainerMetadata{Name:mount-cgroup,Attempt:0,} returns container id \"7f7525f5f223506fd88a7cf81a1c5f161b48f1e380c5c92a898b3ffb0145247a\"" May 8 00:46:18.533844 containerd[1460]: time="2025-05-08T00:46:18.533806106Z" level=info msg="StartContainer for \"7f7525f5f223506fd88a7cf81a1c5f161b48f1e380c5c92a898b3ffb0145247a\"" May 8 00:46:18.562564 systemd[1]: Started cri-containerd-7f7525f5f223506fd88a7cf81a1c5f161b48f1e380c5c92a898b3ffb0145247a.scope - libcontainer container 
7f7525f5f223506fd88a7cf81a1c5f161b48f1e380c5c92a898b3ffb0145247a. May 8 00:46:18.598027 systemd[1]: cri-containerd-7f7525f5f223506fd88a7cf81a1c5f161b48f1e380c5c92a898b3ffb0145247a.scope: Deactivated successfully. May 8 00:46:18.602067 containerd[1460]: time="2025-05-08T00:46:18.602030245Z" level=info msg="StartContainer for \"7f7525f5f223506fd88a7cf81a1c5f161b48f1e380c5c92a898b3ffb0145247a\" returns successfully" May 8 00:46:18.803058 kubelet[1769]: E0508 00:46:18.802943 1769 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" May 8 00:46:19.268770 kubelet[1769]: E0508 00:46:19.268613 1769 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" May 8 00:46:19.453855 containerd[1460]: time="2025-05-08T00:46:19.453794651Z" level=info msg="shim disconnected" id=7f7525f5f223506fd88a7cf81a1c5f161b48f1e380c5c92a898b3ffb0145247a namespace=k8s.io May 8 00:46:19.453855 containerd[1460]: time="2025-05-08T00:46:19.453851237Z" level=warning msg="cleaning up after shim disconnected" id=7f7525f5f223506fd88a7cf81a1c5f161b48f1e380c5c92a898b3ffb0145247a namespace=k8s.io May 8 00:46:19.454233 containerd[1460]: time="2025-05-08T00:46:19.453864162Z" level=info msg="cleaning up dead shim" namespace=k8s.io May 8 00:46:19.489545 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-7f7525f5f223506fd88a7cf81a1c5f161b48f1e380c5c92a898b3ffb0145247a-rootfs.mount: Deactivated successfully. May 8 00:46:19.805571 kubelet[1769]: E0508 00:46:19.805520 1769 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" May 8 00:46:19.809390 containerd[1460]: time="2025-05-08T00:46:19.809347547Z" level=info msg="CreateContainer within sandbox \"10390812f22571681821afb0b46ca13a045594983d68e9e7af4313043e5fd803\" for container &ContainerMetadata{Name:apply-sysctl-overwrites,Attempt:0,}" May 8 00:46:20.268801 kubelet[1769]: E0508 00:46:20.268703 1769 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" May 8 00:46:20.363171 containerd[1460]: time="2025-05-08T00:46:20.363117651Z" level=info msg="CreateContainer within sandbox \"10390812f22571681821afb0b46ca13a045594983d68e9e7af4313043e5fd803\" for &ContainerMetadata{Name:apply-sysctl-overwrites,Attempt:0,} returns container id \"c49b84c720b406facb1dd6bce61fd80f1bca6b364a17107356be65abb038291d\"" May 8 00:46:20.363512 containerd[1460]: time="2025-05-08T00:46:20.363474380Z" level=info msg="StartContainer for \"c49b84c720b406facb1dd6bce61fd80f1bca6b364a17107356be65abb038291d\"" May 8 00:46:20.392666 systemd[1]: Started cri-containerd-c49b84c720b406facb1dd6bce61fd80f1bca6b364a17107356be65abb038291d.scope - libcontainer container c49b84c720b406facb1dd6bce61fd80f1bca6b364a17107356be65abb038291d. May 8 00:46:20.428600 systemd[1]: systemd-sysctl.service: Deactivated successfully. May 8 00:46:20.428825 systemd[1]: Stopped systemd-sysctl.service - Apply Kernel Variables. May 8 00:46:20.429020 systemd[1]: Stopping systemd-sysctl.service - Apply Kernel Variables... May 8 00:46:20.439683 systemd[1]: Starting systemd-sysctl.service - Apply Kernel Variables... May 8 00:46:20.439869 systemd[1]: cri-containerd-c49b84c720b406facb1dd6bce61fd80f1bca6b364a17107356be65abb038291d.scope: Deactivated successfully. 
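
The `apply-sysctl-overwrites` step, and the systemd-sysctl.service stop/start churn around it, both come down to writing kernel parameters under /proc/sys. A minimal sketch (the specific key is an example, not taken from this log; writing it needs root):

```go
package main

import (
	"fmt"
	"os"
	"path/filepath"
	"strings"
)

// setSysctl writes a value under /proc/sys, e.g. "net.ipv4.ip_forward" -> "1".
func setSysctl(key, value string) error {
	path := filepath.Join("/proc/sys", strings.ReplaceAll(key, ".", "/"))
	return os.WriteFile(path, []byte(value), 0o644)
}

func main() {
	if err := setSysctl("net.ipv4.ip_forward", "1"); err != nil {
		fmt.Fprintln(os.Stderr, "sysctl write failed (needs root):", err)
	}
}
```
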
May 8 00:46:20.463694 systemd[1]: Finished systemd-sysctl.service - Apply Kernel Variables. May 8 00:46:20.474472 containerd[1460]: time="2025-05-08T00:46:20.474424381Z" level=info msg="StartContainer for \"c49b84c720b406facb1dd6bce61fd80f1bca6b364a17107356be65abb038291d\" returns successfully" May 8 00:46:20.492772 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-c49b84c720b406facb1dd6bce61fd80f1bca6b364a17107356be65abb038291d-rootfs.mount: Deactivated successfully. May 8 00:46:20.605280 containerd[1460]: time="2025-05-08T00:46:20.605162869Z" level=info msg="shim disconnected" id=c49b84c720b406facb1dd6bce61fd80f1bca6b364a17107356be65abb038291d namespace=k8s.io May 8 00:46:20.605280 containerd[1460]: time="2025-05-08T00:46:20.605218213Z" level=warning msg="cleaning up after shim disconnected" id=c49b84c720b406facb1dd6bce61fd80f1bca6b364a17107356be65abb038291d namespace=k8s.io May 8 00:46:20.605280 containerd[1460]: time="2025-05-08T00:46:20.605227971Z" level=info msg="cleaning up dead shim" namespace=k8s.io May 8 00:46:20.807745 kubelet[1769]: E0508 00:46:20.807710 1769 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" May 8 00:46:20.811388 containerd[1460]: time="2025-05-08T00:46:20.809305412Z" level=info msg="CreateContainer within sandbox \"10390812f22571681821afb0b46ca13a045594983d68e9e7af4313043e5fd803\" for container &ContainerMetadata{Name:mount-bpf-fs,Attempt:0,}" May 8 00:46:21.269591 kubelet[1769]: E0508 00:46:21.269559 1769 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" May 8 00:46:21.634814 containerd[1460]: time="2025-05-08T00:46:21.634684447Z" level=info msg="CreateContainer within sandbox \"10390812f22571681821afb0b46ca13a045594983d68e9e7af4313043e5fd803\" for &ContainerMetadata{Name:mount-bpf-fs,Attempt:0,} returns container id \"8662d99d8205c28e1b19f0165af1ad9ab4deb427bb0c2149118a2d43be291dd1\"" May 8 00:46:21.635802 containerd[1460]: time="2025-05-08T00:46:21.635755676Z" level=info msg="StartContainer for \"8662d99d8205c28e1b19f0165af1ad9ab4deb427bb0c2149118a2d43be291dd1\"" May 8 00:46:21.671538 systemd[1]: Started cri-containerd-8662d99d8205c28e1b19f0165af1ad9ab4deb427bb0c2149118a2d43be291dd1.scope - libcontainer container 8662d99d8205c28e1b19f0165af1ad9ab4deb427bb0c2149118a2d43be291dd1. May 8 00:46:21.672836 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount3527705130.mount: Deactivated successfully. May 8 00:46:21.706816 systemd[1]: cri-containerd-8662d99d8205c28e1b19f0165af1ad9ab4deb427bb0c2149118a2d43be291dd1.scope: Deactivated successfully. May 8 00:46:21.828613 containerd[1460]: time="2025-05-08T00:46:21.828558940Z" level=info msg="StartContainer for \"8662d99d8205c28e1b19f0165af1ad9ab4deb427bb0c2149118a2d43be291dd1\" returns successfully" May 8 00:46:21.846519 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-8662d99d8205c28e1b19f0165af1ad9ab4deb427bb0c2149118a2d43be291dd1-rootfs.mount: Deactivated successfully. 
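
The `mount-bpf-fs` step makes sure the BPF filesystem is mounted at /sys/fs/bpf before the agent loads its programs; on Linux that is a single mount(2) call, equivalent to `mount -t bpf bpffs /sys/fs/bpf`. A sketch (Linux only, needs root):

```go
package main

import (
	"fmt"
	"os"
	"syscall"
)

func main() {
	// Equivalent of: mount -t bpf bpffs /sys/fs/bpf
	if err := syscall.Mount("bpffs", "/sys/fs/bpf", "bpf", 0, ""); err != nil {
		fmt.Fprintln(os.Stderr, "mounting bpffs failed (needs root, or it is already mounted):", err)
		os.Exit(1)
	}
	fmt.Println("bpffs mounted at /sys/fs/bpf")
}
```
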
May 8 00:46:22.163884 containerd[1460]: time="2025-05-08T00:46:22.163820335Z" level=info msg="shim disconnected" id=8662d99d8205c28e1b19f0165af1ad9ab4deb427bb0c2149118a2d43be291dd1 namespace=k8s.io May 8 00:46:22.163884 containerd[1460]: time="2025-05-08T00:46:22.163875739Z" level=warning msg="cleaning up after shim disconnected" id=8662d99d8205c28e1b19f0165af1ad9ab4deb427bb0c2149118a2d43be291dd1 namespace=k8s.io May 8 00:46:22.164171 containerd[1460]: time="2025-05-08T00:46:22.163888984Z" level=info msg="cleaning up dead shim" namespace=k8s.io May 8 00:46:22.270755 kubelet[1769]: E0508 00:46:22.270710 1769 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" May 8 00:46:22.834531 kubelet[1769]: E0508 00:46:22.834228 1769 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" May 8 00:46:22.836094 containerd[1460]: time="2025-05-08T00:46:22.836057810Z" level=info msg="CreateContainer within sandbox \"10390812f22571681821afb0b46ca13a045594983d68e9e7af4313043e5fd803\" for container &ContainerMetadata{Name:clean-cilium-state,Attempt:0,}" May 8 00:46:23.271391 kubelet[1769]: E0508 00:46:23.271258 1769 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" May 8 00:46:23.495170 containerd[1460]: time="2025-05-08T00:46:23.495099694Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-proxy:v1.32.4\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" May 8 00:46:23.552154 containerd[1460]: time="2025-05-08T00:46:23.551929401Z" level=info msg="stop pulling image registry.k8s.io/kube-proxy:v1.32.4: active requests=0, bytes read=30917856" May 8 00:46:23.612741 containerd[1460]: time="2025-05-08T00:46:23.612675944Z" level=info msg="ImageCreate event name:\"sha256:608f0c8bf7f9651ca79f170235ea5eefb978a0c1da132e7477a88ad37d171ad3\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" May 8 00:46:23.623512 containerd[1460]: time="2025-05-08T00:46:23.623433933Z" level=info msg="CreateContainer within sandbox \"10390812f22571681821afb0b46ca13a045594983d68e9e7af4313043e5fd803\" for &ContainerMetadata{Name:clean-cilium-state,Attempt:0,} returns container id \"3cb8886202568ad0d528ac41642f2b31156305e19f60f6eabe3dfdee505e351a\"" May 8 00:46:23.624163 containerd[1460]: time="2025-05-08T00:46:23.624094141Z" level=info msg="StartContainer for \"3cb8886202568ad0d528ac41642f2b31156305e19f60f6eabe3dfdee505e351a\"" May 8 00:46:23.635478 containerd[1460]: time="2025-05-08T00:46:23.635426367Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-proxy@sha256:152638222ecf265eb8e5352e3c50e8fc520994e8ffcff1ee1490c975f7fc2b36\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" May 8 00:46:23.636519 containerd[1460]: time="2025-05-08T00:46:23.636476427Z" level=info msg="Pulled image \"registry.k8s.io/kube-proxy:v1.32.4\" with image id \"sha256:608f0c8bf7f9651ca79f170235ea5eefb978a0c1da132e7477a88ad37d171ad3\", repo tag \"registry.k8s.io/kube-proxy:v1.32.4\", repo digest \"registry.k8s.io/kube-proxy@sha256:152638222ecf265eb8e5352e3c50e8fc520994e8ffcff1ee1490c975f7fc2b36\", size \"30916875\" in 5.211542433s" May 8 00:46:23.636572 containerd[1460]: time="2025-05-08T00:46:23.636527513Z" level=info msg="PullImage \"registry.k8s.io/kube-proxy:v1.32.4\" returns image reference 
\"sha256:608f0c8bf7f9651ca79f170235ea5eefb978a0c1da132e7477a88ad37d171ad3\"" May 8 00:46:23.638960 containerd[1460]: time="2025-05-08T00:46:23.638910372Z" level=info msg="CreateContainer within sandbox \"8831c64e30ff1c087dbf2f9b93d7d99ed0e099dd8ae674766a69e26339dc1186\" for container &ContainerMetadata{Name:kube-proxy,Attempt:0,}" May 8 00:46:23.719559 systemd[1]: Started cri-containerd-3cb8886202568ad0d528ac41642f2b31156305e19f60f6eabe3dfdee505e351a.scope - libcontainer container 3cb8886202568ad0d528ac41642f2b31156305e19f60f6eabe3dfdee505e351a. May 8 00:46:23.756068 systemd[1]: cri-containerd-3cb8886202568ad0d528ac41642f2b31156305e19f60f6eabe3dfdee505e351a.scope: Deactivated successfully. May 8 00:46:23.809782 containerd[1460]: time="2025-05-08T00:46:23.809347002Z" level=info msg="StartContainer for \"3cb8886202568ad0d528ac41642f2b31156305e19f60f6eabe3dfdee505e351a\" returns successfully" May 8 00:46:23.838717 kubelet[1769]: E0508 00:46:23.838329 1769 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" May 8 00:46:23.896529 containerd[1460]: time="2025-05-08T00:46:23.896462486Z" level=info msg="CreateContainer within sandbox \"8831c64e30ff1c087dbf2f9b93d7d99ed0e099dd8ae674766a69e26339dc1186\" for &ContainerMetadata{Name:kube-proxy,Attempt:0,} returns container id \"9ec3993b285d760fb1b5b6ef228b744eb79598eb42271cb799edbcccaa13c681\"" May 8 00:46:23.897317 containerd[1460]: time="2025-05-08T00:46:23.897266164Z" level=info msg="StartContainer for \"9ec3993b285d760fb1b5b6ef228b744eb79598eb42271cb799edbcccaa13c681\"" May 8 00:46:24.272313 kubelet[1769]: E0508 00:46:24.272175 1769 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" May 8 00:46:24.324545 systemd[1]: Started cri-containerd-9ec3993b285d760fb1b5b6ef228b744eb79598eb42271cb799edbcccaa13c681.scope - libcontainer container 9ec3993b285d760fb1b5b6ef228b744eb79598eb42271cb799edbcccaa13c681. May 8 00:46:24.412633 containerd[1460]: time="2025-05-08T00:46:24.412566209Z" level=info msg="StartContainer for \"9ec3993b285d760fb1b5b6ef228b744eb79598eb42271cb799edbcccaa13c681\" returns successfully" May 8 00:46:24.413177 containerd[1460]: time="2025-05-08T00:46:24.413128764Z" level=info msg="shim disconnected" id=3cb8886202568ad0d528ac41642f2b31156305e19f60f6eabe3dfdee505e351a namespace=k8s.io May 8 00:46:24.413177 containerd[1460]: time="2025-05-08T00:46:24.413168399Z" level=warning msg="cleaning up after shim disconnected" id=3cb8886202568ad0d528ac41642f2b31156305e19f60f6eabe3dfdee505e351a namespace=k8s.io May 8 00:46:24.413177 containerd[1460]: time="2025-05-08T00:46:24.413177476Z" level=info msg="cleaning up dead shim" namespace=k8s.io May 8 00:46:24.470588 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-3cb8886202568ad0d528ac41642f2b31156305e19f60f6eabe3dfdee505e351a-rootfs.mount: Deactivated successfully. 
May 8 00:46:24.840220 kubelet[1769]: E0508 00:46:24.840180 1769 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" May 8 00:46:24.842383 kubelet[1769]: E0508 00:46:24.842353 1769 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" May 8 00:46:24.843864 containerd[1460]: time="2025-05-08T00:46:24.843829887Z" level=info msg="CreateContainer within sandbox \"10390812f22571681821afb0b46ca13a045594983d68e9e7af4313043e5fd803\" for container &ContainerMetadata{Name:cilium-agent,Attempt:0,}" May 8 00:46:24.913825 kubelet[1769]: I0508 00:46:24.913755 1769 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/kube-proxy-5b4qz" podStartSLOduration=4.712248433 podStartE2EDuration="17.913734518s" podCreationTimestamp="2025-05-08 00:46:07 +0000 UTC" firstStartedPulling="2025-05-08 00:46:10.435942067 +0000 UTC m=+3.570940835" lastFinishedPulling="2025-05-08 00:46:23.637428142 +0000 UTC m=+16.772426920" observedRunningTime="2025-05-08 00:46:24.865614689 +0000 UTC m=+18.000613467" watchObservedRunningTime="2025-05-08 00:46:24.913734518 +0000 UTC m=+18.048733296" May 8 00:46:24.980312 containerd[1460]: time="2025-05-08T00:46:24.980258989Z" level=info msg="CreateContainer within sandbox \"10390812f22571681821afb0b46ca13a045594983d68e9e7af4313043e5fd803\" for &ContainerMetadata{Name:cilium-agent,Attempt:0,} returns container id \"5c0adea16f5dd9d6edd155167efaf17ce6767f7e0673abbad190940f1d6a7f27\"" May 8 00:46:24.980757 containerd[1460]: time="2025-05-08T00:46:24.980727568Z" level=info msg="StartContainer for \"5c0adea16f5dd9d6edd155167efaf17ce6767f7e0673abbad190940f1d6a7f27\"" May 8 00:46:25.015546 systemd[1]: Started cri-containerd-5c0adea16f5dd9d6edd155167efaf17ce6767f7e0673abbad190940f1d6a7f27.scope - libcontainer container 5c0adea16f5dd9d6edd155167efaf17ce6767f7e0673abbad190940f1d6a7f27. 
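
The `pod_startup_latency_tracker` entry above for kube-proxy-5b4qz can be checked by hand: the E2E duration is the gap from podCreationTimestamp to the observed-running time (17.913734518s), and the SLO duration subtracts the image-pull window from firstStartedPulling to lastFinishedPulling (about 13.2s), leaving roughly 4.712s. A small sketch of that arithmetic using the values from the log (the tracker itself uses monotonic readings, so the last decimal places differ slightly):

```go
package main

import (
	"fmt"
	"time"
)

// mustParse parses timestamps in the format the kubelet logs them in.
func mustParse(s string) time.Time {
	t, err := time.Parse("2006-01-02 15:04:05.999999999 -0700 MST", s)
	if err != nil {
		panic(err)
	}
	return t
}

func main() {
	// Values copied from the kube-proxy-5b4qz pod_startup_latency_tracker entry.
	created := mustParse("2025-05-08 00:46:07 +0000 UTC")
	observedRunning := mustParse("2025-05-08 00:46:24.913734518 +0000 UTC")
	firstPull := mustParse("2025-05-08 00:46:10.435942067 +0000 UTC")
	lastPull := mustParse("2025-05-08 00:46:23.637428142 +0000 UTC")

	e2e := observedRunning.Sub(created) // podStartE2EDuration: 17.913734518s
	pulling := lastPull.Sub(firstPull)  // time spent pulling images: ~13.201s
	slo := e2e - pulling                // podStartSLOduration: ~4.712s

	fmt.Println("E2E:", e2e, "pulling:", pulling, "SLO:", slo)
}
```
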
May 8 00:46:25.070219 containerd[1460]: time="2025-05-08T00:46:25.070167732Z" level=info msg="StartContainer for \"5c0adea16f5dd9d6edd155167efaf17ce6767f7e0673abbad190940f1d6a7f27\" returns successfully" May 8 00:46:25.273291 kubelet[1769]: E0508 00:46:25.273187 1769 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" May 8 00:46:25.287394 kubelet[1769]: I0508 00:46:25.287357 1769 kubelet_node_status.go:502] "Fast updating node status as it just became ready" May 8 00:46:25.651425 kernel: Initializing XFRM netlink socket May 8 00:46:25.846376 kubelet[1769]: E0508 00:46:25.846347 1769 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" May 8 00:46:25.846522 kubelet[1769]: E0508 00:46:25.846390 1769 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" May 8 00:46:26.274369 kubelet[1769]: E0508 00:46:26.274268 1769 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" May 8 00:46:26.847684 kubelet[1769]: E0508 00:46:26.847654 1769 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" May 8 00:46:27.261073 kubelet[1769]: E0508 00:46:27.260943 1769 file.go:104] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" May 8 00:46:27.275426 kubelet[1769]: E0508 00:46:27.275377 1769 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" May 8 00:46:27.351266 systemd-networkd[1391]: cilium_host: Link UP May 8 00:46:27.351442 systemd-networkd[1391]: cilium_net: Link UP May 8 00:46:27.351446 systemd-networkd[1391]: cilium_net: Gained carrier May 8 00:46:27.352303 systemd-networkd[1391]: cilium_host: Gained carrier May 8 00:46:27.352908 systemd-networkd[1391]: cilium_host: Gained IPv6LL May 8 00:46:27.397554 systemd-networkd[1391]: cilium_net: Gained IPv6LL May 8 00:46:27.465943 systemd-networkd[1391]: cilium_vxlan: Link UP May 8 00:46:27.465952 systemd-networkd[1391]: cilium_vxlan: Gained carrier May 8 00:46:27.765425 kernel: NET: Registered PF_ALG protocol family May 8 00:46:27.848719 kubelet[1769]: E0508 00:46:27.848692 1769 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" May 8 00:46:28.275960 kubelet[1769]: E0508 00:46:28.275923 1769 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" May 8 00:46:28.403300 systemd-networkd[1391]: lxc_health: Link UP May 8 00:46:28.413091 systemd-networkd[1391]: lxc_health: Gained carrier May 8 00:46:29.249571 systemd-networkd[1391]: cilium_vxlan: Gained IPv6LL May 8 00:46:29.276364 kubelet[1769]: E0508 00:46:29.276311 1769 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" May 8 00:46:29.583421 kubelet[1769]: E0508 00:46:29.583279 1769 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" May 8 00:46:29.633560 
systemd-networkd[1391]: lxc_health: Gained IPv6LL May 8 00:46:29.648071 kubelet[1769]: I0508 00:46:29.648008 1769 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/cilium-x2mhj" podStartSLOduration=14.654756978 podStartE2EDuration="22.647990489s" podCreationTimestamp="2025-05-08 00:46:07 +0000 UTC" firstStartedPulling="2025-05-08 00:46:10.431462956 +0000 UTC m=+3.566461734" lastFinishedPulling="2025-05-08 00:46:18.424696467 +0000 UTC m=+11.559695245" observedRunningTime="2025-05-08 00:46:25.889880909 +0000 UTC m=+19.024879687" watchObservedRunningTime="2025-05-08 00:46:29.647990489 +0000 UTC m=+22.782989267" May 8 00:46:29.700351 systemd[1]: Created slice kubepods-besteffort-pod74cfdf99_7b0f_4808_a7c3_980483ec6039.slice - libcontainer container kubepods-besteffort-pod74cfdf99_7b0f_4808_a7c3_980483ec6039.slice. May 8 00:46:29.704694 kubelet[1769]: I0508 00:46:29.704659 1769 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-2pb7p\" (UniqueName: \"kubernetes.io/projected/74cfdf99-7b0f-4808-a7c3-980483ec6039-kube-api-access-2pb7p\") pod \"nginx-deployment-7fcdb87857-28wkv\" (UID: \"74cfdf99-7b0f-4808-a7c3-980483ec6039\") " pod="default/nginx-deployment-7fcdb87857-28wkv" May 8 00:46:30.004100 containerd[1460]: time="2025-05-08T00:46:30.003764179Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:nginx-deployment-7fcdb87857-28wkv,Uid:74cfdf99-7b0f-4808-a7c3-980483ec6039,Namespace:default,Attempt:0,}" May 8 00:46:30.277457 kubelet[1769]: E0508 00:46:30.277358 1769 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" May 8 00:46:30.366442 systemd-networkd[1391]: lxca68c848ed031: Link UP May 8 00:46:30.381437 kernel: eth0: renamed from tmp5a43c May 8 00:46:30.389618 systemd-networkd[1391]: lxca68c848ed031: Gained carrier May 8 00:46:30.487305 kubelet[1769]: I0508 00:46:30.487141 1769 prober_manager.go:312] "Failed to trigger a manual run" probe="Readiness" May 8 00:46:30.487576 kubelet[1769]: E0508 00:46:30.487557 1769 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" May 8 00:46:30.853233 kubelet[1769]: E0508 00:46:30.853199 1769 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" May 8 00:46:31.278366 kubelet[1769]: E0508 00:46:31.278280 1769 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" May 8 00:46:31.809606 systemd-networkd[1391]: lxca68c848ed031: Gained IPv6LL May 8 00:46:32.278912 kubelet[1769]: E0508 00:46:32.278872 1769 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" May 8 00:46:32.637659 containerd[1460]: time="2025-05-08T00:46:32.636928045Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 May 8 00:46:32.638076 containerd[1460]: time="2025-05-08T00:46:32.637567930Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 May 8 00:46:32.638076 containerd[1460]: time="2025-05-08T00:46:32.637580134Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." 
runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 May 8 00:46:32.638076 containerd[1460]: time="2025-05-08T00:46:32.637668733Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 May 8 00:46:32.651508 systemd[1]: run-containerd-runc-k8s.io-5a43c2cbc64e9840164b71afdf806daf56a47b4a9610827a3decb252172e8087-runc.Q27da1.mount: Deactivated successfully. May 8 00:46:32.661527 systemd[1]: Started cri-containerd-5a43c2cbc64e9840164b71afdf806daf56a47b4a9610827a3decb252172e8087.scope - libcontainer container 5a43c2cbc64e9840164b71afdf806daf56a47b4a9610827a3decb252172e8087. May 8 00:46:32.671651 systemd-resolved[1333]: Failed to determine the local hostname and LLMNR/mDNS names, ignoring: No such device or address May 8 00:46:32.695339 containerd[1460]: time="2025-05-08T00:46:32.695285847Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:nginx-deployment-7fcdb87857-28wkv,Uid:74cfdf99-7b0f-4808-a7c3-980483ec6039,Namespace:default,Attempt:0,} returns sandbox id \"5a43c2cbc64e9840164b71afdf806daf56a47b4a9610827a3decb252172e8087\"" May 8 00:46:32.698637 containerd[1460]: time="2025-05-08T00:46:32.698598102Z" level=info msg="PullImage \"ghcr.io/flatcar/nginx:latest\"" May 8 00:46:33.279764 kubelet[1769]: E0508 00:46:33.279725 1769 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" May 8 00:46:34.280296 kubelet[1769]: E0508 00:46:34.280227 1769 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" May 8 00:46:35.240952 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount3491250565.mount: Deactivated successfully. May 8 00:46:35.281229 kubelet[1769]: E0508 00:46:35.281177 1769 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" May 8 00:46:36.282154 kubelet[1769]: E0508 00:46:36.282106 1769 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" May 8 00:46:36.577165 containerd[1460]: time="2025-05-08T00:46:36.577061241Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/nginx:latest\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" May 8 00:46:36.580432 containerd[1460]: time="2025-05-08T00:46:36.580342317Z" level=info msg="stop pulling image ghcr.io/flatcar/nginx:latest: active requests=0, bytes read=73306220" May 8 00:46:36.581873 containerd[1460]: time="2025-05-08T00:46:36.581841105Z" level=info msg="ImageCreate event name:\"sha256:7e2dd24abce21cd256091445aca4b7eb00774264c2b0a8714701dd7091509efa\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" May 8 00:46:36.584381 containerd[1460]: time="2025-05-08T00:46:36.584339017Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/nginx@sha256:beabce8f1782671ba500ddff99dd260fbf9c5ec85fb9c3162e35a3c40bafd023\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" May 8 00:46:36.585482 containerd[1460]: time="2025-05-08T00:46:36.585428725Z" level=info msg="Pulled image \"ghcr.io/flatcar/nginx:latest\" with image id \"sha256:7e2dd24abce21cd256091445aca4b7eb00774264c2b0a8714701dd7091509efa\", repo tag \"ghcr.io/flatcar/nginx:latest\", repo digest \"ghcr.io/flatcar/nginx@sha256:beabce8f1782671ba500ddff99dd260fbf9c5ec85fb9c3162e35a3c40bafd023\", size \"73306098\" in 3.886768283s" May 8 00:46:36.585482 containerd[1460]: time="2025-05-08T00:46:36.585482247Z" level=info msg="PullImage 
\"ghcr.io/flatcar/nginx:latest\" returns image reference \"sha256:7e2dd24abce21cd256091445aca4b7eb00774264c2b0a8714701dd7091509efa\"" May 8 00:46:36.587627 containerd[1460]: time="2025-05-08T00:46:36.587600886Z" level=info msg="CreateContainer within sandbox \"5a43c2cbc64e9840164b71afdf806daf56a47b4a9610827a3decb252172e8087\" for container &ContainerMetadata{Name:nginx,Attempt:0,}" May 8 00:46:36.602668 containerd[1460]: time="2025-05-08T00:46:36.602615212Z" level=info msg="CreateContainer within sandbox \"5a43c2cbc64e9840164b71afdf806daf56a47b4a9610827a3decb252172e8087\" for &ContainerMetadata{Name:nginx,Attempt:0,} returns container id \"e07ed1a278a4936ecdfc7eceb16ce917f2735e3751169a39335d9b52a1e50fba\"" May 8 00:46:36.603144 containerd[1460]: time="2025-05-08T00:46:36.603122469Z" level=info msg="StartContainer for \"e07ed1a278a4936ecdfc7eceb16ce917f2735e3751169a39335d9b52a1e50fba\"" May 8 00:46:36.629536 systemd[1]: Started cri-containerd-e07ed1a278a4936ecdfc7eceb16ce917f2735e3751169a39335d9b52a1e50fba.scope - libcontainer container e07ed1a278a4936ecdfc7eceb16ce917f2735e3751169a39335d9b52a1e50fba. May 8 00:46:36.653185 containerd[1460]: time="2025-05-08T00:46:36.653146723Z" level=info msg="StartContainer for \"e07ed1a278a4936ecdfc7eceb16ce917f2735e3751169a39335d9b52a1e50fba\" returns successfully" May 8 00:46:36.872225 kubelet[1769]: I0508 00:46:36.872097 1769 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="default/nginx-deployment-7fcdb87857-28wkv" podStartSLOduration=3.9816828490000002 podStartE2EDuration="7.872080354s" podCreationTimestamp="2025-05-08 00:46:29 +0000 UTC" firstStartedPulling="2025-05-08 00:46:32.696040744 +0000 UTC m=+25.831039522" lastFinishedPulling="2025-05-08 00:46:36.586438249 +0000 UTC m=+29.721437027" observedRunningTime="2025-05-08 00:46:36.872047341 +0000 UTC m=+30.007046109" watchObservedRunningTime="2025-05-08 00:46:36.872080354 +0000 UTC m=+30.007079122" May 8 00:46:37.282389 kubelet[1769]: E0508 00:46:37.282363 1769 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" May 8 00:46:38.282771 kubelet[1769]: E0508 00:46:38.282716 1769 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" May 8 00:46:39.283845 kubelet[1769]: E0508 00:46:39.283781 1769 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" May 8 00:46:40.285005 kubelet[1769]: E0508 00:46:40.284945 1769 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" May 8 00:46:41.239675 systemd[1]: Created slice kubepods-besteffort-pod95550d78_7329_4266_b5b1_4c1167722d22.slice - libcontainer container kubepods-besteffort-pod95550d78_7329_4266_b5b1_4c1167722d22.slice. 
May 8 00:46:41.263037 kubelet[1769]: I0508 00:46:41.262983 1769 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"data\" (UniqueName: \"kubernetes.io/empty-dir/95550d78-7329-4266-b5b1-4c1167722d22-data\") pod \"nfs-server-provisioner-0\" (UID: \"95550d78-7329-4266-b5b1-4c1167722d22\") " pod="default/nfs-server-provisioner-0" May 8 00:46:41.263037 kubelet[1769]: I0508 00:46:41.263030 1769 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-j894m\" (UniqueName: \"kubernetes.io/projected/95550d78-7329-4266-b5b1-4c1167722d22-kube-api-access-j894m\") pod \"nfs-server-provisioner-0\" (UID: \"95550d78-7329-4266-b5b1-4c1167722d22\") " pod="default/nfs-server-provisioner-0" May 8 00:46:41.285355 kubelet[1769]: E0508 00:46:41.285307 1769 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" May 8 00:46:41.543471 containerd[1460]: time="2025-05-08T00:46:41.543291362Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:nfs-server-provisioner-0,Uid:95550d78-7329-4266-b5b1-4c1167722d22,Namespace:default,Attempt:0,}" May 8 00:46:41.636286 systemd-networkd[1391]: lxc9d77fd6a16a6: Link UP May 8 00:46:41.647782 kernel: eth0: renamed from tmp53097 May 8 00:46:41.652429 systemd-networkd[1391]: lxc9d77fd6a16a6: Gained carrier May 8 00:46:41.886858 containerd[1460]: time="2025-05-08T00:46:41.886046123Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 May 8 00:46:41.886858 containerd[1460]: time="2025-05-08T00:46:41.886708259Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 May 8 00:46:41.886858 containerd[1460]: time="2025-05-08T00:46:41.886722386Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 May 8 00:46:41.886858 containerd[1460]: time="2025-05-08T00:46:41.886829339Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 May 8 00:46:41.911542 systemd[1]: Started cri-containerd-53097e0279b9d31f1bba73e42154c427802b39e2fa3dbacd9d6109fadffa0b45.scope - libcontainer container 53097e0279b9d31f1bba73e42154c427802b39e2fa3dbacd9d6109fadffa0b45. 
May 8 00:46:41.923285 systemd-resolved[1333]: Failed to determine the local hostname and LLMNR/mDNS names, ignoring: No such device or address May 8 00:46:41.948511 containerd[1460]: time="2025-05-08T00:46:41.948462973Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:nfs-server-provisioner-0,Uid:95550d78-7329-4266-b5b1-4c1167722d22,Namespace:default,Attempt:0,} returns sandbox id \"53097e0279b9d31f1bba73e42154c427802b39e2fa3dbacd9d6109fadffa0b45\"" May 8 00:46:41.950445 containerd[1460]: time="2025-05-08T00:46:41.950370363Z" level=info msg="PullImage \"registry.k8s.io/sig-storage/nfs-provisioner:v4.0.8\"" May 8 00:46:42.286333 kubelet[1769]: E0508 00:46:42.286291 1769 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" May 8 00:46:42.689546 systemd-networkd[1391]: lxc9d77fd6a16a6: Gained IPv6LL May 8 00:46:43.286866 kubelet[1769]: E0508 00:46:43.286830 1769 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" May 8 00:46:43.906252 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount1710858977.mount: Deactivated successfully. May 8 00:46:44.150443 update_engine[1452]: I20250508 00:46:44.150352 1452 update_attempter.cc:509] Updating boot flags... May 8 00:46:44.187431 kernel: BTRFS warning: duplicate device /dev/vda3 devid 1 generation 39 scanned by (udev-worker) (2995) May 8 00:46:44.219447 kernel: BTRFS warning: duplicate device /dev/vda3 devid 1 generation 39 scanned by (udev-worker) (2995) May 8 00:46:44.251426 kernel: BTRFS warning: duplicate device /dev/vda3 devid 1 generation 39 scanned by (udev-worker) (2995) May 8 00:46:44.287998 kubelet[1769]: E0508 00:46:44.287961 1769 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" May 8 00:46:45.288356 kubelet[1769]: E0508 00:46:45.288310 1769 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" May 8 00:46:46.177028 containerd[1460]: time="2025-05-08T00:46:46.176964625Z" level=info msg="ImageCreate event name:\"registry.k8s.io/sig-storage/nfs-provisioner:v4.0.8\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" May 8 00:46:46.178250 containerd[1460]: time="2025-05-08T00:46:46.178179152Z" level=info msg="stop pulling image registry.k8s.io/sig-storage/nfs-provisioner:v4.0.8: active requests=0, bytes read=91039406" May 8 00:46:46.179971 containerd[1460]: time="2025-05-08T00:46:46.179935595Z" level=info msg="ImageCreate event name:\"sha256:fd0b16f70b66b72bcb2f91d556fa33eba02729c44ffc5f2c16130e7f9fbed3c4\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" May 8 00:46:46.183268 containerd[1460]: time="2025-05-08T00:46:46.183229437Z" level=info msg="ImageCreate event name:\"registry.k8s.io/sig-storage/nfs-provisioner@sha256:c825f3d5e28bde099bd7a3daace28772d412c9157ad47fa752a9ad0baafc118d\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" May 8 00:46:46.184213 containerd[1460]: time="2025-05-08T00:46:46.184172061Z" level=info msg="Pulled image \"registry.k8s.io/sig-storage/nfs-provisioner:v4.0.8\" with image id \"sha256:fd0b16f70b66b72bcb2f91d556fa33eba02729c44ffc5f2c16130e7f9fbed3c4\", repo tag \"registry.k8s.io/sig-storage/nfs-provisioner:v4.0.8\", repo digest \"registry.k8s.io/sig-storage/nfs-provisioner@sha256:c825f3d5e28bde099bd7a3daace28772d412c9157ad47fa752a9ad0baafc118d\", size \"91036984\" in 4.233763043s" May 8 00:46:46.184252 
containerd[1460]: time="2025-05-08T00:46:46.184214190Z" level=info msg="PullImage \"registry.k8s.io/sig-storage/nfs-provisioner:v4.0.8\" returns image reference \"sha256:fd0b16f70b66b72bcb2f91d556fa33eba02729c44ffc5f2c16130e7f9fbed3c4\"" May 8 00:46:46.186150 containerd[1460]: time="2025-05-08T00:46:46.186121859Z" level=info msg="CreateContainer within sandbox \"53097e0279b9d31f1bba73e42154c427802b39e2fa3dbacd9d6109fadffa0b45\" for container &ContainerMetadata{Name:nfs-server-provisioner,Attempt:0,}" May 8 00:46:46.202829 containerd[1460]: time="2025-05-08T00:46:46.202795695Z" level=info msg="CreateContainer within sandbox \"53097e0279b9d31f1bba73e42154c427802b39e2fa3dbacd9d6109fadffa0b45\" for &ContainerMetadata{Name:nfs-server-provisioner,Attempt:0,} returns container id \"13065d9336b05eb1380af3945f6fa56eb4ee0ed8b291f55899b0babd4da3b938\"" May 8 00:46:46.203150 containerd[1460]: time="2025-05-08T00:46:46.203125027Z" level=info msg="StartContainer for \"13065d9336b05eb1380af3945f6fa56eb4ee0ed8b291f55899b0babd4da3b938\"" May 8 00:46:46.273527 systemd[1]: Started cri-containerd-13065d9336b05eb1380af3945f6fa56eb4ee0ed8b291f55899b0babd4da3b938.scope - libcontainer container 13065d9336b05eb1380af3945f6fa56eb4ee0ed8b291f55899b0babd4da3b938. May 8 00:46:46.288707 kubelet[1769]: E0508 00:46:46.288673 1769 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" May 8 00:46:46.336155 containerd[1460]: time="2025-05-08T00:46:46.336099220Z" level=info msg="StartContainer for \"13065d9336b05eb1380af3945f6fa56eb4ee0ed8b291f55899b0babd4da3b938\" returns successfully" May 8 00:46:46.896070 kubelet[1769]: I0508 00:46:46.896012 1769 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="default/nfs-server-provisioner-0" podStartSLOduration=1.660923801 podStartE2EDuration="5.895993961s" podCreationTimestamp="2025-05-08 00:46:41 +0000 UTC" firstStartedPulling="2025-05-08 00:46:41.949924587 +0000 UTC m=+35.084923365" lastFinishedPulling="2025-05-08 00:46:46.184994747 +0000 UTC m=+39.319993525" observedRunningTime="2025-05-08 00:46:46.895547897 +0000 UTC m=+40.030546675" watchObservedRunningTime="2025-05-08 00:46:46.895993961 +0000 UTC m=+40.030992729" May 8 00:46:47.260739 kubelet[1769]: E0508 00:46:47.260675 1769 file.go:104] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" May 8 00:46:47.289189 kubelet[1769]: E0508 00:46:47.289150 1769 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" May 8 00:46:48.290128 kubelet[1769]: E0508 00:46:48.290077 1769 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" May 8 00:46:49.290778 kubelet[1769]: E0508 00:46:49.290743 1769 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" May 8 00:46:50.291813 kubelet[1769]: E0508 00:46:50.291756 1769 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" May 8 00:46:51.292928 kubelet[1769]: E0508 00:46:51.292850 1769 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" May 8 00:46:52.293842 kubelet[1769]: E0508 00:46:52.293777 1769 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" May 8 00:46:53.294377 kubelet[1769]: E0508 
00:46:53.294318 1769 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" May 8 00:46:54.294526 kubelet[1769]: E0508 00:46:54.294463 1769 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" May 8 00:46:55.295033 kubelet[1769]: E0508 00:46:55.294981 1769 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" May 8 00:46:55.583669 systemd[1]: Created slice kubepods-besteffort-pod9c5c04bf_7860_4cdd_baf5_4829f0118610.slice - libcontainer container kubepods-besteffort-pod9c5c04bf_7860_4cdd_baf5_4829f0118610.slice. May 8 00:46:55.726952 kubelet[1769]: I0508 00:46:55.726917 1769 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"pvc-dcfe4dc3-0ce4-4ba6-900b-9ffd57393b1a\" (UniqueName: \"kubernetes.io/nfs/9c5c04bf-7860-4cdd-baf5-4829f0118610-pvc-dcfe4dc3-0ce4-4ba6-900b-9ffd57393b1a\") pod \"test-pod-1\" (UID: \"9c5c04bf-7860-4cdd-baf5-4829f0118610\") " pod="default/test-pod-1" May 8 00:46:55.726952 kubelet[1769]: I0508 00:46:55.726969 1769 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-tgqqt\" (UniqueName: \"kubernetes.io/projected/9c5c04bf-7860-4cdd-baf5-4829f0118610-kube-api-access-tgqqt\") pod \"test-pod-1\" (UID: \"9c5c04bf-7860-4cdd-baf5-4829f0118610\") " pod="default/test-pod-1" May 8 00:46:55.855428 kernel: FS-Cache: Loaded May 8 00:46:55.922582 kernel: RPC: Registered named UNIX socket transport module. May 8 00:46:55.922675 kernel: RPC: Registered udp transport module. May 8 00:46:55.922694 kernel: RPC: Registered tcp transport module. May 8 00:46:55.922715 kernel: RPC: Registered tcp-with-tls transport module. May 8 00:46:55.923935 kernel: RPC: Registered tcp NFSv4.1 backchannel transport module. May 8 00:46:56.182718 kernel: NFS: Registering the id_resolver key type May 8 00:46:56.182821 kernel: Key type id_resolver registered May 8 00:46:56.182835 kernel: Key type id_legacy registered May 8 00:46:56.207006 nfsidmap[3164]: nss_getpwnam: name 'root@nfs-server-provisioner.default.svc.cluster.local' does not map into domain 'localdomain' May 8 00:46:56.211580 nfsidmap[3167]: nss_name_to_gid: name 'root@nfs-server-provisioner.default.svc.cluster.local' does not map into domain 'localdomain' May 8 00:46:56.295714 kubelet[1769]: E0508 00:46:56.295667 1769 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" May 8 00:46:56.486255 containerd[1460]: time="2025-05-08T00:46:56.486215480Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:test-pod-1,Uid:9c5c04bf-7860-4cdd-baf5-4829f0118610,Namespace:default,Attempt:0,}" May 8 00:46:56.610987 systemd-networkd[1391]: lxc5cc927da94bc: Link UP May 8 00:46:56.622466 kernel: eth0: renamed from tmpacd55 May 8 00:46:56.629126 systemd-networkd[1391]: lxc5cc927da94bc: Gained carrier May 8 00:46:56.831245 containerd[1460]: time="2025-05-08T00:46:56.830887930Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 May 8 00:46:56.831245 containerd[1460]: time="2025-05-08T00:46:56.830957241Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." 
runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 May 8 00:46:56.831245 containerd[1460]: time="2025-05-08T00:46:56.830976768Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 May 8 00:46:56.831245 containerd[1460]: time="2025-05-08T00:46:56.831066818Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 May 8 00:46:56.862595 systemd[1]: Started cri-containerd-acd558453d251453cc43d58500d43ea1aed80cf3d4eff021a26784c3b46cb6d8.scope - libcontainer container acd558453d251453cc43d58500d43ea1aed80cf3d4eff021a26784c3b46cb6d8. May 8 00:46:56.874382 systemd-resolved[1333]: Failed to determine the local hostname and LLMNR/mDNS names, ignoring: No such device or address May 8 00:46:56.898666 containerd[1460]: time="2025-05-08T00:46:56.898621023Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:test-pod-1,Uid:9c5c04bf-7860-4cdd-baf5-4829f0118610,Namespace:default,Attempt:0,} returns sandbox id \"acd558453d251453cc43d58500d43ea1aed80cf3d4eff021a26784c3b46cb6d8\"" May 8 00:46:56.899958 containerd[1460]: time="2025-05-08T00:46:56.899822958Z" level=info msg="PullImage \"ghcr.io/flatcar/nginx:latest\"" May 8 00:46:57.296784 kubelet[1769]: E0508 00:46:57.296734 1769 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" May 8 00:46:57.381301 containerd[1460]: time="2025-05-08T00:46:57.381255233Z" level=info msg="ImageUpdate event name:\"ghcr.io/flatcar/nginx:latest\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" May 8 00:46:57.399951 containerd[1460]: time="2025-05-08T00:46:57.399911786Z" level=info msg="stop pulling image ghcr.io/flatcar/nginx:latest: active requests=0, bytes read=61" May 8 00:46:57.402529 containerd[1460]: time="2025-05-08T00:46:57.402492798Z" level=info msg="Pulled image \"ghcr.io/flatcar/nginx:latest\" with image id \"sha256:7e2dd24abce21cd256091445aca4b7eb00774264c2b0a8714701dd7091509efa\", repo tag \"ghcr.io/flatcar/nginx:latest\", repo digest \"ghcr.io/flatcar/nginx@sha256:beabce8f1782671ba500ddff99dd260fbf9c5ec85fb9c3162e35a3c40bafd023\", size \"73306098\" in 502.628332ms" May 8 00:46:57.402529 containerd[1460]: time="2025-05-08T00:46:57.402524297Z" level=info msg="PullImage \"ghcr.io/flatcar/nginx:latest\" returns image reference \"sha256:7e2dd24abce21cd256091445aca4b7eb00774264c2b0a8714701dd7091509efa\"" May 8 00:46:57.404281 containerd[1460]: time="2025-05-08T00:46:57.404251761Z" level=info msg="CreateContainer within sandbox \"acd558453d251453cc43d58500d43ea1aed80cf3d4eff021a26784c3b46cb6d8\" for container &ContainerMetadata{Name:test,Attempt:0,}" May 8 00:46:57.510298 containerd[1460]: time="2025-05-08T00:46:57.510240609Z" level=info msg="CreateContainer within sandbox \"acd558453d251453cc43d58500d43ea1aed80cf3d4eff021a26784c3b46cb6d8\" for &ContainerMetadata{Name:test,Attempt:0,} returns container id \"a2478e39e9cf0db1eea66d95e60b090a6a1d264f9df4a6593b49b4d6c272b884\"" May 8 00:46:57.510857 containerd[1460]: time="2025-05-08T00:46:57.510821353Z" level=info msg="StartContainer for \"a2478e39e9cf0db1eea66d95e60b090a6a1d264f9df4a6593b49b4d6c272b884\"" May 8 00:46:57.540575 systemd[1]: Started cri-containerd-a2478e39e9cf0db1eea66d95e60b090a6a1d264f9df4a6593b49b4d6c272b884.scope - libcontainer container a2478e39e9cf0db1eea66d95e60b090a6a1d264f9df4a6593b49b4d6c272b884. 
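
The nfsidmap warnings a little earlier ('root@nfs-server-provisioner.default.svc.cluster.local' does not map into domain 'localdomain') are the usual NFSv4 id-mapping complaint: the principal's domain suffix differs from the node's local id-mapping domain, which rpc.idmapd/nfsidmap normally take from /etc/idmapd.conf. A minimal Python sketch for checking that setting, assuming a conventional idmapd.conf layout:

# Sketch: read the local NFSv4 id-mapping domain that nfsidmap compares the
# principal's domain against. Assumes a conventional /etc/idmapd.conf.
import configparser

cp = configparser.ConfigParser()
cp.read("/etc/idmapd.conf")
domain = cp.get("General", "Domain",
                fallback="(unset: derived from the host's DNS domain)")
print("local id-mapping domain:", domain)
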
May 8 00:46:57.584792 containerd[1460]: time="2025-05-08T00:46:57.584665491Z" level=info msg="StartContainer for \"a2478e39e9cf0db1eea66d95e60b090a6a1d264f9df4a6593b49b4d6c272b884\" returns successfully" May 8 00:46:57.929026 kubelet[1769]: I0508 00:46:57.928876 1769 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="default/test-pod-1" podStartSLOduration=16.425293991 podStartE2EDuration="16.928858597s" podCreationTimestamp="2025-05-08 00:46:41 +0000 UTC" firstStartedPulling="2025-05-08 00:46:56.899526279 +0000 UTC m=+50.034525047" lastFinishedPulling="2025-05-08 00:46:57.403090875 +0000 UTC m=+50.538089653" observedRunningTime="2025-05-08 00:46:57.928576976 +0000 UTC m=+51.063575754" watchObservedRunningTime="2025-05-08 00:46:57.928858597 +0000 UTC m=+51.063857375" May 8 00:46:58.297044 kubelet[1769]: E0508 00:46:58.296994 1769 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" May 8 00:46:58.369571 systemd-networkd[1391]: lxc5cc927da94bc: Gained IPv6LL May 8 00:46:59.297361 kubelet[1769]: E0508 00:46:59.297282 1769 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" May 8 00:47:00.297609 kubelet[1769]: E0508 00:47:00.297561 1769 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" May 8 00:47:01.298108 kubelet[1769]: E0508 00:47:01.298061 1769 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" May 8 00:47:02.298961 kubelet[1769]: E0508 00:47:02.298906 1769 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" May 8 00:47:03.300137 kubelet[1769]: E0508 00:47:03.300075 1769 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" May 8 00:47:03.882958 containerd[1460]: time="2025-05-08T00:47:03.882908575Z" level=error msg="failed to reload cni configuration after receiving fs change event(REMOVE \"/etc/cni/net.d/05-cilium.conf\")" error="cni config load failed: no network config found in /etc/cni/net.d: cni plugin not initialized: failed to load cni config" May 8 00:47:03.890269 containerd[1460]: time="2025-05-08T00:47:03.890234426Z" level=info msg="StopContainer for \"5c0adea16f5dd9d6edd155167efaf17ce6767f7e0673abbad190940f1d6a7f27\" with timeout 2 (s)" May 8 00:47:03.890547 containerd[1460]: time="2025-05-08T00:47:03.890523641Z" level=info msg="Stop container \"5c0adea16f5dd9d6edd155167efaf17ce6767f7e0673abbad190940f1d6a7f27\" with signal terminated" May 8 00:47:03.897318 systemd-networkd[1391]: lxc_health: Link DOWN May 8 00:47:03.897326 systemd-networkd[1391]: lxc_health: Lost carrier May 8 00:47:03.925859 systemd[1]: cri-containerd-5c0adea16f5dd9d6edd155167efaf17ce6767f7e0673abbad190940f1d6a7f27.scope: Deactivated successfully. May 8 00:47:03.926521 systemd[1]: cri-containerd-5c0adea16f5dd9d6edd155167efaf17ce6767f7e0673abbad190940f1d6a7f27.scope: Consumed 7.090s CPU time. May 8 00:47:03.944897 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-5c0adea16f5dd9d6edd155167efaf17ce6767f7e0673abbad190940f1d6a7f27-rootfs.mount: Deactivated successfully. 
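
The pod_startup_latency_tracker entries above break a pod's startup into pull time and end-to-end time; podStartE2EDuration runs from the pod's creation timestamp to the time it was first observed running. A minimal sketch that recomputes a comparable end-to-end figure from the API, assuming a reachable kubeconfig and the kubernetes Python client; only the pod name and namespace are taken from the log:

# Sketch: recompute a pod's end-to-end startup duration (creation -> Ready),
# comparable to the podStartE2EDuration figure the kubelet logs above.
from kubernetes import client, config

config.load_kube_config()
v1 = client.CoreV1Api()

pod = v1.read_namespaced_pod(name="test-pod-1", namespace="default")
created = pod.metadata.creation_timestamp

ready_at = None
for cond in pod.status.conditions or []:
    if cond.type == "Ready" and cond.status == "True":
        ready_at = cond.last_transition_time

if ready_at is not None:
    print(f"pod start E2E duration: {(ready_at - created).total_seconds():.3f}s")
else:
    print("pod is not Ready yet")
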
May 8 00:47:04.045877 containerd[1460]: time="2025-05-08T00:47:04.045816030Z" level=info msg="shim disconnected" id=5c0adea16f5dd9d6edd155167efaf17ce6767f7e0673abbad190940f1d6a7f27 namespace=k8s.io May 8 00:47:04.045877 containerd[1460]: time="2025-05-08T00:47:04.045868399Z" level=warning msg="cleaning up after shim disconnected" id=5c0adea16f5dd9d6edd155167efaf17ce6767f7e0673abbad190940f1d6a7f27 namespace=k8s.io May 8 00:47:04.045877 containerd[1460]: time="2025-05-08T00:47:04.045877226Z" level=info msg="cleaning up dead shim" namespace=k8s.io May 8 00:47:04.106586 containerd[1460]: time="2025-05-08T00:47:04.106530330Z" level=info msg="StopContainer for \"5c0adea16f5dd9d6edd155167efaf17ce6767f7e0673abbad190940f1d6a7f27\" returns successfully" May 8 00:47:04.107253 containerd[1460]: time="2025-05-08T00:47:04.107219776Z" level=info msg="StopPodSandbox for \"10390812f22571681821afb0b46ca13a045594983d68e9e7af4313043e5fd803\"" May 8 00:47:04.107253 containerd[1460]: time="2025-05-08T00:47:04.107251427Z" level=info msg="Container to stop \"7f7525f5f223506fd88a7cf81a1c5f161b48f1e380c5c92a898b3ffb0145247a\" must be in running or unknown state, current state \"CONTAINER_EXITED\"" May 8 00:47:04.107253 containerd[1460]: time="2025-05-08T00:47:04.107262898Z" level=info msg="Container to stop \"5c0adea16f5dd9d6edd155167efaf17ce6767f7e0673abbad190940f1d6a7f27\" must be in running or unknown state, current state \"CONTAINER_EXITED\"" May 8 00:47:04.107580 containerd[1460]: time="2025-05-08T00:47:04.107271404Z" level=info msg="Container to stop \"c49b84c720b406facb1dd6bce61fd80f1bca6b364a17107356be65abb038291d\" must be in running or unknown state, current state \"CONTAINER_EXITED\"" May 8 00:47:04.107580 containerd[1460]: time="2025-05-08T00:47:04.107280351Z" level=info msg="Container to stop \"8662d99d8205c28e1b19f0165af1ad9ab4deb427bb0c2149118a2d43be291dd1\" must be in running or unknown state, current state \"CONTAINER_EXITED\"" May 8 00:47:04.107580 containerd[1460]: time="2025-05-08T00:47:04.107288827Z" level=info msg="Container to stop \"3cb8886202568ad0d528ac41642f2b31156305e19f60f6eabe3dfdee505e351a\" must be in running or unknown state, current state \"CONTAINER_EXITED\"" May 8 00:47:04.109216 systemd[1]: run-containerd-io.containerd.grpc.v1.cri-sandboxes-10390812f22571681821afb0b46ca13a045594983d68e9e7af4313043e5fd803-shm.mount: Deactivated successfully. May 8 00:47:04.113886 systemd[1]: cri-containerd-10390812f22571681821afb0b46ca13a045594983d68e9e7af4313043e5fd803.scope: Deactivated successfully. May 8 00:47:04.133693 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-10390812f22571681821afb0b46ca13a045594983d68e9e7af4313043e5fd803-rootfs.mount: Deactivated successfully. 
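
The StopContainer call above runs with a 2-second timeout: containerd delivers SIGTERM first and forcibly kills the container only if it is still running when the timeout expires, after which the shim is cleaned up. From the API side the corresponding knob is the deletion grace period; a minimal sketch, with a hypothetical pod name, assuming the kubernetes Python client:

# Sketch: delete a pod with an explicit grace period, which the kubelet
# translates into a CRI StopContainer call with that timeout (SIGTERM first,
# forced kill when the timeout expires). The pod name here is hypothetical.
from kubernetes import client, config

config.load_kube_config()
v1 = client.CoreV1Api()

v1.delete_namespaced_pod(
    name="cilium-example",       # hypothetical name, for illustration only
    namespace="kube-system",
    grace_period_seconds=2,      # mirrors the "with timeout 2 (s)" stop above
)
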
May 8 00:47:04.183425 containerd[1460]: time="2025-05-08T00:47:04.183302733Z" level=info msg="shim disconnected" id=10390812f22571681821afb0b46ca13a045594983d68e9e7af4313043e5fd803 namespace=k8s.io May 8 00:47:04.183425 containerd[1460]: time="2025-05-08T00:47:04.183378845Z" level=warning msg="cleaning up after shim disconnected" id=10390812f22571681821afb0b46ca13a045594983d68e9e7af4313043e5fd803 namespace=k8s.io May 8 00:47:04.183425 containerd[1460]: time="2025-05-08T00:47:04.183393783Z" level=info msg="cleaning up dead shim" namespace=k8s.io May 8 00:47:04.199079 containerd[1460]: time="2025-05-08T00:47:04.199023579Z" level=info msg="TearDown network for sandbox \"10390812f22571681821afb0b46ca13a045594983d68e9e7af4313043e5fd803\" successfully" May 8 00:47:04.199079 containerd[1460]: time="2025-05-08T00:47:04.199062092Z" level=info msg="StopPodSandbox for \"10390812f22571681821afb0b46ca13a045594983d68e9e7af4313043e5fd803\" returns successfully" May 8 00:47:04.300977 kubelet[1769]: E0508 00:47:04.300876 1769 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" May 8 00:47:04.378310 kubelet[1769]: I0508 00:47:04.378246 1769 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"hubble-tls\" (UniqueName: \"kubernetes.io/projected/9bf27ce4-e87e-45ca-916f-5ead7169257b-hubble-tls\") pod \"9bf27ce4-e87e-45ca-916f-5ead7169257b\" (UID: \"9bf27ce4-e87e-45ca-916f-5ead7169257b\") " May 8 00:47:04.378310 kubelet[1769]: I0508 00:47:04.378318 1769 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"cilium-run\" (UniqueName: \"kubernetes.io/host-path/9bf27ce4-e87e-45ca-916f-5ead7169257b-cilium-run\") pod \"9bf27ce4-e87e-45ca-916f-5ead7169257b\" (UID: \"9bf27ce4-e87e-45ca-916f-5ead7169257b\") " May 8 00:47:04.378505 kubelet[1769]: I0508 00:47:04.378344 1769 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"host-proc-sys-kernel\" (UniqueName: \"kubernetes.io/host-path/9bf27ce4-e87e-45ca-916f-5ead7169257b-host-proc-sys-kernel\") pod \"9bf27ce4-e87e-45ca-916f-5ead7169257b\" (UID: \"9bf27ce4-e87e-45ca-916f-5ead7169257b\") " May 8 00:47:04.378505 kubelet[1769]: I0508 00:47:04.378362 1769 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/9bf27ce4-e87e-45ca-916f-5ead7169257b-lib-modules\") pod \"9bf27ce4-e87e-45ca-916f-5ead7169257b\" (UID: \"9bf27ce4-e87e-45ca-916f-5ead7169257b\") " May 8 00:47:04.378505 kubelet[1769]: I0508 00:47:04.378380 1769 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"cilium-config-path\" (UniqueName: \"kubernetes.io/configmap/9bf27ce4-e87e-45ca-916f-5ead7169257b-cilium-config-path\") pod \"9bf27ce4-e87e-45ca-916f-5ead7169257b\" (UID: \"9bf27ce4-e87e-45ca-916f-5ead7169257b\") " May 8 00:47:04.378505 kubelet[1769]: I0508 00:47:04.378396 1769 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"clustermesh-secrets\" (UniqueName: \"kubernetes.io/secret/9bf27ce4-e87e-45ca-916f-5ead7169257b-clustermesh-secrets\") pod \"9bf27ce4-e87e-45ca-916f-5ead7169257b\" (UID: \"9bf27ce4-e87e-45ca-916f-5ead7169257b\") " May 8 00:47:04.378505 kubelet[1769]: I0508 00:47:04.378434 1769 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"bpf-maps\" (UniqueName: \"kubernetes.io/host-path/9bf27ce4-e87e-45ca-916f-5ead7169257b-bpf-maps\") pod \"9bf27ce4-e87e-45ca-916f-5ead7169257b\" (UID: 
\"9bf27ce4-e87e-45ca-916f-5ead7169257b\") " May 8 00:47:04.378505 kubelet[1769]: I0508 00:47:04.378447 1769 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"etc-cni-netd\" (UniqueName: \"kubernetes.io/host-path/9bf27ce4-e87e-45ca-916f-5ead7169257b-etc-cni-netd\") pod \"9bf27ce4-e87e-45ca-916f-5ead7169257b\" (UID: \"9bf27ce4-e87e-45ca-916f-5ead7169257b\") " May 8 00:47:04.378662 kubelet[1769]: I0508 00:47:04.378460 1769 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"xtables-lock\" (UniqueName: \"kubernetes.io/host-path/9bf27ce4-e87e-45ca-916f-5ead7169257b-xtables-lock\") pod \"9bf27ce4-e87e-45ca-916f-5ead7169257b\" (UID: \"9bf27ce4-e87e-45ca-916f-5ead7169257b\") " May 8 00:47:04.378662 kubelet[1769]: I0508 00:47:04.378475 1769 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"cilium-cgroup\" (UniqueName: \"kubernetes.io/host-path/9bf27ce4-e87e-45ca-916f-5ead7169257b-cilium-cgroup\") pod \"9bf27ce4-e87e-45ca-916f-5ead7169257b\" (UID: \"9bf27ce4-e87e-45ca-916f-5ead7169257b\") " May 8 00:47:04.378662 kubelet[1769]: I0508 00:47:04.378466 1769 operation_generator.go:780] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/9bf27ce4-e87e-45ca-916f-5ead7169257b-host-proc-sys-kernel" (OuterVolumeSpecName: "host-proc-sys-kernel") pod "9bf27ce4-e87e-45ca-916f-5ead7169257b" (UID: "9bf27ce4-e87e-45ca-916f-5ead7169257b"). InnerVolumeSpecName "host-proc-sys-kernel". PluginName "kubernetes.io/host-path", VolumeGIDValue "" May 8 00:47:04.378662 kubelet[1769]: I0508 00:47:04.378519 1769 operation_generator.go:780] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/9bf27ce4-e87e-45ca-916f-5ead7169257b-cni-path" (OuterVolumeSpecName: "cni-path") pod "9bf27ce4-e87e-45ca-916f-5ead7169257b" (UID: "9bf27ce4-e87e-45ca-916f-5ead7169257b"). InnerVolumeSpecName "cni-path". PluginName "kubernetes.io/host-path", VolumeGIDValue "" May 8 00:47:04.378662 kubelet[1769]: I0508 00:47:04.378489 1769 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"cni-path\" (UniqueName: \"kubernetes.io/host-path/9bf27ce4-e87e-45ca-916f-5ead7169257b-cni-path\") pod \"9bf27ce4-e87e-45ca-916f-5ead7169257b\" (UID: \"9bf27ce4-e87e-45ca-916f-5ead7169257b\") " May 8 00:47:04.378782 kubelet[1769]: I0508 00:47:04.378541 1769 operation_generator.go:780] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/9bf27ce4-e87e-45ca-916f-5ead7169257b-cilium-run" (OuterVolumeSpecName: "cilium-run") pod "9bf27ce4-e87e-45ca-916f-5ead7169257b" (UID: "9bf27ce4-e87e-45ca-916f-5ead7169257b"). InnerVolumeSpecName "cilium-run". 
PluginName "kubernetes.io/host-path", VolumeGIDValue "" May 8 00:47:04.378782 kubelet[1769]: I0508 00:47:04.378557 1769 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"kube-api-access-79td8\" (UniqueName: \"kubernetes.io/projected/9bf27ce4-e87e-45ca-916f-5ead7169257b-kube-api-access-79td8\") pod \"9bf27ce4-e87e-45ca-916f-5ead7169257b\" (UID: \"9bf27ce4-e87e-45ca-916f-5ead7169257b\") " May 8 00:47:04.378782 kubelet[1769]: I0508 00:47:04.378575 1769 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"hostproc\" (UniqueName: \"kubernetes.io/host-path/9bf27ce4-e87e-45ca-916f-5ead7169257b-hostproc\") pod \"9bf27ce4-e87e-45ca-916f-5ead7169257b\" (UID: \"9bf27ce4-e87e-45ca-916f-5ead7169257b\") " May 8 00:47:04.378782 kubelet[1769]: I0508 00:47:04.378591 1769 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"host-proc-sys-net\" (UniqueName: \"kubernetes.io/host-path/9bf27ce4-e87e-45ca-916f-5ead7169257b-host-proc-sys-net\") pod \"9bf27ce4-e87e-45ca-916f-5ead7169257b\" (UID: \"9bf27ce4-e87e-45ca-916f-5ead7169257b\") " May 8 00:47:04.378782 kubelet[1769]: I0508 00:47:04.378629 1769 reconciler_common.go:299] "Volume detached for volume \"host-proc-sys-kernel\" (UniqueName: \"kubernetes.io/host-path/9bf27ce4-e87e-45ca-916f-5ead7169257b-host-proc-sys-kernel\") on node \"10.0.0.140\" DevicePath \"\"" May 8 00:47:04.378782 kubelet[1769]: I0508 00:47:04.378639 1769 reconciler_common.go:299] "Volume detached for volume \"cilium-run\" (UniqueName: \"kubernetes.io/host-path/9bf27ce4-e87e-45ca-916f-5ead7169257b-cilium-run\") on node \"10.0.0.140\" DevicePath \"\"" May 8 00:47:04.378782 kubelet[1769]: I0508 00:47:04.378650 1769 reconciler_common.go:299] "Volume detached for volume \"cni-path\" (UniqueName: \"kubernetes.io/host-path/9bf27ce4-e87e-45ca-916f-5ead7169257b-cni-path\") on node \"10.0.0.140\" DevicePath \"\"" May 8 00:47:04.381794 kubelet[1769]: I0508 00:47:04.378574 1769 operation_generator.go:780] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/9bf27ce4-e87e-45ca-916f-5ead7169257b-bpf-maps" (OuterVolumeSpecName: "bpf-maps") pod "9bf27ce4-e87e-45ca-916f-5ead7169257b" (UID: "9bf27ce4-e87e-45ca-916f-5ead7169257b"). InnerVolumeSpecName "bpf-maps". PluginName "kubernetes.io/host-path", VolumeGIDValue "" May 8 00:47:04.381794 kubelet[1769]: I0508 00:47:04.378586 1769 operation_generator.go:780] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/9bf27ce4-e87e-45ca-916f-5ead7169257b-lib-modules" (OuterVolumeSpecName: "lib-modules") pod "9bf27ce4-e87e-45ca-916f-5ead7169257b" (UID: "9bf27ce4-e87e-45ca-916f-5ead7169257b"). InnerVolumeSpecName "lib-modules". PluginName "kubernetes.io/host-path", VolumeGIDValue "" May 8 00:47:04.381794 kubelet[1769]: I0508 00:47:04.378672 1769 operation_generator.go:780] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/9bf27ce4-e87e-45ca-916f-5ead7169257b-host-proc-sys-net" (OuterVolumeSpecName: "host-proc-sys-net") pod "9bf27ce4-e87e-45ca-916f-5ead7169257b" (UID: "9bf27ce4-e87e-45ca-916f-5ead7169257b"). InnerVolumeSpecName "host-proc-sys-net". PluginName "kubernetes.io/host-path", VolumeGIDValue "" May 8 00:47:04.381794 kubelet[1769]: I0508 00:47:04.378686 1769 operation_generator.go:780] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/9bf27ce4-e87e-45ca-916f-5ead7169257b-etc-cni-netd" (OuterVolumeSpecName: "etc-cni-netd") pod "9bf27ce4-e87e-45ca-916f-5ead7169257b" (UID: "9bf27ce4-e87e-45ca-916f-5ead7169257b"). 
InnerVolumeSpecName "etc-cni-netd". PluginName "kubernetes.io/host-path", VolumeGIDValue "" May 8 00:47:04.381794 kubelet[1769]: I0508 00:47:04.378700 1769 operation_generator.go:780] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/9bf27ce4-e87e-45ca-916f-5ead7169257b-xtables-lock" (OuterVolumeSpecName: "xtables-lock") pod "9bf27ce4-e87e-45ca-916f-5ead7169257b" (UID: "9bf27ce4-e87e-45ca-916f-5ead7169257b"). InnerVolumeSpecName "xtables-lock". PluginName "kubernetes.io/host-path", VolumeGIDValue "" May 8 00:47:04.381948 kubelet[1769]: I0508 00:47:04.378711 1769 operation_generator.go:780] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/9bf27ce4-e87e-45ca-916f-5ead7169257b-cilium-cgroup" (OuterVolumeSpecName: "cilium-cgroup") pod "9bf27ce4-e87e-45ca-916f-5ead7169257b" (UID: "9bf27ce4-e87e-45ca-916f-5ead7169257b"). InnerVolumeSpecName "cilium-cgroup". PluginName "kubernetes.io/host-path", VolumeGIDValue "" May 8 00:47:04.381948 kubelet[1769]: I0508 00:47:04.381675 1769 operation_generator.go:780] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/9bf27ce4-e87e-45ca-916f-5ead7169257b-hubble-tls" (OuterVolumeSpecName: "hubble-tls") pod "9bf27ce4-e87e-45ca-916f-5ead7169257b" (UID: "9bf27ce4-e87e-45ca-916f-5ead7169257b"). InnerVolumeSpecName "hubble-tls". PluginName "kubernetes.io/projected", VolumeGIDValue "" May 8 00:47:04.381948 kubelet[1769]: I0508 00:47:04.381715 1769 operation_generator.go:780] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/9bf27ce4-e87e-45ca-916f-5ead7169257b-hostproc" (OuterVolumeSpecName: "hostproc") pod "9bf27ce4-e87e-45ca-916f-5ead7169257b" (UID: "9bf27ce4-e87e-45ca-916f-5ead7169257b"). InnerVolumeSpecName "hostproc". PluginName "kubernetes.io/host-path", VolumeGIDValue "" May 8 00:47:04.382040 kubelet[1769]: I0508 00:47:04.381975 1769 operation_generator.go:780] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/9bf27ce4-e87e-45ca-916f-5ead7169257b-cilium-config-path" (OuterVolumeSpecName: "cilium-config-path") pod "9bf27ce4-e87e-45ca-916f-5ead7169257b" (UID: "9bf27ce4-e87e-45ca-916f-5ead7169257b"). InnerVolumeSpecName "cilium-config-path". PluginName "kubernetes.io/configmap", VolumeGIDValue "" May 8 00:47:04.382724 systemd[1]: var-lib-kubelet-pods-9bf27ce4\x2de87e\x2d45ca\x2d916f\x2d5ead7169257b-volumes-kubernetes.io\x7eprojected-kube\x2dapi\x2daccess\x2d79td8.mount: Deactivated successfully. May 8 00:47:04.382847 systemd[1]: var-lib-kubelet-pods-9bf27ce4\x2de87e\x2d45ca\x2d916f\x2d5ead7169257b-volumes-kubernetes.io\x7eprojected-hubble\x2dtls.mount: Deactivated successfully. May 8 00:47:04.383125 kubelet[1769]: I0508 00:47:04.383062 1769 operation_generator.go:780] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/9bf27ce4-e87e-45ca-916f-5ead7169257b-clustermesh-secrets" (OuterVolumeSpecName: "clustermesh-secrets") pod "9bf27ce4-e87e-45ca-916f-5ead7169257b" (UID: "9bf27ce4-e87e-45ca-916f-5ead7169257b"). InnerVolumeSpecName "clustermesh-secrets". PluginName "kubernetes.io/secret", VolumeGIDValue "" May 8 00:47:04.383177 kubelet[1769]: I0508 00:47:04.383140 1769 operation_generator.go:780] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/9bf27ce4-e87e-45ca-916f-5ead7169257b-kube-api-access-79td8" (OuterVolumeSpecName: "kube-api-access-79td8") pod "9bf27ce4-e87e-45ca-916f-5ead7169257b" (UID: "9bf27ce4-e87e-45ca-916f-5ead7169257b"). InnerVolumeSpecName "kube-api-access-79td8". 
PluginName "kubernetes.io/projected", VolumeGIDValue "" May 8 00:47:04.479067 kubelet[1769]: I0508 00:47:04.479022 1769 reconciler_common.go:299] "Volume detached for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/9bf27ce4-e87e-45ca-916f-5ead7169257b-lib-modules\") on node \"10.0.0.140\" DevicePath \"\"" May 8 00:47:04.479067 kubelet[1769]: I0508 00:47:04.479058 1769 reconciler_common.go:299] "Volume detached for volume \"cilium-config-path\" (UniqueName: \"kubernetes.io/configmap/9bf27ce4-e87e-45ca-916f-5ead7169257b-cilium-config-path\") on node \"10.0.0.140\" DevicePath \"\"" May 8 00:47:04.479067 kubelet[1769]: I0508 00:47:04.479069 1769 reconciler_common.go:299] "Volume detached for volume \"clustermesh-secrets\" (UniqueName: \"kubernetes.io/secret/9bf27ce4-e87e-45ca-916f-5ead7169257b-clustermesh-secrets\") on node \"10.0.0.140\" DevicePath \"\"" May 8 00:47:04.479222 kubelet[1769]: I0508 00:47:04.479080 1769 reconciler_common.go:299] "Volume detached for volume \"bpf-maps\" (UniqueName: \"kubernetes.io/host-path/9bf27ce4-e87e-45ca-916f-5ead7169257b-bpf-maps\") on node \"10.0.0.140\" DevicePath \"\"" May 8 00:47:04.479222 kubelet[1769]: I0508 00:47:04.479088 1769 reconciler_common.go:299] "Volume detached for volume \"etc-cni-netd\" (UniqueName: \"kubernetes.io/host-path/9bf27ce4-e87e-45ca-916f-5ead7169257b-etc-cni-netd\") on node \"10.0.0.140\" DevicePath \"\"" May 8 00:47:04.479222 kubelet[1769]: I0508 00:47:04.479096 1769 reconciler_common.go:299] "Volume detached for volume \"xtables-lock\" (UniqueName: \"kubernetes.io/host-path/9bf27ce4-e87e-45ca-916f-5ead7169257b-xtables-lock\") on node \"10.0.0.140\" DevicePath \"\"" May 8 00:47:04.479222 kubelet[1769]: I0508 00:47:04.479104 1769 reconciler_common.go:299] "Volume detached for volume \"cilium-cgroup\" (UniqueName: \"kubernetes.io/host-path/9bf27ce4-e87e-45ca-916f-5ead7169257b-cilium-cgroup\") on node \"10.0.0.140\" DevicePath \"\"" May 8 00:47:04.479222 kubelet[1769]: I0508 00:47:04.479111 1769 reconciler_common.go:299] "Volume detached for volume \"kube-api-access-79td8\" (UniqueName: \"kubernetes.io/projected/9bf27ce4-e87e-45ca-916f-5ead7169257b-kube-api-access-79td8\") on node \"10.0.0.140\" DevicePath \"\"" May 8 00:47:04.479222 kubelet[1769]: I0508 00:47:04.479119 1769 reconciler_common.go:299] "Volume detached for volume \"hostproc\" (UniqueName: \"kubernetes.io/host-path/9bf27ce4-e87e-45ca-916f-5ead7169257b-hostproc\") on node \"10.0.0.140\" DevicePath \"\"" May 8 00:47:04.479222 kubelet[1769]: I0508 00:47:04.479127 1769 reconciler_common.go:299] "Volume detached for volume \"host-proc-sys-net\" (UniqueName: \"kubernetes.io/host-path/9bf27ce4-e87e-45ca-916f-5ead7169257b-host-proc-sys-net\") on node \"10.0.0.140\" DevicePath \"\"" May 8 00:47:04.479222 kubelet[1769]: I0508 00:47:04.479134 1769 reconciler_common.go:299] "Volume detached for volume \"hubble-tls\" (UniqueName: \"kubernetes.io/projected/9bf27ce4-e87e-45ca-916f-5ead7169257b-hubble-tls\") on node \"10.0.0.140\" DevicePath \"\"" May 8 00:47:04.871718 systemd[1]: var-lib-kubelet-pods-9bf27ce4\x2de87e\x2d45ca\x2d916f\x2d5ead7169257b-volumes-kubernetes.io\x7esecret-clustermesh\x2dsecrets.mount: Deactivated successfully. 
May 8 00:47:04.922560 kubelet[1769]: I0508 00:47:04.922514 1769 scope.go:117] "RemoveContainer" containerID="5c0adea16f5dd9d6edd155167efaf17ce6767f7e0673abbad190940f1d6a7f27" May 8 00:47:04.924197 containerd[1460]: time="2025-05-08T00:47:04.924160411Z" level=info msg="RemoveContainer for \"5c0adea16f5dd9d6edd155167efaf17ce6767f7e0673abbad190940f1d6a7f27\"" May 8 00:47:04.927896 systemd[1]: Removed slice kubepods-burstable-pod9bf27ce4_e87e_45ca_916f_5ead7169257b.slice - libcontainer container kubepods-burstable-pod9bf27ce4_e87e_45ca_916f_5ead7169257b.slice. May 8 00:47:04.928180 systemd[1]: kubepods-burstable-pod9bf27ce4_e87e_45ca_916f_5ead7169257b.slice: Consumed 7.194s CPU time. May 8 00:47:05.009827 containerd[1460]: time="2025-05-08T00:47:05.009777647Z" level=info msg="RemoveContainer for \"5c0adea16f5dd9d6edd155167efaf17ce6767f7e0673abbad190940f1d6a7f27\" returns successfully" May 8 00:47:05.010048 kubelet[1769]: I0508 00:47:05.010009 1769 scope.go:117] "RemoveContainer" containerID="3cb8886202568ad0d528ac41642f2b31156305e19f60f6eabe3dfdee505e351a" May 8 00:47:05.011045 containerd[1460]: time="2025-05-08T00:47:05.011003422Z" level=info msg="RemoveContainer for \"3cb8886202568ad0d528ac41642f2b31156305e19f60f6eabe3dfdee505e351a\"" May 8 00:47:05.250171 containerd[1460]: time="2025-05-08T00:47:05.250099813Z" level=info msg="RemoveContainer for \"3cb8886202568ad0d528ac41642f2b31156305e19f60f6eabe3dfdee505e351a\" returns successfully" May 8 00:47:05.250480 kubelet[1769]: I0508 00:47:05.250447 1769 scope.go:117] "RemoveContainer" containerID="8662d99d8205c28e1b19f0165af1ad9ab4deb427bb0c2149118a2d43be291dd1" May 8 00:47:05.251777 containerd[1460]: time="2025-05-08T00:47:05.251742602Z" level=info msg="RemoveContainer for \"8662d99d8205c28e1b19f0165af1ad9ab4deb427bb0c2149118a2d43be291dd1\"" May 8 00:47:05.301082 kubelet[1769]: E0508 00:47:05.301016 1769 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" May 8 00:47:05.426617 containerd[1460]: time="2025-05-08T00:47:05.426555221Z" level=info msg="RemoveContainer for \"8662d99d8205c28e1b19f0165af1ad9ab4deb427bb0c2149118a2d43be291dd1\" returns successfully" May 8 00:47:05.426827 kubelet[1769]: I0508 00:47:05.426794 1769 scope.go:117] "RemoveContainer" containerID="c49b84c720b406facb1dd6bce61fd80f1bca6b364a17107356be65abb038291d" May 8 00:47:05.427792 containerd[1460]: time="2025-05-08T00:47:05.427763754Z" level=info msg="RemoveContainer for \"c49b84c720b406facb1dd6bce61fd80f1bca6b364a17107356be65abb038291d\"" May 8 00:47:05.469910 containerd[1460]: time="2025-05-08T00:47:05.469884120Z" level=info msg="RemoveContainer for \"c49b84c720b406facb1dd6bce61fd80f1bca6b364a17107356be65abb038291d\" returns successfully" May 8 00:47:05.470019 kubelet[1769]: I0508 00:47:05.470001 1769 scope.go:117] "RemoveContainer" containerID="7f7525f5f223506fd88a7cf81a1c5f161b48f1e380c5c92a898b3ffb0145247a" May 8 00:47:05.471040 containerd[1460]: time="2025-05-08T00:47:05.470980651Z" level=info msg="RemoveContainer for \"7f7525f5f223506fd88a7cf81a1c5f161b48f1e380c5c92a898b3ffb0145247a\"" May 8 00:47:05.570757 containerd[1460]: time="2025-05-08T00:47:05.570490826Z" level=info msg="RemoveContainer for \"7f7525f5f223506fd88a7cf81a1c5f161b48f1e380c5c92a898b3ffb0145247a\" returns successfully" May 8 00:47:05.570882 kubelet[1769]: I0508 00:47:05.570824 1769 scope.go:117] "RemoveContainer" containerID="5c0adea16f5dd9d6edd155167efaf17ce6767f7e0673abbad190940f1d6a7f27" May 8 00:47:05.571251 containerd[1460]: 
time="2025-05-08T00:47:05.571103919Z" level=error msg="ContainerStatus for \"5c0adea16f5dd9d6edd155167efaf17ce6767f7e0673abbad190940f1d6a7f27\" failed" error="rpc error: code = NotFound desc = an error occurred when try to find container \"5c0adea16f5dd9d6edd155167efaf17ce6767f7e0673abbad190940f1d6a7f27\": not found" May 8 00:47:05.571313 kubelet[1769]: E0508 00:47:05.571255 1769 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = an error occurred when try to find container \"5c0adea16f5dd9d6edd155167efaf17ce6767f7e0673abbad190940f1d6a7f27\": not found" containerID="5c0adea16f5dd9d6edd155167efaf17ce6767f7e0673abbad190940f1d6a7f27" May 8 00:47:05.571375 kubelet[1769]: I0508 00:47:05.571293 1769 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"containerd","ID":"5c0adea16f5dd9d6edd155167efaf17ce6767f7e0673abbad190940f1d6a7f27"} err="failed to get container status \"5c0adea16f5dd9d6edd155167efaf17ce6767f7e0673abbad190940f1d6a7f27\": rpc error: code = NotFound desc = an error occurred when try to find container \"5c0adea16f5dd9d6edd155167efaf17ce6767f7e0673abbad190940f1d6a7f27\": not found" May 8 00:47:05.571375 kubelet[1769]: I0508 00:47:05.571342 1769 scope.go:117] "RemoveContainer" containerID="3cb8886202568ad0d528ac41642f2b31156305e19f60f6eabe3dfdee505e351a" May 8 00:47:05.571791 containerd[1460]: time="2025-05-08T00:47:05.571749753Z" level=error msg="ContainerStatus for \"3cb8886202568ad0d528ac41642f2b31156305e19f60f6eabe3dfdee505e351a\" failed" error="rpc error: code = NotFound desc = an error occurred when try to find container \"3cb8886202568ad0d528ac41642f2b31156305e19f60f6eabe3dfdee505e351a\": not found" May 8 00:47:05.571912 kubelet[1769]: E0508 00:47:05.571884 1769 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = an error occurred when try to find container \"3cb8886202568ad0d528ac41642f2b31156305e19f60f6eabe3dfdee505e351a\": not found" containerID="3cb8886202568ad0d528ac41642f2b31156305e19f60f6eabe3dfdee505e351a" May 8 00:47:05.571912 kubelet[1769]: I0508 00:47:05.571907 1769 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"containerd","ID":"3cb8886202568ad0d528ac41642f2b31156305e19f60f6eabe3dfdee505e351a"} err="failed to get container status \"3cb8886202568ad0d528ac41642f2b31156305e19f60f6eabe3dfdee505e351a\": rpc error: code = NotFound desc = an error occurred when try to find container \"3cb8886202568ad0d528ac41642f2b31156305e19f60f6eabe3dfdee505e351a\": not found" May 8 00:47:05.571912 kubelet[1769]: I0508 00:47:05.571923 1769 scope.go:117] "RemoveContainer" containerID="8662d99d8205c28e1b19f0165af1ad9ab4deb427bb0c2149118a2d43be291dd1" May 8 00:47:05.572102 containerd[1460]: time="2025-05-08T00:47:05.572069534Z" level=error msg="ContainerStatus for \"8662d99d8205c28e1b19f0165af1ad9ab4deb427bb0c2149118a2d43be291dd1\" failed" error="rpc error: code = NotFound desc = an error occurred when try to find container \"8662d99d8205c28e1b19f0165af1ad9ab4deb427bb0c2149118a2d43be291dd1\": not found" May 8 00:47:05.572272 kubelet[1769]: E0508 00:47:05.572233 1769 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = an error occurred when try to find container \"8662d99d8205c28e1b19f0165af1ad9ab4deb427bb0c2149118a2d43be291dd1\": not found" containerID="8662d99d8205c28e1b19f0165af1ad9ab4deb427bb0c2149118a2d43be291dd1" May 8 00:47:05.572301 kubelet[1769]: I0508 00:47:05.572275 1769 
pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"containerd","ID":"8662d99d8205c28e1b19f0165af1ad9ab4deb427bb0c2149118a2d43be291dd1"} err="failed to get container status \"8662d99d8205c28e1b19f0165af1ad9ab4deb427bb0c2149118a2d43be291dd1\": rpc error: code = NotFound desc = an error occurred when try to find container \"8662d99d8205c28e1b19f0165af1ad9ab4deb427bb0c2149118a2d43be291dd1\": not found" May 8 00:47:05.572333 kubelet[1769]: I0508 00:47:05.572308 1769 scope.go:117] "RemoveContainer" containerID="c49b84c720b406facb1dd6bce61fd80f1bca6b364a17107356be65abb038291d" May 8 00:47:05.572545 containerd[1460]: time="2025-05-08T00:47:05.572513109Z" level=error msg="ContainerStatus for \"c49b84c720b406facb1dd6bce61fd80f1bca6b364a17107356be65abb038291d\" failed" error="rpc error: code = NotFound desc = an error occurred when try to find container \"c49b84c720b406facb1dd6bce61fd80f1bca6b364a17107356be65abb038291d\": not found" May 8 00:47:05.572675 kubelet[1769]: E0508 00:47:05.572649 1769 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = an error occurred when try to find container \"c49b84c720b406facb1dd6bce61fd80f1bca6b364a17107356be65abb038291d\": not found" containerID="c49b84c720b406facb1dd6bce61fd80f1bca6b364a17107356be65abb038291d" May 8 00:47:05.572675 kubelet[1769]: I0508 00:47:05.572670 1769 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"containerd","ID":"c49b84c720b406facb1dd6bce61fd80f1bca6b364a17107356be65abb038291d"} err="failed to get container status \"c49b84c720b406facb1dd6bce61fd80f1bca6b364a17107356be65abb038291d\": rpc error: code = NotFound desc = an error occurred when try to find container \"c49b84c720b406facb1dd6bce61fd80f1bca6b364a17107356be65abb038291d\": not found" May 8 00:47:05.572755 kubelet[1769]: I0508 00:47:05.572683 1769 scope.go:117] "RemoveContainer" containerID="7f7525f5f223506fd88a7cf81a1c5f161b48f1e380c5c92a898b3ffb0145247a" May 8 00:47:05.572943 containerd[1460]: time="2025-05-08T00:47:05.572894045Z" level=error msg="ContainerStatus for \"7f7525f5f223506fd88a7cf81a1c5f161b48f1e380c5c92a898b3ffb0145247a\" failed" error="rpc error: code = NotFound desc = an error occurred when try to find container \"7f7525f5f223506fd88a7cf81a1c5f161b48f1e380c5c92a898b3ffb0145247a\": not found" May 8 00:47:05.573108 kubelet[1769]: E0508 00:47:05.573080 1769 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = an error occurred when try to find container \"7f7525f5f223506fd88a7cf81a1c5f161b48f1e380c5c92a898b3ffb0145247a\": not found" containerID="7f7525f5f223506fd88a7cf81a1c5f161b48f1e380c5c92a898b3ffb0145247a" May 8 00:47:05.573154 kubelet[1769]: I0508 00:47:05.573107 1769 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"containerd","ID":"7f7525f5f223506fd88a7cf81a1c5f161b48f1e380c5c92a898b3ffb0145247a"} err="failed to get container status \"7f7525f5f223506fd88a7cf81a1c5f161b48f1e380c5c92a898b3ffb0145247a\": rpc error: code = NotFound desc = an error occurred when try to find container \"7f7525f5f223506fd88a7cf81a1c5f161b48f1e380c5c92a898b3ffb0145247a\": not found" May 8 00:47:05.787863 kubelet[1769]: I0508 00:47:05.787815 1769 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="9bf27ce4-e87e-45ca-916f-5ead7169257b" path="/var/lib/kubelet/pods/9bf27ce4-e87e-45ca-916f-5ead7169257b/volumes" May 8 00:47:06.301937 kubelet[1769]: E0508 00:47:06.301885 1769 
file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" May 8 00:47:07.261322 kubelet[1769]: E0508 00:47:07.261260 1769 file.go:104] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" May 8 00:47:07.272697 containerd[1460]: time="2025-05-08T00:47:07.272661777Z" level=info msg="StopPodSandbox for \"10390812f22571681821afb0b46ca13a045594983d68e9e7af4313043e5fd803\"" May 8 00:47:07.273016 containerd[1460]: time="2025-05-08T00:47:07.272761524Z" level=info msg="TearDown network for sandbox \"10390812f22571681821afb0b46ca13a045594983d68e9e7af4313043e5fd803\" successfully" May 8 00:47:07.273016 containerd[1460]: time="2025-05-08T00:47:07.272778877Z" level=info msg="StopPodSandbox for \"10390812f22571681821afb0b46ca13a045594983d68e9e7af4313043e5fd803\" returns successfully" May 8 00:47:07.273194 containerd[1460]: time="2025-05-08T00:47:07.273170463Z" level=info msg="RemovePodSandbox for \"10390812f22571681821afb0b46ca13a045594983d68e9e7af4313043e5fd803\"" May 8 00:47:07.273223 containerd[1460]: time="2025-05-08T00:47:07.273198255Z" level=info msg="Forcibly stopping sandbox \"10390812f22571681821afb0b46ca13a045594983d68e9e7af4313043e5fd803\"" May 8 00:47:07.273266 containerd[1460]: time="2025-05-08T00:47:07.273248039Z" level=info msg="TearDown network for sandbox \"10390812f22571681821afb0b46ca13a045594983d68e9e7af4313043e5fd803\" successfully" May 8 00:47:07.302242 kubelet[1769]: E0508 00:47:07.302208 1769 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" May 8 00:47:07.515932 containerd[1460]: time="2025-05-08T00:47:07.515776675Z" level=warning msg="Failed to get podSandbox status for container event for sandboxID \"10390812f22571681821afb0b46ca13a045594983d68e9e7af4313043e5fd803\": an error occurred when try to find sandbox: not found. Sending the event with nil podSandboxStatus." May 8 00:47:07.515932 containerd[1460]: time="2025-05-08T00:47:07.515843741Z" level=info msg="RemovePodSandbox \"10390812f22571681821afb0b46ca13a045594983d68e9e7af4313043e5fd803\" returns successfully" May 8 00:47:07.801846 kubelet[1769]: E0508 00:47:07.801746 1769 kubelet.go:3008] "Container runtime network not ready" networkReady="NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" May 8 00:47:08.302712 kubelet[1769]: E0508 00:47:08.302657 1769 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" May 8 00:47:08.546470 kubelet[1769]: I0508 00:47:08.546432 1769 memory_manager.go:355] "RemoveStaleState removing state" podUID="9bf27ce4-e87e-45ca-916f-5ead7169257b" containerName="cilium-agent" May 8 00:47:08.552620 systemd[1]: Created slice kubepods-besteffort-pod8302a4a7_5941_4f05_a2a2_0c390156a5f6.slice - libcontainer container kubepods-besteffort-pod8302a4a7_5941_4f05_a2a2_0c390156a5f6.slice. May 8 00:47:08.572675 systemd[1]: Created slice kubepods-burstable-pod293df032_b3d8_47e2_9ff3_26d6e22b2b21.slice - libcontainer container kubepods-burstable-pod293df032_b3d8_47e2_9ff3_26d6e22b2b21.slice. 
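
The slices created above follow the kubelet's systemd cgroup naming: kubepods-<qos>-pod<uid>.slice, where the QoS segment is besteffort or burstable and the dashes in the pod UID are replaced with underscores so the name survives systemd escaping (Guaranteed pods omit the QoS segment). A small sketch that reproduces the names from the log:

# Sketch: reproduce the systemd slice names the kubelet creates for pods
# when using the systemd cgroup driver, as seen in the log above.
def pod_slice_name(pod_uid, qos_class):
    uid = pod_uid.replace("-", "_")      # dashes would otherwise be escaped
    qos = qos_class.lower()
    if qos == "guaranteed":              # guaranteed pods omit the QoS segment
        return f"kubepods-pod{uid}.slice"
    return f"kubepods-{qos}-pod{uid}.slice"

print(pod_slice_name("8302a4a7-5941-4f05-a2a2-0c390156a5f6", "BestEffort"))
# kubepods-besteffort-pod8302a4a7_5941_4f05_a2a2_0c390156a5f6.slice
print(pod_slice_name("293df032-b3d8-47e2-9ff3-26d6e22b2b21", "Burstable"))
# kubepods-burstable-pod293df032_b3d8_47e2_9ff3_26d6e22b2b21.slice
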
May 8 00:47:08.703065 kubelet[1769]: I0508 00:47:08.702987 1769 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cilium-run\" (UniqueName: \"kubernetes.io/host-path/293df032-b3d8-47e2-9ff3-26d6e22b2b21-cilium-run\") pod \"cilium-9f78g\" (UID: \"293df032-b3d8-47e2-9ff3-26d6e22b2b21\") " pod="kube-system/cilium-9f78g" May 8 00:47:08.703065 kubelet[1769]: I0508 00:47:08.703046 1769 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cilium-ipsec-secrets\" (UniqueName: \"kubernetes.io/secret/293df032-b3d8-47e2-9ff3-26d6e22b2b21-cilium-ipsec-secrets\") pod \"cilium-9f78g\" (UID: \"293df032-b3d8-47e2-9ff3-26d6e22b2b21\") " pod="kube-system/cilium-9f78g" May 8 00:47:08.703065 kubelet[1769]: I0508 00:47:08.703075 1769 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-proc-sys-net\" (UniqueName: \"kubernetes.io/host-path/293df032-b3d8-47e2-9ff3-26d6e22b2b21-host-proc-sys-net\") pod \"cilium-9f78g\" (UID: \"293df032-b3d8-47e2-9ff3-26d6e22b2b21\") " pod="kube-system/cilium-9f78g" May 8 00:47:08.703304 kubelet[1769]: I0508 00:47:08.703099 1769 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cilium-config-path\" (UniqueName: \"kubernetes.io/configmap/8302a4a7-5941-4f05-a2a2-0c390156a5f6-cilium-config-path\") pod \"cilium-operator-6c4d7847fc-n2cvw\" (UID: \"8302a4a7-5941-4f05-a2a2-0c390156a5f6\") " pod="kube-system/cilium-operator-6c4d7847fc-n2cvw" May 8 00:47:08.703304 kubelet[1769]: I0508 00:47:08.703125 1769 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"etc-cni-netd\" (UniqueName: \"kubernetes.io/host-path/293df032-b3d8-47e2-9ff3-26d6e22b2b21-etc-cni-netd\") pod \"cilium-9f78g\" (UID: \"293df032-b3d8-47e2-9ff3-26d6e22b2b21\") " pod="kube-system/cilium-9f78g" May 8 00:47:08.703304 kubelet[1769]: I0508 00:47:08.703145 1769 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/293df032-b3d8-47e2-9ff3-26d6e22b2b21-lib-modules\") pod \"cilium-9f78g\" (UID: \"293df032-b3d8-47e2-9ff3-26d6e22b2b21\") " pod="kube-system/cilium-9f78g" May 8 00:47:08.703304 kubelet[1769]: I0508 00:47:08.703166 1769 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"clustermesh-secrets\" (UniqueName: \"kubernetes.io/secret/293df032-b3d8-47e2-9ff3-26d6e22b2b21-clustermesh-secrets\") pod \"cilium-9f78g\" (UID: \"293df032-b3d8-47e2-9ff3-26d6e22b2b21\") " pod="kube-system/cilium-9f78g" May 8 00:47:08.703304 kubelet[1769]: I0508 00:47:08.703190 1769 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"bpf-maps\" (UniqueName: \"kubernetes.io/host-path/293df032-b3d8-47e2-9ff3-26d6e22b2b21-bpf-maps\") pod \"cilium-9f78g\" (UID: \"293df032-b3d8-47e2-9ff3-26d6e22b2b21\") " pod="kube-system/cilium-9f78g" May 8 00:47:08.703471 kubelet[1769]: I0508 00:47:08.703209 1769 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"hostproc\" (UniqueName: \"kubernetes.io/host-path/293df032-b3d8-47e2-9ff3-26d6e22b2b21-hostproc\") pod \"cilium-9f78g\" (UID: \"293df032-b3d8-47e2-9ff3-26d6e22b2b21\") " pod="kube-system/cilium-9f78g" May 8 00:47:08.703471 kubelet[1769]: I0508 00:47:08.703228 1769 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume 
\"hubble-tls\" (UniqueName: \"kubernetes.io/projected/293df032-b3d8-47e2-9ff3-26d6e22b2b21-hubble-tls\") pod \"cilium-9f78g\" (UID: \"293df032-b3d8-47e2-9ff3-26d6e22b2b21\") " pod="kube-system/cilium-9f78g" May 8 00:47:08.703471 kubelet[1769]: I0508 00:47:08.703252 1769 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cilium-config-path\" (UniqueName: \"kubernetes.io/configmap/293df032-b3d8-47e2-9ff3-26d6e22b2b21-cilium-config-path\") pod \"cilium-9f78g\" (UID: \"293df032-b3d8-47e2-9ff3-26d6e22b2b21\") " pod="kube-system/cilium-9f78g" May 8 00:47:08.703471 kubelet[1769]: I0508 00:47:08.703274 1769 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cilium-cgroup\" (UniqueName: \"kubernetes.io/host-path/293df032-b3d8-47e2-9ff3-26d6e22b2b21-cilium-cgroup\") pod \"cilium-9f78g\" (UID: \"293df032-b3d8-47e2-9ff3-26d6e22b2b21\") " pod="kube-system/cilium-9f78g" May 8 00:47:08.703471 kubelet[1769]: I0508 00:47:08.703332 1769 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cni-path\" (UniqueName: \"kubernetes.io/host-path/293df032-b3d8-47e2-9ff3-26d6e22b2b21-cni-path\") pod \"cilium-9f78g\" (UID: \"293df032-b3d8-47e2-9ff3-26d6e22b2b21\") " pod="kube-system/cilium-9f78g" May 8 00:47:08.703471 kubelet[1769]: I0508 00:47:08.703381 1769 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"xtables-lock\" (UniqueName: \"kubernetes.io/host-path/293df032-b3d8-47e2-9ff3-26d6e22b2b21-xtables-lock\") pod \"cilium-9f78g\" (UID: \"293df032-b3d8-47e2-9ff3-26d6e22b2b21\") " pod="kube-system/cilium-9f78g" May 8 00:47:08.703652 kubelet[1769]: I0508 00:47:08.703440 1769 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-proc-sys-kernel\" (UniqueName: \"kubernetes.io/host-path/293df032-b3d8-47e2-9ff3-26d6e22b2b21-host-proc-sys-kernel\") pod \"cilium-9f78g\" (UID: \"293df032-b3d8-47e2-9ff3-26d6e22b2b21\") " pod="kube-system/cilium-9f78g" May 8 00:47:08.703652 kubelet[1769]: I0508 00:47:08.703490 1769 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-nhtmg\" (UniqueName: \"kubernetes.io/projected/293df032-b3d8-47e2-9ff3-26d6e22b2b21-kube-api-access-nhtmg\") pod \"cilium-9f78g\" (UID: \"293df032-b3d8-47e2-9ff3-26d6e22b2b21\") " pod="kube-system/cilium-9f78g" May 8 00:47:08.703652 kubelet[1769]: I0508 00:47:08.703512 1769 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-2ch9h\" (UniqueName: \"kubernetes.io/projected/8302a4a7-5941-4f05-a2a2-0c390156a5f6-kube-api-access-2ch9h\") pod \"cilium-operator-6c4d7847fc-n2cvw\" (UID: \"8302a4a7-5941-4f05-a2a2-0c390156a5f6\") " pod="kube-system/cilium-operator-6c4d7847fc-n2cvw" May 8 00:47:08.855974 kubelet[1769]: E0508 00:47:08.855865 1769 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" May 8 00:47:08.856354 containerd[1460]: time="2025-05-08T00:47:08.856317635Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:cilium-operator-6c4d7847fc-n2cvw,Uid:8302a4a7-5941-4f05-a2a2-0c390156a5f6,Namespace:kube-system,Attempt:0,}" May 8 00:47:08.887243 kubelet[1769]: E0508 00:47:08.887221 1769 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been 
omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" May 8 00:47:08.887709 containerd[1460]: time="2025-05-08T00:47:08.887665508Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:cilium-9f78g,Uid:293df032-b3d8-47e2-9ff3-26d6e22b2b21,Namespace:kube-system,Attempt:0,}" May 8 00:47:09.108466 containerd[1460]: time="2025-05-08T00:47:09.108089507Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 May 8 00:47:09.108466 containerd[1460]: time="2025-05-08T00:47:09.108176991Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 May 8 00:47:09.108466 containerd[1460]: time="2025-05-08T00:47:09.108193382Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 May 8 00:47:09.108466 containerd[1460]: time="2025-05-08T00:47:09.108314750Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 May 8 00:47:09.118053 containerd[1460]: time="2025-05-08T00:47:09.117635237Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 May 8 00:47:09.118053 containerd[1460]: time="2025-05-08T00:47:09.117683908Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 May 8 00:47:09.118053 containerd[1460]: time="2025-05-08T00:47:09.117697423Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 May 8 00:47:09.118053 containerd[1460]: time="2025-05-08T00:47:09.117774067Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 May 8 00:47:09.130624 systemd[1]: Started cri-containerd-088ddd5e2ed6a9df9f31777cba55ea9a2fa14bb92d1bac431593cf683bd7112f.scope - libcontainer container 088ddd5e2ed6a9df9f31777cba55ea9a2fa14bb92d1bac431593cf683bd7112f. May 8 00:47:09.134243 systemd[1]: Started cri-containerd-d779a8ebd9b3b7e1bc4793bad91fac9cbe532c7a70c2df0ad431d2581c446bd6.scope - libcontainer container d779a8ebd9b3b7e1bc4793bad91fac9cbe532c7a70c2df0ad431d2581c446bd6. 
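
The recurring "Nameserver limits exceeded" warnings mean the node's resolv.conf lists more nameservers than the resolver (and therefore the kubelet) will use, so only the first three are applied, here 1.1.1.1, 1.0.0.1 and 8.8.8.8. A small sketch of the same truncation, assuming a conventional resolv.conf format:

# Sketch: apply the same three-nameserver cap the kubelet warns about above.
# Assumes a conventional /etc/resolv.conf layout.
MAX_NAMESERVERS = 3  # classic resolver limit the kubelet enforces

def applied_nameservers(resolv_conf_text):
    servers = [
        line.split()[1]
        for line in resolv_conf_text.splitlines()
        if line.strip().startswith("nameserver") and len(line.split()) > 1
    ]
    return servers[:MAX_NAMESERVERS]

example = """\
nameserver 1.1.1.1
nameserver 1.0.0.1
nameserver 8.8.8.8
nameserver 9.9.9.9
"""
print(applied_nameservers(example))  # ['1.1.1.1', '1.0.0.1', '8.8.8.8']
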
May 8 00:47:09.156827 containerd[1460]: time="2025-05-08T00:47:09.156776528Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:cilium-9f78g,Uid:293df032-b3d8-47e2-9ff3-26d6e22b2b21,Namespace:kube-system,Attempt:0,} returns sandbox id \"d779a8ebd9b3b7e1bc4793bad91fac9cbe532c7a70c2df0ad431d2581c446bd6\"" May 8 00:47:09.157678 kubelet[1769]: E0508 00:47:09.157639 1769 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" May 8 00:47:09.160539 containerd[1460]: time="2025-05-08T00:47:09.160500617Z" level=info msg="CreateContainer within sandbox \"d779a8ebd9b3b7e1bc4793bad91fac9cbe532c7a70c2df0ad431d2581c446bd6\" for container &ContainerMetadata{Name:mount-cgroup,Attempt:0,}" May 8 00:47:09.169045 containerd[1460]: time="2025-05-08T00:47:09.168975604Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:cilium-operator-6c4d7847fc-n2cvw,Uid:8302a4a7-5941-4f05-a2a2-0c390156a5f6,Namespace:kube-system,Attempt:0,} returns sandbox id \"088ddd5e2ed6a9df9f31777cba55ea9a2fa14bb92d1bac431593cf683bd7112f\"" May 8 00:47:09.169537 kubelet[1769]: E0508 00:47:09.169507 1769 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" May 8 00:47:09.170148 containerd[1460]: time="2025-05-08T00:47:09.170119926Z" level=info msg="PullImage \"quay.io/cilium/operator-generic:v1.12.5@sha256:b296eb7f0f7656a5cc19724f40a8a7121b7fd725278b7d61dc91fe0b7ffd7c0e\"" May 8 00:47:09.295335 kubelet[1769]: I0508 00:47:09.295278 1769 setters.go:602] "Node became not ready" node="10.0.0.140" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-05-08T00:47:09Z","lastTransitionTime":"2025-05-08T00:47:09Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized"} May 8 00:47:09.303225 kubelet[1769]: E0508 00:47:09.303199 1769 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" May 8 00:47:09.314023 containerd[1460]: time="2025-05-08T00:47:09.313966558Z" level=info msg="CreateContainer within sandbox \"d779a8ebd9b3b7e1bc4793bad91fac9cbe532c7a70c2df0ad431d2581c446bd6\" for &ContainerMetadata{Name:mount-cgroup,Attempt:0,} returns container id \"bacd24b5dd33f16209a6439a33ce5af48bd1f89be11bffc5a126dce69c637af7\"" May 8 00:47:09.314631 containerd[1460]: time="2025-05-08T00:47:09.314472779Z" level=info msg="StartContainer for \"bacd24b5dd33f16209a6439a33ce5af48bd1f89be11bffc5a126dce69c637af7\"" May 8 00:47:09.341563 systemd[1]: Started cri-containerd-bacd24b5dd33f16209a6439a33ce5af48bd1f89be11bffc5a126dce69c637af7.scope - libcontainer container bacd24b5dd33f16209a6439a33ce5af48bd1f89be11bffc5a126dce69c637af7. May 8 00:47:09.373277 systemd[1]: cri-containerd-bacd24b5dd33f16209a6439a33ce5af48bd1f89be11bffc5a126dce69c637af7.scope: Deactivated successfully. 
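
The setters.go entry above flips the node's Ready condition to False because no CNI configuration is present while the old Cilium pod has been torn down and its replacement has not finished starting. A minimal sketch for reading that condition back, assuming a reachable kubeconfig and the kubernetes Python client; the node name is taken from the log:

# Sketch: read the node's Ready condition, which the kubelet set to False
# above while the CNI plugin was not initialized.
from kubernetes import client, config

config.load_kube_config()
v1 = client.CoreV1Api()

node = v1.read_node(name="10.0.0.140")
for cond in node.status.conditions or []:
    if cond.type == "Ready":
        print(cond.status, "-", cond.reason, "-", cond.message)
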
May 8 00:47:09.474501 containerd[1460]: time="2025-05-08T00:47:09.474455567Z" level=info msg="StartContainer for \"bacd24b5dd33f16209a6439a33ce5af48bd1f89be11bffc5a126dce69c637af7\" returns successfully" May 8 00:47:09.786188 containerd[1460]: time="2025-05-08T00:47:09.786112740Z" level=info msg="shim disconnected" id=bacd24b5dd33f16209a6439a33ce5af48bd1f89be11bffc5a126dce69c637af7 namespace=k8s.io May 8 00:47:09.786188 containerd[1460]: time="2025-05-08T00:47:09.786165830Z" level=warning msg="cleaning up after shim disconnected" id=bacd24b5dd33f16209a6439a33ce5af48bd1f89be11bffc5a126dce69c637af7 namespace=k8s.io May 8 00:47:09.786188 containerd[1460]: time="2025-05-08T00:47:09.786174546Z" level=info msg="cleaning up dead shim" namespace=k8s.io May 8 00:47:10.072428 kubelet[1769]: E0508 00:47:10.072267 1769 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" May 8 00:47:10.074208 containerd[1460]: time="2025-05-08T00:47:10.074158451Z" level=info msg="CreateContainer within sandbox \"d779a8ebd9b3b7e1bc4793bad91fac9cbe532c7a70c2df0ad431d2581c446bd6\" for container &ContainerMetadata{Name:apply-sysctl-overwrites,Attempt:0,}" May 8 00:47:10.291717 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount3596690833.mount: Deactivated successfully. May 8 00:47:10.304373 kubelet[1769]: E0508 00:47:10.304333 1769 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" May 8 00:47:10.418034 containerd[1460]: time="2025-05-08T00:47:10.417872744Z" level=info msg="CreateContainer within sandbox \"d779a8ebd9b3b7e1bc4793bad91fac9cbe532c7a70c2df0ad431d2581c446bd6\" for &ContainerMetadata{Name:apply-sysctl-overwrites,Attempt:0,} returns container id \"e406c99751536913f0c5bd29d375e940e6485b375b21159d792f7df3db716ace\"" May 8 00:47:10.418711 containerd[1460]: time="2025-05-08T00:47:10.418461711Z" level=info msg="StartContainer for \"e406c99751536913f0c5bd29d375e940e6485b375b21159d792f7df3db716ace\"" May 8 00:47:10.450581 systemd[1]: Started cri-containerd-e406c99751536913f0c5bd29d375e940e6485b375b21159d792f7df3db716ace.scope - libcontainer container e406c99751536913f0c5bd29d375e940e6485b375b21159d792f7df3db716ace. May 8 00:47:10.494509 systemd[1]: cri-containerd-e406c99751536913f0c5bd29d375e940e6485b375b21159d792f7df3db716ace.scope: Deactivated successfully. May 8 00:47:10.543155 containerd[1460]: time="2025-05-08T00:47:10.543092250Z" level=info msg="StartContainer for \"e406c99751536913f0c5bd29d375e940e6485b375b21159d792f7df3db716ace\" returns successfully" May 8 00:47:10.610481 containerd[1460]: time="2025-05-08T00:47:10.610392326Z" level=info msg="shim disconnected" id=e406c99751536913f0c5bd29d375e940e6485b375b21159d792f7df3db716ace namespace=k8s.io May 8 00:47:10.610481 containerd[1460]: time="2025-05-08T00:47:10.610476074Z" level=warning msg="cleaning up after shim disconnected" id=e406c99751536913f0c5bd29d375e940e6485b375b21159d792f7df3db716ace namespace=k8s.io May 8 00:47:10.610481 containerd[1460]: time="2025-05-08T00:47:10.610487295Z" level=info msg="cleaning up dead shim" namespace=k8s.io May 8 00:47:10.809350 systemd[1]: run-containerd-runc-k8s.io-e406c99751536913f0c5bd29d375e940e6485b375b21159d792f7df3db716ace-runc.wiXkYv.mount: Deactivated successfully. 
May 8 00:47:10.809482 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-e406c99751536913f0c5bd29d375e940e6485b375b21159d792f7df3db716ace-rootfs.mount: Deactivated successfully. May 8 00:47:11.075380 kubelet[1769]: E0508 00:47:11.075266 1769 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" May 8 00:47:11.076678 containerd[1460]: time="2025-05-08T00:47:11.076626769Z" level=info msg="CreateContainer within sandbox \"d779a8ebd9b3b7e1bc4793bad91fac9cbe532c7a70c2df0ad431d2581c446bd6\" for container &ContainerMetadata{Name:mount-bpf-fs,Attempt:0,}" May 8 00:47:11.304862 kubelet[1769]: E0508 00:47:11.304801 1769 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" May 8 00:47:12.305512 kubelet[1769]: E0508 00:47:12.305439 1769 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" May 8 00:47:12.402664 containerd[1460]: time="2025-05-08T00:47:12.402608088Z" level=info msg="CreateContainer within sandbox \"d779a8ebd9b3b7e1bc4793bad91fac9cbe532c7a70c2df0ad431d2581c446bd6\" for &ContainerMetadata{Name:mount-bpf-fs,Attempt:0,} returns container id \"e5e4af470a2df0e5cc7114bd29adee794b328ab928ed765147834fa811b30218\"" May 8 00:47:12.403218 containerd[1460]: time="2025-05-08T00:47:12.403186715Z" level=info msg="StartContainer for \"e5e4af470a2df0e5cc7114bd29adee794b328ab928ed765147834fa811b30218\"" May 8 00:47:12.434548 systemd[1]: Started cri-containerd-e5e4af470a2df0e5cc7114bd29adee794b328ab928ed765147834fa811b30218.scope - libcontainer container e5e4af470a2df0e5cc7114bd29adee794b328ab928ed765147834fa811b30218. May 8 00:47:12.462794 systemd[1]: cri-containerd-e5e4af470a2df0e5cc7114bd29adee794b328ab928ed765147834fa811b30218.scope: Deactivated successfully. May 8 00:47:12.803514 kubelet[1769]: E0508 00:47:12.803481 1769 kubelet.go:3008] "Container runtime network not ready" networkReady="NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" May 8 00:47:13.013552 containerd[1460]: time="2025-05-08T00:47:13.013488938Z" level=info msg="StartContainer for \"e5e4af470a2df0e5cc7114bd29adee794b328ab928ed765147834fa811b30218\" returns successfully" May 8 00:47:13.032130 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-e5e4af470a2df0e5cc7114bd29adee794b328ab928ed765147834fa811b30218-rootfs.mount: Deactivated successfully. 
May 8 00:47:13.083184 kubelet[1769]: E0508 00:47:13.083073 1769 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
May 8 00:47:13.306628 kubelet[1769]: E0508 00:47:13.306554 1769 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests"
May 8 00:47:13.383123 containerd[1460]: time="2025-05-08T00:47:13.382973643Z" level=info msg="shim disconnected" id=e5e4af470a2df0e5cc7114bd29adee794b328ab928ed765147834fa811b30218 namespace=k8s.io
May 8 00:47:13.383123 containerd[1460]: time="2025-05-08T00:47:13.383029107Z" level=warning msg="cleaning up after shim disconnected" id=e5e4af470a2df0e5cc7114bd29adee794b328ab928ed765147834fa811b30218 namespace=k8s.io
May 8 00:47:13.383123 containerd[1460]: time="2025-05-08T00:47:13.383038345Z" level=info msg="cleaning up dead shim" namespace=k8s.io
May 8 00:47:14.086329 kubelet[1769]: E0508 00:47:14.086292 1769 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
May 8 00:47:14.087906 containerd[1460]: time="2025-05-08T00:47:14.087875429Z" level=info msg="CreateContainer within sandbox \"d779a8ebd9b3b7e1bc4793bad91fac9cbe532c7a70c2df0ad431d2581c446bd6\" for container &ContainerMetadata{Name:clean-cilium-state,Attempt:0,}"
May 8 00:47:14.306984 kubelet[1769]: E0508 00:47:14.306913 1769 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests"
May 8 00:47:14.460299 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount2354029986.mount: Deactivated successfully.
May 8 00:47:15.307444 kubelet[1769]: E0508 00:47:15.307343 1769 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests"
May 8 00:47:15.601188 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount633576722.mount: Deactivated successfully.
May 8 00:47:15.625124 containerd[1460]: time="2025-05-08T00:47:15.625048395Z" level=info msg="CreateContainer within sandbox \"d779a8ebd9b3b7e1bc4793bad91fac9cbe532c7a70c2df0ad431d2581c446bd6\" for &ContainerMetadata{Name:clean-cilium-state,Attempt:0,} returns container id \"57520572e7232b48fb01b45c2ebcc2acab9fe2a5deb6dd16007c11548d1f4a5a\""
May 8 00:47:15.625704 containerd[1460]: time="2025-05-08T00:47:15.625671306Z" level=info msg="StartContainer for \"57520572e7232b48fb01b45c2ebcc2acab9fe2a5deb6dd16007c11548d1f4a5a\""
May 8 00:47:15.659635 systemd[1]: Started cri-containerd-57520572e7232b48fb01b45c2ebcc2acab9fe2a5deb6dd16007c11548d1f4a5a.scope - libcontainer container 57520572e7232b48fb01b45c2ebcc2acab9fe2a5deb6dd16007c11548d1f4a5a.
May 8 00:47:15.681610 systemd[1]: cri-containerd-57520572e7232b48fb01b45c2ebcc2acab9fe2a5deb6dd16007c11548d1f4a5a.scope: Deactivated successfully.
May 8 00:47:15.762981 containerd[1460]: time="2025-05-08T00:47:15.762906812Z" level=info msg="StartContainer for \"57520572e7232b48fb01b45c2ebcc2acab9fe2a5deb6dd16007c11548d1f4a5a\" returns successfully"
May 8 00:47:15.852325 containerd[1460]: time="2025-05-08T00:47:15.852162501Z" level=info msg="shim disconnected" id=57520572e7232b48fb01b45c2ebcc2acab9fe2a5deb6dd16007c11548d1f4a5a namespace=k8s.io
May 8 00:47:15.852325 containerd[1460]: time="2025-05-08T00:47:15.852218096Z" level=warning msg="cleaning up after shim disconnected" id=57520572e7232b48fb01b45c2ebcc2acab9fe2a5deb6dd16007c11548d1f4a5a namespace=k8s.io
May 8 00:47:15.852325 containerd[1460]: time="2025-05-08T00:47:15.852227043Z" level=info msg="cleaning up dead shim" namespace=k8s.io
May 8 00:47:16.092789 kubelet[1769]: E0508 00:47:16.092755 1769 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
May 8 00:47:16.094889 containerd[1460]: time="2025-05-08T00:47:16.094762179Z" level=info msg="CreateContainer within sandbox \"d779a8ebd9b3b7e1bc4793bad91fac9cbe532c7a70c2df0ad431d2581c446bd6\" for container &ContainerMetadata{Name:cilium-agent,Attempt:0,}"
May 8 00:47:16.114712 containerd[1460]: time="2025-05-08T00:47:16.114572684Z" level=info msg="CreateContainer within sandbox \"d779a8ebd9b3b7e1bc4793bad91fac9cbe532c7a70c2df0ad431d2581c446bd6\" for &ContainerMetadata{Name:cilium-agent,Attempt:0,} returns container id \"05043823db4ac897c63f9157b67a7ec57574a19586d4afc0f69e859f0422e6dc\""
May 8 00:47:16.115393 containerd[1460]: time="2025-05-08T00:47:16.115281115Z" level=info msg="StartContainer for \"05043823db4ac897c63f9157b67a7ec57574a19586d4afc0f69e859f0422e6dc\""
May 8 00:47:16.146574 systemd[1]: Started cri-containerd-05043823db4ac897c63f9157b67a7ec57574a19586d4afc0f69e859f0422e6dc.scope - libcontainer container 05043823db4ac897c63f9157b67a7ec57574a19586d4afc0f69e859f0422e6dc.
May 8 00:47:16.305854 containerd[1460]: time="2025-05-08T00:47:16.305788877Z" level=info msg="StartContainer for \"05043823db4ac897c63f9157b67a7ec57574a19586d4afc0f69e859f0422e6dc\" returns successfully"
May 8 00:47:16.307677 kubelet[1769]: E0508 00:47:16.307644 1769 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests"
May 8 00:47:16.599300 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-57520572e7232b48fb01b45c2ebcc2acab9fe2a5deb6dd16007c11548d1f4a5a-rootfs.mount: Deactivated successfully.
May 8 00:47:16.599637 kernel: alg: No test for seqiv(rfc4106(gcm(aes))) (seqiv(rfc4106-gcm-aesni))
May 8 00:47:16.695960 containerd[1460]: time="2025-05-08T00:47:16.695887371Z" level=info msg="ImageCreate event name:\"quay.io/cilium/operator-generic@sha256:b296eb7f0f7656a5cc19724f40a8a7121b7fd725278b7d61dc91fe0b7ffd7c0e\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
May 8 00:47:16.703686 containerd[1460]: time="2025-05-08T00:47:16.703606408Z" level=info msg="stop pulling image quay.io/cilium/operator-generic@sha256:b296eb7f0f7656a5cc19724f40a8a7121b7fd725278b7d61dc91fe0b7ffd7c0e: active requests=0, bytes read=18904197"
May 8 00:47:16.711112 containerd[1460]: time="2025-05-08T00:47:16.711050448Z" level=info msg="ImageCreate event name:\"sha256:ed355de9f59fe391dbe53f3c7c7a60baab3c3a9b7549aa54d10b87fff7dacf7c\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
May 8 00:47:16.712240 containerd[1460]: time="2025-05-08T00:47:16.712194777Z" level=info msg="Pulled image \"quay.io/cilium/operator-generic:v1.12.5@sha256:b296eb7f0f7656a5cc19724f40a8a7121b7fd725278b7d61dc91fe0b7ffd7c0e\" with image id \"sha256:ed355de9f59fe391dbe53f3c7c7a60baab3c3a9b7549aa54d10b87fff7dacf7c\", repo tag \"\", repo digest \"quay.io/cilium/operator-generic@sha256:b296eb7f0f7656a5cc19724f40a8a7121b7fd725278b7d61dc91fe0b7ffd7c0e\", size \"18897442\" in 7.542046819s"
May 8 00:47:16.712240 containerd[1460]: time="2025-05-08T00:47:16.712233139Z" level=info msg="PullImage \"quay.io/cilium/operator-generic:v1.12.5@sha256:b296eb7f0f7656a5cc19724f40a8a7121b7fd725278b7d61dc91fe0b7ffd7c0e\" returns image reference \"sha256:ed355de9f59fe391dbe53f3c7c7a60baab3c3a9b7549aa54d10b87fff7dacf7c\""
May 8 00:47:16.714238 containerd[1460]: time="2025-05-08T00:47:16.714207236Z" level=info msg="CreateContainer within sandbox \"088ddd5e2ed6a9df9f31777cba55ea9a2fa14bb92d1bac431593cf683bd7112f\" for container &ContainerMetadata{Name:cilium-operator,Attempt:0,}"
May 8 00:47:16.827728 containerd[1460]: time="2025-05-08T00:47:16.827512349Z" level=info msg="CreateContainer within sandbox \"088ddd5e2ed6a9df9f31777cba55ea9a2fa14bb92d1bac431593cf683bd7112f\" for &ContainerMetadata{Name:cilium-operator,Attempt:0,} returns container id \"0dba4c9342d70293b83ace005e44d0a134ca31362d262fcdb12cf187f6ab3ee5\""
May 8 00:47:16.828355 containerd[1460]: time="2025-05-08T00:47:16.828316098Z" level=info msg="StartContainer for \"0dba4c9342d70293b83ace005e44d0a134ca31362d262fcdb12cf187f6ab3ee5\""
May 8 00:47:16.862616 systemd[1]: Started cri-containerd-0dba4c9342d70293b83ace005e44d0a134ca31362d262fcdb12cf187f6ab3ee5.scope - libcontainer container 0dba4c9342d70293b83ace005e44d0a134ca31362d262fcdb12cf187f6ab3ee5.
May 8 00:47:16.969503 containerd[1460]: time="2025-05-08T00:47:16.969432081Z" level=info msg="StartContainer for \"0dba4c9342d70293b83ace005e44d0a134ca31362d262fcdb12cf187f6ab3ee5\" returns successfully"
May 8 00:47:17.096098 kubelet[1769]: E0508 00:47:17.096054 1769 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
May 8 00:47:17.099288 kubelet[1769]: E0508 00:47:17.099260 1769 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
May 8 00:47:17.124120 kubelet[1769]: I0508 00:47:17.123953 1769 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/cilium-operator-6c4d7847fc-n2cvw" podStartSLOduration=1.580828828 podStartE2EDuration="9.123932692s" podCreationTimestamp="2025-05-08 00:47:08 +0000 UTC" firstStartedPulling="2025-05-08 00:47:09.169880305 +0000 UTC m=+62.304879083" lastFinishedPulling="2025-05-08 00:47:16.712984169 +0000 UTC m=+69.847982947" observedRunningTime="2025-05-08 00:47:17.123762904 +0000 UTC m=+70.258761682" watchObservedRunningTime="2025-05-08 00:47:17.123932692 +0000 UTC m=+70.258931470"
May 8 00:47:17.139868 kubelet[1769]: I0508 00:47:17.139778 1769 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/cilium-9f78g" podStartSLOduration=9.139757199 podStartE2EDuration="9.139757199s" podCreationTimestamp="2025-05-08 00:47:08 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-05-08 00:47:17.139238696 +0000 UTC m=+70.274237474" watchObservedRunningTime="2025-05-08 00:47:17.139757199 +0000 UTC m=+70.274755977"
May 8 00:47:17.308544 kubelet[1769]: E0508 00:47:17.308454 1769 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests"
May 8 00:47:17.597943 systemd[1]: run-containerd-runc-k8s.io-0dba4c9342d70293b83ace005e44d0a134ca31362d262fcdb12cf187f6ab3ee5-runc.D6ozqs.mount: Deactivated successfully.
May 8 00:47:18.101209 kubelet[1769]: E0508 00:47:18.101161 1769 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
May 8 00:47:18.309369 kubelet[1769]: E0508 00:47:18.309285 1769 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests"
May 8 00:47:18.888653 kubelet[1769]: E0508 00:47:18.888604 1769 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
May 8 00:47:19.309877 kubelet[1769]: E0508 00:47:19.309798 1769 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests"
May 8 00:47:19.851279 systemd-networkd[1391]: lxc_health: Link UP
May 8 00:47:19.862684 systemd-networkd[1391]: lxc_health: Gained carrier
May 8 00:47:19.887348 systemd[1]: run-containerd-runc-k8s.io-05043823db4ac897c63f9157b67a7ec57574a19586d4afc0f69e859f0422e6dc-runc.aXKOvr.mount: Deactivated successfully.
May 8 00:47:19.971472 kubelet[1769]: E0508 00:47:19.971204 1769 upgradeaware.go:427] Error proxying data from client to backend: readfrom tcp 127.0.0.1:33634->127.0.0.1:39789: write tcp 127.0.0.1:33634->127.0.0.1:39789: write: broken pipe
May 8 00:47:20.310974 kubelet[1769]: E0508 00:47:20.310898 1769 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests"
May 8 00:47:20.889321 kubelet[1769]: E0508 00:47:20.889272 1769 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
May 8 00:47:21.107882 kubelet[1769]: E0508 00:47:21.107831 1769 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
May 8 00:47:21.311838 kubelet[1769]: E0508 00:47:21.311774 1769 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests"
May 8 00:47:21.542004 systemd-networkd[1391]: lxc_health: Gained IPv6LL
May 8 00:47:22.109950 kubelet[1769]: E0508 00:47:22.109903 1769 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
May 8 00:47:22.312670 kubelet[1769]: E0508 00:47:22.312614 1769 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests"
May 8 00:47:23.313703 kubelet[1769]: E0508 00:47:23.313659 1769 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests"
May 8 00:47:24.314421 kubelet[1769]: E0508 00:47:24.314346 1769 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests"
May 8 00:47:25.314541 kubelet[1769]: E0508 00:47:25.314473 1769 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests"
May 8 00:47:26.315152 kubelet[1769]: E0508 00:47:26.315086 1769 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests"
May 8 00:47:27.261683 kubelet[1769]: E0508 00:47:27.261566 1769 file.go:104] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests"
May 8 00:47:27.316006 kubelet[1769]: E0508 00:47:27.315973 1769 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests"
May 8 00:47:28.317162 kubelet[1769]: E0508 00:47:28.317083 1769 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests"