May 15 10:43:48.034101 kernel: Linux version 5.15.182-flatcar (build@pony-truck.infra.kinvolk.io) (x86_64-cros-linux-gnu-gcc (Gentoo Hardened 11.3.1_p20221209 p3) 11.3.1 20221209, GNU ld (Gentoo 2.39 p5) 2.39.0) #1 SMP Thu May 15 09:06:41 -00 2025
May 15 10:43:48.034162 kernel: Command line: BOOT_IMAGE=/flatcar/vmlinuz-a mount.usr=/dev/mapper/usr verity.usr=PARTUUID=7130c94a-213a-4e5a-8e26-6cce9662f132 rootflags=rw mount.usrflags=ro consoleblank=0 root=LABEL=ROOT console=ttyS0,115200 flatcar.first_boot=detected verity.usrhash=f8c1bc5ff10765e781843bfc97fc5357002a3f8a120201a0e954fce1d2ba48f0
May 15 10:43:48.034179 kernel: BIOS-provided physical RAM map:
May 15 10:43:48.034187 kernel: BIOS-e820: [mem 0x0000000000000000-0x000000000009ffff] usable
May 15 10:43:48.034196 kernel: BIOS-e820: [mem 0x0000000000100000-0x00000000007fffff] usable
May 15 10:43:48.034204 kernel: BIOS-e820: [mem 0x0000000000800000-0x0000000000807fff] ACPI NVS
May 15 10:43:48.034213 kernel: BIOS-e820: [mem 0x0000000000808000-0x000000000080afff] usable
May 15 10:43:48.034220 kernel: BIOS-e820: [mem 0x000000000080b000-0x000000000080bfff] ACPI NVS
May 15 10:43:48.034228 kernel: BIOS-e820: [mem 0x000000000080c000-0x000000000080ffff] usable
May 15 10:43:48.034237 kernel: BIOS-e820: [mem 0x0000000000810000-0x00000000008fffff] ACPI NVS
May 15 10:43:48.034245 kernel: BIOS-e820: [mem 0x0000000000900000-0x000000009c8eefff] usable
May 15 10:43:48.034252 kernel: BIOS-e820: [mem 0x000000009c8ef000-0x000000009cb6efff] reserved
May 15 10:43:48.034267 kernel: BIOS-e820: [mem 0x000000009cb6f000-0x000000009cb7efff] ACPI data
May 15 10:43:48.034275 kernel: BIOS-e820: [mem 0x000000009cb7f000-0x000000009cbfefff] ACPI NVS
May 15 10:43:48.034287 kernel: BIOS-e820: [mem 0x000000009cbff000-0x000000009cf3ffff] usable
May 15 10:43:48.034298 kernel: BIOS-e820: [mem 0x000000009cf40000-0x000000009cf5ffff] reserved
May 15 10:43:48.034306 kernel: BIOS-e820: [mem 0x000000009cf60000-0x000000009cffffff] ACPI NVS
May 15 10:43:48.034314 kernel: BIOS-e820: [mem 0x00000000b0000000-0x00000000bfffffff] reserved
May 15 10:43:48.034326 kernel: NX (Execute Disable) protection: active
May 15 10:43:48.034334 kernel: e820: update [mem 0x9b475018-0x9b47ec57] usable ==> usable
May 15 10:43:48.034345 kernel: e820: update [mem 0x9b475018-0x9b47ec57] usable ==> usable
May 15 10:43:48.034352 kernel: e820: update [mem 0x9b438018-0x9b474e57] usable ==> usable
May 15 10:43:48.034360 kernel: e820: update [mem 0x9b438018-0x9b474e57] usable ==> usable
May 15 10:43:48.034367 kernel: extended physical RAM map:
May 15 10:43:48.034375 kernel: reserve setup_data: [mem 0x0000000000000000-0x000000000009ffff] usable
May 15 10:43:48.034385 kernel: reserve setup_data: [mem 0x0000000000100000-0x00000000007fffff] usable
May 15 10:43:48.034395 kernel: reserve setup_data: [mem 0x0000000000800000-0x0000000000807fff] ACPI NVS
May 15 10:43:48.034403 kernel: reserve setup_data: [mem 0x0000000000808000-0x000000000080afff] usable
May 15 10:43:48.034411 kernel: reserve setup_data: [mem 0x000000000080b000-0x000000000080bfff] ACPI NVS
May 15 10:43:48.034434 kernel: reserve setup_data: [mem 0x000000000080c000-0x000000000080ffff] usable
May 15 10:43:48.034445 kernel: reserve setup_data: [mem 0x0000000000810000-0x00000000008fffff] ACPI NVS
May 15 10:43:48.034460 kernel: reserve setup_data: [mem 0x0000000000900000-0x000000009b438017] usable
May 15 10:43:48.034476 kernel: reserve setup_data: [mem 0x000000009b438018-0x000000009b474e57] usable
May 15 10:43:48.034484 kernel: reserve setup_data: [mem 0x000000009b474e58-0x000000009b475017] usable
May 15 10:43:48.034503 kernel: reserve setup_data: [mem 0x000000009b475018-0x000000009b47ec57] usable
May 15 10:43:48.034511 kernel: reserve setup_data: [mem 0x000000009b47ec58-0x000000009c8eefff] usable
May 15 10:43:48.034529 kernel: reserve setup_data: [mem 0x000000009c8ef000-0x000000009cb6efff] reserved
May 15 10:43:48.034539 kernel: reserve setup_data: [mem 0x000000009cb6f000-0x000000009cb7efff] ACPI data
May 15 10:43:48.034547 kernel: reserve setup_data: [mem 0x000000009cb7f000-0x000000009cbfefff] ACPI NVS
May 15 10:43:48.034555 kernel: reserve setup_data: [mem 0x000000009cbff000-0x000000009cf3ffff] usable
May 15 10:43:48.034568 kernel: reserve setup_data: [mem 0x000000009cf40000-0x000000009cf5ffff] reserved
May 15 10:43:48.034577 kernel: reserve setup_data: [mem 0x000000009cf60000-0x000000009cffffff] ACPI NVS
May 15 10:43:48.034586 kernel: reserve setup_data: [mem 0x00000000b0000000-0x00000000bfffffff] reserved
May 15 10:43:48.034596 kernel: efi: EFI v2.70 by EDK II
May 15 10:43:48.034605 kernel: efi: SMBIOS=0x9c9ab000 ACPI=0x9cb7e000 ACPI 2.0=0x9cb7e014 MEMATTR=0x9b673018 RNG=0x9cb73018
May 15 10:43:48.034628 kernel: random: crng init done
May 15 10:43:48.034637 kernel: SMBIOS 2.8 present.
May 15 10:43:48.034645 kernel: DMI: QEMU Standard PC (Q35 + ICH9, 2009), BIOS 0.0.0 02/06/2015
May 15 10:43:48.034662 kernel: Hypervisor detected: KVM
May 15 10:43:48.034671 kernel: kvm-clock: Using msrs 4b564d01 and 4b564d00
May 15 10:43:48.034680 kernel: kvm-clock: cpu 0, msr 2019a001, primary cpu clock
May 15 10:43:48.034689 kernel: kvm-clock: using sched offset of 5687672327 cycles
May 15 10:43:48.034715 kernel: clocksource: kvm-clock: mask: 0xffffffffffffffff max_cycles: 0x1cd42e4dffb, max_idle_ns: 881590591483 ns
May 15 10:43:48.034724 kernel: tsc: Detected 2794.748 MHz processor
May 15 10:43:48.034734 kernel: e820: update [mem 0x00000000-0x00000fff] usable ==> reserved
May 15 10:43:48.034743 kernel: e820: remove [mem 0x000a0000-0x000fffff] usable
May 15 10:43:48.034750 kernel: last_pfn = 0x9cf40 max_arch_pfn = 0x400000000
May 15 10:43:48.034757 kernel: x86/PAT: Configuration [0-7]: WB WC UC- UC WB WP UC- WT
May 15 10:43:48.034764 kernel: Using GB pages for direct mapping
May 15 10:43:48.034771 kernel: Secure boot disabled
May 15 10:43:48.034778 kernel: ACPI: Early table checksum verification disabled
May 15 10:43:48.034787 kernel: ACPI: RSDP 0x000000009CB7E014 000024 (v02 BOCHS )
May 15 10:43:48.034794 kernel: ACPI: XSDT 0x000000009CB7D0E8 000054 (v01 BOCHS BXPC 00000001 01000013)
May 15 10:43:48.034801 kernel: ACPI: FACP 0x000000009CB79000 0000F4 (v03 BOCHS BXPC 00000001 BXPC 00000001)
May 15 10:43:48.034808 kernel: ACPI: DSDT 0x000000009CB7A000 0021A8 (v01 BOCHS BXPC 00000001 BXPC 00000001)
May 15 10:43:48.034818 kernel: ACPI: FACS 0x000000009CBDD000 000040
May 15 10:43:48.034825 kernel: ACPI: APIC 0x000000009CB78000 000090 (v01 BOCHS BXPC 00000001 BXPC 00000001)
May 15 10:43:48.034832 kernel: ACPI: HPET 0x000000009CB77000 000038 (v01 BOCHS BXPC 00000001 BXPC 00000001)
May 15 10:43:48.034841 kernel: ACPI: MCFG 0x000000009CB76000 00003C (v01 BOCHS BXPC 00000001 BXPC 00000001)
May 15 10:43:48.034848 kernel: ACPI: WAET 0x000000009CB75000 000028 (v01 BOCHS BXPC 00000001 BXPC 00000001)
May 15 10:43:48.034859 kernel: ACPI: BGRT 0x000000009CB74000 000038 (v01 INTEL EDK2 00000002 01000013)
May 15 10:43:48.034866 kernel: ACPI: Reserving FACP table memory at [mem 0x9cb79000-0x9cb790f3]
May 15 10:43:48.034873 kernel: ACPI: Reserving DSDT table memory at [mem 0x9cb7a000-0x9cb7c1a7]
May 15 10:43:48.034880 kernel: ACPI: Reserving FACS table memory at [mem 0x9cbdd000-0x9cbdd03f]
May 15 10:43:48.034887 kernel: ACPI: Reserving APIC table memory at [mem 0x9cb78000-0x9cb7808f]
May 15 10:43:48.034894 kernel: ACPI: Reserving HPET table memory at [mem 0x9cb77000-0x9cb77037]
May 15 10:43:48.034900 kernel: ACPI: Reserving MCFG table memory at [mem 0x9cb76000-0x9cb7603b]
May 15 10:43:48.034907 kernel: ACPI: Reserving WAET table memory at [mem 0x9cb75000-0x9cb75027]
May 15 10:43:48.034914 kernel: ACPI: Reserving BGRT table memory at [mem 0x9cb74000-0x9cb74037]
May 15 10:43:48.034922 kernel: No NUMA configuration found
May 15 10:43:48.034929 kernel: Faking a node at [mem 0x0000000000000000-0x000000009cf3ffff]
May 15 10:43:48.034936 kernel: NODE_DATA(0) allocated [mem 0x9cea6000-0x9ceabfff]
May 15 10:43:48.034943 kernel: Zone ranges:
May 15 10:43:48.034949 kernel: DMA [mem 0x0000000000001000-0x0000000000ffffff]
May 15 10:43:48.034956 kernel: DMA32 [mem 0x0000000001000000-0x000000009cf3ffff]
May 15 10:43:48.034963 kernel: Normal empty
May 15 10:43:48.034970 kernel: Movable zone start for each node
May 15 10:43:48.034976 kernel: Early memory node ranges
May 15 10:43:48.034985 kernel: node 0: [mem 0x0000000000001000-0x000000000009ffff]
May 15 10:43:48.034992 kernel: node 0: [mem 0x0000000000100000-0x00000000007fffff]
May 15 10:43:48.034998 kernel: node 0: [mem 0x0000000000808000-0x000000000080afff]
May 15 10:43:48.035005 kernel: node 0: [mem 0x000000000080c000-0x000000000080ffff]
May 15 10:43:48.035055 kernel: node 0: [mem 0x0000000000900000-0x000000009c8eefff]
May 15 10:43:48.035062 kernel: node 0: [mem 0x000000009cbff000-0x000000009cf3ffff]
May 15 10:43:48.035068 kernel: Initmem setup node 0 [mem 0x0000000000001000-0x000000009cf3ffff]
May 15 10:43:48.035075 kernel: On node 0, zone DMA: 1 pages in unavailable ranges
May 15 10:43:48.035082 kernel: On node 0, zone DMA: 96 pages in unavailable ranges
May 15 10:43:48.035089 kernel: On node 0, zone DMA: 8 pages in unavailable ranges
May 15 10:43:48.035097 kernel: On node 0, zone DMA: 1 pages in unavailable ranges
May 15 10:43:48.035104 kernel: On node 0, zone DMA: 240 pages in unavailable ranges
May 15 10:43:48.035111 kernel: On node 0, zone DMA32: 784 pages in unavailable ranges
May 15 10:43:48.035118 kernel: On node 0, zone DMA32: 12480 pages in unavailable ranges
May 15 10:43:48.035125 kernel: ACPI: PM-Timer IO Port: 0x608
May 15 10:43:48.035132 kernel: ACPI: LAPIC_NMI (acpi_id[0xff] dfl dfl lint[0x1])
May 15 10:43:48.035138 kernel: IOAPIC[0]: apic_id 0, version 17, address 0xfec00000, GSI 0-23
May 15 10:43:48.035145 kernel: ACPI: INT_SRC_OVR (bus 0 bus_irq 0 global_irq 2 dfl dfl)
May 15 10:43:48.035152 kernel: ACPI: INT_SRC_OVR (bus 0 bus_irq 5 global_irq 5 high level)
May 15 10:43:48.035160 kernel: ACPI: INT_SRC_OVR (bus 0 bus_irq 9 global_irq 9 high level)
May 15 10:43:48.035167 kernel: ACPI: INT_SRC_OVR (bus 0 bus_irq 10 global_irq 10 high level)
May 15 10:43:48.035178 kernel: ACPI: INT_SRC_OVR (bus 0 bus_irq 11 global_irq 11 high level)
May 15 10:43:48.035189 kernel: ACPI: Using ACPI (MADT) for SMP configuration information
May 15 10:43:48.035203 kernel: ACPI: HPET id: 0x8086a201 base: 0xfed00000
May 15 10:43:48.035210 kernel: TSC deadline timer available
May 15 10:43:48.035216 kernel: smpboot: Allowing 4 CPUs, 0 hotplug CPUs
May 15 10:43:48.035223 kernel: kvm-guest: KVM setup pv remote TLB flush
May 15 10:43:48.035230 kernel: kvm-guest: setup PV sched yield
May 15 10:43:48.035238 kernel: [mem 0xc0000000-0xffffffff] available for PCI devices
May 15 10:43:48.035245 kernel: Booting paravirtualized kernel on KVM
May 15 10:43:48.035265 kernel: clocksource: refined-jiffies: mask: 0xffffffff max_cycles: 0xffffffff, max_idle_ns: 1910969940391419 ns
May 15 10:43:48.035274 kernel: setup_percpu: NR_CPUS:512 nr_cpumask_bits:512 nr_cpu_ids:4 nr_node_ids:1
May 15 10:43:48.035281 kernel: percpu: Embedded 56 pages/cpu s188696 r8192 d32488 u524288
May 15 10:43:48.035289 kernel: pcpu-alloc: s188696 r8192 d32488 u524288 alloc=1*2097152
May 15 10:43:48.035296 kernel: pcpu-alloc: [0] 0 1 2 3
May 15 10:43:48.035303 kernel: kvm-guest: setup async PF for cpu 0
May 15 10:43:48.035310 kernel: kvm-guest: stealtime: cpu 0, msr 9b21c0c0
May 15 10:43:48.035317 kernel: kvm-guest: PV spinlocks enabled
May 15 10:43:48.035324 kernel: PV qspinlock hash table entries: 256 (order: 0, 4096 bytes, linear)
May 15 10:43:48.035331 kernel: Built 1 zonelists, mobility grouping on. Total pages: 629759
May 15 10:43:48.035340 kernel: Policy zone: DMA32
May 15 10:43:48.035349 kernel: Kernel command line: rootflags=rw mount.usrflags=ro BOOT_IMAGE=/flatcar/vmlinuz-a mount.usr=/dev/mapper/usr verity.usr=PARTUUID=7130c94a-213a-4e5a-8e26-6cce9662f132 rootflags=rw mount.usrflags=ro consoleblank=0 root=LABEL=ROOT console=ttyS0,115200 flatcar.first_boot=detected verity.usrhash=f8c1bc5ff10765e781843bfc97fc5357002a3f8a120201a0e954fce1d2ba48f0
May 15 10:43:48.035356 kernel: Unknown kernel command line parameters "BOOT_IMAGE=/flatcar/vmlinuz-a", will be passed to user space.
May 15 10:43:48.035363 kernel: Dentry cache hash table entries: 524288 (order: 10, 4194304 bytes, linear)
May 15 10:43:48.035375 kernel: Inode-cache hash table entries: 262144 (order: 9, 2097152 bytes, linear)
May 15 10:43:48.035382 kernel: mem auto-init: stack:off, heap alloc:off, heap free:off
May 15 10:43:48.035390 kernel: Memory: 2397432K/2567000K available (12294K kernel code, 2276K rwdata, 13724K rodata, 47472K init, 4108K bss, 169308K reserved, 0K cma-reserved)
May 15 10:43:48.035397 kernel: SLUB: HWalign=64, Order=0-3, MinObjects=0, CPUs=4, Nodes=1
May 15 10:43:48.035404 kernel: ftrace: allocating 34585 entries in 136 pages
May 15 10:43:48.035411 kernel: ftrace: allocated 136 pages with 2 groups
May 15 10:43:48.035418 kernel: rcu: Hierarchical RCU implementation.
May 15 10:43:48.035426 kernel: rcu: RCU event tracing is enabled.
May 15 10:43:48.035436 kernel: rcu: RCU restricting CPUs from NR_CPUS=512 to nr_cpu_ids=4.
May 15 10:43:48.035448 kernel: Rude variant of Tasks RCU enabled.
May 15 10:43:48.035455 kernel: Tracing variant of Tasks RCU enabled.
May 15 10:43:48.035462 kernel: rcu: RCU calculated value of scheduler-enlistment delay is 100 jiffies.
May 15 10:43:48.035470 kernel: rcu: Adjusting geometry for rcu_fanout_leaf=16, nr_cpu_ids=4
May 15 10:43:48.035477 kernel: NR_IRQS: 33024, nr_irqs: 456, preallocated irqs: 16
May 15 10:43:48.035484 kernel: Console: colour dummy device 80x25
May 15 10:43:48.035491 kernel: printk: console [ttyS0] enabled
May 15 10:43:48.035498 kernel: ACPI: Core revision 20210730
May 15 10:43:48.035505 kernel: clocksource: hpet: mask: 0xffffffff max_cycles: 0xffffffff, max_idle_ns: 19112604467 ns
May 15 10:43:48.035525 kernel: APIC: Switch to symmetric I/O mode setup
May 15 10:43:48.035545 kernel: x2apic enabled
May 15 10:43:48.035557 kernel: Switched APIC routing to physical x2apic.
May 15 10:43:48.035564 kernel: kvm-guest: setup PV IPIs
May 15 10:43:48.035573 kernel: ..TIMER: vector=0x30 apic1=0 pin1=2 apic2=-1 pin2=-1
May 15 10:43:48.035581 kernel: tsc: Marking TSC unstable due to TSCs unsynchronized
May 15 10:43:48.035588 kernel: Calibrating delay loop (skipped) preset value.. 5589.49 BogoMIPS (lpj=2794748)
May 15 10:43:48.035595 kernel: x86/cpu: User Mode Instruction Prevention (UMIP) activated
May 15 10:43:48.035605 kernel: Last level iTLB entries: 4KB 512, 2MB 255, 4MB 127
May 15 10:43:48.035628 kernel: Last level dTLB entries: 4KB 512, 2MB 255, 4MB 127, 1GB 0
May 15 10:43:48.035652 kernel: Spectre V1 : Mitigation: usercopy/swapgs barriers and __user pointer sanitization
May 15 10:43:48.035672 kernel: Spectre V2 : Mitigation: Retpolines
May 15 10:43:48.035692 kernel: Spectre V2 : Spectre v2 / SpectreRSB: Filling RSB on context switch and VMEXIT
May 15 10:43:48.035705 kernel: Spectre V2 : Enabling Speculation Barrier for firmware calls
May 15 10:43:48.035712 kernel: RETBleed: Mitigation: untrained return thunk
May 15 10:43:48.035720 kernel: Spectre V2 : mitigation: Enabling conditional Indirect Branch Prediction Barrier
May 15 10:43:48.035729 kernel: Speculative Store Bypass: Mitigation: Speculative Store Bypass disabled via prctl and seccomp
May 15 10:43:48.035737 kernel: x86/fpu: Supporting XSAVE feature 0x001: 'x87 floating point registers'
May 15 10:43:48.035750 kernel: x86/fpu: Supporting XSAVE feature 0x002: 'SSE registers'
May 15 10:43:48.035757 kernel: x86/fpu: Supporting XSAVE feature 0x004: 'AVX registers'
May 15 10:43:48.035765 kernel: x86/fpu: xstate_offset[2]: 576, xstate_sizes[2]: 256
May 15 10:43:48.035772 kernel: x86/fpu: Enabled xstate features 0x7, context size is 832 bytes, using 'standard' format.
May 15 10:43:48.035779 kernel: Freeing SMP alternatives memory: 32K
May 15 10:43:48.035786 kernel: pid_max: default: 32768 minimum: 301
May 15 10:43:48.035793 kernel: LSM: Security Framework initializing
May 15 10:43:48.035800 kernel: SELinux: Initializing.
May 15 10:43:48.035808 kernel: Mount-cache hash table entries: 8192 (order: 4, 65536 bytes, linear)
May 15 10:43:48.035816 kernel: Mountpoint-cache hash table entries: 8192 (order: 4, 65536 bytes, linear)
May 15 10:43:48.035824 kernel: smpboot: CPU0: AMD EPYC 7402P 24-Core Processor (family: 0x17, model: 0x31, stepping: 0x0)
May 15 10:43:48.035831 kernel: Performance Events: Fam17h+ core perfctr, AMD PMU driver.
May 15 10:43:48.035838 kernel: ... version: 0
May 15 10:43:48.035845 kernel: ... bit width: 48
May 15 10:43:48.035852 kernel: ... generic registers: 6
May 15 10:43:48.035860 kernel: ... value mask: 0000ffffffffffff
May 15 10:43:48.035867 kernel: ... max period: 00007fffffffffff
May 15 10:43:48.035874 kernel: ... fixed-purpose events: 0
May 15 10:43:48.035882 kernel: ... event mask: 000000000000003f
May 15 10:43:48.035889 kernel: signal: max sigframe size: 1776
May 15 10:43:48.035897 kernel: rcu: Hierarchical SRCU implementation.
May 15 10:43:48.035904 kernel: smp: Bringing up secondary CPUs ...
May 15 10:43:48.035911 kernel: x86: Booting SMP configuration:
May 15 10:43:48.035918 kernel: .... node #0, CPUs: #1
May 15 10:43:48.035925 kernel: kvm-clock: cpu 1, msr 2019a041, secondary cpu clock
May 15 10:43:48.035932 kernel: kvm-guest: setup async PF for cpu 1
May 15 10:43:48.035939 kernel: kvm-guest: stealtime: cpu 1, msr 9b29c0c0
May 15 10:43:48.035948 kernel: #2
May 15 10:43:48.035955 kernel: kvm-clock: cpu 2, msr 2019a081, secondary cpu clock
May 15 10:43:48.035962 kernel: kvm-guest: setup async PF for cpu 2
May 15 10:43:48.035969 kernel: kvm-guest: stealtime: cpu 2, msr 9b31c0c0
May 15 10:43:48.035976 kernel: #3
May 15 10:43:48.035983 kernel: kvm-clock: cpu 3, msr 2019a0c1, secondary cpu clock
May 15 10:43:48.035990 kernel: kvm-guest: setup async PF for cpu 3
May 15 10:43:48.035997 kernel: kvm-guest: stealtime: cpu 3, msr 9b39c0c0
May 15 10:43:48.036005 kernel: smp: Brought up 1 node, 4 CPUs
May 15 10:43:48.036016 kernel: smpboot: Max logical packages: 1
May 15 10:43:48.036023 kernel: smpboot: Total of 4 processors activated (22357.98 BogoMIPS)
May 15 10:43:48.036030 kernel: devtmpfs: initialized
May 15 10:43:48.036037 kernel: x86/mm: Memory block size: 128MB
May 15 10:43:48.036045 kernel: ACPI: PM: Registering ACPI NVS region [mem 0x00800000-0x00807fff] (32768 bytes)
May 15 10:43:48.036052 kernel: ACPI: PM: Registering ACPI NVS region [mem 0x0080b000-0x0080bfff] (4096 bytes)
May 15 10:43:48.036059 kernel: ACPI: PM: Registering ACPI NVS region [mem 0x00810000-0x008fffff] (983040 bytes)
May 15 10:43:48.036071 kernel: ACPI: PM: Registering ACPI NVS region [mem 0x9cb7f000-0x9cbfefff] (524288 bytes)
May 15 10:43:48.036078 kernel: ACPI: PM: Registering ACPI NVS region [mem 0x9cf60000-0x9cffffff] (655360 bytes)
May 15 10:43:48.036087 kernel: clocksource: jiffies: mask: 0xffffffff max_cycles: 0xffffffff, max_idle_ns: 1911260446275000 ns
May 15 10:43:48.036094 kernel: futex hash table entries: 1024 (order: 4, 65536 bytes, linear)
May 15 10:43:48.036101 kernel: pinctrl core: initialized pinctrl subsystem
May 15 10:43:48.036108 kernel: NET: Registered PF_NETLINK/PF_ROUTE protocol family
May 15 10:43:48.036115 kernel: audit: initializing netlink subsys (disabled)
May 15 10:43:48.036123 kernel: audit: type=2000 audit(1747305826.912:1): state=initialized audit_enabled=0 res=1
May 15 10:43:48.036130 kernel: thermal_sys: Registered thermal governor 'step_wise'
May 15 10:43:48.036137 kernel: thermal_sys: Registered thermal governor 'user_space'
May 15 10:43:48.036144 kernel: cpuidle: using governor menu
May 15 10:43:48.036153 kernel: ACPI: bus type PCI registered
May 15 10:43:48.036160 kernel: acpiphp: ACPI Hot Plug PCI Controller Driver version: 0.5
May 15 10:43:48.036167 kernel: dca service started, version 1.12.1
May 15 10:43:48.036174 kernel: PCI: MMCONFIG for domain 0000 [bus 00-ff] at [mem 0xb0000000-0xbfffffff] (base 0xb0000000)
May 15 10:43:48.036182 kernel: PCI: MMCONFIG at [mem 0xb0000000-0xbfffffff] reserved in E820
May 15 10:43:48.036189 kernel: PCI: Using configuration type 1 for base access
May 15 10:43:48.036196 kernel: kprobes: kprobe jump-optimization is enabled. All kprobes are optimized if possible.
May 15 10:43:48.036203 kernel: HugeTLB registered 1.00 GiB page size, pre-allocated 0 pages
May 15 10:43:48.036211 kernel: HugeTLB registered 2.00 MiB page size, pre-allocated 0 pages
May 15 10:43:48.036219 kernel: ACPI: Added _OSI(Module Device)
May 15 10:43:48.036226 kernel: ACPI: Added _OSI(Processor Device)
May 15 10:43:48.036233 kernel: ACPI: Added _OSI(3.0 _SCP Extensions)
May 15 10:43:48.036241 kernel: ACPI: Added _OSI(Processor Aggregator Device)
May 15 10:43:48.036248 kernel: ACPI: Added _OSI(Linux-Dell-Video)
May 15 10:43:48.036255 kernel: ACPI: Added _OSI(Linux-Lenovo-NV-HDMI-Audio)
May 15 10:43:48.036269 kernel: ACPI: Added _OSI(Linux-HPI-Hybrid-Graphics)
May 15 10:43:48.036276 kernel: ACPI: 1 ACPI AML tables successfully acquired and loaded
May 15 10:43:48.036283 kernel: ACPI: Interpreter enabled
May 15 10:43:48.036293 kernel: ACPI: PM: (supports S0 S3 S5)
May 15 10:43:48.036300 kernel: ACPI: Using IOAPIC for interrupt routing
May 15 10:43:48.036307 kernel: PCI: Using host bridge windows from ACPI; if necessary, use "pci=nocrs" and report a bug
May 15 10:43:48.036314 kernel: ACPI: Enabled 2 GPEs in block 00 to 3F
May 15 10:43:48.036322 kernel: ACPI: PCI Root Bridge [PCI0] (domain 0000 [bus 00-ff])
May 15 10:43:48.036492 kernel: acpi PNP0A08:00: _OSC: OS supports [ExtendedConfig ASPM ClockPM Segments MSI HPX-Type3]
May 15 10:43:48.036573 kernel: acpi PNP0A08:00: _OSC: platform does not support [PCIeHotplug LTR]
May 15 10:43:48.036669 kernel: acpi PNP0A08:00: _OSC: OS now controls [PME AER PCIeCapability]
May 15 10:43:48.036687 kernel: PCI host bridge to bus 0000:00
May 15 10:43:48.036831 kernel: pci_bus 0000:00: root bus resource [io 0x0000-0x0cf7 window]
May 15 10:43:48.037005 kernel: pci_bus 0000:00: root bus resource [io 0x0d00-0xffff window]
May 15 10:43:48.037073 kernel: pci_bus 0000:00: root bus resource [mem 0x000a0000-0x000bffff window]
May 15 10:43:48.037137 kernel: pci_bus 0000:00: root bus resource [mem 0x9d000000-0xafffffff window]
May 15 10:43:48.037209 kernel: pci_bus 0000:00: root bus resource [mem 0xc0000000-0xfebfffff window]
May 15 10:43:48.037300 kernel: pci_bus 0000:00: root bus resource [mem 0x800000000-0xfffffffff window]
May 15 10:43:48.037434 kernel: pci_bus 0000:00: root bus resource [bus 00-ff]
May 15 10:43:48.037633 kernel: pci 0000:00:00.0: [8086:29c0] type 00 class 0x060000
May 15 10:43:48.037772 kernel: pci 0000:00:01.0: [1234:1111] type 00 class 0x030000
May 15 10:43:48.037884 kernel: pci 0000:00:01.0: reg 0x10: [mem 0xc0000000-0xc0ffffff pref]
May 15 10:43:48.038022 kernel: pci 0000:00:01.0: reg 0x18: [mem 0xc1044000-0xc1044fff]
May 15 10:43:48.038115 kernel: pci 0000:00:01.0: reg 0x30: [mem 0xffff0000-0xffffffff pref]
May 15 10:43:48.038668 kernel: pci 0000:00:01.0: BAR 0: assigned to efifb
May 15 10:43:48.038764 kernel: pci 0000:00:01.0: Video device with shadowed ROM at [mem 0x000c0000-0x000dffff]
May 15 10:43:48.038882 kernel: pci 0000:00:02.0: [1af4:1005] type 00 class 0x00ff00
May 15 10:43:48.039039 kernel: pci 0000:00:02.0: reg 0x10: [io 0x6100-0x611f]
May 15 10:43:48.039249 kernel: pci 0000:00:02.0: reg 0x14: [mem 0xc1043000-0xc1043fff]
May 15 10:43:48.039667 kernel: pci 0000:00:02.0: reg 0x20: [mem 0x800000000-0x800003fff 64bit pref]
May 15 10:43:48.040127 kernel: pci 0000:00:03.0: [1af4:1001] type 00 class 0x010000
May 15 10:43:48.040251 kernel: pci 0000:00:03.0: reg 0x10: [io 0x6000-0x607f]
May 15 10:43:48.040363 kernel: pci 0000:00:03.0: reg 0x14: [mem 0xc1042000-0xc1042fff]
May 15 10:43:48.040457 kernel: pci 0000:00:03.0: reg 0x20: [mem 0x800004000-0x800007fff 64bit pref]
May 15 10:43:48.040649 kernel: pci 0000:00:04.0: [1af4:1000] type 00 class 0x020000
May 15 10:43:48.040889 kernel: pci 0000:00:04.0: reg 0x10: [io 0x60e0-0x60ff]
May 15 10:43:48.042102 kernel: pci 0000:00:04.0: reg 0x14: [mem 0xc1041000-0xc1041fff]
May 15 10:43:48.042428 kernel: pci 0000:00:04.0: reg 0x20: [mem 0x800008000-0x80000bfff 64bit pref]
May 15 10:43:48.044044 kernel: pci 0000:00:04.0: reg 0x30: [mem 0xfffc0000-0xffffffff pref]
May 15 10:43:48.044197 kernel: pci 0000:00:1f.0: [8086:2918] type 00 class 0x060100
May 15 10:43:48.044315 kernel: pci 0000:00:1f.0: quirk: [io 0x0600-0x067f] claimed by ICH6 ACPI/GPIO/TCO
May 15 10:43:48.044668 kernel: pci 0000:00:1f.2: [8086:2922] type 00 class 0x010601
May 15 10:43:48.044925 kernel: pci 0000:00:1f.2: reg 0x20: [io 0x60c0-0x60df]
May 15 10:43:48.045220 kernel: pci 0000:00:1f.2: reg 0x24: [mem 0xc1040000-0xc1040fff]
May 15 10:43:48.045570 kernel: pci 0000:00:1f.3: [8086:2930] type 00 class 0x0c0500
May 15 10:43:48.045772 kernel: pci 0000:00:1f.3: reg 0x20: [io 0x6080-0x60bf]
May 15 10:43:48.045784 kernel: ACPI: PCI: Interrupt link LNKA configured for IRQ 10
May 15 10:43:48.045792 kernel: ACPI: PCI: Interrupt link LNKB configured for IRQ 10
May 15 10:43:48.045799 kernel: ACPI: PCI: Interrupt link LNKC configured for IRQ 11
May 15 10:43:48.045807 kernel: ACPI: PCI: Interrupt link LNKD configured for IRQ 11
May 15 10:43:48.045814 kernel: ACPI: PCI: Interrupt link LNKE configured for IRQ 10
May 15 10:43:48.045821 kernel: ACPI: PCI: Interrupt link LNKF configured for IRQ 10
May 15 10:43:48.045828 kernel: ACPI: PCI: Interrupt link LNKG configured for IRQ 11
May 15 10:43:48.045845 kernel: ACPI: PCI: Interrupt link LNKH configured for IRQ 11
May 15 10:43:48.045852 kernel: ACPI: PCI: Interrupt link GSIA configured for IRQ 16
May 15 10:43:48.045859 kernel: ACPI: PCI: Interrupt link GSIB configured for IRQ 17
May 15 10:43:48.045866 kernel: ACPI: PCI: Interrupt link GSIC configured for IRQ 18
May 15 10:43:48.045874 kernel: ACPI: PCI: Interrupt link GSID configured for IRQ 19
May 15 10:43:48.045881 kernel: ACPI: PCI: Interrupt link GSIE configured for IRQ 20
May 15 10:43:48.045888 kernel: ACPI: PCI: Interrupt link GSIF configured for IRQ 21
May 15 10:43:48.045895 kernel: ACPI: PCI: Interrupt link GSIG configured for IRQ 22
May 15 10:43:48.045902 kernel: ACPI: PCI: Interrupt link GSIH configured for IRQ 23
May 15 10:43:48.045914 kernel: iommu: Default domain type: Translated
May 15 10:43:48.045921 kernel: iommu: DMA domain TLB invalidation policy: lazy mode
May 15 10:43:48.046000 kernel: pci 0000:00:01.0: vgaarb: setting as boot VGA device
May 15 10:43:48.046082 kernel: pci 0000:00:01.0: vgaarb: VGA device added: decodes=io+mem,owns=io+mem,locks=none
May 15 10:43:48.046155 kernel: pci 0000:00:01.0: vgaarb: bridge control possible
May 15 10:43:48.046165 kernel: vgaarb: loaded
May 15 10:43:48.046173 kernel: pps_core: LinuxPPS API ver. 1 registered
May 15 10:43:48.046180 kernel: pps_core: Software ver. 5.3.6 - Copyright 2005-2007 Rodolfo Giometti
May 15 10:43:48.046190 kernel: PTP clock support registered
May 15 10:43:48.046198 kernel: Registered efivars operations
May 15 10:43:48.046205 kernel: PCI: Using ACPI for IRQ routing
May 15 10:43:48.046217 kernel: PCI: pci_cache_line_size set to 64 bytes
May 15 10:43:48.046236 kernel: e820: reserve RAM buffer [mem 0x0080b000-0x008fffff]
May 15 10:43:48.046244 kernel: e820: reserve RAM buffer [mem 0x00810000-0x008fffff]
May 15 10:43:48.046251 kernel: e820: reserve RAM buffer [mem 0x9b438018-0x9bffffff]
May 15 10:43:48.046264 kernel: e820: reserve RAM buffer [mem 0x9b475018-0x9bffffff]
May 15 10:43:48.046271 kernel: e820: reserve RAM buffer [mem 0x9c8ef000-0x9fffffff]
May 15 10:43:48.046290 kernel: e820: reserve RAM buffer [mem 0x9cf40000-0x9fffffff]
May 15 10:43:48.046307 kernel: hpet0: at MMIO 0xfed00000, IRQs 2, 8, 0
May 15 10:43:48.046315 kernel: hpet0: 3 comparators, 64-bit 100.000000 MHz counter
May 15 10:43:48.046322 kernel: clocksource: Switched to clocksource kvm-clock
May 15 10:43:48.046329 kernel: VFS: Disk quotas dquot_6.6.0
May 15 10:43:48.046337 kernel: VFS: Dquot-cache hash table entries: 512 (order 0, 4096 bytes)
May 15 10:43:48.046344 kernel: pnp: PnP ACPI init
May 15 10:43:48.046508 kernel: system 00:05: [mem 0xb0000000-0xbfffffff window] has been reserved
May 15 10:43:48.046524 kernel: pnp: PnP ACPI: found 6 devices
May 15 10:43:48.046531 kernel: clocksource: acpi_pm: mask: 0xffffff max_cycles: 0xffffff, max_idle_ns: 2085701024 ns
May 15 10:43:48.046539 kernel: NET: Registered PF_INET protocol family
May 15 10:43:48.046546 kernel: IP idents hash table entries: 65536 (order: 7, 524288 bytes, linear)
May 15 10:43:48.046554 kernel: tcp_listen_portaddr_hash hash table entries: 2048 (order: 3, 32768 bytes, linear)
May 15 10:43:48.046561 kernel: Table-perturb hash table entries: 65536 (order: 6, 262144 bytes, linear)
May 15 10:43:48.046569 kernel: TCP established hash table entries: 32768 (order: 6, 262144 bytes, linear)
May 15 10:43:48.046576 kernel: TCP bind hash table entries: 32768 (order: 7, 524288 bytes, linear)
May 15 10:43:48.046583 kernel: TCP: Hash tables configured (established 32768 bind 32768)
May 15 10:43:48.046592 kernel: UDP hash table entries: 2048 (order: 4, 65536 bytes, linear)
May 15 10:43:48.046600 kernel: UDP-Lite hash table entries: 2048 (order: 4, 65536 bytes, linear)
May 15 10:43:48.046607 kernel: NET: Registered PF_UNIX/PF_LOCAL protocol family
May 15 10:43:48.046698 kernel: NET: Registered PF_XDP protocol family
May 15 10:43:48.046846 kernel: pci 0000:00:04.0: can't claim BAR 6 [mem 0xfffc0000-0xffffffff pref]: no compatible bridge window
May 15 10:43:48.046934 kernel: pci 0000:00:04.0: BAR 6: assigned [mem 0x9d000000-0x9d03ffff pref]
May 15 10:43:48.047071 kernel: pci_bus 0000:00: resource 4 [io 0x0000-0x0cf7 window]
May 15 10:43:48.047141 kernel: pci_bus 0000:00: resource 5 [io 0x0d00-0xffff window]
May 15 10:43:48.047214 kernel: pci_bus 0000:00: resource 6 [mem 0x000a0000-0x000bffff window]
May 15 10:43:48.047293 kernel: pci_bus 0000:00: resource 7 [mem 0x9d000000-0xafffffff window]
May 15 10:43:48.047361 kernel: pci_bus 0000:00: resource 8 [mem 0xc0000000-0xfebfffff window]
May 15 10:43:48.047426 kernel: pci_bus 0000:00: resource 9 [mem 0x800000000-0xfffffffff window]
May 15 10:43:48.047436 kernel: PCI: CLS 0 bytes, default 64
May 15 10:43:48.047443 kernel: Initialise system trusted keyrings
May 15 10:43:48.047451 kernel: workingset: timestamp_bits=39 max_order=20 bucket_order=0
May 15 10:43:48.047458 kernel: Key type asymmetric registered
May 15 10:43:48.047466 kernel: Asymmetric key parser 'x509' registered
May 15 10:43:48.047482 kernel: Block layer SCSI generic (bsg) driver version 0.4 loaded (major 249)
May 15 10:43:48.047490 kernel: io scheduler mq-deadline registered
May 15 10:43:48.047508 kernel: io scheduler kyber registered
May 15 10:43:48.047517 kernel: io scheduler bfq registered
May 15 10:43:48.047524 kernel: ioatdma: Intel(R) QuickData Technology Driver 5.00
May 15 10:43:48.047532 kernel: ACPI: \_SB_.GSIG: Enabled at IRQ 22
May 15 10:43:48.047545 kernel: ACPI: \_SB_.GSIH: Enabled at IRQ 23
May 15 10:43:48.047556 kernel: ACPI: \_SB_.GSIE: Enabled at IRQ 20
May 15 10:43:48.047569 kernel: Serial: 8250/16550 driver, 4 ports, IRQ sharing enabled
May 15 10:43:48.047578 kernel: 00:03: ttyS0 at I/O 0x3f8 (irq = 4, base_baud = 115200) is a 16550A
May 15 10:43:48.047586 kernel: i8042: PNP: PS/2 Controller [PNP0303:KBD,PNP0f13:MOU] at 0x60,0x64 irq 1,12
May 15 10:43:48.047594 kernel: serio: i8042 KBD port at 0x60,0x64 irq 1
May 15 10:43:48.047606 kernel: serio: i8042 AUX port at 0x60,0x64 irq 12
May 15 10:43:48.047639 kernel: input: AT Translated Set 2 keyboard as /devices/platform/i8042/serio0/input/input0
May 15 10:43:48.047747 kernel: rtc_cmos 00:04: RTC can wake from S4
May 15 10:43:48.047853 kernel: rtc_cmos 00:04: registered as rtc0
May 15 10:43:48.047929 kernel: rtc_cmos 00:04: setting system clock to 2025-05-15T10:43:47 UTC (1747305827)
May 15 10:43:48.048002 kernel: rtc_cmos 00:04: alarms up to one day, y3k, 242 bytes nvram, hpet irqs
May 15 10:43:48.048012 kernel: efifb: probing for efifb
May 15 10:43:48.048020 kernel: efifb: framebuffer at 0xc0000000, using 4000k, total 4000k
May 15 10:43:48.048028 kernel: efifb: mode is 1280x800x32, linelength=5120, pages=1
May 15 10:43:48.048036 kernel: efifb: scrolling: redraw
May 15 10:43:48.048043 kernel: efifb: Truecolor: size=8:8:8:8, shift=24:16:8:0
May 15 10:43:48.048053 kernel: Console: switching to colour frame buffer device 160x50
May 15 10:43:48.048061 kernel: fb0: EFI VGA frame buffer device
May 15 10:43:48.048068 kernel: pstore: Registered efi as persistent store backend
May 15 10:43:48.048081 kernel: NET: Registered PF_INET6 protocol family
May 15 10:43:48.048089 kernel: Segment Routing with IPv6
May 15 10:43:48.048097 kernel: In-situ OAM (IOAM) with IPv6
May 15 10:43:48.048106 kernel: NET: Registered PF_PACKET protocol family
May 15 10:43:48.048113 kernel: Key type dns_resolver registered
May 15 10:43:48.048122 kernel: IPI shorthand broadcast: enabled
May 15 10:43:48.048130 kernel: sched_clock: Marking stable (517522046, 132983121)->(674704332, -24199165)
May 15 10:43:48.048137 kernel: registered taskstats version 1
May 15 10:43:48.048145 kernel: Loading compiled-in X.509 certificates
May 15 10:43:48.048161 kernel: Loaded X.509 cert 'Kinvolk GmbH: Module signing key for 5.15.182-flatcar: 04007c306af6b7696d09b3c2eafc1297036fd28e'
May 15 10:43:48.048177 kernel: Key type .fscrypt registered
May 15 10:43:48.048195 kernel: Key type fscrypt-provisioning registered
May 15 10:43:48.048204 kernel: pstore: Using crash dump compression: deflate
May 15 10:43:48.048220 kernel: ima: No TPM chip found, activating TPM-bypass!
May 15 10:43:48.048242 kernel: ima: Allocated hash algorithm: sha1
May 15 10:43:48.048263 kernel: ima: No architecture policies found
May 15 10:43:48.048272 kernel: clk: Disabling unused clocks
May 15 10:43:48.048280 kernel: Freeing unused kernel image (initmem) memory: 47472K
May 15 10:43:48.048299 kernel: Write protecting the kernel read-only data: 28672k
May 15 10:43:48.048318 kernel: Freeing unused kernel image (text/rodata gap) memory: 2040K
May 15 10:43:48.048331 kernel: Freeing unused kernel image (rodata/data gap) memory: 612K
May 15 10:43:48.048339 kernel: Run /init as init process
May 15 10:43:48.048346 kernel: with arguments:
May 15 10:43:48.048356 kernel: /init
May 15 10:43:48.048363 kernel: with environment:
May 15 10:43:48.048371 kernel: HOME=/
May 15 10:43:48.048378 kernel: TERM=linux
May 15 10:43:48.048386 kernel: BOOT_IMAGE=/flatcar/vmlinuz-a
May 15 10:43:48.048395 systemd[1]: systemd 252 running in system mode (+PAM +AUDIT +SELINUX -APPARMOR +IMA +SMACK +SECCOMP +GCRYPT -GNUTLS +OPENSSL -ACL +BLKID +CURL -ELFUTILS -FIDO2 +IDN2 -IDN +IPTC +KMOD +LIBCRYPTSETUP +LIBFDISK +PCRE2 -PWQUALITY -P11KIT -QRENCODE -TPM2 +BZIP2 +LZ4 +XZ +ZLIB +ZSTD -BPF_FRAMEWORK -XKBCOMMON +UTMP +SYSVINIT default-hierarchy=unified)
May 15 10:43:48.048405 systemd[1]: Detected virtualization kvm.
May 15 10:43:48.048425 systemd[1]: Detected architecture x86-64.
May 15 10:43:48.048456 systemd[1]: Running in initrd.
May 15 10:43:48.048464 systemd[1]: No hostname configured, using default hostname.
May 15 10:43:48.048472 systemd[1]: Hostname set to .
May 15 10:43:48.048480 systemd[1]: Initializing machine ID from VM UUID.
May 15 10:43:48.048488 systemd[1]: Queued start job for default target initrd.target.
May 15 10:43:48.048496 systemd[1]: Started systemd-ask-password-console.path.
May 15 10:43:48.048507 systemd[1]: Reached target cryptsetup.target.
May 15 10:43:48.048515 systemd[1]: Reached target paths.target.
May 15 10:43:48.048525 systemd[1]: Reached target slices.target.
May 15 10:43:48.048532 systemd[1]: Reached target swap.target. May 15 10:43:48.048540 systemd[1]: Reached target timers.target. May 15 10:43:48.048549 systemd[1]: Listening on iscsid.socket. May 15 10:43:48.048557 systemd[1]: Listening on iscsiuio.socket. May 15 10:43:48.048565 systemd[1]: Listening on systemd-journald-audit.socket. May 15 10:43:48.048573 systemd[1]: Listening on systemd-journald-dev-log.socket. May 15 10:43:48.048581 systemd[1]: Listening on systemd-journald.socket. May 15 10:43:48.048591 systemd[1]: Listening on systemd-networkd.socket. May 15 10:43:48.048599 systemd[1]: Listening on systemd-udevd-control.socket. May 15 10:43:48.048607 systemd[1]: Listening on systemd-udevd-kernel.socket. May 15 10:43:48.048628 systemd[1]: Reached target sockets.target. May 15 10:43:48.048636 systemd[1]: Starting kmod-static-nodes.service... May 15 10:43:48.048644 systemd[1]: Finished network-cleanup.service. May 15 10:43:48.048652 systemd[1]: Starting systemd-fsck-usr.service... May 15 10:43:48.048660 systemd[1]: Starting systemd-journald.service... May 15 10:43:48.048668 systemd[1]: Starting systemd-modules-load.service... May 15 10:43:48.048678 systemd[1]: Starting systemd-resolved.service... May 15 10:43:48.048686 systemd[1]: Starting systemd-vconsole-setup.service... May 15 10:43:48.048694 systemd[1]: Finished kmod-static-nodes.service. May 15 10:43:48.048702 systemd[1]: Finished systemd-fsck-usr.service. May 15 10:43:48.048710 systemd[1]: Starting systemd-tmpfiles-setup-dev.service... May 15 10:43:48.048718 systemd[1]: Finished systemd-vconsole-setup.service. May 15 10:43:48.048726 kernel: audit: type=1130 audit(1747305828.045:2): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-vconsole-setup comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? 
res=success' May 15 10:43:48.048738 systemd-journald[197]: Journal started May 15 10:43:48.048788 systemd-journald[197]: Runtime Journal (/run/log/journal/310220e044bc4fdb9427ceeeb8c3a392) is 6.0M, max 48.4M, 42.4M free. May 15 10:43:48.045000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-vconsole-setup comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' May 15 10:43:48.041982 systemd-modules-load[198]: Inserted module 'overlay' May 15 10:43:48.055273 systemd[1]: Started systemd-journald.service. May 15 10:43:48.055294 kernel: audit: type=1130 audit(1747305828.050:3): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-journald comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' May 15 10:43:48.050000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-journald comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' May 15 10:43:48.051785 systemd[1]: Finished systemd-tmpfiles-setup-dev.service. May 15 10:43:48.055000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-tmpfiles-setup-dev comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' May 15 10:43:48.057776 systemd[1]: Starting dracut-cmdline-ask.service... May 15 10:43:48.061012 kernel: audit: type=1130 audit(1747305828.055:4): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-tmpfiles-setup-dev comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' May 15 10:43:48.071405 systemd[1]: Finished dracut-cmdline-ask.service. May 15 10:43:48.078569 kernel: audit: type=1130 audit(1747305828.071:5): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=dracut-cmdline-ask comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? 
res=success' May 15 10:43:48.071000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=dracut-cmdline-ask comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' May 15 10:43:48.075062 systemd-resolved[199]: Positive Trust Anchors: May 15 10:43:48.075070 systemd-resolved[199]: . IN DS 20326 8 2 e06d44b80b8f1d39a95c0b0d7c65d08458e880409bbc683457104237c7f8ec8d May 15 10:43:48.085529 kernel: audit: type=1130 audit(1747305828.078:6): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-resolved comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' May 15 10:43:48.078000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-resolved comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' May 15 10:43:48.075097 systemd-resolved[199]: Negative trust anchors: home.arpa 10.in-addr.arpa 16.172.in-addr.arpa 17.172.in-addr.arpa 18.172.in-addr.arpa 19.172.in-addr.arpa 20.172.in-addr.arpa 21.172.in-addr.arpa 22.172.in-addr.arpa 23.172.in-addr.arpa 24.172.in-addr.arpa 25.172.in-addr.arpa 26.172.in-addr.arpa 27.172.in-addr.arpa 28.172.in-addr.arpa 29.172.in-addr.arpa 30.172.in-addr.arpa 31.172.in-addr.arpa 168.192.in-addr.arpa d.f.ip6.arpa corp home internal intranet lan local private test May 15 10:43:48.091305 dracut-cmdline[214]: dracut-dracut-053 May 15 10:43:48.091305 dracut-cmdline[214]: Using kernel command line parameters: rd.driver.pre=btrfs rootflags=rw mount.usrflags=ro BOOT_IMAGE=/flatcar/vmlinuz-a mount.usr=/dev/mapper/usr verity.usr=PARTUUID=7130c94a-213a-4e5a-8e26-6cce9662f132 rootflags=rw mount.usrflags=ro consoleblank=0 root=LABEL=ROOT console=ttyS0,115200 flatcar.first_boot=detected verity.usrhash=f8c1bc5ff10765e781843bfc97fc5357002a3f8a120201a0e954fce1d2ba48f0 May 15 10:43:48.075750 systemd[1]: Starting dracut-cmdline.service... 
May 15 10:43:48.077399 systemd-resolved[199]: Defaulting to hostname 'linux'. May 15 10:43:48.079008 systemd[1]: Started systemd-resolved.service. May 15 10:43:48.079144 systemd[1]: Reached target nss-lookup.target. May 15 10:43:48.113635 kernel: bridge: filtering via arp/ip/ip6tables is no longer available by default. Update your scripts to load br_netfilter if you need this. May 15 10:43:48.118324 systemd-modules-load[198]: Inserted module 'br_netfilter' May 15 10:43:48.119262 kernel: Bridge firewalling registered May 15 10:43:48.135638 kernel: SCSI subsystem initialized May 15 10:43:48.146936 kernel: device-mapper: core: CONFIG_IMA_DISABLE_HTABLE is disabled. Duplicate IMA measurements will not be recorded in the IMA log. May 15 10:43:48.146956 kernel: device-mapper: uevent: version 1.0.3 May 15 10:43:48.148245 kernel: device-mapper: ioctl: 4.45.0-ioctl (2021-03-22) initialised: dm-devel@redhat.com May 15 10:43:48.150981 systemd-modules-load[198]: Inserted module 'dm_multipath' May 15 10:43:48.152658 systemd[1]: Finished systemd-modules-load.service. May 15 10:43:48.158685 kernel: Loading iSCSI transport class v2.0-870. May 15 10:43:48.158700 kernel: audit: type=1130 audit(1747305828.153:7): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-modules-load comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' May 15 10:43:48.153000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-modules-load comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' May 15 10:43:48.154285 systemd[1]: Starting systemd-sysctl.service... May 15 10:43:48.162984 systemd[1]: Finished systemd-sysctl.service. May 15 10:43:48.168216 kernel: audit: type=1130 audit(1747305828.162:8): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-sysctl comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? 
res=success' May 15 10:43:48.162000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-sysctl comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' May 15 10:43:48.180646 kernel: iscsi: registered transport (tcp) May 15 10:43:48.202650 kernel: iscsi: registered transport (qla4xxx) May 15 10:43:48.202676 kernel: QLogic iSCSI HBA Driver May 15 10:43:48.235170 systemd[1]: Finished dracut-cmdline.service. May 15 10:43:48.240225 kernel: audit: type=1130 audit(1747305828.234:9): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=dracut-cmdline comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' May 15 10:43:48.234000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=dracut-cmdline comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' May 15 10:43:48.236306 systemd[1]: Starting dracut-pre-udev.service... May 15 10:43:48.282654 kernel: raid6: avx2x4 gen() 30281 MB/s May 15 10:43:48.299643 kernel: raid6: avx2x4 xor() 7514 MB/s May 15 10:43:48.316635 kernel: raid6: avx2x2 gen() 32585 MB/s May 15 10:43:48.333637 kernel: raid6: avx2x2 xor() 19235 MB/s May 15 10:43:48.350635 kernel: raid6: avx2x1 gen() 26545 MB/s May 15 10:43:48.367634 kernel: raid6: avx2x1 xor() 15366 MB/s May 15 10:43:48.384634 kernel: raid6: sse2x4 gen() 14822 MB/s May 15 10:43:48.401635 kernel: raid6: sse2x4 xor() 7595 MB/s May 15 10:43:48.418635 kernel: raid6: sse2x2 gen() 16305 MB/s May 15 10:43:48.435637 kernel: raid6: sse2x2 xor() 9829 MB/s May 15 10:43:48.452646 kernel: raid6: sse2x1 gen() 12503 MB/s May 15 10:43:48.470027 kernel: raid6: sse2x1 xor() 7770 MB/s May 15 10:43:48.470056 kernel: raid6: using algorithm avx2x2 gen() 32585 MB/s May 15 10:43:48.470067 kernel: raid6: .... 
xor() 19235 MB/s, rmw enabled May 15 10:43:48.470738 kernel: raid6: using avx2x2 recovery algorithm May 15 10:43:48.483652 kernel: xor: automatically using best checksumming function avx May 15 10:43:48.578646 kernel: Btrfs loaded, crc32c=crc32c-intel, zoned=no, fsverity=no May 15 10:43:48.587914 systemd[1]: Finished dracut-pre-udev.service. May 15 10:43:48.592798 kernel: audit: type=1130 audit(1747305828.588:10): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=dracut-pre-udev comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' May 15 10:43:48.588000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=dracut-pre-udev comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' May 15 10:43:48.592000 audit: BPF prog-id=7 op=LOAD May 15 10:43:48.592000 audit: BPF prog-id=8 op=LOAD May 15 10:43:48.593198 systemd[1]: Starting systemd-udevd.service... May 15 10:43:48.606651 systemd-udevd[398]: Using default interface naming scheme 'v252'. May 15 10:43:48.610969 systemd[1]: Started systemd-udevd.service. May 15 10:43:48.611000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-udevd comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' May 15 10:43:48.613383 systemd[1]: Starting dracut-pre-trigger.service... May 15 10:43:48.623949 dracut-pre-trigger[406]: rd.md=0: removing MD RAID activation May 15 10:43:48.650322 systemd[1]: Finished dracut-pre-trigger.service. May 15 10:43:48.650000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=dracut-pre-trigger comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' May 15 10:43:48.651998 systemd[1]: Starting systemd-udev-trigger.service... May 15 10:43:48.688251 systemd[1]: Finished systemd-udev-trigger.service. 
May 15 10:43:48.688000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-udev-trigger comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' May 15 10:43:48.726176 kernel: virtio_blk virtio1: [vda] 19775488 512-byte logical blocks (10.1 GB/9.43 GiB) May 15 10:43:48.733061 kernel: GPT:Primary header thinks Alt. header is not at the end of the disk. May 15 10:43:48.733075 kernel: GPT:9289727 != 19775487 May 15 10:43:48.733083 kernel: GPT:Alternate GPT header not at the end of the disk. May 15 10:43:48.733110 kernel: GPT:9289727 != 19775487 May 15 10:43:48.733127 kernel: GPT: Use GNU Parted to correct GPT errors. May 15 10:43:48.733135 kernel: vda: vda1 vda2 vda3 vda4 vda6 vda7 vda9 May 15 10:43:48.734637 kernel: cryptd: max_cpu_qlen set to 1000 May 15 10:43:48.743644 kernel: libata version 3.00 loaded. May 15 10:43:48.746645 kernel: AVX2 version of gcm_enc/dec engaged. May 15 10:43:48.782658 kernel: AES CTR mode by8 optimization enabled May 15 10:43:48.788649 kernel: ahci 0000:00:1f.2: version 3.0 May 15 10:43:48.813982 kernel: ACPI: \_SB_.GSIA: Enabled at IRQ 16 May 15 10:43:48.814000 kernel: ahci 0000:00:1f.2: AHCI 0001.0000 32 slots 6 ports 1.5 Gbps 0x3f impl SATA mode May 15 10:43:48.814103 kernel: ahci 0000:00:1f.2: flags: 64bit ncq only May 15 10:43:48.814199 kernel: scsi host0: ahci May 15 10:43:48.814336 kernel: scsi host1: ahci May 15 10:43:48.814435 kernel: scsi host2: ahci May 15 10:43:48.814523 kernel: BTRFS: device label OEM devid 1 transid 12 /dev/vda6 scanned by (udev-worker) (455) May 15 10:43:48.814534 kernel: scsi host3: ahci May 15 10:43:48.814658 kernel: scsi host4: ahci May 15 10:43:48.814755 kernel: scsi host5: ahci May 15 10:43:48.814850 kernel: ata1: SATA max UDMA/133 abar m4096@0xc1040000 port 0xc1040100 irq 34 May 15 10:43:48.814860 kernel: ata2: SATA max UDMA/133 abar m4096@0xc1040000 port 0xc1040180 irq 34 May 15 10:43:48.814870 kernel: ata3: SATA max UDMA/133 
abar m4096@0xc1040000 port 0xc1040200 irq 34 May 15 10:43:48.814879 kernel: ata4: SATA max UDMA/133 abar m4096@0xc1040000 port 0xc1040280 irq 34 May 15 10:43:48.814888 kernel: ata5: SATA max UDMA/133 abar m4096@0xc1040000 port 0xc1040300 irq 34 May 15 10:43:48.814897 kernel: ata6: SATA max UDMA/133 abar m4096@0xc1040000 port 0xc1040380 irq 34 May 15 10:43:48.795855 systemd[1]: Found device dev-disk-by\x2dpartlabel-USR\x2dA.device. May 15 10:43:48.798764 systemd[1]: Found device dev-disk-by\x2dpartuuid-7130c94a\x2d213a\x2d4e5a\x2d8e26\x2d6cce9662f132.device. May 15 10:43:48.805562 systemd[1]: Found device dev-disk-by\x2dlabel-EFI\x2dSYSTEM.device. May 15 10:43:48.812821 systemd[1]: Found device dev-disk-by\x2dlabel-ROOT.device. May 15 10:43:48.829351 systemd[1]: Found device dev-disk-by\x2dlabel-OEM.device. May 15 10:43:48.832754 systemd[1]: Starting disk-uuid.service... May 15 10:43:48.841539 disk-uuid[526]: Primary Header is updated. May 15 10:43:48.841539 disk-uuid[526]: Secondary Entries is updated. May 15 10:43:48.841539 disk-uuid[526]: Secondary Header is updated. 
May 15 10:43:48.845691 kernel: vda: vda1 vda2 vda3 vda4 vda6 vda7 vda9 May 15 10:43:48.848659 kernel: vda: vda1 vda2 vda3 vda4 vda6 vda7 vda9 May 15 10:43:48.852667 kernel: vda: vda1 vda2 vda3 vda4 vda6 vda7 vda9 May 15 10:43:49.131686 kernel: ata2: SATA link down (SStatus 0 SControl 300) May 15 10:43:49.131787 kernel: ata3: SATA link up 1.5 Gbps (SStatus 113 SControl 300) May 15 10:43:49.131798 kernel: ata4: SATA link down (SStatus 0 SControl 300) May 15 10:43:49.133927 kernel: ata6: SATA link down (SStatus 0 SControl 300) May 15 10:43:49.134038 kernel: ata5: SATA link down (SStatus 0 SControl 300) May 15 10:43:49.135667 kernel: ata1: SATA link down (SStatus 0 SControl 300) May 15 10:43:49.136659 kernel: ata3.00: ATAPI: QEMU DVD-ROM, 2.5+, max UDMA/100 May 15 10:43:49.137730 kernel: ata3.00: applying bridge limits May 15 10:43:49.138696 kernel: ata3.00: configured for UDMA/100 May 15 10:43:49.140646 kernel: scsi 2:0:0:0: CD-ROM QEMU QEMU DVD-ROM 2.5+ PQ: 0 ANSI: 5 May 15 10:43:49.173160 kernel: sr 2:0:0:0: [sr0] scsi3-mmc drive: 4x/4x cd/rw xa/form2 tray May 15 10:43:49.190681 kernel: cdrom: Uniform CD-ROM driver Revision: 3.20 May 15 10:43:49.190714 kernel: sr 2:0:0:0: Attached scsi CD-ROM sr0 May 15 10:43:49.850380 disk-uuid[527]: The operation has completed successfully. May 15 10:43:49.851763 kernel: vda: vda1 vda2 vda3 vda4 vda6 vda7 vda9 May 15 10:43:49.873823 systemd[1]: disk-uuid.service: Deactivated successfully. May 15 10:43:49.875000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=disk-uuid comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' May 15 10:43:49.875000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=disk-uuid comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' May 15 10:43:49.873925 systemd[1]: Finished disk-uuid.service. May 15 10:43:49.882831 systemd[1]: Starting verity-setup.service... 
May 15 10:43:49.896643 kernel: device-mapper: verity: sha256 using implementation "sha256-ni" May 15 10:43:49.916845 systemd[1]: Found device dev-mapper-usr.device. May 15 10:43:49.918668 systemd[1]: Mounting sysusr-usr.mount... May 15 10:43:49.921089 systemd[1]: Finished verity-setup.service. May 15 10:43:49.921000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=verity-setup comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' May 15 10:43:49.991503 systemd[1]: Mounted sysusr-usr.mount. May 15 10:43:49.992934 kernel: EXT4-fs (dm-0): mounted filesystem without journal. Opts: norecovery. Quota mode: none. May 15 10:43:49.992086 systemd[1]: afterburn-network-kargs.service was skipped because no trigger condition checks were met. May 15 10:43:49.992809 systemd[1]: Starting ignition-setup.service... May 15 10:43:49.996560 systemd[1]: Starting parse-ip-for-networkd.service... May 15 10:43:50.003279 kernel: BTRFS info (device vda6): using crc32c (crc32c-intel) checksum algorithm May 15 10:43:50.003335 kernel: BTRFS info (device vda6): using free space tree May 15 10:43:50.003350 kernel: BTRFS info (device vda6): has skinny extents May 15 10:43:50.010854 systemd[1]: mnt-oem.mount: Deactivated successfully. May 15 10:43:50.065049 systemd[1]: Finished parse-ip-for-networkd.service. May 15 10:43:50.068000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=parse-ip-for-networkd comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' May 15 10:43:50.069000 audit: BPF prog-id=9 op=LOAD May 15 10:43:50.070457 systemd[1]: Starting systemd-networkd.service... May 15 10:43:50.091193 systemd-networkd[712]: lo: Link UP May 15 10:43:50.091203 systemd-networkd[712]: lo: Gained carrier May 15 10:43:50.099424 systemd-networkd[712]: Enumeration completed May 15 10:43:50.099512 systemd[1]: Started systemd-networkd.service. 
May 15 10:43:50.099000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-networkd comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' May 15 10:43:50.100166 systemd[1]: Reached target network.target. May 15 10:43:50.101101 systemd[1]: Starting iscsiuio.service... May 15 10:43:50.104393 systemd-networkd[712]: eth0: Configuring with /usr/lib/systemd/network/zz-default.network. May 15 10:43:50.106114 systemd[1]: Started iscsiuio.service. May 15 10:43:50.106000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=iscsiuio comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' May 15 10:43:50.107854 systemd[1]: Starting iscsid.service... May 15 10:43:50.109339 systemd-networkd[712]: eth0: Link UP May 15 10:43:50.109347 systemd-networkd[712]: eth0: Gained carrier May 15 10:43:50.110992 iscsid[717]: iscsid: can't open InitiatorName configuration file /etc/iscsi/initiatorname.iscsi May 15 10:43:50.110992 iscsid[717]: iscsid: Warning: InitiatorName file /etc/iscsi/initiatorname.iscsi does not exist or does not contain a properly formatted InitiatorName. If using software iscsi (iscsi_tcp or ib_iser) or partial offload (bnx2i or cxgbi iscsi), you may not be able to log into or discover targets. Please create a file /etc/iscsi/initiatorname.iscsi that contains a sting with the format: InitiatorName=iqn.yyyy-mm.[:identifier]. May 15 10:43:50.110992 iscsid[717]: Example: InitiatorName=iqn.2001-04.com.redhat:fc6. May 15 10:43:50.110992 iscsid[717]: If using hardware iscsi like qla4xxx this message can be ignored. 
May 15 10:43:50.110992 iscsid[717]: iscsid: can't open InitiatorAlias configuration file /etc/iscsi/initiatorname.iscsi May 15 10:43:50.110992 iscsid[717]: iscsid: can't open iscsid.safe_logout configuration file /etc/iscsi/iscsid.conf May 15 10:43:50.112000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=iscsid comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' May 15 10:43:50.112284 systemd[1]: Started iscsid.service. May 15 10:43:50.118114 systemd[1]: Starting dracut-initqueue.service... May 15 10:43:50.129719 systemd-networkd[712]: eth0: DHCPv4 address 10.0.0.96/16, gateway 10.0.0.1 acquired from 10.0.0.1 May 15 10:43:50.166528 systemd[1]: Finished dracut-initqueue.service. May 15 10:43:50.166000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=dracut-initqueue comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' May 15 10:43:50.167497 systemd[1]: Reached target remote-fs-pre.target. May 15 10:43:50.169084 systemd[1]: Reached target remote-cryptsetup.target. May 15 10:43:50.169969 systemd[1]: Reached target remote-fs.target. May 15 10:43:50.171433 systemd[1]: Starting dracut-pre-mount.service... May 15 10:43:50.179005 systemd[1]: Finished dracut-pre-mount.service. May 15 10:43:50.179000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=dracut-pre-mount comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' May 15 10:43:50.220277 systemd[1]: Finished ignition-setup.service. May 15 10:43:50.220000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=ignition-setup comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' May 15 10:43:50.222015 systemd[1]: Starting ignition-fetch-offline.service... 
May 15 10:43:50.256740 ignition[732]: Ignition 2.14.0 May 15 10:43:50.256750 ignition[732]: Stage: fetch-offline May 15 10:43:50.256797 ignition[732]: no configs at "/usr/lib/ignition/base.d" May 15 10:43:50.256807 ignition[732]: no config dir at "/usr/lib/ignition/base.platform.d/qemu" May 15 10:43:50.256900 ignition[732]: parsed url from cmdline: "" May 15 10:43:50.256904 ignition[732]: no config URL provided May 15 10:43:50.256908 ignition[732]: reading system config file "/usr/lib/ignition/user.ign" May 15 10:43:50.256915 ignition[732]: no config at "/usr/lib/ignition/user.ign" May 15 10:43:50.256939 ignition[732]: op(1): [started] loading QEMU firmware config module May 15 10:43:50.256944 ignition[732]: op(1): executing: "modprobe" "qemu_fw_cfg" May 15 10:43:50.261029 ignition[732]: op(1): [finished] loading QEMU firmware config module May 15 10:43:50.301998 ignition[732]: parsing config with SHA512: c2a148f10cbd09aeb23065e00e46c8bcb66192a005211fb945057ee5f3cc1dcb02dbcf79199cfa87c784c297db5e0c0e052d56f5f2864319876b15b6810a5bf4 May 15 10:43:50.308611 unknown[732]: fetched base config from "system" May 15 10:43:50.308636 unknown[732]: fetched user config from "qemu" May 15 10:43:50.309163 ignition[732]: fetch-offline: fetch-offline passed May 15 10:43:50.310000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=ignition-fetch-offline comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' May 15 10:43:50.310414 systemd[1]: Finished ignition-fetch-offline.service. May 15 10:43:50.309217 ignition[732]: Ignition finished successfully May 15 10:43:50.311571 systemd[1]: ignition-fetch.service was skipped because of an unmet condition check (ConditionPathExists=!/run/ignition.json). May 15 10:43:50.312324 systemd[1]: Starting ignition-kargs.service... 
May 15 10:43:50.321772 ignition[740]: Ignition 2.14.0 May 15 10:43:50.321783 ignition[740]: Stage: kargs May 15 10:43:50.321874 ignition[740]: no configs at "/usr/lib/ignition/base.d" May 15 10:43:50.321884 ignition[740]: no config dir at "/usr/lib/ignition/base.platform.d/qemu" May 15 10:43:50.324181 systemd[1]: Finished ignition-kargs.service. May 15 10:43:50.325000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=ignition-kargs comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' May 15 10:43:50.322935 ignition[740]: kargs: kargs passed May 15 10:43:50.326645 systemd[1]: Starting ignition-disks.service... May 15 10:43:50.322970 ignition[740]: Ignition finished successfully May 15 10:43:50.333087 ignition[746]: Ignition 2.14.0 May 15 10:43:50.333098 ignition[746]: Stage: disks May 15 10:43:50.344000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=ignition-disks comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' May 15 10:43:50.335051 systemd[1]: Finished ignition-disks.service. May 15 10:43:50.333206 ignition[746]: no configs at "/usr/lib/ignition/base.d" May 15 10:43:50.345011 systemd[1]: Reached target initrd-root-device.target. May 15 10:43:50.333225 ignition[746]: no config dir at "/usr/lib/ignition/base.platform.d/qemu" May 15 10:43:50.346387 systemd[1]: Reached target local-fs-pre.target. May 15 10:43:50.334453 ignition[746]: disks: disks passed May 15 10:43:50.347231 systemd[1]: Reached target local-fs.target. May 15 10:43:50.334489 ignition[746]: Ignition finished successfully May 15 10:43:50.348779 systemd[1]: Reached target sysinit.target. May 15 10:43:50.350408 systemd[1]: Reached target basic.target. May 15 10:43:50.352814 systemd[1]: Starting systemd-fsck-root.service... 
May 15 10:43:50.385776 systemd-fsck[754]: ROOT: clean, 623/553520 files, 56023/553472 blocks May 15 10:43:50.642312 systemd[1]: Finished systemd-fsck-root.service. May 15 10:43:50.641000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-fsck-root comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' May 15 10:43:50.643906 systemd[1]: Mounting sysroot.mount... May 15 10:43:50.676383 systemd[1]: Mounted sysroot.mount. May 15 10:43:50.677648 kernel: EXT4-fs (vda9): mounted filesystem with ordered data mode. Opts: (null). Quota mode: none. May 15 10:43:50.677023 systemd[1]: Reached target initrd-root-fs.target. May 15 10:43:50.679311 systemd[1]: Mounting sysroot-usr.mount... May 15 10:43:50.681021 systemd[1]: flatcar-metadata-hostname.service was skipped because no trigger condition checks were met. May 15 10:43:50.681058 systemd[1]: ignition-remount-sysroot.service was skipped because of an unmet condition check (ConditionPathIsReadWrite=!/sysroot). May 15 10:43:50.681077 systemd[1]: Reached target ignition-diskful.target. May 15 10:43:50.687515 systemd[1]: Mounted sysroot-usr.mount. May 15 10:43:50.688919 systemd[1]: Starting initrd-setup-root.service... May 15 10:43:50.694752 initrd-setup-root[764]: cut: /sysroot/etc/passwd: No such file or directory May 15 10:43:50.698697 initrd-setup-root[772]: cut: /sysroot/etc/group: No such file or directory May 15 10:43:50.702111 initrd-setup-root[780]: cut: /sysroot/etc/shadow: No such file or directory May 15 10:43:50.705802 initrd-setup-root[788]: cut: /sysroot/etc/gshadow: No such file or directory May 15 10:43:50.732665 systemd[1]: Finished initrd-setup-root.service. May 15 10:43:50.732000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=initrd-setup-root comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? 
res=success' May 15 10:43:50.734254 systemd[1]: Starting ignition-mount.service... May 15 10:43:50.735547 systemd[1]: Starting sysroot-boot.service... May 15 10:43:50.739782 bash[805]: umount: /sysroot/usr/share/oem: not mounted. May 15 10:43:50.747197 ignition[807]: INFO : Ignition 2.14.0 May 15 10:43:50.747197 ignition[807]: INFO : Stage: mount May 15 10:43:50.767000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=ignition-mount comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' May 15 10:43:50.770586 ignition[807]: INFO : no configs at "/usr/lib/ignition/base.d" May 15 10:43:50.770586 ignition[807]: INFO : no config dir at "/usr/lib/ignition/base.platform.d/qemu" May 15 10:43:50.770586 ignition[807]: INFO : mount: mount passed May 15 10:43:50.770586 ignition[807]: INFO : Ignition finished successfully May 15 10:43:50.749051 systemd[1]: Finished ignition-mount.service. May 15 10:43:50.776534 systemd[1]: Finished sysroot-boot.service. May 15 10:43:50.777000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=sysroot-boot comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' May 15 10:43:50.929979 systemd[1]: Mounting sysroot-usr-share-oem.mount... May 15 10:43:50.939643 kernel: BTRFS: device label OEM devid 1 transid 13 /dev/vda6 scanned by mount (815) May 15 10:43:50.941734 kernel: BTRFS info (device vda6): using crc32c (crc32c-intel) checksum algorithm May 15 10:43:50.941798 kernel: BTRFS info (device vda6): using free space tree May 15 10:43:50.941809 kernel: BTRFS info (device vda6): has skinny extents May 15 10:43:50.945629 systemd[1]: Mounted sysroot-usr-share-oem.mount. May 15 10:43:50.948016 systemd[1]: Starting ignition-files.service... 
May 15 10:43:50.961729 ignition[835]: INFO : Ignition 2.14.0
May 15 10:43:50.961729 ignition[835]: INFO : Stage: files
May 15 10:43:50.968400 ignition[835]: INFO : no configs at "/usr/lib/ignition/base.d"
May 15 10:43:50.968400 ignition[835]: INFO : no config dir at "/usr/lib/ignition/base.platform.d/qemu"
May 15 10:43:50.970759 ignition[835]: DEBUG : files: compiled without relabeling support, skipping
May 15 10:43:50.970759 ignition[835]: INFO : files: ensureUsers: op(1): [started] creating or modifying user "core"
May 15 10:43:50.970759 ignition[835]: DEBUG : files: ensureUsers: op(1): executing: "usermod" "--root" "/sysroot" "core"
May 15 10:43:50.974878 ignition[835]: INFO : files: ensureUsers: op(1): [finished] creating or modifying user "core"
May 15 10:43:50.974878 ignition[835]: INFO : files: ensureUsers: op(2): [started] adding ssh keys to user "core"
May 15 10:43:50.974878 ignition[835]: INFO : files: ensureUsers: op(2): [finished] adding ssh keys to user "core"
May 15 10:43:50.974878 ignition[835]: INFO : files: createFilesystemsFiles: createFiles: op(3): [started] writing file "/sysroot/etc/flatcar-cgroupv1"
May 15 10:43:50.974878 ignition[835]: INFO : files: createFilesystemsFiles: createFiles: op(3): [finished] writing file "/sysroot/etc/flatcar-cgroupv1"
May 15 10:43:50.974878 ignition[835]: INFO : files: createFilesystemsFiles: createFiles: op(4): [started] writing file "/sysroot/opt/helm-v3.13.2-linux-amd64.tar.gz"
May 15 10:43:50.974878 ignition[835]: INFO : files: createFilesystemsFiles: createFiles: op(4): GET https://get.helm.sh/helm-v3.13.2-linux-amd64.tar.gz: attempt #1
May 15 10:43:50.971877 unknown[835]: wrote ssh authorized keys file for user: core
May 15 10:43:51.015354 ignition[835]: INFO : files: createFilesystemsFiles: createFiles: op(4): GET result: OK
May 15 10:43:51.157569 ignition[835]: INFO : files: createFilesystemsFiles: createFiles: op(4): [finished] writing file "/sysroot/opt/helm-v3.13.2-linux-amd64.tar.gz"
May 15 10:43:51.159799 ignition[835]: INFO : files: createFilesystemsFiles: createFiles: op(5): [started] writing file "/sysroot/opt/bin/cilium.tar.gz"
May 15 10:43:51.159799 ignition[835]: INFO : files: createFilesystemsFiles: createFiles: op(5): GET https://github.com/cilium/cilium-cli/releases/download/v0.12.12/cilium-linux-amd64.tar.gz: attempt #1
May 15 10:43:51.518893 ignition[835]: INFO : files: createFilesystemsFiles: createFiles: op(5): GET result: OK
May 15 10:43:51.613332 ignition[835]: INFO : files: createFilesystemsFiles: createFiles: op(5): [finished] writing file "/sysroot/opt/bin/cilium.tar.gz"
May 15 10:43:51.613332 ignition[835]: INFO : files: createFilesystemsFiles: createFiles: op(6): [started] writing file "/sysroot/home/core/install.sh"
May 15 10:43:51.618068 ignition[835]: INFO : files: createFilesystemsFiles: createFiles: op(6): [finished] writing file "/sysroot/home/core/install.sh"
May 15 10:43:51.618068 ignition[835]: INFO : files: createFilesystemsFiles: createFiles: op(7): [started] writing file "/sysroot/home/core/nginx.yaml"
May 15 10:43:51.618068 ignition[835]: INFO : files: createFilesystemsFiles: createFiles: op(7): [finished] writing file "/sysroot/home/core/nginx.yaml"
May 15 10:43:51.618068 ignition[835]: INFO : files: createFilesystemsFiles: createFiles: op(8): [started] writing file "/sysroot/home/core/nfs-pod.yaml"
May 15 10:43:51.618068 ignition[835]: INFO : files: createFilesystemsFiles: createFiles: op(8): [finished] writing file "/sysroot/home/core/nfs-pod.yaml"
May 15 10:43:51.618068 ignition[835]: INFO : files: createFilesystemsFiles: createFiles: op(9): [started] writing file "/sysroot/home/core/nfs-pvc.yaml"
May 15 10:43:51.618068 ignition[835]: INFO : files: createFilesystemsFiles: createFiles: op(9): [finished] writing file "/sysroot/home/core/nfs-pvc.yaml"
May 15 10:43:51.618068 ignition[835]: INFO : files: createFilesystemsFiles: createFiles: op(a): [started] writing file "/sysroot/etc/flatcar/update.conf"
May 15 10:43:51.618068 ignition[835]: INFO : files: createFilesystemsFiles: createFiles: op(a): [finished] writing file "/sysroot/etc/flatcar/update.conf"
May 15 10:43:51.618068 ignition[835]: INFO : files: createFilesystemsFiles: createFiles: op(b): [started] writing link "/sysroot/etc/extensions/kubernetes.raw" -> "/opt/extensions/kubernetes/kubernetes-v1.30.1-x86-64.raw"
May 15 10:43:51.618068 ignition[835]: INFO : files: createFilesystemsFiles: createFiles: op(b): [finished] writing link "/sysroot/etc/extensions/kubernetes.raw" -> "/opt/extensions/kubernetes/kubernetes-v1.30.1-x86-64.raw"
May 15 10:43:51.618068 ignition[835]: INFO : files: createFilesystemsFiles: createFiles: op(c): [started] writing file "/sysroot/opt/extensions/kubernetes/kubernetes-v1.30.1-x86-64.raw"
May 15 10:43:51.618068 ignition[835]: INFO : files: createFilesystemsFiles: createFiles: op(c): GET https://github.com/flatcar/sysext-bakery/releases/download/latest/kubernetes-v1.30.1-x86-64.raw: attempt #1
May 15 10:43:51.910849 ignition[835]: INFO : files: createFilesystemsFiles: createFiles: op(c): GET result: OK
May 15 10:43:52.140776 systemd-networkd[712]: eth0: Gained IPv6LL
May 15 10:43:52.690805 ignition[835]: INFO : files: createFilesystemsFiles: createFiles: op(c): [finished] writing file "/sysroot/opt/extensions/kubernetes/kubernetes-v1.30.1-x86-64.raw"
May 15 10:43:52.690805 ignition[835]: INFO : files: op(d): [started] processing unit "containerd.service"
May 15 10:43:52.701326 ignition[835]: INFO : files: op(d): op(e): [started] writing systemd drop-in "10-use-cgroupfs.conf" at "/sysroot/etc/systemd/system/containerd.service.d/10-use-cgroupfs.conf"
May 15 10:43:52.703839 ignition[835]: INFO : files: op(d): op(e): [finished] writing systemd drop-in "10-use-cgroupfs.conf" at "/sysroot/etc/systemd/system/containerd.service.d/10-use-cgroupfs.conf"
May 15 10:43:52.703839 ignition[835]: INFO : files: op(d): [finished] processing unit "containerd.service"
May 15 10:43:52.703839 ignition[835]: INFO : files: op(f): [started] processing unit "prepare-helm.service"
May 15 10:43:52.708795 ignition[835]: INFO : files: op(f): op(10): [started] writing unit "prepare-helm.service" at "/sysroot/etc/systemd/system/prepare-helm.service"
May 15 10:43:52.708795 ignition[835]: INFO : files: op(f): op(10): [finished] writing unit "prepare-helm.service" at "/sysroot/etc/systemd/system/prepare-helm.service"
May 15 10:43:52.708795 ignition[835]: INFO : files: op(f): [finished] processing unit "prepare-helm.service"
May 15 10:43:52.708795 ignition[835]: INFO : files: op(11): [started] processing unit "coreos-metadata.service"
May 15 10:43:52.715996 ignition[835]: INFO : files: op(11): op(12): [started] writing unit "coreos-metadata.service" at "/sysroot/etc/systemd/system/coreos-metadata.service"
May 15 10:43:52.715996 ignition[835]: INFO : files: op(11): op(12): [finished] writing unit "coreos-metadata.service" at "/sysroot/etc/systemd/system/coreos-metadata.service"
May 15 10:43:52.715996 ignition[835]: INFO : files: op(11): [finished] processing unit "coreos-metadata.service"
May 15 10:43:52.715996 ignition[835]: INFO : files: op(13): [started] setting preset to enabled for "prepare-helm.service"
May 15 10:43:52.723126 ignition[835]: INFO : files: op(13): [finished] setting preset to enabled for "prepare-helm.service"
May 15 10:43:52.724640 ignition[835]: INFO : files: op(14): [started] setting preset to disabled for "coreos-metadata.service"
May 15 10:43:52.724640 ignition[835]: INFO : files: op(14): op(15): [started] removing enablement symlink(s) for "coreos-metadata.service"
May 15 10:43:52.824314 ignition[835]: INFO : files: op(14): op(15): [finished] removing enablement symlink(s) for "coreos-metadata.service"
May 15 10:43:52.826334 ignition[835]: INFO : files: op(14): [finished] setting preset to disabled for "coreos-metadata.service"
May 15 10:43:52.826334 ignition[835]: INFO : files: createResultFile: createFiles: op(16): [started] writing file "/sysroot/etc/.ignition-result.json"
May 15 10:43:52.826334 ignition[835]: INFO : files: createResultFile: createFiles: op(16): [finished] writing file "/sysroot/etc/.ignition-result.json"
May 15 10:43:52.826334 ignition[835]: INFO : files: files passed
May 15 10:43:52.826334 ignition[835]: INFO : Ignition finished successfully
May 15 10:43:52.835058 systemd[1]: Finished ignition-files.service.
May 15 10:43:52.841203 kernel: kauditd_printk_skb: 23 callbacks suppressed
May 15 10:43:52.841233 kernel: audit: type=1130 audit(1747305832.834:34): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=ignition-files comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
May 15 10:43:52.834000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=ignition-files comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
May 15 10:43:52.841391 systemd[1]: Starting initrd-setup-root-after-ignition.service...
May 15 10:43:52.843513 systemd[1]: torcx-profile-populate.service was skipped because of an unmet condition check (ConditionPathExists=/sysroot/etc/torcx/next-profile).
May 15 10:43:52.844402 systemd[1]: Starting ignition-quench.service...
May 15 10:43:52.848356 systemd[1]: ignition-quench.service: Deactivated successfully.
May 15 10:43:52.848486 systemd[1]: Finished ignition-quench.service.
May 15 10:43:52.857870 kernel: audit: type=1130 audit(1747305832.847:35): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=ignition-quench comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
May 15 10:43:52.857929 kernel: audit: type=1131 audit(1747305832.847:36): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=ignition-quench comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
May 15 10:43:52.847000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=ignition-quench comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
May 15 10:43:52.847000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=ignition-quench comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
May 15 10:43:52.860742 initrd-setup-root-after-ignition[862]: grep: /sysroot/usr/share/oem/oem-release: No such file or directory
May 15 10:43:52.863465 initrd-setup-root-after-ignition[864]: grep: /sysroot/etc/flatcar/enabled-sysext.conf: No such file or directory
May 15 10:43:52.865592 systemd[1]: Finished initrd-setup-root-after-ignition.service.
May 15 10:43:52.871181 kernel: audit: type=1130 audit(1747305832.865:37): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=initrd-setup-root-after-ignition comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
May 15 10:43:52.865000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=initrd-setup-root-after-ignition comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
May 15 10:43:52.866163 systemd[1]: Reached target ignition-complete.target.
May 15 10:43:52.872253 systemd[1]: Starting initrd-parse-etc.service...
May 15 10:43:52.884737 systemd[1]: initrd-parse-etc.service: Deactivated successfully.
May 15 10:43:52.884817 systemd[1]: Finished initrd-parse-etc.service.
May 15 10:43:52.893086 kernel: audit: type=1130 audit(1747305832.886:38): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=initrd-parse-etc comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
May 15 10:43:52.893109 kernel: audit: type=1131 audit(1747305832.886:39): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=initrd-parse-etc comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
May 15 10:43:52.886000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=initrd-parse-etc comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
May 15 10:43:52.886000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=initrd-parse-etc comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
May 15 10:43:52.886826 systemd[1]: Reached target initrd-fs.target.
May 15 10:43:52.894110 systemd[1]: Reached target initrd.target.
May 15 10:43:52.895562 systemd[1]: dracut-mount.service was skipped because no trigger condition checks were met.
May 15 10:43:52.896306 systemd[1]: Starting dracut-pre-pivot.service...
May 15 10:43:52.907776 systemd[1]: Finished dracut-pre-pivot.service.
May 15 10:43:52.912207 kernel: audit: type=1130 audit(1747305832.907:40): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=dracut-pre-pivot comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
May 15 10:43:52.907000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=dracut-pre-pivot comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
May 15 10:43:52.912189 systemd[1]: Starting initrd-cleanup.service...
May 15 10:43:52.920905 systemd[1]: Stopped target nss-lookup.target.
May 15 10:43:52.921385 systemd[1]: Stopped target remote-cryptsetup.target.
May 15 10:43:52.923000 systemd[1]: Stopped target timers.target.
May 15 10:43:52.924397 systemd[1]: dracut-pre-pivot.service: Deactivated successfully.
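An aside on working with a capture like this: the Ignition "files" stage records above follow a fixed pattern, so the set of files the provisioning run wrote can be pulled out mechanically. A minimal sketch in Python; the regex and the `files_written` helper are illustrative inventions, not part of Ignition or journalctl:

```python
import re

# Matches the target path in Ignition records such as:
#   ignition[835]: INFO : ... op(3): [finished] writing file "/sysroot/etc/flatcar-cgroupv1"
WRITE_RE = re.compile(r'\[finished\] writing file "([^"]+)"')

def files_written(journal_text: str) -> list[str]:
    """Return every file path Ignition reported as fully written, in log order."""
    return WRITE_RE.findall(journal_text)

log = (
    'ignition[835]: INFO : files: createFilesystemsFiles: createFiles: op(3): '
    '[finished] writing file "/sysroot/etc/flatcar-cgroupv1"\n'
    'ignition[835]: INFO : files: createFilesystemsFiles: createFiles: op(4): '
    '[finished] writing file "/sysroot/opt/helm-v3.13.2-linux-amd64.tar.gz"\n'
)
print(files_written(log))
# -> ['/sysroot/etc/flatcar-cgroupv1', '/sysroot/opt/helm-v3.13.2-linux-amd64.tar.gz']
```

Note that "writing link" records (op(b) above) deliberately do not match; symlinks would need a second pattern.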
May 15 10:43:52.929820 kernel: audit: type=1131 audit(1747305832.925:41): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=dracut-pre-pivot comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
May 15 10:43:52.925000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=dracut-pre-pivot comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
May 15 10:43:52.924480 systemd[1]: Stopped dracut-pre-pivot.service.
May 15 10:43:52.925968 systemd[1]: Stopped target initrd.target.
May 15 10:43:52.930261 systemd[1]: Stopped target basic.target.
May 15 10:43:52.930564 systemd[1]: Stopped target ignition-complete.target.
May 15 10:43:52.932983 systemd[1]: Stopped target ignition-diskful.target.
May 15 10:43:52.934445 systemd[1]: Stopped target initrd-root-device.target.
May 15 10:43:52.936025 systemd[1]: Stopped target remote-fs.target.
May 15 10:43:52.937597 systemd[1]: Stopped target remote-fs-pre.target.
May 15 10:43:52.939106 systemd[1]: Stopped target sysinit.target.
May 15 10:43:52.940603 systemd[1]: Stopped target local-fs.target.
May 15 10:43:52.942018 systemd[1]: Stopped target local-fs-pre.target.
May 15 10:43:52.942343 systemd[1]: Stopped target swap.target.
May 15 10:43:52.949960 kernel: audit: type=1131 audit(1747305832.945:42): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=dracut-pre-mount comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
May 15 10:43:52.945000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=dracut-pre-mount comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
May 15 10:43:52.944914 systemd[1]: dracut-pre-mount.service: Deactivated successfully.
May 15 10:43:52.945004 systemd[1]: Stopped dracut-pre-mount.service.
May 15 10:43:52.955846 kernel: audit: type=1131 audit(1747305832.950:43): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=dracut-initqueue comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
May 15 10:43:52.950000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=dracut-initqueue comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
May 15 10:43:52.946222 systemd[1]: Stopped target cryptsetup.target.
May 15 10:43:52.955000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=ignition-fetch-offline comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
May 15 10:43:52.950381 systemd[1]: dracut-initqueue.service: Deactivated successfully.
May 15 10:43:52.950467 systemd[1]: Stopped dracut-initqueue.service.
May 15 10:43:52.950915 systemd[1]: ignition-fetch-offline.service: Deactivated successfully.
May 15 10:43:52.950991 systemd[1]: Stopped ignition-fetch-offline.service.
May 15 10:43:52.956328 systemd[1]: Stopped target paths.target.
May 15 10:43:52.956559 systemd[1]: systemd-ask-password-console.path: Deactivated successfully.
May 15 10:43:52.960664 systemd[1]: Stopped systemd-ask-password-console.path.
May 15 10:43:52.964702 systemd[1]: Stopped target slices.target.
May 15 10:43:52.966221 systemd[1]: Stopped target sockets.target.
May 15 10:43:52.967777 systemd[1]: initrd-setup-root-after-ignition.service: Deactivated successfully.
May 15 10:43:52.968952 systemd[1]: Stopped initrd-setup-root-after-ignition.service.
May 15 10:43:52.970000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=initrd-setup-root-after-ignition comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
May 15 10:43:52.970933 systemd[1]: ignition-files.service: Deactivated successfully.
May 15 10:43:52.971911 systemd[1]: Stopped ignition-files.service.
May 15 10:43:52.972000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=ignition-files comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
May 15 10:43:52.974082 systemd[1]: Stopping ignition-mount.service...
May 15 10:43:52.975742 systemd[1]: Stopping iscsid.service...
May 15 10:43:52.977058 iscsid[717]: iscsid shutting down.
May 15 10:43:52.978378 systemd[1]: Stopping sysroot-boot.service...
May 15 10:43:52.980699 systemd[1]: systemd-udev-trigger.service: Deactivated successfully.
May 15 10:43:52.980879 systemd[1]: Stopped systemd-udev-trigger.service.
May 15 10:43:52.982000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-udev-trigger comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
May 15 10:43:52.982766 systemd[1]: dracut-pre-trigger.service: Deactivated successfully.
May 15 10:43:52.982000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=dracut-pre-trigger comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
May 15 10:43:52.985043 ignition[877]: INFO : Ignition 2.14.0
May 15 10:43:52.985043 ignition[877]: INFO : Stage: umount
May 15 10:43:52.985043 ignition[877]: INFO : no configs at "/usr/lib/ignition/base.d"
May 15 10:43:52.985043 ignition[877]: INFO : no config dir at "/usr/lib/ignition/base.platform.d/qemu"
May 15 10:43:52.985043 ignition[877]: INFO : umount: umount passed
May 15 10:43:52.985043 ignition[877]: INFO : Ignition finished successfully
May 15 10:43:52.984000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=iscsid comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
May 15 10:43:52.986000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=ignition-mount comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
May 15 10:43:52.987000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=initrd-cleanup comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
May 15 10:43:52.987000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=initrd-cleanup comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
May 15 10:43:52.991000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=ignition-disks comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
May 15 10:43:52.982887 systemd[1]: Stopped dracut-pre-trigger.service.
May 15 10:43:52.993000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=ignition-kargs comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
May 15 10:43:52.984866 systemd[1]: iscsid.service: Deactivated successfully.
May 15 10:43:52.994000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=ignition-setup comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
May 15 10:43:52.984965 systemd[1]: Stopped iscsid.service.
May 15 10:43:52.985792 systemd[1]: ignition-mount.service: Deactivated successfully.
May 15 10:43:52.985864 systemd[1]: Stopped ignition-mount.service.
May 15 10:43:52.998000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=iscsiuio comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
May 15 10:43:52.987891 systemd[1]: initrd-cleanup.service: Deactivated successfully.
May 15 10:43:52.987967 systemd[1]: Finished initrd-cleanup.service.
May 15 10:43:52.989003 systemd[1]: iscsid.socket: Deactivated successfully.
May 15 10:43:52.989030 systemd[1]: Closed iscsid.socket.
May 15 10:43:52.990837 systemd[1]: ignition-disks.service: Deactivated successfully.
May 15 10:43:52.990877 systemd[1]: Stopped ignition-disks.service.
May 15 10:43:52.992197 systemd[1]: ignition-kargs.service: Deactivated successfully.
May 15 10:43:52.992233 systemd[1]: Stopped ignition-kargs.service.
May 15 10:43:52.993872 systemd[1]: ignition-setup.service: Deactivated successfully.
May 15 10:43:52.993910 systemd[1]: Stopped ignition-setup.service.
May 15 10:43:52.995253 systemd[1]: Stopping iscsiuio.service...
May 15 10:43:52.998286 systemd[1]: iscsiuio.service: Deactivated successfully.
May 15 10:43:52.998361 systemd[1]: Stopped iscsiuio.service.
May 15 10:43:52.998921 systemd[1]: Stopped target network.target.
May 15 10:43:52.999194 systemd[1]: iscsiuio.socket: Deactivated successfully.
May 15 10:43:52.999221 systemd[1]: Closed iscsiuio.socket.
May 15 10:43:53.002026 systemd[1]: Stopping systemd-networkd.service...
May 15 10:43:53.016000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-networkd comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
May 15 10:43:53.003437 systemd[1]: Stopping systemd-resolved.service...
May 15 10:43:53.013502 systemd[1]: sysroot-boot.mount: Deactivated successfully.
May 15 10:43:53.014702 systemd-networkd[712]: eth0: DHCPv6 lease lost
May 15 10:43:53.016365 systemd[1]: systemd-networkd.service: Deactivated successfully.
May 15 10:43:53.022000 audit: BPF prog-id=9 op=UNLOAD
May 15 10:43:53.023000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=parse-ip-for-networkd comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
May 15 10:43:53.016449 systemd[1]: Stopped systemd-networkd.service.
May 15 10:43:53.025000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-sysctl comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
May 15 10:43:53.018735 systemd[1]: systemd-networkd.socket: Deactivated successfully.
May 15 10:43:53.018762 systemd[1]: Closed systemd-networkd.socket.
May 15 10:43:53.026000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-modules-load comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
May 15 10:43:53.019961 systemd[1]: Stopping network-cleanup.service...
May 15 10:43:53.021480 systemd[1]: parse-ip-for-networkd.service: Deactivated successfully.
May 15 10:43:53.021522 systemd[1]: Stopped parse-ip-for-networkd.service.
May 15 10:43:53.023448 systemd[1]: systemd-sysctl.service: Deactivated successfully.
May 15 10:43:53.023488 systemd[1]: Stopped systemd-sysctl.service.
May 15 10:43:53.026524 systemd[1]: systemd-modules-load.service: Deactivated successfully.
May 15 10:43:53.026559 systemd[1]: Stopped systemd-modules-load.service.
May 15 10:43:53.027596 systemd[1]: Stopping systemd-udevd.service...
May 15 10:43:53.029979 systemd[1]: run-credentials-systemd\x2dsysctl.service.mount: Deactivated successfully.
May 15 10:43:53.038000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-resolved comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
May 15 10:43:53.037050 systemd[1]: systemd-resolved.service: Deactivated successfully.
May 15 10:43:53.037170 systemd[1]: Stopped systemd-resolved.service.
May 15 10:43:53.041362 systemd[1]: systemd-udevd.service: Deactivated successfully.
May 15 10:43:53.041477 systemd[1]: Stopped systemd-udevd.service.
May 15 10:43:53.043000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-udevd comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
May 15 10:43:53.044337 systemd[1]: network-cleanup.service: Deactivated successfully.
May 15 10:43:53.044419 systemd[1]: Stopped network-cleanup.service.
May 15 10:43:53.046000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=network-cleanup comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
May 15 10:43:53.046000 audit: BPF prog-id=6 op=UNLOAD
May 15 10:43:53.046276 systemd[1]: systemd-udevd-control.socket: Deactivated successfully.
May 15 10:43:53.046310 systemd[1]: Closed systemd-udevd-control.socket.
May 15 10:43:53.047696 systemd[1]: systemd-udevd-kernel.socket: Deactivated successfully.
May 15 10:43:53.051000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=dracut-pre-udev comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
May 15 10:43:53.052000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=dracut-cmdline comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
May 15 10:43:53.047732 systemd[1]: Closed systemd-udevd-kernel.socket.
May 15 10:43:53.049563 systemd[1]: dracut-pre-udev.service: Deactivated successfully.
May 15 10:43:53.053000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=dracut-cmdline-ask comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
May 15 10:43:53.049597 systemd[1]: Stopped dracut-pre-udev.service.
May 15 10:43:53.058000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-tmpfiles-setup-dev comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
May 15 10:43:53.051130 systemd[1]: dracut-cmdline.service: Deactivated successfully.
May 15 10:43:53.051170 systemd[1]: Stopped dracut-cmdline.service.
May 15 10:43:53.060000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=kmod-static-nodes comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
May 15 10:43:53.061000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-vconsole-setup comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
May 15 10:43:53.052878 systemd[1]: dracut-cmdline-ask.service: Deactivated successfully.
May 15 10:43:53.052910 systemd[1]: Stopped dracut-cmdline-ask.service.
May 15 10:43:53.055361 systemd[1]: Starting initrd-udevadm-cleanup-db.service...
May 15 10:43:53.063000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=initrd-udevadm-cleanup-db comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
May 15 10:43:53.063000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=initrd-udevadm-cleanup-db comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
May 15 10:43:53.056445 systemd[1]: systemd-tmpfiles-setup-dev.service: Deactivated successfully.
May 15 10:43:53.056505 systemd[1]: Stopped systemd-tmpfiles-setup-dev.service.
May 15 10:43:53.058312 systemd[1]: kmod-static-nodes.service: Deactivated successfully.
May 15 10:43:53.058359 systemd[1]: Stopped kmod-static-nodes.service.
May 15 10:43:53.060186 systemd[1]: systemd-vconsole-setup.service: Deactivated successfully.
May 15 10:43:53.060236 systemd[1]: Stopped systemd-vconsole-setup.service.
May 15 10:43:53.062581 systemd[1]: run-credentials-systemd\x2dtmpfiles\x2dsetup\x2ddev.service.mount: Deactivated successfully.
May 15 10:43:53.062986 systemd[1]: initrd-udevadm-cleanup-db.service: Deactivated successfully.
May 15 10:43:53.063060 systemd[1]: Finished initrd-udevadm-cleanup-db.service.
May 15 10:43:53.127771 systemd[1]: sysroot-boot.service: Deactivated successfully.
May 15 10:43:53.127873 systemd[1]: Stopped sysroot-boot.service.
May 15 10:43:53.129000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=sysroot-boot comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
May 15 10:43:53.130047 systemd[1]: Reached target initrd-switch-root.target.
May 15 10:43:53.131016 systemd[1]: initrd-setup-root.service: Deactivated successfully.
May 15 10:43:53.132000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=initrd-setup-root comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
May 15 10:43:53.131079 systemd[1]: Stopped initrd-setup-root.service.
May 15 10:43:53.134680 systemd[1]: Starting initrd-switch-root.service...
May 15 10:43:53.142750 systemd[1]: Switching root.
May 15 10:43:53.145000 audit: BPF prog-id=5 op=UNLOAD
May 15 10:43:53.145000 audit: BPF prog-id=4 op=UNLOAD
May 15 10:43:53.145000 audit: BPF prog-id=3 op=UNLOAD
May 15 10:43:53.145000 audit: BPF prog-id=8 op=UNLOAD
May 15 10:43:53.145000 audit: BPF prog-id=7 op=UNLOAD
May 15 10:43:53.161565 systemd-journald[197]: Journal stopped
May 15 10:43:57.279707 systemd-journald[197]: Received SIGTERM from PID 1 (systemd).
May 15 10:43:57.279762 kernel: SELinux: Class mctp_socket not defined in policy.
May 15 10:43:57.279779 kernel: SELinux: Class anon_inode not defined in policy.
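The gap between "Journal stopped" and the first record after the root switch gives a rough cost for the initrd-to-real-root handover. A small sketch for computing such gaps from the journal's wall-clock stamps; the `elapsed` helper is an illustration, not a journalctl feature (and it assumes both stamps fall on the same day):

```python
from datetime import datetime

def elapsed(a: str, b: str) -> float:
    """Seconds between two journal time-of-day stamps like '10:43:53.161565'."""
    fmt = "%H:%M:%S.%f"
    return (datetime.strptime(b, fmt) - datetime.strptime(a, fmt)).total_seconds()

# "Journal stopped" vs. the first journald record after switching root:
print(round(elapsed("10:43:53.161565", "10:43:57.279707"), 6))  # -> 4.118142
```

For real measurements, `systemd-analyze` or journalctl's `__MONOTONIC_TIMESTAMP` field would be more robust than parsing rendered timestamps.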
May 15 10:43:57.279788 kernel: SELinux: the above unknown classes and permissions will be allowed
May 15 10:43:57.279798 kernel: SELinux: policy capability network_peer_controls=1
May 15 10:43:57.279813 kernel: SELinux: policy capability open_perms=1
May 15 10:43:57.279826 kernel: SELinux: policy capability extended_socket_class=1
May 15 10:43:57.279839 kernel: SELinux: policy capability always_check_network=0
May 15 10:43:57.279851 kernel: SELinux: policy capability cgroup_seclabel=1
May 15 10:43:57.279861 kernel: SELinux: policy capability nnp_nosuid_transition=1
May 15 10:43:57.279870 kernel: SELinux: policy capability genfs_seclabel_symlinks=0
May 15 10:43:57.279880 kernel: SELinux: policy capability ioctl_skip_cloexec=0
May 15 10:43:57.279890 systemd[1]: Successfully loaded SELinux policy in 43.115ms.
May 15 10:43:57.279910 systemd[1]: Relabelled /dev, /dev/shm, /run, /sys/fs/cgroup in 6.801ms.
May 15 10:43:57.279925 systemd[1]: systemd 252 running in system mode (+PAM +AUDIT +SELINUX -APPARMOR +IMA +SMACK +SECCOMP +GCRYPT -GNUTLS +OPENSSL -ACL +BLKID +CURL -ELFUTILS -FIDO2 +IDN2 -IDN +IPTC +KMOD +LIBCRYPTSETUP +LIBFDISK +PCRE2 -PWQUALITY -P11KIT -QRENCODE -TPM2 +BZIP2 +LZ4 +XZ +ZLIB +ZSTD -BPF_FRAMEWORK -XKBCOMMON +UTMP +SYSVINIT default-hierarchy=unified)
May 15 10:43:57.279945 systemd[1]: Detected virtualization kvm.
May 15 10:43:57.279954 systemd[1]: Detected architecture x86-64.
May 15 10:43:57.279969 systemd[1]: Detected first boot.
May 15 10:43:57.279979 systemd[1]: Initializing machine ID from VM UUID.
May 15 10:43:57.279989 kernel: SELinux: Context system_u:object_r:container_file_t:s0:c1022,c1023 is not valid (left unmapped).
May 15 10:43:57.279999 systemd[1]: Populated /etc with preset unit settings.
May 15 10:43:57.280014 systemd[1]: /usr/lib/systemd/system/locksmithd.service:8: Unit uses CPUShares=; please use CPUWeight= instead. Support for CPUShares= will be removed soon.
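The "systemd 252 running in system mode (...)" record encodes compile-time features as `+NAME`/`-NAME` tokens. Splitting them into enabled and disabled sets is a one-liner's worth of work; this helper is illustrative, not part of systemd's tooling:

```python
def feature_flags(build_line: str) -> tuple[set[str], set[str]]:
    """Split a systemd build-feature string into (enabled, disabled) sets.

    Tokens without a +/- prefix (e.g. 'default-hierarchy=unified') are ignored.
    """
    enabled: set[str] = set()
    disabled: set[str] = set()
    for tok in build_line.split():
        if tok.startswith("+"):
            enabled.add(tok[1:])
        elif tok.startswith("-"):
            disabled.add(tok[1:])
    return enabled, disabled

# Abbreviated from the record above:
flags = "+PAM +AUDIT +SELINUX -APPARMOR +IMA +SMACK +SECCOMP -TPM2 default-hierarchy=unified"
on, off = feature_flags(flags)
print("SELINUX" in on, "APPARMOR" in off)  # -> True True
```

This confirms at a glance, for instance, that this build enforces SELinux (consistent with the policy-load records) and was compiled without AppArmor or TPM2 support.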
May 15 10:43:57.280029 systemd[1]: /usr/lib/systemd/system/locksmithd.service:9: Unit uses MemoryLimit=; please use MemoryMax= instead. Support for MemoryLimit= will be removed soon. May 15 10:43:57.280040 systemd[1]: /run/systemd/system/docker.socket:8: ListenStream= references a path below legacy directory /var/run/, updating /var/run/docker.sock → /run/docker.sock; please update the unit file accordingly. May 15 10:43:57.280052 systemd[1]: Queued start job for default target multi-user.target. May 15 10:43:57.280069 systemd[1]: Unnecessary job was removed for dev-vda6.device. May 15 10:43:57.280079 systemd[1]: Created slice system-addon\x2dconfig.slice. May 15 10:43:57.280089 systemd[1]: Created slice system-addon\x2drun.slice. May 15 10:43:57.280099 systemd[1]: Created slice system-getty.slice. May 15 10:43:57.280115 systemd[1]: Created slice system-modprobe.slice. May 15 10:43:57.280126 systemd[1]: Created slice system-serial\x2dgetty.slice. May 15 10:43:57.280137 systemd[1]: Created slice system-system\x2dcloudinit.slice. May 15 10:43:57.280147 systemd[1]: Created slice system-systemd\x2dfsck.slice. May 15 10:43:57.280157 systemd[1]: Created slice user.slice. May 15 10:43:57.280167 systemd[1]: Started systemd-ask-password-console.path. May 15 10:43:57.280177 systemd[1]: Started systemd-ask-password-wall.path. May 15 10:43:57.280195 systemd[1]: Set up automount boot.automount. May 15 10:43:57.280206 systemd[1]: Set up automount proc-sys-fs-binfmt_misc.automount. May 15 10:43:57.280220 systemd[1]: Reached target integritysetup.target. May 15 10:43:57.280230 systemd[1]: Reached target remote-cryptsetup.target. May 15 10:43:57.280240 systemd[1]: Reached target remote-fs.target. May 15 10:43:57.280251 systemd[1]: Reached target slices.target. May 15 10:43:57.280264 systemd[1]: Reached target swap.target. May 15 10:43:57.280274 systemd[1]: Reached target torcx.target. May 15 10:43:57.280285 systemd[1]: Reached target veritysetup.target. 
May 15 10:43:57.280296 systemd[1]: Listening on systemd-coredump.socket. May 15 10:43:57.280310 systemd[1]: Listening on systemd-initctl.socket. May 15 10:43:57.280320 systemd[1]: Listening on systemd-journald-audit.socket. May 15 10:43:57.280330 systemd[1]: Listening on systemd-journald-dev-log.socket. May 15 10:43:57.280340 systemd[1]: Listening on systemd-journald.socket. May 15 10:43:57.280350 systemd[1]: Listening on systemd-networkd.socket. May 15 10:43:57.280361 systemd[1]: Listening on systemd-udevd-control.socket. May 15 10:43:57.280371 systemd[1]: Listening on systemd-udevd-kernel.socket. May 15 10:43:57.280381 systemd[1]: Listening on systemd-userdbd.socket. May 15 10:43:57.280391 systemd[1]: Mounting dev-hugepages.mount... May 15 10:43:57.280404 systemd[1]: Mounting dev-mqueue.mount... May 15 10:43:57.280419 systemd[1]: Mounting media.mount... May 15 10:43:57.280436 systemd[1]: proc-xen.mount was skipped because of an unmet condition check (ConditionVirtualization=xen). May 15 10:43:57.280447 systemd[1]: Mounting sys-kernel-debug.mount... May 15 10:43:57.280457 systemd[1]: Mounting sys-kernel-tracing.mount... May 15 10:43:57.280467 systemd[1]: Mounting tmp.mount... May 15 10:43:57.280477 systemd[1]: Starting flatcar-tmpfiles.service... May 15 10:43:57.280487 systemd[1]: ignition-delete-config.service was skipped because no trigger condition checks were met. May 15 10:43:57.280498 systemd[1]: Starting kmod-static-nodes.service... May 15 10:43:57.280508 systemd[1]: Starting modprobe@configfs.service... May 15 10:43:57.280523 systemd[1]: Starting modprobe@dm_mod.service... May 15 10:43:57.280533 systemd[1]: Starting modprobe@drm.service... May 15 10:43:57.280544 systemd[1]: Starting modprobe@efi_pstore.service... May 15 10:43:57.280554 systemd[1]: Starting modprobe@fuse.service... May 15 10:43:57.280564 systemd[1]: Starting modprobe@loop.service... 
May 15 10:43:57.280575 systemd[1]: setup-nsswitch.service was skipped because of an unmet condition check (ConditionPathExists=!/etc/nsswitch.conf). May 15 10:43:57.280586 systemd[1]: systemd-journald.service: unit configures an IP firewall, but the local system does not support BPF/cgroup firewalling. May 15 10:43:57.280596 systemd[1]: (This warning is only shown for the first unit using IP firewalling.) May 15 10:43:57.280606 systemd[1]: Starting systemd-journald.service... May 15 10:43:57.280637 systemd[1]: Starting systemd-modules-load.service... May 15 10:43:57.280650 kernel: fuse: init (API version 7.34) May 15 10:43:57.280660 systemd[1]: Starting systemd-network-generator.service... May 15 10:43:57.280671 systemd[1]: Starting systemd-remount-fs.service... May 15 10:43:57.280689 systemd[1]: Starting systemd-udev-trigger.service... May 15 10:43:57.280699 systemd[1]: xenserver-pv-version.service was skipped because of an unmet condition check (ConditionVirtualization=xen). May 15 10:43:57.280709 systemd[1]: Mounted dev-hugepages.mount. May 15 10:43:57.280719 systemd[1]: Mounted dev-mqueue.mount. May 15 10:43:57.280729 kernel: loop: module loaded May 15 10:43:57.280744 systemd[1]: Mounted media.mount. May 15 10:43:57.280755 systemd[1]: Mounted sys-kernel-debug.mount. May 15 10:43:57.280765 systemd[1]: Mounted sys-kernel-tracing.mount. May 15 10:43:57.280789 systemd[1]: Mounted tmp.mount. May 15 10:43:57.280806 systemd[1]: Finished kmod-static-nodes.service. May 15 10:43:57.280826 systemd-journald[1014]: Journal started May 15 10:43:57.280867 systemd-journald[1014]: Runtime Journal (/run/log/journal/310220e044bc4fdb9427ceeeb8c3a392) is 6.0M, max 48.4M, 42.4M free. 
May 15 10:43:57.186000 audit[1]: AVC avc: denied { audit_read } for pid=1 comm="systemd" capability=37 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=1 May 15 10:43:57.186000 audit[1]: EVENT_LISTENER pid=1 uid=0 auid=4294967295 tty=(none) ses=4294967295 subj=system_u:system_r:kernel_t:s0 comm="systemd" exe="/usr/lib/systemd/systemd" nl-mcgrp=1 op=connect res=1 May 15 10:43:57.277000 audit: CONFIG_CHANGE op=set audit_enabled=1 old=1 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 res=1 May 15 10:43:57.277000 audit[1014]: SYSCALL arch=c000003e syscall=46 success=yes exit=60 a0=6 a1=7fff64bc72d0 a2=4000 a3=7fff64bc736c items=0 ppid=1 pid=1014 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="systemd-journal" exe="/usr/lib/systemd/systemd-journald" subj=system_u:system_r:kernel_t:s0 key=(null) May 15 10:43:57.277000 audit: PROCTITLE proctitle="/usr/lib/systemd/systemd-journald" May 15 10:43:57.277000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=kmod-static-nodes comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' May 15 10:43:57.284190 systemd[1]: Started systemd-journald.service. May 15 10:43:57.283000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-journald comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' May 15 10:43:57.285175 systemd[1]: modprobe@configfs.service: Deactivated successfully. May 15 10:43:57.285874 systemd[1]: Finished modprobe@configfs.service. May 15 10:43:57.286000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@configfs comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? 
res=success' May 15 10:43:57.286000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@configfs comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' May 15 10:43:57.287205 systemd[1]: modprobe@dm_mod.service: Deactivated successfully. May 15 10:43:57.289150 systemd[1]: Finished modprobe@dm_mod.service. May 15 10:43:57.289000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@dm_mod comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' May 15 10:43:57.289000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@dm_mod comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' May 15 10:43:57.290542 systemd[1]: modprobe@drm.service: Deactivated successfully. May 15 10:43:57.290972 systemd[1]: Finished modprobe@drm.service. May 15 10:43:57.291000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@drm comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' May 15 10:43:57.291000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@drm comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' May 15 10:43:57.292225 systemd[1]: modprobe@efi_pstore.service: Deactivated successfully. May 15 10:43:57.292573 systemd[1]: Finished modprobe@efi_pstore.service. May 15 10:43:57.292000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@efi_pstore comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? 
res=success' May 15 10:43:57.292000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@efi_pstore comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' May 15 10:43:57.293964 systemd[1]: modprobe@fuse.service: Deactivated successfully. May 15 10:43:57.294156 systemd[1]: Finished modprobe@fuse.service. May 15 10:43:57.294000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@fuse comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' May 15 10:43:57.294000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@fuse comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' May 15 10:43:57.295243 systemd[1]: modprobe@loop.service: Deactivated successfully. May 15 10:43:57.295527 systemd[1]: Finished modprobe@loop.service. May 15 10:43:57.295000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@loop comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' May 15 10:43:57.295000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@loop comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' May 15 10:43:57.296844 systemd[1]: Finished systemd-modules-load.service. May 15 10:43:57.297000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-modules-load comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' May 15 10:43:57.298408 systemd[1]: Finished systemd-network-generator.service. 
May 15 10:43:57.298000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-network-generator comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' May 15 10:43:57.299795 systemd[1]: Finished systemd-remount-fs.service. May 15 10:43:57.302241 systemd[1]: Reached target network-pre.target. May 15 10:43:57.301000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-remount-fs comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' May 15 10:43:57.305028 systemd[1]: Mounting sys-fs-fuse-connections.mount... May 15 10:43:57.307184 systemd[1]: Mounting sys-kernel-config.mount... May 15 10:43:57.307994 systemd[1]: remount-root.service was skipped because of an unmet condition check (ConditionPathIsReadWrite=!/). May 15 10:43:57.309544 systemd[1]: Starting systemd-hwdb-update.service... May 15 10:43:57.311462 systemd[1]: Starting systemd-journal-flush.service... May 15 10:43:57.312386 systemd[1]: systemd-pstore.service was skipped because of an unmet condition check (ConditionDirectoryNotEmpty=/sys/fs/pstore). May 15 10:43:57.314341 systemd[1]: Starting systemd-random-seed.service... May 15 10:43:57.315210 systemd[1]: systemd-repart.service was skipped because no trigger condition checks were met. May 15 10:43:57.316271 systemd[1]: Starting systemd-sysctl.service... May 15 10:43:57.321850 systemd[1]: Finished flatcar-tmpfiles.service. May 15 10:43:57.323122 systemd[1]: Mounted sys-fs-fuse-connections.mount. May 15 10:43:57.318000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=flatcar-tmpfiles comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' May 15 10:43:57.324128 systemd[1]: Mounted sys-kernel-config.mount. 
May 15 10:43:57.325870 systemd-journald[1014]: Time spent on flushing to /var/log/journal/310220e044bc4fdb9427ceeeb8c3a392 is 22.674ms for 1110 entries. May 15 10:43:57.325870 systemd-journald[1014]: System Journal (/var/log/journal/310220e044bc4fdb9427ceeeb8c3a392) is 8.0M, max 195.6M, 187.6M free. May 15 10:43:57.383522 systemd-journald[1014]: Received client request to flush runtime journal. May 15 10:43:57.358000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-random-seed comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' May 15 10:43:57.362000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-sysctl comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' May 15 10:43:57.363000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-udev-trigger comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' May 15 10:43:57.378000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-sysusers comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' May 15 10:43:57.326571 systemd[1]: Starting systemd-sysusers.service... May 15 10:43:57.358201 systemd[1]: Finished systemd-random-seed.service. May 15 10:43:57.384659 udevadm[1063]: systemd-udev-settle.service is deprecated. Please fix lvm2-activation-early.service, lvm2-activation.service not to pull it in. May 15 10:43:57.359363 systemd[1]: Reached target first-boot-complete.target. May 15 10:43:57.361804 systemd[1]: Finished systemd-sysctl.service. May 15 10:43:57.362922 systemd[1]: Finished systemd-udev-trigger.service. May 15 10:43:57.364997 systemd[1]: Starting systemd-udev-settle.service... 
May 15 10:43:57.378521 systemd[1]: Finished systemd-sysusers.service. May 15 10:43:57.380845 systemd[1]: Starting systemd-tmpfiles-setup-dev.service... May 15 10:43:57.384447 systemd[1]: Finished systemd-journal-flush.service. May 15 10:43:57.384000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-journal-flush comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' May 15 10:43:57.400688 systemd[1]: Finished systemd-tmpfiles-setup-dev.service. May 15 10:43:57.401000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-tmpfiles-setup-dev comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' May 15 10:43:57.903092 systemd[1]: Finished systemd-hwdb-update.service. May 15 10:43:57.904000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-hwdb-update comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' May 15 10:43:57.905060 kernel: kauditd_printk_skb: 77 callbacks suppressed May 15 10:43:57.905121 kernel: audit: type=1130 audit(1747305837.904:112): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-hwdb-update comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' May 15 10:43:57.905443 systemd[1]: Starting systemd-udevd.service... May 15 10:43:57.924124 systemd-udevd[1071]: Using default interface naming scheme 'v252'. May 15 10:43:57.937465 systemd[1]: Started systemd-udevd.service. May 15 10:43:57.937000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-udevd comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? 
res=success' May 15 10:43:57.942697 kernel: audit: type=1130 audit(1747305837.937:113): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-udevd comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' May 15 10:43:57.939719 systemd[1]: Starting systemd-networkd.service... May 15 10:43:57.946417 systemd[1]: Starting systemd-userdbd.service... May 15 10:43:57.986951 systemd[1]: Started systemd-userdbd.service. May 15 10:43:57.987000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-userdbd comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' May 15 10:43:57.992707 kernel: audit: type=1130 audit(1747305837.987:114): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-userdbd comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' May 15 10:43:57.994004 systemd[1]: Found device dev-ttyS0.device. May 15 10:43:58.015865 systemd[1]: Found device dev-disk-by\x2dlabel-OEM.device. May 15 10:43:58.037422 kernel: input: Power Button as /devices/LNXSYSTM:00/LNXPWRBN:00/input/input2 May 15 10:43:58.044667 kernel: ACPI: button: Power Button [PWRF] May 15 10:43:58.051529 systemd-networkd[1080]: lo: Link UP May 15 10:43:58.051539 systemd-networkd[1080]: lo: Gained carrier May 15 10:43:58.052379 systemd-networkd[1080]: Enumeration completed May 15 10:43:58.052480 systemd-networkd[1080]: eth0: Configuring with /usr/lib/systemd/network/zz-default.network. May 15 10:43:58.052514 systemd[1]: Started systemd-networkd.service. May 15 10:43:58.052000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-networkd comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? 
res=success' May 15 10:43:58.054691 systemd-networkd[1080]: eth0: Link UP May 15 10:43:58.054697 systemd-networkd[1080]: eth0: Gained carrier May 15 10:43:58.057659 kernel: audit: type=1130 audit(1747305838.052:115): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-networkd comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' May 15 10:43:58.057000 audit[1082]: AVC avc: denied { confidentiality } for pid=1082 comm="(udev-worker)" lockdown_reason="use of tracefs" scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=lockdown permissive=1 May 15 10:43:58.057000 audit[1082]: SYSCALL arch=c000003e syscall=175 success=yes exit=0 a0=555f29fafcd0 a1=338ac a2=7f6d0e2c4bc5 a3=5 items=110 ppid=1071 pid=1082 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="(udev-worker)" exe="/usr/bin/udevadm" subj=system_u:system_r:kernel_t:s0 key=(null) May 15 10:43:58.068787 kernel: audit: type=1400 audit(1747305838.057:116): avc: denied { confidentiality } for pid=1082 comm="(udev-worker)" lockdown_reason="use of tracefs" scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=lockdown permissive=1 May 15 10:43:58.068855 kernel: audit: type=1300 audit(1747305838.057:116): arch=c000003e syscall=175 success=yes exit=0 a0=555f29fafcd0 a1=338ac a2=7f6d0e2c4bc5 a3=5 items=110 ppid=1071 pid=1082 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="(udev-worker)" exe="/usr/bin/udevadm" subj=system_u:system_r:kernel_t:s0 key=(null) May 15 10:43:58.068876 kernel: audit: type=1307 audit(1747305838.057:116): cwd="/" May 15 10:43:58.057000 audit: CWD cwd="/" May 15 10:43:58.069656 kernel: audit: type=1302 audit(1747305838.057:116): item=0 name=(null) inode=1041 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=PARENT 
cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 May 15 10:43:58.057000 audit: PATH item=0 name=(null) inode=1041 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 May 15 10:43:58.057000 audit: PATH item=1 name=(null) inode=11896 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 May 15 10:43:58.080254 kernel: audit: type=1302 audit(1747305838.057:116): item=1 name=(null) inode=11896 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 May 15 10:43:58.080290 kernel: audit: type=1302 audit(1747305838.057:116): item=2 name=(null) inode=11896 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 May 15 10:43:58.057000 audit: PATH item=2 name=(null) inode=11896 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 May 15 10:43:58.057000 audit: PATH item=3 name=(null) inode=11897 dev=00:0b mode=0100640 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 May 15 10:43:58.057000 audit: PATH item=4 name=(null) inode=11896 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 May 15 10:43:58.057000 audit: PATH item=5 name=(null) inode=11898 dev=00:0b mode=0100640 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 May 15 10:43:58.057000 audit: PATH item=6 name=(null) inode=11896 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 
obj=system_u:object_r:tracefs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 May 15 10:43:58.057000 audit: PATH item=7 name=(null) inode=11899 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 May 15 10:43:58.057000 audit: PATH item=8 name=(null) inode=11899 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 May 15 10:43:58.057000 audit: PATH item=9 name=(null) inode=11900 dev=00:0b mode=0100640 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 May 15 10:43:58.057000 audit: PATH item=10 name=(null) inode=11899 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 May 15 10:43:58.057000 audit: PATH item=11 name=(null) inode=11901 dev=00:0b mode=0100440 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 May 15 10:43:58.057000 audit: PATH item=12 name=(null) inode=11899 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 May 15 10:43:58.057000 audit: PATH item=13 name=(null) inode=11902 dev=00:0b mode=0100640 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 May 15 10:43:58.057000 audit: PATH item=14 name=(null) inode=11899 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 May 15 10:43:58.057000 audit: PATH item=15 name=(null) inode=11903 dev=00:0b mode=0100640 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 
nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 May 15 10:43:58.057000 audit: PATH item=16 name=(null) inode=11899 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 May 15 10:43:58.057000 audit: PATH item=17 name=(null) inode=11904 dev=00:0b mode=0100440 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 May 15 10:43:58.057000 audit: PATH item=18 name=(null) inode=11896 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 May 15 10:43:58.057000 audit: PATH item=19 name=(null) inode=11905 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 May 15 10:43:58.057000 audit: PATH item=20 name=(null) inode=11905 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 May 15 10:43:58.057000 audit: PATH item=21 name=(null) inode=11906 dev=00:0b mode=0100640 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 May 15 10:43:58.057000 audit: PATH item=22 name=(null) inode=11905 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 May 15 10:43:58.057000 audit: PATH item=23 name=(null) inode=11907 dev=00:0b mode=0100440 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 May 15 10:43:58.057000 audit: PATH item=24 name=(null) inode=11905 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 
cap_fver=0 cap_frootid=0 May 15 10:43:58.057000 audit: PATH item=25 name=(null) inode=11908 dev=00:0b mode=0100640 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 May 15 10:43:58.057000 audit: PATH item=26 name=(null) inode=11905 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 May 15 10:43:58.057000 audit: PATH item=27 name=(null) inode=11909 dev=00:0b mode=0100640 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 May 15 10:43:58.057000 audit: PATH item=28 name=(null) inode=11905 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 May 15 10:43:58.057000 audit: PATH item=29 name=(null) inode=11910 dev=00:0b mode=0100440 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 May 15 10:43:58.057000 audit: PATH item=30 name=(null) inode=11896 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 May 15 10:43:58.057000 audit: PATH item=31 name=(null) inode=11911 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 May 15 10:43:58.057000 audit: PATH item=32 name=(null) inode=11911 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 May 15 10:43:58.057000 audit: PATH item=33 name=(null) inode=11912 dev=00:0b mode=0100640 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 May 15 
10:43:58.057000 audit: PATH item=34 name=(null) inode=11911 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 May 15 10:43:58.057000 audit: PATH item=35 name=(null) inode=11913 dev=00:0b mode=0100440 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 May 15 10:43:58.057000 audit: PATH item=36 name=(null) inode=11911 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 May 15 10:43:58.057000 audit: PATH item=37 name=(null) inode=11914 dev=00:0b mode=0100640 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 May 15 10:43:58.057000 audit: PATH item=38 name=(null) inode=11911 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 May 15 10:43:58.057000 audit: PATH item=39 name=(null) inode=11915 dev=00:0b mode=0100640 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 May 15 10:43:58.057000 audit: PATH item=40 name=(null) inode=11911 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 May 15 10:43:58.057000 audit: PATH item=41 name=(null) inode=11916 dev=00:0b mode=0100440 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 May 15 10:43:58.057000 audit: PATH item=42 name=(null) inode=11896 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 May 15 10:43:58.057000 audit: PATH item=43 
name=(null) inode=11917 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 May 15 10:43:58.057000 audit: PATH item=44 name=(null) inode=11917 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 May 15 10:43:58.057000 audit: PATH item=45 name=(null) inode=11918 dev=00:0b mode=0100640 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 May 15 10:43:58.057000 audit: PATH item=46 name=(null) inode=11917 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 May 15 10:43:58.057000 audit: PATH item=47 name=(null) inode=11919 dev=00:0b mode=0100440 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 May 15 10:43:58.057000 audit: PATH item=48 name=(null) inode=11917 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 May 15 10:43:58.057000 audit: PATH item=49 name=(null) inode=11920 dev=00:0b mode=0100640 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 May 15 10:43:58.057000 audit: PATH item=50 name=(null) inode=11917 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 May 15 10:43:58.057000 audit: PATH item=51 name=(null) inode=11921 dev=00:0b mode=0100640 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 May 15 10:43:58.057000 audit: PATH item=52 name=(null) inode=11917 dev=00:0b 
mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 May 15 10:43:58.057000 audit: PATH item=53 name=(null) inode=11922 dev=00:0b mode=0100440 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 May 15 10:43:58.057000 audit: PATH item=54 name=(null) inode=1041 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 May 15 10:43:58.057000 audit: PATH item=55 name=(null) inode=11923 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 May 15 10:43:58.057000 audit: PATH item=56 name=(null) inode=11923 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 May 15 10:43:58.057000 audit: PATH item=57 name=(null) inode=11924 dev=00:0b mode=0100640 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 May 15 10:43:58.057000 audit: PATH item=58 name=(null) inode=11923 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 May 15 10:43:58.057000 audit: PATH item=59 name=(null) inode=11925 dev=00:0b mode=0100640 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 May 15 10:43:58.057000 audit: PATH item=60 name=(null) inode=11923 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 May 15 10:43:58.057000 audit: PATH item=61 name=(null) inode=11926 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 
obj=system_u:object_r:tracefs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 May 15 10:43:58.057000 audit: PATH item=62 name=(null) inode=11926 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 May 15 10:43:58.057000 audit: PATH item=63 name=(null) inode=11927 dev=00:0b mode=0100640 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 May 15 10:43:58.057000 audit: PATH item=64 name=(null) inode=11926 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 May 15 10:43:58.057000 audit: PATH item=65 name=(null) inode=11928 dev=00:0b mode=0100440 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 May 15 10:43:58.057000 audit: PATH item=66 name=(null) inode=11926 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 May 15 10:43:58.057000 audit: PATH item=67 name=(null) inode=11929 dev=00:0b mode=0100640 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 May 15 10:43:58.057000 audit: PATH item=68 name=(null) inode=11926 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 May 15 10:43:58.057000 audit: PATH item=69 name=(null) inode=11930 dev=00:0b mode=0100640 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 May 15 10:43:58.057000 audit: PATH item=70 name=(null) inode=11926 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 
nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 May 15 10:43:58.057000 audit: PATH item=71 name=(null) inode=11931 dev=00:0b mode=0100440 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 May 15 10:43:58.057000 audit: PATH item=72 name=(null) inode=11923 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 May 15 10:43:58.057000 audit: PATH item=73 name=(null) inode=11932 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 May 15 10:43:58.057000 audit: PATH item=74 name=(null) inode=11932 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 May 15 10:43:58.057000 audit: PATH item=75 name=(null) inode=11933 dev=00:0b mode=0100640 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 May 15 10:43:58.057000 audit: PATH item=76 name=(null) inode=11932 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 May 15 10:43:58.057000 audit: PATH item=77 name=(null) inode=11934 dev=00:0b mode=0100440 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 May 15 10:43:58.057000 audit: PATH item=78 name=(null) inode=11932 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 May 15 10:43:58.057000 audit: PATH item=79 name=(null) inode=11935 dev=00:0b mode=0100640 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 
cap_fver=0 cap_frootid=0 May 15 10:43:58.057000 audit: PATH item=80 name=(null) inode=11932 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 May 15 10:43:58.057000 audit: PATH item=81 name=(null) inode=11936 dev=00:0b mode=0100640 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 May 15 10:43:58.057000 audit: PATH item=82 name=(null) inode=11932 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 May 15 10:43:58.057000 audit: PATH item=83 name=(null) inode=11937 dev=00:0b mode=0100440 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 May 15 10:43:58.057000 audit: PATH item=84 name=(null) inode=11923 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 May 15 10:43:58.057000 audit: PATH item=85 name=(null) inode=11938 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 May 15 10:43:58.057000 audit: PATH item=86 name=(null) inode=11938 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 May 15 10:43:58.057000 audit: PATH item=87 name=(null) inode=11939 dev=00:0b mode=0100640 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 May 15 10:43:58.057000 audit: PATH item=88 name=(null) inode=11938 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 May 15 
10:43:58.057000 audit: PATH item=89 name=(null) inode=11940 dev=00:0b mode=0100440 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 May 15 10:43:58.057000 audit: PATH item=90 name=(null) inode=11938 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 May 15 10:43:58.057000 audit: PATH item=91 name=(null) inode=11941 dev=00:0b mode=0100640 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 May 15 10:43:58.057000 audit: PATH item=92 name=(null) inode=11938 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 May 15 10:43:58.057000 audit: PATH item=93 name=(null) inode=11942 dev=00:0b mode=0100640 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 May 15 10:43:58.057000 audit: PATH item=94 name=(null) inode=11938 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 May 15 10:43:58.057000 audit: PATH item=95 name=(null) inode=11943 dev=00:0b mode=0100440 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 May 15 10:43:58.057000 audit: PATH item=96 name=(null) inode=11923 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 May 15 10:43:58.057000 audit: PATH item=97 name=(null) inode=11944 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 May 15 10:43:58.057000 audit: PATH item=98 
name=(null) inode=11944 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 May 15 10:43:58.057000 audit: PATH item=99 name=(null) inode=11945 dev=00:0b mode=0100640 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 May 15 10:43:58.057000 audit: PATH item=100 name=(null) inode=11944 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 May 15 10:43:58.057000 audit: PATH item=101 name=(null) inode=11946 dev=00:0b mode=0100440 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 May 15 10:43:58.057000 audit: PATH item=102 name=(null) inode=11944 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 May 15 10:43:58.057000 audit: PATH item=103 name=(null) inode=11947 dev=00:0b mode=0100640 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 May 15 10:43:58.057000 audit: PATH item=104 name=(null) inode=11944 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 May 15 10:43:58.057000 audit: PATH item=105 name=(null) inode=11948 dev=00:0b mode=0100640 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 May 15 10:43:58.057000 audit: PATH item=106 name=(null) inode=11944 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 May 15 10:43:58.057000 audit: PATH item=107 name=(null) inode=11949 dev=00:0b 
mode=0100440 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 May 15 10:43:58.057000 audit: PATH item=108 name=(null) inode=1 dev=00:07 mode=040700 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:debugfs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 May 15 10:43:58.057000 audit: PATH item=109 name=(null) inode=11950 dev=00:07 mode=040755 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:debugfs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 May 15 10:43:58.057000 audit: PROCTITLE proctitle="(udev-worker)" May 15 10:43:58.090081 systemd-networkd[1080]: eth0: DHCPv4 address 10.0.0.96/16, gateway 10.0.0.1 acquired from 10.0.0.1 May 15 10:43:58.095079 kernel: input: ImExPS/2 Generic Explorer Mouse as /devices/platform/i8042/serio1/input/input3 May 15 10:43:58.102996 kernel: i801_smbus 0000:00:1f.3: Enabling SMBus device May 15 10:43:58.111123 kernel: i801_smbus 0000:00:1f.3: SMBus using PCI interrupt May 15 10:43:58.111318 kernel: i2c i2c-0: 1/1 memory slots populated (from DMI) May 15 10:43:58.111440 kernel: i2c i2c-0: Memory type 0x07 not supported yet, not instantiating SPD May 15 10:43:58.111537 kernel: mousedev: PS/2 mouse device common for all mice May 15 10:43:58.179972 kernel: kvm: Nested Virtualization enabled May 15 10:43:58.180117 kernel: SVM: kvm: Nested Paging enabled May 15 10:43:58.180139 kernel: SVM: Virtual VMLOAD VMSAVE supported May 15 10:43:58.180686 kernel: SVM: Virtual GIF supported May 15 10:43:58.202698 kernel: EDAC MC: Ver: 3.0.0 May 15 10:43:58.231091 systemd[1]: Finished systemd-udev-settle.service. May 15 10:43:58.231000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-udev-settle comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' May 15 10:43:58.233331 systemd[1]: Starting lvm2-activation-early.service... 
May 15 10:43:58.241696 lvm[1109]: WARNING: Failed to connect to lvmetad. Falling back to device scanning. May 15 10:43:58.273198 systemd[1]: Finished lvm2-activation-early.service. May 15 10:43:58.273000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=lvm2-activation-early comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' May 15 10:43:58.274524 systemd[1]: Reached target cryptsetup.target. May 15 10:43:58.276938 systemd[1]: Starting lvm2-activation.service... May 15 10:43:58.282340 lvm[1111]: WARNING: Failed to connect to lvmetad. Falling back to device scanning. May 15 10:43:58.313079 systemd[1]: Finished lvm2-activation.service. May 15 10:43:58.313000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=lvm2-activation comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' May 15 10:43:58.314193 systemd[1]: Reached target local-fs-pre.target. May 15 10:43:58.315112 systemd[1]: var-lib-machines.mount was skipped because of an unmet condition check (ConditionPathExists=/var/lib/machines.raw). May 15 10:43:58.315141 systemd[1]: Reached target local-fs.target. May 15 10:43:58.315996 systemd[1]: Reached target machines.target. May 15 10:43:58.318251 systemd[1]: Starting ldconfig.service... May 15 10:43:58.319440 systemd[1]: systemd-binfmt.service was skipped because no trigger condition checks were met. May 15 10:43:58.319494 systemd[1]: systemd-boot-system-token.service was skipped because of an unmet condition check (ConditionPathExists=/sys/firmware/efi/efivars/LoaderFeatures-4a67b082-0a4c-41cf-b6c7-440b29bb8c4f). May 15 10:43:58.320744 systemd[1]: Starting systemd-boot-update.service... May 15 10:43:58.322939 systemd[1]: Starting systemd-fsck@dev-disk-by\x2dlabel-OEM.service... May 15 10:43:58.325931 systemd[1]: Starting systemd-machine-id-commit.service... 
May 15 10:43:58.328144 systemd[1]: Starting systemd-sysext.service... May 15 10:43:58.329644 systemd[1]: boot.automount: Got automount request for /boot, triggered by 1114 (bootctl) May 15 10:43:58.331139 systemd[1]: Starting systemd-fsck@dev-disk-by\x2dlabel-EFI\x2dSYSTEM.service... May 15 10:43:58.337373 systemd[1]: Unmounting usr-share-oem.mount... May 15 10:43:58.340624 systemd[1]: usr-share-oem.mount: Deactivated successfully. May 15 10:43:58.340803 systemd[1]: Unmounted usr-share-oem.mount. May 15 10:43:58.349646 kernel: loop0: detected capacity change from 0 to 210664 May 15 10:43:58.350808 systemd[1]: Finished systemd-fsck@dev-disk-by\x2dlabel-OEM.service. May 15 10:43:58.347000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-fsck@dev-disk-by\x2dlabel-OEM comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' May 15 10:43:58.595643 kernel: squashfs: version 4.0 (2009/01/31) Phillip Lougher May 15 10:43:58.650305 systemd[1]: etc-machine\x2did.mount: Deactivated successfully. May 15 10:43:58.651127 systemd[1]: Finished systemd-machine-id-commit.service. May 15 10:43:58.651000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-machine-id-commit comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' May 15 10:43:58.659441 systemd-fsck[1127]: fsck.fat 4.2 (2021-01-31) May 15 10:43:58.659441 systemd-fsck[1127]: /dev/vda1: 791 files, 120752/258078 clusters May 15 10:43:58.661254 systemd[1]: Finished systemd-fsck@dev-disk-by\x2dlabel-EFI\x2dSYSTEM.service. May 15 10:43:58.662000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-fsck@dev-disk-by\x2dlabel-EFI\x2dSYSTEM comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? 
res=success' May 15 10:43:58.664252 systemd[1]: Mounting boot.mount... May 15 10:43:58.668641 kernel: loop1: detected capacity change from 0 to 210664 May 15 10:43:58.671082 systemd[1]: Mounted boot.mount. May 15 10:43:58.674144 (sd-sysext)[1134]: Using extensions 'kubernetes'. May 15 10:43:58.674493 (sd-sysext)[1134]: Merged extensions into '/usr'. May 15 10:43:58.682917 systemd[1]: Finished systemd-boot-update.service. May 15 10:43:58.683000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-boot-update comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' May 15 10:43:58.693187 systemd[1]: proc-xen.mount was skipped because of an unmet condition check (ConditionVirtualization=xen). May 15 10:43:58.694606 systemd[1]: Mounting usr-share-oem.mount... May 15 10:43:58.695785 systemd[1]: ignition-delete-config.service was skipped because no trigger condition checks were met. May 15 10:43:58.697014 systemd[1]: Starting modprobe@dm_mod.service... May 15 10:43:58.699387 systemd[1]: Starting modprobe@efi_pstore.service... May 15 10:43:58.701783 systemd[1]: Starting modprobe@loop.service... May 15 10:43:58.702726 systemd[1]: systemd-binfmt.service was skipped because no trigger condition checks were met. May 15 10:43:58.702910 systemd[1]: systemd-boot-system-token.service was skipped because of an unmet condition check (ConditionPathExists=/sys/firmware/efi/efivars/LoaderFeatures-4a67b082-0a4c-41cf-b6c7-440b29bb8c4f). May 15 10:43:58.703130 systemd[1]: xenserver-pv-version.service was skipped because of an unmet condition check (ConditionVirtualization=xen). May 15 10:43:58.706122 systemd[1]: Mounted usr-share-oem.mount. May 15 10:43:58.707588 systemd[1]: modprobe@dm_mod.service: Deactivated successfully. May 15 10:43:58.707856 systemd[1]: Finished modprobe@dm_mod.service. 
May 15 10:43:58.708000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@dm_mod comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' May 15 10:43:58.708000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@dm_mod comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' May 15 10:43:58.709571 systemd[1]: modprobe@efi_pstore.service: Deactivated successfully. May 15 10:43:58.709815 systemd[1]: Finished modprobe@efi_pstore.service. May 15 10:43:58.710000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@efi_pstore comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' May 15 10:43:58.710000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@efi_pstore comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' May 15 10:43:58.711222 systemd[1]: modprobe@loop.service: Deactivated successfully. May 15 10:43:58.711378 systemd[1]: Finished modprobe@loop.service. May 15 10:43:58.711000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@loop comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' May 15 10:43:58.711000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@loop comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' May 15 10:43:58.712813 systemd[1]: systemd-pstore.service was skipped because of an unmet condition check (ConditionDirectoryNotEmpty=/sys/fs/pstore). 
May 15 10:43:58.712910 systemd[1]: systemd-repart.service was skipped because no trigger condition checks were met. May 15 10:43:58.713880 systemd[1]: Finished systemd-sysext.service. May 15 10:43:58.714000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-sysext comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' May 15 10:43:58.716232 systemd[1]: Starting ensure-sysext.service... May 15 10:43:58.718406 systemd[1]: Starting systemd-tmpfiles-setup.service... May 15 10:43:58.723131 systemd[1]: Reloading. May 15 10:43:58.730277 systemd-tmpfiles[1150]: /usr/lib/tmpfiles.d/legacy.conf:13: Duplicate line for path "/run/lock", ignoring. May 15 10:43:58.731344 systemd-tmpfiles[1150]: /usr/lib/tmpfiles.d/provision.conf:20: Duplicate line for path "/root", ignoring. May 15 10:43:58.733526 systemd-tmpfiles[1150]: /usr/lib/tmpfiles.d/systemd.conf:29: Duplicate line for path "/var/lib/systemd", ignoring. May 15 10:43:58.777431 ldconfig[1113]: /sbin/ldconfig: /lib/ld.so.conf is not an ELF file - it has the wrong magic bytes at the start. May 15 10:43:58.782775 /usr/lib/systemd/system-generators/torcx-generator[1172]: time="2025-05-15T10:43:58Z" level=debug msg="common configuration parsed" base_dir=/var/lib/torcx/ conf_dir=/etc/torcx/ run_dir=/run/torcx/ store_paths="[/usr/share/torcx/store /usr/share/oem/torcx/store/3510.3.100 /usr/share/oem/torcx/store /var/lib/torcx/store/3510.3.100 /var/lib/torcx/store]" May 15 10:43:58.783143 /usr/lib/systemd/system-generators/torcx-generator[1172]: time="2025-05-15T10:43:58Z" level=info msg="torcx already run" May 15 10:43:58.863036 systemd[1]: /usr/lib/systemd/system/locksmithd.service:8: Unit uses CPUShares=; please use CPUWeight= instead. Support for CPUShares= will be removed soon. May 15 10:43:58.863055 systemd[1]: /usr/lib/systemd/system/locksmithd.service:9: Unit uses MemoryLimit=; please use MemoryMax= instead. 
Support for MemoryLimit= will be removed soon. May 15 10:43:58.881961 systemd[1]: /run/systemd/system/docker.socket:8: ListenStream= references a path below legacy directory /var/run/, updating /var/run/docker.sock → /run/docker.sock; please update the unit file accordingly. May 15 10:43:58.934125 systemd[1]: Finished ldconfig.service. May 15 10:43:58.934000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=ldconfig comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' May 15 10:43:58.936476 systemd[1]: Finished systemd-tmpfiles-setup.service. May 15 10:43:58.940276 systemd[1]: Starting audit-rules.service... May 15 10:43:58.936000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-tmpfiles-setup comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' May 15 10:43:58.942267 systemd[1]: Starting clean-ca-certificates.service... May 15 10:43:58.944352 systemd[1]: Starting systemd-journal-catalog-update.service... May 15 10:43:58.947286 systemd[1]: Starting systemd-resolved.service... May 15 10:43:58.949778 systemd[1]: Starting systemd-timesyncd.service... May 15 10:43:58.952010 systemd[1]: Starting systemd-update-utmp.service... May 15 10:43:58.953935 systemd[1]: Finished clean-ca-certificates.service. May 15 10:43:58.954000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=clean-ca-certificates comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' May 15 10:43:58.959955 systemd[1]: ignition-delete-config.service was skipped because no trigger condition checks were met. May 15 10:43:58.962350 systemd[1]: Starting modprobe@dm_mod.service... 
May 15 10:43:58.964000 audit[1232]: SYSTEM_BOOT pid=1232 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg=' comm="systemd-update-utmp" exe="/usr/lib/systemd/systemd-update-utmp" hostname=? addr=? terminal=? res=success' May 15 10:43:58.965036 systemd[1]: Starting modprobe@efi_pstore.service... May 15 10:43:58.967756 systemd[1]: Starting modprobe@loop.service... May 15 10:43:58.968864 systemd[1]: systemd-binfmt.service was skipped because no trigger condition checks were met. May 15 10:43:58.969050 systemd[1]: systemd-boot-system-token.service was skipped because of an unmet condition check (ConditionPathExists=/sys/firmware/efi/efivars/LoaderFeatures-4a67b082-0a4c-41cf-b6c7-440b29bb8c4f). May 15 10:43:58.969224 systemd[1]: update-ca-certificates.service was skipped because of an unmet condition check (ConditionPathIsSymbolicLink=!/etc/ssl/certs/ca-certificates.crt). May 15 10:43:58.970411 systemd[1]: modprobe@dm_mod.service: Deactivated successfully. May 15 10:43:58.970598 systemd[1]: Finished modprobe@dm_mod.service. May 15 10:43:58.971000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@dm_mod comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' May 15 10:43:58.971000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@dm_mod comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' May 15 10:43:58.972440 systemd[1]: modprobe@efi_pstore.service: Deactivated successfully. May 15 10:43:58.972626 systemd[1]: Finished modprobe@efi_pstore.service. May 15 10:43:58.973000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@efi_pstore comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? 
res=success' May 15 10:43:58.973000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@efi_pstore comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' May 15 10:43:58.974405 systemd[1]: modprobe@loop.service: Deactivated successfully. May 15 10:43:58.974787 systemd[1]: Finished modprobe@loop.service. May 15 10:43:58.975000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@loop comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' May 15 10:43:58.975000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@loop comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' May 15 10:43:58.976892 systemd[1]: Finished systemd-journal-catalog-update.service. May 15 10:43:58.977000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-journal-catalog-update comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' May 15 10:43:58.979980 systemd[1]: systemd-pstore.service was skipped because of an unmet condition check (ConditionDirectoryNotEmpty=/sys/fs/pstore). May 15 10:43:58.980130 systemd[1]: systemd-repart.service was skipped because no trigger condition checks were met. May 15 10:43:58.981932 systemd[1]: Starting systemd-update-done.service... May 15 10:43:58.985000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-update-utmp comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' May 15 10:43:58.984830 systemd[1]: Finished systemd-update-utmp.service. May 15 10:43:58.987904 systemd[1]: ignition-delete-config.service was skipped because no trigger condition checks were met. 
May 15 10:43:58.989253 systemd[1]: Starting modprobe@dm_mod.service... May 15 10:43:58.991182 systemd[1]: Starting modprobe@efi_pstore.service... May 15 10:43:58.993436 systemd[1]: Starting modprobe@loop.service... May 15 10:43:58.994394 systemd[1]: systemd-binfmt.service was skipped because no trigger condition checks were met. May 15 10:43:58.994526 systemd[1]: systemd-boot-system-token.service was skipped because of an unmet condition check (ConditionPathExists=/sys/firmware/efi/efivars/LoaderFeatures-4a67b082-0a4c-41cf-b6c7-440b29bb8c4f). May 15 10:43:58.994655 systemd[1]: update-ca-certificates.service was skipped because of an unmet condition check (ConditionPathIsSymbolicLink=!/etc/ssl/certs/ca-certificates.crt). May 15 10:43:58.996053 systemd[1]: Finished systemd-update-done.service. May 15 10:43:58.996000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-update-done comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' May 15 10:43:58.997475 systemd[1]: modprobe@dm_mod.service: Deactivated successfully. May 15 10:43:58.997608 systemd[1]: Finished modprobe@dm_mod.service. May 15 10:43:58.998000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@dm_mod comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' May 15 10:43:58.998000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@dm_mod comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' May 15 10:43:58.999048 systemd[1]: modprobe@efi_pstore.service: Deactivated successfully. May 15 10:43:58.999190 systemd[1]: Finished modprobe@efi_pstore.service. 
May 15 10:43:58.999000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@efi_pstore comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' May 15 10:43:58.999000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@efi_pstore comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' May 15 10:43:59.000748 systemd[1]: modprobe@loop.service: Deactivated successfully. May 15 10:43:59.001080 systemd[1]: Finished modprobe@loop.service. May 15 10:43:59.001000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@loop comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' May 15 10:43:59.001000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@loop comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' May 15 10:43:59.002415 systemd[1]: systemd-pstore.service was skipped because of an unmet condition check (ConditionDirectoryNotEmpty=/sys/fs/pstore). May 15 10:43:59.002494 systemd[1]: systemd-repart.service was skipped because no trigger condition checks were met. May 15 10:43:59.005299 systemd[1]: ignition-delete-config.service was skipped because no trigger condition checks were met. May 15 10:43:59.006419 systemd[1]: Starting modprobe@dm_mod.service... May 15 10:43:59.008495 systemd[1]: Starting modprobe@drm.service... May 15 10:43:59.010501 systemd[1]: Starting modprobe@efi_pstore.service... May 15 10:43:59.012314 systemd[1]: Starting modprobe@loop.service... May 15 10:43:59.013295 systemd[1]: systemd-binfmt.service was skipped because no trigger condition checks were met. 
May 15 10:43:59.013416 systemd[1]: systemd-boot-system-token.service was skipped because of an unmet condition check (ConditionPathExists=/sys/firmware/efi/efivars/LoaderFeatures-4a67b082-0a4c-41cf-b6c7-440b29bb8c4f).
May 15 10:43:59.014826 systemd[1]: Starting systemd-networkd-wait-online.service...
May 15 10:43:59.015944 systemd[1]: update-ca-certificates.service was skipped because of an unmet condition check (ConditionPathIsSymbolicLink=!/etc/ssl/certs/ca-certificates.crt).
May 15 10:43:59.017076 systemd[1]: modprobe@dm_mod.service: Deactivated successfully.
May 15 10:43:59.017229 systemd[1]: Finished modprobe@dm_mod.service.
May 15 10:43:59.022000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@dm_mod comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
May 15 10:43:59.022000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@dm_mod comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
May 15 10:43:59.023082 systemd[1]: modprobe@drm.service: Deactivated successfully.
May 15 10:43:59.023224 systemd[1]: Finished modprobe@drm.service.
May 15 10:43:59.024070 augenrules[1269]: No rules
May 15 10:43:59.023000 audit: CONFIG_CHANGE auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 op=add_rule key=(null) list=5 res=1
May 15 10:43:59.023000 audit[1269]: SYSCALL arch=c000003e syscall=44 success=yes exit=1056 a0=3 a1=7ffcdb75dc00 a2=420 a3=0 items=0 ppid=1220 pid=1269 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="auditctl" exe="/usr/sbin/auditctl" subj=system_u:system_r:kernel_t:s0 key=(null)
May 15 10:43:59.023000 audit: PROCTITLE proctitle=2F7362696E2F617564697463746C002D52002F6574632F61756469742F61756469742E72756C6573
May 15 10:43:59.024450 systemd[1]: modprobe@efi_pstore.service: Deactivated successfully.
May 15 10:43:59.024583 systemd[1]: Finished modprobe@efi_pstore.service.
May 15 10:43:59.025943 systemd[1]: Finished audit-rules.service.
May 15 10:43:59.027227 systemd[1]: modprobe@loop.service: Deactivated successfully.
May 15 10:43:59.027367 systemd[1]: Finished modprobe@loop.service.
May 15 10:43:59.028670 systemd[1]: systemd-pstore.service was skipped because of an unmet condition check (ConditionDirectoryNotEmpty=/sys/fs/pstore).
May 15 10:43:59.028759 systemd[1]: systemd-repart.service was skipped because no trigger condition checks were met.
May 15 10:43:59.030538 systemd[1]: Finished ensure-sysext.service.
May 15 10:43:59.038377 systemd-resolved[1224]: Positive Trust Anchors:
May 15 10:43:59.038399 systemd-resolved[1224]: . IN DS 20326 8 2 e06d44b80b8f1d39a95c0b0d7c65d08458e880409bbc683457104237c7f8ec8d
May 15 10:43:59.038436 systemd-resolved[1224]: Negative trust anchors: home.arpa 10.in-addr.arpa 16.172.in-addr.arpa 17.172.in-addr.arpa 18.172.in-addr.arpa 19.172.in-addr.arpa 20.172.in-addr.arpa 21.172.in-addr.arpa 22.172.in-addr.arpa 23.172.in-addr.arpa 24.172.in-addr.arpa 25.172.in-addr.arpa 26.172.in-addr.arpa 27.172.in-addr.arpa 28.172.in-addr.arpa 29.172.in-addr.arpa 30.172.in-addr.arpa 31.172.in-addr.arpa 168.192.in-addr.arpa d.f.ip6.arpa corp home internal intranet lan local private test
May 15 10:43:59.043828 systemd[1]: Started systemd-timesyncd.service.
May 15 10:43:59.045128 systemd-timesyncd[1225]: Contacted time server 10.0.0.1:123 (10.0.0.1).
May 15 10:43:59.045171 systemd[1]: Reached target time-set.target.
May 15 10:43:59.045195 systemd-resolved[1224]: Defaulting to hostname 'linux'.
May 15 10:43:59.045456 systemd-timesyncd[1225]: Initial clock synchronization to Thu 2025-05-15 10:43:58.991267 UTC.
May 15 10:43:59.046678 systemd[1]: Started systemd-resolved.service.
May 15 10:43:59.047589 systemd[1]: Reached target network.target.
May 15 10:43:59.048536 systemd[1]: Reached target nss-lookup.target.
May 15 10:43:59.049549 systemd[1]: Reached target sysinit.target.
May 15 10:43:59.050710 systemd[1]: Started motdgen.path.
May 15 10:43:59.051590 systemd[1]: Started user-cloudinit@var-lib-flatcar\x2dinstall-user_data.path.
May 15 10:43:59.052896 systemd[1]: Started logrotate.timer.
May 15 10:43:59.053735 systemd[1]: Started mdadm.timer.
May 15 10:43:59.054458 systemd[1]: Started systemd-tmpfiles-clean.timer.
May 15 10:43:59.055359 systemd[1]: update-engine-stub.timer was skipped because of an unmet condition check (ConditionPathExists=/usr/.noupdate).
May 15 10:43:59.055382 systemd[1]: Reached target paths.target.
May 15 10:43:59.056163 systemd[1]: Reached target timers.target.
May 15 10:43:59.057226 systemd[1]: Listening on dbus.socket.
May 15 10:43:59.059126 systemd[1]: Starting docker.socket...
May 15 10:43:59.060724 systemd[1]: Listening on sshd.socket.
May 15 10:43:59.061562 systemd[1]: systemd-pcrphase-sysinit.service was skipped because of an unmet condition check (ConditionPathExists=/sys/firmware/efi/efivars/StubPcrKernelImage-4a67b082-0a4c-41cf-b6c7-440b29bb8c4f).
May 15 10:43:59.061817 systemd[1]: Listening on docker.socket.
May 15 10:43:59.062649 systemd[1]: Reached target sockets.target.
May 15 10:43:59.063446 systemd[1]: Reached target basic.target.
May 15 10:43:59.064339 systemd[1]: System is tainted: cgroupsv1
May 15 10:43:59.064378 systemd[1]: addon-config@usr-share-oem.service was skipped because no trigger condition checks were met.
May 15 10:43:59.064397 systemd[1]: addon-run@usr-share-oem.service was skipped because no trigger condition checks were met.
May 15 10:43:59.065359 systemd[1]: Starting containerd.service...
May 15 10:43:59.067154 systemd[1]: Starting dbus.service...
May 15 10:43:59.068793 systemd[1]: Starting enable-oem-cloudinit.service...
May 15 10:43:59.070681 systemd[1]: Starting extend-filesystems.service...
May 15 10:43:59.071594 systemd[1]: flatcar-setup-environment.service was skipped because of an unmet condition check (ConditionPathExists=/usr/share/oem/bin/flatcar-setup-environment).
May 15 10:43:59.072670 systemd[1]: Starting motdgen.service...
May 15 10:43:59.074343 systemd[1]: Starting prepare-helm.service...
May 15 10:43:59.075960 systemd[1]: Starting ssh-key-proc-cmdline.service...
May 15 10:43:59.077381 jq[1284]: false
May 15 10:43:59.077885 systemd[1]: Starting sshd-keygen.service...
May 15 10:43:59.080658 systemd[1]: Starting systemd-logind.service...
May 15 10:43:59.081477 systemd[1]: systemd-pcrphase.service was skipped because of an unmet condition check (ConditionPathExists=/sys/firmware/efi/efivars/StubPcrKernelImage-4a67b082-0a4c-41cf-b6c7-440b29bb8c4f).
May 15 10:43:59.081545 systemd[1]: tcsd.service was skipped because of an unmet condition check (ConditionPathExists=/dev/tpm0).
May 15 10:43:59.084314 systemd[1]: Starting update-engine.service...
May 15 10:43:59.087363 systemd[1]: Starting update-ssh-keys-after-ignition.service...
May 15 10:43:59.090973 systemd[1]: enable-oem-cloudinit.service: Skipped due to 'exec-condition'.
May 15 10:43:59.091296 systemd[1]: Condition check resulted in enable-oem-cloudinit.service being skipped.
May 15 10:43:59.092809 systemd[1]: ssh-key-proc-cmdline.service: Deactivated successfully.
May 15 10:43:59.093290 systemd[1]: Finished ssh-key-proc-cmdline.service.
May 15 10:43:59.093864 jq[1300]: true
May 15 10:43:59.095644 extend-filesystems[1285]: Found loop1
May 15 10:43:59.096663 extend-filesystems[1285]: Found sr0
May 15 10:43:59.096663 extend-filesystems[1285]: Found vda
May 15 10:43:59.096663 extend-filesystems[1285]: Found vda1
May 15 10:43:59.096663 extend-filesystems[1285]: Found vda2
May 15 10:43:59.096663 extend-filesystems[1285]: Found vda3
May 15 10:43:59.096663 extend-filesystems[1285]: Found usr
May 15 10:43:59.096663 extend-filesystems[1285]: Found vda4
May 15 10:43:59.096663 extend-filesystems[1285]: Found vda6
May 15 10:43:59.096663 extend-filesystems[1285]: Found vda7
May 15 10:43:59.096663 extend-filesystems[1285]: Found vda9
May 15 10:43:59.096663 extend-filesystems[1285]: Checking size of /dev/vda9
May 15 10:43:59.112539 dbus-daemon[1282]: [system] SELinux support is enabled
May 15 10:43:59.110541 systemd[1]: motdgen.service: Deactivated successfully.
May 15 10:43:59.110778 systemd[1]: Finished motdgen.service.
May 15 10:43:59.112705 systemd[1]: Started dbus.service.
May 15 10:43:59.115443 systemd[1]: system-cloudinit@usr-share-oem-cloud\x2dconfig.yml.service was skipped because of an unmet condition check (ConditionFileNotEmpty=/usr/share/oem/cloud-config.yml).
May 15 10:43:59.115471 systemd[1]: Reached target system-config.target.
May 15 10:43:59.116516 systemd[1]: user-cloudinit-proc-cmdline.service was skipped because of an unmet condition check (ConditionKernelCommandLine=cloud-config-url).
May 15 10:43:59.116537 systemd[1]: Reached target user-config.target.
May 15 10:43:59.122034 jq[1311]: true
May 15 10:43:59.127986 extend-filesystems[1285]: Resized partition /dev/vda9
May 15 10:43:59.131331 tar[1308]: linux-amd64/helm
May 15 10:43:59.145431 extend-filesystems[1327]: resize2fs 1.46.5 (30-Dec-2021)
May 15 10:43:59.168886 kernel: EXT4-fs (vda9): resizing filesystem from 553472 to 1864699 blocks
May 15 10:43:59.183263 update_engine[1298]: I0515 10:43:59.182741 1298 main.cc:92] Flatcar Update Engine starting
May 15 10:43:59.190567 update_engine[1298]: I0515 10:43:59.188807 1298 update_check_scheduler.cc:74] Next update check in 2m52s
May 15 10:43:59.188986 systemd[1]: Started update-engine.service.
May 15 10:43:59.190215 systemd-logind[1294]: Watching system buttons on /dev/input/event1 (Power Button)
May 15 10:43:59.190232 systemd-logind[1294]: Watching system buttons on /dev/input/event0 (AT Translated Set 2 keyboard)
May 15 10:43:59.191485 systemd[1]: Started locksmithd.service.
May 15 10:43:59.195192 systemd-logind[1294]: New seat seat0.
May 15 10:43:59.203733 systemd[1]: Started systemd-logind.service.
May 15 10:43:59.208285 env[1314]: time="2025-05-15T10:43:59.207913318Z" level=info msg="starting containerd" revision=92b3a9d6f1b3bcc6dc74875cfdea653fe39f09c2 version=1.6.16
May 15 10:43:59.221665 kernel: EXT4-fs (vda9): resized filesystem to 1864699
May 15 10:43:59.252556 env[1314]: time="2025-05-15T10:43:59.229356019Z" level=info msg="loading plugin \"io.containerd.content.v1.content\"..." type=io.containerd.content.v1
May 15 10:43:59.252556 env[1314]: time="2025-05-15T10:43:59.252215386Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.aufs\"..." type=io.containerd.snapshotter.v1
May 15 10:43:59.263944 extend-filesystems[1327]: Filesystem at /dev/vda9 is mounted on /; on-line resizing required
May 15 10:43:59.263944 extend-filesystems[1327]: old_desc_blocks = 1, new_desc_blocks = 1
May 15 10:43:59.263944 extend-filesystems[1327]: The filesystem on /dev/vda9 is now 1864699 (4k) blocks long.
May 15 10:43:59.269452 bash[1341]: Updated "/home/core/.ssh/authorized_keys"
May 15 10:43:59.255449 systemd[1]: extend-filesystems.service: Deactivated successfully.
May 15 10:43:59.269665 env[1314]: time="2025-05-15T10:43:59.264546426Z" level=info msg="skip loading plugin \"io.containerd.snapshotter.v1.aufs\"..." error="aufs is not supported (modprobe aufs failed: exit status 1 \"modprobe: FATAL: Module aufs not found in directory /lib/modules/5.15.182-flatcar\\n\"): skip plugin" type=io.containerd.snapshotter.v1
May 15 10:43:59.269665 env[1314]: time="2025-05-15T10:43:59.266271171Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.btrfs\"..." type=io.containerd.snapshotter.v1
May 15 10:43:59.269665 env[1314]: time="2025-05-15T10:43:59.267423913Z" level=info msg="skip loading plugin \"io.containerd.snapshotter.v1.btrfs\"..." error="path /var/lib/containerd/io.containerd.snapshotter.v1.btrfs (ext4) must be a btrfs filesystem to be used with the btrfs snapshotter: skip plugin" type=io.containerd.snapshotter.v1
May 15 10:43:59.269665 env[1314]: time="2025-05-15T10:43:59.267450353Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.devmapper\"..." type=io.containerd.snapshotter.v1
May 15 10:43:59.269665 env[1314]: time="2025-05-15T10:43:59.267469088Z" level=warning msg="failed to load plugin io.containerd.snapshotter.v1.devmapper" error="devmapper not configured"
May 15 10:43:59.269665 env[1314]: time="2025-05-15T10:43:59.267481010Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.native\"..." type=io.containerd.snapshotter.v1
May 15 10:43:59.269665 env[1314]: time="2025-05-15T10:43:59.267637414Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.overlayfs\"..." type=io.containerd.snapshotter.v1
May 15 10:43:59.269945 extend-filesystems[1285]: Resized filesystem in /dev/vda9
May 15 10:43:59.255714 systemd[1]: Finished extend-filesystems.service.
May 15 10:43:59.271455 env[1314]: time="2025-05-15T10:43:59.269921498Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.zfs\"..." type=io.containerd.snapshotter.v1
May 15 10:43:59.271455 env[1314]: time="2025-05-15T10:43:59.271031821Z" level=info msg="skip loading plugin \"io.containerd.snapshotter.v1.zfs\"..." error="path /var/lib/containerd/io.containerd.snapshotter.v1.zfs must be a zfs filesystem to be used with the zfs snapshotter: skip plugin" type=io.containerd.snapshotter.v1
May 15 10:43:59.271455 env[1314]: time="2025-05-15T10:43:59.271051758Z" level=info msg="loading plugin \"io.containerd.metadata.v1.bolt\"..." type=io.containerd.metadata.v1
May 15 10:43:59.271455 env[1314]: time="2025-05-15T10:43:59.271105098Z" level=warning msg="could not use snapshotter devmapper in metadata plugin" error="devmapper not configured"
May 15 10:43:59.271455 env[1314]: time="2025-05-15T10:43:59.271116119Z" level=info msg="metadata content store policy set" policy=shared
May 15 10:43:59.259264 systemd[1]: Finished update-ssh-keys-after-ignition.service.
May 15 10:43:59.308394 locksmithd[1342]: locksmithd starting currentOperation="UPDATE_STATUS_IDLE" strategy="reboot"
May 15 10:43:59.341709 systemd[1]: proc-xen.mount was skipped because of an unmet condition check (ConditionVirtualization=xen).
May 15 10:43:59.341771 systemd[1]: xenserver-pv-version.service was skipped because of an unmet condition check (ConditionVirtualization=xen).
May 15 10:43:59.450041 env[1314]: time="2025-05-15T10:43:59.449902711Z" level=info msg="loading plugin \"io.containerd.differ.v1.walking\"..." type=io.containerd.differ.v1
May 15 10:43:59.450041 env[1314]: time="2025-05-15T10:43:59.449966401Z" level=info msg="loading plugin \"io.containerd.event.v1.exchange\"..." type=io.containerd.event.v1
May 15 10:43:59.450041 env[1314]: time="2025-05-15T10:43:59.449980888Z" level=info msg="loading plugin \"io.containerd.gc.v1.scheduler\"..." type=io.containerd.gc.v1
May 15 10:43:59.450208 env[1314]: time="2025-05-15T10:43:59.450054907Z" level=info msg="loading plugin \"io.containerd.service.v1.introspection-service\"..." type=io.containerd.service.v1
May 15 10:43:59.450208 env[1314]: time="2025-05-15T10:43:59.450073762Z" level=info msg="loading plugin \"io.containerd.service.v1.containers-service\"..." type=io.containerd.service.v1
May 15 10:43:59.450208 env[1314]: time="2025-05-15T10:43:59.450086246Z" level=info msg="loading plugin \"io.containerd.service.v1.content-service\"..." type=io.containerd.service.v1
May 15 10:43:59.450208 env[1314]: time="2025-05-15T10:43:59.450098358Z" level=info msg="loading plugin \"io.containerd.service.v1.diff-service\"..." type=io.containerd.service.v1
May 15 10:43:59.450208 env[1314]: time="2025-05-15T10:43:59.450117134Z" level=info msg="loading plugin \"io.containerd.service.v1.images-service\"..." type=io.containerd.service.v1
May 15 10:43:59.450208 env[1314]: time="2025-05-15T10:43:59.450131270Z" level=info msg="loading plugin \"io.containerd.service.v1.leases-service\"..." type=io.containerd.service.v1
May 15 10:43:59.450208 env[1314]: time="2025-05-15T10:43:59.450143703Z" level=info msg="loading plugin \"io.containerd.service.v1.namespaces-service\"..." type=io.containerd.service.v1
May 15 10:43:59.450208 env[1314]: time="2025-05-15T10:43:59.450158211Z" level=info msg="loading plugin \"io.containerd.service.v1.snapshots-service\"..." type=io.containerd.service.v1
May 15 10:43:59.450208 env[1314]: time="2025-05-15T10:43:59.450169742Z" level=info msg="loading plugin \"io.containerd.runtime.v1.linux\"..." type=io.containerd.runtime.v1
May 15 10:43:59.450424 env[1314]: time="2025-05-15T10:43:59.450306849Z" level=info msg="loading plugin \"io.containerd.runtime.v2.task\"..." type=io.containerd.runtime.v2
May 15 10:43:59.450424 env[1314]: time="2025-05-15T10:43:59.450390506Z" level=info msg="loading plugin \"io.containerd.monitor.v1.cgroups\"..." type=io.containerd.monitor.v1
May 15 10:43:59.450996 env[1314]: time="2025-05-15T10:43:59.450978830Z" level=info msg="loading plugin \"io.containerd.service.v1.tasks-service\"..." type=io.containerd.service.v1
May 15 10:43:59.451059 env[1314]: time="2025-05-15T10:43:59.451022111Z" level=info msg="loading plugin \"io.containerd.grpc.v1.introspection\"..." type=io.containerd.grpc.v1
May 15 10:43:59.451059 env[1314]: time="2025-05-15T10:43:59.451035216Z" level=info msg="loading plugin \"io.containerd.internal.v1.restart\"..." type=io.containerd.internal.v1
May 15 10:43:59.451099 env[1314]: time="2025-05-15T10:43:59.451093445Z" level=info msg="loading plugin \"io.containerd.grpc.v1.containers\"..." type=io.containerd.grpc.v1
May 15 10:43:59.451127 env[1314]: time="2025-05-15T10:43:59.451105227Z" level=info msg="loading plugin \"io.containerd.grpc.v1.content\"..." type=io.containerd.grpc.v1
May 15 10:43:59.451127 env[1314]: time="2025-05-15T10:43:59.451116358Z" level=info msg="loading plugin \"io.containerd.grpc.v1.diff\"..." type=io.containerd.grpc.v1
May 15 10:43:59.451165 env[1314]: time="2025-05-15T10:43:59.451126246Z" level=info msg="loading plugin \"io.containerd.grpc.v1.events\"..." type=io.containerd.grpc.v1
May 15 10:43:59.451165 env[1314]: time="2025-05-15T10:43:59.451136415Z" level=info msg="loading plugin \"io.containerd.grpc.v1.healthcheck\"..." type=io.containerd.grpc.v1
May 15 10:43:59.451165 env[1314]: time="2025-05-15T10:43:59.451148769Z" level=info msg="loading plugin \"io.containerd.grpc.v1.images\"..." type=io.containerd.grpc.v1
May 15 10:43:59.451165 env[1314]: time="2025-05-15T10:43:59.451159990Z" level=info msg="loading plugin \"io.containerd.grpc.v1.leases\"..." type=io.containerd.grpc.v1
May 15 10:43:59.451239 env[1314]: time="2025-05-15T10:43:59.451169357Z" level=info msg="loading plugin \"io.containerd.grpc.v1.namespaces\"..." type=io.containerd.grpc.v1
May 15 10:43:59.451239 env[1314]: time="2025-05-15T10:43:59.451180749Z" level=info msg="loading plugin \"io.containerd.internal.v1.opt\"..." type=io.containerd.internal.v1
May 15 10:43:59.451324 env[1314]: time="2025-05-15T10:43:59.451307096Z" level=info msg="loading plugin \"io.containerd.grpc.v1.snapshots\"..." type=io.containerd.grpc.v1
May 15 10:43:59.451324 env[1314]: time="2025-05-15T10:43:59.451324188Z" level=info msg="loading plugin \"io.containerd.grpc.v1.tasks\"..." type=io.containerd.grpc.v1
May 15 10:43:59.451405 env[1314]: time="2025-05-15T10:43:59.451334848Z" level=info msg="loading plugin \"io.containerd.grpc.v1.version\"..." type=io.containerd.grpc.v1
May 15 10:43:59.451405 env[1314]: time="2025-05-15T10:43:59.451344105Z" level=info msg="loading plugin \"io.containerd.tracing.processor.v1.otlp\"..." type=io.containerd.tracing.processor.v1
May 15 10:43:59.451405 env[1314]: time="2025-05-15T10:43:59.451359845Z" level=info msg="skip loading plugin \"io.containerd.tracing.processor.v1.otlp\"..." error="no OpenTelemetry endpoint: skip plugin" type=io.containerd.tracing.processor.v1
May 15 10:43:59.451405 env[1314]: time="2025-05-15T10:43:59.451369543Z" level=info msg="loading plugin \"io.containerd.internal.v1.tracing\"..." type=io.containerd.internal.v1
May 15 10:43:59.451405 env[1314]: time="2025-05-15T10:43:59.451389600Z" level=error msg="failed to initialize a tracing processor \"otlp\"" error="no OpenTelemetry endpoint: skip plugin"
May 15 10:43:59.451497 env[1314]: time="2025-05-15T10:43:59.451424776Z" level=info msg="loading plugin \"io.containerd.grpc.v1.cri\"..." type=io.containerd.grpc.v1
May 15 10:43:59.451742 env[1314]: time="2025-05-15T10:43:59.451633247Z" level=info msg="Start cri plugin with config {PluginConfig:{ContainerdConfig:{Snapshotter:overlayfs DefaultRuntimeName:runc DefaultRuntime:{Type: Path: Engine: PodAnnotations:[] ContainerAnnotations:[] Root: Options:map[] PrivilegedWithoutHostDevices:false BaseRuntimeSpec: NetworkPluginConfDir: NetworkPluginMaxConfNum:0} UntrustedWorkloadRuntime:{Type: Path: Engine: PodAnnotations:[] ContainerAnnotations:[] Root: Options:map[] PrivilegedWithoutHostDevices:false BaseRuntimeSpec: NetworkPluginConfDir: NetworkPluginMaxConfNum:0} Runtimes:map[runc:{Type:io.containerd.runc.v2 Path: Engine: PodAnnotations:[] ContainerAnnotations:[] Root: Options:map[SystemdCgroup:false] PrivilegedWithoutHostDevices:false BaseRuntimeSpec: NetworkPluginConfDir: NetworkPluginMaxConfNum:0}] NoPivot:false DisableSnapshotAnnotations:true DiscardUnpackedLayers:false IgnoreRdtNotEnabledErrors:false} CniConfig:{NetworkPluginBinDir:/opt/cni/bin NetworkPluginConfDir:/etc/cni/net.d NetworkPluginMaxConfNum:1 NetworkPluginConfTemplate: IPPreference:} Registry:{ConfigPath: Mirrors:map[] Configs:map[] Auths:map[] Headers:map[]} ImageDecryption:{KeyModel:node} DisableTCPService:true StreamServerAddress:127.0.0.1 StreamServerPort:0 StreamIdleTimeout:4h0m0s EnableSelinux:false SelinuxCategoryRange:1024 SandboxImage:registry.k8s.io/pause:3.6 StatsCollectPeriod:10 SystemdCgroup:false EnableTLSStreaming:false X509KeyPairStreaming:{TLSCertFile: TLSKeyFile:} MaxContainerLogLineSize:16384 DisableCgroup:false DisableApparmor:false RestrictOOMScoreAdj:false MaxConcurrentDownloads:3 DisableProcMount:false UnsetSeccompProfile: TolerateMissingHugetlbController:true DisableHugetlbController:true DeviceOwnershipFromSecurityContext:false IgnoreImageDefinedVolumes:false NetNSMountsUnderStateDir:false EnableUnprivilegedPorts:false EnableUnprivilegedICMP:false} ContainerdRootDir:/var/lib/containerd ContainerdEndpoint:/run/containerd/containerd.sock RootDir:/var/lib/containerd/io.containerd.grpc.v1.cri StateDir:/run/containerd/io.containerd.grpc.v1.cri}"
May 15 10:43:59.451742 env[1314]: time="2025-05-15T10:43:59.451690425Z" level=info msg="Connect containerd service"
May 15 10:43:59.451742 env[1314]: time="2025-05-15T10:43:59.451727584Z" level=info msg="Get image filesystem path \"/var/lib/containerd/io.containerd.snapshotter.v1.overlayfs\""
May 15 10:43:59.453588 env[1314]: time="2025-05-15T10:43:59.452385669Z" level=error msg="failed to load cni during init, please check CRI plugin status before setting up network for pods" error="cni config load failed: no network config found in /etc/cni/net.d: cni plugin not initialized: failed to load cni config"
May 15 10:43:59.453588 env[1314]: time="2025-05-15T10:43:59.452579172Z" level=info msg="Start subscribing containerd event"
May 15 10:43:59.453588 env[1314]: time="2025-05-15T10:43:59.452640186Z" level=info msg="Start recovering state"
May 15 10:43:59.453588 env[1314]: time="2025-05-15T10:43:59.452697073Z" level=info msg="Start event monitor"
May 15 10:43:59.453588 env[1314]: time="2025-05-15T10:43:59.452712442Z" level=info msg="Start snapshots syncer"
May 15 10:43:59.453588 env[1314]: time="2025-05-15T10:43:59.452721629Z" level=info msg="Start cni network conf syncer for default"
May 15 10:43:59.453588 env[1314]: time="2025-05-15T10:43:59.452728071Z" level=info msg="Start streaming server"
May 15 10:43:59.453588 env[1314]: time="2025-05-15T10:43:59.452976948Z" level=info msg=serving... address=/run/containerd/containerd.sock.ttrpc
May 15 10:43:59.453588 env[1314]: time="2025-05-15T10:43:59.453012424Z" level=info msg=serving... address=/run/containerd/containerd.sock
May 15 10:43:59.453182 systemd[1]: Started containerd.service.
May 15 10:43:59.453901 env[1314]: time="2025-05-15T10:43:59.453880322Z" level=info msg="containerd successfully booted in 0.249769s"
May 15 10:43:59.616180 tar[1308]: linux-amd64/LICENSE
May 15 10:43:59.616333 tar[1308]: linux-amd64/README.md
May 15 10:43:59.620545 systemd[1]: Finished prepare-helm.service.
May 15 10:43:59.808719 sshd_keygen[1315]: ssh-keygen: generating new host keys: RSA ECDSA ED25519
May 15 10:43:59.819759 systemd-networkd[1080]: eth0: Gained IPv6LL
May 15 10:43:59.821719 systemd[1]: Finished systemd-networkd-wait-online.service.
May 15 10:43:59.823077 systemd[1]: Reached target network-online.target.
May 15 10:43:59.825560 systemd[1]: Starting kubelet.service...
May 15 10:43:59.831369 systemd[1]: Finished sshd-keygen.service.
May 15 10:43:59.834002 systemd[1]: Starting issuegen.service...
May 15 10:43:59.840320 systemd[1]: issuegen.service: Deactivated successfully.
May 15 10:43:59.840552 systemd[1]: Finished issuegen.service.
May 15 10:43:59.842840 systemd[1]: Starting systemd-user-sessions.service...
May 15 10:43:59.850308 systemd[1]: Finished systemd-user-sessions.service.
May 15 10:43:59.869261 systemd[1]: Started getty@tty1.service.
May 15 10:43:59.871422 systemd[1]: Started serial-getty@ttyS0.service.
May 15 10:43:59.872522 systemd[1]: Reached target getty.target.
May 15 10:44:00.316196 systemd[1]: Created slice system-sshd.slice.
May 15 10:44:00.318697 systemd[1]: Started sshd@0-10.0.0.96:22-10.0.0.1:36310.service.
May 15 10:44:00.354913 sshd[1379]: Accepted publickey for core from 10.0.0.1 port 36310 ssh2: RSA SHA256:haioSl9UPoE92ibERJujrg0rXVEisgSt061naG/EAtE
May 15 10:44:00.383215 sshd[1379]: pam_unix(sshd:session): session opened for user core(uid=500) by (uid=0)
May 15 10:44:00.393246 systemd-logind[1294]: New session 1 of user core.
May 15 10:44:00.394104 systemd[1]: Created slice user-500.slice.
May 15 10:44:00.396559 systemd[1]: Starting user-runtime-dir@500.service...
May 15 10:44:00.404841 systemd[1]: Finished user-runtime-dir@500.service.
May 15 10:44:00.408839 systemd[1]: Starting user@500.service...
May 15 10:44:00.412476 (systemd)[1384]: pam_unix(systemd-user:session): session opened for user core(uid=500) by (uid=0)
May 15 10:44:00.510110 systemd[1384]: Queued start job for default target default.target.
May 15 10:44:00.510313 systemd[1384]: Reached target paths.target.
May 15 10:44:00.510327 systemd[1384]: Reached target sockets.target.
May 15 10:44:00.510338 systemd[1384]: Reached target timers.target.
May 15 10:44:00.510348 systemd[1384]: Reached target basic.target.
May 15 10:44:00.510515 systemd[1]: Started user@500.service.
May 15 10:44:00.511702 systemd[1384]: Reached target default.target.
May 15 10:44:00.511808 systemd[1384]: Startup finished in 92ms.
May 15 10:44:00.525303 systemd[1]: Started session-1.scope.
May 15 10:44:00.578582 systemd[1]: Started sshd@1-10.0.0.96:22-10.0.0.1:36322.service.
May 15 10:44:00.639727 sshd[1393]: Accepted publickey for core from 10.0.0.1 port 36322 ssh2: RSA SHA256:haioSl9UPoE92ibERJujrg0rXVEisgSt061naG/EAtE
May 15 10:44:00.641254 sshd[1393]: pam_unix(sshd:session): session opened for user core(uid=500) by (uid=0)
May 15 10:44:00.645075 systemd-logind[1294]: New session 2 of user core.
May 15 10:44:00.645826 systemd[1]: Started session-2.scope.
May 15 10:44:00.715331 sshd[1393]: pam_unix(sshd:session): session closed for user core
May 15 10:44:00.717905 systemd[1]: Started sshd@2-10.0.0.96:22-10.0.0.1:36328.service.
May 15 10:44:00.719449 systemd[1]: sshd@1-10.0.0.96:22-10.0.0.1:36322.service: Deactivated successfully.
May 15 10:44:00.720526 systemd[1]: session-2.scope: Deactivated successfully.
May 15 10:44:00.721141 systemd-logind[1294]: Session 2 logged out. Waiting for processes to exit.
May 15 10:44:00.722095 systemd-logind[1294]: Removed session 2.
May 15 10:44:00.749682 sshd[1398]: Accepted publickey for core from 10.0.0.1 port 36328 ssh2: RSA SHA256:haioSl9UPoE92ibERJujrg0rXVEisgSt061naG/EAtE
May 15 10:44:00.750948 sshd[1398]: pam_unix(sshd:session): session opened for user core(uid=500) by (uid=0)
May 15 10:44:00.754232 systemd-logind[1294]: New session 3 of user core.
May 15 10:44:00.754980 systemd[1]: Started session-3.scope.
May 15 10:44:00.831272 sshd[1398]: pam_unix(sshd:session): session closed for user core
May 15 10:44:00.833902 systemd[1]: sshd@2-10.0.0.96:22-10.0.0.1:36328.service: Deactivated successfully.
May 15 10:44:00.834963 systemd-logind[1294]: Session 3 logged out. Waiting for processes to exit.
May 15 10:44:00.834968 systemd[1]: session-3.scope: Deactivated successfully.
May 15 10:44:00.835907 systemd-logind[1294]: Removed session 3.
May 15 10:44:00.947985 systemd[1]: Started kubelet.service.
May 15 10:44:00.949599 systemd[1]: Reached target multi-user.target.
May 15 10:44:00.952158 systemd[1]: Starting systemd-update-utmp-runlevel.service...
May 15 10:44:00.960272 systemd[1]: systemd-update-utmp-runlevel.service: Deactivated successfully.
May 15 10:44:00.960671 systemd[1]: Finished systemd-update-utmp-runlevel.service.
May 15 10:44:00.963669 systemd[1]: Startup finished in 6.207s (kernel) + 7.754s (userspace) = 13.962s.
May 15 10:44:01.712918 kubelet[1412]: E0515 10:44:01.712822 1412 run.go:74] "command failed" err="failed to load kubelet config file, path: /var/lib/kubelet/config.yaml, error: failed to load Kubelet config file /var/lib/kubelet/config.yaml, error failed to read kubelet config file \"/var/lib/kubelet/config.yaml\", error: open /var/lib/kubelet/config.yaml: no such file or directory"
May 15 10:44:01.714703 systemd[1]: kubelet.service: Main process exited, code=exited, status=1/FAILURE
May 15 10:44:01.714854 systemd[1]: kubelet.service: Failed with result 'exit-code'.
May 15 10:44:10.798456 systemd[1]: Started sshd@3-10.0.0.96:22-10.0.0.1:43424.service.
May 15 10:44:10.830765 sshd[1423]: Accepted publickey for core from 10.0.0.1 port 43424 ssh2: RSA SHA256:haioSl9UPoE92ibERJujrg0rXVEisgSt061naG/EAtE
May 15 10:44:10.831939 sshd[1423]: pam_unix(sshd:session): session opened for user core(uid=500) by (uid=0)
May 15 10:44:10.835655 systemd-logind[1294]: New session 4 of user core.
May 15 10:44:10.836399 systemd[1]: Started session-4.scope.
May 15 10:44:10.890371 sshd[1423]: pam_unix(sshd:session): session closed for user core
May 15 10:44:10.892953 systemd[1]: Started sshd@4-10.0.0.96:22-10.0.0.1:43434.service.
May 15 10:44:10.893489 systemd[1]: sshd@3-10.0.0.96:22-10.0.0.1:43424.service: Deactivated successfully.
May 15 10:44:10.894668 systemd[1]: session-4.scope: Deactivated successfully.
May 15 10:44:10.895094 systemd-logind[1294]: Session 4 logged out. Waiting for processes to exit.
May 15 10:44:10.896249 systemd-logind[1294]: Removed session 4.
May 15 10:44:10.925785 sshd[1428]: Accepted publickey for core from 10.0.0.1 port 43434 ssh2: RSA SHA256:haioSl9UPoE92ibERJujrg0rXVEisgSt061naG/EAtE
May 15 10:44:10.926869 sshd[1428]: pam_unix(sshd:session): session opened for user core(uid=500) by (uid=0)
May 15 10:44:10.930225 systemd-logind[1294]: New session 5 of user core.
May 15 10:44:10.930929 systemd[1]: Started session-5.scope.
May 15 10:44:10.981095 sshd[1428]: pam_unix(sshd:session): session closed for user core
May 15 10:44:10.983587 systemd[1]: Started sshd@5-10.0.0.96:22-10.0.0.1:43446.service.
May 15 10:44:10.984049 systemd[1]: sshd@4-10.0.0.96:22-10.0.0.1:43434.service: Deactivated successfully.
May 15 10:44:10.985155 systemd[1]: session-5.scope: Deactivated successfully.
May 15 10:44:10.985552 systemd-logind[1294]: Session 5 logged out. Waiting for processes to exit.
May 15 10:44:10.986356 systemd-logind[1294]: Removed session 5.
May 15 10:44:11.015342 sshd[1436]: Accepted publickey for core from 10.0.0.1 port 43446 ssh2: RSA SHA256:haioSl9UPoE92ibERJujrg0rXVEisgSt061naG/EAtE
May 15 10:44:11.016496 sshd[1436]: pam_unix(sshd:session): session opened for user core(uid=500) by (uid=0)
May 15 10:44:11.019970 systemd-logind[1294]: New session 6 of user core.
May 15 10:44:11.020704 systemd[1]: Started session-6.scope.
May 15 10:44:11.076145 sshd[1436]: pam_unix(sshd:session): session closed for user core
May 15 10:44:11.078958 systemd[1]: Started sshd@6-10.0.0.96:22-10.0.0.1:43458.service.
May 15 10:44:11.079664 systemd[1]: sshd@5-10.0.0.96:22-10.0.0.1:43446.service: Deactivated successfully.
May 15 10:44:11.080596 systemd-logind[1294]: Session 6 logged out. Waiting for processes to exit.
May 15 10:44:11.080674 systemd[1]: session-6.scope: Deactivated successfully.
May 15 10:44:11.081810 systemd-logind[1294]: Removed session 6.
May 15 10:44:11.111436 sshd[1442]: Accepted publickey for core from 10.0.0.1 port 43458 ssh2: RSA SHA256:haioSl9UPoE92ibERJujrg0rXVEisgSt061naG/EAtE
May 15 10:44:11.112537 sshd[1442]: pam_unix(sshd:session): session opened for user core(uid=500) by (uid=0)
May 15 10:44:11.115646 systemd-logind[1294]: New session 7 of user core.
May 15 10:44:11.116531 systemd[1]: Started session-7.scope.
May 15 10:44:11.171325 sudo[1448]: core : PWD=/home/core ; USER=root ; COMMAND=/home/core/install.sh
May 15 10:44:11.171526 sudo[1448]: pam_unix(sudo:session): session opened for user root(uid=0) by (uid=500)
May 15 10:44:11.194022 systemd[1]: Starting docker.service...
May 15 10:44:11.248026 env[1460]: time="2025-05-15T10:44:11.247945732Z" level=info msg="Starting up"
May 15 10:44:11.249449 env[1460]: time="2025-05-15T10:44:11.249402561Z" level=info msg="parsed scheme: \"unix\"" module=grpc
May 15 10:44:11.249449 env[1460]: time="2025-05-15T10:44:11.249433301Z" level=info msg="scheme \"unix\" not registered, fallback to default scheme" module=grpc
May 15 10:44:11.249546 env[1460]: time="2025-05-15T10:44:11.249453888Z" level=info msg="ccResolverWrapper: sending update to cc: {[{unix:///var/run/docker/libcontainerd/docker-containerd.sock 0 }] }" module=grpc
May 15 10:44:11.249546 env[1460]: time="2025-05-15T10:44:11.249464311Z" level=info msg="ClientConn switching balancer to \"pick_first\"" module=grpc
May 15 10:44:11.251816 env[1460]: time="2025-05-15T10:44:11.251781842Z" level=info msg="parsed scheme: \"unix\"" module=grpc
May 15 10:44:11.251816 env[1460]: time="2025-05-15T10:44:11.251803789Z" level=info msg="scheme \"unix\" not registered, fallback to default scheme" module=grpc
May 15 10:44:11.251898 env[1460]: time="2025-05-15T10:44:11.251821466Z" level=info msg="ccResolverWrapper: sending update to cc: {[{unix:///var/run/docker/libcontainerd/docker-containerd.sock 0 }] }" module=grpc
May 15 10:44:11.251898 env[1460]: time="2025-05-15T10:44:11.251829938Z" level=info msg="ClientConn switching balancer to \"pick_first\"" module=grpc
May 15 10:44:11.962364 systemd[1]: kubelet.service: Scheduled restart job, restart counter is at 1.
May 15 10:44:11.962592 systemd[1]: Stopped kubelet.service.
May 15 10:44:11.996806 systemd[1]: Starting kubelet.service...
May 15 10:44:12.652920 systemd[1]: Started kubelet.service.
May 15 10:44:12.710166 kubelet[1479]: E0515 10:44:12.710103 1479 run.go:74] "command failed" err="failed to load kubelet config file, path: /var/lib/kubelet/config.yaml, error: failed to load Kubelet config file /var/lib/kubelet/config.yaml, error failed to read kubelet config file \"/var/lib/kubelet/config.yaml\", error: open /var/lib/kubelet/config.yaml: no such file or directory"
May 15 10:44:12.713182 systemd[1]: kubelet.service: Main process exited, code=exited, status=1/FAILURE
May 15 10:44:12.713406 systemd[1]: kubelet.service: Failed with result 'exit-code'.
May 15 10:44:13.442830 env[1460]: time="2025-05-15T10:44:13.442758318Z" level=warning msg="Your kernel does not support cgroup blkio weight"
May 15 10:44:13.442830 env[1460]: time="2025-05-15T10:44:13.442795724Z" level=warning msg="Your kernel does not support cgroup blkio weight_device"
May 15 10:44:13.443345 env[1460]: time="2025-05-15T10:44:13.442993891Z" level=info msg="Loading containers: start."
May 15 10:44:13.649656 kernel: Initializing XFRM netlink socket
May 15 10:44:13.680006 env[1460]: time="2025-05-15T10:44:13.679947510Z" level=info msg="Default bridge (docker0) is assigned with an IP address 172.17.0.0/16. Daemon option --bip can be used to set a preferred IP address"
May 15 10:44:13.733083 systemd-networkd[1080]: docker0: Link UP
May 15 10:44:13.749298 env[1460]: time="2025-05-15T10:44:13.749237198Z" level=info msg="Loading containers: done."
May 15 10:44:13.857813 env[1460]: time="2025-05-15T10:44:13.857729494Z" level=warning msg="Not using native diff for overlay2, this may cause degraded performance for building images: kernel has CONFIG_OVERLAY_FS_REDIRECT_DIR enabled" storage-driver=overlay2
May 15 10:44:13.858057 env[1460]: time="2025-05-15T10:44:13.857979928Z" level=info msg="Docker daemon" commit=112bdf3343 graphdriver(s)=overlay2 version=20.10.23
May 15 10:44:13.858141 env[1460]: time="2025-05-15T10:44:13.858089343Z" level=info msg="Daemon has completed initialization"
May 15 10:44:13.891261 systemd[1]: Started docker.service.
May 15 10:44:13.931265 env[1460]: time="2025-05-15T10:44:13.931176129Z" level=info msg="API listen on /run/docker.sock"
May 15 10:44:15.032377 env[1314]: time="2025-05-15T10:44:15.032308215Z" level=info msg="PullImage \"registry.k8s.io/kube-apiserver:v1.30.12\""
May 15 10:44:15.823162 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount429491203.mount: Deactivated successfully.
May 15 10:44:19.676530 env[1314]: time="2025-05-15T10:44:19.676416876Z" level=info msg="ImageCreate event &ImageCreate{Name:registry.k8s.io/kube-apiserver:v1.30.12,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}"
May 15 10:44:19.680665 env[1314]: time="2025-05-15T10:44:19.680570761Z" level=info msg="ImageCreate event &ImageCreate{Name:sha256:e113c59aa22f0650435e2a3ed64aadb01e87f3d2835aa3825fe078cd39699bfb,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}"
May 15 10:44:19.683578 env[1314]: time="2025-05-15T10:44:19.683493371Z" level=info msg="ImageUpdate event &ImageUpdate{Name:registry.k8s.io/kube-apiserver:v1.30.12,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}"
May 15 10:44:19.687259 env[1314]: time="2025-05-15T10:44:19.687200605Z" level=info msg="ImageCreate event &ImageCreate{Name:registry.k8s.io/kube-apiserver@sha256:4878682f7a044274d42399a6316ef452c5411aafd4ad99cc57de7235ca490e4e,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}"
May 15 10:44:19.688294 env[1314]: time="2025-05-15T10:44:19.688213938Z" level=info msg="PullImage \"registry.k8s.io/kube-apiserver:v1.30.12\" returns image reference \"sha256:e113c59aa22f0650435e2a3ed64aadb01e87f3d2835aa3825fe078cd39699bfb\""
May 15 10:44:19.701878 env[1314]: time="2025-05-15T10:44:19.701817629Z" level=info msg="PullImage \"registry.k8s.io/kube-controller-manager:v1.30.12\""
May 15 10:44:22.395342 env[1314]: time="2025-05-15T10:44:22.395270664Z" level=info msg="ImageCreate event &ImageCreate{Name:registry.k8s.io/kube-controller-manager:v1.30.12,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}"
May 15 10:44:22.398051 env[1314]: time="2025-05-15T10:44:22.398021886Z" level=info msg="ImageCreate event &ImageCreate{Name:sha256:70742b7b7d90a618a1fa06d89248dbe2c291c19d7f75f4ad60a69d0454dbbac8,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}"
May 15 10:44:22.400589 env[1314]: time="2025-05-15T10:44:22.400541005Z" level=info msg="ImageUpdate event &ImageUpdate{Name:registry.k8s.io/kube-controller-manager:v1.30.12,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}"
May 15 10:44:22.402685 env[1314]: time="2025-05-15T10:44:22.402652143Z" level=info msg="ImageCreate event &ImageCreate{Name:registry.k8s.io/kube-controller-manager@sha256:3a36711d0409d565b370a18d0c19339e93d4f1b1f2b3fd382eb31c714c463b74,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}"
May 15 10:44:22.403384 env[1314]: time="2025-05-15T10:44:22.403347971Z" level=info msg="PullImage \"registry.k8s.io/kube-controller-manager:v1.30.12\" returns image reference \"sha256:70742b7b7d90a618a1fa06d89248dbe2c291c19d7f75f4ad60a69d0454dbbac8\""
May 15 10:44:22.421157 env[1314]: time="2025-05-15T10:44:22.421116959Z" level=info msg="PullImage \"registry.k8s.io/kube-scheduler:v1.30.12\""
May 15 10:44:22.962112 systemd[1]: kubelet.service: Scheduled restart job, restart counter is at 2.
May 15 10:44:22.962295 systemd[1]: Stopped kubelet.service.
May 15 10:44:22.964011 systemd[1]: Starting kubelet.service...
May 15 10:44:23.112664 systemd[1]: Started kubelet.service.
May 15 10:44:23.171722 kubelet[1637]: E0515 10:44:23.171658 1637 run.go:74] "command failed" err="failed to load kubelet config file, path: /var/lib/kubelet/config.yaml, error: failed to load Kubelet config file /var/lib/kubelet/config.yaml, error failed to read kubelet config file \"/var/lib/kubelet/config.yaml\", error: open /var/lib/kubelet/config.yaml: no such file or directory"
May 15 10:44:23.173658 systemd[1]: kubelet.service: Main process exited, code=exited, status=1/FAILURE
May 15 10:44:23.174025 systemd[1]: kubelet.service: Failed with result 'exit-code'.
May 15 10:44:24.660637 env[1314]: time="2025-05-15T10:44:24.660544731Z" level=info msg="ImageCreate event &ImageCreate{Name:registry.k8s.io/kube-scheduler:v1.30.12,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}"
May 15 10:44:24.662978 env[1314]: time="2025-05-15T10:44:24.662949086Z" level=info msg="ImageCreate event &ImageCreate{Name:sha256:c0b91cfea9f9a1c09fc5d056f3a015e52604fd0d63671ff5bf31e642402ef05d,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}"
May 15 10:44:24.664929 env[1314]: time="2025-05-15T10:44:24.664896388Z" level=info msg="ImageUpdate event &ImageUpdate{Name:registry.k8s.io/kube-scheduler:v1.30.12,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}"
May 15 10:44:24.667642 env[1314]: time="2025-05-15T10:44:24.667588164Z" level=info msg="ImageCreate event &ImageCreate{Name:registry.k8s.io/kube-scheduler@sha256:521c843d01025be7d4e246ddee8cde74556eb9813c606d6db9f0f03236f6d029,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}"
May 15 10:44:24.668333 env[1314]: time="2025-05-15T10:44:24.668296249Z" level=info msg="PullImage \"registry.k8s.io/kube-scheduler:v1.30.12\" returns image reference \"sha256:c0b91cfea9f9a1c09fc5d056f3a015e52604fd0d63671ff5bf31e642402ef05d\""
May 15 10:44:24.684316 env[1314]: time="2025-05-15T10:44:24.684273878Z" level=info msg="PullImage \"registry.k8s.io/kube-proxy:v1.30.12\""
May 15 10:44:26.837027 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount338093997.mount: Deactivated successfully.
May 15 10:44:29.124836 env[1314]: time="2025-05-15T10:44:29.124719287Z" level=info msg="ImageCreate event &ImageCreate{Name:registry.k8s.io/kube-proxy:v1.30.12,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}"
May 15 10:44:29.127108 env[1314]: time="2025-05-15T10:44:29.127078222Z" level=info msg="ImageCreate event &ImageCreate{Name:sha256:c9356fea5d151501039907c3ba870272461396117eabc74063632616f4e31b2b,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}"
May 15 10:44:29.128509 env[1314]: time="2025-05-15T10:44:29.128440388Z" level=info msg="ImageUpdate event &ImageUpdate{Name:registry.k8s.io/kube-proxy:v1.30.12,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}"
May 15 10:44:29.129689 env[1314]: time="2025-05-15T10:44:29.129653514Z" level=info msg="ImageCreate event &ImageCreate{Name:registry.k8s.io/kube-proxy@sha256:ea8c7d5392acf6b0c11ebba78301e1a6c2dc6abcd7544102ed578e49d1c82f15,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}"
May 15 10:44:29.130206 env[1314]: time="2025-05-15T10:44:29.130166845Z" level=info msg="PullImage \"registry.k8s.io/kube-proxy:v1.30.12\" returns image reference \"sha256:c9356fea5d151501039907c3ba870272461396117eabc74063632616f4e31b2b\""
May 15 10:44:29.148595 env[1314]: time="2025-05-15T10:44:29.148534281Z" level=info msg="PullImage \"registry.k8s.io/coredns/coredns:v1.11.1\""
May 15 10:44:30.077595 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount1905792090.mount: Deactivated successfully.
May 15 10:44:31.165907 env[1314]: time="2025-05-15T10:44:31.165832021Z" level=info msg="ImageCreate event &ImageCreate{Name:registry.k8s.io/coredns/coredns:v1.11.1,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}"
May 15 10:44:31.168388 env[1314]: time="2025-05-15T10:44:31.168333976Z" level=info msg="ImageCreate event &ImageCreate{Name:sha256:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}"
May 15 10:44:31.172062 env[1314]: time="2025-05-15T10:44:31.172036144Z" level=info msg="ImageUpdate event &ImageUpdate{Name:registry.k8s.io/coredns/coredns:v1.11.1,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}"
May 15 10:44:31.174889 env[1314]: time="2025-05-15T10:44:31.174811173Z" level=info msg="ImageCreate event &ImageCreate{Name:registry.k8s.io/coredns/coredns@sha256:1eeb4c7316bacb1d4c8ead65571cd92dd21e27359f0d4917f1a5822a73b75db1,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}"
May 15 10:44:31.176763 env[1314]: time="2025-05-15T10:44:31.176688544Z" level=info msg="PullImage \"registry.k8s.io/coredns/coredns:v1.11.1\" returns image reference \"sha256:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4\""
May 15 10:44:31.200906 env[1314]: time="2025-05-15T10:44:31.200828603Z" level=info msg="PullImage \"registry.k8s.io/pause:3.9\""
May 15 10:44:31.630776 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount2912435032.mount: Deactivated successfully.
May 15 10:44:31.636557 env[1314]: time="2025-05-15T10:44:31.636521725Z" level=info msg="ImageCreate event &ImageCreate{Name:registry.k8s.io/pause:3.9,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}"
May 15 10:44:31.638473 env[1314]: time="2025-05-15T10:44:31.638423940Z" level=info msg="ImageCreate event &ImageCreate{Name:sha256:e6f1816883972d4be47bd48879a08919b96afcd344132622e4d444987919323c,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}"
May 15 10:44:31.639779 env[1314]: time="2025-05-15T10:44:31.639758891Z" level=info msg="ImageUpdate event &ImageUpdate{Name:registry.k8s.io/pause:3.9,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}"
May 15 10:44:31.641458 env[1314]: time="2025-05-15T10:44:31.641422725Z" level=info msg="ImageCreate event &ImageCreate{Name:registry.k8s.io/pause@sha256:7031c1b283388d2c2e09b57badb803c05ebed362dc88d84b480cc47f72a21097,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}"
May 15 10:44:31.641910 env[1314]: time="2025-05-15T10:44:31.641863504Z" level=info msg="PullImage \"registry.k8s.io/pause:3.9\" returns image reference \"sha256:e6f1816883972d4be47bd48879a08919b96afcd344132622e4d444987919323c\""
May 15 10:44:31.652037 env[1314]: time="2025-05-15T10:44:31.651991291Z" level=info msg="PullImage \"registry.k8s.io/etcd:3.5.12-0\""
May 15 10:44:32.441197 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount1434704015.mount: Deactivated successfully.
May 15 10:44:33.212320 systemd[1]: kubelet.service: Scheduled restart job, restart counter is at 3.
May 15 10:44:33.212531 systemd[1]: Stopped kubelet.service.
May 15 10:44:33.215557 systemd[1]: Starting kubelet.service...
May 15 10:44:33.304958 systemd[1]: Started kubelet.service.
May 15 10:44:33.635135 kubelet[1678]: E0515 10:44:33.634954 1678 run.go:74] "command failed" err="failed to load kubelet config file, path: /var/lib/kubelet/config.yaml, error: failed to load Kubelet config file /var/lib/kubelet/config.yaml, error failed to read kubelet config file \"/var/lib/kubelet/config.yaml\", error: open /var/lib/kubelet/config.yaml: no such file or directory"
May 15 10:44:33.636776 systemd[1]: kubelet.service: Main process exited, code=exited, status=1/FAILURE
May 15 10:44:33.636974 systemd[1]: kubelet.service: Failed with result 'exit-code'.
May 15 10:44:38.076489 env[1314]: time="2025-05-15T10:44:38.076423747Z" level=info msg="ImageCreate event &ImageCreate{Name:registry.k8s.io/etcd:3.5.12-0,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}"
May 15 10:44:38.080148 env[1314]: time="2025-05-15T10:44:38.080088057Z" level=info msg="ImageCreate event &ImageCreate{Name:sha256:3861cfcd7c04ccac1f062788eca39487248527ef0c0cfd477a83d7691a75a899,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}"
May 15 10:44:38.082446 env[1314]: time="2025-05-15T10:44:38.082417131Z" level=info msg="ImageUpdate event &ImageUpdate{Name:registry.k8s.io/etcd:3.5.12-0,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}"
May 15 10:44:38.084528 env[1314]: time="2025-05-15T10:44:38.084489873Z" level=info msg="ImageCreate event &ImageCreate{Name:registry.k8s.io/etcd@sha256:44a8e24dcbba3470ee1fee21d5e88d128c936e9b55d4bc51fbef8086f8ed123b,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}"
May 15 10:44:38.085233 env[1314]: time="2025-05-15T10:44:38.085201048Z" level=info msg="PullImage \"registry.k8s.io/etcd:3.5.12-0\" returns image reference \"sha256:3861cfcd7c04ccac1f062788eca39487248527ef0c0cfd477a83d7691a75a899\""
May 15 10:44:40.254961 systemd[1]: Stopped kubelet.service.
May 15 10:44:40.257103 systemd[1]: Starting kubelet.service...
May 15 10:44:40.274584 systemd[1]: Reloading.
May 15 10:44:40.331854 /usr/lib/systemd/system-generators/torcx-generator[1793]: time="2025-05-15T10:44:40Z" level=debug msg="common configuration parsed" base_dir=/var/lib/torcx/ conf_dir=/etc/torcx/ run_dir=/run/torcx/ store_paths="[/usr/share/torcx/store /usr/share/oem/torcx/store/3510.3.100 /usr/share/oem/torcx/store /var/lib/torcx/store/3510.3.100 /var/lib/torcx/store]"
May 15 10:44:40.331885 /usr/lib/systemd/system-generators/torcx-generator[1793]: time="2025-05-15T10:44:40Z" level=info msg="torcx already run"
May 15 10:44:40.753958 systemd[1]: /usr/lib/systemd/system/locksmithd.service:8: Unit uses CPUShares=; please use CPUWeight= instead. Support for CPUShares= will be removed soon.
May 15 10:44:40.753977 systemd[1]: /usr/lib/systemd/system/locksmithd.service:9: Unit uses MemoryLimit=; please use MemoryMax= instead. Support for MemoryLimit= will be removed soon.
May 15 10:44:40.774052 systemd[1]: /run/systemd/system/docker.socket:8: ListenStream= references a path below legacy directory /var/run/, updating /var/run/docker.sock → /run/docker.sock; please update the unit file accordingly.
May 15 10:44:40.848783 systemd[1]: Started kubelet.service.
May 15 10:44:40.851792 systemd[1]: Stopping kubelet.service...
May 15 10:44:40.852139 systemd[1]: kubelet.service: Deactivated successfully.
May 15 10:44:40.852356 systemd[1]: Stopped kubelet.service.
May 15 10:44:40.853785 systemd[1]: Starting kubelet.service...
May 15 10:44:40.930238 systemd[1]: Started kubelet.service.
May 15 10:44:40.966794 kubelet[1856]: Flag --container-runtime-endpoint has been deprecated, This parameter should be set via the config file specified by the Kubelet's --config flag. See https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/ for more information.
May 15 10:44:40.967195 kubelet[1856]: Flag --pod-infra-container-image has been deprecated, will be removed in a future release. Image garbage collector will get sandbox image information from CRI.
May 15 10:44:40.967266 kubelet[1856]: Flag --volume-plugin-dir has been deprecated, This parameter should be set via the config file specified by the Kubelet's --config flag. See https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/ for more information.
May 15 10:44:40.968634 kubelet[1856]: I0515 10:44:40.968585 1856 server.go:205] "--pod-infra-container-image will not be pruned by the image garbage collector in kubelet and should also be set in the remote runtime"
May 15 10:44:41.289557 kubelet[1856]: I0515 10:44:41.289512 1856 server.go:484] "Kubelet version" kubeletVersion="v1.30.1"
May 15 10:44:41.289557 kubelet[1856]: I0515 10:44:41.289540 1856 server.go:486] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK=""
May 15 10:44:41.289773 kubelet[1856]: I0515 10:44:41.289754 1856 server.go:927] "Client rotation is on, will bootstrap in background"
May 15 10:44:41.302335 kubelet[1856]: I0515 10:44:41.302279 1856 dynamic_cafile_content.go:157] "Starting controller" name="client-ca-bundle::/etc/kubernetes/pki/ca.crt"
May 15 10:44:41.303644 kubelet[1856]: E0515 10:44:41.303606 1856 certificate_manager.go:562] kubernetes.io/kube-apiserver-client-kubelet: Failed while requesting a signed certificate from the control plane: cannot create certificate signing request: Post "https://10.0.0.96:6443/apis/certificates.k8s.io/v1/certificatesigningrequests": dial tcp 10.0.0.96:6443: connect: connection refused
May 15 10:44:41.313046 kubelet[1856]: I0515 10:44:41.313013 1856 server.go:742] "--cgroups-per-qos enabled, but --cgroup-root was not specified. defaulting to /"
May 15 10:44:41.313954 kubelet[1856]: I0515 10:44:41.313917 1856 container_manager_linux.go:265] "Container manager verified user specified cgroup-root exists" cgroupRoot=[]
May 15 10:44:41.314109 kubelet[1856]: I0515 10:44:41.313949 1856 container_manager_linux.go:270] "Creating Container Manager object based on Node Config" nodeConfig={"NodeName":"localhost","RuntimeCgroupsName":"","SystemCgroupsName":"","KubeletCgroupsName":"","KubeletOOMScoreAdj":-999,"ContainerRuntime":"","CgroupsPerQOS":true,"CgroupRoot":"/","CgroupDriver":"cgroupfs","KubeletRootDir":"/var/lib/kubelet","ProtectKernelDefaults":false,"KubeReservedCgroupName":"","SystemReservedCgroupName":"","ReservedSystemCPUs":{},"EnforceNodeAllocatable":{"pods":{}},"KubeReserved":null,"SystemReserved":null,"HardEvictionThresholds":[{"Signal":"nodefs.inodesFree","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.05},"GracePeriod":0,"MinReclaim":null},{"Signal":"imagefs.available","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.15},"GracePeriod":0,"MinReclaim":null},{"Signal":"imagefs.inodesFree","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.05},"GracePeriod":0,"MinReclaim":null},{"Signal":"memory.available","Operator":"LessThan","Value":{"Quantity":"100Mi","Percentage":0},"GracePeriod":0,"MinReclaim":null},{"Signal":"nodefs.available","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.1},"GracePeriod":0,"MinReclaim":null}],"QOSReserved":{},"CPUManagerPolicy":"none","CPUManagerPolicyOptions":null,"TopologyManagerScope":"container","CPUManagerReconcilePeriod":10000000000,"ExperimentalMemoryManagerPolicy":"None","ExperimentalMemoryManagerReservedMemory":null,"PodPidsLimit":-1,"EnforceCPULimits":true,"CPUCFSQuotaPeriod":100000000,"TopologyManagerPolicy":"none","TopologyManagerPolicyOptions":null}
May 15 10:44:41.314209 kubelet[1856]: I0515 10:44:41.314117 1856 topology_manager.go:138] "Creating topology manager with none policy"
May 15 10:44:41.314209 kubelet[1856]: I0515 10:44:41.314126 1856 container_manager_linux.go:301] "Creating device plugin manager"
May 15 10:44:41.314265 kubelet[1856]: I0515 10:44:41.314240 1856 state_mem.go:36] "Initialized new in-memory state store"
May 15 10:44:41.315094 kubelet[1856]: I0515 10:44:41.315073 1856 kubelet.go:400] "Attempting to sync node with API server"
May 15 10:44:41.315094 kubelet[1856]: I0515 10:44:41.315092 1856 kubelet.go:301] "Adding static pod path" path="/etc/kubernetes/manifests"
May 15 10:44:41.315151 kubelet[1856]: I0515 10:44:41.315124 1856 kubelet.go:312] "Adding apiserver pod source"
May 15 10:44:41.315151 kubelet[1856]: I0515 10:44:41.315150 1856 apiserver.go:42] "Waiting for node sync before watching apiserver pods"
May 15 10:44:41.325607 kubelet[1856]: I0515 10:44:41.325582 1856 kuberuntime_manager.go:261] "Container runtime initialized" containerRuntime="containerd" version="1.6.16" apiVersion="v1"
May 15 10:44:41.330312 kubelet[1856]: W0515 10:44:41.330260 1856 reflector.go:547] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Service: Get "https://10.0.0.96:6443/api/v1/services?limit=500&resourceVersion=0": dial tcp 10.0.0.96:6443: connect: connection refused
May 15 10:44:41.330369 kubelet[1856]: E0515 10:44:41.330325 1856 reflector.go:150] k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Service: failed to list *v1.Service: Get "https://10.0.0.96:6443/api/v1/services?limit=500&resourceVersion=0": dial tcp 10.0.0.96:6443: connect: connection refused
May 15 10:44:41.334123 kubelet[1856]: I0515 10:44:41.334096 1856 kubelet.go:815] "Not starting ClusterTrustBundle informer because we are in static kubelet mode"
May 15 10:44:41.334198 kubelet[1856]: W0515 10:44:41.334165 1856 probe.go:272] Flexvolume plugin directory at /opt/libexec/kubernetes/kubelet-plugins/volume/exec/ does not exist. Recreating.
May 15 10:44:41.334821 kubelet[1856]: I0515 10:44:41.334795 1856 server.go:1264] "Started kubelet"
May 15 10:44:41.336704 kubelet[1856]: I0515 10:44:41.336671 1856 server.go:163] "Starting to listen" address="0.0.0.0" port=10250
May 15 10:44:41.337507 kubelet[1856]: I0515 10:44:41.337480 1856 server.go:455] "Adding debug handlers to kubelet server"
May 15 10:44:41.343051 kubelet[1856]: W0515 10:44:41.342999 1856 reflector.go:547] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Node: Get "https://10.0.0.96:6443/api/v1/nodes?fieldSelector=metadata.name%3Dlocalhost&limit=500&resourceVersion=0": dial tcp 10.0.0.96:6443: connect: connection refused
May 15 10:44:41.343051 kubelet[1856]: E0515 10:44:41.343049 1856 reflector.go:150] k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Node: failed to list *v1.Node: Get "https://10.0.0.96:6443/api/v1/nodes?fieldSelector=metadata.name%3Dlocalhost&limit=500&resourceVersion=0": dial tcp 10.0.0.96:6443: connect: connection refused
May 15 10:44:41.348700 kernel: SELinux: Context system_u:object_r:container_file_t:s0 is not valid (left unmapped).
May 15 10:44:41.353403 kubelet[1856]: I0515 10:44:41.353384 1856 fs_resource_analyzer.go:67] "Starting FS ResourceAnalyzer"
May 15 10:44:41.355428 kubelet[1856]: I0515 10:44:41.355374 1856 ratelimit.go:55] "Setting rate limiting for endpoint" service="podresources" qps=100 burstTokens=10
May 15 10:44:41.355707 kubelet[1856]: I0515 10:44:41.355693 1856 server.go:227] "Starting to serve the podresources API" endpoint="unix:/var/lib/kubelet/pod-resources/kubelet.sock"
May 15 10:44:41.355851 kubelet[1856]: E0515 10:44:41.355834 1856 kubelet_node_status.go:462] "Error getting the current node from lister" err="node \"localhost\" not found"
May 15 10:44:41.355927 kubelet[1856]: E0515 10:44:41.355798 1856 event.go:368] "Unable to write event (may retry after sleeping)" err="Post \"https://10.0.0.96:6443/api/v1/namespaces/default/events\": dial tcp 10.0.0.96:6443: connect: connection refused" event="&Event{ObjectMeta:{localhost.183fad7102e8da76 default 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Node,Namespace:,Name:localhost,UID:localhost,APIVersion:,ResourceVersion:,FieldPath:,},Reason:Starting,Message:Starting kubelet.,Source:EventSource{Component:kubelet,Host:localhost,},FirstTimestamp:2025-05-15 10:44:41.334766198 +0000 UTC m=+0.400890939,LastTimestamp:2025-05-15 10:44:41.334766198 +0000 UTC m=+0.400890939,Count:1,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:localhost,}"
May 15 10:44:41.356087 kubelet[1856]: I0515 10:44:41.356073 1856 volume_manager.go:291] "Starting Kubelet Volume Manager"
May 15 10:44:41.356341 kubelet[1856]: I0515 10:44:41.356312 1856 desired_state_of_world_populator.go:149] "Desired state populator starts to run"
May 15 10:44:41.356421 kubelet[1856]: I0515 10:44:41.356403 1856 reconciler.go:26] "Reconciler: start to sync state"
May 15 10:44:41.357115 kubelet[1856]: W0515 10:44:41.356724 1856 reflector.go:547] k8s.io/client-go/informers/factory.go:160: failed to list *v1.CSIDriver: Get "https://10.0.0.96:6443/apis/storage.k8s.io/v1/csidrivers?limit=500&resourceVersion=0": dial tcp 10.0.0.96:6443: connect: connection refused
May 15 10:44:41.357115 kubelet[1856]: E0515 10:44:41.356772 1856 reflector.go:150] k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.CSIDriver: failed to list *v1.CSIDriver: Get "https://10.0.0.96:6443/apis/storage.k8s.io/v1/csidrivers?limit=500&resourceVersion=0": dial tcp 10.0.0.96:6443: connect: connection refused
May 15 10:44:41.357115 kubelet[1856]: E0515 10:44:41.357104 1856 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://10.0.0.96:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/localhost?timeout=10s\": dial tcp 10.0.0.96:6443: connect: connection refused" interval="200ms"
May 15 10:44:41.358265 kubelet[1856]: I0515 10:44:41.358246 1856 factory.go:221] Registration of the systemd container factory successfully
May 15 10:44:41.358344 kubelet[1856]: I0515 10:44:41.358324 1856 factory.go:219] Registration of the crio container factory failed: Get "http://%2Fvar%2Frun%2Fcrio%2Fcrio.sock/info": dial unix /var/run/crio/crio.sock: connect: no such file or directory
May 15 10:44:41.359592 kubelet[1856]: E0515 10:44:41.359553 1856 kubelet.go:1467] "Image garbage collection failed once. Stats initialization may not have completed yet" err="invalid capacity 0 on image filesystem"
May 15 10:44:41.359962 kubelet[1856]: I0515 10:44:41.359941 1856 factory.go:221] Registration of the containerd container factory successfully
May 15 10:44:41.367228 kubelet[1856]: I0515 10:44:41.367191 1856 kubelet_network_linux.go:50] "Initialized iptables rules." protocol="IPv4"
May 15 10:44:41.368416 kubelet[1856]: I0515 10:44:41.368120 1856 kubelet_network_linux.go:50] "Initialized iptables rules." protocol="IPv6"
May 15 10:44:41.368416 kubelet[1856]: I0515 10:44:41.368154 1856 status_manager.go:217] "Starting to sync pod status with apiserver"
May 15 10:44:41.368416 kubelet[1856]: I0515 10:44:41.368178 1856 kubelet.go:2337] "Starting kubelet main sync loop"
May 15 10:44:41.368416 kubelet[1856]: E0515 10:44:41.368219 1856 kubelet.go:2361] "Skipping pod synchronization" err="[container runtime status check may not have completed yet, PLEG is not healthy: pleg has yet to be successful]"
May 15 10:44:41.373487 kubelet[1856]: W0515 10:44:41.373422 1856 reflector.go:547] k8s.io/client-go/informers/factory.go:160: failed to list *v1.RuntimeClass: Get "https://10.0.0.96:6443/apis/node.k8s.io/v1/runtimeclasses?limit=500&resourceVersion=0": dial tcp 10.0.0.96:6443: connect: connection refused
May 15 10:44:41.373487 kubelet[1856]: E0515 10:44:41.373489 1856 reflector.go:150] k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.RuntimeClass: failed to list *v1.RuntimeClass: Get "https://10.0.0.96:6443/apis/node.k8s.io/v1/runtimeclasses?limit=500&resourceVersion=0": dial tcp 10.0.0.96:6443: connect: connection refused
May 15 10:44:41.379185 kubelet[1856]: I0515 10:44:41.379163 1856 cpu_manager.go:214] "Starting CPU manager" policy="none"
May 15 10:44:41.379249 kubelet[1856]: I0515 10:44:41.379194 1856 cpu_manager.go:215] "Reconciling" reconcilePeriod="10s"
May 15 10:44:41.379249 kubelet[1856]: I0515 10:44:41.379217 1856 state_mem.go:36] "Initialized new in-memory state store"
May 15 10:44:41.457976 kubelet[1856]: I0515 10:44:41.457930 1856 kubelet_node_status.go:73] "Attempting to register node" node="localhost"
May 15 10:44:41.458281 kubelet[1856]: E0515 10:44:41.458247 1856 kubelet_node_status.go:96] "Unable to register node with API server" err="Post \"https://10.0.0.96:6443/api/v1/nodes\": dial tcp 10.0.0.96:6443: connect: connection refused" node="localhost"
May 15 10:44:41.468425 kubelet[1856]: E0515 10:44:41.468402 1856 kubelet.go:2361] "Skipping pod synchronization" err="container runtime status check may not have completed yet"
May 15 10:44:41.558055 kubelet[1856]: E0515 10:44:41.557970 1856 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://10.0.0.96:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/localhost?timeout=10s\": dial tcp 10.0.0.96:6443: connect: connection refused" interval="400ms"
May 15 10:44:41.660196 kubelet[1856]: I0515 10:44:41.660167 1856 kubelet_node_status.go:73] "Attempting to register node" node="localhost"
May 15 10:44:41.660442 kubelet[1856]: E0515 10:44:41.660412 1856 kubelet_node_status.go:96] "Unable to register node with API server" err="Post \"https://10.0.0.96:6443/api/v1/nodes\": dial tcp 10.0.0.96:6443: connect: connection refused" node="localhost"
May 15 10:44:41.668500 kubelet[1856]: E0515 10:44:41.668469 1856 kubelet.go:2361] "Skipping pod synchronization" err="container runtime status check may not have completed yet"
May 15 10:44:41.959265 kubelet[1856]: E0515 10:44:41.959173 1856 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://10.0.0.96:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/localhost?timeout=10s\": dial tcp 10.0.0.96:6443: connect: connection refused" interval="800ms"
May 15 10:44:42.061867 kubelet[1856]: I0515 10:44:42.061825 1856 kubelet_node_status.go:73] "Attempting to register node" node="localhost"
May 15 10:44:42.062297 kubelet[1856]: E0515 10:44:42.062262 1856 kubelet_node_status.go:96] "Unable to register node with API server" err="Post \"https://10.0.0.96:6443/api/v1/nodes\": dial tcp 10.0.0.96:6443: connect: connection refused" node="localhost"
May 15 10:44:42.069320 kubelet[1856]: E0515 10:44:42.069298 1856 kubelet.go:2361] "Skipping pod synchronization" err="container runtime status check may not have completed yet"
May 15 10:44:42.079096 kubelet[1856]: I0515 10:44:42.079030 1856 policy_none.go:49] "None policy: Start"
May 15 10:44:42.079788 kubelet[1856]: I0515 10:44:42.079761 1856 memory_manager.go:170] "Starting memorymanager" policy="None"
May 15 10:44:42.079901 kubelet[1856]: I0515 10:44:42.079801 1856 state_mem.go:35] "Initializing new in-memory state store"
May 15 10:44:42.091421 kubelet[1856]: I0515 10:44:42.091312 1856 manager.go:479] "Failed to read data from checkpoint" checkpoint="kubelet_internal_checkpoint" err="checkpoint is not found"
May 15 10:44:42.091693 kubelet[1856]: I0515 10:44:42.091502 1856 container_log_manager.go:186] "Initializing container log rotate workers" workers=1 monitorPeriod="10s"
May 15 10:44:42.091693 kubelet[1856]: I0515 10:44:42.091672 1856 plugin_manager.go:118] "Starting Kubelet Plugin Manager"
May 15 10:44:42.093140 kubelet[1856]: E0515 10:44:42.093122 1856 eviction_manager.go:282] "Eviction manager: failed to get summary stats" err="failed to get node info: node \"localhost\" not found"
May 15 10:44:42.246461 kubelet[1856]: W0515 10:44:42.246401 1856 reflector.go:547] k8s.io/client-go/informers/factory.go:160: failed to list *v1.CSIDriver: Get "https://10.0.0.96:6443/apis/storage.k8s.io/v1/csidrivers?limit=500&resourceVersion=0": dial tcp 10.0.0.96:6443: connect: connection refused
May 15 10:44:42.246461 kubelet[1856]: E0515 10:44:42.246456 1856 reflector.go:150] k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.CSIDriver: failed to list *v1.CSIDriver: Get "https://10.0.0.96:6443/apis/storage.k8s.io/v1/csidrivers?limit=500&resourceVersion=0": dial tcp 10.0.0.96:6443: connect: connection refused
May 15 10:44:42.402208 kubelet[1856]: W0515 10:44:42.402120 1856 reflector.go:547] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Service: Get "https://10.0.0.96:6443/api/v1/services?limit=500&resourceVersion=0": dial tcp 10.0.0.96:6443: connect: connection refused
May 15 10:44:42.402208 kubelet[1856]: E0515 10:44:42.402191 1856 reflector.go:150] k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Service: failed to list *v1.Service: Get "https://10.0.0.96:6443/api/v1/services?limit=500&resourceVersion=0": dial tcp 10.0.0.96:6443: connect: connection refused
May 15 10:44:42.719321 kubelet[1856]: W0515 10:44:42.719246 1856 reflector.go:547] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Node: Get "https://10.0.0.96:6443/api/v1/nodes?fieldSelector=metadata.name%3Dlocalhost&limit=500&resourceVersion=0": dial tcp 10.0.0.96:6443: connect: connection refused
May 15 10:44:42.719321 kubelet[1856]: E0515 10:44:42.719324 1856 reflector.go:150] k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Node: failed to list *v1.Node: Get "https://10.0.0.96:6443/api/v1/nodes?fieldSelector=metadata.name%3Dlocalhost&limit=500&resourceVersion=0": dial tcp 10.0.0.96:6443: connect: connection refused
May 15 10:44:42.760019 kubelet[1856]: E0515 10:44:42.759959 1856 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://10.0.0.96:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/localhost?timeout=10s\": dial tcp 10.0.0.96:6443: connect: connection refused" interval="1.6s"
May 15 10:44:42.846056 kubelet[1856]: W0515 10:44:42.846000 1856 reflector.go:547] k8s.io/client-go/informers/factory.go:160: failed to list *v1.RuntimeClass: Get "https://10.0.0.96:6443/apis/node.k8s.io/v1/runtimeclasses?limit=500&resourceVersion=0": dial tcp 10.0.0.96:6443: connect: connection refused
May 15 10:44:42.846056 kubelet[1856]: E0515 10:44:42.846060 1856 reflector.go:150] k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.RuntimeClass: failed to list *v1.RuntimeClass: Get "https://10.0.0.96:6443/apis/node.k8s.io/v1/runtimeclasses?limit=500&resourceVersion=0": dial tcp 10.0.0.96:6443: connect: connection refused
May 15 10:44:42.864236 kubelet[1856]: I0515 10:44:42.864214 1856 kubelet_node_status.go:73] "Attempting to register node" node="localhost"
May 15 10:44:42.864564 kubelet[1856]: E0515 10:44:42.864514 1856 kubelet_node_status.go:96] "Unable
to register node with API server" err="Post \"https://10.0.0.96:6443/api/v1/nodes\": dial tcp 10.0.0.96:6443: connect: connection refused" node="localhost" May 15 10:44:42.869720 kubelet[1856]: I0515 10:44:42.869665 1856 topology_manager.go:215] "Topology Admit Handler" podUID="e850497759c7f871d2e8e7dd2865480a" podNamespace="kube-system" podName="kube-apiserver-localhost" May 15 10:44:42.870527 kubelet[1856]: I0515 10:44:42.870488 1856 topology_manager.go:215] "Topology Admit Handler" podUID="b20b39a8540dba87b5883a6f0f602dba" podNamespace="kube-system" podName="kube-controller-manager-localhost" May 15 10:44:42.871074 kubelet[1856]: I0515 10:44:42.871053 1856 topology_manager.go:215] "Topology Admit Handler" podUID="6ece95f10dbffa04b25ec3439a115512" podNamespace="kube-system" podName="kube-scheduler-localhost" May 15 10:44:42.966384 kubelet[1856]: I0515 10:44:42.966316 1856 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"k8s-certs\" (UniqueName: \"kubernetes.io/host-path/b20b39a8540dba87b5883a6f0f602dba-k8s-certs\") pod \"kube-controller-manager-localhost\" (UID: \"b20b39a8540dba87b5883a6f0f602dba\") " pod="kube-system/kube-controller-manager-localhost" May 15 10:44:42.966384 kubelet[1856]: I0515 10:44:42.966361 1856 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"usr-share-ca-certificates\" (UniqueName: \"kubernetes.io/host-path/b20b39a8540dba87b5883a6f0f602dba-usr-share-ca-certificates\") pod \"kube-controller-manager-localhost\" (UID: \"b20b39a8540dba87b5883a6f0f602dba\") " pod="kube-system/kube-controller-manager-localhost" May 15 10:44:42.966384 kubelet[1856]: I0515 10:44:42.966386 1856 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kubeconfig\" (UniqueName: \"kubernetes.io/host-path/6ece95f10dbffa04b25ec3439a115512-kubeconfig\") pod \"kube-scheduler-localhost\" (UID: \"6ece95f10dbffa04b25ec3439a115512\") " 
pod="kube-system/kube-scheduler-localhost" May 15 10:44:42.966384 kubelet[1856]: I0515 10:44:42.966401 1856 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"k8s-certs\" (UniqueName: \"kubernetes.io/host-path/e850497759c7f871d2e8e7dd2865480a-k8s-certs\") pod \"kube-apiserver-localhost\" (UID: \"e850497759c7f871d2e8e7dd2865480a\") " pod="kube-system/kube-apiserver-localhost" May 15 10:44:42.966384 kubelet[1856]: I0515 10:44:42.966414 1856 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"flexvolume-dir\" (UniqueName: \"kubernetes.io/host-path/b20b39a8540dba87b5883a6f0f602dba-flexvolume-dir\") pod \"kube-controller-manager-localhost\" (UID: \"b20b39a8540dba87b5883a6f0f602dba\") " pod="kube-system/kube-controller-manager-localhost" May 15 10:44:42.966732 kubelet[1856]: I0515 10:44:42.966444 1856 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ca-certs\" (UniqueName: \"kubernetes.io/host-path/b20b39a8540dba87b5883a6f0f602dba-ca-certs\") pod \"kube-controller-manager-localhost\" (UID: \"b20b39a8540dba87b5883a6f0f602dba\") " pod="kube-system/kube-controller-manager-localhost" May 15 10:44:42.966732 kubelet[1856]: I0515 10:44:42.966531 1856 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kubeconfig\" (UniqueName: \"kubernetes.io/host-path/b20b39a8540dba87b5883a6f0f602dba-kubeconfig\") pod \"kube-controller-manager-localhost\" (UID: \"b20b39a8540dba87b5883a6f0f602dba\") " pod="kube-system/kube-controller-manager-localhost" May 15 10:44:42.966732 kubelet[1856]: I0515 10:44:42.966594 1856 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ca-certs\" (UniqueName: \"kubernetes.io/host-path/e850497759c7f871d2e8e7dd2865480a-ca-certs\") pod \"kube-apiserver-localhost\" (UID: \"e850497759c7f871d2e8e7dd2865480a\") " 
pod="kube-system/kube-apiserver-localhost" May 15 10:44:42.966732 kubelet[1856]: I0515 10:44:42.966638 1856 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"usr-share-ca-certificates\" (UniqueName: \"kubernetes.io/host-path/e850497759c7f871d2e8e7dd2865480a-usr-share-ca-certificates\") pod \"kube-apiserver-localhost\" (UID: \"e850497759c7f871d2e8e7dd2865480a\") " pod="kube-system/kube-apiserver-localhost" May 15 10:44:43.175581 kubelet[1856]: E0515 10:44:43.175439 1856 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" May 15 10:44:43.175986 kubelet[1856]: E0515 10:44:43.175817 1856 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" May 15 10:44:43.176170 env[1314]: time="2025-05-15T10:44:43.176126813Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-apiserver-localhost,Uid:e850497759c7f871d2e8e7dd2865480a,Namespace:kube-system,Attempt:0,}" May 15 10:44:43.176750 env[1314]: time="2025-05-15T10:44:43.176703586Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-scheduler-localhost,Uid:6ece95f10dbffa04b25ec3439a115512,Namespace:kube-system,Attempt:0,}" May 15 10:44:43.176959 kubelet[1856]: E0515 10:44:43.176942 1856 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" May 15 10:44:43.177177 env[1314]: time="2025-05-15T10:44:43.177152539Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-controller-manager-localhost,Uid:b20b39a8540dba87b5883a6f0f602dba,Namespace:kube-system,Attempt:0,}" May 15 10:44:43.377949 kubelet[1856]: E0515 10:44:43.377904 1856 certificate_manager.go:562] kubernetes.io/kube-apiserver-client-kubelet: 
Failed while requesting a signed certificate from the control plane: cannot create certificate signing request: Post "https://10.0.0.96:6443/apis/certificates.k8s.io/v1/certificatesigningrequests": dial tcp 10.0.0.96:6443: connect: connection refused May 15 10:44:43.650626 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount1659587644.mount: Deactivated successfully. May 15 10:44:43.656902 env[1314]: time="2025-05-15T10:44:43.656827659Z" level=info msg="ImageCreate event &ImageCreate{Name:registry.k8s.io/pause:3.6,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" May 15 10:44:43.659898 env[1314]: time="2025-05-15T10:44:43.659834223Z" level=info msg="ImageUpdate event &ImageUpdate{Name:registry.k8s.io/pause:3.6,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" May 15 10:44:43.660772 env[1314]: time="2025-05-15T10:44:43.660726157Z" level=info msg="ImageUpdate event &ImageUpdate{Name:registry.k8s.io/pause:3.6,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" May 15 10:44:43.661721 env[1314]: time="2025-05-15T10:44:43.661685469Z" level=info msg="ImageCreate event &ImageCreate{Name:sha256:6270bb605e12e581514ada5fd5b3216f727db55dc87d5889c790e4c760683fee,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" May 15 10:44:43.664121 env[1314]: time="2025-05-15T10:44:43.664085003Z" level=info msg="ImageUpdate event &ImageUpdate{Name:registry.k8s.io/pause:3.6,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" May 15 10:44:43.665268 env[1314]: time="2025-05-15T10:44:43.665228509Z" level=info msg="ImageUpdate event &ImageUpdate{Name:sha256:6270bb605e12e581514ada5fd5b3216f727db55dc87d5889c790e4c760683fee,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" May 15 10:44:43.666417 env[1314]: time="2025-05-15T10:44:43.666386013Z" level=info msg="ImageUpdate event 
&ImageUpdate{Name:sha256:6270bb605e12e581514ada5fd5b3216f727db55dc87d5889c790e4c760683fee,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" May 15 10:44:43.667626 env[1314]: time="2025-05-15T10:44:43.667575115Z" level=info msg="ImageCreate event &ImageCreate{Name:registry.k8s.io/pause@sha256:3d380ca8864549e74af4b29c10f9cb0956236dfb01c40ca076fb6c37253234db,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" May 15 10:44:43.669141 env[1314]: time="2025-05-15T10:44:43.669118181Z" level=info msg="ImageUpdate event &ImageUpdate{Name:registry.k8s.io/pause:3.6,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" May 15 10:44:43.672312 env[1314]: time="2025-05-15T10:44:43.672259778Z" level=info msg="ImageUpdate event &ImageUpdate{Name:registry.k8s.io/pause:3.6,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" May 15 10:44:43.673086 env[1314]: time="2025-05-15T10:44:43.673051926Z" level=info msg="ImageUpdate event &ImageUpdate{Name:registry.k8s.io/pause@sha256:3d380ca8864549e74af4b29c10f9cb0956236dfb01c40ca076fb6c37253234db,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" May 15 10:44:43.674645 env[1314]: time="2025-05-15T10:44:43.674594441Z" level=info msg="ImageUpdate event &ImageUpdate{Name:registry.k8s.io/pause@sha256:3d380ca8864549e74af4b29c10f9cb0956236dfb01c40ca076fb6c37253234db,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" May 15 10:44:43.694845 env[1314]: time="2025-05-15T10:44:43.694748675Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 May 15 10:44:43.694845 env[1314]: time="2025-05-15T10:44:43.694808046Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." 
runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 May 15 10:44:43.695124 env[1314]: time="2025-05-15T10:44:43.694822022Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 May 15 10:44:43.695414 env[1314]: time="2025-05-15T10:44:43.695337791Z" level=info msg="starting signal loop" namespace=k8s.io path=/run/containerd/io.containerd.runtime.v2.task/k8s.io/80b39bbfcb7559726b84f1c7ccf8332138019593c804bf0e942963c2c9b65d1a pid=1897 runtime=io.containerd.runc.v2 May 15 10:44:43.703680 env[1314]: time="2025-05-15T10:44:43.703529228Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 May 15 10:44:43.703680 env[1314]: time="2025-05-15T10:44:43.703591625Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 May 15 10:44:43.703680 env[1314]: time="2025-05-15T10:44:43.703603737Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 May 15 10:44:43.704061 env[1314]: time="2025-05-15T10:44:43.704033104Z" level=info msg="starting signal loop" namespace=k8s.io path=/run/containerd/io.containerd.runtime.v2.task/k8s.io/43e532ce9708cb367fea7d0e9758c03d4e6e01de017c417ddf1aa3b2eb513dd2 pid=1915 runtime=io.containerd.runc.v2 May 15 10:44:43.707664 env[1314]: time="2025-05-15T10:44:43.707563911Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 May 15 10:44:43.707664 env[1314]: time="2025-05-15T10:44:43.707631418Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 May 15 10:44:43.707771 env[1314]: time="2025-05-15T10:44:43.707644784Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." 
runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 May 15 10:44:43.707852 env[1314]: time="2025-05-15T10:44:43.707820063Z" level=info msg="starting signal loop" namespace=k8s.io path=/run/containerd/io.containerd.runtime.v2.task/k8s.io/2afdfb402556770a90777475b040cc39ad53e758b8724c7e72c030846bbfc629 pid=1937 runtime=io.containerd.runc.v2 May 15 10:44:43.751979 env[1314]: time="2025-05-15T10:44:43.747774566Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-scheduler-localhost,Uid:6ece95f10dbffa04b25ec3439a115512,Namespace:kube-system,Attempt:0,} returns sandbox id \"80b39bbfcb7559726b84f1c7ccf8332138019593c804bf0e942963c2c9b65d1a\"" May 15 10:44:43.751979 env[1314]: time="2025-05-15T10:44:43.751402596Z" level=info msg="CreateContainer within sandbox \"80b39bbfcb7559726b84f1c7ccf8332138019593c804bf0e942963c2c9b65d1a\" for container &ContainerMetadata{Name:kube-scheduler,Attempt:0,}" May 15 10:44:43.752203 kubelet[1856]: E0515 10:44:43.748905 1856 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" May 15 10:44:43.763417 env[1314]: time="2025-05-15T10:44:43.763371304Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-controller-manager-localhost,Uid:b20b39a8540dba87b5883a6f0f602dba,Namespace:kube-system,Attempt:0,} returns sandbox id \"2afdfb402556770a90777475b040cc39ad53e758b8724c7e72c030846bbfc629\"" May 15 10:44:43.764738 kubelet[1856]: E0515 10:44:43.764429 1856 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" May 15 10:44:43.764941 env[1314]: time="2025-05-15T10:44:43.764911184Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-apiserver-localhost,Uid:e850497759c7f871d2e8e7dd2865480a,Namespace:kube-system,Attempt:0,} returns sandbox id 
\"43e532ce9708cb367fea7d0e9758c03d4e6e01de017c417ddf1aa3b2eb513dd2\"" May 15 10:44:43.766028 kubelet[1856]: E0515 10:44:43.765875 1856 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" May 15 10:44:43.768292 env[1314]: time="2025-05-15T10:44:43.768264860Z" level=info msg="CreateContainer within sandbox \"2afdfb402556770a90777475b040cc39ad53e758b8724c7e72c030846bbfc629\" for container &ContainerMetadata{Name:kube-controller-manager,Attempt:0,}" May 15 10:44:43.768504 env[1314]: time="2025-05-15T10:44:43.768384134Z" level=info msg="CreateContainer within sandbox \"43e532ce9708cb367fea7d0e9758c03d4e6e01de017c417ddf1aa3b2eb513dd2\" for container &ContainerMetadata{Name:kube-apiserver,Attempt:0,}" May 15 10:44:43.891172 env[1314]: time="2025-05-15T10:44:43.891099937Z" level=info msg="CreateContainer within sandbox \"80b39bbfcb7559726b84f1c7ccf8332138019593c804bf0e942963c2c9b65d1a\" for &ContainerMetadata{Name:kube-scheduler,Attempt:0,} returns container id \"ee43f8ae0f859da0f1fe591d9f6eaf49b2697c48bd112f1037bda4363e95faaf\"" May 15 10:44:43.891886 env[1314]: time="2025-05-15T10:44:43.891844955Z" level=info msg="StartContainer for \"ee43f8ae0f859da0f1fe591d9f6eaf49b2697c48bd112f1037bda4363e95faaf\"" May 15 10:44:43.892414 env[1314]: time="2025-05-15T10:44:43.892371955Z" level=info msg="CreateContainer within sandbox \"2afdfb402556770a90777475b040cc39ad53e758b8724c7e72c030846bbfc629\" for &ContainerMetadata{Name:kube-controller-manager,Attempt:0,} returns container id \"214ce3da5a310d581948a25182f02ce5613a608641ddff8eeace05e23ff85541\"" May 15 10:44:43.892795 env[1314]: time="2025-05-15T10:44:43.892758641Z" level=info msg="StartContainer for \"214ce3da5a310d581948a25182f02ce5613a608641ddff8eeace05e23ff85541\"" May 15 10:44:43.893961 env[1314]: time="2025-05-15T10:44:43.893923287Z" level=info msg="CreateContainer within sandbox 
\"43e532ce9708cb367fea7d0e9758c03d4e6e01de017c417ddf1aa3b2eb513dd2\" for &ContainerMetadata{Name:kube-apiserver,Attempt:0,} returns container id \"5a02439ef6bb59e4eebbd0a246d447297afbe2db47d8273aea3c27f63734f856\"" May 15 10:44:43.894297 env[1314]: time="2025-05-15T10:44:43.894270760Z" level=info msg="StartContainer for \"5a02439ef6bb59e4eebbd0a246d447297afbe2db47d8273aea3c27f63734f856\"" May 15 10:44:43.960887 env[1314]: time="2025-05-15T10:44:43.953694330Z" level=info msg="StartContainer for \"214ce3da5a310d581948a25182f02ce5613a608641ddff8eeace05e23ff85541\" returns successfully" May 15 10:44:43.960887 env[1314]: time="2025-05-15T10:44:43.953763019Z" level=info msg="StartContainer for \"ee43f8ae0f859da0f1fe591d9f6eaf49b2697c48bd112f1037bda4363e95faaf\" returns successfully" May 15 10:44:43.973948 env[1314]: time="2025-05-15T10:44:43.973889601Z" level=info msg="StartContainer for \"5a02439ef6bb59e4eebbd0a246d447297afbe2db47d8273aea3c27f63734f856\" returns successfully" May 15 10:44:44.006849 update_engine[1298]: I0515 10:44:44.006779 1298 update_attempter.cc:509] Updating boot flags... 
May 15 10:44:44.382556 kubelet[1856]: E0515 10:44:44.382509 1856 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" May 15 10:44:44.384650 kubelet[1856]: E0515 10:44:44.384610 1856 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" May 15 10:44:44.386611 kubelet[1856]: E0515 10:44:44.386585 1856 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" May 15 10:44:44.466725 kubelet[1856]: I0515 10:44:44.466669 1856 kubelet_node_status.go:73] "Attempting to register node" node="localhost" May 15 10:44:44.825388 kubelet[1856]: E0515 10:44:44.825337 1856 nodelease.go:49] "Failed to get node when trying to set owner ref to the node lease" err="nodes \"localhost\" not found" node="localhost" May 15 10:44:45.038319 kubelet[1856]: I0515 10:44:45.038277 1856 kubelet_node_status.go:76] "Successfully registered node" node="localhost" May 15 10:44:45.052746 kubelet[1856]: E0515 10:44:45.052685 1856 kubelet_node_status.go:462] "Error getting the current node from lister" err="node \"localhost\" not found" May 15 10:44:45.153143 kubelet[1856]: E0515 10:44:45.153007 1856 kubelet_node_status.go:462] "Error getting the current node from lister" err="node \"localhost\" not found" May 15 10:44:45.318081 kubelet[1856]: I0515 10:44:45.318004 1856 apiserver.go:52] "Watching apiserver" May 15 10:44:45.356526 kubelet[1856]: I0515 10:44:45.356463 1856 desired_state_of_world_populator.go:157] "Finished populating initial desired state of world" May 15 10:44:45.393345 kubelet[1856]: E0515 10:44:45.393303 1856 kubelet.go:1928] "Failed creating a mirror pod for" err="pods \"kube-apiserver-localhost\" is forbidden: no PriorityClass with name 
system-node-critical was found" pod="kube-system/kube-apiserver-localhost" May 15 10:44:45.393780 kubelet[1856]: E0515 10:44:45.393759 1856 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" May 15 10:44:45.466041 kubelet[1856]: E0515 10:44:45.465993 1856 kubelet.go:1928] "Failed creating a mirror pod for" err="pods \"kube-scheduler-localhost\" is forbidden: no PriorityClass with name system-node-critical was found" pod="kube-system/kube-scheduler-localhost" May 15 10:44:45.466310 kubelet[1856]: E0515 10:44:45.466285 1856 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" May 15 10:44:46.410193 kubelet[1856]: E0515 10:44:46.410160 1856 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" May 15 10:44:46.867766 systemd[1]: Reloading. May 15 10:44:46.934721 /usr/lib/systemd/system-generators/torcx-generator[2175]: time="2025-05-15T10:44:46Z" level=debug msg="common configuration parsed" base_dir=/var/lib/torcx/ conf_dir=/etc/torcx/ run_dir=/run/torcx/ store_paths="[/usr/share/torcx/store /usr/share/oem/torcx/store/3510.3.100 /usr/share/oem/torcx/store /var/lib/torcx/store/3510.3.100 /var/lib/torcx/store]" May 15 10:44:46.935072 /usr/lib/systemd/system-generators/torcx-generator[2175]: time="2025-05-15T10:44:46Z" level=info msg="torcx already run" May 15 10:44:47.007530 systemd[1]: /usr/lib/systemd/system/locksmithd.service:8: Unit uses CPUShares=; please use CPUWeight= instead. Support for CPUShares= will be removed soon. May 15 10:44:47.007548 systemd[1]: /usr/lib/systemd/system/locksmithd.service:9: Unit uses MemoryLimit=; please use MemoryMax= instead. Support for MemoryLimit= will be removed soon. 
May 15 10:44:47.026297 systemd[1]: /run/systemd/system/docker.socket:8: ListenStream= references a path below legacy directory /var/run/, updating /var/run/docker.sock → /run/docker.sock; please update the unit file accordingly. May 15 10:44:47.100213 systemd[1]: Stopping kubelet.service... May 15 10:44:47.100429 kubelet[1856]: E0515 10:44:47.100142 1856 event.go:319] "Unable to write event (broadcaster is shut down)" event="&Event{ObjectMeta:{localhost.183fad7102e8da76 default 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Node,Namespace:,Name:localhost,UID:localhost,APIVersion:,ResourceVersion:,FieldPath:,},Reason:Starting,Message:Starting kubelet.,Source:EventSource{Component:kubelet,Host:localhost,},FirstTimestamp:2025-05-15 10:44:41.334766198 +0000 UTC m=+0.400890939,LastTimestamp:2025-05-15 10:44:41.334766198 +0000 UTC m=+0.400890939,Count:1,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:localhost,}" May 15 10:44:47.121185 systemd[1]: kubelet.service: Deactivated successfully. May 15 10:44:47.121570 systemd[1]: Stopped kubelet.service. May 15 10:44:47.123947 systemd[1]: Starting kubelet.service... May 15 10:44:47.213573 systemd[1]: Started kubelet.service. May 15 10:44:47.255088 kubelet[2231]: Flag --container-runtime-endpoint has been deprecated, This parameter should be set via the config file specified by the Kubelet's --config flag. See https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/ for more information. May 15 10:44:47.255088 kubelet[2231]: Flag --pod-infra-container-image has been deprecated, will be removed in a future release. Image garbage collector will get sandbox image information from CRI. May 15 10:44:47.255088 kubelet[2231]: Flag --volume-plugin-dir has been deprecated, This parameter should be set via the config file specified by the Kubelet's --config flag. 
See https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/ for more information. May 15 10:44:47.255544 kubelet[2231]: I0515 10:44:47.255113 2231 server.go:205] "--pod-infra-container-image will not be pruned by the image garbage collector in kubelet and should also be set in the remote runtime" May 15 10:44:47.260526 kubelet[2231]: I0515 10:44:47.260493 2231 server.go:484] "Kubelet version" kubeletVersion="v1.30.1" May 15 10:44:47.260677 kubelet[2231]: I0515 10:44:47.260662 2231 server.go:486] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK="" May 15 10:44:47.261058 kubelet[2231]: I0515 10:44:47.261042 2231 server.go:927] "Client rotation is on, will bootstrap in background" May 15 10:44:47.263143 kubelet[2231]: I0515 10:44:47.263082 2231 certificate_store.go:130] Loading cert/key pair from "/var/lib/kubelet/pki/kubelet-client-current.pem". May 15 10:44:47.264733 kubelet[2231]: I0515 10:44:47.264697 2231 dynamic_cafile_content.go:157] "Starting controller" name="client-ca-bundle::/etc/kubernetes/pki/ca.crt" May 15 10:44:47.274341 kubelet[2231]: I0515 10:44:47.274307 2231 server.go:742] "--cgroups-per-qos enabled, but --cgroup-root was not specified. 
defaulting to /" May 15 10:44:47.275379 kubelet[2231]: I0515 10:44:47.275321 2231 container_manager_linux.go:265] "Container manager verified user specified cgroup-root exists" cgroupRoot=[] May 15 10:44:47.276128 kubelet[2231]: I0515 10:44:47.275807 2231 container_manager_linux.go:270] "Creating Container Manager object based on Node Config" nodeConfig={"NodeName":"localhost","RuntimeCgroupsName":"","SystemCgroupsName":"","KubeletCgroupsName":"","KubeletOOMScoreAdj":-999,"ContainerRuntime":"","CgroupsPerQOS":true,"CgroupRoot":"/","CgroupDriver":"cgroupfs","KubeletRootDir":"/var/lib/kubelet","ProtectKernelDefaults":false,"KubeReservedCgroupName":"","SystemReservedCgroupName":"","ReservedSystemCPUs":{},"EnforceNodeAllocatable":{"pods":{}},"KubeReserved":null,"SystemReserved":null,"HardEvictionThresholds":[{"Signal":"imagefs.inodesFree","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.05},"GracePeriod":0,"MinReclaim":null},{"Signal":"memory.available","Operator":"LessThan","Value":{"Quantity":"100Mi","Percentage":0},"GracePeriod":0,"MinReclaim":null},{"Signal":"nodefs.available","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.1},"GracePeriod":0,"MinReclaim":null},{"Signal":"nodefs.inodesFree","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.05},"GracePeriod":0,"MinReclaim":null},{"Signal":"imagefs.available","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.15},"GracePeriod":0,"MinReclaim":null}],"QOSReserved":{},"CPUManagerPolicy":"none","CPUManagerPolicyOptions":null,"TopologyManagerScope":"container","CPUManagerReconcilePeriod":10000000000,"ExperimentalMemoryManagerPolicy":"None","ExperimentalMemoryManagerReservedMemory":null,"PodPidsLimit":-1,"EnforceCPULimits":true,"CPUCFSQuotaPeriod":100000000,"TopologyManagerPolicy":"none","TopologyManagerPolicyOptions":null} May 15 10:44:47.276259 kubelet[2231]: I0515 10:44:47.276140 2231 topology_manager.go:138] "Creating topology manager with none policy" May 15 
10:44:47.276259 kubelet[2231]: I0515 10:44:47.276153 2231 container_manager_linux.go:301] "Creating device plugin manager" May 15 10:44:47.276259 kubelet[2231]: I0515 10:44:47.276193 2231 state_mem.go:36] "Initialized new in-memory state store" May 15 10:44:47.276528 kubelet[2231]: I0515 10:44:47.276324 2231 kubelet.go:400] "Attempting to sync node with API server" May 15 10:44:47.276817 kubelet[2231]: I0515 10:44:47.276398 2231 kubelet.go:301] "Adding static pod path" path="/etc/kubernetes/manifests" May 15 10:44:47.277154 kubelet[2231]: I0515 10:44:47.277119 2231 kubelet.go:312] "Adding apiserver pod source" May 15 10:44:47.277229 kubelet[2231]: I0515 10:44:47.277161 2231 apiserver.go:42] "Waiting for node sync before watching apiserver pods" May 15 10:44:47.279813 kubelet[2231]: I0515 10:44:47.279777 2231 kuberuntime_manager.go:261] "Container runtime initialized" containerRuntime="containerd" version="1.6.16" apiVersion="v1" May 15 10:44:47.279988 kubelet[2231]: I0515 10:44:47.279965 2231 kubelet.go:815] "Not starting ClusterTrustBundle informer because we are in static kubelet mode" May 15 10:44:47.280417 kubelet[2231]: I0515 10:44:47.280398 2231 server.go:1264] "Started kubelet" May 15 10:44:47.284984 kubelet[2231]: I0515 10:44:47.282247 2231 ratelimit.go:55] "Setting rate limiting for endpoint" service="podresources" qps=100 burstTokens=10 May 15 10:44:47.284984 kubelet[2231]: I0515 10:44:47.282390 2231 fs_resource_analyzer.go:67] "Starting FS ResourceAnalyzer" May 15 10:44:47.284984 kubelet[2231]: I0515 10:44:47.282484 2231 server.go:227] "Starting to serve the podresources API" endpoint="unix:/var/lib/kubelet/pod-resources/kubelet.sock" May 15 10:44:47.284984 kubelet[2231]: I0515 10:44:47.282513 2231 server.go:163] "Starting to listen" address="0.0.0.0" port=10250 May 15 10:44:47.284984 kubelet[2231]: I0515 10:44:47.284030 2231 volume_manager.go:291] "Starting Kubelet Volume Manager" May 15 10:44:47.284984 kubelet[2231]: I0515 10:44:47.284098 2231 
desired_state_of_world_populator.go:149] "Desired state populator starts to run" May 15 10:44:47.284984 kubelet[2231]: I0515 10:44:47.284206 2231 reconciler.go:26] "Reconciler: start to sync state" May 15 10:44:47.284984 kubelet[2231]: E0515 10:44:47.284202 2231 kubelet.go:1467] "Image garbage collection failed once. Stats initialization may not have completed yet" err="invalid capacity 0 on image filesystem" May 15 10:44:47.286670 kubelet[2231]: I0515 10:44:47.286506 2231 server.go:455] "Adding debug handlers to kubelet server" May 15 10:44:47.288388 kubelet[2231]: I0515 10:44:47.288356 2231 factory.go:221] Registration of the containerd container factory successfully May 15 10:44:47.288388 kubelet[2231]: I0515 10:44:47.288371 2231 factory.go:221] Registration of the systemd container factory successfully May 15 10:44:47.288550 kubelet[2231]: I0515 10:44:47.288434 2231 factory.go:219] Registration of the crio container factory failed: Get "http://%2Fvar%2Frun%2Fcrio%2Fcrio.sock/info": dial unix /var/run/crio/crio.sock: connect: no such file or directory May 15 10:44:47.299135 kubelet[2231]: I0515 10:44:47.299076 2231 kubelet_network_linux.go:50] "Initialized iptables rules." protocol="IPv4" May 15 10:44:47.300160 kubelet[2231]: I0515 10:44:47.300145 2231 kubelet_network_linux.go:50] "Initialized iptables rules." 
protocol="IPv6" May 15 10:44:47.300290 kubelet[2231]: I0515 10:44:47.300276 2231 status_manager.go:217] "Starting to sync pod status with apiserver" May 15 10:44:47.300386 kubelet[2231]: I0515 10:44:47.300371 2231 kubelet.go:2337] "Starting kubelet main sync loop" May 15 10:44:47.300535 kubelet[2231]: E0515 10:44:47.300515 2231 kubelet.go:2361] "Skipping pod synchronization" err="[container runtime status check may not have completed yet, PLEG is not healthy: pleg has yet to be successful]" May 15 10:44:47.329836 kubelet[2231]: I0515 10:44:47.329801 2231 cpu_manager.go:214] "Starting CPU manager" policy="none" May 15 10:44:47.329836 kubelet[2231]: I0515 10:44:47.329820 2231 cpu_manager.go:215] "Reconciling" reconcilePeriod="10s" May 15 10:44:47.329836 kubelet[2231]: I0515 10:44:47.329839 2231 state_mem.go:36] "Initialized new in-memory state store" May 15 10:44:47.330118 kubelet[2231]: I0515 10:44:47.329965 2231 state_mem.go:88] "Updated default CPUSet" cpuSet="" May 15 10:44:47.330118 kubelet[2231]: I0515 10:44:47.329975 2231 state_mem.go:96] "Updated CPUSet assignments" assignments={} May 15 10:44:47.330118 kubelet[2231]: I0515 10:44:47.329994 2231 policy_none.go:49] "None policy: Start" May 15 10:44:47.330696 kubelet[2231]: I0515 10:44:47.330670 2231 memory_manager.go:170] "Starting memorymanager" policy="None" May 15 10:44:47.330696 kubelet[2231]: I0515 10:44:47.330712 2231 state_mem.go:35] "Initializing new in-memory state store" May 15 10:44:47.330914 kubelet[2231]: I0515 10:44:47.330887 2231 state_mem.go:75] "Updated machine memory state" May 15 10:44:47.332060 kubelet[2231]: I0515 10:44:47.332039 2231 manager.go:479] "Failed to read data from checkpoint" checkpoint="kubelet_internal_checkpoint" err="checkpoint is not found" May 15 10:44:47.332245 kubelet[2231]: I0515 10:44:47.332208 2231 container_log_manager.go:186] "Initializing container log rotate workers" workers=1 monitorPeriod="10s" May 15 10:44:47.332358 kubelet[2231]: I0515 10:44:47.332309 2231 
plugin_manager.go:118] "Starting Kubelet Plugin Manager" May 15 10:44:47.388469 kubelet[2231]: I0515 10:44:47.388362 2231 kubelet_node_status.go:73] "Attempting to register node" node="localhost" May 15 10:44:47.401535 kubelet[2231]: I0515 10:44:47.401479 2231 topology_manager.go:215] "Topology Admit Handler" podUID="e850497759c7f871d2e8e7dd2865480a" podNamespace="kube-system" podName="kube-apiserver-localhost" May 15 10:44:47.401747 kubelet[2231]: I0515 10:44:47.401578 2231 topology_manager.go:215] "Topology Admit Handler" podUID="b20b39a8540dba87b5883a6f0f602dba" podNamespace="kube-system" podName="kube-controller-manager-localhost" May 15 10:44:47.401747 kubelet[2231]: I0515 10:44:47.401647 2231 topology_manager.go:215] "Topology Admit Handler" podUID="6ece95f10dbffa04b25ec3439a115512" podNamespace="kube-system" podName="kube-scheduler-localhost" May 15 10:44:47.448399 kubelet[2231]: E0515 10:44:47.448360 2231 kubelet.go:1928] "Failed creating a mirror pod for" err="pods \"kube-apiserver-localhost\" already exists" pod="kube-system/kube-apiserver-localhost" May 15 10:44:47.449394 kubelet[2231]: I0515 10:44:47.449353 2231 kubelet_node_status.go:112] "Node was previously registered" node="localhost" May 15 10:44:47.449488 kubelet[2231]: I0515 10:44:47.449459 2231 kubelet_node_status.go:76] "Successfully registered node" node="localhost" May 15 10:44:47.484642 kubelet[2231]: I0515 10:44:47.484570 2231 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kubeconfig\" (UniqueName: \"kubernetes.io/host-path/b20b39a8540dba87b5883a6f0f602dba-kubeconfig\") pod \"kube-controller-manager-localhost\" (UID: \"b20b39a8540dba87b5883a6f0f602dba\") " pod="kube-system/kube-controller-manager-localhost" May 15 10:44:47.484841 kubelet[2231]: I0515 10:44:47.484652 2231 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"k8s-certs\" (UniqueName: 
\"kubernetes.io/host-path/b20b39a8540dba87b5883a6f0f602dba-k8s-certs\") pod \"kube-controller-manager-localhost\" (UID: \"b20b39a8540dba87b5883a6f0f602dba\") " pod="kube-system/kube-controller-manager-localhost" May 15 10:44:47.484841 kubelet[2231]: I0515 10:44:47.484744 2231 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"usr-share-ca-certificates\" (UniqueName: \"kubernetes.io/host-path/b20b39a8540dba87b5883a6f0f602dba-usr-share-ca-certificates\") pod \"kube-controller-manager-localhost\" (UID: \"b20b39a8540dba87b5883a6f0f602dba\") " pod="kube-system/kube-controller-manager-localhost" May 15 10:44:47.484841 kubelet[2231]: I0515 10:44:47.484806 2231 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kubeconfig\" (UniqueName: \"kubernetes.io/host-path/6ece95f10dbffa04b25ec3439a115512-kubeconfig\") pod \"kube-scheduler-localhost\" (UID: \"6ece95f10dbffa04b25ec3439a115512\") " pod="kube-system/kube-scheduler-localhost" May 15 10:44:47.484841 kubelet[2231]: I0515 10:44:47.484833 2231 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ca-certs\" (UniqueName: \"kubernetes.io/host-path/e850497759c7f871d2e8e7dd2865480a-ca-certs\") pod \"kube-apiserver-localhost\" (UID: \"e850497759c7f871d2e8e7dd2865480a\") " pod="kube-system/kube-apiserver-localhost" May 15 10:44:47.484987 kubelet[2231]: I0515 10:44:47.484867 2231 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"k8s-certs\" (UniqueName: \"kubernetes.io/host-path/e850497759c7f871d2e8e7dd2865480a-k8s-certs\") pod \"kube-apiserver-localhost\" (UID: \"e850497759c7f871d2e8e7dd2865480a\") " pod="kube-system/kube-apiserver-localhost" May 15 10:44:47.484987 kubelet[2231]: I0515 10:44:47.484899 2231 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"usr-share-ca-certificates\" (UniqueName: 
\"kubernetes.io/host-path/e850497759c7f871d2e8e7dd2865480a-usr-share-ca-certificates\") pod \"kube-apiserver-localhost\" (UID: \"e850497759c7f871d2e8e7dd2865480a\") " pod="kube-system/kube-apiserver-localhost" May 15 10:44:47.484987 kubelet[2231]: I0515 10:44:47.484924 2231 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ca-certs\" (UniqueName: \"kubernetes.io/host-path/b20b39a8540dba87b5883a6f0f602dba-ca-certs\") pod \"kube-controller-manager-localhost\" (UID: \"b20b39a8540dba87b5883a6f0f602dba\") " pod="kube-system/kube-controller-manager-localhost" May 15 10:44:47.484987 kubelet[2231]: I0515 10:44:47.484949 2231 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"flexvolume-dir\" (UniqueName: \"kubernetes.io/host-path/b20b39a8540dba87b5883a6f0f602dba-flexvolume-dir\") pod \"kube-controller-manager-localhost\" (UID: \"b20b39a8540dba87b5883a6f0f602dba\") " pod="kube-system/kube-controller-manager-localhost" May 15 10:44:47.749751 kubelet[2231]: E0515 10:44:47.749672 2231 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" May 15 10:44:47.749751 kubelet[2231]: E0515 10:44:47.749726 2231 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" May 15 10:44:47.750022 kubelet[2231]: E0515 10:44:47.749795 2231 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" May 15 10:44:47.865076 sudo[2265]: root : PWD=/home/core ; USER=root ; COMMAND=/usr/bin/tar -xf /opt/bin/cilium.tar.gz -C /opt/bin May 15 10:44:47.865273 sudo[2265]: pam_unix(sudo:session): session opened for user root(uid=0) by (uid=0) May 15 10:44:48.278208 
kubelet[2231]: I0515 10:44:48.278142 2231 apiserver.go:52] "Watching apiserver" May 15 10:44:48.284825 kubelet[2231]: I0515 10:44:48.284774 2231 desired_state_of_world_populator.go:157] "Finished populating initial desired state of world" May 15 10:44:48.309953 kubelet[2231]: E0515 10:44:48.309930 2231 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" May 15 10:44:48.311092 kubelet[2231]: E0515 10:44:48.310697 2231 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" May 15 10:44:48.313530 sudo[2265]: pam_unix(sudo:session): session closed for user root May 15 10:44:48.315159 kubelet[2231]: E0515 10:44:48.315125 2231 kubelet.go:1928] "Failed creating a mirror pod for" err="pods \"kube-apiserver-localhost\" already exists" pod="kube-system/kube-apiserver-localhost" May 15 10:44:48.315498 kubelet[2231]: E0515 10:44:48.315473 2231 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" May 15 10:44:48.326129 kubelet[2231]: I0515 10:44:48.326049 2231 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/kube-scheduler-localhost" podStartSLOduration=1.326033931 podStartE2EDuration="1.326033931s" podCreationTimestamp="2025-05-15 10:44:47 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-05-15 10:44:48.32573525 +0000 UTC m=+1.108060663" watchObservedRunningTime="2025-05-15 10:44:48.326033931 +0000 UTC m=+1.108359344" May 15 10:44:48.335300 kubelet[2231]: I0515 10:44:48.335213 2231 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/kube-apiserver-localhost" 
podStartSLOduration=2.335191969 podStartE2EDuration="2.335191969s" podCreationTimestamp="2025-05-15 10:44:46 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-05-15 10:44:48.335019546 +0000 UTC m=+1.117344959" watchObservedRunningTime="2025-05-15 10:44:48.335191969 +0000 UTC m=+1.117517383" May 15 10:44:48.349226 kubelet[2231]: I0515 10:44:48.349163 2231 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/kube-controller-manager-localhost" podStartSLOduration=1.34914112 podStartE2EDuration="1.34914112s" podCreationTimestamp="2025-05-15 10:44:47 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-05-15 10:44:48.342536515 +0000 UTC m=+1.124861918" watchObservedRunningTime="2025-05-15 10:44:48.34914112 +0000 UTC m=+1.131466533" May 15 10:44:49.311864 kubelet[2231]: E0515 10:44:49.311805 2231 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" May 15 10:44:49.790587 sudo[1448]: pam_unix(sudo:session): session closed for user root May 15 10:44:49.791664 sshd[1442]: pam_unix(sshd:session): session closed for user core May 15 10:44:49.793730 systemd[1]: sshd@6-10.0.0.96:22-10.0.0.1:43458.service: Deactivated successfully. May 15 10:44:49.794747 systemd-logind[1294]: Session 7 logged out. Waiting for processes to exit. May 15 10:44:49.794806 systemd[1]: session-7.scope: Deactivated successfully. May 15 10:44:49.795490 systemd-logind[1294]: Removed session 7. 
May 15 10:44:50.710158 kubelet[2231]: E0515 10:44:50.710122 2231 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" May 15 10:44:53.986239 kubelet[2231]: E0515 10:44:53.986194 2231 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" May 15 10:44:54.148945 kubelet[2231]: E0515 10:44:54.148903 2231 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" May 15 10:44:54.318096 kubelet[2231]: E0515 10:44:54.317216 2231 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" May 15 10:44:54.318096 kubelet[2231]: E0515 10:44:54.317348 2231 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" May 15 10:45:00.713714 kubelet[2231]: E0515 10:45:00.713678 2231 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" May 15 10:45:02.275659 kubelet[2231]: I0515 10:45:02.275598 2231 kuberuntime_manager.go:1523] "Updating runtime config through cri with podcidr" CIDR="192.168.0.0/24" May 15 10:45:02.276477 env[1314]: time="2025-05-15T10:45:02.276426061Z" level=info msg="No cni config template is specified, wait for other system components to drop the config." 
May 15 10:45:02.277121 kubelet[2231]: I0515 10:45:02.277087 2231 kubelet_network.go:61] "Updating Pod CIDR" originalPodCIDR="" newPodCIDR="192.168.0.0/24" May 15 10:45:03.225029 kubelet[2231]: I0515 10:45:03.224980 2231 topology_manager.go:215] "Topology Admit Handler" podUID="ddf85c33-30ef-409f-a6ca-62eb53943349" podNamespace="kube-system" podName="kube-proxy-25zhs" May 15 10:45:03.234060 kubelet[2231]: I0515 10:45:03.234000 2231 topology_manager.go:215] "Topology Admit Handler" podUID="f6539885-94f1-4060-8592-f691eb278487" podNamespace="kube-system" podName="cilium-ppmc9" May 15 10:45:03.280339 kubelet[2231]: I0515 10:45:03.280287 2231 topology_manager.go:215] "Topology Admit Handler" podUID="f055928f-4e13-4f87-ae9d-e8b48941c8c9" podNamespace="kube-system" podName="cilium-operator-599987898-dnr77" May 15 10:45:03.285954 kubelet[2231]: I0515 10:45:03.285882 2231 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"etc-cni-netd\" (UniqueName: \"kubernetes.io/host-path/f6539885-94f1-4060-8592-f691eb278487-etc-cni-netd\") pod \"cilium-ppmc9\" (UID: \"f6539885-94f1-4060-8592-f691eb278487\") " pod="kube-system/cilium-ppmc9" May 15 10:45:03.285954 kubelet[2231]: I0515 10:45:03.285955 2231 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"hostproc\" (UniqueName: \"kubernetes.io/host-path/f6539885-94f1-4060-8592-f691eb278487-hostproc\") pod \"cilium-ppmc9\" (UID: \"f6539885-94f1-4060-8592-f691eb278487\") " pod="kube-system/cilium-ppmc9" May 15 10:45:03.286178 kubelet[2231]: I0515 10:45:03.285989 2231 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-proc-sys-kernel\" (UniqueName: \"kubernetes.io/host-path/f6539885-94f1-4060-8592-f691eb278487-host-proc-sys-kernel\") pod \"cilium-ppmc9\" (UID: \"f6539885-94f1-4060-8592-f691eb278487\") " pod="kube-system/cilium-ppmc9" May 15 10:45:03.286178 kubelet[2231]: I0515 10:45:03.286019 2231 
reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cni-path\" (UniqueName: \"kubernetes.io/host-path/f6539885-94f1-4060-8592-f691eb278487-cni-path\") pod \"cilium-ppmc9\" (UID: \"f6539885-94f1-4060-8592-f691eb278487\") " pod="kube-system/cilium-ppmc9" May 15 10:45:03.286178 kubelet[2231]: I0515 10:45:03.286046 2231 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/f6539885-94f1-4060-8592-f691eb278487-lib-modules\") pod \"cilium-ppmc9\" (UID: \"f6539885-94f1-4060-8592-f691eb278487\") " pod="kube-system/cilium-ppmc9" May 15 10:45:03.286178 kubelet[2231]: I0515 10:45:03.286074 2231 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"xtables-lock\" (UniqueName: \"kubernetes.io/host-path/f6539885-94f1-4060-8592-f691eb278487-xtables-lock\") pod \"cilium-ppmc9\" (UID: \"f6539885-94f1-4060-8592-f691eb278487\") " pod="kube-system/cilium-ppmc9" May 15 10:45:03.286178 kubelet[2231]: I0515 10:45:03.286096 2231 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"hubble-tls\" (UniqueName: \"kubernetes.io/projected/f6539885-94f1-4060-8592-f691eb278487-hubble-tls\") pod \"cilium-ppmc9\" (UID: \"f6539885-94f1-4060-8592-f691eb278487\") " pod="kube-system/cilium-ppmc9" May 15 10:45:03.286178 kubelet[2231]: I0515 10:45:03.286126 2231 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-lpgsj\" (UniqueName: \"kubernetes.io/projected/ddf85c33-30ef-409f-a6ca-62eb53943349-kube-api-access-lpgsj\") pod \"kube-proxy-25zhs\" (UID: \"ddf85c33-30ef-409f-a6ca-62eb53943349\") " pod="kube-system/kube-proxy-25zhs" May 15 10:45:03.286417 kubelet[2231]: I0515 10:45:03.286157 2231 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cilium-config-path\" (UniqueName: 
\"kubernetes.io/configmap/f6539885-94f1-4060-8592-f691eb278487-cilium-config-path\") pod \"cilium-ppmc9\" (UID: \"f6539885-94f1-4060-8592-f691eb278487\") " pod="kube-system/cilium-ppmc9" May 15 10:45:03.286417 kubelet[2231]: I0515 10:45:03.286184 2231 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-nhpbq\" (UniqueName: \"kubernetes.io/projected/f6539885-94f1-4060-8592-f691eb278487-kube-api-access-nhpbq\") pod \"cilium-ppmc9\" (UID: \"f6539885-94f1-4060-8592-f691eb278487\") " pod="kube-system/cilium-ppmc9" May 15 10:45:03.286417 kubelet[2231]: I0515 10:45:03.286217 2231 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-proxy\" (UniqueName: \"kubernetes.io/configmap/ddf85c33-30ef-409f-a6ca-62eb53943349-kube-proxy\") pod \"kube-proxy-25zhs\" (UID: \"ddf85c33-30ef-409f-a6ca-62eb53943349\") " pod="kube-system/kube-proxy-25zhs" May 15 10:45:03.286417 kubelet[2231]: I0515 10:45:03.286243 2231 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cilium-run\" (UniqueName: \"kubernetes.io/host-path/f6539885-94f1-4060-8592-f691eb278487-cilium-run\") pod \"cilium-ppmc9\" (UID: \"f6539885-94f1-4060-8592-f691eb278487\") " pod="kube-system/cilium-ppmc9" May 15 10:45:03.286417 kubelet[2231]: I0515 10:45:03.286272 2231 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"bpf-maps\" (UniqueName: \"kubernetes.io/host-path/f6539885-94f1-4060-8592-f691eb278487-bpf-maps\") pod \"cilium-ppmc9\" (UID: \"f6539885-94f1-4060-8592-f691eb278487\") " pod="kube-system/cilium-ppmc9" May 15 10:45:03.286417 kubelet[2231]: I0515 10:45:03.286302 2231 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"xtables-lock\" (UniqueName: \"kubernetes.io/host-path/ddf85c33-30ef-409f-a6ca-62eb53943349-xtables-lock\") pod \"kube-proxy-25zhs\" (UID: 
\"ddf85c33-30ef-409f-a6ca-62eb53943349\") " pod="kube-system/kube-proxy-25zhs" May 15 10:45:03.286603 kubelet[2231]: I0515 10:45:03.286331 2231 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"clustermesh-secrets\" (UniqueName: \"kubernetes.io/secret/f6539885-94f1-4060-8592-f691eb278487-clustermesh-secrets\") pod \"cilium-ppmc9\" (UID: \"f6539885-94f1-4060-8592-f691eb278487\") " pod="kube-system/cilium-ppmc9" May 15 10:45:03.286603 kubelet[2231]: I0515 10:45:03.286359 2231 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/ddf85c33-30ef-409f-a6ca-62eb53943349-lib-modules\") pod \"kube-proxy-25zhs\" (UID: \"ddf85c33-30ef-409f-a6ca-62eb53943349\") " pod="kube-system/kube-proxy-25zhs" May 15 10:45:03.286603 kubelet[2231]: I0515 10:45:03.286396 2231 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cilium-cgroup\" (UniqueName: \"kubernetes.io/host-path/f6539885-94f1-4060-8592-f691eb278487-cilium-cgroup\") pod \"cilium-ppmc9\" (UID: \"f6539885-94f1-4060-8592-f691eb278487\") " pod="kube-system/cilium-ppmc9" May 15 10:45:03.286603 kubelet[2231]: I0515 10:45:03.286428 2231 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-proc-sys-net\" (UniqueName: \"kubernetes.io/host-path/f6539885-94f1-4060-8592-f691eb278487-host-proc-sys-net\") pod \"cilium-ppmc9\" (UID: \"f6539885-94f1-4060-8592-f691eb278487\") " pod="kube-system/cilium-ppmc9" May 15 10:45:03.386681 kubelet[2231]: I0515 10:45:03.386605 2231 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cilium-config-path\" (UniqueName: \"kubernetes.io/configmap/f055928f-4e13-4f87-ae9d-e8b48941c8c9-cilium-config-path\") pod \"cilium-operator-599987898-dnr77\" (UID: \"f055928f-4e13-4f87-ae9d-e8b48941c8c9\") " 
pod="kube-system/cilium-operator-599987898-dnr77" May 15 10:45:03.386922 kubelet[2231]: I0515 10:45:03.386902 2231 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-kdjx5\" (UniqueName: \"kubernetes.io/projected/f055928f-4e13-4f87-ae9d-e8b48941c8c9-kube-api-access-kdjx5\") pod \"cilium-operator-599987898-dnr77\" (UID: \"f055928f-4e13-4f87-ae9d-e8b48941c8c9\") " pod="kube-system/cilium-operator-599987898-dnr77" May 15 10:45:03.530571 kubelet[2231]: E0515 10:45:03.530424 2231 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" May 15 10:45:03.531414 env[1314]: time="2025-05-15T10:45:03.531171494Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-proxy-25zhs,Uid:ddf85c33-30ef-409f-a6ca-62eb53943349,Namespace:kube-system,Attempt:0,}" May 15 10:45:03.537022 kubelet[2231]: E0515 10:45:03.536993 2231 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" May 15 10:45:03.537460 env[1314]: time="2025-05-15T10:45:03.537412965Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:cilium-ppmc9,Uid:f6539885-94f1-4060-8592-f691eb278487,Namespace:kube-system,Attempt:0,}" May 15 10:45:03.583338 kubelet[2231]: E0515 10:45:03.583276 2231 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" May 15 10:45:03.583947 env[1314]: time="2025-05-15T10:45:03.583900882Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:cilium-operator-599987898-dnr77,Uid:f055928f-4e13-4f87-ae9d-e8b48941c8c9,Namespace:kube-system,Attempt:0,}" May 15 10:45:04.729179 env[1314]: time="2025-05-15T10:45:04.729107670Z" level=info msg="loading plugin 
\"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 May 15 10:45:04.729830 env[1314]: time="2025-05-15T10:45:04.729784079Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 May 15 10:45:04.729830 env[1314]: time="2025-05-15T10:45:04.729810168Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 May 15 10:45:04.731547 env[1314]: time="2025-05-15T10:45:04.731437642Z" level=info msg="starting signal loop" namespace=k8s.io path=/run/containerd/io.containerd.runtime.v2.task/k8s.io/1dec96e0b69442ddbd975803b0ec29ba74e78104126a66962a1517e95e0e4ddb pid=2329 runtime=io.containerd.runc.v2 May 15 10:45:04.731547 env[1314]: time="2025-05-15T10:45:04.731330531Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 May 15 10:45:04.731547 env[1314]: time="2025-05-15T10:45:04.731385724Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 May 15 10:45:04.731547 env[1314]: time="2025-05-15T10:45:04.731408958Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 May 15 10:45:04.731747 env[1314]: time="2025-05-15T10:45:04.731576262Z" level=info msg="starting signal loop" namespace=k8s.io path=/run/containerd/io.containerd.runtime.v2.task/k8s.io/3875ce5784aef6c4ba5ae21ca62aef3d018ecc39cbc2e2eb95e5fb46aa66341e pid=2335 runtime=io.containerd.runc.v2 May 15 10:45:04.753435 env[1314]: time="2025-05-15T10:45:04.753260740Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." 
runtime=io.containerd.runc.v2 type=io.containerd.event.v1 May 15 10:45:04.753435 env[1314]: time="2025-05-15T10:45:04.753296527Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 May 15 10:45:04.753435 env[1314]: time="2025-05-15T10:45:04.753305844Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 May 15 10:45:04.753749 env[1314]: time="2025-05-15T10:45:04.753427773Z" level=info msg="starting signal loop" namespace=k8s.io path=/run/containerd/io.containerd.runtime.v2.task/k8s.io/c619fcfcfd47a88d84478a41dad55b7ae8c81c81205de5b0530371fd09e40c4e pid=2384 runtime=io.containerd.runc.v2 May 15 10:45:04.766195 env[1314]: time="2025-05-15T10:45:04.766137381Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:cilium-ppmc9,Uid:f6539885-94f1-4060-8592-f691eb278487,Namespace:kube-system,Attempt:0,} returns sandbox id \"1dec96e0b69442ddbd975803b0ec29ba74e78104126a66962a1517e95e0e4ddb\"" May 15 10:45:04.766721 kubelet[2231]: E0515 10:45:04.766698 2231 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" May 15 10:45:04.768212 env[1314]: time="2025-05-15T10:45:04.767880962Z" level=info msg="PullImage \"quay.io/cilium/cilium:v1.12.5@sha256:06ce2b0a0a472e73334a7504ee5c5d8b2e2d7b72ef728ad94e564740dd505be5\"" May 15 10:45:04.795682 env[1314]: time="2025-05-15T10:45:04.795431596Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-proxy-25zhs,Uid:ddf85c33-30ef-409f-a6ca-62eb53943349,Namespace:kube-system,Attempt:0,} returns sandbox id \"3875ce5784aef6c4ba5ae21ca62aef3d018ecc39cbc2e2eb95e5fb46aa66341e\"" May 15 10:45:04.796690 kubelet[2231]: E0515 10:45:04.796565 2231 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the 
applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" May 15 10:45:04.800435 env[1314]: time="2025-05-15T10:45:04.800385160Z" level=info msg="CreateContainer within sandbox \"3875ce5784aef6c4ba5ae21ca62aef3d018ecc39cbc2e2eb95e5fb46aa66341e\" for container &ContainerMetadata{Name:kube-proxy,Attempt:0,}" May 15 10:45:04.813759 env[1314]: time="2025-05-15T10:45:04.813700454Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:cilium-operator-599987898-dnr77,Uid:f055928f-4e13-4f87-ae9d-e8b48941c8c9,Namespace:kube-system,Attempt:0,} returns sandbox id \"c619fcfcfd47a88d84478a41dad55b7ae8c81c81205de5b0530371fd09e40c4e\"" May 15 10:45:04.814538 kubelet[2231]: E0515 10:45:04.814513 2231 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" May 15 10:45:04.845291 env[1314]: time="2025-05-15T10:45:04.845217751Z" level=info msg="CreateContainer within sandbox \"3875ce5784aef6c4ba5ae21ca62aef3d018ecc39cbc2e2eb95e5fb46aa66341e\" for &ContainerMetadata{Name:kube-proxy,Attempt:0,} returns container id \"768100a9642064e3c5964558a5fd56cc595830bc07f57796e6e261fa6e323d41\"" May 15 10:45:04.845849 env[1314]: time="2025-05-15T10:45:04.845822505Z" level=info msg="StartContainer for \"768100a9642064e3c5964558a5fd56cc595830bc07f57796e6e261fa6e323d41\"" May 15 10:45:04.894965 env[1314]: time="2025-05-15T10:45:04.894901894Z" level=info msg="StartContainer for \"768100a9642064e3c5964558a5fd56cc595830bc07f57796e6e261fa6e323d41\" returns successfully" May 15 10:45:05.333843 kubelet[2231]: E0515 10:45:05.333782 2231 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" May 15 10:45:09.210555 systemd[1]: Started sshd@7-10.0.0.96:22-10.0.0.1:40464.service. 
May 15 10:45:09.258113 sshd[2602]: Accepted publickey for core from 10.0.0.1 port 40464 ssh2: RSA SHA256:haioSl9UPoE92ibERJujrg0rXVEisgSt061naG/EAtE
May 15 10:45:09.259561 sshd[2602]: pam_unix(sshd:session): session opened for user core(uid=500) by (uid=0)
May 15 10:45:09.264813 systemd-logind[1294]: New session 8 of user core.
May 15 10:45:09.265201 systemd[1]: Started session-8.scope.
May 15 10:45:09.379805 sshd[2602]: pam_unix(sshd:session): session closed for user core
May 15 10:45:09.381917 systemd-logind[1294]: Session 8 logged out. Waiting for processes to exit.
May 15 10:45:09.382167 systemd[1]: sshd@7-10.0.0.96:22-10.0.0.1:40464.service: Deactivated successfully.
May 15 10:45:09.382890 systemd[1]: session-8.scope: Deactivated successfully.
May 15 10:45:09.384090 systemd-logind[1294]: Removed session 8.
May 15 10:45:09.479861 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount2153891352.mount: Deactivated successfully.
May 15 10:45:13.178220 env[1314]: time="2025-05-15T10:45:13.178149100Z" level=info msg="ImageCreate event &ImageCreate{Name:quay.io/cilium/cilium@sha256:06ce2b0a0a472e73334a7504ee5c5d8b2e2d7b72ef728ad94e564740dd505be5,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}"
May 15 10:45:13.180263 env[1314]: time="2025-05-15T10:45:13.180224383Z" level=info msg="ImageCreate event &ImageCreate{Name:sha256:3e35b3e9f295e7748482d40ed499b0ff7961f1f128d479d8e6682b3245bba69b,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}"
May 15 10:45:13.181929 env[1314]: time="2025-05-15T10:45:13.181888245Z" level=info msg="ImageUpdate event &ImageUpdate{Name:quay.io/cilium/cilium@sha256:06ce2b0a0a472e73334a7504ee5c5d8b2e2d7b72ef728ad94e564740dd505be5,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}"
May 15 10:45:13.182427 env[1314]: time="2025-05-15T10:45:13.182387512Z" level=info msg="PullImage \"quay.io/cilium/cilium:v1.12.5@sha256:06ce2b0a0a472e73334a7504ee5c5d8b2e2d7b72ef728ad94e564740dd505be5\" returns image reference \"sha256:3e35b3e9f295e7748482d40ed499b0ff7961f1f128d479d8e6682b3245bba69b\""
May 15 10:45:13.183411 env[1314]: time="2025-05-15T10:45:13.183381788Z" level=info msg="PullImage \"quay.io/cilium/operator-generic:v1.12.5@sha256:b296eb7f0f7656a5cc19724f40a8a7121b7fd725278b7d61dc91fe0b7ffd7c0e\""
May 15 10:45:13.185361 env[1314]: time="2025-05-15T10:45:13.185323471Z" level=info msg="CreateContainer within sandbox \"1dec96e0b69442ddbd975803b0ec29ba74e78104126a66962a1517e95e0e4ddb\" for container &ContainerMetadata{Name:mount-cgroup,Attempt:0,}"
May 15 10:45:13.200246 env[1314]: time="2025-05-15T10:45:13.200195273Z" level=info msg="CreateContainer within sandbox \"1dec96e0b69442ddbd975803b0ec29ba74e78104126a66962a1517e95e0e4ddb\" for &ContainerMetadata{Name:mount-cgroup,Attempt:0,} returns container id \"c79b8fa4536fdafb28803b5bbb062c1de14a42aff0fc5951cf9af7d27fa9a563\""
May 15 10:45:13.200849 env[1314]: time="2025-05-15T10:45:13.200819854Z" level=info msg="StartContainer for \"c79b8fa4536fdafb28803b5bbb062c1de14a42aff0fc5951cf9af7d27fa9a563\""
May 15 10:45:13.755019 env[1314]: time="2025-05-15T10:45:13.754676755Z" level=info msg="StartContainer for \"c79b8fa4536fdafb28803b5bbb062c1de14a42aff0fc5951cf9af7d27fa9a563\" returns successfully"
May 15 10:45:13.777389 env[1314]: time="2025-05-15T10:45:13.777340796Z" level=info msg="shim disconnected" id=c79b8fa4536fdafb28803b5bbb062c1de14a42aff0fc5951cf9af7d27fa9a563
May 15 10:45:13.777389 env[1314]: time="2025-05-15T10:45:13.777387153Z" level=warning msg="cleaning up after shim disconnected" id=c79b8fa4536fdafb28803b5bbb062c1de14a42aff0fc5951cf9af7d27fa9a563 namespace=k8s.io
May 15 10:45:13.777389 env[1314]: time="2025-05-15T10:45:13.777396351Z" level=info msg="cleaning up dead shim"
May 15 10:45:13.783336 env[1314]: time="2025-05-15T10:45:13.783300608Z" level=warning msg="cleanup warnings time=\"2025-05-15T10:45:13Z\" level=info msg=\"starting signal loop\" namespace=k8s.io pid=2666 runtime=io.containerd.runc.v2\n"
May 15 10:45:14.196302 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-c79b8fa4536fdafb28803b5bbb062c1de14a42aff0fc5951cf9af7d27fa9a563-rootfs.mount: Deactivated successfully.
May 15 10:45:14.383276 systemd[1]: Started sshd@8-10.0.0.96:22-10.0.0.1:47718.service.
May 15 10:45:14.439053 sshd[2678]: Accepted publickey for core from 10.0.0.1 port 47718 ssh2: RSA SHA256:haioSl9UPoE92ibERJujrg0rXVEisgSt061naG/EAtE
May 15 10:45:14.439980 sshd[2678]: pam_unix(sshd:session): session opened for user core(uid=500) by (uid=0)
May 15 10:45:14.443668 systemd-logind[1294]: New session 9 of user core.
May 15 10:45:14.444476 systemd[1]: Started session-9.scope.
May 15 10:45:14.546866 sshd[2678]: pam_unix(sshd:session): session closed for user core
May 15 10:45:14.549253 systemd[1]: sshd@8-10.0.0.96:22-10.0.0.1:47718.service: Deactivated successfully.
May 15 10:45:14.550142 systemd[1]: session-9.scope: Deactivated successfully.
May 15 10:45:14.551128 systemd-logind[1294]: Session 9 logged out. Waiting for processes to exit.
May 15 10:45:14.552013 systemd-logind[1294]: Removed session 9.
May 15 10:45:14.759532 kubelet[2231]: E0515 10:45:14.759494 2231 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
May 15 10:45:14.761259 env[1314]: time="2025-05-15T10:45:14.761219461Z" level=info msg="CreateContainer within sandbox \"1dec96e0b69442ddbd975803b0ec29ba74e78104126a66962a1517e95e0e4ddb\" for container &ContainerMetadata{Name:apply-sysctl-overwrites,Attempt:0,}"
May 15 10:45:14.952938 kubelet[2231]: I0515 10:45:14.951919 2231 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/kube-proxy-25zhs" podStartSLOduration=11.951903358 podStartE2EDuration="11.951903358s" podCreationTimestamp="2025-05-15 10:45:03 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-05-15 10:45:05.342499145 +0000 UTC m=+18.124824558" watchObservedRunningTime="2025-05-15 10:45:14.951903358 +0000 UTC m=+27.734228771"
May 15 10:45:15.017263 env[1314]: time="2025-05-15T10:45:15.017198754Z" level=info msg="CreateContainer within sandbox \"1dec96e0b69442ddbd975803b0ec29ba74e78104126a66962a1517e95e0e4ddb\" for &ContainerMetadata{Name:apply-sysctl-overwrites,Attempt:0,} returns container id \"931c1fbadb9a0dc943722f048560ad78031dcc555f4701e5f0bfa7c435f17a3c\""
May 15 10:45:15.017842 env[1314]: time="2025-05-15T10:45:15.017804821Z" level=info msg="StartContainer for \"931c1fbadb9a0dc943722f048560ad78031dcc555f4701e5f0bfa7c435f17a3c\""
May 15 10:45:15.060888 env[1314]: time="2025-05-15T10:45:15.060835614Z" level=info msg="StartContainer for \"931c1fbadb9a0dc943722f048560ad78031dcc555f4701e5f0bfa7c435f17a3c\" returns successfully"
May 15 10:45:15.067814 systemd[1]: systemd-sysctl.service: Deactivated successfully.
May 15 10:45:15.068052 systemd[1]: Stopped systemd-sysctl.service.
May 15 10:45:15.068239 systemd[1]: Stopping systemd-sysctl.service...
May 15 10:45:15.069788 systemd[1]: Starting systemd-sysctl.service...
May 15 10:45:15.079736 systemd[1]: Finished systemd-sysctl.service.
May 15 10:45:15.092570 env[1314]: time="2025-05-15T10:45:15.092509159Z" level=info msg="shim disconnected" id=931c1fbadb9a0dc943722f048560ad78031dcc555f4701e5f0bfa7c435f17a3c
May 15 10:45:15.092570 env[1314]: time="2025-05-15T10:45:15.092552921Z" level=warning msg="cleaning up after shim disconnected" id=931c1fbadb9a0dc943722f048560ad78031dcc555f4701e5f0bfa7c435f17a3c namespace=k8s.io
May 15 10:45:15.092570 env[1314]: time="2025-05-15T10:45:15.092561417Z" level=info msg="cleaning up dead shim"
May 15 10:45:15.100116 env[1314]: time="2025-05-15T10:45:15.100065676Z" level=warning msg="cleanup warnings time=\"2025-05-15T10:45:15Z\" level=info msg=\"starting signal loop\" namespace=k8s.io pid=2746 runtime=io.containerd.runc.v2\n"
May 15 10:45:15.195916 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-931c1fbadb9a0dc943722f048560ad78031dcc555f4701e5f0bfa7c435f17a3c-rootfs.mount: Deactivated successfully.
May 15 10:45:15.402965 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount3153486731.mount: Deactivated successfully.
May 15 10:45:15.763863 kubelet[2231]: E0515 10:45:15.763136 2231 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
May 15 10:45:15.769749 env[1314]: time="2025-05-15T10:45:15.769711815Z" level=info msg="CreateContainer within sandbox \"1dec96e0b69442ddbd975803b0ec29ba74e78104126a66962a1517e95e0e4ddb\" for container &ContainerMetadata{Name:mount-bpf-fs,Attempt:0,}"
May 15 10:45:15.939681 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount671017627.mount: Deactivated successfully.
May 15 10:45:15.956788 env[1314]: time="2025-05-15T10:45:15.956719703Z" level=info msg="CreateContainer within sandbox \"1dec96e0b69442ddbd975803b0ec29ba74e78104126a66962a1517e95e0e4ddb\" for &ContainerMetadata{Name:mount-bpf-fs,Attempt:0,} returns container id \"1af8b3c850675fc025a8724c080b8e3169f1febd61817cac3bcbd81e528649c8\""
May 15 10:45:15.957397 env[1314]: time="2025-05-15T10:45:15.957365224Z" level=info msg="StartContainer for \"1af8b3c850675fc025a8724c080b8e3169f1febd61817cac3bcbd81e528649c8\""
May 15 10:45:16.002537 env[1314]: time="2025-05-15T10:45:16.002476911Z" level=info msg="StartContainer for \"1af8b3c850675fc025a8724c080b8e3169f1febd61817cac3bcbd81e528649c8\" returns successfully"
May 15 10:45:16.004009 env[1314]: time="2025-05-15T10:45:16.003966155Z" level=info msg="ImageCreate event &ImageCreate{Name:quay.io/cilium/operator-generic@sha256:b296eb7f0f7656a5cc19724f40a8a7121b7fd725278b7d61dc91fe0b7ffd7c0e,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}"
May 15 10:45:16.006916 env[1314]: time="2025-05-15T10:45:16.006885061Z" level=info msg="ImageCreate event &ImageCreate{Name:sha256:ed355de9f59fe391dbe53f3c7c7a60baab3c3a9b7549aa54d10b87fff7dacf7c,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}"
May 15 10:45:16.008512 env[1314]: time="2025-05-15T10:45:16.008490173Z" level=info msg="ImageUpdate event &ImageUpdate{Name:quay.io/cilium/operator-generic@sha256:b296eb7f0f7656a5cc19724f40a8a7121b7fd725278b7d61dc91fe0b7ffd7c0e,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}"
May 15 10:45:16.009182 env[1314]: time="2025-05-15T10:45:16.008852994Z" level=info msg="PullImage \"quay.io/cilium/operator-generic:v1.12.5@sha256:b296eb7f0f7656a5cc19724f40a8a7121b7fd725278b7d61dc91fe0b7ffd7c0e\" returns image reference \"sha256:ed355de9f59fe391dbe53f3c7c7a60baab3c3a9b7549aa54d10b87fff7dacf7c\""
May 15 10:45:16.011765 env[1314]: time="2025-05-15T10:45:16.011739971Z" level=info msg="CreateContainer within sandbox \"c619fcfcfd47a88d84478a41dad55b7ae8c81c81205de5b0530371fd09e40c4e\" for container &ContainerMetadata{Name:cilium-operator,Attempt:0,}"
May 15 10:45:16.343936 env[1314]: time="2025-05-15T10:45:16.343881231Z" level=info msg="shim disconnected" id=1af8b3c850675fc025a8724c080b8e3169f1febd61817cac3bcbd81e528649c8
May 15 10:45:16.343936 env[1314]: time="2025-05-15T10:45:16.343930423Z" level=warning msg="cleaning up after shim disconnected" id=1af8b3c850675fc025a8724c080b8e3169f1febd61817cac3bcbd81e528649c8 namespace=k8s.io
May 15 10:45:16.343936 env[1314]: time="2025-05-15T10:45:16.343939079Z" level=info msg="cleaning up dead shim"
May 15 10:45:16.344738 env[1314]: time="2025-05-15T10:45:16.344700587Z" level=info msg="CreateContainer within sandbox \"c619fcfcfd47a88d84478a41dad55b7ae8c81c81205de5b0530371fd09e40c4e\" for &ContainerMetadata{Name:cilium-operator,Attempt:0,} returns container id \"e4b44270d30c4f83a258b3a7bdb37b760f41aeb8dda755148affc710e82a0298\""
May 15 10:45:16.345807 env[1314]: time="2025-05-15T10:45:16.345389861Z" level=info msg="StartContainer for \"e4b44270d30c4f83a258b3a7bdb37b760f41aeb8dda755148affc710e82a0298\""
May 15 10:45:16.355701 env[1314]: time="2025-05-15T10:45:16.355647886Z" level=warning msg="cleanup warnings time=\"2025-05-15T10:45:16Z\" level=info msg=\"starting signal loop\" namespace=k8s.io pid=2803 runtime=io.containerd.runc.v2\n"
May 15 10:45:16.390965 env[1314]: time="2025-05-15T10:45:16.390885538Z" level=info msg="StartContainer for \"e4b44270d30c4f83a258b3a7bdb37b760f41aeb8dda755148affc710e82a0298\" returns successfully"
May 15 10:45:16.766424 kubelet[2231]: E0515 10:45:16.766377 2231 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
May 15 10:45:16.768900 kubelet[2231]: E0515 10:45:16.768869 2231 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
May 15 10:45:16.771339 env[1314]: time="2025-05-15T10:45:16.771293645Z" level=info msg="CreateContainer within sandbox \"1dec96e0b69442ddbd975803b0ec29ba74e78104126a66962a1517e95e0e4ddb\" for container &ContainerMetadata{Name:clean-cilium-state,Attempt:0,}"
May 15 10:45:16.786330 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount146811330.mount: Deactivated successfully.
May 15 10:45:16.792292 env[1314]: time="2025-05-15T10:45:16.788104744Z" level=info msg="CreateContainer within sandbox \"1dec96e0b69442ddbd975803b0ec29ba74e78104126a66962a1517e95e0e4ddb\" for &ContainerMetadata{Name:clean-cilium-state,Attempt:0,} returns container id \"0fed8ebeed140fa9699a1453e9e89d7018d1bd8a75b08b3e32d09ea5f0a440bf\""
May 15 10:45:16.792292 env[1314]: time="2025-05-15T10:45:16.789171657Z" level=info msg="StartContainer for \"0fed8ebeed140fa9699a1453e9e89d7018d1bd8a75b08b3e32d09ea5f0a440bf\""
May 15 10:45:16.792406 kubelet[2231]: I0515 10:45:16.791225 2231 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/cilium-operator-599987898-dnr77" podStartSLOduration=2.5969808739999998 podStartE2EDuration="13.791203349s" podCreationTimestamp="2025-05-15 10:45:03 +0000 UTC" firstStartedPulling="2025-05-15 10:45:04.815992114 +0000 UTC m=+17.598317527" lastFinishedPulling="2025-05-15 10:45:16.010214579 +0000 UTC m=+28.792540002" observedRunningTime="2025-05-15 10:45:16.775107319 +0000 UTC m=+29.557432732" watchObservedRunningTime="2025-05-15 10:45:16.791203349 +0000 UTC m=+29.573528762"
May 15 10:45:16.832840 env[1314]: time="2025-05-15T10:45:16.832788138Z" level=info msg="StartContainer for \"0fed8ebeed140fa9699a1453e9e89d7018d1bd8a75b08b3e32d09ea5f0a440bf\" returns successfully"
May 15 10:45:16.853235 env[1314]: time="2025-05-15T10:45:16.853181829Z" level=info msg="shim disconnected" id=0fed8ebeed140fa9699a1453e9e89d7018d1bd8a75b08b3e32d09ea5f0a440bf
May 15 10:45:16.853235 env[1314]: time="2025-05-15T10:45:16.853226363Z" level=warning msg="cleaning up after shim disconnected" id=0fed8ebeed140fa9699a1453e9e89d7018d1bd8a75b08b3e32d09ea5f0a440bf namespace=k8s.io
May 15 10:45:16.853235 env[1314]: time="2025-05-15T10:45:16.853234769Z" level=info msg="cleaning up dead shim"
May 15 10:45:16.859445 env[1314]: time="2025-05-15T10:45:16.859414102Z" level=warning msg="cleanup warnings time=\"2025-05-15T10:45:16Z\" level=info msg=\"starting signal loop\" namespace=k8s.io pid=2896 runtime=io.containerd.runc.v2\n"
May 15 10:45:17.772669 kubelet[2231]: E0515 10:45:17.772635 2231 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
May 15 10:45:17.773221 kubelet[2231]: E0515 10:45:17.773164 2231 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
May 15 10:45:17.774562 env[1314]: time="2025-05-15T10:45:17.774395545Z" level=info msg="CreateContainer within sandbox \"1dec96e0b69442ddbd975803b0ec29ba74e78104126a66962a1517e95e0e4ddb\" for container &ContainerMetadata{Name:cilium-agent,Attempt:0,}"
May 15 10:45:17.801203 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount1666190934.mount: Deactivated successfully.
May 15 10:45:17.805418 env[1314]: time="2025-05-15T10:45:17.805355672Z" level=info msg="CreateContainer within sandbox \"1dec96e0b69442ddbd975803b0ec29ba74e78104126a66962a1517e95e0e4ddb\" for &ContainerMetadata{Name:cilium-agent,Attempt:0,} returns container id \"968bfb502685d17672a41f4b3f139739726fc95eeaa023409992282223e93f68\""
May 15 10:45:17.805883 env[1314]: time="2025-05-15T10:45:17.805842625Z" level=info msg="StartContainer for \"968bfb502685d17672a41f4b3f139739726fc95eeaa023409992282223e93f68\""
May 15 10:45:17.859436 env[1314]: time="2025-05-15T10:45:17.857826364Z" level=info msg="StartContainer for \"968bfb502685d17672a41f4b3f139739726fc95eeaa023409992282223e93f68\" returns successfully"
May 15 10:45:17.935937 kubelet[2231]: I0515 10:45:17.935892 2231 kubelet_node_status.go:497] "Fast updating node status as it just became ready"
May 15 10:45:17.962741 kubelet[2231]: I0515 10:45:17.962675 2231 topology_manager.go:215] "Topology Admit Handler" podUID="f5d80d5c-b0e2-432e-8065-e9c2ae5c4f22" podNamespace="kube-system" podName="coredns-7db6d8ff4d-45j9t"
May 15 10:45:17.967922 kubelet[2231]: I0515 10:45:17.967858 2231 topology_manager.go:215] "Topology Admit Handler" podUID="732b5c4e-94d2-49e2-956d-e2b9a3d3a42c" podNamespace="kube-system" podName="coredns-7db6d8ff4d-bp4ft"
May 15 10:45:18.008968 kubelet[2231]: I0515 10:45:18.008910 2231 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-dq7hg\" (UniqueName: \"kubernetes.io/projected/732b5c4e-94d2-49e2-956d-e2b9a3d3a42c-kube-api-access-dq7hg\") pod \"coredns-7db6d8ff4d-bp4ft\" (UID: \"732b5c4e-94d2-49e2-956d-e2b9a3d3a42c\") " pod="kube-system/coredns-7db6d8ff4d-bp4ft"
May 15 10:45:18.009342 kubelet[2231]: I0515 10:45:18.009312 2231 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/732b5c4e-94d2-49e2-956d-e2b9a3d3a42c-config-volume\") pod \"coredns-7db6d8ff4d-bp4ft\" (UID: \"732b5c4e-94d2-49e2-956d-e2b9a3d3a42c\") " pod="kube-system/coredns-7db6d8ff4d-bp4ft"
May 15 10:45:18.009631 kubelet[2231]: I0515 10:45:18.009564 2231 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-dc5h2\" (UniqueName: \"kubernetes.io/projected/f5d80d5c-b0e2-432e-8065-e9c2ae5c4f22-kube-api-access-dc5h2\") pod \"coredns-7db6d8ff4d-45j9t\" (UID: \"f5d80d5c-b0e2-432e-8065-e9c2ae5c4f22\") " pod="kube-system/coredns-7db6d8ff4d-45j9t"
May 15 10:45:18.009631 kubelet[2231]: I0515 10:45:18.009635 2231 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/f5d80d5c-b0e2-432e-8065-e9c2ae5c4f22-config-volume\") pod \"coredns-7db6d8ff4d-45j9t\" (UID: \"f5d80d5c-b0e2-432e-8065-e9c2ae5c4f22\") " pod="kube-system/coredns-7db6d8ff4d-45j9t"
May 15 10:45:18.267836 kubelet[2231]: E0515 10:45:18.267788 2231 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
May 15 10:45:18.268419 env[1314]: time="2025-05-15T10:45:18.268371656Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-7db6d8ff4d-45j9t,Uid:f5d80d5c-b0e2-432e-8065-e9c2ae5c4f22,Namespace:kube-system,Attempt:0,}"
May 15 10:45:18.272164 kubelet[2231]: E0515 10:45:18.272134 2231 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
May 15 10:45:18.272548 env[1314]: time="2025-05-15T10:45:18.272513136Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-7db6d8ff4d-bp4ft,Uid:732b5c4e-94d2-49e2-956d-e2b9a3d3a42c,Namespace:kube-system,Attempt:0,}"
May 15 10:45:18.777178 kubelet[2231]: E0515 10:45:18.777141 2231 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
May 15 10:45:18.789645 kubelet[2231]: I0515 10:45:18.789185 2231 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/cilium-ppmc9" podStartSLOduration=7.373476199 podStartE2EDuration="15.789159428s" podCreationTimestamp="2025-05-15 10:45:03 +0000 UTC" firstStartedPulling="2025-05-15 10:45:04.767524432 +0000 UTC m=+17.549849845" lastFinishedPulling="2025-05-15 10:45:13.183207641 +0000 UTC m=+25.965533074" observedRunningTime="2025-05-15 10:45:18.788551667 +0000 UTC m=+31.570877081" watchObservedRunningTime="2025-05-15 10:45:18.789159428 +0000 UTC m=+31.571484852"
May 15 10:45:19.549997 systemd[1]: Started sshd@9-10.0.0.96:22-10.0.0.1:47728.service.
May 15 10:45:19.583359 sshd[3082]: Accepted publickey for core from 10.0.0.1 port 47728 ssh2: RSA SHA256:haioSl9UPoE92ibERJujrg0rXVEisgSt061naG/EAtE
May 15 10:45:19.584545 sshd[3082]: pam_unix(sshd:session): session opened for user core(uid=500) by (uid=0)
May 15 10:45:19.587858 systemd-logind[1294]: New session 10 of user core.
May 15 10:45:19.588643 systemd[1]: Started session-10.scope.
May 15 10:45:19.733761 sshd[3082]: pam_unix(sshd:session): session closed for user core
May 15 10:45:19.736475 systemd[1]: sshd@9-10.0.0.96:22-10.0.0.1:47728.service: Deactivated successfully.
May 15 10:45:19.737520 systemd-logind[1294]: Session 10 logged out. Waiting for processes to exit.
May 15 10:45:19.737534 systemd[1]: session-10.scope: Deactivated successfully.
May 15 10:45:19.738380 systemd-logind[1294]: Removed session 10.
May 15 10:45:19.779467 kubelet[2231]: E0515 10:45:19.779428 2231 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
May 15 10:45:19.931953 systemd-networkd[1080]: cilium_host: Link UP
May 15 10:45:19.932077 systemd-networkd[1080]: cilium_net: Link UP
May 15 10:45:19.933796 systemd-networkd[1080]: cilium_net: Gained carrier
May 15 10:45:19.935659 kernel: IPv6: ADDRCONF(NETDEV_CHANGE): cilium_net: link becomes ready
May 15 10:45:19.935717 kernel: IPv6: ADDRCONF(NETDEV_CHANGE): cilium_host: link becomes ready
May 15 10:45:19.935034 systemd-networkd[1080]: cilium_host: Gained carrier
May 15 10:45:19.935156 systemd-networkd[1080]: cilium_net: Gained IPv6LL
May 15 10:45:19.936219 systemd-networkd[1080]: cilium_host: Gained IPv6LL
May 15 10:45:20.007758 systemd-networkd[1080]: cilium_vxlan: Link UP
May 15 10:45:20.007770 systemd-networkd[1080]: cilium_vxlan: Gained carrier
May 15 10:45:20.203660 kernel: NET: Registered PF_ALG protocol family
May 15 10:45:20.744851 systemd-networkd[1080]: lxc_health: Link UP
May 15 10:45:20.754651 kernel: IPv6: ADDRCONF(NETDEV_CHANGE): lxc_health: link becomes ready
May 15 10:45:20.754785 systemd-networkd[1080]: lxc_health: Gained carrier
May 15 10:45:20.780891 kubelet[2231]: E0515 10:45:20.780851 2231 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
May 15 10:45:20.851844 systemd-networkd[1080]: lxcfc59b9f4dab7: Link UP
May 15 10:45:20.859660 kernel: eth0: renamed from tmp6fadd
May 15 10:45:20.871497 systemd-networkd[1080]: lxcd43c04899909: Link UP
May 15 10:45:20.875058 systemd-networkd[1080]: lxcfc59b9f4dab7: Gained carrier
May 15 10:45:20.875660 kernel: IPv6: ADDRCONF(NETDEV_CHANGE): lxcfc59b9f4dab7: link becomes ready
May 15 10:45:20.878649 kernel: eth0: renamed from tmpa059d
May 15 10:45:20.889158 kernel: IPv6: ADDRCONF(NETDEV_CHANGE): lxcd43c04899909: link becomes ready
May 15 10:45:20.888459 systemd-networkd[1080]: lxcd43c04899909: Gained carrier
May 15 10:45:21.355772 systemd-networkd[1080]: cilium_vxlan: Gained IPv6LL
May 15 10:45:21.782673 kubelet[2231]: E0515 10:45:21.782610 2231 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
May 15 10:45:21.803784 systemd-networkd[1080]: lxc_health: Gained IPv6LL
May 15 10:45:22.187845 systemd-networkd[1080]: lxcfc59b9f4dab7: Gained IPv6LL
May 15 10:45:22.571836 systemd-networkd[1080]: lxcd43c04899909: Gained IPv6LL
May 15 10:45:22.784369 kubelet[2231]: E0515 10:45:22.784322 2231 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
May 15 10:45:23.786683 kubelet[2231]: E0515 10:45:23.786597 2231 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
May 15 10:45:24.120967 env[1314]: time="2025-05-15T10:45:24.120649387Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1
May 15 10:45:24.121392 env[1314]: time="2025-05-15T10:45:24.120712355Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1
May 15 10:45:24.121392 env[1314]: time="2025-05-15T10:45:24.120723025Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
May 15 10:45:24.121725 env[1314]: time="2025-05-15T10:45:24.121609488Z" level=info msg="starting signal loop" namespace=k8s.io path=/run/containerd/io.containerd.runtime.v2.task/k8s.io/6fadd01d471ccfc2eb2745e98f749eb8e1507c8ded8a01a1ee44934f52a447b6 pid=3491 runtime=io.containerd.runc.v2
May 15 10:45:24.155275 systemd-resolved[1224]: Failed to determine the local hostname and LLMNR/mDNS names, ignoring: No such device or address
May 15 10:45:24.181127 env[1314]: time="2025-05-15T10:45:24.180989702Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1
May 15 10:45:24.181127 env[1314]: time="2025-05-15T10:45:24.181101311Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1
May 15 10:45:24.181127 env[1314]: time="2025-05-15T10:45:24.181134483Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
May 15 10:45:24.181385 env[1314]: time="2025-05-15T10:45:24.181332074Z" level=info msg="starting signal loop" namespace=k8s.io path=/run/containerd/io.containerd.runtime.v2.task/k8s.io/a059db8ce8099ff491ba480bf9cc19d70e491ea97913ff42e0bb2e357ccdbf7a pid=3530 runtime=io.containerd.runc.v2
May 15 10:45:24.188156 env[1314]: time="2025-05-15T10:45:24.188098598Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-7db6d8ff4d-bp4ft,Uid:732b5c4e-94d2-49e2-956d-e2b9a3d3a42c,Namespace:kube-system,Attempt:0,} returns sandbox id \"6fadd01d471ccfc2eb2745e98f749eb8e1507c8ded8a01a1ee44934f52a447b6\""
May 15 10:45:24.189015 kubelet[2231]: E0515 10:45:24.188971 2231 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
May 15 10:45:24.191662 env[1314]: time="2025-05-15T10:45:24.191214173Z" level=info msg="CreateContainer within sandbox \"6fadd01d471ccfc2eb2745e98f749eb8e1507c8ded8a01a1ee44934f52a447b6\" for container &ContainerMetadata{Name:coredns,Attempt:0,}"
May 15 10:45:24.207742 systemd-resolved[1224]: Failed to determine the local hostname and LLMNR/mDNS names, ignoring: No such device or address
May 15 10:45:24.211676 env[1314]: time="2025-05-15T10:45:24.211503418Z" level=info msg="CreateContainer within sandbox \"6fadd01d471ccfc2eb2745e98f749eb8e1507c8ded8a01a1ee44934f52a447b6\" for &ContainerMetadata{Name:coredns,Attempt:0,} returns container id \"23be9d30776e2df40f1bef80fc5d4f52acf70d54cf045e61a8804e5f3f6a4cb6\""
May 15 10:45:24.213682 env[1314]: time="2025-05-15T10:45:24.213503781Z" level=info msg="StartContainer for \"23be9d30776e2df40f1bef80fc5d4f52acf70d54cf045e61a8804e5f3f6a4cb6\""
May 15 10:45:24.232654 env[1314]: time="2025-05-15T10:45:24.232012214Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-7db6d8ff4d-45j9t,Uid:f5d80d5c-b0e2-432e-8065-e9c2ae5c4f22,Namespace:kube-system,Attempt:0,} returns sandbox id \"a059db8ce8099ff491ba480bf9cc19d70e491ea97913ff42e0bb2e357ccdbf7a\""
May 15 10:45:24.232835 kubelet[2231]: E0515 10:45:24.232570 2231 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
May 15 10:45:24.235455 env[1314]: time="2025-05-15T10:45:24.235422884Z" level=info msg="CreateContainer within sandbox \"a059db8ce8099ff491ba480bf9cc19d70e491ea97913ff42e0bb2e357ccdbf7a\" for container &ContainerMetadata{Name:coredns,Attempt:0,}"
May 15 10:45:24.251948 env[1314]: time="2025-05-15T10:45:24.251889296Z" level=info msg="CreateContainer within sandbox \"a059db8ce8099ff491ba480bf9cc19d70e491ea97913ff42e0bb2e357ccdbf7a\" for &ContainerMetadata{Name:coredns,Attempt:0,} returns container id \"36aee1f3324f6749b93afae173dc855fb1406f5f916434657b6a308c5f4d3cea\""
May 15 10:45:24.253650 env[1314]: time="2025-05-15T10:45:24.253611888Z" level=info msg="StartContainer for \"36aee1f3324f6749b93afae173dc855fb1406f5f916434657b6a308c5f4d3cea\""
May 15 10:45:24.259796 env[1314]: time="2025-05-15T10:45:24.259768779Z" level=info msg="StartContainer for \"23be9d30776e2df40f1bef80fc5d4f52acf70d54cf045e61a8804e5f3f6a4cb6\" returns successfully"
May 15 10:45:24.313198 env[1314]: time="2025-05-15T10:45:24.313144888Z" level=info msg="StartContainer for \"36aee1f3324f6749b93afae173dc855fb1406f5f916434657b6a308c5f4d3cea\" returns successfully"
May 15 10:45:24.736476 systemd[1]: Started sshd@10-10.0.0.96:22-10.0.0.1:35378.service.
May 15 10:45:24.770471 sshd[3642]: Accepted publickey for core from 10.0.0.1 port 35378 ssh2: RSA SHA256:haioSl9UPoE92ibERJujrg0rXVEisgSt061naG/EAtE
May 15 10:45:24.771692 sshd[3642]: pam_unix(sshd:session): session opened for user core(uid=500) by (uid=0)
May 15 10:45:24.774962 systemd-logind[1294]: New session 11 of user core.
May 15 10:45:24.775692 systemd[1]: Started session-11.scope.
May 15 10:45:24.789870 kubelet[2231]: E0515 10:45:24.789536 2231 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
May 15 10:45:24.791784 kubelet[2231]: E0515 10:45:24.791759 2231 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
May 15 10:45:24.809119 kubelet[2231]: I0515 10:45:24.808684 2231 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/coredns-7db6d8ff4d-45j9t" podStartSLOduration=21.808666097 podStartE2EDuration="21.808666097s" podCreationTimestamp="2025-05-15 10:45:03 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-05-15 10:45:24.808494645 +0000 UTC m=+37.590820058" watchObservedRunningTime="2025-05-15 10:45:24.808666097 +0000 UTC m=+37.590991510"
May 15 10:45:24.809119 kubelet[2231]: I0515 10:45:24.808800 2231 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/coredns-7db6d8ff4d-bp4ft" podStartSLOduration=21.808796722 podStartE2EDuration="21.808796722s" podCreationTimestamp="2025-05-15 10:45:03 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-05-15 10:45:24.798846835 +0000 UTC m=+37.581172248" watchObservedRunningTime="2025-05-15 10:45:24.808796722 +0000 UTC m=+37.591122135"
May 15 10:45:24.885981 sshd[3642]: pam_unix(sshd:session): session closed for user core
May 15 10:45:24.888523 systemd[1]: sshd@10-10.0.0.96:22-10.0.0.1:35378.service: Deactivated successfully.
May 15 10:45:24.889453 systemd-logind[1294]: Session 11 logged out. Waiting for processes to exit.
May 15 10:45:24.889483 systemd[1]: session-11.scope: Deactivated successfully.
May 15 10:45:24.890311 systemd-logind[1294]: Removed session 11.
May 15 10:45:25.793358 kubelet[2231]: E0515 10:45:25.793317 2231 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
May 15 10:45:25.793847 kubelet[2231]: E0515 10:45:25.793521 2231 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
May 15 10:45:26.795539 kubelet[2231]: E0515 10:45:26.795486 2231 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
May 15 10:45:26.795539 kubelet[2231]: E0515 10:45:26.795545 2231 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
May 15 10:45:29.889724 systemd[1]: Started sshd@11-10.0.0.96:22-10.0.0.1:35386.service.
May 15 10:45:29.923577 sshd[3664]: Accepted publickey for core from 10.0.0.1 port 35386 ssh2: RSA SHA256:haioSl9UPoE92ibERJujrg0rXVEisgSt061naG/EAtE
May 15 10:45:29.925064 sshd[3664]: pam_unix(sshd:session): session opened for user core(uid=500) by (uid=0)
May 15 10:45:29.929205 systemd-logind[1294]: New session 12 of user core.
May 15 10:45:29.930032 systemd[1]: Started session-12.scope.
May 15 10:45:30.038440 sshd[3664]: pam_unix(sshd:session): session closed for user core
May 15 10:45:30.041428 systemd[1]: Started sshd@12-10.0.0.96:22-10.0.0.1:35388.service.
May 15 10:45:30.042004 systemd[1]: sshd@11-10.0.0.96:22-10.0.0.1:35386.service: Deactivated successfully.
May 15 10:45:30.043112 systemd-logind[1294]: Session 12 logged out. Waiting for processes to exit.
May 15 10:45:30.043234 systemd[1]: session-12.scope: Deactivated successfully.
May 15 10:45:30.044394 systemd-logind[1294]: Removed session 12.
May 15 10:45:30.077456 sshd[3677]: Accepted publickey for core from 10.0.0.1 port 35388 ssh2: RSA SHA256:haioSl9UPoE92ibERJujrg0rXVEisgSt061naG/EAtE
May 15 10:45:30.079115 sshd[3677]: pam_unix(sshd:session): session opened for user core(uid=500) by (uid=0)
May 15 10:45:30.084343 systemd-logind[1294]: New session 13 of user core.
May 15 10:45:30.085251 systemd[1]: Started session-13.scope.
May 15 10:45:30.240756 sshd[3677]: pam_unix(sshd:session): session closed for user core
May 15 10:45:30.245847 systemd[1]: Started sshd@13-10.0.0.96:22-10.0.0.1:35398.service.
May 15 10:45:30.246362 systemd[1]: sshd@12-10.0.0.96:22-10.0.0.1:35388.service: Deactivated successfully.
May 15 10:45:30.247204 systemd[1]: session-13.scope: Deactivated successfully.
May 15 10:45:30.247975 systemd-logind[1294]: Session 13 logged out. Waiting for processes to exit.
May 15 10:45:30.250299 systemd-logind[1294]: Removed session 13.
May 15 10:45:30.282701 sshd[3690]: Accepted publickey for core from 10.0.0.1 port 35398 ssh2: RSA SHA256:haioSl9UPoE92ibERJujrg0rXVEisgSt061naG/EAtE
May 15 10:45:30.283893 sshd[3690]: pam_unix(sshd:session): session opened for user core(uid=500) by (uid=0)
May 15 10:45:30.287638 systemd-logind[1294]: New session 14 of user core.
May 15 10:45:30.288416 systemd[1]: Started session-14.scope.
May 15 10:45:30.390661 sshd[3690]: pam_unix(sshd:session): session closed for user core May 15 10:45:30.393195 systemd[1]: sshd@13-10.0.0.96:22-10.0.0.1:35398.service: Deactivated successfully. May 15 10:45:30.394196 systemd-logind[1294]: Session 14 logged out. Waiting for processes to exit. May 15 10:45:30.394238 systemd[1]: session-14.scope: Deactivated successfully. May 15 10:45:30.395075 systemd-logind[1294]: Removed session 14. May 15 10:45:35.393810 systemd[1]: Started sshd@14-10.0.0.96:22-10.0.0.1:35176.service. May 15 10:45:35.425670 sshd[3707]: Accepted publickey for core from 10.0.0.1 port 35176 ssh2: RSA SHA256:haioSl9UPoE92ibERJujrg0rXVEisgSt061naG/EAtE May 15 10:45:35.426829 sshd[3707]: pam_unix(sshd:session): session opened for user core(uid=500) by (uid=0) May 15 10:45:35.430100 systemd-logind[1294]: New session 15 of user core. May 15 10:45:35.430856 systemd[1]: Started session-15.scope. May 15 10:45:35.533432 sshd[3707]: pam_unix(sshd:session): session closed for user core May 15 10:45:35.535987 systemd[1]: sshd@14-10.0.0.96:22-10.0.0.1:35176.service: Deactivated successfully. May 15 10:45:35.537002 systemd-logind[1294]: Session 15 logged out. Waiting for processes to exit. May 15 10:45:35.537038 systemd[1]: session-15.scope: Deactivated successfully. May 15 10:45:35.537821 systemd-logind[1294]: Removed session 15. May 15 10:45:40.536745 systemd[1]: Started sshd@15-10.0.0.96:22-10.0.0.1:35186.service. May 15 10:45:40.568039 sshd[3721]: Accepted publickey for core from 10.0.0.1 port 35186 ssh2: RSA SHA256:haioSl9UPoE92ibERJujrg0rXVEisgSt061naG/EAtE May 15 10:45:40.569038 sshd[3721]: pam_unix(sshd:session): session opened for user core(uid=500) by (uid=0) May 15 10:45:40.572658 systemd-logind[1294]: New session 16 of user core. May 15 10:45:40.573437 systemd[1]: Started session-16.scope. 
May 15 10:45:40.676755 sshd[3721]: pam_unix(sshd:session): session closed for user core May 15 10:45:40.680394 systemd[1]: Started sshd@16-10.0.0.96:22-10.0.0.1:35202.service. May 15 10:45:40.681155 systemd[1]: sshd@15-10.0.0.96:22-10.0.0.1:35186.service: Deactivated successfully. May 15 10:45:40.682847 systemd[1]: session-16.scope: Deactivated successfully. May 15 10:45:40.683444 systemd-logind[1294]: Session 16 logged out. Waiting for processes to exit. May 15 10:45:40.684406 systemd-logind[1294]: Removed session 16. May 15 10:45:40.711352 sshd[3734]: Accepted publickey for core from 10.0.0.1 port 35202 ssh2: RSA SHA256:haioSl9UPoE92ibERJujrg0rXVEisgSt061naG/EAtE May 15 10:45:40.712382 sshd[3734]: pam_unix(sshd:session): session opened for user core(uid=500) by (uid=0) May 15 10:45:40.715921 systemd-logind[1294]: New session 17 of user core. May 15 10:45:40.716568 systemd[1]: Started session-17.scope. May 15 10:45:40.891082 sshd[3734]: pam_unix(sshd:session): session closed for user core May 15 10:45:40.893535 systemd[1]: Started sshd@17-10.0.0.96:22-10.0.0.1:35216.service. May 15 10:45:40.894496 systemd[1]: sshd@16-10.0.0.96:22-10.0.0.1:35202.service: Deactivated successfully. May 15 10:45:40.895324 systemd[1]: session-17.scope: Deactivated successfully. May 15 10:45:40.895356 systemd-logind[1294]: Session 17 logged out. Waiting for processes to exit. May 15 10:45:40.896244 systemd-logind[1294]: Removed session 17. May 15 10:45:40.928680 sshd[3745]: Accepted publickey for core from 10.0.0.1 port 35216 ssh2: RSA SHA256:haioSl9UPoE92ibERJujrg0rXVEisgSt061naG/EAtE May 15 10:45:40.929783 sshd[3745]: pam_unix(sshd:session): session opened for user core(uid=500) by (uid=0) May 15 10:45:40.933307 systemd-logind[1294]: New session 18 of user core. May 15 10:45:40.934104 systemd[1]: Started session-18.scope. 
May 15 10:45:42.456004 sshd[3745]: pam_unix(sshd:session): session closed for user core May 15 10:45:42.458491 systemd[1]: Started sshd@18-10.0.0.96:22-10.0.0.1:35226.service. May 15 10:45:42.460441 systemd[1]: sshd@17-10.0.0.96:22-10.0.0.1:35216.service: Deactivated successfully. May 15 10:45:42.461663 systemd[1]: session-18.scope: Deactivated successfully. May 15 10:45:42.462161 systemd-logind[1294]: Session 18 logged out. Waiting for processes to exit. May 15 10:45:42.463115 systemd-logind[1294]: Removed session 18. May 15 10:45:42.498282 sshd[3764]: Accepted publickey for core from 10.0.0.1 port 35226 ssh2: RSA SHA256:haioSl9UPoE92ibERJujrg0rXVEisgSt061naG/EAtE May 15 10:45:42.499502 sshd[3764]: pam_unix(sshd:session): session opened for user core(uid=500) by (uid=0) May 15 10:45:42.503170 systemd-logind[1294]: New session 19 of user core. May 15 10:45:42.503939 systemd[1]: Started session-19.scope. May 15 10:45:42.714142 sshd[3764]: pam_unix(sshd:session): session closed for user core May 15 10:45:42.716818 systemd[1]: Started sshd@19-10.0.0.96:22-10.0.0.1:35238.service. May 15 10:45:42.719698 systemd-logind[1294]: Session 19 logged out. Waiting for processes to exit. May 15 10:45:42.719925 systemd[1]: sshd@18-10.0.0.96:22-10.0.0.1:35226.service: Deactivated successfully. May 15 10:45:42.720756 systemd[1]: session-19.scope: Deactivated successfully. May 15 10:45:42.722438 systemd-logind[1294]: Removed session 19. May 15 10:45:42.749887 sshd[3778]: Accepted publickey for core from 10.0.0.1 port 35238 ssh2: RSA SHA256:haioSl9UPoE92ibERJujrg0rXVEisgSt061naG/EAtE May 15 10:45:42.751037 sshd[3778]: pam_unix(sshd:session): session opened for user core(uid=500) by (uid=0) May 15 10:45:42.754413 systemd-logind[1294]: New session 20 of user core. May 15 10:45:42.755181 systemd[1]: Started session-20.scope. 
May 15 10:45:42.855190 sshd[3778]: pam_unix(sshd:session): session closed for user core May 15 10:45:42.857431 systemd[1]: sshd@19-10.0.0.96:22-10.0.0.1:35238.service: Deactivated successfully. May 15 10:45:42.858199 systemd[1]: session-20.scope: Deactivated successfully. May 15 10:45:42.859034 systemd-logind[1294]: Session 20 logged out. Waiting for processes to exit. May 15 10:45:42.859872 systemd-logind[1294]: Removed session 20. May 15 10:45:47.858648 systemd[1]: Started sshd@20-10.0.0.96:22-10.0.0.1:58446.service. May 15 10:45:47.890231 sshd[3796]: Accepted publickey for core from 10.0.0.1 port 58446 ssh2: RSA SHA256:haioSl9UPoE92ibERJujrg0rXVEisgSt061naG/EAtE May 15 10:45:47.891215 sshd[3796]: pam_unix(sshd:session): session opened for user core(uid=500) by (uid=0) May 15 10:45:47.894300 systemd-logind[1294]: New session 21 of user core. May 15 10:45:47.895230 systemd[1]: Started session-21.scope. May 15 10:45:47.996987 sshd[3796]: pam_unix(sshd:session): session closed for user core May 15 10:45:47.999087 systemd[1]: sshd@20-10.0.0.96:22-10.0.0.1:58446.service: Deactivated successfully. May 15 10:45:48.000337 systemd-logind[1294]: Session 21 logged out. Waiting for processes to exit. May 15 10:45:48.000389 systemd[1]: session-21.scope: Deactivated successfully. May 15 10:45:48.001386 systemd-logind[1294]: Removed session 21. May 15 10:45:53.000596 systemd[1]: Started sshd@21-10.0.0.96:22-10.0.0.1:58450.service. May 15 10:45:53.032490 sshd[3814]: Accepted publickey for core from 10.0.0.1 port 58450 ssh2: RSA SHA256:haioSl9UPoE92ibERJujrg0rXVEisgSt061naG/EAtE May 15 10:45:53.033642 sshd[3814]: pam_unix(sshd:session): session opened for user core(uid=500) by (uid=0) May 15 10:45:53.037319 systemd-logind[1294]: New session 22 of user core. May 15 10:45:53.038361 systemd[1]: Started session-22.scope. 
May 15 10:45:53.140648 sshd[3814]: pam_unix(sshd:session): session closed for user core May 15 10:45:53.143280 systemd[1]: sshd@21-10.0.0.96:22-10.0.0.1:58450.service: Deactivated successfully. May 15 10:45:53.144659 systemd-logind[1294]: Session 22 logged out. Waiting for processes to exit. May 15 10:45:53.144753 systemd[1]: session-22.scope: Deactivated successfully. May 15 10:45:53.145711 systemd-logind[1294]: Removed session 22. May 15 10:45:58.144798 systemd[1]: Started sshd@22-10.0.0.96:22-10.0.0.1:42514.service. May 15 10:45:58.177030 sshd[3828]: Accepted publickey for core from 10.0.0.1 port 42514 ssh2: RSA SHA256:haioSl9UPoE92ibERJujrg0rXVEisgSt061naG/EAtE May 15 10:45:58.178582 sshd[3828]: pam_unix(sshd:session): session opened for user core(uid=500) by (uid=0) May 15 10:45:58.182402 systemd-logind[1294]: New session 23 of user core. May 15 10:45:58.183197 systemd[1]: Started session-23.scope. May 15 10:45:58.283422 sshd[3828]: pam_unix(sshd:session): session closed for user core May 15 10:45:58.286017 systemd[1]: sshd@22-10.0.0.96:22-10.0.0.1:42514.service: Deactivated successfully. May 15 10:45:58.286875 systemd[1]: session-23.scope: Deactivated successfully. May 15 10:45:58.287701 systemd-logind[1294]: Session 23 logged out. Waiting for processes to exit. May 15 10:45:58.288568 systemd-logind[1294]: Removed session 23. May 15 10:46:03.287254 systemd[1]: Started sshd@23-10.0.0.96:22-10.0.0.1:42530.service. May 15 10:46:03.319302 sshd[3842]: Accepted publickey for core from 10.0.0.1 port 42530 ssh2: RSA SHA256:haioSl9UPoE92ibERJujrg0rXVEisgSt061naG/EAtE May 15 10:46:03.320399 sshd[3842]: pam_unix(sshd:session): session opened for user core(uid=500) by (uid=0) May 15 10:46:03.323890 systemd-logind[1294]: New session 24 of user core. May 15 10:46:03.324658 systemd[1]: Started session-24.scope. 
May 15 10:46:03.425572 sshd[3842]: pam_unix(sshd:session): session closed for user core May 15 10:46:03.428477 systemd[1]: Started sshd@24-10.0.0.96:22-10.0.0.1:42532.service. May 15 10:46:03.429080 systemd[1]: sshd@23-10.0.0.96:22-10.0.0.1:42530.service: Deactivated successfully. May 15 10:46:03.430753 systemd[1]: session-24.scope: Deactivated successfully. May 15 10:46:03.430805 systemd-logind[1294]: Session 24 logged out. Waiting for processes to exit. May 15 10:46:03.431654 systemd-logind[1294]: Removed session 24. May 15 10:46:03.464182 sshd[3854]: Accepted publickey for core from 10.0.0.1 port 42532 ssh2: RSA SHA256:haioSl9UPoE92ibERJujrg0rXVEisgSt061naG/EAtE May 15 10:46:03.465317 sshd[3854]: pam_unix(sshd:session): session opened for user core(uid=500) by (uid=0) May 15 10:46:03.468911 systemd-logind[1294]: New session 25 of user core. May 15 10:46:03.469794 systemd[1]: Started session-25.scope. May 15 10:46:04.822847 env[1314]: time="2025-05-15T10:46:04.822745796Z" level=info msg="StopContainer for \"e4b44270d30c4f83a258b3a7bdb37b760f41aeb8dda755148affc710e82a0298\" with timeout 30 (s)" May 15 10:46:04.823351 env[1314]: time="2025-05-15T10:46:04.823102845Z" level=info msg="Stop container \"e4b44270d30c4f83a258b3a7bdb37b760f41aeb8dda755148affc710e82a0298\" with signal terminated" May 15 10:46:04.841405 systemd[1]: run-containerd-runc-k8s.io-968bfb502685d17672a41f4b3f139739726fc95eeaa023409992282223e93f68-runc.w9niAQ.mount: Deactivated successfully. 
May 15 10:46:04.852736 env[1314]: time="2025-05-15T10:46:04.852585427Z" level=error msg="failed to reload cni configuration after receiving fs change event(\"/etc/cni/net.d/05-cilium.conf\": REMOVE)" error="cni config load failed: no network config found in /etc/cni/net.d: cni plugin not initialized: failed to load cni config" May 15 10:46:04.859897 env[1314]: time="2025-05-15T10:46:04.858151509Z" level=info msg="StopContainer for \"968bfb502685d17672a41f4b3f139739726fc95eeaa023409992282223e93f68\" with timeout 2 (s)" May 15 10:46:04.859897 env[1314]: time="2025-05-15T10:46:04.858413106Z" level=info msg="Stop container \"968bfb502685d17672a41f4b3f139739726fc95eeaa023409992282223e93f68\" with signal terminated" May 15 10:46:04.859911 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-e4b44270d30c4f83a258b3a7bdb37b760f41aeb8dda755148affc710e82a0298-rootfs.mount: Deactivated successfully. May 15 10:46:04.864668 systemd-networkd[1080]: lxc_health: Link DOWN May 15 10:46:04.864677 systemd-networkd[1080]: lxc_health: Lost carrier May 15 10:46:04.865790 env[1314]: time="2025-05-15T10:46:04.865737261Z" level=info msg="shim disconnected" id=e4b44270d30c4f83a258b3a7bdb37b760f41aeb8dda755148affc710e82a0298 May 15 10:46:04.865790 env[1314]: time="2025-05-15T10:46:04.865786584Z" level=warning msg="cleaning up after shim disconnected" id=e4b44270d30c4f83a258b3a7bdb37b760f41aeb8dda755148affc710e82a0298 namespace=k8s.io May 15 10:46:04.865790 env[1314]: time="2025-05-15T10:46:04.865796383Z" level=info msg="cleaning up dead shim" May 15 10:46:04.872775 env[1314]: time="2025-05-15T10:46:04.872730517Z" level=warning msg="cleanup warnings time=\"2025-05-15T10:46:04Z\" level=info msg=\"starting signal loop\" namespace=k8s.io pid=3914 runtime=io.containerd.runc.v2\n" May 15 10:46:04.875853 env[1314]: time="2025-05-15T10:46:04.875806855Z" level=info msg="StopContainer for \"e4b44270d30c4f83a258b3a7bdb37b760f41aeb8dda755148affc710e82a0298\" returns successfully" May 15 
10:46:04.876796 env[1314]: time="2025-05-15T10:46:04.876751150Z" level=info msg="StopPodSandbox for \"c619fcfcfd47a88d84478a41dad55b7ae8c81c81205de5b0530371fd09e40c4e\"" May 15 10:46:04.876886 env[1314]: time="2025-05-15T10:46:04.876827625Z" level=info msg="Container to stop \"e4b44270d30c4f83a258b3a7bdb37b760f41aeb8dda755148affc710e82a0298\" must be in running or unknown state, current state \"CONTAINER_EXITED\"" May 15 10:46:04.878993 systemd[1]: run-containerd-io.containerd.grpc.v1.cri-sandboxes-c619fcfcfd47a88d84478a41dad55b7ae8c81c81205de5b0530371fd09e40c4e-shm.mount: Deactivated successfully. May 15 10:46:04.905164 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-968bfb502685d17672a41f4b3f139739726fc95eeaa023409992282223e93f68-rootfs.mount: Deactivated successfully. May 15 10:46:04.912100 env[1314]: time="2025-05-15T10:46:04.912052474Z" level=info msg="shim disconnected" id=c619fcfcfd47a88d84478a41dad55b7ae8c81c81205de5b0530371fd09e40c4e May 15 10:46:04.912862 env[1314]: time="2025-05-15T10:46:04.912821266Z" level=warning msg="cleaning up after shim disconnected" id=c619fcfcfd47a88d84478a41dad55b7ae8c81c81205de5b0530371fd09e40c4e namespace=k8s.io May 15 10:46:04.912862 env[1314]: time="2025-05-15T10:46:04.912840382Z" level=info msg="cleaning up dead shim" May 15 10:46:04.913020 env[1314]: time="2025-05-15T10:46:04.912463737Z" level=info msg="shim disconnected" id=968bfb502685d17672a41f4b3f139739726fc95eeaa023409992282223e93f68 May 15 10:46:04.913020 env[1314]: time="2025-05-15T10:46:04.912959419Z" level=warning msg="cleaning up after shim disconnected" id=968bfb502685d17672a41f4b3f139739726fc95eeaa023409992282223e93f68 namespace=k8s.io May 15 10:46:04.913020 env[1314]: time="2025-05-15T10:46:04.912971833Z" level=info msg="cleaning up dead shim" May 15 10:46:04.920051 env[1314]: time="2025-05-15T10:46:04.920002780Z" level=warning msg="cleanup warnings time=\"2025-05-15T10:46:04Z\" level=info msg=\"starting signal loop\" namespace=k8s.io pid=3963 
runtime=io.containerd.runc.v2\n" May 15 10:46:04.921077 env[1314]: time="2025-05-15T10:46:04.921042746Z" level=warning msg="cleanup warnings time=\"2025-05-15T10:46:04Z\" level=info msg=\"starting signal loop\" namespace=k8s.io pid=3962 runtime=io.containerd.runc.v2\n" May 15 10:46:04.922220 env[1314]: time="2025-05-15T10:46:04.922189657Z" level=info msg="TearDown network for sandbox \"c619fcfcfd47a88d84478a41dad55b7ae8c81c81205de5b0530371fd09e40c4e\" successfully" May 15 10:46:04.922220 env[1314]: time="2025-05-15T10:46:04.922215496Z" level=info msg="StopPodSandbox for \"c619fcfcfd47a88d84478a41dad55b7ae8c81c81205de5b0530371fd09e40c4e\" returns successfully" May 15 10:46:04.923739 env[1314]: time="2025-05-15T10:46:04.923698495Z" level=info msg="StopContainer for \"968bfb502685d17672a41f4b3f139739726fc95eeaa023409992282223e93f68\" returns successfully" May 15 10:46:04.924355 env[1314]: time="2025-05-15T10:46:04.924322792Z" level=info msg="StopPodSandbox for \"1dec96e0b69442ddbd975803b0ec29ba74e78104126a66962a1517e95e0e4ddb\"" May 15 10:46:04.924493 env[1314]: time="2025-05-15T10:46:04.924469832Z" level=info msg="Container to stop \"0fed8ebeed140fa9699a1453e9e89d7018d1bd8a75b08b3e32d09ea5f0a440bf\" must be in running or unknown state, current state \"CONTAINER_EXITED\"" May 15 10:46:04.924641 env[1314]: time="2025-05-15T10:46:04.924598877Z" level=info msg="Container to stop \"968bfb502685d17672a41f4b3f139739726fc95eeaa023409992282223e93f68\" must be in running or unknown state, current state \"CONTAINER_EXITED\"" May 15 10:46:04.924759 env[1314]: time="2025-05-15T10:46:04.924733704Z" level=info msg="Container to stop \"c79b8fa4536fdafb28803b5bbb062c1de14a42aff0fc5951cf9af7d27fa9a563\" must be in running or unknown state, current state \"CONTAINER_EXITED\"" May 15 10:46:04.924861 env[1314]: time="2025-05-15T10:46:04.924833443Z" level=info msg="Container to stop \"931c1fbadb9a0dc943722f048560ad78031dcc555f4701e5f0bfa7c435f17a3c\" must be in running or unknown state, 
current state \"CONTAINER_EXITED\"" May 15 10:46:04.924988 env[1314]: time="2025-05-15T10:46:04.924938113Z" level=info msg="Container to stop \"1af8b3c850675fc025a8724c080b8e3169f1febd61817cac3bcbd81e528649c8\" must be in running or unknown state, current state \"CONTAINER_EXITED\"" May 15 10:46:04.951344 env[1314]: time="2025-05-15T10:46:04.951277739Z" level=info msg="shim disconnected" id=1dec96e0b69442ddbd975803b0ec29ba74e78104126a66962a1517e95e0e4ddb May 15 10:46:04.951513 env[1314]: time="2025-05-15T10:46:04.951352150Z" level=warning msg="cleaning up after shim disconnected" id=1dec96e0b69442ddbd975803b0ec29ba74e78104126a66962a1517e95e0e4ddb namespace=k8s.io May 15 10:46:04.951513 env[1314]: time="2025-05-15T10:46:04.951368501Z" level=info msg="cleaning up dead shim" May 15 10:46:04.959250 env[1314]: time="2025-05-15T10:46:04.959193458Z" level=warning msg="cleanup warnings time=\"2025-05-15T10:46:04Z\" level=info msg=\"starting signal loop\" namespace=k8s.io pid=4005 runtime=io.containerd.runc.v2\n" May 15 10:46:04.959751 env[1314]: time="2025-05-15T10:46:04.959718075Z" level=info msg="TearDown network for sandbox \"1dec96e0b69442ddbd975803b0ec29ba74e78104126a66962a1517e95e0e4ddb\" successfully" May 15 10:46:04.959751 env[1314]: time="2025-05-15T10:46:04.959747151Z" level=info msg="StopPodSandbox for \"1dec96e0b69442ddbd975803b0ec29ba74e78104126a66962a1517e95e0e4ddb\" returns successfully" May 15 10:46:04.968308 kubelet[2231]: I0515 10:46:04.968242 2231 reconciler_common.go:161] "operationExecutor.UnmountVolume started for volume \"cilium-config-path\" (UniqueName: \"kubernetes.io/configmap/f055928f-4e13-4f87-ae9d-e8b48941c8c9-cilium-config-path\") pod \"f055928f-4e13-4f87-ae9d-e8b48941c8c9\" (UID: \"f055928f-4e13-4f87-ae9d-e8b48941c8c9\") " May 15 10:46:04.970285 kubelet[2231]: I0515 10:46:04.968317 2231 reconciler_common.go:161] "operationExecutor.UnmountVolume started for volume \"kube-api-access-kdjx5\" (UniqueName: 
\"kubernetes.io/projected/f055928f-4e13-4f87-ae9d-e8b48941c8c9-kube-api-access-kdjx5\") pod \"f055928f-4e13-4f87-ae9d-e8b48941c8c9\" (UID: \"f055928f-4e13-4f87-ae9d-e8b48941c8c9\") " May 15 10:46:04.971398 kubelet[2231]: I0515 10:46:04.971351 2231 operation_generator.go:887] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/f055928f-4e13-4f87-ae9d-e8b48941c8c9-kube-api-access-kdjx5" (OuterVolumeSpecName: "kube-api-access-kdjx5") pod "f055928f-4e13-4f87-ae9d-e8b48941c8c9" (UID: "f055928f-4e13-4f87-ae9d-e8b48941c8c9"). InnerVolumeSpecName "kube-api-access-kdjx5". PluginName "kubernetes.io/projected", VolumeGidValue "" May 15 10:46:04.972220 kubelet[2231]: I0515 10:46:04.972182 2231 operation_generator.go:887] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/f055928f-4e13-4f87-ae9d-e8b48941c8c9-cilium-config-path" (OuterVolumeSpecName: "cilium-config-path") pod "f055928f-4e13-4f87-ae9d-e8b48941c8c9" (UID: "f055928f-4e13-4f87-ae9d-e8b48941c8c9"). InnerVolumeSpecName "cilium-config-path". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" May 15 10:46:05.069967 kubelet[2231]: I0515 10:46:05.069912 2231 reconciler_common.go:161] "operationExecutor.UnmountVolume started for volume \"cilium-run\" (UniqueName: \"kubernetes.io/host-path/f6539885-94f1-4060-8592-f691eb278487-cilium-run\") pod \"f6539885-94f1-4060-8592-f691eb278487\" (UID: \"f6539885-94f1-4060-8592-f691eb278487\") " May 15 10:46:05.069967 kubelet[2231]: I0515 10:46:05.069952 2231 reconciler_common.go:161] "operationExecutor.UnmountVolume started for volume \"cilium-cgroup\" (UniqueName: \"kubernetes.io/host-path/f6539885-94f1-4060-8592-f691eb278487-cilium-cgroup\") pod \"f6539885-94f1-4060-8592-f691eb278487\" (UID: \"f6539885-94f1-4060-8592-f691eb278487\") " May 15 10:46:05.069967 kubelet[2231]: I0515 10:46:05.069975 2231 reconciler_common.go:161] "operationExecutor.UnmountVolume started for volume \"cilium-config-path\" (UniqueName: \"kubernetes.io/configmap/f6539885-94f1-4060-8592-f691eb278487-cilium-config-path\") pod \"f6539885-94f1-4060-8592-f691eb278487\" (UID: \"f6539885-94f1-4060-8592-f691eb278487\") " May 15 10:46:05.070220 kubelet[2231]: I0515 10:46:05.069995 2231 reconciler_common.go:161] "operationExecutor.UnmountVolume started for volume \"bpf-maps\" (UniqueName: \"kubernetes.io/host-path/f6539885-94f1-4060-8592-f691eb278487-bpf-maps\") pod \"f6539885-94f1-4060-8592-f691eb278487\" (UID: \"f6539885-94f1-4060-8592-f691eb278487\") " May 15 10:46:05.070220 kubelet[2231]: I0515 10:46:05.070009 2231 reconciler_common.go:161] "operationExecutor.UnmountVolume started for volume \"host-proc-sys-net\" (UniqueName: \"kubernetes.io/host-path/f6539885-94f1-4060-8592-f691eb278487-host-proc-sys-net\") pod \"f6539885-94f1-4060-8592-f691eb278487\" (UID: \"f6539885-94f1-4060-8592-f691eb278487\") " May 15 10:46:05.070220 kubelet[2231]: I0515 10:46:05.070020 2231 reconciler_common.go:161] "operationExecutor.UnmountVolume started for volume \"cni-path\" (UniqueName: 
\"kubernetes.io/host-path/f6539885-94f1-4060-8592-f691eb278487-cni-path\") pod \"f6539885-94f1-4060-8592-f691eb278487\" (UID: \"f6539885-94f1-4060-8592-f691eb278487\") " May 15 10:46:05.070220 kubelet[2231]: I0515 10:46:05.070033 2231 reconciler_common.go:161] "operationExecutor.UnmountVolume started for volume \"host-proc-sys-kernel\" (UniqueName: \"kubernetes.io/host-path/f6539885-94f1-4060-8592-f691eb278487-host-proc-sys-kernel\") pod \"f6539885-94f1-4060-8592-f691eb278487\" (UID: \"f6539885-94f1-4060-8592-f691eb278487\") " May 15 10:46:05.070220 kubelet[2231]: I0515 10:46:05.070049 2231 reconciler_common.go:161] "operationExecutor.UnmountVolume started for volume \"kube-api-access-nhpbq\" (UniqueName: \"kubernetes.io/projected/f6539885-94f1-4060-8592-f691eb278487-kube-api-access-nhpbq\") pod \"f6539885-94f1-4060-8592-f691eb278487\" (UID: \"f6539885-94f1-4060-8592-f691eb278487\") " May 15 10:46:05.070220 kubelet[2231]: I0515 10:46:05.070064 2231 reconciler_common.go:161] "operationExecutor.UnmountVolume started for volume \"etc-cni-netd\" (UniqueName: \"kubernetes.io/host-path/f6539885-94f1-4060-8592-f691eb278487-etc-cni-netd\") pod \"f6539885-94f1-4060-8592-f691eb278487\" (UID: \"f6539885-94f1-4060-8592-f691eb278487\") " May 15 10:46:05.070372 kubelet[2231]: I0515 10:46:05.070079 2231 reconciler_common.go:161] "operationExecutor.UnmountVolume started for volume \"clustermesh-secrets\" (UniqueName: \"kubernetes.io/secret/f6539885-94f1-4060-8592-f691eb278487-clustermesh-secrets\") pod \"f6539885-94f1-4060-8592-f691eb278487\" (UID: \"f6539885-94f1-4060-8592-f691eb278487\") " May 15 10:46:05.070372 kubelet[2231]: I0515 10:46:05.070092 2231 reconciler_common.go:161] "operationExecutor.UnmountVolume started for volume \"hostproc\" (UniqueName: \"kubernetes.io/host-path/f6539885-94f1-4060-8592-f691eb278487-hostproc\") pod \"f6539885-94f1-4060-8592-f691eb278487\" (UID: \"f6539885-94f1-4060-8592-f691eb278487\") " May 15 10:46:05.070372 kubelet[2231]: I0515 
10:46:05.070118 2231 reconciler_common.go:161] "operationExecutor.UnmountVolume started for volume \"hubble-tls\" (UniqueName: \"kubernetes.io/projected/f6539885-94f1-4060-8592-f691eb278487-hubble-tls\") pod \"f6539885-94f1-4060-8592-f691eb278487\" (UID: \"f6539885-94f1-4060-8592-f691eb278487\") " May 15 10:46:05.070372 kubelet[2231]: I0515 10:46:05.070131 2231 reconciler_common.go:161] "operationExecutor.UnmountVolume started for volume \"xtables-lock\" (UniqueName: \"kubernetes.io/host-path/f6539885-94f1-4060-8592-f691eb278487-xtables-lock\") pod \"f6539885-94f1-4060-8592-f691eb278487\" (UID: \"f6539885-94f1-4060-8592-f691eb278487\") " May 15 10:46:05.070372 kubelet[2231]: I0515 10:46:05.070145 2231 reconciler_common.go:161] "operationExecutor.UnmountVolume started for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/f6539885-94f1-4060-8592-f691eb278487-lib-modules\") pod \"f6539885-94f1-4060-8592-f691eb278487\" (UID: \"f6539885-94f1-4060-8592-f691eb278487\") " May 15 10:46:05.070372 kubelet[2231]: I0515 10:46:05.070180 2231 reconciler_common.go:289] "Volume detached for volume \"cilium-config-path\" (UniqueName: \"kubernetes.io/configmap/f055928f-4e13-4f87-ae9d-e8b48941c8c9-cilium-config-path\") on node \"localhost\" DevicePath \"\"" May 15 10:46:05.070372 kubelet[2231]: I0515 10:46:05.070189 2231 reconciler_common.go:289] "Volume detached for volume \"kube-api-access-kdjx5\" (UniqueName: \"kubernetes.io/projected/f055928f-4e13-4f87-ae9d-e8b48941c8c9-kube-api-access-kdjx5\") on node \"localhost\" DevicePath \"\"" May 15 10:46:05.070595 kubelet[2231]: I0515 10:46:05.070069 2231 operation_generator.go:887] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/f6539885-94f1-4060-8592-f691eb278487-cilium-run" (OuterVolumeSpecName: "cilium-run") pod "f6539885-94f1-4060-8592-f691eb278487" (UID: "f6539885-94f1-4060-8592-f691eb278487"). InnerVolumeSpecName "cilium-run". 
PluginName "kubernetes.io/host-path", VolumeGidValue "" May 15 10:46:05.071002 kubelet[2231]: I0515 10:46:05.070076 2231 operation_generator.go:887] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/f6539885-94f1-4060-8592-f691eb278487-cilium-cgroup" (OuterVolumeSpecName: "cilium-cgroup") pod "f6539885-94f1-4060-8592-f691eb278487" (UID: "f6539885-94f1-4060-8592-f691eb278487"). InnerVolumeSpecName "cilium-cgroup". PluginName "kubernetes.io/host-path", VolumeGidValue "" May 15 10:46:05.071091 kubelet[2231]: I0515 10:46:05.070105 2231 operation_generator.go:887] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/f6539885-94f1-4060-8592-f691eb278487-cni-path" (OuterVolumeSpecName: "cni-path") pod "f6539885-94f1-4060-8592-f691eb278487" (UID: "f6539885-94f1-4060-8592-f691eb278487"). InnerVolumeSpecName "cni-path". PluginName "kubernetes.io/host-path", VolumeGidValue "" May 15 10:46:05.071184 kubelet[2231]: I0515 10:46:05.070129 2231 operation_generator.go:887] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/f6539885-94f1-4060-8592-f691eb278487-etc-cni-netd" (OuterVolumeSpecName: "etc-cni-netd") pod "f6539885-94f1-4060-8592-f691eb278487" (UID: "f6539885-94f1-4060-8592-f691eb278487"). InnerVolumeSpecName "etc-cni-netd". PluginName "kubernetes.io/host-path", VolumeGidValue "" May 15 10:46:05.071266 kubelet[2231]: I0515 10:46:05.070142 2231 operation_generator.go:887] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/f6539885-94f1-4060-8592-f691eb278487-host-proc-sys-kernel" (OuterVolumeSpecName: "host-proc-sys-kernel") pod "f6539885-94f1-4060-8592-f691eb278487" (UID: "f6539885-94f1-4060-8592-f691eb278487"). InnerVolumeSpecName "host-proc-sys-kernel". 
PluginName "kubernetes.io/host-path", VolumeGidValue "" May 15 10:46:05.071347 kubelet[2231]: I0515 10:46:05.070230 2231 operation_generator.go:887] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/f6539885-94f1-4060-8592-f691eb278487-lib-modules" (OuterVolumeSpecName: "lib-modules") pod "f6539885-94f1-4060-8592-f691eb278487" (UID: "f6539885-94f1-4060-8592-f691eb278487"). InnerVolumeSpecName "lib-modules". PluginName "kubernetes.io/host-path", VolumeGidValue "" May 15 10:46:05.071431 kubelet[2231]: I0515 10:46:05.070796 2231 operation_generator.go:887] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/f6539885-94f1-4060-8592-f691eb278487-hostproc" (OuterVolumeSpecName: "hostproc") pod "f6539885-94f1-4060-8592-f691eb278487" (UID: "f6539885-94f1-4060-8592-f691eb278487"). InnerVolumeSpecName "hostproc". PluginName "kubernetes.io/host-path", VolumeGidValue "" May 15 10:46:05.071511 kubelet[2231]: I0515 10:46:05.070816 2231 operation_generator.go:887] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/f6539885-94f1-4060-8592-f691eb278487-bpf-maps" (OuterVolumeSpecName: "bpf-maps") pod "f6539885-94f1-4060-8592-f691eb278487" (UID: "f6539885-94f1-4060-8592-f691eb278487"). InnerVolumeSpecName "bpf-maps". PluginName "kubernetes.io/host-path", VolumeGidValue "" May 15 10:46:05.071595 kubelet[2231]: I0515 10:46:05.070913 2231 operation_generator.go:887] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/f6539885-94f1-4060-8592-f691eb278487-host-proc-sys-net" (OuterVolumeSpecName: "host-proc-sys-net") pod "f6539885-94f1-4060-8592-f691eb278487" (UID: "f6539885-94f1-4060-8592-f691eb278487"). InnerVolumeSpecName "host-proc-sys-net". 
PluginName "kubernetes.io/host-path", VolumeGidValue "" May 15 10:46:05.071693 kubelet[2231]: I0515 10:46:05.070945 2231 operation_generator.go:887] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/f6539885-94f1-4060-8592-f691eb278487-xtables-lock" (OuterVolumeSpecName: "xtables-lock") pod "f6539885-94f1-4060-8592-f691eb278487" (UID: "f6539885-94f1-4060-8592-f691eb278487"). InnerVolumeSpecName "xtables-lock". PluginName "kubernetes.io/host-path", VolumeGidValue "" May 15 10:46:05.072119 kubelet[2231]: I0515 10:46:05.072067 2231 operation_generator.go:887] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/f6539885-94f1-4060-8592-f691eb278487-cilium-config-path" (OuterVolumeSpecName: "cilium-config-path") pod "f6539885-94f1-4060-8592-f691eb278487" (UID: "f6539885-94f1-4060-8592-f691eb278487"). InnerVolumeSpecName "cilium-config-path". PluginName "kubernetes.io/configmap", VolumeGidValue "" May 15 10:46:05.073499 kubelet[2231]: I0515 10:46:05.073431 2231 operation_generator.go:887] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/f6539885-94f1-4060-8592-f691eb278487-kube-api-access-nhpbq" (OuterVolumeSpecName: "kube-api-access-nhpbq") pod "f6539885-94f1-4060-8592-f691eb278487" (UID: "f6539885-94f1-4060-8592-f691eb278487"). InnerVolumeSpecName "kube-api-access-nhpbq". PluginName "kubernetes.io/projected", VolumeGidValue "" May 15 10:46:05.073554 kubelet[2231]: I0515 10:46:05.073536 2231 operation_generator.go:887] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/f6539885-94f1-4060-8592-f691eb278487-clustermesh-secrets" (OuterVolumeSpecName: "clustermesh-secrets") pod "f6539885-94f1-4060-8592-f691eb278487" (UID: "f6539885-94f1-4060-8592-f691eb278487"). InnerVolumeSpecName "clustermesh-secrets". 
PluginName "kubernetes.io/secret", VolumeGidValue "" May 15 10:46:05.074960 kubelet[2231]: I0515 10:46:05.074929 2231 operation_generator.go:887] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/f6539885-94f1-4060-8592-f691eb278487-hubble-tls" (OuterVolumeSpecName: "hubble-tls") pod "f6539885-94f1-4060-8592-f691eb278487" (UID: "f6539885-94f1-4060-8592-f691eb278487"). InnerVolumeSpecName "hubble-tls". PluginName "kubernetes.io/projected", VolumeGidValue "" May 15 10:46:05.171258 kubelet[2231]: I0515 10:46:05.171214 2231 reconciler_common.go:289] "Volume detached for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/f6539885-94f1-4060-8592-f691eb278487-lib-modules\") on node \"localhost\" DevicePath \"\"" May 15 10:46:05.171258 kubelet[2231]: I0515 10:46:05.171253 2231 reconciler_common.go:289] "Volume detached for volume \"cilium-config-path\" (UniqueName: \"kubernetes.io/configmap/f6539885-94f1-4060-8592-f691eb278487-cilium-config-path\") on node \"localhost\" DevicePath \"\"" May 15 10:46:05.171258 kubelet[2231]: I0515 10:46:05.171263 2231 reconciler_common.go:289] "Volume detached for volume \"cilium-run\" (UniqueName: \"kubernetes.io/host-path/f6539885-94f1-4060-8592-f691eb278487-cilium-run\") on node \"localhost\" DevicePath \"\"" May 15 10:46:05.171374 kubelet[2231]: I0515 10:46:05.171271 2231 reconciler_common.go:289] "Volume detached for volume \"cilium-cgroup\" (UniqueName: \"kubernetes.io/host-path/f6539885-94f1-4060-8592-f691eb278487-cilium-cgroup\") on node \"localhost\" DevicePath \"\"" May 15 10:46:05.171374 kubelet[2231]: I0515 10:46:05.171278 2231 reconciler_common.go:289] "Volume detached for volume \"bpf-maps\" (UniqueName: \"kubernetes.io/host-path/f6539885-94f1-4060-8592-f691eb278487-bpf-maps\") on node \"localhost\" DevicePath \"\"" May 15 10:46:05.171374 kubelet[2231]: I0515 10:46:05.171285 2231 reconciler_common.go:289] "Volume detached for volume \"host-proc-sys-net\" (UniqueName: 
\"kubernetes.io/host-path/f6539885-94f1-4060-8592-f691eb278487-host-proc-sys-net\") on node \"localhost\" DevicePath \"\"" May 15 10:46:05.171374 kubelet[2231]: I0515 10:46:05.171293 2231 reconciler_common.go:289] "Volume detached for volume \"cni-path\" (UniqueName: \"kubernetes.io/host-path/f6539885-94f1-4060-8592-f691eb278487-cni-path\") on node \"localhost\" DevicePath \"\"" May 15 10:46:05.171374 kubelet[2231]: I0515 10:46:05.171299 2231 reconciler_common.go:289] "Volume detached for volume \"etc-cni-netd\" (UniqueName: \"kubernetes.io/host-path/f6539885-94f1-4060-8592-f691eb278487-etc-cni-netd\") on node \"localhost\" DevicePath \"\"" May 15 10:46:05.171374 kubelet[2231]: I0515 10:46:05.171306 2231 reconciler_common.go:289] "Volume detached for volume \"host-proc-sys-kernel\" (UniqueName: \"kubernetes.io/host-path/f6539885-94f1-4060-8592-f691eb278487-host-proc-sys-kernel\") on node \"localhost\" DevicePath \"\"" May 15 10:46:05.171374 kubelet[2231]: I0515 10:46:05.171314 2231 reconciler_common.go:289] "Volume detached for volume \"kube-api-access-nhpbq\" (UniqueName: \"kubernetes.io/projected/f6539885-94f1-4060-8592-f691eb278487-kube-api-access-nhpbq\") on node \"localhost\" DevicePath \"\"" May 15 10:46:05.171374 kubelet[2231]: I0515 10:46:05.171321 2231 reconciler_common.go:289] "Volume detached for volume \"hostproc\" (UniqueName: \"kubernetes.io/host-path/f6539885-94f1-4060-8592-f691eb278487-hostproc\") on node \"localhost\" DevicePath \"\"" May 15 10:46:05.171551 kubelet[2231]: I0515 10:46:05.171327 2231 reconciler_common.go:289] "Volume detached for volume \"hubble-tls\" (UniqueName: \"kubernetes.io/projected/f6539885-94f1-4060-8592-f691eb278487-hubble-tls\") on node \"localhost\" DevicePath \"\"" May 15 10:46:05.171551 kubelet[2231]: I0515 10:46:05.171334 2231 reconciler_common.go:289] "Volume detached for volume \"clustermesh-secrets\" (UniqueName: \"kubernetes.io/secret/f6539885-94f1-4060-8592-f691eb278487-clustermesh-secrets\") on node \"localhost\" 
DevicePath \"\"" May 15 10:46:05.171551 kubelet[2231]: I0515 10:46:05.171340 2231 reconciler_common.go:289] "Volume detached for volume \"xtables-lock\" (UniqueName: \"kubernetes.io/host-path/f6539885-94f1-4060-8592-f691eb278487-xtables-lock\") on node \"localhost\" DevicePath \"\"" May 15 10:46:05.835217 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-c619fcfcfd47a88d84478a41dad55b7ae8c81c81205de5b0530371fd09e40c4e-rootfs.mount: Deactivated successfully. May 15 10:46:05.835354 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-1dec96e0b69442ddbd975803b0ec29ba74e78104126a66962a1517e95e0e4ddb-rootfs.mount: Deactivated successfully. May 15 10:46:05.835439 systemd[1]: run-containerd-io.containerd.grpc.v1.cri-sandboxes-1dec96e0b69442ddbd975803b0ec29ba74e78104126a66962a1517e95e0e4ddb-shm.mount: Deactivated successfully. May 15 10:46:05.835525 systemd[1]: var-lib-kubelet-pods-f055928f\x2d4e13\x2d4f87\x2dae9d\x2de8b48941c8c9-volumes-kubernetes.io\x7eprojected-kube\x2dapi\x2daccess\x2dkdjx5.mount: Deactivated successfully. May 15 10:46:05.835606 systemd[1]: var-lib-kubelet-pods-f6539885\x2d94f1\x2d4060\x2d8592\x2df691eb278487-volumes-kubernetes.io\x7eprojected-kube\x2dapi\x2daccess\x2dnhpbq.mount: Deactivated successfully. May 15 10:46:05.835713 systemd[1]: var-lib-kubelet-pods-f6539885\x2d94f1\x2d4060\x2d8592\x2df691eb278487-volumes-kubernetes.io\x7esecret-clustermesh\x2dsecrets.mount: Deactivated successfully. May 15 10:46:05.835802 systemd[1]: var-lib-kubelet-pods-f6539885\x2d94f1\x2d4060\x2d8592\x2df691eb278487-volumes-kubernetes.io\x7eprojected-hubble\x2dtls.mount: Deactivated successfully. 
May 15 10:46:05.871245 kubelet[2231]: I0515 10:46:05.871209 2231 scope.go:117] "RemoveContainer" containerID="968bfb502685d17672a41f4b3f139739726fc95eeaa023409992282223e93f68" May 15 10:46:05.872433 env[1314]: time="2025-05-15T10:46:05.872375069Z" level=info msg="RemoveContainer for \"968bfb502685d17672a41f4b3f139739726fc95eeaa023409992282223e93f68\"" May 15 10:46:05.878667 env[1314]: time="2025-05-15T10:46:05.878609257Z" level=info msg="RemoveContainer for \"968bfb502685d17672a41f4b3f139739726fc95eeaa023409992282223e93f68\" returns successfully" May 15 10:46:05.878908 kubelet[2231]: I0515 10:46:05.878867 2231 scope.go:117] "RemoveContainer" containerID="0fed8ebeed140fa9699a1453e9e89d7018d1bd8a75b08b3e32d09ea5f0a440bf" May 15 10:46:05.879828 env[1314]: time="2025-05-15T10:46:05.879790603Z" level=info msg="RemoveContainer for \"0fed8ebeed140fa9699a1453e9e89d7018d1bd8a75b08b3e32d09ea5f0a440bf\"" May 15 10:46:05.883725 env[1314]: time="2025-05-15T10:46:05.883691457Z" level=info msg="RemoveContainer for \"0fed8ebeed140fa9699a1453e9e89d7018d1bd8a75b08b3e32d09ea5f0a440bf\" returns successfully" May 15 10:46:05.884667 kubelet[2231]: I0515 10:46:05.884609 2231 scope.go:117] "RemoveContainer" containerID="1af8b3c850675fc025a8724c080b8e3169f1febd61817cac3bcbd81e528649c8" May 15 10:46:05.885858 env[1314]: time="2025-05-15T10:46:05.885817027Z" level=info msg="RemoveContainer for \"1af8b3c850675fc025a8724c080b8e3169f1febd61817cac3bcbd81e528649c8\"" May 15 10:46:05.889044 env[1314]: time="2025-05-15T10:46:05.889008823Z" level=info msg="RemoveContainer for \"1af8b3c850675fc025a8724c080b8e3169f1febd61817cac3bcbd81e528649c8\" returns successfully" May 15 10:46:05.889242 kubelet[2231]: I0515 10:46:05.889217 2231 scope.go:117] "RemoveContainer" containerID="931c1fbadb9a0dc943722f048560ad78031dcc555f4701e5f0bfa7c435f17a3c" May 15 10:46:05.890184 env[1314]: time="2025-05-15T10:46:05.890161373Z" level=info msg="RemoveContainer for 
\"931c1fbadb9a0dc943722f048560ad78031dcc555f4701e5f0bfa7c435f17a3c\"" May 15 10:46:05.893632 env[1314]: time="2025-05-15T10:46:05.893589319Z" level=info msg="RemoveContainer for \"931c1fbadb9a0dc943722f048560ad78031dcc555f4701e5f0bfa7c435f17a3c\" returns successfully" May 15 10:46:05.894030 kubelet[2231]: I0515 10:46:05.893985 2231 scope.go:117] "RemoveContainer" containerID="c79b8fa4536fdafb28803b5bbb062c1de14a42aff0fc5951cf9af7d27fa9a563" May 15 10:46:05.894961 env[1314]: time="2025-05-15T10:46:05.894921952Z" level=info msg="RemoveContainer for \"c79b8fa4536fdafb28803b5bbb062c1de14a42aff0fc5951cf9af7d27fa9a563\"" May 15 10:46:05.897669 env[1314]: time="2025-05-15T10:46:05.897634967Z" level=info msg="RemoveContainer for \"c79b8fa4536fdafb28803b5bbb062c1de14a42aff0fc5951cf9af7d27fa9a563\" returns successfully" May 15 10:46:05.897857 kubelet[2231]: I0515 10:46:05.897823 2231 scope.go:117] "RemoveContainer" containerID="968bfb502685d17672a41f4b3f139739726fc95eeaa023409992282223e93f68" May 15 10:46:05.898075 env[1314]: time="2025-05-15T10:46:05.898008318Z" level=error msg="ContainerStatus for \"968bfb502685d17672a41f4b3f139739726fc95eeaa023409992282223e93f68\" failed" error="rpc error: code = NotFound desc = an error occurred when try to find container \"968bfb502685d17672a41f4b3f139739726fc95eeaa023409992282223e93f68\": not found" May 15 10:46:05.898244 kubelet[2231]: E0515 10:46:05.898214 2231 remote_runtime.go:432] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = an error occurred when try to find container \"968bfb502685d17672a41f4b3f139739726fc95eeaa023409992282223e93f68\": not found" containerID="968bfb502685d17672a41f4b3f139739726fc95eeaa023409992282223e93f68" May 15 10:46:05.898319 kubelet[2231]: I0515 10:46:05.898246 2231 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"containerd","ID":"968bfb502685d17672a41f4b3f139739726fc95eeaa023409992282223e93f68"} err="failed to get container status 
\"968bfb502685d17672a41f4b3f139739726fc95eeaa023409992282223e93f68\": rpc error: code = NotFound desc = an error occurred when try to find container \"968bfb502685d17672a41f4b3f139739726fc95eeaa023409992282223e93f68\": not found" May 15 10:46:05.898352 kubelet[2231]: I0515 10:46:05.898320 2231 scope.go:117] "RemoveContainer" containerID="0fed8ebeed140fa9699a1453e9e89d7018d1bd8a75b08b3e32d09ea5f0a440bf" May 15 10:46:05.898588 env[1314]: time="2025-05-15T10:46:05.898521714Z" level=error msg="ContainerStatus for \"0fed8ebeed140fa9699a1453e9e89d7018d1bd8a75b08b3e32d09ea5f0a440bf\" failed" error="rpc error: code = NotFound desc = an error occurred when try to find container \"0fed8ebeed140fa9699a1453e9e89d7018d1bd8a75b08b3e32d09ea5f0a440bf\": not found" May 15 10:46:05.898721 kubelet[2231]: E0515 10:46:05.898699 2231 remote_runtime.go:432] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = an error occurred when try to find container \"0fed8ebeed140fa9699a1453e9e89d7018d1bd8a75b08b3e32d09ea5f0a440bf\": not found" containerID="0fed8ebeed140fa9699a1453e9e89d7018d1bd8a75b08b3e32d09ea5f0a440bf" May 15 10:46:05.898803 kubelet[2231]: I0515 10:46:05.898722 2231 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"containerd","ID":"0fed8ebeed140fa9699a1453e9e89d7018d1bd8a75b08b3e32d09ea5f0a440bf"} err="failed to get container status \"0fed8ebeed140fa9699a1453e9e89d7018d1bd8a75b08b3e32d09ea5f0a440bf\": rpc error: code = NotFound desc = an error occurred when try to find container \"0fed8ebeed140fa9699a1453e9e89d7018d1bd8a75b08b3e32d09ea5f0a440bf\": not found" May 15 10:46:05.898803 kubelet[2231]: I0515 10:46:05.898737 2231 scope.go:117] "RemoveContainer" containerID="1af8b3c850675fc025a8724c080b8e3169f1febd61817cac3bcbd81e528649c8" May 15 10:46:05.898941 env[1314]: time="2025-05-15T10:46:05.898894031Z" level=error msg="ContainerStatus for \"1af8b3c850675fc025a8724c080b8e3169f1febd61817cac3bcbd81e528649c8\" failed" 
error="rpc error: code = NotFound desc = an error occurred when try to find container \"1af8b3c850675fc025a8724c080b8e3169f1febd61817cac3bcbd81e528649c8\": not found" May 15 10:46:05.899051 kubelet[2231]: E0515 10:46:05.899028 2231 remote_runtime.go:432] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = an error occurred when try to find container \"1af8b3c850675fc025a8724c080b8e3169f1febd61817cac3bcbd81e528649c8\": not found" containerID="1af8b3c850675fc025a8724c080b8e3169f1febd61817cac3bcbd81e528649c8" May 15 10:46:05.899099 kubelet[2231]: I0515 10:46:05.899060 2231 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"containerd","ID":"1af8b3c850675fc025a8724c080b8e3169f1febd61817cac3bcbd81e528649c8"} err="failed to get container status \"1af8b3c850675fc025a8724c080b8e3169f1febd61817cac3bcbd81e528649c8\": rpc error: code = NotFound desc = an error occurred when try to find container \"1af8b3c850675fc025a8724c080b8e3169f1febd61817cac3bcbd81e528649c8\": not found" May 15 10:46:05.899142 kubelet[2231]: I0515 10:46:05.899106 2231 scope.go:117] "RemoveContainer" containerID="931c1fbadb9a0dc943722f048560ad78031dcc555f4701e5f0bfa7c435f17a3c" May 15 10:46:05.899310 env[1314]: time="2025-05-15T10:46:05.899263643Z" level=error msg="ContainerStatus for \"931c1fbadb9a0dc943722f048560ad78031dcc555f4701e5f0bfa7c435f17a3c\" failed" error="rpc error: code = NotFound desc = an error occurred when try to find container \"931c1fbadb9a0dc943722f048560ad78031dcc555f4701e5f0bfa7c435f17a3c\": not found" May 15 10:46:05.899412 kubelet[2231]: E0515 10:46:05.899389 2231 remote_runtime.go:432] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = an error occurred when try to find container \"931c1fbadb9a0dc943722f048560ad78031dcc555f4701e5f0bfa7c435f17a3c\": not found" containerID="931c1fbadb9a0dc943722f048560ad78031dcc555f4701e5f0bfa7c435f17a3c" May 15 10:46:05.899478 kubelet[2231]: I0515 
10:46:05.899412 2231 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"containerd","ID":"931c1fbadb9a0dc943722f048560ad78031dcc555f4701e5f0bfa7c435f17a3c"} err="failed to get container status \"931c1fbadb9a0dc943722f048560ad78031dcc555f4701e5f0bfa7c435f17a3c\": rpc error: code = NotFound desc = an error occurred when try to find container \"931c1fbadb9a0dc943722f048560ad78031dcc555f4701e5f0bfa7c435f17a3c\": not found" May 15 10:46:05.899478 kubelet[2231]: I0515 10:46:05.899426 2231 scope.go:117] "RemoveContainer" containerID="c79b8fa4536fdafb28803b5bbb062c1de14a42aff0fc5951cf9af7d27fa9a563" May 15 10:46:05.899654 env[1314]: time="2025-05-15T10:46:05.899576408Z" level=error msg="ContainerStatus for \"c79b8fa4536fdafb28803b5bbb062c1de14a42aff0fc5951cf9af7d27fa9a563\" failed" error="rpc error: code = NotFound desc = an error occurred when try to find container \"c79b8fa4536fdafb28803b5bbb062c1de14a42aff0fc5951cf9af7d27fa9a563\": not found" May 15 10:46:05.899737 kubelet[2231]: E0515 10:46:05.899712 2231 remote_runtime.go:432] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = an error occurred when try to find container \"c79b8fa4536fdafb28803b5bbb062c1de14a42aff0fc5951cf9af7d27fa9a563\": not found" containerID="c79b8fa4536fdafb28803b5bbb062c1de14a42aff0fc5951cf9af7d27fa9a563" May 15 10:46:05.899804 kubelet[2231]: I0515 10:46:05.899735 2231 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"containerd","ID":"c79b8fa4536fdafb28803b5bbb062c1de14a42aff0fc5951cf9af7d27fa9a563"} err="failed to get container status \"c79b8fa4536fdafb28803b5bbb062c1de14a42aff0fc5951cf9af7d27fa9a563\": rpc error: code = NotFound desc = an error occurred when try to find container \"c79b8fa4536fdafb28803b5bbb062c1de14a42aff0fc5951cf9af7d27fa9a563\": not found" May 15 10:46:05.899804 kubelet[2231]: I0515 10:46:05.899750 2231 scope.go:117] "RemoveContainer" 
containerID="e4b44270d30c4f83a258b3a7bdb37b760f41aeb8dda755148affc710e82a0298" May 15 10:46:05.900585 env[1314]: time="2025-05-15T10:46:05.900559215Z" level=info msg="RemoveContainer for \"e4b44270d30c4f83a258b3a7bdb37b760f41aeb8dda755148affc710e82a0298\"" May 15 10:46:05.903343 env[1314]: time="2025-05-15T10:46:05.903310264Z" level=info msg="RemoveContainer for \"e4b44270d30c4f83a258b3a7bdb37b760f41aeb8dda755148affc710e82a0298\" returns successfully" May 15 10:46:05.903497 kubelet[2231]: I0515 10:46:05.903467 2231 scope.go:117] "RemoveContainer" containerID="e4b44270d30c4f83a258b3a7bdb37b760f41aeb8dda755148affc710e82a0298" May 15 10:46:05.903735 env[1314]: time="2025-05-15T10:46:05.903683403Z" level=error msg="ContainerStatus for \"e4b44270d30c4f83a258b3a7bdb37b760f41aeb8dda755148affc710e82a0298\" failed" error="rpc error: code = NotFound desc = an error occurred when try to find container \"e4b44270d30c4f83a258b3a7bdb37b760f41aeb8dda755148affc710e82a0298\": not found" May 15 10:46:05.903864 kubelet[2231]: E0515 10:46:05.903838 2231 remote_runtime.go:432] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = an error occurred when try to find container \"e4b44270d30c4f83a258b3a7bdb37b760f41aeb8dda755148affc710e82a0298\": not found" containerID="e4b44270d30c4f83a258b3a7bdb37b760f41aeb8dda755148affc710e82a0298" May 15 10:46:05.903918 kubelet[2231]: I0515 10:46:05.903870 2231 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"containerd","ID":"e4b44270d30c4f83a258b3a7bdb37b760f41aeb8dda755148affc710e82a0298"} err="failed to get container status \"e4b44270d30c4f83a258b3a7bdb37b760f41aeb8dda755148affc710e82a0298\": rpc error: code = NotFound desc = an error occurred when try to find container \"e4b44270d30c4f83a258b3a7bdb37b760f41aeb8dda755148affc710e82a0298\": not found" May 15 10:46:06.804095 sshd[3854]: pam_unix(sshd:session): session closed for user core May 15 10:46:06.806492 systemd[1]: Started 
sshd@25-10.0.0.96:22-10.0.0.1:48014.service. May 15 10:46:06.806973 systemd[1]: sshd@24-10.0.0.96:22-10.0.0.1:42532.service: Deactivated successfully. May 15 10:46:06.808265 systemd[1]: session-25.scope: Deactivated successfully. May 15 10:46:06.808338 systemd-logind[1294]: Session 25 logged out. Waiting for processes to exit. May 15 10:46:06.809497 systemd-logind[1294]: Removed session 25. May 15 10:46:06.838461 sshd[4026]: Accepted publickey for core from 10.0.0.1 port 48014 ssh2: RSA SHA256:haioSl9UPoE92ibERJujrg0rXVEisgSt061naG/EAtE May 15 10:46:06.839392 sshd[4026]: pam_unix(sshd:session): session opened for user core(uid=500) by (uid=0) May 15 10:46:06.842802 systemd-logind[1294]: New session 26 of user core. May 15 10:46:06.843532 systemd[1]: Started session-26.scope. May 15 10:46:07.303515 kubelet[2231]: I0515 10:46:07.303439 2231 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="f055928f-4e13-4f87-ae9d-e8b48941c8c9" path="/var/lib/kubelet/pods/f055928f-4e13-4f87-ae9d-e8b48941c8c9/volumes" May 15 10:46:07.303945 kubelet[2231]: I0515 10:46:07.303825 2231 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="f6539885-94f1-4060-8592-f691eb278487" path="/var/lib/kubelet/pods/f6539885-94f1-4060-8592-f691eb278487/volumes" May 15 10:46:07.344982 kubelet[2231]: E0515 10:46:07.344943 2231 kubelet.go:2900] "Container runtime network not ready" networkReady="NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" May 15 10:46:07.404106 sshd[4026]: pam_unix(sshd:session): session closed for user core May 15 10:46:07.406820 systemd[1]: Started sshd@26-10.0.0.96:22-10.0.0.1:48028.service. May 15 10:46:07.407928 systemd[1]: sshd@25-10.0.0.96:22-10.0.0.1:48014.service: Deactivated successfully. May 15 10:46:07.408833 systemd[1]: session-26.scope: Deactivated successfully. May 15 10:46:07.409945 systemd-logind[1294]: Session 26 logged out. Waiting for processes to exit. 
May 15 10:46:07.410775 systemd-logind[1294]: Removed session 26. May 15 10:46:07.429549 kubelet[2231]: I0515 10:46:07.429501 2231 topology_manager.go:215] "Topology Admit Handler" podUID="39705986-38c2-4b24-9da2-21214caea684" podNamespace="kube-system" podName="cilium-p7m9k" May 15 10:46:07.429727 kubelet[2231]: E0515 10:46:07.429562 2231 cpu_manager.go:395] "RemoveStaleState: removing container" podUID="f055928f-4e13-4f87-ae9d-e8b48941c8c9" containerName="cilium-operator" May 15 10:46:07.429727 kubelet[2231]: E0515 10:46:07.429570 2231 cpu_manager.go:395] "RemoveStaleState: removing container" podUID="f6539885-94f1-4060-8592-f691eb278487" containerName="clean-cilium-state" May 15 10:46:07.429727 kubelet[2231]: E0515 10:46:07.429577 2231 cpu_manager.go:395] "RemoveStaleState: removing container" podUID="f6539885-94f1-4060-8592-f691eb278487" containerName="apply-sysctl-overwrites" May 15 10:46:07.429727 kubelet[2231]: E0515 10:46:07.429582 2231 cpu_manager.go:395] "RemoveStaleState: removing container" podUID="f6539885-94f1-4060-8592-f691eb278487" containerName="mount-cgroup" May 15 10:46:07.429727 kubelet[2231]: E0515 10:46:07.429588 2231 cpu_manager.go:395] "RemoveStaleState: removing container" podUID="f6539885-94f1-4060-8592-f691eb278487" containerName="mount-bpf-fs" May 15 10:46:07.429727 kubelet[2231]: E0515 10:46:07.429594 2231 cpu_manager.go:395] "RemoveStaleState: removing container" podUID="f6539885-94f1-4060-8592-f691eb278487" containerName="cilium-agent" May 15 10:46:07.429727 kubelet[2231]: I0515 10:46:07.429633 2231 memory_manager.go:354] "RemoveStaleState removing state" podUID="f6539885-94f1-4060-8592-f691eb278487" containerName="cilium-agent" May 15 10:46:07.429727 kubelet[2231]: I0515 10:46:07.429639 2231 memory_manager.go:354] "RemoveStaleState removing state" podUID="f055928f-4e13-4f87-ae9d-e8b48941c8c9" containerName="cilium-operator" May 15 10:46:07.440654 kubelet[2231]: W0515 10:46:07.438193 2231 reflector.go:547] 
object-"kube-system"/"hubble-server-certs": failed to list *v1.Secret: secrets "hubble-server-certs" is forbidden: User "system:node:localhost" cannot list resource "secrets" in API group "" in the namespace "kube-system": no relationship found between node 'localhost' and this object May 15 10:46:07.440654 kubelet[2231]: E0515 10:46:07.438247 2231 reflector.go:150] object-"kube-system"/"hubble-server-certs": Failed to watch *v1.Secret: failed to list *v1.Secret: secrets "hubble-server-certs" is forbidden: User "system:node:localhost" cannot list resource "secrets" in API group "" in the namespace "kube-system": no relationship found between node 'localhost' and this object May 15 10:46:07.440654 kubelet[2231]: W0515 10:46:07.438278 2231 reflector.go:547] object-"kube-system"/"cilium-ipsec-keys": failed to list *v1.Secret: secrets "cilium-ipsec-keys" is forbidden: User "system:node:localhost" cannot list resource "secrets" in API group "" in the namespace "kube-system": no relationship found between node 'localhost' and this object May 15 10:46:07.440654 kubelet[2231]: E0515 10:46:07.438287 2231 reflector.go:150] object-"kube-system"/"cilium-ipsec-keys": Failed to watch *v1.Secret: failed to list *v1.Secret: secrets "cilium-ipsec-keys" is forbidden: User "system:node:localhost" cannot list resource "secrets" in API group "" in the namespace "kube-system": no relationship found between node 'localhost' and this object May 15 10:46:07.440654 kubelet[2231]: W0515 10:46:07.439540 2231 reflector.go:547] object-"kube-system"/"cilium-config": failed to list *v1.ConfigMap: configmaps "cilium-config" is forbidden: User "system:node:localhost" cannot list resource "configmaps" in API group "" in the namespace "kube-system": no relationship found between node 'localhost' and this object May 15 10:46:07.440983 kubelet[2231]: E0515 10:46:07.439641 2231 reflector.go:150] object-"kube-system"/"cilium-config": Failed to watch *v1.ConfigMap: failed to list *v1.ConfigMap: configmaps 
"cilium-config" is forbidden: User "system:node:localhost" cannot list resource "configmaps" in API group "" in the namespace "kube-system": no relationship found between node 'localhost' and this object May 15 10:46:07.440983 kubelet[2231]: W0515 10:46:07.439573 2231 reflector.go:547] object-"kube-system"/"cilium-clustermesh": failed to list *v1.Secret: secrets "cilium-clustermesh" is forbidden: User "system:node:localhost" cannot list resource "secrets" in API group "" in the namespace "kube-system": no relationship found between node 'localhost' and this object May 15 10:46:07.440983 kubelet[2231]: E0515 10:46:07.439659 2231 reflector.go:150] object-"kube-system"/"cilium-clustermesh": Failed to watch *v1.Secret: failed to list *v1.Secret: secrets "cilium-clustermesh" is forbidden: User "system:node:localhost" cannot list resource "secrets" in API group "" in the namespace "kube-system": no relationship found between node 'localhost' and this object May 15 10:46:07.446878 sshd[4038]: Accepted publickey for core from 10.0.0.1 port 48028 ssh2: RSA SHA256:haioSl9UPoE92ibERJujrg0rXVEisgSt061naG/EAtE May 15 10:46:07.447711 sshd[4038]: pam_unix(sshd:session): session opened for user core(uid=500) by (uid=0) May 15 10:46:07.464190 systemd[1]: Started session-27.scope. May 15 10:46:07.465356 systemd-logind[1294]: New session 27 of user core. 
May 15 10:46:07.484642 kubelet[2231]: I0515 10:46:07.484553 2231 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"bpf-maps\" (UniqueName: \"kubernetes.io/host-path/39705986-38c2-4b24-9da2-21214caea684-bpf-maps\") pod \"cilium-p7m9k\" (UID: \"39705986-38c2-4b24-9da2-21214caea684\") " pod="kube-system/cilium-p7m9k" May 15 10:46:07.484642 kubelet[2231]: I0515 10:46:07.484599 2231 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cilium-config-path\" (UniqueName: \"kubernetes.io/configmap/39705986-38c2-4b24-9da2-21214caea684-cilium-config-path\") pod \"cilium-p7m9k\" (UID: \"39705986-38c2-4b24-9da2-21214caea684\") " pod="kube-system/cilium-p7m9k" May 15 10:46:07.484642 kubelet[2231]: I0515 10:46:07.484628 2231 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-649cd\" (UniqueName: \"kubernetes.io/projected/39705986-38c2-4b24-9da2-21214caea684-kube-api-access-649cd\") pod \"cilium-p7m9k\" (UID: \"39705986-38c2-4b24-9da2-21214caea684\") " pod="kube-system/cilium-p7m9k" May 15 10:46:07.484642 kubelet[2231]: I0515 10:46:07.484643 2231 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"etc-cni-netd\" (UniqueName: \"kubernetes.io/host-path/39705986-38c2-4b24-9da2-21214caea684-etc-cni-netd\") pod \"cilium-p7m9k\" (UID: \"39705986-38c2-4b24-9da2-21214caea684\") " pod="kube-system/cilium-p7m9k" May 15 10:46:07.484642 kubelet[2231]: I0515 10:46:07.484657 2231 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"xtables-lock\" (UniqueName: \"kubernetes.io/host-path/39705986-38c2-4b24-9da2-21214caea684-xtables-lock\") pod \"cilium-p7m9k\" (UID: \"39705986-38c2-4b24-9da2-21214caea684\") " pod="kube-system/cilium-p7m9k" May 15 10:46:07.484933 kubelet[2231]: I0515 10:46:07.484688 2231 reconciler_common.go:247] 
"operationExecutor.VerifyControllerAttachedVolume started for volume \"clustermesh-secrets\" (UniqueName: \"kubernetes.io/secret/39705986-38c2-4b24-9da2-21214caea684-clustermesh-secrets\") pod \"cilium-p7m9k\" (UID: \"39705986-38c2-4b24-9da2-21214caea684\") " pod="kube-system/cilium-p7m9k" May 15 10:46:07.484933 kubelet[2231]: I0515 10:46:07.484707 2231 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cilium-ipsec-secrets\" (UniqueName: \"kubernetes.io/secret/39705986-38c2-4b24-9da2-21214caea684-cilium-ipsec-secrets\") pod \"cilium-p7m9k\" (UID: \"39705986-38c2-4b24-9da2-21214caea684\") " pod="kube-system/cilium-p7m9k" May 15 10:46:07.484933 kubelet[2231]: I0515 10:46:07.484723 2231 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cilium-run\" (UniqueName: \"kubernetes.io/host-path/39705986-38c2-4b24-9da2-21214caea684-cilium-run\") pod \"cilium-p7m9k\" (UID: \"39705986-38c2-4b24-9da2-21214caea684\") " pod="kube-system/cilium-p7m9k" May 15 10:46:07.484933 kubelet[2231]: I0515 10:46:07.484738 2231 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cilium-cgroup\" (UniqueName: \"kubernetes.io/host-path/39705986-38c2-4b24-9da2-21214caea684-cilium-cgroup\") pod \"cilium-p7m9k\" (UID: \"39705986-38c2-4b24-9da2-21214caea684\") " pod="kube-system/cilium-p7m9k" May 15 10:46:07.484933 kubelet[2231]: I0515 10:46:07.484750 2231 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"hostproc\" (UniqueName: \"kubernetes.io/host-path/39705986-38c2-4b24-9da2-21214caea684-hostproc\") pod \"cilium-p7m9k\" (UID: \"39705986-38c2-4b24-9da2-21214caea684\") " pod="kube-system/cilium-p7m9k" May 15 10:46:07.484933 kubelet[2231]: I0515 10:46:07.484762 2231 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cni-path\" (UniqueName: 
\"kubernetes.io/host-path/39705986-38c2-4b24-9da2-21214caea684-cni-path\") pod \"cilium-p7m9k\" (UID: \"39705986-38c2-4b24-9da2-21214caea684\") " pod="kube-system/cilium-p7m9k" May 15 10:46:07.485074 kubelet[2231]: I0515 10:46:07.484776 2231 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/39705986-38c2-4b24-9da2-21214caea684-lib-modules\") pod \"cilium-p7m9k\" (UID: \"39705986-38c2-4b24-9da2-21214caea684\") " pod="kube-system/cilium-p7m9k" May 15 10:46:07.485074 kubelet[2231]: I0515 10:46:07.484789 2231 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"hubble-tls\" (UniqueName: \"kubernetes.io/projected/39705986-38c2-4b24-9da2-21214caea684-hubble-tls\") pod \"cilium-p7m9k\" (UID: \"39705986-38c2-4b24-9da2-21214caea684\") " pod="kube-system/cilium-p7m9k" May 15 10:46:07.485074 kubelet[2231]: I0515 10:46:07.484802 2231 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-proc-sys-kernel\" (UniqueName: \"kubernetes.io/host-path/39705986-38c2-4b24-9da2-21214caea684-host-proc-sys-kernel\") pod \"cilium-p7m9k\" (UID: \"39705986-38c2-4b24-9da2-21214caea684\") " pod="kube-system/cilium-p7m9k" May 15 10:46:07.485074 kubelet[2231]: I0515 10:46:07.484815 2231 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-proc-sys-net\" (UniqueName: \"kubernetes.io/host-path/39705986-38c2-4b24-9da2-21214caea684-host-proc-sys-net\") pod \"cilium-p7m9k\" (UID: \"39705986-38c2-4b24-9da2-21214caea684\") " pod="kube-system/cilium-p7m9k" May 15 10:46:07.593367 sshd[4038]: pam_unix(sshd:session): session closed for user core May 15 10:46:07.598330 systemd[1]: sshd@26-10.0.0.96:22-10.0.0.1:48028.service: Deactivated successfully. May 15 10:46:07.600971 systemd[1]: session-27.scope: Deactivated successfully. 
May 15 10:46:07.603393 systemd-logind[1294]: Session 27 logged out. Waiting for processes to exit. May 15 10:46:07.605918 systemd[1]: Started sshd@27-10.0.0.96:22-10.0.0.1:48030.service. May 15 10:46:07.608861 kubelet[2231]: E0515 10:46:07.608389 2231 pod_workers.go:1298] "Error syncing pod, skipping" err="unmounted volumes=[cilium-config-path cilium-ipsec-secrets clustermesh-secrets hubble-tls], unattached volumes=[], failed to process volumes=[]: context canceled" pod="kube-system/cilium-p7m9k" podUID="39705986-38c2-4b24-9da2-21214caea684" May 15 10:46:07.606950 systemd-logind[1294]: Removed session 27. May 15 10:46:07.640581 sshd[4056]: Accepted publickey for core from 10.0.0.1 port 48030 ssh2: RSA SHA256:haioSl9UPoE92ibERJujrg0rXVEisgSt061naG/EAtE May 15 10:46:07.641804 sshd[4056]: pam_unix(sshd:session): session opened for user core(uid=500) by (uid=0) May 15 10:46:07.645090 systemd-logind[1294]: New session 28 of user core. May 15 10:46:07.645921 systemd[1]: Started session-28.scope. 
May 15 10:46:07.988201 kubelet[2231]: I0515 10:46:07.988138 2231 reconciler_common.go:161] "operationExecutor.UnmountVolume started for volume \"bpf-maps\" (UniqueName: \"kubernetes.io/host-path/39705986-38c2-4b24-9da2-21214caea684-bpf-maps\") pod \"39705986-38c2-4b24-9da2-21214caea684\" (UID: \"39705986-38c2-4b24-9da2-21214caea684\") " May 15 10:46:07.988201 kubelet[2231]: I0515 10:46:07.988203 2231 reconciler_common.go:161] "operationExecutor.UnmountVolume started for volume \"kube-api-access-649cd\" (UniqueName: \"kubernetes.io/projected/39705986-38c2-4b24-9da2-21214caea684-kube-api-access-649cd\") pod \"39705986-38c2-4b24-9da2-21214caea684\" (UID: \"39705986-38c2-4b24-9da2-21214caea684\") " May 15 10:46:07.988424 kubelet[2231]: I0515 10:46:07.988227 2231 reconciler_common.go:161] "operationExecutor.UnmountVolume started for volume \"host-proc-sys-kernel\" (UniqueName: \"kubernetes.io/host-path/39705986-38c2-4b24-9da2-21214caea684-host-proc-sys-kernel\") pod \"39705986-38c2-4b24-9da2-21214caea684\" (UID: \"39705986-38c2-4b24-9da2-21214caea684\") " May 15 10:46:07.988424 kubelet[2231]: I0515 10:46:07.988245 2231 reconciler_common.go:161] "operationExecutor.UnmountVolume started for volume \"etc-cni-netd\" (UniqueName: \"kubernetes.io/host-path/39705986-38c2-4b24-9da2-21214caea684-etc-cni-netd\") pod \"39705986-38c2-4b24-9da2-21214caea684\" (UID: \"39705986-38c2-4b24-9da2-21214caea684\") " May 15 10:46:07.988424 kubelet[2231]: I0515 10:46:07.988264 2231 reconciler_common.go:161] "operationExecutor.UnmountVolume started for volume \"hostproc\" (UniqueName: \"kubernetes.io/host-path/39705986-38c2-4b24-9da2-21214caea684-hostproc\") pod \"39705986-38c2-4b24-9da2-21214caea684\" (UID: \"39705986-38c2-4b24-9da2-21214caea684\") " May 15 10:46:07.988424 kubelet[2231]: I0515 10:46:07.988275 2231 reconciler_common.go:161] "operationExecutor.UnmountVolume started for volume \"cilium-cgroup\" (UniqueName: 
\"kubernetes.io/host-path/39705986-38c2-4b24-9da2-21214caea684-cilium-cgroup\") pod \"39705986-38c2-4b24-9da2-21214caea684\" (UID: \"39705986-38c2-4b24-9da2-21214caea684\") " May 15 10:46:07.988424 kubelet[2231]: I0515 10:46:07.988287 2231 reconciler_common.go:161] "operationExecutor.UnmountVolume started for volume \"xtables-lock\" (UniqueName: \"kubernetes.io/host-path/39705986-38c2-4b24-9da2-21214caea684-xtables-lock\") pod \"39705986-38c2-4b24-9da2-21214caea684\" (UID: \"39705986-38c2-4b24-9da2-21214caea684\") " May 15 10:46:07.988424 kubelet[2231]: I0515 10:46:07.988298 2231 reconciler_common.go:161] "operationExecutor.UnmountVolume started for volume \"cilium-run\" (UniqueName: \"kubernetes.io/host-path/39705986-38c2-4b24-9da2-21214caea684-cilium-run\") pod \"39705986-38c2-4b24-9da2-21214caea684\" (UID: \"39705986-38c2-4b24-9da2-21214caea684\") " May 15 10:46:07.988580 kubelet[2231]: I0515 10:46:07.988310 2231 reconciler_common.go:161] "operationExecutor.UnmountVolume started for volume \"host-proc-sys-net\" (UniqueName: \"kubernetes.io/host-path/39705986-38c2-4b24-9da2-21214caea684-host-proc-sys-net\") pod \"39705986-38c2-4b24-9da2-21214caea684\" (UID: \"39705986-38c2-4b24-9da2-21214caea684\") " May 15 10:46:07.988580 kubelet[2231]: I0515 10:46:07.988298 2231 operation_generator.go:887] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/39705986-38c2-4b24-9da2-21214caea684-bpf-maps" (OuterVolumeSpecName: "bpf-maps") pod "39705986-38c2-4b24-9da2-21214caea684" (UID: "39705986-38c2-4b24-9da2-21214caea684"). InnerVolumeSpecName "bpf-maps". 
PluginName "kubernetes.io/host-path", VolumeGidValue "" May 15 10:46:07.988580 kubelet[2231]: I0515 10:46:07.988323 2231 reconciler_common.go:161] "operationExecutor.UnmountVolume started for volume \"cni-path\" (UniqueName: \"kubernetes.io/host-path/39705986-38c2-4b24-9da2-21214caea684-cni-path\") pod \"39705986-38c2-4b24-9da2-21214caea684\" (UID: \"39705986-38c2-4b24-9da2-21214caea684\") " May 15 10:46:07.988580 kubelet[2231]: I0515 10:46:07.988343 2231 operation_generator.go:887] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/39705986-38c2-4b24-9da2-21214caea684-cni-path" (OuterVolumeSpecName: "cni-path") pod "39705986-38c2-4b24-9da2-21214caea684" (UID: "39705986-38c2-4b24-9da2-21214caea684"). InnerVolumeSpecName "cni-path". PluginName "kubernetes.io/host-path", VolumeGidValue "" May 15 10:46:07.988580 kubelet[2231]: I0515 10:46:07.988369 2231 operation_generator.go:887] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/39705986-38c2-4b24-9da2-21214caea684-host-proc-sys-kernel" (OuterVolumeSpecName: "host-proc-sys-kernel") pod "39705986-38c2-4b24-9da2-21214caea684" (UID: "39705986-38c2-4b24-9da2-21214caea684"). InnerVolumeSpecName "host-proc-sys-kernel". PluginName "kubernetes.io/host-path", VolumeGidValue "" May 15 10:46:07.988717 kubelet[2231]: I0515 10:46:07.988382 2231 operation_generator.go:887] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/39705986-38c2-4b24-9da2-21214caea684-etc-cni-netd" (OuterVolumeSpecName: "etc-cni-netd") pod "39705986-38c2-4b24-9da2-21214caea684" (UID: "39705986-38c2-4b24-9da2-21214caea684"). InnerVolumeSpecName "etc-cni-netd". 
PluginName "kubernetes.io/host-path", VolumeGidValue "" May 15 10:46:07.988717 kubelet[2231]: I0515 10:46:07.988382 2231 reconciler_common.go:161] "operationExecutor.UnmountVolume started for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/39705986-38c2-4b24-9da2-21214caea684-lib-modules\") pod \"39705986-38c2-4b24-9da2-21214caea684\" (UID: \"39705986-38c2-4b24-9da2-21214caea684\") " May 15 10:46:07.988717 kubelet[2231]: I0515 10:46:07.988397 2231 operation_generator.go:887] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/39705986-38c2-4b24-9da2-21214caea684-lib-modules" (OuterVolumeSpecName: "lib-modules") pod "39705986-38c2-4b24-9da2-21214caea684" (UID: "39705986-38c2-4b24-9da2-21214caea684"). InnerVolumeSpecName "lib-modules". PluginName "kubernetes.io/host-path", VolumeGidValue "" May 15 10:46:07.988717 kubelet[2231]: I0515 10:46:07.988417 2231 operation_generator.go:887] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/39705986-38c2-4b24-9da2-21214caea684-xtables-lock" (OuterVolumeSpecName: "xtables-lock") pod "39705986-38c2-4b24-9da2-21214caea684" (UID: "39705986-38c2-4b24-9da2-21214caea684"). InnerVolumeSpecName "xtables-lock". PluginName "kubernetes.io/host-path", VolumeGidValue "" May 15 10:46:07.988717 kubelet[2231]: I0515 10:46:07.988434 2231 operation_generator.go:887] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/39705986-38c2-4b24-9da2-21214caea684-hostproc" (OuterVolumeSpecName: "hostproc") pod "39705986-38c2-4b24-9da2-21214caea684" (UID: "39705986-38c2-4b24-9da2-21214caea684"). InnerVolumeSpecName "hostproc". 
PluginName "kubernetes.io/host-path", VolumeGidValue "" May 15 10:46:07.988833 kubelet[2231]: I0515 10:46:07.988437 2231 reconciler_common.go:289] "Volume detached for volume \"cni-path\" (UniqueName: \"kubernetes.io/host-path/39705986-38c2-4b24-9da2-21214caea684-cni-path\") on node \"localhost\" DevicePath \"\"" May 15 10:46:07.988833 kubelet[2231]: I0515 10:46:07.988447 2231 reconciler_common.go:289] "Volume detached for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/39705986-38c2-4b24-9da2-21214caea684-lib-modules\") on node \"localhost\" DevicePath \"\"" May 15 10:46:07.988833 kubelet[2231]: I0515 10:46:07.988449 2231 operation_generator.go:887] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/39705986-38c2-4b24-9da2-21214caea684-cilium-cgroup" (OuterVolumeSpecName: "cilium-cgroup") pod "39705986-38c2-4b24-9da2-21214caea684" (UID: "39705986-38c2-4b24-9da2-21214caea684"). InnerVolumeSpecName "cilium-cgroup". PluginName "kubernetes.io/host-path", VolumeGidValue "" May 15 10:46:07.988833 kubelet[2231]: I0515 10:46:07.988455 2231 reconciler_common.go:289] "Volume detached for volume \"host-proc-sys-kernel\" (UniqueName: \"kubernetes.io/host-path/39705986-38c2-4b24-9da2-21214caea684-host-proc-sys-kernel\") on node \"localhost\" DevicePath \"\"" May 15 10:46:07.988833 kubelet[2231]: I0515 10:46:07.988465 2231 reconciler_common.go:289] "Volume detached for volume \"bpf-maps\" (UniqueName: \"kubernetes.io/host-path/39705986-38c2-4b24-9da2-21214caea684-bpf-maps\") on node \"localhost\" DevicePath \"\"" May 15 10:46:07.988833 kubelet[2231]: I0515 10:46:07.988465 2231 operation_generator.go:887] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/39705986-38c2-4b24-9da2-21214caea684-cilium-run" (OuterVolumeSpecName: "cilium-run") pod "39705986-38c2-4b24-9da2-21214caea684" (UID: "39705986-38c2-4b24-9da2-21214caea684"). InnerVolumeSpecName "cilium-run". 
PluginName "kubernetes.io/host-path", VolumeGidValue "" May 15 10:46:07.988833 kubelet[2231]: I0515 10:46:07.988472 2231 reconciler_common.go:289] "Volume detached for volume \"etc-cni-netd\" (UniqueName: \"kubernetes.io/host-path/39705986-38c2-4b24-9da2-21214caea684-etc-cni-netd\") on node \"localhost\" DevicePath \"\"" May 15 10:46:07.988995 kubelet[2231]: I0515 10:46:07.988479 2231 reconciler_common.go:289] "Volume detached for volume \"xtables-lock\" (UniqueName: \"kubernetes.io/host-path/39705986-38c2-4b24-9da2-21214caea684-xtables-lock\") on node \"localhost\" DevicePath \"\"" May 15 10:46:07.988995 kubelet[2231]: I0515 10:46:07.988482 2231 operation_generator.go:887] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/39705986-38c2-4b24-9da2-21214caea684-host-proc-sys-net" (OuterVolumeSpecName: "host-proc-sys-net") pod "39705986-38c2-4b24-9da2-21214caea684" (UID: "39705986-38c2-4b24-9da2-21214caea684"). InnerVolumeSpecName "host-proc-sys-net". PluginName "kubernetes.io/host-path", VolumeGidValue "" May 15 10:46:07.990952 kubelet[2231]: I0515 10:46:07.990898 2231 operation_generator.go:887] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/39705986-38c2-4b24-9da2-21214caea684-kube-api-access-649cd" (OuterVolumeSpecName: "kube-api-access-649cd") pod "39705986-38c2-4b24-9da2-21214caea684" (UID: "39705986-38c2-4b24-9da2-21214caea684"). InnerVolumeSpecName "kube-api-access-649cd". PluginName "kubernetes.io/projected", VolumeGidValue "" May 15 10:46:07.992589 systemd[1]: var-lib-kubelet-pods-39705986\x2d38c2\x2d4b24\x2d9da2\x2d21214caea684-volumes-kubernetes.io\x7eprojected-kube\x2dapi\x2daccess\x2d649cd.mount: Deactivated successfully. 
May 15 10:46:08.089425 kubelet[2231]: I0515 10:46:08.089366 2231 reconciler_common.go:289] "Volume detached for volume \"hostproc\" (UniqueName: \"kubernetes.io/host-path/39705986-38c2-4b24-9da2-21214caea684-hostproc\") on node \"localhost\" DevicePath \"\"" May 15 10:46:08.089425 kubelet[2231]: I0515 10:46:08.089422 2231 reconciler_common.go:289] "Volume detached for volume \"cilium-cgroup\" (UniqueName: \"kubernetes.io/host-path/39705986-38c2-4b24-9da2-21214caea684-cilium-cgroup\") on node \"localhost\" DevicePath \"\"" May 15 10:46:08.089425 kubelet[2231]: I0515 10:46:08.089441 2231 reconciler_common.go:289] "Volume detached for volume \"cilium-run\" (UniqueName: \"kubernetes.io/host-path/39705986-38c2-4b24-9da2-21214caea684-cilium-run\") on node \"localhost\" DevicePath \"\"" May 15 10:46:08.089425 kubelet[2231]: I0515 10:46:08.089451 2231 reconciler_common.go:289] "Volume detached for volume \"host-proc-sys-net\" (UniqueName: \"kubernetes.io/host-path/39705986-38c2-4b24-9da2-21214caea684-host-proc-sys-net\") on node \"localhost\" DevicePath \"\"" May 15 10:46:08.089792 kubelet[2231]: I0515 10:46:08.089460 2231 reconciler_common.go:289] "Volume detached for volume \"kube-api-access-649cd\" (UniqueName: \"kubernetes.io/projected/39705986-38c2-4b24-9da2-21214caea684-kube-api-access-649cd\") on node \"localhost\" DevicePath \"\"" May 15 10:46:08.392219 kubelet[2231]: I0515 10:46:08.392082 2231 reconciler_common.go:161] "operationExecutor.UnmountVolume started for volume \"cilium-ipsec-secrets\" (UniqueName: \"kubernetes.io/secret/39705986-38c2-4b24-9da2-21214caea684-cilium-ipsec-secrets\") pod \"39705986-38c2-4b24-9da2-21214caea684\" (UID: \"39705986-38c2-4b24-9da2-21214caea684\") " May 15 10:46:08.395036 kubelet[2231]: I0515 10:46:08.394998 2231 operation_generator.go:887] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/39705986-38c2-4b24-9da2-21214caea684-cilium-ipsec-secrets" (OuterVolumeSpecName: "cilium-ipsec-secrets") pod 
"39705986-38c2-4b24-9da2-21214caea684" (UID: "39705986-38c2-4b24-9da2-21214caea684"). InnerVolumeSpecName "cilium-ipsec-secrets". PluginName "kubernetes.io/secret", VolumeGidValue "" May 15 10:46:08.397073 systemd[1]: var-lib-kubelet-pods-39705986\x2d38c2\x2d4b24\x2d9da2\x2d21214caea684-volumes-kubernetes.io\x7esecret-cilium\x2dipsec\x2dsecrets.mount: Deactivated successfully. May 15 10:46:08.493175 kubelet[2231]: I0515 10:46:08.493138 2231 reconciler_common.go:289] "Volume detached for volume \"cilium-ipsec-secrets\" (UniqueName: \"kubernetes.io/secret/39705986-38c2-4b24-9da2-21214caea684-cilium-ipsec-secrets\") on node \"localhost\" DevicePath \"\"" May 15 10:46:08.587048 kubelet[2231]: E0515 10:46:08.587003 2231 configmap.go:199] Couldn't get configMap kube-system/cilium-config: failed to sync configmap cache: timed out waiting for the condition May 15 10:46:08.587232 kubelet[2231]: E0515 10:46:08.587137 2231 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/configmap/39705986-38c2-4b24-9da2-21214caea684-cilium-config-path podName:39705986-38c2-4b24-9da2-21214caea684 nodeName:}" failed. No retries permitted until 2025-05-15 10:46:09.087081961 +0000 UTC m=+81.869407404 (durationBeforeRetry 500ms). 
Error: MountVolume.SetUp failed for volume "cilium-config-path" (UniqueName: "kubernetes.io/configmap/39705986-38c2-4b24-9da2-21214caea684-cilium-config-path") pod "cilium-p7m9k" (UID: "39705986-38c2-4b24-9da2-21214caea684") : failed to sync configmap cache: timed out waiting for the condition May 15 10:46:08.587648 kubelet[2231]: E0515 10:46:08.587582 2231 projected.go:269] Couldn't get secret kube-system/hubble-server-certs: failed to sync secret cache: timed out waiting for the condition May 15 10:46:08.587648 kubelet[2231]: E0515 10:46:08.587611 2231 projected.go:200] Error preparing data for projected volume hubble-tls for pod kube-system/cilium-p7m9k: failed to sync secret cache: timed out waiting for the condition May 15 10:46:08.587751 kubelet[2231]: E0515 10:46:08.587649 2231 secret.go:194] Couldn't get secret kube-system/cilium-clustermesh: failed to sync secret cache: timed out waiting for the condition May 15 10:46:08.587751 kubelet[2231]: E0515 10:46:08.587693 2231 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/projected/39705986-38c2-4b24-9da2-21214caea684-hubble-tls podName:39705986-38c2-4b24-9da2-21214caea684 nodeName:}" failed. No retries permitted until 2025-05-15 10:46:09.087679195 +0000 UTC m=+81.870004608 (durationBeforeRetry 500ms). Error: MountVolume.SetUp failed for volume "hubble-tls" (UniqueName: "kubernetes.io/projected/39705986-38c2-4b24-9da2-21214caea684-hubble-tls") pod "cilium-p7m9k" (UID: "39705986-38c2-4b24-9da2-21214caea684") : failed to sync secret cache: timed out waiting for the condition May 15 10:46:08.587751 kubelet[2231]: E0515 10:46:08.587715 2231 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/39705986-38c2-4b24-9da2-21214caea684-clustermesh-secrets podName:39705986-38c2-4b24-9da2-21214caea684 nodeName:}" failed. No retries permitted until 2025-05-15 10:46:09.087707028 +0000 UTC m=+81.870032441 (durationBeforeRetry 500ms). 
Error: MountVolume.SetUp failed for volume "clustermesh-secrets" (UniqueName: "kubernetes.io/secret/39705986-38c2-4b24-9da2-21214caea684-clustermesh-secrets") pod "cilium-p7m9k" (UID: "39705986-38c2-4b24-9da2-21214caea684") : failed to sync secret cache: timed out waiting for the condition May 15 10:46:08.910595 kubelet[2231]: I0515 10:46:08.910540 2231 topology_manager.go:215] "Topology Admit Handler" podUID="01242ae8-0db8-4a50-8141-dcdbe2d0442b" podNamespace="kube-system" podName="cilium-4l7zp" May 15 10:46:08.996611 kubelet[2231]: I0515 10:46:08.996554 2231 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"clustermesh-secrets\" (UniqueName: \"kubernetes.io/secret/01242ae8-0db8-4a50-8141-dcdbe2d0442b-clustermesh-secrets\") pod \"cilium-4l7zp\" (UID: \"01242ae8-0db8-4a50-8141-dcdbe2d0442b\") " pod="kube-system/cilium-4l7zp" May 15 10:46:08.996611 kubelet[2231]: I0515 10:46:08.996591 2231 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"xtables-lock\" (UniqueName: \"kubernetes.io/host-path/01242ae8-0db8-4a50-8141-dcdbe2d0442b-xtables-lock\") pod \"cilium-4l7zp\" (UID: \"01242ae8-0db8-4a50-8141-dcdbe2d0442b\") " pod="kube-system/cilium-4l7zp" May 15 10:46:08.996611 kubelet[2231]: I0515 10:46:08.996610 2231 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-proc-sys-net\" (UniqueName: \"kubernetes.io/host-path/01242ae8-0db8-4a50-8141-dcdbe2d0442b-host-proc-sys-net\") pod \"cilium-4l7zp\" (UID: \"01242ae8-0db8-4a50-8141-dcdbe2d0442b\") " pod="kube-system/cilium-4l7zp" May 15 10:46:08.996611 kubelet[2231]: I0515 10:46:08.996645 2231 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-proc-sys-kernel\" (UniqueName: \"kubernetes.io/host-path/01242ae8-0db8-4a50-8141-dcdbe2d0442b-host-proc-sys-kernel\") pod \"cilium-4l7zp\" (UID: \"01242ae8-0db8-4a50-8141-dcdbe2d0442b\") " 
pod="kube-system/cilium-4l7zp" May 15 10:46:08.996907 kubelet[2231]: I0515 10:46:08.996661 2231 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"bpf-maps\" (UniqueName: \"kubernetes.io/host-path/01242ae8-0db8-4a50-8141-dcdbe2d0442b-bpf-maps\") pod \"cilium-4l7zp\" (UID: \"01242ae8-0db8-4a50-8141-dcdbe2d0442b\") " pod="kube-system/cilium-4l7zp" May 15 10:46:08.996907 kubelet[2231]: I0515 10:46:08.996762 2231 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-gb8g2\" (UniqueName: \"kubernetes.io/projected/01242ae8-0db8-4a50-8141-dcdbe2d0442b-kube-api-access-gb8g2\") pod \"cilium-4l7zp\" (UID: \"01242ae8-0db8-4a50-8141-dcdbe2d0442b\") " pod="kube-system/cilium-4l7zp" May 15 10:46:08.996907 kubelet[2231]: I0515 10:46:08.996851 2231 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"hostproc\" (UniqueName: \"kubernetes.io/host-path/01242ae8-0db8-4a50-8141-dcdbe2d0442b-hostproc\") pod \"cilium-4l7zp\" (UID: \"01242ae8-0db8-4a50-8141-dcdbe2d0442b\") " pod="kube-system/cilium-4l7zp" May 15 10:46:08.996907 kubelet[2231]: I0515 10:46:08.996869 2231 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/01242ae8-0db8-4a50-8141-dcdbe2d0442b-lib-modules\") pod \"cilium-4l7zp\" (UID: \"01242ae8-0db8-4a50-8141-dcdbe2d0442b\") " pod="kube-system/cilium-4l7zp" May 15 10:46:08.996907 kubelet[2231]: I0515 10:46:08.996886 2231 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cilium-ipsec-secrets\" (UniqueName: \"kubernetes.io/secret/01242ae8-0db8-4a50-8141-dcdbe2d0442b-cilium-ipsec-secrets\") pod \"cilium-4l7zp\" (UID: \"01242ae8-0db8-4a50-8141-dcdbe2d0442b\") " pod="kube-system/cilium-4l7zp" May 15 10:46:08.996907 kubelet[2231]: I0515 10:46:08.996901 2231 reconciler_common.go:247] 
"operationExecutor.VerifyControllerAttachedVolume started for volume \"cilium-cgroup\" (UniqueName: \"kubernetes.io/host-path/01242ae8-0db8-4a50-8141-dcdbe2d0442b-cilium-cgroup\") pod \"cilium-4l7zp\" (UID: \"01242ae8-0db8-4a50-8141-dcdbe2d0442b\") " pod="kube-system/cilium-4l7zp" May 15 10:46:08.997136 kubelet[2231]: I0515 10:46:08.996916 2231 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cilium-config-path\" (UniqueName: \"kubernetes.io/configmap/01242ae8-0db8-4a50-8141-dcdbe2d0442b-cilium-config-path\") pod \"cilium-4l7zp\" (UID: \"01242ae8-0db8-4a50-8141-dcdbe2d0442b\") " pod="kube-system/cilium-4l7zp" May 15 10:46:08.997136 kubelet[2231]: I0515 10:46:08.996930 2231 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cilium-run\" (UniqueName: \"kubernetes.io/host-path/01242ae8-0db8-4a50-8141-dcdbe2d0442b-cilium-run\") pod \"cilium-4l7zp\" (UID: \"01242ae8-0db8-4a50-8141-dcdbe2d0442b\") " pod="kube-system/cilium-4l7zp" May 15 10:46:08.997136 kubelet[2231]: I0515 10:46:08.996944 2231 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cni-path\" (UniqueName: \"kubernetes.io/host-path/01242ae8-0db8-4a50-8141-dcdbe2d0442b-cni-path\") pod \"cilium-4l7zp\" (UID: \"01242ae8-0db8-4a50-8141-dcdbe2d0442b\") " pod="kube-system/cilium-4l7zp" May 15 10:46:08.997136 kubelet[2231]: I0515 10:46:08.996957 2231 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"etc-cni-netd\" (UniqueName: \"kubernetes.io/host-path/01242ae8-0db8-4a50-8141-dcdbe2d0442b-etc-cni-netd\") pod \"cilium-4l7zp\" (UID: \"01242ae8-0db8-4a50-8141-dcdbe2d0442b\") " pod="kube-system/cilium-4l7zp" May 15 10:46:08.997136 kubelet[2231]: I0515 10:46:08.996970 2231 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"hubble-tls\" (UniqueName: 
\"kubernetes.io/projected/01242ae8-0db8-4a50-8141-dcdbe2d0442b-hubble-tls\") pod \"cilium-4l7zp\" (UID: \"01242ae8-0db8-4a50-8141-dcdbe2d0442b\") " pod="kube-system/cilium-4l7zp" May 15 10:46:08.997136 kubelet[2231]: I0515 10:46:08.996996 2231 reconciler_common.go:289] "Volume detached for volume \"hubble-tls\" (UniqueName: \"kubernetes.io/projected/39705986-38c2-4b24-9da2-21214caea684-hubble-tls\") on node \"localhost\" DevicePath \"\"" May 15 10:46:08.997288 kubelet[2231]: I0515 10:46:08.997005 2231 reconciler_common.go:289] "Volume detached for volume \"clustermesh-secrets\" (UniqueName: \"kubernetes.io/secret/39705986-38c2-4b24-9da2-21214caea684-clustermesh-secrets\") on node \"localhost\" DevicePath \"\"" May 15 10:46:08.997288 kubelet[2231]: I0515 10:46:08.997014 2231 reconciler_common.go:289] "Volume detached for volume \"cilium-config-path\" (UniqueName: \"kubernetes.io/configmap/39705986-38c2-4b24-9da2-21214caea684-cilium-config-path\") on node \"localhost\" DevicePath \"\"" May 15 10:46:09.213845 kubelet[2231]: E0515 10:46:09.213801 2231 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" May 15 10:46:09.214332 env[1314]: time="2025-05-15T10:46:09.214294160Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:cilium-4l7zp,Uid:01242ae8-0db8-4a50-8141-dcdbe2d0442b,Namespace:kube-system,Attempt:0,}" May 15 10:46:09.232169 env[1314]: time="2025-05-15T10:46:09.232095143Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 May 15 10:46:09.232169 env[1314]: time="2025-05-15T10:46:09.232137013Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." 
runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 May 15 10:46:09.232169 env[1314]: time="2025-05-15T10:46:09.232147001Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 May 15 10:46:09.232401 env[1314]: time="2025-05-15T10:46:09.232303318Z" level=info msg="starting signal loop" namespace=k8s.io path=/run/containerd/io.containerd.runtime.v2.task/k8s.io/32b05324d3e2abccc6f903f9a1ce2b107361b2e78c1bfe3b42a33a6c19a5bedc pid=4081 runtime=io.containerd.runc.v2 May 15 10:46:09.263569 env[1314]: time="2025-05-15T10:46:09.263522654Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:cilium-4l7zp,Uid:01242ae8-0db8-4a50-8141-dcdbe2d0442b,Namespace:kube-system,Attempt:0,} returns sandbox id \"32b05324d3e2abccc6f903f9a1ce2b107361b2e78c1bfe3b42a33a6c19a5bedc\"" May 15 10:46:09.265035 kubelet[2231]: E0515 10:46:09.264998 2231 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" May 15 10:46:09.267460 env[1314]: time="2025-05-15T10:46:09.267400284Z" level=info msg="CreateContainer within sandbox \"32b05324d3e2abccc6f903f9a1ce2b107361b2e78c1bfe3b42a33a6c19a5bedc\" for container &ContainerMetadata{Name:mount-cgroup,Attempt:0,}" May 15 10:46:09.279811 env[1314]: time="2025-05-15T10:46:09.279748686Z" level=info msg="CreateContainer within sandbox \"32b05324d3e2abccc6f903f9a1ce2b107361b2e78c1bfe3b42a33a6c19a5bedc\" for &ContainerMetadata{Name:mount-cgroup,Attempt:0,} returns container id \"817f41ab4ce8e352df592e00fcc513135d5eef2c207125c0ae21c98b6108aced\"" May 15 10:46:09.280348 env[1314]: time="2025-05-15T10:46:09.280310472Z" level=info msg="StartContainer for \"817f41ab4ce8e352df592e00fcc513135d5eef2c207125c0ae21c98b6108aced\"" May 15 10:46:09.303972 kubelet[2231]: I0515 10:46:09.303908 2231 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" 
podUID="39705986-38c2-4b24-9da2-21214caea684" path="/var/lib/kubelet/pods/39705986-38c2-4b24-9da2-21214caea684/volumes" May 15 10:46:09.318039 env[1314]: time="2025-05-15T10:46:09.317983069Z" level=info msg="StartContainer for \"817f41ab4ce8e352df592e00fcc513135d5eef2c207125c0ae21c98b6108aced\" returns successfully" May 15 10:46:09.350331 env[1314]: time="2025-05-15T10:46:09.350271714Z" level=info msg="shim disconnected" id=817f41ab4ce8e352df592e00fcc513135d5eef2c207125c0ae21c98b6108aced May 15 10:46:09.350331 env[1314]: time="2025-05-15T10:46:09.350321388Z" level=warning msg="cleaning up after shim disconnected" id=817f41ab4ce8e352df592e00fcc513135d5eef2c207125c0ae21c98b6108aced namespace=k8s.io May 15 10:46:09.350331 env[1314]: time="2025-05-15T10:46:09.350331016Z" level=info msg="cleaning up dead shim" May 15 10:46:09.355898 env[1314]: time="2025-05-15T10:46:09.355853540Z" level=warning msg="cleanup warnings time=\"2025-05-15T10:46:09Z\" level=info msg=\"starting signal loop\" namespace=k8s.io pid=4164 runtime=io.containerd.runc.v2\n" May 15 10:46:09.513984 kubelet[2231]: I0515 10:46:09.512904 2231 setters.go:580] "Node became not ready" node="localhost" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-05-15T10:46:09Z","lastTransitionTime":"2025-05-15T10:46:09Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized"} May 15 10:46:09.887298 kubelet[2231]: E0515 10:46:09.887143 2231 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" May 15 10:46:09.890664 env[1314]: time="2025-05-15T10:46:09.889502343Z" level=info msg="CreateContainer within sandbox \"32b05324d3e2abccc6f903f9a1ce2b107361b2e78c1bfe3b42a33a6c19a5bedc\" for container &ContainerMetadata{Name:apply-sysctl-overwrites,Attempt:0,}" May 
15 10:46:10.223511 env[1314]: time="2025-05-15T10:46:10.223450880Z" level=info msg="CreateContainer within sandbox \"32b05324d3e2abccc6f903f9a1ce2b107361b2e78c1bfe3b42a33a6c19a5bedc\" for &ContainerMetadata{Name:apply-sysctl-overwrites,Attempt:0,} returns container id \"7fea46c5d40638f796f3c06213b873702f48a4db55d029c1fb10dbf1a0977fa7\"" May 15 10:46:10.224098 env[1314]: time="2025-05-15T10:46:10.224055298Z" level=info msg="StartContainer for \"7fea46c5d40638f796f3c06213b873702f48a4db55d029c1fb10dbf1a0977fa7\"" May 15 10:46:10.301859 kubelet[2231]: E0515 10:46:10.301817 2231 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" May 15 10:46:10.349422 env[1314]: time="2025-05-15T10:46:10.349359194Z" level=info msg="StartContainer for \"7fea46c5d40638f796f3c06213b873702f48a4db55d029c1fb10dbf1a0977fa7\" returns successfully" May 15 10:46:10.364079 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-7fea46c5d40638f796f3c06213b873702f48a4db55d029c1fb10dbf1a0977fa7-rootfs.mount: Deactivated successfully. 
May 15 10:46:10.582658 env[1314]: time="2025-05-15T10:46:10.582512470Z" level=info msg="shim disconnected" id=7fea46c5d40638f796f3c06213b873702f48a4db55d029c1fb10dbf1a0977fa7 May 15 10:46:10.582658 env[1314]: time="2025-05-15T10:46:10.582570179Z" level=warning msg="cleaning up after shim disconnected" id=7fea46c5d40638f796f3c06213b873702f48a4db55d029c1fb10dbf1a0977fa7 namespace=k8s.io May 15 10:46:10.582658 env[1314]: time="2025-05-15T10:46:10.582578796Z" level=info msg="cleaning up dead shim" May 15 10:46:10.589790 env[1314]: time="2025-05-15T10:46:10.589765756Z" level=warning msg="cleanup warnings time=\"2025-05-15T10:46:10Z\" level=info msg=\"starting signal loop\" namespace=k8s.io pid=4227 runtime=io.containerd.runc.v2\n" May 15 10:46:10.891014 kubelet[2231]: E0515 10:46:10.890498 2231 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" May 15 10:46:10.895630 env[1314]: time="2025-05-15T10:46:10.895569271Z" level=info msg="CreateContainer within sandbox \"32b05324d3e2abccc6f903f9a1ce2b107361b2e78c1bfe3b42a33a6c19a5bedc\" for container &ContainerMetadata{Name:mount-bpf-fs,Attempt:0,}" May 15 10:46:11.140158 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount3402836932.mount: Deactivated successfully. 
May 15 10:46:11.142723 env[1314]: time="2025-05-15T10:46:11.142626705Z" level=info msg="CreateContainer within sandbox \"32b05324d3e2abccc6f903f9a1ce2b107361b2e78c1bfe3b42a33a6c19a5bedc\" for &ContainerMetadata{Name:mount-bpf-fs,Attempt:0,} returns container id \"7e403e17e98528f090df54a22eeb9da500deb038f18fbd754c5ac4ddc9def830\"" May 15 10:46:11.143292 env[1314]: time="2025-05-15T10:46:11.143259767Z" level=info msg="StartContainer for \"7e403e17e98528f090df54a22eeb9da500deb038f18fbd754c5ac4ddc9def830\"" May 15 10:46:11.183743 env[1314]: time="2025-05-15T10:46:11.183688814Z" level=info msg="StartContainer for \"7e403e17e98528f090df54a22eeb9da500deb038f18fbd754c5ac4ddc9def830\" returns successfully" May 15 10:46:11.200579 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-7e403e17e98528f090df54a22eeb9da500deb038f18fbd754c5ac4ddc9def830-rootfs.mount: Deactivated successfully. May 15 10:46:11.279672 env[1314]: time="2025-05-15T10:46:11.279594991Z" level=info msg="shim disconnected" id=7e403e17e98528f090df54a22eeb9da500deb038f18fbd754c5ac4ddc9def830 May 15 10:46:11.279672 env[1314]: time="2025-05-15T10:46:11.279663360Z" level=warning msg="cleaning up after shim disconnected" id=7e403e17e98528f090df54a22eeb9da500deb038f18fbd754c5ac4ddc9def830 namespace=k8s.io May 15 10:46:11.279672 env[1314]: time="2025-05-15T10:46:11.279672137Z" level=info msg="cleaning up dead shim" May 15 10:46:11.285910 env[1314]: time="2025-05-15T10:46:11.285845279Z" level=warning msg="cleanup warnings time=\"2025-05-15T10:46:11Z\" level=info msg=\"starting signal loop\" namespace=k8s.io pid=4283 runtime=io.containerd.runc.v2\n" May 15 10:46:11.893912 kubelet[2231]: E0515 10:46:11.893875 2231 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" May 15 10:46:11.896152 env[1314]: time="2025-05-15T10:46:11.896093918Z" level=info msg="CreateContainer within sandbox 
\"32b05324d3e2abccc6f903f9a1ce2b107361b2e78c1bfe3b42a33a6c19a5bedc\" for container &ContainerMetadata{Name:clean-cilium-state,Attempt:0,}" May 15 10:46:11.911282 env[1314]: time="2025-05-15T10:46:11.911226670Z" level=info msg="CreateContainer within sandbox \"32b05324d3e2abccc6f903f9a1ce2b107361b2e78c1bfe3b42a33a6c19a5bedc\" for &ContainerMetadata{Name:clean-cilium-state,Attempt:0,} returns container id \"5018f2241d2ae75c01d51a00862a4df3d197dbc8c1f2f4c6d3f70daf2a7e89e7\"" May 15 10:46:11.911845 env[1314]: time="2025-05-15T10:46:11.911808274Z" level=info msg="StartContainer for \"5018f2241d2ae75c01d51a00862a4df3d197dbc8c1f2f4c6d3f70daf2a7e89e7\"" May 15 10:46:11.950315 env[1314]: time="2025-05-15T10:46:11.950251633Z" level=info msg="StartContainer for \"5018f2241d2ae75c01d51a00862a4df3d197dbc8c1f2f4c6d3f70daf2a7e89e7\" returns successfully" May 15 10:46:11.970244 env[1314]: time="2025-05-15T10:46:11.970185635Z" level=info msg="shim disconnected" id=5018f2241d2ae75c01d51a00862a4df3d197dbc8c1f2f4c6d3f70daf2a7e89e7 May 15 10:46:11.970244 env[1314]: time="2025-05-15T10:46:11.970230149Z" level=warning msg="cleaning up after shim disconnected" id=5018f2241d2ae75c01d51a00862a4df3d197dbc8c1f2f4c6d3f70daf2a7e89e7 namespace=k8s.io May 15 10:46:11.970244 env[1314]: time="2025-05-15T10:46:11.970239037Z" level=info msg="cleaning up dead shim" May 15 10:46:11.976073 env[1314]: time="2025-05-15T10:46:11.976022880Z" level=warning msg="cleanup warnings time=\"2025-05-15T10:46:11Z\" level=info msg=\"starting signal loop\" namespace=k8s.io pid=4337 runtime=io.containerd.runc.v2\n" May 15 10:46:12.346214 kubelet[2231]: E0515 10:46:12.346174 2231 kubelet.go:2900] "Container runtime network not ready" networkReady="NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" May 15 10:46:12.897236 kubelet[2231]: E0515 10:46:12.897210 2231 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers 
have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" May 15 10:46:12.899294 env[1314]: time="2025-05-15T10:46:12.899243007Z" level=info msg="CreateContainer within sandbox \"32b05324d3e2abccc6f903f9a1ce2b107361b2e78c1bfe3b42a33a6c19a5bedc\" for container &ContainerMetadata{Name:cilium-agent,Attempt:0,}" May 15 10:46:12.913441 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount780372889.mount: Deactivated successfully. May 15 10:46:12.915852 env[1314]: time="2025-05-15T10:46:12.915797519Z" level=info msg="CreateContainer within sandbox \"32b05324d3e2abccc6f903f9a1ce2b107361b2e78c1bfe3b42a33a6c19a5bedc\" for &ContainerMetadata{Name:cilium-agent,Attempt:0,} returns container id \"111b21124660f9c04b40e2475acae9ad9f67f3a4b763d3271be2b54686df8789\"" May 15 10:46:12.916353 env[1314]: time="2025-05-15T10:46:12.916316162Z" level=info msg="StartContainer for \"111b21124660f9c04b40e2475acae9ad9f67f3a4b763d3271be2b54686df8789\"" May 15 10:46:12.964144 env[1314]: time="2025-05-15T10:46:12.960509310Z" level=info msg="StartContainer for \"111b21124660f9c04b40e2475acae9ad9f67f3a4b763d3271be2b54686df8789\" returns successfully" May 15 10:46:13.210648 kernel: alg: No test for seqiv(rfc4106(gcm(aes))) (seqiv(rfc4106-gcm-aesni)) May 15 10:46:13.902416 kubelet[2231]: E0515 10:46:13.902145 2231 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" May 15 10:46:14.301733 kubelet[2231]: E0515 10:46:14.301691 2231 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" May 15 10:46:15.215494 kubelet[2231]: E0515 10:46:15.215461 2231 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" May 15 10:46:15.819386 
systemd-networkd[1080]: lxc_health: Link UP May 15 10:46:15.827292 systemd-networkd[1080]: lxc_health: Gained carrier May 15 10:46:15.827643 kernel: IPv6: ADDRCONF(NETDEV_CHANGE): lxc_health: link becomes ready May 15 10:46:17.165950 systemd-networkd[1080]: lxc_health: Gained IPv6LL May 15 10:46:17.216230 kubelet[2231]: E0515 10:46:17.216179 2231 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" May 15 10:46:17.228920 kubelet[2231]: I0515 10:46:17.228860 2231 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/cilium-4l7zp" podStartSLOduration=9.228842286 podStartE2EDuration="9.228842286s" podCreationTimestamp="2025-05-15 10:46:08 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-05-15 10:46:13.922102052 +0000 UTC m=+86.704427495" watchObservedRunningTime="2025-05-15 10:46:17.228842286 +0000 UTC m=+90.011167699" May 15 10:46:17.910238 kubelet[2231]: E0515 10:46:17.910198 2231 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" May 15 10:46:19.301912 kubelet[2231]: E0515 10:46:19.301859 2231 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" May 15 10:46:22.431679 sshd[4056]: pam_unix(sshd:session): session closed for user core May 15 10:46:22.433803 systemd[1]: sshd@27-10.0.0.96:22-10.0.0.1:48030.service: Deactivated successfully. May 15 10:46:22.434717 systemd[1]: session-28.scope: Deactivated successfully. May 15 10:46:22.434729 systemd-logind[1294]: Session 28 logged out. Waiting for processes to exit. May 15 10:46:22.435587 systemd-logind[1294]: Removed session 28.